Cisco DNA Center installation and setup
Manual underlay and validation
Log configuration pushed by Cisco DNA Center
Cisco DNA Center-to-ISE integration
Design: Creating a site hierarchy
Verify that telemetry is configured by discovery on the WLC
Validating the Assurance use case
Use Case 1: Proactive network health assurance
Use Case 2: Proactive client experience monitoring
Use Case 3: Reactive client troubleshooting
Reset the edge configuration from wireless issues in Cisco DNA Center
Define scalable group tags for user groups
Create VNs and choose scalable groups
Create SD-Access device credentials (for TACACS)
Provision devices from Cisco DNA Center
Create fabric domain and a transit site
Configure the fusion router manually
Create Layer 3 connectivity between the border and the fusion router
Extend virtual networks from the border to the fusion router
Use VRF leaking to share routes on the fusion router and distribute them to the border
Share virtual networks between border nodes for traffic resiliency
Micro-segmentation within a VN and policy between scalable groups
Create and apply a “deny all” rule between scalable groups
Create a custom contract (optional)
Configure dynamic authentication with ISE
Create user identity groups for the Campus
Create an identity for each campus user
Define 802.1X authorization profile and policies for campus users
ISE policies for wireless guest access
Endpoint onboarding: Validation
Client group tag classification
Test intra-virtual network connectivity
Cisco DNA Center is the foundational controller and analytics platform at the heart of Cisco’s intent-based network. It offers centralized, intuitive management that makes it fast and easy to design, provision, and apply policies across your network environment. The Cisco DNA Center UI provides end-to-end network visibility and uses network insights to optimize network performance and deliver the best user and application experience.
This guide demonstrates the value of the Assurance and Software-Defined Access (SD-Access) solutions using a specific combination of equipment, OS version, and configuration. To obtain the best outcome, follow this guide exactly. The procedure and configurations described have been tested and validated. If you must deviate from this guide, we recommend that you stage the setup out of band and conduct extensive testing before you deploy Cisco DNA Center SD-Access in production.
To deploy the SD-Access solution in your network, a Cisco DNA Advantage license is required.
Table 1. Prerequisites and workflows
Prerequisites
● Install and set up single-node Cisco DNA Center, version 1.2.8
● Install Cisco® Identity Services Engine (ISE), 2.4 patch 5
● Ensure that the network devices are on a supported OS

Assurance workflow
Health Dashboards
● Client Health: Health summary, onboarding, and connectivity analytics
● Network Health: Health summary, analytics (AP)
● Site-level drilldown
Device 360
● Historical analysis
● Connectivity details
● RF information
Client 360
● Historical analysis
● Onboarding events
● Near-time troubleshooting
Network time travel
Issue troubleshooting

SD-Access workflows
● Set up manual underlay
● Integrate Cisco DNA Center with ISE
● Discover network devices via Cisco DNA Center
● Design: Create site hierarchy
● Add devices to site
● Observe Assurance data
● Design: Network settings and Service Set Identifiers (SSIDs) for WLC
● Create virtual networks (VNs) and policies
● Provision devices to site
● Create fabric and assign device roles
● Configure ISE policies
● Onboarding hosts
● Enable multicast (optional)
● Enable Layer 2 flooding (optional)
Table 2. Recommended platforms and supported OS versions for SD-Access
Device type | Platform | OS version
Cisco DNA Center | DN1-HW-APL | 1.2.8
Fabric edge | Cisco Catalyst® 9300 and 9400 Series Switches | Cisco IOS® XE 16.9.2s
Fusion router | Cisco 4000 Series Integrated Services Routers (ISRs) | Cisco IOS XE 16.9.2s
Fabric border/control plane | Cisco Catalyst 9300 and 9500 Series, 4000 Series ISRs | Cisco IOS XE 16.9.2s
Wireless LAN controller | Cisco 3504 and 5520 | AireOS 8.5 MR4 (8.5.140.0)
Access point (AP) | 802.11ac Wave 2 (Cisco Aironet® 2800 and 3800 Series) | AireOS 8.5 MR4 (8.5.140.0)
Identity Services Engine | VM/appliance | 2.4 patch 5
Table 3. Tested scale guidelines
Switches or routers | 200
Endpoints | APs: 500, Hosts: 5000
Fabric domains or sites | Fabric domain: 1, Fabric site: 1
Fabric edge nodes | 200
Virtual networks | 5
IP pools | 20
Scalable groups | 20
Policies | 50
This document assumes that the tasks explained in the following sections have been performed.
Cisco DNA Center installation and setup
If you haven’t done so already, install Cisco DNA Center according to the Cisco DNA Center Appliance Installation Guide.
After installation, use the Cloud Update feature to upgrade to the latest version, Cisco DNA Center 1.2.8. This lab requires a single-node Cisco DNA Center running version 1.2.8.
In the Cisco DNA Center UI, go to Settings > About Cisco DNA Center > Show Packages to check the version running on the appliance.
The topology for each pod in the lab is as shown in the diagram below. Every pod is identical, with similar addressing schemes. The underlay IP addressing is as shown in the topology.
In this lab, we will create the underlay manually. The underlay consists of the Layer 3 routed point-to-point links that make up the topology shown in the figure below, plus a /32 Loopback0 address on each device. Cisco DNA Center uses these loopback interfaces to push configuration to and read data from the devices.
Cisco DNA Center supports any Layer 3 underlay topology. In this setup, we use the devices shown in the figure in a collapsed topology and configure Intermediate System-to-Intermediate System (IS-IS) as the underlay routing protocol. The main advantages of IS-IS are its fast convergence and large-area scalability.
Cisco DNA Center can also automate the underlay configuration, configuring Layer 3 point-to-point links and using IS-IS as the underlay routing protocol. This is done in the LAN Automation feature. In this lab, we will not be using the LAN Automation feature, as we will be manually configuring the underlay with given templates.
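As a rough sketch of the pattern used throughout this lab (interface name and addresses are illustrative; the complete per-device templates are in Table 9 and the appendix), each underlay device gets a /32 Loopback0 and /30 routed point-to-point links, all enabled for IS-IS:
interface Loopback0
ip address 192.168.200.5 255.255.255.255
ip router isis
!
interface TenGigabitEthernet1/0/9
description "Point-to-point link to a neighbor"
no switchport
ip address 192.168.1.101 255.255.255.252
ip router isis
!
router isis
net 49.0000.0100.0421.2003.00
metric-style wide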
Border Gateway Protocol (BGP) is used outside the fabric (between the border and fusion devices) and was selected for its resiliency and convergence times.
When we build the fabric in this lab, we will be using multiple virtual networks (VNs). These VNs are instantiated on the network devices as virtual routing and forwarding instances (VRFs). We will define three VRFs within the fabric (Guest, Campus, IoT). We will also define a shared services VRF on the fusion router, which is used for leaking the shared services routes into the other VRFs. The logical topology is represented below.
The fabric devices do not have any configuration on them. We will be accessing these devices via the console interface through the terminal servers listed in the table. Ensure that your VPN connection is up before trying to connect to the terminal servers.
Based on the pod assigned, pick your console server IP address.
Table 4. Terminal server IP addresses
Terminal server IP | Pod number
100.127.3.11 | POD 01, POD 02
100.127.3.12 | POD 03, POD 04
100.127.3.13 | POD 05, POD 06
100.127.3.14 | POD 07, POD 08
100.127.3.15 | POD 09, POD 10
100.127.3.16 | POD 11, POD 12
100.127.3.17 | POD 13, POD 14
100.127.3.18 | POD 15, POD 16
100.127.3.19 | POD 17, POD 18
100.127.3.20 | POD 19, POD 20
100.127.3.21 | POD 21, POD 22
100.127.3.22 | POD 23, POD 24
100.127.3.23 | POD 25, POD 26
100.127.3.24 | POD 27, POD 28
100.127.3.25 | POD 29, POD 30
For the console port access, check to see if the pod assigned to you is an even or odd number. Use the port numbers assigned to the devices accordingly.
Table 5. Port numbers for odd- and even-numbered pods
Odd-numbered pods:
Session name | Port
ISR4451-1 | 2002
ISR4451-2 | 2003
9516-B1 | 2004
9516-B2 | 2005
9300-1 | 2006
9300-2 | 2007
WLC-3504 | 2008

Even-numbered pods:
Session name | Port
ISR4451-1 | 2010
ISR4451-2 | 2011
9516-B1 | 2012
9516-B2 | 2013
9300-1 | 2014
9300-2 | 2015
WLC-3504 | 2016
Table 6. Pod numbers and IP addresses
Pod number | IP address | Username/password
Cisco DNA Center Pod 01 | 100.127.17.101 | admin/Dnac123!
Cisco DNA Center Pod 02 | 100.127.17.102 | admin/Dnac123!
Cisco DNA Center Pod 03 | 100.127.17.103 | admin/Dnac123!
Cisco DNA Center Pod 04 | 100.127.17.104 | admin/Dnac123!
Cisco DNA Center Pod 05 | 100.127.17.105 | admin/Dnac123!
Cisco DNA Center Pod 06 | 100.127.17.106 | admin/Dnac123!
Cisco DNA Center Pod 07 | 100.127.17.107 | admin/Dnac123!
Cisco DNA Center Pod 08 | 100.127.17.108 | admin/Dnac123!
Cisco DNA Center Pod 09 | 100.127.17.109 | admin/Dnac123!
Cisco DNA Center Pod 10 | 100.127.17.110 | admin/Dnac123!
Cisco DNA Center Pod 11 | 100.127.17.111 | admin/Dnac123!
Cisco DNA Center Pod 12 | 100.127.17.112 | admin/Dnac123!
Cisco DNA Center Pod 13 | 100.127.17.113 | admin/Dnac123!
Cisco DNA Center Pod 14 | 100.127.17.114 | admin/Dnac123!
Cisco DNA Center Pod 15 | 100.127.17.115 | admin/Dnac123!
Cisco DNA Center Pod 16 | 100.127.17.116 | admin/Dnac123!
Cisco DNA Center Pod 17 | 100.127.17.117 | admin/Dnac123!
Cisco DNA Center Pod 18 | 100.127.17.118 | admin/Dnac123!
Cisco DNA Center Pod 19 | 100.127.17.119 | admin/Dnac123!
Cisco DNA Center Pod 20 | 100.127.17.120 | admin/Dnac123!
Cisco DNA Center Pod 21 | 100.127.17.121 | admin/Dnac123!
Cisco DNA Center Pod 22 | 100.127.17.122 | admin/Dnac123!
Cisco DNA Center Pod 23 | 100.127.17.123 | admin/Dnac123!
Cisco DNA Center Pod 24 | 100.127.17.124 | admin/Dnac123!
Cisco DNA Center Pod 25 | 100.127.17.125 | admin/Dnac123!
Cisco DNA Center Pod 26 | 100.127.17.126 | admin/Dnac123!
Cisco DNA Center Pod 27 | 100.127.17.127 | admin/Dnac123!
Cisco DNA Center Pod 28 | 100.127.17.128 | admin/Dnac123!
Cisco DNA Center Pod 29 | 100.127.17.129 | admin/Dnac123!
Cisco DNA Center Pod 30 | 100.127.17.130 | admin/Dnac123!
Table 7. Jump host pod numbers and RDP addresses
Jumphost # | External IP (RDP) | Username | Password
JumpHost Pod 1 | 100.127.13.101 | admin | Dnac123!
JumpHost Pod 2 | 100.127.13.102 | admin | Dnac123!
JumpHost Pod 3 | 100.127.13.103 | admin | Dnac123!
JumpHost Pod 4 | 100.127.13.104 | admin | Dnac123!
JumpHost Pod 5 | 100.127.13.105 | admin | Dnac123!
JumpHost Pod 6 | 100.127.13.106 | admin | Dnac123!
JumpHost Pod 7 | 100.127.13.107 | admin | Dnac123!
JumpHost Pod 8 | 100.127.13.108 | admin | Dnac123!
JumpHost Pod 9 | 100.127.13.109 | admin | Dnac123!
JumpHost Pod 10 | 100.127.13.110 | admin | Dnac123!
JumpHost Pod 11 | 100.127.13.111 | admin | Dnac123!
JumpHost Pod 12 | 100.127.13.112 | admin | Dnac123!
JumpHost Pod 13 | 100.127.13.113 | admin | Dnac123!
JumpHost Pod 14 | 100.127.13.114 | admin | Dnac123!
JumpHost Pod 15 | 100.127.13.115 | admin | Dnac123!
JumpHost Pod 16 | 100.127.13.116 | admin | Dnac123!
JumpHost Pod 17 | 100.127.13.117 | admin | Dnac123!
JumpHost Pod 18 | 100.127.13.118 | admin | Dnac123!
JumpHost Pod 19 | 100.127.13.119 | admin | Dnac123!
JumpHost Pod 20 | 100.127.13.120 | admin | Dnac123!
JumpHost Pod 21 | 100.127.13.121 | admin | Dnac123!
JumpHost Pod 22 | 100.127.13.122 | admin | Dnac123!
JumpHost Pod 23 | 100.127.13.123 | admin | Dnac123!
JumpHost Pod 24 | 100.127.13.124 | admin | Dnac123!
JumpHost Pod 25 | 100.127.13.125 | admin | Dnac123!
JumpHost Pod 26 | 100.127.13.126 | admin | Dnac123!
JumpHost Pod 27 | 100.127.13.127 | admin | Dnac123!
JumpHost Pod 28 | 100.127.13.128 | admin | Dnac123!
JumpHost Pod 29 | 100.127.13.129 | admin | Dnac123!
JumpHost Pod 30 | 100.127.13.130 | admin | Dnac123!
Each pod has an ISE and WLC with the same IP address.
Table 8. Pods’ ISE and WLC IP addresses
Device | IP address | Username/password
ISE | https://10.172.3.200 | admin/Dnac123!
WLC | https://10.172.4.5 | netadmin/Dnac123!
Manual underlay and validation
This lab assumes that the network devices start from a factory-default (empty) configuration. They first need to be configured with basic IP connectivity. We will configure the underlay for two fusion nodes, two border nodes, and two edge nodes.
In this case, we will use:
● Layer 3 point-to-point interconnects between all the network devices
● IS-IS as the routing protocol between edge and border nodes
● BGP as the routing protocol between border and fusion routers
● A dedicated VRF, Shared_Services, for the shared services routes (ISE, Cisco DNA Center, DHCP, DNS)
The figure below shows the underlay IP addressing scheme.
The table lists the reference configurations for the fusion, border, and edge nodes. Please refer to the appendix for detailed configuration information for all devices.
Table 9. Reference configurations for fusion, border, and edge nodes
Fusion underlay configuration | Comments
! conf t
hostname Fusion1-ISR4451 |
Look for this hostname on Cisco DNA Center after this device is discovered. |
vrf definition Shared_Services rd 100:100 |
Dedicated VRF for shared services (Cisco DNA Center, Dynamic Host Configuration Protocol [DHCP], DNS, ISE). |
address-family ipv4 route-target export 100:100 route-target import 100:100 exit-address-family |
Route import/export for shared services |
enable secret Dnac123! |
Entered on the Cisco DNA Center during device discovery. Needed for the Cisco DNA Center to log into the device. |
! no aaa new-model |
|
ip domain name cisco.com no ip domain-lookup |
Required for device discovery on the Cisco DNA Center |
username netadmin privilege 15 password 0 Dnac123! |
Local username and password. Entered in Cisco DNA Center during device discovery; needed for Cisco DNA Center to log in to the device. |
snmp-server community Dnac123! rw
interface Loopback0 vrf forwarding Shared_Services ip address x.x.x.x 255.255.255.255
interface GigabitEthernet0/0/0 description "Connection to MGMT-Switch" vrf forwarding Shared_Services ip address x.x.x.x 255.255.255.252 negotiation auto no shut !
interface GigabitEthernet0/0/2 no ip address negotiation auto no shut ! |
Required for device discovery
Loopback0 with a /32 IP address, used to discover the device from Cisco DNA Center. It is mandatory to use Loopback0 for wired devices in SD-Access. |
description "Connection to Border-1" encapsulation dot1Q 2 vrf forwarding Shared_Services ip address x.x.x.x 255.255.255.252 no shut |
Subinterface connecting to Border-1 in the Shared_Services VRF. No IP address is assigned to the physical interface; the VRF-lite handoff from the border node is carried on subinterfaces. |
interface GigabitEthernet0/0/3 no ip address negotiation auto no shut |
Interface connecting to Border-2. No IP address assigned to the physical interface, VRF-lite handoff from the border node. |
interface GigabitEthernet0/0/3.22 description "Connection to Border-2" encapsulation dot1Q 22 vrf forwarding Shared_Services ip address 192.168.1.5 255.255.255.252 no shut ! router bgp 65000 bgp router-id 192.168.200.1 bgp log-neighbor-changes |
Subinterface connecting to Border-2 in the Shared_Services VRF, followed by the start of the BGP configuration. |
address-family ipv4 vrf Shared_Services network 192.168.1.0 mask 255.255.255.252 network 192.168.1.4 mask 255.255.255.252 network 192.168.101.0 mask 255.255.255.252 network 192.168.200.1 mask 255.255.255.255 |
Advertising the connected links into BGP. |
neighbor 192.168.1.2 remote-as 65001 neighbor 192.168.1.2 activate neighbor 192.168.1.6 remote-as 65001 neighbor 192.168.1.6 activate neighbor 192.168.101.1 remote-as 65000 neighbor 192.168.101.1 activate maximum-paths 2 exit-address-family ! ! ! line con 0 exec-timeout 0 0 logging synchronous login local transport input none stopbits 1 line aux 0 stopbits 1 ! |
Forming neighbor relation with the border nodes and TOR. |
line vty 0 4 exec-timeout 0 0 login local transport input all ! line vty 5 97 exec-timeout 0 0 login local transport input all |
Enabling telnet and Secure Shell (SSH). Required by Cisco DNA Center. |
clock timezone PST -8 ntp source loop 0 ntp master |
For this lab, fusion is the Network Time Protocol (NTP) primary. |
Underlay configurations for border | Comments
! conf t ! hostname Border1-9500 ! enable secret Dnac123! ! no aaa new-model ! ip routing ! ip domain name cisco.com ! no ip domain-lookup
system mtu 9100 ! username netadmin privilege 15 password 0 Dnac123! |
Increasing the maximum transmission unit (MTU) to account for the Virtual Extensible LAN (VXLAN) encapsulation. |
! snmp-server community Dnac123! rw
crypto key generate rsa 1024 |
Required to enable SSH on the device. |
router isis net 49.0000.0100.0421.2003.00 domain-password C1sco123 metric-style wide log-adjacency-changes nsf ietf |
For this lab we are using IS-IS in the underlay. |
bfd all-interfaces |
Enable BFD for fast failure detection on the IS-IS interfaces. |
! interface Loopback0 ip address x.x.x.x 255.255.255.255 ip router isis bfd interval 500 min_rx 500 multiplier 3 ! interface TenGigabitEthernet1/0/1 switchport mode trunk ! interface TenGigabitEthernet1/0/2 switchport mode trunk ! ! interface TenGigabitEthernet1/0/9 description "Connection to Edge-1" no switchport ip address x.x.x.x 255.255.255.252 ip router isis bfd interval 500 min_rx 500 multiplier 3 no bfd echo no shut ! interface TenGigabitEthernet1/0/10 description "Connection to Edge-2" no switchport ip address x.x.x.x 255.255.255.252 ip router isis bfd interval 500 min_rx 500 multiplier 3 no bfd echo no shut ! interface TenGigabitEthernet1/0/16 description "Connection to Border2-9500" switchport switchport mode trunk ! Vlan 2 no shut exit ! Vlan 3 no shut exit ! Vlan 25 no shut exit ! interface Vlan2 description "Connection to Border-1" ip address 192.168.1.2 255.255.255.252 no shut ! interface Vlan3 description "Connection to Border-2" ip address 192.168.1.10 255.255.255.252 no shut ! interface Vlan25 description "Connection to Border2-9500" ip address 192.168.1.93 255.255.255.252 ip router isis bfd interval 500 min_rx 500 multiplier 3 no bfd echo no shut |
|
router bgp 65001 bgp router-id 192.168.200.3 bgp log-neighbor-changes neighbor 192.168.1.1 remote-as 65000 neighbor 192.168.1.9 remote-as 65000 |
BGP neighbor with the fusion routers. |
!
address-family ipv4
network 192.168.1.0 mask 255.255.255.252
redistribute isis level-1-2 neighbor 192.168.1.1 activate neighbor 192.168.1.9 activate maximum-paths 2 exit-address-family ! ip forward-protocol nd |
Advertising connected links into BGP
Advertising the underlay into the shared services |
ip prefix-list SHARED_SERVICES_NETS seq 5 permit 10.172.3.0/24 ip prefix-list SHARED_SERVICES_NETS seq 10 permit 10.172.4.0/24 ip prefix-list SHARED_SERVICES_NETS seq 15 permit 10.172.2.0/24 ! ! route-map GLOBAL_SHARED_SERVICES_NETS permit 10 match ip address prefix-list SHARED_SERVICES_NETS ! ! line con 0 exec-timeout 0 0 logging synchronous login local stopbits 1 line vty 0 4 exec-timeout 0 0 login local transport input all ! line vty 5 97 exec-timeout 0 0 login local transport input all ! |
Limiting the shared services routes to be imported into the VNs. |
Underlay configurations for edge | Comments
! conf t ! hostname Edge1-9300 ! enable secret Dnac123! ! no aaa new-model ! ip routing ! ip domain name cisco.com ! system mtu 9100 ! username netadmin privilege 15 password 0 Dnac123! ! snmp-server community Dnac123! rw ! no ip domain-lookup ! crypto key generate rsa 1024
! router isis net 49.0000.0100.0421.2005.00 domain-password C1sco123 metric-style wide log-adjacency-changes nsf ietf bfd all-interfaces ! interface Loopback0 ip address x.x.x.x 255.255.255.255 ip router isis bfd interval 500 min_rx 500 multiplier 3 no bfd echo ! interface GigabitEthernet1/0/23 description "Connection to Border2 " no switchport ip address 192.168.1.110 255.255.255.252 ip router isis bfd interval 500 min_rx 500 multiplier 3 no bfd echo no shut ! interface GigabitEthernet1/0/24 description "Connection to Border1" no switchport ip address 192.168.1.102 255.255.255.252 bfd interval 500 min_rx 500 multiplier 3 ip router isis no bfd echo no shut ! ip forward-protocol nd ip pim ssm default ! line con 0 exec-timeout 0 0 logging synchronous login local stopbits 1 line vty 0 4 exec-timeout 0 0 login local transport input all ! ! line vty 5 97 exec-timeout 0 0 login local transport input all ! vlan 5 no shut ! ! $$ Assurance Configuration $$ !
interface Vlan5 ip address 172.16.20.1 255.255.255.0 ip helper-address 10.172.3.220 ip router isis bfd interval 500 min_rx 500 multiplier 3 no shut exit |
Required in this lab to onboard non-fabric APs for Assurance. |
int range gig 1/0/11-12 switchport switchport mode access switchport access vlan 5 no shut exit |
APs connected to these ports. Assigning them to VLAN 5 for the Assurance use case. |
Validation:
Run the following commands on the border and edge nodes to verify the underlay configuration and reachability to shared services.
Border underlay verification:
Execute the command “show ip bgp summary” on the border nodes. You should observe two BGP neighbors; these are the two fusion routers.
Border1-9500#sh ip bgp summary
BGP router identifier 192.168.200.3, local AS number 65001
BGP table version is 25, main routing table version 25
18 network entries using 4464 bytes of memory
26 path entries using 3536 bytes of memory
6 multipath network entries and 12 multipath paths
4/4 BGP path/bestpath attribute entries using 1120 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 9144 total bytes of memory
BGP activity 19/1 prefixes, 32/6 paths, scan interval 60 secs
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
192.168.1.1 4 65000 22 18 25 0 0 00:11:13 9
192.168.1.9 4 65000 18 19 25 0 0 00:08:44 9
Execute the “sh ip route bgp” command on the border nodes. Observe the routes for the shared services learned via BGP on the border.
Shared services IP ranges:
10.172.2.x - Cisco DNA Center
10.172.3.x - DHCP, ISE
10.172.4.x - WLC
Border1-9500#sh ip route bgp
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is not set
10.0.0.0/24 is subnetted, 4 subnets
B 10.172.2.0 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
B 10.172.3.0 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
B 10.172.4.0 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
B 10.172.5.0 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
192.168.1.0/24 is variably subnetted, 14 subnets, 2 masks
B 192.168.1.4/30 [20/0] via 192.168.1.1, 00:12:14
B 192.168.1.12/30 [20/0] via 192.168.1.9, 00:08:47
192.168.101.0/30 is subnetted, 2 subnets
B 192.168.101.0 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
B 192.168.101.4 [20/0] via 192.168.1.9, 00:08:47
[20/0] via 192.168.1.1, 00:08:47
192.168.200.0/32 is subnetted, 6 subnets
B 192.168.200.1 [20/0] via 192.168.1.1, 00:12:14
B 192.168.200.2 [20/0] via 192.168.1.9, 00:08:47
Execute the “sh isis neighbors” command on the border nodes. Observe three neighbors: the other border node (Border2-9500), Edge-1, and Edge-2.
Border1-9500#sh isis neighbors
System Id Type Interface IP Address State Holdtime Circuit Id
Border2-9500 L1 Vl25 192.168.1.94 UP 9 Border2-9500.03
Border2-9500 L2 Vl25 192.168.1.94 UP 9 Border2-9500.03
Edge1-9300 L1 Te1/0/9 192.168.1.102 UP 25 Border1-9500.01
Edge1-9300 L2 Te1/0/9 192.168.1.102 UP 28 Border1-9500.01
Edge2-9300 L1 Te1/0/10 192.168.1.106 UP 27 Border1-9500.02
Edge2-9300 L2 Te1/0/10 192.168.1.106 UP 21 Border1-9500.02
Edge underlay verification:
Execute the “sh ip route” command on the edge nodes. Observe the routes for the shared services learned via IS-IS on the edge.
Shared services IP addresses:
10.172.2.100 - Cisco DNA Center
10.172.3.220, 10.172.3.200 - DHCP, ISE
10.172.4.x - WLC
Edge1-9300#sh ip route
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is not set
10.0.0.0/24 is subnetted, 3 subnets
i L2 10.172.2.0
[115/10] via 192.168.1.109, 00:15:11, GigabitEthernet1/0/23
i L2 10.172.3.0
[115/10] via 192.168.1.109, 00:15:11, GigabitEthernet1/0/23
i L2 10.172.4.0
[115/10] via 192.168.1.109, 00:15:11, GigabitEthernet1/0/23
172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C 172.16.20.0/24 is directly connected, Vlan5
L 172.16.20.1/32 is directly connected, Vlan5
192.168.1.0/24 is variably subnetted, 7 subnets, 2 masks
i L1 192.168.1.92/30
[115/20] via 192.168.1.109, 00:26:12, GigabitEthernet1/0/23
[115/20] via 192.168.1.101, 00:26:12, GigabitEthernet1/0/24
C 192.168.1.100/30 is directly connected, GigabitEthernet1/0/24
L 192.168.1.102/32 is directly connected, GigabitEthernet1/0/24
i L1 192.168.1.104/30
[115/20] via 192.168.1.101, 00:26:12, GigabitEthernet1/0/24
C 192.168.1.108/30 is directly connected, GigabitEthernet1/0/23
L 192.168.1.110/32 is directly connected, GigabitEthernet1/0/23
i L1 192.168.1.112/30
[115/20] via 192.168.1.109, 00:26:43, GigabitEthernet1/0/23
192.168.200.0/32 is subnetted, 4 subnets
i L1 192.168.200.3
[115/20] via 192.168.1.101, 00:26:12, GigabitEthernet1/0/24
i L1 192.168.200.4
[115/20] via 192.168.1.109, 00:26:43, GigabitEthernet1/0/23
C 192.168.200.5 is directly connected, Loopback0
i L1 192.168.200.6
[115/30] via 192.168.1.109, 00:26:09, GigabitEthernet1/0/23
[115/30] via 192.168.1.101, 00:26:09, GigabitEthernet1/0/24
Ensure that both edge nodes have IP reachability to Cisco DNA Center from loopback0.
Edge1-9300#ping 10.172.2.100 source loop 0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.172.2.100, timeout is 2 seconds:
Packet sent with a source address of 192.168.200.5
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/3 ms
Edge1-9300#
Ensure that both edge nodes have IP reachability to the DHCP server from Loopback0.
Edge1-9300#ping 10.172.3.220 source loop 0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.172.3.220, timeout is 2 seconds:
Packet sent with a source address of 192.168.200.5
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
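You can optionally repeat the same check toward the other shared services (ISE and the WLC) from Loopback0, for example:
Edge1-9300#ping 10.172.3.200 source loop 0
Edge1-9300#ping 10.172.4.5 source loop 0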
Log configuration pushed by Cisco DNA Center
Cisco DNA Center configures the devices when they are discovered, added to a site, and provisioned. For learning purposes in this lab, you can log these configuration changes to a file on flash using the EEM applet below.
Apply EEM script
enable
configure terminal
event manager applet catchall
event cli pattern ".*" sync no skip no
action 1.0 syslog msg "$_cli_msg"
action 2.0 file open FH flash:eem_logall.txt a+
action 2.1 file puts FH "$_event_pub_time %HA_EM-6-LOG: catchall: $_cli_msg"
action 2.2 file close FH
end
Remove EEM script
enable
configure terminal
no event manager applet catchall
end
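To review the commands that the applet has captured, you can display the log file it writes to flash (the filename matches the one defined in the applet above):
more flash:eem_logall.txt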
Cisco DNA Center-to-ISE integration
Cisco ISE is a next-generation identity and access control policy platform that enables enterprises to enforce compliance, enhance infrastructure security, and streamline their service operations.
ISE will be used to manage authentication, authorization and scalable group tag (SGT) policies within Cisco SD-Access. ISE is also the authentication, authorization, and accounting (AAA) server used in SD-Access.
Cisco DNA Center subscribes as a pxGrid client to ISE and obtains the SGTs from ISE through this pxGrid client/server relationship. Cisco DNA Center writes scalable group access control lists (SGACLs), which are the enforcement policies, along with guest wireless authentication and authorization results and associated policies, to ISE via External RESTful Services (ERS). Both ISE and Cisco DNA Center monitor the TrustSecMetaData pxGrid topic, over which SGTs and SGACLs are communicated.
Procedure:
1. On ISE: Enable ISE pxGrid services and ERS read and write.
a) Log in to the ISE primary admin node (PAN) from the Remote Desktop Protocol (RDP) session: ISE IP 10.172.3.200, with admin/Dnac123!.
Ensure that the ISE version is 2.4 patch 5 by clicking the gear icon at the top right and selecting About ISE.
b) Choose Administration > System > Deployment.
c) Click the hostname of the ISE node (“ise”) on which you want to enable pxGrid services.
Note: In a distributed deployment, this can be any ISE node. However, there are best practices for distributing this function. Please refer to ISE Performance and Scale.
d) Under the General Settings tab, make sure the pxGrid and Enable Device Admin Service check boxes are checked.
e) Make a note of the fully qualified domain name (FQDN) on ISE. This will be used during the Cisco DNA Center and ISE integration.
2. On Cisco DNA Center:
Add the ISE node as an Authentication and Policy server
a) Log in to the Cisco DNA Center web-based GUI.
b) Click the gear icon and choose System Settings.
c) Under External Network Services > Identity Services Engine, click the Configure Settings link.
d) On the Settings > Authentication and Policy Servers page, click the large Add icon to display the Add AAA/ISE server settings.
For this lab, use:
Server IP Address: 10.172.3.200
Shared Secret: Dnac123!
Toggle the Cisco ISE server to ON
Username: admin
Password: Dnac123!
FQDN: ise.cisco.com
Subscriber Name: Cisco-DNA-Center
e) Click View Advanced Settings and also select TACACS. Leave the port numbers and retries at their default values.
Note: The subscriber name “Cisco-DNA-Center” will be shown as the pxGrid subscriber on ISE.
3. When you are finished populating these fields, click Apply and wait for the server status to show as ACTIVE. This can take up to five minutes.
4. Approve Cisco DNA Center as a pxGrid subscriber.
a) By default, ISE requires all pxGrid clients to be approved; you can change this behavior under pxGrid Services > Settings. On ISE, open the pxGrid Services page.
b) Notice that Total Pending Approval shows (1) and that the Cisco DNA Center subscriber status is Pending. Select the client, dna-center, and click Approve.
Now that the underlay devices have basic IP connectivity, they can be discovered by Cisco DNA Center and added to its inventory.
Log in to Cisco DNA Center with Chrome or Firefox.
To view the Cisco DNA Center version, click the gear at the top right and select About Cisco DNA Center.
Click the Discovery app:
The Discovery tool is used to find devices using Cisco Discovery Protocol, Link Layer Discovery Protocol (LLDP), or IP address ranges. The tool pushes the command-line interface (CLI) credentials defined in the discovery job to the device if those credentials are not already configured. Simple Network Management Protocol (SNMP) is required for device discovery.
Create a new discovery profile, Network-devices-wired, by clicking the + symbol.
Use the Range option to discover the devices.
IP address range: 192.168.200.1 to 192.168.200.6
Click Add Credentials to show the side panel. CLI credentials are the credentials used to log in to the network device. In this lab, we added the CLI credentials on the switches in the manual underlay step.
Note: You cannot use “admin” or any permutation of “Cisco” for the username or password.
SNMP v2 read and write credentials need to be entered.
Note: If this is an existing network being added to Cisco DNA Center for Assurance, use the same SNMP credentials already configured on the devices, if any.
If the credentials are not the same, Cisco DNA Center will configure the new SNMP credentials on the network devices after discovery.
Note: Be sure to save each of the credentials screens before moving to the next credential definition.
Credentials: netadmin/Dnac123!
SNMP: SNMP-Read community: Dnac123!
SNMP-Write community: Dnac123!
Cisco DNA Center can store these credentials in the Design application to be used in the future.
In this lab, we are discovering devices via the loopback interfaces we created on the switches in the manual underlay step. In an existing network, the devices can be discovered via any IP address on the switch reachable by Cisco DNA Center. If the network is SD-Access, ensure that the discovery IP is the Loopback0 interface.
Once populated, the credentials will appear in the Credentials section on the page. These can be checked or unchecked if there are multiple sets of credentials.
Pick the mode for Cisco DNA Center to access the network devices: SSH, Telnet, or both. In an existing network, pick one of the network protocols already configured on the network devices. Both SSH and Telnet can be chosen if the network has devices using both protocols. The order of SSH and Telnet can be changed by drag-and-drop.
Expand the Advanced section and select SSH and Telnet.
Click Start for the discovery process to begin. The discovery will initially show “Queued,” and then, once it begins, “In Progress.” As devices are discovered, they populate on the right-hand side. Six devices should be discovered once the discovery is complete. Ensure that Status, ICMP, SNMP, and CLI have green check marks. If any of these fields shows a red X, that function is not working, and you will have to validate the ICMP reachability, SNMP credentials, CLI credentials, and so on.
At this point, Cisco DNA Center configures the discovered devices with a trustpoint certificate, creates RSA keys (for trust between Cisco DNA Center and devices), and enables IP device tracking (IPDT) (to keep track of connected wired hosts) on the wired devices.
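If you want to spot-check these changes on a discovered switch, show commands along these lines are one way to do so (output will vary by platform and software release):
show crypto pki trustpoints
show device-tracking database
show ip ssh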
Start a discovery for the WLC, using the same global credentials as configured during the previous discovery.
Profile name: Network-devices-wireless
IP address: 10.172.4.5 - 10.172.4.5
Ensure that None is selected under the Preferred Management IP for the WLC
The streaming telemetry configuration and trustpoint certificate are also pushed to the WLC during discovery.
Verify that the devices are added to the inventory on Cisco DNA Center in the Inventory app. Ensure that the devices show as Managed and Reachable.
Design: Creating a site hierarchy
Cisco DNA Center provides a robust Design application to allow customers of every size and scale to easily define their physical sites and common network resources (DHCP, DNS, etc.). This is implemented using a hierarchical format for intuitive use, while removing the need to redefine the same resource in multiple places when provisioning devices.
The behavior of Cisco DNA Center is to inherit settings from the global level into subsequent levels of the hierarchy. This enables consistency across large domains while also giving administrators the flexibility to adapt and change the settings for a local building or a floor.
To begin, select the Design app to open it. Once there, you will see a world map within a frame and a site hierarchy on the left-hand side. Add Site is used to create new sites manually or to import them from a CSV file. Cisco DNA Center provides a template that can be downloaded from the Import Sites area. This template can be populated with countries, states, buildings, etc. and then imported to create a global hierarchy. In this lab, we will be creating sites manually.
Note: The browser used to configure Cisco DNA Center must have Internet connectivity for the maps to appear.
Create a site hierarchy – North America > San Jose > Building22 > Floor-1, Floor-2.
Click Add Site and select Add Area. Define this area as North America.
Highlight North America and click Add Site, then select Add Area. Define this area as San Jose. This will create another area called San Jose under North America.
Devices need to be assigned to a building or floor. Expand North America, and next to San Jose select the gear icon to add a building where the network devices will reside. Define this building as Building 22. Address: 821 Alder Dr. Milpitas, California 95035 United States
We will create two floors in Building 22. Highlight Building 22. Click the gear icon to add a floor in Building 22 where the network devices will reside. Define this Floor as Floor-1, with the parameters shown in the screen shot.
Note: The floor names have to be unique.
Repeat the steps to add another Floor, called Floor-2. Once complete, you should see the following hierarchy in your Cisco DNA Center.
On Cisco DNA Center, go to Provision.
To begin, you will see the seven devices we discovered in the inventory under the Provision application (excluding the APs). All devices will need to be assigned to a building or floor except APs, which need to be assigned to a floor.
Note: When assigning switches and routers to sites or provisioning them, Cisco DNA Center version 1.2.6 does not automatically enable telemetry for wired devices (such as SNMP, syslog, or NetFlow). This needs to be done using the Telemetry tool.
Assign these devices to a site, one product family at a time.
Select the devices and, under Actions, click Assign Device to Site.
Choose Global/North America/San Jose/Building22/Floor-1 as the site. Check Apply to All. Click Apply.
Similarly, add other devices to the site as shown below.
Edge-1 has two APs connected on interfaces GigabitEthernet1/0/11 and GigabitEthernet1/0/12. To simulate an existing AP broadcasting a nonfabric SSID, configure the edge ports in VLAN 5.
Log in to the console for Edge-1 and verify the following configuration.
vlan 5
interface Vlan5
ip address 172.16.20.1 255.255.255.0
ip helper-address 10.172.3.220
ip router isis
exit
!
interface GigabitEthernet1/0/11
switchport access vlan 5
no shut
!
interface GigabitEthernet1/0/12
switchport access vlan 5
no shut
end
!
The APs connected to the edge switch should get an IP address from the DHCP server and join the WLC. The DHCP server is configured with option 43 pointing to the WLC IP address.
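For reference only (the lab's DHCP server is already configured), on a Cisco IOS DHCP server an equivalent scope for the VLAN 5 AP subnet with option 43 pointing to the WLC at 10.172.4.5 would look roughly like this; the pool name is hypothetical, and the hex string is sub-option 0xf1, a length of 0x04, and the WLC IP in hex:
ip dhcp pool AP-POOL
network 172.16.20.0 255.255.255.0
default-router 172.16.20.1
dns-server 10.172.3.220
option 43 hex f104.0aac.0405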
The APs will now show up in the inventory and on the Provision page. Add the APs to the site Global/North America/San Jose/Building22/Floor-1.
Note: It will take up to 30 minutes before Assurance data gets populated in some detail and can be viewed for monitoring and troubleshooting purposes.
Verify that telemetry is configured by discovery on the WLC
Log in to the WLC to confirm that streaming telemetry has been configured and is operational. Using PuTTY, you can log in to the WLC (10.172.4.5, admin/Cisco123) and execute the “show network assurance summary” command.
We need to ensure:
● The correct Cisco DNA Center IP is configured (10.172.2.100)
● The WSA service is enabled
● The “Last Success” is newer than “Last Error”
● The “JWT Last Success” is newer than “JWT Last Failure”
Log in to the WLC using PuTTY and verify that telemetry is configured on the WLC:
Alternatively, verify that streaming telemetry is configured on WLC using the WLC UI:
Validating the Assurance use case
Verify that two APs are joined to the WLC. Log in to the WLC, navigate to the Monitor menu as shown below, and select 802.11a/n/ac under Radios to view the APs.
Now we will connect the two wireless clients to the network.
Use vCenter on the remote desktop to access the clients.
IP address: 100.127.12.100
Username: instructor
Password: Dnac123!
We have four wired and two wireless clients. Pick your pod from the left side, click the first wireless client, and select Open Console.
Once in the wireless client console, select the wireless network setting as shown and pick the SSID for your pod to connect to. Do not connect to any other SSID than the one for your pod.
For the username, enter user1 for the first wireless client, and for the password enter Dnac123!
Follow the same steps as above for the second wireless client, but enter the username user2 and password Dnac123! to connect to the SSID for your pod.
Use Case 1: Proactive network health assurance
In a production network with thousands of devices, it becomes challenging for network administrators to quickly identify and troubleshoot network issues.
Cisco DNA Center Assurance provides a contextual dashboard with a health summary of all network devices globally and per site or location. Network administrators can focus on an individual site where any devices or clients are experiencing issues.
The Overall Health page on Cisco DNA Center Assurance provides a health score of all the network devices managed by Cisco DNA Center. Health statistics for all network devices are visible for up to 14 days. Network administrators can also view a breakdown of all healthy and unhealthy network devices by device type. Proactively monitoring the health of the wireless infrastructure helps ensure optimal performance and longer network uptime.
Click the Assurance section in the Cisco DNA Center dashboard to view the Overall Health page.
The Overall Health page gives you a quick summary of the health of the clients and network devices. The health of wired and wireless clients is shown separately. Hover over the network devices area to see how devices are classified by health.
Hover over the information icon to view how the health score is calculated for clients.
Click Show to see the Hierarchical Site View, which displays the health and count of client and network devices per site.
The network health score provides an indication of the health of the network infrastructure devices on the network. The client health score provides an indication of the health of the client endpoints on the network.
The top 10 major issues are listed on the Overall Health page, as shown below. Issues are prioritized based on their impact on the network.
Click an issue to view detailed information about the issue itself, the clients and locations impacted, and a list of suggested actions to help troubleshoot it. If an issue does not appear in the Top 10 list, you can still view it from the affected client's Client 360 page.
To view detailed information about devices experiencing issues in a network, network administrators can navigate to the Network Health page, which gives detailed analytical information on all devices based on device type (WLC, AP, and so on). Network administrators can monitor the health scores of APs in a wireless network. They can see which APs are serving many clients and which APs are experiencing high interference.
Navigate to the Network Health screen, as shown below.
The time slider lets you view network health for a particular time period.
Information on the Network Health page is divided by device role.
Click Show/Hide to see the Hierarchical Site View. Choose a site of interest and click Apply to view the network health for that site.
Scroll down the Network Health page to see the Total APs up and down, Top N APs by client count, Top N APs with high interference, and the Network Devices table. Filter the table by device, type, and overall health, as needed. Click the Latest tab to view the current 5-minute snapshot of network health.
Hover over the health score of a device to view key performance indicators (KPIs) that affect the health score.
The trend analysis shows a 24-hour trend of network health and AP analytics. Click View Details to see detailed information.
On the view details screen, click a bar on the graph to view the device table.
After identifying the list of unhealthy devices, network administrators can use the Device 360 page to conduct further troubleshooting. The Device 360 page provides granular information on the KPI metrics that are contributing to the device health score. View the physical neighbor topology and device details to help pinpoint the root cause of the issue.
In the Network Devices table, click a device to open the Device 360 page for that device.
On the Device 360 page, hover over the time slider to view the KPI metrics. Also, view a list of any issues for the device. Click an issue to view detailed information and suggested actions.
The Physical Neighbor Topology shows the neighboring devices and their respective health scores.
Scroll down the Device 360 page to view detailed information about the device.
Click the Connectivity tab to view the radio traffic and client count.
Click the RF tab to view radio frequency details.
Use Case 2: Proactive client experience monitoring
When hundreds or thousands of clients are present in the network at any given time, providing a view of each client can be difficult. Network administrators must know the general state of the clients, which includes visibility into the number of clients with good health and a way to determine which clients show degraded health and require attention.
The Client Health page in Cisco DNA Center Assurance helps in proactive monitoring of client onboarding and connectivity health.
Network administrators can identify and resolve issues before they affect users. Proactive monitoring also reduces the mean time to acknowledge an issue and the mean time to resolution.
Navigate to the Client Health page, as shown below.
The Client Health page shows the health scores and connectivity details of wired and wireless clients.
The time slider lets you view client health for a particular time period.
Use the Sankey chart to identify the root cause of any onboarding issues. If needed, choose the appropriate SSID to view client health per SSID.
The Client Onboarding Times area monitors how long it takes for a wireless client to go through the Association, AAA, and DHCP process, and for an 802.1X-enabled wired client to go through AAA and DHCP. If a threshold breach or onboarding failure occurred, you can identify which onboarding step caused it. You can click View Details to get more information.
The Connectivity RSSI area displays the RSSI data for wireless clients on the network and highlights clients with poor RF.
The Connectivity SNR area provides insight into whether wireless clients have signal-to-noise ratio (SNR) issues.
Scroll down the Client Health page to view the Client Devices table. Filter the table by type, health, and data, as needed. The table shows the client username, IP address, device type, and other details. Click the Latest tab to view the current 5-minute snapshot of client health.
Use Case 3: Reactive client troubleshooting
Understanding how healthy the network connectivity is for a given client is a challenge for network administrators. The Cisco DNA Center Client 360 page helps troubleshoot and identify the root cause of a connectivity issue experienced by a single client. Network administrators can view the overall connectivity experience of the client and its health based on a number of wireless KPI metrics.
The Client 360 page provides faster root cause analysis of client-reported problems that are impacting end-user productivity, using real-time and unique time travel capabilities. This helps to increase client productivity and decrease the mean time to resolution.
Click a client in the table to view the Client 360 page.
The time slider shows the client health and events over time. Move the time slider to a time period of interest to view client health and events for that interval.
View the list of issues the client experienced over 24 hours or during the time period selected in the time slider. Click the issue to view a detailed description and suggested actions.
View the onboarding status of the client with SSID, AP, and WLC connection information.
View the health score of the AP and WLC to make sure they are in good health.
Scroll down the Client 360 page to see the Event Viewer, which shows the onboarding events from the WLC. Click an event to see the status and details.
Scroll down to see detailed client information. Click the Device Info, Connectivity, and RF tabs to view different statistics for the client.
Hover over the charts in the Connectivity and RF tabs to view additional details.
Reset the edge configuration from wireless issues in Cisco DNA Center
SD-Access will configure the access ports on the edge switches that connect to the APs for the appropriate VN (INFRA_VN), so we will remove the now-unneeded switchport access VLAN 5 configuration from Edge-1 and Edge-2. Before starting the SD-Access lab, restore the defaults on the interfaces on Edge-1 and Edge-2 that are connected to the APs.
On Edge-1:
conf t
default interface range GigabitEthernet 1/0/11-12
end
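You can confirm that the ports have returned to their default configuration with, for example:
show running-config interface GigabitEthernet1/0/11
show running-config interface GigabitEthernet1/0/12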
Cisco DNA Center lets you save common resources (such as DHCP, DNS, and syslog) with the Network Settings feature in the Design application. Information pertaining to the enterprise can be stored and reused across the network and is assigned when the devices are provisioned to the site.
In the Cisco DNA Center UI, navigate to Design > Network Settings. This is where you configure all device-related network settings.
Click + Add Servers to add AAA and NTP servers. Click OK.
We will use ISE TACACS for network device authentication and ISE RADIUS for endpoint authentication.
Under AAA Server, select Network and Client/Endpoint.
For Network, select ISE as the server and TACACS as the protocol. Choose 10.172.3.200 for both Network and IP Address (Primary). The first selection specifies the ISE policy administration node (PAN) and the second is the ISE policy service node (PSN).
For Clients, select ISE as the server and RADIUS as the protocol. Choose 10.172.3.200 for both Client/Endpoint and IP Address (Primary).
The NetFlow, syslog, and SNMP fields are used for wired Assurance; Cisco DNA Center provisions these settings on the devices so that they send telemetry back to Cisco DNA Center for Assurance use cases. In this lab we are not doing wired Assurance, so we will not populate these fields. Add the following information for the common resources (shared services); a sketch of how these settings render on a device follows the list.
NTP: 192.168.200.1 and 192.168.200.2
DHCP: 10.172.3.220
DNS domain: cisco.com
DNS server: 10.172.3.220
Time zone: UTC
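When devices are later provisioned to the site, these common settings render on a switch as ordinary IOS commands along the following lines (illustrative only; the exact configuration Cisco DNA Center generates may differ, and the DHCP server is typically applied as an ip helper-address on the gateway SVIs rather than globally):
ntp server 192.168.200.1
ntp server 192.168.200.2
ip name-server 10.172.3.220
ip domain name cisco.com
clock timezone UTC 0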
The device credentials created during discovery show up here. Credentials can be modified or new credentials can be created. For onboarding of APs and extended nodes, device CLI credentials and SNMP read and write need to be selected and saved.
Click the CLI Credentials radio button and click the Save button at the bottom of the screen.
Click SNMPV2C Write, click the radio button, and click the Save button at the bottom of the screen.
Define global IP pools for the network
IP address pools are created at the global level and then reserved within sites. They can be created in Global as a larger network (for example, /16) and then reserved as a smaller subnet within the sites (for example, /24). Cisco DNA Center uses IP addresses from configured IP address pools for the SD-Access use cases.
Cisco DNA Center supports manually entering IP address allotments as well as integrating with IP address management (IPAM) solutions, such as Infoblox, to learn of existing IP address pools already in use.
In this lab, we will be manually defining the IP address pools we require and creating only /24 subnets for global IP pools.
Navigate to Design > Network Settings > IP Address Pools and click the + icon to add an IP pool.
Create one global IP pool 172.16.0.0/16 subnet.
Note: The Overlapping check box should not be checked. Overlapping allows users to identify overlapping subnets within their network, enabling those addresses to be used in multiple places that would otherwise be denied.
We will be reserving the IP pools for the site, Building 22. In the hierarchy on the left side, choose Building 22.
When you navigate to the building, the following message appears. It explains the functioning of the hierarchy within Cisco DNA Center and how the network settings can be inherited (assigned) for the child sites in the hierarchy. To prevent its reappearance, check Don’t show again. Click OK to continue.
On Building 22, click Reserve IP Pool to make a reservation for this building. Follow the screen shots shown below to reserve IP pools (for APs, campus, IoT, guest, border handoff, and multicast) for Building 22.
WPA2 enterprise SSID for the campus
Navigate to Design > Network Settings > Wireless.
Next to Enterprise Wireless, click Add.
Define an enterprise SSID Ent-PODX (where X is the pod number) for a voice and data network. Below is the screen shot from the POD1 setup.
Choose WPA2 Enterprise security for the SSID. Check the Fast Lane check box. Fast Lane is a set of configuration changes that tune the wireless network for iOS 10 devices. It is part of a suite of features that resulted from the Cisco and Apple partnership, which also includes Adaptive 802.11r and robust 802.11k/v support on iOS devices. Leave the remaining fields with their default values. Click Next.
Click Next to assign a wireless profile. This will determine where (to which site or sites) the SSID will be broadcasted and which APs will be broadcasting this SSID. Click Add.
This will take you to a popup window. Enter Wireless Fabric as the profile name. Select San Jose Building 22 Floor-1 from the drop-down options as the site. This SSID will be broadcasted to the APs assigned to that floor. Click Add.
Click Finish to complete the SSID workflow. The new SSID will appear in the Enterprise Wireless Network area.
Build the guest wireless SSID (optional)
Similarly, next to Guest Wireless Network, click Add. Name the SSID Guest-PODX (where X is the pod number) and select Web Auth security. Choose ISE Authentication for the authentication server.
Click Next. In the Edit a Wireless Profile area, check the Wireless Fabric check box.
Scroll down and click Save. Click Next.
You will be presented with a Portal Builder. This Portal Builder will automate the creation of a guest portal in ISE. Enter Guest Portal as the portal name in the top left of the window. Leave the default values in this screen.
Scroll down to the bottom of the window and click Save.
Click Finish to complete the creation of the enterprise and guest wireless SSIDs and the guest wireless portal.
The two created SSIDs will appear on the Wireless screen, as shown below.
Security and policy are an integral part of SD-Access. Cisco ISE is a critical pillar that automates security policy by integrating with Cisco DNA Center. Segmentation within SD-Access is enabled through VNs (which are equivalent to VRFs) and SGTs. VNs provide macro-level segmentation with complete isolation between groups of devices, whereas SGTs allow micro-segmentation within a VN by providing logical segmentation based on attributes defined in ISE.

These attributes reflect your company's group-based segmentation requirements and can be based on Active Directory (AD) group membership, endpoint type, OS type, location, and so on. ISE maintains all the scalable group information. Cisco DNA Center uses these scalable groups for policies and the corresponding Layer 3 contracts (permit, deny) and Layer 4 contracts (for example, permit HTTPS and SSH; deny Telnet and FTP). These policies and contracts are communicated to ISE via ERS (REST APIs). ISE then pushes the policies and the SGACL contracts to the network infrastructure when a user or endpoint is authorized onto the network.
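To make the contract model concrete, a Layer 4 contract such as “permit HTTPS, deny Telnet” corresponds to an SGACL along the lines shown below. In SD-Access the SGACL is authored in ISE and downloaded to the switches dynamically after the policy is deployed, so this is only an illustration of the rendered policy (with a hypothetical name), not something you configure by hand:
ip access-list role-based Web_Only
permit tcp dst eq 443
deny tcp dst eq 23
deny ip
!
cts role-based enforcement
You can later verify downloaded policies on a fabric edge with the "show cts role-based permissions" command.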
Note: Use an RDP session from the jump host on your pod to access Cisco DNA Center. You will need RDP to cross-launch ISE from Cisco DNA Center.
Define scalable group tags for user groups
AD, local, endpoint, and user groups can be associated with an SGT. SGTs are carried throughout the fabric and are the basis for access policy enforcement within a VN. This section explains how to define SGTs for the HR, Acct, and HVAC groups that will be used in this lab.
In the Cisco DNA Center UI, navigate to Policy > Dashboard and make sure that you see Scalable Groups, which confirms that the Cisco DNA Center and ISE pxGrid connection is up and the SGTs are synchronized between them. By default, ISE is predefined with 16 scalable groups.
From the Policy page, click Group-Based Access Control. Click Scalable Groups to view the default SGTs pushed from ISE.
Click Add Groups in the top right corner. This launches a new browser tab that connects to ISE. Click the Add button above the table header.
For the SGT name, enter Acct. The icon is not critical; choose one that seems relevant to the SGT and add a description of your choice. Click Submit at the bottom of the page to save the new group.
Follow the same process for creating SGTs for the HR and HVAC groups.
Return to Cisco DNA Center and refresh the Scalable Groups table. Go to the second page and verify that Cisco DNA Center learned the newly created SGTs: HR, Acct, and HVAC.
Create VNs and choose scalable groups
By default, any network device or user within a VN is permitted to communicate with other users and endpoints in the same VN. To enable communication between different VNs, traffic must leave the fabric border and then return, typically traversing a firewall or fusion router. If users or endpoints need to communicate with each other, the recommendation is to assign them to the same VN and associate them with SGTs. Communication between users and endpoints in the same VN can then be restricted with the necessary SGACLs to limit any undesired access.
By default, Cisco DNA Center has a DEFAULT_VN that can be used if you do not wish to create a custom named VN. The INFRA_VN is used for APs and extended nodes, and its VRF/VN is leaked to the global routing table (GRT) on the borders. INFRA_VN is used for the Plug and Play (PnP) onboarding services for these devices through Cisco DNA Center. INFRA_VN cannot be used for other endpoints and users.
In this lab, we will create the Campus, Guest, and IoT VNs and assign the desired SGTs to these VNs.
Note: The VN name is case-sensitive, as it corresponds to the VRF name that will be configured on the fabric devices.
Campus virtual network
Click Policy > Virtual Network in Cisco DNA Center. Click the blue plus icon at the top left and add a new VN named Campus_VN.
Drag and drop the following scalable groups to the new VN on the right side. You can also click multiple groups and drag them together. Click Save.
Guest virtual network
In SD-Access, we have the ability to create a guest VN as well as a dedicated guest border, control plane, and ISE PSN (RADIUS server). In this lab, we will use the same border, control plane, and ISE PSN as the Enterprise network. Navigate to Policy > Virtual Network. Click the blue plus icon to add a new VN named Guest_VN. Drag and drop only the Guests scalable group to the right side. Check the Guest Virtual Network check box. This will allow guest wireless networks to be instantiated through Cisco DNA Center, along with the ability to specify guest borders that are separate from enterprise borders during fabric provisioning. Click Save.
IoT virtual network
Follow the same steps (as in the previous section) to create IoT_VN and assign the HVAC and Point_of_Sale_Systems groups to it.
Once completed, the three newly created VNs will be reflected on the left panel of the screen, as shown below.
Create SD-Access device credentials (for TACACS)
In this lab, we selected TACACS for device authentication. As part of the provisioning process, Cisco DNA Center will automatically add the devices to ISE as authorized network devices, along with the necessary AAA configuration on the devices to perform network authentication against ISE. Therefore, any device users who administer the network devices must be added to ISE. TACACS policies can be sophisticated and use many attributes, but in this lab, we will create a basic policy for network device access.
Note: Using TACACS for network device authentication is not required. If RADIUS is used, the RADIUS server will also need to have users added so that they can administer the network devices. If AAA is not desired for network device administration, the Network option should not be selected in the Design > Network Settings > AAA Server section under Cisco DNA Center.
In ISE, from the main menu, choose Administration > Identity Management > Identities.
Click Add and create a user netadmin with the password Dnac123! as shown below. The User Groups for netadmin can be left blank, that is, no group association.
Click Save.
Log in to ISE. Navigate to Administration > Deployment. It will bring you to the deployment nodes page. Select the hostname ise and click Edit. Under General Settings, scroll down to Policy Service, and check Enable Device Admin Service (if it has not been enabled previously). Click Save.
Go to Work Center > Device Administration > Device Admin Policy Sets. You will see the Default policy set.
Click the arrow to the far right to open the policy set. Then expand Authorization Policy (1) at the bottom of the list.
Create a new policy by clicking the + symbol above the “Default” rule name. Name the rule Network Device Access. Under Conditions, click the + symbol. Drag Network_Access_Authentication_Passed from the left library panel and drop it onto the editor. Click the green Use button at the bottom right to save the changes and return to the default policy set window.
Under Results, in the Command Sets column, click the + symbol and select Create a New Command Set.
Name the command set Permit. Under Commands, click Add. Select Permit from the drop-down under Grant. Enter * for Command and * for Arguments, as shown in the screen shot below.
Click the checkmark icon to the right to accept the rule.
Click Submit to save the changes and return to the default policy set window.
In the Command Sets column under Results, select the newly defined Permit command set. Click Save.
In this section, we have shown the following:
● How to quickly verify successful communication between Cisco DNA Center and ISE by looking at the Scalable Groups widget under Policy > Dashboard in Cisco DNA Center.
● How to easily add new SGTs by cross-launching into ISE from Cisco DNA Center.
● How to easily add new VNs and tie SGTs to those VNs.
● How to create policies in ISE to provide TACACS network authentication services.
In the next section, we will take the settings that we’ve specified under Design and Policy and start the provisioning of devices through Cisco DNA Center.
Provision devices from Cisco DNA Center
In the Cisco DNA Center UI, navigate to the Provision page.
Select the following devices and provision them all to Floor-1. During the provisioning workflow, Cisco DNA Center can provision only devices of the same family at one time. We will first group and provision all the Cisco Catalyst 9000 family switches (edges and borders) and then provision the wireless LAN controller.
Select the following switches by checking the check box:
● Border1-9500.cisco.com
● Border2-9500.cisco.com
● Edge1-9300.cisco.com
● Edge2-9300.cisco.com
Note: SD-Access requires only the fabric nodes to be provisioned to a site. Although other devices, such as Intermediate nodes and fusion routers, can be provisioned with these settings or customized with configurations defined in the Template Editor, they are not required.
From the Actions drop-down, pick Provision.
They should already be assigned to the site and the site information populated, as this was done in the previous Assurance lab. Click Next.
For SD-Access there is no additional configuration here, so you will see a blank screen. Click Next.
Cisco DNA Center has a Template Editor tool that allows the creation of custom configurations for sites, projects, and devices. In this lab we have not created any templates for these devices, so you will see a blank screen. Click Next.
The summary page shows details of the configuration being pushed to the devices. You will see the details of what you added on the Network Settings page under Design. Review them and click Deploy.
Similarly, provision the wireless LAN controller.
After the provisioning process begins, a message in the lower right corner shows the provision status.
Note: Before proceeding, make sure that all devices are provisioned to a site and are in the Managed state. You may need to click the Refresh button on the page.
After successfully provisioning the devices, open a console session to one of the provisioned nodes from your pod RDP session and issue the command test aaa group tacacs+ netadmin Dnac123! new-code.
Edge1-9300#test aaa group tacacs+ netadmin Dnac123! new-code
Sending password
User successfully authenticated
USER ATTRIBUTES
username 0 "netadmin"
reply-message 0 "Password:"
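As an optional sanity check (a generic IOS command, not part of the configuration pushed by Cisco DNA Center), you can also list the TACACS+ servers registered on the device:
Edge1-9300#show tacacs
The output should list the ISE PSN (10.172.3.200 in this lab) as the configured TACACS+ server.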
Cisco DNA Center configures the network devices with AAA.
Edge1-9300#sh run | sec aaa
aaa new-model
aaa group server radius dnac-client-radius-group
server name dnac-radius_10.172.3.200
ip radius source-interface Loopback0
aaa group server tacacs+ dnac-network-tacacs-group
server name dnac-tacacs_10.172.3.200
aaa authentication login default local
aaa authentication login VTY_authen group dnac-network-tacacs-group local
aaa authentication dot1x default group dnac-client-radius-group
aaa authorization exec default local
aaa authorization exec VTY_author group dnac-network-tacacs-group local if-authenticated
aaa authorization network default group dnac-client-radius-group
aaa authorization network dnac-cts-list group dnac-client-radius-group
aaa accounting update newinfo periodic 2880
aaa accounting identity default start-stop group dnac-client-radius-group
aaa accounting exec default start-stop group dnac-network-tacacs-group
aaa server radius dynamic-author
client 10.172.3.200 server-key 7 112D17041443595F45
aaa session-id common
Create fabric domain and a transit site
Once Cisco DNA Center has provisioned the fabric devices to sites, the SD-Access fabric can be created. In the Cisco DNA Center UI, navigate to Provision > Fabric. This is where you create and manage your fabric domains and transits.
Transits are used to connect multiple fabric sites within a fabric domain (SD-Access Transit) or to the outside world (IP Transit).
IP Transit leverages a traditional IP-based (VRF-lite, MPLS) network, which requires remapping of VRFs and SGTs between sites. This transit requires manual configuration of the devices between the fabric domains and nonfabric sites.
SD-Access Transit enables a native SD-Access (Locator/ID Separation Protocol [LISP], VXLAN, Cisco TrustSec®) fabric, with a domainwide control plane node for intersite communication. This transit is fully automated between the fabric sites.
In this lab, we will be using IP Transit between the borders and the fusion routers. Cisco DNA Center will automate the uplink (external interface) configuration and provide the configuration details that we will manually configure on the next hop (in this lab, a fusion router).
Create transit:
Click + Add Fabric Domain or Transit to create an IP Transit for the fabric site, as shown below. Click Add Transit.
Use the following details for the transit configuration:
Name: SJ-Transit
Transit Type: IP-Based
Routing Protocol: BGP
Autonomous System Number: 65000
Click Save.
Create fabric site:
Click + Add Fabric Domain or Transit to create a fabric site (site hierarchy: Building 22), as shown below. Click Add Fabric. Name the fabric “SJC.”
Click the newly created SJC fabric and select Building 22 from the hierarchy.
All the devices that were provisioned to Building 22 will be shown here in a topology. The placement of the devices in the topology is dependent on the roles you defined earlier in the Inventory tool during the Wireless Assurance lab.
To assign fabric roles, click each device and choose the appropriate role, as described below.
Border1-9500 will be assigned as CP+Border. Because borders are the entry and exit points for the fabric, there are different types based on the kind of network domain they connect to. Cisco DNA Center allows you to configure an internal border (connecting to known IP prefixes in the rest of the company), an external border (connecting to unknown prefixes in the outside world), or a combined internal and external border (connecting to both known and unknown prefixes). In this lab we will configure an external border, as this is the only exit point for traffic leaving the fabric network. Click Border1-9500 and select Add as CP+Border.
In the resulting window, enter the following information:
Border to: Outside World (External)
Local Autonomous Number: 65001
Select IP Address Pool: Border-Handoff-22
Transits: SJ-Transit
Note: Be sure to select “connected to the internet,” as this is required for any external IP transit that is used to advertise a default route to the other borders in the fabric domain.
Expand Transits. Select IP: SJ-Transit. Click the Add button that is next to it.
Once the transit is selected, expand the arrow next to SJ-Transit and click Add interface.
We will be selecting the uplink interface that is connected to our fusion router. Select External Interface: TenGigabitEthernet1/0/1. This is one of the border interfaces connecting to Fusion1.
Cisco DNA Center will configure a handoff interface for each VN: a switch virtual interface (SVI) on border switches or a subinterface on border routers. When multiple VNs are selected and saved at one time, Cisco DNA Center configures the SVIs and associated IP addresses for each VN, in no specific order.
From Cisco DNA Center, select the VNs (INFRA_VN, Campus_VN, IoT_VN, Guest_VN) that will be handed off to the fusion node via BGP.
Click Save.
In this lab, we have redundant links between the borders and fusion routers, so we need to add the other interface. Select External Interface: TenGigabitEthernet1/0/2 (this connects Border1 to Fusion2). Fill in the information as shown below.
Click Save.
Repeat the above steps for Border2-9500. Border2 is using the same information (Ten 1/0/1 and Ten 1/0/2 interfaces, INFRA_VN, Campus_VN, IoT_VN, and Guest_VN, border handoff IP Pool) as was used in Border1.
Click Edge-1 and Edge-2, and add them to fabric. This will add them as fabric edge nodes.
Click the wireless LAN controller and click Add to Fabric. Click Save to save the changes.
Once the device fabric roles are selected, devices become outlined in blue to indicate that they are planned to be added to the fabric but have not yet been configured. Once all the devices are assigned roles, click Save to configure the fabric.
Cisco DNA Center allows you to push the fabric configuration at a set time. For this lab, we will apply the configuration now. Leave the default selection and click “Apply”.
A small green message appears in the lower right, indicating that fabric provisioning succeeded. All devices turn solid blue, indicating that they have been added to the SJC fabric.
After the overlay is provisioned, IP address pools must be assigned to VNs to enable hosts to communicate within the fabric. When an IP pool is assigned to a VN in SD-Access, Cisco DNA Center immediately connects to each edge node to create the appropriate SVI to allow the hosts to communicate. In addition, an anycast gateway is configured for each IP pool on all edge nodes within the fabric domain. This is an essential element of SD-Access, because it allows hosts to easily roam to any edge node with no additional provisioning.
Click Fabric > Host Onboarding, select the Closed Authentication template, and click Save.
We will now assign the IP pools to the created VNs.
As mentioned earlier, the APs and extended nodes (not used in this lab) will be part of the INFRA_VN for Cisco DNA Center’s PnP host onboarding feature. Click the INFRA_VN and select AP-pool-22. Ensure that AP is selected as the Pool Type. Click Update.
The VN will turn blue, indicating that there is an active IP pool associated with it, as shown below.
Repeat the steps for adding the Campus-22 IP pool to Campus_VN, the IoT-22 IP pool to IoT_VN, and the Guest-22 IP pool to Guest_VN, selecting Data as the Traffic Type. As of Cisco DNA Center 1.2.5, we have the ability to statically assign a specific VN-to-IP pool mapping to a scalable group. We will not be configuring this in this lab.
The associated screen shots are shown below.
At the end of the exercise you should see the following on the Host Onboarding screen.
SD-Access allows for consistent policies across wired and wireless networks. It also allows for the same IP pools and VNs on both wired and wireless. In this next step, we will be applying the same VN and IP pool used for the wired network to the Ent-POD1 SSID. For the Ent-POD1 SSID, select Campus_VN from the Address Pool drop-down, as shown below.
Similarly, select Guest_VN from the Address Pool drop-down for Guest-POD1 to assign the Guest_VN and associated IP pool to be used for the Guest SSID.
The SD-Access topology has two APs connected on fabric Edge-1 (Gi1/0/11 and Gi1/0/12).
The default authentication template (selected earlier) was Closed Authentication, resulting in every port on the fabric edge being configured for 802.1X authentication. Because access point authentication is not supported, the AP-connected ports must override this global default; we can manually select those ports and change their behavior.
For the APs, we will be using the No Authentication security template. Scroll to the bottom of the Host Onboarding page. In the Select Port Assignment area, choose Edge-1 from the left side and select the AP-connected ports GigabitEthernet1/0/11 and GigabitEthernet1/0/12. Click Assign.
In the side window that opens, from the Connected Device Type drop-down list, choose Access Point(AP). From the Auth Template drop-down list, choose No Authentication. Click Update.
Note: Assuming that DHCP option 43 pointing to the WLC IP address is already configured for the AP DHCP Scope on the DHCP server, the fabric edge-connected APs will register with the fabric WLC at this point.
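Optionally, you can confirm the AP onboarding from the fabric edge with standard IOS commands (the interface numbers below are the AP-connected ports used in this lab):
Edge1-9300#show run interface GigabitEthernet1/0/11
Edge1-9300#show cdp neighbors GigabitEthernet1/0/11
The interface configuration should reflect the AP pool settings pushed during host onboarding, and CDP should list the connected AP. On software releases that support it, show access-tunnel summary also lists the VXLAN access tunnels built toward registered fabric APs.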
Execute sh ip int bri | exc unas and verify that the edge devices have SVIs (and the associated VLANs) created for all the VNs configured in Cisco DNA Center.
Edge1-9300#sh ip int bri | exc unas
Interface IP-Address OK? Method Status Protocol
Vlan5 172.16.20.1 YES manual up
Vlan1021 172.16.13.1 YES manual up down
Vlan1022 172.16.11.1 YES manual up down
Vlan1023 172.16.12.1 YES manual up down
Vlan1024 172.16.14.1 YES manual up down
GigabitEthernet1/0/23 192.168.1.110 YES manual up
GigabitEthernet1/0/24 192.168.1.102 YES manual up
LISP0.4097 192.168.200.5 YES unset up
Loopback0 192.168.200.5 YES manual up
Tunnel0 192.168.200.5 YES unset up
Execute show run int vlan 1021.
interface Vlan1021
description Configured from apic-em
mac-address 0000.0c9f.f45c
vrf forwarding Guest_VN
ip address 172.16.13.1 255.255.255.0
ip helper-address 10.172.3.220
no ip redirects
ip route-cache same-interface
no lisp mobility liveness test
lisp mobility 172_16_13_0-Guest_VN
end
Execute show lisp session on the fabric devices (edges, borders) to ensure that the LISP neighbors are up.
Edge1-9300#sh lisp session
Sessions for VRF default, total: 2, established: 2
Peer State Up/Down In/Out Users
192.168.200.3:4342 Up 00:12:25 129/80 9
192.168.200.4:4342 Up 00:12:24 103/69 9
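On the control plane nodes (Border1 and Border2 in this lab), you can also confirm that the fabric edges are registering endpoint prefixes (a standard LISP map-server verification command; the registered prefixes will vary per pod):
Border1-9500#show lisp site
Each fabric IP pool subnet, and any hosts already detected, should appear along with the edge node that last registered them.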
Configure the fusion router manually
Cisco DNA Center automates the border handoff configuration on the external interface of the border node. This is the interface selected when Border is chosen as the fabric role with IP Transit. To provide reachability between a nonfabric network (where shared services such as DHCP, DNS, and ISE reside) and the SD-Access fabric network, we need to configure fusion routers to route-leak between the different networks.
For this lab, we will be manually configuring the fusion routers, allowing hosts to get onboarded in the VN to communicate with a shared services server outside the fabric domain. The following steps will accomplish this:
1. Create the Layer 3 connectivity between the border and the fusion router.
2. Use BGP to extend the VRFs from the border to the fusion router (ISR).
3. Use VRF leaking to share the routes between the VRF and GRT on the fusion router.
4. Share the VNs between two border nodes by forming iBGP neighbors.
There are multiple ways to do VRF leaking. In this lab, we will be sharing VRF Shared_Services subnets on the fusion router to the VRF VN segments created on the fabric border. This makes leaking between the Shared_Services VRF and the other VRFs easier. However, customers can also choose not to extend the VRFs to the fusion router and put everything into the GRT. Customers can also use a fusion firewall to do the leaking. The type of leaking, the routing protocols, the platforms used, etc. should be based on the customer’s requirements and environment.
Once the fabric VNs and shared services VN are leaked at the fusion router, the end host connected to the fabric edge will route through the SD-Access fabric to the border and then leave the fabric to traverse the fusion router. The fusion router will leak the traffic appropriately, and vice versa. For the return path, the fusion router will send it back to the border, where it will be reencapsulated into the SD-Access fabric and routed to the destination end host.
Create Layer 3 connectivity between the border and the fusion router
1. The first task is to configure IP connectivity from each border node to each fusion router for every VN that requires external connectivity.
In this lab, shared services (DHCP, DNS, ISE, WLC, and Cisco DNA Center) are part of the Shared_Services VRF.
Cisco DNA Center automatically configures the border for eBGP handoff (toward the fusion router). View that configuration and then configure the fusion router with the corresponding IP addresses in the point-to-point subnet. Use the Layer 3 subinterface and dot1q tag on the fusion router that matches the VLAN on the border node.
The following output from the border node shows the exact VRF definitions that Cisco DNA Center pushed to the fabric border nodes; configure the same VRF definitions on the fusion router(s).
Note: The configuration might differ for each pod. Be sure to configure based on your testbed.
Border1-9500#sh run | sec vrf def
vrf definition Campus_VN
rd 1:4099
!
address-family ipv4
route-target export 1:4099
route-target import 1:4099
exit-address-family
vrf definition Guest_VN
rd 1:4100
!
address-family ipv4
route-target export 1:4100
route-target import 1:4100
exit-address-family
vrf definition IoT_VN
rd 1:4101
!
address-family ipv4
route-target export 1:4101
route-target import 1:4101
exit-address-family
2. After first defining the VRFs on the fusion router as described above, create the Layer 3 subinterfaces.
Note: If the border platform is a Cisco Catalyst 3000 or 6000 Series or 9000 family switch, Cisco DNA Center automatically configures VLANs and trunks for the border interface connected to the fusion router. If the border is a routing platform, such as an ASR or ISR, Cisco DNA Center configures subinterfaces for the handoff.
Navigate to Provision > Fabric > Fabric Site.
Select Border1 > View Device Info and drill down into the interface information.
This section in the popup window will provide information that was automated on the border.
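If you prefer the CLI, the same handoff details can be read directly from the border (illustrative commands; the VLAN ID shown corresponds to the Campus_VN handoff in this lab and will differ per pod):
Border1-9500#show run interface TenGigabitEthernet1/0/1
Border1-9500#show run interface Vlan3001
The trunk configuration on the physical uplink and the per-VN SVI (VLAN, VRF, and IP address) should match the details shown in the View Device Info popup.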
Matching the configuration (VLAN and IP address), configure the fusion router(s) for each border node.
INFRA_VN on the border nodes is mapped to Shared_Services on the fusion router.
Note: The configuration might differ for each pod. Be sure to configure based on your testbed.
Note: Before configuring the following sections, be sure to “wr mem” your configurations on the fusion routers so that you can easily back out with a “reload” if there are any mistakes.
Fusion1:
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Campus_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Shared_Services
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding IoT_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Guest_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Shared_Services
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Guest_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Campus_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding IoT_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
Fusion2:
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding IoT_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Shared_Services
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Campus_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/2.30xx
description vrf interface to Border1-9500
vrf forwarding Guest_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Shared_Services
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Campus_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding Guest_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
interface gig 0/0/3.30xx
description vrf interface to Border2-9500
vrf forwarding IoT_VN
encapsulation dot1Q 30xx
ip address 172.16.15.xx 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
3. Verify IP connectivity between the fusion router and the border for each VRF.
Example:
Fusion1-ISR4451#sh run int gig 0/0/2.3001
Building configuration...
Current configuration : 195 bytes
!
interface GigabitEthernet0/0/2.3001
description vrf interface to Border1-9500
encapsulation dot1Q 3001
vrf forwarding Campus_VN
ip address 172.16.15.2 255.255.255.252
no ip redirects
end
Fusion1-ISR4451#ping vrf Campus_VN 172.16.15.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.15.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Extend virtual networks from the border to the fusion router
Cisco DNA Center has fully automated the border eBGP handoff configuration. Configure the fusion router to extend the fabric VNs.
1. On the fusion router, create a BGP router instance using the AS number defined earlier (65000). Define the address-family VRF for each VN that was automated on the border.
Note: The configuration might differ for each pod. Be sure to configure based on your testbed.
Note: Before configuring the following sections, be sure to “wr mem” your configurations on the fusion routers so that you can easily back out with a “reload” if there are any mistakes.
Fusion 1:
router bgp 65000
!
address-family ipv4 vrf Shared_Services
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf Campus_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.0 mask 255.255.255.252
network 172.16.15.40 mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf Guest_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf IoT_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
Fusion 2:
router bgp 65000
!
address-family ipv4 vrf Shared_Services
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf Campus_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf Guest_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
!
address-family ipv4 vrf IoT_VN
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/2.30xx
neighbor 172.16.15.xx activate
neighbor 172.16.15.xx remote-as 65001
neighbor 172.16.15.xx update-source gig0/0/3.30xx
neighbor 172.16.15.xx activate
network 172.16.15.xx mask 255.255.255.252
network 172.16.15.xx mask 255.255.255.252
maximum-paths 2
exit-address-family
2. Verify that BGP neighbors are established between the border and fusion router for all defined VNs.
Fusion2-ISR4451#sh ip bgp vpnv4 all summary
BGP router identifier 192.168.200.2, local AS number 65000
BGP table version is 114, main routing table version 114
30 network entries using 7680 bytes of memory
56 path entries using 7616 bytes of memory
13 multipath network entries and 26 multipath paths
20/11 BGP path/bestpath attribute entries using 5920 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
4 BGP extended community entries using 96 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 21336 total bytes of memory
BGP activity 37/7 prefixes, 78/22 paths, scan interval 60 secs
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
172.16.15.17 4 65001 16 12 114 0 0 00:07:02 1
172.16.15.21 4 65001 23 25 114 0 0 00:07:08 8
172.16.15.25 4 65001 16 12 114 0 0 00:07:04 1
172.16.15.29 4 65001 16 12 114 0 0 00:06:59 1
172.16.15.49 4 65001 20 19 114 0 0 00:05:55 8
172.16.15.53 4 65001 16 15 114 0 0 00:07:03 1
172.16.15.57 4 65001 16 15 114 0 0 00:06:59 1
172.16.15.61 4 65001 16 15 114 0 0 00:06:57 1
192.168.1.10 4 65001 534 537 114 0 0 07:44:34 8
192.168.1.14 4 65001 531 542 114 0 0 07:41:36 8
192.168.101.5 4 65000 523 561 114 0 0 07:48:51 6
Use VRF leaking to share routes on the fusion router and distribute them to the border
Route maps are used to select the routes to leak to VNs. Import and export of these route maps enables the VRF route leaking required on the fusion router.
Note: The configuration might differ for each pod. Be sure to configure based on your testbed.
Note: Before configuring the following sections, be sure to “wr mem” your configurations on the fusion routers so that you can back out with a “reload” if there are any mistakes.
Fusion 1 and Fusion 2:
!
ip prefix-list SHARED_SERVICES_NETS seq 5 permit 10.172.3.0/24
!
! $$ SHARED_SERVICES_NETS - 10.172.3.0/24 contains ISE, DHCP, DNS in this subnet $$
!
route-map SHARED_SERVICES_NETS permit 10
match ip address prefix-list SHARED_SERVICES_NETS
!
vrf definition Campus_VN
rd 1:4099
!
address-family ipv4
import map SHARED_SERVICES_NETS
route-target import 100:100
exit-address-family
!
!
vrf definition Guest_VN
rd 1:4100
!
address-family ipv4
import map SHARED_SERVICES_NETS
route-target import 100:100
exit-address-family
vrf definition IoT_VN
rd 1:4101
!
address-family ipv4
import map SHARED_SERVICES_NETS
route-target import 100:100
exit-address-family
vrf definition Shared_Services
rd 100:100
!
address-family ipv4
route-target import 1:4099
route-target import 1:4100
route-target import 1:4101
exit-address-family
On the border, verify that the shared services subnet is now leaked into the Campus_VN routing table:
Border1-9500#sh ip route vrf Campus_VN
Routing Table: Campus_VN
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
E1 - OSPF external type 1, E2 - OSPF external type 2
i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
ia - IS-IS inter area, * - candidate default, U - per-user static route
o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
a - application route
+ - replicated route, % - next hop override, p - overrides from PfR
Gateway of last resort is not set
10.0.0.0/24 is subnetted, 1 subnets
B 10.172.3.0 [20/0] via 172.16.15.2, 00:11:01
172.16.0.0/16 is variably subnetted, 10 subnets, 3 masks
B 172.16.11.0/24 [200/0], 00:10:44, Null0
C 172.16.11.1/32 is directly connected, Loopback1022
C 172.16.15.0/30 is directly connected, Vlan3001
L 172.16.15.1/32 is directly connected, Vlan3001
C 172.16.15.24/30 is directly connected, Vlan3007
L 172.16.15.25/32 is directly connected, Vlan3007
B 172.16.15.40/30 [20/0] via 172.16.15.2, 00:11:01
B 172.16.15.52/30 [20/0] via 172.16.15.26, 00:11:01
C 172.16.16.4/30 is directly connected, Vlan102
L 172.16.16.5/32 is directly connected, Vlan102
Border1-9500#
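Route leaking can also be verified in the opposite direction. On the fusion router, the fabric VN subnets should now appear in the Shared_Services VRF (illustrative commands; exact prefixes depend on your pod addressing):
Fusion1-ISR4451#show ip route vrf Shared_Services
Fusion1-ISR4451#show ip route vrf Campus_VN 10.172.3.0
The first command should show BGP routes for the Campus, IoT, and Guest IP pools learned from the borders; the second confirms that the shared services subnet has been imported into Campus_VN.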
Share virtual networks between border nodes for traffic resiliency
At this time, Cisco DNA Center does not automate overlay routing connectivity between multiple borders. We will manually deploy a resilient direct link between the border nodes so that traffic can be rerouted if connectivity between a border and a fusion device fails. This is not required in a customer environment, but it is preferred for routing redundancy. Create an iBGP neighbor relationship between the border nodes for every configured VN.
Border 1:
!
vlan 101
name iBGP_Infra_VN
exit
!
vlan 102
name iBGP_Campus_VN
exit
!
vlan 103
name iBGP_IoT_VN
exit
!
vlan 104
name iBGP_Guest_VN
exit
!
int vlan 101
description vrf interface to Border2-9500
ip address 172.16.16.1 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 102
description vrf interface to Border2-9500
vrf forwarding Campus_VN
ip address 172.16.16.5 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 103
description vrf interface to Border2-9500
vrf forwarding IoT_VN
ip address 172.16.16.9 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 104
description vrf interface to Border2-9500
vrf forwarding Guest_VN
ip address 172.16.16.13 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
router bgp 65001
neighbor 172.16.16.2 remote-as 65001
neighbor 172.16.16.2 update-source Vlan101
!
address-family ipv4
neighbor 172.16.16.2 activate
neighbor 172.16.16.2 weight 65535
neighbor 172.16.16.2 advertisement-interval 0
exit-address-family
!
address-family ipv4 vrf Campus_VN
neighbor 172.16.16.6 remote-as 65001
neighbor 172.16.16.6 update-source Vlan102
neighbor 172.16.16.6 activate
exit-address-family
!
address-family ipv4 vrf Guest_VN
neighbor 172.16.16.14 remote-as 65001
neighbor 172.16.16.14 update-source Vlan104
neighbor 172.16.16.14 activate
exit-address-family
!
address-family ipv4 vrf IoT_VN
neighbor 172.16.16.10 remote-as 65001
neighbor 172.16.16.10 update-source Vlan103
neighbor 172.16.16.10 activate
exit-address-family
!
Border 2:
!
vlan 101
name iBGP_Infra_VN
exit
!
vlan 102
name iBGP_Campus_VN
exit
!
vlan 103
name iBGP_IoT_VN
exit
!
vlan 104
name iBGP_Guest_VN
exit
!
int vlan 101
description vrf interface to Border1-9500
ip address 172.16.16.2 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 102
description vrf interface to Border1-9500
vrf forwarding Campus_VN
ip address 172.16.16.6 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 103
description vrf interface to Border1-9500
vrf forwarding IoT_VN
ip address 172.16.16.10 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
int vlan 104
description vrf interface to Border1-9500
vrf forwarding Guest_VN
ip address 172.16.16.14 255.255.255.252
no ip redirects
ip route-cache same-interface
no shut
exit
!
router bgp 65001
neighbor 172.16.16.1 remote-as 65001
neighbor 172.16.16.1 update-source Vlan101
!
address-family ipv4
neighbor 172.16.16.1 activate
exit-address-family
!
address-family ipv4 vrf Campus_VN
neighbor 172.16.16.5 remote-as 65001
neighbor 172.16.16.5 update-source Vlan102
neighbor 172.16.16.5 activate
exit-address-family
!
address-family ipv4 vrf Guest_VN
neighbor 172.16.16.13 remote-as 65001
neighbor 172.16.16.13 update-source Vlan104
neighbor 172.16.16.13 activate
exit-address-family
!
address-family ipv4 vrf IoT_VN
neighbor 172.16.16.9 remote-as 65001
neighbor 172.16.16.9 update-source Vlan103
neighbor 172.16.16.9 activate
exit-address-family
Verify that the iBGP sessions between the border nodes are established for each VN:
Border1-9500#sh ip bgp vpnv4 all summary
BGP router identifier 192.168.200.3, local AS number 65001
BGP table version is 25, main routing table version 25
21 network entries using 5376 bytes of memory
42 path entries using 5712 bytes of memory
22/12 BGP path/bestpath attribute entries using 6512 bytes of memory
1 BGP AS-PATH entries using 24 bytes of memory
3 BGP extended community entries using 72 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 17696 total bytes of memory
BGP activity 189/129 prefixes, 558/402 paths, scan interval 60 secs
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
172.16.16.6 4 65001 9 10 25 0 0 00:02:47 6
172.16.16.10 4 65001 10 10 25 0 0 00:02:47 6
172.16.16.14 4 65001 9 9 25 0 0 00:02:41 6
Once the WLC has been provisioned, the APs’ host onboarding is complete, the AP-connected fabric edge ports are assigned No Authentication, and the fusion routing between INFRA_VN and the shared services VRF is in place, the APs reboot and join the controller. They will now appear in the Device Inventory.
Return to Provision > Device Inventory. Check the check box next to the APs, and then from the Actions drop-down, click Provision.
Assign the APs to Global/SJC/Building-22/Floor-1, and click Next. Once this is completed, the APs will appear in the Floor-1 hierarchy, and the associated SSIDs for that site will be broadcasted through these APs.
On the Configuration screen, from the RF Profile drop-down list, choose Typical.
Click Next. On the Summary screen, click Deploy to deploy this configuration to the APs. Click OK. The Provision Status of the APs will change to Success.
Micro-segmentation within a VN and policy between scalable groups
Segmentation using SGTs provides an additional layer of granularity within a VN. The group membership in ISE is defined based on business roles and requirements. ISE can push an SGT and the corresponding dynamic VLAN to an endpoint through an authorization profile based on user credentials, device type, and profiling or posture state.
The rules that are built in this section using the Cisco DNA Center Policy application are implemented as downloadable SGACLs in ISE. The SGACLs are then provided to the edge switches to be enforced. These SGACLs are applied at the destination edge port, providing a consistent segmentation policy at scale, because they are downloaded dynamically only to the switches that have interested clients.
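Once clients are authorized later in this lab, the downloaded policies can be viewed on an edge switch with the standard Cisco TrustSec verification commands (shown here as a preview; no SGACLs are present until a matching policy and an authorized client exist):
Edge1-9300#show cts role-based permissions
Edge1-9300#show cts rbacl
The first command lists the SGT-to-SGT policies downloaded from ISE; the second shows the contents (ACEs) of each downloaded SGACL.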
Create and apply a “deny all” rule between scalable groups
Once a user has been assigned a scalable group (either statically or through an ISE authorization rule), relevant SGACLs are downloaded to the edge device. These rules are created, deleted, or updated dynamically via ISE; there is no need to schedule rollout of the updates to the network devices.
In the Cisco DNA Center UI, click Policy to begin building rules.
Navigate to Policy > Group-Based Access Control > Group-Based Access Control Policies.
Click the blue plus icon next to Add Policy to bring up the Create Policy window.
Create a new policy named Acct-HR. From the available scalable groups, drag and drop Acct to the Source Scalable Groups area and HR to the Destination Scalable Groups area on the right. By default, Cisco DNA Center provides two Layer 3 contracts, deny and permit. As mentioned previously, by default all SGTs within a VN have access to each other. You can assign the deny contract to deny all traffic or specify Layer 4 traffic between the groups. You can also configure an implicit deny and then permit only Layer 3 or Layer 4 traffic between the groups within a VN. Click Add Contract and choose deny as the access contract. Click OK.
By default, access control policies are unidirectional. Check the Enable Bi-directional check box to create a policy for bidirectional traffic. The bidirectional option will create two policies that will enforce the selected contract from the defined source scalable groups to the destination scalable groups and vice versa. Click Save.
A warning message will pop up indicating that the policy will be enforced immediately. In this lab, we have not yet defined the ISE authentication and authorization policies, so there will be no interruption. If policies are created while the associated ISE authentication and authorization policies are already in place within a running fabric, they are immediately sent to ISE and enforced as soon as users are authorized and matching traffic flows. At the warning message, click Yes.
The created policy will appear as shown below.
Similarly, create another policy named Guest-Guest denying communication between guest users. Select Guest as the source and destination scalable groups and bind them using a deny access contract as shown below. Since both source and destination scalable groups are the same, there is no need to check the “Enable Bi-directional” box.
The created group-based rules will appear as shown below.
Create a custom contract (optional)
This section explains how to create a Layer 4 group-based access control policy that permits certain traffic types. The policy is created using the custom application tool, which allows the definition of select ports and protocols when writing rules. The following example shows a rule to permit only HTTP and FTP traffic. This is done as a Layer 4 contract.
Navigate to Policy > Group-Based Access Control > Access Contract.
Click the blue plus icon next to Add Contract. In the Name field, enter web_ftp_only. Under Implicit Action, choose Deny.
Click Add to define a permit statement for http (TCP 80) traffic. Change the Action to Permit.
Add a second Permit statement that uses a custom port and protocol by clicking the blue Add Port/Protocol link.
In the Add Port/Protocol window, name the rule ftp_only, and select TCP as the protocol and 20,21 under Port Range. Check Ignore Conflict and click Save.
Back in the Custom Contract window, click Add and choose the newly created port/protocol rule ftp_only. It should resemble the following screen shot.
Click Save.
Under Access Contract, the new contract should match the following screen shot.
The following screen shot shows how to create a policy with a custom contract between the Acct source scalable group and the HR destination scalable group.
Navigate to Policy > Group-Based Access Control > Group-Based Access Control Policies. Select the policy Acct-HR.
Click Edit and the following screen will pop up.
Select the custom contract web_ftp_only created earlier, then click OK.
It should look like the following screen shot. Click Save.
Upon completion of the exercise, revert the policy changes for the Acct-HR policy. Edit the rule and use a Deny contract between the Acct and HR scalable groups.
Configure dynamic authentication with ISE
The ISE authorization policy is defined by configuring rules based on identity groups or other conditions and attributes. In this guide, the authorization policy is used to authenticate users to the SD-Access network and authorize them to use network resources.
Create user identity groups for the Campus
In this lab, we will be using ISE internal users for network authentication. Most companies will use Lightweight Directory Access Protocol (LDAP)/AD, one-time passwords, etc. for user authentication. The current ISE identity sources are all supported with SD-Access.
In ISE, from the main menu, choose Administration > Identity Management > Groups.
On the left side, click User Identity Groups. Click Add above the table on the right side to create a group.
Define HR as the user identity group name. (The description is optional.) Click Submit. A new HR user group will be created.
Repeat the same steps to create another user identity group called Acct.
Create an identity for each campus user
Since this lab is using internal user identity groups, we now need to define users and add them to the user identity groups we just created. In ISE, from the lower menu bar, choose Administration > Identity Management > Identities.
Click + Add to add a user to ISE.
Create a user HR1 with password Dnac123! and assign that user to the recently created group HR. Click Submit.
Follow the same steps to create another user Acct1 with password Dnac123! and assign it to the group Acct. Click Submit when finished.
Define 802.1X authorization profile and policies for campus users
The authorization policy applies the virtual network and SGT assignment based on the authorization conditions/attributes. For this lab, we will define an authorization policy that looks at the user identity group that the user belongs to (such as Acct or HR) and assigns the virtual network and SGT accordingly. The authentication policy is evaluated before the authorization policy and verifies (in this lab) that the user authenticates via an 802.1X connection. If the user’s password is correct, authentication succeeds and the user is assigned the correct SGT and VLAN.
This section explains how to create a rule for each user identity group, Acct and HR.
In ISE, from the lower menu bar, choose Policy > Policy Sets.
Use the arrow on the far right to expand the default policy set.
You will see the following screen:
For this lab, we will leave the default authentication policies as is. The default authentication policies have already defined the use of the local identity store.
The policies in ISE are enforced when the conditions are matched. The authenticated user’s and endpoint’s attributes will be matched against the beginning of the policies (rules) and continue down the list. Once a condition or attribute is matched, the associated authorization (result profile) and SGT is applied to the user’s and endpoint’s RADIUS session.
Click the arrow on the last row to expand Authorization Policy.
Locate the topmost rule, click the gear icon to the far right, and choose Insert New Rule Above.
Note: Administrators can also copy and edit an existing rule, if preferred. Doing so is helpful when adding multiple similar rules.
Name this authorization rule 802.1x-Acct. Under Conditions, click the plus icon, which opens the Conditions Studio.
In the left panel, under Library, select the group icon. In the Editor to the right, click in the Click to add an attribute window. Click the identity group icon. Choose IdentityGroup Name as shown below.
To the right of the Equals drop-down list, choose User Identity Groups:Acct as shown below.
Click the Use button at the bottom of the screen.
When complete, the new authorization rule appears as an entry in the default policy set.
After the conditions are defined, you must select the authorization results of a match. For this policy, under Results Profiles, click + and choose Create a New Authorization Profile. The authorization profile is where VLAN association can be created for later use by the 802.1X authorization policy. The VLAN used must be the one that Cisco DNA Center generates automatically when provisioning the IP pools. You can get the VLAN name from Cisco DNA Center by navigating to Provision > Fabric > Host Onboarding, and clicking the VN. Each IP pool under the VN will have a VLAN associated with it. Alternatively, the VLAN names can be obtained by logging in to the edge switches and executing the “show vlan brief” command.
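For example, a quick way to confirm the exact generated name on an edge switch (an optional, illustrative check; the VLAN ID will differ per deployment):
Edge1-9300#show vlan brief | include Campus_VN
The matching line shows the auto-generated VLAN name, such as 172_16_11_0-Campus_VN, together with its VLAN ID.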
Name the authorization profile Acct_result. From the Access Type drop-down list, choose ACCESS_ACCEPT. Enter an optional description. In the Common Tasks area, choose VLAN and enter 172_16_11_0-Campus_VN, which is mapped to IP pool 172.16.11.0/24.
Note: By default, Cisco DNA Center generates formatted VLAN names when assigning an IP pool to a VN. The format is [<IP_Pool_Subnet>-<Virtual_Network_Name>], where the subnet octets are separated by underscores, not decimals.
Click Save. This information will be pushed to the fabric edge device upon successful authentication and successful matching of the authorization condition (in this case being a part of the Acct user identity group).
You are returned to the policy set screen. Choose the newly created Acct_result authorization profile.
For the policy, under Security Groups, choose the security group Acct from the drop-down.
Scroll down and click Save.
We will repeat the steps to create rules for the HR group.
Locate the rule 802.1x-Acct, click the gear icon to the far right, and choose Duplicate Above.
Change the rule name 802.1x-Acct_copy to 802.1x-HR.
Under Conditions, click the plus icon, which opens the Conditions Studio.
Since this rule set was copied, we merely need to change the identity group to User Identity Groups: HR as shown.
Click Save.
After the conditions are defined, you must select the results of a match. For this policy, under Results Profiles, click the “X” next to Acct_result to remove it, then click on the + symbol and choose Create a New Authorization Profile.
Name the authorization profile HR_result. From the Access Type drop-down list, choose ACCESS_ACCEPT. Enter an optional description. In the Common Tasks area, choose VLAN and enter 172_16_11_0-Campus_VN, which is mapped to IP pool 172.16.11.0/24.
Click Save.
You are returned to the policy set screen. Choose the newly created HR_result authorization profile.
For the Security Groups, choose the security group HR from the drop-down.
You should now be able to view the two authorization policies for HR and Acct.
ISE policies for wireless guest access
This section describes the policies that Cisco DNA Center automatically configured in ISE to support the Guest VN’s wireless captive portal.
In ISE, navigate to Policy > Policy Elements > Results > Authorization > Authorization Profiles.
Under Standard Authorization Profiles, click the GuestPortal_Profile that Cisco DNA Center automatically created earlier during the Guest Wireless SSID workflow. Click Edit to view the profile’s details.
You will see that Central Web Authentication (CWA) and the URL redirect to the guest portal have been configured.
Scroll down to Common Tasks and check the Static IP/Host name/FQDN check box. In the text box, enter the ISE IP address as the static IP address. Click Save.
Note: In a production environment, this change is not required. This change is required in a lab network when no DNS reverse lookup entry for ISE is available. For reachability, clients must be offered the ISE IP address as opposed to its FQDN.
The guest authorization policies for the Central Web authentication flow are also automatically created for you. Browse to Policy > Policy Sets, and click Authorization Policy to view all of the rules.
Endpoint onboarding: Validation
Use the vSphere client on the remote desktop to access the clients:
IP address: 100.127.12.100
Username: instructor
Password: Dnac123!
We have four wired and two wireless clients.
Click the wireless client in the left sidebar. Click the Console tab on the right.
Connect to the WLAN associated with Ent-PODx that was created earlier.
Note: Connect only to the WLAN being broadcasted by the APs on your pod.
Client group tag classification
Connect two test workstations (wired or wireless) to the Campus VN. Connect one with Acct and the other with HR credentials.
Once connected, verify the live logs in ISE (Operations > RADIUS > Live Logs) to ensure that the authorization was successful and that the correct SGT was assigned to the client based on the username and password used.
Both wired and wireless clients in the Campus VN should receive an IP address within the same 172.16.11.0/24 subnet.
All users in the campus—both wired and wireless—have a consistent experience, as can be seen in the ISE live logs.
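The assignment can also be confirmed on the fabric edge itself (a standard IOS verification command; the interface shown is an assumed example and should be replaced with the access port your wired test client is connected to):
Edge1-9300#show authentication sessions interface GigabitEthernet1/0/13 details
The session details should list the authenticated username, the assigned VLAN (172_16_11_0-Campus_VN), and the SGT applied by ISE.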
Test intra-virtual network connectivity
From the two clients, start an ICMP ping test to the default gateway and to each other. The pings to the default gateway are successful, while the Acct-HR SGACL policy that we created earlier prevents the HR user from communicating with the Acct user despite sharing the same subnet.
Pings to the gateway are successful, but communication between Acct1 and HR1 is blocked.
The SGACL policy that is enforced to prevent communication between Acct1 and HR1 can be verified on the edge switch where Acct1 is connected (the Edge-1 node, as identified in the ISE live logs).
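A minimal way to check this on Edge-1 (generic Cisco TrustSec commands; the SGT values for Acct and HR depend on what ISE assigned in your pod):
Edge1-9300#show cts role-based sgt-map all
Edge1-9300#show cts role-based counters
The SGT map shows the IP-to-SGT bindings learned for the connected clients, and the denied-packet counters for the Acct/HR cell should increment while the pings are failing.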
Alternatively, the security group matrix can be seen on ISE at Work Centers > TrustSec > TrustSec Policy.
Return to the Group-Based Access Control Policies window in Cisco DNA Center. Delete the Acct-HR SGACL policy (and its reverse direction) while the ping tests are still running on the clients (“ping -t 172.16.11.11”). Connectivity between the HR client and the Acct client is restored shortly after the removal of the rule from Cisco DNA Center.
Connect to the Guest-POD# SSID.
Once connected, open a browser and try to connect to a website. For ease, you can try 1.1.1.1. Due to the ACL-WebRedirect in the guest authorization profile, the WLC will redirect you to the ISE’s guest portal.
Click on Or register for guest access to create a guest account. These settings (allowing a guest to register) can be changed in the ISE guest portal configuration.
Remember the username and password assigned to you after the registration process is complete. Below is an example:
Log in to the Guest WLAN after successful registration. You can verify that communication between VNs is not permitted by pinging a wired or wireless endpoint in the Campus_VN. This behavior is intrinsic to the macro-segmentation nature of VNs and VRFs. Also, due to the restrictive Guest-Guest SGACL policy that we defined previously, guest-to-guest communication is not allowed. You can verify that, once the Guest-Guest SGACL policy is removed (similar to how you removed the Acct-HR SGACL), guest-to-guest communication is allowed.
By completing this section, we have demonstrated the following:
● The relative ease of creating scalable group policies through Cisco DNA Center that are dynamically deployed to infrastructure devices in a scalable fashion.
● How to create authentication and authorization policies in ISE to provide secured access and assign scalable groups.
● A consistent security experience for users, whether they are connecting through wired or wireless.
● The powerful means of restricting access between users or devices, even if they’re on the same subnet, by employing scalable group access policies.
● Macro-segmentation of devices in different VNs and VRFs.
Multicast traffic forwarding is used by many applications in enterprise networks to distribute copies of data to multiple different network destinations simultaneously. Within an SD-Access fabric deployment, multicast traffic flows can be handled in one of two ways—overlay or underlay, depending on whether the underlay network supports multicast replication.
The SD-Access use case focuses on multicast replication in the overlay. In this case, the first SD-Access fabric node that receives the multicast traffic (also known as the head end) must replicate multiple unicast copies of the original multicast traffic to each of the remote fabric edge nodes where the multicast receivers are located. This approach is known as head-end multicast replication. It provides an efficient multicast distribution model for networks that do not support multicast in the underlay. Dual rendezvous points (RPs) are supported on border nodes within a fabric site. When a redundant RP is added to the network, the Multicast Source Discovery Protocol (MSDP) session is enabled between the RP nodes for redundancy, automated by Cisco DNA Center. MSDP allows RPs to share information about active sources.
This section explains how to enable multicast in the fabric for the Campus VN.
Configure and reserve an IP pool for multicast. This will be used to configure RP and MSDP peering between the border nodes for redundancy (configured in Use Case 1).
Navigate to Provision > Fabric > SJC > Building 22. Click the border node and choose Enable Rendezvous Point.
Choose the Campus_VN. Choose the reserved Multicast-RP-22 IP pool.
Select the Campus_VN and Enable multicast within this VN.
Navigate to Host Onboarding. Observe that, once multicast is enabled in the Campus_VN, the host pool on the Host Onboarding page shows an M badge.
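Once multicast is enabled, the resulting configuration can be spot-checked from the CLI (generic IOS multicast verification commands, not excerpts of the pushed configuration; outputs depend on the active sources and receivers in your pod):
Border1-9500#show ip pim vrf Campus_VN rp mapping
Border1-9500#show ip msdp vrf Campus_VN summary
Edge1-9300#show ip mroute vrf Campus_VN
The RP mapping should point to addresses from the Multicast-RP-22 pool, the MSDP summary should show the peering between the two border nodes, and the mroute table on the edge populates as receivers join groups.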