Software-Defined Access Management Infrastructure - Prescriptive Deployment Guide

Updated: August 23, 2019

Definition and Design: Software-Defined Access

Cisco® Software-Defined Access (SD-Access) is the evolution from traditional campus LAN designs to networks that directly implement the intent of an organization. SD-Access is enabled with an application package that runs as part of the Cisco DNA Center software for designing, provisioning, applying policy, and facilitating the creation of an intelligent campus wired and wireless network with assurance.

This guide is used to deploy the management infrastructure, including Cisco DNA Center, Cisco Identity Services Engine (ISE), and Cisco Wireless LAN Controllers (WLC), described in the companion Software-Defined Access Solution Design Guide. The deployment described in this guide is used in advance of deploying a Cisco Software-Defined Access fabric, as described in the companion Software Defined Access Fabric Deployment Guide.

If you did not download this guide from Cisco Community or Design Zone, check those sites for the latest version of this guide.

Find the companion Software-Defined Access Solution Design Guide, Software-Defined Access Fabric Provisioning Prescriptive Deployment Guide, Software-Defined Access for Distributed Campus Prescriptive Deployment Guide, related deployment guides, design guides, and white papers, at the following pages:

●     https://www.cisco.com/go/designzone

●     https://cs.co/en-cvds

Deployment: SD-Access infrastructure

How to read deployment commands

The guide uses the following conventions for commands that you enter at the command-line interface (CLI).

Commands to enter at a CLI prompt:

configure terminal

Commands that specify a value for a variable (variable is in bold italics):

ntp server 10.4.0.1

Commands with variables that you must define (definition is bracketed in bold and italics):

class-map [highest class name]

Commands at a CLI or script prompt (entered commands are in bold):

Router# enable

Long commands that line wrap on a printed page (underlined text is entered as one command):

police rate 1000 pps burst 10000
packets conform-action

The SD-Access management components are deployed into the topology described in the companion Software-Defined Access Solution Design Guide, shown in the topology diagram.

Figure 1.        Design topology

The Cisco SD-Access management infrastructure solution described uses a single Cisco DNA Center hardware appliance, installed initially as a single-node cluster and then expanded into a three-node cluster as an option. For this solution, the Cisco DNA Center software integrates with two Cisco ISE nodes configured for redundancy and dedicated to the Cisco SD-Access deployment, as detailed in the installation. To support Cisco SD-Access Wireless, the solution includes two Cisco WLCs for controller redundancy.

Before you begin, you must identify the following:

●     IP addressing and network connectivity for all controllers being deployed: Cisco DNA Center must have Internet access for system updates from the Cisco cloud catalog server.

●     A network-reachable Network Time Protocol (NTP) server, used during Cisco DNA Center installation to help ensure reliable digital certificate operation for securing connections.

●     Certificate server information, when self-signed digital certificates are not used.

Process: Installing Cisco DNA Center

The Cisco DNA Center appliance has 10-Gbps SFP+ modular LAN on motherboard (mLOM) interfaces and integrated copper interfaces available for network connectivity. Use the following table to assist with IP address assignment and connections. The validation starts with a single-node cluster that uses a virtual IP (VIP) configured on a single Cisco DNA Center appliance, easing future migration to a three-node cluster. The update from a single-node cluster to a three-node cluster is described. For provisioning and assurance communication efficiency, Cisco DNA Center should be installed in close network proximity to the greatest number of devices being managed.

Reserve an arbitrary private IP space with a netmask of at least 20 bits that is not used elsewhere in the network (example: 192.168.240.0/20). Divide the /20 address space into two /21 address spaces (examples: 192.168.240.0/21, 192.168.248.0/21) and use them in a later setup step for services communication among the processes running in a Cisco DNA Center instance. Both single-node cluster and three-node cluster configurations require the reserved IP address space.
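Before committing to the reserved space, you can sanity-check that the candidate /20 is not already routed anywhere in the network. A minimal check, assuming the example 192.168.240.0/20 range, run from a network device with a full view of internal routes:

show ip route 192.168.240.0 255.255.240.0 longer-prefixes

If no routes are returned, the space is likely unused; also confirm against your IP address management records before reserving it.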

The Cisco DNA Center appliance also must have Internet connectivity, either directly or via a web proxy, to obtain software updates from the Cisco cloud catalog server. Internet access requirements and optional proxy server setup requirements are detailed in the applicable version of the Cisco Digital Network Architecture Center Appliance Installation Guide.

Caution

The installation described assumes a new installation of Cisco DNA Center. If you already have Cisco DNA Center deployed and managing devices in your network, do not use the steps in this Installing Cisco DNA Center process. Instead, you must refer to the release notes on Cisco.com for the correct procedure for a successful upgrade to your desired release.

https://www.cisco.com/c/en/us/support/cloud-systems-management/dna-center/products-release-notes-list.html

The validated installation process uses a DN2-HW-APL-L appliance. If you are using an appliance with a different physical interface structure, such as the DN1-HW-APL appliance, the Maglev Configuration wizard steps for interface configuration display with different names and in a different order. Details for other appliances are also shown in the release notes.

The modular 10-Gbps ports on the original M4-based appliance are reversed left-to-right from the more recent M5-based appliance, and the on-board copper Ethernet ports are in different locations. The M4 appliance uses an 802.1q header tag with VLAN ID 0, requiring IOS-XE switches to use an interface configuration supporting the tagged frames (switchport voice vlan dot1p), whereas an M5-based appliance requires a basic interface access VLAN configuration for the Ethernet switch connection, as described in the associated installation guides.
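For illustration, the switch-side difference might look like the following IOS-XE interface sketches; the interface names and access VLAN are assumptions for this example, not part of the validated deployment:

! M4-based appliance (DN1-HW-APL): accept frames tagged with VLAN ID 0
interface TenGigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 149
 switchport voice vlan dot1p
!
! M5-based appliance (DN2-HW-APL): basic access VLAN configuration
interface TenGigabitEthernet1/0/2
 switchport mode access
 switchport access vlan 149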

Figure 2.        Rear view of the original Cisco DNA Center appliance — DN1-HW-APL (M4)

Figure 3.        Rear view of the Cisco DNA Center appliance — DN2-HW-APL (M5)

Table 1.         Cisco DNA Center server LAN Ethernet interface assignments

 

PORT 1 (mLOM, SFP+ 10 Gbps): Enterprise, enterprise network infrastructure
  Wizard name (DN2-HW-APL, DN2-HW-APL-L): enp94s0f0
  Wizard name (DN1-HW-APL): enp9s0
  Example cluster VIP address: 10.4.49.29 255.255.255.0
  Example interface address (node 1): 10.4.49.34 255.255.255.0
  Example interface address (node 2): 10.4.49.35 255.255.255.0
  Example interface address (node 3): 10.4.49.36 255.255.255.0

PORT 2 (mLOM, SFP+ 10 Gbps): Cluster, intra-cluster communications
  Wizard name (DN2-HW-APL, DN2-HW-APL-L): enp94s0f1
  Wizard name (DN1-HW-APL): enp10s0
  Example cluster VIP address: 192.168.127.1 255.255.255.248
  Example interface address (node 1): 192.168.127.2 255.255.255.248
  Example interface address (node 2): 192.168.127.3 255.255.255.248
  Example interface address (node 3): 192.168.127.4 255.255.255.248

Port 1 (integrated, RJ-45 1 Gbps): Management, dedicated management network for web access
  Wizard name (DN2-HW-APL, DN2-HW-APL-L): eno1
  Wizard name (DN1-HW-APL): enp1s0f0
  Example interface addresses: unused in this example

Port 2 (integrated, RJ-45 1 Gbps): Cloud, optional cloud network port for separated Internet connectivity
  Wizard name (DN2-HW-APL, DN2-HW-APL-L): eno2
  Wizard name (DN1-HW-APL): enp1s0f1
  Example interface addresses: unused in this example

Port M (or "gear" label, RJ-45 1 Gbps): CIMC, Cisco Integrated Management Controller out-of-band server appliance management
  Example interface address (node 1): 10.204.49.34 255.255.255.0
  Example interface address (node 2): 10.204.49.35 255.255.255.0
  Example interface address (node 3): 10.204.49.36 255.255.255.0
 

Tech tip

Connecting Cisco DNA Center to your network using a single network interface (enterprise network infrastructure, mLOM PORT1) simplifies the configuration by requiring only a default gateway and by avoiding the need to maintain a list of static routes for any additional interfaces connected. When you use additional interfaces (for example, to separate the managed enterprise network for infrastructure provisioning and management network for administrative access to Cisco DNA Center), subsequent network route changes may require that you reconfigure the appliance. To update static routes in Cisco DNA Center after the installation, follow the procedure to reconfigure the appliance in the Cisco Digital Network Architecture Center Appliance Installation Guide associated with your installed version.

Procedure 1.            Connect and configure the Cisco DNA Center hardware appliance

Step 1.         Connect the Cisco DNA Center hardware appliance to a Layer 2 access switch port in your network, by:

●   Using the 10-Gbps SFP+ port labeled PORT 1 on the mLOM card (named enp94s0f0 or enp9s0 in the wizard) for connectivity to the enterprise network.

●   Using the 10-Gbps SFP+ port labeled PORT 2 on the mLOM card (named enp94s0f1 or enp10s0 in the wizard) as the cluster link. This port must be up for single-node cluster configurations and must be part of a Layer 2 network shared by the cluster ports of all three nodes for a three-node cluster.

●   Using the Cisco Integrated Management Controller (IMC) port (labeled with a gear symbol or letter M on the integrated copper Ethernet ports).

For maximum physical network resiliency in a three-node cluster, each cluster node should connect to a unique top-of-rack switch, with each node interface placed into a separate Layer 2 domain (VLAN) on that switch. Enable communication between the nodes by using trunks to aggregate the Layer 2 domains from each switch—typical designs aggregate top-of-rack switches to redundant distribution switches for this purpose. This design enables at least two nodes of the three-node cluster to communicate during an outage of any single switch or link, meeting the minimum criteria for the cluster to survive those communication failures.
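As an illustration of this design, the cluster-facing access port and the uplink trunk on one top-of-rack switch might be configured as in the following IOS sketch; the VLAN ID and interface names are assumptions for this example:

vlan 127
 name DNAC-cluster
!
interface TenGigabitEthernet1/0/10
 description Cluster link to Cisco DNA Center node 1
 switchport mode access
 switchport access vlan 127
!
interface TenGigabitEthernet1/1/1
 description Trunk to distribution carrying the cluster VLAN
 switchport mode trunk
 switchport trunk allowed vlan 127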

Step 2.         Connect any other ports needed for the deployment, such as the dedicated web management port or an isolated enterprise network port. These ports are not used for the deployment described.

The following example steps are described in detail with all options within the Installation Guide for the appliance software version. Use the Installation Guide to configure Cisco IMC on the appliance during first boot, along with the credentials required for Cisco IMC access. The Installation Guide describes the complete set of options. The example procedure that follows configures a single appliance for a single-node cluster or the first appliance for a three-node cluster deployment, without configuring a network proxy.

Step 3.         Boot the Cisco DNA Center hardware appliance. A welcome message appears.

Welcome to the Maglev Configuration Wizard!

Step 4.        Press Enter to accept the default choice, Start a DNA-C Cluster.

Step 5.         Continue by accepting the wizard default choices, while supplying information for the following steps within the wizard (the wizard step numbers are in order but are not consecutive; different hardware appliances have different adapter names, which may appear in a different order):

●   In wizard STEP #4, selection for NETWORK ADAPTER #1 (eno1):

This interface can be used as a dedicated management interface for administrative web access to Cisco DNA Center. If you are using this option (which requires static route configuration), fill in the information; otherwise leave all selections blank, and then select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #2 (eno2):

This interface is available for use with a separate network (example: firewall DMZ) to the Internet cloud catalog server using a static route. Unless you require this connectivity, leave all selections blank, and select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #3 (enp94s0f0):

Use this interface for communications with your network infrastructure. Supply at least the Host IP Address, Netmask, Default Gateway IP Address, and DNS Servers. If you are not using the single interface with default gateway, supply Static Routes, and then select next >> to continue.

Host IP Address:

  10.4.49.34

Netmask:

  255.255.255.0

Default Gateway IP Address:

  10.4.49.1

DNS Servers:

  10.4.49.10

Static Routes:

  [blank for combined management/enterprise interface installation]

Cluster Link

  [blank]

Configure IPv6 address

  [blank]

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #4 (enp94s0f1):

This interface is used for clustering. Configure it even if you do not initially need clustering, to allow for future clustering capability. Fill in the information for the Host IP Address and Netmask (a /29 size network or larger covers a three-member cluster), use the spacebar to select Cluster Link, do not fill in any other fields, and then select next >> to continue.

Host IP Address:

  192.168.127.2

Netmask:

  255.255.255.248

Default Gateway IP Address:

  [blank]

DNS Servers:

  [blank]

Static Routes:

  [blank]

Cluster Link

  [use spacebar to select]

Configure IPv6 address

  [blank]

Tech tip

Confirm that the cluster link configuration is correct before proceeding. Changing the cluster link configuration after it is applied may require initiating a fresh configuration.

Selecting an interface as the cluster link ties the availability of the VIP addresses to the interface state of the cluster link interface on that node. For VIPs to be active and available using a single-node cluster configuration, the cluster link interface must have an SFP+ installed and link state must be up.

The wizard displays an informational message.

The wizard will need to shutdown the controller in order to validate…

Tech tip

The wizard validates the DNS Servers entry using ping. Do not restrict ICMP echo communication between the appliance and any configured DNS servers.
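If a firewall or ACL is in the path between the appliance and its DNS servers, permit ICMP echo in addition to DNS traffic. A hedged sketch using this guide's example addresses:

ip access-list extended DNAC-DNS
 permit udp host 10.4.49.34 host 10.4.49.10 eq domain
 permit icmp host 10.4.49.34 host 10.4.49.10 echo
 permit icmp host 10.4.49.10 host 10.4.49.34 echo-reply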

Step 6.        Select proceed >> to continue with the network validation. The installation validates gateway reachability.

Please wait while we validate and configure host networking...

Step 7.         If the wizard detects a network proxy server, then you are prompted to configure the proxy settings.

●   In wizard STEP #8, selection for NETWORK PROXY:

Update the settings as required and select next >> to continue.

Step 8.        After the wizard network validation completes, continue entering initial configuration values. For both single-node and three-node cluster installations, create a cluster configuration. Define VIPs for each of the interfaces in the cluster, with a minimum of two required, including the intracluster and management interfaces, and a maximum of four.

●   In wizard STEP #11, MAGLEV CLUSTER DETAILS:

Cluster Virtual IP address(s):

    10.4.49.29 192.168.127.1

Cluster’s hostname:

    [cluster fully-qualified domain name]

●   In wizard STEP #13, USER ACCOUNT SETTINGS:

Linux Password: *

  [linux password]

Re-enter Linux Password: *

  [linux password]

Password Generation Seed:

  [skip this entry]

Auto Generated Password:

  [skip this entry]

Administrator Passphrase: *

  [DNAC administrator password]

Re-enter Administrator Passphrase: *

  [DNAC administrator password]

Step 9.        In wizard STEP #14, NTP SERVER SETTINGS, you must supply at least one active NTP server, which is tested before the installation can proceed.

NTP Servers: *

  10.4.0.1 10.4.0.2

Step 10.     Select next >>. The installation validates connectivity to the NTP servers.

Validating NTP Server: 10.4.0.1 ...

Step 11.      In wizard STEP #16, MAGLEV ADVANCED SETTINGS, you assign unique IP networks that are not part of the enterprise network that are used by Cisco DNA Center to manage its own API services and cluster services. The minimum recommended size for each is a network with a 21-bit netmask to accommodate the large numbers of different services with unique IP addresses that communicate with one another.

Services Subnet: *

  192.168.240.0/21

Cluster Services Subnet: *

  192.168.248.0/21

Select next >>. The wizard displays an informational message.

The wizard is now ready to apply the configuration on the controller.

Step 12.      Disregard any additional warning messages about existing disk partitions. Select proceed >> to apply the configuration and complete the installation. You should not interact with the system until the installation is complete.

Many status messages scroll by during the installation. The platform boots the installed image and configures the base processes for the first time, which can take several hours. When installation and configuration are complete, a login message is displayed.

Welcome to the Maglev Appliance (tty1)

Step 13.      Log in with the maglev user from the Cisco IMC console, or connect using an SSH session to the host IP address assigned during the installation, destination port 2222.

maglev-master-1 login: maglev

Password: [linux password assigned during installation]
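For example, using the node 1 enterprise address from Table 1, an SSH session from an administration host might be:

ssh -p 2222 maglev@10.4.49.34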

Step 14.     Verify that processes are deployed.

$ maglev package status

For the validated version, all packages are DEPLOYED initially, except for any NOT_DEPLOYED packages listed, including the following, which vary depending on your installation version:

application-policy

sd-access

sensor-automation

You install other required components in later steps. Do not proceed until all packages are listed as DEPLOYED or NOT_DEPLOYED.

Procedure 2.            Connect to Cisco DNA Center and verify the version

Step 1.         Log in to the Cisco DNA Center web interface by directing a web browser to the Cluster Virtual IP address that you supplied in the previous procedure (example: https://10.4.49.29/). While processes are launched after installation, you may have to wait until the web server is available to serve your first request.

Step 2.         At the Username line, enter admin; at the Password line, enter the Cisco DNA Center administrator password that you assigned using the Maglev Configuration wizard, and then click Log In.


Step 3.         At the prompt to reset the password, choose a new password or skip to the next step.

Step 4.        At the welcome prompt, provide a Cisco.com ID and password. The ID is used to register software downloads and receive system communications.

If you skip this step, either because you do not have an ID or because you plan to add one later by using Settings (gear) > System Settings > Settings > Cisco Credentials, features such as SWIM, Telemetry, and Licensing are unable to function properly until the credentials are supplied. The credentials are also required for downloading software packages as described in the software migration and update procedures.

Step 5.         In the previous step, if you did not enter an ID with Smart Account access with privileges for managing Cisco software licenses for your organization, a Smart Account prompt displays. Enter a Cisco.com ID associated with a Smart Account or click Skip.

Step 6.        If you have an IPAM server (examples: Infoblox, Bluecat), enter the details at the IP Address Manager prompt and click Next. Otherwise, click Skip.

Step 7.         If you are using a proxy server, enter the details at the Enter Proxy Server prompt and click Next. Otherwise, click Skip.

Step 8.        At the Terms and Conditions display, click Next, and then at the Ready to go! display, click Go to System 360.


Step 9.        At the main Cisco DNA Center dashboard, click the help (life preserver) icon, and then click About.


Step 10.     Check the DNA Center version.


If you are using an original M4-based DN1-HW-APL appliance, verify that the version is at least 1.2.5. If your version is earlier than 1.2.5 and you’re creating a three-node cluster, or if your version is earlier than 1.1.6 and you’re creating a single-node cluster, contact support to reimage your Cisco DNA Center appliances to your final target version before continuing. Version 1.2.5 is the minimum software requirement to cluster nodes in advance of upgrading the entire cluster to version 1.2.8 or later from the cloud catalog server. Newer M5-based appliances are preinstalled with 1.2.8 or a more recent version.

Procedure 3.            Connect and configure the second and third add-on nodes to the cluster

Optional

If you are creating a three-node cluster configuration, complete this procedure.

Step 1.         Connect the second and third add-on Cisco DNA Center hardware appliance nodes to a Layer 2 access switch port in your network, by:

●   Using the 10-Gbps SFP+ port labeled PORT 1 on the mLOM card (named enp94s0f0 or enp9s0 in the wizard) for connectivity to the enterprise network.

●   Using the 10-Gbps SFP+ port labeled PORT 2 on the mLOM card (named enp94s0f1 or enp10s0 in the wizard) as the cluster link. This port must be up for single-node cluster configurations and must be part of a Layer 2 network shared by the cluster ports of all three nodes for a three-node cluster.

●   Using the Cisco Integrated Management Controller (IMC) port (labeled with a gear symbol or letter M on the integrated copper Ethernet ports).

The Cisco DNA Center nodes joining the cluster must boot from the same version of software as the first node.

Step 2.         Connect any other ports needed for the deployment, such as the dedicated web management port or an isolated enterprise network port. These ports are not used for the deployment described.

The following example steps are described in detail with all options in the Installation Guide for the appliance software version. Use the Installation Guide to configure Cisco IMC on the appliance during first boot, along with the credentials required for Cisco IMC access. The Installation Guide describes the complete set of options.

Step 3.         Boot the second Cisco DNA Center hardware appliance. A welcome message appears.

Welcome to the Maglev Configuration Wizard!

Step 4.        Select Join a DNA-C Cluster (do not accept the default choice), and then press Enter.

Tech tip

Do this step only on the second node, and do not attempt to configure the third node in parallel. The second node must be joined into the cluster completely before you start the steps of joining the third node into the cluster.

Step 5.         Continue by accepting the wizard default choices, while supplying information for the following steps within the wizard (the wizard step numbers are in order but are not consecutive; different hardware appliances have different adapter names, which may appear in a different order):

●   In wizard STEP #4, selection for NETWORK ADAPTER #1 (eno1):

This interface can be used as a dedicated management interface for administrative web access to Cisco DNA Center. If you are using this option (which requires static route configuration), fill in the information; otherwise leave all selections blank, and then select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #2 (eno2):

This interface is available for use with a separate network (example: firewall DMZ) to the Internet cloud catalog server using a static route. Unless you require this connectivity, leave all selections blank, and select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #3 (enp94s0f0):

Use this interface for communications with your network infrastructure. Supply at least the Host IP Address, Netmask, Default Gateway IP Address, and DNS Servers. If you are not using the single interface with default gateway, supply Static Routes, and then select next >> to continue.

Host IP Address:

  10.4.49.35

Netmask:

  255.255.255.0

Default Gateway IP Address:

  10.4.49.1

DNS Servers:

  10.4.49.10

Static Routes:

  [blank for combined management/enterprise interface installation]

Cluster Link

  [blank]

Configure IPv6 address

  [blank]

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #4 (enp94s0f1):

This interface is used for clustering. Configure it even if you do not initially need clustering, to allow for future clustering capability. Fill in the information for the Host IP Address and Netmask (a /29 size network or larger covers a three-member cluster), use the spacebar to select Cluster Link, do not fill in any other fields, and then select next >> to continue.

Host IP Address:

  192.168.127.3

Netmask:

  255.255.255.248

Default Gateway IP Address:

  [blank]

DNS Servers:

  [blank]

Static Routes:

  [blank]

Cluster Link

  [use spacebar to select]

Configure IPv6 address

  [blank]

The wizard displays an informational message.

The wizard will need to shutdown the controller in order to validate…

Step 6.        Select proceed >> to continue with the network validation. The installation validates gateway reachability.

Please wait while we validate and configure host networking...

Step 7.         If the wizard detects a network proxy server, then you are prompted to configure the proxy settings.

●   In wizard STEP #8, selection for NETWORK PROXY:

Update the settings as required and select next >> to continue.

Step 8.        After the wizard network validation completes, continue entering configuration values for the add-on node. When joining the cluster, the add-on node refers to the first (master) node by the IP address of its cluster link.

●   In wizard STEP #11, MAGLEV CLUSTER DETAILS:

Maglev Master Node: *

    192.168.127.2

Username: *

    maglev

Password: *

    [linux password assigned to first (master) node]

The wizard checks connectivity and uses the credentials to register to the master node.

Step 9.        Continue entering the add-on node settings.

●   In wizard STEP #13, USER ACCOUNT SETTINGS:

Linux Password: *

  [linux password]

Re-enter Linux Password: *

  [linux password]

Password Generation Seed:

  [skip this entry]

Auto Generated Password:

  [skip this entry]

Step 10.     In wizard STEP #14, NTP SERVER SETTINGS, you must supply at least one active NTP server, which is tested before the installation can proceed.

NTP Servers: *

  10.4.0.1 10.4.0.2

Step 11.      Select next >>.

The installation validates connectivity to the NTP servers.

Validating NTP Server: 10.4.0.1 ...

The wizard displays an informational message.

The wizard is now ready to apply the configuration on the controller.

Disregard any additional warning messages about existing disk partitions.

Step 12.      Select proceed >> to apply the configuration and complete the installation. You should not interact with the system until the installation is complete.

Many status messages scroll by during the installation. The platform boots the installed image and configures the base processes for the first time, which can take over an hour. When installation and configuration are complete, a login message is displayed.

Welcome to the Maglev Appliance (tty1)

Step 13.      Log in with the maglev user from the Cisco IMC console, or connect using an SSH session to the host IP address assigned during the installation, destination port 2222.

maglev-master-192 login: maglev

Password: [password assigned during installation]

Step 14.     Verify that the first two nodes are deployed.

$ kubectl get nodes

The installed nodes appear, and the status is updated from NotReady to Ready:

NAME            STATUS    AGE       VERSION

192.168.127.2   Ready     15h       v1.7.3

192.168.127.3   Ready     4m        v1.7.3

If the command returns an error instead of displaying the nodes, wait for the node process startup and communication establishment to complete and then try again. Do not proceed until the first two nodes in the cluster appear.
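To poll the status without retyping the command, you can use watch, assuming that utility is available in the appliance shell:

$ watch -n 30 kubectl get nodes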

Step 15.      Boot the third Cisco DNA Center hardware appliance. A welcome message appears.

Welcome to the Maglev Configuration Wizard!

Tech tip

Complete these steps on the third node only after the second node is verified as completely joined into the cluster.

Step 16.     Select Join a DNA-C Cluster (do not accept the default choice), and then press Enter.

Step 17.      Continue by accepting the wizard default choices, while supplying information for the following steps within the wizard (the wizard step numbers are in order but are not consecutive; different hardware appliances have different adapter names, which may appear in a different order):

●   In wizard STEP #4, selection for NETWORK ADAPTER #1 (eno1):

This interface can be used as a dedicated management interface for administrative web access to Cisco DNA Center. If you are using this option (which requires static route configuration), fill in the information; otherwise leave all selections blank, and then select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #2 (eno2):

This interface is available for use with a separate network (example: firewall DMZ) to the Internet cloud catalog server using a static route. Unless you require this connectivity, leave all selections blank, and select next >> to continue.

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #3 (enp94s0f0):

Use this interface for communications with your network infrastructure. Supply at least the Host IP Address, Netmask, Default Gateway IP Address, and DNS Servers. If you are not using the single interface with default gateway, supply Static Routes, and then select next >> to continue.

Host IP Address:

  10.4.49.36

Netmask:

  255.255.255.0

Default Gateway IP Address:

  10.4.49.1

DNS Servers:

  10.4.49.10

Static Routes:

  [blank for combined management/enterprise interface installation]

Cluster Link

  [blank]

Configure IPv6 address

  [blank]

●   In wizard STEP #4, selection for OPTIONAL - NETWORK ADAPTER #4 (enp94s0f1):

This interface is used for clustering. Configure it even if you do not initially need clustering, to allow for future clustering capability. Fill in the information for the Host IP Address and Netmask (a /29 size network or larger covers a three-member cluster), use the spacebar to select Cluster Link, do not fill in any other fields, and then select next >> to continue.

Host IP Address:

  192.168.127.4

Netmask:

  255.255.255.248

Default Gateway IP Address:

  [blank]

DNS Servers:

  [blank]

Static Routes:

  [blank]

Cluster Link

  [use spacebar to select]

Configure IPv6 address

  [blank]

The wizard displays an informational message.

The wizard will need to shutdown the controller in order to validate…

Step 18.     Select proceed >> to continue with the network validation. The installation validates gateway reachability.

Please wait while we validate and configure host networking...

Step 19.     If the wizard detects a network proxy server, then you are prompted to configure the proxy settings.

●   In wizard STEP #8, selection for NETWORK PROXY:

Update the settings as required and select next >> to continue.

Step 20.     After the wizard network validation completes, continue entering configuration values for the add-on node. When joining the cluster, the add-on node refers to the first (master) node by the IP address of its cluster link.

●   In wizard STEP #11, MAGLEV CLUSTER DETAILS:

Maglev Master Node: *

    192.168.127.2

Username: *

    maglev

Password: *

    [linux password assigned to first (master) node]

The wizard checks connectivity and uses the credentials to register to the master node.

Step 21.      Continue entering the add-on node settings.

●   In wizard STEP #13, USER ACCOUNT SETTINGS:

Linux Password: *

  [linux password]

Re-enter Linux Password: *

  [linux password]

Password Generation Seed:

  [skip this entry]

Auto Generated Password:

  [skip this entry]

Step 22.      In wizard STEP #14, NTP SERVER SETTINGS, you must supply at least one active NTP server, which is tested before the installation can proceed.

NTP Servers: *

  10.4.0.1 10.4.0.2

Step 23.      Select next >>.

The installation validates connectivity to the NTP servers.

Validating NTP Server: 10.4.0.1 ...

The wizard displays an informational message.

The wizard is now ready to apply the configuration on the controller.

Disregard any additional warning messages about existing disk partitions.

Step 24.     Select proceed >> to apply the configuration and complete the installation. You should not interact with the system until the installation is complete.

Many status messages scroll by during the installation. The platform boots the installed image and configures the base processes for the first time, which can take more than an hour. When installation and configuration are complete, a login message is displayed.

Welcome to the Maglev Appliance (tty1)

Step 25.      Log in with the maglev user from the Cisco IMC console, or connect using an SSH session to the host IP address assigned during the installation, destination port 2222.

maglev-master-1 login: maglev

Password: [password assigned during installation]

Step 26.     Verify that all three nodes are deployed.

$ kubectl get nodes

The installed nodes appear, and the status is updated from NotReady to Ready:

NAME            STATUS    AGE       VERSION

192.168.127.2   Ready     16h       v1.7.3

192.168.127.3   Ready     34m       v1.7.3

192.168.127.4   Ready     11m       v1.7.3

Step 27.      Log in to the Cisco DNA Center web interface by directing a web browser to the cluster VIP address (example: https://10.4.49.29/).

Step 28.     At the main Cisco DNA Center dashboard, click the settings (gear) icon, and then click System Settings.


Step 29.     Next to Hosts, click Enable Service Distribution (or, depending on the display for your version, toggle the HIGH AVAILABILITY switch on), and then at the warning message click Continue.


The button text changes to Enabling Service Distribution… and the services are distributed across the nodes in the cluster.

Wait until the services are distributed into the high availability configuration. For active systems, automation and assurance services are disrupted during the distribution process. This process can take approximately an hour. Use the browser refresh button to verify the configuration status, which shows DNA Center is in maintenance mode until the process completes.


Procedure 4.            Update the Cisco DNA Center software

Cisco DNA Center automatically connects to the Cisco cloud catalog server to find the latest updates. Update Cisco DNA Center to the required version using the Cisco cloud catalog server.

Tech tip

This procedure shows a Cisco DNA Center upgrade from release 1.2.8, and illustrations are installation examples. Software versions used for validation are listed in Appendix A: Product List. For upgrade requirements using other software versions, refer to the release notes on Cisco.com for the correct procedure for a successful upgrade to the target version from the installed version.

https://www.cisco.com/c/en/us/support/cloud-systems-management/dna-center/products-release-notes-list.html

The release notes include access requirements for connecting Cisco DNA Center to the Internet behind a firewall to download packages from the cloud catalog server.

Step 1.         At the main Cisco DNA Center dashboard, at the top right of the window, click the Software Updates (cloud) button, and then click Go to Software Updates.


The Settings > Software Updates > Updates screen appears. This screen is used to install updates and packages that add functionality to the controller, including SD-Access. For significant system-wide updates, an announcement is displayed at the top of the updates window.


Step 2.         Click the Switch Now button, and then acknowledge that the migration is irreversible by clicking OK.


Cisco DNA Center connects to the cloud catalog server.


After Cisco DNA Center finishes connecting to the cloud catalog server, use the Refresh button to manually update the screen to display the available system update package.

Step 3.         To the right of the available system update, click the Update button, click Continue, and then click Continue.

Caution

The System package within the System Updates section is the only package you download or update during the initial system update. After the installation of the system is complete, download and install the application package updates.

Do not switch to a new version of Cisco DNA Center until you have completely updated the system. Before switching, check the listing of permitted update paths in the Cisco Digital Network Architecture Center Upgrade Guide.


The system goes into maintenance mode, and a message appears stating that there is a system update in progress. The download and installation can take more than an hour. Use the Refresh button to check the status.

At the end of the installation, refresh the browser to view the web interface for the updated Cisco DNA Center.

Procedure 5.            Upgrade the Cisco DNA Center application packages

When the Cisco DNA Center platform is running the latest system update, upgrade the application packages to the versions associated with the updated system version.

Step 1.         Log in to the Cisco DNA Center web interface and navigate to the main dashboard.

Step 2.         In the top right of the Cisco DNA Center dashboard, click the Software Updates (cloud) button, and then click Go to Software Updates.


The system navigates to the Software Updates > Updates > System Update screen.

Step 3.         At the top right of the System Update screen, on the same row as Application Updates, click the upper Download All button. At the pop-up window, click Continue to confirm the update operation, and then, at the second System Readiness Check pop-up window, click Continue.


The browser interface updates, showing the package installation status. At the top of the screen, the cloud icon also offers status information to users navigating to any screen.


Before proceeding to the next step, refresh the screen until there are no longer any packages that are downloading. The download and installation can take over an hour to complete, including the associated package dependency download. If there are still package dependencies for updates, the Download All button is displayed again.

Step 4.        After the downloads complete, if any additional packages are listed for updates, repeat the previous two steps until the Download All button is replaced with an Update All button that is not grayed out.


Step 5.         After the new versions of the packages are downloaded, at the top right of the System Update screen, on the same row as Application Updates, click the upper Install All button. On the pop-up window, click Continue, and then, on the System Readiness Check pop-up window, click Continue. An informational message appears, and the installation begins.


The remaining package installations begin. The browser refreshes automatically, showing the updated status for each package. The installation process can take over an hour to complete.

Tech tip

Packages must be updated in a specific order to appropriately address package interdependencies. Allow Cisco DNA Center to handle dependencies by selecting and updating all package updates at once. The Installation Guide for the installed version explains how to use the Maglev CLI to force a download retry for any stalled download.

While the packages are installing, you can work in parallel on the next process for installing the Identity Services Engine nodes.

All application package updates are installed when the Software Updates > Updates screen no longer shows any available packages listed under App Updates and the cloud icon in the top right of the screen displays a green check mark.


Continue to the next step after all packages are installed.

Step 6.        In the top right of the main Cisco DNA Center dashboard, click the help (life preserver) icon, click About, and then click Show Packages. This view is useful for comparing to the release notes, which are available by clicking Release Notes.


Step 7.         At the main Cisco DNA Center dashboard, click the Settings (gear) icon, and then click System Settings. Status is shown for hosts in the cluster.


If you need additional functionality in later Cisco DNA Center releases, such as support for new switches or features, you can continue the upgrade process as required.

With all application packages installed and hosts in the cluster showing a status of Up, the SD-Access functionality is available to configure, and integration with ISE can proceed.

Process: Installing Identity Services Engine nodes

The SD-Access solution described in this guide uses two ISE nodes in a high-availability standalone configuration dedicated to the SD-Access network and integrated into Cisco DNA Center management. The first ISE node has the primary policy administration node (PAN) persona configuration and the secondary monitoring and troubleshooting (MnT) persona configuration. The second ISE node has the secondary PAN persona configuration and the primary MnT persona configuration. Both nodes include policy services node (PSN) persona configurations. You must also enable pxGrid and External RESTful Services (ERS) on the ISE nodes.

Table 2.         ISE node configurations

ISE Node 1: Primary PAN, Secondary MnT, PSN, pxGrid, ERS Services

ISE Node 2: Secondary PAN, Primary MnT, PSN, pxGrid, ERS Services

 

Tech tip

There are specific ISE software versions required for compatibility with Cisco DNA Center. To be able to integrate with an existing ISE installation, you must first ensure that the existing ISE is running at least the minimum supported version. An ISE integration option, which is not included in this validation, is to deploy a new ISE instance as a proxy to earlier versions of ISE.

The versions of ISE and Cisco DNA Center validated in HA standalone mode for this guide are listed in Appendix A: Product List. You may find alternative recommended images in the latest SD-Access Hardware and Software Compatibility Matrix.

Procedure 1.            Install ISE server images

Step 1.         On both ISE nodes, boot and install the ISE image.

Step 2.         On the console of the first ISE node, at the login prompt, type setup, and then press Enter.

**********************************************

Please type ‘setup’ to configure the appliance

**********************************************

localhost login: setup

Step 3.          Enter the platform configuration parameters.

Press ‘Ctrl-C’ to abort setup

Enter hostname[]: m29-ise1

Enter IP address []: 10.4.49.30

Enter IP netmask[]: 255.255.255.0

Enter IP default gateway[]: 10.4.49.1

Enter default DNS domain[]: ciscodna.net

Enter Primary nameserver[]: 10.4.49.10

Add secondary nameserver? Y/N [N]: N

Enter NTP server[time.nist.gov]: 10.4.0.1

Add another NTP server? Y/N [N]: Y

Enter NTP server[time.nist.gov]: 10.4.0.2

Add another NTP server? Y/N [N]: N

Enter system timezone[UTC]: UTC

Enable SSH service? Y/N [N]: Y

Enter username[admin]: admin

Enter password: [admin password]

Enter password again: [admin password]

Copying first CLI user to be first ISE admin GUI user...

Bringing up network interface...

Pinging the gateway...

Pinging the primary nameserver...

 

Do not use ‘Ctrl-C’ from this point on...

 

Installing Applications...

 === Initial Setup for Application: ISE ===

Additional installation messages appear, and then the server reboots.

Rebooting...

Step 4.        Repeat Step 2 and Step 3 on the second ISE node, using the appropriate parameters for it.

The systems reboot automatically and display the Cisco ISE login prompt.

localhost login:

Procedure 2.            Configure roles for first ISE node

Step 1.         On the first ISE node, log in using a web browser and the configured username and password, and then accept any informational messages.

https://m29-ise1.ciscodna.net/

Step 2.         Navigate to Administration > System > Deployment, and then click OK to the informational message.


Step 3.         Click on the ISE node hostname, and then, under Role, click Make Primary.


Step 4.        Under Policy Service, select Enable Device Admin Service and Enable Passive Identity Service, select pxGrid, and then click Save.


TACACS infrastructure device administration support, authentication using Cisco EasyConnect with domain controllers, and pxGrid services for Cisco DNA Center are now enabled, and the node configuration is saved.

Procedure 3.            Register ISE node 2 and configure roles

Using the same ISE administration session started on the first node, integrate the additional ISE node.

Step 1.         Using the existing session, refresh the view by navigating again to Administration > System > Deployment, and then under the Deployment Nodes section, click Register.


A screen allowing registration of the second ISE node into the deployment appears.

Step 2.         Enter the ISE fully-qualified domain name Host FQDN (m29-ise2.ciscodna.net), User Name (admin), and Password ([admin password]), and then click Next.

Step 3.         If you are using self-signed certificates, click Import Certificate and Proceed. If you are not using self-signed certificates, follow the instructions for importing certificates and canceling this registration, and then return to the previous step.

Step 4.        On the Register ISE Node - Step 2: Configure Node screen, under Monitoring, change the role for this second ISE node to PRIMARY. Under Policy Service, select Enable Device Admin Service and Enable Passive Identity Service, select pxGrid, and then click Submit.


The node configuration is saved.

Step 5.         Click OK to the notification that the data is to be synchronized to the node and the application server on the second node will restart.

The synchronization and restart of the second node can take more than ten minutes to complete. You can use the refresh button on the screen to observe when the node returns from In Progress to a Connected state to proceed to the next step.


Step 6.        Check Cisco.com for ISE release notes and the SD-Access Hardware and Software Compatibility Matrix and download any patch required for your installation. Then, install the patch by navigating in ISE to Administration > System > Maintenance > Patch Management, click Install, click Browse, browse for the patch image, and then click Install. The patch installs node-by-node to the cluster, and each cluster node reboots.

Step 7.         After the ISE web interface is active again, check the progress of the patch installation by navigating to Administration > System > Maintenance > Patch Management, select the patch, and then select Show Node Status. Use the Refresh button to update status until all nodes are in Installed status before proceeding.


Step 8.        Navigate to Administration > System > Settings. On the left pane, navigate to ERS Settings. Under ERS Setting for Primary Administration Node, select Enable ERS for Read/Write, and accept any dialog box that appears. Under ERS Setting for All Other Nodes, select Enable ERS for Read. Under CSRF Check, select Disable CSRF for ERS Request, and then click Save. Accept any additional dialog box that appears.


The ERS settings are updated, and ISE is ready to be integrated with Cisco DNA Center.
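Optionally, confirm that ERS is answering before starting the integration. For example, from an administration host, assuming the default ERS port 9060 and this guide's example addressing (the command prompts for the admin password):

curl -k -u admin -H "Accept: application/json" https://10.4.49.30:9060/ers/config/networkdevice

An HTTP 200 response with a (possibly empty) SearchResult payload indicates that ERS read access is working.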

Process: Integrating Identity Services Engines with Cisco DNA Center

Integrate ISE with Cisco DNA Center by defining ISE as an authentication and policy server to Cisco DNA Center and permitting pxGrid connectivity from Cisco DNA Center into ISE. Integration enables information sharing between the two platforms, including device information and group information, and allows Cisco DNA Center to define policies to be rendered into the network infrastructure by ISE.

Tech tip

The validation includes Cisco DNA Center integration with ISE servers as a requirement for automation of the assignment of edge ports to VNs and policy configuration, including the deployment of scalable group tags and group-based policies.

Procedure 1.            Configure Cisco DNA Center authentication and policy servers

Step 1.         Log in to the Cisco DNA Center web interface. At the top-right corner, select the Settings (gear) icon, and then navigate to System Settings.


Step 2.         Navigate to Settings > Authentication and Policy Servers, and then click the + Add button.

Tech tip

The next step for integrating an ISE installation is the same whether you use a high-availability standalone ISE deployment, as shown in this example, or a distributed ISE deployment. The shared secret chosen needs to be consistent with the shared secret used across the devices in the network for communicating with the authentication, authorization, and accounting (AAA) server. The username and password are used for Cisco DNA Center to communicate with ISE using SSH and must be the default super admin account that was created during the ISE installation.

Step 3.         In the Add AAA/ISE SERVER slide-out display, enter the ISE node 1 (primary PAN) Server IP Address (example: 10.4.49.30) and Shared Secret, toggle the Cisco ISE server selector to On, enter the ISE Username (example: admin) and the ISE Password, enter the ISE fully qualified domain name in FQDN, enter the Subscriber Name (example: dnac), and leave the SSH Key blank. If you are using TACACS for infrastructure device administration, click View Advanced Settings and select TACACS. Click Apply.


During communication establishment, status from Cisco DNA Center displays Creating AAA server… and then Status displays INPROGRESS. Use the Refresh button until communication establishes with ISE and the server displays ACTIVE status. If communication is not established, an error message displays information reported from ISE regarding the problem to be addressed before continuing. You also can see the communication status by navigating from the Settings (gear) icon to System Settings > System 360. Under External Network Services, the Cisco ISE server shows in Active status.


With communications established, Cisco DNA Center requests a pxGrid session with ISE.

Step 4.        Log in to ISE and navigate to Administration > pxGrid Services.

The client named dnac shows Pending in the Status column.

Step 5.         Check the box next to dnac, above the list, click Approve, and then click Yes to confirm.


A success message appears, and the Pending status changes to Online (XMPP). You can additionally verify that the integration is active by expanding the view for the client and observing two subscribers, Core and TrustSecMetaData.


If ISE is integrated with Cisco DNA Center after scalable groups are already created in ISE, in addition to the default groups available, any existing ISE groups also are visible by logging in to Cisco DNA Center and navigating to Policy > Dashboard > Scalable Groups. Existing ISE policies are not migrated to Cisco DNA Center.

Process: Preparing ISE for TACACS network device management

For TACACS configurations, Cisco DNA Center modifies discovered devices to use authentication and accounting services from ISE, with local failover, by default. ISE must be prepared to support the device administration configurations pushed to the devices during the discovery process.

Procedure 1.            Verify ISE and Cisco DNA Center TACACS configuration

Step 1.         Using ISE, navigate to Administration > System > Deployment. Under Policy Service, verify that Enable Passive Identity Service is selected.


Step 2.         Log in to the Cisco DNA Center web interface. At the top-right corner, click the Settings (gear) icon. At the top, click System Settings. On the right, click Authentication and Policy Servers, and verify that ISE is active and supporting the TACACS protocol in addition to RADIUS.


If any integration component is missing, return to the Software-Defined Access Management Infrastructure Prescriptive Deployment Guide and correct the integration.

Procedure 2.            Create a Cisco DNA Center administrative login in ISE

Update the ISE configuration with credentials supporting centralized authentication.

Tech tip

When devices are discovered, the devices receive configurations appropriate for the assigned site, including the centralized AAA server configuration using ISE, which is preferred over local login credentials. To maintain the ability to manage the devices after discovery, the credentials discovery uses must be available from the ISE server, either directly or as the means to accessing an external identity source, such as Active Directory.
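For reference, the device-side result is a standard IOS AAA configuration pointing at ISE. The following is a representative sketch only, with placeholder names and key, not the exact configuration that Cisco DNA Center renders:

aaa new-model
!
tacacs server ISE-TACACS-1
 address ipv4 10.4.49.30
 key [shared secret]
!
aaa group server tacacs+ DNAC-TACACS-GROUP
 server name ISE-TACACS-1
!
aaa authentication login default group DNAC-TACACS-GROUP local
aaa authorization exec default group DNAC-TACACS-GROUP local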

Step 1.         Log in to ISE, navigate to Administration > Identity Management > Identities, click +Add, enter the Name (matching what was used for Cisco DNA Center discovery, and different from the ISE administrator), enter the associated Login Password and Re-Enter Password, and then, at the bottom of the screen, click Submit.


The network administrative user login is now available from ISE, and the same user ID is created on each device in a later procedure.

Procedure 3.            Use ISE to configure TACACS command sets

Centralized authentication includes authorization capabilities, which can be used to limit the commands that are permitted on a device. The default ISE authorization policy denies all commands. This example creates a command set that does not restrict the commands available to the authenticated user.

Step 1.         Log in to ISE, navigate to Work Centers > Device Administration > Policy Elements. On the left side, navigate to Results > TACACS Command Sets, and then click Add.


Step 2.         In the form under Command Set, supply a Name (example: PermitAllCommands) and a Description. Under Commands, select Permit any command that is not listed below, and then click Submit.


The new command set is saved and added to the list of available command sets.


Procedure 4.            Use ISE to configure TACACS device authorization

The new command set is applied to the authorization policy rules to change the default deny-all authorization behavior.

Step 1.         Navigate to Work Centers > Device Administration > Device Admin Policy Sets, and then, to the right of the Default policy set, click > to expand the policy set.


Step 2.         To the left of Authorization Policy, click > to expand the policy. Above the rule named Default, click + (plus) to insert Authorization Rule 1, and then, under conditions, click + (plus) to add a condition.


The Conditions Studio wizard appears.

Step 3.         From the Library on the left, drag Network_Access_Authentication_Passed to the Editor window, and then, at the bottom, click Use.


Step 4.        Under Results, Command Sets, select PermitAllCommands. Under Results, Shell Profiles, select Default Shell Profile, and then click Save.


The default authorization policy set is saved, allowing the authenticated Cisco DNA Center login to have command authorization to update the network devices.

Process: Installing SD-Access Wireless LAN controllers

For a Cisco SD-Access Wireless deployment, dedicate a WLC or pair of WLCs to SD-Access Wireless connectivity by integrating the WLCs natively with the fabric. The WLCs use link aggregation to connect to a redundant Layer 2 shared services distribution outside of the SD-Access fabric, as described in the Campus LAN and Wireless LAN Design Guide.

For high availability stateful switchover (HA SSO) resiliency, use a pair of WLCs with all network connectivity in place before starting the configuration procedure. Redundant WLCs are connected to a set of devices configured to support the Layer 2 redundancy suitable for the HA SSO WLCs, such as a switch stack, Cisco Virtual Switching System, or Cisco StackWise® Virtual, which may exist in a data center or shared services network. For maximum resiliency, redundant WLCs should not be directly connected to the Layer 3 border nodes.
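The switch-side EtherChannel configuration depends on the connecting platform. As a sketch only, a trunked EtherChannel toward one WLC on a Catalyst switch pair might look like the following; the port-channel number and interface names are example values, and VLAN 174 matches the WLC management VLAN used later in this process. Because AireOS LAG does not negotiate LACP or PAgP, the channel-group mode must be on.

interface Port-channel21
 description SDA-WLC-1 LAG
 switchport mode trunk
 switchport trunk allowed vlan 174
!
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 description SDA-WLC-1 ports
 switchport mode trunk
 switchport trunk allowed vlan 174
 channel-group 21 mode on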

Tech tip

The SD-Access solution described supports transport of IP frames in the Layer 2 overlays that are used for WLAN, without Layer 2 flooding of broadcast and unknown multicast traffic. Without broadcasts from the fabric edge, Address Resolution Protocol (ARP) functions by using the fabric control plane for MAC-to-IP address table lookups. Transporting non-IP frames and Layer 2 flooding of broadcast and unknown multicast traffic is restricted. See the release notes for your software and hardware versions for specific restrictions.

Procedure 1.            Configure the WLC Cisco AireOS platforms using the startup wizard

Perform the initial configuration using the CLI startup wizard.

After powering up the WLC, you should see the following on the WLC console. If not, type - (hyphen) followed by Enter repeatedly until the startup wizard displays the first question.

Welcome to the Cisco Wizard Configuration Tool

Use the '-' character to backup

Step 1.         Terminate the auto-install process.

Would you like to terminate autoinstall? [yes]: YES

Step 2.         Enter a system name. Do not use colons in the system name, and do not use the default name.

System Name [Cisco_7e:8e:43] (31 characters max): SDA-WLC-1

Step 3.         Enter an administrator username and password. Use at least three of the following character classes in the password: lowercase letters, uppercase letters, digits, and special characters.

Enter Administrative User Name (24 characters max): admin

Enter Administrative Password (24 characters max): [password]

Re-enter Administrative Password: [password]

Step 4.        Use DHCP for the service port interface address.

Service Interface IP address Configuration [static] [DHCP]: DHCP

Step 5.         Enable Link Aggregation (LAG).

Enable Link Aggregation (LAG) [yes][NO]: YES

Step 6.        Enter the management interface IP address, mask, and default router. The IP address for the secondary controller of an HA SSO pair is used only temporarily until the secondary WLC downloads the configuration from the primary and becomes a member of the HA controller pair.

Management Interface IP Address: 10.4.174.26

Management Interface Netmask: 255.255.255.0

Management interface Default Router: 10.4.174.1

Step 7.         Configure the management interface VLAN identifier.

Management Interface VLAN Identifier (0 = untagged): 174

Step 8.        Configure the management interface port number. The displayed range varies by WLC model. This number is arbitrary after enabling LAG, because all management ports are automatically configured and participate as one LAG, and any functional physical port in the group can pass management traffic.

Management Interface Port Num [1 to 2]: 1

Step 9.        Enter the DHCP server for clients (example: 10.4.48.10).

Management Interface DHCP Server IP Address: 10.4.48.10

Step 10.     You do not need to enable HA SSO in this step. Cisco DNA Center automates the HA SSO controller configuration during device provisioning.

Enable HA (Dedicated Redundancy Port is used by Default)[yes][NO]: NO

Step 11.      The WLC uses the virtual interface for mobility DHCP relay, guest web authentication, and intercontroller communication. Enter an IP address that is not used in your organization’s network.

Virtual Gateway IP Address: 192.0.2.1

Step 12.      If the option is presented, enter a multicast address that will be used by each AP to subscribe to IP multicast flows coming from the WLC. This address will be used only when configuring the IP multicast delivery method called multicast-multicast.

Multicast IP Address: 239.1.1.1

Tech tip

The multicast address must be unique for each controller or HA pair in the network. The multicast address entered is used as the source multicast address, which the access points registered to the controller use for receiving wireless user-based multicast streams.
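If you need to adjust the multicast delivery method or group address after the wizard completes, equivalent AireOS CLI commands are available. A brief sketch using the same example group address:

(Cisco Controller) > config network multicast global enable

(Cisco Controller) > config network multicast mode multicast 239.1.1.1

(Cisco Controller) > show network summary

The show network summary output includes the current Ethernet multicast mode for verification.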

Step 13.      Enter a name for the default mobility and RF group.

Mobility/RF Group Name: SDA-Campus

Step 14.     Enter an SSID for the data WLAN. This is used later in the deployment process.

Network Name (SSID): SDA-Data

Step 15.      Disable DHCP Bridging Mode.

Configure DHCP Bridging Mode [yes][NO]: NO

Step 16.     Require clients to use DHCP-assigned addresses by disallowing static IP addresses.

Allow Static IP Addresses [YES][no]: NO

Step 17.      Do not configure the RADIUS server now. You will configure the RADIUS server later, using the GUI.

Configure a RADIUS Server now? [YES][no]: NO

Warning! The default WLAN security policy requires a RADIUS server.

Please see documentation for more details.

Step 18.     Enter the country code where you are deploying the WLC.

Enter Country Code list (enter ‘help’ for a list of countries) [US]: US

Step 19.     Enable the required wireless networks.

Enable 802.11b network [YES][no]: YES

Enable 802.11a network [YES][no]: YES

Enable 802.11g network [YES][no]: YES

Step 20.     Enable the radio resource management (RRM) auto-RF feature.

Enable Auto-RF [YES][no]: YES

Step 21.      Synchronize the WLC clock to your organization's NTP server.

Configure a NTP server now? [YES][no]: YES

Enter the NTP server's IP address: 10.4.0.1

Enter a polling interval between 3600 and 604800 secs: 86400

Step 22.      Do not configure IPv6.

Would you like to configure IPv6 parameters? [YES][no]: NO

Step 23.      Confirm that the configuration is correct. The WLC saves the configuration and resets automatically. 

Configuration correct? If yes, system will save it and reset. [yes][NO]: YES

Configuration saved!

Resetting system with new configuration…

If you press Enter or respond with no, the system resets without saving the configuration, and you will have to complete this procedure again.

The WLC resets and displays a User: login prompt.

(Cisco Controller)

Enter User Name (or 'Recover-Config' this one-time only to reset configuration to factory defaults)

User:

Step 24.     Repeat Step 1 through Step 23 for the secondary WLC, using the appropriate parameters for it.

Step 25.      Use a web browser to verify connectivity by logging in to each of the Cisco WLC administration web pages using the credentials created in Step 3 of Procedure 1 (example: https://10.4.174.26).


Step 26.     From the home page, at the top right, click Advanced. Navigate to COMMANDS > Set Time. Verify that the date and time agree with the NTP server. If the time appears to be significantly different, manually correct it, and, if your network infrastructure devices use something other than the default time zone, also choose a time zone. The correct date and time are important for certificate validation and successful AP registration with the WLC. Repeat this step with each WLC.
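The date, time, and NTP associations can also be verified from the AireOS CLI:

(Cisco Controller) > show time

The output lists the system time, time zone, and the configured NTP servers. If a server entry needs correction, config time ntp server 1 10.4.0.1 updates NTP server index 1 with the address used in this guide.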

Procedure 2.            Configure the WLC discovery and management credentials

Add to the WLC the credentials that Cisco DNA Center uses for discovery and management. During discovery, with the Device Controllability feature enabled, Cisco DNA Center uses these credentials to configure additional management access requirements, such as SNMPv3.
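As an illustration of what Device Controllability provisions, an SNMPv3 user can also be created manually from the AireOS CLI. The user name and keys below are placeholders only; Cisco DNA Center generates its own values during discovery.

(Cisco Controller) > config snmp v3user create [username] rw hmacsha aescfb128 [auth key] [privacy key]

(Cisco Controller) > show snmpv3user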

Step 1.         Use a web browser to connect to the Cisco WLC administration web page using the credentials created in Step 3 of Procedure 1 (example: https://10.4.174.26).

Step 2.         Add a local login for Cisco DNA Center to manage the device. From the home page, at the top right, click Advanced. Navigate to MANAGEMENT > Local Management Users. At the top right, click New…, fill out the form, supplying User Name (example: dna), Password, Confirm Password, set User Access Mode to ReadWrite, supply a Description, and then click Apply.


Step 3.         At the top right, click Save Configuration, and then, at the dialog box, click OK.


Step 4.        Repeat this procedure for the secondary WLC, using the appropriate parameters for it.
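The same local management user can alternatively be created from the AireOS CLI; a sketch using the example account name from Step 2:

(Cisco Controller) > config mgmtuser add dna [password] read-write

(Cisco Controller) > save config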

The WLCs are ready for integration into the Cisco DNA Center setup. Integration is part of the fabric deployment itself and consists of adding WLCs into inventory, optionally creating HA pairs, creating IP pools and SSIDs for fabric wireless, provisioning WLCs into the fabric, and assigning wireless endpoints to the fabric. These steps are outlined as part of the Software-Defined Access Fabric Provisioning Prescriptive Deployment Guide.

Appendix A: Product List

The following products and software versions were used to validate this deployment guide; the validated set does not represent every possible option. Additional hardware options are listed in the associated Software-Defined Access Solution Design Guide, the SD-Access Product Compatibility Matrix, and the Cisco DNA Center data sheets. These documents may provide guidance beyond what was tested as part of this guide. Updated Cisco DNA Center package files are released regularly and are available within the packages and updates listings.

Table 3.         Cisco DNA Center

Product                       Part number                        Software version
Cisco DNA Center Appliance    DN2-HW-APL-L (M5-based chassis)    1.2.10.4 (System 1.1.0.754)

Table 4.         Cisco DNA Center packages

All packages running on Cisco DNA Center during validation are listed; not all packages are included as part of the testing for SD-Access validation.

Package                                      Version
Application Policy                           2.1.28.170011
Assurance – Base                             1.2.11.304
Assurance – Sensor                           1.2.10.254
Automation – Base                            2.1.28.60244.9
Automation – Intelligent Capture             2.1.28.60244
Automation – Sensor                          2.1.28.60244
Cisco DNA Center UI                          1.2.11.19
Command Runner                               2.1.28.60244
Device Onboarding                            2.1.18.60024
DNAC Platform                                1.0.8.8
Image Management                             2.1.28.60244
NCP – Base                                   2.1.28.60244
NCP – Services                               2.1.28.60244.9
Network Controller Platform                  2.1.28.60244.9
Network Data Platform – Base Analytics       1.1.11.8
Network Data Platform – Core                 1.1.11.77
Network Data Platform – Manager              1.1.11.8
Path Trace                                   2.1.28.60244
SD-Access                                    2.1.28.60244.9

Table 5.         Identity management

Functional area      Product                           Software version
Cisco ISE Server     Cisco Identity Services Engine    2.4 Patch 6

Table 6.        SD-Access Wireless Controller

Functional area            Product                                                    Software version
Wireless LAN controller    Cisco 8540, 5520, and 3504 Series Wireless Controllers     8.8.111.0 (8.8 MR1)

Feedback

For comments and suggestions about this guide and related guides, join the discussion on Cisco Community at https://cs.co/en-cvds.

Learn more