Automating Data Center Architecture with Nexus Dashboard Orchestrator (NDO) and Fabric Controller (NDFC)

Updated: April 25, 2022


 

Introduction

Given the exponential growth of applications, enterprises and service providers need the ability to create scalable data centers. This led to the deployment of large and small data centers spread across geographies, with business requirements to interconnect them for many reasons, including high availability, disaster recovery, and scale.

VXLAN (Virtual Extensible LAN) fabric with a BGP EVPN (Ethernet Virtual Private Network) control plane has proven that it can provide secure multi-tenancy and mobility at scale, both within the same data center and across data centers with VXLAN Multi-Site. To simplify deployment, Cisco NDFC provides ways for customers to deploy individual data center fabrics and to extend networks and VRFs across fabrics grouped together in the same Multi-Site Domain (MSD).

You can use this document to learn more about the VXLAN Multi-Site architecture and the deployment of NDFC Multi-Site Domain.

Multi-Site Use-Case

There are several drivers for the deployment of a Multi-Site architecture, including the following.

Mega Scale Data Center

Large enterprises and service providers deploy large-scale data centers with upwards of 1000 racks within the data center's physical location. To simplify operations and limit the fault domain, the data center is logically segmented into smaller fabrics with the ability to extend any VRF/network anywhere within the data center.

Data Center Interconnectivity (DCI)

For high availability, networks are deployed across sites and then interconnected using a DCI technology such as VXLAN Multi-Site. The architecture enables extending connectivity and policy across sites and allows IP mobility and active-active use cases across sites.

 

 


Figure 1.  Data Center Interconnectivity

 

Multi-Cloud

With the adoption of hybrid and multi-cloud architectures, applications can now co-exist across on-prem data centers and public clouds. This requires extending connectivity and policies between the on-prem data center and the public cloud.


Figure 2.  Extending On-Prem networks to Public Cloud

Service Provider 5G Telco DC

In service provider data centers, use cases such as centrally managing multiple data centers interconnected using IP, MPLS, or Segment Routing transport become necessary.


Figure 3.  Telco DC

Sites, Availability Zones, and Regions

When describing data center deployment architectures, a geographical location is often referred to as a “site.” At the same time, the term “site” may also be used to reference a specific VXLAN EVPN fabric that is part of a Multi-Site architecture, and this may lead to confusion because multiple fabrics may be deployed in a given geographical location. It is therefore helpful to introduce terms like “Availability Zone (AZ)” and “Region” to differentiate the different deployment scenarios.


Figure 4.  Logical Representation of a Highly Available Data Center Architecture

  •   An Availability Zone (AZ) refers to a set of network components representing a specific infrastructure fault domain. For VXLAN EVPN deployments, an AZ corresponds to a fabric that is part of a particular NDFC MSD construct. The geographic placement of AZs is use-case dependent; for scale-out network designs, for example, it is possible to deploy multiple AZs in the same physical (and geographic) data center location.

  •   A Region is a collection of one or more AZs representing a single change and resource domain; a region typically includes AZs deployed in one or more geographic data center locations. In terms of a VXLAN EVPN deployment with NDFC, a Region represents a single fabric or multiple fabrics managed through a single NDFC controller (and hence part of the same NDFC MSD construct). The controller's scope is therefore that of managing all the data centers (or AZs) within the region.
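To make the AZ and Region hierarchy concrete, the following minimal Python sketch models the relationship between fabrics (AZs), regions, and the single NDFC controller per region. The class names, attribute names, controller hostname, and location labels are illustrative only and are not part of any NDFC or NDO API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AvailabilityZone:
    """One VXLAN EVPN fabric; an infrastructure fault domain inside a region."""
    name: str
    location: str  # physical data center location (multiple AZs may share one)

@dataclass
class Region:
    """A change/resource domain: all AZs managed by a single NDFC controller (one MSD)."""
    name: str
    ndfc_controller: str  # the single NDFC instance managing this region
    azs: List[AvailabilityZone] = field(default_factory=list)

# Illustrative values only; names and hostnames here are hypothetical.
us_midwest = Region(
    name="US-Midwest",
    ndfc_controller="ndfc-a.example.com",
    azs=[AvailabilityZone("Chicago", "DC-1"), AvailabilityZone("Omaha", "DC-2")],
)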

 


Figure 5.   Scope of a Controller

Cisco Nexus Dashboard Orchestrator (NDO)

Customers want the ability to establish flexible inter-region network communication without jeopardizing the change and fault domain isolation between those locations. A given application instance is usually deployed inside a region. However, communication between different application instances, or access to services shared between application instances, requires extending connectivity across these regions.

NDFC acts as a controller within a region, allowing customers to deploy extended networks and VRFs across availability zones in that region (i.e., VXLAN EVPN fabrics grouped in the same MSD). A new functional element, the Cisco Nexus Dashboard Orchestrator (NDO, formerly known as MSO), can then be introduced to control the establishment of network connectivity across regions managed by independent NDFC controllers.

 


Figure 6.  NDO for Multi-Region Orchestration

The hierarchical model shown above brings several benefits when compared to the deployment of VXLAN EVPN fabrics managed by the same NDFC instance (i.e., part of the same Region/MSD), such as:

  •   Ability to build and manage large-scale fabrics

  •   Ability to maintain fault domain separation across the data plane and management plane

  •   Ability to extend connectivity across NDFC-managed fabrics and ACI fabrics*

  •   Ability to extend connectivity between on-prem and public cloud*

  •   Ability to manage multi-cloud networks

* Roadmap item; refer to the NDO release notes for software support

To summarize, NDO can be used to interconnect NDFC managed regions, enable DCI automation, and allow scale-out deployment of data center networks to extend VRF and networks across the regions.

Operating Multi-Region Networks with NDO and NDFC

To implement highly available data center deployments, the operator must interconnect the various regions while ensuring that a given fault domain remains contained within its region.

This can be accomplished by extending only Layer 3 connectivity across regions (i.e., extending one or more VRFs) and keeping Layer 2 extension limited to each MSD (region). Along with fault domain and flooding containment, restricting a specific network (IP subnet) to a region brings several advantages, such as the ability to optimize the north-south traffic path in and out of each region and to apply service redirection policies using VXLAN policy-based redirect (PBR) or ePBR for security, optimization, or compliance reasons.


Figure 7.  Hierarchical Deployment model with NDO and NDFC

When extending a VRF across regions but keeping the specific subnets localized, the operator can advertise only subnet routes across regions instead of specific endpoints' host routes. This helps make optimal use of the forwarding resources.

The other advantage of the hierarchical model shown above is the ability to scale out fabrics. Because each region is managed by its own NDFC controller, multiple regions can be built out without the overall scale being limited by what a single controller can support.

(Refer to the NDO Verified Scalability Guide for scale numbers.)


Figure 8.   Horizontal Scale out Data center deployment

Along with VRF extension, NDO can also set up the underlay networking and the overlay eBGP Multi-Site configuration across the VXLAN border gateways (BGWs) between regions.

NDO supports full-mesh eBGP peering across regions (Figure 9) or peering via a route server (Figure 10).

Because packets between regions are VXLAN encapsulated, ensure that the MTU on the underlay ISN/DCI network is set to accommodate the VXLAN header overhead (50 bytes).
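As a quick sanity check, the arithmetic below (a minimal Python sketch, with example MTU values assumed purely for illustration) shows the minimum underlay MTU needed once the 50-byte VXLAN overhead is added.

VXLAN_OVERHEAD_BYTES = 50  # outer MAC + outer IP + UDP + VXLAN headers

def required_underlay_mtu(overlay_mtu: int) -> int:
    """Minimum ISN/DCI MTU needed to carry overlay frames without fragmentation."""
    return overlay_mtu + VXLAN_OVERHEAD_BYTES

# Example: hosts sending standard 1500-byte frames, and jumbo 9000-byte frames.
print(required_underlay_mtu(1500))  # 1550
print(required_underlay_mtu(9000))  # 9050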


Figure 9.  Inter Region Connectivity Options – Full Mesh Peering


Figure 10.  Inter Region Connectivity Options – Route Server

 

In certain specific use cases, applications may require the network infrastructure to provide L2 extension capabilities across regions. If needed, this can also be accomplished with NDO, but careful consideration must be given to MAC address scale, because all MAC addresses deployed in a given network in a region are advertised to, and installed on, all the VXLAN TEPs in the other regions to which the L2 network is extended. This also means that host routes are exchanged across regions for endpoints belonging to the extended network, resulting in higher utilization of forwarding resources. Also, extending the Layer 2 network means the fault domain is no longer constrained within a region; operators should consider leveraging the VXLAN Multi-Site aggregated storm-control functionality on the border gateways to ensure that any traffic storm in a region is contained within that region. This best practice also applies to Layer 2 network extension within a region. Lastly, managing traffic redirection and service chaining rules can become more complex to implement.
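The following back-of-the-envelope sketch (Python, with purely hypothetical endpoint counts) illustrates how quickly remote MAC entries and host routes add up when Layer 2 networks are stretched across regions.

def l2_extension_cost(stretched_networks: int, macs_per_network: int) -> dict:
    """Rough count of extra entries the TEPs in a remote region must install
    when Layer 2 networks are stretched to it (hypothetical inputs)."""
    mac_entries = stretched_networks * macs_per_network   # remote MAC entries
    host_routes = stretched_networks * macs_per_network   # host routes exchanged
    return {"mac_entries": mac_entries, "host_routes": host_routes}

# Example: 10 stretched networks with 500 endpoints each.
print(l2_extension_cost(10, 500))  # {'mac_entries': 5000, 'host_routes': 5000}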

Extending Overlays with NDO and NDFC

This section describes the concepts and design of schemas and templates used to extend the overlays (VXLAN Networks and VRFs) across the AZs and Regions.

Tenants

A tenant is a logical container for application policies that enables an administrator to exercise domain-based access control. A tenant represents a unit of isolation from a policy perspective. When using Nexus Dashboard Orchestrator to manage Cisco NDFC fabrics, you will use the default “dcnm-default-tn” tenant, which is preconfigured for you and allows you to create and manage the following objects:

  •   VRFs

  •   Networks

Schemas

A schema is a collection of templates, which are used for defining the networking configuration, with each template assigned to the default “dcnm-default-tn” tenant.

Templates

A template is a set of configuration objects and their properties that you deploy all at once to one or more sites. Each template must have at least one site associated with it. A template associated with a single site allows you to deploy site-local configuration (i.e., networks and VRFs that are provisioned only in that site). A template associated with multiple sites instead allows you to provision stretched networks/VRFs (i.e., objects that are concurrently deployed in multiple sites).
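The relationship between schemas, templates, and sites can be summarized with the simple Python sketch below; the structure is purely conceptual and does not reflect the NDO API or object model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Template:
    """A set of VRF/Network objects deployed together to the associated sites."""
    name: str
    sites: List[str]          # one site -> site-local config; many sites -> stretched
    vrfs: List[str] = field(default_factory=list)
    networks: List[str] = field(default_factory=list)

    def is_stretched(self) -> bool:
        return len(self.sites) > 1

@dataclass
class Schema:
    """A collection of templates, each tied to the default dcnm-default-tn tenant."""
    name: str
    tenant: str = "dcnm-default-tn"
    templates: List[Template] = field(default_factory=list)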

Schemas and Templates Design Considerations

There are multiple approaches you can take when it comes to creating schema and template configurations specific to your deployment use case. The following sections describe a few simple design directions you can take when deciding how to define the schemas, templates, and policies in your Multi-Site Environment.

Keep in mind that when designing schemas, you must consider the supported scalability limits for the number of schemas, templates, and objects per schema. Detailed information on verified scalability limits is available in the Cisco Multi-Site Verified Scalability Guides for your release.

Intra-Region Schema Deployment

Single Schema Design is best suited for Intra-Region communication. Take the following as an example:

Region = US-Midwest

AZ 1 = Chicago

AZ 2 = Omaha

The first template defines the networks that need to be extended (i.e., stretched) across the AZs (Chicago and Omaha) that are part of the US-Midwest region, while the second template contains the definition of the VRFs extended across the same two AZs.


Figure 11.   Single Schema Design

Inter-Region Schema Deployment

Single Schema Design is also suited for Inter-Region communication. Take the following as an example:

Region = US-Midwest

AZ 1 = Chicago

AZ 2 = Omaha

Region = US-East

AZ 1 = New York City

AZ 2 = Boston

Use case 1: Stretch VRF across Regions

In this use case, a VRF is extended across the regions while Networks are local to a given region. Therefore, while a Network is not stretched across fabrics belonging to different regions, it is stretched across AZs part of the same region. 


Figure 12.   Stretch VRF across Regions

Use case 2: Stretch VRF across Regions and Local Networks within a Region

In this use case, a VRF is extended across the regions; some Networks are local to a given AZ within a region, while other Networks are stretched across the AZs of a region.


Figure 13.  Stretch VRF across Regions and Local Networks within a Region

Use Case 3: Stretch VRF and Stretch Network across Regions

In this use case, both VRF and Networks are stretched across the Regions and available across all the AZs within the regions.


Figure 14.  Stretch VRF and Stretch Network across Regions

NDFC and NDO Configuration for Highly Available Data Center Architecture

This section details the deployment design, configurations, and settings to implement NDFC and NDO Integration for Highly Available Data Center Architecture.

Software and Hardware Product Versions:

The example in this white paper uses the following product software versions:

  •   Nexus Dashboard version 2.1(2d)

  •   Nexus Dashboard Orchestrator version 3.7(1g)

  •   Nexus Dashboard Fabric Controller version 12.0(2f)

For more information about supported software versions and compatibilities of related products, refer to the Cisco Nexus Dashboard and Services Compatibility Matrix at the following link: https://www.cisco.com/c/dam/en/us/td/docs/dcn/tools/dcn-apps/index.html

Deploying NDFC and NDO


Figure 15.  NDFC and NDO Inter-connectivity

 

Each service, such as Nexus Dashboard Fabric Controller and Nexus Dashboard Orchestrator, requires its own dedicated ND cluster. The interaction between the applications happens over the Nexus Dashboard Fabric interface (aka Data interface). Services like Kafka are bound to the Fabric interface; hence, controllers like NDFC send notifications to the ND Kafka broker as part of the application integrations. Furthermore, as part of the site onboarding process, NDFC checks that it has reachability to the ND fabric interface IPs; NDFC onboarding on ND works only if that reachability exists.
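Before onboarding, an operator may want to confirm that the ND fabric (data) interface IPs are reachable. The sketch below is a simple ping-based pre-check, assuming hypothetical IP addresses and Linux-style ping options; it is not part of NDFC itself.

import subprocess

# Hypothetical ND fabric (data) interface IPs of the cluster nodes hosting NDFC.
ND_DATA_INTERFACE_IPS = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]

def reachable(ip: str, count: int = 2, timeout_s: int = 2) -> bool:
    """Return True if the host answers ICMP echo (Linux 'ping' flags assumed)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for ip in ND_DATA_INTERFACE_IPS:
    print(ip, "reachable" if reachable(ip) else "NOT reachable")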

In the current shipping releases, co-hosting of NDFC and NDO is not supported. For scaling and requirements, refer to the Cisco Nexus Dashboard and Services Capacity Planning at the following link: https://www.cisco.com/c/dam/en/us/td/docs/dcn/tools/nd-sizing/index.html

Configuring NDFC and NDO Integration

The following examples show the integration between NDFC and NDO services to enable scale-out VXLAN EVPN Multi-Site Architecture between Multiple Data Centers.

In this use case, an Overlay (Network and VRF) is deployed and extended between multiple AZs (Availability Zones) within a Region, and an L3 VRF is also extended across the Regions.

Step 1. Create VXLAN EVPN fabrics with NDFC

Each AZ is represented as a single VXLAN EVPN fabric (aka Easy Fabric) using NDFC. Refer to the following link for configuring the fabrics:

https://www.cisco.com/c/en/us/td/docs/dcn/ndfc/1201/configuration/fabric-controller/cisco-ndfc-fabric-controller-configuration-guide-1201/fabrics.html#concept_e5s_yjw_sqb

Step 2. Create External fabric with NDFC

VXLAN EVPN Multi-Site requires the BGWs in different sites to exchange network and endpoint reachability information using the MP-BGP EVPN overlay control plane. Nexus Dashboard Orchestrator supports two methods for this exchange of information across sites:

  •   Route-Server (Centralized EVPN peering): with this option, all the BGWs deployed in different sites peer with the same pair of Route-Server devices, usually deployed in the Inter-Site Network (ISN).

  •   Full-mesh (Back-to-Back EVPN peering): in this case, the BGWs belonging to different sites peer directly with each other in a full-mesh fashion.

For scale-out architectures, Cisco recommends the Route-Server option, as it simplifies configuration, control plane peering, and cabling requirements. Up to four Route-Servers can be deployed.

In this guide, we will be using the Route-Server option.
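The benefit of the Route-Server option for scale-out designs can be illustrated with a simplified session count. The Python sketch below treats every BGW as peering with every other BGW in the full-mesh case and ignores any intra-site peering; the node counts are hypothetical.

def full_mesh_sessions(num_bgws: int) -> int:
    """EVPN sessions needed when every BGW peers with every other BGW."""
    return num_bgws * (num_bgws - 1) // 2

def route_server_sessions(num_bgws: int, num_route_servers: int) -> int:
    """EVPN sessions needed when every BGW peers only with the route servers."""
    return num_bgws * num_route_servers

# Example: 8 sites with 2 BGWs each (16 BGWs) and 2 route servers.
print(full_mesh_sessions(16))        # 120
print(route_server_sessions(16, 2))  # 32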

Step 3. Create Loopback IP address on Route-Server with NDFC

A loopback IP address is required on the Route-Servers to establish BGP EVPN peering with the BGWs of the different fabrics that are part of the Multi-Site domain. Each AZ deploys dedicated BGWs that peer with the Route-Servers.

Perform the following steps to create a loopback interface on each RS device (each loopback must have a unique IP address):

  •   Set the role of the device to “Core Router”.

  •   Navigate to LAN > Fabric Overview of the External fabric.

  •   Navigate to the Interfaces tab.

  •   Click the Actions button on the right-hand side and select Create Interface.

  •   Select Type = Loopback.

  •   Provide the Loopback IP address, save the policy, and deploy the configuration.

             Related image, diagram or screenshot

Step 4. Add VXLAN EVPN fabrics as sites

Nexus Dashboard Orchestrator is hosted in a dedicated Nexus Dashboard cluster. In this step, we will add the previously created VXLAN EVPN fabrics (aka Easy Fabrics) as sites. The communication between service applications like NDFC and NDO happens over the Nexus Dashboard Fabric interface (aka bond0br or Data interface).

            Related image, diagram or screenshot

Step 5. Navigate to the Nexus Dashboard hosting the NDO application and add the site.

    From Admin console, select Sites.

    Click on Add site and provide site’s configuration.

            Related image, diagram or screenshot

A.   Select NDFC as a Site.

B.   Provide the IP address of the Data interface of one of the ND cluster nodes hosting the NDFC service. 

C.   Provide the NDFC access username.

D.   Provide NDFC access password.

E.   Click on Sites. All the fabrics managed by that specific NDFC instance are displayed.

       Select all the necessary sites that must be onboarded and managed by NDO.

F.   Click on Save to finish the onboarding process.

            Related image, diagram or screenshot

Step 6.   Access the NDO service UI and manage NDFC sites using NDO

            Related image, diagram or screenshot

A.   From Admin console, select Services.

B.   Click on Open to get access to the NDO UI.

            Related image, diagram or screenshot

A.   Navigate to Infrastructure.

B.   Click on Sites.

C.   Click on individual site state and select the “Managed” option. Provide the Site ID in the field.

It is common practice to use the BGP ASN value as the Site ID.

The available sites belong to the NDFC Easy and External fabrics. The Easy fabrics consist of VXLAN EVPN Multi-Site-capable BGWs, and the External fabrics contain the core routers responsible for ISN and Route-Server functionality.

When NDO manages sites across different NDFC instances, Meta fabrics are created inside each NDFC instance as follows:

A Meta fabric represents the neighbor devices and fabrics that are not managed by the local NDFC instance but by a remote NDFC instance.

VXLAN Multi-Site domain as seen from NDFC-A Instance:

             Related image, diagram or screenshot

VXLAN Multi-Site domain as seen from NDFC-B Instance

            Related image, diagram or screenshot

Step 7. Deploy VXLAN EVPN Multi-Site Underlay and Overlay using NDO

            Related image, diagram or screenshot

A.   Navigate to Infrastructure.

B.   Click on Site Connectivity.

C.   Click on Configure.

       Related image, diagram or screenshot

As part of the “General Settings” section, verify the default DCNM/NDFC settings that will be used by NDO to allocate and configure VXLAN Multi-Site. These values are user configurable.

A.   Layer-2 VNI range represents the VNI ID for VXLAN Overlay Networks.

B.   Layer-3 VNI range represents the VNI ID for VXLAN Overlay VRFs.

C.   The Multi-Site routing loopback range (typically loopback100, Lo100) is used for the BGW Virtual IP address (aka Multi-Site VIP).

D.   Anycast Gateway MAC value used for VXLAN Overlay Networks.

Step 8. Provide Control Plane Configuration.

             Related image, diagram or screenshot

A.   The default value for Multi-Site peering is full mesh. In this example, we will use the route-server option.

B.   Click on Add route-server.

            Related image, diagram or screenshot

A.   Select the fabric where the route-server is located. This will be the External fabric managed by this specific NDFC instance (named “Backbone” in the example above).

B.   Select route-server.

C.   Select the appropriate loopback IP previously created in step 3.

VXLAN EVPN overlay peerings will be sourced from this IP address.

D.   Click on Save.

Note: In production deployments, you should always consider deploying a pair of Route-Server nodes for redundancy; hence, step C above should be repeated for each RS node.

            Related image, diagram or screenshot

A.   Navigate to individual sites.

B.   Click on Auto Allocate. This field is used to derive the VXLAN EVPN Multi-Site VIP.

The IP address allocation is automatically derived from the Multi-Site routing loopback range field in the General Settings, as shown above. Repeat this step for all sites where BGWs are present.

After the global control plane configuration is completed, site-level configuration must be performed to ensure that the BGW nodes of each site can establish underlay eBGP sessions with the ISN devices. This is required to allow the establishment of the MP-BGP EVPN overlay peering between the BGWs and the route-server nodes.

             Related image, diagram or screenshot

A.   Navigate to individual sites.

B.   Select individual BGW switches.

C.   Click on Add port.

            Related image, diagram or screenshot

A.   Provide the Ethernet Interface details. This interface connects to the Route-Server (RS) or to a generic Inter-Site Network (ISN) device, depending on the specific network topology.

B.   Provide local interface IP address.

C.   Provide remote interface IP address.

D.   Provide Remote BGP ASN. The ASN belongs to the Route-Server (RS) or to a generic Inter-Site Network (ISN) device, depending on the network topology.

E.   Optionally, edit the MTU. Default value is 9216.

F.   Click on Save.

Repeat the above steps for all the VXLAN EVPN fabrics facing the Route-Server (RS) or the generic Inter-Site Network (ISN) device, depending on the network topology and the External fabric hosting the route-server. This will ensure end-to-end Interface and eBGP underlay connectivity between BGWs and external devices. 
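For each BGW-facing link, the site-level configuration boils down to the handful of parameters listed in the steps above. The hypothetical Python snippet below summarizes them for one port; the key names are descriptive only and the values are illustrative, not taken from a real deployment.

# Illustrative per-port underlay parameters a BGW needs toward the RS/ISN device.
bgw_underlay_port = {
    "switch": "BGW-Chicago-1",
    "interface": "Ethernet1/49",
    "local_ip": "10.33.0.1/30",
    "remote_ip": "10.33.0.2/30",
    "remote_asn": 65001,   # ASN of the RS or ISN device
    "mtu": 9216,           # default value
}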

 

            Related image, diagram or screenshot

A.   Click on Deploy. This will generate the necessary API calls from NDO to the NDFC controller to configure IP interfaces, the eBGP IPv4 underlay, and the eBGP EVPN overlay for Multi-Site peering.

To check the VXLAN EVPN Multi-Site status on NDO, perform the following steps:

             Related image, diagram or screenshot

A.   Navigate to Infrastructure.

B.   Select Site Connectivity.

C.   Expand on Connectivity Status.

D.   Select either Underlay or Overlay Status.

At this point, the VXLAN EVPN Multi-Site underlay and overlay sessions are up. The next step is to create and deploy VXLAN overlays using NDO.

Step 9. Create and Attach VRF and Network Overlays using NDO

In the previous steps, we went over how NDO manages and deploys Day-0 configurations for VXLAN EVPN Multi-Site architectures in order to establish an EVPN control plane and VXLAN data plane between the BGW nodes that are part of different sites. In this step, we will create schemas and templates for deploying Day-1 configurations, such as VRF and Network overlays, on the fabric switches.

In this example, an Overlay (Network and VRF) is deployed and extended between multiple AZs (Availability Zones) within a Region, and an L3 VRF is also extended across the Regions.

Region = US-Midwest

AZ 1 = Chicago

AZ 2 = Omaha

Region = US-East

AZ 1 = New York

AZ 2 = Boston

VNI 100100 (10.10.100.0/24) is extended across AZ 1 and AZ 2 of Region US-Midwest.

VNI 100200 (10.10.200.0/24) is extended across AZ 1 and AZ 2 of Region US-East.

L3 VRF 9999 (CORP) is extended across Region US-Midwest and US-East.

The logical diagram below highlights how the networks and the VRFs are going to be deployed across the AZs that are part of the different regions and how NDO schemas and templates are associated with the different fabrics.

             Related image, diagram or screenshot
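For reference, the intended end state of this example can be captured as plain data, as in the hypothetical Python snippet below; the key names are descriptive only and do not correspond to NDO schema fields.

# Intended end state of the example above (regions, AZs, VNIs, and VRF from this guide).
deployment_plan = {
    "vrf": {"name": "CORP", "l3_vni": 9999,
            "stretched_across": ["US-Midwest", "US-East"]},
    "networks": [
        {"l2_vni": 100100, "subnet": "10.10.100.0/24",
         "region": "US-Midwest", "azs": ["Chicago", "Omaha"]},
        {"l2_vni": 100200, "subnet": "10.10.200.0/24",
         "region": "US-East", "azs": ["New York", "Boston"]},
    ],
}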

Add a new schema:

            Related image, diagram or screenshot

A.   Navigate to Application Management

B.   Click on Schemas.

C.   Click on Add Schema.

            Related image, diagram or screenshot

Provide Schema name and description. In this example, we are creating one schema and multiple templates per region.

             Related image, diagram or screenshot

Click on the Plus sign (figure above) to select the template type and then choose “Network” (figure below):

            Related image, diagram or screenshot

Step 10.  Provide the template information.

            Related image, diagram or screenshot

A.   Provide the Template name (“Inter-region VRF”).

B.   Optionally, provide the description.

C.   Select the default tenant that must be associated to all the defined templates.

Step 11.  Repeat the above step to create another template for Overlay Networks.

            Related image, diagram or screenshot

            Related image, diagram or screenshot

Step 12.  Define the VRF that should be stretched across the two regions and that is hence part of the “Inter-region VRF” template.

            Related image, diagram or screenshot

            Related image, diagram or screenshot 

A.   Click on Create Object to create a new VRF definition.

B.   Provide the VRF name.

C.   Optionally, provide the VRF ID (the default value is empty). If the value is left empty, NDFC creates the VRF with the next available ID assigned from the resource manager pool, as seen in the DCNM/NDFC General Settings.

The rest of the field values are defaults, and it is recommended to keep them as such.

Note: Currently, NDFC VRF templates do not support custom Route-Targets. Hence, all VRFs are created and deployed using Auto RT. The support for custom Route-Targets is planned for NDFC 12.1.1 release.

Repeat the same step to create Network Objects in the “Networking” Template.

            Related image, diagram or screenshot

A.   Provide the Network name.

B.   Optionally, provide the Network ID (the default value is empty). If the value is left empty, NDFC creates the network with the next available ID from the resource manager pool.

C.   Check this box if the intended overlay network is Layer-2 only.

D.   For all L3 networks, provide the L3 VRF association.

E.   Optionally, provide a VLAN ID (the default value is empty). If the value is left empty, NDFC creates the network with the next available ID from the resource manager pool.

F.   Create Anycast Gateway SVI.

Similarly, create a network object definition for the US-East region.
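The fields described above for the VRF and Network objects can be summarized as follows (an illustrative Python sketch; the key names are descriptive only and are not the NDO/NDFC API schema, and the gateway value is a hypothetical example).

# Illustrative object definitions reflecting the fields described above.
vrf_corp = {
    "name": "CORP",
    "vrf_id": None,        # None -> NDFC assigns the next free ID from its resource pool
}

net_midwest = {
    "name": "Net-US-Midwest",
    "network_id": None,    # None -> next free ID from the resource manager pool
    "layer2_only": False,  # False -> an anycast gateway SVI is created
    "vrf": "CORP",         # L3 networks require a VRF association
    "vlan_id": None,       # None -> next free VLAN from the resource manager pool
    "gateway": "10.10.100.1/24",
}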

Once the overlay definitions are created using templates, the next step is to associate the templates to the various sites.

Related image, diagram or screenshot

A.   Click on the Plus sign.

B.   Select respective sites (AZs) for the overlay extension. Assign templates to the sites.

C.   Click on Save.

Similarly, associate the US-East region sites with the US-East-specific templates.

            Related image, diagram or screenshot

As part of the Overlay attachments (in this case for the VRF) perform the tasks pointed out in the figure below:

Related image, diagram or screenshot

A.   Select the VRF template for each AZ in the site level section, since the Overlay VRF attachment to leaf nodes is a site-specific configuration.

 

B.   Select the VRF object that has been previously configured and that needs to be deployed on the leaf nodes.

C.   Click on Add static leaf for overlay attachments. This allows you to deploy the VRF on all the specified nodes.

Notice that this step is mandatory before deploying the associated networks to these leaf nodes.

As part of the Overlay attachments (in this case for the Network) perform the tasks shown in the figure below.

Related image, diagram or screenshot

A.   Select the Network template for each AZ (site-level configuration), since the Overlay Network attachment is a site-specific configuration.

B.   Select the Network name.

C.   Click on Add static port for network overlay attachments. This allows you to deploy the network on the specified edge interfaces of the leaf nodes. Notice that if the VLAN is not explicitly configured when creating the overlay attachment, a VLAN ID from the resource manager pool is automatically selected.

Repeat the above steps for VRF and Network attachments for the other AZs (as required).

Step 13.  Deploy VRF and Network Overlays using NDO

In the previous step, we created and attached overlay objects (VRF and Network) using templates. In this step, we will deploy these overlays across the sites. Because we have dedicated templates for the VRF and for the Networks, the deployment sequence is crucial: in this example, we will first deploy the VRF template and then the Network template.
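When scripting or automating the deployment, the ordering constraint can be made explicit, as in the minimal Python sketch below; deploy_template is a hypothetical stand-in for whatever mechanism triggers the NDO deploy action, and the template names mirror this example.

# The VRF template must be deployed before the Network template that references it.
def deploy_in_order(deploy_template, templates):
    order = {"vrf": 0, "network": 1}
    for template in sorted(templates, key=lambda t: order[t["type"]]):
        deploy_template(template["name"])

deploy_in_order(
    deploy_template=lambda name: print(f"Deploying {name}"),
    templates=[
        {"name": "Networking-US-Midwest", "type": "network"},
        {"name": "Inter-region VRF", "type": "vrf"},
    ],
)
# Output: "Deploying Inter-region VRF", then "Deploying Networking-US-Midwest"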

            Related image, diagram or screenshot

A.   Select the VRF template.

B.   Click on this option to view configuration payload and template versions.

C.   Click on Deploy.

All configurations required for Day-0 and Day-1 VXLAN EVPN Multi-Site operations between NDO and NDFC are completed. The following can be verified on NDFC as well.

VRF Deployment status on NDFC-A / US-Midwest Region:

            Related image, diagram or screenshot

VRF Deployment status on NDFC-B / US-East Region:

            Related image, diagram or screenshot

Repeat the same step to deploy the Network template and verify that the network shows as deployed on the NDFC instance managing the US-Midwest AZs and NDFC instance managing the US-East AZs.

            Related image, diagram or screenshot

            Related image, diagram or screenshot

Note: Once NDO manages NDFC fabrics, all overlay (VRFs and Networks) definitions and configurations are handled using NDO. Both creation and deletion of overlay definitions are under the control of NDO. Therefore, NDO becomes a single source of truth and orchestrator to manage these fabrics.

MSD Brownfield Import Using NDO

Nexus Dashboard Orchestrator supports Brownfield configuration import and the deployment of incremental Multi-Site scale-out architectures.

Let’s take a look at the following use-case:

Currently, NDFC instance-A manages the following Region and AZs using an MSD fabric template.

Region: South-Asia

AZ1: Mumbai

AZ2: Vadodara

One more Region and its AZs need VXLAN Multi-Site connectivity with South-Asia. The following Region and AZs are managed by NDFC instance-B:

Region: East-Asia

AZ1: Tokyo

AZ2: Osaka

In this specific example, the South-Asia sites are already part of an existing Multi-Site Domain (MSD). Since in the current implementation NDO can only import brownfield configuration from a single MSD domain, the sites belonging to the East-Asia region will be standalone. This way, we can use Nexus Dashboard Orchestrator to import the existing VXLAN MSD of the South-Asia region and then add the individual AZs of the East-Asia region to extend VXLAN Multi-Site across the regions.

NDFC-A instance with South-Asia region:

The AZs are part of the local NDFC MSD fabric.


Figure 16.  AZs part of the local NDFC-A MSD fabric

 

NDFC-B instance with East-Asia region:

The AZs are standalone VXLAN EVPN sites. Nexus Dashboard Orchestrator will be used to extend Multi-Site across the regions.


Figure 17.  Sites managed as standalone AZs on NDFC-B

 

As part of centralized orchestration and unified overlay policies, NDO will manage all these Regions and AZs. The network admin has been tasked with achieving the following:

  •   Extend an existing VRF from the South-Asia region to East-Asia.

  •   Extend a network from the South-Asia region to East-Asia that requires mobility across regions.

Note: Please refer to the use-cases shown in the previous sections of this document for more information on site creation, onboarding, management, and infra configuration, as the steps are similar for both greenfield and brownfield use cases.

Step 1. Verify current VXLAN EVPN MSD

The following figure shows that NDFC instance-A is locally managing the VXLAN EVPN Multi-Site domain between the AZs of South-Asia:

            Related image, diagram or screenshot

Step 2. NDO site on-boarding and configuring infrastructure

Once NDO starts managing the sites, the fabric technology is changed to “MSO/NDO Site Group.” Hence, the AZs of East-Asia have joined and formed VXLAN EVPN Multi-Site peering with the AZs of South-Asia.

            Related image, diagram or screenshot 

Step 3. Creating Schemas and Templates for the South-Asia region

            Related image, diagram or screenshot

A.   Schema name for the region.

B.   Template #1 for stretching the VRF between all the AZs of the Asia region. The VRF defined here will be stretched across the AZs of the South-Asia and East-Asia regions.

C.   Template #2 for stretching the Networks between all the AZs of the Asia region. The Network defined here will be stretched across the AZs of the South-Asia and East-Asia regions.

D.   Template #3 for creating and deploying Networks that are local between the AZs of the South-Asia region only.

Step 4. Site to Template association for the Asia region

Navigate to the schema page of the Asia region and associate the sites as follows:

            Related image, diagram or screenshot

Step 5. Brownfield import of existing overlays from NDFC to NDO

            Related image, diagram or screenshot

A.   Click on the Asia-Stretch-VRF template

B.   Click on the import button

C.   Select AZ1-Mumbai as a site

             Related image, diagram or screenshot

As shown above, select the VRF name as “ENG” and click on import. Repeat the same steps to import networks into respective templates.

            Related image, diagram or screenshot

Finally, repeat the same steps for AZ2-Vadodara as well. This way, NDO will have the VRF and Network associations and attachments applied on the AZ2-Vadodara site.

Step 6. Extend Overlays to the East-Asia region

At this point, we have completed the brownfield import of the existing overlay configuration from NDFC-A to NDO. The South-Asia region has two networks and one VRF available, and we will be extending these overlays to the East-Asia region.

Refer to the following examples for the overlay attachments:

Stretch-VRF attachment on AZ1-Tokyo devices:

            Related image, diagram or screenshot

Stretch-Network attachment on AZ1-Tokyo devices:

            Related image, diagram or screenshot 

Repeat the above step for AZ2-Osaka site.

Step 7. Verification of the Overlays and VRF attachment across regions.

VRF attachment in region 1:

            Related image, diagram or screenshot

VRF attachment in region 2

            Related image, diagram or screenshot

Stretch network across regions, networks attachment in region 1:

            Related image, diagram or screenshot

Networks attachment in region 2

            Related image, diagram or screenshot

Local networks for South-Asia region:

            Related image, diagram or screenshot

Conclusion

The introduction of the hierarchical model leveraging the Cisco Nexus Dashboard Orchestrator allows the interconnection of separate regions, each managed by its own NDFC controller. It provides a simplified mechanism for customers to extend connectivity across regions, providing end-to-end network and policy consistency and secure automated connectivity, all with a single point of orchestration, while maintaining clear change and fault domain separation between them. This allows customers to deploy large-scale and highly available architectures.

Additional Information

Additional documentation about Cisco Nexus Dashboard Orchestrator and Cisco Nexus Dashboard Fabric Controller can be found at the following links:

     Nexus Dashboard Fabric Controller

    Release Notes: Cisco Nexus Dashboard Fabric Controller Release Notes, Release 12.0.2f

    Compatibility Matrix: Cisco Nexus Dashboard Fabric Controller Compatibility Matrix, Release 12.0.2f

    Scalability Guide: Verified Scalability Guide for Cisco Nexus Dashboard Fabric Controller, Release 12.0.2f

    Configuration Guide: Cisco Nexus Dashboard Fabric Controller for LAN Configuration Guide, Release 12.0

     Nexus Dashboard Orchestrator

    Release Notes: Cisco Nexus Dashboard Orchestrator Release Notes, Release 3.7(1)

    Scalability Guide: Cisco Nexus Dashboard Orchestrator Verified Scalability Guide, Release 3.7(1)

    Configuration Guide: Cisco Nexus Dashboard Orchestrator Configuration Guide for NDFC (DCNM) Fabrics, Release 3.7(x)

     Nexus Dashboard

    Nexus Dashboard Capacity Planning Tool
