
Migrating Classic Ethernet Environments to VXLAN BGP EVPN

Updated: August 18, 2021


Introduction

This document describes the migration from a Classic Ethernet “brownfield” environment to a “greenfield” Virtual Extensible LAN (VXLAN) Border Gateway Protocol (BGP) Ethernet Virtual Private Network (EVPN) fabric. The main focus is extending the Classic Ethernet network to a VXLAN BGP EVPN fabric, including migration of the first-hop gateway, which in turn facilitates moving workloads from the old network to the new one. The migration use case includes connectivity to an external Layer 3 network.

This document covers the concepts of interconnecting a Classic Ethernet brownfield environment with a new VXLAN BGP EVPN fabric.

Limited background information is included on other related components whose understanding is required for the migration. (See the “For more information” section at the end of this document for where to find background information on VXLAN BGP EVPN, Classic Ethernet, and Cisco Virtual Port Channel.)

Migrating a brownfield network

The migration described in this document is often referred to as “Virtual Port Channel (VPC) back-to-back” and consists of interconnecting an existing brownfield network (based on Spanning Tree Protocol, VPC, or FabricPath technologies) to a newly developed VXLAN BGP EVPN fabric, with the end goal of migrating applications or workloads between those environments.

Figure 1 shows the migration methodology, highlighting the major steps required to migrate applications.

Figure 1.           Migration Steps


The steps of the migration methodology are as follows:

1.     First is the design and deployment of the new VXLAN BGP EVPN environment (the greenfield network). Such a deployment will likely start small, with plans to grow over time as the number of workloads increases. A typical VXLAN BGP EVPN fabric consists of a leaf-and-spine topology.

2.     Second is the integration between the existing data center network infrastructure (called the brownfield network) and the new VXLAN BGP EVPN fabric. Layer 2 and Layer 3 connectivity between the two networks is required for successful application and workload migration across the two network infrastructures.

3.     The third and final step consists of migrating workloads between the brownfield and the greenfield network. This application migration process will likely take several months to complete, depending on the number and complexity of the applications being migrated. The communication between the greenfield and brownfield networks, across the Layer 2 and Layer 3 connections established in step 2, is used during this phase.

Through the migration steps, the placement of the first-hop gateway needs to be carefully considered. For newly deployed Virtual LANs (VLANs) and associated IP subnets, the greenfield network is the desired place for hosting the first-hop gateway function.

For VLANs and associated IP subnets that are migrated from the brownfield to the greenfield network, the timing of the first-hop gateway migration can be chosen based on the following criteria:

      The time period when the majority of the workloads are migrated to the greenfield network

      Premigration of the first workload

      Postmigration of the last workload

The correct timing depends on many factors, with the most critical being when a possible disruption to the network can be accommodated.

Layer 2 interconnection

Interconnecting the brownfield network with the greenfield network via Layer 2 is crucial to facilitate seamless workload migration.

Note:      When seamless workload migration is not required, a Layer 2 interconnect between the brownfield and greenfield networks is not necessary. In these cases, a per-VLAN or per-IP subnet approach can be chosen for the migration. This approach does not provide a seamless migration, but it is a viable alternative where that trade-off is acceptable.

Figure 2 shows the brownfield-greenfield interconnection, highlighting the major components of the migration approach.

Figure 2.           Overview: Brownfield-greenfield Interconnection


For the Layer 2 interconnection, establish a double-sided VPC (Virtual Port-Channel for Classic Ethernet) between a pair of nodes in the greenfield (VXLAN) network and a pair of nodes in the brownfield (Classic Ethernet) network, the latter being the focus of this migration. The VPC domain in the Classic Ethernet network is interconnected with the VPC domain in the VXLAN BGP EVPN fabric. The double-sided VPC connection between the two network infrastructures allows a Layer 2 extension without risking a Layer 2 loop, while keeping all VPC links actively forwarding traffic.

The nodes chosen in the greenfield network can be border nodes or any other switches that provide the VXLAN BGP EVPN tunnel endpoint functionality. In the brownfield network, the nodes for the interconnection should represent the Layer 2–Layer 3 demarcation. In the case of Classic Ethernet, the Layer 2–Layer 3 demarcation is found at various locations, depending on the topology and the chosen first-hop gateway mode. The most commonly found Classic Ethernet with VPC deployment is an access-aggregation topology with the first-hop gateway at the aggregation nodes, using VPC and a traditional First-Hop Redundancy Protocol (FHRP) such as Hot Standby Router Protocol (HSRP).

Figures 3–5 depict these topologies and associated gateway placement options for the brownfield network.

Figure 3.           Access-aggregation with first-hop gateway at aggregation

 


 

The access-aggregation topology with VPC for Classic Ethernet shown in Figure 3 represents a brownfield network that was built with Spanning Tree Protocol or VPC technology.

The Layer 2–Layer 3 interconnection between the brownfield and the greenfield network would be placed at the aggregation nodes.

VPC considerations

VPC is typically used in the access or aggregation layer of a network. At the access layer, it is used for active-active connectivity from endpoints (server, switch, NAS storage device, etc.) to the VPC domain. At the aggregation layer, VPC is used for providing both active-active connectivity from access layer to the aggregation VPC domain, and active-active connectivity to the first-hop gateway along with HSRP or VRRP, for the Layer 2–Layer 3 demarcation.

However, because VPC provides capabilities to build a loop-free topology, it is also commonly used to interconnect two separate networks at Layer 2, allowing extension of the Layer 2 domain. For the scope of this document, VPC is used to interconnect the brownfield Classic Ethernet network with the greenfield VXLAN BGP EVPN network.

Figure 4.           Double-sided VPC (loop-free topology)


 

Note:      Using VPC for Layer 2 interconnection between the brownfield and greenfield networks makes all existing VPC best practices applicable.

VPC configuration

The configuration examples provided in this section highlight key concepts for interconnecting brownfield and greenfield networks.

Classic Ethernet VPC

The configuration example below shows a Classic Ethernet VPC domain in the brownfield network. Port-channel 1, comprising member ports Ethernet 1/47 and 1/48, represents the VPC peer-link, which is required to be an IEEE 802.1Q trunk (switchport mode trunk). In addition, port-channel 20 with VPC ID 20 is configured to provide the Layer 2 interconnection to the VXLAN BGP EVPN greenfield network. The virtual port-channel 20 has Ethernet interface 1/1 as a member port for the IEEE 802.1Q trunk and uses Link Aggregation Control Protocol (LACP).

Note:      With LACP, the VPC domain ID should be different in the brownfield and greenfield networks.

Classic Ethernet node 1

vpc domain 20

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface port-channel 1

  description VPC peer-link

  switchport mode trunk

  vpc peer-link

!

interface port-channel 20

  description virtual port-channel to greenfield

  switchport mode trunk

  vpc 20

!

interface Ethernet 1/1

  description member port of port-channel/VPC 20

  switchport mode trunk

  channel-group 20 mode active

!

interface ethernet 1/47

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

!

interface ethernet 1/48

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

Classic Ethernet node 2

vpc domain 20

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface port-channel 1

  description VPC peer-link

  switchport mode trunk

  vpc peer-link

!

interface port-channel 20

  description virtual port-channel to greenfield

  switchport mode trunk

  vpc 20

!

interface Ethernet 1/1

  description member port of port-channel/VPC 20

  switchport mode trunk

  channel-group 20 mode active

!

interface ethernet 1/47

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

!

interface ethernet 1/48

  description Member port VPC peer-link

  switchport mode trunk

  channel-group 1

VXLAN BGP EVPN VPC

The following configuration example shows a Cisco VXLAN BGP EVPN VPC domain in the greenfield network. The individual VXLAN tunnel endpoint (VTEP) IP addresses are 10.10.10.11 and 10.10.10.12, for nodes 1 and 2, respectively, and the anycast VTEP IP address is 10.10.10.100, shared across both nodes. Port-channel 1 represents the VPC peer-link, which is a traditional IEEE 802.1Q trunk (switchport mode trunk) with participating interfaces Ethernet 1/47 and 1/48. In addition, a port-channel with VPC ID 10 is configured to provide the Layer 2 interconnection to the brownfield Classic Ethernet network. The virtual port-channel 10 has interface Ethernet 1/1 as a member port for the IEEE 802.1Q trunk and uses LACP.

Note:      With LACP, the VPC domain ID should be different in the brownfield and greenfield networks.

VXLAN BGP EVPN node 1

vpc domain 10

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface loopback1

  description loopback for VTEP (NVE)

  ip address 10.10.10.11/32

  ip address 10.10.10.100/32 secondary

!

interface port-channel 1

  description VPC peer-link

  switchport mode trunk

  vpc peer-link

!

interface port-channel 10

  description virtual port-channel to brownfield

  switchport mode trunk

  vpc 10

!

interface Ethernet 1/1

  description member port of port-channel/VPC 10

  switchport mode trunk

  channel-group 10 mode active

!

interface ethernet 1/47

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

!

interface ethernet 1/48

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

VXLAN BGP EVPN node 2

vpc domain 10

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface loopback1

  description loopback for VTEP (NVE)

  ip address 10.10.10.12/32

  ip address 10.10.10.100/32 secondary

!

interface port-channel 1

  description VPC peer-link

  switchport mode trunk

  vpc peer-link

!

interface port-channel 10

  description virtual port-channel to brownfield

  switchport mode trunk

  vpc 10

!

interface Ethernet 1/1

  description member port of port-channel/VPC 10

  switchport mode trunk

  channel-group 10 mode active

!

interface ethernet 1/47

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

!

interface ethernet 1/48

  description member port VPC peer-link

  switchport mode trunk

  channel-group 1

Spanning-Tree considerations

VPC for Classic Ethernet supports not only endpoint connections but also connections to entire Classic Ethernet networks running Spanning Tree Protocol. Commonly, when a Classic Ethernet network is built, the Spanning Tree root is placed at the aggregation nodes.
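As a sketch, the aggregation nodes can be kept as the Spanning Tree root by configuring a low priority value on them. The VLAN range and the value 8192 here are assumptions for illustration; any value numerically lower than the one used on the VXLAN BGP EVPN nodes has the same effect.

Classic Ethernet aggregation nodes (sketch)

spanning-tree vlan 1-4094 priority 8192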

Figure 5.           Layer 2 interconnect with loop


In contrast to a Classic Ethernet network, a VXLAN BGP EVPN network has no specific requirement for Spanning Tree. Although best practice dictates that every VTEP act as the Spanning Tree root, the VXLAN overlay itself is not aware of Bridge Protocol Data Units (BPDUs) or Spanning Tree–related forwarding state, nor does it forward them. With the Classic Ethernet network being the Spanning Tree root, the connected VTEPs should have their Spanning Tree root port toward the Classic Ethernet network. As a result, it is crucial that only a single active Layer 2 connection, logical or physical, exist between the brownfield and greenfield networks. Otherwise, a Layer 2 loop results, as shown in Figure 5. The single active connection can be achieved by using a double-sided VPC connection or by manual VLAN distribution (see Figure 6).

Figure 6.           Loop-free Layer 2 interconnect (options)


Note:      Spanning Tree BPDUs from the Classic Ethernet network are sent towards the VTEPs, but the VXLAN overlay does not forward the BPDUs, nor does it perform any blocking action on the VXLAN tunnel. As a result, a Layer 2 Loop can occur, so proper design of the Layer 2 interconnect is critical.

Spanning Tree configuration

The examples in this section highlight key concepts for interconnecting the brownfield and greenfield network as well as the Spanning Tree caveats. All best practices for Spanning Tree with Classic Ethernet VPC and VXLAN BGP EVPN VPC are applicable, whether or not shown in these examples.

VXLAN BGP EVPN Spanning Tree and VPC

The example below shows a Cisco VXLAN BGP EVPN VPC domain in the greenfield network. The individual VTEP IP addresses are 10.10.10.11 and 10.10.10.12, for nodes 1 and 2, respectively, and the anycast VTEP IP address is 10.10.10.100, shared across both VXLAN nodes. The same Spanning Tree priority is set on both VXLAN nodes, with a numerically higher (less preferred) value than on the Classic Ethernet nodes, so that the Classic Ethernet nodes remain the Spanning Tree root.

Note:      The VXLAN overlay does not forward BPDUs; hence, no Spanning Tree blocking ports exist for the overlay. Best practice dictates setting the lowest Spanning Tree priority (root) on all the VXLAN BGP EVPN nodes, but during this migration the Classic Ethernet network needs to be the root, so the best practice has to be set aside for the duration of the migration.

VXLAN BGP EVPN node 1

vpc domain 10

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface loopback1

  description loopback for VTEP (NVE)

  ip address 10.10.10.11/32

  ip address 10.10.10.100/32 secondary

!

spanning-tree vlan 1-4094 priority 32768

VXLAN BGP EVPN node 2

vpc domain 10

  peer-switch

  peer-gateway

  ipv6 nd synchronize

  ip arp synchronize

!

interface loopback1

  description loopback for VTEP (NVE)

  ip address 10.10.10.12/32

  ip address 10.10.10.100/32 secondary

!

spanning-tree vlan 1-4094 priority 32768

Note:      The Spanning Tree root requirement is specific to the Cisco Nexus 7000 implementation and mismatches the requirement for interconnecting with a Classic Ethernet network that is itself the root; therefore, a BPDU filter must be used on the Layer 2 interconnect interfaces. Alternate methods are valid as long as the requirement for Spanning Tree root placement on the Classic Ethernet and VXLAN sides is met.
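As an illustrative sketch, such a BPDU filter can be enabled on the Layer 2 interconnect port-channel of the VXLAN BGP EVPN nodes; the interface and VPC numbering below reuse the earlier examples.

VXLAN BGP EVPN nodes (sketch)

interface port-channel 10
  description virtual port-channel to brownfield
  switchport mode trunk
  spanning-tree bpdufilter enable
  vpc 10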

VLAN mapping

In Classic Ethernet, with or without VPC, all VLANs have to be configured to allow the respective Layer 2 traffic to be forwarded from one Classic Ethernet–enabled node to another. Classic Ethernet primarily uses the traditional 12-bit VLAN namespace (Figure 7), which allows for approximately 4000 VLANs.

Note:      When traffic exits an Ethernet port, traditional Ethernet and VLAN semantics are used (Figure 7). Multiple VLANs can be transported over a single IEEE 802.1Q trunk toward an endpoint or Ethernet switch.

Figure 7.           Ethernet namespace


With VXLAN, with or without VPC, VLANs do not exist between the VTEPs. Instead of using the VLAN namespace within a VXLAN-enabled fabric, mapping is done on the nodes performing the VTEP function. At the VTEP, the Ethernet VLAN identifier is mapped to a VXLAN Network Identifier (VNI) through configuration. As a result, the VLAN itself becomes locally significant to the VTEP, whereas a different namespace is used when communication is transported between VTEPs. VXLAN provides a more extensive namespace by allowing approximately 16 million unique identifiers in its 24-bit namespace (Figure 8).

Figure 8.           VXLAN namespace


Given these different approaches taken by Classic Ethernet and the VXLAN BGP EVPN fabrics, the VLAN mapping is not required to be consistent across all the network nodes in either brownfield or greenfield networks.

The following two scenarios show different VLAN-mapping approaches available for the Classic Ethernet to VXLAN BGP EVPN migration.

Scenario 1: 1:1 mapping between VLANs

The first scenario follows a consistent mapping, in which the VLANs on every Ethernet-enabled node are the same. From the first Classic Ethernet node (ingress), the VLAN stays consistent until it reaches the first VTEP (ingress). At this point, the VLAN is mapped to a VNI and transported across the overlay. At the destination VTEP (egress), the VNI is mapped back to the originally used VLAN. This scenario is referred to as 1:1 mapping or consistent VLAN usage (Figure 9).

Figure 9.           Consistent VLAN mapping


 

As shown in the example below, the drawback of using the same VLAN mapping across all nodes is that, even though VXLAN can support a significantly larger namespace, the number of Layer 2 identifiers available across both networks remains limited to the VLAN namespace.

VLAN mapping—Ingress Classic Ethernet node

vlan 10

VLAN mapping—Egress Classic Ethernet node

vlan 10

VLAN mapping—Ingress VXLAN node

vlan 10

  vn-segment 30001

VLAN mapping—Egress VXLAN node

vlan 10

  vn-segment 30001

Scenario 2: Mapping between different VLANs

The second scenario provides a flexible mapping option for VLANs. From the first Classic Ethernet node (ingress), the VLAN stays consistent until it reaches the first VTEP (ingress). At this point, the VLAN is mapped to a VNI and transported across the overlay. At the destination VTEP (egress), the VNI is mapped to a different VLAN (see Figure 10).

Figure 10.        Flexible VLAN Mapping


 

In addition to the flexible VLAN mapping, the port-VLAN translation approach in VXLAN can provide additional flexibility. This approach allows translation of the incoming VLAN from the brownfield (Classic Ethernet) network so that the VXLAN environment never learns the originally used Classic Ethernet VLAN (see Figure 11).

Figure 11.        Flexible VLAN mapping with port-VLAN translation


 

The drawback of this scenario is that VLANs change at various stages. While this method allows use of VXLAN’s larger namespace, the translations and mappings at the various stages can introduce operational complexity.

VLAN mapping—Ingress Classic Ethernet node

vlan 10

VLAN mapping—Egress Classic Ethernet node

vlan 10

VLAN mapping—Ingress VXLAN node (without port-VLAN)

vlan 10

  vn-segment 30001

VLAN mapping—Ingress VXLAN node (with port-VLAN)

vlan 23

  vn-segment 30001

 

interface port-channel 10

  switchport vlan mapping enable

  switchport vlan mapping 10 23

  switchport trunk allowed vlan 23

VLAN mapping—Egress VXLAN node

vlan 55

  vn-segment 30001

Layer 3 interconnection

Interconnecting a brownfield network with a greenfield network via Layer 3 is crucial to allow communication between the endpoints in different IP subnets at various stages of the migration (Figures 12–13). The idea is to allow endpoints the ability to communicate with other endpoints in the same subnet or different subnets before, during, and after migration.

Note:      Even when seamless workload migration is not required, a Layer 3 interconnect between brownfield and greenfield is necessary. However, the requirement for a direct interconnection can be relaxed, and external connectivity of the individual environments can be used for a per-subnet migration.

Figure 12.        Overview: Brownfield-greenfield interconnection (direct)


 

Figure 13.        Overview: Brownfield-greenfield interconnection (datacenter core or WAN)


 

For the Layer 3 interconnection, establish a routing peering session between a pair of nodes in the greenfield (VXLAN) network and a pair of nodes in the brownfield (Classic Ethernet) network. During the migration from a Classic Ethernet network to a VXLAN BGP EVPN network, you interconnect the networks with a Virtual Routing and Forwarding (VRF)-aware approach, thereby using the multitenancy capability present in the greenfield VXLAN BGP EVPN network.

Note:      Workloads or endpoints in the VXLAN BGP EVPN network are always present in a VRF instance other than VRF default or management.

As mentioned earlier, the nodes chosen in the greenfield network can be border nodes or any other switches that provide the VXLAN BGP EVPN tunnel endpoint functionality. In the brownfield network, the nodes for the interconnection should represent the Layer 2–Layer 3 demarcation. In Classic Ethernet, that demarcation is often found at the aggregation nodes. This topology is referred to as access-aggregation with the first-hop gateway at the aggregation nodes, using VPC and a traditional FHRP (HSRP).

Note:      This guide considers the Layer 2–Layer 3 interconnect to be separate connections, hence separate physical interfaces are used. In certain scenarios, the same physical connection can be employed for carrying Layer 2 and Layer 3 traffic with the use of the dynamic-routing-over-VPC feature. However, for this scenario, this feature must be supported on both the Classic Ethernet VPC as well as in the VXLAN BGP EVPN VPC environment.
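As a sketch, and only on NX-OS platforms and releases where routing over VPC is supported, the feature is enabled in the VPC domain configuration roughly as follows; it must be enabled on both the Classic Ethernet and the VXLAN BGP EVPN VPC domains.

VPC domain with routing over VPC (sketch)

vpc domain 10
  peer-gateway
  layer3 peer-router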

Routing protocol choice

Several factors must be considered when choosing a routing protocol. Many or all routing protocols may be viable for providing Layer 3 route exchange between network nodes, but for a migration from a Classic Ethernet network to a VXLAN BGP EVPN network, the following considerations are important in the context of this guide:

      Greenfield network with VXLAN BGP EVPN

      Clean routing domain separation

      Extensive routing policy capability

      VRF awareness

Given that BGP provides these capabilities and meets the requirements, we focus on the Layer 3 interconnection with external BGP (eBGP) as the routing protocol of choice.

Note:      Other routing protocols can equally accommodate the requirement for the Layer 3 interconnect, but they might require additional redistribution configuration.
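For example, if the brownfield network used Open Shortest Path First (OSPF) toward the rest of its infrastructure, routes learned over the eBGP interconnect would need to be redistributed. The following is a hedged sketch with illustrative names, assuming OSPF process 1 and the Tenant-A VRF used in the examples below; note that NX-OS requires a route-map on redistribution.

OSPF redistribution (sketch)

route-map ALLOW-ALL permit 10
!
router ospf 1
  vrf Tenant-A
    redistribute bgp 65502 route-map ALLOW-ALL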

Note:      By using VXLAN BGP EVPN in the greenfield network and eBGP for the Layer 3 interconnect, all host routes (/32 and /128) are by default advertised to the eBGP peers in the brownfield network. For migration, it might be beneficial to filter out these host routes so as not to overwhelm the available routing table scale in the brownfield environment; the brownfield environment needs only the non-host routing prefixes for reachability.
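A minimal sketch of such host-route filtering on the greenfield border node follows. The prefix-list and route-map names are illustrative, and only the IPv4 case is shown; an equivalent ipv6 prefix-list would cover /128 routes.

Host-route filtering—VXLAN BGP EVPN border node (sketch)

ip prefix-list NO-HOST-ROUTES seq 5 permit 0.0.0.0/0 le 31
!
route-map FILTER-HOST-ROUTES permit 10
  match ip address prefix-list NO-HOST-ROUTES
!
router bgp 65501
  vrf Tenant-A
    neighbor 10.1.1.2
      address-family ipv4 unicast
        route-map FILTER-HOST-ROUTES out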

VRF mapping

Note:      By using VRF-lite for the Layer 3 interconnect between brownfield and greenfield networks, all existing best practices for VXLAN BGP EVPN and VRF-lite are applicable, even though some configurations may have been omitted for the sake of brevity.

Scenario 1: 1:1 Mapping between VRFs

The first scenario follows a consistent mapping where all the VRFs from the Classic Ethernet network are mapped to a matching VRF in the VXLAN BGP EVPN network. To accommodate this mapping, employ a VRF-lite approach by using subinterfaces and Layer 3 ECMP at the interconnect. The result is per-VRF eBGP peering at the Layer 2–Layer 3 demarcation node in the brownfield Classic Ethernet network and at the VXLAN BGP EVPN border node in the greenfield network. A point-to-point IP subnet per-VRF is employed, and the routing table between the two environments is exchanged. For the IP subnets in the Classic Ethernet network, ensure that the associated network prefixes are advertised into BGP. In the example in Figure 14, Switched Virtual Interface (SVI) 10 is instantiated on the VXLAN BGP EVPN network with distributed IP anycast gateway 192.168.10.1. The first-hop gateway for IP subnet 192.168.20.0/24 is instantiated on the brownfield Classic Ethernet network with HSRP. Routed traffic between these two subnets traverses the Layer 3 interconnect between the two networks.

Figure 14.        Consistent per-VRF mapping


 

Layer 3 configuration—Classic Ethernet aggregation node (named-to-named)

vlan 20

!

vrf context Tenant-A

!

interface vlan 20

  vrf member Tenant-A

  ip address 192.168.20.201/24

  hsrp 10

  ip 192.168.20.1

!

interface ethernet 1/10

  no switchport

!

interface ethernet 1/10.20

  encapsulation dot1q 20

  vrf member Tenant-A

  ip address 10.1.1.2/30

!

router bgp 65502

  vrf Tenant-A

    address-family ipv4 unicast

      network 192.168.20.0/24

    neighbor 10.1.1.1

      remote-as 65501

      update-source Ethernet1/10.20

      address-family ipv4 unicast

Layer 3 configuration—VXLAN BGP EVPN border node (named-to-named)

vlan 2001

  vn-segment 50001

!

interface vlan 2001

  vrf member Tenant-A

   ip forward

   no ip redirects

   no shutdown

!

vrf context Tenant-A

  vni 50001

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

!

interface nve 1

  member vni 50001 associate-vrf

!

interface ethernet 1/10

  no switchport

!

interface ethernet 1/10.20

  encapsulation dot1q 20

  vrf member Tenant-A

  ip address 10.1.1.1/30

!

router bgp 65501

  vrf Tenant-A

    address-family ipv4 unicast

      advertise l2vpn evpn

    neighbor 10.1.1.2

      remote-as 65502

      update-source Ethernet1/10.20

      address-family ipv4 unicast

Scenario 2: Mapping from default VRF

The second scenario follows a many-to-one mapping where the VRF “default” in the Classic Ethernet network is mapped to a named VRF in the VXLAN BGP EVPN network (Figure 15). For this mapping, we employ a VRF-lite approach using the physical interface in the brownfield and greenfield networks. For redundancy and load sharing, Layer 3 ECMP is used at the interconnect. As a result, there is one eBGP peering in the VRF default (global routing table/underlay) at the Layer 2–Layer 3 demarcation node in the brownfield Classic Ethernet network, and a named VRF eBGP peering at the VXLAN BGP EVPN border node in the greenfield network. As before, a point-to-point IP subnet is used for peering, and the routing table between the two environments is exchanged. For each IP subnet in the Classic Ethernet network, we ensure that the associated network prefixes are advertised into BGP.

Figure 15.        VRF default to VRF Tenant-A


 

Layer 3 configuration—Classic Ethernet aggregation node (default-to-named)

vlan 20

!

interface vlan 20

  ip address 192.168.20.201/24

  hsrp 10

  ip 192.168.20.1

!

interface ethernet 1/10

  no switchport

  ip address 10.1.1.2/30

!

router bgp 65502

  address-family ipv4 unicast

    network 192.168.20.0/24

  neighbor 10.1.1.1

    remote-as 65501

    update-source Ethernet1/10

    address-family ipv4 unicast

Layer 3 configuration—VXLAN BGP EVPN border node (default-to-named)

vlan 2001

  vn-segment 50001

!

interface vlan 2001

  vrf member Tenant-A

  ip forward

  no ip redirects

  no shutdown

!

vrf context Tenant-A

  vni 50001

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

!

interface nve 1

  member vni 50001 associate-vrf

!

interface ethernet 1/10

  no switchport

  vrf member Tenant-A

  ip address 10.1.1.1/30

!

router bgp 65501

  vrf Tenant-A

    address-family ipv4 unicast

      advertise l2vpn evpn

    neighbor 10.1.1.2

      remote-as 65502

      update-source Ethernet1/10

      address-family ipv4 unicast

If it is necessary to allow the VXLAN BGP EVPN underlay to be reachable from the Classic Ethernet network, an extra eBGP peering session can be established from the brownfield VRF default to the greenfield VRF default (Figure 16). Because we require a routing session from the VXLAN BGP EVPN network in both the VRF default and VRF Tenant-A into the VRF default on the Classic Ethernet side, we either need two physical interfaces or use subinterfaces.

The example below shows how this can be achieved using subinterfaces. As before, SVI 20 (HSRP) and SVI 20 (DAG) have been instantiated on the brownfield and greenfield networks; in this example, 10.10.10.0/24 is the underlay subnet on the greenfield VXLAN network that needs to be advertised to the brownfield Classic Ethernet network.

Figure 16.        VRF default to VRF default and Tenant-A


 

Layer 3 configuration—Classic Ethernet aggregation node (default-to-default/named)

vlan 20

!

interface vlan 20

  ip address 192.168.20.201/24

  hsrp 10

  ip 192.168.20.1

!

interface ethernet 1/10

  no switchport

  ip address 10.1.0.2/30

!

interface ethernet 1/10.20

  encapsulation dot1q 20

  ip address 10.1.1.2/30

!

router bgp 65502

  address-family ipv4 unicast

    network 192.168.20.0/24

  neighbor 10.1.0.1

    remote-as 65501

    update-source Ethernet1/10

    address-family ipv4 unicast

  neighbor 10.1.1.1

    remote-as 65501

    update-source Ethernet1/10.20

    address-family ipv4 unicast

Layer 3 configuration—VXLAN BGP EVPN border node (default-to-default/named)

vlan 2001

  vn-segment 50001

!

interface vlan 2001

  vrf member Tenant-A

  ip forward

  no ip redirects

  no shutdown

!

vrf context Tenant-A

  vni 50001

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

!

interface nve 1

  member vni 50001 associate-vrf

!

interface ethernet 1/10

  no switchport

  ip address 10.1.0.1/30

!

interface ethernet 1/10.20

  encapsulation dot1q 20

  vrf member Tenant-A

  ip address 10.1.1.1/30

!

router bgp 65501

  address-family ipv4 unicast

    network 10.10.10.0/24

  neighbor 10.1.0.2

    remote-as 65502

    update-source Ethernet1/10

    address-family ipv4 unicast

  vrf Tenant-A

    address-family ipv4 unicast

      advertise l2vpn evpn

    neighbor 10.1.1.2

      remote-as 65502

      update-source Ethernet1/10.20

      address-family ipv4 unicast

Default gateway migration considerations

While interconnecting the brownfield network with the greenfield network is an important task, the placement of the first-hop gateway is equally important. During migration from a Classic Ethernet network to a VXLAN BGP EVPN network, the first-hop gateway cannot simultaneously be active in both the brownfield and greenfield network, because the two first-hop gateways operate in different modes. While the brownfield operates in a traditional FHRP or anycast HSRP mode, the VXLAN BGP EVPN greenfield uses the distributed IP anycast gateway (DAG). These two different first-hop gateway modes are not compatible and cannot be active at the same time.

Scenario 1: Centralized first-hop gateway

Because migration starts from the brownfield network, the first-hop gateway used to establish communication between IP subnets is initially maintained there. This placement implies that the VXLAN BGP EVPN fabric initially provides only Layer 2 services, and the endpoints already migrated to the VXLAN BGP EVPN fabric send traffic to the brownfield network across the Layer 2 interconnect. Intersubnet (routed) traffic from and to endpoints in the greenfield network trombones over the Layer 2 interconnect to reach the first-hop gateway on the brownfield side, as shown in Figure 17.

Figure 17.        First-hop gateway in brownfield network


 

Once all the workloads of a given IP subnet (VLAN) are migrated into the VXLAN BGP EVPN fabric, the first-hop gateway can also be migrated into the VXLAN BGP EVPN domain. This migration is done by turning on DAG routing in the VLAN or VNI associated with the corresponding IP subnet and deconfiguring the first-hop gateway function on the brownfield network devices (Figure 18). This way, the border nodes never need to host the distributed IP anycast gateway, assuming they have no directly attached workloads.

Figure 18.        First-hop gateway in brownfield and greenfield networks


 

First-hop configuration—Classic Ethernet aggregation node

vlan 20

!

vrf context Tenant-A

!

interface vlan 20

  vrf member Tenant-A

  ip address 192.168.20.201/24

  hsrp 10

  ip 192.168.20.1

!

First-hop configuration—VXLAN BGP EVPN leaf node

fabric forwarding anycast-gateway-mac 2020.0000.00aa

!

vlan 10

  vn-segment 30001

!

vrf context Tenant-A

  vni 50001

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

!

interface vlan 10

  vrf member Tenant-A

  ip address 192.168.10.1/24

  fabric forwarding mode anycast-gateway

Scenario 2: Anycast first-hop gateway

In the second scenario, the first-hop gateway is immediately migrated from the brownfield network to the greenfield network before the workload migration begins (Figure 19). In this approach, no change to the migration infrastructure is required once migration begins. In contrast to the first scenario, where a centralized first-hop gateway is used initially and the function is moved to a DAG only after all endpoints in the associated subnet are migrated, here we move to the DAG first and maintain that state throughout the lifecycle of the network. Note that in this scenario, the DAG is also instantiated at the border nodes, which serve as the first-hop gateway for the workloads remaining in the brownfield environment. As workloads move over to the VXLAN BGP EVPN network, their directly attached leaf takes over the first-hop gateway functionality.

Figure 19.        First-hop gateway greenfield network only


 

First-hop configuration—VXLAN BGP EVPN nodes

fabric forwarding anycast-gateway-mac 2020.0000.00aa

!

vlan 10

  vn-segment 30001

!

vlan 20

  vn-segment 30002

!

vrf context Tenant-A

  vni 50001

  rd auto

  address-family ipv4 unicast

    route-target both auto

    route-target both auto evpn

!

interface vlan 10

  vrf member Tenant-A

  ip address 192.168.10.1/24

  fabric forwarding mode anycast-gateway

!

interface vlan 20

  vrf member Tenant-A

  ip address 192.168.20.1/24

  fabric forwarding mode anycast-gateway

Neither first-hop gateway migration approach is preferred over the other; each has its advantages and disadvantages. In favor of the second scenario is the fact that the DAG is used early, providing experience with it before the major workloads are migrated. On the other hand, scenario 2 has the disadvantage that, until the workloads have been migrated, traffic from the brownfield trombones over the Layer 2 interconnect to reach the first-hop gateway.

Regardless of the chosen scenario, the preparatory steps required before the migration begins are similar.

Premigration preparation—First-hop gateway

For the first-hop gateway migration, make sure that the change is as seamless as possible for the endpoints. The endpoints are typically configured with a default gateway IP to reach any destination outside their local IP subnet. The default gateway IP-to-MAC binding at the endpoint is resolved via the Address Resolution Protocol (ARP). Although it is easy to align the IP addresses from FHRP to the DAG, the alignment of the virtual MAC address to the anycast gateway MAC requires additional considerations.

With HSRP, the virtual MAC address for the first-hop gateway is derived from the HSRP version (1 or 2) and the configured HSRP group. It is commonly seen that HSRP groups change on a per-VLAN or per-SVI basis. The DAG used in VXLAN BGP EVPN follows a different approach from the per-group virtual MAC employed by HSRP. For the DAG, a global anycast gateway MAC is defined. This means that the virtual MAC—or more accurately the anycast gateway MAC—is the same for all first-hop gateways on the given node. In fact, the same anycast gateway MAC is shared by all the nodes in a given fabric.

Clearly, with these different approaches for virtual MAC assignments, some mechanism is needed to align the virtual MACs to allow a migration from the HSRP MAC to the anycast gateway MAC.

Since the endpoints are part of the brownfield network, they store the default gateway IP–to–HSRP virtual MAC binding in their ARP cache. Eventually, when the DAG is enabled, the ARP cache of the endpoints should be updated to have the gateway IP mapped to the anycast gateway MAC. Clearly, manually updating the ARP cache of every endpoint is tedious and impractical. Hence, in the brownfield network, even before starting the migration, the HSRP virtual MAC address for each VLAN or subnet should be updated to be the same as the anycast gateway MAC, via a configuration update, as shown here:

HSRP virtual MAC configuration—Classic Ethernet aggregation nodes

interface vlan 20

  vrf member Tenant-A

  ip address 192.168.20.201/24

  hsrp 10

  ip 192.168.20.1

  mac-address 2020.0000.00aa

Anycast gateway MAC configuration—VXLAN BGP EVPN nodes

fabric forwarding anycast-gateway-mac 2020.0000.00aa

After the change from the HSRP group-based virtual MAC address on the brownfield (Classic Ethernet) side to the anycast gateway MAC, we must ensure that all endpoints learn about that change. Changing the state of FHRP from active to standby causes the first-hop gateway instance to send out a gratuitous ARP (GARP) message informing all endpoints of the updated IP-to-MAC binding. The endpoints then either update their ARP cache or invalidate it and trigger an ARP request for the first-hop gateway’s MAC address. Either way, the new virtual MAC address (the anycast gateway MAC) for the first-hop gateway is learned by the endpoints.
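One illustrative way to force such an active-standby state change is to enable preemption and raise the HSRP priority on the current standby node; the priority value 150 is an assumption and simply needs to exceed the value on the currently active node.

HSRP state change—Classic Ethernet aggregation node (sketch)

interface vlan 20
  hsrp 10
    preempt
    priority 150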

Note:      Changing the FHRP virtual MAC followed by a state change (active-standby) gives the highest probability that connected endpoints relearn the first-hop gateway’s new virtual MAC address. Nonetheless, some endpoints may not honor the GARP signaling or may have a static MAC entry for the first-hop gateway. These endpoints require manual intervention to flush their ARP cache; hence, we recommend performing this action during a maintenance window.

Once the premigration steps for the first-hop gateway are complete, the migration of workloads can be performed seamlessly. At the time when the old first-hop gateway (HSRP) is disabled and the new first-hop gateway (DAG) is enabled, a small traffic disruption may be observed; hence, we recommend performing such first-hop gateway changes during a maintenance window. We reiterate that, for a given IP subnet or VLAN, FHRP in the brownfield network and the DAG in the greenfield network should never be enabled at the same time. Otherwise, unexpected forwarding behavior, ARP table misprogramming, and traffic forwarding failures can result.
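A minimal cutover sketch for a single subnet, reusing the addressing from the earlier examples, follows; the ordering (disable the FHRP gateway first, then enable the DAG) is the essential point.

Step 1—Classic Ethernet aggregation nodes (sketch)

interface vlan 20
  shutdown

Step 2—VXLAN BGP EVPN leaf nodes (sketch)

interface vlan 20
  vrf member Tenant-A
  ip address 192.168.20.1/24
  fabric forwarding mode anycast-gateway
  no shutdown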

Migration walkthrough

The preceding sections gave a detailed account of different aspects of migrating a brownfield Classic Ethernet network to a greenfield VXLAN BGP EVPN network. Although the individual steps have been explained, we have not yet described the migration process in a chronological order. This section summarizes the main steps of the migration.

Locating the interconnection nodes in the brownfield and greenfield network

It is important to identify where the Layer 2–Layer 3 demarcation exists in the brownfield network (Figure 20). In the greenfield network, the interconnection point can be at any border node or similar node that can serve the routing and bridging requirements.

Figure 20.        Interconnection location


 

Building a Layer 3 interconnect

The Layer 3 interconnect or Layer 3 external connectivity has to exist in the brownfield and greenfield networks (Figure 21). Ensure that the IP subnets and associated prefixes local to each environment are advertised and learned in the adjacent network.

Figure 21.        Layer 3 interconnect


 

Building a Layer 2 interconnect

The Layer 2 interconnect is necessary only if seamless workload mobility and first-hop gateway sharing are required. Likewise, if the brownfield and greenfield networks need to share the same IP subnet, the Layer 2 interconnect is necessary (Figure 22).

Figure 22.        Layer 2 interconnect


 

Defining the first-hop gateway approach

The choice of first-hop gateway approach depends on whether the brownfield network provides the first-hop gateway during the migration (Scenario 1) or the greenfield network takes over this function as soon as possible (Scenario 2). The two first-hop gateway modes (HSRP and DAG) cannot be simultaneously enabled for the same IP subnet. Only one first-hop gateway mode can be enabled at a time, with the goal being to migrate to the DAG by the end of the migration. See Figure 23.

Figure 23.        Layer 2–Layer 3 demarcation (FHRP) as first-hop gateway


 

Aligning the first-hop gateway information (virtual MAC and virtual IP)

To facilitate seamless migration of the first-hop gateway, the virtual MAC and first-hop gateway IP address must be aligned first. To ensure that all endpoints learn the new virtual MAC (specifically the anycast gateway MAC) for the first-hop gateway, a state change has to be performed on the FHRP-based first-hop gateway in the brownfield network.

Performing the workload migration

Once the interconnection at Layer 2 and Layer 3 is ready and the first-hop gateway has been aligned accordingly, workloads can be migrated between the brownfield and the greenfield networks (Figure 24). This migration can be performed by using virtual machine mobility (cold or hot move) or by physically recabling workloads to the greenfield network.

Figure 24.        Workload Migration


 

Migrating and decommissioning the unnecessary first-hop gateway

Once the workloads have been migrated, the brownfield first-hop gateway can be decommissioned (Figure 25) and the greenfield first-hop gateway activated (Scenario 1). Decommissioning is not necessary in Scenario 2, where the DAG is enabled on the greenfield network before the workload migration begins.

Figure 25.        Decommission first-hop gateway


 

Decommissioning the Layer 2 interconnect

Although the Layer 3 external connectivity or interconnect might remain necessary for the lifecycle of the remaining resources in the brownfield network, the Layer 2 interconnect for the first-hop gateway can be decommissioned once the workload migration is complete. It is a good practice not to have any Layer 2 interconnects if they are not required, to avoid any possibility of Layer 2 loops (Figure 26).

Figure 26.        Decommission Layer 2 interconnect


 


 

Cisco Data Center Network Manager

Using DCNM for Migration

Among the many capabilities of Cisco Data Center Network Manager (DCNM) 11 software, perhaps the most appealing is its ability to manage multiple network deployments across the Cisco Nexus family of switches. The same DCNM instance can manage legacy three-tier access-aggregation-core deployments, FabricPath deployments, routed fabrics, and VXLAN BGP EVPN deployments. Even more compelling is the ability of DCNM to manage both brownfield and greenfield networks (Figure 27). DCNM supports Day-0 network provisioning using a flexible, customizable bootstrap workflow for device onboarding; Day-1 provisioning using configuration templates or profiles; and Day-2 network performance monitoring and troubleshooting. A Configuration Compliance engine closes the operations loop by continuously checking the DCNM-defined intent against what is configured on the switch. Any deviation is detected and flagged, and an appropriate remediation action is provided to bring the switch back in sync. DCNM groups switches that belong to a given network deployment into what are called fabrics. For more information on Day-0/Day-1/Day-2 VXLAN EVPN–based LAN provisioning, refer to the Cisco DCNM LAN Fabric Configuration Guide.

Figure 27.        Cisco Data Center Network Manager managing brownfield and greenfield deployments.

 


 

When migrating a brownfield Classic Ethernet network to a greenfield VXLAN BGP EVPN network, the Cisco Data Center Network Manager solution can help in the following ways:

      Setting up the greenfield VXLAN BGP EVPN network via POAP/Bootstrap

      Setting up the Layer 3 Interconnect from the greenfield VXLAN BGP EVPN network to the brownfield Classic Ethernet network

      Setting up the VPC connection between the brownfield Classic Ethernet network and the greenfield VXLAN BGP EVPN network (Layer 2 interconnect)

      Helping migrate the first-hop gateway from the brownfield network to the greenfield network

Performing Migration using DCNM

Once the greenfield VXLAN EVPN fabric is provisioned via the DCNM Fabric Builder workflow, the VXLAN BGP EVPN overlay configuration can be instantiated on the Cisco Nexus switches via a top-down push mechanism using configuration profile templates (Figure 28). Once the Layer 2–Layer 3 interconnect has been established and the premigration steps completed, VXLAN overlay top-down provisioning can be employed to push the appropriate Layer 2 configuration to the switches. By selecting the “Layer 2 Only” option for a network, initially only the Layer 2 configuration of the associated network is deployed to the switches.

Figure 28.        Data Center Network Manager deploying VXLAN network without gateway on the leaf switches

 


 

Figure 29 shows the preview screen for the configuration that is pushed down to the selected VXLAN BGP EVPN leaf switches.

Figure 29.        Preview of the VXLAN configuration that is pushed to the leaf switches

 


 

Now you can start deploying new workloads in these networks on the VXLAN EVPN fabric. In addition, existing workloads can be migrated over to the VXLAN EVPN fabric. All routed traffic from and to the workloads in the VXLAN EVPN fabric is still forwarded via the centralized gateway on the Classic Ethernet network side.

Separately, the VRF can be deployed ahead of time to the leaf and border switches on the VXLAN EVPN fabric to keep the switches prepared. When all the endpoints for a given IP subnet (network) have been migrated from the brownfield network to the greenfield network, it is time to decommission the first-hop gateway on the brownfield Classic Ethernet network and enable the distributed IP anycast gateway on the greenfield VXLAN BGP EVPN network (Figure 30). Essentially, the gateway IP address, with the appropriate network mask, is filled in with the default gateway information for that subnet or network.

Figure 30.        Toggle the Layer 3 gateway flag to trigger redeploy of the network to the appropriate leaf switches

 


 

The change on the first-hop gateway can be performed using a script-based approach that shuts the FHRP-based first-hop gateway in the brownfield network or removes the FHRP configuration for that IP subnet. Once this step is done, the DAG is pushed down to the switches on the greenfield network. Figure 31 depicts the DAG configuration that will be pushed to all switches on the VXLAN EVPN side. Both of these tasks can be performed through the DCNM Representational State Transfer (REST) APIs, which can trigger a configuration template job (for the brownfield network task) and the top-down fabric provisioning (for the greenfield network). As a result, the DAG becomes the first-hop gateway for the entire greenfield network.

Figure 31.        Preview of the distributed anycast gateway configuration that will be pushed to the leaf switches


 

For more information

Learn about VXLAN BGP EVPN.

Learn about VPC For Classic Ethernet.
