Migrating Existing Networks to ACI
December 23, 2015
Migrating a Brownfield Network to ACI
L2 Connectivity with VLAN to EPG Static Mapping
Virtual Workloads Migration Considerations
Default Gateway Migration Considerations
L3 Routing Between Brownfield and Greenfield Networks
Migration of L4-L7 Network Services
There are several current and future trends happening concurrently in the DC space, described as follows:
■ Operations evolution: Moving from the use of CLI to orchestration tools to manage and operate the DC network.
■ Policy evolution: Evolving from standard IP subnet FW-based access control to advanced, policy-based access control, which allows deploying new applications into an ACI Greenfield environment in a more “application-centric” manner.
■ Component evolution: Interconnecting existing DC network infrastructure to newly deployed ACI fabrics in order to allow applications to be gradually migrated from one infrastructure to another, ideally in a non-disruptive manner.
These three evolution paths (shown as follows) may occur in parallel or in an independent and sequential way. The main focus of this guide is the third path: the migration of the underlying infrastructure.
Note: A solid understanding of Cisco ACI and its functionalities is required to leverage the information contained in this guide. For more background information on ACI, refer to the following link: http://www.cisco.com/go/aci.
The specific migration process described in this guide is usually referred to as “network-centric migration” and consists of interconnecting the existing Brownfield network (built based on STP, vPC, or FabricPath technologies) to a newly deployed ACI POD with the end goal of migrating applications or workloads between those environments.
In order to accomplish this application migration task, it is required to map traditional networking concepts (VLANs, IP subnets, VRFs, and so on) to new ACI constructs, such as endpoint groups (EPGs), Bridge Domains, and Private Networks. These ACI constructs are explained in more detail throughout this guide.
The following diagram shows the ACI network-centric migration methodology, which highlights the major steps required for performing the migration of applications from a Brownfield network to an ACI fabric.
The steps of the ACI network-centric migration methodology are described as follows:
1. The first step is the design and deployment of the new ACI POD (Greenfield POD); it is likely that such a deployment is initially small, with plans to grow over time with the number of applications that are migrated. A typical ACI POD consists of at least two spine switches and two leaf switches, managed by a cluster of APIC controllers.
2. The second step is the integration between the existing DC network infrastructure (usually called the “Brownfield” network) and the new ACI POD. L2 and L3 connectivity between the two networks is required to allow successful applications and workload migration across the two network infrastructures.
3. The final step consists of migrating workloads between the Brownfield and the Greenfield network. It is likely that this application migration process may take several months to complete (depending also on the number and complexity of the applications being migrated), so communication between Greenfield and Brownfield networks via the L2 and L3 connections previously mentioned is utilized during this phase.
In ACI, VLANs do not exist inside the fabric; they are only defined on the edge ports connecting virtual or physical endpoints. Hence, the meaning of the VLAN tags is localized at a per-interface level. This approach makes it possible to establish intra-IP-subnet communication between devices that are part of L2 segments identified by different VLAN ID tags (VLAN cross-connect) or even other types of tags (VXLAN and NVGRE, for example).
The following diagram shows the ACI normalization of ingress encapsulation, which demonstrates the fabric normalization of port encapsulations.
The traditional concept of a VLAN as an L2 broadcast domain is replaced in the ACI fabric by the Bridge Domain (BD), which represents the L2 broadcast domain where endpoints (physical or virtual) connect.
As shown in the following diagram, on the ACI fabric it is possible to associate different VLAN tags (VLAN 20 and 30 in this example) defined on different edge ports to the same broadcast domain. The result is that endpoint 10.10.10.10 can still communicate with endpoint 10.10.10.11 even though they are attached to different VLANs.
It is then possible to deploy endpoints as part of a specific security group, called an endpoint group (EPG), that is associated with the bridge domain. In traditional networking, different security groups are usually associated with separate VLANs (L2 broadcast domains), and security policies are applied by leveraging L3 ACLs defined on the routing devices that interconnect them.
The recommended approach for a network-centric migration consists of associating each VLAN originally defined in the Brownfield infrastructure with a corresponding EPG and BD pair in the ACI fabric (VLAN = EPG = BD). It should be noted that while the above is our recommended approach, ACI does give you the ability to associate more than one endpoint group with the same L2 broadcast domain, while maintaining logical isolation between endpoints in the respective EPGs.
The following diagram shows the mapping of each migrated application to a separate EPG/BD pair. Endpoints 10.10.10.10 and 10.10.10.11 cannot communicate with endpoints 10.20.20.10 or 10.20.20.11 unless explicitly configured. This holds true even if all the hosts were part of the same L2 domain or IP subnet.
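Because each Brownfield VLAN maps 1:1 to a BD/EPG pair, the mapping lends itself to scripting against the APIC REST API. The following Python sketch builds the kind of JSON payload the APIC accepts (POSTed to /api/mo/uni.json); the tenant, VRF, and object names are hypothetical examples, not prescribed by this guide.

```python
# Sketch: build an APIC REST payload implementing the VLAN = EPG = BD
# approach, one bridge domain and one EPG per migrated Brownfield VLAN.
# All names below (tenant, VRF, application profile) are hypothetical.

def bd_epg_pair(vrf, vlan_id):
    """Return the fvBD object and fvAEPg object for a single Brownfield VLAN."""
    bd_name = f"BD_VLAN{vlan_id}"
    bd = {"fvBD": {"attributes": {"name": bd_name},
                   "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}}]}}
    epg = {"fvAEPg": {"attributes": {"name": f"EPG_VLAN{vlan_id}"},
                      "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}}]}}
    return bd, epg

def migration_tenant(tenant, vrf, vlans):
    """Assemble the tenant payload: one VRF, plus a BD/EPG pair per VLAN."""
    bds, epgs = [], []
    for vlan_id in vlans:
        bd, epg = bd_epg_pair(vrf, vlan_id)
        bds.append(bd)
        epgs.append(epg)
    app = {"fvAp": {"attributes": {"name": "Migrated-Apps"}, "children": epgs}}
    return {"fvTenant": {"attributes": {"name": tenant},
                         "children": [{"fvCtx": {"attributes": {"name": vrf}}}]
                                     + bds + [app]}}

payload = migration_tenant("Brownfield-Mig", "VRF1", [10, 20])
```

A script along these lines would authenticate to the APIC and POST the payload; the point is that a network-centric migration reduces to a simple loop over the Brownfield VLAN list.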
To connect the Brownfield network and the ACI fabric via L2 and perform the workload migration:
1. Establish a double-sided vPC connection between a pair of ACI border leaf nodes and the two devices
representing the boundary between L2 and L3 in the Brownfield infrastructure. Depending on the technology used in the legacy network (STP, vPC, or FabricPath), this L2/L3 boundary may be found at the aggregation layer or on a dedicated pair of devices normally named border leaf nodes. The following diagram shows Brownfield VLANs connected to the ACI fabric.
Note: The use of a dedicated border leaf pair is recommended but not required.
In the previous example, the Brownfield network is represented by a FabricPath implementation leveraging a default gateway deployment at the spine layer (that is, the spines also perform the duty of border leaf nodes). A double-sided vPC+ connection to a pair of ACI leaf nodes allows extending L2 connectivity between the two network infrastructures without creating any L2 loop, hence maintaining all the vPC links actively forwarding traffic.
Note: This design would look identical if the Brownfield network was built with STP or vPC technologies as opposed to FabricPath.
2. Associate endpoints connected to VLANs in the Brownfield network with security groups defined in the ACI fabric and named endpoint groups (EPGs). The recommended approach discussed in this paper consists of statically mapping VLAN tags to EPGs on the ACI leaf nodes connecting to the Brownfield network. When doing so, there are a couple of specific use cases worth considering.
Scenario 1: 1:1 Mapping between Brownfield VLANs and EPGs
In this scenario, which is shown in the following diagram, performing VLAN10/EPG1 and VLAN20/EPG2 static mappings ensures that the workloads connected to the Brownfield network remain part of the same L2 broadcast domain with the workloads that are migrated to the ACI fabric.
Also, as previously explained, different VLAN tags can be used for the workloads migrated to the ACI fabric, as long as they are also mapped to the proper EPG1 or EPG2 group, as shown above.
The examples shown in the previous diagrams follow the recommended approach of assigning a dedicated bridge domain and EPG for each Brownfield VLAN (Brownfield VLAN = EPG = BD). Once the static mapping is performed, L2 (intra-IP subnet) communication can be successfully established between workloads connected to the Brownfield FabricPath and the Greenfield (ACI fabric) networks.
Using the VLAN to EPG static binding method allows you to migrate workloads to the ACI fabric in the least disruptive way. If the workloads are virtualized, it is possible to perform VM live migration (more considerations about this can be found in the “Virtual Workloads Migration Considerations” section). If the workloads are bare-metal servers, it is possible to physically move them between networks without having to perform any IP re-addressing.
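In the APIC object model, the static VLAN-to-EPG binding used throughout this scenario is an fvRsPathAtt object created under the EPG. A minimal sketch follows; the pod number, node IDs, and vPC policy-group name are hypothetical placeholders for the double-sided vPC toward the Brownfield network.

```python
# Sketch: a static binding that classifies traffic arriving on a vPC with a
# given VLAN tag into the EPG under which this object is created.
# Path details below are hypothetical examples.

def static_vpc_binding(pod, node_a, node_b, policy_group, vlan_id):
    """Return an fvRsPathAtt mapping an ingress VLAN tag on a vPC to the EPG."""
    path_dn = (f"topology/pod-{pod}/protpaths-{node_a}-{node_b}"
               f"/pathep-[{policy_group}]")
    return {"fvRsPathAtt": {"attributes": {
        "tDn": path_dn,            # the vPC interface policy group
        "encap": f"vlan-{vlan_id}",
        "mode": "regular"}}}       # "regular" = traffic arrives 802.1Q-tagged

binding = static_vpc_binding(1, 101, 102, "vPC_to_Brownfield", 10)
```

Posting one such object per Brownfield VLAN under its corresponding EPG reproduces the VLAN10/EPG1 and VLAN20/EPG2 mappings described above.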
Scenario 2: Mapping a Brownfield VLAN to Different EPGs
The second scenario is where the applications in the Brownfield network are still deployed with multiple tiers, but the same VLAN segment is utilized by workloads belonging to the same tier of different applications (that is, workloads for the Web Tier of App1 and App2 are all connected to the same VLAN 10, as depicted in the following diagram).
The goal is to have workloads migrated to the ACI fabric and eventually assigned to separate EPGs, to take advantage of the security functionalities offered by ACI to logically isolate groups of endpoints. The static VLAN/EPG mapping performed on the ACI leaf nodes causes the workloads that are part of VLAN 10 to be assigned to a common EPG, as shown in the following diagram:
In this scenario, the workloads belonging to the Web tier of different applications are logically isolated from each other as soon as they are migrated to the ACI fabric. In order to allow communication with the workloads still located in the Brownfield network, a contract is required between EPG_Outside and EPG1 and between EPG_Outside and EPG2. A contract is an ACI construct that allows or denies communication between EPGs.
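The contracts just mentioned are vzBrCP objects, attached to the EPGs through provider and consumer relationships. A rough sketch follows; the contract, subject, and filter names are hypothetical (the filter is assumed to already exist, for example the default filter in tenant common).

```python
# Sketch: a contract with one subject referencing an existing filter, plus
# the relationship objects that make one EPG provide it and another consume
# it. All names are hypothetical.

def contract(name, filter_name):
    """vzBrCP with a single subject pointing at an existing filter."""
    return {"vzBrCP": {"attributes": {"name": name},
            "children": [{"vzSubj": {"attributes": {"name": "subj1"},
                "children": [{"vzRsSubjFiltAtt":
                              {"attributes": {"tnVzFilterName": filter_name}}}]}}]}}

def provided(contract_name):
    """fvRsProv child added under the provider EPG (e.g. EPG1)."""
    return {"fvRsProv": {"attributes": {"tnVzBrCPName": contract_name}}}

def consumed(contract_name):
    """fvRsCons child added under the consumer EPG (e.g. EPG_Outside)."""
    return {"fvRsCons": {"attributes": {"tnVzBrCPName": contract_name}}}

# EPG1 provides, EPG_Outside consumes; repeat with a second contract for EPG2.
web = contract("Outside-to-App1Web", "default")
prov = provided("Outside-to-App1Web")
cons = consumed("Outside-to-App1Web")
```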
Specific considerations are required for the migration of virtual workloads between the Brownfield network and the ACI fabric. The focus of this guide is on vSphere deployments, as ESXi is likely the prevalent hypervisor utilized in Brownfield deployments. The main goal of the migration procedure is to ensure that virtual machines originally connected to the Brownfield network can be moved to ESXi resources in the newly deployed ACI fabric. Ideally the procedure should be performed in a seamless manner (that is, live vMotion).
The following two scenarios are probably the most commonly encountered in a real life deployment:
1. Scenario 1: The same vCenter server deployed in the brownfield network to manage locally connected ESXi clusters is also managing new ESXi clusters added to the ACI fabric.
2. Scenario 2: One (or more) vCenter servers are deployed in the ACI fabric to manage new ESXi clusters, whereas a separate vCenter server remains connected to the Brownfield network to manage locally connected ESXi hosts.
The following two sections describe the migration steps required for these two scenarios.
Scenario 1: Single vCenter Server Managing ESXi Clusters in Brownfield and Greenfield
The following are the steps required to complete the migration procedure:
1. Connect the new ESXi hosts in the ACI fabric to the same DVS already used by the hosts on the Brownfield site. This ensures that the port-groups where the virtual machines are initially connected are made available also on the ESXi hosts connected to the ACI fabric.
2. At this point, having connected via L2 the two networks as discussed in the previous section, the virtual machines can be migrated in a live fashion (live vMotion) from the Brownfield to the Greenfield network. In the example in the following diagram, the red VMs are moved to the ESXi hosts connected to the ACI fabric.
Note: This scenario does not involve VMM integration. The ESXi hosts in both the ACI fabric and the Brownfield environment are attached via static bindings under the EPG (just as you would with bare-metal hosts).
The following diagram shows a live migration of VMs to the ACI fabric.
The VMs moved to the new ESXi hosts are part of the same port-group used in the Brownfield site. This implies that the same VLAN tag is used at this point by the VMs to send traffic into the ACI fabric. A static mapping is therefore required on the ACI leaf nodes connecting to the ESXi hosts, so that traffic originating from the migrated VMs can be properly classified and associated with the EPG dedicated to the “red” VMs. Note that this means the ESXi hosts are, in this phase, integrated into ACI as physical resources (that is, as part of a physical domain). The default gateway of the newly migrated VMs remains on the Brownfield side.
Note: The new ESXi hosts must be equipped with at least a pair of physical uplinks in order to be connected simultaneously to the two DVSs.
As shown above, the red VMs at this point are still connected to the manually created static port-groups associated with the old DVS, whereas the purple VMs for new applications directly deployed on the ACI fabric can be connected to dynamically created port-groups associated with the new DVS.
As a last migration step, it is then possible to move the red VMs to the dynamically created port-group associated to the EPG on the new DVS (shown as follows).
This is advantageous from an operational perspective, because it allows the removal of the old DVS from the configuration at the end of the migration process for all the workloads, as seen in the following diagram. It also allows for a convenient rollback during the migration if any misconfiguration is uncovered in any part of the infrastructure.
Scenario 2: Separate vCenter Servers for Brownfield and Greenfield
In the second scenario, a separate vCenter server is introduced to manage the new ESXi resources connected to the ACI fabric, as shown in the following diagram.
The migration procedure in this case must be modified as follows:
1. Pair the new vCenter with the APIC controller cluster. This process creates a VMM domain and dynamically pushes a new DVS to all the new ESXi hosts in the ACI fabric. The ESXi hosts can then be connected to this newly created DVS.
2. The EPGs to be used for the migrated workloads are associated to the VMM domain, which implies corresponding port-groups are also dynamically added to the DVS.
3. Perform the live migration of workloads between ESXi clusters. The following diagram shows the inter-vCenter live migration of VMs.
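In API terms, the EPG-to-VMM-domain association from step 2 is an fvRsDomAtt object under the EPG; creating it is what triggers the dynamic port-group push to the DVS. A sketch follows; the VMM domain name is a hypothetical example.

```python
# Sketch: associate an EPG with a VMware VMM domain, so that the APIC
# dynamically creates a matching port-group on the DVS it manages.
# The domain name "vC-Greenfield" is hypothetical.

def vmm_domain_association(vmm_domain, resolution="immediate"):
    """fvRsDomAtt child for an EPG; tDn points at the VMware VMM domain."""
    return {"fvRsDomAtt": {"attributes": {
        "tDn": f"uni/vmmp-VMware/dom-{vmm_domain}",
        "resImedcy": resolution}}}   # push the policy to the leaves right away

assoc = vmm_domain_association("vC-Greenfield")
```

Adding one such child to each EPG used by the migrated workloads is what makes the corresponding port-groups appear on the new DVS before the inter-vCenter vMotion is performed.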
The capability of performing vMotion between ESXi hosts managed by separate vCenter servers was introduced in the vSphere 6.0 software release. Support for integration between ACI and vSphere 6.0 was introduced in the ACI 1.1(2h) release.
Note: Support for vMotion between ESXi hosts managed by different vCenter servers is introduced from ACI release 1.2.
The default gateway used by the workloads to establish communication outside their IP subnet is initially maintained in the Brownfield network. This implies that the ACI fabric initially provides only Layer 2 services for devices that are part of EPG1, and that workloads already migrated to the ACI fabric send traffic to the Brownfield network whenever they need to communicate with devices external to their IP subnet (shown in the following diagram).
To enable this behavior, you must configure specific properties on the bridge domain defined in the ACI fabric and associated with the legacy VLAN 10 (shown in the following diagram). By default, a BD has ARP flooding disabled and unicast routing enabled; for this L2-only communication to work, you must change these defaults.
· Disable Unicast Routing: The ACI fabric must behave as an L2 network in this initial migration phase, so it is required to uncheck this flag to disable the Unicast Routing capabilities. As a consequence, the ACI fabric forwards traffic for endpoints that are part of this bridge domain by performing L2 lookups only, and only MAC address information is stored in the ACI database for those workloads (that is, their IP addresses are not learned).
· Enable ARP flooding: ARP requests originating from devices connected to the ACI fabric must be able to reach the default gateway or other endpoints that are part of the same IP subnet and still connected to the Brownfield network. Since those entities are unknown to the ACI fabric, ARP requests must be flooded across the ACI fabric and toward the Brownfield network.
· Enable Unknown Unicast flooding: Considerations similar to those for ARP traffic apply to L2 unknown traffic (unicast and multicast), so it is required to ensure that flooding is enabled for those traffic types in this phase.
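The three settings above map directly to attributes of the fvBD object. A sketch of the L2-only bridge domain configuration for this phase follows; the BD and VRF names are hypothetical.

```python
# Sketch: fvBD attributes for the migration phase in which the default
# gateway is still in the Brownfield network. Names are hypothetical.

def l2_only_bd(name, vrf):
    """Bridge domain acting as a pure L2 segment: no routing, full flooding."""
    return {"fvBD": {"attributes": {
                "name": name,
                "unicastRoute": "no",       # fabric performs L2 lookups only
                "arpFlood": "yes",          # ARP must reach the legacy gateway
                "unkMacUcastAct": "flood"}, # flood L2 unknown unicast traffic
            "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}}]}}

bd = l2_only_bd("BD_VLAN10", "VRF1")
```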
Once all (or the majority of) the workloads belonging to the IP subnet are migrated into the ACI fabric, it is then possible to also migrate the default gateway into the ACI domain. This migration is done by turning on ACI routing in the BD and de-configuring the default gateway function on the Brownfield network devices.
The following diagram shows how to enable ACI Unicast Routing.
As shown in the previous diagram, ACI allows the administrator to statically configure the MAC address associated with the default gateway defined for a specific bridge domain. It is therefore possible to reuse the MAC address previously used by the default gateway in the legacy network, so that the gateway move is completely seamless for the workloads connected to the ACI fabric (that is, there is no need to refresh their ARP cache entries).
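Expressed against the same fvBD object, the gateway migration amounts to enabling unicast routing, adding the gateway subnet, and overriding the BD MAC with the legacy gateway MAC. The sketch below uses an HSRP-style MAC purely as a hypothetical example of the legacy gateway address.

```python
# Sketch: turn the bridge domain into the routed default gateway, keeping
# the MAC address the legacy gateway used. All values are hypothetical.

def routed_bd(name, vrf, gateway_ip, legacy_gw_mac):
    """Bridge domain with ACI unicast routing on and the legacy gateway MAC."""
    return {"fvBD": {"attributes": {
                "name": name,
                "unicastRoute": "yes",    # ACI now routes for this subnet
                "mac": legacy_gw_mac},    # seamless for the workloads' ARP caches
            "children": [
                {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                {"fvSubnet": {"attributes": {"ip": gateway_ip}}}]}}

# Hypothetical: the legacy gateway was HSRP group 10 (MAC 00:00:0C:07:AC:0A).
bd = routed_bd("BD_VLAN10", "VRF1", "10.10.10.1/24", "00:00:0C:07:AC:0A")
```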
Once the migration of an application is completed, it is also possible to leverage all the flooding containment functionalities offered by the ACI fabric. Specifically, ARP flooding can be disabled, as well as L2 Unknown Unicast flooding.
Note: This is possible only if there are no workloads belonging to that specific L2 broadcast domain that remain connected to the Brownfield network (that is, all the workloads, physical and virtual, have been migrated to the ACI fabric). In real-life deployments, there are often specific hosts that remain connected to the Brownfield network for quite a long time. This is usually the case for bare-metal servers, such as Oracle RAC databases, which remain untouched until the next refresh cycle. Even in this case it may make sense to move the default gateway for those physical servers to the ACI fabric. This provides the environment with a centralized point of management for the security policies applied between IP subnets; however, the flooding of traffic must remain enabled.
Once the default gateway for different IP subnets is moved to the ACI fabric, routing communication between workloads belonging to the migrated subnets will always occur on the ACI leaf nodes leveraging the distributed Anycast gateway functionality.
As shown in the following diagram, this is true for workloads that are still connected to the Brownfield network (routing happens on the pair of border leaf nodes interconnecting Brownfield and Greenfield infrastructure). Once workloads are migrated to the ACI fabric, traffic will be routed by leveraging the Anycast gateway functionality on the leaf node where they are connected.
Migrating the workloads and their default gateway to the ACI fabric brings advantages even when maintaining the security policies at the IP subnet level, as it allows the ACI fabric to become the single point of security policy enforcement between IP subnets, hence providing a sort of ACL management functionality. As always, this can be achieved gradually: once the default gateway for the different IP subnets has been moved to the ACI fabric, full and open connectivity can be enabled between endpoints connected to different EPGs (IP subnets) by simply applying a “permit any” contract between the different EPGs (shown as follows).
With this configuration in place, every time a workload tries to communicate with a device in a different EPG (IP subnet), a centrally managed security policy is applied on the ACI leaf where the distributed default gateway function is enabled. Given that the policy has a single “permit any” statement, the result is open connectivity between the devices, as shown in the following diagram.
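One way to model the “permit any” contract is a filter with a single unqualified entry, referenced by a single-subject contract applied between the migrated EPGs. A sketch with hypothetical names follows; the VRF-level scope is also an assumed choice.

```python
# Sketch: a permit-any filter and the contract that references it.
# Names and the contract scope are hypothetical choices.

def permit_any_filter(name):
    """vzFilter whose single entry leaves the EtherType unspecified (match all)."""
    return {"vzFilter": {"attributes": {"name": name},
            "children": [{"vzEntry": {"attributes": {
                "name": "any", "etherT": "unspecified"}}}]}}

def permit_any_contract(name, filter_name):
    """vzBrCP scoped to the VRF, with one subject pointing at the filter."""
    return {"vzBrCP": {"attributes": {"name": name, "scope": "context"},
            "children": [{"vzSubj": {"attributes": {"name": "any"},
                "children": [{"vzRsSubjFiltAtt":
                              {"attributes": {"tnVzFilterName": filter_name}}}]}}]}}

flt = permit_any_filter("match-all")
ct = permit_any_contract("permit-any", "match-all")
```

Tightening the policy later then becomes a matter of replacing this filter with application-specific entries, without touching the routing configuration.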
As previously mentioned, because routing between different IP subnets is performed at the ACI fabric level, the security policy is enforced not only between hosts connected to the ACI fabric, but also for devices connected to VLAN segments in the Brownfield infrastructure.
A key advantage of the ACI centrally managed policy system is the ability to restrict communication between hosts belonging to different IP subnets. With ACI, it is possible to restrict communication between hosts in a holistic manner by applying a central policy from the APIC, dictating which traffic flows are allowed and to and from each of the respective EPGs.
The data plane security policy enforcement between a pair of EPGs is shown in the following diagram.
Despite the fact that the Brownfield and Greenfield networks are connected at L2, there may still be IP subnets/VLANs that are not extended to the ACI fabric. In order for workloads belonging to the ACI fabric to communicate with those IP subnets, L3 routing should be enabled between the ACI fabric and the Brownfield network, as shown in the following diagram.
Given that establishing dynamic routing peering over a vPC connection is not fully supported across all Cisco Nexus platforms, and that the Brownfield network is likely already connected to the WAN via an additional pair of L3 core devices, one possible approach consists of leveraging dedicated L3 links to connect the ACI fabric to those L3 devices, as shown in the following diagram.
Note: Using a pair of L3 links is usually sufficient, but in order to increase the resiliency of the solution it is also possible to deploy a full mesh of connections between the ACI leaf nodes and the L3 devices. Also, an alternative design option could be used to connect the ACI nodes to the Brownfield aggregation/spine switches. The approach shown above is preferred, as it simplifies the eventual migration to the ACI fabric of connectivity to the WAN.
In this scenario, the ACI fabric sees the IP subnet 10.30.30.0/24 as an external prefix reachable via an L3Out connection established with the pair of L3 devices in the brownfield network. Devices connected to the ACI fabric, and belonging to EPG1, can benefit from the Anycast Default Gateway functionalities.
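In the ACI object model, that external prefix is typically represented as an external EPG (l3extInstP) under an L3Out. The sketch below shows just the classification part; the L3Out and external-EPG names are hypothetical, and the interface and routing-protocol configuration of the L3Out is omitted.

```python
# Sketch: an L3Out external EPG classifying the non-migrated Brownfield
# subnet, so that contracts can be applied to traffic to and from it.
# Names are hypothetical; interface and protocol config are omitted.

def external_epg(l3out, ext_epg, prefix):
    """l3extOut fragment with one external EPG matching one external subnet."""
    return {"l3extOut": {"attributes": {"name": l3out},
            "children": [{"l3extInstP": {"attributes": {"name": ext_epg},
                "children": [{"l3extSubnet": {"attributes": {"ip": prefix}}}]}}]}}

# 10.30.30.0/24 remains in the Brownfield network, reached via the L3Out:
ext = external_epg("L3Out-Brownfield", "Brownfield-Subnets", "10.30.30.0/24")
```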
ACI does not yet offer the capability to insert network service functions (like FW and SLB) between endpoints belonging to separate EPGs based on a dynamic policy configuration, a functionality called Policy Based Redirection (PBR). This implies that more traditional design options are required to ensure traffic can be sent across a chain of network services. The adopted solution depends on the deployment mode of the network services:
1. Network services in transparent mode: the least common model. After the network services are migrated to the ACI fabric, the fabric performs traffic stitching across the service nodes and offers distributed default gateway and access control services to the workloads deployed in different EPGs.
2. Network services as default gateway for the workloads: when the network services are migrated to ACI, the ACI fabric simply offers L2 services to allow the workloads to communicate with the gateway.
3. Network services in routed mode and stitched into the data path (“VRF sandwich”): the ACI fabric is the default gateway for the tenant workloads, and VRF stitching is used to ensure traffic traverses the various network service nodes.
Independently of the specific network services deployment model, the procedure to integrate those services into the ACI fabric remains similar and leverages the fact that network services are usually deployed as a pair of active/standby nodes. This is true regardless of whether the network service devices are physical or virtual.
For example, consider the migration procedure required for the scenario where the network services are deployed in routed mode and stitched into the data path by leveraging VRF stitching. The focus here is on a FW deployment scenario, although similar considerations apply to Server Load Balancers. In a traditional DC network design, the network services are usually connected to the devices at the L2/L3 demarcation line, either as physical service appliances or as service modules inserted in a service chassis, shown as follows. (This diagram shows network services in a traditional DC design.)
As shown in the previous diagram, both North-South and East-West traffic is pushed to the network services, in our example leveraging a VRF sandwich approach.
As mentioned in previous sections of this guide, the migration to the ACI fabric consists not only of moving the workloads initially connected to the Brownfield network, but also of relocating into the ACI fabric specific services like routing and security policy enforcement (as previously shown in this guide). The stitching of traffic through the network services can also be enforced at the ACI fabric level, leveraging the same VRF sandwich design option. The following diagram shows the VRF sandwich for network services integration.
From a physical perspective, the initial ACI deployment is likely to be built with a limited number of ACI nodes and then scaled out over time, as shown in the simplified option in the following diagram.
(This is a simplified diagram for L2 connectivity to ACI.)
The following diagram shows an alternate view of the topology, showing the connectivity between the Brownfield network and the ACI fabric:
By comparing this diagram with the diagram showing network services in a traditional DC design, you can understand how introducing even a small ACI fabric built with four nodes allows you to relocate and centralize the routing, access control, and network services functionalities, which will simplify the overall management and operation of the network.
This is the case even when keeping the workloads connected to the Brownfield network.
When the goal is the migration of the workloads to the ACI fabric, the end goal is to relocate not only those workloads but also the services nodes, as shown in the following diagram. (This diagram shows the end state for the migration of network services to the ACI fabric.)
The procedure for workload migration has already been discussed in the previous sections of this guide. Concerning the migration of network services, it is advantageous that they are usually deployed as a pair of active/standby devices. This deployment allows a seamless migration in two simple steps:
1. Disconnect the standby node from the Brownfield network and connect it to a pair of ACI leaf nodes. A vPC connection is usually leveraged for connecting the network service device to the leaf nodes and a static mapping of local VLAN tags to the proper EPGs ensures proper communication to the network. The keepalives between active and standby nodes can still be exchanged, either via the interconnection links between the Brownfield network and the ACI fabric, or via dedicated OOB connections. As a consequence, the active service node in this phase remains connected to the Brownfield network, whereas the standby is connected to the ACI fabric. The following diagram shows the migration of the standby node to the ACI fabric.
2. Disconnect the active node from the Brownfield network. This triggers the failover event that causes the service node connected to the ACI fabric to become active. The ACI fabric is now performing all the routing and VRF stitching functions to ensure all the communications between EPG1 and EPG2 are enforced via the FW node. The disconnected node is then re-connected to the ACI fabric; the recommendation is to connect it to the same pair of ACI leaf nodes where the currently active node is connected, but there is no technical reason not to connect it to a separate pair of leaf switches.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2015 Cisco Systems, Inc. All rights reserved.