MPLS Segment Routing to VxLAN Handoff
MPLS SR to VxLAN handoff enables seamless routing and forwarding between MPLS Segment Routing (SR) domains and VxLAN overlays in data center and WAN edge architectures.
- Interconnects MPLS SR (WAN/core) and VxLAN EVPN (data center) domains.
- Maintains L3VPN segmentation using per-VRF label allocation.
- Handles control plane and data plane translation, including next-hop resolution, label/VNI mapping, and QoS marking.
How MPLS Segment Routing to VxLAN Handoff Works
The handoff enables communication between a core MPLS SR network and a VxLAN-based data center fabric, typically at the border leaf or spine (DCI node). This is essential for multi-domain connectivity, data center expansion, and migration scenarios.
- The DCI node acts as the gateway, performing protocol translation and encapsulation/decapsulation between MPLS SR and VxLAN overlays.
Summary
This process describes how traffic is handed off between an MPLS Segment Routing (SR) network and a VxLAN overlay at the Data Center Interconnect (DCI) node, enabling seamless L3VPN connectivity between MPLS and VxLAN domains.
Key components in this process are:
- DCI Node (Border Leaf/Border Spine): Performs the handoff and encapsulation functions between MPLS SR and VxLAN overlays.
- MPLS SR Core: Provides L3VPN connectivity using segment routing.
- VxLAN EVPN Fabric: Connects ToRs and other leaf switches using VxLAN overlays.
Workflow
- Route and Label Advertisement
- The DCI node receives BGP route updates from both the MPLS SR domain and the VxLAN EVPN domain, including VPN labels and next-hop information.
- BGP control plane exchanges ensure appropriate import/export of routes between domains using route targets.
The DCI node synchronizes control plane state across both domains using BGP, with per-VRF label allocation. As a result, both the MPLS SR and VxLAN domains can learn and use routes across the handoff boundary. For example, when a new host comes online in the VxLAN domain and the DCI node imports the corresponding EVPN route, the DCI reoriginates the route and advertises it with a VPN label into the MPLS SR core, making the route reachable from both domains.
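As a rough illustration, the reoriginated route can be checked in both BGP address families on the DCI node. The prefix 10.10.10.0 is a hypothetical example, and exact command options can vary by NX-OS release:

  show bgp l2vpn evpn 10.10.10.0
  show bgp vpnv4 unicast 10.10.10.0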
- Data Plane Handoff (Packet Forwarding)
- Packets arriving from the MPLS SR core are decapsulated and re-encapsulated into VxLAN (or vice versa) by the DCI node.
- QoS, TTL, and ECN fields are mapped between MPLS and VxLAN headers according to platform-specific rules (e.g., uniform or pipe modes).
The DCI node translates and forwards packets between domains, applying platform-specific QoS and statistics handling. The result is end-to-end traffic flow between the MPLS SR and VxLAN domains, with resiliency provided by fallback to the underlay if the VxLAN overlay is unavailable. For example, when a packet arrives from the MPLS SR core with a VPN label, the DCI node matches the label to a VRF and destination, strips the MPLS headers, applies VxLAN encapsulation with the correct VNI, and forwards the packet into the VxLAN fabric towards the destination host. In the reverse direction, when a packet arrives from the VxLAN domain with a VNI, the DCI node matches the VNI to a VRF and destination, strips the VxLAN headers, applies MPLS SR encapsulation with the correct label stack, and forwards the packet into the MPLS SR core towards the remote PE.
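The label-to-VRF and VNI-to-VRF mappings that drive this translation can be inspected on the DCI node. These commands are illustrative and output formats differ by platform:

  show nve vni
  show mpls switching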
- Resiliency and Fallback Handling
- If the VxLAN NVE interface is down, the DCI node automatically falls back to using the MPLS SR underlay for next-hop resolution, maintaining reachability.
This stage maintains operational continuity and network resilience by leveraging dual-domain routing. The result is uninterrupted service during overlay outages, with an automatic return to the VxLAN overlay once it recovers. For example, when the NVE interface (VxLAN) is down on the DCI node and the VxLAN overlay is unavailable, the next hop is resolved via MPLS SR underlay routes, and traffic continues to flow over backup MPLS SR paths until the overlay is restored.
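A quick way to confirm which path resolves the next hop during an overlay outage is to check the NVE state and the affected route. The VRF name BLUE and prefix 10.10.10.0/24 are hypothetical values:

  show nve interface nve 1
  show ip route 10.10.10.0/24 vrf BLUE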
Guidelines and Limitations
Platform and Feature Guidelines
- This feature is supported only on Cisco Nexus 9000 Cloudscale platforms, including FX2, FX3, GX, GX2, and select modular platforms.
- Coexistence of VxLAN-EVPN and MPLS Segment Routing (SR-MPLS) or MPLS L3VPN (LDP) features is required on the same device for DCI handoff.
- vPC, VMCT, and pMCT configurations are not supported with SR-MPLS to VxLAN handoff.
- The handoff is supported only on physical interfaces for core-facing (WAN) ports. SVI and sub-interface handoff for core-facing links is not supported.
- Only per-VRF label allocation is supported for VPN label assignment. Per-prefix label allocation is not supported.
Operational Limitations and Restrictions
Be aware of all known operational limitations when deploying SR MPLS to VxLAN handoff. Avoid unsupported configurations and understand the impact on failover, statistics, and scale.
- Only EVPN Type 5 (IP prefix) routes are supported for handoff to the MPLS SR core. EVPN Type 2 (MAC/IP host) route handoff and L2 extension are not supported in the current release.
- Multisite BGW (Border Gateway) and DCI handoff functions cannot be enabled on the same node.
- On some platforms, MPLS and VxLAN statistics are not supported; on FX2, only VPN label statistics are available (no LSR or adjacency statistics).
- End-to-end TTL and ECN propagation is not fully supported due to ASIC limitations. Only pipe-mode TTL is supported at the handoff.
- The FX2 platform supports a maximum of 256 VxLAN peers, 900 VRFs (of which up to 100 can be extended to MPLS), 48,000 adjacencies, and 500 MPLS labels.
- Priority Flow Control (PFC) is not supported in DCI handoff mode.
- Route leaking or VRF import/export between VxLAN and MPLS domains is not supported; only same-VRF handoff is allowed.
- During a failure or shutdown of the NVE (VxLAN) interface, next-hop resolution falls back to the MPLS underlay; this is expected behavior for resiliency.
Configure MPLS SR to VXLAN Handoff
This procedure enables seamless routing and forwarding between an MPLS SR core and a VXLAN EVPN-based data center fabric at the DCI border.
This configuration is required when connecting a VXLAN EVPN data center fabric to an MPLS SR or LDP-based WAN/core using a Nexus 9000 as the DCI/border device.
Before you begin
Ensure that the device has appropriate hardware resources for both VXLAN and MPLS SR features.
- Licensing for both VXLAN and MPLS features is applied.
- Required VLANs, VRFs, and interfaces are provisioned.
Follow these steps to configure MPLS SR to VXLAN handoff on the DCI node.
Procedure
Step 1
Enable the required features and global configurations. Ensure all features are enabled before proceeding with interface and protocol configuration. After this step, the required features are enabled and the device is prepared for further configuration.
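A minimal sketch of the feature enablement on an NX-OS DCI node; exact feature names can vary by platform and release:

  install feature-set mpls
  feature-set mpls
  feature mpls segment-routing
  feature bgp
  feature nv overlay
  feature vn-segment-vlan-based
  nv overlay evpn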
Step 2
Configure VRFs and the VXLAN-to-MPLS mapping. Route targets must match between the data center and WAN sides so that the correct L3VPN routes are imported and exported. After this step, VRFs and VNIs are mapped for inter-domain handoff.
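A sketch of one tenant VRF stitched to both domains. The VRF name BLUE, VLAN 2001, VNI 50001, and route target 65000:100 are hypothetical values:

  vlan 2001
    vn-segment 50001
  vrf context BLUE
    vni 50001
    rd auto
    address-family ipv4 unicast
      route-target both auto evpn
      route-target import 65000:100
      route-target export 65000:100

The route targets carrying the evpn keyword apply to the fabric-side EVPN routes, while the plain route targets are exchanged with the MPLS L3VPN/SR side.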
Step 3
Configure interfaces for both MPLS and VXLAN connectivity. Enable MPLS forwarding on the WAN/core-facing interfaces and assign VRFs and IP addresses per design. After this step, all physical and logical interfaces for VXLAN and MPLS are configured and active.
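An illustrative interface layout, assuming loopback1 as the VTEP source and Ethernet1/1 as the core-facing link; interface numbers and addresses are placeholders, and the MPLS interface command shown may differ by NX-OS release:

  interface nve1
    no shutdown
    host-reachability protocol bgp
    source-interface loopback1
    member vni 50001 associate-vrf
  interface Ethernet1/1
    description Core-facing link to MPLS SR core
    mpls ip forwarding
    ip address 192.0.2.1/30
    no shutdown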
Step 4
Set up BGP with the appropriate address families and route re-origination. Configure BGP neighbors for both the MPLS (WAN/core) and VXLAN (fabric) sides, enabling route import and reorigination for cross-domain route exchange. After this step, BGP sessions are established and routes are exchanged between domains.
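A condensed sketch of the BGP stitching, assuming local AS 65000, a VPNv4 peer 192.0.2.2 in the SR core, and an EVPN route reflector 10.1.1.1 in the fabric (all values hypothetical):

  router bgp 65000
    neighbor 192.0.2.2
      remote-as 65001
      address-family vpnv4 unicast
        send-community extended
        import l2vpn evpn reoriginate
    neighbor 10.1.1.1
      remote-as 65000
      update-source loopback0
      address-family l2vpn evpn
        send-community extended
        import vpn unicast reoriginate
    vrf BLUE
      address-family ipv4 unicast
        advertise l2vpn evpn

The import ... reoriginate options cause routes learned in one domain to be re-advertised into the other with the appropriate encapsulation attributes; verify the exact keywords against your NX-OS release.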
MPLS SR to VXLAN handoff is successfully configured, enabling seamless L3VPN connectivity between data center and core network domains.
Verify the DCI VxLAN-MPLS Handoff
Perform this verification to confirm that the DCI device correctly forwards traffic between VxLAN and MPLS domains.
- Confirm that overlay and underlay routing tables are populated as expected.
- Check that interface and protocol states are up and operational.
- Validate correct data plane handoff by simulating end-to-end traffic.
Before you begin
Before starting this verification, ensure that all relevant VxLAN, EVPN, MPLS, and BGP configurations have been applied and the devices have completed initial convergence.
- All physical and logical interfaces involved in the handoff are up and configured.
- Control plane protocols (BGP, OSPF/IS-IS, and so on) are established and stable.
Follow these steps to verify the DCI VxLAN-MPLS handoff functionality.
Procedure
Step 1
Check the status of the NVE and MPLS interfaces on the DCI node. These commands display the operational status of the VxLAN and MPLS interfaces; both must be up for a successful handoff. If either interface is down, troubleshoot physical connectivity, configuration, or protocol states before proceeding.
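Typical status checks, assuming nve1 as the VTEP interface and Ethernet1/1 as the core-facing link (interface names are placeholders):

  show nve interface nve 1
  show nve peers
  show interface ethernet 1/1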
Step 2
Verify route propagation and label allocation between the VxLAN and MPLS domains. These commands confirm that route and label exchanges are occurring correctly between the EVPN overlay and the MPLS L3VPN domain.
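Example checks, with the VRF name BLUE as a hypothetical value:

  show bgp l2vpn evpn summary
  show bgp vpnv4 unicast summary
  show ip route vrf BLUE
  show mpls switching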
Step 3
Test end-to-end traffic forwarding across the DCI node. These tests verify data plane connectivity and the proper functioning of the handoff between the VxLAN and MPLS domains.
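For example, ping and trace from the VxLAN side toward a remote prefix behind the MPLS SR core; the addresses and VRF name are hypothetical:

  ping 10.10.10.10 vrf BLUE source 10.20.20.1
  traceroute 10.10.10.10 vrf BLUE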
At the end of this procedure, you have verified that the DCI node correctly hands off traffic between VxLAN and MPLS domains, with all routing, label, and interface states operational. Traffic forwarding is confirmed by end-to-end connectivity tests.