Configure Multi-Site

This chapter contains these sections:

VXLAN EVPN multi-sites

A VXLAN EVPN multi-site is a data center network solution that

  • interconnects two or more BGP-based Ethernet VPN (EVPN) sites or overlay domains over an IP-only network,

  • uses border gateways (BGWs) in anycast or vPC mode to terminate and interconnect sites, and

  • enforces scalable traffic control and failure containment across domains.

All routes that reach destinations outside a fabric have a next hop on the BGW, for both Layer 2 and Layer 3 traffic. The BGW serves as the node that interacts both with local site nodes and with nodes external to the site. In a leaf-spine data center fabric, BGWs can be leaf switches, spine switches, or dedicated gateway devices.

The VXLAN EVPN multi-site approach creates multiple site-local EVPN control planes and IP forwarding domains, interconnected by a single, common EVPN control and IP forwarding domain.

  • Each EVPN node receives a unique site-scope identifier. Site-local EVPN domains consist of nodes using the same identifier; BGWs belong both to their site’s EVPN domain and to the common multi-site EVPN domain.

  • Site-local bridging, routing, and flood domains connect only via BGWs to corresponding domains in other sites.

  • Selective advertisement refers to configuring per-tenant information, such as the IP VRF or MAC VRF (EVPN instance), on the BGWs. When external connectivity (VRF-lite) and EVPN Multi-Site coexist on the same BGW, advertisements are always enabled.

  • In the BGP control plane, for releases prior to Cisco NX-OS Release 9.3(5), BGWs rewrite the next-hop information of EVPN routes and reoriginate them. Beginning with Cisco NX-OS Release 9.3(5), reorigination is always enabled (with either single or dual route distinguishers), and rewrite is not performed. For more information, see Dual RDs for multi-site.

If a data center has three EVPN overlay domains, each domain is connected to others only through its designated BGWs, which enforce traffic boundaries and provide scalable inter-site forwarding.
Connecting two EVPN overlays directly, without BGWs, bypasses traffic enforcement and failure containment, and does not qualify as a VXLAN EVPN multi-site deployment.
Attribute             VXLAN EVPN single-site     VXLAN EVPN multi-sites
Scope of control      Single overlay domain      Multiple overlay domains
Inter-site gateway    Not required               Required (BGWs)
Failure containment   Fabric-wide                Site-specific, enforced at BGWs

A VXLAN EVPN multi-site is like several office buildings (sites), each protected by its own security staff (BGWs). Visitors and deliveries travel only through main entrances (BGWs), ensuring each building’s safety and managing interactions between locations.

Dual RDs for multi-site

A dual RD is a route distinguishing mechanism that

  • enables the use of both a primary and secondary route distinguisher (RD) in VXLAN EVPN multi-site deployments

  • allows reoriginated routes to be advertised with a secondary type-0 RD (site-id:VNI format), and

  • supports automatic allocation of the secondary RD for border gateways (BGWs).

Beginning with Cisco NX-OS Release 9.3(5), VXLAN EVPN Multi-Site supports route reorigination with dual RDs. This feature is enabled automatically.

  • Each VRF or L2VNI tracks two RDs:

    • The primary RD is unique to each instance.

    • The secondary RD is the same across BGWs.

  • Reoriginated routes are advertised with the secondary type-0 RD (using a site-id:VNI format).

  • All other routes use the primary RD.

  • The secondary RD is allocated automatically when the router operates in Multi-Site BGW mode.

If the site ID is greater than 2 bytes, the secondary RD can't be generated automatically on the Multi-Site BGW, and the following message appears:

%BGP-4-DUAL_RD_GENERATION_FAILED: bgp- [12564] Unable to generate dual RD on EVPN multisite border gateway. This may increase memory consumption on other BGP routers receiving re-originated EVPN routes. Configure router bgp <asn> ; rd dual id <id> to avoid it.

In this case, you can either manually configure the secondary RD value or disable dual RDs. For more information, see Configure dual RD support for Multi-Site.

RP placements in DCI cores

RP placements in DCI cores are multicast routing design approaches that

  • ensure that PIM Rendezvous Points (RPs) for the Data Center Interconnect (DCI) multicast underlay are distinct from those used in the fabric underlay

  • prevent multicast group overlap between the DCI and fabric networks, and

  • allow flexible selection and redundancy of RP locations to maintain robust multicast operations.

Proper RP placements in DCI cores are critical for scalable and reliable multicast routing across interconnected data centers. Logical separation between multicast domains within the fabric and DCI protects against routing conflicts and unintended traffic propagation.

  • PIM RPs and multicast groups for the fabric underlay and the DCI underlay must be different.

  • The multicast group range used in the DCI underlay must not overlap with the group range used in the fabric underlay.

  • Multicast groups and RPs in DCI and fabric networks should be distinct and configured based on specific address ranges.

  • RPs can be placed on any node in the DCI core; multiple RPs may be used for redundancy.

Direct BGW to BGW Peering Deployment: For direct Border Gateway (BGW) to BGW peering, set up PIM RPs on BGW devices. Use anycast PIM RP for redundancy in case of failure.

BGW to Cloud Model Deployment: In designs involving peering between BGWs and the cloud, place the PIM RP on core routers or superspine switches in the DCI underlay layer.

A deployment where the DCI underlay uses the same multicast group range or shares the same PIM RP as the fabric underlay can cause routing conflicts and disrupt multicast traffic flows.
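
A minimal sketch of this separation on a BGW, using placeholder RP addresses and group ranges (not a recommended addressing plan):

switch(config)# ip pim rp-address 10.10.10.1 group-list 239.1.0.0/16    (fabric underlay RP and group range)
switch(config)# ip pim rp-address 172.16.0.1 group-list 239.2.0.0/16    (DCI underlay RP and non-overlapping group range)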

Supported ESI behavior for EVPN multi-homing and Anycast BGW

Beginning with Cisco NX-OS Release 10.2(2)F, EVPN MAC/IP routes (Type 2) with both reserved and non-reserved Ethernet Segment Identifier (ESI) values are evaluated for forwarding (ESI RX). See RFC 7432 Section 9.2.2 for the definition of EVPN MAC/IP route resolution.

  • Type 2 MAC/IP routes with reserved ESI values (0 or MAX-ESI) are resolved solely by the MAC/IP route (BGP next-hop within Type 2).

  • Type 2 MAC/IP routes with non-reserved ESI values are resolved only when an associated per-ES Ethernet Auto-Discovery (EAD) route (Type 1, per-ES EAD) is present.

  • On Multi-Site Anycast Border Gateway (BGW), MAC/IP routes with both reserved and non-reserved ESI values can be forwarded, rewritten, and re-originated. The Multi-Site BGW always re-originates the per-ES EAD route in these cases.

Supported platforms and configuration guidelines for VXLAN EVPN Multi-Site

VXLAN EVPN Multi-Site supports a range of Cisco Nexus platforms, specific line cards, and firmware versions. Configuration guidelines and operational restrictions ensure optimal deployment and feature compatibility.

Supported Cisco Nexus platforms

VXLAN EVPN Multi-Site is supported on the following platforms and line cards:

  • Cisco Nexus 9300-EX and 9300-FX platform switches (except Cisco Nexus 9348GC-FXP platform switches)

  • Cisco Nexus 9300-FX2, 9300-FX3, and 9300-GX platform switches

  • Cisco Nexus 9300-GX2 platform switches

  • Cisco Nexus 9500 platform switches with -EX, -FX, or -GX line cards


    Note


    Cisco Nexus 9500 platform switches with -R/RX line cards don't support VXLAN EVPN Multi-Site.


Switch or Port restrictions

  • The evpn multisite dci-tracking command is mandatory for anycast BGWs and vPC BGW DCI links.

  • EVPN multisite DCI-tracking and EVPN multisite fabric-tracking are only supported on physical interfaces. Use on SVIs is not supported.

  • Cisco Nexus 9332C and 9364C platform switches can be BGWs.

Deployment restrictions

  • In a VXLAN EVPN Multi-Site deployment, when you use the ttag feature, make sure that the ttag is stripped (ttag-strip) on the BGW's DCI interfaces that attach to non-NX-OS devices.

  • In TRM with multi-site deployments, all BGWs receive traffic from fabric. However, only the designated forwarder (DF) BGW forwards the traffic. All other BGWs drop the traffic through a default drop ACL. This ACL is programmed in all DCI tracking ports. Don't remove the evpn multisite dci-tracking configuration from the DCI uplink ports. If you do, you remove the ACL, which creates a nondeterministic traffic flow in which packets can be dropped or duplicated instead of deterministically forwarded by only one BGW, the DF.

  • The DCI underlay group and the fabric underlay group must be distinct, ensuring no overlap between DCI multicast and fabric multicast underlay groups.

  • Bind NVE to a loopback address that is separate from loopback addresses that are required by Layer 3 protocols. A best practice is to use a dedicated loopback address for the NVE source interface (PIP VTEP) and multi-site source interface (anycast and virtual IP VTEP).

  • Beginning with Cisco NX-OS Release 9.3(5), if you disable the host-reachability protocol bgp command under the NVE interface in a VXLAN EVPN Multi-Site topology, the NVE interface stays operationally down.

  • Ensure that the ip pim sparse-mode is enabled on the Multi-Site VIP loopback interface.

  • Multi-Site BGW deployment restrictions:

    • The Multi-Site BGW allows the coexistence of Multi-Site extensions (Layer 2 unicast/multicast and Layer 3 unicast) as well as Layer 3 unicast and multicast external connectivity.

    • Beginning with Cisco NX-OS Release 9.3(5), Multi-Site Border Gateways re-originate incoming remote routes when advertising to the site's local spine/leaf switches. These re-originated routes modify the following fields:

      • RD value changes to [Multisite Site ID:L3 VNID].

      • Route-Targets must be defined on all VTEPs that participate in a given VRF; this includes, and is explicitly required for, the BGW that extends the VRF. Prior to Cisco NX-OS Release 9.3(5), Route-Targets from intra-site VTEPs were inadvertently kept across the site boundary, even if they were not defined on the BGW. Beginning with Cisco NX-OS Release 9.3(5), the mandatory behavior is enforced. Add the necessary Route-Targets to the BGW to move from inadvertent Route-Target advertisement to explicit Route-Target advertisement.

      • Path type changes from external to local.

      • For SVI-related triggers (such as shut/unshut or PIM enable/disable), a 30-second delay was added, allowing the Multicast FIB (MFIB) Distribution module (MFDM) to clear the hardware table before toggling between L2 and L3 modes or vice versa.

    • In a VXLAN Multi-Site environment, a border gateway device that uses ECMP for routing through both a VXLAN overlay and an L3 prefix to access remote site subnets might encounter adjacency resolution failure for one of these routes. If the switch attempts to use this unresolved prefix, it will result in traffic being dropped.

  • Convergence recommendations

    • To improve convergence in case of fabric link failure and to avoid issues in case of fabric link flapping, configure multi-hop BFD between the loopbacks of the spines and BGWs.

      In the specific scenario where a BGW node becomes completely isolated from the fabric due to all its fabric links failing, the use of multi-hop BFD ensures that the BGP sessions between the spines and the isolated BGW can be immediately brought down, without relying on the configured BGP hold-time value.
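
      A minimal sketch of multi-hop BFD toward a spine loopback follows. The addresses and AS number are placeholders, and availability of the bfd multihop neighbor option depends on platform and release; treat this as an assumption to verify against your software version.

      switch(config)# feature bfd
      switch(config)# router bgp 65001
      switch(config-router)# neighbor 10.1.1.11
      switch(config-router-neighbor)# remote-as 65001
      switch(config-router-neighbor)# update-source loopback0
      switch(config-router-neighbor)# bfd multihop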

vPC BGW restrictions

  • BGWs in a vPC topology are supported.

  • vPC mode can support only two BGWs.

  • vPC mode can support both Layer 2 hosts and Layer 3 services on local interfaces.

  • In vPC mode, BUM is replicated to either of the BGWs for traffic coming from the external site. Hence, both BGWs are forwarders for site external to site internal (DCI to fabric) direction.

  • In vPC mode, BUM is replicated to either of the BGWs for traffic coming from the local site leaf for a VLAN using Ingress Replication (IR) underlay. Both BGWs are forwarders for site internal to site external (fabric to DCI) direction for VLANs using the IR underlay.

  • In vPC mode, BUM is replicated to both BGWs for traffic coming from the local site leaf for a VLAN using the multicast underlay. Therefore, a decapper/forwarder election happens, and the decapsulation winner/forwarder only forwards the site-local traffic to external site BGWs for VLANs using the multicast underlay.

  • In vPC mode, all Layer 3 services/attachments are advertised in BGP via EVPN Type-5 routes with their virtual IP as next hop. If the VIP/PIP feature is configured, they are advertised with PIP as the next hop.

Multi-site BGW maintenance mode restrictions

  • BUM traffic from remote fabrics is still attracted to a border gateway that is in maintenance mode.

  • A border gateway in maintenance mode still participates in Designated Forwarder (DF) election.

  • The default maintenance mode profile applies the ip pim isolate command, which isolates the border gateway from the (S,G) tree toward the fabric. This isolation causes BUM traffic loss, so use a maintenance mode profile other than the default for border gateways.

Unsupported features

  • Multicast Flood Domain between inter-site/fabric BGWs isn't supported.

  • iBGP EVPN Peering between BGWs of different fabrics/sites isn't supported.

  • PIM BiDir is not supported for fabric underlay multicast replication with VXLAN Multi-Site.

  • FEX is not supported on a vPC BGW and Anycast BGW.

Anycast BGW restrictions

  • Anycast mode can support up to six BGWs per site.

  • Anycast mode can support only Layer 3 services that are attached to local interfaces.

  • In Anycast mode, BUM is replicated to each border leaf. DF election between the border leafs for a particular site determines which border leaf forwards the inter-site traffic (fabric to DCI and conversely) for that site.

  • In Anycast mode, all Layer 3 services are advertised in BGP via EVPN Type-5 routes with their physical IP as the next hop.

  • If different Anycast Gateway MAC addresses are configured across sites, enable ARP suppression and ND suppression for all VLANs that have been extended.

Supported features

  • VXLAN EVPN Multi-Site and Tenant Routed Multicast (TRM) are supported between sources and receivers deployed across different sites.

  • Prior to Cisco NX-OS Release 10.2(2)F, only ingress replication was supported between DCI peers across the core. Beginning with Cisco NX-OS Release 10.2(2)F, both ingress replication and multicast are supported between DCI peers across the core.

  • Beginning with Cisco NX-OS Release 9.3(5), VTEPs support VXLAN-encapsulated traffic over parent interfaces if subinterfaces are configured. This feature is supported for VXLAN EVPN Multi-Site and DCI. DCI tracking can be enabled only on the parent interface.

  • Beginning with Cisco NX-OS Release 9.3(5), VXLAN EVPN Multi-Site supports asymmetric VNIs. For more information, see Asymmetric VNIs in multi-site deployments.

Dual RD support for multi-site

  • Dual RDs are supported beginning with Cisco NX-OS Release 9.3(5).

  • Dual RD is enabled automatically for Cisco Nexus 9332C, 9364C, 9300-EX, and 9300-FX/FX2 platform switches and for Cisco Nexus 9500 platform switches with -EX/-FX line cards that have VXLAN EVPN Multi-Site enabled.

  • Beginning with Cisco NX-OS Release 10.2(3)F, the dual RD support for Multi-Site is supported on the Cisco Nexus 9300-FX3 platform switches.

  • To use CloudSec or other features that require PIP advertisement for multi-site reoriginated routes, configure BGP additional paths on the route server if dual RDs are enabled on the BGW, or disable dual RDs (see the sketch after this list).

  • Sending secondary RD additional paths at the BGW node isn't supported.

  • During an ISSU, the number of paths for the leaf nodes might double temporarily while all BGWs are being upgraded.
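
A minimal sketch of the additional-paths configuration on an NX-OS route server, as referenced in the CloudSec/PIP bullet above. The AS number and route-map name are placeholders, and whether send, receive, or both capabilities are needed depends on the route-server design:

switch(config)# route-map ADD-PATH-ALL permit 10
switch(config-route-map)# set path-selection all advertise
switch(config-route-map)# exit
switch(config)# router bgp 65001
switch(config-router)# address-family l2vpn evpn
switch(config-router-af)# additional-paths send
switch(config-router-af)# additional-paths receive
switch(config-router-af)# additional-paths selection route-map ADD-PATH-ALL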


Enable VXLAN EVPN Multi-Site

Use this procedure on each Border Gateway (BGW) switch to activate and configure VXLAN EVPN Multi-Site. Ensure the site ID is consistent across all BGWs within a site.

Follow these steps to enable VXLAN EVPN Multi-Site:

Before you begin

  • Ensure you have administrative privileges.

  • Verify that required software features are licensed and available.

  • Plan your loopback interfaces and IPs for source and BGW VIP.

  • Confirm underlay connectivity and routing advertisement for loopback addresses.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Enable EVPN Multi-Site and configure the site ID. evpn multisite border-gateway ms-id

Example:

switch(config)# evpn multisite border-gateway 100 

The range of values for ms-id is 1 to 281,474,976,710,655. Ensure the same site ID is configured on all BGWs in the fabric.

Step 3

(Optional) Enable split-horizon per-site if using DCI multicast underlay with anycast BGW.

Example:

switch(config-evpn-msite-bgw)# split-horizon per-site 

Note

 

Use this command when DCI multicast underlay is configured on a site with anycast border gateway.

Step 4

Create the NVE (Network Virtualization Edge) interface.

Example:

switch(config-evpn-msite-bgw)# interface nve 1

Note

 

Only one NVE interface is allowed on the switch.

Step 5

Assign the source interface as a loopback with a /32 IP address advertised throughout the transport network.

Example:

switch(config-if-nve)# source-interface loopback 0 

The source interface must be a loopback interface that is configured on the switch with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising it through a dynamic routing protocol in the transport network.

Step 6

Configure BGP as the host-reachability protocol for VXLAN Ethernet VPN. host-reachability protocol bgp

Example:

switch(config-if-nve)# host-reachability protocol bgp

Step 7

Specify the multisite border-gateway interface (another loopback, different from the source interface) for the BGW Virtual IP.

Example:

switch(config-if-nve)# multisite border-gateway interface loopback 100

Defines the loopback interface used for the BGW virtual IP address (VIP). The border-gateway interface must be a loopback interface that is configured on the switch with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising it through a dynamic routing protocol in the transport network. This loopback must be different than the source interface loopback. The range of vi-num is from 0 to 1023.

Step 8

Activate the NVE interface.

Example:

switch(config-if-nve)# no shutdown
switch(config-if-nve)# exit
switch(config-if)# exit
switch(config)#

Step 9

Configure the loopback interfaces and IP addresses.

Example:

switch(config)# interface loopback 0 
switch(config-if)# ip address 192.0.2.0/32 
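
The multi-site border-gateway loopback from Step 7 also needs an address. A minimal sketch with a placeholder /32 follows; the same VIP address must be configured on every BGW in the site, advertised into the underlay, and have PIM sparse mode enabled per the deployment restrictions:

switch(config)# interface loopback 100
switch(config-if)# ip address 192.0.2.100/32
switch(config-if)# ip pim sparse-mode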

VXLAN EVPN Multi-Site is enabled on the BGW switch. The device is ready to participate in multi-site fabric connectivity.

Configure dual RD support for Multi-Site

Enable and manage dual Route Distinguisher (RD) support for VXLAN EVPN Multi-Site environments.

Follow these steps when you need to manually configure the secondary RD value across Multi-Site border gateways (BGWs) or revert to single RD support.

Before you begin

Before you begin, ensure:

  • VXLAN EVPN Multi-Site is enabled on your device.

  • You have the required BGP Autonomous System Number (AS number).

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal 
switch(config)#

This command puts the device into global configuration mode.

Step 2

Configure the BGP router process with your autonomous system number.

Example:

switch(config)# router bgp 100 
switch(config-router)#

Use a value from 1 to 4,294,967,295 for autonomous system number.

Step 3

Define the first 2 bytes of the secondary RD for Multi-Site BGWs.

Example:

switch(config-router)# rd dual id 1

Choose a value from 1 to 65535 for rd dual ID. Ensure this ID is consistent across all BGWs in the Multi-Site deployment.

Note

 
To disable dual RD support and revert to a single RD, use the command:
no rd dual

Step 4

(Optional) Verify the secondary RD for a specific Ethernet VPN Instance (EVI).

Example:

switch(config-router)# show bgp evi 100

This command displays the current secondary RD configuration for the specified EVI.


Dual RD support is now configured for Multi-Site, allowing consistent EVPN route identification using both primary and secondary RD values across border gateways.

The following output shows how the secondary RD is displayed:

switch# show bgp evi 100
-----------------------------------------------
  L2VNI ID                     : 100 (L2-100)
  RD                           : 3.3.3.3:32867
  Secondary RD                 : 1:100
  Prefixes (local/total)       : 1/6
  Created                      : Jun 23 22:35:13.368170
  Last Oper Up/Down            : Jun 23 22:35:13.369005 / never
  Enabled                      : Yes
  Active Export RT list        :
        100:100
  Active Import RT list        :
        100:100

Configure VNI dual mode

This task is used to set up VNI dual mode, allowing BUM traffic to be managed using multicast or ingress replication within and across fabrics/sites.

For more information about configuring multicast or ingress replication for a large number of VNIs, see VXLAN BGP EVPN eBGP topologies.

Follow these steps to configure VNI dual mode:

Before you begin

  • Ensure you have administrator access to the switch CLI.

  • If only a Layer 3 extension is configured on the Border Gateway (BGW), create an additional loopback interface in the same VRF instance on all BGWs and assign a unique IP address per BGW. Redistribute the loopback interface’s IP address into BGP EVPN, especially toward Site-External.

  • For multiple VRFs, if only one is extended to all leaf switches, add a dummy loopback to the extended VRF and advertise through BGP. Otherwise, add dummy loopbacks to each extended VRF and advertise them accordingly.

  • Use the advertise-pip command to prevent potential configuration errors.
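
A minimal sketch of these prerequisites, assuming a single extended VRF named vni201. The loopback number, addresses, and AS number are placeholders, the loopback address must be unique on each BGW, and the loopback is advertised here with a network statement rather than redistribution:

switch(config)# interface loopback 200
switch(config-if)# vrf member vni201
switch(config-if)# ip address 10.200.200.1/32
switch(config-if)# exit
switch(config)# router bgp 65001
switch(config-router)# address-family l2vpn evpn
switch(config-router-af)# advertise-pip
switch(config-router-af)# exit
switch(config-router)# vrf vni201
switch(config-router-vrf)# address-family ipv4 unicast
switch(config-router-vrf-af)# network 10.200.200.1/32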

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Create a VXLAN overlay interface.

Example:

switch(config)# interface nve 1

Note

 

Only one NVE interface is allowed per switch.

Step 3

Configure the VNI.

Example:

switch(config-if-nve)# member vni 200

The range for vni-range is from 1 to 16,777,214. The value of vni-range can be a single value like 5000 or a range like 5001-5008.

Step 4

Choose one of the following methods to replicate BUM traffic within the fabric/site:

  • Configure the NVE multicast group.
    switch(config-if-nve-vni)# mcast-group 225.0.4.1
  • Enable BGP EVPN with ingress replication.
    switch(config-if-nve-vni)# ingress-replication protocol bgp

Step 5

Define the Multi-Site BUM replication method for extending the Layer 2 VNI.

Example:

switch(config-if-nve-vni)# multisite ingress-replication

The switch is configured with VNI dual mode, supporting BUM traffic replication via multicast or ingress replication as required for your network fabric/site design.
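
For reference, a consolidated sketch with placeholder VNIs and group addresses, in which one VNI uses a multicast underlay inside the fabric and another uses ingress replication, while both are extended across sites with Multi-Site ingress replication:

switch(config)# interface nve 1
switch(config-if-nve)# member vni 200
switch(config-if-nve-vni)# mcast-group 239.1.1.1
switch(config-if-nve-vni)# multisite ingress-replication
switch(config-if-nve-vni)# exit
switch(config-if-nve)# member vni 201
switch(config-if-nve-vni)# ingress-replication protocol bgp
switch(config-if-nve-vni)# multisite ingress-replication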

Configure Fabric/DCI link tracking

Configure tracking for all DCI-facing and site internal/fabric-facing interfaces to control EVPN route origination when links fail.

Tracking ensures EVPN routes are not reoriginated from or to a site if all DCI or fabric links go down. This helps prevent routing issues and is mandatory for Multi-Site EVPN deployments.

Follow these steps to configure Fabric/DCI Link Tracking.

Before you begin

  • Confirm you have administrative access to the device CLI.

  • Identify DCI-facing and site internal/fabric-facing interfaces to be tracked.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Enter interface configuration mode for the DCI or fabric-facing interface.

Example:

switch(config)# interface ethernet1/1

Enters interface configuration mode for the DCI or fabric interface.

Note

 

Apply one of the tracking commands shown in Step 3, depending on whether the interface faces the DCI or the fabric.

Step 3

Choose the required tracking command.

Configures DCI or fabric interface tracking, depending on the interface role.

  • To track a DCI-facing interface, enable DCI tracking:
    switch(config-if)# evpn multisite dci-tracking
  • To track a fabric-facing interface (mandatory for anycast BGWs and vPC BGW fabric links), enable fabric tracking:
    switch(config-if)# evpn multisite fabric-tracking

Step 4

Configure the IP or IPv6 address for the interface.

  • For IPv4:
    switch(config-if)# ip address 192.1.1.1/24
  • For IPv6:
    switch(config-if)# ipv6 address 2001:DB8::192:1:1:1/64


Step 5

Enable the interface.

Example:

switch(config-if)# no shutdown

Fabric/DCI link tracking is enabled. The device will automatically stop reoriginating EVPN routes if all DCI or fabric links at the site go down.
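
For reference, a consolidated sketch with placeholder interfaces and addresses, showing one DCI-facing and one fabric-facing interface:

switch(config)# interface ethernet1/1
switch(config-if)# ip address 203.0.113.1/30
switch(config-if)# evpn multisite dci-tracking
switch(config-if)# no shutdown
switch(config-if)# exit
switch(config)# interface ethernet1/2
switch(config-if)# ip address 10.1.1.1/30
switch(config-if)# evpn multisite fabric-tracking
switch(config-if)# no shutdown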

Configure fabric external neighbors

This task is required when you need to establish BGP-based connectivity from your fabric to other sites or external fabrics using Border Gateway nodes. Proper configuration ensures correct route exchange for EVPN across site boundaries.

Follow these steps to configure fabric external neighbors:

Before you begin

  • Ensure you have administrative access to the switch.

  • Confirm the AS (Autonomous System) numbers for your site and remote BGP neighbor.

  • Have the IP addresses (IPv4 or IPv6) of the external neighbors ready.


Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Configure the local BGP process with your site’s autonomous system number.

Example:

switch(config)# router bgp 100

The range for as-num is from 1 to 4,294,967,295.

Step 3

Configure the BGP neighbor using its IP address.

Example:

switch(config-router)# neighbor 100.0.0.1

Step 4

Set the remote autonomous system number for the neighbor.

Example:

switch(config-router-neighbor)# remote-as 69000

Step 5

Designate the peer as a fabric-external type to enable next hop rewrite for Multi-Site deployments.

Example:

switch(config-router-neighbor)# peer-type fabric-external

The default for peer-type is fabric-internal.

Note

 

The peer-type fabric-external command is required only for VXLAN Multi-Site BGWs. It is not required for pseudo BGWs.

Step 6

Enable the EVPN address family for the neighbor.

Example:

switch(config-router-neighbor)# address-family l2vpn evpn

Step 7

Rewrite the route target autonomous system number for correct EVPN route propagation.

Example:

switch(config-router-neighbor)# rewrite-evpn-rt-asn

Rewrites the route target (RT) information to simplify the MAC-VRF and IP-VRF configuration. BGP receives a route, and as it processes the RT attributes, it checks if the AS value matches the peer AS that is sending that route and replaces it. Specifically, this command changes the incoming route target’s AS number to match the BGP-configured neighbor’s remote AS number. You can see the modified RT value in the receiver router.


Fabric external/DCI neighbors are configured, enabling BGP-based communication and route exchange between your site’s BGWs and external/fabric peers for EVPN services.
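
For reference, a consolidated sketch of one eBGP EVPN peering toward a remote-site BGW. The addresses and AS numbers are placeholders, and the update-source and ebgp-multihop lines reflect a typical loopback-to-loopback DCI peering; they are assumptions rather than requirements of this procedure:

switch(config)# router bgp 100
switch(config-router)# neighbor 100.0.0.1
switch(config-router-neighbor)# remote-as 69000
switch(config-router-neighbor)# update-source loopback0
switch(config-router-neighbor)# ebgp-multihop 5
switch(config-router-neighbor)# peer-type fabric-external
switch(config-router-neighbor)# address-family l2vpn evpn
switch(config-router-neighbor-af)# send-community
switch(config-router-neighbor-af)# send-community extended
switch(config-router-neighbor-af)# rewrite-evpn-rt-asn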

Configure VXLAN EVPN Multi-Site storm control

VXLAN EVPN Multi-Site storm control lets you rate-limit BUM traffic, helping prevent network disruptions caused by traffic storms. Storm control is implemented on the ingress direction of fabric and DCI interfaces.


Note


For information on access port storm control, see the Cisco Nexus 9000 Series NX-OS Layer 2 Configuration Guide.


Follow these steps to configure VXLAN EVPN Multi-Site storm control:

Before you begin

  • Remote peer reachability must be only through DCI links. Appropriate routing configuration must ensure that remote site routes are not advertised over Fabric links.

  • Multicast traffic is policed only on DCI interfaces, while unknown unicast and broadcast traffic is policed on both DCI and fabric interfaces.

  • Cisco NX-OS Release 9.3(6) and later releases optimize rate granularity and accuracy. Bandwidth is calculated based on the accumulated DCI uplink bandwidth, and only interfaces tagged with DCI tracking are considered. (Prior releases also include fabric-tagged interfaces.) In addition, granularity is enhanced by supporting two digits after the decimal point. These enhancements apply to the Cisco Nexus 9300-EX, 9300-FX/FX2/FX3, and 9300-GX platform switches.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal 
switch(config)#

Step 2

Configure the storm suppression level for broadcast, multicast, or unknown unicast traffic.

Example:

switch(config)# evpn storm-control unicast level 10 

Example:

switch(config)# evpn storm-control unicast level 10.20 

Configures the storm suppression level as a number from 0–100.

0 means that all traffic is dropped, and 100 means that all traffic is allowed. For any value in between, the unknown unicast traffic rate is restricted to a percentage of available bandwidth. For example, a value of 10 means that the traffic rate is restricted to 10% of the available bandwidth, and anything above that rate is dropped.

Beginning with Cisco NX-OS Release 9.3(6), you can configure the level as a fractional value by adding two digits after the decimal point. For example, you can enter a value of 10.20.


VXLAN EVPN Multi-Site storm control is applied to selected traffic types according to the specified suppression levels. The network drops excess BUM traffic, helping maintain stability and prevent disruption.
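
For reference, a sketch that applies suppression levels to all three BUM traffic classes; the percentage values are placeholders, and fractional values require Cisco NX-OS Release 9.3(6) or later:

switch(config)# evpn storm-control broadcast level 5.50
switch(config)# evpn storm-control multicast level 5.50
switch(config)# evpn storm-control unicast level 10.20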

EVPN storm control commands for VXLAN Multi-Site environments

You can view the status of EVPN storm control settings in VXLAN Multi-Site environments using this command:

Command

Purpose

slot 1 show hardware vxlan storm-control

Displays the status of the EVPN storm control settings.


Note


When the storm control threshold is exceeded, the system logs the following message:

BGWY-1 %ETHPORT-5-STORM_CONTROL_ABOVE_THRESHOLD: Traffic in port Ethernet1/32 exceeds the configured threshold , action - Trap (message repeated 38 times)

Multi-Site with vPC Support

Multi-Site with vPC support

A Multi-Site with vPC support is a network architecture that

  • allows Border Gateways (BGWs) to be part of a vPC complex,

  • supports dually-attached directly-connected hosts (which may be bridged or routed) as well as dually-attached firewalls or other service attachments, and

  • uses vPC-specific multihoming techniques that do not rely on EVPN Type 4 routes for Designated Forwarder (DF) election or split horizon.

In this architecture, vPC BGWs provide flexibility for attaching hosts, firewalls, or service functions to the network through dual connections, enabling both redundancy and load sharing. The reliance on dedicated vPC multihoming techniques increases resiliency and simplifies operations, as it removes the need for certain control-plane mechanisms (such as Type 4 routes).

Guidelines for configuring Multi-Site with vPC support

Follow these guidelines when configuring Multi-Site with vPC support:

  • Do not configure 4000 VNIs for vPC, as this is not supported.

  • The MCT link is used as transport for BUM traffic (with continued VIP use) during core isolation or fabric isolation, and for unicast traffic during fabric isolation.

  • Beginning with Cisco NX-OS Release 10.1(2), you can use TRM Multisite with vPC BGW.

  • Always prefer the DCI link path for routes to remote Multi-Site BGW loopback addresses, rather than the iBGP path over the backup SVI. Use the backup SVI only in the event of a DCI link failure.

Configure Multi-Site with vPC support

Establish Multi-Site connectivity with vPC support to enable dual-active links and network redundancy across fabrics.

This task is performed when deploying a VXLAN EVPN Multi-Site solution requiring virtual port channel (vPC) integration between border gateways.

Before you begin

  • Ensure devices are running compatible NX-OS software.

  • Verify IP addresses, VLANs, and interface numbers for configuration.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

You are now in configuration mode.

Step 2

Enable core system features: vPC, LACP, interface VLAN, PIM, and OSPF.

Example:

switch(config)# feature vpc
switch(config)# feature lacp
switch(config)# feature interface-vlan
switch(config)# feature pim
switch(config)# feature ospf

Step 3

Define the PIM RP address and multicast group range for the underlay.

Example:

switch(config)# ip pim rp-address 192.0.2.1 group-list 224.0.0.0/4

Step 4

Configure the vPC domain and peer-keepalive link, including essential vPC parameters.

Example:

switch(config)# vpc domain 1
switch(config-vpc-domain)# peer-switch
switch(config-vpc-domain)# peer-gateway
switch(config-vpc-domain)# peer-keepalive destination 192.0.2.2

There is no default value for vPC domain. The range is from 1 to 1000.

Configures the IPv4 address for the remote end of the vPC peer-keepalive link. The management ports and VRF are the defaults.

Note

 

The system does not form the vPC peer link until you configure a vPC peer-keepalive link.

Step 5

Enable ARP and IPv6 ND synchronization under the vPC domain.

Example:

switch(config-vpc-domain)# ip arp synchronize
switch(config-vpc-domain)# ipv6 nd synchronize
switch(config-vpc-domain)# delay restore interface-vlan 45  !optional

Enables IP ARP and ND synchronize under the vPC domain to facilitate faster ARP and ND table population following device reload.

The delay restore interface-vlan configuration is optional. We recommend tuning this value when the SVI/VNI scale is high. For example, when the SVI count is 1000, we recommend that you set the delay restore to 45 seconds.

Step 6

Create the vPC peer-link port-channel, add member interfaces, and configure trunking.

Example:

switch(config)# interface port-channel 1
switch(config-if)# switchport
switch(config-if)# switchport mode trunk
switch(config-if)# switchport trunk allowed vlan 1,10,100-200
switch(config-if)# mtu 9216
switch(config-if)# vpc peer-link
switch(config-if)# no shutdown
switch(config)# interface Ethernet1/1, Ethernet1/21
switch(config-if)# channel-group 1 mode active
switch(config-if)# no shutdown

Creates the vPC peer-link port-channel interface and adds two member interfaces to it.

Step 7

Configure infrastructure VLAN and create routed SVI.

Example:

switch(config)# vlan 10
switch(config)# system nve infra-vlans 10
switch(config)# interface vlan 10
switch(config-if)# ip address 10.0.0.1/30
switch(config-if)# ip router ospf process UNDERLAY area 0
switch(config-if)# ip pim sparse-mode
switch(config-if)# no ip redirects
switch(config-if)# mtu 9216
switch(config-if)# no shutdown

Creates the SVI used for the backup routed path over the vPC peer-link.

Step 8

Set up VXLAN NVE interface and EVPN Multi-Site border gateway ID; configure routeable loopbacks for BGP and PIM.

Example:

switch(config)# evpn multisite border-gateway 100
switch(config)# interface nve 1
switch(config-if-nve)# source-interface loopback0
switch(config-if-nve)# host-reachability protocol bgp
switch(config-if-nve)# multisite border-gateway interface loopback100
switch(config-if-nve)# no shutdown
switch(config-if-nve)# exit
switch(config-if)# exit
switch(config)# interface loopback0
switch(config-if)# ip address 198.51.100.0/32
switch(config-if)# ip pim sparse-mode
switch(config-if)# exit
switch(config)# interface loopback100
switch(config-if)# ip address 198.51.100.1/32
switch(config-if)# exit

The range of values for ms-id is 1 to 281474976710655. The ms-id must be the same in all BGWs within the same fabric/site.

Defines the source interface, which must be a loopback interface with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising the address through a dynamic routing protocol in the transport network.

Defines the loopback interface used for the BGW virtual IP address (VIP). The BGW interface must be a loopback interface that is configured on the switch with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising the address through a dynamic routing protocol in the transport network. This loopback must be different than the source interface loopback. The range of vi-num is from 0 to 1023.

Step 9

Verify configuration and operational status.

Example:

switch# show vpc brief
switch# show nve peers
switch# show vlan brief
switch# show interface port-channel 1

Confirm that vPC, peer-link, SVI, and NVE interfaces are up and operational.


Multi-Site with vPC support is operational, providing high availability and resilient inter-site connectivity.

Multi-Site vPC support verification commands

These commands provide information to verify Multi-Site with vPC support configuration.

Table 1. vPC Verification Commands

Command

Description

show vpc brief

Displays general vPC and CC status.

show vpc consistency-parameters global

Displays the status of those parameters that must be consistent across all vPC interfaces.

show vpc consistency-parameters vni

Displays configuration information for VNIs under the NVE interface that must be consistent across both vPC peers.

show vpc brief

switch# show vpc brief
Legend:
                (*) - local vPC is down, forwarding via vPC peer-link
 
vPC domain id                     : 1  
Peer status                       : peer adjacency formed ok     (<--- peer up)
vPC keep-alive status             : peer is alive                
Configuration consistency status  : success (<----- CC passed)
Per-vlan consistency status       : success                       (<---- per-VNI CCpassed)
Type-2 consistency status         : success
vPC role                          : secondary                    
Number of vPCs configured         : 1  
Peer Gateway                      : Enabled
Dual-active excluded VLANs        : -
Graceful Consistency Check        : Enabled
Auto-recovery status              : Enabled, timer is off.(timeout = 240s)
Delay-restore status              : Timer is off.(timeout = 30s)
Delay-restore SVI status          : Timer is off.(timeout = 10s)
Operational Layer3 Peer-router    : Disabled
[...]

show vpc consistency-parameters global

switch# show vpc consistency-parameters global
 
    Legend:
        Type 1 : vPC will be suspended in case of mismatch
 
Name                        Type  Local Value            Peer Value            
-------------               ----  ---------------------- -----------------------
[...]
Nve1 Adm St, Src Adm St,    1     Up, Up, 2.1.44.5, CP,  Up, Up, 2.1.44.5, CP,
Sec IP, Host Reach, VMAC          TRUE, Disabled,        TRUE, Disabled,      
Adv, SA,mcast l2, mcast           0.0.0.0, 0.0.0.0,      0.0.0.0, 0.0.0.0,    
l3, IR BGP,MS Adm St, Reo         Disabled, Up,          Disabled, Up,        
                                  198.51.100.1        198.51.100.1
[...]

show vpc consistency-parameters vni

switch(config-if-nve-vni)# show vpc consistency-parameters vni
 
    Legend:
        Type 1 : vPC will be suspended in case of mismatch
 
Name                        Type  Local Value            Peer Value            
-------------               ----  ---------------------- -----------------------
Nve1 Vni, Mcast, Mode,      1     11577, 224.0.0.0,      11577, 224.0.0.0,    
Type, Flags                       Mcast, L2, MS IR       Mcast, L2, MS IR      
Nve1 Vni, Mcast, Mode,      1     11576, 224.0.0.0,      11576, 224.0.0.0,    
Type, Flags                       Mcast, L2, MS IR       Mcast, L2, MS IR
[...]

Asymmetric VNIs in multi-site deployments

Asymmetric VNIs allow you to connect sites with different internal L2VNI and L3VNI assignments to a common MAC VRF or IP VRF by manually aligning route-targets. Use this reference to identify required prerequisites and configuration parameters.

  • Each site uses its own L2VNI and L3VNI values. For example, site 1 uses VNI 200 internally, and site 2 uses VNI 300.

  • Automatic route-target assignment does not match when VNI values differ; a common route-target (such as 222:333) must be manually assigned.


Note


  • Basic multi-site configuration must be completed.

  • VLAN-to-VRF mapping must be configured on each Border Gateway (BGW) to maintain proper L2VNI/L3VNI association and MAC-IP route reorigination.


Site-specific VNI assignments

Site          VNIs
Site 1 BGW    L2VNI = 200, L3VNI = 201
Site 2 BGW    L2VNI = 300, L3VNI = 301

Layer 3 configuration

Assign a common route-target (e.g., 201:301) for VRF context on both BGWs.

Site 1 BGW:

vrf context vni201
  vni 201
  address-family ipv4 unicast
    route-target both auto evpn
    route-target import 201:301 evpn
    route-target export 201:301 evpn

Site 2 BGW:

vrf context vni301
  vni 301
  address-family ipv4 unicast
    route-target both auto evpn
    route-target import 201:301 evpn
    route-target export 201:301 evpn

Layer 2 configuration

  • Assign a common route-target (e.g., 222:333) for L2VNI configuration on both BGWs.

  • Map each L2VNI interface to the corresponding VRF to support MAC-IP route reorigination.

Site 1 BGW:

evpn
  vni 200 l2
    rd auto
    route-target import auto
    route-target import 222:333
    route-target export auto
    route-target export 222:333

Associate the VRF (L3VNI) to the L2VNI for MAC-IP label reorigination:

interface Vlan 200
  vrf member vni201

Site 2 BGW:

evpn
  vni 300 l2
    rd auto
    route-target import auto
    route-target import 222:333
    route-target export auto
    route-target export 222:333

Associate the VRF (L3VNI) to the L2VNI for MAC-IP label reorigination:

interface vlan 300
  vrf member vni301

Result

This configuration enables sites with different VNI assignments to participate in the same MAC VRF or IP VRF and ensures proper route exchange and MAC-IP route stitching across sites.

TRM with Multi-Site

This section contains these topics:

Tenant routed multicasts with Multi-Site

A Tenant Routed Multicast with Multi-Site deployment is a VXLAN EVPN architecture that

  • enables multicast forwarding across multiple VXLAN EVPN fabrics connected via Multi-Site

  • provides Layer 3 multicast services for multicast sources and receivers across different sites, and

  • addresses the need for efficient East-West multicast traffic between geographically distributed networks.

TRM with Multi-Site Reference Information

  • Each TRM site operates independently. Border gateways on each site allow stitching across the sites. There can be multiple border gateways for each site.

  • Multicast source and receiver information across sites is propagated by BGP on the border gateways that are configured with TRM. The border gateway on each site receives the multicast packet and re-encapsulates the packet before sending it to the local site. Beginning with Cisco NX-OS Release 10.1(2), TRM with Multi-Site supports both Anycast Border Gateway and vPC Border Gateway.

  • The border gateway that is elected as Designated Forwarder (DF) for the L3VNI forwards the traffic from fabric toward the core side. In the TRM Multicast-Anycast Gateway model, we use the VIP-R based model to send traffic toward remote sites. The IR destination IP is the VIP-R of the remote site. Each site that has the receiver gets only one copy from the source site. DF forwarding is applicable only on Anycast Border Gateways.

TRM with Multi-Site Examples

  • On the remote site, the border gateway that receives inter-site multicast traffic from the core forwards it toward the fabric. DF checks are not performed in the core-to-fabric direction, because non-DF border gateways can also receive a copy for local distribution.

  • Only the Designated Forwarder (DF) sends traffic toward remote sites.

  • Beginning with Cisco NX-OS Release 9.3(3), TRM with Multi-Site supports BGW connections to the external multicast network in addition to the BL connectivity, which is supported in previous releases. Forwarding occurs as documented in the previous example, except the exit point to the external multicast network can optionally be provided through the BGW.

Figure 1. TRM with Multi-Site Topology, BL External Multicast Connectivity
Figure 2. TRM with Multi-Site Topology, BGW External Multicast Connectivity

Supported platforms, software versions, and features for TRM with Multi-Site

Tenant Routed Multicast (TRM) with Multi-Site is supported across a range of Cisco Nexus platforms and NX-OS software versions. TRM enables scalable multicast routing between fabric sites in a VXLAN EVPN multi-site environment.

TRM with Multi-Site supported platforms

These switches support TRM with Multi-Site:

    • Cisco Nexus 9300-EX platform switches

    • Cisco Nexus 9300-FX/FX2/FX3 platform switches

    • Cisco Nexus 9300-GX platform switches

    • Cisco Nexus 9300-GX2 platform switches

    • Cisco Nexus 9500 platform switches with -EX/-FX line cards

TRM Multi-Site with vPC BGW and Anycast BGW support

Release        Platforms

9.x            Cisco Nexus 9300-EX, -FX, -FX2, and -FX3 family switches

10.2(1)F       Cisco Nexus 9300-GX family switches

10.2(1q)F      N9K-C9332D-GX2B switches

10.4(1)F       Cisco Nexus 9332D-H2R switches

10.4(2)F       Cisco Nexus 93400LD-H1 switches

10.5(2)F       Cisco Nexus 9500 Series switches with N9K-X9736C-FX3 line card

Deployment restrictions

  • Beginning with Cisco NX-OS Release 9.3(3), a border leaf and Multi-Site border gateway can coexist on the same node for multicast traffic.

  • Beginning with Cisco NX-OS Release 9.3(3), all border gateways for a given site must run the same Cisco NX-OS 9.3(x) image.

  • Cisco NX-OS Release 10.1(2) has the following guidelines and limitations:

    • Backup SVI is needed between the two vPC peers.

    • Orphan ports attached with L2 and L3 are supported with vPC BGW.

    • TRM multi-site with vPC BGW is not supported with vMCT.

    For details on TRM and Configuring TRM with vPC Support, see Configuring Tenant Routed Multicast.

  • Beginning with Cisco NX-OS Release 10.2(2)F, multicast group configuration is used to encapsulate TRM and L2 BUM packets in the DCI core using the multisite mcast-group dci-core-group command.

Supported features

  • TRM with Multi-Site supports these features:

    • TRM Multi-Site with vPC Border Gateway.

    • PIM ASM multicast underlay in the VXLAN fabric

    • TRM with Multi-Site Layer 3 mode only

    • TRM with Multi-Site with Anycast Gateway

    • Terminating VRF-lite at the border leaf

    • The following RP models with TRM Multi-Site:

      • External RP

      • RP Everywhere

      • Internal RP

  • Prior to Cisco NX-OS Release 10.2(2)F, only ingress replication was supported between DCI peers across the core. Beginning with Cisco NX-OS Release 10.2(2)F, both ingress replication and multicast are supported between DCI peers across the core.

Feature limitations

  • Border routers reoriginate MVPN routes from fabric to core and from core to fabric.

  • Only one pair of vPC BGW can be configured on one site.

  • A pair of vPC BGW and Anycast BGW cannot co-exist on the same site.

  • Only eBGP peering between border gateways of different sites is supported.

  • Each site must have a local RP for the TRM underlay.

  • Keep each site's underlay unicast routing isolated from another site's underlay unicast routing. This requirement also applies to Multi-Site.

  • The MVPN address family must be enabled between BGWs (a minimal peering sketch follows this list).

  • MED is supported for iBGP only.
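
A minimal sketch of enabling the MVPN address family toward a remote-site BGW, assuming the eBGP EVPN DCI peering is already configured; the neighbor address and AS numbers are placeholders:

switch(config)# router bgp 65001
switch(config-router)# neighbor 203.0.113.10
switch(config-router-neighbor)# remote-as 65002
switch(config-router-neighbor)# address-family ipv4 mvpn
switch(config-router-neighbor-af)# send-community extended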

VRF-lite hand-off and Multi-site configuration support

  • When configuring BGW connections to the external multicast fabric, be aware of the following:

    • The multicast underlay must be configured between all BGWs on the fabric side even if the site doesn’t have any leafs in the fabric site.

    • Sources and receivers that are Layer 3-attached through VRF-lite links to the BGW of a single site (which therefore also acts as a Border Leaf (BL) node) need reachability through the external Layer 3 network. If there is a Layer 3-attached source on BGW-BL node 1 and a Layer 3-attached receiver on BGW-BL node 2 of the same site, the traffic between these two endpoints flows through the external Layer 3 network and not through the fabric.

    • External multicast networks should be connected only through the BGW or BL. If a deployment requires external multicast network connectivity from both the BGW and BL at the same site, make sure that external routes that are learned from the BGW are preferred over the BL. To do so, the BGW must have a lower MED and a higher OSPF cost (on the external links) than the BL.

      The following figure shows a site with external network connectivity through BGW-BLs and an internal leaf (BL1). The path to the external source should be through BGW-1 (rather than through BL1) to avoid duplication on the remote site receiver.

  • The BGW supports VRF-lite hand-off and Multi-site configuration on the same physical interface as shown in the diagram.

Configure TRM with Multi-Site

Use this procedure to configure a switch for VXLAN Multi-Site deployments with TRM, ensuring seamless communication across sites.

Follow these steps to configure TRM with Multi-Site:

Before you begin

The following must be configured:

  • Ensure VXLAN TRM and VXLAN Multi-Site features are enabled on the switch.

  • For Anycast Border Gateway (BGW), follow this procedure. For vPC BGW, ensure vPC is also configured.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Access and enable the NVE interface.

Example:

switch(config)# interface nve1
switch(config-if-nve)# no shutdown

Step 3

Set BGP as the host reachability protocol.

Example:

switch(config-if-nve)# host-reachability protocol bgp

Step 4

Specify loopback interfaces for source and border-gateway.

Example:

switch(config-if-nve)# source-interface loopback 0
switch(config-if-nve)# multisite border-gateway interface loopback 1

Defines the source interface, which must be a loopback interface with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising the address through a dynamic routing protocol in the transport network.

Defines the loopback interface used for the border gateway virtual IP address (VIP). The border-gateway interface must be a loopback interface that is configured on the switch with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This requirement is accomplished by advertising the address through a dynamic routing protocol in the transport network. This loopback must be different than the source interface loopback. The range of vi-num is from 0 to 1023.

Step 5

Add a virtual network identifier (VNI) and associate it with a VRF.

Example:

switch(config-if-nve)# member vni 10010 associate-vrf

The range for vni-range is from 1 to 16,777,214. The value of vni-range can be a single value like 5000 or a range like 5001-5008.

Step 6

Configure the multicast group address for the NVE.

Example:

switch(config-if-nve-vni)# mcast-group 224.0.0.0

Step 7

Configure the multicast group for the Data Center Interconnect (DCI) core to encapsulate TRM and L2 BUM packets.

Example:

switch(config-if-nve-vni)# multisite mcast-group 224.1.1.1

Step 8

Configure optimized Multi-Site ingress replication.

Example:

switch(config-if-nve-vni)# multisite ingress-replication optimized

Defines the Multi-Site BUM replication method for extending the Layer 2 VNI.

Step 9

Exit configuration mode and save the configuration.

Example:

switch(config-if-nve-vni)# exit
switch# copy running-config startup-config

TRM with VXLAN Multi-Site is now configured on the switch. BGP is used for host reachability, and the correct multicast settings support inter-site communication.

TRM status in multi-site configurations

To display the status for the TRM with Multi-Site configuration, enter the following command.

Table 2. Command Purpose Lookup

Command

Purpose

show nve vni virtual-network-identifier

Displays the L3VNI.

Note

 

For this feature, optimized IR is the default setting for the Multi-Site extended L3VNI. The MS-IR flag inherently means that it's MS-IR optimized.

The show nve vni command provides information about the status of TRM (Tenant Routed Multicast) in a multi-site configuration. This reference outlines the command usage, its purpose, and example outputs for both IPv4 and IPv6 environments.

  • For IPv4
    switch(config)# show nve vni 51001
    Codes: CP - Control Plane        DP - Data Plane
           UC - Unconfigured         SA - Suppress ARP
           SU - Suppress Unknown Unicast
           Xconn - Crossconnect
           MS-IR - Multisite Ingress Replication
     
    Interface VNI      Multicast-group   State Mode Type [BD/VRF]      Flags
    --------- -------- ----------------- ----- ---- ------------------ -----
    nve1      51001    226.0.0.1         Up    CP   L3 [cust_1]        MS-IR