Configure Tenant Routed Multicast


Tenant Routed Multicast

Tenant Routed Multicast (TRM) is a VXLAN EVPN multicast forwarding solution that

  • enables efficient multicast delivery across a multi-tenant VXLAN fabric using a BGP-based EVPN control plane

  • supports multicast forwarding between senders and receivers within the same or different subnets and VTEPs, and

  • improves Layer-3 overlay multicast scalability and efficiency for modern data center networks.

Tenant routed multicast brings standards-based multicast capabilities to the VXLAN overlay network by leveraging next-generation multicast VPN (ngMVPN) as described in IETF RFCs 6513 and 6514. TRM allows each edge device (VTEP) with a distributed IP Anycast Gateway to also act as a Designated Router (DR) for multicast forwarding. Bridged multicast forwarding is optimized through IGMP snooping, ensuring only interested receivers at the edge receive the multicast traffic, while all non-local multicast traffic is routed for efficient delivery.

When TRM is enabled, multicast forwarding in the underlay network is used to replicate VXLAN-encapsulated routed multicast traffic. A Default Multicast Distribution Tree (Default-MDT) is built per VRF, which supplements the existing multicast groups for Layer-2 VNI Broadcast, Unknown Unicast, and multicast replication. Overlay multicast groups are mapped to corresponding underlay multicast addresses for scalable transport. The BGP-based approach allows the fabric to distribute the rendezvous point (RP) functionality, making every VTEP an RP for multicast.
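For example, the Default-MDT is anchored by mapping a VRF's Layer 3 VNI to an underlay multicast group under the NVE interface. A minimal sketch, assuming illustrative VNI and group values:

switch(config)# interface nve 1
switch(config-if-nve)# member vni 50000 associate-vrf
switch(config-if-nve-vni)# mcast-group 225.1.1.1

Every VTEP that hosts the VRF joins this underlay group, forming the per-VRF Default-MDT.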

TRM enables seamless integration with existing multicast-enabled networks, supporting external multicast rendezvous points and tenant-aware external connectivity through Layer-3 physical or subinterfaces.

In a data center fabric using TRM, multicast sources and receivers can be located within the data center, in a separate campus network, or reachable externally via the WAN. TRM ensures multicast traffic reaches only interested receivers, even across different sites and tenants, while using underlay multicast replication to optimize bandwidth and resiliency.

Figure 1. VXLAN EVPN TRM

Tenant routed multicast mixed modes

Tenant routed multicast mixed mode is a VXLAN multicast network feature that:

  • enables TRM-capable and non-TRM-capable edge devices to coexist on the same fabric

  • allows multicast traffic to be partially routed by TRM-capable devices but primarily bridged by legacy devices, and

  • assigns one or more TRM-capable edge devices as gateways to translate multicast traffic between TRM and non-TRM domains.

This mixed mode approach ensures backward compatibility and simplifies migration to newer hardware across the fabric.

A network with both TRM-capable and non-TRM-capable devices uses mixed mode to ensure seamless multicast communication across the fabric.

Figure 2. TRM Layer 2/Layer 3 Mixed Mode

Tenant routed multicast with IPv6 overlay

Tenant routed multicast with an IPv6 overlay is a data center multicast architecture that

  • enables multicast traffic for tenant networks by providing IPv6 overlay support in Cisco NX-OS fabric

  • supports multisite deployments using Anycast Border Gateway and Anycast RP, and

  • operates on Cisco Nexus 9300 and 9500 series switches with various overlay configurations.

Beginning with Cisco NX-OS Release 10.2(1), Tenant Routed Multicast (TRM) supports IPv6 in the overlay.

Key supported features include:

  • Multicast IPv4 underlay within fabric.

  • IPv4 underlay in the data center core for multisite.

  • IPv4 overlay only, IPv6 overlay only, or a combination of overlays.

  • IPv6 in the underlay.

  • Anycast Border Gateway with Border Leaf Role, vPC support on BGW and Leaf, and Virtual MCT on Leaf. Multisite Border Gateway is supported on Cisco Nexus 9300-FX3, -GX, -GX2, -H2R, and -H1 ToRs.

  • Anycast RP (internal, external, and RP-everywhere).

  • TRMv6 is supported only on default system routing mode.

  • MLD snooping with VXLAN VLANs with TRM.

  • TRM with IPv6 Overlay is supported on Cisco Nexus 9300-EX, -FX, -FX2, -FX3, -GX, -GX2, -H2R, and -H1 ToRs.

The following are not supported by TRM with IPv6 overlay:

  • L2 TRM

  • VXLAN flood mode on L2 VLANs with L3TRM

  • L2-L3 TRM Mixed Mode

  • VXLAN Ingress Replication within a single site

  • MLD snooping with VXLAN VLANs without TRM

  • PIM6 SVI and MLD snooping configuration on the VLAN

  • MSDP, PIM BiDir, and SSM

Multicast flow path visibility for TRM flows

Multicast flow path visibility for TRM flows is a network diagnostics feature that

  • enables exporting all multicast states in Cisco Nexus 9000 Series switches

  • provides comprehensive traceability of multicast flow paths from source to receiver, and

  • supports both TRM L3 mode and underlay multicast flows starting in Cisco NX-OS Release 10.3(2)F.

VXLAN EVPN and TRM with IPv6 underlays

VXLAN EVPN and TRM with an IPv6 multicast underlay is a data center networking solution that

  • supports IPv6 multicast as the transport for underlay traffic in VXLAN fabrics,

  • enables multi-destination overlay traffic (such as TRM and BUM), and

  • allows overlay hosts to run on IPv4 or IPv6 address families for flexible deployment.

Beginning with Cisco NX-OS Release 10.4(2)F, support is provided for VXLAN with IPv6 multicast in the underlay. Overlay hosts can be either IPv4 or IPv6. Deploying this feature requires IPv6 versions of unicast routing protocols and use of IPv6 multicast (PIMv6) in the underlay. Any multi-destination overlay traffic (such as TRM or BUM) can use the IPv6 multicast underlay.
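A minimal per-link sketch of these prerequisites, assuming OSPFv3 as the IPv6 unicast routing protocol and illustrative interface and address values:

switch(config)# feature ospfv3
switch(config)# feature pim6
switch(config)# interface ethernet 1/1
switch(config-if)# ipv6 address 2001:db8:0:1::1/64
switch(config-if)# ipv6 router ospfv3 UNDERLAY area 0.0.0.0
switch(config-if)# ipv6 pim sparse-mode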

Beginning with Cisco NX-OS Release 10.4(3)F, the combination of PIMv6 underlay on the fabric side and Ingress Replication (IPv6) on Data Center Interconnect (DCI) side is supported on Cisco Nexus 9300-FX/FX2/FX3/GX/GX2/H2R/H1 ToR switches and 9500 switches with X9716D-GX and X9736C-FX line cards.

Beginning with Cisco NX-OS Release 10.5(1)F, the underlay network supports the following combinations for VXLAN EVPN:

  • In the data center fabric, both Multicast Underlay (PIMv6) and Ingress Replication (IPv6) are supported.

  • In the Data Center Interconnect (DCI), only Ingress Replication (IPv6) is supported.

In a VXLAN fabric topology with four leaf switches and two spine switches, an IPv6 multicast underlay (PIMv6) can be used, with rendezvous points (RPs) positioned on the spines using anycast RP.

Figure 3. Topology - VXLAN EVPN with IPv6 Multicast Underlay

Features and limitations of Tenant Routed Multicast

Tenant Routed Multicast (TRM) has these guidelines and limitations:

TRM configuration limitations

  • If the VXLAN TRM feature is enabled on a VTEP, the VTEP stops sending IGMP messages to the VXLAN fabric.

  • The Guidelines and Limitations for VXLAN also apply to TRM.

  • If TRM is configured, ISSU is disruptive.

  • TRM supports IPv4 and IPv6 multicast underlay.

  • TRM supports overlay PIM ASM and PIM SSM only. PIM BiDir is not supported in the overlay.

  • Both PIM and ip igmp snooping vxlan must be enabled on the L3 VNI's VLAN in a VXLAN vPC setup (see the sketch after this list).

  • Beginning with Cisco NX-OS Release 10.3(1)F, Real-time/flex statistics for TRM are supported on Cisco Nexus 9300-X Cloud Scale Switches.
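A minimal sketch of the vPC requirement above, assuming VLAN 2500 is the L3 VNI's VLAN (values illustrative):

switch(config)# ip igmp snooping vxlan
switch(config)# interface vlan 2500
switch(config-if)# ip pim sparse-mode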

Unsupported features

  • With TRM enabled, SVI as a core link is not supported.

  • TRM with Multi-Site is not supported on Cisco Nexus 9504-R platforms.

RP support

  • RP has to be configured either internal or external to the fabric.

  • The internal RP must be configured on all TRM-enabled VTEPs including the border nodes.

  • The external RP must be external to the border nodes.

  • The RP must be configured within the VRF pointing to the external RP IP address (static RP). This ensures that unicast and multicast routing is enabled to reach the external RP in the given VRF.

  • In a Tenant Routed Multicast (TRM) deployment, the RP-on-stick model can sometimes lead to traffic drops if there is flapping on the Protocol Independent Multicast (PIM) enabled interface. Use the ip pim spt-switch-graceful command on the turnaround router that leads to the RP. This command allows for a graceful switch to the Shortest Path Tree (SPT) during flapping, which can minimize traffic drops.

  • TRM supports multiple border nodes. Reachability to an external RP/source via multiple border leaf switches is supported with ECMP and requires symmetric unicast routing.

  • For traffic streams with an internal source and external L3 receiver using an external RP, the external L3 receiver might send PIM S,G join requests to the internal source. Doing so triggers the recreation of S,G on the fabric FHR, and it can take up to 10 minutes for this S,G to be cleared.

Replication support

  • Replication of the first packet is supported only on Cisco Nexus 9300-EX, -FX, and -FX2 family switches.

  • Beginning with Cisco NX-OS Release 10.2(3)F, Replication of first packet is supported on the Cisco Nexus 9300-FX3 platform switches.

FEX support and limitations

  • Straight-through FEX connected to a standalone VXLAN VTEP, with a multicast source or receiver behind the FEX port, is supported.

  • With TRM enabled, a multicast receiver behind active-active FEX and vPC behind straight-through FEX are not supported.

  • Cisco Nexus 9300-EX, 9300-FX, and 9300-FX2 switches support FEX.

  • FEX is not supported on Cisco Nexus 9500 platform switches.

  • Support added for Cisco Nexus 2248, 2232, and 2348 Fabric Extenders.

Supported platforms

  • Beginning with Cisco NX-OS Release 10.1(2), TRM Multisite with vPC BGW is supported.

  • Beginning with Cisco NX-OS Release 10.2(1q)F, VXLAN TRM is supported on Cisco Nexus N9K-C9332D-GX2B platform switches.

  • Beginning with Cisco NX-OS Release 10.2(3)F, VXLAN TRM is supported on Cisco Nexus 9364D-GX2A, and 9348D-GX2A platform switches.

  • Beginning with Cisco NX-OS Release 10.4(1)F, VXLAN TRM is supported on Cisco Nexus 9332D-H2R switches.

  • Beginning with Cisco NX-OS Release 10.4(2)F, VXLAN TRM is supported on Cisco Nexus 93400LD-H1 switches.

  • Beginning with Cisco NX-OS Release 10.4(3)F, VXLAN TRM is supported on Cisco Nexus 9364C-H1 switches.

BGW support

  • TRM supports vPC fabric peering leafs as well as vPC/Anycast BGWs.

Recommended configuration

  • For Tenant Routed Multicast with eBGP underlay:

    If all leaf switches use the same AS and the spine switches use a different AS, enable the maximum-paths command under address-family ipv4/ipv6 mvpn on the spines, as shown in the sketch below.

    This configuration allows spine switches to advertise all available paths to remote peers.
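A minimal sketch of this recommendation on a spine, assuming an illustrative AS number and path count:

switch(config)# router bgp 65100
switch(config-router)# address-family ipv4 mvpn
switch(config-router-af)# maximum-paths 64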

VXLAN with TRM Upgrade limitations


Caution


The following changes must be made during a maintenance window.


After upgrading a Cisco Nexus 9000 Series switch configured with VXLAN (specifically VRF-related configurations) from Cisco NX-OS Release 7.x through 9.3 to 10.3(6) or earlier, two issues arise:

  • The startup-config displays both legacy and new Layer 3 VNID configuration modes.

  • TRM traffic’s RPF changes to the new mode for S,Gs, causing multicast traffic forwarding problems.

To avoid these issues, follow these steps:

  • Enable the REST configuration input using the following commands:
    feature nxapi
    nxapi http port 80
  • Open a browser and enter the management IP address of the switch. This will open the Sandbox page. Use the same credentials as the switch admin login to sign in.

  • In the top input textbox, enter the following command for each VRF that has an issue with the VNI ID:
    vrf context tenant-1
    no vni 50000 l3
  • On the right side of the page, set the Method to NXAPI-REST(DME) and keep the Input Type as cli.

  • Click the Convert (with DN) button in the middle of the page. This will generate the XML equivalent of the configuration change.

  • When the XML appears in the second textbox, click Send to apply the changes and remove the VNI ID configuration from the switch.

  • To ensure the changes are applied, run the command:
    copy running-config startup-config

Supported features and limitations of Layer 3 TRM

Layer 3 Tenant Routed Multicast (TRM) has these configuration guidelines and limitations:

  • When configuring TRM VXLAN BGP EVPN, the following platforms are supported:

    • Cisco Nexus 9200, 9332C, 9364C, 9300-EX, and 9300-FX/FX2/FX3/FXP platform switches.

    • Cisco Nexus 9300-GX/GX2 platform switches.

    • Cisco Nexus 9300-H2R/H1 platform switches.

    • Cisco Nexus 9500 platform switches with 9700-EX line cards, 9700-FX line cards, 9700-FX3 line cards.

  • Layer 3 TRM and VXLAN EVPN Multi-Site are supported on the same physical switch. For more information, see Configure VXLAN EVPN Multi-Site.

  • TRM with vPC border leafs is supported only for Cisco Nexus 9200, 9300-EX, and 9300-FX/FX2/FX3/GX/GX2/H2R/H1 platform switches and Cisco Nexus 9500 platform switches with -EX/FX/FX3 or -R/RX line cards. The advertise-pip and advertise virtual-rmac commands must be enabled on the border leafs to support this functionality (see the sketch after this list). For configuration information, see the "Configuring VIP/PIP" section.

  • To support any Layer 3 source behind one of the vPC peers, whether physical or virtual MCT, a physical link configured as VRF-lite is required between the vPC peers. This setup is necessary to accommodate a receiver located behind the vPC peer, especially if it is the sole receiver in the fabric. This requirement applies to all scenarios where the vPC functions as a BGW, border Leaf, or an internal Leaf.

    On the receiving vPC peer, the VRF-lite link must have a superior reachability metric to the L3 source compared to any other paths (iBGP or eBGP) to be selected as the RPF towards the L3 source. In this configuration, traffic will flow directly to the receiver without traversing the EVPN fabric.

  • Well-known local scope multicast (224.0.0.0/24) is excluded from TRM and is bridged.

  • When an interface NVE is brought down on the border leaf, the internal overlay RP per VRF must be brought down.

  • Beginning with Cisco NX-OS Release 10.3(1)F, TRM support for the new L3VNI mode CLIs is provided on Cisco Nexus 9300-X Cloud Scale switches.
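A minimal sketch of the vPC border leaf requirement noted in the list above, assuming an illustrative AS number; see the "Configuring VIP/PIP" section for the full procedure:

switch(config)# router bgp 65000
switch(config-router)# address-family l2vpn evpn
switch(config-router-af)# advertise-pip
switch(config)# interface nve 1
switch(config-if-nve)# advertise virtual-rmac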

ISSU

  • When upgrading from Cisco NX-OS Release 9.3(3) to Cisco NX-OS Release 9.3(6), if you do not retain the configurations of TRM-enabled VRFs from Cisco NX-OS Release 9.3(3), or if you create new VRFs after the upgrade, the ip multicast multipath s-g-hash next-hop-based command is not auto-generated when feature ngmvpn is enabled. You must enable it manually for each TRM-enabled VRF, as shown in the sketch below.
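A minimal sketch of the manual workaround, assuming an illustrative VRF name:

switch(config)# vrf context vrf100
switch(config-vrf)# ip multicast multipath s-g-hash next-hop-based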

TRM Flow Path Visualization support

  • Beginning with Cisco NX-OS Release 10.2(1)F, TRM Flow Path Visualization is supported for flows within a single VXLAN EVPN site.

  • Beginning with Cisco NX-OS Release 10.3(2)F, TRM Flow Path Visualization support is extended to the following traffic patterns on Cisco Nexus 9000 Series platform switches:

    • TRM Multisite DCI Multicast

    • TRM Multisite DCI IR

    • TRM Data MDT

    • TRM on Virtual MCT vPC

    • TRM using new L3VNI

    BUM traffic visibility is not supported.

Layer 3 TRM supported platforms and release

  • Layer 3 TRM is supported for Cisco Nexus 9200, 9300-EX, and 9300-FX/FX2/FX3/FXP and 9300-GX platform switches.

  • Beginning with Cisco NX-OS Release 10.2(3)F, Layer 3 TRM is supported on the Cisco Nexus 9300-GX2 platform switches.

  • Beginning with Cisco NX-OS Release 10.4(1)F, Layer 3 TRM is supported on the Cisco Nexus 9332D-H2R switches.

  • Beginning with Cisco NX-OS Release 10.4(2)F, Layer 3 TRM is supported on the Cisco Nexus 93400LD-H1 switches.

  • Beginning with Cisco NX-OS Release 10.4(3)F, Layer 3 TRM is supported on the Cisco Nexus 9364C-H1 switches.

Support for combination of Layer 3 TRM and EVPN Multi-Site

  • Beginning with Cisco NX-OS Release 9.3(7), Cisco Nexus N9K-C9316D-GX, N9K-C9364C-GX, and N9K-X9716D-GX platform switches support the combination of Layer 3 TRM and EVPN Multi-Site.

  • Cisco Nexus 9300-GX platform switches do not support the combination of Layer 3 TRM and EVPN Multi-Site in Cisco NX-OS Release 9.3(5).

  • Beginning with Cisco NX-OS Release 10.2(3)F, the combination of Layer 3 TRM and EVPN Multi-Site is supported on the Cisco Nexus 9300-GX2 platform switches.

  • Beginning with Cisco NX-OS Release 10.4(1)F, the combination of Layer 3 TRM and EVPN Multi-Site is supported on the Cisco Nexus 9332D-H2R switches.

  • Beginning with Cisco NX-OS Release 10.4(2)F, the combination of Layer 3 TRM and EVPN Multi-Site is supported on the Cisco Nexus 93400LD-H1 switches.

  • Beginning with Cisco NX-OS Release 10.4(3)F, the combination of Layer 3 TRM and EVPN Multi-Site is supported on the Cisco Nexus 9364C-H1 switches.

Support on Nexus 9800 Series switches

  • Beginning with Cisco NX-OS Release 10.4(3)F, the TRM Multi-Site Anycast BGW on Cisco Nexus 9808/9804 switches with Cisco Nexus X9836DM-A and X98900CD-A line cards supports the following features:

    • TRMv4

    • Ingress Replication between DCI peers across the core

    • Multicast underlay for fabric peers.

    • Only the new L3VNI mode is supported; the traditional L3VNI mode is not supported.

    The TRM Multi-Site Anycast BGW on Cisco Nexus 9808/9804 switches with Cisco Nexus X9836DM-A and X98900CD-A line cards does not support the following features:

    • TRMv6

    • Data MDT

    • Multicast underlay between DCI peers across the core

Support on -R/RX linecards

  • Beginning with Cisco NX-OS Release 9.3(3), the Cisco Nexus 9504 and 9508 platform switches with -R/RX line cards support TRM in Layer 3 mode. This feature is supported on IPv4 overlays only. Layer 2 mode and L2/L3 mixed mode are not supported.

    The Cisco Nexus 9504 and 9508 platform switches with -R/RX line cards can function as a border leaf for Layer 3 unicast traffic. For Anycast functionality, the RP can be internal, external, or RP everywhere.

  • TRM Multi-Site functionality is not supported on Cisco Nexus 9504 platform switches with -R/RX line cards.

  • If one or both VTEPs is a Cisco Nexus 9504 or 9508 platform switch with -R/RX line cards, the packet TTL is decremented twice, once for routing to the L3 VNI on the source leaf and once for forwarding from the destination L3 VNI to the destination VLAN on the destination leaf.

Supported features and platforms for Layer 2/Layer 3 TRM (Mixed Mode)

Layer 2/Layer 3 Tenant Routed Multicast (TRM) in Mixed Mode supports the following configurations, platforms, and guidelines:

  • All TRM Layer 2/Layer 3 configured switches must be Anchor DRs. This is because in TRM Layer 2/Layer 3 mode, switches configured with TRM Layer 2 mode can coexist in the same topology. This mode is necessary if non-TRM and Layer 2 TRM mode edge devices (VTEPs) are present in the same topology.


  • Anchor DR is required to be an RP in the overlay.

  • An extra loopback is required for anchor DRs.

  • Non-TRM and Layer 2 TRM mode edge devices (VTEPs) require an IGMP snooping querier configured per multicast-enabled VLAN. Every non-TRM and Layer 2 TRM mode edge device (VTEP) requires this IGMP snooping querier configuration because in TRM multicast control-packets are not forwarded over VXLAN.

  • The IP address for the IGMP snooping querier can be re-used on non-TRM and Layer 2 TRM mode edge devices (VTEPs).

  • The IP address of the IGMP snooping querier in a VPC domain must be different on each VPC member device.

  • When interface NVE is brought down on the border leaf, the internal overlay RP per VRF should be brought down.

  • The NVE interface must be shut and unshut while configuring the ip multicast overlay-distributed-dr command.

  • Beginning with Cisco NX-OS Release 9.2(1), TRM with vPC border leafs is supported. Advertise-PIP and Advertise Virtual-RMAC need to be enabled on border leafs to support this functionality. For configuring advertise-pip and advertise virtual-rmac, see the "Configuring VIP/PIP" section.

Supported platforms

Anchor DR is supported only on these platforms:

  • Cisco Nexus 9200, 9300-EX, and 9300-FX/FX2 platform switches

  • Cisco Nexus 9500 platform switches with 9700-EX/FX/FX3 line cards.

  • Beginning with Cisco NX-OS Release 10.2(3)F, Anchor DR is supported on the Cisco Nexus 9300-FX3 platform switches.

Unsupported features and platforms

  • Layer 2/Layer 3 Tenant Routed Multicast (TRM) is not supported on Cisco Nexus 9300-FX3/GX/GX2/H2R/H1 platform switches.

Guidelines and limitations for VXLAN EVPN and TRM with IPv6 multicast underlay

VXLAN EVPN and TRM with IPv6 multicast underlay have specific guidelines and limitations you must follow for proper deployment and operation.

Supported features

  • Spine-based static RP is supported in underlay.

  • When an EoR is deployed as a spine node with Multicast Underlay (PIMv6) Any-Source Multicast (ASM), it is mandatory to configure a non-default template using one of the following commands in global configuration mode:

    • system routing template-multicast-heavy

    • system routing template-multicast-ext-heavy

  • OSPFv3, IS-IS, and eBGP underlays are supported.

  • PIMv6 ASM (sparse mode) is supported in underlay.

  • PIMv6 Anycast RP is supported in the underlay for RP redundancy (see the sketch after this list).

  • Underlay IPv6 Multicast is supported.

  • For overlay traffic, each Cisco Nexus 9000 leaf switch is an RP. External RP is also supported.

  • EVPN TRMv4 and TRMv6 with IPv6 Multicast Underlay are supported on the Fabric.
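A minimal underlay sketch of PIMv6 Anycast RP from the list above, assuming illustrative addresses for the shared anycast RP and two spine RP set members:

switch(config)# ipv6 pim rp-address 2001:db8::100 group-list ff00::/8
switch(config)# ipv6 pim anycast-rp 2001:db8::100 2001:db8::1
switch(config)# ipv6 pim anycast-rp 2001:db8::100 2001:db8::2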

Supported platforms

  • Cisco Nexus 9300-FX, FX2, FX3, GX, GX2, H2R, and H1 ToR switches are supported as the leaf VTEP.

  • Cisco Nexus X9716D-GX and X9736C-FX line cards are supported only on the spine (EoR).

Unsupported features

  • Underlay IPv6 Multicast is not supported on EoR platforms as a leaf.

  • Fabric Peering and Multisite are not supported with IPv6 multicast underlay.

  • The global mcast-group under NVE must not be configured within the SSM range, and vice versa. If there is no explicit SSM configuration, 232.0.0.0/8 is the default SSM range in the data plane; therefore, 232.0.0.0/8 must not be configured as the global mcast-group.

  • GPO is not supported with IPv6 multicast underlay.

TCAM specifications

For EVPN TRMv4 and TRMv6 with IPv6 Multicast Underlay, the TCAM region for ingress sup region must be carved to 768.

  • Check the ingress sup region using the show hardware access-list tcam region command.

  • If the ingress sup region is not 768 or above, you must configure it using the hardware access-list tcam region ing-sup 768 command.


    Note


    If you get an “Aggregate ingress TCAM allocation failure” error while configuring ing-sup as 768, you must borrow the required amount from other TCAM regions.


  • Reload the device after this configuration.
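Putting these steps together, a sketch of the verify, carve, save, and reload sequence:

switch# show hardware access-list tcam region
switch# configure terminal
switch(config)# hardware access-list tcam region ing-sup 768
switch(config)# exit
switch# copy running-config startup-config
switch# reload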

Ingress Replication (IR) support

Beginning with Cisco NX-OS Release 10.5(1)F, VXLAN EVPN in the data center fabric supports both Multicast Underlay (PIMv6) Any-Source Multicast (ASM) and Ingress Replication (IPv6) in the underlay. This support is available on the following switches and line cards:

  • Cisco Nexus 9300-FX, FX2, FX3, GX, GX2, H2R, and H1 ToR switches as the leaf VTEPs.

  • Cisco Nexus N9K-X9716D-GX and N9K-X9736C-FX line cards as spines if the underlay is configured for Multicast Underlay (PIMv6) Any-Source Multicast (ASM).

  • Cisco Nexus N9K-X9716D-GX and N9K-X9736C-FX line cards as VTEPs if the underlay uses Ingress Replication (IPv6).

Supported rendezvous point options by TRM mode

With TRM enabled, both internal and external RPs are supported. This table provides information about which TRM modes support internal and external rendezvous point (RP) options, along with the minimum supported NX-OS release for each combination. This information helps network designers and administrators determine the appropriate TRM modes and software versions needed for specific RP deployments.

Table 1. TRM RP support

TRM L2 Mode

  • RP Internal: N/A

  • RP External: N/A

  • PIM-Based RP Everywhere: N/A

TRM L3 Mode

  • RP Internal: 7.0(3)I7(1), 9.2(x)

  • RP External: 7.0(3)I7(4), 9.2(3)

  • PIM-Based RP Everywhere: Supported in 7.0(3)I7(x) releases starting from 7.0(3)I7(5); not supported in 9.2(x). Supported in NX-OS releases beginning with 9.3(1) for the following Nexus 9000 switches:

    • Cisco Nexus 9200 Series switches

    • Cisco Nexus 9364C platform switches

    • Cisco Nexus 9300-EX/FX/FX2 platform switches (excluding the Cisco Nexus 9300-FXP platform switch)

    Supported for Cisco Nexus 9300-FX3 platform switches beginning with Cisco NX-OS Release 9.3(5).

TRM L2L3 Mode

  • RP Internal: 7.0(3)I7(1), 9.2(x)

  • RP External: N/A

  • PIM-Based RP Everywhere: N/A

Options for rendezvous points in TRM deployments

For Tenant Routed Multicast, these rendezvous point options are supported:

Configure a rendezvous point inside the VXLAN fabric

Configure the loopback interface and related parameters for TRM VRFs on all VTEPs. This ensures multicast traffic is managed correctly and efficiently throughout the fabric. The loopback address must be reachable and advertised in EVPN.

Follow these steps to configure the rendezvous point inside the VXLAN fabric:

Before you begin

  • Verify that all devices (VTEPs) support TRM VRFs.

  • Ensure network connectivity so the loopback address is reachable in EVPN.

  • Plan and reserve the loopback IP address for the RP.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Configure the loopback interface for use with multicast RP on all TRM-enabled nodes.

Example:

switch(config)# interface loopback 11

Step 3

Assign the loopback interface to the correct VRF.

Example:

switch(config-if)# vrf member vrf100

Step 4

Specify the IP address for the loopback interface.

Example:

switch(config-if)# ip address 209.165.200.1/32

Step 5

Enable PIM sparse-mode on the loopback interface.

Example:

switch(config-if)# ip pim sparse-mode

Step 6

Create the VXLAN tenant VRF if it does not already exist.

Example:

switch(config-if)# vrf context vrf100

Step 7

Configure the RP address and group-list for multicast.

Example:

switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Use the same RP IP address for all edge devices (VTEPs) to enable a fully distributed RP.


The rendezvous point for multicast is configured and distributed across all VTEPs in the VXLAN fabric, allowing for efficient multicast routing and group communication.

Configure an external rendezvous point

Configure the external rendezvous point (RP) IP address within the TRM VRFs on all devices (VTEPs). In addition, ensure reachability of the external RP within the VRF via the border node. With TRM enabled and an external RP in use, ensure that only one routing path is active. Routing between the TRM fabric and the external RP must be via a single border leaf (non-ECMP).

Follow these steps to configure an external rendezvous point:

Before you begin

  • Ensure TRM is enabled.

  • Identify the RP IP address to use.

  • Confirm all relevant VTEP and border node devices are reachable.

  • Ensure only one routing path (non-ECMP) is active between the TRM fabric and the external RP via a single border leaf.

Procedure


Step 1

Enter configuration mode.

Example:

switch# configure terminal

Step 2

Enter the target TRM VRF context.

Example:

switch(config)# vrf context vrf100

Step 3

Configure the multicast RP address for the VRF.

Example:

switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Use the same RP IP address on all edge devices (VTEPs) for a distributed RP setup.


The external rendezvous point is configured for multicast in the TRM fabric. All devices in the specified VRFs use the designated RP, and multicast routing traverses a single, controlled border node as intended.

RP Everywhere with PIM Anycast solution

RP Everywhere with PIM Anycast provides these features and benefits:

  • Enables efficient Rendezvous Point (RP) redundancy and load sharing for multicast routing.

  • Supports seamless failover using Anycast addresses, minimizing service interruptions.

  • Allows multiple RPs to share a single logical Anycast address for improved scalability.

  • Provides automatic failover between RPs, enhancing network resilience.

  • Simplifies configuration and ongoing maintenance for multicast deployments.

  • Maintains seamless multicast signaling across the network.

For information about configuring RP Everywhere with PIM Anycast, see:

Configure a TRM leaf node for RP Everywhere with PIM Anycast

Perform this configuration on each VXLAN VTEP device that will participate as a distributed RP in an Anycast RP model for multicast routing.

Before you begin

  • Ensure device access with the necessary privileges.

  • Determine the loopback interface number, VRF name, RP IP address, and multicast group range to be used.

  • Verify that all edge devices (VTEPs) share the same RP IP address.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Create and configure a loopback interface for RP functionality on each VXLAN VTEP.

Example:
switch(config)# interface loopback 11

Assign the desired loopback interface number.

Step 3

Assign the loopback interface to the relevant VRF.

Example:
switch(config-if)# vrf member vrf100

Step 4

Set an IP address for the loopback interface.

Example:
switch(config-if)# ip address 209.165.200.1/32

Step 5

Enable PIM sparse mode on the loopback interface.

Example:
switch(config-if)# ip pim sparse-mode

Step 6

Create the VXLAN tenant VRF context.

Example:
switch(config-if)# vrf context vrf100

Step 7

Configure the RP address and group list for PIM, specifying the RP IP address and multicast group range: ip pim rp-address ip-address-of-router group-list group-range-prefix

Example:
switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.


The TRM leaf node is configured as a distributed Rendezvous Point (RP) for RP Everywhere, supporting PIM Anycast within the specified VXLAN tenant.

Configure a TRM border leaf node for RP Everywhere with PIM Anycast

Configure a TRM border leaf node to enable distributed RP functionality for multicast routing with PIM Anycast in a VXLAN-EVPN fabric.

Follow these steps to configure the TRM border leaf node:

Before you begin

  • Ensure you have the required IP addresses and VRF names.

  • Confirm administrative CLI access to the switch.

  • Verify VXLAN-EVPN mode is enabled.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Configure VXLAN VTEP as TRM border leaf node.

Example:
switch(config)# ip pim evpn-border-leaf

Step 3

Create loopback interfaces for TRM and RP Anycast.

Example:
switch(config)# interface loopback 11
switch(config)# interface loopback 12
switch(config-if)#

Step 4

Assign VRF to each loopback interface.

Example:
!For TRM
switch(config-if)# vrf member vrf100
!For RP loopback
switch(config-if)# vrf member vrf100

Step 5

Specify IP addresses for loopback interfaces.

Example:
!For TRM
switch(config-if)# ip address 209.165.200.1/32
!For RP loopback
switch(config-if)# ip address 209.165.200.11/32

Step 6

Enable sparse-mode PIM on both loopback interfaces.

Example:
switch(config-if)# ip pim sparse-mode

Step 7

Create a VXLAN tenant VRF.

Example:
switch(config-if)# vrf context vrf100

Step 8

Configure the PIM RP address and group list.

Example:
switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.

Step 9

Configure PIM Anycast RP set with required addresses.

Example:
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.11
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.12
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.13
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.14

The TRM border leaf node now serves as a distributed RP with PIM Anycast, ready to support multicast traffic in the VXLAN-EVPN fabric.

Configure an external router for RP Everywhere with PIM Anycast

Configure an external router to act as a Rendezvous Point (RP) for multicast traffic, using Protocol Independent Multicast (PIM) Anycast RP for redundancy and scalability.

Follow these steps to configure the external router for RP Everywhere with PIM Anycast:

Before you begin

  • Ensure you have administrative access to the router.

  • Identify the loopback interfaces and VRF names to be used.

  • Gather the required IP addresses for the PIM Anycast RP set.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Create the first loopback interface.

Example:
switch(config)# interface loopback 11

Step 3

Assign the loopback interface to a VRF.

Example:
switch(config-if)# vrf member vrf100

Step 4

Assign an IP address to the loopback interface.

Example:
switch(config-if)# ip address 209.165.200.1/32

Step 5

Enable PIM sparse mode on the loopback interface.

Example:
switch(config-if)# ip pim sparse-mode

Step 6

Create a second loopback interface for additional Anycast RP.

Example:
switch(config)# interface loopback 12
  1. Repeat Steps 3–5 for this interface with its respective VRF and IP address.

    Example:
    switch(config-if)# vrf member vrf100
    switch(config-if)# ip address 209.165.200.13/32
    switch(config-if)# ip pim sparse-mode

Step 7

Create the VXLAN tenant VRF if not already created.

Example:
switch(config-if)# vrf context vrf100

Step 8

Configure the PIM RP address and group-list.

Example:
switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.

Step 9

Configure the PIM Anycast RP set with the required addresses: ip pim anycast-rp anycast-rp-address address-of-rp

Example:
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.11
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.12
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.13
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.14

The router is configured as a PIM Anycast Rendezvous Point, providing a resilient multicast RP for the network.

Features of RP Everywhere with MSDP peering solutions

RP Everywhere with MSDP peering is a multicast routing solution that offers the following features:

  • Each router can act as a Rendezvous Point (RP) for its own domain, improving local multicast source management.

  • Multicast Source Discovery Protocol (MSDP) enables sharing of multicast source information between RPs in different domains, allowing seamless inter-domain multicast communication.

  • The solution provides redundancy, scalability, and resiliency for multicast services across network segments.

This approach is beneficial for large-scale multicast deployments where high availability and inter-domain source discovery are required.

For information about configuring RP Everywhere with MSDP Peering, see:

Figure 4. RP Everywhere configuration with MSDP RP solution

Configure a TRM leaf node for RP Everywhere with MSDP peering

Configure a TRM leaf node to support RP Everywhere architecture using MSDP peering, allowing distributed Rendezvous Point (RP) functionality for multicast routing in a VXLAN environment.

Follow these steps to configure a TRM leaf node for RP Everywhere with MSDP peering:

Before you begin

  • Confirm you are logged in with administrative privileges.

  • Verify VXLAN and multicast routing features are enabled.

  • Gather the required IP addresses and VRF names for configuration.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Configure the loopback interface on all VXLAN VTEP devices.

Example:
switch(config)# interface loopback 11

Step 3

Assign the loopback interface to the appropriate VRF.

Example:
switch(config-if)# vrf member vrf100

Step 4

Specify the IP address for the loopback interface.

Example:
switch(config-if)# ip address 209.165.200.1/32


Step 5

Enable PIM sparse mode on the loopback interface.

Example:
switch(config-if)# ip pim sparse-mode

Step 6

Create the VXLAN tenant VRF context.

Example:
switch(config-if)# vrf context vrf100

Step 7

Configure the RP address and multicast group range for MSDP peering.

Example:
switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.


The TRM leaf node is now configured for RP Everywhere with MSDP peering, enabling distributed multicast routing across all VXLAN VTEP edge devices.

Configure a TRM border leaf node for RP Everywhere with MSDP peering

Configure a TRM border leaf node to function as an Anycast Rendezvous Point (RP) with MSDP peering for multicast source discovery in a VXLAN EVPN fabric.

Follow these steps to configure the TRM border leaf node:

Before you begin

  • Identify the loopback interfaces and IP addresses for the Anycast RP.

  • Determine the VRF name used for multicast routing.

  • Ensure your device supports PIM and MSDP features.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Enable the MSDP feature.

Example:
switch(config)# feature msdp

Step 3

Configure VXLAN VTEP as TRM border leaf node.

Example:
switch(config)# ip pim evpn-border-leaf

Step 4

Create the first loopback interface for the primary Anycast RP address:

  1. Assign the VRF membership.

    Example:
    switch(config)# interface loopback 11
    switch(config-if)# vrf member vrf100
  2. Configure the Anycast RP IP address.

    Example:
    switch(config-if)# ip address 209.165.200.1/32
  3. Enable PIM sparse mode.

    Example:
    switch(config-if)# ip pim sparse-mode

Step 5

Create the second loopback interface for Anycast RP redundancy:

  1. Assign the VRF membership.

    Example:
    switch(config)# interface loopback 12
    switch(config-if)# vrf member vrf100
  2. Configure the Anycast RP IP address.

    Example:
    switch(config-if)# ip address 209.165.200.11/32
  3. Enable PIM sparse mode.

    Example:
    switch(config-if)# ip pim sparse-mode

Step 6

Create the tenant VRF context for multicast:

Example:
switch(config-if)# vrf context vrf100

Step 7

Configure the RP address and group list for PIM in the VRF.

Example:
switch(config-vrf)# ip pim rp-address 209.165.200.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.

Step 8

Configure the PIM Anycast RP set and assign all participating RP addresses.

Example:
switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.11


Step 9

Configure MSDP originator ID and peer under the VRF:

  1. Assign the originator loopback.

    Example:
    switch(config-vrf)# ip pim anycast-rp 209.165.200.1 209.165.200.12
    switch(config-vrf)# ip msdp originator-id loopback12
  2. Define the MSDP peer and source loopback.

    Example:
    switch(config-vrf)# ip msdp peer 209.165.201.11 connect-source loopback12

The TRM border leaf node is enabled as an Anycast RP, participating in MSDP peering for distributed multicast routing in the fabric.

Configure an external router for RP Everywhere with MSDP peering

Configure an external router to support Rendezvous Point (RP) Everywhere multicast operation using MSDP peering.

Procedure

Step 1

Enter configuration mode.

Example:
switch# configure terminal

Step 2

Enable the MSDP feature.

Example:
switch(config)# feature msdp

Step 3

Configure the first loopback interface on all VXLAN VTEP devices.

Example:
switch(config)# interface loopback 11
switch(config-if)# vrf member vrf100
switch(config-if)# ip address 209.165.201.1/32
switch(config-if)# ip pim sparse-mode

Step 4

Configure the PIM Anycast set RP loopback interface.

Example:
switch(config)# interface loopback 12
switch(config-if)# vrf member vrf100
switch(config-if)# ip address 209.165.201.11/32
switch(config-if)# ip pim sparse-mode


Step 5

Create the VXLAN tenant VRF.

Example:
switch(config-if)# vrf context vrf100

Step 6

Configure the Rendezvous Point (RP) address and multicast group range.

Example:
switch(config-vrf)# ip pim rp-address 209.165.201.1 group-list 224.0.0.0/4

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.

Step 7

Set the MSDP originator ID to the Anycast RP loopback.

Example:
switch(config-vrf)# ip msdp originator-id loopback12

Step 8

Establish MSDP peering with each TRM border node.

Example:
switch(config-vrf)# ip msdp peer 209.165.200.11 connect-source loopback12

Configure MSDP peering between external RP router and all TRM border nodes.


The external router is now configured as an RP and MSDP peer, supporting distributed multicast operation for VXLAN in the network.

Configure Layer 3 Tenant Routed Multicast

This procedure enables the Tenant Routed Multicast (TRM) feature. TRM operates primarily in the Layer 3 forwarding mode for IP multicast by using BGP MVPN signaling. TRM in Layer 3 mode is the main feature and the only requirement for TRM-enabled VXLAN BGP EVPN fabrics. If non-TRM capable edge devices (VTEPs) are present, the Layer 2/Layer 3 mode and Layer 2 mode must be considered for interoperability.

To forward multicast between senders and receivers on the Layer 3 cloud and the VXLAN fabric on TRM vPC border leafs, the VIP/PIP configuration must be enabled. For more information, see Configuring VIP/PIP.


Note


TRM follows an always-route approach and hence decrements the Time to Live (TTL) of the transported IP multicast traffic.


Follow these steps to configure Layer 3 Tenant Routed Multicast:

Before you begin

  • Ensure VXLAN EVPN (feature nv overlay, nv overlay evpn) is enabled.

  • Confirm the rendezvous point (RP) is configured.

  • Enable PIM v4/v6 if TRM v4/v6 is needed.

Procedure


Step 1

Enable the Next-Generation Multicast VPN (ngMVPN) control plane.

Example:

switch# configure terminal
switch(config)# feature ngmvpn

New address family commands become available in BGP.

Note

 

The no feature ngmvpn command will not remove MVPN configuration under BGP.

You will get a syslog message when you enable this command. The message informs you that ip multicast multipath s-g-hash next-hop-based is the recommended multipath hashing algorithm and that you need to enable it for the TRM-enabled VRFs.

The auto-generation of ip multicast multipath s-g-hash next-hop-based command does not happen after you enable the feature ngmvpn command. You need to configure ip multicast multipath s-g-hash next-hop-based as part of the VRF configuration.

Step 2

Configure IGMP snooping for VXLAN VLANs.

Example:

switch(config)# ip igmp snooping vxlan

Step 3

Configure the NVE (Network Virtualization Edge) interface and associate the Layer 3 VNI with the VRF.

Example:

switch(config)# interface nve 1
switch(config-if-nve)# member vni 200100 associate-vrf
switch(config-if-nve-vni)# mcast-group 225.3.3.3

The VNI range is from 1 to 16,777,214.

Builds the default multicast distribution tree for the VRF VNI (Layer 3 VNI).

The multicast group is used in the underlay (core) for all multicast routing within the associated Layer 3 VNI (VRF).

Note

 

We recommend that underlay multicast groups for Layer 2 VNI, default MDT, and data MDT not be shared. Use separate, non-overlapping groups.

Step 4

Set up BGP and enable multicast VPN for the peer

Example:

switch(config)# router bgp 100
switch(config-router)# neighbor 1.1.1.1
switch(config-router-neighbor)# address-family ipv4 mvpn
switch(config-router-neighbor-af)# send-community extended

Enables ngMVPN address-family signaling. The send-community extended command ensures that extended communities are exchanged for this address family.

Step 5

Configure the tenant VRF context, VNI, and enable TRM.

Example:

switch(config)# vrf context vrf100
switch(config-vrf)# vni 500001 l3
switch(config-vrf-vni)# mdt v4 vxlan
switch(config)# router bgp 100
switch(config-router)# mvpn vri 100

Beginning with Cisco NX-OS Release 10.3(1)F, the L3 keyword is provided to indicate that the new L3VNI configuration is enabled.

Beginning with Cisco NX-OS Release 10.4(3)F, this command with L3 option is supported on Cisco Nexus 9808/9804 switches with Cisco Nexus X9836DMA and X98900CD-A line cards.

Run the mvpn vri <id> command under the router bgp <as-number> submode. The vri id range is from 1 to 65535.

Note

 
  • This command is mandatory on vPC leaf nodes. The value must be the same across the vPC pair, unique in the TRM domain, and must not collide with any site-id value.

  • This command is required on BGWs if the site-id value is greater than 2 bytes. The value must be the same across all BGWs of the same site, unique in the TRM domain, and must not collide with any site-id value.

TRM v4/v6 is enabled by default.

The no mdt [ v4 | v6 ] vxlan command disables the TRM v4/v6 on the specified VRF.

Run this command under the sub-mode of new L3VNI config.

Note

 
This command is applicable only to VRFs configured with new-L3VNI.

Step 6

Enable recommended multipath hashing for TRM-enabled VRFs.

Example:

switch(config-vrf)# ip multicast multipath s-g-hash next-hop-based

Configures multicast multipath and initiates S, G, nexthop hashing (rather than the default of S/RP, G-based hashing) to select the RPF interface.

Step 7

Specify the rendezvous point (RP) address for multicast traffic.

Example:

switch(config-vrf)# ip pim rp-address 209.165.201.1 group-list 226.0.0.0/8

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.

For overlay RP placement options, see the Options for rendezvous points in TRM deployments section.

Step 8

Configure SVI for Layer 2 and Layer 3 VNIs, assign VRF membership, and enable PIM as required.

Example:

switch(config)# interface vlan11
switch(config-if)# no shutdown
switch(config-if)# vrf member vrf100
switch(config-if)# ip address 11.1.1.1/24
switch(config-if)# ip pim sparse-mode
switch(config-if)# ip pim neighbor-policy route-map1 !if preventing PIM neighborship on L2VNI SVI
switch(config-if)# fabric forwarding mode anycast-gateway !as needed
switch(config-if)# ip forward !for L3VNI SVI

Configures the first-hop gateway (distributed anycast gateway) for the Layer 2 VNI. No router PIM peering must ever happen on this interface.

Creates an IP PIM neighbor policy with a suitable route-map to deny any IPv4 addresses, preventing PIM from establishing PIM neighborship on the L2VNI SVI.

Note

 

Do not use Distributed Anycast Gateway for PIM Peerings.

Step 9

Configure the BGP address family for unicast and set the auto route-target for multicast VPN.

Example:

switch(config-vrf)# address-family ipv4 unicast
switch(config-vrf-af-ipv4)# route-target both auto mvpn
switch(config)# ip multicast overlay-spt-only

Defines the BGP route target that is added as an extended community attribute to the customer multicast (C_Multicast) routes (ngMVPN route type 6 and 7).

Auto route targets are constructed by the 2-byte Autonomous System Number (ASN) and Layer 3 VNI.

Gratuitously originate (S,A) route when the source is locally connected. The ip multicast overlay-spt-only command is enabled by default on all MVPN-enabled Cisco Nexus 9000 Series switches (typically leaf node).


Layer 3 Tenant Routed Multicast is enabled, providing IP multicast forwarding for tenants over the VXLAN BGP EVPN fabric.

Configure TRM on the VXLAN EVPN spine

This procedure enables Tenant Routed Multicast (TRM) on a VXLAN EVPN spine switch.

Follow these steps to configure TRM on the VXLAN EVPN spine:

Before you begin

  • Confirm that the VXLAN BGP EVPN spine configuration is complete. For more information see Configure iBGP for EVPN on the spine.

  • Ensure you know your BGP autonomous system numbers and neighbor IP addresses.

Procedure


Step 1

Enter configuration mode.

Example:

switch# configure terminal

Step 2

Create a route-map to retain the next-hop for EVPN routes.

Example:

switch(config)# route-map permitall permit 10

Note

 

The route-map keeps the next-hop unchanged for EVPN routes:

  • Required for eBGP

  • Optional for iBGP

Step 3

Retain the next-hop attribute in the route-map.

Example:

switch(config-route-map)# set ip next-hop unchanged
switch(config-route-map)# exit
switch(config)#

Note

 

The route-map keeps the next-hop unchanged for EVPN routes:

  • Required for eBGP

  • Optional for iBGP

Step 4

Enter BGP router configuration mode using your AS number.

Example:

switch(config)# router bgp 65002


Step 5

Configure the address family IPv4 MVPN under the BGP.

Example:

switch(config-router)# address-family ipv4 mvpn

Step 6

Configure retain route-target all under address-family IPv4 MVPN [global].

Example:

switch(config-router-af)# retain route-target all

Note

 

Required for eBGP. Allows the spine to retain and advertise all MVPN routes when there are no local VNIs configured with matching import route targets.

Step 7

Configure your BGP multicast VPN neighbor.

Example:

switch(config-router-af)# neighbor 100.100.100.1 

Step 8

Under the neighbor’s IPv4 MVPN address-family, apply TRM-specific settings:

Example:

switch(config-router-neighbor)# address-family ipv4 mvpn
  1. If using eBGP, enter:

    Example:

    switch(config-router-neighbor-af)# disable-peer-as-check
    switch(config-router-neighbor-af)# rewrite-rt-asn
    switch(config-router-neighbor-af)# send-community extended
    switch(config-router-neighbor-af)# route-map permitall out

    Configure the disable-peer-as-check parameter on the spine for eBGP when all leafs use the same AS but the spines use a different AS than the leafs.

    The rewrite-rt-asn command is required if the route target auto feature is being used to configure EVPN route targets.

  2. If using iBGP with route reflectors, enter:

    Example:

    switch(config-router-neighbor-af)# route-reflector-client

Step 9

Exit configuration and save your changes.


TRM is enabled on the VXLAN EVPN spine, supporting multicast routing for tenant networks.

Configure TRM in Layer 2 and Layer 3 mixed mode

This procedure enables the Tenant Routed Multicast (TRM) feature. This enables both Layer 2 and Layer 3 multicast BGP signaling. This mode is necessary only if non-TRM edge devices (VTEPs), such as first-generation Cisco Nexus 9000 Series switches, are present in the topology. Only Cisco Nexus 9000-EX and 9000-FX family switches can operate in Layer 2/Layer 3 mode (Anchor DR).

To forward multicast between senders and receivers on the Layer 3 cloud and the VXLAN fabric on TRM vPC border leafs, the VIP/PIP configuration must be enabled. For more information, see Configuring VIP/PIP.

All Cisco Nexus 9300-EX and 9300-FX platform switches must be in Layer 2/Layer 3 mode.

Follow these steps to configure Tenant Routed Multicast (TRM) in Layer 2/Layer 3 mixed mode:

Before you begin

  • Ensure VXLAN EVPN is configured.

  • Ensure the rendezvous point (RP) is configured for multicast.

Procedure


Step 1

Enter configuration mode.

Example:

switch# configure terminal

Step 2

Enable ngMVPN and advertise EVPN multicast.

Example:

switch(config)# feature ngmvpn
switch(config)# advertise evpn multicast

Note

 

The no feature ngmvpn command does not remove MVPN configuration under BGP.

Step 3

Enable IGMP snooping for VXLAN VLANs.

Example:

switch(config)# ip igmp snooping vxlan

Step 4

Enable multicast overlay SPT-only and distributed anchor DR.

Example:

switch(config)# ip multicast overlay-spt-only
switch(config)# ip multicast overlay-distributed-dr

Gratuitously originate (S,A) route when the source is locally connected. The ip multicast overlay-spt-only command is enabled by default on all MVPN-enabled Cisco Nexus 9000 Series switches (typically leaf nodes).

Note

 

You must shut and unshut the NVE interface after configuring ip multicast overlay-distributed-dr .

Step 5

Configure the NVE interface, associate Layer 3 VNIs, and assign multicast groups.

Example:

switch(config)# interface nve 1
switch(config-if-nve)# member vni 200100 associate-vrf
switch(config-if-nve-vni)# mcast-group 225.3.3.3

The VNI range is from 1 to 16,777,214.

Step 6

Set up loopback interface on all anchor DR devices, and configure OSPF and PIM.

Example:

switch(config-if-nve)# interface loopback 10
switch(config-if)# ip address 100.100.1.1/32
switch(config-if)# ip router ospf 100 area 0.0.0.0
switch(config-if)# ip pim sparse-mode

The IP address must be the same on all distributed anchor DRs.

Step 7

Configure multicast routing to override the source-interface on every TRM-enabled VTEP (Anchor DR).

Example:

switch(config-if)# interface nve1
switch(config-if-nve)# mcast-routing override source-interface loopback 10

The loopback 10 interface must be configured on every TRM-enabled VTEP (Anchor DR) in the underlay with the same IP address. This loopback and the respective override command are needed to serve TRM VTEPs in coexistence with non-TRM VTEPs.

Step 8

Configure BGP for multicast VPN and send extended communities and set route-targets.

Example:

switch(config)# router bgp 100
switch(config-router)# neighbor 1.1.1.1
switch(config-router-neighbor)# address-family ipv4 mvpn
switch(config-router-neighbor-af)# send-community extended
switch(config-vrf-af-ipv4)# route-target both auto mvpn

Step 9

Configure Layer 2/Layer 3 VNI VLAN interfaces with IP, PIM, and anycast gateway settings.

Example:

switch(config)# interface vlan11  ! Layer 2 VNI
switch(config-if)# vrf member vrf100
switch(config-if)# ip address 11.1.1.1/24
switch(config-if)# ip pim sparse-mode
switch(config-if)# fabric forwarding mode anycast-gateway
switch(config-if)# ip pim neighbor-policy route-map1
switch(config-if)# exit
switch(config)# interface vlan100   !Layer 3 VNI
switch(config-if)# vrf member vrf100
switch(config-if)# ip forward
switch(config-if)# ip pim sparse-mode
switch(config-if)# exit
switch(config)# vrf context vrf100
switch(config-vrf)# ip pim rp-address 209.165.201.1 group-list 226.0.0.0/8
switch(config-vrf)# address-family ipv4 unicast

For overlay RP placement options, see the Options for rendezvous points in TRM deployments.

To prevent PIM neighborship on the L2VNI SVI, create an IP PIM neighbor policy with a suitable route map to deny IPv4 addresses.

Ensure that the same RP IP address and group range are configured on all VXLAN VTEPs to enable a fully distributed RP.


Tenant Routed Multicast is enabled in Layer 2/Layer 3 mixed mode, allowing multicast traffic forwarding between senders and receivers across the fabric and external Layer 3 networks.

Configuration options for VXLAN EVPN and TRM with IPv6 multicast underlay

You can configure IPv6 multicast underlay in a VXLAN fabric using these options:

Configure an L2-VNI based multicast group in the underlay

Apply this configuration when setting up VXLAN environments that require multicast replication in the underlay, specifically using IPv6 multicast groups for each Layer 2 VNI.

Follow these steps to configure an L2-VNI based multicast group in the underlay:

Before you begin

  • Ensure you have administrative access to the device.

  • Identify the VNI (VLAN) and IPv6 multicast group address you want to assign.

  • Confirm that the NVE interface (such as nve1) is available on the device.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Access the NVE interface.

Example:

switch(config)# interface nve1

Step 3

Define the Layer 2 VNI membership for the interface.

Example:

switch(config-if-nve)# member vni 10501

Step 4

Assign the IPv6 multicast group address to the VNI.

Example:

switch(config-if-nve-vni)# mcast-group ff04::40

Step 5

Configure a global multicast group for all Layer 2 VNIs.

Example:

switch(config-if-nve)# global mcast-group ff04::40 l2

Step 6

Exit the NVE interface configuration mode.

Example:

switch(config-if-nve)# exit

Exits configuration mode.


Configure an L3-VNI based multicast group in the underlay

Use this procedure to set up IPv6 multicast for VXLAN environments, ensuring efficient distribution of multicast traffic within each VRF.

Follow these steps to configure an L3-VNI based multicast group in the underlay:

Before you begin

  • Ensure you have administrative access to the switch’s CLI.

  • Confirm that the NVE interface (for example, nve1) is already created and up.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Access the NVE interface configuration mode.

Example:

switch(config)# interface nve1

Step 3

Associate the Layer 3 VNI with its target VRF.

Example:

switch(config-if-nve)# member vni 50001 associate-vrf

Step 4

Assign an IPv6 multicast group address to the Layer 3 VNI.

Example:

switch(config-if-nve-vni)# mcast-group ff10:0:0:1::1

Step 5

(Optional) Configure a global multicast group for all Layer 3 VNIs, if needed.

Example:

switch(config-if-nve)# global mcast-group ff04::40 l3

Step 6

Exit the interface configuration mode.

Example:

switch(config-if-nve)# exit

The switch establishes the default multicast distribution tree for the Layer 3 VNI(s), enabling IPv6 multicast forwarding in the underlay network.

Enable PIMv6 for underlay

Configure Protocol Independent Multicast for IPv6 (PIMv6) sparse mode on a Cisco NX-OS switch to support multicast routing in your underlay network.

Follow these steps to enable PIMv6 for underlay:

Before you begin

  • Ensure you have administrator access to the Cisco NX-OS switch.

  • Decide which loopback interface and IPv6 address you will use.

  • Confirm that no existing configurations will conflict with PIMv6 settings on the selected interfaces.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Configure the loopback interface.

Example:

switch(config)# interface loopback 1
switch(config-if)# ipv6 address 11:0:0:1::1/128
switch(config-if)# ipv6 pim sparse-mode

Step 3

Configure the NVE interface.

Example:

switch(config-if)# interface nve1
switch(config-if-nve)# source-interface loopback 1

Sets loopback1 as the source interface for the NVE interface.

Step 4

Exit configuration mode.

Example:

switch(config-if-nve)# exit

Note

 

For the PIMv6 configuration see the Cisco Nexus 9000 Series NX-OS Multicast Routing Configuration Guide.

For the TRM configuration see the Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide.


The switch is now configured for PIMv6 sparse mode on the underlay network and can participate in IPv6 multicast routing.

Configure Layer 2 Tenant Routed Multicast


TRM allows multicast traffic optimization by signaling Layer 2 multicast routes. This procedure activates TRM features and configures IGMP snooping querier settings on required switches.

Follow these steps to configure Layer 2 Tenant Routed Multicast:

Before you begin

  • VXLAN EVPN must be configured.

  • You must configure IGMP snooping querier per multicast-enabled VXLAN VLAN on all Layer-2 TRM leaf switches.

Procedure


Step 1

Enter configuration mode.

Example:

switch# configure terminal

Step 2

Enable the EVPN/MVPN feature.

Example:

switch(config)# feature ngmvpn

Note

 

Disabling this feature with the no feature ngmvpn command does not remove existing MVPN configurations under BGP.

Step 3

Advertise Layer 2 multicast capability for EVPN.

Example:

switch(config)# advertise evpn multicast

Step 4

Enable IGMP snooping for VXLANs.

Example:

switch(config)# ip igmp snooping vxlan

Step 5

Enter VLAN configuration mode for each multicast-enabled VXLAN VLAN.

Example:

switch(config)# vlan configuration 101

Step 6

Configure the IGMP snooping querier by specifying its IP address for each relevant VLAN.

Example:

switch(config-vlan-config)# ip igmp snooping querier 2.2.2.2

TRM is enabled with Layer 2 multicast and IGMP snooping querier configured, ensuring proper multicast routing and signaling within the VXLAN EVPN fabric.

Configure TRM with vPC support

You can configure TRM Multisite with vPC support on Cisco NX-OS. Beginning with Cisco NX-OS Release 10.1(2), TRM Multisite with vPC BGW is supported.

Follow these steps to configure TRM with vPC support:

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal 

Step 2

Enable required features:

Example:

switch(config)# feature vpc
switch(config)# feature interface-vlan
switch(config)# feature lacp
switch(config)# feature pim
switch(config)# feature ospf

Step 3

Configure PIM RP address for the multicast group range:

Example:

switch(config)# ip pim rp-address 100.100.100.1 group-list 224.0.0.0/4

Step 4

Configure the vPC domain and basic vPC parameters.

Example:

switch(config)# vpc domain 1
switch(config-vpc-domain)# peer switch
switch(config-vpc-domain)# peer gateway
switch(config-vpc-domain)# peer-keepalive destination 172.28.230.85

There is no default for vPC domain. The range is from 1 to 1000.

To enable Layer 3 forwarding for packets destined to the gateway MAC address of the virtual port channel (vPC), use the peer-gateway command.

The peer-keepalive destination ipaddress command configures the IPv4 address for the remote end of the vPC peer-keepalive link.

Note

 

The system does not form the vPC peer link until you configure a vPC peer-keepalive link.

By default, the peer-keepalive link uses the management port and management VRF.

Note

 

We recommend that you configure a separate VRF and use a Layer 3 port from each vPC peer device in that VRF for the vPC peer-keepalive link.

For more information about creating and configuring VRFs, see the Cisco Nexus 9000 NX-OS Series Unicast Routing Config Guide, 9.3(x).
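A minimal sketch of a dedicated keepalive VRF, assuming an illustrative VRF name, front-panel port, and addressing:

switch(config)# vrf context KEEPALIVE
switch(config)# interface Ethernet1/48
switch(config-if)# no switchport
switch(config-if)# vrf member KEEPALIVE
switch(config-if)# ip address 192.0.2.1/30
switch(config-if)# no shutdown
switch(config-if)# vpc domain 1
switch(config-vpc-domain)# peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf KEEPALIVE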

Step 5

(Optional) Set the delay restore timer for SVIs as needed.

Example:

switch(config-vpc-domain)# delay restore interface-vlan 45

We recommend tuning this value when the SVI/VNI scale is high. For example, when the SVI count is 1000, we recommend that you set the delay restore for interface-vlan to 45 seconds.

Step 6

Enable ARP and IPv6 ND synchronization for faster recovery.

Example:

switch(config-vpc-domain)# ip arp synchronize
switch(config-vpc-domain)# ipv6 nd synchronize

Step 7

Create the vPC peer-link port-channel interface and add member interfaces.

Example:

switch(config)# interface port-channel 1
switch(config)# switchport
switch(config)# switchport mode trunk
switch(config)# switchport trunk allowed vlan 1,10,100-200
switch(config)# mtu 9216
switch(config)# vpc peer-link
switch(config)# no shut

switch(config)# interface Ethernet 1/1, 1/21
switch(config)# switchport
switch(config)# mtu 9216
switch(config)# channel-group 1 mode active
switch(config)# no shutdown

Step 8

Define the infra-VLAN and create the required VLAN.

Example:

switch(config)# system nve infra-vlans 10
switch(config)# vlan 10

Step 9

Configure the SVI for the infra-VLAN and enable underlay routing.

Example:

switch(config)# interface vlan 10
switch(config)# ip address 10.10.10.1/30
switch(config)# ip router ospf UNDERLAY area 0
switch(config)# ip pim sparse-mode
switch(config)# no ip redirects
switch(config)# mtu 9216
switch(config)# no shutdown

Configure TRM with vPC support on Cisco Nexus 9504-R and 9508-R switches

Use this task when deploying VXLAN TRM in a vPC topology on Cisco Nexus 9504-R and 9508-R switches equipped with -R line cards.

Follow these steps to configure TRM with vPC support:

Before you begin

  • Ensure you have CLI access to a Cisco Nexus 9504-R or 9508-R switch with -R line cards.

  • Back up your running configuration.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal 

Step 2

Enable the following features: vPC, interface VLAN, LACP, PIM, and OSPF.

Example:

switch(config)# feature vpc
switch(config)# feature interface-vlan
switch(config)# feature lacp
switch(config)# feature pim
switch(config)# feature ospf

Step 3

Define the PIM RP address for the multicast group range.

Example:

switch(config)# ip pim rp-address 100.100.100.1 group-list 224.0.0.0/4

Step 4

(Optional) Set the delay restore timer for SVIs as needed.

Example:

switch(config-vpc-domain)# delay restore interface-vlan 45

Enables the delay restore timer for SVIs. We recommend tuning this value when the SVI/VNI scale is high. For example, when the SVI count is 1000, we recommend that you set the delay restore for interface-vlan to 45 seconds.

Step 5

Carve the TCAM regions required for TRM and VXLAN (N9K-X9636C-RX line cards only), and reload the switch.

Example:

switch(config)# hardware access-list tcam region mac-ifacl 0  ! For TRM
switch(config)# hardware access-list tcam region vxlan 10   ! For VXLAN
switch(config)# reload

Note

 

This TCAM carving command is required to enable TRM forwarding on N9K-X9636C-RX line cards only. Carving the mac-ifacl region to 0 frees those TCAM resources for TRM use.

Step 6

Configure the vPC domain and vPC peer options.

  1. Create and configure the vPC domain.

    Example:

    switch(config)# vpc domain 1

    There is no default. The range is 1–1000.

  2. Set peer switch and peer gateway.

    Example:

    switch(config-vpc-domain)# peer switch
    switch(config-vpc-domain)# peer gateway

    To enable Layer 3 forwarding for packets that are destined to the gateway MAC address of the virtual port channel (vPC), use the peer-gateway command.

  3. Specify peer-keepalive destination IP.

    Example:

    switch(config-vpc-domain)# peer-keepalive destination 172.28.230.85

    Configures the IPv4 address for the remote end of the vPC peer-keepalive link.

    Note

     

    The system does not form the vPC peer link until you configure a vPC peer-keepalive link.

By default, the peer-keepalive link uses the management port and management VRF.

    Note

     

    We recommend that you configure a separate VRF and use a Layer 3 port from each vPC peer device in that VRF for the vPC peer-keepalive link.

    For more information about creating and configuring VRFs, see the Cisco Nexus 9000 NX-OS Series Unicast Routing Config Guide, 9.3(x).

Step 7

Enable ARP and IPv6 ND synchronization for faster recovery.

Example:

switch(config-vpc-domain)# ip arp synchronize
switch(config-vpc-domain)# ipv6 nd synchronize

Step 8

Create the vPC peer-link and assign member interfaces.

Example:

switch(config)# interface port-channel 1
switch(config)# switchport
switch(config)# switchport mode trunk
switch(config)# switchport trunk allowed vlan 1,10,100-200
switch(config)# mtu 9216
switch(config)# vpc peer-link
switch(config)# no shut

switch(config)# interface Ethernet 1/1, 1/21
switch(config)# switchport
switch(config)# mtu 9216
switch(config)# channel-group 1 mode active
switch(config)# no shutdown

Step 9

Create the infra-VLAN and associated SVI for the backup routed path over the vPC peer-link.

Example:

switch(config)# system nve infra-vlans 10
switch(config)# vlan 10

switch(config)# interface vlan 10
switch(config)# ip address 10.10.10.1/30
switch(config)# ip router ospf UNDERLAY area 0
switch(config)# ip pim sparse-mode
switch(config)# no ip redirects
switch(config)# mtu 9216
switch(config)# no shutdown

Flex stats

A flex stat is a statistics collection method that

  • works in real time to monitor overlay route activity on supported Cisco Nexus switches,

  • enables flexible and granular tracking of multicast routes (mroutes) in VXLAN environments, and

  • replaces traditional per-interface statistics gathering for specific scenarios.

Beginning with Cisco NX-OS Release 10.3(1)F, flex stats are supported for overlay routes in Cisco Nexus 9300-X Cloud Scale Switches. Flex stats are not supported for underlay routes. VXLAN NVE VNI ingress and egress, NVE per-peer ingress, and tunnel transmission statistics are not supported under flex stats.

In a VXLAN TRM setup, to collect mroute statistics for overlay mroutes, configure the hardware profile multicast flex-stats-enable command in the default template.

The following CLI commands are not supported after flex stats are enabled:

  • show nve vni <vni_id>/<all> counters
  • show nve peers <peer-ip> interface nve 1 counters
  • show int tunnel <Tunnel interface number> counters

For configuration steps, see Configure Flex Stats for TRM.

Configure Flex Stats for TRM

Flex stats counters provide detailed multicast traffic statistics in VXLAN TRM environments. You can control whether these stats are collected using a hardware profile setting.

Follow these steps to configure Flex Stats for TRM:

Before you begin

Ensure you have administrative access to the switch.

Procedure


Step 1

Enter configuration mode.

Example:

switch# configure terminal

Step 2

Enable the flex stats counters for VXLAN TRM.

Example:

switch(config)# hardware profile multicast flex-stats-enable

To disable the counters, enter:

switch(config)# no hardware profile multicast flex-stats-enable

Note

 

You must reload the switch for the configuration change to take effect.

Step 3

Reload the switch to apply the configuration changes.


Flex stats counters are enabled or disabled for VXLAN TRM after the switch reloads.

Configure TRM Data MDT

TRM data MDTs

A TRM data MDT is a multicast forwarding mechanism that

  • encapsulates source traffic in a selective multicast tunnel

  • forwards multicast only to leaf nodes with interested receivers, and

  • allows immediate or threshold-based switchover from the default multicast distribution tree.

In VXLAN networks using BGP-based EVPN control planes, TRM enables multi-tenancy aware multicast forwarding within or across VTEPs. Traditionally, the default multicast distribution tree (default MDT) forwards traffic to all nodes (PEs) in the underlay, regardless of whether there are interested receivers in the overlay. In contrast, a TRM data MDT (using S-PMSI) optimizes delivery by ensuring that only leaf nodes with active receivers participate in the selective multicast distribution tree and receive traffic.

Table 2. MDT Comparison table

Attribute            | Default MDT               | Data MDT (S-PMSI)
Traffic distribution | All nodes receive traffic | Only leaf nodes with receivers join
Tunnel type          | Default multicast tunnel  | Selective multicast tunnel
Switchover           | Not applicable            | Immediate or based on bandwidth threshold

Supported platforms and configuration constraints for TRM Data MDT

The table and lists summarize supported Cisco NX-OS platforms, software releases, and key configuration constraints for TRM Data MDT (Multicast Distribution Tree) functionality.

Supported Platforms and Software Releases

NX-OS Release      | Supported Platforms / Line Cards
10.3(2)F and later | Cisco Nexus 9300 EX/FX/FX2/FX3/GX/GX2 switches, and 9500 switches with 9700-EX/FX/GX line cards
10.4(1)F and later | Cisco Nexus 9332D-H2R switches
10.4(2)F and later | Cisco Nexus 93400LD-H1 switches
10.4(3)F and later | Cisco Nexus 9364C-H1 switches
10.5(2)F and later | Cisco Nexus 9500 Series switches with N9K-X9736C-FX3 line card

Feature and Configuration Support

  • Data MDT in fabric is supported only with DCI IR for a given VRF. Data MDT in fabric is not supported with DCI Multicast for a given VRF on the site BGW.

  • Data MDT configuration is VRF specific and configured under L3 VRF.

  • The following TRM Data MDT features are supported:

    • ASM and SSM group ranges are supported for Data MDT. PIM-Bidir underlay is not supported for Data MDT.

    • Data MDT supports IPv4 and IPv6 overlay multicast traffic.

    • Data MDT is supported on vPC and VMCT leafs, as well as on vPC and Anycast BGWs. Layer 2 and Layer 3 orphan/external networks can be connected to vPC nodes.

    • Data MDT configuration is per Layer 3 VRF.

    • Data MDT origination (immediate and threshold based).

    • Data MDT encap route programming delay of 3 seconds. User-defined delays are currently not supported.

  • Layer 2 and Layer 2/Layer 3 mixed modes are not supported.

  • New L3VNI mode is supported.

  • Ensure that the total number of underlay groups (L2 BUM, default MDT, and data MDT groups) does not exceed 512.


Configure TRM Data MDT

TRM (Tenant Routed Multicast) Data MDT (Multicast Distribution Tree) increases multicast efficiency by offloading large data flows to selective data MDT groups when configured traffic thresholds are exceeded.

Before you begin

To enable switching to data MDT group based on real-time flow rate, the following command is needed:

hardware profile multicast flex-stats-enable


Note


You must reload the switch after entering this command.


Follow these steps to configure TRM Data MDT:

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

Step 2

Configure the VRF context.

Example:

switch(config)# vrf context vrf1

Step 3

Configure the address family for unicast traffic.

Example:

For IPv4
switch(config-vrf)# address-family ipv4 unicast
For IPv6
switch(config-vrf)# address-family ipv6 unicast

Step 4

Enable or disable data MDT per address family.

Example:

switch(config-vrf-af)# mdt data vxlan 224.7.8.0/24 route-map map1 10

Cisco Nexus switches support overlapping group ranges between VRFs as well as within a VRF between the address families.

  • The threshold and route-map are optional. The threshold applies to the source traffic rate, measured in kbps. When the threshold is exceeded, the traffic takes 3 seconds to switch over to the data MDT.

  • The group range is part of the command key. More than one group range can be configured per address family, as shown in the sketch after this list.

  • BUM and default MDT groups must not overlap with data MDT groups.

  • Data MDT group ranges can overlap with one another.
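A minimal sketch showing two data MDT group ranges configured under the same address family (the second group prefix is illustrative):

switch(config-vrf)# address-family ipv4 unicast
switch(config-vrf-af)# mdt data vxlan 224.7.8.0/24 route-map map1 10
switch(config-vrf-af)# mdt data vxlan 224.9.8.0/24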


Verification commands for TRM Data MDT configuration

To display the TRM Data MDT configuration information, enter one of these commands:

Command

Purpose

show nve vni { vni-id | all } mdt [{ local | remote | peer-sync }] [{ cs cg } | { cs6 cg6 }]

Displays customer source (CS), customer group (CG) to data source (DS), data group (DG) mapping information.

show nve vrf [x] mdt [local | remote | peer-sync] [y] [z]

Displays CS, CG allocations under VRF.

show bgp ipv4 mvpn route-type 3 detail

Displays BGP S-PMSI route information for IPv4 overlay route.

show bgp ipv6 mvpn route-type 3 detail

Displays BGP S-PMSI route information for IPv6 overlay route.

show fabric multicast [ipv4 | ipv6] spmsi-ad-route [Source Address] [Group address] vrf vrf_name

Displays fabric multicast SPMSI-AD IPV4/IPv6 information for a given tenant VRF.

show ip mroute detail vrf vrf_name

Displays IP multicast route information for the specified tenant VRF.

show l2route spmsi {all | topology vlan}

Displays CS-CG to DS-DG mapping information at L2RIB (Encap route programming).

show forwarding distribution multicast vxlan mdt-db

Displays the MFDM/MFIB data MDT database.

show nve resource multicast

Displays the resource usage of data MDT and any failed allocations.
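For example, assuming the tenant VRF vrf1 and L3 VNI 50001 used earlier in this chapter, a verification session might begin with:

switch# show nve vni 50001 mdt local
switch# show bgp ipv4 mvpn route-type 3 detail
switch# show ip mroute detail vrf vrf1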

Configure IGMP Snooping

IGMP snooping mechanisms over VXLAN

IGMP snooping mechanisms over VXLAN are multicast traffic management features that

  • enable each VTEP to monitor IGMP reports sent within VXLAN segments,

  • selectively forward multicast traffic only to interested receivers, and

  • reduce unnecessary flooding of multicast traffic in the VNI/VLAN.

The configuration of IGMP snooping in VXLAN is the same as in a regular VLAN domain.
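For reference, a minimal sketch of the per-VLAN IGMP snooping querier configuration, identical to what you would apply in a regular VLAN domain (the VLAN ID and querier address are illustrative):

switch(config)# vlan configuration 101
switch(config-vlan-config)# ip igmp snooping querier 2.2.2.2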


Guidelines for IGMP snooping over VXLAN

  • Do not configure IGMP snooping over VXLAN on VLANs with FEX member ports.

  • You can configure IGMP snooping over VXLAN with both IR and multicast underlay.

  • Use IGMP snooping over VXLAN in BGP EVPN topologies, not in flood and learn topologies.

  • Beginning with Cisco NX-OS Release 9.3(3), you can configure IGMP snooping over VXLAN on Cisco Nexus 9300-GX switches.

Configure IGMP snooping over VXLAN

IGMP snooping over VXLAN ensures optimized multicast traffic forwarding by delivering packets only to receivers that have joined the group, reducing unnecessary flooding in the data center fabric.

Before you begin

  • You have access to the device CLI in privileged EXEC mode.

  • VXLAN and VLANs are already configured on the switch.

Procedure


Step 1

Enter global configuration mode.

Example:

switch# configure terminal

This mode allows you to configure system-wide settings.

Step 2

Enable IGMP snooping for VXLAN VLANs.

Example:

switch(config)# ip igmp snooping vxlan

This command is required to activate snooping on VXLAN-enabled VLANs.

Step 3

(Optional) Prevent the NVE interface from acting as a static mrouter port.

Example:

switch(config)# ip igmp snooping disable-nve-static-router-port

By default, IGMP snooping over VXLAN adds the NVE interface as a multicast router port.

Note

 

This command is not required for TRM-based VXLAN EVPN fabrics. Enabling it may cause multicast forwarding issues in TRM fabrics.


IGMP snooping is enabled for VXLAN VLANs as configured.

Verification commands for VXLAN EVPN and TRM with IPv6 multicast underlay

This topic provides commonly used show commands and sample output to verify the configuration and operational status of VXLAN EVPN and Tenant Routed Multicast (TRM) components in networks using an IPv6 multicast underlay.

show run interface nve 1

Displays the running configuration of the NVE (Network Virtualization Edge) interface, summarizing key VXLAN, multicast, and source-interface settings.

What to check:

  • host-reachability protocol bgp: Confirms BGP is used for host reachability.

  • source-interface loopback1: Identifies the NVE source interface.

  • member vni sections: Show the VNIs and associated multicast groups.

switch(config)# show run interface nve 1

!Command: show running-config interface nve1
!Running configuration last done at: Wed Jul  5 10:03:58 2023
!Time: Wed Jul  5 10:04:01 2023
version 10.3(99x) Bios:version 01.08

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1
  member vni 10501
    mcast-group ff04::40
  member vni 50001 associate-vrf
    mcast-group ff10:0:0:1::1

show ipv6 mroute

Displays the IPv6 multicast routing table, including incoming and outgoing interfaces for multicast groups and related PIMv6 (ASM) configuration.

What to check:

  • Multicast group membership and source

  • RPF neighbor addresses

  • Outgoing interfaces for group replication

switch(config)# show ipv6 mroute
IPv6 Multicast Routing Table for VRF "default"

(*, ff04::40/128), uptime: 05:20:19, nve pim6 ipv6
  Incoming interface: Ethernet1/36, RPF nbr: fe80::23a:9cff:fe23:8367
  Outgoing interface list: (count: 1)
    nve1, uptime: 05:20:19, nve


(172:172:16:1::1/128, ff04::40/128), uptime: 05:20:19, nve m6rib pim6 ipv6
  Incoming interface: loopback1, RPF nbr: 172:172:16:1::1
  Outgoing interface list: (count: 2)
    Ethernet1/36, uptime: 01:47:03, pim6
    Ethernet1/27, uptime: 04:14:20, pim6


(*, ff10:0:0:1::10/128), uptime: 05:20:18, nve ipv6 pim6
  Incoming interface: Ethernet1/36, RPF nbr: fe80::23a:9cff:fe23:8367
  Outgoing interface list: (count: 1)
    nve1, uptime: 05:20:18, nve


(172:172:16:1::1/128, ff10:0:0:1::10/128), uptime: 05:20:18, nve m6rib ipv6 pim6
  Incoming interface: loopback1, RPF nbr: 172:172:16:1::1
  Outgoing interface list: (count: 2)
    Ethernet1/36, uptime: 04:04:35, pim6
    Ethernet1/27, uptime: 04:13:35, pim6

show ipv6 pim neighbor

Lists PIMv6 neighbors and key information about PIM adjacency, including interface, uptime, DR priority, and secondary addresses.

What to check:

  • Existence and status of expected PIM neighbors

  • Interface associations and bidirectional capability

switch(config)# show ipv6 pim neighbor
PIM Neighbor Status for VRF "default"
Neighbor                     Interface            Uptime    Expires   DR       Bidir-  BFD     ECMP Redirect
                                                                      Priority Capable State   Capable
fe80::23a:9cff:fe28:5e07     Ethernet1/27         20:23:38  00:01:44  1        yes     n/a     no
   Secondary addresses:
    27:50:1:1::2

show ipv6 pim rp

Displays Rendezvous Point (RP) status and configuration, detailing RP address, uptime, priority, and group ranges.

What to check:

  • Correct RP address advertised for the multicast domain

  • Associated group ranges

switch(config)# show ipv6 pim rp
PIM RP Status Information for VRF "default"
BSR disabled
BSR RP Candidate policy: route-map1
BSR RP policy: route-map1

RP: 101:101:101:101::101, (0),
 uptime: 21:30:43   priority: 255,
 RP-source: (local),
 group ranges:
 ff00::/8

show ipv6 bgp neighbors

Shows status and parameters for BGP IPv6 neighbors. Confirms peering relationships, state, capabilities, and accepted prefixes.

What to check:

  • BGP session state (must be "Established")

  • Neighbor IP and AS numbers

  • Number of prefixes accepted or advertised

  • Capability negotiation (for example, 4-Byte AS, Graceful Restart)

switch(config-if)# show ipv6 bgp neighbors
BGP neighbor is 2001:DB8::1, remote AS 200, ebgp link, Peer index 3
  BGP version 4, remote router ID 192.0.2.1
  Neighbor previous state = OpenConfirm
  BGP state = Established, up for 00:00:16
  Neighbor vrf: default
  Peer is directly attached, interface Ethernet1/33
  Enable logging neighbor events
  Last read 0.926823, hold time = 3, keepalive interval is 1 seconds
  Last written 0.926319, keepalive timer expiry due 0.073338
  Received 23 messages, 0 notifications, 0 bytes in queue
  Sent 67 messages, 0 notifications, 0(0) bytes in queue
  Enhanced error processing: On
    0 discarded attributes
  Connections established 1, dropped 0
  Last update recd 00:00:15, Last update sent  = 00:00:15
   Last reset by us 00:08:45, due to session closed
  Last error length sent: 0
  Reset error value sent: 0
  Reset error sent major: 104 minor: 0
  Notification data sent:
  Last reset by peer never, due to No error
  Last error length received: 0
  Reset error value received 0
  Reset error received major: 0 minor: 0
  Notification data received:

  Neighbor capabilities:
  Dynamic capability: advertised (mp, refresh, gr) received (mp, refresh, gr)
  Dynamic capability (old): advertised received
  Route refresh capability (new): advertised received
  Route refresh capability (old): advertised received
  4-Byte AS capability: advertised received
  Address family IPv6 Unicast: advertised received
  Graceful Restart capability: advertised received

  Graceful Restart Parameters:
  Address families advertised to peer:
    IPv6 Unicast
  Address families received from peer:
    IPv6 Unicast
  Forwarding state preserved by peer for:
  Restart time advertised to peer: 400 seconds
  Stale time for routes advertised by peer: 300 seconds
  Restart time advertised by peer: 120 seconds
  Extended Next Hop Encoding Capability: advertised received
  Receive IPv6 next hop encoding Capability for AF:
    IPv4 Unicast  VPNv4 Unicast

  Message statistics:
                              Sent               Rcvd
  Opens:                        46                  1
  Notifications:                 0                  0
  Updates:                       2                  2
  Keepalives:                   18                 18
  Route Refresh:                 0                  0
  Capability:                    2                  2
  Total:                        67                 23
  Total bytes:                 521                538
  Bytes in queue:                0                  0

  For address family: IPv6 Unicast
  BGP table version 10, neighbor version 10
  3 accepted prefixes (3 paths), consuming 864 bytes of memory
  0 received prefixes treated as withdrawn
  2 sent prefixes (2 paths)
  Inbound soft reconfiguration allowed(always)
  Allow my ASN 3 times
  Last End-of-RIB received 00:00:01 after session start
  Last End-of-RIB sent 00:00:01 after session start
  First convergence 00:00:01 after session start with 2 routes sent

  Local host: FE80::/10, Local port: 179
  Foreign host: 2001:DB8::1, Foreign port: 17226
  fd = 112

show bgp l2vpn evpn neighbors

Displays BGP EVPN neighbor information for Layer 2 VPN, including state, capabilities, and advertised/received prefixes.

What to check:

  • BGP state (should be "Established")

  • Advertised/received address families (for example, L2VPN EVPN, MVPN)

  • Number of EVPN prefixes

  • Route-map associations if present

switch(config-if)# show bgp l2vpn evpn neighbors 2001:DB8::/32
BGP neighbor is 2001:DB8::/32, remote AS 200, ebgp link, Peer index 5
  BGP version 4, remote router ID 192.0.2.1
  Neighbor previous state = OpenConfirm
  BGP state = Established, up for 00:01:33
  Neighbor vrf: default
  Using loopback0 as update source for this peer
  Using iod 65 (loopback0) as update source
  Enable logging neighbor events
  External BGP peer might be up to 5 hops away
  Last read 0.933565, hold time = 3, keepalive interval is 1 seconds
  Last written 0.915927, keepalive timer expiry due 0.083742
  Received 105 messages, 0 notifications, 0 bytes in queue
  Sent 105 messages, 0 notifications, 0(0) bytes in queue
  Enhanced error processing: On
    0 discarded attributes
  Connections established 1, dropped 0
  Last update recd 00:01:32, Last update sent  = 00:01:32
   Last reset by us never, due to No error
  Last error length sent: 0
  Reset error value sent: 0
  Reset error sent major: 0 minor: 0
  Notification data sent:
  Last reset by peer never, due to No error
  Last error length received: 0
  Reset error value received 0
  Reset error received major: 0 minor: 0
  Notification data received:

  Neighbor capabilities:
  Dynamic capability: advertised (mp, refresh, gr) received (mp, refresh, gr)
  Dynamic capability (old): advertised received
  Route refresh capability (new): advertised received
  Route refresh capability (old): advertised received
  4-Byte AS capability: advertised received
  Address family IPv4 MVPN: advertised received
  Address family IPv6 MVPN: advertised received
  Address family L2VPN EVPN: advertised received
  Graceful Restart capability: advertised received

  Graceful Restart Parameters:
  Address families advertised to peer:
    IPv4 MVPN  IPv6 MVPN  L2VPN EVPN
  Address families received from peer:
    IPv4 MVPN  IPv6 MVPN  L2VPN EVPN
  Forwarding state preserved by peer for:
  Restart time advertised to peer: 400 seconds
  Stale time for routes advertised by peer: 300 seconds
  Restart time advertised by peer: 120 seconds
  Extended Next Hop Encoding Capability: advertised received
  Receive IPv6 next hop encoding Capability for AF:
    IPv4 Unicast  VPNv4 Unicast

  Message statistics:
                              Sent               Rcvd
  Opens:                         1                  1
  Notifications:                 0                  0
  Updates:                       6                  3
  Keepalives:                   95                 95
  Route Refresh:                 0                  0
  Capability:                    6                  6
  Total:                       105                105
  Total bytes:                2551               2047
  Bytes in queue:                0                  0

  For address family: IPv4 MVPN
  BGP table version 3, neighbor version 3
  0 accepted prefixes (0 paths), consuming 0 bytes of memory
  0 received prefixes treated as withdrawn
  0 sent prefixes (0 paths)
  Community attribute sent to this neighbor
  Extended community attribute sent to this neighbor
  Allow my ASN 3 times
  Outbound route-map configured is RN_NextHop_Unchanged, handle obtained
  Last End-of-RIB received 00:00:01 after session start
  Last End-of-RIB sent 00:00:01 after session start
  First convergence 00:00:01 after session start with 0 routes sent

  For address family: IPv6 MVPN
  BGP table version 3, neighbor version 3
  0 accepted prefixes (0 paths), consuming 0 bytes of memory
  0 received prefixes treated as withdrawn
  0 sent prefixes (0 paths)
  Community attribute sent to this neighbor
  Extended community attribute sent to this neighbor
  Allow my ASN 3 times
  Outbound route-map configured is RN_NextHop_Unchanged, handle obtained
  Last End-of-RIB received 00:00:01 after session start
  Last End-of-RIB sent 00:00:01 after session start
  First convergence 00:00:01 after session start with 0 routes sent

  For address family: L2VPN EVPN
  BGP table version 7, neighbor version 7
  0 accepted prefixes (0 paths), consuming 0 bytes of memory
  0 received prefixes treated as withdrawn
  4 sent prefixes (4 paths)
  Community attribute sent to this neighbor
  Extended community attribute sent to this neighbor
  Allow my ASN 3 times
  Advertise GW IP is enabled
  Outbound route-map configured is RN_NextHop_Unchanged, handle obtained
  Last End-of-RIB received 00:00:01 after session start
  Last End-of-RIB sent 00:00:01 after session start
  First convergence 00:00:01 after session start with 4 routes sent

  Local host: 2001:DB8::2, Local port: 21132
  Foreign host: 2001:DB8::/32, Foreign port: 179
  fd = 113

VXLAN EVPN and TRM configuration with IPv6 multicast underlay

This topic provides the configuration details for VXLAN EVPN and TRM networks using an IPv6 multicast underlay. The configurations below apply to leaf and spine devices.

Leaf device configuration

  • NVE Configuration
    interface nve1
      no shutdown
      host-reachability protocol bgp
      source-interface loopback1
      member vni 10501
        mcast-group ff04::40
      member vni 50001 associate-vrf
        mcast-group ff10:0:0:1::1
    
  • PIMv6 Configuration
    feature pim6
    
    ipv6 pim rp-address 101:101:101:101::101 group-list ff00::/8
    
    interface loopback1
      ipv6 address 172:172:16:1::1/128
      ipv6 pim sparse-mode
    
    interface Ethernet1/27
      ipv6 address 27:50:1:1::1/64
      ospfv3 hello-interval 1
      ipv6 router ospfv3 v6u area 0.0.0.0
      ipv6 pim sparse-mode
      no shutdown
    
  • BGP Configuration
    router bgp 100
        router-id 172.16.1.1
        address-family ipv4 unicast
          maximum-paths 64
          maximum-paths ibgp 64
        address-family ipv6 unicast
          maximum-paths 64
          maximum-paths ibgp 64
        address-family ipv4 mvpn
        address-family l2vpn evpn
        neighbor 172:17:1:1::1
          remote-as 100
          update-source loopback0
          address-family ipv4 mvpn
            send-community
            send-community extended
          address-family ipv6 mvpn
            send-community
            send-community extended
          address-family l2vpn evpn
            send-community
        neighbor 172:17:2:2::1
          remote-as 100
          update-source loopback0
          address-family ipv4 mvpn
            send-community
            send-community extended
          address-family ipv6 mvpn
            send-community
            send-community extended
          address-family l2vpn evpn
            send-community
            send-community extended
        vrf VRF1
          reconnect-interval 1
          address-family ipv4 unicast
            network 150.1.1.1/32
            advertise l2vpn evpn
            redistribute hmm route-map hmmAdv
    
    evpn
      vni 10501 l2
        rd auto
        route-target import auto
        route-target export auto
    vrf context VRF1
      vni 50001
      rd auto
      address-family ipv4 unicast
        route-target both auto
        route-target both auto mvpn
        route-target both auto evpn
      address-family ipv6 unicast
        route-target both auto
        route-target both auto mvpn
        route-target both auto evpn
    
    Note: In case of vPC leafs, you must configure an identical “mvpn vri id” on both vPC nodes. For example:
    
    router bgp 100
      mvpn vri id 2001
    

    Note


    MVPN VRI ID must be unique within the network or setup. That is, if the network has three different sets of vPC pairs, each pair must have a different VRI ID.


Spine device configuration

  • NVE Configuration
    nv overlay evpn
  • PIMv6 Configuration
    feature pim6
    
    ipv6 pim rp-address 101:101:101:101::101 group-list ff00::/8
    ipv6 pim anycast-rp 101:101:101:101::101 102:102:102:102::102
    ipv6 pim anycast-rp 101:101:101:101::101 103:103:103:103::103
    
    interface loopback101
      ipv6 address 101:101:101:101::101/128
      ipv6 router ospfv3 v6u area 0.0.0.0
      ipv6 pim sparse-mode
    
    interface loopback102
      ipv6 address 102:102:102:102::102/128
      ipv6 router ospfv3 v6u area 0.0.0.0
      ipv6 pim sparse-mode
    
    interface Ethernet1/50/1
      ipv6 address 27:50:1:1::2/64
      ipv6 pim sparse-mode
      no shutdown
    
  • BGP Configuration
    feature bgp
    
    router bgp 100
      router-id 172.16.40.1
      address-family ipv4 mvpn
      address-family ipv6 mvpn
      address-family l2vpn evpn
      neighbor 172:16:1:1::1
        remote-as 100
        update-source loopback0
        address-family ipv4 mvpn
          send-community
          send-community extended
          route-reflector-client
        address-family ipv6 mvpn
          send-community
          send-community extended
          route-reflector-client
        address-family l2vpn evpn
          send-community
          send-community extended
          route-reflector-client