Multicast Configuration Guide for Cisco ASR 9000 Series Routers, IOS XR Release 6.2.x
Implementing Layer 3 Multicast Routing on Cisco IOS XR Software
This module describes how to implement Layer 3 multicast routing on Cisco ASR 9000 Series Routers running Cisco IOS XR Software.
Multicast routing is a bandwidth-conserving
technology that reduces traffic by simultaneously delivering a single stream of
information to potentially thousands of corporate recipients and homes.
Applications that take advantage of multicast routing include video
conferencing, corporate communications, distance learning, and distribution of
software, stock quotes, and news.
This document assumes that you are familiar with IPv4 and IPv6 multicast routing configuration tasks and concepts for Cisco IOS XR Software.
Multicast routing allows a host to send packets to a subset of all hosts as a group transmission rather than to a single host, as in unicast transmission, or to all hosts, as in broadcast transmission. The subset of hosts is known as group members and is identified by a single multicast group address that falls within the IP Class D address range, 224.0.0.0 through 239.255.255.255.
For detailed conceptual information about multicast routing and complete descriptions of the multicast routing commands listed in this module, see the Related Documents section.
Feature History for Configuring Multicast Routing on the Cisco ASR 9000 Series Routers
Release 3.7.2: This feature was introduced.
Release 3.9.0: Support was added for these features: flow-based multicast only fast reroute (MoFRR) and IGMP VRF override.
Release 3.9.1: Support was added for the Multicast VPN feature (for the IPv4 address family).
Release 4.0.0: Support was added for IPv4 multicast routing, Multicast VPN basic and InterAS Option A on the Cisco ASR 9000 Series SPA Interface Processor-700 line card, and MVPN hub and spoke topology.
Release 4.0.1: Support was added for IPv6 multicast routing.
Release 4.1.0: Support was added for Label Switched Multicast using Point-to-Multipoint Traffic Engineering in the global context only (not in a VRF).
Release 4.2.1: Support was added for these features: Label Switched Multicast using MLDP (Multicast Label Distribution Protocol), Multicast VPN for the IPv6 address family, Satellite nV, and InterAS support on Multicast VPN.
Release 4.3.2: Support was added for these features: IPv4 traffic on Multicast over unicast GRE, and TI (Topology Independent) MoFRR.
Release 5.2.0: Support was introduced for Bidirectional Global Protocol Independent Multicast.
Release 5.3.2: Support for IPv6 traffic and ECMP on Multicast over unicast GRE was introduced.
Release 6.0.0: Support for Segmented Multicast Stitching with Inter-AS was introduced.
Release 6.0.0: Support for MLDP Carrier Supporting Carrier based MVPN was introduced.
Release 6.1.2: The Layer 3 Multicast Bundle Subinterface Load Balancing feature was introduced.
Release 6.1.2: Segmented Multicast Stitching with Inter-AS and MLDP Carrier Supporting Carrier based MVPN feature support was extended to Cisco IOS XR 64 bit.
Release 6.1.2: MVPN, MoGRE, MoFRR, and Global Table Multicast feature support was extended to Cisco IOS XR 64 bit.
Prerequisites for Implementing Multicast Routing
You must install and activate the multicast pie. For detailed information about optional PIE installation, see the Cisco ASR 9000 Series Aggregation Services Router Getting Started Guide.
For MLDP, the MPLS PIE must also be installed.
You must be in a user group associated with a task group that includes the proper task IDs. The command reference guides include
the task IDs required for each command. If you suspect user group assignment is preventing you from using a command, contact
your AAA administrator for assistance.
You must be familiar with IPv4 and IPv6 multicast routing configuration tasks and concepts.
Unicast routing must be operational.
To enable multicast VPN, you must configure a VPN routing and forwarding (VRF) instance.
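As a starting point, the sketch below shows one way to define a VRF and enable multicast routing globally and for that VRF as a prerequisite for multicast VPN. The VRF name, route-target values, and the choice of enabling all interfaces are illustrative assumptions; adapt them to your deployment and verify the syntax for your release.
vrf VPN_A
 address-family ipv4 unicast
  import route-target
   100:1
  !
  export route-target
   100:1
  !
 !
!
multicast-routing
 ! enable multicast on all interfaces in the global table
 address-family ipv4
  interface all enable
 !
 ! enable multicast for the VRF (creates the MVRF)
 vrf VPN_A
  address-family ipv4
   interface all enable
  !
 !
!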
Information About Implementing Multicast Routing
Key Protocols and Features Supported in the Cisco IOS XR Software Multicast Routing Implementation
Traditional IP communication allows a host to send packets to a single host (unicast transmission) or to all hosts (broadcast transmission). Multicast provides a third scheme, allowing a host to send a single data stream to a subset of all hosts (group transmission) at about the same time. IP hosts are known as group members.
Packets delivered to
group members are identified by a single multicast group address. Multicast
packets are delivered to a group using best-effort reliability, just like IP
unicast packets.
The multicast
environment consists of senders and receivers. Any host, regardless of whether
it is a member of a group, can send to a group. However, only the members of a
group receive the message.
A multicast address is
chosen for the receivers in a multicast group. Senders use that group address
as the destination address of a datagram to reach all members of the group.
Membership in a
multicast group is dynamic; hosts can join and leave at any time. There is no
restriction on the location or number of members in a multicast group. A host
can be a member of more than one multicast group at a time.
How active a multicast
group is and what members it has can vary from group to group and from time to
time. A multicast group can be active for a long time, or it may be very
short-lived. Membership in a group can change constantly. A group that has
members may have no activity.
Routers use the
Internet Group Management Protocol (IGMP) (IPv4) and Multicast Listener
Discovery (MLD) (IPv6) to learn whether members of a group are present on their
directly attached subnets. Hosts join multicast groups by sending IGMP or MLD
report messages.
Many multimedia
applications involve multiple participants. Multicast is naturally suitable for
this communication paradigm.
Multicast Routing
Implementation
Cisco IOS XR Software supports the following protocols to
implement multicast routing:
IGMP
is
used
between hosts on a LAN and the routers on that LAN to track the
multicast groups of which hosts are members.
Protocol
Independent Multicast in sparse mode (PIM-SM) is used between routers so that
they can track which multicast packets to forward to each other and to their
directly connected LANs.
Protocol
Independent Multicast in Source-Specific Multicast (PIM-SSM) is similar to
PIM-SM with the additional ability to report interest in receiving packets from
specific source addresses (or from all but the specific source addresses), to
an IP multicast address.
PIM-SSM is made
possible by IGMPv3 and MLDv2. Hosts can now indicate interest in specific
sources using IGMPv3 and MLDv2. SSM does not require a rendezvous point (RP) to
operate.
PIM Bidirectional is a variant of the Protocol Independent Multicast suite of routing protocols for IP multicast. PIM-BIDIR is designed to be used for many-to-many applications within individual PIM domains.
This image shows
IGMP and PIM-SM
operating in a multicast environment.
Figure 1. Multicast
Routing Protocols
PIM-SM, PIM-SSM, and
PIM-BIDIR
Protocol Independent Multicast (PIM) is a multicast routing protocol used to create multicast distribution trees, which are used to forward multicast data packets. PIM is an efficient IP routing protocol that is “independent” of a routing table, unlike other multicast protocols such as Multicast Open Shortest Path First (MOSPF) or Distance Vector Multicast Routing Protocol (DVMRP).
Cisco IOS XR Software supports Protocol Independent
Multicast in sparse mode (PIM-SM), Protocol Independent Multicast in
Source-Specific Multicast (PIM-SSM), and Protocol Independent Multicast in
Bi-directional mode (BIDIR) permitting these modes to operate on your router at
the same time.
PIM-SM and PIM-SSM support one-to-many applications by greatly simplifying the protocol mechanics for deployment ease. Bidirectional PIM helps deploy emerging communication and financial applications that rely on a many-to-many application model. BIDIR PIM enables these applications to scale easily to a very large number of groups and sources by eliminating the maintenance of source state.
PIM-SM Operations
PIM in sparse mode operation is used in a multicast network when relatively few routers are involved in each multicast and
these routers do not forward multicast packets for a group, unless there is an explicit request for the traffic.
PIM in Source-Specific
Multicast operation uses information found on source addresses for a multicast
group provided by receivers and performs source filtering on traffic.
By default,
PIM-SSM operates in the 232.0.0.0/8 multicast group range for IPv4
and
ff3x::/32 (where x is any valid scope) in IPv6. To configure these values,
use the
ssm range
command.
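For example, a sketch of overriding the default IPv4 SSM range might look like the following; the access list name and group range are placeholders, so verify the exact syntax against the command reference for your release.
ipv4 access-list my-ssm-range
 ! match the groups to be treated as SSM
 10 permit ipv4 any 239.232.0.0 0.0.255.255
!
router pim
 address-family ipv4
  ssm range my-ssm-range
 !
!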
If SSM is deployed
in a network already configured for PIM-SM, only the last-hop routers must be
upgraded with
Cisco IOS XR Software that supports the SSM feature.
No MSDP SA
messages within the SSM range are accepted, generated, or forwarded.
PIM-Bidirectional Operations
PIM Bidirectional (BIDIR) has one shared tree from sources to RP and from RP to receivers. This is unlike the PIM-SM, which
is unidirectional by nature with multiple source trees - one per (S,G) or a shared tree from receiver to RP and multiple SG
trees from RP to sources.
Benefits of PIM BIDIR are as follows:
Because all sources for the same group share a single (*, G) state, only minimal state is required in each router.
No data-triggered events.
Rendezvous Point (RP) router not required. The RP address only needs to be a routable address and need not exist on a physical
device.
Restrictions for
PIM-SM and PIM-SSM, and PIM BIDIR
Interoperability with SSM
PIM-SM operations within the SSM range of addresses change to PIM-SSM. In this mode, only PIM (S,G) join and prune messages
are generated by the router, and no (S,G) RP shared tree or (*,G) shared tree messages are generated.
IGMP Version
To report multicast memberships to neighboring multicast routers, hosts use IGMP, and all routers on the subnet must be configured
with the same version of IGMP.
A router running Cisco IOS XR Software does not automatically detect Version 1 systems. You must use the version command in router IGMP configuration submode to configure the IGMP version.
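For example, forcing IGMP Version 2 on a single interface might look like the following sketch; the interface name is a placeholder.
router igmp
 interface GigabitEthernet0/0/0/0
  ! all hosts and routers on this subnet should run the same IGMP version
  version 2
 !
!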
PIM-Bidir Restrictions
PIM-Bidir is not supported on MVPN.
Internet Group
Management Protocol
Cisco IOS XR Software provides support for Internet Group
Management Protocol (IGMP) over IPv4.
IGMP
provides
a means for hosts to indicate which multicast traffic they are
interested in and for routers to control and limit the flow of multicast
traffic throughout the network. Routers build state by means of IGMP
and
MLD messages; that is, router queries and host reports.
A set of routers and hosts that receive multicast data streams from the same source is called a multicast group.
Hosts use IGMP
and
MLD messages to join and leave multicast groups.
Note
IGMP messages use
group addresses, which are Class D IP addresses. The high-order four bits of a
Class D address are 1110. Host group addresses can be in the range 224.0.0.0 to
239.255.255.255. The address 224.0.0.0 is guaranteed not to be assigned to any
group. The address 224.0.0.1 is assigned to all systems on a subnet. The
address 224.0.0.2 is assigned to all routers on a subnet.
IGMP Versions
The following points describe IGMP versions 1, 2, and 3:
IGMP Version 1 provides for the basic query-response mechanism that allows the multicast router to determine which multicast
groups are active and for other processes that enable hosts to join and leave a multicast group.
IGMP Version 2 extends IGMP allowing such features as the IGMP query timeout and the maximum query-response time. See RFC
2236.
IGMP Version 3 permits joins and leaves for certain source and group pairs instead of requesting traffic from all sources
in the multicast group.
IGMP Routing Example
Figure 2 illustrates two sources, 10.0.0.1 and 10.0.1.1, that are multicasting to group 239.1.1.1. The receiver wants to receive traffic
addressed to group 239.1.1.1 from source 10.0.0.1 but not from source 10.0.1.1. The host must send an IGMPv3 message containing
a list of sources and groups (S, G) that it wants to join and a list of sources and groups (S, G) that it wants to leave.
Router C can now use this information to prune traffic from Source 10.0.1.1 so that only Source 10.0.0.1 traffic is being
delivered to
Router C.
Figure 2. IGMPv3 Signaling
Note
When configuring IGMP, ensure that all systems on the subnet support the same IGMP version. The router does not automatically
detect Version 1 systems. Configure the router for Version 2 if your hosts do not support Version 3.
Configuring IGMP Per
Interface States Limit
The IGMP Per Interface States Limit sets a limit on the creation of outgoing interface list entries (OLEs) for the IGMP interface. When the set limit is reached, the group is not accounted against this interface, but the group can exist in the IGMP context for some other interface.
The following configuration sets a limit on the number of group memberships created on an interface as a result of receiving IGMP or MLD membership reports.
<threshold> is the threshold number of groups at which a syslog warning message is issued.
<acl> provides an option for selective accounting. If provided, only groups or (S,G)s that are permitted by the ACL are accounted against the limit. Groups or (S,G)s that are denied by the ACL are not accounted against the limit. If not provided, all groups are accounted against the limit.
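A sketch of such a configuration is shown below; the limit, threshold, ACL name, and interface are placeholders, and the exact keyword form should be verified against the command reference for your release.
router igmp
 interface GigabitEthernet0/0/0/0
  ! allow up to 3000 groups, warn at 2900, count only groups permitted by acl-mcast-limit
  maximum groups-per-interface 3000 threshold 2900 acl-mcast-limit
 !
!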
The following messages
are displayed when the threshold limit is reached for IGMP:
igmp[1160]: %ROUTING-IPV4_IGMP-4-OOR_THRESHOLD_REACHED : Threshold for Maximum number of group per interface has been reached 3: Groups joining will soon be throttled.
Config a higher max or take steps to reduce states
igmp[1160]: %ROUTING-IPV4_IGMP-4-OOR_LIMIT_REACHED : Maximum number of group per interface has been reached 6: Groups joining is throttled.
Config a higher max or take steps to reduce states
Limitations
If a user has configured a maximum of 20 groups and has reached that maximum, no more groups can be created. If the user then reduces the maximum to 10, the existing 20 joins remain and a message that the maximum has been reached is displayed. No more joins can be added until the number of groups drops below 10.
If a user already has 30 joins configured and then configures a maximum of 20, the configuration is accepted and a message that the maximum has been reached is displayed. No state change occurs, and no more joins can occur until the number of groups is brought down below the configured maximum.
Protocol Independent
Multicast
Protocol Independent
Multicast (PIM) is a routing protocol designed to send and receive multicast
routing updates. Proper operation of multicast depends on knowing the unicast
paths towards a source or an RP. PIM relies on unicast routing protocols to
derive this reverse-path forwarding (RPF) information. As the name PIM implies,
it functions independently of the unicast protocols being used. PIM relies on
the Routing Information Base (RIB) for RPF information.
If the
multicast subsequent address family identifier (SAFI) is configured for Border
Gateway Protocol (BGP), or if multicast intact is configured, a separate
multicast unicast RIB is created and populated with the BGP multicast SAFI
routes, the intact information, and any IGP information in the unicast RIB.
Otherwise, PIM gets information directly from the unicast SAFI RIB. Both
multicast unicast and unicast databases are outside of the scope of PIM.
The Cisco IOS XR
implementation of PIM is based on RFC 4601 Protocol Independent Multicast -
Sparse Mode (PIM-SM): Protocol Specification. For more information, see RFC
4601 and the Protocol Independent Multicast (PIM): Motivation and Architecture
Internet Engineering Task Force (IETF) Internet draft.
Note
Cisco IOS XR Software supports PIM-SM, PIM-SSM,
and PIM Version 2 only. PIM Version 1 hello messages that arrive
from neighbors are rejected.
PIM-Sparse Mode
Typically, PIM in sparse mode (PIM-SM) operation is used in a multicast network when relatively few routers are involved in
each multicast. Routers do not forward multicast packets for a group, unless there is an explicit request for traffic. Requests
are accomplished using PIM join messages, which are sent hop by hop toward the root node of the tree. The root node of a tree
in PIM-SM is the rendezvous point (RP) in the case of a shared tree or the first-hop router that is directly connected to
the multicast source in the case of a shortest path tree (SPT). The RP keeps track of multicast groups, and the sources that
send multicast packets are registered with the RP by the first-hop router of the source.
As a PIM join travels up the tree, routers along the path set up the multicast forwarding state so that the requested multicast
traffic is forwarded back down the tree. When multicast traffic is no longer needed, a router sends a PIM prune message up
the tree toward the root node to prune (or remove) the unnecessary traffic. As this PIM prune travels hop by hop up the tree,
each router updates its forwarding state appropriately. Ultimately, the forwarding state associated with a multicast group
or source is removed. Additionally, if prunes are not explicitly sent, the PIM state will timeout and be removed in the absence
of any further join messages.
PIM-SM is the best choice for multicast networks that have potential members at the end of WAN links.
PIM-Source Specific Multicast
In many multicast deployments where the source is known, protocol-independent multicast-source-specific multicast (PIM-SSM)
mapping is the obvious multicast routing protocol choice to use because of its simplicity. Typical multicast deployments that
benefit from PIM-SSM consist of entertainment-type solutions like the ETTH space, or financial deployments that completely
rely on static forwarding.
PIM-SSM is derived from PIM-SM. However, whereas PIM-SM allows for the data transmission of all sources sending to a particular
group in response to PIM join messages, the SSM feature forwards traffic to receivers only from those sources that the receivers
have explicitly joined. Because PIM joins and prunes are sent directly towards the source sending traffic, an RP and shared
trees are unnecessary and are disallowed. SSM is used to optimize bandwidth utilization and deny unwanted Internet broadcast
traffic. The source is provided by interested receivers through IGMPv3 membership reports.
In SSM, delivery of datagrams is based on (S,G) channels. Traffic for one (S,G) channel consists of datagrams with an IP unicast
source address S and the multicast group address G as the IP destination address. Systems receive traffic by becoming members
of the (S,G) channel. Signaling is not required, but receivers must subscribe or unsubscribe to (S,G) channels to receive
or not receive traffic from specific sources. Channel subscription signaling uses IGMP to include mode membership reports,
which are supported only in Version 3 of IGMP (IGMPv3).
To run SSM with IGMPv3, SSM must be supported on the multicast router, the host where the application is running, and the
application itself. Cisco IOS XR Software allows SSM configuration for an arbitrary subset of the IP multicast address range 224.0.0.0 through 239.255.255.255. When
an SSM range is defined, existing IP multicast receiver applications do not receive any traffic when they try to use addresses
in the SSM range, unless the application is modified to use explicit (S,G) channel subscription.
PIM-Bidirectional Mode
PIM BIDIR is a variant of the Protocol Independent Multicast (PIM)
suite of routing protocols for IP multicast. In PIM, packet traffic
for a multicast group is routed according to the rules of the mode
configured for that multicast group.
In bidirectional mode, traffic is only routed along a bidirectional shared tree that is rooted at the rendezvous point (RP)
for the group. In PIM-BIDIR, the IP address of the RP acts as the key to having all routers establish a loop-free spanning
tree topology rooted in that IP address. This IP address does not need to be a router, but can be any unassigned IP address
on a network that is reachable throughout the PIM domain. Using this technique is the preferred configuration for establishing
a redundant RP configuration for PIM-BIDIR.
Note
In Cisco IOS XR Release 4.2.1, Anycast RP is not supported on PIM Bidirectional
mode.
PIM-BIDIR is designed to be used for many-to-many
applications within individual PIM domains. Multicast groups in
bidirectional mode can scale to an arbitrary number of sources
without incurring overhead due to the number of sources. PIM-BIDIR is derived from the mechanisms of PIM-sparse mode (PIM-SM)
and shares many SPT operations. PIM-BIDIR also
has unconditional forwarding of source traffic toward the RP
upstream on the shared tree, but no registering process for sources
as in PIM-SM. These modifications are necessary and sufficient to
allow forwarding of traffic in all routers solely based on the (*,
G) multicast routing entries. This feature eliminates any
source-specific state and allows scaling capability to an arbitrary
number of sources.
The traditional PIM protocols (dense-mode and sparse-mode) provided two models for forwarding multicast packets, source trees
and shared trees. Source trees are rooted at the source of the traffic while shared trees are rooted at the rendezvous point.
Source trees achieve the optimum path between each receiver and the source at the expense of additional routing information:
an (S,G) routing entry per source in the multicast routing table. The shared tree provides a single distribution tree for
all of the active sources. This means that traffic from different sources traverse the same distribution tree to reach the
interested receivers, therefore reducing the amount of routing state in the network. This shared tree needs to be rooted somewhere,
and the location of this root is the rendezvous point. PIM BIDIR uses shared trees as their main forwarding mechanism.
The algorithm to elect the designated forwarder is straightforward: all the PIM neighbors in a subnet advertise their unicast route to the rendezvous point, and the router with the best route is elected. This effectively builds a shortest path between every subnet and the rendezvous point without consuming any multicast routing state (no (S,G) entries are generated). The designated forwarder election mechanism expects all of the PIM neighbors to be BIDIR enabled. If one or more of the neighbors is not a BIDIR-capable router, the election fails and BIDIR is disabled in that subnet.
Configuring PIM Per
Interface States Limit
The PIM Per Interface States Limit sets a limit on the creation of outgoing interface list entries (OLEs) for the PIM interface. When the set limit is reached, the group is not accounted against this interface, but the group can exist in the PIM context for some other interface.
The following configuration sets a limit on the number of routes for which the given interface may be an outgoing interface as a result of receiving a PIM J/P message.
<threshold> is the threshold number of groups at which a syslog warning message is issued.
<acl> provides an option for selective accounting. If provided, only groups or (S,G)s that are permitted by the ACL are accounted against the limit. Groups or (S,G)s that are denied by the ACL are not accounted against the limit. If not provided, all groups are accounted against the limit.
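A sketch of such a configuration is shown below; the limit, threshold, ACL name, and interface are placeholders, and the command form is an assumption that should be verified against the command reference for your release.
router pim
 address-family ipv4
  interface GigabitEthernet0/2/0/0
   ! limit the routes for which this interface can be an outgoing interface
   maximum route-interfaces 5000 threshold 4000 acl-pim-limit
  !
 !
!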
The following messages
are displayed when the threshold limit is reached for PIM:
pim[1157]: %ROUTING-IPV4_PIM-4-CAC_STATE_THRESHOLD : The interface GigabitEthernet0_2_0_0 threshold number (4) allowed states has been reached.
State creation will soon be throttled. Configure a higher state limit value or take steps to reduce the number of states.
pim[1157]: %ROUTING-IPV4_PIM-3-CAC_STATE_LIMIT : The interface GigabitEthernet0_2_0_0 maximum number (5) of allowed states has been reached.
State creation will not be allowed from here on. Configure a higher maximum value or take steps to reduce the number of states
Limitations
If a user has configured a maximum of 20 groups and has reached that maximum, no more groups/OLEs can be created. If the user then decreases the maximum to 10, the existing 20 joins/OLEs remain and a message that the maximum has been reached is displayed. No more joins/OLEs can be added until the number drops below 10.
If a user already has 30 joins/OLEs configured and then configures a maximum of 20, the configuration is accepted and a message that the maximum has been reached is displayed. No states change, but no more joins/OLEs can be added until the number is brought down below the configured maximum.
Local interest joins are added and accounted against the limit even after the limit has been reached.
PIM Shared Tree and Source Tree (Shortest Path Tree)
In PIM-SM, the rendezvous point (RP) is used to bridge sources sending data to a particular group with receivers sending joins
for that group. In the initial setup of state, interested receivers receive data from senders to the group across a single
data distribution tree rooted at the RP. This type of distribution tree is called a shared tree or rendezvous point tree (RPT)
as illustrated in Figure 3. Data from senders is delivered to the RP for distribution to group members joined to the shared tree.
Figure 3. Shared Tree and Source Tree (Shortest Path Tree)
Unless the spt-threshold infinity command is configured, this
initial state gives way as soon as traffic is received on the leaf routers (designated
router closest to the host receivers). When the leaf router receives traffic from the RP on
the RPT, the router initiates a switch to a data distribution tree rooted at the source
sending traffic. This type of distribution tree is called a shortest path
tree or source tree. By default, the Cisco IOS XR Software switches to a source
tree when it receives the first data packet from a source.
The following process describes the move from shared tree to source tree in more detail:
Receiver joins a group; leaf Router C sends a join message toward RP.
RP puts link to Router C in its outgoing interface list.
Source sends data; Router A encapsulates data in Register and sends it to RP.
RP forwards data down the shared tree to Router C and sends a join message toward Source. At this point, data may arrive twice
at the RP, once encapsulated and once natively.
When data arrives natively (unencapsulated) at RP, RP sends a register-stop message to Router A.
By default, receipt of the first data packet prompts Router C to send a join message toward Source.
When Router C receives data on (S,G), it sends a prune message for Source up the shared tree.
RP deletes the link to Router C from outgoing interface of (S,G). RP triggers a prune message toward Source.
Join and prune messages are sent for sources and RPs. They are sent hop by hop and are processed by each PIM router along
the path to the source or RP. Register and register-stop messages are not sent hop by hop. They are exchanged using direct
unicast communication between the designated router that is directly connected to a source and the RP for the group.
Tip
The spt-threshold infinity command lets you configure the
router so that it never switches to the shortest path tree (SPT).
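For example, a minimal sketch that keeps traffic on the shared tree for all groups:
router pim
 address-family ipv4
  ! never switch from the shared tree to the SPT
  spt-threshold infinity
 !
!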
Multicast-Intact
The multicast-intact feature provides the ability to run multicast routing (PIM) when
Interior Gateway Protocol (IGP) shortcuts are configured and active on the router. Both
Open Shortest Path First, version 2 (OSPFv2), and Intermediate System-to-Intermediate
System (IS-IS) support the multicast-intact feature. Multiprotocol Label Switching Traffic
Engineering (MPLS-TE) and IP multicast coexistence is supported in Cisco IOS XR Software by using the
mpls traffic-eng multicast-intact IS-IS or OSPF router
command. See Routing Configuration Guide for Cisco ASR 9000 Series Routers for information on configuring multicast intact using IS-IS and OSPF commands.
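As an illustration only, enabling multicast-intact under an IS-IS instance might look like the sketch below; the instance name is a placeholder, the surrounding MPLS-TE configuration is omitted, and the equivalent OSPF command placement differs, so follow the routing configuration guide referenced above.
router isis 100
 address-family ipv4 unicast
  ! publish mcast-intact next-hops for PIM RPF while IGP shortcuts are in use
  mpls traffic-eng multicast-intact
 !
!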
You can enable multicast-intact in the IGP when multicast routing protocols (PIM) are configured and IGP shortcuts are configured
on the router. IGP shortcuts are MPLS tunnels that are exposed to IGP. The IGPs route the IP traffic over these tunnels to
destinations that are downstream from the egress router of the tunnel (from an SPF perspective). PIM cannot use IGP shortcuts
for propagating PIM joins because reverse path forwarding (RPF) cannot work across a unidirectional tunnel.
When you enable multicast-intact on an IGP, the IGP publishes a parallel or alternate set
of equal-cost next-hops for use by PIM. These next-hops are called mcast-intact
next-hops. The mcast-intact next-hops have the following attributes:
They are guaranteed not to contain any IGP shortcuts.
They are not used for unicast routing but are used only by PIM to look up an IPv4 next hop to a PIM source.
They are not published to the Forwarding Information Base (FIB).
When multicast-intact is enabled on an IGP, all IPv4 destinations that were learned through link-state advertisements are
published with a set equal-cost mcast-intact next-hops to the RIB. This attribute applies even when the native next-hops have
no IGP shortcuts.
In IS-IS, the max-paths limit is applied by counting both the native and mcast-intact next-hops together. (In OSPFv2, the
behavior is slightly different.)
Designated Routers
Cisco routers use PIM-SM to forward multicast traffic and follow an election process to select a designated router (DR) when
there is more than one router on a LAN segment.
The designated router is responsible for sending PIM register and PIM join and prune messages toward the RP to inform it about
host group membership.
If there are multiple PIM-SM routers on a LAN, a designated router must be elected to avoid
duplicating multicast traffic for connected hosts. The PIM router with the highest IP
address becomes the DR for the LAN unless you choose to force the DR election by use of the
dr-priority command. The DR priority option allows you to
specify the DR priority of each router on the LAN segment (default priority = 1) so that
the router with the highest priority is elected as the DR. If all routers on the LAN
segment have the same priority, the highest IP address is again used as the tiebreaker.
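For example, raising the DR priority on one router's LAN interface might look like the following sketch; the interface name and priority value are placeholders.
router pim
 address-family ipv4
  interface GigabitEthernet0/0/0/1
   ! the highest dr-priority wins the DR election; the IP address breaks ties
   dr-priority 4
  !
 !
!
You can confirm which router was elected DR with the show pim neighbor command.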
Figure 4 illustrates what happens on a multiaccess segment. Router A (10.0.0.253) and Router B (10.0.0.251) are connected to a common
multiaccess Ethernet segment with Host A (10.0.0.1) as an active receiver for Group A. As the Explicit Join model is used,
only Router A, operating as the DR, sends joins to the RP to construct the shared tree for Group A. If Router B were also
permitted to send (*, G) joins to the RP, parallel paths would be created and Host A would receive duplicate multicast traffic.
When Host A begins to source multicast traffic to the group, the DR’s responsibility is to send register messages to the RP.
Again, if both routers were assigned the responsibility, the RP would receive duplicate multicast packets.
If the DR fails, the PIM-SM provides a way to detect the failure of Router A and to elect a failover DR. If the DR (Router
A) were to become inoperable, Router B would detect this situation when its neighbor adjacency with Router A timed out. Because
Router B has been hearing IGMP membership reports from Host A, it already has IGMP state for Group A on this interface and
immediately sends a join to the RP when it becomes the new DR. This step reestablishes traffic flow down a new branch of the
shared tree using Router B. Additionally, if Host A were sourcing traffic, Router B would initiate a new register process
immediately after receiving the next multicast packet from Host A. This action would trigger the RP to join the SPT to Host
A, using a new branch through Router B.
Tip
Two PIM routers are neighbors if there is a direct connection between them. To display
your PIM neighbors, use the show pim neighbor command in EXEC mode.
Figure 4. Designated Router Election on a Multiaccess Segment
Note
DR election process is required only on multiaccess LANs. The last-hop router directly connected to the host is the DR.
Rendezvous Points
When PIM is configured in sparse mode, you must choose one or more routers to operate as a rendezvous point (RP). A rendezvous
point is a single common root placed at a chosen point of a shared distribution tree, as illustrated in Figure 1. A rendezvous point can be either configured statically in each box or learned through a dynamic mechanism.
PIM DRs forward data from directly connected multicast sources to the rendezvous point for distribution down the shared tree.
Data is forwarded to the rendezvous point in one of two ways:
Encapsulated in register packets and unicast directly to the rendezvous point by the first-hop router operating as the DR.
Multicast forwarded by the RPF forwarding algorithm, described in the Reverse-Path Forwarding section, if the rendezvous point has itself joined the source tree.
The rendezvous point address is used by first-hop routers to send PIM register messages on behalf of a host sending a packet
to the group. The rendezvous point address is also used by last-hop routers to send PIM join and prune messages to the rendezvous
point to inform it about group membership. You must configure the rendezvous point address on all routers (including the rendezvous
point router).
A PIM router can be a rendezvous point for more than one group. Only one rendezvous point address can be used at a time within
a PIM domain. The conditions specified by the access list determine for which groups the router is a rendezvous point.
You can either manually configure a PIM router to function as a rendezvous point or allow the rendezvous point to learn group-to-RP
mappings automatically by configuring Auto-RP or BSR. (For more information, see the Auto-RP section that follows and PIM Bootstrap Router.)
Auto-RP
Automatic route
processing (Auto-RP) is a feature that automates the distribution of
group-to-RP mappings in a PIM network. This feature has these benefits:
It is easy to use
multiple RPs within a network to serve different group ranges.
It allows load
splitting among different RPs.
It facilitates the
arrangement of RPs according to the location of group participants.
It avoids
inconsistent, manual RP configurations that might cause connectivity problems.
Multiple RPs can be
used to serve different group ranges or to serve as hot backups for each other.
To ensure that Auto-RP functions, configure routers as candidate RPs so that
they can announce their interest in operating as an RP for certain group
ranges. Additionally, a router must be designated as an RP-mapping agent that
receives the RP-announcement messages from the candidate RPs, and arbitrates
conflicts. The RP-mapping agent sends the consistent group-to-RP mappings to
all remaining routers. Thus, all routers automatically determine which RP to
use for the groups they support.
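A sketch of an Auto-RP candidate RP and mapping agent is shown below; the interface, scope (TTL), and interval values are placeholders, optional keywords such as a group-list ACL are omitted, and the syntax should be verified for your release.
router pim
 address-family ipv4
  ! announce this router as a candidate RP
  auto-rp candidate-rp Loopback0 scope 31 interval 60
  ! designate this router as the RP-mapping agent
  auto-rp mapping-agent Loopback0 scope 31 interval 60
 !
!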
Tip
By default, if a given group address is covered by group-to-RP mappings from both a static RP configuration and a dynamic mechanism (Auto-RP or PIM BSR), the Auto-RP or PIM BSR mapping is preferred. To override the default and use only the static RP mapping, use the override keyword of the rp-address command.
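For instance, a static RP that takes precedence over dynamically learned mappings for the groups permitted by a hypothetical access list could be sketched as follows; the RP address and ACL name are placeholders.
router pim
 address-family ipv4
  ! static RP for the groups in static-rp-groups; override beats Auto-RP/BSR mappings
  rp-address 10.10.10.1 static-rp-groups override
 !
!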
Note
If you configure PIM
in sparse mode and do not configure Auto-RP, you must statically configure an
RP as described in the
Configuring a Static RP and Allowing Backward Compatibility. When router interfaces are
configured in sparse mode, Auto-RP can still be used if all routers are
configured with a static RP address for the Auto-RP groups.
Note
Auto-RP is not
supported on VRF interfaces. Auto-RP Lite allows you to configure auto-RP on
the CE router. It allows the PE router that has the VRF interface to relay
auto-RP discovery, and announce messages across the core and eventually to the
remote CE. Auto-RP is supported in only the IPv4 address family.
PIM Bootstrap Router
The PIM bootstrap router (BSR) provides a fault-tolerant, automated RP discovery and distribution mechanism that simplifies
the Auto-RP process. This feature is enabled by default allowing routers to dynamically learn the group-to-RP mappings.
PIM uses the BSR to discover and announce RP-set information for each group prefix to all the routers in a PIM domain. This
is the same function accomplished by Auto-RP, but the BSR is part of the PIM Version 2 specification. The BSR mechanism interoperates
with Auto-RP on Cisco routers.
To avoid a single point of failure, you can configure several candidate BSRs in a PIM domain. A BSR is elected among the candidate
BSRs automatically. Candidates use bootstrap messages to discover which BSR has the highest priority. The candidate with the
highest priority sends an announcement to all PIM routers in the PIM domain that it is the BSR.
Routers that are configured as candidate RPs unicast to the BSR the group range for which they are responsible. The BSR includes
this information in its bootstrap messages and disseminates it to all PIM routers in the domain. Based on this information,
all routers are able to map multicast groups to specific RPs. As long as a router is receiving the bootstrap message, it has
a current RP map.
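A sketch of a candidate BSR and candidate RP configuration is shown below; the addresses, priority values, hash mask length, and ACL name are placeholders to be adapted and verified against the command reference for your release.
router pim
 address-family ipv4
  ! offer this router as a candidate bootstrap router
  bsr candidate-bsr 10.0.0.1 hash-mask-len 30 priority 1
  ! advertise this router to the BSR as a candidate RP for the groups in bsr-groups
  bsr candidate-rp 10.0.0.1 group-list bsr-groups priority 192 interval 60
 !
!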
Reverse-Path Forwarding
Reverse-path forwarding (RPF) is an algorithm used for forwarding multicast datagrams. It functions as follows:
If a router receives a datagram on an interface it uses to send unicast packets to the source, the packet has arrived on the
RPF interface.
If the packet arrives on the RPF interface, a router forwards the packet out the interfaces present in the outgoing interface
list of a multicast routing table entry.
If the packet does not arrive on the RPF interface, the packet is silently discarded to prevent loops.
PIM uses both source trees and RP-rooted shared trees to forward datagrams; the RPF check is performed differently for each,
as follows:
If a PIM router has an (S,G) entry present in the multicast routing table (a source-tree state), the router performs the RPF
check against the IP address of the source for the multicast packet.
If a PIM router has no explicit source-tree state, this is considered a shared-tree state. The router performs the RPF check
on the address of the RP, which is known when members join the group.
Sparse-mode PIM uses the RPF lookup function to determine where it needs to send joins and prunes. (S,G) joins (which are
source-tree states) are sent toward the source. (*,G) joins (which are shared-tree states) are sent toward the RP.
Multicast Non-Stop
Routing
Multicast Non-Stop Routing (NSR)
enables the router to synchronize the multicast routing tables on both the
active and standby RSPs so that during an HA scenario like an RSP failover
there is no loss of multicast data. Multicast NSR is enabled through the
multicast processes being hot standby. Multicast NSR supports both Zero Packet
Loss (ZPL) and Zero Topology Loss (ZTL). With Multicast NSR, there is less CPU
churn and no multicast session flaps during a failover event.
Multicast NSR is
enabled by default, however, if any unsupported features like BNG or Snooping
are configured, Multicast performs Non-Stop Forwarding (NSF) functionality
during failover events. When Multicast NSR is enabled, multicast routing state
is synchronized between the active and standby RSPs. Once the synchronization
occurs, each of the multicast processes signal the NSR readiness to the system.
For the multicast processes to support NSR, the processes must be hot standby
compliant. That is, the processes on active and standby RSPs both have to be in
synchronization at all times. The active RSP receives packets from the network and makes local decisions, while the standby RSP receives packets from the network and synchronizes with the active RSP for all the local decisions. Once the state is determined, a check is performed to verify that the states are synchronized. If the states are synchronized, an NSR_READY signal is conveyed to the NSR system.
With NSR, in the case
of a failover event, routing changes are updated to the forwarding plane
immediately. With NSF, there is an NSF hold time delay before routing changes
can be updated.
Non-Supported
Features
The following
features are unsupported on NG NSR:
IGMP and MLD
Snooping
BNG
Failure Scenarios in
NSR
If a switchover occurs before all multicast processes issue an NSR_READY signal, the system reverts to the existing NSF behavior. Also, on receiving the GO_ACTIVE signal from the multicast processes, the following events occur in processes that have not signaled NSR_READY:
IGMP starts the
NSF timer for one minute.
PIM starts the
NSF timer for two minutes.
MSDP resets all
peer sessions that are not synchronized.
Multicast
VPN
Multicast VPN (MVPN)
provides the ability to dynamically provide multicast support over MPLS
networks. MVPN introduces an additional set of protocols and procedures that
help enable a provider to support multicast traffic in a VPN.
Note
PIM-Bidir is not supported on MVPN.
There are two ways MCAST VPN traffic can be transported over the core network:
Rosen GRE (native): MVPN uses GRE with unique multicast distribution tree (MDT) forwarding to enable scalability of native
IP Multicast in the core network. MVPN introduces multicast routing information to the VPN routing and forwarding table (VRF),
creating a Multicast VRF. In Rosen GRE, the MCAST customer packets (c-packets) are encapsulated into the provider MCAST packets
(p-packets), so that the PIM protocol is enabled in the provider core, and mrib/mfib is used for forwarding p-packets in the
core.
MLDP (Rosen MLDP and partitioned MDT): MVPN allows a service provider to configure and support multicast traffic in an MPLS VPN environment. This type supports routing and forwarding of multicast packets for each individual VPN routing and forwarding (VRF) instance, and it also provides a mechanism to transport VPN multicast packets across the service provider backbone. In the MLDP case, regular label switched path forwarding is used, so the core does not need to run the PIM protocol. In this scenario, the c-packets are encapsulated in MPLS labels and forwarding is based on the MPLS Label Switched Paths (LSPs), similar to the unicast case.
In both the above
types, the MVPN service allows you to build a Protocol Independent Multicast
(PIM) domain that has sources and receivers located in different sites.
To provide Layer 3
multicast services to customers with multiple distributed sites, service
providers look for a secure and scalable mechanism to transmit customer
multicast traffic across the provider network. Multicast VPN (MVPN) provides
such services over a shared service provider backbone, using native multicast
technology similar to BGP/MPLS VPN.
In addition to all the Ethernet-based line cards, Multicast VPN is also supported on the Cisco ASR 9000 Series SPA Interface Processor-700 card from Cisco IOS XR Software Release 4.0 onwards. The Cisco ASR 9000 Series SPA Interface Processor-700 enables the Cisco ASR 9000 Series Routers to support multiple legacy services (such as TDM and ATM) on a router that is primarily designed for Ethernet networks. The Cisco ASR 9000 Series SPA Interface Processor-700 is QFP-based and therefore has the flexibility and service scale offered by Cisco ASICs and the reliability of Cisco IOS XR Software.
MVPN emulates MPLS VPN
technology in its adoption of the multicast domain (MD) concept, in which
provider edge (PE) routers establish virtual PIM neighbor connections with
other PE routers that are connected to the same customer VPN. These PE routers
thereby form a secure, virtual multicast domain over the provider network.
Multicast traffic is then transmitted across the core network from one site to
another, as if the traffic were going through a dedicated provider network.
Multi-instance BGP is
supported on multicast and MVPN. Multicast-related SAFIs can be configured on
multiple BGP instances.
Multicast VPN Routing and Forwarding
Dedicated multicast routing and forwarding tables are created for each VPN to separate traffic in one VPN from traffic in
another.
The VPN-specific multicast routing and forwarding database is referred to as
MVRF. On a PE router, an MVRF is created when multicast is
enabled for a VRF. Protocol Independent Multicast (PIM), and Internet Group Management
Protocol (IGMP) protocols run in the context of MVRF, and all routes created by an MVRF
protocol instance are associated with the corresponding MVRF. In addition to VRFs, which
hold VPN-specific protocol states, a PE router always has a global VRF instance, containing
all routing and forwarding information for the provider network.
Multicast Distribution Tree Tunnels
The multicast distribution tree (MDT) can span multiple customer sites through provider networks, allowing traffic to flow from one source to multiple receivers. For MLDP, the MDT tunnels are called Labeled MDTs (LMDTs).
Secure data transmission of multicast packets sent from the customer edge (CE) router at the ingress PE router is achieved
by encapsulating the packets in a provider header and transmitting the packets across the core. At the egress PE router, the
encapsulated packets are decapsulated and then sent to the CE receiving routers.
Multicast distribution tree (MDT) tunnels are point-to-multipoint. An MDT tunnel interface is an interface that the MVRF uses to access the multicast domain. It can be thought of as a passage that connects an MVRF and the global MVRF. Packets sent to an MDT tunnel interface are received by multiple receiving routers. Packets sent to an MDT tunnel interface are encapsulated, and packets received from an MDT tunnel interface are decapsulated.
Figure 5. Virtual PIM Peer Connection over an MDT Tunnel Interface
Encapsulating multicast packets in a provider header allows PE routers to be kept unaware of the packets’ origin—all VPN packets
passing through the provider network are viewed as native multicast packets and are routed based on the routing information
in the core network. To support MVPN, PE routers only need to support native multicast routing.
MVPN also supports optimized VPN traffic forwarding for high-bandwidth applications that have sparsely distributed receivers. A dedicated multicast group can be used to encapsulate packets from a specific source, and an optimized MDT can be created to send traffic only to PE routers connected to interested receivers. This is referred to as a data MDT.
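A sketch of a default MDT with an optional data MDT range for a VRF is shown below; the VRF name, MDT source interface, group addresses, and threshold are placeholders, and the exact options should be checked against the command reference for your release.
multicast-routing
 address-family ipv4
  ! source address used to build the provider MDT tunnels
  mdt source Loopback0
  interface all enable
 !
 vrf VPN_A
  address-family ipv4
   ! shared default MDT group for this VRF
   mdt default ipv4 232.100.0.1
   ! data MDT range; high-rate (S,G) traffic is moved onto a dedicated MDT
   mdt data 232.100.1.0/24 threshold 1200
   interface all enable
  !
 !
!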
InterAS Support on
Multicast VPN
The Multicast VPN
Inter-AS Support feature enables service providers to provide multicast
connectivity to VPN sites that span across multiple autonomous systems. This
feature was added to MLDP profile that enables Multicast Distribution Trees
(MDTs), used for Multicast VPNs (MVPNs), to span multiple autonomous systems.
There are two types of MVPN inter-AS deployment scenarios:
Single-Provider Inter-AS—A service provider whose internal network consists of multiple autonomous systems.
Multi-Provider Inter-AS—Multiple service providers that need to coordinate their networks to provide inter-AS support.
To establish a Multicast VPN between two autonomous systems, an MDT-default tunnel must be set up between the two PE routers. The PE routers accomplish this by joining the configured MDT-default group. This MDT-default group is configured on the PE router and is unique for each VPN. PIM sends the join based on the mode of the groups, which can be PIM SSM or sparse mode.
Note
PIM-Bidir is not supported on MVPN.
Benefits of MVPN
Inter-AS Support
The MVPN Inter-AS
Support feature provides these benefits to service providers:
Increased
multicast coverage to customers that require multicast to span multiple
services providers in an MPLS Layer 3 VPN service.
The ability to
consolidate an existing MVPN service with another MVPN service, as in the case
of a company merger or acquisition.
InterAS Option
A
InterAS Option A is the basic Multicast VPN configuration option. In this option, the PE router partially plays the Autonomous System Border Router (ASBR) role in each Autonomous System (AS). Such PE routers in each AS are directly connected through multiple VRF-bearing subinterfaces. An MPLS label distribution protocol need not run between these InterAS peering PE routers. However, an IGP or BGP protocol can be used for route distribution under the VRF.
The Option A model
assumes direct connectivity between PE routers of different autonomous systems.
The PE routers are attached by multiple physical or logical interfaces, each of
which is associated with a given VPN (through a VRF instance). Each PE router,
therefore, treats the adjacent PE router like a customer edge (CE) router. The
standard Layer 3 MPLS VPN mechanisms are used for route redistribution with
each autonomous system; that is, the PEs use exterior BGP (eBGP) to distribute
unlabeled IPv4 addresses to each other.
Note
Option A allows
service providers to isolate each autonomous system from the other. This
provides better control over routing exchanges and security between the two
networks. However, Option A is considered the least scalable of all the
inter-AS connectivity options.
InterAS Option
B
InterAS Option B is
a model that enables VPNv4 route exchanges between the ASBRs. This model also
distributes BGP MVPN address family. In this model, the PE routers use internal
BGP (iBGP) to redistribute labeled VPNv4 routes either to an ASBR or to route
reflector of which an ASBR is a client. These ASBRs use multiprotocol eBGP
(MP-eBGP) to advertise VPNv4 routes into the local autonomous systems. The
MP-eBGP advertises VPNv4 prefix and label information across the service
provider boundaries. The advertising ASBR router replaces the two-level label
stack, which it uses to reach the originating PE router and VPN destination in
the local autonomous system, with a locally allocated label before advertising
the VPNv4 route. This replacement happens because the next-hop attribute of all
routes advertised between the two service providers is reset to the ASBR
router's peering address, thus making the ASBR router the termination point of
the label-switched path (LSP) for the advertised routes. To preserve the LSP
between ingress and egress PE routers, the ASBR router allocates a local label
that is used to identify the label stack of the route within the local VPN
network. This newly allocated label is set on packets sent towards the prefix
from the adjacent service provider.
Note
Option B enables
service providers to isolate both autonomous systems with the added advantage
that it scales to a higher degree than Option A.
In the InterAS
Option B model, only BGP-AD profiles are supported:
MLDP MS-PMSI
MP2MP with BGP-AD (profile 4)
Rosen GRE with
or without BGP-AD (profile 9)
Note
Profile 9 is
only supported with leaking root address into IGP.
Note
MLDP MS-PMSI MP2MP
with BGP-AD (profile 5) is not supported.
InterAS Option
C
InterAS Option C allows exchange of VPNv4 routes between route reflectors (RRs) using multihop eBGP peering sessions. In this model, the MP-eBGP exchange of VPNv4 routes between the RRs of different autonomous systems is combined with the exchange of the next hops for these routes between the corresponding ASBR routers. This model also distributes the BGP MVPN address family along with VPNv4. In this model, the VPNv4 routes are neither maintained nor distributed by the ASBRs. The ASBRs maintain labeled IPv4 routes to the PE routers within their autonomous system and use eBGP to distribute these routes to other autonomous systems. In any transit autonomous systems, the ASBRs use eBGP to pass along the labeled IPv4 routes, resulting in the creation of an LSP from the ingress PE router to the egress PE router.
The Option C model uses the multihop functionality to allow the establishment of MP-eBGP peering sessions, because the RRs of different autonomous systems are not directly connected. The RRs also do not reset the next-hop attribute of the VPNv4 routes when advertising them to adjacent autonomous systems, because they do not attract the traffic for the destinations that they advertise; this makes it mandatory to enable the exchange of next hops. The RRs are just relay stations between the source and receiver PEs. The PE router next-hop addresses for the VPNv4 routes, thus, are exchanged between ASBR routers. The exchange of these addresses between autonomous systems is accomplished by redistributing the PE router /32 addresses between the autonomous systems or by using BGP label distribution.
Note
Option C normally
is deployed only when each autonomous system belongs to the same overall
authority, such as a global Layer 3 MPLS VPN service provider with global
autonomous systems.
In the InterAS
Option C model, these profiles are supported:
Rosen MLDP
without BGP-AD (profile 1)
MLDP MS-PMSI
MP2MP with BGP-AD (profile 4)
MLDP MS-PMSI
MP2MP with BGP-AD (profile 5)
MLDP VRF in-band
signaling (profile 6)
Rosen GRE with
BGP-AD (profile 9)
IPv6 Connectivity
over MVPN
On the Cisco ASR 9000
Series Routers, in
Cisco IOS XR Software starting
Release 4.2.1, IPv6 connectivity is supported
between customer sites over an IPv4-only core network with a default VRF. VPN
PE routers interoperate between the two address families, with control and
forwarding actions between IPv4-encapsulated MDTs and IPv6 customer routes.
IPv6 users can configure IPv6-over-IPv4 multicast VPN support through BGP.
In Cisco IOS XR Software, MVPNv6 can have a separate data MDT group configured, which can be different from MVPNv4. However, both MVPNv6 and MVPNv4 must have the same default MDT group configured.
The sketch below illustrates an MVPNv6 data MDT alongside the MVPNv4 configuration; the VRF name and group addresses are placeholders, and the syntax should be verified for your release:
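multicast-routing
 vrf VPN_A
  address-family ipv4
   ! same default MDT group as the IPv6 address family
   mdt default ipv4 232.100.0.1
   mdt data 232.100.1.0/24
  !
  address-family ipv6
   ! same default MDT group, separate data MDT range for MVPNv6
   mdt default ipv4 232.100.0.1
   mdt data 232.100.2.0/24
  !
 !
!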
PE routers are the only routers that need to be MVPN-aware and able to signal remote PEs with information regarding the MVPN.
It is fundamental that all PE routers have a BGP relationship with each other, either directly or through a route reflector,
because the PE routers use the BGP peering address information to derive the RPF PE peer within a given VRF.
PIM-SSM MDT tunnels cannot be set up without a configured BGP MDT address-family, because the tunnels are established using the BGP connector attribute.
See the Implementing BGP on Cisco IOS XR Software module of the Routing Configuration Guide for Cisco ASR 9000 Series Routers for information on BGP support for Multicast VPN.
Multicast and MVPNv4 over v4GRE Interfaces
Different types of networks rely on third-party network security to attain a secure IP multicast service, which encrypts and decrypts IP unicast traffic across an untrusted core network through point-to-point tunnels. Therefore, the customer multicast traffic must be delivered as unicast traffic, with encryption, across the untrusted core network. This is achieved by using generic routing encapsulation (GRE) tunneling to deliver multicast traffic as unicast through tunnel interfaces. Both Multicast and MVPN-v4 over GRE are supported.
Multicast over v4-GRE Interfaces: Customer networks which are transporting Native IP Multicast across un-trusted core via
IPv4 unicast GRE tunnels and encryption.
MVPN-v4 over GRE Interfaces: Customer networks which are transporting L3VPN multicast services (mVPN-GRE) across an un-trusted
core via IPv4 unicast GRE tunnels and encryption.
Note
IPv6 Multicast and MVPNv6 over GRE are not supported.
Multicast interface features for GRE tunnels are applied when the inner packet is forwarded through the multicast forwarding chain. However, the unicast interface features for the GRE underlying interface are applied when the outer transport packet is forwarded through the unicast forwarding chain.
Thus, multicast interface features such as boundary ACL and TTL threshold are applicable and supported for unicast GRE tunnels, just as for other multicast main or subinterfaces. However, QoS for a unicast GRE tunnel is applied at its underlying physical interface instead of on the tunnel interface itself.
After the unicast routing protocol is set up, unicast GRE tunnels are treated as interfaces, similar to a main interface or subinterface. Unicast GRE tunnels can participate in multicast routing when they are added to multicast routing protocols as multicast-enabled interfaces. Unicast GRE tunnels can also be used as the accepting or forwarding interfaces of a multicast route.
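The following is a minimal sketch of adding a unicast GRE tunnel to multicast routing; the tunnel number, addresses, and interface names are illustrative assumptions:
interface tunnel-ip1
 ipv4 address 10.1.1.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 192.0.2.2
!
multicast-routing
 address-family ipv4
  interface tunnel-ip1
   enable
!
router pim
 address-family ipv4
  interface tunnel-ip1
   enable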
Concatenation of Unicast GRE Tunnels for Multicast Traffic
Concatenation of unicast GRE tunnels refers to connecting trusted network islands by terminating one unicast GRE tunnel and relaying multicast forwarding to an olist that includes other unicast GRE tunnels.
TTL Threshold
GRE provides a workaround for networks containing protocols that have limited hop counts.
Multicast traffic of mVPN-GRE from the encapsulation provider edge (PE) router to the decapsulation PE router is considered one hop, and the customer packet TTL is decremented by one, irrespective of the number of mid-point P routers between these PE routers.
The TTL on the GRE transport header is derived from the configuration of the GRE tunnel interface and is decremented as traffic travels from the encapsulation PE to the decapsulation PE router via P routers. However, for concatenated unicast GRE tunnels, the TTL on the GRE transport header is reset when the router terminates one unicast GRE tunnel and forwards the multicast packet to another unicast GRE tunnel.
Note
GRE keepalive messages are generated at a frequency of 1 pps. The static policer rate on a line card remains 1000 pps to accommodate a maximum of 500 unicast GRE tunnels. The GRE key is not supported.
Segmented Multicast
- Overview
IOS XR supports NextGen (NG) Multicast VPNs with the BGP (Border Gateway Protocol) MVPN SAFI (Subsequent Address Family Identifier). NextGen MVPN defines a set of auto-discovery and C-multicast route types that support different MVPN features. The set of standards that extends MVPN-SAFI for Global Table Multicast (GTM) and supports MVPN in the presence of segmented cores is called Segmented Multicast.
In Segmented Core
MVPN, the Layer-3 VPN core or the GTM core is divided into multiple Segments.
These segments can be multiple OSPF areas, Intermediate System - Intermediate
System (IS-IS) levels, or multiple IGP instances, within an Autonomous System
(AS) or across multiple ASes. Multicast core-trees are generally built from
Ingress-PE to Egress-PE. With Segmented cores, separate multicast trees are
present in each segment, and the border routers stitch the multicast trees
between segments.
Border router refers to an
Area Border Router (ABR) or an Autonomous System Border Router (ASBR). In
certain cases, the routers are the aggregation routers, connected to two
segments in a network. These border routers are attached to two Intra-AS
segments. They may also be connected to ASBRs in other ASes at the same time.
To support Segmented
Core, BGP has to be enabled between the Provider Edge (PE) and the Border
Routers, and between the Border routers as well. BGP sessions are used to
exchange Unicast routing information (for sources, RPs, and so on) and MVPN
SAFI. Unicast routing information is exchanged with either Multicast SAFI
(SAFI-2) or Unicast SAFI (SAFI-1) protocols.
The Segmented Core
procedures change the way BGP A-D routes are sent between PEs. The C-multicast
Routes (Types 6 and 7) are unaffected and only required on the PEs. An
additional support is provided to facilitate the split behavior, where Types 1,
3, 4, 5 are to be sent from PEs to Border routers, while Types 6 and 7 are sent
to a Service RR. The Service RR only peers with the PEs. This is achieved by
adding the Inter-Area Segmented NH EC (SNH-EC) to the A-D routes alone and
having a BGP policy to announce or block MVPN SAFI routes with and without
SNH-ECs. Segmented Multicast and MVPNs are supported for LSM trees only.
Segmented Multicast
- Examples
Segmented Core Single AS
In the following figure, a single AS runs OSPF in the core. The core
has a Backbone Area (0) and non-zero Areas for each site. A path from an
Ingress PE to an Egress PE traverses through an Ingress non-zero Area (iPE to
iABRs), Area 0 (iABR to eABRs), and the Egress non-zero Area (eABR to ePEs). In
the following figure, only the PEs and the border routers (ABRs) are used,
however, there can be multiple P-routers between the PE and ABR or between
ABRs.
With Segmented Multicast, when there is a need to build an I-PMSI or
S-PMSI tunnel between iPE and ePE, the tunnel is built with multiple
core-trees. The iPE builds a core-tree to iABRs, and later the iABR builds a
separate core-tree to the set of eABRs. The iABR then stitches the two
core-trees, such that packets arriving on the non-zero Area core-tree will be
forwarded out of the Area-0 core-tree. The Egress ABR (eABR) also performs a
similar stitching between two core-trees to forward traffic to the Egress PEs
(ePEs).
The example shows OSPF areas, however, the same concept is supported
on IS-IS IGP as well.
Multiple ASes
The following example shows the case of segments spanning across
multiple ASes. In most of the cases, the core tree on the DMZ link is Ingress
Replication.
Segmented Multicast
Stitching with inter-AS Solution
The segmented multicast stitching with inter-AS solution ensures that, at the ABR or ASBR, an incoming core type is switched to the same or a different core type. The iABR is the tail for the P2MP/mLDP tree of the ingress non-zero area; however, it can be a bud for the P2MP/mLDP tree on the ingress ASBR. The ingress LC decapsulates the incoming packet, and the encapsulation ID model is used to encapsulate the packet with the egress core type. When the egress core type is P2MP, the incoming label is replaced with the head local label of the outgoing P2MP core.
In the case where
there are label receivers for the same core, the
ingress LC creates two copies - one for the bud case and the other for
encapsulation of the next core. The impact of sending two copies to the fabric
will be similar to that of other existing implementations such as IRB and
PBB-EVPN.
Working of
Segmented Multicast Stitching
The working of the
Segmented Multicast stitching is explained in the following steps:
iABR is the
tail for the P2MP/MLDP tree of the ingress non-zero area. Similarly, eABR is
the tail for the P2MP/MLDP tree of the zero-area core. At the iABR tail node's
ingress LC, the encapsulation ID of the core zero area tree is downloaded.
The incoming label lookup on the tail node indicates that this is a stitching case; the node decapsulates the packet, picks up the tunnel label/encapsulation ID of the next segment, and initiates forwarding onto the next core type.
In the case of a bud scenario, the ingress LC at the ABR creates two copies when the incoming label indicates the need for stitching. One copy is used for stitching, and the other copy for regular bud node forwarding. For the bud scenario, two sets of (FGID, MGID) are required:
one for the stitching
the other for regular bud node forwarding
Control
packets: PIM packets are exchanged between iPE and ePE. BGP packets are
exchanged between iPE/iABR, iABR/eABR, and eABR/ePE.
OAM packets:
OAM packets are placed on the incoming core and stitched across to the next
core.
Configuring
Segmented Multicast Stitching
You must configure segmented color on the PEs and optionally
segment-border route policy on the ABRs/ASBRs for Segmented Multicast. When the
segment-border router policy is not configured, the downstream core inherits
the core type of the upstream core.
Configuration on a
PE Router
SUMMARY STEPS
configure
multicast-routing
vrf<vrf-name>
address-family ipv4
bgp auto-discovery mldp
segmented colorcolor
commit
DETAILED STEPS
Command or Action
Purpose
Step 1
configure
Step 2
multicast-routing
vrf<vrf-name>
Example:
RP/0/RSP0/CPU0:router(config)# multicast-routing vrf red
Enters multicast routing configuration mode for the specified VRF. If a VRF name is not specified, the default VRF is used.
RP/0/RSP0/CPU0:router(config-mcast-default-ipv4)# mdt segment-border route-policy blue
Enables segmented multicast on the border router for the specified
route policy.
Step 5
commit
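A consolidated sketch of the two configurations described above; the VRF name red, color value 10, and route-policy name blue are illustrative:
On the PE router:
multicast-routing
 vrf red
  address-family ipv4
   bgp auto-discovery mldp
   segmented color 10
On the border router (ABR/ASBR):
multicast-routing
 address-family ipv4
  mdt segment-border route-policy blue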
MVPN Static P2MP TE
This feature describes the Multicast VPN (MVPN) support for Multicast over Point-to-Multipoint -Traffic Engineering (P2MP-TE).
Currently, Cisco IOS-XR Software supports P2MP-TE only in the Global table and the (S,G) route in the global table can be
mapped to P2MP-TE tunnels. However, this feature now enables service providers to use P2MP-TE tunnels to carry VRF multicast
traffic. Static mapping is used to map VRF (S, G) traffic to P2MP-TE tunnels, and BGP-AD is used to send P2MP BGP opaque that
includes VRF-based P2MP FEC as MDT Selective Provider Multicast Service Interface (S-PMSI).
The advantages of the MVPN support for Multicast over P2MP-TE are:
Supports traffic engineering such as bandwidth reservation, bandwidth sharing, forwarding replication, explicit routing,
and Fast ReRoute (FRR).
Supports the mapping of multiple multicast streams onto tunnels.
Figure 6. Multicast VRF
On PE1 router, multicast S,G (video) traffic is received on a VRF interface. The multicast S,G routes are statically mapped
to P2MP-TE tunnels. The head-end then originates an S-PMSI (Type-3) BGP-AD route, for each of the S,Gs, with a PMSI Tunnel
Attribute (PTA) specifying the P2MP-TE tunnel as the core-tree. The type of the PTA is set to RSVP-TE P2MP LSP and the format
of the PTA Tunnel-identifier <Extended Tunnel ID, Reserved, Tunnel ID, P2MP ID>, as carried in the RSVP-TE P2MP LSP SESSION
Object. Multiple S,G A-D routes can have the same PMSI Tunnel Attribute.
The tail-end PEs (PE2, PE3) receive and cache these S-PMSI updates (sent by all head-end PEs). If there is an S,G Join present
in the VRF, with the Upstream Multicast Hop (UMH) across the core, then the PE looks for an S-PMSI announcement from the UMH.
If an S-PMSI route is found with a P2MP-TE PTA, then the PE associates the tail label(s) of the Tunnel, with that VRF. When
a packet arrives on the P2MP-TE tunnel, the tail-end removes the label and does an S,G lookup in the 'associated' VRF. If
a match is found, the packet is forwarded as per its outgoing information.
Multitopology
Routing
Multitopology routing allows you to steer network traffic, when desirable (for example, to broadcast duplicate video streams), over non-overlapping paths.
At the core of
multitopology routing technology is router space infrastructure (RSI). RSI
manages the global configuration of routing tables. These tables are
hierarchically organized into VRF tables under logical routers. By default, RSI
creates tables for unicast and multicast for both IPv4 and IPv6 under the
default VRF. Using multitopology routing, you can configure named topologies
for the default VRF.
PIM uses a routing
policy that supports matching on source or group address to select the topology
in which to look up the reverse-path forwarding (RPF) path to the source. If
you do not configure a policy, the existing behavior (to select a default
table) remains in force.
Currently, only the IS-IS and PIM routing protocols support multitopology-enabled networks.
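A minimal sketch of selecting a multicast RPF topology with a routing policy; the policy name, group range, and topology name are illustrative, and the match and set syntax is assumed from the IOS XR routing policy language:
route-policy mt-mcast-rpf
  if destination in (232.1.1.0/24 le 32) then
    set rpf-topology ipv4 multicast topology VIDEO
  endif
  pass
end-policy
!
router pim
 address-family ipv4
  rpf topology route-policy mt-mcast-rpf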
Multicast VPN Extranet Routing
Multicast VPN (MVPN) extranet routing lets service providers distribute IP multicast content from one enterprise site to another
across a multicast VRF. In other words, this feature provides capability to seamlessly hop VRF boundaries to distribute multicast
content end to end.
Unicast extranet can be achieved simply by configuring matching route targets across VRFs. However, multicast extranet requires
such configuration to resolve route lookups across VRFs in addition to the following:
Maintain multicast topology maps across VRFs.
Maintain multicast distribution trees to forward traffic across VRFs.
Information About
Extranets
An extranet can be
viewed as part of an enterprise intranet that is extended to users outside the
enterprise. A VPN is used as a way to do business with other enterprises and
with customers, such as selling products and maintaining strong business
partnerships. An extranet is a VPN that connects to one or more corporate sites
to external business partners or suppliers to securely share a designated part
of the enterprise’s business information or operations.
MVPN extranet routing
can be used to solve such business problems as:
Inefficient
content distribution between enterprises.
Inefficient
content distribution from service providers or content providers to their
enterprise VPN customers.
MVPN extranet routing
provides support for IPv4 and IPv6 address family.
An extranet network
requires the PE routers to pass traffic across VRFs (labeled “P” in
Figure 1).
Extranet networks can run either IPv4 or IPv6, but the core network always runs
only IPv4 active multicast.
Note
Multicast extranet routing is not supported on BVI interfaces.
Extranet
Components
Figure 7. Components of
an Extranet MVPN
MVRF—Multicast VPN
routing and forwarding (VRF) instance. An MVRF is a multicast-enabled VRF. A
VRF consists of an IP routing table, a derived forwarding table, a set of
interfaces that use the forwarding table, and a set of rules and routing
protocols that determine what goes into the forwarding table. In general, a VRF
includes the routing information that defines a customer VPN site that is
attached to a provider edge (PE) router.
Source MVRF—An MVRF
that can reach the source through a directly connected customer edge (CE)
router.
Receiver MVRF—An
MVRF to which receivers are connected through one or more CE devices.
Source PE—A PE
router that has a multicast source behind a directly connected CE router.
Receiver PE—A PE
router that has one or more interested receivers behind a directly connected CE
router.
Information About the Extranet MVPN Routing Topology
In unicast routing of peer-to-peer VPNs, BGP routing protocol is used to advertise VPN IPv4 and IPv6 customer routes between
provider edge (PE) routers. However, in an MVPN extranet peer-to-peer network, PIM RPF is used to determine whether the RPF
next hop is in the same or a different VRF and whether that source VRF is local or remote to the PE.
Source MVRF on a Receiver PE Router
To provide extranet MVPN services to enterprise VPN customers by configuring a source MVRF on a receiver PE router, you would
complete the following procedure:
On a receiver PE router that has one or more interested receivers in an extranet site behind a directly connected CE router,
configure an MVRF that has the same default MDT group as the site connected to the multicast source.
On the receiver PE router, configure the same unicast routing policy to import routes from the source MVRF to the receiver
MVRF.
If the originating MVRF of the RPF next hop is local (source MVRF at receiver PE
router), the join state of the receiver VRFs propagates over the core by using the
default multicast distribution tree (MDT) of the source VRF. Figure 1 illustrates the flow of
multicast traffic in an extranet MVPN topology where the source MVRF is configured on a
receiver PE router (source at receiver MVRF topology). An MVRF is configured for VPN-A
and VPN-B on PE2, a receiver PE router. A multicast source behind PE1, the source PE
router, is sending out a multicast stream to the MVRF for VPN-A, and there are
interested receivers behind PE2, the receiver PE router for VPN-B, and also behind PE3,
the receiver PE router for VPN-A. After PE1 receives the packets from the source in the
MVRF for VPN-A, it replicates and forwards the packets to PE2 and PE3. The packets
received at PE2 in VPN-A are decapsulated and replicated to receivers in VPN-B.
Figure 8. Source MVRF at the Receiver PE Router
Receiver MVRF on the Source PE Router
To provide extranet MVPN services to enterprise VPN customers by configuring the receiver MVRF on the source PE router, complete
the following procedure:
For each extranet site, you would configure an additional MVRF on the source PE router, which has the same default MDT group
as the receiver MVRF, if the MVRF is not already configured on the source PE.
In the receiver MVRF configuration, you would configure the same unicast routing policy on the source and receiver PE routers
to import routes from the source MVRF to the receiver MVRF.
If the originating MVRF of the RPF next-hop is remote (receiver MVRF on the source PE router), then the join state of receiver
VRFs propagates over the core through the MDT of each receiver.
Figure 2
illustrates the flow of multicast traffic in an extranet MVPN topology where a receiver
MVRF is configured on the source PE router. An MVRF is configured for VPN-A and VPN-B on
PE1, the source PE router. A multicast source behind PE1 is sending out a multicast
stream to the MVRF for VPN-A, and there are interested receivers behind PE2 and PE3, the
receiver PE routers for VPN-B and VPN-A, respectively. After PE1 receives the packets
from the source in the MVRF for VPN-A, it independently replicates and encapsulates the
packets in the MVRF for VPN-A and VPN-B and forwards the packets. After receiving the
packets from this source, PE2 and PE3 decapsulate and forward the packets to the
respective MVRFs.
Figure 9. Receiver MVRF at the Source PE Router Receiver
RPF policies can be configured in receiver VRFs to bypass RPF lookup in receiver VRFs and statically propagate join states
to specified source VRF. Such policies can be configured to pick a source VRF based on either multicast group range, multicast
source range, or RP address.
Hub and spoke topology is an interconnection of two categories of sites — Hub sites and Spoke sites. The routes advertised
across sites are such that they achieve connectivity in a restricted hub and spoke fashion. A spoke can interact only with
its hub because the rest of the network (that is, other hubs and spokes) appears hidden behind the hub.
The hub and spoke topology can be adopted for these reasons:
Spoke sites of a VPN customer receive all their traffic from a central (or hub) site hosting services such as server farms.
Spoke sites of a VPN customer require all connectivity between spoke sites to pass through a central site. This means that the hub site becomes a transit point for interspoke connectivity.
Spoke sites of a VPN customer do not need any connectivity between spoke sites. Hubs can send traffic to and receive traffic from all sites, but spoke sites can send traffic to or receive traffic from hub sites only.
Realizing the Hub and Spoke Topology
Hub and Spoke implementation leverages the infrastructure built for MVPN Extranet. The regular MVPN follows the model in which
packets can flow from any site to the other sites. But Hub and Spoke MVPN will restrict traffic flows based on their subscription.
A site can be considered to be a geographic location with a group of CE routers and other devices, such as server farms, connected
to PE routers by PE-CE links for VPN access. Either every site can be placed in a separate VRF, or multiple sites can be combined
in one VRF on the PE router.
By provisioning every site in a separate VRF, you can simplify the unicast and multicast hub and spoke implementation. Such a configuration provides natural protection against traffic leakage from one spoke site to another. The Cisco IOS XR Software implementation of hub and spoke follows the one-site-to-one-VRF model. Any site can be designated as either a hub or a spoke site, based on how the import or export of routes is set up. Multiple hub and spoke sites can be collated on a given PE router.
Unicast Hub and Spoke connectivity is achieved by the spoke sites importing routes from only Hub sites, and Hub sites importing
routes from all sites. As the spoke sites do not exchange routes, spoke to spoke site traffic cannot flow. If interspoke connectivity
is required, hubs can choose to re-inject routes learned from one spoke site into other spoke site.
MVPN Hub and Spoke is achieved by separating core tunnels, for traffic sourced from hub sites, and spoke sites. MDT hub is
the tunnel carrying traffic sourced from all Hub sites, and MDT spoke carries traffic sourced from all spoke sites. Such tunnel
end-points are configured on all PEs participating in hub and spoke topology. If spoke sites do not host any multicast sources
or RPs, provisioning of MDT Spoke can be completely avoided at all such routers.
Once these tunnels are provisioned, multicast traffic path will be policy routed in this manner:
Hub sites will send traffic to only MDT Hub.
Spoke sites will send traffic to only MDT Spoke.
Hub sites will receive traffic from both tunnels.
Spoke sites will receive traffic from only MDT Hub.
These rules ensure that hubs and spokes can send and receive traffic to or from each other, but direct spoke to spoke communication
does not exist. If required, interspoke multicast can flow by turning around the traffic at Hub sites.
These enhancements are made to the Multicast Hub and Spoke topology in Cisco IOS XR Software Release 4.0:
Auto-RP and BSR are supported across VRFs that are connected through extranet. It is no longer restricted to using static
RP only.
MP-BGP can publish matching import route-targets while passing prefix nexthop information to RIB.
Route policies can use extended community route targets instead of IP address ranges.
Support for extranet v4 data mdt was included so that data mdt in hub and spoke can be implemented.
Label Switched
Multicast (LSM) Multicast Label Distribution Protocol (mLDP) based Multicast
VPN (mVPN) Support
Label Switch Multicast (LSM) is MPLS technology extensions to support multicast using label encapsulation. Next-generation
MVPN is based on Multicast Label Distribution Protocol (mLDP), which can be used to build P2MP and MP2MP LSPs through a MPLS
network. These LSPs can be used for transporting both IPv4 and IPv6 multicast packets, either in the global table or VPN context.
For more information about the characteristics of each of the mLDP Profiles, see Characteristics of mLDP Profiles section in the Implementing Layer-3 Multicast Routing on Cisco IOS XR Software chapter of the Multicast Configuration Guide for Cisco ASR 9000 Series Routers, IOS XR Release 6.5.x.
Benefits of LSM MLDP based MVPN
LSM provides these benefits when compared to GRE core tunnels that are currently used to transport customer traffic in the
core:
It leverages the MPLS infrastructure for transporting IP multicast packets, providing a common data plane for unicast and
multicast.
It applies the benefits of MPLS, such as Fast ReRoute (FRR), to IP multicast.
It eliminates the complexity associated with PIM.
Configuring MLDP
MVPN
The MLDP MVPN configuration enables IPv4 multicast packet delivery using MPLS. This configuration uses MPLS labels to construct
default and data Multicast Distribution Trees (MDTs). The MPLS replication is used as a forwarding mechanism in the core network.
For MLDP MVPN configuration to work, ensure that the global MPLS MLDP configuration is enabled. To configure MVPN extranet
support, configure the source multicast VPN Routing and Forwarding (mVRF) on the receiver Provider Edge (PE) router or configure
the receiver mVRF on the source PE. MLDP MVPN is supported for both intranet and extranet.
Figure 10. MLDP based MPLS Network
P2MP and MP2MP Label Switched Paths
mLDP is an application that sets up Multipoint Label Switched Paths (MP LSPs) in MPLS networks without requiring multicast
routing protocols in the MPLS core. mLDP constructs the P2MP or MP2MP LSPs without interacting with or relying upon any other
multicast tree construction protocol. Using LDP extensions for MP LSPs and unicast IP routing, mLDP can set up MP LSPs. The two types of MP LSPs that can be set up are Point-to-Multipoint (P2MP) and Multipoint-to-Multipoint (MP2MP) LSPs.
A P2MP LSP allows traffic from a single root (ingress node) to be delivered to a number of leaves (egress nodes), where each
P2MP tree is uniquely identified with a 2-tuple (root node address, P2MP LSP identifier). A P2MP LSP consists of a single
root node, zero or more transit nodes, and one or more leaf nodes, where typically root and leaf nodes are PEs and transit
nodes are P routers. A P2MP LSP setup is receiver-driven and is signaled using mLDP P2MP FEC, where LSP identifier is represented
by the MP Opaque Value element. MP Opaque Value carries information that is known to ingress LSRs and Leaf LSRs, but need
not be interpreted by transit LSRs. There can be several MP LSPs rooted at a given ingress node, each with its own identifier.
A MP2MP LSP allows traffic from multiple ingress nodes to be delivered to multiple egress nodes, where a MP2MP tree is uniquely
identified with a 2-tuple (root node address, MP2MP LSP identifier). For a MP2MP LSP, all egress nodes, except the sending
node, receive a packet sent from an ingress node.
An MP2MP LSP is similar to a P2MP LSP, but each leaf node acts as both an ingress and an egress node. To build an MP2MP LSP, you set up a downstream path and an upstream path so that:
Downstream path is setup just like a normal P2MP LSP
Upstream path is setup like a P2P LSP towards the upstream router, but inherits the downstream labels from the downstream
P2MP LSP.
Packet Flow in mLDP-based Multicast VPN
For each packet coming in, MPLS creates multiple out-labels. Packets from the source network are replicated along the path
to the receiver network. The CE1 router sends out the native IP multicast traffic. The Provider Edge1 (PE1) router imposes
a label on the incoming multicast packet and replicates the labeled packet towards the MPLS core network. When the packet
reaches the core router (P), the packet is replicated with the appropriate labels for the MP2MP default MDT or the P2MP data
MDT and transported to all the egress PEs. Once the packet reaches the egress PE , the label is removed and the IP multicast
packet is replicated onto the VRF interface.
Realizing a mLDP-based Multicast VPN
There are different ways a Label Switched Path (LSP) built by mLDP can be used depending on the requirement and nature of
application such as:
P2MP LSPs for global table transit Multicast using in-band signaling.
P2MP/MP2MP LSPs for MVPN based on MI-PMSI or Multidirectional Inclusive Provider Multicast Service Instance (Rosen Draft).
P2MP/MP2MP LSPs for MVPN based on MS-PMSI or Multidirectional Selective Provider Multicast Service Instance (Partitioned E-LAN).
The router performs the following important functions for the implementation of MLDP:
Encapsulating VRF multicast IP packet with GRE/Label and replicating to core interfaces (imposition node).
Replicating multicast label packets to different interfaces with different labels (Mid node).
Decapsulating and replicating label packets into VRF interfaces (Disposition node).
Characteristics of
mLDP Profiles
The characteristics of
various mLDP profiles are listed in this section.
Profile 1: Rosen-mLDP (with no BGP-AD)
These are the
characteristics of this profile:
MP2MP mLDP trees
are used in the core.
VPN-ID is used
as the VRF distinguisher.
Configuration
based on Default MDTs.
Same Default-MDT
core-tree used for IPv4 and IPv6 traffic.
Data-MDT
announcements sent by PIM (over Default-MDT).
The multicast
traffic can either be SM or SSM.
Inter-AS Options
A, B, and C are supported. Connector Attribute is announced in VPN-IP routes.
Profile 2: MS-PMSI-mLDP-MP2MP (No BGP-AD)
These are the
characteristics of this profile:
MP2MP mLDP trees
are used in the core.
Different
MS-PMSI core-trees for IPv4 and IPv6 traffic.
The multicast
traffic can be SM or SSM.
Extranet, Hub
and Spoke are supported.
Inter-AS Options
A, B, and C are supported. Connector Attribute is announced in VPN-IP routes.
Profile 3: Rosen-GRE with BGP-AD
These are the
characteristics of this profile:
PIM-trees are
used in the core. The data encapsulation method used is GRE.
SM
or SSM
used in the
core.
Configuration is
based on Default-MDTs.
The multicast
traffic can be SM or SSM.
MoFRR in the
core is supported.
Extranet, Hub
and Spoke, CsC, Customer-RP-discovery (Embedded-RP, AutoRP and BSR) are
supported.
Inter-AS Options
A, B, and C are supported. VRF-Route-Import EC is announced in VPN-IP routes.
Profile 4:
MS-PMSI-mLDP-MP2MP with BGP-AD
These are the
characteristics of this profile:
MP2MP mLDP trees
are used in the core.
The multicast
traffic can be SM or SSM.
Extranet, Hub
and Spoke, CsC, Customer-RP-discovery (Embedded-RP, AutoRP, and BSR) are
supported.
Inter-AS Options
A, B, and C are supported. VRF-Route-Import EC is announced in VPN-IP routes.
Profile 5:
MS-PMSI-mLDP-P2MP with BGP-AD
These are the
characteristics of this profile:
P2MP mLDP trees
are used in the core.
The multicast
traffic can be SM or SSM.
Extranet, Hub
and Spoke, CsC, Customer-RP-discovery (Embedded-RP, AutoRP and BSR) are
supported.
Inter-AS Options
A, B, and C are supported. VRF-Route-Import EC is announced in VPN-IP routes.
Profile 6: VRF
In-band Signaling (No BGP-AD)
These are the
characteristics of this profile:
P2MP mLDP trees
are used in the core.
MoFRR in the
core is supported.
There is one core tree built per VRF (S,G) route. There can be no (*,G) routes in the VRF with RPF reachability over the core.
The multicast
traffic can be SM S,G or SSM.
Profile 7:
Global Inband Signalling
These are the
characteristics of this profile:
P2MP mLDP
inband tree in the core; no C-multicast Routing.
Customer
traffic can be SM S,G or SSM.
Support for
global table S,Gs on PEs.
For more
information on MLDP implementation and OAM concepts, see the Cisco IOS XR MPLS
Configuration Guide for the
Cisco ASR 9000 Series Router
Profile 8:
Global P2MP-TE
These are the
characteristics of this profile:
P2MP-TE tree,
with static Destination list, in the core; no C-multicast Routing.
Static config
of (S,G) required on Head-end PE.
Only C-SSM
support on PEs.
Support for
global table S,Gs on PEs.
Profile 9:
Rosen-mLDP with BGP-AD
These are the
characteristics of this profile:
Single MP2MP
mLDP core-tree as the Default-MDT, with PIM C-multicast Routing.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Profile 10 :
VRF Static-P2MP-TE with BGP AD
These are the
characteristics of this profile:
P2MP-TE tree,
with static Destination list, in the core; no C-multicast Routing.
Static config of (S,G)
required on Head-end PE.
Only C-SSM support on PEs.
Support for IPv4 MVPN S,Gs
on PEs. No support for IPv6 MVPN routes.
Profile 11 :
Rosen-PIM/GRE with BGP C-multicast Routing
These are the
characteristics of this profile:
PIM-trees in the core,
data encapsulation is GRE, BGP C-multicast Routing.
Static config of (S,G)
required on Head-end PE.
For PIM-SSM core-tree and
PIM-SM core-tree with no spt-infinity, all UMH options are supported.
For PIM-SM
core-tree with spt-infinity case, only SFS (Highest PE or Hash-of-BGP-paths) is
supported. Hash of installed-paths method is not supported.
Default and
Data MDTs supported.
Customer
traffic can be SM
or SSM
.
Inter-AS
Option A supported. Options B and C not supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile 12 :
Rosen-mLDP-P2MP with BGP C-multicast Routing
These are the
characteristics of this profile:
Full mesh of P2MP mLDP
core-tree as the Default-MDT, with BGP C-multicast Routing.
All UMH options supported.
Default and Data MDT
supported.
Customer traffic can be
SM
or SSM
.
RPL-Tail-end-Extranet
supported.
Inter-AS Option A, B and C
supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile 13 :
Rosen-mLDP-MP2MP with BGP C-multicast Routing
These are the
characteristics of this profile:
Single MP2MP mLDP
core-tree as the Default-MDT, with BGP C-multicast Routing.
Only SFS
(Highest PE or Hash-of-BGP-paths) is supported. Hash of Installed-paths method
is not supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS
Option A, B and C supported. For Options B and C, Root has to be on a PE or the
root-address reachability has to be leaked across all autonomous systems.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile 14 :
MP2MP-mLDP-P2MP with BGP C-multicast Routing
These are the
characteristics of this profile:
Full mesh of P2MP mLDP
core-tree as the Default-MDT, with BGP C-multicast Routing.
All UMH
options supported.
Default and
Data MDT supported.
Customer
traffic can be SM
or SSM
.
RPL-Tail-end-Extranet
supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Option A, B and C
supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile 15 :
MP2MP-mLDP-MP2MP with BGP C-multicast Routing
These are the
characteristics of this profile:
Full mesh of MP2MP mLDP
core-tree as the Default-MDT, with BGP C-multicast Routing.
All UMH
options supported.
Default and
Data MDT supported.
Customer
traffic can be SM
or SSM
.
RPL-Tail-end-Extranet
supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Option A, B and C
supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile 16 :
Rosen-Static-P2MP-TE with BGP C-multicast Routing
These are the
characteristics of this profile:
Full mesh of
Static-P2MP-TE core-trees, as the Default-MDT, with BGP C-multicast Routing.
All UMH
options supported.
Support for
Data MDT, Default MDT.
Customer
traffic can be SM, SSM .
RPL-Tail-end-Extranet
supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Option A
supported. Options B and C not supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Note
Whenever a multicast stream crosses the configured threshold on the encapsulation PE (head PE), an S-PMSI is announced. The core tunnel is the static P2MP-TE tunnel configured under the route policy for the stream. The static P2MP-TE data MDT is implemented in such a way that it can work with the dynamic data MDT, the dynamic default MDT, and the default static P2MP.
Profile 17:
Rosen-mLDP-P2MP with BGP AD/PIM C-multicast Routing
These are the
characteristics of this profile:
Full mesh of P2MP mLDP
core-tree as the Default-MDT, with PIM C-multicast Routing.
All UMH
options supported.
Default and
Data MDT supported.
Customer
traffic can be SM
or SSM
.
RPL-Extranet,
Hub & Spoke supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Option A, B and C
supported.
Profile 18 :
Rosen-Static-P2MP-TE with BGP AD/PIM C-multicast Routing
These are the
characteristics of this profile:
Full mesh of
Static-P2MP-TE core-trees, as the Default-MDT, with PIM C-multicast Routing.
All UMH
options supported.
Default MDT
supported; Data MDT is not supported.
Customer
traffic can be SM, SSM .
RPL-Extranet,
Hub & Spoke supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Option A
supported. Options B and C not supported.
Profile
20 : Rosen-P2MP-TE with BGP AD/PIM C-multicast Routing
These are the
characteristics of this profile:
Dynamic P2MP-TE tunnels
setup on demand, with PIM C-multicast Routing
All UMH
options supported.
Default and
Data MDT supported.
Customer
traffic can be SM, SSM .
RPL-Extranet,
Hub & Spoke supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Options A and C are supported.
Profile 21 : Rosen-IR with BGP C-multicast Routing
These are the characteristics of this profile:
Full mesh of P2MP IR core-tree as the Default-MDT, with BGP C-multicast Routing.
Only SFS (Highest PE or Hash-of-BGP-paths) is supported. Hash of Installed-paths method is not supported.
Default and Data MDT supported.
Customer traffic can be SM, SSM .
RPL-Tail-end-Extranet supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is supported.
Inter-AS Options A and C are supported.
Profile
22 : Rosen-P2MP-TE with BGP C-multicast Routing
These are the
characteristics of this profile:
Dynamic P2MP-TE tunnels
with BGP C-multicast Routing
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Options A and C are supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Profile
24: Partitioned-P2MP-TE with BGP AD/PIM C-multicast Routing
These are the
characteristics of this profile:
Dynamic
P2MP-TE tunnels setup on demand, with PIM C-multicast Routing
All UMH
options supported.
Default and
Data MDT supported.
Customer
traffic can be SM
or SSM
.
RPL-Extranet, Hub &
Spoke supported.
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Options A and C are supported.
Profile
26 : Partitioned-P2MP-TE with BGP C-multicast Routing
These are the
characteristics of this profile:
Dynamic
P2MP-TE tunnels with BGP C-multicast Routing
Customer-RP-discovery (Embedded-RP, AutoRP & BSR) is
supported.
Inter-AS Options A and C are supported.
All PEs must have a unique BGP Route Distinguisher (RD) value. To configure BGP RD value, refer
Cisco ASR 9000 Series Aggregation Services Router Routing Configuration Guide .
Configuration rules
for profiles
Rules for Rosen-mGRE profiles (profiles- 0, 3, 11)
All profiles require VPNv4 or VPNv6 unicast reachability.
By default, encapsulation of c-multicast IP packets of up to 1400 bytes is supported. To support decapsulation or encapsulation of larger packet sizes, use the mdt mtu command (see the sketch after this list).
Loopback configuration is required. Use the mdt source loopback0 command. Other loopbacks can be used for different VRFs, but this is not recommended.
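A minimal sketch of the mdt source and mdt mtu settings referenced in the list above; the VRF name, MTU value, and default MDT group are illustrative assumptions:
multicast-routing
 vrf red
  address-family ipv4
   mdt source Loopback0
   mdt mtu 1600
   mdt default ipv4 239.1.1.1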
Rules for Rosen-mLDP profiles (profiles- 1, 9, 12, 13, 17)
mLDP must be
globally enabled.
VPN-id is
mandatory for Rosen-mLDP MP2MP profiles.
Root node must be specified
manually. Multiple root nodes can be configured for Root Node Redundancy.
If only profile 1 is
configured, MVPN must be enabled under bgp.
For BGP-AD profiles, the
remote PE address is required.
Rules for mLDP profiles (profiles- 2, 4, 5, 14, 15)
MVPN must be enabled under
bgp, if only profile 2 is configured.
Support only for static RP
for customer RP.
Rules for inband mLDP profiles (profiles- 6, 7)
MVPN must be
enabled under bgp for vrf-inband profiles.
Data MDT is not supported.
Backbone facing interface
(BFI) must be enabled on tail PE.
The source route of SSM must be advertised to the tail PE by iBGP.
MLDP inband
signaling
MLDP Inband signaling
allows the core to create (S,G) or (*,G) state without using out-of-band
signaling such as BGP or PIM. It is supported in VRF (and in the global
context). Both IPv4 and IPv6 multicast groups are supported.
In MLDP inband signaling, you can configure an ACL range of multicast (S,G) entries. These (S,G) entries can be transported in MLDP LSPs. Each multicast channel (S,G) is mapped one-to-one to a tree in the inband tree. The (S,G) join, received through IGMP/MLD/PIM, is registered in MRIB, which is the client of MLDP.
MLDP inband signaling supports transiting PIM (S,G) or (*,G) trees across an MPLS core without the need for an out-of-band protocol. Inband signaling is supported only for shared-tree-only forwarding (also known as sparse-mode threshold infinity). PIM sparse-mode behavior (switching from (*,G) to (S,G)) is not supported.
The details of the MLDP profiles are discussed in the Multicast Configuration Guide for Cisco ASR 9000 Series Routers.
Summary of Supported
MVPN Profiles
This table summarizes the supported MVPN profiles:
Profile Number
Name
Opaque-value
BGP-AD
Data-MDT
0
Rosen GRE
N/A
N/A
PIM TLVs over default MDT
1
Rosen MLDP
Type 2 - Root Address:VPN-ID:0-n
N/A
PIM TLVs over default MDT
2
MS-PMSI (Partition) MLDP MP2MP
Cisco proprietary - Source-PE:RD:0
N/A
N/A
3
Rosen GRE with BGP-AD
N/A
Intra-AS MI-PMSI
S-PMSI for Data-MDT
PIM or BGP-AD (knob controlled)
4
MS-PMSI (Partition) MLDP MP2MP with BGP-AD
Type 1 - Source-PE:Global-ID
I-PMSI with empty PTA
MS-PMSI for partition MDT
S-PMSI for data MDT
S-PMSI cust RP-discovery trees
BGP-AD
5
MS-PMSI (Partition) MLDP P2MP with BGP-AD
Type 1 - Source-PE:Global-ID
I-PMSI with empty PTA
MS-PMSI for partition MDT
S-PMSI for data MDT
S-PMSI cust RP-discovery trees
BGP-AD
6
VRF Inband MLDP
RD:S,G
N/A
N/A
7
Global Inband
S,G
N/A
N/A
8
Global P2MP TE
N/A
N/A
N/A
9
Rosen MLDP with BGP-AD
Type 2 - Root Address:VPN-ID:0-n
Intra-AS MI-PMSI
S-PMSI for Data-MDT
PIM or BGP-AD (knob controlled)
LSP-switch for
P2MP-TE
Turnaround for P2MP-TE can be handled by LSP-switch with a partitioned
profile. For partitioned profiles, there is no core tree (where all the PEs
join). When the traffic arrives at the ingress PE, it is forwarded to the RP-PE
on a LSP. The RP-PE must then switch the traffic to a different LSP for all the
non-RP PE receivers.
Configuration Process for MLDP MVPN (Intranet)
These steps provide a broad outline of the configuration process for MLDP MVPN for an intranet:
Note
For detailed summary of the various MVPN profiles, see the Summary of Supported MVPN Profiles.
Enabling MPLS MLDP
configure
mpls ldp mldp
Configuring a VRF entry
configure
vrf vrf_name
address-family ipv4/ipv6 unicast
import route-target route-target-ext-community
export route-target route-target-ext-community
Configuring VPN ID
configure
vrf vrf_name
vpn id vpn_id
Configuring MVPN Routing and Forwarding instance
configure
multicast-routing vrf vrf_name
address-family ipv4
mdt default mldp ipv4 root-node
Configuring the Route Distinguisher
configure
router bgp AS Number
vrf vrf_name
rd rd_value
Configuring Data MDTs (optional)
configure
multicast-routing vrf vrf_name
address-family ipv4
mdt data <1-255>
Configuring BGP MDT address family
configure
router bgp AS Number
address-family ipv4 mdt
Configuring BGP vpnv4 address family
configure
router bgp AS Number
address-family vpnv4 unicast
Configuring BGP IPv4 VRF address family
configure
router bgp AS Number
vrf vrf_name
address-family ipv4 unicast
Configuring PIM SM/SSM Mode for the VRFs
configure
router pim
vrf vrf_name
address-family ipv4
rpf topology route-policy rosen_mvpn_mldp
For each profile, a different route-policy is configured.
Configuring route-policy
route-policy rosen_mvpn_mldp
set core-tree tree-type
pass
end-policy
Note
The configuration of the above procedures depends on the profile used for each configuration.
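For example, a route policy that selects the mLDP default MDT core tree (as used by a Rosen-mLDP profile) might look like the following sketch; the core-tree keyword shown is one of several tree types and is an assumption based on the IOS XR routing policy language:
route-policy rosen_mvpn_mldp
  set core-tree mldp-default
  pass
end-policy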
MLDP Loop-Free
Alternative Fast Reroute
Background
Generally, in a
network, a network topology change, caused by a failure in a network, results
in a loss of connectivity until the control plane convergence is complete.
There can be various levels of loss of connectivity depending on the
performance of the control plane, fast convergence tuning, and leveraged
technologies of the control plane on each node in the network.
The duration of the loss of connectivity impacts loss-sensitive applications, which have strict fault-tolerance requirements (typically of the order of hundreds of milliseconds and up to a few seconds). To ensure that the loss of connectivity stays within limits acceptable to such applications, a technology implementation for data plane convergence is essential.
Fast Reroute (FRR) is one such technology, primarily applicable to the network core.
With the FRR
solution, at each node, the backup path is pre-computed, and the traffic is
routed through this backup path. As a result, the reaction to failure is local;
immediate propagation of the failure and subsequent processing on to other
nodes is not required. With FRR, if the failure is detected quickly, a loss of
connectivity as low as 10s of milliseconds is achieved.
Loop-Free
Alternative Fast Reroute
IP Loop Free
Alternative FRR is a mechanism that enables a router to rapidly switch traffic
to a pre-computed or a pre-programmed
loop-free
alternative (LFA) path (Data Plane Convergence), following either an
adjacent link and node failure, or an adjacent link or node failure in both IP
and LDP networks. The LFA path is used to switch traffic till the router
installs the new primary next-hops based upon the changed network topology
(Control Plane Convergence).
The goal of LFA FRR
is to reduce the loss of connectivity to tens of milliseconds by using a
pre-computed alternative next-hop, in the case where the selected primary
next-hop fails.
There are two
approaches to computing LFA paths:
Link-based (per-link): In
link-based LFA paths, all prefixes reachable through the primary (protected)
link share the same backup information. This means that the whole set of
prefixes sharing the same primary also shares the repair and FRR ability.
Prefix-based
(per-prefix): Prefix-based LFAs allow computing backup information for each
prefix. This means that the repair and backup information computed for a given
prefix using prefix-based LFA may be different from the one computed by
link-based LFA.
Node-protection
support is available with per-prefix LFA FRR on ISIS currently. It uses a
tie-breaker mechanism in the code to select node-protecting backup paths.
The per-prefix LFA
approach is preferred to the per-link LFA approach for the following reasons:
Better node
failure resistance.
Better coverage:
Each prefix is analyzed independently.
Better capacity
planning: Each flow is backed up on its own optimized shortest path.
mLDP LFA
FRR
The point-to-point physical or bundle interface FRR mechanism is supported on mLDP. FRR with LFA backup is supported on mLDP.
When there is a link failure, mLDP automatically sets up and chooses the backup path. With this implementation, you must configure
the physical or bundle interface for unicast traffic, so that the mLDP can act as an mLDP FRR.
LFA FRR support on
mLDP is a per-prefix backup mechanism. As part of computing the LFA backup for
a remote IP, the LFA backup paths for the loopback address of the downstream
intermediate nodes are also computed. MLDP uses this small subset of
information, by using the loopback address of the peer to compute the LFA
backup path.
Note
Both IPv4 and
IPv6 traffic is supported on the mLDP LFA FRR solution.
Supported MLDP
Profiles
The supported mLDP profiles are:
MVPN-MLDP Inband Signaling
Global Inband Profile
VRF Inband Profile
MVPN Rosen MLDP
MVPN Partitioned
MLDP
Supported Line Cards And Interfaces
The supported line cards include Cisco ASR 9000 Enhanced Ethernet Line Card and Cisco ASR 9000 High Density 100GE Ethernet
line cards; and the supported interface types include: Physical interface, Bundle interface, and the Bundle VLANs (Local Shut).
Advantages of LFA
FRR
The following are the
advantages of the LFA FRR solution:
The backup path
for the traffic flow is pre-computed.
Reaction to
failure is local, an immediate propagation and processing of failure on to
other nodes is not required.
If the failure is
detected in time, the loss of connectivity of up to 10s of milliseconds can be
achieved. Prefix independency is the key for a fast switchover in the
forwarding table.
The mechanism is
locally significant and does not impact the Interior Gateway Protocol (IGP)
communication channel.
LFA next-hop can
protect against:
a single link
failure
failure of
one of more links within a shared risk link group (SRLG)
any
combination of the above
MLDP LFA FRR -
Features
The following are the
features of mLDP LFA FRR solution:
Supports both IPv4 and IPv6 traffic
Supports all the mLDP profiles
Supports the LAG interfaces and sub-interfaces in the core
Supports ECMP primary paths
Supports both ISIS and OSPF routing protocols
Supports switchover time of less than 50 milliseconds
Supports switchover times that are independent of the number of multicast routes that have to be switched over
Limitations of LFA
FRR
The following are some
of the known limitations of the LFA FRR solution:
When a failure
that is more extensive than that which the alternate was intended to protect
occurs, there is the possibility of temporarily looping traffic (micro looping
until Control Plane Convergence).
Topology
dependent. For example, either MPLS or MLDP dependent.
Complex
implementation.
The solution is
currently not supported on all platforms.
MLDP LFA FRR -
Working
To enable FRR for mLDP
over physical or bundle interfaces, LDP session-protection has to be
configured. The sequence of events that occur in an mLDP LFA FRR scenario is
explained with the following example:
Figure 11. MLDP LFA FRR -
Setup
In this figure:
Router A is the
source provider edge router, and the next Hop is Router B.
The primary path
is Router A -> Router B -> Router D, and the backup path is from Router
A -> Router C -> Router B -> Router D. The backup path is pre-computed
by IGP through LFA prefix-based selection.
Backup tunnels are
configured for Link A or auto-tunnels are enabled.
The MLDP LSP is built from D, B, and A towards the root.
Router A installs
a downstream forwarding replication over link A to Router B. This entry has
both the primary interface (Link A) and the backup tunnel programmed.
Figure 12. Link
Failure
When a link failure occurs on Link A:
Traffic over Link
A is rerouted over the backup tunnel by imposing the traffic engineering (TE)
label 20 towards mid Router C.
Router C performs
penultimate hop popping (PHP) and removes the outer label 20.
Router B receives
the mLDP packets with label 17 and forwards to Router D.
Figure 13. Re-optimization
- Make-Before-Break
During
re-optimization:
mLDP is notified
that the root is reachable through Router C, and mLDP converges. With this, a
new mLDP path is built to router A through Router C.
Router A forwards
packets natively with old label 17 and also new label 22.
Router B drops
traffic carried from new label 22 and forwards traffic with label 17.
Router B uses
make-before-break (MBB) trigger to switch from either physical or bundle
interface to native, label 17 to 21.
Router B prunes
off the physical or bundle interface with a label withdraw to router A.
MLDP LFA FRR -
Behavior
In the following
scenarios, S is source router, D is the destination router, E is primary next
hop, and N_1 is the alternative next hop.
Figure 14. LFA FRR Behavior
- LFA Available
With LFA FRR, the source router S calculates an alternative next hop N_1 to forward traffic towards the destination router D, and installs N_1 as the alternative next hop. On detecting the link failure between routers S and E, router S stops forwarding traffic destined for router D towards E through the failed link; instead, it forwards the traffic to the pre-computed alternate next hop N_1 until a new SPF is run and the results are installed.
Figure 15. LFA FRR Behavior
- LFA Not Available
In the above scenario,
if the link cost between the next hop N_1 and the destination router D is
increased to 30, then the next hop N_1 would no longer be a loop-free
alternative. (The cost of the path, from the next hop N_1 to the destination D
through the source S, would be 17, while the cost from the next hop N_1
directly to destination D would be 30). Thus, the existence of a LFA next hop
is dependent on the topology and the nature of the failure, for which the
alternative is calculated.
LFA Criteria
In the above example,
the LFA criteria of whether N is to be the LFA next-hop is met, when:
Cost of path (N_1, D) < Cost of path (N_1, S) + Cost of path
(E, S) + Cost of path (D, E)
Downstream Path
criteria, which is subset of LFA, is met when:
Cost of path (N_1, D) < Cost of path (E, S) + Cost of path
(D, E)
Link Protecting LFA
Figure 16. Link Protecting
LFA
In the above illustration, if router E fails, then both router S and router N detect the failure and switch to their alternates, causing a forwarding loop between routers S and N. Thus, a link-protecting LFA can cause a loop on node failure; however, this can be avoided by using a downstream path, which can limit the coverage of alternates. Router S is able to use router N as a downstream alternate; however, router N cannot use S. Therefore, N would have no alternate and would discard the traffic, thus avoiding the micro-looping.
Node Protecting LFA
Link and node
protecting LFA guarantees protection against either link or node failure.
Depending on the protection available at the downstream node, the downstream
path provides protection against a link failure; however, it does not provide
protection against a node failure, thereby preventing micro looping.
The LFA selection priority is: Link and Node Protecting LFA is preferred over Link Protecting Downstream, which is preferred over Link Protecting LFA.
Configurations to
Enable LFA FRR
Key
Configurations To Enable LFA FRR
The key
configurations to enable LFA FRR feature include:
Router OSPF
configuration
The various configurations available under OSPF are:
Enabling
Per-Prefix LFA
Excluding
Interface from Using Backup
Adding
Interfaces to LFA Candidate List
Restricting
LFA Candidate List
Limiting
Per-Prefix Calculation by Prefix Priority
This option controls the load balancing of the backup paths on a per-prefix basis.
Note
By default, load balancing of prefixes across all backup paths is enabled.
Step 6
commit
Configuring Router ISIS LFA FRR
In ISIS configuration, configure fast-reroute per-prefix to enable the LFA FRR feature.
Procedure
Step 1
configure
Example:
RP/0/RSP0/CPU0:router# configure
Enters the global configuration mode.
Step 2
router isis instance-id
Example:
RP/0/RSP0/CPU0:router(config)# router isis MCAST
Enables IS-IS routing for the specified routing instance, and places the router in router configuration mode.
Step 3
net network-entity-title
Example:
RP/0/RSP0/CPU0:router(config-isis)# net 49.0001.0000.0000.0001.00
Configures network entity titles (NETs) for the routing instance.
Specify a NET for each routing instance if you are configuring multi-instance IS-IS.
This example configures a router with area ID 49.0001 and system ID 0000.0000.0001.
To specify more than one area address, specify additional NETs. Although the area address portion of the NET differs for all
of the configured items, the system ID portion of the NET must match exactly.
When a local interface goes down, whether due to a fiber cut or because an interface shutdown is configured, it can take a long delay, in the order of tens of milliseconds, for the remote peer to detect the link disconnection. To quickly detect a remote shut on physical ports or bundle interfaces, those interfaces must run Bidirectional Forwarding Detection (BFD) to ensure faster failure detection.
With bfd minimum-interval 3 and bfd multiplier 2 configured, when a core-facing interface of a remote peer goes down, the router detects the disconnect event in as little as 6 milliseconds.
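A minimal sketch that combines per-prefix LFA FRR with the BFD timers discussed above; the instance name follows the earlier example, while the interface, metric style, and address family details are illustrative assumptions:
router isis MCAST
 net 49.0001.0000.0000.0001.00
 address-family ipv4 unicast
  metric-style wide
 !
 interface GigabitEthernet0/0/0/1
  bfd minimum-interval 3
  bfd multiplier 2
  bfd fast-detect ipv4
  address-family ipv4 unicast
   fast-reroute per-prefix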
Configuring MPLS LFA FRR
Before you begin
In the MPLS configuration, configure session protection to support the LFA FRR feature. The detailed configuration steps and an example follow.
Make Before Break (MBB) is an inherent characteristic of MLDP. In the MBB configuration, configure forwarding recursive to enable the LFA FRR feature. If forwarding recursive is not configured, MLDP uses a non-recursive method to select the MLDP core-facing interface towards the next hop. The detailed configuration steps and an example follow.
Procedure
Command or Action
Purpose
Step 1
configure
Example:
RP/0/RSP0/CPU0:router# configure
Enters global
configuration mode.
Step 2
mpls ldp
Example:
RP/0/RSP0/CPU0:router(config)# mpls ldp
Enters the LDP
configuration mode.
Step 3
log
Example:
RP/0/RSP0/CPU0:router(config-ldp)# log
Enters the log
sub mode under the LDP sub mode.
Step 4
neighbor
Example:
RP/0/RSP0/CPU0:router(config-ldp-log)# neighbor
Enables logging of notices describing LDP neighbor session changes.
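As an illustration only, a minimal sketch that combines the pieces referred to in this procedure (neighbor logging, session protection, the MBB delay, and recursive forwarding) is shown below; the two-value form of the make-before-break delay command (switchover delay followed by delete delay) is an assumption:
mpls ldp
 log
  neighbor
 !
 session protection
 mldp
  address-family ipv4
   make-before-break delay 90 60
   forwarding recursive
  !
 !
!
Here, 90 is the MBB delay and 60 is the delete delay described in the following paragraph.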
In the above configuration example, the MBB (delay) period is set to 90 seconds. The merge node starts accepting the new label 90 seconds after detecting the link disconnection towards the head node. The delete delay is set to 60 seconds; that is, when MBB expires, the merge node waits 60 seconds before sending the old-label delete request to the head node. The default value is zero. The range of the delete delay is from 30 to 60 seconds for scale LSPs.
Configuring FRR Time
for Scalable Number of mLDP LSPs
In a scalable setup with more than 500 LSPs, when an FRR event occurs, the unicast Interior Gateway Protocol (IGP) converges faster than the multicast updates (LMRIB to FIB) for MLDP label updates. As a result, FIB can clear the FRR bit as early as 2 seconds after the FRR event, while MLDP label hardware programming is still incomplete on the egress LC hosting the backup path.
The frr-holdtime command configures the frr-holdtime to be proportional to the scale number of LSPs. The recommended frr-holdtime value is either the same as, or less than, the MBB delay timer. This ensures that the egress LC is still in the FRR state after the primary path down event.
When not configured, the default frr-holdtime is 2 seconds.
Configures frr-holdtime to be proportional to the scale number of LSPs. In this case, the frr-holdtime is configured to 30 seconds.
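For example, the 30-second value referred to here is entered in global configuration mode as follows:
RP/0/RSP0/CPU0:router(config)# cef platform lsm frr-holdtime 30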
Step 3
commit
Example:
Configure FRR Holdtime
cef platform lsm frr-holdtime ?
<3-180> Time in seconds
cef platform lsm frr-holdtime 45
commit
!
!
MLDP Carrier
Supporting Carrier Based MVPN
The carrier-supporting-carrier (CSC) feature enables one MPLS VPN-based service provider to allow other service providers to use a segment of its backbone network. The service provider that provides the segment of the backbone network to the other provider is called the backbone carrier, whereas the service provider that uses the backbone network is called the customer carrier. The customer carrier can be either an ISP itself or a BGP/MPLS VPN service provider; it runs IP in its network in the former case and MPLS in the latter case. In either case, MPLS is run in the backbone network and between the backbone and the customer carrier (on the PE-CE link).
Figure 17. MLDP CSC based
MVPN
In the above illustration, the P-PE and P routers are part of the backbone carrier. The customer carrier PEs are labeled C-PE. The link between a P-PE and a C-PE is in a VRF on the P-PE and in the global table on the C-PE. LDP/MLDP sessions run in the VRF context on the P-PE link towards the C-PE. There is an iBGP session between the P-PEs exchanging VPNv4 addresses.
MLDP CsC -
Restrictions
The following are the limitations of
the MLDP CsC solution:
P2MP LSPs are supported for CsC; however, MP2MP is not supported.
MBB cannot be enabled per VRF. It is enabled either for all VRFs or for none.
MBB delay can be configured per VRF only.
CsC Configuration
Example - Overview
The following figure
describes an example configuration of the CsC feature:
Figure 18. CsC -
Configuration Overview
The network consists of:
Two cores: inner and outer cores.
Inner Core: includes the P-PE and P routers.
Outer Core: includes the P-PE routers, which are connected directly to the C-PE routers.
VRF-lite: more than one C-PE connected to the same P-PE.
IGP: [OSPF] on the inner core routers in the global table (not on VRFs).
BGP:
BGP/iBGP between P-PE routers.
eBGP between C-PE and P-PE routers.
Note
C-PE and P-PE are directly connected.
Static Routing: between C-PE and P-PE to trigger the creation of a label.
MLDP/MPLS:
Two types of sessions: the global table of the P-PE and P routers (of the inner core), and the VRF of the P-PE routers and the global table of the C-PE routers.
Peer model: a P2MP tree is created in the inner core for each P2MP tree that exists in the outer core. When data MDT is selected, one LSP is created for each Mroute.
PIM/Multicast: not run in either the inner or the outer core. The inner core is transparent to PIM. Only profiles 12 and 14 are applicable.
Multipoint Label
Distribution Protocol Route Policy Map
Multicast supports Multipoint Label Distribution Protocol Route Policy
Map, wherein Multipoint Label Distribution Protocol uses the route policy maps
to filter Label Mappings and selectively apply the configuration features on
Cisco IOS-XR operating system.
Route policy map for configuration commands:
The route policy map for the configuration commands provides the flexibility to selectively enable mLDP features, such as Make Before Break (MBB) and Multicast only FRR (MoFRR), on the applicable LSPs. Features like MBB and MoFRR can be enabled on mLDP on the IOS-XR operating system; however, when one of these features is enabled, it is enabled for all of the mLDP Label-Switched Paths (LSPs), irrespective of whether it is applicable for a particular LSP or not. For example, MoFRR is used for IPTV over mLDP in-band signaled P2MP LSPs, but not for the generic MVPN using MP2MP LSPs. Using the route policy map, you can configure mLDP to selectively enable some of these features.
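As an illustration only, a sketch of how a route policy might be attached to an individual feature under the mLDP configuration is shown below; the exact attachment syntax (mofrr route-policy under the mLDP IPv4 address family) and the policy name are assumptions and can differ by release:
route-policy mofrr_policy
  if mldp opaque ipv4 192.168.3.1 232.2.2.2 then
    pass
  endif
end-policy
!
mpls ldp
 mldp
  address-family ipv4
   mofrr route-policy mofrr_policy
  !
 !
!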
Route policy for label mapping filtering:
The route policy map for Label Mapping filtering provides a way to prevent mLDP from crossing over from one plane to another in the event of a failure.
Generally, LSPs based on mLDP are built on the unicast routing principle, and the LSPs follow unicast routing as well. However, some networks are built on the concept of a dual-plane design, where an mLDP LSP is created in each of the planes to provide redundancy. In the event of a failure, mLDP crosses over to the other plane. To prevent mLDP from crossing over, mLDP Label Mappings are filtered in either the inbound or the outbound direction.
mLDP uses the existing RPL policy infrastructure in IOS-XR. With the existing RPL policy, mLDP FECs are created and compared with the real mLDP FEC for filtering and configuration commands. (To create mLDP FECs for filtering, create a new RPL policy, specific to mLDP FECs, with the necessary show and configuration commands.) An mLDP FEC is a 3-tuple consisting of a tree type, a root node address, and an opaque encoding, which together uniquely identify the mLDP LSP. Each opaque encoding has a different TLV encoding associated with it. For each of the different opaque TLVs, a unique RPL policy must be created, because the information in the mLDP opaque encoding differs.
The implementation of mLDP
FEC based RPL filter is done in both RPL and LDP components.
mLDP FEC
The mLDP FEC Route Policy Filtering is a combination of a root node
and opaque types.
Root Node:
Filtering is allowed only at the root node in combination with
opaque types.
Opaque Types:
The following are the opaque types allowed to create the Route
Policies.
IPV4
In-band type
IPV6
In-band type
VPNv4
In-band type
VPNv6
In-band type
MDT Rosen
model (VPN-ID) type
Global ID
type
Static ID
type
Recursive
FEC type
VPN
Recursive FEC type
mLDP Label Mapping
Filtering:
Label mapping filtering is supported in either the inbound or the outbound direction, based on user preference. All default policies applicable in the neighborhood are supported by Label Mapping Filtering.
mLDP Feature
Filtering:
The RPL policy allows selected features to be enabled; it applies to the following feature configuration commands:
MoFRR
Make Before
Break
Recursive FEC
Configuring mLDP
User Interface (Opaque Types) Using the Routing Policy
Perform this task to configure the mLDP user interface using the route policy to filter Label Mappings and selectively apply the configuration features. The mLDP user interface can be configured using the various available mLDP opaque parameters, such as the Global ID, IPv4, IPv6, MDT, Recursive, Recursive RD, Static ID, VPNv4, and VPNv6.
See the
Implementing
Routing Policy on Cisco ASR 9000 Series Router module of
Cisco ASR 9000 Series
Aggregation Services Router Routing Configuration Guide for a list of the supported attributes and
operations that are valid for policy filtering.
Configuring the mLDP
User Interface for LDP Opaque Global ID Using the Routing Policy
SUMMARY STEPS
configure
route-policy mldp_policy
if mldp opaque global-id [32-bit decimal number] then pass
endif
end-policy
commit
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Enters the
Route-policy configuration mode, where you can define the route policy.
Step 3
if mldp opaque mdt
[1:1]
then pass
endif
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp opaque mdt then pass endif
Configures the
mLDP VPNID to the specific MDT number.
Step 4
end-policy
Example:
RP/0/RSP0/CPU0:router(config-rpl)# end-policy
Step 5
commit
Step 6
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Example
outputs are as shown:
Sun Jun 22 20:03:34.308 IST
route-policy mldp_policy
if mldp opaque mdt 1:1 0 then
pass
endif
end-policy
route-policy mldp_policy
if mldp opaque mdt any 10 then
pass
endif
end-policy
!
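For the Global ID opaque type named in the summary steps of this section, the route policy might look like the following sketch; the global-id value of 10 is an assumption used only for illustration:
route-policy mldp_policy
  if mldp opaque global-id 10 then
    pass
  endif
end-policy
!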
Configuring the mLDP
User Interface for LDP Opaque Static ID Using the Routing Policy
SUMMARY STEPS
configure
route-policy mldp_policy
if mldp opaque static-id [32-bit decimal number] then pass
endif
end-policy
commit
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Enters the
Route-policy configuration mode, where you can define the route policy.
Step 3
if mldp opaque vpnv4
[2:2]
then pass
endif
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp opaque vpnv4 then pass endif
Configures the
mLDP vpnv4 variable to the specified variable.
Step 4
if mldp opaque vpnv4
[2:2 10.1.1.1 232.1.1.1]
then pass
endif
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp opaque vpnv4 then pass endif
Configures the
mLDP vpnv4 variable to the specified range of variable addresses.
Step 5
end-policy
Example:
RP/0/RSP0/CPU0:router(config-rpl)# end-policy
Step 6
commit
Step 7
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Example
outputs are as shown:
Sun Jun 22 20:03:34.308 IST
route-policy mldp_policy
if mldp opaque vpnv4 2:2 10.1.1.1 232.1.1.1 then
pass
endif
end-policy
route-policy mldp_policy
if mldp opaque vpnv4 any 0.0.0.0 224.1.1.1 then
pass
endif
end-policy
!
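Similarly, for the Static ID opaque type named in the summary steps of this section, the route policy might look like the following sketch; the static-id value of 10 is an assumption used only for illustration:
route-policy mldp_policy
  if mldp opaque static-id 10 then
    pass
  endif
end-policy
!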
Configuring the mLDP
User Interface for LDP Opaque VPNv6 Using the Routing Policy
SUMMARY STEPS
configure
route-policy mldp_policy
if mldp opaque vpnv6
[2:2]
then pass
endif
if mldp opaque vpnv6
[2:2 10::1 FF05::1]
then pass
endif
end-policy
commit
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Enters the
Route-policy configuration mode, where you can define the route policy.
Step 3
if mldp opaque vpnv6
[2:2]
then pass
endif
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp opaque vpnv6 then pass endif
Configures the
mLDP vpnv6 variable to the specified variable.
Step 4
if mldp opaque vpnv6
[2:2 10::1 FF05::1]
then pass
endif
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp opaque vpnv6 then pass endif
Configures the
mLDP vpnv6 variable to the specified variable range of addresses.
Step 5
end-policy
Example:
RP/0/RSP0/CPU0:router(config-rpl)# end-policy
Step 6
commit
Step 7
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
An
example output is as shown:
Sun Jun 22 20:03:34.308 IST
route-policy mldp_policy
if mldp opaque vpnv6 2:2 10::1 ff05::1 then
pass
endif
end-policy
!
Configuring mLDP FEC
at the Root Node
Perform this task to
configure mLDP FEC at the root node using the route policy to filter Label
Mappings and selectively apply the configuration features. Currently, mLDP FEC
is configured to filter at the IPV4 root node address along with the mLDP
opaque types.
Configuring the mLDP
FEC at the Root Node Using the Route Policy
SUMMARY STEPS
configure
route-policy mldp_policy
if mldp root
end-policy
commit
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
Enters the
Route-policy configuration mode, where you can define the route policy.
Step 3
if mldp root
Example:
RP/0/RSP0/CPU0:router(config-rpl)# if mldp root [ipv4 address] then pass endif
Configures the
mLDP root address to the specified IPv4 IP address.
Step 4
end-policy
Example:
RP/0/RSP0/CPU0:router(config-rpl)# end-policy
Step 5
commit
Step 6
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
The
current configuration output is as shown:
route-policy mldp_policy
if mldp root 10.0.0.1 then
pass
endif
end-policy
!
Example of an
MLDP Route Policy which shows the filtering option of a Root Node IPv4 address
and mLDP Opaque IPv4 address
Show configuration output for
the mLDP root IPv4 address and mLDP opaque IPv4 address range
route-policy mldp_policy
if mldp root 10.0.0.1 and mldp opaque ipv4 192.168.3.1 232.2.2.2 then
pass
endif
end-policy
!
Configuring the mLDP
User Interface to Filter Label Mappings
Label mapping filtering is supported in either the inbound or the outbound direction, based on user preference. All default policies applicable in the neighborhood are supported by Label Mapping Filtering.
Configuring the mLDP
User Interface to Filter Label Mappings
SUMMARY STEPS
configure
mpls ldp mldp
address-family ipv4
neighbor [ipv4 ip address] route-policy mldp_policy in | out
end-policy
commit
Use the show
command to verify the configuration:
show running-config
route-policy mldp_policy
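Putting the summary steps together, a configuration that filters label mappings received from a specific neighbor with the route policy might look like the following sketch; the neighbor address 192.0.2.1 and the choice of the inbound direction are assumptions:
route-policy mldp_policy
  if mldp root 10.0.0.1 then
    pass
  endif
end-policy
!
mpls ldp
 mldp
  address-family ipv4
   neighbor 192.0.2.1 route-policy mldp_policy in
  !
 !
!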