Cisco 10000 Series Router Software Configuration Guide
Configuring Multilink Point-to-Point Protocol Connections

Configuring Multilink Point-to-Point Protocol Connections

Table Of Contents

Configuring Multilink Point-to-Point Protocol Connections

Multilink Point-to-Point Protocol

Feature History for Multilink PPP

MLP Bundles

Restrictions for MLP Bundles

MLP Bundles and PPP Links

System Limits for MLP Bundles

MLP Groups

How MLP Determines the Link a Bundle Joins

IP Addresses on MLP-Enabled Links

Valid Ranges for MLP Interfaces

MLP Overhead

Configuration Commands for MLP

interface multilink Command

ppp multilink Command

ppp multilink fragment-delay Command

ppp multilink interleave Command

ppp multilink fragment disable Command

ppp multilink group Command

MLP over Serial Interfaces

Performance and Scalability for MLP over Serial Interfaces

Restrictions and Limitations for MLP over Serial Interfaces

Single-VC MLP over ATM Virtual Circuits

Performance and Scalability for Single-VC MLP over ATM

Restrictions and Limitations for Single-VC MLP over ATM

Multi-VC MLP over ATM Virtual Circuits

Performance and Scalability for Multi-VC MLP over ATM VCs

Restrictions and Limitations for Multi-VC MLP over ATM VCs

MLP on LNS

About MLP on LNS

ppp multilink links max Command

Performance and Scalability of MLP on LNS

PXF Memory and Performance Impact for MLP on LNS

Scenario 1

Scenario 2

Restrictions and Limitations for MLP on LNS

Configuring MLP on LNS

MLPoE LAC Switching

Restrictions for MLPoE LAC Switching

MLPoE at PTA

ATM Overhead Accounting

Prerequisites of MLPoE at PTA

Restrictions of MLPoE at PTA

Memory and Performance Impact of MLPoE at PTA

MLP-Based Link Fragmentation and Interleaving

Configuring MLP Bundles and Member Links

Creating an MLP Bundle Interface

Configuration Example for Creating an MLP Bundle Interface

Enabling MLP on a Virtual Template

Configuration Example for Enabling MLP on a Virtual Template

Adding a Serial Member Link to an MLP Bundle

Adding an ATM Member Link to an MLP Bundle

Configuration Example for Adding ATM Links to an MLP Bundle

Moving a Member Link to a Different MLP Bundle

Removing a Member Link from an MLP Bundle

Changing the Default Endpoint Discriminator

Configuration Example for Changing the Endpoint Discriminator

Configuration Examples for Configuring MLP

Configuration Example for Configuring MLP over Serial Interfaces

Configuration Example for Configuring Single-VC MLP over ATM

Configuration Example for Configuring Multi-VC MLP over ATM

Configuration Example for MLP on LNS

Configuration Example for MLPoE LAC Switching

Configuration Examples of MLPoE at PTA

Configuring MLPoE over IEEE 802.1Q VLANs

Configuring MLPoE through RADIUS

Verifying and Monitoring MLP Connections

Bundle Counters and Link Counters

Verification Examples for MLP Connections

Verification Example for the show interfaces multilink Command

Verification Example for the show ppp multilink Command

Verification Example for the show interfaces multilink stat Command

Related Documentation


Configuring Multilink Point-to-Point Protocol Connections


LAN-based applications and information transfer services, such as electronic mail, transmit large amounts of traffic, placing increased demand on wide-area networks (WANs). Multilink Point-to-Point Protocol (MLP) is a reliable and cost-effective solution that makes efficient use of WAN links.

This chapter describes MLP and how to configure it on serial and ATM connections on the Cisco 10000 series router. It includes the following topics:

Multilink Point-to-Point Protocol

MLP Bundles

MLP Groups

How MLP Determines the Link a Bundle Joins

IP Addresses on MLP-Enabled Links

Valid Ranges for MLP Interfaces

MLP Overhead

Configuration Commands for MLP

MLP over Serial Interfaces

Single-VC MLP over ATM Virtual Circuits

Multi-VC MLP over ATM Virtual Circuits

MLP-Based Link Fragmentation and Interleaving

Configuring MLP Bundles and Member Links

Configuration Examples for Configuring MLP

Verifying and Monitoring MLP Connections

Related Documentation

Multilink Point-to-Point Protocol

Multilink Point-to-Point Protocol (MLP) is used to combine multiple physical links into a single logical connection or MLP bundle (see Figure 22-1). Using MLP, you can increase bandwidth and more easily manage all of the circuits through a single interface. The MLP connection has a maximum bandwidth that is equal to the sum of the bandwidths of the component links. MLP also provides load balancing, multivendor interoperability, packet fragmentation and reassembly, and increased redundancy. The Cisco 10008 router implements the MLP specifications defined in RFC 1990.

MLP provides traffic load balancing over multiple wide-area network (WAN) links by sending packets and packet fragments over the links of bundle members. The multiple links come up in response to a defined load threshold. MLP mechanisms can calculate load on both inbound and outbound traffic, or on either direction as needed for the traffic between specific sites. MLP provides bandwidth on demand and reduces transmission latency across WAN links.

MLP allows packets to be fragmented and the fragments to be sent at the same time over multiple point-to-point links to the same remote address. Large nonreal-time packets are multilink encapsulated and fragmented into a small enough size to satisfy the delay requirements of real-time traffic. However, the smaller real-time packets are not multilink encapsulated. Instead, MLP interleaving provides a special transmit queue (priority queue) for these delay-sensitive packets to allow the packets to be sent earlier than other packet flows. Real-time packets remain intact and MLP interleaving mechanisms send the real-time packets between fragments of the larger nonreal-time packets. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

MLP can provide increased redundancy by allowing traffic to flow over the remaining member links when a port fails. You can configure the member links on separate physical ports on the same line card or on different line cards. If a port becomes unavailable, MLP directs traffic over the remaining member links with minimal disruption to the traffic flow.

MLP mechanisms preserve packet ordering over an entire bundle, guaranteeing that network packets are processed at the receiving system in the same order that they are logically transmitted.

Valid multilink interface values for MLP over serial or multi-VC MLP over ATM are from 1 to 9999 (Release 12.2(28)SB and later), or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later). For example:

Router(config)# interface multilink 8

The Cisco 10008 router supports the following MLP features:

MLP over Serial Interfaces

Single-VC MLP over ATM Virtual Circuits

Multi-VC MLP over ATM Virtual Circuits

MLP on LNS

MLPoE LAC Switching

MLPoE at PTA

Feature History for Multilink PPP

Cisco IOS Release
Description
Required PRE

12.0(23)SX

The MLP over Serial feature was introduced on the Cisco 10000 series router.

PRE1

12.2(28)SB

The MLP over Serial, Single-VC MLP over ATM VCs, and Multi-VC MLP over ATM VCs features were introduced on the PRE2.

PRE2

12.2(31)SB2

Support was added for the PRE3. The valid multilink interface range for MLP over serial and multi-VC MLP over ATM was extended from 1 to 9999 to also include 65,536 to 2,147,483,647.

PRE3

12.2(33)SB

The MLP on LNS feature was introduced on the Cisco 10000 series router. This feature is supported on the PRE3 and PRE4, but not on the PRE2.

PRE3 and PRE4

12.2(33)SB2

The MLPoE LAC Switching feature was introduced on the Cisco 10000 series router.

PRE3

12.2(33)XNE

The MLPoE at PTA feature was introduced on the Cisco 10000 series router.

PRE3 and PRE4


MLP Bundles

MLP combines multiple physical links into a logical bundle called an MLP bundle (see Figure 22-1). An MLP bundle is a single, virtual interface that connects to the peer system. Because the bundle is a single virtual interface, fancy queuing and QoS can be applied to its traffic (for example, policing and traffic shaping can be applied to the traffic flows). Without the bundle, each individual link to the peer system might do some form of fancy queuing, but no link knows about the traffic on the other parallel links, so fancy queuing and QoS cannot be applied uniformly to the entire aggregate traffic between the system and its peer system. A single virtual interface also simplifies the task of monitoring traffic to the peer system (for example, traffic statistics are all on one interface).

Figure 22-1 Multilink PPP Bundle

An endpoint discriminator is used to identify the member links of the MLP bundle.

Restrictions for MLP Bundles

The router supports member links at T1/E1 speeds or below for MLP bundling. You cannot bundle higher-speed links (for example, E3) because the router can buffer only 50 ms of data, based on the E1 speed.

MLP Bundles and PPP Links

MLP works with fully functional Point-to-Point Protocol (PPP) interfaces. An MLP bundle can consist of a PPP over serial link and a PPP over ATM link. As long as each link behaves like a standard serial interface, the mixed links work properly in a bundle.

Adding the ppp multilink group command to a link's configuration does not make that link part of the specified bundle. This command only places a restriction on the link. If the link negotiates to use multilink, then it must provide the proper identification to join the bundle on the multilink interface or to activate a bundle on that interface. If the link provides identification that coincides with another active bundle in the system, or the link fails to match the identity of a bundle that is already active on the multilink group interface, the connection terminates.

A link joins an MLP bundle only if it negotiates to use multilink when the connection is established and the identification information exchanged matches that of an existing bundle. If a link supplies identification information that does not match any known bundle, MLP creates a new bundle for the user.
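The bundle membership described above can be sketched in configuration. In the following minimal example (the interface and group numbers are illustrative, not from this guide), two PPP-encapsulated serial links are restricted to bundle 1; each link still joins only after negotiating multilink and presenting identification that matches the bundle:

```
interface Multilink1
 ip address 192.168.10.1 255.255.255.0
 ppp multilink
 ppp multilink group 1
!
interface Serial1/0/0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface Serial1/0/1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

Remember that the ppp multilink group command only restricts which bundle the links may join; the identification keys remain the determining factors.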

System Limits for MLP Bundles

Table 22-1 lists the system limits for MLP bundles.

Table 22-1 System Limits for MLP Bundles

Feature
Maximum No. of Members Per Bundle
Maximum No. of Bundles Per System
Maximum No. of Member Links Per System
Multilink Interface Range
LFI Supported

MLP over Serial

10

1250

2500

1 to 9999 (Release 12.2(28)SB and later) and from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later)

Yes

Interleaving on all member links

Single-VC MLP over ATM

1

8192

8192

10,000 and higher

Yes

Interleaving on 1 member link

Multi-VC MLP over ATM

10

1250

2500

1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later)

Yes

Interleaving on 1 member link

MLP LAC switched MLPPPoE

1

10240

10240

Yes

Interleaving on 1 member link


Note The multilink interface ranges described in Table 22-1 require Cisco IOS Release 12.2(28)SB or later releases. For releases earlier than Cisco IOS Release 12.2(28)SB, the valid multilink interface range is 1 to 2,147,483,647.

Cisco 10000 series routers do not support VAI bundle interfaces in a PTA configuration. VAI bundles are supported only on the L2TP network server (LNS) for MLPoLNS.

MLP Groups

When you configure the ppp multilink group command on a link, the command applies a restriction to the link that indicates the link is not allowed to join any bundle other than the indicated group interface, and that the connection is to be terminated if the peer system attempts to join a different bundle.

A link actually joins a bundle when the identification keys for that link match the identification keys for an existing bundle (see the "How MLP Determines the Link a Bundle Joins" section). Configuring the ppp multilink group command on a link does not allow the link to bypass this process, unless a bundle does not already exist for this particular user. When matching links to bundles, the identification keys are always the determining factors.

Because the ppp multilink group command merely places a restriction on the link, any MLP-enabled link that is not assigned to a particular multilink group can join the dedicated bundle interface if it provides the correct identification keys for that dedicated bundle. Removing the ppp multilink group command from an active link that currently is a member of a multilink group does not make that link leave the bundle because the link is still a valid member. It is just no longer restricted to this one bundle.

How MLP Determines the Link a Bundle Joins

A link joins a bundle when the identification keys for that link match the identification keys for an existing bundle.

Two keys define the identity of a remote system: the PPP username and the MLP endpoint discriminator. The PPP authentication mechanisms (for example, PAP or CHAP) learn the PPP username. The endpoint discriminator is an option negotiated by the Link Control Protocol (LCP). Therefore, a bundle consists of all of the links that have the same PPP username and endpoint discriminator.

A link that does not provide a PPP username or endpoint discriminator is an anonymous link. MLP collects all of the anonymous links into a single bundle referred to as the anonymous bundle or default bundle. Typically, there can be only one anonymous bundle. Any anonymous links that negotiate MLP join (or create) the anonymous bundle.

When using multilink group interfaces, more than one anonymous peer is allowed. When you preassign a link to an MLP bundle by using the ppp multilink group command, and the link is anonymous, the link joins the bundle interface it is assigned to if the interface is not already active and associated with a nonanonymous user.

MLP determines the bundle a link joins in the following steps:

1. When a link connects, MLP creates a bundle name identifier for the link.

2. MLP then searches for a bundle with the same bundle name identifier.

If a bundle with the same identifier exists, the link joins that bundle.

If a bundle with the same identifier does not exist, MLP creates a new bundle with the same identifier as the link, and the link is the first link in the bundle.

Table 22-2 describes the commands and associated algorithm used to generate a bundle name. In the table, "username" typically means the authenticated username; however, an alternate name can be used instead. The alternate name is usually an expanded version of the username (for example, VPDN tunnels might include the network access server name) or a name derived from other sources.

Table 22-2 Bundle Name Generation 

Command
Bundle Name Generation Algorithm

multilink bundle-name authenticated

The bundle name is the peer's username, if available.

If the peer does not provide a username, the algorithm uses the peer's endpoint discriminator.

Note The authenticated keyword specifies that the bundle name is based on whatever notion of a username the system can derive. The endpoint discriminator is ignored entirely, unless it is the only name that can be found.

The multilink bundle-name authenticated command is the default naming policy.

multilink bundle-name endpoint

The bundle name is the peer's endpoint discriminator.

If there is no endpoint discriminator, the algorithm uses the peer's username.

multilink bundle-name both

The name of the bundle is a concatenation of the username and the endpoint discriminator.
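For example, to name bundles by the peer's endpoint discriminator rather than by the default authenticated username, configure the naming policy in global configuration mode:

```
Router(config)# multilink bundle-name endpoint
```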


IP Addresses on MLP-Enabled Links

Configuring an IP address on a link used for MLP does not always work as expected. For example, consider the following configuration:

interface Serial 1/0/0
ip address 10.2.3.4 255.255.255.0
encapsulation ppp
ppp multilink

You might expect the following behavior as a result of this configuration:

If the interface does not negotiate to use MLP and the interface comes up as a regular PPP link, then the interface negotiates the Internet Protocol Control Protocol (IPCP) and its local address is 10.2.3.4.

If the interface did negotiate to use MLP, then the configured IP address is meaningless because the link is not visible to IP while it is part of a bundle. The bundle is a network-level interface and can have its own IP address, depending on the configuration used for the bundle.

Instead, if a link with an IP address configured comes up and joins a bundle, IP installs a route directly to that link interface and it might try to route packets directly to that link, bypassing the MLP bundle. This behavior occurs because IP considers an interface to be up for IP traffic whenever IP is configured on the interface and the interface is up. MLP intercepts and discards these misdirected frames. This condition occurs frequently if you use a virtual template interface to configure both the PPPoX member links and the bundle interface.

Using unnumbered IP interfaces enables you to work around IP problems and configure an IP address on an MLP-enabled link. The following example shows how to configure Multi-VC MLP over ATM using an unnumbered IP interface:

!
interface Multilink1
ip unnumbered Loopback0
peer default ip address pool mlpoa_pool
ppp chap hostname m1 
ppp multilink
ppp multilink group 1
!
interface atm 2/0/0
no ip address
!
interface atm 2/0/0.1 point-to-point
pvc 0/32
ppp multilink group 1
vbr-nrt 128 64 20
encapsulation aal5mux ppp Virtual-Template1
! 
! 
interface atm 2/0/0.2 point-to-point
pvc 0/33
ppp multilink group 1
vbr-nrt 128 64 20 
encapsulation aal5mux ppp Virtual-Template1
!
interface Virtual-Template1
no ip address
keepalive 30
ppp max-configure 110
ppp max-failure 100
ppp multilink
ppp timeout retry 5
!
ip local pool mlpoa_pool 100.1.1.1 100.1.7.254
!

Valid Ranges for MLP Interfaces

Table 22-3 lists the valid ranges you can specify when creating MLP interfaces using the interface multilink command.

Table 22-3 MLP Interface Ranges

Cisco IOS Release
PRE2 MLP Interface Ranges
PRE3 MLP Interface Ranges

Release 12.2(28)SB and later

PRE2: 1 to 9999
PRE3: —

Release 12.2(31)SB2 and later

PRE2: 1 to 9999, and 65,536 to 2,147,483,647
PRE3: 1 to 9999, and 65,536 to 2,147,483,647


MLP Overhead

MLP encapsulation adds six extra bytes (4 header, 2 checksum) to each outbound packet. These overhead bytes reduce the effective bandwidth on the connection; therefore, the throughput for an MLP bundle is slightly less than an equivalent bandwidth connection that is not using MLP. If the average packet size is large, the extra MLP overhead is not readily apparent; however, if the average packet size is small, the extra overhead becomes more noticeable.

Using MLP fragmentation adds additional overhead to a packet. Each fragment contains six bytes of MLP header plus a link encapsulation header.
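A back-of-the-envelope calculation (illustrative, not from this guide) shows why small packets feel the overhead more. For an unfragmented packet, the relative overhead is the six extra bytes divided by the encapsulated size:

```
overhead fraction = 6 / (packet size + 6)

1500-byte packet: 6 / 1506 = approximately 0.4%
64-byte packet:   6 / 70   = approximately 8.6%
```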

Configuration Commands for MLP

This section describes the following commands used to configure MLP and MLP-based link fragmentation and interleaving:

interface multilink Command

ppp multilink Command

ppp multilink fragment-delay Command

ppp multilink interleave Command

ppp multilink fragment disable Command

ppp multilink group Command

For more information about MLP-based link fragmentation and interleaving, see the Cisco 10000 Series Router Quality of Service Configuration Guide.

interface multilink Command

To create and configure a multilink bundle, use the interface multilink command in global configuration mode. To remove a multilink bundle, use the no form of the command.

interface multilink multilink-bundle-number

no interface multilink multilink-bundle-number

Syntax Description

multilink-bundle-number

A nonzero number that identifies the multilink bundle.


Command History

Cisco IOS Release
Description

12.0

The interface multilink command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.

12.2(31)SB2

This command was introduced on the PRE3. Valid multilink interface values changed. See the Usage Guidelines or Table 22-3.


Defaults

No multilink interfaces are configured.

Usage Guidelines

For Cisco IOS Release 12.2(28)SB and later releases, the range of valid values for multilink interfaces are the following:

MLP over Serial—1 to 9999 (Release 12.2(28)SB and later), and 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later)

Single-VC MLP over ATM—10,000 and higher

Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later), and 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later)

For releases earlier than Cisco IOS Release 12.2(28)SB, the valid multilink interface range is 1 to 2,147,483,647.

ppp multilink Command

To enable MLP on an interface, use the ppp multilink command in interface configuration mode. To disable MLP, use the no form of the command.

ppp multilink

no ppp multilink

Command History

Cisco IOS Release
Description

12.0(23)SX

The ppp multilink command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.


Defaults

The command is disabled.

Usage Guidelines

The ppp multilink command applies only to interfaces that use Point-to-Point Protocol (PPP) encapsulation.

When you use the ppp multilink command, the first channel negotiates the appropriate Network Control Protocol (NCP) layers (such as the IP Control Protocol and IPX Control Protocol), but subsequent links negotiate only the Link Control Protocol (LCP) and MLP.
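For example, a serial link is enabled for MLP by configuring PPP encapsulation first (the interface number is illustrative):

```
Router(config)# interface serial 1/0/0
Router(config-if)# encapsulation ppp
Router(config-if)# ppp multilink
```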

ppp multilink fragment-delay Command

To specify a maximum size, in units of time, for packet fragments on an MLP bundle, use the ppp multilink fragment-delay command in interface configuration mode. To reset the maximum delay to the default value, use the no form of the command.

ppp multilink fragment-delay delay-max

no ppp multilink fragment-delay delay-max

Syntax Description

delay-max

Specifies the maximum amount of time, in milliseconds, that is required to transmit a fragment. Valid values are from 1 to 1000 milliseconds.


Command History

Cisco IOS Release
Description

12.0(23)SX

The ppp multilink fragment-delay command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.


Defaults

If fragmentation is enabled, the fragment delay is 30 milliseconds.

Usage Guidelines

The ppp multilink fragment-delay command is useful when packets are interleaved and traffic characteristics such as delay, jitter, and load balancing must be tightly controlled.

MLP chooses a fragment size on the basis of the maximum delay allowed. If real-time traffic requires a certain maximum boundary on delay, using the ppp multilink fragment-delay command to set that maximum time can ensure that a real-time packet gets interleaved within the fragments of a large packet.

By default, MLP has no fragment size constraint, but the maximum number of fragments is constrained by the number of links. If interleaving is enabled, or if a fragment delay is explicitly configured with the ppp multilink fragment-delay command, then MLP uses a different fragmentation algorithm. In this mode, the number of fragments is unconstrained, but the size of each fragment is limited to the fragment-delay value, or 30 milliseconds if the fragment delay has not been configured.

The ppp multilink fragment-delay command is configured under the multilink interface. The value assigned to the delay-max argument is scaled by the link speed to convert the time value into a fragment size in bytes.
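For example, the following sketch caps fragment transmission time at 10 milliseconds on a bundle (the interface number and delay value are illustrative). On a full T1 member link (1.536 Mbps), 10 ms scales to a fragment size of roughly 1920 bytes:

```
Router(config)# interface multilink 1
Router(config-if)# ppp multilink fragment-delay 10
```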

ppp multilink interleave Command

To enable interleaving of real-time packets among the fragments of larger nonreal-time packets on an MLP bundle, use the ppp multilink interleave command in interface configuration mode. To disable interleaving, use the no form of the command.

ppp multilink interleave

no ppp multilink interleave

Command History

Cisco IOS Release
Description

12.0(23)SX

The ppp multilink interleave command was introduced on the Cisco 10000 series router.

12.2(16)BX

This command was introduced on the PRE2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.


Defaults

Interleaving is disabled.

Usage Guidelines

The ppp multilink interleave command applies to multilink interfaces, which are used to configure a bundle.

Interleaving works only when the queuing mode on the bundle is set to fair queuing.

If interleaving is enabled when fragment delay is not configured, the default delay is 30 milliseconds. The fragment size is derived from that delay, depending on the bandwidths of the links.
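A minimal sketch that enables interleaving together with an explicit fragment delay on the bundle interface (the interface number and delay value are illustrative):

```
interface Multilink1
 ppp multilink
 ppp multilink fragment-delay 10
 ppp multilink interleave
```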

ppp multilink fragment disable Command

To disable packet fragmentation, use the ppp multilink fragment disable command in interface configuration mode. To enable fragmentation, use the no form of this command.

ppp multilink fragment disable

no ppp multilink fragment disable

Command History

Cisco IOS Release
Description

11.3

This command was introduced as ppp multilink fragmentation.

12.2

The no ppp multilink fragmentation command was changed to ppp multilink fragment disable. The no ppp multilink fragmentation command was recognized and accepted through Cisco IOS Release 12.2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.


Usage Guidelines

The ppp multilink fragment delay and ppp multilink interleave commands take precedence over the ppp multilink fragment disable command. If either of those commands is configured on a multilink interface, the ppp multilink fragment disable command has no effect and the following message is displayed:

Warning: 'ppp multilink fragment disable' or 'ppp multilink fragment maximum' will be 
ignored, since multilink interleaving or fragment delay has been configured and have 
higher precedence.

To completely disable fragmentation, you must do the following:

Router(config-if)# no ppp multilink fragment delay
Router(config-if)# no ppp multilink interleave
Router(config-if)# ppp multilink fragment disable

ppp multilink group Command

To restrict a physical link to joining only a designated multilink group interface, use the ppp multilink group command in interface configuration mode. To remove the restriction, use the no form of the command.

ppp multilink group group-number

no ppp multilink group group-number

Syntax Description

group-number

Identifies the multilink group. This number must be identical to the multilink-bundle-number you assigned to the multilink interface. Valid values are:

MLP over Serial—1 to 9999

Single-VC MLP over ATM—10,000 and higher

Multi-VC MLP over ATM—1 to 9999


Command History

Cisco IOS Release
Description

12.0

The multilink-group command was introduced on the Cisco 10000 series router.

12.2

This command was changed to ppp multilink group. The multilink-group command is accepted by the command line interpreter through Cisco IOS Release 12.2.

12.2(28)SB

This command was integrated into Cisco IOS Release 12.2(28)SB.


Defaults

The command is disabled.

Usage Guidelines

By default, the ppp multilink group command is disabled, which means that the link can negotiate to join any bundle in the system.

When the ppp multilink group command is configured, the physical link is restricted from joining any but the designated multilink group interface. If a peer at the other end of the link tries to join a different bundle, the connection is severed. This restriction applies when MLP is negotiated between the local end and the peer system. The link can still come up as a regular PPP interface.

MLP over Serial Interfaces

The MLP over Serial interfaces feature enables you to bundle together T1 interfaces into a single logical connection called an MLP bundle (see the "MLP Bundles" section). MLP over Serial also provides the following functions:

Load balancing—MLP provides bandwidth on demand and uses load balancing across all member links (up to 10) to transmit packets and packet fragments. MLP mechanisms calculate the load on either the inbound or outbound traffic between specific sites. Because MLP splits packets and fragments across all member links during transmission, MLP reduces transmission latency across WAN links.

Increased redundancy—MLP allows traffic to flow over the remaining member links when a port fails. If you configure an MLP bundle that consists of T1 lines from more than one line card and one line card stops operating, the part of the bundle on the other line cards continues to operate.

Link fragmentation and interleaving—The MLP fragmenting mechanism fragments large nonreal-time packets and sends the fragments at the same time over multiple point-to-point links to the same remote address. Smaller real-time packets remain intact. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets, thus reducing real-time packet delay. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide, at the following URL:

http://www.cisco.com/en/US/docs/routers/10000/10008/configuration/guides/qos/10qlfi.html

Figure 22-2 shows an MLP bundle that consists of T1 interfaces from three T3 interfaces.

Figure 22-2 MLP Bundle for Multilink PPP over Serial Connections

Performance and Scalability for MLP over Serial Interfaces

Configure the hold-queue command in interface configuration mode for all physical interfaces. For example:

Router(config-if)# hold-queue 4096 in

For more information, see the "Scalability and Performance" chapter in this guide.

Restrictions and Limitations for MLP over Serial Interfaces

A multilink bundle can have up to 10 member links. The router supports both full T1 interfaces and fractional T1 interfaces as member links, but fractional T1 interfaces are supported only when LFI is enabled.


Note You can terminate the serial links on multiple line cards in the router chassis if all of the links are the same type, such as T1 or E1.


The router supports a maximum of 1250 bundles per system and a maximum of 2500 member links per system.

The valid multilink interface ranges are from 1 to 9999 (Release 12.2(28)SB and later) and from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later). For example:

Router(config)# interface multilink 8

Interleaving is supported on all member links. MLP over Serial-based LFI must be enabled on an interface that has interleaving turned on.

All member links in an MLP bundle must have the same encapsulation type and bandwidth.

If a virtual template attached to a member link specifies a bandwidth, the router does not clone the specified bandwidth to the MLP bundle and the member links.

You cannot manually configure the bandwidth on a bundle interface by using the bandwidth command.

You cannot apply a virtual template with MLP configured to an MLP bundle.

We strongly recommend that you use only strict priority queues when configuring MLP over Serial-based LFI. For more information, see the "Prioritizing Services" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Single-VC MLP over ATM Virtual Circuits

The Single-VC MLP over ATM virtual circuits (VCs) feature enhances the MLP over Serial interfaces feature by enabling you to configure multilink Point-to-Point Protocol (MLP) on an ATM VC. By doing so, you can aggregate multiple data paths (for example, PPP over ATM encapsulated ATM VCs) into a single logical connection called an MLP bundle (see the "MLP Bundles" section). The MLP bundle can have only one member link.

MLP supports link fragmentation and interleaving (LFI). When enabled, the MLP fragmentation mechanism multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. The smaller real-time packets remain intact and MLP sends the packets to a special transmit queue, allowing the packets to be sent earlier than other packet flows. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets. For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.
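Based on the requirements described in this section, the following is a hedged sketch of a Single-VC MLP over ATM configuration. The multilink number (10001, within the 10,000 to 65,534 range for single-VC bundles), VPI/VCI, interface numbers, and address are hypothetical, and the association commands can vary by release:

Router(config)# interface multilink 10001
Router(config-if)# ip address 10.1.1.1 255.255.255.252
Router(config)# interface virtual-template 1
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink group 10001
Router(config)# interface atm 2/0/0
Router(config-if)# pvc 1/40
Router(config-if-atm-vc)# vbr-nrt 2048 2048
Router(config-if-atm-vc)# encapsulation aal5mux ppp virtual-template 1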

Performance and Scalability for Single-VC MLP over ATM

Configure the hold-queue command in interface configuration mode for all physical interfaces, except when configuring the OC-12 ATM line card. The 1-Port OC-12 ATM line card does not require the hold-queue command. For example:

Router(config-if)# hold-queue 4096 in

Configure the following commands and recommended values on the virtual template interface:

ppp max-configure 110

ppp max-failure 100

ppp timeout retry 5

keepalive 30

For example:

Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30

For more information, see the "Scalability and Performance" chapter in this guide.

Restrictions and Limitations for Single-VC MLP over ATM

Only one member link is supported per bundle.

Single-VC MLP over ATM member links are restricted to nonaggregated PVCs (for example, variable bit rate-nonreal-time [VBR-nrt] and constant bit rate [CBR] ATM traffic classes only).

The router supports a maximum of 8192 bundles per system and 8192 member links per system.

Each member link can have a bandwidth rate up to 2048 kbps.

The router only supports member links with the same encapsulation type.

MLP PVCs cannot be on-demand VCs that are automatically provisioned.

Associating MLP over ATM PVCs with ATM virtual paths (VPs) is discouraged, though not prevented.

The valid multilink interface values are 10000 to 65534. For example:

Router(config)# interface multilink 10004

Values higher than 65,534 are used for multi-member bundles.

Cisco IOS software supports a maximum of 4096 total virtual template interfaces.

You cannot manually configure the bandwidth on a bundle interface using the bandwidth command.

If a virtual template attached to a member link specifies a bandwidth, the router does not clone the specified bandwidth to the MLP bundle and the member links.

You cannot apply a virtual template with MLP configured to an MLP bundle.

If link fragmentation and interleaving (LFI) is enabled, only one link is used for interleaving. For more information, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

We strongly recommend that you use only strict priority queues when configuring MLP over ATM-based LFI. For more information, see the "Prioritizing Services" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

Multi-VC MLP over ATM Virtual Circuits

The Multi-VC MLP over ATM virtual circuits (VCs) feature enhances the MLP over Serial interfaces feature by enabling you to configure multilink Point-to-Point Protocol (MLP) on multiple ATM VCs. By doing so, you can aggregate multiple data paths (for example, PPP over ATM encapsulated ATM VCs) into a single logical connection called an MLP bundle (see the "MLP Bundles" section). An MLP bundle can have up to 10 member links.

Multi-VC MLP over ATM provides the following functions:

Load balancing—MLP provides bandwidth on demand and uses load balancing across all member links (up to 10) to transmit packets and packet fragments. The multiple links come up in response to a defined load threshold. MLP mechanisms calculate load on both inbound and outbound traffic, or on either direction as needed for traffic between specific sites. Because MLP uses all member links to transmit packets and fragments, MLP reduces transmission latency across WAN links.

Increased redundancy—MLP allows traffic to flow over the remaining member links when a port fails. You can configure the member links on separate physical ports on the same line card or on different line cards. If a port becomes unavailable, MLP directs traffic over the remaining member links with minimal disruption to the traffic flow. MLP mechanisms preserve packet ordering over an entire bundle.

Link fragmentation and interleaving—The MLP fragmentation mechanism fragments packets and sends the fragments at the same time over multiple point-to-point links to the same remote address. MLP multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. The smaller real-time packets remain intact and MLP sends the packets to a special transmit queue, allowing the packets to be sent earlier than other packet flows. The MLP interleaving mechanism sends the real-time packets between the fragments of the nonreal-time packets.

For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.
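The functions above can be sketched as follows for a two-VC bundle; per the "MLP Bundles and PPP Links" guidance, the same virtual template can be applied to each member link. The interface numbers, VPI/VCI values, and address are hypothetical:

Router(config)# interface multilink 8
Router(config-if)# ip address 10.2.2.1 255.255.255.252
Router(config)# interface virtual-template 2
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink group 8
Router(config)# interface atm 2/0/0
Router(config-if)# pvc 1/41
Router(config-if-atm-vc)# vbr-nrt 1024 1024
Router(config-if-atm-vc)# encapsulation aal5mux ppp virtual-template 2
Router(config-if)# pvc 1/42
Router(config-if-atm-vc)# vbr-nrt 1024 1024
Router(config-if-atm-vc)# encapsulation aal5mux ppp virtual-template 2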

Performance and Scalability for Multi-VC MLP over ATM VCs

Configure the hold-queue command in interface configuration mode for all physical interfaces, except when configuring the OC-12 ATM line card. The 1-Port OC-12 ATM line card does not require the hold-queue command. For example:

Router(config-if)# hold-queue 4096 in

Configure the following commands and recommended values on the virtual template interface:

ppp max-configure 110

ppp max-failure 100

ppp timeout retry 5

keepalive 30

For example:

Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30

For more information, see the "Scalability and Performance" chapter in this guide.

Restrictions and Limitations for Multi-VC MLP over ATM VCs

A maximum of 10 member links is supported per bundle.

MLP over ATM member links are restricted to nonaggregated PVCs (for example, variable bit rate-nonreal-time [VBR-nrt] and constant bit rate [CBR] ATM traffic classes only).

The router supports a maximum of 1250 bundles per system and 2500 member links per system.

Each member link can have a bandwidth rate up to 2048 kbps.

The router only supports member links with the same encapsulation type.

The valid multilink interface ranges are from 1 to 9999 (Release 12.2(28)SB and later) and from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later). For example:

Router(config)# interface multilink 8

MLP PVCs cannot be on-demand VCs that are automatically provisioned.

Associating MLP over ATM PVCs with ATM virtual paths (VPs) is discouraged, though not prevented.

Cisco IOS software supports a maximum of 4096 total virtual template interfaces.

You cannot manually configure the bandwidth on a bundle interface using the bandwidth command.

You cannot apply a virtual template with MLP configured to an MLP bundle.

If a virtual template attached to a member link specifies a bandwidth, the router does not clone the specified bandwidth to the MLP bundle and the member links.

If link fragmentation and interleaving (LFI) is enabled, only one link is used for interleaving. For more information, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

We strongly recommend that you use only strict priority queues when configuring Multi-VC MLP over ATM-based LFI. For more information, see the "Prioritizing Services" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.

MLP on LNS

Networks are migrating from digital subscriber line (DSL) aggregation to broadband remote access server (BRAS) connectivity, with a mix of Ethernet and ATM access networks. Therefore, there is an increasing need to support MLP and link fragmentation and interleaving (LFI), which allow high-priority, low-latency packets to be interleaved between fragments of lower-priority, higher-latency packets. Voice over IP (VoIP) is an example of a low-latency service.

In Cisco IOS Release 12.2(33)SB, the MLP on LNS feature is introduced for asymmetric digital subscriber line (ADSL) deployments where the upstream bandwidth is low. The MLP on LNS feature can receive fragments from the customer premises equipment (CPE), reducing upstream latency even when a large packet arrives between the voice packets.

The MLP on LNS feature bundles virtual private dial network (VPDN) sessions into a single logical connection, which forms an MLP bundle on the LNS. Before the Cisco IOS 12.2(33)SB release, Cisco 10000 series routers supported multilink bundle termination only on the PPP termination aggregation (PTA) router. In the Cisco IOS 12.2(33)SB release, Cisco 10000 series routers also support MLP termination on the LNS. Figure 22-3 shows an MLP on LNS application.

Figure 22-3 MLP on LNS Application

The MLP on LNS feature is described in the following sections:

About MLP on LNS

PPP multilink links max Command

PXF Memory and Performance Impact for MLP on LNS

Restrictions and Limitations for MLP on LNS

Configuring MLP on LNS

About MLP on LNS

The multilink interface-based configuration requires one virtual template per bundle so that the ppp multilink group command can be configured on the virtual template. However, virtual templates scale only to 2000, which is insufficient for the MLP on LNS feature.

To address the virtual template scaling issue and to avoid cumbersome configuration management, in the Cisco IOS 12.2(33)SB release, virtual access bundles are supported. In virtual access bundles, the bundle interface is cloned from the virtual template when the first member link is negotiated on the LNS. The virtual access bundle support is limited to bundle termination on LNS.

Before the Cisco IOS 12.2(33)SB release, the multilink interface-based configuration was used to distinguish between single and multi-member bundles. However, for virtual access based bundle interfaces, you can no longer use the interface number range to distinguish between single and multi-member bundles, because the bundles are generated dynamically in the Cisco IOS 12.2(33)SB release. Instead, the user-specified value of the ppp multilink links max command is used to distinguish single and multi-member bundles.

The following two diagrams show two different MLP on LNS bundle configurations supported with the Cisco IOS 12.2(33)SB release. Figure 22-4 shows MLP on CPE for dial-up networks.

Figure 22-4 MLP on LNS-Multimember Bundle

Figure 22-5 shows a single-member bundle on the CPE. These are single-member bundles where the traffic received by the Cisco 10000 router is fragmented to interleave high-priority traffic in between low-priority network traffic.

Figure 22-5 MLP on LNS-Single-Member Bundle

To accommodate the scaling requirements of up to 2040 multi-member and 10240 single-member bundles for the MLP on LNS feature, an additional reassembly buffer is reserved in the external column memory (XCM). The reassembly buffer that was reserved in the Cobalt space is used for multi-member bundles and the XCM reassembly buffer is used for single-member bundles.

The fixed reassembly table size for the MLP on LNS feature to buffer fragments is 256 entries. The reassembly table size restricts the maximum differential delay across the different paths of the member links from the CPE to the LNS. For example, if there are 10 members in a bundle and one of the members is associated with a "slow" (high-delay) path, the other nine members must have their fragments and packets buffered while waiting for the slower link. Because the reassembly table stores descriptors, each entry represents one fragment, or a whole packet if fragmentation is not in effect. The amount of time each fragment takes to get transmitted is equal to the configured fragment delay, which is independent of link bandwidth. If fragmentation is not in effect, the transmit time depends on the packet size; smaller packets take less time, which reduces the tolerated delay. Therefore, the amount of tolerated differential delay is the reassembly table buffering limit for the other nine links:

(256 / 9) * frag_delay = 28.4 * frag_delay


Note The default differential delay for MLP on LNS is 50 ms.


Table 22-4 shows the resource usage on Cisco 10000 series router.

Table 22-4 Resource Usage

 

                                    VCCI1   HWIDB2             SWIDB3             PBLT4
Cisco 10000 series routers MAX      64000   Memory dependent   Memory dependent   16000
Bundle interface                    1       1                  1                  1
Member link (single-member bundle)  1       1                  1                  0
Member link (multi-member bundle)   1       1                  1                  1

1 A virtual circuit connection identifier (VCCI) is a variable that identifies a virtual circuit connection between two nodes.

2 A hardware interface descriptor block (HWIDB) represents a physical interface, which includes physical ports and channelized interface definitions.

3 A software interface descriptor block (SWIDB) represents a logical subinterface, such as a permanent virtual circuit (PVC) or virtual LAN (VLAN), or a Layer 2 encapsulation such as Point-to-Point Protocol (PPP) or High-Level Data Link Control (HDLC).

4 An HQF resource that is used by the RP and PXF to program physical layer scheduling for an interface. It could be considered an instance of physical layer scheduling; Cisco 10000 series routers currently support 16K such instances. All bundle interfaces (single or multi-member bundles) use one instance of this resource. For single-member bundles the scheduling is done at the logical layer. All members of multi-member bundles are scheduled at the physical layer, so each member link in a multi-member bundle uses one instance.


PPP multilink links max Command

Support for the ppp multilink links max command is new in the Cisco IOS 12.2(33)SB release, to distinguish between single and multimember MLP on LNS bundles. The default maximum number of links for the Cisco 10000 series routers is 10. The ppp multilink links max 1 command is required for single-member bundles.
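For example, to mark a virtual template for single-member bundles (a sketch; the template number is arbitrary, and some releases spell the final keyword as maximum):

Router(config)# interface virtual-template 1
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink links max 1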

Performance and Scalability of MLP on LNS

The following commands allow for better scaling when used in configuring MLP on LNS:

Configure the hold-queue command in interface configuration mode for the trunk interfaces in which the L2TP tunnel is negotiated. For example:

Router(config-if)# hold-queue 4096 in

Configure the following commands and recommended values on the virtual template interface:

Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30

Configure the lcp renegotiation always command on the VPDN group to renegotiate between the L2TP access concentrator (LAC) and the LNS. The maximum number of multilink member links that can be configured on the Cisco 10000 series routers is 20440. Different combinations of bundle configurations can be configured on the router at any given time, based on resource availability.
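For example (assuming a hypothetical VPDN group named lns-group):

Router(config)# vpdn-group lns-group
Router(config-vpdn)# lcp renegotiation always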

For more information, see the "Scalability and Performance" chapter in this guide.

PXF Memory and Performance Impact for MLP on LNS

PXF performance is measured as follows:

Packet buffer usage

The number of packet buffers available on the PRE3 is 832K small buffers (for packet sizes of 768 bytes or less) and 120K large buffers (for packet sizes greater than 768 bytes). With full scaling of 12280 bundles (2040 multilink and 10240 single link), the average number of buffers is 69.4 small buffers and 10.0 large buffers per bundle for a total of 79.4 buffers per bundle.

Each bundle's reassembly table includes 256 entries. However, in a single link bundle most packets arrive in order; therefore, fewer buffers are required per single link bundle. For example, if the average usage is 10 buffers per single link bundle, the average usage is 436.7 buffers per multilink bundle.

Packet processing rate

The PRE3 has a rate of 10 million contexts per second, which is the rate of contexts or packets passing through the PXF complex. The packet processing rate is measured by the number of packets per second that can either be enqueued or dequeued by the PRE3. If each packet takes two passes to be enqueued, the enqueue rate is 5 million contexts per second. Because enqueue and dequeue processing is performed concurrently, the overall performance is determined by the worst case of the two, as shown in the following sections.

If the packet processing demand exceeds the available contexts, nonpriority packets are dropped.

Scenario 1

Scenario 1 assumes a bidirectional rate of 64 kbps per link and the following configuration. Table 22-5 shows the speed performance of a 64-kbps link.

2040 multilink bundles

2, 5, and 10 links per multilink bundle

10240 single link bundles

200 byte packet size in both directions

100 byte fragment size (fragmentation for ingress only)


Note A fragmentation delay of 2ms requires a 16 byte fragment size. Because the MLP over L2TP header can be on the order of 50 bytes, a 16 byte fragment size is not possible.


Table 22-5 64-kbps Link Speed Performance

Links per multilink bundle                  10      5       2
Total links (multi + single)                30640   20440   18400
Total context rate (million contexts/sec)   9.8     6.5     4.6


This scenario shows that for 64-kbps links with maximum bundle scaling and high-demand traffic, the PXF can barely keep up with demand. Therefore, the total number of 64-kbps links should not exceed 20440.

Scenario 2

Scenario 2 assumes a bidirectional rate of 2 Mbps per link and the following configuration. Table 22-6 shows the speed performance of a 2-Mbps link.

500 and 2040 multilink bundles

2 and 4 links per bundle

No single link bundles

500 and 1000 byte packets in both directions

512 byte fragment size (fragmentation for ingress only)

Table 22-6 2-Mbps Link Speed Performance (in Million Contexts per Second)

Bundles                                  2040            500
Links per bundle                         4       2       4       2
Total links                              8160    4080    2000    1000
500 byte packets (million context/sec)   24.5    12.2    6.0     3.0
1000 byte packets (million context/sec)  16.3    8.2     4.0     2.0


This scenario shows that for 2-Mbps links with high-traffic demand, Cisco 10000 series routers cannot reach maximum bundle scaling. Therefore, we recommend that the total number of 2-Mbps links not exceed 4080.

Restrictions and Limitations for MLP on LNS

In Cisco IOS Release 12.2(33)SB, the MLP on LNS feature has the following restrictions:

The MLP on LNS feature does not include SSO support.

Bundles are only supported with Gigabit Ethernet and ATM as the trunk between the LAC and LNS.

Because the bandwidth of the member link is received from the LAC through the Connect speed AV-Pair, L2TP sessions on a single link bundle are provisioned at the logical layer (HQF). The L2TP sessions on a multi-member MLP bundle are provisioned as physical links and are bundled at the physical layer (HQF). For multi-member bundles, the bandwidth received through the AV-Pair carves out the bandwidth from the physical/tunnel interface to reserve it for MLP.

Oversubscription is not supported for MLP bundled L2TP members or on the underlying tunnel interface.

All member L2TP sessions within the same bundle belong to the same physical interface and the same L2TP tunnel.

QoS on multiple member MLP bundles is not supported. If any MLPoLNS bundles are negotiated on the Gigabit Ethernet or ATM VC interface, applying a service policy on the Gigabit Ethernet or ATM VC tunnel interface is also not supported.

Each member link in a bundle has the same speed. We do not recommend or support configuring member links of different speeds.

Fragmentation and interleaving on MLP on LNS bundles in the downstream direction are not supported.

Locally terminated member links and member links forwarded from the LAC are not supported within the same bundle (although the setup is not prevented).

Sessions from different tunnels are not allowed to join the same bundle. All members of a bundle must be part of the same L2TP tunnel and share the same physical interface.

Multiclass MLP is not supported for MLP on LNS bundles.

The MLP on LNS dequeue process adds one additional toaster phase per fragment, which impacts performance.

Multilink interface based bundles for MLP on LNS are not supported.

Virtual-access bundle support for existing MLP features is not included in this release.

The physical tunnel interface (the Gigabit Ethernet or ATM interface on which the L2TP tunnel for an MLPoLNS bundle is negotiated) can change dynamically, due to route changes or switching to a backup after problems on the line. These changes require the bundles to renegotiate.

For multi-member bundles, carve out and reserve the bandwidth from the physical interface, which is the trunk interface on which the L2TP tunnel is negotiated. The bandwidth available for use on the trunk interface or other connection is reduced by the sum of the bandwidth reserved for the bundle.

Bandwidth available = Gigabit Ethernet or ATM VC - Bandwidth of bundle

Bandwidth available = Gigabit Ethernet or ATM VC - (Bandwidth of bundle 1 + Bandwidth of bundle 2)

Packet loss during oversubscription and congestion is not fixed.

MLP on LNS sessions are torn down when there is a Hw-module reset or a physical OIR of an ATM line card in a tunnel interface.

Configuring MLP on LNS

You can refer to the following sections for configuring MLP on LNS:

Required Configuration Tasks for LNS, page 5-29

Optional Configuration Tasks for LNS, page 5-30

For a configuration example of the MLP on LNS feature, see the Configuration Example for MLP on LNS.

MLPoE LAC Switching

In the Cisco IOS 12.2(33)SB release, MLP bundling on LNS was supported. In the Cisco IOS 12.2(33)SB2 release, there is added support for switching MLPoEoVLAN sessions received on the LAC to the LNS. However, due to PXF resource limitations, this feature is supported on the PRE3 platform only.

For a configuration example of the MLP over Ethernet (MLPoE) LAC Switching feature, see Configuration Example for MLPoE LAC Switching.

The MLPoE LAC Switching is described in the following section:

Restrictions for MLPoE LAC Switching

Restrictions for MLPoE LAC Switching

In Cisco IOS Release 12.2(33)SB2, the MLPoE LAC Switching feature has the following restrictions:

MLPoVLAN encapsulation (between the CPE and LAC) is supported. MLPoEoE and MLPoEoQinQ are not supported.

L2TP tunnel over Gigabit Ethernet (between the LAC and LNS) and ATM is supported. However, VLAN and QinQ encapsulations for the L2TP tunnel are not supported.

Similar to the MLP on LNS feature, bundles are only supported with Gigabit Ethernet and ATM as the trunk between the LAC and LNS.

QoS on interfaces towards the CPE and the tunnel is not supported.

Only single-member MLPoE bundles are supported (with LFI support). The maximum number of single-member MLPoE bundles that can be supported is 10240.

MLPoE at PTA

In Cisco IOS Release 12.2(33)SB, MLPoE supports LFI on single-link MLP bundles. This support enables high-priority, low-latency packets to be interleaved between fragments of lower-priority, higher-latency packets. Figure 22-6 shows an MLPoE DSL network using LFI.

Figure 22-6 MLPoE DSL Network with LFI

In the upstream direction, the CPE fragments non-priority packets and interleaves high priority packets between the fragments. In the downstream direction, the Cisco 10000 series router reassembles the fragmented non-priority packets. However, from Cisco IOS Release 12.2(33)XNE onwards, to reduce any delay in sending high-priority packets, the router processes high priority packets as soon as they arrive.

Point-to-Point Protocol over Ethernet (PPPoE) sessions in the MLPoE at PTA feature are handled as follows:

All variations of PPPoE, such as PPPoEoE, PPPoEoA, PPPoEo802.1Q, and PPPoEoQinQ, are usable as member links for MLPoE bundles.

Termination of a MLPoE bundle in a Virtual Routing and Forwarding (VRF) block is similar to terminating a PPPoE session in a VRF.

MLPoE bundles are distinguished by the username that was used to authenticate the PPPoE session.

MLPoE reassembles received fragmented MLP packets, but fragmentation is not performed in the transmit direction.

PPPoE sessions are created dynamically and, unlike MLPoA, cannot be preconfigured for bundling with a multilink interface. MLPoE bundles are also created dynamically, when a PPPoE session with the multilink option enabled is configured for a user for the first time.

ATM Overhead Accounting

Figure 22-6 shows that the outbound interface on the BRAS to the DSLAM is Ethernet, while the encapsulation from the DSLAM to the CPE can be ATM. Because the DSLAM adds ATM overhead when it segments packets, the overhead must be accounted for at the BRAS, by applying traffic shaping, to avoid overrunning the subscriber line. Per-MLPoE bundle shaping with ATM overhead accounting is supported.

ATM overhead accounting for the MLPoE at PTA feature has the following restrictions:

Overhead accounting is only supported on single-member MLPoE bundles.

The line rate needs to be under-subscribed to prevent or reduce loss of traffic downstream.

Overhead accounting is supported for bi-level service policies only.

Overhead accounting support from Digital Subscriber Line Access Multiplexer (DSLAM) to CPE in the downstream direction can be applied at both logical and class levels.
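The bi-level policy restriction above can be sketched as a parent shaper with a child queueing policy. The rate, class criteria, policy names, and the account keywords are hypothetical; the available overhead accounting keywords depend on the release and the DSLAM-to-CPE encapsulation, so verify them against your release:

Router(config)# class-map match-all voice
Router(config-cmap)# match ip precedence 5
Router(config)# policy-map child-queuing
Router(config-pmap)# class voice
Router(config-pmap-c)# priority
Router(config)# policy-map parent-shape
Router(config-pmap)# class class-default
Router(config-pmap-c)# shape average 512000 account dot1q aal5 snap-rbe
Router(config-pmap-c)# service-policy child-queuing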

The MLPoE at PTA can be described in the following sections:

Prerequisites of MLPoE at PTA

Restrictions of MLPoE at PTA

Memory and Performance Impact of MLPoE at PTA

Configuration Examples of MLPoE at PTA

Prerequisites of MLPoE at PTA

The Cisco 10000 series router must be the PTA router.

Restrictions of MLPoE at PTA

In Cisco IOS Release 12.2(33)XNE, the MLPoE at PTA feature has the following restrictions:

Interaction with L2TP is not supported.

Only single-member MLP bundles are supported. The ppp multilink links maximum 1 command must be configured.

A maximum of 10240 single-member LFI bundles is supported.

No fragmentation of packets can be done in the downstream direction. The ppp multilink fragment delay interval command does not affect MLPoE.

High Availability (HA) is not supported.

Only the long-sequence-number packet header format option is supported.

The outer tag on the packet is a service tag, that is, a tag that identifies the DSLAM to the BRAS.

The multilink interface CLI is not supported, because an MLPoE bundle is created dynamically. The username and endpoint discriminator determine which bundle a link joins.

An MLPoE bundle must have a shaped PPPoE session configured.

The number of bundles and links that MLPoE can use depends on the single-member bundles and links left unused by other MLP bundles, and vice versa. For example, if MLPoA is using 5000 single-member bundles with 5000 member links, MLPoE can use only up to 5240 single-member bundles with 5240 member links, because the single-member bundle pool is exhausted. Similarly, if MLPoA is using 2040 multi-member bundles with 10200 member links (5 links per bundle), MLPoE can use only up to 10220 single-member bundles with 10220 member links, because the member-link pool is exhausted.
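Given the restrictions above, a virtual-template sketch for single-member MLPoE bundles at PTA might look as follows. The template number and policy name are hypothetical, and the shaped PPPoE session policy is assumed to be defined separately:

Router(config)# interface virtual-template 1
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink links maximum 1
Router(config-if)# service-policy output pppoe-shape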

Memory and Performance Impact of MLPoE at PTA

The MLPoE at PTA feature impacts the memory and VCCI resources of the router processor (RP) to scale MLPoE bundles. Table 22-7 shows the MLP interface descriptor block (IDB) and VCCI capabilities for LFI over Ethernet (LFIoE) bundles.

Table 22-7 Cisco 10000 Series Routers MLP IDB and VCCI Capabilities

Bundle Type                      LFIoE
Total bundles supported          10240
Members per bundle               1
SWIDB/HWIDB used per bundle      2
Total SW/HW IDBs maximum used    20480
VCCIs used per bundle            2
Total VCCIs maximum used         20480


The MLPoE at PTA feature does not impact the PXF memory resources.


Note The number of MLP bundles that can be brought up with the MLPoE at PTA feature depends upon the available system resources.


MLP-Based Link Fragmentation and Interleaving

MLP supports link fragmentation and interleaving (LFI). The MLP fragmentation mechanism multilink encapsulates large nonreal-time packets and fragments them into a small enough size to satisfy the delay requirements of real-time traffic. Smaller real-time packets are not multilink encapsulated. Instead, the MLP interleaving mechanism provides a special transmit queue (priority queue) for these delay-sensitive packets, allowing them to be sent earlier than other packet flows. Real-time packets remain intact, and the MLP interleaving mechanism sends them between fragments of the larger nonreal-time packets.

For more information about link fragmentation and interleaving, see the "Fragmenting and Interleaving Real-Time and Nonreal-Time Packets" chapter in the Cisco 10000 Series Router Quality of Service Configuration Guide.


Note On the PRE1, Cisco 10000 series routers support fragmentation only on single link bundles configured for LFI using the ppp multilink interleave command. For multiple link bundles, the router does not support fragmentation and interleaving. You can turn off fragmentation by using the no ppp multilink fragmentation command on both the Cisco 10000 router and the peer end.


Configuring MLP Bundles and Member Links

Table 22-8 shows the components you must define when configuring MLP (without link fragmentation and interleaving) on specific interface types.

Table 22-8 Requirements for Configuring MLP

Type                     MLP Bundle   Member Links   Virtual Template   Service Policy
MLP over Serial          Required     Required       Not required       Not required
Single-VC MLP over ATM   Required     Required       Required           Required1
Multi-VC MLP over ATM    Required     Required       Required           Required1

1 A service policy is required only when configuring MLP-based link fragmentation and interleaving (LFI) for Single-VC or Multi-VC MLP over ATM. For MLP-based LFI, a service policy with a priority queue defined must be attached to the multilink interface. The VC does not require a service policy.


To configure MLP bundles and member links, perform the following configuration tasks:

Creating an MLP Bundle Interface

Enabling MLP on a Virtual Template

Adding a Serial Member Link to an MLP Bundle

Adding an ATM Member Link to an MLP Bundle

The following configuration tasks are optional:

Moving a Member Link to a Different MLP Bundle

Removing a Member Link from an MLP Bundle

Changing the Default Endpoint Discriminator

Creating an MLP Bundle Interface

To create an MLP bundle interface, enter the following commands beginning in global configuration mode:

 
Command
Purpose

Step 1 

Router(config)# interface multilink multilink-bundle-number

Creates a multilink bundle. Enters interface configuration mode to configure the bundle.

multilink-bundle-number is a nonzero number that identifies the multilink bundle. For Cisco IOS Release 12.2(28)SB and later releases, valid values are:

MLP over Serial—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

Single-VC MLP over ATM—10,000 and higher.

Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

Note For releases earlier than Cisco IOS Release 12.2(28)SB, valid values are from 1 to 2,147,483,647.

Step 2 

Router(config-if)# ip address address mask

Specifies the IP address and subnet mask assigned to the interface.

address is the IP address.

mask is the subnet mask for the associated IP address.

Step 3 

Router(config-if)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 4 

Router(config-if)# ppp multilink fragment-delay delay-max

(Optional) Configures the maximum delay allowed for the transmission of a packet fragment on an MLP bundle.

delay-max specifies the maximum amount of time, in milliseconds, that is required to transmit a fragment. Valid values are from 1 to 1000 milliseconds.

Step 5 

Router(config-if)# ppp multilink interleave

(Optional) Enables interleaving of real-time packets among the fragments of larger nonreal-time packets on an MLP bundle.

Step 6 

Router(config-if)# ppp multilink fragment disable

(Optional) Disables packet fragmentation.

Note The router automatically adds the ppp multilink and ppp multilink group commands to the MLP bundle configuration.

Configuration Example for Creating an MLP Bundle Interface

Example 22-1 shows a sample configuration for creating an MLP bundle interface.

Example 22-1 Creating an MLP Bundle Interface

Router(config)# interface multilink 8
Router(config-if)# ip address 172.16.48.209 255.255.0.0
Router(config-if)# ppp chap hostname cambridge

Enabling MLP on a Virtual Template

The virtual template interface is attached to the member links, not to the MLP bundle. You can apply the same virtual template to the member links; you are not required to apply a unique virtual template to each member link.

To enable MLP on a virtual template, enter the following commands beginning in global configuration mode:

 
Command
Purpose

Step 1 

Router(config)# interface virtual-template number

Creates or modifies a virtual template interface that can be configured and applied dynamically to virtual access interfaces. Enters interface configuration mode.

number is a number that identifies the virtual template interface. You can configure up to 5061 total virtual template interfaces (requires Cisco IOS Release 12.2(28)SB and later releases).

Step 2 

Router(config-if)# ppp max-configure retries

Specifies the maximum number of configure requests to attempt before stopping the requests due to no response.

retries specifies the maximum number of retries. Valid values are from 1 to 255. The default is 10 retries. We recommend 110 retries.

Step 3 

Router(config-if)# ppp max-failure retries

Configures the maximum number of consecutive Configure Negative Acknowledgements (CONFNAKs) to permit before terminating a negotiation.

retries is the maximum number of retries. Valid values are from 1 to 255. The default is 5 retries. We recommend 100 retries.

Step 4 

Router(config-if)# ppp timeout retry response-time

Sets the maximum time to wait for Point-to-Point Protocol (PPP) negotiation messages.

response-time specifies the maximum time, in seconds, to wait for a response during PPP negotiation. We recommend 5 seconds.

Step 5 

Router(config-if)# keepalive [period]

Enables keepalive packets to be sent at the specified time interval to keep the interface active.

period specifies a time interval, in seconds. The default is 10 seconds. We recommend 30 seconds.

Step 6 

Router(config-if)# no ip address

Removes an IP address.

Step 7 

Router(config-if)# ppp multilink

Enables MLP on the virtual template interface.

Configuration Example for Enabling MLP on a Virtual Template

Example 22-2 shows a sample configuration for enabling MLP on a virtual template.

Example 22-2 Enabling MLP on a Virtual Template

Router(config)# interface virtual-template1
Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30
Router(config-if)# no ip address
Router(config-if)# ip mroute-cache
Router(config-if)# ppp authentication chap
Router(config-if)# ppp multilink
Router(config-if)# exit

Adding a Serial Member Link to an MLP Bundle

You can configure up to 10 serial member links per MLP bundle. When adding T1 member links, add only full T1 interfaces. If the interface you add to the MLP bundle contains information such as an IP address, routing protocol, or access control list, the router ignores that information. If you remove the interface from the MLP bundle, that information becomes active again.

To add serial member links to an MLP bundle, enter the following commands beginning in global configuration mode:

 
Command
Purpose

Step 1 

Router(config)# interface serial slot/module/port.channel:controller-number

Specifies the interface that you want to add to the MLP bundle. Enters interface configuration mode.

slot/module/port identifies the line card. The slashes are required.

channel: is the channel group number. The colon is required.

controller-number is the member link controller number.

Step 2 

Router(config-if)# hold-queue length {in | out}

Limits the size of the IP output queue on an interface. We recommend that you configure this command on all physical interfaces.

length is a number that specifies the maximum number of packets in the queue. Valid values are from 0 to 4096. We recommend 4096 packets for all line cards. By default, the input queue is 75 packets and the output queue is 40 packets.

in specifies the input queue.

out specifies the output queue.

Step 3 

Router(config-if)# ppp max-configure retries

Specifies the maximum number of configure requests to attempt before stopping the requests due to no response.

retries specifies the maximum number of retries. Valid values are from 1 to 255. The default is 10 retries. We recommend 110 retries.

Step 4 

Router(config-if)# ppp max-failure retries

Configures the maximum number of consecutive Configure Negative Acknowledgements (CONFNAKs) to permit before terminating a negotiation.

retries is the maximum number of retries. Valid values are from 1 to 255. The default is 5 retries. We recommend 100 retries.

Step 5 

Router(config-if)# ppp timeout retry response-time

Sets the maximum time to wait for Point-to-Point Protocol (PPP) negotiation messages.

response-time specifies the maximum time, in seconds, to wait for a response during PPP negotiation. We recommend 5 seconds.

Step 6 

Router(config-if)# keepalive [period]

Enables keepalive packets to be sent at the specified time interval to keep the interface active.

period specifies a time interval, in seconds. The default is 10 seconds. We recommend 30 seconds.

Step 7 

Router(config-if)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 8 

Router(config-if)# encapsulation ppp

Specifies Point-to-Point Protocol (PPP) encapsulation for the interface.

Step 9 

Router(config-if)# no ip address

Removes any existing IP address from the main interface.

Step 10 

Router(config-if)# ppp multilink

Enables MLP on the interface.

Step 11 

Router(config-if)# ppp multilink group group-number

Associates the interface with an MLP bundle.

group-number is a nonzero number that identifies the multilink group. Valid values are from 1 to 9999.

The group-number must be identical to the specified multilink-bundle-number of the MLP bundle to which you want to add this link.
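Putting these steps together, a serial member-link configuration might look like the following sketch (the interface number, hostname, and bundle number are illustrative):

Router(config)# interface serial 1/0/0/1:0
Router(config-if)# hold-queue 4096 in
Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30
Router(config-if)# ppp chap hostname cambridge
Router(config-if)# encapsulation ppp
Router(config-if)# no ip address
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink group 8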

Adding an ATM Member Link to an MLP Bundle

You can configure up to 10 member links per MLP bundle for Multi-VC MLP over ATM. However, you can configure only one member link per MLP bundle for Single-VC MLP over ATM.

To add ATM member links to an MLP bundle, enter the following commands beginning in global configuration mode:

 
Command
Purpose

Step 1 

Router(config)# interface atm slot/module/port

Configures or modifies the ATM interface you specify and enters interface configuration mode.

Step 2 

Router(config-if)# hold-queue length {in | out}

Limits the size of the IP output queue on an interface.

length is a number that specifies the maximum number of packets in the queue. Valid values are from 0 to 4096. We recommend 4096 packets for all line cards. By default, the input queue is 75 packets and the output queue is 40 packets.

in specifies the input queue.

out specifies the output queue.


Note We recommend that you configure this command on all physical interfaces, except when using the ATM OC-12 line card.


Step 3 

Router(config-if)# interface atm slot/module/port.subinterface point-to-point

Creates or modifies a point-to-point subinterface. Enters subinterface configuration mode.

Step 4 

Router(config-subif)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 5 

Router(config-subif)# no ip address

Removes any existing IP address from the main interface.

Step 6 

Router(config-subif)# pvc [name] vpi/vci

Creates or modifies an ATM PVC. Enters ATM VC configuration mode.

name is the name of the ATM PVC.

vpi/ is the virtual path identifier. If you do not specify a VPI value and the slash character (/), the VPI value defaults to 0.

vci is the virtual channel identifier.

Step 7 

Router(config-if-atm-vc)# vbr-nrt output-pcr output-scr output-mbs

Configures the variable bit rate-nonreal time (VBR-nrt) quality of service (QoS).

output-pcr is the output peak cell rate (PCR), in kbps.

output-scr is the sustainable cell rate (SCR), in kbps.

output-mbs is the output maximum burst cell size (MBS), expressed in number of cells.

Step 8 

Router(config-if-atm-vc)# encapsulation {aal5mux ppp virtual-template number | aal5ciscoppp virtual-template number | aal5snap}

Configures the ATM adaptation layer (AAL) and encapsulation type for an ATM virtual circuit (VC).

aal5mux ppp specifies the AAL and encapsulation type for multiplex (MUX)-type VCs. The keyword ppp is Internet Engineering Task Force (IETF)-compliant PPP over ATM. It specifies the protocol type being used by the MUX encapsulated VC. Use this protocol type for Multi-VC MLP over ATM to identify the virtual template. This protocol is supported on ATM PVCs only.

aal5ciscoppp specifies the AAL and encapsulation type for Cisco PPP over ATM. Supported on ATM PVCs only.

aal5snap specifies the AAL and encapsulation type that supports Inverse ARP. Logical Link Control/Subnetwork Access Protocol (LLC/SNAP) precedes the protocol datagram.

virtual-template number is the number used to identify the virtual template.

Step 9 

Router(config-if-atm-vc)# protocol ppp virtual-template number

Enables PPP sessions to be established over the ATM PVC using the configuration from the virtual template you specify. Use this command only if you specified aal5snap as the encapsulation type and you are configuring MLP on multiple VCs.

number is a nonzero number that identifies the virtual template that you want to apply to this ATM PVC.

Step 10 

Router(config-if-atm-vc)# ppp multilink group group-number

Associates the PVC with an MLP bundle.

group-number is a nonzero number that identifies the multilink group. Valid values are:

Single-VC MLP over ATM—10,000 and higher.

Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

The group-number must be identical to the specified multilink-bundle-number of the MLP bundle to which you want to add this link.

Configuration Example for Adding ATM Links to an MLP Bundle

Example 22-3 shows how to add ATM links to an MLP bundle. In the example, the virtual template named Virtual-Template 1 is applied to PVCs 0/34, 0/35, and 0/36. Each of these PVCs is assigned to MLP bundle group 1. Notice that all of the member links have the same encapsulation type. The router does not support member links with different encapsulation types.

Example 22-3 Adding ATM Links to an MLP Bundle

Router(config)# interface Multilink 1
Router(config-if)# ip address 10.6.6.1 255.255.255.0
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink group 1
!
Router(config)# interface virtual-template1
Router(config-if)# ppp max-configure 110
Router(config-if)# ppp max-failure 100
Router(config-if)# ppp timeout retry 5
Router(config-if)# keepalive 30
Router(config-if)# no ip address
Router(config-if)# ppp multilink
!
Router(config)# interface atm 6/0/0
Router(config-if)# no ip address
Router(config-if)# hold-queue 4096 in
!
Router(config)# interface atm 6/0/0.1 point-to-point
Router(config-if)# no ip address
Router(config-if)# pvc 0/34
Router(config-if-atm-vc)# vbr-nrt 512 256 20
Router(config-if-atm-vc)# encapsulation aal5snap
Router(config-if-atm-vc)# protocol ppp Virtual-Template 1
Router(config-if-atm-vc)# ppp multilink group 1
!
Router(config)# interface atm 6/0/0.2 point-to-point
Router(config-if)# no ip address
Router(config-if)# pvc 0/35
Router(config-if-atm-vc)# vbr-nrt 512 256 20
Router(config-if-atm-vc)# encapsulation aal5snap
Router(config-if-atm-vc)# protocol ppp Virtual-Template 1
Router(config-if-atm-vc)# ppp multilink group 1
!
Router(config)# interface ATM 6/0/0.3 point-to-point
Router(config-if)# no ip address
Router(config-if)# pvc 0/36
Router(config-if-atm-vc)# vbr-nrt 512 256 20
Router(config-if-atm-vc)# encapsulation aal5snap
Router(config-if-atm-vc)# protocol ppp Virtual-Template 1
Router(config-if-atm-vc)# ppp multilink group 1

Moving a Member Link to a Different MLP Bundle

To move a member link to a different MLP bundle, enter the following commands beginning in interface configuration mode:

 
Command
Purpose

Step 1 

Router(config)# interface type number

Specifies the interface that you want to move to a different MLP bundle. Enters interface or subinterface configuration mode.

type specifies the type of interface (for example, ATM).

number specifies the interface number and is the slot/module/port.subinterface number or the slot/module/port.channel:controller-number of the interface (for example, ATM 1/0/0.1).

Step 2 

Router(config-if)# ppp chap hostname hostname

(Optional) Identifies the hostname sent in the Challenge Handshake Authentication Protocol (CHAP) challenge.

hostname is the name of the bundle group. This name uniquely identifies the bundle.

Note If you configure this command on the bundle and its member links, specify the same identifier for both the bundle and the member links.

Step 3 

Router(config-if)# ppp multilink group group-number

Moves this interface to the MLP bundle you specify.

group-number identifies the multilink group. Change this group-number to the new MLP group group-number. Valid values are:

MLP over Serial—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).

Single-VC MLP over ATM—10,000 and higher.

Multi-VC MLP over ATM—1 to 9999 (Release 12.2(28)SB and later) or from 1 to 9999 and 65,536 to 2,147,483,647 (Release 12.2(31)SB2 and later).
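For example, the following sketch moves serial interface 1/0/0/1:0 to MLP bundle 9 (the interface number, hostname, and group number are illustrative):

Router(config)# interface serial 1/0/0/1:0
Router(config-if)# ppp chap hostname m9
Router(config-if)# ppp multilink group 9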

Removing a Member Link from an MLP Bundle

To remove a member link from an MLP bundle, enter the following commands beginning in global configuration mode:

 
Command
Purpose

Step 1 

Router(config-if)# interface type number

Specifies the member link that you want to remove from the MLP bundle. Enters interface configuration mode.

type specifies the type of interface (for example, ATM).

number specifies the interface number and is the slot/module/port.subinterface number or the slot/module/port.channel:controller-number of the interface (for example, ATM 1/0/0.1).

Step 2 

Router(config-if)# no ppp multilink group group-number

Removes the member link from the MLP group.

group-number is the number of the MLP group from which you want to remove the member link.

Step 3 

Router(config-if)# no ppp multilink

Disables multilink for the link.

Step 4 

Router(config-if)# no ppp chap hostname

Removes PPP authentication.
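For example, the following sketch removes serial interface 1/0/0/2:0 from MLP bundle 1 (the interface and group numbers are illustrative):

Router(config)# interface serial 1/0/0/2:0
Router(config-if)# no ppp multilink group 1
Router(config-if)# no ppp multilink
Router(config-if)# no ppp chap hostname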

Changing the Default Endpoint Discriminator

When the local system negotiates the use of MLP with the peer system, the default endpoint discriminator value is the username used for authentication. That username is configured for the interface with the ppp chap hostname or ppp pap sent-username command, or it defaults to the globally configured hostname.

To change the default endpoint discriminator, enter the following command in interface configuration mode:

Command
Purpose

Router(config-if)# ppp multilink endpoint {hostname | ip ip-address | mac lan-interface | none | phone telephone-number | string char-string}

Overrides or changes the default endpoint discriminator the system uses when negotiating the use of MLP with the peer system.

hostname indicates to use the hostname configured for the router. This is useful when multiple routers are using the same username to authenticate, but have different hostnames.

ip ip-address indicates to use the supplied IP address.

mac lan-interface indicates to use the specified LAN interface whose MAC address is to be used.

none causes negotiation of the Link Control Protocol (LCP) without requesting the endpoint discriminator option, which is useful when the router connects to a malfunctioning peer system that does not handle the endpoint discriminator option properly.

phone telephone-number indicates to use the specified telephone number. Accepts E.164-compliant, full international telephone numbers.

string char-string indicates to use the supplied character string.


Configuration Example for Changing the Endpoint Discriminator

Example 22-4 shows how to change the MLP endpoint discriminator from the default CHAP hostname C-host1 to the hostname cambridge.

Example 22-4 Changing the Default Endpoint Discriminator

Router(config)# interface multilink 8
Router(config-if)# ip address 10.1.1.4 255.255.255.0
Router(config-if)# ppp chap hostname C-host1
Router(config-if)# ppp multilink endpoint hostname cambridge

Configuration Examples for Configuring MLP

This section provides the following configuration examples:

Configuration Example for Configuring MLP over Serial Interfaces

Configuration Example for Configuring Single-VC MLP over ATM

Configuration Example for Configuring Multi-VC MLP over ATM

Configuration Example for MLP on LNS

Configuration Example for MLPoE LAC Switching

Configuration Examples of MLPoE at PTA

Configuration Example for Configuring MLP over Serial Interfaces

Example 22-5 shows how to configure MLP over serial interfaces. In the example, serial interfaces 1/0/0/1:0 and 1/0/0/2:0 are added to the Multilink1 bundle.

Example 22-5 Configuring MLP on Serial Interfaces

interface Multilink1
 ip address 100.1.1.1 255.255.255.0
 no keepalive
 ppp multilink
 ppp multilink group 1
!
interface serial 1/0/0/1:0
 no ip address
 encapsulation ppp
 ppp chap hostname m1
 ppp multilink
 ppp multilink group 1
!
interface serial 1/0/0/2:0
 no ip address
 encapsulation ppp
 ppp chap hostname m1
 ppp multilink
 ppp multilink group 1

Configuration Example for Configuring Single-VC MLP over ATM

Example 22-6 shows how to configure Single-VC MLP over ATM. In the example, PVC 0/36 on ATM subinterface 5/0/0.3 is added to a single-link MLP bundle. Single-VC MLP over ATM is configured at remote sites to deploy LFI, which protects interactive traffic on low-speed ATM VCs.

Example 22-6 Configuring Single-VC MLP over ATM VCs

interface ATM5/0/0
 no ip address
 no atm ilmi-keepalive

interface ATM5/0/0.3 point-to-point
 pvc 0/36
  vbr-nrt 612 512
  encapsulation aal5mux ppp Virtual-Template1
  ppp multilink group 10001

interface Virtual-Template1
 bandwidth 512
 no ip address
 ppp multilink

interface Multilink 10001
 ip address <ip address>
 ppp multilink
 ppp multilink group 10001

Configuration Example for Configuring Multi-VC MLP over ATM

Example 22-7 shows how to configure Multi-VC MLP over ATM. In the example, PVC 0/36 on ATM subinterface 5/0/0.3 and PVC 0/37 on ATM subinterface 5/0/0.4 are added to the Multilink2 bundle. The virtual template named Virtual-Template1 is applied to PVC 0/36 and PVC 0/37.

Example 22-7 Configuring Multi-VC MLP over ATM VCs

interface Multilink2
 ip address 100.1.2.1 255.255.255.0
 ppp multilink
 ppp multilink group 2
!
interface ATM5/0/0
 no ip address
 no atm ilmi-keepalive
!
interface ATM5/0/0.3 point-to-point
 pvc 0/36
  ppp chap hostname m2
  ppp multilink group 2
  vbr-nrt 128 64 20
  encapsulation aal5mux ppp Virtual-Template1
! 
interface ATM5/0/0.4 point-to-point
 pvc 0/37
  ppp chap hostname m2
  ppp multilink group 2
  vbr-nrt 128 64 20
  encapsulation aal5mux ppp Virtual-Template1 
!
interface Virtual-Template1
 no ip address
 no keepalive
 ppp max-configure 110
 ppp max-failure 100
 ppp multilink
 ppp timeout retry 5
!

Configuration Example for MLP on LNS

Example 22-8 shows how to set up an L2TP tunnel on the GigabitEthernet interface; the VPDN member links are negotiated over the tunnel and added to an MLP bundle cloned from virtual template 500.

Example 22-8 MLP on LNS

aaa new-model
!
!
aaa authentication ppp default local
aaa authentication ppp TESTME group radius
aaa authorization network default local 
aaa authorization network TESTME group radius 
!
aaa session-id common

buffers small perm 15000
buffers mid perm 12000
buffers big perm 8000


!
vpdn enable
!
vpdn-group LNS_1
 accept-dialin
  protocol l2tp
  virtual-template 500
 terminate-from hostname LAC1-1
 local name LNS1-1
 lcp renegotiation always
 l2tp tunnel receive-window 100
 l2tp tunnel password 0 cisco
 l2tp tunnel nosession-timeout 30
 l2tp tunnel retransmit retries 7
 l2tp tunnel retransmit timeout min 2
 l2tp tunnel retransmit timeout max 8
!
!
interface GigabitEthernet2/0/0
 ip address 210.1.1.3 255.255.255.0
 negotiation auto
 hold-queue 4096 in
!
!
interface Virtual-Template500
 ip unnumbered Loopback1
 peer default ip address pool pool-1
 ppp mtu adaptive
 ppp timeout authentication 100
 ppp max-configure 110
 ppp max-failure 100
 ppp timeout retry 5
 keepalive 30
 ppp authentication pap TESTME
 ppp authorization TESTME
 ppp multilink
!
ip local pool pool-1 1.1.1.1 1.1.1.100

radius-server host 15.1.0.100 auth-port 1645 acct-port 1646 key cisco
radius-server retransmit 0

Configuration Example for MLPoE LAC Switching

Example 22-9 shows how to configure the LAC for switching an MLPoE connection to the LNS, while also forwarding the DSL tags.

Example 22-9 MLPoE LAC Switching

aaa new-model
!
multilink bundle-name authenticated
vpdn enable
!
vpdn-group LACoe_LFI
 request-dialin
  protocol l2tp
  domain hello_oe
 dsl-line-info-forwarding
 initiate-to ip 192.168.125.54
 local name LACoe_LFI
 l2tp tunnel password 0 lab
!
username LNSoe_LFI nopassword
!
bba-group pppoe global
 virtual-template 800
 vendor-tag dsl-sync-rate service
!
interface GigabitEthernet4/0/0
 no ip address
 negotiation auto
!
interface GigabitEthernet4/0/0.1
 encapsulation dot1Q 800
 pppoe enable group global
!
interface GigabitEthernet4/1/0
 ip address 192.168.125.53 255.255.255.0
 negotiation auto
!
interface Virtual-Template800 
 no peer default ip address
 keepalive 30
 ppp authentication pap
 ppp multilink
 ppp multilink links maximum 1
!

Configuration Examples of MLPoE at PTA

This section has the following configuration examples of the MLPoE at PTA feature:

Configuring MLPoE over IEEE 802.1Q VLANs

Configuring MLPoE through RADIUS

Configuring MLPoE over IEEE 802.1Q VLANs

Example 22-10 shows how to configure MLPoE over IEEE 802.1Q VLANs:

Example 22-10 Configuring MLPoE over IEEE 802.1Q VLANs

policy-map policy_mlpoe_out
  class class-default
    shape average 2048000
policy-map policy_mlpoe_in
  class class-default
    shape average 512000
!
!
bba-group pppoe PPPoE
 virtual-template 3
 sessions per-vc limit 65530
 sessions per-mac limit 65530
 sessions per-vlan limit 8000 inner 1
!
interface Loopback3
 ip address 13.0.0.1 255.0.0.0
!
interface GigabitEthernet1/0/0
 ip address 1.0.0.1 255.255.0.0
 negotiation auto
!
interface GigabitEthernet1/0/0.1
 encapsulation dot1Q 2
 pppoe enable group PPPoE
 no snmp trap link-status
!
interface Virtual-Template3
 ip unnumbered Loopback3
 peer default ip address pool MLPoEpool
 ppp authentication pap
 ppp multilink
 ppp multilink links maximum 1
 ppp multilink interleave
 ppp multilink fragment delay 8
 service-policy output policy_mlpoe_out
 service-policy input policy_mlpoe_in
!
ip local pool MLPoEpool 13.1.0.2 13.1.255.255
!
!
end

Note As MLPoE supports only single-member bundles, the ppp multilink links maximum 1 command must be configured to restrict the number of links that join the bundle.


Configuring MLPoE through RADIUS

Example 22-11 shows how to configure MLPoE through Remote Authentication Dial-In User Service (RADIUS):

Example 22-11 Configuring MLPoE through RADIUS

cisco@domain_1 Password="cisco"
 Service-Type=Framed-User,
 Framed-Protocol=PPP,
 Framed-IP-Address=255.255.255.254,
 Cisco-avpair = "lcp:interface-config=ppp multilink",
 Cisco-avpair = "lcp:interface-config=ppp multilink interleave",
 Cisco-Policy-Down = "policy_mlpoe_in"


Note A PPPoE session shaper is required on the virtual template, or must be applied through RADIUS, to avoid flooding a downstream device such as an ADSL2+ modem.


Verifying and Monitoring MLP Connections

To verify and monitor MLP connections, enter the following commands in privileged EXEC mode:

Command
Purpose

Router# debug ppp multilink events

Displays information about events affecting multilink groups established for Bandwidth Allocation Control Protocol (BACP).

Router# show atm pvc

Displays all ATM permanent virtual circuits (PVCs) and traffic information.

Router# show interfaces type number

Displays statistics for the interface you specify. If you do not specify a specific interface, statistics display for all interfaces configured on the router.

Router# show interfaces virtual-access number [configuration]

Displays status, traffic data, and configuration information about the virtual access interface you specify.

Note This command currently displays statistics for system traffic only. Statistics for bundle traffic do not display. For information about bundle traffic, see the show interfaces or show ppp multilink command.

number is the number of the virtual access interface.

(Optional) configuration restricts output to configuration information.

Router# show interfaces multilink group-number [stat]

Displays configuration information about the MLP bundle you specify.

group-number is a nonzero number that identifies the multilink bundle.

(Optional) stat displays traffic statistics for the MLP bundle such as the number of packets in and out.

Router# show ppp multilink [bundle-interface]

Displays bundle information for all of the MLP bundles and their PPP links configured on the router.

(Optional) bundle-interface specifies the multilink interface (for example, Multilink5).

If you specify bundle-interface, the command displays information for only that specific bundle.

Router# show running-config

Displays information about the current router configuration, including information about each interface configuration.


Bundle Counters and Link Counters

When you enter the show interface command on an MLP bundle interface and on all of its member link interfaces, you might expect the counters on the bundle to be equal to the sum of the counters for all of the link interfaces. However, this is not the case.

The statistics for the various interfaces reflect the data that actually goes through those interfaces. The data that goes through the bundle is different from the data going through the links. All of the traffic at the bundle level does eventually pass through the link level, but it is not in the same format. In addition, links also carry traffic that is private to that link, such as link-level keepalives.

The following list describes some of the reasons link-level and bundle-level counts might be different (ignoring the link-private traffic):

Multilink fragmentation might be occurring. A single packet at the bundle level becomes multiple packets at the link level.

Frames at the bundle level include only bundle-level encapsulation, which consists of a 2-byte PPP header (or 1-byte header under some circumstances).

Frames at the link level include link level encapsulation bytes, which include all forms of media-specific encapsulation and framing. This information includes headers and trailers for High-Level Data Link Control (HDLC) and PPP over ATM. The link-level encapsulation bytes also include multilink subheaders (for example, sequence numbers), if they are used.


Note Multilink subheaders are not part of the bundle-level packet encapsulation. They are part of the encapsulation added to fragments before placing the fragments on the link; they are not added to the network-level datagrams (for example, IP packets) before the datagrams are sent to the fragmentation engine.


Because of the factors listed above, the counts on the links can be greater than the counts on the bundle. The link level has a great deal of overhead that is not visible at the bundle level.

Verification Examples for MLP Connections

This section provides the following verification examples:

Verification Example for the show interfaces multilink Command

Verification Example for the show ppp multilink Command

Verification Example for the show interfaces multilink stat Command

Verification Example for the show interfaces multilink Command

Example 22-12 shows sample output for the show interfaces multilink command. In the example, configuration information and packet statistics display for the MLP bundle 8.

Example 22-12 Sample Output for the show interfaces multilink Command

Router# show interfaces multilink 8
Multilink8 is up, line protocol is up
Hardware is multilink group interface
Internet address is 10.1.1.1/24
MTU 1500 bytes, BW 15360 Kbit, DLY 100000 usec, rely 255/255, load 1/255
Encapsulation PPP, crc 16, loopback not set
Keepalive not set
DTR is pulsed for 2 seconds on reset
LCP Open, multilink Open
Open:IPCP
Last input 15:24:43, output never, output hang never
Last clearing of "show interface" counters 15:27:59
Queueing strategy:fifo
Output queue 0/40, 0 drops; input queue 0/75, 0 drops
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
36 packets input, 665 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
31 packets output, 774 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

Verification Example for the show ppp multilink Command

Example 22-13 shows sample output from the show ppp multilink command. In the example, information about the MLP over ATM bundle (Multilink3) displays first. Information about the member links then displays, including the number of active and inactive member links. Class fields are omitted from the output; everything is implicitly in receive class 0 and transmit class 0.

Example 22-13 Sample Output for the show ppp multilink Command

Router# show ppp multilink

Multilink3, bundle name is multilink_name-3
  Endpoint discriminator is multilink_name-3
  Bundle up for 3d21h, total bandwidth 128, load 1/255
  Receive buffer limit 24384 bytes, frag timeout 1000 ms
  Bundle is Distributed
    0/0 fragments/bytes in reassembly list
    1 lost fragments, 1 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x831D received sequence, 0x0 sent sequence
  C10K Multilink PPP info
    Bundle transmit info
        send_seq_num   0x0
    Bundle reassembly info
        expected_seq_num:  0x00831E
  Member links: 2 active, 0 inactive (max 10, min not set)
    Vi5, since 3d21h, 16 weight, 82 frag size
    Vi4, since 3d19h, 16 weight, 82 frag size
No inactive multilink interfaces

The following list describes the bundle-level fields and lines in the show ppp multilink command output:

Bundle name is name—The bundle identifier for the bundle.

Bundle up for time—The elapsed time since the bundle first came up.

load n/255—The traffic load on the bundle as multilink computes loads for bandwidth-on-demand purposes. This load might count all traffic, or just inbound or outbound traffic, depending on the configuration.

Receive buffer limit n bytes—The maximum amount of fragment data that multilink can buffer in its fragment reassembly engine for each receive class. This amount is derived from the configured slippage constraints.

Frag timeout n ms—The maximum amount of time that multilink waits for an expected fragment before declaring it lost. This limit applies only when fragment loss cannot be detected by other, faster means such as sequence number-based detection.

Member links:—The number of active and inactive links currently in the bundle, followed by the desired minimum and maximum number of links. The actual number might be outside the range.
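As a quick aid to interpreting the load field above, the n/255 scale maps to a utilization fraction as shown in the following minimal sketch. The helper names are hypothetical illustrations, not Cisco IOS identifiers:

```python
# Minimal illustration of the "load n/255" scale used in the output
# above. Function names are hypothetical, not Cisco IOS code.

def load_fraction(n: int) -> float:
    """Convert a reported 'load n/255' value to a utilization fraction."""
    return n / 255.0

def load_value(utilization: float) -> int:
    """Convert a 0.0-1.0 utilization fraction to the n/255 scale."""
    return round(utilization * 255)

# A bundle running at roughly half capacity reports about load 128/255.
print(load_value(0.5))
```

For example, the "load 1/255" shown in Example 22-12 corresponds to a nearly idle bundle (about 0.4 percent utilization).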

After all of the bundle parameters display, information about each individual link in the bundle displays. Extra link-level parameters might be shown after each link in certain circumstances. The following list describes the individual link parameters:

Weight—The weight is used for load balancing. Data is distributed between the member links in proportion to their weight. The weight is proportional to multilink's notion of the effective bandwidth of a link. Therefore, multilink effectively distributes data to the links in proportion to their bandwidth.

The effective bandwidth of a link is the configured bandwidth value, except on asynchronous lines where multilink uses a value that is 0.8 times the configured bandwidth setting. This exception occurs because, on an asynchronous line, at best only 8/10 of the raw bandwidth is available for transmitting real data and the remainder is consumed in framing overhead.

Previously, the weight also controlled the size of the fragments generated for that link. However, Cisco IOS software now computes a separate fragment size value.

Frag size—The size of the largest fragment that can be generated for that link. It is the size of the MLP payload carried by a fragment and does not include MLP headers or link-level framing.

Unsequenced—The serial link is unsequenced and packets can arrive in a different order than the peer transmitted them. To compensate for this, multilink relaxes its lost fragment detection mechanisms.

Receive only (or receive only pending)—The link is in idle mode or is about to be put in idle mode. Processing of arriving data on the link continues normally, but data is not transmitted on the link. The remote system is expected not to send data on the link.
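The weight and fragment-size behavior described above can be modeled with the following sketch. This is an illustration, not Cisco IOS source code: the 0.8 asynchronous factor and the bandwidth-proportional distribution come from the text, while the fragment-delay formula is the general link fragmentation rule of thumb (bandwidth times delay budget, divided by 8 bits per byte) and all function names are assumptions:

```python
# Illustrative model of the link parameters described above.
# The 0.8 asynchronous factor and weight-proportional distribution
# follow the text; the fragment-delay formula and all names here
# are assumptions for illustration only.

def effective_bandwidth(configured_kbps: float, asynchronous: bool = False) -> float:
    # On an async line, at best 8/10 of the raw bandwidth carries
    # real data; the rest is consumed by framing overhead.
    return configured_kbps * 0.8 if asynchronous else configured_kbps

def distribute(total_bytes: int, links) -> dict:
    """Split traffic across links in proportion to effective bandwidth.

    links: iterable of (name, configured_kbps, asynchronous) tuples.
    """
    weights = {name: effective_bandwidth(bw, is_async)
               for name, bw, is_async in links}
    total_weight = sum(weights.values())
    return {name: total_bytes * w / total_weight
            for name, w in weights.items()}

def frag_payload_bytes(bandwidth_kbps: int, fragment_delay_ms: int) -> int:
    # Largest MLP payload that serializes within the delay budget;
    # excludes MLP headers and link-level framing, per the text above.
    return bandwidth_kbps * fragment_delay_ms // 8

# Two equal-bandwidth synchronous links each carry half the traffic.
shares = distribute(1000, [("Vi4", 64, False), ("Vi5", 64, False)])
```

In Example 22-13, the two member links (Vi4 and Vi5) show equal weights of 16, so data is split evenly between them; an asynchronous link of the same configured bandwidth would receive a proportionally smaller share.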

Verification Example for the show interfaces multilink stat Command

Example 22-14 shows sample output for the show interfaces multilink stat command. In the example, the numbers of packets and characters in and out display for each switching path.

Example 22-14 Sample Output for the show interfaces multilink stat Command

Router# show interfaces multilink 8 stat
Multilink8
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor         36        665         31        774
             Route cache          0          0          0          0
                   Total         36        665         31        774

Related Documentation

This section provides hyperlinks to additional Cisco documentation for the features discussed in this chapter. To display the documentation, click the document title or a section of the document highlighted in blue. When appropriate, paths to applicable sections are listed below the documentation title.

Feature
Documentation

Multilink PPP

Cisco IOS Dial Services Configuration Guide: Terminal Services, Release 12.1

Part 4: PPP Configuration > Configuring Media-Independent PPP and multilink PPP

MLP over ATM

RFC 1990, The PPP Multilink Protocol

Designing and Deploying Multilink PPP over Frame Relay and ATM Tech Note

MLP over Serial

RFC 1990, The PPP Multilink Protocol

Link Fragmentation and Interleaving

Cisco 10000 Series Router Quality of Service Configuration Guide

Fragmenting and Interleaving Real-Time and Nonreal-Time Packets

Link Fragmentation and Interleaving for Frame Relay and ATM Virtual Circuits, Release 12.1(5)T feature module

Cisco IOS Quality of Service Solutions Configuration Guide

Link Efficiency Mechanisms > Link Efficiency Mechanisms Overview > Link Fragmentation and Interleaving for Frame Relay and ATM VCs

Configuring Link Fragmentation and Interleaving for Frame Relay and ATM Virtual Circuits

RFC 1990, The PPP Multilink Protocol

PPP Encapsulation

RFC 1661, The Point-to-Point Protocol

Cisco IOS Wide-Area Networking Configuration Guide, Release 12.2

Configuring Broadband Access: PPP and Routed Bridge Encapsulation