Although LFI is often thought of as a single feature, it is actually two independent features within MLP. MLP link fragmentation allows large packets to be fragmented at Layer 2 by MLP, and the fragments to be distributed across the member links in the MLP bundle. The fragments are MLP encapsulated and sequenced, and are then collected, reordered, and reassembled at the peer termination point of the MLP bundle interface.
For more information about interleaving with QoS, see the Quality of Service section.
Interleaving reduces transmission delay for delay-sensitive voice, video, and interactive application data by interleaving that data with the MLP fragments. When interleaving is configured, the packets on the bundle interface that QoS classifies as priority packets are interleaved. These priority packets are PPP encapsulated and interleaved with the MLP-encapsulated fragments or packets. When the peer router receives the PPP packets, it can forward them immediately, whereas the received MLP-encapsulated packets must be reordered and reassembled before being forwarded. Although link fragmentation and interleaving can be configured on any multilink bundle, this LFI functionality is beneficial only on bundles of 1 Mbps or less. On higher-bandwidth bundles, packet transmission delays are short enough that QoS prioritization alone should guarantee preferential treatment of the priority traffic without the need for LFI.
One downside of interleaving is that when there are two or more links in an MLP bundle, the order of the PPP-encapsulated packets cannot be guaranteed. For most applications that send data such as voice, video, and Telnet, this is not an issue because the gap between packets on a given flow is large enough that the packets do not pass each other on the multiple links in the bundle. Because ordering cannot be guaranteed for the priority PPP-encapsulated packets that are interleaved, IP Header Compression (IPHC) is skipped on any packet that is classified as a priority-interleaved packet. IPHC still occurs for nonpriority packets that are sent MLP encapsulated, because MLP guarantees reordering before the packets are forwarded to IPHC.
The Multi-Class Multilink Protocol (MCMP), defined in RFC 2686, addresses the ordering issues related to priority-interleaved packets. Currently, MCMP is not supported on the Cisco ASR 1000 Series Aggregation Services Routers.
To enable LFI on the Cisco ASR 1000 Series Aggregation Services Routers, you must explicitly configure MLP LFI.
At the interface multilink or interface virtual-template configuration level, use any of the following commands to enable link fragmentation (a configuration sketch follows this list):
ppp multilink fragment delay (delay in milliseconds)
ppp multilink fragment size (maximum fragment size, in bytes)
ppp multilink interleave
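For example, a minimal sketch that enables fragmentation on a multilink interface; the interface number, IP address, and 20-ms delay are illustrative placeholders, not recommended values:

interface Multilink1
 ! illustrative addressing; substitute your own
 ip address 192.0.2.1 255.255.255.252
 ppp multilink
 ! fragment so that each fragment takes about 20 ms to transmit
 ppp multilink fragment delay 20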
For MLP using serial links, link fragmentation can also be enabled by configuring the ppp multilink fragment size (maximum fragment size, in bytes) command on the member-link serial interface.
If the MLP bundle has only one active member link and interleaving is not enabled, MLP fragmentation is disabled and all packets are sent PPP encapsulated instead of MLP encapsulated. When a second link in the bundle becomes active or interleaving is enabled, MLP encapsulation and fragmentation are enabled.
If the ppp multilink interleave command is not configured, only MLP link fragmentation is enabled. To enable interleaving, you must also configure the ppp multilink interleave command at the interface multilink or interface virtual-template level. In addition to configuring interleaving, you must define a QoS policy with one or more priority classes and attach that policy to the interface by using the service-policy output policy-map-name command. This policy classifies the priority traffic that MLP interleaves.
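A minimal sketch of such a configuration, assuming voice traffic marked with DSCP EF; the class, policy, and interface names are illustrative placeholders:

class-map match-any VOICE
 match dscp ef
!
policy-map INTERLEAVE-POLICY
 ! packets matching this class are treated as priority and interleaved
 class VOICE
  priority
!
interface Multilink1
 ppp multilink interleave
 service-policy output INTERLEAVE-POLICY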
See the QoS and LFI configuration examples in the “Configuring Multilink PPP Connections” chapter in the Wide-Area Networking Configuration Guide: Multilink PPP.
MLP fragmentation and interleaving support differs across Cisco platforms. This section explains the configuration options and how they are interpreted on the Cisco ASR 1000 Series Aggregation Services Routers.
Based on the values of the MLP fragmentation configuration commands, the MLP feature calculates two values that are used during MLP fragmentation: link weight and maximum fragment size. These parameters are calculated for each member link in the bundle.
First, a link weight must be determined for each member link. The link weight is a byte count that MLP uses to balance the data among the links in the bundle. This parameter is especially important when the links in a bundle have unequal bandwidth. The link weight is based on a combination of the member link bandwidth and the ppp multilink fragment delay value. If you do not configure the fragment delay value, a default delay of 30 milliseconds is used:
Link Weight = (Member Link Interface Bandwidth in bps/8) * Fragment Delay
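For example, applying the formula to a 512-kbps member link with the default 30-millisecond fragment delay: Link Weight = (512000 bps / 8) * 0.030 seconds = 1920 bytes.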
Configuring a smaller fragment delay results in a smaller fragment size because the fragment delay determines the default fragment size on the member link. Smaller fragments, in turn, waste bandwidth because of the added Layer 2 header overhead. This is especially important for broadband MLP, whose Layer 2 headers can be 4 to 58 bytes long.
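As an illustrative calculation: with 480-byte fragments, a 58-byte Layer 2 header adds roughly 12 percent overhead per fragment, whereas shrinking the fragments to 80 bytes raises that overhead to more than 70 percent.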
The default maximum fragment size must be calculated per member link, and is the lesser of the values obtained from the following calculations:
After the default maximum fragment size is calculated, if you have configured the ppp multilink fragment size (maximum fragment size, in bytes) command at the multilink, virtual-template, or serial interface level, the default maximum fragment size is compared against the configured maximum value and is capped accordingly. If the fragment size is configured at both the serial interface level and the multilink interface level, the serial interface configuration takes precedence.
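As a sketch of this precedence rule (interface numbers and sizes are placeholders), the following configures a maximum fragment size at both levels; the 128-byte serial-interface value governs that member link:

interface Multilink1
 ppp multilink fragment size 320
!
interface Serial0/1/0:0
 encapsulation ppp
 ! bind this serial link to the Multilink1 bundle
 ppp multilink group 1
 ppp multilink fragment size 128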