by William Stallings
Multiprotocol Label Switching
(MPLS) is a promising effort to provide the kind of traffic management and connection-oriented
Quality of Service
(QoS) support found in
Asynchronous Transfer Mode
(ATM) networks, to speed up the IP packet-forwarding process, and to retain the flexibility of an IP-based networking approach.
The roots of MPLS go back to numerous efforts in the mid-1990s to combine IP and ATM technologies. The first such effort to reach the marketplace was IP switching, developed by Ipsilon. To compete with this offering, numerous other companies announced their own products, notably Cisco Systems (Tag Switching), IBM (aggregate route-based IP switching), and Cascade (IP Navigator). The goal of all these products was to improve the throughput and delay performance of IP, and all took the same basic approach: Use a standard routing protocol such as
Open Shortest Path First
(OSPF) to define paths between endpoints; assign packets to these paths as they enter the network; and use ATM switches to move packets along the paths. When these products came out, ATM switches were much faster than IP routers, and the intent was to improve performance by pushing as much of the traffic as possible down to the ATM level and using ATM switching hardware.
In response to these proprietary initiatives, the
Internet Engineering Task Force
(IETF) set up the MPLS working group in 1997 to develop a common, standardized approach. The working group issued its first set of Proposed Standards in 2001. Meanwhile, however, the market did not stand still. The late 1990s saw the introduction of many routers that are as fast as ATM switches, eliminating the need to provide both ATM and IP technology in the same network.
Nevertheless, MPLS has a strong role to play. MPLS reduces the amount of per-packet processing required at each router in an IP-based network, enhancing router performance even more. More significantly, MPLS provides significant new capabilities in four areas that have ensured its popularity: QoS support, traffic engineering,
Virtual Private Networks
(VPNs), and multiprotocol support. Before turning to the details of MPLS, we briefly examine each of these.
Connection-Oriented QoS Support
Network managers and users require increasingly sophisticated QoS support for numerous reasons. The following are key requirements:
- Guarantee a fixed amount of capacity for specific applications, such as audio/video conferencing
- Control latency and jitter and ensure capacity for voice
- Provide very specific, guaranteed, and quantifiable service-level agreements, or traffic contracts
- Configure varying degrees of QoS for multiple network customers
A connectionless network, such as an IP-based internet, cannot provide truly firm QoS commitments. A
Differentiated Services
(DS) framework works in only a general way and upon aggregates of traffic from numerous sources. An
Integrated Services
(IS) framework, using the
Resource Reservation Protocol
(RSVP), has some of the flavor of a connection-oriented approach, but is nevertheless limited in terms of its flexibility and scalability. For services such as voice and video that require a network with high predictability, the DS and IS approaches, by themselves, may prove inadequate on a heavily loaded network. By contrast, a connection-oriented network has powerful traffic-management and QoS capabilities. MPLS imposes a connection-oriented framework on an IP-based internet and thus provides the foundation for sophisticated and reliable QoS traffic contracts.
Traffic Engineering
MPLS makes it easy to commit network resources in such a way as to balance the load in the face of a given demand and to commit to differential levels of support to meet various user traffic requirements. The ability to dynamically define routes, plan resource commitments on the basis of known demand, and optimize network utilization is referred to as traffic engineering.
With the basic IP mechanism, there is a primitive form of automated traffic engineering. Specifically, routing protocols such as OSPF enable routers to dynamically change the route to a given destination on a packet-by-packet basis to try to balance load. But such dynamic routing reacts in a very simple manner to congestion and does not provide a way to support QoS. All traffic between two endpoints follows the same route, which may be changed when congestion occurs. MPLS, on the other hand, is aware of not just individual packets, but flows of packets in which each flow has certain QoS requirements and a predictable traffic demand. With MPLS, it is possible to set up routes on the basis of these individual flows, with two different flows between the same endpoints perhaps following different routes. Further, when congestion threatens, MPLS paths can be rerouted intelligently. That is, instead of simply changing the route on a packet-by-packet basis, with MPLS, the routes are changed on a flow-by-flow basis, taking advantage of the known traffic demands of each flow. Effective use of traffic engineering can substantially increase usable network capacity.
Virtual Private Network (VPN) Support
MPLS provides an efficient mechanism for supporting VPNs. With a VPN, the traffic of a given enterprise or group passes transparently through an internet in a way that effectively segregates that traffic from other packets on the internet, providing performance guarantees and security.
Multiprotocol Support
MPLS, which can be used on many networking technologies, is an enhancement to the way a connectionless IP-based internet is operated, requiring an upgrade to IP routers to support the MPLS features. MPLS-enabled routers can coexist with ordinary IP routers, facilitating a gradual, evolutionary introduction of MPLS. MPLS is also designed to work in ATM and Frame Relay networks. Again, MPLS-enabled ATM switches and MPLS-enabled Frame Relay switches can be configured to coexist with ordinary switches. Furthermore, MPLS can be used in a pure IP-based internet, a pure ATM network, a pure Frame Relay network, or an internet that includes two or even all three technologies. This universal nature of MPLS should appeal to users who currently have mixed network technologies and seek ways to optimize resources and expand QoS support.
For the remainder of this discussion, we focus on the use of MPLS in IP-based internets, with brief comments about formatting issues for ATM and Frame Relay networks.
MPLS Operation
An MPLS network or internet consists of a set of nodes, called
Label Switched Routers
(LSRs), that are capable of switching and routing packets on the basis of a label which has been appended to each packet. Labels define a flow of packets between two endpoints or, in the case of multicast, between a source endpoint and a multicast group of destination endpoints. For each distinct flow, called a
Forwarding Equivalence Class
(FEC), a specific path through the network of LSRs is defined. Thus, MPLS is a connection-oriented technology. Associated with each FEC is a traffic characterization that defines the QoS requirements for that flow. The LSRs do not need to examine or process the IP header, but rather simply forward each packet based on its label value. Therefore, the forwarding process is simpler than with an IP router.
Figure 1 depicts the operation of MPLS within a domain of MPLS-enabled routers. The following are key elements of the operation:
1. Prior to the routing and delivery of packets in a given FEC, a path through the network, known as a Label Switched Path (LSP), must be defined and the QoS parameters along that path must be established. The QoS parameters determine (1) how many resources to commit to the path, and (2) what queuing and discarding policy to establish at each LSR for packets in this FEC. To accomplish these tasks, two protocols are used to exchange the necessary information among routers: an interior routing protocol, such as OSPF, to exchange reachability and routing information, and a protocol to distribute labels along the path, such as the Label Distribution Protocol (LDP) or an enhanced version of RSVP.

2. A packet enters an MPLS domain through an ingress edge LSR, where it is processed to determine which network-layer services it requires, defining its QoS. The LSR assigns this packet to a particular FEC, and therefore a particular LSP, appends the appropriate label to the packet, and forwards the packet. If no LSP yet exists for this FEC, the edge LSR must cooperate with the other LSRs in defining a new LSP.
3. Within the MPLS domain, as each LSR receives a labeled packet, it removes the incoming label, attaches the appropriate outgoing label to the packet, and forwards the packet to the next LSR along the LSP.
4. The egress edge LSR strips the label, reads the IP packet header, and forwards the packet to its final destination.
Several key features of MPLS operation can be noted at this point:
1. An MPLS domain consists of a contiguous, or connected, set of MPLS-enabled routers. Traffic can enter or exit the domain from an endpoint on a directly connected network, as shown in the upper-right corner of Figure 1. Traffic may also arrive from an ordinary router that connects to a portion of the internet not using MPLS, as shown in the upper-left corner of Figure 1.
2. The FEC for a packet can be determined by one or more of a number of parameters, as specified by the network manager, such as source or destination IP addresses or network addresses, source or destination port numbers, IP protocol ID, differentiated services codepoint, or IPv6 flow label.
3. Forwarding is achieved by doing a simple lookup in a predefined table that maps label values to next-hop addresses. There is no need to examine or process the IP header or to make a routing decision based on destination IP address.

4. A particular Per-Hop Behavior (PHB) can be defined at an LSR for a given FEC. The PHB defines the queuing priority of the packets for this FEC and the discard policy.

5. Packets sent between the same endpoints may belong to different FECs. Thus, they will be labeled differently, will experience different PHB at each LSR, and may follow different paths through the network.
Figure 2 shows the label-handling and label-forwarding operation in more detail. Each LSR maintains a forwarding table for each LSP passing through the LSR. When a labeled packet arrives, the LSR indexes the forwarding table to determine the next hop. For scalability, as was mentioned, labels have local significance only. Thus, the LSR removes the incoming label from the packet and attaches the matching outgoing label before forwarding the packet. The ingress-edge LSR determines the FEC for each incoming unlabeled packet and, on the basis of the FEC, assigns the packet to a particular LSP, attaches the corresponding label, and forwards the packet.
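The table lookup just described can be sketched in a few lines of Python. This is an illustrative toy, not code from any real MPLS implementation; the class and field names are assumptions.

```python
# Sketch of per-LSR label swapping against a minimal forwarding table.

class LSR:
    def __init__(self, name):
        self.name = name
        # Maps incoming label -> (outgoing label, next-hop LSR name).
        # Labels are locally significant: each LSR numbers its own.
        self.table = {}

    def forward(self, label):
        """Swap the incoming label for the outgoing one; return (new_label, next_hop)."""
        out_label, next_hop = self.table[label]
        return out_label, next_hop

# One hop of a hypothetical LSP: this core LSR swaps incoming label 17
# for outgoing label 42 and passes the packet toward the egress LSR.
core = LSR("core")
core.table[17] = (42, "egress")
print(core.forward(17))  # -> (42, 'egress')
```

Note that no IP header fields appear anywhere in the lookup; the label alone selects the next hop, which is what makes the per-packet work so small.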
Label Stacking
One of the most powerful features of MPLS is label stacking. A labeled packet may carry many labels, organized as a last-in-first-out stack. Processing is always based on the top label. At any LSR, a label may be added to the stack (push operation) or removed from the stack (pop operation). Label stacking allows the aggregation of LSPs into a single LSP for a portion of the route through a network, creating a tunnel. At the beginning of the tunnel, an LSR assigns the same label to packets from a number of LSPs by pushing the label onto the stack of each packet. At the end of the tunnel, another LSR pops the top element from the label stack, revealing the inner label. This is similar to ATM, which has one level of stacking (virtual channels inside virtual paths), but MPLS supports unlimited stacking.
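The push and pop operations can be sketched by treating the label stack as a list; the label values and packet representation here are hypothetical.

```python
# Minimal sketch of label stacking: a packet carries a LIFO stack of labels.
# A tunnel ingress pushes an outer label shared by many LSPs; the tunnel
# egress pops it, revealing the inner (per-LSP) label.

packet = {"payload": "IP packet", "labels": [101]}   # inner LSP label

# Tunnel ingress: aggregate this LSP (and others) under one outer label.
packet["labels"].append(500)          # push
assert packet["labels"][-1] == 500    # processing always uses the top label

# Tunnel egress: remove the outer label, exposing the inner one again.
outer = packet["labels"].pop()        # pop
print(outer, packet["labels"])        # -> 500 [101]
```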
Label stacking provides considerable flexibility. An enterprise could establish MPLS-enabled networks at various sites and establish numerous LSPs at each site. The enterprise could then use label stacking to aggregate multiple flows of its own traffic before handing it to an access provider. The access provider could aggregate traffic from multiple enterprises before handing it to a larger service provider. Service providers could aggregate many LSPs into a relatively small number of tunnels between points of presence. Fewer tunnels means smaller tables, making it easier for a provider to scale the network core.
Label Format and Placement
An MPLS label is a 32-bit field consisting of the following elements (Figure 3):
- Label value: locally significant 20-bit label
- Exp: 3 bits reserved for experimental use; for example, these bits could communicate DS information or PHB guidance
- S: set to one for the oldest entry in the stack (the bottom of the stack), and zero for all other entries
- Time To Live (TTL): 8 bits used to encode a hop count, or time to live, value
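The four fields can be packed into, and recovered from, a single 32-bit integer. The sketch below assumes the bit layout just listed, with the label value in the most significant 20 bits.

```python
# Packing and unpacking the 32-bit MPLS label stack entry:
# 20-bit label value, 3-bit Exp, 1-bit S (bottom of stack), 8-bit TTL.

def pack_label(value, exp, s, ttl):
    assert 0 <= value < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (value << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(entry):
    return ((entry >> 12) & 0xFFFFF,   # label value
            (entry >> 9) & 0x7,        # Exp
            (entry >> 8) & 0x1,        # S
            entry & 0xFF)              # TTL

entry = pack_label(value=42, exp=0, s=1, ttl=64)
print(unpack_label(entry))  # -> (42, 0, 1, 64)
```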
A key field in the IP packet header is the TTL field (IPv4), or Hop Limit (IPv6). In an ordinary IP-based internet, this field is decremented at each router and the packet is dropped if the count falls to zero. This is done to avoid looping or having the packet remain too long in the internet because of faulty routing. Because an LSR does not examine the IP header, the TTL field is included in the label so that the TTL function is still supported. The rules for processing the TTL field in the label are as follows:
- When an IP packet arrives at an ingress edge LSR of an MPLS domain, a single label stack entry is added to the packet. The TTL value of this label stack entry is set to the value of the IP TTL value. If the IP TTL field needs to be decremented, as part of the IP processing, it is assumed that this has already been done.
- When an MPLS packet arrives at an internal LSR of an MPLS domain, the TTL value in the top label stack entry is decremented. Then:
  - If this value is zero, the MPLS packet is not forwarded. Depending on the label value in the label stack entry, the packet may simply be discarded, or it may be passed to the ordinary network layer for error processing (e.g., to generate an ICMP error message).
  - If this value is positive, it is placed in the TTL field of the top label stack entry, and the packet is forwarded.
- When an MPLS packet arrives at an egress edge LSR of an MPLS domain, the TTL value in the single label stack entry is decremented and the label is popped, resulting in an empty label stack. Then:
  - If this value is zero, the IP packet is not forwarded.
  - If this value is positive, it is placed in the TTL field of the IP header, and the packet is forwarded.
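The TTL rules can be sketched as three functions over a toy packet representation; the dictionary layout and function names are illustrative assumptions, not part of any specification.

```python
# Sketch of MPLS TTL handling at ingress, internal, and egress LSRs.
# A packet is modeled as {"ip_ttl": int, "labels": [{"value", "ttl"}, ...]}.

def ingress_push(packet, label):
    # Ingress rule: copy the IP TTL (any IP-level decrement is assumed
    # to have been done already) into the new label stack entry.
    packet["labels"].append({"value": label, "ttl": packet["ip_ttl"]})

def internal_hop(packet):
    # Internal rule: decrement the top entry's TTL; drop if it hits zero.
    top = packet["labels"][-1]
    top["ttl"] -= 1
    return top["ttl"] > 0            # False -> discard / error processing

def egress_pop(packet):
    # Egress rule: decrement, pop the label, and restore the hop count
    # to the IP header if the packet is still forwardable.
    ttl = packet["labels"].pop()["ttl"] - 1
    if ttl > 0:
        packet["ip_ttl"] = ttl
        return True
    return False                     # packet is not forwarded

pkt = {"ip_ttl": 64, "labels": []}
ingress_push(pkt, 42)                # enters the domain with TTL 64
internal_hop(pkt)                    # one internal LSR: 63
egress_pop(pkt)                      # egress: 62, written back to IP header
print(pkt["ip_ttl"], pkt["labels"])  # -> 62 []
```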
The label stack entries appear after the data link layer headers, but before any network layer headers. The top of the label stack appears earliest in the packet (closest to the data link layer header), and the bottom appears latest (closest to the network layer header). The network layer packet immediately follows the label stack entry that has the S bit set. In a data link frame, such as for the
Point-to-Point Protocol
(PPP), the label stack appears between the IP header and the data link header (Figure 4a). For an IEEE 802 frame, the label stack appears between the IP header and the
Logical Link Control
(LLC) header (Figure 4b).
If MPLS is used over a connection-oriented network service, a slightly different approach may be taken, as shown in Figure 4c and d. For ATM cells, the label value in the topmost label is placed in the
Virtual Path/Channel Identifier
(VPI/VCI) field in the ATM cell header. The entire top label remains at the top of the label stack, which is inserted between the cell header and the IP header. Placing the label value in the ATM cell header facilitates switching by an ATM switch, which would, as usual, need to look only at the cell header. Similarly, the topmost label value can be placed in the
Data Link Connection Identifier
(DLCI) field of a Frame Relay header. Note that in both these cases, the TTL field is not visible to the switch and so is not decremented. The reader should consult the MPLS specifications for the details of the way this situation is handled.
FECs, LSPs, and Labels
To understand MPLS, it is necessary to understand the operational relationship among FECs, LSPs, and labels. The specifications covering all the ramifications of this relationship are lengthy. In the remainder of this section, we provide a summary.
The essence of MPLS functionality is that traffic is grouped into FECs. The traffic in an FEC transits an MPLS domain along an LSP. Individual packets in an FEC are uniquely identified as being part of a given FEC by means of a locally significant label. At each LSR, each labeled packet is forwarded on the basis of its label value, with the LSR replacing the incoming label value with an outgoing label value.
The overall scheme described in the previous paragraph imposes numerous requirements. Specifically:
1. Traffic must be assigned to a particular FEC.

2. A routing protocol is needed to determine the topology and current conditions in the domain so that a particular LSP can be assigned to an FEC. The routing protocol must be able to gather and use information to support the QoS requirements of the FEC.

3. Individual LSRs must become aware of the LSP for a given FEC, must assign an incoming label to the LSP, and must communicate that label to any other LSR that may send it packets for this FEC.
The first requirement is outside the scope of the MPLS specifications. The assignment needs to be done by manual configuration, by means of some signaling protocol, or by an analysis of incoming packets at ingress LSRs. Before looking at the other two requirements, let us consider the topology of LSPs. We can classify these in the following manner:
- Unique ingress and egress LSR: In this case a single path through the MPLS domain is needed.

- Unique egress LSR, multiple ingress LSRs: If traffic assigned to a single FEC can arise from different sources that enter the network at different ingress LSRs, then this situation occurs. An example is an enterprise intranet at a single location but with access to an MPLS domain through multiple MPLS ingress LSRs. This situation would call for multiple paths through the MPLS domain, probably sharing a final few hops.

- Multiple egress LSRs for unicast traffic: RFC 3031 states that most commonly, a packet is assigned to an FEC based (completely or partially) on its network layer destination address. If not, then it is possible that the FEC would require paths to multiple distinct egress LSRs. However, more likely, there would be a cluster of destination networks, all of which are reached via the same MPLS egress LSR.

- Multicast: RFC 3031 lists multicast as a subject for further study.
Route Selection
Route selection refers to the selection of an LSP for a particular FEC. The MPLS architecture supports two options: hop-by-hop routing and explicit routing.
With hop-by-hop routing, each LSR independently chooses the next hop for each FEC. The RFC implies that this option makes use of an ordinary routing protocol, such as OSPF.
This option provides some of the advantages of MPLS, including rapid switching by labels, the ability to use label stacking, and differential treatment of packets from different FECs following the same route. However, because of the limited use of performance metrics in typical routing protocols, hop-by-hop routing does not readily support traffic engineering or policy routing (defining routes based on some policy related to QoS, security, or some other consideration).
With explicit routing, a single LSR, usually the ingress or egress LSR, specifies some or all of the LSRs in the LSP for a given FEC. For strict explicit routing, an LSR specifies all of the LSRs on an LSP. For loose explicit routing, only some of the LSRs are specified. Explicit routing provides all the benefits of MPLS, including the ability to do traffic engineering and policy routing.
Explicit routes can be selected by configuration, that is, set up ahead of time, or dynamically. Dynamic explicit routing would provide the best scope for traffic engineering. For dynamic explicit routing, the LSR setting up the LSP would need information about the topology of the MPLS domain as well as QoS-related information about that domain. The MPLS traffic engineering specification (RFC 2702) suggests that the QoS-related information falls into two categories:
- A set of attributes associated with an FEC or a collection of similar FECs that collectively specify their behavioral characteristics

- A set of attributes associated with resources (nodes, links) that constrain the placement of LSPs through them
A routing algorithm that accounts for the traffic requirements of various flows and the resources available along various hops and through various nodes is referred to as a constraint-based routing algorithm. In essence, a network that uses a constraint-based routing algorithm is aware of current utilization, existing capacity, and committed services at all times. Traditional routing algorithms, such as OSPF and the
Border Gateway Protocol
(BGP), do not employ a sufficient array of cost metrics in their algorithms to qualify as constraint-based.
Furthermore, for any given route calculation, only a single cost metric (for instance, number of hops, delay) can be used. For MPLS, it is necessary either to augment an existing routing protocol or to deploy a new one. For example, an enhanced version of OSPF has been defined (RFC 2676) that provides at least some of the support required for MPLS. Examples of metrics that would be useful to constraint-based routing include the following:
- Maximum link data rate
- Current capacity reservation
- Packet loss ratio
- Link propagation delay
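One simple way to realize constraint-based routing is to run an ordinary shortest-path algorithm over only those links that satisfy the constraint. The sketch below prunes links with insufficient unreserved capacity and then picks the lowest-delay path; the topology, capacities, and delays are entirely hypothetical.

```python
# Constraint-based route sketch: Dijkstra over the subgraph of links whose
# unreserved capacity can carry the flow's demand.

import heapq

def constrained_path(links, src, dst, demand):
    """links: {(a, b): (delay, free_capacity)}; returns lowest-delay feasible path."""
    graph = {}
    for (a, b), (delay, free) in links.items():
        if free >= demand:                       # the constraint: prune scarce links
            graph.setdefault(a, []).append((b, delay))
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, d in graph.get(node, []):
            heapq.heappush(heap, (cost + d, nbr, path + [nbr]))
    return None                                  # no feasible path exists

links = {("A", "B"): (1, 10), ("B", "D"): (1, 2),    # short path, nearly full
         ("A", "C"): (2, 50), ("C", "D"): (2, 50)}   # longer path, roomy
print(constrained_path(links, "A", "D", demand=5))   # -> ['A', 'C', 'D']
```

A plain shortest-path computation would choose A-B-D here; the capacity constraint steers the larger flow onto A-C-D, which is the essence of placing an LSP by traffic engineering rather than by hop count alone.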
Label Distribution
Route selection consists of defining an LSP for an FEC. A separate function is the actual setting up of the LSP. For this purpose, each LSR on the LSP must:
1. Assign a label to the LSP to be used to recognize incoming packets that belong to the corresponding FEC.

2. Inform all potential upstream nodes (nodes that will send packets for this FEC to this LSR) of the label assigned by this LSR to this FEC, so that these nodes can properly label packets to be sent to this LSR.

3. Learn the next hop for this LSP and learn the label that the downstream node (the LSR that is the next hop) has assigned to this FEC. This process will enable this LSR to map an incoming label to an outgoing label.
The first item in the preceding list is a local function. Items 2 and 3 must be done either by manual configuration or by using some sort of label distribution protocol. Thus, the essence of a label distribution protocol is that it enables one LSR to inform others of the label/FEC bindings it has made. In addition, a label distribution protocol enables two LSRs to learn each other's MPLS capabilities. The MPLS architecture does not assume a single label distribution protocol but allows for multiple such protocols. Specifically, RFC 3031 refers to a new label distribution protocol and to enhancements to existing protocols, such as RSVP and BGP, to serve the purpose.
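The three per-LSR steps can be sketched by modeling label advertisement as a direct call between neighboring LSRs. This is an illustrative toy, not LDP or any real distribution protocol; all names and label values are assumptions.

```python
# Sketch of label binding and distribution along one LSP.

class LSR:
    def __init__(self, name, base):
        self.name = name
        self.next_label = base       # labels are locally significant per LSR
        self.in_label = {}           # FEC -> label this LSR assigned (step 1)
        self.out_label = {}          # FEC -> label learned from downstream (step 3)

    def bind(self, fec, downstream=None):
        label = self.next_label      # step 1: assign a local label for the FEC
        self.next_label += 1
        self.in_label[fec] = label
        if downstream is not None:   # step 3: learn the downstream binding
            self.out_label[fec] = downstream.advertise(fec)
        return label

    def advertise(self, fec):
        # Step 2: tell an upstream neighbor which label we assigned to this FEC.
        return self.in_label[fec]

egress, core = LSR("egress", 200), LSR("core", 100)
egress.bind("fec-web")                   # egress binds label 200 locally
core.bind("fec-web", downstream=egress)  # core binds 100, learns 200
print(core.in_label["fec-web"], core.out_label["fec-web"])  # -> 100 200
```

After setup, the core LSR's (incoming 100, outgoing 200) pair is exactly one row of the forwarding table used during packet forwarding.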
The relationship between label distribution and route selection is complex. It is best to look at it in the context of the two types of route selection.
With hop-by-hop route selection, no specific attention is paid to traffic engineering or policy routing concerns, as we have seen. In such a case, an ordinary routing protocol such as OSPF is used to determine the next hop by each LSR. A relatively straightforward label distribution protocol can operate using the routing protocol to design routes.
With explicit route selection, a more sophisticated routing algorithm must be implemented, one that does not employ a single metric to design a route. In this case, a label distribution protocol could make use of a separate route selection protocol, such as an enhanced OSPF, or incorporate a routing algorithm into a more complex label distribution protocol.
For Further Reading
The two most important defining documents for MPLS are RFC 3031, which specifies the architecture, and RFC 3032, which specifies the label stack encoding. The book by Black provides a thorough treatment of MPLS; the book by Wang covers not only MPLS but other Internet QoS concepts, and includes an excellent chapter on MPLS traffic engineering. The article by Viswanathan et al. includes a concise overview of the MPLS architecture and describes the various proprietary efforts that preceded MPLS.
References
[1] Apostolopoulos, G., et al., "QoS Routing Mechanisms and OSPF Extensions," RFC 2676, August 1999.
[2] Awduche, D., et al., "Requirements for Traffic Engineering over MPLS," RFC 2702, September 1999.
[3] Black, U., MPLS and Label Switching Networks, ISBN 0130158232, Prentice Hall, 2001.
[4] Redford, R., "Enabling Business IP Services with Multiprotocol Label Switching," Cisco White Paper, July 2000.
[5] Rosen, E., et al., "Multiprotocol Label Switching Architecture," RFC 3031, January 2001.
[6] Rosen, E., et al., "MPLS Label Stack Encoding," RFC 3032, January 2001.
[7] Viswanathan, A., et al., "Evolution of Multiprotocol Label Switching," IEEE Communications Magazine, May 1998.
[8] Wang, Z., Internet QoS: Architectures and Mechanisms for Quality of Service, ISBN 1558606084, Morgan Kaufmann, 2001.
Useful Web Sites
- MPLS Forum: An industry forum to promote MPLS: http://www.mplsforum.org/
- MPLS Resource Center: Clearinghouse for information on MPLS: http://www.mplsrc.com/
- MPLS Working Group: Chartered by IETF to develop standards related to MPLS. The Web site includes all relevant RFCs and Internet Drafts.
WILLIAM STALLINGS is a consultant, lecturer, and author of over a dozen books on data communications and computer networking. He also maintains a computer science resource site for CS students and professionals. He has a PhD in computer science from M.I.T. His latest book is
Wireless Communications and Networks
(Prentice Hall, 2001).