by Francesco Palmieri, Federico II University of Napoli, Italy
One of the major concerns in the Internet-based information society today is the tremendous demand for more and more bandwidth. Optical communication technology has the potential for meeting the emerging needs of obtaining information at much faster yet more reliable rates because of its potentially limitless capabilities—huge bandwidth (nearly 50 terabits per second), low signal distortion, low power requirement, and low cost. The challenge is to turn the promise of optical networking into reality to meet our Internet communication demands for the next decade. With the deployment of Dense Wavelength Division Multiplexing (DWDM) technology, a new and very crucial milestone is being reached in network evolution. The speed and capacity of such wavelength switched networks—with hundreds of channels per fiber strand—seem to be more than adequate to satisfy the medium to long term connectivity demands. In this scenario, carriers need powerful, commercially viable and scalable devices and control plane technologies that can dynamically manage traffic demands and balance the network load on the various fiber links, wavelengths, and switching nodes so that none of these components is over- or underused.
This process of adaptively mapping traffic flows onto the physical topology of a network and allocating resources to these flows—usually referred to as traffic engineering—is one of the most difficult tasks facing Internet backbone providers today. Generalized Multiprotocol Label Switching (GMPLS) is the most promising technology. GMPLS will play a critical role in future IP pure optical networks by providing the necessary bridges between the IP and optical layers to deliver effective traffic-engineering features and allow for interoperable and scalable parallel growth in the IP and photonic dimension. The GMPLS control plane technology, when fully available in next-generation optical switching devices, will support all the needed traffic-engineering functions and enable a variety of protection and restoration capabilities, while simplifying the integration of new photonic switches and existing label switching routers.
Wavelength Division Multiplexing
Traditional Electronic Time-Division Multiplexed (ETDM) networks use electrical signals to switch traffic along routes and restore signal strength. These networks do not fully exploit the bandwidth available on optical fibers because only a single frequency (wavelength or lambda) of light is used on each fiber to transmit data signals that can be modulated at a maximum bit rate of the order of 40 Gbps. The high bandwidth of optical fibers can be better used through WDM technology, by which distinct data signals may share an optical fiber, provided they are transmitted on carriers having different wavelengths.
In more detail, the optical transmission spectrum is divided into numerous nonoverlapping wavelengths, with each wavelength supporting a single communication channel. Each channel, which can be viewed as a light path, is transmitted at a different wavelength (or frequency). Multiple wavelengths are thus multiplexed into a single optical fiber, and data is carried over multiple light paths simultaneously, as shown in Figure 1.
Dense WDM (DWDM), an evolution of WDM referring essentially to the closer spacing of channels, is the current favorite multiplexing technology for long-haul communications in modern optical networks. Hence, all the major carriers today devote significant effort to developing and applying DWDM technology in their business.
All-optical networks employing the concept of WDM and wavelength routing are thought to be the transport networks for the future. In such networks, two adjacent nodes are connected by one or multiple fibers, each carrying multiple wavelengths or channels. Each node consists of a dynamically configurable optical switch that supports fiber switching and wavelength switching; that is, the data on a specified input fiber and wavelength can be switched to a specified output fiber on the same wavelength. In order to transfer data between source–destination node pairs, a light path needs to be established by allocating the same wavelength throughout the route of the transmitted data. Benefiting from the development of all-optical amplifiers, light paths can span more than one fiber link and remain entirely optical from end to end. It has been demonstrated that the introduction of wavelength-routing networks not only offers the advantages of higher transmission capacity and routing node throughput, but also satisfies the growing demand for protocol transparency and simplified operation and management.
Optical Transport Backbones
The modern Internet transport infrastructure can be physically seen as a very complex mesh of variously interconnected optical or traditional ETDM subnetworks, where each subnetwork consists of several heterogeneous routing and switching devices built by the same or different vendors and operating according to the same control plane protocols and policies. With these very different types of devices, all the forwarding decisions will be based on a combination of packet or cell, timeslot, wavelength, or physical port, depending on the position (edge or core) and role (intermediate, termination, or gateway node) of the switching devices in the network layout.
In particular, WDM-switched optical subnetworks are typically used as backbone infrastructures to interconnect a large number of different IP as well as other packet networks such as SDH, ATM, and Frame Relay.
New optical devices such as DWDM multiplexers, Add/Drop Multiplexers (ADM), and Optical Cross-Connects (OXC) are making possible an intelligent all-optical core where packets are routed through the network without leaving the optical domain. The optical network and the surrounding IP networks are independent of each other, and an edge IP router interacts with its ingress switching node only over a well-defined User-Network Interface (UNI). Clearly, the optical network is responsible for setting up light paths between the edge IP routers. A light path can be either switched or permanent. Switched light paths are established in real time using proper signaling procedures, and they may last for a short or a long period of time. Permanent light paths are set up administratively by subscription, and they typically last for a very long time. An edge IP router requests a switched light path from its ingress optical switching device using a proper signaling protocol over the UNI. See Figure 2.
The key concept to guarantee desirable speeds and correct functional behavior in these networks is to maintain the signal in pure optical form, thereby avoiding the prohibitive overhead of conversion to and from electrical form. Such a network would be "optically transparent" in the sense that it would be able to transport client signals with any format and with a wide range of bit rates (at least from about 10 Mbps to more than 10 Gbps). In particular, transparent OXCs, used to selectively switch wavelengths between their input and output ports, are likely to emerge as the preferred option for switching multigigabit or even terabit data streams, because any slow electronic per-packet processing is avoided.
Transparent Optical Switching Nodes
Transparent OXC systems are expected to be the cornerstone of the photonic layer, offering carriers more dynamic and flexible options in building network topologies with enhanced performance and scalability. The development of large and flexible transparent OXCs, now enabled by a new generation of optical components such as optical amplifiers, tunable lasers, and wavelength filters, is still a significant challenge. Their architecture makes use of optical switching fabrics, wavelength multiplexers and demultiplexers, and transparent wavelength converters, which eliminate the need for optoelectronic transponders. A simple and linear architectural model for a transparent OXC is shown in Figure 3.
Here, the WDM demultiplexers separate incoming grouped wavelengths from input ports into individual lambdas. A sufficiently large, low-loss, compact all-optical switching fabric can be realized by using the reflection of light and Micro-Electromechanical Systems (MEMS) technology, now widely available on the market. This multilayer switching fabric, driven by micro-machined electrical actuators, redirects each wavelength to the appropriate output port according to the control plane instructions. On their way, the signals pass through optical amplifiers, typically Erbium-Doped Fiber Amplifiers (EDFAs), which boost the signal power in line, without any optoelectronic conversion, to cope with the effects of light dispersion and attenuation over long distances. The WDM multiplexer then groups the wavelengths coming from the multiple layers of cross-connects. Furthermore, a wavelength that arrives at an OXC can be passed directly to the optical switching fabric and switched to the appropriate output fiber or, if that wavelength is not available on the output fiber, first converted to another wavelength by a tunable wavelength converter (again without being transformed to electricity), based on the control plane instructions.
This architecture is transparent; that is, the optical signal does not need to be transformed to electricity at all, implying that this architecture can support any protocol and any data rate. Hence, possible upgrades in the wavelength transport capacity can be accommodated at no extra cost. Furthermore, this architecture decreases the cost because it involves the use of fewer devices than the other architectures. In addition, transparent wavelength conversion eliminates constraints on conversions. In this way the real switching capacity of the OXC is increased, leading to cost reduction. First-generation OXCs require manual configuration. Clearly, an automatic switching capability allowing optical nodes to dynamically modify the network topology based on changing traffic demand is highly desirable.
Automatically Switched Optical Networks
For automatically switched networks, where network nodes may directly initiate or terminate new connections or perform wavelength-level switching in the network, sophisticated and flexible control functions are needed.
The control plane supports connection management by clients and also provides protection and restoration services. The control plane of an optical network is also responsible for tracking the network topology and for advertising the state of the network resources. Two families of protocols achieve these tasks: routing protocols, which distribute and maintain topology and resource information, and signaling protocols, which set up, modify, and tear down connections.
Data transport is the most obvious task and the main purpose of an optical network data plane. It provides uni- or bidirectional information transport (transmission and switching) between users, detects faults, and monitors signal quality. More specifically, the data plane performs, under the directions of the control plane, data routing to the appropriate ports; channel adds and drops to external, older networks (using the edge interfaces); and label or lambda swapping through an array of WDM demultiplexers, wavelength converters, OXCs, optical amplifiers, and multiplexers.
An important concern that must be addressed in designing an optical network is the cross effect of the failure of the data or control plane. Failures of the data plane are usually addressed by the control plane itself by rerouting the disrupted flows at the appropriate level. The control plane must then quickly advertise the new network state to the neighboring nodes to avoid the presence of stale information in the link databases. A failure of the IP-based control plane, on the other hand, does not tear down the light paths already established in the data plane, but it prevents new connections from being set up and disrupted flows from being restored until control connectivity is recovered.
Traffic Engineering in Optical Networks
Traffic engineering should be viewed as assistance to the routing and switching infrastructure that provides additional information used in forwarding traffic along alternate paths across the network, trying to optimize service delivery throughout the network by improving its balanced usage and avoiding congestion caused by uneven traffic distribution. Traffic engineering is required in the modern Internet mainly because the current dynamic routing protocols always use the shortest paths to forward traffic. This practice, obviously, conserves network resources, but it causes some of them to be overused while the other resources remain underused. Furthermore, the routing protocols mentioned earlier never account for specific traffic flow requirements such as bandwidth and Quality of Service (QoS) needs. Practitioners in the field often assert that traffic engineering essentially signifies the ability to place traffic where the capacity exists to accommodate it—whereas network engineering denotes the ability to install capacity where the traffic exists.
When a traffic-engineering application implements the right set of features, it should provide precise control over the placement of traffic flows within a routing and switching domain, gaining better network use and realizing a more manageable network. A traffic-engineering solution suitable for transparent optical networks always consists of several basic functional components.
Traditionally, all provisioning and engineering in optical networks has required manual planning and configuration, resulting in setup times of days or even weeks and a marked reluctance among network managers to de-provision resources in case doing so would affect other services. In the last few years, during which control protocols have been deployed to dynamically provide traffic engineering and provisioning or management assistance in optical networks, the control protocols have been proprietary and have greatly suffered from interoperability problems. Consequently, a new standardized control plane framework, supporting evolutionary traffic-engineering features, is needed for automatically switched optical transport networks to foster the expedited development and deployment of a new class of versatile optical switches that specifically address the optical transport needs of the Internet.
The important remaining challenge to be addressed in developing a dynamically reconfigurable optical network is that of controlling the optical resources, especially under distributed control where the network elements exchange information among themselves in a standardized multivendor environment. Performance and reliability requirements make this challenge of paramount importance to photonic networks. Beyond eliminating proprietary "islands of deployment," this common control plane enables independent innovation curves within each product class, and faster service deployment with end-to-end provisioning using a single set of semantics.
The GMPLS Paradigm
GMPLS, the emerging paradigm for the design of control planes for OXCs, aims to address and solve all the challenges mentioned previously, trying to automatically and dynamically configure any kind of network element. It was proposed shortly after Multiprotocol Label Switching (MPLS) to extend its packet control plane to encompass time division (for example, for SONET/SDH), wavelength (for optical lambdas), and spatial switching (for example, for incoming port or fiber to outgoing port or fiber). Nongeneralized MPLS overlays a packet-switched IP network to facilitate traffic engineering and allow resources to be reserved and routes predetermined. It provides virtual links or tunnels through the network to connect nodes that lie at the edge of the network. For packets injected into the ingress of an established tunnel, normal IP routing procedures are suspended; instead the packets are label-switched so that they automatically follow the tunnel to its egress.
With the success of MPLS in packet-switched IP networks, optical network providers have accelerated a process to generalize the applicability of MPLS to cover all-optical networks as well. The premise of GMPLS is that the idea of a label can be generalized to be anything that is sufficient to identify a traffic flow. For example, in an optical fiber whose bandwidth is divided into wavelengths, the whole of one wavelength could be allocated to a requested flow. The Label Switch Routers (LSRs) at either end of the fiber simply have to agree on which frequency to use. From a control plane perspective, an LSR bases its functions on a table that maintains relations between incoming label or port and outgoing label or port. It should be noted that in the case of the OXC, the table that maintains the relations is not a software entity but it is implemented in a more straightforward way, for example, by appropriately configuring the micro-mirrors of the optical switching fabric.
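The relation table just described can be sketched in a few lines of code; the class and field names below are illustrative, not taken from any real GMPLS implementation:

```python
# Sketch of an OXC cross-connect table: here the generalized "label" is
# simply the (port, wavelength) pair on which traffic arrives. In a real
# OXC this table is not a software entity but the configuration of the
# switching fabric itself; the names are illustrative.

class CrossConnectTable:
    def __init__(self):
        # (in_port, in_lambda) -> (out_port, out_lambda)
        self._entries = {}

    def install(self, in_port, in_lambda, out_port, out_lambda):
        # Stands in for the control plane configuring the micro-mirrors.
        self._entries[(in_port, in_lambda)] = (out_port, out_lambda)

    def switch(self, in_port, in_lambda):
        # Wavelength conversion is implicit whenever out_lambda differs
        # from in_lambda; None means no cross-connect is configured.
        return self._entries.get((in_port, in_lambda))
```

Installing an entry models the control plane setting up the fabric; looking one up models the purely optical forwarding decision, with no per-packet processing involved.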
There are several constraints in reusing the GMPLS control plane. These constraints arise from the fact that LSRs and OXCs use different data technologies. More specifically, LSRs manipulate packets that bear an explicit label, and OXCs manipulate wavelengths that bear the label implicitly; that is, the label value is implicit in the fact that the data is being transported within the agreed frequency band.
Furthermore, because the analogy of a label in the OXC is a wavelength or an optical channel, there are no equivalent concepts of label merging nor label push and pop operations in the optical domain, and label swapping can be realized through wavelength conversion. The transparency and multiprotocol properties of such a control plane approach would allow an OXC to route optical channel trails carrying various types of digital payloads (including IP, ATM, SDH, etc.) coherently and uniformly.
GMPLS Control Plane Functions and Services
GMPLS focuses mainly on the control plane services that perform connection management for the data plane (the actual forwarding logic) for both packet-switched interfaces and non-packet-switched interfaces. The GMPLS control plane essentially facilitates a small set of basic functions, described in the sections that follow.
The fundamental service offered by the GMPLS control plane is dynamic end-to-end connection provisioning. The operators need only specify the connection parameters and send them to the ingress node. The network control plane then determines the optical paths across the network according to the parameters that the user provides and signals the corresponding nodes to establish the connection. The whole procedure can be done within seconds instead of hours. The other important service is bandwidth on demand, which extends the ease of provisioning even further by allowing the client devices that connect to the optical network to request the connection setup in real time as needed. In order to establish a connection that will be used to transfer data between a source–destination node pair, a light path needs to be established by allocating, in the presence of the so-called continuity constraint, the same wavelength throughout the route of the transmitted data or selecting the proper wavelength conversion-capable nodes across the path. In fact, if the wavelength continuity constraint is not fully enforced, some wavelength conversion-capable nodes can be placed in the network to reduce the overall blocking probability in case of wavelength resource exhaustion on some nodes. Light paths can span more than one fiber link and remain entirely optical from end to end.
However, according to the mandatory clash constraint, two light paths traversing the same fiber link cannot share the same wavelength on that link. That is, each wavelength on a given fiber is not a sharable resource between light paths.
In general, if there are multiple feasible wavelengths (lambdas) between a source node and a destination node, then a Wavelength Assignment algorithm is required to select a wavelength for a given light path. The wavelength selection can be performed either after an optical route has been determined (in the so-called decoupled approach), or in parallel with finding a route. In the latter case, we refer to the coupled approach, in which the entire job is accomplished by a single Routing and Wavelength Assignment (RWA) algorithm. When light paths are established and taken down dynamically, routing and wavelength assignment decisions must be made as connection requests arrive to the network. It is possible that, for a given connection request, there may be insufficient network resources to set up a light path, in which case the connection request is blocked. The connection may also be blocked if there is no common wavelength available on all the links along the chosen route. Thus, the objective in the dynamic situation is to choose a route and a wavelength that maximizes the probability of setting up a given connection, while at the same time attempting to minimize the blocking for future connections.
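Under the decoupled approach described above, a simple first-fit heuristic is a common choice for the wavelength-assignment step. The sketch below assumes the route is already computed and that the set of free wavelengths on each link is known; it enforces both the continuity and the clash constraints, and returns None when the request must be blocked:

```python
# First-fit wavelength assignment for a pre-computed route (the
# "decoupled" RWA approach). free[link] is the set of wavelengths still
# unused on that link. The continuity constraint requires one wavelength
# to be free on every link of the route; the clash constraint is enforced
# by removing the chosen wavelength from each link. Purely illustrative.

def assign_wavelength(route_links, free):
    usable = set.intersection(*(free[l] for l in route_links))
    if not usable:
        return None  # connection blocked: no common free wavelength
    lam = min(usable)  # first-fit: pick the lowest-indexed wavelength
    for l in route_links:
        free[l].discard(lam)  # now exclusive on each traversed link
    return lam
```

First-fit tends to pack low-indexed wavelengths, leaving higher ones free along whole routes for future requests, which is why it performs well against blocking despite its simplicity.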
In addition, because the quality of an optical signal degrades as it travels through several optical components and fiber segments, the deployment of "long-distance" light paths may require signal regeneration at strategic locations in a nationwide or global WDM network. As a result, the algorithms performing routing and wavelength assignment, virtual-topology embedding, wavelength conversion, etc. must also be mindful of the locations of the sparse signal regenerators in the network. Such regenerators, which are placed at select locations in the network, "clean up" the optical WDM signal either entirely in the optical domain or through an optoelectronic conversion followed by an electro-optic conversion. Thus the signal from the source travels through the network as far as possible before its quality drops below a certain threshold, thereby requiring it to be regenerated at an intermediate node. The same signal could be regenerated several times in the network before it reaches the destination.
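The greedy placement rule just described (let the signal travel as far as the quality budget allows, then regenerate) can be sketched as follows; the per-hop penalties and the budget are made-up illustrative numbers, not real optical impairment figures:

```python
# Greedy regeneration placement: the signal crosses hops until the
# accumulated degradation would exceed the quality budget, and is then
# regenerated (degradation reset) at the node entering that hop.
# Penalties and budget are illustrative, not real impairment values.

def regeneration_points(hop_penalties, budget):
    points, used = [], 0.0
    for i, penalty in enumerate(hop_penalties):
        if used + penalty > budget:
            points.append(i)  # regenerate at the node entering hop i
            used = 0.0
        used += penalty
    return points
```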
Furthermore, in current multilayer transport networks the bandwidth demanded by traffic typically is orders of magnitude lower than the capacity of lambda links, and the number of available wavelengths per fiber is limited and costly. Hence, it is not worth assigning exclusive end-to-end light paths to these demands, so a better sub-lambda granularity is required. Thus, to increase the throughput of a network with a limited number of lambdas per fiber, traffic grooming is required in certain nodes, typically those on the network edge.
The GMPLS control plane ensures traffic-grooming capability on edge nodes by operating on a two-layer model; that is, an underlying pure optical wavelength routed network and an "optoelectronic" time-division multiplexed layer built over it. In the wavelength routed layer, operating exclusively at lambda granularity, when a transparent light path connects two physically adjacent or distant nodes, these nodes will seem adjacent for the upper layer. The upper layer can perform multiplexing of different traffic streams into a single wavelength-based light path through simultaneous time and space switching. Similarly it can demultiplex different traffic streams of a single lambda path. It can also perform remultiplexing: some of the demands demultiplexed can be again multiplexed into some other wavelength paths and handled together along it. This is due to the "generalized" and hence multilayer nature of the GMPLS control plane.
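Grooming sub-lambda demands into wavelength channels is essentially a bin-packing problem. The following first-fit sketch, with illustrative capacities in Gbps, shows how an edge node might multiplex several small demands into as few light paths as possible:

```python
# Traffic grooming as first-fit bin packing: sub-lambda demands (Gbps)
# are multiplexed into wavelength channels of fixed capacity at an edge
# node. Returns, for each demand, the index of the light path it was
# groomed into, plus the number of light paths opened. Illustrative only.

def groom(demands_gbps, lambda_capacity_gbps):
    lambdas = []     # remaining capacity of each open light path
    placement = []   # light-path index chosen for each demand
    for d in demands_gbps:
        for i, remaining in enumerate(lambdas):
            if d <= remaining:
                lambdas[i] -= d
                placement.append(i)
                break
        else:
            # no existing light path fits: open a new one
            lambdas.append(lambda_capacity_gbps - d)
            placement.append(len(lambdas) - 1)
    return placement, len(lambdas)
```

Here four demands totaling 14 Gbps fit into two 10-Gbps light paths instead of four, which is the throughput gain grooming is meant to deliver.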
The electronic layer is clearly required for multiplexing packets coming from different ports. This upper electronic layer can be a classical or "next-generation" technology, such as IP/MPLS, but it can also be based on any other networking technology (that is, SDH/SONET, ATM, Ethernet, and so on). However, the technology of the upper layer must be unique for all traffic streams that have to be demultiplexed and then multiplexed again, because the network cannot directly multiplex, for example, ATM cells with Ethernet frames.
Another service, which gives users the greatest flexibility in handling their own virtual network topologies on the transport core, is the Optical Virtual Private Network (OVPN), which allows users full resource control of a defined partition of the carrier optical network. Although users have full resource control of that portion of the network, the OVPN is just a logical network partition, and the end users still do not have access and visibility to the rest of the carrier's network. This service can save the carrier's operation resources by allowing end users to perform circuit provisioning and setup procedures.
GMPLS encompasses control plane signaling for multiple interface types. The diversity of controlling not only switched packets and cells but also TDM network traffic and optical network components makes GMPLS flexible enough to position itself in the direct migration path from electronic to all-optical network switching. The five main interface types supported by GMPLS are Packet Switch Capable (PSC), Layer-2 Switch Capable (L2SC), Time-Division Multiplexing Capable (TDMC), Lambda Switch Capable (LSC), and Fiber Switch Capable (FSC).
These supported interfaces are hierarchical in structure and controlled simultaneously by GMPLS.
GMPLS defines several new forms of label—the generalized label objects. These objects include the generalized label request, the generalized label, the explicit label control, and the protection flag. The generalized label can be used to represent timeslots, wavelengths, wavebands, or space-division multiplexed positions.
Plain MPLS embeds labels in the cell or packet structure for in-band control plane signaling, but with the different kinds of interfaces supported by GMPLS it is impossible to embed label-specific information, in terms of fiber port or wavelength switching, into the traffic packet structure. Consequently, new "virtual" labels have been added to the MPLS label structure. These virtual labels comprise specific indicators that represent wavelengths, fiber bundles, or fiber ports and are distributed to GMPLS nodes through out-of-band GMPLS signaling. This out-of-band signaling introduces a control-channel separation problem.
With MPLS, the control information is found in the label, which is directly attached to the data payload. However, when you send the control information out of band, the label is separated from the data that it is attempting to control. GMPLS provides a means for identifying explicit data channels. Having the ability to identify data channels allows the control message to be associated with a particular data flow, whether it is a wavelength, fiber, or fiber bundle.
Generalized Label-Switched Paths
The handling of label-switched paths (LSPs) under GMPLS differs from that of MPLS. MPLS does not provide for bidirectional LSPs: an LSP for each direction has to be established in turn. Under GMPLS, the LSP can be established bidirectionally. The traffic-engineering requirements for the bidirectional LSP are the same in both directions, and it is established for both directions through only one signaling message, reducing setup latency. In the optical environment, OXCs translate label assignments into corresponding wavelength assignments and set up generalized LSPs (G-LSPs) using their local control interfaces to the other switching devices. Subsequent to G-LSP setup, no explicit label or lambda lookup or processing operations are performed by the OXC nodes.
GMPLS supports traffic engineering by allowing the node at the network ingress to specify the route that a G-LSP will take by using explicit light-path routing. An explicit route is specified by the ingress as a sequence of hops and wavelengths that must be used to reach the egress, which is different from the hop-by-hop routing that is usually associated with PSC networks.
GMPLS also maintains the capability already available with MPLS to nest G-LSPs. Nested G-LSPs make possible the building of a forwarding hierarchy. At the top of this hierarchy are nodes that have FSC interfaces, followed by nodes that have LSC interfaces, followed by nodes that have TDMC interfaces, and followed by nodes with PSC interfaces. Nesting of G-LSPs between interface types increases flexibility in service definition and makes it possible for service providers operating a GMPLS network to deliver both bundled and unbundled services.
Because the deployment of DWDM equipment makes feasible the creation of a large number of individual connections between two adjacent nodes, another very useful feature is link bundling, the ability to simultaneously handle multiple adjacent links. Link bundling treats these parallel links as a single link.
In order for adjacent links to be bundled, they must be on the same GMPLS segment, they must be of the same type, and they must have the same traffic-engineering requirements. These requirements reduce the number of link advertisements that need to be maintained throughout the network, thereby increasing control plane scalability. Just as in MPLS label stacking, GMPLS labels contain information about only a single level of hierarchy. The difference for GMPLS is that this hierarchy can be fiber-, wavelength-, timeslot-, packet-, or cell-based.
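The three bundling conditions above translate directly into a simple eligibility check; the field names below are illustrative, not taken from any protocol encoding:

```python
# Eligibility check for link bundling: component links may be advertised
# as one bundle only if they run between the same pair of nodes (same
# segment), are of the same interface type, and share identical
# traffic-engineering parameters. Field names are illustrative.

def can_bundle(links):
    first = links[0]
    return all(
        l["endpoints"] == first["endpoints"]
        and l["type"] == first["type"]
        and l["te_metric"] == first["te_metric"]
        for l in links[1:]
    )
```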
For instance, if a connection is desired from one PSC interface to another PSC interface, and the traffic traverses physically separate fibers, a unique LSP has to be established for each level in turn. First, the FSC LSP, then the LSC LSP, then the TDMC LSP, and finally the PSC LSP have to be established through GMPLS signaling.
Signaling and Routing Protocols
In order to set up a light path, a signaling protocol is also required to exchange control information among nodes, to distribute labels, and to reserve resources along the path. In our case, the signaling protocol is closely integrated with the routing and wavelength assignment protocols. Suitable signaling protocols for the GMPLS control plane include Resource Reservation Protocol (RSVP) and Constraint-Based Label Distribution Protocol (CR-LDP). Any of the objects that are defined within the GMPLS specification can be carried within the messages of either of these signaling protocols, which are responsible for all the connection management actions, such as setting up, modifying, or removing G-LSPs. Clearly, support for provisioning and restoration of end-to-end optical trails within a photonic network consisting of heterogeneous networking elements imposes new requirements on these signaling protocols. Specifically, optical trails require small setup latency (especially for restoration purposes), support for bidirectional trails, rapid failure detection and notification, and fast intelligent trail restoration.
Both RSVP and CR-LDP can be used to reserve a single wavelength for a light path if the wavelength is known in advance. These protocols can also be modified to incorporate wavelength selection functions into the reservation process. In RSVP, signaling takes place between the source and destination nodes. The signaling messages may contain information such as QoS requirements for the carried traffic and label requests for assigning labels at intermediate nodes that reserve the appropriate resources for the path. CR-LDP uses TCP sessions between nodes in order to provide a hop-by-hop reliable distribution of control messages, indicating the route and the required traffic parameters for the route. Each intermediate node reserves the required resources, allocates a label, and sets up its forwarding table before backward signaling to the previous node.
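The backward, hop-by-hop behavior just described can be sketched as follows; the data structures are illustrative and no real RSVP or LDP messages are modeled:

```python
# Sketch of backward, hop-by-hop LSP setup in the style described above:
# the request reaches the egress, then each node (egress first) reserves
# resources, allocates a label, installs its forwarding entry, and only
# then hands the label to its upstream neighbor. Illustrative only.

def setup_lsp(route, allocate_label, reserve):
    tables = {node: {} for node in route}
    downstream_label = None
    for node in reversed(route):       # backward signaling phase
        if not reserve(node):
            return None                # setup fails: resources missing
        label = allocate_label(node)
        # map the label handed upstream to the next hop's label
        tables[node][label] = downstream_label
        downstream_label = label
    return downstream_label, tables    # label the ingress will use
```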
To correctly perform resource reservation, allocation, and topology discovery on the available optical link resources, each node needs to maintain a representation of the state of each link in the network. The link state includes the total number of active channels, the number of allocated channels, and the number of channels reserved for light-path restoration. Additional parameters can be associated with allocated channels; for example, some light paths can be preemptable or have associated hold priorities. When the local inventory is constructed, the node engages in a routing protocol to distribute and maintain the topology and resource information. Standard IP routing protocols, such as Open Shortest Path First (OSPF) or Intermediate System-to-Intermediate System (IS-IS) with GMPLS Traffic Engineering extensions, can be used to reliably propagate the information.
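The per-link inventory described above might be modeled as a small record like this; the field names are illustrative, not drawn from any routing protocol encoding:

```python
# Sketch of the per-link state a node maintains and later floods via
# OSPF-TE or IS-IS-TE: total channels, channels allocated to working
# light paths, and channels held back for restoration. Illustrative.

from dataclasses import dataclass

@dataclass
class LinkState:
    total_channels: int
    allocated: int = 0
    reserved_for_restoration: int = 0

    def available(self):
        # channels a new working light path may still claim
        return (self.total_channels - self.allocated
                - self.reserved_for_restoration)

    def allocate(self):
        if self.available() <= 0:
            raise RuntimeError("no free channel on this link")
        self.allocated += 1
```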
These extensions to OSPF and IS-IS add information about links and nodes to the link-state database, including the types of LSPs that can be established across a given link (for example, packet forwarding, SONET/SDH trails, wavelengths, or fibers), the currently unused bandwidth, the maximum size of G-LSP that can be established, and the administrative groups supported. This information allows the node computing an explicit route for an LSP to do so more intelligently. Furthermore, any switching node cooperating in the GMPLS control plane maintains a per-interface or per-fiber Wavelength Forwarding Information Base (WFIB), because lambdas and channels (labels) are specific to a particular interface or fiber, and the same lambda or channel (label) can be used concurrently on multiple interfaces or fibers.
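The consequence of the per-interface scoping is that a WFIB lookup must key on the pair (incoming interface, incoming lambda), never on the lambda alone. A minimal sketch, with invented class and method names, makes the point:

```python
# Sketch of a per-interface Wavelength Forwarding Information Base (WFIB).
# Because the same lambda can appear concurrently on several fibers, the
# lookup key pairs the incoming interface with the label. All names here
# are illustrative assumptions.

class WFIB:
    def __init__(self):
        self._table = {}  # (in_interface, in_lambda) -> (out_interface, out_lambda)

    def add_cross_connect(self, in_if, in_lambda, out_if, out_lambda):
        self._table[(in_if, in_lambda)] = (out_if, out_lambda)

    def switch(self, in_if, in_lambda):
        return self._table[(in_if, in_lambda)]

wfib = WFIB()
# The same 1550.12 nm lambda arrives on two different input fibers and is
# switched independently -- which a lambda-only key could not express.
wfib.add_cross_connect("fiber0", 1550.12, "fiber2", 1550.12)
wfib.add_cross_connect("fiber1", 1550.12, "fiber3", 1554.13)
print(wfib.switch("fiber1", 1550.12))   # ('fiber3', 1554.13)
```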
Link Management Protocol
GMPLS also uses the Link Management Protocol (LMP) to communicate the proper cross-connect information between network elements. LMP runs between adjacent systems for link provisioning and fault isolation. It can be used with any type of network element and is particularly valuable in natively photonic switches. LMP automatically generates and maintains the associations between links and labels used in label swapping; automating the labeling process simplifies management and avoids the errors associated with manual label assignment. LMP provides control-channel management, link-connectivity verification, link-property correlation, and fault isolation. Control-channel management establishes and maintains connectivity between adjacent nodes using a keepalive protocol. Link verification checks the physical connectivity between nodes, thereby detecting loss of connectivity and misrouted cable connections. Fault isolation pinpoints failures in both electronic and optical links, without regard to the data format traversing the link.
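The keepalive mechanism behind control-channel management can be caricatured in a few lines. This is a deliberately abstract model: the class name, timer values, and state labels are assumptions for illustration and are not taken from the LMP specification.

```python
# Toy model of LMP control-channel management: adjacent nodes exchange
# periodic Hello messages, and the channel is declared down when the peer's
# hellos stop arriving within a dead interval. Names and timings are
# illustrative assumptions.

class ControlChannel:
    def __init__(self, hello_interval=0.15, dead_factor=3):
        self.dead_interval = hello_interval * dead_factor
        self.last_hello = None
        self.state = "DOWN"

    def receive_hello(self, now):
        # A hello from the peer refreshes the liveness timestamp.
        self.last_hello = now
        self.state = "UP"

    def tick(self, now):
        """Called periodically; drops the channel if hellos have stopped."""
        if self.state == "UP" and now - self.last_hello > self.dead_interval:
            self.state = "DOWN"
        return self.state

cc = ControlChannel()
cc.receive_hello(now=0.0)
print(cc.tick(now=0.3))   # UP: still inside the 0.45 s dead interval
print(cc.tick(now=1.0))   # DOWN: the peer's hellos have stopped
```

In a real node, a channel going down would trigger the fault-isolation and restoration procedures discussed above rather than a simple state flip.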
For link bundles to be handled properly, GMPLS needed a method to manage the links between adjacent nodes, and LMP was developed to address several link-specific problems that surfaced when generalizing the MPLS protocols across different interface types. Its main responsibilities are the control-channel management, link-verification, link-property correlation, and fault-isolation functions just described.
Although LMP assumes the messages are IP encoded, it does not dictate the actual transport mechanism used for the control channel. However, the control channel must terminate on the same two nodes that the bearer channels span. Therefore, this protocol can be implemented on any OXC, regardless of the internal switching fabric. A requirement for LMP is that each link has an associated bidirectional control channel and that free bearer channels must be opaque (that is, able to be terminated); however, when a bearer channel is allocated, it may become transparent. Note that this requirement is trivial for optical cross-connects with electronic switching planes, but is an added restriction for photonic switches.
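The termination requirement in the previous paragraph can be modeled concretely: a free bearer channel must be opaque (terminable) so that LMP can run verification over it, while an allocated channel may become transparent. The classes and names below are illustrative assumptions, not LMP-defined objects.

```python
# Sketch of the LMP transparency requirement: the control channel spans the
# same node pair as the bearer channels, free bearer channels are opaque
# (terminable) for link verification, and allocated channels may become
# transparent. All names are illustrative assumptions.

class BearerChannel:
    def __init__(self, chan_id):
        self.chan_id = chan_id
        self.allocated = False

    @property
    def opaque(self):
        # Free channels must be terminable so verification traffic
        # can be injected and detected on them.
        return not self.allocated

    def allocate(self):
        self.allocated = True   # may now pass through transparently

class LMPLink:
    def __init__(self, node_a, node_b, n_channels):
        self.endpoints = (node_a, node_b)   # control channel spans this same pair
        self.channels = [BearerChannel(i) for i in range(n_channels)]

    def verifiable_channels(self):
        return [c.chan_id for c in self.channels if c.opaque]

link = LMPLink("OXC-1", "OXC-2", n_channels=4)
link.channels[0].allocate()
print(link.verifiable_channels())   # [1, 2, 3]: only free channels remain opaque
```

This also illustrates why the requirement is trivial for cross-connects with electronic switching planes (every channel is regenerated, hence terminable) but constrains purely photonic fabrics.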
Innovations in the field of optical components will take advantage of the introduction of all-optical networking in all areas of information transport and will offer system designers the opportunity to create new solutions that will allow smooth evolution of all telecommunication networks. A new class of versatile IP-addressable optical switching devices is emerging, operating according to a common GMPLS-based control plane to support full-featured traffic engineering in modern optical transparent infrastructures.
The main advantage of this approach is that it builds on existing, widely deployed protocols while simplifying network management and engineering tasks, which can be performed in a unified way across both the data and optical domains. Furthermore, it offers a functional framework that can accommodate future expectations about how networks will operate and how services will be provided to clients. Thus we envision a horizontal network, harmonized by a common GMPLS-based control plane, in which all network elements work as peers to dynamically establish optical paths through the network.
This new photonic internetwork will make it possible to provision high bandwidth in tenths of a second, enabling new revenue-generating services and dramatic cost savings for service providers.
In the same way that digital communication technologies changed the twentieth century into the "electronic century," the optical technologies discussed in this article will make the next century "the photonic century." All winning strategies must rely on such GMPLS-based photonic infrastructures—an environment in which innovations work at the speed of light.