Technical Services Newsletter

Chalk Talk

MPLS and Next-Generation Networks: Foundations for NGN and Enterprise Virtualization

By Monique Jeanne Morrow

Description

Network managers often question the value that Multiprotocol Label Switching (MPLS) brings to their business environment.

This article provides network managers with a precise guide for evaluating the benefits of MPLS-based applications and solutions. The article guides the network manager through the business case for MPLS by exploring other technology alternatives, including their applications, benefits, and deficiencies. It is a synopsis of the Cisco Press book MPLS and Next-Generation Networks: Foundations for NGN and Enterprise Virtualization, written by Azhar Sayeed and Monique Morrow.

Drivers Towards Virtualization

Service providers have been evaluating and evolving their multiple networks toward a single converged infrastructure upon which they will deploy existing and future services.

IP/MPLS is a technology that "virtualizes" services and applications, and it is the foundation for the service provider Next Generation Network (NGN) evolution, or network convergence. To be "service aware," the architecture should offer a differentiated set of services to client applications. The factors behind the convergence trend in the industry include reducing operating expense, optimizing capital expenditure, and generating new services, ultimately to retain profitability. Declining revenues, aging infrastructure, increased competition amongst service providers, and regulatory measures designed to open up the market are additional factors behind the adoption of IP/MPLS by many service providers today. For service providers, aging infrastructure can be on the order of 10 years or older, for example some Public Switched Telephone Network (PSTN) switches; consequently, maintaining such infrastructure becomes cost prohibitive over time. Additional critical goals of service convergence are to decrease the time to market (TTM) for new services (for example, IP-based services) and to facilitate operating expense (OpEx) reduction, for example by consolidating multiple Operations Support Systems (OSS).

The mid- to long-term strategy, over the next 3-7 years, is for service providers to consolidate these various networks onto an all-packet network that supports both existing revenue streams and future profitable services. Some service providers have already commenced this consolidation. In the long term, the telecommunications industry can no longer support multiple networks to deploy services, as these become cost prohibitive to maintain (numerous OSS, a variety of Network Operations Centers, and so on). Content, broadband, and mobility are drivers for these new profitable services. An evolutionary strategy means a gradual deployment of new services for top-line growth and new customers that require the lowest-cost network architectures; migration to IP/MPLS should therefore facilitate this consolidation and the delivery of common services.

Figure 1 depicts the evolution towards a multi-service aware IP/MPLS core, highlights the operational inefficiencies of maintaining multiple OSS, and identifies the opportunity for service automation that becomes possible with a converged network using MPLS.

The transition from today's service provider network operation to a multi-service aware IP/MPLS core aims to:
  • Create operational efficiencies and increase automation in a highly technology-intensive market
  • Enable competitive differentiation and customer retention through high-margin, bundled services
  • Progressively consolidate disparate networks
  • Sustain existing business while rolling out new services

Figure 1: Service Convergence

This convergence trend towards a packet-based network, namely IP/MPLS, has often been called the "Next Generation Network," or NGN, a term depicting the evolution from a circuit-switched paradigm to IP/MPLS. The International Telecommunication Union (ITU) has defined the NGN in ITU-T Recommendation Y.2001 as follows:

"Next Generation Network (NGN): a packet-based network able to provide telecommunication services and able to make use of multiple broadband, QoS-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users."

NGNs within service provider companies are additionally characterized by factors such as fixed-mobile convergence (FMC) and the use of broadband and cable to deploy triple- and quadruple-play services: voice, data, and video, plus mobility (add grid services and we have quadruple play plus). Architecturally, convergence can be depicted by layer simplification; for example, IP directly over optics.

Service provider business engineering processes are often complex and cumbersome after years of supporting multiple OSS platforms. Such complexity slows service creation, at a time when providers are under pressure to reduce OpEx and customers with global subsidiaries require assured end-to-end quality of service when traffic transits multiple providers. Using IP/MPLS for service automation presents an opportunity to reduce this complexity. Work is under way in the industry to explore multiprovider mechanisms: for example, the IPsphere Forum is defining business signalling across a pan-provider environment based upon a service-oriented architecture; the MPLS and Frame Relay Alliance (MFA) is defining MPLS layer requirements for inter-carrier interconnection; and the MIT Communications Futures Program is examining interprovider QoS, to name a few initiatives.

Virtualization in the Enterprise

What does this virtualization mean for enterprise organizations?

Enterprise customers have invested in applications such as Enterprise Resource Planning (ERP), Supply Chain Management (SCM), and Customer Relationship Management (CRM) that facilitate collaborative workplace processes and require integration with the corporate LAN.

ERP is an industry term for the broad set of activities supported by multi-module application software that helps a manufacturer or other business manage the important parts of its business, including product planning, parts purchasing, maintaining inventories, interacting with suppliers, providing customer service, and tracking orders. ERP can also include application modules for the finance and human resources aspects of a business. Typically, an ERP system uses or is integrated with a relational database system.

SCM is fundamentally the delivery of customer and economic value through integrated management of the flow of physical goods and associated information, from raw materials sourcing to delivery of finished products to consumers.

CRM is fundamentally an information industry term for methodologies, software, and usually Internet capabilities that help an enterprise manage customer relationships in an organized way. For example, an enterprise might build a database about its customers that described relationships in sufficient detail so that management, salespeople, people providing service, and perhaps the customer directly could access information, match customer needs with product plans and offerings, remind customers of service requirements, and know what other products a customer had purchased. These applications facilitate workflow collaboration across the enterprise organization.

Large enterprises need efficient solutions to provide real-time access to these applications for customers who may be geographically dispersed throughout the world, in places where leased lines and Frame Relay may not be readily accessible or even cost effective. Total Cost of Ownership (TCO) is an important driver for an enterprise customer when comparing solutions and alternatives. Enterprise customers are weighing the pros and cons of managing disparate networks, which often leads to high operating costs. Additionally, global reach, quality of service, security, and scalability are drivers toward considering an IP-VPN solution based on MPLS.

Why are enterprises migrating to Layer 3 services, particularly those based on MPLS? While traditional factors such as cost and reliability are significant, enterprises also face new challenges such as distributed applications and business-to-business communications that facilitate workflow collaboration. MPLS provides the any-to-any connectivity these applications require, as opposed to the complex overlay implementations common in Layer 2 networks. Moreover, these applications are IP-based, so enterprise organizations have an opportunity to mitigate protocol complexity by, for example, executing a strategy that reduces the protocol set to IP for applications.

Business separation, mergers and de-mergers, and acquisitions require an extranet implementation coupled with security. Layer 2 implementations can be complex because of the N*(N-1)/2 challenge: a full mesh of N sites requires N*(N-1)/2 connections, and adding a site means reconfiguring every other site. By contrast, a peer model at Layer 3 (for example, a Layer 3 MPLS VPN) has each site peer only with the provider, as the short calculation below illustrates.
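
As a quick illustration of this scaling difference, the short calculation below compares the number of Layer 2 connections in a full mesh with the number of provider attachments in a peer model; the site counts are arbitrary.

  # Illustrative arithmetic only: compare a Layer 2 full-mesh overlay with a
  # Layer 3 MPLS VPN peer model as the number of sites (N) grows.

  def full_mesh_links(n: int) -> int:
      """Point-to-point connections needed to fully mesh n sites."""
      return n * (n - 1) // 2

  def peer_model_links(n: int) -> int:
      """In the peer model each site needs only one attachment to the provider."""
      return n

  for n in (5, 10, 50, 100):
      print(f"{n:>4} sites: full mesh = {full_mesh_links(n):>5} links, "
            f"peer model = {peer_model_links(n):>3} attachments")

  # At 100 sites the full mesh needs 4950 connections, each of which must be
  # touched again when a new site is added; the peer model needs 100 attachments.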

Figure 2 depicts Service Virtualization constructs from an enterprise organization perspective.

Figure 2: Virtualization Constructs

Further, when assessing quality of service requirements, we need to associate each service with metrics such as jitter, delay, and the bandwidth required to support it.

In determining bandwidth for streaming services, bulk data transfer/retrieval and synchronization information require less than approximately 384 kb/s, while a movie clip, surveillance feed, or real-time video requires between 20 and 384 kb/s. Bandwidth requirements for conversational/real-time services such as audio and video applications include, for example, videophone at 32 to 384 kb/s, Telnet at less than 1 KB, and telemetry at less than 28.8 kb/s.

Finally, service providers tend to bundle, that is, propose multiple services together with the aim of preventing customer churn. An example is triple play, where voice, data, and video may be offered as a bundle, perhaps over a single transport link. Bandwidth for a cable modem may be approximately 1 Mb/s upstream to the provider and 3 Mb/s downstream to the subscriber. As an example of dimensioning a service bundle, one could additionally prioritize Voice over IP (VoIP) traffic for two VoIP phone lines with per-call charging, plus broadcast video at MPEG-2 half-D1 quality with one channel per set-top box. A rough downstream budget for such a bundle is sketched below.
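
As a rough illustration of such dimensioning, the sketch below totals a hypothetical downstream budget against a 3 Mb/s cable-modem link. The per-service rates (an MPEG-2 half-D1 channel at roughly 1.8 Mb/s and a VoIP call at roughly 87 kb/s including IP overhead) are assumed example figures, not values from this article.

  # Rough downstream dimensioning for a hypothetical triple-play bundle.
  # The per-service rates below are illustrative assumptions, not measured values.

  SERVICES_KBPS = {
      "broadcast video, MPEG-2 half-D1, 1 channel": 1800,  # ~1.8 Mb/s assumed
      "VoIP line 1 (G.711 + IP overhead)": 87,
      "VoIP line 2 (G.711 + IP overhead)": 87,
      "best-effort data (remaining headroom)": 0,           # computed below
  }

  LINK_DOWNSTREAM_KBPS = 3000  # ~3 Mb/s downstream to the subscriber

  committed = sum(SERVICES_KBPS.values())
  SERVICES_KBPS["best-effort data (remaining headroom)"] = LINK_DOWNSTREAM_KBPS - committed

  for name, rate in SERVICES_KBPS.items():
      print(f"{name:<45} {rate:>5} kb/s")
  print(f"{'total downstream budget':<45} {LINK_DOWNSTREAM_KBPS:>5} kb/s")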

A Note on Quality of Service

QoS is based on some basic building blocks that allow traffic characterization or classification, policing, queuing and random discard, scheduling, and transmission. Each of these building blocks plays a vital role in implementing QoS in IP Networks.

Traffic Classification and Marking: To provide the right QoS behavior for applications, traffic first needs to be classified. Traffic classification simply means identifying traffic types for treatment in the network. Traffic can be classified on almost any criterion: a simple one is source and destination address; others are the protocol type or the application type; another is an existing traffic marking; yet another is deep packet inspection to identify payload types such as web URLs, transactions, or interactive gaming. Once traffic is classified, it is marked for appropriate treatment in the network by setting the DiffServ field (or the IP Type of Service field) in the IP header.
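
A minimal sketch of classification and marking follows. The DSCP code points used (EF = 46, CS3 = 24, AF41 = 34) are standard values, but the matching rules themselves are invented for illustration.

  # Minimal sketch of classification and DSCP marking. The matching rules
  # below are made up for illustration; only the DSCP values are standard.

  from dataclasses import dataclass

  @dataclass
  class Packet:
      src: str
      dst: str
      protocol: str   # e.g. "udp", "tcp"
      dst_port: int
      dscp: int = 0   # best effort by default

  def classify_and_mark(pkt: Packet) -> Packet:
      """Set the DSCP field based on simple traffic-type rules."""
      if pkt.protocol == "udp" and 16384 <= pkt.dst_port <= 32767:
          pkt.dscp = 46        # EF: voice bearer traffic
      elif pkt.protocol == "tcp" and pkt.dst_port in (1720, 5060):
          pkt.dscp = 24        # CS3: call signalling
      elif pkt.protocol == "tcp" and pkt.dst_port == 443:
          pkt.dscp = 34        # AF41: interactive / transactional traffic
      return pkt

  print(classify_and_mark(Packet("10.1.1.1", "10.2.2.2", "udp", 16500)).dscp)  # 46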

Policing: Traffic policing determines whether incoming traffic is in contract or out of contract. Traffic is in contract if the user is sending at or below the specified rate and burst size, and out of contract if it exceeds them. A policer detects when too much traffic is arriving and marks traffic for transmission or discard. For example, out-of-contract traffic can be re-marked with a different (lower grade) QoS marking for best-effort delivery, so that if congestion occurs the out-of-contract traffic can be dropped first. If the operator does not police, the operator cannot know whether the links are oversubscribed; policing is key to determining the actual oversubscription factor. For better traffic control, policing can be applied selectively to individual QoS classes to meet specific QoS delivery targets or traffic contracts.
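
Policers are commonly implemented as token buckets. The following is a minimal single-rate sketch, with the rate and burst values chosen purely for illustration.

  import time

  class TokenBucketPolicer:
      """Single-rate policer: packets are conforming while tokens remain."""

      def __init__(self, rate_bps: float, burst_bytes: int):
          self.rate = rate_bps / 8.0      # refill rate in bytes per second
          self.burst = burst_bytes        # bucket depth
          self.tokens = float(burst_bytes)
          self.last = time.monotonic()

      def police(self, packet_len: int) -> str:
          now = time.monotonic()
          # Refill tokens for the elapsed time, capped at the burst size.
          self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
          self.last = now
          if packet_len <= self.tokens:
              self.tokens -= packet_len
              return "in-contract"        # transmit with the original marking
          return "out-of-contract"        # re-mark (or drop) this packet

  policer = TokenBucketPolicer(rate_bps=128_000, burst_bytes=3000)
  for size in (1500, 1500, 1500):
      print(policer.police(size))         # third back-to-back packet exceeds the contract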

Queuing and Random Discard: When the incoming traffic rate is greater than the outgoing rate, traffic must be queued; otherwise it is discarded. Traffic can be queued per individual flow or per aggregate QoS group or class. For example, 100 voice flows can be queued separately, resulting in 100 queues, each serviced fairly; or all 100 flows can be placed in the same class queue and serviced at an aggregate level with the highest priority. Queuing flows separately provides isolation between flows, so that one flow cannot hog the bandwidth of others. Voice flows rarely hog bandwidth, because standard codecs send at a fixed rate, but the concern is real when voice, video, mission-critical data, and best-effort data share a queue: a video flow can hog bandwidth and starve ERP or voice traffic queued alongside it. However, per-flow queuing means that a large number of flows requires a large number of queues and some weighted fair queuing mechanism, which can become a scaling issue. The method recommended most often is to queue voice packets in the highest priority queue and place video and ERP traffic in separate queues to provide isolation and maintain QoS delivery guarantees. Random discard can be applied to a queue when it builds up due to a high incoming traffic rate. Random discard, or Weighted Random Early Detection (WRED), discards lower priority or out-of-contract packets from the queue so that higher priority packets are not lost.
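
The drop decision in WRED can be sketched as a linear ramp between a minimum and maximum queue threshold; the thresholds and probabilities below are illustrative, not recommended settings.

  import random

  def wred_drop(avg_queue_depth: float, min_th: float, max_th: float,
                max_drop_prob: float) -> bool:
      """Return True if the packet should be randomly discarded.

      Below min_th nothing is dropped; above max_th everything is dropped;
      in between, the drop probability ramps up linearly (simplified WRED).
      """
      if avg_queue_depth < min_th:
          return False
      if avg_queue_depth >= max_th:
          return True
      drop_prob = max_drop_prob * (avg_queue_depth - min_th) / (max_th - min_th)
      return random.random() < drop_prob

  # Out-of-contract (lower priority) packets get more aggressive thresholds,
  # so under congestion they are discarded before in-contract packets.
  print(wred_drop(avg_queue_depth=30, min_th=20, max_th=40, max_drop_prob=0.1))  # in-contract profile
  print(wred_drop(avg_queue_depth=30, min_th=10, max_th=30, max_drop_prob=0.5))  # out-of-contract profile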

Scheduler: Queues are serviced at a specified rate. If all queues are serviced fairly, such that equal amounts of data are transmitted from each queue in one cycle, this is called weighted fair queuing (WFQ); the queues can be weighted to provide a bias for high-priority traffic. If class-based queuing is used, each class can be serviced at a specified rate, providing fairness to all traffic. Alternatively, a strict priority scheduler services the priority queue first until no packets remain in it; only then are the other queues served. If there are always packets to send in the priority queue, the other queues can be starved.
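
A simplified sketch of these two disciplines follows, using a byte-counted weighted round-robin as a stand-in for WFQ-style weighted servicing and a strict-priority queue for voice; packet sizes and weights are illustrative.

  from collections import deque

  queues = {
      "voice": deque(),   # serviced with strict priority
      "video": deque(),
      "data":  deque(),
  }
  weights = {"video": 3000, "data": 1000}   # bytes allowed per round (illustrative)

  def dequeue_one_round():
      """Serve the priority queue first, then the weighted queues."""
      sent = []
      # Strict priority: drain voice completely before anything else.
      while queues["voice"]:
          sent.append(("voice", queues["voice"].popleft()))
      # Byte-counted weighted round-robin over the remaining classes.
      for name, quantum in weights.items():
          budget = quantum
          while queues[name] and queues[name][0] <= budget:
              size = queues[name].popleft()
              budget -= size
              sent.append((name, size))
      return sent

  # Enqueue packet sizes (bytes) and run one scheduling round.
  queues["voice"].extend([160, 160])
  queues["video"].extend([1400, 1400, 1400])
  queues["data"].extend([1500, 1500])
  print(dequeue_one_round())   # voice first, then video up to its quantum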

Transmission: How packets are transmitted on the wire is also an important factor. Voice packets are small (usually around 64 bytes), whereas data packets can be large. On low-speed links, where the serialization delay is large, a voice packet stuck behind a large data packet can exceed the link delay budget and ultimately degrade voice quality. It can therefore be useful to fragment larger packets into smaller chunks and interleave the voice packets between them. Serialization delay has a negligible effect on high-speed links; it is most pronounced at sub-T1/E1 rates, specifically data rates of 768 kb/s or less.
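
Serialization delay is simply the packet size in bits divided by the link rate, as the short calculation below shows for a few illustrative link speeds.

  # Serialization delay = packet size (bits) / link rate (bits per second).
  # Link speeds and packet sizes below are illustrative.

  def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
      return packet_bytes * 8 / link_bps * 1000

  for link_bps in (128_000, 768_000, 1_544_000, 100_000_000):
      d_data = serialization_delay_ms(1500, link_bps)   # large data packet
      d_voice = serialization_delay_ms(64, link_bps)    # small voice packet
      print(f"{link_bps/1000:>8.0f} kb/s: 1500-byte packet = {d_data:7.2f} ms, "
            f"64-byte voice packet = {d_voice:5.2f} ms")

  # At 768 kb/s a 1500-byte packet takes ~15.6 ms to serialize, a large share of
  # a typical one-way voice delay budget; at 100 Mb/s it takes ~0.12 ms.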

The building blocks previously described are used in any QoS model, whether signaled QoS (where the source signals its QoS requirements) or provisioned QoS (manually pre-provisioned by the operator).

The IETF has developed two main models for delivering QoS, both of which use the basic building blocks of queuing, policing, and discard. The first was a per-flow QoS model known as Integrated Services (IntServ). Because of the scalability issues with per-flow state, the IETF also developed an aggregate model called Differentiated Services (DiffServ). Each model classifies the incoming traffic, polices it if necessary, queues it and applies WRED, and schedules the traffic on the wire; the difference lies in the granularity, that is, the amount of state each model stores. An IP network can provide an SLA with close to absolute guarantees by combining admission control with MPLS DiffServ. The admission control function allows calls to be rejected when the network cannot guarantee QoS. By separating admission control from data-plane queuing, a compromise is struck between absolute QoS and scalability.
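
A minimal sketch of the admission-control idea follows: calls are admitted against a reserved per-class bandwidth pool and rejected once the pool is exhausted. The pool size and per-call rate are assumed example figures.

  # Minimal sketch of call admission control against a per-class bandwidth pool.
  # The pool size and per-call rate are illustrative assumptions.

  class AdmissionController:
      def __init__(self, voice_pool_kbps: int):
          self.available = voice_pool_kbps

      def admit(self, call_kbps: int) -> bool:
          """Admit the call only if the voice class still has capacity."""
          if call_kbps <= self.available:
              self.available -= call_kbps
              return True
          return False   # reject: the network cannot guarantee QoS for this call

      def release(self, call_kbps: int):
          self.available += call_kbps

  cac = AdmissionController(voice_pool_kbps=256)   # bandwidth reserved for voice on the link
  for call in range(4):
      print(f"call {call}: {'admitted' if cac.admit(87) else 'rejected'}")

  # With ~87 kb/s per call, only two calls fit into a 256 kb/s pool; further
  # calls are rejected up front rather than degrading the calls in progress.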

MPLS and Security

Security is paramount as companies migrate from Layer 2 to Layer 3 services. Detecting and responding to distributed denial of service (DDoS) attacks and providing worm containment measures without disturbing global services must be part of the overall security policy. The growing sophistication of attacks is one of the more frightening trends in the security industry. Attacks were once primarily the work of hackers who wanted to take well-known sites offline temporarily to get media attention or brag to their friends; now attacks are increasingly being used as the foundation of elaborate extortion schemes. In addition, some attacks are motivated by political or economic objectives, costing businesses and service providers millions of dollars each year. To protect core assets, providers and enterprise organizations require security as a component of NGN architecture execution.

Figure 3 summarizes MPLS security best practice guidelines for implementation.

Figure 3: MPLS Security Best Practice Guideline Summary

Summary

In summary, IT managers must continually manage costs and maintain reliable wide area network infrastructures to meet their business goals. Success in today's business climate also depends on the ability to overcome a more complex set of challenges to the corporate wide area network. Enterprise IT managers face these challenges and require a solution that addresses the following factors:

  • Geographically dispersed sites and teams that must share information across the network and have secure access to networked corporate resources.
  • Mission critical, distributed applications that must be deployed and managed on a network-wide basis. Further, IT managers are faced with a combination of centralized hosted applications and distributed applications, which complicates the management task.
  • Security requirements for networked resources and information that must be reliably available but protected from unauthorized access.
  • Business-to-business communication needs, both for users within the company and extending to partners and customers.
  • Quality of Service features that ensure end-to-end application performance.
  • Support for the convergence of previously disparate data, voice, and video networks resulting in cost savings for the enterprise.
  • Security and privacy equivalent to Frame Relay and ATM.
  • Easier deployment of productivity-enhancing applications such as enterprise resource planning, e-learning, and streaming video. (These productivity-enhancing applications are IP-based, and Layer 2 VPNs do not provide the basis to support them.)
  • Pay-as-you-go scalability as companies expand, merge, or consolidate.
  • Flexibility to support thousands of sites.

MPLS provides any-to-any connectivity; assures separation of organizations and functions by supporting the concept of the VPN; provides security through its inherent VPN capabilities; supports quality of service mechanisms; and is the basis for virtualized architectures as we move to next-generation service constructs for both service provider and enterprise business models.

About the author:

Monique Morrow

Monique Morrow is currently a Distinguished Consulting Engineer at Cisco. She has over 20 years of experience in IP internetworking, including design and implementation of complex customer projects and service development for service providers. Monique has been involved in developing managed network services such as remote access and LAN switching in a service provider environment. She has worked for both enterprise and service provider companies in the United States and in Europe. Monique led the engineering project team for one of the first European MPLS-VPN deployments, in 1999, for a European service provider. She holds an M.S. in Telecommunications Management and an MBA. She speaks French and German and is learning Mandarin.

MPLS and Next-Generation Networks: Foundations for NGN and Enterprise Virtualization
Azhar Sayeed and Monique Morrow
ISBN-10: 1587201208
Pub Date: 11/6/2006
US SRP $40.00
Publisher: Cisco Press