Service Provider Strategy

Broadband Access in the 21st Century: Applications, Services, and Technologies

What You Will Learn

Next-generation access (NGA) is the term commonly used to refer to access networks consisting wholly or partly of optical fiber, which can provide Internet access and deliver other content and applications at much faster speeds. It is a simple concept; the implementation is more complex and depends heavily on local factors, including the market and regulatory environment.
Even though incumbent operators have decades of experience in managing telecommunications networks and have already deployed fiber in their core or backbone networks, they have not previously deployed fiber on the scale required in the access network. The transformation to NGA requires operators to support new technologies and may necessitate changes in business processes and customer relationships to cope with the substantial investment needed.
The scale of the transformation, which is taking place across the world and will take many years to complete, has created an opportunity for new players to enter the market. These new entrants include competitive telecommunications operators and cable multiservice operators (MSOs), utility companies and municipal authorities, housing associations and property developers, and national and local governments. The end result is a variety of different business models and approaches to fiber deployment in access networks and the absence of a "one size fits all" solution.
By June 2011, there were more than 66 million fiber-to-the-home (FTTH) subscribers globally, according to data from the three regional FTTH councils. The shift to FTTH started in the Asia-Pacific region and the majority (46 million) of FTTH subscribers are located there. North America is home to 9.5 million FTTH subscribers. Deployment and uptake of fiber lines has been slower in Europe - just 4.6 million FTTH subscribers (plus another 5.6 million in Russia) - but the market is accelerating, with subscriptions increasing 14 percent in the first half of 2011.
This document discusses the evolution of broadband services and applications, examines the primary factors shaping the NGA market in Europe today, considers the main influences on the business case including the impact of the policy environment, and explores how different technology solutions can best deliver those services under diverse local conditions.

Evolution of Services and Applications

Broadband - a term that is often used synonymously with an Internet access connection - has become an indispensable tool for business users and consumers alike. It enables access to a plethora of services, including online shopping and banking, entertainment and gaming, remote education and teleworking, public services, and healthcare - and this list is by no means exhaustive. As a result, the number of broadband subscribers around the world continues to grow annually, and now exceeds 500 million (Point Topic, 2010).
Alongside the growth in the number of Internet connections, the capacity required by each individual subscriber is rapidly increasing. Nielsen's Law, which is based on empirical observation, states that network connection speeds for high-end home Internet users increase 50 percent per year, or, equivalently, double every 21 months. Formulated in 1998, the law has held from 1984 to the present; Nielsen added a data point of 31 Mbps for 2010.
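The arithmetic behind Nielsen's Law can be sketched briefly; the 2015 projection below is purely illustrative, extrapolated from Nielsen's 2010 data point rather than taken from any source cited here:

```python
# Illustrative sketch of Nielsen's Law: high-end home connection speed
# grows 50 percent per year, which works out to doubling roughly every
# 21 months.

import math

def nielsen_speed(base_mbps, base_year, year, growth=1.5):
    """Project high-end home connection speed for a given year."""
    return base_mbps * growth ** (year - base_year)

# Doubling period implied by 50 percent annual growth, in months
doubling_months = 12 * math.log(2) / math.log(1.5)
print(round(doubling_months, 1))             # ~20.5 months, i.e. "every 21 months"

# Extrapolating from Nielsen's 2010 data point of 31 Mbps
print(round(nielsen_speed(31, 2010, 2015)))  # ~235 Mbps projected for 2015
```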
Historically, the need for faster Internet connections was influenced by a combination of increasing software complexity, higher-resolution displays, and the shift from the transmission of plain text to images and then audio and video. The trend toward more data-intensive transmission formats continues in the present with the widespread uptake of high-definition (HD) video (1920 x 1080 pixels); next on the horizon is the introduction of ultra-HD formats, which have 16 times as many pixels as HD video.
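The pixel arithmetic is straightforward; the 7680 x 4320 ultra-HD resolution used below is an assumption consistent with the stated "16 times as many pixels as HD":

```python
# Pixel counts behind the formats mentioned above. The 7680 x 4320
# ultra-HD resolution is assumed here; the text only states the
# 16x ratio relative to HD.

hd_pixels = 1920 * 1080    # 2,073,600 pixels per frame
uhd_pixels = 7680 * 4320   # 33,177,600 pixels per frame

print(uhd_pixels // hd_pixels)  # 16
```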
Over the past few years, consumption of online video has been evolving from low-quality, short-form clips to high-quality, long-form programs and movies, delivered through platforms such as Apple TV, BBC iPlayer, and Netflix. Consumer behavior is also changing, with the result that people are tending to watch less linear broadcast programming (which is arguably better delivered on other platforms such as terrestrial or satellite systems), and more on-demand content at a time and place and on a device to suit them, whether delivered over the Internet ("over the top") or as a telco managed service.
Next-generation applications, such as consumer telepresence, place even greater demands on network performance. Based on two-way simultaneous transmission of video, voice, and application traffic, telepresence requires sustained high bit rates both downstream (from the Internet to the user) and upstream (from the user to the Internet), as well as low latency and jitter (transmission delay and variation), so that users can interact in an environment that feels like real time. In contrast to the bursty, asymmetric nature of data-based applications such as web browsing, videocentric applications such as telepresence create sustained, symmetric traffic flows. Video communication also underpins many other next-generation services, including telemedicine, remote care for the elderly, online learning, and building security.
In 2009, the Cisco® Visual Networking Index (VNI) revealed that video had become the dominant type of Internet traffic, exceeding traffic from peer-to-peer file sharing for the first time. In the June 2011 update to this study[1], Cisco forecast that annual global Internet traffic will quadruple between 2010 and 2015, to reach 966 exabytes - or nearly a zettabyte (10^21 bytes of data). Internet video will constitute 61 percent of the total traffic carried in 2015, up from 26 percent in 2010, making it the most significant factor in that growth (Figure 1).

Figure 1. Internet Video Will Comprise 61 Percent of Internet Traffic in 2015 (Cisco VNI Forecast)

The increase in Internet traffic predicted by the Cisco VNI forecast will also be influenced by changes in the way that we access the Internet. In 2010, PCs generated 96 percent of consumer Internet traffic, but this share will fall to 84 percent by 2015 due to a sharp increase in the number of Internet-enabled devices such as tablets, smartphones, and connected TVs. In turn, this increase will give rise to a doubling of Internet-enabled device connections by 2015, to nearly 15 billion global network connections, meaning there will be more than two connected devices for every person on earth.
The growing popularity of connected devices will enhance the desirability of cloud-based data storage, because it enables users to access their data from any device and synchronize data across multiple devices. Business and consumer cloud-based applications are already firmly established and are certain to grow in popularity as broadband connections become faster and more reliable. Remote data storage facilities such as Amazon Cloud Drive and Dropbox, hosted applications such as Google Docs and Microsoft Office Web Apps, photo and video sharing websites, blogs, and social networking tools are all examples of services that reside in the cloud. Bit rates have to be high enough, and ideally symmetrical, to support transmission in a reasonable time, particularly when uploading large files (backing up a hard disk, for example, should take minutes, not days). In addition, low latency is essential for cloud applications that require a response, such as online games and business productivity tools.
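A rough transfer-time calculation illustrates why upstream bit rates dominate the cloud-backup experience; the disk size and line rates below are assumptions chosen for illustration:

```python
# Rough upload-time arithmetic for backing up data to the cloud.
# The 500 GB disk and the line rates are illustrative assumptions.

def upload_hours(size_gb, upstream_mbps):
    """Hours to transfer size_gb gigabytes at a sustained upstream rate."""
    bits = size_gb * 8e9
    return bits / (upstream_mbps * 1e6) / 3600

# Backing up a 500 GB disk:
print(round(upload_hours(500, 1), 1))       # ~1111 hours (about 46 days) at 1 Mbps up
print(round(upload_hours(500, 100), 1))     # ~11.1 hours at 100 Mbps up
print(round(upload_hours(500, 1000) * 60))  # ~67 minutes at 1 Gbps up
```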
Advanced coding schemes can reduce bit-rate requirements, and significant advances have been made in this area in recent years, particularly for video. However, a trade-off always exists between compression factor and image quality. Various attributes are used to compress the image, such as spatial redundancy, temporal redundancy, transforms with quantization, and code redundancy. Some of the steps in compression are lossless, but quantization always leads to loss of information. Obviously, if an image has no temporal redundancy because of quick scene changes or fast movement in large parts of the image, or if an image has little spatial redundancy because it is diverse and complex, the compression ratio for a certain picture quality will be limited.
HD video starts from an uncompressed bit rate of about 1.5 Gbps (in Europe), which is then processed using H.264 compression codec technology. In most cases, bit rates of around 8 Mbps will deliver a good compromise between compression and picture quality. While this bit rate does not sound particularly demanding, note that in 2010 the global average broadband speed recorded by the Cisco VNI forecast was just 7 Mbps, indicating that a significant number of access networks around the world are not capable of supporting HD video and would need to be upgraded.
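The figures above imply a compression ratio of roughly 190:1, which the following sketch makes explicit:

```python
# Compression ratio implied by the figures above: uncompressed HD at
# roughly 1.5 Gbps, H.264-encoded at about 8 Mbps.

uncompressed_mbps = 1500
encoded_mbps = 8

ratio = uncompressed_mbps / encoded_mbps
print(round(ratio))  # ~188:1 compression

# The 2010 global average broadband speed of 7 Mbps falls just short:
avg_broadband_mbps = 7
print(avg_broadband_mbps >= encoded_mbps)  # False: cannot sustain one HD stream
```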
The Broadband Quality Study (BQS)[2], a survey sponsored by Cisco and carried out by the Oxford University Saïd Business School and the University of Oviedo Department of Applied Economics, sheds some light on the broadband performance of specific countries. Using data gathered from 40 million real-life tests conducted in May and June of 2010 on an Internet speed-testing site, the researchers examined the broadband performance of 72 countries around the world to determine whether they are ready for the "applications of tomorrow," defined fairly conservatively as 11.25-Mbps download speed, 5-Mbps upload speed, and latency of 60 ms. The countries that scored well on these criteria, such as Japan, South Korea, Lithuania, Sweden, and the Netherlands, were those with good penetration of next-generation networks based on FTTH or advanced cable systems (Figure 2).

Figure 2. Broadband Leaders Have Good Penetration of NGA Networks and a High Broadband Quality Score (Broadband Quality Study, 2010)

Most interestingly, the study highlighted for the first time that broadband consumption patterns are diverging, and that this divergence will have significant consequences for the business models and supporting infrastructures of service providers.
The study modeled broadband quality requirements and traffic consumption by selecting specific web applications, all of them already in the market. For instance, a standard basic household with one or two dwellers concurrently using a number of low- to medium-quality broadband applications such as social networking, instant messaging, low-definition video streaming, basic video chatting, and small-file sharing would require a minimum downstream bit rate of 2.7 Mbps and consume about 20 GB per month. In contrast, a smart and connected household with two or three dwellers using higher-quality broadband applications such as HD video streaming, telepresence for communications and remote tutoring, security, HD IPTV, and large-file sharing would require a downstream bit rate of more than 20 Mbps and consume 500 GB a month.
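The modeling approach amounts to summing the bit rates of concurrently used applications. The per-application figures below are illustrative assumptions chosen to match the study's household totals, not values published by the study:

```python
# Illustrative household-demand model: peak downstream requirement is the
# sum of the bit rates of concurrently used applications. Per-application
# figures (in Mbps) are invented for illustration.

basic_household = {
    "social networking": 0.5,
    "instant messaging": 0.1,
    "low-definition video stream": 1.5,
    "basic video chat": 0.4,
    "small-file sharing": 0.2,
}

smart_household = {
    "HD video stream": 8.0,
    "telepresence": 4.0,
    "HD IPTV (second screen)": 8.0,
    "large-file sharing": 2.0,
    "security cameras": 1.0,
}

print(round(sum(basic_household.values()), 1))  # 2.7 Mbps, the basic case
print(round(sum(smart_household.values()), 1))  # 23.0 Mbps, above 20 Mbps
```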
The study concludes that service providers must deploy infrastructures that are flexible enough to meet differing consumer requirements for quality and traffic within the same neighborhoods. The network must therefore be scalable and cost effective, as well as intelligent, so that different qualities of service and experience can be provisioned on demand in an automated manner.
With just over a decade of history, broadband is still in its infancy. Just as in any other consumer market, end users can be expected to demand choice, and service providers need to have the capability to provide it.

Building the NGA Business Case

Consumer eagerness for broadband services is only half of the equation, however. Before deciding to deploy a new access network and choosing the kind of technology to use, a telecommunications operator must construct a solid business case. Digging trenches to install new duct and fiber infrastructure is labor intensive and therefore expensive (typically up to 80 percent of the cost of an FTTH project), and creating a business case with a return on investment (ROI) that is acceptable to investors can be challenging. Nevertheless, many enterprises in the telecommunications market are already deploying fiber networks, proving that there are plenty of viable business cases.
How that business case is assembled varies greatly from country to country and from region to region, but the common factor that underlies all the business cases is a confluence of circumstances: compelling services that people want to buy, a competitive environment that creates urgency, and a regulatory environment that is conducive to investment.
In Europe, FTTH began to emerge by 2005, although 95 percent of all fiber customers were located in just four countries at that time. For each of those four countries, a unique combination of factors helped nurture the development of new access networks. The experiences of the early adopters illustrate how diverse the factors that advance business cases can be:

• Denmark (infrastructure synergy): Danish utility companies decided to roll out fiber to consumers as they worked to meet a national directive to bury their overhead power lines.

• Italy (market opportunity): Telecommunications operator e.Biscom joined forces with local utility company AEM to create FastWeb, an alternative operator that wanted to enter the nascent IPTV market, starting in Milan.

• The Netherlands (competitive pressure): Competitive pressure from a highly developed cable TV market (nearly 100 percent coverage) and the high population density of the country made the Netherlands a unique laboratory for the development of new FTTH projects.

• Sweden (political support): The first country in Europe to issue a broadband policy, Sweden provided strong public support for construction of citywide optical networks, called "stadsnät".

Many incumbent telecommunications operators originally decided in favor of fiber-to-the-cabinet (FTTC) and very-high-speed digital subscriber line (VDSL) technologies because these allowed them to boost broadband speeds while continuing to get good use of their existing assets. Deutsche Telekom was the first incumbent to announce a large-scale FTTC and VDSL plan: a €3 billion investment to deploy an FTTC and VDSL system in 10 cities by the end of 2006 and in 50 cities by the end of 2007. As competition for services has intensified, however, incumbents all over Europe have started rolling out FTTH, and in August 2011 Deutsche Telekom announced the creation of a new subsidiary dedicated to FTTH deployment.
Some business plans are created in direct response to a competitive threat, which is often triggered by a technology or regulatory development. Completion of a new cable standard in 2006 (DOCSIS® 3.0) paved the way for cable operators to start offering broadband speeds of 160 Mbps or higher, effectively surpassing the capabilities of many incumbents at the time. Cable operators had already invested heavily (in response to the emergence of satellite TV providers during the 1990s) to upgrade their networks from unidirectional analog systems to two-way digital by deploying additional fiber in their networks. In this respect, the cable operators already owned a network that was similar to the FTTC architecture that the incumbents were starting to deploy, making them direct competitors.
Other business cases are advanced primarily by positive factors: the possibility of creating and monetizing new services. Most FTTH operators offer the standard "triple-play" package of telephony, broadband, and TV, while some also offer a broader menu of services that customers can choose through online portals, such as cloud-based data backup, home security, and mobile phone offload (e.g., femtocells). Content is central to the franchise model of Lyse Tele, owner of the Altibox (all-in-one box) brand in Norway, which is a good example of a successful business model. Franchise partners, typically utility companies, deploy the FTTH network, and Lyse Tele supplies the content and services, including local TV channels and news, which is delivered with the franchise partner's branding.
Broadband prices have fallen in recent years, and consumers' eagerness to try new services does not always match their willingness to pay, so the number of subscribers on a network is a critical element of the business case. For this reason, FTTH operators expanding into new areas often set a "trigger level" for subscriptions before they begin rolling out the network. One FTTH operator, Hong Kong Broadband Networks, took the bold step of offering 1-Gbps broadband at utility prices, rather than at the premium rates normally associated with high-end products, with excellent results: the operator now has more than 1.1 million subscriptions in a market of about 2 million households and businesses.
Adjacent (nonconsumer) business areas can help an operator create a robust business case. Health insurance companies, for example, have the financial resources needed to pay for HD video conferencing for patient monitoring, and they may invest if they can save money over the long term. Likewise, utility companies may be able to exploit the synergy between smart grid technology and the installation of mass market fiber broadband. The challenge is getting all the players in the ecosystem to cooperate and to share the value, while also having a critical mass of end users who are willing to consume and pay a reasonable incremental fee for such services.
All of these examples fall under a free-market model, where the new access network is being deployed by private companies, with the government sometimes acting as a catalyst. A different approach is being taken in countries such as Australia, New Zealand, and Singapore, which have adopted national broadband plans in which the government is the main actor and uses public money to build a national FTTH network. In these countries, there is a firm belief in the value that such a network can create, both in terms of economic growth and savings arising from better government services such as healthcare. Governments are able to consider a longer investment horizon and to reap financial benefit from factors external to the network such as better citizen engagement.

Influence of Policy and Regulation

Policy and regulation, both at European and country levels, can have a decisive influence on the business case and deployment strategies for rolling out high-speed broadband services.
At the political level, the need for high-speed broadband service has clearly been recognized. Both the European Union and the vast majority of countries in Europe have implemented broadband policies and targets. The European Commission published the Digital Agenda in May 2010, which sets aggressive broadband targets: by 2020 everyone should be connected at speeds of at least 30 Mbps, with half of all households subscribing to 100 Mbps or above. The 100-Mbps goal was set to foster the transition to NGA, including FTTH and upgraded cable systems.
All European member states are implementing national policies that will enable them to meet these targets, and a few have decided to set even higher goals. Estonia, Denmark, Finland, France, and Sweden, for example, are aiming to provide 100 Mbps to 90 percent of the population by 2020. Luxembourg has set the most ambitious target in Europe to date: 1 Gbps for all its citizens by 2020.
On the regulatory front, the European Commission published its long-awaited "Recommendation on Regulated Access to Next-Generation Access Networks" (here called the Recommendation), which clarifies the rules that apply to market-led FTTH deployments and aims to promote investments while helping ensure a competitive environment. The Recommendation is now being implemented at the national level by the national regulatory authorities. The commission has also clarified the way in which state aid rules apply to public funding of broadband networks. While the overall regulatory framework in Europe promotes market-led FTTH deployment as the most desirable scenario, governments and authorities also clearly recognize that intervention of some kind may be necessary to facilitate large-scale deployment of fiber networks in Europe.
The challenge is to find the best way to encourage the necessary investments in NGA while reducing distortion of the market.

The NGA Recommendation

The "Recommendation on Regulated Access to Next-Generation Access Networks" provides guidance to national regulatory authorities on how to regulate the new access networks based on optical fiber. The measures in the Recommendation aim to balance the goals of promoting efficient investment in new infrastructure and maintaining effective competition in the market place.
Operators that have significant market power (SMP) are required to grant access to their NGA networks at a regulated price. The network can be opened up at different layers: the infrastructure layer (access to ducts), the physical layer (fiber unbundling), and the active layer (wholesale connectivity or bitstream products). In principle, the whole range of access products should be available. In practice, however, there is leeway for the regulator to decide when and how products are made available.
The commission recommends that regulators mandate physical unbundling as soon as technically and commercially feasible, regardless of the network architecture. In practice, however, point-to-multipoint fiber was not designed for sharing, and so the commission has accepted that alternatives such as virtual unbundling (for example, VULA in the UK) could be offered for a transitional period. Technological advances such as the capability to share a fiber in the wavelength domain may permit future wavelength-unbundling of point-to-multipoint networks.
The Recommendation also identifies circumstances in which regulation may not be necessary. For example, regulators can define subnational geographic markets in which alternative infrastructures such as advanced cable or FTTH networks are already available, and waive the requirement for unbundled access to new fiber networks in those areas. Requirements may also be applied lightly when operators co-invest on the basis of multiple fiber lines to each property. Regulators also have the discretion to remove cost-based prices when certain conditions are met. For example, where fiber unbundling is seen to be working well, wholesale bitstream transport could be allowed on a commercial basis.
The various opt-outs give national regulators quite a bit of flexibility to decide when and how regulations should apply, and divergent approaches are already emerging across Europe. Some countries, including Finland and Sweden, mandate fiber unbundling, while others, such as Austria and the UK, do not. Some countries, such as Italy and Switzerland, are focusing increased attention on service provider co-investment models to roll out FTTH. As a result, while the Recommendation increases regulatory certainty, there is not going to be a single regulatory solution across Europe.

Applying State Aid Rules

The public sector is expected to take a more important role in funding NGA networks in the near future. In the autumn of 2009, the European Commission published guidelines to clarify the application of state aid rules to public funding of broadband networks, with the aim of supporting public-sector investment in areas of market failure. In 2010, the commission adopted a record 20 decisions authorizing the use of over €1.8 billion of public funds for broadband development: more than four times the amount allowed in 2009.
The guidelines strike a balance between encouraging investments in areas with market failure and reducing the potential for market distortion. Three types of area are defined: white (no existing NGA network), gray (only one NGA network), and black (two or more NGA networks). Different levels of assessment apply to each category; conditions for support in white areas are easier to comply with than those in gray and black areas, in which the potential for market distortion is greater. As a general rule, state aid is acceptable for white areas, possibly acceptable in gray areas, and not allowed in black areas.
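The white/gray/black classification reduces to a simple rule on the number of existing NGA networks in an area; a minimal sketch, with the eligibility labels paraphrased from the general rule above:

```python
# Minimal sketch of the state aid area classification: the category
# depends only on how many NGA networks already serve the area.

def area_category(nga_networks):
    """Classify an area under the state aid guidelines."""
    if nga_networks == 0:
        return "white"   # aid generally acceptable
    if nga_networks == 1:
        return "gray"    # aid possibly acceptable; stricter assessment
    return "black"       # aid not allowed

print(area_category(0), area_category(1), area_category(3))  # white gray black
```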
Member states are encouraged to invest in projects that lower the barriers to market entry, such as passive infrastructures (ducts and dark fiber) or backhaul networks to connect remote communities to the nearest metropolitan point of presence. The guidelines also make it clear that publicly funded broadband networks should support effective and full unbundling and satisfy all different types of network access that operators may seek (including but not limited to access to ducts, fiber, and bitstream transport).
The state aid rules do not prescribe technology or network architecture; the whole range of possibilities should be available. In rural areas that are covered using public funds, a mix of technologies will be used: fiber, wireless, and even advanced copper-based technologies where appropriate. The commission is planning to review the guidelines in 2012, and it may take a more flexible approach with regard to NGA and include advanced wireless technologies.

Evolution of Wireline Access Technologies

Every Internet connection relies on optical fiber connectivity to carry the traffic generated or consumed by individual customers or groups of customers. The differences between the various network architectures and their respective performance depend on the termination point of the fiber (whether central office, street cabinet, building, or home) and the technology employed on the medium that extends to the customer (ADSL or VDSL on copper pairs; DOCSIS over coaxial cable; or Ethernet, passive optical network (PON), or DOCSIS over fiber, for example).
An important factor that determines the cost of an access technology is the availability of the transmission media. While telephone-grade copper and coaxial cable are usually considered to be readily available (these media were deployed long ago to support traditional services such as analog telephony and cable TV, and the cost has already been absorbed), the use of fiber entails significant investment because, in the vast majority of cases, it must be deployed from scratch.
This section discusses these high-speed access technologies, their capabilities, and their likely evolutionary paths, to help assess how well each can meet future service requirements.

DSL Technology

Digital subscriber line (DSL) technologies are a group of technologies for broadband access over telephone-grade copper pairs. The technologies that are relevant for the mass market share a common modulation scheme: discrete multitone (DMT) modulation. DMT is characterized by the use of individual frequency carriers, which are spaced at about 4-kHz intervals, and each carrier is individually modulated. The highest frequencies used on the medium determine the different types of DSL: ADSL, ADSL2, ADSL2+, VDSL, VDSL2, etc. The maximum achievable aggregate bit rate for upstream and downstream traffic is roughly proportional to the number of carriers employed, and thus to the overall bandwidth used on the medium, if interference is neglected. Carriers are grouped in bands for upstream and downstream transmission. Typically, much more spectrum is allocated to downstream traffic, which makes most DSL implementations highly asymmetric.
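The relationship between spectrum and tone count can be sketched with simple arithmetic. The 4.3125-kHz tone spacing used below is the value standardized for the ADSL family and the common VDSL2 profiles (the text rounds it to "about 4 kHz"); some wideband VDSL2 profiles use 8.625-kHz spacing instead, so treat these counts as indicative:

```python
# Back-of-the-envelope DMT sketch: the number of tones is the usable
# spectrum divided by the tone spacing, and the aggregate bit rate scales
# with the tone count and the bits loaded onto each tone.

TONE_SPACING_HZ = 4312.5  # standardized ADSL/VDSL2 tone spacing

def tone_count(max_freq_hz):
    """Approximate number of DMT tones for a given top frequency."""
    return int(max_freq_hz / TONE_SPACING_HZ)

print(tone_count(2.208e6))  # ADSL2+ (2.208 MHz, rounded to 2.2 MHz in text): 512
print(tone_count(30e6))     # VDSL2 30 MHz profile: ~6956 (assuming same spacing)
```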
The medium used by DSL - the telephone-grade copper pair - was originally defined for the transport of analog voice with a maximum frequency of 3.4 kHz. All DSL versions exceed this maximum frequency by several orders of magnitude: for example, ADSL2+ supports frequencies of up to 2.2 MHz, and VDSL2 supports frequencies of up to 30 MHz. Consequently, DSL is affected by two kinds of impairments: attenuation (the gradual loss of signal intensity) and crosstalk (signals transmitted over one copper pair influence the signals on other copper pairs). Attenuation grows exponentially with the length of the medium, and logarithmically with the frequency. Crosstalk grows with the frequency, the power of the signal, and the number of active pairs in a cable.
DMT is a highly effective modulation scheme, and its performance is considered to be close to the theoretical limit given by the Shannon theorem, which determines the maximum achievable bit rate over a medium as a function of the frequency-specific signal-to-noise ratio (SNR) values. The SNR decreases with increasing attenuation and increasing noise from crosstalk. Therefore, higher bit rates require higher frequencies, which in turn increase attenuation and crosstalk. As a result, high bit rates can be achieved only over short distances. Commercial deployments of ADSL2+ with 16 Mbps (downstream) are typically limited to 1.5 to 2.5 km, depending on the diameter of the copper wires, and those of VDSL2 with 50 Mbps (downstream) to a few hundred meters. VDSL2, therefore, is usually provided by a DSL access multiplexer (DSLAM) located in a street cabinet close to the customer.
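The Shannon limit for a DMT system is simply the per-tone capacity summed over all tones. The sketch below uses invented per-tone SNR values purely for illustration; real loops measure SNR per tone and load bits accordingly:

```python
# Shannon capacity per DMT tone, C = B * log2(1 + SNR), summed over all
# tones. The SNR profile below (high SNR on low tones, degrading with
# frequency) is a synthetic example, not measured data.

import math

TONE_BW_HZ = 4312.5  # DMT tone spacing

def capacity_bps(snr_db_per_tone):
    """Aggregate Shannon capacity over a list of per-tone SNRs (in dB)."""
    total = 0.0
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)  # convert dB to a linear ratio
        total += TONE_BW_HZ * math.log2(1 + snr)
    return total

snrs = [40] * 100 + [25] * 100 + [10] * 100  # 300 tones of a short loop
print(round(capacity_bps(snrs) / 1e6, 1))    # ~10.8 Mbps for this profile
```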
Although attenuation is determined by the physical characteristics of the medium, crosstalk can be mitigated by a range of technologies, called dynamic spectrum management (DSM), which consider all the signals in a cable jointly, adapting power levels and use of carrier frequencies and compensating for crosstalk using estimation algorithms (vectoring). These technologies promise to increase the achievable bit rates per copper pair to 100 Mbps over a few hundred meters.
To increase DSL bit rates even further, bonding of multiple pairs and phantom pairs (that is, pairing wires taken from different twisted pairs) can be used in combination with vectoring. Some vendors have reported combined bit rates of up to 900 Mbps (upstream and downstream combined) with four physical pairs over a few hundred meters in a laboratory environment. Whether these bit rates are achievable depends primarily on the availability of multiple pairs between the DSLAM and the subscriber.
While DSL is clearly more limited in capacity and distance than all-fiber approaches, these technology developments will extend the lifespan of telephone-grade copper cables for some years to come.

Cable Technology Evolution

As described previously, cable MSOs have a hybrid fiber and coaxial (HFC) infrastructure in which signals are transmitted bidirectionally between the cable modem termination system (CMTS) located at the head end (the master site where the TV signals are received) and the fiber node, at which point they are converted onto coaxial cable that reaches the customer premises.
A typical coaxial cable spectrum ranges from 100 to 750 MHz in the downstream direction, and from 5 to 65 MHz in the upstream direction, but the range can go higher: to 1 GHz (downstream) and 85 MHz (upstream). The cable plant is segmented: a group of 500 or more homes is connected to a single fiber and coaxial segment. The spectrum must be shared by the group of homes connected to the same segment, and encryption helps ensure that customers receive only the data intended for them.
DOCSIS is the standard protocol for broadband services over cable. DOCSIS divides signals in the downstream direction into frequency slots of 8 MHz (European version); each slot is used to transport one or more analog TV, digital TV, or DOCSIS broadband services. In the upstream direction, there are frequency slots with a maximum of 6.4 MHz for uplink communication. Quadrature amplitude modulation (QAM) is used to increase the spectrum efficiency of the digital signals in the downstream (up to QAM256) and upstream (up to QAM64) directions, achieving a maximum of 50 Mbps and 30 Mbps, respectively, for each channel.
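The per-channel figures above follow from the symbol rate and the modulation order. The symbol rates below are the standardized EuroDOCSIS/DOCSIS values as I understand them; net rates land slightly below the raw rates once FEC and framing overhead are removed:

```python
# Per-channel arithmetic behind the DOCSIS figures above (European
# channelization). Raw rate = symbols/s * bits per symbol; the quoted
# 50/30 Mbps figures are the approximate net rates after overhead.

import math

def raw_rate_mbps(symbol_rate_msps, qam_order):
    """Raw channel bit rate in Mbps: symbol rate times bits per symbol."""
    return symbol_rate_msps * math.log2(qam_order)

# Downstream: 8 MHz slot, 6.952 Msym/s, QAM256
print(round(raw_rate_mbps(6.952, 256), 1))  # 55.6 Mbps raw -> ~50 Mbps net
# Upstream: 6.4 MHz slot, 5.12 Msym/s, QAM64
print(round(raw_rate_mbps(5.12, 64), 1))    # 30.7 Mbps raw -> ~30 Mbps net
```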
Previous versions of DOCSIS allocated the frequency equivalent of one TV channel for downstream broadband transmissions. DOCSIS 3.0 is the latest standard for cable data services, enabling the bonding of multiple downstream and upstream channels to create high-bandwidth pipes. Using DOCSIS 3.0, data is striped across multiple QAM channels, forming a single logical channel that aggregates the capacity of the individual QAM channels. DOCSIS 3.0 has no limit on the number of QAM channels aggregated; the limit will arise from the CMTS and customer premises equipment (CPE) capabilities. Current DOCSIS 3.0 technology provides:

• Downstream bonding capacity of more than 1 Gbps (20 or more channels) on the CMTS and 400 Mbps (8 channels) on the DOCSIS CPE

• Upstream bonding capacity of 240 Mbps (8 channels) on the CMTS and 120 Mbps (4 channels) on the DOCSIS CPE

In 2012, the industry will ship CPE devices able to bond 16 channels (800 Mbps downstream), while a higher number of channels will be bonded per CMTS in a single fiber and coaxial segment.
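Because bonding stripes data across channels into one logical pipe, aggregate capacity is simply the per-channel rate multiplied by the channel count. A minimal illustration using the approximate 50 Mbps downstream and 30 Mbps upstream per-channel maxima quoted above:

```python
def bonded_capacity_mbps(channels: int, per_channel_mbps: float) -> float:
    """DOCSIS 3.0 channel bonding: individual channel capacities simply add."""
    return channels * per_channel_mbps

print(bonded_capacity_mbps(8, 50))    # 400 Mbps: an 8-channel downstream CPE
print(bonded_capacity_mbps(16, 50))   # 800 Mbps: a 16-channel CPE
print(bonded_capacity_mbps(4, 30))    # 120 Mbps: a 4-channel upstream CPE
```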
The use of HFC spectrum is still inefficient in several respects, however. Multiple fiber nodes, each serving up to several hundred homes, are often connected over the same fiber, and analog TV channels occupy significantly more cable spectrum than the equivalent digital service. As a result, operators needing to upgrade the cable plant to cope with bandwidth growth are segmenting the network into smaller sharing groups of 500 homes - and in some cases only 250 or 100 homes - by:

• Connecting individual fibers to each fiber node

• Dividing the fiber nodes into two or more smaller fiber nodes

• Bringing fiber to the last coaxial amplifier (FTTLA), connecting to a smaller group of homes

On top of the HFC segmentation, services are converging onto all-IP-based networks, so the cable industry expects all services to be delivered over DOCSIS in the future. Analog TV will disappear as more subscribers get a set-top box (STB) or use TV sets with an IP interface, and this change will release spectrum to carry new TV channels or additional broadband capacity.
Of the overall bandwidth capacity of cable's HFC spectrum, a maximum of 6 Gbps is available in the downstream direction (100 MHz to 1 GHz), and a maximum of 300 Mbps (5 to 85 MHz) is available in the upstream direction; this capacity is shared by the homes connected to the same fiber and coaxial segment. As cable plants are segmented into smaller groups of homes (particularly when the number of homes on each shared segment falls to 100 or fewer), analog TV disappears, and silicon density in the CPE increases, cable networks will move to a model in which the full DOCSIS spectrum is available to all CPEs sharing the same fiber and coaxial segment, giving these devices access to the significant bandwidth pool of 6 Gbps downstream and 300 Mbps upstream.
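The payoff of segmentation can be sketched as a simple division of the shared pool among the homes on a segment; the optional concurrency factor is a hypothetical refinement, since not every home is active at once:

```python
def per_home_share_mbps(pool_mbps: float, homes: int, concurrency: float = 1.0) -> float:
    """Average share of a segment's DOCSIS pool per active home.

    concurrency is the assumed fraction of homes active simultaneously.
    """
    active = max(1, round(homes * concurrency))
    return pool_mbps / active

DOWNSTREAM_POOL_MBPS = 6000   # ~6 Gbps downstream pool over the full spectrum

for homes in (500, 250, 100):
    share = per_home_share_mbps(DOWNSTREAM_POOL_MBPS, homes)
    print(f"{homes} homes per segment -> {share:.0f} Mbps each")
# Halving the segment size doubles the average bandwidth available per home.
```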
In terms of downstream capacity, cable is very competitive with some of the shared FTTH technologies such as Gigabit PON (GPON). In terms of upstream capacity, cable is more limited; however, the cable industry is discussing ways to increase the upstream bandwidth: for example, by moving the upstream/downstream frequency split up to around 200 MHz, or by reassigning some of the existing downstream spectrum to upstream use. This approach would require a new DOCSIS standard and changes to the physical components used to build the HFC network (optical transmitters and receivers, fiber nodes, amplifiers, taps, and splitters), but it is a viable option if needed.
Another opportunity that cable operators are exploring is the evolution of HFC into FTTH. Generally, cable MSOs that want to upgrade their networks look first to DOCSIS 3.0 and HFC segmentation, deploying FTTH only in new areas. However, radio frequency (RF) over glass (RFoG) technology is an upgrade option that is attracting attention from cable operators because it allows them to offer FTTH-class services while reusing existing back-end and customer premises equipment, including the CMTS and cable modems, and retaining the well-known engineering and operations practices of the HFC network.
RFoG technology delivers DOCSIS, digital video, and analog TV services across an optical access network based on specialized optical transmitters and receivers and an RFoG optical network termination (ONT), which provides an RF interface to the customer, allowing customers to connect any standard cable customer equipment (TV with analog tuner, DOCSIS modem, or cable STB).
Using RFoG, FTTH networks can be built in segments of 32 or 64 homes. This approach mimics GPON technology, but adds the flexibility to build RFoG segments with more than 64 homes initially and perform virtual node optical splitting at the head end without the need for a major equipment upgrade, and to allocate DOCSIS downstream and upstream channels per FTTH RFoG segment according to bandwidth needs.
The combination of DOCSIS 3.0, HFC segmentation, and various RFoG FTTx technologies will allow the cable industry to remain competitive in the delivery of broadband and entertainment services for many decades.

FTTH Technology Evolution

Fiber in the access network is the long-term goal because it can provide almost unlimited bit rates for any perceivable future services. It can be classified as fiber to the home (FTTH), which brings the fiber directly to the residence, or fiber to the building (FTTB), with the fiber terminating in the basement of a multidwelling unit (MDU), from which information is transported to individual apartments using copper-based cabling in the building.
Fiber can be deployed in various topologies, and it can use various transmission technologies. In fiber-based access networks, only single-mode fiber (SMF) plays a role, except for in-home cabling, where multimode fiber (MMF) and polymer optical fiber (POF) are also used.
Fiber is typically deployed in one of two topologies: point to point (P2P) or point to multipoint (P2MP), as shown in Figure 3. In the past, ring topologies have also been used, although these are not generally favored today because they do not scale readily.
In P2MP topologies, a fiber from the optical line terminal (OLT) in the point of presence (POP) leads to an aggregation device in the field, typically in a splicing enclosure, a street cabinet, or the basement of a building. This aggregation device can be as simple as a passive optical splitter, or it can represent an active device: an Ethernet switch or a DSLAM. In contrast, a P2P topology uses dedicated fibers all the way between the POP and the customer.
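The cost of the passive splitter in a P2MP tree can be quantified: an ideal 1:N splitter divides the optical power N ways, an attenuation of 10·log10(N) dB that a dedicated P2P fiber never incurs. A minimal sketch (the 0.5 dB excess loss is an assumed typical value for a real device, not a figure from the text):

```python
import math

def splitter_loss_db(split_ratio: int, excess_db: float = 0.5) -> float:
    """Insertion loss of a 1:N passive optical splitter.

    Ideal power division contributes 10*log10(N) dB; excess_db models
    real-device imperfections (assumed value).
    """
    return 10 * math.log10(split_ratio) + excess_db

for n in (2, 32, 64):
    print(f"1:{n} splitter ~ {splitter_loss_db(n):.1f} dB")
```

A 1:32 split already costs roughly 15 dB of link budget, which is why PON optics must be dimensioned for far more loss than P2P optics.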

Figure 3. Fiber Access Topologies

The technologies used on the fiber can be divided into three broad categories: Ethernet; PONs with time-division multiplexing (TDM) access-control protocols (TDM-PONs); and PONs using individual wavelengths per customer employing wavelength-division multiplexing (WDM-PONs).
As shown in Table 1, almost any combination of topology and technology is possible and is being used in real deployments, with the exception of WDM-PON in a P2P topology, which does not make any technical sense. In any case, WDM-PON is still seen as an immature, nonstandard technology that is too expensive for widespread deployment in the residential market.

Table 1. FTTH Network Architecture Classification

IPTV-based video solutions can offer superior features compared to simple broadcast TV and have, therefore, become an indispensable part of any triple-play offering. Often, however, RF video broadcast overlays are needed to support existing TV sets in subscribers' households. This approach can greatly facilitate the introduction of FTTH.
In P2MP architectures, this approach is typically implemented by providing an RF video signal, compatible with cable TV solutions, over an additional wavelength; indeed, the PON standard allows this. In point-to-point fiber installations, this implementation can be achieved by two different approaches:

• In the first approach, an additional fiber per customer is deployed in a tree structure and carries only an RF video signal that can be inserted into the in-home coaxial distribution network.

• In the second approach, a video signal is inserted into every P2P fiber. The RF video signal, carried by a dedicated wavelength from a video OLT, is first split into multiple identical streams by an optical splitter and then inserted into each P2P fiber by means of triplexers. On the customer side, the wavelengths are separated, with one signal converted into an RF signal for coaxial distribution and the other made available on an Ethernet port.
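Both approaches rely on keeping the video wavelength spectrally clear of the data wavelengths. As an illustration, the bands below follow the commonly used GPON wavelength plan with the 1550 nm video enhancement band (ITU-T G.984/G.983.3); the exact band edges shown are assumptions for the sketch:

```python
# Assumed band edges (nm), per the common GPON plan with an RF video overlay
WAVELENGTH_PLAN_NM = {
    "upstream data":    (1260, 1360),  # 1310 nm window
    "downstream data":  (1480, 1500),  # 1490 nm window
    "rf video overlay": (1550, 1560),  # 1550 nm enhancement band
}

def bands_overlap(a: tuple, b: tuple) -> bool:
    """Two bands conflict if their wavelength ranges intersect."""
    return a[0] < b[1] and b[0] < a[1]

bands = list(WAVELENGTH_PLAN_NM.values())
clash = any(bands_overlap(bands[i], bands[j])
            for i in range(len(bands)) for j in range(i + 1, len(bands)))
print("wavelength plan conflict-free:", not clash)
```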

Table 2 provides a general comparison of the various topologies, and Table 3 provides a comparison of the Ethernet and GPON technologies, which represent the most important FTTH technologies today.

Table 2. Topology Comparison

| Criterion | P2P | P2MP |
| --- | --- | --- |
| Bitrate potential | Almost unlimited | Limited by characteristics of aggregation point |
| Technology dependence | None | Limited to a few classes of access technologies |
| Technology upgrade | Per subscriber | Per aggregation point |
| Open access to fiber | Simple | Complex or impossible |
| Troubleshooting | Simple: optical time-domain reflectometer (OTDR) | Complex: failure correlation |
| Number of feeder fibers | One fiber per subscriber | One fiber per aggregation point |
| Dimension of fiber management | One port per subscriber | One port per aggregation point |

Table 3. Technology Comparison

| Criterion | Ethernet (P2P) | GPON |
| --- | --- | --- |
| Bit rate | 100 Mbps to 10 Gbps or more | 2.5 to 10 Gbps |
| Bitrate sharing | Dedicated bit rate | Shared bit rate |
| Security | High through dedicated medium | Requires encryption; vulnerable to denial of service (DoS) attacks |
| Upstream traffic management | Highly sophisticated through switch matrix | Limited by capabilities of MAC protocol |
| Interoperability OLT-CPE | Easy because of ubiquitous technology | Still challenging |
| Per-subscriber CO power consumption | Well-defined low value (less than 2 watts), independent of take rate | Depends on take rate: from very high to very low |
| CO real-estate use | Approximately 1600 homes connected per rack | Depends on take rate: from very high to very low |

Clearly, P2P topologies provide the highest degree of flexibility and are considered by many stakeholders to be the most future-proof approach. Cisco's experience with many deployments has shown that the premium for the deployment of a P2P topology is usually below 10 percent (after a route is set up, it costs very little extra to install more fiber on that route). This metric has also been verified in a comprehensive report by Wissenschaftliches Institut für Kommunikationsdienste (WIK), Germany's leading research and advisory institute for communications [3]. In terms of technologies, Ethernet looks very attractive based on its scalability, simplicity, and ubiquity.
For these reasons, Cisco's recommendation to its customers has always been the Ethernet P2P architecture, and market studies of FTTH in Europe clearly indicate that a majority of European FTTH players have endorsed this approach, across the entire spectrum of providers: municipalities, utilities, housing companies, alternative operators, and incumbent operators [4].
Recent innovations such as bend-insensitive fiber and preterminated cables, new installation techniques such as microtrenching, and regulation that opens up access to ducts continue to reduce the cost of deploying fiber, helping lower the barriers to FTTH deployment.

Wireless as a Complementary Technology

In the public debate on broadband, wireless broadband technologies are often portrayed as competing against wireline technologies. Advertised wireless broadband speeds based on third-generation (3G) and fourth-generation (4G) technologies are of the same order of magnitude as today's typical DSL offerings.
However, marketing messages usually neglect a number of factors:

• Every wireless access technology is based on a shared medium: the air interface. The spectrum on this air interface is shared among all the customers in a cell. Therefore, the more customers who are active in such an access domain, the lower the average bit rate per customer. Recent evidence shows that many mobile wireless networks, consequently, have become victims of their own success as air interfaces have become overloaded.

• Wireless technologies are distance dependent and deliver maximum throughput only when the user is adjacent to the transmitter. This behavior is a consequence of the fact that wireless transmission technologies have already been optimized to make the most efficient use of spectrum (which is a scarce resource) and operate close to the Shannon limit.

• Latency is typically two to three times higher on a 3G mobile network than on a DSL network, and it can be much higher on older second-generation (2G) and 2.5G networks. This latency negatively affects the end-user experience for mobile data services, and it can make real-time applications such as gaming and cloud-based services impractical to use.
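The Shannon limit referred to above sets a hard ceiling, C = B·log2(1 + SNR), on what any air interface can carry. The sketch below uses an illustrative 20 MHz channel and assumed SNR values to show how quickly the ceiling falls as a user moves away from the transmitter; the resulting cell capacity must then be shared among all active users:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR) for an ideal channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

# Illustrative 20 MHz carrier; the SNR values for near-site, mid-cell,
# and cell-edge positions are assumptions for the sketch
for snr_db in (20, 10, 0):
    cell = shannon_capacity_mbps(20, snr_db)
    print(f"SNR {snr_db:>2} dB: {cell:6.1f} Mbps ceiling, "
          f"~{cell / 50:.1f} Mbps each with 50 active users")
```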

In fact, the Broadband Quality Study (BQS) [2] also shows that although mobile broadband quality has indeed improved in the past few years with the deployment of 3G technologies, on average the quality of mobile broadband remains far from that of fixed-line broadband, and about 90 percent of users have a quality of experience significantly below that of fixed-line broadband. Even the worldwide leaders in mobile broadband quality, Sweden and Denmark, have mobile BQS scores below 50 percent of their wireline equivalents.
In Cisco's opinion, wireless networks should be promoted for their strengths - mobile computing and networking with limited requirements for services and bit rates - rather than as direct substitutes for highly demanding residential and business services. An exception to this general rule is services in very sparsely populated areas, where the deployment of new wireline networks may not be commercially viable. In these areas, coverage with fixed-wireless access networks can be provided comparatively quickly and at relatively low cost, at least for a transient period.
The speed and latency of wireless network technologies will continue to improve. However, the demands placed on these networks by consumers will also continue to increase. The Cisco VNI global data traffic forecast for 2009-2015 [1] predicts that mobile data traffic volumes in 2015 will grow to 26 times what they were in 2010. Sixty-six percent of this traffic will be mobile video, with its high bandwidth and quality of service (QoS) demands. Traffic will grow faster than revenues, with enormous pressure on the network from over-the-top (OTT) video.
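That 26-fold growth implies a striking compound annual rate, which this quick calculation (using only the figures cited from the forecast) makes explicit:

```python
def implied_cagr(growth_factor: float, years: int) -> float:
    """Compound annual growth rate implied by total growth over a period."""
    return growth_factor ** (1 / years) - 1

# 26x mobile data growth over the five years from 2010 to 2015
print(f"implied CAGR: {implied_cagr(26, 5):.0%}")   # roughly 92% per year
```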
Far from being competitors, however, high-capacity wireless and fiber networks will actively support each other. The technologies will be highly complementary in two main situations:

• Fiber networks can provide transport capacity and ease of deployment for radio access network (RAN) infrastructure. Next-generation wireless base stations will have to handle several hundred megabits per second (more if they use multiple sectors), so the requirements for backhaul capacity will grow accordingly. The most straightforward way to connect such base stations to the aggregation network is through fiber. Therefore, obvious synergies exist in the build-out of fiber-based access networks for wireline access and wireless backhaul. A case in point is the Swedish city of Stockholm, which already supports two 4G mobile operators and has a third on the way, as a result of the citywide availability of fiber. Additional information about IP RAN backhauling can be found at [5].

• Wired networks can provide alternative backhaul capabilities to meet the increase in mobile data demand. Service providers are seeing an upsurge of data traffic across their networks from users who increasingly expect ready access to online services that consume large amounts of data without being limited to a fixed location. The rapid increase in 3G wireless broadband services, combined with the proliferation of dual-mode 3G and Wi-Fi smartphones, affordably priced data plans, and new online services, has stimulated data traffic growth among wireless users. One way to cope with this increase is to offload mobile traffic using Wi-Fi or femtocells onto existing wireline access networks. Because a very large proportion of mobile data traffic is actually consumed in mobile subscribers' homes or at the workplace, this approach is very compelling. With this approach, not only will the cost of delivering mobile data traffic be greatly reduced, but a valuable asset, the licensed spectrum, will be preserved. Additional information about this topic can be found at [6].

Conclusion

The demand for access bit rates continues to grow exponentially and shows no sign of slowing down in the foreseeable future. The copper telephone networks that have carried Internet services for the past 20 years are reaching the physical limits of their capabilities and will not be able to sustain the applications and services of the next 20 years. There is a general consensus among policy makers and telecommunications operators that NGA using optical fiber needs to be deployed soon, and the factors propelling this change are well understood. NGA networks may be based on FTTH or advanced cable systems; the common feature of these two approaches is increasing use of optical fiber.
New network deployments will initially be propelled by products and services that can be anticipated today, notably those based on streaming Internet video and cloud-based services. After fiber-based access becomes ubiquitous and a critical mass of potential end users is created, innovative applications and services with much more demanding bit-rate requirements are likely to emerge.
There is an important distinction between optical fiber as an infrastructure, and the technology used to light the fiber. The physical cables have a lifespan of at least 30 years, while the electrical equipment typically has a much shorter replacement cycle of 5 to 7 years. No technology has ever provided "abundant capacity" for an extended period. A dedicated fiber is the only thing that comes close to a notion of abundance.
P2P fiber topologies should be the preferred solution because only a dedicated fiber can provide a secure migration path that can make it possible to meet bit-rate requirements over this time frame. P2MP technologies may offer short-term cost savings in fiber infrastructure, but there is high risk that these topologies could lead to technological bottlenecks in the future, with equipment upgrades becoming increasingly costly or extensive reengineering of the network required.
A few years ago, the discussion was not about whether NGA was necessary but about when and how quickly to provide it. Today the discussion is no longer about when NGA will happen; it is about how. Given the long-term nature of the investment, the principal consideration should be deployment of infrastructure capable of meeting requirements well into the future. Technology choice is important, but it is secondary to achieving a sustainable ROI over the lifetime of the infrastructure.

References

1. Cisco VNI global data traffic forecast, 2009-2015:
2. Broadband Quality Study (BQS):
3. Architectures and competitive models in fiber networks, WIK, 2011:
4. December 2010 FTTH market data, FTTH Council Europe, 2011:
5. Additional information about RAN backhauling solutions:
6. Additional information about service provider Wi-Fi solutions: