Virtualizing the Cable Access Network the Right Way
Realize the promise of Network Functions Virtualization by going cloud native
For service providers, the prospect of Network Functions Virtualization (NFV) is based on greater efficiencies and opportunities for business innovation. Fundamentally, virtualization is about scaling operations and capital investments more efficiently to deliver network services with agility. Multiple system operators (MSOs) are beginning to realize the power of cloud scalability and new operating efficiencies through the virtualization of their network functions. The first generation of virtual network functions (VNFs), however, fell short of that promise. Replicating and porting code from a physical appliance to a virtual machine without addressing the underlying software architecture resulted in a false start for what was supposed to be a networking revolution. Lessons have been learned: lifting and shifting existing code into virtual machine wrappers has proven to be a clunky and inefficient use of compute power and resources, and it misses the business objective of virtualization, which is to achieve greater CapEx and OpEx efficiencies.
Rapid technological changes create challenges for service providers that cannot be quickly resolved by a solution that relies on manual intervention. Too often, dependency on the underlying infrastructure has impeded the efficient deployment of a VNF, adding to system integration time and costs. As a result, MSOs are looking to cloud native technology to open new revenue opportunities and realize OpEx savings with process automation. To fully realize the flexibility that virtualization promises, the underlying software applications for network functions must be architected to support any infrastructure and fully automate deployments and lifecycle events such as service creation, transparent software upgrades, dynamic scalability, and simple recovery. An API-driven NFV model gives MSOs the ability to combine applications from different sources, support new functionality, and install patches quickly. Using this model can greatly expand the amount of innovation providers and the telecom community at large can achieve.
The use of cloud native technologies is a strategy for virtualization. However, virtualization is not inherently cloud native. Cloud native development is an approach to building and running applications that fully exploits the advantages of the cloud computing model. A cloud native application uses a collection of tools that manage, simplify, and orchestrate the services that make up the application. These services, each with its own lifecycle, are connected by APIs and deployed in software containers. These containers are orchestrated by a container scheduler, which manages where and when a container should be provisioned into an application. The scheduler is also responsible for lifecycle management. Cloud native applications are designed to be portable to different deployment environments, such as a public, private, or hybrid cloud. Continuous delivery and DevOps are methods used to automate the process of building, validating, and deploying services into a production network.
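The scheduler's role described above — placing containers onto available compute and reconciling the running state against the desired state — can be sketched in a few lines. This is a toy model, not a real orchestrator API; all class and service names (`Node`, `Scheduler`, `cmts-dataplane`) are illustrative.

```python
# Toy container scheduler: places containers onto nodes and replaces
# failed ones so the actual state converges on the desired state.
# All names here are illustrative, not a real orchestrator's API.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: int            # how many containers this node can host
    containers: list = field(default_factory=list)

class Scheduler:
    """Decides where each container runs and keeps the desired count alive."""

    def __init__(self, nodes):
        self.nodes = nodes

    def schedule(self, container):
        # Simplest placement policy: first node with free capacity.
        for node in self.nodes:
            if len(node.containers) < node.capacity:
                node.containers.append(container)
                return node.name
        raise RuntimeError("no capacity available")

    def reconcile(self, service, desired):
        # Lifecycle management: if fewer replicas are running than desired,
        # schedule replacements until actual state matches the spec.
        running = sum(c == service for n in self.nodes for c in n.containers)
        placed = []
        for _ in range(desired - running):
            placed.append(self.schedule(service))
        return placed

nodes = [Node("node-a", capacity=2), Node("node-b", capacity=2)]
sched = Scheduler(nodes)
sched.reconcile("cmts-dataplane", desired=3)

# Simulate a node failure: node-a loses its two containers...
nodes[0].containers.clear()
# ...and reconciliation restores the desired replica count automatically.
replacements = sched.reconcile("cmts-dataplane", desired=3)
```

Production schedulers add placement constraints, health probes, and rolling updates, but the reconcile-toward-desired-state loop is the same core idea.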
Cloud native applications have the following primary tenets.
Microservices are an architectural style that structures an application as a collection of loosely coupled services to implement business capabilities. Microservices are commonly deployed in containers and enable the continuous delivery and deployment of large, complex applications. Each microservice can be deployed, upgraded, scaled, and restarted independently of other services in the application as part of an automated system, which makes it possible to perform frequent updates to live applications without affecting customers.
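The loose coupling that makes independent upgrades possible comes from services sharing only an API contract, never an implementation. The following sketch models that property with two hypothetical services (the names and fields are invented for illustration): one service is upgraded in place while its consumer keeps running unchanged.

```python
# Sketch of loose coupling via APIs: each service exposes a narrow
# interface and can be upgraded independently of its consumers.
# Service names and record fields are illustrative only.

class SubscriberService:
    version = "1.0"
    def lookup(self, mac):
        return {"mac": mac, "tier": "gold"}

class ProvisioningService:
    version = "1.0"
    def __init__(self, subscriber_api):
        # Depends only on the lookup API, not on its implementation.
        self.subscriber_api = subscriber_api
    def provision(self, mac):
        tier = self.subscriber_api.lookup(mac)["tier"]
        return f"modem {mac} provisioned at {tier} tier"

class SubscriberServiceV2(SubscriberService):
    version = "2.0"                      # independently upgraded service
    def lookup(self, mac):
        record = super().lookup(mac)
        record["upgraded"] = True        # new field; API contract preserved
        return record

# Rolling the subscriber service from v1 to v2 requires no change (and no
# restart) of the provisioning service, because only the API is shared.
prov = ProvisioningService(SubscriberService())
before = prov.provision("aa:bb:cc:dd:ee:ff")
prov.subscriber_api = SubscriberServiceV2()
after = prov.provision("aa:bb:cc:dd:ee:ff")
```

In a real deployment the swap happens at the container level rather than by reassigning an object, but the reason it is safe is the same: consumers see only the API.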
Containers are another form of virtualization, based on operating system (OS)–level virtualization. A single OS instance is dynamically divided among one or more isolated containers, each with a unique writable file system and resource quota. Containers can be deployed on both bare-metal and virtual machines. Containers deployed on bare-metal machines offer performance benefits over virtual machines by eliminating the hypervisor overhead. Although each microservice is commonly deployed in a separate container, multiple microservices may be deployed per container to address application and performance requirements, for example when colocation of services logically simplifies the design or when services fork multiple processes within a container.
With continuous integration and continuous delivery (CI/CD), you can build, package, test, and deploy applications on your own timetable. You can minimize service update windows, make changes to your production environment's applications, and release them as soon as they are ready. You don't have to wait to bundle application changes with other changes into a release or an event such as a maintenance window. Continuous delivery makes releases easy and reliable, so you can deliver frequently, at less risk, and with immediate user feedback. Service provider software updates will increase in frequency and become an integral part of business processes, shortening the time to market for new services and innovations.
DevOps uses lean and agile techniques to combine development and operations into a single IT value stream. Using DevOps practices, organizations can build, test, and release software more rapidly and iteratively by applying continuous integration and delivery. For example, DevOps can help automate deploying and validating a new software feature in an isolated production environment, which can then be rolled out more broadly into production after it has been proven.
Software containers and microservices underscore the modularity of cloud native applications and how they are designed to work together. New services and features can be developed and deployed as components of applications that are designed to work together as a system, which allows for dynamic and optimal scaling. Application components are designed to be easily composed and connected for building higher-level services.
A bare-metal cloud model optimizes your compute resources and provides flexible scale for on-demand usage. You can maximize your performance while using the hardware of your choice. Software containers offer flexibility for deploying on bare metal with a basic Linux OS or on virtual machines that reside on top of a hypervisor.
MSOs need a way to scale services while reducing their operating expenses by automating and simplifying their network operations. Automation and simplification can help MSOs bring services to market more quickly and optimize their physical footprints by decoupling services from proprietary and dedicated network hardware. A cloud native approach to virtualization provides the fundamental building blocks for operators to design, develop, and build applications, services, and features, which can be composed at scale to achieve their objectives.
The virtualization of cable headend functions comes with a unique set of considerations that must be addressed to achieve the full potential of cloud native technology. Virtualizing the cable headend has a major prerequisite: a digitized access network. VNFs are not suited for an analog world. Remote PHY is the solution for migrating from analog to digital and preparing for virtualization. Once you remove the PHY functions from the cable headend and switch to digital fiber, CMTS platforms effectively become software processors with Ethernet running in and out.
Virtualizing CMTS functions is a logical step in migrating to the cloud and delivering services from a data center, where they become more flexible, resilient, and scalable. You no longer have to restrict CMTS functions to a hardware-dependent platform. Instead, you can rely on Moore’s Law for continuous improvements in processing speeds, capacity, and cost-per-bit with cloud computing. At the same time, outside plant equipment becomes simpler and less expensive to operate and maintain.
Other headend hardware-based services will also follow suit and take advantage of the microservice composability that cloud native functions afford. Traditional video and voice services will make the transition from dedicated headend hardware to digital IP-based services. Subscriber services will ultimately transform through new agile and automatable processes that can deliver higher-level services composed from a tailored set of microservices.
The virtualization of the cable headend begins with the Cisco Cloud Native Broadband Router (cnBR). It is a containerized, virtual Converged Cable Access Platform (vCCAP) solution designed to take the service capabilities of legacy physical hardware and virtualize them as a customizable, scalable, and resilient set of microservices. Headend hardware that takes up valuable physical space can be virtualized and migrated to data centers and run alongside other virtualized services in the cloud.
The cnBR operates in both singular and blended environments, where operators can mix and match their physical CCAPs and then augment and or replace them with their cloud native counterparts. The cnBR brings web-scale resiliency, elasticity, scalability, and feature velocity to the cable access network.
The cnBR offers a number of advantages.
Some virtualization solutions are technically "virtualized" only in the sense that they are deployed as software that runs on servers rather than dedicated hardware. When a solution merely lifts and shifts existing code from legacy hardware, wraps it in a virtual machine, and operates it in the same way and in the same locations, it raises the question: how much are you really benefiting from "virtualization"? The cnBR capabilities go well beyond basic virtualization. The software was rewritten to harness cloud portability, scalability, resilience, and resource optimization. With a containerized software architecture and composable microservices, the cnBR can constantly evolve, add, update, upgrade, and be composed with other cloud native functions.
Applications composed of cloud native microservices have an unprecedented degree of resiliency. A fault in any particular microservice has a limited blast radius and will not drag down the composed higher-level service. Microservices that fail can be brought back up without perceptible impact. In practical terms, with a cloud native system, the operator can try out new software changes in production early and often with minimal service impact. The composability of microservices also supports resiliency for continuous integration and continuous deployment operating models. Server redundancy will remain important for failover scenarios. Even in these instances, the cnBR will automatically port and recompose itself on another healthy server.
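The "limited blast radius" property amounts to this: a supervisor detects a failed microservice and restarts only that service, while its healthy siblings keep serving. The sketch below models the pattern; the service names and the `Microservice`/`supervise` helpers are hypothetical, standing in for an orchestrator's health checks and restart policy.

```python
# Sketch of failure isolation: a supervisor restarts a failed
# microservice while its siblings keep serving. Names are illustrative.

class Microservice:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def fail(self):
        self.healthy = False

    def restart(self):
        self.healthy = True
        self.restarts += 1

def supervise(services):
    # Restart only the failed services; healthy ones are untouched.
    for svc in services:
        if not svc.healthy:
            svc.restart()
    return [svc.name for svc in services if svc.healthy]

services = [Microservice("dhcp"),
            Microservice("docsis-mac"),
            Microservice("telemetry")]

services[1].fail()                 # one service crashes...
available = supervise(services)    # ...and is restarted in isolation
```

Contrast this with a monolithic appliance, where the same fault would take the whole composed service down with it.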
The basic premise of virtualization is that software-based network functions can run on commercial off-the-shelf servers (COTS). If you start modifying the compute platform for a VNF, you’re no longer virtualizing. You just have a different kind of dedicated appliance. Part of being cloud native means that the cnBR is hardware agnostic and can run on bare-metal x86 servers without proprietary hardware dependencies (although there are recommended specs for optimal performance). The cnBR also supports VM-based use cases, where the bare-metal environment is recreated inside clusters of VMs.
Physical CCAP hardware is designed to service a large number of subscribers, regardless of the size of your serviceable market. Traditional hardware comes with big chunks of capacity at big expense. Capacity planning is important to determine the right amount of hardware required for profitably delivering network services. The cnBR takes the investment risk of hardware out of the equation with its ability to granularly scale up for large service markets and down to a single service group for small ones. You will no longer have to worry about leaving excess capacity on the table.
MSOs want the flexibility, efficiency, and agility of web-scale networking on their own timeline and in line with their migration strategy. The cnBR is designed to minimize disruption by transparently integrating with existing access services. As services grow and evolve, so does the cnBR. The cnBR is ready to smoothly transition services and supplement capacity needs on top of existing systems. As decisions are made about aging hardware, capacity needs, operational efficiencies, and new market opportunities, the cnBR is there as a solution.
Cisco provides industry-leading, standards-based network virtualization capabilities to decouple services from infrastructure dependencies. Using open standards and interfaces, you can design services and applications at a high level and provision them in the same way. The services and applications are independent of the devices or last-mile networks that will deliver them.
Now, you can begin provisioning services that used to be entirely separate infrastructures, such as CMTS, Metro Ethernet, Wi-Fi, and others. The services are just different applications running on the same unified network. You can address the entire heterogeneous environment, including physical and virtual devices and equipment from multiple vendors through a single orchestration layer that abstracts the underlying complexity. You can draw on NFV and SDN to make the access network more automated, economical, resilient, and elastically scalable. Elements of the access network that used to require complex dedicated infrastructures become interchangeable cloud resources that can be repositioned and scaled up and down as business needs dictate.