Virtualization is not a new concept, but it is now being applied to network functions such as those in switches, routers, and the myriad network appliances deployed today. The expectation is sizable cost savings and greatly reduced network complexity. The early days of server virtualization dramatically lowered server capital expenditures, yet operational costs skyrocketed as more labor-intensive and complex processes were required. Eventually these costs were reined in through tighter integration of servers and network infrastructure and more advanced software capabilities.
So what lessons can operators learn from the server virtualization experience? Beware of merely shifting costs from capital to operating expenditures. Be selective, virtualizing the right resources and functions as driven by business need, not the lure of the technology. Make lowering total cost of ownership with a flexible, adaptable infrastructure the focus of your optimization efforts.
Focusing the Optimization Spotlight in Operator Environments
Complexity - Reducing complexity is a good place to start. Today, maintaining current services and applications and introducing new ones are cumbersome tasks that rely on complex, time-consuming operational processes and highly skilled personnel. Reducing complexity through automation and orchestration will speed operations, contribute to service agility, and lower operational costs. Any virtualization technology suite under consideration should demonstrably reduce the complexity of network operations as a whole, not merely move the problem from one area to another.
Virtualizing selectively - There are some obvious areas where virtualization and the associated use of common servers generally make good sense. One example is network functions with very high computing needs but low to modest networking performance requirements for latency, speed, and predictability. These functions include subsystem control, such as IP Multimedia Subsystem (IMS), and network control, such as Domain Name System (DNS). Virtualization and the use of common servers make less sense for network functions with very high-performance networking needs, such as high bandwidth load, low latency, or high predictability. Examples here include core data center switching and WAN backbone routing, which have modest computing needs but very high forwarding needs.
Positioning virtualized functions - Another important aspect that affects the level of optimization is deciding where to locate virtualized functions. Optimization efforts should always be balanced against the need to maintain required levels of performance and quality of service (QoS). Reducing costs through the use of virtualization and commodity platforms at the expense of the end customer’s experience is inadvisable in many circumstances. Deciding where to place a virtualized function should always include consideration of the total cost to deliver and maintain a service, the ease of scaling the service up and down, and the infrastructure’s relative agility in response to changing market conditions.
For example, a virtualized, high-definition (HD) video feed might best be located as close to the customer as possible for the highest quality and performance. Another issue to consider when locating a function far from where it is used is the bandwidth cost of backhauling traffic across the network. A popular HD video feed is likely to yield better economics when distributed, because it reduces the network load, while long-tail content is likely to be better placed centrally where storage is cheaper. An ideal infrastructure will give the operator the flexibility to position and easily reposition content according to the best economics, rather than constraining that choice through its technology.
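The distributed-versus-central trade-off above can be sketched as a simple cost comparison. This is a minimal illustrative model, not operator data: the per-gigabyte backhaul and storage prices, title size, and number of edge sites are all assumed figures chosen only to show how demand tips the balance.

```python
# Hypothetical placement model: compare the monthly cost of serving a video
# title from a central data center (cheap storage, backhaul per view) versus
# edge caches (replicated storage, minimal backhaul). All prices and sizes
# below are illustrative assumptions, not operator data.

def monthly_cost(views, size_gb, backhaul_per_gb, storage_per_gb, replicas):
    """Total cost = storage for all replicas + backhaul for every view."""
    return replicas * size_gb * storage_per_gb + views * size_gb * backhaul_per_gb

def best_placement(views, size_gb=4.0, edge_sites=50):
    """Return the cheaper placement for a title with the given monthly views."""
    central = monthly_cost(views, size_gb, backhaul_per_gb=0.05,
                           storage_per_gb=0.02, replicas=1)
    edge = monthly_cost(views, size_gb, backhaul_per_gb=0.005,
                        storage_per_gb=0.10, replicas=edge_sites)
    return "edge" if edge < central else "central"

# A popular title is cheaper to distribute; long-tail content stays central.
print(best_placement(100_000))  # high demand -> edge
print(best_placement(50))       # long tail   -> central
```

Under these assumed prices, the crossover point between central and edge placement is a function of view count, which is exactly why an infrastructure that permits easy repositioning is valuable as a title's popularity changes.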
Candidate locations for virtualized functions include central data centers, distributed data centers, points of presence (POPs), and central offices (COs). (Telecommunications providers already have a central office infrastructure and are turning those locations into small data centers.) A final candidate location, for the highest-quality experience, is of course the customer premises.
An infrastructure that supports the easy relocation of a virtualized function could also be critical when, for example, a service is introduced through a central data center for rapid time to market. The network functions within the service might then need to move to more optimal locations as the service scales and performance requirements become paramount.
How virtualized functions are deployed - Virtualized functions may be deployed onto common servers in several ways: on x86 rack or blade servers in data centers, on x86 server blades in router or switch slots, and on x86 servers located next to routers and switches at other locations (CO, POP, customer site). The optimization of network architecture can be well served by a careful end-to-end analysis of the performance benefits and cost associated with where virtualized functions are physically located.
Capacity planning - A study by ACG Research shows how traditional approaches to capacity planning can no longer keep economic pace with demand (Figure 1). This can lead to lost opportunities, customers, and revenue, or to an under-utilized investment. Hence, the old capacity-planning model is another opportunity for economic optimization. What operators need is a more flexible and granular infrastructure to support increasingly unpredictable capacity demands.
With greater user mobility and network-loading HD video content, network capacity and resources must respond much faster than the current provisioning model supports. This implies an infrastructure that can adapt its resources and capacity in step with application and user demand. Virtualization applied correctly results in more granular scaling up or down with demand; the hardware increment or decrement becomes the server. Virtualization also makes it easier to power down hardware not needed for current demand, yielding significant energy and cooling cost savings.
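The idea that the scaling increment becomes the server can be sketched as follows. This is an illustrative model only: the per-server throughput figure, pool size, and demand samples are assumptions, not measurements.

```python
# Illustrative sketch of server-granular scaling: the capacity increment is a
# single server, and servers not needed for current demand are powered down.
# SERVER_CAPACITY_GBPS and POOL_SIZE are assumed figures for illustration.
import math

SERVER_CAPACITY_GBPS = 40  # assumed throughput one x86 server can sustain
POOL_SIZE = 20             # total servers available at this location

def servers_needed(demand_gbps):
    """Smallest number of servers that covers current demand (at least one)."""
    return min(POOL_SIZE, max(1, math.ceil(demand_gbps / SERVER_CAPACITY_GBPS)))

def plan(demand_series_gbps):
    """For each demand sample, report active versus powered-down servers."""
    return [(d, servers_needed(d), POOL_SIZE - servers_needed(d))
            for d in demand_series_gbps]

for demand, active, idle in plan([35, 120, 410, 90]):
    print(f"demand={demand:>3} Gbps -> {active} active, {idle} powered down")
```

Because capacity tracks demand in single-server steps, the pool spends most of its time partially powered down, which is where the energy and cooling savings come from.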
Use orchestration - Most services today require several functions. If these functions are virtualized, then a key part of optimizing your network is tying them together correctly and as simply as possible. That is very hard to do without NFV-ready orchestration tools. A suitable orchestration system automatically responds to the needs of applications and services to apply functions optimally - in the correct order, with the correct CPU, storage, and network capacity, and in the optimal locations. The orchestrator knows where virtualized functions are (or places them where they are needed in real time) and which are necessary, through service profiles, discovery, and cataloging techniques. Based on business needs, a function will sometimes be moved, or powered up or down when not required. Use an orchestrator to optimize the integration and coordinated use of network functions.
Lowering capital expenditures - Another focus for optimization is the reduction of capital costs through greater reusability of resources, another benefit of the new world of virtualized resources, converged infrastructure, and programmable networks. The current operator environment is full of special-purpose, dedicated infrastructure that is complex to manage and largely under-utilized. With the move to standard x86 server platforms and higher utilization of network resources through greater intelligence, virtualization, and automation, operators can optimize their capital budgets through much higher reuse of resources.
And what does this mean for a virtualized infrastructure? With regard to optimization, the infrastructure chosen by operators will need to be flexible in supporting all types of services and the varying degrees of capacity and resource granularity they consume. It will need to support virtualization at several locations throughout the network, along with easy relocation of a function as the business requires. It will need to interact with applications and management systems to provide highly responsive elasticity of resources: fast ramp-up as demand increases, and fast ramp-down and power-down when capacity is no longer needed. Lastly, it will need to reduce complexity for the network operations staff, freeing them up for more business innovation.
For More Information