Virtualized Application Delivery with Cisco Application Control Engine (ACE)
Updated: December 22, 2013
The unique virtualization capabilities of the Cisco® ACE Application Control Engine enable enterprises and service providers to accelerate and scale application deployments, reduce costs in the data center, simplify application delivery network architectures, and delegate application delivery management tasks.
Data Center Challenges
As applications are increasingly deployed to support primary business objectives within multiple organizations, new requirements are emerging to provide scalable application delivery services while reducing overall data center costs. Enterprises and service providers alike require scalable application delivery services that offer application security, maximized application availability, and optimized application performance as seen by users, all within a unified management framework. These services require several key data center technologies, including advanced server load balancing, Secure Sockets Layer (SSL) offloading, multilayer security, and application acceleration. The need to increase application deployment scale and velocity presents many challenges to today's data center personnel. These include reducing equipment acquisition costs, minimizing application deployment costs, improving application deployment workflows, and reducing data center resource requirements associated with application deployments. The capability to virtualize application delivery services is a crucial requirement for meeting these challenges.
What is Virtualization?
Virtualization is the ability to logically partition a single physical device into many virtual devices. Each virtual device must have all the capabilities of the actual physical device, and each virtual device must be independent and isolated so that it appears to be a unique physical device from the viewpoint of the network and the network administrator. With virtualization, each virtual device can be allocated its own resources and quality of service (QoS) with bursting capability to the virtual IP address (VIP) or real IP address (RIP) level if desired. Each virtual device can also be assigned its own configuration files, management interfaces, and access-control policies in which access control privileges are assigned to users based on their administrative roles.
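On the Cisco ACE module, these virtual devices are called contexts and are created from the administrative (Admin) context. The sketch below is illustrative only: the context name and VLAN number are hypothetical, and exact command syntax varies by ACE software release.

```
! From the ACE Admin context: create an isolated virtual device
! and allocate a data-path VLAN interface to it.
context HR-APPS
  description Virtual device dedicated to HR applications
  allocate-interface vlan 100
```

Each context created this way keeps its own configuration file and management state, independent of every other context on the module.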
Benefits of Virtualization with Cisco ACE
Virtualization in Cisco ACE delivers many important business and technical benefits.
Accelerated and Lower-Cost Application Rollout and Upgrades
With Cisco ACE, rolling out a new application or adding application support for another department simply requires the addition of a new virtual partition to create another virtual Cisco ACE device within the existing physical Cisco ACE module, rather than deployment of an additional hardware platform. This approach significantly reduces the costs associated with deploying new applications, and it also enables administrators to accelerate application rollouts. Cisco ACE also streamlines application upgrading. By virtualizing the Cisco ACE, administrators can achieve tremendous cost savings with a superior architecture for conducting ongoing maintenance while maximizing application availability. With Cisco ACE, the administrator can easily take an application out of service and gracefully add or remove systems in one virtual device without affecting any other applications serviced by the physical Cisco ACE device.
With traditional application delivery solutions, application deployment often proceeds slowly because of the need for complex workflow coordination. For a new application to be deployed or for an existing application to be tested or upgraded, the application group must work with the network administrator. Coordination is required to make the desired configuration changes on the application delivery device (typically a load balancer), a process particularly problematic for the network administrator, who is responsible for helping ensure that any change to the configuration does not impact existing services. Virtual partitioning with role-based access control (RBAC) in Cisco ACE mitigates this concern by enabling the network administrator to create an isolated configuration domain for the application group. By assigning configuration privileges within a single isolated virtual device to the application group, the network administrator can stay out of the workflow and eliminate the risk of misconfiguration of existing applications enabled in other virtual devices. This improved workflow creates a self-service model in which the application group can independently test, upgrade, and deploy applications faster than ever before.
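On the ACE, this delegation is expressed with domains (which scope the objects a user may touch) and roles (which scope the operations a user may perform). A hedged sketch with hypothetical user, password, and domain names; predefined role names and exact command syntax vary by software release:

```
! Inside the application group's context: limit a user to
! server load-balancing tasks on a defined set of objects.
domain APP1-DOMAIN
  add-object all
username app1-admin password 0 MyP@ssw0rd role SLB-Admin domain APP1-DOMAIN
```

Because the user is defined inside a single context, nothing this user configures can reach objects in any other virtual device.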
Complete Isolation of Applications, Departments, and Customers
With Cisco ACE, administrators have the flexibility to allocate resources to virtual devices in any way they see fit. For example, one administrator may want to allocate a virtual device for every application deployed. Another administrator may want to allocate a virtual device for each department's use, even for multiple applications. A service provider administrator may want to allocate a virtual device for each customer. Regardless of how resources are allocated, Cisco ACE's virtual devices are completely isolated from each other. Configurations in one virtual device do not affect configurations in other virtual devices. As a result, virtual partitioning provides a novel way of protecting a set of services configured in several virtual devices from accidental mistakes, or from malicious configurations, made in another virtual device. A configuration failure on Cisco ACE is limited to the scope of the virtual device in which it was created. A failure in one virtual device has no effect on other virtual devices in the Cisco ACE, maximizing uptime for critical applications, especially when Cisco ACE is deployed in a redundant high-availability configuration. Note that with competitors' offerings, customers need to purchase and deploy additional physical units to achieve this level of configuration isolation for applications, departments, and customers.
Optimized Cisco ACE Capacity Utilization
With Cisco ACE, resources can be allocated with fine granularity to virtual devices. Allocatable resources include, but are not limited to, bandwidth, connections, connections per second, number of Network Address Translation (NAT) entries, and memory. With this flexibility, resources can be shifted with fine granularity to the virtual devices (and hence applications or departments) that need them most, resulting in highest application performance while maximizing resource utilization within the Cisco ACE module.
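Resource allocation is configured from the Admin context through resource classes that contexts join as members. A minimal sketch (class and context names are hypothetical, and syntax varies by ACE software release):

```
! Guarantee member contexts 10% of module resources, allowing
! them to burst above the guarantee when capacity is free.
resource-class GOLD
  limit-resource all minimum 10.00 maximum unlimited

context HR-APPS
  member GOLD
```

Individual resources (for example connection rate or bandwidth) can be limited separately when a blanket guarantee is too coarse.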
Capacity planning practices generally require devices in the data center to be deployed at about 75 percent capacity to allow for periods of peak utilization. However, this approach does not scale with increased application rollouts and growth in business requirements. As the number of devices increases, the amount of unused resources committed for peak usage can exceed the capacity of entire devices. For example, if four devices are deployed at the recommended 75 percent capacity to allow for scaling, 25 percent of each device is unused. In this simple four-device example, the aggregated unused resources equal the capacity of an entire device (4 times 25 percent equals 100 percent). When extrapolated to real-life data center scenarios in which dozens or hundreds of devices are deployed, the costs associated with such unused reserved resources are significant. Cisco ACE's virtual partitioning capability in combination with the module's unmatched scalability, delivering up to 16 Gbps throughput per module, enables multiple virtual devices to share the same physical Cisco ACE device while at the same time allocating resources for future growth.
With Cisco ACE, costly forklift upgrades are no longer necessary. Using a "pay-as-you-grow" model, Cisco ACE enables scaling from 4 Gbps to 8 Gbps to 16 Gbps throughput on a single Cisco ACE module using only a software license upgrade. In addition, up to four Cisco ACE modules can be deployed in a single Cisco Catalyst® 6500 Series Switch chassis to deliver up to 64 Gbps throughput: performance and scale far beyond that offered by any competitor.
Reduced Data Center Resource Requirements
The unique virtualization capabilities of Cisco ACE enable customers to drastically reduce both physical and environmental data center resources, resulting in significant overall cost savings associated with application delivery. As previously discussed, Cisco ACE allows administrators to roll out additional applications simply by configuring additional virtual devices within the same physical Cisco ACE module, rather than deploying additional hardware platforms. As a result, network sprawl is reduced, and additional cabling and incremental rack space requirements are eliminated. In addition, because Cisco ACE reduces the number of physical devices in the network, it also significantly reduces power consumption and costs in the data center, even as customers deploy additional applications. Indeed, Cisco ACE is the only application delivery solution that enables customers to scale application delivery services without additional power. Because no additional power is required to support applications deployed by adding virtual devices, power consumption per application actually decreases as organizations scale upward, making Cisco ACE the most energy-efficient, high-performance solution available.
Unmatched Service Continuity Through Advanced High-Availability Capabilities
For high availability, Cisco ACE supports stateful redundancy with both active-active and active-standby designs. Stateful redundancy is crucial because it enables user sessions to continue uninterrupted during a failover event. Cisco ACE pairs can be deployed either within the same Cisco Catalyst 6500 Series chassis or, more commonly in redundant data center designs, within two physically separate Cisco Catalyst 6500 Series chassis. This flexibility allows the Cisco ACE to easily fit into the existing network architecture rather than requiring costly and time-consuming network design activities.
Stateful Cisco ACE redundancy can be enabled on a per-virtual-device basis, isolating a failure to its specific virtual device. With Cisco ACE, a failover event in one virtual device does not impact the operation of other virtual devices. In addition, only the failed virtual device undergoes a failover event, rather than the whole physical device, resulting in nearly instant failover and outstanding service continuity.
Redundancy per virtual device allows both Cisco ACE modules to be used simultaneously in an active-active configuration, optimizing Cisco ACE utilization and maximizing Cisco ACE return on investment (ROI). In a traditional active-standby high-availability design, the primary Cisco ACE is active, and all the virtual devices within the primary Cisco ACE module are active. If the primary Cisco ACE or a virtual device within it fails, the backup (standby) Cisco ACE takes over, and all virtual devices fail over to the backup Cisco ACE module. In an active-active high-availability design, both the primary and backup Cisco ACE modules are active simultaneously. The active virtual devices are distributed across both Cisco ACE modules such that half are active on the first module and the remainder are active on the second. In the event of a virtual device failure, the standby virtual device takes over on the peer Cisco ACE module. In the unlikely event of a Cisco ACE failure, all previously active virtual devices fail over to the peer Cisco ACE module.
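A fault-tolerant (FT) configuration along these lines binds each context to its own FT group, so failover is decided per virtual device rather than per module. The VLAN, addresses, and names below are hypothetical, and exact syntax varies by ACE software release:

```
! FT VLAN carrying heartbeat and state replication between peers
ft interface vlan 200
  ip address 192.168.200.1 255.255.255.0
  peer ip address 192.168.200.2 255.255.255.0
  no shutdown

ft peer 1
  ft-interface vlan 200

! One FT group per context: this context is active on this
! module (higher priority) and standby on the peer module.
ft group 10
  peer 1
  priority 150
  peer priority 50
  associate-context HR-APPS
  inservice
```

Distributing FT group priorities across the two modules (some groups preferring one module, some the other) yields the active-active design described above.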
With its comprehensive and flexible high-availability capabilities, Cisco ACE provides highly cost-effective and reliable application delivery services.
Centralized Control of Delegated Management
Because all virtual devices reside on the same physical Cisco ACE platform, all of the logical devices can be easily and centrally managed. Cisco ACE can be logically partitioned using either the Cisco Application Networking Manager (ANM) GUI, a powerful command-line interface (CLI), or an Extensible Markup Language (XML)-based API. The ANM is an intuitive workstation-based management system capable of simultaneously managing many Cisco ACE modules in multiple Cisco Catalyst 6500 Series chassis, each with many virtual devices. With competitors' offerings, multiple devices must be deployed, requiring costly and time-consuming management of many hardware devices. Figure 1 shows the deployment of Cisco ANM in the network.
Many existing applications deployed in the data center use complex architectures, typically with several layers of separate application delivery devices. The unique virtualization capabilities of Cisco ACE, combined with its superior scalability to 16 Gbps throughput, can help reduce the number of devices deployed while simplifying the network topology. In each of the five scenarios presented here, the scalability and virtualization capabilities of Cisco ACE drastically simplify the network architecture, reduce application rollout costs, reduce equipment costs, and ease management tasks.
Example 1: Replacing Many Devices with a Single Cisco ACE
In this common scenario, the customer has to deploy several application delivery devices to handle high request rates. The customer must purchase, configure, deploy, and manage separate devices. With Cisco ACE, the customer can deploy a single device to meet performance needs as shown in Figure 2. Although this scenario does not explicitly make use of Cisco ACE's virtualization capabilities, virtual partitioning can be used to support additional applications on the same Cisco ACE module, enabling additional scalability and network simplification (discussed in example 4 below).
Figure 2. Replacing Many Devices with a Single Cisco ACE
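The consolidated load-balancing function in this scenario maps onto a handful of ACE building blocks: real servers, a server farm, a virtual-IP class map, and load-balancing policy maps. A hedged sketch (addresses and names are hypothetical, and exact command syntax varies by ACE software release):

```
! Define a real server and place it in a server farm
rserver host WEB1
  ip address 192.168.1.11
  inservice

serverfarm host WEB-FARM
  rserver WEB1
  inservice

! Match client traffic destined for the virtual IP on port 80
class-map match-all VIP-HTTP
  2 match virtual-address 10.10.10.10 tcp eq www

! Send matched traffic to the server farm
policy-map type loadbalance first-match HTTP-LB
  class class-default
    serverfarm WEB-FARM

policy-map multi-match CLIENT-VIPS
  class VIP-HTTP
    loadbalance vip inservice
    loadbalance policy HTTP-LB
```

Additional applications can then be added as further class maps and policies, or as entirely separate contexts, without new hardware.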
Example 2: Collapsing a Multi-Tier Architecture with Cisco ACE
In this scenario, the customer has deployed the classic multi-tier architecture using separate load balancers and firewalls at each of the Web server, application server, and back-end server tiers as shown in Figure 3. Using the scale, virtualization, and firewalling capabilities of Cisco ACE, the three distinct load balancer and firewall tiers can be collapsed into a single physical device. Here, the Cisco ACE is partitioned to take on the role of the load balancers in each tier as well as to provide application layer security with SSL offload, stateful packet inspection, IP normalization, and server offload capabilities.
Figure 3. Collapsing a Multi-Tier Architecture with Cisco ACE
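When the tiers are collapsed, SSL offload can be attached to the same virtual IP by terminating client SSL on the ACE. A hedged sketch: the file names and addresses are hypothetical, HTTPS-LB stands in for a load-balancing policy defined elsewhere (not shown), certificates must first be imported to the module, and syntax varies by release.

```
! Terminate client SSL on the ACE using an imported key/cert pair
ssl-proxy service SSL-WEB
  key mykey.pem
  cert mycert.pem

class-map match-all VIP-HTTPS
  2 match virtual-address 10.10.10.10 tcp eq https

! HTTPS-LB is a previously defined load-balancing policy (not shown)
policy-map multi-match CLIENT-VIPS
  class VIP-HTTPS
    loadbalance vip inservice
    loadbalance policy HTTPS-LB
    ssl-proxy server SSL-WEB
```

Because the ACE decrypts traffic before load balancing it, the back-end servers are relieved of SSL processing, and policies can match on cleartext application data.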
Example 3: Consolidating the Data Center with Cisco ACE
To reduce data center costs and centralize management capabilities, many enterprise and service providers choose to consolidate their data center resources as shown in Figure 4. Consolidation creates an enormous set of challenges associated with migrating large numbers of devices and applications from distributed data centers to a central location. In many cases, overlapping IP address spaces make this type of consolidation extremely difficult and time consuming. Cisco ACE provides an excellent way to mitigate such data center consolidation challenges using all of the capabilities discussed in this document. Multiple virtual devices can be enabled on a single centrally deployed Cisco ACE module to support the applications and organizations supported in the previously distributed data center architecture. Because all virtual devices are isolated with separate configurations, Cisco ACE easily supports overlapping IP address spaces. When deployed as part of a data center consolidation strategy, Cisco ACE significantly reduces data center costs, simplifies the network topology, accelerates application rollouts, and eases management tasks.
Figure 4. Consolidating the Data Center with Cisco ACE
Example 4: Deploying Multiple Applications on Cisco ACE
In this scenario, the customer has deployed disparate application delivery devices for each application as shown in Figure 5. Here, virtual devices can be deployed on the Cisco ACE to support multiple applications rather than requiring deployment of additional hardware platforms. Note that while the single Cisco ACE deployment is functionally identical to the deployment of multiple application delivery devices, it is much more cost effective and easier to manage.
Figure 5. Deploying Multiple Applications on Cisco ACE
Example 5: Consolidating Multiple Functions with Cisco ACE
In this scenario, the customer has deployed many disparate devices to deliver load balancing, SSL offload, and data center firewall capabilities in a redundant configuration as shown in Figure 6. Not only is this configuration costly, it presents multiple TCP termination points, resulting in poor performance and scalability. The complexity of this architecture also makes it very difficult to manage and troubleshoot. With Cisco ACE, administrators can simplify this network architecture by consolidating these functions in a single Cisco ACE module. Consolidating these functions in Cisco ACE also significantly improves performance and scalability and facilitates troubleshooting by providing a single point of TCP termination.
Figure 6. Consolidating Multiple Functions with Cisco ACE
Solution: Virtualized Application Delivery with Cisco ACE
Deployed as a service module in the Cisco Catalyst 6500 Series Switches for ease of deployment and use, Cisco ACE delivers superior performance, scalability, high availability, application acceleration, and data center firewalling capabilities. Cisco ACE simplifies the application infrastructure by offering a single point of policy control in the network with the industry's only virtualized architecture.
The Cisco ACE solution provides the following benefits:
• Application rollout velocity: Reduces time required to deploy and upgrade crucial applications
• Data center cost reduction: Reduces costs and resource requirements associated with data center application infrastructure
• Application performance: Provides industry-leading performance, scalability, and availability for data center application delivery
• Application infrastructure control: Gives IT organizations a next-generation solution to better control the way they deploy, operate, and manage data center application infrastructure with virtual partitioning and RBAC
• Data center firewalling: Helps ensure that crucial applications, infrastructure, and data are protected from abuse and misuse
• Network simplification: Minimizes the cost and complexity of the network and reduces the number of devices and vendors required
The unique virtualization capabilities of Cisco ACE enable the support of numerous applications and departments in multiple isolated virtual devices on a single Cisco ACE module, resulting in optimized capacity utilization, decreased deployment costs, improved workflow, and decreased power consumption costs.
It is important to note that the Cisco virtualization implementation operates at the device level, allowing all aspects of the physical device to be virtualized. Other vendors limit their support of virtualization to access-control capabilities and do not offer device-level virtualization equivalent to that of Cisco ACE.
Virtualized partitioning is a crucial requirement for providing scalable, reliable, and cost-effective application delivery services in the data center. Enterprises and service providers alike can reap significant cost reduction and application deployment acceleration benefits with Cisco ACE. Cisco ACE is the only solution on the market today that provides true application delivery virtualization capabilities.
Note: Cisco ACE supports 5 virtual devices by default. Software license upgrades are available to scale from 5 to 250 virtual devices.