This paper examines the Cisco® Application Centric Infrastructure (ACI) approach to modeling business applications to the Cisco ACI network fabric and applying consistent, robust policies to those applications. The approach is a unique blend of mapping hardware and software capabilities to the deployment of applications either graphically through the Cisco Application Policy Infrastructure Controller (APIC) GUI or programmatically through the Cisco APIC API.
Cisco ACI Policy Theory
The Cisco ACI fabric is designed as an application-centric intelligent network. The Cisco APIC policy model is defined from the top down as a policy enforcement engine focused on the application itself and abstracting the networking functionality underneath. The policy model marries with the advanced hardware capabilities of the Cisco ACI fabric underneath the business application-focused control system.
The Cisco APIC policy model is an object-oriented model based on promise theory. Promise theory is based on declarative, scalable control of intelligent objects, in comparison to legacy imperative models, which can be thought of as heavyweight, top-down management.
An imperative model is a "big-brain," or top-down, style of management. In these systems the central manager must be aware of both the configuration commands of underlying objects and the current state of those objects. Figure 1 depicts this model.
Promise theory, in contrast, relies on the underlying objects to handle configuration state changes initiated by the control system as desired state changes. The objects are in turn also responsible for passing exceptions or faults back to the control system. This lightens the burden and complexity of the control system and allows for greater scale. These systems scale further by allowing for methods of underlying objects to in turn request state changes from one another and/or lower level objects. Figure 2 depicts promise theory.
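The declarative pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and attribute names are invented, not APIC API objects): the controller only declares desired state, and the object reconciles itself and reports faults back.

```python
# Sketch of the promise-theory pattern: the controller declares desired
# state; the managed object owns the "how" and reports faults upward.
class ManagedObject:
    def __init__(self, name):
        self.name = name
        self.desired = {}   # state declared by the control system
        self.actual = {}    # state the object has converged to
        self.faults = []    # exceptions passed back to the controller

    def declare(self, desired_state):
        """Controller hands over desired state only; no step-by-step commands."""
        self.desired = dict(desired_state)

    def reconcile(self):
        """Object converges actual state toward desired state on its own."""
        for key, value in self.desired.items():
            try:
                self.actual[key] = value  # a real device would program hardware here
            except Exception as exc:
                self.faults.append((key, str(exc)))
        return not self.faults

switch = ManagedObject("leaf-101")
switch.declare({"vlan": 10, "mtu": 9000})
assert switch.reconcile()
assert switch.actual == {"vlan": 10, "mtu": 9000}
```

Because the controller never tracks per-object command syntax or intermediate state, adding more objects does not add complexity to the controller, which is the scaling property the text describes.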
On top of this promise theory-based model, Cisco APIC builds an object model focused on the deployment of applications, with the applications themselves as the central focus. Traditionally, applications have been restricted by the capabilities of the network. Concepts such as addressing, VLANs, and security have been tied together, limiting the scale and mobility of the application itself. Because today's applications are being redesigned for mobility and web scale, this coupling is not conducive to rapid and consistent deployment.
The physical Cisco ACI fabric itself is built on a spine-leaf design; its topology is illustrated in Figure 3 using a bipartite graph, in which each leaf is a switch that connects to every spine switch, and no direct connections are allowed between leaf switches or between spine switches. The leaves act as the connection point for all external devices and networks, and the spines act as the high-speed forwarding engine between leaves. The Cisco ACI fabric is managed, monitored, and administered by the Cisco APIC.
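The bipartite property of the spine-leaf topology can be checked mechanically. The sketch below (switch names are hypothetical) validates the two rules stated above: links run only between tiers, and every leaf connects to every spine.

```python
def valid_spine_leaf(links, spines, leaves):
    """A spine-leaf fabric is a bipartite graph: links run only between
    a leaf and a spine, and every leaf connects to every spine."""
    link_set = {frozenset(link) for link in links}
    # Rule 1: no leaf-leaf or spine-spine links
    for a, b in links:
        if (a in spines) == (b in spines):
            return False
    # Rule 2: full mesh between the two tiers
    return all(frozenset((s, l)) in link_set for s in spines for l in leaves)

spines = {"spine-1", "spine-2"}
leaves = {"leaf-101", "leaf-102", "leaf-103"}
links = [(s, l) for s in spines for l in leaves]

assert valid_spine_leaf(links, spines, leaves)
# A direct leaf-to-leaf cable violates the design:
assert not valid_spine_leaf(links + [("leaf-101", "leaf-102")], spines, leaves)
```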
Cisco APIC Policy Object Model
At the top level, the APIC policy model is built on a series of one or more tenants, which allow network infrastructure administration and data flows to be segregated. Tenants can be used for customers, business units, or groups, depending on organizational needs. For instance, an enterprise might use one tenant for the entire organization, while a cloud provider might give each customer one or more tenants to represent its organization.
Tenants further break down into private Layer 3 networks, known as contexts, each of which directly relates to a Virtual Routing and Forwarding (VRF) instance or separate IP space. Each tenant may have one or more contexts, depending on the business needs of that tenant. Contexts provide a way to further separate organizational and forwarding requirements below a given tenant. Because contexts use separate forwarding instances, IP addressing can be duplicated across contexts for the purpose of multitenancy.
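The tenant-to-context hierarchy and the duplicate-addressing point can be illustrated with a small sketch (tenant, context, and endpoint names are hypothetical): because each context is its own forwarding table, the same IP can exist under different tenants without conflict.

```python
# Each context (VRF) is modeled as its own forwarding table under a tenant,
# so identical IPs in separate contexts never collide.
fabric = {
    "tenant-red":  {"ctx-1": {"10.0.0.10": "web-vm-1"}},
    "tenant-blue": {"ctx-1": {"10.0.0.10": "db-vm-7"}},  # duplicate IP, separate VRF
}

def lookup(tenant, context, ip):
    """Forwarding lookups are always scoped to a tenant and context."""
    return fabric[tenant][context].get(ip)

assert lookup("tenant-red", "ctx-1", "10.0.0.10") == "web-vm-1"
assert lookup("tenant-blue", "ctx-1", "10.0.0.10") == "db-vm-7"
```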
Below the context, the model provides a series of objects that define the application itself. These objects are endpoints, endpoint groups (EPGs), and the policies that define their relationships. It is important to note that policies in this case are more than just a set of access control lists (ACLs); they include a collection of inbound/outbound filters, traffic quality settings, marking and redirection rules, and Layer 4 through 7 service device graphs. This relationship is shown in Figure 4.
Figure 4 depicts two contexts under a given tenant and the series of applications that make up each context. The EPGs shown are groups of endpoints that make up an application tier or other logical application grouping. For example, Application B, shown expanded on the right, could comprise a blue web tier, a red application tier, and an orange database tier. The combination of EPGs and the policies that define their interaction forms an application profile on the Cisco ACI fabric.
Endpoint groups (EPGs) are collections of similar endpoints representing an application tier or set of services. They provide a logical grouping for objects that require similar policy. For example, an EPG could be the group of components that make up an application's web tier. Endpoints themselves are defined by NIC, vNIC, IP address, or DNS name, with extensibility for future methods of identifying application components.
EPGs are also used to represent other entities such as outside networks, network services, security devices, network storage, and so on. EPGs are collections of one or more endpoints providing a similar function. They are a logical grouping with varying use options depending on the application deployment model in use. Figure 5 depicts the relationship between endpoints, EPGs, and applications themselves.
EPGs are designed for flexibility, allowing their use to be customized to one or more deployment models a given customer might choose. The EPGs themselves are then used to define where policy is applied. Within the Cisco ACI fabric, policy is applied between EPGs, therefore defining how EPGs communicate with one another. This is designed to be extensible in the future to policy application within an EPG itself.
Some example uses of EPGs are:
● EPG defined by traditional network VLANs
- All endpoints connecting to a given VLAN are placed in an EPG
● EPG defined by a VxLAN
- Same as preceding using VxLAN
● EPG mapped to a VMware port group
● EPG defined by IPs or subnet
- For example, 220.127.116.11 or 172.168.10.*
● EPG defined by DNS names or DNS ranges
- For example, example.web.cisco.com or *.web.cisco.com
The use of EPGs is intentionally left both flexible and extensible. The model is intended to provide tools to build an application's network representation in a way that maps to the actual environment's deployment model. Additionally, the definition of endpoints themselves is intended to be extensible to provide support for future product enhancements and industry requirements.
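The EPG membership options listed above can be sketched as a set of classifiers. This is an illustrative sketch only (the rule format and EPG names are hypothetical, not the APIC object model): an endpoint is matched to an EPG by VLAN, by subnet, or by DNS wildcard.

```python
import fnmatch
import ipaddress

def classify(endpoint, epgs):
    """Return the first EPG whose membership rule matches the endpoint."""
    for name, (kind, value) in epgs.items():
        if kind == "vlan" and endpoint.get("vlan") == value:
            return name
        if kind == "subnet" and ipaddress.ip_address(endpoint["ip"]) in ipaddress.ip_network(value):
            return name
        if kind == "dns" and fnmatch.fnmatch(endpoint.get("dns", ""), value):
            return name
    return None

epgs = {
    "EPG-web": ("dns", "*.web.cisco.com"),      # DNS wildcard membership
    "EPG-app": ("vlan", 20),                    # traditional VLAN membership
    "EPG-db":  ("subnet", "172.168.10.0/24"),   # IP subnet membership
}

assert classify({"ip": "10.1.1.5", "dns": "srv1.web.cisco.com"}, epgs) == "EPG-web"
assert classify({"ip": "172.168.10.9"}, epgs) == "EPG-db"
```

Extending the model to a new membership method is just another `kind` branch, which mirrors the extensibility point in the text.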
The implementation of EPGs within the fabric provides several valuable benefits. EPGs act as a single policy enforcement point for a group of contained objects, which simplifies the configuration of these policies and ensures that they are consistent. Additionally, policy is applied based not on the subnet but on the EPG itself. This means that IP addressing changes to an endpoint do not necessarily change its policy, as is commonly the case in traditional networks (the exception being an endpoint defined by its IP address). Alternatively, moving an endpoint to another EPG applies the new policy at the leaf switch to which the endpoint is connected, defining new behavior for that endpoint based on the new EPG. Figure 6 depicts these benefits.
The final benefit provided by EPGs is in the way in which policy is enforced for an EPG. The physical ternary content-addressable memory (TCAM) in which policy is stored for enforcement is an expensive component of switch hardware and therefore tends to lower policy scale or raise hardware costs. Within the Cisco ACI fabric, policy is applied based on the EPG rather than the endpoint itself. This policy size can be expressed as n*m*f, where n is the number of sources, m is the number of destinations, and f is the number of policy filters. Within the Cisco ACI fabric, sources and destinations become one entry for a given EPG, which reduces the number of total entries required. This benefit is shown in Figure 7.
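The n*m*f sizing above can be made concrete with a small worked example (the endpoint counts are illustrative assumptions, not figures from the paper): with per-endpoint rules, every source/destination pair consumes TCAM entries, while with EPGs each group collapses to a single source and destination entry.

```python
def tcam_entries(n_sources, m_destinations, f_filters):
    """Policy size as described in the text: n * m * f."""
    return n_sources * m_destinations * f_filters

# Hypothetical example: 100 web endpoints talking to 100 app endpoints
# through 10 policy filters.
endpoint_based = tcam_entries(100, 100, 10)  # one entry per endpoint pair
epg_based = tcam_entries(1, 1, 10)           # one sEPG entry, one dEPG entry

assert endpoint_based == 100_000
assert epg_based == 10
```

The filter count f is unchanged; the savings come entirely from collapsing n and m to 1 per EPG, which is why TCAM scale improves without reducing policy expressiveness.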
As discussed, policy within a Cisco ACI fabric is applied between two EPGs. These policies can be applied in either a unidirectional or bidirectional mode between any given pair of EPGs. These policies then define the allowed communication between EPGs. This is shown in Figure 8.
Cisco APIC Policy Enforcement
The relationship between EPGs and policies can be thought of as a matrix with one axis representing source EPGs (sEPGs) and the other representing destination EPGs (dEPGs). One or more policies will be placed in the intersection between appropriate sEPGs and dEPGs. The matrix will end up sparsely populated in most cases because many EPGs will have no need to communicate with one another. (See Figure 9)
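The sparse matrix described above maps naturally to a dictionary keyed by (sEPG, dEPG) pairs. The sketch below is illustrative (EPG names and filter strings are hypothetical); only populated intersections exist, reflecting that most EPG pairs never communicate.

```python
# Sparse sEPG x dEPG policy matrix: absent intersections mean no policy,
# and therefore no permitted communication in a whitelist model.
policy_matrix = {
    ("EPG-web", "EPG-app"): ["permit tcp/8080"],
    ("EPG-app", "EPG-db"):  ["permit tcp/1433"],
}

def policies_for(sepg, depg):
    """Look up the policies at one intersection of the matrix."""
    return policy_matrix.get((sepg, depg), [])

assert policies_for("EPG-web", "EPG-app") == ["permit tcp/8080"]
assert policies_for("EPG-web", "EPG-db") == []   # unpopulated intersection
```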
Policies themselves break down into a series of filters for quality of service, access control, and so on. Filters are specific rules that make up the policy between two EPGs. Filters comprise inbound and outbound rules: permit, deny, redirect, log, copy (separate from SPAN), and mark. Policies allow for wildcard functionality within the definition. The enforcement of policy typically takes a most-specific-match-first approach. The table in Figure 10 shows the specific enforcement order.
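The most-specific-match-first behavior can be sketched as follows (the filter format is hypothetical): an exact port filter is evaluated before a wildcard filter between the same EPGs, so a specific permit can coexist with a broad deny.

```python
# Most-specific-match-first: exact port rules are evaluated before wildcards.
filters = [
    {"port": "*",   "action": "deny"},     # broad wildcard rule
    {"port": "443", "action": "permit"},   # specific rule wins for 443
]

def enforce(port):
    # Sort so exact matches (False) come before wildcards (True).
    for f in sorted(filters, key=lambda f: f["port"] == "*"):
        if f["port"] == "*" or f["port"] == port:
            return f["action"]
    return "deny"  # implicit deny when nothing matches

assert enforce("443") == "permit"   # specific match taken first
assert enforce("80") == "deny"      # falls through to the wildcard
```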
Enforcement of policy within the fabric is always guaranteed; however, policy can be applied in one of two places. Policy can be enforced opportunistically at the ingress leaf; otherwise, it is enforced at the egress leaf. Whether policy can be enforced at ingress is determined by whether the destination EPG is known. The source EPG is always known, and policy rules pertaining to that source, as both an sEPG and a dEPG, are always pushed to the appropriate leaf switch when an endpoint attaches. After policy is pushed to a leaf, it is stored and enforced in hardware. Because the Cisco APIC is aware of all EPGs and the endpoints assigned to them, the leaf to which an EPG is attached always has all required policies and never needs to punt traffic to a controller, as might be the case in other systems. (See Figure 11)
If the destination EPG is not known, policy cannot be enforced at ingress. Instead, the source EPG is tagged, and policy applied bits are not marked. Both of these fields exist in the reserved bits of the VxLAN header. The packet is then forwarded to the forwarding proxy, typically resident in the spine. The spine is aware of all destinations in the fabric; therefore, if the destination is unknown, the packet is dropped. If the destination is known, the packet is forwarded to the destination leaf. The spine never enforces policy; this will be handled by the egress leaf.
When a packet is received by the egress leaf, the sEPG and the policy applied bits are read (these were tagged at ingress). If the policy applied bits are marked as applied, the packet is forwarded without additional processing. If instead the policy applied bits do not show that policy has been applied, the sEPG marked in the packet is matched with the dEPG (always known on the egress leaf), and the appropriate policy is then applied. This is shown in Figure 12.
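The ingress/egress walk-through above can be sketched as a simulation. All field and function names here are hypothetical stand-ins for the sEPG tag and policy-applied bit carried in the VxLAN header; this is an illustration of the decision flow, not the actual forwarding pipeline.

```python
enforced = []  # records (sEPG, dEPG) pairs for which policy was applied

def apply_policy(sepg, depg):
    enforced.append((sepg, depg))

def ingress_leaf(packet, local_epg_table):
    """Apply policy at ingress only if the destination EPG is known."""
    packet["sepg"] = packet.pop("src_epg")   # tag the source EPG in the header
    depg = local_epg_table.get(packet["dst"])
    if depg is not None:
        apply_policy(packet["sepg"], depg)
        packet["policy_applied"] = True      # egress leaf will skip enforcement
    else:
        packet["policy_applied"] = False     # egress leaf must enforce
    return packet

def egress_leaf(packet, depg):
    """Egress leaf always knows the dEPG; enforce only if not already done."""
    if not packet["policy_applied"]:
        apply_policy(packet["sepg"], depg)
        packet["policy_applied"] = True
    return packet

# Destination EPG unknown at ingress, so enforcement is deferred to egress:
pkt = ingress_leaf({"src_epg": "EPG-web", "dst": "10.0.0.9"}, {})
assert pkt["policy_applied"] is False
pkt = egress_leaf(pkt, "EPG-app")
assert enforced == [("EPG-web", "EPG-app")]  # policy applied exactly once
```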
The opportunistic policy application allows for efficient handling of policy within the fabric. This opportunistic nature of policy application is further represented in Figure 13.
Multicast Policy Enforcement
The nature of multicast makes the requirements for policy enforcement slightly different. Although the source EPG is easily determined at ingress because it is never a multicast address, the destination is an abstract entity; the multicast group may consist of endpoints from multiple EPGs. In multicast cases the Cisco ACI fabric uses a multicast group for policy enforcement. The multicast groups are defined by specifying a multicast address range or ranges. Policy is then configured between the sEPG and the multicast group. (See Figure 14)
The multicast group (EPG group corresponding to the multicast stream) will always be the destination and never used as a source EPG. Traffic sent to a multicast group will be either from the multicast source or a receiver joining the stream through an IGMP join. Because multicast streams are nonhierarchical and the stream itself will already be in the forwarding table (using IGMP join), multicast policy is always enforced at ingress. This prevents the need for multicast policy to be written to egress leaves. (See Figure 15)
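A sketch of the multicast case (group ranges and EPG names are hypothetical): policy is keyed on the sEPG and the destination multicast group range rather than a dEPG, and is evaluated at ingress.

```python
import ipaddress

# Multicast policy: configured between an sEPG and a multicast address range.
multicast_policies = {("EPG-cam", "239.1.0.0/16"): "permit"}

def enforce_multicast(sepg, group_ip):
    """Ingress-only enforcement keyed on sEPG and destination group range."""
    for (s, group_range), action in multicast_policies.items():
        if s == sepg and ipaddress.ip_address(group_ip) in ipaddress.ip_network(group_range):
            return action
    return "deny"  # whitelist model: no matching policy means deny

assert enforce_multicast("EPG-cam", "239.1.2.3") == "permit"
assert enforce_multicast("EPG-web", "239.1.2.3") == "deny"
```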
Application Network Profiles
As stated earlier, an application network profile (ANP) within the fabric is a collection of EPGs, their connections, and the policies that define those connections. ANPs become the logical representation of an application and its interdependencies on the Cisco ACI fabric.
ANPs are designed to be modeled in a logical fashion, which matches the way applications are designed and deployed. The configuration and enforcement of the policies and connectivity are then handled by the system itself using the Cisco APIC rather than through an administrator. Figure 16 shows an example ANP.
Creating ANPs requires three general steps:
● Creation of EPGs, as discussed earlier
● Creation of policies that define connectivity, including:
- Permit, deny, and log
- Mark and redirect
- Copy (separate from SPAN)
- Service graphs
● Creating connection points between EPGs utilizing policy constructs known as contracts
Contracts define inbound and outbound permits, denies, QoS, redirects, and service graphs. Contracts allow for both simple and complex definition of how a given EPG communicates with other EPGs dependent on the requirements of a given environment. This relationship is shown in Figure 17.
In Figure 17 we see the relationship between the three tiers of a web application defined by EPG connectivity and the contracts that define their communication. The sum of these parts becomes an ANP. Contracts also provide reusability and policy consistency for services that typically communicate with multiple EPGs. Figure 18 uses the concept of network file system (NFS) and management resources to show this.
Figure 18 shows the basic three-tier web application used previously with some common additional connectivity that would be required in the real world. In this diagram we see shared network services, NFS, and management, which would be used by all three tiers as well as other EPGs within the fabric. In these cases the contract provides a reusable policy defining how the NFS and MGMT EPGs produce functions or services that can be consumed by other EPGs.
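The provider/consumer relationship described for the shared NFS service can be sketched as follows (contract and EPG names are hypothetical): one EPG provides a reusable contract, and any number of EPGs consume it, keeping the policy consistent everywhere it is reused.

```python
# One reusable contract: provided by the NFS EPG, consumed by many tiers.
contracts = {"NFS-services": {"provider": "EPG-nfs", "consumers": set()}}

def consume(epg, contract):
    """Attach an EPG as a consumer of an existing contract."""
    contracts[contract]["consumers"].add(epg)

for tier in ("EPG-web", "EPG-app", "EPG-db"):
    consume(tier, "NFS-services")

assert contracts["NFS-services"]["provider"] == "EPG-nfs"
assert contracts["NFS-services"]["consumers"] == {"EPG-web", "EPG-app", "EPG-db"}
```

Because the contract is defined once, a change to the NFS policy propagates to every consumer, which is the reusability benefit the text describes.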
Within the Cisco ACI fabric, the what and where of policy application have been intentionally separated. This allows policy to be created independently of how it is applied and reused where required. The actual policy configured in the fabric will be determined based on the policy itself defined as a contract (the what) and the intersection of EPGs and other contracts with those policies (the where).
In more complex application deployment environments, contracts can be further broken down using subjects, which can be thought of as applications or subapplications. To better understand this concept, think of a web server. Although it might be classified as web, it might be producing HTTP, HTTPS, FTP, and so on, and each of these subapplications might require different policies. Within the Cisco APIC model, these separate functions or services are defined using subjects, and subjects are combined within contracts to represent the set of rules that define how an EPG communicates with other EPGs. (See Figure 19)
Subjects describe the functions that an application exposes to other processes on the network. This can be thought of as producing a set of functions: that is, the web server produces HTTP, HTTPS, and FTP. Other EPGs then consume one or more of these functions; which EPGs consume these services are defined by creating relationships between EPGs and contracts, which contain the subjects defining applications or subapplications. Full policy is defined by administrators defining groups of EPGs that can consume what another provides. This model provides functionality for hierarchical EPGs, or more simply EPGs that are groups of applications and subapplications. (See Figure 20)
Additionally, this model provides the capability to define a disallow list on a per-EPG basis. These disallows, known as taboos, override the contract itself, ensuring that certain communication can be denied for a given EPG. This capability enables a blacklist model within the Cisco ACI fabric, as shown in Figure 21.
Figure 21 shows that a contract can be defined allowing all traffic from all EPGs. This allow is then refined by creating a taboo list of specific ports or ranges that are undesirable. This model provides a transitional method for customers desiring to migrate over time from a blacklist model, which is typically in use today, to the more desirable whitelist model. Blacklist is a model in which all communication is open unless explicitly denied, whereas a whitelist model requires communication to be explicitly defined before being permitted. It is important to remember that disallow lists are optional, and in a full whitelist model they will rarely be needed.
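The taboo evaluation order can be sketched in a few lines (port numbers and EPG names are illustrative): the per-EPG disallow list is checked before the contract, so a taboo entry wins even against a permit-all contract.

```python
def allowed(epg, port, contract_permits, taboos):
    """Taboo (disallow) entries override the contract for a given EPG."""
    if port in taboos.get(epg, set()):
        return False                      # taboo wins, regardless of contract
    return "*" in contract_permits or port in contract_permits

# Permit-all contract, refined by a taboo list of undesirable ports:
taboos = {"EPG-web": {23, 445}}           # e.g. telnet and SMB disallowed

assert allowed("EPG-web", 443, {"*"}, taboos)        # contract permits
assert not allowed("EPG-web", 23, {"*"}, taboos)     # taboo overrides
```

This is the transitional blacklist-over-whitelist pattern the text describes: the broad permit models today's open network, while taboos carve out the explicit denies.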
Contracts provide a grouping for the descriptions and associated policies that define those application services. They can be contained within a given scope, tenant, context, or EPG as a local contract. An EPG is also capable of subscribing to multiple contracts, which will provide the superset of defined policies.
Although contracts can be used to define complex real-world application relationships, they can also be used very simply for traditional application deployment models. For instance, if a single VLAN/VxLAN is used to define separate services, and those VLANs are tied to port groups within VMware, a simple contract model can be defined without unneeded complexity. (See Figure 22)
However, in more advanced application deployment models such as PaaS, SOA 2.0, and Web 2.0 models, where more application granularity is required, complex contract relationships can be used. These relationships can be used to define detailed relationships between components within a single EPG and to multiple other EPGs. This is shown in Figure 23.
Figure 23 shows that multiple application tiers may exist within a single EPG, and relationships are defined between those tiers as well as tiers residing in external EPGs. This allows complex relationships to be defined where certain constructs are consumed by other constructs that might reside within various EPGs. Functionality is provided by the Cisco ACI policy model to define relationships based on these components, which can be thought of as services, functions, applications, or subapplications residing in the same container.
Figure 23 also depicts the ability to provide intra-EPG policy, which is policy applied within a given EPG. This functionality will be supported in future releases without the requirement to change the model deployed today. As shown in the diagram, an EPG can consume its own resources as defined by the contract. In the diagram both NFS and database exist within an EPG that has requirements to consume those relationships. The policy is depicted by the arrows looping back to the application policy construct.
Although contracts provide the means to support more complex application models, they do not dictate additional complexity. As stated, for simple application relationships, simple contracts can be used. For complex application relationships, the contract provides a means for building those relationships and reusing them where required.
Contracts break down into subcomponents:
● Subjects: Group of filters that apply to a specific app or service
● Filters, which are used to classify traffic
● Actions such as permit, deny, mark, and so on to perform on matches to those filters
● Labels, which are used optionally to group objects such as subjects and EPGs for the purpose of further defining policy enforcement
In a simple environment, the relationship between two EPGs would look similar to that in Figure 24. Here web and app EPGs are considered a single application construct and defined by a given set of filters. This will be a very common deployment scenario. Even in complex environments, this model will be preferred for many applications.
Many environments will require more complex relationships; some examples of this are:
● Environments using complex middleware systems
● Environments in which one set of servers provides functionality to multiple applications or groups (for example, a database farm providing data for several applications)
● PaaS, SOA, and Web 2.0 environments
● Environments where multiple services run within a single OS
In these environments the Cisco ACI fabric provides a more robust set of optional features to model actual application deployments in a logical fashion. In both cases the Cisco APIC and fabric software are responsible for flattening the policy down and applying it for hardware enforcement. This relationship between the logical model, which is used to configure application relationships, and the concrete model used to implement them on the fabric simplifies design, deployment, and change within the fabric.
An example of this would be an SQL database farm providing database services to multiple development teams within an organization: for instance, a red team, blue team, and green team each using separate database constructs supplied by the same farm. In this instance, a separate policy might be required for each team’s access to the database farm. Figure 25 depicts this relationship.
The simple models discussed previously do not adequately cover this more complex relationship between EPGs. In these instances, we need the ability to separate policy for the three separate database instances within the SQL-DB EPG, which can be thought of as subapplications and are referred to within ACI as subjects.
The Cisco ACI fabric provides multiple ways to model this application behavior depending on user preference and application complexity. The first way in which to model this behavior is using three contracts, one for each team. Remember that an EPG can inherit more than one contract and will receive the superset of rules defined there. In Figure 26 each app team’s EPG connects to the SQL-DB EPG using its own specific contract.
As shown, the SQL-DB EPG inherits the superset of policies from three separate contracts. Each application team's EPG then connects to the appropriate contract. The contract itself designates the policy, while the relationship defined by the arrows denotes where policy will be applied, or who is providing and consuming which service. In this example the Red-App EPG will consume SQL-DB services with the QoS, ACL, marking, redirect, and so on behavior defined within the Red-Team APC. The same will be true for the blue and green teams.
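The superset behavior can be sketched directly (contract and EPG names are hypothetical): the SQL-DB EPG provides all three contracts and ends up enforcing the union of their rules.

```python
# Three per-team contracts; the providing EPG enforces their superset.
team_contracts = {
    "Red-Team":   {("EPG-red",   "tcp/1433")},
    "Blue-Team":  {("EPG-blue",  "tcp/1433")},
    "Green-Team": {("EPG-green", "tcp/1433")},
}

# SQL-DB EPG provides all three contracts, so its effective policy is the union:
provided = set().union(*team_contracts.values())

assert ("EPG-red", "tcp/1433") in provided
assert len(provided) == 3   # one distinct rule per consuming team
```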
In many instances, there will be groups of contracts that are frequently applied together: for example, if multiple DB farms are created that all require access by the three teams in our example, or if development, test, and production farms are used. In these cases, a bundle can be used to logically group the contracts. Bundles are optional; a bundle can be thought of as a container for one or more contracts for ease of use. The use of bundles is depicted in Figure 27.
In Figure 27 it is very important to note the attachment points of the arrows showing relationship. Remember that policy is determined by what and where within the fabric. In this example we want SQL-DB EPG to provide all contracts within the contract bundle, so we attach the bundle itself to the EPG. For each of the three application teams, we only want access defined by its specific contract, so we attach each team to consume the corresponding contract itself within the bundle.
This same relationship can optionally be modeled in another way using labels. Labels provide another grouping function for use within application policy definition. In most environments labels will not be required, but they are available for deployments with advanced application models and teams who are familiar with the concept.
Using labels, a single contract can be used to represent multiple services or components of applications provided by a given EPG. In this case the labels represent the DB EPG providing database services to three separate teams. By labeling the subjects and the EPGs using them, separate policy can be applied within a given contract even if the traffic types or other classifiers are identical. Figure 28 shows this relationship.
In Figure 28 the SQL-DB EPG provides services using a single contract called SQL-DB, which defines the database services it provides to three different teams. Each of the three teams' EPGs that consume these services is then attached to the same contract. By using labels on the subjects and the EPGs themselves, specific separate rules are defined for each team. The rules within the contract that match the label will be the only ones applied for each EPG. This holds true even if the classification within the construct is the same: for example, the same Layer 4 ports and so on.
Labels provide a very powerful classification tool that allows objects to be grouped together for the purpose of policy enforcement. This also allows applications to be moved quickly through various development lifecycles. For example, if the red-labeled service Red-App represented a development environment that needed to be promoted to test, represented by the blue label, the only required change would be to the label assigned to that EPG.
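Label-based subject selection can be sketched as follows (the label names, filters, and QoS classes are hypothetical): within one contract, each consuming EPG receives only the subject whose label matches its own, even when the traffic classification is identical.

```python
# One contract, three labeled subjects with identical classification (tcp/1433)
# but different policy per team.
subjects = [
    {"label": "red",   "filter": "tcp/1433", "qos": "gold"},
    {"label": "blue",  "filter": "tcp/1433", "qos": "silver"},
    {"label": "green", "filter": "tcp/1433", "qos": "bronze"},
]

def rules_for(epg_label):
    """Only subjects whose label matches the EPG's label are applied."""
    return [s for s in subjects if s["label"] == epg_label]

assert rules_for("red") == [{"label": "red", "filter": "tcp/1433", "qos": "gold"}]
# Promoting an EPG from dev (red) to test (blue) is just a label change:
assert rules_for("blue")[0]["qos"] == "silver"
```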
The Cisco ACI policy model is designed top down using a promise theory model to control a scalable architecture of defined network and service objects. This model provides robust repeatable controls, multitenancy, and minimal requirements for detailed knowledge by the control system known as the Cisco APIC. The model is designed to scale beyond current needs to the needs of private clouds, public clouds, and software-defined data centers.
The policy enforcement model within the fabric is built from the ground up in an application-centric object model. This provides a logical model for laying out applications, which will then be applied to the fabric by the Cisco APIC. This helps to bridge the gaps in communication between application requirements and the network constructs that enforce them. The Cisco APIC model is designed for rapid provisioning of applications on the network that can be tied to robust policy enforcement while maintaining a workload anywhere approach.