A typical architecture for an ACI fabric with an OpenStack deployment consists of a Nexus 9000 spine/leaf topology, an APIC cluster, and a group of servers to run the various control and compute components of OpenStack. OpenStack can be deployed in many ways, but a basic test architecture consists of at least one OpenStack controller server, which also acts as the Neutron network node, and two or more OpenStack compute nodes to host Virtual Machine (VM) instances. An ACI External Routed Network connection provides a Layer 3 path outside of the fabric and can be used to provide connectivity beyond the OpenStack cloud.
The Modular Layer 2 (ML2) framework in OpenStack allows integration of networking services based on type drivers and mechanism drivers. Common network type drivers include local, flat, vlan, and vxlan. OpFlex is added as a new network type through ML2, with the actual packet encapsulation on the host (either VXLAN or VLAN) defined in the OpFlex configuration. A mechanism driver is enabled to communicate networking requirements from the Neutron server(s) to the Cisco APIC cluster. The APIC mechanism driver translates Neutron networking elements such as a network (segment), subnet, router, or external network into the corresponding APIC constructs within the ACI Policy Model.
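As a concrete illustration, the ML2 configuration on the Neutron server declares the OpFlex network type and the APIC mechanism driver. The excerpt below is a minimal sketch; the driver entry-point names shown (opflex, cisco_apic_ml2) and the exact option values vary by plugin release, so treat them as assumptions to verify against the installation guide for your version.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini -- illustrative excerpt only
[ml2]
# "opflex" is the network type added by the Cisco driver package; the
# actual encapsulation on the host (VXLAN or VLAN) is set in the OpFlex
# agent configuration, not here.
type_drivers = opflex,local,flat,vlan,vxlan
tenant_network_types = opflex
# Mechanism driver name is an assumption; confirm the entry point
# shipped with your installed plugin.
mechanism_drivers = cisco_apic_ml2
```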
The OpFlex ML2 software stack also currently utilizes a modified Open vSwitch (OVS) package and local software agents on each OpenStack compute host that communicate with the Neutron server(s) and OVS. An OpFlex proxy on the ACI leaf switch exchanges policy information with the agent-ovs instance in each compute host, effectively extending the ACI switch fabric and policy model into the virtual switch. This results in a cohesive system that can apply networking policy anywhere in the combined virtual and physical switching fabric, starting from the virtual port where a VM instance attaches to the network. The Figure below illustrates the interaction of the OpFlex ML2 APIC Driver with the ACI fabric, and the extension of the OpFlex proxy down into the agent-ovs service on the compute host.
Note: The OpFlex ML2 APIC Driver for integration into Neutron runs on the server running the neutron-server service. This server may be a controller node running other OpenStack software elements, or be dedicated to the Neutron function. High Availability configurations with multiple Neutron servers are also supported.

On the compute node, the neutron-opflex-agent service receives information about OpenStack endpoints from the ML2 driver software on the Neutron server. This information is stored locally in endpoint files located in /var/lib/opflex-agent-ovs/endpoints. The agent-ovs service uses the endpoint information to resolve policy for the endpoints through the OpFlex proxy on the connected ACI leaf switch. The agent-ovs service then uses OpenFlow to program OVS with the policies that can be enforced locally; non-local policies are enforced on the upstream leaf switch. The Figure below illustrates the interaction between the OpFlex modules running on the compute node and OVS.

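For reference, the per-endpoint state and locally programmed policy described above can be inspected directly on a compute host. The commands below are a hedged sketch; the integration bridge name and the OpenFlow version shown are assumptions that may differ in a given deployment.

```sh
# Endpoint files written by neutron-opflex-agent for locally hosted VMs
ls -l /var/lib/opflex-agent-ovs/endpoints/

# Verify the OpFlex-related services are running on the compute host
systemctl status neutron-opflex-agent agent-ovs

# Inspect the flows that agent-ovs has programmed into OVS
# (bridge name "br-int" and OpenFlow 1.3 are assumptions)
ovs-ofctl -O OpenFlow13 dump-flows br-int
```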
OpenStack defines multiple network connection requirements for the server nodes providing cloud services. Communication paths need to be defined and provided for Management traffic, Tenant Data, and External networking requirements, as well as API communication between the various OpenStack services. Deployments may also dedicate a network segment for storage traffic or other specific needs. An ACI switching fabric is able to provide networking services to meet all of these requirements. Server connectivity can consist of either separate physical interfaces, virtualized network adapters such as the Cisco VIC, or a managed blade server system such as Cisco UCS B-Series.
Management and API Network(s)—This network segment is used for administrative Secure Shell (SSH) access to the OpenStack servers, as well as for API communication directly to the servers and between OpenStack services. The Management and API functions may also be broken out into different network segments; the example configurations in this guide use a single network segment for both.
External Network—With an ACI fabric integrated with OpFlex, the External Network path is provided by an External Routed Network configuration in the APIC. An External network in a system running the Neutron L3-Agent is a network on the outside of a software-based routing function. An External Network utilizes NAT services to allow hidden and overlapping IPv4 address space to be used by Tenants.
Tenant Data Network—A Tenant network in OpenStack can be created dynamically by a Tenant to provide connectivity between VM instances in the cloud, and to connect through cloud-based routing services to other Tenant or External networks. The segment IDs assigned to Tenant networks with OpFlex are tracked by the ACI fabric, and consist of VXLAN between leaf switches, with either VXLAN or VLAN between the leaf switch and the server. A CLI example of creating Tenant networking objects follows this list.
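The following minimal sketch shows how a Tenant data network might be created with the Neutron CLI; the names and addressing are purely illustrative. With the OpFlex ML2 driver, these objects are rendered in the APIC according to the policy mapping described later in this chapter.

```sh
# Tenant network and subnet (rendered as an EPG + Bridge Domain and a Subnet in ACI)
neutron net-create web-net
neutron subnet-create web-net 192.168.10.0/24 --name web-subnet

# Tenant router attached to the subnet (realized as ACI policy rather than
# a centralized software L3 agent)
neutron router-create tenant-router
neutron router-interface-add tenant-router web-subnet
```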
OpenStack Neutron defines common networking constructs and services required by VM instances operating in the cloud environment. Both availability and scale of Neutron services can become a concern when implementing all of these functions on a single server, or a small cluster of servers. The OpFlex ML2 Driver software stack provides the ability to distribute these network services across the compute nodes in the cluster, using a scale-out approach to reduce the load on any single instance of a service while increasing overall service availability.
The following OpenStack Neutron services may be distributed to compute nodes when using the OpFlex ML2 Driver software:
NAT for External Networks—The OpFlex ML2 Driver approach for supporting External networks distributes the Source NAT and Floating IP functions for OpenStack to the Open vSwitch on the compute hosts. Packets destined for IP addresses not defined in the private OpenStack Tenant space are automatically NATted before egressing a compute host. The translated packets are then routed to the external routed network defined in the APIC. Distributed NAT services are inherent in the solution.
Layer 3 Forwarding—The Neutron reference software implementation of Layer 3 Agent is replaced by a combination of Layer 3 forwarding in the ACI fabric, as well as local forwarding within a compute node. If two VMs connected to the same OpenStack Tenant router reside on the same compute node, Layer 3 traffic between them will be forwarded by OVS and remain local to that physical server. Distributed Layer 3 for traffic local to a compute node is inherent in the solution.
DHCP—Reference Neutron software implementations have a DHCP Agent service centralized on the Neutron server(s). The OpFlex ML2 driver software allows a distributed DHCP approach using the agent-ovs service. Distributing the DHCP function across the compute nodes keeps DHCP Discovery, Offer, Request and Acknowledge (DORA) traffic local to the host and ensures reliable allocation of IP addressing to VM Instances. Central Neutron address management functions communicate DHCP addressing and options to the local agent-OVS over the management network. This optimized DHCP approach is enabled by default in the solution, but can be reverted to the traditional centralized mode if desired.
Metadata Proxy—OpenStack VMs can receive instance-specific information such as the instance ID, hostname, and SSH keys from the Nova Metadata Service. This service is normally reached through a Neutron service acting as a proxy on behalf of OpenStack VM instances. The OpFlex ML2 software allows this proxy function to be distributed to each of the compute hosts. This optimized metadata proxy is disabled by default, and either the traditional centralized approach or the distributed approach may be configured. A configuration sketch for the optimized DHCP and metadata options follows this list.
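The optimized DHCP and Metadata Proxy behaviors described above are typically toggled in the driver configuration file. The sketch below shows the general form; the section header and option names are assumptions based on common releases of the Cisco APIC driver, so confirm them against your installed plugin before use.

```ini
# /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini -- illustrative excerpt only
[ml2_cisco_apic]
# Distributed (optimized) DHCP is enabled by default in this solution
enable_optimized_dhcp = True
# Distributed (optimized) metadata proxy is disabled by default
enable_optimized_metadata = False
```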
The logical topology diagram in the Figure below illustrates the connections to OpenStack network segments from Neutron server(s) and compute hosts, including the distributed Neutron services.

Note: The Management/API network for OpenStack may be connected to servers using an additional virtual NIC on a common uplink with tenant networking to the ACI fabric, or via a separate physical interface.
Cisco ACI uses a policy model to enable network connectivity between endpoints attached to the fabric. OpenStack Neutron uses more traditional Layer 2 and Layer 3 networking concepts to define networking configuration. The OpFlex ML2 driver translates the Neutron networking requirements into the necessary ACI policy model constructs to achieve the desired connectivity. The Table below illustrates OpenStack Neutron constructs, and the corresponding APIC policy objects that will be configured when they are created.
| OpenStack Object | APIC Object |
|---|---|
| Neutron Instance | ACI Tenant(s), VMM Domain |
| Tenant/Project | APP Profile or Separate ACI Tenant |
| Tenant Network | Endpoint Group + Bridge Domain |
| Subnet | Subnet |
| Security Groups/Rules | N/A (Linux iptables rules are maintained per-host) |
| Router | Contract + EPG + Bridge Domain |
| External Network | Layer 3 Out / Outside EPG |
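One way to confirm the mapping in the table above is to query the APIC REST API for the objects created on behalf of OpenStack. The sketch below assumes administrator credentials and an APIC reachable at https://apic; the login and class-query URLs are standard APIC REST endpoints.

```sh
# Authenticate to the APIC and store the session cookie
curl -sk -c cookie.txt -X POST https://apic/api/aaaLogin.json \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"<password>"}}}'

# List ACI tenants; the tenant (or Application Profiles within it) created
# by the OpFlex ML2 driver for this OpenStack instance should be present
curl -sk -b cookie.txt https://apic/api/node/class/fvTenant.json
```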
By default, the OpFlex ML2 Driver associates an entire instance of OpenStack Neutron to a single ACI tenant, and names this tenant according to the apic_system_id setting in the /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini file. This allows the ACI administrator to manage each cloud instance connected to the fabric as a single entity, and not generate a large number of ACI tenants in the APIC for a fabric that is being used for multiple systems. In this mode, separate OpenStack tenants are defined in the APIC as different Application Profiles.
An alternative option can be configured at installation time in the ml2_conf_cisco_apic.ini file using the setting single_tenant_mode = False, which tells the system to create a new ACI Tenant for each OpenStack tenant. This results in a 1:1 correlation between OpenStack tenants and ACI tenants, and generates ACI tenants named according to the convention <apic_system_id>_<openstack tenant name>. If multi-tenant mode is used, the value apic_name_mapping = use_uuid should also be set in the ml2_conf_cisco_apic.ini file for proper system function.
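A minimal sketch of the relevant settings is shown below. The option names are taken from this section; the section headers and the placement of apic_system_id are assumptions that may vary by plugin release.

```ini
# /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini -- illustrative excerpt only
[DEFAULT]
# Names the single ACI tenant (and VMM domain) used for this OpenStack instance
apic_system_id = openstack1

[ml2_cisco_apic]
# Default: one ACI tenant for the whole cloud, with each OpenStack
# tenant/project mapped to an Application Profile
single_tenant_mode = True

# Alternative multi-tenant mode: one ACI tenant per OpenStack tenant
# single_tenant_mode = False
# apic_name_mapping = use_uuid   # required in multi-tenant mode
```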
The OpFlex ML2 Driver software brings the capability to support Network Address Translation (NAT) functions in a distributed manner using the local OVS instance on each OpenStack compute node. This distributed approach increases the availability of the overall solution, and offloads the central processing of NAT from the Neutron server L3-agent used in the reference implementation.
Three distinct IP subnets are required to take full advantage of external network functionality with the OpFlex ML2 driver. This is a different approach from the default Neutron external network behavior that typically uses a single external subnet for these functions.
Link Subnet—This subnet represents the actual physical connection to the external next-hop router outside of the fabric. This will be assigned to a routed interface, sub-interface, or SVI depending on the configuration.
Source-NAT Subnet—The term Source NAT (SNAT) in OpenStack describes allowing VM instances to initiate connections to destinations outside of the cloud by sharing an address on the external network. This subnet is used for Port Address Translation (PAT), allowing multiple VMs to share an outside-routable IP address. A single IP address is assigned to each compute host, and Layer 4 port number manipulation is used to keep session traffic unique.
Floating IP Subnet—The term Floating IP in OpenStack is used when a VM instance is allowed to claim a distinct static NAT address to support inbound connections to the VM from outside the cloud. The Floating IP subnet will be the subnet assigned within OpenStack to the Neutron external network entity.
Traffic egressing the cloud carries a source IP address from either the SNAT subnet or the Floating IP subnet. The routing hops external to ACI need to have routes back to these subnets, either through a dynamic routing protocol or static configuration, so that return traffic can find its way back to OpenStack.
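From the OpenStack side, only the Floating IP subnet is configured on the Neutron external network; the link and SNAT subnets are defined through the APIC and the driver configuration. A hedged CLI sketch with illustrative names and documentation addressing:

```sh
# External network corresponding to the ACI External Routed Network
# (the network name is typically matched against the driver's external
#  network configuration; "ext-net" here is illustrative)
neutron net-create ext-net --router:external

# Floating IP subnet assigned to the Neutron external network
neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --disable-dhcp \
  --allocation-pool start=203.0.113.10,end=203.0.113.200

# Attach a tenant router and allocate a Floating IP for inbound connections
neutron router-gateway-set tenant-router ext-net
neutron floatingip-create ext-net
```

The SNAT subnet is not created through Neutron; it is supplied through the driver configuration for the external network, and a single address from it is allocated to each compute host as described above.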
With the NAT function itself occurring in the local OVS of the compute node, the physical switches in the ACI fabric only need to route external traffic to and from the external next-hop router. This external routing is handled by a Virtual Routing and Forwarding (VRF) instance associated with the Layer 3 Out. This L3-Out VRF has an interface associated with the physical link to the external next-hop router. The same VRF also has interfaces addressed from the assigned Source NAT subnet and the Floating IP subnet. A loopback interface is also present in this VRF for routing protocol interaction. A graphical representation of the subnet architecture supporting this NAT approach is shown in the Figure below.

The L3-Out VRF associated with the OpenStack Neutron external network processes NAT traffic that egresses OVS on the compute host. Non-NAT traffic is processed by a Tenant VRF based on the OpenStack Tenant/Project association of the VM Instance.
The OpFlex ML2 Driver software stack provides optimized traffic flow and distributed processing for the DHCP and Metadata Proxy services used by VM instances. These services are designed to keep as much processing and packet traffic as possible local to the compute host. The distributed elements communicate with centralized functions to ensure system consistency.
The reference OpenStack Neutron architecture utilizes the neutron-dhcp-agent service running on the Neutron server(s) to provide all DHCP communication to VM instances over the OpenStack tenant networks. The neutron-dhcp-agent provides central IP address management, and also communicates with each VM instance for DHCP Discovery, Offer, Request, and Acknowledgement (DORA) functions.
The OpFlex optimized DHCP approach instead provides all DORA services locally on the compute host via the agent-ovs service. The distributed services communicate over the Management network to the Neutron server(s) for allocation of IP addressing and DHCP options. This architecture keeps the bulk of the packet traffic required to issue a DHCP lease local to the compute host itself, while also offloading the processing of this interaction from the Neutron server(s). An illustration of this DHCP architecture is provided in the Figure below.

The reference OpenStack Neutron architecture for Metadata delivery to VM instances provides a centralized proxy service running on the Neutron server(s). This proxy service looks up instance information in Nova API, then adds HTTP headers and redirects the Metadata request to the Nova Metadata service. The Metadata requests from VM instances are transmitted on the OpenStack Tenant networks.
The OpFlex optimized Metadata Proxy approach instead provides Metadata delivery using distributed Metadata Proxy instances running on each compute host. The agent-ovs service reads the OpFlex service file and programs a flow in OVS to direct Metadata service requests to the local neutron-metadata-agent. This local agent runs in a separate Linux namespace on the compute host. The Metadata Proxy function then accesses Nova-API and Nova Metadata Service running on the OpenStack Controller over the Management network to deliver VM-specific metadata to each VM instance. An illustration of this Metadata Proxy architecture is shown in the Figure below.

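From inside a VM instance the metadata request itself is unchanged; with the optimized proxy enabled, the flow programmed by agent-ovs redirects the request to the local proxy instead of sending it across the tenant network to the Neutron server. For example:

```sh
# Run inside a VM instance: the well-known metadata address stays the same,
# only the path the request takes differs with the optimized proxy
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/hostname
```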
Cisco ACI supports integration with multiple Virtual Machine Manager (VMM) systems, including OpenStack. This integration provides direct APIC visibility to the OpenStack compute nodes, including a detailed listing of all of the VM instances on each node along with the virtual interface information for each learned port. An example VM Networking view of OpenStack hypervisors is shown in the Figure below.

The VM Networking section of the APIC web interface also provides a view by Distributed Virtual Switch (DVS). Each DVS corresponds to an OpenStack network, which may be distributed across multiple compute nodes. The listing includes details of the compute node and ACI leaf to which each VM instance is connected, and can be sorted and filtered to locate any VM by IP or MAC address. An example VM Networking view of OpenStack DVS instances is shown in the Figure below.
