Cisco Application Policy Infrastructure Controller OpenStack and Container Plugins Release 3.2(1), Release Notes
This document describes the features, caveats, and limitations for the Cisco Application Policy Infrastructure Controller (APIC) OpenStack and Container Plugins.
Cisco APIC OpenStack Plugins are used to deploy and operate OpenStack instances on an ACI fabric. The plugins allow dynamic creation of networking constructs to be driven directly from OpenStack, while providing additional visibility and control from the Cisco APIC.
Cisco APIC CNI Plugin is used to deploy and operate Kubernetes clusters or OpenShift clusters on an ACI fabric. It allows dynamic creation of networking constructs to be driven directly from Kubernetes, while providing additional visibility and control from the Cisco APIC.
For the verified scalability limits (except the CLI limits), see the Verified Scalability Guide for this release. For the OpenStack, Kubernetes, OpenShift, and Cloud Foundry Platform scale limits, see the following table:
| Configurable Options |
Per Leaf Scale |
Per Fabric Scale |
| Number of OpFlex hosts per leaf or vPC pair |
40* |
N/A |
| Number of endpoints per leaf or vPC pair |
2000* |
N/A |
* = same scalability values for OpenStack, Kubernetes, OpenShift, and Cloud Foundry Platform
For the CLI verified scalability limits, see the Cisco NX-OS Style Command-Line Interface Configuration Guide for this release.
Release notes are sometimes updated with new information about restrictions and caveats. See the following website for the most recent version of this document:
Table 1 shows the online change history for this document.
Table 1 Online Change History
| Date |
Description |
| June 1, 2018 |
Release 3.2(1) became available. |
| August 23, 2018 |
Added upgrading information. For more information, see the Cisco APIC OpenStack, Container Plugins and Cisco APIC Compatibility Matrix and Changes In Behavior sections. |
| September 20, 2018 |
Added information about using external IPs for Load Balancing. For more information, see the Containers Guidelines section. |
This document includes the following sections:
■ Cisco APIC OpenStack, Container Plugins and Cisco APIC Compatibility Matrix
■ Cisco ACI Virtualization Compatibility Matrix
■ Caveats
Table 2 shows the Cisco APIC OpenStack, Container Plugins and Cisco APIC Compatibility Matrix.
| Cisco APIC OpenStack and Container Plugins |
Cisco APIC |
Supported |
Guidelines and Restrictions |
| 3.2(1) |
3.2(1) |
Yes |
Cisco APIC OpenStack and Container Plugins, Release 3.2(1) is compatible with Cisco APIC, Release 3.2(1). Note: When you upgrade, you must upgrade the Cisco APIC to 3.2(1) before upgrading the Cisco APIC OpenStack and container plugin to release 3.2(1). Otherwise, you may see instability on the attached top of rack (ToR) switch. |
This section lists the new and changed features in this release and includes the following topics:
The following are the new software features for this release:
Table 3 Software Features, Guidelines, and Restrictions
| Feature |
Description |
Guidelines and Restrictions |
| Pivotal Cloud Foundry Integration |
This release enables the deployment of Pivotal Cloud Foundry in the Cisco ACI fabric. For more information, see the Cisco ACI and Pivotal Cloud Foundry Integration knowledge base article. |
None. |
| OVS-DPDK support |
This release adds OVS-DPDK compatibility for non-OpFlex deployments. |
None. |
| Neutron Trunk Ports support |
This release adds support for Neutron Trunk Ports. For more information, see the OpenStack Trunking documentation: |
None. |
| Neutron Service Function Chaining Integration |
This release allows you to use Neutron Service Function Chaining to create service chains within the APIC. For more information, see the Cisco ACI OpenStack User Guide. |
Cisco only supports the REST API for this feature. Check with your OpenStack provider for CLI and GUI support. |
| OpenStack Pike support |
This release adds support for OpenStack Pike. |
None. |
| Kubernetes |
This release adds support for Kubernetes 1.7. For more information, see the Cisco ACI and Kubernetes Integration knowledge base article and the Cisco ACI Virtualization Compatibility Matrix. |
None. |
This section lists changes in behavior in this release.
■ If you are going to upgrade, you must upgrade the Cisco ACI fabric first before upgrading the Cisco APIC OpenStack and container plugins. The only exception is for the Cisco ACI fabric releases that have been explicitly validated for this specific plugin version in the Cisco ACI Virtualization Compatibility Matrix.
For more information, see the Cisco ACI Virtualization Compatibility Matrix at the following URL:
■ Support for security groups is now implemented in OVS; this change takes effect when upgrading to this version from earlier versions of the plugins.
■ Starting in release 3.1(1) for OpenStack, the following changes were made to the unified plugin:
— Adds support for the OpenStack Ocata release
— Moves security group implementation from IPtables to OVS
— Improves support for multiple OpenStack instances on the same APIC cluster
■ Security groups for OpFlex hosts are implemented natively in OVS, instead of using IPtables rules.
If you are using an installer plugin distributed with this code, the appropriate configuration of OpFlex hosts is done automatically. If you have your own installer, this change requires the following changes to the bridge configuration on all OpFlex hosts:
1. To create the br-fabric bridge, enter the following commands:
# ovs-vsctl add-br br-fabric
# ovs-vsctl set-fail-mode br-fabric secure
2. To add a VXLAN port to br-fabric, enter the following command. The br-int_vxlan0 VXLAN port on the br-int bridge is no longer needed and can be removed.
# ovs-vsctl add-port br-fabric br-fab_vxlan0 -- set Interface br-fab_vxlan0 type=vxlan options:remote_ip=flow options:key=flow options:dst_port=8472
3. Change the agent-ovs config file:
"renderers": {
"stitched-mode": {
//"ovs-bridge-name": "br-int", <=== Remove this line.
"int-bridge-name": "br-fabric", <=== Add this line.
"access-bridge-name": "br-int", <=== Add this line.
"encap": {
"vxlan" : {
//"encap-iface": "br-int_vxlan0", <=== Change from br-int to br-fab.
"encap-iface": "br-fab_vxlan0",
"uplink-iface": "eth1.4093",
"uplink-vlan": 4093,
"remote-ip": "10.0.0.32",
"remote-port": 8472
}
},
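For reference, the bridge commands from steps 1 and 2 can be collected into one reviewable sequence. This is only a sketch: it prints the commands rather than executing them, so it can be reviewed before being run on an OpFlex host, and it assumes the old br-int_vxlan0 port still exists on br-int.

```shell
# Sketch of the bridge reconfiguration from steps 1 and 2; the commands are
# printed, not executed. Pipe the output to 'sh' on an OpFlex host to apply.
bridge_setup='ovs-vsctl add-br br-fabric
ovs-vsctl set-fail-mode br-fabric secure
ovs-vsctl del-port br-int br-int_vxlan0
ovs-vsctl add-port br-fabric br-fab_vxlan0 -- set Interface br-fab_vxlan0 type=vxlan options:remote_ip=flow options:key=flow options:dst_port=8472'
printf '%s\n' "$bridge_setup"
```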
■ Multiple OpenStack instances can share the same Cisco ACI fabric. Earlier versions of the unified plugin would attach all OpenStack VMM domains to every OpenStack instance. This release allows cleaner separation by using this procedure:
You must provision the VMM domains owned by each OpenStack instance using the new host-domain-mapping CLI command:
# aimctl manager host-domain-mapping-v2-create [options] <host name> <domain name> <domain type>
The host name can be a wildcard, which is indicated using an asterisk surrounded by double quotes ("*"). A wildcard means that the mapping should be used for all hosts. When more than one OpenStack instance shares the fabric, an entry must be created in this table for each VMM domain in use by that OpenStack instance. For example, if one OpenStack instance is using the VMM domains "ostack1" and "ostack2", the following commands would be run on that OpenStack controller to add entries to this table:
# aimctl manager host-domain-mapping-v2-create "*" ostack1 OpenStack
# aimctl manager host-domain-mapping-v2-create "*" ostack2 OpenStack
If the second OpenStack instance is using VMM Domain "ostack3", the following command would be run on that OpenStack controller to add an entry to its table:
# aimctl manager host-domain-mapping-v2-create "*" ostack3 OpenStack
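When an instance owns several VMM domains, the per-domain commands above can also be scripted. A minimal sketch using the example domain names from above; it prints the aimctl commands it would run rather than executing them (drop the echo to execute on the OpenStack controller):

```shell
# Hypothetical domain list for one OpenStack instance; replace with the
# VMM domains that the instance actually owns.
domains="ostack1 ostack2"
for dom in $domains ; do
    # Print the command; remove 'echo' to create the mapping for real.
    echo aimctl manager host-domain-mapping-v2-create '"*"' "$dom" OpenStack
done
```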
■ Earlier versions supported only one logical uplink for hierarchical port binding or non-OpFlex VLAN network binding. In this release, you can have multiple uplinks for those use cases when using the unified plugin.
To use this feature, use the AIM CLI to provide the mapping between a physnet in OpenStack and an interface on a specific host. The following aimctl CLI command configures this mapping:
# aimctl manager host-link-network-label-create <host_name> <network_label> <interface_name>
As an example, host h1.example.com is provisioned to map its eth1 interface to physnet1:
# aimctl manager host-link-network-label-create h1.example.com physnet1 eth1
■ Previously it was not possible for a single L3 Out to be shared across multiple OpenStack instances when using AIM, because both OpenStack instances would attempt to use an External Network Endpoint Group of the same name. This release adds scoping of the Application Profile for the External Network Endpoint Group using the apic_system_id, which is configured in the [DEFAULT] section of the neutron configuration file.
■ In earlier versions, the AIM plugin would take ownership of pre-existing L3 Outs when NAT was not being used, which led to scenarios where the AIM plugin would delete the pre-existing L3 Out in some corner cases. With this release, the AIM plugin will not take ownership of any pre-existing L3 Outs.
■ The legacy plugin is not supported with the Ocata plugins and will not be supported on future versions of OpenStack. The legacy plugin for Newton is supported. Cisco recommends that all customers use unified mode for both Newton and Ocata.
■ The OpFlex agent does not support client authentication. This means that the SSL certificate check must be disabled in the Cisco APIC GUI:
1. In the APIC GUI, on the menu bar, choose System > System Settings > Fabric Wide Setting.
2. Ensure that the OpFlex Client Authentication check box is not checked.
For information about Cisco ACI, Kubernetes, OpenShift and OpenStack, see the Cisco ACI Virtualization Compatibility Matrix at the following URL:
This section lists the known limitations.
■ GBP and ML2 Unified Mode does not have feature parity with the earlier non-unified mode. In particular, it does not support the following features; for deployments that need any of these features, continue using the existing plugin configuration.
— ESX hypervisor support
— ASR1K edgeNAT support
— GBP/NFP Service chaining
— ML2 Network constraints
■ Not all Unified mode features are supported by the legacy plugin:
— Support for OpenStack address scopes
OpenStack address scopes are supported only in the Unified mode (where they are mapped to VRFs in the Unified model) and are not supported in the earlier configurations.
— Dual stack IPv6 deployment
■ GBP and ML2 Unified Mode is a new mode of operation. While you can manually transition to this mode, there is no automated upgrade from a previous installation to this mode.
■ Dual-stack operation requires that all IPv4 and IPv6 subnets - both for internal and external networks - use the same VRF in Cisco ACI. The one exception to this is when separate external networks are used for IPv4 and IPv6 traffic. In that workflow, the IPv4 and IPv6 subnets used for internal networks plus the IPv6 subnets used for external networks all belong to one VRF, while the subnets for the IPv4 external network belong to a different VRF. IPv4 NAT can then be used for external networking.
■ Mirantis Fuel based plugins are not supported. For Ubuntu based installs, use the released Juju based installer.
■ APIC OpenStack plugins do not support the Cisco ACI Multi-Site at this time.
■ If you are using SLAAC, add a security group rule to allow ICMPv6 to the affected Neutron networks. For example, the following security group (ipv6-sg) allows the required traffic:
# openstack security group rule create --ethertype IPv6 --ingress --protocol 58 --src-ip ::/0 ipv6-sg
■ Before performing an upgrade from 3.1(1) using OpenStack Director or attempting an APIC ID recovery procedure, all AIM processes on all controllers must be shut down. To shut down all the AIM processes on all controllers, run the following command on the undercloud:
for IP in $(nova list | grep ACTIVE | sed 's/.*ctlplane=//' | sed 's/ |//') ; do
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no heat-admin@$IP \
"sudo systemctl stop aim-event-service-rpc; sudo systemctl stop aim-aid; sudo systemctl stop aim-event-service-polling" ;
done
If upgrading, you do not need to explicitly restart the AIM processes; the upgrade restarts them automatically.
If attempting an APIC ID recovery, you must restart the AIM processes on all the controllers manually after ID Recovery is complete.
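The manual restart can reuse the shutdown loop above with stop replaced by start. A sketch, assuming the same undercloud environment and heat-admin access as the shutdown loop; it builds and prints the per-controller command rather than running it:

```shell
# Mirror of the shutdown loop: the command to start the AIM services on a
# controller. Built and printed here so it can be reviewed first.
start_cmd="sudo systemctl start aim-event-service-rpc; sudo systemctl start aim-aid; sudo systemctl start aim-event-service-polling"
echo "$start_cmd"
# On the undercloud, the command would be pushed to each controller with the
# same loop used for shutdown:
#   for IP in $(nova list | grep ACTIVE | sed 's/.*ctlplane=//' | sed 's/ |//') ; do
#     ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no heat-admin@$IP "$start_cmd"
#   done
```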
■ Keystone configuration update
When the OpenStack plugin is installed in the unified mode, the Cisco installer adds the required configuration for keystone integration with AIM. When not using unified mode, or when using your own installer, the configuration section must be provisioned manually:
[apic_aim_auth]
auth_plugin=v3password
auth_url=http://<IP Address of controller>:35357/v3
username=admin
password=<admin_password>
user_domain_name=default
project_domain_name=default
project_name=admin
■ When using optimized DHCP, the DHCP lease times are set by the configuration variable apic_optimized_dhcp_lease_time under the [ml2_apic_aim] section.
— This requires a restart of neutron-server to take effect
— If this value is updated, existing instances will continue using the old lease time, provided their neutron port is not changed (e.g. rebooting the instance would trigger a port change, and cause it to get the updated lease time). New instances will however use the updated lease time.
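As an illustration, the option might be set as follows. This is a sketch: 3600 is an example value, and the option is assumed to take a lease time in seconds.

```ini
[ml2_apic_aim]
apic_optimized_dhcp_lease_time = 3600
```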
■ In upstream Neutron, the "advertise_mtu" option has been removed.
Since the aim_mapping driver still uses this configuration, the original configuration which appeared in the default section should be moved to the aim_mapping section. For example:
[aim_mapping]
advertise_mtu = True
It is set to True by default in the code (if not explicitly specified in the config file).
■ GBP and ML2 Unified Mode allows coexistence of those OpenStack networking APIs on the same OpenStack and Cisco ACI instance, but they need to be running on different VRFs. This is a constraint that we may remove in the future; at this time, this is the supported configuration.
■ Unified mode has features not supported by the legacy plugin:
— Support for OpenStack address scopes and subnetpools
OpenStack address scopes and subnetpools are supported only in the Unified mode (where they are mapped to VRFs in the unified model) and are not supported in the earlier configurations.
— Dual stack IPv6 deployment
■ If a default VRF is implicitly created for a tenant in ML2, it is not implicitly deleted until the tenant is deleted (even if it is no longer being used).
■ Unified model impact of the transaction model updates in Newton
When GBP and ML2 co-exist, GBP implicitly creates some neutron resources. In Newton, the neutron transaction model was updated and many checks were added. Some of those checks spuriously see this nested transaction usage as an error and log and raise an exception. The exception is handled correctly by GBP and there is no functional impact, but the neutron code also logs some exceptions in the neutron log file, leading to the impression that the action failed.
While most such exceptions are logged at the DEBUG level, occasionally you might see some exceptions being logged at the ERROR level. If such an exception log is followed by a log message which indicates that the operation is being retried, the exception is being handled correctly. One such example is the following:
Delete of policy-target on a policy-target-group associated to a network-service-policy could raise this exception:
2017-03-18 12:52:34.421 27767 ERROR neutron.api.v2.resource […] delete failed
2017-03-18 12:52:34.421 27767 ERROR neutron.api.v2.resource Traceback …:
2017-03-18 12:52:34.421 27767 ERROR neutron.api.v2.resource File "/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, …
...
2017-03-18 12:52:34.421 27767 ERROR neutron.api.v2.resource raise …
2017-03-18 12:52:34.421 27767 ERROR neutron.api.v2.resource ResourceClosedError: This transaction is closed
Note: We are working with the upstream community for further support on Error level logs.
■ When an L2 Policy is deleted in GBP, some implicit artifacts related to it may not be deleted (resulting in unused BDs/subnets on APIC). If you hit this situation, the workaround is to create a new empty L2 Policy in the same context and then delete it.
■ The ASR1K edge-NAT solution does not support creating instances attached to external networks.
■ If you use tempest to validate OpenStack, the following tests are expected to fail and can be ignored:
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_update_router_admin_state
■ Neutron-server logs may show the following message when DEBUG level is enabled:
Timed out waiting for RPC response: Timeout while waiting on RPC response - topic: "<unknown>", RPC method: "<unknown>" info: "<unknown>"
This message can be ignored.
■ CSCvf34907: During upgrade of the leaf switch, the opflex-proxy may sometimes create a core. This can be ignored and the upgrade should proceed as expected (except, potentially, a small delay as the opflex-proxy process is restarted).
■ High Availability LBaaSv2 is not supported.
■ OpenStack Ocata is only supported with unified plugin.
■ For OpenShift, the external IP used for the LoadBalancer service type is automatically chosen from the subnet pool specified in the ingressIPNetworkCIDR configuration in the /etc/origin/master/master-config.yaml file. This subnet should match the extern_dynamic property configured in the input file provided to the acc_provision script. If a specific IP is desired from this subnet pool, it can be assigned to the "loadBalancerIP" property in the LoadBalancer service spec. For more details, refer to the OpenShift documentation here:
NOTE: The extern_static subnet configuration in the acc_provision's input is not used for OpenShift.
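As an illustration, a hypothetical Service manifest pinning a specific external IP. The service name, selector, and address here are examples only; the address must fall within the ingressIPNetworkCIDR subnet pool described above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc          # hypothetical service name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.3.0.20  # example IP; must come from the ingressIPNetworkCIDR pool
  selector:
    app: example             # hypothetical selector
  ports:
  - port: 80
```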
■ You should be familiar with installing and using Kubernetes or OpenShift. The CNI plugin (and the corresponding deployment file) is provided to enable networking for an existing installer such as kubeadm or kargo. Cisco ACI does not provide the Kubernetes or OpenShift installer.
■ The released images for this version are available on dockerhub under user Noiro. A copy of those container images and the RPM/DEB packages for support tools (acc-provision and acikubectl) are also published on CCO.
■ OpenShift has a tighter security model by default, and many off-the-shelf Kubernetes applications, such as guestbook, may not run on OpenShift (if, for example, they run as root or open privileged ports such as 80).
Please refer to the following for details:
https://blog.openshift.com/getting-any-docker-image-running-in-your-own-openshift-cluster/
■ When running OpenShift, 'oc new-app' tries to reach GitHub. If you are running behind a proxy, this connection may fail because of OpenShift's issues with handling the proxy environment variable when it is set on the compute node. This is an OpenShift issue and is not related to the networking provided by Cisco ACI.
■ CSCvf34907: During upgrade of the leaf switch, the opflex-proxy may sometimes create a core. This can be ignored and the upgrade should proceed as expected (except, potentially, a small delay as the opflex-proxy process is restarted).
■ In this release the supported platforms are:
— Upstream Kubernetes 1.6 or 1.7 installed on Ubuntu 16.04, or OpenShift 3.6 installed on servers running Red Hat OCP.
— The servers above can be either bare metal or VMs attached to an ESX VMM domain on Cisco ACI.
— Other platforms and operating systems will be supported in the future. For more information, see the Cisco ACI Virtualization Compatibility Matrix section.
■ In this release, the maximum supported number of PBR based external services is 200 VIPs. Scalability is expected to increase in upcoming releases.
NOTE: With OpenShift, master nodes and router nodes are tainted by default, and you might see lower scale than an upstream Kubernetes install on the same hardware.
■ APIC Kubernetes and OpenShift plugins do not support the Cisco ACI Multi-Site at this time.
This section contains lists of open and resolved caveats and known behaviors.
This section lists the open caveats. Click the bug ID to access the Bug Search tool and see additional information about the bug.
The following are open caveats in the 3.2(1) release.
Table 4 Open Caveats in the 3.2(1) Release
| Bug ID |
Description |
Fixed In |
| During upgrade of the leaf switch, the OpFlex proxy may sometimes create a core. This can be ignored and the upgrade should proceed as expected (except, potentially, a small delay as the OpFlex proxy process is restarted). |
|
|
| When using nested Kubernetes hosts on ESX, for best performance it is recommended that you provision one Kubernetes host per Kubernetes cluster on a specific ESX server. |
|
This section lists the resolved caveats. Click the bug ID to access the Bug Search tool and see additional information about the bug.
The following table lists the resolved caveats in the 3.2(1) release.
Table 5 Resolved Caveats in the 3.2(1) Release
| Bug ID |
Description |
| Handle DHCP even when the network has not been resolved Details: Handle DHCP even when the network has not been resolved. This avoids situations where the network resolution takes longer than the DHCP request timeout in OpenStack. Solution: In this release, the agent fulfills DHCP DORA traffic before EPG resolution has completed. |
|
| EPG Operational tab not showing VM names for OpenStack VMM domains Details: A VM instance from the OpenStack integration does not show the VM name on the Operational tab of the EPG in the APIC GUI. Solution: In this release, the EPG Operational tab shows VM names for OpenStack VMM domains in the APIC GUI. |
|
| OSD installer not setting apic_system_id in aimctl.conf Details: Because of a typo, the apic_system_id variable in aimctl.conf is set to the default value "openstack" despite the ciscoaim.yaml setting being different. Solution: In this release, this is fixed, and the value from the YAML file is used for apic_system_id in aimctl.conf. |
|
| OSD installer plugin does not set persistent iptables rules for VxLAN Details: OSD installer plugin does not set persistent iptables rules for VxLAN, so instance traffic destined outside of the compute-node fails after a host reboot. Solution: In this release, the VXLAN rule update in the installer plugin is persistent. |
|
| agent-ovs not downloading policies from the Cisco ACI fabric Details: Under certain circumstances, a race condition could allow the managed object database to become confused about the source of a managed object and treat it as a local object. This causes it to treat valid child objects as orphans and delete them prematurely and not download them again until an agent restart. Solution: In this release, the underlying race condition is fixed. |
|
| neutron-opflex-agent EP file creation blocked by slow OVSDB read Details: Under some circumstances, an OVSDB read can become very slow. This can block the opflex-agent from writing the EP files for several minutes, causing the Neutron port to be stuck in the building state and Nova to eventually time out. Solution: In this release, the agent is more resilient to OVSDB delay. |
|
| OpFlex connection to existing leaf disrupted when peer is reloaded Details: When a leaf in a vPC pair is reloaded, the OpFlex connections to the other leaf also get reset. This has no effect on a working system, but it creates additional load and errors during that period. Solution: In this release, the OpFlex connection to the other leaf is not impacted. |
|
| Kubernetes provisioning tool does not support multiple PODs Details: The Kubernetes provisioning tool does not support multiple PODs. Solution: In this release, the Kubernetes provisioning tool supports multiple PODs. |
This section lists caveats that describe known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the bug.
There are no known behaviors in the 3.2(1) release.
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from the following website:
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, and knowledge base (KB) articles, as well as other documentation. KB articles provide information about a specific use case or a specific topic.
By using the "Choose a topic" and "Choose a document type" fields of the APIC documentation website, you can narrow down the displayed documentation list to make it easier to find the desired document.
This section lists the new Cisco ACI product documents for this release.
■ Cisco ACI and Pivotal Cloud Foundry Integration
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2018 Cisco Systems, Inc. All rights reserved.