Cisco APIC Container Plug-in Release 5.0(1), Release Notes
This document describes the features, bugs, and limitations for the Cisco Application Policy Infrastructure Controller (APIC) Container Plug-in.
The Cisco Application Centric Infrastructure (ACI) Container Network Interface (CNI) Plug-in provides network services to Kubernetes, Red Hat OpenShift, and Docker EE clusters on a Cisco ACI fabric. It allows the cluster pods to be treated as fabric end points in the fabric integrated overlay, as well as providing IP Address Management (IPAM), security, and load balancing services.
Release notes are sometimes updated with new information about restrictions and bugs. See the following website for the most recent version of this document:
Table 1: Online History Change
Date | Description
2020-05-29 | Release 5.0(1) became available.
2020-05-31 | Added Universal Base Image (UBI) 8 to New Software Features section.
2020-06-10 | Added open bug CSCvu58070.
2022-02-28 | Updated the Supported Scale table.
This document includes the following sections:
Cisco ACI Virtualization Compatibility Matrix
For information about the container products supported by Cisco ACI, see the Cisco ACI Virtualization Compatibility Matrix at the following URL:
This section lists the new and changed features in this release.
Cisco ACI supports Kubernetes 1.17.
Cisco ACI supports the deployment of Istio control plane using the upstream community supported Istio Operator.
Note: This is a preview feature and is rapidly evolving in response to upstream changes.
Beginning with this release, Cisco ACI CNI deployments are managed by a Kubernetes Operator.
Multiple SNAT policies can be associated with a Pod such that the SNAT IP is allocated based on the destination. SNAT can also be completely suppressed for known destinations on the same fabric.
You can configure VMware teaming policy when link aggregation groups (LAGs) are used. For more information, see Cisco ACI and Kubernetes Integration.
You can deploy a Kubernetes cluster with the Cisco ACI CNI plug-in on a mixed collection of virtual machines (VMs) for the master nodes, and bare-metal servers for the worker nodes.
You can add a Kubernetes cluster to an existing tenant—such as the common tenant in Cisco APIC. You do so by modifying the configuration file. For more information, see Cisco ACI and Kubernetes Integration.
Container images are now based on Red Hat Universal Base Images instead of the Alpine base images that were used in earlier releases.
■ The Cisco ACI CNI deployments and daemonsets are now deployed by the Cisco ACI CNI Operator, and the lifecycle of these resources is subsequently managed by this Operator. The user workflow does not change, because this change is captured in the Kubernetes deployment file generated by the acc-provision tool; that deployment file now deploys the Cisco ACI CNI Operator.
■ The Cisco APIC resources created by acc-provision for Kubernetes clusters now contain the prefix “aci-containers-” instead of “kube-”. Some resource names, for instance endpoint group (EPG) names, also contain the system-id. Running acc-provision always results in this new naming convention unless it is explicitly configured to use the older naming convention when upgrading existing clusters. Please refer to the Usage Guidelines entry for use_legacy_kube_naming_convention if you want to keep the older naming convention.
■ The egress SNAT feature has gone through significant enhancements by decentralizing the computation previously performed by the SNAT operator container and delegating it to the host agents. This resulted in the elimination of the SNAT operator and the snatlocalinfos custom resource. Any tools or scripts written to reference the snatlocalinfos resource will need to be updated accordingly.
■ By default, acc-provision installs the Istio control plane (version 1.5.2) on the cluster using the “demo” configuration profile. This installation is driven from the aci-containers-controller pod. The Istio control plane pods are brought up in the “istio-system” namespace and are isolated in the “aci-containers-istio” EPG. The constructs that achieve this isolation, such as contracts, filters, and contract relationships, are automatically configured on the Cisco APIC.
■ The scope of the SNAT service graph contract can be configured by the user. Please refer to the Usage Guidelines to perform this configuration.
■ The "annotation" property for Cisco APIC objects created by acc-provision is set to orchestrator:aci-containers-controller. This is also reflected in a unique icon on the Cisco APIC objects in the Cisco APIC GUI.
■ acikubectl has been enhanced to collect relevant config maps.
■ The Cisco ACI CNI 5.0(1) release is not tested for OpenShift 3.11. We recommend that deployments running OpenShift 3.11 continue using the Cisco ACI CNI plug-in from the 4.2(2) release.
■ The Istio 1.5.2 deployment is a preview feature and is expected to evolve rapidly in the next release in response to upstream community changes. This may create backward-compatibility issues, so this feature should be used only in experimental or pilot deployments.
■ The Cisco IPI installer for OpenShift 4.3 on OpenStack 13 does not currently support autoscale or label/taint updates by the machineset-operator.
■ OpenShift 4.3 on AWS does not currently support policy enforcement in a hybrid/multi-site deployment.
■ The Cisco ACI CNI Plug-in is not integrated with the Multi-Site Orchestrator. In a Multi-Site deployment, ensure that the Cisco ACI configurations implemented by the plug-in are not affected by the Multi-Site Orchestrator.
■ SNAT is not supported for services inside the same fabric.
■ If you are upgrading a Kubernetes cluster that was provisioned with the Cisco ACI CNI plug-in version 4.2(2), you need to add the following configuration to the original config.json:
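A minimal sketch of this addition, assuming the use_legacy_kube_naming_convention flag referenced in the Changes in Behavior section sits under the kube_config section:

    kube_config:
      # keep the pre-5.0(1) "kube-" prefixed Cisco APIC object names on upgraded clusters (placement assumed)
      use_legacy_kube_naming_convention: True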
Also note that upgrading a Cisco ACI CNI cluster requires running acc-provision with the “-a” option.
■ Istio installation can be disabled by setting the configuration parameter “install-istio” to False in the acc-provision input file and then regenerating and applying the deployment file.
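A sketch of the corresponding acc-provision input change; the istio_config section and the install_istio key spelling are assumptions based on the parameter named above:

    istio_config:
      # disable the default Istio control-plane installation (key spelling assumed)
      install_istio: False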
■ The scope of the SNAT service graph contract can be configured by the user in the acc-provision input file as follows:
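A minimal sketch of this setting, assuming the contract_scope key sits under a snat_operator block in the kube_config section:

    kube_config:
      snat_operator:
        # scope of the SNAT service graph contract (key placement assumed)
        contract_scope: tenant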
Valid values (as allowed by Cisco APIC) are "global", "tenant" and "context". The default is set to "global".
■ ACC subscribes to notifications from the Cisco APIC for certain objects. There is a timeout associated with this subscription; a shorter timeout requires more frequent subscription renewals. The timeout is set to 900 seconds for Cisco APIC 4.x and can be changed by configuring the acc-provision input file:
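A sketch of this setting, assuming the key is apic_refreshtime under the aci_config section and that the value is expressed in seconds:

    aci_config:
      # subscription renewal timeout in seconds (key name assumed)
      apic_refreshtime: 1200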
Note: The subscription timeout is configurable only in Cisco APIC 4.x.
■ The memory limit for the Open vSwitch container is set to 1GB. It can be changed by configuring the acc-provision input file as follows:
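A sketch of this setting, assuming the key is ovs_memory_limit under the kube_config section:

    kube_config:
      # memory limit for the Open vSwitch container (key name assumed)
      ovs_memory_limit: "2Gi"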
■ Policy-Based Routing (PBR) tracking can be enabled for the Cisco APIC service graph created for supporting the SNAT feature. More details on PBR tracking can be found in the chapter “Configuring Policy-Based Redirect” in the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide, Release 4.2(x).
One HealthGroup for each node is created and associated with the redirect policy of the SNAT service graph, with the Internet Protocol Service Level Agreement (IP SLA) interval set to 5 seconds. This interval is configurable through the acc-provision input file:
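A sketch of this setting, assuming the key is service_monitor_interval under the net_config section and that the value is expressed in seconds:

    net_config:
      # IP SLA monitoring interval for the SNAT redirect policy (key name assumed)
      service_monitor_interval: 10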
If the service_monitor_interval is set to zero, PBR tracking is disabled.
PBR tracking can also be enabled for the other Cisco APIC service graphs created for each Kubernetes external service by setting the following configuration in the acc-provision input file:
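A sketch, assuming the key is pbr_tracking_non_snat under the net_config section:

    net_config:
      # enable PBR tracking for the non-SNAT external service graphs (key name assumed)
      pbr_tracking_non_snat: true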
If enabled, the service_monitor_interval described earlier applies here as well.
Note that in a Cisco ACI CNI-based cluster, the same worker node is used to provide both the external Layer 4 load balancer and SNAT services. So if PBR tracking is enabled, and if the worker node reports unhealthy status for SNAT, a fault appears in the redirect policies associated with all other (non-SNAT) service graphs that have this node. However, this fault does not actually affect those other services and traffic from those services is still distributed to that node. The fault manifests for those other services only in the Cisco APIC GUI.
Note: The following are general usage guidelines that also apply to this release:
■ The Cisco ACI CNI Plug-in is supported with the following container solutions:
— Canonical Kubernetes on Ubuntu 18.04
— Red Hat OpenShift on Red Hat Enterprise Linux 7
■ You should be familiar with installing and using Kubernetes or OpenShift. Cisco ACI does not provide the Kubernetes or OpenShift installer. Refer to the following documents on Cisco.com for details:
— Cisco ACI and Kubernetes Integration
— Cisco ACI and OpenShift Integration
— Cisco ACI CNI Plugin for Red Hat OpenShift Container Platform Architecture and Design Guide
— Upgrading the Cisco ACI CNI Plug-in
■ The Cisco ACI CNI plug-in implements various functions running as containers inside pods. The released images for those containers for a given version are available on the Docker Hub website under user noiro. A copy of those container images and the RPM/DEB packages for support tools (acc-provision and acikubectl) are also published on the Software Download page on Cisco.com.
■ OpenShift has a tighter security model by default, and many off-the-shelf Kubernetes applications, such as guestbook, may not run on OpenShift (if, for example, they run as root or open privileged ports like 80). Refer to the article “Getting any Docker image running in your own OpenShift cluster” on the Red Hat OpenShift website for details.
■ The Cisco ACI CNI Plug-in is not aware of any proxy configuration on the OpenShift cluster or pods. Running OpenShift “oc new-app,” for instance, may require access to GitHub, and if the proxy settings on the OpenShift cluster are not correctly set, this access may fail. Ensure your proxy settings are correctly set.
■ In this release, the maximum supported number of PBR based external services is 250 virtual IP addresses (VIPs). Scalability is expected to increase in upcoming releases.
Note: With OpenShift, master nodes and router nodes are tainted by default, and you might see lower scale than with an upstream Kubernetes installation on the same hardware.
■ Some deployments require installation of an “allow” entry in iptables for IGMP. This entry must be added to all hosts running an OpFlex agent and using VXLAN encapsulation to the leaf. The rule must be added using the following command:
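One standard form of such a rule, accepting all incoming IGMP traffic with iptables, is:

    iptables -A INPUT -p igmp -j ACCEPT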
In order to make this change persistent across reboots, add the command either to /etc/rc.d/rc.local or to a cron job that runs after reboot.
■ Both RHEL and Ubuntu distributions set net.ipv4.igmp_max_memberships to 20 by default. This limits the number of endpoint groups (EPGs) that can be used in addition to the kube-default EPG for pod networking. If you anticipate using more than 20 EPGs, set the value to the desired number of EPGs on each node as follows:
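For example, using the standard sysctl tool on each node; the value 100 below is only illustrative, substitute the number of EPGs you plan to use:

    # apply immediately on the running node
    sysctl -w net.ipv4.igmp_max_memberships=100
    # persist the setting across reboots
    echo "net.ipv4.igmp_max_memberships=100" > /etc/sysctl.d/99-igmp.conf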
■ For the VMware VDS integration, you can reference the Enhanced Link Aggregation Group (eLAG) configured through the Cisco APIC by using the following configuration in the acc-provision input file:
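A sketch of this configuration; the elag_name key and its placement under aci_config/vmm_domain are assumptions, and the value is a placeholder for the eLAG policy name configured on the Cisco APIC:

    aci_config:
      vmm_domain:
        # name of the eLAG policy configured on the Cisco APIC (key name assumed)
        elag_name: <eLAG-policy-name>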
The Kubernetes, OpenShift, Cloud Foundry, and Pivotal Cloud Foundry Platform scale limits are shown in the following table:
Table 3: Supported Scale Limits
Limit Type | Maximum Supported
Nodes/Leaf (or OpFlex hosts per leaf) | 120 [1]
Nodes/interface on Leaf (or OpFlex hosts per port) | 20
VPC links/Leaf | 40
Endpoints/Leaf [2] | 10,000
Endpoints/Host | 400
Virtual endpoints/Leaf [3] | 40,000
[1] The indicated scale value is for Cisco ACI version 5.0(1) and later. If the Cisco ACI version is earlier than 5.0(1), the number of supported OpFlex hosts per leaf is 40.
[2] An endpoint corresponds to a Pod’s network interface.
[3] The total number of virtual endpoints on a leaf can be calculated as: virtual endpoints per leaf = VPCs x EPGs, where:
■ VPCs is the number of VPC links on the switch in the attachment profile used by the OpenStack Virtual Machine Manager (VMM).
■ EPGs is the number of EPGs provisioned for the OpenStack VMM.
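For example, a leaf whose attachment profile uses 40 VPC links and that serves 100 EPGs accounts for 40 x 100 = 4,000 virtual endpoints, well within the 40,000 virtual endpoints per leaf limit.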
For the CLI verified scalability limits, see the Cisco NX-OS Style Command-Line Interface Configuration Guide for this release.
This section contains lists of bugs and known behaviors.
Table 4: Open bugs in the 5.0(1) release
Bug ID | Description
CSCvu58070 | Incorrect PBR connector class ID for inter-VRF contract
Table 5: Resolved bugs in the 5.0(1) release
Bug ID | Description
| aci-container-controllers: Add error message if VRF config is inconsistent
| Add support for additional Local Subnets for SNAT
| SNAT is applied to all Namespaces
| SNAT for Services with Multiple ports does not work
| acc-provision: remove extern_static for OpenShift Flavor
This section lists known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the bug.
Table 6: Known Behaviors in the 5.0(1) release
Bug ID | Description
| Containers IP is shown as learned on all cluster interfaces.
The Cisco Application Policy Infrastructure Controller (APIC) documentation can be accessed from the following website:
https://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, and knowledge base (KB) articles, as well as other documentation. KB articles provide information about a specific use case or a specific topic.
By using the "Choose a topic" and "Choose a document type" fields of the Cisco APIC documentation website, you can narrow down the displayed documentation list to make it easier to find the desired document.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2020 Cisco Systems, Inc. All rights reserved.