Cisco ACI Multi-Site Orchestrator Release Notes, Release 2.2(2)
This document describes the features, issues, and limitations for the Cisco Application Centric Infrastructure (ACI) Multi-Site Orchestrator software.
Cisco ACI Multi-Site is an architecture that allows you to interconnect separate Cisco APIC cluster domains (fabrics), each representing a different availability zone, all part of the same region. This helps ensure multitenant Layer 2 and Layer 3 network connectivity across sites and extends the policy domain end-to-end across the entire system.
Cisco ACI Multi-Site Orchestrator is the intersite policy manager. It provides single-pane management that enables you to monitor the health of all the interconnected sites. It also allows you to centrally define the intersite policies that can then be pushed to the different Cisco APIC fabrics, which in turn deploy them on the physical switches that make up those fabrics. This provides a high degree of control over when and where to deploy those policies.
For more information, see Related Content.
Date |
Description |
December 2, 2020 |
Additional open issue CSCvw61549. |
July 20, 2020 |
Additional open issue CSCvu23330. |
April 23, 2020 |
Additional open issues CSCvs57670, CSCvs90027, CSCvt21657, CSCvs79493, CSCvs69260. |
April 19, 2020 |
Removed bug CSCvq19270 from open issues. The issue is resolved in all releases starting with Release 2.2(1c). Additional open issue CSCvt48924. |
March 2, 2020 |
Additional open issues CSCvs79515, CSCvs79493, CSCvs91386. |
December 16, 2019 |
Release 2.2(2d) became available. Additional open issue CSCvs42770 in Releases 2.2(2b) – 2.2(2c). |
November 4, 2019 |
Release 2.2(2c) became available. Additional open issue CSCvr85866 in Release 2.2(2b). |
October 10, 2019 |
Release 2.2(2b) became available. |
Cisco ACI Multi-Site, Release 2.2(2) supports the following new features.
Feature |
Description |
Docker custom overlay and bridge subnets |
When installing Cisco ACI Multi-Site Orchestrator, you can now provide custom subnet information for Docker containers. For more information, see the Cisco ACI Multi-Site Orchestrator Installation and Upgrade Guide, Release 2.2(x). |
There is no new hardware supported in this release.
The complete list of supported hardware is available in the Cisco ACI Multi-Site Hardware Requirements Guide.
If you are upgrading to this release, you will see the following changes in behavior:
■ Release 2.2(1) adds a GUI lockout timer for repeated failed login attempts. This is enabled by default when you upgrade to Release 2.2(1) or later and is set to 5 failed login attempts before a lockout; the lockout duration increases exponentially with each additional failed login attempt.
■ If you configure read-only user roles in Release 2.1(2) or later and then choose to downgrade your Multi-Site Orchestrator to an earlier version where the read-only roles are not supported:
— You will need to reconfigure your external authentication servers to the old attribute-value (AV) pair string format. For details, see the "Administrative Operations" chapter in the Cisco ACI Multi-Site Configuration Guide.
— The read-only roles will be removed from all users. This also means that any user that has only read-only roles will have no roles assigned to them, and a Power User or User Manager will need to assign them new read-write roles.
■ Starting with Release 2.1(2), the 'phone number' field is no longer mandatory when creating a new Multi-Site Orchestrator user. However, because the field was required in prior releases, any user created in Release 2.1(2) or later without a phone number provided will be unable to log into the GUI if the Orchestrator is downgraded to Release 2.1(1) or earlier. In this case, a Power User or User Manager will need to provide a phone number for the user.
■ If you are upgrading from any release prior to Release 2.1(1), the default password and the minimum password requirements for the Multi-Site Orchestrator GUI have been updated. The default password has been changed from "We1come!" to "We1come2msc!" and the new password requirements are:
— At least 12 characters
— At least 1 letter
— At least 1 number
— At least 1 special character apart from * and space
You will be prompted to reset your passwords when you:
— First install Release 2.2(x)
— Upgrade to Release 2.2(x) from a release prior to Release 2.1(1)
— Restore the Multi-Site Orchestrator configuration from a backup
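The new password requirements can be sketched as a simple check. This is an illustrative validator only, not Orchestrator code; the Orchestrator's actual validation logic may differ, and the handling of `*` and space (which do not satisfy the special-character requirement) is an interpretation of the rules above.

```python
import re

def meets_msc_password_rules(password: str) -> bool:
    """Illustrative check of the Release 2.1(1)+ GUI password rules."""
    if len(password) < 12:                      # at least 12 characters
        return False
    if not re.search(r"[A-Za-z]", password):    # at least 1 letter
        return False
    if not re.search(r"\d", password):          # at least 1 number
        return False
    # At least 1 special character, where '*' and space do not count.
    specials = [c for c in password if not c.isalnum() and c not in "* "]
    return len(specials) >= 1

print(meets_msc_password_rules("We1come2msc!"))  # new default password -> True
print(meets_msc_password_rules("We1come!"))      # old default password -> False
```

For example, the old default "We1come!" fails the 12-character minimum, while the new default "We1come2msc!" satisfies all four rules.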
■ Starting with Release 2.1(1), Multi-Site Orchestrator encrypts all stored passwords, such as each site’s APIC passwords and the external authentication provider passwords. As a result, if you downgrade to any release prior to Release 2.1(1), you will need to re-enter all the passwords after the Orchestrator downgrade is completed.
To update APIC passwords:
a. Log in to the Orchestrator after the downgrade.
b. From the main navigation menu, select Sites.
c. For each site, edit its properties and re-enter its APIC password.
To update external authentication passwords:
a. Log in to the Orchestrator after the downgrade.
b. From the navigation menu, select Admin > Providers.
c. For each authentication provider, edit its properties and re-enter its password.
This section lists the open issues. Click the bug ID to access the Bug Search Tool and see additional information about the issue. The "Exists In" column of the table specifies the relevant releases.
Bug ID |
Description |
Exists in |
Unable to deploy configurations with EPG or bridge domain (BD) network masks greater than /30. |
2.2(2b) |
|
If an EPG or BD is imported from APIC without a VRF reference in a particular template and then migrated to a different template, MSO throws the error: "Unable to find BD reference in the older template from which it is being migrated." |
2.2(2b) – 2.2(2c) |
|
In a Multi-Site environment, the instantiation of the service graph fails with fault F1690. |
2.2(2b) and later |
|
High CPU usage when deploying large schemas. |
2.2(2b) and later |
|
MSO sending Remote Address tty10 to ISE |
2.2(2b) and later |
|
When service graphs or devices are created on Cloud APIC by using the API and custom names are specified for AbsTermNodeProv and AbsTermNodeCons, a brownfield import to the Multi-Site Orchestrator will fail. |
2.2(2b) and later |
|
Contract is not created between shadow EPG and on-premises EPG when shared service is configured between Tenants. |
2.2(2b) and later |
|
After migrating an EPG object from Site X to Site Y, the VRF object referenced by the migrated object is not deleted from Site X. |
2.2(2b) and later |
|
The shadow of an external EPG's VRF is not updated. |
2.2(2b) and later |
|
Contract is not created between shadow EPG and on-premises EPG when shared service is configured between Tenants. |
2.2(2b) and later |
|
A template contains an external EPG, L3Out, and a VRF (CTX) and is stretched to two sites. If the VRF (CTX) reference is changed to a different VRF (CTX'), the old VRF (CTX) object is not deleted from the APIC sites. |
2.2(2b) and later |
|
Unsupported scenario: multiple on-premises VRFs having a contract with a single cloud VRF. For example, EPG-A in VRF-A and EPG-B in VRF-B have contract C-A with CLOUD_EPG-C in VRF-C. This configuration creates two shadow L3Outs (for VRF-A and VRF-B) in the on-premises APIC for the shadow InstP of CLOUD_EPG-C. When contract C-A is removed from EPG-B (VRF-B), the shadow L3Out for VRF-B is not deleted along with the shadow InstP of CLOUD_EPG-C. You may see the shadow L3Out for VRF-B along with the shadow InstP of CLOUD_EPG-C in the on-premises APIC, which can be confusing. |
2.2(2b) and later |
|
Downgrading from a release later than 2.1(1) to Release 2.0(1) or earlier fails with an error about the certificates location. After the downgrade, all services come up except the UI service, because the certificates are missing from the Docker swarm. |
2.2(2b) and later |
|
After updating a template to delete a VRF object from it and deploying the change, the VRF object is not deleted from the APIC. |
2.2(2b) and later |
|
After downgrading from a 2.2(x) MSO image to the 1.2(5b) MSO image, the MSO UI incorrectly shows templates as undeployed even when no configuration changes were applied after the downgrade. |
2.2(2b) and later |
|
After un-deploying and deleting a template that contains an external EPG, L3Out, and a VRF (CTX), you are unable to create and deploy a new template with the same external EPG, L3Out, and VRF (CTX) names. The following error is seen: "Bad Request: VRF: ctx<VRF name> is already deployed by schema: <schema name> - template: <template name> on site <site name>" |
2.2(2b) and later |
|
Undeploying a template from a Cloud site throws an error to delete the context profiles before deleting the VRF instance, which is being deleted as part of the template undeployment. Given that cloudCtxProfile configuration changes or deletion may not work after an import (if the names given to cloudCtxProfiles on Cloud APIC are not in the VRF-region format), brownfield import from Cloud APIC is not fully supported in the 4.2(1) release. |
2.2(2b) and later |
|
A stale contract entry may persist after the contract is deleted. |
2.2(2b) and later |
|
Inter-site shared service between VRF instances across different tenants will not work, unless the tenant is stretched explicitly to the cloud site with the correct provider credentials. That is, there will be no implicit tenant stretch by Multi-Site Orchestrator. |
2.2(2b) and later |
|
Undeploying and redeploying an external EPG from one site causes L3Out-to-EPG traffic disruption. |
2.2(2b) and later |
|
Updating TEP pool may cause a validation error. |
2.2(2b) and later |
|
If a template with an empty AP (a cloudApp without any cloudEPGs) is defined and then undeployed, the cloudApp is deleted. If other templates define the same AP name and have cloudEPGs, the cloudApp deletion also deletes all of the cloudEPGs defined in those templates. |
2.2(2b) and later |
|
After upgrading the Orchestrator from Release 1.2(5) to Release 2.2(2c), clicking "View Swagger Docs" in the Orchestrator GUI returns an HTTP 403 (not authorized) response for remote (RADIUS or TACACS) non-admin users. |
2.2(2b) and later |
|
When logging in via the Orchestrator GUI, the "What's New" dialog is always displayed even if the option to show the screen is turned off. This behavior is observed only for remote (RADIUS or TACACS) users. |
2.2(2b) and later |
|
When creating or editing a user, internal role IDs (for example, SITEMANAGER) are displayed instead of role labels (for example, "Site and Tenant Manager"). Additionally, role descriptions are missing. |
2.2(2b) and later |
|
The verification is always successful, even in scenarios where it is known that it should fail (for example, when an object created and still configured in the MSC GUI is deleted from the APIC GUI). In addition, the "Last verified" date does not change after triggering the verification. |
2.2(2b) and later |
|
If the VRF, BD, and EPGs in a preferred group (PG) are in different schemas and the objects are deployed in the order VRF, then BD, then EPGs, the VRF is not enabled for the preferred group. |
2.2(2b) and later |
|
A stale shadow EPG/BD from a local EPG in Site 1 may be present in Site 2. This may cause traffic disruption for subnets defined in the stale shadow EPG/BD. |
2.2(2b) and later |
|
Although setting the user preference to not display the "What's New" window on login generates the correct HTTP request and response messages, the window is still always displayed. |
2.2(2b) and later |
|
There is no capability to capture packet flow. |
2.2(2b) and later |
|
Unable to select the site local L3Out for a newly created BD from MSO. |
2.2(2b) and later |
This section lists the resolved issues. Click the bug ID to access the Bug Search tool and see additional information about the issue. The "Fixed In" column of the table specifies whether the bug was resolved in the base release or a patch release.
Bug ID |
Description |
Fixed in |
The capic-sync Docker images of older versions are now cleaned up at the end of the upgrade. If no images exist, you may see the following message:
Removing the capic-sync images failed in <node-no>
This message can be safely ignored. |
2.2(2b) |
|
During upgrade, the external disk space check passes but the internal script fails with the following error:
Aug 13 2019 19:20:13.438 INFO: Checking Disk space....
Aug 13 2019 19:20:13.443 CRITICAL: Not enough disk space available. Aborting..
|
2.2(2b) |
|
The following error is seen when deploying templates from the MSO GUI if a BD/EPG subnet is configured with a mask length of /31 or greater, for example: "Bad Request: For EPG EPG-1 Broadcast IP can not be used as Subnet IP 192.168.1.6/32". MSO throws this error for any BD/EPG subnet configured with a mask length greater than /30; mask lengths of /30 or less work fine. |
2.2(2c) |
|
If an EPG or BD is imported from APIC without a VRF reference in a particular template and then migrated to a different template, MSO throws the error: "Unable to find BD reference in the older template from which it is being migrated." |
2.2(2d) |
This section lists known behaviors. Click the Bug ID to access the Bug Search Tool and see additional information about the issue. The "Exists In" column of the table specifies the relevant releases.
Bug ID |
Description |
Unable to download the Multi-Site Orchestrator report and debug logs when both database and server logs are selected. |
|
Unicast traffic flow between Remote Leaf Site1 and Remote Leaf in Site2 may be enabled by default. This feature is not officially supported in this release. |
|
After downgrading to a release prior to Release 2.1(1), preferred group traffic continues to work even though the feature is not supported in those releases. You must disable the preferred group feature before downgrading to an earlier release. |
|
No validation is available for shared services scenarios. |
|
The upstream server may time out when audit log streaming is enabled. |
|
For Cisco ACI Multi-Site, fabric IDs must be the same for all sites, or the querier IP address must be higher on one site. The Cisco APIC fabric querier functions have a distributed architecture, where each leaf switch acts as a querier and packets are flooded; a copy is also replicated to the fabric port. An Access Control List (ACL) is configured on each TOR to drop a query packet coming from the fabric port when its source MAC address is the fabric MAC address, which is unique per fabric and derived from the fabric ID. The fabric ID is configured by users during the initial bringup of a pod site. In the Cisco ACI Multi-Site stretched BD with Layer 2 broadcast extension use case, the query packets from each TOR reach the other sites and should be dropped. If the fabric ID is configured differently on the sites, it is not possible to drop them. To avoid this, configure the same fabric ID on each site, or configure the querier IP address on one of the sites to be higher than on the other sites. |
|
STP and "Flood in Encapsulation" Option are not Supported with Cisco ACI Multi-Site. In Cisco ACI Multi-Site topologies, regardless of whether EPGs are stretched between sites or localized, STP packets do not reach remote sites. Similarly, the "Flood in Encapsulation" option is not supported across sites. In both cases, packets are encapsulated using an FD VNID (fab-encap) of the access VLAN on the ingress TOR. It is a known issue that there is no capability to translate these IDs on the remote sites. |
|
Proxy ARP is not supported in Cisco ACI Multi-Site Stretched BD without Flooding use case. Unknown Unicast Flooding and ARP Glean are not supported together in Cisco ACI Multi-Site across sites. |
|
If an infra L3Out that is being managed by Cisco ACI Multi-Site is modified locally in a Cisco APIC, Cisco ACI Multi-Site might delete the objects not managed by Cisco ACI Multi-Site in an L3Out. |
|
Downgrading from Release 2.1(1) to Release 2.0(2) may fail if the node runs out of space. |
|
"Phone Number" field is required in all releases prior to Release 2.2(1). Users with no phone number specified in Release 2.2(1) or later will not be able to log in to the GUI when Orchestrator is downgraded to a an earlier release. |
This section lists usage guidelines for the Cisco ACI Multi-Site software.
■ In Cisco ACI Multi-Site topologies, we recommend that first-hop redundancy protocols such as HSRP and VRRP not be stretched across sites.
■ HTTP requests are redirected to HTTPS; there is no HTTP support, either globally or on a per-user basis.
■ Up to 12 interconnected sites are supported.
■ Proxy ARP glean and unknown unicast flooding are not supported together.
Unknown Unicast Flooding and ARP Glean are not supported together in Cisco ACI Multi-Site across sites.
■ Flood in encapsulation is not supported for EPGs and Bridge Domains that are extended across ACI fabrics that are part of the same Multi-Site domain. However, flood in encapsulation is fully supported for EPGs or Bridge Domains that are locally defined in ACI fabrics, even if those fabrics may be configured for Multi-Site.
■ The leaf and spine nodes that are part of an ACI fabric do not run Spanning Tree Protocol (STP). STP frames originated from external devices can be forwarded across an ACI fabric (both single Pod and Multi-Pod), but are not forwarded across the inter-site network between sites, even if stretching a BD with BUM traffic enabled.
■ GOLF L3Outs for each tenant must be dedicated, not shared.
The inter-site L3Out functionality introduced in MSO Release 2.2(1) does not apply when deploying GOLF L3Outs. This means that for a given VRF there is still the requirement of deploying at least one GOLF L3Out per site in order to enable north-south communication. An endpoint connected in a site cannot communicate with resources reachable via a GOLF L3Out connection deployed in a different site.
■ While you can create the L3Out objects in the Multi-Site Orchestrator GUI, the physical L3Out configuration (logical nodes, logical interfaces, and so on) must be done directly in each site's APIC.
■ VMM and physical domains must be configured in the Cisco APIC GUI at the site and will be imported and associated within the Cisco ACI Multi-Site.
Although domains (VMM and physical) must be configured in Cisco APIC, domain associations can be configured in the Cisco APIC or Cisco ACI Multi-Site.
■ Some VMM domain options must be configured in the Cisco APIC GUI.
The following VMM domain options must be configured in the Cisco APIC GUI at the site:
— NetFlow/EPG CoS marking in a VMM domain association
— Encapsulation mode for an AVS VMM domain
■ Some uSeg EPG attribute options must be configured in the Cisco APIC GUI.
The following uSeg EPG attribute options must be configured in the Cisco APIC GUI at the site:
— Sub-criteria under uSeg attributes
— match-all and match-any criteria under uSeg attributes
■ Site IDs must be unique.
In Cisco ACI Multi-Site, site IDs must be unique.
■ To change a Cisco APIC fabric ID, you must erase and reconfigure the fabric.
Cisco APIC fabric IDs cannot be changed. To change a Cisco APIC fabric ID, you must erase the fabric configuration and reconfigure it.
However, Cisco ACI Multi-Site supports connecting multiple fabrics with the same fabric ID.
■ Caution: When removing a spine switch port from the Cisco ACI Multi-Site infrastructure, perform the following steps:
a. Click Sites.
b. Click Configure Infra.
c. Click the site where the spine switch is located.
d. Click the spine switch.
e. Click the x on the port details.
f. Click Apply.
■ Shared services use case: order of importing tenant policies
When deploying a provider site group and a consumer site group for shared services by importing tenant policies, deploy the provider tenant policies before deploying the consumer tenant policies. This enables the relation of the consumer tenant to the provider tenant to be properly formed.
■ Caution for shared services use case when importing a tenant and stretching it to other sites
When you import the policies for a consumer tenant and deploy them to multiple sites, including the site where they originated, a new contract is deployed with the same name (different because it is modified by the inter-site relation). To avoid confusion, delete the original contract with the same name on the local site. In the Cisco APIC GUI, the original contract can be distinguished from the contract that is managed by Cisco ACI Multi-Site, because it is not marked with a cloud icon.
■ When a contract is established between EPGs in different sites, each EPG and its bridge domain (BD) are mirrored to and appear to be deployed in the other site, while only being actually deployed in its own site. These mirrored objects are known as "shadow” EPGs and BDs.
For example, if one EPG in Site 1 and another EPG in Site 2 have a contract between them, in the Cisco APIC GUI at Site 1 and Site 2, you will see both EPGs. They appear with the same names as the ones that were deployed directly to each site. This is expected behavior and the shadow objects must not be removed.
For more information, see the Schema Management chapter in the Cisco ACI Multi-Site Configuration Guide.
■ Inter-site traffic cannot transit sites.
Inter-site traffic cannot transit an intermediate site on the way to another site. For example, when Site 1 routes traffic to Site 3, it cannot be forwarded through Site 2.
■ The ? icon in Cisco ACI Multi-Site opens the menu for Show Me How modules, which provide step-by-step help through specific configurations.
— If you deviate from the steps while a Show Me How module is in progress, you will no longer be able to continue.
— You must have IPv4 enabled to use the Show Me How modules.
■ User passwords must meet the following criteria:
— Minimum length is 8 characters
— Maximum length is 64 characters
— Fewer than three consecutive repeated characters
— At least three of the following character types: lowercase, uppercase, digit, symbol
— Cannot be easily guessed
— Cannot be the username or the reverse of the username
— Cannot be any variation of "cisco", "isco", or any permutation of these characters or variants obtained by changing the capitalization of letters therein
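The criteria above can be sketched as a validator. This is an illustrative interpretation only, not Orchestrator code: the "cannot be easily guessed" rule is not implemented here, and the "cisco" variation rule is simplified to a case-insensitive substring check, so the actual validation may be stricter.

```python
import re

def valid_user_password(password: str, username: str) -> bool:
    """Simplified sketch of the documented user password criteria."""
    if not 8 <= len(password) <= 64:             # 8 to 64 characters
        return False
    if re.search(r"(.)\1\1", password):          # 3+ consecutive repeated characters
        return False
    classes = sum([
        bool(re.search(r"[a-z]", password)),
        bool(re.search(r"[A-Z]", password)),
        bool(re.search(r"\d", password)),
        bool(re.search(r"[^A-Za-z0-9]", password)),
    ])
    if classes < 3:                              # need at least 3 of the 4 types
        return False
    if password.lower() in (username.lower(), username.lower()[::-1]):
        return False                             # username or its reverse
    if "cisco" in password.lower() or "isco" in password.lower():
        return False                             # simplified "cisco"/"isco" check
    return True

print(valid_user_password("Str0ng-Pass", "admin"))  # -> True
print(valid_user_password("Cisco123!", "admin"))    # -> False
```

For example, "aaa1B!xy" is rejected for the triple repeated character, and "abcdefg1" is rejected because it uses only two character types.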
■ If you are associating a contract with the external EPG, as provider, choose contracts only from the tenant associated with the external EPG. Do not choose contracts from other tenants. If you are associating the contract to the external EPG, as consumer, you can choose any available contract.
■ Policy objects deployed from ACI Multi-Site software should not be modified or deleted from any site-APIC. If any such operation is performed, schemas have to be re-deployed from ACI Multi-Site software.
■ The Rogue Endpoint feature can be used within each site of an ACI Multi-Site deployment to help with misconfigurations of servers that cause an endpoint to move within the site. The Rogue Endpoint feature is not designed for scenarios where the endpoint may move between sites.
This release supports the hardware listed in the Cisco ACI Multi-Site Hardware Requirements Guide.
Multi-Site Orchestrator releases have been decoupled from the APIC releases. The APIC clusters in each site as well as the Orchestrator itself can now be upgraded independently of each other and run in mixed operation mode. For more information, see the Interoperability Support section in the “Infrastructure Management” chapter of the Cisco ACI Multi-Site Configuration Guide.
For the verified scalability limits, see the Cisco ACI Verified Scalability Guide.
See the Cisco Application Policy Infrastructure Controller (APIC) page for ACI Multi-Site documentation. On that page, you can use the "Choose a topic" and "Choose a document type" fields to narrow down the displayed documentation list and find a desired document.
The documentation includes installation, upgrade, configuration, programming, and troubleshooting guides, technical references, release notes, knowledge base (KB) articles, and videos. KB articles provide information about specific use cases or topics. The following tables describe the core Cisco Application Centric Infrastructure Multi-Site documentation.
Document |
Description |
This document. Provides release information for the Cisco ACI Multi-Site Orchestrator product. |
|
Provides basic concepts and capabilities of the Cisco ACI Multi-Site. |
|
Provides the hardware requirements and compatibility. |
|
Describes how to install Cisco ACI Multi-Site Orchestrator and perform day-0 operations. |
|
Describes Cisco ACI Multi-Site configuration options and procedures. |
|
Describes how to use the Cisco ACI Multi-Site REST APIs. |
|
Describes how to troubleshoot common Cisco ACI Multi-Site operational issues. |
|
Contains the maximum verified scalability limits for Cisco Application Centric Infrastructure (Cisco ACI), including Cisco ACI Multi-Site. |
|
Contains videos that demonstrate how to perform specific tasks in Cisco ACI Multi-Site. |
To provide technical feedback on this document, or to report an error or omission, send your comments to apic-docfeedback@cisco.com. We appreciate your feedback.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.
© 2019 Cisco Systems, Inc. All rights reserved.