Cisco ACI GOLF
The Cisco ACI GOLF feature (also known as Layer 3 EVPN Services for Fabric WAN) enables much more efficient and scalable ACI fabric WAN connectivity. It uses the BGP EVPN protocol over OSPF for WAN routers that are connected to spine switches.

All tenant WAN connections use a single session on the spine switches where the WAN routers are connected. This aggregation of tenant BGP sessions towards the Data Center Interconnect Gateway (DCIG) improves control plane scale by reducing the number of tenant BGP sessions and the amount of configuration required for all of them. The network is extended out using Layer 3 subinterfaces configured on spine fabric ports. Transit routing with shared services using GOLF is not supported.
A Layer 3 external outside network (`l3extOut`) for GOLF physical connectivity for a spine switch is specified under the `infra` tenant, and includes the following:
- `LNodeP` (an `l3extInstP` is not required within the L3Out in the `infra` tenant)
- A provider label for the `l3extOut` for GOLF in the `infra` tenant
- OSPF protocol policies
- BGP protocol policies
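As an illustration of the objects listed above, the following is a minimal sketch that assembles an infra-tenant GOLF `l3extOut` payload with Python's standard library. The class names (`l3extOut`, `l3extProvLbl`, `ospfExtP`, `bgpExtP`, `l3extLNodeP`) follow the ACI object model referenced in this section, but the exact attribute set varies by release, so treat this as a template rather than a complete configuration:

```python
# Sketch: build the infra-tenant GOLF L3Out payload described above.
# Attribute names beyond "name" are assumptions; verify against your
# APIC release's object model before posting.
import xml.etree.ElementTree as ET

def build_infra_golf_l3out(name: str, provider_label: str, ospf_area: str) -> str:
    """Return an XML payload for a GOLF L3Out under the infra tenant."""
    l3out = ET.Element("l3extOut", name=name)
    # Provider label that tenant L3Outs will match with an l3extConsLbl.
    ET.SubElement(l3out, "l3extProvLbl", name=provider_label)
    # OSPF and BGP protocol policies enabled on this L3Out.
    ET.SubElement(l3out, "ospfExtP", areaId=ospf_area)
    ET.SubElement(l3out, "bgpExtP")
    # Logical node profile for the spine switches
    # (no l3extInstP is needed in the infra tenant).
    ET.SubElement(l3out, "l3extLNodeP", name=f"{name}_nodeProfile")
    return ET.tostring(l3out, encoding="unicode")

payload = build_infra_golf_l3out("golf-wan", "golf-prov1", "0.0.0.1")
print(payload)
```

A payload like this would typically be posted to the APIC REST API under the `infra` tenant's `uni` tree.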
All regular tenants use the above-defined physical connectivity. The `l3extOut` defined in regular tenants requires the following:
- An `l3extInstP` (EPG) with subnets and contracts. The scope of the subnet is used to control import/export route control and security policies. The bridge domain subnet must be set to advertise externally, and it must be in the same VRF as the application EPG and the GOLF L3Out EPG.
- Communication between the application EPG and the GOLF L3Out EPG is governed by explicit contracts (not Contract Preferred Groups).
- An `l3extConsLbl` consumer label that must match the provider label of an `l3extOut` for GOLF in the `infra` tenant. Label matching enables application EPGs in other tenants to consume the `LNodeP` external `l3extOut` EPG.
- The BGP EVPN session in the matching provider `l3extOut` in the `infra` tenant advertises the tenant routes defined in this `l3extOut`.
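The tenant-side counterpart can be sketched the same way. This minimal example builds a tenant `l3extOut` carrying the `l3extConsLbl` (whose name must match the infra-tenant provider label) and an `l3extInstP` external EPG with a scoped subnet; the `scope` value shown is one common combination, and your required scope depends on the route control and security policy you need:

```python
# Sketch: tenant GOLF L3Out with a consumer label and external EPG.
# The scope string is an example; adjust to your import/export policy.
import xml.etree.ElementTree as ET

def build_tenant_golf_l3out(name: str, consumer_label: str, subnet_ip: str) -> str:
    """Return an XML payload for a tenant L3Out that consumes a GOLF provider label."""
    l3out = ET.Element("l3extOut", name=name)
    # Consumer label: its name must match the provider label defined on the
    # GOLF l3extOut in the infra tenant.
    ET.SubElement(l3out, "l3extConsLbl", name=consumer_label)
    # External EPG (l3extInstP) with a subnet; subnet scope controls
    # import/export route control and security policy.
    epg = ET.SubElement(l3out, "l3extInstP", name=f"{name}_instP")
    ET.SubElement(epg, "l3extSubnet", ip=subnet_ip,
                  scope="import-security,export-rtctrl")
    return ET.tostring(l3out, encoding="unicode")

payload = build_tenant_golf_l3out("tenant-golf", "golf-prov1", "10.0.0.0/8")
print(payload)
```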
Guidelines and Limitations
Observe the following GOLF guidelines and limitations:
- GOLF routers must advertise at least one route to Cisco ACI in order to accept traffic. No tunnel is created between leaf switches and the external routers until Cisco ACI receives a route from the external routers.
- All Cisco Nexus 9000 Series ACI-mode switches and all Cisco Nexus 9500 platform ACI-mode switch line cards and fabric modules support GOLF. With Cisco APIC release 3.1(x) and later, this includes the N9K-C9364C switch.
- At this time, only a single GOLF provider policy can be deployed on spine switch interfaces for the whole fabric.
- Up to APIC release 2.0(2), GOLF is not supported with Multi-Pod. In release 2.0(2), the two features are supported in the same fabric only over Cisco Nexus 9000 switches without "EX" at the end of the switch name; for example, N9K-9312TX. Since the 2.1(1) release, the two features can be deployed together over all the switches used in the Multi-Pod and EVPN topologies.
- When configuring GOLF on a spine switch, wait for the control plane to converge before configuring GOLF on another spine switch.
- A spine switch can be added to multiple provider GOLF outside networks (GOLF L3Outs), but the provider labels have to be different for each GOLF L3Out. Also, in this case, the OSPF area has to be different on each of the `l3extOut`s and use different loopback addresses.
- The BGP EVPN session in the matching provider `l3extOut` in the `infra` tenant advertises the tenant routes defined in this `l3extOut`.
- When deploying three GOLF Outs, if only one has a provider/consumer label for GOLF and 0/0 export aggregation, the APIC exports all routes. This is the same behavior as an existing `l3extOut` on leaf switches for tenants.
- If you have an ERSPAN session that has a SPAN destination in a VRF instance, the VRF instance has GOLF enabled, and the ERSPAN source has interfaces on a spine switch, the transit prefix is sent from a non-GOLF L3Out to the GOLF router with the wrong BGP next hop.
- If there is direct peering between a spine switch and a data center interconnect (DCI) router, the transit routes from leaf switches to the ASR have the PTEP of the leaf switch as the next hop. In this case, define a static route on the ASR for the TEP range of that ACI pod. Also, if the DCI is dual-homed to the same pod, the precedence (administrative distance) of the static route should be the same as that of the route received through the other link.
- The default `bgpPeerPfxPol` policy restricts routes to 20,000. For ACI WAN Interconnect peers, increase this value as needed.
- Consider a deployment scenario with two `l3extOut`s on one spine switch: one has the provider label `prov1` and peers with DCI 1, and the second has the provider label `prov2` and peers with DCI 2. If the tenant VRF has a consumer label pointing to either one of the provider labels (`prov1` or `prov2`), the tenant route is sent out both DCI 1 and DCI 2.
- When aggregating GOLF OpFlex VRFs, route leaking between the GOLF OpFlex VRF and any other VRF in the system cannot occur in the ACI fabric or on the GOLF device. An external device (not the GOLF router) must be used for the VRF leaking.
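For the `bgpPeerPfxPol` guideline above, a raised prefix limit can be sketched as a small payload in the same style. The attribute names `maxPfx` and `action` are assumptions drawn from the ACI object model and may differ by release; verify them before use:

```python
# Sketch: a BGP peer prefix policy raising the default 20,000-route limit.
# maxPfx/action attribute names are assumptions; check your APIC release.
import xml.etree.ElementTree as ET

def build_peer_prefix_policy(name: str, max_prefixes: int) -> str:
    """Return an XML payload for a bgpPeerPfxPol with a custom prefix limit."""
    pol = ET.Element("bgpPeerPfxPol", name=name,
                     maxPfx=str(max_prefixes), action="reject")
    return ET.tostring(pol, encoding="unicode")

print(build_peer_prefix_policy("golf-pfx", 50000))
```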
Note: Cisco ACI does not support IP fragmentation. Therefore, when you configure Layer 3 Outside (L3Out) connections to external routers, or Multi-Pod connections through an Inter-Pod Network (IPN), it is recommended that the interface MTU is set appropriately on both ends of a link. On some platforms, such as Cisco ACI, Cisco NX-OS, and Cisco IOS, the configurable MTU value does not take the Ethernet headers into account (it matches the IP MTU and excludes the 14-18 byte Ethernet header), while other platforms, such as Cisco IOS-XR, include the Ethernet header in the configured MTU value. A configured value of 9000 results in a maximum IP packet size of 9000 bytes on Cisco ACI, Cisco NX-OS, and Cisco IOS, but a maximum IP packet size of 8986 bytes on an IOS-XR untagged interface. For the appropriate MTU values for each platform, see the relevant configuration guides. We highly recommend that you test the MTU using CLI-based commands; for example, on the Cisco NX-OS CLI, use a `ping` command with the don't-fragment bit set and the desired packet size.
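The MTU arithmetic in the note can be made explicit with a short check. This sketch models only the two accounting conventions the note describes (header excluded from the configured value versus header included); the header sizes are standard Ethernet figures, not values taken from any specific platform guide:

```python
# Sketch: maximum IP packet size for a configured interface MTU under the
# two MTU accounting conventions described in the note above.
ETHERNET_HEADER = 14   # untagged Ethernet II header, bytes
DOT1Q_TAG = 4          # optional 802.1Q VLAN tag, bytes

def max_ip_packet(configured_mtu: int, header_in_mtu: bool,
                  tagged: bool = False) -> int:
    """Maximum IP packet size for a given configured interface MTU.

    header_in_mtu=False models Cisco ACI/NX-OS/IOS, where the configured
    MTU equals the IP MTU. header_in_mtu=True models IOS-XR, where the
    Ethernet header counts against the configured value.
    """
    if not header_in_mtu:
        return configured_mtu
    return configured_mtu - ETHERNET_HEADER - (DOT1Q_TAG if tagged else 0)

# The figures from the note: a configured MTU of 9000 yields...
print(max_ip_packet(9000, header_in_mtu=False))  # 9000 (ACI / NX-OS / IOS)
print(max_ip_packet(9000, header_in_mtu=True))   # 8986 (IOS-XR, untagged)
```

Matching the IP MTU on both ends of a link (for example, configuring 9014 on IOS-XR to pair with 9000 on NX-OS) avoids drops of full-size packets, since ACI cannot fragment them.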