Overview

This guide contains the maximum verified scalability limits for Cisco Application Centric Infrastructure (Cisco ACI) parameters in the following releases:

  • Cisco Application Policy Infrastructure Controller (Cisco APIC), releases 5.1(3) and 5.1(4)

  • Cisco Nexus 9000 Series ACI-Mode Switches, releases 15.1(3) and 15.1(4)

These values are based on a profile where each feature was scaled to the numbers specified in the tables. These numbers do not represent the theoretically possible Cisco ACI fabric scale.


Note


The verified scalability limits for Cisco Multi-Site previously included as part of this guide are now listed in the Cisco Nexus Dashboard Orchestrator (NDO) release-specific documents available at the following URL: https://www.cisco.com/c/en/us/support/cloud-systems-management/multi-site-orchestrator/products-device-support-tables-list.html.


New and Changed Information

The following changes have been made to this document since initial release:

Date

Changes

February 2, 2023

Updated "Number of EIGRP neighbors" scale.

May 11, 2022

Added dual-stack scale for "Number of L3 Outs".

April 29, 2022

Updated "Number of External Route Reflectors between Pods" scale.

Updated "Number of External EPGs" and "Number of External EPGs per L3 Out" scale with examples for clarity.

April 21, 2022

Added "DHCP relay addresses per BD across all labels" scale.

March 25, 2022

Updated "PTP Scalability Limits" section.

March 15, 2022

Updated "Maximum number of Data Plane policers at the interface level" scale numbers.

February 25, 2022

Updated "Number of source EPGs in tenant SPAN sessions" if both Access and Tenant SPAN are configured.

December 17, 2021

Added NetFlow scale numbers.

February 1, 2021

First release of this document.

General Scalability Limits

  • L2 Fabric: L2 Fabric in this document refers to an ACI fabric that contains only BDs in Scaled L2 Only mode (formerly known as Legacy mode). See Bridging > Bridge Domain Options > Scaled L2 Only Mode - Legacy Mode in the APIC Layer 2 Configuration Guide for details about Scaled L2 Only mode.

  • L3 Fabric: The ACI L3 fabric solution provides a feature-rich, highly scalable solution for public cloud and large enterprise deployments. With this design, almost all supported features are deployed at the same time and are tested as a solution. The scalability numbers listed in this section are multi-dimensional scalability numbers. The fabric scalability numbers represent the overall number of objects created on the fabric. The per-leaf scale numbers are the objects created and present on an individual leaf switch. The fabric-level scalability numbers represent APIC cluster scalability and the tested upper limits. Some of the per-leaf scalability numbers are subject to hardware restrictions. The per-leaf scalability numbers are the maximum limits tested and supported by leaf switch hardware; this does not necessarily mean that every leaf switch in the fabric was tested with maximum scale numbers.

  • Stretched Fabric: Stretched fabric allows multiple fabrics (up to 3) distributed across multiple locations to be connected as a single fabric with a single management domain. The scale for the entire stretched fabric remains the same as for a single-site fabric. For example, an L3 stretched fabric supports up to 400 leaf switches total, which is the maximum number of leaf switches supported on a single-site fabric. Parameters relevant only to stretched fabric are mentioned in the tables below.

  • Multi-Pod: Multi-Pod enables provisioning a more fault-tolerant fabric composed of multiple Pods with isolated control plane protocols. Multi-Pod also provides more flexibility with regard to the full mesh cabling between leaf and spine switches. For example, if leaf switches are spread across different floors or different buildings, Multi-Pod enables provisioning multiple Pods per floor or building and providing connectivity between Pods through spine switches.

    Multi-Pod uses a single APIC cluster for all the Pods; all the Pods act as a single fabric. Individual APIC controllers are placed across the Pods but they are all part of a single APIC cluster.

  • Multi-Site: Multi-Site is the architecture interconnecting and extending the policy domain across multiple APIC cluster domains. As such, Multi-Site could also be called Multi-Fabric, since it interconnects separate availability zones (fabrics), each managed by an independent APIC controller cluster. A Cisco Nexus Dashboard Orchestrator (NDO) is part of the architecture and is used to communicate with the different APIC domains to simplify the management of the architecture and the definition of inter-site policies.

Leaf Switches and Ports

The maximum number of leaf switches is 400 per Pod and 500 total in a Multi-Pod fabric. The maximum number of physical ports is 24,000 per fabric. The maximum number of remote leaf (RL) switches is 128 per fabric, with the total number of BDs deployed on all remote leaf switches in the fabric not exceeding 60,000. The total number of BDs across all RLs is counted as the sum of the BDs deployed on each RL.

If the Remote Leaf Pod Redundancy policy is enabled, we recommend that you disable the Pre-emption flag in the APIC for all scaled-up RL deployments. In other words, wait for BGP CPU utilization to fall below 50% on all spine switches before you initiate pre-emption.

Breakout Ports

The N9K-C9336C-FX2 switch supports up to 34 breakout ports; both 10G and 25G breakout modes are supported.

General Scalability Limits

Table 1. General Scalability Limits for L3 Fabrics

Configurable Options

L3 Fabric

Large L3 Fabric

Number of APIC controllers

Note

 

* denotes preferred cluster size.

While a higher number of controllers is supported, the preferred cluster size is based on the number of leaf switches in the environment.

3* or 4 node APIC cluster

5*, 6, or 7 node APIC cluster

Number of leaf switches

80 for 3-node cluster

200 for 4-node cluster

300 for 5- or 6-node cluster

500 for 7-node cluster

Number of tier-2 leaf switches per Pod in Multi-Tier topology

Note

 

The total number of leaf switches from all tiers should not exceed the "Number of leaf switches" listed above

80 for 3-node cluster

100 for 4-node cluster

100

Number of spine switches

Maximum spines per Pod: 6.

Total spines per fabric: 24.

Maximum spines per Pod: 6.

Total spines per fabric: 24.

Number of FEXs

20 FEXs per leaf switch

576 ports per leaf switch

650 FEXs per fabric

20 FEXs per leaf switch

576 ports per leaf switch

650 FEXs per fabric

Number of tenants

1,000

3,000

Number of Layer 3 (L3) contexts (VRFs)

1,000

3,000

Number of contracts/filters

  • 10,000 contracts

  • 10,000 filters

  • 10,000 contracts

  • 10,000 filters

Number of endpoint groups (EPGs)

For a fabric with a single Tenant: 4,000

For a fabric with multiple Tenants: 500 per Tenant, up to 15,000 total across all Tenants

For a fabric with a single Tenant: 4,000

For a fabric with multiple Tenants: 500 per Tenant, up to 15,000 total across all Tenants

Number of Isolation enabled EPGs

400

400

Number of bridge domains (BDs)

15,000

15,000

Number of OSPF sessions + EIGRP (for external connection)

3,000

3,000

Number of Multicast routes

32,000

32,000

Number of Multicast routes per VRF

32,000

32,000

Number of static routes to a single SVI/VRF

5,000

5,000

Number of static routes on a single leaf switch

10,000

10,000

Number of vCenters

  • 200 VDS

  • 50 AVS

  • 50 Cisco ACI Virtual Edge

  • 200 VDS

  • 50 AVS

  • 50 Cisco ACI Virtual Edge

Number of Service Chains

1000

1000

Number of L4 - L7 devices

30 managed or 50 unmanaged physical HA pairs, 1,200 virtual HA pairs (1,200 maximum per fabric)

30 managed or 50 unmanaged physical HA pairs, 1,200 virtual HA pairs (1,200 maximum per fabric)

Number of ESXi hosts - VDS

3,200

3,200

Number of ESXi hosts - AVS

3,200 (Only 1 AVS instance per host)

3,200 (Only 1 AVS instance per host)

Number of ESXi hosts - AVE

3,200 (Only 1 AVE instance per host)

3,200 (Only 1 AVE instance per host)

Number of VMs

Depends upon server scale

Depends upon server scale

Number of configuration zones per fabric

30

30

Number of BFD sessions per leaf switch

256

Minimum BFD timer required to support this scale:

  • minTx:50

  • minRx:50

  • multiplier:3

256

Minimum BFD timer required to support this scale:

  • minTx:50

  • minRx:50

  • multiplier:3

Multi-Pod

Note

 

* denotes preferred cluster size.

  • 3* or 4 node APIC cluster

  • 6 Pods

  • 80 for 3-node cluster

    200 for 4-node cluster

  • 5* or 6 node APIC cluster, 6 Pods, 200 leaf switches max per Pod, 300 leaf switches max overall

  • 7 node APIC cluster, 12 Pods, 400 leaf switches max per Pod, 500 leaf switches max overall

L3 EVPN Services over Fabric WAN - GOLF (with and without OpFlex)

1,000 VRFs, 60,000 routes in a fabric

1,000 VRFs, 60,000 routes in a fabric

Layer 3 Multicast routes

32,000

32,000

Number of Routes in Overlay-1 VRF

1,000

1,000

Table 2. General Scalability Limits for L2 Fabrics

Configurable Options

L2 Fabric Scale

Number of APIC controllers

Note

 

* denotes preferred cluster size.

While a higher number of controllers is supported, the preferred cluster size is based on the number of leaf switches in the environment.

3* or 4 node APIC cluster

Number of leaf switches

80

Number of tier-2 leaf switches per Pod in Multi-Tier topology

80

Number of spine switches per fabric

24

Number of FEXs

20 FEXs per leaf switch

576 ports per leaf switch

650 FEXs per fabric

Number of tenants

1,000

Number of endpoint groups (EPGs)

For a fabric with a single Tenant: 4,000

For a fabric with multiple Tenants: 500 per Tenant, up to 21,000 total across all Tenants

Number of bridge domains (BDs)

21,000

Number of configuration zones per fabric

30

Number of Pods in Multi-Pod

6

Number of Routes in Overlay-1 VRF

1,000

Multiple Fabric Options Scalability Limits

Stretched Fabric

Configurable Options

Per Fabric Scale

Maximum number of fabrics that can be a stretched fabric

3

Maximum number of Route Reflectors

6

Multi-Pod

Configurable Options

Per Fabric Scale

Maximum number of Pods

12

Maximum number of leaf switches per Pod

400

Maximum number of leaf switches overall

500

Maximum number of Route Reflectors for L3Out

24

Number of External Route Reflectors between Pods

  • For 1-3 Pods: Up to 3 external route reflectors

    We recommend full mesh for external BGP peers instead of using external route reflectors when possible

  • For 4 or more Pods: Up to 4 external route reflectors

    We recommend using external route reflectors instead of full mesh

    We recommend that the external route reflectors are distributed across Pods so that in case of any failure there are always at least two Pods with external route reflectors still reachable

Cisco ACI vPod Scalability Limits

Cisco ACI vPod Scalability Limits

Configurable Options

Scale

Number of vPods

6

Number of Cisco ACI Virtual Edge (AVE) instances per vPod

32

Number of Virtual Ethernet Ports (vEThs) per AVE in vPod

32

Number of EPGs per vPod

256

Number of EPGs across all vPods

864

Number of EPGs across all physical and virtual pods

15,000

Number of filters per ACI Virtual Edge

128

Number of contracts per ACI Virtual Edge

*The total number of filters used by all contracts must not exceed the filter limit above

36

Cisco Multi-Site Scalability Limits

Cisco Nexus Dashboard Orchestrator (NDO) does not require a specific version of APIC to be running in all sites. The APIC clusters in each site, as well as the NDO itself, can be upgraded independently of each other and run in mixed operation mode as long as each fabric is running Cisco APIC release 3.2(6) or later.

As such, the verified scalability limits for your specific Cisco Nexus Dashboard Orchestrator release are now available at the following URL: https://www.cisco.com/c/en/us/support/cloud-systems-management/multi-site-orchestrator/products-device-support-tables-list.html.


Note


Each site managed by the Cisco Nexus Dashboard Orchestrator must still adhere to the scalability limits specific to that site's APIC Release. For a complete list of all Verified Scalability Guides, see https://www.cisco.com/c/en/us/support/cloud-systems-management/application-policy-infrastructure-controller-apic/tsd-products-support-series-home.html#Verified_Scalability_Guides


Fabric Topology, SPAN, Tenants, Contexts (VRFs), External EPGs, Bridge Domains, Endpoints, and Contracts Scalability Limits

The following table shows the mapping of the "ALE/LSE Type" to the corresponding ToR switches. This information helps determine which ToR switch is affected when the terms ALE v1, ALE v2, LSE, or LSE2 are used in the remaining sections.


Note


In the following table, the N9K-C9336C-FX2 and N9K-C93360YC-FX2 switches are listed as LSE for scalability limits purposes only; the switches support LSE2 platform features. Consult specific feature documentation for the full list of supported devices.


ALE/LSE Type

ACI-Supported ToR switches

ALE v2

  • N9K-C9396TX + N9K-M6PQ

  • N9K-C93128TX + N9K-M6PQ

  • N9K-C9396PX + N9K-M6PQ

  • N9K-C9372TX 64K

  • N9K-C9332PQ

  • N9K-C9372PX

LSE

  • N9K-C93108TC-EX

  • N9K-C93180YC-EX

  • N9K-C93180LC-EX

  • N9K-C9336C-FX2

  • N9K-C93216TC-FX2

  • N9K-C93240YC-FX2

  • N9K-C93360YC-FX2

LSE2

  • N9K-C93108TC-FX

  • N9K-C93180YC-FX

  • N9K-C9348GC-FXP

  • N9K-C93600CD-GX

  • N9K-C9364C-GX

  • N9K-C93180YC-FX3

  • N9K-C93108TC-FX3P


Note


  • Unless explicitly called out, LSE represents both LSE and LSE2 and ALE represents both ALE v1 and ALE v2 in the rest of this document.

  • The High Policy profile in LSE2 switches listed in the following sections is supported only on Cisco Nexus N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM.

  • High IPv4 EP Scale—This profile is recommended to be used only for the ACI border leaf (BL) switches in Multi-Domain (ACI-SDA) Integration. It provides enhanced IPv4 EP and LPM scales specifically for these BLs and has specific hardware requirements.


Fabric Topology

Configurable Options

Per Leaf Scale

Per Fabric Scale

Number of PCs, vPCs

320 (with FEX HIF)

N/A

Number of encapsulations per access port, PC, vPC (non-FEX HIF)

3,000

N/A

Number of encapsulations per FEX HIF, PC, vPC

20

N/A

Number of member links per PC, vPC*

*vPC total ports = 32, 16 per leaf

16

N/A

Number of ports x VLANs (global scope and no FEX HIF)

64,000

168,000 (when using legacy BD mode)

N/A

Number of ports x VLANs (FEX HIFs and/or local scope)

ALE v2: 9,000

LSE and LSE2: 10,000

N/A

Number of static port bindings

ALE v2: 30,000

For LSE and LSE2: 60,000

400,000

Number of VMACs

For ALE v2: 255

For LSE and LSE2: 510

N/A

STP

All VLANs

N/A

Mis-Cabling Protocol (MCP)

256 VLANs per interface

2,000 logical ports (port x VLAN) per leaf

N/A

Maximum number of endpoints (EPs)

Default (Dual Stack) profile:

  • ALE v2:

    • MAC: 12,000

    • IPv4: 12,000 or

    • IPv6: 6,000 or

    • IPv4: 4,000

      IPv6: 4,000

Default profile or High LPM profile:

  • LSE or LSE2:

    • MAC: 24,000

    • IPv4: 24,000

    • IPv6: 12,000

IPv4 scale profile:

  • LSE and LSE2:

    • MAC: 48,000

    • IPv4: 48,000

    • IPv6: Not supported

  • ALE v2: Not supported

High Dual Stack scale profile:

  • LSE:

    • MAC: 64,000

    • IPv4: 64,000

    • IPv6: 24,000

  • LSE2:

    • MAC: 64,000

    • IPv4: 64,000

    • IPv6: 48,000

  • ALE v2: Not supported

High Policy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM only):

    • MAC: 24,000

    • IPv4: 24,000

    • IPv6: 12,000

  • LSE (N9K-C9336C-FX2 and N9K-C93180YC-EX):

    • MAC: 16,000

    • IPv4: 16,000

    • IPv6: 8,000

High IPv4 EP Scale profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C93180YC-FX3 switches with 32GB of RAM only):

    • MAC: 24,000

    • IPv4 local: 24,000

    • IPv4 total: 280,000

    • IPv6: 12,000

  • Not supported on LSE

Multicast Heavy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only):

    • MAC: 24,000

    • IPv4 local: 24,000

    • IPv4 total: 64,000

    • IPv6: 4,000

  • Not supported on LSE

16-slot and 8-slot modular spine switches:

Max. 450,000 Proxy Database Entries in the fabric, which can be translated into any one of the following:

  • 450,000 MAC-only EPs (each EP with one MAC only)

  • 225,000 IPv4 EPs (each EP with one MAC and one IPv4)

  • 150,000 dual-stack EPs (each EP with one MAC, one IPv4, and one IPv6)

The formula to calculate in mixed mode is as follows:

#MAC + #IPv4 + #IPv6 <= 450,000

NOTE: Four fabric modules are required on all spine switches in the fabric to support the above scale.

4-slot modular spine switches:

Max. 360,000 Proxy Database Entries in the fabric, which can be translated into any one of the following:

  • 360,000 MAC-only EPs (each EP with one MAC only)

  • 180,000 IPv4 EPs (each EP with one MAC and one IPv4)

  • 120,000 dual-stack EPs (each EP with one MAC, one IPv4, and one IPv6)

The formula to calculate in mixed mode is as follows:

#MAC + #IPv4 + #IPv6 <= 360,000

NOTE: Four fabric modules are required on all spine switches in the fabric to support the above scale.

Fixed spine switches (N9K-C9364C and N9K-C9316D-GX):

Max. 180,000 Proxy Database Entries in the fabric, which can be translated into any one of the following:

  • 180,000 MAC-only EPs (each EP with one MAC only)

  • 90,000 IPv4 EPs (each EP with one MAC and one IPv4)

  • 60,000 dual-stack EPs (each EP with one MAC, one IPv4, and one IPv6)

The formula to calculate in mixed mode is as follows:

#MAC + #IPv4 + #IPv6 <= 180,000
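
The spine proxy database constraint above is simple arithmetic: the MAC, IPv4, and IPv6 entry counts are summed and compared against the limit for the spine type. The following minimal Python sketch illustrates the check; the helper name, dictionary, and example counts are illustrative and not part of the product.

    # Spine proxy database formula: #MAC + #IPv4 + #IPv6 <= limit for the spine type
    PROXY_DB_LIMITS = {
        "modular_16_or_8_slot": 450_000,
        "modular_4_slot": 360_000,
        "fixed": 180_000,
    }

    def proxy_db_within_limit(mac, ipv4, ipv6, spine_type):
        """Return True if the endpoint entry counts fit the proxy database."""
        return mac + ipv4 + ipv6 <= PROXY_DB_LIMITS[spine_type]

    # Example: 150,000 dual-stack endpoints (one MAC, one IPv4, and one IPv6 each)
    # consume exactly 450,000 entries on 16-slot or 8-slot modular spines.
    assert proxy_db_within_limit(150_000, 150_000, 150_000, "modular_16_or_8_slot")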

Number of Multicast Routes

Default (Dual Stack), IPv4 Scale, High LPM, High Policy or High IPv4 EP scale profiles: 8,000 with (S,G) scale not exceeding 4,000

High Dual Stack profile:

  • LSE: 512

  • LSE2: 32,000 with (S,G) scale not exceeding 16,000

Multicast Heavy profile:

  • LSE: not supported

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only): 90,000 with (S,G) scale not exceeding 72,000

128,000

Number of Multicast Routes per VRF

Default (Dual Stack), IPv4 Scale, High LPM, High Policy or High IPv4 EP scale profiles: 8,000 with (S,G) scale not exceeding 4,000

High Dual Stack profile:

  • LSE: 512

  • LSE2: 32,000 with (S,G) scale not exceeding 16,000

Multicast Heavy profile:

  • LSE: not supported

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only): 32,000

32,000

IGMP snooping L2 multicast routes

  • For IGMPv2, route scale is for (*, G) only

  • For IGMPv3, route scale is for both (S, G) and (*, G)

Note

 

IGMP snooping entries are created per BD (2 receivers that join the same group from 2 different BDs consume 2 separate entries).

Default (Dual Stack), IPv4, High LPM, High Policy, or High IPv4 EP scale profiles: 8,000

High Dual Stack profile:

  • LSE: 512

  • LSE2: 32,000

Multicast Heavy profile:

  • LSE: not supported

  • LSE2: 32,000

32,000

Number of IPs per MAC

4,096

4,096

Number of Host-Based Routing Advertisements

30,000 host routes per border leaf

N/A

SPAN

ALE-based ToR switches:

  • 4 unidirectional or 2 bidirectional access/tenant sessions

  • 4 unidirectional or 2 bidirectional fabric sessions

LSE-based ToR switches:

  • 32 unidirectional or 16 bidirectional sessions (fabric, access, or tenant)

N/A

Number of ports per SPAN session

Note

 

This is also the total number of unique ports (fabric and access) that can be used as SPAN sources across all SPAN sessions combined

ALE-based ToR switches:
  • All leaf access ports could be in one session.

  • All leaf fabric ports could be in one session.

LSE/LSE2-based ToR switches:
  • 63 – total number of unique ports (fabric + access) across all types of SPAN sessions

N/A

Number of source EPGs in tenant SPAN sessions

Note

 

The numbers listed in this row assume that only tenant SPAN is configured.

If both Access and Tenant SPAN are configured, the following formula applies for both ingress and egress SPAN (a worked check follows this row):

E + P + E*P + EPP + v6FePP + 0.5*v4FePP <= 230

Where:

  • E— Number of source EPGs in Tenant SPAN

  • P—Number of source Ports in access SPAN without any filters

  • EPP—Number of (Epg,Port) Pairs in access SPAN with EPG filter only (no filter group)

  • v4FePP—Number of (v4 filter entry, Port) Pairs in access SPAN with filter group

  • v6FePP—Number of (v6 Filter entry, Port) Pairs in access SPAN with filter group

ALE-based ToR switches:

  • 230 ingress direction + 50 egress direction

LSE-based ToR switches:

  • 230 bidirectional

  • 460 unidirectional (230 ingress + 230 egress)

N/A
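
As a worked check of the combined Access and Tenant SPAN formula in the note above, the following minimal Python sketch sums the weighted source terms and compares them against the 230-entry budget per direction; the function name and sample values are illustrative only.

    # E + P + E*P + EPP + v6FePP + 0.5*v4FePP <= 230
    def span_source_budget_ok(e, p, epp, v4fepp, v6fepp):
        """Check the combined Access + Tenant SPAN source budget (per direction)."""
        used = e + p + e * p + epp + v6fepp + 0.5 * v4fepp
        return used <= 230

    # Example: 10 tenant source EPGs, 5 unfiltered access ports, 20 (EPG, port)
    # pairs, 40 IPv4 and 10 IPv6 (filter entry, port) pairs:
    # 10 + 5 + 50 + 20 + 10 + 20 = 115, which is within the budget.
    print(span_source_budget_ok(e=10, p=5, epp=20, v4fepp=40, v6fepp=10))  # True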

Maximum number of SPAN ACL filter TCAM entries

SPAN filters are supported on -EX, -FX, and -FX2 TORs only.

SPAN filters are not supported in the following:

  • Fabric ports

  • Fabric and tenant SPAN sessions

  • Spine switches

  • IPv4: 480

  • IPv6: 240

Total number of TCAM entries is calculated using the following formula:

(IPv4-filters) * (IPv4-filter-source-groups) + 2 * (IPv6-filters) * (IPv6-filter-source-groups) + 2 * (no-filter-source-groups)

N/A
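
The SPAN ACL filter TCAM formula in this row can be evaluated directly before configuring filters. A minimal Python sketch with illustrative parameter names and sample counts:

    def span_filter_tcam_entries(ipv4_filters, ipv4_src_groups,
                                 ipv6_filters, ipv6_src_groups,
                                 no_filter_src_groups):
        """Total SPAN ACL filter TCAM entries per the formula above."""
        return (ipv4_filters * ipv4_src_groups
                + 2 * ipv6_filters * ipv6_src_groups
                + 2 * no_filter_src_groups)

    # Example: 40 IPv4 filters across 4 source groups, 10 IPv6 filters across
    # 2 source groups, and 3 source groups with no filter:
    # 40*4 + 2*10*2 + 2*3 = 206 entries.
    print(span_filter_tcam_entries(40, 4, 10, 2, 3))  # 206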

Maximum number of L4 Port Ranges

16 (8 source and 8 destination)

First 16 port ranges consume a TCAM entry per range.

Each additional port range beyond the first 16 consumes a TCAM entry per port in the port range.

Filters with distinct source port range and destination port range count as 2 port ranges.

You cannot add more than 16 port ranges at once.

N/A
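
The TCAM cost of L4 port ranges described above can be estimated the same way: the first 16 ranges cost one entry each, and each range beyond that costs one entry per port it covers. A minimal sketch under that reading; the helper name and sample ranges are illustrative.

    def l4_port_range_tcam_cost(ranges):
        """Estimate TCAM entries for a list of (low, high) L4 port ranges.

        The first 16 ranges consume 1 entry each; each additional range
        consumes 1 entry per port in the range.
        """
        cost = 0
        for i, (low, high) in enumerate(ranges):
            cost += 1 if i < 16 else (high - low + 1)
        return cost

    # 16 single-port ranges cost 16 entries; a 17th range covering ports
    # 8000-8015 adds 16 more entries on its own.
    ranges = [(1000 + i, 1000 + i) for i in range(16)] + [(8000, 8015)]
    print(l4_port_range_tcam_cost(ranges))  # 32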

Common pervasive gateway

256 virtual IPs per Bridge Domain

N/A

Maximum number of Data Plane policers at the interface level

ALE:

  • 64 ingress policers

  • 64 egress policers

LSE and LSE2:

  • 7 ingress policers

  • 3 egress policers

N/A

Maximum number of Data Plane policers at EPG and interface level

128 ingress policers

N/A

Maximum number of interfaces with Per-Protocol Per-Interface (PPPI) CoPP

63

N/A

Maximum number of TCAM entries for Per-Protocol Per-Interface (PPPI) CoPP

256

One PPPI CoPP configuration may use more than one TCAM entry. The number of TCAM entries used for each configuration varies by protocol and leaf platform. Use the vsh_lc -c 'show system internal aclqos pppi copp tcam-usage' command to check the usage on LSE/LSE2 platforms.

N/A

Maximum number of SNMP trap receivers

10

10

IP SLA probes*

*With 1 second probe time and 3 seconds of timeout

100

1,500

First Hop Security (FHS)*

With any combination of BDs/EPGs/EPs within the supported limit

2,000 endpoints

1,000 bridge domains

N/A

Maximum number of Q-in-Q tunnels

(both QinQ core and edge combined)

1,980

N/A

Maximum number of TEP-to-TEP atomic counters

(tracked by 'dbgAcPathA' object)

N/A

1,600

SR-MPLS

Configurable Options

Per Leaf Scale

Per Fabric Scale

EVPN sessions

4

100

BGP labeled unicast (LU) pairs

16

200

ECMP paths

16

N/A

Infra SR-MPLS L3Outs*

* Including both remote leaf and Multi-Pod

N/A

100 total, 2 per RL location

VRFs*

* Including both remote leaf and Multi-Pod

N/A

1,200

External EPGs

N/A

2,000 total, 100 per VRF

Interfaces

N/A

Same as fabric scale

Multi-pod remote leaf pairs

N/A

50 pairs (100 RLs total)

Tenants

Configurable Options

Per Leaf Scale

Per Fabric Scale

Contexts (VRFs) per tenant

ALE: 50

LSE: 128

ALE: 50

LSE: 128

VRFs (Contexts)

All numbers are applicable to dual stack unless explicitly called out.

Configurable Options

Per Leaf Scale

Per Fabric Scale

Maximum number of Contexts (VRFs)

ALE: 400

LSE and LSE2: 800

3,000

Maximum ECMP (equal cost multipath) for BGP best path

64

N/A

Maximum ECMP (equal cost multipath) for OSPF best path

64

N/A

Maximum ECMP (equal cost multipath) for Static Route best path

64

N/A

Number of isolated EPGs

400

400

Border Leafs per L3 Out

N/A

12

Maximum number of vzAny Provided Contracts

Shared services: Not supported

Non-shared services: 70 per Context (VRF)

N/A

Maximum number of vzAny Consumed Contracts

Shared services: 16 per Context (VRF)

Non-shared services: 70 per Context (VRF)

N/A

Number of Graphs Instances per device cluster

N/A

500

L3 Out per context (VRF)

N/A

400

Maximum number of BGP neighbors

400

10,000

Maximum number of OSPF neighbors

300

N/A

Maximum number of EIGRP neighbors

32

N/A

Maximum number of IP Longest Prefix Matches (LPM) entries

Note

 

The total of (# of IPv4 prefixes) + 2*(# of IPv6 prefixes) must not exceed the scale listed for IPv4 alone

Default (Dual Stack) profile:

  • ALE v2:

    • IPv4: 10,000 or

    • IPv6: 6,000 or

    • IPv4: 4,000, IPv6: 4,000

    • IPv6 wide prefixes (> /64): 1,000

  • For LSE or LSE2:

    • IPv4: 20,000 or

    • IPv6: 10,000

    • IPv6 wide prefixes (>= /84): 1,000

      NOTE: For LSE2 and FX2 models there's no restriction on wide prefixes.

IPv4 scale profile:

  • For LSE or LSE2:

    • IPv4: 38,000

    • IPv6: Not supported

  • ALE v2: Not supported

High Dual Stack scale profile:

  • LSE or LSE2:

    • IPv4: 38,000 or

    • IPv6: 19,000

    • IPv6 wide prefixes (>= /84): 1,000

      NOTE: For LSE2 and FX2 models there's no restriction on wide prefixes.

  • ALE v2: Not supported

N/A

Maximum number of IP Longest Prefix Matches (LPM) entries

(Continued)

Note

 

The total of (# of IPv4 prefixes) + 2*(# of IPv6 prefixes) must not exceed the scale listed for IPv4 alone

High LPM Scale profile –

  • LSE or LSE2:

    • IPv4: 128,000 or

    • IPv6: 64,000

    • IPv6 wide prefixes (>= /84): 1,000

      NOTE: For LSE2 and FX2 models there's no restriction on wide prefixes.

  • ALE v2: Not supported

High Policy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM only):

    • IPv4: 20,000 or

    • IPv6: 10,000

  • LSE (N9K-C9336C-FX2 and N9K-C93180YC-EX):

    • IPv4: 8,000

    • IPv6: 4,000

High IPv4 EP Scale profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C93180YC-FX3 switches with 32GB of RAM only):

    • IPv4: 40,000

    • IPv6: 20,000

  • LSE: Not supported

Multicast Heavy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only):

    • IPv4: 20,000

    • IPv6: 10,000

  • LSE: Not supported

N/A
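
The LPM budget note above applies the same way to every profile: each IPv6 prefix counts double against the IPv4-only figure. A minimal illustrative Python sketch (the dictionary keys and sample counts are illustrative; the limits are the per-leaf LSE/LSE2 figures already listed in this table):

    # (# of IPv4 prefixes) + 2 * (# of IPv6 prefixes) <= IPv4-alone scale for the profile
    LPM_IPV4_ALONE = {
        "default": 20_000,
        "high_dual_stack": 38_000,
        "high_lpm": 128_000,
    }

    def lpm_within_budget(ipv4_prefixes, ipv6_prefixes, profile):
        return ipv4_prefixes + 2 * ipv6_prefixes <= LPM_IPV4_ALONE[profile]

    # Example: 8,000 IPv4 + 5,000 IPv6 prefixes weigh 18,000 against the
    # 20,000 default-profile budget, so the combination fits.
    print(lpm_within_budget(8_000, 5_000, "default"))  # True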

Maximum number of Secondary addresses per logical interface

1

1

Maximum number of L3 interfaces per Context

  • 1,000 SVIs

  • 16 Routed interfaces

  • 100 sub-interfaces with or without port-channel

N/A

Maximum number of L3 interfaces

  • 1,000 SVIs

  • 16 Routed interfaces

  • 1,000 sub-interfaces with or without port-channel

N/A

Maximum number of ARP entries for L3 Outs

7,500

N/A

Shared L3 Out

  • IPv4 Prefixes: 2,000 or

  • IPv6 Prefixes: 1,000

  • IPv4 Prefixes: 6,000 or

  • IPv6 Prefixes: 3,000

Maximum number of L3 Outs

400

For LSE and LSE2: 800

2,400 (single-stack)

1,800 (dual-stack)

External EPGs

Configurable Options

Per Leaf Scale

Per Fabric Scale

Number of External EPGs

800

ALE: 2,400

LSE: 4,000

The listed scale is calculated as a product of (Number of external EPGs)*(Number of border leaf switches for the L3Out)

For example, the following combination adds up to a total of 2,000 external EPGs in the fabric (250 external EPGs * 2 border leaf switches * 4 L3Outs):

  • 250 External EPGs in L3Out1 on leaf1 and leaf2

  • 250 External EPGs in L3Out2 on leaf1 and leaf2.

  • 250 External EPGs in L3Out3 on leaf3 and leaf4

  • 250 External EPGs in L3Out4 on leaf3 and leaf4

Number of External EPGs per L3Out

250

600

The listed scale is calculated as a product of (Number of external EPGs per L3Out)*(Number of border leaf switches for the L3Out)

For example, 150 external EPGs on L3Out1 deployed on leaf1, leaf2, leaf3, and leaf4 add up to a total of 600.
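
Both external EPG fabric limits above are counted as the product of external EPGs and the border leaf switches on which their L3Out is deployed. The following small Python tally (an illustrative helper, not a product API) reproduces the two examples from this table:

    def external_epg_fabric_count(l3outs):
        """Sum of (external EPGs) x (border leaf switches) over all L3Outs.

        `l3outs` is a list of (num_external_epgs, num_border_leafs) tuples.
        """
        return sum(epgs * leafs for epgs, leafs in l3outs)

    # Four L3Outs, each with 250 external EPGs on 2 border leaf switches:
    print(external_epg_fabric_count([(250, 2)] * 4))  # 2000
    # One L3Out with 150 external EPGs deployed on 4 border leaf switches:
    print(external_epg_fabric_count([(150, 4)]))      # 600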

Maximum number of LPM Prefixes for External EPG Classification

Note

 

Maximum combined number of IPv4/IPv6 host and LPM prefixes for External EPG Classification must not exceed 64,000

ALE: 1,000 IPv4

LSE: refer to LPM scale section.

N/A

Maximum number of host prefixes for External EPG Classification

Note

 

Maximum combined number of IPv4/IPv6 host and LPM prefixes for External EPG Classification must not exceed 64,000

ALE: 1,000

LSE and LSE2:

  • Default Profile:

    • IPv4 (/32): 16,000

    • IPv6 (/128): 12,000

      Combined number of host prefixes and endpoints can't exceed 12,000.

  • IPv4 Scale profile:

    • IPv4 (/32): 16,000

      Combined number of host prefixes, mcast routes, and endpoints can't exceed 56,000.

    • IPv6 (/128): 0

  • High Dual Stack Profile:

    • IPv4 (/32): 64,000

      Combined number of host prefixes, mcast routes, and endpoints can't exceed 64,000.

    • IPv6 (/128): 24,000 (LSE)

      Combined number of host prefixes and endpoints can't exceed 24,000.

    • IPv6 (/128): 48,000 (LSE2 only)

      Combined number of host prefixes and endpoints can't exceed 48,000.

  • High LPM Profile:

    • IPv4 (/32): 24,000

      Combined number of host prefixes, mcast routes, and endpoints can't exceed 24,000.

    • IPv6 (/128): 12,000

      Combined number of host prefixes and endpoints can't exceed 12,000.

  • High Policy profile (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM only):

    • IPv4 (/32): 16,000

    • IPv6 (/128): 12,000

      Combined number of host prefixes and endpoints can't exceed 12,000.

  • High IPv4 EP Scale profile (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C93180YC-FX3 switches with 32GB of RAM only):

    • IPv4 (/32): 16,000

    • IPv6 (/128): 12,000

      Combined number of host prefixes and endpoints can't exceed 12,000.

  • Multicast Heavy profile (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only):

    • IPv4 (/32): 16,000

      Combined number of host prefixes and endpoints can't exceed 114,000.

    • IPv6 (/128): 4,000

      Combined number of host prefixes and endpoints can't exceed 4,000.

N/A

Bridge Domains

Configurable Options

Per Leaf Scale

Per Fabric Scale

Maximum number of BDs

1,980

Legacy mode: 3,500

On ALE ToR switches with multicast optimized mode: 50

15,000

Maximum number of BDs with Unicast Routing per Context (VRF)

ALE: 256

LSE: 1,000

1,750

Maximum number of subnets per BD

1,000; this maximum cannot be configured on all BDs simultaneously.

1,000 per BD

Maximum number of EPGs per BD

3,960

4,000

Number of L2 Outs per BD

1

1

Number of BDs with Custom MAC Address

1,750

Legacy mode: 3,500

On ALE ToR switches with multicast optimized mode: 50

1,750

Legacy mode: 3,500

On ALE ToR switches with multicast optimized mode: 50

Maximum number of EPGs + L3 Outs per Multicast Group

128

128

Maximum number of BDs with L3 Multicast enabled

1,750

1,750

Maximum number of VRFs with L3 Multicast enabled

64

300

Maximum number of L3 Outs per BD

ALE: 4

LSE: 16

N/A

Number of static routes behind pervasive BD (EP reachability)

N/A

450

DHCP relay addresses per BD across all labels

16

N/A

Number of external EPGs per L2 out

1

1

Maximum number of PIM Neighbors

1,000

1,000

Maximum number of PIM Neighbors per VRF

64

64

Maximum number of L3Out physical interfaces with PIM enabled

32

N/A

Endpoint Groups (Under App Profiles)

Configurable Options

Per Leaf Scale

Per Fabric Scale

Maximum number of EPGs

3,960 (3,500 in legacy mode)

15,000

Maximum amount of encapsulations per EPG

1 Static leaf binding, plus 10 Dynamic VMM

N/A

Maximum Path encap binding per EPG

Equal to the number of ports on the leaf switch

N/A

Maximum amount of encapsulations per EPG per port with static binding

One (path or leaf binding)

N/A

Maximum number of domains (physical, L2, L3)

100

N/A

Maximum number of VMM domains

N/A

  • 200 VDS

  • 50 AVS

  • 50 Cisco ACI Virtual Edge

Maximum number of native encapsulations

  • One per port, if a VLAN is used as a native VLAN.

  • Total number of ports, if there is a different native VLAN per port.

Applicable to each leaf independently

Maximum number of 802.1p encapsulations

  • 1; if path binding is used, then it equals the number of ports.

  • If there is a different native VLAN per port, then it equals the number of ports.

Applicable to each leaf independently

Can encapsulation be tagged and untagged?

No

N/A

Maximum number of Static endpoints per EPG

Maximum endpoints

N/A

Maximum number of Subnets for inter-context access per tenant

4,000

N/A

Maximum number of Taboo Contracts per EPG

2

N/A

IP-based EPG (bare metal)

4,000

N/A

MAC-based EPG (bare metal)

4,000

N/A

Contracts

Cisco ACI supports two types of compression for policy CAM (content-addressable memory):

  • Bidirectional compression ensures that bidirectional rules consume a single entry in the policy CAM and is supported starting with Cisco APIC release 3.2(1).

  • Policy TCAM indirection compression enables multiple contracts to refer to the same filter rules and is supported starting with Cisco APIC release 4.0(1).

If you enable compression in release 4.0(1) or later, APIC will use either or both optimizations depending on the configuration. When enabling compression on -EX switches, APIC will apply bidirectional compression. The policy TCAM compression feature requires -FX leaf switches or newer.

Configurable Options

Per Leaf Scale

Per Fabric Scale

Security TCAM size

Default scale profile:

  • ALE v2: 40,000

  • LSE and LSE2: 64,000

IPv4 scale profile:

  • ALE v2: N/A

  • LSE and LSE2: 64,000

High Dual Stack scale profile:

  • ALE v2: N/A

  • LSE: 8,000

  • LSE2: 128,000

High LPM scale profile:

  • ALE v2: N/A

  • LSE and LSE2: 8,000

High Policy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM only): 256,000

  • LSE (N9K-C9336C-FX2 and N9K-C93180YC-EX): 100,000

High IPv4 EP Scale profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C93180YC-FX3 switches with 32GB of RAM only): 64,000

  • Not supported on LSE

Multicast Heavy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, N9K-C93180YC-FX3, and N9K-C93108TC-FX3P switches with 32GB of RAM only): 64,000

  • Not supported on LSE

N/A

Software policy scale with Policy Table Compression enabled

(Number of actrlRule Managed Objects)

Dual stack profile:

  • LSE (N9K-C9336C-FX2 only): 80,000

  • LSE2 (N9K-C93180YC-FX only): 80,000

High Dual Stack profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX only): 140,000

High Policy profile:

  • LSE2 (N9K-C93180YC-FX, N9K-C93600CD-GX, and N9K-C9364C-GX switches with 32GB of RAM): 256,000

  • LSE (N9K-C9336C-FX2): 100,000

N/A

Approximate TCAM calculator given contracts and their use by EPGs

Number of entries in a contract X Number of Consumer EPGs X Number of Provider EPGs X 2

N/A

Number of consumers (or providers) of a contract that has more than 1 provider (or consumer)

100

100

Number of consumers (or providers) of a contract that has a single provider (or consumer)

1,000

1,000

Scale guideline for the number of Consumers and Providers for the same contract

N/A

Number of consumer EPGs * number of provider EPGs * number of filters in the contract <= 50,000
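
The approximate TCAM calculation a few rows above and the consumer/provider scale guideline in this row can both be scripted as a quick pre-check. A minimal Python sketch; the function names and example figures are illustrative only.

    def contract_tcam_estimate(filter_entries, consumer_epgs, provider_epgs):
        """Approximate policy TCAM entries: entries x consumers x providers x 2."""
        return filter_entries * consumer_epgs * provider_epgs * 2

    def contract_within_guideline(consumer_epgs, provider_epgs, filters):
        """Scale guideline: consumers x providers x filters in the contract <= 50,000."""
        return consumer_epgs * provider_epgs * filters <= 50_000

    # Example: a contract with 10 filter entries between 20 consumer and
    # 20 provider EPGs consumes roughly 10 * 20 * 20 * 2 = 8,000 TCAM entries;
    # with 2 filters, the guideline check is 20 * 20 * 2 = 800 <= 50,000.
    print(contract_tcam_estimate(10, 20, 20))    # 8000
    print(contract_within_guideline(20, 20, 2))  # True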

Maximum number of rules for consumer/provider relationships with in-band EPG

400

N/A

Maximum number of rules for consumer/provider relationships with out-of-band EPG

400

N/A

Endpoint Security Groups (ESG)

Configurable Options

Scale

Number of ESGs per Fabric

10,000

Number of ESGs per VRF

1,000

Number of Selectors per Leaf

4,000

FCoE NPV

Configurable Options

Per Leaf Scale

Per Fabric Scale

Maximum number of VSANs

32

N/A

Maximum number of VFCs configured on physical ports and FEX ports

151

N/A

Maximum number of VFCs on port-channel (PC), including SAN port-channel

7

N/A

Maximum number of VFCs on virtual port-channel (vPC) interfaces, including FEX HIF vPC

151

N/A

Maximum number of FDISC per port

255

N/A

Maximum number of FDISC per leaf

1000

N/A

FC NPV

Configurable Options

Per Leaf Scale

Per Fabric Scale

Maximum number of FC NP Uplink interfaces

48

N/A

Maximum number of VSANs

32

N/A

Maximum number of FDISC per port

255

N/A

Maximum number of FDISC per leaf

1,000

N/A

Maximum number of SAN port-channel, including VFC port-channel

7

N/A

Maximum number of members in a SAN port-channel

16

N/A

VMM Scalability Limits

VMware

Configurable Options

Per Leaf Scale

Per Fabric Scale

Number of vCenters (VDS)

N/A

200 (Verified with a load of 10 events/minute for each vCenter)

Number of vCenters (AVS)

N/A

50

Number of vCenters (Cisco ACI Virtual Edge)

N/A

50

Datacenters in a vCenter

N/A

15

Total Number of VMM domain (vCenter, Datacenter) instances.

N/A

  • 200 VDS

  • 50 AVS

  • 50 Cisco ACI Virtual Edge

Number of ESX hosts per AVS

240

N/A

Number of ESX hosts running Cisco ACI Virtual Edge

150

N/A

Number of EPGs per vCenter/vDS

N/A

5,000

Number of EPGs to VMware domains/vDS

N/A

5,000

Number of EPGs per vCenter/AVS

N/A

3,500

Number of EPGs to VMware domains/AVS

N/A

3,500

Number of EPGs per vCenter/Cisco ACI Virtual Edge

N/A

VLAN Mode: 1,300

VXLAN Mode: 2,000

Number of EPGs to VMware domains and Cisco ACI Virtual Edge

N/A

VLAN Mode: 1,300

VXLAN Mode: 2,000

Number of endpoints (EPs) per AVS

10,000

10,000

Number of endpoints per VDS

10,000

10,000

Number of endpoints per vCenter

10,000

10,000

Number of endpoints per Cisco ACI Virtual Edge

10,000

10,000

Support RBAC for AVS

N/A

Yes

Support RBAC for VDS

N/A

Yes

Support RBAC for Cisco ACI Virtual Edge

N/A

Yes

Number of Microsegment EPGs with vDS

400

N/A

Number of Microsegment EPGs with AVS

1,000

N/A

Number of Microsegment EPGs with Cisco ACI Virtual Edge

1,000

N/A

Number of DFW flows per vEth with AVS

10,000

N/A

Number of DFW flows per vEth with Cisco ACI Virtual Edge

10,000

N/A

Number of DFW denied and permitted flows per ESX host with AVS

250,000

N/A

Number of DFW denied and permitted flows per ESX host with Cisco ACI Virtual Edge

250,000

N/A

Number of VMM domains per EPG with AVS

N/A

10

Number of VMM domains per EPG with Cisco ACI Virtual Edge

N/A

10

Number of VM Attribute Tags per vCenter

N/A

vCenter version 6.0: 500

vCenter version 6.5: 1,000

Microsoft SCVMM

Configurable Options

Per Leaf Scale (On-Demand Mode)

Per Leaf Scale (Pre-Provision Mode)

Per Fabric Scale

Number of controllers per SCVMM domain

N/A

N/A

5

Number of SCVMM domains

N/A

N/A

25

EPGs per Microsoft VMM domain

N/A

N/A

3,000

EPGs per all Microsoft VMM domains

N/A

N/A

9,000

EP/VNICs per HyperV host

N/A

N/A

100

EP/VNICs per SCVMM

3,000

10,000

10,000

Number of Hyper-V hosts

64

N/A

N/A

Number of logical switch per host

N/A

N/A

1

Number of uplinks per logical switch

N/A

N/A

4

Microsoft micro-segmentation

1,000

Not Supported

N/A

Microsoft Windows Azure Pack

Configurable Options

Per Leaf Scale

Per Fabric Scale

Number of Windows Azure Pack subscriptions

N/A

1,000

Number of plans per Windows Azure Pack instance

N/A

150

Number of users per plan

N/A

200

Number of subscriptions per user

N/A

3

VM networks per Windows Azure Pack user

N/A

100

VM networks per Windows Azure Pack instance

N/A

3,000

Number of tenant shared services/providers

N/A

40

Number of consumers of shared services

N/A

40

Number of VIPs (Citrix)

N/A

50

Number of VIPs (F5)

N/A

50

Layer 4 - Layer 7 Scalability Limits

Configurable Options

(L4-L7 Configurations)

Per Leaf Scale

Per Fabric Scale

Maximum number of L4-L7 logical device clusters

N/A

1,200

Maximum number of graph instances

N/A

1,000

Number of device clusters per tenant

N/A

30

Number of interfaces per device cluster

N/A

Any

Number of graph instances per device cluster

N/A

500

Deployment scenario for ASA (transparent or routed)

N/A

Yes

Deployment scenario for Citrix - One arm with SNAT/etc.

N/A

Yes

Deployment scenario for F5 - One arm with SNAT/etc.

N/A

Yes

AD, TACACS, RBAC Scalability Limits

Configurable Options

Per Leaf Scale

Per Fabric Scale

Number of ACS/AD/LDAP authorization domains

N/A

4 tested (16 maximum per server type)

Number of login domains

N/A

15 (can go beyond).

Number of security domains/APIC

N/A

15 (can go beyond).

Number of security domains in which the tenant resides

N/A

4 (can go beyond).

Number of priorities

N/A

4 tested (16 per domain)

Number of shell profiles that can be returned.

N/A

4 tested (32 domains total)

Number of users

N/A

8,000 local / 8,000 remote

Number of simultaneous logins

N/A

500 simultaneous REST logins (NGINX connections)

Cisco Mini ACI Fabric and Virtual APICs Scalability Limits

Property

Maximum Scale

Multicast Groups

200

BGP + OSPF Sessions

25

Number of Graphs Instances

20

Maximum number of L4-L7 logical device clusters

3 Physical or 10 Virtual

Number of Pods

1

GOLF VRF, Route Scale

N/A

Tenants

25

Endpoints

20,000

Bridge domains (BDs)

1,000

Endpoint groups (EPGs)

1,000

VRFs

25

Number of Leafs

4

Number of Spines

2

Contracts

2,000

Cisco Cloud APIC Scalability Limits

This section contains scalability numbers for Cisco ACI cloud deployments. The scalability limits differ based on whether it's a single cloud site or a multi-cloud deployment.

Single Cloud Site

This section contains scalability numbers for a single cloud site deployment. The same scale numbers apply to both the AWS and Azure cloud providers.

Table 3. Single Cloud Site

Configurable Options

Scale

Number of Tenants

20

Number of Application Profiles

500

Number of EPGs

500

Number of cloud Endpoints

1,000

Number of VRFs

20

Cloud Context Profiles

40

Number of Contracts

1,000

Number of L4-L7 Service Graphs

200

Number of L4-L7 Services Devices (AWS ALB)

100

Number of hub networks for Transit Gateway (TGW)

2

Number of Transit Gateways per hub network

2

Number of restricted domains (security domain with restricted role)

32

Multi-Cloud Deployments

This section contains scalability numbers for multi-cloud deployments. The same scale numbers apply to each cloud site (AWS or Azure), with intersite connectivity provided by the ACI Multi-Site Orchestrator. The total number of stretched and non-stretched objects must not exceed the maximum verified scalability limit for that object.

Table 4. Multi-Cloud Deployments

Configurable Options

Scale

Number of cloud sites

2

Number of managed regions per site

4

Number of CSRs per site

4

Number of CSRs per region

2

Number of Tenants

5

Number of EPGs

250

Number of cloud endpoints

500

Number of VRFs

10

Cloud Context Profiles (VPC/VNET)

40

Number of Contracts

200

Cisco ACI and UCSM Scalability

The following table shows verified scalability numbers for Cisco Unified Computing System Manager (UCSM) integration using the Cisco ACI ExternalSwitch app.

Configurable Options

Scale

Number of UCSMs per APIC cluster

12

Number of VMM Domains per UCSM

4

Number of VLANs + PVLAN per UCSM

4,000

Number of vNIC Templates per UCSM

16

QoS Scalability Limits

The following table shows QoS scale limits. The same numbers apply to topologies with or without remote leaf switches, as well as with CoS preservation and the Multi-Pod policy enabled.

QoS Mode

QoS Scale

Custom QoS Policy with DSCP

7

Custom QoS Policy with DSCP and Dot1P

7

Custom QoS Policy with Dot1P

38

Custom QoS Policy via a Contract

38

PTP Scalability Limits

The following table shows Precision Time Protocol (PTP) scale limits.

Configurable Options

Scale

(IEEE 1588 Default Profile)

Scale

(AES67, SMPTE-2059-2)

Number of leaf switches connected to a single spine with PTP globally enabled

128

40

Number of ACI switches connected to the same tier-1 leaf switch (multi-tier topology) with PTP globally enabled

16

16

Number of access ports with PTP enabled on a leaf switch

25

Note

 

For improved performance on 1G interfaces with N9K-C93108TC-FX3P switches, the maximum number of 1G interfaces should not exceed 10 out of 25

25

Note

 

For improved performance on 1G interfaces with N9K-C93108TC-FX3P switches, the maximum number of 1G interfaces should not exceed 10 out of 25

Number of PTP peers per access port

PTP Mode Multicast (Dynamic/Master): 2 peers

PTP Mode Unicast Master: 1 peer

PTP Mode Multicast (Dynamic/Master): 2 peers

PTP Mode Unicast Master: 1 peer

Number of PTP peers per leaf switch

26

26

NetFlow Scale

Configurable Options

Scale

Exporters per leaf switch

2

NetFlow monitor policies under BDs per leaf switch

100

NetFlow monitor policies under L3Outs per leaf switch

120

Maximum number of records per collect interval

20,000