Cisco Nexus Dashboard Fabric Controller Verified Scalability

Verified Scale Limits for Release 12.1.2e

This section provides verified scalability values for various deployment types for Cisco Nexus Dashboard Fabric Controller, Release 12.1.2e.

The values were validated on testbeds enabled with a reasonable number of features and are not theoretical system limits for Cisco Nexus Dashboard Fabric Controller software or for Cisco Nexus/MDS switch hardware and software. If you attempt to reach maximum scalability by scaling multiple features at the same time, your results might differ from the values listed here.

Nexus Dashboard Server Resource (CPU/Memory) Requirements

The following table provides information about Server Resource (CPU/Memory) Requirements to run NDFC on top of Nexus Dashboard. Refer to Nexus Dashboard Capacity Planning to determine the number of switches supported for each deployment.

Cisco Nexus Dashboard can be deployed using a number of different form factors. NDFC can be deployed on the following form factors:

  • pND - Physical Nexus Dashboard

  • vND - Virtual Nexus Dashboard

  • rND - RHEL Nexus Dashboard

Table 1. Server Resource (CPU/Memory) Requirements to run NDFC on top of Nexus Dashboard

| Deployment Type | Node Type | CPUs | Memory | Storage (Throughput: 40-50 MB/s) |
|---|---|---|---|---|
| Fabric Discovery | Virtual Node (vND) – app OVA | 16 vCPUs | 64 GB | 550 GB SSD |
| Fabric Discovery | Physical Node (pND) (PID: SE-NODE-G2 and ND-NODE-L4) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.2 TB NVMe drive |
| Fabric Controller | Virtual Node (vND) – app OVA | 16 vCPUs | 64 GB | 550 GB SSD |
| Fabric Controller | Physical Node (pND) (PID: SE-NODE-G2 and ND-NODE-L4) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.6 TB NVMe drive |
| SAN Controller | Virtual Node (vND) – app OVA (without SAN Insights) | 16 vCPUs (with physical reservation) | 64 GB (with physical reservation) | 550 GB SSD |
| SAN Controller | App Node (rND) (without SAN Insights) | 16 vCPUs (with physical reservation) | 64 GB (with physical reservation) | 550 GB SSD |
| SAN Controller | Data Node (vND) – data OVA (with SAN Insights) | 32 vCPUs (with physical reservation) | 128 GB (with physical reservation) | 3 TB SSD |
| SAN Controller | Data Node (rND) (with SAN Insights) | 32 vCPUs (with physical reservation) | 128 GB (with physical reservation) | 3 TB SSD |
| SAN Controller | Physical Node (pND) (PID: SE-NODE-G2 and ND-NODE-L4) | 2 x 10-core 2.2 GHz Intel Xeon Silver CPUs | 256 GB of RAM | 4 x 2.4 TB HDDs, 400 GB SSD, 1.6 TB NVMe drive |

Scale Limits for NDFC Fabric Discovery

Table 2. Scale Limits for Fabric Discovery Persona and Nexus Dashboard

| Profile | Deployment Type | Verified Limit |
|---|---|---|
| Fabric Discovery | 1-Node vND (app OVA) | <= 25 switches (Non-Production) |
| Fabric Discovery | 3-Node vND (app OVA) | 150 switches |
| Fabric Discovery | 5-Node vND (app OVA) | 1000 switches |
| Fabric Discovery | 3-Node pND | 1000 switches |

Scale Limits for NDFC Fabric Controller

Table 3. Scale Limits for Fabric Controller Persona and Nexus Dashboard

| Profile | Deployment Type | Verified Limit |
|---|---|---|
| Fabric Controller | 1-Node vND (app OVA) | <= 25 switches (Non-Production) |
| Fabric Controller | 3-Node vND (app OVA) | 80 switches |
| Fabric Controller | 5-Node vND (app OVA) | 400 switches for Easy Fabrics¹; 1000 switches for External Fabrics² |
| Fabric Controller | 3-Node pND | 500 switches for Easy Fabrics¹; 1000 switches for External Fabrics² |

¹ Easy Fabrics include Data Center VXLAN EVPN fabrics and BGP fabrics.

² External Fabrics include Flexible Network fabrics, Classic LAN fabrics, External Connectivity Network fabrics, and Multi-Site Interconnect Network fabrics. Both managed and monitored modes are supported.

Table 4. Scale Limits for Switches and Fabrics in Fabric Controller

| Description | Verified Limit |
|---|---|
| Switches per fabric | 200 |
| Number of fabrics | 25 |
| Physical interfaces per NDFC instance | 30000 |
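The limits in Table 4 apply simultaneously: a deployment must stay within the per-fabric switch count, the fabric count, and the instance-wide physical-interface budget at the same time. A minimal pre-deployment sanity check might look like the following sketch; the limit values come from Table 4, but the function name and input format are illustrative, not part of NDFC.

```python
# Illustrative check of a planned topology against the Table 4
# per-NDFC-instance limits: 200 switches per fabric, 25 fabrics,
# and 30000 physical interfaces per instance.
LIMITS = {"switches_per_fabric": 200, "fabrics": 25, "physical_interfaces": 30000}

def check_plan(fabrics):
    """fabrics: list of (switch_count, interface_count) tuples, one per fabric."""
    problems = []
    if len(fabrics) > LIMITS["fabrics"]:
        problems.append(f"{len(fabrics)} fabrics exceeds limit of {LIMITS['fabrics']}")
    for i, (switches, _) in enumerate(fabrics):
        if switches > LIMITS["switches_per_fabric"]:
            problems.append(f"fabric {i}: {switches} switches exceeds 200")
    total_ifaces = sum(ifaces for _, ifaces in fabrics)
    if total_ifaces > LIMITS["physical_interfaces"]:
        problems.append(f"{total_ifaces} physical interfaces exceeds 30000")
    return problems

# Example: three fabrics, one of them over the per-fabric switch limit.
print(check_plan([(180, 9000), (210, 10000), (50, 2500)]))
```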

Table 5. Scale Limits for Provisioning New Data Center VXLAN EVPN Fabrics (also referred to as "Greenfield" Deployment)

| Description | Verified Limit |
|---|---|
| Fabric Underlay and Overlay | |
| Switches per fabric | 200 |
| Overlay scale for VRFs and networks | 500 VRFs and 2000 Layer-3 networks, or 2500 Layer-2 networks |
| VRF instances for external connectivity | 500 |
| IPAM Integrator application | 150 networks with a total of 4K IP allocations on the Infoblox server |
| ToR and Leaf devices | A Data Center VXLAN EVPN fabric can manage both Layer-2 ToRs and Leafs. The maximum scale for such a fabric is 40 Leafs and 240 ToRs. |
| Endpoint Locator | |
| Endpoints | 100000 |
| VXLAN EVPN Multi-Site Domain | |
| Sites | 12 |
| Virtual Machine Manager (VMM) | |
| Virtual Machines (VMs) | 5500 |
| VMware vCenter servers | 4 |
| Kubernetes Visualizer application | |
| Namespaces and pods | Maximum of 159 namespaces with a maximum of 1002 pods |
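The overlay row in Table 5 describes two alternative validated profiles rather than one combined budget: up to 500 VRFs with 2000 Layer-3 networks, or up to 2500 Layer-2 networks. Whether the two profiles can be partially mixed is not stated; the sketch below assumes the conservative reading that a deployment is verified only when it fits entirely within one profile. The function name and the no-mixing rule are assumptions, not NDFC behavior.

```python
# Conservative reading of the Table 5 overlay row: a plan is within
# verified limits if it fits the Layer-3 profile (<=500 VRFs and
# <=2000 L3 networks, no L2 networks) OR the Layer-2 profile
# (<=2500 L2 networks, no L3 networks). Mixing is assumed unverified.
def overlay_within_verified_limits(vrfs, l3_networks, l2_networks):
    l3_profile = vrfs <= 500 and l3_networks <= 2000 and l2_networks == 0
    l2_profile = l2_networks <= 2500 and l3_networks == 0
    return l3_profile or l2_profile

print(overlay_within_verified_limits(500, 2000, 0))   # Layer-3 profile at its maximum
print(overlay_within_verified_limits(0, 0, 2500))     # Layer-2 profile at its maximum
print(overlay_within_verified_limits(500, 2000, 100)) # mixes both profiles
```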


Note

Refer to the following table if you are transitioning a CLI-configured VXLAN EVPN fabric based on Cisco Nexus 9000 Series switches to NDFC.


Table 6. Scale Limits for Transitioning Existing Data Center VXLAN EVPN Fabric Management to NDFC (also referred to as "Brownfield" Migration)

| Description | Verified Limit |
|---|---|
| Fabric Underlay and Overlay | |
| Switches per fabric | 100 |
| Physical interfaces | 5000 |
| VRF instances | 100 |
| Overlay networks | 500 |
| VRF instances for external connectivity | 100 |
| Endpoint Locator | |
| Endpoints | 50000 |
| IPAM Integrator application | 150 networks with a total of 4K IP allocations on the Infoblox server |
| Virtual Machine Manager (VMM) | |
| Virtual Machines (VMs) | 5500 |
| VMware vCenter servers | 4 |
| Kubernetes Visualizer application | |
| Namespaces and pods | Maximum of 159 namespaces with a maximum of 1002 pods |

Scale Limits for Cohosting NDFC and other Services

Table 7. Scale Limits for Cohosting Nexus Dashboard Insights and NDFC

| Profile | Deployment Type | Verified Limit |
|---|---|---|
| Nexus Dashboard Insights and Nexus Dashboard Fabric Discovery | 4-Node pND | 50 switches, 10K flows |
| Nexus Dashboard Insights and Nexus Dashboard Fabric Controller | 5-Node pND | 50 switches, 10K flows |

Scale Limits for IPFM Fabrics

Table 8. Scale Limits for Nexus Dashboard and IPFM Fabrics

| Profile | Deployment Type | Verified Limit |
|---|---|---|
| Fabric Controller | 1-Node vND | 35 switches (2 Spines and 33 Leafs) |
| Fabric Controller | 3-Node vND | 35 switches (2 Spines and 33 Leafs) |
| Fabric Controller | 1-Node pND | 35 switches (2 Spines and 33 Leafs) |
| Fabric Controller | 3-Node pND | 120 switches (2 Spines, 100 Leafs, and 18 Tier-2 Leafs) |

Table 9. Scale Limits for IPFM Fabrics

| Description | NBM Active Mode Only | NBM Passive Mode Only | Mixed Mode: NBM Active VRF | Mixed Mode: NBM Passive VRF |
|---|---|---|---|---|
| Switches | 120 | 32 | 32 | 32 |
| Number of flows | 32000 | 32000 | 32000 | 32000 |
| Number of endpoints (discovered hosts) | 5000 | 1500 | 3500 | 1500 |
| VRFs | 16 | 16 | 16 | 16 |
| Host Policy - Sender | 8000 | NA | 8000 | NA |
| Host Policy - Receiver | 8000 | NA | 8000 | NA |
| Host Policy - PIM (Remote) | 512 | NA | 512 | NA |
| Flow Policy | 2500 | NA | 2500 | NA |
| NBM ASM group-range | 20 | NA | 20 | NA |
| Host Alias | 2500 | NA | 2500 | NA |
| Flow Alias | 2500 | NA | 2500 | NA |
| NAT Flows | 3000 | 3000 | 3000 | 3000 |
| RTP Flow Monitoring | 8000 | 8000 | 8000 | 8000 |
| PTP Monitoring | 120 switches | 32 switches | 32 switches | 32 switches |

Scale Limits for NDFC SAN Controller

Table 10. Scale Limits for SAN Zones

| Description | Verified Limit |
|---|---|
| Zone sets | 1000 |
| Zones | 16000 |

Table 11. Scale Limits for Nexus Dashboard and SAN Controller Persona

| Profile | Deployment Type | Verified Limit (Without SAN Insights) | Verified Limit (With SAN Insights) |
|---|---|---|---|
| SAN Controller | 1-Node vND (app OVA) | 80 switches, 20K ports | NA |
| SAN Controller | 1-Node vND (data OVA) | 80 switches, 20K ports | 1M ITLs/ITNs |
| SAN Controller | 1-Node pND (SE) | 80 switches, 20K ports | 120K ITLs/ITNs |
| SAN Controller | 3-Node vND (app OVA) | 160 switches, 40K ports | NA |
| SAN Controller | 3-Node vND (data OVA) | 160 switches, 40K ports | 240K ITLs/ITNs |
| SAN Controller | 3-Node pND | 160 switches, 40K ports | 500K ITLs/ITNs |
| SAN Controller on Linux (rND) (Install Profile: Default) | 1-Node rND | 80 switches, 20K ports | NA |
| SAN Controller on Linux (rND) (Install Profile: Large) | 1-Node rND | 80 switches, 20K ports | 1M ITLs/ITNs |
| SAN Controller on Linux (rND) (Install Profile: Default) | 3-Node rND | 160 switches, 40K ports | NA |
| SAN Controller on Linux (rND) (Install Profile: Large) | 3-Node rND | 160 switches, 40K ports | 240K ITLs/ITNs |


Note


ITLs: Initiator-Target-LUNs

ITNs: Initiator-Target-Namespace IDs
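Because an ITL is one initiator-target-LUN combination, the flow count that SAN Insights must track grows multiplicatively with zoning. A rough sizing sketch under the assumption of full-mesh zoning (every initiator reaches every target in a zone) is shown below; the function name, input format, and the full-mesh assumption are illustrative, not an NDFC tool.

```python
# Hypothetical sizing helper: estimates the ITL count of a planned SAN
# as the sum over zones of initiators x targets x LUNs-per-target,
# assuming full-mesh zoning. Real ITL counts depend on actual I/O flows.
def itl_count(zones):
    """zones: list of (initiators, targets, luns_per_target) tuples."""
    return sum(i * t * l for i, t, l in zones)

plan = [(100, 20, 50), (200, 10, 25)]  # two hypothetical zones
total = itl_count(plan)
print(total)             # 100*20*50 + 200*10*25 = 150000
print(total <= 240_000)  # fits the 3-Node vND (data OVA) limit from Table 11
```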