
Cisco Catalyst 9500 Architecture White Paper

White Paper

Download Options: PDF (5.5 MB). View with Adobe Reader on a variety of devices.

Updated: March 13, 2020

Introduction

Enterprise campus networks are undergoing profound changes to support ever-increasing bandwidth demands at the access layer, driven by the introduction of 802.11ax and the rapid growth of powerful endpoints. As access-layer speeds move from 1 Gbps to 2.5 Gbps and 5 Gbps, higher uplink speeds such as 25 Gbps and 100 Gbps become the de facto choice for maintaining a similar oversubscription ratio.

Cisco Catalyst® 9500 Series Switches are the foundation of Cisco's® next-generation, enterprise-class backbone solutions. They are the industry's first purpose-built, fixed 1-Rack-Unit (RU) core and aggregation layer switches targeted at the enterprise campus, delivering exceptional table scale (MAC, route, and Access Control List [ACL]) and buffering capabilities for enterprise applications. The platform delivers up to 6.4 Terabits per second (Tbps) of switching capacity and up to 2 billion packets per second (Bpps) of forwarding performance. It offers non-blocking 100-Gigabit-Ethernet Quad Small Form-factor Pluggable 28 (QSFP28), 40-Gigabit-Ethernet Quad Small Form-factor Pluggable (QSFP+), 25-Gigabit-Ethernet Small Form-factor Pluggable 28 (SFP28), and 10/1-Gigabit-Ethernet Small Form-factor Pluggable (SFP+/SFP) switches with granular port densities that meet diverse campus needs.

This white paper provides an architectural overview of the new Cisco Catalyst 9500 series, including system design, power, cooling, and storage options.

Platform overview

The Cisco Catalyst 9500 platform consists of fixed-configuration, front-to-back airflow switches built on the Cisco Unified Access Data Plane (UADP) 2.0 XL and 3.0 architecture, which not only protects your investment but also allows larger scale and higher throughput. The platform runs the modern, open Cisco IOS® XE operating system, which supports model-driven programmability and can host containers and run third-party applications natively within the switch (by virtue of an x86 CPU architecture, local storage, and a higher memory footprint). The platform also supports full hardware high availability, including Platinum-rated dual-redundant power supplies and variable-speed, highly efficient redundant fans. The Cisco Catalyst 9500 portfolio (Figure 1) offers switches with varied port speeds and port densities for the ever-increasing performance demands of enterprise campus environments.

Figure 1. Cisco Catalyst 9500 Series Switches

The Catalyst 9500 portfolio provides an architectural foundation for next-generation hardware features and scalability. The high-performance switches are based on the UADP 3.0 ASIC, which supports a maximum forwarding capacity of 3.2 Tbps per ASIC and allows larger table scale than the UADP 2.0 XL based C9500 SKUs. All Catalyst 9500 switches share a similar hardware architecture and offer stability with a proven operating system.

100-Gigabit-Ethernet (GE) switches:

     C9500-32C - Cisco Catalyst 9500 High Performance Series; 2x UADP 3.0 ASICs with 32x 100GE Quad Small Form-factor Pluggable 28 (QSFP28) ports

     C9500-32QC - Cisco Catalyst 9500 High Performance Series; 1x UADP 3.0 ASIC with 32x 40GE or 16x 100GE QSFP28 ports

40GE switches:

     C9500-24Q - Cisco Catalyst 9500 Series; 4x UADP 2.0 XL ASICs with 24x 40GE Quad Small Form-factor Pluggable+ (QSFP+) ports

     C9500-12Q - Cisco Catalyst 9500 Series; 2x UADP 2.0 XL ASICs with 12x 40GE QSFP+ ports

25GE switches:

     C9500-48Y4C - Cisco Catalyst 9500 High Performance Series; 1x UADP 3.0 ASIC with 48x 25GE SFP28 ports and 4x 100/40GE QSFP28 ports

     C9500-24Y4C - Cisco Catalyst 9500 High Performance Series; 1x UADP 3.0 ASIC with 24x 25GE SFP28 ports and 4x 100/40GE QSFP28 ports

10GE switches:

     C9500-40X - Cisco Catalyst 9500 Series; 2x UADP 2.0 XL ASICs with 40x 10GE Small Form-factor Pluggable+ (SFP+) ports, plus 2x 40GE or 8x 10GE ports on an optional network module

     C9500-16X - Cisco Catalyst 9500 Series; 1x UADP 2.0 XL ASIC with 16x 10GE SFP+ ports, plus 2x 40GE or 8x 10GE ports on an optional network module

Note: Naming conventions on the Cisco Catalyst 9500 Series indicate supported port speeds (decoded programmatically in the sketch after this list):

     C9500: Catalyst 9500 switch family

     X: Native 10-GE front-panel ports

     Y: Native 25-GE front-panel ports

     Q: Native 40-GE front-panel ports

     C: Native 100-GE front-panel ports
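
As a quick illustration of the convention, the following Python sketch decodes a SKU string into its port groups. It is a hypothetical helper written for this paper (not a Cisco tool), and its handling of a trailing letter without its own count, as on the C9500-32QC, reflects a reading of the convention rather than a documented rule.

  import re

  # Letter-to-speed mapping from the naming convention above.
  SPEED = {"X": "10GE", "Y": "25GE", "Q": "40GE", "C": "100GE"}

  def decode_sku(sku: str):
      """Decode a Catalyst 9500 SKU into (port count, speed) groups.
      A letter with no count of its own (as in C9500-32QC) is returned
      with count None, read here as an additional supported speed."""
      family, _, ports = sku.partition("-")
      if family != "C9500":
          raise ValueError("not a Catalyst 9500 SKU")
      return [(int(count) if count else None, SPEED[letter])
              for count, letter in re.findall(r"(\d*)([XYQC])", ports)]

  print(decode_sku("C9500-48Y4C"))  # [(48, '25GE'), (4, '100GE')]
  print(decode_sku("C9500-32QC"))   # [(32, '40GE'), (None, '100GE')]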

Switch overview

This section highlights the key capabilities of the Catalyst 9500 Series switches. The Catalyst 9500 switches support:

     8 SKU options: Select the system that best fits your needs based on port speeds, port density, and network scale.

     Up to two Platinum-rated PSUs: 1.6KW AC/DC or 950W/930W AC/DC power supplies supporting 1:1 power redundancy.

     New highly efficient variable-speed fans: Support N+1 or 1+1 fan/fan-tray redundancy with a maximum fan speed of 24,000 rpm.

     Multi-rate optics: Every QSFP28 port can operate at 100/40/25/10/1G, and every QSFP+ port can operate at 40/10/1G.

     120GB USB 3.0 or up to 960GB M.2 SATA external SSD storage: Primarily for hosting third-party applications.

Switch design

With enterprise networks moving toward higher speeds like 25 Gbps and 100 Gbps, reports indicate that modular port shipments are declining as fixed switches offer significant density, cost, and power benefits. This section briefly covers the high-level system design of the new Cisco Catalyst 9500 Series switches. Additional details can be found in the later sections. Figures 2, 3, 4, and 5 show the different board layouts.

Figure 2. C9500-32C board layout

Figure 3. C9500-32QC/48Y4C/24Y4C board layout

Figure 4. C9500-24Q/12Q board layout

Figure 5. C9500-40X/16X board layout

The Catalyst 9500 switches also include a front-panel RJ45 console port, a USB mini type B console port, an RJ45 management port, and a USB type A 2.0/3.0 host port for flash drives. All switches come with built-in passive RFID for inventory management, a Blue Beacon LED for switch-level identification, and a tri-color LED for system status.

The rear panel on the Catalyst 9500 switches has two field-replaceable Power Supply Unit (PSU) slots, five field-replaceable redundant variable-speed fans or dual fan-tray units, and an M.2 SATA drive or USB 3.0 drive for storage.

Switch power

The Cisco Catalyst 9500 high-performance switches support up to two small form-factor Power Supply Units (PSUs), 1.6KW or 650W AC, or 1.6KW or 930W DC, for a total system capacity of 1.6KW or 650W (Figure 6). The Catalyst 9500 UADP 2.0 XL SKUs support up to two 950W AC/DC PSUs for a total system capacity of 950W. Each PSU is Platinum-rated for greater than 90 percent power efficiency at 100 percent load. The system supports either one PSU operating in non-redundant mode, which is sufficient to power the switch in its maximum configuration, or two PSUs operating in redundant load-sharing mode, where 50 percent of the power is drawn from each PSU. Power supplies can be AC, DC, or a combination of AC and DC units, and support full Online Insertion and Removal (OIR).

Figure 6. Numbering of Catalyst 9500 switch power supplies

Power supply unit

The maximum output power per power supply in the Catalyst 9500 switches is as follows (a power-budgeting sketch follows the list):

     1.6KW AC PSU: 1.6KW at 220V input and 1000W at 110V input

     950W AC PSU: 950W at 110V-220V input

     930W AC PSU: 930W at 110V-220V input
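
For example, these figures can be combined with the redundancy behavior described above into a simple power-budget check. The sketch below is illustrative only; the dictionary keys are labels invented for this example, and the wattages are copied from the list above.

  # Maximum output per PSU, keyed by (model label, nominal input volts).
  PSU_OUTPUT_W = {
      ("1.6KW-AC", 220): 1600,
      ("1.6KW-AC", 110): 1000,  # the 1.6KW AC PSU derates at 110V input
      ("950W-AC", 220): 950,
      ("950W-AC", 110): 950,
      ("930W-AC", 220): 930,
      ("930W-AC", 110): 930,
  }

  def redundant_power_check(model: str, volts: int, load_w: float) -> dict:
      """In 1+1 redundant load-sharing mode each PSU carries ~50% of the load,
      but the budget must fit a single PSU so the system survives a failure."""
      per_psu_max = PSU_OUTPUT_W[(model, volts)]
      return {
          "per_psu_load_w": load_w / 2,
          "survives_psu_failure": load_w <= per_psu_max,
      }

  print(redundant_power_check("1.6KW-AC", 110, load_w=900))
  # {'per_psu_load_w': 450.0, 'survives_psu_failure': True}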

Each PSU has a power holdup time of approximately 20 milliseconds at 100 percent load. Each PSU comes with front-to-back variable-speed cooling fans and has a push-release lock for simple and secure OIR. Figure 7 shows the PSU features of the switches.

Figure 7. Cisco Catalyst 9500 AC PSUs

Each PSU has a bi-color (green/amber) LED that indicates the status of the power supply, as shown in Table 1.

Table 1. Cisco Catalyst 9500 AC PSU LEDs

  Color   Status     Description
  Green   Off        No input power
  Green   Blinking   12V main off, 12V standby power ON
  Green   Solid      12V main ON
  Amber   Blinking   Warning detected on 12V main
  Amber   Solid      Critical error detected

Switch cooling

Cisco Catalyst 9500 High-Performance SKUs support hot-swappable, field-replaceable variable-speed modular fans (five individual fan modules) in the rear of the switch. Cisco Catalyst 9500 UADP 2.0 XL based SKUs support hot-swappable, field-replaceable variable-speed fan trays (two fan trays with dual-stacked fan modules) in the rear of the switch, with front-to-back airflow. These fan and fan-tray units support Online Insertion and Removal (OIR) and a maximum fan speed of 24,000 rpm. The fan unit is responsible for cooling the entire switch and for interfacing with environmental monitors to trigger alarms when conditions exceed thresholds. Switches are equipped with on-board thermal sensors that monitor the ambient temperature at various points and report thermal events to the system, which adjusts the fan speeds. The switches tolerate a hardware failure of up to one individual fan or fan tray; the remaining fans automatically increase their rpm to compensate and maintain sufficient cooling. If the switch falls below the minimum number of required fans, it shuts down automatically to prevent overheating.
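
As a rough illustration of this behavior (Cisco's actual fan-control algorithm is not published in this paper), the sketch below shows how a controller might spread the required cooling across the remaining healthy fans, honor the 24,000-rpm ceiling, and shut down when too few fans remain. The minimum-fan threshold is an assumed value for the example.

  MAX_RPM = 24_000  # maximum fan speed from the text above
  MIN_FANS = 4      # assumed for this example; the real threshold is platform-specific

  def per_fan_rpm(required_total_rpm: float, healthy_fans: int) -> float:
      """Spread the required cooling (expressed here as a total rpm budget)
      across the healthy fans; cap each fan at its maximum speed."""
      if healthy_fans < MIN_FANS:
          raise SystemExit("below minimum fan count: shut down to avoid overheating")
      return min(required_total_rpm / healthy_fans, MAX_RPM)

  # With one of five fans failed, the remaining four spin faster to compensate.
  print(per_fan_rpm(80_000, healthy_fans=5))  # 16000.0 rpm per fan
  print(per_fan_rpm(80_000, healthy_fans=4))  # 20000.0 rpm per fan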

Figure 8. Cisco Catalyst 9500 fan and fan tray

Insertion and removal of the fan modules is made easy by fan-assembly ejector levers: press the ejector levers and use the fan handle to insert or remove the module. Table 2 highlights the LED signal for each fan and fan-tray state.

Table 2. Cisco Catalyst 9500 fan and fan tray LEDs

  LED   Status     Description
  Fan   Off        No input power
  Fan   Blinking   12V main off, 12V standby power ON
  Fan   Solid      Critical error detected

Switch airflow

The Cisco Catalyst 9500 fans and fan trays support front-to-back airflow. Airflow vents are illustrated in Figure 9.

Figure 9. Cisco Catalyst 9500 airflow

The 9500 switches support port-side intake airflow on all C9500 SKUs: cool air enters the switch through the port side (cold aisle) and exhausts through the fan and power supply modules in the rear (hot aisle). QSFP/SFP cages are thermally enhanced, with the mid-portion allowing airflow through the cage.

Baseboard components

Catalyst 9500 switches are line-rate switches offering configurable system resources to optimize support for specific features, depending on how the switch is used in the network. The switch architecture consists of four main components:

     UADP ASIC complex

     X86 CPU complex

     ASIC interconnect

     Front-panel interfaces

Figure 10 shows a high-level diagram of the switch’s components.

Figure 10. Catalyst 9500 high-performance SKU high-level block diagram

UADP ASIC complex

Catalyst 9500 Series Switches are built on two variants of the UADP ASIC: UADP 2.0 XL and UADP 3.0. Both are based on a System-On-Chip (SOC) architecture. The architecture of both ASICs is similar, but the versions differ in switching capacity, port density, port speeds, buffering capability, and forwarding scalability.

UADP 2.0 XL is a third-generation, 240G, dual-core ASIC optimized for next-generation Catalyst fixed and modular switches. The architecture and functionality of UADP 2.0 XL are largely unchanged from the previous generation of the UADP ASIC. The UADP 2.0 XL ASIC is built using 28-nanometer technology with a dual-core architecture. Figure 11 shows the components of the UADP 2.0 XL ASIC.

Figure 11. UADP 2.0 XL ASIC block diagram

Key UADP 2.0 XL capacities and capabilities include:

     Packet bandwidth/switching throughput: 240 GE (120 GE per core)

     Forwarding performance: 375 Mpps

     Stack bandwidth: 720G (2x360G rings)

     FIB table: 128K/64K IPv4 and IPv6 host routes and 64K/32K IPv4 and IPv6 longest-prefix-match entries

     Shared packet buffer: 32 MB (16 MB per core)

     Dedicated NetFlow block: Up to 128K (IPv4)/64K (IPv6) (64K/32K per core)

     TCAM ACL: 54K total capacity

The UADP 3.0 ASIC is the next generation of the UADP 2.0 architecture, built using 16-nanometer technology, and offers significantly larger tables and higher bandwidth than all other UADP ASICs. Figure 12 shows the components of the UADP 3.0 ASIC.

Figure 12. UADP 3.0 ASIC block diagram

Key UADP 3.0 capacities and capabilities include:

     Packet bandwidth/switching throughput: 1600 GE (800 GE per core)

     Forwarding performance: 1 Bpps (500 Mpps per core)

     ASIC interconnects: Two point-to-point links with a total of 800G bandwidth

     FIB entries: 416K double-width tables optimized for IPv6 deployments

     Unified packet buffer: 36 MB shared between both cores

     NetFlow: Up to 128K IPv4 and IPv6 double-width shared tables

     TCAM ACL: 54K total capacity

Note: UADP 3.0 achieves line-rate forwarding performance for packet sizes of 187 bytes and above.
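
The threshold follows from simple arithmetic: at small frame sizes the per-core packets-per-second budget, not bandwidth, becomes the bottleneck, because every frame also occupies fixed wire overhead (preamble plus inter-frame gap). A back-of-the-envelope check using the per-core figures above lands close to the quoted number; the 20-byte overhead accounting is an assumption of this example.

  # Per-core UADP 3.0 figures from the list above.
  CORE_BW_BPS = 800e9      # 800G of packet bandwidth per core
  CORE_PPS = 500e6         # 500 Mpps of forwarding per core
  OVERHEAD_BYTES = 20      # assumed: 8B preamble + 12B inter-frame gap

  def min_line_rate_frame_bytes() -> float:
      """Smallest frame the core can forward at full line rate; below this,
      the packets-per-second budget (not bandwidth) is the bottleneck."""
      wire_bytes_per_packet = CORE_BW_BPS / (8 * CORE_PPS)
      return wire_bytes_per_packet - OVERHEAD_BYTES

  print(min_line_rate_frame_bytes())  # 180.0 -- near the quoted 187 bytes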

Table 3 highlights the high-level differences between the UADP 2.0 XL and UADP 3.0 ASICs.

Table 3. Catalyst 9500 ASIC comparison

  Capabilities (per ASIC)                     Cisco Catalyst 9500 Series (UADP 2.0 XL)   Cisco Catalyst 9500 High Performance (UADP 3.0)
  Switching and forwarding capacity           480 Gbps/360 Mpps                          3.2 Tbps/1 Bpps
  Stack bandwidth                             2x 360 Gbps                                2x 400 Gbps
  Buffer capability                           2x 16 MB                                   36 MB shared buffer
  Switch Database Management (SDM) template   Fixed templates                            Customizable templates
  NetFlow capabilities                        Dedicated NetFlow table                    Shared NetFlow table
  v4 FIB scale                                Total 224K*                                Total 416K*
  v4 and v6 scale                             v6 reduced by half                         v4 and v6 same scale

  *Maximum ASIC capacity

X86 CPU complex

All Catalyst 9500 Series Switches are equipped with the same CPU, system memory, and flash storage. Figure 13 outlines the X86 CPU complex.

Some highlights include:

     New 2.4 GHz x86 quad-core CPU (Intel® Xeon®-D)

     16 GB of DDR4 2400 MT/s RAM (single DIMM)

     Support for a USB Type A file system (front-serviceable) for external storage and a Bluetooth dongle

     Support for a USB Type B serial console in addition to an RJ-45 serial console

     16 GB of internal enhanced USB (eUSB) flash

     USB 3.0 (400 MB/s read, 140 MB/s write) or M.2 (300 MB/s read, 290 MB/s write) form-factor SSD module (rear-serviceable) for application hosting or general-purpose storage

     System reset switch for manual power cycling

Figure 13. X86 CPU complex

ASIC interconnect

Catalyst 9500 switches are fixed core and aggregation switches without rear stack ports; ASIC interconnect links are therefore used for inter-ASIC communication. Communication within a core or between cores is locally switched within the ASIC, so packets destined to local ports on the same ASIC do not use the ASIC interconnect links. The purpose of the ASIC interconnect is to move data between multiple UADP ASICs.

Each ASIC interconnect link is a combination of up to 16 SERDES (serializer/deserializer) lanes operating in 25G NRZ format, for a total packet bandwidth of 400 Gbps per link. Because UADP 3.0 has two ASIC interconnect links, the total packet bandwidth is 800 Gbps.
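
The per-link figure is straightforward SERDES arithmetic, shown here under the stated 25G NRZ assumption:

  SERDES_PER_LINK = 16   # up to 16 SERDES lanes per ASIC interconnect link
  GBPS_PER_SERDES = 25   # 25G NRZ signaling per lane
  LINKS = 2              # UADP 3.0 has two interconnect links

  per_link_gbps = SERDES_PER_LINK * GBPS_PER_SERDES  # 400 Gbps per link
  total_gbps = LINKS * per_link_gbps                 # 800 Gbps aggregate
  print(per_link_gbps, total_gbps)                   # 400 800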

Major UADP 3.0 ASIC interconnect features:

     No packet size limitations

     Packet type-agnostic

     Packet data is spread across the SERDES channels

     Header compression capabilities

     No fragmentation or reordering

     No buffering on ASIC interconnect links

Figure 14 offers a block diagram of the ASIC interconnect.

Figure 14. C9500 high-performance switch ASIC interconnect block diagram

UADP 2.0 XL has an effective interconnect bandwidth of 720G, made up of two independent 360G rings, with each core able to burst up to 360G; each 360G ring is composed of six 60-Gbps rings (see Figure 15).

Figure 15. C9500 switch ASIC interconnect block diagram

Front-panel interfaces

The Ethernet Physical Layer (PHY) connects a link-layer device (often a MAC) to a physical medium such as a transceiver. The PHY on Catalyst 9500 switches is a fully integrated Ethernet transceiver supporting steering and mapping of lanes back to the ASIC to support multiple speeds (10, 25, 40, and 100GE) depending on the optics inserted into the front-panel ports. Figure 16 provides a high-level overview of the C9500-32C switch components.

Figure 16. C9500-32C high-level block diagram

Highlights of the C9500-32C switch include:

     16 columns of QSFP28 cages in 2x1 configuration mode

     Each QSFP28 cage has 8 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at either 4x10G speed for 40G QSFP+ optics or 4x25G speed for 100G QSFP28 optics

    Interface speeds are based on the transceiver module inserted

     32 QSFP28 Ethernet ports

    40G or 100G with a QSFP+/QSFP28 transceiver module, or 10G/1G with a CVR adapter

     Port mapping (expressed in code in the sketch after this list)

    Ports 1-8 are mapped to ASIC0/Core1 and ports 9-16 are mapped to ASIC0/Core0

    Ports 17-24 are mapped to ASIC1/Core1 and ports 25-32 are mapped to ASIC1/Core0

     Power to the optics is enabled by the onboard controller, which turns on as a module is inserted into the front-panel cage

     The advanced-forwarding ASIC supports 100-Gbps single-flow traffic processing on all ports
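
Because the port mapping above is deterministic, it can be expressed as a small function. The helper below is illustrative only (not a Cisco API) and reproduces the C9500-32C mapping listed above.

  def c9500_32c_asic_core(port: int) -> tuple:
      """Map a C9500-32C front-panel port (1-32) to (ASIC, core),
      reproducing the mapping in the list above."""
      if not 1 <= port <= 32:
          raise ValueError("the C9500-32C has ports 1-32")
      asic = 0 if port <= 16 else 1
      # Within each ASIC, the first 8 ports land on core 1, the next 8 on core 0.
      core = 1 if (port - 1) % 16 < 8 else 0
      return (asic, core)

  assert c9500_32c_asic_core(1) == (0, 1)
  assert c9500_32c_asic_core(9) == (0, 0)
  assert c9500_32c_asic_core(17) == (1, 1)
  assert c9500_32c_asic_core(32) == (1, 0)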

Figure 17 provides a high-level overview of the C9500-32QC switch components.

Figure 17. C9500-32QC high-level block diagram

Key highlights of the C9500-32QC switch include:

     16 columns of QSFP28 cages in 2x1 configuration mode

     Each QSFP cage has 4 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at either 2x20G speed for 40G QSFP+ optics or 4x25G speed for 100G QSFP28 optics

    Interface speeds are configured through the CLI

     32 QSFP28 Ethernet ports

    40G or 100G with a QSFP+/QSFP28 transceiver module, or 10G/1G with a CVR adapter

     Port mapping

    Ports 1-16 are mapped to ASIC0/Core1 and ports 17-32 are mapped to ASIC0/Core0

     Power to the optics is enabled by the onboard controller, which turns on as a module is inserted

     Default port configuration (summarized in the sketch after this list):

    Ports 1 to 24 are enabled and active as 40G interfaces

    Ports 25 to 32 are 40G interfaces but inactive

    Ports 33 to 44 are 100G interfaces but inactive

    Ports 45 to 48 are 100G interfaces and active

     100G ports are enabled or disabled using the "enable/no enable" interface command

     The advanced-forwarding ASIC supports 100-Gbps single-flow traffic processing on 100G-capable ports and 20-Gbps single-flow traffic processing on all 40G ports
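
The defaults above can be captured as a simple lookup. The helper below is an illustrative summary of the listed defaults, keyed by interface number; it is not output from the switch.

  def c9500_32qc_default(interface: int) -> tuple:
      """Default (speed, active) state for a C9500-32QC interface number,
      per the defaults listed above."""
      if 1 <= interface <= 24:
          return ("40G", True)    # enabled and active as 40G
      if 25 <= interface <= 32:
          return ("40G", False)   # 40G but inactive by default
      if 33 <= interface <= 44:
          return ("100G", False)  # 100G but inactive by default
      if 45 <= interface <= 48:
          return ("100G", True)   # 100G and active
      raise ValueError("interface number out of range for the C9500-32QC")

  print(c9500_32qc_default(26))  # ('40G', False)
  print(c9500_32qc_default(45))  # ('100G', True)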

Figure 18 shows the port configuration modes supported on the C9500-32QC switch.

Figure 18. C9500-32QC port configuration modes

Figure 19 provides a high-level overview of the C9500-48Y4C switch components.

Figure 19. C9500-48Y4C high-level block diagram

Key highlights of the C9500-48Y4C include:

     12 columns of SFP28 cages in 2x1 configuration mode and 2 columns of QSFP28 cages in 2x1 configuration mode

     Each SFP28 cage has 24 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at 25G speed for SFP28 optics and 10G speed for SFP+ optics

    Interface speeds are based on the transceiver module inserted

     Each QSFP28 cage has 8 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at 4x10G speed for 40G optics or 4x25G speed for 100G optics

     48 SFP28 Ethernet ports and 4 QSFP28 Ethernet ports

    25G/10G/1G with an SFP28/SFP+ transceiver module, and 40G or 100G with a QSFP+/QSFP28 transceiver module

     Port mapping

    Ports 1-24 and 49-50 are mapped to ASIC0/Core1

    Ports 25-48 and 51-52 are mapped to ASIC0/Core0

     Power to the optics is enabled by an onboard controller, which turns on as a module is inserted

     The advanced-forwarding ASIC supports 100-Gbps single-flow traffic processing on the uplink ports and 25-Gbps single-flow traffic processing on the downlink ports

Note: The C9500-24Y4C switch has the same architecture as the C9500-48Y4C, with one UADP 3.0 ASIC and a total of 24x 25G/10G/1G ports plus 4 QSFP28 Ethernet uplink ports, with similar port mapping.

Figure 20 provides a high-level overview of the C9500-24Q switch components.

Figure 20. C9500-24Q high-level block diagram

Key highlights of the C9500-24Q:

     3 columns of QSFP+ cage in 2x1 configuration mode

     Each QSFP+ cage has 8 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at 10G speed

    Interface speeds are based on the transceiver module inserted

     24 QSFP Ethernet ports

    40G with a QSFP+ transceiver module or 10G/1G with a CVR adapter

     Port mapping for the 24Q SKU

    Ports 1-3 are mapped to ASIC3/Core1 and ports 4-6 are mapped to ASIC3/Core0

    Ports 7-9 are mapped to ASIC2/Core1 and ports 10-12 are mapped to ASIC2/Core0

    Ports 13-15 are mapped to ASIC1/Core1 and ports 16-18 are mapped to ASIC1/Core0

    Ports 19-21 are mapped to ASIC0/Core1 and ports 22-24 are mapped to ASIC0/Core0

     Power to the optics is enabled by an onboard controller, which turns on as the module is plugged in

     ASIC supports 10-Gbps single-flow traffic processing on all ports

Note: The C9500-12Q switch has the same architecture as the C9500-24Q, with two UADP 2.0 XL ASICs and a total of 12x 40G ports, with similar port mapping.

Figure 21 provides a high-level overview of the C9500-40X switch components.

Figure 21. C9500-40X high-level block diagram

Key highlights of the C9500-40X:

     12/8 columns of SFP+ cages in 2x1 configuration mode

     Each SFP+ cage has 24 northbound SERDES connections back to the ASIC

    Each SERDES connection operates at 10G speed

    Interface speeds are based on the transceiver module inserted

     40 SFP+ Ethernet ports

    10G/1G with an SFP+/SFP transceiver module

     Port mapping

    Ports 1-12 are mapped to ASIC1/Core1 and ports 13-24 are mapped to ASIC1/Core0

    Ports 25-36 are mapped to ASIC0/Core1 and ports 37-40 are mapped to ASIC0/Core0

    Uplink (network module) ports 41/42 or 1-8 are mapped to ASIC0/Core0

     Power to the optics is enabled by an onboard controller, which turns on as a module is inserted

     The ASIC supports 10-Gbps single-flow traffic processing on all ports

Note: The C9500-16X switch has the same architecture as the C9500-40X, with one UADP 2.0 XL ASIC and a total of 16x 10G ports, with similar port mapping.

Network modules

The Cisco Catalyst 9500 Series supports two optional network modules (Figure 22) for uplink ports on C9500-40X and C9500-16X switches. The default switch configuration does not include the network modules. All ports on the network module are line rate and all software features supported on switch downlink ports are also supported on network module ports.

Figure 22. Catalyst 9500 network modules

Key network module highlights:

     Uplink modules are supported on the C9500-40X and C9500-16X SKUs only

     Modules are automatically powered upon insertion

     OIR-capable

     ACT2 authenticated

     Line-rate on every port with 10G single-flow traffic processing

     Speed is auto-negotiated depending on the optics inserted

Storage

Applications are used in enterprise networks for a variety of business-relevant use cases. Examples of enterprise applications include administrative tools such as performance monitors and protocol analyzers, as well as security toolsets such as intrusion detection services, which traditionally operate on an external physical or virtual server.

This section describes the SSD modules supported on Catalyst 9500 switches, primarily for hosting third-party applications. The modules also serve as general-purpose storage for packet captures, operating system trace logs, and Graceful Insertion and Removal (GIR) snapshots. Catalyst 9500 switches use the Cisco application framework, Cisco IOx (which combines Cisco IOS and Linux), to support applications containerized in KVM-based virtual machines, Linux Containers (LXC), or Docker containers.

Cisco IOS XE running on the Catalyst 9500 switches reserves dedicated memory and CPU resources for application hosting (Table 4). By reserving memory and CPU, the switch provides a separate execution space for user applications and protects the switch's IOS XE run-time processes, ensuring both integrity and performance.

Table 4. Catalyst 9500 application hosting resources

  Platform                                    Memory (GB)   CPU (cores)   USB 3.0 (GB)   M2 SATA (GB)
  Catalyst 9500 (UADP 2.0)                    8             1 x 2.4GHz    120            NA
  Catalyst 9500 High Performance (UADP 3.0)   8             1 x 2.4GHz    NA             240/480/960
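
As a deployment-planning illustration, the sketch below checks an application's resource request against the reservations in Table 4. The function is hypothetical and is not part of Cisco IOx; the thresholds are copied from the table.

  # Application-hosting reservations from Table 4 (both platforms reserve
  # 8 GB of memory and one 2.4GHz CPU core).
  RESERVED_MEMORY_GB = 8
  RESERVED_CPU_CORES = 1

  def fits_app_hosting(req_memory_gb: float, req_cpu_cores: float) -> bool:
      """Check whether an application's request fits the dedicated
      app-hosting reservation, which is isolated from IOS XE itself."""
      return (req_memory_gb <= RESERVED_MEMORY_GB
              and req_cpu_cores <= RESERVED_CPU_CORES)

  print(fits_app_hosting(2.0, 0.5))   # True
  print(fits_app_hosting(12.0, 1.0))  # False: exceeds the 8 GB reservation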

Cisco Catalyst 9500 (UADP 2.0) switches support a Field-Replaceable Unit (FRU) USB 3.0 SSD on the rear of the switch, providing an extra 120GB of storage for application hosting (Cisco IOS XE Release 16.9.1 and later). The USB 3.0 Solid State Drive (SSD) is enabled with Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) to monitor the reliability of the drive, predict drive failures, and carry out drive self-tests. The USB 3.0 SSD module has a single 120GB partition; Cisco IOS Software creates the partition with EXT4 as the default file system.

Storage on Catalyst 9500 high-performance switches is provided by a pluggable Serial Advanced Technology Attachment (SATA) SSD module located on the rear panel of the switch. This module is a field-replaceable unit and has a hot-swap button on the storage panel of the switch for graceful removal. The SSD module is available in 240GB, 480GB, and 960GB capacities, and the default file system is EXT4. The SATA module also supports health monitoring through S.M.A.R.T. Figure 23 outlines the 9500 Series' storage options.

Figure 23. Catalyst 9500 storage options

Packet walks

This section provides a high-level overview of how packet forwarding is performed on a Catalyst 9500 high-performance switch. UADP 2.0 XL and UADP 3.0 are architecturally equivalent, hence a single unicast packet walk is described.

Ingress and egress unicast forwarding within the ASIC

Figure 24 shows a visual representation of unicast packet forwarding within the ASIC.

Figure 24. Catalyst 9500 high-performance packet walk within the ASIC

Following is the basic sequence of events when packets enter the Catalyst 9500 front-panel ports (the stages are also summarized in a short sketch after this list):

1.     Packet arrives at the line card’s ingress port; PHY converts the signal and serializes the bits, and then sends the packet to the Network Interface (NIF) that goes to the backplane.

2.     The packet travels through the backplane and enters the NIF of one of the ASICs.

3.     The NIF passes the packet to the ingress MACsec engine. The MACsec engine will decrypt the packet if needed. The decryption is done at line rate. The packet now enters the Ingress First In First Out (FIFO).

4.     The Ingress FIFO sends the packet to both the Ingress Forwarding Controller (IFC) and the Packet Buffer Complex (PBC) in parallel.

5.     The IFC performs Layer 2, Layer 3, Access Control List (ACL), and Quality-of-Service (QoS) lookups and more, then returns the forwarding result (frame descriptor header) to the PBC.

6.     The PBC uses the frame descriptor to determine the egress port. As the egress port is on the same ASIC, the result is sent to the Egress Queueing System (EQS) on the same ASIC.

7.     The EQS receives the notification from the PBC and schedules the packet to be sent for egress processing.

8.     The EQS signals the PBC to send the packet and descriptor out to both the Egress Forwarding Controller (EFC) and the Rewrite Engine (RWE).

9.     The EFC completes egress functions and sends the final rewrite descriptor to the RWE.

10.  The RWE performs packet rewrite with the final descriptor and sends the packet to the Egress FIFO.

11.  The Egress FIFO sends the packet to the Egress MACsec.

12.  The Egress MACsec performs a wire-rate encryption if required and then passes the frame on to the NIF. The packet then goes through the backplane and is sent out from one of the line card ports.
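
To make the sequence easier to scan, the sketch below lists the within-ASIC stages in order. It is a narrative aid for this paper, not a simulation of the ASIC; stages that operate in parallel are flattened into single entries.

  # Ordered stages for a unicast packet that stays within one ASIC,
  # following steps 1-12 above. IFC/PBC and EFC/RWE work in parallel,
  # so each pair is flattened into a single entry here.
  WITHIN_ASIC_STAGES = [
      "PHY (serialize bits from the ingress port)",
      "NIF (network interface into the ASIC)",
      "Ingress MACsec (line-rate decryption if needed)",
      "Ingress FIFO",
      "IFC + PBC (lookups while the packet is buffered)",
      "EQS (egress scheduling; same ASIC, so no IQS hop)",
      "EFC + RWE (egress functions, then final rewrite)",
      "Egress FIFO",
      "Egress MACsec (line-rate encryption if required)",
      "NIF (out toward the egress port)",
  ]

  for step, stage in enumerate(WITHIN_ASIC_STAGES, 1):
      print(f"{step:2d}. {stage}")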

Ingress and egress unicast forwarding across the ASIC

Figure 25 shows a visual representation of the unicast packet forwarding across the ASIC.

Figure 25. Catalyst 9500 high-performance packet walk across the ASIC

Following is the basic sequence of events when packets enter the Catalyst 9500 front-panel ports:

1.     Packet arrives at the line card’s ingress port; PHY converts the signal and serializes the bits and then sends the packet to the Network Interface (NIF) that goes to the backplane.

2.     The packet travels through the backplane and enters the NIF of one of the ASICs.

3.      The NIF passes the packet to the ingress MACsec engine. The MACsec engine will decrypt the packet if needed. The decryption is done at line rate. The packet now enters the Ingress FIFO.

4.     The Ingress FIFO sends the packet to both the Ingress Forwarding Controller (IFC) and the Packet Buffer Complex (PBC) in parallel.

5.     The IFC performs Layer 2, Layer 3, ACL, and QoS lookups and more to return the forwarding result (frame descriptor header) to the PBC.

6.     The PBC uses the frame descriptor to determine the egress port. As the egress port is on a different ASIC, the Ingress Queueing System (IQS) schedules the packet to be sent to the destination ASIC using an inter-ASIC connection.

7.     The PBC on the destination ASIC receives the packet from the source ASIC via the inter-ASIC connection.

8.     The PBC sends the frame descriptor to the EQS.

9.     The EQS receives the notification from the PBC and schedules the packet to be sent for egress processing.

10.  The EQS signals the PBC to send the packet and descriptor out to both the Egress Forwarding Controller (EFC) and the Rewrite Engine (RWE).

11.  The EFC completes egress functions and sends the final rewrite descriptor to the RWE.

12.  The RWE performs packet rewrite with the final descriptor and sends the packet to the Egress FIFO.

13.  The Egress FIFO sends the packet to the Egress MACsec.

14.  The Egress MACsec performs a wire-rate encryption if required and then passes the frame on to the NIF. The packet then goes through the backplane and is sent out from one of the line card ports.

Multicast forwarding

Figure 26 shows the basic sequence of events when packets enter the Cisco Catalyst 9500 Series front panel ports for multicast packet forwarding within the ASIC.

Figure 26. Multicast packet walk within the ASIC

1.     Packet arrives at the line card’s ingress port; PHY converts the signal and serializes the bits and then sends the packet to the Network Interface (NIF) that goes to the backplane.

2.     The packet travels through the backplane and enters the NIF of one of the ASICs.

3.     The NIF passes the packet to the ingress MACsec engine. The MACsec engine will decrypt the packet if needed. The decryption is done at line rate. The packet now enters the Ingress FIFO.

4.     The Ingress FIFO sends the packet to both the Ingress Forwarding Controller (IFC) and the Packet Buffer Complex (PBC) in parallel.

5.     The IFC performs Layer 2, Layer 3, ACL, and QoS lookups and more, then returns the forwarding result (frame descriptor header) to the PBC. The frame descriptor in this case is a pointer to the replication table.

6.     The PBC uses the frame descriptor to determine the egress port. (If there are receivers on other ASICs, the IQS will schedule the packet for the destination ASICs via the inter-ASIC connection.) For the local receivers, the result is sent to the Egress Queueing System (EQS).

7.     The EQS receives the notification from the PBC. Based on the result, Active Queue Management (AQM) generates a list of egress ports and schedules the packet for each of those egress ports. The following steps are repeated for each of the egress ports in that list.

8.     The EQS signals the PBC to send the packet and descriptor out to both the Egress Forwarding Controller (EFC) and the Rewrite Engine (RWE).

9.     The EFC completes the egress functions and sends the final rewrite descriptor to the RWE.

10.  The RWE performs packet rewrite with the final descriptor and sends the packet to the Egress FIFO.

11.  The Egress FIFO sends the packet to the Egress MACsec.

12.  The Egress MACsec performs a wire-rate encryption if required and then passes the frame on to the NIF. The packet then goes through the backplane and is sent out from one of the line card ports.

Conclusion

Cisco Catalyst 9500 Series Switches are the enterprise-class backbone of the Cisco Catalyst 9000 family of switches, offering a comprehensive high-density portfolio and architectural flexibility with 100G, 40G, 25G, and 10G interfaces. The platform is based on Cisco's next-generation programmable UADP ASIC for increased bandwidth, scale, security, and telemetry, and supports infrastructure investment protection with non-disruptive migration from 10G to 25G and beyond. Cisco Catalyst 9500 Series Switches are built on a modular system architecture designed to deliver the high performance that highly scalable, growing enterprise networks demand.

References

Additional websites that offer more information about the Cisco Catalyst 9500 Series and its capabilities:

     Cisco Catalyst 9500 Series Switches data sheet

     Cisco Catalyst 9500 Series Switches hardware installation guide

     Cisco Catalyst 9000 - Switching for a new era of intent-based networking

     25GE and 100GE – Enabling higher speeds in the enterprise with investment protection white paper

     Cisco Catalyst 9500 High Performance series performance validation
