Large SAN Design Best Practices Using Cisco MDS 9700 Series Multilayer Directors

What You Will Learn

As SANs continue to grow in size, many factors must be considered to scale and manage them effectively. This document focuses on large SAN deployments within a data center and provides best practices and design considerations to apply when designing a large physical fabric. It does not address networks that implement Inter-VSAN Routing (IVR), Fibre Channel or Fibre Channel over IP (FCIP)-based SAN extension, or intelligent fabric applications (for example, Cisco® Data Mobility Manager or Cisco I/O Acceleration [IOA]).

Design Parameters

In SAN environments, many design criteria need to be addressed, such as the number of servers that access a shared storage frame, the network topology, and fabric scaling. This document focuses on the following design parameters:

Deployments with 1000 or more end devices (servers, storage, and tape devices)

Deployments in which a majority of end devices have connection speeds of 8 and 16 Gbps

Deployments with identical dual physical fabrics (Fabric A and Fabric B)

Cisco MDS 9700 Series Multilayer Directors

The Cisco MDS 9700 Series Multilayer Directors are the newest directors in the Cisco storage networking portfolio. The Cisco MDS 9710 Multilayer Director supports up to 384 line-rate 16-Gbps Fibre Channel or 10-Gbps Fibre Channel over Ethernet (FCoE) ports, and the Cisco MDS 9706 Multilayer Director supports up to 192 line-rate 16-Gbps Fibre Channel or 10-Gbps FCoE ports. They each provide up to 1.5 terabits per second (Tbps) of per-slot throughput when populated with six fabric modules. Both directors also provide redundant supervisors, power supplies, and fan modules.

The Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module delivers line-rate, nonblocking 16-Gbps Fibre Channel performance to enable scalability in virtualized data centers. Line-rate 16-Gbps performance provides the throughput needed to consolidate workloads from thousands of virtual machines while reducing the number of SAN components and leaving room for future SAN growth. These line-card modules are hot swappable and retain all previous Cisco MDS features, such as predictable performance, high availability, advanced traffic management capabilities, integrated VSANs, high-performance Inter-Switch Links (ISLs), fault detection, isolation of errored packets, and sophisticated diagnostics. The module also adds hardware-based slow-drain detection, real-time power consumption reporting, and improved diagnostics capabilities.

With the Cisco MDS 9700 48-Port 10-Gbps FCoE Module, the Cisco MDS 9700 Series offers 10-Gbps FCoE capabilities, providing multiprotocol flexibility for SANs. This module extends the benefits of FCoE beyond the access layer to the data center core with a full line-rate FCoE solution for the Cisco MDS 9700 Series.

By deploying FCoE with the Cisco MDS 9700 48-Port 10-Gbps FCoE Module, customers can save money, simplify management, reduce power and cooling requirements, and improve flexibility while protecting their Fibre Channel SAN investment. FCoE allows an evolutionary approach to I/O consolidation by preserving all Fibre Channel constructs. It maintains the latency, security, and traffic management attributes of Fibre Channel, as well as your investment in Fibre Channel tools, training, and SANs. FCoE also extends Fibre Channel SAN connectivity, so that 100 percent of your network servers can be attached to the SAN.

Table 1 provides ordering information for Cisco MDS 9700 Series components.

Table 1. Cisco MDS 9700 Series Components and Cisco Part Numbers

Part Number          Product Description

Cisco MDS 9700 Component

DS-C9710             Cisco MDS 9710 chassis, no power supplies, with fans included
DS-C9706             Cisco MDS 9706 chassis, no power supplies, with fans included
DS-X97-SF1-K9        Cisco MDS 9700 Series Supervisor-1 Module
DS-X9710-FAB1        Cisco MDS 9710 Crossbar Switching Fabric-1 Module
DS-X9706-FAB1        Cisco MDS 9706 Crossbar Switching Fabric-1 Module
DS-X9448-768K9       Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module
DS-X9848-480K9       Cisco MDS 9700 48-Port 10-Gbps FCoE Switching Module

Optional Licensed Software

M97ENTK9             Cisco Enterprise package license for 1 Cisco MDS 9700 Series switch
DCNM-SAN-M97-K9      Cisco Data Center Network Manager (DCNM) for SAN license for Cisco MDS 9700 Series
M97FIC1K9            Cisco Mainframe package license for 1 Cisco MDS 9700 Series switch

SAN Topology Considerations

It is common practice in SAN environments to build two separate, redundant physical fabrics (Fabric A and Fabric B) to protect against the failure of a single physical fabric. This document shows a single fabric in the topology diagrams; however, customers would deploy two identical fabrics for redundancy. In large networks, most environments use one of two topologies within a physical fabric:

Two-tier: Core-edge design

Three-tier: Edge-core-edge design

In the two-tier design, servers connect to edge switches, and storage devices connect to one or more core switches (Figure 1). This design allows the core switch to provide storage connectivity to one or more edge switches, so more servers can be serviced in the fabric.

Figure 1. Sample Core-Edge Design

In environments in which future growth of the network will likely cause the number of storage devices to exceed the number of ports available at the core switch, a three-tier design may be the best approach (Figure 2). This type of topology still uses a set of edge switches for server connectivity but adds a second set of edge switches for storage devices. Both sets of edge switches connect to a core switch through ISLs.

Figure 2. Sample Edge-Core-Edge Design
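As a rough illustration of how this choice can be evaluated, the following Python sketch budgets core-switch ports for a two-tier design; all port, ISL, and switch counts are hypothetical examples, not recommendations.

```
# Hypothetical sketch: rough core-port budgeting for a two-tier (core-edge) design.
# All counts below are illustrative only.

def core_ports_needed(edge_switches: int, isls_per_edge: int, storage_ports: int) -> int:
    """Core ports consumed by edge ISLs plus directly attached storage."""
    return edge_switches * isls_per_edge + storage_ports

core_capacity = 192   # for example, a director fully populated with 48-port modules
today = core_ports_needed(edge_switches=6, isls_per_edge=8, storage_ports=96)    # 144
future = core_ports_needed(edge_switches=8, isls_per_edge=8, storage_ports=160)  # 224

print(today <= core_capacity, future <= core_capacity)  # True, False
# When projected storage growth no longer fits at the core, moving storage to its own
# edge tier (edge-core-edge) keeps core ports free for ISLs.
```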

Network Considerations

When designing a large Cisco MDS SAN, you should take the following details into consideration:

Fan-in, fan-out, and oversubscription ratios

VSANs

PortChannels

ISLs

Effects of fabric logins

Zone types

Smart zoning

Fan-In, Fan-Out, and Oversubscription Ratios

To use resources efficiently, save deployment time, and reduce management costs, SANs are designed to share storage array ports and ISL and line-card bandwidth. The terms used to describe this sharing include fan-in ratio, fan-out ratio, and oversubscription ratio; the term used depends on the point of reference being described. In general, the fan-in ratio is the ratio of host port bandwidth to storage array port bandwidth, and the fan-out ratio is the ratio of storage array port bandwidth to host port bandwidth. Oversubscription is a networking term generally defined as the overall bandwidth ratio between host and storage array ports. See Figure 3 for more details.

Figure 3. Fan-In, Fan-Out, and Oversubscription Ratios
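The arithmetic behind these ratios is straightforward; the short Python sketch below shows one way to compute them, using hypothetical port counts and speeds purely as an example.

```
# Minimal sketch of the ratio arithmetic described above; the port counts and speeds
# are hypothetical examples, not a sizing recommendation.

def bandwidth_ratio(host_ports: int, host_gbps: float,
                    storage_ports: int, storage_gbps: float) -> float:
    """Host bandwidth divided by storage bandwidth (fan-in / oversubscription)."""
    return (host_ports * host_gbps) / (storage_ports * storage_gbps)

fan_in = bandwidth_ratio(96, 8, 8, 16)  # 96 hosts at 8 Gbps on 8 storage ports at 16 Gbps
fan_out = 1 / fan_in                    # storage-to-host bandwidth, the inverse ratio
print(fan_in, fan_out)                  # 6.0 and ~0.167, i.e., 6:1 and 1:6
```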

Virtual SANs

Cisco MDS switches offer virtual SAN (VSAN) technology, which provides a simple and secure way to consolidate many SAN islands into a single physical fabric. Separate fabric services (per-VSAN zoning, name services, domains, separate role-based management, etc.) are provided for each VSAN, providing separation of both the control plane and the data plane.

VSANs have multiple use cases: for example, you can create a VSAN for each type of operating system (such as Microsoft Windows or HP-UX), or you can create them on the basis of business function (a VSAN for development, production, or a lab, for instance). VSAN 1 is created on the Cisco MDS switch by default and cannot be deleted. As a best practice, VSAN 1 should be used as a staging area for unprovisioned devices, and other VSANs should be created for the production environments. With each VSAN having its own zones and zone sets, Cisco MDS switches enable secure, scalable, and robust networks.

Inter-Switch Links

An ISL is a connection between Fibre Channel switches. The number of ISLs required between Cisco MDS switches depends on the desired end-to-end oversubscription ratio. The storage port oversubscription ratio from a single storage port to multiple servers can be used to help determine the number of ISLs needed for each edge-to-core connection. Figure 4 shows three examples of storage, server, and ISL combinations, all with the same oversubscription ratio of 8:1.

The first example has one 16-Gbps storage port with eight 16-Gbps server ports traversing one 16-Gbps ISL.

The second example has one 16-Gbps storage port with sixteen 8-Gbps server ports traversing one 16-Gbps ISL.

The third example has eight 16-Gbps storage ports with sixty-four 16-Gbps server ports traversing eight 16-Gbps ISLs.

A 1:1 ratio of storage bandwidth to ISL bandwidth is recommended for SAN design. Additional ISLs can be added beyond this ratio to provide greater availability in the event of a link failure.

Figure 4. Number of ISLs Needed to Maintain Oversubscription Ratio
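The sketch below reproduces the Figure 4 arithmetic under the 1:1 storage-to-ISL bandwidth guideline; the 16-Gbps ISL speed and the helper names are assumptions made for illustration.

```
# Sketch of the ISL sizing logic shown in Figure 4, assuming 16-Gbps ISLs and the
# 1:1 storage-to-ISL bandwidth guideline described above.
import math

def isls_needed(storage_ports: int, storage_gbps: float, isl_gbps: float = 16) -> int:
    """Number of ISLs whose combined bandwidth matches storage bandwidth 1:1."""
    return math.ceil(storage_ports * storage_gbps / isl_gbps)

def oversubscription(host_ports, host_gbps, storage_ports, storage_gbps) -> float:
    return (host_ports * host_gbps) / (storage_ports * storage_gbps)

# The three Figure 4 examples, all at 8:1 oversubscription:
print(isls_needed(1, 16), oversubscription(8, 16, 1, 16))    # 1 ISL,  8.0
print(isls_needed(1, 16), oversubscription(16, 8, 1, 16))    # 1 ISL,  8.0
print(isls_needed(8, 16), oversubscription(64, 16, 8, 16))   # 8 ISLs, 8.0
```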

PortChannels

A PortChannel is an aggregation of multiple physical interfaces into one logical interface to provide higher aggregated bandwidth, load balancing, and link redundancy while providing fabric stability in the event that a member fails. PortChannels can connect to interfaces across different switching modules, so a failure of a switching module does not bring down the PortChannel link.

A PortChannel has the following functions:

It provides a single logical point-to-point connection between switches.

It provides a single VSAN ISL (E port) or trunking of multiple VSANs over an EISL (TE port). EISL ports exist only between Cisco switches and carry traffic for multiple VSANs, unlike an ISL.

It increases the aggregate bandwidth on an ISL by distributing traffic among all functional links in the channel. PortChannels can contain up to 16 physical links and can span multiple modules for added high availability. Multiple PortChannels can be used if more than 16 ISLs are required between switches.

It load balances across multiple links and maintains optimum bandwidth utilization. Load balancing is performed based on per-VSAN configuration (source ID [SID] and destination ID [DID]; or SID, DID, and exchange ID [OXID]); a simple illustration follows this list.

It provides high availability on an ISL. If one link fails, traffic is redistributed to the remaining links. Upper-layer protocols are not aware of the failure: to them, the logical link is still present, although its bandwidth is diminished, and the routing tables are not affected by the link failure.
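To make the flow-based load-balancing behavior concrete, the following Python sketch hashes SID/DID (and optionally OXID) values onto member links. The hash function and interface names are toy stand-ins for illustration, not the actual Cisco MDS hardware algorithm.

```
# Toy illustration of flow-based PortChannel load balancing: frames of the same flow
# (SID/DID, optionally OXID) always select the same member link. The hash here is a
# stand-in, not the real MDS hardware hash.
from typing import Optional

def pick_member(sid: int, did: int, oxid: Optional[int], members: list) -> str:
    key = (sid, did) if oxid is None else (sid, did, oxid)
    return members[hash(key) % len(members)]

links = ["fc1/1", "fc2/1", "fc3/1", "fc4/1"]   # hypothetical member interfaces
# SID/DID load balancing: all exchanges between one host/target pair use one link.
print(pick_member(0x010203, 0x0A0B0C, None, links))
# SID/DID/OXID load balancing: different exchanges can spread across member links.
print(pick_member(0x010203, 0x0A0B0C, 0x1234, links))
print(pick_member(0x010203, 0x0A0B0C, 0x1235, links))
```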

Fabric Logins

In a physical fabric, the number of fabric logins can exceed the number of end devices (server, storage, and tape ports). The Cisco MDS 9700 Series supports up to 20,000 fabric logins in a physical fabric, independent of the number of VSANs in the network. Typically, when designing a SAN, the number of end devices determines the number of fabric logins. The increase in blade server deployments and the consolidation of servers through server virtualization technologies affect the design of the network. With features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins has increased further (Figure 5). The proliferation of NPIV-capable end devices such as host bus adapters (HBAs) and Cisco NPV-mode switches makes the number of fabric logins on a per-port, per-line-card, per-switch, and per-physical-fabric basis a critical consideration. These fabric login limits determine the design of the current SAN as well as its future growth. The total number of hosts and NPV switches determines the number of fabric logins required on the core switch.

Figure 5. Cisco NPV Enabled Switches and Fabric Logins

Note: Prior to NPIV and Cisco NPV, a single port was limited to one fabric login. With NPIV and Cisco NPV-enabled switches, a single port can now support multiple fabric logins. Figure 5 shows 24 logins for the 24 hosts plus one for the switch itself.
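A simple login budget like the one sketched below can help confirm that a design stays within the fabric login limits; the number of NPV edge switches is a hypothetical example, while the 24-hosts-per-switch figure follows Figure 5.

```
# Sketch of fabric-login budgeting on a core switch in an NPV design. The 20,000-login
# figure comes from the text above; the edge-switch count is a hypothetical example.

FABRIC_LOGIN_LIMIT = 20000   # per physical fabric (Cisco MDS 9700 Series)

def core_logins(hosts_per_npv_switch: list) -> int:
    """Each NPV edge switch contributes one login for itself plus one per host."""
    return sum(hosts + 1 for hosts in hosts_per_npv_switch)

edge_switches = [24] * 40             # 40 NPV-mode edge switches, 24 hosts each (as in Figure 5)
logins = core_logins(edge_switches)   # 40 * (24 + 1) = 1000
print(logins, logins <= FABRIC_LOGIN_LIMIT)
# NPIV-capable HBAs (for example, on virtualized hosts) add further logins per port,
# so real budgets should count virtual machine logins as well.
```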

Zones

Each VSAN has only one active zone set, which contains one or more zones. Each zone consists of one or more members to allow communication between the members. Cisco MDS 9000 SAN-OS and NX-OS Software provide multiple ways to identify zone members, but the commonly used ones are:

PWWN: Port worldwide name of the device (most commonly used)

Device alias: An easy-to-read name associated with a single device’s PWWN

The type of zone member to use depends on the requirements of the environment and is largely a matter of preference. A recommended best practice is to create a device alias for each end device when managing the network. The device alias provides an easy-to-read name for a particular end device. For example, a storage array with PWWN 50:06:04:82:bf:d0:54:52 can be given a device-alias name of Tier1-arrayX-ID542-Port2. In addition, when a device moves from one VSAN to another in the same physical fabric (for example, from VSAN 10 to VSAN 20), the device-alias name follows the device: you do not need to reenter the device alias for each port of the moved array in the new VSAN.

Note: As a best practice for large SAN deployments, you should have more zones with two members rather than a single zone with three or more members. This practice is not a concern in smaller environments.
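Because large fabrics can require thousands of such two-member zones, they are often generated programmatically. The short Python sketch below shows one way to do this from device-alias names; all names are hypothetical.

```
# Hypothetical sketch: generating two-member (single-initiator, single-target) zones
# from device-alias names, following the best practice noted above.
from itertools import product

hosts = ["esx01-hba1", "esx02-hba1", "esx03-hba1"]                   # made-up host aliases
storage = ["Tier1-arrayX-ID542-Port2", "Tier1-arrayX-ID542-Port3"]   # made-up array aliases

zones = {f"z_{h}_{s}": (h, s) for h, s in product(hosts, storage)}   # one zone per pair
print(len(zones))   # 3 hosts x 2 storage ports = 6 two-member zones
for name, members in zones.items():
    print(name, "->", members)
```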

Smart Zoning

Smart zoning supports zoning many devices in a single zone while reducing the number of zoning entries that need to be programmed in hardware. This feature allows zones consisting of multiple initiators and multiple targets to be used without a corresponding increase in the number of entries programmed in the zoning hardware. Smart zoning can be enabled at the zone level, zone-set level, zone-member level, or VSAN level.
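The benefit is easiest to see by counting pairings. The sketch below compares the approximate number of device pairings implied by a conventional multi-member zone with the initiator-target pairings used by smart zoning; the counts are illustrative, not exact hardware entry accounting.

```
# Rough sketch of why smart zoning shrinks the zoning entries programmed in hardware.
# Pair counts are approximations for illustration, not exact TCAM accounting.

def traditional_pairs(members: int) -> int:
    """A conventional multi-member zone implies every member-to-member pairing."""
    return members * (members - 1) // 2

def smart_pairs(initiators: int, targets: int) -> int:
    """Smart zoning needs only initiator-to-target pairings."""
    return initiators * targets

# One zone with 40 initiators (hosts) and 4 targets (storage ports):
print(traditional_pairs(44))   # 946 pairings without smart zoning
print(smart_pairs(40, 4))      # 160 pairings with smart zoning
```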

Sample Use Case Deployments

Figures 6 and 7 show two sample deployments of large-scale Cisco MDS fabrics, each with more than 1000 devices in the fabric.

Figure 6. Use Case 1 Topology with Mix of 8-Gbps and 16-Gbps Hosts Connected to 16-Gbps Storage Ports

Sample Deployment 1

The deployment in Figure 6 allows scaling to more than 1400 devices, with 128 storage ports running at 16 Gbps and about 1200 host ports in a single fabric. The environment has a 9:1 oversubscription ratio within the network, which requires each host edge switch to have 2048 Gbps of ISL bandwidth. Storage ports will not grow quite as rapidly, and the core switch has room to add more host edge switches. This topology uses two Cisco MDS 9710 switches with three 48-port 16-Gbps Fibre Channel modules, but the same results can be achieved using Cisco MDS 9706 switches with a similar number of modules. The use of the Cisco MDS 9710 in the core provides more flexibility to meet the demands of future growth in storage and host ports while maintaining the oversubscription ratio.
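A back-of-the-envelope check of these figures, treated fabric-wide, is sketched below; the 1:1 storage-to-ISL sizing follows the guideline given earlier, and how the ISLs are split across host edge switches is left as an assumption.

```
# Back-of-the-envelope check of the Figure 6 numbers, treated fabric-wide: 128 storage
# ports at 16 Gbps, 16-Gbps ISLs sized 1:1 against storage bandwidth, and a 9:1
# host-to-storage oversubscription target.

STORAGE_PORTS, PORT_GBPS, TARGET_OVERSUB = 128, 16, 9

storage_bw = STORAGE_PORTS * PORT_GBPS        # 2048 Gbps of storage bandwidth
isl_bw = storage_bw                           # 1:1 guideline -> 2048 Gbps of ISL bandwidth
isl_count = isl_bw // PORT_GBPS               # 128 x 16-Gbps ISLs fabric-wide
host_bw_budget = TARGET_OVERSUB * storage_bw  # 18432 Gbps of host bandwidth at 9:1
print(storage_bw, isl_count, host_bw_budget)
# Roughly 1200 host ports at a mix of 8 and 16 Gbps can fit within that budget,
# depending on the mix of speeds.
```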

In this environment, the following were used in managing the network:

Total of four VSANs created

- VSAN 1 for staging new SAN devices

- VSAN 100 for development SAN

- VSAN 200 for lab SAN

- VSAN 300 for production SAN

TACACS+ used for authorization and authentication of Cisco MDS switches

Role-based access control (RBAC) used to create separate administrative roles for VSANs

Device aliases used for logical device identification

Two-member zones used with device aliases

Figure 7. Use Case 2 Topology with Mix of 8-Gbps and 16-Gbps Hosts Connected to 16-Gbps Storage Ports

Sample Deployment 2

The deployment in Figure 7 scales to nearly 3000 devices, with 192 storage ports running at 16 Gbps and about 2500 host ports in a single fabric. The environment requires a minimum oversubscription ratio of 12:1 within the network, which requires each host edge switch to have 3072 Gbps of ISL bandwidth. Again, storage ports will not grow quite as rapidly, and the core switch has room to add more host edge switches. The storage edge and core Cisco MDS 9710 switches can take additional 16-Gbps Fibre Channel line cards to meet the demands of future growth in storage or host ports. As in the core-edge topology in Figure 6, the Cisco MDS 9710 switches at the host edge can be exchanged for Cisco MDS 9706 switches with four 48-port 16-Gbps Fibre Channel modules.

In this environment, the following were used in managing the network:

Total of five VSANs created

- VSAN 1 used for staging new devices

- Four VSANs based on business operations

TACACS+ used for authorization and auditing of Cisco MDS switches

Separate administrative roles created for VSANs

Device alias created for environment

Dynamic Port VSAN Membership (DPVM) feature enabled: This feature dynamically assigns VSAN membership to ports by assigning VSANs based on the device WWN. DPVM eliminates the need to reconfigure the port VSAN membership to maintain the fabric topology when a host or storage device connection is moved between two Cisco SAN switches or two ports within a switch. It retains the configured VSAN regardless of where a device is connected or moved.

Mixture of two- and three-member zones

Conclusion

With data centers continually growing, SAN administrators must design networks that both meet their current needs and can scale for future growth. Cisco MDS 9710 and 9706 Multilayer Director switches provide embedded features to help SAN administrators in these tasks. SAN administrators deploying large Cisco SAN fabrics can use the design parameters and best practices discussed in this document to design optimized and scalable SANs.

For More Information

Cisco MDS 9700 Series Multilayer Directors

Cisco MDS 9710 Multilayer Director data sheet

Cisco MDS 9706 Multilayer Director data sheet

Cisco MDS 9700 Series Supervisor-1 Module data sheet

Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module data sheet
