As SANs continue to grow in size, many factors need to be considered to help scale and manage them. This document focuses on large SAN deployments in a data center and provides best practices and design considerations for the design of a large physical fabric. It does not address networks implementing Inter-VSAN Routing (IVR), Fibre Channel over IP (FCIP) SAN extension, or intelligent fabric applications (for example, Cisco® Data Mobility Manager [DMM] and I/O Acceleration [IOA]).
In SAN environments, many design criteria need to be addressed, such as performance, high availability, and scalability. This document focuses on the following design parameters:
• 1000 or more end devices (servers, storage resources, and tape devices)
• Majority of end devices with connection speeds of 4, 8, or 16 Gbps
• Identical dual physical fabrics (Fabric A and Fabric B)
• SAN that is already in production using Cisco MDS 9500 Series Multilayer Directors
When designing a large Cisco MDS 9000 Family fabric, you should consider the following:
• Ports and port groups
• Dedicated and shared rate modes
• Port speed
• Inter-Switch Links (ISLs)
• 8-Gbps compared to 10-Gbps Fibre Channel ISLs
• Fan-in, fan-out, and oversubscription ratios
• Zone type
• Smart zoning
• Virtual SANs (VSANs)
• Fabric login and scalability
Ports and Port Groups
Each port in the Cisco MDS 9000 Family is a member of one port group that shares common resources from an assigned pool of allocated bandwidth. This approach allows appropriate bandwidth allocation for both high-bandwidth and low-bandwidth devices. Table 1 provides the details needed to understand bandwidth allocation per port and per port group for the switching modules.
Table 1. Bandwidth and Port Group Configurations for Fibre Channel Modules
• 48-port 8-Gbps Advanced Fibre Channel module: 8 port groups of 6 ports; 32.4 Gbps per port group
• 32-port 8-Gbps Advanced Fibre Channel module: 8 port groups of 4 ports; 32.4 Gbps per port group
• 48-port 8-Gbps Fibre Channel module: 8 port groups of 6 ports; 12.8 Gbps per port group
• 24-port 8-Gbps Fibre Channel module: 8 port groups of 3 ports; 12.8 Gbps per port group
• 48-port 16-Gbps module: 12 port groups of 4 ports; 64 Gbps per port group (line rate)
Chassis support notes:
1 Cisco MDS 9513 Multilayer Director with Fabric 3 module installed
2 Cisco MDS 9506 (all) and 9509 (all) or 9513 Multilayer Director with Fabric 2 module installed
3 Cisco MDS 9506 (all), 9509 (all), or 9513 (all)
4 Cisco MDS 9710 Multilayer Director (all)
Dedicated and Shared Rate Modes
Ports on Cisco MDS 9000 Family line cards are grouped into port groups that have a fixed amount of bandwidth per port group (see Table 1). The Cisco MDS 9000 Family allows for the bandwidth of ports in a port group to be allocated based on the requirements of individual ports. When planning port bandwidth requirements, allocation of the bandwidth within the port group is important. Ports in the port group can have bandwidth dedicated to them, or ports can share a pool of bandwidth. For ports that require high sustained bandwidth, such as ISL ports, storage and tape array ports, and ports on high-bandwidth servers, you can have bandwidth dedicated to them in a port group by using the
switchport rate-mode dedicated command. For other ports, typically servers that access shared storage-array ports (that is, storage ports with higher fan-out ratios), you can share the bandwidth in a port group by using the
switchport rate-mode shared command. When configuring the ports, be sure not to exceed the available bandwidth in the port group.
For example, a Cisco MDS 9513 Multilayer Director with a Fabric 3 module and a 48-port 8-Gbps Advanced Fibre Channel module has eight port groups of 6 ports each, with 32.4 Gbps of bandwidth available per port group. You cannot configure all 6 ports of a port group at the 8-Gbps dedicated rate, because that would require 48 Gbps of bandwidth and the port group has only 32.4 Gbps. You can, however, configure all 6 ports in shared rate mode so that the ports run at 8 Gbps, oversubscribed at a ratio of 1.48:1 (6 ports × 8 Gbps = 48 Gbps; 48 Gbps ÷ 32.4 Gbps ≈ 1.48). This oversubscription ratio is well below that of a typical storage-array port (fan-out ratio) and does not affect performance.
You can also mix dedicated and shared rate ports in a port group. Using the same module as before, you can configure one port in the port group for dedicated 8-Gbps bandwidth and use it as an ISL port or a storage target port. This approach allocates 8 Gbps of the port-group bandwidth to the dedicated-rate port, leaving 24.4 Gbps of bandwidth to be shared by the remaining 5 ports, which gives them an oversubscription ratio of 1.64:1 (5 ports × 8 Gbps = 40 Gbps; 40 Gbps ÷ 24.4 Gbps ≈ 1.64).
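The bandwidth accounting in the two examples above can be sketched as a small helper. The values (a 32.4-Gbps pool, 6 ports, 8-Gbps port speed) come from the 48-port 8-Gbps Advanced module discussion; the function itself is only an illustration, not a Cisco tool.

```python
# Sketch: check a port-group plan against the group's bandwidth pool.

def shared_oversubscription(pool_gbps, dedicated_gbps, shared_ports, port_speed_gbps):
    """Oversubscription ratio of the shared ports after dedicated
    bandwidth is carved out of the port-group pool."""
    remaining = pool_gbps - sum(dedicated_gbps)
    if remaining <= 0:
        raise ValueError("dedicated allocations exceed the port-group pool")
    return (shared_ports * port_speed_gbps) / remaining

# All 6 ports shared at 8 Gbps: 48 / 32.4 -> about 1.48:1
print(round(shared_oversubscription(32.4, [], 6, 8), 2))   # 1.48
# One dedicated 8-Gbps port (e.g., an ISL), 5 shared: 40 / 24.4 -> about 1.64:1
print(round(shared_oversubscription(32.4, [8], 5, 8), 2))  # 1.64
```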
Port Speed
The speed of an interface, together with the rate mode, determines the amount of shared resources available to the ports in the port group. An interface can be configured to automatically detect and match the speed of the attached device, or the speed can be explicitly configured. When a port is configured for autospeed detection, the switch assumes that the port is capable of the highest speed supported by the line-card module being used, so port bandwidth may be allocated for a higher speed than the attached device requires. For best use of bandwidth in a port group, explicitly configure the port speed to match the attached device.
Inter-Switch Links (ISLs)
An ISL is a connection between Fibre Channel switches. The number of ISLs required between Cisco MDS 9000 Family switches depends on the desired end-to-end oversubscription ratio. The storage-port oversubscription ratio from a single storage port to multiple servers can be used to determine the number of ISLs needed for each edge-to-core connection. Figure 1 shows three examples of storage, server, and ISL combinations, all with the same oversubscription ratio of 8:1. The first example shows one 16-Gbps storage port with eight 16-Gbps server ports connected over one 16-Gbps ISL. The second example shows one 16-Gbps storage port with sixteen 8-Gbps server ports connected over one 16-Gbps ISL. The third example shows eight 16-Gbps storage ports with sixty-four 16-Gbps server ports connected over eight 16-Gbps ISLs. Ideally, a 1:1 ratio of storage bandwidth to ISL bandwidth is recommended, but factors such as resource efficiency and allocation usually make some oversubscription necessary. When you design a SAN, keep the oversubscription ratio low enough to leave room for growth, and provision additional ISL bandwidth to maintain availability in the event of a link failure.
Figure 1. Number of ISLs Needed to Maintain Oversubscription Ratio
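The arithmetic behind Figure 1 can be sketched as follows; the port counts and speeds are taken from the three examples above, and the helper is illustrative only.

```python
import math

# Sketch: number of ISLs needed to hold a target edge-to-core
# oversubscription ratio. All speeds are in Gbps.

def isls_needed(host_ports, host_gbps, target_ratio, isl_gbps):
    host_bw = host_ports * host_gbps
    return math.ceil(host_bw / (target_ratio * isl_gbps))

print(isls_needed(8, 16, 8, 16))    # first example: 1 ISL
print(isls_needed(16, 8, 8, 16))    # second example: 1 ISL
print(isls_needed(64, 16, 8, 16))   # third example: 8 ISLs
```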
PortChannels
A PortChannel is an aggregation of multiple physical interfaces into one logical interface to provide higher aggregated bandwidth, load balancing, and link redundancy while providing fabric stability in the event of member failure. PortChannels can connect to interfaces across different switching modules, so a failure of a switching module does not bring down the PortChannel link.
A PortChannel has the following functions:
• It provides a single logical point-to-point connection between switches.
• It provides a single VSAN ISL (E-port) or trunking of multiple VSANs over an EISL (TE-port). EISL ports exist only between Cisco switches and carry traffic for multiple VSANs.
• It increases the aggregate bandwidth on an ISL by distributing traffic among all functional links in the channel. PortChannels can contain up to 16 physical links and can span multiple modules for added high availability. Multiple PortChannels can be used if more than 16 ISLs are required between switches.
• It performs load balancing across multiple links and maintains optimum bandwidth utilization. Load balancing is configured per VSAN, based on either source ID and destination ID (SID/DID) or source ID, destination ID, and exchange ID (SID/DID/OXID).
• It provides high availability on an ISL. If one link fails, traffic is redistributed to the remaining links. When a member link goes down, upper-layer protocols are not aware of it: to them, the PortChannel link is still up, although with diminished bandwidth. The routing tables are not affected by the link failure.
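The per-VSAN load-balancing choice can be pictured with a toy hash. The actual Cisco MDS hash is internal to the hardware; the XOR function below is an assumption made purely for illustration of why SID/DID keeps a flow on one member link while SID/DID/OXID spreads exchanges across links.

```python
# Illustrative only: the real Cisco MDS load-balancing hash is internal.

def pick_link(sid, did, n_links, oxid=0):
    # Toy hash over the frame fields used by the configured scheme.
    return (sid ^ did ^ oxid) % n_links

# SID/DID: every exchange between this pair uses the same member link.
same = {pick_link(0x010203, 0x040506, 4) for _ in range(10)}
# SID/DID/OXID: different exchanges land on different member links.
spread = {pick_link(0x010203, 0x040506, 4, oxid) for oxid in range(16)}
print(len(same), sorted(spread))   # 1 [0, 1, 2, 3]
```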
8-Gbps Compared to 10-Gbps Fibre Channel ISLs
The Fibre Channel protocol typically is associated with the 1/2/4/8/16-Gbps speeds of attached devices. However, the Fibre Channel protocol also supports 10 Gbps, which can be used for ISLs. The decision to use 8- or 10-Gbps ISLs is a significant design factor when 16-Gbps interfaces are not available.
At first glance, 10-Gbps Fibre Channel may appear to represent only a 25 percent increase over 8-Gbps Fibre Channel. However, because of differences in the physical layer, 10-Gbps Fibre Channel actually has a data rate 50 percent greater than 8-Gbps Fibre Channel. To understand this, you must look at the way data is transmitted over the two interfaces. All data is encoded to help ensure data integrity when it is transmitted over an interface. For 8-Gbps Fibre Channel, for every 8 bits of data, 10 bits are transmitted, imposing a 25 percent overhead. For 10-Gbps Fibre Channel, for every 64 bits of data, 66 bits are transmitted, an overhead of only 3.125 percent. This encoding, in combination with the physical clock rate, determines the actual data rate of the interface (Table 2).
Table 2. Comparison of 8- and 10-Gbps Data Rates
• 8-Gbps Fibre Channel (8b/10b encoding): approximately 6.8-Gbps data rate (approximately 800 MB/s)
• 10-Gbps Fibre Channel (64b/66b encoding): approximately 10.2-Gbps data rate (approximately 1200 MB/s)
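The encoding arithmetic behind Table 2 can be checked directly. The signaling rates used here (8.5 and 10.51875 Gbaud) are the standard Fibre Channel line rates; treat the MB/s figures in the table as nominal values.

```python
# Sketch: usable data rate after encoding overhead.

def data_rate_gbps(line_rate_gbaud, data_bits, total_bits):
    return line_rate_gbaud * data_bits / total_bits

fc8 = data_rate_gbps(8.5, 8, 10)         # 8b/10b  -> 6.8 Gbps
fc10 = data_rate_gbps(10.51875, 64, 66)  # 64b/66b -> ~10.2 Gbps
print(round(fc8, 1), round(fc10, 1), round(fc10 / fc8, 2))  # 6.8 10.2 1.5
```

The 1.5 ratio is the "50 percent greater" data rate cited above.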
For ISL connectivity, 10-Gbps Fibre Channel interfaces can provide greater bandwidth per ISL and reduce the number of ISLs between switches, reducing the amount of cabling required.
Different line-card modules have different port-group settings. Depending on the port-group configuration, you can configure the port speed for regular 1/2/4/8/16-Gbps Fibre Channel or for 10-Gbps Fibre Channel. Note that depending on the specific line-card module, not all ports can be configured for 10-Gbps Fibre Channel. Figures 2 and 3 show the specific ports of the individual port groups that can be configured for 10-Gbps Fibre Channel. The interfaces that can be configured are identified with a yellow border, and each interface that will be disabled by the switch is marked with a red X. Only the first two port groups are shown; the groupings are the same for the remaining port groups.
Figure 2. 10-Gbps Fibre Channel Port Selection in DS-X9248-256K9
Figure 3. 10-Gbps Fibre Channel Port Selection in DS-X9232-256K9
For the DS-X9448-768K9 16-Gbps 48-port line-card module, all ports in paired port groups operate in either 2/4/8/16-Gbps mode or 10-Gbps mode. Therefore, 10-Gbps mode is enabled in 8-port increments at the paired-port-group level (ports 1-8, 9-16, 17-24, 25-32, 33-40, and 41-48). If you enable 10-Gbps speed, it is enabled for all interfaces in that pair of port groups.
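The 8-port increments described above can be expressed as a small helper, useful when planning which neighbors change speed together. This is a planning sketch based on the port ranges listed in the text, not a Cisco utility.

```python
# Sketch: which ports of the DS-X9448-768K9 change speed together
# when 10-Gbps mode is enabled, per the 8-port increments above.

def ten_gig_block(port):
    """Return the (first, last) ports of the 8-port block containing `port`."""
    first = ((port - 1) // 8) * 8 + 1
    return first, first + 7

print(ten_gig_block(1))   # (1, 8)
print(ten_gig_block(20))  # (17, 24)
print(ten_gig_block(48))  # (41, 48)
```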
Fan-In, Fan-Out, and Oversubscription Ratios
To use resources efficiently and to reduce deployment and management costs, SANs are designed to share array-port, ISL, and line-card bandwidth. The terms used to describe this sharing include fan-in ratio, fan-out ratio, and oversubscription ratio; the term used depends on the point of reference being described. In general, the fan-in ratio is the ratio of host-port bandwidth to storage-array-port bandwidth, and the fan-out ratio is the ratio of storage-array-port bandwidth to host-port bandwidth. Oversubscription is a networking term generally defined as the overall ratio of bandwidth between host and storage-array ports. See Figure 4 for details.
Figure 4. Fan-In, Fan-Out, and Oversubscription Ratios
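Using the definitions above, the ratios reduce to simple bandwidth arithmetic. The port counts in this sketch are invented for illustration; only the definition (host-port bandwidth over storage-port bandwidth for fan-in) comes from the text.

```python
# Sketch: fan-in ratio per the definition above. Speeds in Gbps.

def fan_in_ratio(host_ports, host_gbps, storage_ports, storage_gbps):
    return (host_ports * host_gbps) / (storage_ports * storage_gbps)

# Hypothetical case: 12 hosts at 8 Gbps sharing one 16-Gbps array port -> 6:1
print(fan_in_ratio(12, 8, 1, 16))   # 6.0
```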
Zone Type
Each VSAN has only one active zone set, which contains one or more zones. Each zone consists of one or more members to allow communication between the members. The Cisco MDS 9000 Family SAN-OS and NX-OS Software provide multiple ways to identify zone members, but the commonly used ones are:
• World Wide Port Name (WWPN) of the device (most commonly used)
• Device alias, an easy-to-read name associated with a single device's WWPN
Depending on the requirements of the environment, the type of zone members used is a matter of preference. A recommended best practice is to create a device alias for end devices when you manage the network. The device alias provides an easy-to-read name for a particular end device. For example, a storage array with WWPN 50:06:04:82:bf:d0:54:52 can be given a device-alias name of Tier1-arrayX-ID542-Port2. In addition, with a device alias, when the actual device moves from one VSAN (VSAN 10) to a new VSAN (VSAN 20) in the same physical fabric, the device alias follows that device. Therefore, you do not need to reenter the device alias for each port of the moved array in the new VSAN.
Note: As a best practice for large SAN deployments, use many two-member zones (one initiator and one target each) rather than a single zone with three or more members. This practice is less of a concern in smaller environments, in which zone management is much easier, but as the SAN expands, you should implement smart zoning to keep the network manageable.
Smart Zoning
Smart zoning supports the zoning of multiple devices in a single zone while reducing the number of zoning entries that need to be programmed. With smart zoning, a zone containing multiple initiators and multiple targets can be used without increasing the size of the zone set, because zoning entries are programmed only between initiator and target pairs. Smart zoning can be enabled at the zone level, zone-set level, zone-member level, or VSAN level.
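The scaling benefit can be pictured with approximate entry counts: a regular multi-member zone programs an entry for every ordered pair of members, while smart zoning programs entries only between initiator-target pairs. The exact counts are platform internals, so treat this as a sketch of the growth rates rather than exact hardware numbers.

```python
# Sketch: approximate hardware zoning entries for one zone.

def regular_entries(members):
    # Every ordered pair of members gets an entry.
    return members * (members - 1)

def smart_entries(initiators, targets):
    # Only initiator<->target pairs, in both directions.
    return 2 * initiators * targets

# One zone with 20 initiators and 2 targets:
print(regular_entries(22))   # 462 entries
print(smart_entries(20, 2))  # 80 entries
```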
Virtual SANs (VSANs)
Cisco MDS 9000 Family switches support VSAN technology, which provides a simple and secure way to consolidate many SAN islands into a single physical fabric. Separate fabric services (for example, per-VSAN zoning, name services, domains, and role-based management) are provided for each VSAN, providing separation of both the control plane and the data plane.
VSANs have multiple use cases: for example, you can create a VSAN for each type of operating system (for instance, a VSAN for Microsoft Windows and for HP-UX) or create VSANs on the basis of business function (for instance, a development VSAN, a production VSAN, and a lab VSAN). VSAN 1 is created on the Cisco MDS 9000 Family switch by default and cannot be deleted. As a best practice, you should use VSAN 1 as a staging area for unprovisioned devices, and you should create other VSANs for production environments.
With each VSAN having its own zones and zone sets, Cisco MDS 9000 Family switches enable secure, scalable, and robust networks. VSANs support multitenancy, in which multiple customers use the SAN with strict requirements for traffic segmentation.
Fabric Login and Scalability
Before an end device can communicate across the fabric, it must perform a fabric login (FLOGI) with the switch to exchange service parameters. Each switch supports a limited number of fabric logins at any one time. Generally, the number of physical ports in the fabric is greater than the number of end devices (server, storage, and tape ports) in the physical fabric. The Cisco MDS 9000 Family supports enough fabric logins in a physical fabric, independent of the number of VSANs in the network, for today's environments.
Typically, in a SAN design, the number of end devices determines the number of fabric logins needed. The increase in blade server deployments and the consolidation of servers through server virtualization technologies affect the design of the network. With features such as N-Port ID Virtualization (NPIV) and Cisco N-Port Virtualization (NPV), the number of fabric logins needed has increased even more (Figure 5). The proliferation of NPIV-capable end devices such as host bus adapters (HBAs) and Cisco NPV-mode switches makes the number of fabric logins needed on a per-port, per-line-card, per-switch, and per-physical-fabric basis a critical consideration. The fabric login limits determine the design of the current SAN as well as its potential for future growth. The total number of hosts and NPV switches determines the number of fabric logins required on the core switch.
The number of virtual machines hosted on servers can be difficult to control, so it is important to plan ahead and keep some login capacity in reserve. The current scalability limits for the Cisco MDS 9000 Family product line can be found in the configuration guides. To estimate the number of fabric logins required for the SAN, you can use this formula: (number of hosts) x (number of initiators per host).
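The formula above can be extended for NPIV planning: each physical HBA logs in once, and each virtual machine HBA behind it adds a login of its own. The per-VM term is a planning assumption, not a Cisco formula.

```python
# Sketch: fabric-login budget, with an assumed NPIV term for VM logins.

def flogi_estimate(hosts, initiators_per_host, vm_logins_per_initiator=0):
    return hosts * initiators_per_host * (1 + vm_logins_per_initiator)

# Hypothetical numbers: 200 hosts with dual HBAs.
print(flogi_estimate(200, 2))      # 400 logins without NPIV
print(flogi_estimate(200, 2, 10))  # 4400 logins with 10 VM logins per HBA
```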
Figure 5. Cisco NPV-Enabled Switches and Fabric Logins
Note: Prior to NPIV and Cisco NPV, a single port supported a maximum of one fabric login. With NPIV and Cisco NPV-enabled switches, a single port can now support multiple fabric logins.
Cisco MDS 9500 and 9700 Series Multilayer Director Components
This document discusses the best options for installing a Cisco MDS 9710 director-class switch at the network core, with the existing SAN running Cisco MDS 9513 switches at the edge and core. The Cisco MDS 9513 provides multiple line-card options; this document discusses four of the most widely deployed line cards supported by the Cisco MDS 9513. This document also discusses the use of the Fabric-3 module (DS-13SLT-FAB3) with the Cisco MDS 9513 to provide 256 Gbps of fabric switching throughput per slot. The Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module offers hardware-based slow-drain detection, real-time power consumption reporting, and improved diagnostics capabilities; this module is used in the Cisco MDS 9710 chassis described in this document.
Cisco MDS 9513 Multilayer Director: DS-C9513
The Cisco MDS 9513 is a director-class multilayer switch that helps large enterprises and service providers design and deploy large-scale data centers and scalable enterprise clouds to enable business transformation. The Cisco MDS 9513 is a 13-slot director in a 14-rack-unit (14RU) form factor. Two slots are reserved for the redundant supervisor modules, and the 11 remaining slots are available for line-card and service modules. It can support up to 528 ports per chassis, or 1152 ports per rack, with total throughput of 8.4 terabits per second (Tbps) per chassis. It supports 1/2/4/8- and 10-Gbps Fibre Channel, 10-Gbps Fibre Channel over Ethernet (FCoE), and 1/2/4/8/10-Gbps IBM Fiber Connection (FICON) interfaces.
Cisco MDS 9710 Multilayer Director: DS-C9710
The Cisco MDS 9710 is the newest-generation director-class multilayer switch. It supports up to 384 line-rate 16-Gbps Fibre Channel or 10-Gbps FCoE ports. The Cisco MDS 9710 comes with dual supervisor modules and six fabric modules and provides up to 24 Tbps of chassis throughput.
The Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module delivers line-rate, nonblocking 16-Gbps Fibre Channel performance for scalability in virtualized data centers. Line-rate 16-Gbps performance provides the throughput needed to consolidate workloads from thousands of virtual machines while reducing the number of SAN components, leaving headroom for future SAN growth. This line-card module is hot swappable and provides the same features as previous Cisco MDS 9000 Family products, including predictable performance, high availability, advanced traffic management, integrated VSANs, high-performance ISLs, fault detection, isolation of errored packets, and sophisticated diagnostics, and it adds hardware-based slow-drain detection, real-time power consumption reporting, and improved diagnostics. With dual supervisor modules and up to eight of these switching modules installed, the Cisco MDS 9710 supports up to 384 line-rate 16-Gbps Fibre Channel ports and a total of 24 Tbps of chassis throughput using 2/4/8/10/16-Gbps Fibre Channel or 10-Gbps FCoE ports.
The Cisco MDS 9513 24-Port 8-Gbps Fibre Channel Switching Module delivers the performance needed for high-end storage systems and ISLs. The front panel delivers 96 Gbps of Fibre Channel bandwidth, with 24 ports divided into eight port groups of 3 ports each. The total allocated bandwidth per port group is 12.8 Gbps. One port in each group can be a dedicated port with a maximum bandwidth of 8 Gbps, and the remaining 4.8 Gbps can be shared among the other ports in the same port group. Therefore, the module can provide eight 8-Gbps interfaces or twenty-four 4-Gbps interfaces (Figure 6).
Figure 6. Port Group Definition on 24-Port 1/2/4/8-Gbps Fibre Channel Line-Card Module
The Cisco MDS 9513 48-Port 8-Gbps Fibre Channel Switching Module provides port density and performance for server virtualization environments. The front panel delivers 96 Gbps of Fibre Channel bandwidth, with 48 ports divided into eight port groups of 6 ports each. The total allocated bandwidth per port group is 12.8 Gbps. One port in each group can be a dedicated port with a maximum bandwidth of 8 Gbps, and the remaining 4.8 Gbps of bandwidth can be shared among the other 5 ports in the same port group. Therefore, the module can provide eight 8-Gbps interfaces or twenty-four 4-Gbps interfaces (Figure 7).
Figure 7. Port Group Definition on 48-Port 1/2/4/8-Gbps Fibre Channel Line-Card Module
The Cisco MDS 9513 32-Port 8-Gbps Advanced Fibre Channel Switching Module is suited for high-end storage systems and for ISL connectivity. This module delivers 256 Gbps of front-panel bandwidth across thirty-two 8-Gbps interfaces and supports eight port groups with 4 ports in each group. The module is not oversubscribed: all 32 ports can run at the full 8-Gbps speed simultaneously (Figure 8).
Figure 8. Port Group Definition on 32-Port 1/2/4/8/10-Gbps Fibre Channel Line-Card Module
With its 8-Gbps Fibre Channel bandwidth option, the Cisco MDS 9513 48-Port 8-Gbps Advanced Fibre Channel Switching Module provides high port density and high-speed performance. The front panel delivers 256 Gbps of Fibre Channel bandwidth, with 48 ports divided into eight port groups of 6 ports each. The total allocated bandwidth per port group is 32.4 Gbps, with a maximum speed of 8 Gbps per port (Figure 9).
Figure 9. Port Group Definition on 48-Port 1/2/4/8/10-Gbps Fibre Channel Line-Card Module
The Cisco MDS 9700 48-Port 16-Gbps Fibre Channel Switching Module is the best-in-class module for the new Cisco MDS 9710 chassis, delivering line-rate 16-Gbps Fibre Channel performance for scalability in virtualized data centers. The Cisco MDS 9710 supports up to 384 line-rate 16-Gbps Fibre Channel ports per chassis. These line-card modules are hot swappable and are compatible with 2/4/8/10/16-Gbps Fibre Channel interfaces (Figure 10).
Figure 10. Port Group Definition on 48-Port 2/4/8/10/16-Gbps Fibre Channel Line-Card Module for Cisco MDS 9710
SAN Topology Considerations
It is common practice in SAN environments to build two separate, redundant physical fabrics (Fabric A and Fabric B) so that the failure of a single physical fabric does not disrupt service. The topology diagrams in this document show a single fabric; however, customers should deploy two identical fabrics for redundancy. Most designs for large networks use one of two physical fabric topologies:
• Two-tier topology: Core-edge design
• Three-tier topology: Edge-core-edge design
In the two-tier design, servers connect to the edge switches, and storage devices connect to one or more core switches (Figure 11). This topology allows the core switch to provide storage services to one or more edge switches, thus servicing more servers in the fabric.
Figure 11. Sample Core-Edge Design
In environments in which projections for future growth of the network estimate that the number of storage devices may exceed the number of ports available at the core switch, a three-tier design may be preferred (Figure 12). This type of topology still uses a set of edge switches for server connectivity, but it adds another set of edge switches for storage devices. Both sets of edge switches connect to a core switch through ISLs.
Figure 12. Sample Edge-Core-Edge Design
Sample Use Case Deployments
Figures 13 and 14 show two sample deployments of large-scale Cisco MDS 9000 Family fabrics.
Sample Deployment 1
Figure 13. Use Case 1 Topology
The deployment shown in Figure 13 allows scaling to nearly 3500 devices in a single fabric. The actual production environment has approximately 128 storage ports running at 16 Gbps and about 2560 host ports. The environment requires a minimum of 10:1 oversubscription in the network, which requires each host edge switch to have 2048 Gbps of PortChannel bandwidth using 128 physical links (ISLs). The number of storage ports will not increase as rapidly, and the core switch has room to add more host edge switches connected through ISLs. The network in this environment was managed as follows:
• A total of four VSANs were created:
– VSAN 1 was used for staging new SAN devices.
– VSAN 100 was used for the development SAN.
– VSAN 200 was used for the lab SAN.
– VSAN 300 was used for the production SAN.
• TACACS+ was used for authorization and authentication of Cisco MDS 9000 Family switches.
• Role-based access control (RBAC) was used to create separate administrative roles for different VSANs.
• Device aliases were used for logical device identification.
• Two-member zones were used with device aliases.
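The deployment 1 numbers can be sanity-checked with back-of-the-envelope arithmetic. The 8-Gbps host-port speed is an assumption (it is not stated for this deployment), and the host pool is treated here as a single edge for simplicity.

```python
# Sketch: deployment 1 ISL bandwidth check, assuming 8-Gbps host ports.

host_bw = 2560 * 8       # 20480 Gbps of aggregate host bandwidth
isl_bw = host_bw / 10    # 10:1 oversubscription target -> 2048 Gbps
links = isl_bw / 16      # carried on 16-Gbps ISLs -> 128 links
print(isl_bw, links)     # 2048.0 128.0
```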
Sample Deployment 2
Figure 14. Use Case 2 Topology
The deployment shown in Figure 14 scales to nearly 3500 devices in a single fabric and still has room to grow: for instance, by adding a second core. The actual production environment has 120 storage ports running at 16 Gbps, and 2880 hosts using 8-Gbps Fibre Channel interfaces.
The environment requires a minimum of 12:1 oversubscription in the network, which requires each host edge switch to have a 2016-Gbps PortChannel on the core side. This design helps ensure enough bandwidth for performance-intensive applications when needed.
The network in this environment was managed as follows:
• A total of five VSANs were created:
– VSAN 1 was used for staging new devices.
– Four VSANs were based on business operations.
• TACACS+ was used for authorization and auditing of Cisco MDS 9000 Family switches.
• Separate administrative roles were created for the VSANs.
• A device alias was created for the environment.
• The dynamic port VSAN membership (DPVM) feature was enabled.
• A mixture of two- and three-member zones was used.
With data centers constantly growing, SAN administrators must design networks that both meet their current needs and can scale for demanding growth. Cisco MDS 9710 Multilayer Directors provide embedded features to help SAN administrators gain the benefits of redundancy, high availability, and high performance at the core level with room for future growth without degrading performance. SAN administrators upgrading or deploying large Cisco SAN fabrics using Cisco MDS 9500 Series Multilayer Directors can use the design parameters and best practices discussed in this document to design optimized and scalable SANs using the new Cisco MDS 9710 director.
Table 3 provides ordering information for the Cisco MDS 9500 and 9700 Series components.
Table 3. Part Numbers for Ordering Cisco 9500 and 9700 Series Components
Cisco MDS 9700 Series Components
MDS 9710 Chassis, No Power Supplies, Fans Included