In this document you will learn why Cisco's MDS 9500 directors are the natural migration path for mainframe IT shops that have loyally kept their McData (or InRange) directors in service despite the lack of a roadmap for future enhancement. You will also learn how easy it can be to migrate from your current directors to the Cisco environment.
Why McData Customers Like Their Directors
McData worked with IBM to invent FICON (Fibre Connection) as a synthesis of industry-standard Fibre Channel and IBM's proprietary Enterprise Systems Connection (ESCON) interconnect. The McData FICON directors earned a well-deserved reputation for high availability and for consistent, predictable performance under load, thanks to their crossbar architecture. Unfortunately for McData customers, the trusted McData M6140 and i10K directors are reaching end of life, and their nominally designated successor requires a complete hardware replacement and a transition to a completely different switching architecture.
Key Performance Metrics in the IBM System z Environment
IBM System z customers invest in mainframes because they deliver high performance consistently under heavy workloads, and they require the same level of consistent performance from their I/O infrastructures. McData directors were valued for delivering it, and Cisco® MDS 9500 Series Multilayer Directors, also based on the crossbar design principle, provide even better consistency of performance. The same cannot be said of competitive products: testing has shown that competitive 8-Gbps directors fall far short of both the McData and Cisco solutions in the crucial metrics of consistent throughput and predictable latency.
The fundamental, defining characteristic of a director has always been high availability. At its most basic, high availability means that any hardware or software component of the director can fail without affecting the flow of traffic through the director. Years of testing and customer experience since 2002 have proven that the Cisco MDS 9500 Series fully meets these expectations. In contrast, a simple test that anyone can perform shows that competitive products fail this most basic test of high availability.
The test consists of switching off either core routing module of the competitive product and then switching it back on using the slider switch on the module (the module does not need to be removed to see the results). When the core routing module is switched off, frame loss occurs immediately. When it is switched back on, diagnostics run for 30 seconds before the module is brought online, and the moment it comes online, frame loss occurs again. Significant frame loss (more than 300,000 frames per port) occurs on six of the eight 8-Gbps test ports carrying traffic. At 8 Gbps, this is an interruption of almost 1 second.
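The outage figure above can be sanity-checked with simple arithmetic. The sketch below assumes full-size Fibre Channel frames of 2,148 bytes and a usable 8GFC data rate of roughly 800 MB/s per port; neither figure comes from the test report, so treat the result as an order-of-magnitude check.

```python
# Back-of-the-envelope check: how long does losing 300,000 frames take
# at 8-Gbps line rate? Assumptions (not from the test report): full-size
# Fibre Channel frames and 8GFC's approximate usable data rate.
FRAME_BYTES = 2148                # 2,112-byte payload plus header, CRC, delimiters
PORT_RATE_BYTES_PER_SEC = 800e6   # approximate 8GFC data rate per port
LOST_FRAMES = 300_000             # per-port loss observed in the test

frames_per_sec = PORT_RATE_BYTES_PER_SEC / FRAME_BYTES
outage_seconds = LOST_FRAMES / frames_per_sec
print(f"~{frames_per_sec:,.0f} frames/s -> outage of roughly {outage_seconds:.2f} s")
```

Under these assumptions a port moves roughly 370,000 full-size frames per second, so 300,000 lost frames corresponds to an interruption of about 0.8 second, consistent with the "almost 1 second" figure in the text.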
Because the switching core of the competitive product consists of blades built from the same switching application-specific integrated circuit (ASIC) used for the external ports, a switching policy must be configured for the core ASICs. The default policy for Fibre Channel switching is exchange-based routing, which spreads traffic across Inter-Switch Links (ISLs) based on Fibre Channel exchanges. However, for FICON use, IBM requires that the other product be configured to use port-based routing. Recent testing by Miercom (Figure 1) reveals that when port-based routing is deployed in an environment with devices running at a mix of speeds (for example, some at 8 Gbps and some at 4 Gbps), about half the ports run out of buffer credits (Figure 2), and the 8-Gbps ports are slowed to an effective rate of 4 Gbps. Because most environments evolve gradually over time, this mix of speeds will be the norm, not the exception. For example, customers may upgrade their direct-access storage device (DASD) arrays to 8 Gbps before upgrading the channels on their IBM System z. The Cisco MDS 9500 Series delivers consistent throughput with any mix of speeds. A claim of high bandwidth per port means nothing if a solution cannot deliver it.
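The difference between the two routing policies can be illustrated with a toy model. The sketch below is not the actual ASIC logic; it simply hashes flow identifiers over a hypothetical set of four core links to show why exchange-based routing spreads load while port-based routing pins a source/destination pair to a single link for its lifetime.

```python
# Toy model (not the actual ASIC logic) of the two ISL routing policies.
# Exchange-based routing hashes the exchange ID (OX_ID) along with the
# source and destination IDs, so each new exchange can land on a different
# core link; port-based routing pins a source/destination pair to one link.
from collections import Counter

LINKS = 4  # hypothetical number of core links

def exchange_based(sid: int, did: int, oxid: int) -> int:
    return hash((sid, did, oxid)) % LINKS

def port_based(sid: int, did: int) -> int:
    return hash((sid, did)) % LINKS

# One busy channel pair generating 1,000 exchanges:
eb = Counter(exchange_based(0x10, 0x20, oxid) for oxid in range(1000))
pb = Counter(port_based(0x10, 0x20) for _ in range(1000))
print("exchange-based spread:", dict(eb))  # exchanges spread across links
print("port-based spread:", dict(pb))      # all 1,000 exchanges on one link
```

With port-based routing, every exchange of the pinned pair lands on the same core link; a single slow or congested device on that link therefore backs up all traffic sharing it, which is how the mixed-speed slowdown in Figure 1 arises.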
Figure 1. Heavy 4-Gbps Traffic Slows 8-Gbps Traffic to 4 Gbps
When handling large numbers of transactions per second for mission-critical applications, the I/O infrastructure must perform with predictable latency: that is, traffic must take the same amount of time for all round trips between the IBM System z and its I/O devices. Other vendors claim that their latency is 0.7 microsecond when local switching is used (ingress and egress ports on the same ASIC), and 2.1 microseconds when traffic must traverse the core (all other cases). However, under even moderate loads, actual latency on these products can be much worse. The Miercom test filled two of the competing director's line cards, ran each port at only 4 Gbps, and measured the latency on each port. Figure 2 shows that observed latencies varied greatly, frequently reaching peaks of 36 microseconds or higher, more than 10 times what is promised in the vendor's marketing collateral. By contrast, Cisco MDS 9500 Series latency was observed to remain consistent at 13 to 15 microseconds across all ports as loads increased. The same tests showed frequent exhaustion of buffer credits by the competing products, causing I/O to wait until credits are returned.
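Buffer-credit exhaustion and latency are directly linked: with N buffer-to-buffer credits, a port can have at most N frames in flight, so sustained throughput cannot exceed N times the frame size divided by the round-trip time. The numbers in the sketch below are illustrative assumptions, not figures from the Miercom test.

```python
# Rough illustration of why buffer-credit exhaustion caps throughput.
# With N buffer-to-buffer credits, at most N frames can be in flight, so
# sustained throughput <= N * frame_size / round_trip_time.
# All values below are assumptions for illustration only.
FRAME_BYTES = 2148      # full-size Fibre Channel frame
CREDITS = 8             # hypothetical credits available on a congested port
RTT_SECONDS = 50e-6     # hypothetical round trip, inflated by queueing delay

max_bytes_per_sec = CREDITS * FRAME_BYTES / RTT_SECONDS
print(f"credit-limited throughput ~{max_bytes_per_sec / 1e6:.0f} MB/s")
```

Under these assumptions the port is limited to roughly 340 MB/s, well below the approximately 800 MB/s an 8-Gbps port can otherwise sustain, which is why credit starvation shows up in Figure 2 as both throughput loss and latency spikes.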
Figure 2. Wide Variations in Competitive Director Latency and Buffer Credit Blackouts
Migrating from McData to the Cisco MDS 9000 Family Is Simple
Not only is the Cisco MDS 9000 Family the natural successor to the McData product line, but migration from current McData directors to Cisco products is easy to perform, without major disruption to IBM System z operations. Several features of the Cisco MDS 9000 Family make a simple migration methodology possible.
Assume that the old director, a McData 6140, is switch number (domain ID) 0x49, with a number of ports in use.
1. Install the Cisco MDS 9500 Series director in parallel with the old director; do not connect it to anything yet.
2. Create a VSAN on the Cisco MDS 9500 Series director with the same domain ID as the director being replaced.
3. Assign FICON port numbers to the interfaces (physical ports) you want to use on the Cisco MDS 9500 Series director. These should be the same port numbers that are in use on the old director.
4. Vary all channels and devices connected through the old director offline. Be sure to perform this step at a time when the workload is low enough that the paths through the other directors can handle all the traffic during the migration.
5. Carefully following the cabling chart created ahead of time, move all cables from the old director to the new one so that each is connected to the FICON port with the same number as it was before.
6. Finally, vary all devices back online and verify that everything works. You can now dispose of the old director. You have accomplished the migration without making any changes to the I/O configuration data set (IOCDS) on the IBM System z.
Note that when using this methodology, all cables must be moved during the same operation to avoid having two switches active with the same switch number (domain ID).
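Steps 2 and 3 above can be sketched in the Cisco MDS CLI. The fragment below is illustrative only: it assumes an NX-OS release, uses VSAN 73 to match domain ID 0x49 (decimal 73), and assumes the FICON ports to be reused are on the line card in slot 1; adapt the VSAN number, domain ID, and slot/port ranges to your environment.

```
switch# configure terminal
switch(config)# feature ficon                       ! enable FICON (NX-OS)
switch(config)# vsan database
switch(config-vsan-db)# vsan 73 name FICON_MIGRATION
switch(config-vsan-db)# exit
switch(config)# fcdomain domain 73 static vsan 73   ! static domain ID: 0x49 = decimal 73
switch(config)# ficon vsan 73                       ! make VSAN 73 a FICON VSAN
switch(config)# ficon slot 1 assign port-numbers 0-15
```

Because FICON requires the domain ID to be static, the `fcdomain ... static` line matters: it guarantees the new VSAN comes up with the same switch number the IOCDS already references, which is what makes the cable move transparent to the IBM System z.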
• Failure, or even a simple power-off and reset, of a switching core of a competitive director can cause application outages of up to 30 seconds.
• Competitive directors do not deliver consistent throughput in mixed-speed FICON environments, which are common in long-lived IBM System z installations.
• Competitive director latency can vary greatly under even moderate load, with variations of up to 30:1.
• The Cisco MDS 9500 Series is a natural migration path for McData customers, and migrating from the McData directors to Cisco MDS 9500 Series directors can be accomplished simply, without disruptive changes to the IBM System z I/O configuration.