Advanced FICON Features for Scalable, Reliable SANs

Key Performance Metrics in the IBM System z Environment


IBM System z customers invest in mainframes because they deliver high performance consistently under heavy workloads. To maintain that performance, they require the same consistency from their I/O infrastructure. McData directors delivered it, and Cisco MDS 9500 Series Multilayer Directors provide even better consistency of performance. The same cannot be said of competitive products, however: testing has shown that competitive 8-Gbps directors fall far short of both the McData and Cisco solutions in the crucial metrics of consistent throughput and predictable latency.

High Availability


High availability means that any hardware or software component of the director can fail without affecting the flow of traffic through the director. Years of testing and customer experience since 2002 have proven that the Cisco MDS 9500 Series fully meets this expectation. In contrast, a simple test shows that competitive products fail this most basic requirement of high availability.

The test consists of switching off either core routing module in the competitive product and then switching it back on using the slider switch on the module. (The module does not need to be removed to see the results.)

When the core routing module is switched off, frame loss occurs immediately. When it is switched back on, diagnostics run for 30 seconds before the module is brought online, and the moment it comes online, frames are lost again. Significant frame loss (more than 300,000 frames per port) occurs on six of the eight 8-Gbps test ports carrying traffic. At 8 Gbps, this amounts to an interruption of almost 1 second.
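To put the numbers in perspective, a rough calculation shows why 300,000 lost frames corresponds to nearly a second of interruption. The Python sketch below assumes full-size 2,148-byte Fibre Channel frames and the nominal 800-MBps payload rate of an 8-Gbps link; both values are illustrative assumptions rather than figures from the test report.

# Rough check: how long is a loss of 300,000 full-size Fibre Channel
# frames at 8 Gbps? Assumes 2,148-byte frames and the nominal 800-MBps
# payload rate of 8-Gbps Fibre Channel (illustrative values only).

FRAME_BYTES = 2148            # maximum FC frame: payload plus headers and CRC
LINK_BYTES_PER_SEC = 800e6    # nominal 8-Gbps FC throughput per direction
FRAMES_LOST = 300_000

frames_per_sec = LINK_BYTES_PER_SEC / FRAME_BYTES   # roughly 372,000 frames/s
interruption_sec = FRAMES_LOST / frames_per_sec     # roughly 0.8 s

print(f"About {frames_per_sec:,.0f} frames per second at line rate")
print(f"Interruption of roughly {interruption_sec:.2f} seconds")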

Consistent Throughput


Because the switching core of the competitive product consists of blades that string together the same switching application-specific integrated circuits (ASICs) used for the external ports, a switching policy must be configured for the core ASICs. The default policy for Fibre Channel switching is exchange-based routing, which spreads traffic across Inter-Switch Links (ISLs) based on Fibre Channel exchanges. For FICON use, however, IBM requires that the competitive product be configured for port-based routing.
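The practical difference between the two policies is which frame fields select the ISL. The short Python sketch below is an illustration only, not vendor code: exchange-based routing hashes the source ID, destination ID, and originator exchange ID (OX_ID), so different exchanges between the same pair of devices can take different links, while port-based routing omits the OX_ID, pinning all traffic between a pair to a single link so frames stay in order.

# Illustrative comparison of the two routing policies (not vendor code).
# Exchange-based routing hashes S_ID, D_ID, and OX_ID, so each exchange
# may take a different ISL; port-based routing ignores OX_ID, so every
# frame between a given source and destination uses the same ISL.

def exchange_based_isl(s_id: int, d_id: int, ox_id: int, num_isls: int) -> int:
    return hash((s_id, d_id, ox_id)) % num_isls

def port_based_isl(s_id: int, d_id: int, num_isls: int) -> int:
    return hash((s_id, d_id)) % num_isls

NUM_ISLS = 4
S_ID, D_ID = 0x010200, 0x020300   # example Fibre Channel addresses

# Ten exchanges between the same pair spread across the ISLs...
print([exchange_based_isl(S_ID, D_ID, ox_id, NUM_ISLS) for ox_id in range(10)])
# ...while port-based routing pins them all to one ISL.
print([port_based_isl(S_ID, D_ID, NUM_ISLS) for _ in range(10)])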

Recent testing by Miercom reveals that when port-based routing is deployed in an environment with devices running at a mix of speeds (for example, some at 8 Gbps and some at 4 Gbps), about half the ports run out of buffer credits, and the 8-Gbps ports are slowed to an effective rate of 4 Gbps.
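Buffer-to-buffer flow control explains how a speed mismatch can exhaust credits: an 8-Gbps port delivers frames roughly twice as fast as a 4-Gbps port can accept them, so the sender's credit pool drains faster than it refills and transmission stalls. The toy model below uses assumed values (32 credits, full-size frames) purely to show how quickly the pool empties; it does not reproduce the Miercom test.

# Toy model of buffer-to-buffer credit exhaustion when an 8-Gbps sender
# feeds a 4-Gbps receiver. The credit count and frame size are assumed
# values for illustration only.

FRAME_BYTES = 2148        # full-size Fibre Channel frame
CREDITS = 32              # assumed buffer-to-buffer credits granted to the sender
SEND_RATE = 800e6         # 8-Gbps payload rate, bytes per second
DRAIN_RATE = 400e6        # 4-Gbps payload rate, bytes per second

send_interval = FRAME_BYTES / SEND_RATE    # time to transmit one frame
drain_interval = FRAME_BYTES / DRAIN_RATE  # time for the receiver to free one credit

# Each frame sent consumes one credit; credits come back at the slower
# drain rate, so the pool shrinks by a fraction of a credit per frame.
deficit_per_frame = 1 - send_interval / drain_interval    # 0.5 credits per frame
frames_until_stall = CREDITS / deficit_per_frame

print(f"Sender stalls after about {frames_until_stall:.0f} frames "
      f"({frames_until_stall * send_interval * 1e6:.0f} microseconds)")

Once the credit pool is empty, the sender can transmit only as fast as credits return, which is the 4-Gbps drain rate; this is the effective throughput drop described above.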

Because most environments evolve gradually, this mix of speeds is the norm, not the exception. For example, you may upgrade your direct-access storage device (DASD) arrays to 8 Gbps before upgrading the channels on your IBM System z. The Cisco MDS 9500 Series delivers consistent throughput with any mix of speeds.

Predictable Latency


When handling large numbers of transactions per second for mission-critical applications, the I/O infrastructure must perform with predictable latency; that is, traffic must take the same amount of time for all round trips between the IBM System z and its I/O devices.

Other vendors claim that their latency is 0.7 microsecond when local switching is used (ingress and egress ports on the same ASIC), and 2.1 microseconds when traffic must traverse the core (all other cases). However, under even moderate loads, actual latency on these products can be much worse.

The Miercom test filled two of the competing product's line cards, ran each port at only 4 Gbps, and measured the latency on each port. Cisco MDS 9500 Series latency remained consistent at 13 to 15 microseconds across all ports as loads increased. The same tests showed frequent exhaustion of buffer credits on the competing product, forcing I/O to wait until credits were freed.