Cisco ONS 15500 Series

Cisco ONS 15500 DWDM: IBM Enterprise Environments, Distances Supported

WHITE PAPER

Connectivity Distances for IBM Enterprise Environments

This paper lists the distances supported by IBM system configurations that include Cisco ONS 15500 Series platforms. For distances greater than the maximum IBM-supported distance, a request for price quotation (RPQ) must be submitted to IBM for approval.

Exponential growth in data storage requirements and regulatory mandates in the banking, health care, and insurance industries are driving the need for high levels of system availability, data backup, and extensive recovery capabilities for business continuance. Business continuance applications are generally implemented in a minimum of two separate locations in a metropolitan area; the distance between them depends on specific customer requirements.
Figure 1 shows a high-availability solution implemented with dense wavelength division multiplexing (DWDM) equipment in a metropolitan-area network (MAN). The solution includes IBM data center products (servers, storage, Sysplex Timer, coupling facility, and so on) connected by Cisco ONS 15500 Series platforms.

Figure 1

Cisco and IBM GDPS Solution

DISTANCE TESTING DESCRIPTION

Data center test scenarios covering the protocols listed in Table 1 were executed. Additionally, Cisco conducted tests at distances up to 180 kilometers (km) (111.85 miles) for protocols such as FICON. For distances greater than the maximum supported by IBM (last column in Table 1), an RPQ must be submitted to IBM.

Table 1. Summary of IBM Protocol Distances¹

| Protocol/Connection | Protocol Function | Fiber Type | Protocol Speed | Maximum Unrepeated Distance Supported by IBM | Maximum Distance Supported by IBM with Use of Cisco ONS 15500 DWDM²,³ |
| --- | --- | --- | --- | --- | --- |
| ESCON (MM) | Channel connection to I/O devices | MM | 200 Mbps | 5 km (3.1 miles) | 100 km (62.14 miles) |
| ETR (MM) | Server to Sysplex Timer | MM | 8 Mbps⁴ | 3 km (1.86 miles) | 100 km (62.14 miles) |
| CLO (MM) | Sysplex Timer to Sysplex Timer | MM | 8 Mbps⁴ | 3 km (1.86 miles) | 40 km (24.86 miles) |
| ISC1 and ISC3 Compatibility Mode | Server to Coupling Facility | SM | 1.0625 Gbps | 20 km (12.4 miles) | 40 km (24.86 miles) |
| ISC3 Peer Mode | Server to Coupling Facility | SM | 2.125 Gbps | 12 km (7.44 miles) | 100 km (62.14 miles) |
| FICON/FC | Channel connection to I/O devices/LX | SM | 1.0625 Gbps | 20 km (12.4 miles) | 100 km (62.14 miles) |
| FICON Bridge | FICON channel connection between server and FICON Bridge Adapter located in ESCON Director (9032 mod 5) | SM | 1.0625 Gbps | 20 km (12.4 miles) | 100 km (62.14 miles) |
| FICON Express/FC | Channel connection to I/O devices/LX | SM | 2.125 Gbps | 12 km (7.44 miles) | 100 km (62.14 miles) |
| ESCON XDF | Director to Director | SM | 200 Mbps | 20 km (12.4 miles) | 80 km (49.71 miles) |
| PPRC/PPRC-XD (ESCON) | ESS (Shark) to ESS (Shark) | N/A | N/A | 5 km (3.1 miles) | 103 km (64.14 miles) |
| PPRC (1 Gig/2 Gig Fibre Channel) | ESS (Shark) to ESS (Shark) | N/A | N/A | 20 km (12.4 miles)/10 km (6.2 miles) | 103 km (64.14 miles)⁵ |
| XRC (ESCON) | ESCON-based data replication | N/A | N/A | 5 km (3.1 miles) | 80 km (49.71 miles) |
| XRC (FICON 1 Gig/2 Gig) | FICON-based data replication | N/A | N/A | 20 km (12.4 miles)/12 km (7.44 miles) | 100 km (62.14 miles) |
| Peer-to-Peer VTS (ESCON) | ESCON-based data replication | N/A | N/A | 5 km (3.1 miles) | 50 km (31.07 miles) |
| Peer-to-Peer VTS (1 Gig/2 Gig FICON) | FICON-based data replication | N/A | N/A | 20 km (12.4 miles)/12 km (7.44 miles) | 100 km (62.14 miles) |

1. Distance is dependent on optical budget and application performance.
2. May require an IBM RPQ; please contact your local IBM representative.
3. Supported distance may require amplification, depending on the link budget.
4. Cisco ONS 15540 transparency supports 16 Mbps to 2.5 Gbps, but specifically supports ETR/CLO at 8 Mbps.
5. PPRC with Fibre Channel capability at distances up to 300 km and beyond requires an IBM RPQ based on the specific customer configuration.
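Footnotes 1 and 3 note that the supported distance ultimately depends on the optical link budget. A minimal sketch of that calculation follows; the attenuation, connector loss, margin, and 30 dB budget figures are illustrative assumptions, not Cisco ONS 15500 specifications.

```python
# Hedged sketch: check whether an unamplified DWDM span fits an optical
# link budget. All figures below are illustrative assumptions.

FIBER_LOSS_DB_PER_KM = 0.25   # assumed attenuation for 1550 nm SM fiber
CONNECTOR_LOSS_DB = 0.5       # assumed loss per connector pair
NUM_CONNECTORS = 4            # assumed patch-panel connections on the span
SAFETY_MARGIN_DB = 3.0        # assumed design margin

def span_loss_db(distance_km: float) -> float:
    """Total estimated loss across the span."""
    return (distance_km * FIBER_LOSS_DB_PER_KM
            + NUM_CONNECTORS * CONNECTOR_LOSS_DB
            + SAFETY_MARGIN_DB)

def fits_budget(distance_km: float, budget_db: float) -> bool:
    """True if the span can run unamplified within the optical budget."""
    return span_loss_db(distance_km) <= budget_db

# With an assumed 30 dB budget, a 100 km span needs 25 + 2 + 3 = 30 dB.
print(span_loss_db(100.0))        # 30.0
print(fits_budget(100.0, 30.0))   # True
print(fits_budget(180.0, 30.0))   # False -> amplification required
```

When the computed span loss exceeds the budget, amplification is needed, which is why the longer tested distances (up to 180 km) may require it.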

DEFINITION OF TERMS AND PROTOCOLS

ESCON-Enterprise System Connection is a 200-Mbps unidirectional serial bit transmission protocol used to dynamically connect mainframes with their various control units. The ESCON connection provides nonblocking access through either point-to-point connections or high-speed switches, called ESCON Directors. Figure 2 illustrates an ESCON half-duplex data stream.

Figure 2

ESCON Frame Process
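The half-duplex frame exchange illustrated above is why ESCON throughput falls off with distance: each exchange must wait for a fiber round trip before the next transfer. A rough model of this effect follows; the frame size and propagation speed are assumptions for illustration, not ESCON specification values.

```python
# Hedged sketch of throughput "droop" in a half-duplex, interlocked
# protocol such as ESCON: each frame exchange waits one round trip.

LIGHT_SPEED_IN_FIBER_KM_PER_S = 2.04e5   # assumed, roughly c / 1.47
LINK_RATE_BPS = 200e6                    # ESCON serial rate, 200 Mbps
FRAME_BYTES = 1024                       # assumed payload per exchange

def effective_throughput_mbps(distance_km: float) -> float:
    """Throughput when every frame waits for a round-trip acknowledgment."""
    transfer_s = FRAME_BYTES * 8 / LINK_RATE_BPS
    rtt_s = 2 * distance_km / LIGHT_SPEED_IN_FIBER_KM_PER_S
    return FRAME_BYTES * 8 / (transfer_s + rtt_s) / 1e6

print(round(effective_throughput_mbps(5), 1))    # ~91.0 Mbps at 5 km
print(round(effective_throughput_mbps(100), 1))  # ~8.0 Mbps at 100 km
```

The absolute numbers depend on the assumed frame size and pacing, but the trend matches the table: extending ESCON over DWDM to 100 km is possible, while application performance degrades with distance (footnote 1).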

FICON bridge-A FICON bridge connection is a link between a FICON adapter in a zSeries or S/390 server and a FICON bridge adapter in an IBM 9032 model 5 ESCON Director. The FICON bridge link supports up to eight ESCON channels and runs at a speed of 1.0625 Gbps. Figure 3 illustrates a full-duplex data stream.
Figure 3
FICON Bridge Frame Process

FICON-Fiber CONnection is a bidirectional channel protocol used to connect mainframes directly to FICON control units, FICON directors, or ESCON aggregation switches (ESCON Directors with a bridge card). A FICON connection runs at a data rate of 1.0625 Gbps. One of the advantages of FICON is that performance does not degrade at distances up to 100 km, although actual throughput depends on the application. FICON Express runs at a data rate of 2.125 Gbps.
Figure 4 illustrates a FICON connection.
Figure 4
Native FICON Frame Process
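Unlike the half-duplex ESCON exchange, the full-duplex FICON stream shown above can keep multiple frames in flight, so the link stays busy provided enough frames are outstanding to cover the round trip. A small sketch of that condition follows; the propagation speed and frame size are illustrative assumptions.

```python
# Hedged sketch: how many outstanding frames a full-duplex link such as
# FICON needs to keep the pipe full over a given distance. Parameters
# are illustrative assumptions, not FICON specification values.
import math

LIGHT_SPEED_IN_FIBER_KM_PER_S = 2.04e5   # assumed, roughly c / 1.47
LINK_RATE_BPS = 1.0625e9                 # FICON data rate
FRAME_BYTES = 2048                       # assumed frame payload

def frames_in_flight_needed(distance_km: float) -> int:
    """Outstanding frames required so transmission never stalls."""
    rtt_s = 2 * distance_km / LIGHT_SPEED_IN_FIBER_KM_PER_S
    frame_time_s = FRAME_BYTES * 8 / LINK_RATE_BPS
    return math.ceil(rtt_s / frame_time_s)

print(frames_in_flight_needed(10))   # short span: few frames suffice
print(frames_in_flight_needed(100))  # 100 km span: many more required
```

This is why FICON performance holds up to 100 km only when the endpoints provide sufficient buffering; with too few frames outstanding, throughput drops just as it does for ESCON.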

XDF-Extended Distance Facility is a single-mode ESCON connection between ESCON Directors with a data rate of 200 Mbps.
InterSystem Coupling-Coupling facility links, also known as InterSystem Channel (ISC) links, are used to connect mainframes. The coupling facility is used by multiple mainframes to share data in a sysplex or Parallel Sysplex environment. This data sharing capability is key to the high availability features of a GDPS. Coupling links run at data rates of 1.0625 Gbps (ISC1 or ISC3 compatibility) and 2.125 Gbps (ISC3 peer).
Sysplex Timer-Sysplex Timer links are the links used to provide the clock synchronization between the mainframes in a Parallel Sysplex. Two types of links are used. The first is the link between each mainframe and the Sysplex Timer, known as the external timer reference (ETR) link. The second is the link between redundant Sysplex Timers, referred to as the control link oscillator (CLO) link. In a high availability GDPS environment, redundant Sysplex Timers are connected to each mainframe over ETR links, and the timers are connected to each other over the CLO links. This protocol operates at 8 Mbps.
PPRC-Peer-to-Peer Remote Copy is a hardware-based disaster recovery solution that provides real-time mirroring of logical volumes within an IBM Enterprise Storage Server (ESS) or to another ESS, which can be located up to 103 km (64 miles) from the primary device. PPRC is a synchronous copy solution in which write operations are made to both copies (primary and secondary) before they are considered complete.
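Because a synchronous write completes only after the secondary acknowledges, every write pays at least one fiber round trip of extra latency, which grows linearly with distance. A minimal sketch of that penalty, assuming roughly 5 microseconds of one-way propagation per km of fiber:

```python
# Hedged sketch: added write latency from fiber propagation alone in a
# synchronous mirror such as PPRC. The per-km delay is an assumption;
# real protocol exchanges may require more than one round trip.

US_PER_KM_ONE_WAY = 5.0  # assumed propagation delay in single-mode fiber

def sync_mirror_penalty_us(distance_km: float, round_trips: int = 1) -> float:
    """Microseconds added to each write by fiber propagation."""
    return 2 * distance_km * US_PER_KM_ONE_WAY * round_trips

# At the 103 km PPRC limit, one round trip alone adds about a millisecond:
print(sync_mirror_penalty_us(103))  # 1030.0 microseconds
```

This latency growth is the practical reason synchronous PPRC is bounded at metropolitan distances, while the asynchronous options below are not.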
PPRC-XD-A non-synchronous long distance copy option suitable for data migration and periodic offsite backup. PPRC-XD can operate at very long distances-distances beyond the 103 km (64 miles) supported with PPRC synchronous transmissions-with the distance typically limited only by the capabilities of the network and channel extension technologies.
XRC-Extended remote copy is a combined hardware and software asynchronous copy technology.
Parallel Sysplex-A clustering technology that enables resource sharing and dynamic workload balancing. Parallel Sysplex can be implemented to manage workloads in a single site or in multiple sites to achieve high levels of availability.
GDPS-Geographically Dispersed Parallel Sysplex complements a multisite Parallel Sysplex by providing a single, automated solution to dynamically manage storage subsystem mirroring, processors, and network resources, allowing a business to attain "near continuous availability" and "near transparent business continuity (disaster recovery)" without data loss. GDPS is designed to minimize and potentially eliminate the effect of any failure, including disasters or a planned site outage. It provides the ability to perform a controlled site switch for both planned and unplanned site outages with no data loss, maintaining full data integrity across multiple volumes and storage subsystems, and to perform a normal Data Base Management System (DBMS) restart-not DBMS recovery-at the opposite site. GDPS is application independent and therefore covers the customer's complete application environment.
GDPS/PPRC-The physical topology of a GDPS/PPRC consists of a base or Parallel Sysplex cluster spread across two sites (known as site 1 and site 2 in this paper) separated by up to 40 km of fiber-approximately 25 miles-with one or more z/OS or OS/390 systems at each site. The multisite Parallel Sysplex cluster must be configured with redundant hardware (for example, a coupling facility and a Sysplex Timer in each site), and the cross-site connections must be redundant. An RPQ is available for extending GDPS/PPRC implementations up to 100 km. This 100 km distance extension can be attained by 1) placing one of the Sysplex Timers at a midpoint location within 40 km of the second Sysplex Timer, or 2) locating both Sysplex Timers on a single campus. Either implementation requires the 100 km RPQ. All critical data resides on storage subsystems in site 1 (the primary copy of data) and is mirrored to site 2 (the secondary copy of data) via PPRC synchronous remote copy.
GDPS/PPRC has the following attributes:

• Continuous availability solution

• Near transparent disaster recovery solution

• Recovery time objective (RTO) less than one hour

• Recovery point objective (RPO) of zero

• Protects against metropolitan area disasters

GDPS/XRC-The physical topology of a GDPS/XRC consists of a base or Parallel Sysplex cluster in the application site (site 1). The recovery site (site 2) can be located at virtually any distance from site 1. During normal operations, the XRC System data mover (SDM) will be executing in site 2.
All critical data on storage subsystems in site 1 (the primary copy of data) is mirrored to site 2 (the secondary copy of data) via XRC asynchronous remote copy.
GDPS/XRC has the following attributes:

• Disaster recovery solution

• RTO between one and two hours

• RPO less than two minutes

• Protects against metropolitan as well as regional network disasters (distance between sites is unlimited)

• Minimal remote copy performance impact
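The two GDPS variants trade recovery point for distance. The contrast described above can be summarized as data, with a simple selection rule; the attribute values restate the paper's figures, while the rule itself is an illustrative simplification, not IBM guidance.

```python
# Hedged sketch: GDPS variant attributes as stated in this paper, plus a
# simplified (hypothetical) selection helper for illustration only.

GDPS_PPRC = {"copy": "synchronous (PPRC)", "rto_hours": 1.0,
             "rpo_minutes": 0.0, "max_sysplex_km": 40}
GDPS_XRC = {"copy": "asynchronous (XRC)", "rto_hours": 2.0,
            "rpo_minutes": 2.0, "max_sysplex_km": None}  # None = unlimited

def suggest_variant(distance_km: float, rpo_zero_required: bool) -> str:
    """Illustrative rule: zero RPO within sysplex distance favors PPRC."""
    if rpo_zero_required and distance_km <= 40:
        return "GDPS/PPRC"
    # Longer distances (or a relaxed RPO) favor the asynchronous variant,
    # even though GDPS/XRC cannot deliver an RPO of zero.
    return "GDPS/XRC"

print(suggest_variant(30, rpo_zero_required=True))    # GDPS/PPRC
print(suggest_variant(500, rpo_zero_required=False))  # GDPS/XRC
```

In practice the choice also depends on the RPQ options described earlier (for example, the 100 km GDPS/PPRC extension), so any real deployment decision belongs with IBM and Cisco representatives.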

Peer-to-Peer VTS-The IBM Virtual Tape Server (VTS) product line includes the Virtual Tape Controller (VTC) and VTS Tape Subsystem. Implementations include tape storage at a single location or a primary and secondary location for disaster recovery applications. Both ESCON and FICON configurations are supported.