
IBM Networking

SNASw Migration Scenarios

Table Of Contents

SNASw Migration Scenarios

Scenario 1—Large Branch Network Migration to SNASw EE

Business Requirements

Network Analysis

Design Rationale

The Migration

The Old Network versus the New Network

Scenario 2—Data Center and Remote Branch Migration from APPN NN and DLSw to SNASw EE

Business Requirements

Network Analysis

Design Rationale

The Migration

The Old Network versus the New Network

Scenario 3—Data Center and Remote Branch Migration from IBM 950 NNs and APPN (PSNA) to SNASw

Business Requirements

Network Analysis

Design Rationale

The Migration

The Old Network versus the New Network


Chapter 4

SNASw Migration Scenarios


Scenario 1—Large Branch Network Migration to SNASw EE

A large insurance company had recognized that migrating to an IP backbone and using TCP/IP as the primary transport protocol would lead to enormous productivity increases while shortening application development cycles and lowering costs. The company executives were aware of the alliance between Cisco and IBM and wanted to take advantage of the cooperative engineering teams to design a new network consisting of Cisco routers and IBM enterprise servers outfitted with OSA-Express Gigabit Ethernet LAN adapters.

Business Requirements

Any proposed network design had to incorporate a number of requirements to accomplish the business goals for this large insurance company:

Support more than 2000 remote branch sites

Handle SNA traffic between AS/400 systems at each branch and the main data center

Allow for efficient connectivity for branch-to-branch SNA and IP traffic

Permit easy rollout of new TCP/IP applications without impacting performance and throughput for current SNA-based applications

Allow for a methodical, planned replacement of all IBM routers with Cisco network platforms while maintaining connectivity during the migration period

Design for management from both an SNA (data center operator) perspective and an IP (network operations) perspective

Network Analysis

To create the new network design, the engineers used a series of questions to help analyze the current network environment:

Is APPN required?—APPN (and APPN/HPR) was already in place throughout the network. Because one of the major goals of the new network was to migrate to an all-IP backbone, replacing APPN routing with an SNA-over-IP solution was paramount.

How do we design for maximum availability and scalability?—The network had to support more than 2000 branch locations. The current network used APPN and OSPF routing. It was decided to use OSPF routing in the new design and to supplement the network routing by running OSPF on the data center hosts, so that alternate routing paths to the data hosts could be established dynamically. This meant that a hierarchical OSPF design with core, distribution, and access layers would best meet the performance, reliability, and availability requirements for this customer.

How should we connect the Cisco data center routers to the IBM hosts?—The customer, IBM, and Cisco jointly determined that the best solution was to design a redundant switching and routing network front-ending the data center hosts. The data center hosts would form a sysplex environment with multiple LPARs and interconnected CPUs. Access to the hosts would be via OSA-Express Gigabit Ethernet interfaces attached to Cisco Catalyst 6500 Layer 3 series switches outfitted with MSFCs. This combination would handle both SNA-based application traffic and TCP/IP-based application traffic over a single IP backbone. Because the hosts were already APPN NNs running OS/390 V2R8, it was decided that an all-IP network could be implemented using EE.
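
As a rough illustration of this attachment, a minimal MSFC configuration sketch follows. The VLAN number, addressing, and OSPF process number are hypothetical and are not taken from the customer design; the switched Gigabit Ethernet ports connecting to the OSA-Express adapters are assumed to be placed in the same VLAN.

! Routed VLAN interface on the Catalyst 6500 MSFC facing the
! OSA-Express Gigabit Ethernet ports (addresses are illustrative)
interface Vlan100
 description Toward sysplex OSA-Express Gigabit Ethernet adapters
 ip address 10.10.100.2 255.255.255.0
!
! Advertise the host-attached subnet into the backbone OSPF
router ospf 1
 network 10.10.100.0 0.0.0.255 area 0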

What should we use for transporting SNA traffic from remote sites to the data center?—EE requires that HPR/IP be used as the transport mechanism; SNA application data is transported in IP/UDP frames. An additional benefit is that the host automatically sets the IP precedence (ToS) bits according to the COS table entries defined for the session requests. This melding of the SNA COS prioritization scheme with the IP network prioritization scheme provides the best opportunity for maintaining current response-time requirements while allowing the company to move forward with its plans for new IP applications and integrated voice, video, and data.
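
Because the IP precedence is already set by the host, the routers in the path only need to honor it with an appropriate queuing policy. A minimal sketch follows; the class names, bandwidth percentages, and interface are hypothetical, and the precedence values assume the common EE defaults (network, high, medium, and low priority mapped to IP precedence 6, 4, 2, and 1). EE traffic can also be identified by its default UDP port range of 12000 through 12004 if classification by port is preferred.

class-map match-any EE-NETWORK
 match ip precedence 6
class-map match-any EE-HIGH
 match ip precedence 4
class-map match-any EE-MEDIUM
 match ip precedence 2
class-map match-any EE-LOW
 match ip precedence 1
!
policy-map WAN-EDGE
 class EE-NETWORK
  bandwidth percent 10
 class EE-HIGH
  bandwidth percent 30
 class EE-MEDIUM
  bandwidth percent 20
 class EE-LOW
  bandwidth percent 10
!
interface Serial0/0
 service-policy output WAN-EDGE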

Where should we place SNA features such as DLSw+ and SNASw?—As noted previously, the current network already implemented APPN routing at Layer 2. There were also significant branch-to-branch SNA traffic requirements. This need, coupled with the desire for an all-IP Layer 3 network, led to a decision to implement SNASw at the branch access level. The IP backbone network would form an SNASw connection network for APPN traffic. The data center host NN server would make SNA routing decisions, and the actual data path for SNA application session traffic would be established over dynamically built links across the IP connection network.
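
A minimal branch SNASw configuration along these lines might look as follows. The CP name, DLUS names, addresses, and interface names are hypothetical, and exact command options vary by Cisco IOS Software release.

! Control point identity and the DLUS pair in the data center
snasw cpname NETA.BRNCH001
snasw dlus NETA.CMCDLUS1 backup NETA.CMCDLUS2
!
! HPR/IP (EE) port anchored on a stable loopback address, with one
! predefined link to the data center NN server; links to other branches
! are built dynamically over the IP connection network
snasw port EEPORT hpr-ip Loopback0
snasw link CMCHOST port EEPORT ip-dest 10.10.100.10
!
! LAN port toward the branch AS/400 and other local SNA devices
snasw port BRANLAN FastEthernet0/0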

How will we manage the integrated network?—Because the customer had data center operators who were familiar with mainframe-based Tivoli NetView for OS/390 and wanted visibility into the IP router network, Cisco Internetwork Status Monitor (ISM) would be installed and configured to use SNMP to extract and display status information in a familiar form for these users. CiscoWorks2000 Routed WAN Bundle would also be installed on a UNIX server. This software would be used by the network operations staff for inventory management, configuration management, Cisco IOS Software management, and network troubleshooting.
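
The router-side SNMP configuration that gives ISM and CiscoWorks2000 this visibility is small; the sketch below is illustrative only, with a hypothetical community string and management station address.

snmp-server community netops-ro RO
snmp-server host 10.20.30.40 version 2c netops-ro
snmp-server enable traps snmp linkdown linkup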

Design Rationale

Based on the objectives of this large organization and an analysis of its current network design, Cisco engineers derived the following general points that would drive this new network design process:

A hierarchical design would be stable, scalable, and manageable

Parallel host connectivity would create an easy migration

The plan must permit the redesign of the old IP addressing scheme to support the new design

The plan must permit redesign of a Frame Relay network for easy migration

A converged IP network would support current SNA traffic and allow rapid deployment of new IP-based applications

SNASw played a large role in this network migration design because of its many features: BX, EE, HPR, automatic COS-to-IP precedence mapping, Layer 3 IP end-to-end routing, and Layer 4 HPR for reliability. Scalability of this network design was of primary importance; therefore, SNASw was used because of its EN/NN emulation and the effect this has in an IP network:

NNs run only in VTAM CMCs

Topology updates and broadcast locates are eliminated from the WAN

IP connection network attachment

Dynamic link setup using connection network

The EE feature of SNASw would provide SNA-over-IP transport throughout the design because of its reliability, its ability to support nondisruptive rerouting of SNA sessions around failures, and its ability to support branch-to-branch traffic routed through the region.

The Migration

The migration involved two phases: Phase 1 IP-enabled the data center, and Phase 2 migrated the branch network. The overall migration was carried out in five steps:

Step 1. Install data center WAN routers (see Figure 4-1)

Step 2. Install regional backbone routers (see Figure 4-2)

Step 3. Install regional distribution routers (see Figure 4-3)

Step 4. Convert the branches and then the regions (see Figure 4-4)

Step 5. Decommission the region IBM 2216 routers (see Figure 4-5)

Figure 4-1 Install Data Center WAN Routers

When the data center was prepared and the OSA-Express adapters were installed, the network migration could begin. The first step involved creating parallel network access to the sysplex using Cisco Catalyst 6500 switches. Cisco 7200 Series WAN backbone routers were installed and connected to newly provisioned circuits into the Frame Relay network.

Figure 4-2 Install Regional Backbone Routers

The next step was to install new equipment at each of the regional sites. First the regional backbone routers were added to the first of 10 regions. Connectivity was established back to the data center WAN backbone routers.

Figure 4-3 Install Regional Distribution Routers

Next, the regional distribution routers were installed and configured in the first region. Each distribution layer router would support 50 remote branches with ISDN backup. All of the WAN connections were installed and prepared for the later branch network cutover. This step also involved modifying the old IBM 2216 regional router to establish connectivity to the new network. Each of the 10 regions would be configured in much the same way.
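
On the branch end of one of these WAN links, an ISDN dial backup arrangement could be sketched as follows. The interface numbers, addressing, dial string, and ISDN switch type are hypothetical.

! Primary path: Frame Relay PVC toward the regional distribution router
interface Serial0/0
 encapsulation frame-relay
!
interface Serial0/0.101 point-to-point
 description Frame Relay PVC to regional distribution router
 ip address 10.101.1.2 255.255.255.252
 frame-relay interface-dlci 101
 backup interface BRI0/0
 backup delay 10 60
!
! Backup path: ISDN BRI dialed only when the primary fails
interface BRI0/0
 description ISDN dial backup to the distribution site
 ip address 10.201.1.2 255.255.255.252
 encapsulation ppp
 isdn switch-type basic-net3
 dialer string 4965551234
 dialer-group 1
!
dialer-list 1 protocol ip permit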

Figure 4-4 Convert the Branches and then the Regions

Next, each new Cisco 2600 Series branch router was installed and the branch LAN connection moved over from the old IBM 2210 router. Branch router connectivity to the data center now traveled over the new network while connectivity to branches not yet migrated flowed through the connection to the old IBM 2216 at the regional site.

Figure 4-5 Decommission the Regional IBM 2216 Routers

As each of the regions completed its branch migrations, the IBM 2216s at the regions were decommissioned. The result was a new network using SNASw EE at the branch, with improved performance and lower operating cost because of the redesigned Frame Relay network (before, many more PVCs were necessary to support the partial-mesh APPN network).

The Old Network versus the New Network

Figures 4-1 through 4-5 depict the network before, during, and after the migration. Figure 4-5 depicts the final newly designed network. The customer achieved all of the goals that were established during design planning sessions. The new IP backbone network provided reliable, efficient data transport. As business needs arise, various QoS and queuing mechanisms can be employed to classify traffic.

The network provided efficient transport of SNA-application traffic using ANR routing at the edge. It also provided RTP connections using Responsive Mode adaptive rate-based flow control (ARB-2) over HPR/IP between SNASw endpoints and between SNASw and the data center hosts. This arrangement resulted in high traffic throughput and system redundancy.

The IP-based data center connectivity resulted in a very high-performance connection to the sysplex environment. The required availability and flexibility were achieved with redundant switches and routers, IBM OSA-Express adapters, and OSPF routing.

The end result was a network that allowed this large insurance company to better serve its customers and to roll out new applications that would increase productivity and service levels at a rate not possible before with the old network design.

Scenario 2—Data Center and Remote Branch Migration from APPN NN and DLSw to SNASw EE

The organization was a provider of fully integrated local, long distance, and Internet services for international customers in more than 65 countries. The company offered virtual private network (VPN) solutions as well as security, customer care, Web hosting, multicasting, and e-commerce services.

The data center had more than 40 LPARs distributed over four geographically separate centers. The host traffic consisted of 75 percent SNA/APPN data supported on a dedicated DS-3 network.

Business Requirements

The organization had three business objectives: improve network availability, reduce total network costs, and improve overall network performance. The business requirements that were derived from these objectives were:

Enable dynamic nondisruptive rerouting of SNA session traffic

Reduce or eliminate DLSw routers between the remote sites and data center mainframes

Consolidate equipment and increase network scalability by eliminating APPN NN/DLUR routers at the aggregation locations

Remove the resource-intensive DLSw function from the ABR routers to enable the ABR routers to accommodate VoIP traffic workload

Exploit efficient, reliable, direct IP routing to each data center for optimal SNA application access

Network Analysis

To create the new network design, the engineers used a series of questions to help analyze the current network environment. These design questions were briefly discussed in the previous section.

Is APPN required?—As in the previous case study, APPN and APPN/HPR were already enabled in the IBM
CS/390 hosts. The major goals of the new network were to improve network availability, enable dynamic nondisruptive rerouting of SNA session traffic over IP, reduce the total costs of networking, and eliminate APPN NN/DLUR and DLSw+ on routers between remote sites and data center hosts. The customer also wanted to improve overall network performance by consolidating the entire network infrastructure over a single IP backbone network (versus dedicated networks for SNA and IP).

How do we design for maximum availability and scalability?—A large part of the customer's design approach was to deploy a dynamic routing protocol on the IBM S/390 mainframes. EE between the hosts, with a phased-in deployment to remote branch sites, addressed availability and nondisruptive session path switching when network failures occurred. The customer's deployment of SNASw BX to replace the existing APPN NN/DLUR for support of peripheral independent SNA devices addressed the absolute requirement that the new network be able to scale.

How should we connect the Cisco data center routers to the IBM hosts?—The customer's decision to deploy EE between hosts (EBN) and SNASw out to remote sites for peripheral SNA device support resulted in an IP data center Cisco router solution providing WAN edge and IP dynamic routing transport (SNA transport using DLSw+ or RSRB in data center routers was no longer required).

What should we use for transporting SNA traffic from remote sites to the data center?—EE was chosen as the SNA over IP transport mechanism.

Where should we place SNA features such as DLSw+ and SNASw?—The customer decided to replace remote DLSw+ routers with the SNASw EE feature.
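
In configuration terms, this replacement amounts to removing the DLSw+ peer statements from each remote router and defining SNASw EE in their place. The sketch below is illustrative only; all names and addresses are hypothetical.

! Old transport, removed from the remote router:
!   dlsw local-peer peer-id 10.30.1.1
!   dlsw remote-peer 0 tcp 10.10.1.1
!
! New transport: SNASw EE (HPR/IP) from the remote site to the hosts
snasw cpname NETA.RMT00001
snasw dlus NETA.DLUS1 backup NETA.DLUS2
snasw port EEPORT hpr-ip Loopback0
snasw link DCHOST1 port EEPORT ip-dest 10.10.100.10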

How will we manage the integrated network?—The customer continued to approach network management using existing host NetView platforms.

Design Rationale

The customer's approach for migrating to EE was initially to implement EBN between mainframes to replace SNI and to IP-enable the host mainframe connections. This allowed the customer to become familiar with EE in a controlled data center environment and to plan for enabling APPN EE throughout the enterprise. As part of this effort, the customer upgraded the host S/390 operating system to the latest versions of OS/390 and CS/390 and implemented OSPF on the hosts for dynamic routing. One of the customer's biggest requirements was to implement and further extend connection network (VRN) support to the EE IP network.

The Migration

This migration involved six key steps:

Step 1. Extend SNASw BX and EE to remote branches to support peripheral DLUR connections

Step 2. Replace WAN channel-extended and host-to-host DLSw/RSRB

Step 3. Migrate mainframe-to-mainframe connections to HPR/IP using EE

Step 4. Implement the EE connection network (VRN)

Step 5. Map SNA sessions to EE VRN connection network using SNA COS

Step 6. Implement dynamic routing protocol (OSPF) on the IBM S/390 mainframes
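
On the Cisco routers adjacent to the mainframes, step 6 mainly requires that the OSPF process cover the host-attached subnets so that the routers form adjacencies with the OSPF stack on the S/390s. A minimal sketch with hypothetical addressing follows.

router ospf 100
 ! Subnet shared with the mainframe IP interfaces (illustrative)
 network 10.10.100.0 0.0.0.255 area 0
 ! WAN-facing subnets toward the remote sites (illustrative)
 network 10.20.0.0 0.0.255.255 area 0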

Figure 4-6 Implement the EE Connection Network (VRN)

The Old Network versus the New Network

The new network design, as shown in Figure 4-6, simplified the architecture by creating an IP network infrastructure from the mainframe for transport of both SNA and IP traffic. As a result of this migration, batch transfers achieved a five-fold performance improvement because of the significantly higher data throughput. In addition, the network showed a measurable improvement in availability as well as simplified network management.

Extending the EE function to carry peripheral SNA end-user traffic over IP improved session stability. In the future, the customer can take advantage of any new technological changes in the IP WAN network, such as AVVID, without impacting the SNA data transport. The organization also saved $9.6 million by decommissioning its DS-3 network that had been dedicated to handling SNA traffic alone.

Scenario 3—Data Center and Remote Branch Migration from IBM 950 NNs and APPN (PSNA) to SNASw

The organization was a computing provider for savings banks in Germany that had 86 regional branch offices. The core of the network was the computing center, which was operated at four different locations: Duisburg, Cologne/Junkersdorf, Cologne/Gremberghoven, and Mainz. The host SNA applications primarily consisted of CICS, IMS, and proprietary Internet banking applications.

The current network had approximately 3000 Cisco routers (mostly Cisco 4700 Series routers at the regional branch bank offices and Cisco 2504s at customer remote locations). Banking clients of this customer used AS/400s for file transfer, along with service support using OS/2, AIX SNA servers, and SDLC-attached ATM cash machines. The organization had approximately 80 to 90 regional branch banking offices. Each remote branch bank office had two SNASw routers for redundancy (Cisco 4700 Series routers with 64 MB of DRAM).

The infrastructure was composed of two SNA network segments, with the SNASw routers split evenly between the two. Each network segment had both primary and backup NN/DLUS servers running on the S/390 host enterprise servers.

Business Requirements

The organization had two business objectives: improve network availability and improve network scalability. The business requirements that were derived from these objectives were:

Migrate off installed Cisco APPN NN (PSNA) and IBM NN platforms

Eliminate APPN broadcast traffic and locate storms from the WAN

Migrate the backbone network between data centers from SNI to EE EBN

Migrate the 86 remote branches to EE

Network Analysis

The customer started with a hierarchical SNA network with PU 2.0, 2.1, and LU 6.2 supported by local NCPs (IBM 3745 FEPs) and approximately 50 remote FEPs for boundary function support. Network transport for SNA traffic was supported using Layer 2 RFC 1490 Frame Relay and X.25.

The customer's APPN migration started midyear in 1998 with the deployment of IBM 950 Controller NN servers and 170 NN servers running on Cisco 4700 Series routers with Cisco APPN NN (PSNA) code. Because of network instability and scalability issues encountered as a result of deploying such large numbers of APPN NNs during the initial rollout phase, they decided to migrate to SNASw for BrNN and DLUR support. (The customer also addressed and resolved a number of Y2K issues with some of the older X.25 devices during the migration to SNASw.)

Network management was accomplished through Tivoli NetView for OS/390 plus Internetwork Status Monitor (ISM) on the Cisco 4700 Series routers. Each router was configured with an ISM focal point PU, which allowed them to issue RUNCMD commands via host NetView CLIST control.

Design Rationale

The SNA networks were joined by channel-to-channel connections between DLUS border nodes. The networks had six IBM 950 NN servers channel-attached to their CS/390 hosts. The WAN from the IBM 950s to the SNASw routers was Frame Relay IETF (RFC 1490). Each SNASw router had a single interface to the Frame Relay network with two Frame Relay data-link connection identifiers (DLCIs) connected to each of the IBM 950s. To provide for WAN backup, each SNASw router also had an Integrated Services Digital Network (ISDN) connection to a Cisco 2500 Series router Token Ring-attached to the IBM 950s.

Each SNASw router had three APPN host links: two using SRB over Frame Relay (one for each Frame Relay DLCI), and one from an SNASw VDLC port over DLSw+ via ISDN to a TIC connected to the IBM 950 for backup. The ISDN connection was kept down by configuring the DLSw+ peer without keepalives and by using suitable TCP filters. The link over the DLSw+ ISDN backup connection was a low-priority transmission group because it was used only when the other links were down. Links over the ISDN cloud were defined with a lower bandwidth and a higher cost factor than the Frame Relay links; therefore, no CP-to-CP sessions ran across the ISDN link as long as the Frame Relay connections were active.
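
A minimal sketch of this backup arrangement follows; the ring group, peer addresses, MAC addresses, and names are hypothetical.

! Virtual ring shared by the SNASw VDLC port and DLSw+
source-bridge ring-group 100
!
! DLSw+ peer toward the Cisco 2500 that is Token Ring-attached to the
! IBM 950; keepalive 0 suppresses peer keepalives so the ISDN link
! stays idle until it is actually needed
dlsw local-peer peer-id 10.40.1.1 promiscuous
dlsw remote-peer 0 tcp 10.40.2.1 keepalive 0
!
! SNASw backup link from a VDLC port over the ring group to the TIC
! on the IBM 950
snasw port VDLCBKUP vdlc 100 mac 4000.1111.0001
snasw link HOST950B port VDLCBKUP rmac 4000.3745.0950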

On the branch banking client side, each Cisco 4700 Series SNASw router supported two Token Ring interfaces. One Token Ring interface was used by the customer for IP network management. The other Token Ring interface supported SNA client connection using SRB. Each SNASw DLSw+ router was configured as promiscuous so that it could support remote peer DLSw+ connections without having to explicitly configure each remote peer.
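
Continuing the same hypothetical router, the two Token Ring interfaces might be configured roughly as shown below; the promiscuous keyword on the dlsw local-peer statement (seen in the previous sketch) is what allows remote DLSw+ peers to connect without being explicitly defined. Ring and bridge numbers are illustrative.

interface TokenRing0
 description IP network management segment
 ip address 10.40.10.1 255.255.255.0
!
interface TokenRing1
 description SNA client segment, source-route bridged to ring group 100
 source-bridge 200 1 100
 source-bridge spanning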

Most user traffic was LLC2 either locally bridged or over DLSw+ to SNA servers (most sites had a relatively small number of PUs with many LUs). For the IP network the customer utilized OSPF and Border Gateway Protocol (BGP) as dynamic routing protocols.

Each SNASw router also had an ISM focal point defined for network management using CiscoWorks Blue (the customer built extensive NetView CLISTs and used RUNCMD commands to operate and monitor the network).

The Migration

The plan involved three phases:

Phase 1: Migrate regional branch bank offices from APPN NN to SNASw BX/DLUR (see Figure 4-7)

Phase 2 (future): Migrate data center backbone network from SNI (using IBM FEPs) to HPR/IP EE (EBN)

Phase 3 (future): Migrate remote regional banks from HPR over LLC2 (Frame Relay RFC 1490 transport) to HPR/IP using SNASw EE

Figure 4-7 Migrating Regional Banks to BX/DLUR

The Old Network versus the New Network

Figure 4-7 depicts the network after the Phase 1 migration. The organization achieved all of the goals that it set during the planning and design phase. The new network addressed its immediate APPN network scalability requirements by eliminating large numbers of APPN NNs at aggregation points at the regional bank branch offices. However, the resulting network after the migration from APPN NN to SNASw BX was still native APPN/HPR over Layer 2 Frame Relay RFC 1490 transport.

Currently the customer is working with Cisco and IBM to develop plans for migrating to EE later this year. Going forward the customer plans to eliminate the IBM 950s and implement EE EBN support for mainframe-to-mainframe links (the customer is in the process of upgrading the host operating system to OS/390 and CS/390 V2R8). They also plan to upgrade to the latest IBM mainframes that support IBM OSA-Express and to install Cisco Catalyst 6500 Gigabit Ethernet LAN switches in the campus. They are considering upgrading their remote site Cisco 4700 Series routers to Cisco 3640/3660 Series routers to better meet future multiservice (data, voice, video) network IP infrastructure requirements.