|The Internet2 Project
by Larry Dunn, Cisco Systems
Communication, connectivity, education, entertainment, e-commerce: across a broad spectrum of activities, the commodity Internet has made a strong impact on the way we live, work, and play. Nevertheless, many classes of applications do not yet run well, and some don't run at all, over the commodity net. As new applications are developed in disciplines from medicine to engineering to the arts and sciences, their success increasingly depends on an ability to use networks effectively. In research and education collaborations all over the world, efforts are under way to make use of new network technologies and to develop network services that will facilitate these advanced applications. One such effort in the United States is called the Internet2 Project.
The Internet2 Project was started in 1996 by 34 U.S. research universities. It has since grown to over 140 universities, and includes several corporate members and international partners. This article examines network technology used in Internet2, and looks at some of the engineering challenges involved in facilitating applications being developed by Internet2 members.
By 1996, academic traffic to the commodity Internet had seriously congested the NAPs; it was accordingly recognized that clients of the supercomputer centers might be better served if the research universities, where Principal Investigators often resided, were themselves directly connected to the vBNS. So in 1996, the NSF accepted proposals as part of the High-Performance Connections (HPC) program. Schools applying for an HPC grant could receive $350,000 over a two-year period, provided their proposals met various criteria, including meritorious research that would benefit from the high-performance connection, a solid network plan, an intention to investigate capabilities enabled by such a connection, a commitment to share results with the community, matching funds from the university, and so on.
In October 1996, representatives from 34 universities met, and concluded that, while not all the schools had projects involving "meritorious research" that would meet the NSF criteria, they all did have a critical interest in deploying the kind of applications that such high-performance connections could enable.
Thus, to facilitate development and deployment of applications that would further the research and education mission of member universities, the Internet2 Project was formed.
From the beginning, the stated intention was to enable applications that could not run, or could not run well, on the "Commodity Internet." Networks would be utilized or constructed only so as to facilitate this applications enabling goal, and results/methods would be applied to the broader community as rapidly as possible.
The above applications share several challenging requirements, many of which translate to resource commitments that must be met by the network in an end-to-end fashion, including bandwidth and jitter. Additionally, the applications can become scalable only if more mature middleware and control plane infrastructure is developed. Necessary components include features such as Authentication , Authorization , and Accounting (AAA), scheduling, and coordination of resources managed by multiple administrative domains.
One compelling example of the network challenges present in a virtual collaborative environment is the CAVE (Cave Automated VR Environment). In brief, a single CAVE is a 10 x 10 x 10-foot cube with one wall removed. Users enter through the open wall and, using lightweight stereoscopic three-dimensional (3D) glasses and a radio frequency (RF) mouse, can interact with an immersive environment created by rear-screen and direct projection on multiple walls and the floor. As an example, the interconnection of multiple CAVEs allows design teams in remote locations to jointly experience the operating "feel" of a new vehicle, and to dynamically adjust design or control parameters to see how the modified vehicle behaves.
The developers of CAVE software at Argonne National Labs have noted that the data flows in a CAVE consist of at least: control, text, audio, video, tracking, database, simulation, haptic, and rendering flows. Additionally, they have estimated the latency, jitter, and bandwidth requirements for these flows. Some of the flows represent a challenge in a single resource dimension, others have strict requirements in multiple resource dimensions.
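The multi-dimensional nature of these requirements can be sketched in code. The flow names below come from the article; the numeric requirements are placeholder values for illustration only, not Argonne's actual estimates:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowReq:
    """Per-flow resource requirements; None means no strict bound."""
    name: str
    max_latency_ms: Optional[float]
    max_jitter_ms: Optional[float]
    bandwidth_mbps: float

# Illustrative entries only -- real CAVE figures differ.
flows: List[FlowReq] = [
    FlowReq("control",  10.0, None,  0.1),
    FlowReq("audio",    30.0,  5.0,  0.2),
    FlowReq("video",   100.0, 10.0, 20.0),
    FlowReq("tracking", 10.0,  2.0,  0.5),
    FlowReq("haptic",    5.0,  1.0,  1.0),
]

# Flows with strict bounds in more than one resource dimension need
# coordinated, end-to-end commitments from the network.
multi_dim = [f.name for f in flows
             if f.max_latency_ms is not None and f.max_jitter_ms is not None]
print(multi_dim)
```

A flow like haptic feedback, tightly bounded in both latency and jitter, is the kind that cannot be served by best-effort forwarding alone.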
The vBNS is operated by MCI/Worldcom. It consists primarily of an IP over ATM network. Most schools connect at DS3 or OC-3c via ATM to a vBNS ATM switch. Interior vBNS links are OC-12c ATM. The schools peer with a Layer 3 router; a router is attached to each of the vBNS ATM switches. The vBNS routers are logically connected to each other via a full mesh of Unspecified Bit Rate (UBR) Virtual Circuits (VCs). The ATM switches are connected to each other via a second layer of ATM switches, which are part of MCI's commercial Hyperstream offering. While schools pass the vast majority of their traffic via peering at Layer 3 with the nearest vBNS border router, other services are available, including the option to establish Variable Bit Rate (VBR) VCs as needed, and the possibility to place some of the ATM-attached hosts of the school directly in a vBNS Classical IP Logical IP Subnet (LIS). This setup allows such hosts to send data directly to other ATM-attached hosts, bypassing the routers of both the school and the vBNS. The vBNS also carries native IP multicast traffic among members. In addition, the vBNS has a native IPv6 offering, which is achieved by deploying routers that run IPv6, and provisioning VCs to schools also running IPv6. The vBNS has also begun to offer an IP over Synchronous Optical Network (SONET) service, the first instance of which is an OC-48 Packet-over-SONET (POS) link from Northern to Southern California. Because the nominal partnership arrangement with the NSF expires in the year 2000, the vBNS has established a new network offering [called Next Generation Network (NGN)], to which schools and other entities may connect if the vBNS/NSF partnership is not renewed.
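One consequence of a full mesh of VCs among the vBNS routers is quadratic growth in circuit count: each unordered pair of routers needs a bidirectional VC. A minimal sketch of the arithmetic:

```python
# Number of bidirectional point-to-point VCs needed for a full mesh of
# n routers: one circuit per unordered router pair, i.e. n*(n-1)/2.
def full_mesh_vcs(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(n, full_mesh_vcs(n))
```

Doubling the router count roughly quadruples the number of VCs to provision and monitor, which is one reason full-mesh overlays become operationally expensive as a backbone grows.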
Measurement Tools in vBNS
The second backbone network to which Internet2 members can connect is called Abilene. Abilene was constructed by the University Corporation for Advanced Internet Development (UCAID) in collaboration with three industrial partners and Indiana University (IU). Partner contributions include fiber capacity from Qwest, SONET gear from Nortel, and routers from Cisco. The Abilene Network Operations Center (NOC) is staffed and operated by Indiana University. The network uses OC-48c POS interior links that initially connect ten routers in a partial mesh (a few interior links started as OC-12c, but are being upgraded). Abilene participants can connect at OC-3c or OC-12c, using either POS or ATM. For details on the router hardware architecture, and for an insightful look at a research project that shows how this architecture can scale, see Stanford Professor Nick McKeown's Tiny Tera homepage.
Measurement Tools in Abilene
Gigapop Technology Survey
The connectivity architecture used at gigapops varies widely, and detailed documentation is available for several of them. Some gigapops are "Layer 2," meaning that participants are responsible for exchanging routes and traffic among themselves directly. More often, gigapops are "Layer 3," meaning that the gigapop provides a router with which gigapop participants peer. The gigapop router then typically exchanges traffic with the vBNS and/or Abilene, and possibly with commodity ISPs.
Some gigapops are implemented at a single site (for instance, Metropolitan Research and Education Network [MREN], Southern Crossroads [SoX]), while others are "distributed gigapops," meaning gigapop equipment exists at multiple locations (for instance, the California Research and Educational Network [CalREN-2], and The Great Plains Network [GPN]). Following are a couple of specific gigapop examples.
Second, in each ring, there are two sets of OC-12c connections to each adjacent school. CalREN-2 has currently utilized these connections to construct both a ring of ATM connectivity, and a separate, parallel ring of POS connectivity. As a result, CalREN-2 is uniquely positioned to experiment simultaneously with both ATM and POS connectivity, performance, and QoS characteristics.
Third, to take the Northern schools as an example, the ring structure allows a variety of Layer 3 topologies to be explored. For example, in a ring with these size and bandwidth characteristics, what are the trade-offs in application-level performance of inducing more hops while keeping the per-hop bandwidth high, versus dividing the bandwidth into smaller slices but creating a partial mesh that reduces the average Layer 3 hop count?
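The hop-count side of that trade-off is easy to quantify. The sketch below computes the average shortest-path hop count between distinct nodes on a pure ring, assuming traffic can travel in either direction; comparing it against a chord-augmented partial mesh (not shown) is exactly the kind of experiment the paragraph describes:

```python
# Average Layer 3 hop count between distinct nodes on a ring of n routers,
# assuming shortest-path routing around the ring in either direction.
def avg_ring_hops(n: int) -> float:
    # By symmetry, distances from a single node cover all distinct pairs.
    hops = [min(d, n - d) for d in range(1, n)]
    return sum(hops) / len(hops)

print(avg_ring_hops(4))  # small ring
print(avg_ring_hops(6))  # larger ring: average hop count grows with n
```

For a ring, the average hop count grows linearly with the node count, while each added chord in a partial mesh lowers it at the cost of splitting the same fiber capacity into thinner slices.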
Explicit Routing—The Fish Problem
Consider packets from schools A and B, both headed for destination D. Assume both schools are connected to gigapop G, and that G has two paths to D; one along G-Hi_perf-D, and the other along G-Commodity_1-Commodity_2-D. Further assume that A is allowed to use either path (and would prefer G-Hi_perf-D), but that B is prohibited from using G-Hi_perf-D. This scenario describes the "Explicit Routing Problem," and because it is often drawn in a shape resembling a fish, it is also known as the "Fish Problem." The essence is that the routing decision at G must be made on criteria other than just the destination IP address.
A couple of solutions to the fish problem have been used in the past, but they tend to have problems with either speed or scalability. For example, "policy routing," which usually includes a method to look at both source and destination address, has historically shown low performance. Inserting ATM switches and using virtual circuits has been used in some cases, but this solution has scaling problems and requires extra equipment. Today, many Internet2 gigapops use a separate router per policy. In the case of needing two policies in the example above, this means two routers. This solution is expensive, but does have high performance.
One promising idea is to implement enough of the "policy routing" process in hardware to allow high-speed source+destination+other-bits lookups. While straightforward in concept, some point out that even with line-rate source-address routing capability, the method is flawed because it requires significant manual configuration, and is prone to creating black holes for traffic upon link failure. Proponents suggest that these shortcomings can be overcome.
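The lookup such hardware would have to perform can be sketched in a few lines. The prefixes and next-hop names below are hypothetical, chosen only to mirror the fish-problem example above:

```python
# Sketch of source+destination policy routing for the fish problem:
# the next hop chosen at gigapop G depends on the source prefix, not
# just the destination. All prefixes and hop names are hypothetical.
import ipaddress

policy_table = [
    # (source_prefix, dest_prefix, next_hop), checked in order
    (ipaddress.ip_network("10.1.0.0/16"),   # school A: high-perf allowed
     ipaddress.ip_network("192.0.2.0/24"), "Hi_perf"),
    (ipaddress.ip_network("10.2.0.0/16"),   # school B: commodity only
     ipaddress.ip_network("192.0.2.0/24"), "Commodity_1"),
]

def route(src: str, dst: str) -> str:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, next_hop in policy_table:
        if s in src_net and d in dst_net:
            return next_hop
    return "default"

print(route("10.1.5.5", "192.0.2.7"))  # A's traffic -> Hi_perf
print(route("10.2.5.5", "192.0.2.7"))  # B's traffic -> Commodity_1
```

Doing this sequential match in software is exactly what made classic policy routing slow; the proposal is to perform the same source+destination match at line rate in hardware.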
Another promising mechanism is becoming available as a result of work done to facilitate Multiprotocol Label Switching (MPLS) in routers and switches. The idea here is that one of the underlying pieces of technology required for MPLS is "multi-FIB" (multiple Forwarding Information Bases). Instead of the traditional "single-FIB," which always uses "the best" route to a destination, multi-FIB allows multiple forwarding tables to exist in a single router. This setup will allow a gigapop to implement multiple policies in one router, rather than the "one box per policy" that several gigapops have used previously. Note that in the case of a gigapop with a single router on which all members converge, multiple policies can be achieved with multi-FIB without actually using MPLS Label-Switched Paths (LSPs). For more complex gigapops, where members themselves may converge high-performance-eligible and ineligible traffic before forwarding on a single link to the gigapop, one might consider using simple LSPs to present the gigapop with traffic that is pre-differentiated.
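Conceptually, multi-FIB separates policy selection from the destination lookup: a policy attached to the ingress interface picks the table, and the table is then consulted by destination as usual. A minimal sketch, with all interface and table names hypothetical:

```python
# Multi-FIB sketch: one router, several forwarding tables. The ingress
# interface's policy selects the FIB; lookup within a FIB is by
# destination prefix only. All names here are hypothetical.
fibs = {
    "hi_perf_fib":   {"192.0.2.0/24": "Hi_perf"},
    "commodity_fib": {"192.0.2.0/24": "Commodity_1"},
}

ingress_policy = {
    "to_school_A": "hi_perf_fib",    # A may use the high-performance path
    "to_school_B": "commodity_fib",  # B is restricted to commodity transit
}

def forward(ingress_if: str, dest_prefix: str) -> str:
    fib = fibs[ingress_policy[ingress_if]]
    return fib.get(dest_prefix, "default")

print(forward("to_school_A", "192.0.2.0/24"))  # -> Hi_perf
print(forward("to_school_B", "192.0.2.0/24"))  # -> Commodity_1
```

The same destination prefix yields different next hops depending on which table is consulted, which is how one router can stand in for the "one box per policy" arrangement.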
At the Internet2 conference in San Francisco in September 1998, the vBNS backbone was exposed to unprecedented levels of multicast stress. In a somewhat painful, but worthwhile, learning experience, it was concluded that Protocol Independent Multicast-Dense Mode (PIM-DM) did not scale well in highly meshed, high bitrate backbones. As a result, the vBNS has shifted to PIM-Sparse Mode (PIM-SM), and Abilene is being constructed with PIM-SM.
The current set of multicast components being applied in Internet2 (and leading ISPs) include: PIM-SM, Multicast Border Gateway Protocol (MBGP), and the Multicast Source Discovery Protocol (MSDP). MBGP allows distribution of routing information such that unicast and multicast routing can use noncongruent topologies.
MSDP allows independent domains to exchange information about multicast sources without creating interdomain Rendezvous Point (RP) dependencies. As they become standardized, it is expected that the Border Gateway Multicast Protocol (BGMP) and Multicast Address Set Claim (MASC) will be added to this infrastructure set.
Quality of Service
In an effort to start small, but make concrete progress, the Internet2 QoS working group has launched an experiment called the Qbone. Participants include backbone networks, gigapops, and individual schools and research labs worldwide. The Qbone will focus on deploying and using components developed by the Internet Engineering Task Force (IETF) Differentiated Services (Diffserv) working group.
The initial Qbone plan is to deploy an approximation to the Expedited Forwarding (EF) forwarding behavior. The Qbone will start by statically allocating a small amount of EF bandwidth across boundaries between Autonomous Systems (ASs) to allow small EF flows among arbitrary combinations of schools/labs. Large flows, in these early stages, will have to be handled manually (much as they are today). In later stages the plan is to use Bandwidth Brokers (BBs) currently under development to aid in the automation of adjusting resource commitments between ASs (using interdomain BBs), and to aid in accepting application resource requests (using intradomain BBs, combined with policy servers and AAA mechanisms). The precise mechanics for BB interaction, trade-offs among signaling frequency, amount of state, scalability, and so on are certainly topics of research, but that's part of what makes Qbone participation fun!
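The core admission decision a bandwidth broker makes against a statically provisioned EF allocation can be sketched very simply. This is a toy model under stated assumptions, not the Qbone BB protocol; the class name and capacity figure are hypothetical:

```python
# Toy sketch of an intradomain bandwidth broker admitting EF requests
# against a statically provisioned EF allocation on an interdomain link.
class BandwidthBroker:
    def __init__(self, ef_capacity_mbps: float):
        self.capacity = ef_capacity_mbps  # static EF allocation
        self.committed = 0.0              # bandwidth already promised

    def request(self, flow_mbps: float) -> bool:
        """Admit the flow only if the EF allocation stays unoversubscribed."""
        if self.committed + flow_mbps <= self.capacity:
            self.committed += flow_mbps
            return True
        return False

bb = BandwidthBroker(ef_capacity_mbps=10.0)
print(bb.request(4.0))  # True
print(bb.request(5.0))  # True
print(bb.request(2.0))  # False: would exceed the static EF allocation
```

Refusing the last request is the whole point: EF only delivers low loss and jitter if the aggregate of admitted flows never exceeds the bandwidth set aside for it. The hard research questions, signaling between interdomain BBs, state, and policy, sit on top of this simple check.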
LARRY DUNN is the Technology Development Manager in the Advanced Internet Initiatives Division at Cisco Systems. He serves on the Internet2 Quality of Service and Routing working groups. After receiving his PhD from the University of Minnesota (Electrical Engineering '92), he served as Director of Networking there, and subsequently as Director of Strategic Markets and Applications (Education) for FORE Systems. He periodically teaches Advanced Networking courses at the University of Minnesota. Research interests include test vector generation for combinational logic, network design and analysis, and Quality of Service techniques and deployment strategies.