|New Frontiers for Research Networks in the 21st Century
by Robert J. Aiken, Cisco Systems, Inc.
A famous philosopher, Yogi Berra, once said, "Prediction is hard. Especially the future." In spite of this sage advice, we will still attempt to identify the frontiers for research networks. By first examining and then extrapolating from the evolution and history of past research networks, we may be able to get an idea of the frontiers that face research networks in the future. One of the initial roles of the research network was to act as a testbed for network research on basic network protocols, mostly focusing on network Layers 1 through 4 (that is, the physical, data link, network, and transport layers), but also including basic applications such as file transfer and e-mail. During the early phases of the Internet, the commercial sector could not provide the network infrastructure sought by the research and education communities. Consequently, research networks evolved into backbone and regional network infrastructures that provided production-quality access to important research and education resources such as supercomputer centers and collaboratories. Recent developments show that most research networks have moved away from being testbeds for network research and have evolved into production networks serving their research and education communities. It is time to make the next real evolutionary step with respect to research networks, and that is to shift our research focus toward maximizing the most critical of resources--people.
Given the growth and maturity of commercial service providers today, there may no longer be a pressing technical need for governments to continue to support pan-national backbone networks, or possibly even production-like national infrastructures, for Internet-savvy countries. Because commercially available Virtual Private Networks (VPNs) can now easily support many of the networked communities that previously required dedicated research networks, governments and other supporting organizations can now support their research and education communities by providing the funding for backbone network services much as they do for telephony, office space, and computing capabilities; that is, as part of the research award. However, there may be valid social, political, and long-term economic reasons for continuing the support of such networks. For instance, a nation may decide that, in order to ensure its economic survival, it wishes to accelerate the deployment and use of Internet technologies among its people, and thus may decide to subsidize national research networks. In addition, it should be noted that VPNs often recreate the "walled" separation of communities, a scenario that was previously accomplished through the hard multiplexing of circuits.
But, in order to make technical advances in the e-economy, governments should now focus on supporting the evolution of intelligent and adaptable edge and access networks. These, in turn, will support the Ubiquitous Computing (UC) and persistent presence environments that will soon be an integral part of our future Internet-based economies.
The United States' recently expanded National Science Foundation (NSF) research budget and the Defense Advanced Research Projects Agency's (DARPA's) prior support of middleware research are good examples of moving in the right direction. The Netherlands' GigaPort project, which incorporates network and application research as well as an advanced-technology access and backbone network infrastructure, is a good example of how visionary research networks are evolving.
Just as Internet technologies and network research have matured and evolved, so should the policies concerning the support of research networks. Policies need to be developed to again encourage basic network research and the development of new technologies. In addition, research networks need to encourage and accentuate new network capabilities in edge networks, on campus infrastructures, and in the end systems to support the humans in these new environments. This article focuses mainly on the future of research networks in e-developed nations; but, this is not to diminish the need or importance for e-developed nations to help encourage the same development in network-challenged nations.
Context and Definitions
Before delving into our discussion, we first need to define a few terms. These definitions will not only aid our discussion, but may also help to highlight the role and function of various types of research networks. The most important terms to define are "network research" and "research network," which are often used interchangeably in discussions concerning policy, funding, and technology.
In this article, the term "network research" means long-term basic research on network protocols and technologies. The many types of network research can be categorized into three classes. The first category covers research on network transport infrastructure and generally includes research on the Open System Interconnection (OSI) Model Layers 1 through 4 (that is, the physical, data link, network, and transport layers) as well as research issues relating to the interconnection and peering of these layers and protocols. We will refer to this class of research as "transport services."
The second class consists of research covering what can nominally be referred to as "middleware." Middleware basically includes many of the services that were originally identified as network Layers 4 through 6. Layer 4 is included because of the need for interfaces to the network layer (sockets, TCP, and so on).
In addition, it nominally includes some components, such as e-mail gateways or directory services, which are normally thought of as being network applications, but which have subcomponents that may also be included in middleware. Given that the definition of middleware is far from an exact science, we shall say that middleware depends on the existence of the network transport services and supports applications.
The third area covers research on the real applications (for example, e-commerce, education, health care, and so on), network interfaces, network applications (for example, e-mail, Web, file transfer, and so on), and the use of networks and middleware in a distributed heterogeneous environment. Applications depend on both the middleware and transport layers. Advanced applications include Electronic Persistence Presence (EPP) and UC. EPP, or e-presence, describes a state of a person or application as always being "on the network" in some form or another. The concept of session-based network access will no longer apply. EPP assumes that support for UC and both mobile and nomadic networking exists. UC refers to the pervasive presence of computing and networking capabilities throughout all of our environments; that is, in automobiles, homes, and even on our bodies.
A "research network," on the other hand, is a production network; that is, one aspiring to the goal of 99.99999-percent "up time" at Layers 1 through 3, which supports various types of domain-specific application research. This application research is most often used to support the sciences and education, but can also be used in support of other areas of academic and economic endeavor. These networks are often referred to as Research Networks (RNs) or Research and Education (R&E) Networks. In this article, we further classify these RNs based on their general customer base. Institutional Research Networks (IRNs) support universities, institutes, libraries, data warehouses, and other "campus"-like networks. National Research Networks (NRNs), such as the Netherlands' GigaPort or Germany's DFN networks, support IRNs or affinity-based networks. Pan-National Research Networks (PNRNs) interconnect and support NRNs; current production PNRNs include DANTE's TEN-155 and the NORDUnet networks. In this article we will also classify the older National Science Foundation Network (NSFNET), the very-high-performance Backbone Network Service (vBNS), CANARIE's CA*net 3, and the Internet 2 Abilene networks as PNRNs because, in terms of scale and policy, they address the same issues of interconnecting a heterogeneous set of regionally autonomous networks (for example, NSFNET's regionals and Internet 2's Gigapops) as do the PNRNs.
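The "seven nines" goal is a demanding target; a quick back-of-the-envelope calculation (a simple arithmetic sketch, not drawn from the article) shows how little downtime it actually permits per year:

```python
# Allowed annual downtime for a given availability target.
# "Seven nines" is the 99.99999-percent up-time goal cited above.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_seconds(availability_percent: float) -> float:
    """Seconds of permitted downtime per year at the given availability."""
    return SECONDS_PER_YEAR * (1 - availability_percent / 100)

for nines in ("99.9", "99.999", "99.99999"):
    print(f"{nines}% up time -> {downtime_seconds(float(nines)):.1f} s/year")
```

Even at "five nines" a network may be down only about five minutes per year; at seven nines, barely three seconds--which is one reason a network aspiring to this goal cannot also serve as a risky testbed.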
A hybrid state of RN also exists. When we introduce one or more advanced technologies into a production system, we basically inject some amount of chaos into the system. The interplay between the new technologies and other existing technologies at various levels of the infrastructure, as well as scaling issues, can cause unanticipated results.
Research-quality systems engineering and design is then required to address these anomalies. An example of this phenomenon is the problem encountered with ATM cell discard and its effect on TCP streams and subsequent retransmissions (that is, early packet discard and partial packet discard). The term Virtual Private Network (VPN) is used in this article in the classical sense; that is, a network tunneled within another network (for example, IP within IP, ATM virtual circuits [VCs], and so on), and not necessarily a security-based VPN. Acceptable Use Policy (AUP) refers to the definition of the type of traffic or use that is allowed on a network infrastructure. Conditions of Use (COU) is basically another version of AUP.
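The classical tunneling definition above can be illustrated with a minimal sketch: the inner datagram travels intact as the payload of an outer datagram addressed between the two tunnel endpoints. The header built below is a deliberately simplified IPv4 header (no options, checksum left zero), and the addresses are illustrative only:

```python
import struct

def ipv4_header(src: str, dst: str, payload_len: int, proto: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options; checksum zeroed for brevity)."""
    ver_ihl = (4 << 4) | 5          # version 4, header length 5 words
    total_len = 20 + payload_len
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    return struct.pack("!BBHHHBBH4s4s",
                       ver_ihl, 0, total_len, 0, 0, 64, proto, 0, src_b, dst_b)

# Inner datagram: the private traffic between two research sites.
inner = ipv4_header("10.0.0.1", "10.0.0.2", 4, 17) + b"data"

# Outer datagram: IP protocol number 4 (IP-in-IP) between the tunnel endpoints.
outer = ipv4_header("192.0.2.1", "198.51.100.1", len(inner), 4) + inner

# The carrier network forwards on the outer header only; the inner
# addresses (and hence the community's AUP) are invisible in transit.
assert outer[20:] == inner
```

The carrier sees only the outer header, which is exactly why a commercial "cloud" can host many communities with differing AUPs on one infrastructure.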
During the early phases of the evolution of research networks and the Internet, national research networks were building and managing backbone networks because there was a technical reason to do so. Governments supported these activities because at the time the commercial-sector Internet Service Providers (ISPs) could not do it and the expertise to do so resided within the R&E community. Much of the research or testing of this time still focused on backbone technologies as well as aggregation networks and architectures. Research networks started out by supporting longer-term, risky network research and quickly evolved to support shorter-term, no-risk production infrastructure.
The research during the Advanced Research Projects Agency Network (ARPANET) and early NSFNET phases of the Internet focused on basic infrastructure protocols and technologies. These are now commodity services, easily and cost-effectively available from the commercial sector. We have come a long way since then. Except for a few universities and research centers, the commercial sector now dominates R&D in the backbone technology space. Commercially provided VPNs can now cost-effectively support most of the requirements of the R&E communities. Given the current domination of R&D in backbone technologies by the commercial sector, as well as the need to address true end-to-end services, it is time that network research and research networks realign their focus onto the research and development of end-system, campus, and edge network technologies. Most of the intelligence of the network (for example, Quality of Service [QoS], security, content distribution and routing, and so on) will live at the edges, and in some ways will be oblivious to the backbone service over which it operates. In addition, in order for applications to be able to make use of this network, intelligent RNs need to be able to provide the middleware and services that exist between the application and the transport systems. The real future for most RNs is in helping to analyze and identify, not necessarily run and manage, advanced network infrastructures for their R&E communities.
One of the problems faced by the R&E community is how to obtain support from governments and other supportive organizations (both for-profit and nonprofit). In attempts to support advanced applications and end-user research, organizations and governments may be persuaded into supporting RNs that end up providing commodity services and competing with the commercial sector. One reason this can occur is that governments often wish to see results very quickly in order to justify their support of the research community; but, by doing so, they drive the recipient researchers and research network providers to focus on short-term results and abandon basic long-term research. This pressure from the supporting organizations can also force researchers to compete in a space--that is, the transport layers--for which industry may be better suited and adapted in both scale and time. Another issue facing today's research networks is that many in the R&E community, who once would endure downtime and assume some risk in trade for being part of an experimental network, are now demanding full production-quality services from those same R&E networks. Consequently, the RNs are precluded from aggressively pursuing and using truly advanced technologies that may pose a risk. And finally, research networks, science communities, and researchers often claim they are doing network research when in reality they are not, because they assume that this is the only way to get funding and support for the good network connectivity needed to support their real research objectives. All of these issues have driven RNs at all levels into difficult positions. RNs need to be able to again take risks if they are to push the envelope in adopting new technology.
Likewise, it is also valid to provide production-quality network transport services to support research on middleware, network applications (for example, collaborative technologies), and R&E applications (for example, medicine, the sciences, education, and so on). All of these requirements need to be addressed in the manner most expedient and cost-effective to the government or organization providing the support.
All research carries with it a certain amount of risk. There is theoretical and experimental research. Some research is subject to validation; some is retrospective--for example, examining packet traces to verify the existence of nonlinear synchronization--but some is prospective and involves reprogramming network resources, and any reprogramming is susceptible to bugs. The amount of risk often depends on the area of research undertaken. The lower down in the network structure that one performs experimental research, the more difficult it is to support this research and still maintain a production-like environment for the other researchers and applications; yet we need to provide support for all levels of experimental research, as described in MORPHNET. The ideal environment would support applications that could easily migrate from a production network to one prototyping recent network research, and then back again if the experiment fails. Recent advances in optical networking show promise in realizing this goal, but many technical and policy-based challenges are yet to be addressed.
ARPANET and Early NSFNET Phase: 1980s
The ARPANET, one of the many predecessors of today's Internet, was a research project run by researchers as a sandbox where they could develop and test many of the protocols that are now integral components of the Internet. Because this was a research network that supported network research, there were times the network would "go down" and become unavailable. Although that was certainly not the goal, it was a reality when performing experimental network research. This was acceptable to all involved and allowed the quick "research-to-production" cycle now associated with the Internet to develop. The management of the network with respect to policy was handled by the Internet Activities Board (IAB), which has since been renamed the Internet Architecture Board, and revolved around the actual use of the network as a research vehicle. The research focused mainly on Layers 1 through 4, and application research was secondary and used to demonstrate the underlying technologies.
At the end of the 1980s, the Internet and its associated set of protocols rapidly gained speed in deployment and use among the research community. This started the major shift away from research networks supporting experimental network protocols toward RNs supporting applications via production research networks; for example, the mission agencies' (that is, those agencies whose missions were fairly well focused in a few scientific areas) networks at the Department of Energy (DoE) (ESnet) and NASA (NSInet). At the same time, the NSFNET was still somewhat experimental with the introduction and use of "home-grown" T1 and T3 routers, as well as with pioneering research on peering and aggregation issues associated with the hierarchical NSFNET backbone. It also focused on issues relating to the interconnection of the major agency networks and international networks at the Federal Internet Exchanges (FIXes), as well as the policy landscape of interconnecting commercial e-mail (MCIMail) with the Internet. The primary policy justification for supporting these networks (for example, ESnet, NSInet, and NSFNET) in the late 1980s was to provide access to scarce resources, such as supercomputer centers, although the NSFNET still supported network research, albeit on peering and aggregation.
In addition, the NSFNET was first in pioneering research on network measurement and characterization, leading to today's Cooperative Association for Internet Data Analysis (CAIDA) as well as to Surveyor installations on Abilene. As researchers became dependent on the network to support their research, the ability to introduce new and risky technologies into the network became more difficult, as shown by the second-phase T3 router upgrade for the NSFNET when many researchers vehemently complained about any "downtime."
At this time, there were still no commercial service providers from which to procure IP services to connect the numerous and varied sites of the NSFNET and other research networks. Hence there were still valid technical reasons for NRNs and R&E networks to exist and provide backbone services.
The policy decisions affecting the interconnection of the agency networks at the FIXes, as well as engineering international inter-connectivity, were loosely coordinated by an ad hoc group of agency representatives called the Federal Research Internet Coordinating Committee (FRICC). The FRICC became the Federal Networking Council (FNC) in the early 1990s, and then became the Large-Scale Network (LSN) working group by the mid-1990s.
The FNC wisely left the management of the Internet protocols to the IAB, the Internet Engineering Task Force (IETF), and the Internet Engineering Steering Group (IESG); however, the FNC did not completely relinquish its responsibility, as evidenced by its prominent role in prodding the development of Classless Interdomain Routing (CIDR) and originating the work that led to new network protocols (for example, IPv6).
The Next-Generation NSFNET: Early 1990s
During the early 1990s, the Internet evolved and grew larger. It could no longer remain undetected on the government policy radar screen. Many saw the NSFNET and agency networks as competing with commercial Internet Service Providers (ISPs). Because of the charters of the agencies of the U.S.-based RNs (for example NSF, DoE, NASA), all traffic crossing their networks had to adhere to their respective AUPs. These AUPs prohibited any "commercial entity-to-commercial entity traffic" to use a U.S. government supported network as transit. In addition, the demand for generic Internet support for all types of research and education communities became much stronger, and at the same time there was growing support among the U.S. Congress and Executive branches to end the U.S. Federal Government support of the U.S. Internet backbone.
In response to these pressures and the responses to an NSF draft "New NSFNET" proposal, the NSF elected to get out of the business of being the Internet backbone within the United States. This policy change was the nexus for the design of the vBNS, Network Access Points (NAPs), and Routing Arbiter (RA) described in the ABF paper by early 1992. The vBNS was meant to provide the NSF supercomputer sites with a research network capable of providing the high-end network services required by the sites for their Metacenter, as well as to provide the capability for their researchers to perform network research, because the centers were still the locus for network expertise. The NAPs were designed to enhance the AUP-free interconnectivity of both commercial and R&E ISPs and to further evolve the interconnection of the Internet started by the FIXes and the Commercial Internet eXchange (CIX).
The research associated with NRNs was already evolving from mainly IP and transport protocol research to research addressing the routing and peering issues associated with a highly interconnected mesh of networks. Research was an integral part of the NAP and RA design, but it was now focused on the peering of networks as opposed to the transport layer protocols themselves. Although this architecture was not official until 1995, commercial prototype AUP-free NAPs (for example, MAE-EAST) immediately sprang up and hastened the transition to a commercial network. The network was transformed from a hierarchical topology to a decentralized and distributed peer-to-peer model. It no longer existed for the sole purpose of connecting a large aggregation of R&E users to supercomputer centers and other "one-of-a-kind" resources. The NAPs and the "peering" advances associated with them constituted a crucial step for the success of applications such as the World Wide Web (WWW) and the subsequent commercialization of the Internet, because they provided the required seamless interconnected infrastructure. Although some ISPs, for example UUNET and PSInet, were quickly building out their infrastructure at that time, there still existed the need for PNRNs to act as brokers for acquiring and managing end-to-end IP services for their R&E customer base; it would not be much longer, however, before the ISPs had the necessary infrastructure in place to do this themselves.
The Internet 2 Phase: 1996–2000
The transition to the vBNS, NAP, and RA architecture became official early in 1995 and, as a result, the United States university community lost its government-subsidized production backbone. NSF-supported regionals had lost their support years earlier, and many had already transitioned to become commercial service providers; the NSF "connections" program for tier 2 and lower schools persisted because it was felt (policy-wise) that it was still valid to support such activities. This state of affairs led to the creation of Internet 2. Many of the top research universities in the United States felt that the then-current set of ISPs could not affordably provide adequate end-to-end services and bandwidth for the academic community's perceived requirements. As a result, the NSF decided to again support production-quality backbone network services for an elite set of research institutions. This was clearly a policy decision by the NSF that had support from the U.S. Congress and Executive branches of government, even though in the early 1990s both had been fairly vocal about not supporting such a network.
The initial phase was to expand the vBNS to connect hundreds of research universities. The vBNS again changed from a research network, connecting a few sites and focusing on network and Metacenter research, back into a production research network. The vBNS was soon eclipsed by the OC-48 Abilene network. Gigapops, which are localized evolutions of NAPs, were used to connect the top R&E institutions to the Internet 2 backbones (that is, the vBNS and Abilene).
These backbones were subject to COU as a way to restrict the traffic to that in direct support of R&E, much like the NSFNET was subject to its AUP.
The ISPs who complained so bitterly about unfair competition in the early 1990s no longer cared, because they had more business than they could handle in selling to corporate customers. An ironic spin on this scenario is that the business demands placed on the commercial ISPs by the late 1990s drove them to aggressively adopt new technologies to remain competitive. Not only were they willing to act as testbeds, they paid for that privilege, since it gave them a competitive edge. The result is that, in many cases involving the demonstration and testing of backbone-class technologies, the R&E community lagged behind the commercial sector. This situation was further aggravated by the fact that many, but not all, backbone-network-savvy R&E folks went to work in industry. Another side effect of this transition is the loss of available network monitoring data. The data used by CAIDA, the National Laboratory for Applied Network Research (NLANR), and other network monitoring researchers had been gathered at the FIXes, where most traffic used to pass. With the transition to a commercially dominated infrastructure, meaningful data became harder to obtain. In addition, as a result of the COU of the Internet 2 network and the type of applications it supports (for example, trying to set bandwidth speed records), the traffic passing over its networks can no longer be assumed to be representative Internet data, and its value in this regard is diminished.
Another milestone is reached. ISPs have grown or merged so that they are offering both wide- and local-area network services, and anyone can now easily acquire national and international IP and transport services. The deployment and use of VPNs allows the commercial service providers (SPs) to provide and support various policy-based networks with differing AUPs/COUs on the same infrastructure. The technical need for most PNRNs or NRNs to exist to fulfill this function fades away. Researchers should now be able to specify wide-area network support as a line item in their research proposal budgets, just as they do for telephony and computing support. Most governments do not support separate research "Plain Old Telephone Service" (POTS) networks so that researchers can talk with one another; they provide funding in the grants to allow the researchers to acquire this from the commercial sector. However, valid technical reasons for selectively supporting some research networks still exist. A prime example is the CA*net 3 network in Canada, which has been extremely aggressive in the adoption and use of preproduction optical networking technologies and infrastructure and has been instrumental in advancing our knowledge in this area.
During this evolution of research network capabilities, network research is also going through its own evolution. DARPA starts focusing its research on optics, wireless, mobility, and network engineering as part of its Next-Generation Internet program. In addition, the research moves up the food chain of network layers. DARPA and DoE start supporting research on middleware. Globus, Legion, Condor, and POLDER are major middleware research efforts that become the main impetus for GRIDs; and although they are focused mainly on seeking the holy grail of distributed computing, many of the middleware services they are developing are of value in a broader research and infrastructure context. The focus of network research and research networks now starts moving away from backbone transport services to research on advanced collaboratory, ubiquitous computing, mobile, nomadic, and EPP environments.
The policy management of the Internet now becomes an oxymoron and reflects the completion of the transition of the Internet to a distributed commercial Internet. Many organizations are now vying for a say in how the Internet evolves. Even the IETF is suffering from its own success. It now faces many of the same political challenges the ITU faced, that is, some commercial companies now try to affect the standards process for their own benefit by introducing standards contributions and only later disclosing the fact that they have filed patents on the technology in question. It is now much more difficult to make policy decisions regarding the future of Internet protocols, technologies, and architectures.
UC and EPP are the paradigm shifts at the user level that are already drastically altering our concept and understanding of networks. The scale, number, and complexity of networks supporting these new applications will far exceed anything we have experienced or managed in the past. Users will "be on the net" all the time, either as themselves or indirectly through agents and "bots." They will be mobile and nomadic. There will be "n" multiple instances of a user active on a network at the same time, and not necessarily from the same logical or geographical location. The frontiers associated with this new focus are many times more complex from a systems integration level than any work we have done in the past with backbone networks. This new frontier will provide new technical challenges at the periphery of the network; that is, the intelligent access and campus networks necessary to support these new environments. EPP and UC will drastically affect our research networks and application environments, much as the Web and its protocols drastically changed the Internet and its traffic patterns in the 1990s.
The frontiers faced by research networks of the future will depend upon many technical and sociopolitical factors on a variety of levels. The sociopolitical frontiers can be divided into two different classes, one for e-developed nations who have already gone through the learning process of building an Internet-based infrastructure, and another for the e-challenged nations who still face the challenges of building a viable network transport infrastructure. The developed nations now need to grapple with how they can encourage the next evolutionary phase of their Internet-based economies. Because of the fast evolution of technology, the technical need for subsidizing transport-based network infrastructure is no longer the pressing need it was in the 1990s. The future research network will most likely be nothing more than a VPN based on a commercial ISP "cloud" service that interconnects researchers. The High Energy Physicists (HEPs) have already proved that life as a VPN-based affinity group overlaid on production network services is a viable way to meet their network requirements. The High-Energy Physics Network (HEPnet) is a virtual set of users and network experts using ESnet and other ISP VPN-based network services to support the HEP scientists. Although we still have some technical challenges associated with backbone network technology (for example, optics), there are now only a very small number of institutions and organizations capable of working with industry and making substantial contributions in this area.
The new technical challenges that need to be addressed now include how to build and deploy intelligent edge and campus networks, content delivery and routing, mobile/nomadic/wireless access to the Internet, and the support for both UC and EPP. The latter two require major advancements and will require a whole bevy of middleware that is both network aware and an integral component of an intelligent network infrastructure. This includes, but is not limited to, directories, locators, presence servers, call admission control services, self-configuring services, mobility, media servers, policy servers, bandwidth brokers, intrusion-detection servers, accounting, authentication, and access control. IRNs and RNs can contribute to our knowledge and growth of these new areas by acting as leaders in areas that tend to be more difficult for the commercial sector to address, for instance, the development and deployment of advanced end-to-end services that operate over one or more ISP-provided clouds. Examples include interdomain bandwidth broker services, multi Public Key Infrastructure (PKI) trust models, defining multisite policies and schemas for directory-based policy services, and developing scalable naming conventions.
In order for policy makers to make informed decisions about the evolution and support of Internet technologies and architectures, they will need access to a representative mix of real backbone network data, and there is still a dire need for such data. Innovative solutions need to be developed that make "scrubbed" data available while respecting the privacy and business concerns of all types of ISPs and RNs. In addition, with the new focus on edge and metro networks, we might shift our monitoring attention to those networks as well, in order to better understand traffic demands and patterns at those scales. Network monitoring, however, is only one of the challenges facing us.
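One simple way to "scrub" trace data, sketched below under illustrative assumptions (the function name, key, and keyed-hash scheme are mine, not a standard from the article), is to replace each real address with a consistent pseudonym: the same input always maps to the same output, so traffic patterns survive, but the original addresses cannot be recovered without the secret key.

```python
import hmac
import hashlib
import ipaddress

def scrub_ip(addr: str, key: bytes) -> str:
    """Map an IPv4 address to a stable pseudonym address.

    Uses a keyed hash (HMAC-SHA256) so the mapping is
    deterministic for a given key but infeasible to invert
    without it. Note this does NOT preserve prefix structure;
    prefix-preserving schemes are a separate research topic.
    """
    digest = hmac.new(key, addr.encode(), hashlib.sha256).digest()
    pseudo = int.from_bytes(digest[:4], "big")
    return str(ipaddress.IPv4Address(pseudo))
```

Running every source and destination address in a flow log through such a function yields a data set that still shows who-talks-to-whom structure and volumes while keeping the ISP's customers anonymous.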
As the scale and complexity of networks grow, even at the pico- and body-area-network level, we will need to develop new techniques to support network modeling, simulation, and experimentation. The University of Utah is developing a test facility  comprising a large number of networked processors, the network equivalent of a supercomputer center, to be used experimentally in the design and development of new transport-layer protocols.
"Being on the net" will change our way of doing e-everything, and the underlying infrastructure will need to change in order to support this paradigm shift. The intelligence of the network will move not only to the periphery, but even beyond it, to the personal digital assistant and the body area network. Therefore, it is important that the goals and focus of research networks also evolve. Leave the R&D associated with backbone networks mainly to the commercial sector, because this is its raison d'être. The research networks of the future will mostly be VPNs, with a few exceptions, as noted earlier in this article. Research networks need to focus on the new technologies at the periphery, as well as the middleware necessary to support the advanced environments that will soon be commonplace. Many research networks will themselves become virtual (for example, HEPnet), providing expertise but not necessarily a network service.
Policy makers must adapt to address not only these substantial technical and architectural changes but also second-order policy issues such as security and privacy, and how to ensure that we do not end up with a bifurcated digital economy of e-savvy and e-challenged communities.
E-developed nations have already been through the technology learning curve of implementing and deploying a transport infrastructure. The e-challenged nations, with respect to network infrastructure, still face these same challenges, but they have the benefit of drawing on the knowledge of the nations that have successfully made the transition. In order to speed the deployment of Internet technologies and infrastructure in the e-challenged nations, it may be best to first create technologically educated people and then provide them with an economic and social environment where they can apply their knowledge and build the infrastructure. E-savvy nations should help by providing the "know-how." The North Atlantic Treaty Organization (NATO) has a joint program with the Trans-European Research and Education Networking Association (TERENA) to instruct Eastern European nations in the use and deployment of Internet technology (that is, how to configure and manage routers).
In lieu of subsidizing networks in these nations, NATO and TERENA are providing the basic knowledge these people need to build, manage, and evolve their own networks and infrastructure. This should be the model to consider for e-challenged nations. This is not to diminish the challenges of building network infrastructure in areas where none exists; in some of these areas, working with other utility infrastructure providers might advance the cause.
The ideas, comments, and projections proffered in this article are the sole opinions of the author, and in no way represent or reflect official or unofficial positions or opinions on the part of Cisco Systems, Inc. This article is based on my experience designing and managing operational international research networks, as well as being a program manager for network research, during the formative years of the Internet (that is, my tenure as a program manager for the United States Government's National Science Foundation and the Department of Energy), and my recent experience within Cisco working with next-generation Internet projects and managing its University Research Program. Many of the examples that I cite in this work are based on the development and deployment of the U.S.-based Internet and research networks, although the lessons learned in the United States may also be illuminating elsewhere.
I would like to thank my friend and colleague, Dr. Stephen Wolff, of the Office of the CTO, Cisco Systems, Inc., for many good suggestions with respect to improving the content and presentation of this article, but mostly for his good-humored authentication of my history and facts.
 This article was presented at the third Global Research Village Conference organized jointly by the Organization for Economic Cooperation and Development (OECD) and the Netherlands in Amsterdam, December 6–8, 2000.
 This is also attributed to the famous physicist Niels Bohr.
 Wulf, William A. 1988. "The National Collaboratory--A white paper," Appendix A. In "Towards a National Collaboratory," unpublished report of a National Science Foundation invitational workshop, Rockefeller University, New York, March 17–18, 1989.
 Draft-aiken-middleware-reqndef-01.txt, Internet Draft, Work in Progress, May 1999, http://www.anl.gov/ECT/Public/research/morphnet.html
 "Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet)," Aiken et al., ANL-97/1 technical report, and 1997 Intelligent Network and Intelligence in Networks Conference.
 "NSF Implementation Plan for an Interagency Interim NREN," (aka Architecture for vBNS, NAPs and RAs), Aiken, Braun, and Ford, GA A21174, May 1992.
ROBERT J. AIKEN has an MS in Computer Science from Temple University. He is the Manager of the Cisco University Research Program. Prior to joining Cisco, Bob was the network and security research program manager for DoE's HPCC program and Next-Generation Internet (NGI) initiative. He was a program manager at the National Science Foundation (NSF), and with colleagues Peter Ford and Hans-Werner Braun coauthored the conceptual design and architecture of the second-generation National Science Foundation Network (NSFNET) (vBNS, Network Access Points [NAPs], and the Routing Arbiter [RA]), which enabled the commercialization of the then-U.S.-federally supported Internet. Before his NSF tenure, he served as DoE's ESnet program manager and was the creator and manager of the ESnet Network Information and Services group. Prior to his career in networking, Bob was responsible for managing supercomputers and coding their operating systems. His academic experience includes being an Assistant Professor of Computer Science at Hood College in Maryland, an adjunct Professor at California State University, Hayward, and the Manager of Technology Services at Gettysburg College in Pennsylvania.