The Internet Protocol Journal, Volume 15, No. 1

A Retrospective: Twenty-Five Years Ago

Geoff Huston, APNIC

The Information Technology business is one that rarely pauses for breath. Gordon Moore noted in 1965 that the number of components in integrated circuits had doubled every year from 1958 to 1965, and confidently predicted that this doubling would continue "for at least 10 years." This doubling has remained a constant of the silicon industry for the past 50 years, and its constancy has transformed the prediction into Moore's Law. The implications of this constant impetus for innovation have produced an industry that is incapable of remaining in stasis, one that completely reinvents itself in cycles as short as a decade.

Looking back over the past 25 years, we have traversed an enormous distance in terms of technical capability. The leading silicon innovations of the late 1980s were in the Intel 80486 chip, which contained 1 million transistors on a single silicon chip with a clock speed of 50 MHz, and a similarly capable Motorola 68040 processor. Twenty-five years later the state of the art is a multicore processor chip that contains just under 3 billion individual transistors and clock speeds approaching 4 GHz. And where has all that processing power gone? In the same period we have managed to build extremely sophisticated programmed environments that have produced such products as Apple's Siri iPhone application, which combines voice recognition with a powerful information manipulation system, and we have packaged all of this computing capability into a device that fits comfortably in your pocket with room to spare!
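
As a rough illustration of what that growth implies (a back-of-the-envelope calculation using only the figures above, and assuming nothing more than a constant doubling rate), going from roughly 1 million to roughly 3 billion transistors in 25 years corresponds to about one doubling every couple of years:

    import math

    # Figures quoted above: the Intel 80486 versus a current multicore processor
    transistors_then = 1_000_000
    transistors_now = 3_000_000_000
    years = 25

    doublings = math.log2(transistors_now / transistors_then)
    print(round(doublings, 1))          # about 11.5 doublings
    print(round(years / doublings, 1))  # about 2.2 years per doubling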

Given that the last 25 years in IT has been so active, to look back over this period and contemplate all that has happened is a daunting task, and I am pretty sure that any effort to identify the innovative highlights in that period would necessarily be highly idiosyncratic. So instead of trying to plot the entire story that took us from then to now, I would like instead just to look at "then." In this article, to celebrate 25 combined years of The Internet Protocol Journal (IPJ) [2, 3] and its predecessor ConneXions—The Interoperability Report [0], I would like to look at the networking environment of the late 1980s and see what, if anything, was around then that was formative in shaping what we are doing today, and how it might influence our tomorrow.

The Computing Landscape of the Late 1980s

The computing environment of the late 1980s now seems to be quite an alien environment. Obviously there were no pocket-sized computers then. Indeed there were no pocket-sized mobile phones then. (I recall a visit from a salesman at the time who sported the very latest in mobile telephony—a radio setup that was the size of a briefcase!)

In 1987 the IT world was still fixated on the mainframe computer, which was basking in its last couple of years of viability in the market. IBM enjoyed the dominant position in this marketplace, and Digital Equipment Corporation (DEC) was competing with IBM with its VAX/VMS systems. These systems were intended to take the place of the earlier DEC-10 architectures, as well as offering an upgrade path for the hugely successful PDP-11 minicomputer line. The typical architecture of the computing environment was still highly centralized, with a large multiuser system at its core and an attendant network of peripheral devices. These peripheral devices were traditionally video terminals, little more than an ASCII keyboard and a screen, and interaction with the mainframe was through simple serial-line character-based protocols.

Although it may not have been universally accepted at the time, this period at the end of the 1980s marked the end of the custom-designed mainframe environment, where large-scale computer systems were designed as a set of component subsystems, placed into a rack of some sort and interconnected through a bus or backplane. As with many other human endeavours, the mainframe sector's final achievements were among its greatest.

While the mainframe sector was inexorably winding down, at the other end of the market things were moving very quickly. The Zilog Z80 processor of the mid-1970s had been displaced by Intel's 8086 line, which evolved rapidly from 16-bit to 32-bit processor versions. By 1987 the latest chip was the Intel 80386, which could operate at clock speeds of up to 33 MHz. The bus was 32 bits wide, and the chip supported a 32-bit address field. This chip contained some 275,000 transistors, and it was perhaps the transformative chip that shifted the personal computer from the periphery of the IT environment to the mainstream. This chip took on the mainframe computer and won. The evolving architecture of the late 1980s was shifting from a central processing center and a cluster of basic peripheral devices to a cluster of personal desktop computers.

The desktop personal computer environment enabled computing power to be treated as an abundant commodity, and with the desktop computer came numerous interface systems that allowed users to treat their computer screens in a manner analogous to a physical desktop. Information was organized in ways that had a visual counterpart, and applications interacted with users in strongly visual ways. The approach pioneered by the Xerox Star workstation and brought to the consumer market through the Apple Lisa and Macintosh systems was then carried across into the emerging "mainstream" of the desktop environment with Windows 2.0 in the late 1980s.

The state of the art of portability was still in the category of "luggable" rather than truly portable, and the best example of what was around at the time was the ill-fated Macintosh Portable, which, like its counterpart in the portable phone space, was the size of a briefcase and incredibly heavy.

Oddly enough, while the industry press was in raptures when it was released in 1989, it was a complete failure in the consumer market. The age of the laptop was yet to come.

One major by-product of this shift in the computing environment to a distributed architecture was a renewed focus on networking, and the large-scale shift in the industry from mainframes to personal computers was accompanied by equally significant changes in the networked environment.

The Networking Environment of the Late 1980s

A networking engineer in the late 1980s was probably highly conversant in how to connect serial terminals to mainframes. The pin-outs of the DB-25 plug used by the RS-232 interface were probably part of the basic ABCs of computer networking. At that time much of the conventional networked environment was concerned with connecting these terminal devices to mainframes, statistical multiplexors, and terminal switches, and serial switch suppliers such as Gandalf and Micom were still important in many large-scale computing environments.

At the same time, another networking technology was emerging—initially fostered by the need to couple high-end workstations with mainframes—and that was Ethernet. Compared to the kilobits per second typically obtained by running serial line protocols over twisted pairs of copper wires, the 10-Mbps throughput of Ethernet was blisteringly fast. In addition, Ethernet could span environments with a diameter of around 1500 meters, and with a certain amount of tweaking or with the judicious use of Ethernet bridges and fibre-optic repeaters this distance could be stretched out to 10 km or more.

Ethernet heralded a major change in the networked environment. No longer were networks hub-and-spoke affairs with the mainframe system at the center. Ethernet supplied a common bus architecture that supported any-to-any communications. Ethernet was also an open standard, and many vendors were producing equipment with Ethernet interfaces. In theory, these interfaces all interoperated, at least at the level of passing Ethernet frames across the network (aside from a rather nasty framing incompatibility between the original Digital-Intel-Xerox specification and the IEEE 802.3 "standardized" specification!).

However, above the basic data framing protocol the networked environment was still somewhat chaotic. I recall that the early versions of the multiprotocol routers produced by Proteon and Cisco supported more than 20 networking protocols! There was DECnet, a proprietary network protocol suite from the Digital Equipment Corporation, which by around 1987 was at Phase IV and was looking toward a Phase V release that was to interoperate with the International Organization for Standardization's Open Systems Interconnection (OSI) protocol suite [1] (more on this subject a bit later).

There was IBM's Systems Network Architecture (SNA), a hierarchical network that supported a generic architecture of remote job entry systems clustered around a central service mainframe. There was the Xerox Network Services (XNS) protocol used by Xerox workstations. Then there were Apollo's Network Computing Architecture (NCA) and Apple's AppleTalk. And also in this protocol mix was the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, used at that time predominantly on UNIX systems, although implementations of TCP/IP for Digital's VAX/VMS system were also very popular. A campus Ethernet network of the late 1980s would probably see all of these protocols, and more, being used concurrently.

And there was the ISO-OSI protocol suite, which existed more as a future protocol suite than as a working reality at the time.

The ISO-OSI and TCP/IP protocol suites were somewhat different from the others that were around at the time, because both were deliberate efforts to answer a growing need for a vendor-independent networking solution. At the time the IT environment was undergoing a transition from the monoculture of a single vendor's comprehensive offering, in which the hardware of the mainframe, network, peripherals, and terminals was bundled together with the software of the operating system, applications, and network, into a piecemeal environment that included a diverse collection of personal workstations, desktop computers, peripherals, and various larger minicomputers and mainframes. What was needed was a networking technology that was universally supported on all these various IT assets. What we had instead was fragmentation. Yes, it was possible to connect most of these systems to a common Ethernet substrate, but making A talk to B was still a challenge, and various forms of protocol translation units were quite commonplace at the time. What the industry needed was a vendor-independent networking protocol, and there were two major contenders for this role.

ISO-OSI and TCP/IP

The ISO-OSI protocol suite was first aired in 1980. It was intended to be an all-embracing protocol suite that encompassed both the IEEE 802.3 Ethernet protocols and the X.25 packet-switching protocols favoured by many telephony operators as their preferred wide-area data service. The ISO-OSI network layer included many approaches, including the telephony sector's Integrated Services Digital Network (ISDN), a Connection-Oriented Network Service (CONS), a virtual circuit function based largely on X.75 that was essentially the "call-connection" function for X.25, and a Connectionless Network Service (CLNS), based loosely on the IP protocol and coupled with the End System-to-Intermediate System (ES-IS) routing exchange protocol.

Above the network layer were numerous end-to-end transport protocols, notably Transport Protocol Class 4 (TP4), a reliable connection-oriented transport service, and Transport Protocol Class 0 (TP0), a minimal transport service designed to run over a reliable connection-oriented network service. Above this layer was a Session Layer, X.215, used by the TP4/CONS services, and a Presentation Layer, defined using the Abstract Syntax Notation One (ASN.1) syntax.

ISO-OSI included numerous application-level services, including the Virtual Terminal Protocol (VTP) for virtual terminal support, File Transfer, Access and Management (FTAM) for file transfer, Job Transfer and Manipulation (JTM) for batch job submission, the Message Handling System (MHS, also known as X.400) for electronic mail, and the X.500 Directory service. ISO-OSI also included a Common Management Information Protocol (CMIP). ISO-OSI attempted to be everything to everybody, as evidenced by the "kitchen sink" approach adopted by many of the OSI standardization committees at the time.

When confronted with competing technology choices, the committees apparently avoided making a critical decision by incorporating all of the contending approaches into the standard. Perhaps the most consequential instance of this indecision was the inclusion of both connection-oriented and connectionless networking protocols. The suite also used session and presentation layer protocols whose precise roles were a mystery to many! ISO-OSI was a work in progress at the time, and the backing of the telephone sector, coupled with the support of numerous major IT vendors, gave this protocol suite an aura of inevitability within the industry. Whatever else was going to happen, there was the confident expectation that the 1990s would see all computer networks move inevitably to use the ISO-OSI protocol suite as a common, open, vendor-neutral network substrate.

If ISO-OSI had a mantra of inevitability, the other open protocol suite of the day, the TCP/IP protocol suite, actively disclaimed any such ambitions. TCP/IP was thought of at the time as an experiment in networking protocol design and architecture that would ultimately go the way of all other experiments, and be discarded in favor of a larger and more deliberately engineered approach. Compared to the ISO-OSI protocols, TCP/IP was extremely "minimalist" in its approach. Perhaps the most radical element of its design was to eschew the then-conventional approach of building the network upon a reliable data link protocol. For example, in DECnet Phase IV the data link protocol, the Digital Data Communications Message Protocol (DDCMP), performed packet integrity checks and flow control at the data link level. TCP/IP dispensed with this machinery altogether, allowing packets to be silently dropped by intermediate data switches, or corrupted while in flight. It did not even stipulate that successive packets within the same end-to-end conversation follow identical paths through the network.

Thus the packet-switching role was radically simplified: the packet switch did not need to hold a copy of transmitted packets, nor did it need to operate a complex data link protocol to track packet transmission integrity and packet flow control. When a switch received a packet, it forwarded the packet based on a simple lookup of the packet's destination address in a locally managed forwarding table. Or it discarded the packet.
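
A minimal sketch of this forwarding model, written with a purely hypothetical forwarding table (the prefixes and interface names are invented for illustration), captures the simplicity: look up the destination, or drop the packet.

    import ipaddress

    # Hypothetical forwarding table: destination prefix -> outgoing interface
    forwarding_table = {
        ipaddress.ip_network("10.1.0.0/16"): "serial0",
        ipaddress.ip_network("10.2.0.0/16"): "ethernet1",
    }

    def forward(destination):
        """Return the outgoing interface for a destination address, or None to discard."""
        addr = ipaddress.ip_address(destination)
        matches = [net for net in forwarding_table if addr in net]
        if not matches:
            return None                                      # no entry: silently drop the packet
        best = max(matches, key=lambda net: net.prefixlen)   # prefer the most specific entry
        return forwarding_table[best]

    print(forward("10.1.2.3"))    # "serial0"
    print(forward("192.0.2.1"))   # None (discarded)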

The second radical simplification in TCP/IP was the use of real-time packet fragmentation. Previously, digital networks had been constructed in a "vertically integrated" manner, where the properties of the lower layers were crafted to meet the intended application of the network. Little wonder that the telephone industry put its support behind X.25, a reliable virtual circuit protocol. If you wanted low levels of jitter, you used a network with smaller packet sizes, whereas larger packet sizes improved carriage efficiency. Ethernet attempted to meet this wide variance in an agnostic fashion by allowing packets of between 64 and 1500 octets, but even so there were critics who said that for remote terminal access the smallest packets were too large, and for large-scale bulk data movement the largest packets were too small. Fiber Distributed Data Interface (FDDI), the 100-Mbps packet ring that was emerging at the time as the "next thing" in commodity high-speed networking, used a maximum packet size of around 4,000 octets in an effort to improve carriage efficiency, whereas the Asynchronous Transfer Mode (ATM) committee tried to throw a single-packet-size dart at the design board and managed to settle on the rather odd value of 53 octets!

IP addressed this problem by trying to avoid it completely. Packets could be up to 65,535 octets long, and if a packet switch attempted to force a large packet through an interface that could not accept it, the switch was allowed to divide the packet into appropriately sized autonomous fragments. The fragments were not reassembled in real time: that was the role of the ultimate receiver of the packet.
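
A simplified sketch of this behaviour, ignoring the IP header itself and its flags and checksums, but keeping the real convention that non-final fragments must be multiples of 8 octets, might look like the following; the numbers are chosen only to illustrate a large FDDI-sized packet being squeezed through 1500-octet Ethernet frames.

    def fragment(payload_length, mtu):
        """Split a payload into (offset, length, more_fragments) tuples that fit the MTU."""
        max_fragment = (mtu // 8) * 8        # non-final fragments are multiples of 8 octets
        fragments = []
        offset = 0
        while offset < payload_length:
            length = min(max_fragment, payload_length - offset)
            more = (offset + length) < payload_length
            fragments.append((offset, length, more))
            offset += length
        return fragments

    for frag in fragment(4000, 1500):
        print(frag)
    # (0, 1496, True), (1496, 1496, True), (2992, 1008, False)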

As an exercise in protocol design, IP certainly showed the elegance of restraint. IP assumed so little about the transmission properties of the underlying networks that every packet was indeed an adventure! But IP was not meant to be the protocol to support the prolific world of communicating silicon in the coming years. This protocol and the IP networks that were emerging in the late 1980s were intended to be experiments in networking. There was a common view that the lessons learned from operating high-speed local networks and wide-area networks using the TCP/IP protocol suite would inform the larger industry efforts. The inclusion of IP-based technologies in the ISO-OSI protocol suite [4] was a visible instantiation of this proposed evolutionary approach.

While these two protocol suites vied with each other for industry attention at the time, there was one critical difference. It was a popular story at the time that the ISO-OSI protocol suite was a stack of paper some 6 feet high, which cost many hundreds of dollars to obtain, with no fully functional implementations, whereas the TCP/IP protocol suite was an open-sourced and openly available free software suite without any documentation at all. Many a jibe at the time characterized the ponderous ISO-OSI effort as "vapourware about paperware," while the IP effort, which was coalescing around the newly formed Internet Engineering Task Force (IETF), proclaimed itself to work on the principle of "rough consensus and running code."

Local- and Wide-Area Networking

The rise of Ethernet networks on campuses and in the corporate world in the late 1980s also brought into stark visibility the distinction between local- and wide-area networking.

In the local-area network, Ethernet created a new environment of "seamless connectivity." Any device on the network could provide services to any other device, and the common asset of a 10-Mbps network opened up a whole new set of computing possibilities. Data storage could be thought of as a networked resource, so desktop computers could access a common storage area and complement it with local storage, in a way that made the distinction between local resources and shared networkwide resources generally invisible. The rich visual application environment popularized by both the Macintosh and Windows 2.0 was complemented by an equally rich networked environment: rather than bringing the user to the location that held both the data and the computing resources, the model was inverted, and the user worked in the local environment and reached remote shared resources through networking capabilities integrated into the application environment. Local-area networking was now an abundant resource, and the industry wasted no time in exploiting this new-found capability.

But as soon as you wanted to venture further than your Local-Area Network (LAN), the picture changed dramatically. The wide-area networking world was provisioned on the margins of oversupply in the voice industry, and the services offered reflected the underlying substrate of digital voice circuits. The basic unit was a 64-kbps voice channel, which was "groomed" into a data circuit of either 56 or 48 kbps, depending on the particular technology used by the voice carrier. Higher capacities (such as 256 or 512 kbps) were obtained by multiplexing individual circuits together. Even higher-capacity circuits were obtained by using a voice trunk circuit of either 1.544 Mbps (T1) or 2.048 Mbps (E1), again depending on the digital technology used by the voice carrier. Whereas the LANs were now supporting an any-to-any mode of connection, these Wide-Area Networks (WANs) were constructed from point-to-point technologies that were either statically provisioned or implemented as a form of "on-demand" virtual circuit (X.25).
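
The trunk rates follow directly from the 64-kbps voice channel: a T1 multiplexes 24 channels plus 8 kbps of framing, while an E1 multiplexes 32 time slots, of which 30 carry voice. A trivial check of the arithmetic:

    channel = 64_000                 # one digital voice channel, in bits per second

    t1 = 24 * channel + 8_000        # 24 channels plus 8 kbps of framing overhead
    e1 = 32 * channel                # 32 time slots; framing and signalling use two of them

    print(t1)   # 1544000, i.e. 1.544 Mbps
    print(e1)   # 2048000, i.e. 2.048 Mbps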

In the late 1980s users' patience was running thin over having to use an entirely different protocol suite for the wide area as distinct from the local area. Often the wide area required the use of different applications with different naming and addressing conventions. One approach used by many Ethernet switch vendors was to introduce the concept of an Ethernet Serial Bridge. This technology allowed a logical IEEE 802.3 Ethernet to encompass much larger geographic domains, but at the same time protocols that worked extremely efficiently in the local area encountered significant problems when passed through such supposedly "transparent" Ethernet serial bridges.

However, these bridge units allowed significantly larger and more complex networks to be built using Ethernet as the substrate. The Ethernet Spanning Tree Algorithm gained traction as a way to allow arbitrary meshes of interconnected LANs to self-organize into coherent, loop-free topologies while still allowing for failover resilience in the network.
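
The essence of the spanning tree idea can be sketched in a few lines: the bridges elect a root (conventionally the bridge with the lowest identifier) and then keep only the links that lie on a shortest path toward that root, placing the rest into a blocking state. The following is a conceptual sketch over an invented four-bridge topology, not an implementation of the actual IEEE 802.1D protocol:

    from collections import deque

    # Hypothetical bridged topology: bridge ID -> set of neighbouring bridge IDs
    links = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}

    root = min(links)                        # elect the bridge with the lowest ID as root

    # A breadth-first search from the root picks one active path per bridge
    parent = {root: None}
    queue = deque([root])
    while queue:
        bridge = queue.popleft()
        for neighbour in sorted(links[bridge]):
            if neighbour not in parent:
                parent[neighbour] = bridge
                queue.append(neighbour)

    tree = {tuple(sorted((b, p))) for b, p in parent.items() if p is not None}
    all_links = {tuple(sorted((a, b))) for a in links for b in links[a]}
    blocked = all_links - tree

    print(sorted(tree))      # links kept in the loop-free spanning tree
    print(sorted(blocked))   # redundant links placed in the blocking state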

What Has Changed, and What Has Stayed the Same?

So what have we learned from this time?

In the intervening period ISO-OSI waned and eventually disappeared, without ever having enjoyed widespread deployment and use. Its legacy survives in numerous technologies, including the X.500 Directory Service, which is the substrate for today's Lightweight Directory Access Protocol (LDAP) directory services. Perhaps the most enduring legacy of the ISO-OSI work is the use of the "layered stack" conceptual model of network architectures. These days we refer to "Layer 2 Virtual LANs (VLANs)" and "Layer 3 Virtual Private Networks (VPNs)" perhaps without appreciating the implicit reference to this layered stack model.

Of course the ISO-OSI protocol suite was not the only casualty of time. DECnet is now effectively an historic protocol, and Novell's NetWare has also shifted out of the mainstream of networking protocols. Perhaps it may be more instructive to look at those technologies that existed at the time that have persisted and flourished so that they now sit in the mainstream of today's networked world.

Ethernet has persisted, but today's Ethernet networks share little with the technology of the original IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) 10-Mbps common bus network. The entire common bus architecture has been replaced by switched networks, and the notion of self-clocking packets was discarded when we moved to supporting Gbps Ethernets. What has persisted is the IEEE 802.3 packet frame format, and the 1500-octet packet as the now-universal lowest common denominator for packet quantization on today's networks. Why did Ethernet survive while other framing formats, such as High-Level Data Link Control (HDLC), did not?

I could suggest that it was a triumph of open standards, but HDLC was also an open standard. I would like to think that the use of a massive address space in the Ethernet frame, the 48-bit Media Access Control (MAC) address, and the use since its inception of a MAC address registry that attempted to ensure the uniqueness of each Ethernet device were the most critical elements of the longevity of Ethernet.

UNIX has also persisted. Indeed, not only has it persisted, it has proliferated to the extent that it is now ubiquitous, forming the foundation of both the Apple and Android product families. Of the plethora of operating systems that existed in the late 1980s, it appears that all that have survived are UNIX and Windows, although it is unclear how much of Windows 2.0, if anything, still exists in today's Windows 7.

And perhaps surprisingly TCP/IP has persisted. For a protocol that was designed in the late 1970s, in a world where megabits per second was considered to be extremely high speed, and for a protocol that was ostensibly experimental, TCP/IP has proved to be extremely persistent. Why? One clue is in the restrained design of the protocol, where, as we have noted, TCP/IP did not attempt to solve every problem or attempt to be all things for all possible applications. I suspect that there are two other aspects of TCP/IP design that contributed to its longevity.

The first was a deliberate approach of modularity in design. TCP/IP deliberately pushed large modules of function into distinct subsystems, which evolved along distinct paths. The routing protocols we use today have evolved along their own path. The name space and the mapping system that supports name resolution have also evolved along their own path. Perhaps even more surprisingly, the rate control algorithms used by TCP, the workhorse of the protocol suite, have evolved along their own path.

The second aspect is the use of what was at the time a massively sized 32-bit address space, and an associated address registry that allowed each network to use its own unique address space. Like the Ethernet 48-bit MAC address registry, the IP address registry was, in my view, a critical and unique aspect of the TCP/IP protocol suite.
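
For a sense of the scale involved (a simple illustrative calculation), the two registries administered identifier spaces of quite different sizes:

    mac_addresses = 2 ** 48      # the Ethernet 48-bit MAC address space
    ipv4_addresses = 2 ** 32     # the IP 32-bit address space

    print(mac_addresses)    # 281474976710656
    print(ipv4_addresses)   # 4294967296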

Failures

What can we learn from the various failures and misadventures we have experienced along the way?

Asynchronous Transfer Mode (ATM) was a technology that, despite considerable interest from the telephone operators, proved to be too little, too late, and it was ultimately swept aside in the quest for ever larger and ever cheaper network transmission systems. ATM appeared to me to be perhaps the last significant effort to invest value in the network itself by allowing the network to adapt to the differing characteristics of applications.

The underlying assumption behind this form of adaptive networking is that attached devices are simply incapable of understanding and adapting to the current state of the network, and that it is up to the network to contain sufficient richness of capability to present consistent characteristics to each application. However, our experience has been quite the opposite: the attached devices are increasingly capable of undertaking the entire role of service management, and complex adaptive networks are increasingly seen at best as a meaningless duplication of functions, and at worst as anomalous network behavior that the end device needs to work around. So ATM failed to resonate with the world of data networking, and as a technology it has waned. Subsequent efforts to equip IP networks with Quality of Service (QoS) responses, and the more recent, much-hyped Next-Generation Network (NGN) efforts, have failed for much the same basic reasons.

Fiber Distributed Data Interface (FDDI) also came and went. Rings are notoriously difficult to engineer, particularly in terms of managing a coherent clock across all attached devices that preserves the circumference of the ring, as measured in bits on the wire. Building on its earlier lower-speed antecedent, the 4-Mbps token ring, the 100-Mbps FDDI ring attracted considerable interest in the early 1990s. However, it was in effect a dead end in terms of longer-term evolution: increasing the clock speed required either shrinking the ring to unusably small physical diameters or locking the clock signal at extraordinarily high levels of stability that made the cost of the network prohibitive. This industry appears to have a strong desire for absolute simplicity in its networks, and even rings have proved to be a case of making networks too complex.

Interestingly, and despite all the evidence in their favor, the industry is still undecided about open technologies. TCP/IP, UNIX, and the Apache web platform are all in their own way significant and highly persuasive testaments to the power of open-source technologies in this industry, and a wide panoply of open technologies forms the entire foundation of today's networked environment. Yet, in spite of all this accumulated experience, we still see major efforts to promote closed, vendor-specific technologies into the marketplace. Skype is a case in point, and it is possible to see the iPhone and the Kindle in a similar light, where critical parts of the technology are deliberately obscured and aspects of the device's behavior are sealed up or occluded from third-party inspection.

The Next Twenty-Five Years

In wondering about the next 25 years, it may be interesting to look back even further, to the early 1960s, and ask what, if anything, has proved to be enduring from the perspective of the past 50 years. Interestingly, it appears that very little from that time, except for the annoying persistence of Fortran and the ASCII keyboard as the ubiquitous input device, is still a part of today's networked environment. So over a 50-year period much has changed in our environment.

But, interestingly, when we pare the period down to the past 25 years, there is still much that has survived in the computing and networking environment. A Macintosh computer of the late 1980s looks eerily familiar, and although today's systems are faster, lighter, and a lot less clunky, very little has actually changed in terms of the basic interface with the user. A Macintosh of that time could be connected to an Ethernet network, and it supported TCP/IP, and I suspect that if one were to resurrect a Mac system from 1988 loaded with MacTCP and connect it to the Internet today it would be frustratingly, achingly slow, but I would like to think that it would still work! And the applications that ran on that device have counterparts today that continue to use the same mechanisms of interaction with the user.

So if much of today's world was visible 25 years ago, then where are the aspects of change? Are we just touching up the fine-point details of a collection of very well established technologies? Or are there some basic and quite fundamental shifts underway in our environment?

It seems to me that the biggest change is typified in today's tablet and mobile phone computers, and in these devices it is evident that the metaphors of computing and interaction with applications are changing. The promise from 1968 in the film 2001: A Space Odyssey of a computer that was able to converse with humans is now, finally, within reach of commodity computing and consumer products. But it is more than merely the novelty of a computer that can "talk." The constant search for computing devices that are smaller and more ubiquitous now means that the old paradigm of a computer as a "clever" but ultimately bulky typewriter is fading away. Today we are seeing modes of interaction that use gestures and voice, so that the form factor of a computer can become smaller while still supporting a functional and efficient form of interaction with the human user.

It is also evident that the pendulum between distribution and centralization of computing capability is swinging back. With the rise of the heavily hyped Cloud [5, 6], with its attendant collection of data centers and content distribution networks, and the simultaneous shrinking of the end device back to a "terminal" through which the user interacts with views into a larger, centrally managed data store held in this cloud, the centralized model appears to be back in vogue once more.

It is an open question whether these aspects of today's environment will prove to be powerful and persistent themes for the next 25 years, or whether we will see other aspects of our environment seize industry momentum; they are very much just a couple of personal guesses on my part. Moore's Law has proved to be truly prodigious over the past 50 years. It has allowed us to pack what was once truly unbelievable computing capability and storage into astonishingly small packages, and then launch them into the consumer market at prices that appear to fall consistently year after year.

If this capability to package ever greater numbers of transistors into silicon chips continues at the same rate, then whatever happens over the next 25 years, the only limitation may well be our imagination rather than any intrinsic limitation of the technology itself.

References

[0] The Charles Babbage Institute at the University of Minnesota has scanned the complete collection of ConneXions—The Interoperability Report, and it is available at this URL: http://www.cbi.umn.edu/hostedpublications/Connexions/index.html

[1] Starting in April 1989 (Volume 3, No. 4), ConneXions published a long-running series of articles under the general heading "Components of OSI," which described almost every aspect of this protocol suite. The same journal also published articles on many of the other technologies mentioned in this article, including FDDI, AppleTalk, and ATM.

[2] Vint Cerf, "A Decade of Internet Evolution," The Internet Protocol Journal, Volume 11, No. 2, June 2008.

[3] Geoff Huston, "A Decade in the Life of the Internet," The Internet Protocol Journal, Volume 11, No. 2, June 2008.

[4] International Organization for Standardization, "Final text of DIS 8473, Protocol for Providing the Connectionless-mode Network Service," RFC 994, March 1986.

[5] T. Sridhar, "Cloud Computing—A Primer Part 1: Models and Technologies," The Internet Protocol Journal, Volume 12, No. 3, September 2009.

[6] T. Sridhar, "Cloud Computing—A Primer Part 2: Infrastructure and Implementation Topics," The Internet Protocol Journal, Volume 12, No. 4, December 2009.

GEOFF HUSTON B.Sc., M.Sc., is the Chief Scientist at Asia Pacific Network Information Centre (APNIC), the Regional Internet Registry serving the Asia Pacific region. He has been closely involved with the development of the Internet for many years, particularly within Australia, where he was responsible for the initial build of the Internet within the Australian academic and research sector. He is author of numerous Internet-related books, was a member of the Internet Architecture Board from 1999 until 2005, and served on the Board of Trustees of the Internet Society from 1992 until 2001. E-mail: gih@apnic.net