The Internet Protocol Journal - Volume 9, Number 4

Letters to the Editor

Time to Live

As I read the very fine article entitled "IPv6 Internals" (IPJ Volume 9, No. 3, September 2006), I was prompted to review the history of the Time to Live (TTL) as discussed in section 5.3.1 of RFC 1812. Being gray of head, I find that little facts from other eras come quickly to mind. The Xerox Network Systems (XNS) Internet Transport on which Novell NetWare was based required that no router ever store a packet in queue longer than 6 seconds. The requirements of RFC 791 were also softened in RFC 1812; rather than requiring the TTL to be decremented at least once and additionally once per second in queue, that document requires that the TTL be treated as a hop count and—reluctantly—reduces the treatment of TTL as a measure of time to a suggestion.

The reason for the change was the increasing deployment of higher-speed lines. A 1,500-byte datagram occupies 12,000 bits (and an asynchronous line sends those as 15,000 bits), which at any line speed below 19.2 kbps approximates or exceeds 1 second per datagram. Any time there are several datagrams in queue, the last message in the queue is likely to sit for many seconds, a situation that in turn can affect the behavior of TCP and other transports. However, 56-kbps lines became common in the 1980s, and T1 and T3 lines became common in the 1990s. Today, hotels generally offer Ethernet to the room; we have reports of edge networks connected to the Internet at 2.5 Gbps, and residential broadband in Japan and Europe at 26 Mbps per household. At 56 kbps, a standing queue of five messages is required to insert a 1-second delay, and at T1 it requires a queue depth of more than 100 messages. At higher speeds, the issue becomes less important.
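To make the arithmetic concrete, here is a small Python sketch of the serialization delays behind these figures. The 1,500-byte datagram size and the line speeds come from the letter; the sketch uses the synchronous 12,000-bit case rather than the 15,000 bits of an asynchronous line.

    # Serialization delay of one 1,500-byte datagram, and the standing queue
    # depth that adds roughly one second of delay for the packet at the tail.
    DATAGRAM_BITS = 1500 * 8          # 12,000 bits (15,000 on an asynchronous line)

    def serialization_delay(line_bps):
        """Seconds needed to clock one datagram onto the wire."""
        return DATAGRAM_BITS / line_bps

    for label, bps in [("9.6 kbps", 9600), ("19.2 kbps", 19200),
                       ("56 kbps", 56000), ("T1 (1.544 Mbps)", 1544000)]:
        delay = serialization_delay(bps)
        queue_depth = 1.0 / delay     # queued datagrams per second of added delay
        print("%-16s %7.1f ms per datagram, ~%.0f queued datagrams per second of delay"
              % (label, delay * 1000, queue_depth))

At 56 kbps this reproduces the letter's figure of roughly five queued messages per second of delay, and at T1 roughly 129.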

That is not to say that multisecond queues are now irrelevant. Although few networks are being built today by concatenating asynchronous links, in developing countries—and on occasion even in hotels here in Santa Barbara, California—people still use dialup lines. In Uganda, some networks that run over the instant messaging capacity of GSM [Global System for Mobile Communications], which is to say using 9,600-bps datagrams, have been installed under the supervision of Daniel Stern and UConnect.org. Much of the world still measures round-trip times (RTTs) in seconds, and bit rates in tens of kbps.

The TCP research community (one member of which recently asked me whether it was necessary to test TCP capabilities below 2 Mbps), and the IETF community in general, would do well to remember that the ubiquity of high bandwidth in Europe, North America, Australia, and Eastern Asia in no sense implies that it is available throughout the world, or that satellite communications and other long-delay pipelines can now be ignored.

—Fred Baker, Cisco Systems
fred@cisco.com

The author responds:

Although to the casual observer the evolution of the Internet seems one of continuously increasing speed and capacity, reality is slightly different. The original ARPANET used 50-kbps modems in the late 1960s. In the next three decades or so, the maximum bandwidth of a single link increased by a factor of 200,000, to 10 Gbps. Interestingly enough, the minimum speed used for Internet connections went down to a little under 10 kbps, so where once the ARPANET had a uniform link speed throughout the network, the difference between the slowest and the fastest links is now six orders of magnitude. The speed difference between a snail and a supersonic fighter jet is only five orders of magnitude. Amazingly, the core protocols of the Internet—IP and TCP—can work across this full speed or bandwidth gamut, although changes were made to TCP to handle both extremes better, most notably in RFCs 1144 and 1323.

Even though I don't think keeping track of the time that packets are stored in buffers, as suggested in the original IPv4 specification, makes much sense even in slow parts of the network, Fred makes a good point: many Internet users still have to deal with speeds at the low end of the range; some of us only occasionally when connecting through a cellular network, others on a more regular basis. Even in Europe and the United States many millions of Internet users connect through dialup. For someone who is used to having always-on multimegabit connectivity, going back to 56 kbps or, worse, 9,600 bps can be a bizarre experience. Many of today's Websites are so large that they take minutes to load at this speed. Connecting to my mail server using the Internet Message Access Protocol (IMAP) takes 15 minutes. And one of my favorite relatively new applications, podcasting, becomes completely unusable: downloading a 50-minute audio program takes hours at modem speeds.

And that's all IPv4. It is possible to transport IPv6 packets over the Point-to-Point Protocol (PPP) that is used for almost all low-speed connections, but in practice this isn't workable because there are no provisions for receiving a dynamic address from an ISP [Internet Service Provider]. With IPv4, Van Jacobson did important work to optimize TCP/IP for low-speed links (RFC 1144). By reducing the Maximum Transmission Unit (MTU) of the slow link and compressing the IP and TCP headers, it was possible to achieve good interactive response times: the smaller MTU avoids the situation where a small packet gets stuck behind a large packet that may take a second or more to transmit over a slow link, while header compression reduces the per-packet overhead. Although the IETF later did work on IPv6 header compression, it doesn't look like anyone has bothered to implement these techniques, and the minimum MTU of 1,280 bytes creates significant head-of-line blocking when IPv6 is used over slow links.
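As a rough illustration of the head-of-line blocking argument, the Python sketch below compares the worst-case wait behind a full-sized packet on a 9,600-bps link (the GSM figure mentioned earlier) for the IPv6 minimum MTU versus a reduced IPv4 MTU. The 296-byte value is one commonly configured on SLIP and PPP links of that era, not a number from the letter.

    # Worst case for an interactive packet on a slow serial link: it must wait
    # for the maximum-sized packet in front of it to finish transmitting.
    LINK_BPS = 9600

    def blocking_ms(mtu_bytes, line_bps=LINK_BPS):
        """Milliseconds needed to serialize one maximum-sized packet."""
        return mtu_bytes * 8 * 1000.0 / line_bps

    for label, mtu in [("IPv6 minimum MTU", 1280),
                       ("reduced IPv4 MTU (commonly 296 bytes)", 296)]:
        print("%-40s %4d bytes -> up to %4.0f ms of blocking" %
              (label, mtu, blocking_ms(mtu)))

At 9,600 bps the 1,280-byte minimum already delays a keystroke by more than a second, which is exactly the effect the reduced MTU was meant to avoid.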

Another example where low-bandwidth considerations are ignored is the widespread practice of enabling the RFC 1323 TCP high-performance extensions for all TCP sessions. RFC 1323 includes two mechanisms: a window scale factor that allows much larger windows in order to attain maximum performance over high-bandwidth links with a long delay, and a timestamp option in the TCP header that allows for much more precise round-trip time estimations. With these options enabled, every TCP segment carries a 10-byte timestamp option (usually padded to 12 bytes) in its header. In addition to increasing overhead, the timestamp option introduces an unpredictable value into the TCP header that makes it impossible to use header compression, thereby negating the usefulness of RFC 1144. To add insult to injury, almost no applications allocate enough buffer space to actually use the RFC 1323 mechanisms.
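A back-of-the-envelope comparison for a one-byte interactive segment shows why this matters on slow links. In the Python sketch below, the 4-byte compressed header is a typical RFC 1144 result, and the 12 bytes for the timestamp option assume the usual two padding bytes; the exact figures are illustrative.

    # Approximate bytes on the wire for a 1-byte interactive TCP segment.
    IP_TCP_HEADER = 40       # uncompressed IPv4 + TCP header, no options
    VJ_COMPRESSED = 4        # RFC 1144 typically reduces the 40 bytes to about 3-5
    TIMESTAMP_OPT = 12       # 10-byte timestamp option plus 2 bytes of padding
    PAYLOAD = 1              # a single keystroke

    cases = [
        ("RFC 1144 compression, no timestamps", VJ_COMPRESSED + PAYLOAD),
        ("no compression, no timestamps", IP_TCP_HEADER + PAYLOAD),
        ("timestamps enabled (compression defeated)", IP_TCP_HEADER + TIMESTAMP_OPT + PAYLOAD),
    ]
    for label, total in cases:
        print("%-42s %3d bytes" % (label, total))

Going from about 5 bytes to 53 bytes for every keystroke is roughly a tenfold increase in what has to cross the slow link.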

Moral of the story for protocol designers and implementers: spend some time thinking about how your protocol works over slow links. You never know when you'll find yourself behind just such a link.

—Iljitsch van Beijnum
iljitsch@muada.com

Gigabit TCP and MTU Size

I appreciated Geoff Huston's thorough description of the current obstacles and research involving Gigabit TCP (IPJ, Volume 9, No. 2, June 2006). I have already shown the article to many of my colleagues. It appears that Geoff did not address one of the solutions, which is to increase the networkwide Maximum Transmission Unit (MTU). In theory that would allow the existing TCP congestion control to handle higher-speed connectivity. Perhaps he did not address the issue because it is infeasible to increase the MTU setting Internetwide, especially with 10-Gigabit Ethernet interfaces sporting a default MTU setting of 1,500 bytes. On the other hand, projects that own their own backbone infrastructure may find increasing the default MTU a feasible approach.

For more information about raising the MTU, please see:
http://www.psc.edu/~mathis/MTU/

—Todd Hansen, UCSD/SDSC
tshansen@hpwren.ucsd.edu

The author responds:

Yes, it's true that increasing the size of the packet makes sound sense when the available bandwidth has increased. If the bandwidth increases by one order of magnitude and the packet size is increased by the same amount, then it is theoretically possible to effectively increase the throughput of the system without changing the packet processing load.
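A simple way to see this is to look at the packet rate a forwarding device has to sustain. The Python sketch below uses illustrative figures (1-Gbps and 10-Gbps links, 1,500-byte and 15,000-byte MTUs) that are my own rather than numbers from the article.

    # Packets per second at a given bandwidth and MTU: scaling the MTU with the
    # bandwidth keeps the per-packet processing rate (roughly) constant.
    def packets_per_second(bandwidth_bps, mtu_bytes):
        return bandwidth_bps / (mtu_bytes * 8)

    print(packets_per_second(1e9, 1500))     # 1 Gbps, 1,500-byte MTU -> ~83,000 pkt/s
    print(packets_per_second(10e9, 1500))    # 10 Gbps, same MTU      -> ~833,000 pkt/s
    print(packets_per_second(10e9, 15000))   # 10 Gbps, 10x the MTU   -> ~83,000 pkt/s again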

Effectively, if you regard the protocol interaction as a time sequence, then a coupling of increased bandwidth and comparably increased packet size preserves the time sequence of the interaction. Of course, as bandwidth on the network has increased we have not seen a comparable increase in MTU sizes; today's networks exhibit a wide variety of MTUs, and as a consequence Path MTU Discovery, together with the coherent transmission of the related ICMP [Internet Control Message Protocol] messages, becomes more critical. Although the article concentrated on modifications to the TCP control algorithm, there is no doubting the importance of high-speed TCP senders and receivers using large TCP buffers to maximize the payload throughput potential.

—Geoff Huston, APNIC
gih@apnic.net

Drop us a Line!

We welcome any suggestions, comments, or questions you may have regarding anything you read in this journal. Send e-mail to ipj@cisco.com. Also, don’t forget to let us know if your delivery address changes. You can use the online subscription system to change your own information by supplying your Subscription ID and e-mail address. The system will then send you an e-mail with a "magic" URL that will allow you to update your database record. If you don’t have your Subscription ID or encounter any difficulties, just send us the updated information via e-mail.

—Ole J. Jacobsen, Editor and Publisher
ole@cisco.com