Innovative File Transfer and Management

White Papers on RELIA™ and TCP


TCP LIMITATIONS ON FILE TRANSFER PERFORMANCE HAMPER THE GLOBAL INTERNET

By Bill Gibson, September 25, 2003; updated September 2006

The Global Internet promises efficient and inexpensive worldwide communication for individuals and businesses. Measurements of the evolving Global Internet show that congestion (packet loss) and long round-trip delays (ping times) are facts of life in this environment. TCP (Transmission Control Protocol) is the basic transport for Internet applications such as email, web browsing, and file transfer. This paper presents information on the performance of TCP under Global Internet conditions.

Limitation 1: You can't go faster than your slowest link.

If the slowest link between two communicating hosts is limited to x kilobits/second, then you will not get more than x kilobits/second of information throughput.
For this paper we assume the data presented to TCP is already compressed or encrypted, so further compression attempts are not fruitful.

Limitation 2: You can't get more throughput than your window size divided by your round trip time.

RFC 1323 does a good job of discussing this limitation, and TCP implementations that support RFC 1323 can achieve good throughput on satellite T1 (1.5 megabit/sec) links if there is no packet loss.

rfc1323.txt
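
To make the window limitation concrete, here is a minimal sketch in Python (the numbers are illustrative). Without the RFC 1323 window-scaling option a TCP window cannot exceed 64 kilobytes, which is too small to fill a T1 over a 600 millisec satellite round trip:

    def max_throughput_bps(window_bytes, rtt_seconds):
        # A sender can have at most one window of unacknowledged
        # data in flight per round trip.
        return window_bytes * 8 / rtt_seconds

    # Without RFC 1323 window scaling, the window is capped at 64 KB:
    print(max_throughput_bps(65_535, 0.600))     # ~874,000 bit/s, under 1 Mbit/s
    # A window sized to the bandwidth-delay product removes the cap:
    bdp_bytes = int(1_544_000 * 0.600 / 8)       # T1 rate x RTT, about 116 KB
    print(max_throughput_bps(bdp_bytes, 0.600))  # 1,544,000 bit/s, full T1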

Limitation 3: Packet Loss combined with long Round Trip Time

RFC 3155 "End-to-end Performance Implications of Links with Errors" Provides the best official summary of the results when Fast File Transfer is attempted using TCP when delay and packet loss are significant.

rfc3155.txt

The graph at the top of this paper (rfc3155.gif) shows the theoretical maximum sustained TCP throughput, and it agrees well with our actual experience using Gigabyte Express to provide Fast File Transfer over long-RTT networks.
Looking at the points at a packet loss rate of 0.01 (1 in 100 packets lost, generally considered medium congestion), we find the theoretical maximum sustained TCP throughput (reproduced by the sketch following this list) is:
135 kbits/sec at 1 second RTT
225 kbits/sec at 600 millisec RTT (typical satellite RTT)
449 kbits/sec at 300 millisec RTT
1200 kbits/sec at 100 millisec RTT (typical domestic Internet RTT)
1780 kbits/sec at 60 millisec RTT
2800 kbits/sec at 30 millisec RTT
4510 kbits/sec at 10 millisec RTT (typical within a city)
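
These figures follow the shape of the well-known loss-based throughput bound of Mathis et al.: rate <= (MSS / RTT) x (C / sqrt(p)). The sketch below assumes an MSS of 1460 bytes and C of about 1.22; rfc3155.gif does not state its constants, so both are our assumptions:

    import math

    def mathis_bound_bps(rtt_s, loss_rate, mss_bytes=1460, c=1.22):
        # Steady-state bound: rate <= (MSS / RTT) * (C / sqrt(p))
        return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

    for rtt in (1.0, 0.600, 0.300, 0.100, 0.060, 0.030, 0.010):
        kbps = mathis_bound_bps(rtt, 0.01) / 1000
        print(f"RTT {rtt * 1000:5.0f} ms: {kbps:7.0f} kbit/s")  # ~142 kbit/s at 1 s RTT

The sketch agrees with the long-RTT entries above to within a few percent; the short-RTT entries fall well below this simple bound, presumably because the graph's model also accounts for retransmission timeouts.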

What are the Global Internet Conditions?

The Internet Traffic Report monitors the flow of data around the world. It then displays a value between zero and 100. Higher values indicate faster and more reliable connections.

www.internettrafficreport.com gathers statistics on Global Traffic, Response Time, and Packet Loss.

These charts from September 2006 show an average RTT of 140 millisec and packet loss of 4.5 percent.
TCP under these conditions cannot sustain better than 200 kilobits/sec, even if both ends have megabit Internet access.
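
As a rough cross-check, plugging those averages into the same loss-based bound used above (again assuming a 1460-byte MSS and C of 1.22) gives an optimistic ceiling of roughly 480 kilobits/sec; at 4.5 percent loss, retransmission timeouts become frequent and push the sustainable rate well below that ceiling:

    import math

    # September 2006 averages in the Mathis-style bound; this ignores
    # timeouts and so overstates what TCP can actually sustain.
    rtt, p = 0.140, 0.045
    print((1460 * 8 / rtt) * (1.22 / math.sqrt(p)))   # ~480,000 bit/s ceiling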

How are Packets Lost?

Packets lost due to bit errors

Any bit error will cause the node receiving the packet to discard it. If a link has a bit error rate of 1 in 10**9 and packets are 1500 bytes long, then each of the 12,000 bits in the packet has that chance of being corrupted. The chance of packet loss on this link is 12,000 X 10**-9, or 1.2 X 10**-5. Packets often traverse multiple links on the way from source to destination, and 16 successive 10**-9 links in a row results in a 1.9 X 10**-4 chance of packet loss. This source of packet loss usually arises on links with through-the-air transmission such as satellite or wireless.
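
The arithmetic above can be checked directly; a short sketch, assuming independent bit errors:

    def packet_loss_prob(ber, packet_bits=1500 * 8, links=1):
        # Probability the packet hits at least one bit error somewhere
        # along the path, assuming every bit error is independent.
        return 1 - ((1 - ber) ** packet_bits) ** links

    print(packet_loss_prob(1e-9))            # ~1.2e-5 on one link
    print(packet_loss_prob(1e-9, links=16))  # ~1.9e-4 across 16 such links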

Packets lost due to congestion

TCP connection start-up

If a TCP session is established and a new TCP connection starts up using the same bottleneck link, the new connection doubles the number of segments sent each round trip time (exponential increase) until the capacity of the bottleneck router to store and forward packets is exceeded. The new connection detects packet loss, now has an estimate of the bandwidth available, and will continue at this estimated rate. If multiple TCP connections were sharing the bottleneck link, then many of them will also experience packet loss whenever a new TCP connection is established. If the connections sharing the link are multiple FTP sessions, then each new file sent is a new TCP connection and an occasion for packet loss.
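
A minimal sketch of this exponential increase, with a hypothetical bottleneck capacity of 100 segments:

    # Slow start: the congestion window doubles every RTT until it
    # overruns the bottleneck and a packet is dropped.
    bottleneck_segments = 100    # hypothetical pipe-plus-queue capacity
    cwnd, rtts = 1, 0
    while cwnd <= bottleneck_segments:
        cwnd *= 2                # exponential increase: 1, 2, 4, 8, ...
        rtts += 1
    print(f"first loss after {rtts} RTTs, at cwnd = {cwnd}")   # 7 RTTs, cwnd = 128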

TCP bandwidth discovery

A steady-state TCP connection attempts to slowly increase the amount of bandwidth used, in order to scavenge bandwidth made available as other TCP connections drop out. This continuing increase eventually leads to packet loss.
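
A sketch of this steady-state probing (additive increase each round trip, halving on loss), again with a hypothetical 100-segment bottleneck:

    capacity = 100               # hypothetical bottleneck capacity, in segments
    cwnd, losses = 64.0, 0
    for _ in range(200):         # 200 round trips of steady-state behavior
        cwnd += 1                # additive increase: probe for freed bandwidth
        if cwnd > capacity:      # queue overflows: a packet is lost
            cwnd /= 2            # multiplicative decrease after the loss
            losses += 1
    print(f"{losses} self-inflicted loss events in 200 RTTs")   # roughly one per 50 RTTs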

How does RTT vary?

The bottleneck router has two choices when a packet comes in that cannot be sent out because the outgoing link is full. It may either discard the packet or buffer the packet. Buffering the packet increases the RTT as perceived by the sending host. The choice of optimum buffer length is not straightforward. Increasing congestion may show up as either increasing packet loss or increasing RTT.
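
The extra delay from buffering is simple to quantify: everything already queued must drain onto the outgoing link ahead of a newly arriving packet. A sketch with illustrative numbers:

    def queuing_delay_s(queued_bytes, link_bps):
        # Time for everything already queued to drain onto the wire.
        return queued_bytes * 8 / link_bps

    # A 64 KB buffer on a T1, when full, adds about a third of a second
    # to the RTT of every packet behind it:
    print(queuing_delay_s(65_536, 1_544_000))   # ~0.34 s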

Fast File Transfer Problem Summary

From RFC 3155: "The Internet does not provide a widely-available loss feedback mechanism that allows TCP to distinguish between congestion loss and transmission error. Because congestion affects all traffic on a path while transmission loss affects only the specific traffic encountering uncorrected errors, avoiding congestion has to take precedence over quickly repairing transmission errors. This means that the best that can be achieved without new feedback mechanisms is minimizing the amount of time that is spent unnecessarily in congestion avoidance."

Fast File Transfer Improvement Approaches

The FAST TCP Project at CalTech

The FAST TCP project's goal is to develop theory, algorithms, and prototypes to design, demonstrate, and deploy protocols that are scalable to arbitrary network capacity and size in order to fulfill the vision of ultrascale networking.
They are investigating replacements for the throughput equation given under Limitation 3, and using TCP Vegas-like approaches that detect increases and decreases in congestion by increases and decreases in RTT.
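
The following sketch shows the general shape of a Vegas-style adjustment, not FAST TCP's actual algorithm; the alpha and beta thresholds are illustrative:

    def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=2.0, beta=4.0):
        # Expected minus actual throughput, scaled back to segments:
        # an estimate of how many of our segments sit in bottleneck queues.
        diff = (cwnd / base_rtt - cwnd / current_rtt) * base_rtt
        if diff < alpha:
            return cwnd + 1      # little queuing: safe to send faster
        if diff > beta:
            return cwnd - 1      # queues building: back off before loss
        return cwnd

    print(vegas_adjust(32, 0.100, 0.1005))   # 33: RTT near base, speed up
    print(vegas_adjust(32, 0.100, 0.140))    # 31: RTT inflated, back off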

Parallel TCP

While any single TCP stream is subject to Limitation 3, it is possible to open multiple parallel TCP streams until the number of streams is sufficient to achieve the desired throughput.
This approach is still subject to waiting for the retransmission of all lost packets before the complete stream can be reconstructed at the receiving end.
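
A sketch of the arithmetic, reusing the loss-based per-stream bound from Limitation 3 (the MSS and constant C are again our assumptions):

    import math

    def streams_needed(target_bps, rtt_s, loss, mss_bytes=1460, c=1.22):
        # Each stream is individually capped by the Limitation 3 bound;
        # n parallel streams scale the aggregate roughly n-fold.
        per_stream = (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss))
        return math.ceil(target_bps / per_stream)

    # Hypothetical example: filling a 1.5 Mbit/s path at 600 ms RTT and 1% loss
    print(streams_needed(1_500_000, 0.600, 0.01))   # 7 streams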

Niwot's RELIA(tm) Technology

The RELIA Technology is an implementation of US Patent number 6,445,717, which discloses the technique of adding redundant packets to the information stream such that, in most cases of packet loss, the receiving end may reconstruct the lost information without needing retransmission. Should the reconstruction not succeed, retransmission is still available. US Patent number 6,895,019 covers these techniques implemented as an enhancement to TCP.
Niwot's RELIA implementation detects increases and decreases in congestion by keeping track of increases and decreases in RTT.
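
The following sketch illustrates the general idea of packet-level redundancy with a single XOR parity packet; it is a textbook illustration, not the coding scheme disclosed in the patents:

    # NOT Niwot's patented scheme -- a generic illustration of the idea:
    # one XOR parity packet per group lets the receiver rebuild any
    # single lost packet in that group without a retransmission.
    def xor_parity(packets):
        parity = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                parity[i] ^= byte
        return bytes(parity)

    group = [b"AAAA", b"BBBB", b"CCCC"]      # equal-length data packets
    parity = xor_parity(group)               # sent alongside the group

    # Packet 1 is lost in transit; the XOR of everything that did arrive
    # regenerates it, because each surviving byte cancels out pairwise:
    recovered = xor_parity([group[0], group[2], parity])
    assert recovered == b"BBBB"              # rebuilt with no retransmission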

NIWOT NETWORKS, INC. OVERVIEW

Company History:

Niwot Networks was formed in 1988 to provide high-speed communication boards and drivers for PC-based routers, with an emphasis on customer-supported product enhancements and bootstrap financing. These first routers were Novell-based, and customer complaints about performance over thousand-mile distances led to the development of DFT (Direct File Transfer), which provided full theoretical throughput over T1 terrestrial lines, and DFT/Mac for full theoretical throughput over T1 satellite links. With the addition of on-the-fly compression and support of Internet Protocols, these products evolved into today's Gigabyte Express™ for Windows and Macintosh. These are simply the fastest long-haul file transfer products available today, and Niwot hardware is no longer required to achieve full throughput.

While making Gigabyte Express the fastest way to move files over the Internet, Niwot found that packet loss and delay on the Internet hobble the performance of all Internet applications. This performance can be dramatically improved with Niwot's RELIA technology (US Patents 6,445,717 and 6,895,019).

Gigabyte Express™

Gigabyte Express file transfer software has been available since 1995 and today is offered for Windows and Macintosh platforms. Gigabyte Express is the fastest way to move large files over long distances for PC-to-PC, Mac-to-Mac, or PC-to-Mac transfers. Niwot's Gigabyte Express is now the easiest, fastest, and most trustworthy way to move large files over the Internet and private intranets. The product features on-the-fly lossless compression licensed from STAC. Our customers typically report throughput of 2 to 5 times that of FTP.

In addition to lossless compression and high performance, Gigabyte Express™ File Transfer offers features for ease of use, unattended operation, and control of the amount of bandwidth used.

Niwot's RELIA™ technology (US Patents 6,445,717 and 6,895,019) is an extension to TCP which adds redundant information that enables the receiver to reconstruct lost information without requiring the time delay of retransmission. The first application to support RELIA is Niwot's Gigabyte Express for Windows version 5.0. RELIA™ technology is especially beneficial over international and satellite Internet Protocol links.

Updated September 2006

Copyright 1995-2009, Niwot Networks, Inc. All Rights Reserved
Gigabyte Express, You've Got Files!, and RELIA are trademarks of Niwot Networks, Inc.