Packet Drop Avoidance for High-speed Network Transmission Protocol
Jin, Guojun
Distributed Systems Department
Lawrence Berkeley National Laboratory
1 Cyclotron Road, Berkeley, CA 94720
g-jin@lbl.gov

Abstract: As network bandwidth continues to grow
and longer paths are used to exchange large scientific
data between storage systems and GRID computation,
it has become increasingly obvious that there is a need
to deploy a packet drop avoidance mechanism into
network transmission protocols. Current end-to-end
congestion avoidance mechanisms [1] used in
Transmission Control Protocol (TCP) have worked
well on low bandwidth-delay-product networks, but on
newer high bandwidth-delay networks they have been
shown to be inefficient and prone to instability. This is
largely due to increased network bandwidth coupled
with changes in internet traffic patterns. These changes
come from a variety of new network applications that
are being developed to take advantage of the increased
network bandwidth. This paper will examine the end-
to-end congestion avoidance mechanism and perform a
step-by-step analysis of its theory. In addition we will
propose an alternative approach developed as part of a
new network transmission protocol. Our alternative
protocol uses a packet drop avoidance (PDA)
mechanism built on top of the maximum burst size
(MBS) theory combined with a real-time available
bandwidth algorithm.
I. INTRODUCTION
Basic TCP congestion control theory is well-known
and a number of studies [2][3][4][6] have analyzed it
over the past few years. Here, we take a different
approach and analyze the window-based congestion
control mechanism of TCP.
Many people have worked on improving the TCP
congestion control algorithm. TCP is unable to utilize all
the available bandwidth on high-bandwidth and/or high-
delay paths due to its conservative congestion avoidance
algorithm. In fact, TCP can become quite unstable under
these conditions. One problem is that TCP has no
mechanism to distinguish between the slowest (narrow)
link and a congested (tight) link. This means that
TCP's algorithm will continue to increase the congestion
window (assuming tuned large buffers) to increase the
sending rate as long as there is no further packet loss. This
is problematic since packet drop could be caused by
congestion at the narrow link. On a high-speed and/or
long-delay path, when a congestion signal comes back to
the sender, the outstanding data stream will be the
average size of the congestion window, which is computed
from the acknowledgments during the last round-trip-time
(RTT) period. Consider a 100 ms RTT, 40 Gb/s path:
TCP needs to send a burst as large as 500 MBytes
(333,333 1500-byte packets) of data during one RTT to
detect the congestion trend. This big burst of traffic plus
existing cross traffic will exceed the bottleneck link router
queue and cause up to 50% packet loss (more than 160K
packets in the above example). A self-clocking system could
help reduce the loss probability when cross traffic is less
bursty, but this may not be the condition under which the
current network is dropping packets.
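The burst-size arithmetic above is just the bandwidth-delay product; a short calculation (a sketch, using the link rate, RTT, and packet size quoted in the text) reproduces the quoted figures:

```python
def burst_per_rtt(link_bps, rtt_s, pkt_bytes=1500):
    """Bytes and full-size packets a sender must have in flight
    to keep the path busy for one RTT (bandwidth-delay product)."""
    burst_bytes = link_bps * rtt_s / 8  # bits per RTT -> bytes
    return burst_bytes, int(burst_bytes // pkt_bytes)

bytes_in_flight, pkts = burst_per_rtt(40e9, 0.100)
print(bytes_in_flight / 1e6, pkts)  # -> 500.0 (MB) and 333333 packets
```

For the 100 Gb/s case discussed below, the same formula gives 1.25 GB in flight per RTT, consistent with a previous burst on the order of 1 GB.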
An examination of the congestion avoidance
mechanism shows that we see bursts in two different
phases of the TCP congestion control algorithm: slow start
and congestion avoidance. In the slow start phase, the
algorithm doubles the size of the burst until packet loss
occurs, probing for the ceiling of the congestion window.
After seeing packet loss, standard TCP congestion control
reduces the congestion window to one half the current
window size. If TCP sees more packet loss, it will reduce
the window further. This is called "multiplicative
decrease", which prevents further packet loss from causing
congestion collapse. The slow start algorithm assumes that
the best possible congestion window lies between the last burst
(congestion window) and the previous burst (one half of
the congestion window) since the previous burst did not
cause packet loss. However, this does not efficiently avoid
packet loss, especially when the bandwidth or path latency
is high. For example, on a 100ms RTT and 100Gb/s path,
the previous burst can be 1 GB, and doubling it can cause
the additional 1 GB of data (about 666,000 1500-byte packets)
to be lost. Since acknowledgments are asynchronously fed back
to the sender, they can cause further fluctuations when the
cross traffic is more dynamic. The key issue in the slow
start phase arises during the last few window adjustments. In a
better TCP design, the last few probes should be used to
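The doubling-then-halving behavior described above can be illustrated with a toy window trace (a hypothetical simulation for illustration only, not the paper's PDA mechanism; the capacity threshold and loss model are deliberate simplifications):

```python
def tcp_window_trace(capacity_pkts, init_cwnd=1, rtts=12):
    """Toy trace of the congestion window (in packets) per RTT:
    slow start doubles cwnd until the burst exceeds the path
    capacity (loss), then multiplicative decrease halves it."""
    cwnd, trace = init_cwnd, []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd > capacity_pkts:      # burst overflows bottleneck queue -> loss
            cwnd = max(1, cwnd // 2)  # multiplicative decrease
        else:
            cwnd *= 2                 # slow-start doubling
    return trace

print(tcp_window_trace(100))  # -> [1, 2, 4, 8, 16, 32, 64, 128, 64, 128, 64, 128]
```

Note how the final doubling overshoots capacity by nearly a full window, which is exactly the large-burst loss the text describes; on a real 100 Gb/s, 100 ms path each unit here would represent hundreds of thousands of packets.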
Jin, Guojun. Packet Drop Avoidance for High-speed network transmission protocol, article, May 1, 2004; Berkeley, California. (https://digital.library.unt.edu/ark:/67531/metadc777065/m1/1/: accessed April 25, 2024), University of North Texas Libraries, UNT Digital Library, https://digital.library.unt.edu; crediting UNT Libraries Government Documents Department.