
Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms
(Bansal, Balakrishnan, Floyd & Shenker, 2001)
TCP Congestion Control
• Largely responsible for the stability of
the internet in the face of rapid growth
• Implemented cooperatively by end
hosts (not enforced by routers)
• Works well because most traffic is TCP
TCP vs Real-Time Streaming
• TCP congestion control aggressively modulates bandwidth
• “Real-time” streaming applications
want smooth bandwidth changes
• Alternative protocols may break the
internet
• What about a TCP-compatible protocol?
TCP Compatibility
• A compatible protocol sends data at
roughly the same rate as TCP in the
face of the same amount of packet loss
• This rate is measured at steady state, after the protocol has stabilized against a fixed packet loss rate (a sketch of the underlying throughput model follows below)
• In practice the packet loss rate is highly
dynamic
• So are compatible protocols safe in
practice?
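The usual way to pin down "roughly the same rate as TCP" is a TCP throughput model. A minimal sketch using the simple square-root formula (my choice for illustration; TFRC itself uses the more detailed Padhye equation, which also models retransmission timeouts):

    import math

    def tcp_friendly_rate(mss_bytes, rtt_seconds, loss_rate):
        """Simple TCP throughput model: rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p)."""
        return (mss_bytes / rtt_seconds) * math.sqrt(1.5 / loss_rate)

    # Example: 1460-byte segments, 100 ms RTT, 1% loss -> ~179 KB/s
    print(tcp_friendly_rate(1460, 0.1, 0.01))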
Some Terms
• TCP-equivalent: rate based on AIMD
with the same parameters (a=1, b=2)
• TCP-compatible: over several RTTs,
rate is close to TCP for any given
packet loss rate
• TCP-compatible protocols are slowly
responsive if their rate changes less
rapidly than TCP in response to a
change in the packet loss rate
4.5 alternatives
• TCP(b): shrink the window by a fraction 1/b after a packet loss (b=2 in standard TCP); see the sketch after this list
• SQRT(b): binomial algorithm, a gentler version of TCP: on loss, reduce window w to w - (1/b)·√w
• RAP(b): rate-based TCP equivalent (AIMD applied to the sending rate rather than a window)
• TFRC(b): adjust rate based on an exponentially weighted moving average over the last b loss events (the paper proposes b=6)
• TFRC(b) with self-clocking: following a packet loss, limit the sending rate to the receiver's rate of reception during the previous round trip (the default allows 2x the sending rate); a less strict limit applies in the absence of loss
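A minimal sketch of the two window-decrease rules above, assuming the standard AIMD and binomial formulations (function names are mine):

    def tcp_decrease(w, b=2):
        """TCP(b): shrink the window by a fraction 1/b on loss (b=2 halves it)."""
        return w * (1.0 - 1.0 / b)

    def sqrt_decrease(w, b=2):
        """SQRT(b) binomial decrease: subtract (1/b)*sqrt(w), a much gentler
        cut than TCP's when the window is large."""
        return w - (1.0 / b) * w ** 0.5

    # With a 64-packet window, TCP(2) drops to 32 while SQRT(2) drops to 60:
    print(tcp_decrease(64), sqrt_decrease(64))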
Metrics
• Smoothness: largest ratio of new rate to old rate over consecutive round trips (1 is perfect; see the sketch after this list)
• Fairness(d): round trips until two flows come within d% of equal bandwidth, when one starts with 1 packet/RTT (d=10)
• f(k): fraction of available bandwidth used after k round trips, when the available bandwidth has doubled
• Stabilization time: time until the sending rate is within a factor of 1.5 of its steady-state value at the new packet loss rate
• Stabilization cost: average packet loss rate × stabilization time
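For concreteness, a sketch of the smoothness metric; reading the definition as the larger of two consecutive rates over the smaller (my assumption), so that TCP's halving scores 2:

    def smoothness(rates):
        """Largest ratio between sending rates in consecutive round trips;
        1.0 is perfectly smooth."""
        return max(max(a, b) / min(a, b) for a, b in zip(rates, rates[1:]))

    # A TCP-like sawtooth vs. a TFRC-like gentle ramp:
    print(smoothness([8, 9, 10, 5, 6, 7]))        # 2.0 (the halving dominates)
    print(smoothness([8.0, 8.2, 8.4, 8.1, 8.3]))  # ~1.04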
Simulations
• Flows pass through a single bottleneck router using RED queue management (drop probability grows with the average queue length; a sketch follows below)
• Square-wave competing flow
• Flash mobs of short-lived HTTP requests
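A rough sketch of the RED drop decision, assuming the textbook formulation (the thresholds here are invented, and real RED tracks an exponentially weighted average of the queue length):

    import random

    def red_drop(avg_queue_len, min_th=5.0, max_th=15.0, max_p=0.1):
        """Simplified RED: never drop below min_th, always drop above max_th,
        with drop probability rising linearly in between."""
        if avg_queue_len < min_th:
            return False
        if avg_queue_len >= max_th:
            return True
        p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
        return random.random() < p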
Stabilization cost
• Vertical axis is logarithmic: the worst-case cost is two orders of magnitude higher than TCP. Oh no!
• But the proposed algorithm parameters range from 2 to 6: the worst case is maybe 3x? Not nearly as scary.
• b=256 is much more expensive, yet not appreciably smoother
Stabilization time
• Remember, the proposed algorithm parameters range from 2 to 6
• Large numbers included to demonstrate the theoretical value of self-clocking?
Flash mob response
• In all three cases, total throughput appears close to total bandwidth
• Again, a demonstration of self-clocking. Where is TFRC(6)?
• Is TFRC-SC(256) too fair? Streaming applications might prefer the middle path
• This test is one TFRC flow vs. 1000 HTTP flows. What happens if there are 10, or 100?
Long-term Fairness
• x-axis: square-wave period; the competing flow consumes 2/3 of the available bandwidth
• Link utilization is poor when the square-wave period is 0.2 seconds
• TCP gets more newly-available bandwidth than TFRC when the period is from 0.4 to about 10 seconds
• TCP never loses
Transient Fairness
• TCP(b) is inverted in these graphs: here b is the decrease fraction, so W' = (1 - b)·W
• TCP takes a long time to converge when b is low
• TFRC takes a long time when b is high
• Neither takes very long when b is reasonable
• TFRC convergence time grows less than linearly with b
Aggressiveness
• f(k) is the fraction of available bandwidth used after k round trips, when bandwidth has doubled
• As expected, slowly-responsive protocols take much longer to take advantage of an increase in bandwidth
Effectiveness
• TFRC = TFRC(6)
• When bandwidth oscillates over a 3:1 range, overall link utilization is comparable across protocols
• Over a 10:1 range, TFRC can perform quite badly
• Performance is good when the burst period is within the RTT history
Smoothness
• Best case (steady packet loss rate): TFRC is perfectly smooth, while TCP exhibits its sawtooth pattern
• TFRC performs well under mildly bursty conditions (3 rapid loss events followed by 3 slow = 6 events, which fits within the TFRC(6) history)
• In this rosy scenario, TFRC is not only smoother but gets better throughput
Awkwardness
• It is possible to find a burst pattern for any TFRC parameter that results in very bad performance - poor utilization and no smoothness: just string together enough loss events to fill the TFRC history window, then suddenly remove congestion (illustrated below)
• How likely is this to occur naturally? Not sure.
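A hypothetical illustration of that pattern (the arithmetic is simplified: real TFRC weights its intervals and treats the open, loss-free interval specially). Fill TFRC(6)'s history with short loss intervals, then stop all loss and watch how slowly the estimated loss rate decays while the link sits idle:

    # Hypothetical numbers: a loss every 10 packets fills TFRC(6)'s history.
    history = [10] * 6
    for sent_since_loss in (50, 100, 500, 1000):
        intervals = history[-5:] + [sent_since_loss]  # include the open interval
        p = 1.0 / (sum(intervals) / len(intervals))
        print(sent_since_loss, round(p, 4))
    # Even 1000 loss-free packets later the estimate is still ~0.006, so the
    # flow keeps sending slowly on an idle link; when the short intervals
    # finally age out, the rate jumps: poor utilization, then no smoothness.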
Conclusions
• All of the proposed TCP-compatible
alternatives appear safe (assuming this
paper’s traffic model is reasonable). If
anything, they may be too fair.
• Self-clocking is a useful fail-safe
• Slowly-responsive protocols usually provide smooth bandwidth changes
• But not always
Open questions
• All traffic models were periodic with a fixed
small number of flows. What happens when the
number of flows varies?
• Most tests relied on RED queue management.
How would they look under drop-tail (which is
currently prevalent)?
• How does self-clocking affect TFRC smoothness
(given that smoothness is TFRC’s principal
advantage)?
• Is smoothness at the granularity of a round trip
particularly useful?