
Lecture 10:
TCP Performance
ITCS 6166/8166 091
Spring 2007
Jamie Payton
Department of Computer Science
University of North Carolina at Charlotte
February 12, 2007
Slides adapted from:
Congestion slides for Computer Networks: A Systems Approach (Peterson and Davie)
Chapter 3 slides for Computer Networking: A Top Down Approach Featuring the Internet (Kurose and Ross)
Announcements
• Homework 1 deadline is extended
– Due: Feb. 16 at 5 pm
– Submit through WebCT
• Homework 2
– Assigned: Feb. 14
– Due: Feb. 21
Transmission Control Protocol
• Implementation of sliding window protocol
TCP uses sliding windows at sender and receiver (buffers)
[Figure: TCP segment (packet) structure]
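To make the sliding-window idea concrete, here is a minimal Python sketch of a sender that never has more than W unacknowledged segments outstanding. It illustrates only the windowing concept, not real TCP (no retransmission timers, byte-stream sequence numbers, or receiver-side buffering); the function and variable names are made up for the example.

```python
# Minimal sliding-window sender sketch (illustration only, not real TCP).
# Segments 0..N-1 are "sent" over a loss-free channel; at most `window_size`
# segments may be unacknowledged at any time.
from collections import deque

def sliding_window_send(num_segments: int, window_size: int) -> None:
    next_to_send = 0      # next segment number to transmit
    unacked = deque()     # segments sent but not yet acknowledged (the window)

    while next_to_send < num_segments or unacked:
        # Fill the window: keep sending while fewer than W segments are outstanding.
        while next_to_send < num_segments and len(unacked) < window_size:
            print(f"send segment {next_to_send}")
            unacked.append(next_to_send)
            next_to_send += 1
        # Simulate a cumulative ACK for the oldest outstanding segment,
        # which slides the window forward by one.
        acked = unacked.popleft()
        print(f"ACK for segment {acked}; window slides forward")

if __name__ == "__main__":
    sliding_window_send(num_segments=8, window_size=3)
```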
TCP Flavors
• TCP Tahoe
– Slow start
• Switch to AIMD when the threshold is hit
– Handling loss
• Unconditional reduction of window to 1 MSS
• TCP Reno
– Slow start
• Switch to AIMD when the threshold is hit
– Handling loss
• Reduction of window to ½ on triple duplicate ACKs
• “Fast recovery”: skip slow start, go to threshold
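As a rough sketch of how the two flavors differ, the Python fragment below applies the rules above with cwnd and ssthresh measured in MSS. The event names and the simplified model are assumptions for illustration; a real stack works in bytes and handles many more cases.

```python
# Sketch of congestion-window updates for TCP Tahoe vs. TCP Reno
# (cwnd and ssthresh in units of MSS; illustration only).

def on_new_ack(cwnd: float, ssthresh: float) -> float:
    """New ACK: slow start below the threshold, AIMD above it."""
    if cwnd < ssthresh:
        return cwnd + 1.0          # slow start: +1 MSS per ACK, so cwnd doubles each RTT
    return cwnd + 1.0 / cwnd       # congestion avoidance: roughly +1 MSS per RTT

def on_timeout(cwnd: float, ssthresh: float) -> tuple[float, float]:
    """Timeout: both flavors set ssthresh = cwnd/2 and restart from 1 MSS."""
    return 1.0, cwnd / 2.0

def on_triple_dup_ack(flavor: str, cwnd: float, ssthresh: float) -> tuple[float, float]:
    """Triple duplicate ACK: Tahoe restarts from 1 MSS; Reno does fast recovery."""
    ssthresh = cwnd / 2.0
    if flavor == "tahoe":
        return 1.0, ssthresh       # unconditional reduction of the window to 1 MSS
    return ssthresh, ssthresh      # Reno: skip slow start, continue from the threshold

if __name__ == "__main__":
    cwnd, ssthresh = 8.0, 16.0
    print("Reno  after 3 dup ACKs:", on_triple_dup_ack("reno", cwnd, ssthresh))
    print("Tahoe after 3 dup ACKs:", on_triple_dup_ack("tahoe", cwnd, ssthresh))
```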
TCP Performance
• What is the average throughput for a connection?
– Let W be the window size at which loss occurs
– Maximum rate is W/RTT
– “Minimum” rate (Reno) is W/(2*RTT)
– Average rate is 0.75*W/RTT
• More detailed formula
– Average throughput is 1.22*MSS/(RTT*√L), where L is the loss rate
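A quick numeric check of the two expressions in Python; the RTT, MSS, window, and loss-rate values are assumed example numbers, not values from the slides.

```python
# Average TCP throughput from the two formulas above (assumed example values).
from math import sqrt

RTT = 0.1             # round-trip time in seconds (assumed)
MSS = 1500 * 8        # segment size in bits (assumed 1500-byte segments)
L   = 1e-4            # loss rate (assumed)
W   = 163 * MSS       # window size in bits at which loss occurs
                      # (assumed; roughly the window implied by the loss rate above)

avg_from_window = 0.75 * W / RTT                 # 0.75 * W / RTT
avg_from_loss   = 1.22 * MSS / (RTT * sqrt(L))   # 1.22 * MSS / (RTT * sqrt(L))

print(f"from window size: {avg_from_window / 1e6:.2f} Mbps")
print(f"from loss rate  : {avg_from_loss / 1e6:.2f} Mbps")
```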
How Did You Get That?
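One standard way to get the 1.22 constant (the back-of-the-envelope argument in Kurose and Ross, with W counted in segments): in steady state a Reno window oscillates between W/2 and W, so roughly (3/8)·W² segments are delivered per loss cycle; treating the loss rate L as one loss per cycle and substituting into the 0.75·W·MSS/RTT average gives:

```latex
% Steady-state Reno "saw-tooth" sketch: one loss per cycle of (3/8) W^2 segments.
\begin{align*}
  L \approx \frac{1}{(3/8)\,W^{2}}
    \quad &\Longrightarrow \quad W \approx \sqrt{\frac{8}{3L}}, \\
  \text{avg.\ throughput} \approx \frac{0.75\,W \cdot \mathrm{MSS}}{\mathrm{RTT}}
    &= \frac{0.75\,\mathrm{MSS}}{\mathrm{RTT}} \sqrt{\frac{8}{3L}}
    \approx \frac{1.22\,\mathrm{MSS}}{\mathrm{RTT}\,\sqrt{L}} .
\end{align*}
```

since 0.75·√(8/3) ≈ 1.22.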
TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K
[Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
Why is TCP Fair?
Two competing sessions:
• Additive increase gives slope of 1 as throughput increases
• Multiplicative decrease reduces throughput proportionally
[Figure: phase plot of Connection 1 throughput vs. Connection 2 throughput, both axes 0 to R; repeated rounds of congestion avoidance (additive increase) and loss (window decrease by factor of 2) move the two flows toward the equal bandwidth share line]
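A toy Python simulation of this picture (the capacity, starting rates, and per-round step are arbitrary assumptions) shows why the geometry pushes two AIMD flows toward an equal share: additive increase moves both up together, while each halving shrinks the gap between them.

```python
# Toy AIMD simulation: two flows sharing a bottleneck of capacity R.
# Each round both flows add 1 unit; when their sum exceeds R, both halve.
R = 100.0                      # bottleneck capacity (arbitrary units)
x1, x2 = 10.0, 70.0            # deliberately unequal starting throughputs

for round_num in range(1, 51):
    x1 += 1.0                  # additive increase (slope-1 motion in the phase plot)
    x2 += 1.0
    if x1 + x2 > R:            # overload => both flows see loss
        x1 /= 2.0              # multiplicative decrease
        x2 /= 2.0
    if round_num % 10 == 0:
        print(f"round {round_num:2d}: x1 = {x1:5.1f}, x2 = {x2:5.1f}")
```

The gap x2 - x1 stays constant during additive increase and is halved at every loss event, so the printed rates approach an equal R/2 share.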
More on Fairness
Fairness and UDP
• Multimedia applications often use UDP
– can pump audio/video at constant rate
– rate will not be throttled by congestion control
– packet loss can be tolerated
• Research area: making UDP play nice with TCP
Fairness and TCP
• Applications may use parallel connections
– Web browsers do this
• Example:
– link of rate R currently supporting 9 connections
– new app asks for 1 TCP connection: Rate?
– new app asks for 11 TCP connections: Rate?
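For the example above, under the idealized assumption that the bottleneck rate R is split equally among all competing connections, the usual back-of-the-envelope answers are R/10 for the app that opens one connection and about R/2 in total for the app that opens 11. A tiny Python sketch of that arithmetic (a simplification, not a model of real TCP dynamics):

```python
# Idealized equal-per-connection split of a link of rate R (sketch only).
R = 1.0                                    # normalize the link rate to 1
existing = 9                               # connections already sharing the link

rate_one      = R / (existing + 1)         # new app opens 1 connection -> R/10
rate_parallel = 11 * R / (existing + 11)   # new app opens 11 -> 11R/20, about R/2

print(f"1 connection  : {rate_one:.2f} R")        # 0.10 R
print(f"11 connections: {rate_parallel:.2f} R")   # 0.55 R
```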
Delay Modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Ignoring congestion, delay is influenced by:
• TCP connection establishment
• data transmission delay
• slow start
Notation, assumptions:
• Assume one link between client and server of rate R
• S: MSS (bits)
• O: object size (bits)
• no retransmissions (no loss, no corruption)
Window size:
• First assume: fixed congestion window, W segments
• Then dynamic window, modeling slow start
Fixed Congestion Window (1)
First case:
WS/R > RTT + S/R: ACK for first segment in window returns before window’s worth of data sent

delay = 2RTT + O/R
Fixed Congestion Window (2)
Second case:
• WS/R < RTT + S/R: wait for ACK after sending window’s worth of data

delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]
where K = O/(WS) is the number of windows that cover the object
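A Python sketch of the fixed-window model under the slide's assumptions (one link of rate R, no loss), plus the extra assumption that K is rounded up when O/(WS) is not an integer; the example numbers at the bottom are made up.

```python
# Latency to fetch an O-bit object over a link of rate R with a fixed
# congestion window of W segments of S bits each (sketch of the slide's model).
from math import ceil

def fixed_window_delay(O: float, S: float, R: float, W: int, RTT: float) -> float:
    K = ceil(O / (W * S))                  # number of windows that cover the object
    stall = S / R + RTT - W * S / R        # wait for the first ACK after each window
    if stall <= 0:
        # First case: WS/R >= RTT + S/R, the ACK returns before the window drains,
        # so the sender never stalls.
        return 2 * RTT + O / R
    # Second case: WS/R < RTT + S/R, the sender stalls after each of the first K-1 windows.
    return 2 * RTT + O / R + (K - 1) * stall

if __name__ == "__main__":
    # Made-up numbers: 500 kbit object, 8 kbit segments, 1 Mbps link, W = 4, RTT = 100 ms.
    print(f"{fixed_window_delay(O=500e3, S=8e3, R=1e6, W=4, RTT=0.1):.2f} s")
```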
TCP Delay Modeling: Slow Start (1)
Suppose the server uses slow start
Will show that the delay for one object is:
Latency = 2RTT + O/R + P*[RTT + S/R] - (2^P - 1)*S/R
where P is the number of times TCP idles at the server:
P = min{Q, K-1}
- where Q is the number of times the server would idle if the object were of infinite size,
- and K is the number of windows that cover the object.
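A Python sketch of this slow-start model that sums the per-window idle times directly and cross-checks the closed form above. The S, R, and RTT values in the example are assumptions chosen so that the example on the next slide (O/S = 15 segments, K = 4, Q = 2, P = 2) falls out.

```python
# Latency to fetch an O-bit object with slow start over a link of rate R
# (sketch of the slide's model; S is the segment size in bits, no loss).
from math import ceil, log2

def slow_start_latency(O: float, S: float, R: float, RTT: float) -> float:
    K = ceil(log2(O / S + 1))              # number of windows that cover the object
    # Idle time after window k is S/R + RTT - 2^(k-1)*S/R, floored at zero.
    idles = [max(0.0, S / R + RTT - (2 ** (k - 1)) * S / R) for k in range(1, K)]
    latency = 2 * RTT + O / R + sum(idles)

    # Cross-check against the closed form: P = number of windows with a positive idle.
    P = sum(1 for t in idles if t > 0)
    closed = 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R
    assert abs(latency - closed) < 1e-9
    return latency

if __name__ == "__main__":
    # Assumed S = 8 kbit, R = 160 kbps, RTT = 0.1 s; a 15-segment object then
    # gives K = 4 and P = 2 (as in the next slide's example), about 1.10 s.
    print(f"{slow_start_latency(O=15 * 8e3, S=8e3, R=160e3, RTT=0.1):.2f} s")
```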
TCP Delay Modeling: Slow Start (2)
Delay components:
• 2 RTT for connection establishment and request
• O/R to transmit object
• time server idles due to slow start
Server idles P = min{K-1, Q} times
Example:
• O/S = 15 segments
• K = 4 windows
• Q = 2
• P = min{K-1, Q} = 2, so the server idles P = 2 times
[Figure: client/server timing diagram — initiate TCP connection, request object, then windows of S/R, 2S/R, 4S/R, 8S/R, complete transmission, object delivered]
TCP Delay Modeling (3)
S/R + RTT = time from when server starts to send segment until server receives acknowledgement
2^(k-1) * S/R = time to transmit the k-th window
[S/R + RTT - 2^(k-1) * S/R]^+ = idle time after the k-th window

delay = O/R + 2RTT + Σ_{p=1..P} idleTime_p
      = O/R + 2RTT + Σ_{k=1..P} [S/R + RTT - 2^(k-1) * S/R]
      = O/R + 2RTT + P*[RTT + S/R] - (2^P - 1)*S/R
[Figure: same client/server timing diagram as the previous slide, with windows of S/R, 2S/R, 4S/R, 8S/R]
TCP Delay Modeling (4)
Recall K = number of windows that cover object
How do we calculate K ?
K = min{k : 2^0*S + 2^1*S + … + 2^(k-1)*S ≥ O}
  = min{k : 2^0 + 2^1 + … + 2^(k-1) ≥ O/S}
  = min{k : 2^k - 1 ≥ O/S}
  = min{k : k ≥ log2(O/S + 1)}
  = ⌈log2(O/S + 1)⌉
Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
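A quick Python check of the closed form, using the earlier example of a 15-segment object (1 + 2 + 4 + 8 = 15 segments, so K = 4 windows):

```python
# K = number of windows that cover an O-bit object made of S-bit segments,
# using the closed form derived above: K = ceil(log2(O/S + 1)).
from math import ceil, log2

def num_windows(O: float, S: float) -> int:
    return ceil(log2(O / S + 1))

# 15-segment object from the earlier example: 1 + 2 + 4 + 8 = 15, so K = 4.
assert num_windows(O=15, S=1) == 4
print(num_windows(O=15, S=1))
```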
HTTP Modeling
• Assume Web page consists of:
– 1 base HTML page (of size O bits)
– M images (each of size O bits)
• Non-persistent HTTP:
– M+1 TCP connections in series
– Response time = (M+1)O/R + (M+1)2RTT + sum of idle times
• Persistent HTTP:
– 2 RTT to request and receive base HTML file
– 1 RTT to request and receive M images
– Response time = (M+1)O/R + 3RTT + sum of idle times
• Non-persistent HTTP with X parallel connections
– Suppose M/X is an integer
– 1 TCP connection for base file
– M/X sets of parallel connections for images
– Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
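A Python sketch of the three response-time expressions with the "sum of idle times" terms dropped. The example values (O = 5 Kbytes taken as 40,000 bits, RTT = 100 ms, M = 10, X = 5) match the parameters of the chart on the next slide.

```python
# HTTP response-time expressions from the slide, ignoring the slow-start
# idle-time terms (sketch only). O in bits, R in bits/sec, RTT in seconds.

def non_persistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_non_persistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

if __name__ == "__main__":
    O, RTT, M, X = 40e3, 0.1, 10, 5            # 5 Kbyte objects, RTT = 100 ms
    for R in (28e3, 100e3, 1e6, 10e6):         # 28 Kbps .. 10 Mbps links
        print(f"R = {R/1e3:7.0f} kbps: "
              f"non-persistent {non_persistent(O, R, RTT, M):6.2f} s, "
              f"persistent {persistent(O, R, RTT, M):6.2f} s, "
              f"parallel {parallel_non_persistent(O, R, RTT, M, X):6.2f} s")
```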
HTTP Response time (in seconds)
RTT = 100 msec, O = 5 Kbytes, M=10 and X=5
[Chart: HTTP response time vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP]
For low bandwidth, connection & response time dominated by transmission time.
Persistent connections only give minor improvement over parallel connections.
HTTP Response time (in seconds)
RTT =1 sec, O = 5 Kbytes, M=10 and X=5
[Chart: HTTP response time vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP]
For larger RTT, response time is dominated by TCP establishment and slow start delays. Persistent connections now give an important improvement, particularly in high delay-bandwidth networks.