
461 Midterm Spr17 Solutions
1.i:
Since these are all HTTP connections to the same web server, only 1 port # (80) is used on the server side.
Since TCP sockets are identified by 4-tuples (source IP, source port, destination IP, destination port), and
each connection on the client side uses a distinct port #, 1000 sockets are used.
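As an illustration, a minimal Python sketch of the count (the IP addresses and the ephemeral port range below are made up):

    # Assumed setup: 1000 HTTP connections from one client host to the same
    # server; the addresses and ephemeral ports are placeholders.
    connections = [
        ("10.0.0.1", 5000 + i, "198.51.100.7", 80)  # (src IP, src port, dst IP, dst port)
        for i in range(1000)
    ]

    server_ports = {dst_port for (_, _, _, dst_port) in connections}
    sockets = set(connections)        # a TCP socket is identified by its full 4-tuple

    print(len(server_ports))          # 1    -- only port 80 on the server side
    print(len(sockets))               # 1000 -- one socket per distinct 4-tuple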
1.ii:
(a) A single bit flip changes the 1's complement sum at that bit position, so the checksum always changes and
the error is guaranteed to be detected.
(b) While many 2-bit errors are detected, some go undetected. Notably, when the two flipped bits occur at the
same bit position, one in each word and in opposite directions (one 1→0, the other 0→1) - e.g. in the words
0 1 0 1 1 1 0 1 and 0 1 1 0 0 1 0 0 - the error goes undetected: the two flips cancel at that bit position,
leaving the sum (and hence the checksum) unchanged.
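To make this concrete, a small sketch of the 1's complement sum over the two 8-bit example words (the real Internet checksum uses 16-bit words, but the argument is identical):

    def ones_complement_sum(words, bits=8):
        # 1's complement addition: sum with end-around carry
        mask = (1 << bits) - 1
        total = 0
        for w in words:
            total += w
            total = (total & mask) + (total >> bits)
        return total

    w1, w2 = 0b01011101, 0b01100100            # the two example words above
    original = ones_complement_sum([w1, w2])

    # Single-bit error (flip one bit of w1): the sum changes, so it is detected.
    print(ones_complement_sum([w1 ^ 0b00001000, w2]) != original)                # True

    # Two-bit error at the same bit position, in opposite directions
    # (that bit is 1 in w1 and 0 in w2): the flips cancel, sum unchanged.
    print(ones_complement_sum([w1 ^ 0b00001000, w2 ^ 0b00001000]) == original)   # True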
1.iii:
(a) With circuit switching, a maximum of (1 Gbps)/(100 kbps) = 10,000 users can be accommodated simultaneously.
(b) This is derived using the Binomial distribution; the probability that up to K users are active is $\sum_{j=0}^{K} \binom{N}{j} p^j (1-p)^{N-j}$.
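A small sketch of this sum (the values of N, K, and p below are placeholders, not the exam's numbers):

    from math import comb

    def prob_at_most_k_active(N, K, p):
        # P(at most K of N users active), each active independently with prob. p
        return sum(comb(N, j) * p**j * (1 - p)**(N - j) for j in range(K + 1))

    print(prob_at_most_k_active(N=35, K=10, p=0.1))   # example with made-up numbers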
1.iv:
(a) The ACK delay = round-trip time (RTT) + transmission duration of the ACK + any processing delay at
the Rx (for checksumming, generating the ACK, job queuing delay at the processor inside the Rx, etc.). Of
these delay components, the first two are constant; only the last is variable and contributes to the variation
in the ACK delay. In general, for fast processors this final term is typically a small fraction of the overall
delay even when it varies, and hence a value slightly greater than the RTT often suffices for the timer.
(b) Multi-hop paths introduce much more variability, since the end-to-end delay now includes the queueing
delay at intermediate nodes (routers), which can vary significantly with their congestion level; the sum of
several such variable delays over the routers along the path greatly increases the variability, which in turn
increases the probability of packet re-ordering (see Problem 2). Further (as we will see later), the routes
themselves change with time. Such increased delay variability makes it difficult to estimate/set the timeout
value: a value smaller than the true ACK delay leads to premature timeouts and unnecessary duplicate
packets, while an overly conservative (large) value reduces protocol efficiency, since the sender waits
excessively before retransmitting the data. Hence, it is important to track and estimate the ACK delay
accurately, as deviations in either direction negatively impact protocol performance.
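One standard way to track a varying ACK delay is an exponentially weighted moving average of the samples plus a safety margin based on their deviation (this is the approach TCP itself uses); a minimal illustrative sketch with made-up delay samples:

    class RttEstimator:
        def __init__(self, alpha=0.125, beta=0.25):   # TCP's recommended constants
            self.alpha, self.beta = alpha, beta
            self.srtt = None       # smoothed ACK-delay estimate
            self.rttvar = None     # smoothed deviation of the samples

        def update(self, sample):
            if self.srtt is None:
                self.srtt, self.rttvar = sample, sample / 2
            else:
                self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(sample - self.srtt)
                self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample
            return self.srtt + 4 * self.rttvar        # timeout = estimate + safety margin

    est = RttEstimator()
    for sample in [0.10, 0.11, 0.25, 0.12]:           # made-up ACK-delay samples (seconds)
        print(round(est.update(sample), 3))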
2.i:
(a) Since RDT 3.0 is a ‘Stop-and-Wait’ protocol whereby a packet must be sent and ACKed before the next
packet is sent, there is only (at most) 1 unacknowledged packet at a time under normal operation (i.e., with
no premature timeout).
(b) With a premature timeout, there can be two unacknowledged packets at an instant. Usually the two
packets are duplicates (identical sequence #), which is resolved by the Rx dropping the duplicate and passing
only 1 copy of the packet up. However, as shown in Figure 1, if the re-transmitted PKT0 is delayed, a
re-ordering occurs: the ACK1 for a later packet arrives at the sender before the ACK0 generated for the
earlier (duplicate) PKT0.
(c) The above leads to an instance of protocol failure: the duplicate PKT0 is (incorrectly) accepted by the
Rx, since it is awaiting a PKT0 after the successful receipt of PKT1. Further, the next PKT0 (which is
actually a new packet) is dropped, since the Rx deems it to be a duplicate of the immediately prior PKT0.
Also, the delayed duplicate ACK for the first PKT0 arrives after the ACK for PKT1, which causes the
sender to think the second PKT0 was received correctly, when in fact the second PKT0 was dropped by the
Rx as explained above.
Figure 1: Premature Timeout in RDT 3.0
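The failure can also be traced with a minimal sketch of the alternating-bit receiver logic (the payloads 'A', 'B', 'C' are made up), fed the packet arrival order from Figure 1:

    def make_receiver():
        state = {"expected": 0, "delivered": []}
        def on_packet(seq, data):
            if seq == state["expected"]:
                state["delivered"].append(data)   # accept and deliver to the app
                state["expected"] ^= 1            # flip the expected sequence number
            # in both cases the receiver ACKs the sequence number it just saw
            return ("ACK", seq), list(state["delivered"])
        return on_packet

    rx = make_receiver()
    print(rx(0, "A"))   # original PKT0            -> delivered ['A']
    print(rx(1, "B"))   # PKT1                     -> delivered ['A', 'B']
    print(rx(0, "A"))   # delayed duplicate PKT0   -> wrongly delivered again
    print(rx(0, "C"))   # genuinely new PKT0       -> dropped as a "duplicate"
    # The app ends up with ['A', 'B', 'A']: a duplicate accepted and new data lost.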
3.i:
Because there are no propagation delays, the Client starts receiving the first packet immediately after the
Internet core receives the complete first packet; this time is $T^C_{bgn}$. Since the Client download rate is
slower than the Server upload rate, it becomes the bottleneck, so the Client will have to wait an additional
60 seconds (the duration for transmitting the full 60 Gbit file over a 1 Gbps link) before the full file is
received; this time is $T^C_{stp}$. Because the Internet core has an infinite buffer, the Server can transmit at
its full upload rate and finish in $60/U_S$ seconds; this is $T^S_{stp}$. It is thus clear that the following
ordering applies:

$$T^C_{bgn} = \frac{60}{N U_S} \;\le\; T^S_{stp} = \frac{60}{U_S} \;<\; T^C_{stp} = \frac{60}{N U_S} + 60$$
Figure 2: File Transfer Timeline
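A quick numeric check of this ordering (taking U_S = 1.2 Gbps as implied by part iii, and an arbitrary packet count N = 1000):

    def transfer_times(U_S, N, file_gbit=60, client_rate_gbps=1.0):
        t_c_begin = file_gbit / (N * U_S)                      # core holds the first packet
        t_s_stop = file_gbit / U_S                             # server finishes uploading
        t_c_stop = t_c_begin + file_gbit / client_rate_gbps    # client finishes downloading
        return t_c_begin, t_s_stop, t_c_stop

    print(transfer_times(U_S=1.2, N=1000))   # ~ (0.05, 50.0, 60.05) s -- matches the ordering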
3.ii:
$\frac{60}{N U_S} + 60 \to 60$ as $N \to \infty$. When $U_S \gg 1$, the same limit is approached even if $N$ is 1 (or small). So $N$
won't have a significant influence on $T^C_{stp}$ in this case.
3.iii:
Since N is large, Client 1 will receive the full file by $t = 60$ seconds (from part i). The server will start
transmitting to Client 2 after it finishes Client 1's transmission at $t = \frac{60}{1.2} = 50$ seconds. Client 2 will be
done 60 seconds later, at $t = 110$ seconds.
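The same timeline as a two-line check (serial transmissions, server upload rate 1.2 Gbps, N large so per-packet delays are negligible):

    server_done_with_client1 = 60 / 1.2              # 50 s: server becomes free to serve Client 2
    client2_done = server_done_with_client1 + 60     # Client 2's 1 Gbps link is the bottleneck
    print(server_done_with_client1, client2_done)    # 50.0 110.0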
4.i:
Parenthesized names denote the IP address of that host.

#     SRC                    DEST                             MSG                        DELAY
TCP   (m1.a.com)             (httpcache.a.com)                OPEN LINK                  0ms
TCP   (httpcache.a.com)      (m1.a.com)                       ACK OPEN LINK              0ms
1     (m1.a.com)             (httpcache.a.com)                GET b.com/bigfile.html     0ms
2     (httpcache.a.com)      (dns.a.com)                      where is b.com?            0ms
3     (dns.a.com)            (root DNS)                       where is b.com?            100ms
4     (root DNS)             (dns.a.com)                      try (.com DNS)             100ms
5     (dns.a.com)            (.com DNS)                       where is b.com?            100ms
6     (.com DNS)             (dns.a.com)                      try (auth DNS for b.com)   100ms
7     (dns.a.com)            (auth DNS for b.com)             where is b.com?            20ms
8     (auth DNS for b.com)   (dns.a.com)                      b.com is at (b.com)        20ms
9     (dns.a.com)            (httpcache.a.com)                b.com is at (b.com)        0ms
TCP   (httpcache.a.com)      (b.com)                          OPEN LINK                  20ms
TCP   (b.com)                (httpcache.a.com)                ACK OPEN LINK              20ms
10    (httpcache.a.com)      (b.com)                          GET b.com/bigfile.html     20ms
11    (b.com)                (httpcache.a.com) → (m1.a.com)   OK b.com/bigfile.html      20ms+1s+100s+1s

Adding the delays from the table: 102.52s is the total delay to retrieve the webpage.
Since the a.com LAN has an HTTP cache, m1 sends its HTTP queries to the cache so that a cached version can
be retrieved if there is one. The HTTP cache checks its database and finds it does not have the requested file,
so it needs to retrieve it from the remote server. It gets the IP address of b.com by querying the DNS server
at a.com. Since the DNS cache is initially empty, dns.a.com must iteratively resolve the IP address of b.com,
starting from the root DNS server and working down to the authoritative DNS server for b.com. Once the
HTTP cache has the IP address of b.com, it opens an HTTP connection and requests the file. Note that during
this whole process, the HTTP connection between m1 and the HTTP cache remains open.
When the file is sent over by b.com, the HTTP cache forwards the data to m1 while simultaneously storing
a copy in its database. The delay for step 11 breaks down as follows: 20ms propagation delay + 1s to
transmit from b.com to R2 over the 1 Gbps LAN + 100s to transmit from R2 to R1 over the 10 Mbps link + 1s
to transmit from R1 to httpcache.a.com and m1.a.com over the 1 Gbps LAN.
Note that the TCP connections are closed after the file transfer finishes, so the closing sequences are not
included in the table.
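As a check, summing the DELAY column in code:

    # DELAY column for the TCP/DNS/HTTP steps up to step 10, in milliseconds
    delays_ms = [0, 0, 0, 0, 100, 100, 100, 100, 20, 20, 0, 20, 20, 20]
    # step 11: 20 ms propagation + 1 s (b.com -> R2) + 100 s (R2 -> R1) + 1 s (R1 -> m1)
    step11_s = 0.020 + 1 + 100 + 1
    print(round(sum(delays_ms) / 1000 + step11_s, 2))   # 102.52 seconds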
4.ii:
#     SRC                  DEST                 MSG                             DELAY
TCP   (m2.a.com)           (httpcache.a.com)    OPEN LINK                       0ms
TCP   (httpcache.a.com)    (m2.a.com)           ACK OPEN LINK                   0ms
1     (m2.a.com)           (httpcache.a.com)    GET b.com/bigfile.html          0ms
TCP   (httpcache.a.com)    (b.com)              OPEN LINK                       20ms
TCP   (b.com)              (httpcache.a.com)    ACK OPEN LINK                   20ms
2     (httpcache.a.com)    (b.com)              COND. GET b.com/bigfile.html    20ms
3     (b.com)              (httpcache.a.com)    304 NOT MODIFIED                20ms
4     (httpcache.a.com)    (m2.a.com)           OK b.com/bigfile.html           1s

The total delay is 1.08s.
When m2 requests the webpage, the file is already cached locally. However, the HTTP cache still has to
verify that it is up to date before delivering it to m2.a.com. Note that the cache no longer has to query for
the IP of b.com, since it is stored in the metadata of the cached page.
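And a check of the 4.ii total, assuming (as in part i) the final step delivers the cached 1 Gbit file to m2 over the 1 Gbps LAN:

    delays_ms = [0, 0, 0, 20, 20, 20, 20]   # TCP setup to b.com, GET, cond. GET, 304
    final_delivery_s = 1                    # cached 1 Gbit file over the 1 Gbps LAN
    print(round(sum(delays_ms) / 1000 + final_delivery_s, 2))   # 1.08 seconds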