Emulation of Ad Hoc Networks
David A. Maltz, Qifa Ke, David B. Johnson
CMU Monarch Project
Computer Science Department
Carnegie Mellon University
http://www.monarch.cs.cmu.edu/
Early Emulation Successes
Stress testing of networking code via macfilter
Used to debug our 8-node ad hoc network testbed
Descriptions of interesting/problematic movement scenarios were created
Changing network topology extracted into trace file
Physical machines running the actual implementation synchronously read the trace file
Machines prevented from receiving packets from non-neighbors (this filtering is sketched below)
Described at a previous IETF meeting and in CMU CS Tech Report 99-116
Found to be critical for achieving code stability, but cannot be used for performance evaluation:
– Propagation model grossly simplified
– MAC-layer effects unaccounted for
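As an illustration only (not the actual macfilter code), the following minimal sketch shows how a node could use a time-indexed neighbor trace to decide whether to accept a frame; the trace format, class, and function names here are assumptions.

```python
# Illustrative sketch only (not the actual macfilter implementation):
# each physical machine reads the same topology trace and drops frames
# from nodes that are not currently neighbors.  The trace format here
# is an assumption.
import bisect

class NeighborTrace:
    """Time-indexed neighbor sets extracted from a movement scenario."""

    def __init__(self, entries):
        # entries: list of (time_in_seconds, set_of_neighbor_macs),
        # sorted by time; each entry holds until the next one.
        self.times = [t for t, _ in entries]
        self.neighbor_sets = [n for _, n in entries]

    def neighbors_at(self, now):
        i = bisect.bisect_right(self.times, now) - 1
        return self.neighbor_sets[i] if i >= 0 else set()

def accept_frame(trace, src_mac, now):
    """Accept a frame only if its sender is a neighbor at time `now`."""
    return src_mac in trace.neighbors_at(now)

# Example: node B is a neighbor only during the first 30 seconds.
trace = NeighborTrace([(0.0, {"00:0b"}), (30.0, set())])
assert accept_frame(trace, "00:0b", 10.0)
assert not accept_frame(trace, "00:0b", 45.0)
```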
Goal of Current Emulation Work
Evaluate real systems under ad hoc network conditions:
Performance study of network protocols or applications
Without implementing them in ns-2
Without deploying and operating physical machines in the field
Using real implementation and real user pattern
Only the network environment is simulated
Evaluating real systems with just simulation is not practical:
Difficult to model real applications inside the simulator
Real systems are significantly performance tuned
Example: Evaluate the performance of the CMU Coda file system in an ad hoc network:
– Coda is 100+ KLOC of implemented and tuned code
– Coda researchers have the expertise to evaluate their system but can't create the network environments
Method
Leverage emulation work by Kevin Fall and the VINT project
ns-server is a single central machine running ns-2
Real application code runs on real machines, directing real packets to a "default router" (the ns-server); a conceptual sketch of this packet path follows the figure below
User creates scenarios to control the environment inside ns-2:
– Movement pattern of all mobile nodes inside ns-2
– Background traffic
[Figure: Logical view of the emulation setup — mobile nodes MN1, MN2, MN3, and MN4 exchanging packets through the central ns-server]
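As a rough conceptual sketch only (this is not the ns-2 emulation API; the real system uses the emulation facilities from Kevin Fall and the VINT project), the following illustrates the packet path implied by the figure: a live packet reaches the ns-server, is subjected to the simulated environment, and is forwarded to the destination machine only if the simulated network delivers it. All class and function names are hypothetical.

```python
# Conceptual sketch only: how the ns-server could mediate between the
# real network and the simulated environment.  The real implementation
# uses the ns-2 emulation facilities (VINT project), not this code.
import random

class Scenario:
    """Stand-in for the environment inside ns-2: node movement,
    propagation, MAC behavior, and background traffic (all assumed)."""

    def simulate_delivery(self, src, dst, size_bytes, now):
        # Placeholder outcome model: in the real system, the result comes
        # from running the packet through the simulated ad hoc network.
        delivered = random.random() < 0.9
        delay_s = 0.005 + size_bytes * 8 / 2e6   # nominal 2 Mb/s link
        return delivered, delay_s

class NsServer:
    """Central machine acting as the 'default router' for real nodes."""

    def __init__(self, scenario):
        self.scenario = scenario

    def handle_live_packet(self, pkt, src, dst, now):
        delivered, delay = self.scenario.simulate_delivery(
            src, dst, len(pkt), now)
        if delivered:
            # Re-emit toward the destination machine after the simulated
            # latency; here we only report what would happen.
            return ("deliver", now + delay)
        return ("drop", None)   # lost in the simulated network

server = NsServer(Scenario())
print(server.handle_live_packet(b"x" * 1460, "MN1", "MN2", now=0.0))
```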
Informal Validation of Emulation Method
Example: Compare FTP behavior between simulation and emulation:
ns-server: 450 MHz Pentium II CPU, running FreeBSD 3.1
10 simulated nodes
Compare 30 MB FTP transfer between simulation and emulation
2 additional simulated FTP connections for background traffic
The TCP time-sequence plots have basically the same shape:
[Figure: bytes transferred vs. time (seconds) for the 30 MB FTP transfer, simulation vs. emulation]
But how well does this scale?
A More Challenging Example
16 simulated nodes moving in a complicated pattern, with CBR and FTP background traffic
30 MB FTP transfer simulated in ns-2
30 MB FTP transfer between two physical nodes with emulation
[Figure: bytes transferred vs. time (seconds) for the 30 MB FTP transfer, simulation vs. emulation]
Evaluating Emulation Scalability
Limitations on emulation scalability:
System effects: scheduling delays, potential buffer overruns
Additional latency of transferring packets to/from ns-server
Only so much one CPU can do
Experimental scenario:
– Conduct a real FTP transfer between 2 physical nodes
– Vary the number of other simulated nodes in ns-2
– Vary the amount of background traffic
[Figure: Scalability test scenario — a 30 MB FTP transfer from node SRC to node SINK across the simulated ad hoc network, with background CBR traffic]
Scalability Results Summary
Ideally: emulation throughput = simulation throughput
Pure simulation achieves 1.49 Mb/s on a 2 Mb/s link
Emulation achieves throughput that depends on the overall simulator workload:
# of CBR packets/second tested: 0, 10, 50, 100, 300
FTP Throughput Achieved by # of nodes:
  4 nodes:  1.44 Mb/s
  8 nodes:  1.30 Mb/s
  16 nodes: 960 Kb/s
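For context, a back-of-the-envelope check (assuming "30 MB" means 30 × 2^20 bytes): at the pure-simulation rate of 1.49 Mb/s, the 30 MB transfer takes roughly (30 × 2^20 × 8) bits / 1.49 Mb/s ≈ 169 seconds, while at 960 Kb/s it takes roughly 262 seconds.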
Does the Simulator Keep up with Real Time?
For event scheduled at t1 but processed at t2: time-lag = t2 – t1
Record the time-lag for each delayed event (a minimal sketch of this measurement follows below)
Histograms show almost all time-lags are less than 4 ms
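A minimal sketch of this measurement, assuming an event log of (scheduled time, processed time) pairs; the bin edges and function name are illustrative:

```python
# Minimal sketch (assumed log format) of the time-lag measurement:
# for an event scheduled at t1 but actually processed at t2,
# time-lag = t2 - t1.  Bin edges here are illustrative.
from collections import Counter

def time_lag_histogram(events, bin_width_ms=50.0, first_bin_ms=4.0):
    """events: iterable of (t1_scheduled, t2_processed) in seconds.
    Returns a Counter mapping bin lower edge (ms) -> number of events."""
    hist = Counter()
    for t1, t2 in events:
        lag_ms = (t2 - t1) * 1000.0
        if lag_ms <= first_bin_ms:
            hist[0.0] += 1          # "kept up with real time" bin
        else:
            bin_index = int((lag_ms - first_bin_ms) // bin_width_ms)
            hist[first_bin_ms + bin_index * bin_width_ms] += 1
    return hist

# Example: two on-time events and one event delayed by 60 ms.
print(time_lag_histogram([(0.0, 0.001), (1.0, 1.002), (2.0, 2.060)]))
```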
[Figure: Histogram of time-lag events (log scale) for 16 nodes with 100 CBR packets/sec; lags binned from 4 to 204 ms; total # of events: 1.64e+07; max lag: 355 ms]
All of the Histograms
[Figure: Grid of time-lag histograms (# of time-lag events, log scale, vs. lag from 4 to 204 ms) for 4, 8, and 16 simulated nodes at 0, 10, 50, 100, and 300 CBR packets/sec]
Total # of events in each run:
                      0 pkt/s    10 pkt/s   50 pkt/s   100 pkt/s   300 pkt/s
  4 simulated nodes   1.03e+06   1.30e+06   2.38e+06   3.73e+06    9.13e+06
  8 simulated nodes   1.65e+06   2.28e+06   4.80e+06   7.95e+06    2.06e+07
  16 simulated nodes  2.89e+06   4.24e+06   9.65e+06   1.64e+07    4.35e+07
Conclusion
Emulation works for networks of sufficient size and complexity to enable study of interesting application scenarios
Using direct emulation based on ns leverages future improvements to propagation models
The bottleneck is the real-time requirement in ns-server
Currently being used to evaluate the Coda distributed file system
Open questions:
How far can this technique be generalized?
How to ensure performance measurements are meaningful?
– Open loop: evaluate the emulation system on many scenarios using a reference application with good simulation models
– Closed loop: after each emulation run, examine the emulation trace to determine whether the emulation system introduced artifacts
– Validate against real system in ad hoc network testbed