
QUALITY OF SERVICE FOR VIDEO PACKET OVER PEER TO PEER
NETWORK USING ACTIVE MEASUREMENT
MUHAMMAD AFFAN JONI
A project report submitted in partial fulfillment of the
requirements for the award of the degree of
Master of Engineering (Electrical – Electronics and Telecommunication)
Faculty of Electrical Engineering
Universiti Teknologi Malaysia
MAY 2008
Dedicated to…
My beloved Parents, Papa and Mama,
Who have so much faith in me.
Love you always.
Also to my beloved sister, brother and nephew, Alviani, Hani, Mohamed and Abduh
I could have never done it without you.
To all my friends, who have stood by me through thick and thin.
I treasure you all.
Thanks for showering me with love, support and encouragement.
Life has been wonderfully colored by you
ACKNOWLEDGMENT
In the name of Allah, Most Gracious, and Most Merciful
I would like to thank Allah Almighty for blessing me and giving me the strength to accomplish this thesis. Firstly, I would like to thank the many people who have made my master's project possible. In particular, I wish to express my sincere appreciation to my supervisor, Dr. Sharifah Hafizah, for her encouragement, guidance, criticism and friendship.
I would never have been able to accomplish this without the loving support of my parents, my sisters, my brother, and my nephew.
My sincere appreciation extends to all my best friends: Haikal, Khairiraihanna, Arief Marwanto, Iphey, Evizal, Dimas, Bani, Rival, Farra, Dani, Rizqa, Ooi and others who have provided assistance. Their views and tips are useful indeed. Unfortunately, it is not possible to list all of them in this limited space. I am grateful to have all of you beside me. Thank you very much.
May Allah SWT prolong the lives of these people and reward them in the
best possible way. Amin
ABSTRACT
The purpose of this project is to evaluate the performance of a Peer-to-Peer (P2P) network in the Gnutella system. P2P networks are self-organizing networks that aggregate large numbers of heterogeneous computers called nodes or peers. In P2P systems, peers can communicate directly with each other to share and exchange data; besides this data exchange, the peer nodes also share their communication and storage resources. One of the most widely used P2P systems is Gnutella, which is still firmly established as the third-largest peer-to-peer network. This project observes the problem of real-time streaming of video packets over the Gnutella system from a single sender to a single receiver. It concentrates on the development of an optimum model using the Gnutella system, as well as on evaluating the quality adaptation of the streaming mechanism. The simulation was carried out in NS2 using three scenarios, with bandwidths of 512 kbps, 256 kbps and 128 kbps. The simulation results show that good quality in sending video packets is achieved by using a larger bandwidth. They also show that the delay time increases when the traffic is full. The bandwidth required to send video packets over the Gnutella system depends on the frame rate, quantization and resolution of the video packets.
ABSTRAK
This study examines the performance of a Peer-to-Peer network in the Gnutella system. P2P networks are self-organizing networks that aggregate large numbers of heterogeneous computers known as nodes or peers. In a P2P system, peers can interact directly with one another to exchange data; they can also communicate and share data resources. One of the most frequently used P2P systems is Gnutella, which is well suited to this purpose and still developing as the third-largest P2P network. The real-time problem of video packets in a Gnutella system, from a sender to a receiver, can be observed. This study focuses on the development of an optimum model using the Gnutella system to evaluate the quality adaptation of the network mechanism. The simulation process uses NS2 with three different scenarios, namely bandwidths of 512 kbps, 256 kbps and 128 kbps. The results show that good quality in sending video packets is obtained by using a larger bandwidth. They also show that the delay time is directly proportional to the traffic. Several aspects, namely the frame rate, quantization and resolution of the video packets, influence the bandwidth suitable for sending video packets in the Gnutella system.
TABLE OF CONTENTS

    DECLARATION
    DEDICATION
    ACKNOWLEDGEMENTS
    ABSTRACT
    ABSTRAK
    TABLE OF CONTENTS
    LIST OF TABLES
    LIST OF FIGURES
    LIST OF GLOSSARY
    LIST OF SYMBOLS
    LIST OF APPENDICES

1   INTRODUCTION
    1.1 Background
    1.2 Problem Statement
    1.3 Objective
    1.4 Scope of Project
    1.5 Methodology
    1.6 Organization of Project Report

2   LITERATURE REVIEW
    2.1 Introduction
    2.2 Peer-to-Peer (P2P) System
        2.2.1 Characteristics and Three-Level Model
        2.2.2 Advantages of Peer-to-Peer Networks
    2.3 Quality of Service
        2.3.1 Problems of QoS
        2.3.2 QoS Performance Dimensions
            2.3.2.1 Network Availability
            2.3.2.2 Bandwidth
                2.3.2.2.1 Available Bandwidth
                2.3.2.2.2 Guaranteed Bandwidth
            2.3.2.3 Delay
            2.3.2.4 Jitter
            2.3.2.5 Loss
            2.3.2.6 Emission Properties
            2.3.2.7 Discard Priorities
        2.3.3 QoS Mechanism
    2.4 Techniques of Network Measurement
        2.4.1 Passive Measurement
            2.4.1.1 Principles of Passive Measurement
            2.4.1.2 Implementation
                2.4.1.2.1 Software-Based Measurement
                2.4.1.2.2 Hardware-Based Measurement
        2.4.2 Active Measurement
            2.4.2.1 Simple Network Management Protocol (SNMP)
            2.4.2.2 PING
            2.4.2.3 Traceroute
            2.4.2.4 One-Way Measurements
    2.5 Gnutella
        2.5.1 Gnutella Protocol
        2.5.2 How Gnutella Works
        2.5.3 Design of Gnutella
    2.6 Queuing Theory
        2.6.1 Little's Formula
        2.6.2 First In First Out in Peer to Peer

3   PROJECT DESIGN
    3.1 Introduction
    3.2 Gnutella System Model
    3.3 Gnutella Simulator in NS2
        3.3.1 Architecture of Gnutella Simulator
        3.3.2 Framework Component
        3.3.3 Gnutellasim Component
        3.3.4 Implementation Details
    3.4 Configuration of the Network
    3.5 Video Packet over Gnutella System
    3.6 Active Measurement
    3.7 Methodology
    3.8 Summary

4   SIMULATION RESULT AND PERFORMANCE ANALYSIS
    4.1 Introduction
    4.2 The Queue Simulation Result
    4.3 The Delay Simulation Result
    4.4 The RTT Simulation Result
    4.5 Summary

5   CONCLUSION
    5.1 Conclusion
    5.2 Proposed Future Work

    REFERENCES
    Appendices A - E
LIST OF TABLES

TABLE NO.   TITLE
2.1         Description of Gnutella
2.2         Configuration of the Network
LIST OF FIGURES

FIGURE NO.  TITLE
2.1         Levels of P2P Networks
2.2         Basic passive measurement setup
2.3         Ping Path
2.4         Traceroute Path
2.5         One-Way Measurements
2.6         Open Queueing Network
2.7         Closed Queueing Network
2.8         Illustration of queuing system: Customers arrive at a queue and, after some time, depart from a server
2.9         Illustration of the arrival and departure counting processes for a queuing system
2.10        FIFO Queue
3.1         Basic Topology of the Gnutella
3.2         Architecture of Gnutella Simulator
3.3         Topology of the Gnutella in NS2
3.4         Configuration of the network
3.5         OSI and TCP/IP model
3.6         Flowchart of configure UDP sink in NS2
3.7         Flowchart to compile tracefile
3.8         Methodology of the active Measurement
3.9         Methodology of the Simulation
4.1         Monitor at node 0
4.2         Queue at Node 0
4.3         Frequency Queue state at node 0
4.4         Delay time from node 2 to node 1
4.5         Delay time from node 1 to node 0
4.6         RTT from node 2 to node 1
LIST OF GLOSSARY

ADSL        -   Asymmetric Digital Subscriber Line
CBWFQ       -   Class-Based Weighted Fair Queueing
DiffServ    -   Differentiated Services
DQ          -   Dynamic Querying
DSCP        -   Differentiated Services Code Points
FCFS        -   First Come First Served
FIFO        -   First In First Out
FTP         -   File Transfer Protocol
GnutellaSim -   Gnutella Simulator
HTTP        -   Hypertext Transfer Protocol
ICMP        -   Internet Control Message Protocol
IEEE        -   Institute of Electrical and Electronics Engineers
IntServ     -   Integrated Services
IOS         -   Internetwork Operating System
IP          -   Internet Protocol
ISP         -   Internet Service Provider
IRC         -   Internet Relay Chat
kbps        -   Kilobits per second
LAN         -   Local Area Network
MAC         -   Media Access Control
MIB         -   Management Information Base
ms          -   Millisecond
NLANR       -   National Laboratory for Applied Network Research
NOC         -   Network Operation Centre
NS          -   Network Simulator
OpenNAP     -   Open Source Napster Server
P2P         -   Peer-to-Peer
QoS         -   Quality of Service
QRP         -   Query Routing Protocol
RFC         -   Request for Comments
RSVP        -   Resource Reservation Protocol
RTT         -   Round Trip Time
SAA         -   Service Assurance Agent
SLA         -   Service Level Agreement
SMTP        -   Simple Mail Transfer Protocol
TCP         -   Transmission Control Protocol
TTL         -   Time To Live
UDP         -   User Datagram Protocol
UUCP        -   Unix to Unix Copy
VLAN        -   Virtual Local Area Network
VoIP        -   Voice over Internet Protocol
WRR         -   Weighted Round Robin
LIST OF SYMBOLS

λ       -   Lambda (arrival rate)
A(t)    -   Counting process
t       -   Time
T       -   Total time
TQ      -   Time in queue
TS      -   Time in service
N       -   Total packets
NQ      -   Total packets in queue
NS      -   Total packets in service
D(t)    -   Departure time
Q(0)    -   Queue at node 0
Q(-)    -   Queue that leaves node 0
Q(+)    -   Queue that comes to node 0
Dt      -   Total delay (maximum delay seen by Ping)
Dl      -   Latency delay (minimum delay seen by Ping)
Dq      -   Queuing delay
LIST OF APPENDICES

APPENDIX    TITLE
A           Main Programming Code in NS2
B           ST File
C           Queue Programming Code in NS2
D           Delay Programming Code in NS2
E           Frequency of Queue Programming Code in NS2
CHAPTER 1
INTRODUCTION
1.1 Background
Content sharing between communities has revolutionized the Internet. During the last few years, a new phenomenon has changed the Internet business model, especially for Internet Service Providers (ISPs). Peer-to-Peer (P2P) systems have gained tremendous attention during these years. The P2P phenomenon is facilitating information flow from and back to the end users. Unlike traditional distributed systems based on the pure client/server model, P2P networks are self-organizing networks that aggregate large numbers of heterogeneous computers called nodes or peers. In P2P systems, peers can communicate directly with each other to share and exchange data; besides this data exchange, the peer nodes also share their communication and storage resources. These characteristics make P2P systems a better choice for multimedia content sharing and streaming over IP networks. P2P systems are dynamic in nature: nodes can join and leave the network frequently, might not have a permanent IP address, and observe dynamic changes over the interconnection links. Virtual networks are built on top of these networks at the application level, in which individual peers communicate with each other and share both communication and storage resources, ideally directly, without using a dedicated server.
The main concept of P2P networking is that each peer is a client and a server at the same time. P2P media sharing uses two basic concepts. In the 'open-after-downloading' mode, the media content is played after downloading all the contents of the file from different participants, while the 'play-while-downloading' mode allows playing while downloading the content, which is commonly known as streaming. 'Play-while-downloading' has many advantages over 'open-after-downloading', as it requires less memory and the client is not expected to wait a long time for the download to finish. In this report, the Peer-to-Peer streaming problem is defined as content streaming from multiple senders to a single receiver in the P2P network, i.e. a single receiver peer receives the same content from different peers present in the P2P network. Multiple sender peers are selected because a single sending peer may not be able or willing to share outbound bandwidth equal to the actual playback rate. The dynamic behavior of P2P systems is another reason for selecting multiple sender peers for media sharing, as any sender peer sharing media can leave or crash without any prior notification (Mubashar Mushtaq and Toufik Ahmed, 2006).
This project uses Gnutella, one of the most widely used Peer-to-Peer systems today. Gnutella is a system in which individuals can exchange files over the Internet directly without going through a Web site, in an arrangement that is sometimes described as peer-to-peer. Like Napster and similar Web sites, Gnutella is often used as a way to download music or video files from, or share them with, other Internet users, and has been an object of great concern for the music publishing industry. Unlike Napster, Gnutella is not a Web site, but an arrangement in which you can see the files of a small number of other Gnutella users at a time, and they in turn can see the files of others, in a kind of daisy-chain effect. Gnutella also allows you to download any file type, whereas Napster is limited to MP3 music files.
1.2 Problem Statement
Various approaches have been demonstrated in the past in integrated networks. Peer-to-Peer (P2P) systems have gained tremendous attention during these years. The P2P phenomenon is facilitating information flow from and back to the end users. In P2P systems, peers can communicate directly with each other to share and exchange data. The characteristics of P2P systems make them a better choice for multimedia content sharing and streaming over IP networks. For many of these applications, it is important to observe the problem of real-time streaming of video packets over Peer-to-Peer (P2P) networks from a single sender to a single receiver.
In short, the problems addressed by this design can be broken down into:
• Understanding the capability of a peer-to-peer system
• Understanding the operation theory and modeling of a peer-to-peer system for the Gnutella system
• Understanding the operation of active measurement
• Understanding the operation theory of Quality of Service, especially queue, delay and RTT.
1.3 Objective
The aim of this research is to describe the design and measurement of a Peer-to-Peer system, specifically the Gnutella system, using active measurement. To make things clear, the objectives of this research can be broken down into:
• To investigate video packet transmission over Peer-to-Peer (P2P) networks in the Gnutella system using active measurement
• To investigate the performance of Peer-to-Peer (P2P) networks
1.4 Scope of Project
This research analyzes the performance of Peer-to-Peer networks for video packets in the Gnutella system using active measurement. The system is simulated using NS2 v2.26 on the Fedora Core 2 operating system. The project focuses on Quality of Service in terms of queue, delay and RTT.
In order to send the video packets in the Gnutella system, the packets are sent over the UDP protocol using three scenarios in the network.
1.5 Methodology
The methodology of this project follows the flow chart shown in Figure 1.1:
1. Literature review on the Quality of Service of peer-to-peer performance.
2. Design and analysis of the topology of the Gnutella system.
3. Modeling and running the simulation of the Gnutella system using NS2 v2.26 on the Fedora Core 2 operating system.
4. Performance analysis of the QoS of video packets over the Gnutella network using active measurement.
5. Report writing.
Figure 1.1 Flowchart of the methodology
1.6 Organization of Project Report
This project report consists of five chapters describing all the work done in the project. The organization of the report is generally described as follows.
The first chapter explains the introduction of the project and the problem this project tries to solve, which describes the motivation of this project. This chapter sets the work flow according to the objectives and scope of the project.
Chapter two discusses the theories of the Peer to Peer system, Quality of Service, and active measurement.
Chapter three presents the steps in designing the Gnutella simulator, the software used for design and simulation, the structure of the designed Gnutella, and the measurement techniques.
Results and analysis are presented in chapter four to compare the performance of the Gnutella system.
The last chapter highlights the overall conclusion of the project, with suggestions for future work to improve the QoS of the Peer to Peer network. The project is summarized in this chapter to give the general achievements and the future improvements that can be made by other researchers.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
The main concept of Peer-to-Peer (P2P) networking is that each peer is a client and a server at the same time. P2P media sharing uses two basic concepts. In the 'open-after-downloading' mode, the media content is played after downloading all the contents of the file from different participants, while the 'play-while-downloading' mode allows playing while downloading the content, which is commonly known as streaming. 'Play-while-downloading' has many advantages over 'open-after-downloading', as it requires less memory and the client is not expected to wait a long time for the download to finish first.
2.2 Peer-to-Peer (P2P) System
A P2P computer network exploits diverse connectivity between participants in a network and the cumulative bandwidth of network participants, rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.
A pure P2P network does not have the notion of clients or servers, but only
equal peer nodes that simultaneously function as both "clients" and "servers" to the
other nodes on the network. This model of network arrangement differs from the
client-server model where communication is usually to and from a central server. A
typical example for a non P2P file transfer is an FTP server where the client and
server programs are quite distinct, and the clients initiate the download/uploads and
the servers react to and satisfy these requests.
The earliest peer-to-peer network in widespread use was the Usenet news
server system, in which peers communicated with one another to propagate Usenet
news articles over the entire Usenet network (Minar, Nelson, 2001). Particularly in
the earlier days of Usenet, Unix-to-Unix Copy (UUCP) was used to extend even
beyond the Internet. However, the news server system also acted in a client-server
form when individual users accessed a local news server to read and post articles.
The same consideration applies to SMTP email, in the sense that the core email relaying network of mail transfer agents is a peer-to-peer network, while the periphery of mail user agents and their direct connections is client-server.
Some networks and channels, such as Napster, OpenNAP and IRC server channels, use a client-server structure for some tasks (e.g. searching) and a peer-to-peer structure for others. Networks such as Gnutella or Freenet use a peer-to-peer structure for all purposes, and are sometimes referred to as true peer-to-peer networks, although Gnutella is greatly facilitated by directory servers that inform peers of the network addresses of other peers.
Peer-to-peer architecture embodies one of the key technical concepts of the Internet, described in the first Internet Request for Comments, RFC 1, "Host Software", dated 7 April 1969. More recently, the concept has achieved recognition among the general public in the context of the absence of central indexing servers in architectures used for exchanging multimedia files.
The concept of peer-to-peer is increasingly evolving to an expanded usage as the relational dynamic active in distributed networks, i.e. not just computer to computer, but human to human. Yochai Benkler has called it "commons-based peer production" to denote collaborative projects such as free software (Benkler, Yochai, 2002). Associated with peer production are the concepts of peer governance (referring to the manner in which peer production projects are managed) and peer property (referring to the new type of licenses which recognize individual authorship but not exclusive property rights, such as the GNU General Public License and the Creative Commons License).
2.2.1 Characteristics and Three-Level Model
The shared provision of distributed resources and services, decentralization
and autonomy are characteristic of P2P networks:
1. Sharing of distributed resources and services: In a P2P network each node
can provide both client and server functionality, that is, it can act as both a
provider and consumer of services or resources, such as information, files,
bandwidth, storage and processor cycles. Occasionally, these network nodes
are referred to as servants—derived from the terms client and server.
2. Decentralization: There is no central coordinating authority for the organization of the network (setup aspect) or the use of resources and communication between the peers in the network (sequence aspect). This applies in particular to the fact that no node has central control over another. In this respect, communication between peers takes place directly. Frequently, a distinction is made between pure and hybrid P2P networks. Because all components share equal rights and equivalent functions, pure P2P networks represent the reference type of P2P design. Within these structures there is no entity that has a global view of the network. In hybrid P2P networks, selected functions, such as indexing or authentication, are delegated to a subset of nodes that, as a result, assume the role of a coordinating entity. This type of network architecture combines P2P and client/server principles.
3. Autonomy: Each node in a P2P network can autonomously determine when
and to what extent it makes its resources available to other entities.
On the basis of these characteristics, P2P can be understood as one of the
oldest architectures in the world of telecommunication. In this sense, the Usenet,
with its discussion groups, and the early Internet, or ARPANET, can be classified as
P2P networks. As a result, there are authors who maintain that P2P will lead the
Internet back to its origins—to the days when every computer had equal rights in the
network.
Decreasing costs for the increasing availability of processor cycles,
bandwidth, and storage, accompanied by the growth of the Internet have created new
fields of application for P2P networks. In the recent past, this has resulted in a
dramatic increase in the number of P2P applications and controversial discussions
regarding limits and performance, as well as the economic, social, and legal
implications of such applications. The three-level model presented below, which consists of P2P infrastructures, P2P applications, and P2P communities, resolves the lack of clarity with respect to terminology that currently exists in both theory and practice.
Figure 2.1 Levels of P2P Networks
Level 1 represents P2P infrastructures. P2P infrastructures are positioned
above existing telecommunication networks, which act as a foundation for all levels.
P2P infrastructures provide communication, integration, and translation functions
between IT components. They provide services that assist in locating and
communicating with peers in the network and identifying, using, and exchanging
resources, as well as initiating security processes such as authentication and
authorization.
Level 2 consists of P2P applications that use services of P2P infrastructures.
They are geared to enable communication and collaboration of entities in the absence
of central control.
Level 3 focuses on social interaction phenomena, in particular, the formation
of communities and the dynamics within them.
In contrast to Levels 1 and 2, where the term peer essentially refers to technical entities, in Level 3 the significance of the term peer is interpreted in a non-technical sense (peer as person) (Schoder, Fischbach and Schmitt, 2005).
2.2.2 Advantages of Peer-to-Peer Networks
An important goal in peer-to-peer networks is that all clients provide
resources, including bandwidth, storage space, and computing power. Thus, as nodes
arrive and demand on the system increases, the total capacity of the system also
increases. This is not true of a client-server architecture with a fixed set of servers, in
which adding more clients could mean slower data transfer for all users.
The distributed nature of peer-to-peer networks also increases robustness in case of failures by replicating data over multiple peers and, in pure P2P systems, by enabling peers to find the data without relying on a centralized index server. In the latter case, there is no single point of failure in the system.
When the term peer-to-peer was used to describe the Napster network, it
implied that the peer protocol was important, but, in reality, the great achievement of
Napster was the empowerment of the peers (i.e., the fringes of the network) in
association with a central index, which made it fast and efficient to locate available
content. The peer protocol was just a common way to achieve this.
While the original Napster network was a P2P network, the newest version of Napster has no connection to P2P networking at all. The modern-day version of Napster is a subscription-based service which allows you to download music files legally.
2.3 Quality of Service
Quality of service, abbreviated as QoS, has many definitions. For example,
according to the QoS Forum: “Quality of Service is the ability of a network element
to have some level of assurance that its traffic and service requirements can be
satisfied.” QoS can be considered to be subjective, because users can differ in their
perception of what is good quality and what is not. However, the level of assurance at which traffic and service requirements are satisfied can be quantified. The best-effort paradigm does not provide any assurance on the traffic handled, and therefore best-effort routing is classified as a paradigm that does not offer QoS. Still, best-effort routing seems to function properly, which questions the need for new QoS mechanisms. This raises the question of why QoS is needed in the future. In the
business world, QoS could determine whether you can have a normal voice
conversation, whether a video conference is productive, and whether a multimedia
application actually improves productivity for your staff. At home, it could for
instance determine whether you will have cause to complain about the quality of a
video-on-demand movie. Overall, it is seen that new applications are increasingly
demanding higher quality than the one-size-fits-all best-effort service offered by the
initial Internet design.
There are many other situations in which QoS is needed. Consider, for instance, the distinction between one-way and two-way communication: one-way communication can accept relatively long delays. However, delay hinders two-way, interactive communication if the round-trip time exceeds 300 ms. For example, conducting a voice conversation over a satellite link illustrates the problem with long delays. Combined video and audio is very sensitive to differential delays; for instance, speech that is out of sync with lip movement can be detected quickly. Data communication protocols are very sensitive to errors and loss. An undetected error can have severe consequences if it is part of a downloaded program. Loss of a packet frequently requires retransmission, which decreases throughput and increases response time. On the other hand, many data communication protocols are less sensitive to delay variation. These examples illustrate the importance of assuring that traffic and service requirements such as delay, jitter, loss and throughput can be satisfied (Fernando Antonio, 2004).
2.3.1 Problems of QoS
When the Internet was first deployed many years ago, it lacked the ability to
provide Quality of Service guarantees due to limits in router computing power. It
therefore ran at default QoS level, or "best effort". There were four "Type of
Service" bits and three "Precedence" bits provided in each message, but they were
ignored. These bits were later re-defined as DiffServ Code Points (DSCP) and are
largely used in peered links on the modern Internet.
When looking at packet-switched networks, Quality of service is affected by
various factors, which can be divided into "human" and "technical" factors. Human
factors include: stability of service, availability of service, delays, user information.
Technical factors include: reliability, scalability, effectiveness, maintainability,
Grade of Service, etc.
Many things can happen to packets as they travel from origin to destination,
resulting in the following problems as seen from the point of view of the sender and
receiver:
1. Dropped packets
The routers might fail to deliver (drop) some packets if they arrive when their
buffers are already full. Some, none, or all of the packets might be dropped,
depending on the state of the network, and it is impossible to determine what
will happen in advance. The receiving application may ask for this
information to be retransmitted, possibly causing severe delays in the overall
transmission.
2. Delay
It might take a long time for a packet to reach its destination, because it gets
held up in long queues, or takes a less direct route to avoid congestion. In
some cases, excessive delay can render an application, such as VoIP,
unusable.
3. Jitter
Packets from source will reach the destination with different delays. A
packet's delay varies with its position in the queues of the routers along the
path between source and destination, and this position can vary unpredictably.
This variation in delay is known as jitter and can seriously affect the quality
of streaming audio and/or video.
4. Out-of-order delivery
When a collection of related packets is routed through the Internet, different
packets may take different routes, each resulting in a different delay. The
result is that the packets arrive in a different order than they were sent. This
problem necessitates special additional protocols responsible for rearranging
out-of-order packets to an isochronous state once they reach their destination.
This is especially important for video and VoIP streams, where quality is dramatically affected by both latency and lack of isochronicity.
5. Error
Sometimes packets are misdirected, or combined together, or corrupted,
while en route. The receiver has to detect this and, just as if the packet was
dropped, ask the sender to repeat itself.
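The impairments above can be quantified per flow once send and receive timestamps are available for each packet. The short Python sketch below is illustrative only and is not part of the project's NS2 code; it assumes a hypothetical list of (sequence number, send time, receive time) records, with the receive time set to None for lost packets, and computes the loss ratio, the mean one-way delay and an RFC 3550-style jitter estimate.

```python
# Illustrative sketch: per-flow QoS metrics from hypothetical timestamp records.
# Each record is (seq, t_sent, t_received); t_received is None if the packet was lost.

def flow_metrics(records):
    delivered = [(seq, ts, tr) for seq, ts, tr in records if tr is not None]
    loss_ratio = 1.0 - len(delivered) / len(records)

    # One-way delay of each delivered packet.
    delays = [tr - ts for _, ts, tr in delivered]
    mean_delay = sum(delays) / len(delays) if delays else 0.0

    # RFC 3550 interarrival jitter: smoothed absolute difference of
    # consecutive transit times (J += (|D| - J) / 16).
    jitter = 0.0
    for prev, cur in zip(delays, delays[1:]):
        jitter += (abs(cur - prev) - jitter) / 16.0

    return loss_ratio, mean_delay, jitter


if __name__ == "__main__":
    # Example: four packets sent 20 ms apart; packet 2 is lost in transit.
    sample = [(0, 0.000, 0.030), (1, 0.020, 0.055), (2, 0.040, None), (3, 0.060, 0.092)]
    loss, delay, jit = flow_metrics(sample)
    print(f"loss={loss:.2%}  mean delay={delay*1000:.1f} ms  jitter={jit*1000:.2f} ms")
```

The same style of calculation can be applied to trace files produced by a simulator or a passive monitor, which is how the later chapters of this report evaluate queue, delay and RTT behaviour.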
2.3.2 QoS Performance Dimensions
A number of QoS parameters can be measured and monitored to determine
whether a service level offered or received is being achieved. These parameters
consist of the following:
• Network availability
• Bandwidth
• Delay
• Jitter
• Loss
There are also QoS performance-affecting parameters that cannot be
measured but provide the traffic management mechanisms for the network routers
and switches. These consist of:
• Emission priority
• Discard priority
Each of these QoS parameters affects the application's performance or the end-user's experience (Nortel Networks, 2003).
2.3.2.1 Network availability
Network availability can have a significant effect on QoS. Simply put, if the
network is unavailable, even during brief periods of time, the user or application may
achieve unpredictable or undesirable performance (QoS). Network availability is the
summation of the availability of many items that are used to create a network. These
include networking device redundancy, e.g., redundant interfaces, processor cards or
power supplies in routers and switches, resilient networking protocols, multiple
physical connections, e.g., fiber or copper, backup power sources, etc. Network
operators can increase their network’s availability by implementing varying degrees
of each of these items.
2.3.2.2 Bandwidth
Bandwidth is probably the second most significant parameter that affects
QoS. Bandwidth allocation can be subdivided into two types:
• Available bandwidth
• Guaranteed bandwidth
2.3.2.2.1 Available Bandwidth
Many network operators oversubscribe the bandwidth on their network to
maximize the return on investment of their network infrastructure or leased
bandwidth. Oversubscribing bandwidth means the bandwidth a user subscribed to is
not always available to them. This allows all users to compete for available
bandwidth. They get more or less bandwidth depending upon the amount of traffic
from other users on the network at any given time.
Available bandwidth is a technique commonly used over consumer ADSL
networks, e.g., a customer signs up for a 384-kbps service that provides no QoS
(bandwidth) guarantee in the SLA. The SLA points out that the 384-kbps is "typical"
but does not make any guarantees. Under lightly loaded conditions, the user may
achieve 384-kbps but upon network loading, this bandwidth will not be achieved
consistently. This is most noticeable during certain times of the day when more users
access the network.
2.3.2.2.2 Guaranteed Bandwidth
Network operators offer a service that provides a guaranteed minimum
bandwidth and burst bandwidth in the SLA. Because the bandwidth is guaranteed,
the service is priced higher than the available bandwidth service. The network
operator must ensure that those who subscribe to this guaranteed bandwidth service
get preferential treatment (QoS bandwidth guarantee) over the available bandwidth
subscribers.
In some cases, the network operator separates the subscribers by different
physical or logical networks, e.g., VLANs, Virtual Circuits, etc. In some cases, the
guaranteed bandwidth service traffic may share the same network infrastructure with
the available bandwidth service traffic. This is often the case at locations where
network connections are expensive or the bandwidth is leased from another service
provider. When subscribers share the same network infrastructure, the network
operator must prioritize the guaranteed bandwidth subscribers’ traffic over the
available bandwidth subscribers’ traffic so that in times of network congestion, the
guaranteed bandwidth subscribers’ SLAs are met.
Burst bandwidth can be specified in terms of amount and duration of excess
bandwidth (burst) above the guaranteed minimum. QoS mechanisms may be
activated to discard traffic that is consistently above the guaranteed minimum
bandwidth that the subscriber agreed to in the SLA.
2.3.2.3 Delay
Network delay is the transit time an application experiences from the ingress
point to the egress point of the network. Delay can cause significant QoS issues with
applications such as voice and video, and applications such as SNA and fax
transmission that simply time-out and fail under excessive delay conditions. Some
applications can compensate for small amounts of delay but once a certain amount is
exceeded, the QoS becomes compromised.
For example, some networking equipment can “spoof” an SNA session on a
host by providing local acknowledgements when the network delay would cause the
SNA session to time-out. Similarly, VoIP gateways and phones provide some local
buffering to compensate for network delay.
Finally, delay can be both fixed and variable. Examples of fixed delay are:
• Application-based delay, e.g., voice codec processing time and IP packet creation time by the TCP/IP software stack
• Data transmission (serialization) delay over the physical network media at each network hop
• Propagation delay across the network based on transmission distance
Examples of variable delay are:
• Ingress queuing delay for traffic entering a network node
• Contention with other traffic at each network node
• Egress queuing delay for traffic exiting a network node
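The fixed components above can be estimated directly from link parameters. The Python sketch below is a simple illustration only, and the link values in it are invented rather than taken from the project's scenarios: it computes serialization delay from packet size and link rate, propagation delay from distance, and then adds an assumed queuing delay for the variable part.

```python
# Illustrative delay-budget sketch with invented link parameters.

PROPAGATION_SPEED = 2.0e8  # approximate signal speed in fibre/copper, metres per second

def hop_delay(packet_bytes, link_bps, distance_m, queuing_s=0.0):
    serialization = packet_bytes * 8 / link_bps     # fixed: packet bits / link rate
    propagation = distance_m / PROPAGATION_SPEED    # fixed: distance / signal speed
    return serialization + propagation + queuing_s  # queuing_s models the variable part

if __name__ == "__main__":
    # A 1000-byte video packet crossing a 512 kbps access link (10 km)
    # and a 100 Mbps core link (500 km), with 5 ms of queuing on the access hop.
    total = hop_delay(1000, 512e3, 10e3, queuing_s=0.005) + hop_delay(1000, 100e6, 500e3)
    print(f"end-to-end delay is roughly {total*1000:.2f} ms")
```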
2.3.2.4 Jitter
Jitter is the measure of delay variation between consecutive packets for a
given traffic flow. Jitter has a pronounced effect on real-time, delay-sensitive
applications such as voice and video. These real-time applications expect to receive
packets at a fairly constant rate with fixed delay between consecutive packets. As the
arrival rate varies, the jitter impacts the application’s performance. A minimal
amount of jitter may be acceptable but as jitter increases, the application may
become unusable.
Some applications, such as voice gateways and IP phones, can compensate
for small amounts of jitter. Since a voice application requires the audio to play out at
a constant rate, if the next packet does not arrive within the playback time, the
application will replay the previous voice packet until the next voice packet arrives.
However, if the next packet is delayed too long, it is simply discarded when it
arrives, resulting in a small amount of distorted audio. All networks introduce some
jitter because of variability in delay introduced by each network node as packets are
queued. However, as long as the jitter is bounded, QoS can be maintained.
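The compensation behaviour described above can be expressed as a simple playout rule. The sketch below is only a schematic illustration, and the packet timings and the 20 ms playout interval are assumed values: a packet that has not arrived by its playout time causes the previous packet to be replayed, and a packet that arrives after its slot has passed is effectively discarded.

```python
# Schematic playout-buffer sketch with an assumed 20 ms playout interval.

PLAYOUT_INTERVAL = 0.020  # seconds between consecutive voice packets

def playout(arrivals, first_playout=0.060):
    """arrivals maps sequence number -> arrival time in seconds."""
    last_played = None
    for seq in range(max(arrivals) + 1):
        deadline = first_playout + seq * PLAYOUT_INTERVAL
        arrived_at = arrivals.get(seq)
        if arrived_at is not None and arrived_at <= deadline:
            last_played = seq
            print(f"t={deadline:.3f}s play packet {seq}")
        else:
            # Late or missing: replay the previous packet (concealment);
            # if the late packet arrives afterwards it is simply discarded.
            print(f"t={deadline:.3f}s packet {seq} not ready, replay {last_played}")

if __name__ == "__main__":
    playout({0: 0.030, 1: 0.052, 2: 0.110, 3: 0.095})  # packet 2 arrives too late
```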
2.3.2.5 Loss
Loss can occur due to errors introduced by the physical transmission
medium. For example, most landline connections have very low loss as measured in
the Bit Error Rate (BER). However, wireless connections such as satellite, mobile, or
fixed wireless networks have a high BER that varies due to environment or
geographicalconditions such as fog, rain, RF interference, cell handoff during
roaming, and physical obstacles such as trees, buildings, and mountains. Wireless
technologies often transmit redundant information since packets will inherently get
dropped some of the time due to the nature of the transmission medium.
Loss can also occur when congested network nodes drop packets. Some
networking protocols such as TCP (Transmission Control Protocol) offer packet loss
protection by retransmitting packets that may have been dropped or corrupted by the
network. When a network becomes increasingly congested, more packets are
dropped and hence more TCP retransmissions. If congestion continues, the network
performance will significantly decrease because much of the bandwidth is being
used to retransmit dropped packets. TCP will eventually reduce its transmission window size, so that less and less data is in flight at any one time. This eventually reduces congestion, resulting in fewer packets being dropped.
Because congestion has a direct impact on packet loss, congestion avoidance
mechanisms are often deployed. One such mechanism is called Random Early
Discard (RED). RED algorithms randomly and intentionally drop packets once the
traffic reaches one or more configured thresholds. RED takes advantage of the TCP
protocol's window size throttling feature and provides more efficient congestion
management for TCP-based flows. Note that RED only provides effective
congestion control for applications or protocols with “TCP-like” throttling
mechanisms.
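The threshold behaviour of RED can be made concrete with a small sketch. The Python fragment below is a simplified illustration of the classic RED idea, and the thresholds and maximum drop probability are arbitrary example values rather than settings from any particular router: the average queue length is tracked with an exponentially weighted moving average, and the drop probability grows linearly between the minimum and maximum thresholds.

```python
import random

# Simplified RED sketch with arbitrary example parameters.
MIN_TH, MAX_TH = 5, 15     # thresholds on the average queue length (packets)
MAX_P = 0.1                # drop probability reached at MAX_TH
WEIGHT = 0.002             # EWMA weight for the average queue size

avg_queue = 0.0

def red_drop(current_queue_len):
    """Return True if the arriving packet should be dropped."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False                      # below the minimum threshold: never drop
    if avg_queue >= MAX_TH:
        return True                       # above the maximum threshold: always drop
    # Between thresholds: drop with probability rising linearly towards MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```

A full RED implementation also scales the drop probability with the number of packets accepted since the last drop; the sketch omits that refinement for clarity.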
2.3.2.6 Emission Properties
Emission priorities determine the order in which traffic is forwarded as it
exits a network node. Traffic with a higher emission priority is forwarded ahead of
traffic with a lower emission priority. Emission priorities also determine the amount
of latency introduced to the traffic by the network node’s queuing mechanism.
For example, delay-tolerant applications such as e-mail would be configured
to have a lower emission priority than delay-sensitive real-time applications such as
voice or video. These delay-tolerant applications may be buffered while the delay-sensitive applications are being transmitted.
In its simplest of forms, emission priorities use a simple transmit priority
scheme whereby higher emission priority traffic is always forwarded ahead of lower
emission priority traffic. This is typically accomplished using strict priority
scheduling (queuing). The downside of this approach is that low emission priority
queues may never get serviced (starved) if there is always higher emission priority
traffic with no bandwidth rate limiting.
A more elaborate scheme provides a weighted scheduling approach to the
transmission of traffic to improve fairness, i.e., the lower emission priority traffic
doesn’t always have to wait until the higher emission priority traffic is transmitted.
Finally, some emission priority schemes provide a mixture of both priority and
weighted schedulers.
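Both scheduling styles mentioned above can be sketched in a few lines. The Python fragment below is purely illustrative, with invented queue names and weights: strict priority always serves the highest-priority non-empty queue, while weighted round robin gives each queue transmission opportunities in proportion to its weight, so lower-priority traffic is not starved.

```python
from collections import deque

# Illustrative schedulers over three invented traffic classes.
queues = {"voice": deque(), "video": deque(), "best_effort": deque()}
WRR_WEIGHTS = {"voice": 4, "video": 2, "best_effort": 1}

def strict_priority():
    """Serve the highest-priority non-empty queue (voice > video > best effort)."""
    for name in ("voice", "video", "best_effort"):
        if queues[name]:
            return queues[name].popleft()
    return None

def weighted_round_robin():
    """One scheduling cycle: each queue may send up to its weight in packets."""
    sent = []
    for name, weight in WRR_WEIGHTS.items():
        for _ in range(weight):
            if queues[name]:
                sent.append(queues[name].popleft())
    return sent
```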
2.3.2.7 Discard Priorities
Discard priorities are used to determine the order in which traffic gets
discarded. The traffic may get dropped due to network node congestion or when the
traffic is out-of-profile, i.e., the traffic exceeds its prescribed amount of bandwidth
for some period of time. Under congestion, traffic with a higher discard priority gets
dropped before traffic with a lower discard priority. Traffic with similar QoS
performance requirements can be subdivided using discard priorities. This allows the
traffic to receive the same performance when the network node is not congested.
However, when the network node is congested, the discard priority is used to
drop the “more eligible” traffic first. Discard priorities also allow traffic with the
same emission priority to be discarded when the traffic is out-of-profile. Without
discard priorities, traffic would need to be separated into different queues in a
network node to provide service differentiation.
This can be expensive since only a limited number of hardware queues
(typically eight or less) are available on networking devices. Some devices may have
software-based queues but as these are increasingly used, network node performance
is typically reduced. With discard priorities, traffic can be placed in the same queue
but in effect, the queue is subdivided into virtual queues, each with a different
discard priority. For example, if a product supports three discard priorities, then one
hardware queue in effect provides three QoS levels.
2.3.3 QoS Mechanism
Quality of Service (QoS) can be provided by generously over-provisioning a
network so that interior links are considerably faster than access links. This approach
is relatively simple, and may be economically feasible for broadband networks with
predictable and light traffic loads. The performance is reasonable for many applications, particularly those capable of tolerating high jitter, such as deeply-buffered video downloads.
Commercial VoIP services are often competitive with traditional telephone
service in terms of call quality even though QoS mechanisms are usually not in use
on the user's connection to his ISP and the VoIP provider's connection to a different
ISP. Under high load conditions, however, VoIP quality degrades to cell-phone
quality or worse. The mathematics of packet traffic indicate that a network with QoS
can handle four times as many calls with tight jitter requirements as one without
QoS. The amount of over-provisioning in interior links required to replace QoS
depends on the number of users and their traffic demands. As the Internet now
services close to a billion users, there is little possibility that over-provisioning can
eliminate the need for QoS when VoIP becomes more commonplace.
For narrowband networks more typical of enterprises and local governments,
however, the costs of bandwidth can be substantial and over provisioning is hard to
justify. In these situations, two distinctly different philosophies were developed to
engineer preferential treatment for packets which require it.
Early work used the "IntServ" philosophy of reserving network resources. In
this model, applications used the Resource reservation protocol (RSVP) to request
and reserve resources through a network. While IntServ mechanisms do work, it was
realized that in a broadband network typical of a larger service provider, Core routers
would be required to accept, maintain, and tear down thousands or possibly tens of
thousands of reservations. It was believed that this approach would not scale with the
growth of the Internet, and in any event was antithetical to the notion of designing
networks so that Core routers do little more than simply switch packets at the highest
possible rates.
The second and currently accepted approach is "DiffServ" or differentiated
services. In the DiffServ model, packets are marked according to the type of service
they need. In response to these markings, routers and switches use various queuing
strategies to tailor performance to requirements. (At the IP layer, differentiated services code point (DSCP) markings use six bits in the IP packet header. At the MAC layer, VLAN IEEE 802.1Q and IEEE 802.1D can be used to carry essentially the same information.) Routers supporting DiffServ use multiple queues for packets
awaiting transmission from bandwidth constrained (e.g., wide area) interfaces.
Router vendors provide different capabilities for configuring this behavior, to
include the number of queues supported, the relative priorities of queues, and
bandwidth reserved for each queue.
In practice, when a packet must be forwarded from an interface with queuing,
packets requiring low jitter (e.g., VoIP or VTC) are given priority over packets in
other queues. Typically, some bandwidth is allocated by default to network control
packets (e.g., ICMP and routing protocols), while best effort traffic might simply be
given whatever bandwidth is left over.
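At the host, DiffServ marking simply means setting the DSCP bits of outgoing IP packets. The sketch below shows one way an application can do this through the standard socket API on most Unix-like systems; the Expedited Forwarding code point (46) and the destination address are example values only, and whether routers actually honour the marking depends entirely on network policy.

```python
import socket

# Mark outgoing UDP datagrams with DSCP 46 (Expedited Forwarding).
# The TOS byte carries the DSCP in its upper six bits: 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Example destination (placeholder address and port).
sock.sendto(b"probe", ("192.0.2.10", 5000))
```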
2.4 Techniques of Network Measurement
Network measurement is part of the responsibilities of the network
management system and hence how the management information is stored in the
agents and retrieved by the manager has already been illustrated in the previous
section. When referring to measurement, the agent corresponds to the measurement
device. This can be the router itself or additional measurement equipment. The
manager will be the Network Operation Centre (NOC) which collects the data for
analysis. Active and Passive measurements are the two main approaches that are
used to measure the QoS network performance (Haseb, Maheen, 2006).
2.4.1 Passive Measurement
The term passive measurement refers to the process of measuring a network,
without creating or modifying any traffic on the network. This is in contrast to active
measurement, in which specific packets are introduced into the network, and these
packets are timed as they travel through the network being measured.
Passive measurement can provide a detailed set of information about the one
point in the network that is being measured. Examples of the information passive
measurements can provide are:
• Traffic / protocol mixes
• Accurate bit or packet rates
• Packet timing / inter-arrival timing
Passive measurement can also provide a means of debugging a network application, by providing a user with the entire packet contents, as seen on the network (Moore, Rose, 1999).
2.4.1.1 Principles of Passive Measurement
Figure 2.2 Basic passive measurement setup (Moore, Rose, 1999)
The basic principle of passive measurement is shown in figure 2.2. There are
three main situations that define what the entities represent. In the first situation the
first entity represents the entire Internet and the second a single machine. In the next
situation the first entity is the outside world, as seen by an organization. The second
entity represents this organization's internal network. A good example of this is a university's Internet connection and its internal LAN. The final situation is a
backbone link where the two entities are just both sections of the Internet.
A monitor `snoops' on all the traffic flowing between these two entities. What
this monitor does with the traffic depends on what the aim of the system is, and also
which of the specific situation listed above applies. There are two major categories
that passive measurement systems can fall into. The first is to deal with the captured data in real time, for example by looking at each packet and counting the number of bytes passing the monitor every second or minute. These statistics are very small,
when compared to the amount of data that could pass the monitor. These values can
be used, for example, to see if available bandwidth is being fully utilized, if
saturation is a problem or if there are peak times where more bandwidth could be
required. The second type of passive measurement creates files containing copies of
all or a proportion of the traffic seen on the link over a certain time period. These
trace files can then be post processed. This can allow advanced computation to be
carried out that would be impossible in real-time, and also preserves data for further
analysis at a later point.
Trace-based systems have one significant requirement. As the data will be post
processed, additional information must be saved with the packet to indicate the time
that this packet arrived. The accuracy of this time stamping process will directly
relate to the accuracy of the results that can be drawn from a trace file. The issues
related to this simple concept of time stamping form some of the hardest problems in
passive measurement.
Real-time analysis suits high-speed networks, where the volume of data on the link is too large to record copies of it to disk or even memory. This is likely to occur in the third configuration discussed, where the monitor is on a high-capacity network backbone. Real-time analysis also suits projects that want to monitor links for long periods of time, for instance weeks or months. The trade-off for long-term, or high-speed, measurement is detail. The detail provided by these systems is often of limited use for in-depth traffic analysis such as that required by this project.
Trace-based systems can cover a range of network speeds. The faster the network, the smaller the proportion of data that can be saved. One very common subset
of the data that is saved is the IP and transport layer headers. The IP header provides
information on the source of the datagram, the destination of the datagram, the
length of the datagram and which transport protocol is carried in the payload. The
transport layer can give an indication of what type of traffic was contained within the
packet, but the restrictions of this have to be understood, and care must be taken
when claiming a packet contains a certain type of data.
Header traces are commonly used for both of the first two passive
measurement configurations discussed, and wherever else network speeds allow
traces to be taken. Full capture of all data on a link is normally restricted to the first
situation. The data rates created by a single computer are low when compared to
backbones and gateways. Full capture allows complete analysis of the actual data
passing on the network, which could be used for debugging purposes and also allow
later `playback' of the entire data stream.
2.4.1.2 Implementation
A passive measurement system will often be designed with one type of
analysis in mind. Different types of analysis require different levels of accuracy and
the design will focus on the area most important to the project.
Take for example a research project interested in the protocol mixes present
on the Internet today. This type of project requires limited accuracy to obtain results.
The systems need not have accurate timestamps, or require 100% packet capture. A random packet loss is unlikely to affect bulk statistics such as protocol mixes.
However to obtain a complete picture of the Internet, this project would need to
install monitoring positions at as many points of the network as possible. This means
that spending large amounts of money to obtain accurate results would provide
limited benefit, while buying more machines, providing more monitoring points
across the network is likely to improve the project.
As the ultimate aim of this project is simulation, the accuracy of the trace
files is extremely important. A poor set of trace files could end in a poor simulation
with dubious results. For these reasons each possible solution must be evaluated, and
the solutions used must be well understood so the accuracy of any results drawn
from simulations can be stated.
2.4.1.2.1 Software-Based Measurement
Quite possibly the simplest and quickest passive measurement system is to
run the Unix program tcpdump. This program is supported under most Unix systems.
It will listen on a specified network interface, and capture all traffic seen on that link.
Contrary to the package's name, it will dump all data sourced from and destined for
that interface, as well as any other traffic seen on that section of the network, and is
not restricted to TCP. Tcpdump can run in two modes, either capture traffic to a file,
or display text information on every packet on the screen. Also built in are filtering
rules that allow capture of a specified subset of data, for instance TCP traffic, on port
80.
Tcpdump is not only a very common application, but also provides a good example of the techniques used in software-based solutions. A software-based system does not mean any system that contains software components. All measurement systems will require some software components; the important distinction is that in a software-based system, the software is provided with no special assistance for measurement by any hardware.
A software monitor such as tcpdump will run as an application program.
Without a special interface to the network stack, this program would be unable to
capture any packets that were destined for other machines or other applications on
the monitor. This would mean that passive measurement would be impossible.
The interface that allows programs such as tcpdump to work, is the packet
capturing interface. This interface, referred to often as the pcap interface, is accessed
through a C library called libpcap. This library also allows reading and writing to
files, providing a uniform access to a program for trace files and live networks.
tcpdump is a higher level interpretation and filtering program that sits on top of
libpcap. For this reason it is common to refer to files created by libpcap as tcpdump
files.
Libpcap provides the method for a software program to capture packets that
are destined for other applications. To allow capture of all packets seen on a
network, it is often required to put a network card in promiscuous mode. In this
mode a network card will provide the network stack with all packets on the network,
not only the ones to be dealt with locally. These packets will be normally discarded
at the kernel level, but will be forwarded to the pcap interface if requested.
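A minimal software capture loop in the spirit of tcpdump and libpcap can be written with a raw socket, as in the Python sketch below. This is only an illustrative sketch for Linux (it requires root privileges and the AF_PACKET socket family, and it does not itself put the interface into promiscuous mode); it counts the bytes seen per second, which is the kind of real-time statistic discussed above.

```python
import socket
import time

# Linux-only sketch: receive every link-layer frame the host sees (requires root).
ETH_P_ALL = 0x0003
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))

window_start = time.time()
byte_count = 0

while True:
    frame = sock.recv(65535)          # one link-layer frame
    byte_count += len(frame)
    now = time.time()
    if now - window_start >= 1.0:     # report a per-second byte count
        print(f"{byte_count} bytes/s")
        byte_count = 0
        window_start = now
```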
The problem with software monitoring solutions is the fact that none of the
components were designed or optimised for this use. There are many delays, and
buffers in a software solution that reduce the accuracy of timestamps and increase
the possibility of packet loss.
Software based systems are not limited to using the pcap interface, but could
write a custom network card driver to provide an interface to the network card for
use with passive measurement only. This solution removes the ability to use the
network card as a standard network interface, but can still suffer from many of the
problems of a libpcap system.
Software solutions have one major advantage. They are likely to be much
cheaper and quicker to setup than a specialist hardware solution. Software solutions
work well for a project that needs a prototype implementation or has only limited
demands on the accuracy of the data measured.
2.4.1.2.2 Hardware-Based Measurement
Hardware solutions try to correct many of these limitations of a software system. In a full hardware solution, as much of the monitoring as possible is carried out on the custom interface card, and the host system is used for storage and formatting of the data. The important features that can be carried out on the card are:
• Time stamping
• Clock synchronization
• Dropped packet counters
• Traffic filtering
Time stamping before delivery to the host ensures as close to the wire time as
possible. Ideally the packet will be time stamped as soon as it has arrived on the
card, before any buffering or queuing takes place.
Clock synchronization is important if there are multiple interfaces in a
machine. If packets are being captured by two interfaces, the timestamps need to be
consistent between interfaces.
In a hardware system, a check can be placed on any buffers to detect packet
loss. A hardware solution will not necessarily be perfect, but a well designed system
will be able to provide a definite counter of any drops that have occurred in the
system. For this reliability to carry through to the final network trace, checks must be placed at every point in the system where packet loss can occur. This includes any
software components that run on the host PC to deal with the storage of the data as it
is delivered by the monitor card.
Hardware solutions also allow a monitor to deal with larger data rates than software solutions could. It is quite often the case that only a limited amount of the data will be saved by a monitor; in a software solution, however, the operating system and possibly the monitoring application have to see all of the traffic and then decide whether it should be kept. If the hardware is made intelligent, it can carry out this function itself, discarding unwanted data at the earliest possible point.
2.4.2 Active Measurement
Active measurement by probing is a means by which testing packets (probes)
are sent into the network. An example technology is the Cisco IOS Service
Assurance Agent (SAA), which uses probe packets to provide insight into the way
customers’ network traffic is treated within the network. Similarly, Caida and
NLANR use probing to measure network health, perform network assessment, assist
with network troubleshooting and plan network infrastructure.
Current packet probing techniques can be encapsulated in existing protocols
such as the Internet Control Message Protocol (ICMP), UDP and TCP. The probe
traffic is governed by the following three parameters:
• Size of the probe
• Probe rate
• Probe pattern (deterministic, Poisson, etc.)
A further consideration is that it is imperative that the measurements are
carried out over valid periods when the customers are active. Otherwise, the
performance statistics would be averaged over virtually unloaded periods and would
not then reflect an accurate picture.
2.4.2.1 Simple Network Management Protocol (SNMP)
In IP networks the ubiquitous network management tool is the Simple
Network Management Protocol (SNMP). There is no doubt that SNMP can provide a
wealth of data about the operational status of each managed network element.
The operation of SNMP is a polling operation, where a management station
directs periodic polls to various managed elements and collects the responses. These
responses are used to update a view of the operating status of the network.
The complementary approach to performance instrumentation of network
elements is active network probing. This requires the injection of marked packets
into the data stream; collection of the packets at a later time; and correlation of the
entry and exit packets to infer some information regarding delay, drop, and
fragmentation conditions for the path traversed by the packet. The most common
probe tools in the network today are ping and traceroute (Geoff Huston,
Telstra, 2003).
2.4.2.2 PING
The best known, and most widely used active measurement tool is ping. Ping
is a very simple tool: a sender generates an Internet Control Message Protocol
(ICMP) echo request packet, and directs it to a target system. As the packet is sent,
the sender starts a timer. The target system simply reverses the ICMP headers and
sends the packet back to the sender as an ICMP echo reply. When the packet arrives
at the original sender's system, the timer is halted and the elapsed time is reported.
This simple active sampling technique can reveal a wealth of information. A
ping response indicates that the target host is connected to the network, is reachable
from the query agent, and is in a sufficiently functional state to respond to the ping
packet. In itself, this response is useful information, indicating that a functional
network path to the target host exists. Failure to respond is not so informative
because it cannot be reliably inferred that the target host is not available. The ping
packet, or perhaps its response, may have been discarded within the network because
of transient congestion, or the network may not have a path to the target host, or the
network may not have a path back to the ping sending host, or there may be some
form of firewall in the end-to-end path that blocks the ICMP packet from being
delivered.
However, if ping can reach an IP address, numerous performance metrics can be obtained. Beyond simple reachability, further information can be inferred by the ping approach with some basic extensions to the simple ping model.
If a sequence of labeled ping packets is generated, the elapsed time for a response to
be received for each packet can be recorded, along with the count of dropped
packets, duplicated packets, and packets that have been reordered by the network.
Careful interpretation of the response times and their variance can provide an
indication of the load being experienced on the network path between the query
agent and the target. Load will manifest a condition of increased delay and increased
variance, due to the interaction of the router buffers with the traffic flows along the
path elements as load increases. When a router buffer overflows, the router is forced
to discard packets; and under such conditions, increased ping loss is observed. In
addition to indications of network load, highly erratic delay and loss within a sequence of ping packets may be symptomatic of routing instability, with the network path oscillating between many path states.
A typical use of ping is to regularly test numerous paths to establish a
baseline of path metrics. This enables a comparison of a specific ping result to these
base metrics to give an indication of current path load within the network.
It is possible to interpret too much from ping results, particularly when
pinging routers within a network. Many router architectures use fast switching paths
for data packets, whereas the central processing unit of the router may be used to
process ping requests. The ping response process may be given a low scheduling
priority because router operations represent a more critical router function. It is
possible that extended delays and loss, as reported by a ping test, may be related to
the processor load or scheduling algorithm of the target router processor rather than
to the condition of the network path. (Figure 2.3)
Figure 2-3 Ping Path
2.4.2.3 Traceroute
The second common ICMP-based network management tool, traceroute,
devised by Van Jacobson, is based on the ICMP Time Exceeded message. A
sequence of User Datagram Protocol (UDP) packets is generated and sent towards the target host, each with an increasing value of the Time To Live (TTL) field in the IP header. This
generates a sequence of ICMP Time Exceeded messages sourced from the router
where the TTL expired. These source addresses are those of the routers, in turn, on
the path from the source to the destination (Figure 2.4) (Geoff Huston, Telstra,
2003).
Figure 2-4 Traceroute Path
Like ping, traceroute measures the elapsed time between the packet
transmission and the reception of the corresponding ICMP packet. In this way, the
complete output of a traceroute execution exposes not only the elements of the path
to the destination, but also the delay and loss characteristics of each partial path
element. Traceroute also can be used with loose source route options to uncover the
path between two remote hosts. The same caveats mentioned in the ping description
relating to the relative paucity in deployment of support for loose source routing
apply.
Traceroute is an excellent tool for reporting on the state of the routing system. It
operates as an excellent "sanity check" of the match between the design intent of the
routing system and the operational behavior of the network.
The caveat to keep in mind when interpreting traceroute output has to do with
asymmetric routes within the network. Whereas the per-hop responses expose the
routing path taken in the forward direction to the target host, the delay and loss
metrics are measured across the forward and reverse paths for each step in the
forward path. The reverse path is not explicitly visible to traceroute.
2.4.2.4 One-Way Measurements
Round-trip probes, such as ping and traceroute, are suited to measuring the total network path between the two ends of a transaction. A network provider, however, is often interested in the performance of a set of unidirectional transit paths from a network ingress point to an egress point. There are now some techniques that
perform a one-way delay and loss measurement, and they are suited to measuring the
service parameters of individual transit paths across a network. A one-way approach
does not use a single network management system, but relies on the deployment of
probe senders and receivers using synchronized clocks.
The one-way methodology is relatively straightforward: the sender records the precise time at which a certain bit of the probe packet was transmitted into the network, and the receiver records the precise time at which that same bit arrived at the receiver.
Consequent correlation of the sender's and receiver's data from repeated
probes can reveal the one-way delay and loss patterns between sender and receiver.
To correlate this to a service level requires the packets to travel along the same path
as the service flow and with the same scheduling response from the network.
Ping and traceroute are ubiquitous tools. Almost every device can support
sending ping and traceroute probes, and, by default, almost every device, including
network routers, will respond to a ping or traceroute probe. One-way measurements
are a different matter, and such measurements normally require the use of dedicated
devices in order to undertake the clocking of the probes with the required level of
precision (figure 2.5).
Figure 2-5 One-Way Measurements
2.5 Gnutella
Gnutella is a protocol for distributed search. Although the Gnutella protocol
supports a traditional client/centralized server search paradigm, Gnutella’s
distinction is its peer-to-peer, decentralized model. In this model, every client is a
server, and vice versa. These so-called Gnutella servants perform tasks normally
associated with both clients and servers. They provide client-side interfaces through
which users can issue queries and view search results, while at the same time they
also accept queries from other servants, check for matches against their local data
set, and respond with applicable results. Due to its distributed nature, a network of
servants that implements the Gnutella protocol is highly fault-tolerant, as operation
of the network will not be interrupted if a subset of servants goes offline (clip2.com).
2.5.1 Gnutella Protocol
The Gnutella protocol defines the way in which servants communicate over
the network. It consists of a set of descriptors used for communicating data between servants and a set of rules governing the inter-servant exchange of descriptors. The currently defined descriptors are shown in Table 2.1.
Table 2.1 Gnutella descriptors

Ping: Used to actively discover hosts on the network. A servant receiving a Ping descriptor is expected to respond with one or more Pong descriptors.
Pong: The response to a Ping. Includes the address of a connected Gnutella servant and information regarding the amount of data it is making available to the network.
Query: The primary mechanism for searching the distributed network. A servant receiving a Query descriptor will respond with a QueryHit if a match is found against its local data set.
QueryHit: The response to a Query. This descriptor provides the recipient with enough information to acquire the data matching the corresponding Query.
Push: A mechanism that allows a firewalled servant to contribute file-based data to the network.
A Gnutella servant connects itself to the network by establishing a connection
with another servant currently on the network. The acquisition of another servant’s
address is not part of the protocol definition and will not be described here (Host
cache services are currently the predominant way of automating the acquisition of
Gnutella servant addresses).
2.5.2 How Gnutella Works
To envision how Gnutella originally worked, imagine a large circle of users
(called nodes), who each have Gnutella client software. On initial start up, the client
software must bootstrap and find at least one other node. Different methods have
been used for this, including a pre-existing address list of possibly working nodes shipped with the software, updated web caches of known nodes (called GWebCaches), UDP host caches and, rarely, even IRC. Once connected, the client will request a list of working addresses. The client will try to connect to the nodes it was shipped with, as well as nodes it receives from other clients, until it reaches a certain quota. It will connect to only that many nodes, locally caching the addresses it has not yet tried and discarding addresses that turn out to be invalid.
When a user wanted to perform a search, the client would send the request to each node it was actively connected to. The number of actively connected nodes for a client was usually quite small (around 5), so each node then forwarded the request to all the nodes it was connected to, and they in turn forwarded the request, and so on, until the packet had travelled a predetermined number of "hops" from the sender.
To solve the problems of bottlenecks, Gnutella developers implemented a
system of ultrapeers and leaves. Instead of all nodes being considered equal, nodes
entering into the network were kept at the 'edge' of the network as a leaf, not
responsible for any routing, and nodes which were capable of routing messages were
promoted to ultrapeers, which would accept leaf connections and route searches and
network maintenance messages. This allowed searches to propagate further through
the network, and allowed for numerous alterations in the topology which have
improved the efficiency and scalability greatly.
Additionally, Gnutella adopted a number of other techniques to reduce traffic overhead and make searches more efficient. Most notable are QRP (Query Routing Protocol) and DQ (Dynamic Querying). With QRP a search reaches only those clients which are likely to have the files, so searches for rare files become vastly more efficient, and with DQ the search stops as soon as the program has acquired enough search results, which vastly reduces the amount of traffic caused by popular searches. Gnutella for users provides a vast amount of information about these and other improvements to Gnutella in a user-friendly style.
One of the benefits of Gnutella being so decentralized is that it is very difficult to shut the network down, and the users are the only ones who decide which content will be available. Unlike Napster, where the entire network relied on a central server, Gnutella cannot be shut down by shutting down any single node, and it is impossible for any one company to control the contents of the network, which is also due to the many free software Gnutella clients that share the network.
2.5.3 Design of Gnutella
Gnutella divides nodes into two groups: leaves and hubs. Leaves maintain
one or two connections to hubs, while hubs accept hundreds of leaves, and many
connections to other hubs. When a search is initiated, the node obtains a list of hubs
if needed, and contacts the hubs in the list, noting which have been searched, until
the list is exhausted, or a predefined search limit has been reached. This allows a
user to find a popular file easily without loading the network, while theoretically
maintaining the ability for a user to find a single file located anywhere on the
network.
Hubs index the files a leaf has by means of a Query Routing Table, which is filled with single-bit entries of hashes of keywords that the leaf uploads to the hub; the hub then combines the hash tables of all its leaves to create a version to send to its neighboring hubs. This allows hubs to reduce bandwidth greatly by simply not forwarding queries to leaves or neighboring hubs if the entries matching the search are not found in the routing tables.
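As a toy illustration of the idea behind such routing tables (this sketch is not the actual QRP table format or hash function, only the principle), a hub could record which keyword-hash buckets a leaf has announced and skip leaves whose tables cannot match the query:

set QRT_SIZE 64
proc kwhash {word} {                 ;# stand-in hash: sum of character codes mod table size
global QRT_SIZE
set h 0
foreach c [split $word ""] { scan $c %c code; incr h $code }
return [expr {$h % $QRT_SIZE}]
}
proc qrt_add {tableVar word} {       ;# leaf marks the bucket for one of its keywords
upvar $tableVar t
set t([kwhash $word]) 1
}
proc qrt_maybe_has {tableVar word} { ;# 0 = definitely absent, 1 = possibly present
upvar $tableVar t
return [info exists t([kwhash $word])]
}
array set leaf1 {}
qrt_add leaf1 "video"
puts [qrt_maybe_has leaf1 "video"]   ;# 1: the query is forwarded to this leaf
puts [qrt_maybe_has leaf1 "music"]   ;# 0 here: the query need not be forwarded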
Gnutella relies extensively on UDP, rather than TCP, for searches. The
overhead of setting up a TCP connection would make a random walk search system,
requiring the contacting of large numbers of nodes with small volumes of data,
unworkable. UDP, however, is not without its own drawbacks. Because UDP is
connectionless, there is no standard method to inform the sending client that a
message was received, and so if the packet is lost there is no way to know. Because
of this, UDP packets in Gnutella have a flag to enable a reliability setting. When a
UDP packet with the reliability flag enabled is received, the client will respond with
an acknowledge packet to inform the sending client that their packet arrived at its
destination. If the acknowledge packet is not sent, the reliable packet will be
retransmitted in an attempt to ensure delivery. Low importance packets which do not
have the flag enabled do not require an acknowledge packet, reducing reliability, but
also reducing overhead as no acknowledge packet needs to be sent (gnutella.com).
2.6 Queuing Theory
Queues can be chained to form queuing networks where the departures from
one queue enter the next queue. Queuing networks can be classified into two
categories: open queuing networks (see figure 2.6) and closed queuing networks (see
figure 2.7). Open queuing networks have an external input and an external final
destination. Closed queuing networks are completely contained, and the customers circulate continually, never leaving the network (Jerry D Gibson, 2002).
Figure 2.6 Open Queueing Network
Figure 2.7 Closed Queueing Network
2.6.1 Little's Formula
Let the average rate at which customers arrive at a queuing system be the
constant λ. Departures from the queuing system occur when a server finishes with a
customer. In the steady state, the average rate at which customers depart the queuing
system is also λ. Arrivals to a queuing system may be described by a counting process A(t), as shown in Figure 2.9. The average rate of arrivals is estimated as

\lambda = \frac{A(t)}{t}
Figure 2.8 illustrates how packets arrive at the queue and, after some time in the queue, are served by the server before being sent on to the destination.
Figure 2.8 Illustration of queuing system: Customers arrive at a queue and, after some time,
depart from a server
Figure 2.9 Illustration of the arrival and departure counting processes for a queuing system.
Here A(t) is the number of arrivals in [0, t]. Departures from the system, D(t), are also described by a counting process, and an example is shown in Figure 2.9.
Note that A(t) ≥ D(t) must be true at all times. The time that each customer spends in
the queuing system is the time difference between the customer’s arrival and
subsequent departure from a server. The time spent in the queuing system for arrival
i is denoted ti, i = 1, 2, 3,….
The average time a customer spends waiting in a queuing system is

\bar{t} = \frac{1}{A(t)} \sum_{i=1}^{A(t)} t_i
The average number of customers in a queuing system is

\bar{n} = \frac{1}{t} \sum_{i=1}^{A(t)} t_i = \lambda \bar{t}
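As an illustrative check of Little's formula (the numbers are assumed for illustration only, not simulation results): if packets arrive at an average rate of λ = 50 packets per second and each packet spends an average of 0.1 s in the system, then the average number of packets held in the system is 50 × 0.1 = 5 packets.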
2.6.2 First In First Out in Peer to Peer
First In First Out (FIFO) queuing is the most basic queue scheduling
discipline. In FIFO queuing, all packets are treated equally by placing them into a
single queue, and then servicing them in the same order that they were placed into
the queue. FIFO queuing is also referred to as First Come First Served (FCFS)
queuing (see figure 2.10) (Leonardo Balliache, 2003).
Figure 2.10 FIFO Queue
Using FIFO, incoming packets are served in the order in which they arrive.
Although FIFO is very simple from the standpoint of management, a pure FIFO
scheduling scheme provides no fair treatment to packets. In fact, with FIFO
scheduling, a higher-speed user can take up more space in the buffer and consume
more than its fair share of bandwidth.
With a FIFO queue, a single queue is maintained at each output port. When a new packet arrives and is routed to an output port, it is placed at the end of the queue. As long as the queue is not empty, the router transmits packets from the queue, always taking the oldest remaining packet next.
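To make the FIFO discipline concrete, the following toy Tcl sketch (illustrative only; it is not the ns2 DropTail implementation) keeps packets in a list, appends arrivals at the tail and always serves the head, i.e. the oldest packet:

set queue {}
proc enqueue {pkt} { global queue; lappend queue $pkt }
proc dequeue {} {
global queue
set pkt [lindex $queue 0]          ;# oldest packet is served first
set queue [lrange $queue 1 end]
return $pkt
}
enqueue p1; enqueue p2; enqueue p3
puts [dequeue]                     ;# prints p1: first come, first served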
CHAPTER 3
PROJECT DESIGN
3.1 Introduction
The main idea of this project is to evaluate the performance of Gnutella networks. Gnutella is a protocol for distributed search that is widely used in peer-to-peer networks. Although the Gnutella protocol supports a traditional client/centralized-server search paradigm, its distinction is its peer-to-peer, decentralized model. In this model, every client is a server, and vice versa. These so-called Gnutella servants perform tasks normally associated with both clients and servers.
The video packets will be sent over the Gnutella network using UDP. UDP is a simple message-based connectionless protocol. In connectionless protocols, no effort is made to set up a dedicated end-to-end connection. Communication is achieved by transmitting information in one direction, from source to destination, without checking whether the destination is still there or is prepared to receive the information. With UDP, messages cross the network in independent units.
3.2 Gnutella System Model
Figure 3-1 Basic Topology of the Gnutella
Gnutella offers a fully peer-to-peer decentralized infrastructure for
information sharing. The topology of a Gnutella network graph is meshed, and all
servants act both as clients and servers and as routers propagating incoming
messages to neighbors. While the total number of nodes of a network is virtually
unlimited, each node is linked dynamically to a small number of neighbors, usually
between 2 and 12. Messages, which can be broadcast or unicast, are labeled with a unique identifier used by the recipient to detect where a message comes from (Fabrizio Cornelli, 2002).
This feature allows replies to broadcast messages to be unicast when needed.
To reduce network congestion, all the packets exchanged on the network are
characterized by a given TTL (Time to Live) that creates a horizon of visibility for
each node on the network. The horizon is defined as the set of nodes residing on the
network graph at a path length equal to the TTL and reduces the scope of searches,
which are forced to work on only a portion of the resources globally offered.
To search for a particular file, a servant p sends a broadcast Query message
to every node linked directly to it (see Figure 3.1). The fact that the message is
broadcast through the P2P network, implies that the node not directly connected with
p will receive this message via intermediaries; they do not know the origin of the
request. Servants that receive the query and have in their repository the file
requested, answer with a QueryHit unicast packet that contains a Result Set plus
their IP address and the port number of a server process from which the files can be
downloaded using the HTTP protocol.
Although p is not known to the responders, responses can reach p via the
network by following in reverse the same connection arcs used by the query.
Servants can gain a complete vision of the network within the horizon by
broadcasting Ping messages. Servants within the horizon reply with a Pong message
containing the number and size of the files they share.
Finally, communication with servants located behind firewalls is ensured by
means of Push messages. A Push message behaves more or less like passive
communication in traditional protocols such as FTP, inasmuch as it requires the
``pushed'' servant to initiate the connection for downloading.
3.3 Gnutella Simulator in NS2
NS or the Network Simulator (also popularly called ns-2, in reference to its
current generation) is a discrete event network simulator. It is popular in academia
for its extensibility (due to its open source model) and plentiful online
documentation. ns is popularly used in the simulation of routing and multicast
protocols, among others, and is heavily used in ad-hoc research. ns supports an array
of popular network protocols, offering simulation results for wired and wireless
networks alike. It can also be used as a limited-functionality network emulator. NS is
licensed for use under version 2 of the GNU General Public License.
NS was built in C++ and provides a simulation interface through OTcl, an
object-oriented dialect of Tcl. The user describes a network topology by writing
OTcl scripts, and then the main ns program simulates that topology with specified
parameters.
In this project, the peer-to-peer model used in NS2 is GnutellaSim (Gnutella Simulator). GnutellaSim is a scalable packet-level Gnutella simulator that enables the complete evaluation of the Gnutella system with a detailed network model. GnutellaSim is based on a framework designed for packet-level peer-to-peer system simulation, which features functional isolation and a protocol-centric structure, among other characteristics. The framework is designed to be extensible, so that different implementation alternatives for a specific peer-to-peer system can be incorporated, and is portable to different network simulators.
To support GnutellaSim and to facilitate the simulation of other applications on ns2, the TCP implementation in ns2 was extended to make it closer to real TCP. The additional features include: receiver advertised window, sender buffer, socket-like APIs, dynamic connection establishment of TCP, and real payload transfer. This part of the code can be used independently (Himanshu Raj, 2003).
3.3.1 Architecture of Gnutella Simulator
Figure 3.2 shows the architecture of the Gnutella Simulator built in NS2, which is divided into three parts.
Figure 3-2 Architecture of Gnutella Simulator
The three parts of the architecture are:
• Part I: Application Layer
  - PeerApp (GnutellaApp, with Ultrapeer and Leaf)
  - ActivityController
  - PeerSys
  - BootstrapControl (SmpBootServer, with PDNSBootServer)
• Part II: Protocol Layer
  - PeerAgent (GnutellaAgent, with UltraAgent and LeafAgent)
  - MsgParser (GnutellaMsg)
• Part III: Socket Adaptation Layer
  - NSSocket (PrioSocket)
  - AdvwTcp (SocketTcp)

3.3.2 Framework Component
The components of the framework that should be built in NS2 are:
• PeerApp is the direct interface to peer-to-peer application users. It embodies the behavior of a peer-to-peer application and functionalities such as user interfaces and the maintenance of peer relationships. PeerApp can be associated with an ActivityController, which controls its behavior based on a certain user behavioral model. Application layer characteristics of a peer-to-peer system are described by PeerSys. The bootstrapping scheme varies from one peer-to-peer system to another, and BootstrapControl is a placeholder for such a scheme.
• PeerAgent performs protocol-dependent message transmission and processing. The APIs between PeerAgent and PeerApp usually correspond to the message types defined in a specific peer-to-peer protocol. MsgParser is a placeholder for the entity that encodes and decodes messages of a specific peer-to-peer protocol.
• Socket Adaptation Layer wraps the transport services provided by a specific network simulator with a socket interface, which is most familiar to developers of real peer-to-peer applications. See "GnutellaSim Components" for details of the specific classes implemented for ns2.
The term protocol-centric structuring refers to the organization of system components in the framework based on their relation to the peer-to-peer protocol, which simplifies the extension of the system. For example, PeerApp is the end consumer/producer of the protocol messages, while PeerAgent usually only routes/delivers protocol messages.
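As a concrete illustration, the following minimal OTcl sketch, condensed from the script in Appendix A, shows how these components are instantiated for a single servant. The simulator object $ns and the leaf node $nl0 are assumed to have been created as in that script, and the numeric arguments to PeerSys and attach-peersys simply follow the usage there.

set sys [new PeerSys 2 2 2]                          ;# system-wide Gnutella parameters (PeerSys)
$sys init-class classinfo.txt                        ;# user behaviour classes used by ActivityController
set bootserver [new SocketApp/SmpBootServer]         ;# simulated bootstrapping (host cache) server
set p0 [new PeerApp/GnutellaApp [$nl0 node-addr] 0]  ;# a Gnutella servant (PeerApp) on node nl0
$p0 use-smpbootserver $bootserver                    ;# bootstrap through the SmpBootServer
$p0 attach-peersys $sys 0                            ;# bind the servant to the PeerSys description
$ns at 0.0 "$p0 start"                               ;# bring the servant online at t = 0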
3.3.3 GnutellaSim Components
The components that should be available in the Gnutella Simulator are:

Part 1
• GnutellaApp: maintains peer relationships and is the direct interface to Gnutella users. It implements the legacy peer that speaks Gnutella protocol 0.4.
  o SmpBootGnuApp: subclass of GnutellaApp that uses a simplified (simulated) bootstrapping process to find initial peers.
  o Ultrapeer: Ultrapeer servent implementation in the Gnutella protocol 0.6.
  o Leaf: Leaf servent implementation in the Gnutella protocol 0.6.
• ActivityController: a timer-based entity associated with each servent to control its behavior.
• PeerSys: describes the system-wide parameters of a Gnutella network, e.g., the distribution of file popularities and the number of popular files.
• SmpBootServer: a central server object that handles the simulated bootstrapping process of each new peer.
  o PDNSBootServer: subclass of SmpBootServer that works in PDNS.

Part 2
• GnutellaAgent: implements the Gnutella protocol, including message parsing, processing, forwarding, etc.
• UltraAgent: GnutellaAgent for Ultrapeers.
• LeafAgent: GnutellaAgent for Leaf peers.
• GnutellaMsg: defines Gnutella message formats; formats and parses Gnutella messages.

Part 3
• NSSocket: provides socket interfaces to the TCP transport service in ns2.
  o PrioSocket: Gnutella-application-specific priority-queuing-based NSSocket.
• AdvwTcp: subclass of FullTcpAgent in ns2, which adds the receiver advertised window feature to FullTcpAgent.
  o SocketTcp: subclass of AdvwTcp that implements the real data transfer in ns2.
3.3.4 Implementation Details
While the Gnutella protocol is standardized, Gnutella applications can be implemented very differently, as has been observed in servent implementations such as LimeWire, Gnucleus and gtk-gnutella. While GnutellaSim is extensible enough to implement specific schemes of the user's choice, some details of the current implementation are described as follows:
• ActivityController: a servent starts from the "online" state and switches
between the "online" state and the "offline" state during the simulation.
While online, it switches between "active" state and "idle" state. It searches
for files under the "active" state and only serves as a forwarding engine under
the "idle" state. The average lengths of an "idle" period and of an "offline"
period are both described by a ClassSpec object, which is initialized with a
file. Different classes can be defined with different lengths for "idle" and
"offline" periods.
• Bootstrapping: we have currently implemented two ways to simulate the
bootstrapping process. The first one is a simplified version of the
GWebCache protocol. A peer is preconfigured with a list of bootstrap servers
(GWebCache servers). When a peer first gets online in a simulation, it
contacts the bootstrap servers to get a list of servents. Servents are
remembered by a bootstrap server as they contact it for other servents'
information and they send an update to the bootstrap server periodically.
There is a limit on the number of known servents maintained by both the
bootstrap server and a servent. When the limit is reached, older ones are
removed before new ones. The second implementation of bootstrapping does
not simulate the protocol between bootstrap servers and servents. A central
server is responsible for maintaining the identities of the available peers and a
new peer directly invokes a method of the central server to get a list of online
servents.
• Connection Management: each servent has a connection watchdog that
periodically checks if the minimum number of connections (can be
configured on a per servent basis) are maintained. If not, the servent will try
to connect to the known servents in its cache. Failing that, it will fetch a list
of hosts from the bootstrap servers.
• Flow control: congestion and message loss can take place on peers during the Gnutella forwarding process. The most common congestion control mechanisms used by Gnutella implementations, priority message queuing and rate control, are both implemented and can be used together.
• Query processing: whether a servent responds to a Query with QueryHits is a
random variable controlled by a probability. The probability for a specific file
(distinguished by a number) to hit a match on a servent depends on the
popularity of the file, as specified by a PeerSys object for the simulation.
3.4 Configuration of The Network
For this simulation, the nodes are divided into two groups: leaves and hubs. Leaves maintain one or two connections to hubs, while hubs accept hundreds of leaves and many connections to other hubs. To keep the simulation simple and the performance easy to analyse, video packets are sent from a single sender to a single receiver, passing through two hubs.
Figure 3-3 Topology of the Gnutella in NS2
In this simulation there are two hub nodes, node 0 and node 1. Each hub has two leaves: nodes 2 and 3 are the leaves of node 0, and nodes 4 and 5 are the leaves of node 1. The video packets are sent from node 2 to node 5, through nodes 0 and 1 (see Figure 3.3).
Three sets of specifications are simulated. In the first scenario all link bandwidths are 512 kbps; in the second scenario the link bandwidth is changed to 256 kbps, and in the third to 128 kbps (see Table 3.1 and Figure 3.4). The link delay is the same for all scenarios, 100 ms, and the queue scheduler is DropTail (see Appendix A).
Figure 3-4 Configuration of the network
Table 3-1 Configuration of the network

Configuration    Scenario 1         Scenario 2         Scenario 3
Bandwidth        512 kbps           256 kbps           128 kbps
Delay            100 ms             100 ms             100 ms
Queue            DropTail (FIFO)    DropTail (FIFO)    DropTail (FIFO)
DropTail objects are a subclass of queue objects that implement a simple FIFO queue. DropTail is the queue type most commonly used in NS2 simulations.
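For reference, the following minimal OTcl sketch shows how the scenario-1 links of Table 3-1 could be declared; the node variable names are assumed here for illustration (the full script, with its own node names and parameters, is given in Appendix A).

set ns [new Simulator]
set n0 [$ns node]        ;# hub 0
set n1 [$ns node]        ;# hub 1
set n2 [$ns node]        ;# leaf of hub 0 (video sender)
set n3 [$ns node]        ;# leaf of hub 0
set n4 [$ns node]        ;# leaf of hub 1
set n5 [$ns node]        ;# leaf of hub 1 (video receiver)
set bw 512Kb             ;# change to 256Kb or 128Kb for scenarios 2 and 3
$ns duplex-link $n0 $n1 $bw 100ms DropTail
$ns duplex-link $n0 $n2 $bw 100ms DropTail
$ns duplex-link $n0 $n3 $bw 100ms DropTail
$ns duplex-link $n1 $n4 $bw 100ms DropTail
$ns duplex-link $n1 $n5 $bw 100ms DropTail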
3.5 Video Packet over Gnutella System
Gnutella has to deal with the problem of unicast delivery, where a single receiver intends to receive media content from many sender peers in the Gnutella network. In this situation the selection of active peers becomes more important, as it is not feasible to select and coordinate a very large number of active peers: doing so leads to extra overhead for establishing and monitoring too many peers, and for reconstructing all the video packets before decoding at the receiving end. At the start of the adaptive mechanism for Gnutella packet video streaming, the receiver node sends a query and gets responses from sender peers that intend to share the content. It is not necessary that all the nodes holding the requested content cooperate in content sharing; it is a general observation that a large number of nodes present in P2P networks never intend to share their resources.
As described in Section 2.5.3, Gnutella relies extensively on UDP rather than TCP for searches, because the overhead of setting up TCP connections would make its random-walk style of search unworkable. Gnutella compensates for UDP's connectionless, best-effort delivery with an optional reliability flag: packets carrying the flag are acknowledged by the receiver and retransmitted if no acknowledgement arrives, while low-importance packets omit the flag to reduce overhead.
UDP (User Datagram Protocol) is one of the two main transport protocols used in IP networks. The UDP protocol exists at the Transport Layer of the OSI model (see Figure 3.5). UDP is an unreliable connectionless protocol: it does not provide mechanisms for error detection and error correction between the source and the destination. Because of this, UDP utilizes bandwidth more efficiently than TCP. Applications which utilize UDP and which require reliable transport must provide their own error detection and correction mechanisms (tech-faq.com/udp, 2006).
Figure 3-5 OSI and TCP/IP model
UDP does not guarantee reliability or ordering in the way that TCP does.
Datagrams may arrive out of order, appear duplicated, or go missing without notice.
Avoiding the overhead of checking whether every packet actually arrived makes
UDP faster and more efficient, for applications that do not need guaranteed delivery.
Time-sensitive applications often use UDP because dropped packets are preferable to
delayed packets. UDP's stateless nature is also useful for servers that answer small
queries from huge numbers of clients. Unlike TCP, UDP is compatible with packet broadcast (sending to all hosts on the local network) and multicasting (sending to all subscribers).
In order to send video packets using UDP over Gnutella in ns2, the UDP module in ns2 needs to be configured (see Figure 3.6). This follows the evaluation framework for more realistic simulations of MPEG video transmission proposed by Chih-Heng Ke (Chih-Heng Ke).
Figure 3-6 Flowchart of configure UDP sink in NS2
The length of the video is 750 ms, the video type is MPEG-4, and the maximum fragment size is 1000 bytes. To put the video packets into the network, a trace file of the video must be created and then fed into the network (see Figure 3.7).
Figure 3-7 Flowchart to compile the trace file (mpeg4encoder.exe with example.par produces the compressed video; MP4.exe –send 224.1.2.3 5555 1000 name_of_the_video.cmp generates the st trace file)
The final output is the st file (see Appendix B). The st file is the file used in the simulation; it represents the video packets that are sent through the Gnutella simulator.
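The following condensed OTcl sketch, based on the script in Appendix A, shows how the converted trace file (video1.dat) is attached to the myUDP sender and myUdpSink2 receiver agents supplied by that framework; the node names n2 and n5 are assumed here to match Figure 3.3.

set udp1 [new Agent/myUDP]
$ns attach-agent $n2 $udp1              ;# sending leaf (assumed node name)
$udp1 set packetSize_ 1028              ;# 1000-byte fragment plus UDP/IP headers
$udp1 set_filename sd_be                ;# sender-side packet trace

set sink [new Agent/myUdpSink2]
$ns attach-agent $n5 $sink              ;# receiving leaf (assumed node name)
$ns connect $udp1 $sink
$sink set_trace_filename rd_be          ;# receiver-side packet trace

set trace_file [new Tracefile]
$trace_file filename video1.dat         ;# frame trace converted from the st file
set video1 [new Application/Traffic/myTrace2]
$video1 attach-agent $udp1
$video1 attach-tracefile $trace_file
$ns at 0.0 "$video1 start"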
3.6 Active Measurement
There are two types of active measurement: one-way active measurement and two-way active measurement. The type used in this project is two-way active measurement, which uses Ping.
Ping is a computer network tool used to test whether a particular host is reachable across an IP network; it can also be used to self-test the network interface card of the computer. It works by sending ICMP "echo request" packets to the target host and listening for ICMP "echo response" replies. Ping estimates the round-trip time, generally in milliseconds, records any packet loss, and prints a statistical summary when finished.
Ping is sent from node 2 to node 1. In this project the analysis focuses only on node 0 as a hub. The methodology of the active measurement is shown in Figure 3.8.
Figure 3-8 Methodology of the active Measurement
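As an illustration of this probing set-up, the following OTcl sketch (condensed from Appendix A, with the node names assumed) creates the two ns2 Ping agents and records every reported round-trip time.

set pi0 [new Agent/Ping]
$ns attach-agent $n2 $pi0                    ;# probe sender (assumed node name)
set pi1 [new Agent/Ping]
$ns attach-agent $n1 $pi1                    ;# probe target (assumed node name)
$ns connect $pi0 $pi1

Agent/Ping instproc recv {from rtt} {        ;# called for every echo reply
puts "ping reply from $from: rtt = $rtt ms"
}

proc send_ping {} {
global ns pi0
$pi0 send
$ns at [expr [$ns now] + 0.2] "send_ping"    ;# probe every 200 ms
}
$ns at 0.0 "send_ping"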
3.7 Methodology
Figure 3.9 shows the methodology of the whole simulation, from the input to the output.
Figure 3-9 Methodology of the Simulation
3.8 Summary
In the Gnutella system, the nodes are divided into two groups: leaves and hubs. Leaves maintain one or two connections to hubs, while hubs accept hundreds of leaves and many connections to other hubs. To keep the simulation simple and the performance easy to observe, video packets are sent from a single sender to a single receiver, passing through two hubs.

In the Gnutella model for this simulation there are two hub nodes, node 0 and node 1. Each hub has two leaves: nodes 2 and 3 are the leaves of node 0, and nodes 4 and 5 are the leaves of node 1. The video packets are sent from node 2 to node 5, through nodes 0 and 1.
Three sets of specifications are simulated. In the first scenario all link bandwidths are 512 kbps; in the second the link bandwidth is changed to 256 kbps, and in the third to 128 kbps. The link delay is the same for all scenarios, 100 ms, and the queue scheduler is DropTail.

The length of the video is 750 ms, the video type is MPEG-4, and the maximum fragment size is 1000 bytes. To put the video packets into the network, a trace file of the video is created and then fed into the network.
CHAPTER 4
SIMULATION RESULT AND PERFORMANCE ANALYSIS
4.1 Introduction
This chapter presents the simulation results from the developed model. Generally, the experiment gave three main results. The first is the queue state of the buffer: the video packets sent into the network from node 2 to node 5 queue at node 0 and node 1, and the analysis focuses on node 0 to observe the behaviour of the queue.
The second main result is the delay of the Gnutella network. To obtain the delay of the network, Ping is sent into the network from node 2 to node 1. Ping is sent only from node 2 to node 1 because the analysis in this project focuses only on node 0.
The last result discussed in this project is the Round Trip Time (RTT). The RTT is obtained by sending Ping into the network from node 2 to node 1; it is the total time needed for a packet to reach the destination and return to the source. In this project the analysis of the RTT also focuses on node 0.
4.2 The Queue Simulation Result
In this project the queue analysis focuses only on node 0 (see Figure 4.1). The video packets queue at node 0 and at node 1, according to the network settings in the Gnutella simulator.
Figure 4-1 Monitor at node 0
Three scenarios are used in this simulation: 128 kbps, 256 kbps and 512 kbps. The queue at node 0 is obtained by subtracting the number of packets that have left node 0 from the number that have arrived at it. The equation used is:

Q0 = Q(+) - Q(-)

where
Q0 = queue at node 0
Q(+) = packets that have arrived at node 0
Q(-) = packets that have left node 0
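For example (with illustrative numbers only, not taken from the trace): if 120 video packets have arrived at node 0 and 115 of them have already left, the queue at node 0 is Q0 = 120 - 115 = 5 packets, which happens to equal the maximum queue length observed in Figure 4.2.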
Figure 4-2 Queue at Node 0 (queue length, 0 to 5, versus simulation time in seconds, for link bandwidths of 128 kbps, 256 kbps and 512 kbps)
Figure 4.2 shows the queue at node 0 for the different link bandwidths. For 128 kbps and 256 kbps the queue grows until it reaches the maximum queue length of 5. With 512 kbps, in contrast, after the queue reaches 5 it drains back down to 0.
To keep the queue in the network short, a larger bandwidth is required: the larger the bandwidth used, the shorter the queue that results. Video packets are large and need a high-performance network, and increasing the frame rate of the video increases the bandwidth required.
The bandwidth required to send the video packets over the Gnutella simulator depends on the frame rate, the quantization and the resolution of the video, so the data rate is:

Data rate for video = Resolution (pixels/frame) × Quantization (bits/pixel) × Frame rate (frames/second)
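As an illustrative calculation (the resolution, quantization and frame rate below are assumed example values, not the parameters of the simulated video): an uncompressed 176 × 144 pixel stream quantized at 12 bits per pixel and played at 30 frames per second would require 176 × 144 × 12 × 30 ≈ 9.1 Mbit/s, which is far above the simulated link bandwidths and illustrates why MPEG-4 compression and an adequate link bandwidth are both needed.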
Figure 4-3 Frequency Queue state at node 0 (number of occurrences of each queue length from 1 to 5, for link bandwidths of 512 kbps, 256 kbps and 128 kbps)
Figure 4.3 shows how often each queue length from 0 to 5 occurs at node 0. It shows that with a link bandwidth of 512 kbps more video packets are admitted into the network than with the 256 kbps and 128 kbps links.
From these results it can be concluded that the more bandwidth the network uses, the better the network performs: with a larger bandwidth, video packets pass through the queue faster than with a smaller bandwidth.
4.3 The Delay Simulation Result

As described in Chapter 3, the default link delay in the simulation is 100 ms. To obtain the actual delay in the simulation, Ping has to be injected into the network; it is sent from node 2 to node 1.
Figure 4-4 Delay time from node 2 to node 1
Figure 4.4 shows the delay time in the simulation from node 2 to node 1. It shows that when the bandwidth is reduced, the delay time increases. The measured delay is not the same as the default configured in the simulation network because the traffic flow from node 2 to node 1 is very heavy.
In order to reduce the delay time in the network, the bandwidth of the network has to be increased. The packets sent into the network are video packets, which are large and require a high-performance network. Therefore, to send video packets over the network, a sufficiently high specification is necessary in order to reduce packet loss.
Figure 4-5 Delay time from node 1 to node 0
Figure 4.5 shows the delay time from node 1 to node 2. On this link there is no heavy traffic: the heavy traffic occurs only from node 2 to node 1, while from node 1 to node 2 there is almost none. As described in Chapter 3, the configured delay is 100 ms. The results show delay times of 101 ms, 102 ms, 104 ms and 108 ms, so the measured delays are comparable with the delay configured in the simulation.
From the results in Figures 4.4 and 4.5, the delay time depends on the traffic flow. When the traffic flow is heavy the delay increases, and when the traffic flow is light the delay time is reduced.
4.4 The RTT Simulation Results
Round-trip time (RTT), also called round-trip delay, is the time required for a
signal pulse or packet to travel from a specific source to a specific destination and
back again. In this context, the source is the computer initiating the signal and the
destination is a remote computer or system that receives the signal and retransmits it.
To obtain the RTT in this project, Ping has been sent from node 2 to node 1. The equation used in the simulation is:

Dt = Dl + Dq

where
Dt = total delay (maximum delay seen by Ping)
Dl = latency delay (minimum delay seen by Ping)
Dq = queuing delay
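For illustration (the values are assumed, not taken from the simulation results): if the minimum delay reported by Ping over the path is Dl = 400 ms and a particular probe reports a total delay Dt = 650 ms, then the queuing delay experienced by that probe is Dq = Dt - Dl = 650 - 400 = 250 ms.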
Figure 4-6 RTT from node 2 to node 1
Figure 4.6 shows the RTT results. Ping was sent from node 2 to node 1. The results show that when the bandwidth is small, more time is needed for Ping to reach the destination and return. To reduce the RTT in the network, a larger bandwidth is required: with a larger bandwidth the network can handle the heavy traffic and packet loss is reduced.
Comparing the delay-time results with the RTT results, they are very similar. The RTT is the sum of the forward and return delays. The return delay is very small, because the traffic on the return link is very light, so it can be ignored. With the return delay ignored, the RTT is effectively the same as the delay of the video packets sent from the source to the destination.
4.5 Summary
This project focuses only on the queue state, the delay time and the RTT. To achieve good network performance, the best network specification must be chosen for sending video packets over the network; the main requirement is to choose a bandwidth large enough to handle the network traffic.
CHAPTER 5
CONCLUSION
5.1 Conclusion
The design of a peer-to-peer network in the Gnutella system using active measurement has been presented. The main concept of P2P networking is that each peer is a client and a server at the same time. Gnutella divides nodes into two groups: leaves and hubs. Leaves maintain one or two connections to hubs, while hubs accept hundreds of leaves and many connections to other hubs. The design and simulation were done using the NS2 software. To perform the Gnutella simulation in NS2, the Gnutella Simulator had to be installed. The Gnutella Simulator is divided into three parts: the application layer, the protocol layer and the socket adaptation layer.
In this simulation there are two hub nodes, node 0 and node 1. Each hub has two leaves: nodes 2 and 3 are the leaves of node 0, and nodes 4 and 5 are the leaves of node 1. The packets are sent from node 2 to node 5, through nodes 0 and 1. The mechanism for this simulation is to send a video packet stream with length1 750 ms and a fragment size of 1000 bytes from a single sender to a single receiver.
Three sets of specifications were simulated. In the first scenario all link bandwidths are 512 kbps; in the second the link bandwidth was changed to 256 kbps, and in the third to 128 kbps. The delay is the same for all scenarios, 100 ms, and the queue scheduler is DropTail.
1 The time taken by the video packet from beginning to end.
The simulation used active measurement to observe the quality of service of the Gnutella system. The function of active measurement is to observe the delay of the link, whereby the active probes should not disturb the existing traffic. Using active measurement, the one-way delay and the RTT can be obtained remotely to measure the traffic.
5.2 Proposed Future Work
Further works should be carried out in order to improve the Quality of
Service of the peer to peer system:
1. Improving the system model using other type of the queuing scheduler, such
as WRR, CBWFQ.
2. Extending the application of the simulation using video and audio packet.
REFERENCES
Altman, Eithan and Jimenez. Tania. NS Simulator for Beginners. Unive.de los
Andes. December 4 2003
Androutsellis, Theotokis Stephanos and Spinellis, Diomidis. A survey of peer-to-peer
content distribution technologies. ACM Computing Surveys. 36(4).335–371.
December 2004.
Antonio, Fernando. Quality of Service In the Internet. DUP Science. 2004
Balliache, Leonardo. Differentiated Service on Linux HOWTO. http://opalsoft.net/qos. 2003
Brownlee, Nevil and Lossley, Chris. Fundamentals of Internet Measurement. A
Tutorial. Keynote Systems. 2001
Carsten, Rossenhövel. Peer-to-Peer Filters. The Big Report. Internet Evolution.
March 2008
Chih-Heng Ke, Ce-Kuen Shieh, Wen-Shyang Hwang, Artur Ziviani. An Evaluation
Framework for More Realistic Simulations of MPEG Video Transmission.
Journal of Information Science and Engineering (accepted) (SCI. EI)
Cisco. Internetworking Technologies Handbook. Cisco.
Claypool, Mark., Kinicki, Robert., Li, Mingzhe., Nichols, James. and Wu, Huahui.
Inferring Queue Sizes in Access Networks by Active Measurement. CS
Department at Worcester Polytechnic Institute Worcester. MA. 01609. USA.
Cornelli, Fabrizio Choosing Reputable Servents in a P2P Network. Università di
Milano. Italy
Chun, Wang. Network Application Design Using TCP/IP Protocol In Windows.
Submitted in Partial Fulfillment of the Requirement for the Degree of Master
of Science in Computer and Information Sciences
Detlef, Schoder and Kai, Fischbach. Core Concepts in Peer-to-Peer (P2P)
Networking. Idea Group Inc. Hershey. 2005
Engle, Marling and Khan, J. I.. Vulnerabilities of P2P systems and a critical look at
their solutions. May 2006
Fall, Kevin and Varadhan, Kannan. The ns Manual. UC Berkeley. LBL. USC/ISI.
and Xerox PARC. January 14. 2008
Forouzan, Behrouz. Data Communications and Networking. McGraw Hill. 2004
Gibson. Jerry D. The Communication Handbook. Southern Methodist University.
Texas. 2002
Geoff Huston, Telstra. Measuring IP Network Performance. The Internet Protocol
Journal - Volume 6, Number 1. 2003
Hardy. William C. QoS Measurement and Evaluation of Telecommunication Quality
of Service. John Wiley. 2001
Haseb, Maheen. Analysis of Packet Loss Probing in Packet Network. PhD Thesis.
Queen Mary University of London. 2006
He, Q., Ammar, M., Riley, G., Raj, H. and Fujimoto, R. A Framework for Packet-level Simulation of Peer-to-Peer Systems. MASCOTS 2003
http://hpds.ee.ncku.edu.tw/~smallko/ns2/Evalvid_in_NS2.htm
http://searchnetworking.techtarget.com/sDefinition/0..sid7_gci1250602.00.html#
http://www.cc.gatech.edu/computing/compass/gnutella/
http://www.clip2.com. The Gnutella Protocol Specification
http://www.isi.edu/nsnam/ns/
http://www.kazaa.com
http://www.pcmag.com/encyclopedia_term
http://www.tech-faq.com/udp.shtml
ITU-T Study Group 2. Teletraffic Engineering Handbook
Khan, Javed I. and Wierzbicki, Adam. Foundation of Peer-to-Peer Computing.
Special Issue. Elsevier Journal of Computer Communication. (Ed). Volume
31. Issue 2. February 2008
Kulkarni. Amit B and Bush. Stephen F . Active Networks and Active Network
Management A Proactive Management Framework. Kluwer Academic
Publisher. 2002
Minar, Nelson. Peer-to-Peer: Harnessing the Power of Disruptive T. O’Reilly. 2001
Moore, Rose. Analysis of Voice Over IP Traffic. Mathematics Department,
Macquarie University, Sydney. 1999
Mushtaq, Mubashar and Ahmed, Toufik . Adaptive Packet Video Streaming Over
P2P Networks Using Active Measurements. LaBRI University of Bordeaux.
2006
Nader, F. Mir. Computer and Communication Networks. Prentice Hall. November
02. 2006
Nortelnetwork. Introduction to Quality of Service (QoS). nortel network. 2003
[email protected]. Tutorial for Network Simulator
Park, Kihong and Willinger, Walter. Self-Similar Network Traffic and Performance
Evaluation. John wiley and Sons. Inc.
Park, Kun I. Ph.D. QoS In Packet Networks. Springer. United States of America.
2005
Pitts, J.M. and Schormans, J.A.. Introduction to IP and ATM Design and
Performance. Wiley. December 2000
Raj, Himanshu. Packet-level Peer-to-Peer Simulation Framework and GnutellaSim.
College of Computing Georgia Institute of Technology. 2003
Ralf, Steinmetz and Klaus, Wehrle (Eds). Peer-to-Peer Systems and Applications.
Lecture Notes in Computer Science. Volume 3485. September 2005.
Ripeanu, M., Foster, I. and Iamnitchi, A. Mapping the Gnutella Network. Properties of
Large-Scale Peer-to-Peer Systems and Implications for System Design. IEEE
Internet Computing. 6(1). February 2002.
Schoder. Fischbach and Schmitt Core Concepts in Peer-to-Peer Networking.
University of Cologne. Germany. 2005
Shuman Ghosemajumder. Advanced Peer-Based Technology Business Models. MIT
Sloan School of Management. 2002.
Stallings. William . Data and Computer Communications. Pearson.2007
Saroiu, Stefan., Gummadi, P. Krishna. and Gribble, Steven D. A Measurement Study of Peer-to-Peer File Sharing Systems. Proceedings of Multimedia Computing and Networking 2002 (MMCN'02). San Jose. CA. January 2002.
Subramanian, Ramesh and Goodman. Brian D . Peer-to-Peer Computing. The
Evalution of Disruptive Technology. Idea Group Publishing. 2005
Worchester Polytecnic Institute. NS by Example. August 01. 2002
www.cisco.com
Zeinalipour-Yazti, Demetrios and Folias, Theodoros. A Quantitative Analysis of the Gnutella Network Traffic. University of California. U.S.A.
APPENDIX A
Main Programming Code in NS2
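# OTcl simulation script for ns2 (descriptive comments added for readability).
# It builds the Gnutella topology, converts the st frame trace into video1.dat,
# streams the video over myUDP and probes the path with Agent/Ping.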
set ns [new Simulator]
set min 1000
set max 0
set packcount 0
set f3 [open delay.q w]
set f2 [open rtts.q w]
set nd [open out.tr w]
set nf [open graph.nam w]
set f0 [open queue.q w]
set f1 [open frek.q w]
$ns namtrace-all $nf
set max_fragmented_size 1000
#add udp header(8 bytes) and IP header (20bytes)
set packetSize 1028
set n0 [$ns node]
set n1 [$ns node]
set nl0 [$ns node]
set nl1 [$ns node]
set nl2 [$ns node]
set nl3 [$ns node]
$ns duplex-link $n0 $nl0 64Kb 100ms DropTail
$ns duplex-link $n0 $nl1 64Kb 100ms DropTail
$ns duplex-link $n1 $nl2 64Kb 100ms DropTail
$ns duplex-link $n1 $nl3 64Kb 100ms DropTail
$ns duplex-link $n0 $n1 64Kb 100ms DropTail
$ns duplex-link $n1 $n0 64Kb 100ms DropTail
$ns namtrace-queue $n1 $n0 $nd
set udp1 [new Agent/myUDP]
$ns attach-agent $nl0 $udp1
$udp1 set packetSize_ $packetSize
$udp1 set_filename sd_be
set null1 [new Agent/myUdpSink2]
$ns attach-agent $nl3 $null1
$ns connect $udp1 $null1
$null1 set_trace_filename rd_be
set sys [new PeerSys 2 2 2]
$sys init-class classinfo.txt
set bootserver [new SocketApp/SmpBootServer]
set nodeid0 [$nl0 node-addr]
set nodeid1 [$nl1 node-addr]
set nodeid2 [$nl2 node-addr]
set nodeid3 [$nl3 node-addr]
set p0 [new PeerApp/GnutellaApp $nodeid0 0]
set p1 [new PeerApp/GnutellaApp $nodeid1 0]
set p2 [new PeerApp/GnutellaApp $nodeid2 0]
set p3 [new PeerApp/GnutellaApp $nodeid3 0]
$p0 use-smpbootserver $bootserver
$p1 use-smpbootserver $bootserver
$p2 use-smpbootserver $bootserver
$p3 use-smpbootserver $bootserver
$p0 attach-peersys $sys 0
$p1 attach-peersys $sys 1
$p2 attach-peersys $sys 0
$p3 attach-peersys $sys 1
set original_file_name st
set trace_file_name video1.dat
set original_file_id [open $original_file_name r]
set trace_file_id [open $trace_file_name w]
set frame_count 0
set last_time 0
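# Convert the st frame trace into video1.dat: for every frame, write the
# inter-frame time in microseconds (30 frames/s), the frame length, a numeric
# frame-type code (I/H = 1, P = 2, B = 3) and the maximum fragment size.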
while {[eof $original_file_id] == 0} {
gets $original_file_id current_line
scan $current_line "%d%s%d%s%s%s%d%s" no_ frametype_ length_ tmp1_
tmp2_ tmp3_ tmp4_ tmp5_
#puts "$no_ $frametype_ $length_ $tmp1_ $tmp2_ $tmp3_ $tmp4_ $tmp5_"
# 30 frames/sec; to generate 25 frames/sec instead, use: set time [expr 1000*1000/25]
set time [expr 1000 * 1000/30]
if { $frametype_ == "I" } {
set type_v 1
}
if { $frametype_ == "P" } {
set type_v 2
}
if { $frametype_ == "B" } {
set type_v 3
}
if { $frametype_ == "H" } {
set type_v 1
}
puts $trace_file_id "$time $length_ $type_v $max_fragmented_size"
incr frame_count
}
close $original_file_id
close $trace_file_id
set end_sim_time [expr 1.0 * 1000/30 * ($frame_count + 1) / 1000]
puts "$end_sim_time"
set trace_file [new Tracefile]
$trace_file filename $trace_file_name
set video1 [new Application/Traffic/myTrace2]
$video1 attach-agent $udp1
$video1 attach-tracefile $trace_file
proc finish {} {
global ns nd
global nf f0 f1 f3
global min max end_time
$ns flush-trace
close $nd
close $nf
exec awk -f queue.awk out.tr
exec awk -f pret.awk queue.q
exec awk -f delay.awk out.tr
#exec xgraph -bb -tk -x time -y queue queue.q &
#exec xgraph -bb -tk -x packetid -y RTT rtts.q &
#exec nam graph.nam &
puts "min: $min ms ; max: $max ms"
exit 0
}
#Define a 'recv' function for the class 'Agent/Ping'
Agent/Ping instproc recv {from rtt} {
global min max f2 packcount
$self instvar node_
set packcount [expr $packcount + 1]
puts $f2 "$packcount $rtt"
if { $min > $rtt} {
set min $rtt
}
if { $max < $rtt} {
set max $rtt
}
}
proc send_ping {} {
global ns pi0
set time 0.2
set now [$ns now]
$pi0 send
#Re-schedule the procedure
$ns at [expr $now+$time] "send_ping"
}
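# Ping agents between nl0 and nl3 provide the active RTT measurement;
# send_ping issues one probe every 0.2 s for the whole simulation.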
set pi0 [new Agent/Ping]
$ns attach-agent $nl0 $pi0
set pi1 [new Agent/Ping]
$ns attach-agent $nl3 $pi1
$ns connect $pi0 $pi1
$ns at 0.0 "send_ping"
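# Start the Gnutella peers and the video source, then stop them once the
# last frame time (end_sim_time) has passed and close the receiver trace.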
$ns at 0.0 "$p0 start"
$ns at 0.1 "$p1 start"
$ns at 0.2 "$p2 start"
$ns at 0.3 "$p3 start"
$ns at [expr $end_sim_time + 0.0] "$p0 stop"
$ns at [expr $end_sim_time + 0.1] "$p1 stop"
$ns at [expr $end_sim_time + 0.2] "$p2 stop"
$ns at [expr $end_sim_time + 0.3] "$p3 stop"
$ns at 0.0 "$video1 start"
$ns at $end_sim_time "$video1 stop"
$ns at [expr $end_sim_time + 1.0] "$null1 closefile"
$ns at [expr $end_sim_time + 1.0] "finish"
$ns run
APPENDIX B
ST File
No.  Type  Time [ms]  Size [byte]
1  I  0  392
2  P  120  505
3  B  40  36
4  B  80  67
5  P  240  208
6  B  160  34
7  B  200  33
8  P  360  242
9  B  280  47
10  B  320  39
11  I  480  1064
12  B  400  72
13  B  440  78
14  P  600  76
15  B  520  41
16  B  560  46
17  P  720  70
18  B  640  28
19  B  680  29
20  P  840  86
21  B  760  33
22  B  800  47
23  I  960  1064
24  B  880  52
25  B  920  48
26  P  1080  181
27  B  1000  39
28  B  1040  41
29  P  1200  181
30  B  1120  52
31  B  1160  97
32  P  1320  100
33  B  1240  106
34  B  1280  34
35  I  1440  1081
36  B  1360  80
37  B  1400  88
38  P  1560  166
39  B  1480  36
40  B  1520  37
41  P  1680  82
42  B  1600  38
43  B  1640  104
44  P  1800  213
45  B  1720  37
46  B  1760  33
47  I  1920  1067
48  B  1840  140
49  B  1880  109
50  P  2040  90
51  B  1960  37
52  B  2000  38
53  P  2160  71
54  B  2080  32
55  B  2120  44
56  P  2280  93
57  B  2200  41
58  B  2240  43
59  I  2400  1076
60  B  2320  126
61  B  2360  76
62  P  2520  272
63  B  2440  33
64  B  2480  69
65  P  2640  154
66  B  2560  70
67  B  2600  48
68  P  2760  171
69  B  2680  75
70  B  2720  35
71  I  2880  1033
72  B  2800  49
73  B  2840  70
74  P  3000  102
75  B  2920  35
76  B  2960  35
77  P  3120  132
78  B  3040  28
79  B  3080  30
80  P  3240  143
81  B  3160  31
82  B  3200  41
83  I  3360  912
84  B  3280  78
85  B  3320  73
86  P  3480  101
87  B  3400  34
88  B  3440  30
89  P  3600  96
90  B  3520  30
91  B  3560  45
92  P  3720  92
93  B  3640  33
94  B  3680  32
95  I  3840  748
96  B  3760  70
97  B  3800  50
98  P  3960  73
99  B  3880  29
100  B  3920  30
...  (frames 101 - 9999 omitted)
10000  B  399920  318
10001  P  400080  1453
10002  B  400000  307
10003  B  400040  278
10004  P  400200  1646
10005  B  400120  287
10006  B  400160  353
10007  I  400320  6011
10008  B  400240  603
10009  B  400280  545
10010  P  400440  1692
10011  B  400360  707
10012  B  400400  662
10013  P  400560  1979
10014  B  400480  789
10015  B  400520  727
10016  P  400680  2204
10017  B  400600  907
10018  B  400640  897
10019  I  400800  6120
10020  B  400720  908
10021  B  400760  913
10022  P  400920  2114
10023  B  400840  797
10024  B  400880  839
10025  P  401040  2239
10026  B  400960  794
10027  B  401000  808
10028  P  401160  2272
10029  B  401080  901
10030  B  401120  813
10031  I  401280  5914
10032  B  401200  732
10033  B  401240  643
10034  P  401400  2003
10035  B  401320  737
10036  B  401360  733
10037  P  401520  2263
10038  B  401440  903
10039  B  401480  888
10040  P  401640  2091
10041  B  401560  930
10042  B  401600  933
10043  I  401760  5542
10044  B  401680  837
10045  B  401720  911
10046  P  401880  1985
10047  B  401800  810
10048  B  401840  720
...  (frames 10049 - 22403 omitted)
22404  B  896080  279
22405  B  896120  272
22406  P  896280  672
22407  B  896200  189
22408  B  896240  190
22409  P  896400  803
22410  B  896320  238
22411  B  896360  247
22412  P  896520  776
22413  B  896440  244
22414  B  896480  253
22415  I  896640  2945
22416  B  896560  246
22417  B  896600  262
22418  P  896760  512
22419  B  896680  147
22420  B  896720  138
22421  P  896880  577
22422  B  896800  180
22423  B  896840  163
22424  P  897000  600
22425  B  896920  173
22426  B  896960  187
22427  I  897120  3015
22428  B  897040  249
22429  B  897080  269
22430  P  897240  462
22431  B  897160  138
22432  B  897200  131
22433  P  897360  454
22434  B  897280  189
22435  B  897320  152
22436  P  897480  553
22437  B  897400  118
22438  B  897440  134
22439  I  897600  2943
22440  B  897520  192
22441  B  897560  170
22442  P  897720  570
22443  B  897640  119
22444  B  897680  149
22445  P  897840  528
22446  B  897760  100
22447  B  897800  115
22448  P  897960  521
22449  B  897880  92
22450  B  897920  139
22451  I  898080  3002
22452  B  898000  117
22453  B  898040  109
22454  P  898200  526
22455  B  898120  122
22456  B  898160  144
22457  P  898320  694
22458  B  898240  179
22459  B  898280  172
22460  P  898440  683
22461  B  898360  219
22462  B  898400  199
22463  I  898560  2943
22464  B  898480  241
22465  B  898520  254
22466  P  898680  544
22467  B  898600  137
22468  B  898640  168
22469  P  898800  761
22470  B  898720  178
22471  B  898760  183
22472  P  898920  778
22473  B  898840  217
22474  B  898880  238
22475  I  899040  2816
22476  B  898960  285
22477  B  899000  254
22478  P  899160  681
22479  B  899080  258
22480  B  899120  209
22481  P  899280  726
22482  B  899200  201
22483  B  899240  200
22484  P  899400  721
22485  B  899320  224
22486  B  899360  202
22487  I  899520  3014
22488  B  899440  292
22489  B  899480  267
22490  P  899640  669
22491  B  899560  182
22492  B  899600  227
22493  P  899760  713
22494  B  899680  166
22495  B  899720  175
22496  P  899880  727
22497  B  899800  231
22498  B  899840  282
22499  I  900000  2819
APPENDIX C
Queue Programming Code in NS2
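# Reads the NS2 trace (out.tr) and tracks the number of video packets in
# the queue at each timestamp: '+' events enqueue, '-' events dequeue.
# One (time, queue length) pair per timestamp is appended to queue.q.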
BEGIN{
buff=0
timestamp = 0
max = 0
}
{
if( timestamp == $3)
{
if ($1 == "+")
{
if($9 == "video")
{
buff = buff+1
writeproc[max] = buff
}
}
else if ($1 == "-")
{
if($9 == "video")
{
buff = buff-1
writeproc[max] = buff
}
}
}
else
{
if($9 == "video")
{
print writeprocnum[max], writeproc[max] >> "queue.q"
max++
timestamp = $3
writeprocnum[max] = timestamp
if ($1 == "+")
{
buff = buff+1
writeproc[max] = buff
}
else if ($1 == "-")
{
buff = buff-1
writeproc[max] = buff
}
}
}
}
APPENDIX D
Delay Programming Code in NS2
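# Computes the delay of each ping probe from the NS2 trace (out.tr): for
# every ping packet id the departure ('+') and arrival ('r') times are
# recorded, and (packet id, arrival - departure) pairs are written to delay.q.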
BEGIN{
timestamp = 0
max = 0
for (i = 1; i <= max; i++)
{
packetid[i] = 0
timedepart[i] = 0
timearrive[i] = 0
}
}
{
if( $9 == "ping")
{
avail = 0
for (i = 1; i <= max; i++)
{
if(packetid[i] == $15)
{
avail = 1
if ($1 == "+")
{
timedepart[i] = $3
}
if ($1 == "r")
{
timearrive[i] = $3
}
}
}
if(avail == 0)
{
max++
packetid[max]=$15
if ($1 == "+")
{
timedepart[max] = $3
}
if ($1 == "r")
{
timearrive[max] = $3
}
}
}
}
END {
for (i = 1; i <= max;i++)
print i,timearrive[i]-timedepart[i] >> "delay.q"
}
APPENDIX E
Frequency of Queue Programming Code in NS2
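# Builds a frequency histogram of the queue occupancy recorded in queue.q:
# counts how often the queue length (second field) equals 0 to 5 and
# writes the six counts to frek.q.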
BEGIN{
writeproct[0]=0
writeproct[1]=0
writeproct[2]=0
writeproct[3]=0
writeproct[4]=0
writeproct[5]=0
}
{
if ($2 == "0")
{
writeproct[0]++
}
if ($2 == "1")
{
writeproct[1]++
}
if ($2 == "2")
{
writeproct[2]++
}
if ($2 == "3")
{
writeproct[3]++
}
if ($2 == "4")
{
writeproct[4]++
}
if ($2 == "5")
{
writeproct[5]++
}
}
END{
print "0",writeproct[0] >> "frek.q"
print "1",writeproct[1] >> "frek.q"
print "2",writeproct[2] >> "frek.q"
print "3",writeproct[3] >> "frek.q"
print "4",writeproct[4] >> "frek.q"
print "5",writeproct[5] >> "frek.q"
}