The Enhanced Rate Proportional Differentiation Model
for the Future Internet Differentiated Services
by
Xizheng Shi
B. Sc., Tsinghua University, P. R. China 1992
A THESIS SUBMITTED IN PARTIAL FULFILMENT OF
THE REQUIREMENTS FOR THE DEGREE OF
Master of Science
in
THE FACULTY OF GRADUATE STUDIES
(Department of Computer Science)
We accept this thesis as conforming
to the required standard
________________________________
________________________________
THE UNIVERSITY OF BRITISH COLUMBIA
August 2000
(C) Xizheng Shi, 2000
Abstract
With the advent of diversified network applications and utilization, the services
demanded by users demonstrate divergent requirements. However, the traditional network
service model, which provides all the users with the same performance level (best effort
service), cannot meet the demand for providing a set of differentiated services. A number
of service models, especially the Integrated Service model and the Differentiated Service
model, have been proposed to address this problem. This thesis focuses on a new
differentiated service model, called the Rate Proportional Differentiation (RPD) Services
model.
We propose an improvement of the model that can accommodate different
differentiation requirements of the responsive and non-responsive traffic. We investigate a
number of implementation issues of the service model using the network simulator NS-2,
including the problems of traffic starvation and fairness within each service class.
Contents
Abstract .............................................................................................................................. ii
Contents ............................................................................................................................ iii
List of Figures ................................................................................................................... vi
List of Tables .................................................................................................................... vii
CHAPTER I Introduction ................................................................................................ 1
1.1 Thesis Motivation ..................................................................................................... 1
1.2 Thesis Objectives and Methodology ......................................................................... 2
1.3 Contributions............................................................................................................. 3
1.4 Thesis Outline ........................................................................................................... 5
CHAPTER II Quality of Service (QoS) – The Big Picture ........................................... 7
2.1 Absolute Differentiation vs. Relative Differentiation ............................................... 7
2.2 Service Models .......................................................................................................... 9
2.2.1 The Integrated Services (IntServ) Model ........................................................... 9
2.2.2 The Differentiated Services (DiffServ) Model ................................................ 10
2.2.3 The Rate Proportional Differentiation (RPD) Model ...................................... 12
2.3 Taxonomy on Service Models ................................................................................ 13
2.4 Other QoS related issues ......................................................................................... 14
2.4.1 Traffic Engineering and Constraint Based Routing ......................................... 14
2.4.2 Traffic Modeling .............................................................................................. 15
2.5 Summary ................................................................................................................. 16
Chapter III Enhancing the RPD Model ........................................................................ 17
3.1 Improving the RPD model ...................................................................................... 18
3.1.1 Simultaneous Switch Syndrome ...................................................................... 18
3.1.2 Coping with Non-responsive and Responsive Traffic ...................................... 18
3.2 Evaluation of the Rate Proportional Differentiation Model.................................... 21
3.2.1 Robustness ....................................................................................................... 21
3.2.2 Cost Efficiency................................................................................................. 22
3.2.3 Fairness ............................................................................................................ 23
3.4 Related Existing Forwarding Algorithms ............................................................... 24
3.5 Summary ................................................................................................................. 26
Chapter IV Improving the ARP Algorithm .................................................................. 27
4.1 Traffic Starvation .................................................................................................... 27
4.2 Preventing Traffic Starvation .................................................................................. 29
4.2.1 Determining Break Point Rate ......................................................................... 29
4.2.2 Choosing the Real Break Point ........................................................................ 31
4.2.3 Determining the Decline Function ................................................................... 31
4.2.4 The Enhanced ARP Algorithm (EARP) ........................................................... 34
4.3 Efficient Memory Management Scheme ................................................................ 34
4.4 The Processing of the Out Profile Packets .............................................................. 36
4.5 Simulation Result .................................................................................................... 39
4.5.1 Improved Rate Proportional Drop Rate Differentiation Algorithm ................. 39
4.5.2 The Simulation of the In/Out Profile Packets Process ..................................... 41
4.6 Summary ................................................................................................................. 42
Chapter V Providing Loss Precedence Differentiation with WFRED ....................... 43
5.1 Weighted RED ........................................................................................................ 43
5.2 The Simulation Environment .................................................................................. 46
5.3 Improvement over the Original RED ...................................................................... 47
5.4 Weighted Fairness RED .......................................................................................... 49
5.4.1 Modified FRED ............................................................................................... 49
5.4.2 Weighted Fairness RED ................................................................................... 53
5.4.3 Simulating Rate Proportional Loss Rate Differentiation with WFRED .......... 54
5.4.4 Simulating Assured Forwarding with WFRED ............................................... 56
5.5 WFRED vs. EARP .................................................................................................. 57
5.6 Summary ................................................................................................................. 59
Chapter VI Conclusions ................................................................................................. 60
6.1 Thesis Summary ...................................................................................................... 60
6.2 Future Work ............................................................................................................ 61
Bibliography .................................................................................................................... 64
Appendix 1 Acronyms ..................................................................................................... 69
List of Figures
Figure 1 Relation between input rate and differentiation parameter ......................... 28
Figure 2 The traffic starvation free drop rate differentiation algorithm ..................... 34
Figure 3 Dealing with In/Out profile packets with delay differentiation ................... 36
Figure 4 Simulation Topology ................................................................................... 39
Figure 5 Simulation topology with the RED gateway. .............................................. 46
Figure 6 A time plot of the queue length and the average queue length in the RED
gateway ............................................................................................................... 47
Figure 7 The Proposed Weighted Fairness RED........................................................ 53
List of Tables
Table 1 The Taxonomy on Service Models ................................................................ 13
Table 2 A comparison between two types of Loss Rate Differentiation. ................... 22
Table 3 A Comparison of the original and improved algorithm on traffic starvation. 40
Table 4 The effect of the advance coefficient on drop rate differentiation. ............... 40
Table 5 The simulation result of In/Out profile packet process ................................. 41
Table 6 A comparison between WRED and the Modified FRED.............................. 51
Table 7 A Comparison of WRED, the Modified FRED and WFRED in a fragile link
dominated situation ............................................................................................. 51
Table 8 A Comparison of WRED and WFRED with variance analysis. ................... 55
Table 9 A Comparison of RIO and WFRED with variance analysis ......................... 56
Table 10 A Comparison of RIO and WFRED on RTT Sensitivity ............................ 57
CHAPTER I Introduction
1.1 Thesis Motivation
With the advent of diversified network applications and utilization, the services
demanded by users demonstrate divergent requirements. The traditional network service
model, however, which provides all the users with the same performance level (best effort
service), cannot meet the demand for providing a set of differentiated services. A number
of service models, such as the Differentiated Service model (DiffServ) and the Integrated
Service model (IntServ), have been proposed to address the problem.
DiffServ and IntServ are the two service models that have made it onto the IETF
standards track.
Nevertheless, numerous issues and problems surrounding these two
models are still under debate and study. The standards themselves are continuously
evolving. Beyond these two, researchers continue to propose new service models.
Fundamentally, all these service models share a common objective: to provide
Quality of Service (QoS). In order to meet the demand of varying service qualities, each
model has to address problems such as resource management, service subscription, packet
queuing and scheduling, packet classification, etc.
Different models address these
problems in different ways and from different perspectives.
The Rate Proportional Differentiation (RPD) Model recently introduced in [8]
proposes yet another service model, distinct from DiffServ and IntServ. The distinction of
this model from the other service models rests on the observation that most current network
applications are adaptive. In the RPD model, applications are intelligent and selectively
use different service classes based on the state of the network. As such, the RPD model
avoids the many complexities involved in DiffServ and IntServ in maintaining the state of
the network. For example, in IntServ, the RSVP protocol is used for signaling and network
nodes are required to maintain the state of each active flow. This makes IntServ very
complicated and not scalable. DiffServ, although much simpler than IntServ, requires
service subscription and bilateral agreements to be negotiated along the flow path. To
support more sophisticated functionality, such as dynamic agreement negotiation and
dynamic network service subscription, DiffServ would need some dedicated facilities,
such as Bandwidth Broker (BB). In the RPD model, however, none of these is necessary.
However, there are many open issues that need to be addressed in the RPD model,
such as interaction between responsive and non-responsive traffic and fairness concerns.
In addition, the model has much room for improvement in terms of robustness. This thesis
addresses the issues of the RPD model from both the algorithmic and implementation
points of view.
1.2 Thesis Objectives and Methodology
Although the RPD model possesses many advantages over the DiffServ and IntServ
models, there exist a number of problems. One of the major deficiencies of the original
RPD model is that it fails to incorporate the interaction between responsive traffic and
non-responsive traffic. The responsive traffic is typically the TCP traffic whereas the
non-responsive traffic is typically UDP traffic originating from video/audio
streaming applications.
In addition, it does not consider the interaction between the
congestion control mechanisms at the end systems and the traffic shaping and scheduling
algorithms at the network nodes.
Furthermore, the robustness of the model needs
enhancement since traffic starvation and the simultaneous switch syndrome¹ might
develop under certain circumstances. In addition to the improvement mentioned above,
we also develop and implement two algorithms for packet queuing and scheduling, which
are essential in supporting a set of differentiated service classes in the RPD model.
Our work heavily relies on simulation using the software NS-2, a network simulator
developed by UC Berkeley, LBL, USC/ISI and Xerox PARC. For some problems, such as
traffic starvation, a mathematical proof is followed by the simulation. For those problems
that are too complicated to allow the establishment of a mathematical model beforehand,
we simply perform the simulation and analyze the simulation results. Therefore, our work
in this thesis is either based on solid mathematical analysis or backed by simulation results.
1.3 Contributions
The major contributions of the thesis are listed as follows:
Based on observing the types of traffic on the current Internet, we find that two
important types of traffic, responsive and non-responsive traffic, must be handled
separately. We thus introduce the idea of separate handling of the two types of traffic
in the framework of the packet forwarding scheme. The differentiation of the two types of
traffic is applied to all the algorithms developed in this thesis.
Since almost all non-responsive traffic, typically UDP traffic, originates from
video/audio streaming applications, and these applications are sensitive to packet loss
¹ The simultaneous switch syndrome is explained in Chapter III.
and delay, differentiation, in terms of both drop rate and delay, is proposed at network
nodes. For responsive traffic, bandwidth is considered as the major metric in the
evaluation of the service quality. Interacting with congestion control mechanisms at
the end systems, packet drop precedence at the network nodes can effectively regulate
the bandwidth sharing among flows. This thesis proposes to add a set of service
classes that provides only packet drop differentiation to achieve the differentiated
bandwidth sharing among responsive flows.
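The leverage that drop precedence gives over responsive traffic can be illustrated with the well-known TCP throughput approximation T ≈ (MSS/RTT)·√(3/(2p)) from the TCP literature (an illustration only, not a result of this thesis; the function and parameter names are our own):

```python
import math

def tcp_throughput(mss_bytes, rtt_s, drop_prob):
    # Mathis et al. steady-state approximation: T ~ (MSS / RTT) * sqrt(3 / (2p)).
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / drop_prob)

# Two responsive flows with the same MSS and RTT but different drop precedence.
low_class = tcp_throughput(1460, 0.1, 0.02)     # higher drop probability
high_class = tcp_throughput(1460, 0.1, 0.005)   # lower drop probability

# The bandwidth ratio tracks sqrt(p_low / p_high) = sqrt(0.02 / 0.005) = 2.
print(high_class / low_class)  # ≈ 2.0
```

A node that assigns a four-times-smaller drop probability to a higher class thus roughly doubles that class's bandwidth share, which is the mechanism the proposed drop-differentiation classes exploit.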
Based on the original algorithm in [8] that provides the Accumulative Rate
Proportional drop rate differentiation (ARP), we develop and implement an improved
algorithm, the so-called Enhanced ARP (EARP), to address the potential problem of
traffic starvation.
For the case of responsive traffic, our work introduces a fairness requirement within
each service class. So far, little work has been done to address this problem within the
framework of DiffServ. However, this fairness, as noted in [1], dictates the viability of
the service model and the predictability of the quality of service. In addition, we
propose a new forwarding scheme, the so-called Weighted Fairness Random Early
Detection/Drop (WFRED), which is an integration of Weighted RED and Fairness
RED and satisfies the fairness requirements within each class. The objective of
WFRED is to maintain the differentiation among service classes while enforcing the
fairness among the flows within each service class. This WFRED algorithm is also
implemented and its performance is evaluated via simulation for both the DiffServ and
RPD models.
We propose and implement a modification to the original Random Early
Detection/Drop (RED) algorithm that significantly improves the performance of a
RED gateway. This enhanced RED algorithm is applied in the WFRED
scheme mentioned above. We also introduce an effective buffer management
implementation to reduce the cost of selecting and dropping packets
queued in the buffer.
1.3 Thesis Outline
Following this introductory chapter, the thesis is organized as follows. Chapter II
presents an overview of the Quality of Service. A number of service models and a
taxonomy on the models are described. A number of related research directions are also
presented. This chapter helps to position our work in the overall framework of Quality of
Service.
Chapter III identifies a number of problems in the original RPD model that lead to
our development of the enhanced RPD model. The major improvements focus on
preventing the simultaneous switch syndrome and on handling responsive and
non-responsive traffic separately. An evaluation of the model based on the
measures of robustness, cost efficiency and fairness is presented. Finally, the related
existing forwarding algorithms are surveyed.
Chapter IV focuses on the problem of traffic starvation in the original ARP algorithm.
Simulation is performed to show the significance of the improvement. In addition, we
investigate the potential usage of the algorithm in supporting the Assured Forwarding (AF)
Per-Hop-Behavior (PHB) in DiffServ.
Chapter V presents the Weighted Fairness RED (WFRED), an integration of the
Weighted RED and the Fairness RED. The objective of the algorithm is to satisfy the
fairness requirement within each service class.
Chapter VI summarizes the thesis and offers suggestions for future work.
CHAPTER II Quality of Service (QoS) – The Big Picture
With the advent of diversified network applications and utilization, the services
demanded by users demonstrate divergent requirements. The concept of QoS is introduced
to meet the challenge of the problem. In recent years, various issues of QoS have been
under intensive research and study in both the academic community and industry. In this
chapter, we present a framework for the emerging QoS.
2.1 Absolute Differentiation vs. Relative Differentiation
In the traditional Internet, network nodes forward all the packets without
discrimination. Packets are processed as quickly as possible. With the rapid
transformation of the Internet into a commercial infrastructure, demand for service quality
has rapidly developed. An adequate understanding of the different demands of service can
serve as a foundation on which research and development work is conducted. In this
section, we identify the measures of service quality and introduce two types of service
differentiation.
In the network, the desired quality of service is measured in terms of delay, latency,
latency variation (jitter), throughput and drop rate. In a broader connotation, as raised in
[1], quality of service also includes the availability of service. Differentiated service, in its
fundamental sense, is to provide different qualities of service to different users in terms of
these measures.
Although delay and latency both cause time loss on the transmission of packets,
delay is due to the propagation speed of electromagnetic waves in the medium, whereas
latency is caused by queuing delay at network nodes. Delay is a concern in Traffic
Engineering and QoS Routing. When there are multiple routes to a destination, the nature
of the application generating the traffic should be considered as one of the factors in the
routing decision. For example, when transmitting a large FTP file, a high-delay but
high-throughput satellite link is preferred to a low-delay, low-throughput link.
Jitter, which is caused by the variation of network latency, has a negative impact on
the quality of audio/video applications. Besides jitter, drop rate also compromises the
fidelity of audio/video playback.
By and large, the requirements of service quality can be classified into two categories:
quantitative requirements and qualitative requirements. The quantitative requirements
usually apply to interactive multimedia applications and mission critical applications such
as real-time controlling systems. This type of applications depends on the timeliness and
the guarantee of packet delivery, so that bounded quality parameters, such as bounded
delay, bounded jitter or bounded drop rate, are often a necessity in the evaluation of the
service. In contrast to the quantitative requirements, the qualitative requirements do not
have quantifiable criteria. They often apply to elastic applications with built-in
mechanisms that adapt to network conditions. They also apply to user groups that are
willing to pay a little more for a higher quality of service.
Based on the two types of requirements, the differentiation of service quality can be
categorized as absolute differentiation and relative differentiation.
In the absolute
differentiation model, the network provides a set of service classes with different
quantitative quality criteria, whereas in the relative differentiation model, the only promise
from the network is that the quality of service received by a higher class user is better than
that received by a lower class user.
2.2 Service Models
For the past few years, the architecture, requirements and mechanisms for providing
service differentiation in the Internet have been the focus of extensive research. These
research efforts have identified two fundamentally different approaches for service
differentiation: Integrated Services (IntServ) and Differentiated Services (DiffServ).
2.2.1 The Integrated Services (IntServ) Model
One of the assumptions of the IntServ model is that network resources (e.g.,
bandwidth) must be explicitly managed in order to meet application requirements. This
implies that resource reservation and admission control are the two key building blocks of
the model [5]. In other words, the IntServ model assumes that QoS guarantees cannot be
achieved without reservation.
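As a minimal sketch of these two building blocks (our illustration with invented names; real IntServ admission control and RSVP signaling are far richer), a parameter-based admission test simply refuses any reservation that would push the sum of reserved rates past the link capacity:

```python
class AdmissionController:
    """Parameter-based admission control: admit a flow only if the sum of
    reserved rates stays within the link capacity."""

    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0

    def request(self, rate_bps):
        if self.reserved_bps + rate_bps <= self.capacity_bps:
            self.reserved_bps += rate_bps   # reservation installed
            return True
        return False                        # request rejected

link = AdmissionController(capacity_bps=10_000_000)   # a 10 Mb/s link
print(link.request(6_000_000))  # True: 6 Mb/s fits
print(link.request(5_000_000))  # False: would exceed 10 Mb/s
```

The per-flow state held here (`reserved_bps` per reservation) is exactly what makes IntServ hard to scale in the network core, as discussed below.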
The architecture of the Integrated Service consists of two elements: the IntServ
model and the reference implementation framework. The IntServ model, which defines
the externally visible behaviors, proposes two types of services targeted towards real-time
traffic: the guaranteed service and the predictive or controlled load service.
Aiming at hard-real-time requirements, the Guaranteed Services provide firm
bounds on queuing delay.
The Controlled-Load Services are designed for adaptive
real-time applications such as audio/video playback applications. Traffic belonging to the
Guaranteed Services should expect bounded delay and zero loss whereas the
Controlled-Load Services flows should expect to experience little queuing delay and little
congestion loss.
The framework of IntServ consists of four components: the admission control
routine, the reservation setup protocol, the classifier and the packet scheduler. Each
component assumes the corresponding function in the setting up, forwarding and traffic
control of a QoS packet flow.
The IntServ model is based on a solid background of research in QoS mechanisms
and protocols. However, the acceptance of IntServ in industry is limited, at least so far.
This is due to the scalability and manageability problems. Since IntServ focuses on
individual packet flow, it requires nodes in the network core to keep the state of each flow.
This is not viable for a network core node handling thousands of connections
simultaneously. Although extensions to RSVP enable flow aggregation at the network core,
IntServ places demanding requirements on routers: every router along the path must
support RSVP, admission control, Multiple Field (MF) classification and packet
scheduling. This is an impediment to the deployment of IntServ. In addition,
end-to-end QoS support requires a multilateral service level agreement to be reached.
2.2.2 The Differentiated Services (DiffServ) Model
DiffServ has been a focus of research within the QoS community in recent years.
Due to the slow deployment of IntServ/RSVP, people have worked around the problem
with a new service model.
The architecture of the DiffServ model is composed of a number of functional
elements. It includes a small set of per-hop forwarding behaviors, packet classification
functions, and traffic policing functions such as metering, marking and shaping. To
achieve scalability, complex classification and traffic policing functions are only
implemented at the network boundary nodes. By applying Per-Hop Behaviors (PHB) to
aggregates of flows based on the Differentiated Services (DS) Field (TOS Field) in the IP
header, the per-application or per-customer forwarding state need not be maintained at the
network nodes any more. Admission control and resource provisioning can be either
dynamic or static. A new optional element, the Bandwidth Broker (BB), is introduced to
supervise the overall resource and admission management within a DiffServ domain.
A number of fundamental principles give DiffServ many advantages over the
IntServ model. First, DiffServ does not enforce end-to-end service. All service level
agreements are bilateral, through either manual negotiation or automatic reconciliation
between BBs. Second, DiffServ reduces the demand on network core nodes by pushing
complicated functions, such as traffic policing and packet classification, to the edge
nodes. The core nodes only implement a small number of PHBs.
provisioning and traffic conditioning policies are sufficiently de-coupled from the
forwarding behaviors within the network interior to permit the implementation of a wide
variety of service behaviors. All these principles ensure the scalability, flexibility and
manageability of the DiffServ model.
So far two types of PHBs, Expedited Forwarding (EF) and Assured Forwarding (AF),
have been advanced to the standards track.
The EF PHB is used to build a low loss, low latency, low jitter, assured bandwidth,
end-to-end service through DS domains. Such a service appears to the endpoints as a
point-to-point connection or a "virtual leased line". This service has also been described as
Premium Service [14]. To achieve this service, two parts are needed: the EF PHB and
traffic conditioning at the edge nodes. The EF PHB ensures that the EF traffic aggregate
has a well-defined minimum departure rate. Traffic conditioning, including policing and
shaping, ensures that the arrival rate at any node is always less than that node's configured
minimum departure rate.
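Such edge conditioning is typically realized with a token bucket. The sketch below (class and parameter names are our own; the EF specification does not mandate a particular implementation) polices traffic to a configured rate and burst:

```python
class TokenBucket:
    """Token-bucket policer: tokens accrue at `rate` bytes/s up to `burst`
    bytes; a packet conforms only if enough tokens are available."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst   # start with a full bucket
        self.last = 0.0

    def conforms(self, now, pkt_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True     # forward: within profile
        return False        # police: drop, or delay if shaping

tb = TokenBucket(rate=125_000, burst=3_000)   # 1 Mb/s, 3 KB burst
print(tb.conforms(0.0, 1_500))  # True: within the burst allowance
print(tb.conforms(0.0, 1_500))  # True
print(tb.conforms(0.0, 1_500))  # False: bucket exhausted, no time elapsed
```

A policer drops the non-conforming packet; a shaper would instead queue it until enough tokens accrue.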
The AF PHB, which is defined in [15], provides different drop precedence at the time
of congestion. A typical service built upon AF PHB is called Assured Service, in which
each user subscribes to a committed traffic profile. When a user's flow rate exceeds the
profile, some of the packets, which are marked as the out profile packets, are subject to a
higher drop probability. In the DiffServ framework, the traditional best-effort traffic has
the lowest forwarding priority.
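At a congested node, this drop-precedence distinction can be realized RIO-style, with out-profile packets seeing lower queue thresholds than in-profile packets. The sketch below uses illustrative threshold values of our own choosing, not figures from any specification:

```python
import random

# (min_th, max_th) pairs in packets; illustrative values only.
THRESHOLDS = {"in": (40, 70), "out": (10, 30)}

def rio_drop(avg_queue, profile):
    # RED-style drop test: never drop below min_th, always drop above max_th,
    # and drop with linearly increasing probability in between.
    lo, hi = THRESHOLDS[profile]
    if avg_queue < lo:
        return False
    if avg_queue >= hi:
        return True
    return random.random() < (avg_queue - lo) / (hi - lo)

# At the same average queue length, out-profile packets are dropped first.
print(rio_drop(35, "in"))    # False: still below the in-profile min threshold
print(rio_drop(35, "out"))   # True: already above the out-profile max threshold
```

Because the out-profile thresholds sit entirely below the in-profile ones, congestion pressure falls on excess traffic before any committed traffic is touched.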
2.2.3 The Rate Proportional Differentiation (RPD) Model
The work in this thesis is mainly motivated by the Rate Proportional Differentiation
Model, which appeared in [8]. Based on the observation that most current multimedia
applications are capable of being adaptive to network conditions, that is, applications are
able to tolerate a certain degree of latency, jitter and packet loss, the model proposes that
the network provide a set of service classes. In particular, the differentiation among service
classes takes the following form:
    l_i(t, t+τ) / l_j(t, t+τ) = δ_i / δ_j                                    (1.1)

where δ_i is the differentiation parameter of class i, and l_i(t, t+τ) can be either the
queuing delay or the inverse of the drop rate of the arrivals of class i during the interval
(t, t+τ).
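For loss-rate differentiation, equation (1.1) with l_i the inverse of the drop rate requires the measured drop rates r_i to satisfy r_i·δ_i = r_j·δ_j. A counter-based sketch of that invariant follows (our own didactic illustration; the thesis's ARP/EARP algorithms in Chapter IV are more sophisticated):

```python
class ProportionalDropper:
    """Keeps drop_rate_i * delta_i equal across classes: on each forced drop,
    drop from the class whose normalized drop rate is currently lowest."""

    def __init__(self, deltas):
        self.deltas = deltas                       # class -> differentiation parameter
        self.arrivals = {c: 0 for c in deltas}
        self.drops = {c: 0 for c in deltas}

    def arrive(self, cls):
        self.arrivals[cls] += 1

    def pick_drop_class(self):
        def normalized(c):
            rate = self.drops[c] / max(self.arrivals[c], 1)
            return rate * self.deltas[c]           # the quantity to equalize
        victim = min(self.deltas, key=normalized)
        self.drops[victim] += 1
        return victim

d = ProportionalDropper({"gold": 4.0, "silver": 1.0})
for _ in range(1000):
    d.arrive("gold")
    d.arrive("silver")
for _ in range(100):                               # congestion forces 100 drops
    d.pick_drop_class()
print(d.drops)   # silver absorbs roughly 4x the drops of gold
```

With equal arrival counts, the resulting drop rates stand in the inverse ratio of the differentiation parameters, which is exactly equation (1.1) for the loss-rate metric.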
In addition, at the network edge, traffic accounting is used instead of traffic policing;
users are charged based on service class usage. In this model, applications are considered
intelligent and adaptive: they use the lower service class as much as possible and migrate
to a higher service class only when the current class cannot meet their quality of service
demand.
2.3 Taxonomy on Service Models
So far, we have presented three service models. Table 1 shows the taxonomy on
these service models based on the analysis in the first section of this chapter.
Table 1 The Taxonomy on Service Models

                              Absolute Differentiation    Relative Differentiation
  Integrated Services         Guaranteed Services         Controlled-Load Services
  Differentiated Services     Premium Services            Assured Services
  Rate Proportional           --                          Rate Proportional Loss Rate,
  Differentiation Model                                   Proportional Delay
As shown in Table 1, both the Controlled-Load Service in IntServ and the Assured
Service in DiffServ are categorized as providing the relative differentiation.
The implementation details and mechanisms of these two services differ since they
belong to different service differentiation models. However, from the perspective of
external behavior, these two services are quite similar. They both need the user to
subscribe to the service with the token bucket parameters specifying the required quality.
The quality assurance of the service relies on the user's commitment to the subscribed
traffic profile. In addition, probabilistic approaches are used in the admission control of
both service models. This implies that, although with low probability, under-provisioning
can happen in the network and users might receive a lower quality of service than the
subscribed quality.
Although the RPD service model belongs to the same category as that of the
Controlled-Load Service and the Assured Service, the RPD model provides genuine
relative differentiation. This will be explained in later chapters.
2.4 Other QoS related issues
Besides research on different service models, a number of related components have
been under study within the context of QoS. In this section, we briefly discuss the basic
concept of traffic engineering and traffic modeling.
2.4.1 Traffic Engineering and Constraint Based Routing
In a network, one reason for congestion is unevenly distributed traffic where some
parts of the network are overloaded while other parts are lightly loaded. This is because the
traditional route calculation is only based on relatively static information - the number of
hops and the link delay. Traffic Engineering refers to the process of arranging how traffic
flows through the network so that congestion caused by uneven network utilization can be
avoided. Constraint Based Routing, which evolves from the QoS routing, is an important
mechanism for making the Traffic Engineering process automatic. In the OSI reference
model, Constraint Based Routing resides in the third layer. The goal of Constraint Based
Routing is:
1. To select routes that can meet certain QoS requirements;
2. To increase the utilization of the network [3].
14
Constraint Based Routing encompasses a number of aspects: route information
distribution, QoS route computation, route establishment and route maintenance. For each
aspect, there exist different approaches and algorithms. For example, for route information
distribution, we can have either periodic updates or triggered distributions. With regard to
the timing of route computation, we can use either on-demand computation or
pre-computation.
The disadvantages of Constraint Based Routing are: 1) increased communication and
computation overhead, as well as the space needed to hold larger routing tables, 2) longer
paths that may consume more resources, and 3) potential routing instability. These problems
must be solved in the design of a QoS routing algorithm and the deployment of the QoS
routing.
2.4.2 Traffic Modeling
The characteristics of data traffic play a crucial role in the performance analysis and
the design of communication networks. Understanding the models of network traffic can
help us design better protocols, better network topologies, etc. [21]. In addition, within the
context of differentiated service, we believe it also plays an important role for network
service providers in increasing their revenue by setting up appropriate billing tiers. Traffic
modeling provides a way to capture those traffic characteristics.
Historically, the Stochastic Model, which worked well in the early ARPANET, assumes
Poisson packet arrivals and exponentially distributed message lengths. Contrary to the
Poisson model, which assumes packets are independent, the "packet train" model, which
became popular in the 1980's, assumes that a group of packets travels together as a train.
Recent studies have found self-similar (fractal) features in LAN traffic and long-range
dependence in WAN traffic. All of these models work well in some situations. However,
as pointed out in [22], it is very
challenging to model the traffic patterns in today’s Internet since it is extremely
heterogeneous and fast-moving. Two guidelines were proposed in [22] to help in searching
for a solution to the problem: 1) searching for invariants; and 2) careful exploration of
the parameter space.
In this thesis, we make use of these guidelines in establishing our simulation
environment. Most of the previous research using NS-2 that appears in the literature uses
the persistent TCP sender model, in which the sender node always has data to send.
However, in the Internet, most traffic is generated by HTTP
senders that are usually not persistent. In this thesis, besides the persistent TCP sender, the
stochastic TCP sender that mimics the real world more closely is also used in the
simulation.
Second, for a given algorithm, such as the Weighted Random Early
Detection/Drop (WRED), the parameter space has a significant influence on the effect of
the algorithm. In our simulation, we carefully distinguished the parameters that were kept
relatively constant and those that were changeable.
2.5 Summary
In this chapter, we presented an overview of QoS and a number of its important
components: the service models, traffic engineering and traffic modeling. This outlined
the framework in which our research on the RPD model is conducted.
Chapter III Enhancing the RPD Model
As described in the preceding chapter, based on the observation that many
applications are capable of being adaptive to the condition of the network, [8] proposes the
RPD model. In this model, the network provides a set of differentiated service classes.
The higher-class service is subject to a higher price. When sending packets, the
application marks the packets into different service classes based on the current network
condition. If the current class of service cannot meet the requirement, the application
switches to a higher class. The user is charged for the usage of the service classes.
Therefore, at the network ingress, unlike traffic In/Out profile marking as in the DiffServ
architecture, service usage accounting is applied.
With regard to the forwarding mechanism, the model adopts a separate dropper and
scheduler to provide drop rate differentiation and delay differentiation. The proposed
mechanism is thus different from the RED/RIO scheme proposed in [14]. Since RED/RIO
integrates both dropping and scheduling, its behavior has been shown to be unpredictable
in terms of service differentiation [17].
Although the RPD Model possesses a number of advantages over DiffServ, it has
some problems. In the following, we discuss a number of these concerns and introduce an
enhanced RPD model. At the end, an evaluation of the model based on the criteria
proposed in [1] is presented.
3.1 Improving the RPD model
3.1.1 Simultaneous Switch Syndrome
In the RPD model, a devastating situation can develop when the network becomes
heavily loaded. Under heavy load, applications may find that the service classes they
subscribe to do not deliver the required quality, and then switch to higher service classes
simultaneously. It is possible that (almost) all applications end up in the highest service
class. This leaves effectively one service class in the network for all applications:
nominally the first-class service, but in reality nothing other than best effort service.
To address this problem, one solution is to use traffic policing at the network edge and
to apply admission control. This, however, makes the model as complicated as Assured
Forwarding in DiffServ. Another alternative is to have the application return to the lowest
class immediately when it has migrated to the highest class but the required quality of
service is still not met. An algorithm is required to determine when the application
should switch between the service classes so that users incur minimal usage charges and
the syndrome does not last long. Such an algorithm is future work and is beyond the scope
of this thesis.
3.1.2 Coping with Non-responsive and Responsive Traffic
In the Internet, there exist two types of traffic: responsive and non-responsive (also
called adaptive and non-adaptive) traffic. Responsive traffic is generated by senders using
protocols with built-in congestion avoidance mechanisms, such as TCP. These senders
back off when they detect that congestion occurs or is about to occur in the network.
A typical non-responsive flow is a UDP flow.
For responsive traffic, particularly TCP, in which packet loss is usually attributed to
congestion in the network, loss rate differentiation provides a means to regulate the
bandwidth sharing among connections belonging to different service classes. In today's
Internet, the majority of TCP traffic is sent by HTTP, FTP and Email applications. In
modern telecommunication networks, latency is no longer a big concern for these
applications. By selectively dropping TCP packets, queuing delay at the network nodes
can be restricted to a limited level. Therefore, bandwidth sharing becomes the major
concern in the quality of service for these types of applications.
In the case of non-responsive traffic, typically UDP traffic, the sender keeps sending
packets as long as it has data to send, regardless of what happens in the network. The
non-responsive protocol usually does not provide a feedback mechanism to help the
application judge the condition of the network. The original RPD model, which relies on
the adaptability of the application, is not suitable for non-responsive traffic since there is
no feedback mechanism available to report the current condition of the network.
Fortunately, UDP flows in the current Internet are dominantly generated by Real Time
Protocol (RTP) applications, and RTP provides a somewhat adaptive mechanism [18]. An
RTP application uses UDP for carrying data and uses a companion control protocol
(RTCP) for control signaling. The RTP receiver periodically reports to the sender about
the quality of the received data. Based on the receiver's feedback, the sender adjusts its
sending rate by various means, such as using a different compression ratio. Nevertheless,
the adaptive mechanism in RTP is not comparable with the congestion control mechanism
in TCP, since they use different methodologies and target different problems. This kind
of UDP traffic is referred to as pseudo non-responsive traffic.
One weakness of the original RPD model is that it assumes responsive flows and
non-responsive flows are aggregated. It ignores their different requirements and fails to
take into account the interaction between these two types of flows. It is well known that
a UDP flow usually degrades the service of a TCP flow when the two types of traffic are
aggregated. This issue has been studied extensively in the literature.
Due to the different natures of the TCP flow and the UDP flow, they exhibit different
behaviors under packet loss. For a UDP flow, given a fixed loss rate l, the bandwidth
share of the flow is (1 - l)r, where r is the flow's input rate. However, as presented in
[27], the congestion window size of a persistent TCP connection is of the order of
1/√p, where p is the drop probability. In the case of a non-persistent TCP connection,
the relation between the congestion window size and the drop probability is more
complicated. A more sophisticated analysis of this problem can be found in [28][29].
The different impacts of packet loss on TCP and UDP flows demonstrate that by applying
the same drop rate alone, we cannot achieve fairness in terms of bandwidth sharing
between these two types of flows.
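The contrast between the two responses can be made concrete with a small numeric sketch. This is illustrative only: the 1/√p relation is the persistent-connection approximation cited above, and the rate, MSS and RTT values below are made-up examples, not measurements from this thesis.

```python
# Illustrative sketch (not from the thesis): how goodput responds to the drop
# probability p for a non-responsive (UDP) and a responsive (TCP) flow. The
# TCP curve uses the well-known 1/sqrt(p) relation for a persistent connection
# [27]; the constants (rate, MSS, RTT) are made-up example values.
import math

def udp_goodput(rate_bps, p):
    """A non-responsive sender keeps sending: goodput is simply (1 - p) * rate."""
    return (1.0 - p) * rate_bps

def tcp_goodput(mss_bytes, rtt_s, p):
    """Persistent-TCP approximation: window ~ C / sqrt(p) packets, so goodput
    ~ (MSS / RTT) * C / sqrt(p); C ~ sqrt(3/2) is a commonly used constant."""
    c = math.sqrt(1.5)
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(p)

for p in (0.01, 0.04, 0.16):
    print(p, round(udp_goodput(1e6, p)), round(tcp_goodput(1500, 0.1, p)))
```

Quadrupling the drop probability halves the TCP flow's goodput but barely changes the UDP flow's goodput, which is why a single drop rate cannot equalize bandwidth shares between the two.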
Realizing that UDP traffic is dominated by multimedia applications and that almost all
TCP applications have only bandwidth requirements, as suggested by other research as
well, in this thesis we handle these two types of traffic separately. We propose to add to
the original RPD model a mechanism that provides only rate proportional loss rate
differentiation for the TCP traffic. As a result, the improved RPD model consists of two
parts: for multimedia traffic, particularly UDP traffic, both rate proportional loss rate and
delay differentiation are applied; for TCP traffic, only loss rate differentiation is provided.
Around these two kinds of differentiation, in the coming chapters we introduce two
algorithms for the implementation of the model.
It is worth noting that both the original RPD model and the improved model only
deal with pseudo non-responsive traffic and neglect truly non-responsive traffic. This is
considered acceptable since such applications rarely exist in today's Internet, and even
when they do, they are rarely used for bulk transmissions.
3.2 Evaluation of the Rate Proportional Differentiation Model
In [1], a number of criteria are proposed to evaluate a differentiated service model.
In this section, we evaluate the RPD model from three aspects: robustness, cost
effectiveness and fairness.
3.2.1 Robustness
In [1], the concept of levels of aggregation is introduced for evaluating a
differentiated service model. Corresponding to different levels of aggregation, [1]
introduces three QoS service models: the application model, the customer model and the organization
introduces three QoS service models: application model, customer model and organization
model. In the application model, applications need enough resources for sufficient quality.
In the customer model, each customer should get a fair amount of resources relative to the
price. In the organization model, organizations should get the right amount of total
resources relative to the total price and the technical support of the optimal use of
resources.
The RPD model falls under the category of the customer model. As analyzed in [1],
the customer model is the most robust model in circumstances where users generally
behave egoistically. Considering that today's Internet is extremely heterogeneous and that
users' behavior is not always agreeable, the RPD model is quite appropriate.
3.2.2 Cost Efficiency
In a modern telecommunication system, cost is partly related to packet loss. Since a
lost packet may have traveled a long distance before it is dropped, such a packet loss
wastes network resources. The RPD model assumes an intelligent application that can
dynamically select a service class to obtain satisfactory quality. This helps to reduce the
number of dropped packets in the network.
Table 2 A comparison between two types of Loss Rate Differentiation.

                          Assured Forwarding               RPD
-----------------------   ------------------------------   ---------------------------
Admission control and     Required                         Not a must
resource provisioning
End user                  May need traffic marker          May need traffic marker
Network edge nodes        Traffic classifier; traffic      Traffic classifier; usage
                          policing (shaper, dropper);      accounting
                          traffic meter; may require
                          usage accounting
Network core nodes        Forwarding PHB                   Forwarding mechanism
Another aspect of cost efficiency is the complexity of the implementation and
deployment of the service model. The emergence of DiffServ is partly due to the
scalability problem of IntServ. The RPD model introduces yet another approach, even
simpler than DiffServ. Since AF in DiffServ and the RPD model both provide relative
differentiation, Table 2 compares the components required to deploy the two service
models. As we can see, the requirements for implementing the RPD model are less
demanding.
3.2.3 Fairness
In both the DiffServ model and the RPD model, flows of the same class are
aggregated at the network node, so it becomes difficult to achieve fairness among flows
within each class. However, fairness within each class is important from the user's
perspective. Here we define fairness as all flows within each class experiencing the same
throughput or delay. Usually, higher-class users expect higher quality of service or a
larger bandwidth share for the higher price they pay. A service model that cannot
guarantee, or ensure with high probability, fairness at each user's granularity cannot
provide an acceptable differentiated service. We argue that fairness is a necessity in the
implementation of a service model.
Although neither DiffServ nor the RPD model explicitly mentions the fairness
requirement, fairness should be considered an implicit requirement, and in the design of
an implementation algorithm we have to take it into account. However, as discussed in [1],
this problem can be extremely difficult.
Since, as proposed in the previous section, the UDP and TCP traffic are handled
separately, we no longer need to be concerned with fairness between these two types of
traffic. Because each UDP packet can be considered in isolation, fairness among UDP
flows within each service class is relatively easy to achieve. The major fairness concern
in the RPD model is to provide fair bandwidth sharing among TCP flows within each
service class. Chapter V proposes an algorithm to address the problem.
It is worth noting that there may exist other fairness implications, such as fairness
requirements based on the user type or the application type, in the implementation of a
service model. This issue is beyond the scope of this thesis.
3.4 Related Existing Forwarding Algorithms
QoS has been under study for many years; as a result, a large number of algorithms
are candidates for the implementation of the RPD model. This section gives an overview
of these algorithms.
In general, forwarding algorithms can be divided into two classes: multiple queue
algorithms and single queue algorithms. A queue is a virtually or physically divided
buffer at the network node that holds the packets to be sent. A multiple queue algorithm
involves a scheduler to choose the next packet to be sent. In contrast, a single queue
algorithm forwards packets in FIFO (First In First Out) order, although it may apply
different traffic policies to different types of packets.
Weighted Fair Queuing (WFQ), Weighted Round Robin (WRR), Deficit Round
Robin (DRR) and Class Based Queuing (CBQ) all belong to the category of multiple queue
algorithms.
Weighted Fair Queuing (WFQ) computes the time at which a packet would complete
service had a Generalized Processor Sharing (GPS) [34] server been used to serve the
packets, and then serves packets in order of these finishing times. Because WFQ incurs
a substantial computational overhead, a number of efforts have been made to simplify
the computation. A number of WFQ variants can be found in [35][38][36]. Although
WFQ can provide bounded delay and accurate bandwidth sharing, as pointed out in [1],
three major factors make it inappropriate for DiffServ. First, WFQ uses per-flow
queuing. Second, WFQ systems need to know the weight of each flow. Third, WFQ
requires considerable computational effort. Even with reduced computational cost and
using WFQ to offer fair service between aggregates, fairness within each aggregate still
remains a problem. These reasons apply to the RPD model as well.
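The finish-time idea behind WFQ can be sketched as follows. This is a simplified, hypothetical illustration: it serves packets by per-flow virtual finish times but does not track the GPS virtual time, so it is closer in spirit to start-time fair queuing than to exact WFQ; the flow names and weights are made up.

```python
# A toy sketch of weighted fair queuing by virtual finish times (a simplified
# approximation; exact WFQ additionally tracks the GPS virtual clock).
import heapq

class ToyWFQ:
    def __init__(self, weights):
        self.weights = weights                    # flow id -> weight
        self.finish = {f: 0.0 for f in weights}   # last virtual finish per flow
        self.heap = []                            # (finish, seq, flow, size)
        self.seq = 0

    def enqueue(self, flow, size):
        # Each packet finishes size/weight virtual time after the flow's
        # previous packet, so heavier flows accumulate finish times slower.
        f = self.finish[flow] + size / self.weights[flow]
        self.finish[flow] = f
        heapq.heappush(self.heap, (f, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        # Serve the packet with the smallest virtual finish time.
        _, _, flow, size = heapq.heappop(self.heap)
        return flow, size

q = ToyWFQ({"A": 2.0, "B": 1.0})      # flow A has twice flow B's weight
for _ in range(4):
    q.enqueue("A", 100)
    q.enqueue("B", 100)
order = [q.dequeue()[0] for _ in range(8)]
print(order)
```

With a 2:1 weight ratio and equal-sized backlogged packets, flow A's packets obtain earlier virtual finish times, so A is served roughly twice as often as B while both queues are backlogged.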
In the case of providing rate proportional loss rate differentiation, a naive thought is
to make the weight of each class proportional to the rate of each aggregate. Ideally, the
weight should be proportional to the number of connections within each aggregate. A
careful study shows that this is extremely difficult. The number of connections can only
be estimated from the packets in the buffer at the network node, but the packets in the
buffer do not necessarily represent the actual number of active connections within a
service class at a given time; this makes the weight allocation inaccurate. Certainly, a
more sophisticated method could be used to measure the incoming rate of each aggregate,
but since we still have the problem of fairness within each class, it is simply better to
avoid this already complicated approach. It is worth noting that another category of less
complicated multiple queue algorithms based on round robin, including WRR and DRR
[33], is a possible solution. However, these algorithms have the same problem of weight
allocation.
CBQ, as mentioned in [1][25], is technically a reasonable approach. However, it
does not address the fairness within each aggregate either.
With regard to single queue algorithms, since providing differentiated delay is
impossible with FIFO, the only differentiation a single queue algorithm can provide is on
loss rate. The most widely used one, Weighted RED, uses different parameter settings for
different classes. We will illustrate this algorithm in detail later. However, this algorithm
alone cannot achieve fairness within each service class. One part of this thesis explores
a way of achieving fairness within each service class.
In [8], separate algorithms for drop rate differentiation and delay differentiation are
proposed. The ARP algorithm, which provides drop rate differentiation, maintains a data
structure holding the number of arrivals and drops for each class in a given time interval.
The data structure is used to decide from which class a packet should be dropped when
a drop is needed. The next chapter discusses this in detail and proposes a fix to a
potential problem of the algorithm.
3.5 Summary
In this chapter, we pointed out a potential problem of the RPD model – the
simultaneous switch syndrome. A general idea was given to address the problem. By
analyzing the traffic in the Internet, we proposed to separate the handling of
non-responsive traffic and responsive traffic. For non-responsive traffic, we proposed to
apply the rate proportional loss rate and delay differentiation. For responsive traffic, we
proposed to use rate proportional loss rate differentiation only. An evaluation of the RPD
model based on the principles of fairness, robustness and cost efficiency was presented.
In the end, we discussed related existing algorithms and showed that none of them could
satisfy the implementation requirements of the RPD model.
Chapter IV Improving the ARP Algorithm
In this chapter, we analyze the ARP algorithm proposed in [8] and propose a fix to
the potential traffic starvation problem.
4.1 Traffic Starvation
In [8], a rate proportional dropper is suggested. Let us consider a situation in which
some packet dropping occurs. In this situation, the dropper tries to maintain the following
equation:

    σ_i r'_i / (σ_j r'_j) = r_i / r_j

where r'_i and r_i stand for the loss rate and the arriving rate of class i respectively, and
σ_i is the drop rate differentiation parameter of class i. However, if we keep a fixed set
of σ values for the service classes, the problem of traffic starvation may occur under some
patterns of flow rate proportions among the service classes. In the following, we
demonstrate under what conditions traffic starvation may occur.
For the sake of simplicity, suppose there are only two classes of traffic, i and j, where
class i is the higher class. Let B denote the outgoing bandwidth. Suppose that in a certain
situation the class j traffic is starved, i.e. its loss rate equals its arriving rate. Thus, we
have:

    r_i - r'_i = B

and

    σ_i r'_i / (σ_j r_i) = r'_j / r_j = 1

Therefore, the minimum traffic rate of class i needed to starve the class j traffic is given
by the following equation:

    r_i = σ_i / (σ_i - σ_j) · B     (4.1)

From (4.1), we can conclude that as long as σ_i > σ_j, there always exists a valid value
of the class i flow rate that starves the class j traffic. An intuitive remedy is to select σ_i
and σ_j such that σ_i - σ_j is small enough to push the required r_i beyond the limit
imposed by the incoming bandwidth.
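To make the starvation condition concrete, the following sketch evaluates equation (4.1) for a sample parameter set. The σ values and link bandwidth are made-up examples, not values used elsewhere in the thesis.

```python
# Illustrative check of equation (4.1): the minimum class-i input rate that
# starves class j is r_i = sigma_i / (sigma_i - sigma_j) * B.
def starvation_rate(sigma_i, sigma_j, bandwidth):
    assert sigma_i > sigma_j, "only a higher class can starve a lower one"
    return sigma_i / (sigma_i - sigma_j) * bandwidth

B = 10e6  # example: a 10 Mb/s outgoing link
print(starvation_rate(4.0, 1.0, B))   # widely spaced parameters
print(starvation_rate(4.0, 3.9, B))   # closely spaced parameters
```

With widely spaced parameters a modest class-i rate suffices to starve class j, whereas closely spaced parameters push the required rate far beyond any plausible incoming rate, matching the observation about keeping σ_i - σ_j small.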
We propose that the values of the drop rate differentiation parameter follow the curves
shown in Figure 1. When the incoming flow rate is below a certain level, the drop rate
differentiation parameter σ keeps its initial value. When the incoming traffic rate exceeds
that level, σ begins to decrease. As the incoming flow rate rises, σ keeps decreasing until
it reaches the value of σ of the lowest service class. We can select a set of normalized
values for the drop rate differentiation parameters by setting the value of σ of the lowest
service class to 1. In the following discussion, we always assume that the value of σ of
the lowest service class is 1.

[Figure 1: for each class i, σ_i holds its initial value through phase I, begins to decline
at the break point (rate KB), and approaches the lowest class's value of 1 as the input
rate grows toward the ultimate point in phase II.]
Figure 1 Relation between input rate and differentiation parameter
4.2 Preventing Traffic Starvation
In Figure 1, there are two major issues that need to be considered.
The first issue concerns how to choose the break point rate for the drop rate
differentiation parameter. The break point rate, as shown in Figure 1, is the rate at which
σ starts to decline. In our work, we set an identical break point rate for all service classes.
An alternative is to set a different break point rate for each service class. This is,
however, a policy issue rather than a technical one, and it involves only slightly more
mathematical complexity to set different break point rates for different service classes.
For example, the break point rate for each service class could be set differently based on
a known traffic pattern or flow rate proportion. Alternatively, a general policy may be to
set the break point rates in a decreasing pattern, typically an arithmetic or geometric
progression, where the higher service class gets a higher rate at which its σ begins to
decline.
The second issue is choosing a rate function for σ. The function has to meet two
requirements. First, it should eliminate traffic starvation. Second, it should be simple and
have as few variables as possible.
4.2.1 Determining Break Point Rate
The break point rate is the marginal rate at which the drop rate differentiation
parameter starts to decline. In other words, when the flow rates of all service classes fall
below this rate, no pattern of flow rate proportions among the service classes will cause
traffic starvation. Let us suppose a situation with only three service classes, 2, 1 and 0,
where class 2 is the highest class and class 0 is the lowest. The principle is the same for
cases involving more than three classes.
To determine the break point rate, we consider the point at which traffic starvation
begins to occur. At this point, we have:

    σ_2 r'_2 / r_2 = σ_1 r'_1 / r_1 = r'_0 / r_0 = 1  and  (r_2 - r'_2) + (r_1 - r'_1) = B     (4.2)

where B is the outgoing bandwidth, and σ, r and r' denote the original drop rate
differentiation parameter, the incoming traffic rate and the drop rate of each service class
respectively. Suppose the break point rate is KB, where K is the break point coefficient.
We have:

    r_2 ≤ KB  and  r_1 ≤ KB

It is worth noting that the same break point rate is assumed for all service classes. Let
β denote the fraction of the outgoing bandwidth forwarded by class 2, i.e. let

    r_2 = (σ_2 / (σ_2 - 1)) β B     (4.3)

thus we have

    β ≤ K (σ_2 - 1) / σ_2     (4.4)

From (4.2) and (4.3) we have:

    r_1 = (1 - β)(σ_1 / (σ_1 - 1)) B ≤ KB     (4.5)

Substituting β in (4.5) with (4.4), we obtain the break point coefficient:

    K = σ_2 σ_1 / (2 σ_2 σ_1 - σ_2 - σ_1)     (4.6)

(4.6) shows that as long as the incoming flow rates of all service classes are strictly
less than KB, we can keep the drop rate differentiation parameters unchanged while
avoiding traffic starvation under any flow rate proportion pattern. (4.6) also shows that,
given a set of values of the drop rate differentiation parameters, the corresponding break
point coefficient is determined by (4.6).
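The boundary behavior of (4.6) can be checked numerically, as in the sketch below. The σ values (with σ_0 = 1 normalized) and the unit bandwidth are made-up examples.

```python
# Illustrative check of equation (4.6): K = s2*s1 / (2*s2*s1 - s2 - s1).
# At r2 = r1 = K*B, the traffic that classes 2 and 1 forward after their
# proportional drops exactly fills B, i.e. class 0 sits on the starvation
# boundary.
def break_point_coefficient(s2, s1):
    return s2 * s1 / (2 * s2 * s1 - s2 - s1)

def forwarded(s, r):
    # At starvation onset, s * r' / r = 1, so the drop rate is r' = r / s
    # and the forwarded rate is r - r/s.
    return r - r / s

s2, s1, B = 4.0, 2.0, 1.0
K = break_point_coefficient(s2, s1)
used = forwarded(s2, K * B) + forwarded(s1, K * B)
print(K, used)
```

For σ_2 = 4 and σ_1 = 2, K = 0.8 and the forwarded traffic of the two upper classes exactly equals B, confirming that rates strictly below KB leave bandwidth for class 0.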
4.2.2 Choosing the Real Break Point
In a real implementation, we have to set the real break point coefficient strictly less
than the value determined by (4.6). We introduce an advance coefficient α with α < 1,
and set K_real = αK. Consequently, the break point coefficient in phase II is a function
of the incoming flow rate, the original drop rate differentiation parameter values and α.
To avoid traffic starvation, the maximum incoming flow rate of all the service classes
should always be strictly less than the break point rate. Therefore, α also plays a part in
avoiding traffic starvation in phase II. However, it is possible to select a decline function
that makes traffic starvation avoidance independent of α. In the next section, we will
demonstrate that the decline function used in our simulation is independent of α.
4.2.3 Determining the Decline Function
An intuitive decline function is a linear function, with one end determined by the
break point and the other by the ultimate point (Figure 1). The rate at the ultimate point
is defined as the maximum possible incoming flow rate of the service classes. At the
ultimate point, we set the drop rate differentiation parameters of all classes equal to that
of the lowest service class, i.e. 1 in the normalized case. However, to avoid traffic
starvation in phase II, a linear function depends on the proper selection of the value of
the advance coefficient α. In addition, it also involves the maximum incoming flow rate
as a parameter. Thus, rather than a linear function, we use the following decline function
for phase II in our simulation:

    σ_i(x) = (x + c_i) / (x + c_i - 0.5)     (4.7)

where σ_i(x) is the varying drop rate differentiation parameter of service class i,
excluding the lowest service class, and x is the normalized largest incoming traffic rate
over all the service classes excluding the lowest one:

    x = r_max / B

where r_max is the largest incoming flow rate among the service classes except the
lowest one, and B is the outgoing link bandwidth. It is worth noting, however, that in a
situation where there are other types of services, such as EF, AF, etc., B is the portion of
bandwidth allocated to our proportional differentiation services. c_i is a constant derived
from the following equation:

    c_i = (0.5 σ_i + αK - σ_i αK) / (σ_i - 1)

where σ_i is the drop rate differentiation parameter in phase I for class i, K is the break
point coefficient derived from (4.6) and α is the advance coefficient.
Note that we use r_max instead of letting σ for each individual service class be
determined by its own incoming traffic rate. The reason is that only in this way can we
avoid traffic starvation. Furthermore, a good property of this function is that the ultimate
point rate is infinite: σ_i(x) approaches 1 only as x grows without bound. Thus, we do not
need to know the maximum possible rate of the incoming traffic when determining the
parameters of the decline function.
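The two key properties of the decline function, continuity at the real break point and convergence to 1, can be verified with a short sketch. The σ values and the advance coefficient α = 0.8 are made-up examples.

```python
# Illustrative sketch of the phase II decline function (4.7):
#   sigma_i(x) = (x + c_i) / (x + c_i - 0.5),
# with c_i chosen so the curve passes through the initial value sigma_i at the
# real break point x = alpha * K.
def break_point_coefficient(s2, s1):
    return s2 * s1 / (2 * s2 * s1 - s2 - s1)   # equation (4.6)

def c_constant(sigma, alpha, K):
    # c_i = (0.5*sigma + alpha*K - sigma*alpha*K) / (sigma - 1)
    return (0.5 * sigma + alpha * K - sigma * alpha * K) / (sigma - 1)

def sigma_of_x(x, c):
    return (x + c) / (x + c - 0.5)             # equation (4.7)

s2, s1, alpha = 4.0, 2.0, 0.8
K = break_point_coefficient(s2, s1)            # 0.8 for these sigmas
c2 = c_constant(s2, alpha, K)
print(sigma_of_x(alpha * K, c2))               # equals s2 at the break point
print(sigma_of_x(1e9, c2))                     # approaches 1 as the rate grows
```

The curve passes through the phase I value σ_2 exactly at x = αK, and decays toward 1 only in the limit, which is why the ultimate point rate need not be known in advance.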
In the following, we demonstrate that our scheme guarantees freedom from traffic
starvation. Since we use r_max in determining σ, we can use (4.6) to calculate the break
point coefficient in phase II as well. From (4.6) and (4.7), we know that the phase II
break point coefficient k changes with the varying drop rate differentiation parameters,
which are determined by the flow rate. If we keep the normalized largest flow rate, i.e.
the flow rate divided by the outgoing bandwidth B, always less than k, we avoid traffic
starvation. Let us assume the same case as in the previous section, with only three
service classes. If each σ were evaluated at its own class's normalized flow rate, the
phase II break point coefficient would be:

    k = σ_2 σ_1 / (2 σ_2 σ_1 - σ_2 - σ_1) = (x_2 + c_2)(x_1 + c_1) / ((x_2 + x_1 + c_2 + c_1) / 2)     (4.8)

where x_2 and x_1 are the normalized flow rates of class 2 and class 1. Observing (4.8),
it is easy to see that if we let σ for each service class be determined by its own flow rate,
it is always possible for k to be less than the maximum flow rate. This means that traffic
starvation may occur in some situations. However, in our scheme, we always use the
maximum rate to calculate σ for all the service classes. If we assume x_2 > x_1, k can
be rewritten as:

    k = (x_2 + c_2)(x_2 + c_1) / ((2 x_2 + c_2 + c_1) / 2) > x_2     (4.9)

Since c_2, c_1 and x_2 are all greater than zero, we can see that k is always greater than
x_2. We can similarly obtain k > x_1 if we assume x_2 < x_1. Equation (4.9) shows that
for any pattern of traffic proportion among the service classes, as long as the maximum
incoming flow rate r_max belongs to phase II in Figure 1, the break point rate is always
greater than r_max. This means traffic starvation will never happen. Note that equation
(4.7) also implies that traffic starvation avoidance is independent of the advance
coefficient α.
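The inequality in (4.9) can also be checked numerically by sweeping phase II rates, as sketched below. The σ and α values are made-up examples.

```python
# Numerical sanity check of the starvation-freedom argument (4.8)-(4.9): with
# every sigma evaluated at the same maximum normalized rate x, the phase II
# break point coefficient k always stays above x.
def c_constant(sigma, alpha, K):
    return (0.5 * sigma + alpha * K - sigma * alpha * K) / (sigma - 1)

s2, s1, alpha = 4.0, 2.0, 0.8
K = s2 * s1 / (2 * s2 * s1 - s2 - s1)          # equation (4.6): K = 0.8
c2, c1 = c_constant(s2, alpha, K), c_constant(s1, alpha, K)

def k_phase2(x):
    # k = 2 (x + c2)(x + c1) / (2x + c2 + c1), from substituting (4.7) in (4.6)
    return 2 * (x + c2) * (x + c1) / (2 * x + c2 + c1)

# Sweep phase II rates (x above the real break point alpha*K) and check k > x.
ok = all(k_phase2(x) > x for x in [alpha * K + 0.01 * n for n in range(500)])
print(ok)
```

The margin k - x equals (x c_1 + x c_2 + 2 c_1 c_2) / (2x + c_1 + c_2), which is strictly positive whenever the constants are positive, so the sweep succeeds for every rate tested.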
4.2.4 The Enhanced ARP Algorithm (EARP)

    /* for each incoming packet */
    if (queue buffer full) {
        maxcurrentrate = maximum flow rate over all service classes
                         excluding the lowest service class;
        if (maxcurrentrate < real break point rate)
            sigma_i = initial sigma_i for each class i;    /* phase I */
        else
            calculate sigma_i from maxcurrentrate;         /* phase II, eq. (4.7) */
        calculate the normalized drop rate of each service class;
        delete a packet of the class with the least normalized drop rate;
    }
    enqueue the incoming packet;

Figure 2 The traffic starvation free drop rate differentiation algorithm
Figure 2 summarizes our algorithm for drop rate differentiation. The algorithm has
the following properties: 1) for any pattern of incoming flow rate proportions among the
service classes, it always achieves a differentiated drop rate for each service class; 2) for
any pattern of incoming traffic proportions, it guarantees freedom from traffic
starvation.
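The drop decision in Figure 2 can be sketched as a runnable routine. This is an interpretation, not the thesis implementation: the per-class rates, counters, σ values, c constants and break point in the example call are all made-up numbers.

```python
# A simplified, illustrative EARP drop decision (interpreting Figure 2).
# Per-class measured rates and arrival/drop counters are passed in; sigma
# follows the decline function (4.7) in phase II.
def select_drop_class(rates, arrivals, drops, sigma0, c, real_break_rate, B):
    """rates/arrivals/drops are per-class lists (index 0 = lowest class).
    Returns the index of the class a packet should be dropped from."""
    max_rate = max(rates[1:])                    # exclude the lowest class
    if max_rate < real_break_rate:               # phase I: keep initial sigmas
        sigma = sigma0[:]
    else:                                        # phase II: decline toward 1
        x = max_rate / B
        sigma = [1.0] + [(x + ci) / (x + ci - 0.5) for ci in c[1:]]
    # Normalized drop rate of class i is sigma_i * drops_i / arrivals_i; the
    # class where it is least is "behind" its target and loses a packet.
    norm = [s * d / a for s, d, a in zip(sigma, drops, arrivals)]
    return norm.index(min(norm))

cls = select_drop_class(rates=[300, 200, 100], arrivals=[100, 100, 100],
                        drops=[30, 20, 4], sigma0=[1.0, 2.0, 4.0],
                        c=[0.0, 0.36, 0.0267], real_break_rate=640, B=1000)
print(cls)
```

In the example call, class 2 has dropped proportionally less than its target so far (normalized drop rate 0.16 versus 0.3 and 0.4), so the routine selects class 2 as the next class to drop from.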
4.3 Efficient Memory Management Scheme
As shown in Figure 2, when the buffer is full, we delete a packet from the queue to
make room for the new arrival. It is expensive to find a packet whose length is large
enough to make adequate space for the new packet in the queue. The cost of such a
search operation is O(N) without a sophisticated mechanism, where N is the number of
packets in the queue.
To address this problem, a mechanism called Push Out can be used. In this
mechanism, packet deletion is performed at the head of the queue until enough space is
made for the new packet. However, this scheme may conflict with the scheduler in a
parallel implementation. Since the scheduler always de-queues packets from the head of
the queue, a synchronization mechanism is needed in the implementation of the
algorithm.
In our implementation, we take another approach: instead of deleting packets from
the head of the queue, we delete packets from the tail. To avoid the overhead of testing
whether enough room has been made for the new packet, and the potential for multiple
deletion operations, we reserve a small space of Δ bytes in the buffer. When testing
whether the buffer is full, instead of using the real buffer size buffersize_real, we use
the pseudo buffer size buffersize_pseudo, where buffersize_pseudo = buffersize_real − Δ.

We now determine the minimum value of Δ such that the buffer never overflows.
Suppose a new packet arrives and the queue length l = buffersize_pseudo. At this moment,
the algorithm decides to delete the last packet from the queue to make room for the new
arrival. Thus, the new queue length is:

l' = l + pktsize_new − pktsize_queue-tail

where pktsize_queue-tail is the size of the last packet in the queue. Since the queue
length must never exceed the real buffer size, we require:

buffersize_real ≥ l' = buffersize_pseudo + pktsize_new − pktsize_queue-tail

Thus, we have:

Δ ≥ pktsize_new − pktsize_queue-tail

In conclusion, as long as we reserve Δ ≥ pktsize_new − pktsize_queue-tail, buffer
overflow will never occur.
In practice, letting Δ be the difference between the maximum and the minimum
possible packet size satisfies this requirement, and the wasted buffer space is no more
than the maximum packet size. In today's routers, buffer memory is not a scarce resource,
since it could be enlarged substantially; the reason to keep the buffer small is to bound
the delay at the router. The real value of reserving Δ bytes of buffer space is to make
the algorithm efficient.
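The reserved-slack enqueue can be sketched as below, a minimal sketch assuming packet sizes in bytes, with `slack` playing the role of Δ. We loop on the fullness test for safety, although with Δ = max − min packet size a single tail deletion is what the analysis above anticipates.

```python
from collections import deque

def enqueue(queue, qlen, pkt, real_size, slack):
    """Append a packet of size `pkt` bytes; when the pseudo buffer
    (real_size - slack) would be exceeded, delete from the tail."""
    pseudo_size = real_size - slack
    while queue and qlen + pkt > pseudo_size:
        qlen -= queue.pop()          # O(1) tail deletion
    queue.append(pkt)
    return qlen + pkt

q, qlen = deque(), 0
for _ in range(5):                   # five 500B packets into a 2000B buffer
    qlen = enqueue(q, qlen, 500, 2000, 0)
# the fifth arrival first pushes one packet out of the tail
```

With fixed-size packets (as in the CBR simulations later), Δ = 0 and the pseudo and real buffer sizes coincide.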
4.4 The Processing of the Out Profile Packets
Figure 3 Dealing with In/Out profile packets with delay differentiation
As mentioned above, the RPD model can be made consistent with the DiffServ
model by simply adding traffic policing mechanisms, such as a marker or shaper, at the
network border node. Unlike AF PHB, which provides only drop precedence
differentiation, the model provides delay differentiation as well. However, introducing a
marker at the ingress node raises the problem of how to deal with out-of-profile packets
at the network node. The usual policy is to downgrade the service level of those packets.
In the model, each class is allocated a logical queue, so an intuitive approach is to put
the out-of-profile packets into the queue of a lower level class. However, if the Wait
Time Priority (WTP) scheduler is used and packet reordering is not permitted, there is a
lower bound on the length of the out-of-profile packets: when the length of an
out-of-profile packet is below this bound, the packet is doomed to be dropped.
Let's consider a simple situation as shown in Figure 3: a sequence of packets is
arriving at a network node; packet m is an out-of-profile packet and packet m+1 is an
in-profile packet. At time T1, suppose packet m+1 and packet m are already in the queue
and packet m has been waiting for a time τ1, so packet m+1 has been waiting:

Wait_m+1 = τ1 − l_m/R_in − τ2

where l_m is the length of packet m, R_in is the incoming bit rate, and τ2 is the interval
between the end of packet m's arrival and the beginning of packet m+1's arrival.
Suppose the out-of-profile packets receive the service of class j while the in-profile
packets receive the service of class i, and let δ_j and δ_i stand for the delay
differentiation parameters of the two classes. If we want packet m to be scheduled out
before packet m+1 when WTP scheduling is applied, we need δ_j·Wait_m > δ_i·Wait_m+1.
Therefore, the minimum length of packet m is:

l_m > τ1·R_in·(δ_i − δ_j)/δ_i − τ2·R_in    (4.10)
From (4.10), we can see that, given a set of delay differentiation parameters for the
classes, if τ2 is sufficiently small, the longer an out-of-profile packet has been waiting,
the longer the packet has to be to avoid being dropped.
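Plugging assumed numbers into (4.10) illustrates the effect; the values below (50ms of waiting, negligible τ2, 1Mb/s input, δ_i = 4, δ_j = 2) are our own, chosen only for illustration.

```python
def min_out_profile_length(tau1, tau2, r_in, delta_i, delta_j):
    """Minimum length (in bits) of out-of-profile packet m from (4.10)."""
    return tau1 * r_in * (delta_i - delta_j) / delta_i - tau2 * r_in

# after 50 ms of waiting, packet m must already exceed 25000 bits (~3kB)
# to leave before the in-profile packet behind it
bits = min_out_profile_length(0.05, 0.0, 1e6, 4.0, 2.0)
```

Since most packets are far shorter than 3kB, an out-of-profile packet that has waited this long is, as the text says, doomed.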
This may not be a problem for TCP traffic, since the TCP protocol itself can handle
packet reordering. Nor may it be a problem if packets from one source are adequately
interleaved with packets from other sources. However, the ARP algorithm is mainly
designed for multimedia applications, in which out-of-order packets are often discarded.
In addition, in the real world, packets from each source tend to arrive in batches rather
than interleaved with other connections' packets, in which case τ2 can be very small.
If we put the out-of-profile packets in the queue of the lower service class while
trying to prevent packet reordering, two complications arise. First, to prevent packet
reordering, some mechanism is needed to track packets that belong to the same individual
flow but reside in different queues. Second, because of the property given by equation
(4.10), some packets already in the queue are doomed to be dropped, which may result in
unnecessary buffer management operations.
In our simulation, we simplify this problem by putting the packets from the same
flow into the same queue, regardless of whether they are in profile or out of profile, so
packet reordering is avoided. To prevent the out-of-profile packets from monopolizing the
queue and compromising the transmission of the in-profile packets, we apply to the
out-of-profile packets the same drop policy as to the packets of the lowest service class.

Since some of the packets of the lowest service class, namely the out-of-profile
packets, are scheduled with a higher wait time priority, this simplification may in some
situations make the class ranking of a packet inconsistent with the delay differentiation.
This is the price paid for the simplification. Nevertheless, the differentiation of the
average delay of the real in-profile packets is still strictly maintained.
4.5 Simulation Result
In this section, we present a set of simulation experiments to verify the analysis
discussed in the previous sections. We implemented the improved ARP algorithm and
performed the simulation using NS-2.

Figure 4 Simulation Topology

The network topology is shown in Figure 4. Nodes 0, 1 and 2 are connected to node 3
via 10Mbps links. Node 3 is connected to node 4 via a 1Mbps link. We have three service
classes, 0, 1 and 2; the flows from nodes 0, 1 and 2 correspond to each service class
respectively. Class 0 is the lowest service class, whereas class 2 is the highest. In our
simulations, we used CBR traffic with a packet size of 500B. The buffer size at node 3 is
20KB.
4.5.1 Improved Rate Proportional Drop Rate Differentiation Algorithm
The following simulation is intended to verify our analysis of the breakpoint rate. In
the simulation, nodes 0, 1 and 2 all send data at a rate of 800kb/s, which is less than the
outgoing bandwidth. With the original drop rate differentiation of 4, 2 and 1, the
breakpoint rate is 0.8B, which is 800kb/s in the simulation. In the improved ARP,
however, we set the advance coefficient α to 0.9. Thus, the drop rate differentiation is
reduced from 4, 2 and 1 to 2.21, 1.67 and 1 for classes 2, 1 and 0 respectively. The target
delay differentiation is 1, 2 and 4 for classes 2, 1 and 0. The result is as follows:
Table 3 A comparison of the original and improved algorithm on traffic starvation.

Original algorithm (Case I)
Class   Arrived traffic (KB)   Dropped traffic (KB)   Drop rate (%)   Average delay (s)
2       490.5                  120                    24.5            0.8
1       490.5                  239.5                  48.9            0.15
0       490.5                  478                    97.5            0.27

Improved algorithm (Case II)
Class   Arrived traffic (KB)   Dropped traffic (KB)   Drop rate (%)   Average delay (s)
2       490.5                  185                    37.7            0.038
1       490.5                  244.5                  49.8            0.075
0       490.5                  408                    83.2            0.150
As we expected, class 0 is starved in Case I. The 2.5% of class 0 packets that got
through were only those packets stuffed into the buffer at the beginning of the simulation,
when the buffer was not yet full; after the buffer filled up, class 0 packets never made it
into the queueing buffer. It is interesting to note that the average delay is much lower in
Case II. This is because in Case II each class got a fairer share of the buffer than in Case
I, so the queue lengths for class 2 and class 1 are significantly shorter.

In Case II, although class 0 traffic is not completely choked, it still suffers a high
drop rate of over 80%. From the previous analysis, the smaller the value of α, the earlier
the drop rate differentiation parameter starts to decline as the maximum flow rate of the
service classes increases. Hence, the advance coefficient α can serve as a "tuning
knob" [8] to fine-tune the balance of drop rate differentiation between different classes.
The following simulation uses the same set of parameters as above, except that we
choose α = 0.6 in Case I and α = 0.9 in Case II; the latter is the same value used in the
previous simulation. The result is shown in Table 4.

Table 4 shows that the drop rate differentiation is decreased in Case I. Although the
drop rate of class 0 is still as high as 77.8%, considering that the drop rate would be
58.3% if we set an equal drop rate for all three classes, this rate is quite reasonable.
Table 4 The effect of the advance coefficient on drop rate differentiation.

Case I (α = 0.6)
Class   Arrived traffic (KB)   Dropped traffic (KB)   Drop rate (%)   Average delay (s)
2       490.5                  208                    42.4            0.036
1       490.5                  248                    50.6            0.072
0       490.5                  381.5                  77.8            0.146

Case II (α = 0.9)
Class   Arrived traffic (KB)   Dropped traffic (KB)   Drop rate (%)   Average delay (s)
2       490.5                  185                    37.7            0.038
1       490.5                  244.5                  49.8            0.075
0       490.5                  408                    83.2            0.150
4.5.2 The Simulation of In/Out Profile Packet Processing
In the following experiment, we let node 2 send at a rate of 600kb/s. A token
bucket marker is attached to the node, with a token rate of 400kb/s. Therefore, the rate of
out-of-profile packets coming out of node 2 is 200kb/s. Node 1 sends at 400kb/s and
node 0 sends at 200kb/s.
Table 5 The simulation result of In/Out profile packet processing

Class             Arrived traffic (KB)   Dropped traffic (KB)   Drop rate (%)   Average delay (s)
2                 245.5                  16.5                   6.7             0.071
1                 245.5                  29.5                   12              0.141
0 (out profile)   123                    28.5                   23.2            0.172
0 (in profile)    123                    28.5
As shown in Table 5, the drop rate differentiation is well maintained: the drop rate
for the out-of-profile packets is equal to that of the packets of the lowest service class.
However, as expected, the average delay differentiation is not maintained. This is
because some packets of the lowest service class, namely the out-of-profile packets from
node 2, were scheduled with the same high priority as the in-profile packets from node 2.
4.6 Summary
In this chapter, we analyzed the traffic starvation problem in the ARP algorithm
proposed in [8] and introduced a solution to the problem. To avoid traffic starvation, the
differentiation parameter changes dynamically according to the incoming traffic rates of
the service classes. We also discussed the possible usage of the algorithm within the
DiffServ framework. Simulations were performed to verify the correctness of the
solution.
Chapter V Providing Loss Precedence Differentiation with
WFRED
As discussed in earlier chapters, we add rate proportional loss rate differentiation
to the RPD model to meet the requirements of responsive applications. Since delay
differentiation is not needed for such traffic, Weighted RED is the most appropriate
method to achieve this goal. However, fairness within each class or aggregate becomes
the major concern in the algorithm. In this chapter, we explore a way to address this
problem.
5.1 Weighted RED
Random Early Detection/Drop (RED), which originally aimed at preventing network
congestion at its incipient stage and at avoiding the problem of global synchronization,
ensures that a flow with a high volume of traffic is subject to a high drop probability [32].
In RED, when the queue length in the network node buffer exceeds a certain minimum
threshold, arriving packets are dropped probabilistically. The drop probability increases
with the queue length until the queue reaches a maximum threshold; when the queue
length exceeds the maximum threshold, all subsequent arrivals are dropped. The drop
probability at the maximum threshold is denoted maxp. To accommodate the burstiness
of network traffic, a calculated average queue length is used in the algorithm instead of
the instantaneous queue length.
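The rule can be sketched as follows; this is a compact sketch that omits the count-based spreading of drops used in the full RED algorithm, and the parameter names mirror the gateway configuration used later in this chapter.

```python
import random

def red_drop(avg, qlen, qw, min_th, max_th, maxp):
    """One RED step: update the EWMA average queue length, then decide
    probabilistically whether to drop. Returns (new_avg, drop)."""
    avg = (1 - qw) * avg + qw * qlen            # low-pass filter
    if avg < min_th:
        return avg, False
    if avg >= max_th:
        return avg, True                        # beyond the max threshold
    p = maxp * (avg - min_th) / (max_th - min_th)
    return avg, random.random() < p
```

With qw = 0.002, the average reacts slowly, which is exactly the lag between the real and average queue length discussed in Section 5.3.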
In RED, the minimum threshold, the maximum threshold and maxp constitute the
parameter triple. Weighted RED (WRED), also known as Enhanced RED, Multiple RED
or Extended RED, provides different drop precedences by applying different parameter
triples to different service classes [40]. RED with In and Out (RIO), mentioned
in [15][17], embraces the same idea as WRED. We recognize and demonstrate that
WRED can also be used to provide proportional loss rate differentiation. Given a fixed
queue length between the minimum and maximum thresholds, the drop rate of a service
class can be expressed as:

drop rate = λp/λ = p    (5.1)

where p is the drop probability and λ is the input rate. By setting different values of p,
which is analogous to the differentiation parameter in (1.1), for different classes, a
proportional loss rate differentiation can be achieved.
The ways of setting the RED parameters for the different classes can be categorized
into two types: the over-lapped model and the non-overlapped model [16]. In the
over-lapped model, the minimum and maximum thresholds are the same for all service
classes, and the drop precedence differentiation is achieved by applying a different maxp
to each class. In the non-overlapped model, all elements of the parameter triple differ
between classes. For rate proportional drop rate differentiation, the over-lapped model is
appropriate; otherwise, some classes may suffer packet loss before other classes do. In
AF PHB, however, we usually want to ensure that in-profile packets are served before
out-of-profile packets, so the non-overlapped model is suitable.
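The two configurations can be written down as parameter triples (min_th, max_th, maxp) per class. The over-lapped numbers below are illustrative assumptions; the non-overlapped pair reuses the in/out-profile triples from the AF PHB experiment in Section 5.4.4.

```python
# Over-lapped model: shared thresholds, per-class maxp. All classes
# start dropping together; only the drop probability differs, which is
# what rate proportional differentiation needs.
overlapped = {2: (12, 25, 0.01), 1: (12, 25, 0.02), 0: (12, 25, 0.04)}

# Non-overlapped model: disjoint thresholds, so out-of-profile packets
# start being dropped well before in-profile ones -- the AF PHB case.
non_overlapped = {"in": (20, 40, 0.02), "out": (5, 15, 0.2)}
```

Note that in the non-overlapped pair the out-profile maximum threshold (15) lies below the in-profile minimum threshold (20), so the two drop regions do not overlap at all.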
As pointed out in [41], rate proportional dropping is not adequate to assure each
flow a fair share of bandwidth. As a result, WRED cannot ensure equality within each
service class; this results directly from the drop mechanism at the network node being
unaware of its interaction with the congestion control mechanism at the end system.
Studies of the interaction between TCP implementations and RED can be found
in [30][42]. Although aggregation at the network node is important to the scalability of
the model, as mentioned in earlier chapters, the inability to provide price-proportional
quality of service makes the model inapplicable in practice. Given equal link parameters,
a user of a higher service class expects to receive a higher quality of service. However, in
our simulations, WRED could not provide satisfactory service differentiation from the
perspective of each micro-flow. Even with adequate differentiation in terms of the
aggregated throughput or the aggregated drop rate among the service classes, and given
the same link parameters, a flow of a higher service class may not obtain a higher
throughput or a lower drop rate than a flow of a lower service class. This is because,
within each class, either the throughput or the drop rate of a micro-flow might deviate
severely from the average level.
To achieve equality within each service class, we propose to incorporate Fairness
RED [41] into WRED. Although the Weighted Fairness RED (WFRED) proposed in this
chapter is intended to provide proportional loss rate differentiation, it can also be used to
implement AF PHB in DiffServ, since fairness is a significant concern in the Assured
Service as well.
5.2 The Simulation Environment
The network topology used in the simulation is shown in Figure 5; it is similar to
the one used in [41], with slight differences. All the nodes send TCP traffic generated by
persistent FTP sources to a sink via the RED gateway. The packet size is 500 bytes and
the TCP window can be set up to 64kB. The TCP/Sack1 implementation in the NS
network simulator is used to generate the TCP traffic. By default, the bandwidth of the
link from a sender node to the RED gateway is 100Mbps and the delay is 1ms. The
simulation runs for 8 seconds.
Figure 5 The simulation topology with the RED gateway. 48 links (R-1 to R-48,
delay = 2ms) compete for a 45Mbps link to the sink via the RED gateway, which has a
buffer size of 25kB = 50 packets, qw = 0.002, minimum threshold = 1/4 buffer size, and
maximum threshold = 1/2 buffer size.
5.3 Improvement over the Original RED
During the simulation, we found that a simple modification to RED can slightly
improve the performance of the RED gateway. This is based on observation of the plot
of the queue length and the average queue length.

Figure 6 A time plot of the queue length and the average queue length in the RED gateway
As the plot in Figure 6 shows, there is always a time lag between the real queue
length and the average queue length. This lag is caused by the low-pass filter in RED,
which is intended to tolerate short-term congestion caused by bursty traffic. However, the
original RED algorithm may cause packets to be unnecessarily dropped in some situations.

One such situation is marked at point a in Figure 6. At point a, the real queue
length has dropped below the minimum threshold. However, since the average queue
length does not change as promptly and still stays above the minimum threshold,
incoming packets at this time may be dropped. Point a usually identifies an interval
between the bursts of incoming flows. Since the acceptance of sporadic arrivals during
such intervals does not undermine the effect of congestion control, we propose an
improvement that simply prevents RED from dropping packets in this situation.
Furthermore, as shown at point b in Figure 6, the queue length is trending
downward and the real queue length is less than the average queue length. In contrast, at
point c, the queue length is trending upward and the real queue length is greater than the
average queue length. It is reasonable to apply more aggressive congestion control at
point c than at point b. However, in the original RED, the drop probability depends
solely on the average queue length, which makes it insensitive to the change of the queue
length at those points; consequently, some packets may be unnecessarily dropped. Since
letting a few more packets in while the queue length is trending downward will not
compromise the overall effect of congestion control, we propose a trend-sensitive
dropping algorithm. Specifically, we propose to use different values of maxp for the
uptrend and downtrend periods of the queue length in the drop probability calculation.
In summary, we propose the following modifications to the original RED algorithm:
1) When the queue length is less than the minimum threshold, let the drop probability p = 0.
2) When the queue length is less than the average queue length, use a reduced
maximum probability to calculate the drop probability, i.e. maxp' = maxp/β, where β
is a constant.
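The two rules above can be layered on a standard RED drop decision as sketched below; `beta` is our name for the constant dividing maxp on the downtrend.

```python
import random

def modified_red_drop(avg, qlen, min_th, max_th, maxp, beta):
    """RED drop decision with the two modifications applied."""
    if qlen < min_th:                 # rule 1: real queue below min_th,
        return False                  # never drop, whatever avg says
    if avg < min_th:
        return False
    if avg >= max_th:
        return True
    # rule 2: reduced maxp while the queue trends down (qlen < avg)
    eff_maxp = maxp / beta if qlen < avg else maxp
    p = eff_maxp * (avg - min_th) / (max_th - min_th)
    return random.random() < p
```

Setting beta effectively to infinity (i.e. maxp' = 0) disables probabilistic dropping entirely on the downtrend, which is the variant simulated below.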
In our simulation, we set maxp' = 0. The simulation showed that the modification
could slightly increase the throughput and decrease the number of packets dropped. The
more bursty the traffic, the more noticeable the increase: for burstier traffic, the troughs
in the queue length plot tend to be wider and deeper, so more packets that would have
been dropped under the original RED algorithm are saved by the modification. The
improvement is less drastic when Explicit Congestion Notification (ECN) is enabled,
since with ECN enabled, packets are probabilistically marked instead of dropped unless
buffer overflow occurs.
Above, we proposed a simple modification that can slightly improve the
performance of RED. Much work has focused on this issue. In [30], Stabilized RED
(SRED) was proposed to improve the throughput and to stabilize the occupancy of the
buffer. The algorithm adjusts maxp according to the load of the network: when the
average queue length is persistently lower than the minimum threshold, the congestion
control is too aggressive, so maxp should be decreased; when the average queue length
stays continuously close to the maximum threshold, the congestion control is too lenient,
so maxp is increased. Our proposed solution was motivated by the work in [30].
However, when it comes to WRED, which involves more than one maxp, adjusting the
maxp values becomes more complicated, since the resulting differentiation is not a linear
function of the ratios among them. Further research would be needed to identify the
relation between the differentiation of the maxp values and the resulting differentiation
of service quality; this issue is beyond the scope of this thesis.
5.4 Weighted Fairness RED
5.4.1 Modified FRED
As illustrated in [41], a TCP connection with a larger window size or a shorter RTT,
classified as a robust connection, often accounts for the greater portion of the total traffic
volume in the network. Thus, to relieve congestion, it is more reasonable and effective to
drop packets from robust connections than from fragile connections, which are
characterized by a small window size or a large RTT. In addition, this scheme allows a
fairer share of bandwidth for fragile connections, since it takes a fragile connection more
rounds to recover from a packet loss. The problem of bias against connections with large
RTTs is often called RTT-sensitivity. FRED identifies the packets of a robust connection
by keeping track of the number of packets in the network buffer from each micro-flow; a
flow with packets residing in the network buffer is called an active flow. When the
number of packets from a certain connection exceeds a threshold, the connection is
tagged as robust. FRED allows only the packets from robust connections to be
probabilistically dropped; the packets from fragile connections are always accepted by
simply skipping the dropping algorithm. Although a network node may simultaneously
serve hundreds or thousands of connections, the number of packets in the network node
buffer remains relatively small; therefore, the algorithm is scalable and its overhead is
limited. In essence, the idea behind FRED is to achieve fair bandwidth sharing by
enforcing fair buffer sharing. Further work on this idea can be found in the literature,
such as in [1].
As discussed earlier, WRED can be used to provide drop rate differentiation. In
order to achieve fairness as well, we propose to integrate FRED and WRED. Adapting
FRED to provide different drop precedences seems straightforward: it would be enough
to simply use different values of maxp in calculating the drop probabilities for the
packets of different service classes. We refer to this method as the Modified FRED.
However, since the packets from fragile flows skip the dropping mechanism in FRED,
the resulting service differentiation among aggregates, in terms of throughput or drop
rate, would differ even if the same RED parameter triples were used for WRED and the
Modified FRED. Table 6 shows the simulation result of such a situation. In the
simulation, we suppose there are two service classes. As shown in Figure 5, of the 48
connections from sender nodes to the RED gateway, the first 24 connections send packets
of class 0 and the latter half send packets of class 1. At the RED gateway, maxp = 0.02 is
used for class 0 and maxp = 0.027 is used for class 1.
Table 6 A comparison between WRED and the Modified FRED

                                    WRED                 Modified FRED
                                    Class 0   Class 1    Class 0   Class 1
Throughput (# of packets)           69086     26296      55155     41002
Dropped (# of packets)              3851      2038       3478      2792
Drop rate ratio (class 1/class 0)   1.39                 1.08
The result shows that the differentiation effect is diluted under the Modified FRED,
in terms of both the throughput ratio and the drop rate ratio. It is conceivable that if a
class is dominated by fragile connections, the differentiation relation will be inverted.
The following experiment demonstrates such an extreme situation.
Table 7 A comparison of WRED, the Modified FRED and WFRED in a fragile-link
dominated situation

                         WRED               Modified FRED      WFRED
                         Class 0  Class 1   Class 0  Class 1   Class 0  Class 1
Throughput (# of pkts)   35596    55091     19250    72550     42738    47250
Dropped (# of pkts)      1615     2941      951      2229      2021     2966
Drop rate ratio
(class 1/class 0)        1.18               0.62               1.33
In this simulation, all the parameters are the same as described above except for the
link bandwidths. As shown in Figure 5, the links from R-1 to R-8 to the RED gateway all
have a bandwidth of 100Mbps, and R-1 to R-8 send class 0 traffic. R-9 to R-48 send
class 1 traffic, each over a link with a bandwidth of 1Mbps. In such an arrangement,
class 1 is dominated by fragile connections. The simulation result is shown in Table 7;
the last two columns are based on our proposed algorithm, to be explained later.
Table 7 shows that the drop rate differentiation is corrupted with the Modified
FRED, since the drop rate of class 0 becomes higher than that of class 1. This
phenomenon is caused by the packets from the fragile flows. Since these packets are
allowed to pass through as long as the buffer does not overflow, the equation defining the
relation between the drop rate and the drop probability p changes from (5.1) in the
previous section to the following form:

drop rate = p(λ − φ)/λ ≤ p    (5.2)

where φ denotes the aggregate rate of the fragile connections. Thus, the distribution
pattern of the fraction of these packets among the service classes has a direct impact on
the drop rate proportions.
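A numeric illustration of this effect, with made-up rates: a class whose traffic is mostly fragile sees far less than its nominal drop probability, so a configured 2:1 differentiation can be inverted.

```python
def effective_drop_rate(p, total_rate, fragile_rate):
    """Drop rate of a class when only the non-fragile portion of its
    traffic is subject to drop probability p (the (5.2) relation)."""
    return p * (total_rate - fragile_rate) / total_rate

r0 = effective_drop_rate(0.02, 1.0, 0.1)   # class 0: mostly robust
r1 = effective_drop_rate(0.04, 1.0, 0.8)   # class 1: mostly fragile
# nominal ratio 0.04/0.02 = 2, but r1 (0.008) < r0 (0.018): inverted
```

This is the same inversion seen in the drop rate ratio of 0.62 for the Modified FRED in Table 7.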
Since today's Internet is dominated by HTTP traffic, which often exhibits burstiness
and non-persistence, fragile connections may constitute a large portion of the connections
simultaneously supported by a network node. The Modified FRED is therefore unlikely
to provide consistent service differentiation in such circumstances. To solve this problem,
the drop rate of a service class should be made independent of the amount of fragile
traffic within the class. The following section introduces a solution to the problem.
5.4.2 Weighted Fairness RED
To solve the problem, we need to eliminate φ, the aggregate rate of the fragile
connections, from (5.2). Unlike the Modified FRED, our proposed solution does not let
any packet skip the drop algorithm. When an arriving packet is determined by the drop
algorithm to be dropped but belongs to a fragile flow, instead of just letting it pass
through as in the Modified FRED, we drop a substitute packet belonging to the same
service class as the arrival. Thus, the differentiation among the service classes is
preserved. The proposed Weighted FRED algorithm is shown in Figure 7.
For each arrival
    Calculate the drop probability;
    If the packet should be dropped
        If (the packet belongs to a robust connection)
            Drop it;
        Else {  // the following lines are denoted as block A
            If (there exist packets of the same service class from robust connections)
                Randomly select one of these packets and drop it;
            Else
                Randomly select a packet p of the same service class and drop p;
            Accept the arrival;
        }
    Else
        Accept the arrival;
Figure 7 The Proposed Weighted Fairness RED
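Block A can be sketched in a few lines of code; `buffer` holds (flow, class) pairs and `robust` is the set of flows currently tagged robust. Both names are ours, and the drop decision itself is assumed to come from the WRED calculation.

```python
import random

def wfred_admit(buffer, flow, cls, robust, should_drop):
    """Admit an arrival; returns the list of dropped (flow, cls) entries."""
    if not should_drop:
        buffer.append((flow, cls))
        return []
    if flow in robust:
        return [(flow, cls)]                  # drop the arrival itself
    # block A: the arrival is fragile, so drop a same-class substitute,
    # preferring packets that belong to robust connections
    candidates = [e for e in buffer if e[1] == cls and e[0] in robust]
    if not candidates:
        candidates = [e for e in buffer if e[1] == cls]
    if not candidates:                        # no substitute available
        return [(flow, cls)]
    victim = random.choice(candidates)
    buffer.remove(victim)
    buffer.append((flow, cls))
    return [victim]
```

The linear scans here stand in for the "appropriate data structure" the text calls for; a per-class index of robust flows' packets would remove them.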
The fundamental difference between the proposed WFRED and the Modified FRED
is block A in Figure 7; in the Modified FRED, these lines are simply skipped. In
selecting a substitute packet to drop, a number of alternatives exist. We could simply
choose at random a buffered packet of the same class as the new arrival; in this way, a
flow with a larger buffer occupancy suffers a higher probability of dropping. In our
proposal, however, we use an algorithm that ensures better fairness: when looking for a
substitute, we first look for packets from robust connections, and only if no such packet
is available do we randomly select a buffered packet of the same service class as the
arrival. Although this method achieves better fairness, it involves some overhead. First,
it has to identify the packet to be dropped by searching the data structure that holds
information about the packets in the buffer. Second, it needs to locate and delete a packet
already queued in the buffer. We believe that with an appropriate data structure and
algorithm, the added overhead can be offset.
5.4.3 Simulating Rate Proportional Loss Rate Differentiation with WFRED
In order to verify the effectiveness of the proposed WFRED, the following
simulation was conducted and variance analysis was applied to the result.

As shown in Figure 5, R-1 to R-24 send class 0 traffic and R-25 to R-48 send class 1
traffic. All the links from the sender nodes to the RED gateway are 100Mbps. Except for
maxp, the parameters are the same as in the previous experiment. In the first run, we
compared WRED and WFRED using the same parameter triples at the RED gateway;
the result is given in columns 2 and 3 of Table 8. Since the variance of a set of samples
indicates the degree to which the samples deviate from the average level, a reduction of
the variance indicates, in our case, an improvement in fairness. We can see that, over all
24 connections of each service class, the standard deviations of both the throughput and
the number of dropped packets are reduced when WFRED is applied. On closer
observation, however, we find that for class 1 the improvement in the standard deviation
of the throughput is not very evident. This is perhaps caused by the incomparable
throughput and drop rate differentiation between class 0 and class 1, which results from
the fairness control mechanism in WFRED. In the second run, we adjusted maxp for
class 1 so that the differentiation in terms of throughput and drop rate achieved by
WFRED was comparable with that of WRED. The result, shown in the last column of
Table 8, indicates that the standard deviation is significantly reduced. The percentage
numbers show the reduction in the standard deviation as compared with WRED.
Table 8 A comparison of WRED and WFRED with variance analysis.

                                       WRED           WFRED          WFRED
maxp(0)                                0.02           0.02           0.02
maxp(1)                                0.027          0.027          0.04
Class                                  0      1       0      1       0      1
Average throughput (pkts)              2718   1011    2421   1255    2691   1001
Average drop (pkts)                    160    85      181    111     192    90
Throughput standard deviation (TSTD)   990    443     542    430     635    321
Reduction in TSTD                      -      -       45%    3%      33%    28%
Drop standard deviation (DSTD)         35     27      19     17      23     19
Reduction in DSTD                      -      -       46%    37%     34%    33%
In addition to the above simulation experiments, whose results are shown in
Table 8, a series of experiments was conducted with different TCP implementations in
NS, such as Tahoe and Reno, and with each TCP implementation we tested both the
ECN-enabled and the ECN-disabled case. In all cases, although different TCP
implementations produced different degrees of differentiation, even with the same RED
parameter triples at the RED gateway, the variance analysis always favored the WFRED
algorithm.
5.4.4 Simulating Assured Forwarding with WFRED
As mentioned in the preceding section, WFRED can also be used to implement
AF PHB. As discussed in [1], AF PHB can be used to implement a variety of service
models. Since it is impossible to conduct an overall evaluation of a specific algorithm
without a concrete service model, we focus only on the fairness aspect in this section.
We performed a series of simulation experiments to compare RIO and WFRED in terms
of fairness performance.
Table 9 A comparison of RIO and WFRED with variance analysis

         Group A                                        Group B
         Avg tput   Tput       Avg drop   Drop          Avg tput   Tput       Avg drop   Drop
         (pkts)     variance   (pkts)     variance      (pkts)     variance   (pkts)     variance
RIO      2043       440        147        18            4026       693        138        12
WFRED    2644       358        97         7             4053       500        103        7
First, we tested the fairness improvement when WFRED was used. We used the
first 24 links in Figure 5 and divided them evenly into two groups, A and B. Following
the DiffServ framework, we applied a token bucket marker to each sender node. In
order for the variance analysis to be comparable between WFRED and RIO, we adjusted the
parameters of the token bucket markers so that the average throughput of each group was
approximately equal. When RIO was used in the RED gateway, the token rate was 1Mb/s for
each sender in group A and 6Mb/s for group B, and all bucket sizes were 6 packets. When
WFRED was applied, the token rate was 1Mb/s for the senders in group A and 3Mb/s for
group B, and the bucket size was 4 packets. The RED parameter triples for the in-profile and
out-profile packets were (20, 40, 0.02) and (5, 15, 0.2) respectively. The result, listed in
Table 9, shows that the deviation of each connection's throughput from the average is
significantly reduced.
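For illustration, the token bucket marker applied at each sender node can be sketched as follows. This is a minimal sketch in Python, not the NS-2 implementation; the class and method names are ours, and the rate is expressed in packets per second for simplicity.

```python
class TokenBucketMarker:
    """Marks packets in-profile while tokens remain, out-of-profile otherwise.

    A minimal sketch of the marker used in the experiments above;
    rate is in packets per second, bucket_size in packets.
    """

    def __init__(self, rate, bucket_size):
        self.rate = rate
        self.bucket_size = bucket_size
        self.tokens = bucket_size        # bucket starts full
        self.last_time = 0.0

    def mark(self, arrival_time):
        # Refill tokens for the elapsed time, capped at the bucket size.
        elapsed = arrival_time - self.last_time
        self.last_time = arrival_time
        self.tokens = min(self.bucket_size, self.tokens + elapsed * self.rate)
        if self.tokens >= 1:
            self.tokens -= 1
            return "in-profile"
        return "out-of-profile"
```

A flow sending faster than the token rate is marked out-of-profile once the bucket empties, and the gateway then applies the more aggressive (5, 15, 0.2) parameter triple to those packets.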
The following simulation was conducted to show the effect of WFRED on RTT
sensitivity. We evenly divided the 48 connections in Figure 5 into three groups, A, B,
and C, and set the delay time to 1ms, 5ms and 25ms for groups A, B, and C respectively. The
RED parameter triples were the same as above. The same token bucket marker parameters
were applied to all nodes: the bucket size was 4 packets and the token rate was 1Mb/s.
The result shown in Table 10 demonstrates that the sensitivity to RTT is
significantly reduced by the fairness mechanism. However, the difference in the results
between group A and group C shows that a discrepancy still exists.
Table 10  A Comparison of RIO and WFRED on RTT Sensitivity

          Class A (delay = 1ms)        Class B (delay = 5ms)        Class C (delay = 25ms)
          avg tput     avg drop        avg tput     avg drop        avg tput     avg drop
          (pkts)       (pkts)          (pkts)       (pkts)          (pkts)       (pkts)
RIO       2541         139             1989         84              158          9
WFRED     1905         122             1937         85              782          42
5.5 WFRED vs. EARP
So far, we have presented two algorithms that provide rate proportional
drop rate differentiation. It is worth discussing their applications, pros, and cons.
The ARP algorithm, discussed in the previous chapter, is designed for
non-responsive or loosely responsive traffic such as RTP data flows. Since each
packet of a non-responsive flow can be considered in isolation, maintaining rate proportional
drop rate differentiation is simple. In addition, fairness is not a great concern in this case:
by integrating a probabilistic dropper such as RED, fairness within each service class can
easily be achieved.
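For reference, the basic RED dropper referred to here works as follows. This is a simplified sketch that omits the exponentially weighted moving average of the queue length; the parameter names follow the (min_th, max_th, max_p) triples used in this chapter.

```python
import random

def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Drop probability of the basic RED algorithm as a function of
    the average queue length (EWMA averaging omitted for brevity)."""
    if avg_queue < min_th:
        return 0.0                       # no early drops below min_th
    if avg_queue >= max_th:
        return 1.0                       # force drops above max_th
    # Probability rises linearly from 0 to max_p between the thresholds.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def red_should_drop(avg_queue, min_th, max_th, max_p):
    """Probabilistically decide whether to drop an arriving packet."""
    return random.random() < red_drop_probability(avg_queue, min_th, max_th, max_p)
```

With the in-profile triple (20, 40, 0.02) used above, a queue averaging 30 packets drops roughly 1% of arrivals.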
However, for responsive traffic, ARP is not an adequate scheme for achieving fair
sharing of bandwidth, since we must take into account the interaction between the
congestion control mechanism at the responsive sender and the dropping algorithm at the
network node. We developed the WFRED algorithm to solve this problem. Although
WFRED provides differentiated drop precedence, the purpose of the
differentiation is to achieve differentiated bandwidth sharing. If WFRED is used with
non-responsive traffic, differentiated bandwidth sharing among service classes
will still be achieved, but the fairness effort could be compromised. We demonstrate
such a scenario below.
The buffer drain time of a network node is determined by its outgoing bandwidth
and buffer size. Excluding the case of packets being dropped due to buffer overflow, a
packet from a flow whose interval between consecutive arrivals is consistently
greater than or equal to the buffer drain time would never be dropped, since no more than
one packet from the flow can be in the buffer at any time. Thus, the packets of such a flow
would breach the fairness effort of FRED. It can be verified that the maximum rate of such
a flow is determined by the following equation:

    R = bandwidth / buffer size (in packets)

where R is the maximum possible rate with a 0% drop rate.
The equation indicates that a set of such flows, each with a rate no greater
than R, would seize bandwidth equal to the aggregate rate of the flows. Although
WFRED, which uses essentially the same fairness mechanism as FRED, can still maintain
the service differentiation among classes, it is not adequate to achieve fairness within
each class in such a situation. Nevertheless, some improvement could be observed, since
with WFRED fragile flows may suffer packet losses as well in some situations.
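As an illustration, the threshold rate can be computed for assumed link parameters; the numbers below are hypothetical and not taken from the simulations in this chapter.

```python
def max_undropped_rate(bandwidth_bps, buffer_pkts, packet_bits):
    """Maximum rate (packets/s) at which a non-responsive flow can send
    while never holding more than one buffered packet: one packet per
    buffer drain time."""
    drain_time = buffer_pkts * packet_bits / bandwidth_bps   # seconds to empty a full buffer
    return 1.0 / drain_time                                  # == bandwidth / buffer size in packets

# e.g. a 10 Mb/s link, a 64-packet buffer, 1000-byte packets:
r = max_undropped_rate(10e6, 64, 8000)   # about 19.5 packets/s
```

Any collection of such flows, each sending below r, evades FRED-style per-flow buffer accounting entirely, since no flow ever occupies more than one buffer slot.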
In conclusion, the two algorithms proposed in this thesis are designed for different
types of network flows: the WFRED algorithm is used for responsive traffic and the
ARP algorithm for non-responsive traffic.
5.6 Summary
In this chapter, we proposed WFRED, an integration of the WRED and FRED
algorithms, to address the fairness problem within each service class for responsive traffic.
A series of simulation experiments was presented to demonstrate the fairness effect of the
WFRED algorithm. In addition, based on the idea of avoiding unnecessary packet drops, a
modification to the original RED algorithm was introduced to improve the performance of
the RED gateway.
Chapter VI Conclusions
6.1 Thesis Summary
In this thesis, we began with a review of QoS, a fast-moving area. We then focused
our effort on the enhancement and implementation of a novel service model, the RPD
model.
Based on the observation that non-responsive and responsive traffic require
different qualities of service, we improved the RPD model by providing different types of
differentiation to the two protocol types of traffic: only drop rate differentiation is
applied to responsive traffic, while both drop rate differentiation and delay
differentiation are applied to non-responsive traffic. Corresponding to these two types of
differentiation, two algorithms were developed for the implementation of the model.
Because it need not consider the interaction between the congestion control
mechanism at the end system and the differentiation algorithm at the network node, the
ARP algorithm proposed in [8] can provide predictable and accurate differentiation for
non-responsive traffic. However, the original algorithm has a potential traffic starvation
problem, and our work provided a solution to it.
For responsive traffic, we proposed an algorithm to achieve fairness within each
service class by incorporating a fairness effort into WRED. The idea behind the fairness
effort is to achieve fair bandwidth sharing by enforcing fair buffer sharing.
Although the idea is not new, a particular issue in the design of WFRED is that we must
maintain the differentiation between service classes while also achieving fairness
within each class. A number of simulation experiments were conducted to
demonstrate the effectiveness of the new algorithm on the fairness issue.
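The fair-buffer-sharing idea can be sketched as follows. This is a simplified FRED-style per-flow accounting with hypothetical names; the actual WFRED algorithm additionally weights the per-flow limits by service class.

```python
class FairBufferSharing:
    """Accepts an arriving packet only if its flow holds no more than its
    fair share of the buffer: a simplified FRED-style per-flow accounting."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.per_flow = {}               # flow id -> packets currently buffered

    def fair_share(self):
        # Equal split of the buffer among the flows currently buffered.
        n = max(1, len(self.per_flow))
        return self.buffer_size / n

    def on_arrival(self, flow_id):
        count = self.per_flow.get(flow_id, 0)
        if count >= self.fair_share():
            return False                 # flow exceeds its fair share: drop
        self.per_flow[flow_id] = count + 1
        return True

    def on_departure(self, flow_id):
        self.per_flow[flow_id] -= 1
        if self.per_flow[flow_id] == 0:
            del self.per_flow[flow_id]   # stop accounting idle flows
```

Because a flow that never holds more than one buffered packet always sits below any positive fair share, this accounting cannot penalize the low-rate non-responsive flows described in Section 5.5.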
One of the objectives of our work was to explore basic differentiation algorithms
that serve as building blocks for differentiated services. Since it is difficult to
evaluate an algorithm without a specific service model, this thesis focused on issues in
the implementation of the RPD service model. Nevertheless, since the problems
addressed, such as traffic starvation and fairness, are universal concerns in the design
of forwarding schemes, the ideas in this thesis can be applied to other service models
such as DiffServ as well.
6.2 Future Work
In this section, we identify a number of issues that remain open questions or
have room for improvement.
First, based on observations of traffic patterns in the Internet, this thesis proposed
to distinguish transport-layer protocol types (TCP and UDP) at network nodes. This
requires layer-three components, such as routers, to be aware of the layer-four packet
header. One possible solution to this added complexity is to adhere to the DiffServ
framework. In DiffServ, each ingress edge node of a DiffServ domain is equipped with a
Multiple-Field (MF) Classifier, and each packet passing through the classifier is assigned
a code point indicating its forwarding behavior. This keeps the network core nodes
simple and efficient. In this way, the problem boils down to the definition of a set of code
points for the model, which involves backward compatibility with the code points already
assigned by the IETF.
In Chapter IV, we sketched a solution to the Simultaneous Switch
Syndrome. A concrete algorithm based on a mathematical proof is required to enhance
the robustness of the model with regard to this problem.
Although WFRED significantly mitigates the RTT sensitivity problem, an evident
discrepancy still exists in bandwidth sharing among flows with different RTTs. It has
been suggested that the problem of RTT sensitivity is fundamentally unsolvable [62].
However, this assertion has not been proved yet, and even if it is, solutions that reduce
the sensitivity to a minimum may still be of interest.
Finally, to improve the performance of the RED gateway, this thesis developed a
modification to the original RED algorithm. Unlike SRED, our modification is static, since
the performance improvement does not rely on dynamically changing the RED parameters;
consequently, the improvement is less dramatic than that of SRED. Technically, it is
possible to combine SRED and WFRED to achieve better performance of the RED
gateway. However, changing WRED parameters may distort the differentiation among
service classes. A study is needed to identify the relation between WRED parameters and
the resulting QoS differentiation; this problem could be very complicated, as shown in
[28][29]. Nevertheless, combining the strengths of SRED and WFRED may yield a
better-performing RED gateway algorithm.
It is worth noting that DiffServ and QoS are still evolving. Although a number of
proposals have been made to the IETF standards track, they are not yet widely adopted by
industry. In addition, major players, such as Cisco and Nortel Networks, have their
own flavors of QoS implementations. The model proposed in this thesis is by no means a
well-established solution catering to the full range of QoS requirements. However,
the algorithms and ideas in this work are broadly applicable and can be adapted to solve
other similar problems in the QoS area.
Bibliography
[1] Kalevi Kilkki "Differentiated Services for the Internet" Macmillan Technical
Publishing. 1999. ISBN 1-57870-132-5
[2] Andrew S. Tanenbaum "Computer Networks" Third Edition. Prentice-Hall
International, Inc. ISBN 7-302-02410-3
[3] Xiao Xipeng et al "Internet QoS: A Big Picture", IEEE Network Magazine, March
1999
[4] E. Crawley et al "A Framework for QoS-based Routing in the Internet" 1998 IETF
RFC 2386
[5] R. Braden et al "Integrated Services in the Internet Architecture: an Overview" 1994
IETF RFC 1633
[6] J. Wroclawski "Specification of the Controlled-Load Network Element Service",
1997 IETF RFC 2211
[7] S. Shenker, et al "Specification of Guaranteed Quality of Service" 1997 IETF RFC
2212
[8] Constantinos Dovrolis, et al "A Case for Relative Differentiated Services and the
Proportional Differentiation Model" IEEE Network, September 1999
[9] Y. Bernet, et al "A Framework for Differentiated Services" Internet draft,
work-in-progress, Feb. 1999 <draft-ietf-diffserv-framework-02.txt>
[10] Y. Bernet, et al "Interoperation of RSVP/Int-Serv and Diff-Serv Networks" Internet
Draft, work-in-progress. Feb. 1999 <draft-ietf-diffserv-rsvp-02.txt>
[11] S. Blake, et al "An Architecture for Differentiated Services" IETF RFC 2475
[12] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated Services
Architecture for the Internet", ftp://ftp.ee.lbl.gov/papers/dsarch.pdf, November
1997.
[13] K. Nichols, et al "Definition of the Differentiated Services Field (DS Field) in the
IPv4 and IPv6 Headers" IETF RFC 2474
[14] V. Jacobson et al "An Expedited Forwarding PHB" IETF RFC 2598
[15] Juha Heinanen et al "Assured Forwarding PHB Group" IETF RFC 2597
[16] Nabil Seddigh et al "Study of TCP and UDP Interaction for the AF PHB", Internet
draft, work-in-progress, <draft-nsbnpp-diffserv-tcpudpaf-00.txt> June 1999
[17] K. Nichols "Preliminary Simulation Evaluation of an Assured Service" 1998.
Internet draft, work-in-progress <draft-ibanez-diffserv-assured-eval-00.txt>
[18] "RTP: A Transport Protocol for Real-Time Applications" Internet draft,
work-in-progress, 1999 <draft-ietf-avt-rtp-new-04.ps>
[19] E. Crawley, et al "A Framework for QoS-based Routing in the Internet" IETF
RFC 2386
[20] R. Guerin, et al "QoS Routing Mechanisms and OSPF Extensions" Internet draft,
1998. <draft-guerin-qos-routing-ospf-05.txt>
[21] Bobby Vandalore, et al "Analysis and Modeling of Traffic in Modern Data
Communication Network" Globecom '99 http://www.cis.ohio-state.edu/~jain/papers.html
[22] Vern Paxson and Sally Floyd "Why We Don't Know How To Simulate The Internet"
[23] Andreas Nilsson, et al "Performance of QoS Agents for Provisioning Network
Resources", In Proceedings of IFIP Seventh International Workshop on Quality of
Service, http://www.cdt.luth.se/~olov/publications/IWQoS-99a.ps.gz
[24] "Internet 2 Bandwidth Broker Information"
http://www.merit.edu/working.groups/i2-qbone-bb/
[25] Sally Floyd and Van Jacobson "Link-Sharing and Resource Management Models for
Packet Networks". IEEE/ACM Transactions on Networking Vol. 3. No. 4. August
1995
[26] Jitendra Padhye, et al "Modeling TCP Throughput: A Simple Model and its
Empirical Validation" SIGCOMM '98
[27] Teunis J. Ott, et al "The stationary behavior of ideal TCP congestion avoidance",
1996. ftp.bellcore.com/pub/tjo/TCPwindows.ps
[28] Archan Misra and Teunis J Ott "The Window Distribution of Idealized TCP
Congestion Avoidance with Variable Packet Loss", Infocom'99
[29] Archan Misra, et al "The Window Distribution of Multiple TCPs with Random Loss
Queues", Globecom'99
[30] Teunis J. Ott et al "SRED: Stabilized RED" 1998 ftp.bellcore.com/pub/tjo/SRED.ps
[31] Wu-Chang Feng, et al "A Self-Configuring RED Gateway" Infocom '99
[32] Sally Floyd and Van Jacobson, "Random Early Detection Gateways for Congestion
Avoidance", IEEE/ACM Transactions on Networking. Vol. 1. No. 4. August 1993
[33] M. Shreedhar and George Varghese "Efficient Fair Queuing Using Deficit
Round-Robin" IEEE/ACM Transactions on Networking. Vol. 4. No. 3. June 1996.
[34] Abhay K. Parekh, et al "A Generalized Processor Sharing Approach to Flow Control
in Integrated Services Networks: The Single-Node Case". IEEE/ACM Transactions
on Networking. Vol. 4. No. 3. June 1996.
[35] Pawan Goyal, et al "Start-Time Fair Queuing: A Scheduling Algorithm for Integrated
Services Packet Switching Networks" IEEE/ACM Transactions on Networking. Vol.
5. No. 5. October 1997
[36] Dimitrios Stiliadis, et al "Efficient Fair Queuing Algorithms for Packet-Switched
Networks" IEEE/ACM Transactions on Networking. Vol. 6. No. 2. April 1998
[37] Dimitrios Stiliadis, et al "Rate-Proportional Servers: A Design Methodology for Fair
Queueing Algorithms" IEEE/ACM Transactions on Networking. Vol. 6. No.2. April
1998
[38] Jon C. R. Bennett, et al "Hierarchical Packet Fair Queuing Algorithms", IEEE/ACM
Transactions on Networking. Vol. 1. No. 1. October 1993
[39] Abhijit K. Choudhury, et al "Dynamic Queue Length Thresholds for Shared-Memory
Packet Switches", IEEE/ACM Transactions on Networking. Vol. 6. No. 2. April
1998
[40] D. Clark and W. Fang, "Explicit Allocation of Best Effort Delivery Service"
IEEE/ACM Transactions on Networking VOL 6. No. 4 August 1998
[41] Dong Lin and Robert Morris, "Dynamics of Random Early Detection", SIGCOMM
1997. Cannes, France.
[42] Wu-chang Feng et al "Understanding and Improving TCP Performance Over
Networks with Minimum Rate Guarantees" IEEE/ACM Transactions on Networking
Vol 7. No. 2 April. 1999
[43] Baker et al "Aggregation of RSVP for IP4 and IP6 Reservations" Internet draft,
work-in-progress. Feb. 1999 <draft-baker-rsvp-aggregation-00.txt>
[44] R. Guerin et al "Aggregating RSVP-based QoS Requests" Internet draft,
work-in-progress. Sept. 1997 <draft-guerin-aggreg-rsvp-00.txt>
[45] Higgins, Kelly Jackson, "Net for Business", Information Week, 9/28/98 Issue 702
p96
[46] Raj Jain "Effect of Number of Drop Precedences in Assured Forwarding" Globecom
'99
[47] Stephen Nadas "IBM DiffServ Implementation Experience" Internet draft,
work-in-progress, June 1999 <draft-nadas-diffserv-experience-00.txt>
[48] Xiuping Lu, et al "TCP-Friendly Traffic Conditioners for Differentiated Services"
Internet draft, work-in-progress, March 1999 <draft-azeem-tcpfriendly-diffserv-00.txt>
[49] Y. Bernet, et al "A Conceptual Model for Diffserv Routers" Internet draft,
work-in-progress, June 1999 <draft-ietf-diffserv-model-00.txt>
[50] Juha Heinanen, et al "A Two Rate Three Color Marker" Internet draft,
work-in-progress, May 1999 <draft-heinanen-diffserv-trtcm-01.txt>
[51] Juha Heinanen, et al "A Single Rate Three Color Marker" Internet draft,
work-in-progress, May 1999 <draft-heinanen-diffserv-srtcm-01.txt>
[52] Longsong Lin, et al "A Generic Traffic Conditioner" Internet draft, work-in-progress,
April 1999 <draft-lin-diffserv-gtc-00.txt>
[53] Olivier Bonaventure "A rate adaptive shaper for differentiated services" Internet
draft, work-in-progress, June 1999 <draft-bonaventure-diffserv-rashaper-00.txt>
[54] M. Seaman, et al "Integrated Service Mappings on IEEE 802 Networks" Internet
draft, work-in-progress, June 1999 <draft-ietf-issll-is802-svc-mapping-04.txt>
[55] Anoop Ghanwani "A Framework for Integrated Services Over Shared and Switched
IEEE 802 LAN Technologies" Internet draft, work-in-progress, June 1999
<draft-ietf-issll-is802-framework-07.txt>
[56] R. Callon, et al "A Framework for Multiprotocol Label Switching" Internet draft,
work-in-progress, Nov. 1997 <draft-ietf-mpls-framework-02.txt>
[57] Bruce Davie et al "Switching in IP networks " Morgan Kaufmann Publishers, Inc.
ISBN 1-55860-505-3
[58] Michalis Faloutsos, et al "QoSMIC: Quality of Service sensitive Multicast Internet
protoCol" SIGCOMM '98
[59] Hui Zhang, et al "Core-Stateless Fair Queuing: Achieving Approximate Bandwidth
Allocations in High Speed Network" SIGCOMM '98
[60] B. Braden, et al "Recommendations on Queue Management and Congestion
Avoidance in the Internet" RFC2309
[61] Gene Gaines, et al "A Survey of RSVP/QoS Implementations" July 1998
http://www.iit.nrc.ca/IETF/RSVP_survey/
[62] Sally Floyd, et al "Promoting the Use of End-to-End Congestion Control in the
Internet" IEEE/ACM Transactions on Networking 1999
[63] Son Vuong and Xizheng Shi "A Proportional Differentiation Service Model for the
Future Internet Differentiated Services" ICCT’2000 – International Conference on
Communication Technologies, Beijing, Aug. 2000.
[64] Son Vuong and Xizheng Shi "Implementing Rate Proportional Drop Rate
Differentiation with Weighted Fairness RED" CIC'2000 – International Conference
on Communications in Computing, Las Vegas, June. 2000.
Appendix 1 Acronyms
AF        Assured Forwarding
ARP       Accumulative Rate Proportional drop rate differentiation
BB        Bandwidth Broker
CBQ       Class Based Queuing
DiffServ  Differentiated Services
DRR       Deficit Round Robin
DS Field  Differentiated Services Field
EARP      Enhanced Accumulative Rate Proportional drop rate differentiation
EF        Expedited Forwarding
FIFO      First In First Out
FRED      Fairness Random Early Detection/Drop
IETF      Internet Engineering Task Force
IntServ   Integrated Services
LAN       Local Area Network
PHB       Per Hop Behavior
QoS       Quality of Service
RED       Random Early Detection/Drop
RIO       Random Early Detection/Drop with In and Out
RPD       Rate Proportional Differentiation
RTP       Real-time Transport Protocol
RTT       Round Trip Time
SRED      Stabilized Random Early Detection/Drop
TCP       Transmission Control Protocol
UDP       User Datagram Protocol
WAN       Wide Area Network
WFQ       Weighted Fair Queuing
WFRED     Weighted Fairness Random Early Detection/Drop
WRED      Weighted Random Early Detection/Drop
WRR       Weighted Round Robin