Traffic - IC

Traffic Control
Prof. Nelson Fonseca
State University of Campinas
Traffic


Traffic – bits carried;
Ultimate goal of a network – to transport bits.
Traffic Control


Support of the Quality of Service
requirements of the applications;
Efficient use of network resources.
Quality of Service

Perception of the quality of information
transport.
[Figure: applications at each end perceive QoS across the network (REDE)]
Quality of Service


QoS parameters numerically express specific aspects of the perceived quality;
Common parameters:


Mean delay;
Loss rate;
Delay in Packet Switching
Networks
Packets experience delay along their end-to-end path
Four delay components


Node processing:
checksum;
table lookup.
Queueing:
wait for the output link;
depends on node congestion.
Transmission.
Propagation.
[Figure: a packet traveling from node A to node B, showing processing, queueing, transmission, and propagation delays]
Delay in Packet Switching
Networks
Transmission delay:
R = bandwidth (bps)
L = packet size (bits)
Time to put the bits on the link = L/R
Propagation delay:
d = length of the link
s = propagation speed (~2×10^8 m/s)
Propagation delay = d/s
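A quick numeric sketch of these two components (all values are illustrative, not from the slides):

```python
# Sketch: transmission and propagation delay for one link.
R = 10e6          # link bandwidth in bps (assumed value)
L = 12_000        # packet size in bits (1500 bytes, assumed)
d = 100e3         # link length in meters (assumed)
s = 2e8           # propagation speed in m/s (from the slide)

transmission_delay = L / R   # time to push all bits onto the link
propagation_delay = d / s    # time for a bit to travel the link

print(f"transmission: {transmission_delay*1e3:.3f} ms")  # 1.200 ms
print(f"propagation:  {propagation_delay*1e3:.3f} ms")   # 0.500 ms
```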
Queueing Delay



R = link bandwidth (bps)
L = packet size (bits)
a = arrival rate of packets
Traffic intensity = La/R

La/R ~ 0: low mean delay;
La/R -> 1: delay grows rapidly;
La/R > 1: arriving work exceeds the processing capacity; the queue grows without bound.
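A small sketch of the traffic-intensity check (numbers are illustrative assumptions):

```python
# Sketch: classify the queueing regime from the traffic intensity La/R.
R = 1e6      # link bandwidth in bps (assumed)
L = 8_000    # packet size in bits (assumed)
a = 100      # packet arrival rate in packets/s (assumed)

rho = L * a / R   # traffic intensity
if rho < 1:
    print(f"intensity {rho:.2f}: queue is stable; delay grows as rho -> 1")
else:
    print(f"intensity {rho:.2f}: arrivals exceed capacity; queue grows without bound")
```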
Jitter



Delay variation;
Impacts the playback of voice applications;
A buffer at the receiver ameliorates the jitter effect.
An Example:
ATM QoS Parameters





Maximum transfer delay;
Peak-to-peak delay variation;
Cell loss rate;
Rate of severely errored cell blocks;
Rate of cells erroneously inserted.
QoS Parameters




Number of packets consecutively lost;
Mean time between faults;
Mean time to gain access;
Mean time to recover from faults.
Traffic Descriptors


Quantitatively characterize the pattern of the flow of bits;
Used to estimate the resource demands of a flow.
An Example:
ATM Traffic Descriptors




Peak Cell Rate (PCR): maximum rate of cell transmission;
Sustainable Cell Rate (SCR): upper bound on the average cell rate;
Burst Tolerance (BT);
Maximum Burst Size (MBS): maximum number of cells transmitted back-to-back at the peak rate.
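These descriptors are not independent; the ATM traffic management specification relates the burst tolerance to the other three (stated here for reference):

$$BT = (MBS - 1)\left(\frac{1}{SCR} - \frac{1}{PCR}\right)$$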
An Example:
ATM Traffic Descriptors
[Figure: cell rate over time, with bursts at PCR, mean rate bounded by SCR, and burst duration bounded by BT]
Class of Service


Users choose among agreements on the quality of transport offered by the service provider;
Classes of service usually exist at the network/link layer to support users' QoS expectations.
An Example:
ATM Class of Services






CBR;
NRT – VBR;
RT – VBR;
ABR;
UBR;
GFR.
CBR




Constant Bit Rate;
A fixed amount of bandwidth is allocated for the duration of the virtual circuit;
For real-time applications that are sensitive to delay and require a minimum bandwidth;
Voice, video, circuit emulation.
CBR
[Figure: cell rate constant at PCR over time]
RT – VBR




Real-Time Variable Bit Rate;
Time-varying bandwidth requirements;
For applications that need a delay bound;
Voice and video.
NRT – VBR



Non-Real-Time Variable Bit Rate;
For loss-sensitive data applications;
Time-varying bandwidth requirements.
VBR
[Figure: cell rate over time, with peaks at PCR, sustained rate SCR, and burst tolerance BT]
ABR



Available Bit Rate;
Bandwidth allocation depends on network feedback;
Not suited to delay-sensitive applications.
ABR
[Figure: ABR cell rate varying over time between MCR and PCR, with SCR marked]
UBR




Unspecified Bit Rate;
Best-effort service;
IP over ATM;
No QoS guarantees.
GFR

Guaranteed Frame Rate;
Enhanced best-effort service;
Minimum bandwidth guarantees;
Deals with frames instead of cells.
Class of Service
                             CBR   RT-VBR   NRT-VBR   ABR   UBR   GFR
Cell Loss Ratio (CLR)         Y       Y        Y
Cell Delay Variation (CDV)    Y       Y
Cell Transfer Delay (CTD)     Y       Y
Peak Cell Rate (PCR)          Y       Y        Y        Y     Y     Y
CDV Tolerance (CDVT)          Y       Y        Y        Y     Y     Y
Sustainable Cell Rate (SCR)           Y        Y
Burst Tolerance (BT)                  Y        Y
Minimum Cell Rate (MCR)                                 Y           Y
Congestion

The network lacks the capacity to meet the Quality of Service requirements of applications.
[Figure: QoS requirements map, cell loss rate (10^-4 down to 10^-10) versus maximum cell delay variation (1 to 1000), locating voice, interactive data, interactive video, and image file transfer]
Congestion Control

Congestion control mechanisms work at
different time scales and can be either
reactive or pro-active
Congestion Control Mechanisms





Admission control
Policing
Selective Discard
Active queue management
Scheduling
Connection Admission Control (CAC)

A decision-making process: decides whether or not to accept a flow (connection) into a network domain;
Decisions must consider the maintenance of the QoS requirements of already admitted flows as well as the support of the QoS requirements of requesting flows.
Admission Control
[Figure: the user tells the service provider "My requirements are..." and "My traffic parameters are..."]
Admission Control
[Figure: traffic descriptors feed a traffic representation, from which the total resource demand is estimated]
Admission Control
Two approaches:
Parametric (analytical models);
Measurement-based.
Admission Control
Parametric Approach
$$\lim_{\varepsilon \to 0}\left\{\frac{W(t+\varepsilon u)-W(t)}{\varepsilon^{H(t)}}\right\}_{u\in\mathbb{R}} = \left\{B_{H(t)}(u)\right\}_{u\in\mathbb{R}}$$

$$\hat{A}(t) = \bar{a}\,t + \kappa\,\sigma\,t^{H}, \qquad \kappa = \sqrt{-2\log\varepsilon}$$
Admission Control
Measurement Based

Traffic Envelopes MBAC:


source
Arrival Envelope: describes the peak rate over
defined intervals.
Service Envelope: describes the minimum
service received by a traffic class as a function
of interval length.
ingress
router
egress
router
destination
Admission Control
Measurement Based

Admission control condition, with:

R(t) = mean of the arrival envelope
σ² = variance of the arrival envelope
S(t) = mean of the service envelope
ψ² = variance of the service envelope
P = peak rate of the new flow
D = delay bound
ε = violation probability
Admission Control
Measurement Based

Stability condition:
Admission Control
Measurement Based

Time-Window/Measured Sum MBAC:

The decision algorithm admits a new flow with load f if:
ν + f < γ · C
f: load of the new flow;
ν: measured load of existing traffic;
C: channel capacity;
γ: user-defined utilization target.
Admission Control
Measurement Based

Time-Window/Measured Sum MBAC:
ν + f < γ · C
ν + f < γ · B · η
B: maximum data rate used by the network;
η: estimate of the channel efficiency.
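A minimal sketch of the measured-sum test (function and variable names are illustrative, not from the slides):

```python
# Sketch: Measured Sum admission control decision.
def admit(new_flow_rate: float, measured_load: float,
          capacity: float, target_utilization: float) -> bool:
    """Admit the flow only if the measured load plus the new flow's
    declared rate stays below the utilization target of the link."""
    return measured_load + new_flow_rate < target_utilization * capacity

# Example: 10 Mb/s link, 90% target, 7.5 Mb/s measured, 1 Mb/s request.
print(admit(1e6, 7.5e6, 10e6, 0.9))   # True: 8.5 Mb/s < 9 Mb/s
```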
Admissible Region
Centralized Admission Control
Distributed Admission Control
[Figure: Path messages travel from the sender through Router 1 ... Router N to the receiver; Resv messages return along the reverse path]
Interdomain Admission Control
[Figure: bandwidth brokers BB0-BB4 negotiate admission across domains ISP1-ISP3, from source to destination]
Architectures for QoS Provisioning in the Internet
Admission Control
Wireless Network

Admission control must be aware of:
the signal-to-interference-and-noise ratio;
handoff failures.
Policing

Keeps track of the transmission of a flow (connection) during its whole duration so that the contract between user and provider can be enforced.
Leaky Bucket


Tokens are generated at a constant rate (the leak rate);
A packet must consume a token to enter the network.
Leaky Bucket
[Figure: a token generator fills a bucket of capacity σ tokens at a mean rate of ρ tokens/second; arriving packets consume tokens and enter the network at a bounded maximum rate]
Smoothing
[Figure: the same token bucket with an output queue: packets wait for tokens, smoothing the flow into the network]
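A minimal token-bucket policer sketch (class and parameter names are assumptions; ρ and σ as in the figures):

```python
import time

class TokenBucket:
    """Sketch of a token-bucket policer: tokens accrue at rate rho
    (tokens/s) up to capacity sigma; a packet conforms only if a
    token is available when it arrives."""
    def __init__(self, rho: float, sigma: float):
        self.rho = rho            # token generation rate
        self.sigma = sigma        # bucket capacity
        self.tokens = sigma       # start with a full bucket
        self.last = time.monotonic()

    def conforms(self) -> bool:
        now = time.monotonic()
        # Accrue tokens for the elapsed time, capped at the capacity.
        self.tokens = min(self.sigma, self.tokens + self.rho * (now - self.last))
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1      # the packet consumes one token
            return True
        return False              # non-conforming: drop or mark

bucket = TokenBucket(rho=100.0, sigma=10.0)  # illustrative values
print(bucket.conforms())                      # True while tokens remain
```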
Limitations of the
Leaky Bucket Mechanism



Only two parameters to control:
the leak rate controls the mean arrival rate;
the bucket size controls the burst length.
The GCRA Algorithm

Generic Cell Rate Algorithm;
Two equivalent formulations:
virtual scheduling;
continuous-state leaky bucket.
GCRA (I,L)




I: increment;
L: tolerance;
TAT: theoretical arrival time;
LCT: last conformance time – arrival time of the last conforming cell.
GCRA – Virtual Scheduling
On the arrival of cell k at time ta(k):
If TAT < ta(k): the cell is conforming; set TAT = ta(k) + I.
Else, if TAT <= ta(k) + L: the cell is conforming; set TAT = TAT + I.
Else: the cell is non-conforming (excess cell); TAT is unchanged.
GCRA – Continuous-State Leaky Bucket
On the arrival of cell k at time ta(k):
Y = X - (ta(k) - LCT)
If Y < 0: set Y = 0.
If Y > L: the cell is non-conforming (excess cell).
Else: X = Y + I; LCT = ta(k); the cell is conforming.
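A compact sketch of the continuous-state form, transcribing the steps above (parameter values are illustrative):

```python
class GCRA:
    """Continuous-state leaky-bucket form of GCRA(I, L)."""
    def __init__(self, increment: float, limit: float):
        self.I = increment   # nominal inter-cell time
        self.L = limit       # tolerance
        self.X = 0.0         # bucket level
        self.LCT = 0.0       # last conformance time

    def conforms(self, ta: float) -> bool:
        Y = self.X - (ta - self.LCT)   # drain since the last conforming cell
        if Y < 0:
            Y = 0.0
        if Y > self.L:
            return False               # excess (non-conforming) cell
        self.X = Y + self.I            # fill for the conforming cell
        self.LCT = ta
        return True

gcra = GCRA(increment=1.0, limit=0.5)               # illustrative parameters
print([gcra.conforms(t) for t in (0.0, 0.4, 2.0)])  # [True, False, True]
```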
Selective Discard

Discards packets in congestion situations;
Buffer management + push-out policy.
Selective Discard
Buffer Management
[Figure: buffer organizations illustrated with packets of classes 1 and 2: complete sharing with push-out, complete partitioning, and partial sharing]
Selective Discard
Push out Policy
[Figure: push-out variants illustrated with packets of classes 1 and 2: first-to-arrive-first-discarded, last-to-arrive-first-discarded, and random discard]
Packet Discard in
Cell-switched Networks
[Figure: packets segmented into numbered cells crossing a cell switch]
Packet Discard in
Cell-switched Networks
Tail Drop;
Early Packet Discard;
Early Packet Discard with Hysteresis.
[Figure: queue snapshots comparing the three discard schemes]
Active Queue Management



Conceived to detect incipient congestion;
Discards packets so that TCP connections reduce their transmission rate and consequently avoid congestion;
Drop Tail does not avoid global synchronization and resource monopolization.
Active Queue Management



Based on heuristics: RED, FRED, ARED, BLUE;
Based on control theory: PI-AQM, H2-AQM;
Based on optimization theory: REM.
RED




Random Early Detection;
Standardized by the IETF;
Two thresholds:
If the average queue length is smaller than the first threshold, no packet is discarded.
RED


If the average queue length is between the first and the second threshold, packets are discarded with linearly increasing probability;
If the average queue length is larger than the second threshold, all packets are discarded.
RED
[Figure: marking/discard probability versus average queue size: zero below minth (normal operation zone), rising linearly up to maxp between minth and maxth (congestion avoidance zone), and 1 above maxth (congestion control zone)]
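A sketch of the RED discard curve from the figure (parameter names follow the figure; the threshold values are illustrative):

```python
def red_drop_probability(avg_queue: float, min_th: float,
                         max_th: float, max_p: float) -> float:
    """RED drop probability as a function of the average queue size."""
    if avg_queue < min_th:
        return 0.0                      # normal operation zone
    if avg_queue >= max_th:
        return 1.0                      # congestion control zone
    # Congestion avoidance zone: linear ramp from 0 to max_p.
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Illustrative thresholds: min_th=5 and max_th=15 packets, max_p=0.1.
print([round(red_drop_probability(q, 5, 15, 0.1), 3) for q in (3, 10, 20)])
# [0.0, 0.05, 1.0]
```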
RED





Difficulty in setting the threshold values;
Performance degradation under a large number of flows (bursty traffic);
Bias against connections with small windows;
Does not promote proportional discard;
Does not deal with unresponsive flows.
ARED



Adaptive Random Early Detection;
Attempts a more efficient tuning of the parameters;
The rate of increase of the discard probability is proportional to the variation of the queue length.
FRED

Flow Random Early Drop;
Tries to enforce fairness among flows;
Keeps per-flow variables;
Each flow is guaranteed a minimum number of packets in the buffer; this minimum is adjusted according to the offered load;
Each flow cannot exceed a maximum number of packets in the queue.
FRED

Keeps track of how much each flow tries to exceed its allowed maximum number of packets and penalizes offenders by reducing their maximum;
Prevents non-responsive flows from monopolizing the queue;
Not scalable.
BLUE

Tries to avoid underutilization of the queue under low load and excessive packet loss under high load;
The discard probability changes as a function of the load;
The steps for increasing/decreasing the discard probability, as well as the minimum interval between consecutive updates, are tunable parameters.
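A sketch of the BLUE update rule (step sizes and the freeze time are illustrative assumptions):

```python
class Blue:
    """Sketch of BLUE: one drop probability driven by load events,
    not by queue length."""
    def __init__(self, d1=0.02, d2=0.002, freeze_time=0.1):
        self.p = 0.0                 # marking/discard probability
        self.d1, self.d2 = d1, d2    # increase / decrease steps
        self.freeze_time = freeze_time
        self.last_update = -freeze_time

    def on_queue_overflow(self, now: float):
        # Packet loss: the link is overloaded, raise the probability.
        if now - self.last_update >= self.freeze_time:
            self.p = min(1.0, self.p + self.d1)
            self.last_update = now

    def on_link_idle(self, now: float):
        # Idle link: the load is too low, lower the probability.
        if now - self.last_update >= self.freeze_time:
            self.p = max(0.0, self.p - self.d2)
            self.last_update = now
```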
SBlue

Stochastic Blue;
Uses a hash scheme to classify flows and sets the discard probability according to BLUE;
Penalizes non-responsive flows.
FPQ




Flow Proportional Queuing;
Aims at keeping the queue length proportional to the number of flows;
Estimates the number of flows and sets the discard probability according to this estimate;
A minimum of eight packets is guaranteed per connection, and the bandwidth-delay product is considered in the computation of the discard probability.
AQM based on Optimization



Maximizes the aggregate utility function subject to link capacity constraints;
Transmission rates are the primal variables and discard probabilities are the dual variables;
The Adaptive Virtual Queue (AVQ) emulates a virtual queue to determine the discard probability; the capacity of the virtual queue is a function of the offered load. The use of virtual queues tends to stabilize the real queue.
AQM based on Optimization


REM – the price is taken as the congestion level at the link;
Exponential RED – the discard probability increases exponentially with the length of a virtual queue that is smaller than the true queue.
AQM based on Control Theory

Congestion seen as a control problem
AQM based on Control Theory





AQM controllers based on the classical P (Proportional), I (Integral), PI (Proportional-Integral), PD (Proportional-Derivative), or PID (Proportional-Integral-Derivative) controllers;
The I component reduces the error of the system state in relation to the equilibrium point;
The D component decreases the response time;
The P and I components work on past error values and cannot predict future errors;
The D component is able to predict future errors.
AQM based on Control Theory



RED is a P-type controller;
Stabilized RED (SRED) stabilizes the queue independently of the number of active connections, which it estimates;
DRED uses the difference between the actual queue length and the target one to change the value of the discard probability.
VRC

Virtual Rate Control aims at a target virtual rate, a concept similar to AVQ, but AVQ uses a virtual queue length. VRC targets the input rate as well as the queue length, and uses a PID controller to compensate for the difference between the actual and the target input rate.
AQM based on Control Theory



Yellow controls the difference between the link capacity and the network load;
The Receding Horizon AQM policy tries to compensate for the feedback delay;
SMVS-AQM and VS-AQM utilize Sliding-Mode Variable-Structure (SMVS) control, which has an adaptive control structure that makes it insensitive to the congestion control parameters.
RIO


Standardized for the DiffServ framework;
Uses two RED drop curves: one for in-profile traffic and the other for out-of-profile traffic.
RIO
[Figure: marking/discard probability versus average queue size, with separate curves for out-of-profile traffic (out_minth, out_maxth, out_maxp) and in-profile traffic (in_minth, in_maxth, in_maxp)]
Scheduling



Defines the transmission order of packets;
Provides performance guarantees, i.e., bounds on QoS metrics such as delay, jitter, and bandwidth;
Isolates traffic from different classes.
Scheduling Discipline







First-come-First-Served
Priority
Virtual Clock
Generalized Processor Sharing
Weighted Fair Queuing
Earliest-Due-Date
….
Scheduling Disciplines


Work-conserving disciplines – the server is never idle if there are packets in the queue;
Non-work-conserving disciplines – the server can be idle even if there are packets in the queue, since packets may not be eligible for transmission.
Conservation Law

A work-conserving discipline can only reallocate
delays among the flows
$$\sum_i \rho_i q_i = \text{constant}$$

ρ_i: mean utilization due to connection i;
q_i: mean waiting time of the i-th connection.
Work Conserving Discipline







First Come First Served
Priority
Virtual Clock
Generalized Processor Sharing
Weighted Fair Queuing
Worst-case weighted fair Queueing
Self-clocked Fair Queuing
Virtual Clock




Isolates flows as in TDM systems;
Each connection has its own clock with its own time unit;
The connection's clock advances according to its time unit;
The time unit reflects the contract negotiated with the network provider.
Virtual Clock
[Figure: virtual clocks of three connections advancing with ticks of 1/2, 1/5, and 1/5]
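A sketch of virtual-clock stamping (the class structure and the rates are assumptions):

```python
import heapq

class VirtualClockScheduler:
    """Sketch: each flow i has a reserved rate; a packet of flow i is
    stamped VC_i = max(now, VC_i) + length / rate_i and packets are
    sent in increasing stamp order."""
    def __init__(self, rates: dict):
        self.rates = rates                   # flow id -> reserved rate (bps)
        self.vc = {f: 0.0 for f in rates}    # per-flow virtual clocks
        self.queue = []                      # (stamp, flow, length)

    def enqueue(self, flow, length_bits, now):
        self.vc[flow] = max(now, self.vc[flow]) + length_bits / self.rates[flow]
        heapq.heappush(self.queue, (self.vc[flow], flow, length_bits))

    def dequeue(self):
        return heapq.heappop(self.queue)     # earliest virtual-clock stamp first

sched = VirtualClockScheduler({"a": 2e6, "b": 5e5})   # illustrative rates
sched.enqueue("a", 8000, now=0.0)
sched.enqueue("b", 8000, now=0.0)
print(sched.dequeue()[1])   # "a": higher reserved rate -> earlier stamp
```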
Generalized Processor Sharing


Fair Queueing
Bandwidth shared according to the weights
of the connections

$$g_i = \frac{\phi_i\, C}{\sum_{j=1}^{N} \phi_j}$$
Generalized Processor Sharing
$$\frac{S(i,\tau,t)}{S(j,\tau,t)} \ge \frac{\phi(i)}{\phi(j)}$$

N connections share a link of capacity C;
Each connection i has its own weight φ(i);
S(i,τ,t): bits served from connection i during the interval [τ,t].
Generalized Processor Sharing

In a backlogged system, each flow receives a minimum amount of bandwidth proportional to its weight; unused bandwidth is distributed among flows according to their weights:

$$g(i) = \frac{\phi(i)\, C}{\sum_j \phi(j)}$$
Generalized Processor Sharing

If connection (flow) i is policed by a leaky bucket with parameters [σ(i), ρ(i)], then an upper bound on the end-to-end delay equal to σ(i)/ρ(i) is guaranteed.
Generalized Processor Sharing







GPS is an idealized scheduling discipline: it assumes that a bit is infinitely divisible, whereas the packet is the actual unit of transmission.
Several disciplines emulate GPS:
Weighted Fair Queuing
Worst-Case Fair Queuing
Self-Clocked Fair Queuing
Start-time fair queuing
Weighted Fair Queuing

Packet version of GPS;
Also called Packet-by-Packet Fair Queueing;
Packets are served in the order in which they would finish transmission were they served by a GPS discipline.
How does WFQ emulate GPS?



Packets are served in increasing order of finishing time, called the finish number;
Round number – the number of rounds of service a bit-by-bit round-robin scheduler has completed at a given time;
The duration of a round is proportional to the number of active connections.
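A sketch of the finish-number bookkeeping (simplified: in a real implementation the round number advances nonlinearly as connections become active or idle):

```python
# Sketch: WFQ finish-number computation at one queueing point.
class WFQ:
    def __init__(self, weights: dict):
        self.weights = weights               # flow id -> weight phi_i
        self.finish = {f: 0.0 for f in weights}

    def stamp(self, flow, length_bits, round_number):
        # F(i,k) = max(F(i,k-1), R(t)) + L / phi_i; serve smallest F first.
        self.finish[flow] = (max(self.finish[flow], round_number)
                             + length_bits / self.weights[flow])
        return self.finish[flow]

wfq = WFQ({"voice": 2.0, "data": 1.0})             # illustrative weights
print(wfq.stamp("voice", 1000, round_number=0.0))  # 500.0
print(wfq.stamp("data", 1000, round_number=0.0))   # 1000.0 -> voice goes first
```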
Weighted Fair Queueing


In WFQ a packet can finish at most Lmax/C behind an equivalent GPS server, where Lmax is the size of the largest packet sent in the connection;
The end-to-end delay over m hops is bounded by:

$$D_i \le \frac{\sigma_i + (m-1)L_{\max}}{\rho_i} + \sum_{j=1}^{m}\frac{L_{\max}}{C_j}$$
Worst-Case Fair Queueing




A packet in a WFQ system can be far ahead of its equivalent GPS system;
A WF²Q server considers only those packets that would already have started transmission in a fair queueing system, and selects the one that would finish first in that system;
The difference between WF²Q and FQ is at most one packet;
Same delay bound as WFQ.
Self-Clocked Fair Queuing



Uses a more efficient finish-number computation;
Packets that arrive at an empty queue use the finish number of the packet in service instead of the round number;
Short-term unfairness and large worst-case delay.
Start-time Fair Queuing




Avoids short-term unfairness and large worst-case delay;
Computes both finish and start numbers;
Connections are served in order of start number;
The start number is set either to the finish number of a packet or to the round number (when the server is empty).
Delay-Earliest-Due-Date



Each packet has a deadline to leave the server, set according to the contract between user and provider;
Packets are served in increasing order of their arrival time plus their deadline;
Provides an upper bound on delay.
Non-Work-Conserving Discipline
Hierarchical Round Robin;
Jitter-Earliest-Due-Date;
Rate-Controlled Scheduler.
Hierarchical Round Robin



Multilevel framing strategy;
The server cycles through the different levels and serves packets;
If it cycles through a slot with no packet, the server stays idle instead of serving a packet from another level.
Jitter-Earliest-Due-Date



Eliminates jitter;
After being served, a packet is stamped with the difference between its deadline and its actual finish time;
A regulator at the entrance of the next server holds the packet for that time before it is made eligible.
Rate Controlled Scheduler

By coupling a regulator and a scheduler, flexibility in providing bandwidth, delay, and jitter guarantees is achieved.
Explicit Congestion Notification




Dissociates congestion notification from packet loss;
Routers must be able to identify flows that use ECN;
The two endpoints involved negotiate the use of ECN at connection setup;
The router marks the IP datagram.


The recipient receives the notification in the IP header and echoes it to the sender in the ACK packet;
The ToS field of the IP header is used:

ECT  CE
 0    0   No support for ECN
 0    1   Indicates that the sender supports ECN
 1    0   Indicates that the recipient supports ECN
 1    1   Indicates congestion