
Mean Delay Analysis of
Multi Level Processor Sharing
Disciplines
Samuli Aalto (TKK)
Urtzi Ayesta (formerly CWI, now LAAS-CNRS)
IEEE Infocom 2006, Barcelona, Spain, 23.-29.4.2006
Teletraffic application: scheduling elastic flows
• Consider a bottleneck link in an IP network
– loaded with elastic flows, such as file transfers using TCP
– if RTTs are of the same magnitude, then approximately
fair bandwidth sharing among the flows
• Intuition says that
– favouring short flows reduces the total number of flows,
and, thus, also the mean delay at flow level
• How to schedule flows and how to analyse?
– Guo and Matta (2002), Feng and Misra (2003),
Avrachenkov et al. (2004), Aalto et al. (2004, 2005)
Queueing model
• Assume that
– flows arrive according to a Poisson process with rate λ
– flow size distribution is of type DHR (decreasing hazard
rate) such as hyperexponential or Pareto
• So, we have an M/G/1 queue at flow level
– customers in this queue are flows (and not packets)
– service time = the total number of packets to be sent
– attained service = the number of packets sent
– remaining service = the number of packets left
Scheduling disciplines at flow level
• FB = Foreground-Background = LAS = Least Attained Service
– Choose a packet from the flow with the fewest packets sent
• MLPS = Multi Level Processor Sharing
– Choose a packet from a flow with fewer packets sent than
a given threshold
• PS = Processor Sharing
– Without any specific scheduling policy at packet level,
the elastic flows are assumed to divide the bottleneck
link bandwidth evenly
• Reference model: M/G/1/PS
Optimality results for M/G/1
• Schrage (1968)
– If the remaining service times are known, then
• SRPT is optimal minimizing the mean delay
• Yashkov (1987)
– If only the attained service times are known, then
• DHR implies that
FB (which gives full priority to the youngest job) is
optimal minimizing the mean delay
• Remark: In this study we consider work-conserving (WC) and
non-anticipating (NA) service disciplines π such as FB, MLPS,
and PS (but not SRPT) for which only the attained service
times are known
MLPS disciplines
• Definition: MLPS discipline, Kleinrock, vol. 2 (1976)
– based on the attained service times
– N+1 levels defined by N thresholds a1 < … < aN
– between levels, a strict priority is applied
– within a level, an internal discipline (FB, PS, or FCFS) is
applied
(Figure: example discipline FCFS+PS(a) with a single threshold a)
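To make the definition concrete, here is a small sketch (function and variable names are hypothetical) of how an MLPS scheduler picks the jobs to serve: strict priority goes to the lowest non-empty level, and the level's internal discipline decides within it.

```python
import bisect

def served_jobs(jobs, thresholds, disciplines):
    """jobs: list of (attained_service, arrival_time) pairs.
    thresholds: a1 < ... < aN; disciplines: N+1 internal disciplines,
    one per level ('FB', 'PS' or 'FCFS').
    Returns the indices of the jobs that share the server."""
    # Level of a job = number of thresholds its attained service has reached.
    level = lambda a: bisect.bisect_right(thresholds, a)
    lowest = min(level(a) for a, _ in jobs)            # strict priority
    cand = [i for i, (a, _) in enumerate(jobs) if level(a) == lowest]
    disc = disciplines[lowest]
    if disc == 'PS':                                    # all share equally
        return cand
    if disc == 'FB':                                    # least attained first
        m = min(jobs[i][0] for i in cand)
        return [i for i in cand if jobs[i][0] == m]
    # FCFS: the candidate that arrived first gets the server alone
    return [min(cand, key=lambda i: jobs[i][1])]

# Example: FCFS+PS(a) with a1 = 5; jobs 1 and 2 are below the threshold,
# so level 0 has priority and its FCFS discipline picks the earliest arrival.
jobs = [(6.0, 0.0), (2.0, 1.0), (3.0, 2.0)]
assert served_jobs(jobs, [5.0], ['FCFS', 'PS']) == [1]
```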
Earlier results: comparison to PS
• Aalto et al. (2004):
– Two levels with FB and PS allowed as internal disciplines
DHR ⇒ E[T^FB] ≤ E[T^(FB+PS)] ≤ E[T^(PS+PS)] ≤ E[T^PS]
• Aalto et al. (2005):
– Any number of levels with FB and PS allowed as internal
disciplines
DHR ⇒ E[T^FB] ≤ E[T^MLPS] ≤ E[T^PS]
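The effect behind these orderings can be illustrated by simulation. Below is a sketch (all parameter values are illustrative) of an event-driven M/G/1 flow-level simulator, run with a hyperexponential (hence DHR) size distribution; both disciplines see the same arrival sample, and the assertion checks the "favour short flows" intuition: short flows see a clearly smaller mean delay under FB than under PS.

```python
import random

def simulate(arrivals, sizes, discipline):
    """Event-driven M/G/1 simulation at flow level.
    'PS': all active flows share the server equally;
    'FB': the flows with least attained service share it.
    Returns a list of (size, delay) pairs."""
    active, done, t, i, n = [], [], 0.0, 0, len(arrivals)
    while len(done) < n:
        ta = arrivals[i] if i < n else float('inf')
        if not active:
            t = ta
            active.append([sizes[i], 0.0, ta, sizes[i]])  # [remaining, attained, arrival, size]
            i += 1
            continue
        if discipline == 'PS':
            serving, catch = active, float('inf')
        else:  # FB: serve the group with least attained service
            m = min(j[1] for j in active)
            serving = [j for j in active if j[1] <= m + 1e-12]
            rest = [j[1] for j in active if j[1] > m + 1e-12]
            # time until the served group catches up with the next group
            catch = (min(rest) - m) * len(serving) if rest else float('inf')
        k = len(serving)
        dt = min(ta - t, min(j[0] for j in serving) * k, catch)
        for j in serving:
            j[0] -= dt / k
            j[1] += dt / k
        t += dt
        if i < n and ta - t <= 1e-9:                     # next event was an arrival
            t = ta
            active.append([sizes[i], 0.0, ta, sizes[i]])
            i += 1
        for j in [j for j in active if j[0] <= 1e-9]:    # completed flows
            done.append((j[3], t - j[2]))
            active.remove(j)
    return done

rng = random.Random(1)
n, lam = 4000, 0.5
arrivals, t = [], 0.0
for _ in range(n):
    t += rng.expovariate(lam)
    arrivals.append(t)
# Hyperexponential (DHR) sizes: mostly short flows, a few very long ones.
sizes = [rng.expovariate(10.0) if rng.random() < 0.9 else rng.expovariate(0.1)
         for _ in range(n)]
fb = simulate(arrivals, sizes, 'FB')
ps = simulate(arrivals, sizes, 'PS')
mean = lambda v: sum(v) / len(v)
short_fb = mean([d for s, d in fb if s < 1.0])
short_ps = mean([d for s, d in ps if s < 1.0])
# Under FB the short flows are served almost immediately,
# so their mean delay is clearly below that under PS.
assert short_fb < short_ps
```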
Idea of the proof
• Definition: Ux = unfinished truncated work with threshold x
– sum of remaining truncated service times min{S,x} of
those customers who have attained service less than x
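This definition translates directly into code; a one-line sketch with a small worked example:

```python
def unfinished_truncated_work(jobs, x):
    """jobs: list of (service_time, attained_service) pairs.
    Sum of the remaining truncated service min(S, x) - a over the
    customers whose attained service a is still below the threshold x."""
    return sum(min(s, x) - a for s, a in jobs if a < x)

# Three customers; with threshold x = 4 the third (attained 4) is excluded,
# and the first contributes min(10, 4) - 2 = 2, the second min(3, 4) - 1 = 2.
assert unfinished_truncated_work([(10.0, 2.0), (3.0, 1.0), (5.0, 4.0)], 4.0) == 4.0
```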
• Kleinrock (4.60):
E[U_x^π] = λ ∫₀ˣ T^π(y) F̄(y) dy, so that (E[U_x^π])′ = λ T^π(x) F̄(x)
– implies that
E[T^π] = ∫₀^∞ T^π(x) f(x) dx = (1/λ) ∫₀^∞ (E[U_x^π])′ h(x) dx,
where h = f/F̄ is the hazard rate
– so that
DHR & E[U_x^π] ≤ E[U_x^π′] for all x ⇒ E[T^π] ≤ E[T^π′]
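The identity chain holds for any service time distribution; a numeric sanity check is easy for M/M/1-PS, where the conditional mean delay T^π(x) = x/(1−ρ) is explicit (the parameters λ = 0.5, μ = 1 are illustrative):

```python
import math

lam, mu = 0.5, 1.0
rho = lam / mu
T = lambda x: x / (1.0 - rho)           # conditional mean delay of M/M/1-PS
f = lambda x: mu * math.exp(-mu * x)    # service time density
sf = lambda x: math.exp(-mu * x)        # survival function F-bar
h = lambda x: f(x) / sf(x)              # hazard rate (constant mu here)

dx = 1e-3
xs = [i * dx for i in range(40000)]
# Mean delay directly: integral of T(x) f(x) dx
direct = sum(T(x) * f(x) * dx for x in xs)
# Via Kleinrock: (U_x)' = lam T(x) F-bar(x), then (1/lam) * integral of (U_x)' h(x) dx
via_ux = sum(lam * T(x) * sf(x) * h(x) * dx for x in xs) / lam
assert abs(direct - 1.0 / (mu - lam)) < 1e-2   # E[T] = E[S]/(1 - rho) = 2
assert abs(direct - via_ux) < 1e-9             # the two routes agree
```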
Mean unfinished truncated work E[Ux]
(Figure: E[Ux] as a function of x for a bounded Pareto file size distribution)
The question we wanted to answer
Given two MLPS disciplines,
which one is better in the mean delay sense?
Locality result
• Proposition 1:
– For MLPS disciplines, unfinished truncated work Ux
within a level depends on the internal discipline of that
level but not on the internal disciplines of the other
levels
(Figure: E[Ux] as a function of x under FCFS+PS(a) and PS+PS(a); the curves differ only below the threshold a)
New results 1: comparison among MLPS disciplines
• Theorem 1:
– Any number of levels with all internal disciplines allowed
– MLPS derived from MLPS’ by changing an internal
discipline from PS to FB (or from FCFS to PS)
DHR ⇒ E[T^MLPS] ≤ E[T^MLPS′]
• Proof based on the locality result and sample path arguments
– Prove that, for all x and t,
U_x^MLPS(t) ≤ U_x^MLPS′(t)
– Tedious but straightforward
New results 2: comparison among MLPS disciplines
• Theorem 2:
– Any number of levels with all internal disciplines allowed
– MLPS derived from MLPS’ by splitting any FCFS level and
copying the internal discipline
DHR ⇒ E[T^MLPS] ≤ E[T^MLPS′]
• Proof based on the locality result and mean value arguments
– Prove that, for all x,
E[U_x^MLPS] ≤ E[U_x^MLPS′]
– An easy exercise since, in this case, we have explicit
formulas for the mean conditional delay
New results 3: comparison among MLPS disciplines
• Theorem 3:
– Any number of levels with all internal disciplines allowed
– The internal discipline of the lowest level is PS
– MLPS derived from MLPS’ by splitting the lowest level
and copying the internal discipline
DHR ⇒ E[T^MLPS] ≤ E[T^MLPS′]
• Proof based on the locality result and mean value arguments
– Prove that, for all x,
E[U_x^MLPS] ≤ E[U_x^MLPS′]
– In this case, we don’t have explicit formulas for the mean
conditional delay
– However, by truncating the service times, this simplifies
to a comparison between PS+PS and PS
Note related to Theorem 3
• Splitting any higher PS level is still an open problem
(contrary to what we thought in an earlier phase)!
• Conjecture 1:
– Let π = PS+PS(a) and π′ = PS+PS(a′) with a ≤ a′
– For any x > a′,
(T^π)′(x) ≤ (T^π′)′(x)
• This would imply the result of Theorem 3 for any PS level
Recent results
• For IMRL (Increasing Mean Residual Lifetime) service times:
– FB is not necessarily optimal with respect to the mean
delay
• a counterexample where FCFS+FB and PS+FB are
better than FB
• in fact, there is a class of service time distributions
for which FCFS+FB is optimal
– If only FB and PS allowed as internal disciplines, then
• any such MLPS discipline is better than PS
• Note: DHR ⊂ IMRL