Analysis of SRPT Scheduling:
Investigating Unfairness
Nikhil Bansal
(Joint work with Mor Harchol-Balter)
Motivating Problem
[Figure: three clients sending jobs to a single server]
Aim: a "good" scheduling policy
• Low response times
• Fair
Time Sharing (PS)
Server shared equally among all the jobs:
• Low response times
• Fair
• Does not require knowledge of job sizes
Can we do better?
Shortest Remaining Processing Time (SRPT)
Optimal for minimizing mean response time.
Objections:
• Requires knowledge of job sizes
• Are the improvements significant?
• Starvation of large jobs (the biggest fear)
Questions
• Small jobs: how much better do they do?
• Big jobs: how much worse?
• How do the means compare?
• The elephant-mice property and its implications
M/G/1 Queue Framework
[Figure: arrivals entering a queue at a single server]
• Poisson arrival process with rate λ
• Job sizes (S) i.i.d. with general distribution F
• Load: ρ = (arrival rate)·E[S] = λ·E[S]
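As a quick illustration (ours, not from the talk), here is a minimal Python sketch of this framework: Poisson arrivals at rate λ and i.i.d. job sizes drawn from F. Function and parameter names are hypothetical; the empirical load should come out close to ρ = λ·E[S].

import random

# Minimal M/G/1 workload sketch (illustrative; names are ours).
# Poisson arrivals at rate lam, job sizes drawn i.i.d. from F.
def workload(lam, sample_size, n=100_000, seed=1):
    rng = random.Random(seed)
    t, arrivals, sizes = 0.0, [], []
    for _ in range(n):
        t += rng.expovariate(lam)       # exponential inter-arrival gaps
        arrivals.append(t)
        sizes.append(sample_size(rng))  # one i.i.d. draw from F
    return arrivals, sizes

arrivals, sizes = workload(lam=0.9, sample_size=lambda r: r.expovariate(1.0))
print(f"empirical load = {0.9 * sum(sizes) / len(sizes):.3f}")  # close to 0.9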
Queueing Formulas for PS
E[T(x)]: expected response time for a job of size x
E[S(x)] = E[T(x)]/x: expected slowdown
$$E[T(x)]_{PS} = \frac{x}{1-\rho} \qquad\qquad E[S(x)]_{PS} = \frac{1}{1-\rho} \qquad \text{[Kleinrock 71]}$$
Identical for all job sizes!
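A two-function Python sketch of these PS formulas (our illustration), confirming that the slowdown is the same constant for every job size:

def ps_response(x, rho):
    return x / (1.0 - rho)             # E[T(x)]_PS = x / (1 - rho)

def ps_slowdown(x, rho):
    return ps_response(x, rho) / x     # E[S(x)]_PS = 1 / (1 - rho)

for x in (0.1, 1.0, 100.0):
    print(x, ps_slowdown(x, rho=0.9))  # 10.0 for every job size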
M/G/1 SRPT
$$E[T(x)]_{SRPT} \;=\; \underbrace{\frac{\lambda\Big(\int_0^x t^2 f(t)\,dt \;+\; x^2\big(1-F(x)\big)\Big)}{2\,\big(1-\rho(x)\big)^2}}_{\text{Waiting time } E[W(x)]} \;+\; \underbrace{\int_0^x \frac{dt}{1-\rho(t)}}_{\text{Residence time } E[R(x)]}$$
where $\rho(x) = \lambda \int_0^x t f(t)\,dt$ is the load due to jobs of size up to x.
Waiting time: driven by the variance of sizes up to x.
Residence time: the job gains priority after it begins execution (its remaining size only shrinks), so
$$E[R(x)]_{SRPT} = \int_0^x \frac{dt}{1-\rho(t)} \;\le\; \frac{x}{1-\rho} \;=\; E[T(x)]_{PS}$$
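A numeric sketch of these SRPT formulas (ours, not from the talk), assuming exponential job sizes with mean 1 so that ρ(x), F(x), and the second-moment integral have closed forms; the residence-time integral is evaluated with a simple midpoint sum.

import math

LAM = 0.9  # arrival rate; E[S] = 1, so load rho = 0.9

def rho_x(x):
    # rho(x) = lam * int_0^x t e^{-t} dt = lam * (1 - e^{-x}(1 + x))
    return LAM * (1.0 - math.exp(-x) * (1.0 + x))

def srpt_wait(x):
    # E[W(x)] = lam * (int_0^x t^2 f(t) dt + x^2 (1 - F(x))) / (2 (1 - rho(x))^2)
    m2 = (2.0 - math.exp(-x) * (x * x + 2 * x + 2.0)) + x * x * math.exp(-x)
    return LAM * m2 / (2.0 * (1.0 - rho_x(x)) ** 2)

def srpt_residence(x, steps=10_000):
    # E[R(x)] = int_0^x dt / (1 - rho(t)), via a midpoint Riemann sum
    h = x / steps
    return sum(h / (1.0 - rho_x((i + 0.5) * h)) for i in range(steps))

for x in (0.5, 1.0, 3.0):
    srpt = srpt_wait(x) + srpt_residence(x)
    print(f"x={x}: SRPT {srpt:.2f} vs PS {x / (1.0 - LAM):.2f}")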
All-Can-Win under SRPT
Thm: Every job prefers SRPT to PS when load ≤ 1/2, for all job size distributions.
Proof: Know that
$$E[W(x)]_{SRPT} \;\le\; \frac{(1-\rho)}{2\,\big(1-\rho(x)\big)^2}\,\Big(E[T(x)]_{PS} - E[R(x)]_{SRPT}\Big)$$
If $(1-\rho) \le 2\,(1-\rho(x))^2$, then $E[W(x)]_{SRPT} \le E[T(x)]_{PS} - E[R(x)]_{SRPT}$.
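Spelling out the step the slide leaves implicit (our arithmetic, in LaTeX): since $\rho(x) \le \rho$,

$$\rho \le \tfrac{1}{2} \;\Longrightarrow\; 2\bigl(1-\rho(x)\bigr)^{2} \;\ge\; 2(1-\rho)^{2} \;\ge\; 2(1-\rho)\cdot\tfrac{1}{2} \;=\; 1-\rho ,$$

so the prefactor above is at most 1 and $E[W(x)]_{SRPT} \le E[T(x)]_{PS} - E[R(x)]_{SRPT}$.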
Key Observation
$$E[W(x)]_{SRPT} \;\le\; E[T(x)]_{PS} - E[R(x)]_{SRPT} \;\Longrightarrow\; E[T(x)]_{SRPT} \;\le\; E[T(x)]_{PS}$$
(since $E[T(x)]_{SRPT} = E[W(x)]_{SRPT} + E[R(x)]_{SRPT}$)
Holds for all x, if load ≤ 0.5.
What if load > 0.5?
$$E[W(x)]_{SRPT} \;\le\; E[T(x)]_{PS} - E[R(x)]_{SRPT} \quad \text{still holds if } \rho(x) \le 0.5,$$
irrespective of $\rho$.
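Filling in the arithmetic behind this claim (ours): for any $\rho \ge 0.5$,

$$\rho(x) \le \tfrac{1}{2} \;\Longrightarrow\; 2\bigl(1-\rho(x)\bigr)^{2} \;\ge\; 2\cdot\tfrac{1}{4} \;=\; \tfrac{1}{2} \;\ge\; 1-\rho ,$$

so the sufficient condition $(1-\rho) \le 2(1-\rho(x))^{2}$ holds no matter how close $\rho$ is to 1.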
The Heavy-Tailed (Elephant-Mice) Property:
The largest 1% of jobs make up at least 50% of the load.
For a distribution with the HT property,
>99% of jobs do better under SRPT.
In fact, significantly better:
Under SRPT, for every job outside the largest 1% (so $\rho(x) \le \rho/2 \le 1/2$):
$$E[S(x)]_{SRPT} \;\le\; 2 + 2\rho \qquad \longleftarrow \text{bounded by 4}$$
$$E[S(x)]_{PS} \;=\; \frac{1}{1-\rho} \qquad \longleftarrow \text{arbitrarily high as } \rho \to 1$$
Only the very largest jobs are excluded.
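A small Python illustration of this gap (ours): the SRPT bound for the "mice" stays below 4 while the PS slowdown grows without bound.

for rho in (0.5, 0.9, 0.99, 0.999):
    srpt_bound = 2 + 2 * rho     # E[S(x)]_SRPT <= 2 + 2*rho for the mice
    ps = 1 / (1 - rho)           # E[S(x)]_PS = 1/(1 - rho) for every job
    print(f"rho={rho}: SRPT bound {srpt_bound:.2f} vs PS {ps:.0f}")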
If load <= 0.5, all jobs favor SRPT.
At any load, >99% of jobs favor SRPT, if the HT property holds.
Moreover, the improvements are significant.
What about the remaining 1%, the largest jobs?
1. Bounding-the-damage theorem
Fill in…
2. $E[S]_{SRPT} \;\le\; k(\rho)\, E[S]_{PS}$,
where $k(\rho)$ combines a $\log\frac{1}{1-\rho}$ term and a $0.01\,\log(\cdots)$ term.
As $\rho \to 1$, $k(\rho) \to 0.01$.
Implication: the mean slowdown of the largest 1% of jobs under SRPT is the same as under PS.
Insert plots here:
1. Bounded Pareto (α = 1.1) at load 0.9, showing how all jobs do better.
2. Exponential at load 0.9, showing how some jobs do worse.
Other Scheduling Policies
Non-preemptive:
i. First Come First Serve (FCFS)
ii. Random
iii. Last Come First Serve (LCFS)
iv. Shortest Job First (SJF)
→ Very bad mean performance for HT workloads.
Preemptive:
i. Foreground-Background (FB)
ii. Preemptive LCFS
→ Same as PS, or trivially worse.
Overload
Add some lines for why SRPT is good here.
+ We do work on this in the paper.
Actual Implementation
Add a plot or a couple of lines.
Conclusions
• Significant mean performance improvements.
• Big jobs prefer SRPT under low-to-moderate loads.
• Big jobs prefer SRPT even under high loads for heavy-tailed distributions.
Scratch
$$\Pr\{X > x\} \;=\; x^{-\alpha}, \qquad \alpha \approx 1$$
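A numeric sanity check of the elephant-mice property (ours, not from the talk), assuming a Pareto distribution with α = 1.1 truncated to [1, 10^6], in the spirit of the BP 1.1 workload used above; sampling is by inverse transform. Because the tail is so heavy, the realized share fluctuates from run to run.

import random

def bounded_pareto(rng, alpha=1.1, lo=1.0, hi=1e6):
    # inverse-transform sample of Pr{X > x} ~ x^{-alpha}, truncated to [lo, hi]
    u = rng.random()
    return (lo**-alpha - u * (lo**-alpha - hi**-alpha)) ** (-1.0 / alpha)

rng = random.Random(0)
jobs = sorted(bounded_pareto(rng) for _ in range(200_000))
top1 = jobs[int(0.99 * len(jobs)):]   # the largest 1% of jobs
share = sum(top1) / sum(jobs)         # their share of the total work (load)
print(f"largest 1% of jobs carry {share:.0%} of the load")  # typically above 50%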
Under heavy-tailed distributions (expected slowdown; load = 0.9, bounded Pareto with α = 1.1):

Job Percentile   SRPT    PS
90%              1.28    10
99%              1.62    10
99.9%            2.08    10
99.99%           2.69    10
100%             9.54    10   ← very largest job
Under light-tailed distributions (expected slowdown; load = 0.9, exponential distribution):

Job Percentile   SRPT    PS
90%              3.17    10
95%              4.93    10
99%             11.14    10
99.9%           16.01    10