Bounded Delay Scheduling with Packet Dependencies

Michael Markovitch
Joint work with Gabriel Scalosub
Department of Communications Systems Engineering
Ben-Gurion University
Real Time Video Streaming
Sandvine, “Global Internet phenomena report – 1H 2013”
Real Time Video Streaming
• Video streams are comprised of frames
– Bursty traffic
• Video frames can be large (>>1500B)
– Fragmentation
• Interdependency between different packets
– Dropping some packets ⇒ the whole frame is lost
• Packets MUST arrive in a timely manner
Current situation & Related work
• Best practices:
– DiffServ AF queue for video streams
– Admission control (average throughput)
• Number of streams can be large
– Average throughput < channel access rate
– Overlapping bursts >> momentary channel rate
• Related work
– FIFO queuing with dependencies
– Deadline scheduling without dependencies
[MPR, 2011] [MPR, 2012] [EHMPRR, 2012] [KPS, 2013] [SML, 2013] [EW, 2012] [AMS, 2002]
Deadline scheduling
• Every packet has a deadline
• Focus on scheduling
• Queue size assumed unbounded
• More information (than FIFO)
Buffer and Traffic Model
• Single non-FIFO queue of infinite size (one hop)
• Discrete time, with each slot split into three substeps:
– Arrival substep: packets arrive
– Delivery substep: one packet is delivered
– Cleanup substep: packets may be dropped
• Every packet:
– Is one of multiple packets in a frame
– Has an arrival time, a deadline, a size and a value
• Goal: Maximize the value of completed frames
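The three-substep slot structure above can be sketched in code. This is an illustrative sketch of the model, not code from the paper; the `pick_packet` callback stands in for whatever scheduling policy is being analyzed and is my own naming.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    frame_id: int      # the frame this packet belongs to
    arrival: int       # arrival time slot
    deadline: int      # last slot in which delivery is still useful

def run_slot(t, buffer, arrivals, pick_packet):
    """One discrete time slot, split into the three substeps of the model.

    `pick_packet(t, buffer)` is the scheduling policy: it selects at most
    one packet from the buffer to deliver (hypothetical callback, not part
    of the original model description).
    """
    # Arrival substep: new packets join the (non-FIFO, unbounded) buffer.
    buffer.extend(arrivals)

    # Delivery substep: at most one packet is delivered per slot.
    delivered = pick_packet(t, buffer)
    if delivered is not None:
        buffer.remove(delivered)

    # Cleanup substep: expired packets may be dropped.
    buffer[:] = [p for p in buffer if p.deadline > t]
    return delivered
```

Any concrete policy (greedy, proactive greedy, earliest-deadline-first) then plugs in as `pick_packet`.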
Buffer and Traffic Model
• Frames of uniform size – k
• No redundancy
• Packets of uniform size and value – WLOG 1

[Figure: an example frame of k = 12 packets]
Buffer and Traffic Model
• Uniform slack – d
• Packets can be scheduled on arrival
– Deadline(p) = Arrival(p) + d

[Figure: an arrival sequence and a corresponding schedule; each packet p is delivered between Arrival(p) and Deadline(p)]
Buffer and Traffic Model
• Finite burst size – b
[Figure: an arrival sequence in which at most b packets arrive per slot]
Buffer and Traffic Model
• Recap:
– Frames of uniform size - 𝑘
– Uniform slack – d
– Finite burst size – b
– No redundancy
– Packets of uniform size and value – WLOG 1
• Goal: Maximize number of completed frames
• NP-hard off-line problem
Competitive analysis
• Worst case performance of online algorithms: an algorithm 𝐴 is 𝑐-competitive if, for every instance 𝐼 of problem 𝑃,
𝐴(𝐼) ≥ (1/𝑐) ⋅ 𝑂𝑃𝑇(𝐼)
• 𝐴 – algorithm
• 𝐼 – instance
• 𝑃 – problem
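The guarantee A(I) ≥ (1/c)·OPT(I) can be checked empirically over a batch of instances. This small helper (my own illustration, not from the slides) returns the smallest c that held on every instance seen:

```python
def competitive_ratio_bound(alg_values, opt_values):
    """Empirical c-competitiveness over a set of instances:
    the smallest c such that ALG(I) >= OPT(I) / c held on every
    instance. Instances where ALG earned 0 but OPT > 0 would make
    c unbounded, so they are skipped here for simplicity."""
    assert len(alg_values) == len(opt_values)
    c = 1.0
    for alg, opt in zip(alg_values, opt_values):
        if alg > 0:
            c = max(c, opt / alg)
    return c
```

Note this only lower-bounds the true competitive ratio: worst-case instances may not appear in the sample.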
A proactive greedy algorithm
• Ensures completion of at least one frame
– Holds packets of only one frame
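The single-committed-frame idea can be sketched as follows. This is an illustrative sketch, not the exact algorithm from the paper: the `state` bookkeeping and the rule for choosing which frame to commit to (first arriving packet) are my own simplifications.

```python
from collections import namedtuple

Packet = namedtuple("Packet", ["frame_id"])

def proactive_greedy_step(state, arrivals, k):
    """One slot of a proactive-greedy-style policy. The buffer only ever
    holds packets of a single committed frame, which is what guarantees
    that the committed frame is completed before any other is served.

    state: {'committed': frame id or None,
            'buffer':    pending packets of the committed frame,
            'done':      packets of the committed frame already delivered}
    k: uniform frame size (number of packets per frame)
    """
    # Commit to a new frame when idle: take the first arriving packet's frame.
    if state['committed'] is None and arrivals:
        state['committed'] = arrivals[0].frame_id
        state['done'] = 0
    # Proactively drop packets of all other frames.
    state['buffer'].extend(p for p in arrivals
                           if p.frame_id == state['committed'])
    # Delivery substep: at most one packet per slot.
    delivered = state['buffer'].pop(0) if state['buffer'] else None
    if delivered is not None:
        state['done'] += 1
        if state['done'] == k:        # frame complete: free to recommit
            state['committed'] = None
    return delivered
```

With uniform slack d and burst size b, committing to one frame at a time is what lets the algorithm guarantee the committed frame finishes by its deadlines.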
Proactive greedy - example
[Figure: an arrival sequence and the schedule produced by proactive greedy]
Proactive greedy – competitiveness
• Competitive ratio – details in the paper
• Not far off from the lower bound
A better greedy algorithm
Why?
Greedy algorithm - analysis
• Competitive ratio – details in the paper
• We have a matching lower bound
• Reminder: for proactive greedy – details in the paper
What about the deadlines?
• Deadlines not used explicitly
• Bad news?
– Worst case performance matches lower bound
• Good news
– There is room for more interesting algorithms
– Improve general performance
• How can deadlines be utilized?
– Several approaches presented in the paper
Simulation
• Three online algorithms:
– “Vanilla” greedy algorithm
– Greedy algorithm with slack tie breaker
– Opportunistic algorithm
• And the best current offline approximation
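The slides do not spell out the slack tie breaker, so the following is one plausible reading (my own illustration): deliver greedily by frame progress, and break ties toward the packet with the least remaining slack. The `remaining` bookkeeping map is assumed, not from the slides.

```python
from collections import namedtuple

Packet = namedtuple("Packet", ["frame_id", "deadline"])

def pick_greedy_slack(t, buffer, remaining):
    """Select a packet greedily by frame progress, breaking ties by slack.

    Prefer the frame with the fewest packets still missing (closest to
    completion); among equally advanced frames, prefer the packet whose
    deadline is nearest. `remaining[f]` counts the packets of frame f
    not yet delivered.
    """
    live = [p for p in buffer if p.deadline >= t]   # ignore expired packets
    if not live:
        return None
    # Tuple key: primary = frame progress, secondary = remaining slack.
    return min(live, key=lambda p: (remaining[p.frame_id], p.deadline - t))
```

Dropping the second key component recovers a "vanilla" greedy selection, which is why the tie breaker is a natural refinement to compare against in simulation.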
Simulation
• Simulation details:
– Average throughput = channel access rate
– 50 streams at 30FPS
– Each stream starts at a random time
• Between 0 and 33ms
– Random (short) time between successive packets
• “jitter” between packets of a single frame
Simulation results
[Figure: simulation results]
To sum up
• First work considering both deadline
scheduling and packet dependencies
• Very simplified model
– Yet hard
• Improvements to the model
– Non-uniform slack
– Randomization
– Redundancy
Questions?
• [email protected]