On the Scalability of Constraint Solving for Static/Offline Real-Time Scheduling

R. Gorcitz (CNES, France), E. Kofman (INRIA, France), D. Potop-Butucaru (INRIA, France), R. De Simone (INRIA, France), Thomas Carle (Brown University, USA)

December 1, 2015 - Syncron 2015 - Kiel
Outline
- Motivation
- Contribution
- Motivating example and encoding principles
- Test cases
- Solving and results
- Conclusion
Motivation
- Formal methods:
  - Many things can be done (qualitative point of view)
  - But which ones work in practice? (quantitative point of view)
- A particularly interesting question for NP-complete problems:
  - Exponential theoretical upper bound (cf. P ≠ NP)
  - Yet many instances of NP-complete problems are rapidly solved.
  - Practical interest: SAT solving, some scheduling problems, some compilation problems, etc.
- Empirical evaluation of the hardness is needed:
  - Determine under which conditions NP-complete problems are harder (or easier) to solve, on average
  - Extensive evaluations exist for 3-SAT (and some other problems): e.g., 3-SAT problems are hardest when the clause/variable ratio is around 4.3 [Selman et al. 1993]
  - Difficulty: each problem needs a separate evaluation
    - Different set of parameters to vary
    - Synthesis of meaningful test cases is application-dependent
Figure: Y-chart design methodology

Allocating and scheduling tasks and messages over a network of resources is NP-complete (for most problem instances).

Two main approaches:
- Avoiding NP-completeness: heuristics
- Accepting it: general solvers (Integer Linear Programming / SAT Modulo Theory / Constraint Programming)

How far can the second approach go, with up-to-date solvers?
- Small-size problems (can still be interesting)
- Design for scalability (which problems scale better)
Contribution

- Encoding of a variety of off-line real-time scheduling problems as SMT/ILP/CP problems
- Synthetic and realistic test cases (open, try them!):
  - A new task set generator
  - Platooning, FFT
- Empirical hardness related to various parameters:
  - Problem size
  - Preemptiveness
  - Satisfiability vs. optimization objective
  - System load
  - Dependencies
  - Pipelining
  - Resource homogeneity
  - Single-/multi-period
Motivating example and encoding principles

Control steering, speed (and image capture parameters) to follow the vehicle ahead.

Figure: Platooning dataflow graph application (Image capture → Sobel_H_0..X / Sobel_V_0..X → Histo_H_0..X / Histo_V_0..X → Detection & correction → Display)
Figure: Platooning dataflow graph application with no pipelining, no data-parallelism (X=0), and simplified labels (IC, SH, HH, DC, SV, HV, D)

Encoding: ILP/SMT/CP set of linear and Boolean constraints.

Application constraints (a solver sketch follows below):
- ICStop = ICStart + ICDuration
- SHStart ≥ ICStop
- SVStart ≥ ICStop
- ...
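These application constraints can be written almost verbatim for an SMT solver. A minimal sketch using the Z3 Python API (the slides do not fix a concrete solver or API; the variable names and the duration value are ours):

```python
# Minimal sketch: application (precedence) constraints in Z3.
# Assumes the Z3 Python bindings (pip install z3-solver).
from z3 import Int, Solver, sat

ic_start, ic_stop = Int('ICStart'), Int('ICStop')
sh_start = Int('SHStart')
sv_start = Int('SVStart')
IC_DURATION = 5  # hypothetical WCET of the image-capture task

s = Solver()
s.add(ic_start >= 0)
s.add(ic_stop == ic_start + IC_DURATION)  # ICStop = ICStart + ICDuration
s.add(sh_start >= ic_stop)                # SH reads IC's output
s.add(sv_start >= ic_stop)                # SV reads IC's output

if s.check() == sat:
    print(s.model())
```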
Figure: Platooning dataflow graph application, single period, no data-parallelism (X=0), simplified labels; three processors P1, P2, P3

Resource constraints (a solver sketch follows below):
- ICMap ∈ {P1, P2, P3}, SHMap ∈ {P1, P2, P3}, ...
- if SHMap = SVMap then SHStart ≥ SVStop or SVStart ≥ SHStop
- ...
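A sketch of how the mapping variables and the mutual-exclusion disjunction can be encoded, again with the Z3 Python API (encoding choices, names, and durations are ours):

```python
# Minimal sketch: allocation + mutual exclusion in Z3.
from z3 import Int, Or, Implies, Solver, sat

# Processor identifiers: encode P1, P2, P3 as integers 1..3.
sh_map, sv_map = Int('SHMap'), Int('SVMap')
sh_start, sh_stop = Int('SHStart'), Int('SHStop')
sv_start, sv_stop = Int('SVStart'), Int('SVStop')

s = Solver()
for m in (sh_map, sv_map):                        # XMap ∈ {P1, P2, P3}
    s.add(1 <= m, m <= 3)
s.add(sh_start >= 0, sv_start >= 0)
s.add(sh_stop == sh_start + 3, sv_stop == sv_start + 4)  # hypothetical WCETs

# If SH and SV share a processor, they must not overlap in time.
s.add(Implies(sh_map == sv_map,
              Or(sh_start >= sv_stop, sv_start >= sh_stop)))

if s.check() == sat:
    print(s.model())
```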
Realistic modeling (in plain English):
1. Each task is allocated on exactly one processor.
2. If two tasks are ordered, the second starts after the first ends.
3. If two tasks are allocated on the same processor, they must be ordered.
4. The source and destination of a dependency must be ordered.
5. The bus communication associated with a dependency (if any) must start after the source task ends and must end before the destination task starts.
6. When two dependencies both require a bus communication, these communications must be ordered.
7. If two bus communications are ordered, the first must end before the second starts.
8. All tasks must end at a date less than or equal to the schedule length.
9. ...
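Rule 8 bounds every task by the schedule length; when the schedule length itself is minimized, the satisfiability problem becomes the optimization problem discussed in the results. A minimal sketch with Z3's Optimize engine (the task set and durations are ours, for illustration only):

```python
# Minimal sketch: makespan minimization with Z3's Optimize engine.
from z3 import Int, Optimize, sat

durations = {'IC': 5, 'SH': 3, 'SV': 4}        # hypothetical WCETs
start = {t: Int(t + 'Start') for t in durations}
length = Int('ScheduleLength')

opt = Optimize()
for t, d in durations.items():
    opt.add(start[t] >= 0)
    opt.add(start[t] + d <= length)            # rule 8: every task ends by the schedule length
opt.add(start['SH'] >= start['IC'] + durations['IC'])  # SH after IC
opt.add(start['SV'] >= start['IC'] + durations['IC'])  # SV after IC

opt.minimize(length)                           # optimization objective
if opt.check() == sat:
    print(opt.model())
```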
Encoding more complex scheduling problems (added complexity):
- Pipelined scheduling:
  - Strict periodicity vs. non-strict
  - Number of start date variables
- Preemptive scheduling:
  - Each task must be statically divided into atomic chunks
- Multi-period scheduling:
  - Implicitly pipelined
  - Modulo scheduling, non-linear (modulo) operators (a linearization sketch follows below)
- Heterogeneous vs. homogeneous scheduling:
  - Symmetry arguments, etc.

Not done: accounting for preemption costs, migrations, etc.
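The slides do not detail the modulo encoding, but a standard way to linearize the non-linear modulo operator for ILP/SMT back-ends is to introduce an explicit quotient variable. A hedged sketch (variable names and the period value are ours):

```python
# Minimal sketch: encoding r = t mod P with linear constraints only,
# by introducing an integer quotient variable q (t = q*P + r, 0 <= r < P).
from z3 import Int, Solver, sat

P = 10                      # hypothetical task period
t, q, r = Int('t'), Int('q'), Int('r')

s = Solver()
s.add(t >= 0)
s.add(t == q * P + r)       # defines the quotient/remainder pair
s.add(0 <= r, r < P)        # r is the phase of t inside its period
s.add(r == 3)               # example: force the task to start at phase 3

if s.check() == sat:
    print(s.model())        # some t with t mod 10 == 3
```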
Result in the simple case:

Figure: Gantt diagram of the optimal non-preemptive, non-pipelined schedule (tasks IC, SH, HH, DC, SV, HV, D and the bus transfers IC→SV, HV→DC, SV→D mapped on P1, P2, and the bus; time axis from 0 to 0.03)
Test cases

Synthetic examples:
- Dependent task systems (algorithm of [Carle and Potop 2014])
- Independent task systems
  - Classical synthesis algorithms cannot be used to provide a basis of comparison; e.g., UUniFast of [Bini & Buttazzo 2005] cannot handle heterogeneous architectures

For each problem type (a generator sketch following these parameters is given below):
- Number of tasks n ranging from 7 to 147 with step 5
- Number of processors: ceil(n/5)
- 40/25 problem instances for each problem and size
- Periods chosen randomly in the set {5, 10, 15, 20, 30, 60}
- Average system loads: 25%, 75%, 87.5%, 125%
- WCETs generated accordingly
- Allocation: 30% probability of fixed allocation (random CPU); for open allocation, 70% probability that a task can run on a given processor (heterogeneous case)
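A minimal sketch of a task set generator following the parameters above. This is not the authors' generator (its exact procedure is not given in the slides); it only illustrates the stated knobs, and all names are ours:

```python
# Minimal sketch of a synthetic task set generator following the slide parameters.
# NOT the authors' generator; an illustration of its stated parameters only.
import math
import random

PERIODS = [5, 10, 15, 20, 30, 60]

def generate_task_set(n, target_load):
    """n tasks, ceil(n/5) processors, average system load target_load."""
    n_procs = math.ceil(n / 5)
    tasks = []
    for _ in range(n):
        period = random.choice(PERIODS)
        # WCETs "generated accordingly": per-task utilization averages
        # target_load * n_procs / n, so the system load averages target_load.
        util = random.uniform(0, 2 * target_load * n_procs / n)
        wcet = max(1, round(util * period))
        if random.random() < 0.30:   # 30%: fixed allocation on a random CPU
            allowed = [random.randrange(n_procs)]
        else:                        # open allocation: each CPU allowed w.p. 70%
            allowed = [p for p in range(n_procs) if random.random() < 0.70]
            allowed = allowed or [random.randrange(n_procs)]  # never empty
        tasks.append({'period': period, 'wcet': wcet, 'allowed': allowed})
    return tasks

print(generate_task_set(7, 0.75)[:2])
```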
Realistic examples: FFT, Platooning (we need more!)
Parameters that we tune:
- Problem type: schedulability vs. optimization
- Task set properties:
  - Preemptiveness
  - Dependent vs. independent
  - Single-/multi-period
  - Pipelining
  - Resource homogeneity
- Problem size
- Average system load (25%, 75%, 87.5%, 125%)
- Pipelining depth (for the platooning example)
Solving and results

We look at:
- Average solving time
- Timeout (1 hour) ratio

Figure: Timeout (1h) rate (preemptive, multi-periodic, heterogeneous, 75% load)
Figure: Single-period non-preemptive vs. multi-period preemptive

→ Preemptive problems are more complex.
→ But non-preemptive scheduling has its limits, too.
Figure: Homogeneous resources vs. heterogeneous resources

→ The difference is not so clear.
Figure: Complexity as a function of system load (single-period, non-preemptive, heterogeneous)

→ Underloaded and overloaded systems are easier to handle.
Figure: Optimization vs. schedulability analysis (single-period, non-preemptive, heterogeneous, 75% load)

→ Optimization problems are extremely difficult.
The dependent task synthetic examples:
- Removing dependencies always reduces solving time (intuitive explanation: the smaller search space induced by dependencies matters less than the complexity they add)
- Timeout rate:
  - dependent task systems: 55%
  - after dependency removal: 13%
The FFT example results (non-preemptive, single-period, dependent):

Figure: Pipelining: cyclic dataflow graph to DAG
Consider two types of parallelism (separately):
- Pipelining
- Data parallelism (split/merge)

Figure: Pipelining: cyclic dataflow graph to DAG

Pipelining increases the depth of the graph.
Figure: Pipelining vs. data parallelism (a pipelined DAG with successive instances of Read, SobelH/SobelV, HistoH/HistoV, Detection, and Display, versus a data-parallel version with split/merge instances SobelH0..5, SobelV0..5, HistoH0..5, HistoV0..5)
Split/merge parallelism allows the application of symmetry-breaking techniques (homogeneous architecture)! A sketch of such a constraint follows below.

Figure: Symmetry breaking helps on large graphs

The curve remains exponential, even after symmetry breaking.

Deep graphs scale much better than wide graphs (the deep graph scales up to 100 tasks in our case).
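On a homogeneous architecture, identical split/merge instances are interchangeable, so the solver explores many equivalent schedules; a classic remedy is to impose an arbitrary order on them. A hedged Z3 sketch (the slides do not show their exact constraints; names are ours):

```python
# Minimal sketch: symmetry breaking for identical split/merge instances.
# On a homogeneous platform the SobelH0..5 instances are interchangeable,
# so we order their start dates to prune equivalent schedules.
from z3 import Int, Solver, sat

N = 6  # number of identical data-parallel instances (SobelH0..5)
start = [Int(f'SobelH{i}Start') for i in range(N)]

s = Solver()
for t in start:
    s.add(t >= 0)
for i in range(N - 1):
    s.add(start[i] <= start[i + 1])  # arbitrary order: instance i starts no later than i+1

if s.check() == sat:
    print(s.model())
```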
Conclusion

Good practices for encoding scheduling problems:
- Certain problems (especially parallelized code) can exhibit a lot of symmetry. It is imperative to use symmetry-breaking techniques.
- Certain architectures (e.g., many-cores) exhibit a lot of symmetry.
- What is more complex to specify is more complex to solve.
Thank you. Questions?