
Optimal CPU Allocation to a Set of Control Tasks with Soft Real–Time Execution Constraints*

Daniele Fontanelli (Dipartimento di Scienza e Ingegneria dell'Informazione, University of Trento, Via Sommarive 5, Povo, Trento, Italy)
Luigi Palopoli (Dipartimento di Scienza e Ingegneria dell'Informazione, University of Trento, Via Sommarive 5, Povo, Trento, Italy)
Luca Greco (LSS – Supélec, 3 rue Joliot-Curie, 91192 Gif sur Yvette, France)
[email protected], [email protected], [email protected]
ABSTRACT

We consider a set of control tasks sharing a CPU and having stochastic execution requirements. Each task is associated with a deadline: when this constraint is violated, the particular execution is dropped. Different choices of the scheduling parameters correspond to different probabilities of deadline violation, which can be translated into different levels of the Quality of Control experienced by the feedback loop. For a particular choice of the metric quantifying the global QoC, we show how to find the optimal choice of the scheduling parameters.

Categories and Subject Descriptors

J.7 [Computer Applications]: Computers in Other Systems—Command and Control, Real–Time

Keywords

Real–Time Scheduling, Embedded Control, Stochastic Methods

*This work was partially supported by the HYCON2 NoE, under grant agreement FP7-ICT-257462.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
HSCC'13, April 8–11, 2013, Philadelphia, Pennsylvania, USA.
Copyright 2013 ACM 978-1-4503-1567-8/13/04 ...$15.00.

1. INTRODUCTION

The challenging issues posed by modern embedded control systems are manifold. First, there is a clear trend toward an aggressive sharing of hardware resources, which calls for an efficient allocation of resources. Second, the use of modern sensors, such as video cameras and radars, generates a significant computation workload. More importantly, the computation activities required to extract the relevant information from these sensors are heavily dependent on the input data set. Thereby, computation and communication requirements can change very strongly in time. As an example, in Figure 1 we show the histograms of the computation time of the different activations of a tracking task used in a visual servoing application.

Figure 1: Histograms for a tracking application (probability vs. computation time, roughly 15–60 ms).

As evident from the picture, the distribution is centered around 25 ms but has very long tails (some executions required more than 50 ms). Another important element of information is that the level of reliability required of the software implementation of an embedded control system is necessarily very high.

The time varying computation delays and the scheduling interference (i.e., the additional delay that a task suffers due to the presence of higher priority tasks) undermine the classical assumptions of digital control: constant sampling time and fixed delay in the feedback loop. The standard solution to this problem relies on the combination of the time–triggered approach [13] and of real–time scheduling theory. The former nullifies the fluctuations in the loop delay by forcing the communications between plant and controllers to take place at precise points in time. The latter ensures that all activities will always be able to deliver their results at the planned instants. This approach is sustainable in terms of resource utilisation only if the worst case execution requirements remain close to the average case, which is not a realistic assumption for the new generations of control applications.

Many researchers have challenged the idea that a control system can be correctly designed only if we assume a regular timing behaviour for its implementation. They have investigated how to make the design robust against an irregular timing behaviour of the implementation, focusing on such effects as packet dropout [19, 15], jitter in computation [17, 14] and time varying delays [12]. Other authors have sought suitable ways to modify the scheduling behaviour in overload conditions. In this research line falls the work of Marti [3],
who proposes to re-modulate the task periods in response to an overload condition. In the same direction goes the work of Lemmon and co-workers [6]. More recently, the advent of a new class of control algorithms, event-triggered [24, 30] or self-triggered [18, 28], has dismissed the very idea of periodic sampling and advocates the execution of the control action only when necessary, while innovative models for control applications based on the idea of anytime computing have been explored [11, 2, 23].
In this work, we consider a set of control tasks sharing a CPU and target the problem of finding the scheduling parameters that strike the best trade–off between the control performance experienced by the different tasks. Our starting point is the consideration that hard real–time scheduling is not necessarily a suitable choice for a large class of control systems. This is consistently revealed by both experimental studies [5, 4] and recent theoretical results [10].
Finding the best period assignment that maximises the overall performance of a set of tasks is a well known problem, investigated by several authors assuming a fixed priority or an Earliest Deadline First (EDF) scheduler [27, 22, 31]. Other authors have approached the problem of control oriented scheduling of a CPU or of a bus using off-line techniques [26, 20, 29, 16]. In the present work, we assume the presence of a soft real–time scheduling algorithm such as the resource reservations [25, 1]. A key feature of these algorithms is their ability to control the amount of CPU devoted to each task (bandwidth). This allows us to estimate the probability of meeting a deadline for a given bandwidth allocation with fixed periods and, ultimately, to expose the dependence of the Quality of Control (QoC) on the scheduling parameters. We consider as a metric for QoC the asymptotic covariance of the output of the controlled system, assuming that its evolution is affected by process noise. The choice of this metric enables us to set up an optimisation problem where the global QoC is given by the worst QoC achieved by the different control tasks, which is optimised under the constraint that the total allocated bandwidth does not exceed unity. In a preliminary version of this work [9], we restricted our analysis to the control of scalar systems with a simplified model of computation. This made for an analytical solution of the problem. In this paper, we extend the reach of our analysis to the case of multi–variable systems controlled with a rather standard model of computation. We characterise the region of feasible choices for the bandwidth of each task (choices that guarantee mean square stability). Then we identify conditions under which the problem lends itself to a very efficient numeric search for the optimum. The resulting optimisation algorithm is the key contribution of the paper.
The paper is organised as follows. In Section 2, we offer a description of the problem with a clear statement of assumptions and constraints. In Section 3, we show the computation of the QoC metric adopted in this paper. In Section 4, we show the closed form expression for the optimal solution, along with the particular form that it takes in some interesting special cases. In Section 5 we collect some numeric examples that clarify the results of the paper. Finally, in Section 6 we offer our conclusions and announce future work directions.
2. PROBLEM PRESENTATION

We consider a set of real–time tasks sharing a CPU. In such a setting, a task τi, i = 1, . . . , n, is a piece of software implementing a controller. Tasks have a cyclic behaviour: when an event occurs at the start time ri,j, task τi generates a job Ji,j (the j-th job). In this work, we will consider periodic activations: ri,j = jTi with j ∈ Z≥0. Job Ji,j finishes at the finishing time fi,j, after using the resource for a computation time ci,j. If job Ji,j is granted exclusive use of the resource, then fi,j = ri,j + ci,j. However, as multiple job requests can be active at the same time, a scheduling mechanism is required to assign, at each moment, the resource to one of them.

Figure 2: Scheduling mechanism and model of execution.
2.1 Scheduling Mechanism
As a scheduling mechanism, we advocate the use of resource reservations [25]. Each task τi is associated with a
reservation (Qi , Ri ), meaning that τi is allowed to execute
for Qi (budget) time units in every interval of length Ri
(reservation period ). The budget Qi is an integer number
varying in the interval [0, . . . , Ri ]. The bandwidth allocated
to the task is Bi = Qi /Ri and it can be thought of as the
fraction of CPU time allocated to the task.
In this model, the execution of job Ji,j requires at most ⌈ci,j/Qi⌉ Ri computation units. For the sake of simplicity, we will assume a small Ri. Therefore, the time required to complete the execution can be approximated as ci,j/Bi, which corresponds to a fluid partitioning of the CPU where the task executes as if on a "dedicated" CPU whose speed is a fraction Bi of the speed of the real processor.
For our purposes, it is convenient to choose a reservation period that is an integer sub-multiple of the task period: Ti = Ni Ri, Ni ∈ N. There are several possible implementations of the resource reservations paradigm, one of the most popular being the Constant Bandwidth Server (CBS) [1].
The CBS operates properly as long as the following condition is satisfied:

    Σ_{i=1}^{n} Bi ≤ 1.    (1)
This constraint captures the simple fact that the cumulative allocation of bandwidth cannot exceed 100% of the computing power. Besides, it also expresses that the CBS is able to utilise the whole availability of computing power. This comes from the fact that the CBS is based on an EDF scheduler [1]. Figure 2 reports a pictorial example of the proposed mechanism for three consecutive jobs of task τi. There are Ni = 3 reservation periods. The computation times of the three consecutive jobs are respectively 3, 2 and 1 units.
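To make the bookkeeping concrete, the reservation accounting and the admission test (1) can be sketched in a few lines of Python (a minimal illustration; the task parameters below are invented, not taken from the paper):

```python
import math

def reservation_periods(c, Q):
    """A job with computation time c, served Q units per reservation
    period, needs ceil(c / Q) reservation periods to complete."""
    return math.ceil(c / Q)

def fluid_finish_time(c, B):
    """Fluid approximation: with bandwidth B = Q/R and a small R, the
    job completes roughly c / B time units after it starts."""
    return c / B

def admits(reservations):
    """CBS admission test (1): the cumulative bandwidth of the
    reservations (Q_i, R_i) must not exceed 100% of the CPU."""
    return sum(Q / R for (Q, R) in reservations) <= 1.0

# Three hypothetical reservations (Q_i, R_i): bandwidths 0.2, 0.3, 0.4.
tasks = [(2, 10), (3, 10), (4, 10)]
print(admits(tasks))                  # True: 0.9 <= 1
print(reservation_periods(7, 2))      # 4 periods of budget 2
print(fluid_finish_time(7, 0.2))      # 35.0 time units on a 20% "CPU"
```
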
2.2 Model of Execution

The model of execution we adopt in this paper is time triggered [13], which is represented in Figure 2 and described
as follows. At time ri,j a new sample is collected, triggering the activation of a job. The execution of the activity requires the allocation of ci,j time units of the resource (which need not be contiguous). When the execution finishes (at time fi,j), the output data is stored in a temporary variable and is released (i.e., applied to the actuators) only when the deadline di,j expires. The deadline di,j is set equal to ri,j + Di, where Di is the relative deadline, which we will set equal to an integer multiple of the reservation period Ri. After being released, the output is held constant until the next update, using the customary ZoH approach.
A very important feature of our scheme is that if a job does not terminate within its deadline (i.e., fi,j > di,j), its execution is cancelled and the output on the actuator remains constant.
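The deadline-driven release rule can be sketched as a small simulation (illustrative only: the per-job delays below are made-up inputs, not the output of a scheduler):

```python
def simulate(T, D, delays, compute_output):
    """Time-triggered execution model: job j samples at r_j = j*T and
    its result is applied at r_j + D only if its delay is at most D;
    otherwise the job is cancelled and the actuator holds (ZoH)."""
    held = 0.0                    # value currently applied to the actuator
    released = []
    for j, delta in enumerate(delays):
        if delta <= D:            # job met its deadline: refresh the hold
            held = compute_output(j)
        # if delta > D the job is dropped and `held` is left unchanged
        released.append((j * T + D, held))
    return released

# Four jobs with made-up delays; the third one misses the deadline D = 1.
trace = simulate(T=1.0, D=1.0, delays=[0.4, 0.9, 1.3, 0.7],
                 compute_output=lambda j: float(j + 1))
print(trace)  # [(1.0, 1.0), (2.0, 2.0), (3.0, 2.0), (4.0, 4.0)]
```

Note how the third release repeats the previous value: the actuator simply keeps holding the last output that met its deadline.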
2.3 The Control Problem

Each task τi is used to control a controllable and observable discrete–time linear system:

    xi(k + 1) = Ai xi(k) + Fi ui(k) + wi(k)
    yi(k) = Ci xi(k)                                   (2)

where xi(k) ∈ R^{nxi} represents the system state, ui(k) ∈ R^{mi} the control inputs, wi(k) ∈ R^{nxi} a noise term and yi ∈ R^{pi} the outputs. One step of this discrete–time system corresponds to the evolution of the system across one temporal unit, which is set equal to the reservation period Ri. The ui(k) vector is updated at the end of each control task, according to the model of execution described above.

By Pi(k) = E{xi(k) xi(k)^T} we denote the variance of the state resulting from the action of the noise term and of the control action. When P̄i = lim_{k→∞} Pi(k) < +∞ (i.e., the variance converges) for a given control algorithm, we say that the closed loop system is mean square stable and we use P̄i as a QoC metric. Clearly, the smaller the value of P̄i, the better the control quality.

2.4 Problem Formulation

The amount of allocated bandwidth Bi quantifies the QoS that the task receives and translates into the delays introduced in the feedback loop and into the probability of violating the deadline. Therefore, different values of the bandwidth determine different values of the QoC given by P̄i. We can identify a critical value B̲i for the minimum bandwidth that the task has to receive in order for the feedback loop that it implements to be mean square stable.

The objective of this paper is to identify the allocation of resources between the different feedback loops that maximises the system-wide global Quality of Control. In mathematical terms, the problem can be set up as follows:

    min Φ(P̄1, . . . , P̄n)
    subject to  Σ_{i=1}^{n} Bi ≤ 1,                    (3)
                B̄i ≥ Bi ≥ B̲i,

where the upper bound B̄i ensures that the task does not receive more bandwidth than it needs to achieve probability 1 of meeting the deadline. The cost function Φ(·) gathers the individual QoC performance related to each task into a global performance index. In this paper, we consider as a cost function Φ(P̄1, . . . , P̄n) = max_i φi(P̄i), where the φi(·) are suitably defined cost functions used to measure the QoC of each individual task. The constraint Σ_{i=1}^{n} Bi ≤ 1 comes from the requirement (1) of the CBS scheduling algorithm, whilst the constraint Bi ≥ B̲i enforces mean square stability, B̲i being the critical bandwidth. The analysis presented below could easily be generalised to weighted norms. For instance, we could consider Φ(P̄1, . . . , P̄n) = Σ_{i=1}^{n} li φi(P̄i), where li weighs the relevance of the i-th feedback loop.

3. COST FUNCTION AND CONSTRAINTS FOR THE OPTIMISATION PROBLEMS

In the time–triggered semantics, the control tasks are activated on a periodic basis: ri,j = jNi, where Ni ∈ N expresses the number of computation units composing a period. This section presents the QoC analysis for a generic task. Since we are referring to a single control task, we will drop the i subscript. For its j-th job, the task receives as an input the state sampled at time jN: x(jN). For notational simplicity, we will henceforth denote this quantity by x̂(j) ∈ R^{nx}. The delay introduced by the j-th job will be denoted by δj = fj − rj. Because we adopt the time–triggered model of computation, the output is released at time rj + D if δj ≤ D, and is not released at all if δj > D. For the sake of simplicity, in this paper we assume D = T. The use of the ZoH model requires holding the data constant until the next output is released, whence the need for an additional state variable ζ ∈ R^m to store the control value. Therefore, the unitary delayed open loop system is given by

    x̂(j + 1) = A^N x̂(j) + Σ_{i=0}^{N−1} A^{N−i−1} F ζ(j) + Σ_{i=0}^{N−1} A^{N−i−1} w(Nj + i),
    ζ(j + 1) = u(j),

or, in matrix form,

    [x̂; ζ](j + 1) = [ A^N   Σ_{i=0}^{N−1} A^{N−i−1} F ;  0   0 ] [x̂; ζ](j) + [ 0 ; Im ] u(j) + [ w̃(j) ; 0 ],   (4)

with w̃(j) = Σ_{i=0}^{N−1} A^{N−i−1} w(Nj + i). Let

    z(j + 1) = Az z(j) + Fz y(j)
    u(j) = Cz z(j) + Gz y(j)

be the stabilizing controller for the system (4), where z ∈ R^{nz} and y(j) = C x̂(j) ∈ R^p. Define the augmented state x̃ = [x̂^T, ζ^T, z^T]^T ∈ R^{nc}, with nc = nx + nz + m. The Schur stable closed loop dynamic matrix of the system can then be written as

    Ac = [ A^N    Σ_{i=0}^{N−1} A^{N−i−1} F    0 ;
           Gz C   0                            Cz ;
           Fz C   0                            Az ]  ∈ R^{nc×nc}

in the nominal condition (δj ≤ T). On the contrary, if the job is cancelled (δj > T), the system evolves with the open
Algorithm 1 Find Critical Probability
1: function FindMu(Ac, Ao)
2:    AC = Ac^[2]
3:    AO = Ao^[2]
4:    µ̃ = 1
5:    µ0 = 0
6:    while not(µ̃ − µ0 < tµ ∧ µ̃ < 1 − tµ) do
7:        µ = (µ̃ − µ0)/2 + µ0
8:        A2 = (1 − µ) · AO + µ · AC
9:        if SchurStableSegment(AC, A2) then
10:           µ̃ = µ
11:       else
12:           µ0 = µ
13:       end if
14:   end while
15:   return µ̃
16: end function
loop matrix

    Ao = [ A^N   Σ_{i=0}^{N−1} A^{N−i−1} F   0 ;
           0     Im                          0 ;
           0     0                           Inz ]  ∈ R^{nc×nc}.
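For concreteness, the two matrices can be assembled numerically. The sketch below uses an invented scalar plant and a static output-feedback controller written in the (Az, Fz, Cz, Gz) form above; none of the numbers come from the paper:

```python
import numpy as np

# Invented scalar plant x+ = 1.2 x + u + w with C = 1 and N = 1, closed
# by a static output feedback u = -0.4 y in (Az, Fz, Cz, Gz) form.
A = np.array([[1.2]]); F = np.array([[1.0]]); C = np.array([[1.0]]); N = 1
Az = np.zeros((1, 1)); Fz = np.zeros((1, 1)); Cz = np.zeros((1, 1))
Gz = np.array([[-0.4]])

AN = np.linalg.matrix_power(A, N)
# Sum over i of A^{N-i-1} F, as in the matrices Ac and Ao above.
S = sum(np.linalg.matrix_power(A, N - i - 1) @ F for i in range(N))

nx, m, nz = A.shape[0], F.shape[1], Az.shape[0]
Ac = np.block([[AN,     S,                 np.zeros((nx, nz))],
               [Gz @ C, np.zeros((m, m)),  Cz],
               [Fz @ C, np.zeros((nz, m)), Az]])
Ao = np.block([[AN,     S,                 np.zeros((nx, nz))],
               [np.zeros((m, nx)),  np.eye(m), np.zeros((m, nz))],
               [np.zeros((nz, nx + m)),        np.eye(nz)]])

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(Ac) < 1, rho(Ao))   # closed loop Schur; open loop unstable
```

As expected, Ac is Schur while Ao inherits the unstable open loop pole (1.2) and the marginal holds on ζ and z.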
In the following, we assume that the noise w(·) and the
computation time {cj }j∈Z≥0 are independent identically distributed (i.i.d.) random processes, mutually independent
and both independent from the state. The mutual independence is, in our evaluation, a realistic assumption insofar as
the tasks are used to implement independent control loops.
The i.i.d. nature of {cj }j∈Z≥0 is far less obvious. However,
in many practical cases of interest (e.g., visual servoing), the
analysis of the temporal behaviour of a real–time task that
approximates the {cj }j∈Z≥0 process as i.i.d. (ignoring its
correlation structure) produces a close approximation of the
behaviour that is reported in the experiments [21].
The noise w(·) is also assumed to have zero mean and variance E{w(k) w(k)^T} = W ∈ R^{nx×nx}. Depending on the choice of the bandwidth B, we get different distributions for the i.i.d. process representing the delay. Define µ ≜ P[δj ≤ T], the probability of meeting the deadline. The variance P ∈ R^{nc×nc} of the state x̃ is given by

    P(j + 1) = E{x̃(j + 1) x̃(j + 1)^T}
             = µ E{(Ac x̃(j) + v(j))(Ac x̃(j) + v(j))^T} + (1 − µ) E{(Ao x̃(j) + v(j))(Ao x̃(j) + v(j))^T},   (5)

where v(j) ≜ [w̃(j)^T, 0]^T ∈ R^{nc}. Taking into account the mutual independence of the stochastic processes and the fact that w(·) has null mean and constant variance W, the equation above can be written as

    P(j + 1) = µ Ac P(j) Ac^T + (1 − µ) Ao P(j) Ao^T + H,   (6)

where

    H = [ Σ_{i=0}^{N−1} A^{N−i−1} W (A^{N−i−1})^T   0 ;  0   0 ].

The relation (6) describes the dynamics of the covariance. Two issues are relevant to this paper: 1) find the values of µ that ensure the convergence of the covariance to a steady state value (we define the infimum of these values as the critical probability); 2) find a measure of the covariance which could effectively be used as a cost function in the optimisation problem.

3.1 Estimating the critical probability

The computation of the critical probability amounts to finding the infimum value of µ for which the system is mean square stable. Using Kronecker product properties we can write the dynamics (6) as

    vec(P(j + 1)) = (µ Ac^[2] + (1 − µ) Ao^[2]) vec(P(j)) + vec(H),   (7)

where M^[2] ≜ M ⊗ M and vec(·) is the linear operator producing a vector by stacking the columns of a matrix. This is a discrete–time linear time–invariant system in the state vec(P(j)) ∈ R^{nc²}. Hence, it admits a steady state solution w.r.t. the constant input vec(H) ∈ R^{nc²} if

    max_i |λi(µ Ac^[2] + (1 − µ) Ao^[2])| < 1,   (8)

where by λi(M) we mean the i-th eigenvalue of M. If such a condition is verified, we look for a steady state solution P̄ solving the algebraic equation (see (6))

    P̄ = µ Ac P̄ Ac^T + (1 − µ) Ao P̄ Ao^T + H.   (9)

Using again the Kronecker product we can write the unique solution of (9) as

    vec(P̄) = (I − µ Ac^[2] − (1 − µ) Ao^[2])^{−1} vec(H).   (10)

It is worth noting that the inverse in (10) exists due to condition (8). Ensuring mean square stability for the system (4) turns into a problem of Schur stability for the matrix pencil µ Ac^[2] + (1 − µ) Ao^[2]. In other words, we search for the critical probability µ̃c ≜ inf_{µ̄∈[0,1]} {µ̄ | ∀µ ≥ µ̄, (8) is satisfied}. For our problem, a solution with µ̃c < 1 always exists, since Ac is Schur and the eigenvalues are continuous functions of µ. In practice, instead of looking for the infimum in the continuum set [0, 1], we can fix an accuracy level tµ and look for the minimum in the discrete set {0, tµ, . . . , tµ⌊1/tµ⌋}. Algorithm 1 implements a dichotomic search in the µ space to find µ̃ ≜ min_{µ̄∈{0, tµ, ..., tµ⌊1/tµ⌋}} {µ̄ | ∀µ ≥ µ̄, (8) is satisfied}. The function SchurStableSegment(A, B) implements the algebraic conditions on a finite number of matrices of [8] and provides a positive answer if all the matrices in the matrix pencil with vertices A and B are Schur.

3.2 Measuring the steady state covariance

As discussed in the next section, expressing the QoC through a measure of the steady state covariance matrix P̄ given in equation (9), which enjoys a monotonicity property w.r.t. the probability µ, significantly simplifies the solution of the optimisation problem. As a measure of the steady state covariance, we consider the trace of P̄, which is a function of the probability µ, i.e., φ : R → R with

    φ(µ) ≜ Tr{P̄(µ)}.   (11)
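The critical probability search and the steady state covariance (10) can be sketched numerically. The matrices below are invented toy data and, as a simplification, the sketch tests the spectral radius of the pencil directly at each µ instead of calling the segment test SchurStableSegment of [8]:

```python
import numpy as np

# Toy data (invented): Ac Schur stable, Ao unstable, H as in (6).
Ac = np.array([[0.5, 0.1],
               [0.0, 0.4]])
Ao = np.array([[1.1, 0.0],
               [0.0, 1.05]])
H = np.diag([0.01, 0.02])

pencil = lambda mu: mu * np.kron(Ac, Ac) + (1 - mu) * np.kron(Ao, Ao)
rho = lambda M: max(abs(np.linalg.eigvals(M)))

def critical_probability(tol=1e-4):
    """Dichotomic search for the smallest mu making the pencil of (8)
    Schur, in the spirit of Algorithm 1 (radius tested pointwise)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rho(pencil(mid)) < 1:
            hi = mid          # stable: the critical value is below mid
        else:
            lo = mid          # unstable: move the lower bound up
    return hi

def steady_state_cov(mu):
    """Steady state covariance from (10): vec(P) = (I - pencil)^-1 vec(H),
    using column-major vec so that vec(A P A^T) = (A kron A) vec(P)."""
    n = Ac.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - pencil(mu),
                           H.reshape(-1, order="F"))
    return vecP.reshape((n, n), order="F")

mu_c = critical_probability()
P = steady_state_cov(0.95)
print(mu_c, np.trace(P))   # mu_c close to 0.21875 for this toy pencil
```

For this (triangular) toy pencil the spectral radius is monotone in µ, so plain bisection suffices; in general the segment test of [8] is what guarantees correctness.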
The monotonicity of φ(µ) can be analysed by studying the sign of dφ(µ)/dµ. To this end, we first notice that Tr{AB} = vec(A^T)^T vec(B), which leads to Tr{P̄} = vec(I)^T vec(P̄), from which
    dφ(µ)/dµ = vec(I)^T · d vec(P̄(µ))/dµ,

and, plugging in (10), this yields

    dφ(µ)/dµ = vec(I)^T [ d(I − µ Ac^[2] − (1 − µ) Ao^[2])^{−1} / dµ ] vec(H)
             = vec(I)^T [ d S(µ)^{−1} / dµ ] vec(H),

with S(µ) ≜ I − µ Ac^[2] − (1 − µ) Ao^[2]. We finally have

    dφ(µ)/dµ = vec(I)^T S(µ)^{−1} (Ac^[2] − Ao^[2]) S(µ)^{−1} vec(H).   (12)

Relation (12) leads to expressing the trace as a polynomial function of µ. The monotonicity of the trace can be assessed by studying the sign of this expression. To this end we can apply Sturm's theorem [7] to verify that dφ(µ)/dµ does not have roots in the range [µ̃, 1]. The result of this numeric test strongly depends on the specific controller–system pair, and analytical results with general validity are difficult to find. However, over a rather broad set of thousands of randomly synthesized open–loop unstable systems, we found that for more than 95% of them Tr{P̄(µ)} is a non–increasing function for µ ∈ [µ̃, 1] whenever the system is closed in loop with an LQG controller¹. We remark that a non–increasing function can be made strictly decreasing by simply adding to it a strictly decreasing function of arbitrarily small amplitude. We can consider, for instance, φ(µ) ≜ Tr{P̄(µ)} + ε(1 − µ)/(1 − µ̃) for arbitrarily small ε > 0. For this reason, henceforth we make the following assumption.

Assumption 1. The function φ : R → R used to measure the QoC of each controlled system is strictly decreasing in the range [µ̃, 1].

¹If the test (12) fails for some of the controlled systems to be scheduled, they can receive full bandwidth, letting the optimisation work on all the others.

4. OPTIMAL SOLUTION

As assumed in Section 3, the computation time {ci,j}j∈Z≥0 of the generic task i is a stochastic process, hence its cumulative distribution function Γci : R≥0 → [0, 1] is non–decreasing. Since {ci,j}j∈Z≥0 is a computation time, it is perfectly reasonable to assume that its cumulative distribution function is strictly increasing. Let us consider the probability µi,j = P{fi,j ≤ di,j} of the j-th job of the i-th task finishing within its deadline. By means of the fluid approximation we can write µi,j = P{ci,j/Bi ≤ Di} = P{ci,j ≤ Di Bi} and, using the cumulative distribution, µi,j = Γci(Di Bi). Since {ci,j}j∈Z≥0 is an i.i.d. process, the probability does not change with the job and we can drop the subscript j: µi = Γci(Di Bi). Because of the strict monotonicity of the cumulative distribution, its inverse is well defined and we can write

    Bi = Γci^{−1}(µi) / Di,

for every µi ∈ [0, 1]. According to this (strictly increasing) relation, we can define the minimum bandwidth required to achieve mean square stability as B̲i ≜ Γci^{−1}(µ̃i)/Di, where µ̃i is the critical probability computed with Algorithm 1 in Section 3. Analogously, we can define the maximum bandwidth necessary for the i-th task to finish within its deadline with probability 1 as B̄i ≜ Γci^{−1}(1)/Di. By means of these definitions, the feasibility set of the optimisation problem (3) can be written as the following polytope:

    B ≜ { (B1, . . . , Bn) ∈ R^n | B̲i ≤ Bi ≤ B̄i,  Σ_{i=1}^{n} Bi ≤ 1 }.   (13)

In our framework the deadlines Di are fixed, thus the probability µi is a strictly increasing function of the bandwidth Bi: µi = Γci(Di Bi). Let us recall that the cost function φi(·) of any task i, introduced in the previous section, is a decreasing function of µi. The composition φi ∘ Γci(·) is then a strictly decreasing function of the bandwidth Bi in the set [B̲i, B̄i]. With a slight abuse of notation, in what follows we will write φi(·) directly as a function of Bi to refer to this composition.

The optimisation problem (3) can finally be written as

    min_{B∈B} Φ(B) = min_{B∈B} max_{i∈{1,...,n}} φi(Bi),   (14)

where B ≜ (B1, . . . , Bn) and B is given in (13).

The optimisation problem (14) can present some special cases that deserve a separate analysis. For example, it is said to be degenerate if there exist i, j ∈ {1, . . . , n} such that φi(B̄i) ≥ φj(B̲j). In such a case φi(·) is said to dominate φj(·). Due to the strictly decreasing nature of the function φi(·), which attains its minimum for the bandwidth B̄i, we then have φi(Bi) ≥ φj(Bj) for any Bi ∈ [B̲i, B̄i] and any Bj ∈ [B̲j, B̄j]. For this reason Φ(B) = max_{h∈{1,...,n}} φh(Bh) = max_{h∈{1,...,n}\{j}} φh(Bh). In a degenerate case the bandwidth associated with a dominated function does not influence the cost function, but only the constraints defining the feasibility set. Therefore, it can be fixed to a value ensuring the largest feasibility set. It can easily be verified that, if φj(·) is the dominated function, such a value is Bj = B̲j. It is worth noting that, with this position, the feasibility set is now an (n−1)–dimensional facet of the original polytope B. If the optimisation problem has more than one dominated function, say n′ < n functions with indices in the set I′ ⊂ {1, . . . , n}, then the feasibility set is the (n−n′)–dimensional facet Bn′ ≜ {(B1, . . . , Bn) ∈ B | Bh = B̲h, ∀h ∈ I′}.

Let us now analyse the solution set of the non degenerate optimisation problem (14). First we recall that a function g : R^n → R is said to be componentwise strictly decreasing if for any x, y ∈ R^n such that x <e y (namely xi < yi for every i ∈ {1, . . . , n}) we have g(x) > g(y). We are now in a position to state the following theorem.

Theorem 1. Assume that Assumption 1 holds and that the optimisation problem (14) is not degenerate. Define the optimal solution set as X* ≜ arg min_{B∈B} Φ(B); then we have:

i) X* ⊆ ∂B, where ∂B is the frontier of the polytope B;

ii) there exists B* = (B1*, . . . , Bn*) ∈ X* such that φi(Bi*) = φj(Bj*) for every i, j ∈ {1, . . . , n};
iii) there exists B̂ = (B̂1, . . . , B̂n) ∈ [B̲1, B̄1] × · · · × [B̲n, B̄n], unique solution of φh(B̂h) = min_{i∈{1,...,n}} φi(B̲i) for every h ∈ {1, . . . , n}, such that, if Σ_{i=1}^{n} B̂i ≤ 1, then B̂ ∈ X*. Otherwise there exists B* as in point ii) satisfying Σ_{i=1}^{n} Bi* = 1;

iv) if {(B1, . . . , Bn) ∈ R^n | Σ_{i=1}^{n} Bi = 1} ∩ ∂B ≠ {0}, then {(B1, . . . , Bn) ∈ R^n | Σ_{i=1}^{n} Bi = 1} ∩ X* ≠ {0}.
Proof. In order to prove i), let us proceed by contradiction. Assume that there exists B* ∈ X* such that B* ∈ B \ ∂B. Since B \ ∂B is an open set, there exists B̂ ∈ B such that B* <e B̂. It is easy to verify that Φ(B) = max_{i∈{1,...,n}} φi(Bi) is a componentwise strictly decreasing function, hence we have Φ(B̂) < Φ(B*), which contradicts the optimality of B*.

Let us prove points ii) and iii). Define the real values t̲ ≜ max_{i∈1,...,n} φi(B̄i) and t̄ ≜ min_{i∈1,...,n} φi(B̲i). In the non degenerate case we have t̲ ≤ t̄. We want to prove that the optimal value t* ≜ min_{B∈B} Φ(B) is such that t* ∈ [t̲, t̄]. By definition of the function Φ(·), its codomain is the set [t̲, max_{i∈1,...,n} φi(B̲i)], hence t* ≥ t̲. Moreover, clearly we have t̄ ∈ [t̲, max_{i∈1,...,n} φi(B̲i)]. Recall that each function φi : [B̲i, B̄i] → [φi(B̄i), φi(B̲i)] with i ∈ {1, . . . , n} is strictly monotonic, thus invertible. As a consequence there exists B ∈ [B̲1, B̄1] × · · · × [B̲n, B̄n] such that Φ(B) = t̄ ≤ max_{i∈1,...,n} φi(B̲i), hence t* ≤ t̄ by optimality.

Still by the strict monotonicity of each φi(·), for any t ∈ [t̲, t̄] there exists a unique Bi ∈ [B̲i, B̄i] such that φi(Bi) = t. Thus, for any t ∈ [t̲, t̄] we can find a unique solution B = (B1, . . . , Bn) ∈ [B̲1, B̄1] × · · · × [B̲n, B̄n] such that φi(Bi) = φj(Bj) for each i, j ∈ {1, . . . , n}. Let us define in particular B̂ = (B̂1, . . . , B̂n) as the unique solution of φi(B̂i) = t̄ for each i ∈ {1, . . . , n}. If Σ_{i=1}^{n} B̂i ≤ 1, then B̂ ∈ B and it is clearly optimal (B* ≜ B̂ ∈ X*). If Σ_{i=1}^{n} B̂i > 1, recall that Σ_{i=1}^{n} B̲i < 1 and use the continuity of each φi(·) to show that there exist t̃ ∈ [t̲, t̄] and a unique B̃ = (B̃1, . . . , B̃n) ∈ B such that Σ_{i=1}^{n} B̃i = 1 and φi(B̃i) = φj(B̃j) = t̃ for every i, j ∈ {1, . . . , n}. It is apparent that B̃ ∈ ∂B as Σ_{i=1}^{n} B̃i = 1; we must show that B̃ ∈ X*. In order for any B = (B1, . . . , Bn) to give Φ(B) < t̃, it must have at least one component Bi such that Bi > B̃i and all the other components such that Bj ≥ B̃j for every j ≠ i. But such a B is not a feasible solution, as Σ_{h=1}^{n} Bh > 1; hence B* ≜ B̃ ∈ X*.

Concerning point iv), if the solution B̂ at point iii) is such that Σ_{i=1}^{n} B̂i > 1, we have shown how to find a solution B̃ such that Σ_{i=1}^{n} B̃i = 1. If instead Σ_{i=1}^{n} B̂i ≤ 1, let us define
Algorithm 2 Solve Optimisation
1: function SolveOptimisation(Systems, B̲i, B̄i)
2:    P(1) = max_i Tr{P̄i(B̄i)}
3:    B(1) = [B1(1), . . . , Bn(1)] = b(P(1))
4:    if Σ Bi(1) ≤ 1 then
5:        return B(1)
6:    end if
7:    P(2) = max_i Tr{P̄i(B̲i)}
8:    while P(2) − P(1) > ε do
9:        P = (P(1) + P(2))/2
10:       B = [B1, . . . , Bn] = b(P)
11:       if Σ Bi ≤ 1 then P(2) = P
12:       else P(1) = P
13:       end if
14:   end while
15:   return B = [B1, . . . , Bn] = b(P(2))
16: end function
17: function b(P)
18:   for i = 1, . . . , n do
19:       Bm = B̲i; BM = B̄i
20:       if Tr{P̄i(Bm)} ≤ P then Bi = Bm
21:       else
22:           while BM − Bm > ε do
23:               B = (BM + Bm)/2
24:               if Tr{P̄i(B)} < P then BM = B
25:               else Bm = B
26:               end if
27:           end while
28:           Bi = BM
29:       end if
30:   end for
31:   return B = [B1, . . . , Bn]
32: end function
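A compact sketch of Algorithm 2 (illustrative only: the strictly decreasing cost curves φi below are synthetic stand-ins for Tr{P̄i(Bi)}, not derived from a real controller):

```python
EPS = 1e-6

def b_of(P, phis, B_lo, B_hi):
    """Inner bisection b(P): per task, the smallest bandwidth in
    [B_lo, B_hi] whose cost does not exceed the level P."""
    out = []
    for phi, lo, hi in zip(phis, B_lo, B_hi):
        if phi(lo) <= P:          # even the minimum bandwidth suffices
            out.append(lo)
            continue
        bm, bM = lo, hi
        while bM - bm > EPS:
            mid = (bM + bm) / 2
            if phi(mid) < P:
                bM = mid
            else:
                bm = mid
        out.append(bM)
    return out

def solve(phis, B_lo, B_hi):
    """Outer bisection on the cost level, as in Algorithm 2."""
    P1 = max(phi(hi) for phi, hi in zip(phis, B_hi))  # best reachable level
    B = b_of(P1, phis, B_lo, B_hi)
    if sum(B) <= 1:
        return B
    P2 = max(phi(lo) for phi, lo in zip(phis, B_lo))  # worst level
    while P2 - P1 > EPS:
        P = (P1 + P2) / 2
        if sum(b_of(P, phis, B_lo, B_hi)) <= 1:
            P2 = P                # feasible: try a better (lower) level
        else:
            P1 = P
    return b_of(P2, phis, B_lo, B_hi)

# Two synthetic, strictly decreasing cost curves of the bandwidth.
phis = [lambda B: 1.0 / B, lambda B: 2.0 / B]
B = solve(phis, [0.2, 0.2], [0.9, 0.9])
print(B, sum(B))   # costs equalised at ~3, total bandwidth ~1
```

In agreement with Theorem 1, the returned allocation equalises the two costs and saturates the total-bandwidth constraint.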
the set H ≜ { h ∈ {1, . . . , n} | (B̂1, . . . , B̂h + b, . . . , B̂n) ∉ B ∀b > 0 }. It is easy to verify that for some i ∈ H we have φi(B̂i) = Φ(B̂). Then for any j ∈ {1, . . . , n} \ H we have φj(B̂j) ≤ Φ(B̂). If {(B1, . . . , Bn) ∈ R^n | Σ_{i=1}^{n} Bi = 1} ∩ ∂B ≠ {0}, we can easily find a B = (B1, . . . , Bn) such that Bi = B̂i for any i ∈ H, Bj ≥ B̂j for any j ∈ {1, . . . , n} \ H and Σ_{i=1}^{n} Bi = 1. Due to the strictly decreasing nature of each φi(·), we have Φ(B) = Φ(B̂). Hence B ∈ X*.

The degenerate case can be addressed by recalling that the facet Bn′ is an (n − n′)–dimensional polytope. We can, thus, apply the previous theorem to such a polytope, with the unique difference that now Σ_{i∈{1,...,n}\I′} Bi = 1 − Σ_{i∈I′} B̲i.

Algorithm 2 solves Problem (14) in light of Theorem 1. Basically, it applies a binary search until it reaches the conditions expressed by the main theorem.

4.1 A geometric interpretation

A simple geometric interpretation can be useful to understand the meaning of the result above for the more interesting case of non-degenerate problems. Let us focus, for simplicity, on the case of two tasks. Consider the plot in Figure 3.(b). The thick lines represent the constraints (and are dashed when the inequality is strict). The area of the feasible solutions is filled in gray. The dash-dotted lines represent the level sets Φ(B1, B2) = t, i.e., the sets of all values for which the infinity norm of the traces of the covariance matrices is fixed to the value t. Each such set looks like a 90 degree angle, whose vertex is given by the values of the bandwidths B1 and B2. The level set Φ(B1, B2) = φ2(B2) (the bottom one in the figure) originates from a feasible point in B. The level set Φ(B1, B2) = φ1(B̄1) is associated with the value φ1(B̄1) = max_i Tr{P̄i(B̄i)}, which we assumed equal to Tr{P̄1(B̄1)} in the example. Two facts are noteworthy: first, φ1(B̄1) being a lower bound for Φ(B1, B2), we have φ1(B̄1) ≤ φ2(B2); second, the level set Φ(B1, B2) = φ1(B̄1) is internal to the level set Φ(B1, B2) = φ2(B2) (this is due
1
0.9
0.8
0.7
Critical µ
0.6
0.5
0.4
0.3
0.2
0.1
0
20
28
40
Period [ms]
(a)
Figure 4: Critical probability versus sampling time
for a generic unstable system.
The time varying computation times of the control tasks is
described by a probability density function (pdf) fCi . Two
c
− i
different pdfs are considered: Eµi = e µi is exponential
with mean value equal to µi ; U[a, b] is uniform defined in
the range [a, b]. In accordance with our model, periods are
kept constant throughout each simulation run. The relative
deadlines were chosen equal to the period, i.e., Di = Ti .
We first show how the sampling period influences the critical probability µ
ei , which increases as the sampling time increases (Fig. 4). This is in accordance with our expectations
since the variance of the process noise for the discrete time
system increases with the sampling time. This behaviour
complicates the analysis for the minimum bandwidth. Indeed, recalling that Γ−1
ci (·) is monotonically increasing (since
it is the inverse function of the CDF of the computation
times) and that Bi = Γ−1
µi )/Di , we can see that a larger
ci (e
period increases both the numerator and the denominator.
Therefore, the analysis for the minimum bandwidth largely
depends on the specific system, while this is not the case for
the critical probability µ
ei , which is monotonically increasing
for all the tested system.
To expose the relation between the period, the characteristic of the computation time pdfs and the minimum and
maximum bandwidth, Fig. 5 depicts the behaviour of the
available computation bandwidth for a generic unstable system when the control task computation time has different
values of the mean computation times for both uniform and
exponential distributions. The bandwidth can be chosen between B i (dashed lines) and B i (solid lines). The value B i
is the one associated with the critical probability, while B i
guarantees that the loop is always closed at every iteration.
This range is shown for different values of the sampling time.
Whatever is the computation time distribution, increasing
its mean value reduces the available bandwidth. This effect is partially mitigated by the period. For example, from
Fig. 5.(a) the maximum and minimum bandwidths are unchanged if the mean value grows from 6 ms to 12 ms and
simultaneously the period is doubled from 20 to 40 ms. The
benefit is even more evident for the maximum bandwidth.
For instance, it is not possible to close the loop at every iteration if the mean value is 14 ms and the period is 20 ms
(Fig. 5.(a), however this is possible increasing the period at
least up to 28 ms. Similar, discussions can be made for the
(b)
Figure 3: Graphical representation of the problem
addressed by Theorem 1 for the case of two tasks.
(a) the first claim of condition iii), (b) the second
claim of condition iii).
to the decreasing behaviour of the φi (Bi ) functions). Therefore the line joining the vertex of the two level set (represented in dotted notation) remains internal to the level set
Φ(B1 , B2 ) = φ2 (B2 ). What is more, it terminates on the
vertex of the level Φ(B1 , B2 ) = φ1 (B1 ), since φ1 (B1 ) is a
lower bound for Φ(B1 , B2 ). The points
Pon this line remain
feasible until it touches the constraint
Bi = 1, which corresponds to the optimal solution. In this case, the level set
becomes tangent to the constraint. The situation just described is the one corresponding to the second claim of part
iii) of Theorem 1. On the contrary, the case addressed by
the first claim of part iii) of Theorem 1 is the one depicted
in Figure 3.(a). In this case the point φ1 (B1 ) lies inside the
feasible area and it is obviously optimal.
5.
NUMERIC EVALUATION
In this section, we offer some numeric data that show the
results of the optimal allocation of bandwidth algorithm presented in this paper. We consider randomly generated, openloop unstable, reachable and observable linear continuous
time systems subject to a linear combination of continuous
time noises. The number of noise sources equals the number of states. The noise processes are normally distributed
with zero mean and standard deviation equal to σ = 0.01.
239
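As a concrete illustration of the relation $\underline{B}_i = \Gamma_{c_i}^{-1}(\tilde{\mu}_i)/D_i$, consider the exponential case, where the CDF $\Gamma(c) = 1 - e^{-c/\mu}$ inverts analytically to $\Gamma^{-1}(p) = -\mu \ln(1 - p)$. The sketch below fixes the critical probability for readability, whereas in the paper $\tilde{\mu}_i$ itself grows with the period; all numeric values are made up for illustration and are not taken from the simulated systems.

```python
import math

def min_bandwidth_exponential(mean_ms, crit_prob, deadline_ms):
    """Minimum bandwidth B = Gamma^{-1}(p) / D for exponentially
    distributed computation times with the given mean."""
    # Inverse CDF of the exponential distribution: Gamma^{-1}(p) = -mu*ln(1-p)
    c = -mean_ms * math.log(1.0 - crit_prob)
    return c / deadline_ms

# With a fixed critical probability, a larger period (here D = T) lowers the
# minimum bandwidth; in general the growth of the critical probability with
# the period pushes in the opposite direction, as discussed above.
bands = {T: min_bandwidth_exponential(12.0, 0.5, T) for T in (20, 28, 40)}
```

This makes the competing effects explicit: the numerator depends only on the computation-time distribution and the critical probability, while the denominator scales with the chosen period.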
| T1 | T2 | T3 | fC1 | fC2 | fC3 | B̲1 | B̲2 | B̲3 | B1* | B2* | B3* | B | min(Φ(B)) |
|----|----|----|--------|---------|--------|-------|---------|---------|---------|---------|---------|---------|-----------|
| 20 | 20 | 20 | U[4,8] | U[4,12] | U[4,8] | 0.268 | 0.296 | 0.278 | 0.2912 | 0.42781 | 0.28086 | 1 | 767.7968 |
| 28 | 28 | 28 | U[4,8] | U[4,12] | U[4,8] | 0.2 | 0.21714 | 0.20286 | 0.22879 | 0.42857 | 0.20415 | 0.86152 | 723.8266 |
| 40 | 40 | 40 | U[4,8] | U[4,12] | U[4,8] | 0.149 | 0.158 | 0.145 | 0.18486 | 0.3 | 0.14672 | 0.63158 | 759.3342 |
| 20 | 20 | 20 | U[4,8] | U[4,8] | U[4,8] | 0.268 | 0.248 | 0.278 | 0.29481 | 0.4 | 0.28086 | 0.97567 | 701.1684 |
| 28 | 28 | 28 | U[4,8] | U[4,8] | U[4,8] | 0.2 | 0.18 | 0.20286 | 0.22879 | 0.28571 | 0.20415 | 0.71866 | 723.8266 |
| 40 | 40 | 40 | U[4,8] | U[4,8] | U[4,8] | 0.149 | 0.129 | 0.145 | 0.18486 | 0.2 | 0.14672 | 0.53158 | 759.3342 |

Table 1: Numeric examples for the three unstable systems in the case of uniformly distributed computation times. Time values for the distributions and the periods are reported in milliseconds.
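The min(Φ(B)) column is the optimised worst-case trace. For strictly decreasing cost curves $\phi_i$, the optimum described by Theorem 1 (when the budget constraint is active) can be sketched by bisecting on the common level $t$: allocate $B_i(t) = \max(\phi_i^{-1}(t), \underline{B}_i)$ and search for the smallest $t$ whose total allocation fits in the unit budget. The curves $\phi_i(B) = a_i/B$ in the test below are invented for illustration and are not the paper's covariance traces.

```python
def minmax_level(phi_invs, B_lo, iters=60):
    """Toy solver for min max_i phi_i(B_i) s.t. sum(B_i) <= 1, B_i >= B_lo_i.

    phi_invs : list of callables t -> phi_i^{-1}(t), each decreasing in t.
    B_lo     : list of minimum bandwidths.
    Since every phi_inv is decreasing in t, the total allocation decreases
    in t, so bisection finds the smallest feasible common level t.
    """
    def total(t):
        return sum(max(inv(t), lo) for inv, lo in zip(phi_invs, B_lo))
    t_lo, t_hi = 1e-3, 1e6          # assumed bracket for the optimal level
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if total(t_mid) <= 1.0:
            t_hi = t_mid            # still feasible: try a lower (better) level
        else:
            t_lo = t_mid
    return t_hi, [max(inv(t_hi), lo) for inv, lo in zip(phi_invs, B_lo)]
```

With $\phi_1(B) = 2/B$ and $\phi_2(B) = 3/B$, the level sets equalise at $t = 5$ with $B = (0.4, 0.6)$: the tangency of the level set with the constraint $\sum_i B_i = 1$ described in Section 4.1.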
| T1 | T2 | T3 | fC1 | fC2 | fC3 | B̲1 | B̲2 | B̲3 | B1* | B2* | B3* | B | min(Φ(B)) |
|----|----|----|-----|-----|-----|---------|---------|---------|---------|---------|---------|---|-----------|
| 20 | 20 | 20 | E12 | E20 | E12 | 0.24931 | 0.27444 | 0.29658 | 0.30429 | 0.39206 | 0.30345 | 1 | 1240.6107 |
| 28 | 28 | 28 | E12 | E20 | E12 | 0.21893 | 0.21508 | 0.23345 | 0.32876 | 0.4343 | 0.23645 | 1 | 907.4646 |
| 40 | 40 | 40 | E12 | E20 | E12 | 0.202 | 0.17125 | 0.17935 | 0.40384 | 0.41 | 0.18576 | 1 | 887.9284 |
| 20 | 20 | 20 | E12 | E12 | E12 | 0.24931 | 0.16466 | 0.29658 | 0.34388 | 0.34902 | 0.30688 | 1 | 859.701 |
| 28 | 28 | 28 | E12 | E12 | E12 | 0.21893 | 0.12905 | 0.23345 | 0.3608 | 0.40122 | 0.23795 | 1 | 791.3342 |
| 40 | 40 | 40 | E12 | E12 | E12 | 0.202 | 0.10275 | 0.17935 | 0.46307 | 0.34984 | 0.18656 | 1 | 818.1927 |

Table 2: Numeric examples for the three unstable systems in the case of exponentially distributed computation times. Time values for the distributions and the periods are reported in milliseconds.
For an insightful numeric comparison, we report simulation results for three randomly generated open-loop unstable continuous time systems of dimension $n_1 = 2$, $n_2 = 3$ and $n_3 = 4$. The number of inputs is equal to 2, 1 and 2, while the number of outputs is 2, 3 and 4, respectively. The pdf of the computation times is described by $f_{C_i}$, $i = 1, 2, 3$. The maximum unstable eigenvalues are $e_1 = 3.92$, $e_2 = 0.85$ and $e_3 = 1.81$ for the first, second and third system, respectively. In the tables, the first group of three columns reports the task periods $T_i$ for each control task, while the second triplet shows the distributions of the computation times $f_{C_i}$ (expressed in milliseconds). The third and the fourth groups report the minimum and the optimal bandwidths, $\underline{B}_i$ and $B_i^*$ respectively. The columns annotated with $B$ and $\min(\Phi(B))$ report the overall bandwidth used and the optimal trace.

Table 1 summarises the numerical results for uniform distributions and three different periods. In the first three rows the uniform distribution for the second task ranges between one and three reservation periods, while in the second three rows it ranges between one and two. The second task is the most critical, since it receives at the optimum the largest amount of bandwidth. Moreover, only in the first row can it not receive its maximum bandwidth, corresponding to probability one of computing the control action before the deadline. This fact is reflected by the highest maximum trace out of the six cases and by the full utilisation of the computing resources ($B = 1$). The same maximum trace is obtained for the periods $T_i = 28$ ms and $T_i = 40$ ms in both cases, since the optimal bandwidth for the second task always corresponds to its maximum value, while the other two tasks receive exactly the same amount of bandwidth. Finally, for the second set of rows the trace increases with the period, since the noise power also increases. According to our previous discussion on the critical bandwidth behaviour, this is not always true, as highlighted by the second set of experiments of Table 2, which shows a similar situation for the exponential pdfs. In this second table, again the second task takes the lion's share, but with a different behaviour, since the support of the pdfs is now infinite. This fact is also highlighted by the full utilisation of the computing platform in all cases.

Table 3 reports numeric data for the same situation as Table 1 and Table 2, but it considers the period of each task that minimises the overall bandwidth $B$ or the maximum trace. In the former case, only the uniform distribution represents a valid example. In this case, the period should be increased as much as possible, while the second task should receive its maximum bandwidth. The same picture is obtained if the trace is minimised, but in this second situation the second task has the minimum possible period, in order to minimise the noise power. In the case of the exponential distribution, again task two receives an adequate bandwidth and, in order to mitigate the effect of task one (which has the highest eigenvalue and hence is potentially dangerous), the optimal solution leads to minimising its period.

Finally, Table 4 reports a set of experiments in which the optimal bandwidth allocation is computed by increasing the mean value of the exponential distribution of only the first task, with the period fixed to 40 ms. The monotonicity of the optimal trace and of the optimal bandwidth associated with the first task is clearly visible. We have found similar results in other configurations, but we omit them for the sake of brevity.
6. CONCLUSIONS

In this paper, we have considered an application scenario where multiple tasks are used to implement independent feedback loops. The scheduling decisions determine a different bandwidth reservation for each task and have a direct impact on the QoC that it delivers.

The QoC is evaluated using the trace of the steady state covariance as a cost function. We have shown conditions that make the QoC measure a decreasing function of the probability of closing the loop, and hence of the bandwidth allocated to the task. We have formulated an optimisation problem where the global cost function is the maximum QoC obtained by the different loops and the constraints are on the minimum bandwidth required to obtain mean square stability and on the sum of the bandwidths allocated to the different tasks. The problem thus obtained lends itself to a very efficient solution algorithm, which is the core contribution of the paper. We have offered numeric examples of the algorithm execution.

We envisage future work directions in the possible use of different cost functions and of different computation models, where a task execution is not dropped as soon as it violates a deadline and an execution delay can be tolerated to some degree.
| T1 | T2 | T3 | fC1 | fC2 | fC3 | B̲1 | B̲2 | B̲3 | B1* | B2* | B3* | B | min(Φ(B)) |
|----|----|----|--------|---------|--------|---------|---------|---------|---------|---------|---------|---------|-----------|
| 40 | 40 | 40 | U[4,8] | U[4,12] | U[4,8] | 0.149 | 0.158 | 0.145 | 0.18486 | 0.3 | 0.14672 | 0.63158 | 759.3342 |
| 40 | 20 | 40 | U[4,8] | U[4,12] | U[4,8] | 0.149 | 0.296 | 0.145 | 0.19283 | 0.6 | 0.14672 | 0.93955 | 701.1684 |
| 28 | 40 | 40 | E12 | E20 | E12 | 0.21893 | 0.17125 | 0.17935 | 0.34173 | 0.47151 | 0.18656 | 1 | 852.9288 |

Table 3: Numeric examples in the same set-up of Table 1 and Table 2 with minimised bandwidth or minimised trace. Time values for the distributions and the periods are reported in milliseconds.
| T1 | T2 | T3 | fC1 | fC2 | fC3 | B̲1 | B̲2 | B̲3 | B1* | B2* | B3* | B | min(Φ(B)) |
|----|----|----|-----|-----|-----|---------|---------|---------|---------|---------|---------|---|-----------|
| 40 | 40 | 40 | E12 | E20 | E12 | 0.202 | 0.17125 | 0.17935 | 0.40384 | 0.41 | 0.18576 | 1 | 887.9284 |
| 40 | 40 | 40 | E16 | E20 | E12 | 0.26934 | 0.17125 | 0.17935 | 0.48126 | 0.3323 | 0.18576 | 1 | 974.7714 |
| 40 | 40 | 40 | E20 | E20 | E12 | 0.33667 | 0.17125 | 0.17935 | 0.53554 | 0.2797 | 0.18416 | 1 | 1110.2589 |
| 40 | 40 | 40 | E24 | E20 | E12 | 0.40401 | 0.17125 | 0.17935 | 0.57396 | 0.24247 | 0.18336 | 1 | 1335.2052 |
| 40 | 40 | 40 | E28 | E20 | E12 | 0.47134 | 0.17125 | 0.17935 | 0.60247 | 0.21495 | 0.18256 | 1 | 1736.5507 |

Table 4: Numeric examples for increasing values of the mean of the exponential distribution for the first task. Time values for the distributions and the periods are reported in milliseconds.
Figure 5: Available bandwidth for different sampling times as a function of the mean value of the uniform (a) or exponential (b) computation times distributions.