
Deadlines in Real-Time Systems
by
António P. Magalhães*, Mário Z. Rela**, João G. Silva**
* [email protected]
Faculdade de Engenharia,
Universidade do Porto (Portugal)
** {mzrela, jgabriel}@uc.pt
Faculdade de Ciências e Tecnologia,
Universidade de Coimbra (Portugal)
Technical Report
1993
Abstract:
This paper discusses the timeliness of real-time control services as seen by the control
engineering and real-time scientific communities, arguing that computer-controllers
must be designed to meet performance deadlines that, under special circumstances,
can be missed as long as hard deadlines are still met. Hard and performance deadlines
are central to the design of fault-tolerant real-time systems. A unified approach for the
evaluation of these deadlines is presented using concepts and methodologies derived
from the systems engineering and real-time literature. The control requirements of a
hydraulic press are discussed as a case study illustrating the presented concepts.
_____________________________________________________________________
Keywords: Real-Time Systems, Deadlines, Grace-Time, Fault-Tolerance, Error
Recovery, Computer Control.
1 - Introduction
A deadline is a timing milestone. If a deadline is missed by a computer-controller, the
controlled system may transit to an undesirable state.
In hard real-time systems, according to the usual definition, a deadline that is not met
can lead to a catastrophic failure. This means that the criteria used to establish
deadlines are safety based. Control system engineers, on the other hand, use
performance criteria to establish the desired response time of a controlling computer.
The deadlines suggested by these two communities are not mutually exclusive; they are different entities, perceived in distinct and equally important contexts. Together they separate a controller's timing constraints into those related to safety - hard deadlines - and those related to performance - performance deadlines.
Performance deadlines are usually tighter than hard deadlines. A computer-controller designed to meet performance deadlines therefore does not drive the controlled system to an unsafe state as soon as one of them is missed, but only later, when a hard deadline is violated. Performance and hard deadlines are thus separated by a grace-time. This notion can help in the design of low-cost, yet highly reliable control systems.
This paper presents a method for the evaluation of performance and hard deadlines using systems engineering and real-time concepts. It also discusses how grace-time can enable the use of time-consuming error recovery techniques in fault-tolerant hard real-time systems. Section 2 reviews the current deadline concepts from the real-time and control engineering literature and then describes our unified approach. Section 3 shows how this view applies to the design of fault-tolerant hard real-time systems. Section 4 presents a case study illustrating our concepts and methodologies. Section 5 concludes with a summary.
2 - Time Constraints of Controllers
2.1 - Real-Time View
Real-time systems are usually classified into soft and hard. Classically, in a soft real-time system, missing a deadline is inconvenient but not damaging to the environment; in a hard real-time system, missing a deadline can be catastrophic, and thus unacceptable.
The traditional view of the temporal merit of a hard real-time computation (i.e., the relationship between the computation completion time and the resulting temporal merit of that computation) is usually modelled by a step time-value function: if a controller service is completed before a given deadline it yields a constant positive value, while completing it any time later may result in a catastrophic failure. From this point of view, hard deadlines are established in a safety-based context.
This means that when a computer is part of a hard real-time system, all the software
running on it has to be tuned to satisfy all controlled system deadlines.
According to [Jensen85], computations often present non-binary time constraints, even when a large merit penalty is incurred for completing them after a deadline. Also, there are many cases in real-time applications where some diminished merit is attained for completing a computation within an allowable period after a deadline. Moreover, the acceptability of the completion times of a set of computations must consider their collective merit rather than their individual ones. [Tokuda87] presents and schedules some smooth time-value functions illustrating Jensen's point of view.
It is commonly accepted that a controlled process can sporadically tolerate a missed deadline, provided it is not missed by much. This presupposes a controller tuned to meet not hard deadlines but some other kind of time limit.
However, the characterisation of a deadline is by itself a relatively unexplored problem in the real-time community. Most of the literature seems to consider that deadlines are somehow provided by others, possibly by control system engineers. Moreover, techniques to calculate systems' deadlines are very seldom presented. [Shin85] and [Krishna84] are two references on this subject, discussed later in this paper.
Nevertheless, soft and hard deadlines are universally used, and many suggestions appear in the literature arguing for the existence of other kinds of deadlines besides these classical ones [Geith89]. Moreover, there is a growing tendency to classify real-time services according to their associated benefit/cost functions, and to establish on them a set of pertinent points of time concerning application goals [Jensen93], [Bond91]. This means that there is a growing feeling that the traditional definitions and interpretations of deadlines are poor, since they cannot describe reality in a satisfactory way, nor can they be explicitly employed for best-effort scheduling [Jensen92]. In this paper we try to contribute to this approach.
2.2 - Control Engineering View
Systems engineers have adopted the concept of n-dimensional state space equations to model a system. The state of a system, x(t), can be loosely defined as a set of n variables whose knowledge at some time, together with the future inputs and disturbances, is sufficient to determine the future behaviour of the system. The current system state completely summarises the influence of all past inputs and disturbances on the system [D'Azzo75].
The state trajectory is defined as the path produced in the state space by the vector x(t) as it changes with the passage of time. This path is the solution of a set of differential equations for t ≥ t0, given the input, u(t), the external disturbances, d(t), and the initial system state, x(t0). The system outputs, y(t), are state-dependent variables:
dx(t)/dt = f(x(t), u(t), d(t), t),   x(t0) = x0        (1);
y(t) = g(x(t), t)        (2).
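For illustration, a minimal numerical sketch of equations (1) and (2): a hypothetical first-order process is integrated with a forward-Euler step, producing a state trajectory x(t) and the corresponding output y(t). The model, input and step size are arbitrary choices.

# Minimal sketch (hypothetical model): numerical integration of the state
# equation dx/dt = f(x(t), u(t), d(t), t) and the output y(t) = g(x(t), t).

def f(x, u, d, t):
    # Hypothetical first-order process with a 0.5 s time constant.
    return (-x + u + d) / 0.5

def g(x, t):
    return x  # in this toy example the output is the state itself

def simulate(t_end=2.0, dt=0.001, x0=0.0):
    x, t, trajectory = x0, 0.0, []
    while t < t_end:
        u, d = 1.0, 0.0                  # step input, no disturbance
        x += dt * f(x, u, d, t)          # forward-Euler step along the trajectory
        t += dt
        trajectory.append((t, x, g(x, t)))
    return trajectory

print("final output y =", round(simulate()[-1][2], 3))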
Informally, the main goal of a controller is to maintain, or to affect in a prescribed manner, some physical quantity or condition of a process. Introducing state space concepts, a more formal definition can be given: the primary control objective is to impose on a process some specific time-dependent state trajectory - cyclic or not - or to maintain it at some desired state space location over time. When this intended action is strictly achieved, the performance of the controller (externally viewed as the process behaviour) has its maximum value. Control strategies like optimal and sub-optimal control [Åström90] and extremum control [Wellstead90], commonly called area-control based, reflect this approach.
The performance of a controller can thus be evaluated from the controlled process space trajectory. Several performance criteria are based on this perception. An attractive criterion is introduced by the notion of cost functions. Cost functions usually take the consumed energy, fuel, time, financial cost, or some trade-off between these or other physical parameters, to establish the control-loop performance, or control cost, associated with the space trajectory of a controlled process. Using this approach, it may be stated that achieving an optimal performance involves minimising some cost function.
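For illustration, a minimal sketch of one common choice of cost function: a weighted sum of squared tracking error and squared control effort accumulated along a sampled trajectory. The weights and the sample values are hypothetical.

# Minimal sketch of a control cost accumulated along a sampled trajectory:
# J = sum over samples of (q * error^2 + r * effort^2) * dt.
# The weights q, r and the sampled data below are hypothetical.

def control_cost(errors, efforts, dt, q=1.0, r=0.1):
    return sum(q * e * e + r * u * u for e, u in zip(errors, efforts)) * dt

errors = [1.0, 0.6, 0.3, 0.1, 0.0]     # tracking error samples
efforts = [0.8, 0.5, 0.3, 0.1, 0.0]    # actuator effort samples
print("control cost J =", control_cost(errors, efforts, dt=0.01))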
The space trajectory reflects the interaction of all control-loop components, including the actuator and sensor dynamics, the process's own characteristics, the sampling intervals and the control strategy implementation. Consequently, the impact of the degradation of any single item on the global system performance may be established by taking it as an independent variable of a cost function that encompasses all the other relevant loop component characteristics.
2.3 - A Unified View
Cost functions may objectively establish the timeliness of a controlled system by taking the controller response time as the input variable. Since every control action has a time after which its value is monotonically non-increasing [Jensen93] (or, according to [Shin85], cost functions are continuous and monotonic, presenting a minimum value for a zero-time response), control engineers can establish, for each function, the maximum cost that will not seriously depreciate the intended performance of a control action. The cost established this way may be defined as the nominal cost, and its associated delay as the performance deadline.
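For illustration, a minimal sketch of how a performance deadline can be read off such a function: given a monotonic non-decreasing cost function g and a chosen nominal cost, the performance deadline is the largest delay whose cost does not exceed that nominal value, found here by bisection over a hypothetical cost curve.

# Minimal sketch: infer the performance deadline from a monotonic
# non-decreasing cost function g(delay) and a chosen nominal cost.
# The cost curve and the numbers are hypothetical.

def performance_deadline(g, nominal_cost, lo=0.0, hi=1.0, tol=1e-6):
    """Largest delay in [lo, hi] with g(delay) <= nominal_cost (bisection)."""
    assert g(lo) <= nominal_cost, "nominal cost not reachable even at zero delay"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= nominal_cost:
            lo = mid      # cost still acceptable: the deadline is at least mid
        else:
            hi = mid
    return lo

g = lambda delay: 100.0 * delay ** 2            # hypothetical cost curve
print("performance deadline:", round(performance_deadline(g, 25.0), 4))  # 0.5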
It is worth noting that if performance deadlines are taken as guidelines for controller design, they will fail to be met only under unusual conditions (e.g., due to an error). Moreover, missing a performance deadline does not mean that a catastrophic consequence will immediately occur: only a control over-cost takes place.
Faults in the control loop affecting the controller response time may lead the controlled system space trajectory to diverge from the intended one and may even cause the catastrophic failure of the process. For safety purposes, the control cost of a particular action, reflecting the controller response time, must be limited to some maximum tolerable value. The controller delay associated with this cost may be defined as the hard deadline of that particular control action.
According to this view, a controller's main goal is to perform a set of control actions within a predefined nominal cost and to guarantee that the consequences of specific failures will not produce the intolerable cost associated with the catastrophic failure of the process.
Hard and performance deadlines thus separate timing constraints into those related to performance and those related to safety. It should be noted that only for very critical - and rare - applications, intended to work at their safety limits, may these two deadlines be considered coincident. Usually, the two effects emerge at different times.
There is another very strong argument for the separation of hard and performance deadlines: the safety margin every engineer applies to every project. No engineer designs a system to be on the edge of disaster. In most engineering disciplines (civil engineering and electric power distribution are two well-known examples, but computers are no exception) there exist massive bodies of regulations that establish, above all, the safety margins to be used in any project. We could say that if a system works reliably, then it was designed with significant safety margins (another common term is "over-engineered"). In fault-tolerance terms, that system was designed using "fault-avoidance" techniques. We argue that the same happens when establishing deadlines: after calculating an initial value "by the book", the safety margin is applied.
From the above, we argue that computer-controlled processes impose some performance deadlines and may even exhibit a hard deadline. Seldom are they one and the same.
The classical problem of selecting a sampling rate, fs, for a computer controlled system
reflects this view. [Middleton90], for instance, suggests, as a rule of thumb, that the
sampling rate should be about ten times the closed loop bandwidth, fB: slow sampling
is deleterious from the viewpoint of control performance since it involves a loss of
information regarding process intersample behaviour; on the other hand, a very high
sampling rate, fs>>10fB, offers no greater precision, may overload the computer
controller and invariably leads to numerical difficulties in the design of digital filters.
This simple rule helps to understand why a controlled object becomes unstable only after its controller has missed more than N samplings in a row. Only for very critical processes, in which this approach is not followed, does N take the value one.
We may define the grace-cost as the difference between the nominal and maximum tolerable costs (fig. 1). This cost defines the grace-time, the timing difference between the hard and performance deadlines, representing the amount of time that a particular control task can be delayed beyond its performance deadline without leading the controlled process to a catastrophic behaviour. This grace-time definition follows [Kirrmann87].
[Figure: control cost versus controller time delay, showing the nominal and maximum tolerable cost levels, the performance and hard deadlines, and the resulting grace-cost and grace-time.]
Fig. 1 - Deadlines Establishment and Associated Grace-Cost and Grace-Time.
Although the notion of grace-time seems most suited to continuous control systems, it also applies to discontinuous ones. For instance, consider a bottle filling line, where a controller must read a level sensor to trigger a shut-off order to a filling valve. If the controller takes more than a certain time to read the sensor or to issue the shut-off order, a certain leakage may occur; its cost depends on the amount of spilled liquid and can be described by a monotonically increasing cost function whose evolution depends on liquid and plant characteristics: the chemical composition and commercial value of the liquid, possible infiltration into electrical parts, etc.
[Shin85] and [Krishna84] define the cost associated with a controller's response time, ξ, on executing a task D as a non-decreasing function, C_D(ξ), having its minimum value for a zero-time response. The hard deadline, t_dD, occurs when this function assumes an infinite value, representing a catastrophic failure:

C_D(ξ) = g_D(ξ)   if ξ ≤ t_dD
       = ∞        if ξ > t_dD        (3).
We agree with this approach except for the quantification of the intolerable cost: from our point of view, the maximum tolerable cost, M_D, on which the hard deadline of a task D depends, does not necessarily have an infinite value. We represent D's hard deadline by t_hdD:

C_D(ξ) = g_D(ξ), with g_D(ξ) ≤ M_D   if ξ ≤ t_hdD
                      g_D(ξ) > M_D   if ξ > t_hdD        (4).
The same authors illustrate their idea through the notion of the allowed state space, a region that the process leaves when a hard deadline is missed.
In the same way, we define a nominal or performance state space where the control cost of a task D does not exceed the nominal value, N_D. Keeping the system inside this region is synonymous with saying that the control cost does not exceed the nominal value (5). A process abandons this region when a performance deadline, t_pdD, is not met.
C_D(ξ) = g_D(ξ), with g_D(ξ) ≤ N_D         if ξ ≤ t_pdD
                      N_D < g_D(ξ) ≤ M_D   if t_pdD < ξ ≤ t_hdD
                      g_D(ξ) > M_D         if ξ > t_hdD        (5).
The grace-cost, GC_D, and the grace-time, GT_D, associated with a task D may be defined as:

GC_D = M_D - N_D        (6)
GT_D = t_hdD - t_pdD        (7),
while the over-cost introduced by an execution of the task D, O_D, is:

O_D = g_D(ξ) - N_D   if ξ > t_pdD
    = 0              if ξ ≤ t_pdD        (8).
The cumulative over-cost introduced upon the system by n executions of a task D over the time interval [0, t) is defined by

O_D(t) = Σ_{i=1}^{n(t)} O_Di        (9),
where O_Di represents the over-cost of the ith execution of the task D. In a similar way, the cumulative over-cost introduced by m disjoint tasks over the same time interval can be defined by:

O_S(t) = Σ_{j=1}^{m} W_j · O_j(t)        (10),

where W_j > 0 represents a normalizing constant intended to convert the partial costs to a specific metric (dollars, for instance).
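For illustration, a minimal sketch of definitions (6)-(10) for a single task; the cost curve and all numeric values are hypothetical.

# Minimal sketch of definitions (6)-(10); the cost curve and the numeric
# values are hypothetical (costs in joules, times in seconds).

def grace_cost(M_D, N_D):                    # definition (6)
    return M_D - N_D

def grace_time(t_hdD, t_pdD):                # definition (7)
    return t_hdD - t_pdD

def over_cost(g_D, xi, t_pdD, N_D):          # definition (8)
    """Over-cost of one execution with response time xi."""
    return g_D(xi) - N_D if xi > t_pdD else 0.0

def cumulative_over_cost(over_costs):        # definition (9)
    return sum(over_costs)

def system_over_cost(task_over_costs, weights):   # definition (10)
    return sum(w * o for w, o in zip(weights, task_over_costs))

g_D = lambda xi: 600.0 + 1.0e5 * xi          # hypothetical monotonic cost curve
t_pdD, t_hdD = 0.001, 0.018                  # performance and hard deadlines, s
N_D, M_D = g_D(t_pdD), g_D(t_hdD)            # nominal and maximum tolerable cost
responses = [0.0005, 0.0008, 0.004]          # three executions; the last is late
o = [over_cost(g_D, xi, t_pdD, N_D) for xi in responses]
print("GC_D =", grace_cost(M_D, N_D), "GT_D =", grace_time(t_hdD, t_pdD))
print("over-costs:", o, "O_D(t) =", cumulative_over_cost(o))
print("O_S(t) =", system_over_cost([cumulative_over_cost(o)], [1.0]))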
According to our performance and hard deadline definitions, process state space may
be decomposed into the following time dependent sub-states (fig. 2):
- A nominal or performance state space, S_n(t), where the maximum control cost equals the nominal cost;
- A grace state space, S_g(t), where the control exhibits a control cost greater than the nominal but lower than or equal to the maximum tolerable;
- An intolerable state space, S_i(t), where the control exhibits an intolerable cost.
Fig. 2 - Dynamic State Space Decomposition.
In the allowable or tolerable state space, S_t(t), the system does not exhibit an intolerable cost. It may be defined as the complement of S_i(t):

S_t(t) = S_n(t) ∪ S_g(t)        (11).
Some control actions may have a time-dependent deadline imposed by the maximum over-cost that the system can tolerate over some time interval. In particular, that time interval may be its mission lifetime, or the time between repairs. This notion applies to processes where missing some particular performance deadlines introduces a permanent or transient depreciation of some parameter related to control safety (e.g., a fuel reserve, a time margin, or the mechanical wear of some actuator or process component).
For this kind of application, if a particular and undesirable event e occurs during a time interval, [t0, t), introducing an unrecoverable and well-defined over-cost, O(e) > 0, on a system bounded to some maximum tolerable over-cost over that time interval, O_MAXS(t), then the occurrences of e must also be bounded to some maximum number; exceeding it puts the system safety at risk, even when performance deadlines are met. We define that bound as the number of type e grace-events, GE(e,t), that the system can tolerate over the time interval [t0, t), and we represent it by:

GE(e,t) = ⌊ O_MAXS(t) / O(e) ⌋        (12).
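For illustration, with hypothetical figures: if the system tolerates at most O_MAXS(t) = 50 kJ of over-cost over the interval and each occurrence of e introduces O(e) = 1.8 kJ, then GE(e,t) = ⌊50000 / 1800⌋ = 27.

# Grace-events of type e, definition (12); both figures are hypothetical.
from math import floor

O_MAXS = 50_000.0    # maximum tolerable over-cost over [t0, t), J
O_e = 1_800.0        # over-cost introduced by one occurrence of e, J
print("GE(e,t) =", floor(O_MAXS / O_e))   # -> 27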
3 - Fault-Tolerant Hard Real-Time Systems
Designing systems that in steady state meet some performance deadlines, but that under special circumstances (like the occurrence of a transient fault) can violate them as long as hard deadlines are still met, is quite different from designing them to be close to a hard deadline all the time and never being allowed to miss it.
We would like to discuss here two areas where the consequences of our proposal are
particularly important:
- Scheduling in the presence of faults;
- Resource utilisation.
3.1 - Scheduling in the Presence of Faults
Presently, most scheduling algorithms assume a fault-free environment. This means
that the hardware has to provide fault-masking, and that no software bugs can be
tolerated, since there is no room for alternative executions. This situation is
understandable since the tasks are scheduled to be "on the brink of disaster": they have
to satisfy the hard deadlines, even if only by a short margin. There is no slack time to
deal with faults.
If the tasks are now scheduled to satisfy their performance deadlines (usually much
shorter than the hard deadlines), the grace time can be used to accommodate those
alternative executions. Since they should be rare events, as long as the grace time is
not exceeded no system harm results from them.
Backward error recovery becomes a viable alternative. As long as the restart time is
lower than the grace time of the affected tasks, the system does not suffer. And it is
cheaper to build a system with backward error recovery than to include the massive
redundancy needed to implement fault-masking.
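For illustration, a minimal sketch of the feasibility check implied here, with hypothetical task names and times: backward error recovery is admissible only if the restart time does not exceed the smallest grace-time among the affected tasks.

# Minimal sketch: a restart is acceptable if its duration fits inside the
# smallest grace-time of the affected tasks (all values hypothetical, ms).

def recovery_fits(restart_time_ms, grace_times_ms):
    return restart_time_ms <= min(grace_times_ms)

grace_times = {"valve_control": 17.0, "pressure_logging": 40.0}   # ms
print(recovery_fits(12.0, grace_times.values()))   # True: recovery is viable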
Statically scheduled systems, for instance, should have one (or more) "restart sub-schedules" to be used after a restart (and possible reconfiguration) so that the hard deadlines are still met.
An interesting area of research is designing recovery techniques whose execution time
is bounded.
3.2 - Resource Utilisation
Let's consider an example. The rate-monotonic (RM) algorithm is used to schedule periodic preemptive hard real-time tasks on a uniprocessor system. In this algorithm a fixed priority is given to each task, with a higher priority being assigned to tasks with shorter periods. Liu and Layland [Liu73] proved that this algorithm can schedule any set of n periodic tasks for a processor load below n(2^(1/n) - 1), or any set of tasks of any size for a processor load below ln 2 ≈ 0.693. They also proved that the RM algorithm is the optimal static priority scheduling algorithm.
An important problem of this algorithm (and of all algorithms of this kind) is that, to determine whether a set of tasks is schedulable, we have to calculate the resulting processor load. To do that we have to sum the worst-case execution times of all the tasks in the set. This leads to the processor being idle most of the time, since the average execution time of a task is usually significantly shorter than the worst case.
According to our proposal, the tasks should be scheduled by the RM algorithm to meet
their performance deadlines instead of their hard deadlines. The execution time used to
calculate the processor load does not need to be the worst case, but a smaller one. This
means that sometimes the task may miss its performance deadline, because the actual
execution time is longer than the used value, and the slack left by the other tasks not
using all their attributed time is not enough to compensate for the excess. But, as long
as the hard deadline is not missed, the system still works correctly. The only restriction
is that missing the performance deadline must happen seldom enough for the process
to be able to return from the grace space to the performance space before another miss.
If that second miss comes earlier than that, then at least the system must be able to
maintain itself in the grace space.
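For illustration, a minimal sketch of the utilisation test involved, with hypothetical task parameters: the Liu and Layland bound n(2^(1/n) - 1) is compared against the processor load computed from the execution times the designer budgets for - worst-case times in the classical approach, smaller (e.g., average) times under our proposal.

# Minimal sketch of the rate-monotonic utilisation test [Liu73]: a task set
# is accepted if sum(C_i / T_i) <= n * (2**(1/n) - 1). The execution times
# C_i and periods T_i below are hypothetical (milliseconds).

def rm_bound(n):
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks):                 # tasks: list of (C_i, T_i) pairs
    load = sum(c / t for c, t in tasks)
    return load, load <= rm_bound(len(tasks))

worst_case = [(4.0, 10.0), (6.0, 20.0), (10.0, 50.0)]   # load 0.90: rejected
average = [(2.5, 10.0), (4.0, 20.0), (6.0, 50.0)]       # load 0.57: accepted
print("bound:", round(rm_bound(3), 3))
print("worst-case budget:", rm_schedulable(worst_case))
print("average budget:   ", rm_schedulable(average))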
Although many of these aspects still need to be treated in a more formal way (there is
another interesting research area here), it seems to us sufficiently clear that the average
processor load can be significantly higher than was previously attainable.
4 - Case Study
4.1 - Process and Machine Description
"Punching" is a general term describing the process of cutting a hole in a metal sheet.
The punching process results from the motion of two sharp, closely adjoined edges on
a material placed between them and comprises three stages [Lascoe88]:
- The Deformation Phase, as the cutting edges begin to close;
- The Penetration Phase, as the cutting edges penetrate the material, causing initial fracture lines on both sides of the material;
- The Fracture Point, the point where the upper and lower fracture lines meet.
Punching reactive forces depend on material thickness, type and hardness, condition of
the cutting edges, diameter of hole, etc. [Restivo91]. Yet, they always increase for the
initial tool displacement, decreasing later. The fracture occurs when the resistive force
significantly decreases. Figure 3 shows a simplified model of a punching force profile.
This case study concerns a prototype flexible hydraulic press devoted to multiple metal
sheet working, including punching. It exhibits a maximum force capacity of 1.6 MN (≈ 160 Ton) and may execute up to 2 punches per second. Press configuration is based
on a large area main cylinder, C1, a small auxiliary cylinder, C2, and two hydraulic
valves, V1 and V2 (fig. 4). The main cylinder is responsible for press force capacity
while the auxiliary one drives the press on its fast ascending and descending
movement when no resistive force exists. The auxiliary cylinder is driven by a
hydraulic circuit that is not relevant for this presentation.
[Figure: punching force versus tool displacement, showing the deformation, penetration and fracture phases.]
Fig. 3 - A Punching Force Profile.
[Figure: simplified hydraulic scheme with the pressure line and tank, main cylinder C1, auxiliary cylinder C2, valves V1 and V2, and the male tool, metal sheet and female tool.]
Fig. 4 - Press Simplified Hydraulic Scheme.
The valve 1 is an electrically operated logic element. When active, it connects the upper and lower main cylinder chambers, leading them to a similar pressure and disabling the press force capacity. When inactive, and if the lower chamber pressure exceeds the upper one, it conducts an ascending flow. In electrical terms, this valve may be roughly described as a switch with a diode in parallel, whose cathode is connected to the upper chamber; when the valve 1 is active the switch is closed.
Since the main cylinder exhibits a constant pressure in its upper chamber,
decompressing its lower chamber enables press force production. This action is
achieved by opening the valve 2 when valve 1 is off.
When a punching cycle is initiated, the male tool is some distance apart from the metal
sheet. Valve 1 and valve 2 are off. By this time, the main cylinder exhibits a
significant pressure on both chambers. The punching cycle involves the following
sequential steps:
- The auxiliary cylinder initiates a fast descending movement and valve 1 conducts a flow from the lower to the upper chamber.
- When the tool gets close to the metal sheet, the valve 2 is opened, starting the lower chamber decompression. The press yields a growing force capacity and the tool progressively penetrates the material.
- When the resistive force presents a quasi-fracture value, the press controller issues a closing order to valve 2. This action is crucial to press work efficiency, since any flow escaping the lower chamber after the fracture is considered an energy waste.
- Soon after the fracture, the valve 1 is activated, disabling the press force capacity, and the auxiliary cylinder initiates the press ascending movement.
4.2 - Performance Analysis and Deadline Establishment
Flexibility may contribute to productivity since it brings the ability to provide a
diversity of goods and services. However, flexibility usually also requires additional
materials, labour, energy, plant, technology and information [Steyn89].
In this context, mechanically and hydraulically driven presses are rival machines. Mechanical presses are quite robust and very fast, but their reduced controllability restrains the diversity of their services; hydraulically driven presses are potentially flexible, but they tend to be over-stressed when executing punching work and cannot rival the best throughput and energy efficiency of the mechanically driven ones. For these reasons, a hydraulically driven press must be designed for a high throughput, while its controller must guarantee a high energy efficiency and a minimum over-stress upon the press mechanical parts.
Any press executing punching work presents a structure deflection reflecting the resistive force value. This means that a press structure accumulates mechanical energy as the resistive force increases, releasing it as the resistive force decreases. Since fracture represents a discontinuity in the resistive force profile and may take place when the resistive force value is close to its maximum, at the fracture moment a considerable amount of mechanical energy accumulated in the press structure may be suddenly released, imparting an acceleration to the press moving parts and leading to shock and vibration. For this reason, punching hard materials represents a very stressing effort for the mechanical parts of any press.
A hydraulic fluid accumulates or releases energy according to the load variations. For this reason, when a hydraulically driven press executes punching work, it releases some amount of hydraulic energy soon after the fracture. For a severe punching effort, this hydraulic energy may largely prevail over the energy released by the press structure, increasing energy losses, mechanical shock and vibration. Therefore, high-powered hydraulic presses demand an efficient and timely control of the hydraulic energy release at the fracture moment, if they are to be competitive with mechanical presses.
In this case study, we consider a particularly severe punching work: a 5 mm thick steel metal sheet, whose punching force profile is depicted in figure 5.
Stroke (mm):  0.00  1.20  2.30  3.20  3.80  4.20  4.50  4.80  4.99  5.00
Force (Ton):     0    21    56    92   110   118   120   117   110     0
[Figure: plot of the punching force (Ton) versus stroke (mm) for the values above.]
Fig. 5 - A Punching Force Profile for a 5 mm Thickness Steel Metal Sheet.
We may consider that, for small deflection values, the press used in this case study has a constant stiffness, k = 10^9 N/m (i.e., the press structure deflects nearly 1 mm for a 100 Ton punching force). Therefore, the press deflection, x, and the punching resistive force, F, are related by

F = k x        (13),
while the mechanical energy accumulated in the press structure, E, as a function of the press deflection or of the punching force, is given by:

E = k x² / 2 = F² / (2 k)        (14).
Considering equation (14) and the punching force profile represented in figure 5 (a resistive force of about 110 Ton, i.e. roughly 1.1 MN, at the fracture moment), we conclude that the mechanical energy accumulated in the press structure at the fracture moment takes the value of

E ≈ 600 J        (15).
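As a numerical check of equations (13)-(15), using the rounding of 1 Ton ≈ 10 kN implied by the press rating above:

# Energy stored in the press structure at the fracture moment, eqs. (13)-(15).
k = 1.0e9                        # press stiffness, N/m (about 1 mm per 100 Ton)
F_fracture = 110.0 * 1.0e4       # resistive force at fracture, N (about 110 Ton)
x = F_fracture / k               # press deflection, m         -- eq. (13)
E = F_fracture ** 2 / (2.0 * k)  # stored mechanical energy, J -- eq. (14)
print(f"deflection = {x * 1000:.2f} mm, stored energy = {E:.0f} J")  # about 600 J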
In the case of the press that is used in this case study, the hydraulic energy released
after the working material fracture is minimum if valve 2 closes exactly at the fracture
moment. As the closing order is delayed, this released energy increases and may
exceed a reasonable value. Figure 6 represents, on two different time scales, the cost
function of this control action for the considered punching work, where the hydraulic
energy released represents the cost.
[Figure: released hydraulic energy (J) versus time delay (ms) on two scales: 0-800 J over 0-2 ms, and 0-3000 J over 0-20 ms.]
Fig. 6 - After Fracture Hydraulic Energy Released Versus Time Delay on Triggering Valve 2 Closing Order.
The nominal and maximum tolerable costs of this control action must be established bearing in mind that this press must rival a mechanically driven one. Therefore:
- The hydraulic energy released after fracture must present a value that proves to be reasonable when compared to other energy losses.
- The shock introduced by the punching work must not degrade press mechanical parts.
The first aim targets a good performance and the second prevents severe consequences. In other words, the first requirement defines the nominal press behaviour while the second bounds the press tolerable behaviour. Their associated controller delays represent the performance and hard deadlines.
As figure 6 shows, the hydraulic energy released soon after the fracture may be considerably larger than the energy accumulated in the press structure. However, it also shows that mechanical and hydraulic energy losses can be balanced if the controller does not delay its action by more than a certain time. Thus, the performance deadline of the closing order of valve 2 must be inferred on the basis of the mechanical energy that the press accumulates in its structure at the fracture moment. For this reason, our purpose is to keep the nominal control cost at a value close to 600 J. According to this approach the performance deadline may be defined as 1 ms (fig. 7).
The hard deadline of the considered control action depends on press maximum
tolerable stress. For the presented press this limit corresponds to a hydraulic energy
release around 2400 J (4 times the energy accumulated in its structure). A greater
value introduces some degradation to press mechanical parts, reducing their lifetime,
increasing maintenance, and raising the danger of a sudden collapse of, for instance,
the hydraulic circuits or the usually very expensive cutting tools. According to this
approach, the hard deadline of the mentioned control action is some value between 18
and 19 ms. For the sake of simplicity the value of 18 ms is assumed. The grace-time of
the control action is therefore 17 ms (fig. 7).
[Figure: released energy (J) versus time delay (ms), annotated with the nominal and maximum tolerable costs, the performance and hard deadlines, and the resulting grace-cost and grace-time.]
Fig. 7 - Inferring Grace-Time and Grace-Cost.
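For illustration, the deadline inference can be sketched as a simple thresholding of the cost curve. The (delay, energy) table below is a hypothetical stand-in for the curve of figure 6, not measured data; only the 600 J and 2400 J thresholds come from the text.

# Minimal sketch: read the performance and hard deadlines off a tabulated
# cost curve. The curve points are hypothetical; only the 600 J and 2400 J
# thresholds are taken from the text (delays in ms, costs in J).

NOMINAL_COST, MAX_TOLERABLE = 600.0, 2400.0

def last_delay_below(curve, threshold):
    """Largest tabulated delay whose cost does not exceed the threshold."""
    return max(delay for delay, cost in curve if cost <= threshold)

curve = [(0.0, 80.0), (0.5, 300.0), (1.0, 600.0), (2.0, 800.0),
         (5.0, 1200.0), (10.0, 1700.0), (18.0, 2400.0), (20.0, 2700.0)]
t_pd = last_delay_below(curve, NOMINAL_COST)     # performance deadline
t_hd = last_delay_below(curve, MAX_TOLERABLE)    # hard deadline
print(f"t_pd = {t_pd} ms, t_hd = {t_hd} ms, grace-time = {t_hd - t_pd} ms")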
4.3 - Batch Manufacturing
Presses are particularly devoted to batch manufacturing. So, it may be more correct to
consider their efficiency in terms of a batch job instead of a single operation. For a
batch job, control efficiency keeps its meaning but is better measured through the
cumulative effect of single control actions.
One way to achieve a good batch efficiency is to guarantee the efficiency of individual
operations. In the case of the presented press, this is synonymous with meeting the
performance deadline of individual operations. However, a good batch efficiency may
be achieved even if several single operations are inefficient.
In the context previously defined, the performance deadline of a task does not depend
on considering it as a single operation or as a part of a sequence. Batch nominal cost is
the cost of processing a batch if the performance deadlines are met for single
operations. It is therefore the maximum control cost for processing a batch if the
process is always kept in its performance state space. It is worth noting that if a performance deadline is missed, the batch processing cost must be expected to exceed the nominal value, since there is no guarantee that a future individual control action will introduce a cost lower than the nominal one, compensating for the earlier miss.
For a five-hour punching job, at two punches per second, the nominal control cost of the presented press is 5 hours * 3600 sec/hour * 2 punches/sec * 600 J = 21.6 MJ. If we agree that the press efficiency is still acceptable with a 21.6 kJ over-cost (0.1% of 21.6 MJ), and if a particular error is recovered by using the backward error recovery technique, introducing a (2400 - 600 =) 1800 J over-cost due to the 17 ms delay it imposes on the controller's response time, then during the five-hour interval the batch punching work can tolerate ⌊21600 J / 1800 J⌋ = 12 faults of that type. Since a 5 hour / 12 ≈ 0.42 hour MTBF is clearly a very pessimistic view, the backward error recovery technique is a very suitable way of providing fault-tolerance to the press controller.
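The arithmetic of the previous paragraph, spelled out as a check:

# Over-cost budget for the five-hour batch punching job described above.
punches = 5 * 3600 * 2                    # 5 h at 2 punches per second
nominal_batch_cost = punches * 600.0      # J -> 21.6 MJ
budget = 0.001 * nominal_batch_cost       # 0.1 % over-cost allowed -> 21.6 kJ
per_fault = 2400.0 - 600.0                # over-cost of one recovered fault, J
tolerable_faults = int(budget // per_fault)
print(nominal_batch_cost / 1e6, "MJ", budget, "J", tolerable_faults, "faults")
# -> 21.6 MJ, 21600.0 J, 12 faults (one fault every 25 minutes on average)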
5 - Summary and Conclusions
In this paper we argued that systems must be designed to meet some performance level (performance deadlines), but that under special situations this level can be violated as long as safety is still guaranteed (hard deadlines). The paper introduced a unified approach for the practical evaluation of these deadlines, encompassing control engineering and real-time concepts. The use of grace-time, separating performance and hard deadlines, is central to the fault-tolerance mechanisms usable in hard real-time systems.
References
[Åström90]
Åström, K and Wittenmark, B.
Prentice Hall Information and System Sciences Series: "Computer-Controlled
Systems. Theory and Design". Second Edition
Prentice Hall International Editions, 1990.
[Bond91]
Bond, P., Seaton, D., Verissimo, P. and Waddington, J.
"Real-Time Concepts"
in Springer-Verlag Research Reports ESPRIT Series: "Delta-4: a Generic
Architecture for Dependable Distributed Computing"
Edited by D. Powell
Springer-Verlag, 1991.
[D'Azzo75]
D'Azzo, J. and Houpis, H.
McGraw-Hill Electrical and Electronic Engineering Series:
"Linear Control System Analysis and Design".
McGraw-Hill International Student Edition, 1975.
[Geith89]
Geith, A. and Schwan, K.
"CHAOSart: Support for Real-Time Atomic Transactions"
19th Fault-Tolerant Computing Symposium, pp. 462-469.
IEEE 1989.
[Jensen85]
Jensen, E.D., Locke, C.D. and Tokuda, H.
"A Time-Value Driven Scheduling Model for Real-Time Operating Systems"
Proc. of the Real Time Systems Symposium
IEEE, 1985.
[Jensen92]
Jensen, E.D.
"Alpha: A Non-Proprietary Realtime Distributed Operating System for
Mission Management Applications"
Conference Proceedings Echtzeit'92, pp. 205-212.
München, June 1992.
[Jensen93]
Jensen, E.D.
"Asynchronous Decentralized Realtime Computers"
Realtime Computer Systems, Digital Equipment Corp.
March, 1993.
[Kirrmann87]
Kirrmann, H.
"Fault Tolerance in Process Control:
an Overview and Examples of European Products".
IEEE Micro: Vol. 7 NO. 5, pp. 27-50.
October 1987.
[Krishna84]
Krishna, C. M.
"On The Design and Analysis of Real-Time Computers".
PhD Thesis, University of Michigan.
September 1984.
[Lascoe88]
Lascoe, O.D.
Carnes Publishing Series Inc. : "Handbook of Fabrication Processes"
ASM International, 1988.
[Liu73]
Liu, C.L. and Layland, J.W.
"Scheduling Algorithms for Multiprogramming in a Hard Real-Time
Environment".
J. ACM, Vol. 20, No. 1, January 1973.
[Middleton90]
Middleton, R. and Goodwin, G.
Prentice Hall Information and System Sciences Series: "Digital Control and
Estimation: a Unified Approach"
Prentice Hall International Editions, 1990.
[Restivo91]
Restivo, T., Magalhães, A.P., Mendes, J. and Freitas, F.
"Evaluation of Sheet Metal Punching Process Using a Computer Controlled
Prototype Hydraulic Press".
Fourth Bath International Fluid Power Workshop.
September 1991.
[Shin85]
Shin, G., Krishna, C. and Lee, Y.-H.
"A Unified Method for Evaluating Real-Time Computer Controllers and Its
Application".
IEEE Transactions on Automatic Control: Vol. AC-30, NO. 4, pp. 357-366.
April 1985.
[Steyn89]
Steyn, P.
"The Scope of Production and Operations Management"
British Library Cataloguing in Publication Data: "International Handbook of
Production and Operations Management". Pp. 3,14.
Edited by Ray Wild.
Cassell Educational Ltd., 1989.
[Tokuda87]
Tokuda, H., Wendorf, J. and Wang, H-Y.
"Implementation of a Time Driven Scheduler for Real-Time Operating
System".
Proceedings of the 7th Real-Time Systems Symposium.
IEEE 1987.
[Wellstead90]
Wellstead, P.E.
"Self-tuning Extremum Control"
IEE Proceedings, Vol. 137, Pt. D, NO. 3. pp. 165-175.
May 1990.