Verifying Fault-Tolerant Behavior of State Machines
M. Dal Cin
University of Erlangen-Nuernberg
Martensstrasse 3, 91058 Erlangen, FRG
Abstract
Fault-tolerant behavior is an important non-functional
requirement for systems that involve high criticality. We
present a framework within which the analysis of fault-tolerant
behavior can be undertaken. This framework is based
on the notion of state machines and tolerance relations.
Results concerning fault-tolerant behavior of finite-state
machines are presented and an illustrative example is
discussed. Various kinds of fault-tolerant behavior (masking,
fail-stop, t-fail-stop, degradable, etc.) are modeled.
1. Introduction
Fault-tolerant behavior is an important non-functional
requirement for systems that involve high criticality and even
for those systems that have been verified. After all, systems
can fail in many ways due to wear or inadequate handling and
this loss of quality may not be detected in due time. Then the
affected system moves through a series of states which may
be undesirable, e.g. unsafe, and may then proceed to failure,
hazard, or accident. Hence, systems involving high
criticality should be able to tolerate, to some extent, the
degradation of their quality, in particular if their behavior can
be affected by the occurrence of unexpected and sometimes
unpredictable conditions.
The aim of this paper is to develop a framework which
can be elaborated to allow the analysis of fault-tolerant
behavior to be undertaken. Roughly speaking, a system
behaves in a fault-tolerant manner if it does not fail when it
changes its state due to some faults. The notion of fault-tolerant
behavior will be made precise below. The framework
is based on the notion of finite-state machines, since they
provide an intuitive and natural model for describing discrete-state
and discrete-event systems which continuously receive
inputs from and react to their environment. We aim at a basic
framework and, therefore, employ a simple finite-state model.
The basic notion is that of a tolerance relation over the state
space. Given a finite state model there are many ways to
investigate and verify its properties [8]. We choose one based
on exploring the possible states and state transitions of the
model by set-theoretic means and the calculus of binary
relations [3]. Introducing tolerances and errors as binary
relations over state spaces gives us the possibility to exploit
the relational calculus and to deal explicitly with unexpected
state transitions. The intended benefits will be to simplify
modeling erroneous behavior and tolerances.
We use capital letters to denote sets, lower case letters to
denote elements of sets, and Greek letters to denote relations;
ι_X is the identity relation over X and X¬ is the complement of X.
Definitions: Given binary relations ρ, σ ⊆ X × Y, binary
relations τ and α over X, a binary relation χ over Y, and S ⊆ X,
R ⊆ Y. Instead of (x, y) ∈ ρ we usually write xρy. Then:
a) converse: ρ^c = {(y, x) | (x, y) ∈ ρ} ⊆ Y × X,
b) Peirce products: S⋅ρ = {y | ∃x ∈ S: xρy}, ρ⋅R = {x | ∃y ∈ R: xρy};
instead of {x}⋅ρ we usually write xρ,
c) composition: ρ ⋅ σ = {(x, y) | ∃x': x ρ x', x' σ y};
obviously ρ ⋅ σ = ∅ if X ∩ Y = ∅,
d) image of τ under ρ: (ρ, ρ)τ = ρ^c ⋅ τ ⋅ ρ; likewise, (ρ, σ)τ = ρ^c ⋅ τ ⋅ σ,
e) ρ ∗ τ = ι_Y ∪ (ρ, ρ)τ ⊆ Y × Y,
f) exponentiation: τ^0 = ι_X and τ^n = τ^(n-1) ⋅ τ,
g) reflexive extension: rα = ι_X ∪ α.
Relation τ is stable under α if (α, α)τ ⊆ τ, i.e. if it
contains its image. If τ is reflexive and stable then
α ∗ τ = ι_X ∪ (α, α)τ ⊆ τ. Relation α is deterministic if
α^c ⋅ α ⊆ ι_X, total (complete) if ι_X ⊆ α ⋅ α^c, and injective
if α ⋅ α^c ⊆ ι_X.
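For finite sets these operations can be prototyped directly. The following sketch (helper names are ours, not part of the calculus) models a relation as a Python set of pairs and implements the converse, composition, Peirce product, and the determinism and totality tests above.

```python
# Relations over finite sets as Python sets of pairs (a sketch; the
# function names are ours, the definitions follow a)-f) above).

def identity(X):
    return {(x, x) for x in X}                       # iota_X

def converse(rho):
    return {(y, x) for (x, y) in rho}                # rho^c

def compose(rho, sigma):
    return {(x, z) for (x, y) in rho
                   for (y2, z) in sigma if y == y2}  # rho . sigma

def peirce(S, rho):
    return {y for (x, y) in rho if x in S}           # S . rho

def is_deterministic(rho, X):
    return compose(converse(rho), rho) <= identity(X)  # rho^c.rho ⊆ iota_X

def is_total(rho, X):
    return identity(X) <= compose(rho, converse(rho))  # iota_X ⊆ rho.rho^c
```

For example, `compose({(1, 2)}, {(2, 3)})` yields `{(1, 3)}`, while `{(1, 2), (1, 3)}` fails the determinism test.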
We proceed as follows. In Section 2 we introduce our
concept of errors and tolerances. In Sections 3 and 5 we
discuss an example. In Section 4 we present the framework
concerning fault tolerance of finite state machines. In Section
6 we comment briefly on model checking fault-tolerant
behavior.
2. Errors of fault-tolerant systems
According to [9] an error is the consequence of a fault
which can lead to a failure of the system. Within our abstract
framework we are not interested in modeling faults, the
physical causes of errors. We are rather interested in errors.
However, studying the effects of errors on system behavior,
one has to differentiate between errors caused by permanent
faults and errors caused by temporary faults. It is commonly
agreed that in computing systems temporary faults occur
much more often than permanent ones.
In research on fault tolerance it has long been recognized
that an appropriate fault/error model plays a fundamental role,
and that it is very hard to obtain one. Our fault/error model is
based on erroneous state transitions. According to [10]: An
erroneous transition of a system is any state transition to
which a subsequent failure could be attributed. Thus, an
erroneous state transition may lead to an erroneous state. An
erroneous state of a system is any state which can lead to
failure by a sequence of valid transitions. Specifically, there
must exist a possible sequence of events which would, in the
absence of corrective actions by the system and in the absence
of erroneous transitions, lead from the erroneous state to a
system failure. If the design of a system is considered to be
correct then an erroneous transition can only occur because of
a failure of one of the components of the system.
An erroneous state transition caused by a temporary fault
occurs at most once in the considered time period, an
erroneous state transition caused by a permanent fault can
occur whenever the system is in the state that gives rise to the
erroneous transition. Note that a temporary erroneous state
transition may very well cause the system to take erroneous
states permanently. A temporarily faulty system component
spontaneously recovers from its fault; however, the
fault may have corrupted the system state, and subsequent
state transitions will perpetuate the contamination. Hence, a
temporary fault can lead to permanent failure.
On the other hand, the system may exhibit self-stabilizing
recovery from temporary faults. Examples of self-stabilizing
recovery are discussed in [15]. Hence, not all state
transitions due to faults may lead to failure. We call these
transitions, as well as erroneous transitions, unexpected
transitions. We will model unexpected state transitions and
erroneous states by binary relations over the state space of the
system. (Such relations will be called errors.) That is, we
model errors explicitly as separate entities. This gives us the
possibility of modeling uniformly errors caused by temporary
faults and errors caused by permanent faults. Furthermore,
this framework allows us to describe fault-tolerant behavior,
such as fault masking or fail-stop, and to develop tests for
fault tolerance [6] in a concise way.
For a fault-tolerant system it makes little sense to try to
verify that it behaves exactly as its specification prescribes.
Ideally, a fault-tolerant system behaves like the fault-free system in the
presence of faults. In reality, however, it most often exhibits a
degraded but tolerable behavior due to recovery (performance
degradation). Hence, we introduce the concept of tolerance as
follows. A tolerance is a binary relation over the state space of
a system such that two states are related (are in tolerance) if
their difference can be tolerated. For example, the two states
may give rise to different, but acceptable, outputs. A tolerance
is, clearly, reflexive and symmetric. A coarse tolerance, call it
η, is given as follows: two states are η-related if and only if
both are not erroneous. Roll-back recovery provides another
example. If failures do not occur too frequently, a correct
system state may be restored by roll-back recovery as long as
the erroneous state transition is due to a temporary fault
(temporary error) and does not affect the recovery
mechanism. However, restoring the system state may only be
feasible from certain (undesirable) states. We then may say
that these states are in tolerance with the correct states.
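For a finite state space, the coarse tolerance η can be sketched as follows. The reflexive pairs on erroneous states are added explicitly here (our reading of the definition) so that η is indeed a tolerance, i.e. reflexive and symmetric:

```python
# Sketch of the coarse tolerance eta: two states are related iff both
# are non-erroneous; reflexive pairs are added for erroneous states so
# that eta stays reflexive (an assumption of this sketch).

def coarse_tolerance(states, erroneous):
    ok = set(states) - set(erroneous)
    eta = {(s, t) for s in ok for t in ok}
    eta |= {(x, x) for x in states}          # reflexive extension r(eta)
    return eta

def is_tolerance(tau, states):
    reflexive = all((x, x) in tau for x in states)
    symmetric = all((y, x) in tau for (x, y) in tau)
    return reflexive and symmetric
```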
3. Railroad crossing
To illustrate our approach we consider a railroad crossing system similar to that introduced in [8]. It consists of a
railroad track, a traffic signal (on = green, off = red) for trains,
a gate, and trains. To keep the example simple we consider a
one-way railroad and assume that trains arrive at the crossing
at a sufficiently low rate, and leave it quickly enough, so that
no train attempts to enter the crossing while another train is
already there. When the train approaches region P it observes
the traffic light. Region I is the crossing through which cars
can travel, of course, only if the gate is up. It may be possible
that a train approaches region I while the gate is opening. In
this case, the gate must reverse its movement and start
closing. After the train has left the crossing (state O), the gate
is reopened. Figure 1 illustrates the crossing by a statechart
[7]. This so-called AND-decomposition specifies the behavior
of the train, the light, and the gate. The controller of the
system is not modeled.
[Figure 1: Statechart (AND-decomposition) of the railroad crossing with three parallel components: Train (states O, P, I; events goP, enterI, exitI), Light (states Off, On; event switch_on), and Gate (states Open, Closing, Closed, Opening; events lower, down, rise, up).]
States: O: train outside of the crossing, P: train passes the
light without stopping, I: train is crossing.
The statechart contains three finite-state machines
(automata, transition tables). Each one can be given as A =
(S_A, s_0^A, E_A, ∆_A), where S_A is a finite set of states and s_0^A is the
default (initial) state. E_A is a set of events and ∆_A the
transition relation, i.e., ∆_A ⊆ S_A × E_A × S_A. It describes how
the machine makes transitions from one state to the next when
an event occurs. The AND-chart specifies a composed state
machine with state space S = S_Train × S_Light × S_Gate. Its event set
is the union of the individual event sets. Events not specified
for a state do not change this state. The default state is O1 =
(O, Off, Open). Transitions between states may be guarded by
conditions.
For example, goP[in(On)]: whenever the train passes
region P without stopping, the light must be on. The guarding
condition must be true before the transition can take place. In
reality, however, it may become difficult to guarantee that the
guards are observed by the system. The system may wear out,
its environment may become adverse or its operator may
introduce erroneous states.
Hence, there may be a non-zero probability (depending
on time) that an (undesirable) transition takes place even if the
guarding condition is false. For example, a guard for the light
is "switch_on [in(Closed)]", i.e. the light goes on (green)
when the gate is closed. The guard will not be observed when
the light bulb is broken or if there is a short-circuit. We will,
however, not mention guards explicitly, since they are not
needed for the following. Not observing a guard is a fault, not
an error; it may result in an error.
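The fault-free AND-composition can be sketched as follows. The component tables are transcribed from Figure 1 and are partly conjectural (only switch_on is legible as a light event); guards are omitted, and events not specified for a component leave its state unchanged:

```python
# Sketch of the composed GRC machine: state = (Train, Light, Gate).
# Component transition tables (partly conjectural transcription of Fig. 1).
TRAIN = {('O', 'goP'): 'P', ('P', 'enterI'): 'I', ('I', 'exitI'): 'O'}
LIGHT = {('Off', 'switch_on'): 'On'}
GATE = {('Open', 'lower'): 'Closing', ('Closing', 'down'): 'Closed',
        ('Closed', 'rise'): 'Opening', ('Opening', 'up'): 'Open',
        ('Opening', 'lower'): 'Closing'}   # gate reverses while opening

def step(state, event):
    """One transition of the AND-composition; unspecified events
    leave the corresponding component state unchanged."""
    train, light, gate = state
    return (TRAIN.get((train, event), train),
            LIGHT.get((light, event), light),
            GATE.get((gate, event), gate))
```

Starting from the default state ('O', 'Off', 'Open'), the event sequence lower, down, switch_on, goP reproduces the safe approach of a train.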
If the system undergoes desirable state transitions only,
its (fault-free) transition relation is given by the state
transition diagram of Figure 2. Notice that the state space
contains 18 more states which cannot be reached from the
default state.
Now, (I, Off, Opening) can be considered in tolerance with (I,
On, Closed). Before the gate is fully open and the cars start
moving the train may have left the crossing. If we consider
safety, all states of Figure 2 are in tolerance with each other.
They are, however, not in tolerance with the unsafe states (P,
Off, Open) and (I, Off, Open), i.e. the train passed the light
without permission, and the train is crossing despite the open
gate, respectively. (I, Off, Opening) may be considered in
tolerance with (I, On, Closed) but undesirable (see above).
The 'deadlock' state (O, Off, Closed), in which the train is not
allowed to approach the crossing despite the gate being closed,
is undesirable, too, and not in tolerance with any other state.
The remaining states can be classified similarly.
4. Automata with tolerance and errors
Central to our approach is the concept of a tolerance space
[2,5], introduced first by H. Poincare [14]. A tolerance space
(X, τ) is a set X with a tolerance τ, that is, a binary, reflexive,
and symmetric relation over X. If (x, x’) ∈ τ, we say x is
within tolerance of x'.
Definition: The transition relation ∆ of automaton A =
(S, s_0, E, ∆) is often given as d = {δ_x ⊆ S × S | s δ_x s' iff (s, x,
s') ∈ ∆, x ∈ E}. A is deterministic (complete) if δ_x is
deterministic (complete) for all events x. Let τ be a tolerance over S; the
tuple (A, τ) is called an automaton with tolerance, and (A, τ) is
stable if τ is stable under the state transitions δ_x ∈ d.
For example, tolerance η of Section 2 is stable according
to the definition of erroneous states. Intuitively, successors of
states within tolerance of a stable automaton all stay
within tolerance under the action of any event sequence. This
will later be exploited when we consider temporary and
permanent errors of stable automata.
With τ we can express certain properties of states
formally; e.g., q ((τ ⋅ τ¬)¬ ∩ (τ¬ ⋅ τ)¬) s expresses that q is in
tolerance with all and only those states that are in tolerance
with s.
Let F_A be the set of all stable tolerances of automaton A.
F_A is a complete distributive lattice (with respect to set
inclusion) [5]. Thus, F(A, τ) = {(A, σ) | σ ∈ F_A, σ ⊆ τ, τ a
tolerance on S} has a unique maximal element, if not empty.
This maximal stable element of F(A, τ) will be denoted by
(A, τ*). To verify stability of (A, τ) we can compute τ* and
then check whether τ = τ* [6].
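For small state spaces, τ* can be sketched as a greatest-fixpoint iteration: starting from τ, discard every pair whose image under some δ_x escapes the current relation (function names are ours; the actual test is given in [6]; for a deterministic automaton the result stays reflexive and symmetric):

```python
# Sketch: greatest stable subrelation of tau under the transitions in
# delta (delta maps each event to a set of transition pairs (s, s')).

def stable_core(tau, delta):
    sigma = set(tau)
    changed = True
    while changed:
        changed = False
        for (s, t) in list(sigma):
            # stability: every image pair (s', t') must stay in sigma
            ok = all((s2, t2) in sigma
                     for d in delta.values()
                     for (a, s2) in d if a == s
                     for (b, t2) in d if b == t)
            if not ok:
                sigma.discard((s, t))
                changed = True
    return sigma
```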
As stated in Section 2, our fault/error-model is based on
unexpected state transitions. The unexpected transition from s
to s' may be due to a permanent or temporary fault and s’ may
be an erroneous state.
Definition: Let A = (S, s_0, E, ∆) be a (not necessarily
deterministic) finite automaton and τ a tolerance over S. Any
binary, reflexive relation φ ≠ ι_S over S is called an error
of (A, τ). Error φ is called temporary if φ is applied only once,
cf. Figure 3a. If error φ is due to a permanent fault, it changes
the transition relation of A from ∆_A to ∆_A ⋅ φ. Then error φ is
also called permanent, and we denote the error-prone
automaton by A^φ. This modified automaton is non-deterministic.
Definition: Error φ is compatible if φ ⊆ τ. A temporary
error φ ⊆ τ* is insignificant. It is small if δ_x ∗ φ ⊆ τ for all
x ∈ E. A permanent error φ is small if (δ_x ⋅ φ, δ_x)φ ⊆ τ for
all x ∈ E.
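For finite automata these are direct set computations. The following sketch (helper names ours) tests compatibility, and smallness of a temporary error via δ_x ∗ φ = ι_S ∪ δ_x^c ⋅ φ ⋅ δ_x ⊆ τ:

```python
# Sketch: compatibility and smallness checks for a temporary error phi.

def converse(rho):
    return {(y, x) for (x, y) in rho}

def compose(rho, sigma):
    return {(x, z) for (x, y) in rho for (y2, z) in sigma if y == y2}

def is_compatible(phi, tau):
    return phi <= tau                                  # phi ⊆ tau

def is_small_temporary(phi, tau, delta, states):
    iota = {(s, s) for s in states}
    # delta_x * phi = iota ∪ delta_x^c . phi . delta_x ⊆ tau, for all x
    return all(iota | compose(compose(converse(dx), phi), dx) <= tau
               for dx in delta.values())
```

The test below exhibits an error that is small but not compatible with τ = ι_S.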
[Figure 3: (a) a temporary error φ occurs once after a transition: s → sδ_x → s'; (b) a permanent error modifies the transition relation: δ_x^φ = δ_x ⋅ φ.]
As an example consider the scenario of Figure 3b. Due to
error (q, s) ∈ φ the error-prone system may take state s
instead of q. From state s it reaches state u. Error (q, s) is
small if (u, v) ∈ τ and (u, w) ∈ τ, i.e. state u can be tolerated
regardless of whether the error-free system would have reached
state v or w. Observe further that φ being small implies that
δ_x^c ⋅ ι_S ⋅ δ_x = δ_x^c ⋅ δ_x ⊆ τ. This holds for deterministic events,
not necessarily for non-deterministic ones. However, if we
relate tolerant behavior with non-erroneous state transitions,
then δ_x^c ⋅ δ_x ⊆ τ should be required of any tolerance.
Obviously, insignificant temporary errors are compatible
and small.
Examples: Error φ = ι_S ∪ δ_x^c = rδ_x^c, where δ_x is
injective, models the situation where automaton A may refuse to
perform x-transitions. The light bulb may break with some
(small) probability just when the train passes it. This defines
the error φ_L = r{((P, On, Q), (P, Off, Q)) | Q ∈ S_Gate}.
As with tolerances, we can express properties of errors
directly; e.g., q (φ ⋅ φ^c)¬ s expresses that there is no
unexpected state transition from q and s to the same state.
Lemma: Error φ = ∩_{x∈E} (δ_x ⋅ (τ ⋅ δ_x^c)¬)¬ is the largest
small error of (A, τ), if A is complete.
Proof: δ_x ∗ φ ⊆ τ implies δ_x^c ⋅ φ ⋅ δ_x ⊆ τ. Hence,
δ_x^c ⋅ φ ⋅ δ_x ⋅ δ_x^c ⊆ τ ⋅ δ_x^c ⇒ δ_x^c ⋅ φ ⊆ τ ⋅ δ_x^c,
since ι_S ⊆ δ_x ⋅ δ_x^c.
This is equivalent to the Schröder equivalences; hence φ =
∩_{x∈E} (δ_x ⋅ (τ ⋅ δ_x^c)¬)¬ [3].
Let σ be a bisimulation over S_A [12], i.e.,
δ_x^c ⋅ σ ⊆ σ ⋅ δ_x^c and σ ⋅ δ_x ⊆ δ_x ⋅ σ
for all x ∈ E; then
µ = ι_S ∪ σ ∪ σ^c is also a bisimulation and could qualify as
a tolerance. Suppose A is deterministic. If φ is compatible, we
can say that φ is tolerated by (A, µ), since for all q ∈ S, x ∈ E,
p ∈ qδ_x and q' ∈ q(δ_x ⋅ φ): (q, q) ∈ µ, (q', p) ∈ φ ⊆ µ, and all
pairs of states reachable from q' and p are in tolerance. If,
however, A is not deterministic, we cannot infer that φ is
tolerated, cf. Figure 4, where µ = r{(p, q'), (q', p)} is a
bisimulation and (p, q') ∈ φ ⊆ µ; however, s is not in tolerance
with r. Thus, a stable tolerance is a bisimulation if A is
complete, and a bisimulation is stable if A is deterministic.
Fault masking can be modeled by compatible and small
errors of stable automata, and a stable automaton (A, τ) may be
called a fail-stop automaton if φ ⊆ r(S × s_0τ) are the only
possible errors; i.e. all state transitions due to faults lead to
states that are in tolerance with the default state of A.
Lemma: The maximal compatible error of (A, τ) is τ, and
τ* is the maximal insignificant, hence compatible and small,
temporary error of A. Compatible temporary errors of a stable
automaton are small, and their consequences (induced errors)
remain small under the action of any sequence of events.
(Proof obvious.)
Intuitively, automaton (A, τ) recovers, i.e. overcomes the
influence of error φ, if after some time it enters only states
which are (and from then on stay) in tolerance with correct
states, i.e. when a fault occurs a certain time elapses before
the recovery is effective.
Definitions: Error φ of (A, τ) is (p, l)-bounded if for all
finite sequences w of events with length equal to or greater than
p: δ_w ∗ φ ⊆ τ^l holds, and this inclusion does not hold for (p-1,
l) or (p, l-1). Here δ_w = δ_x1 ⋅ δ_x2 ⋅ ... ⋅ δ_xn for w = x_1 x_2 ... x_n,
x_i ∈ E. Automaton (A, τ) is said to τ-correct temporary error φ
if there is a p such that φ is (p,1)-bounded by (A, τ).
A t-fault-tolerant automaton can mask at most t faults
[13]. Let φ model its errors; then t-fault tolerance can be
expressed as: φ^i, i = 1, 2, ..., t, are (0,1)-bounded errors. The
automaton is t-fail-stop if φ^(t+1) = r(S × s_0τ). Gracefully
degrading fault-tolerant systems can be modeled by automata
with tolerance and (p, l)-bounded errors, l > 1. In order to
decide whether automaton A recovers from error φ after a
finite amount of time, one has to see whether there is some p,
0 ≤ p ≤ 2|S|-2, such that φ is (p,1)-bounded. A test is given in
[6].
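A naive version of such a test (only the inclusion part of (p, 1)-boundedness, with an explicit caller-supplied search bound max_len rather than the bound used in [6]) can be sketched by enumerating event sequences:

```python
# Sketch: check delta_w * phi ⊆ tau for every event sequence w with
# p <= |w| <= max_len (max_len is an explicit, caller-supplied bound).
from itertools import product

def converse(rho):
    return {(y, x) for (x, y) in rho}

def compose(rho, sigma):
    return {(x, z) for (x, y) in rho for (y2, z) in sigma if y == y2}

def recovers_by(phi, tau, delta, states, p, max_len):
    iota = {(s, s) for s in states}
    for n in range(p, max_len + 1):
        for word in product(delta.keys(), repeat=n):
            dw = iota
            for x in word:                 # delta_w = delta_x1 ... delta_xn
                dw = compose(dw, delta[x])
            if not (iota | compose(compose(converse(dw), phi), dw) <= tau):
                return False
    return True
```

In the test below the error is tolerated after every sequence of at least one event, but not after the empty sequence, i.e. p = 1.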
Let φ be a permanent error. Modification A^φ is
non-deterministic, even if the reference automaton A itself is
deterministic. If φ of a stable automaton (A, τ) is small, then
modification (A^φ, τ) is also stable. However, the converse
may not be true. We now briefly investigate automata with
tolerance which have the property that some of their
permanent errors become ineffective by the action of event
sequences.
Definition: Error φ of (A, τ) is called a p-bounded error
of modification (A^φ, τ) if for all finite sequences w of events
with |w| ≥ p: (δ_w, δ_w^φ)φ ⊆ τ, where δ_w is given by ∆ and δ_w^φ
by ∆ ⋅ φ, and the inclusion does not hold for p-1. φ is said to be
masked (w.r.t. τ) if it is 0-bounded.
Theorem: Let (A, τ) be stable. If error φ is compatible
with tolerance τ, then there exists n ≤ |S|-1 such that (A^φ, τ^n) is
stable and φ is masked w.r.t. τ^n.
Proof: Suppose that (A, τ) is stable and φ ⊆ τ. Then for
all x ∈ E, δ_x ∗ τ^r = ι_S ∪ δ_x^c ⋅ τ^r ⋅ δ_x ⊆ τ^r, r = 1, 2, ... . Since S
is finite and τ reflexive, there is an n such that τ^n = τ^(n+i),
i = 1, 2, ... . Hence, δ_x^φ ∗ τ^n = ι_S ∪ φ^c ⋅ δ_x^c ⋅ τ^n ⋅ δ_x ⋅ φ ⊆ τ^(n+2) = τ^n
(stability of A^φ), and δ_w^c ⋅ φ ⋅ δ_w^φ ⊆ δ_w^c ⋅ τ ⋅ δ_w^φ ⊆ τ^n for all
w, since τ ⊆ τ^n, i.e., φ is masked.
Additional properties of bounded errors can be found in
[5,6]. In [6] we present an algorithm which tests whether or
not error φ of automaton (A, τ) is a (p, l)-bounded temporary
error. With slight changes of this algorithm we can also
decide whether or not φ is a p-bounded error of modification
(Aφ, τ).
5. Errors of the railroad crossing system
Let us now return to our general railroad crossing example
(in short, GRC) of Section 3 and consider its fault-tolerant
behavior. The relevant part of ∆_GRC is given as Figure 2; its
states are numbered according to Table 1, i.e., Xn denotes the
state (X, Light, Gate) with (Light, Gate) numbered as in the
table. For example, O1 = (O, Off, Open) and O8 = (O, On, Closed).

Table 1: Numbering of the (Light, Gate) combinations.

        Open / Closing    Opening / Closed
  Off       1 / 5             3 / 7
  On        2 / 6             4 / 8

We first consider error φ_L (Sec. 4). Suppose φ_L models
consequences of temporary faults. The statechart with error φ_L is
shown in Figure 5. Aggregations of states are depicted as
so-called superstates; e.g. Out is the superstate when the train is
outside the railroad crossing, and φ is the superstate reached
by an unexpected state transition due to φ. Dashed transitions
are taken only once (temporary). For example, instead of

(O, On, Closed) →goP→ (P, On, Closed)

we have (see Table 1): O8 →goP→ P8, P8⋅(P8, P7) = P7. When the
train has passed the light, the state of the light is not
safety-relevant, provided the light can be switched on again before
the next train arrives (i.e., provided φ_L is a temporary error).
Hence, we may say that P7 is within tolerance of P8. We,
therefore, define τ = r(γ ∪ γ^c) with γ = {(P7, P8), (I7, I8)}.
(GRC, τ) is obviously stable. Moreover, φ_L is (2,0)-bounded.
Now, suppose φ_L is a permanent error of GRC. We then
consider φ_LP = φ_L ∪ {(O8, O7)}. The modification GRC^φ is
given by ∆^φ = ∆_GRC ⋅ φ_LP. Its effect on superstate Out is shown
in Figure 6. The other superstates, particularly φ, do not
change. (However, dashed transitions become ordinary
transitions.) This modification is not stable, since δ_exitI^c ⋅ (I8,
I7) ⋅ δ_exitI = (O8, O7) ∉ τ and I8 τ I7. The system changes into
the deadlock state O7 with non-zero probability, but not into
erroneous (unsafe) states. Suppose that we can tolerate O7,
too (i.e., O7 τ O8). Then GRC^φ is stable, since φ_LP is
compatible with τ and τ = τ*.
Now, consider error φ_G = r{(I7, I3), (P7, P3)}. This error
models the possibility that the gate opens whenever the traffic
light goes off. This error may be due to a short in the
controller. This is a safe state transition except when the train
is in P. Thus φ_G ⊄ τ. Anyway, φ_G does not modify (the
relevant part of) GRC, since ∆_GRC ⋅ φ_G = ∆_GRC ⋅ ι_S_GRC = ∆_GRC.
Notice, φ_G is also (0,0)-bounded since GRC is deterministic.
State relation φ_LG = φ_LP ⋅ φ_G is more interesting. The
modification of GRC by φ_LG is shown in Figure 7. Suppose
φ_LG models a temporary error; then φ_LG is (2,0)-bounded.
However, interpreted as permanent, it is obviously not
p-bounded. Moreover, φ_LG ⊄ τ, and the state transitions may lead
to the erroneous state I1 and, hence, are erroneous.

[Figures 5, 6 and 7: the statechart of GRC with temporary error φ_L (superstates Out, Crossing, φ), the effect of permanent error φ_LP on superstate Out, and the modification of GRC by φ_LG.]

Note that statecharts provide a convenient way to model
also more complex errors [6].
6. Model Checking
We can employ CTL (computation tree logic [4]) and
symbolic model checking to express and check fault-tolerance
requirements. For example, given (A, τ ) and temporary error
φ, define
A^A = (S_A × S_A, (s_0, s_0), E_A, ∆^AA)
with (s, q) δ_x^AA = sδ_x × qδ_x. We can now specify the
property 'to be in tolerance' as the subset τ of S_A × S_A.
Automaton A exhibits fault-tolerant behavior if it stays within
τ. Hence, the requirements that A masks temporary error φ
and that A τ-corrects φ are:
A^A, (s_0, s_0) |= φ ⇒ AG τ and A^A, (s_0, s_0) |= φ ⇒ A(true U AG τ),
respectively.
Here A (always) is a path quantifier, and G (global) and
U (until) are temporal operators. Note that we identify a
predicate with the set of states which makes it true.
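For small state spaces the masking requirement φ ⇒ AG τ can also be checked explicitly, without a symbolic model checker, by exploring the pair automaton A^A from the φ-related pairs (function name ours):

```python
# Sketch: A masks temporary error phi iff every pair reachable in the
# pair automaton from the phi-pairs lies in tau (phi => AG tau).

def masks(phi, tau, delta):
    frontier = set(phi)
    seen = set()
    while frontier:
        pair = frontier.pop()
        if pair in seen:
            continue
        seen.add(pair)
        if pair not in tau:
            return False                   # AG tau violated
        s, t = pair
        for d in delta.values():           # advance both copies on one event
            for (a, s2) in d:
                if a == s:
                    for (b, t2) in d:
                        if b == t:
                            frontier.add((s2, t2))
    return True
```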
7. Conclusion
Within the framework of automata with tolerance, various
kinds of fault-tolerant state machines (masking, fail-stop,
t-fail-stop, degradable, etc.) can be specified. Subsequently,
many aspects of errors and their role in the performance of
fault-tolerant state machines can be investigated. To this end,
symbolic model checking can be employed. We aimed at a
basic framework and, therefore, employed a simple finite-state
model. However, this framework can be extended when other,
more elaborate models of finite state machines are
investigated for particular kinds of applications [1,8].
References
[1] R. Alur, D.L. Dill; Automata-theoretic verification of real-time systems, in [8], pp. 55-81, 1996
[2] M. Arbib; Tolerance automata, Kybernetika, Cislo 3, pp. 223-233, 1967
[3] C. Brink, W. Kahl, G. Schmidt (Eds.); Relational Methods in Computer Science, Springer, Wien, 1997
[4] E.M. Clarke, R.P. Kurshan; Computer-aided verification, IEEE Spectrum, June, pp. 61-67, 1996
[5] M. Dal Cin; Fuzzy-state automata: their stability and fault tolerance, Int. Journ. of Comp. and Inform. Sciences, Vol. 4, p. 63, 1975; and Modification tolerance of fuzzy-state automata, Int. Journ. of Comp. and Inform. Sciences, Vol. 4, p. 81, 1975
[6] M. Dal Cin; Verifying fault-tolerant behavior of state machines, IMMD3 Int. Rep. 97/1, 1997
[7] D. Harel; Statecharts: a visual formalism for complex systems, Science of Comp. Progr. 8, p. 231, 1987
[8] C. Heitmeyer, D. Mandrioli; Formal methods for real-time computing: an overview, in Formal Methods for Real-Time Computing, J. Wiley, p. 1, 1996
[9] J.-C. Laprie; Dependability: Basic Concepts and Terminology, Dependable Computing and Fault-Tolerant Systems, Vol. 5, Springer, Wien, 1992
[10] P.A. Lee, T. Anderson; Fault Tolerance: Principles and Practice, Springer, Wien, 1990
[11] G. Leeb, N. Lynch; Proving safety properties of the steam-boiler problem, in Formal Methods for Industrial Applications, Springer LNCS 1165, p. 318, 1996
[12] R. Milner; Communication and Concurrency, Prentice-Hall, New York, 1989
[13] T.S. Perraju, S.P. Rana, S.P. Sarkar; Specifying fault tolerance in mission critical systems, HASE'96, 1996
[14] H. Poincare; The Value of Science, reprinted by Dover, 1958
[15] J. Rushby; Reconfiguration and transient recovery in state machine architectures, Proc. IEEE FTCS-26, p. 6, 1996