
IEEE TRANSACTIONS ON COMPUTERS, VOL. C-27, NO. 1, JANUARY 1979
A better expression of n(c) is (2). Consider a fault that affects a single transition, the probability of which is vj. When this transition is achieved, there is a probability Pi of detection. Pi depends on the fault; it is the probability of not reaching a "good" state before detection.

n(c) = log (1 - c)/log (1 - vjPi). (2)
For a fault affecting several transitions, (vP) = max (vjPi) will take the place of vjPi in (2). Then the upper bound n(c)ub will be

n(c)ub = log (1 - c)/log (1 - (vP)min). (3)
The fault considered in example 5 leads to the state S4 instead of S1. Then if the next input is X = 1, there is detection; if the next input is X = 0, the "good" state S1 is reached without detection. Then Pi = 1/2 and (2) gives the same result as the ELM, n(0.999) = 410. If several transitions may be affected by a fault, (2) may give a length greater than the ELM in some cases.
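The relation between n(c) and the confidence c can also be checked by simulation; a minimal sketch, with the per-input detection probability vP taken as an assumed illustrative value rather than from example 5:

```python
import math
import random

random.seed(1)
c, vP = 0.999, 0.05  # assumed per-input detection probability vjPi
n = math.ceil(math.log(1 - c) / math.log(1 - vP))  # equation (2)

# Fraction of length-n random tests that detect the fault: each input
# detects with probability vP independently, so a length-n test detects
# with probability 1 - (1 - vP)^n >= c.
trials = 20000
detected = sum(
    any(random.random() < vP for _ in range(n)) for _ in range(trials)
)
confidence = detected / trials  # empirically close to c
```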
With vjPi ≪ 1, (2) may be written

n(c) = log (1 - c)/[Pi log (1 - vj)]. (4)
A proportionality between n(c) and 1/Pi appears in (4).
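The quality of approximation (4), and the 1/Pi proportionality it exposes, can be illustrated numerically (the values of v and Pi below are assumed for illustration):

```python
import math

def n_exact(c, v, P):
    """Equation (2): n(c) = log(1 - c) / log(1 - vP)."""
    return math.log(1 - c) / math.log(1 - v * P)

def n_approx(c, v, P):
    """Equation (4): uses log(1 - vP) ~= P log(1 - v) when vP << 1."""
    return math.log(1 - c) / (P * math.log(1 - v))

c, v = 0.999, 0.01
for P in (1.0, 0.5, 0.25):
    exact, approx = n_exact(c, v, P), n_approx(c, v, P)
    # the two agree closely for vP << 1, and n_approx is exactly
    # proportional to 1/P: halving P doubles the approximate length
```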
But there is a problem when the values of Pi are to be evaluated, because Pi depends on the fault and the possible faults are very numerous. In the small example of Fig. 2.1, there are more than 20 wires, i.e., more than 40 possible single stuck-at faults, plus the stuck-at faults in the two JK flip-flops. Evaluating Pi for each fault equivalence class may be very long. The ELM, though it is an accurate model and may be interesting for a particular study, also suffers from the fact that every fault equivalence class must be considered.
The approximation method is optimistic when a fault affects only transitions among the less probable and when, furthermore, detection after a faulty transition is not certain. Its results, which cannot be considered as a bound, may be of practical use if such faults are few in the total set of possible faults.
REFERENCES
[1] R. Tellez-Giron and R. David, "Random fault-detection in logical networks," in Dig. IFAC Int. Symp. on Discrete Systems, Riga, USSR, vol. 2, Oct. 1974, pp. 232-241.
Authors' Reply
J. J. SHEDLETSKY AND E. J. McCLUSKEY
A recurring problem in the analysis of random testing is the tradeoff between accuracy and computational efficiency. Every random test requires an (implicit or explicit) analysis of the relationship between test confidence and test length for the circuit under test. This analysis is used to specify a test length. The error latency model ELM [1] provides an accurate analysis of fault behavior in sequential circuits, but the accuracy obtained is computationally costly. On the other hand, an analysis that sacrifices too much accuracy for computational efficiency would be inadequate for controlling test confidence. The important question is whether an analysis can be computationally practical, yet accurate enough to maintain product quality levels [1].
We introduced the ELM to provide a standard of reference,
against which the accuracy of other, more practical methods
could be measured. We are gratified that this reference has already
spurred the suggested improvements in the approximation
method. We encourage further research to explicitly define the
tradeoffs between accuracy and computational efficiency.
REFERENCES
[1] J. J. Shedletsky, "Random testing: Verified effectiveness vs. practicality," in Dig. 1977 Int. Symp. on Fault-Tolerant Computing, June 1977.
Manuscript received May 31, 1977.
J. J. Shedletsky is with the IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598.
E. J. McCluskey is with the Digital Systems Laboratory, Stanford University, Stanford, CA 94305.
0018-9340/79/0100-0086$00.75 © 1979 IEEE