
then the problem

    T_0 r^Q \exp(y_t) = x^*(y_t) \, x^*(y_0)                                  (19)

has a solution y_t > y_0 which via Definition 2 yields a critical L_t - 1 = exp(y_t)/r^Q such that (16) holds iff L > L_t. Defining

    w_t \triangleq (L_t - 1) T_1 (2 T_0)^{-1}

    s \triangleq [QR + \log_e(4 T_0 / (e T_1))] / 2                            (20)

then some routine algebra applied to (13), (19), and Definition 2 yields

    w_t = s + 1 + \log_e(w_t)                                                  (21)

as a problem equivalent to (19). It can be shown that (18) implies s > 0.04712, so the problem (21) is equivalent mathematically to the problem (13).

To summarize application of these results, first T_1/T_0 is calculated. If T_1/T_0 < 1.33917 then (18) holds and it is not necessary to calculate y_0 and x*(y_0). If T_1/T_0 ≥ 1.33917 then y_0 = QR + log_e (2) and [from (14)] x*(y_0) are calculated. If (17) holds then the FFT method is faster than the direct method for all L ≥ 3. If (18) holds then s is calculated from (20) and then w_t is calculated from

    w_t \approx s + 1 + \log_e(0.7515 s + \sqrt{s + 1})                        (22)

with relative error less than 0.0035. Then L_t - 1 = 2 T_0 w_t / T_1 is calculated. The FFT method is then faster than the direct method iff L > L_t.

It is important to note that when all operations (i.e., both for the FFT method and for the direct method) are to be done on one machine, it is virtually impossible that T_0 < T_1, so it is virtually impossible that (17) hold. Thus it can be said that only when two different machines are involved in implementing the FFT and direct methods is it possible to guarantee that the former will be faster than the latter for all L - 1 ≥ 2, assuming N is chosen to minimize (1).

Inequality (18) will hold at least in those cases where the FFT and direct methods are to be implemented on one machine. There will then exist a critical L_t - 1 ≥ 2. It should be noted that even ordinary values of Q (i.e., 0.5 ≤ Q ≤ 2 when r = 2) will normally play a strong role in the determination of L_t. This can be shown by example.

Take Q = r = 2 and T_0/T_1 = 4. It is not necessary to calculate y_0 and x*(y_0) to determine that (18) is satisfied. From (20) calculate s = 1.579, noting that Q > 0 accounts for most of the quantity s. Then from (22) w_t = 3.957, so L_t = 32.65. If instead Q had been taken as zero in the preceding calculation, then the result would have been L_t = 24.87.
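For readers who want to mechanize the summary above, here is a minimal Python sketch. It assumes the reconstructed forms of (20) and (21), takes R = log_e r (which is what makes the worked example above come out at s = 1.579), and solves (21) directly by fixed-point iteration instead of using the closed-form approximation (22); the function name and interface are illustrative only.

```python
import math

def fft_crossover(T0_over_T1, Q, r=2.0):
    """Sketch of the summary procedure, under the reconstructed forms of
    (20) and (21) and the assumption R = log_e(r). Returns (s, w_t, L_t)."""
    R = math.log(r)
    # Eq. (20): s = [Q*R + log_e(4*T0/(e*T1))] / 2
    s = 0.5 * (Q * R + math.log(4.0 * T0_over_T1 / math.e))
    # Eq. (21): w_t = s + 1 + log_e(w_t), solved by fixed-point iteration;
    # the paper's closed form (22) approximates this root to within 0.35%.
    w = s + 1.0
    for _ in range(100):
        w = s + 1.0 + math.log(w)
    # From the text: L_t - 1 = 2*T0*w_t/T1.
    L_t = 1.0 + 2.0 * T0_over_T1 * w
    return s, w, L_t

# Worked example from the text (Q = r = 2, T0/T1 = 4):
print(fft_crossover(4.0, 2.0))  # s ~ 1.579, w_t ~ 3.95, L_t ~ 32.6 (text: 32.65)
print(fft_crossover(4.0, 0.0))  # L_t ~ 24.8 (text quotes 24.87 via (22))
```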
TWO-DIMENSIONAL CASE

If a two-dimensional FFT is employed for two-dimensional nonrecursive filtering, and if the computation time per output sample is of the form

    T = N_1 N_2 (\log_r(N_1 N_2) + Q) / [(N_1 - L_1 + 1)(N_2 - L_2 + 1)]       (23)

as has been suggested in the literature [7], [8], then some of the preceding can be extended to the two-dimensional case. However, mainly on account of the huge dimensionality of two-dimensional filtering problems, various practical matters come up that strongly tend to invalidate (23) as a satisfactory approximation, as is exemplified by, e.g., [9]. Thus it is probably not worthwhile to extend the present methods to two-dimensional filtering.
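By way of illustration only, the per-output-sample cost (23) can be evaluated directly; the sketch below follows (23) exactly as recovered above, in which no separate per-operation time factor appears, so the result should be read in units of that per-operation time. The section sizes and filter dimensions in the example call are hypothetical.

```python
import math

def per_output_sample_cost_2d(N1, N2, L1, L2, Q, r=2.0):
    """Evaluate (23): 2-D FFT sectioning cost per valid output sample."""
    fft_work = N1 * N2 * (math.log(N1 * N2, r) + Q)
    valid_outputs = (N1 - L1 + 1) * (N2 - L2 + 1)
    return fft_work / valid_outputs

# Hypothetical example: 64 x 64 sections, 17 x 17 filter, Q = 1, radix 2.
print(per_output_sample_cost_2d(64, 64, 17, 17, 1.0))
```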
REMARKS
There are important cases where (1) is not a satisfactory
approximation. Gentleman and Sande [5] and Bergland [1]
showed that radix 4 and radix 8 algorithms offer certain advantages when N has factors of 4 and 8. If, for a given N, the shortest
possible computation time is sought regardless of complications,
then it is mandatory to exploit the existence of such factors. From
this point of view, to almost every value of N that is a power of 2,
there will correspond some distinctive combination of radix 2,
radix 4, and radix 8 algorithms. For example, the case
N = 512 = 8^3 is quite different from the case N = 256 when such
possibilities are taken into account. Moreover, (1) is not then a
satisfactory approximation, so the present results as stated do not
apply.
In connection with such matters two points can be made. First,
as Bergland [2] commented in connection with hardware FFT
implementations, "since the cost is proportional to the number of
options included, the use of only one basic operation in the
radix-2 algorithm in many cases offsets the additional computation required." Comparable observations can be made in connection with software FFT's.
Second, while the results presented in the "Optimum Parameters" section appear to require that (1) hold, such is not the
case with the Theorem in the "Q and Computation Time" section.
That Theorem could be modified so that its conclusions apply if in
(1) log_r (N) is replaced by some more general monotone increasing function, in order to take into account computation times
associated with more elaborate algorithms. It does not appear to
be worthwhile to pursue such possibilities here.
REFERENCES
[1] G. D. Bergland, "A fast Fourier transform algorithm using base 8 iterations," Math. Computation, vol. 22, pp. 275-279, Apr. 1968.
[2] G. D. Bergland, "Fast Fourier transform hardware implementations-An overview," IEEE Trans. Audio Electr., vol. AU-17, pp. 104-108, June 1969.
[3] R. C. Borgioli, "Fast Fourier transform correlation versus direct discrete time correlation," Proc. IEEE, vol. 56, pp. 1602-1604, Sept. 1968.
[4] E. O. Brigham, The Fast Fourier Transform. Englewood Cliffs, NJ: Prentice-Hall, 1974.
[5] W. M. Gentleman and G. Sande, "Fast Fourier transforms-For fun and profit," in 1966 Fall Joint Comput. Conf. Proc., Spartan, Washington, DC, 1966, pp. 563-578.
[6] H. D. Helms, "Fast Fourier transform method of computing difference equations and simulating filters," IEEE Trans. Audio Electr., vol. AU-15, pp. 85-90, June 1967.
[7] B. R. Hunt, "Minimizing the computation time for using the technique of sectioning for digital filtering of pictures," IEEE Trans. Comput., vol. C-21, pp. 1219-1222, Nov. 1972.
[8] R. M. Mersereau and D. E. Dudgeon, "Two-dimensional digital filtering," Proc. IEEE, vol. 63, pp. 610-623, Apr. 1975.
[9] R. E. Twogood, M. P. Ekstrom, and S. K. Mitra, "Optimal sectioning procedure for the implementation of 2-D digital filters," IEEE Trans. Circuits Syst., vol. CAS-25, pp. 260-269, May 1978.
Comment on "When to Use Random Testing"
PAUL B. SCHNECK
Abstract-This correspondence indicates a weakness in the criteria used to decide when random testing is practical. The use of average fan-in based on total gate count is an oversimplification and results in too low a threshold for use of random testing in lieu of a complete test of 2^N patterns. A modification is given to avoid this difficulty.

Index Terms-Fan-in, primary inputs.

Manuscript received November 20, 1978; revised January 29, 1979.
The author is with the Goddard Space Flight Center, Greenbelt, MD 20771.
In the above paper,¹ an algorithm is described for deciding when it is practical to use random test sequences instead of a complete test of 2^N patterns for an N-input circuit. The parameters of that algorithm are

N   the number of primary inputs,
L   the number of levels, and
n   the average fan-in, obtained by dividing the sum of the fan-in of each gate by the number of gates.

We note that the number and nature of the outputs do not affect the decision function proposed. Thus, where auxiliary outputs are made available the parameters affected are L and n. If the auxiliary outputs are not on the longest path of the circuit, then only n is affected. The average fan-in, n, decreases. The procedure leads to the incorrect conclusion that such a circuit is more amenable to random testing than its predecessor which did not offer auxiliary outputs.

For example, consider the circuit which realizes

    Y = (ABCD) + (EFGH).

Its parameters are L = 2, n = 3.33, N = 8. If we add an auxiliary output, the complement Y̅ of Y, the parameters become L = 2, n = 3.0, N = 8. The only change in parameters between these two circuits is the decrease in n. Using formula (2) of [1] we compute the number of patterns necessary,

    M(99 percent) = \ln(0.01) / \ln(1 - q^{(n-1)L}).

(We note that q is the root of q = 1 - q^n; for n = 2, q = 0.6180.)

For n = 3.0, L = 2: q = 0.682, M = 19.
For n = 3.3, L = 2: q = 0.697, M = 22.
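A short Python sketch may help in reproducing these figures. The fixed-point definition of q and the exponent (n - 1)L follow the formula as reconstructed above; the bisection solver, the function names, and the assumption that the two-output circuit adds one extra two-input gate for the complement are mine, not the paper's.

```python
import math

def fixed_point_q(n):
    """Root of q = 1 - q**n in (0, 1), found by bisection.
    For n = 2 this gives q = 0.6180, as noted above."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid - (1.0 - mid ** n) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def patterns_needed(n, L, confidence=0.99):
    """Return (q, M) with M = ln(1 - confidence) / ln(1 - q**((n - 1) * L))."""
    q = fixed_point_q(n)
    return q, math.log(1.0 - confidence) / math.log(1.0 - q ** ((n - 1) * L))

# Average fan-in of the example circuit: two 4-input ANDs and one 2-input OR,
# plus (for the two-output version) one assumed extra 2-input gate for Y-bar.
print((4 + 4 + 2) / 3)           # 3.33
print((4 + 4 + 2 + 2) / 4)       # 3.0
print(patterns_needed(3.33, 2))  # q ~ 0.697, M ~ 22
print(patterns_needed(3.0, 2))   # q ~ 0.682, M ~ 19
print(patterns_needed(4.0, 2))   # q ~ 0.725, M ~ 29 (maximum fan-in)
```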
Use of the maximum fan-in is a more conservative measure which avoids this difficulty.

¹ V. D. Agrawal, IEEE Trans. Comput., vol. C-27, pp. 1054-1055, Nov. 1978.
Author's Reply
VISHWANI D. AGRAWAL
P. B. Schneck is right in pointing out that maximum fan-in will
lead to a more conservative estimate of the number of random
patterns needed for complete testing. It is, however, useful to compare these estimates with practical cases. We will consider two
examples.
Manuscript received January 22, 1979.
The author is with Bell Laboratories, Murray Hill, NJ 07974.
Fig. 1. Circuits of first example.

As the first example, let us consider Schneck's circuits shown in Fig. 1. M(99 percent) is 22 and 19 for the single-output (Y) and the two-output (Y, Y̅) circuits, respectively. When maximum fan-in is used, we have, for both circuits,

    n = 4, L = 2, q = 0.725, M = 29.

Although this number is more conservative, it still represents the same order of magnitude (compared to 2^8) as the estimates based upon average fan-in. In several independent fault simulation runs with random patterns, the two circuits gave almost identical results. For either circuit, the number of random patterns for a complete test generation (eight to ten tests) was always in the range of 18 to 49. The conservative estimate (M = 29) appears better since it lies somewhat in the middle of this range. It should, however, be pointed out that the size of circuits in this example is too small for statistical accuracy.

Our second example is a 4-bit arithmetic logic unit [1] (Texas Instruments Type SN54181). For this circuit, N = 14, L = 6, and

average estimate:
    n_av = 2.41, q = 0.648, M = 179;
conservative estimate:
    n_max = 5, q = 0.755, M = 3911.

Compared to 2^N = 16384, the first estimate is about 1 percent while the second estimate is 24 percent. Indeed, each estimate leads to the same conclusion that for this circuit the random method is better than an exhaustive test. Several fault simulation runs showed that a complete test generation, yielding 33 or 34 tests, could be done with less than 200 random patterns. The conservative estimate is too high perhaps because very few gates have a fan-in of 5 while most of the gates have a fan-in close to the average fan-in. It was also seen that the actual number of random patterns, although of the same order as the average fan-in estimate, was often higher than this estimate. One possible reason is the presence of several long paths in the circuit. For example, if there are ten independent paths of the same length, then the number of patterns required to sensitize all of them with 99 percent probability is obtained as

    M(99.9 percent) = \ln(0.001) / \ln(1 - q^{(n-1)L}).
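The jump from 99 to 99.9 percent per path follows from assuming the ten longest paths are sensitized independently; a two-line sketch (the function name is mine) makes the arithmetic explicit.

```python
def per_path_confidence(overall, k):
    # Confidence needed per path so that k independent, equally long paths
    # are all sensitized with the stated overall confidence.
    return overall ** (1.0 / k)

print(per_path_confidence(0.99, 10))  # ~0.9990, hence M(99.9 percent)
```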
M(99 percent) should, therefore, be treated as the order of the
number of random patterns rather than an upper bound. Since the
estimate is statistical, its accuracy is better for larger circuits. The maximum fan-in estimate is more conservative but may be too pessimistic for large circuits.
REFERENCES
[1] The TTL Data Book for Design Engineers, First Edition, Texas Instruments,
Inc., Dallas, TX, p. 390.