IEEE TRANSACTIONS ON COMPUTERS, VOL. C-35, NO. 4, APRIL 1986

Accumulator Compression Testing

NIRMAL R. SAXENA AND JOHN P. ROBINSON, SENIOR MEMBER, IEEE
Abstract - A new test data reduction technique called accumulator compression testing (ACT) is proposed. ACT is an extension of syndrome testing. It is shown that the enumeration of errors missed by ACT for a unit under test is equivalent to the number of restricted partitions of a number. Asymptotic results are obtained for independent and dependent error modes. A comparison is made between signature analysis (SA) and ACT. Theoretical results indicate that with ACT better control over fault coverage can be obtained than with SA. Experimental results are supportive of this indication. Built-in self-test for processor environments may be feasible with ACT. However, for general VLSI circuits the complexity of ACT may be a problem, as an adder is necessary.

Index Terms - Built-in test, fault coverage, partitions, signature, syndrome, testing, VLSI test.

I. INTRODUCTION

THE problem of testing digital circuits has been compounded by the limited visibility and accessibility of VLSI. In a built-in test, a test generation circuit together with a response compressor can be fabricated on the same chip as the circuit under test. The circuit under test receives its inputs from a test source and test data are fed to a data compressor.

A physical event that may lead to incorrect logical behavior is called a fault. A logical difference between correct and observed behavior for a function is called an error. Thus a fault will result in an error if the circuit has no redundancy and a suitable input pattern is presented. The relationships between test sequences, faults, and error patterns may be complex. When exhaustive testing is done, i.e., all possible input combinations are applied, all detectable faults lead to error patterns. In this work we consider time compression with the primary focus on the number of error patterns which are missed because of the compression.

There are various compression techniques: signature analysis [1]-[5], transition count [6]-[9], syndrome testing [10]-[12], and Walsh coefficient testing [13]-[15]. As is well known [16], test compression can mask errors that could be detected. For signature compression the errors missed are all those error patterns that are multiples of the feedback polynomial [2]. This result follows from the use of a linear compression circuit, for which superposition of inputs holds. Counting methods, e.g., transition count [6]-[9], syndrome [10]-[12], and Walsh coefficient [13]-[15], will miss error patterns which leave the number of ones unchanged. Hence, the performance of counting-based compression has a strong dependence on the particular function under test.

Accumulator compression testing (ACT) is an extension of syndrome testing. An accumulator is incremented at each time instant by the number of ones observed to date. ACT brings the theory of partitions to compression testing. First the definition of ACT is presented and the problem of enumerating the number of errors missed is formulated as the problem of enumerating restricted partitions of a number. Next an asymptotic solution and some partition identities are obtained for the number of errors missed by ACT. A comparison between ACT and signature analysis is presented. The final section is a summary of results and conclusions.

II. ACCUMULATOR COMPRESSION
We proceed from the definition of the syndrome of a binary
sequence. The syndrome of a sequence is the number of ones
in the sequence. We denote the length of the test sequence by
m. Let a_1^0 a_2^0 \cdots a_m^0 be the binary test data sequence. Let a_1^1, a_2^1, \cdots, a_m^1 be the successive contents of a counter computing the syndrome. Note that a_m^1 is the syndrome. At time i the syndrome counter has content

a_i^1 = \sum_{1 \le j \le i} a_j^0 .    (1)

Fig. 1 represents these partial syndromes with respect to time. The accumulator syndrome s is defined as the sum of the partial syndromes,

s = \sum_{1 \le i \le m} a_i^1 .    (2)

Clearly, s represents the area of the syndrome function in Fig. 1. From (1) and (2) we obtain

s = \sum_{1 \le j \le m} (m + 1 - j) a_j^0 .    (3)

Expression (3) is analogous to an integral of an integral. The process can be extended, resulting in a series of larger and larger totals. The resulting collection appears to be a complete characterization; however, this set is complex and appears to have little current applicability to test compression.
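As a concrete illustration (ours, not part of the original text), the partial syndromes of (1), the accumulator syndrome of (2), and the closed form (3) can be sketched in Python:

```python
def syndrome(bits):
    """Syndrome: the number of ones in the sequence."""
    return sum(bits)

def accumulator_syndrome(bits):
    """Accumulator syndrome s per (2): sum of the partial syndromes a_i^1."""
    partial = 0   # a_i^1, the syndrome counter contents at time i
    s = 0         # running sum of partial syndromes
    for bit in bits:
        partial += bit
        s += partial
    return s

def accumulator_syndrome_closed(bits):
    """Closed form (3): s = sum over j of (m + 1 - j) * a_j^0."""
    m = len(bits)
    return sum((m + 1 - j) * a for j, a in enumerate(bits, start=1))

stream = [0, 1, 1, 0, 1]            # example test data a_1^0 ... a_m^0
assert accumulator_syndrome(stream) == accumulator_syndrome_closed(stream)
```

Both routines compute the same s; the second makes explicit that each one at position j contributes a weight of m + 1 - j.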
We next define accumulator compression.
Definition 1: Accumulator compression testing (ACT) is a compression scheme which treats the syndrome k and the accumulator syndrome s together as the reference signature.

Manuscript received December 1, 1985; revised December 20, 1985.
N. R. Saxena is with Hewlett Packard, Santa Clara, CA 95050.
J. P. Robinson is with the Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242.
IEEE Log Number 8607583.
0018-9340/86/0400-0317$01.00 © 1986 IEEE

Next the errors missed by ACT are characterized. For an output stream of length m with syndrome k, the number of
Fig. 1. Compression parameters versus time.

Fig. 2. Architecture for ACT.
error polynomials that preserve the syndrome k is given by
h = \binom{m}{k} - 1 .    (4)
From (4) it follows that the worst case situation would be
when k = m/2. From now on we will base our analysis on
this worst case situation.
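The count in (4) and its worst case at k = m/2 can be checked directly; a small sketch (our illustration):

```python
from math import comb

def syndrome_preserving_errors(m, k):
    # (4): all length-m streams with syndrome k, minus the fault-free one
    return comb(m, k) - 1

m = 8
counts = [syndrome_preserving_errors(m, k) for k in range(m + 1)]
assert max(counts) == counts[m // 2]   # worst case occurs at k = m/2
```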
All syndrome-untestable faults define a family of functions
F, distinct from the function under test, but having k ones.
By syndrome-untestable we mean any logic fault in a circuit
which preserves the syndrome as opposed to a single-stuck-at
fault model assumed in [10].
It follows from (3) that the accumulator syndrome is a sum
of k distinct numbers each less than or equal to m. The
number of syndrome-untestable errors that preserve the accumulator syndrome s is equal to the number of partitions of s
into exactly k distinct parts such that each part is less than or
equal to m. In the next section we examine this number.
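Under this correspondence the aliasing streams can be enumerated by brute force for small m; a sketch (the function name is ours):

```python
from itertools import combinations

def D_count(k, m, s):
    """Partitions of s into exactly k distinct parts, each <= m.
    Each part is the weight m + 1 - j from (3) at a position j
    where the stream has a one."""
    return sum(1 for ones in combinations(range(1, m + 1), k)
               if sum(m + 1 - j for j in ones) == s)

# All streams sharing both k and s alias under ACT; the number of
# undetected errors is therefore D_count(k, m, s) - 1 (the count
# includes the fault-free stream itself).
assert D_count(2, 4, 5) == 2   # 5 = 1 + 4 = 2 + 3
```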
Fig. 2 is a proposed architecture for ACT. This architecture
is feasible for built-in test compression for signal processors
or computers, as most of the test hardware is resident and there would therefore be little hardware overhead. The organization of Fig. 2 corresponds to adding vertical slices in Fig. 1.
Another implementation would be to add horizontal slices in
Fig. 1. This second approach would require k additions of the
various time intervals. Each time interval would correspond
to the occurrence of a one in the data to be compressed. This
interval could be obtained from a down counter incremented
by the system clock.
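The two organizations can be contrasted with a small behavioral sketch (an idealized model of ours, with no timing or hardware detail):

```python
def act_vertical(stream):
    """Vertical slices: add the syndrome counter into the accumulator
    on every clock."""
    syn = acc = 0
    for bit in stream:
        syn += bit
        acc += syn
    return syn, acc            # (k, s): the reference signature

def act_horizontal(stream):
    """Horizontal slices: one addition per observed one, taking the
    remaining interval length from a down counter."""
    m = len(stream)
    syn = acc = 0
    down = m                   # down counter clocked once per bit
    for bit in stream:
        if bit:                # a one triggers one addition
            syn += 1
            acc += down        # remaining interval length m + 1 - j
        down -= 1
    return syn, acc

stream = [1, 0, 1, 1, 0, 0, 1]
assert act_vertical(stream) == act_horizontal(stream)
```

The vertical form performs m additions, one per clock; the horizontal form performs only k additions but needs the down counter.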
III. ENUMERATION OF RESTRICTED PARTITIONS
In this section we present recurrence relations and identities for restricted partitions which are useful in enumerating
the errors missed by ACT. An asymptotic result is derived for
the number of errors missed by ACT under worst case assumptions. We refer the reader to [17] for a detailed discussion on the theory of partitions.
We will use the following notation:
N(n, r, t) = number of partitions of t into at most n parts such that each part is less than or equal to r.
P(n, r, t) = number of partitions of t into exactly n parts
such that each part is less than or equal to r.
D(n, r, t) = number of partitions of t into exactly n distinct parts such that each part is less than or equal to r.
Theorem 1: The number of binary sequences of length m
which have the same syndrome value k and the same accumulator syndrome s is given by
D(k, m, s) = N(k, m - k, s - k(k + 1)/2)

and s is bounded by

k(k + 1)/2 \le s \le m(m + 1)/2 - (m - k)(m - k + 1)/2 .
Proof: First we establish bounds on s. Consider a binary sequence of length m having syndrome k. Let s be the
accumulator syndrome of this binary sequence. If the binary
sequence is of the form 00... 11111 we get the minimum
value for s as
min s = k(k + 1)/2.
We call this minimum sequence 00 ... 011 ... 1 the base
sequence and we define the offset of a binary sequence as the
difference between its accumulator syndrome and this minimum, i.e.,
offset = s - k(k + 1)/2.
The offset will be greater than or equal to zero. If the sequence is of the form 11 ... 100 ... 0 we have the maximum accumulator sum s,

max s = m(m + 1)/2 - (m - k)(m - k + 1)/2 .
Next we prove the theorem. The left-hand expression
D(k, m, s) corresponds to the sum in (3). Each binary variable a_j^0 = 1 specifies a part in the partition of value m + 1 - j. Since the syndrome is k, there are exactly k binary
variables equal to 1 and, correspondingly, exactly k parts in
the partition. Finally the sum is equal to s and the correspondence is complete.
To show the right-hand side we return to the base sequence
idea and the offset of a sequence. Note that any faulty binary
sequence of length m and syndrome k will go undetected
by ACT if it has an offset of s - k(k + 1)/2. Any binary
sequence of length m and syndrome k can be generated from
a base sequence by moving the ones around. For example,
to generate 000... 101111, transport the leftmost one in
the base sequence to the new location. By doing so we
have created an offset of one. In general, this operation of
transporting ones will generate a set of numbers whose sum
is the offset of the resulting sequence. Clearly, the numbers
cannot exceed (m - k) and there can be no more than k
transportations. This reduces to N(k, m - k, s - k(k + 1)/
2). Thus,
D(k, m, s) = N(k, m - k,s - k(k + 1)/2).
Q.E.D.

In terms of restricted partitions, Theorem 1 can be restated as follows.

Theorem 2: The number of partitions of s into exactly k distinct parts, each part no greater than m, is equal to the number of partitions of (s - k(k + 1)/2) into at most k parts, each part no greater than (m - k).

Computer enumerations of these restricted partitions were based on the following relations:

N(n, k, t) = \sum_{0 \le i \le n} P(n - i, k, t)    (6)

P(n, k, t) = \sum_{1 \le i \le k} P(n - 1, k - i + 1, t - 1 - (i - 1)n) .    (7)

During the course of these enumerations the following relationships were observed and we state them without proof:

If k is odd:

N(k, k, \lceil k^2/2 \rceil) = P(k, k - 1, \lceil (k - 1)^2/2 \rceil) .    (8)

If k is even:

P(k, k, \lceil k^2/2 \rceil) = \sum_j P(k - j, k, \lceil k^2/2 \rceil)    (9)

where \lceil x \rceil is the least integer greater than or equal to x.

In computing ACT, the worst case is when the offset is k^2/2. Computation using recurrence relations for sufficiently large k will be very difficult. In the following section we develop an asymptotic formula.

IV. ASYMPTOTIC RESULT

From [17] it can be inferred that N(k, k, \lceil k^2/2 \rceil) is the coefficient of X^{\lceil k^2/2 \rceil} in the generating function given by

f(X) = (X^{k+1} - 1)(X^{k+2} - 1) \cdots (X^{2k} - 1) / [(X - 1)(X^2 - 1) \cdots (X^k - 1)] .    (10)

The coefficients of f(X) are symmetric about the coefficient(s) of X^{\lceil k^2/2 \rceil}. The coefficient of X^{\lceil k^2/2 \rceil} is maximum. This proves the correctness of the worst case assumption.

Knuth [18] suggested that we formulate the problem of finding N(k, k, \lceil k^2/2 \rceil) in terms of probability theory. With this suggestion we now proceed to get an asymptotic approximation for N(k, k, \lceil k^2/2 \rceil) which will upper bound the number of errors missed by ACT.

Removing the pole at X = 1 from f(X) we have

f(X) = \prod_{1 \le j \le k} (1 + X + \cdots + X^{2k-j}) / (1 + X + \cdots + X^{k-j}) .    (11)

Let

g(X) = f(X)/f(1) .    (12)

It follows that g(1) = 1, and g(X) behaves as a probability distribution function; g'(1) will give the mean and g''(1) - g'(1)^2 + g'(1) will give the variance. In order to compute g'(X), we first take the natural logarithm of g(X). We have

\ln g(X) = \sum_{1 \le j \le k} \ln(1 + X + \cdots + X^{2k-j}) - \sum_{1 \le j \le k} \ln(1 + X + \cdots + X^{k-j}) - \ln f(1) .    (13)

Differentiating (13) we get

g'(X)/g(X) = \sum_{1 \le j \le k} [ (1 + 2X + \cdots + (2k - j)X^{2k-j-1}) / (1 + X + \cdots + X^{2k-j}) - (1 + 2X + \cdots + (k - j)X^{k-j-1}) / (1 + X + \cdots + X^{k-j}) ] .    (14)

Following this procedure we obtain

mean = k^2/2
variance = k^2(2k + 1)/12 .    (15)

Using the values of mean, variance, and the probability distribution function, we get the following asymptotic formula:

N(k, k, \lceil k^2/2 \rceil) \approx \binom{2k}{k} \sqrt{6 / (\pi k^2 (2k + 1))} .    (16)

For test lengths up to 62 the enumeration of the restricted partitions was computed using the recursive expressions (6) and (7). These exact values were found to be slightly less than the asymptotic expression (16).

V. MEASURE OF EFFECTIVENESS OF ACT FOR DEPENDENT ERRORS

Expression (16) can be viewed as a measure of fault coverage for the worst case performance of ACT, assuming equally likely errors. Smith [2] has questioned the validity of any measure based on an independent error model for random logic circuits. Here we extend the results in [2] for ACT.

Theorem 3: For a length m (m = 2k) data stream, assuming dependent errors in positions separated by multiples of b, if ACT is performed, the number of errors missed is at most

\sqrt{3}\, b^3 4^{k/b} / (\pi k^2) .
Proof: In the worst case the k ones in the data stream
are uniformly distributed. (For the burst error case, the number of errors missed can be shown to be less than the expression above.) Under the dependent error assumption there are
m/b possible points that are equally likely to be affected.
These m/b points can be chosen in b ways. Suppose that the
ones in the data stream are so distributed that for every possible selection of m/b dependent points we get an offset of
(k/b)2/2. The accumulator syndrome of these points will be
of the following form:
\sum_{1 \le j \le m/b} (m/b + 1 - j) b d(j) + c
where c is some constant and
d(j) = binary number at position j
of the constructed binary sequence.
The number of errors missed by ACT for these dependent
points is equal to
N(k/b, k/b, (k/b )2/2).
So for all possible dependent locations, the total number of
errors missed by ACT is
b N(k/b, k/b, (k/b)^2/2) \approx \sqrt{3}\, b^3 4^{k/b} / (\pi k^2) .
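The recurrences (6) and (7), and the asymptotic formula (16) used in this and the previous section, can be exercised numerically; a sketch under the stated definitions (memoization and base cases are ours, added for practicality):

```python
from functools import lru_cache
from math import ceil, comb, pi, sqrt

@lru_cache(maxsize=None)
def P(n, k, t):
    """P(n, k, t): partitions of t into exactly n parts, each in 1..k,
    via recurrence (7), which classifies by the smallest part i."""
    if n == 0:
        return 1 if t == 0 else 0
    if k <= 0 or t < n or t > n * k:
        return 0
    return sum(P(n - 1, k - i + 1, t - 1 - (i - 1) * n)
               for i in range(1, k + 1))

def N(n, k, t):
    """N(n, k, t): partitions of t into at most n parts <= k, via (6)."""
    return sum(P(n - i, k, t) for i in range(0, n + 1))

def asymptotic(k):
    """(16): N(k, k, ceil(k^2/2)) ~ C(2k, k) * sqrt(6/(pi k^2 (2k+1)))."""
    return comb(2 * k, k) * sqrt(6 / (pi * k * k * (2 * k + 1)))

# The paper reports that the exact values run slightly below (16)
# for test lengths up to 62.
k = 10
print(N(k, k, ceil(k * k / 2)), round(asymptotic(k), 1))
```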
Conjecture: If the measure of fault coverage under the
independent error model is of order q(x) for a general compression scheme, then the measure of fault coverage under
dependent error model will be of order q(x/b), where q is some function over x, and x grows as O(m).
VI. COMPARISON OF ACCUMULATOR COMPRESSION TO SIGNATURE ANALYSIS
The number of errors missed by ACT can be estimated
from the distribution with parameters in (15). To count up to s ones requires log_2 s (rounded up) counter stages. ACT requires log_2 m reference bits for k and 2 log_2 m bits for s, yielding a total of 3 log_2 m reference bits.
A signature compression unit using a degree L polynomial
requires L reference bits and misses 2^{m-L} - 1 error patterns.
Using the signature compression as a reference we define the
equivalent length Le for a compression technique as the
length of a signature compression which detects the same
fraction of error patterns. Thus,
L_e = m - \log_2 |E|

where |E| is the number of errors missed by the compression
technique. Using the estimate (15) for ACT,

L_e \approx 2 \log_2 m + 24 (s - m^2/8)^2 / (m^3 \ln 2)    (18)
assuming m is much larger than 1. Le grows as the square of
the difference between s and its mean. An actual register, of
course, has an integer length.
For example, if m = 2^{12} then (15) gives a mean of 2^{21} for s when k = 2^{11}. Suppose that s was 7/8 of the mean, or s = 7 \cdot 2^{18}; then (18) evaluates to L_e = 58.6. ACT, in this case, requires 36 bits of reference.
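The arithmetic of this example can be reproduced directly from (18); a sketch:

```python
from math import log, log2

def equivalent_length(m, s):
    """(18): L_e ~ 2 log2(m) + 24 (s - m^2/8)^2 / (m^3 ln 2),
    assuming the worst-case syndrome k = m/2."""
    return 2 * log2(m) + 24 * (s - m * m / 8) ** 2 / (m ** 3 * log(2))

m = 2 ** 12
mean_s = m * m / 8             # = 2**21, the mean of s for k = 2**11
L_e = equivalent_length(m, 7 * mean_s / 8)
print(round(L_e, 1))           # 58.6, against 36 reference bits for ACT
```

At s equal to its mean the quadratic term vanishes and L_e collapses to 2 log_2 m, the point where ACT is weakest.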
For values of s close to m^2/8, more errors are missed by ACT than by a feedback shift register with the same number of reference bits. However, for modest differences (s - m^2/8), ACT will miss fewer errors.
The offset with accumulator compression of the output test
sequence for the circuit under test can be controlled by changing the input test sequence order. In a VLSI environment it
would be possible for the design aids system to search for the
best test generator sequence for a given set of functions to be
tested. If, for a particular sequence, we get an offset which is at least k\sqrt{k} away from the worst case offset of k^2/2, then ACT will exhibit high coverage.
It is interesting to note that even in the worst case, ACT
offers comparable performance to SA with respect to error
coverage for both independent and dependent error assumptions. Under the assumption of stuck-type and stuck-open
failures we can, by fault simulation, get the actual distribution of errors for a particular input sequence and formulate
this realistic error model as a quasidependent error model.
There appears to be a good correlation between the error
coverage obtained from the coverage expressions and the
actual experiments.
Theoretically we have established that fault coverage for
ACT can be controlled via the offset of the output data sequence. Input sequences can be changed by using different
feedback sequence generators, different starting points, various types of counters, and by interchanging variable assignments from sequence generator to the unit under test.
VII. RESULTS AND CONCLUSIONS
In Table I we report fault simulation results. The system
simulated consisted of a feedback shift register used as an
input pattern generator, the unit under test, followed by output data compression. In the case of signature compression,
the compression polynomial was the same as used in the input
FBSR.
For the ALU/FUNCTION GENERATOR circuit three
different primitive polynomials were used for the input generator; for the ADDER, four primitive polynomials. Each
row in Table I corresponds to a particular primitive polynomial. The test generator was a feedback shift register so
that the input patterns were pseudorandom. The test lengths
were in the range of 50-200. Several different initial conditions were used for each polynomial and the coverage was
circuits, ACT always exhibited superior performance over
either signature compression or syndrome compression or
both together (except of course when the simpler system
gave 100 percent coverage). The faults simulated were
single-gate stuck-at faults. The performance of ACT was
superior even when the offsets were approximately equal to
k2/2, the worst case.
TABLE I
FAULT COVERAGE SIMULATION RESULTS

Circuit Under Test                    SA        SY        SA + SY    ACT
ALU/FUNCTION GENERATOR (SN74181)
  polynomial 1                        94.12     86.68     100.00     100.00
  polynomial 2                        99.90     96.50      99.90     100.00
  polynomial 3                       100.00     98.80     100.00     100.00
4-BIT ADDER (SN7483)
  polynomial 1                        85.53     81.17      96.06      99.15
  polynomial 2                        81.17     88.62      98.87     100.00
  polynomial 3                        89.46     91.57      96.06     100.00
  polynomial 4                        84.55     95.36      98.45     100.00

All entries are fault coverage in percent. SA = signature; SY = syndrome; SA + SY = simultaneous signature and syndrome; ACT = accumulator compression testing.
ACT, as mentioned earlier, can find application in processor environments; in particular fault-tolerant processing
environments. Processors which are not active can perform a
self-test using ACT for units inside them. Routing and multiplexing costs would have to be considered as part of the
overhead associated with ACT. For general VLSI circuits,
the hardware overhead of ACT may be a significant cost and
would likely prohibit its use in the current technology.
Using matrix theory or Taylor's series, it can be shown
that, if higher order syndromes are defined, all syndromes up
to order at most m form a complete, if not orthogonal, set of
syndromes.
ACKNOWLEDGMENT
The authors would like to thank Professor D. E. Knuth
for his invaluable help in getting the asymptotic result. The
authors would also like to express their thanks for the helpful suggestions and perspective provided by the anonymous
Reviewers, without whose help the TRANSACTIONS would not
be possible.
REFERENCES
[1] N. Benowitz et al., "An advanced fault isolation system for digital
logic," IEEE Trans. Comput., vol. C-24, pp. 489-497, May 1975.
[2] J. E. Smith, "Measures of effectiveness of fault signature analysis," IEEE
Trans. Comput., vol. C-29, pp. 510-514, June 1980.
[3] J. L. Carter, "The theory of signature testing for VLSI," in Proc. 14th
ACM Symp. Theory Comput., 1982, pp. 289-296.
[4] S. Z. Hassan and E. J. McCluskey, "Increased fault coverage through
multiple signatures," in Proc. FTCS-14, June 1984, pp. 354-359.
[5] D. K. Bhavsar and B. Krishnamurthy, "Can we eliminate fault escape in
self testing by polynomial division (signature analysis)," in Proc. Int.
Test Conf., Oct. 1984, pp. 134-139.
[6] J. P. Hayes, "Transition count testing of combinational logic circuits,"
IEEE Trans. Comput., vol. C-25, pp. 613-620, June 1976.
[7] S. M. Reddy, "A note on testing logic circuits by transition counting,"
IEEE Trans. Comput., vol. C-26, pp. 313-314, Mar. 1977.
[8] J. P. Hayes, "Generation of optimal transition count tests," IEEE Trans.
Comput., vol. C-27, pp. 36-41, Jan. 1978.
[9] H. Fujiwara and K. Kinoshita, "Testing logic circuits with compressed
data," in Proc. FTCS-8, 1978, pp. 108-113.
[10] J. Savir, "Syndrome testable design of combinational circuits," IEEE
Trans. Comput., vol. C-29, pp. 442-550, June 1980.
[11] V. K. Agarwal, "Increased effectiveness of built-in-testing by output data
modification," in Proc. FTCS-13, June 1983, pp. 227-234.
[12] Y. Zorian and V. K. Agarwal, "Higher certainty of error coverage
by output data modification," in Proc. 1984 Int. Test Conf., 1984,
pp. 140-147.
[13] A.K. Susskind, "Testing by verifying Walsh coefficients," in Proc.
FTCS-11, 1981, pp. 206-208.
[14] T. C. Hsiao and S. Seth, "An analysis of the use of Rademacher-Walsh
spectrum in compact testing," IEEE Trans. Comput., vol. C-33,
pp. 934-937, Oct. 1984.
[15] D. M. Miller and J. C. Muzio, "Spectral fault signatures for single stuck-at faults in combinational networks," IEEE Trans. Comput., vol. C-33,
pp. 765-769, Aug. 1984.
[16] T. W. Williams and K. P. Parker, "Design for testability-A survey,"
IEEE Trans. Comput., vol. C-31, pp. 2-16, Jan. 1982.
[17] G. E. Andrews, "The theory of partitions," in Encyclopedia of Mathematics and its Applications, Vol. 2. Reading, MA: Addison-Wesley,
1976.
[18] D. E. Knuth, private communication, Nov. 1984.
Nirmal R. Saxena was born in Hyderabad, India,
on February 5, 1960. He received the B.S. degree in
1982 from Osmania University, India, and the
M.S.E.E. degree from the University of Iowa, Iowa
City, in 1984.
He is currently with Hewlett Packard, Santa
Clara, CA, and is an Honors Cooperative student
working toward the Ph.D. degree at the Center
for Reliable Computing, Stanford University,
Stanford, CA.
John P. Robinson (S'58-M'65-SM'77) received
the B.S. degree in electrical engineering from Iowa
State University, Ames, in 1960, and the M.S. and
Ph.D. degrees in electrical engineering from Princeton University, Princeton, N.J. in 1962 and 1966,
respectively.
From 1960 to 1962 he was at the RCA Laboratories, Princeton, NJ. During the summers of 1963 and 1964 he was at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY. Since 1965 he has been a member of the Department of Electrical and Computer Engineering, University of Iowa, Iowa City.