
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 50, NO. 6, DECEMBER 2001
Fault Tolerance in Systems Design in VLSI Using
Data Compression Under Constraints of Failure
Probabilities
Sunil R. Das, Fellow, IEEE, C. V. Ramamoorthy, Life Fellow, IEEE, Mansour H. Assaf, Emil M. Petriu, Fellow, IEEE,
and Wen-Ben Jone, Senior Member, IEEE
Abstract—The design of space-efficient support hardware
for built-in self-testing (BIST) is of critical importance in the
design and manufacture of VLSI circuits. This paper reports new
space compression techniques which facilitate designing such
circuits using compact test sets, with the primary objective of
minimizing the storage requirements for the circuit under test
(CUT) while maintaining the fault coverage information. The
compaction techniques utilize the concepts of Hamming distance,
sequence weights, and derived sequences in conjunction with the
probabilities of error occurrence in the selection of specific gates
for merger of a pair of output bit streams from the CUT. The
outputs coming out of the space compactor may eventually be fed
into a time compactor (viz. syndrome counter) to derive the CUT
signatures. The proposed techniques guarantee simple design
with a very high fault coverage for single stuck-line faults, with
low CPU simulation time, and acceptable area overhead. Design
algorithms are proposed in the paper, and the simplicity and
ease of their implementations are demonstrated with numerous
examples. Specifically, extensive simulation runs on ISCAS 85
combinational benchmark circuits with FSIM, ATALANTA, and
COMPACTEST programs confirm the usefulness of the suggested
approaches under conditions of stochastic independence as
well as dependence of single and double line output errors. A
performance comparison of the designed space compactors with
conventional linear parity tree space compactors as benchmark is
also presented, which demonstrates that the new circuits achieve an improved tradeoff between fault coverage and the CUT resources consumed compared with existing designs.
Index Terms—Built-in self-test (BIST), circuit under test
(CUT), derived sequences, detectable error probability estimates,
Hamming distance, optimal sequence mergeability, parity tree
space compactors, sequence weights, space compaction, time
compaction.
Manuscript received May 11, 2000; revised August 8, 2001. This work was
supported in part by the Natural Sciences and Engineering Research Council of
Canada under Grant A 4750.
S. R. Das was with the Department of Electrical Engineering and Computer
Sciences, Computer Science Division, University of California, Berkeley, CA
94720 USA. He is now with the School of Information Technology and Engineering, Faculty of Engineering, University of Ottawa, Ottawa, ON K1N 6N5,
Canada.
C. V. Ramamoorthy is with the Department of Electrical Engineering and
Computer Sciences, Computer Science Division, University of California,
Berkeley, CA 94720 USA.
M. H. Assaf and E. M. Petriu are with the School of Information Technology
and Engineering, Faculty of Engineering, University of Ottawa, Ottawa, ON
K1N 6N5, Canada.
W.-B. Jone is with the Department of Electrical and Computer Engineering
and Computer Science, University of Cincinnati, Cincinnati, OH 45221 USA.
Publisher Item Identifier S 0018-9456(01)10936-8.
I. INTRODUCTION
AS THE electronics industry continues to grow and the complexity of systems and the level of integration continue to increase, better and more effective testing methods that ensure reliable operation of chips integrated in complex digital systems are always needed. The concept of testing has a broad applicability, and finding highly sophisticated testing techniques that ensure correct system performance has become increasingly important [1]–[30]. Consider, for example, medical test and diagnosis instruments, airplane controllers, and other safety-critical systems that have to be tested before (off-line testing) and
during use (on-line testing). Another application where failure
can have severe economic consequences is real-time transactions processing. The testing process in all these circumstances
must be fast and effective to guarantee that such systems operate correctly. The cost of testing integrated circuits (ICs) is
rather prohibitive; it ranges from 35% to 55% of their total manufacturing cost. Besides, testing a chip is also time consuming,
taking up to about one-half of the total design cycle time [4].
The amount of time available for manufacturing, testing, and
marketing a product, on the other hand, continues to decrease.
Moreover, as a result of global competition, customers demand
lower cost and better quality products. Therefore, in order to
achieve this higher quality at lower cost, testing techniques have
to be improved. The conventional testing techniques of digital
circuits require application of test patterns generated by a test
pattern generator (TPG) to the circuit under test (CUT) and comparing the responses with known correct responses. However,
for large circuits, because of higher storage requirements for the
fault-free responses, the test procedures become rather expensive and thus alternative approaches are sought to minimize the
amount of needed storage [3].
Built-in self-testing (BIST) is a design methodology that has
the capability of solving most of the problems encountered in
testing digital systems. It combines the concepts of both built-in
test (BIT) and self-test (ST) in one, termed built-in self-test
(BIST). In BIST, test generation, test application, and response
verification procedures are accomplished through built-in hardware [2], [3], [5], [7], [8]. It allows different parts of a chip to be
tested in parallel, reducing the required testing time. It also reduces the need for external test equipment. As the cost of testing
is becoming the major component of the manufacturing cost of
a new product, BIST tends to reduce manufacturing, test, and
Fig. 1. BIST environment.
maintenance costs, and also improves diagnosis. Several companies, such as Motorola, AT&T, IBM, and Intel, have incorporated BIST in many of their products [6], [7]. AT&T, for example, has incorporated BIST into more than 200 of their chips. The three large PLAs and the microcode ROM in the Intel 80386 microprocessor were built-in self-tested [7]. The general-purpose Alpha AXP 21164 microprocessor and the Motorola 68020 microprocessor were also tested using BIST techniques [6], [7].
BIST is widely used to test embedded regular structures that
exhibit a high degree of periodicity such as memory arrays
(SRAMs, ROMs, FIFOs, and registers). These types of circuits do not require complex extra hardware for test generation and
response compaction. Also, including BIST in these circuits
can guarantee high fault coverage with zero aliasing. Unlike
regular circuits, random-logic circuits cannot be adequately
tested only with BIST techniques, since generating adequate
on-chip test sets using simple hardware is a difficult task. Moreover, since test responses generated
by random-logic circuits seldom exhibit regularity, it is extremely difficult to ensure zero-aliasing compaction. Therefore,
random-logic circuits are usually tested using a combination of
BIST, scan design techniques, and external test equipment.
In a typical BIST environment, as shown in Fig. 1, a TPG
sends its outputs to a CUT, and output streams from the CUT
are fed into a test data analyzer. A fault is detected if the test response sequence is different from the response of the fault-free circuit.
The test data analyzer is comprised of a response compaction
unit (RCU), a storage for the fault-free responses of the CUT,
and a comparator.
To reduce the amount of data represented by the fault-free and
the faulty CUT responses, data compression is used to create
signatures (short binary sequences) from the CUT and its corresponding fault-free circuit. Signatures are compared and faults
are detected if a match does not occur. BIST techniques may
be used during normal functional operating conditions of the
unit under test (on-line testing), as well as when a system is not
carrying out its normal functions (off-line testing). In the case
where detecting real-time errors is not that important, systems,
boards, and chips can be tested in off-line BIST mode. BIST
techniques use pseudorandom, or pseudoexhaustive TPGs, or
on-chip storing of reduced test sets. Today, testing logic circuits
exhaustively is no longer used, since only a few test patterns are
needed to ensure full fault coverage for single stuck-type faults
[6]. Reduced pattern test sets can be generated using algorithms
such as FAN, and others. Built-in test generators can often generate such reduced test sets at low cost, making BIST techniques
suitable for on-chip self-testing.
This paper focuses on the response compaction process of
built-in self-testing techniques, which translates into a process of
reducing the test response to a signature. Instead of comparing
bit-by-bit the fault-free responses to the observed outputs of the
CUT as in conventional testing methods, the observed signature
is compared to the correct one, thereby reducing the storage
needed for the correct circuit responses [3], [7], [30]. A block
diagram of the output data compaction scheme is given in Fig. 2,
where the test data analyzer consists of a compaction unit, a
comparator, and a storage (memory device). The compaction
unit, in its turn, can be divided into a space compaction unit and
a time compaction unit.
In general, n input sequences coming from a CUT are fed into a space compactor, providing m output streams of bits such that m < n; most often, test responses are compressed into one sequence (m = 1). Space compaction brings a solution for the problem of achieving high-quality BIST of complex chips without the necessity of monitoring a large number of internal test points; it reduces testing time and area overhead by merging test sequences coming from these internal test points into a single stream of bits. This single bit stream of length L is eventually fed into a time compactor, and a shorter one of length S < L is obtained at the output. The extra
logic representing the compaction circuit must be as simple as
possible, to be easily embedded within the CUT, and should
not introduce signal delays that affect either the test execution
time or the normal functionality of the circuit being tested. In
addition, the length of the signature must be as short as it can
be in order to minimize the amount of memory needed to store
the fault-free signatures. Also, signatures obtained from faulty
output responses and their corresponding fault-free signatures
should not be the same, which unfortunately is not always the
case.
A fundamental problem with compaction techniques is error
masking or aliasing [3], [7], [36], [37], which occurs when the
signatures of a faulty output response map to the fault-free signatures, usually calculated by identifying a good circuit, applying test patterns to it, and then having the compaction unit
generate the fault-free references. Aliasing causes loss of information, which affects the testing quality of BIST and reduces the
fault coverage (the number of faults detected, after compaction,
over the total number of injected faults). Several methods have
been suggested in the literature for computing the aliasing probability. The exact computation of this aliasing probability is
Fig. 2. Test response compaction scheme.
known to be an NP-hard problem [29]. In practice, high fault coverage, over 99%, is required; hence, any space compression technique that preserves a greater percentage of the error coverage information merits investigation.
The present paper deals with the general problem of designing and analyzing efficient space compaction techniques
for BIST of VLSI circuits using compact test sets. The techniques are based on identifying certain inherent properties
of the test output responses of the CUT, together with the
knowledge of failure probabilities. The major objective is
to develop methods for space compaction that are simple,
suitable for on-chip self-testing, require low area overhead,
and have little adverse impact on circuit performance. With
that objective in view, the optimal mergeability criteria of
output sequences are developed utilizing concepts of Hamming
distance, sequence weights, and derived sequences for a pair of
outputs, and the effects of failure probabilities on the optimal
mergeability criteria of output sequences are analyzed as well.
The techniques proposed achieve a high fault coverage for
single stuck-line faults, with low CPU simulation time, and
acceptable area overhead, as evident from extensive simulation
results on ISCAS 85 combinational benchmark circuits, under
condition of both stochastic independence and dependence of
single and double line errors.
The material in the paper is organized as follows: Section I
provides a brief introduction of the conventional methods for
test generation of digital ICs, with particular emphasis on BIST.
In Section II, several existing space and time compaction techniques are briefly discussed. Section III introduces certain important properties of circuit responses for the purpose of developing optimal mergeability criteria for the design of space
compactors on the assumption of stochastic independence of
single and double line errors. Section IV describes the design
of compaction trees for multi-output circuits based on proposed
optimal mergeability criteria which assume stochastic dependence of single and double line errors. Section V gives experimental results based on extensive simulations of the ISCAS
85 combinational benchmark circuits with FSIM, ATALANTA,
and COMPACTEST, while Section VI provides concluding remarks, emphasizing effectiveness, advantages, and limitations
of the proposed design algorithms.
II. OVERVIEW OF TEST COMPACTION TECHNIQUES
In this section, we review some of the important test compaction techniques for BIST that have been proposed in the literature. We describe them briefly, concentrating on some of their
relevant properties like the area overhead, fault coverage, signature length, and error masking probability. We first briefly
examine some time compaction methods like ones counting,
syndrome testing, transition counting, signature analysis, and
others, though our concern in this paper is space compaction.
Ones counting [12] uses as its signature the number of ones in
the binary circuit response stream. The hardware that represents
the compaction unit consists of a simple counter, and is independent of the CUT; it only depends on the nature of the test
response. Signature values do not depend on the order in which
the input test patterns are applied to the CUT. In syndrome counting [13], all the 2^n input patterns are exhaustively applied to an n-input combinational circuit. The syndrome S, which is given by the normalized number of 1s in the response stream, is defined as S = K/2^n, K being the number of minterms of the function being implemented by the single-output CUT. Any switching function can be so realized that all its single stuck-line
faults are syndrome-testable. Transition counting [14] counts the number of times the output bit stream changes from 1 to 0 and vice versa. In transition counting, the signature length is less than or equal to ceil(log2 m), m being the length of a response stream. The error masking probability takes high values when the signature value is close to (m - 1)/2, and low values when it is close to 0 or m - 1. In Walsh spectral analysis [15], switching functions are
represented by their spectral coefficients which are compared to
known correct coefficients values. In a sense, in this method, the
truth table of the given switching function is basically verified.
The process of collecting and comparing a subset of the complete set of Walsh functions is described as a mechanism for data
compaction. The use of spectral coefficients promises a higher percentage of error coverage, whereas the resulting higher area overhead for generating them is deemed a disadvantage [16].
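The counting-based time compactors above (ones counting, syndrome counting, and transition counting) can be sketched in a few lines of Python; the function names and the sample response stream are illustrative, not taken from the paper:

```python
def ones_count(response):
    """Ones-counting signature: the number of 1s in the response stream."""
    return sum(response)

def syndrome(response):
    """Syndrome S = K / 2^n for an exhaustive test of a single-output CUT:
    the ones count normalized by the response length 2^n."""
    return sum(response) / len(response)

def transition_count(response):
    """Transition-counting signature: number of 0->1 and 1->0 changes."""
    return sum(a != b for a, b in zip(response, response[1:]))

# Hypothetical response of a 3-input CUT to all 2^3 = 8 input patterns:
r = [0, 1, 1, 0, 1, 0, 0, 1]
print(ones_count(r))        # 4
print(syndrome(r))          # 0.5  (K = 4 minterms)
print(transition_count(r))  # 5
```

As the text notes, none of these signatures depends on the CUT structure, only on the response stream itself.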
In parity checking [1], the response bit stream coming from a CUT is reduced to a signature of length 1 bit. The single-bit signature has a value that equals the parity of the test response sequence. Parity checking detects all errors
involving an odd number of bits, while faults that give rise to an
even number of error bits are not detected. This method is relatively ineffective since a large number of the possible response
bit streams from a faulty circuit will result in the same parity
as that of the correct bit stream. All single stuck-line faults in
fanout-free circuits are detected by the parity check technique.
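The odd/even behavior of the 1-bit parity signature can be demonstrated directly; the bit streams below are illustrative:

```python
from functools import reduce

def parity_signature(response):
    """1-bit signature: the parity (XOR) of all bits in the response stream."""
    return reduce(lambda a, b: a ^ b, response, 0)

good       = [1, 0, 1, 1, 0, 1]
odd_error  = [1, 0, 0, 1, 0, 1]  # one bit in error: parity flips, detected
even_error = [1, 1, 0, 1, 0, 1]  # two bits in error: parity unchanged, aliased

print(parity_signature(good) != parity_signature(odd_error))   # True
print(parity_signature(good) == parity_signature(even_error))  # True
```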
Signature analysis [17] is probably the most popular time compaction technique currently available. It uses linear feedback
shift registers (LFSRs) consisting of flip-flops and exclusive-or
gates. The signature analysis technique is based on the concept
of cyclic redundancy checking (CRC). LFSRs are used for generating pseudorandom input test patterns, and for response compaction as well. The nature of the generated sequence patterns is
determined by the LFSR's characteristic polynomial G(x) as defined by its interconnection structure. A test-input sequence P(x) is fed into the signature analyzer, which divides it by the characteristic polynomial G(x) of the signature analyzer's LFSR. The remainder R(x) obtained by dividing P(x) by G(x) over a Galois field, such that P(x) = Q(x)G(x) + R(x), represents the state of the LFSR, Q(x) being the corresponding quotient. In other words, R(x) represents the observed signature. Signature analysis involves comparing the observed signature R(x) to a known fault-free signature R0(x). An error is detected if these two signatures differ. Suppose that P0(x) is the correct response and P(x) = P0(x) + E(x) is the faulty one, where E(x) is an error polynomial; it can be shown that aliasing occurs whenever E(x) is a multiple of G(x).
Different methods for computing and reducing the aliasing
probability in signature analysis have been proposed, viz. the
signature analysis model proposed by Williams et al. [18] which
uses Markov chains and derives an upper bound on the aliasing
probability in terms of the test length and probability of an error
occurring at the output of the CUT. Another approach to the
computation of aliasing probability is presented in [19]. An error
pattern in signature analysis causes aliasing, if and only if, it is
a codeword in the cyclic code generated by the LFSR’s characteristic polynomial. Unlike other methods, the fault coverage in
signature analysis may be improved without changing the test
set. This can be done by playing with the length of the LFSR, or
by using a different characteristic polynomial G(x). As demonstrated in [20], for short test lengths, signature analysis detects all single-bit errors. However, there is no known theory that characterizes fault detection in signature analysis.
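The polynomial division underlying signature analysis can be simulated bit-serially. The sketch below assumes an internal-XOR LFSR with an illustrative characteristic polynomial G(x) = x^3 + x + 1 (not one named in the paper) and shows that an error polynomial E(x) that is a multiple of G(x) aliases, while a single-bit error does not:

```python
def lfsr_signature(bits, taps, n):
    """Remainder of the input polynomial modulo the degree-n characteristic
    polynomial whose exponents are listed in `taps` (e.g. {3, 1, 0} for
    G(x) = x^3 + x + 1), computed serially as an LFSR would."""
    state = 0
    for b in bits:                       # bits arrive highest degree first
        carry = (state >> (n - 1)) & 1   # coefficient of x^n after the shift
        state = ((state << 1) | b) & ((1 << n) - 1)
        if carry:                        # subtract G(x) over GF(2)
            for t in taps:
                if t < n:
                    state ^= 1 << t
    return state

G = {3, 1, 0}
good    = [1, 0, 1, 1, 0, 0, 1]          # P(x)
aliased = [1, 1, 1, 0, 1, 0, 1]          # P(x) + x^2 * G(x): same remainder
single  = [1, 0, 1, 1, 0, 0, 0]          # P(x) + 1: remainder differs

print(lfsr_signature(good, G, 3) == lfsr_signature(aliased, G, 3))  # True
print(lfsr_signature(good, G, 3) != lfsr_signature(single, G, 3))   # True
```

The aliased stream differs from the good one in exactly the positions of x^2·G(x), illustrating that a fault escapes detection precisely when its error polynomial is a codeword of the cyclic code generated by G(x).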
Testing using two different compaction schemes in parallel
has also been extensively investigated. The combination of
signature analysis and transition counting is analyzed in [21],
which shows that using simultaneously both techniques leads
to a very small overlap in their error masking. As a result
of using two different compaction schemes in parallel, the
fault coverage is improved, while the fault signature size and
hardware overhead are greatly increased.
We will now examine several space compaction techniques
that have been proposed in the literature, and some of which
are in actual industrial use. Some of the common space compression techniques include the parity tree space compaction,
hybrid space compression (HSC), dynamic space compression
(DSC), quadratic functions compaction (QFC), programmable
space compaction (PSC), and cumulative balance testing.
The parity tree compactor circuits [22], [23] are composed
of only exclusive-or gates. An exclusive-or gate has very
good signal-to-error propagation properties which are quite
desirable for space compression. Functions realized by parity tree compactors are of the form f = x1 ⊕ x2 ⊕ · · · ⊕ xn. The parity tree space compactor propagates all errors that appear
on an odd number of its inputs. Thereby, errors that appear on
an even number of parity tree circuit inputs are masked. As
experimentally demonstrated, most single stuck-at line faults
are detected using pseudorandom input TPGs and reduced test
sets.
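The odd/even propagation property of a parity tree is easy to demonstrate; the output vectors below are illustrative, not taken from any benchmark circuit:

```python
from functools import reduce

def parity_tree(outputs):
    """XOR-tree space compactor: merges the CUT's output lines into one bit
    for each applied test pattern."""
    return reduce(lambda a, b: a ^ b, outputs)

fault_free  = [1, 0, 1, 0]  # CUT outputs for one test pattern
odd_errors  = [0, 0, 1, 0]  # error on one line: propagated to the output
even_errors = [0, 1, 1, 0]  # errors on two lines: masked

print(parity_tree(odd_errors)  != parity_tree(fault_free))  # True
print(parity_tree(even_errors) == parity_tree(fault_free))  # True
```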
The HSC technique, originally proposed by Li and Robinson [24], uses AND, OR, and EXCLUSIVE-OR logic gates as output compression tools to compress the multiple outputs of the CUT into a single line. The compaction tree is constructed based on the detectable error probability estimates. A modified version of the HSC method, DSC, was subsequently proposed by Jone and Das [25]. Instead of assigning static values for the probabilities of single errors and double errors, the DSC method dynamically estimates those values based on the CUT structure during the computation process. The values of these probabilities are determined based on the number of single lines and shared lines connected to an output. A general theory to predict the performance of the space compression techniques was also developed in [25]. Experimental results show that the information loss, combined with the syndrome counting as time compactor, is between 0% and 12.7%.
DSC was later improved in [26], in which some circuit-specific information is used to calculate the probabilities. However, neither HSC nor DSC provides an adequate measure of fault coverage, because they both rely on estimates of error detection probabilities.
QFC [27] uses quadratic functions to construct the space compaction circuit, and has been shown to reduce aliasing. In QFC, the observed output responses of the CUT are processed and compressed in a serial fashion based on a quadratic function of the type f = X1·Y1 + X2·Y2 + · · · + Xr·Yr (modulo 2), where Xi and Yi are blocks of length k, for 1 ≤ i ≤ r.
A new approach, PSC, has recently been proposed for designing
low-cost space compactors that provide high fault coverage
[28]. In PSC, a circuit-specific space compactor is designed
to increase the likelihood of error propagation. However, PSC
does not guarantee zero aliasing. A compaction circuit that
minimizes aliasing and has the lowest cost can only be found by
exhaustively enumerating all 2^(2^n) n-input Boolean functions, where n represents the number of primary outputs of the CUT.
A new class of space compactors based on parity tree circuits was recently proposed by Chakrabarty and Hayes [7]. The
method is based on multiplexed parity trees (MPTs), and achieves zero aliasing. MPTs perform space compaction of test responses by combining the error propagation properties of multiplexers and parity trees through multiple time-steps. The authors show that the associated hardware overhead is moderate,
and a very high fault coverage is obtained for faults in the CUT
including even those in the compactor.
In this paper we present new methodologies for test output compression in space in the context of BIST of digital ICs. The suggested techniques make use of AND (NAND), OR (NOR), and EXCLUSIVE-OR (EXCLUSIVE-NOR) gates as appropriate to construct an output compaction tree that compresses the multiple outputs of the CUT to a single line. The actual gate selection is done on the basis of optimal mergeability criteria that were developed utilizing the concepts of Hamming distance, sequence weights, and derived sequences, as will be obvious from the discussions that follow.
III. DESIGNING COMPACTION TREE BASED ON SEQUENCE
CHARACTERIZATION AND STOCHASTIC INDEPENDENCE OF
ERRORS
The principal idea in space compaction is to compress the functional test outputs of the CUT, possibly into one single test output line, to derive the CUT signature without sacrificing too much information in the process. Generally, space compression has been accomplished using EXCLUSIVE-OR gates in cascade or in a tree structure [22], [25], [26], [29]. We will adopt a combination of both cascade and tree structures (cascade-tree) for our framework with AND (NAND), OR (NOR), and EXCLUSIVE-OR (EXCLUSIVE-NOR) operators. The logic function to be selected to build the compaction tree will be determined by the characteristics of the sequences which are inputs to the gates, based on some optimal mergeability criteria. We also assume a syndrome counter [13] at the output of the two-input gates of the compaction tree, as shown in Fig. 3. The basic theme of the proposed approaches is to select a suitable gate to merge two candidate output lines of the CUT under conditions of stochastic independence and stochastic dependence of single and double line errors, using the sequence characterization developed in the paper. In the following sections, the mathematical basis of the proposed approaches is first given, with the introduction of appropriate notations and terminologies.
A. Mathematical Basis for the Proposed
Approaches—Hamming Distance, Sequence Weights,
and Derived Sequences
Fig. 3. Syndrome counter used as time compressor in test data output compaction.
Let (S1, S2) represent a pair of output sequences of a CUT of length L, where the length is the number of bit positions in S1 and S2. Let d(S1, S2), or simply d, represent the Hamming distance between S1 and S2 (the number of bit positions in which S1 and S2 differ).
Definition 1: The 1-weight, denoted by w1(S), of a sequence S, is the number of 1s in the sequence. Similarly, the 0-weight, denoted by w0(S), of a sequence S, is the number of 0s in the sequence.
Example 1: Consider an output sequence pair (S1, S2) with S1 = … and S2 = …. The length of both output streams is L. The Hamming distance between S1 and S2 is d(S1, S2). The 1-weights and 0-weights of S1 and S2 are w1(S1) = …, w1(S2) = …, w0(S1) = …, and w0(S2) = …, respectively.
Property 1: For any sequence S of length L, it can be shown that w1(S) + w0(S) = L.
For the output sequence S1 given in Example 1, we have found that w1(S1) = … and w0(S1) = …. Therefore, it is obvious that w1(S1) + w0(S1) = L.
Definition 2: Consider an output sequence pair (S1, S2) of equal length L. Then the sequence pair derived by discarding the bit positions in which the two sequences differ (indicated by a dash -) is called its derived sequence pair (S1d, S2d).
In the rest of the paper, we will denote the derived sequence of a sequence S by Sd, its 1-weight by w1(Sd), and its 0-weight by w0(Sd).
Example 2: For (S1, S2) given in Example 1, the derived pair is (S1d, S2d), where S1d = S2d = …. The 1-weights and 0-weights of the derived pair (S1d, S2d) are w1(S1d) = …, w1(S2d) = …, w0(S1d) = …, and w0(S2d) = …, respectively.
Property 2: For any derived sequence pair (S1d, S2d), we have w1(S1d) = w1(S2d) and w0(S1d) = w0(S2d).
As depicted in Example 2, for the same output sequence pair given in Example 1, we have shown that w1(S1d) = w1(S2d) and w0(S1d) = w0(S2d).
By the above property, when no ambiguity arises, we will denote 1-weights and 0-weights for the derived sequence pair by simply w1 and w0, respectively. The length of the derived sequence pair will be denoted by l, where l = w1 + w0 = L - d
TABLE I
THE DETECTABLE AND MAXIMUM DETECTABLE ERRORS USING DIFFERENT TYPES OF GATES
. Also, since d(S1d, S2d) is always zero, we will simply use Sd to denote the pair (S1d, S2d). That is, for (S1, S2) in Example 1, S1d = …, S2d = …, and Sd = …. Therefore, w1 = … and w0 = ….
Property 3: For every distinct pair of output sequences (S1, S2) of equal length at the output of a CUT, the corresponding derived pair (S1d, S2d) is distinct.
Two derived sequence pairs may have the same length, but they are still distinct and not identical.
Property 4: Let (S1d, S2d) and (S3d, S4d) be two derived sequence pairs, respectively, of the original output stream pairs (S1, S2) and (S3, S4), having the same length l. They are identical, that is, (S1d, S2d) = (S3d, S4d), if and only if, S1d = S3d and S2d = S4d.
Consider S1 = …, S2 = …, S3 = …, and S4 = … as four sequence streams at the output of a CUT. Let them be grouped to form two distinct output pairs (S1, S2) and (S3, S4), and let (S1d, S2d) and (S3d, S4d) be, respectively, their corresponding derived pairs, where S1d = … and S3d = …. In this case, both the derived sequence pairs have the same length l, but they are not identical.
In general, however, it is not expected that any two distinct
pairs of sequences at the output of a CUT will be identical, and
hence the possibility of the corresponding derived pairs being
identical is also rather remote.
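The quantities of Definitions 1 and 2 and Properties 1 and 2 can be computed directly; the sequences below are illustrative (they are not the paper's Example 1):

```python
def hamming(s1, s2):
    """Hamming distance: number of bit positions in which s1 and s2 differ."""
    return sum(a != b for a, b in zip(s1, s2))

def weights(s):
    """Return (1-weight, 0-weight) of a sequence."""
    return s.count(1), s.count(0)

def derived(s1, s2):
    """Derived sequence pair: discard the bit positions where s1, s2 differ."""
    kept = [(a, b) for a, b in zip(s1, s2) if a == b]
    return [a for a, _ in kept], [b for _, b in kept]

s1 = [1, 0, 1, 1, 0, 0]
s2 = [1, 1, 1, 0, 0, 1]
d = hamming(s1, s2)          # 3
s1d, s2d = derived(s1, s2)   # both equal [1, 1, 0]
w1, w0 = weights(s1d)        # 2, 1

assert weights(s1) == (3, 3)      # Property 1: w1(S) + w0(S) = L
assert s1d == s2d                 # Property 2: the derived sequences coincide
assert len(s1d) == len(s1) - d    # l = L - d
```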
B. Optimal Mergeability Criteria and Gate Selection
Definition 3: The maximum expectation of error, E, for two-input logic functions, given two input sequences of length L, is defined as the sum of all single line errors (2L) and all double line errors (L).
This definition assumes stochastic independence of single
and double line errors, and thus includes all of the possibili-
ties of error occurrence. The concept of stochastic independence
[31] is that, given a set of events, these events are individually
determined by properties that are in no way interrelated. Mathematically, we define two events A and B to be stochastically independent if P(AB) = P(A)P(B), P denoting the probability of an event. We agree to accept this definition even if P(A) = 0 or P(B) = 0. In a practical situation, two events may be taken to be stochastically independent if there is no causal relation, either temporal or spatial, between them, that is, if they are causally independent [32]. We will later enlarge the scope of our discussion, deviating from the assumption of independence of single and double line errors at the CUT output, to include the situation where their appearance will be dependent on some causal relation, so that we can assign distinct probabilities to their occurrence in different lines.
Now, let s_G be the number of single line errors and d_G be the number of double line errors detected at the output of a gate G. For the output sequence pair (S1, S2) given in Example 1, the maximum expectation of error, E, for two-input logic functions is 3L.
Definition 4: The maximum detectable error, D_G, for two-input logic functions, given an input sequence pair (S1, S2) of length L, when the gate type is G, is defined as D_G = s_G + d_G.
Table I shows maximum detectable errors and detectable errors for different types of gates. For instance, D_OR = s_OR + d_OR, where s_OR and d_OR are, respectively, the single and double line detectable errors when using OR or NOR gates.
Theorem 1: For any derived output sequence pair (S1d, S2d) of length l, if an EXCLUSIVE-OR (EXCLUSIVE-NOR) gate is used for merger, the total number of errors detected will be 2l, being 2l single line errors and no double line errors (assuming stochastic independence of single and double line errors).
Proof: The theorem follows from the basic properties of EXCLUSIVE-OR (EXCLUSIVE-NOR) gates.
For our (S1d, S2d) of Example 1, we have w1 = … and w0 = …. Therefore, the total number of errors detected at the output of an EXCLUSIVE-OR (EXCLUSIVE-NOR) gate is 6 (all errors are detected).
Theorem 2: For any derived output sequence pair (S1d, S2d) of length l, if an OR (NOR) gate is used for merger, the total number of errors detected will be 2w0 + l, being 2w0 single line errors and l double line errors (assuming stochastic independence of single and double line errors).
Proof: The theorem follows from the basic properties of OR (NOR) gates.
Again, for our (S1d, S2d) of Example 1, we have w1 = … and w0 = …. Therefore, the total number of errors detected at the output of an OR (NOR) gate is 12 (all errors are detected).
Theorem 3: If an output sequence pair (S1, S2) of length L and Hamming distance d is merged with an AND (NAND) gate, the maximum errors detected, D_AND = d + 3w1 + w0, will be as follows:
d single line errors
3w1 single and double line errors
w0 double line errors
Proof: Since the Hamming distance is d, if one sequence has a 0, the other has a 1 in the corresponding position for all the d bits. If an AND (NAND) gate is used for merger, only the d single errors that make both sequences 1 will be detectable, and thus d single line errors are detected. Similarly, 3w1 single line and double line errors are detected, if w1 is the 1-weight of the derived sequence pair. Likewise, if w0 is the 0-weight of the derived sequence pair, only the w0 double line errors making both sequences 1 in the corresponding positions will be detected.
Theorem 4: If an output sequence pair of length n and Hamming distance d is merged using an OR (NOR) gate, the maximum errors detected, E_OR, will be as follows:
d + 2w0 single line errors;
d + 3w0 + w1 single and double line errors;
w0 + w1 double line errors.
Proof: As before, since the Hamming distance is d, if one sequence has a 0, the other has a 1 in the corresponding position for all the d bits. If an OR (NOR) gate is used for merger, only the d single errors that make both sequences 0 will be detectable, and hence d single line errors are detected. Similarly, 2w0 single line and w0 double line errors are detected, if w0 is the 0-weight of the derived sequence pair. Likewise, if w1 is the 1-weight of the derived sequence pair, only the w1 double line errors making both sequences 0 in the corresponding positions will be detected.
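As an illustration of the sequence characterization, the following Python sketch computes the Hamming distance d, the 0-weight w0, and the 1-weight w1 of an output sequence pair, together with the maximum detectable error counts per merging gate. The counts follow our reading of Theorems 3 and 4 (d + 2w1 single and w1 + w0 double line errors for AND/NAND, symmetrically for OR/NOR, and all 2n single line errors for XOR/XNOR); all function names are ours.

```python
def characterize(a: str, b: str):
    """Hamming distance and agreement weights of two equal-length bit streams."""
    assert len(a) == len(b)
    d = sum(x != y for x, y in zip(a, b))          # differing positions
    w1 = sum(x == y == '1' for x, y in zip(a, b))  # 1-weight of the derived pair
    w0 = sum(x == y == '0' for x, y in zip(a, b))  # 0-weight of the derived pair
    return d, w0, w1

def max_detected(a: str, b: str):
    """Maximum single-plus-double line errors detected per merging gate."""
    n = len(a)
    d, w0, w1 = characterize(a, b)
    return {
        'AND/NAND': d + 3 * w1 + w0,  # d + 2*w1 single, w1 + w0 double
        'OR/NOR':   d + 3 * w0 + w1,  # d + 2*w0 single, w0 + w1 double
        'XOR/XNOR': 2 * n,            # all 2n single line errors, no doubles
    }

counts = max_detected('110100', '101100')
```

Note that n = d + w0 + w1, so the AND/NAND and OR/NOR totals can also be written as n + 2w1 and n + 2w0, the forms used in Theorems 11 and 12.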
The undernoted theorems develop optimal mergeability criteria for output sequences under conditions of stochastic independence of single and double line errors in a CUT.
Theorem 5: An output sequence pair of length n and Hamming distance d is optimally mergeable with an XOR (XNOR) gate, if the maximum number of errors it detects, which is 2n, is greater than or equal to the maximum number of errors detected by an AND (NAND) gate, d + 3w1 + w0, or by an OR (NOR) gate, d + 3w0 + w1, when used for merger.
Proof: The theorem is a direct consequence of Theorems 3 and 4.
Theorem 6: For any output sequence pair of length n and Hamming distance d, an AND (NAND) gate or an OR (NOR) gate may be used for optimal merger, if the maximum number of errors detected exceeds 2n.
Proof: For sequences of length n, the maximum expectation of error is 3n (2n single line and n double line errors). An XOR (XNOR) gate can detect only the 2n single line errors. Hence, if the error detection count lies between 2n and 3n, we need to use AND (NAND) or OR (NOR) gates to satisfy optimal mergeability, based on whether w1 or w0 has the greater value.
Theorem 7: An AND (NAND) gate may be selected for optimally merging an output sequence pair of length n with Hamming distance d, if w1 > w0, and the errors detected are
d + 2w1 single line errors;
d + 3w1 + w0 single and double line errors;
w1 + w0 double line errors.
Proof: Since E_AND is d + 3w1 + w0 and E_OR is d + 3w0 + w1, if an AND (NAND) gate is preferable to an OR (NOR) gate, then obviously d + 3w1 + w0 > d + 3w0 + w1, or 2w1 > 2w0, or w1 > w0. Further, if w1 > w0, the total number of errors detected by an AND (NAND) gate exceeds that detected by an OR (NOR) gate, and then evidently an AND (NAND) gate is selected for optimal merger.
Theorem 8: An OR (NOR) gate may be selected for optimally merging an output sequence pair of length n with Hamming distance d, if w0 > w1, and the errors detected are
d + 2w0 single line errors;
d + 3w0 + w1 single and double line errors;
w0 + w1 double line errors.
Proof: The theorem follows exactly in the same way as Theorem 7.
Corollary 1: If, however, w0 = w1, either an AND (NAND) gate or an OR (NOR) gate may be used for optimal merger.
Theorem 9: An output sequence pair of length n and Hamming distance d is optimally mergeable with an AND (NAND) gate, if d + 3w1 + w0 is maximized over all m(m − 1)/2 sequence pairs for an m-output CUT.
Proof: Since d + 3w1 + w0, considering all m(m − 1)/2 output sequence pairs for an m-output CUT, has the greatest value, obviously an AND (NAND) gate must be selected for optimal merger.
Theorem 10: An output sequence pair of length n and Hamming distance d is optimally mergeable with an OR (NOR) gate, if d + 3w0 + w1 is maximized over all m(m − 1)/2 sequence pairs for an m-output CUT.
Proof: If d + 3w0 + w1 is maximized over all m(m − 1)/2 output sequence pairs for an m-output CUT, then evidently an OR (NOR) gate should be selected for optimal merger, since an OR (NOR) gate then detects the largest number of errors.
Theorem 11: An output sequence pair of length n is optimally mergeable with an AND (NAND) gate, if w1 > n/2.
Proof: E_AND = d + 3w1 + w0 = n + 2w1, whereas E_XOR = 2n. If an AND (NAND) gate is selected over an XOR (XNOR) gate for optimal merger, then
n + 2w1 > 2n
Or, 2w1 > n
Or, w1 > n/2.
Besides, if w1 > n/2, then w0 < n/2, so n + 2w1 is also greater than n + 2w0, and hence the selection of an AND (NAND) gate is unique.
Theorem 12: An output sequence pair of length n is optimally mergeable with an OR (NOR) gate, if w0 > n/2.
Proof: The theorem follows in the same way as Theorem 11.
C. Algorithm 1
The algorithm for the implementation of the proposed space compaction method, based on optimal mergeability of output sequence pairs and assuming stochastic independence of single and double line errors, is now given in pseudocode format.
stage = total_number_of_outputs
current_stage = 0
while (current_stage <= stage - 1)
{
  for (i = 1; i <= total_number_of_outputs; i++)
    for (j = i + 1; j <= total_number_of_outputs; j++)
    {
      Determine the derived sequence pair that corresponds to the sequence pair (i, j).
      Compute its corresponding Hamming distance d, 0-weight w0, and 1-weight w1.
      Select a gate for optimal merger based on the mergeability criteria discussed earlier [AND (NAND) gate if w1 > n/2; OR (NOR) gate if w0 > n/2; XOR (XNOR) gate otherwise] and compute the value E for the selected gate.
    }
  Select the sequence pair for optimal merger that has the maximum E value, choosing one arbitrarily when there is a tie.
  Compute the corresponding new output sequence using the chosen gate.
  Discard the sequence pair which will no longer be used as a result of this selection.
  current_stage = current_stage + 1
}
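The greedy stage-by-stage construction above can be sketched in executable form as follows. All identifiers are ours, and the thresholds assume the w1 > n/2 and w0 > n/2 criteria of Theorems 11 and 12; the pair with the largest detectable error count E is merged at each stage.

```python
# Greedy construction of a space compaction tree (sketch of Algorithm 1).
def characterize(a, b):
    d = sum(x != y for x, y in zip(a, b))
    w1 = sum(x == y == '1' for x, y in zip(a, b))
    w0 = sum(x == y == '0' for x, y in zip(a, b))
    return d, w0, w1

GATES = {
    'AND': lambda x, y: str(int(x == y == '1')),
    'OR':  lambda x, y: str(int(x == '1' or y == '1')),
    'XOR': lambda x, y: str(int(x != y)),
}

def select_gate(n, d, w0, w1):
    """Gate choice and its maximum detectable error count E."""
    if 2 * w1 > n:
        return 'AND', d + 3 * w1 + w0
    if 2 * w0 > n:
        return 'OR', d + 3 * w0 + w1
    return 'XOR', 2 * n

def build_tree(streams):
    """Merge output streams pairwise until one remains; returns gates used."""
    streams = list(streams)
    tree = []
    while len(streams) > 1:
        best = None
        for i in range(len(streams)):
            for j in range(i + 1, len(streams)):
                n = len(streams[i])
                gate, e = select_gate(n, *characterize(streams[i], streams[j]))
                if best is None or e > best[0]:
                    best = (e, i, j, gate)
        e, i, j, gate = best
        merged = ''.join(GATES[gate](x, y) for x, y in zip(streams[i], streams[j]))
        tree.append(gate)
        streams = [s for k, s in enumerate(streams) if k not in (i, j)] + [merged]
    return tree, streams[0]

tree, out = build_tree(['0101', '0011', '1111'])
```

For these three illustrative 4-bit streams, every pair falls in the XOR case, so the sketch produces a two-gate XOR tree, mirroring the parity-tree-like outcome of Example 3.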
Fig. 4. (a) Ten-line decimal-to-8421 BCD converter. (b) The corresponding compaction tree.
Example 3: Consider the ten-line decimal-to-8421 BCD converter shown in Fig. 4(a). The circuit has nine primary inputs and four primary outputs. For the ten grouped input test patterns, the fault-free output patterns are shown in Table II. There are only six output sequence pairs to consider in the first stage. The Hamming distances of all sequence pairs are computed, together with the 0-weights and 1-weights, w0 and w1, respectively, of the different derived sequence pairs corresponding to the original sequence pairs. Since 2n is greater than or equal to d + 3w1 + w0 and d + 3w0 + w1 for every pair of sequences, every pair is optimally mergeable with an XOR (XNOR) gate. An XOR gate is selected to merge a pair chosen arbitrarily, and the remaining sequences are also merged with an XOR gate. The two output
TABLE II
FAULT-FREE OUTPUT PATTERNS FOR THE TEN-LINE DECIMAL-TO-8421 BCD CONVERTER
sequences in the subsequent stage turn out to be also XOR gate mergeable; so they are merged with an XOR gate to complete the compression tree. The circuit with the corresponding compression tree is shown in Fig. 4(b).
The ten-line decimal-to-8421 BCD converter has 58 single stuck-line faults, two undetectable faults (the effect of lines 11 and 12 being stuck-at-0 is not visible at the output), 34 equivalent faults, and 22 single stuck-line faults in its collapsed fault set. All detectable single stuck-at faults are injected into the circuit. The fault effect for this circuit at the outputs with the test set of Table II is shown in Table III. The fault-free space compactor response for the ten input test vectors is a single 10-bit stream.
If our space compactor is followed by a time compactor (i.e., a syndrome counter), the resulting fault-free signature for the output stream is simply the number of ones, which equals 5 in this case. This fault-free signature is saved in memory to be compared with the faulty signatures in order to determine the percentage of fault information loss.
The faulty signatures corresponding to the single line faults are computed in the same way. Comparing all the faulty signatures to the expected signature, we notice that two of the faulty signatures match the fault-free signature. In all, 56 detectable faults are injected into the circuit, and the percentage of loss before compaction is 0.00%. Furthermore, we calculate the percentage of fault coverage at the output of the space compactor by comparing the fault-free stream bits to the faulty stream bits at the output of the compactor. The number of undetected faults turns out to be 5 at the output of the space compactor. Hence the percentage of fault coverage at the output of the compressor equals 91.07%, which is equivalent to 8.93% fault loss.
IV. DESIGNING COMPACTION TREE ASSUMING STOCHASTIC DEPENDENCE OF SINGLE AND DOUBLE LINE ERRORS
In the preceding sections, we presented a new compression technique for the response data streams of a CUT on the assumption of stochastic independence of single and double line errors. In this section, we digress from that notion and develop our optimal gate selection criteria based on the assumption that the probabilities of errors have an important role to play in the overall selection process. The following discussions exclusively deal with developing a technique for the construction of compaction trees of a CUT when single and double line errors are considered stochastically dependent rather than independent.
A. Effects of Error Probabilities in Selection of Gates for Merger
Li and Robinson [22] defined the detectable error probability estimate, P, for a two-input logic function, given two input sequences of length n, in terms of the following quantities:
S1: probability of single error effect felt at the output of the CUT;
S2: probability of double error effect felt at the output of the CUT;
s_g: number of single line errors at the output of gate g, if gate g is used for merger;
d_g: number of double line errors at the output of gate g, if gate g is used for merger.
Based on the computation of the detectable error probability
estimate of Li and Robinson as given above, we deduce the
following results that profoundly influence the selection of gates
for optimal merger.
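To make the role of S1 and S2 concrete, the sketch below ranks candidate gates by a probability-weighted fraction of detectable errors. This weighting is a stand-in of our own, not the Li and Robinson empirical formula itself (which is not reproduced in this excerpt); the per-gate error counts follow Theorems 3 and 4, and all identifiers are ours.

```python
def weighted_estimate(s_g, d_g, S1, S2, n):
    """Probability-weighted fraction of detectable errors (stand-in estimate)."""
    # 2n possible single line errors, n possible double line errors.
    return (s_g * S1 + d_g * S2) / (2 * n * S1 + n * S2)

def rank_gates(n, d, w0, w1, S1, S2):
    """Per-gate (single, double) detectable error counts per Theorems 3 and 4."""
    counts = {
        'AND/NAND': (d + 2 * w1, w0 + w1),
        'OR/NOR':   (d + 2 * w0, w0 + w1),
        'XOR/XNOR': (2 * n, 0),   # XOR detects no double line errors
    }
    return max(counts, key=lambda g: weighted_estimate(*counts[g], S1, S2, n))

# With single errors dominant, XOR's 2n detectable singles win; with double
# errors dominant, the AND/OR gates (which alone detect doubles) are preferred.
g_single = rank_gates(n=10, d=2, w0=1, w1=7, S1=1.0, S2=0.0)
g_double = rank_gates(n=10, d=2, w0=1, w1=7, S1=0.0, S2=1.0)
```

The qualitative behavior matches the discussion that follows: shifting probability mass from single to double line errors moves the selection away from XOR (XNOR) toward AND (NAND) or OR (NOR) gates.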
Theorem 13: For an output sequence pair of length n and Hamming distance d, an AND (NAND) gate is preferable to an XOR (XNOR) gate if the detectable error probability estimate of the AND (NAND) gate is greater than that of the XOR (XNOR) gate.
Proof: The detectable error probability estimate when an AND (NAND) gate is used for merger is obtained from its d + 2w1 single and w0 + w1 double line errors; the corresponding one when an XOR (XNOR) gate is used is obtained from its 2n single line errors. The stated condition on S1 and S2 follows by direct algebraic comparison of the two estimates.
TABLE III
THE FAULT EFFECT AT THE OUTPUTS OF THE TEN-LINE DECIMAL-TO-8421 BCD CONVERTER
Corollary 13.1: On the other hand, an XOR (XNOR) gate is selected if the reverse inequality holds; an XOR (XNOR) gate is then obviously preferable to an AND (NAND) gate.
Theorem 14: For an output sequence pair of length n and Hamming distance d, an OR (NOR) gate is preferable to an XOR (XNOR) gate if the detectable error probability estimate of the OR (NOR) gate is greater than that of the XOR (XNOR) gate.
Proof: The detectable error probability estimate when an OR (NOR) gate is used for merger is obtained from its d + 2w0 single and w0 + w1 double line errors; the corresponding one when an XOR (XNOR) gate is used is obtained from its 2n single line errors. The stated condition again follows by direct algebraic comparison.
Corollary 14.1: On the other hand, an XOR (XNOR) gate is selected if the reverse inequality holds.
However, the probabilities play no role in the selection between AND (NAND) and OR (NOR) gates, if we use the empirical formula for P of Li and Robinson, as defined above. The undernoted theorem states the condition that determines the selection between AND (NAND) and OR (NOR) gates.
Theorem 15: For an output sequence pair of length n and Hamming distance d, an AND (NAND) gate is preferable to an OR (NOR) gate for merger, if w1 > w0.
Proof: The detectable error probability estimates when an AND (NAND) gate and when an OR (NOR) gate are used for merger differ only through the single line error counts, d + 2w1 and d + 2w0, since both gates detect the same w0 + w1 double line errors. An AND (NAND) gate is surely preferable to an OR (NOR) gate if d + 2w1 > d + 2w0, or 2w1 > 2w0, or w1 > w0.
Corollary 15.1: An OR (NOR) gate is preferable if, on the other hand, w0 > w1.
Theorem 16: Sequence merger following the Li and Robinson criteria may result in a compression network that is different from the one that results from the selection criteria given above, even if the same probability estimate of Li and Robinson is used as the guide to selection.
Proof: The proof is provided by the following counterexample. Consider three output sequences. By the Li and Robinson criteria, we would merge one particular pair, while according to our selection criteria a different pair will be merged, because P for the first pair equals 0.81 and P for the second pair also equals 0.81, whereas P for the third pair equals 0.87, which is obviously the largest value of the probability estimate, P, for the assumed values of S1 and S2.
The next theorem establishes that the gate selection based on
the aforementioned results follows a stricter guideline as opposed to the criteria used by Li and Robinson.
Theorem 17: The use of the detectable error probability estimate, , as suggested by Li and Robinson and discussed earlier,
gives a weaker condition in gate selection for output merger in
the construction of compression networks compared to the selection criteria proposed in Theorems 13–16.
Proof: The proof is provided by the following example. Consider four output sequences whose six sequence pairs all have the same characteristics, that is, the same length n, Hamming distance d, 0-weight w0, and 1-weight w1. For the given values of S1 and S2, the corresponding probability estimates of all the sequence pairs are then equal.
Since the sequence pairs have the same probability estimates, any one can be selected for optimal merger. Now, if we use the Li and Robinson criteria to merge the selected output sequences, we will have to decide between choosing AND (NAND) or OR (NOR) gates (both have the same probability estimates). However, according to our mergeability criteria, an XOR (XNOR) gate should be used for optimal merger, because its detectable error count, 2n, is greater than those of the other gates. This shows that our selection criteria are stronger than the Li and Robinson selection criteria for output merger in the construction of compaction networks.
We now prove an important theorem concerning gate selection for optimal merger with certain particular but generally chosen values of the probabilities S1 and S2 for single and double line errors, respectively.
Theorem 18: The gate selection criteria established earlier under condition of stochastic independence of single and double line errors at the output of a CUT remain unchanged even under condition of stochastic dependence of single and double line errors based on computation of the detectable error probability estimates, P, following the empirical formula of Li and Robinson, if the values of the probabilities S1 and S2 are chosen as S1 = 2/3 and S2 = 1/3.
Proof: The detectable error probability estimates, as used earlier, for AND (NAND), OR (NOR), and XOR (XNOR) gates are computed, respectively, from their single and double line error counts. These estimates are for output sequences of length n, Hamming distance d, 1-weight w1, and 0-weight w0, with the probabilities of single and double line errors being, respectively, S1 and S2, as usual. Evidently, an AND (NAND) gate is preferable to an XOR (XNOR) gate if its estimate is the greater, and similarly, an OR (NOR) gate is preferable to an XOR (XNOR) gate if its estimate is the greater. With the values of S1 and S2 being, respectively, 2/3 and 1/3, we have
then, after simplification, the condition for preferring an AND (NAND) gate over an XOR (XNOR) gate reduces to w1 > n/2. This condition is exactly the same as the one established previously for AND (NAND) gate selection under condition of stochastic independence of errors. It can be shown likewise that an OR (NOR) gate is preferable to an XOR (XNOR) gate if w0 > n/2, and an XOR (XNOR) gate is preferable to an AND (NAND) or an OR (NOR) gate otherwise. We thus see that our gate selection criteria need not be changed even under condition of stochastic dependence of errors when the probabilities are S1 = 2/3 and S2 = 1/3, which considerably simplifies the computational problem involved with the detectable error probability estimates, P.
B. Algorithm 2
The pseudocode description of the algorithm to construct the space compaction trees for a CUT, on the assumption of different probability values for single and double line errors (i.e., stochastic dependence of single and double line errors), is now given below.
Define the probability of single (S1) and double (S2) error effects felt at the output of the CUT.
stage = total_number_of_outputs
current_stage = 0
while (current_stage <= stage - 1)
{
  for (i = 1; i <= total_number_of_outputs; i++)
    for (j = i + 1; j <= total_number_of_outputs; j++)
    {
      Determine the derived sequence pair that corresponds to the sequence pair (i, j).
      Compute its corresponding Hamming distance d, 0-weight w0, and 1-weight w1.
      Compute the corresponding number of single (s_g) and double (d_g) line errors at the outputs of the different basic gate types (AND (NAND), OR (NOR), and XOR (XNOR)), if used for merger.
    }
  Select the sequence pair for merger based on the detectable error probability estimate P discussed earlier, choosing one arbitrarily when there is a tie.
  Select a gate to optimally merge the chosen sequence pair based on the optimal mergeability criteria (an AND (NAND) gate is preferable to an OR (NOR) gate if w1 > w0, an OR (NOR) gate is preferable to an AND (NAND) gate if w0 > w1, an AND (NAND) or OR (NOR) gate is preferable to an XOR (XNOR) gate if its probability estimate P is greater, and vice versa).
  Compute the corresponding new output sequence using the chosen gate.
  Discard the sequence pair which will no longer be used as a result of this selection.
  current_stage = current_stage + 1
}
C. Input Sequence Length
Apparently, the length of the test-input patterns applied to the CUT plays an important role in the construction of the circuit's compaction trees. The type of logic gate to be used, along with the pair of lines to be selected for optimal merger, may change with the modification of the input test length n. Furthermore, as will be shown in the results section, the input test length has a significant effect on the percentage of fault coverage for the CUT.
Example 4: Consider the three sequences used in Example 1 before, all of the same length n. We have shown that according to our selection criteria, one particular pair will be used for optimal merger. If we now consider the same sequences, each increased by 6 bits of 1, a different pair of sequences has to be selected for optimal merger under the same criteria.
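The length effect of Example 4 can be reproduced with the threshold rule of Theorems 11 and 12: appending 1-bits raises w1 relative to n/2 and can flip the gate choice from XOR to AND. The sequences below are illustrative, not those of Example 1.

```python
def select_gate(a: str, b: str):
    """Gate choice per the w1 > n/2 and w0 > n/2 thresholds."""
    n = len(a)
    w1 = sum(x == y == '1' for x, y in zip(a, b))  # 1-weight
    w0 = sum(x == y == '0' for x, y in zip(a, b))  # 0-weight
    if 2 * w1 > n:
        return 'AND'
    if 2 * w0 > n:
        return 'OR'
    return 'XOR'

a, b = '110100', '101100'                          # w1 = 2, n = 6: XOR wins
g_before = select_gate(a, b)
g_after = select_gate(a + '111111', b + '111111')  # w1 = 8, n = 12: AND wins
```

The same pair of output lines is thus merged differently once the test length changes, which is why the compaction tree must be rebuilt when the test set is modified.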
Example 5: Once again, consider the ten-line decimal-to-8421 BCD converter shown in Fig. 4(a). The circuit has six sequence pairs to be considered in the first stage. The Hamming distances of all sequence pairs are calculated, and the 0-weights and 1-weights of the corresponding derived sequence pairs are obtained. Each pair of sequences has a different number of single (s_g) and double (d_g) line errors at the output of gate g, if gate g is used for merger. Of the corresponding probability estimates, one pair has the highest value for the assumed S1 and S2, and that pair is therefore selected for optimal merger in the first stage. The selected sequences are merged with an OR gate, since the optimal mergeability conditions favoring an AND (NAND) or an XOR (XNOR) gate are both unsatisfied. The two output sequences in the subsequent step turn out to be also OR gate mergeable (and are merged with an OR gate), while the remaining sequences are merged with an AND gate in the last stage, thus completing the compaction tree. The circuit with the compression tree is shown in Fig. 5.
TABLE IV
TEN-LINE DECIMAL-TO-8421 BCD CONVERTER'S COMPACTION TREES FOR DIFFERENT VALUES OF S1 AND S2
Fig. 5. Converter circuit with its corresponding compaction tree.
Table IV summarizes the results on the construction of compaction trees for different values of the probabilities of single and double error effects felt at the output of the CUT.
All of the 56 detectable single stuck-at line faults are injected into the circuit, and their effects at the outputs of the circuit are shown in Table III. Depending on the choice of S1 and S2, we have different lines for selection and different types of gates to use when constructing the corresponding space compactor. The fault-free space compactor response for the ten input test vectors, for (S1, S2) equaling, respectively, (0.0, 1.0), (0.33, 0.66), (0.1, 0.9), and (0.05, 0.95), is one single 10-bit stream, while it is a different bit stream when (S1, S2) equals (1.0, 0.0), (0.66, 0.33), (0.9, 0.1), and (0.95, 0.05), respectively. The fault-free signatures using a syndrome counter as time compressor are 3 and 5, respectively. For (S1, S2) having, respectively, the values (0.0, 1.0), (0.33, 0.66), (0.1, 0.9), and (0.05, 0.95), by comparing the fault-free streams at the circuit's outputs to the faulty bit streams while injecting all of the 56 detectable single faults, we find that the percentage of fault loss is 0.00% before compaction. Moreover, 17 out of 56 stuck-at line faults turn out to be undetectable after space compaction, and thus we get about 69.64% fault coverage, or 30.36% fault loss.
TABLE V
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM
For our
Fig. 6. (a) Space compactor circuit for c432 assuming stochastic independence of single and double line errors. (b) Space compactor circuit for c432 assuming different probability values for single and double line errors [viz. when (S1, S2) equals (0.1, 0.9), (0.05, 0.95), and (0.33, 0.66), respectively]. (c) Space compactor circuit for c432 assuming different probability values for single and double line errors [viz. when (S1, S2) equals (0.66, 0.33)]. (d) Space compactor circuit for c432 assuming different probability values for single and double line errors [viz. when (S1, S2) equals (0.9, 0.1), (0.95, 0.05), and (1.0, 0.0), respectively]. (e) Space compactor circuit for c432 assuming different probability values for single and double line errors [viz. when (S1, S2) equals (0.0, 1.0)].
(S1, S2) probability values being, respectively, (1.0, 0.0), (0.66, 0.33), (0.9, 0.1), and (0.95, 0.05), the compressor is simply a parity tree circuit, which is a much better compressor for this circuit (91.07% fault coverage), as seen previously in the case when stochastic independence of single and double line errors was assumed.
V. EXPERIMENTAL RESULTS
To demonstrate the feasibility of the proposed new space compression schemes, independent simulations were performed on various ISCAS 85 combinational benchmark circuits. We have used ATALANTA [33] (a test generation and fault simulation program developed at the Virginia Polytechnic Institute and State University) to generate the fault-free output sequences needed to construct our space compactor circuits and to test the benchmark circuits using reduced test sets accompanied by a random test session, the FSIM fault simulation program [34] with pseudorandom test sets, and the COMPACTEST program [35] to generate the reduced test sets that detect most detectable single stuck-at faults for all the benchmark circuits.
For each circuit, we determined the number of test vectors used
TABLE VI
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING
ATALANTA
TABLE VII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS
USING COMPACTEST
to construct the compaction tree, CPU time taken to construct
the compactor, number of applied test vectors, simulation CPU
time, and percentage fault coverage by running ATALANTA
and FSIM programs on a SUN SPARC 5 workstation, and
COMPACTEST on an IBM AIX machine. The results are listed
in the multitude of tables that follow. The CPU times needed to
construct the compactors for all the different ISCAS 85 benchmark circuits on a SUN SPARC 5 workstation were in the range
of 2.0–102.0 s.
For comparison purposes, we used a parity tree space compactor as our benchmark, composed of XOR gates, which propagates all errors that appear on an odd number of inputs, and
1740
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 50, NO. 6, DECEMBER 2001
TABLE VIII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM AND ASSUMING STOCHASTIC INDEPENDENCE OF SINGLE AND DOUBLE LINE ERRORS
TABLE IX
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING ATALANTA AND ASSUMING STOCHASTIC INDEPENDENCE OF SINGLE AND DOUBLE LINE ERRORS
is, therefore, considered ideal for space compaction. With the
parity tree space compactor as reference, we have simulated
some combinational benchmark circuits to demonstrate the feasibility of the proposed schemes of constructing compaction
trees using sequence characterization based on our concepts of
Hamming distance, sequence weights, and derived sequences.
To give an idea of how our space compactor looks, we have
drawn the space compaction circuits for the c432 benchmark
circuit comprised of 160 gates, 36 primary inputs, and seven
primary outputs, corresponding to different probability values
for single and double line errors. Fig. 6(a) shows the space compactor assuming stochastic independence of single and double
line errors. Fig. 6(b)–(e) illustrates the compaction circuits when the probability values (S1, S2) equal (0.1, 0.9), (0.05, 0.95), (0.33, 0.66), (0.66, 0.33), (0.9, 0.1), (0.95, 0.05), (1.0, 0.0), and (0.0, 1.0), respectively. It may be noted that the compression network in Fig. 6(a) under stochastic independence of line errors happens to be identical to that in Fig. 6(b) corresponding to certain distinct probability values of line errors, viz. when (S1, S2) equals (0.1, 0.9), (0.05, 0.95), and (0.33, 0.66), respectively.
The hardware overhead for the space compactors shown in Fig. 6(a)–(e), measured by the weighted gate count metric (viz. gate count multiplied by average fanin), is 3.44%. This is about 10% less area than what was recently measured for an MPT space compactor [7], which provides zero aliasing.
Tables V–VII show the fault coverage and simulation CPU
time for all the ISCAS 85 combinational benchmark circuits
without compactors using FSIM, ATALANTA, and COMPACTEST, respectively. ATALANTA provides 100% fault
coverage for c17 and c880 benchmark circuits, and within the
range of 95–99% for other circuits. The fault coverage is much
higher than what is provided by FSIM in almost all cases.
Tables VIII–X show the simulation results for all the
benchmark circuits with their compactors when stochastic
independence of single and double line errors is assumed,
using FSIM, ATALANTA, and COMPACTEST, respectively.
Tables XI–XIII show the number of applied test vectors, simulation CPU time, and percentage fault coverage for all the benchmark circuits using FSIM, ATALANTA, and COMPACTEST, respectively, for the following probability values of single and double line errors: (S1, S2) equals (0.9, 0.1).
TABLE X
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING COMPACTEST AND ASSUMING STOCHASTIC INDEPENDENCE OF SINGLE AND DOUBLE LINE ERRORS
TABLE XI
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM AND ASSUMING DIFFERENT PROBABILITY VALUES FOR SINGLE AND DOUBLE LINE ERRORS
TABLE XII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING ATALANTA AND ASSUMING DIFFERENT PROBABILITY VALUES FOR SINGLE AND DOUBLE LINE ERRORS
Table XIV depicts the results on simulation for the benchmark circuits using ATALANTA, assuming stochastic independence of single and double line errors, and when faults were not considered in compressors.
TABLE XIII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING COMPACTEST AND ASSUMING DIFFERENT PROBABILITY VALUES FOR SINGLE AND DOUBLE LINE ERRORS
TABLE XIV
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING ATALANTA, ASSUMING STOCHASTIC INDEPENDENCE OF SINGLE AND DOUBLE LINE
ERRORS, AND WHEN FAULTS WERE NOT CONSIDERED IN COMPRESSORS
We also simulated all the benchmark circuits using FSIM
for pseudorandom testing when stochastic independence as well
as dependence of single and double line errors were assumed,
using the same test sets in both cases. Table XV shows the simulation results for all the circuits before compaction. Table XVI
shows the simulation results for all these circuits after compaction, and assuming stochastic independence of single and
double line errors.
Moreover, we simulated all the ISCAS 85 benchmark circuits
with parity tree space compactors using FSIM and ATALANTA,
respectively. Tables XVII and XVIII show the corresponding results when the circuits were simulated after being compacted.
Table XIX shows the simulation results using ATALANTA for
all the benchmark circuits after compaction, and when faults
were not considered in compressors. Table XX, on the other
hand, shows the simulation results for pseudorandom testing
using FSIM with parity tree as a space compressor, when the
same test sets are applied as used in Table XV. More simulation results on the benchmark circuits can be found in [30]. It is evident that the percentage fault coverage using the COMPACTEST
TABLE XV
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM
WITH IDENTICAL TEST SETS
program is higher (more than 99% in most cases) than what
is measured by ATALANTA and FSIM, but at the same time
COMPACTEST requires more CPU time than the other two
TABLE XVI
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM WITH IDENTICAL TEST SETS AND ASSUMING STOCHASTIC INDEPENDENCE OF
SINGLE AND DOUBLE LINE ERRORS
TABLE XVII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM
WITH PARITY TREE AS A SPACE COMPRESSOR
TABLE XVIII
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING
ATALANTA WITH PARITY TREE AS A SPACE COMPRESSOR
fault simulation programs, in the case of both stochastic independence and dependence of line errors.
From our extensive simulation experiments, we observe that
in all cases, our designed space compactors fare comparably in
TABLE XIX
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING
ATALANTA WITH PARITY TREE AS A SPACE COMPRESSOR AND WHEN
FAULTS WERE NOT CONSIDERED IN COMPRESSORS
TABLE XX
SIMULATION RESULTS OF THE ISCAS 85 BENCHMARK CIRCUITS USING FSIM
WITH PARITY TREE AS A SPACE COMPRESSOR AND IDENTICAL TEST SETS
all respects with the parity tree space compactors, which we
used in this study as our benchmark. For some circuits, we even
obtained better fault coverage with reduction in the CPU time
TABLE XXI
HARDWARE OVERHEAD FOR THE ISCAS 85 BENCHMARK CIRCUITS
using our space compactors than what we can get when a parity
tree space compactor is used.
We also estimated the hardware overhead of the compressors
for the different ISCAS 85 combinational benchmark circuits
and found it to be as small as 1–5% in most cases, and within
15% for three of the benchmark circuits. Table XXI shows the
hardware overhead estimates for all the ISCAS 85 benchmark
circuits. To estimate the hardware overhead, we used the ratio of the weighted gate count metric (that is, number of gates multiplied by average fanin) of the compressor to that of the total circuit comprised of the CUT and the space compactor.
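The overhead metric just described can be sketched as follows; the gate counts and fanins used here are illustrative, not those of any particular ISCAS 85 circuit.

```python
def weighted_gate_count(fanins):
    """Weighted gate count metric: number of gates * average fanin.

    `fanins` lists one fanin value per gate, so the metric equals the
    total fanin of the circuit.
    """
    return len(fanins) * (sum(fanins) / len(fanins))

def overhead_percent(compactor_fanins, cut_fanins):
    """Compressor weight as a percentage of the CUT-plus-compactor total."""
    c = weighted_gate_count(compactor_fanins)
    return 100.0 * c / (c + weighted_gate_count(cut_fanins))

# Hypothetical 3-gate two-input compactor attached to a 144-gate CUT.
pct = overhead_percent([2, 2, 2], [2] * 144)
```

For these illustrative numbers the overhead comes out at about 2%, in line with the 1–5% range reported for most of the benchmark circuits.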
VI. CONCLUDING REMARKS
The design of space-efficient BIST support hardware is of
great importance in the design and fabrication of digital ICs.
The present paper reports compression techniques for the test data outputs of digital combinational circuits which facilitate the design of such space-efficient support hardware. The proposed techniques use AND (NAND), OR (NOR), and XOR (XNOR) gates as appropriate to construct an output compaction tree that compresses the outputs of the CUT to a single line.
The compaction tree is generated based on sequence characterization and using the concepts of Hamming distance, sequence
weights, and derived sequences. The logic functions selected to
build the compression tree are determined solely by the characteristics of the sequences which are inputs to the gates. The
optimal mergeability criteria were obtained on the assumption of both stochastic independence and dependence of single and double line errors. When we do not assume stochastic independence of single and double line errors (i.e., that errors in the different lines occur independently of each other), we find that error occurrence probabilities play a distinct role in the selection of gates for merger in most cases. In this latter case (stochastic dependence), output bit stream selection is based on calculating the detectable error probability estimates using an empirical formula developed by Li and Robinson. It should be recalled that
the effectiveness of the proposed approaches is critically dependent on the probabilities of error occurrence in different lines
of the CUT, and this dependence may be affected by the circuit
structure, partitioning, etc., that is, by the number of inputs, outputs, and internal lines, the types of gates of which it is composed, and the way it is partitioned. Any given circuit structure has well-defined
features which might change based on evolving design practices
with technology trends. If the ISCAS 85 benchmark circuits
were implemented in some more recent technology, FPGAs for instance, the corresponding space compactors designed for such modified circuit structures might well be different. Besides, in actual situations, the probability
values for error occurrence in particular circuits have to be experimentally determined, that is, these are a posteriori probabilities rather than a priori probabilities. If the circuit structure changes, these probability values change and obviously the
corresponding compression networks that have to be designed
based on optimal mergeability criteria change as well. One further point should be noted. Since the empirical formula used for computing the detectable error probability
estimates in our gate selection process uses exact values of these
a posteriori probabilities of error occurrence, probability values given only as intervals rather than exact values cannot be used as such in the gate selection process unless the formula is modified; at best they provide the two extremes of selections consistent with the probability intervals. Since the
major thrust of this paper is in devising efficient methods of synthesizing compaction networks which provide improved fault
coverage for fixed complexity, thereby realizing a better tradeoff between coverage and complexity (storage) than conventional techniques, the complexity issues were not addressed in depth in
the current study. From the analytical viewpoint, the major issue
involves the computation of the detectable error probability estimates, which is rather simple in the present case because of
two-line mergers, compared to that in the case of generalized
mergeability [3], where the computation is really intensive.
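For the two-line merger case, the computation can be illustrated directly. The sketch below is a stand-in for the Li-Robinson empirical estimates used in the paper: it exhaustively evaluates, for independent per-line error probabilities p1 and p2 and fault-free input values (a, b), the probability that an error pattern flips the output of a candidate 2-input gate.

```python
# Illustrative detectable-error probability for a 2-input merger gate
# (a simplified stand-in, NOT the Li-Robinson formula from the paper):
# sum, over the nonzero error patterns on the two input lines, of the
# probability of that pattern when it changes the gate's output.

from itertools import product

GATES = {
    "AND": lambda x, y: x & y,
    "OR":  lambda x, y: x | y,
    "XOR": lambda x, y: x ^ y,
}

def detect_prob(gate, a: int, b: int, p1: float, p2: float) -> float:
    """P(gate output flips), with independent line-error probabilities."""
    good = gate(a, b)
    total = 0.0
    for e1, e2 in product((0, 1), repeat=2):
        if (e1, e2) == (0, 0):
            continue  # no error on either line
        pr = (p1 if e1 else 1 - p1) * (p2 if e2 else 1 - p2)
        if gate(a ^ e1, b ^ e2) != good:  # error pattern is detectable
            total += pr
    return total

# XOR propagates every single-line error, whatever the fault-free values:
print(detect_prob(GATES["XOR"], 0, 1, p1=0.1, p2=0.2))
```

A gate-selection rule would prefer the candidate gate maximizing this estimate over the observed fault-free sequences; under stochastic dependence the joint error probabilities would replace the independent products above.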
Also, in this work, as is evident, we did not emphasize designing zero-aliasing compressors [36], [37], but rather endeavored to reinforce the connection between the input test sets and
their length and their reduction into recommended algorithms
in the construction of the compaction tree. Loss of information cannot be completely avoided when the size of all output
responses is reduced. Therefore, depending on the amount of
information loss, the corresponding space compactor designs
will be affected. In our design experiments we used the reduced
test sets provided by ATALANTA and COMPACTEST to simulate all the ISCAS 85 combinational benchmark circuits. Even
though these reduced input test sets are not the minimal test sets
needed to ensure a 100% fault coverage, experimental results
indicate that the designed space compactors are comparable in
all respects with the parity tree space compactor which we used
as our benchmark, and which propagates all errors that appear
on an odd number of inputs and is usually considered ideal for
ad hoc space compaction.
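The parity-tree property invoked above, that an XOR tree propagates exactly those error patterns falling on an odd number of input lines, can be checked exhaustively for a small width:

```python
# Check of the parity-tree benchmark property: an XOR (parity) tree's
# output flips if and only if an odd number of its input lines are in
# error. Exhaustive over all fault-free values and error patterns on
# 4 inputs.

from itertools import product

def parity(bits) -> int:
    """Single output of an XOR tree over the input bits."""
    out = 0
    for b in bits:
        out ^= b
    return out

for good in product((0, 1), repeat=4):
    for err in product((0, 1), repeat=4):
        faulty = [g ^ e for g, e in zip(good, err)]
        flipped = parity(faulty) != parity(good)
        # output flips exactly when the error pattern has odd weight
        assert flipped == (sum(err) % 2 == 1)
print("parity tree propagates exactly the odd-weight error patterns")
```

This is why the linear parity tree is usually considered the ideal reference for ad hoc space compaction: no odd-weight error ever aliases, while every even-weight error does.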
Some of the advantages inherent in the proposed methods of
space compression are their simplicity and the resulting low area
overhead with improved fault coverage for fixed complexity for
the designed compactors, which might make them suitable in
a VLSI design environment as BIST support hardware, even
though they do not guarantee 100% fault coverage. The techniques were illustrated through designing space compactors for
all benchmark circuits in elaborate detail. Finally, testing can
be combined with efficient input test patterns to synthesize BIST
circuits that provide more than 99% fault coverage with small
CPU simulation time and low area overhead. Quite recently new
methodologies were developed by the authors to design even
zero-aliasing space compressors utilizing the aforementioned
concepts, based on the assumption of multiple output line errors
occurring in circuits under conditions of stochastic dependence.
ACKNOWLEDGMENT
The authors are extremely grateful to the anonymous reviewers for their many valuable suggestions that immensely
helped in the preparation of the revised version of the manuscript. The authors are also grateful to Dr. D. S. Ha of the
Department of Electrical Engineering, Virginia Polytechnic
Institute and State University, Blacksburg, for providing them
with ATALANTA and FSIM fault simulators for their use in
the current research.
REFERENCES
[1] S. B. Akers, “A parity bit signature for exhaustive testing,” IEEE Trans.
Computer-Aided Design, vol. 7, pp. 333–338, Mar. 1988.
[2] P. H. Bardell, W. H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Technique. New York: Wiley, 1987.
[3] S. R. Das, T. F. Barakat, E. M. Petriu, M. H. Assaf, and K. Chakrabarty,
“Space compression revisited,” IEEE Trans. Instrum. Meas., vol. 49, pp.
690–705, June 2000.
[4] T. W. Williams and K. P. Parker, "Testing logic networks and design for
testability," Computer, vol. 12, pp. 9–21, Oct. 1979.
[5] E. J. McCluskey, “Built-in self-test techniques,” IEEE Des. Test
Comput., vol. 2, pp. 21–28, Apr. 1985.
[6] R. G. Daniels and W. B. Bruce, “Built-in self-test trends in Motorola
microprocessors,” IEEE Des. Test Comput., vol. 2, pp. 64–71, Apr. 1985.
[7] K. Chakrabarty, “Test response compaction for built-in self testing,”
Ph.D. dissertation, Dept. Computer Science and Engineering, University of Michigan, Ann Arbor, MI, 1995.
[8] V. D. Agrawal, C. R. Kime, and K. K. Saluja, “A tutorial on built-in
self-test—Part II,” IEEE Des. Test Comput., vol. 10, pp. 69–77, June
1993.
[9] M. C. Hansen and J. P. Hayes, “High-level test generation using physically-induced faults,” in Proc. VLSI Test Symp., 1995, pp. 20–28.
[10] K. Chakrabarty and J. P. Hayes, “Efficient test response compression for
multiple-output circuits,” in Proc. Int. Test Conf., 1994, pp. 501–510.
[11] K. Chakrabarty and J. P. Hayes, "Cumulative balance testing of logic circuits," IEEE Trans. VLSI Syst., vol. 3, pp. 72–83, Mar. 1995.
[12] J. P. Hayes, "Check sum methods for test data compression," J. Design Automation and Fault-Tolerant Computing, vol. 1, pp. 3–7, Jan. 1976.
[13] J. Savir, "Syndrome-testable design of combinational circuits," IEEE Trans. Comput., vol. C-29, pp. 442–451, June 1980.
[14] J. P. Hayes, "Transition count testing of combinational logic circuits," IEEE Trans. Comput., vol. C-25, pp. 613–620, June 1976.
[15] T.-C. Hsiao and S. C. Seth, "An analysis of the use of Rademacher–Walsh spectrum in compact testing," IEEE Trans. Comput., vol. C-33, pp. 934–937, Oct. 1984.
[16] A. K. Susskind, "Testing by verifying Walsh coefficients," IEEE Trans. Comput., vol. C-32, pp. 198–201, Feb. 1983.
[17] R. A. Frohwerk, "Signature analysis—A new digital field service method," Hewlett-Packard J., vol. 28, pp. 2–8, May 1977.
[18] T. W. Williams, W. Daehn, M. Gruetzner, and C. W. Starke, "Aliasing errors in signature analysis registers," IEEE Des. Test Comput., vol. 4, pp. 39–45, Apr. 1987.
[19] D. K. Pradhan and S. K. Gupta, "A new framework for designing and analyzing BIST techniques and zero aliasing compression," IEEE Trans. Comput., vol. C-40, pp. 743–763, June 1991.
[20] N. R. Saxena and J. P. Robinson, "A unified view of test response compression methods," IEEE Trans. Comput., vol. C-36, pp. 94–99, Jan. 1987.
[21] N. R. Saxena and J. P. Robinson, "Syndrome and transition count are uncorrelated," IEEE Trans. Inform. Theory, vol. 34, pp. 64–69, Jan. 1988.
[22] K. K. Saluja and M. Karpovsky, "Testing computer hardware through compression in space and time," in Proc. Int. Test Conf., 1983, pp. 83–88.
[23] S. M. Reddy, K. K. Saluja, and M. G. Karpovsky, "Data compression technique for test responses," IEEE Trans. Comput., vol. C-37, pp. 1151–1156, Sept. 1988.
[24] Y. K. Li and J. P. Robinson, "Space compression method with output data modification," IEEE Trans. Computer-Aided Design, vol. 6, pp. 290–294, Mar. 1987.
[25] W.-B. Jone and S. R. Das, "Space compression method for built-in self-testing of VLSI circuits," Int. J. Comput. Aided VLSI Des., vol. 3, pp. 309–322, Sept. 1991.
[26] S. R. Das, H. T. Ho, W. B. Jone, and A. R. Nayak, "An improved output compaction technique for built-in self-test in VLSI circuits," in Proc. Int. Conf. VLSI Design, 1994, pp. 403–407.
[27] M. Karpovsky and P. Nagvajara, "Optimal robust compression of test responses," IEEE Trans. Comput., vol. C-39, pp. 138–141, Jan. 1990.
[28] Y. Zorian and V. K. Agarwal, "A general scheme to optimize error masking in built-in self testing," in Proc. Int. Symp. Fault-Tolerant Computing, 1986, pp. 410–415.
[29] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA: Freeman, 1979.
[30] M. H. Assaf, "Space compactor design for built-in self-testing of VLSI circuits from compact test sets using sequence characterization and failure probabilities," M.A.Sc. thesis, Dept. Electrical Engineering, University of Ottawa, Ottawa, ON, Canada, Aug. 1996.
[31] K. P. Parker and E. J. McCluskey, "Probabilistic treatment of general combinational networks," IEEE Trans. Comput., vol. C-24, pp. 668–670, June 1975.
[32] A. Gupta, Groundwork of Mathematical Probability and Statistics, 3rd ed. Calcutta, India: Academic, 1988.
[33] H. K. Lee and D. S. Ha, "On the generation of test patterns for combinational circuits," Dept. Electrical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, Tech. Rep. 12-93, 1993.
[34] H. K. Lee and D. S. Ha, "An efficient forward fault simulation algorithm based on the parallel pattern single fault propagation," in Proc. Int. Test Conf., 1991, pp. 946–955.
[35] I. Pomeranz, L. N. Reddy, and S. M. Reddy, "COMPACTEST: A method to generate compact test sets for combinational circuits," in Proc. Int. Test Conf., 1991, pp. 194–203.
[36] B. Pouya and N. A. Touba, "Synthesis of zero-aliasing elementary-tree space compactors," in Proc. VLSI Test Symp., 1998, pp. 70–77.
[37] S. R. Das, M. H. Assaf, E. M. Petriu, W. B. Jone, and K. Chakrabarty, "A novel approach to designing aliasing-free space compactors based on switching theory formulation," in Proc. IEEE Instrum. Meas. Tech. Conf., vol. 1, 2001, pp. 198–203.
Sunil R. Das (M’70–SM’90–F’94) received the
B.Sc. (Honors) degree in physics, and both the M.Sc.
(Tech) degree and the Ph.D. degree in radiophysics
and electronics, all from the University of Calcutta,
Calcutta, West Bengal, India.
He is a Professor of Electrical and Computer Engineering at the School of Information Technology
and Engineering, University of Ottawa, Ottawa,
ON, Canada. He previously held academic and
research positions with the Department of Electrical
Engineering and Computer Sciences, Computer
Science Division, University of California, Berkeley, the Center for Reliable
Computing (CRC), Computer Systems Laboratory, Department of Electrical
Engineering, Stanford University, Stanford, CA (on sabbatical leave), the
Institute of Computer Engineering, National Chiao Tung University, Hsinchu,
Taiwan, and the Center of Advanced Study (CAS), Institute of Radiophysics
and Electronics, University of Calcutta. He has published extensively in the
areas of switching and automata theory, digital logic design, threshold logic,
fault-tolerant computing, microprogramming and microarchitecture, microcode
optimization, applied theory of graphs, and combinatorics. He has edited jointly
with P. K. Srimani a book entitled, Distributed Mutual Exclusion Algorithms,
published by the IEEE Computer Society Press, Los Alamitos, CA, 1992, in its
Technology Series. He is also the author jointly with C. L. Sheng of a text on
Digital Logic Design being published by Ablex Publishing Corporation.
Dr. Das has served as the Managing Editor of the IEEE VLSI Technical Bulletin, a publication of the IEEE Computer Society Technical Committee (TC)
on VLSI, and also as an Executive Committee Member of the IEEE Computer
Society Technical Committee (TC) on VLSI. He is currently an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS (now
of Part A, Part B and Part C), an Associate Editor of the IEEE TRANSACTIONS
ON INSTRUMENTATION AND MEASUREMENT, an Associate Editor of the International Journal of Parallel and Distributed Systems and Networks published by
Acta Press, Calgary, AB, and a member of the Editorial Board and a Regional
Editor for Canada of the VLSI Design: An International Journal of Custom-Chip
Design, Simulation and Testing published by Gordon and Breach Science Publishers, Inc., New York. He is a former Administrative Committee (ADCOM)
Member of the IEEE Systems, Man, and Cybernetics Society, a former Associate Editor of the IEEE TRANSACTIONS ON VLSI SYSTEMS (for two consecutive terms), a former Associate Editor of the SIGDA Newsletter, the publication of the ACM Special Interest Group on Design Automation, a former Associate Editor of the International Journal of Computer Aided VLSI Design published by Ablex Publishing Corporation, Norwood, NJ. He has also served as the
Co-Chair of the IEEE Computer Society Students Activities Committee from
Region 7 (Canada). He was the Associate Guest Editor of the IEEE JOURNAL
OF SOLID-STATE CIRCUITS Special Issues on Microelectronic Systems (Third
and Fourth Special Issues), and Guest Editor of the International Journal of
Computer Aided VLSI Design (September 1991) as well as VLSI Design: An
International Journal of Custom-Chip Design, Simulation and Testing (March
1993 and September 1996), Special Issues on VLSI Testing. He is currently
Guest Editing another Special Issue of the journal VLSI Design in the area of
VLSI Testing scheduled for December 2001. He has served in the Technical
Program Committees and Organizing Committees of many IEEE and non-IEEE
international conferences, symposia, and workshops, and also acted as session
organizer, session chair, and panelist.
He was elected one of the delegates of the prestigious Good People, Good
Deeds of the Republic of China in 1981 in recognition of his outstanding contributions in the field of research and education. He is listed in the Marquis Who’s
Who Biographical Directory of the Computer Graphics Industry, Chicago, IL
(First Edition, 1984). Dr. Das is the 1996 recipient of the IEEE Computer Society’s highly esteemed Technical Achievement Award for his pioneering contributions in the fields of switching theory and modern digital design, digital
circuits testing, microarchitecture and microprogram optimization, and combinatorics and graph theory. He is also the 1997 recipient of the IEEE Computer
Society’s Meritorious Service Award for excellent service contributions to IEEE
TRANSACTIONS ON VLSI SYSTEMS and the Society, and was elected a Fellow of
the Society for Design and Process Science, USA, in 1998, for his accomplishments in integration of disciplines, theories and methodologies, development
of scientific principles and methods for design and process science as applied
to traditional disciplines of engineering, industrial leadership and innovation,
and educational leadership and creativity. In recognition as one of the distinguished core of dedicated volunteers and staff whose leadership and services
made the IEEE Computer Society the world’s preeminent association of computing professionals, he was made a Golden Core Member of the Computer
Society in 1998. He is also the recipient of the IEEE Circuit and Systems Society’s Certificates of Appreciation for services rendered as Associate Editor,
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION SYSTEMS, during
1995–1996 and during 1997–1998, and of the IEEE Computer Society’s
Certificates of Appreciation for services rendered to the Society as Member of
the Society’s Fellow Evaluation Committee, once in 1998, and then in 1999.
Dr. Das serves as a Member of the IEEE Computer Society’s Fellow Evaluation Committee for 2001, as well. Finally, he is the recipient of the prestigious
Rudolph Christian Karl Diesel Best Paper Award of the Society for Design and
Process Science in recognition of the excellence of their paper presented at the
Fifth Biennial World Conference on Integrated Design and Process Technology
held in Dallas, TX during June 4–8, 2000. He is a Fellow of the IEEE (with
separate membership in the IEEE Computer Society, IEEE Systems, Man, and
Cybernetics Society, IEEE Circuits and Systems Society, and IEEE Instrumentation and Measurement Society), and a Member of the ACM. He was elected a
Fellow of the IEEE in 1994 for contributions to switching theory and computer
design.
C. V. Ramamoorthy (M’57–SM’76–F’78–LF’93)
received two B.Sc. degrees in physics and technology
from the University of Madras, Madras, India, two
M.S. degrees in mechanical engineering from the
University of California, Berkeley, CA, and the M.S.
and Ph.D. degrees in applied mathematics (computer
science) from Harvard University, Cambridge, MA,
the latter in 1964. His education was supported
by the Computer Division of the Honeywell Inc.,
Waltham, MA, a company he was associated with
until 1967, last as a Senior Staff Scientist.
He joined the University of Texas, Austin, TX, as a Professor in the
Department of Electrical Engineering and Computer Science. After serving as
Chairman of the Department, he joined the University of California, Berkeley,
in 1972 as Professor of Electrical Engineering and Computer Sciences,
Computer Science Division, a position that he still holds as Professor Emeritus.
He supervised more than 70 doctoral students in his career. He has held the
Control Data Distinguished Professorship at the University of Minnesota,
Minneapolis, MN, and Grace Hopper Chair at the U.S. Naval Postgraduate
School, Monterey, CA. He was also a Visiting Professor at the Northwestern
University, Evanston, IL, and a Visiting Research Professor at the University
of Illinois, Urbana-Champaign. He is a Senior Research Fellow at the ICC
Institute of the University of Texas. He has published extensively—more than
200 papers—and has also coedited three books. He has worked on and holds
patents in computer architecture, software engineering, computer testing and
diagnosis, and in databases. He is currently involved in research on models and
methods to assess the evolutionary trends in information technology.
Dr. Ramamoorthy received from the IEEE Computer Society the Group
Award and Taylor Booth Award for education, Richard Merwin Award for
outstanding professional contributions, and Golden Core Recognition, and is
the recipient of the IEEE Centennial Medal, and IEEE Millennium Medal.
He also received the Computer Society’s Kanai-Hitachi Award for the year
2000 for pioneering and fundamental contributions in parallel and distributed
computing. He is a Fellow of the IEEE and of the Society for Design and
Process Science, from which he received the R. T. Yeh Distinguished Achievement Award in 1997. He also received the Best Paper Award from the IEEE
Computer Society in 1987. Three international conferences were organized in
his honor, and one UC Berkeley Graduate Student Research Award, and two
international Conferences/Society Awards have been established in his name.
He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON SOFTWARE
ENGINEERING. He is the founding Editor-in-Chief of the IEEE TRANSACTIONS
ON KNOWLEDGE AND DATA ENGINEERING, which recently published a Special
Issue in his honor. He is also the founding Co-Editor-in-Chief of the International Journal of Systems Integration (New York: Elsevier North-Holland),
and of the Journal for Design and Process Science (Austin, TX: SDPS). He
served in various capacities in the IEEE Computer Society including its First
Vice President, and a Governing Board Member. He served on several advisory
boards of the Federal Government and of the Academia that include the U.S.
Army, Navy, Air Force, DOE’s Los Alamos Laboratory, University of Texas,
and State University System of Florida. He is one of the founding Directors
of the International Institute of Systems Integration in Campinas, Brazil,
supported by the Federal Government of Brazil, and for several years, was
a Member of the International Board of Advisors of the Institute of Systems
Science of the National University of Singapore, Singapore.
Mansour H. Assaf received the Honors degree in applied physics from the Lebanese University in Beirut,
Lebanon, in 1989, and the B.A.Sc. and M.A.Sc. degrees in electrical engineering from the University of
Ottawa, Ottawa, ON, Canada, in 1994 and 1996, respectively, where he is currently pursuing the Ph.D.
degree in electrical and computer engineering.
From 1994 to 1996, he was associated with the
Fault-Tolerant Computing Group of the University
of Ottawa, where he studied and worked as a
Researcher. After working at the Applications
Technology, a subsidiary of Lernout and Hauspie Speech, McLean, VA, in
the area of software localization and natural language processing, he joined
the Sensing and Modeling Research Laboratory of the University, where he
currently works on projects in the field of human–computer interaction, 3-D
modeling, and virtual environments. His research interests are in the areas
of human–computer interactions and perceptual-user interfaces, and in fault
diagnosis in digital systems.
Emil M. Petriu (M’86–SM’88–F’01) received the
Dipl.Eng. and Dr.Eng. degrees from the Polytechnic
Institute of Timisoara, Romania, in 1969 and 1978,
respectively.
He is currently a Professor of Electrical and Computer Engineering and Director of the School of Information Technology and Engineering (SITE) at the
University of Ottawa, Ottawa, ON, Canada, where he
has been since 1985. He was a Visiting Scholar at the
Department of Applied Physics, Technical University
of Delft, The Netherlands (1979 and 1982), Visiting
Researcher at the Canadian Space Agency (1992), and a Visiting Professor at
the Research Institute of Electronics, Shizuoka University in Hamamatsu, Japan
(1994). His research interests include test and measurement systems, interactive
virtual environments, intelligent sensors, robot sensing and perception, neural
networks, and fuzzy control. During his career, he has published more than 150
technical papers, authored two books, edited two other books, and received two
patents.
Dr. Petriu is a Registered Professional Engineer in the Province of Ontario,
Canada. He was elected a Fellow of the Engineering Institute of Canada
(2000), Fellow of the Canadian Academy of Engineering (2001) and a Fellow
of the IEEE in 2001. He is currently serving as Vice-President (Publications),
a Member of the AdCom, and a Co-Chair of the TC-15 of the IEEE Instrumentation and Measurement Society. He is also an Associate Editor of the IEEE
TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, and a Member of
the Editorial Board of the Instrumentation and Measurement Magazine.
Wen-Ben Jone (S’85–M’88–SM’01) was born in
Taipei, Taiwan, R.O.C. He received the B.S. degree
in computer science and the M.E. degree in computer
engineering, both from the National Chiao-Tung
University, Hsinchu, Taiwan, in 1979 and 1981,
respectively, and the Ph.D. degree in computer
engineering and science from Case Western Reserve
University, Cleveland, OH, in 1987.
In 1987, he joined the Department of Computer
Science at the New Mexico Institute of Mining and
Technology, Socorro, where he was promoted to Associate Professor in 1992. From 1993 to 2000, he was with the Department of
Computer Engineering and Information Science, National Chung-Cheng University, Chiayi, Taiwan. He was a Visiting Research Fellow with the Department
of Computer Science and Engineering, the Chinese University of Hong Kong,
in the summer of 1997. Since 2001, he has been with the Department of Electrical and Computer Engineering and Computer Science, University of Cincinnati, OH. His research interests include VLSI design for testability, built-in
self-testing, memory testing, high-performance circuit testing, and low-power
circuit design. He has published more than 80 papers and served as a reviewer
in these research areas in various technical journals and conferences.
Prof. Jone is listed in the Marquis Who’s Who in the World (15th ed., 1998,
2001). He has also served on the program committee of VLSI Design/CAD
Symposium (since 1993, in Taiwan), the General Chair of 1998 VLSI Design/CAD Symposium, 1995, 1996, 2000 Asian Test Conference, 1995–1998
Asia and South Pacific Design Automation Conference, 1998 International
Conference on Chip Technology, and 2000 International Symposium on Defect
and Fault Tolerance in VLSI Systems. He received the Best Thesis Award from
The Chinese Institute of Electrical Engineering (R.O.C.) in 1981.