
Also, there is difficulty in assigning the computed points to the correct intersection branches.
If the surface is complicated, such as a BBP of high (>3) degree,
then the plane-curve intersection is costly, since root localization
must be performed for each curve; that is, one must seek as
many points of intersection as possible, and this number is bounded by the degree of the curve.
2) The second method is presently in use. It finds only those
intersection curves starting on boundaries of the patch, and does
not attempt to locate scalp cuts. However, the programs were
written as a function of variable patch domain values, and it is conceivable, downstream, to allow the user to override those values
in cases where he believes scalping can occur. E.g., an FCP is
defined for 0 < u < 1 and 0 < v < 1. The values 0 and 1 are variable input data to the subprograms in the INT package; as far as
these programs are concerned, the values could just as well be 0.7
< u < 0.9 and 0.2 < v < 0.5. This has application, also, to cut
delimitation. At present, however, the assumption that the plane
meets boundaries is a practical compromise and enables relatively
quick computational turnaround. It should be noted that scalping
can be handled on FCNET's and SPLCGNIC's as long as the intersection arcs cross internal boundaries of these surfaces.
This method is numerically better behaved than 1) above, since
plane-surface arc intersections involve either constant-u or
constant-v arcs, depending on the orientation of the plane with
respect to the current quadrilateral. Thus, the near-tangent problems encountered in 1) are practically avoided.
Also, this method only needs to analyze a constant-u or a
constant-v curve locally for just one intersection, since it deals
with a fine subdivision of the patch. This fact significantly reduces computation time.
The technique automatically creates ordered strings of points
and there are no problems with correctly assigning the computed
points to the right branch of the cut.
3) The third method also develops cuts from boundary points.
However, where method 2) requires the solution of one nonlinear
equation in one unknown, method 3) requires the solution of two
nonlinear equations in two unknowns.
An advantage of method 3) is that it allows the user to specify point
spacing via the step length Δ. He can do this either by entering a
fixed value for Δ or by entering a chord-height tolerance value, from
which the program determines the correct Δ at each step. Both
options require significantly more work than method 2), and the
second option, in fact, requires significantly more effort than the
first.
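By contrast, a hedged sketch of method 3)'s marching step is given below: each new point is found by solving two nonlinear equations in two unknowns (stay on the plane; advance by the step length Δ), here with Newton's method and a finite-difference Jacobian. The patch, starting point, and the fixed-Δ choice are illustrative assumptions; the chord-height option would recompute Δ at every step.

```python
import numpy as np

def patch(u, v):
    # Illustrative parametric surface P(u, v); any smooth patch would do.
    return np.array([u, v, 0.3 * np.sin(2.0 * u) + 0.2 * v * v])

def march_step(p0, n, uv_prev, delta, uv_guess, iters=25):
    """One marching step: solve F(u, v) = 0 by Newton's method, where
    F1 = n . (P(u, v) - p0)                 (the point stays on the plane)
    F2 = |P(u, v) - P(uv_prev)|^2 - delta^2 (it advances by the step length)."""
    q_prev = patch(*uv_prev)

    def F(uv):
        q = patch(*uv)
        return np.array([np.dot(n, q - p0),
                         np.dot(q - q_prev, q - q_prev) - delta * delta])

    uv = np.array(uv_guess, dtype=float)
    for _ in range(iters):
        h = 1e-6                             # finite-difference Jacobian
        J = np.column_stack([(F(uv + np.array([h, 0.0])) - F(uv)) / h,
                             (F(uv + np.array([0.0, h])) - F(uv)) / h])
        uv = uv - np.linalg.solve(J, F(uv))
        if np.linalg.norm(F(uv)) < 1e-12:
            break
    return uv

# March a few points along the cut, starting from a point already on it.
p0, n = np.array([0.0, 0.0, 0.15]), np.array([0.0, 0.0, 1.0])
uv = np.array([0.0, 0.8660254])              # lies on the plane z = 0.15
for _ in range(5):
    uv = march_step(p0, n, uv, delta=0.05, uv_guess=uv + np.array([0.02, -0.03]))
    print(uv, patch(*uv))
```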
ALGORITHM OVERVIEW FOR HOTSPOT OPERATOR

This operator determines points on a surface P(u,v) at which
the normal is parallel to a given direction vector W ≠ 0.
Determining values of (u,v) such that W is orthogonal to the
tangent plane at P(u,v) amounts to solving the following system
of two equations in two unknowns,

∂P/∂u · W = 0
∂P/∂v · W = 0.

These equations are assumed independent and to produce isolated points. This is equivalent to finding the intersection points
of the line defined by the point (0,0,0) and direction W with the
spherical image, N(u,v), of P(u,v). Thus the general line intersection algorithm in the intersection operator can be used to find
the desired (u,v)-values. This is a demonstration of the usefulness
of considering the surface algorithms as operating on functions
of the form aP(u,v) + bN(u,v), where for HOTSPOT, a = 0 and b = 1.
This operator has applications to radar cross section studies
of monostatic backscatter. Here, additional geometric information is needed, such as principal curvatures (i.e., minimum and
maximum curvatures), which the system optionally provides.
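A minimal numerical sketch of the HOTSPOT condition follows. It applies Newton's method directly to the two equations above on an assumed paraboloid patch, rather than reusing the system's line/spherical-image machinery.

```python
import numpy as np

def patch(u, v):
    return np.array([u, v, u * u + v * v])     # an assumed paraboloid patch

def d_patch(u, v, h=1e-6):
    """Finite-difference tangent vectors dP/du and dP/dv."""
    pu = (patch(u + h, v) - patch(u - h, v)) / (2 * h)
    pv = (patch(u, v + h) - patch(u, v - h)) / (2 * h)
    return pu, pv

def hotspot(W, uv0, iters=25):
    """Find (u, v) where the surface normal is parallel to W, i.e., where W
    is orthogonal to both tangent vectors."""
    uv = np.array(uv0, dtype=float)

    def G(uv):
        pu, pv = d_patch(*uv)
        return np.array([np.dot(pu, W), np.dot(pv, W)])

    for _ in range(iters):
        h = 1e-5                                # finite-difference Jacobian
        J = np.column_stack([(G(uv + np.array([h, 0.0])) - G(uv)) / h,
                             (G(uv + np.array([0.0, h])) - G(uv)) / h])
        uv = uv - np.linalg.solve(J, G(uv))
        if np.linalg.norm(G(uv)) < 1e-12:
            break
    return uv

W = np.array([-0.2, -0.1, 1.0])                 # given direction vector
print(hotspot(W, uv0=(0.5, 0.5)))               # -> approximately (0.1, 0.05)
```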
ACKNOWLEDGMENT
The following persons significantly contributed to the organization and implementation of the numerical geometry system:
A. J. Bima, R. M. Burkley, R. H. Daw, R. E. Dickle, B. Dimsdale,
K. L. Johnson, P. D. Josephson, L. B. Lichten, W. J. Martin, J.
P. Mayfield, J. G. Sakamoto, G. J. Silverman, and J. H. Wisser.
Hardware Verification
J. PAUL ROTH
Abstract-The need for verification of hardware designs is
particularly important for large-scale-integration technologies
because of the great cost, in time and money, for engineering
changes. This correspondence describes an efficient means for
determining the equivalence of a behavioral, high-level, i.e.,
flowchart, definition of the design and a detailed regular logic
design. It may be used between compatible high-level as well as
low-level designs. A compiler RTRAN transforms the high-level
definition into a low-level design, and a program VERIFY determines the equivalence of two such regular logic designs. It seeks to compute a
counterexample, starting at the outputs, rather than trying input patterns exhaustively. Experimentally, VERIFY has proven vastly
superior to exhaustive simulation. These methods have been used
routinely for very large designs.
Index Terms-Hardware, logic design, testing LSI, verification.
Manuscript received December 2, 1975; revised July 6, 1976, October 27, 1976,
and December 27, 1976.
The author is with the IBM Thomas J. Watson Research Center, Yorktown
Heights, NY 10598.
Fig. 1. Verification flow: designers produce a manual design from the behavioral definition while RTRAN produces an automatic design; VERIFY compares the two and yields either counterexamples or a proof of equivalence.
I. REGULAR LOGIC DESIGN
A logic circuit for LSI technology performs functions such as
AND, NAND and the like, with typically a few inputs and a single
output. A logic design is an interconnection of logic circuits, with
no circuit connected to itself. Such an interconnection is in general indeterminate, in that a given state and input pattern do not
determine the output and next state; in general it contains races
and hazards. Regular logic design was defined originally to ensure
that its behavior was determinate, mainly to guarantee that tests
computed by the D-algorithm for cyclic logic would be valid. The
methods of verification described here depend on its use.
Consider an acyclic (no feedback) logic design; if feedback is
desired between an input-output pair, then a pair of registers is
inserted in the loop. Such a structure is termed a regular logic
module (RLM). (These registers are gated at different clock
times.) A regular logic design (RLD) is an interconnection of
RLM's; its input and output registers are gated by nonoverlapping clock pulses; finally, it is possible to SCAN (have direct access
to) the inputs and outputs of the RLD directly (using shift registers [4]). An RLD is always determinate. Any function can be
realized by a regular logic design. Cornish [6] of Texas Instruments discusses an interesting usage of regular design in testing
and maintenance.
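The structural rule behind an RLM can be sketched briefly: the combinational gates must form an acyclic graph, with any feedback routed through registers. The netlist encoding below is hypothetical, not IBM's format.

```python
# Each gate maps its output net to the list of nets it reads.
# 'REG:' marks a register output, which acts as a primary source for the
# combinational graph (its value only changes on a clock pulse).
gates = {
    "n1": ["a", "b"],          # NAND(a, b)
    "n2": ["n1", "REG:r1"],    # NAND(n1, r1): feedback enters via register r1
    "r1_next": ["n2"],         # data input of register r1
}

def combinational_cycle_free(gates):
    """Depth-first search for a cycle among combinational nets only."""
    state = {}                                   # net -> "visiting" or "done"

    def visit(net):
        if net.startswith("REG:") or net not in gates:
            return True                          # register outputs / primary inputs
        if state.get(net) == "done":
            return True
        if state.get(net) == "visiting":
            return False                         # found a combinational loop
        state[net] = "visiting"
        ok = all(visit(src) for src in gates[net])
        state[net] = "done"
        return ok

    return all(visit(net) for net in gates)

print(combinational_cycle_free(gates))           # True: the loop goes through r1
```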
II. A HIGH-LEVEL HARDWARE LANGUAGE AND COMPILER
PL/R is originally a realization (by H. Halliwell) of the regular
notation developed as a means to define complex algorithms. It
was seen as a means to describe hardware at a high level, permitting a compiler RTRAN to transform this PL/R definition into
an RLD automatically. PL/R is a minuscule variant of PL/I and
follows its conventions. Variables are declared to have a certain
number of bits, and logical operations on variables are expressed in
PL/I notation. In the development of large-scale designs, it is
common to define the design in some high-level form such as
sequence charts or flowcharts. PL/R may be considered a realization of such conventions as the flowchart. It is possible to
employ macros to build complex functions from simpler ones;
RTRAN handles these automatically for the verification.
A compiler was designed by Halliwell [5] which transformed
algorithms written in PL/R into RLD's. (We also have an inverse
program R* mapping RLD's into PL/R.) The original purpose
for RTRAN was to produce quality logic from a high level. For
verification, RTRAN's implementation consists of all NAND's. In
the verification application described here, the logic produced by
RTRAN is verified against designs produced manually
or by other means.
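The all-NAND target can be illustrated with a small sketch showing how the usual Boolean operators of a high-level description reduce to NAND alone; this is the standard rewriting, not RTRAN's actual code generation.

```python
def nand(*xs):
    return 0 if all(xs) else 1

def not_(a):       return nand(a)                 # NOT x   = NAND(x)
def and_(a, b):    return nand(nand(a, b))        # x AND y = NAND(NAND(x, y))
def or_(a, b):     return nand(nand(a), nand(b))  # x OR y  = NAND(NOT x, NOT y)

# Exhaustive check that the rewrites agree with the usual operators.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert not_(a) == (1 - a)
print("NAND rewrites verified on all input patterns")
```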
III. VERIFICATION

Consider Fig. 1. Here we start with a behavioral definition of
algorithms written in PL/R or any flowchart language. It is assumed that the logic designer produces a manual implementation
of these algorithms. In parallel, RTRAN produces an automatic
implementation. Then VERIFY seeks a COUNTEREXAMPLE to
their equivalence. If VERIFY finds no counterexample then the
two designs are equivalent. Of course VERIFY accepts any pair
of compatible hardware designs. A new design could be verified
against an old standard design. Or two compatible high-level
hardware designs can be compared.
VERIFY assumes a one-to-one correspondence between inputs,
outputs, and registers. Two such designs are said to be INEQUIVALENT if there is an input and state such that the output or next
state differs. This is a definition stronger than the classical one
[2]. But in our designs, both high- and low-level, we are given and
take advantage of this one-to-one correspondence. It also makes
practicable the determination of equivalence. VERIFY is a specialization of CONSISTENCY of the D-algorithm [1]. It starts with
the primary outputs or register outputs. The first step is to select
a corresponding pair of outputs, primary or register, and to extract only the logic which feeds, directly or indirectly, these outputs. This is usually a small fraction of the total design, particularly for control designs. The FUNCTION of the design in this
computation is expressed by means of a singular cover, each
circuit in the design being represented by a few singular cubes. Instead
of blindly applying, as in simulation, input/state patterns in
search of one which reveals a disagreement, in VERIFY we seek
to CONSTRUCT such a counterexample starting with an output
pair. Opposite values are assigned to these paired output variables. Since the circuits are all NANDs (four inputs or less), at most
four prime cubes (implicants) of variables (instead of 2^4 = 16)
are available to justify the value. From there VERIFY is defined
recursively. For each assignment, as in the D-algorithm, all
possible implications are made. Having made assignments and
taken their implications, if a test-cube pattern has not yet been
computed, a new assignment is made of one of the inputs of lines
already assigned and not yet justified. The process continues until
the primary inputs and states are reached and a test, a counterexample, is developed or else no more decisions remain to be
made, in which case the designs are equivalent. VERIFY computes a test
CUBE, so that in general many inputs are left unspecified, adding
to the generality of the test. Unlike simulation, VERIFY utilizes
the structure of the designs in its calculations. Experimentally
the superiority of VERIFY over simulation is evident. It appears
difficult to construct a proof to this effect. Also, unlike Hennie's
study [2], by construction of an RLD we are able, in a practical design
sense as well as theoretically, to dispense with sequences of inputs
and deal with single inputs and states. The system is practically
useful.
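The flavor of this backward search can be conveyed by a simplified sketch, which is not IBM's VERIFY: it omits singular covers and cube notation, but it does start from a pair of corresponding outputs forced to opposite values, propagates the forced implication of a 0-valued NAND, and makes (and backtracks) a decision for each 1-valued NAND until the shared primary inputs are reached.

```python
def seek_counterexample(design_a, design_b, out_a, out_b):
    """Each design maps a NAND gate's output net to its list of input nets; nets
    that are not gate outputs are primary inputs, shared by name between the two
    designs (the assumed one-to-one correspondence). Returns a partial input
    assignment (a test cube) that makes out_a and out_b disagree, or None."""

    def key(tag, net, design):
        # Primary inputs get the shared tag "x"; gate nets keep their design tag.
        return ("x", net) if net not in design else (tag, net)

    def solve(pending, env):
        if not pending:
            return env                           # every requirement is justified
        (tag, net), value = pending[0]
        rest = pending[1:]
        if (tag, net) in env:                    # implication / consistency check
            return solve(rest, env) if env[(tag, net)] == value else None
        env = dict(env)                          # copy so backtracking can discard it
        env[(tag, net)] = value
        if tag == "x":                           # primary input: simply record it
            return solve(rest, env)
        design = design_a if tag == "a" else design_b
        srcs = [key(tag, s, design) for s in design[net]]
        if value == 0:                           # NAND = 0 forces every input to 1
            return solve([(s, 1) for s in srcs] + rest, env)
        for s in srcs:                           # NAND = 1: decide which input is 0
            result = solve([(s, 0)] + rest, env)
            if result is not None:
                return result
        return None

    for va, vb in ((1, 0), (0, 1)):              # corresponding outputs must differ
        env = solve([(key("a", out_a, design_a), va),
                     (key("b", out_b, design_b), vb)], {})
        if env is not None:
            return {net: v for (tag, net), v in env.items() if tag == "x"}
    return None                                  # this output pair cannot disagree

# Example: an AND built from NANDs versus a buggy variant missing the inverter.
good = {"t": ["a", "b"], "out": ["t", "t"]}      # out = NAND(NAND(a,b), NAND(a,b)) = a AND b
buggy = {"out": ["a", "b"]}                      # out = NAND(a, b)
print(seek_counterexample(good, buggy, "out", "out"))   # -> {'a': 1, 'b': 1}
```

The returned assignment is partial, like the test cube mentioned above: inputs it does not mention may take either value.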
REFERENCES
[1] J. P. Roth, "Diagnosis of automata failures: A calculus and a method," IBM J. Res. Develop., 1966.
[2] F. C. Hennie, Finite-State Models for Logical Machines. New York: Wiley, 1968.
[3] J. P. Roth, "VERIFY (a design verifier)," IBM Tech. Disclosure Bull., 1973.
[4] M. J. Y. Williams and J. B. Angell, "Enhancing testability of large-scale integrated circuits via test points and additional logic," IEEE Trans. Comput., 1973.
[5] H. Halliwell and J. P. Roth, "SCD: A system for computer design," IBM Tech. Disclosure Bull., 1974.
[6] M. Cornish, "The D-algorithm for sequential circuits: An extension and its application," IEEE Computer Society Repository, 1976.
Synthesis of Multithreshold Tree Networks
OKIHIKO ISHIZUKA
Abstract-A multithreshold network realizes a multithreshold
logic function of an equivalent analog excitation e which in turn
could be realized by a weighted input network of Boolean variables.
This correspondence presents the synthesis of multithreshold tree
networks with conventional single-threshold elements. The maximum number of thresholds realized by the tree network is derived
by using the graphical method. Each value of these thresholds can
be set arbitrarily.
Index Terms-Graphical method, maximum number of thresholds, multithreshold network, single-threshold element, tree network.

Manuscript received June 1, 1976.
The author is with the Faculty of Engineering, Miyazaki University, Miyazaki-shi, Japan.
I. INTRODUCTION
The logical flexibility of a multithreshold element is due to the
number of thresholds in the element. In practice, however, the
element has only three or fewer thresholds. Therefore a multithreshold logic function having four or more thresholds can be
realized only by a multithreshold network, i.e., a feedforward
constant-weight network with single-threshold or three-threshold
elements [1], [2]. In other words, a multithreshold network realizes a multithreshold function of an equivalent analog excitation e which in turn could be realized by a weighted input network of Boolean variables. Although there exist constraints
among the thresholds realized by a multithreshold network
generally, the cascade network or the two-level network with k
single-threshold elements can realize an arbitrary (2k - 1)-threshold logic function [1], [3]. However, when k is large, the
number of levels in the cascade network increases, or the fan-in
of the second-level element in the two-level network increases.
In this correspondence the synthesis of multithreshold tree
networks is described. It will be seen that the tree network with
k single-threshold elements can also realize any (2k - 1)-threshold logic function; besides, the number of levels and the
fan-in of each element can be chosen appropriately. The graphical
method used for the synthesis was introduced by the author [4].
It is based on a similar method presented by Sheng [5].
II. GRAPHICAL REPRESENTATION
First we consider a single-threshold element in a multithreshold network. The inputs to this element are classified into two
groups, i.e., the external and the internal inputs. The former are
the input variables of the network and the latter are the outputs
of other elements connected to the element. Thus, the output
function of a single-threshold element is defined as

f = 1,  if t ≤ W·X + A·F
f = 0,  otherwise                                   (1)

where
f = output function of the element,
t = threshold of the element,
X = input variable (external input) vector,
W = weight vector associated with X,
F = (f1, f2, ..., fj) = internal input vector,
A = (a1, a2, ..., aj) = weight vector associated with F,
j = number of elements connected to the element.

According to (1), the symbol of the element is shown in Fig. 1.
The thresholds related to the element are listed as follows:
1) threshold of the element, i.e., inherent threshold (IT);
2) thresholds contained in F, i.e., existing thresholds (ET's);
3) thresholds contained in f, i.e., realized thresholds (RT's);
4) thresholds contained in both F and f, i.e., survived thresholds (ST's);
5) thresholds contained in F, but not in f, i.e., extinct or disappeared thresholds (DT's); and
6) thresholds contained in f, but not in F, i.e., generated thresholds (GT's).

Let the linear weighted sums W·X and A·F be the excitation
e and the internal function g, respectively, and let (t - e) be the
decision line h. Then (1) can be rearranged as

f = 1,  if h ≤ g
f = 0,  otherwise                                   (2)

where
h = t - e,
g = A·F,
e = W·X.
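A small sketch of the element defined by (1) may help; the weights and threshold below are illustrative values, not taken from the paper.

```python
def element_output(W, X, A, F, t):
    """f = 1 if t <= W.X + A.F, else 0 (external inputs X, internal inputs F)."""
    total = sum(w * x for w, x in zip(W, X)) + sum(a * f for a, f in zip(A, F))
    return 1 if total >= t else 0

# Example: three external inputs with weights (2, 1, 1), one internal input
# with weight -3, and inherent threshold 2.
print(element_output(W=(2, 1, 1), X=(1, 0, 1), A=(-3,), F=(1,), t=2))   # -> 0
print(element_output(W=(2, 1, 1), X=(1, 0, 1), A=(-3,), F=(0,), t=2))   # -> 1
```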
f and h are functions of e. And g is also a function of e because
it is assumed that the same excitation is applied to all threshold
elements in the network. Therefore they are represented as f(e),
h(e), and g(e). We may consider these functions on a graph, in
which the vertical axis of the various plots is dimensionally
equivalent to the horizontal axis (they are both in excitation
units). h(e) is a straight line having a slope of -1. g(e) is a sequence of vertical and horizontal segments. The vertical segments
are composed of ET's. The horizontal segments are composed
of combinations of the weights for the internal inputs. The points
of intersection of the decision line and vertical segments result
in ST's and those of the decision line and horizontal segments
result in GT's. Hence, the intersections of h(e) and g(e) result in RT's. The
values on the e-axis of the vertical segments that do not intersect
h(e) are DT's.
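This construction can be checked numerically with a hedged sketch: let the internal inputs switch at their own thresholds (the ET's), sweep the common excitation e, and read off where f(e) toggles. The numbers below are made up and are not those of Fig. 2.

```python
import numpy as np

ET = [1.0, 2.0, 4.5]            # thresholds of the internal inputs (ET's)
A = [2.0, -3.0, 2.5]            # weights on the internal inputs
t = 3.0                         # inherent threshold (IT) of the element

def f_of_e(e):
    F = [1.0 if e >= th else 0.0 for th in ET]            # internal inputs
    g = sum(a * fi for a, fi in zip(A, F))                 # internal function g(e)
    h = t - e                                              # decision line h(e)
    return 1 if g >= h else 0

es = np.linspace(0.0, 6.0, 60001)
vals = [f_of_e(e) for e in es]
RT = [round(es[i], 3) for i in range(1, len(es)) if vals[i] != vals[i - 1]]

ST = [th for th in RT if any(abs(th - x) < 1e-3 for x in ET)]      # realized and existing
GT = [th for th in RT if th not in ST]                             # realized but newly generated
DT = [x for x in ET if not any(abs(x - th) < 1e-3 for th in RT)]   # existing but not realized
print("RT's:", RT, " ST's:", ST, " GT's:", GT, " DT's:", DT)
# -> RT's: [1.0, 2.0, 4.0]  ST's: [1.0, 2.0]  GT's: [4.0]  DT's: [4.5]
```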
The output function f(e) can be obtained by mapping g(e) and
h(e). Such an example is shown in Fig. 2. Six kinds of thresholds
are contained in Fig. 2, i.e.,
1) IT (t);
2) ET's (t2, t3, t5, t6, t8);
3) RT's (t1, t2, t4, t5, t7, t8, t);
4) ST's (t2, t5, t8);
5) DT's (t3, t6);
6) GT's (t1, t4, t7, t).

III. NUMBER OF THRESHOLDS REALIZED BY A SINGLE-THRESHOLD ELEMENT
In this section we will establish some properties about the
number of RT's of a single-threshold element. In each property,
let
mei = number of ET's of the ith internal input,
me = Σ mei (i = 1, ..., j) = total number of ET's,