or on-line data-processing systems doing correlations or similar processes.
ACKNOWLEDGMENT
The author thanks the Department of Physics of the University
of Auckland, New Zealand, for the provision of research facilities.
Incrementing a Bit-Reversed Integer
DONALD FRASER
Abstract-A fast method of generating bit-reversed addresses
for the fast Fourier transform is described.
Index Terms-Bit-reversal, fast Fourier transform, indexing,
reversed integer.
INTRODUCTION
The use of numbers in "bit-reversed" order by some fast Fourier transform (FFT) algorithms has been described previously.¹ For example, the twelfth address of a series which begins at 0 is not binary (…1011) but (1101…); that is, "eleven" written in reverse. The scale of the address depends on the size of the FFT. For a 64-point FFT the above address is decimal 52. In the following it will be assumed that a bit-reversed integer is located at the extreme left of the register, its "unit" bit being the normal sign bit, so that post-operative scaling is required before the final address is obtained.
METHOD
The most obvious way to generate a bit-reversed address is to increment a normal integer and reverse the bits. This can be quite time-consuming. A much faster approach is to work entirely in bit-reversed form. When a binary integer is incremented by unity, the effect is to replace the least significant 0 by a 1, and to clear any less significant 1's. In normal arithmetic this is done by the automatic left-carry, but in reverse arithmetic no equivalent "right-carry" is available. Thus the problem of incrementing a bit-reversed integer by "unity" is simply that of finding the left-most 0, replacing it by a 1, and clearing any 1's to the left.
There are many ways of doing this: a step-by-step test-and-shift is not, on the average, as slow as it sounds, because the frequency of occurrence of an operation requiring n steps is 1/2^n.
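For concreteness, the test-and-shift rule can be written out as a short C sketch; the 6-bit register width, the left-justified unsigned representation, and the function name below are illustrative assumptions, not part of the note.

    #include <stdio.h>

    #define WIDTH 6   /* register width; 6 bits corresponds to a 64-point FFT index */

    /* Test-and-shift increment of a left-justified bit-reversed counter:
       find the left-most 0, set it, and clear the 1's to its left. */
    unsigned rev_increment(unsigned r)
    {
        unsigned bit = 1u << (WIDTH - 1);   /* start at the "unit" (sign) position */
        while (r & bit) {                   /* clear leading 1's ...               */
            r &= ~bit;
            bit >>= 1;                      /* ... stepping right one place        */
        }
        return r | bit;                     /* set the left-most 0 found           */
    }

    int main(void)
    {
        unsigned r = 0;
        for (int i = 0; i < 8; i++) {       /* print the first few bit-reversed addresses */
            printf("%2d -> %2u\n", i, r);
            r = rev_increment(r);
        }
        return 0;
    }

On average the loop runs only about one step per call, since an increment needing n steps occurs with frequency 1/2^n.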
Using floating-point hardware, a neater solution is possible. If the
bit-reversed number, treated as a normal fraction, is negative, then
conversion to floating-point and back to fixed-point form will automatically bring the left-most 0 into standard position next to the
sign bit. The two bits are interchanged (by adding binary 1100…) and
the whole "fraction" shifted back to its original position by means of
the floating-point exponent, 0's being introduced from the left. A
"positive fraction" must be treated separately or it will give an anomalous result. But since this case occurs half of the time, it is worth
giving it its own fast branch anyway, which, being trivial (a 1 to be
put in the first position), can bypass the double conversion.
Manuscript received July 19, 1968; revised September 26, 1968. This work was
supported by the Australian Research Grants Committee and the Australian Institute of Nuclear Science and Engineering.
The author is with the Department of Mechanical Engineering, University of Sydney, Sydney, Australia.
¹ G-AE Subcommittee on Measurement Concepts, "What is the fast Fourier transform?" IEEE Trans. Audio and Electroacoustics, vol. AU-15, pp. 45-55, June 1967.
For example, suppose we have

    Binary                              111011..    reversed integer decimal 55, or fraction -0.15625
    Convert to floating point           (1011..) x 2^-2
    Convert to fixed point              1011..      and exponent -2
    Reverse first two bits              0111..
    Shift right according to exponent   000111..    reversed integer decimal 56, or fraction +0.21875
    Alternate trivial-addition          100111..    reversed integer decimal 57, or fraction -0.78125

and so on.
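The steps above can be reproduced in software. The following C sketch is only an emulation of the double-conversion method under assumed conventions (a 6-bit register held in an unsigned word, and the function name, are ours); the shift count e stands in for the floating-point exponent of the KDF9 hardware.

    #include <stdio.h>

    #define WIDTH 6   /* 6-bit register, as in the 64-point example above (assumed) */

    unsigned rev_increment_fp(unsigned r)
    {
        unsigned top = 1u << (WIDTH - 1);

        if (!(r & top))                /* "positive fraction": trivial branch -  */
            return r | top;            /* just put a 1 in the first position     */

        int e = 0;                     /* normalize: shift left until the bit    */
        while (r & (top >> 1)) {       /* below the sign differs from the sign   */
            r = (r << 1) & ((1u << WIDTH) - 1);
            e++;
        }
        r ^= top | (top >> 1);         /* interchange the first two bits (add 1100..) */
        return r >> e;                 /* shift back, 0's entering from the left */
    }

    int main(void)
    {
        unsigned r = 0x3B;             /* 111011, i.e. reversed integer 55 */
        printf("%02X -> %02X\n", r, rev_increment_fp(r));   /* prints 3B -> 07 */
        return 0;
    }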
CONCLUSION
Speeds have been compared on an English Electric KDF9 computer by timing 100 000 increments. The test-and-shift method averages 35 µs per increment, while the combined trivial-addition and double-conversion method averages 17 µs. For comparison, incrementing a normal integer and reversing 13 bits (for an 8192-point FFT) takes 200 µs. Of course, individual machines will differ, but it
would seem that, in general, an order of magnitude improvement in
speed can be expected by using the above direct methods to generate
bit-reversed addresses.
Subtraction by Minuend Complementation
GLEN G. LANGDON, JR., SENIOR MEMBER, IEEE
Abstract-In performing the operation of subtraction in additive
systems, a popular practice is to complement the subtrahend and add.
A second method of performing subtraction, which seems to have
been overlooked, is to complement the minuend, add it to the subtrahend, and complement the result. In many cases, this second
method is more awkward; however, in two instances it seems to be
worthy of consideration. The first instance, surprisingly enough,
concerns BCD systems, where minuend complementation can compare favorably with more conventional methods of BCD subtraction.
The second instance concerns negative radix numbers where the
technique of minuend complementation seems to offer definite
advantages.
Index Terms-Computer arithmetic, negaradix, subtraction.
INTRODUCTION
If a binary up-counter has the capability of having its contents
inverted, the counter can also be used as a down-counter by the addition of a simple device. If the counter contents are first inverted, then
counted up, and finally reinverted, the result is that the counter contents have been counted down. This is a simple example of the general
technique of subtraction by complementing the minuend, adding the
subtrahend, and complementing the result.
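A minimal C sketch of this invert, count up, and reinvert sequence (the 8-bit width and the function name are assumptions made for illustration):

    #include <stdio.h>

    #define NBITS 8                               /* counter width (assumed) */
    #define MASK  ((1u << NBITS) - 1u)

    /* Down-count with an up-counter: complement, count up once, recomplement. */
    unsigned count_down(unsigned counter)
    {
        unsigned c = ~counter & MASK;             /* invert the contents      */
        c = (c + 1u) & MASK;                      /* count up                 */
        return ~c & MASK;                         /* reinvert: contents - 1   */
    }

    int main(void)
    {
        printf("%u %u %u\n", count_down(5), count_down(1), count_down(0));
        /* prints 4 0 255 (wraps around, as a real down-counter would) */
        return 0;
    }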
In this note, "complementation" means the radix-minus-one complement of each digit of a number representation. The term "difference operation" is used in the hardware sense. In positional number systems of positive radix which use signed magnitude numbers, the "difference operation" is defined as the addition of two numbers of unlike sign, or the subtraction of two numbers of like sign. In number systems using the radix-complement (or radix-minus-one-complement) representation for negative numbers, or in negaradix systems [1], the "difference operation" means subtraction.
Manuscript received July 10, 1968.
The author is with IBM Corporation, Systems Development Division, Endicott, N. Y.

In an additive system, the computer arithmetic unit has the basic capability of performing the operation of addition directly upon the
two operands. In order to avoid the cost of performing the difference
operation directly, addition may be performed with one operand
modified such that the result is indeed the difference D between the
minuend M and subtrahend S. Typically, the operand modification
consists of obtaining the negative of the subtrahend:
D = M + (-S).   (1)
The advantage of obtaining the difference in this manner, as opposed to providing the arithmetic unit with an adder-subtractor
(instead of just an adder), lies in the relative amount of logic circuits
required. The arithmetic unit usually has the ability to gate the negative of an operand to the adder (e.g., gating from the complement
output side of the operand register in binary systems), whereas a binary full adder-subtractor cell may cost 20 to 35 percent more in logic
circuits than a binary full adder cell.
The difference operation can also be performed in additive systems by taking the negative of the minuend M, adding the subtrahend S, and taking the negative of the result:

D = -(-M + S) = M - S.   (2)

In most number systems and arithmetic unit designs, the greater simplicity of (1) over (2) is significant. However, in some cases, the implementation of some form of (2) offers an advantage.

DECIMAL NUMBER SYSTEMS

In positive radix number systems (such as decimal), when the digit code is self-complementing (inverting each bit of the digit yields the radix-minus-one complement), implementation of (2) does not seem to offer any advantage over (1). However, let us consider the difference operation with signed magnitude decimal numbers using the 8421 BCD code. The code is not self-complementing, and one would like to avoid either expensive means of obtaining the nine's complement or the need for the nine's complement.

Consider the BCD adder for a single digit. One method of performing the addition is described by Richards ([2], p. 211). Here the bits of the operand digits serve as inputs to a 4-bit binary adder, and the uncorrected (hexadecimal) sum is then "six-corrected" if needed. The six-correction consists of adding 0110 (binary 6) to the uncorrected sum.

A variation of (1) may be employed for the difference operation on BCD digits. The bits of the subtrahend S may be inverted (forming the fifteen's complement of the 4-bit digit), and it may be added to the minuend M with a 1 (forced carry) added through the adder carry input. The uncorrected difference UD is expressed by

UD = M + (15 - S) + 1 (modulo 16).   (3)

If a carry was generated, the result is M - S + 16 - 16, which is correct. If no carry occurred, the binary quantity 0110 must be subtracted from UD. (This may be accomplished by adding 1010 and ignoring the carry.) The scheme requires, in addition to the 4-bit adder, a "plus six-corrector" and a "minus six-corrector."

Let us generalize (2) for cases where inverting each bit of a number or digit does not give the negative of a number, but does give a difference with respect to an inversion base I_n:

D = I_n - ((I_n - M) + S).   (4)

For example, inverting each bit of a BCD digit M gives the hexadecimal digit (15 - M). Here, the "inversion base" I_n is the number 15 (binary 1111). Now (2) is seen to be a special case of (4) with the inversion base I_n equal to 0.

Consider the implementation of (4). Let the bits of the BCD minuend M be inverted (forming the quantity 15 - M) and added to the subtrahend S. The uncorrected difference UD is determined by

UD = 15 - M + S (modulo 16).   (5)

If a carry occurred, S represents a larger magnitude than M (S > M) and the result is six-corrected (add 0110) before reinverting:

S > M:   D = 15 - (15 - M + S + 6) = M - S - 6.   (6)

But the effect of the "-6" (modulo 16) of (6) is the same as adding binary 1010 (modulo 16):

S > M:   D = (10 + M) - S (modulo 16).   (7)

If a carry did not occur in (5), then M > S and the uncorrected difference may be reinverted without six-correction:

M > S:   D = 15 - (15 - M + S) = M - S.   (8)

Fig. 1. Subtraction with a BCD adder.

The analysis thus verifies that the original six-correction circuits need not be modified; it is only necessary to invert the minuend bits and the output bits. Fig. 1 shows a functional block diagram of the circuit; the EXCLUSIVE-OR (⊕) block is used for selective inversion.

The scheme was explained in terms of signed magnitude decimal numbers for convenience. The reader may verify that the minuend complementation method remains valid when negative decimal numbers are represented in nine's or ten's complement form.

Another method of performing BCD arithmetic with a hexadecimal adder is described by Hellerman ([3], pp. 300-301). In this scheme, a "plus 6" circuit is required on the minuend inputs, and a "minus 6" circuit is required on the hexadecimal adder outputs. Addition is performed by adding 6 to one operand and subtracting it back out if there was no decimal carry. The difference operation is performed by inverting each bit of the subtrahend (the minuend remains unchanged) to the adder, and subtracting 6 from the result (or adding binary 1010) if there is no decimal carry. As before, the implementation shown in Fig. 1 uses fewer logic circuits. Another decimal code which is not self-complementing is the 5421 code ([2], p. 179). Surprisingly enough, the same technique used in Fig. 1 also works on the 5421 adder.
NEGARADIX SYSTEMS

The method will now be explained in terms of negabinary numbers, where the radix is minus two. The standard procedure of (1) requires the negative of the subtrahend. In order to obtain the negative (-S) of the subtrahend S, one could shift it left one bit position, yielding -2S, then add it to S, obtaining -S. The procedure is cumbersome and requires special overflow circuits.

Consider now the implementation of (4). Suppose that each bit of the minuend is inverted. Then, the number I_n = (-2)^(n-1) + (-2)^(n-2) + ... + (-2)^1 + 1 becomes the "inversion base" for n-bit negabinary numbers. (The inversion base for n-bit negadecimal numbers is 9(-10)^(n-1) + ... + 9(-10) + 9.) Inverting each bit of the negabinary minuend M gives the representation of a negabinary number whose value is I_n - M. Adding (in negabinary) the subtrahend S yields the result I_n - M + S = I_n - D. Reinverting each bit yields I_n - (I_n - D) = D, the desired result [4]. The only operations used are negabinary addition and bit inversion. Thus an economical way to perform negabinary subtraction results from (4).
Equation (4) is valid in all negaradix systems where the operation I_n - M is accomplished by taking the radix-minus-one complement of each digit. Replacing M by 0, one can obtain the negative of a negaradix number A by the procedure of adding it to the inversion base and then inverting each bit:

-A = I_n - (I_n + A).
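Equation (4) can also be checked in software. In the C sketch below the mask-based encode/decode functions merely stand in for the negabinary adder that the note assumes in hardware; the function names and the 32-bit word size are assumptions made for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* 32-bit negabinary (radix minus-two) sketch.  The encode/decode masks
       are the usual (x + 0xAAAAAAAA) ^ 0xAAAAAAAA trick, used here only to
       simulate a negabinary adder. */
    #define NB_MASK 0xAAAAAAAAu

    static uint32_t nb_encode(int32_t v)  { return ((uint32_t)v + NB_MASK) ^ NB_MASK; }
    static int32_t  nb_decode(uint32_t b) { return (int32_t)((b ^ NB_MASK) - NB_MASK); }

    /* Stand-in for a hardware negabinary adder. */
    static uint32_t nb_add(uint32_t a, uint32_t b)
    {
        return nb_encode(nb_decode(a) + nb_decode(b));
    }

    /* Subtraction by minuend complementation, equation (4):
       D = I_n - ((I_n - M) + S), where inverting all bits realises I_n - X. */
    static uint32_t nb_sub(uint32_t m_bits, uint32_t s_bits)
    {
        return ~nb_add(~m_bits, s_bits);
    }

    int main(void)
    {
        int32_t m = 19, s = 7;            /* arbitrary small test values */
        uint32_t d = nb_sub(nb_encode(m), nb_encode(s));
        printf("%d - %d = %d\n", m, s, nb_decode(d));
        return 0;
    }

Running it prints 19 - 7 = 12, exercising only bit inversion and the (simulated) negabinary addition.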
CONCLUSION
The method of performing the difference operation by minuend
complementation has been explored and is seen to be worthy of consideration in two instances. First, minuend complementation might
be advantageous in decimal systems using codes which are not self-complementary. Second, the inconvenience of forming the negative
of a negaradix number suggests a definite advantage to minuend
complementation.
ACKNOWLEDGMENT
A predecessor to the machine described in [5] also used BCD
minuend complementation and was designed by N. A. Fruci, H. W.
Hines, A. J. Greenberg, R. R. Clark, and the author.
REFERENCES
[1] M. P. deRegt, "Negative radix arithmetic" (pts. 1-8), Computer Design, May 1967-January 1968.
[2] R. K. Richards, Arithmetic Operations in Digital Computers. Princeton, N. J.: Van Nostrand, 1955.
[3] H. Hellerman, Digital Computer System Principles. New York: McGraw-Hill, 1967.
[4] G. G. Langdon, Jr., "Accumulator and improved adder design for radix minus two numbers," M.S. thesis, Dept. of Elec. Engrg., University of Pittsburgh, Pa., 1963.
[5] C. S. Gurski et al., "An approach to small machine design in large-scale integration," 1968 Internat'l Solid-State Circuits Conf., Digest of Technical Papers (Philadelphia, Pa., February 14-16, 1968), pp. 46-47.
Note on a Class of Statistical Recognition Functions

TAKAYASU ITO

Manuscript received March 26, 1968; revised June 27, 1968, and September 20, 1968. This note is based on a report presented at the January 22, 1965, meeting of the Technical Group on Automata and Control of the Institute of Electronics and Communication Engineers of Japan.
The author was with the Department of Computer Science, Stanford University, Stanford, Calif. 94305. He is now with the Central Research Laboratory, Mitsubishi Electric Corporation, Kamakura, Japan.

Abstract-Statistical recognition procedures can be derived from the functional form of underlying probability distributions. Successive approximation to the probability function leads to a class of recognition procedures. In this note we give a hierarchical method of designing recognition functions which satisfy both the least-square error property and a minimum decision error rate property, although our discussions are restricted to a binary measurement space and its dichotomous classification.

Index Terms-Binary measurement space, decision theory, dichotomy problem, expected decision error, Lagrangian multiplier, least-square error approximation, recognition function, Walsh function.

I. INTRODUCTION

There is as yet no general theory of pattern recognition. But from a classification theoretic point of view, the statistical method has been recognized to be one of the most important in pattern recognition studies. Statistical classification procedures can be derived from the functional form of underlying probability distributions. Successive approximation to the probability function leads to a class of recognition procedures. This view of binary measurement space is summarized and developed by Chow [4]. In [4], he gave a hierarchical system of recognition procedures, approximating probability distributions by 1) orthogonal expansion, and 2) a product of low-order conditional probabilities. As Chow noted, orthogonal expansion and product approximation are supposed to have the least-square error property and the maximum entropy property (in the sense of Lewis), respectively. Unfortunately, there is no means of relating their properties to the performance measure. It is ultimately desired that the approximating recognition functions be evaluated by the resultant performance in recognition. However, there is as yet no satisfactory solution to this problem of estimating or evaluating the recognition procedures. This problem has also arisen in our study and been developed to some extent [6]-[8].

In this note we present a Chow-like hierarchical system of designing recognition functions which satisfy both the least-square error property and a minimum decision error rate property, although our discussions are restricted to a binary measurement space and its dichotomous classification.

II. A DESIGN PROCEDURE OF STATISTICAL RECOGNITION FUNCTIONS

A. Statistical Recognition Function

Let {x} be the set of 2^n points x = (x_1, ..., x_n) with each x_i = 1 or -1. We concern ourselves with the dichotomy problem of the sets of points with some statistical distributions on {x}. Let the dichotomy classes be denoted by D = 1 and D = -1. Then the statistical properties of this binary measurement space for the dichotomy problem are given by {P(x, D = 1), P(x, D = -1)} for all x, where P(x, D = ±1) is the joint probability of the occurrence of x and D = ±1. Then we can derive an optimum decision function f_d in the sense of the minimum expected decision error as follows:

f_d = P(x, D = 1) - P(x, D = -1),   f_d > 0 ⇒ 1;  f_d < 0 ⇒ -1.   (1)

This situation is discussed in some detail in [6].

Using the finite Walsh functions, we can write f_d in the following orthogonal expansion:
f_d = Σ_{i=0}^{2^n - 1} c_i X_i,   c_i = (1/2^n)(P(X_i, D = 1) - P(X_i, D = -1)),   (2)

where

X_0 = 1,  X_i = x_i (i = 1, ..., n),  X_{n+1} = x_1 x_2, ..., X_{2^n - 1} = x_1 x_2 ··· x_n.   (3)
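As an illustration of (1)-(3), the following C sketch computes the coefficients c_i for a small, hypothetical joint distribution (n = 3; the probability tables and names are invented for the example) and classifies each point by the sign of f_d.

    #include <stdio.h>

    #define N 3                      /* number of binary measurements (assumed) */
    #define POINTS (1 << N)          /* 2^n points x = (x_1, ..., x_n), x_i = +/-1 */

    /* Walsh function X_i(x): product of the x_j selected by the bits of i.
       x is encoded as a bit pattern, bit j = 1 meaning x_j = -1. */
    static int walsh(int i, int x)
    {
        int v = 1;
        for (int j = 0; j < N; j++)
            if ((i >> j) & 1)
                v *= ((x >> j) & 1) ? -1 : 1;
        return v;
    }

    int main(void)
    {
        /* Hypothetical joint probabilities P(x, D = 1) and P(x, D = -1). */
        double p1[POINTS] = {0.10, 0.05, 0.15, 0.10, 0.05, 0.05, 0.05, 0.05};
        double p0[POINTS] = {0.02, 0.08, 0.03, 0.07, 0.05, 0.05, 0.05, 0.05};

        /* Coefficients of the expansion f_d(x) = sum_i c_i X_i(x), eq. (2). */
        double c[POINTS];
        for (int i = 0; i < POINTS; i++) {
            c[i] = 0.0;
            for (int x = 0; x < POINTS; x++)
                c[i] += walsh(i, x) * (p1[x] - p0[x]) / POINTS;
        }

        /* Decide each point by the sign of f_d(x), eq. (1). */
        for (int x = 0; x < POINTS; x++) {
            double fd = 0.0;
            for (int i = 0; i < POINTS; i++)
                fd += c[i] * walsh(i, x);
            printf("x=%d  f_d=%+.3f  D=%+d\n", x, fd, fd > 0 ? 1 : -1);
        }
        return 0;
    }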
In practice, the probability distributions given by {P(x, D = 1), P(x, D = -1)} are usually unknown to the designer, or one needs 2^n values to represent such a structure [4], [6]. This becomes impractical as n increases. Thus our central problem is to estimate or approximate this probability distribution, using the low-order functions.
For this purpose, Kanal [5] used Lazarsfeld-Bahadur expansion,
and Chow, Fukunaga and Ito, and others have used the finite Walsh
function [8]. Another approach, called "product approximation,"
has been developed by Lewis, Chow, and others. When we observe
only the resultant expressions in these approaches, we find that one
essentially uses the following partial probabilities for designing
recognition functions.
a) For the first-order approximation or the case of statistical
independence
{P(x_i = 1, D = 1), P(x_i = -1, D = 1), P(x_i = 1, D = -1), P(x_i = -1, D = -1)} for every i.   (4)
b) For the second-order approximation
{P(x_i = 1, x_j = 1, D = 1), P(x_i = -1, x_j = 1, D = 1), P(x_i = 1, x_j = -1, D = 1)}   (4')
for i, j = 1, ..., n (i < j),
and so forth.
Then it may be natural to ask: What is the optimum decision
function derived from only the partial probabilities as given by (4)
or (4')? To answer this question we derive a hierarchical system of
recognition procedures based on statistical decision theory.