Boolean Factoring with Multi-Objective Goals
Mayler G. A. Martins¹, Leomar Rosa Jr.¹, Anders B. Rasmussen², Renato P. Ribas¹ and Andre I. Reis¹
¹PGMICRO - Instituto de Informática – UFRGS / ²Nangate Inc.
¹{mgamartins, leomarjr, rpribas, andreis}@inf.ufrgs.br / ²[email protected]
Abstract — This paper introduces a new algorithm for
Boolean factoring. The proposed approach is based on a novel
synthesis paradigm, functional composition, which performs
synthesis by associating simpler sub-solutions with minimum
costs. The method constructively controls characteristics of
final and intermediate functions, allowing the adoption of
secondary criteria other than the number of literals for
optimization. This multi-objective factoring algorithm presents
interesting features and advantages when compared to previous
works.
I. INTRODUCTION
Factoring is an important procedure for logic synthesis
tools. It consists of converting a logic function into a
logically equivalent parenthesized expression, or factored
form, with a reduced number of literals. The input function is
usually represented in a sum-of-products or product-of-sums
form. The reduction of literals can affect the area taken by
the final implementation of the function. Yet, according to
[1], the only known optimality result for factoring (until
1996) was presented in [2]. Heuristic factoring techniques
have been proposed that achieved great commercial
success. These include the quick_factor (QF) and
good_factor (GF) algorithms available in the SIS tool [3].
Recently, factoring methods that produce exact results for
read-once factored forms have been proposed [4] and
improved [5]. However, the IROF algorithm [4-5] only
works for functions that can be represented by read-once
formulas. The Xfactor algorithm [6-7] is exact for read-once
forms and produces good heuristic solutions for functions
that are not included in this category.
Consequently, it is necessary to develop factoring
algorithms that are able to deal with multi-objective design
goals, considering topological properties (such as the number
of series and parallel switches in derived networks) while
reducing the number of literals. In this context, we propose
an algorithm that is able to: (1) minimize factored forms
taking multi-objective goals into account, (2) generate more
than one alternative solution, and (3) start from a functional
description.
The proposed algorithm is based on a principle called
functional composition. It associates simpler sub-solutions
with known costs, which allows the method to optimize cost
functions that take more than just literals into account. The
association starts with single-literal functions and computes
solutions with n+1 literals at each step, using the solutions
computed in prior steps. The algorithm uses dynamic
programming to achieve optimization.
This paper is organized as follows. Section II presents
concepts and notations used in this paper. Section III
presents the baseline algorithm for basic understanding. The
proposed algorithm is described in Section IV, detailing
optimizations that make it usable in practice. A full example
of the proposed algorithm is presented in Section V. Results
and comparisons to other methods are presented in Section
VI. Section VII concludes this paper.
II. BACKGROUND
This section provides the basic concepts necessary to
understand the algorithm.
Most of the previous algorithms [3-7] take as input a
sum-of-products (SOP) or a product-of-sums (POS). As SOP/POS
forms are completely specified, the don't cares are not
treated during the factoring but while generating the
SOP/POS; thus, the whole process is not exact. Lawler's
algorithm [2] starts from a functional description and
considers don't cares, but it is too slow to complete even for
small functions and can fail for functions of 4 variables. All
these approaches [2-7] only deal with the reduction of the
number of literals, without considering secondary criteria
such as logic depth or the structural characteristics of derived
transistor networks.
A. Cofactor
The cofactor is a sub-element of the Shannon
decomposition, a method by which a Boolean function can be
represented by the sum of two subfunctions of the original.
Let f(x_1, ..., x_n) be a Boolean function with input variable
x_i. The cofactors of f in x_i are defined as:
f_{x_i} = f(x_1, ..., x_{i-1}, 1, x_{i+1}, ..., x_n)
f_{x_i'} = f(x_1, ..., x_{i-1}, 0, x_{i+1}, ..., x_n)
where the positive cofactor f_{x_i} is defined when x_i = 1 and the
negative cofactor f_{x_i'} is defined when x_i = 0, so that
f = x_i · f_{x_i} + x_i' · f_{x_i'}. For simplicity of definitions, let the
operators f_x and f_{x'} represent the positive and negative
cofactors in the variable x of the function f. A cube cofactor
is obtained by setting more than one input variable to
specific values (zero or one).
B. Order
Two Boolean functions can be compared and classified
according to their relative order, which can be: equal, larger,
smaller or not-comparable. It is said that f is larger
(smaller) than g when the on-set of f is a superset (subset)
of the on-set of g. Two functions are equal when they have
equal on-sets and off-sets. They are not-comparable when
neither on-set contains the other. Let O(f, g) denote the
order of f against g, computed through the auxiliary
functions a = f · g' and b = f' · g: f and g are equal when
a = b = 0; f is larger than g when b = 0 and a ≠ 0; f is
smaller than g when a = 0 and b ≠ 0; otherwise, f and g are
not-comparable.
C. Unateness
Let f be a Boolean function on x_1, ..., x_n. f is positive
(negative) unate in the variable x if f_{x'} is smaller than or
equal to f_x (f_{x'} is larger than or equal to f_x). f is binate
in the variable x if f_{x'} is not comparable to f_x.
Unateness and binateness can be detected at the functional
level, before any equation is produced, by comparing the
positive and negative cofactors of the function with respect
to each variable, i.e., by evaluating the order O(f_x, f_{x'}).
D. Symmetry
A Boolean function f is called symmetric if
f(x_1, ..., x_n) = f(x_{σ(1)}, ..., x_{σ(n)}) for all permutations σ.
For simplified purposes, it can be considered that two
variables are symmetric when they can be interchanged
without modifying the logic function, so
f(..., x_i, ..., x_j, ...) = f(..., x_j, ..., x_i, ...). The anti-symmetric
case can be defined as a function that does not change if the
variables are inverted and exchanged with each other, so
f(..., x_i, ..., x_j, ...) = f(..., x_j', ..., x_i', ...). Symmetry can
also be detected at the functional level, before any equation
is produced, by comparing the cube cofactors of the two
variables involved: x_i and x_j are symmetric when
f_{x_i x_j'} = f_{x_i' x_j}, and anti-symmetric when
f_{x_i x_j} = f_{x_i' x_j'}.
E. Series/Parallel network
A factored form can be implemented as a complementary
series/parallel CMOS transistor network. This process is
straightforward; further details are described by Weste and
Harris [10]. The number of series transistors in one CMOS
plane is the number of parallel transistors in the other CMOS
plane. The number of series transistors affects the
performance of the final cell, and it can vary among factored
forms representing the same Boolean function. Let f denote
a logic function in SOP form:
(1)
Table 1 illustrates some factored forms of (1) as a
motivation for the multi-objective goals. Equation (2) has
minimum literal cost (L). The number of series (S) switches
is the same for all equations. Eq. (4) has minimum parallel
(P) cost. From a design point of view, Eq. (4) is interesting
because it has at most four series transistors in both CMOS
planes when implemented as a CMOS gate.
F. Read-Once
A function f is called read-once if it can be represented
as a factored form in which each variable appears no more
than once.
Theorem: If a function f can be represented through a read-once
formula, all the partial sub-equations in the formula
correspond to functions that are cube cofactors of f.
Proof: As each variable appears as a single literal, the
variables can all be independently set to non-controlling
values, which makes only one literal disappear at a time.
This way, any sub-equation (or sub-set) of f can be obtained
by assigning non-controlling values to the variables to be
eliminated. This variable assignment forms (by definition)
cube cofactors.
Table 1. Literal, series and parallel costs of factored forms.
Eq    Factored Function    L    S    P
(2)                         9    3    5
(3)                        10    3    5
(4)                        10    3    4
(5)                        10    3    6
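These functional tests can be made concrete on truth tables. The following is a small sketch of our own (the representation and function names are illustrative, not the paper's code): functions are lists of 2^n bits indexed by minterm, and order, unateness and pairwise symmetry are checked by comparing cofactors, as described above.

```python
def cofactor(tt, var, value):
    """Truth table of f with variable `var` fixed to `value` (0 or 1).
    The result keeps the original length, so tables stay comparable."""
    return [tt[(i & ~(1 << var)) | (value << var)] for i in range(len(tt))]

def order(f, g):
    """Classify f against g by on-set containment."""
    if f == g:
        return "equal"
    if all(fi or not gi for fi, gi in zip(f, g)):   # on-set(g) subset of on-set(f)
        return "larger"
    if all(gi or not fi for fi, gi in zip(f, g)):   # on-set(f) subset of on-set(g)
        return "smaller"
    return "not-comparable"

def unateness(tt, var):
    """'positive unate', 'negative unate', 'binate', or 'independent'."""
    rel = order(cofactor(tt, var, 1), cofactor(tt, var, 0))  # O(f_x, f_x')
    return {"larger": "positive unate", "smaller": "negative unate",
            "equal": "independent"}.get(rel, "binate")

def symmetric(tt, i, j):
    """x_i and x_j can be interchanged iff f_{x_i x_j'} == f_{x_i' x_j}."""
    return (cofactor(cofactor(tt, i, 1), j, 0)
            == cofactor(cofactor(tt, i, 0), j, 1))
```

For f = a·b + c (table [0,0,0,1,1,1,1,1] with a as bit 0), all three variables are detected as positive unate, a and b are symmetric, and a and c are not.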
III. BASELINE ALGORITHM
The baseline algorithm computes solutions from simpler
equations (i.e., with fewer literals) computed in prior steps.
The starting point is the set of known sub-functions
represented by a single literal. A bucket of n-literal
equations, B_n, is the set of functions composed of n literals.
Fig. 1 illustrates the process for creating all the buckets up to
5-literal equations. The first step is creating from scratch the
bucket of 1-literal equations, which is trivial. Then the
1-literal equations are combined through AND/OR logic
operations {1} to create the 2-literal bucket. In a similar
way, the combination {2} of the 1-literal and 2-literal buckets
creates the 3-literal bucket. The 4-literal bucket is
composed by operations among the 1-literal and 3-literal
buckets {3} and by operations among pairs of elements {4}
from the 2-literal bucket. Finally, the 5-literal bucket is
composed of combinations among the 1-literal and 4-literal
buckets {5} and among the 2-literal and 3-literal buckets {6}.
Generalizing the concept of generation to n-literal buckets, it
can be expressed according to Eq. (6):
B_n = ∪_{i=1}^{⌊n/2⌋} { f ⊗ g : f ∈ B_i, g ∈ B_{n-i}, ⊗ ∈ {AND, OR} }    (6)
The process is iterative, so it stops when the target
function is found. However, the number of functions grows
exponentially, as there are 2^(2^n) Boolean functions of n
inputs. If the functions in each bucket are not pruned, the
algorithm becomes infeasible in memory and computational
time. The next section introduces optimizations that make the
algorithm feasible.
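The bucket-generation loop above can be sketched as follows. This is a minimal illustration of our own, not the paper's implementation: functions are truth-table bitmasks, and expression strings use * for AND, + for OR and ' for complement.

```python
from itertools import product

def factor_baseline(target, names, max_literals=8):
    """Search for a factored form of `target`, a truth-table bitmask over
    2**len(names) minterms, by composing buckets of n-literal formulas."""
    size = 2 ** len(names)
    full = (1 << size) - 1
    # Truth-table mask of each literal: bit m is the literal's value on minterm m.
    lit = {}
    for v, name in enumerate(names):
        pos = sum(1 << m for m in range(size) if (m >> v) & 1)
        lit[name], lit[name + "'"] = pos, full ^ pos
    buckets = [dict() for _ in range(max_literals + 1)]
    buckets[1] = {mask: name for name, mask in lit.items()}
    known = set(buckets[1])
    for n in range(1, max_literals + 1):
        for i in range(1, n // 2 + 1):           # Eq. (6): combine B_i with B_(n-i)
            for (fm, fe), (gm, ge) in product(buckets[i].items(),
                                              buckets[n - i].items()):
                for mask, expr in ((fm & gm, f"({fe} * {ge})"),
                                   (fm | gm, f"({fe} + {ge})")):
                    if mask not in known and mask not in (0, full):
                        known.add(mask)          # keep only the first (cheapest) formula
                        buckets[n][mask] = expr
        if target in buckets[n]:
            return buckets[n][target]
    return None
```

For f = a·b + c the search succeeds in the 3-literal bucket. Without pruning, the buckets still grow very quickly, which is exactly the blow-up that the optimizations of Section IV address.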
Fig. 1: Generation of subfunctions until the 5th bucket.
The number inside braces indicates the bucket
composition step order.
Fig. 2: Pseudo-code for the factorization algorithm.
IV. PROPOSED ALGORITHM
The proposed algorithm represents logic functions as a
pair of {functionality, implementation}. The functionality is
either a BDD node or a truth table. The implementation is
either the root of an operator tree or a string representing a
factored form.
Example: the following {bit vector, string} pairs are
inserted in the 1-literal bucket for a binate function
dependent on three variables, a, b, c:
The implementation can also have associated data about the
number of literals, logic depth, number of transistors,
series/parallel properties, etc. These data are necessary to
factorize a target function considering multi-objective goals.
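A minimal sketch of such a pair (our own naming, not the paper's data structure), carrying the literal count as one example of associated cost data that is updated as pairs are combined:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubFunction:
    functionality: int    # truth table as a bitmask over 2**n minterms
    implementation: str   # factored-form expression string
    literals: int         # cost data carried along for multi-objective goals

    def AND(self, other):
        # Combining pairs: intersect truth tables, compose strings, add costs.
        return SubFunction(self.functionality & other.functionality,
                           f"({self.implementation} * {other.implementation})",
                           self.literals + other.literals)

    def OR(self, other):
        return SubFunction(self.functionality | other.functionality,
                           f"({self.implementation} + {other.implementation})",
                           self.literals + other.literals)
```

For three variables with a as bit 0, AND-ing the pairs for literals a and b yields the pair for a·b with a literal count of 2.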
Fig. 2 shows the pseudo-code for the factorization algorithm.
The first step is to check whether the target function is
constant; in this case the algorithm returns the constant. The
second step is the computation of symmetry groups, which
uses the unateness information to detect variables that are
symmetric. The information about symmetry and anti-symmetry
can be used here to greatly reduce the number of
allowed sub-functions used to prune intermediate functions.
Allowed sub-functions are derived from the cube cofactors.
To reduce the number of computed cube cofactors, the
symmetric and anti-symmetric variables are grouped, and the
cube cofactors are computed by first setting all the variables
inside a group before picking a variable from another group.
To optimize performance, a hash table of allowed
subfunctions is maintained. Subfunctions that are not present
in the allowed-subfunctions hash table are discarded, unless
they are larger or smaller than the target function. The set
of all cube cofactors of the target function is a very good set
of allowed functions. The intuition behind this is that, by
setting variables to zero and one in an optimized factored
form, it is possible to obtain sub-expressions of the formula.
For the case of read-once formulas, the use of cube cofactors
as allowed sub-functions guarantees an exact result;
this is a direct corollary of the theorem in Section II.F. An
"already-looked" hash table stores the functions already
introduced previously. These functions have been produced
with a smaller or equal number of literals and do not need to
be introduced twice. This speeds up execution time.
The next step is the computation of the allowed
subfunctions, done by deriving them from the cube cofactors
found considering symmetry. There are three useful types of
combinations. First, each function of the larger group is
combined with each function of the not-comparable (NC)
group using an AND operation; if the result is a smaller
function, it is stored for future use. A similar process is
applied to the smaller and NC functions, using the OR
operation and storing larger functions. The last combination
is NC against NC, applying both AND and OR operations
and storing any generated function in the set of allowed
subfunctions.
After the initialization of the allowed subfunctions, the
algorithm proceeds with the functional composition. It starts
by creating the 1-literal functions. Only the literals with the
right polarity are created, reducing the computation time. If
the target function is not found in the 1-literal bucket,
subsequent buckets are generated until the target function is
found or the number of buckets reaches a predetermined
maximum. After filling the i-th bucket, the algorithm
searches for a solution, using the computed smaller and
larger subfunctions. A solution is always an OR (AND)
operation between smaller (larger) subfunctions in previous
buckets and the functions in the current smaller (larger)
bucket. During this process, non-prime expressions are
deleted from the i-th buckets. The algorithm does not need to
stop at the first solution, so multiple solutions with different
costs can be found.
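The allowed-subfunction set derived from cube cofactors can be sketched as follows (our illustration; the grouping of symmetric variables that the algorithm uses to reduce the enumeration is omitted here). Every assignment of a subset of variables to constants yields a cube cofactor, and non-constant results form the allowed set used for pruning.

```python
from itertools import combinations, product

def cube_cofactors(tt, n):
    """All distinct non-constant cube cofactors of truth table `tt`
    (a list of 2**n bits), returned as hashable tuples; the empty cube
    yields f itself."""
    allowed = set()
    for k in range(n + 1):
        for vars_ in combinations(range(n), k):
            for values in product((0, 1), repeat=k):
                cof = list(tt)
                for v, val in zip(vars_, values):     # fix each cube variable
                    cof = [cof[(i & ~(1 << v)) | (val << v)]
                           for i in range(len(cof))]
                if any(cof) and not all(cof):         # discard constants
                    allowed.add(tuple(cof))
    return allowed
```

During composition, an intermediate function outside this set would be discarded unless it is larger or smaller than the target.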
Table 2. Allowed subfunctions of (7).
Cofactors    Cube Cofactors
V. A COMPLETE EXAMPLE
In this section, we provide a complete example for the
algorithm, discussing how the aspects described before are
taken into account in the execution of the method. In our
example we chose a simple but illustrative function in SOP
form.
(7)
The first step is to compute the allowed subfunctions. This
step first computes the unateness and symmetry information
for the variables. Variable a is binate, and variables b, c and
d are positive unate. No variable is symmetric, so symmetry
information is not used to reduce the computation of cube
cofactors. The computation of the cube cofactors results in
the different functions listed in Table 2, and the filling of the
buckets is shown in Fig. 3. It is important to make two
observations: the total number of cube cofactors is greatly
reduced, since some cube cofactors are equal and some are
constant; and the list of cube cofactors already contains the
literals in the right polarities.
The next step is the creation of the representations of the
literals. This creates the pairs of {functionality,
implementation} and inserts them in the bucket for the
1-literal formulas.
Once the 1-literal bucket is filled, the combination part
starts, producing the 2-literal combinations using Eq. (6).
Only subfunctions that are in the allowed-subfunctions hash
table, or that are smaller or larger than the target function,
are accepted as intermediate subfunctions. The combination
continues until the 5-literal bucket, where a solution is
found.
(8)
The logic function represented in Eq. (8) is a compact
form of Eq. (7). The algorithm may continue generating
more solutions if desired: it will generate more buckets until
the desired number of solutions is found or the limit on the
number of buckets is reached.
Fig. 3: Buckets generated for the example.
Table 3: Results of 44-6 factorization.
                 QF      GF      ABC     This paper
Literals         41205   39047   39530   38775
Execution time   37s     38s     2.5s    214s

Table 4: Results of all 4-input factorization.
                 QF      GF      ABC     This paper
Literals         38419   37844   38646   36986
Execution time   42s     47s     2s      114s
VI. RESULTS
Experiments were performed to test the efficacy of the
proposed algorithm, on a Pentium 2.4 GHz with 2 GB RAM.
In a first experiment, the algorithm was run on the set of
3503 functions from the library 44-6.genlib distributed in the
SIS package. The results of this experiment are shown in
Table 3. The algorithm proposed herein was compared to
Quick_Factor (column QF) and Good_Factor (column GF)
from the SIS package, and to the factoring algorithm
available in the ABC package (column ABC) through the
command print_factor. As can be seen in column This Paper,
the proposed algorithm achieved the smallest literal count
among all approaches. This result came at the expense of a
slightly higher execution time compared to the other
approaches. However, it should be noted that all 3503
equations were factored in less than one second each; on
average, each equation was factored in 214/3503 s ≈ 61 ms.
Also, the proposed algorithm was always able to find the
exact read-once factored form.
In a second experiment, the algorithm was run on the set of
3982 representative functions of the permutation-equivalence
classes of four-input functions. The results of this
experiment are shown in Table 4. As can be seen in column
This Paper, the proposed algorithm again achieved the
smallest literal count among all approaches, at the expense
of a slightly higher execution time compared to the other
approaches. However, it should be noted that all 3982
equations were factored in less than one second each; on
average, each equation was factored in 114/3982 s ≈ 29 ms.
Also, the proposed algorithm was always better than or
equal to the other approaches for each individual logic
function.
Table 5 presents an example of multi-objective
factorization. Equation (9) is the minimum-literal-count logic
function obtained when only literals are minimized.
Equation (10) is obtained when the algorithm is required to
minimize literals and obtain the minimum possible number
of switches in series in one CMOS plane (which is 3). This
minimum number is pre-computed according to [11] and used
as a parameter by the algorithm. Subfunctions not respecting
the lower bounds in [11] are discarded. Equation (11) is
obtained when the algorithm is required to minimize literals
and obtain the minimum possible number of switches in
parallel in one CMOS plane (which is 4). In this case the
number of literals is increased by two (from 20 to 22).
Notice that, as the data of the subfunctions are always
known during the execution of the algorithm, the proposed
algorithm can be modified to accept only subfunctions with
certain characteristics. The approach can consider any
secondary criterion that can be computed in a monotonically
increasing way, so that the solutions are generated in the
right order. Additionally, the new costs must be easily
obtainable for a combination of the subfunctions; in the case
of literals, this can be done by simple addition. These
requirements allow controlling not only the number of series
and parallel switches, but also the logic depth (per input
variable) and the function support size. These features will be
implemented as future work.
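The requirement that secondary costs combine monotonically can be illustrated with a small cost record (our sketch, not the paper's code; the combination rules assume worst-case switch counts in the series/parallel NMOS plane, with the PMOS plane being its dual):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cost:
    literals: int    # adds under both operations
    series: int      # worst-case series switches in the NMOS plane
    parallel: int    # worst-case parallel switches in the NMOS plane

    def AND(self, other):
        # AND stacks the NMOS subnetworks in series.
        return Cost(self.literals + other.literals,
                    self.series + other.series,
                    max(self.parallel, other.parallel))

    def OR(self, other):
        # OR places the NMOS subnetworks in parallel.
        return Cost(self.literals + other.literals,
                    max(self.series, other.series),
                    self.parallel + other.parallel)

LITERAL = Cost(1, 1, 1)
```

Because every component is monotone under AND/OR, a partial composition whose cost already violates a bound (e.g. the series lower bound of [11]) can be discarded safely.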
Table 6 shows the number of literals for some benchmark
functions, where our algorithm produces a literal count
better than or equal to that of quick_factor, good_factor and
ABC.
VII. CONCLUSIONS
This paper has proposed the first multi-objective factoring
algorithm. From an execution-time point of view, the
algorithm is slightly slower than other approaches, but still
feasible. From a quality point of view, the proposed
algorithm always delivered superior (or equal) results
compared to the other approaches. The algorithm is based on
a novel synthesis paradigm (functional composition), as it
composes the function by combining smaller known
sub-equations. The algorithm has the ability to take secondary
criteria (like the number of series and parallel transistors, or
the support size) into account, while generating several
Table 5: Results of multi-objective goal factorization.
Eq#   Logic Function   L    S   P   Time(s)
(9)                    20   4   7   1.15
(10)                   20   3   9   1.34
(11)                   22   8   4   1.22
Table 6: Results of some benchmarks.
Eq#   Source      Logic Function   SOP   QF   GF   ABC   This Paper
(9)   [12]        b9_a1*            55   14   14   14    12
(10)  [12]        b9_i1*            55   16   15   15    14
(11)  [12]        rd53_0*           20   14   14   14    12
(12)  [12]        cm162a_o*         17   12   12   12    12
(13)  [12]        cm162a_p*         26   16   16   16    15
(14)  [12]        cm162a_q*         30   18   18   18    17
(15)  [12]        cm162a_r*         34   20   20   20    19
(17)  [7,13,14]                     23   12   11   11     8
(18)  [14]                           7    6    6    6     5
(20)  [14]                           9    8    7    7     7
(21)  [7]                           23   13   11   12     9
(22)  [7]                            8    7    7    7     6
(23)  [7]                           18   20   20   20    12
(24)  [15]                          21   19   18   18    11
* The benchmark name is used instead of the logic function.
alternative solutions. This characteristic makes the algorithm
a useful building block for approaches based on restructuring
small portions of logic, like [9] and [16]. These unique
characteristics make it very useful in the context of local
optimizations.
ACKNOWLEDGMENTS
This research was partially funded by Nangate Inc. under a
Nangate/UFRGS research agreement, by the CAPES Brazilian
funding agency, and by the European Community's Seventh
Framework Programme under grant 248538 - Synaptic.
REFERENCES
[1] Hachtel, G. D. and Somenzi, F. Logic Synthesis and Verification
Algorithms. 1st ed. Kluwer Academic Publishers, 2000.
[2] Lawler, E. L. An Approach to Multilevel Boolean Minimization. J.
ACM 11, 3 (Jul. 1964), 283-295.
[3] Sentovich, E., Singh, K., Lavagno, L., Moon, C., Murgai, R.,
Saldanha, A., Savoj, H., Stephan, P., Brayton, R., and
Sangiovanni-Vincentelli, A. SIS: A system for sequential circuit synthesis. Tech.
Rep. UCB/ERL M92/41, UC Berkeley, Berkeley, 1992.
[4] Golumbic, M. C., Mintz, A., and Rotics, U. Factoring and recognition
of read-once functions using cographs and normality. DAC '01. ACM,
New York, NY, 109-114.
[5] Golumbic, M. C., Mintz, A., and Rotics, U. An improvement on the
complexity of factoring read-once Boolean functions. Discrete Appl.
Math. 156, 10 (May 2008), 1633-1636.
[6] Golumbic, M. C. and Mintz, A. Factoring logic functions using graph
partitioning. ICCAD '99. IEEE Press, Piscataway, NJ, 195-199.
[7] Mintz, A. and Golumbic, M. C. Factoring boolean functions using
graph partitioning. Discrete Appl. Math. 149, 1-3 (Aug. 2005),
131-153.
[8] Mishchenko, A., Chatterjee, S., and Brayton, R. DAG-aware AIG
rewriting: a fresh look at combinational logic synthesis. DAC '06.
ACM, New York, NY, 532-535.
[9] Werber, J., Rautenbach, D., and Szegedy, C. Timing optimization by
restructuring long combinatorial paths. ICCAD '06. IEEE Press,
Piscataway, NJ, 536-543.
[10] Weste, N.H.E. and Harris, D. Section 6.2.1: Static CMOS. In: CMOS
VLSI design, Addison Wesley, 321-327. 2005.
[11] Schneider, F. R., Ribas, R. P., Sapatnekar, S. S., and Reis, A. I. Exact
lower bound for the number of switches in series to implement a
combinational logic cell. ICCD. IEEE Computer Society, Washington,
DC, 357-362. 2005.
[12] S. Yang, Logic Synthesis and Optimization Benchmarks User Guide
Version 3.0, Technical Report 1991-IWLS-UG-Saeyang, MCNC
Research Triangle Park, NC, January 1991.
[13] Brayton, R. K. Factoring logic functions. IBM J. Res. Dev. 31, 2
(Mar. 1987), 187-198.
Stanion, T. and Sechen, C. Boolean division and factorization using
binary decision diagrams. IEEE TCAD, vol. 13, no. 9, pp. 1179-1184,
1994.
[15] Yoshida, H.; Ikeda, M.; Asada, K. Exact Minimum Logic Factoring
via Quantified Boolean Satisfiability. ICECS '06. 1065-1068.
[16] Mishchenko, A. Brayton, R. Chatterjee, S. Boolean factoring and
decomposition of logic networks. ICCAD 2008. IEEE, pp.38-44.
2008.