
How Much Randomness Makes a Tool Randomized?
Petr Fišer, Jan Schmidt
Faculty of Information Technology
Czech Technical University in Prague
[email protected], [email protected]
Outline
- Logic synthesis algorithms
  - What are they? What is their nature?
- Randomization – yes or no?
  - Pros & cons
- Some randomized algorithms
  - Successful examples
- Necessary measure of randomness
  - How much randomness is needed?
Logic Synthesis Algorithms
…are, for example:
- Two-level minimization (Espresso)
- Decomposition
- Resynthesis
- Resubstitution
- …
In fact, they are state space search (exploration) algorithms
- Usually they are of a local search nature
Strategies - complete:
- B&B
- Best-first
- BFS, DFS
- …
Strategies - approximate:
- Best only
- First improvement
- …
Sometimes they can be iterated
Logic Synthesis Algorithms
Most of these algorithms are deterministic





Deterministic heuristics
Usually “best-only” or “first-improvement” nature
No random decisions
Two runs of the algorithm produce equal results
There is no chance of obtaining different results
Some rare exceptions



IWLS’11
Simulated annealing
Genetic algorithms
Genetic programming
4
Logic Synthesis Algorithms
State space search strategies:
- Deterministic heuristic approach
  - When there are two (or more) equally valued choices, take the first one
  - → is it always a good choice?
- Exhaustive (complete) approach
  - When there are two (or more) equally valued choices, try all of them
  - → computationally infeasible
- Randomized approach
  - When there are two (or more) equally valued choices, take one randomly
  - → all choices have equal chances, whereas exponential explosion is avoided
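A minimal sketch of the three tie-breaking policies in Python; pick_move, moves, and score are illustrative names, not part of any tool discussed here:

    import random

    def pick_move(moves, score, rng=random):
        best = max(score(m) for m in moves)
        ties = [m for m in moves if score(m) == best]
        # Deterministic heuristic: always take the first tied move:
        #   return ties[0]
        # Exhaustive: recurse into every tied move (exponential blow-up).
        # Randomized: each tied move gets an equal chance, yet only one
        # branch is explored, so the explosion is avoided:
        return rng.choice(ties)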
Why Randomness?
Randomized move selection:
- All choices have equal chances, whereas exponential explosion is avoided
May be used in an iterative way:
- Run the randomized algorithm several times and take the best solution
- Solutions from different iterations may be combined
Randomized local search = a compromise between best-only (first-improvement) and exhaustive search
- The solution quality may be improved at the cost of runtime
- Quality / runtime tradeoff
Anything new? Indeed not! See SA, GA, GP, …
- … but those are generic algorithms adapted to logic synthesis
- → a call for dedicated randomized logic synthesis algorithms?
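The iterative use is just a best-of-N loop; a sketch, where run_once and cost are placeholders for the randomized algorithm and its quality measure:

    import random

    def best_of(run_once, cost, iterations, seed=0):
        # One seeded generator drives all iterations, so the whole
        # loop is reproducible from a single seed.
        rng = random.Random(seed)
        solutions = (run_once(rng) for _ in range(iterations))
        return min(solutions, key=cost)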
Where Randomness?
My algorithm uses smart heuristics, they are
deterministic, there is no place for random
decisions
…so how can the algorithm be randomized?
…probably it’s not possible
IWLS’11
7
“Deterministic” Reality
“Deterministic” algorithms are sensitive to variable ordering
- The results are different for different variable orderings in the input file
- Espresso
  - up to 20% quality difference (in literal count)
- Espresso-exact
  - up to 6% quality difference (in literal count)
- ABC, just “balance”
  - up to 10% quality difference (in AIG node count)
- ABC, “choice” script followed by “map”
  - up to 67% quality difference (in gate count)
  - 11% on average
- CUDD BDD package
  - up to an exponential difference in node count
  - default variable ordering is lexicographical
“Deterministic” Reality
Concluded:
Facts:
- Many “deterministic” algorithms are heavily influenced by the variable ordering in the source file
- The source file variable ordering is random – from the point of view of synthesis
- Synthesis may produce very bad results, without the designer knowing the reason
Possible solutions:
1. Design better heuristics
2. Accept randomness
   - Note that the source file is random itself!
Why Not Randomness?
Synthesis may produce unpredictable results
- the results of repeated synthesis may significantly differ
- small changes in the source may induce big changes in the final design
- bad for area & delay estimation
- bad for debugging
- bad for anything…
BUT:
1. What if I swap two ports in the VHDL header? Should I expect a 70% quality difference?
2. Not random, actually – pseudorandom
   - a “pseudo-randomized” algorithm produces equal results under a given seed
3. Without randomness, there is no chance of obtaining possibly better solutions
Randomness - Concluded
+ Chance of obtaining different solutions
  - Quality / runtime trade-off
+ Influence of lexicographical ordering diminished
- Possibility of unexpected behavior
  - … but this could happen to deterministic algorithms too – lexicographical ordering
  - … and if the algorithm produces near-optimum solutions, no big differences are expected
- Two synthesis runs produce different results
  - they do not, if the seed is fixed
  - → seed as a synthesis parameter
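A tiny illustration of the seed-as-parameter point (plain Python, nothing tool-specific): two runs under the same seed are identical, while a different seed gives the chance of a different result.

    import random

    def randomized_pass(rng):
        # stand-in for one randomized synthesis decision sequence
        return [rng.randrange(100) for _ in range(5)]

    assert randomized_pass(random.Random(42)) == randomized_pass(random.Random(42))
    print(randomized_pass(random.Random(42)))  # reproducible
    print(randomized_pass(random.Random(43)))  # different, possibly better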
Example 1: BOOM
Two-level (SOP) minimizer
Algorithm (very simplified):
1. Generate on-set cover ← this step is randomized
2. Put all the generated implicants into a common pool
3. Solve the covering problem
4. Go to 1 or stop
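A sketch of that outer loop; generate_cover (assumed to return a set of implicants), solve_covering, and cost stand in for BOOM’s actual procedures and are assumptions of this sketch:

    import random

    def boom(generate_cover, solve_covering, cost, iterations, seed=0):
        rng = random.Random(seed)
        pool, best = set(), None
        for _ in range(iterations):
            pool |= generate_cover(rng)        # step 1: randomized cover
            solution = solve_covering(pool)    # steps 2-3: pooled covering problem
            if best is None or cost(solution) < cost(best):
                best = solution
        return best                            # step 4: best over all iterations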
Example 1: BOOM
[Figure: terms in the implicant pool and solution literals vs. iteration; the solution quality crosses Espresso’s result (“Espresso crosspoint”) as iterations accumulate.]
Derandomization
The random number generator restricted to
produce only a given number of distinct numbers

Randomness factor, RF
RF = 1: only 1 value is produced (0)
RF = 2: only 2 values are produced (0, MAXINT)
RF = infinity: no restriction
IWLS’11
14
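A sketch of such a restricted generator; spreading the RF values evenly between 0 and MAXINT matches the RF = 1 and RF = 2 cases above, but is otherwise an assumption:

    import random

    class RestrictedRNG:
        MAXINT = 2**31 - 1

        def __init__(self, rf=None, seed=0):
            self.rf = rf                    # None means RF = infinity
            self.rng = random.Random(seed)

        def next(self):
            if self.rf is None:
                return self.rng.randrange(self.MAXINT + 1)  # unrestricted
            if self.rf == 1:
                return 0                                    # RF = 1: only 0
            # RF = k: k evenly spaced values from 0 to MAXINT
            step = self.MAXINT // (self.rf - 1)
            return step * self.rng.randrange(self.rf)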
BOOM - Derandomized
[Figure: implicants in the pool vs. iteration, n = 20; RF = 1 adds almost no new implicants, while RF = 2, 3, 10, and infinity fill the pool with thousands of them.]
BOOM - Derandomized
[Figure: solution literals vs. iteration, n = 20; RF = 1 yields the worst solutions, RF = 2 and 3 intermediate ones, and RF = infinity the best.]
BOOM - Derandomized
[Figure: frequency histogram of the number of equally valued choices per decision, n = 20; 1 choice: 40%, 2: 19%, 3: 12%, 4: 8%, 5: 5%, more than 5: 16%.]
BOOM – Cover Generation
Generation of the cover: the basic idea
- Implicants are generated in a greedy way
- Implicants are generated by reducing a tautological cube, i.e., by adding literals
- Literals that appear most frequently in the on-set are preferred
- If two or more literals have the same frequency, select one randomly
  - there are at most 2n choices (n variables, two polarities)
  - the random number generator needs to produce no more than 2n different values
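A sketch of one reduction step with the randomized tie-break; the literal-string encoding and the pick_literal name are illustrative:

    import random
    from collections import Counter

    def pick_literal(uncovered_terms, cube, rng):
        # Count each literal's frequency in the still-uncovered on-set
        # terms; terms and the cube are sets of strings like 'x3' / '~x3'.
        freq = Counter(lit for term in uncovered_terms
                           for lit in term if lit not in cube)
        best = max(freq.values())
        ties = sorted(lit for lit, f in freq.items() if f == best)
        return rng.choice(ties)   # at most 2n equally valued literals

    rng = random.Random(1)
    terms = [{"x1", "~x2"}, {"x1", "x3"}, {"~x2", "x3"}]
    print(pick_literal(terms, cube=set(), rng=rng))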
Example 2: Resynthesis by Parts
Multi-level optimization
Algorithm (very simplified):
1. Pick one network node (pivot) ← this step is randomized
2. Extract a window of all nodes having a distance from the pivot up to a given value
3. Resynthesize the window
4. Put the window back
5. Go to 1 or stop
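A sketch of the loop; the three callbacks stand in for the real netlist operations and are assumptions of this sketch, not a fixed API:

    import random

    def resynthesize_by_parts(nodes, window_of, resynthesize, splice,
                              iterations, seed=0):
        rng = random.Random(seed)
        for _ in range(iterations):
            pivot = rng.choice(nodes)               # step 1: randomized pivot
            window = window_of(pivot)               # step 2: radius-limited window
            splice(window, resynthesize(window))    # steps 3-4: optimize, put back
        return nodes                                # step 5: loop until budget spent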
Example 2: Resynthesis by Parts
[Figure: gate count vs. iteration for benchmark e64; repeated radius-5 window resynthesis descends well below the result of 100% (whole-network) resynthesis.]
Resynthesis by Parts - Derandomized
[Figure: gate count vs. iteration for RF = 1, 2, 3, 5, and infinity; RF = 1 stays worst and quality improves with growing RF, RF = infinity reaching the best results. The 100% resynthesis level is shown for reference.]
Example 3: FC-Min
Two-level (SOP) minimizer for multi-output functions
Algorithm (very simplified):
1. Generate on-set cover
   - Group implicants are generated directly
   - A rectangle cover problem is solved for the PLA output matrix
   - Greedy heuristic
2. Put all the generated implicants into a common pool
3. Solve the covering problem
4. Go to 1 or stop
Example 3: FC-Min
1.
2.
3.
Select a row having maximum 1’s
Append a row non-decreasing the
number of covered 1’s
IF ( random() DF ) go to 2
y0-y4
{
Rectangle cover solving algorithm:
The more rows the rectangle has,



IWLS’11
the more on-set 1’s is covered
the less likely it will be a valid implicant
probabilistic implicant generation
23
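A sketch of steps 1–3 over the PLA output matrix (a list of 0/1 rows); the stopping rule, read here as “continue with probability 1 − 1/DF”, and all names are assumptions of this sketch:

    import random

    def covered_ones(rows, matrix):
        # 1's covered by a rectangle: its row count times the number
        # of columns that are 1 in every selected row
        cols = [c for c in range(len(matrix[0]))
                if all(matrix[r][c] for r in rows)]
        return len(rows) * len(cols)

    def grow_rectangle(matrix, df, rng):
        # step 1: start from the row with the most 1's
        rows = [max(range(len(matrix)), key=lambda r: sum(matrix[r]))]
        while True:
            base = covered_ones(rows, matrix)
            # step 2: candidate rows that do not decrease the covered 1's
            candidates = [r for r in range(len(matrix)) if r not in rows
                          and covered_ones(rows + [r], matrix) >= base]
            # step 3: stop with probability 1/DF (one reading of the DF test)
            if not candidates or rng.randrange(df) == 0:
                return rows
            rows.append(rng.choice(candidates))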
Derandomized FC-Min
4000
DF = 10,000
3500
DF = inf.
Implicants in pool
3000
DF = 1000
2500
DF = 100
2000
DF = 10
1500
DF = 3
1000
DF = 2
500
0
0
200
400
600
800
1000
Iteration
IWLS’11
24
Derandomized FC-Min
1200
Randomness factors increasingly:
2
3
5
10
100
1000
infinity
1100
Literals
1000
900
800
700
600
500
0
200
400
600
800
1000
Iteration
IWLS’11
25
Conclusions
Randomness
- Allows for obtaining different solutions
  - Repeated runs produce different results
  - The lexicographical ordering aspect
- Allows for a cost / time trade-off
  - Iterative run
- Can be introduced into most algorithms
- Reproducibility is not a problem
  - Pseudo-randomness
Necessary measure of randomness
- Can be analytically derived
  - By analysis of the possible numbers of choices
- Some algorithms are “random enough” for DF = 2
- Some algorithms need a very high level of randomness
  - Probabilistic algorithms, SA, …