
Inference Rules in Local Search for Max-SAT
André Abramé and Djamal Habet
Aix Marseille University, LSIS UMR 7296
Av. Escadrille Normandie Niemen
13397 Marseille Cedex 20 (France)
{andre.abrame , djamal.habet}@lsis.org
Abstract—In recent years, many advances have been made in the exact solving of the Max-SAT problem, especially through the definition of new inference rules and better lower bound estimations in branch and bound based methods. However, and contrary to the SAT problem, fewer works exist on approximate methods for Max-SAT, in particular local search methods, which have shown their potency for SAT. In this paper, we show that including inference rules in a classical local search solver for SAT improves its performance when solving the Max-SAT problem. The obtained results confirm the efficiency of our approach.
Keywords-Max-SAT; Local Search; Inference Rules; Max-resolution.
I. INTRODUCTION
The Maximum Satisfiability (Max-SAT) problem is to
find an assignment of Boolean values to the variables of
a propositional formula expressed in a Conjunctive Normal
Form (CNF) that maximizes the number of satisfied clauses.
This NP-hard problem [1] is the optimization version of the
SAT decision problem. In recent years, interest in Max-SAT has grown significantly because of its capacity to express and solve real-life problems in various domains such as routing, scheduling and bioinformatics, as well as academic problems (Max-Cut, Max-Clique, etc.).
There are two main approaches to solve Max-SAT. The
first one is exact and generally based on the Branch & Bound
(BnB) algorithm. Many competitive implementations exist
[2], [3], [4], which differ mainly in their variable selection heuristic, their lower bound estimation and the inference rules they use. The second approach concerns approximation algorithms, which do not guarantee the optimality of the solutions but return a result of good quality in a reasonable time. Such algorithms include semi-definite programming, pseudo-Boolean optimization, etc. [5], [6], [7]. Local search (LS) methods, which are efficient for SAT, also belong to these approximation methods. Any LS algorithm for SAT can be used to solve the unweighted Max-SAT problem with minor modifications. However, specific adaptations of LS for Max-SAT have been proposed. The first one is due to Hansen and Jaumard [8]. Among the most recent works, Zhang et al. [9] use backbones to improve a classical LS algorithm, Smyth et al. [10] adapt tabu search to Max-SAT, and Lardeux et al. combine complete and incomplete search in [11] by using a three-valued scheme. More recently, Kroc et al. [12] use a classical DPLL SAT solver to guide a local search solver running in parallel. To the best of our knowledge, there is no more recent work on LS adaptation for Max-SAT. A general presentation of the Max-SAT problem is given in [13], and two surveys of LS for Max-SAT exist: a complete but outdated one [14] and a more up-to-date one [15].
Inference rules have shown their interest in BnB algorithms, for example to simplify the formula or to improve the estimation of lower bounds. However, such rules have rarely been exploited in a local search algorithm to make a Max-SAT instance easier to solve. To the best of our knowledge, except for Heras and Bañeres [16], who use such rules in a preprocessing step before applying classical LS algorithms, there is no work dealing with this point. In this paper, we use an inference rule combining unit propagation and Max-resolution to deduce some of the Minimal Unsatisfiable Subsets (MUSes) of the instances (for a formal definition of MUS and its relationship with Max-SAT, see for instance [5]). We integrate this rule in an existing LS algorithm for SAT (Novelty++ with adaptive noise setting [17], [18]). The resulting algorithm, IRAnovelty++, significantly improves the performance of the local search, particularly when solving Max-2-SAT instances. One of the difficulties to overcome in order to make this rule applicable in a LS algorithm is the definition of refutation trees in the LS context. This issue has already been studied in the SAT context, for example in [19], [20], [21], [22], [23]. Moreover, a balance must be struck between the time spent applying the rule and the time spent solving the instance; otherwise, the resulting algorithm may be inefficient in practice.
This paper is organized as follows. In Section II, we give the basic definitions and notations used in this paper. The inference rules and the local search algorithm used in our implementation are introduced in Sections III and IV. We detail our implementation in Section V. Finally, the experiments and the obtained results are presented in Section VI and we conclude in Section VII.
II. NOTATIONS
Let X = {x1 , . . . , xn } be a set of propositional variables.
A literal l is a variable xi or its negation ¬xi . A clause is a
finite disjunction of literals and a formula Σ in conjunctive
normal form (CNF) is a conjunction of clauses. Alternatively, clauses can be represented as sets of literals and a
formula as a multiset of clauses.
An assignment I is a set of literals which does not contain
both a variable and its negation. If |I| = n then I is
complete and it is partial otherwise. A variable xi such
that neither xi nor ¬xi belongs to I is unassigned. For
a given literal l, we denote by Σ|l the formula obtained by applying l on Σ. Formally, Σ|l = {c | c ∈ Σ, {l, ¬l} ∩ c = ∅} ∪ {c \ {¬l} | c ∈ Σ, ¬l ∈ c}. This application can be extended to any assignment I = {l1, l2, . . . , lk}, such that Σ|I = (. . . ((Σ|l1)|l2) . . . |lk).
A literal l is satisfied by an assignment I iff l ∈ I and it is falsified iff ¬l ∈ I. A clause is satisfied iff at least one of its literals is satisfied, and it is falsified (or conflicting) iff all its literals are falsified. By convention the empty clause, noted □, is conflicting. The Max-SAT problem consists of finding an assignment that maximizes (minimizes) the number of satisfied (falsified) clauses.
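To fix ideas, the set-based notations above translate directly into code. The following Python sketch (an illustration of the notation only, not part of the solver described later) represents clauses as frozensets of integer literals (¬xi encoded as −i), implements Σ|l, and counts the clauses falsified by a complete assignment, which is the quantity Max-SAT minimizes.

    from typing import FrozenSet, List, Set

    Clause = FrozenSet[int]    # a clause is a set of literals; -i stands for the negation of x_i
    Formula = List[Clause]     # a formula is a multiset of clauses, kept as a list

    def apply_literal(sigma: Formula, lit: int) -> Formula:
        """Sigma|l: drop satisfied clauses, remove the falsified literal from the others."""
        result = []
        for c in sigma:
            if lit in c:                 # clause satisfied by l: removed from the formula
                continue
            if -lit in c:                # occurrence of the negation of l: removed from the clause
                result.append(c - {-lit})
            else:
                result.append(c)
        return result

    def nb_falsified(sigma: Formula, assignment: Set[int]) -> int:
        """Number of clauses falsified by a complete assignment (the Max-SAT objective to minimize)."""
        return sum(1 for c in sigma if not (c & assignment))

    # Small usage example: sigma = (x1) and (not x1 or x2), assignment {not x1, not x2}
    sigma = [frozenset({1}), frozenset({-1, 2})]
    print(nb_falsified(sigma, {-1, -2}))   # -> 1 (only the unit clause {x1} is falsified)
    print(apply_literal(sigma, -1))        # -> [frozenset()] : the unit clause {x1} becomes the empty clause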
III. INFERENCE RULES
In recent years, research on Max-SAT has focused on exact solving methods, based on BnB, and particularly on one of their main components: the inference rules, which transform a formula Σ into an equivalent one Σ′. We denote this transformation by Σ ⊢ Σ′.
In the case of the SAT problem, the inference rules must preserve the satisfiability of the instance. However, when applied to Max-SAT, they must also preserve the number of unsatisfied clauses for every assignment. They can have several purposes, such as improving the applicability of the other solver components or memorizing some deductions to avoid redundancy. In this section, we recall the powerful inference rule for Max-SAT defined in [3], which uses the Max-SAT adaptation of two well-known SAT inference rules (unit propagation and resolution).
A clause containing only one literal is a unit clause. In the presence of such a clause, all the clauses containing its literal and all the occurrences of its negation can be removed. The iterative application of this rule until no unit clause remains is called unit propagation. We denote by Σ∗ the formula obtained by applying unit propagation to a formula Σ. This inference rule preserves satisfiability, but it can lead to a non-optimal solution in terms of the number of satisfied clauses and is therefore inapplicable to Max-SAT. However, recent complete Max-SAT solvers perform a simulated unit propagation (i.e., the consequences are not applied on the formula) to detect conflicts and improve their lower bound estimation.
Another useful inference rule for SAT is resolution [24]. If two clauses {x, A} and {¬x, B} are present in a formula (with A and B disjunctions of literals), then the clause {A, B} can be added without affecting its satisfiability. Again, this rule does not preserve equivalence for Max-SAT. A variant for Max-SAT, named Max-resolution, has been proposed in [11] (and later studied in [25], [26]). It can be defined as follows:

    {x, A} ∪ {¬x, B} ∪ Σ′
    ────────────────────────────────────────
    {A, B} ∪ {x, A, ¬B} ∪ {¬x, ¬A, B} ∪ Σ′

where Σ′ is a multiset of clauses. Two of the newly produced clauses are not disjunctions of literals and need to be transformed to preserve the CNF form of the resulting formula. It should be noted that this transformation produces (|A| ∗ (|B| − 1) + (|A| − 1) ∗ |B|) clauses, of sizes varying from (min(|A|, |B|) + 1) to (|A| + |B| − 1).

Figure 1. Example of application of the Max-resolution based inference rule. The original formula Σ is on top (a), followed (b) by the implication graph which shows the unit propagation steps (each node corresponds to an assignment, with the order given by @X, and each arrow is tagged with the clauses which have led to the assignment). Then appears (c) the refutation tree and the resolution steps applied on the formula (in frame, the compensation clauses that must be added to the formula), and eventually (d) the resulting formula Σ′.
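As a concrete illustration of the Max-SAT soundness of this rule (an example added here for clarity, not taken from the figure above), consider the simplest case where A and B are single literals: resolving c1 = {x, a} and c2 = {¬x, b} replaces them by the resolvent {a, b} and the two compensation clauses {x, a, ¬b} and {¬x, ¬a, b}, and no CNF transformation is needed. The short Python check below enumerates all assignments and verifies that the number of falsified clauses is unchanged, which is exactly the property Max-resolution must preserve.

    from itertools import product

    # Literals are encoded as +/- variable indices: x=1, a=2, b=3.
    before = [frozenset({1, 2}), frozenset({-1, 3})]                             # {x,a}, {not x,b}
    after = [frozenset({2, 3}), frozenset({1, 2, -3}), frozenset({-1, -2, 3})]   # resolvent + compensation clauses

    def cost(formula, assignment):
        """Number of clauses falsified by a complete assignment (set of literals)."""
        return sum(1 for c in formula if not (c & assignment))

    for values in product([1, -1], repeat=3):
        assignment = {sign * var for var, sign in zip((1, 2, 3), values)}
        assert cost(before, assignment) == cost(after, assignment)
    print("Max-resolution preserves the number of falsified clauses on this example.")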
In [3], these inference rules are applied to improve the lower bound estimation made by their BnB solver MiniMaxSat. The main idea is to use Max-resolution to perform deductions on the formulas, as it is done by the clause learning mechanism of SAT solvers: when a conflict is detected by unit propagation, a succession of resolution steps is applied on the clauses which have led to the conflict, producing a learnt clause. MiniMaxSat performs, at each decision, a simulation of unit propagation. When a conflict is derived, the set of clauses which has led to it forms an unsatisfiable subset of the formula. From this subset, the empty clause can be derived by several Max-resolution steps made in a given order. These resolution steps make up the refutation tree and can be easily deduced from the implication graph (or from a simple propagation queue). Contrary to the SAT learning mechanism, the set of clauses which have led to the conflict is removed from the formula and compensation clauses are added to preserve the number of satisfied and falsified clauses for each possible assignment. To limit the number of clauses added to the formula, MiniMaxSat applies this method only when all the clauses of the unsatisfiable subset contain fewer than five literals. Fig. 1 gives an example of the application of the presented inference rules. It should be noted that MiniMaxSat applies this inference rule at each node of the search tree, but the resulting transformation is only kept in the sub-part of the search tree below that node: the transformations are undone when the algorithm backtracks.
IV. NOVELTY++ ALGORITHM
This section provides a brief overview of local search
algorithms for SAT, with a focus on Novelty++ [17] which
is used in our approach.
Algorithm 1 presents a general scheme of a local search (LS) algorithm based on WalkSat (originally introduced in [27]). It starts by randomly choosing a complete assignment I (function random_complet_assignment, line 3) and iteratively selects a variable to flip (lines 7-9) in order to improve the number of satisfied clauses, until either a satisfying assignment is found (lines 5-6) or a given maximum number of flips (FreqRestarts) is reached. This process is repeated up to a maximum number of MaxRestarts restarts. This restart mechanism can be viewed as a diversification mechanism.
WalkSat variants differ in their heuristic to select the variable to flip [28]. Such heuristics are based on two counters, break and make, which give, for a given variable xi, the number of clauses of Σ which would be falsified and satisfied respectively if xi were flipped. Let score(xi) be the difference between make(xi) and break(xi) (score(xi) = make(xi) − break(xi)). The WalkSat family provides several heuristics to pick a variable to flip (function select_var_to_flip) from a randomly selected unsatisfied clause c, returned by the function select_falsified_clause. In the original WalkSat implementation, if there are variables in c with a null break value, one of them is picked at random; otherwise, with probability p a variable of c is picked at random (random walk), and with probability 1 − p one of the variables with the smallest break value is picked.
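As an illustration (a sketch under the clause and assignment representation introduced in Section II, not code from the solver itself), the break and make counters can be computed directly from the current assignment:

    def make_break(sigma, assignment, var):
        """make(var), break(var): clauses that would become satisfied / falsified if var were flipped."""
        lit = var if var in assignment else -var      # literal of var currently satisfied
        make = brk = 0
        for c in sigma:
            sat_lits = c & assignment
            if not sat_lits and -lit in c:            # currently falsified, flipping var satisfies it
                make += 1
            elif sat_lits == {lit} and -lit not in c: # var is the only literal satisfying the clause
                brk += 1
        return make, brk

    def score(sigma, assignment, var):
        make, brk = make_break(sigma, assignment, var)
        return make - brk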
Algorithm 1: WalkSat
Data: Σ, MaxRestarts, FreqRestarts.
Result: If a satisfying assignment I is found then I, else ∅.
 1 begin
 2   for i = 0 to MaxRestarts do
 3     I = random_complet_assignment ();
 4     for j = 0 to FreqRestarts do
 5       if is_satisfied (Σ|I) then
 6         return I;
 7       c = select_falsified_clause (Σ|I);
 8       xi = select_var_to_flip (c);
 9       I = flip (I, xi);
10   return ∅;
11 end

In Novelty [28], the variables appearing in c are sorted by their score, breaking ties in favor of least recently flipped variables. Let us consider the best and the second best variables. If the best variable is not the most recently
flipped one in c then it is selected. Otherwise, with probability p, the algorithm selects the second best variable and with probability 1 − p it picks the best one.
In [17], Novelty is extended to Novelty++ as follows: with probability dp (diversification probability), the algorithm picks the least recently flipped variable in c (corresponding to a diversification) and with probability 1 − dp, it behaves as Novelty.
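A possible implementation of this selection rule, written against the hypothetical helpers score() and last_flip_time() (a sketch of the heuristic as described above, not the authors' code), is:

    import random

    def novelty_pp(clause_vars, score, last_flip_time, p, dp):
        """Novelty++ variable selection from the variables of a falsified clause.

        clause_vars: variables of the selected falsified clause.
        score(v), last_flip_time(v): assumed helpers; higher score is better,
        smaller last_flip_time means flipped longer ago.
        """
        # Diversification step of Novelty++: pick the least recently flipped variable.
        if random.random() < dp:
            return min(clause_vars, key=last_flip_time)
        # Novelty: sort by score, ties broken in favor of least recently flipped variables.
        ordered = sorted(clause_vars, key=lambda v: (-score(v), last_flip_time(v)))
        best = ordered[0]
        second = ordered[1] if len(ordered) > 1 else ordered[0]
        if best != max(clause_vars, key=last_flip_time):
            return best            # the best variable is not the most recently flipped one
        return second if random.random() < p else best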
The optimal values for p and dp depend on the solved
instance and are hard to determine. In [18], a dynamic scheme is proposed to adjust the values of these probabilities depending on the evolution of the search: if the number of falsified clauses has not decreased during a given number of flips, then p and dp are increased to favor diversification. Conversely, when the number of falsified clauses decreases, p and dp are decreased. Novelty++ with this adaptive probability setting will be referred to as AdaptNovelty++.
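A minimal sketch of this adaptive mechanism (the thresholds and adjustment factors below are illustrative placeholders, not the exact constants of [18]):

    class AdaptiveNoise:
        """Increase p and dp when the search stagnates, decrease them when it improves."""

        def __init__(self, p=0.0, dp=0.0, stagnation_window=1000, step=0.1):
            self.p, self.dp = p, dp
            self.window = stagnation_window   # flips without improvement before diversifying
            self.step = step                  # illustrative adjustment factor
            self.best = float("inf")
            self.last_improvement = 0

        def update(self, flip_count, nb_falsified):
            if nb_falsified < self.best:                            # improvement: be greedier
                self.best = nb_falsified
                self.last_improvement = flip_count
                self.p = max(0.0, self.p - self.step * self.p)
                self.dp = max(0.0, self.dp - self.step * self.dp)
            elif flip_count - self.last_improvement > self.window:  # stagnation: diversify
                self.p = min(1.0, self.p + self.step * (1.0 - self.p))
                self.dp = min(1.0, self.dp + self.step * (1.0 - self.dp))
                self.last_improvement = flip_count                  # restart the stagnation counter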
V. IRANOVELTY++: LS AND INFERENCE RULES FOR MAX-SAT
Our main purpose is to integrate inference rules in a local
search algorithm in order to improve the solving of the
Max-SAT problem. Hence, we add to AdaptNovelty++ the
necessary elements to apply the inference rule described in
Section III to form an original Max-SAT solver, IRAnovelty++.
The first issue to overcome is the integration of simulated unit propagation (necessary to build refutation trees, and thus to apply the Max-resolution based inference rule) in AdaptNovelty++. Since local search algorithms work on complete assignments while simulated unit propagation acts on partial ones, our solver simultaneously maintains two assignments: a complete one for the local search (named Ils) and a partial one for the detection of conflicts by unit propagation (Iup). Iup is filled with the flips achieved according to the AdaptNovelty++ heuristic. It should be noted that keeping the implication graph coherent is much more difficult in a LS process than in a complete solver. Indeed, a flip can give the opposite value to a variable already assigned by propagation, and thus this assignment and all the propagations derived from it must be undone.
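The following sketch illustrates one way to keep Iup coherent (hypothetical data structures, under the assumption that each propagated literal stores the clause that implied it; this is an illustration of the difficulty mentioned above, not the authors' implementation):

    class PropagationTrail:
        """Partial assignment Iup with, for each literal, the reason clause that propagated it (None for flips)."""

        def __init__(self):
            self.value = {}        # var -> literal currently assigned (+var or -var)
            self.reason = {}       # var -> reason clause, or None if the variable was assigned by a flip
            self.implied_by = {}   # var -> variables whose propagation used this variable

        def assign(self, lit, reason=None):
            var = abs(lit)
            if self.value.get(var) == -lit:
                # The new assignment contradicts the current value: undo it and
                # everything that was propagated from it before recording the new literal.
                self.undo(var)
            self.value[var] = lit
            self.reason[var] = reason
            if reason is not None:
                for other in reason:
                    if abs(other) != var:
                        self.implied_by.setdefault(abs(other), set()).add(var)

        def undo(self, var):
            for child in list(self.implied_by.get(var, set())):
                if self.reason.get(child) is not None:    # only propagated assignments are undone
                    self.undo(child)
            self.implied_by.pop(var, None)
            self.value.pop(var, None)
            self.reason.pop(var, None)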
To apply the inference rule in IRAnovelty++, the second step is the building of refutation trees when conflicts are discovered by unit propagation (i.e. when there exists a clause c such that for every literal l ∈ c, ¬l ∈ Iup). This point does not require much explanation, as it is done similarly and largely documented in the context of complete SAT solvers. Eventually, the Max-resolution based inference rule is applied according to the refutation tree, as described in Section III. In MiniMaxSat, the formula transformations made by the Max-resolution rule are kept only in the sub-part of the search tree where they have been made. Thus, the formula transformations allow a better estimation of the lower bound and act as a (partial) memorization scheme. It should be noted that this is not the case in our approach: IRAnovelty++ keeps the transformations even after the assignment which has led to them has been undone. The main interest of this use of the Max-resolution rule is that it dynamically builds Minimal Unsatisfiable Subsets (MUSes)¹, and thus it simplifies the formula by making some conflicts obvious.
We define two parameters to control the amount of transformations made and to limit the size of the resulting formula: LimUP and LimRES.
• LimUP defines the minimum percentage of the original clauses which must remain in the transformed formula. Obviously, with LimUP = 1 no transformation is allowed, while with LimUP = 0 the formula can be completely transformed.
• LimRES defines the maximum size of the clauses allowed in the refutation tree. It aims to spur the deduction of small resolvents, and thus of MUSes, and it limits the size and the number of the added clauses. Again, with values smaller than the size of the clauses of the original formula no transformation is allowed and the algorithm acts as a simple local search solver.
Moreover, IRAnovelty++ includes a new diversification mechanism which erases the current rewriting and restores the original formula after FreqRewriting ∗ |Σ| flips. This mechanism may reduce the impact of the quality of the transformation on the efficiency of IRAnovelty++. It also includes the classical diversification mechanisms presented in Section IV: the local search is restarted with a new random assignment after FreqRestart ∗ |Σ| flips and the adaptive mechanism controls the greediness of the algorithm.
IRAnovelty++ is detailed in Algorithm 2, where the following notations are used: best and Ibest are respectively the minimum number of falsified clauses found so far and the corresponding assignment, and MaxRewritings is the number of formula transformations which must be made by the algorithm. The following functions are also involved: assign (I, l) adds the literal l to the assignment I and solves the possible incoherencies in the implication graph; original_clause_rate (Σ, Σ′) computes the percentage of clauses of Σ which appear in Σ′; refutation_tree (I, c) returns the refutation tree of the empty clause c built from the assignment I; and inference_rule (Σ, R) returns the formula obtained by applying the inference rule on Σ with respect to the refutation tree R.
¹ As stated in [5], a solution of a Max-SAT instance is a covering set of the set of the MUSes of the instance.
Algorithm 2: IRAnovelty++
Data: Σ, MaxRewritings, FreqRewriting, FreqRestart, LimRES and LimUP.
Result: (b, I): b is the minimum number of falsified clauses found during the search and I the corresponding assignment.
 1 begin
 2   Ibest = ∅; best = |Σ|;
 3   for i = 0 to MaxRewritings do
 4     Σ′ = Σ; Iup = ∅;
 5     for j = 0 to FreqRewriting do
 6       Ils = random_complet_assignment ();
 7       for k = 0 to FreqRestart do
 8         if is_satisfied (Σ|Ils) then
 9           return (0, Ils);
10         cj = select_falsified_clause (Σ|Ils);
11         xi = select_var_to_flip (cj);
12         Ils = flip (Ils, xi);
13         Iup = assign (Iup, l);
14         // l is the literal corresponding to xi modulo
15         // its value in Ils
16         if original_clause_rate (Σ, Σ′) > LimUP then
17           for all empty clauses c ∈ Σ′|Iup∗ do
18             R = refutation_tree (Iup, c);
19             if ∀c′ ∈ R, |c′| ≤ LimRES then
20               Σ′ = inference_rule (Σ′, R);
21         if |{□ ∈ Σ|Ils}| < best then
22           best = |{□ ∈ Σ|Ils}|;
23           Ibest = Ils;
24   return (best, Ibest)
25 end
IRAnovelty++ starts by filling Ils with a random complete assignment (line 6). For FreqRestart steps, if Ils does not satisfy the input formula (lines 8-9), the algorithm selects and flips a variable (lines 10-12) and adds the corresponding assignment to Iup (line 13). If LimUP has not been reached, the algorithm applies unit propagation on Σ′|Iup (line 17) and, for each empty clause found, it builds the refutation tree (line 18) and applies the inference rule (lines 19-20). best and Ibest are updated (lines 21-23) if the current number of falsified clauses is smaller than the best found so far. Eventually, in the loop of line 5, IRAnovelty++ performs a restart, while in the loop of line 3 it achieves a rewriting by emptying Iup and restoring the original formula.
VI. EXPERIMENTAL RESULTS

As described above, IRAnovelty++ contains several parameters which may strongly influence its performance. In this section, our first aim is therefore to study its behavior according to the values of these parameters. We then give an overview of the results obtained with the optimized parameters.
The tests are run on blade servers with a GNU/Linux operating system. Each server is equipped with two Intel Xeon 2.4 GHz processors and 24 GB of RAM, and each processor includes 4 physical cores. Each core is devoted to one instance. The tested instances come from the Max-SAT 2011 competition. We select 150 random Max-2-SAT instances, 150 random Max-3-SAT instances and 167 crafted Max-2-SAT ones (half from bipartite instances and half from maxcut ones). The running time of each solver is fixed to 900 seconds. Optimum values have been obtained for 450 instances (300 Max-2-SAT and 150 Max-3-SAT) with the complete Max-SAT solver maxsatz [4]. We only consider in our tests the instances for which the optimum is known.
The most natural criterion to compare solver efficiency is the percentage of instances for which a solver finds the optimum. When two solvers have very close results on this first criterion, we use two additional criteria: the time to find the optimum and the number of times this optimum is reached during the resolution.

A. AdaptNovelty++ Tuning

We first study the impact of variations of the restart frequency (expressed as a factor of the size of the input formula) on AdaptNovelty++. The percentage of optimums found (Fig. 2, graphic (a)) is almost constant whatever the restart frequency value is. However, the median time to find the optimum and the number of times the optimum is found (Fig. 2, graphic (b)) are respectively lower and higher for small restart values (around one time the number of clauses of the input formula).
One can note that AdaptNovelty++ finds the optimum on all the random Max-3-SAT instances while it is significantly less efficient on the Max-2-SAT ones. We see two possible explanations. The Max-SAT competition engages complete solvers, and most of them include unit propagation based inference rules. These solvers are more efficient on Max-2-SAT instances than on Max-3-SAT ones. Consequently, the Max-2-SAT instances are more difficult to solve than the Max-3-SAT ones, especially for solvers which, like AdaptNovelty++, do not use unit propagation. Another possible explanation may be related to the fact that WalkSat-like algorithms are less efficient on over-constrained instances [15]. The Max-2-SAT instances contain more MUSes (and thus are more constrained) than the Max-3-SAT ones, and that could explain the observed results.

Figure 2. Impact of the restart frequency on AdaptNovelty++ performances: (a) percentage of optimums found; (b) first time (seconds) and number of times the optimum was found.

B. IRAnovelty++ Tuning

We have tuned each parameter of IRAnovelty++ individually and manually: the ones relative to the diversification mechanisms, FreqRestart and FreqRewriting, and the ones which control the amount of inference, LimUP and LimRES.
FreqRestart: The results are shown in Fig. 3. The best percentages of optimums found (graphic (a)) are obtained with a restart frequency between 3 and 7 times the number of clauses of the formula. The median time to find the optimum and the median number of times the optimum is found (graphic (b)) suggest that 3 is the best performing value for the restart frequency. It should be noted that FreqRestart has no impact on the size of the formula after transformation nor on the number of MUSes found during the transformation.

Figure 3. Impact of the restart frequency on IRAnovelty++ performances: (a) percentage of optimums found; (b) first time (seconds) and number of times the optimum was found.

FreqRewriting: Fig. 4 presents the impact of the rewriting frequency on the performances of IRAnovelty++. The percentage of optimums found (graphic (a)), the median time to find the optimum and the median number of times the optimum was found (graphic (b)) seem relatively constant whatever the value of FreqRewriting is. This indicates that the quality of the rewritings is relatively constant, and thus that this diversification mechanism may not be necessary. As for the restart frequency, FreqRewriting has no impact on the size of the formula after transformation nor on the number of MUSes found during the transformation.

Figure 4. Impact of the rewriting frequency on IRAnovelty++ performances: (a) percentage of optimums found; (b) first time (seconds) and number of times the optimum was found.

LimUP: This parameter defines the minimum ratio of original clauses which must remain in the transformed formula. When this ratio is reached, unit propagation is no longer applied and thus the formula is no longer transformed. Fig. 5 shows its impact on the performances of IRAnovelty++. With the value 100%, the inference rule is never applied and IRAnovelty++ acts as AdaptNovelty++. The more this value decreases, the more the percentage of instances for which IRAnovelty++ finds the optimum increases. The best performances are obtained with the value 15%. Smaller values give slightly lower performances. We see two possible explanations for this last observation: (1) too much time is spent transforming the formula and/or (2) the size of the transformed formula slows down the local search. Graphics (c) and (d) of Fig. 5 show respectively the average growth of the size of the formula (in percentage of the original one) and the average number of MUSes found. It should be noted that no MUS is found on the random Max-3-SAT instances.

Figure 5. Impact of the LimUP value on IRAnovelty++ performances: (a) percentage of optimums found; (b) first time and number of times the optimum was found; (c) median size of the transformed formulas; (d) median number of MUSes found.

LimRES: Like the previous parameter, LimRES controls the amount of inference applied on the input formulas by limiting the application of the inference rule to clauses of bounded size. For a given conflict, if a clause of the refutation tree is larger than LimRES then the inference rule is not applied and the conflict is simply ignored. As explained before, this limit has two purposes: limiting the size of the transformed formula and spurring the deduction of MUSes. Fig. 6 shows the impact of this parameter on the performances of IRAnovelty++. With values less than 2, the inference rule is never applied and the results are similar to those of AdaptNovelty++. The more the LimRES value increases, the more the size of the transformed formula and the number of deduced MUSes increase. The best results are obtained with a value of LimRES between 5 and 8. Again, no MUS is found on the random Max-3-SAT instances.

Figure 6. Impact of the LimRES value on IRAnovelty++ performances: (a) percentage of optimums found; (b) first time and number of times the optimum was found; (c) median size of the transformed formulas; (d) median number of MUSes found.
C. Results and Discussion
Table I compares the performances of IRAnovelty++
and AdaptNovelty++ with their optimized parameters (for
AdaptNovelty++, FreqRestart = 1 and for IRAnovelty++,
FreqRestart = 3, FreqRewriting = 50000, LimUP = 15%
and LimRES = 7). For each solver, the column %O gives
the percentage of instances for which the optimum has been
found, ftO gives the median time spent in reaching the
optimum for the first time and ntO gives the median number
of times the optimum has been found during the resolution.
Table I
COMPARISON OF PERFORMANCES OF AdaptNovelty++ AND IRAnovelty++.

                      AdaptNovelty++               IRAnovelty++
Instances           %O      ftO      ntO        %O     ftO      ntO
random max2sat       0       –        –        100      –        –
crafted max2sat    35.53    0.54    1340.5     100     0.08    64783
random max3sat      100     0.02   25477.5     100     0.9    106973
Total              45.77    0.04    19712      100     0.61    99854
We can observe that IRAnovelty++ with optimized parameters greatly outperforms AdaptNovelty++ on Max-2-SAT instances (random and crafted). However, the results are more contrasted on the Max-3-SAT ones. Even if IRAnovelty++ manages to find the optimum for all the instances, and more frequently than AdaptNovelty++, it spends more time finding the optimum for the first time than AdaptNovelty++ does. In our opinion, this can be explained by the way the inference rule is applied. Except in particular cases (when some clauses of the refutation tree share the same
literals), the inference rule produces new clauses of size equal to (with input clauses of size two) or greater than (with input clauses of size greater than two) that of the original ones. This is not a problem on Max-2-SAT instances, where the particular case described above happens sufficiently often to allow the production of unit and empty clauses. But this is not the case on Max-3-SAT instances. Consequently, on Max-3-SAT instances, IRAnovelty++ spends time transforming the formulas, thus increasing their sizes (and making them harder for the local search) with a smaller gain than on Max-2-SAT instances. Several solutions exist to improve this situation, such as guiding the application of the inference rule with dedicated heuristics, or more simply applying it only when the sizes of the resulting clauses are smaller than or equal to those of the original ones.
Fig. 7 compares the best solutions found by AdaptNovelty++ and IRAnovelty++ on 263 instances of the benchmark. Each point represents an instance, with the best solution found by AdaptNovelty++ on the horizontal axis and the one found by IRAnovelty++ on the vertical axis. On the represented instances, our solver generally finds better solutions than AdaptNovelty++.
Eventually, it should be noted that on a few instances IRAnovelty++ manages to find all the MUSes. In this situation, it can prove the optimality of the solution as a complete solver does. However, this is relatively rare (around 1% of the instances) and happens only on instances with a small optimum (and thus a small number of MUSes).
Figure 7. Comparison of the best solutions found by AdaptNovelty++ (horizontal axis) and IRAnovelty++ (vertical axis) on some instances of the benchmark (random and crafted Max-2-SAT).
VII. CONCLUSION
We have presented in this paper a new solver, IRAnovelty++, which includes inference rules in a classical local search algorithm to solve the Max-SAT problem. The extended tests that we have performed provide useful information on the behavior of our solver and show its robustness on many instances. The results show that the performance of the initial local search is significantly improved, which demonstrates the interest of our approach. Moreover, the way we use Max-resolution (as a full memorization mechanism for iteratively building MUSes) has not been explored before and seems promising.
As future work, we will extend our solver to Weighted and Partial Max-SAT and improve the application of the inference rule on Max-3-SAT instances. The recent efforts on solving Max-SAT have focused on exact methods, and less interest has been given to incomplete methods. Our work also aims at contributing to the development of such approaches.
REFERENCES
[1] C. H. Papadimitriou, Computational complexity. Addison-Wesley,
1994.
[2] T. Alsinet, F. Manyà, and J. Planes, “A max-sat solver with lazy data
structures,” in Advances in Artificial Intelligence - IBERAMIA’04, ser.
LNCS, vol. 3315, pp. 334–342, 2004.
[3] F. Heras, J. Larrosa, and A. Oliveras, “Minimaxsat: An efficient
weighted max-sat solver,” Journal of Artificial Intelligence Research
- JAIR, vol. 31, pp. 1–32, 2008.
[4] C. M. Li, F. Manyà, N. Mohamedou, and J. Planes, “Exploiting cycle
structures in max-sat,” in Theory and Applications of Satisfiability
Testing - SAT 2009, ser. LNCS, vol. 5584, pp. 467–480, 2009.
[5] M. Liffiton and K. Sakallah, “On finding all minimally unsatisfiable
subformulas,” in Theory and Applications of Satisfiability Testing - SAT 2005, ser. LNCS, vol. 3569, pp. 34–42, 2005.
[6] H. van Maaren and L. van Norden, “Sums of squares, satisfiability
and maximum satisfiability,” in Theory and Applications of Satisfiability Testing - SAT’05, ser. LNCS, vol. 3569, pp. 294–308, 2005.
[7] C. Gomes, W.-J. van Hoeve, and L. Leahu, “The power of semidefinite programming relaxations for max-sat,” in Integration of AI
and OR Techniques in Constraint Programming for Combinatorial
Optimization Problems - CPAIOR’06, ser. LNCS, vol. 3990, pp. 104–
118, 2006.
[8] P. Hansen and B. Jaumard, “Algorithms for the maximum satisfiability problem,” Computing, vol. 44, pp. 279–303, 1990.
[9] W. Zhang, A. Rangan, and M. Looks, “Backbone guided local
search for maximum satisfiability,” in Proceedings of the Eighteenth
International Joint Conference on Artificial Intelligence - IJCAI’03,
pp. 1179–1186, 2003.
[10] K. Smyth, H. Hoos, and T. Stützle, “Iterated robust tabu search for
max-sat,” in Advances in Artificial Intelligence, ser. LNCS, vol. 2671,
pp. 995–995, 2003.
[11] J. Larrosa and F. Heras, “Resolution in max-sat and its relation to
local consistency in weighted csps,” in Proceedings of the Nineteenth
International Joint Conference on Artificial Intelligence - IJCAI’05,
pp. 193–198, 2005.
[12] L. Kroc, A. Sabharwal, C. P. Gomes, and B. Selman, “Integrating
systematic and local search paradigms: A new strategy for maxsat,”
in Proceedings of the 21st International Joint Conference on Artificial
Intelligence - IJCAI 2009, pp. 544–551, 2009.
[13] A. Biere, M. Heule, H. van Maaren, and T. Walsh, Eds., Handbook of
Satisfiability, ser. Frontiers in Artificial Intelligence and Applications.
IOS Press, 2009, vol. 185.
[14] T. Stützle, H. Hoos, and A. Roli, “A review of the literature on
local search algorithms for max-sat,” Intellectics Group, Darmstadt
University of Technology, Germany, Tech. Rep. AIDA-01-02, 2001.
[15] H. Hoos and T. Stützle, Stochastic Local Search: Foundations and
Applications. Morgan Kaufmann Publishers Inc., 2004.
[16] F. Heras and D. Bañeres, “The impact of max-sat resolution-based
preprocessors on local search solvers,” JSAT, vol. 7, no. 2-3, pp. 89–
126, 2010.
[17] C. Li and W. Huang, “Diversification and determinism in local search
for satisfiability,” in Theory and Applications of Satisfiability Testing
- SAT’05, ser. LNCS, vol. 3569, pp. 158–172, 2005.
[18] H. H. Hoos, “An adaptive noise mechanism for walksat,” in Proceedings of the 18th National Conference on Artificial Intelligence - AAAI’02, pp. 655–660, 2002.
[19] X. Li, M. Stallmann, and F. Brglez, “A local search sat solver using
an effective switching strategy and an efficient unit propagation,”
in Theory and Applications of Satisfiability Testing, ser. LNCS, vol.
2919, pp. 276–277, 2004.
[20] E. A. Hirsch and A. Kojevnikov, “Unitwalk: A new sat solver that
uses local search guided by unit clause elimination,” Annals of
Mathematics and Artificial Intelligence, vol. 43, pp. 91–111, 2005.
[21] G. Audemard, J.-M. Lagniez, B. Mazure, and L. Sais, “Learning in
local search,” in 21st IEEE International Conference on Tools with
Artificial Intelligence - ICTAI 2009, pp. 417–424, 2009.
[22] G. Audemard, J.-M. Lagniez, B. Mazure, and L. Saïs, “Boosting
local search thanks to cdcl,” in Logic for Programming, Artificial
Intelligence, and Reasoning, ser. LNCS, vol. 6397, pp. 474–488,
2010.
[23] O. Gableske and M. Heule, “Eagleup: Solving random 3-sat using sls
with unit propagation,” in Theory and Applications of Satisfiability
Testing - SAT 2011, ser. LNCS, vol. 6695, pp. 367–368, 2011.
[24] J. A. Robinson, “A machine-oriented logic based on the resolution
principle,” Journal of the ACM, vol. 12, no. 1, pp. 23–41, 1965.
[25] F. Heras and J. Larrosa, “New inference rules for efficient max-sat
solving,” in Proceedings of the 21st national conference on Artificial
intelligence - AAAI’06, vol. 1, pp. 68–73, 2006.
[26] M. L. Bonet, J. Levy, and F. Manyà, “Resolution for max-sat,” Artificial Intelligence, vol. 171, no. 8–9, pp. 606–618, 2007.
[27] B. Selman, H. A. Kautz, and B. Cohen, “Noise strategies for improving local search,” in Proceedings of the 12th National Conference on
Artificial Intelligence - AAAI’94, vol. 1, pp. 337–343, 1994.
[28] D. A. McAllester, B. Selman, and H. A. Kautz, “Evidence for
invariants in local search,” in Proceedings of the Fourteenth National
Conference on Artificial Intelligence and Ninth Innovative Applications of Artificial Intelligence Conference - AAAI’97/IAAI’97, pp.
321–326, 1997.
[29] M. H. Liffiton, Z. S. Andraus, and K. A. Sakallah, “From max-sat to
min-unsat: Insights and applications,” University of Michigan, Tech.
Rep. CSE-TR-506-05, 2005.