
Script of Lecture 5, Approximation Algorithms
Summer term 2017
Tobias Mömke, Hang Zhou
http://www-cc.cs.uni-saarland.de/course/61/
Written by Hang Zhou
1 Maximum Satisfiability Problem (MAX SAT)
Input: n Boolean variables x1 , . . . , xn , each having value either true or false; m clauses C1 , . . . , Cm ,
each being the disjunction of some number of the variables and their negations (e. g.,
x3 ∨ x̄5 ∨ x11); and a nonnegative weight wj for each clause Cj.
Output: an assignment of true/false to the Boolean variables that maximizes the total weight
of the satisfied clauses.
Terminology:
• literal: xi , x̄i
• positive literal: xi
• negative literal: x̄i
• length of a clause: number of literals in that clause; let lj denote the length of clause Cj
2 Simple Algorithm by Randomized Sampling
Randomized sampling algorithm: Set each xi to true with probability 1/2, independently.
Theorem 1. Randomized sampling gives a (1/2)-approximation algorithm for MAX SAT.
Proof. Consider a random variable Yj such that Yj = 1 if clause Cj is satisfied and 0 otherwise.
Let W := Σ_{j=1}^m wj Yj be the total weight of the satisfied clauses. By the linearity of expectation,
we have:

  E[W] = Σ_{j=1}^m wj · E[Yj] = Σ_{j=1}^m wj · P[clause Cj satisfied],

where

  P[clause Cj satisfied] = 1 − (1/2)^{lj} ≥ 1/2,  since lj ≥ 1.

Therefore, we have

  E[W] ≥ (1/2) Σ_{j=1}^m wj ≥ (1/2) OPT,

where OPT is the optimum value for MAX SAT.
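The sampling algorithm and the weight computation can be sketched in Python. The clause encoding (a clause as a list of signed 1-indexed integers, where −i denotes the negation of xi) is an assumption chosen for illustration, not part of the lecture:

```python
import random

def randomized_sampling(n, clauses, weights, rng=None):
    """Set each x_i to true with probability 1/2, independently.

    A clause is a list of nonzero ints: +i stands for the literal x_i,
    -i for its negation (variables are 1-indexed).  Returns the random
    assignment and the total weight of the clauses it satisfies.
    """
    rng = rng or random.Random()
    assignment = [rng.random() < 0.5 for _ in range(n)]  # assignment[i-1] = value of x_i
    weight = sum(w for clause, w in zip(clauses, weights)
                 if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause))
    return assignment, weight
```

On the instance with the two unit clauses x1 and x̄1, each of weight 1, exactly one clause is satisfied under every assignment, so the returned weight is always 1: the (1/2)-guarantee is met with equality.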
3 Derandomization
To achieve a deterministic algorithm, we set the values of the variables one by one.
How to set the value of x1 in order to preserve the expected value of W ? Consider the expected
value under the two settings x1 = true and x1 = false. We have

  E[W] = E[W | x1 ← true] · P[x1 ← true] + E[W | x1 ← false] · P[x1 ← false].

Since P[x1 ← true] = P[x1 ← false] = 1/2, one of E[W | x1 ← true] and E[W | x1 ← false] is
at least E[W]. We set x1 to true if

  E[W | x1 ← true] ≥ E[W | x1 ← false],

and otherwise we set x1 to false. The expected value under this setting is at least E[W].
We extend the above argument to the general case, where we have set variables x1 , . . . , xi to
values b1 , . . . , bi respectively. We set xi+1 to true if
  E[W | x1 ← b1, . . . , xi ← bi, xi+1 ← true] ≥ E[W | x1 ← b1, . . . , xi ← bi, xi+1 ← false],
and otherwise we set xi+1 to false. The expected value under this setting is at least E[W | x1 ←
b1 , . . . , xi ← bi ].
It only remains to show how to compute the conditional expectations.
  E[W | x1 ← b1, . . . , xi ← bi] = Σ_{j=1}^m wj · E[Yj | x1 ← b1, . . . , xi ← bi]
                                 = Σ_{j=1}^m wj · P[clause Cj satisfied | x1 ← b1, . . . , xi ← bi].
The probability that a clause Cj is satisfied given that x1 ← b1 , . . . , xi ← bi is 1 if the settings
of x1 , . . . , xi already satisfy the clause, and is 1 − (1/2)k otherwise, where k is the number of
literals in the clause that remain unset.
Examples:

  P[clause x3 ∨ x̄5 ∨ x̄7 is satisfied | x1 ← true, x2 ← false, x3 ← true] = 1,
  P[clause x3 ∨ x̄5 ∨ x̄7 is satisfied | x1 ← true, x2 ← false, x3 ← false] = 1 − (1/2)^2 = 3/4.
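Since these conditional probabilities are easy to compute, the whole derandomization can be sketched as follows (with the same hypothetical clause encoding: a clause is a list of signed 1-indexed integers, −i denoting the negation of xi):

```python
def clause_sat_prob(clause, partial):
    """P[clause satisfied | variables fixed by `partial`]; every unset
    variable is true with probability 1/2.  `partial` maps variable
    index -> bool; a clause is a list of signed 1-indexed ints."""
    unset = 0
    for lit in clause:
        var = abs(lit)
        if var in partial:
            if partial[var] == (lit > 0):  # some literal is already true
                return 1.0
        else:
            unset += 1
    return 1.0 - 0.5 ** unset  # 0.0 when every literal is falsified

def derandomize(n, clauses, weights):
    """Fix x_1, ..., x_n one by one so that the conditional expectation
    E[W | fixed variables] never decreases (method of conditional
    expectations).  Returns the final assignment as a dict."""
    partial = {}
    def cond_exp():
        return sum(w * clause_sat_prob(c, partial)
                   for c, w in zip(clauses, weights))
    for i in range(1, n + 1):
        partial[i] = True
        exp_true = cond_exp()
        partial[i] = False
        exp_false = cond_exp()
        partial[i] = exp_true >= exp_false
    return partial
```

The two worked examples above correspond to `clause_sat_prob([3, -5, -7], {1: True, 2: False, 3: True})` returning 1.0 and `clause_sat_prob([3, -5, -7], {1: True, 2: False, 3: False})` returning 0.75.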
4 Randomized Rounding
a) Formulate the problem as an integer program: for every xi , create a variable yi ∈ {0, 1}
such that yi = 1 corresponds to xi ← true.
b) Relax the integer program to a linear program: replace the constraints yi ∈ {0, 1} by
0 ≤ yi ≤ 1.
c) Solve the linear program in polynomial time and obtain an optimal (fractional) solution
y ∗.
d) Set each xi to true with probability yi∗ , independently.
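Step d) can be sketched as follows, assuming the fractional values y ∗ have already been obtained from some LP solver; the clause encoding (signed 1-indexed integers, −i for a negated variable) is an assumption for illustration:

```python
import random

def randomized_rounding(y_star, clauses, weights, rng=None):
    """Set x_i to true with probability y*_i, independently.

    y_star[i-1] is the fractional LP value of variable y_i; a clause is
    a list of signed 1-indexed ints, -i denoting the negation of x_i.
    Returns the assignment and the total satisfied weight.
    """
    rng = rng or random.Random()
    assignment = [rng.random() < y for y in y_star]
    weight = sum(w for clause, w in zip(clauses, weights)
                 if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause))
    return assignment, weight
```

When y ∗ happens to be integral, the rounding is deterministic and simply reads off the assignment encoded by the LP solution.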
4.1 Integer Program
In addition to the variables yi , we introduce a variable zj for each clause Cj such that zj = 1
if the clause is satisfied and zj = 0 otherwise. For each clause Cj , let Pj be the indices of the
variables that occur positively in the clause, and let Nj be the indices of the variables that occur
negatively in the clause. Thus
  Cj = ⋁_{i∈Pj} xi ∨ ⋁_{i∈Nj} x̄i.
We have
  Σ_{i∈Pj} yi + Σ_{i∈Nj} (1 − yi) ≥ zj.
Integer program:
  maximize    Σ_{j=1}^m wj zj
  subject to  Σ_{i∈Pj} yi + Σ_{i∈Nj} (1 − yi) ≥ zj,   ∀ Cj = ⋁_{i∈Pj} xi ∨ ⋁_{i∈Nj} x̄i,
              yi ∈ {0, 1},   i = 1, . . . , n,
              zj ∈ {0, 1},   j = 1, . . . , m.
4.2 Linear Program Relaxation
  maximize    Σ_{j=1}^m wj zj
  subject to  Σ_{i∈Pj} yi + Σ_{i∈Nj} (1 − yi) ≥ zj,   ∀ Cj = ⋁_{i∈Pj} xi ∨ ⋁_{i∈Nj} x̄i,
              0 ≤ yi ≤ 1,   i = 1, . . . , n,
              0 ≤ zj ≤ 1,   j = 1, . . . , m.
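The LP data can be assembled mechanically from the clauses. The sketch below builds the standard form min c·v subject to A_ub·v ≤ b_ub and 0 ≤ v ≤ 1 that generic LP solvers accept (e.g. scipy.optimize.linprog, which is an assumption and not part of the lecture); the constraint Σ_{i∈Pj} yi + Σ_{i∈Nj} (1 − yi) ≥ zj is rewritten as −Σ_{i∈Pj} yi + Σ_{i∈Nj} yi + zj ≤ |Nj|, and maximization becomes minimization of the negated objective:

```python
def maxsat_lp(n, clauses, weights):
    """Build the LP relaxation as  min c.v,  A_ub.v <= b_ub,  0 <= v <= 1,
    over the variable vector v = (y_1, ..., y_n, z_1, ..., z_m).

    A clause is a list of signed 1-indexed ints (-i means the negation
    of x_i).  Maximizing sum_j w_j z_j becomes minimizing its negation.
    """
    m = len(clauses)
    c = [0.0] * n + [-w for w in weights]
    A_ub, b_ub = [], []
    for j, clause in enumerate(clauses):
        row = [0.0] * (n + m)
        neg = 0
        for lit in clause:
            if lit > 0:               # i in P_j: -y_i on the left-hand side
                row[lit - 1] -= 1.0
            else:                     # i in N_j: +y_i, and the 1 moves right
                row[-lit - 1] += 1.0
                neg += 1
        row[n + j] = 1.0              # +z_j
        A_ub.append(row)
        b_ub.append(float(neg))
    bounds = [(0.0, 1.0)] * (n + m)
    return c, A_ub, b_ub, bounds
```

Any LP solver fed these arrays returns an optimal fractional solution (y ∗ , z ∗ ) for step c).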
4.3 Analysis for the Randomized Rounding
Let (y ∗ , z ∗ ) be an optimal solution to the linear program relaxation. We consider the result of
the randomized rounding, by setting each xi to true with probability yi∗ independently.
Theorem 2. Randomized rounding gives a (1 − 1e )-approximation algorithm for MAX SAT.
Proof. The key is to analyze the probability that a given clause Cj is satisfied.
  P[clause Cj not satisfied] = ∏_{i∈Pj} (1 − yi∗) · ∏_{i∈Nj} yi∗
    ≤ [ (1/lj) ( Σ_{i∈Pj} (1 − yi∗) + Σ_{i∈Nj} yi∗ ) ]^{lj}        (geometric mean ≤ arithmetic mean)
    = [ 1 − (1/lj) ( Σ_{i∈Pj} yi∗ + Σ_{i∈Nj} (1 − yi∗) ) ]^{lj}    (using |Pj| + |Nj| = lj)
    ≤ (1 − zj∗/lj)^{lj},

using the linear program constraints.
Fact 1. If a function f (x) is concave on the interval [0, 1] (that is, f 00 (x) ≤ 0), and f (0) = a
and f (1) = b + a, then f (x) ≥ bx + a for x ∈ [0, 1].
The function f(zj∗) := 1 − (1 − zj∗/lj)^{lj} is concave. So from Fact 1, applied with
a = f(0) = 0 and a + b = f(1) = 1 − (1 − 1/lj)^{lj}, we have:

  P[clause Cj satisfied] ≥ 1 − (1 − zj∗/lj)^{lj} ≥ [ 1 − (1 − 1/lj)^{lj} ] · zj∗.
The expected value of the solution from the randomized rounding algorithm is
  E[W] = Σ_{j=1}^m wj · P[clause Cj satisfied]
       ≥ Σ_{j=1}^m wj · [ 1 − (1 − 1/lj)^{lj} ] · zj∗
       ≥ min_{k≥1} [ 1 − (1 − 1/k)^k ] · Σ_{j=1}^m wj zj∗.
Note that the function 1 − (1 − 1/k)^k is non-increasing in k and approaches 1 − 1/e when k tends
to infinity. So

  E[W] ≥ (1 − 1/e) Σ_{j=1}^m wj zj∗ ≥ (1 − 1/e) OPT,

since Σ_{j=1}^m wj zj∗ is the optimum value of the linear program relaxation, which is at least OPT.
The derandomization of the randomized rounding algorithm is left as an exercise.
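The monotonicity claim is easy to sanity-check numerically; the helper below (an illustration only, not part of the lecture) evaluates the per-clause factor:

```python
def guarantee(k):
    """The per-clause satisfaction factor 1 - (1 - 1/k)^k for clauses
    of length k; it decreases from 1 toward 1 - 1/e ~ 0.632."""
    return 1.0 - (1.0 - 1.0 / k) ** k
```

For example, guarantee(1) = 1 and guarantee(2) = 3/4, and the values stay strictly above 1 − 1/e for every finite k.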
5 Choosing the Better of Two Solutions
For a given clause of length k, the randomized sampling algorithm (Section 2) satisfies the clause
with probability 1 − (1/2)^k (which is increasing in k), while the randomized rounding algorithm
(Section 4) satisfies the clause with probability at least [ 1 − (1 − 1/k)^k ] · zj∗ (which is decreasing
in k).
Since the bad cases for the two algorithms are opposite, we can achieve a better approximation
guarantee by choosing the better solution of the two algorithms.
Theorem 3. Choosing the better solution of the randomized sampling algorithm and the randomized rounding algorithm gives a (3/4)-approximation algorithm for MAX SAT.
The proof is left as an exercise.
Since both algorithms can be derandomized, we then obtain a deterministic (3/4)-approximation algorithm for MAX SAT.
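Taking the better of the two solutions is straightforward to sketch; as before, the encoding (assignments as lists of booleans, clauses as lists of signed 1-indexed integers) is an illustrative assumption:

```python
def satisfied_weight(assignment, clauses, weights):
    """Total weight of the clauses satisfied by `assignment`, where
    assignment[i-1] is the value of x_i and -i denotes a negated x_i."""
    return sum(w for clause, w in zip(clauses, weights)
               if any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause))

def better_of_two(a1, a2, clauses, weights):
    """Keep whichever of the two assignments satisfies more total weight."""
    return max(a1, a2, key=lambda a: satisfied_weight(a, clauses, weights))
```

Here a1 and a2 would be the assignments produced by randomized sampling and by randomized rounding on the same instance.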