Approximation Algorithms: Asymptotic Polynomial-Time Approximation (APTAS) and Randomized Approximation Algorithms
Jens Egeblad
November 29th, 2006
Agenda
First lesson (Asymptotic Approximation):
• Bin-Packing (BPP) Formulation
• 2-Approximation
• APTAS (Asymptotic PTAS)
Second lesson (Randomized Algorithms):
• Maximum Satisfiability (MAX-SAT) Formulation
• 2 Randomized Algorithms
• Self-Reducibility
• Derandomization
Bin-Packing Formulation

Bin-Packing: Given n items with sizes a_1, …, a_n ∈ (0, 1], find a packing into unit-sized bins that minimizes the number of bins used.

Example: (figure: a sample packing of items into unit bins)

NP-hard, of course!

Applications: cutting of textile, wood, metal, etc.

Generalizable to higher dimensions (1D-BPP, 2D-BPP, 3D-BPP, …).
First-Fit

Algorithm First-Fit:
1. Pick an unpacked item a_i.
2. Run through the partially packed bins B_1, …, B_k and put a_i into the first bin in which it fits.
3. If a_i does not fit in any bin, open a new bin B_{k+1}.

Example: (figure: First-Fit packing a sequence of items)
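A minimal Python sketch of First-Fit, tracking only the load of each bin (the names and return value are illustrative, not from the slides):

```python
def first_fit(sizes):
    """Pack items into unit bins; each item goes into the first bin it fits."""
    bins = []                      # bins[i] = total size packed into bin i
    for a in sizes:
        for i, load in enumerate(bins):
            if load + a <= 1.0:    # item fits into an existing bin
                bins[i] += a
                break
        else:
            bins.append(a)         # no bin fits: open a new one
    return len(bins)

# first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]) -> number of bins used
```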
First-Fit Analysis

Theorem: First-Fit is a 2-approximation algorithm.

Proof:
• If the algorithm uses m bins, then at least m − 1 of them must be more than half full: if two bins were at most half full, First-Fit would have put the contents of the later one into the earlier one.
• Therefore

$$\sum_{i=1}^{n} a_i > \frac{m-1}{2},$$

i.e., the space we occupy is more than half the capacity of the m − 1 more-than-half-full bins.
• Since $\sum_{i=1}^{n} a_i$ is a lower bound on OPT, we get

$$\frac{m-1}{2} < \sum_{i=1}^{n} a_i \le OPT,$$

• so m − 1 < 2·OPT, and since both sides are integers, m ≤ 2·OPT. ∎
Negative Result

Theorem: Assume P ≠ NP. Then there is no approximation algorithm for BPP with an approximation guarantee of 3/2 − ε for any ε > 0.

Proof: Reduce from the set-partition problem.
• Set-partition problem (SPP): Given a set S = {a_1, …, a_n} of numbers, determine whether S can be partitioned into two sets A and Ā = S \ A such that

$$\sum_{x \in A} x = \sum_{x \in \bar{A}} x.$$

• Now assume a (3/2 − ε)-approximation algorithm A exists for BPP.
• Given an instance S = {a_1, …, a_n} of SPP, construct a BPP instance with n items and bin size $\frac{1}{2}\sum_{i=1}^{n} a_i$ (rescale so the bins have unit size).
• If the SPP instance is a 'yes' instance then OPT = 2, and A must return a packing with at most ⌊(3/2 − ε) · 2⌋ = 2 bins; if it is a 'no' instance, every packing needs at least 3 bins. So A decides SPP in polynomial time, a contradiction. ∎
First-Fit Decreasing Bin-Packing

Algorithm First-Fit Decreasing (FFD):
1. Sort the items by size in non-increasing order.
2. Run First-Fit.

[Johnson 73] showed that for FFD,

$$FFD(I) \le \frac{11}{9}\,OPT(I) + 4 = 1.222\ldots \cdot OPT(I) + 4.$$

As OPT(I) increases, the approximation guarantee approaches 11/9, because the additive 4 becomes insignificant compared to (11/9)·OPT(I).
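Expressed on top of the First-Fit sketch above (a two-line illustration, not the slides' own code):

```python
def first_fit_decreasing(sizes):
    """Sort in non-increasing order, then run First-Fit."""
    return first_fit(sorted(sizes, reverse=True))
```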
Asymptotic Approximation Guarantee

Given a minimization problem Π with instances I, let R_A(I) = A(I)/OPT(I).

Definition: Approximation guarantee:

$$R_A = \inf\{ r \ge 1 : R_A(I) \le r \ \forall I \}.$$

So the Johnson bound gives R_FFD ≤ 5: for OPT(I) ≥ 2 it yields FFD(I) ≤ 5·OPT(I), and FFD is optimal when OPT(I) = 1.

Definition: Asymptotic approximation guarantee:

$$R_A^{\infty} = \inf\{ r \ge 1 : \exists N > 0,\ R_A(I) \le r\ \forall I \text{ with } OPT(I) > N \}.$$

So R_FFD^∞ = 11/9.

(Figure: A(I)/OPT(I) plotted against OPT(I); the ratio stays below R_A and approaches R_A^∞ as OPT(I) grows.)
Approximation Schemes

For a problem Π with instances I:

FPTAS (Fully Polynomial-Time Approximation Scheme)
• For any ε > 0 there exists an algorithm A_ε s.t. A_ε(I) ≤ (1 + ε)·OPT(I). The running time of A_ε is polynomial in |I| and 1/ε.

PTAS (Polynomial-Time Approximation Scheme)
• For any ε > 0 there exists an algorithm A_ε s.t. A_ε(I) ≤ (1 + ε)·OPT(I). The running time of A_ε is polynomial in |I|.

APTAS (Asymptotic Polynomial-Time Approximation Scheme)
• For any ε > 0 there exist an algorithm A_ε and N_0 > 0 s.t. A_ε(I) ≤ (1 + ε)·OPT(I) for OPT(I) > N_0. The running time of A_ε is polynomial in |I|.
Complexity Classes

(Figure: nested complexity classes, from the inside out: P, FPTAS, PTAS, APTAS, APX, NP.)

APX: problems that have a finite approximation factor.
Asymptotic PTAS For BPP (1)

APTAS (Asymptotic Polynomial-Time Approximation Scheme)
• For any ε′ > 0 there exist an algorithm A_ε and N_0 > 0 s.t. A_ε(I) ≤ (1 + ε′)·OPT(I) for OPT(I) > N_0. The running time of A_ε is polynomial in |I|.

Theorem: For any ε with 0 < ε ≤ 1/2 there is a polynomial-time algorithm A_ε which finds a packing using at most (1 + 2ε)·OPT(I) + 1 bins.

Proof: next slides.

The algorithms A_ε form an APTAS: choose N_0 > 3/ε′ and ε = ε′/3. Then for OPT(I) > N_0:

$$A_\epsilon(I) \le (1 + 2\epsilon)\,OPT(I) + 1 \le \left(1 + 2\epsilon + \frac{1}{OPT(I)}\right) OPT(I) \le \left(1 + \frac{2\epsilon'}{3} + \frac{1}{N_0}\right) OPT(I) \le \left(1 + \frac{2\epsilon'}{3} + \frac{\epsilon'}{3}\right) OPT(I) = (1 + \epsilon')\,OPT(I).$$
Asymptotic PTAS For BPP (2)

Lemma 1: Given ε > 0 and an integer K > 0, consider the restricted BPP with items a_1, …, a_n where a_i ≥ ε and the number of distinct item sizes is K. There is a polynomial-time algorithm for this restricted BPP.

Proof:
• M = ⌊1/ε⌋ is the maximum number of items in one bin.
• The number of ways to pack a single bin is $R \le \binom{M+K}{M}$, a constant.
• The number of feasible packings is $P \le \binom{n+R}{R} \le \frac{(n+R)^R}{R!}$, polynomial in n.
• Enumerate all packings and pick the best.

Note: For M = 5 and K = 5,

$$\binom{5+5}{5} = \frac{10!}{5! \cdot 5!} = 252,$$

so for n = 20,

$$\binom{20 + 252}{252} \approx 9.8 \cdot 10^{29}.$$

(The algorithms have very high running times.)
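A quick check of these counts with Python's standard library (purely illustrative):

```python
from math import comb

M, K = 5, 5
R = comb(M + K, M)      # ways to pack one bin
print(R)                # 252
n = 20
print(comb(n + R, R))   # upper bound on feasible packings, about 9.8e29
```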
APTAS for BPP (3)

Lemma 2: Given ε > 0, consider the restricted BPP with items a_1, …, a_n where a_i ≥ ε. There is a polynomial-time algorithm with approximation guarantee (1 + ε).

Proof:
(Figure: instance I and its rounded versions J and J′.)
• Sort a_1, …, a_n by increasing size and partition them into K = ⌈1/ε²⌉ groups having at most Q = ⌊nε²⌋ items each.
• Construct instance J by rounding each item's size up to the largest size in its group. By Lemma 1 we can find an optimal packing for J in polynomial time (this packing is also feasible for I).
• Construct instance J′ by rounding each item's size down to the smallest size in its group. Then OPT(J′) ≤ OPT(I). A packing of J′ also packs all of J except the Q largest items of J (each item of J's group i is no larger than the rounded-down items of group i + 1), so:

$$OPT(J) \le OPT(J') + Q \le OPT(I) + Q.$$

• Since a_i ≥ ε, we have OPT(I) ≥ nε. Therefore

$$Q = \lfloor n\epsilon^2 \rfloor \le \epsilon\,OPT(I) \quad\Rightarrow\quad OPT(J) \le (1 + \epsilon)\,OPT(I). \qquad\blacksquare$$
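A sketch of the grouping-and-rounding step in Python (the guard for tiny n·ε² is my own assumption, not on the slides):

```python
import math

def round_up_to_groups(sizes, eps):
    """Build instance J of Lemma 2: sort, split into groups of at most
    Q = floor(n * eps^2) items, round each item up to its group maximum."""
    s = sorted(sizes)                         # increasing size
    n = len(s)
    q = max(1, math.floor(n * eps * eps))     # group size Q (guard against Q = 0)
    rounded = []
    for start in range(0, n, q):
        group = s[start:start + q]
        rounded += [group[-1]] * len(group)   # round up to the largest size in the group
    return rounded                            # at most ceil(1/eps^2) distinct sizes
```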
APTAS for BPP (4)

Theorem: For any ε with 0 < ε ≤ 1/2 there is a polynomial-time algorithm A_ε which finds a packing using at most (1 + 2ε)·OPT(I) + 1 bins.

Proof:
• Obtain I′ from I by discarding the items smaller than ε. (OPT(I′) ≤ OPT(I) = OPT.)
• Obtain a packing of I′ using Lemma 2, with at most (1 + ε)·OPT(I′) bins.
• Pack the small items into these bins in First-Fit manner. If no additional bins are needed, we are done.
• Otherwise, let M be the total number of bins. A new bin was only opened when a small item (size < ε) did not fit in any earlier bin, so M − 1 bins are more than 1 − ε full. Using this lower bound, (M − 1)(1 − ε) < OPT, we get:

$$M \le \frac{OPT}{1-\epsilon} + 1 \le (1 + 2\epsilon)\,OPT + 1, \quad \text{for } 0 < \epsilon \le \tfrac{1}{2},$$

• since

$$\frac{1}{1-\epsilon} = \frac{1+2\epsilon}{(1+2\epsilon)(1-\epsilon)} = \frac{1+2\epsilon}{1+\epsilon-2\epsilon^2} = \frac{1+2\epsilon}{1+\epsilon(1-2\epsilon)} \le 1 + 2\epsilon \quad \text{for } 0 < \epsilon \le \tfrac{1}{2}. \qquad\blacksquare$$
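A high-level sketch of how the pieces fit together; pack_restricted is a hypothetical stand-in for the Lemma 2 procedure (rounding plus Lemma 1 enumeration), which is far too slow to run in practice:

```python
def aptas_pack(sizes, eps, pack_restricted):
    """APTAS skeleton: Lemma 2 on the large items, then First-Fit the small ones."""
    large = [a for a in sizes if a >= eps]
    small = [a for a in sizes if a < eps]
    bins = pack_restricted(large, eps)   # list of bin loads, <= (1+eps)*OPT(I') bins
    for a in small:                      # First-Fit the small items
        for i, load in enumerate(bins):
            if load + a <= 1.0:
                bins[i] += a
                break
        else:
            bins.append(a)               # only happens when all bins are > 1-eps full
    return bins
```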
BPP Recap

• The First-Fit algorithm is a 2-approximation.
• Complexity classes: APTAS ⊇ PTAS ⊇ FPTAS.
• APTAS for BPP:
  – Consider a restricted problem.
  – Enumerate all solutions of the restricted problem.
  – Show that the unrestricted problem can be solved within the approximation factor by rounding the item sizes,
  – and place the small items greedily.
2nd Lesson (Randomized Algorithms)

• Maximum Satisfiability (problem formulation)
• A randomized 1/2-factor algorithm
• Self-reducibility
• Derandomization by self-reducibility and conditional expectation
• IP formulation
• An LP-relaxation-based randomized algorithm with factor 1 − 1/e
• Combining the algorithms to get a 3/4-factor randomized algorithm
Maximum Satisfiability (Formulation)

Conjunctive normal form:
A formula f on boolean variables x_1, …, x_n of the form $f = \bigwedge_{c \in C} c$, where each clause c is a disjunction of literals (a boolean variable or its negation).

Example: f = (x1 ∨ ¬x2) ∧ (¬x3 ∨ x4) ∧ (x1 ∨ x2 ∨ x4)

MAX-SAT:
Given a set of clauses C on boolean variables x_1, …, x_n ∈ {0, 1} and weights w_c ≥ 0 for each c,

$$\text{maximize} \quad \sum_{c \in C} w_c z_c,$$

where z_c ∈ {0, 1} is 1 iff c is satisfied.

Definition: Let size(c) be the number of literals in clause c.

MAXk-SAT: the restriction to instances where the number of literals in each clause is at most k.

Note: MAX-SAT and MAXk-SAT are NP-hard for k ≥ 2.
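For the code sketches below, a clause is represented as a list of signed integers: +i stands for x_i and −i for ¬x_i (a DIMACS-style encoding chosen here for illustration; the slides do not fix a representation):

```python
def satisfied_weight(clauses, weights, assignment):
    """Total weight of satisfied clauses; assignment maps variable index -> 0/1."""
    total = 0
    for clause, w in zip(clauses, weights):
        if any((assignment[abs(lit)] == 1) == (lit > 0) for lit in clause):
            total += w
    return total

# Example: f = (x1 v -x2) ^ (-x3 v x4) with unit weights
clauses = [[1, -2], [-3, 4]]
weights = [1, 1]
print(satisfied_weight(clauses, weights, {1: 0, 2: 0, 3: 1, 4: 1}))  # 2
```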
A Randomized 1/2-Factor Algorithm

Algorithm: “Flip-A-Coin”
• Set x_i to 1 with probability 1/2, for i = 1, …, n.

(Polynomial time, of course!)
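In the same representation, the whole algorithm is a few lines (illustrative):

```python
import random

def flip_a_coin(n):
    """Independently set each variable to 1 with probability 1/2."""
    return {i: random.randint(0, 1) for i in range(1, n + 1)}
```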
Analysis of the 1/2-Factor Algorithm

Define:
W, random variable: weight of the satisfied clauses.
W_c, random variable: weight contributed by clause c.

$$W = \sum_{c \in C} W_c \quad\text{and}\quad E[W_c] = w_c \cdot \Pr[c \text{ is satisfied}].$$

Lemma: If size(c) = k then E[W_c] = α_k w_c, where α_k = 1 − 1/2^k.

Proof: c is not satisfied iff all its literals are false. The probability of this is (1/2)^k. ∎

Since $\sum_{c \in C} w_c \ge OPT$ and 1 − 1/2^k ≥ 1/2 for k ≥ 1, we have:

$$E[W] = \sum_{c \in C} E[W_c] \ge \frac{1}{2} \sum_{c \in C} w_c \ge \frac{1}{2}\,OPT.$$

Note: Because Pr[c is satisfied] increases as size(c) increases, the algorithm behaves best on instances with large clauses.
Self-Reducibility

Given an oracle for the decision version of some NP optimization problem we can:
• find the value of an optimal solution by binary search;
• find an optimal solution by self-reduction (not possible for all NP problems).

Self-reducibility works by repeatedly reducing the problem and using the oracle on the reduced problem to determine properties of an optimal solution.
Self-Reducibility of MAX-SAT

Given an instance I of MAX-SAT with boolean variables x_1, …, x_n and an oracle for MAX-SAT:
• Calculate OPT(I) using the oracle and binary search.
• Create instances
  I_0, where x_1 is fixed to 0, and
  I_1, where x_1 is fixed to 1.
• Use the oracle and binary search on I_0 to determine whether x_1 is 0 in an optimal solution, i.e.

  OPT(I_0) = OPT(I)  ⇔  x_1 = 0 in some optimal solution.

• If x_1 is 0 in an optimal solution, continue with x_2 and I_0;
• otherwise continue with x_2 and I_1.
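A sketch of this loop; both oracle (returning the optimum value, e.g. via binary search over a decision oracle) and instance.fix (returning the instance with one variable fixed) are hypothetical names introduced here for illustration:

```python
def extract_assignment(instance, oracle, n):
    """Recover an optimal assignment by self-reduction from a value oracle."""
    best = oracle(instance)                # OPT(I)
    assignment = {}
    for i in range(1, n + 1):
        fixed0 = instance.fix(i, 0)        # hypothetical: fix x_i = 0
        if oracle(fixed0) == best:         # x_i = 0 is consistent with an optimum
            assignment[i], instance = 0, fixed0
        else:
            assignment[i], instance = 1, instance.fix(i, 1)
    return assignment
```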
Self-Reducibility Example

Let I be:
• C = {(x1), (¬x1 ∨ ¬x2), (x1 ∨ x2), (x2 ∨ x3)}
• weights:

  c    x1   ¬x1 ∨ ¬x2   x1 ∨ x2   x2 ∨ x3
  wc   15   30          10        30

The oracle gives us OPT(I) = 85.

• I_0 (x1 = 0):

  c    unsat.   sat.   x2   x2 ∨ x3
  wc   15       30     10   30

  The oracle gives us OPT(I_0) = 70 < 85, so set x1 = 1.

• I_{1,0} (x1 = 1, x2 = 0):

  c    sat.   sat.   sat.   x3
  wc   15     30     10     30

  The oracle gives us OPT(I_{1,0}) = 85, so set x2 = 0.

• I_{1,0,0} (x1 = 1, x2 = 0, x3 = 0):

  c    sat.   sat.   sat.   unsat.
  wc   15     30     10     30

  This assignment has value 55 < 85, so set x3 = 1. The final assignment (x1, x2, x3) = (1, 0, 1) satisfies all clauses and has value 85 = OPT(I).
Self-Reducibility Tree

A tree T is a self-reducibility tree if its internal nodes correspond to reduced problems and the leaves of each subtree are the solutions to the problem rooted at that subtree.

MAX-SAT example:

(Figure: a binary tree. The root is the original problem; at level i the branches fix x_{i+1} = 0 or x_{i+1} = 1; the leaves are the complete truth assignments.)

• Each internal node at level i corresponds to a partial setting of the variables x_1, …, x_i.
• Each leaf represents a complete truth assignment.
Derandomization (Self-Reducibility)

Let t be the self-reducibility tree of MAX-SAT, expanded s.t.
• each node is labeled with E[W | x_1 = a_1, …, x_i = a_i], where a_1, …, a_i is the partial truth assignment corresponding to this node.

Example: C = {(x1), (¬x1 ∨ ¬x2), (x1 ∨ x2), (x1 ∨ ¬x2 ∨ x3)} with weights w_1, w_2, w_3, w_4.

$$E[W] = \frac{1}{2} w_1 + \frac{3}{4} w_2 + \frac{3}{4} w_3 + \frac{7}{8} w_4.$$

In the node corresponding to I_{x1=0} = {(false), (true), (x2), (¬x2 ∨ x3)}, we have

$$E[W \mid x_1 = 0] = w_2 + \frac{1}{2} w_3 + \frac{3}{4} w_4.$$

Lemma: The conditional expectation of any node in t can be computed in polynomial time.

Proof: Take the sum of the weights of the clauses already satisfied by the partial truth assignment at this node, and add the expected weight of the reduced formula (a clause with k remaining literals is satisfied with probability 1 − 1/2^k). ∎
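A sketch of this computation in the clause representation introduced earlier (function and argument names are illustrative):

```python
def cond_expectation(clauses, weights, partial):
    """E[W | partial], where partial maps the already-fixed variables to 0/1
    and the remaining variables are set uniformly at random."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        satisfied, free = False, 0
        for lit in clause:
            v = abs(lit)
            if v in partial:
                if (partial[v] == 1) == (lit > 0):
                    satisfied = True
                    break
            else:
                free += 1
        if satisfied:
            total += w                          # already satisfied
        elif free > 0:
            total += w * (1 - 0.5 ** free)      # satisfied unless all free literals fail
    return total
```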
Derandomization (Cond. Expectation)

Theorem: We can compute, in polynomial time, a path from the root to a leaf such that the conditional expectation of each node on this path is ≥ E[W].

Proof: In each node we have

$$E[W \mid x_1 = a_1, \ldots, x_i = a_i] = \tfrac{1}{2} E[W \mid x_1 = a_1, \ldots, x_i = a_i, x_{i+1} = 0] + \tfrac{1}{2} E[W \mid x_1 = a_1, \ldots, x_i = a_i, x_{i+1} = 1],$$

because both assignments of x_{i+1} are equally likely. Therefore the child with the larger value must have conditional expectation at least as large as its parent. We can determine the conditional expectations in polynomial time by the previous lemma, and the number of steps is n. ∎

Deterministic algorithm: Start at the root of t and repeatedly select the child with the largest conditional expectation. This yields a deterministic factor-1/2 algorithm which runs in polynomial time, since evaluating each node takes polynomial time and the depth of the tree is n.
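The greedy walk, using cond_expectation from above (illustrative):

```python
def derandomized_maxsat(clauses, weights, n):
    """Descend the self-reducibility tree, always taking the child with the
    larger conditional expectation; the value never drops below E[W] >= OPT/2."""
    partial = {}
    for i in range(1, n + 1):
        e0 = cond_expectation(clauses, weights, {**partial, i: 0})
        e1 = cond_expectation(clauses, weights, {**partial, i: 1})
        partial[i] = 0 if e0 >= e1 else 1
    return partial
```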
MAX-SAT Integer Program

For each clause c ∈ C,
• S_c^+ is the set of non-negated variables, and
• S_c^- is the set of negated variables,
occurring in c.

IP:

$$\begin{aligned}
\text{maximize} \quad & \sum_{c \in C} w_c z_c \\
\text{s.t.} \quad & \sum_{i \in S_c^+} y_i + \sum_{i \in S_c^-} (1 - y_i) \ge z_c && \text{for } c \in C, \\
& z_c \in \{0, 1\} && \text{for } c \in C, \\
& y_i \in \{0, 1\} && \text{for } i \in \{1, \ldots, n\}.
\end{aligned}$$

LP-relaxation:

$$\begin{aligned}
\text{maximize} \quad & \sum_{c \in C} w_c z_c \\
\text{s.t.} \quad & \sum_{i \in S_c^+} y_i + \sum_{i \in S_c^-} (1 - y_i) \ge z_c && \text{for } c \in C, \\
& 0 \le z_c \le 1 && \text{for } c \in C, \\
& 0 \le y_i \le 1 && \text{for } i \in \{1, \ldots, n\}.
\end{aligned}$$
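One way to solve the LP-relaxation is scipy.optimize.linprog; the construction below is my own illustration, not from the slides. It stacks the y and z variables and rewrites each clause constraint in ≤ form:

```python
import numpy as np
from scipy.optimize import linprog

def solve_maxsat_lp(clauses, weights, n):
    """Solve the LP-relaxation. Variables: y_1..y_n, then z_1..z_m."""
    m = len(clauses)
    c = np.zeros(n + m)
    c[n:] = -np.asarray(weights, dtype=float)   # minimize -sum w_c z_c
    A = np.zeros((m, n + m))
    b = np.zeros(m)
    for j, clause in enumerate(clauses):
        A[j, n + j] = 1.0                       # z_c ...
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                A[j, i] -= 1.0                  # ... minus sum over S_c^+ of y_i
            else:
                A[j, i] += 1.0                  # ... plus sum over S_c^- of y_i
                b[j] += 1.0                     # ... is at most |S_c^-|
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * (n + m), method="highs")
    y, z = res.x[:n], res.x[n:]
    return y, z, -res.fun                       # (y*, z*, OPT_LP)
```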
MAX-SAT LP-Based Algorithm

Algorithm: LP-relaxation-based randomized rounding
• Solve the LP-relaxation to get a solution (y*, z*, OPT_LP).
• Set x_i = 1 with probability y_i*.

(Polynomial time, of course!)
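The rounding step, given y* from the solver sketch above (illustrative):

```python
import random

def lp_rounding(y_star):
    """Set x_i = 1 with probability y_i*."""
    return {i + 1: int(random.random() < y_star[i]) for i in range(len(y_star))}
```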
Analysis of LP-Based Algorithm (1)

Lemma: If size(c) = k, then E[W_c] ≥ β_k w_c z_c*, where β_k = 1 − (1 − 1/k)^k.

Proof: Assume w.l.o.g. that c = (x_1 ∨ … ∨ x_k). Then

$$\Pr[c \text{ satisfied}] = 1 - \prod_{i=1}^{k} (1 - y_i^*) \ge 1 - \left(1 - \frac{\sum_{i=1}^{k} y_i^*}{k}\right)^k \ge 1 - \left(1 - \frac{z_c^*}{k}\right)^k,$$

since $\frac{a_1 + \ldots + a_k}{k} \ge \sqrt[k]{a_1 \cdots a_k}$ (applied to a_i = 1 − y_i*), and y_1* + … + y_k* ≥ z_c* from the LP constraint.

Finally, g(z) = 1 − (1 − z/k)^k is concave, so on [0, 1] it lies above the chord through (0, 0) and (1, β_k):

$$g(z) \ge \left(1 - \left(1 - \tfrac{1}{k}\right)^k\right) z = \beta_k z \quad \text{for } z \in [0, 1].$$

So Pr[c satisfied] ≥ β_k z_c*. ∎

(Figure: g(z) on [0, 1] lying above the line β_k z.)
Analysis of LP-Based Algorithm (2)

β_k is a decreasing function of k. If all clauses are of size ≤ k:

$$E[W] = \sum_{c \in C} E[W_c] \ge \beta_k \sum_{c \in C} w_c z_c^* = \beta_k\,OPT_{LP} \ge \beta_k\,OPT.$$

Now, for k ∈ Z^+: β_k = 1 − (1 − 1/k)^k ≥ 1 − 1/e, so the algorithm has approximation guarantee (1 − 1/e).

Derandomization:
The algorithm can be derandomized like the factor-1/2 algorithm: in step i, determine the conditional expectation when x_i is fixed to 0 and to 1, and choose the setting with the larger conditional expectation.
Combining the Algorithms

Algorithm: Let b equal 0 or 1 with probability 1/2 each. If b = 0, run the first randomized algorithm; if b = 1, run the second randomized algorithm.

Lemma: E[W_c] ≥ (3/4) w_c z_c*.

Proof: Let k = size(c). We know:
• E[W_c | b = 0] = α_k w_c ≥ α_k w_c z_c* (since z_c* ≤ 1),
• E[W_c | b = 1] ≥ β_k w_c z_c*.

So,

$$E[W_c] = \frac{E[W_c \mid b = 0] + E[W_c \mid b = 1]}{2} \ge w_c z_c^* \cdot \frac{\alpha_k + \beta_k}{2}.$$

Since α_k + β_k ≥ 3/2 for k ∈ Z^+, we have E[W_c] ≥ (3/4) w_c z_c*. ∎

This leads to a 3/4-factor algorithm because:

$$E[W] = \sum_{c \in C} E[W_c] \ge \frac{3}{4} \sum_{c \in C} w_c z_c^* = \frac{3}{4}\,OPT_{LP} \ge \frac{3}{4}\,OPT.$$
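The combined algorithm on top of the earlier sketches (illustrative):

```python
import random

def combined_maxsat(clauses, weights, n):
    """With probability 1/2 run Flip-A-Coin, otherwise LP rounding."""
    if random.randint(0, 1) == 0:
        return flip_a_coin(n)
    y_star, _, _ = solve_maxsat_lp(clauses, weights, n)
    return lp_rounding(y_star)
```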
Derandomizing Everything

Algorithm:
• Run the first deterministic algorithm to get truth assignment τ_1.
• Run the second deterministic algorithm to get truth assignment τ_2.
• Output the better of the two assignments.

Analysis:
The average of the weights of the clauses satisfied under τ_1 and τ_2 is ≥ (3/4)·OPT. If we choose the better of the two assignments we must do at least as well.

In other words:
• We have derandomized algorithm 1 and algorithm 2, which do at least as well as the randomized algorithms.
• The combined randomized algorithm ran either of the two algorithms at random.
• Running both randomized algorithms and keeping the better result must be at least as good.
• Running both derandomized algorithms must also be at least as good.
Recap of Maximum Satisfiability

The second lesson was about:
• a simple randomized algorithm,
• derandomized via self-reducibility;
• a randomized algorithm based on LP-relaxation,
• derandomized the same way;
• combining the two randomized algorithms,
• derandomized by running both derandomized algorithms and choosing the better solution.

Hints for exercises: There is a list of hints on the webpage along with the list of exercises. Use each hint to move on when you get stuck! The hints are there because some exercises require a “good idea”, and it is more important that you understand the material than that you come up with the “good idea” yourself. So look at the hints before you give up or spend too much time on any one exercise! But don't look unless you have to.