
UMass Lowell Computer Science 91.503
Analysis of Algorithms
Prof. Karen Daniels
Spring, 2009
Lecture 7
Tuesday, 4/7/09
Approximation Algorithms
Chapter 35

Motivation: Some Techniques for Treating NP-Complete Problems
 Approximation Algorithms
 Heuristic Upper or Lower Bounds
  Greedy, Simulated Annealing, Genetic “Algorithms”, AI
 Mathematical Programming
  Linear Programming for part of problem
  Integer Programming
  Quadratic Programming...
 Search Space Exploration
  Gradient Descent, Local Search, Pruning, Subdivision
  Randomization, Derandomization
 Leverage/Impose Problem Structure
 Leverage Similar Problems
Basic Concepts
Definitions

 Approximation Algorithm
  produces a “near-optimal” solution
 An algorithm has approximation ratio r(n) if:
  max( C/C*, C*/C ) ≤ r(n)
  (the first ratio governs for a minimization problem, the second for a maximization problem)
 C = cost of algorithm’s solution
 C* = cost of optimal solution
 n = size of instance (number of inputs)
 Approximation Scheme
  (1+ε)-approximation algorithm for fixed ε; ε > 0 is an input
 Polynomial-Time Approximation Scheme (PTAS)
  running time is polynomial in n for each fixed ε
 Fully Polynomial-Time Approximation Scheme (FPTAS)
  running time is also polynomial in (1/ε)
  a constant-factor decrease in ε causes only a constant-factor running-time increase
source: 91.503 textbook Cormen et al.
Resources beyond textbook…

 “Approximation Algorithms for NP-Hard Problems”, editor Dorit Hochbaum, PWS Publishing Co., 1997.
 “Approximation Algorithms” by Vazirani, 2001.
 “Computers and Intractability” by Garey & Johnson, 1979.
 UMass Lowell course taught by Prof. Jie Wang.
Overview

 VERTEX-COVER
  Polynomial-time 2-approximation algorithm
 TSP
  TSP with triangle inequality: polynomial-time 2-approximation algorithm
  TSP without triangle inequality: negative result on polynomial-time r(n)-approximation algorithms
 MAX-3-CNF Satisfiability
  Randomized r(n)-approximation algorithm
 SET-COVER
  Polynomial-time r(n)-approximation algorithm, where r(n) is a logarithmic function of set size
 SUBSET-SUM
  Fully polynomial-time approximation scheme
Vertex Cover
Polynomial-Time 2-Approximation Algorithm

Vertex Cover
A Vertex Cover of an undirected graph G=(V,E) is a subset V' ⊆ V such that if (u, v) ∈ E, then u ∈ V' or v ∈ V' (or both).
(Figure: a graph on vertices A–F with a vertex cover of size 2 highlighted.)
NP-Complete
source: Garey & Johnson
Vertex Cover
(Figure: step-by-step run of APPROX-VERTEX-COVER on a graph with vertices a–g.)
source: 91.503 textbook Cormen et al.
Vertex Cover
Theorem: APPROX-VERTEX-COVER is a polynomial-time 2-approximation algorithm.
Proof: Let C* be an optimal cover, C the cover from APPROX-VERTEX-COVER, and A the set of edges chosen by APPROX-VERTEX-COVER.
Observe: no 2 edges of A share a vertex, due to the removal of incident edges.
Any vertex cover must include ≥ 1 endpoint of each edge in A, so |C*| ≥ |A|.
APPROX-VERTEX-COVER adds both endpoints of each edge of A, so |C| = 2|A|.
By transitivity: |C| = 2|A| ≤ 2|C*|, i.e. |C| / |C*| ≤ 2.
The algorithm runs in time polynomial in n.
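The CLRS algorithm from the theorem above can be written in a few lines of Python (a minimal sketch; the edge-set representation and function name are my own):

```python
def approx_vertex_cover(edges):
    """APPROX-VERTEX-COVER: repeatedly pick an arbitrary remaining edge
    (u, v), add both endpoints to the cover, and remove every edge
    incident to u or v.  The result is at most twice the optimum."""
    cover = set()
    remaining = set(edges)
    while remaining:
        u, v = remaining.pop()              # arbitrary remaining edge
        cover.update((u, v))                # take both endpoints
        remaining = {(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)}
    return cover
```

The chosen edges form a matching, which is exactly why the size bound in the proof holds.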
Traveling Salesman
 TSP with triangle inequality
  Polynomial-time 2-approximation algorithm
 TSP without triangle inequality
  Negative result on polynomial-time r(n)-approximation algorithms

Hamiltonian Cycle (textbook Section 34.2)
A Hamiltonian Cycle of an undirected graph G=(V,E) is a simple cycle that contains each vertex in V.
NP-Complete
source: 91.503 textbook Cormen et al.
Traveling Salesman Problem (TSP)
A TSP Tour of a complete, undirected, weighted graph G=(V,E) is a Hamiltonian Cycle with a designated starting/ending vertex.
(Figure: example tour on vertices u, v, w, x.)
TSP Decision Problem:
TSP = { ⟨G, c, k⟩ : c : V × V → Z is a cost function providing edge weights, k ∈ Z, and G has a TSP tour of cost ≤ k }
NP-Complete
source: 91.503 textbook Cormen et al.
Minimum Spanning Tree: Greedy Algorithms
for an Undirected, Connected, Weighted Graph G=(V,E)
Produces a minimum-weight tree of edges that includes every vertex.
 Kruskal’s algorithm
  Invariant: minimum-weight spanning forest; becomes a single tree at the end
  Time: O(|E| lg |E|) given fast FIND-SET, UNION
 Prim’s algorithm
  Invariant: minimum-weight tree; spans all vertices at the end
  Time: O(|E| lg |V|) = O(|E| lg |E|); slightly faster with a fast priority queue
(Figure: example graph on vertices A–G with edge weights 1–8.)
source: 91.503 textbook Cormen et al.
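Of the two greedy MST algorithms, Prim's is the one the TSP approximation that follows builds on; a minimal Python sketch (adjacency-dict representation and function name are my own):

```python
import heapq

def prim_mst(graph, root):
    """Prim's algorithm: grow a single tree from `root`, maintaining the
    minimum-weight-tree invariant with a priority queue of frontier
    edges.  `graph` maps each vertex to a dict of neighbor -> weight.
    Returns the MST edges; runs in O(|E| lg |V|) time."""
    visited = {root}
    frontier = [(w, root, v) for v, w in graph[root].items()]
    heapq.heapify(frontier)
    mst = []
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue                        # stale frontier entry
        visited.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v].items():
            if x not in visited:
                heapq.heappush(frontier, (wx, v, x))
    return mst
```

Leaving stale entries in the heap and skipping them on pop avoids a decrease-key operation, which plain `heapq` does not support.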
TSP with Triangle Inequality
Cost function satisfies the triangle inequality: for all u, v, w ∈ V, c(u, w) ≤ c(u, v) + c(v, w).
(Figure: complete graph on u, v, w, x with edge weights 3, 4, 5.)
(Figure: MST with “full walk” = abcbhbadefegeda; the final approximate tour removes vertex duplication; the optimal tour is not necessarily found by the approximation algorithm.)
source: 91.503 textbook Cormen et al.
TSP with Triangle Inequality
Theorem: APPROX-TSP-TOUR is a polynomial-time 2-approximation algorithm for TSP with triangle inequality.
Proof:
The algorithm runs in time polynomial in n = |V|.
Let H* be an optimal tour and T be a MST.
c(T) ≤ c(H*)   (since deleting 1 edge from a tour creates a spanning tree)
Let W be a full walk of T (lists vertices when they are first visited and when returned to after visiting a subtree).
c(W) = 2c(T)   (since the full walk traverses each edge of T twice)
So c(W) ≤ 2c(H*).
Now make W into a tour H by deleting duplicate vertices; the triangle inequality gives c(H) ≤ c(W).
Therefore c(H) ≤ 2c(H*).
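The proof above suggests the implementation directly: build an MST, then shortcut its full walk into a preorder tour. A minimal sketch (Prim's algorithm inlined; representation and names are my own):

```python
import heapq

def approx_tsp_tour(graph, root):
    """APPROX-TSP-TOUR: compute an MST with Prim's algorithm, then list
    the vertices in a preorder walk of the tree (the full walk with
    duplicate vertices shortcut out).  Under the triangle inequality the
    tour costs at most twice the optimal tour."""
    # Prim's MST, recording children so we can walk the tree afterward.
    children = {v: [] for v in graph}
    visited = {root}
    frontier = [(w, root, v) for v, w in graph[root].items()]
    heapq.heapify(frontier)
    while frontier:
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        children[u].append(v)
        for x, wx in graph[v].items():
            if x not in visited:
                heapq.heappush(frontier, (wx, v, x))
    # Preorder walk of the MST.
    tour, stack = [], [root]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour + [root]                    # return to the start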
TSP without Triangle Inequality
Theorem: If P ≠ NP, then for any constant r ≥ 1 there is no polynomial-time approximation algorithm with ratio r for TSP without triangle inequality.
Proof: (by contradiction) Suppose there is one --- call it A.
We show how to use A to solve the NP-complete Hamiltonian Cycle problem: contradiction!
Convert an instance G=(V,E) of Hamiltonian Cycle into an instance of TSP (in polynomial time):
 E' = {(u, v) : u, v ∈ V, u ≠ v}   (G'=(V,E') is the complete graph on V)
 c(u, v) = 1 if (u, v) ∈ E, and c(u, v) = r|V| + 1 otherwise   (assigns an integer cost to each edge in E')
For the TSP problem (G', c):
 G has a Hamiltonian Cycle ⇒ (G', c) contains a tour of cost |V|.
 G does not have a Hamiltonian Cycle ⇒ any tour of G' must use some edge not in E, so its cost is ≥ (r|V| + 1) + (|V| − 1) = r|V| + |V| > r|V|.
Now use A on (G', c)! A finds a tour of cost at most r · (cost of an optimal tour of G'):
 G has a Hamiltonian Cycle ⇒ optimal cost = |V| ⇒ A finds a tour of cost at most r|V|.
 G does not have a Hamiltonian Cycle ⇒ A finds a tour of cost > r|V|.
So A decides Hamiltonian Cycle in polynomial time; if P ≠ NP, solving the NP-complete Hamiltonian Cycle problem in polynomial time is a contradiction!
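The gap-producing cost function in the reduction above is simple enough to write out (a sketch; the edge-list encoding and function name are my own):

```python
def tsp_gadget_cost(graph_edges, vertices, r):
    """Cost function from the reduction: edges of G cost 1, every other
    vertex pair costs r*|V| + 1.  Then G has a Hamiltonian cycle iff the
    resulting TSP instance has a tour of cost |V|, while any tour using
    a non-edge already costs more than r*|V|."""
    n = len(vertices)
    edge_set = {frozenset(e) for e in graph_edges}
    return {frozenset((u, v)):
            (1 if frozenset((u, v)) in edge_set else r * n + 1)
            for u in vertices for v in vertices if u != v}
```

The r|V| + 1 penalty is what makes the gap between "cost |V|" and "cost > r|V|" wide enough that any r-approximation would distinguish the two cases exactly.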
MAX-3-CNF Satisfiability
 3-CNF Satisfiability Background
 Randomized Algorithms
 Randomized MAX-3-CNF SAT Approximation Algorithm

MAX-3-CNF Satisfiability
 Background on Boolean Formula Satisfiability
  Boolean Formula Satisfiability: an instance of language SAT is a boolean formula φ consisting of:
   n boolean variables: x1, x2, ..., xn
   m boolean connectives: boolean functions with 1 or 2 inputs and 1 output
    e.g. AND, OR, NOT, implication, iff
   parentheses
  The notions of truth and satisfying assignments apply.
  SAT = { ⟨φ⟩ : φ is a satisfiable boolean formula }
NP-Complete
source: 91.503 textbook Cormen et al.
MAX-3-CNF Satisfiability (continued)
 Background on 3-CNF-Satisfiability
  An instance of language 3-CNF-SAT is a boolean formula φ in CNF:
   literal: a variable or its negation
   CNF = conjunctive normal form
    conjunction: AND of clauses
    clause: OR of literal(s)
   3-CNF: each clause has exactly 3 distinct literals
 MAX-3-CNF Satisfiability: optimization version of 3-CNF-SAT
  Maximization: satisfy as many clauses as possible
  Input restrictions:
   exactly 3 literals per clause
   no clause contains both a variable and its negation (this assumption can be removed)
NP-Complete
source: 91.503 textbook Cormen et al.
Definition
 A Randomized Algorithm has approximation ratio r(n) if, for the expected cost C of the solution produced by the randomized algorithm:
  max( C/C*, C*/C ) ≤ r(n)
  (the first ratio governs for a minimization problem, the second for a maximization problem)
 n = size of input
source: 91.503 textbook Cormen et al.
Randomized Approximation Algorithm for MAX-3-CNF SAT
Theorem: Given an instance of MAX-3-CNF satisfiability with n variables x1, x2, ..., xn and m clauses, the randomized algorithm that independently sets each variable to 1 with probability 1/2 and to 0 with probability 1/2 is a randomized 8/7-approximation algorithm.
Proof: Independently set each variable to 0 or 1, each with probability 1/2.
Indicator random variable: Yi = I{event that clause i is satisfied}.
A clause fails only when all 3 of its distinct literals are false, each independently with probability 1/2, so:
 Pr{clause i is not satisfied} = (1/2)^3 = 1/8, and Pr{clause i is satisfied} = 1 − 1/8 = 7/8.
Lemma: Given a sample space S and an event A in S, let X_A = I{A}. Then E[X_A] = Pr{A}.
By the Lemma: E[Yi] = 7/8.
Let Y = Y1 + Y2 + ... + Ym, the number of satisfied clauses.
 E[Y] = E[Σ_{i=1}^{m} Yi] = Σ_{i=1}^{m} E[Yi] = Σ_{i=1}^{m} 7/8 = 7m/8 = expected cost C.
Since the number of satisfied clauses is ≤ m, we have C* ≤ m, so:
 C*/C ≤ m / (7m/8) = 8/7 = r(n).
source: 91.503 textbook Cormen et al.
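The randomized algorithm from the theorem is almost a one-liner. A sketch in Python, where a clause is a tuple of nonzero ints with +i meaning variable i and −i its negation (that encoding, and the names, are my own):

```python
import random

def random_max3sat(n_vars, clauses, rng=random):
    """Randomized MAX-3-CNF approximation: set each variable
    independently to True/False with probability 1/2, then count the
    satisfied clauses.  In expectation 7m/8 clauses are satisfied,
    giving ratio 8/7."""
    assignment = {i: rng.random() < 0.5 for i in range(1, n_vars + 1)}
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses)
    return assignment, satisfied
```

Passing `rng` explicitly makes the coin flips replaceable, which is also how the textbook's derandomization by conditional expectations would hook in.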
Set Cover
 Greedy Approximation Algorithm
  Polynomial-time r(n)-approximation algorithm, where r(n) is a logarithmic function of set size

Set Cover Problem
Instance (X, F):
 finite set X (e.g. of points)
 family F of subsets of X such that X = ∪_{S∈F} S
Problem: Find a minimum-sized subset C ⊆ F whose members cover all of X: X = ∪_{S∈C} S
NP-Complete
source: 91.503 textbook Cormen et al.
Greedy Set Covering Algorithm
Greedy: select set that covers the most uncovered elements
source: 91.503 textbook Cormen et al.
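The greedy rule stated above translates directly into Python (a minimal sketch of GREEDY-SET-COVER; the list-of-sets representation and function name are my own; assumes F covers X, as the problem definition requires):

```python
def greedy_set_cover(X, F):
    """GREEDY-SET-COVER: repeatedly pick the set in F covering the most
    still-uncovered elements, until everything is covered.  Achieves
    approximation ratio H(max |S|), logarithmic in the largest set."""
    uncovered = set(X)
    cover = []
    while uncovered:
        best = max(F, key=lambda S: len(uncovered & set(S)))
        cover.append(best)
        uncovered -= set(best)
    return cover
```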
Set Cover
Theorem: GREEDY-SET-COVER is a polynomial-time r(n)-approximation algorithm for r(n) = H(max{|S| : S ∈ F}),
where the dth harmonic number is H_d = Σ_{i=1}^{d} 1/i = H(d), with H(0) = 0.
Proof:
The algorithm runs in time polynomial in n.
Let S_i = the ith subset selected by the algorithm.
Assign each element x ∈ X a cost c_x, paid only when x is covered for the first time:
 c_x = 1 / |S_i − (S_1 ∪ S_2 ∪ ... ∪ S_{i−1})|, where S_i is the set that covers x for the first time.
(Selecting S_i costs 1, spread evenly across all elements covered for the first time by S_i; the denominator counts those elements, since S_1 ∪ ... ∪ S_{i−1} are the elements already covered by the first i−1 chosen sets.)
Set Cover (proof continued)
Theorem: GREEDY-SET-COVER is a polynomial-time r(n)-approximation algorithm for r(n) = H(max{|S| : S ∈ F}).
Proof: (continued)
Let C* be an optimal cover and C be the cover from GREEDY-SET-COVER.
Since 1 unit is charged at each stage of the algorithm:
 |C| = Σ_{x∈X} c_x
Each x is in ≥ 1 set S in C*, so the cost assigned to the optimal cover satisfies:
 Σ_{S∈C*} Σ_{x∈S} c_x ≥ Σ_{x∈X} c_x
Therefore:
 |C| ≤ Σ_{S∈C*} Σ_{x∈S} c_x
Set Cover (proof continued)
Theorem: GREEDY-SET-COVER is a polynomial-time r(n)-approximation algorithm for r(n) = H(max{|S| : S ∈ F}).
Proof: (continued)
How does this relate to harmonic numbers? (Recall H_d = Σ_{i=1}^{d} 1/i = H(d).)
We’ll show that:
 Σ_{x∈S} c_x ≤ H(|S|) for any set S ∈ F
And then conclude that:
 |C| ≤ Σ_{S∈C*} H(|S|) ≤ |C*| · H(max{|S| : S ∈ F}),
which will finish the proof.
Set Cover (proof continued)
Proof of: Σ_{x∈S} c_x ≤ H(|S|) for any set S ∈ F.
For some set S, let u_i = |S − (S_1 ∪ S_2 ∪ ... ∪ S_i)| = the number of elements of S remaining uncovered after S_1, ..., S_i are selected.
Let u_0 = |S| and let k = the least index for which u_k = 0.
Then u_{i−1} ≥ u_i, and u_{i−1} − u_i elements of S are covered for the first time by S_i, so:
 Σ_{x∈S} c_x = Σ_{i=1}^{k} (u_{i−1} − u_i) · 1/|S_i − (S_1 ∪ S_2 ∪ ... ∪ S_{i−1})|
Due to the greedy nature of the algorithm, |S_i − (S_1 ∪ ... ∪ S_{i−1})| ≥ |S − (S_1 ∪ ... ∪ S_{i−1})| = u_{i−1} (otherwise the algorithm would have chosen S instead of S_i), so:
 Σ_{x∈S} c_x ≤ Σ_{i=1}^{k} (u_{i−1} − u_i) · (1/u_{i−1})
  = Σ_{i=1}^{k} Σ_{j=u_i+1}^{u_{i−1}} (1/u_{i−1})
  ≤ Σ_{i=1}^{k} Σ_{j=u_i+1}^{u_{i−1}} (1/j)   (since j ≤ u_{i−1})
  = Σ_{i=1}^{k} ( Σ_{j=1}^{u_{i−1}} 1/j − Σ_{j=1}^{u_i} 1/j )
  = Σ_{i=1}^{k} ( H(u_{i−1}) − H(u_i) )   (telescoping sum)
  = H(u_0) − H(u_k) = H(u_0) − H(0) = H(u_0) = H(|S|).
Subset-Sum
 Exponential-Time Exact Algorithm
 Fully Polynomial-Time Approximation Scheme

Subset-Sum Problem
Instance (S, t): a set of positive integers S = {x1, x2, ..., xn} and a positive integer target t.
Decision Problem: Is there a subset S' ⊆ S such that Σ_{s∈S'} s = t?
Optimization Problem: seek the subset whose sum is largest without exceeding t.
NP-Complete
source: 91.503 textbook Cormen et al.
Exponential-Time Exact Algorithm
Let P_i = the set of values obtainable by selecting a subset of {x1, x2, ..., xi} and summing its elements.
Identity: P_i = P_{i−1} ∪ (P_{i−1} + x_i)
Let L_i = the sorted list of every element of P_i that is ≤ t, i.e. the sums of subsets of {x1, x2, ..., xi} that are ≤ t.
MERGE-LISTS(L, L') returns the sorted merge of sorted lists L and L', with duplicates removed.
source: 91.503 textbook Cormen et al.
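The identity above gives the exact algorithm in one loop (a sketch; using a Python set in place of MERGE-LISTS to handle the sorted merge and duplicate removal):

```python
def exact_subset_sum(S, t):
    """EXACT-SUBSET-SUM: L_i = sorted list of all subset sums of the
    first i elements that are <= t, built via the identity
    P_i = P_{i-1} union (P_{i-1} + x_i).  Worst-case exponential time,
    since |L_i| can double at every step.  Returns the largest
    achievable sum <= t."""
    L = [0]
    for x in S:
        # MERGE-LISTS(L, L + x): merge, drop duplicates and sums > t.
        L = sorted(set(L) | {s + x for s in L if s + x <= t})
    return L[-1]
```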
Fully Polynomial-Time Approximation Scheme
Theorem: APPROX-SUBSET-SUM is a fully polynomial-time approximation scheme for subset-sum.
Proof: The key step bounds the trimmed list lengths: |L_i| ≤ (4n ln t)/ε + 2 (see textbook).
source: 91.503 textbook Cormen et al.
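APPROX-SUBSET-SUM is the exact algorithm plus trimming: after each merge, values within a factor (1 + δ) of one another are collapsed, with δ = ε/2n, which is what keeps |L_i| polynomial. A sketch (list representation and names are my own):

```python
def trim(L, delta):
    """TRIM: scan a sorted list, keeping an element only when it exceeds
    the last kept value by a factor of more than (1 + delta)."""
    trimmed = [L[0]]
    for y in L[1:]:
        if y > trimmed[-1] * (1 + delta):
            trimmed.append(y)
    return trimmed

def approx_subset_sum(S, t, eps):
    """APPROX-SUBSET-SUM: like the exact algorithm, but trim each list
    with delta = eps/(2n) and drop sums > t, so |L_i| stays polynomial
    in n, 1/eps, and ln t.  The returned value z satisfies
    y*/z <= 1 + eps, where y* is the optimal sum."""
    n = len(S)
    L = [0]
    for x in S:
        L = sorted(set(L) | {s + x for s in L})   # MERGE-LISTS(L, L + x)
        L = trim(L, eps / (2 * n))
        L = [s for s in L if s <= t]
    return L[-1]
```

On the instance S = {104, 102, 201, 101}, t = 308 with ε = 0.40, this returns 302, within the promised factor of the optimum 307.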