Complexity
Lecture 16: Non-Approximability
Andrei Bulatov, SFU Computing Science
Optimization and Errors
In an optimization problem, for every possible instance x we have:
a set S(x) of feasible solutions; for every solution y ∈ S(x), a
positive goodness m(x,y); and an optimization parameter opt ∈ {min, max}.
To solve an optimization problem we must find, for any given x ∈ I, a
solution y ∈ S(x) such that
m(x, y) = opt{ m(x, z) | z ∈ S(x) }
The optimal value will be denoted OPT(x)
The relative error of a solution y (with respect to an instance x) is
| OPT(x) − m(x, y) | / OPT(x)
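As a small illustration (the function name is ours, not from the slides), the relative error can be computed as:

```python
def relative_error(opt_value, m_value):
    """Relative error |OPT(x) - m(x,y)| / OPT(x) of a solution
    with goodness m_value, given the optimal value opt_value."""
    return abs(opt_value - m_value) / opt_value
```

For a minimization problem with OPT(x) = 10, a solution of goodness 12 has relative error 0.2.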
FPAS
We have seen that some optimization problems can be approximated
within some fixed relative error in polynomial time
It turns out that for some NP-hard optimization problems we can do
even better:
some problems can be efficiently approximated within any desired
relative error
Definition
An algorithm A is a fully polynomial approximation scheme (FPAS) for an
optimization problem if, for any instance x and any ε > 0, A
computes a feasible solution with relative error less than ε in
time polynomial in |x| and 1/ε
Example

Minimal Partition
Instance: A collection of positive integers x₁, x₂, …, xₙ
Objective: Find a subset I of {1, 2, …, n} that minimizes
max( ∑_{i∈I} x_i , ∑_{i∉I} x_i )

Like Knapsack, this problem has a dynamic programming
solution with time complexity in O(nS), where S = ∑_{i=1}^n x_i, and
hence in O(n² · x_max)
This is a pseudo-polynomial time algorithm
We can use this to get an approximate solution efficiently by truncating
the x values
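A minimal sketch of the O(nS) dynamic-programming solution (the function name is ours): enumerate all achievable subset sums, then pick the split that minimizes the larger side.

```python
def min_partition(xs):
    """Exact Minimal Partition by dynamic programming over achievable
    subset sums: O(n*S) time and space, where S = sum(xs)."""
    S = sum(xs)
    reachable = {0}                      # subset sums achievable so far
    for x in xs:
        reachable |= {s + x for s in reachable}
    # the larger side max(s, S - s) is minimized over achievable sums s
    return min(max(s, S - s) for s in reachable)
```

For example, min_partition([3, 1, 4, 2, 2]) splits the total 12 into 6 + 6 and returns 6.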
• Replace each x_i with y_i = ⌊ x_i / 10^t ⌋
• Solve the new version, obtaining partition J with sum S_J
• We have
max( ∑_{i∈J} x_i , ∑_{i∉J} x_i )
  ≤ max( ∑_{i∈J} (10^t y_i + 10^t) , ∑_{i∉J} (10^t y_i + 10^t) )
  ≤ max( ∑_{i∈J} 10^t y_i , ∑_{i∉J} 10^t y_i ) + n·10^t
  ≤ max( ∑_{i∈I} 10^t y_i , ∑_{i∉I} 10^t y_i ) + n·10^t   (J is optimal for the truncated instance)
  ≤ max( ∑_{i∈I} x_i , ∑_{i∉I} x_i ) + n·10^t
  = OPT(x) + n·10^t
• Hence
| OPT(x) − S_J | / OPT(x) ≤ n·10^t / OPT(x) ≤ n·10^t / x_max
(since OPT(x) ≥ x_max: the side containing the largest element has sum at least x_max)
• Setting t = ⌊ log₁₀( ε·x_max / n ) ⌋ gives a relative error ≤ ε
• The time complexity of the truncated problem is in
O(n² y_max) = O(n² x_max / 10^t) = O(n³ / ε)
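The whole truncation scheme can be sketched as follows (a sketch under the analysis above; the exact DP on the truncated values is inlined so the snippet is self-contained, and the helper name is ours):

```python
import math

def approx_min_partition(xs, eps):
    """FPAS sketch for Minimal Partition: truncate each x_i to
    y_i = floor(x_i / 10^t) with t = floor(log10(eps * x_max / n)),
    solve the truncated instance exactly, and evaluate the resulting
    partition on the original values."""
    n, x_max = len(xs), max(xs)
    t = max(0, math.floor(math.log10(eps * x_max / n)))
    scale = 10 ** t
    ys = [x // scale for x in xs]
    # exact DP on the truncated values, remembering one achieving subset
    reach = {0: frozenset()}             # achievable sum -> index subset
    for i, y in enumerate(ys):
        for s, idx in list(reach.items()):
            reach.setdefault(s + y, idx | {i})
    S = sum(ys)
    J = reach[min(reach, key=lambda s: max(s, S - s))]
    # evaluate the chosen partition J on the ORIGINAL values
    a = sum(xs[i] for i in J)
    return max(a, sum(xs) - a)
```

With eps = 0.5 and xs = [101, 99, 200], the values are truncated to [10, 9, 20] and the returned split is still the optimal 200.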
Which Problems Have an FPAS
The truncation technique we have just used is quite general and can
be applied to many problems with a pseudo-polynomial time algorithm
Theorem
There is an FPAS for Minimal Partition,
Knapsack, Subset Sum, …
Conversely, it can be shown that NP-hard optimization problems whose
instances do not contain numbers normally do not have an FPAS
(unless P = NP)
Theorem
If P  NP, then there is no FPAS for Max-SAT,
Max-2-SAT, Vertex Cover, …
TSP
Theorem
If P  NP, then TSP is not approximable
Proof
Suppose for contradiction that there is an ε-approximating algorithm for
TSP; that is, for any collection of cities and distances between them,
the algorithm finds a tour of length l such that
( l − OPT ) / OPT ≤ ε
We use this algorithm to solve Hamilton Circuit in
polynomial time
For any graph G = (V,E), construct an instance of TSP as follows:
• Let the set of cities be V
• Let the distance between a pair of cities v₁, v₂ be
d(v₁, v₂) = 1 if (v₁, v₂) ∈ E, and 2(1+ε)|V| otherwise
• If G has a Hamilton Circuit, then it has a tour of length |V|
• Otherwise the minimal tour has length at least 2(1+ε)|V|
Hence the ε-approximating algorithm would find a tour of length l such that
l / OPT ≤ 1 + ε, i.e. l ≤ (1 + ε)·OPT
So if G has a Hamilton Circuit, the algorithm returns a tour of length at
most (1+ε)|V| < 2(1+ε)|V|, and otherwise every tour is at least that long;
comparing l with 2(1+ε)|V| decides Hamilton Circuit
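The reduction can be written out directly (a sketch; the function names are ours). The gap between |V| and 2(1+ε)|V| is what makes the approximate tour length decisive:

```python
def hamilton_to_tsp(vertices, edges, eps):
    """TSP instance from G = (V, E): distance 1 on edges of G,
    2(1+eps)|V| on non-edges."""
    n = len(vertices)
    far = 2 * (1 + eps) * n
    E = {frozenset(e) for e in edges}
    return {(u, v): (1 if frozenset((u, v)) in E else far)
            for u in vertices for v in vertices if u != v}

def decide_hamilton(approx_tour_length, n, eps):
    """If G is Hamiltonian, the eps-approximate tour has length
    l <= (1+eps)*n < 2(1+eps)*n; otherwise every tour is >= 2(1+eps)*n."""
    return approx_tour_length < 2 * (1 + eps) * n
```

For the path 1-2-3 with ε = 1, the non-edge (1, 3) gets distance 2·2·3 = 12, far above any Hamiltonian tour length of 3.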
More Non-Approximability
Max Independent Set
Instance: A graph G = (V,E).
Objective: Find a largest set M ⊆ V such that no two vertices from
M are connected
Max Clique
Instance: A graph G = (V,E).
Objective: Find a largest clique in G
Observation
For a graph G with n vertices, the following conditions are
equivalent:
• G has a vertex cover of size k
• G has an independent set of size n − k
• the complement graph Ḡ has a clique of size n − k
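The observation is easy to check on a small example (a brute-force sketch; Ḡ denotes the complement of G, so a clique of Ḡ is exactly an independent set of G):

```python
def is_vertex_cover(E, C):
    """Every edge has at least one endpoint in C."""
    return all(u in C or v in C for (u, v) in E)

def is_independent(E, M):
    """No edge joins two vertices of M (equivalently, M is a
    clique of the complement graph)."""
    return all(not (u in M and v in M) for (u, v) in E)

# path 1-2-3-4: n = 4, k = 2
V, E = {1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4)]
cover = {2, 3}
assert is_vertex_cover(E, cover)         # vertex cover of size k
assert is_independent(E, V - cover)      # independent set of size n - k
```

The complement of the cover is always independent: an edge with both endpoints outside C would be uncovered.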
Theorem
If P  NP, then Max Independent Set and
Max Clique are not approximable
Proof
We prove a weaker result:
if there is an ε-approximating algorithm for Max
Independent Set, then there is an FPAS for this problem
For a graph G = (V,E), the square of G is the graph G² such that
• its vertex set is V × V = {(u, v) | u, v ∈ V}
• {(u, u′), (v, v′)} is an edge if and only if
{u, v} ∈ E, or u = v and {u′, v′} ∈ E
[Figure: a 3-vertex graph G (vertices 1, 2, 3) and its square G² on the vertex set {1, 2, 3} × {1, 2, 3}]
Complexity
Lemma
A graph G has an independent set of size k if and only if G 2
2
has a independent set of size k
If I is an independent set of G then {(u, v) | u, v  I }
is an independent set of G 2
Conversely, if I 2 is an independent set of G 2 with k 2 vertices, then
•
I  {u | (u, v )  I 2 for some v} is an independent set of G
•
I u  {v | (u, v )  I 2 } is an independent set of G
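The squaring construction and the easy direction of the lemma can be checked in a few lines (a sketch; helper names are ours):

```python
from itertools import combinations

def square(V, E):
    """G^2: vertex set V x V; {(u,u'),(v,v')} is an edge iff
    {u,v} in E, or u == v and {u',v'} in E."""
    adj = {frozenset(e) for e in E}
    V2 = [(u, v) for u in V for v in V]
    E2 = [(p, q) for p, q in combinations(V2, 2)
          if frozenset((p[0], q[0])) in adj
          or (p[0] == q[0] and frozenset((p[1], q[1])) in adj)]
    return V2, E2

# path 1-2-3: I = {1, 3} is independent in G, so I x I
# (size k^2 = 4) should be independent in G^2
V, E = [1, 2, 3], [(1, 2), (2, 3)]
V2, E2 = square(V, E)
adj2 = {frozenset(e) for e in E2}
I2 = [(u, v) for u in (1, 3) for v in (1, 3)]
assert all(frozenset((p, q)) not in adj2 for p, q in combinations(I2, 2))
```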
Suppose that an ε-approximating algorithm exists, working in O(n^l) time
Let G be a graph with n vertices, and let a maximal independent set of
G have size k
Applying the algorithm to G² we obtain an independent set of G² of size
at least (1 − ε)k², in time O(n^{2l})
By the Lemma, we can get an independent set of G of size √(1 − ε) · k
Therefore, we have a (1 − √(1 − ε))-approximating algorithm
Repeating this process m times, we obtain a (1 − (1 − ε)^{1/2^m})-approximating
algorithm working in O(n^{2^m · l}) time
Given ε′ we need m such that
1 − (1 − ε)^{1/2^m} ≤ ε′
⟺ (1 − ε)^{1/2^m} ≥ 1 − ε′
⟺ (1/2^m) · log(1 − ε) ≥ log(1 − ε′)
⟺ 2^m ≥ log(1 − ε) / log(1 − ε′)   (both logarithms are negative)
⟺ m ≥ log₂( log(1 − ε) / log(1 − ε′) )

Then our ε′-approximating algorithm works in a time O( n^{ l · log(1−ε) / log(1−ε′) } )
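Numerically, the required number of squaring rounds m follows directly from the bound above (a small sketch; the function name is ours):

```python
import math

def rounds_needed(eps, eps_prime):
    """Smallest m with 1 - (1 - eps)**(1 / 2**m) <= eps_prime,
    i.e. m >= log2( log(1 - eps) / log(1 - eps_prime) )."""
    m = math.ceil(math.log2(math.log(1 - eps) / math.log(1 - eps_prime)))
    return max(0, m)

m = rounds_needed(0.5, 0.1)   # m = 3 here
assert 1 - (1 - 0.5) ** (1 / 2 ** m) <= 0.1
```

Starting from a 0.5-approximation, three rounds of squaring already push the relative error below 0.1.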