Network Optimization Problems:
Models and Algorithms

Approximation Algorithms

In this handout:
• Approximation Algorithms
• Traveling Salesman Problem
Classes of discrete optimization
problems
Class 1 problems have polynomial-time algorithms
for solving the problems optimally.
Ex.: Min. Spanning Tree problem
For Class 2 problems (NP-hard problems):
• No polynomial-time algorithm is known;
• and most likely none exists.
Ex.: Traveling Salesman Problem
Three main directions to solve
NP-hard discrete optimization problems:
• Integer programming techniques
• Heuristics
• Approximation algorithms
• We gave examples of the first two methods for TSP.
• In this handout, we present
an approximation algorithm for TSP.
Definition of
Approximation Algorithms
• Definition: An α-approximation algorithm is a
polynomial-time algorithm which always produces a
solution of value within α times the value of an
optimal solution.
That is, for any instance of the problem,
Z_algo / Z_opt ≤ α (for a minimization problem),
where Z_algo is the cost of the algorithm’s output, and
Z_opt is the cost of an optimal solution.
• α is called the approximation guarantee (or factor)
of the algorithm.
Some Characteristics of
Approximation Algorithms
• Time-efficient (though sometimes not as efficient as heuristics)
• Don’t guarantee an optimal solution
• Guarantee a good solution within some factor of the optimum
• Rigorous mathematical analysis to prove the approximation
guarantee
• Often use algorithms for related problems as subroutines
Next we will give
an approximation algorithm for TSP.
An approximation algorithm for TSP
Given an instance of the TSP:
1. Find a minimum spanning tree (MST) for that instance
(using the algorithm of the previous handout).
2. To get a tour, start from any node and traverse the arcs of the
MST, taking shortcuts when necessary.
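A minimal Python sketch of the two stages above, assuming the instance is
given as a symmetric cost matrix (the function names are illustrative, not
part of the handout):

import heapq

def mst_edges(cost):
    """Prim's algorithm: return the arcs of a minimum spanning tree."""
    n = len(cost)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(cost[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(edges) < n - 1:
        c, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        edges.append((u, v))
        for w in range(n):
            if not in_tree[w]:
                heapq.heappush(heap, (cost[v][w], v, w))
    return edges

def approx_tsp_tour(cost, start=0):
    """Stage 2: depth-first traversal of the MST from `start`;
    the preorder visit sequence, which shortcuts past
    already-visited nodes, is the tour."""
    n = len(cost)
    adj = [[] for _ in range(n)]
    for u, v in mst_edges(cost):
        adj[u].append(v)
        adj[v].append(u)
    tour, seen, stack = [], [False] * n, [start]
    while stack:
        v = stack.pop()
        if seen[v]:
            continue  # shortcut: node already on the tour
        seen[v] = True
        tour.append(v)
        stack.extend(reversed(adj[v]))
    return tour + [start]  # close the tour at the starting node

As proved below, under the triangle inequality this tour costs at most
twice the optimum.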
Example:
[Figure: Stage 1 shows the minimum spanning tree; Stage 2 shows the tour
(red bold arcs) obtained by starting from one node and traversing the MST
with shortcuts.]
Approximation guarantee
for the algorithm
• In many situations, it is reasonable to assume that the triangle
inequality holds for the cost function c: E → R defined on
the arcs of the network G = (V, E):
c_uw ≤ c_uv + c_vw for any u, v, w ∈ V
• Theorem:
If the cost function satisfies the triangle inequality,
then the algorithm above for TSP
is a 2-approximation algorithm.
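Whether a given cost matrix satisfies this assumption can be checked by
brute force; a minimal sketch (the function name is illustrative):

from itertools import permutations

def satisfies_triangle_inequality(cost, tol=1e-9):
    # check c_uw <= c_uv + c_vw for every ordered triple of nodes
    n = len(cost)
    return all(
        cost[u][w] <= cost[u][v] + cost[v][w] + tol
        for u, v, w in permutations(range(n), 3)
    )

Costs given by Euclidean distances between points always satisfy it.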
Approximation guarantee
for the algorithm (proof)
➢ First let’s compare the optimal solutions of MST and TSP
for any problem instance G = (V, E), c: E → R.
[Figure: an optimal TSP solution; a tree obtained from the tour by
deleting one arc; an optimal MST solution.]
Deleting any arc from the optimal tour leaves a spanning tree, so
Cost(Opt. TSP sol-n) ≥ Cost(of this tree) ≥ Cost(Opt. MST sol-n)   (*)
• Idea: Get a tour from the minimum spanning tree without
increasing its cost too much (at most by a factor of two in our case).
Approximation guarantee
for the algorithm (proof)
[Figure: the example MST on nodes 1-6; the red bold arcs form a tour,
starting from the indicated node.]
➢ The algorithm
• takes a minimum spanning tree,
• starts from any node,
• traverses the MST arcs, taking shortcuts when necessary,
to get a tour.
➢ What is the cost of the tour compared to the cost of the MST?
• Each tour (bold) arc e is a shortcut
for a set of tree (thin) arcs f_1, …, f_k
(or simply coincides with a tree arc).
Approximation guarantee
for the algorithm (proof)
[Figure: the same example; red bold arcs form the tour.]
• Based on the triangle inequality,
c(e) ≤ c(f_1) + … + c(f_k)
E.g., c_15 ≤ c_13 + c_35 , and c_23 ≤ c_23
(tour arc 2-3 coincides with tree arc 2-3).
• But each tree (thin) arc
is shortcut exactly twice. (**)
E.g., tree arc 3-5 is shortcut by tour arcs 1-5 and 5-6.
➢ The following chain of inequalities concludes the proof,
using the facts obtained so far:
cost(our tour) = Σ_{bold arcs e} c(e)
≤ 2 · Σ_{thin arcs f} c(f)    [by the Δ-inequality and (**)]
= 2 · cost(MST)
≤ 2 · cost(optimal TSP)    [by (*)]
Performance of TSP algorithms in practice
• A more sophisticated algorithm (which again uses the MST
algorithm as a subroutine) guarantees a solution within a
factor of 1.5 of the optimum (Christofides’ algorithm).
• For many discrete optimization problems, there are
benchmarks of instances on which algorithms are tested.
• For TSP, such a benchmark is TSPLIB.
• On TSPLIB instances, Christofides’ algorithm outputs
solutions which are on average 1.09 times the optimum.
For comparison, the nearest neighbor algorithm outputs
solutions which are on average 1.26 times the optimum.
• A good approximation factor often leads to good
performance in practice.
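For reference, a minimal sketch of the nearest neighbor heuristic
mentioned above: starting from a node, repeatedly move to the cheapest
unvisited node, then return to the start. It is fast, but unlike the
MST-based algorithm it has no constant-factor guarantee, even on metric
instances.

def nearest_neighbor_tour(cost, start=0):
    # greedy: always hop to the cheapest unvisited node
    n = len(cost)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda v: cost[last][v])
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour + [start]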