
“Keep on the lookout for novel ideas that others have used successfully. Your idea
has to be original only in its adaptation to the problem you’re working on.”
—Thomas Edison (1847–1931)
Topic 12
ITS033 – Programming & Algorithms
Limitation of Algorithm Power
Asst. Prof. Dr. Bunyarit Uyyanonvara
IT Program, Image and Vision Computing Lab.
School of Information, Computer and Communication Technology (ICT)
Sirindhorn International Institute of Technology (SIIT)
Thammasat University
http://www.siit.tu.ac.th/bunyarit
[email protected]
02 5013505 X 2005
1
ITS033
Midterm
Topic 01 - Problems & Algorithmic Problem Solving
Topic 02 – Algorithm Representation & Efficiency Analysis
Topic 03 - State Space of a problem
Topic 04 - Brute Force Algorithm
Topic 05 - Divide and Conquer
Topic 06 - Decrease and Conquer
Topic 07 - Dynamic Programming
Topic 08 - Transform and Conquer
Topic 09 - Graph Algorithms
Topic 10 - Minimum Spanning Tree
Topic 11 - Shortest Path Problem
Topic 12 - Coping with the Limitations of Algorithm Power
http://www.siit.tu.ac.th/bunyarit/its033.php
http://www.vcharkarn.com/vlesson/7
2
Overview





- P, NP, and NP-complete problems
- Coping with the limitations of algorithm power
- Backtracking
- Branch-and-bound
- Approximation algorithms
3
Topic 12.1
ITS033 – Programming & Algorithms
P, NP, NP-Complete
Asst. Prof. Dr. Bunyarit Uyyanonvara
IT Program, Image and Vision Computing Lab.
School of Information, Computer and Communication Technology (ICT)
Sirindhorn International Institute of Technology (SIIT)
Thammasat University
http://www.siit.tu.ac.th/bunyarit
[email protected]
02 5013505 X 2005
4
Problems

- Some problems cannot be solved by any algorithm.
- Other problems can be solved algorithmically, but not in polynomial time.
- This topic deals with the question of intractability: which problems can and cannot be solved in polynomial time?
- This well-developed area of theoretical computer science is called computational complexity.
5
Problems

In the study of the computational complexity of problems, the
first concern of both computer scientists and computing
professionals is whether a given problem can be solved in
polynomial time by some algorithm.
6
Polynomial time

Definition: We say that an algorithm solves a problem in polynomial time if its worst-case time efficiency belongs to O(p(n)), where p(n) is a polynomial of the problem's input size n.
- Problems that can be solved in polynomial time are called tractable.
- Problems that cannot be solved in polynomial time are called intractable.
7
Intractability
- We cannot solve arbitrary instances of intractable problems in a reasonable amount of time unless such instances are very small.
- Although there might be a huge difference between the running times in O(p(n)) for polynomials of drastically different degrees, there are very few useful algorithms with the degree of the polynomial higher than three.

8
P and NP Problems

- Most problems discussed in this subject can be solved in polynomial time by some algorithm.
- We can think of the problems that can be solved in polynomial time as a set, which is called P.
9
Class P

Definition: Class P is a class of decision problems that can be
solved in polynomial time by (deterministic) algorithms.

This class of problems is called polynomial.
Examples:
- searching
- element uniqueness
- graph connectivity
- graph acyclicity
10
Class NP
NP (Nondeterministic Polynomial): the class of decision problems whose proposed solutions can be verified in polynomial time, i.e., problems solvable by a nondeterministic polynomial algorithm.

11
nondeterministic polynomial algorithm
A nondeterministic polynomial algorithm is an abstract two-stage procedure that:
- generates a random string purported to solve the problem
- checks whether this proposed solution is correct in polynomial time
By definition, it solves the problem if it is capable of generating and verifying a solution on one of its tries.
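To make the verification stage concrete, here is a minimal sketch (not from the slides) of checking a proposed solution in polynomial time, using the Hamiltonian circuit problem; the adjacency-set representation and the function name are illustrative assumptions.

def is_hamiltonian_circuit(adj, tour):
    """Verify in polynomial time that `tour` visits every vertex of the graph
    `adj` (dict: vertex -> set of neighbours) exactly once, uses only edges of
    the graph, and returns to the start."""
    n = len(adj)
    # Every vertex must appear exactly once in the proposed tour.
    if len(tour) != n or set(tour) != set(adj):
        return False
    # Consecutive vertices (including the closing edge back to the start)
    # must be joined by an edge of the graph.
    return all(tour[(i + 1) % n] in adj[tour[i]] for i in range(n))

# A square a-b-c-d: checking a guessed tour takes only polynomial time,
# even though finding one in a general graph is believed to be hard.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
print(is_hamiltonian_circuit(adj, ["a", "b", "c", "d"]))   # True
print(is_hamiltonian_circuit(adj, ["a", "c", "b", "d"]))   # False (a-c is not an edge)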
12
What problems are in NP?

- Hamiltonian circuit existence
- Traveling salesman problem
- Knapsack problem
- Partition problem: is it possible to partition a set of n integers into two disjoint subsets with the same sum?
13
P ⊆ NP
All the problems in P can also be solved in this manner (but no guessing is necessary), so we have:
P ⊆ NP
However, we still have a big question: is P = NP?
14
NP-Complete

A decision problem is NP-complete if it is in NP and every other problem in NP can be reduced to it in polynomial time.
Examples of NP-complete problems:
- Boolean satisfiability problem (SAT)
- N-puzzle
- Knapsack problem
- Hamiltonian cycle problem
- Traveling salesman problem
- Subgraph isomorphism problem
- Subset sum problem
- Clique problem
- Vertex cover problem
- Independent set problem
- Graph coloring problem
(Figure: the NP-complete problems form a subset of the NP problems.)
15
P, NP, NP-Complete
16
P = NP ? Dilemma Revisited

- P = NP would imply that every problem in NP, including all NP-complete problems, could be solved in polynomial time.
- If a polynomial-time algorithm for just one NP-complete problem is discovered, then every problem in NP can be solved in polynomial time, i.e., P = NP.
- Most but not all researchers believe that P ≠ NP, i.e., P is a proper subset of NP.
17
Topic 12.2
ITS033 – Programming & Algorithms
Coping with the Limitations of Algorithm Power
Asst. Prof. Dr. Bunyarit Uyyanonvara
IT Program, Image and Vision Computing Lab.
School of Information, Computer and Communication Technology (ICT)
Sirindhorn International Institute of Technology (SIIT)
Thammasat University
http://www.siit.tu.ac.th/bunyarit
[email protected]
02 5013505 X 2005
18
Solving NP-complete problems

At present, all known algorithms for NP-complete problems require time that is superpolynomial in the input size, and it is unknown whether there are any faster algorithms.
The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms:
- Approximation: instead of searching for an optimal solution, search for an "almost" optimal one.
- Randomization: use randomness to get a faster average running time, and allow the algorithm to fail with some small probability.
- Restriction: by restricting the structure of the input (e.g., to planar graphs), faster algorithms are usually possible.
- Parameterization: often there are fast algorithms if certain parameters of the input are fixed.
- Heuristic: an algorithm that works "reasonably well" on many cases, but for which there is no proof that it is both always fast and always produces a good result. Metaheuristic approaches are often used.
19
Tackling Difficult Combinatorial Problems

There are two principal approaches to tackling difficult combinatorial (NP-hard) problems:
- Use a strategy that guarantees solving the problem exactly but does not guarantee finding a solution in polynomial time.
- Use an approximation algorithm that can find an approximate (sub-optimal) solution in polynomial time.
20
Exact Solution Strategies

- Exhaustive search (brute force): useful only for small instances.
- Dynamic programming: applicable to some problems (e.g., the knapsack problem).
- Backtracking: eliminates some unnecessary cases from consideration; yields solutions in reasonable time for many instances, but the worst case is still exponential.
- Branch-and-bound: further refines the backtracking idea for optimization problems.
21
Backtracking

- The principal idea is to construct solutions one component at a time and evaluate such partially constructed candidates as follows.
- If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component. If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered.
- In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.
22
Backtracking

- This kind of processing is often implemented by constructing a tree of the choices being made, called the state-space tree.
- Its root represents an initial state before the search for a solution begins.
- The nodes of the first level in the tree represent the choices made for the first component of a solution, the nodes of the second level represent the choices for the second component, and so on.
- A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise, it is called nonpromising.
23
Backtracking

- Leaves represent either nonpromising dead ends or complete solutions found by the algorithm.
- If the current node turns out to be nonpromising, the algorithm backtracks to the node's parent to consider the next possible option for its last component; if there is no such option, it backtracks one more level up the tree, and so on.
24
General Remark
25
Backtracking

- Construct the state-space tree:
  - nodes: partial solutions
  - edges: choices in extending partial solutions
- Explore the state-space tree using depth-first search.
- "Prune" nonpromising nodes: DFS stops exploring subtrees rooted at nodes that cannot lead to a solution and backtracks to such a node's parent to continue the search.
26
Example: n-Queens Problem

The problem is to place n queens on an n-by-n chessboard so that no two queens
attack each other by being in the same row or in the same column or on the same
diagonal.
(Figure: a 4-by-4 board, with rows for queen 1 to queen 4 and columns 1 to 4.)
27
N-Queens Problem

- We start with the empty board and then place queen 1 in the first possible position of its row, which is in column 1 of row 1.
- Then we place queen 2, after trying unsuccessfully columns 1 and 2, in the first acceptable position for it, which is square (2,3), the square in row 2 and column 3. This proves to be a dead end because there is no acceptable position for queen 3. So, the algorithm backtracks and puts queen 2 in the next possible position at (2,4).
- Then queen 3 is……………………….
28
State-space tree of solving the
four-queens problem by backtracking.
× denotes an unsuccessful attempt
to place a queen in the indicated
column. The numbers above the
nodes indicate the order in which
the nodes are generated.
29
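The walkthrough above translates into a short backtracking routine. Below is a minimal sketch (not from the slides), assuming one queen is placed per row and each candidate column is checked against the queens already on the board; the names and structure are illustrative.

def solve_n_queens(n):
    """Backtracking for the n-queens problem: place one queen per row,
    trying columns left to right and backtracking from dead ends."""
    solution = []                      # solution[r] = column of the queen in row r

    def is_promising(row, col):
        # The partial solution stays legal if the new queen shares no column
        # or diagonal with any queen already placed.
        return all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(solution))

    def place(row):
        if row == n:                   # all rows filled: a complete solution
            return True
        for col in range(n):
            if is_promising(row, col):
                solution.append(col)
                if place(row + 1):
                    return True
                solution.pop()         # backtrack: undo the last choice
        return False                   # no legal column left: caller must backtrack

    return solution if place(0) else None

print(solve_n_queens(4))   # [1, 3, 0, 2]: queens in columns 2, 4, 1, 3 of rows 1 to 4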
Hamiltonian Circuit Problem
30

- We make vertex a the root of the state-space tree. The first component of our future solution, if it exists, is a first intermediate vertex of a Hamiltonian cycle to be constructed.
- Using alphabetical order to break the three-way tie among the vertices adjacent to a, we select vertex b.
- From b, the algorithm proceeds to c, then to d, then to e, and finally to f, which proves to be a dead end.
Hamiltonian Circuit Problem

So the algorithm backtracks from f to e, then to d, and then to c, which
provides the first alternative for the algorithm to pursue. Going from c to e
eventually proves useless, and the algorithm has to backtrack from e to c
and then to b.

From there, it goes to the vertices f , e, c, and d, from which it can
legitimately return to a, yielding the Hamiltonian circuit a, b, f , e, c, d, a. If
we wanted to find another Hamiltonian circuit, we could continue this
process by backtracking from the leaf of the solution found.
31
32
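The trace above can be reproduced by a small backtracking search. The sketch below is illustrative rather than taken from the slides; the adjacency lists are reconstructed from the trace (an assumption, since the graph figure is not shown here), and neighbours are tried in alphabetical order to match the tie-breaking rule used in the example.

def hamiltonian_circuit(adj, start="a"):
    """Backtracking search for a Hamiltonian circuit.
    `adj` maps each vertex to the set of its neighbours."""
    path = [start]

    def extend():
        if len(path) == len(adj):
            # Complete only if the last vertex connects back to the start.
            return start in adj[path[-1]]
        for v in sorted(adj[path[-1]]):           # alphabetical tie-breaking
            if v not in path:                     # promising: vertex not used yet
                path.append(v)
                if extend():
                    return True
                path.pop()                        # dead end: backtrack
        return False

    return path + [start] if extend() else None

# Adjacency lists assumed from the traced example (vertices a to f).
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "f"}, "c": {"a", "b", "d", "e"},
       "d": {"a", "c", "e"}, "e": {"c", "d", "f"}, "f": {"b", "e"}}
print(hamiltonian_circuit(adj))   # ['a', 'b', 'f', 'e', 'c', 'd', 'a']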
Backtracking

- It is typically applied to difficult combinatorial problems for which no efficient algorithms for finding exact solutions are known to exist.
- Unlike the exhaustive-search approach, which is doomed to be extremely slow for all instances of a problem, backtracking at least holds out hope of solving some instances of nontrivial size in an acceptable amount of time. This is especially true for optimization problems.
- Even if backtracking does not eliminate any elements of a problem's state space and ends up generating all its elements, it provides a specific technique for doing so, which can be of value in its own right.
33
Branch-and-Bound

- Branch and bound (B&B) is a general algorithm for finding optimal solutions of various optimization problems, especially in discrete and combinatorial optimization.
- It consists of a systematic enumeration of all candidate solutions in which large subsets of fruitless candidates are discarded by using upper and lower estimated bounds of the quantity being optimized.
34
Branch-and-Bound

- In the standard terminology of optimization problems, a feasible solution is a point in the problem's search space that satisfies all the problem's constraints.
- An optimal solution is a feasible solution with the best value of the objective function.
35
Branch-and-Bound

Three reasons for terminating a search path at the current node in a state-space tree of a branch-and-bound algorithm:
1. The value of the node's bound is not better than the value of the best solution seen so far.
2. The node represents no feasible solutions because the constraints of the problem are already violated.
3. The subset of feasible solutions represented by the node consists of a single point; in this case we compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.
36
Branch-and-Bound

- An enhancement of backtracking.
- Applicable to optimization problems.
- For each node (partial solution) of a state-space tree, computes a bound on the value of the objective function for all descendants of the node (extensions of the partial solution).
- Uses the bound for:
  - ruling out certain nodes as "nonpromising" to prune the tree (if a node's bound is not better than the best solution seen so far)
  - guiding the search through the state space

37
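To illustrate how a bound prunes the state-space tree, here is a minimal best-first branch-and-bound sketch for the 0/1 knapsack problem (not from the slides). The upper bound is the standard fractional-knapsack relaxation, and the instance at the bottom is hypothetical: its weights follow the knapsack example later in this topic, while its values are assumed.

import heapq

def knapsack_branch_and_bound(items, capacity):
    """Best-first branch-and-bound for the 0/1 knapsack problem.
    `items` is a list of (value, weight) pairs."""
    # Sort by value-to-weight ratio so the fractional relaxation is a valid upper bound.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(i, value, weight):
        # Upper bound: take remaining items greedily, then a fraction of the next one.
        for v, w in items[i:]:
            if weight + w <= capacity:
                value, weight = value + v, weight + w
            else:
                return value + v * (capacity - weight) / w
        return value

    best = 0
    # Max-heap on the bound (negated); entries: (-bound, level, value, weight).
    heap = [(-bound(0, 0, 0), 0, 0, 0)]
    while heap:
        neg_b, i, value, weight = heapq.heappop(heap)
        if -neg_b <= best or i == len(items):
            continue                        # prune: bound no better than best so far
        v, w = items[i]
        if weight + w <= capacity:          # branch 1: include item i
            best = max(best, value + v)
            heapq.heappush(heap, (-bound(i + 1, value + v, weight + w),
                                  i + 1, value + v, weight + w))
        # branch 2: exclude item i
        heapq.heappush(heap, (-bound(i + 1, value, weight), i + 1, value, weight))
    return best

# Hypothetical instance: capacity 10, (value, weight) pairs with assumed values.
print(knapsack_branch_and_bound([(40, 4), (42, 7), (25, 5), (12, 3)], capacity=10))  # 65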
Topic 12.3
ITS033 – Programming & Algorithms
Approximation Algorithm
Asst. Prof. Dr. Bunyarit Uyyanonvara
IT Program, Image and Vision Computing Lab.
School of Information, Computer and Communication Technology (ICT)
Sirindhorn International Institute of Technology (SIIT)
Thammasat University
http://www.siit.tu.ac.th/bunyarit
[email protected]
02 5013505 X 2005
38
Randomized Algorithms

A randomized algorithm aims to reduce both programming time and computational cost by approximating the calculation using randomness.
39
Randomized Algorithms
Area Calculation Problem
Area calculation problem: calculate the area of an irregular shape (shown in red) inside a box of size 20 m x 24 m.
Sample points:
1. (3, 3) B
2. (20, 5) R
3. (4, 15) R
4. (6, 10) B
(Figure: an irregular red shape inside a box of size 20 x 24 m².)
40
Randomized Algorithms

- Randomization is used to generate a number of point coordinates at random within the box.
- The numbers of hits and misses are counted.
- A scaling calculation then gives the area estimate; in this case:
  Red area ≈ 20 × 24 × (red points / all points)

41
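A minimal sketch of this hit-or-miss (Monte Carlo) estimate, not from the slides: since the irregular red shape itself is not specified, a stand-in predicate inside(x, y) is assumed for the membership test.

import random

def estimate_red_area(inside_red_shape, width=20.0, height=24.0, n_points=100_000):
    """Hit-or-miss Monte Carlo estimate of the red area inside a width x height box."""
    hits = 0
    for _ in range(n_points):
        # Generate a random point inside the box and test whether it hits the shape.
        x, y = random.uniform(0, width), random.uniform(0, height)
        if inside_red_shape(x, y):
            hits += 1
    # Red area ~ box area * (red points / all points)
    return width * height * hits / n_points

def inside(x, y):
    # Stand-in shape: an ellipse centred in the box; its true area is pi*8*10 ~ 251.3 m^2.
    return ((x - 10) / 8) ** 2 + ((y - 12) / 10) ** 2 <= 1

print(estimate_red_area(inside))   # close to 251.3 for a large number of points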
Approximation Approach
Apply a fast (i.e., a polynomial-time) approximation algorithm to get a
solution that is not necessarily optimal but hopefully close to it
42
Approximation Algorithms for Knapsack Problem

Greedy algorithm for the discrete knapsack problem:
Step 1: Compute the value-to-weight ratios ri = vi/wi, i = 1, . . . , n, for the items given.
Step 2: Sort the items in nonincreasing order of the ratios computed in Step 1 (ties can be broken arbitrarily).
Step 3: Repeat the following operation until no item is left in the sorted list: if the current item on the list fits into the knapsack, place it in the knapsack; otherwise, proceed to the next item.
43
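The three steps translate directly into a short greedy routine. The sketch below is illustrative, not from the slides; the item list is hypothetical (its weights follow the example on the next slides, its values are assumed).

def greedy_knapsack(items, capacity):
    """Greedy approximation for the discrete (0/1) knapsack problem.
    `items` is a list of (value, weight) pairs."""
    # Steps 1 and 2: sort items by value-to-weight ratio, nonincreasing.
    order = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    chosen, total_value, total_weight = [], 0, 0
    # Step 3: scan the sorted list, taking every item that still fits.
    for value, weight in order:
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_value += value
            total_weight += weight
    return total_value, chosen

# Hypothetical instance: capacity 10; weights 7, 3, 4, 5 with assumed values.
print(greedy_knapsack([(42, 7), (12, 3), (40, 4), (25, 5)], capacity=10))
# -> (65, [(40, 4), (25, 5)]): take weight 4, skip weight 7, take weight 5, skip weight 3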
Approximation Algorithms for Knapsack Problem Example

44
Let us consider the instance of the knapsack problem with the knapsack's capacity equal to 10 and the item information shown in the table.
Approximation Algorithms for Knapsack Problem Example
Computing the value-to-weight ratios and sorting the items in nonincreasing order of these efficiency ratios yields the table beside.
- The greedy algorithm will select the first item of weight 4, skip the next item of weight 7, select the next item of weight 5, and skip the last item of weight 3.
- The solution obtained happens to be optimal for this instance.
45
Approximation Algorithms for Traveling Salesman Problem

Nearest-neighbor algorithm: the following simple greedy algorithm is based on the nearest-neighbor heuristic, the idea of always going to the nearest unvisited city next.
- Step 1: Choose an arbitrary city as the start.
- Step 2: Repeat the following operation until all the cities have been visited: go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
- Step 3: Return to the starting city.


46
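A minimal sketch of the nearest-neighbor heuristic, not from the slides. The distance table at the bottom is an assumption, chosen so that the computed tour and its length match the worked example on the next slide.

def nearest_neighbor_tour(dist, start):
    """Nearest-neighbor heuristic for the TSP.
    `dist` is a symmetric dict of dicts of intercity distances."""
    tour, current = [start], start
    unvisited = set(dist) - {start}
    while unvisited:
        # Step 2: go to the nearest city not visited yet (ties broken arbitrarily).
        current = min(unvisited, key=lambda city: dist[current][city])
        tour.append(current)
        unvisited.remove(current)
    tour.append(start)                                # Step 3: return to the start
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

# Assumed distances for the four-city example discussed on the next slide.
d = {"a": {"b": 1, "c": 3, "d": 6}, "b": {"a": 1, "c": 2, "d": 3},
     "c": {"a": 3, "b": 2, "d": 1}, "d": {"a": 6, "b": 3, "c": 1}}
print(nearest_neighbor_tour(d, "a"))   # (['a', 'b', 'c', 'd', 'a'], 10)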
Approximation Algorithms for Traveling Salesman Problem Example







47
With a as the starting vertex, the nearest-neighbor algorithm yields the tour (Hamiltonian circuit) s_a: a - b - c - d - a of length 10.
The optimal solution, as can be easily checked by exhaustive search, is the tour s*: a - b - d - c - a of length 8.
Thus, the accuracy ratio is r(s_a) = f(s_a) / f(s*) = 10/8 = 1.25.
Twice-Around-the-Tree Algorithm
Stage 1: Construct a minimum spanning tree of the graph (e.g., by Prim's or Kruskal's algorithm).
Stage 2: Starting at an arbitrary vertex, create a path that goes twice around the tree and returns to the same vertex.
Stage 3: Create a tour from the circuit constructed in Stage 2 by making shortcuts to avoid visiting intermediate vertices more than once.
Note: R_A = ∞ for general instances, but this algorithm tends to produce better tours than the nearest-neighbor algorithm.
48
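A minimal sketch of the twice-around-the-tree idea, not from the slides: Stage 1 uses Prim's algorithm, while Stages 2 and 3 are collapsed into a depth-first (preorder) traversal of the tree, which is exactly the tour left after shortcutting the doubled walk. The five-city distance table is hypothetical; only the vertex names follow the example on the next slide.

def twice_around_the_tree(dist, start):
    """Twice-around-the-tree TSP approximation: MST (Prim) + preorder shortcut tour.
    `dist` is a symmetric dict of dicts of intercity distances."""
    # Stage 1: Prim's algorithm; the tree is stored as adjacency lists.
    in_tree, tree = {start}, {v: [] for v in dist}
    while len(in_tree) < len(dist):
        u, v = min(((u, v) for u in in_tree for v in dist if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        tree[u].append(v)
        tree[v].append(u)
        in_tree.add(v)
    # Stages 2 and 3: walking twice around the tree and shortcutting repeated
    # vertices amounts to listing the vertices in depth-first (preorder) order.
    tour, seen, stack = [], set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(reversed(tree[v]))   # visit children in insertion order
    tour.append(start)
    length = sum(dist[u][v] for u, v in zip(tour, tour[1:]))
    return tour, length

# Hypothetical distances; the vertex names follow the example on the next slide.
d = {"a": {"b": 2, "c": 4, "d": 5, "e": 6}, "b": {"a": 2, "c": 3, "d": 4, "e": 5},
     "c": {"a": 4, "b": 3, "d": 6, "e": 7}, "d": {"a": 5, "b": 4, "c": 6, "e": 2},
     "e": {"a": 6, "b": 5, "c": 7, "d": 2}}
print(twice_around_the_tree(d, "a"))   # (['a', 'b', 'c', 'd', 'e', 'a'], 19)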
Example
(Figure: a weighted graph on vertices a, b, c, d, e and the walk around its minimum spanning tree.)
Walk: a – b – c – b – d – e – d – b – a
Tour: a – b – c – d – e – a
49
Empirical Data for Euclidean Instances
50
ITS033
Midterm
Topic 01 - Problems & Algorithmic Problem Solving
Topic 02 – Algorithm Representation & Efficiency Analysis
Topic 03 - State Space of a problem
Topic 04 - Brute Force Algorithm
Topic 05 - Divide and Conquer
Topic 06 - Decrease and Conquer
Topic 07 - Dynamic Programming
Topic 08 - Transform and Conquer
Topic 09 - Graph Algorithms
Topic 10 - Minimum Spanning Tree
Topic 11 - Shortest Path Problem
Topic 12 - Coping with the Limitations of Algorithm Power
http://www.siit.tu.ac.th/bunyarit/its033.php
http://www.vcharkarn.com/vlesson/7
51
End of Chapter 12
Thank You!
52