Dynamic Programming

Prepared by Chen & Po-Chuan
2016/03/29
Basic Idea
• One implicitly explores the space of all
possible solutions by:
o Carefully decomposing the problem into a series of sub-problems
o Building up correct solutions to larger and larger
sub-problems
• Similar to “Divide & Conquer”
Weighted Interval
Scheduling
• Given: A set of n intervals with start/finish
times, weights (values)
• Find: A subset S of mutually compatible
intervals with maximum total value
For Unit-weighted Cases
• We can use a greedy algorithm (earliest finish time first)
• But it doesn’t work in the weighted version
A Recursive Solution
• Sort intervals by finish times
• p( j ) is the largest index i < j such that
intervals i and j do not overlap ( p( j ) = 0 if
no such interval exists; a sketch for computing
it is given below )
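• As a concrete illustration (not part of the original slides), p( j ) can be computed with one binary search per interval over the sorted finish times; the (start, finish, value) tuple format and the assumption that “compatible” means finish_i ≤ start_j are mine:

import bisect

def compute_p(intervals):
    # intervals: list of (start, finish, value) tuples, already sorted by finish time
    finishes = [f for (s, f, v) in intervals]
    p = [0] * (len(intervals) + 1)        # 1-indexed; p[j] = 0 means "no compatible interval"
    for j, (s, f, v) in enumerate(intervals, start=1):
        # largest i < j whose finish time is <= the start time of interval j
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)
    return p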
A Recursive Solution
• Oj = the optimal solution for intervals 1~ j
• OPT( j ) = the value of the optimal solution
for intervals 1~ j
• OPT( j ) = max { vj + OPT( p( j ) ), OPT( j - 1) }
Example
• O6 = ? --- Include interval 6 or not?
o O6 = { 6 } ∪ O3, or O6 = O5
o OPT( 6 ) = max { v6 + OPT( 3 ), OPT( 5 ) }
Implementation
// Preprocessing:
// 1. Sort intervals by finish times
// 2. Compute p(1), p(2), ..., p(n)
Compute-Opt( j )
if ( j = 0 ) then return 0
else return max { vj + Compute-Opt( p( j ) ),
Compute-Opt( j - 1 ) }
Recursion Tree
Memoization: Top-Down
• The tree of calls widens very quickly
• Too many redundant calls
• Store each computed value for future use to eliminate them
M-Opt( j )
if ( j = 0 ) then return 0
else if (M[ j ] is not empty) then return M[ j ]
else return M[ j ] = max { vj + M-Opt( p( j ) ),
M-Opt( j - 1 ) }
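• A Python rendering of M-Opt (a sketch under the assumption that v and p are 1-indexed lists with a dummy entry at index 0, and M is a dictionary used as the memo table):

def m_opt(j, v, p, M):
    # Memoized recursion: each subproblem j is solved at most once
    if j == 0:
        return 0
    if j not in M:
        M[j] = max(v[j] + m_opt(p[j], v, p, M),   # include interval j
                   m_opt(j - 1, v, p, M))         # skip interval j
    return M[j]

• Call it as m_opt(n, v, p, {}); with the memo table, each of the n subproblems is computed only once.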
Iteration: Bottom-Up
• We can also compute the array M[j] by an
iterative algorithm.
I-Opt
M[ 0 ] = 0
for j = 1, 2, ..., n do
M[ j ] = max { vj + M[ p( j ) ], M[ j - 1 ] }
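• A bottom-up Python sketch; the traceback that recovers the chosen set S is an addition, not shown in the slides:

def i_opt(n, v, p):
    # v and p are 1-indexed (dummy entry at index 0)
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])
    # Traceback: include interval j whenever taking it is at least as good as skipping it
    S, j = [], n
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:
            S.append(j)
            j = p[j]
        else:
            j -= 1
    return M[n], list(reversed(S))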
Keys for DP
• Dynamic programming can be used if the
problem satisfies the following properties:
o There are only a polynomial number of subproblems
o The solution to the original problem can be easily
computed from the solutions to the sub-problems
o There is a natural ordering on sub-problems from
“smallest” to “largest,” together with an easy-to-compute recurrence
Keys for DP
• DP works best on objects that are linearly
ordered and cannot be rearranged
• Elements of DP
o Optimal sub-structure
o Overlapping sub-problems
Fibonacci Sequence
fib(n)
if n ≤ 1 return n
return fib( n - 1 ) + fib( n - 2 )
The Solutions
Top-down
Fibonacci( n, f ):
if n ≤ 1 then return n
if f[ n ] not found then
f[ n ] = Fibonacci( n - 1, f )
+ Fibonacci( n - 2, f )
return f[ n ]
Bottom-up
fib( n ):
f[ 0 ] = 0; f[ 1 ] = 1
for i = 2 to n do
f[ i ] = f[ i - 1 ]+ f[ i - 2 ]
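• For concreteness, both versions in Python (a sketch; functools.lru_cache is one standard way to memoize the top-down recursion):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_top_down(n):
    # Memoized recursion: each fib(i) is computed only once
    if n <= 1:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    # Iterative table fill, O(n) time
    if n <= 1:
        return n
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]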
Maze Routing
• Given S, T, and some obstacles, find the
shortest path from S to T.
The Solution
• Bottom-up dynamic programming:
induction on path length
• Procedure (sketched in Python below):
1. Wave propagation
2. Retrace
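• A minimal Python sketch of the two phases, assuming a grid in which True marks an obstacle; the grid encoding and function name are illustrative, not from the slides:

from collections import deque

def maze_route(grid, S, T):
    # grid[r][c] == True means obstacle; S and T are (row, col) tuples
    R, C = len(grid), len(grid[0])
    dist = {S: 0}
    q = deque([S])
    # Phase 1: wave propagation (BFS labels each reachable cell with its distance from S)
    while q:
        r, c = q.popleft()
        if (r, c) == T:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < R and 0 <= nc < C and not grid[nr][nc] and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    if T not in dist:
        return None                       # no connection exists
    # Phase 2: retrace from T back to S, always stepping to a neighbor with distance - 1
    path, cur = [T], T
    while cur != S:
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if dist.get((nr, nc)) == dist[cur] - 1:
                cur = (nr, nc)
                break
        path.append(cur)
    return list(reversed(path))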
Maze Routing
• Guaranteed to find a connection between the 2
terminals if one exists
• Guaranteed to find a minimum-length path
• Both memory complexity & time complexity
are high --- O(MN)
o Large memory and slow
The Subset Sum Problem
• Given a set of n items (with weights) and a
knapsack (with a capacity)
• Fill the knapsack so as to maximize the total
weight without exceeding its capacity
• A greedy algorithm doesn’t work here
The Recursion
• OPT( i ) = the total weight of the optimal
solution for items 1, ..., i
• OPT( i ) depends not only on items { 1, ..., i }
but also on W (capacity available)
• OPT( i, w ) =
o 0                                                    if i = 0 or w = 0
o OPT( i - 1, w )                                      if wi > w
o max { OPT( i - 1, w ), wi + OPT( i - 1, w - wi ) }   otherwise
• Running time: O(nW)
The Implementation
Subset-sum(n, w1 ,..., wn , W)
Initialize M to 0
for i = 1, 2, ..., n do
for w = 1, 2, ..., W do
if ( wi > w ) then
M[ i, w ] = M[ i-1, w ]
else
M[ i, w ] = max {M[ i -1, w ],
wi + M[ i-1, w-wi ] }
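• The same table fill as runnable Python (a sketch; the weight list w is assumed to be 1-indexed, with a dummy 0 at index 0):

def subset_sum(w, W):
    n = len(w) - 1                        # w[1..n] are the item weights
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(1, W + 1):
            if w[i] > cap:
                M[i][cap] = M[i - 1][cap]
            else:
                M[i][cap] = max(M[i - 1][cap],
                                w[i] + M[i - 1][cap - w[i]])
    return M[n][W]

• For example, subset_sum([0, 2, 3, 4], 6) returns 6 (the items of weight 2 and 4).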
Demonstration
The Knapsack Problem
• Same as the subset sum problem, but
each item also has a value
• Fill the knapsack so as to maximize total
value
• A greedy algorithm doesn’t work here
The Solution
• Very similar to the subset sum problem
• OPT( i, w ) =
o 0                                                    if i = 0 or w = 0
o OPT( i - 1, w )                                      if wi > w
o max { OPT( i - 1, w ), vi + OPT( i - 1, w - wi ) }   otherwise
• The only change is from wi to vi in the max term
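• A sketch of the knapsack variant in Python; compared with the subset-sum code above, only the value v[ i ] replaces w[ i ] inside the max:

def knapsack(w, v, W):
    # w and v are 1-indexed (dummy entry at index 0); W is the capacity
    n = len(w) - 1
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(1, W + 1):
            if w[i] > cap:
                M[i][cap] = M[i - 1][cap]
            else:
                M[i][cap] = max(M[i - 1][cap],
                                v[i] + M[i - 1][cap - w[i]])
    return M[n][W]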