Local Search
Systematic versus local search
• Systematic search
  – Breadth-first, depth-first, IDDFS, A*, IDA*, etc.
  – Keep one or more paths in memory and record which alternatives have been explored
  – The path from initial state to goal state is a solution
• Local search
  – Do not maintain history information
  – Do not need the path to the solution
Example (next slides): the n-queens problem
Neighbor = move any queen to another position in the same column
For 4-queens, # of neighbors = ? (each of the 4 queens can move to 3 other rows in its column, so 4 × 3 = 12)
What is needed:
• A neighborhood function
• A “goodness” function
  – Needs to give a value to non-solution configurations too
  – For 8-queens: the number of pair-wise conflicts
  – Maximizing value = minimizing cost
(a Python sketch of both functions follows)
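To make these two ingredients concrete, here is a minimal Python sketch for n-queens. It assumes the common representation in which board[c] holds the row of the queen in column c; the representation and helper names are illustrative, not taken from the slides.

```python
import itertools

def conflicts(board):
    """'Goodness' expressed as a cost: the number of pair-wise attacking
    pairs of queens. board[c] = row of the queen in column c."""
    n = 0
    for c1, c2 in itertools.combinations(range(len(board)), 2):
        same_row = board[c1] == board[c2]
        same_diagonal = abs(board[c1] - board[c2]) == c2 - c1
        if same_row or same_diagonal:
            n += 1
    return n  # 0 conflicts <=> solution; minimizing cost = maximizing goodness

def neighbors(board):
    """Neighborhood: move any one queen to another row in its own column."""
    result = []
    for col in range(len(board)):
        for row in range(len(board)):
            if row != board[col]:
                b = list(board)
                b[col] = row
                result.append(tuple(b))
    return result

# A 4-queens state has 4 * 3 = 12 neighbors, matching the count above.
assert len(neighbors((0, 1, 2, 3))) == 12
```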
Hill-climbing and systematic search
• Hill-climbing has a lot of freedom in deciding which node to expand next. But it is incomplete even for finite search spaces.
  – May re-visit the same state multiple times
  – Good for problems which have solutions
• Systematic search is complete (because its search tree keeps track of the parts of the space that have been visited).
  – Good for problems where solutions may not exist,
    • or the whole point is to show that there are no solutions,
  – or the state-space is densely connected (making repeated exploration of states a big issue).
When the state-space landscape has local minima, any search that moves only in the greedy direction cannot be complete (a minimal greedy loop is sketched below).
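To make “moves only in the greedy direction” concrete, a steepest-descent hill-climbing loop can be sketched generically over a neighbors function and a cost function (the interface and names are assumptions for illustration). It stops at the first local minimum it reaches, which is exactly why it is incomplete.

```python
def hill_climb(start, neighbors, cost):
    """Greedy (steepest-descent) hill climbing: move to the best neighbor
    while that strictly improves the cost, then stop."""
    current = start
    while True:
        best = min(neighbors(current), key=cost)
        if cost(best) >= cost(current):
            return current   # local minimum reached (not necessarily a solution)
        current = best
```

Plugged with the n-queens neighbors and conflicts helpers sketched earlier, this either returns a board with cost 0 (a solution) or gets stuck at a local minimum.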
Making Hill-Climbing Asymptotically Complete
• Random restart hill-climbing
  – Keep some bound B. When you have made more than B moves, reset the search with a new random initial seed and start again.
• “Biased random walk”: avoid being greedy when choosing the seed for the next iteration
  – With probability p, choose the best child; but with probability (1-p), choose one of the children randomly
• Use simulated annealing
  – Similar to the previous idea: the probability p itself is increased asymptotically to one (so you are more likely to tolerate a non-greedy move in the beginning than towards the end)
With the random restart or biased random walk strategies, we can solve very large problems (million-queens problems in minutes!). Sketches of both strategies follow.
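Both randomized fixes can be sketched over the same generic interface. The bound B, the probability p, and the convention that cost 0 means “solution” are illustrative assumptions, not values given on the slides.

```python
import random

def biased_walk_step(state, neighbors, cost, p):
    """Biased random walk: with probability p take the best child,
    with probability 1 - p take a uniformly random child."""
    children = neighbors(state)
    if random.random() < p:
        return min(children, key=cost)
    return random.choice(children)

def random_restart_search(random_state, neighbors, cost, B=1000, p=0.9):
    """Random-restart hill climbing: after B moves without finding a
    solution, reset to a fresh random initial state and start again."""
    while True:
        current = random_state()
        for _ in range(B):
            if cost(current) == 0:   # assuming cost 0 means "solution"
                return current
            current = biased_walk_step(current, neighbors, cost, p)
        # bound B exceeded: loop around and restart from a new random seed
```

Note that the outer loop never terminates if no solution exists, which matches the earlier remark that this style of search suits problems that do have solutions.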
Simulated annealing
• Minimize cost
• For p = 1-ɛ to 1:
  – With probability p, choose the best child; but with probability (1-p), choose one of the children randomly
• You are more likely to tolerate a non-greedy move in the beginning than towards the end (see the sketch below)
http://www.freepatentsonline.com/6725437.html
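A sketch that follows the schedule on this slide: the greediness p is swept from 1-ɛ up to 1, so random (non-greedy) moves are tolerated early on and essentially disappear towards the end. The linear schedule and the parameter values are illustrative assumptions; classic simulated annealing is usually phrased instead with a temperature and an exp(-Δcost/T) acceptance rule, but the effect is the same kind of decreasing tolerance.

```python
import random

def anneal(start, neighbors, cost, eps=0.3, steps=10_000):
    """Sweep p from 1 - eps up to 1 while minimizing cost."""
    current = start
    for t in range(steps):
        if cost(current) == 0:                 # assuming cost 0 means "solution"
            return current
        p = (1 - eps) + eps * t / (steps - 1)  # p grows linearly: 1-eps -> 1
        children = neighbors(current)
        if random.random() < p:
            current = min(children, key=cost)  # greedy move: best child
        else:
            current = random.choice(children)  # tolerated non-greedy move
    return current
```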
Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat
• Biased random selection = stochastic beam search
• Local beam search ≠ running k random restarts in parallel: useful information is passed among the parallel search threads (see the sketch below)
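A sketch of local beam search over the same generic interface; the value of k, the iteration budget, and the use of cost 0 as the goal test are illustrative assumptions. The second helper shows one way to make the selection stochastic, i.e. stochastic beam search.

```python
import heapq
import random

def local_beam_search(random_state, neighbors, cost, k=10, max_iters=1000):
    """Keep k states; each iteration generates all successors of all k
    states and retains the k best from the combined list."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        successors = [s for state in beam for s in neighbors(state)]
        for s in successors:
            if cost(s) == 0:          # goal test: cost 0 means "solution"
                return s
        beam = heapq.nsmallest(k, successors, key=cost)  # k best overall
    return min(beam, key=cost)        # best state found within the budget

def stochastic_beam_selection(successors, cost, k):
    """Stochastic beam search selection: sample k successors at random,
    biased so that lower-cost states are more likely to be kept."""
    weights = [1.0 / (1 + cost(s)) for s in successors]
    return random.choices(successors, weights=weights, k=k)
```

Because the k best successors are taken from the combined pool, states that expand well effectively recruit the whole beam; that is the useful information passed among the parallel search threads.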
Genetic algorithm
• Stochastic beam search in which successor states are generated by combining two parent states rather than by modifying a single state
  – Motivated by evolutionary biology, i.e., sexual reproduction
• Start with k randomly generated states (the population)
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection, crossover, and mutation
• Fitness function: number of non-attacking pairs of queens
• Normalized fitness function (used as the selection probability; see the sketch below)
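A compact GA sketch for 8-queens in this spirit: fitness is the number of non-attacking pairs (28 is the maximum), the fitness values are used as selection weights (equivalent to selecting with normalized-fitness probabilities), and single-point crossover plus occasional mutation produce the next generation. The population size, mutation rate, and helper names are illustrative assumptions.

```python
import itertools
import random

N = 8                               # 8-queens
MAX_PAIRS = N * (N - 1) // 2        # 28 non-attacking pairs == solution

def fitness(board):
    """Number of non-attacking pairs of queens (board[c] = row in column c)."""
    attacking = sum(
        1
        for c1, c2 in itertools.combinations(range(N), 2)
        if board[c1] == board[c2] or abs(board[c1] - board[c2]) == c2 - c1
    )
    return MAX_PAIRS - attacking

def reproduce(x, y):
    """Single-point crossover of two parent states."""
    c = random.randrange(1, N)
    return x[:c] + y[c:]

def mutate(board, rate=0.1):
    """With a small probability, move one queen to a random row."""
    if random.random() < rate:
        board = list(board)
        board[random.randrange(N)] = random.randrange(N)
    return tuple(board)

def genetic_algorithm(pop_size=100, generations=1000):
    population = [tuple(random.randrange(N) for _ in range(N))
                  for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(b) for b in population]
        if MAX_PAIRS in fits:                        # a solution appeared
            return population[fits.index(MAX_PAIRS)]
        # selection probability = fit / sum(fits), i.e. normalized fitness
        parents = random.choices(population, weights=fits, k=2 * pop_size)
        population = [mutate(reproduce(parents[2 * i], parents[2 * i + 1]))
                      for i in range(pop_size)]
    return max(population, key=fitness)              # best found within the budget
```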
Summary
• Local search avoids the memory problem of systematic search
• Can be improved by introducing randomness
  – Random restart
  – Biased random walk
  – Simulated annealing
  – (Stochastic) beam search
  – Genetic algorithm