Artificial Intelligence

Chapter 4.
Informed Search Methods
Definition
Uninformed search strategies: generate states and test them against the goal; incredibly inefficient in most cases.
Informed search strategies: use problem-specific knowledge; an evaluation function Eval(State) is compared against Eval(Goal), so a solution is found more efficiently.
Best First Search



Idea: use an evaluation function for each node and expand the most desirable unexpanded node.
Implementation:
Inputs: problem, Eval-Func
Function Best-First-Search(problem, Eval-Func) return solution
    return General-Search(problem, Eval-Func);
There are two kinds:
(1) Minimize the estimated cost to reach a goal: Greedy Search
(2) Minimize the total path cost: A*
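A minimal sketch of this framework in Python, assuming a problem object that provides initial_state, is_goal(s), and successors(s); these names are illustrative, not part of the slides. The frontier is a priority queue ordered by Eval-Func:

import heapq, itertools

def best_first_search(problem, eval_func):
    # Expand the most desirable (lowest eval_func) unexpanded node first.
    counter = itertools.count()   # tie-breaker so the heap never compares states
    frontier = [(eval_func(problem.initial_state), next(counter), problem.initial_state, [])]
    explored = set()
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path + [state]                 # solution as a list of states
        if state in explored:
            continue
        explored.add(state)
        for succ in problem.successors(state):
            if succ not in explored:
                heapq.heappush(frontier,
                               (eval_func(succ), next(counter), succ, path + [state]))
    return None                                   # no solution found

Greedy Search and A* below differ only in the eval_func passed in: h(n) for Greedy Search, g(n) + h(n) for A*.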
Greedy Search: Minimize the estimated cost to reach a goal
The function that estimates the cost to reach a goal is called a heuristic function, denoted by h:
h(n) = estimated cost of the cheapest path from the state at node n to the goal state
Implementation:
Function Best-First-Search(problem, h)
Example
Example (continued)
[Figure callout: this node leads to the best solution]
Sol = {Arad, Sibiu, Fagaras, Bucharest}, Cost = 450
But the best Sol = {Arad, Sibiu, Rimnicu Vilcea, Pitesti, Bucharest}, Cost = 418
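This run can be reproduced with a small sketch. The road distances and the straight-line-distance-to-Bucharest values used for h are the ones commonly quoted for this Romania example; only the roads relevant here are listed, so the data below is illustrative rather than the complete map:

import heapq

roads = {
    'Arad': {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu': {'Arad': 140, 'Fagaras': 99, 'Rimnicu Vilcea': 80, 'Oradea': 151},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Pitesti': 97, 'Craiova': 146},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Bucharest': 101, 'Craiova': 138},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
# h = straight-line distance to Bucharest
h = {'Arad': 366, 'Sibiu': 253, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Bucharest': 0, 'Timisoara': 329, 'Zerind': 374,
     'Oradea': 380, 'Craiova': 160}

def greedy_search(start, goal):
    # Always expand the frontier city with the smallest h value.
    frontier = [(h[start], start, [start], 0)]
    visited = set()
    while frontier:
        _, city, path, cost = heapq.heappop(frontier)
        if city == goal:
            return path, cost
        if city in visited:
            continue
        visited.add(city)
        for nxt, dist in roads.get(city, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt], cost + dist))
    return None, None

print(greedy_search('Arad', 'Bucharest'))
# (['Arad', 'Sibiu', 'Fagaras', 'Bucharest'], 450) -- not the optimal 418 route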
Evaluation of Greedy-Search


Complete - No; we may keep expanding the node with the best evaluation without ever reaching the goal!
Complexity
1. Time
• O(b^d) with a very poor evaluation function
• O(b*d) with an excellent evaluation function
2. Memory
• O(b^d)
Problem
The selection of a node is based entirely on the estimated cost of reaching the goal, not on the cost of the path expanded so far.
A* search: Minimize the total path cost
This strategy combines two evaluation functions by summing them:
f(n) = g(n) + h(n) = estimated cost of the cheapest solution through n
 h(n) is the estimated cost of the cheapest path from n to the goal
 g(n) is the path cost from the start node to node n
Implementation:
Function Best-First-Search(problem, g+h)
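Continuing the Romania sketch above (the roads and h tables from the Greedy Search example are reused, and remain an assumption about the map data rather than part of these slides), A* differs from Greedy Search only in ordering the frontier by f(n) = g(n) + h(n):

import heapq

def a_star(start, goal, roads, h):
    # Frontier ordered by f = g + h; g is the exact path cost so far.
    frontier = [(h[start], 0, start, [start])]      # (f, g, city, path)
    best_g = {start: 0}
    while frontier:
        f, g, city, path = heapq.heappop(frontier)
        if city == goal:
            return path, g
        for nxt, dist in roads.get(city, {}).items():
            g2 = g + dist
            if g2 < best_g.get(nxt, float('inf')):  # keep only the cheapest path found
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None, None

print(a_star('Arad', 'Bucharest', roads, h))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)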
Behavior of A*
• f is monotone: it never decreases along any path from the root
• A* is complete
• A* is optimal
Complexity:
• Space is still prohibitive (expensive)
• Time depends on the quality of the heuristic function
• Good heuristics can sometimes be constructed by examining the problem definition or from experience
A* example
Heuristic Functions

So far we have seen only the straight-line distance; in this section we look at heuristics for the 8-puzzle.
Initial state S0:      Goal state:
  5 4 _                  1 2 3
  6 1 8                  8 _ 4
  7 3 2                  7 6 5
…Continued
h1: the number of tiles that are in the wrong position
h1(S0) = 7
h2: the sum of the distances of the tiles from their goal positions (also called the Manhattan distance)
h2(S0) = 2+3+3+2+4+2+0+2 = 18
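A short sketch computing both heuristics for the S0 and goal states shown above (0 stands for the blank tile and is not counted):

initial = ((5, 4, 0),
           (6, 1, 8),
           (7, 3, 2))
goal    = ((1, 2, 3),
           (8, 0, 4),
           (7, 6, 5))

def positions(board):
    # Map each tile to its (row, column) position.
    return {tile: (r, c) for r, row in enumerate(board)
                         for c, tile in enumerate(row)}

def h1(state, goal):
    # Number of tiles (not counting the blank) out of place.
    g = positions(goal)
    return sum(1 for tile, pos in positions(state).items()
               if tile != 0 and pos != g[tile])

def h2(state, goal):
    # Sum of Manhattan distances of each tile from its goal position.
    g = positions(goal)
    return sum(abs(r - g[t][0]) + abs(c - g[t][1])
               for t, (r, c) in positions(state).items() if t != 0)

print(h1(initial, goal), h2(initial, goal))   # -> 7 18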
Dominance

If h2(n) ≥ h1(n) for all nodes n,
then h2 dominates h1 and h2 is better for search.
Typical search costs:
d=14 (solution is at depth d)
• A*(h1) = 539
• A*(h2) = 113
d=24 (solution is at depth d)
• A*(h1) = 39135
• A*(h2) = 1641
Local beam search





• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
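A minimal sketch of local beam search, assuming caller-supplied helpers random_state(), successors(s), is_goal(s), and an evaluation function where lower values are better; all of these names are illustrative:

def local_beam_search(k, random_state, successors, is_goal, eval_func, max_iters=1000):
    states = [random_state() for _ in range(k)]            # k random starting states
    for _ in range(max_iters):
        candidates = [s2 for s in states for s2 in successors(s)]
        for s in candidates:
            if is_goal(s):                                 # stop as soon as a goal appears
                return s
        if not candidates:
            return None
        candidates.sort(key=eval_func)                     # lower eval_func = better state
        states = candidates[:k]                            # keep the k best successors
    return None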
Genetic algorithms





• A successor state is generated by combining two parent states
• Start with k randomly generated states (the population)
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection, crossover, and mutation
Genetic algorithms



• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
• Probability of being selected for reproduction is proportional to fitness: 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
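A sketch of a genetic algorithm for 8-queens under these definitions; a state is a list of 8 digits (the row of the queen in each column), and the population size, mutation rate, and generation limit are illustrative choices:

import random

def fitness(state):
    # Number of non-attacking pairs of queens (max 28 for 8 queens).
    n = len(state)
    attacks = sum(1 for i in range(n) for j in range(i + 1, n)
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return n * (n - 1) // 2 - attacks

def select(population):
    # Selection probability proportional to fitness, as in the percentages above.
    return random.choices(population, weights=[fitness(s) for s in population])[0]

def crossover(x, y):
    c = random.randrange(1, len(x))                  # random crossover point
    return x[:c] + y[c:]

def mutate(state, rate=0.1):
    if random.random() < rate:                       # occasionally change one digit
        i = random.randrange(len(state))
        state = state[:i] + [random.randrange(len(state))] + state[i + 1:]
    return state

def genetic_algorithm(pop_size=20, n=8, generations=10000):
    population = [[random.randrange(n) for _ in range(n)] for _ in range(pop_size)]
    best = n * (n - 1) // 2
    for _ in range(generations):
        if max(fitness(s) for s in population) == best:
            return max(population, key=fitness)      # a state with 28 non-attacking pairs
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

print(genetic_algorithm())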
Examples
1. Graph Coloring Problem
Avoid coloring adjacent countries with the same color
[Figure: initial state (map with regions A, B, C, D, uncolored), goal state (the same map colored so that no two adjacent regions share a color), and the colors available]
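The goal test for this problem can be written directly from the constraint; the adjacency relation below is illustrative, not read off the slide's figure:

# Which pairs of regions share a border (illustrative adjacency).
adjacent = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]

def is_goal(coloring):
    # Goal state: no two adjacent regions share the same color.
    return all(coloring[x] != coloring[y] for x, y in adjacent)

print(is_goal({'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}))   # True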