Problem Solving

Knowledge Representation & Problem
Solving by Search
Semantic Nets
• Commonly used representation in AI
• What is it?
– Graph of nodes and edges
– Nodes:
• Objects
– Edges:
• Relationships between objects
2
Semantic Nets (cont’d)
[Figure: semantic net. Bob is a Builder; Bob owns Fido; Bob eats
Cheese; Fido is a Dog; Fido chases Fang; Fang is a Cat; Fang chases
Mice; Mice eat Cheese]
3
Semantic Nets (cont’d)
• Class
– E.g. Dog
• Class instance – relationship ‘is-a’
– E.g. Fido
• Inheritance
– Properties of a class passed on to another
– Expresses generalities of a class
– Subclass, superclass
4
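The net above can also be written down as a list of labelled edges. A minimal sketch in Python (the `related` lookup helper is illustrative, not from the slides):

```python
# Semantic net as (node, relation, node) triples.
# Facts taken from the Bob/Fido example on the previous slides.
net = [
    ("Bob", "is a", "Builder"),
    ("Bob", "owns", "Fido"),
    ("Bob", "eats", "Cheese"),
    ("Fido", "is a", "Dog"),
    ("Fido", "chases", "Fang"),
    ("Fang", "is a", "Cat"),
    ("Fang", "chases", "Mice"),
    ("Mice", "eat", "Cheese"),
]

def related(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [o for (s, r, o) in net if s == subject and r == relation]

print(related("Bob", "owns"))   # ['Fido']
```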
Frames
• Development of a semantic net
• Linear representation of nodes and edges in
semantic net
– Frame – node
• Class frame
• Instance frame
– Slot – edge
• May be more than one in a frame
• Have values
• Used in expert systems
– Stores a lot of information in one frame
– Advantageous over rule-based systems
5
Frames (cont’d)
Frame   Slot     Slot Value
Bob     is a     Builder
        owns     Fido
        eats     Cheese
Fido    is a     Dog
        chases   Fang
Fang    is a     Cat
        chases   Mice
Mice    eat      Cheese
(Each slot value – Fido, Fang, Mice, Cheese, Builder, Dog, Cat – is
itself a frame.)
6
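A frame system with "is a" inheritance can be sketched as nested dictionaries. The `legs` slot on the Dog class frame is an assumed example property (not from the slides) added to show a value being inherited:

```python
# A frame is a named collection of slots; an instance frame inherits
# slot values from its class frame via the "is a" slot.
frames = {
    "Dog":  {"legs": 4},   # class frame; 'legs' is an assumed property
    "Bob":  {"is a": "Builder", "owns": "Fido", "eats": "Cheese"},
    "Fido": {"is a": "Dog", "chases": "Fang"},
}

def get_slot(frame, slot):
    """Look up a slot, climbing the 'is a' chain (inheritance)."""
    while frame is not None:
        slots = frames.get(frame, {})
        if slot in slots:
            return slots[slot]
        frame = slots.get("is a")   # defer to the superclass frame
    return None

print(get_slot("Fido", "legs"))  # 4, inherited from the Dog class frame
```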
Frames (cont’d)
• Diagrammatic representation
[Figure: frames drawn as boxes.
Bob: Is a = Builder; Owns = Fido; Eats = Cheese.
Fido: Is a = Dog; Chases = Fang]
7
Frames (cont’d)
• Generalization
– Membership to a class
– E.g. Fido is a member of the dog class
• Aggregation
– An object is part of another object
– E.g. Fido has a tail; Tail is a part of Fido
• Association
– Slot usually a verb
– E.g. Fido chases Fang
8
Problem-Solving Agents
• Definition
– Goal-based agent that decides what to
do by finding sequences of actions that
lead to desirable states
• Decision making => algorithm
• Uninformed algorithm
– Only problem definition known
• Informed algorithm
– Problem-specific information known
9
Problem Solving Agents (cont’d)
• Goal formulation - limiting the
objectives
– Goal: set of world states in which
the goal is satisfied
– 1st step in problem solving
– Based on agent’s current state and
performance measures
10
Problem Solving Agents (cont’d)
• Problem formulation - deciding
what actions and states to
consider
– If several possible actions, then best
should be chosen
• Search - looking for the best
sequence of actions
– Search algorithm
• Execute – Recommended action is
carried out.
– Execution phase
11
Search Algorithm
• Input: problem (more detail later)
• Output: solution (action sequence)
– Leads to goal
• Execution – performing solution upon the
world
– Carried out upon determination of solution
• Search algorithm process
– Formulate -> Search -> Execute
12
Problem-solving agents (cont’d)
13
Environment Classification for
Problem-solving agent
• Observability
– Fully observable
• Environment changes
– Static
• Dependence of events
– Deterministic
• Quality of actions
– Discrete
14
Problem Formulation
• Problem – information that the agent uses to
decide what to do.
• Initial state – state that the agent knows itself to
be in.
• Successor function
– <action,successor>: successor state is reachable by
applying action
– State space = initial state + successor function
• Goal test
– Determines whether given state is goal state
• Path cost
– Assigns numeric cost to each path
• Path: sequence of states connected by sequence of
actions
• Step cost: c(x, a, y); where a = action taken, x = starting
state, y = resulting state
15
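The components listed above can be bundled into one structure. A minimal sketch (class and parameter names are illustrative, not from the slides):

```python
# A problem, as defined on this slide: initial state, successor
# function, goal test, and step cost c(x, a, y).
class Problem:
    def __init__(self, initial, successor_fn, goal_state, step_cost):
        self.initial = initial            # state the agent starts in
        self.successor_fn = successor_fn  # state -> [(action, successor)]
        self.goal_state = goal_state
        self.step_cost = step_cost        # c(x, a, y)

    def goal_test(self, state):
        return state == self.goal_state

    def path_cost(self, path):
        """Total cost of a path given as [(x, a, y), ...] triples."""
        return sum(self.step_cost(x, a, y) for x, a, y in path)

# Toy instance: states are integers, one action "+1", unit step cost.
p = Problem(0, lambda s: [("+1", s + 1)], 3, lambda x, a, y: 1)
print(p.goal_test(3), p.path_cost([(0, "+1", 1), (1, "+1", 2), (2, "+1", 3)]))
# True 3
```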
Example: Vacation in Romania
[Figure: road map of Romania with distances between cities]
16
Example: Vacation in Romania
• Formulate goal:
– Be in Bucharest
• Formulate problem:
– Initial state: in Arad
– Successor function: <drive to other city, in
city>
– Goal test: in Bucharest
– Path cost: distance between cities
• Find solution:
17
Example: Vacation in Romania
18
Example: Romania
• Formulate goal:
– Be in Bucharest
• Formulate problem:
– Initial state: in Arad
– Successor function: <drive to other city, in
city>
– Goal test: in Bucharest
– Path cost: distance between cities
• Find solution:
– Shortest sequence of cities, e.g., Arad, Sibiu,
Rimnicu Vilcea, Pitesti, Bucharest
19
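The path cost of the solution can be checked directly against the map distances. A sketch using a subset of the standard distances (in km) for this example:

```python
# Romania road map as a weighted graph (subset of the distances
# commonly used for this example; figures in km).
roads = {
    ("Arad", "Sibiu"): 140,
    ("Sibiu", "Rimnicu Vilcea"): 80,
    ("Rimnicu Vilcea", "Pitesti"): 97,
    ("Pitesti", "Bucharest"): 101,
    ("Arad", "Zerind"): 75,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

def path_cost(path):
    """Total distance of a path given as a list of cities."""
    return sum(roads[(a, b)] for a, b in zip(path, path[1:]))

print(path_cost(["Arad", "Sibiu", "Rimnicu Vilcea", "Pitesti", "Bucharest"]))  # 418
print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))                    # 450
```

The route through Rimnicu Vilcea and Pitesti is shorter than the more direct-looking route through Fagaras, which is why the solution on this slide takes it.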
Problem Formulation (cont’d)
• Abstraction - removing detail from a
representation
– Valid: if abstract solution can be
expanded into detailed world
– Useful: actions in abstract solution
“easier” than original problem
• Good abstraction choice – removes
all possible detail while retaining
validity
20
Problem types
• Deterministic, fully observable → single-state
problem
– Agent knows exactly which state it will be in; solution is
a sequence
• Non-observable → sensorless/conformant
problem
– Agent may have no idea where it is; solution is a
sequence
• Nondeterministic and/or partially observable →
contingency problem
– Percepts provide new information about current state
• Unknown state space → exploration problem
21
Example: vacuum world
• Single-state, start in
#5. Solution?
22
Example: vacuum world
• Single-state, start in #5.
Solution? [Right, Suck]
• Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
23
Example: vacuum world
• Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
• Contingency
– Nondeterministic: Suck
may dirty a clean carpet
– Partially observable: location, dirt at current location.
– Percept: [L, Clean], i.e., start in #5 or #7
Solution?
24
Example: vacuum world
• Sensorless, start in
{1,2,3,4,5,6,7,8} e.g.,
Right goes to {2,4,6,8}
Solution?
[Right,Suck,Left,Suck]
• Contingency
– Nondeterministic: Suck
may dirty a clean carpet
– Partially observable: location, dirt at current location.
– Percept: [L, Clean], i.e., start in #5 or #7
Solution? [Right, if dirt then Suck]
25
Vacuum world state space graph
• states?
• actions?
• goal test?
• path cost?
26
Vacuum world state space graph
• states? integer dirt and robot location
• actions? Left, Right, Suck
• goal test? no dirt at all locations
• path cost? 1 per action
27
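The state space above can be sketched directly: a state pairs the robot's location with the dirt status of each square. A minimal sketch for the two-square world (helper names are illustrative):

```python
# Vacuum world: a state is (robot location, dirt status per square).
# Two squares: 0 = left, 1 = right.
def successors(state):
    loc, dirt = state
    clean = tuple(False if i == loc else d for i, d in enumerate(dirt))
    return [("Left",  (0, dirt)),
            ("Right", (1, dirt)),
            ("Suck",  (loc, clean))]      # removes dirt at current square

def goal_test(state):
    return not any(state[1])              # no dirt at all locations

def apply(state, action):
    """Deterministic transition for Left, Right, Suck."""
    for a, s in successors(state):
        if a == action:
            return s

# Robot in the left square, dirt only on the right: solved by
# [Right, Suck], as in the slide's single-state example.
s = (0, (False, True))
for a in ["Right", "Suck"]:
    s = apply(s, a)
print(goal_test(s))  # True
```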
Example: The 8-puzzle
• states?
• actions?
• goal test?
• path cost?
28
Example: The 8-puzzle
• states? locations of tiles
• actions? move blank left, right, up, down
• goal test? = goal state (given)
• path cost? 1 per move
29
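The 8-puzzle formulation above can be sketched with tuples for states. The goal layout below is an assumption for illustration (the slide says the goal state is given):

```python
# 8-puzzle: a state is a tuple of 9 tiles read row by row on a
# 3x3 board, with 0 standing for the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # assumed goal layout

def moves(state):
    """Legal (action, successor) pairs: move the blank L/R/U/D."""
    i = state.index(0)
    r, c = divmod(i, 3)
    result = []
    for action, dr, dc in [("left", 0, -1), ("right", 0, 1),
                           ("up", -1, 0), ("down", 1, 0)]:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]   # slide the neighbouring tile
            result.append((action, tuple(s)))
    return result

def goal_test(state):
    return state == GOAL

print(len(moves(GOAL)))  # 2: blank in a corner has two legal moves
```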
Example: robotic assembly
• states?
• actions?
• goal test?
• path cost?
30
Example: robotic assembly
• states?: real-valued coordinates of robot
joint angles and of the parts of the object
to be assembled
• actions?: continuous motions of robot
joints
• goal test?: complete assembly
• path cost?: time to execute
31
Semantic Tree
• Semantic net with the following properties:
– Each node has exactly 1 predecessor
(except for root)
– Each node has one or more successors
• Search tree
– Represents possible paths through the
semantic net
32
Terms
•Root – has no predecessor
•Descendants/Successors – node lower down the tree
•Ancestor – node higher up the tree
•Edge
•Goal node – leaf node (has no successor)
•Path - route from root to leaf node
•Depth (d)
•Branching factor (b)
33
Terms
•Root of the search tree
•Node A
•Descendants/Successors of node B
•Nodes D and E
•Ancestor of a node F
•Node C
34
Terms
•Edge
•Represents actions/operators to get from 1 node to another
•Goal node
•Nodes E and N
•Path
•Complete path (leads to goal) – ABE
•Partial path (leads to leaf, not goal)
35
Terms
•Branching factor (b)
•b = 2
•Depth (d)
•d = 3
36
Tree search example
[Figure sequence: successive expansions of the search tree]
39
Missionaries and Cannibals
• 3 missionaries and 3 cannibals are on one side of a river,
with a canoe. They all want to get to the other side of the
river. The canoe can only hold one or two people at a time.
At no time should there be more cannibals than
missionaries on either side of the river, as this would
probably result in the missionaries being eaten.
Use a suitable representation and solve.
Operators:
1) Move 1 cannibal to other side.
2) Move 2 cannibals to other side.
3) Move 1 missionary to other side.
4) Move 2 missionaries to other side.
5) Move 1 cannibal and 1 missionary to other side.
40
Solution:
41
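One way to obtain the solution is breadth-first search over the state (missionaries, cannibals, boat) on the starting bank, using the five operators listed on the previous slide. A sketch (the state encoding is an assumption, not from the slides):

```python
from collections import deque

# State: (missionaries, cannibals, boat) on the starting bank;
# boat = 1 means the canoe is on the starting bank.
def valid(m, c):
    # Neither bank may have cannibals outnumbering missionaries present.
    return (0 <= m <= 3 and 0 <= c <= 3 and
            (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c))

def solve():
    """Breadth-first search over the five canoe moves on the slide."""
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == goal:
            return path
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:
            sign = -1 if b == 1 else 1        # boat carries people across
            nm, nc, nb = m + sign * dm, c + sign * dc, 1 - b
            if valid(nm, nc) and (nm, nc, nb) not in visited:
                visited.add((nm, nc, nb))
                frontier.append(((nm, nc, nb), path + [(dm, dc)]))

print(len(solve()))  # 11: the shortest solution takes 11 crossings
```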
Tree Search Algorithms
• Basic idea:
– offline, simulated exploration of state space by
generating successors of already-explored
states (a.k.a. expanding states)
42
Tree Search Algorithms (cont’d)
• Node - data structure on a search tree
– State
– Parent node
– Action
– Path cost g(x)
– Depth
• Expand function creates new nodes
43
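The node data structure and the expand function above can be sketched as follows (field names follow the slide; the `solution` helper is illustrative):

```python
# Node structure from the slide: state, parent, action, path cost, depth.
class Node:
    def __init__(self, state, parent=None, action=None, cost=0):
        self.state, self.parent, self.action = state, parent, action
        self.cost = cost                         # path cost g(x)
        self.depth = 0 if parent is None else parent.depth + 1

def expand(node, successors, step_cost=lambda x, a, y: 1):
    """Create a child node for every (action, successor) of node.state."""
    return [Node(s, node, a, node.cost + step_cost(node.state, a, s))
            for a, s in successors(node.state)]

def solution(node):
    """Walk parent links back to the root, returning the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```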
Tree Search Algorithm (cont’d)
44
Search Strategies
• Search strategy - order of node expansion
• Strategies evaluated:
– completeness: Finds a solution if one exists
– time complexity: Time to find solution
– space complexity: Memory required to perform search
– optimality: Least-cost solution
• Time and space complexity
– b: branching factor of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
45
Uninformed Search Strategies
• Uninformed search strategies - use
only the information available in the
problem definition
– Breadth-first search
– Uniform-cost search
– Depth-first search
– Depth-limited search
– Iterative deepening search
46
Breadth-First Search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e. new
successors go at end
47
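The FIFO-queue implementation can be sketched in a few lines (a tree-search version with no repeated-state check; function names are illustrative):

```python
from collections import deque

# Breadth-first search: the fringe is a FIFO queue, so new successors
# go at the end and the shallowest node is expanded first.
def breadth_first_search(start, successors, goal_test):
    frontier = deque([(start, [])])       # (state, path of states so far)
    while frontier:
        state, path = frontier.popleft()  # shallowest unexpanded node
        if goal_test(state):
            return path + [state]
        for nxt in successors(state):
            frontier.append((nxt, path + [state]))

# Toy tree: node n has children 2n and 2n+1; find node 5 from node 1.
print(breadth_first_search(1, lambda n: [2*n, 2*n+1] if n < 8 else [],
                           lambda n: n == 5))  # [1, 2, 5]
```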
Breadth-First Search (cont’d)
[Figure sequence: the fringe expanding in FIFO order, shallowest
node first]
50
Breadth-First Search Properties
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d - 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in
memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than
time)
51
Uniform-Cost Search
• Breadth-first search
– Optimal only if path cost is a nondecreasing
function of the depth of the solution.
• Uniform cost search
– expands the lowest cost node on the
fringe, where cost is the path cost, g(n)
52
Uniform-cost Search (cont’d)
Consider the following problem…
[Figure: graph with edge costs S-A = 1, S-B = 5, S-C = 15,
A-G = 10, B-G = 5]
We wish to find the shortest route from node S to node G; that is, node S is the initial
state and node G is the goal state. In terms of path cost, we can clearly see that the
route SBG is the cheapest route. However, if we let breadth-first search loose on the
problem it will find the non-optimal path SAG, assuming that A is the first node to be
expanded at level 1.
53
• Node S is removed from the queue and the revealed nodes are
added to the queue. The queue is then sorted on path cost.
Nodes with cheaper path cost have priority. In this case the
queue will be node A (1), node B (5), followed by node C (15).
• We now expand the node at the front of the queue, node A.
• Node A is removed from the queue and the revealed node (node
G) is added to the queue. The queue is again sorted on path
cost. Note, we have now found a goal state but do not recognise
it as it is not at the front of the queue. Node B is the cheaper
node.
• Once node B has been expanded it is removed from the queue
and the revealed node (node G) is added. The queue is again
sorted on path cost. Note, node G now appears in the queue
twice, once as G10 and once as G11. As G10 is at the front of
the queue, we now proceed to goal state.
54
We start with our initial state and expand it…
[Figure: UCS expands the graph with edge costs S-A = 1, S-B = 5,
S-C = 15, A-G = 10, B-G = 5]
The goal state is achieved and the
path S-B-G is returned. In relation to
path cost, UCS has found the optimal
route.
55
Uniform-Cost Search
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
– Equivalent to breadth-first if step costs all equal
• Complete? Yes, if step cost ≥ ε
• Time? # of nodes with g ≤ cost of optimal
solution, O(b^⌈C*/ε⌉) where C* is the cost of the
optimal solution
• Space? # of nodes with g ≤ cost of optimal
solution, O(b^⌈C*/ε⌉)
• Optimal? Yes – nodes expanded in increasing
order of g(n)
56
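The priority-queue implementation can be sketched on the S/A/B/C/G example from the earlier slides. Note that the goal test is applied only when a node reaches the front of the queue, which is what makes the search return G via B (cost 10) rather than via A (cost 11):

```python
import heapq

# Uniform-cost search: the fringe is a priority queue ordered by
# path cost g(n). Graph from the worked example above.
graph = {"S": [("A", 1), ("B", 5), ("C", 15)],
         "A": [("G", 10)], "B": [("G", 5)], "C": [], "G": []}

def uniform_cost_search(start, goal):
    frontier = [(0, start, [start])]          # (g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:                     # test only at queue front
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in graph[state]:
            heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))

print(uniform_cost_search("S", "G"))  # (10, ['S', 'B', 'G'])
```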
Depth-First Search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at
front
57
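The LIFO implementation can be sketched like the breadth-first version, with a stack in place of the FIFO queue (a tree-search version with no repeated-state check; names are illustrative):

```python
# Depth-first search: the fringe is a LIFO stack, so a node's
# successors are expanded before its siblings (deepest node first).
def depth_first_search(start, successors, goal_test):
    frontier = [(start, [])]              # stack of (state, path so far)
    while frontier:
        state, path = frontier.pop()      # deepest unexpanded node
        if goal_test(state):
            return path + [state]
        for nxt in reversed(successors(state)):  # keep left-to-right order
            frontier.append((nxt, path + [state]))

# Toy tree: node n has children 2n and 2n+1; find node 5 from node 1.
print(depth_first_search(1, lambda n: [2*n, 2*n+1] if n < 8 else [],
                         lambda n: n == 5))  # [1, 2, 5]
```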
Depth-First Search (cont’d)
[Figure sequence: the fringe expanding in LIFO order, deepest
node first]
68
Depth-First Search Properties
• Complete? No: fails in infinite-depth
spaces, spaces with loops
– Modify to avoid repeated states along path
 complete in finite spaces
• Time? O(b^m): terrible if m is much larger
than d
– but if solutions are dense, may be much faster
than breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
69
Depth-Limited Search
• Depth-first search with depth limit l, i.e. nodes at
depth l have no successors
• Imposes a cutoff on the depth of a path
• Recursive implementation:
70
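The recursive implementation the slide refers to can be sketched as follows, using a "cutoff" marker to distinguish hitting the depth limit from genuine failure (a common convention, assumed here):

```python
# Recursive depth-limited search: depth-first search that treats
# nodes at depth l as if they had no successors.
def depth_limited_search(state, successors, goal_test, limit):
    """Return a path to a goal, 'cutoff' if the limit was hit, or None."""
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff = False
    for nxt in successors(state):
        result = depth_limited_search(nxt, successors, goal_test, limit - 1)
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return [state] + result
    return "cutoff" if cutoff else None

# Toy tree: node n has children 2n and 2n+1; node 5 sits at depth 2.
succ = lambda n: [2 * n, 2 * n + 1] if n < 8 else []
print(depth_limited_search(1, succ, lambda n: n == 5, 1))  # cutoff
print(depth_limited_search(1, succ, lambda n: n == 5, 2))  # [1, 2, 5]
```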
Iterative Deepening Search
• Choose the best depth limit by trying
all possible depth limits
• First depth 0, then 1, then 2 etc.
• Combines the benefits of depth-first
and breadth-first search
• Optimal and complete (like breadth-first)
• Modest memory requirement (like depth-first)
71
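The scheme above can be sketched as a loop over depth limits, each iteration running a depth-limited depth-first search (self-contained sketch; assumes a solution exists, otherwise the loop never terminates):

```python
from itertools import count

# Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
# until a solution appears.
def iterative_deepening_search(start, successors, goal_test):
    def dls(state, limit):
        if goal_test(state):
            return [state]
        if limit == 0:
            return None
        for nxt in successors(state):
            result = dls(nxt, limit - 1)
            if result is not None:
                return [state] + result
        return None

    for limit in count(0):                # try every depth limit in turn
        result = dls(start, limit)
        if result is not None:
            return result

# Toy tree: node n has children 2n and 2n+1; find node 5 from node 1.
print(iterative_deepening_search(1, lambda n: [2*n, 2*n+1] if n < 8 else [],
                                 lambda n: n == 5))  # [1, 2, 5]
```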
Iterative Deepening Search
72
Iterative Deepening Search l =0
73
Iterative Deepening Search l =1
74
Iterative Deepening Search l =2
75
Iterative Deepening Search l =3
76
Iterative Deepening Search
Properties
• Complete? Yes
• Time? (d+1)b^0 + d b^1 + (d-1)b^2 + … +
b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
77
Summary Of Algorithms
78
Repeated States
• Failure to detect repeated states can turn
a linear problem into an exponential one
• Time wasted expanding already processed
states
• How to avoid:
– Systematic search
– Memory of visited states (open list vs. closed
list)
79
Graph search
80
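Graph search is tree search plus a memory of visited states (the closed list), so a repeated state is never expanded twice. A breadth-first sketch on a small cyclic graph (names are illustrative):

```python
from collections import deque

# Graph search: the closed set records states already seen, which
# keeps the search finite even when the state space contains loops.
def graph_search(start, successors, goal_test):
    frontier = deque([(start, [start])])
    closed = {start}                      # visited states
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in closed:         # skip repeated states
                closed.add(nxt)
                frontier.append((nxt, path + [nxt]))

# A cyclic graph (A and B point at each other): tree search would
# loop here, graph search terminates.
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["D"], "D": []}
print(graph_search("A", lambda s: graph[s], lambda s: s == "D"))
# ['A', 'B', 'D']
```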