Search Intro
• Our concern is with problem-solving agents
– Agents that try to maximize their performance measure with respect to
achieving some goal
– Will use atomic representations for states and goals
• Steps required:
1. Goal formulation
– Identify the goal to be achieved
– Based on current situation and performance measure
– Goal is a set of world states
2. Problem formulation
– Defines means by which the agent can solve the problem
– Actions are used to transform one world state to another
– Agent must consider
∗ What actions are available
∗ What states to consider and their nature
∗ Granularity
3. Search
– Process of finding sequence of actions that will take agent from initial
(current) state to goal state
– Two general situations to consider:
(a) No additional information about the problem
∗ Simply choose actions at random
(b) Have knowledge about the states that an action can lead to
∗ Can plan multiple sequences of actions
∗ Select best
– The solution is the sequence of actions that transforms the initial state
to the goal state
∗ Assuming an environment that is observable, discrete, known, and
deterministic
∗ The solution is not modified during execution; incoming percepts are
ignored
Search Intro (2)
4. Execute
• Algorithm:
function Simple-Problem-Solving-Agent (pcpt) returns action
{
    input percept pcpt
    static action-sequence as    // initially empty
    state st
    goal gl
    problem prob
    action a

    st <- UPDATE-STATE(st, pcpt)
    if (EMPTY(as)) {
        gl <- FORMULATE-GOAL(st)
        prob <- FORMULATE-PROBLEM(st, gl)
        as <- SEARCH(prob)
        if (EMPTY(as))
            return null-action
    }
    a <- FIRST(as)
    as <- REST(as)
    return a
}
– This is an open-loop system
∗ Agent assumes each action will be carried out successfully
– Once execution begins, it will ignore percepts
– No self-correction or replanning
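• A minimal executable sketch of the same loop in Python; the
formulate_goal, formulate_problem, and search arguments are hypothetical
stand-ins for the components named in the pseudocode:

    class SimpleProblemSolvingAgent:
        def __init__(self, formulate_goal, formulate_problem, search):
            self.formulate_goal = formulate_goal
            self.formulate_problem = formulate_problem
            self.search = search
            self.state = None
            self.action_sequence = []   # 'as' in the pseudocode; initially empty

        def __call__(self, percept):
            # Open loop: the percept only updates the state estimate; once a
            # plan exists, further percepts are ignored until it is exhausted
            self.state = self.update_state(self.state, percept)
            if not self.action_sequence:
                goal = self.formulate_goal(self.state)
                problem = self.formulate_problem(self.state, goal)
                self.action_sequence = self.search(problem) or []
                if not self.action_sequence:
                    return None                    # null-action
            return self.action_sequence.pop(0)     # a <- FIRST(as); as <- REST(as)

        def update_state(self, state, percept):
            # Simplest assumption: the percept fully identifies the state
            return percept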
Search Intro: Problem Description
• The following five components define a problem:
1. Initial state: State from which problem solving begins
2. Action: An action is a step that the agent is capable of carrying out
ACTIONS(state) is a function that returns the set of actions that are
applicable in state
3. Transition model: Describes the results of carrying out actions
RESULT(state, action) is a function that returns the state that results from applying action to state
– The returned state is the successor to state
– State space: The set of all states reachable from the initial state using
the actions available to the agent
– Path: Sequence of actions linking one state to another
– The state space for a given problem is defined by the initial state, actions,
and transition model
4. Goal test: Tests a state to see if it is a goal state
– Can simply be a value that is matched against
– Can be an abstract description that is applied to the state being tested
5. Path cost function: Computes the cost of the path from the initial state to
the current state
– Usually additive
– Usually represented as g(state)
– Step cost of applying action a in state s that results in state s′ is the
cost of going from state s to s′
∗ Represented as c(s, a, s′)
• Solution to a problem is the path from the initial to the goal state
– An optimal solution has the cheapest path cost
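• A minimal Python sketch of these five components; the two-location vacuum
world used here is an assumed illustration, not part of the notes:

    class VacuumProblem:
        def __init__(self):
            # 1. Initial state: (robot location, dirt at A, dirt at B)
            self.initial_state = ('A', True, True)

        # 2. Actions applicable in a state
        def actions(self, state):
            return ['Left', 'Right', 'Suck']

        # 3. Transition model: RESULT(state, action)
        def result(self, state, action):
            loc, dirt_a, dirt_b = state
            if action == 'Left':
                return ('A', dirt_a, dirt_b)
            if action == 'Right':
                return ('B', dirt_a, dirt_b)
            # Suck removes dirt at the current location
            return (loc, dirt_a and loc != 'A', dirt_b and loc != 'B')

        # 4. Goal test: no dirt anywhere
        def goal_test(self, state):
            return not state[1] and not state[2]

        # 5. Step cost c(s, a, s'); path cost g is the (additive) sum of these
        def step_cost(self, state, action, result_state):
            return 1   # unit cost in the absence of other information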
Search Intro: Path Cost
• The cost associated with a solution has two components
1. Path cost
– In real world, may reflect
∗ Distance
∗ Fuel/energy expended
∗ Danger
∗ Number of steps
∗ ...
– In the absence of other info, cost of each action is usually 1
2. Search cost
– Measured in terms of time and/or memory
– Tradeoffs that are often made:
(a) Time v memory
(b) Time v less optimal path
(c) ...
Search Intro: Problem Formulation
• Goal of problem formulation is abstraction
• The world is very complex
– Want to simplify problem as much as possible (for computational reasons)
1. For a given problem, can usually ignore most aspects of a state
∗ Want to consider only those aspects of the environment that are relevant to solving the problem
2. Want to represent only those actions that are instrumental in achieving
a goal
– An abstraction is valid if an abstract solution can be expanded into a
solution in the more detailed world
• Wise selection of state descriptions and available actions greatly influences
the time and/or storage requirements of search
Search Intro: Classic Problems
• Eight- (fifteen-, twenty-four-, ...) puzzle
• Eight queens problem
• Missionaries and cannibals (hares and foxes)
• Monkey and bananas
• Cryptarithmetic
• Traveling salesman
• Robot navigation
• ...
Search Intro: Problem Characteristics
• The following problem characteristics may affect the problem-solving strategy
1. Can the problem be broken into substeps that can be solved independently
of each other?
2. Is the world predictable?
3. Is the solution absolute or relative?
4. Is the solution a state or a path?
5. What knowledge is available to guide the search?
Search Intro: Search Basics
• Basic approach of search - Given a problem:
– For a given state, there is a set of actions that can be taken
– Application of these actions will result in a set of new states
– This application is called expanding a state
– The process continues until
1. A goal state is reached, or
2. No states remain to be expanded
• The process is called search because the set of states generated is rarely a
singleton
– I.e., rarely does problem solving generate a direct path from the initial state
to a goal state without generating additional paths
• The data structures used to represent the search are trees and graphs
– In either case the structure represents the set of paths explored from the
initial state at any given point in time during the search process
– Root represents the initial state
– Leaf nodes are those with no successors, or which have not been expanded
yet
∗ Leaves waiting to be expanded are called the frontier, fringe, or open set
∗ Tree expands from the leaves
– At any time, the states of the search space are partitioned into 3 sets:
∗ States visited and expanded
∗ Frontier
∗ States not yet visited
– Links between nodes represent actions that take the agent from one state
to another
Search Intro: Search Strategy Issues
1. Representation of state space
(a) Explicit
• Entire state space generated and stored
• May be result of previous searches (i.e., compiled knowledge)
• May be generated specifically for given problem
• Memory requirements excessive
– Generally state space is large
– Only applicable for small problems
– Many states generated that are never visited in search
• Computation effort excessive
– Many states generated that are never visited in search
– Not possible in unpredictable universe
(b) Implicit
• Search tree constructed dynamically as nodes visited
• More efficient in time and space
– Only maintain nodes visited
• State space implicit in operators and control strategy
2. Data structure for state space
(a) Tree
• As states are generated, they are added to the tree without checking
whether they have already been visited
– States may appear multiple times in tree
(b) Graph
• Each state appears exactly once
Search Intro: Search Strategy Issues (2)
3. Search strategy
• This is the algorithm that determines which node to expand next
• Must cause motion
• Must be systematic
– Should guarantee that every state of the state space will eventually be
visited
4. Direction of search
• Start from initial state and search toward the goal state
– Called data driven, or forward chaining search
• Start from goal state and search for the initial state
– Called goal driven, or backward chaining search
• Bidirectional
– Search from both the goal and the initial state
Search Intro: Tree Search
• In this section we assume a search tree (as opposed to a search graph)
• Algorithm:
function TREE-SEARCH (problem) returns solution, or failure
{
    frontier <- initial state of problem
    loop
        if (empty(frontier))
            return failure
        choose a leaf node and remove it from the frontier
        if (leaf node is a goal state)
            return solution
        expand node
        add results to frontier
}
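• A minimal Python sketch of TREE-SEARCH, assuming a problem object with the
initial_state, actions, result, and goal_test members sketched earlier; the
FIFO frontier (breadth-first order) is one arbitrary choice:

    from collections import deque

    def tree_search(problem):
        # Each frontier entry pairs a state with the actions used to reach it
        frontier = deque([(problem.initial_state, [])])
        while frontier:
            state, path = frontier.popleft()   # choose and remove a leaf node
            if problem.goal_test(state):
                return path                    # solution: a sequence of actions
            for action in problem.actions(state):    # expand the node
                child = problem.result(state, action)
                frontier.append((child, path + [action]))
        return None                            # failure: frontier exhausted

– For the vacuum problem sketched earlier, tree_search(VacuumProblem())
returns ['Suck', 'Right', 'Suck']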
• Must distinguish between the search tree and the state space
– Search tree consists of a set of nodes that represent states from the state
space
– A state only appears once in the state space
– A state may be associated with multiple nodes in a search tree
∗ Except in rare cases, a state space is finite
∗ A search tree is often infinite
• A loopy path is one that visits the same state twice, e.g., via an action
and its inverse
– Results in infinite paths
• A redundant path is any alternative path to a node
– Result of a different set of actions from some ancestor node
Search Intro: Tree Search (2)
• Node representation
– Node n can be represented as
1. n.state: The state the node represents
2. n.parent: The parent of the node
3. n.action: The action used to go from the parent state to the state
4. n.path-cost: The cost of getting from the root to the state
• Function for generating a child node in the search tree:
function CHILD-NODE (problem, parent, action) returns node
{
node <- new(node)
node.STATE <- problem.RESULT(parent.STATE, action)
node.PARENT <- parent
node.ACTION <- action
node.PATH-COST <- parent.PATH-COST +
problem.STEP-COST(parent.STATE, action)
return node
}
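• The same in Python, reusing the four fields above (a sketch, with
problem.result and problem.step_cost as defined in the earlier example):

    class Node:
        # Search-tree node: a state plus bookkeeping fields
        def __init__(self, state, parent=None, action=None, path_cost=0):
            self.state = state
            self.parent = parent
            self.action = action
            self.path_cost = path_cost

    def child_node(problem, parent, action):
        # Apply the transition model and accumulate the additive path cost
        state = problem.result(parent.state, action)
        cost = parent.path_cost + problem.step_cost(parent.state, action, state)
        return Node(state, parent, action, cost)

– The solution path can be recovered by following parent links from a goal
node back to the root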
Search Intro: Tree Search (3)
• Data structure used to hold nodes on frontier should provide efficient access to
nodes
– Queue is preferred
– Functions required
1. boolean ← EMPTY?(queue)
2. queue ← INSERT(element, queue)
3. node ← POP(queue)
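• In Python these operations map directly onto standard containers; the
container chosen fixes the order in which nodes are expanded (the tuple used
as a stand-in node here is an assumption):

    from collections import deque
    import heapq

    node = ('some-state', 0)          # stand-in node: (state, path cost)

    # FIFO queue: oldest node popped first
    fifo = deque()
    fifo.append(node)                 # INSERT(element, queue)
    n = fifo.popleft()                # POP(queue)
    print(len(fifo) == 0)             # EMPTY?(queue) -> True

    # LIFO stack: newest node popped first
    lifo = [node]                     # INSERT
    n = lifo.pop()                    # POP

    # Priority queue: cheapest node popped first (keyed here on path cost)
    pq = []
    heapq.heappush(pq, (node[1], node))   # INSERT
    g, n = heapq.heappop(pq)              # POP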
Search Intro: Graph Search
• To remedy issues with tree search (infinite paths, large storage requirements,
etc.) use a graph instead
• Requires remembering nodes that have already been visited
– Stored in explored set (closed set)
– Can be implemented as a hash table (assuming state description is atomic)
– Use a canonical representation for states
• Search graph will have at most one node per state of the state space
• Algorithm:
function GRAPH-SEARCH (problem) returns solution, or failure
{
    frontier <- initial state of problem
    explored-set <- empty set
    loop
        if (empty(frontier))
            return failure
        choose a leaf node and remove it from the frontier
        if (leaf node is a goal state)
            return solution
        add node to explored-set
        expand node
        for (each node r in results)
            if ((r not in frontier) AND (r not in explored-set))
                add r to frontier
}
• Requires more overhead than tree search
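• A Python sketch of GRAPH-SEARCH, again with a FIFO frontier; it assumes
atomic (hashable) state descriptions so the explored set can be a hash set:

    from collections import deque

    def graph_search(problem):
        frontier = deque([(problem.initial_state, [])])
        in_frontier = {problem.initial_state}   # states currently on frontier
        explored = set()                        # explored (closed) set
        while frontier:
            state, path = frontier.popleft()
            in_frontier.discard(state)
            if problem.goal_test(state):
                return path
            explored.add(state)
            for action in problem.actions(state):
                child = problem.result(state, action)
                if child not in in_frontier and child not in explored:
                    frontier.append((child, path + [action]))
                    in_frontier.add(child)
        return None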
Search Intro: Knowledge and Search
• The agent’s knowledge (or lack thereof) of what state it may be in after it
performs an action affects how search proceeds
• Problems can be categorized based on this knowledge as follows
1. Single state problems
– Agent has complete knowledge about the current state and its actions
– Can find an exact sequence of actions that will take it from the initial
to the goal state
– Every node in the search structure corresponds to a specific state in the
state space
2. Multiple state problems
– These result from the agent’s limited knowledge of either the results of
its actions, or the states that it is in
(a) Complete knowledge of actions, incomplete of state
∗ Will not know exactly what the initial state is
∗ Agent must reason about the set of states it could be in
∗ Actions take the agent from one set of states to another (hence the
name multiple state problem)
∗ Task of agent is to find a sequence of actions that take it from the
initial state set to one which contains a goal state
(b) Complete knowledge of state, incomplete of actions
∗ This results when actions do not have desired effects
3. Contingency problems
– Incomplete knowledge about states and actions
– Can plan sequences of actions, but no guarantee that a solution will be
found
4. Exploration problems
– NO knowledge of states or actions
– Agent must learn via experimentation
Search Intro: Search Direction
• Search may be made more efficient by judicious choice of direction
• Algorithms presented so far have been data driven: start with the initial state and search for the goal state
• Goal driven (backward-chaining) search
– Start at goal and search for initial state
– From a given node in the search structure, requires
1. Identifying actions that could have generated the state associated with
the node
2. Determining what the state would be if the action were reversed/undone
3. Search then continues from this set of nodes
– Issues:
1. Not applicable when the environment is stochastic or non-deterministic
2. If there are multiple goal states
∗ Can be treated as a multi-state problem
– Which direction is better?
∗ Choose direction with smaller branching factor
∗ Choose direction with greater number of targets
Search Intro: Search Direction (2)
• Bidirectional search
– Search from both the goal and the initial state
– In ideal case, fewer nodes are visited than doing pure data or goal driven
search
∗ Let b be the branching factor, d be the tree/graph depth
Then ideally b^(d/2) + b^(d/2) << b^d
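∗ E.g., with b = 10 and d = 6: 10^3 + 10^3 = 2,000 nodes v 10^6 = 1,000,000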
– Issues:
1. Must perform additional work to detect when the two expanding
frontiers meet
2. If the paths do not meet, more work is done than in a unidirectional search
Search Intro: Blind (Uninformed) v Informed Strategies
• General types of search strategies:
– Uninformed (blind, weak) methods
∗ Do not exploit knowledge about the problem
∗ Applicable to any problem
– Informed (heuristic, strong) methods
∗ Exploit knowledge about the problem to guide the search
∗ Such knowledge is known as heuristics (rules of thumb)
∗ Heuristic info is problem-specific
• Control strategies evaluated on
1. Completeness - is a solution guaranteed to be found when one exists?
2. Space complexity
3. Time complexity
4. Optimality