Unit 2
Basic Problem Solving Methods
1. AI problems are too complex to be solvable by direct techniques.
2. They must be attacked with search methods that guide exploration of the problem space.
3. In this unit, several general-purpose search techniques will be discussed, independently of any
particular task or problem domain.
4. These methods are all varieties of heuristic search.
5. Every search process can be viewed as a traversal of a directed graph in which
a) each node represents a problem state and
b) each arc represents a relationship between the states.
6. These graphs can be very large, and most of a graph need never be explored. In practice, most
programs represent the graph implicitly in the rules and generate explicitly only those parts that
they decide to explore.
Important issues that arise in all of the search techniques:
1. The direction in which to conduct the search
2. The topology of the search process
3. How each node of the search process will be represented
4. Selecting applicable rule
5. Using Heuristic function to guide the search
1) Forward V/S Backward Reasoning
To discover a path through the problem space of any problem, search can proceed in two ways:
a. Forward, from the start states
b. Backward, from the goal states
Consider again the problem of the 8-puzzle.
Assume the squares of the tray are numbered.
[Figure: the start and goal configurations of the puzzle]
The rules used for solving the problem can be written as follows:
Square 1 empty and Square 2 contains tile n -->
Square 2 empty and Square 1 contains tile n
Square 1 empty and Square 4 contains tile n -->
Square 4 empty and Square 1 contains tile n
Square 2 empty and Square 1 contains tile n -->
Square 1 empty and Square 2 contains tile n
...
• Reason forward from the initial state:
Start with the initial configuration(s) at the root of the tree.
Generate the next level of the tree by finding all the rules whose left sides match the root
node.
Use their right sides to create the new configurations.
Generate the next level by expanding each node generated at the previous level.
Continue until a configuration that matches the goal state is generated.
• Reason backward from the goal state:
Start with the goal configuration(s) at the root of the tree.
Generate the next level of the tree by finding all the rules whose right sides match the root
node.
Use their left sides to create the new configurations.
Generate the next level by expanding each node generated at the previous level.
Continue until a configuration that matches the initial state is generated.
This method of chaining backward from the desired final state is often called goal-directed reasoning or back-chaining.
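The two directions can be sketched in a few lines of Python. This is an illustrative toy, not the 8-puzzle itself: rules are stored as whole (left side, right side) state pairs, whereas real systems match patterns against partial state descriptions.

```python
# Toy rule base: each rule maps a whole left-side state to a right-side
# state (a simplification; real rules match partial descriptions).
RULES = [("A", "B"), ("B", "C"), ("C", "goal")]

def forward_successors(state):
    """States reachable by rules whose left side matches `state`."""
    return [rhs for lhs, rhs in RULES if lhs == state]

def backward_predecessors(state):
    """States from which some rule's right side produces `state`."""
    return [lhs for lhs, rhs in RULES if rhs == state]

# Reason forward from the start state "A" until the goal is generated:
frontier = ["A"]
while frontier:
    s = frontier.pop(0)
    if s == "goal":
        break
    frontier.extend(forward_successors(s))
```

Backward reasoning would run the same loop with `backward_predecessors`, starting from `"goal"` and stopping at the start state.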
• In the case of the 8-puzzle, it does not make much difference whether we reason forward or
backward.
• About the same number of paths will be explored in either case.
• But this is not always true; it depends on the topology of the problem space.
Factors influencing the choice of direction in which to search:
• Are there more possible start states or goal states? We would like to move from the smaller set toward the larger.
• In which direction is the branching factor (the average number of nodes that can be reached directly from
a single node) greater? We would like to move in the direction with the lower branching factor.
• Will the program be asked to justify its reasoning process to a user? If so, it is important to
proceed in the direction that corresponds more closely with the way the user will think.
A few examples:
• Travelling from home to an unfamiliar place.
If our starting position is home and our goal position is the unfamiliar place, we should
plan our route by reasoning backward from the unfamiliar place.
• The problem of symbolic integration.
It is better to reason forward, using the rules of integration, to try to generate an integral-free
expression, than to start with an arbitrary integral-free expression, use the rules of
differentiation, and try to generate the required integral.
• The problem of proving theorems in some particular domain of mathematics.
One possibility is to work in both directions simultaneously, forward from the start and backward from the
goal, until the two paths meet somewhere in between. This strategy is called
bidirectional search.
2) Problem Tree V/S Problem Graph
A simple way to implement any search strategy is as a tree traversal.
Each node of the tree is expanded by the production rules to generate a set of successor nodes,
each of which can in turn be expanded, continuing until a node representing a solution is
found.
This requires little bookkeeping.
However, this process often results in the same node being generated as part of several paths, and so
being processed more than once.
This happens because the problem space is really an arbitrary directed graph rather than a tree.
Two levels of a breadth-first search tree for the Water Jug problem
A search graph for the Water Jug problem
The waste of effort that arises when the same node is generated more than once can be
avoided at the price of adding bookkeeping.
This graph differs from a tree in that several paths may come together at a node.
A tree search procedure can be converted to a graph search procedure by modifying the
action performed each time a node is generated.
Instead of simply adding the node to the graph, do the following:
1. Examine the set of nodes that have been created so far to see if the new node already exists.
2. If it does not, simply add it to the graph just as for a tree.
3. If it does exist, then do the following two things:
a. Set the node that is being expanded to point to the already existing node
corresponding to its successor, rather than to the new one. The new one can simply be
thrown away.
b. If you are keeping track of the best (shortest or least-cost) path to each node, then check
to see if the new path is better than the old one. If it is worse, do nothing. If it is better, record the
new path as the correct path to use to get to the node, and propagate the corresponding change in cost
down through the successor nodes as necessary.
One problem that may arise here is that cycles (in which the same node occurs more than once) may be
introduced into the search graph.
This makes it more difficult to show that a graph traversal algorithm is guaranteed to terminate.
Graph search reduces the amount of effort spent exploring essentially the same path several times.
But it requires additional effort to check whether the same node has been generated before.
Whether this effort is justified depends on the particular problem.
3) Knowledge Representation and the Frame Problem
We describe the search process as moving around a tree or a graph, each node of which
represents a point in the problem space.
But how should we represent an individual node?
For chess we can use an array or list, each element of which contains a character representing
the piece currently occupying the location.
For the water jug problem we can simply use a pair of integers.
But representation is harder for a complex problem, such as the search space of a robot that can move itself and
other objects around in a moderately complex world.
In complex domains it is often useful to divide the representation question into three subquestions:
How can individual objects and facts be represented?
How can the representations of individual objects be combined to form a representation
of a complex problem state?
How can the sequences of problem states that arise in search process be represented
efficiently?
Ex. A robot’s world is composed of many objects.
First, discover a way to describe each individual object’s state and each fact that could be
true about relationships among objects:
ON(plant, table), UNDER(table, window) and IN(table, room)
Then find a way to combine them to form a complete description of some state of
the world.
Then figure out a way to represent sequences of states of the world in a reasonably
efficient manner.
The first two questions are usually referred to as the problem of knowledge representation.
One way of knowledge representation is predicate logic.
The third question is more important in the context of a search process.
If each node carries a complete list of all the pieces of the description of that state, we will spend all our time
creating these nodes, and
copying those facts, most of which do not change often, and
we will run out of memory quickly.
Ex. In the robot world, we would repeat ABOVE(ceiling, floor) at every node.
The whole problem of representing the facts that change as well as those that do not is called the
frame problem.
Ex. In a simple robot world, there might be a table with a plant on it, standing under a window.
Suppose we move the table to the centre of the room. We should infer that the plant is now in the
centre of the room too, but that the window is not.
One approach to changing state is simply to start with the description of the initial state and then
make changes to that description as moves are made.
This solves the problem of the wasted space and time involved in copying the information for
each node.
state(ON(plant, table), UNDER(table, window) ,IN(table, room))
state(ON(plant, table), UNDER(table, center) , IN(table, room))
But how do we know which changes in the problem state description need to be undone?
Ex. What do we have to change to undo the effect of moving the table to the centre of the
room.
There are two ways to solve this problem:
1. One Way:
a) Do not modify the initial state at all.
b) At each node store an indication of the specific changes that should be made at this node.
c) Whenever it is necessary to refer to the description of the current problem state, look at
the initial state description and also look back through all the nodes on the path from the start
state to the current state.
d) In this approach backtracking is very easy, but it makes referring to the state description
fairly complex.
2. Second Way:
a) Modify the initial state description as appropriate, but also record, at each node, an
indication of what to do to undo the move should it ever be necessary to backtrack
through the node.
b) Whenever it is necessary to backtrack, check each node along the way and perform the
indicated operations on the state description.
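The second approach can be sketched as follows, reusing the robot-world predicates from above: `apply_change` edits the single state description in place and pushes an undo record, and `backtrack` pops and reverses it.

```python
# Sketch of the second approach: modify the single state description in
# place, but record at each step how to undo the change. The predicates
# follow the robot-world example (ON, UNDER, IN).
state = {("ON", "plant", "table"),
         ("UNDER", "table", "window"),
         ("IN", "table", "room")}
undo_stack = []

def apply_change(remove, add):
    """Make one change to the state and remember how to reverse it."""
    state.discard(remove)
    state.add(add)
    undo_stack.append((remove, add))

def backtrack():
    """Reverse the most recent change."""
    remove, add = undo_stack.pop()
    state.discard(add)
    state.add(remove)

# Move the table to the centre of the room, then backtrack:
apply_change(("UNDER", "table", "window"), ("UNDER", "table", "center"))
backtrack()
```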
4) Matching
We have described so far how to carry out a “clever” search and how to represent nodes, but we
have said nothing about how to extract, from the entire collection of rules, those that can be
applied at a given point.
Doing so requires some matching between the current state and the preconditions of the
rules.
a) Indexing
One way to select applicable rules is to do a simple search through all the rules,
comparing each one’s preconditions to the current state and extracting all the ones that
match.
There are two problems with this approach:
a. For interesting problems, it will be necessary to use a large number of rules.
Scanning through all of them at each step of the search would be hopelessly inefficient.
b. It is not always immediately obvious whether or not a rule’s preconditions are satisfied
by a particular state.
Instead of searching through the rules, use the current state as an index into the rules and
select the matching ones immediately.
Ex. Chess game
White pawn at Square(file e, rank 2) AND Square(file e, rank 3) is empty AND Square(file e, rank 4) is empty
=> move the pawn from Square(file e, rank 2) to Square(file e, rank 4).
Assign an index to each board position to access the appropriate rules.
Treat the board description as large number.
Any reasonable Hash Function can be used to treat that number as an index into the rules.
All rules that describe a given board position will be stored under the same key and so will be
found.
This simple indexing scheme works only because the preconditions of the rules match exact board
configurations.
The method is easy, but at the price of a complete lack of generality in the statement of the rules.
Despite its limitations, indexing in some form is very important in the efficient operation of
rule-based systems.
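A minimal sketch of such an indexing scheme in Python: the board description is used directly as a dictionary key, so Python's built-in hashing plays the role of the hash function described above. The board string and move are invented for illustration.

```python
# Sketch of indexing: all rules for a given board position are stored
# under the same key, so matching rules are found without scanning the
# whole rule set.
rules_by_position = {}

def add_rule(board, move):
    rules_by_position.setdefault(board, []).append(move)

def applicable_rules(board):
    return rules_by_position.get(board, [])

add_rule("white pawn at e2, e3 empty, e4 empty", "move pawn e2 to e4")
```

Lookup is a single dictionary access, regardless of how many rules exist, but only states that match a stored description exactly will retrieve anything.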
b) Matching with Variables
Problems arise here, just as above with indexing, when the preconditions are not stated as
exact descriptions of particular situations,
but rather describe properties of varying complexity that the situations must have.
Ex.
SON(Mary, John)
SON(John, Bill)
SON(Bill, Tom)
SON(Bill, Joe)
DAUGHTER(John, Sue)
DAUGHTER(Sue, Judy)
• A simple kind of non-literal match that can sometimes require extensive search arises
when patterns contain variables.
Ex.
Suppose we want to answer the question “Who is John’s grandson?”
To answer this question we need a set of rules such as the following:
1. SON(x, y) ᴧ SON(y, z) → GRANDSON(x, z)
2. DAUGHTER(x, y) ᴧ SON(y, z) → GRANDSON(x, z)
3. SON(x, y) ᴧ DAUGHTER (y, z) → GRANDDAUGHTER (x, z)
To find the answer we have to use rule 1 or rule 2.
For rule 1 we must find a substitution for y such that SON(John, y) and
SON(y, z) are both true, for some value of z.
To do this we find a y that satisfies one of the predicates and see if it satisfies the other.
We will find all SONs of John and see if any of them have SONs.
Sometimes a great many values satisfy each predicate but few satisfy both.
The first issue is that it is usually important to record not only that a match was found
between a pattern and a state description, but also what bindings were performed during the
match process, so that those same bindings can be used in the action part of the rule.
Ex. The rule applier matches the preconditions of rule 1 with
x => John, y => Bill, and z => Tom,
and from the action part GRANDSON(x, z) it concludes
GRANDSON(John, Tom).
The second issue is that a single rule may match the current problem state in more than one
way, thus lead to several alternative right side actions.
Ex. Rule 1 can match either by binding z to Tom or by binding z to Joe.
Either match is sufficient if a single answer is all we need.
But both matches matter if this is an intermediate step in finding a solution, for example if we are trying to find great-grandchildren.
Thus it is important to keep in mind that the number of states that can be generated as
successors to a given state is given not just by the number of rules that can be applied, but
rather by the number of ways all of the rules can be applied.
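Rule 1 can be sketched directly over the SON facts listed above; the nested comprehension binds y internally and yields every binding of z, showing that a single rule can match in more than one way.

```python
# Rule 1, GRANDSON(x, z) <- SON(x, y) AND SON(y, z), sketched over the
# facts listed above. The comprehension records the binding of y
# internally and yields every possible binding of z.
SON = [("Mary", "John"), ("John", "Bill"), ("Bill", "Tom"), ("Bill", "Joe")]

def grandsons(x):
    """All z such that SON(x, y) and SON(y, z) hold for some y."""
    return [z for (a, y) in SON if a == x
              for (b, z) in SON if b == y]
```

For x = John this yields both Tom and Joe, matching the two bindings of z discussed above.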
5) Heuristic Function
This is a function that maps from problem state descriptions to measures of desirability,
usually represented as numbers.
Which aspects of the problem state are considered,
how those aspects are evaluated, and
the weights given to individual aspects are chosen in such a way that
The value of the heuristic function at a given node in the search process gives as good an
estimate as possible of whether that node is on the desired path to a solution.
Well designed heuristic functions can play an important part in efficiently guiding a search
process toward a solution.
Examples of simple heuristic functions:
Chess: the material advantage of our side over the opponent.
TSP: the sum of the distances travelled so far.
8-Puzzle: the number of tiles that are in the place they belong.
Tic-Tac-Toe: 1 for each row in which we could win and in which we already have one piece, plus 2
for each such row in which we have two pieces.
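The 8-puzzle heuristic above can be sketched in one function; states here are 9-tuples read row by row, 0 marks the blank, and the layouts are illustrative.

```python
# Sketch of the 8-puzzle heuristic: count the tiles that are in the
# place they belong. 0 marks the blank and is not counted as a tile.
def tiles_in_place(state, goal):
    return sum(1 for s, g in zip(state, goal) if s == g and s != 0)

goal  = (1, 2, 3, 8, 0, 4, 7, 6, 5)
state = (1, 2, 3, 8, 4, 0, 7, 6, 5)   # tile 4 swapped with the blank
```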
Weak Methods
Heuristic search is a powerful tool for the solution of difficult problems. The strategy used for controlling
such a search is often critical in determining how effective it will be in solving a particular problem. The
following general-purpose strategies, often called weak methods, will be presented:
Generate-and-test
Hill climbing
Breadth-first search
Best-first search
Problem reduction
Constraint satisfaction
Means-ends analysis
The choice of correct strategy for a particular problem depends heavily on the problem
characteristics discussed previously.
The simplest kind of problem graph used by a search procedure is the OR graph, e.g. the
Water Jug problem graph.
The first five strategies describe methods of traversing OR graphs.
For some problems, however, it is useful to allow graphs in which a single arc points to a group of
successor nodes. These graphs are called AND-OR graphs.
These general-purpose methods are not, by themselves, a panacea for the solution of difficult
problems. They do, however, often provide the framework into which the specific knowledge required for a
particular problem can be organized and exploited.
1) Generate-and-Test
Algorithm
1. Generate a possible solution. For some problems, this means generating a particular point in the
problem space. For others, it is generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of the
path to the set of acceptable goal states.
3. Quit if a solution has been found. Otherwise, return to step 1.
Acceptable for simple problems.
Inefficient for problems with a large problem space.
This algorithm is a depth-first search procedure, since a complete solution must be generated
before it can be tested.
Exhaustive generate-and-test: systematically try every point in the problem space.
Heuristic generate-and-test: do not consider paths that seem unlikely to lead to a solution.
Combined with other techniques that restrict the space in which to search even
further, the technique can be very effective.
Plan generate-test:
o Create a list of candidates.
o Apply generate-and-test to that list.
Example: coloured blocks
Heuristic: if there are more red faces than faces of other colours then, when placing a block with several red
faces, use as few of them as possible as outside faces.
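The three steps of generate-and-test fit in one short loop. The candidate generator and the goal test below are placeholders for whatever a particular problem supplies.

```python
# The three steps of generate-and-test in one loop.
def generate_and_test(candidates, is_solution):
    for candidate in candidates:      # step 1: generate a possible solution
        if is_solution(candidate):    # step 2: test it
            return candidate          # step 3: quit if a solution is found
    return None                       # space exhausted, no solution

# Toy usage: find the first non-negative integer whose square is 49.
answer = generate_and_test(range(100), lambda n: n * n == 49)
```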
2) Hill Climbing
Searching for a goal state = Climbing to the top of a hill.
Generate-and-test + a direction in which to move. In a pure generate-and-test procedure, the test function
responds only with yes or no. But if the test function is augmented with a heuristic function that
provides an estimate of how close a given state is to a goal state, the generate procedure can
exploit it, as shown in the procedure below.
Algorithm
1. Generate a first possible solution in the same way as would be done in the generate-and-test
procedure. See if it is a solution. If so, quit. Else continue.
2. From this solution, apply some number of rules to generate a new set of solutions.
3. For each element of the set do the following:
1. Send it to the test function. If it is a solution, quit.
2. If not, see if it is the closest to a solution of any of the elements tested so far. If it is,
remember it. If it is not, forget it.
4. Take the best element found above and use it as the next proposed solution. This step
corresponds to a move through the problem space in the direction that appears to be leading
most quickly towards a goal.
5. Go back to step 2.
Example: coloured blocks
Heuristic function: the sum of the number of different colours on each of the four sides (solution =
16).
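A minimal sketch of the algorithm above, assuming a neighbour generator and a heuristic value to maximize; it stops as soon as no neighbour improves on the current state, which may be only a local maximum. The toy landscape is illustrative.

```python
# Minimal hill climbing: repeatedly move to the best neighbour until no
# neighbour improves on the current state.
def hill_climb(start, neighbours, value):
    current = start
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) <= value(current):   # no uphill move remains
            return current
        current = best

# Maximize -(x - 3)^2 by unit steps, starting from 0; the peak is x = 3.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```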
This brings up an important issue in hill climbing, namely what to do if the process gets to a position that
is not a solution, but from which there is no move that improves things. This will happen if you reach any of
the following:
Local maximum
A state that is better than all of its neighbours, but not better than some other states farther away. At a local
maximum, all moves appear to make things worse. Local maxima are particularly frustrating because
they often occur almost within sight of a solution; in this case, they are called foothills.
Plateau
A flat area of the search space, in which all neighbouring states have the same value. On a plateau it is not
possible to determine the best direction in which to move by making local comparisons.
Ridge
An area of search space that is higher than the surrounding area, but that cannot be traversed by a single
move in any direction.
Ways Out
• Backtrack to some earlier node and try going in a different direction. This is a fairly good way of
dealing with local maxima.
• Make a big jump to try to get into a new section of the search space. This is a good way of dealing with
plateaux. If the only rules available describe single small steps, apply them several times in the
same direction.
• Apply two or more rules before doing the test, i.e., move in several directions at once. This is a
particularly good strategy for dealing with ridges. The orientation of the high region, compared to
the set of available moves, makes it impossible to climb up with any single move; however, two moves executed
serially may increase the height.
Comment
This method is unsuited for problems where the value of the heuristic function drops off suddenly as you
move away from a solution (the threshold effect).
A global heuristic may have to pay the price of computational complexity.
It is inefficient for problems with a large problem space, but it is often useful when combined with
other methods that get it started in the right general neighbourhood.
3) Breadth-first search
A breadth-first search procedure is guaranteed to find a solution if one exists, provided there is a finite
number of branches in the tree.
There are three main problems with breadth-first search:
It requires a lot of memory.
It requires a lot of work.
Irrelevant or redundant operators will greatly increase the number of nodes that must be explored.
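Ignoring those costs, the basic breadth-first procedure can be sketched over a small explicit graph (the graph here is invented); note that every partial path on the current frontier is held in memory at once.

```python
from collections import deque

# Minimal breadth-first search: expand the oldest path first, so the
# shallowest solution is found first.
def bfs(graph, start, goal):
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()        # oldest (shallowest) path first
        if path[-1] == goal:
            return path
        for succ in graph.get(path[-1], []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                          # goal unreachable

route = bfs({"S": ["A", "B"], "A": ["G"], "B": []}, "S", "G")
```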
4) Best-First Search
Depth-first search: not all competing branches have to be expanded.
Breadth-first search: does not get trapped on dead-end paths.
Best-first search combines the two: follow a single path at a time, but switch paths whenever some
competing path looks more promising than the current one.
Procedure:
Initialize with the first node and expand it.
Generate three new nodes. The heuristic function, which here is the estimated cost of getting to the
solution from a given node, is applied to these new nodes.
Since node D is the most promising, it is expanded next, producing two successor nodes, E and
F.
The heuristic function is applied to them. Now the path going through node B looks more
promising, so it is pursued, generating nodes G and H.
But when these new nodes are evaluated they look less promising than the other path, so
attention returns to the path through D to E. E is expanded, yielding nodes I and J. J will be
expanded next, since it is the most promising.
This process continues until a solution is found.
The A* algorithm
In some problems it is important to use a graph instead of a tree, so that duplicate paths will not be
pursued. The A* algorithm is a way to implement best-first search of a problem graph.
The algorithm operates by searching a directed graph in which each node represents a point
in the problem space. Each node contains:
A description of the problem state it represents.
A parent link that points back to the best node from which it came.
A list of the nodes that were generated from it.
The parent link makes it possible to recover the path to the goal once the goal is found.
The list of successor nodes makes it possible to propagate improvements down to the successors.
We will use two lists of nodes:
OPEN: nodes that have been generated but have not yet been examined.
This is organized as a priority queue.
CLOSED: nodes that have already been examined.
Whenever a new node is generated, we check whether it has been generated before.
Heuristic function
f(n) = g(n) + h(n)
h(n) = cost of the cheapest path from node n to a goal state.
g(n) = cost of the cheapest path from the initial state to node n. Sum of costs of the rules that
were applied along the best path to the node.
This is where domain knowledge is exploited.
The best node is the node with the smallest value of f.
S is the start node
G is the goal node
OPEN is the list that holds unexpanded node
CLOSED is the list that holds expanded node
Algorithm
1. OPEN = {S}
CLOSED = {}
2. If OPEN = {}, then halt. There is no path to the goal.
3. Remove from OPEN a node n for which f(n) <= f(m) for all nodes m on OPEN, and place n
on CLOSED.
4. Generate the successors of n, reject all successors that contain a loop, and give each remaining
node a pointer back to n.
5. If G is among the successors of n, then halt and return the path derived by tracing the pointers back
from G to S.
6. For each remaining successor n’ of n do the following:
6.1. Compute f(n’).
6.2. If n’ is not on OPEN or CLOSED,
then add it to OPEN, assign it the value f(n’), and set a pointer back from n’ to n.
6.3. If n’ is already on OPEN or CLOSED,
then compare the new value of f(n’) with the old value.
If new < old,
then substitute the new node for the old one in the list,
moving the node to OPEN if the old node was on CLOSED.
End of algorithm
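A compact sketch of the algorithm, using a heap for OPEN and a set for CLOSED. For brevity, full paths are carried with each node instead of parent links, and step 6.3 is handled implicitly by allowing duplicate entries on OPEN and skipping states already on CLOSED; the graph and heuristic values are invented.

```python
import heapq

# Compact A* over an explicit weighted graph. OPEN is a heap ordered by
# f = g + h; CLOSED is a set of already-examined states.
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]    # (f, g, state, path)
    closed = set()
    while open_list:
        f, g, state, path = heapq.heappop(open_list)
        if state == goal:
            return path, g
        if state in closed:
            continue                               # stale duplicate entry
        closed.add(state)
        for succ, cost in graph.get(state, []):
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, None                              # OPEN empty: no path

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}               # h(goal) = 0
path, cost = a_star(graph, h, "S", "G")
```

With the heuristic values above, the cheaper route through B is preferred over the route through A even though A is closer to S.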
Observations about A*
• Role of the g function: this lets us choose which node to expand next not only on the basis of
how good the node itself looks, but also on the basis of how good the path to the node was.
• h’ is the estimate of the distance from a node to the goal. If h’ is a perfect estimator of h, then A* will
converge immediately to the goal with no search.
Graceful Decay of Admissibility
• If h’ rarely overestimates h by more than δ, then the A* algorithm will rarely find a solution whose
cost is more than δ greater than the cost of the optimal solution.
• Under certain conditions, the A* algorithm can be shown to be optimal in that it generates the
fewest nodes in the process of finding a solution to a problem.
Agendas
• An agenda is a list of tasks a system could perform.
• Associated with each task there are usually two things:
– A list of reasons why the task is being proposed (its justifications).
– A rating representing the overall weight of evidence suggesting that the task would be
useful.
Algorithm: Agenda driven Search
1. Do until a goal state is reached or the agenda is empty:
a. Choose the most promising task from the agenda.
b. Execute the task by devoting to it the number of resources determined by its importance.
The important resources to consider are time and space. Executing the task will probably
generate additional tasks (successor nodes). For each of them do the following:
i. See if it is already on the agenda. If so, then see if this same reason for doing it is
already on its list of justifications. If so, ignore this current evidence. If this
justification was not already present, add it to the list. If the task was not on the
agenda, insert it.
ii. Compute the new task’s rating, combining the evidence from all its justifications.
Not all justifications need have equal weight. It is often useful to associate with
each justification a measure of how strong a reason it is. These measures are
then combined at this step to produce an overall rating for the task.
Chatbot
Person: I don’t want to read any more about China. Give me something else.
Computer: OK. What else are you interested in?
Person: How about Italy? I think I’d find Italy interesting.
Computer : What things about Italy are you interested in reading about?
Person: I think I’d like to start with its history.
Computer: Why don’t you want to read any more about China?
Example for Agenda: AM
• AM is a mathematics discovery program developed by Lenat (1977, 1982).
• AM was given a small set of starting facts about number theory and a set of operators it could use
to develop new ideas.
• These operators included such things as “Find examples of a concept you already know”.
• AM’s goal was to generate new “interesting” mathematical concepts.
• It succeeded in discovering such things as prime numbers and Goldbach’s conjecture.
• AM used a task agenda.
Problem Reduction
AND-OR graphs
• An AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by
decomposing them into a set of smaller problems, all of which must then be solved.
• One AND arc may point to any number of successor nodes, all of which must be solved in order
for the arc to point to a solution.
• FUTILITY is chosen to correspond to a threshold such that any solution with a cost above it is
too expensive to be practical, even if it could ever be found.
Algorithm : Problem Reduction
1. Initialize the graph to the starting node.
2. Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY:
a. Traverse the graph, starting at the initial node and following the current best path, and
accumulate the set of nodes that are on that path and have not yet been expanded or
labeled as solved.
b. Pick one of these nodes and expand it. If there are no successors, assign FUTILITY as the
value of this node. Otherwise, add its successors to the graph and for each of them
compute f’. If f’ of any node is 0, mark that node as SOLVED.
c. Change the f’ estimate of the newly expanded node to reflect the new information
provided by its successors. Propagate this change backward through the graph. This
propagation of revised cost estimates back up the tree was not necessary in the best-first search
algorithm because only unexpanded nodes were examined. But now expanded nodes
must be reexamined so that the best current path can be selected.
Algorithm: Constraint Satisfaction
1. Propagate available constraints. To do this first set OPEN to set of all objects that must have
values assigned to them in a complete solution. Then do until an inconsistency is detected or until
OPEN is empty:
a. Select an object OB from OPEN. Strengthen as much as possible the set of constraints
that apply to OB.
b. If this set is different from the set that was assigned the last time OB was examined or if
this is the first time OB has been examined, then add to OPEN all objects that share any
constraints with OB.
c. Remove OB from OPEN.
2. If the union of the constraints discovered above defines a solution, then quit and report the
solution.
3. If the union of the constraints discovered above defines a contradiction, then report failure.
4. If neither of the above occurs, then it is necessary to make a guess at something in order to
proceed. To do this loop until a solution is found or all possible solutions have been eliminated:
a. Select an object whose value is not yet determined and select a way of strengthening the
constraints on that object.
b. Recursively invoke constraint satisfaction with the current set of constraints augmented
by strengthening constraint just selected.
Constraint Satisfaction: Example
• Cryptarithmetic Problem:
    S E N D
  + M O R E
  ---------
  M O N E Y
Initial State:
• No two letters have the same value.
• The sums of the digits must be as shown in the problem.
Goal State:
• All letters have been assigned a digit in such a way that all the initial constraints are satisfied.
Cryptarithmetic Problem
• The solution process proceeds in cycles. At each cycle, two significant things are done:
1. Constraints are propagated by using rules that correspond to the properties of arithmetic.
2. A value is guessed for some letter whose value is not yet determined.
A few heuristics can help to select the best guess to try first:
• If there is a letter that has only two possible values and another with six possible values, there is a
better chance of guessing right on the first than on the second.
• Another useful heuristic is that if there is a letter that participates in many constraints, then it is a
good idea to prefer it to a letter that participates in few.
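A brute-force sketch of the SEND + MORE = MONEY puzzle. One constraint is propagated up front: M is the carry out of adding two four-digit numbers, so M = 1. The remaining letters are assigned by guess-and-test, which a real constraint-satisfaction solver would prune far more aggressively.

```python
from itertools import permutations

# Guess-and-test solver for SEND + MORE = MONEY, with one propagated
# constraint (M = 1) applied before the search.
def solve():
    M = 1
    letters = "SENDORY"                      # the letters besides M
    pool = [d for d in range(10) if d != M]
    for digits in permutations(pool, len(letters)):
        a = dict(zip(letters, digits))
        a["M"] = M
        if a["S"] == 0:                      # no leading zero allowed
            continue
        def word(w):
            return int("".join(str(a[c]) for c in w))
        if word("SEND") + word("MORE") == word("MONEY"):
            return a
    return None

solution = solve()
```

The unique assignment found corresponds to 9567 + 1085 = 10652.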
Solving a Crypt-arithmetic Problem
Means-Ends Analysis(MEA)
• We have presented a collection of strategies that can reason either forward or backward, but for a
given problem, one direction or the other must be chosen.
• Often a mixture of the two directions is appropriate. Such a mixed strategy would make it possible to
solve the major parts of a problem first and then go back and solve the small problems that arise
in “gluing” the big pieces together.
• The technique of means-ends analysis allows us to do that.
Algorithm: Means-Ends Analysis
• Compare CURRENT to GOAL. If there is no difference between them, then return.
• Otherwise, select the most important difference and reduce it by doing the following until
success or failure is signaled:
• Select an as yet untried operator O that is applicable to the current difference. If
there are no such operators, then signal failure.
• Attempt to apply O to CURRENT. Generate descriptions of two states: O-START, a
state in which O’s preconditions are satisfied, and O-RESULT, the state
that would result if O were applied in O-START.
• If
(FIRST-PART <- MEA(CURRENT, O-START))
and
(LAST-PART <- MEA(O-RESULT, GOAL))
are successful, then signal success and return the result of concatenating FIRST-PART, O, and LAST-PART.
MEA: Operator Subgoaling
• The MEA process centers around the detection of differences between the current state and
the goal state.
• Once such a difference is isolated, an operator that can reduce the difference must be
found.
• If the operator cannot be applied to the current state, we set up a subproblem of getting to
a state in which it can be applied.
• This kind of backward chaining, in which operators are selected and then subgoals are set
up to establish the preconditions of the operators, is called operator subgoaling.
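A toy sketch of the MEA recursion over set-of-facts states: pick an operator whose effects reduce the difference, recursively satisfy its preconditions (operator subgoaling), then recursively close the remaining difference. The robot/box domain and its operators are invented, and operators only add facts (no delete lists), to keep the toy short.

```python
# Toy means-ends analysis. States are sets of facts; each operator has a
# precondition set ("pre") and an effect set ("add").
OPS = {
    "walk-to-box": {"pre": set(), "add": {"at-box"}},
    "push-box": {"pre": {"at-box"}, "add": {"box-at-goal"}},
}

def mea(current, goal):
    """Return a plan (list of operator names) from `current` to `goal`."""
    diff = goal - current
    if not diff:
        return []
    for name, op in OPS.items():
        if op["add"] & diff:                            # O reduces the difference
            first = mea(current, current | op["pre"])   # reach O-START
            if first is None:
                continue
            result = current | op["pre"] | op["add"]    # O-RESULT
            last = mea(result, goal)
            if last is not None:
                return first + [name] + last   # FIRST-PART, O, LAST-PART
    return None                                # signal failure

plan = mea(set(), {"box-at-goal"})
```

Here "push-box" reduces the difference but its precondition "at-box" does not hold, so the subproblem of reaching "at-box" is solved first, yielding the plan walk-to-box, then push-box.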
MEA : Household Robot Application
MEA: Difference Table
MEA: Progress
Four steps for designing an AI problem-solving program:
1. Define the problem precisely. Specify the problem space, the operators for moving within
the space, and the starting and goal states.
2. Analyze the problem to determine where it falls with respect to the seven important issues.
3. Isolate and represent the task knowledge required.
4. Choose problem-solving techniques and apply them to the problem.
Analyzing Search algorithm
Read from the book.
Questions