DISTRIBUTED ARTIFICIAL INTELLIGENCE
AI: 02-Agent Architectures

LECTURE 2: AGENT ARCHITECTURES
By: Adel Akbarimajd
University of Mohaghegh Ardabili
[email protected]

AGENT

An agent is an autonomous entity which perceives its environment (world) through sensors and acts upon the environment using actuators.

[Figure: the agent receives percepts from the environment through its sensors and performs actions upon the environment through its actuators.]
RATIONAL AGENTS
RATIONALITY

At time t, rationality depends on four parameters:
• Performance index
• Initial knowledge
• Available actions
• Percept sequences

A rational agent acts in such a way as to:
• Obtain the best outcome, in a deterministic environment.
• Obtain the best expected outcome, in a nondeterministic environment.
CONTENTS

• Definitions
• Architectures for intelligent agents
BASIC DEFINITIONS

Environment states S = {s1, s2, …}
• At any given instant, the environment is assumed to be in one of these states.

Set of actions A = {a1, a2, …}
• The effectoric capability of an agent is assumed to be represented by this set of actions.

Behavior of an environment
• env : S × A → ℘(S)
  (s, a) ↦ s_new ∈ env(s, a)
  where env(s, a) is the set of states that could result from performing action a in state s.

Deterministic environment
• If env(s, a) is a singleton for all pairs <s, a>, then the environment is deterministic.

Agent
• Sequences of environment states S* represent the experiences of the agent to date.
• An agent is a function that maps state sequences to actions:
  action : S* → A
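As a sketch of the env function and the determinism test, the following uses an invented two-state environment (the states, action, and transition table are illustrative assumptions, not part of the lecture):

```python
# A minimal sketch of env : S × A → ℘(S) and the singleton test for determinism.
# The two-state environment and its transition table are invented illustrations.

def env(s, a):
    # env(s, a) is the SET of states that could result from doing a in s
    table = {
        ("s0", "a1"): {"s0", "s1"},   # the action may fail: two possible outcomes
        ("s1", "a1"): {"s1"},
    }
    return table[(s, a)]

def is_deterministic(env_fn, states, actions):
    # deterministic iff env(s, a) is a singleton for every pair <s, a>
    return all(len(env_fn(s, a)) == 1 for s in states for a in actions)

print(is_deterministic(env, ["s0", "s1"], ["a1"]))  # -> False
```

Because env("s0", "a1") has two possible successor states, this environment is nondeterministic.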
BASIC DEFINITIONS (CONT.)

History
• Represents the interaction of agent and environment.
• A history h is a sequence:
  h : s0 --a0--> s1 --a1--> s2 --a2--> … --a(u-1)--> su --au--> …

Possible history
• h is a possible history of the agent in an environment iff:
  ∀u : au = action(s0, s1, …, su)
  and
  ∀u > 0 : su ∈ env(s(u-1), a(u-1))
• hist(agent, environment) denotes the set of all histories of agent in environment.

Non-terminating agents
• Agents whose interaction with their environment does not end.
• Their history is infinite.
• We are interested in non-terminating agents.

BASIC DEFINITIONS (CONT.)

Invariant property
• If some property φ holds of all possible histories of an agent, this property can be regarded as an invariant property of the agent in the environment.

Equivalency of agents ag1 and ag2
• They are said to be behaviorally equivalent with respect to environment env iff hist(ag1, env) = hist(ag2, env).
• They are (simply) behaviorally equivalent iff they are behaviorally equivalent with respect to all environments.
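The possible-history condition can be checked mechanically for a finite prefix of a history. The counter environment and the constant agent below are invented illustrations:

```python
# Sketch: checking the possible-history condition for a finite prefix
# (s0, a0, s1, a1, ...): a_u = action(s0..s_u) and s_(u+1) ∈ env(s_u, a_u).

def is_possible_history(states, actions, agent, env):
    # requires len(actions) == len(states) - 1
    for u, a in enumerate(actions):
        if a != agent(tuple(states[:u + 1])):        # a_u = action(s0, ..., s_u)
            return False
        if states[u + 1] not in env(states[u], a):   # s_(u+1) ∈ env(s_u, a_u)
            return False
    return True

env = lambda s, a: {s + 1} if a == "inc" else {s}    # deterministic counter world
agent = lambda history: "inc"                        # always chooses "inc"

print(is_possible_history([0, 1, 2], ["inc", "inc"], agent, env))  # -> True
print(is_possible_history([0, 5], ["inc"], agent, env))            # -> False
```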
ABSTRACT ARCHITECTURES

1. Simple reflex
2. Model based
3. Layered

DESIGN OF AN AGENT

Specify the internals of an agent:
• Its data structures
• The operations that may be performed on these data structures
• The control flow between these data structures

Then break the model down into sub-systems: make up subsystems according to the available data and control structures.

DESIGN OF A SIMPLE REFLEX AGENT

Separation of the agent's decision function into perception and action subsystems: see and action.

[Figure: the agent's see subsystem receives percepts from the environment; its action subsystem outputs actions to the environment.]
DESIGN OF A SIMPLE REFLEX AGENT (CONT.)

The output of the see function is a percept (a perceptual input):
  see : S → P

The action function maps sequences of percepts to actions:
  action : P* → A

Example: on-off heater
  see : environment state → temperature T (via the thermometer)
  action : if T < T0 then turn the heater on
           if T ≥ T0 then turn the heater off
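The heater can be sketched as a simple reflex agent. The setpoint value T0 and the percept encoding (a plain temperature reading) are illustrative assumptions:

```python
# A minimal sketch of the on-off heater as a simple reflex agent.

T0 = 20.0  # assumed setpoint temperature

def see(environment_state):
    # the percept is the thermometer reading of the environment state
    return environment_state["temperature"]

def action(percepts):
    # action : P* -> A; for this reflex agent only the latest percept matters
    T = percepts[-1]
    return "heater on" if T < T0 else "heater off"

history = [see({"temperature": 18.5})]
print(action(history))  # -> heater on
```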
INDISTINGUISHABLE STATES

Definition: Two different environment states that are mapped to the same percept are indistinguishable:
  s1 ≠ s2 and see(s1) = see(s2)

• Given s ∈ S and s' ∈ S, if see(s) = see(s') we write s ≡ s'.
• ≡ is an equivalence relation over environment states, which partitions S into sets of mutually indistinguishable states.

Example: vacuum cleaner environment with no C/D (clean/dirty) sensing
  see(x, y, C) = see(x, y, D)
  ≡ : {{(1,1,C), (1,1,D)}, {(1,2,C), (1,2,D)}, {(2,1,C), (2,1,D)}, {(2,2,C), (2,2,D)}}
  |≡| = 4, |S| = 8

[Figure: the 2×2 grid of squares (1,1), (1,2), (2,1), (2,2).]
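The equivalence classes of the vacuum example can be computed directly from the see function; the state encoding (x, y, status) below is an illustrative assumption:

```python
# Sketch: computing the equivalence classes induced by see for the vacuum
# world with no C/D sensing. A state is (x, y, status) with status in {C, D}.

from collections import defaultdict

S = [(x, y, st) for x in (1, 2) for y in (1, 2) for st in ("C", "D")]

def see(state):
    x, y, _status = state   # the clean/dirty component is not perceived
    return (x, y)

classes = defaultdict(set)
for s in S:
    classes[see(s)].add(s)  # states with equal percepts fall into one class

print(len(classes), len(S))  # -> 4 8, i.e. |≡| = 4 and |S| = 8
```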
INDISTINGUISHABLE STATES (CONT.)

• If |≡| = 1, then the agent's perceptual ability is non-existent.
• If |≡| = |S|, then the agent can distinguish every state; the agent has perfect perception in the environment.
• The coarser these equivalence classes are, the less effective the agent's perception is.
DESIGN OF MODEL BASED AGENTS

• Decision making of agents is influenced by history.
• The agent needs a data structure to record the information about the environment and its history.
• Model based agents are equivalent to agents with internal states.

[Figure: percepts enter through see, next updates the recorded environment state, and action selects an action.]
DESIGN OF MODEL BASED AGENTS (CONT.)

Let I be the set of all internal states of the agent. We now have the functions:
  see : S → P          (unchanged)
  next : I × P → I     (new)
  action : I → A       (domain changed from P* to I)

Operation cycle:
• The agent starts in some initial internal state i0.
• It then observes its environment state s and generates a percept see(s).
• The internal state of the agent is then updated via the next function to next(i0, see(s)).
• The action selected by the agent is then action(next(i0, see(s))).
• This action is then performed, and the agent enters another cycle.

DESIGN OF MODEL BASED AGENTS (CONT.)

Example: vacuum cleaner with no C/D sensing
  A = {Suck, Forward, Turn, Noop}
  I = {}   // i0

  see(s) = (1,1); next(i0, (1,1)) → I = {(1,1,D)};                   action(I) = Suck
  see(s) = (1,1); next(I, (1,1)) → I = {(1,1,D), (1,1,C)};           action(I) = Forward
  see(s) = (1,2); next(I, (1,2)) → I = {(1,1,D), (1,1,C), (1,2,D)};  action(I) = Suck

[Figure: three snapshots of the 2×2 grid as the agent senses, sucks, and moves.]
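The trace above can be reproduced with a small sketch. The internal state I is a set of (x, y, status) facts; recording a newly seen square as dirty and marking it clean after a Suck are assumptions made here to match the trace, not part of the formal definitions:

```python
# A minimal sketch of the model-based vacuum agent reproducing the trace above.

def see(environment_state):
    x, y = environment_state       # position only: no clean/dirty sensing
    return (x, y)

def next_state(I, percept):        # next : I × P -> I
    x, y = percept
    if (x, y, "D") in I and (x, y, "C") not in I:
        return I | {(x, y, "C")}   # dirt was recorded here and has been sucked
    if not any(fx == x and fy == y for (fx, fy, _s) in I):
        return I | {(x, y, "D")}   # assume a newly visited square is dirty
    return I

def action(I):                     # action : I -> A
    dirty = {(x, y) for (x, y, st) in I if st == "D"}
    clean = {(x, y) for (x, y, st) in I if st == "C"}
    return "Suck" if dirty - clean else "Forward"

I = set()
trace = []
for s in [(1, 1), (1, 1), (1, 2)]:
    I = next_state(I, see(s))
    trace.append(action(I))
print(trace)  # -> ['Suck', 'Forward', 'Suck']
```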
LAYERED AGENTS

In layered architectures the various subsystems are arranged into a hierarchy of interacting layers. Typically, there are two types of layers, to deal with reactive (event-driven) and pro-active (goal-directed) behaviors.

We can identify two types of control flow within layered architectures:
• Horizontal layering
• Vertical layering

LAYERED AGENTS
HORIZONTAL LAYERING

The software layers are each directly connected to the sensory input and action output. Each layer itself acts like an agent.

[Figure: the perceptual input feeds Layer 1 … Layer n in parallel; each layer produces its own action output.]

Advantages
• Conceptual simplicity
• Parallel computation
• Fault tolerant

Disadvantages
• If the agent requires n behaviors, n layers are needed.
• A mediator is required to ensure overall coherence.
• Interaction between layers: O(m^n), where m is the number of actions in each layer.
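The mediation step of horizontal layering can be sketched as follows; the two layers and the "first non-None proposal wins" priority scheme are illustrative assumptions:

```python
# Sketch of horizontal layering: every layer sees the percept and may propose
# an action; a mediator imposes coherence by taking the first proposal in a
# fixed priority order.

def avoid_layer(percept):
    return "turn" if percept.get("obstacle") else None   # reactive behavior

def explore_layer(percept):
    return "forward"                                     # default behavior

LAYERS = [avoid_layer, explore_layer]  # mediator priority: avoidance first

def mediate(percept):
    for layer in LAYERS:
        proposal = layer(percept)
        if proposal is not None:
            return proposal

print(mediate({"obstacle": True}))  # -> turn
print(mediate({}))                  # -> forward
```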
LAYERED AGENTS
VERTICAL LAYERING

Sensory input and action output are each dealt with by at most one layer each; control flow passes through all layers.

[Figure: one-pass control, where perceptual input enters the bottom layer and action output leaves the top layer; and two-pass control, where control flows up through the layers and back down to the action output.]

Advantages
• Interaction between layers: m^2(n−1)

Disadvantages
• Not fault tolerant: control flow passes through all layers.

CONCRETE ARCHITECTURES

• Logic based agents (model based)
• Reactive agents (simple reflex)
• Layered agents
• Belief-desire-intention agents (somewhat model based)
LOGIC BASED AGENTS

The "traditional" approach to building artificially intelligent systems:
• Symbolic representation of the agent's environment and its desired behavior (logical formulae).
• Syntactically manipulating this representation (logical deduction).

LOGIC BASED ARCHITECTURE (CONT.)

The internal state is a database of formulae of classical first-order predicate logic:
  Δ = {open(valve1), high_temp(reactor1), low_pressure(reactor1), …}

• An agent's database plays a somewhat analogous role to that of belief in humans.
• Just like humans, agents can be wrong: the agent's sensors may be faulty, its reasoning may be faulty, the information may be out of date, …

LOGIC BASED ARCHITECTURE (CONT.)

• Let L be the set of sentences of classical first-order logic.
• D = ℘(L) is the set of databases (the possible internal states ≡ the agent's beliefs). The members of D are Δ1, Δ2, ….
• ρ is a set of deduction rules. We write Δ ⊢ρ φ if formula φ can be proved from database Δ using the rules of ρ.

Functions:
  see : S → P          (unchanged)
  next : D × P → D     (D in place of I)
  action : D → A       (D in place of I)
EXAMPLE OF LOGIC BASED ARCHITECTURE

Example: vacuum cleaner

Percepts:
• dirt (there's dirt beneath it)
• null (no special information)

Actions:
• forward (one square)
• suck
• turn (90° right)

Start point: (0,0)

[Figure: a 3×3 grid of squares (0,0) through (2,2).]

Three domain predicates:
• In(x, y): agent is at (x, y)
• Dirt(x, y): there is dirt at (x, y)
• Facing(d): agent is facing direction d

Deduction rules have the general form:
  φ(…) → ψ(…)

The highest priority rule is:
  In(x, y) ∧ Dirt(x, y) → Do(suck)

If the conditions of this rule do not hold, then the agent traverses the room; assume a fixed order for visiting squares: (0,0), (0,1), (0,2), (1,2), (1,1), …

Rules for the traversal up to square (0,2):
  In(0,0) ∧ Facing(north) ∧ ¬Dirt(0,0) → Do(forward)
  In(0,1) ∧ Facing(north) ∧ ¬Dirt(0,1) → Do(forward)
  In(0,2) ∧ Facing(north) ∧ ¬Dirt(0,2) → Do(turn)
  In(0,2) ∧ Facing(east) → Do(forward)

SHORTCOMINGS OF LOGIC-BASED AGENTS

Calculative rationality
• Definition: An agent is said to enjoy the property of calculative rationality if and only if its decision making apparatus will suggest an action that was optimal when the decision making process began.
• Calculative rationality is clearly not acceptable in environments that change faster than the agent can make decisions.

Environment mapping
• For many environments, it is not obvious how the mapping from environment to symbolic percept might be realized.
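The decision step of the vacuum example can be sketched as prioritized rule firing. For brevity the database Δ holds ground facts and each rule is a (conditions, action) pair checked by set inclusion; this is a deliberately simplified stand-in for first-order deduction, with negative conditions such as ¬Dirt handled by absence from the database:

```python
# Sketch of the logic-based agent's decision step over a ground-fact database.

def decide(delta, rules):
    for conditions, act in rules:        # rules are listed in priority order
        if conditions <= delta:          # every condition is in the database
            return act
    return "noop"

RULES = [
    ({"In(0,0)", "Dirt(0,0)"}, "suck"),  # ground instance of the suck rule
    ({"In(0,0)", "Facing(north)"}, "forward"),
    ({"In(0,1)", "Facing(north)"}, "forward"),
    ({"In(0,2)", "Facing(north)"}, "turn"),
    ({"In(0,2)", "Facing(east)"}, "forward"),
]

print(decide({"In(0,0)", "Facing(north)", "Dirt(0,0)"}, RULES))  # -> suck
print(decide({"In(0,0)", "Facing(north)"}, RULES))               # -> forward
```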
REACTIVE AGENTS

Problem:
• Some researchers have argued that minor changes to the symbolic approach will not be sufficient to build agents that can operate in time-constrained environments.

Solution keys:
• The rejection of symbolic representations, and of decision making based on syntactic manipulation of such representations.
• The idea that intelligent, rational behavior is innately linked to the environment an agent occupies.
• The idea that intelligent behavior emerges from the interaction of various simpler behaviors.
SUBSUMPTION ARCHITECTURE

A well-known reactive architecture, introduced by Rodney Brooks.

Two major characteristics:
• Decision making is done via task accomplishing behaviors.
  • Each behavior can be considered as an independent action function: situation => action.
  • There is no logical deduction; task accomplishing blocks do not have a complex representation.
• More than one behavior can be fired simultaneously.
  • There should be a mechanism to select a behavior: the subsumption hierarchy.
  • Lower layers can inhibit upper ones; lower layers have higher priority.
  • Upper layers represent more abstract behaviors.
SUBSUMPTION ARCHITECTURE (CONT.)

• Let Beh = {(c, a) | c ⊆ P and a ∈ A} be the set of all such rules.
• R ⊆ Beh is the agent's set of behavior rules.
• The inhibition relation ≺ totally orders R.
• b1 ≺ b2 reads "b1 inhibits b2": b1 is lower in the hierarchy, so it has priority.

Example: case study on foraging robots [Drogoul and Ferber, 1992]

Constraints:
• No message exchange
• No agent maps

[Figure: foraging scenario with agents, a base, obstacles, a gradient field, and clustering of samples.]
SUBSUMPTION ARCHITECTURE (CONT.)

Behavior rules for the foraging robots, in decreasing priority (a lower rule fires only when no higher rule applies):

• if detect an obstacle then change direction.
• if carrying samples and at the base then drop samples.
• if carrying samples and not at the base then travel up gradient.
• if detect a sample then pick sample up.
• if sample sensed then move toward sample.
• if true then move randomly.

[Figure: the behaviors arranged between sensors and actuators.]
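The foraging behaviors can be sketched as an ordered list of (condition, action) rules, with the inhibition ordering realized as list order; the percept keys below are illustrative assumptions:

```python
# Sketch of subsumption action selection: the first firable rule wins,
# i.e. earlier (lower-layer) rules inhibit later ones.

RULES = [
    (lambda p: p["obstacle"], "change direction"),
    (lambda p: p["carrying"] and p["at_base"], "drop samples"),
    (lambda p: p["carrying"] and not p["at_base"], "travel up gradient"),
    (lambda p: p["sample_here"], "pick sample up"),
    (lambda p: p["sample_sensed"], "move toward sample"),
    (lambda p: True, "move randomly"),
]

def act(percept):
    for condition, action in RULES:   # priority order = list order
        if condition(percept):
            return action

p = {"obstacle": False, "carrying": True, "at_base": False,
     "sample_here": False, "sample_sensed": True}
print(act(p))  # -> travel up gradient
```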
OTHER REACTIVE ARCHITECTURES

• The agent network architecture developed by Pattie Maes
• Nilsson's teleo-reactive programs
• Rosenschein and Kaelbling's situated automata approach
• Agre and Chapman's PENGI system
• Schoppers' universal plans
• Firby's reactive action packages
BDI ARCHITECTURE

Originated from the philosophical tradition of understanding practical reasoning:
• The process of deciding, moment by moment, which action to perform in the furtherance of our goals.

Two important processes of practical reasoning:
• Deliberation: deciding what goals we want to achieve.
• Means-ends reasoning: deciding how we are going to achieve these goals.
BDI ARCHITECTURE
BELIEFS, DESIRES AND INTENTIONS

To differentiate between these three concepts, consider:
• I believe that if I apply for a Ph.D. program, I can get a Ph.D. student position.
• I desire to get a Ph.D. student position.
• I intend to apply for Ph.D. programs.

So, beliefs and desires shape the intentions that agents adopt.

INTENTIONS

Intentions play a crucial role in the practical reasoning process. The most obvious property of intentions is that they tend to lead to action.

[Figure: the intention "apply for a Ph.D. program"; the agent is expected to act on that intention, for example by filling in an application form.]
BDI ARCHITECTURE
INTENTION IN PRACTICAL REASONING

Intentions drive means-ends reasoning.
• If I fail to gain a position at a university, I might send an email or fill in a form for another university.

Intentions constrain future deliberation.
• If I intend to apply for a position, then I will not entertain options that are inconsistent with this intention (e.g. quitting my M.Sc. program).

Intentions persist.
• I will not usually give up on my intentions without good reason (successfully achieved, cannot be achieved, the desire behind the intention is no longer present).

Intentions influence the beliefs upon which future practical reasoning is based.
• If I adopt the intention to apply for a Ph.D. program, then I can plan for the future on the assumption that I will be a Ph.D. student.

BDI ARCHITECTURE
DESIGN OF PRACTICAL REASONING AGENTS

Achieving a good balance between these different concerns:
• An agent should at times drop some intentions.
• But reconsideration has a cost (in terms of both time and computational resources).

A dilemma: the tradeoff between the degree of commitment and reconsideration.
BDI ARCHITECTURE
DESIGN OF PRACTICAL REASONING AGENTS (CONT.)

• Bold agents: never stop to reconsider.
• Cautious agents: constantly stop to reconsider.

Define a parameter for the rate of world change: γ.
• If γ is low, then bold agents do well compared to cautious ones.
• If γ is high, then cautious agents tend to outperform bold agents.

BDI ARCHITECTURE
BLOCK DIAGRAM

[Figure: sensors feed brf, which updates the beliefs; generate options produces the desires; filter produces the intentions; action drives the actuators.]
BDI ARCHITECTURE
MAIN COMPONENTS OF A BDI AGENT

A belief revision function, brf:
• which takes a perceptual input and the agent's current beliefs, and on the basis of these, determines a new set of beliefs.

A set of current beliefs:
• representing the information the agent has about its current environment.

An option generation function, options:
• which determines the options available to the agent (its desires), on the basis of its current beliefs about its environment and its current intentions.

A set of current options (desires):
• representing possible courses of action available to the agent.

A filter function, filter:
• which represents the agent's deliberation process, and which determines the agent's intentions on the basis of its current beliefs, desires, and intentions.

A set of current intentions:
• representing the agent's current focus: those states of affairs that it has committed to trying to bring about.

An action selection function, execute:
• which determines an action to perform on the basis of the current intentions.
BDI ARCHITECTURE
CONFIGURATION OF BDI

Let:
• Bel be the set of all possible beliefs
• Des be the set of all possible desires
• Int be the set of all possible intentions

The state of a BDI agent at any given moment is (B, D, I), where B ⊆ Bel, D ⊆ Des, I ⊆ Int.

• Belief revision function (the next function):
  brf : ℘(Bel) × P → ℘(Bel)
• Option generation function:
  options : ℘(Bel) × ℘(Int) → ℘(Des)
• Filter function:
  filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int)
• Execution function (the action function):
  execute : ℘(Int) → A
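These four functions can be wired into a control loop, a minimal sketch of which follows. The toy brf/options/filter/execute bodies below are invented illustrations, not the definitions used in the passing-the-course example:

```python
# A minimal sketch of the BDI control loop using the signatures above.

def bdi_loop(beliefs, intentions, percepts, brf, options, filt, execute):
    actions = []
    for p in percepts:
        beliefs = brf(beliefs, p)                        # belief revision
        desires = options(beliefs, intentions)           # option generation
        intentions = filt(beliefs, desires, intentions)  # deliberation
        actions.append(execute(intentions))              # action selection
    return actions

brf = lambda B, p: B | {p}                               # believe every percept
options = lambda B, I: {"workHard"} if "courseStarted" in B else set()
filt = lambda B, D, I: I | D                             # adopt all options
execute = lambda I: min(I) if I else "noop"              # pick one intention

print(bdi_loop(set(), {"passCourse"}, ["courseStarted"],
               brf, options, filt, execute))  # -> ['passCourse']
```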
EXAMPLE: PASSING THIS COURSE

A student agent perceives the following beliefs:
  Beliefs1 = brf(∅, {workHard → passCourse,
                     attendLectures ∧ completeCoursework ∧ review → workHard})

The agent has an initial intention to pass the course:
  Intentions0 = {passCourse}

The agent's desires are freshly generated each cycle (they do not persist). The option generation function leads to desires to pass the course and its consequences:
  Desires1 = options(Beliefs1, Intentions0)
           = {workHard, attendLectures, completeCoursework, review}

The filter function leads to some new intentions being added:
  Intentions1 = filter(Beliefs1, Desires1, Intentions0)
              = {passCourse, workHard, attendLectures, completeCoursework, review}

One or more of these will then be executed before the agent's deliberation cycle recommences.

Suppose the agent perceives new information which leads to his beliefs being revised:
  Beliefs2 = brf(Beliefs1, {cheat → passCourse, cheat → ¬workHard})
           = {workHard → passCourse,
              attendLectures ∧ completeCoursework ∧ review → workHard,
              cheat → passCourse,
              cheat → ¬workHard}

The agent recomputes his current desires:
  Desires2 = options(Beliefs2, Intentions1) = {cheat}

And intentions:
  Intentions2 = filter(Beliefs2, Desires2, Intentions1) = {passCourse, cheat}

The agent drops his original intention to work hard (and its consequences) and adopts a new one to cheat.

Subsequently, the agent perceives that if caught cheating, he will no longer pass the course. What's more, he is certain to be caught:
  Beliefs3 = brf(Beliefs2, {cheat ∧ caught → ¬passCourse, caught})
           = (Beliefs2 \ {cheat → passCourse}) ∪ {cheat ∧ caught → ¬passCourse, caught}

Because the new beliefs lead to an inconsistency, the agent has had to drop his belief in cheat → passCourse.

The agent recomputes his desires and intentions:
  Desires3 = options(Beliefs3, Intentions2)
           = {workHard, attendLectures, completeCoursework, review}
  Intentions3 = filter(Beliefs3, Desires3, Intentions2)
              = {passCourse, workHard, attendLectures, completeCoursework, review}

Because it is no longer consistent to cheat (even though it may be preferable to working hard), the agent drops that intention and re-adopts workHard (and its consequences).
BDI ARCHITECTURE
TWO IMPLEMENTATION FRAMEWORKS

PRS: Procedural Reasoning System
• Paper: "An Architecture for Real-Time Reasoning and System Control" by F. Ingrand, M. Georgeff, and A. Rao.
• Published in: IEEE Expert: Intelligent Systems and Their Applications, Volume 7, Issue 6, December 1992.

JADEX BDI agent system
• An open source project at the University of Hamburg.
• http://sourceforge.net/projects/jadex