Intelligent Agents

Intelligent Systems
Lecture 2 - Intelligent Agents
In which we discuss the nature of agents, the diversity of
environments, and the resulting menagerie of agent types.
Dr. Igor Trajkovski
Outline
♦ Agents and environments
♦ Rationality
♦ PEAS (Performance measure, Environment, Actuators, Sensors)
♦ Environment types
♦ Agent types
Agents and environments
[Figure: an agent and its environment. Sensors deliver percepts to the agent; the agent's actuators deliver actions back to the environment. The "?" inside the agent is the agent program we must design.]
Agents include humans, robots, softbots, thermostats, etc.
The agent function maps from percept histories to actions:
f : P∗ → A
The agent program runs on the physical architecture to produce f
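As a rough illustration (not part of the original slides), the distinction can be sketched in Python: the agent function is the abstract mapping from percept histories to actions, while the agent program is the concrete procedure that accumulates percepts and produces actions. The names below are illustrative only.

class AgentProgram:
    """Minimal sketch: the program runs on the architecture and realizes
    the agent function f : P* -> A by remembering the percept history."""
    def __init__(self, choose_action):
        self.percepts = []                  # percept history to date
        self.choose_action = choose_action  # any mapping from histories to actions

    def __call__(self, percept):
        self.percepts.append(percept)       # record the newest percept
        return self.choose_action(self.percepts)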
Vacuum-cleaner world
[Figure: the two-square vacuum world, with squares A and B, each of which may contain dirt.]
Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp
A vacuum-cleaner agent
Percept sequence                      Action
[A, Clean]                            Right
[A, Dirty]                            Suck
[B, Clean]                            Left
[B, Dirty]                            Suck
[A, Clean], [A, Clean]                Right
[A, Clean], [A, Dirty]                Suck
...                                   ...
function Reflex-Vacuum-Agent( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
What is the right function?
Can it be implemented in a small agent program?
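As a sanity check on the second question, here is a direct Python rendering of the pseudocode above (a minimal sketch; the location and action names are the ones used on the slide). Note that it ignores the percept history entirely, which is exactly why the program can be so small.

def reflex_vacuum_agent(percept):
    """percept is a pair [location, status], e.g. ['A', 'Dirty']."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

# reflex_vacuum_agent(['A', 'Dirty'])  ->  'Suck'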
Rationality
Fixed performance measure evaluates the environment sequence
– one point per square cleaned up in time T ?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date
Rational ≠ all-knowing
– percepts may not supply all relevant information
Rational ≠ foreseeing the future
– action outcomes may not be as expected
Hence, rational ≠ successful
Rational ⇒ exploration, learning, autonomy
Rationality (cont.)
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure??
Environment??
Actuators??
Sensors??
PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure?? safety, destination, profits, legality, comfort, . . .
Environment?? US streets/freeways, traffic, pedestrians, weather, . . .
Actuators?? steering, accelerator, brake, horn, speaker/display, . . .
Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, . . .
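One way (not prescribed by the slides) to keep a PEAS description machine-readable is as a small record; the field values below simply restate the taxi example above.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance_measure=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors", "keyboard", "GPS"],
)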
Internet shopping agent
Performance measure??
Environment??
Actuators??
Sensors??
Internet shopping agent
Performance measure?? price, quality, appropriateness, efficiency
Environment?? current and future WWW sites, vendors, shippers
Actuators?? display to user, follow URL, fill in form
Sensors?? HTML pages (text, graphics, scripts)
Environment types
The range of task environments that might arise in AI is obviously vast.
We can, however, identify a fairly small number of dimensions along which
task environments can be categorized.
• Fully observable vs. partially observable
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
Environment types (cont.)
Consider four example task environments: Solitaire, Backgammon, Internet shopping, and an automated taxi. For each, ask:
Observable??
Deterministic??
Episodic??
Static??
Discrete??
Single-agent??
Environment types

                     Solitaire   Backgammon   Internet shopping       Taxi
Observable??         Yes         Yes          No                      No
Deterministic??      Yes         No           Partly                  No
Episodic??           No          No           No                      No
Static??             Yes         Semi         Semi                    No
Discrete??           Yes         Yes          Yes                     No
Single-agent??       Yes         No           Yes (except auctions)   No
The environment type largely determines the agent design
The real world is (of course) partially observable, stochastic, sequential,
dynamic, continuous, multi-agent
Agent types
The job of AI is to design the agent program that implements the agent
function mapping percepts to actions.
We assume this program will run on some sort of computing device with
physical sensors and actuators-we call this the architecture:
agent = architecture + program
Table driven agents
function TABLE-DRIVEN-AGENT( percept) returns action
static: percepts, a sequence, initially empty
table, a table, indexed by percept sequences, initially fully specified
append percept to the end of percepts
action ← LOOKUP( percepts, table)
return action
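A literal Python sketch of this pseudocode (assuming percepts are hashable values so that the history can index a dictionary; all names here are illustrative):

percepts = []   # the percept sequence, initially empty
table = {}      # maps percept sequences (as tuples) to actions; assumed fully specified

def table_driven_agent(percept):
    percepts.append(percept)              # append percept to the end of percepts
    return table.get(tuple(percepts))     # LOOKUP(percepts, table)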
• Consider the automated taxi: the visual input from a single camera comes
in at the rate of roughly 27 megabytes per second (30 frames per second,
640 x 480 pixels with 24 bits of color information; the arithmetic is spelled
out below). This gives a lookup table with over 10^250,000,000,000 entries
for an hour's driving.
• Even the lookup table for chess - a tiny, well-behaved fragment of the
real world - would have at least 10^150 entries.
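The 27 megabytes per second figure is simply the product of the numbers quoted above:

# 30 frames per second x 640 x 480 pixels x 3 bytes (24 bits) of colour per pixel
bytes_per_second = 30 * 640 * 480 * 3     # = 27,648,000 bytes, roughly 27 MB per second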
Table driven agents - problems
The daunting size of these tables (the number of atoms in the observable
universe is less than 10^80) means that
• no physical agent in this universe will have the space to store the table
• the designer would not have time to create the table
• no agent could ever learn all the right table entries from its experience
• even if the environment is simple enough to yield a feasible table size, the
designer still has no guidance about how to fill in the table entries
Agent types
Four basic types in order of increasing generality:
– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents
All these can be turned into learning agents
Simple reflex agents
[Figure: a simple reflex agent. Sensors tell the agent what the world is like now; condition-action rules then determine what action it should do now, which the actuators carry out in the environment.]
Example
function Reflex-Vacuum-Agent( [location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
or
if car-in-front-is-braking then initiate-braking
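More generally, a simple reflex agent can be written as a small rule interpreter. The sketch below (not from the slides) represents each condition-action rule as a predicate paired with an action; the percept format, a dictionary of boolean features, is an assumption made for illustration.

# Each condition-action rule: (predicate over the current percept, action).
rules = [
    (lambda percept: percept.get("car_in_front_is_braking"), "initiate-braking"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:
        if condition(percept):   # the first rule whose condition matches fires
            return action
    return "NoOp"                # no rule matched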
Reflex agents with state
[Figure: a reflex agent with state. The agent combines the current percept with internal state, kept up to date using knowledge of how the world evolves and what its actions do, to estimate what the world is like now; condition-action rules then pick what action it should do now.]
Example
function Reflex-Vacuum-Agent( [location,status]) returns an action
static: last A, last B, numbers, initially ∞
if status = Dirty then . . .
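The pseudocode is deliberately left unfinished on the slide. Purely as an illustration of "reflex agent with state", one possible completion uses a simpler state representation than the counters hinted at above: it remembers which squares are believed clean and stops moving once both are. This is an assumption, not the official solution, and it assumes dirt never reappears and that Suck always succeeds.

known_clean = {"A": False, "B": False}   # internal state: squares believed clean

def stateful_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        known_clean[location] = True     # it will be clean once we suck
        return "Suck"
    known_clean[location] = True
    if known_clean["A"] and known_clean["B"]:
        return "NoOp"                    # both squares believed clean: do nothing
    return "Right" if location == "A" else "Left"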
Goal-based agents
[Figure: a goal-based agent. Internal state plus models of how the world evolves and what the agent's actions do let it predict what it will be like if it does action A; its goals then determine what action it should do now.]
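A goal-based agent needs a model to answer "what will it be like if I do action A". A minimal sketch, assuming a one-step lookahead and illustrative names:

def goal_based_agent(state, actions, result, goal_test):
    """state: current estimate of what the world is like now;
    result(s, a): model of what it will be like after doing action a;
    goal_test(s): does state s satisfy the goals?"""
    for action in actions:
        if goal_test(result(state, action)):   # one-step lookahead only
            return action
    return None   # no single action achieves the goal; a real agent would search or plan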
Utility-based agents
[Figure: a utility-based agent. As in the goal-based agent, the agent predicts what it will be like if it does action A; a utility function then rates how happy it will be in each resulting state, and the agent chooses the action it expects to lead to the best state.]
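Replacing the binary goal test with a utility function gives the utility-based variant of the previous sketch; again a one-step, illustrative version:

def utility_based_agent(state, actions, result, utility):
    """Pick the action whose predicted resulting state the agent likes best."""
    return max(actions, key=lambda a: utility(result(state, a)))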
Learning agents
[Figure: a learning agent. A critic compares the agent's behaviour, as seen through the sensors, against a fixed performance standard and sends feedback to the learning element; the learning element makes changes to the performance element (which selects external actions) and passes learning goals to the problem generator, which suggests exploratory actions.]
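The four components can be sketched as a simple loop. The interfaces below are assumptions chosen only to mirror the figure, not a prescribed design:

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes changes to the performance element
        self.critic = critic                            # judges behaviour against the performance standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def __call__(self, percept):
        feedback = self.critic(percept)                             # how well are we doing?
        self.learning_element(self.performance_element, feedback)   # improve the performance element
        exploratory = self.problem_generator(percept)               # maybe try something informative
        return exploratory if exploratory is not None else self.performance_element(percept)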
Summary
Agents interact with environments through actuators and sensors
The agent function describes what the agent does in all circumstances
The performance measure evaluates the environment sequence
A perfectly rational agent maximizes expected performance
Agent programs implement (some) agent functions
PEAS descriptions define task environments
Environments are categorized along several dimensions:
observable? deterministic? episodic? static? discrete? single-agent?
Several basic agent architectures exist:
reflex, reflex with state, goal-based, utility-based