Introduction

Advanced AI
Prof. Sarit Kraus
Bar-Ilan University
Slides adapted from David Parkes, Harvard University.
Different Goals of AI
Think like humans ("cognitive science")
- who will determine how humans think?
Think rationally (formalize the inference process)
- taking informal knowledge into formal terms
- computational resources
Act like humans (Turing test)
- failed as a goal, since it would also reproduce human weaknesses
Act rationally
- achieve goals according to beliefs
An agent and its environment
[Diagram: an agent receives percepts from the environment through its sensors and performs actions on the environment through its actuators; the '?' inside the agent box marks the agent function to be designed.]
agent: something that takes input (percepts) from its environment
through sensors and acts upon its environment using actuators.
agent function: a mapping from percept sequences to actions.
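To make this concrete, here is a minimal Python sketch of the agent abstraction; the class and method names are illustrative, not from any standard library.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps the sequence of percepts seen so far to an action."""

    def __init__(self):
        self.percepts = []  # the percept sequence observed so far

    def perceive_and_act(self, percept):
        """Record the new percept, then delegate to the agent function."""
        self.percepts.append(percept)
        return self.agent_function(self.percepts)

    @abstractmethod
    def agent_function(self, percepts):
        """Map a percept sequence to an action (defined per architecture)."""
        ...
```

Each of the architectures below can be read as a different way of implementing agent_function.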
What is an Agent?
Situated: senses and acts in dynamic/uncertain environments
Flexible: reactive (responds to changes in the environment)
Autonomous: exercises control over its own actions
Goal-oriented: purposeful
Persistent: a continuously running process
Social: interacts with other agents/people
Learning: adaptive
Mobile: able to transport itself
Examples
Medical diagnosis system
Foreign-language tutor
Web shopping program
Virtual humans for training, entertainment
Examples of how the agent function can be implemented
(from simple to more sophisticated)
1. Table-driven agent
2. Simple reflex agent
3. Reflex agent with internal state
4. Agent with explicit goals
5. Utility-based agent
6. Learning agent
Consider a Taxi Driving Agent
1. Goals: reach the correct destination, manage fuel consumption, minimize driving violations
2. Environment: roads, people, potholes, etc.
3. Actuators: gas pedal, brakes, etc.
4. Sensors: video camera, speedometer, etc.
1. Table-driven agent
An agent based on a pre-specified lookup table: it keeps track of the percept sequence and simply looks up the best action.
Disadvantages:
• Huge number of possible percept sequences
• Takes a long time to build the table
• Not adaptive
The table needs Σ_{t=1}^{T} |P|^t entries, for |P| possible percepts and lifetime T.
e.g. Taxi: at 27 MB/sec of video input for 1 hour, about 10^150 entries (c.f. roughly 10^80 atoms in the observable universe).
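A toy sketch of this architecture; the table contents, percepts, and action names below are made up for illustration.

```python
class TableDrivenAgent:
    """Looks up the entire percept sequence in a pre-specified table."""

    def __init__(self, table):
        self.table = table  # maps percept-sequence tuples to actions
        self.percepts = []  # percept sequence observed so far

    def act(self, percept):
        self.percepts.append(percept)
        # The whole history is the key, so the table needs an entry for
        # every possible sequence: sum over t of |P|^t entries.
        return self.table.get(tuple(self.percepts), "no-op")

# Usage: a toy table for a lifetime of two steps.
table = {
    ("dirty",): "vacuum",
    ("clean",): "wait",
    ("dirty", "clean"): "wait",
}
agent = TableDrivenAgent(table)
print(agent.act("dirty"))  # -> vacuum
```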
2. Simple reflex agent
[Diagram: sensors report 'what the world is like now'; condition-action rules pick 'what action I should do now'; actuators carry it out in the environment.]
Uses a set of condition-action rules and just the current percept: performs the action associated with the matching rule.
Drawbacks: the rule table can still be too large, the current percept may not be enough, and there is no goal.
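A minimal sketch, with a made-up braking rule standing in for a taxi condition-action rule.

```python
class SimpleReflexAgent:
    """Ignores history entirely: acts on the current percept alone."""

    def __init__(self, rules):
        self.rules = rules  # list of (condition, action) pairs

    def act(self, percept):
        # Fire the first rule whose condition matches the current percept.
        for condition, action in self.rules:
            if condition(percept):
                return action
        return "no-op"

# Usage: one toy taxi rule, "if the car in front is braking, then brake".
rules = [(lambda p: p.get("car_in_front_braking", False), "brake")]
agent = SimpleReflexAgent(rules)
print(agent.act({"car_in_front_braking": True}))  # -> brake
```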
3. Model-based reflex agent
[Diagram: same as above, but the agent also keeps internal State, updated via models of 'how the world evolves' and 'what my actions do', to estimate 'what the world is like now'.]
Maintains a model of the world to make up for the lack of percepts; the rule table can still be too large, and there are still no goals. Note: still not flexible, since behavior is pre-specified by the rules.
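A sketch of the same idea in code; the update function and the occluded-traffic-light example are illustrative assumptions, not from the slides.

```python
class ModelBasedReflexAgent:
    """Keeps internal state so rules can fire on an estimate of the world,
    not just the (possibly incomplete) current percept."""

    def __init__(self, rules, update_state, initial_state):
        self.rules = rules
        self.update_state = update_state  # model: (state, percept) -> state
        self.state = initial_state

    def act(self, percept):
        # Fold the new percept into the world model first...
        self.state = self.update_state(self.state, percept)
        # ...then act by reflex on the estimated state.
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "no-op"

# Usage: remember a red light even while it is briefly occluded.
def update(state, percept):
    # Keep old beliefs; overwrite only with non-missing observations.
    return {**state, **{k: v for k, v in percept.items() if v is not None}}

rules = [(lambda s: s.get("light") == "red", "stop")]
agent = ModelBasedReflexAgent(rules, update, {})
agent.act({"light": "red"})
print(agent.act({"light": None}))  # light occluded, state says red -> stop
```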
4. Model-based, goal-based agent
[Diagram: the model now also predicts 'what it will be like if I do action A'; Goals are checked against these predictions to choose 'what action I should do now'.]
Deliberative and goal-based: chooses actions with search and planning. More flexible (behavior is not pre-specified).
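As an illustration of choosing actions by search, here is a small breadth-first planner over a made-up road graph; the graph and action names are hypothetical.

```python
from collections import deque

def bfs_plan(start, goal, successors):
    """Breadth-first search: returns a list of actions from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # goal unreachable

# Usage: plan a route on a toy road graph.
roads = {"A": [("go-B", "B")], "B": [("go-C", "C"), ("go-A", "A")], "C": []}
print(bfs_plan("A", "C", lambda s: roads[s]))  # -> ['go-B', 'go-C']
```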
5. Utility-based agent
[Diagram: predictions of 'what it will be like if I do action A' are scored by Utility ('how happy I will be in such a state') to choose the action; effectors carry it out.]
A more refined measure of "good" and "bad": utility measures how happy the agent will be in a given state.
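A minimal sketch of expected-utility action selection; the outcome probabilities and utilities below are invented for illustration.

```python
def best_action(state, actions, outcomes, utility):
    """Pick the action whose predicted outcomes maximize expected utility.

    outcomes(state, action) yields (probability, next_state) pairs,
    so uncertainty about action effects is handled directly.
    """
    def expected_utility(action):
        return sum(p * utility(s2) for p, s2 in outcomes(state, action))
    return max(actions, key=expected_utility)

# Usage: a toy choice between a fast risky route and a slow safe one.
def outcomes(state, action):
    if action == "highway":
        return [(0.9, "arrived-early"), (0.1, "stuck-in-jam")]
    return [(1.0, "arrived-late")]

utility = {"arrived-early": 10, "arrived-late": 4, "stuck-in-jam": 0}.get
print(best_action("start", ["highway", "back-roads"], outcomes, utility))
# -> highway (expected utility 9 vs. 4)
```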
Properties of environments
Fully observable vs. partially observable
(observable: the complete state of the world is available to the agent)
Deterministic vs. non-deterministic (stochastic)
(deterministic: no uncertainty about the effects of actions)
Static vs. dynamic
(static: the agent need not observe the world while deliberating)
Discrete vs. continuous
(applies to state/percepts/actions/time)
Single-agent vs. multiagent
(cooperative vs. competitive)
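One way to make these properties concrete is to record them per environment; the classification of the taxi domain below is one common reading, not definitive.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    fully_observable: bool
    deterministic: bool
    static: bool
    discrete: bool
    single_agent: bool

# The taxi-driving environment under this classification:
taxi = EnvironmentProperties(
    fully_observable=False,  # the driver cannot see the whole city
    deterministic=False,     # other drivers, weather, mechanical faults
    static=False,            # traffic keeps changing while the agent thinks
    discrete=False,          # continuous positions, speeds, and time
    single_agent=False,      # other drivers and pedestrians act too
)
print(taxi)
```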