CS 547: Sensing and Planning in Robotics

Used in Spring 2013
1. Examples of using probabilistic ideas in robotics
2. Reverend Bayes and review of probabilistic ideas
3. Introduction to Bayesian AI
4. Simple example of state estimation: a robot and a door to pass
5. Simple example of modeling actions
6. Bayes filters
7. Probabilistic robotics
Probabilistic Robotics: Sensing and Planning in Robotics

Examples of probabilistic ideas in robotics
Robotics Yesterday
Robotics Today
Robotics Tomorrow?
More like a human
1. Boolean logic and differential equations are the basis of classical robotics.
2. Probabilistic Bayesian methods are the mathematical foundation of human-like and future robotics.
What is robotics today?
1. Definition (Brady): Robotics is the
intelligent connection of perception and
action
• Trend toward human-like, reasoning, emotional service robots.
• Perception, action, reasoning, emotions: all need probability.
Trends in Robotics Research
Classical Robotics (mid-70’s)
• exact models
• no sensing necessary
Reactive Paradigm (mid-80’s)
• no models
• relies heavily on good sensing
Hybrids (since 90’s)
• model-based at higher levels
• reactive at lower levels
Probabilistic Robotics (since mid-90’s)
• seamless integration of models and sensing
• inaccurate models, inaccurate sensors
Robots are moving away from factory floors to entertainment, toys, personal service, medicine, surgery, industrial automation (mining, harvesting), and hazardous environments (space, underwater).
Examples of robots that need
stochastic reasoning
and
stochastic learning
Entertainment Robots: Toys
Entertainment Robots: RoboCup
Autonomous Flying Machines
Humanoids
Mobile Robots as Museum Tour Guides
Life is full of uncertainties
Tasks to be Solved by Robots
• Planning
• Perception
• Modeling
• Localization
• Interaction
• Acting
• Manipulation
• Cooperation
• Recognition of an environment that changes
• Recognition of human behavior
• Recognition of human gestures
• ...
Uncertainty is Inherent/Fundamental
• Uncertainty arises from four major factors:
1. The environment is stochastic and unpredictable
2. Robot actions are stochastic
3. Sensors are limited and noisy
4. Models are inaccurate and incomplete
Nature of Sensor Data
Odometry Data
Range Data
Main probabilistic
ideas in robotics
Probabilistic Robotics
Key idea:
Explicit representation of uncertainty
using the calculus of probability theory
• Perception = state estimation
• Action = utility optimization
Advantages and Pitfalls of Probabilistic Robotics
Advantages:
1. Can accommodate inaccurate models
2. Can accommodate imperfect sensors
3. Robust in real-world applications
4. Best known approach to many hard robotics problems
Pitfalls:
5. Computationally demanding
6. False assumptions
7. Approximate
Introduction to
“Bayesian Artificial
Intelligence”
• Reasoning under uncertainty
• Probabilities
• Bayesian approach
– Bayes’ Theorem – conditionalization
– Bayesian decision theory
Reasoning under Uncertainty
• Uncertainty – the quality or state of being
not clearly known
– distinguishes deductive knowledge from
inductive belief
• Sources of uncertainty
– Ignorance
– Complexity
– Physical randomness
– Vagueness
Reminder of Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

P(x | y) = P(y | x) P(x) / P(y)  =  likelihood · prior / evidence

Normalization

P(x | y) = P(y | x) P(x) / P(y) = η P(y | x) P(x)

η = 1 / P(y) = 1 / Σx P(y | x) P(x)

Algorithm:
∀x : aux_{x|y} = P(y | x) P(x)      (likelihood · prior)
η = 1 / Σx aux_{x|y}
∀x : P(x | y) = η aux_{x|y}
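The normalization algorithm above can be sketched directly in Python. This is a minimal illustration with hypothetical likelihood and prior values, not values from any slide:

```python
def bayes_posterior(likelihood, prior):
    """Return P(x|y) = eta * P(y|x) * P(x) for every state x."""
    aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y} = P(y|x) P(x)
    eta = 1.0 / sum(aux.values())                       # eta = 1 / sum_x aux_{x|y}
    return {x: eta * aux[x] for x in prior}             # P(x|y) = eta * aux_{x|y}

# Hypothetical two-state example
posterior = bayes_posterior({"a": 0.8, "b": 0.2}, {"a": 0.3, "b": 0.7})
```

Because η is just the reciprocal of the summed unnormalized terms, the resulting posterior always sums to 1.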
Conditional knowledge has many applications

1. Total probability:
P(x | y) = ∫ P(x | y, z) P(z | y) dz
(see the law of total probability earlier)

2. Bayes rule with background knowledge:
P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
Examples

I will present many examples of using Bayesian probability in mobile robots.
Simple Example of State Estimation
The door-opening problem

• Suppose a robot obtains measurement z
• What is P(open | z)?

What is the probability that the door is open if the measurement is z?

P(open | z) = P(z | open) P(open) / P(z)
Causal vs. Diagnostic Reasoning
• P(open | z) is diagnostic.
• P(z | open) is causal.
• Often causal knowledge is easier to obtain: count frequencies!
• Bayes rule allows us to use causal knowledge:

P(open | z) = P(z | open) P(open) / P(z)

We open the door and repeatedly measure z.
We count frequencies.
Examples of calculating probabilities for the door-opening problem

• P(z | open) = 0.6      (probability of measurement z given the door is open)
• P(z | ¬open) = 0.3     (probability of measurement z given the door is not open)
• P(open) = P(¬open) = 0.5

P(open | z) = P(z | open) P(open) / [P(z | open) P(open) + P(z | ¬open) P(¬open)]

P(open | z) = (0.6 · 0.5) / (0.6 · 0.5 + 0.3 · 0.5) = 2/3 ≈ 0.67

• z raises the probability that the door is open.
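The door calculation above is a one-liner in Python; this sketch just plugs in the slide's numbers:

```python
# Sensor model and prior from the door example
p_z_open, p_z_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

# Bayes rule with the evidence expanded by total probability
num = p_z_open * p_open
p_open_given_z = num / (num + p_z_not_open * p_not_open)
# p_open_given_z = 0.3 / 0.45 = 2/3
```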
Combining Evidence
What do we do when more information arrives?
• Suppose our robot obtains another
observation z2.
• How can we integrate this new
information?
• More generally, how can we estimate
P(x| z1...zn )?
Recursive Bayesian Updating

P(x | z1, …, zn) = P(zn | x, z1, …, zn−1) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)

Markov assumption: zn is independent of z1, …, zn−1 if we know x.

P(x | z1, …, zn) = P(zn | x) P(x | z1, …, zn−1) / P(zn | z1, …, zn−1)
                 = η P(zn | x) P(x | z1, …, zn−1)
                 = η_{1…n} [ ∏_{i=1…n} P(zi | x) ] P(x)
Example: Second measurement added in our "robot and door" problem

What do we do with a new measurement result?
• P(z2 | open) = 0.5      P(z2 | ¬open) = 0.6
• P(open | z1) = 2/3

P(open | z2, z1) = P(z2 | open) P(open | z1) / [P(z2 | open) P(open | z1) + P(z2 | ¬open) P(¬open | z1)]

P(open | z2, z1) = (1/2 · 2/3) / (1/2 · 2/3 + 3/5 · 1/3) = (1/3) / (8/15) = 5/8 = 0.625

• z2 lowers the probability that the door is open.
Even though z2 says that the door is open, our probability that it is open is now lower.
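The recursive update above reuses the previous posterior as the new prior; a minimal Python check with the slide's numbers:

```python
# Posterior from the first measurement becomes the prior for the second
prior_open = 2 / 3            # P(open | z1)
p_z2_open, p_z2_not_open = 0.5, 0.6

num = p_z2_open * prior_open
post_open = num / (num + p_z2_not_open * (1 - prior_open))
# post_open = (1/3) / (1/3 + 1/5) = 5/8 = 0.625
```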
A Typical Pitfall
• Two possible locations, x1 and x2
• P(x1) = 0.99, P(x2) = 0.01
• P(z | x1) = 0.07, P(z | x2) = 0.09

[Figure: p(x1 | d) and p(x2 | d) plotted against the number of integrations of measurement z, from 0 to 50. Although the prior strongly favors x1, repeatedly integrating the same z as if each reading were independent drives p(x2 | d) toward 1.]
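A short Python sketch of this pitfall, using the values above: treating 50 repeats of the same reading z as independent evidence flips the belief to x2 despite the overwhelming prior on x1.

```python
p = {"x1": 0.99, "x2": 0.01}      # prior strongly favors x1
lik = {"x1": 0.07, "x2": 0.09}    # z only slightly favors x2

history = []
for _ in range(50):               # integrate the SAME z 50 times
    unnorm = {x: lik[x] * p[x] for x in p}
    eta = 1.0 / sum(unnorm.values())
    p = {x: eta * unnorm[x] for x in p}
    history.append(p["x2"])
# p["x2"] starts near 0.01 but ends very close to 1
```

The lesson: recursive updating assumes conditionally independent measurements; feeding in dependent (here, identical) readings violates that assumption.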
The
behavior/recognition
model for the robot
should take into
account robot actions
Actions Change the World. How can we use this knowledge?
• Often the world is dynamic, since
– actions carried out by the robot change the world,
– actions carried out by other agents change the world,
– or just the passing of time changes the world.
• How can we incorporate such actions?
Typical Actions of a Robot
• The robot turns its wheels to move
• The robot uses its manipulator to grasp an
object
• Plants grow over time…
• Actions are never carried out with absolute
certainty.
• In contrast to measurements, actions generally
increase the uncertainty.
Modeling Actions Probabilistically
• To incorporate the outcome of an action
u into the current “belief”, we use the
conditional pdf
P(x|u,x’)
• This term specifies the pdf that
executing u changes the state from x’
to x.
pdf = probability density function
Simple example of
modeling Actions
Example: Closing the door

State transitions for the door-closing example, P(x | u, x') for u = "close door":

P(closed | close, open)   = 0.9    (closing the door succeeded)
P(open   | close, open)   = 0.1
P(closed | close, closed) = 1
P(open   | close, closed) = 0

If the door is open, the action "close door" succeeds in 90% of all cases.
Integrating the Outcome of Actions

Continuous case:   P(x | u) = ∫ P(x | u, x') P(x') dx'
Discrete case:     P(x | u) = Σ_{x'} P(x | u, x') P(x')
Example: The resulting belief for the door problem
(continuing the open/closed door example, with prior belief P(open) = 5/8, P(closed) = 3/8)

Probability that the door is closed after action u:
P(closed | u) = Σ_{x'} P(closed | u, x') P(x')
              = P(closed | u, open) P(open) + P(closed | u, closed) P(closed)
              = 9/10 · 5/8 + 1 · 3/8 = 15/16

Probability that the door is open after action u:
P(open | u) = Σ_{x'} P(open | u, x') P(x')
            = P(open | u, open) P(open) + P(open | u, closed) P(closed)
            = 1/10 · 5/8 + 0 · 3/8 = 1/16
            = 1 − P(closed | u)
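The prediction step above can be sketched as a small Python computation with the slide's transition model and prior belief:

```python
bel = {"open": 5 / 8, "closed": 3 / 8}       # belief before the action
# P(x | u = "close door", x'): outer key is the resulting state x
p_trans = {
    "closed": {"open": 0.9, "closed": 1.0},
    "open":   {"open": 0.1, "closed": 0.0},
}

# Discrete total probability: P(x | u) = sum_x' P(x | u, x') P(x')
bel_after = {x: sum(p_trans[x][xp] * bel[xp] for xp in bel) for x in bel}
# bel_after["closed"] = 0.9*5/8 + 1.0*3/8 = 15/16
# bel_after["open"]   = 0.1*5/8 + 0.0*3/8 = 1/16
```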
Concepts of
Probabilistic Robotics
1. Probabilities are the base concept
2. Bayes rule is used in most applications
3. Bayes filters are used for estimation
4. Bayes networks
5. Markov chains
6. Bayesian decision theory
7. Bayes concepts in AI
Bayes Filters:
1. Kalman Filters
2. Particle Filters
3. Other filters
Key idea of Probabilistic
Robotics repeated
Key idea:
Explicit representation of uncertainty using
the calculus of probability theory
– Perception = state estimation
– Action = utility optimization
1. Probability Calculus

Pr(U) = 1
∀X ⊆ U : Pr(X) ≥ 0
∀X, Y ⊆ U : if X ∩ Y = ∅ then Pr(X ∪ Y) = Pr(X) + Pr(Y)

Pr(X | Y) = Pr(X ∩ Y) / Pr(Y)

Independence: Pr(X | Y) = Pr(X)
2. Main ideas of using Bayes in
robotics
1. Bayes rule allows us to compute probabilities
that are hard to assess otherwise.
2. Under the Markov assumption, recursive
Bayesian updating can be used to efficiently
combine evidence.
3. Bayes filters are a probabilistic tool for
estimating the state of dynamic systems.
Bayes
Filters
3. Bayes Filters: Framework
• Given:
– Stream of observations z and action data u:
  d_t = {u1, z1, …, ut, zt}
– Sensor model P(z | x)
– Action model P(x | u, x')
– Prior probability of the system state P(x)
• Wanted:
– Estimate of the state x of the dynamical system
– The posterior of the state is also called the belief:
  Bel(xt) = P(xt | u1, z1, …, ut, zt)
Markov assumption: zn is independent of z1,...,zn-1 if we know x.
We derive the posterior
of the state
Bayes Filters: Derivation of the rule for the belief
(z = observation, u = action, x = state)

Bel(xt) = P(xt | u1, z1, …, ut, zt)

Bayes:       = η P(zt | xt, u1, z1, …, ut) P(xt | u1, z1, …, ut)
Markov:      = η P(zt | xt) P(xt | u1, z1, …, ut)
Total prob.: = η P(zt | xt) ∫ P(xt | u1, z1, …, ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
Markov:      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, ut) dxt−1
Markov:      = η P(zt | xt) ∫ P(xt | ut, xt−1) P(xt−1 | u1, z1, …, zt−1) dxt−1
             = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

We derived an important formula:

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
Now we use it in the Bayes Filter algorithm:

Algorithm Bayes_filter(Bel(x), d):
1.  η = 0
2.  if d is a perceptual data item z then
3.      for all x do                        (calculate the new belief)
4.          Bel'(x) = P(z | x) Bel(x)
5.          η = η + Bel'(x)
6.      for all x do                        (normalize over all states)
7.          Bel'(x) = η⁻¹ Bel'(x)
8.  else if d is an action data item u then
9.      for all x do                        (new belief after action u)
10.         Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
11. return Bel'(x)
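The algorithm above translates almost line for line into Python for a discrete state space. This sketch reuses the door example's sensor and action models; the particular data sequence (two identical "open" readings, then a close action) is illustrative, not from the slides.

```python
def bayes_filter(bel, d, kind, p_z, p_trans):
    """One discrete Bayes-filter step; kind is 'z' (perception) or 'u' (action)."""
    if kind == "z":
        new = {x: p_z[d][x] * bel[x] for x in bel}   # Bel'(x) = P(z|x) Bel(x)
        eta = 1.0 / sum(new.values())                # normalizer
        return {x: eta * new[x] for x in bel}
    else:                                            # prediction step for action u
        return {x: sum(p_trans[d][x][xp] * bel[xp] for xp in bel) for x in bel}

p_z = {"z_open": {"open": 0.6, "closed": 0.3}}       # sensor model P(z|x)
p_trans = {"close": {"closed": {"open": 0.9, "closed": 1.0},
                     "open":   {"open": 0.1, "closed": 0.0}}}

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "z_open", "z", p_z, p_trans)  # open: 2/3
bel = bayes_filter(bel, "z_open", "z", p_z, p_trans)  # open: 0.8
bel = bayes_filter(bel, "close", "u", p_z, p_trans)   # open: 0.08, closed: 0.92
```

Measurement steps sharpen the belief and renormalize; the action step spreads it through the transition model, exactly as in the derived formula.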
Bayes Filters are the foundation of many methods!

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1

• Kalman filters
• Particle filters
• Hidden Markov models (HMMs)
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)
4. Bayesian Networks and Markov Models: main concepts
• Bayesian AI
• Bayesian networks
• Decision networks
• Reasoning about changes over time
– Dynamic Bayesian networks
– Markov models
5. Markov Assumption and HMMs

p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)           (outputs depend only on the current state)
p(xt | x1:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)   (states depend only on the previous state and control)
Underlying Assumptions of HMMs
• Static world
• Independent noise
• Perfect model, no approximation errors
6. Bayesian Decision Theory
1. Frank Ramsey (1926)
2. Decision making under uncertainty – what
action to take when the state of the world is
unknown
3. Bayesian answer: find the utility of each possible outcome (action-state pair), and take the action that maximizes expected utility.
Bayesian Decision Theory: Example

Action          | Rain (p = 0.4) | Shine (1 − p = 0.6)
Take umbrella   | 30             | 10
Leave umbrella  | −100           | 50

Expected utilities:
E(Take umbrella)  = 30 · 0.4 + 10 · 0.6 = 18
E(Leave umbrella) = −100 · 0.4 + 50 · 0.6 = −10
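The expected-utility calculation above generalizes to any action/state table; a minimal Python sketch using the slide's numbers:

```python
# Utility of each (action, state) outcome from the umbrella example
utilities = {("take", "rain"): 30,   ("take", "shine"): 10,
             ("leave", "rain"): -100, ("leave", "shine"): 50}
p = {"rain": 0.4, "shine": 0.6}

# E(a) = sum_s p(s) * U(a, s); pick the action maximizing expected utility
expected = {a: sum(p[s] * utilities[(a, s)] for s in p)
            for a in ("take", "leave")}
best = max(expected, key=expected.get)
# expected: take = 18, leave = -10; best action is "take"
```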
(The story of a friend of mine who wanted to decide about getting married scientifically.)
7. Bayesian Conception of an AI
1. An autonomous agent that
   1. has a utility structure (preferences), and
   2. can learn about its world and the relationship (probabilities) between its actions and future states,
   and maximizes its expected utility.
2. The techniques used to learn about the world are mainly statistical
Data mining
Conclusion on
Bayesian AI
• Reasoning under uncertainty
• Probabilities
• Bayesian approach
– Bayes’ Theorem – conditionalization
– Bayesian decision theory
Summary
• Bayes rule allows us to compute
probabilities that are hard to assess
otherwise.
• Under the Markov assumption, recursive
Bayesian updating can be used to
efficiently combine evidence.
• Bayes filters are a probabilistic tool for
estimating the state of dynamic systems.
Sources
Gaurav S. Sukhatme
Computer Science
Robotics Research Laboratory
University of Southern California