State Estimation and Kalman Filtering

Zeeshan Ali Sayyed
What is State Estimation?
We need to estimate the state of not just the robot itself, but also of objects moving in the robot’s environment: for instance, other cars, people, deer, etc.
• Localization
• Tracking
Why do we need it?
• The world is stochastic, not deterministic.
• There are errors in the motors or transition mechanism of the robot.
• There are errors in the sensors on the robot.
• Sometimes we also need to predict future states so as to plan accordingly, for instance applying the brakes if we are about to collide with another car.
What is Localization?
• Imagine a robot in a simple world.
• The robot doesn’t know where it is in the world frame
of reference.
• Estimating the position and state of the robot in this world, using the limited information available to it, is called Localization.
• Localization is a form of State Estimation where we
estimate the state of the robot in the given world.
Example of Localization
Belief of a Robot
• What is belief?
• How do we represent it?
• How do we start when we have absolutely no information?
• How do we update belief?
How do we start?
• Uniform Distribution – this shows we have absolutely no information about the location of the robot.
Quiz
• There are 4 possible places where the robot can
be. What is the probability that the robot is in
the 3rd place, given absolutely no other
information?
Incorporating Sensor Measurements
• The belief after we incorporate the sensor measurements is
called Posterior Belief.
How do we do that in practice?
• There are a variety of techniques for incorporating sensor input into
our belief.
• The simplest one is a simple product.
• For instance, consider the following world with a uniform prior belief:
[0.2, 0.2, 0.2, 0.2, 0.2]
• Let’s say the robot observes Yellow. What do we do?
[?, ?, ?, ?, ?]
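As a sketch of this product update in Python (the cell colors [green, yellow, yellow, green, green] and the likelihoods 0.6 / 0.2 here are illustrative assumptions, not values from the slides):

```python
# Discrete Bayes measurement update: multiply the prior belief by the
# likelihood of the observation in each cell, then normalize.
world = ["green", "yellow", "yellow", "green", "green"]  # assumed map
prior = [0.2, 0.2, 0.2, 0.2, 0.2]                        # uniform belief

p_hit, p_miss = 0.6, 0.2  # assumed sensor likelihoods

def sense(belief, observation):
    # Unnormalized posterior: prior * likelihood, cell by cell
    posterior = [b * (p_hit if color == observation else p_miss)
                 for b, color in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]  # normalize to sum to 1

posterior = sense(prior, "yellow")
print(posterior)
```

With these assumed values the two yellow cells end up with belief 1/3 each and the green cells 1/9 each, so the robot now believes it is most likely at a yellow cell.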
Incorporating Transition of Robot
• This is technically called Convolution.
How do we do that in practice?
Assume a cyclic world. What happens, say, if the robot moves 2 steps forward?
Before: [0.2, 0.5, 0.1, 0.1, 0.1]
After: [0.1, 0.1, 0.2, 0.5, 0.1]
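The cyclic shift can be sketched in a few lines of Python:

```python
# Exact (noise-free) motion in a cyclic world: moving k steps forward
# cyclically shifts the belief vector by k positions.
belief = [0.2, 0.5, 0.1, 0.1, 0.1]

def move(belief, k):
    n = len(belief)
    # The mass that was at cell (i - k) mod n arrives at cell i
    return [belief[(i - k) % n] for i in range(n)]

after = move(belief, 2)
print(after)  # → [0.1, 0.1, 0.2, 0.5, 0.1]
```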
Final Localization
• This technique is referred to as Monte Carlo Localization
Modelling Noisy Sensors
• Sensors are not accurate. To model the error, we use probabilistic models.
• For example, if the sensor reports a door, we do not trust it completely. How do we quantify this? We do it using our model. For example:
• p(Door = Present | Sensor = True) = 0.6
• p(Door = Present | Sensor = False) = 0.2
• What happens now? Prior: [0.2, 0.2, 0.2, 0.2, 0.2]
• Multiply and normalize!
Modelling Noisy Transition
Belief before the move: [0.2, 0.5, 0.1, 0.1, 0.1]
• For example:
• p(actual steps = n + 1 | commanded steps = n) = 0.2
• p(actual steps = n | commanded steps = n) = 0.6
• p(actual steps = n − 1 | commanded steps = n) = 0.2
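The noisy move is then a convolution of the belief with this three-point kernel; a minimal Python sketch (cyclic world, 2 commanded steps, and the belief vector from above):

```python
# Noisy motion update: convolve the belief with the motion-error kernel.
# A commanded move of k steps actually lands at k-1, k, or k+1 steps
# with probabilities 0.2 / 0.6 / 0.2.
belief = [0.2, 0.5, 0.1, 0.1, 0.1]
p_under, p_exact, p_over = 0.2, 0.6, 0.2

def move_noisy(belief, k):
    n = len(belief)
    new = [0.0] * n
    for i in range(n):
        # Probability mass arriving at cell i from the three possible moves
        new[i] = (p_exact * belief[(i - k) % n]
                  + p_over * belief[(i - k - 1) % n]
                  + p_under * belief[(i - k + 1) % n])
    return new

after = move_noisy(belief, 2)
print(after)
```

Note that the belief still sums to 1, but it is more spread out than before the move: motion adds uncertainty.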
Representation of things we have learned
• State – X
• Measurement – z
• Control Actions – u
• Time – t
• What is X_t, z_t, or u_t?
• Belief – bel(X_t)
• Sensor model – p(z_t | x_t)
• Transition model – p(x_t | x_{t−1}, u_t)
Introducing Kalman Filters
• Kalman Filters are used for both Localization and Tracking.
• It is very similar to Monte Carlo Localization.
• It is one of the most popular state estimation techniques in use, not only in robotics but in many other fields.
• It deals in Continuous State Spaces (what do they mean?).
Gaussian
$f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$

Notation: $N(\mu, \sigma^2)$
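A quick numerical check of this density (a minimal Python sketch):

```python
import math

# The univariate Gaussian density N(mu, sigma^2) from the formula above.
def gaussian(x, mu, sigma):
    return (1.0 / (sigma * math.sqrt(2 * math.pi))) * math.exp(
        -((x - mu) ** 2) / (2 * sigma ** 2))

# The density peaks at the mean, and its area is 1
peak = gaussian(0.0, 0.0, 1.0)  # 1/sqrt(2*pi), about 0.3989
area = sum(gaussian(x / 1000, 0, 1) * 0.001 for x in range(-5000, 5000))
print(peak, area)
```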
Comparison of Means and Variances
Representing Belief and Measurement
• The belief and the sensor measurement are both represented by Gaussians.
• A Gaussian with high variance implies uncertainty; one with low variance implies certainty.
• Example on board
Kalman Filter Algorithm
• Incorporate Sensor Measurements (Bayes Rule)
• Incorporate Transition Update (Total Probability)
• The filter alternates between the Measurement Update and the Transition Update.
Incorporating Sensor Measurements
• Can you say anything about the posterior?
Multiplication of two Gaussians
$X_1 \sim N(\mu_1, \sigma_1^2), \quad X_2 \sim N(\mu_2, \sigma_2^2)$

$\Rightarrow\; p(X_1)\, p(X_2) \sim N\!\left(\frac{\sigma_2^2 \mu_1 + \sigma_1^2 \mu_2}{\sigma_1^2 + \sigma_2^2},\; \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}\right)$
Addition of two Gaussians
$X_1 \sim N(\mu_1, \sigma_1^2), \quad X_2 \sim N(\mu_2, \sigma_2^2)$

$\Rightarrow\; X_1 + X_2 \sim N(\mu_1 + \mu_2,\; \sigma_1^2 + \sigma_2^2)$
Incorporating Transition Update
• When we move, we tend to lose information. Therefore, the variance
of the belief increases.
• Simply add the two Gaussians using the previous formula.
• That’s the Kalman Filter for a simple one-dimensional case!
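Putting the two updates together gives the one-dimensional filter; a minimal Python sketch, where the measurement and motion noise variances are illustrative assumptions:

```python
# One-dimensional Kalman filter: alternate the measurement update
# (Gaussian multiplication) and the motion update (Gaussian addition).
def update(mu, var, z, z_var):
    # Measurement update: multiply belief Gaussian by measurement Gaussian
    new_mu = (z_var * mu + var * z) / (var + z_var)
    new_var = (var * z_var) / (var + z_var)
    return new_mu, new_var

def predict(mu, var, u, u_var):
    # Motion update: add the motion Gaussian; the variance grows
    return mu + u, var + u_var

# Illustrative run: interleaved measurements and motions
mu, var = 0.0, 10000.0          # very uncertain initial belief
measurements = [5.0, 6.0, 7.0]
motions = [1.0, 1.0, 1.0]
for z, u in zip(measurements, motions):
    mu, var = update(mu, var, z, 4.0)   # measurement noise variance 4
    mu, var = predict(mu, var, u, 2.0)  # motion noise variance 2
print(mu, var)
```

After the final move the estimate sits near 8 (last measurement 7 plus a move of 1), and the variance reflects the tug-of-war between informative measurements and noisy motion.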
Generalized Kalman Filter
• We assume linear transition and observation (sensing) models.
$bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$

$x_t = A_t x_{t-1} + B_t u_t + \varepsilon_t, \qquad p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$

$z_t = C_t x_t + \delta_t, \qquad p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$
Kalman Filter Algorithm
1. Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
2. Prediction:
3. $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
4. $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
5. Correction:
6. $K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}$
7. $\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)$
8. $\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t$
9. Return $\mu_t$, $\Sigma_t$
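A sketch of this algorithm in NumPy; the constant-velocity model and the noise matrices in the example are illustrative assumptions, not part of the slides:

```python
import numpy as np

# One multivariate Kalman filter step, following the prediction /
# correction structure of the algorithm above.
def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Toy example: track a 1-D position/velocity state from position readings
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity dynamics
B = np.zeros((2, 1))                   # no control input
C = np.array([[1.0, 0.0]])            # we only measure position
R = 0.01 * np.eye(2)                   # process noise covariance
Q = np.array([[1.0]])                  # measurement noise covariance

mu = np.zeros(2)
Sigma = 1000.0 * np.eye(2)             # nearly uninformative prior
for z in [1.0, 2.0, 3.0]:
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]),
                              A, B, C, R, Q)
print(mu)  # position and velocity estimates move toward 3.0 and 1.0
```

Even though only position is measured, the filter infers the velocity from the correlation the dynamics matrix A introduces between the two state components.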
Kalman Filter Summary
• Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: $O(k^{2.376} + n^2)$
• Optimal for linear Gaussian systems!
• Most robotics systems are nonlinear!