Probability for Estimation
(review)
In general, we want to develop an estimator for systems of the form:
$\dot{x} = f(x, u) + \eta(t)$
$y = h(x) + n(t)$
Given $y$, find $x$.
We will primarily focus on discrete-time linear systems:
$x_{k+1} = A_k x_k + B_k u_k + \eta_k$
$y_k = H_k x_k + n_k$
where
– $A_k$, $B_k$, $H_k$ are constant matrices
– $x_k$ is the state at time $t_k$; $u_k$ is the control at time $t_k$
– $\eta_k$, $n_k$ are "disturbances" at time $t_k$
Goal: develop a procedure to model these disturbances for estimation
โ Kolmogorov probability, based on axiomatic set theory
โ 1930โs onward
Axioms of Set-Based Probability
Probability Space:
โ Let W be a set of experimental outcomes (e.g., roll of dice)
ฮฉ = ๐ด1 , ๐ด2 , โฆ , ๐ด๐
โข the Ai are โelementary eventsโ and subsets of W are termed โeventsโ
โข Empty set {โ
} is the โimpossible eventโ
โข S={ฮฉ} is the โcertain eventโ
โ A probability space (ฮฉ, F,P)
โข F = set of subsets of W, or โeventsโ, P assigns probabilities to events
Probability of an Eventโthe Key Axioms:
โ Assign to each Ai a number, P(Ai), termed the โprobabilityโ of event Ai
โ P(Ai) must satisfy these axioms
1. P(Ai) โฅ 0
2. P(S) = 1
3. If events A,B ฯต W are โmutually exclusive,โ or disjoint, elements or
events (AโฉB= {โ
}), then
P A โช B = ๐ ๐ด + ๐(๐ต)
Axioms of Set-Based Probability
As a result of these three axioms and basic set operations (e.g., De Morgan's laws, such as $\overline{A \cup B} = \bar{A} \cap \bar{B}$):
– $P(\emptyset) = 0$
– $P(A) = 1 - P(\bar{A})$, i.e., $P(A) + P(\bar{A}) = 1$, where $\bar{A}$ is the complement of A
– If $A_1, A_2, \ldots, A_n$ are mutually disjoint,
  $P(A_1 \cup A_2 \cup \cdots \cup A_n) = P(A_1) + P(A_2) + \cdots + P(A_n)$
For Ω an infinite, but countable, set we add the "Axiom of infinite additivity":
3(b). If $A_1, A_2, \ldots$ are mutually exclusive,
  $P(A_1 \cup A_2 \cup \cdots) = P(A_1) + P(A_2) + \cdots$
We assume that all countable sets of events satisfy Axioms 1, 2, 3, 3(b)
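As a quick sanity check on a countable example, the sketch below (an illustrative helper of ours, not from the notes) assigns $P = 1/6$ to each face of a fair die and verifies non-negativity, normalization, additivity over disjoint events, and the complement rule.

```python
from fractions import Fraction

# Elementary events: faces of a fair die, each with probability 1/6.
omega = {1, 2, 3, 4, 5, 6}
P_elem = {face: Fraction(1, 6) for face in omega}

def P(event):
    """Probability of an event (a subset of omega) by finite additivity."""
    return sum(P_elem[e] for e in event)

A = {2}            # "roll a 2"
B = {4, 6}         # "roll a 4 or a 6" (disjoint from A)

assert all(p >= 0 for p in P_elem.values())        # Axiom 1
assert P(omega) == 1                                # Axiom 2
assert P(A | B) == P(A) + P(B)                      # Axiom 3 (A, B disjoint)
assert P(omega - A) == 1 - P(A)                     # complement rule
print("P(A) =", P(A), " P(A or B) =", P(A | B))
```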
But we need to model uncountable setsโฆ
Continuous Random Variables (CRVs)
Let ฮฉ = โ (an uncountable set of events)
โ Problem: it is not possible to assign probabilities to subsets of โ which
satisfy the above Axioms
โ Solution:
โข let โeventsโ be intervals of โ: A = {x | xl โค x โค xu}, and their countable
unions and intersections.
โข Assign probabilities to these events
P ๐ฅ๐ โค ๐ฅ โค ๐ฅ๐ข = ๐๐๐๐๐๐๐๐๐๐ก๐ฆ ๐กโ๐๐ก ๐ฅ ๐ก๐๐๐๐ ๐ฃ๐๐๐ข๐๐ ๐๐ [๐ฅ๐ , ๐ฅ๐ข ]
โข x is a โcontinuous random variable (CRV).
Some basic properties of CRVs
โ If x is a CRV in ๐ฟ, ๐ , then P(L โค x โค L) = 1
โ If y in ๐ฟ, ๐ , then P(L โค y โค x) = 1 - P(y โค x โค U)
Probability Density Function (pdf)
๐ฅ๐ข
๐ ๐ฅ๐ โค ๐ฅ โค ๐ฅ๐ข โก
๐ ๐ฅ ๐๐ฅ
๐ฅ๐
E.g.:
– Uniform pdf: $p(x) = \frac{1}{b-a}$ for $a \le x \le b$ (0 otherwise)
– Gaussian (Normal) pdf: $p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
  • $\mu$ = "mean" of pdf
  • $\sigma$ = "standard deviation"
[Figures: plots of the uniform and Gaussian pdfs $p(x)$]
Most of our Estimation theory will be built on the Gaussian distribution
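A minimal numerical sketch of the pdf definition, assuming a Gaussian with illustrative values $\mu = 0$, $\sigma = 1$: it approximates $P(x_l \le x \le x_u)$ by integrating the density over the interval with the trapezoid rule.

```python
import numpy as np

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian density p(x) = exp(-0.5*((x-mu)/sigma)**2) / (sigma*sqrt(2*pi))."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# P(x_l <= x <= x_u) = integral of p(x) over [x_l, x_u], here by the trapezoid rule.
x_l, x_u = -1.0, 1.0                                  # illustrative interval
grid = np.linspace(x_l, x_u, 2001)
vals = gaussian_pdf(grid)
prob = np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(grid))
print(f"P({x_l} <= x <= {x_u}) ~= {prob:.4f}")        # ~0.6827 for mu=0, sigma=1
```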
Joint & Conditional Probability
Joint Probability:
โ Countable set of events: P A โฉ B = P(A,B), probability A & B both occur
โ CRVs: let x,y be two CRVs defined on the same probability space. Their
โjoint probability density functionโ p(x,y) is defined as:
๐ฆ๐ข
๐ฅ๐ข
P ๐ฅ๐ โค ๐ฅ โค ๐ฅ๐ข ; ๐ฆ๐ โค ๐ฆ โค ๐ฆ๐ข โก
๐ ๐ฅ, ๐ฆ ๐๐ฅ ๐๐ฆ
๐ฆ๐
๐ฅ๐
โ Independence
โข A, B are independent if P(A,B) = P(A) P(B)
โข x,y are independent if p(x,y) = p(x) p(y)
Conditional Probability:
A โฉB
, probability of A given that B occurred
P B
โข E.G. probability that a โ2โ is rolled on a fair die given that we know
the roll is even:
โ Countable events: P(A|B)=
P
โ P(B) = probability of even roll = 3/6=1/2
โ P A โฉ B = 1/6 (since A โฉ B = A)
โ P(2|even roll) = P A โฉ B /P(B) = (1/6)/(1/2) = 1/3
Conditional Probability & Expectation
Conditional Probability (continued):
โ CRV: ๐ ๐ฅ ๐ฆ =
๐ ๐ฅ,๐ฆ
๐ ๐ฆ
๐๐ 0 < ๐ ๐ฆ < โ
0
๐๐กโ๐๐๐ค๐๐ ๐
โ This follows from:
P ๐ฅ๐ โค ๐ฅ โค ๐ฅ๐ข | ๐ฆ โก
๐ฅ๐ข
๐ ๐ฅ|๐ฆ ๐๐ฅ =
๐ฅ๐
โ and:
โ
๐ ๐ฅ =
๐ฅ๐ข
๐
๐ฅ๐
๐ฅ, ๐ฆ ๐๐ฅ
๐(๐ฆ)
โ
๐ ๐ฅ, ๐ฆ ๐๐ฆ =
โโ
๐ ๐ฅ|๐ฆ ๐(๐ฆ)๐๐ฆ
โโ
Expectation: (key for estimation)
โ Let x be a CRV with distrubution p(x). The expected value (or mean) of x
โ
โ
is
๐ธ[๐ฅ] =
๐ฅ๐ ๐ฅ ๐๐ฅ
๐ธ[๐(๐ฅ)] =
๐(๐ฅ)๐ ๐ฅ ๐๐ฅ
โโ
โโ
โ Conditional mean (conditional expected value) of x given event M:
โ
๐ธ[๐ฅ|๐] =
๐ฅ๐ ๐ฅ|๐ ๐๐ฅ
โโ
Expectation (continued)
Mean Square:
  $E[x^2] = \int_{-\infty}^{\infty} x^2\, p(x)\, dx$
Variance:
  $\sigma^2 = E[(x - \mu_x)^2] = \int_{-\infty}^{\infty} (x - \mu_x)^2\, p(x)\, dx$, where $\mu_x = E[x]$
Random Processes
A stochastic system whose state is characterized by a time-evolving CRV, x(t), $t \in [0, T]$.
– At each t, x(t) is a CRV
– x(t) is the "state" of the random process, which can be characterized by
  $P[x_l \le x(t) \le x_u] = \int_{x_l}^{x_u} p(x, t)\, dx$
Random Processes can also be characterized by:
– Joint probability density function
  $P[x_{1l} \le x(t_1) \le x_{1u};\ x_{2l} \le x(t_2) \le x_{2u}] = \int_{x_{2l}}^{x_{2u}} \int_{x_{1l}}^{x_{1u}} p(x_1, x_2, t_1, t_2)\, dx_1\, dx_2$
– Correlation function
  $E[x(t_1)\, x(t_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2\, p(x_1, x_2, t_1, t_2)\, dx_1\, dx_2 \equiv R(t_1, t_2)$
โ A random process x(t) is Stationary if p(x,t+t)=p(x,t) for all t
Vector Valued Random Processes
๐1 (๐ก)
โฎ
๐(๐ก) =
where each Xi(t) is a random process
๐๐ (๐ก)
R(๐ก1 , ๐ก2 ) = โCorrelation Matrixโ = ๐ธ[๐ ๐ก1 ๐ ๐ ๐ก2 ]
๐ธ[๐1 ๐ก1 ๐1 ๐ก2 ] โฏ ๐ธ[๐1 ๐ก1 ๐๐ ๐ก2 ]
โฎ
โฑ
โฎ
=
๐ธ[๐๐ ๐ก1 ๐1 ๐ก2 ] โฏ ๐ธ[๐๐ ๐ก1 ๐๐ ๐ก2 ]
โ(t) = โCovariance Matrixโ = E (๐ ๐ก โ ๐ ๐ก )(๐ ๐ก โ ๐ ๐ก )๐
Where ๐ ๐ก = ๐ธ[๐ ๐ก ]