Daniel Jahn
Department of Probability and Mathematical Statistics
Convergence of Random Variables
Primary Seminar
18 October 2016
Don’t just ask how, ask why.

Why did we define this?
Why not define it in any other way?
Why is this theorem true?
Stochastic convergence

Pointwise / Uniform
Almost everywhere / Almost uniform
On (σ-)finite spaces: Almost sure
Almost sure convergence

The sequence Xn converges almost surely to the random variable X if

P( lim_{n→∞} Xn = X ) = 1

Strong statement: a clear decision for each ω.
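To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the slides), using the strong law of large numbers as the example: running means of i.i.d. coin flips converge almost surely to 1/2, so almost every individual sample path eventually settles near 1/2.

```python
import numpy as np

# Sketch: almost sure convergence via the strong law of large numbers.
# Xbar_n = mean of n i.i.d. Bernoulli(1/2) flips converges a.s. to 1/2,
# i.e. P(lim Xbar_n = 1/2) = 1: almost every sample path settles down.

rng = np.random.default_rng(0)
n_paths, n_steps, eps = 1000, 10_000, 0.02

flips = rng.integers(0, 2, size=(n_paths, n_steps))
running_means = np.cumsum(flips, axis=1) / np.arange(1, n_steps + 1)

# For each path, check whether the tail of the sequence stays within
# eps of the limit 1/2 (a finite-horizon proxy for convergence of
# that single path).
tail = running_means[:, n_steps // 2:]
settled = np.all(np.abs(tail - 0.5) < eps, axis=1)
print(f"fraction of paths settled near 1/2: {settled.mean():.3f}")
```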
Convergence in probability

A sequence Xn converges in probability to the random variable X if

∀ε > 0:  lim_{n→∞} P(|Xn − X| > ε) = 0

An “unusual” outcome becomes increasingly improbable.

(Figure: archer analogy, shot patterns on Day 1 vs. Day 20.)
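A minimal simulation sketch of this definition (assuming the weak law of large numbers as the example sequence): for sample means of i.i.d. Uniform(0, 1) variables, the probability of landing more than ε away from the limit 1/2 shrinks as n grows.

```python
import numpy as np

# P(|Xbar_n - 1/2| > eps) -> 0 for sample means of i.i.d. Uniform(0,1):
# convergence in probability, estimated by Monte Carlo over many paths.
rng = np.random.default_rng(7)
eps, n_paths = 0.02, 10_000

for n in (10, 100, 1000):
    xbar = rng.uniform(size=(n_paths, n)).mean(axis=1)
    print(f"n={n:5d}: P(|Xbar_n - 0.5| > {eps}) ≈ "
          f"{np.mean(np.abs(xbar - 0.5) > eps):.4f}")
```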
Comparison: almost sure and in probability

(Figure: for a fixed ε > 0, the events An = [ |Xn − X| > ε ] drawn as subsets of Ω.)
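The figure’s content can be written out precisely; this standard characterization (added here, it is not spelled out on the slide) separates the two modes through the events An:

```latex
% For fixed eps > 0 let A_n = [ |X_n - X| > eps ].
\[
  X_n \xrightarrow{P} X \iff \forall \varepsilon > 0:\; P(A_n) \to 0,
  \qquad
  X_n \xrightarrow{\mathrm{a.s.}} X \iff \forall \varepsilon > 0:\;
  P\Bigl(\limsup_{n \to \infty} A_n\Bigr) = 0.
\]
```

Since P(limsup_n An) ≥ limsup_n P(An), the almost sure mode is the stronger one.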
Comparison: almost sure and in probability 2

Travelling hump: Set (Ω, A, P) = ([0, 1], B([0, 1]), λ) and define Xn = 1_{En}, n ∈ N, where:

E1 = [0, 1/2], E2 = [1/2, 1], E3 = [0, 1/3], E4 = [1/3, 2/3], . . .

Converges in probability (λ(En) behaves like 1/n).
Does not converge almost surely: every ω lies in infinitely many En, so Xn(ω) = 1 infinitely often.
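A small Python sketch of the construction (the enumeration helper hump_interval is my own, hypothetical naming): each level k splits [0, 1] into k intervals of length 1/k, so λ(En) → 0, yet every ω is covered once per level and so infinitely often.

```python
def hump_interval(n):
    """E_n in the enumeration E1=[0,1/2], E2=[1/2,1], E3=[0,1/3], ..."""
    k, start = 2, 1            # level k holds k intervals of length 1/k
    while n >= start + k:      # advance to the level containing n
        start += k
        k += 1
    i = n - start              # position of E_n within its level
    return i / k, (i + 1) / k

omega = 0.3                    # any fixed sample point
intervals = [hump_interval(n) for n in range(1, 10_000)]
hits = sum(a <= omega < b for a, b in intervals)
print(f"lambda(E_9999) = {intervals[-1][1] - intervals[-1][0]:.4f}")  # -> 0
print(f"# of n < 10000 with omega in E_n: {hits}")  # one per level, grows forever
```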
Convergence in Lp spaces

A sequence Xn converges in Lp (p ≥ 1) to the random variable X if E|Xn|^p < ∞ and

lim_{n→∞} E(|Xn − X|^p) = 0

For p = 1 this is sometimes called convergence in the mean.
For p = 2 we get mean-square convergence.

Why Lp? Because L2 is a Hilbert space!
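A standard concrete instance, added for illustration: for i.i.d. variables with mean μ and variance σ², the sample mean converges to μ in L² (and hence, by the Lp ⇒ P slide below, in probability):

```latex
\[
  \mathbb{E}\bigl|\bar{X}_n - \mu\bigr|^{2}
  = \operatorname{Var}\bigl(\bar{X}_n\bigr)
  = \frac{\sigma^{2}}{n}
  \;\xrightarrow[n \to \infty]{}\; 0 .
\]
```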
Hilbert spaces!

L2(Ω, A, P), with random variables identified when they agree almost surely:
X ∼ Y ⇐⇒ P[X = Y] = 1

Inner product ⟨X, Y⟩ = E[XY]
Norm ‖X‖ = √(E|X|²)
Convergence: ‖Xn − X‖² = E|Xn − X|² → 0

We get all results from Hilbert spaces, e.g. the Cauchy-Schwarz inequality.
Projection ↔ conditional expected value
Orthogonal ↔ uncorrelated (for centered random variables)
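A short Python sketch of the projection ↔ conditional expectation correspondence (my own illustrative setup): with Y discrete, the σ(Y)-measurable variables are spanned by the indicators 1[Y = v], and the L² projection of X onto their span is exactly the group-wise mean E[X | Y].

```python
import numpy as np

# Conditional expectation as the L2 projection: for square-integrable X,
# E[X | Y] is the closest sigma(Y)-measurable variable to X in L2 norm.
# With Y discrete, the projection is the group-wise sample mean.

rng = np.random.default_rng(1)
y = rng.integers(0, 3, size=100_000)          # Y in {0, 1, 2}
x = y + rng.normal(size=y.size)               # X = Y + noise

# Projection of X onto span{1[Y=0], 1[Y=1], 1[Y=2]}: group means.
proj = np.array([x[y == v].mean() for v in range(3)])[y]

print("group means (sample E[X|Y=v]):", np.round(np.unique(proj), 3))
# Orthogonality of the residual to every function of Y:
print("E[(X - proj) * 1[Y=1]] ≈", round(np.mean((x - proj) * (y == 1)), 5))
```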
Lp ⇒ P

We want to prove that E|Xn − X|^p → 0 gives us

∀ε > 0:  P(|Xn − X| > ε) → 0

Theorem (Markov’s inequality)
Let X be a nonnegative random variable and ε > 0; then for p > 0:

P(|X| > ε) ≤ E|X|^p / ε^p

Applied to |Xn − X|:

P(|Xn − X| > ε) ≤ E|Xn − X|^p / ε^p → 0
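A quick numeric sanity check of Markov’s inequality (illustrative only; the exponential sample is my own assumed example):

```python
import numpy as np

# Check P(|X| > eps) <= E|X|^p / eps^p on an exponential(1) sample.
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=1_000_000)

for eps in (1.0, 2.0, 4.0):
    for p in (1, 2):
        lhs = np.mean(np.abs(x) > eps)
        rhs = np.mean(np.abs(x) ** p) / eps ** p
        assert lhs <= rhs + 1e-9          # the inequality must hold
        print(f"eps={eps} p={p}: P={lhs:.4f} <= bound={rhs:.4f}")
```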
Lp isn’t implied by either a.s. or P

Set (Ω, A, P) = ([0, 1], B([0, 1]), λ) and define Xn = n · 1_{Dn}, where Dn = [0, 1/n].

P:    P(|Xn − 0| > ε) = 1/n → 0
a.s.: P(lim Xn = 0) = λ((0, 1]) = 1
Lp:   E|Xn − 0|^p = n^p · 1/n = n^{p−1} ↛ 0
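A Monte Carlo sketch of this example (sampling ω uniformly from [0, 1]): the exceedance probability vanishes while the p-th moments do not.

```python
import numpy as np

# X_n = n * 1_{[0,1/n]} on ([0,1], lambda): P(|X_n| > eps) = 1/n -> 0,
# but E|X_n| stays at 1 and E|X_n|^2 = n blows up.
rng = np.random.default_rng(3)
omega = rng.uniform(size=1_000_000)

for n in (10, 100, 1000):
    xn = n * (omega <= 1 / n)
    print(f"n={n:4d}: P(|X_n| > 0.5) = {np.mean(xn > 0.5):.4f}, "
          f"E|X_n| = {xn.mean():.3f}, E|X_n|^2 = {np.mean(xn**2):.1f}")
```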
Lp does not imply almost sure convergence

Back to the travelling hump! We already know:
Converges in probability
Does not converge almost surely
But it does converge in Lp:

E|Xn − 0|^p = 1^p · λ(En) → 0, with λ(En) behaving like 1/n
Convergence in distribution

A sequence Xn converges in distribution to the random variable X if

lim_{n→∞} Fn(x) = F(x)  ∀x ∈ R at which F is continuous,

where Fn is the distribution function of Xn and F the distribution function of X.

This mode will be treated in the second presentation (central limit theorem).

Example: X ∼ B(1/2), Xn = X ∀n ∈ N, Y = 1 − X ∼ B(1/2). Then Xn → Y in distribution, but lim Xn ≠ Y everywhere!
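A tiny Python sketch of the slide’s example: Xn and Y have identical distribution functions, so Xn → Y in distribution, even though |Xn − Y| = 1 on every single outcome.

```python
import numpy as np

# X ~ Bernoulli(1/2), X_n = X for all n, Y = 1 - X. Same law, so
# X_n -> Y in distribution, yet X_n never equals Y anywhere.
rng = np.random.default_rng(4)
x = rng.integers(0, 2, size=100_000)   # samples of X (= every X_n)
y = 1 - x                              # samples of Y

print("P(X_n = 1) =", x.mean(), " P(Y = 1) =", y.mean())  # same law
print("P(X_n = Y) =", np.mean(x == y))                    # always 0
```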
Relationships summary
We have proven that no other relationships exist!
P implies a.s. subsequence

For the last time, we use the travelling hump. Choose the subsequence of triangular numbers:

kn = n(n + 1)/2,  i.e.  kn ∈ {1, 3, 6, 10, 15, . . . }

Then the travelling hump changes to the receding hump (E_{kn} is the first interval of each level, so E_{kn} = [0, 1/(n + 1)]) and we get:

P( lim_{n→∞} Xkn = 0 ) = λ((0, 1]) = 1
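Continuing the earlier hump sketch (same hypothetical hump_interval helper as before): along the triangular indices the selected sets are exactly the leading intervals [0, 1/(n + 1)], which recede toward 0.

```python
def hump_interval(n):
    """Same enumeration as before: E1=[0,1/2], E2=[1/2,1], E3=[0,1/3], ..."""
    k, start = 2, 1
    while n >= start + k:
        start += k
        k += 1
    i = n - start
    return i / k, (i + 1) / k

# Along k_n = n(n+1)/2 the hump recedes: X_{k_n}(omega) -> 0 for every
# omega > 0, which is almost sure convergence of the subsequence.
for n in range(1, 7):
    k_n = n * (n + 1) // 2
    print(f"k_{n} = {k_n:2d}, E_(k_n) = {hump_interval(k_n)}")
```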
Travelling hump sum up

(Figure: summary of the travelling hump example across the modes of convergence.)
Continuous Mapping Theorem

Theorem (Continuous Mapping Theorem)
Let {Xn} be a sequence of k-dimensional random vectors. Let g : R^k → R^l be a function continuous on the support of X. Then

Xn → X in probability ⇒ g(Xn) → g(X) in probability,
Xn → X a.s. ⇒ g(Xn) → g(X) a.s.,
Xn → X in distribution ⇒ g(Xn) → g(X) in distribution.

If we furthermore assume g to be bounded, we get

Xn → X in Lp ⇒ g(Xn) → g(X) in Lp.

As a nice corollary we get, e.g.,

aXn + bYn → aX + bY

whenever Xn → X and Yn → Y, for a, b ∈ R, in all modes of convergence.
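An illustrative Python sketch of the theorem for the in-probability mode (a setup of my own choosing): sample means converge to μ in probability, and pushing them through the continuous map g(x) = x² preserves that.

```python
import numpy as np

# Xbar_n -> mu = 1 in probability; by the continuous mapping theorem,
# g(Xbar_n) -> g(mu) = 1 for the continuous map g(x) = x**2.
rng = np.random.default_rng(5)
mu, eps, n_paths = 1.0, 0.05, 5_000

for n in (10, 100, 1000):
    xbar = rng.exponential(scale=mu, size=(n_paths, n)).mean(axis=1)
    p_x = np.mean(np.abs(xbar - mu) > eps)
    p_gx = np.mean(np.abs(xbar**2 - mu**2) > eps)
    print(f"n={n:4d}: P(|Xbar-mu|>eps) = {p_x:.3f},  "
          f"P(|g(Xbar)-g(mu)|>eps) = {p_gx:.3f}")
```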
Convergence of random vectors

Theorem (Component-wise convergence)
Let Xn = (Xn,1, Xn,2, . . . , Xn,k)ᵀ and X = (X1, X2, . . . , Xk)ᵀ be random vectors. Then

Xn,i → Xi in probability ∀i ⇒ Xn → X in probability,
Xn,i → Xi a.s. ∀i ⇒ Xn → X a.s.

But this does not hold for convergence in distribution.
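To see why the distribution case fails, here is a standard counterexample (added for illustration, not from the slides): with Z ∼ N(0, 1), set Xn = (Z, (−1)ⁿ Z). Each component is N(0, 1) for every n, so both components converge in distribution, but the joint law alternates between two distinct distributions and the vector has no limit.

```python
import numpy as np

# X_n = (Z, (-1)**n * Z): each component is N(0,1) for every n, but the
# joint law flips between corr = +1 and corr = -1, so the vector does
# not converge in distribution.
rng = np.random.default_rng(6)
z = rng.normal(size=100_000)

for n in (1, 2, 3, 4):
    second = (-1) ** n * z
    corr = np.corrcoef(z, second)[0, 1]
    print(f"n={n}: marginal std = {second.std():.3f}, "
          f"corr(X_n1, X_n2) = {corr:+.3f}")
```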
Thank you for your attention!