Lecture 10.
Today: Cramer-Rao lower bound

Suppose we have several unbiased estimators of θ, say θ̂1, θ̂2, θ̂3, …
It can be difficult to compare Var(θ̂i) directly when there are many of them.
Theorem. Cramer-Rao Lower Bound
Let Y1 , Y2 , … , Yn be a random sample from a population with p.d.f.
f(y; θ). Let θ̂ = h(y1 , y2 , … , yn ) be an unbiased estimator of θ.
Given some regularity conditions (continuity, differentiability, etc.), and assuming the domain of f(y; θ) does not depend on θ, we have

Var(θ̂) ≥ 1 / ( n E[(∂ln f(Y; θ)/∂θ)²] ) = 1 / ( −n E[∂²ln f(Y; θ)/∂θ²] )
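As a minimal Monte Carlo sketch of the theorem (the Poisson(λ) population and all numeric values here are assumptions for illustration, not from the lecture), we can check that the two forms of the denominator agree, since for Poisson the score is y/λ − 1 and the second derivative of the log-density is −y/λ²:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 3.0, 50, 200_000          # assumed values for illustration

y = rng.poisson(lam, size=reps)
score = y / lam - 1.0                    # ∂/∂λ ln f(y; λ) for Poisson
second = -y / lam**2                     # ∂²/∂λ² ln f(y; λ)

e_sq = np.mean(score**2)                 # ≈ E[(∂ln f/∂λ)²] = 1/λ
neg_e2 = -np.mean(second)                # ≈ −E[∂²ln f/∂λ²] = 1/λ
crlb = 1.0 / (n * e_sq)                  # ≈ λ/n, the C-R lower bound

print(e_sq, neg_e2, crlb)
```

Here the bound λ/n is exactly Var(Ȳ) for a Poisson sample, so the sample mean attains the bound in this family.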
Theorem. Properties of the MLE
Let Yi ~ i.i.d. f(y; θ), i = 1, 2, …, n, and let θ̂ be the MLE of θ. Then, as n → ∞, θ̂ converges in distribution to

N( θ, 1 / ( n E[(∂ln f(Y; θ)/∂θ)²] ) )
That is, the MLE is asymptotically unbiased, and its asymptotic variance attains the C-R lower bound.
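A hypothetical numeric check of this property (the Exponential(λ) family, sample size, and seed are my assumptions, not from the lecture): for Exp(rate λ), ln f = ln λ − λy, so E[(∂ln f/∂λ)²] = 1/λ² and the asymptotic variance of the MLE λ̂ = 1/Ȳ is λ²/n.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 2.0, 500, 20_000            # assumed values for illustration

samples = rng.exponential(scale=1 / lam, size=(reps, n))
mle = 1.0 / samples.mean(axis=1)           # λ̂ = 1/Ȳ for each replicate

print(mle.mean())                          # ≈ λ (asymptotically unbiased)
print(mle.var() * n)                       # ≈ λ² = n times the C-R bound
```

With n = 500 the small finite-sample bias of 1/Ȳ (its exact mean is nλ/(n−1)) is already negligible.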
Definition. Fisher Information.
I(θ) = E[(∂ln f(Y; θ)/∂θ)²] = −E[∂²ln f(Y; θ)/∂θ²]

With this notation, the C-R lower bound can be written as 1 / (n I(θ)).
Definition. The Efficiency of an unbiased estimator Y is:

e(Y) = CRLB / Var(Y)
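As a hypothetical illustration of efficiency (the Normal example and all numbers are my assumptions, not from the lecture): for estimating the mean of a Normal(μ, 1) population, the CRLB is 1/n, while the sample median has asymptotic variance π/(2n), giving efficiency about 2/π ≈ 0.64.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n, reps = 0.0, 201, 40_000             # assumed values for illustration

x = rng.normal(mu, 1.0, size=(reps, n))
med = np.median(x, axis=1)                 # an unbiased competitor to the mean

crlb = 1.0 / n                             # Fisher information per observation is 1
eff = crlb / med.var()

print(eff)                                 # ≈ 2/π ≈ 0.64
```

The sample mean, by contrast, has variance exactly 1/n here, so its efficiency is 1.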
Harald Cramér was born in Stockholm, Sweden on September 25, 1893, and died there on October 25, 1985. (wiki) Calyampudi Radhakrishna Rao, FRS, known as C. R. Rao (born 10 September 1920), is an Indian statistician. He is professor emeritus at Penn State University and Research Professor at the University at Buffalo. Rao was awarded the US National Medal of Science in 2002. (wiki)
Example 1. Let Y1, Y2, …, Yn ~ i.i.d. Bernoulli(p).
1. What is the MLE of p?
2. What are the mean and variance of the MLE of p?
3. What is the Cramer-Rao lower bound for an unbiased estimator of p?
Solution. P(Y = y) = f(y; p) = py (1 − p)1−y , y = 0,1
1.
L = ∏_{i=1}^n f(yi; p) = ∏_{i=1}^n p^{yi}(1 − p)^{1−yi} = p^{∑yi}(1 − p)^{n−∑yi}
l = ln L = (∑yi) ln p + (n − ∑yi) ln(1 − p)
Solving

dl/dp = ∑yi/p − (n − ∑yi)/(1 − p) = 0,

we have the MLE:

p̂ = (∑_{i=1}^n Yi)/n
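A small numeric sanity check of this derivation (the toy data below are hypothetical, not from the lecture): evaluating the log-likelihood l(p) on a fine grid shows it is maximized at p̂ = ∑yi/n.

```python
import numpy as np

y = np.array([1, 0, 1, 1, 0, 1, 0, 1])     # hypothetical toy sample, n = 8
n, s = len(y), y.sum()                      # s = Σyi = 5, so p̂ should be 0.625

grid = np.linspace(0.001, 0.999, 999)       # candidate values of p
ll = s * np.log(grid) + (n - s) * np.log(1 - grid)   # l(p) from the derivation
p_hat_grid = grid[ll.argmax()]              # grid maximizer of the log-likelihood

print(p_hat_grid)                           # ≈ s/n = 0.625
```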
2.
E(p̂) = p,  Var(p̂) = p(1 − p)/n
3.
ln f(y; p) = y ln p + (1 − y) ln(1 − p)
∂ln f(y; p)/∂p = y/p − (1 − y)/(1 − p)
∂²ln f(y; p)/∂p² = −y/p² − (1 − y)/(1 − p)²
E[−Y/p² − (1 − Y)/(1 − p)²] = −1/p − 1/(1 − p) = −1/(p(1 − p))
C-R lower bound:

Var(p̂) ≥ 1 / ( −n E[∂²ln f(Y; p)/∂p²] ) = p(1 − p)/n
Thus, the MLE of p is unbiased and its variance attains the C-R lower bound.
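A quick Monte Carlo confirmation of this conclusion (the values p = 0.3 and n = 40 are assumptions for illustration): Var(p̂) should match the bound p(1 − p)/n.

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, reps = 0.3, 40, 100_000              # assumed values for illustration

phat = rng.binomial(n, p, size=reps) / n   # each draw is Σyi/n for one sample
crlb = p * (1 - p) / n                     # C-R lower bound = 0.00525 here

print(phat.mean())                         # ≈ p
print(phat.var())                          # ≈ crlb
```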
Definition. Efficient Estimator
If θ̂ is an unbiased estimator of θ and its variance equals the C-R lower bound, then θ̂ is an efficient estimator of θ.
Definition. Best Estimator (UMVUE, or, MVUE)
If θ̂ is an unbiased estimator of θ and Var(θ̂) ≤ Var(θ̃) for every unbiased estimator θ̃, then θ̂ is a best estimator for θ.
The best estimator is also called the uniformly minimum variance
unbiased estimator (UMVUE), or simply, the minimum variance
unbiased estimator (MVUE).
Efficient Estimator ⇒ Best Estimator (always true)
Best Estimator ⇒ Efficient Estimator (may not be true)
Example 2. If Y1, Y2, …, Yn is a random sample from f(y; θ) = 2y/θ², 0 < y < θ, then θ̂ = (3/2)Ȳ is an unbiased estimator for θ.
Compute 1. Var(θ̂) and 2. the C-R lower bound for fY(y; θ).
Solution.
1.
Var(θ̂) = Var((3/2)Ȳ) = (9/4) Var( (1/n) ∑_{i=1}^n Yi ) = (9/(4n²)) ∑_{i=1}^n Var(Yi)

Var(Yi) = E(Yi²) − [E(Yi)]² = ∫_0^θ y²(2y/θ²) dy − [∫_0^θ y(2y/θ²) dy]² = θ²/2 − (2θ/3)² = θ²/18

Therefore, Var(θ̂) = (9/(4n²)) · n · (θ²/18) = θ²/(8n)
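A Monte Carlo sketch of this calculation (the values θ = 2 and n = 25 are assumptions for illustration): since F(y) = y²/θ² on 0 < y < θ, we can simulate Y as θ√U with U ~ Uniform(0, 1), then check that θ̂ = (3/2)Ȳ is unbiased with variance θ²/(8n).

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n, reps = 2.0, 25, 100_000          # assumed values for illustration

y = theta * np.sqrt(rng.uniform(size=(reps, n)))   # inverse-CDF draws from 2y/θ²
that = 1.5 * y.mean(axis=1)                        # θ̂ = (3/2)Ȳ for each replicate

print(that.mean())                         # ≈ θ
print(that.var())                          # ≈ θ²/(8n) = 0.02 here
```

Note that θ²/(8n) is smaller than the θ²/(4n) computed in part 2 below, which previews the point of this example.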
2. C-R lower bound

ln fY(y; θ) = ln(2y/θ²) = ln 2y − 2 ln θ
∂ln fY(y; θ)/∂θ = −2/θ
E[(∂ln fY(y; θ)/∂θ)²] = E(4/θ²) = ∫_0^θ (4/θ²)(2y/θ²) dy = 4/θ²

1 / ( n E[(∂ln fY(y; θ)/∂θ)²] ) = θ²/(4n)
The value in 1, θ²/(8n), is less than the value in 2, θ²/(4n). But this is NOT a contradiction of the theorem, because the domain, 0 < y < θ, depends on θ; thus the C-R theorem does not apply to this problem.