2014 Jan Qualification Exam and Solution (Theory Part)
1. The joint density function of X and Y is given by
f(x, y) = \frac{e^{-y x^2/2}}{\sqrt{2\pi/y}} \, y e^{-y}, \qquad -\infty < x < \infty, \; y > 0.
(a) Find the conditional density fX|Y (x|y) of X given Y = y.
(b) Compute E(X|Y ).
(c) Compute Var(X|Y ).
(d) Compute Var(X).
2. Suppose that X and Y are independent exp(λ) random variables with density
f(x) = \frac{1}{\lambda} e^{-x/\lambda}, \qquad x > 0.
(a) Show that the sum X + Y and the ratio X/Y are independent.
(b) Let Z = \frac{X}{X+Y}. Show that for 0 < z < 1, F_Z(z) = P(Z \le z) = z, i.e. the random variable Z is uniformly distributed over (0, 1).
3. Let X1 , X2 , · · · , Xn be independent Gaussian random variables, having mean µ and
variance σ². Define the sequence by the recurrence
Y_0 = x \in \mathbb{R}, \qquad Y_{n+1} = \lambda Y_n + X_{n+1},
for some λ ∈ (−1, 1). Prove that the sequence {Yn }n∈N converges in distribution and
determine the limiting distribution.
4. Let X1 , · · · , Xn be a random sample from a distribution with the pdf given by
f(x|\theta) = \theta^{-c} c x^{c-1} e^{-(x/\theta)^c} I(x > 0),
where c > 0 is a known constant.
(a) Find the maximum likelihood estimator (MLE) of θ.
(b) Find the uniformly minimum variance unbiased estimator (UMVUE) of θ.
(c) Find the uniformly most powerful (UMP) test of size α for testing
H0 : θ ≤ θ0 , vs H1 : θ > θ0 ,
where θ0 is a positive constant.
5. Let X1 , · · · , Xn be a random sample of i.i.d. observations drawn from the following
probability density function (pdf)
f(x|\theta) = \theta^{-1} x^{(1-\theta)/\theta} I(0 \le x \le 1), \qquad \theta > 0.
(a) Show that T(X) = -2 \sum_{i=1}^{n} \log(X_i) is a minimal sufficient statistic for θ.
(b) Find the distribution of Y = −2 log X1 .
(c) Find a two-sided 95% confidence interval for θ based on T .
(d) Argue or prove that the expected length of the confidence interval obtained in
part (c) converges to zero as n → ∞.
6. For regression data \{(x_i, Y_i)\}_{i=1}^{n} assume the model
Y_i \sim \text{Poisson}(x_i \beta), \qquad i = 1, \dots, n, \text{ independent},
where x_1, \dots, x_n are strictly positive and known constants. Let \mathbf{Y} = (Y_1, \dots, Y_n), \bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i, and \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.
(a) Show that the MLE of β is β̂ = Ȳ /x̄.
(b) Compute the mean and variance of β̂.
(c) Now assume that β has a gamma prior distribution β ∼ Γ(wb0 , 1/w), where b0 is
our prior best guess and w > 0 is a weight attached to this guess. To be specific,
β has the prior density
\pi(\beta \mid w, b_0) = \frac{w^{wb_0}}{\Gamma(wb_0)} \beta^{wb_0 - 1} \exp(-w\beta).
Find the posterior density of β given Y.
(d) Show that the posterior mean of β is the weighted average of the prior mean and
the MLE. What does the posterior mean converge to when the weight w → 0?
Solutions:
1. (a) By inspection,
f_Y(y) = y e^{-y}, \quad y > 0,
and
f_{X|Y}(x|y) = \frac{e^{-y x^2/2}}{\sqrt{2\pi/y}}.
Therefore X|Y ∼ N (0, 1/y).
(b) E[X|Y ] = 0.
(c) Var[X|Y ] = 1/Y .
(d)
\text{Var}(X) = E[\text{Var}(X|Y)] + \text{Var}[E(X|Y)]
= E(1/Y) + 0 = \int_0^\infty \frac{1}{y} \, y e^{-y} \, dy
= -e^{-y} \Big|_0^\infty = 1.
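As a numerical check of part (d), the hierarchy can be simulated directly: f_Y(y) = y e^{-y} is the Gamma(2, 1) density, and X | Y = y ~ N(0, 1/y). A minimal sketch in Python (numpy assumed; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# f_Y(y) = y * exp(-y) is the Gamma(shape=2, scale=1) density
y = rng.gamma(shape=2.0, scale=1.0, size=n)
# X | Y = y ~ N(0, 1/y)
x = rng.normal(loc=0.0, scale=1.0 / np.sqrt(y))
print(x.var())  # should be close to 1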
2. (a) Let U = X + Y and V = X/Y. Solving for X and Y gives
x = \frac{uv}{v+1}, \qquad y = \frac{u}{v+1}.
The Jacobian matrix is
J = \begin{pmatrix} \dfrac{v}{v+1} & \dfrac{u}{(v+1)^2} \\[4pt] \dfrac{1}{v+1} & -\dfrac{u}{(v+1)^2} \end{pmatrix},
with determinant \dfrac{-u(v+1)}{(v+1)^3} = \dfrac{-u}{(v+1)^2}, hence |J| = \dfrac{u}{(v+1)^2}. Therefore
f_{U,V}(u, v) = \frac{1}{\lambda^2} e^{-u/\lambda} \cdot \frac{u}{(v+1)^2} = \left( \frac{1}{\lambda^2} \, u e^{-u/\lambda} \right) \cdot \frac{1}{(v+1)^2}.
Therefore, the joint density function of U and V can be written as the product
of a function of U and a function of V . Thus U and V are independent.
(b) For 0 < z < 1,
P(Z \le z) = P\left( \frac{X}{X+Y} \le z \right) = P(X \le zX + zY)
= P\big( X(1-z)/z \le Y \big)
= \iint_{x(1-z)/z \le y} \frac{1}{\lambda^2} e^{-(x+y)/\lambda} \, dx \, dy
= \int_0^\infty \frac{1}{\lambda} e^{-x/\lambda} \left( \int_{x(1-z)/z}^{\infty} \frac{1}{\lambda} e^{-y/\lambda} \, dy \right) dx
= \int_0^\infty \frac{1}{\lambda} e^{-x(1-z)/(\lambda z)} e^{-x/\lambda} \, dx
= \int_0^\infty \frac{1}{\lambda} e^{-x/(\lambda z)} \, dx = -z e^{-x/(\lambda z)} \Big|_0^\infty = z.
Therefore Z follows a uniform over (0, 1) distribution.
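This can also be checked by simulation; the sketch below (numpy assumed) uses an arbitrary λ = 2, since the distribution of Z does not depend on λ:

import numpy as np

rng = np.random.default_rng(1)
lam = 2.0  # arbitrary scale; the distribution of Z does not depend on it
x = rng.exponential(scale=lam, size=500_000)
y = rng.exponential(scale=lam, size=500_000)
z = x / (x + y)
# the empirical CDF of Z at z0 should be close to z0 itself
for z0 in (0.1, 0.25, 0.5, 0.9):
    print(z0, (z <= z0).mean())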
3. Since each Y_n is a linear combination of independent Gaussians plus a constant, Y_n is normally distributed, so its MGF is determined by its mean and variance. To show convergence in distribution, first note that
Y_0 = x, \qquad Y_n = \lambda^n x + \sum_{i=1}^{n} \lambda^{n-i} X_i,
with mean
\lambda^n x + \mu \sum_{i=1}^{n} \lambda^{n-i} = \lambda^n x + \mu \frac{1 - \lambda^n}{1 - \lambda} \to \frac{\mu}{1 - \lambda},
and variance
\sigma^2 \sum_{i=1}^{n} \lambda^{2(n-i)} = \sigma^2 \frac{1 - \lambda^{2n}}{1 - \lambda^2} \to \frac{\sigma^2}{1 - \lambda^2}.
Since these parameters converge, the MGF of Y_n converges to the MGF of a normal distribution with mean \mu/(1-\lambda) and variance \sigma^2/(1-\lambda^2), i.e. Y_n \xrightarrow{D} N\left( \frac{\mu}{1-\lambda}, \frac{\sigma^2}{1-\lambda^2} \right).
4. (a) The log-likelihood is
L(\theta) = -nc \log\theta + n \log c + (c-1) \sum_{i=1}^{n} \log x_i - \frac{\sum_{i=1}^{n} x_i^c}{\theta^c}.
Taking the derivative with respect to θ and setting it to zero,
\frac{dL}{d\theta} = -\frac{nc}{\theta} + \frac{c \sum_{i=1}^{n} x_i^c}{\theta^{c+1}} = 0.
This leads to \hat\theta = \left( \frac{\sum_{i=1}^{n} x_i^c}{n} \right)^{1/c}. Checking the second derivative, \frac{d^2 L}{d\theta^2}\Big|_{\hat\theta} < 0, so this is a maximum. Thus \hat\theta_{MLE} = (T/n)^{1/c}, where T = \sum_{i=1}^{n} x_i^c.
(b) This is a one-parameter exponential family, with T = \sum_{i=1}^{n} X_i^c being the complete and sufficient statistic. Define Y_i = X_i^c for i = 1, \dots, n. Using this transformation, the pdf of Y_i is f(y|\theta) = \frac{1}{\theta^c} \exp(-y/\theta^c), so Y_i = X_i^c \sim \exp(\theta^c) and T \sim \text{Gamma}(n, \theta^c). Then
E(\hat\theta_{MLE}) = \int_0^\infty \left( \frac{t}{n} \right)^{1/c} \frac{1}{\Gamma(n)\theta^{cn}} \, t^{n-1} e^{-t/\theta^c} \, dt = \frac{\Gamma(n + \frac{1}{c})}{\Gamma(n) \, n^{1/c}} \, \theta.
The UMVUE of θ is therefore \frac{\Gamma(n) \, n^{1/c}}{\Gamma(n + \frac{1}{c})} \hat\theta_{MLE}.
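Both the bias factor and the unbiasedness of the corrected estimator can be verified by simulation, since f(x|θ) is the Weibull density with shape c and scale θ. A sketch (numpy and scipy assumed; n, c, θ are illustrative):

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
n, c, theta = 5, 2.0, 1.5                    # illustrative values
reps = 200_000
# f(x|theta) is a Weibull density with shape c and scale theta
x = theta * rng.weibull(c, size=(reps, n))
t = (x**c).sum(axis=1)
mle = (t / n) ** (1.0 / c)
# correction factor Gamma(n) n^{1/c} / Gamma(n + 1/c)
corr = np.exp(gammaln(n) - gammaln(n + 1.0 / c)) * n ** (1.0 / c)
print(mle.mean())           # close to theta * Gamma(n + 1/c) / (Gamma(n) n^{1/c})
print((corr * mle).mean())  # close to theta: the UMVUE is unbiased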
(c) For \theta_2 > \theta_1 > 0, the density ratio
\frac{f(x|\theta_2)}{f(x|\theta_1)} = \left( \frac{\theta_1}{\theta_2} \right)^{nc} \exp\left\{ \left( \frac{1}{\theta_1^c} - \frac{1}{\theta_2^c} \right) \sum_{i=1}^{n} x_i^c \right\}
is an increasing function of T, so the distribution family of X has an MLR in T. By the Karlin-Rubin theorem, the UMP test of size α is: reject H_0 if T(X) > t_0, where t_0 satisfies P_{\theta_0}(T > t_0) = \alpha. Note that when \theta = \theta_0, \; 2T/\theta_0^c \sim \chi^2_{2n}. Then
P(T > t_0) = P\left( \frac{2T}{\theta_0^c} > \frac{2t_0}{\theta_0^c} \right) = P\left( \chi^2_{2n} > \frac{2t_0}{\theta_0^c} \right) = \alpha.
So t_0 = \chi^2_{2n,\alpha} \, \theta_0^c / 2.
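Numerically, t_0 is just a scaled upper-α chi-squared quantile; for example (scipy assumed; n, c, θ_0, α are illustrative):

from scipy.stats import chi2

n, c, theta0, alpha = 10, 2.0, 1.5, 0.05     # illustrative values
# chi2_{2n, alpha} is the upper-alpha quantile of the chi-squared(2n) distribution
t0 = chi2.ppf(1 - alpha, df=2 * n) * theta0**c / 2
print(t0)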
5. (a) This is a one-parameter exponential family, so T(X) = -2 \sum_{i=1}^{n} \log(X_i) is a sufficient statistic for θ. Furthermore,
\frac{f(x|\theta)}{f(y|\theta)} = \frac{I(0 \le x_{(1)} \le x_{(n)} \le 1)}{I(0 \le y_{(1)} \le y_{(n)} \le 1)} \exp\left\{ \frac{1-\theta}{\theta} \left( \sum_{i=1}^{n} \log x_i - \sum_{i=1}^{n} \log y_i \right) \right\}.
The ratio is free of θ if and only if \sum_{i=1}^{n} \log x_i = \sum_{i=1}^{n} \log y_i. So T is minimal and sufficient.
(b) Using the variable transformation, the pdf of Y = -2 \log X_1 is
f(y|\theta) = \frac{1}{2\theta} \exp\left( -\frac{y}{2\theta} \right), \qquad y > 0,
so Y \sim \exp(2\theta).
(c) Based on part (b), we have T = \sum_{i=1}^{n} Y_i \sim \Gamma(n, 2\theta). Consider the pivotal quantity T/\theta \sim \chi^2_{2n}. Then
P\left( \chi^2_{2n, 0.975} \le \frac{T}{\theta} \le \chi^2_{2n, 0.025} \right) = 0.95,
which implies
P\left( \frac{T}{\chi^2_{2n, 0.025}} \le \theta \le \frac{T}{\chi^2_{2n, 0.975}} \right) = 0.95,
so a two-sided 95% confidence interval for θ is \left( \frac{T}{\chi^2_{2n, 0.025}}, \; \frac{T}{\chi^2_{2n, 0.975}} \right).
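Given data, the interval is computed directly from chi-squared quantiles. A sketch (numpy and scipy assumed; the sample is simulated with an illustrative θ = 0.8, using the fact from part (b) that −log X_i is exponential with mean θ):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, theta = 50, 0.8                           # illustrative values
# X = exp(-theta * E) with E ~ exp(1) has pdf theta^{-1} x^{(1-theta)/theta} on (0, 1)
x = np.exp(-theta * rng.exponential(size=n))
t = -2 * np.log(x).sum()
lower = t / chi2.ppf(0.975, df=2 * n)        # chi2_{2n, 0.025}, the upper 2.5% point
upper = t / chi2.ppf(0.025, df=2 * n)        # chi2_{2n, 0.975}, the upper 97.5% point
print(lower, upper)                          # covers theta about 95% of the time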
(d) The expected length of the confidence interval obtained in part (c) is
E\left( \frac{T}{\chi^2_{2n, 0.975}} - \frac{T}{\chi^2_{2n, 0.025}} \right) = 2n\theta \left( \frac{1}{\chi^2_{2n, 0.975}} - \frac{1}{\chi^2_{2n, 0.025}} \right).
Using the normal approximation to the chi-squared quantiles,
\frac{\chi^2_{2n, 0.025}}{n} \approx \frac{2n + 1.96\sqrt{4n}}{n} \to 2
and
\frac{\chi^2_{2n, 0.975}}{n} \approx \frac{2n - 1.96\sqrt{4n}}{n} \to 2,
so n/\chi^2_{2n, 0.975} - n/\chi^2_{2n, 0.025} \to 0 and the expected length 2\theta\left( n/\chi^2_{2n, 0.975} - n/\chi^2_{2n, 0.025} \right) converges to zero as n \to \infty.
6. (a) The log-likelihood is
L(\beta) = -\beta \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i \log(\beta x_i) - \sum_{i=1}^{n} \log(y_i!).
Setting \frac{dL}{d\beta} = -n\bar{x} + n\bar{y}/\beta to zero gives \hat\beta = \bar{Y}/\bar{x}. Since \frac{d^2 L}{d\beta^2}\Big|_{\hat\beta} < 0, this is a maximum, so \hat\beta_{MLE} = \bar{Y}/\bar{x}.
(b) Compute
E(\hat\beta_{MLE}) = \frac{E(\bar{Y})}{\bar{x}} = \frac{\beta \bar{x}}{\bar{x}} = \beta,
and, by independence,
\text{Var}(\hat\beta_{MLE}) = \frac{\text{Var}(\bar{Y})}{\bar{x}^2} = \frac{\sum_{i=1}^{n} \text{Var}(Y_i)}{n^2 \bar{x}^2} = \frac{\beta \sum_{i=1}^{n} x_i}{n^2 \bar{x}^2} = \frac{\beta}{n\bar{x}}.
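These moments are easy to confirm by simulation; the covariates and β below are illustrative:

import numpy as np

rng = np.random.default_rng(5)
beta = 1.3                                   # illustrative true value
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])      # illustrative covariates
reps = 200_000
y = rng.poisson(beta * x, size=(reps, x.size))
beta_hat = y.mean(axis=1) / x.mean()
print(beta_hat.mean(), beta)                         # close to beta
print(beta_hat.var(), beta / (x.size * x.mean()))    # close to beta / (n * x_bar)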
(c) Now assume that β has a gamma prior distribution β ∼ Γ(wb_0, 1/w), where b_0 is our prior best guess and w > 0 is a weight attached to this guess. To be specific, β has the prior density
\pi(\beta \mid w, b_0) = \frac{w^{wb_0}}{\Gamma(wb_0)} \beta^{wb_0 - 1} \exp(-w\beta).
Then the posterior density of β given \mathbf{Y} is
\pi(\beta \mid \mathbf{Y}) \propto f(\mathbf{y}|\beta) \, \pi(\beta \mid w, b_0)
\propto e^{-\beta \sum_{i=1}^{n} x_i} \prod_{i=1}^{n} (\beta x_i)^{y_i} \cdot \frac{w^{wb_0}}{\Gamma(wb_0)} \beta^{wb_0 - 1} \exp(-w\beta)
\propto e^{-\beta (\sum_{i=1}^{n} x_i + w)} \beta^{wb_0 + \sum_{i=1}^{n} y_i - 1},
so \beta \mid \mathbf{Y} \sim \Gamma(wb_0 + n\bar{y}, \, 1/(w + n\bar{x})).
(d) The posterior mean of β is
E(\beta \mid \mathbf{Y}) = \frac{wb_0 + n\bar{Y}}{w + n\bar{x}} = \frac{wb_0}{w + n\bar{x}} + \frac{n\bar{Y}}{w + n\bar{x}} = b_0 \cdot \frac{w}{w + n\bar{x}} + \frac{\bar{Y}}{\bar{x}} \cdot \frac{n\bar{x}}{w + n\bar{x}},
which is a weighted average of the prior mean b_0 and the MLE \bar{Y}/\bar{x}. The posterior mean converges to the MLE \bar{Y}/\bar{x} as the weight w \to 0.
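A brief numerical illustration of this weighting (all values illustrative): as w shrinks, the posterior mean moves from the prior guess b_0 toward the MLE \bar{Y}/\bar{x}.

import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0, 3.0])      # illustrative covariates
y = np.array([1, 2, 1, 3, 4])                # illustrative counts
b0 = 2.0                                     # prior best guess
for w in (10.0, 1.0, 0.1, 1e-6):
    post_mean = (w * b0 + y.sum()) / (w + x.sum())
    print(w, post_mean)
print("MLE:", y.mean() / x.mean())           # the posterior mean approaches this as w -> 0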