School of Mathematical Sciences
MTH736U Mathematical Statistics
Solutions to Exercises
Exercise 1.1 For any events A, B, C defined on a sample space Ω we have
Commutativity
A∪B =B∪A
A∩B =B∩A
Associativity
A ∪ (B ∪ C) = (A ∪ B) ∪ C
A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributive Laws
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
DeMorgan’s Laws
(A ∪ B)^c = A^c ∩ B^c
(A ∩ B)^c = A^c ∪ B^c
Exercise 1.2 Here we will use general versions of De Morgan's Laws: For any events A₁, A₂, . . . ∈ A, where A is a σ-algebra defined on a sample space Ω, we have

( ⋃_{i=1}^∞ A_i )^c = ⋂_{i=1}^∞ A_i^c   and   ( ⋂_{i=1}^∞ A_i )^c = ⋃_{i=1}^∞ A_i^c.

Now, by Definition 1.5, if A₁, A₂, . . . ∈ A, then also A₁^c, A₂^c, . . . ∈ A. Therefore,

⋃_{i=1}^∞ A_i^c ∈ A,

and, A being closed under complementation, ( ⋃_{i=1}^∞ A_i^c )^c ∈ A. Then, by the first general De Morgan Law applied to the events A_i^c, we have

( ⋃_{i=1}^∞ A_i^c )^c = ⋂_{i=1}^∞ A_i,

and so ⋂_{i=1}^∞ A_i ∈ A as well. It means that the σ-algebra A is closed under countable intersections of its elements.
Exercise 1.3 Theorem 1.2
If P is a probability function, then
(a) P(A) = ∑_{i=1}^∞ P(A ∩ C_i) for any partition C₁, C₂, . . .;
(b) P( ⋃_{i=1}^∞ A_i ) ≤ ∑_{i=1}^∞ P(A_i) for any events A₁, A₂, . . . [Boole's Inequality].
Proof
(a) Since C₁, C₂, . . . form a partition of Ω we have that C_i ∩ C_j = ∅ for all i ≠ j and Ω = ⋃_{i=1}^∞ C_i. Hence, by the Distributive Law, we can write

A = A ∩ Ω = A ∩ ⋃_{i=1}^∞ C_i = ⋃_{i=1}^∞ (A ∩ C_i).

Then, since the events A ∩ C_i are disjoint,

P(A) = P( ⋃_{i=1}^∞ (A ∩ C_i) ) = ∑_{i=1}^∞ P(A ∩ C_i).

(b) First we construct a disjoint collection of events A_1*, A_2*, . . . such that

⋃_{i=1}^∞ A_i* = ⋃_{i=1}^∞ A_i.

Define

A_1* = A₁,   A_i* = A_i \ ⋃_{j=1}^{i−1} A_j,  i = 2, 3, . . .

Then

P( ⋃_{i=1}^∞ A_i ) = P( ⋃_{i=1}^∞ A_i* ) = ∑_{i=1}^∞ P(A_i*)

since the A_i* are disjoint. Now, by construction A_i* ⊂ A_i for all i = 1, 2, . . .. Hence, P(A_i*) ≤ P(A_i) for all i = 1, 2, . . ., and so

∑_{i=1}^∞ P(A_i*) ≤ ∑_{i=1}^∞ P(A_i).
Exercise 1.4 Let X ∼ Bin(8, 0.4), that is, n = 8 and the probability of success p = 0.4. The pmf, in mathematical, tabular and graphical form, and a graph of the c.d.f. of X, follow.
Mathematical form:

P(X = x) = ⁸Cₓ (0.4)^x (0.6)^{8−x},  x ∈ X = {0, 1, 2, . . . , 8}.

Tabular form:

x        |      0      1      2      3      4      5      6      7      8
P(X = x) | 0.0168 0.0896 0.2090 0.2787 0.2322 0.1239 0.0413 0.0079 0.0007
P(X ≤ x) | 0.0168 0.1064 0.3154 0.5941 0.8263 0.9502 0.9915 0.9993 1.0000
Figure 1: Graphical representation of the mass function and the cumulative distribution function for X ∼
Bin(8, 0.4)
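The table and plotted values above can be reproduced numerically; a minimal sketch, assuming SciPy is available:

```python
# Reproduce the pmf/cdf table for X ~ Bin(8, 0.4), rounded to 4 decimal places.
from scipy.stats import binom

n, p = 8, 0.4
for x in range(n + 1):
    print(x, round(binom.pmf(x, n, p), 4), round(binom.cdf(x, n, p), 4))
```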
Exercise 1.5 1. To verify that F(x) is a cdf we will check the conditions of Theorem 1.4.
(i) F′(x) = 2/x³ > 0 for x ∈ X = (1, ∞). Hence, F(x) is increasing on (1, ∞). The function is equal to zero otherwise.
(ii) Obviously lim_{x→−∞} F(x) = 0.
(iii) lim_{x→∞} F(x) = 1, as lim_{x→∞} 1/x² = 0.
(iv) F(x) is continuous.
2. The pdf is

f(x) = 0 for x ∈ (−∞, 1);   f(x) = 2/x³ for x ∈ (1, ∞);   f(x) not defined for x = 1.
3. P(3 ≤ X ≤ 4) = F(4) − F(3) = (1 − 1/16) − (1 − 1/9) = 1/9 − 1/16 = 7/144.
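A quick numerical check of parts 2 and 3 (a sketch, assuming SciPy and that the cdf is F(x) = 1 − 1/x² for x > 1, as used above):

```python
# Check that f(x) = 2/x**3 is a density on (1, inf) and that P(3 <= X <= 4) = 7/144.
from scipy.integrate import quad

f = lambda x: 2.0 / x**3
total, _ = quad(f, 1, float("inf"))
prob, _ = quad(f, 3, 4)
print(total)           # ~1.0
print(prob, 7 / 144)   # both ~0.04861
```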
Exercise 1.6 Discrete distributions:
1. Uniform U (n) (equal mass at each outcome): The support set and the pmf are, respectively,
X = {x1 , x2 , . . . , xn } and
P(X = x_i) = 1/n,  x_i ∈ X,
where n is a positive integer. In the special case of X = {1, 2, . . . , n} we have
E(X) = (n + 1)/2,   var(X) = (n + 1)(n − 1)/12.
Examples:
(a) X ≡ the first digit in a randomly selected sequence of 5 digits;
(b) X ≡ randomly selected student in a class of 15 students.
2. Bern(p) (only two possible outcomes, usually called “success” and “failure”): The support set
and the pmf are, respectively, X = {0, 1} and
P(X = x) = p^x (1 − p)^{1−x},  x ∈ X,
where p ∈ [0, 1] is the probability of success.
E(X) = p, var(X) = p(1 − p).
Examples:
(a) X ≡ an outcome of tossing a coin;
(b) X ≡ detection of a fault in a tested semiconductor chip;
(c) X ≡ guessed answer in a multiple choice question.
3. Bin(n, p) (number of successes in n independent trials): The support set and the pmf are, respectively, X = {0, 1, 2, . . . , n} and
P(X = x) = C(n, x) p^x (1 − p)^{n−x},  x ∈ X,  with C(n, x) = n!/(x!(n − x)!),
where p ∈ [0, 1] is the probability of success.
E(X) = np, var(X) = np(1 − p).
Examples:
(a) X ≡ number of heads in several tosses of a coin;
(b) X ≡ number of semiconductor chips, among several tested, in which a test finds a defect;
(c) X ≡ number of correctly guessed answers in a multiple choice test of n questions.
4. Geom(p) (the number of independent Bernoulli trials until first “success”): The support set and
the pmf are, respectively, X = {1, 2, . . .} and
P(X = x) = p(1 − p)^{x−1} = pq^{x−1},  x ∈ X,
where p ∈ [0, 1] is the probability of success, q = 1 − p.
E(X) = 1/p,   var(X) = (1 − p)/p².
Examples include:
(a) X ≡ number of bits transmitted until the first error;
(b) X ≡ number of analyzed samples of air before a rare molecule is detected.
5. Hypergeometric(n, M, N) (the number of outcomes of one kind in a random sample of size n taken from a dichotomous population with M and N − M elements in the two groups): The support set and the pmf are, respectively, X = {0, 1, . . . , n} and
P(X = x) = C(M, x) C(N − M, n − x) / C(N, n),  x ∈ X,
where M ≥ x, N − M ≥ n − x.
E(X) = nM/N,   var(X) = (nM/N) × ((N − M)(N − n))/(N(N − 1)).
6. Poisson(λ) (a number of outcomes in a period of time or in a part of a space): The support set
and the pmf are, respectively, X = {0, 1, . . .} and
P(X = x) = (λ^x / x!) e^{−λ},  x ∈ X,
where λ > 0.
E(X) = λ, var(X) = λ.
Examples:
(a) count of blood cells within a square of a haemocytometer slide;
(b) number of caterpillars on a leaf;
(c) number of plants of a rare variety in a square meter of a meadow;
(d) number of tree seedlings in a square meter around a big tree;
(e) number of phone calls to a computer service within a minute.
Exercise 1.7 Continuous distributions:
1. Uniform, U(a, b): The support set and the pdf are, respectively,
X = [a, b] and
f_X(x) = (1/(b − a)) I_{[a,b]}(x).
E(X) = (a + b)/2,   var(X) = (b − a)²/12.
2. Exp(λ): The support set and the pdf are, respectively,
X = [0, ∞) and
f_X(x) = λe^{−λx} I_{[0,∞)}(x),
where λ > 0.
E(X) = 1/λ,   var(X) = 1/λ².
Examples:
• X ≡ the time between arrivals of e-mails on your computer;
• X ≡ the distance between major cracks in a motorway;
• X ≡ the life length of car voltage regulators.
3. Gamma(α, λ): An exponential rv describes the length of time or space between counts. The
length until r counts occur is a generalization of such a process and the respective rv is called
the Erlang random variable. Its pdf is given by
f_X(x) = (x^{r−1} λ^r e^{−λx} / (r − 1)!) I_{[0,∞)}(x),   r = 1, 2, . . .
The Gamma distribution generalizes the Erlang distribution by allowing the parameter r to be any positive number, usually denoted by α. In the gamma distribution we use the generalization of the factorial represented by the gamma function, as follows.
Γ(α) = ∫₀^∞ x^{α−1} e^{−x} dx,  for α > 0.
A recursive relationship, which may easily be shown by integrating the above equation by parts, is
Γ(α) = (α − 1)Γ(α − 1).
If α is a positive integer, then
Γ(α) = (α − 1)!
since Γ(1) = 1.
The pdf is
f_X(x) = (x^{α−1} λ^α e^{−λx} / Γ(α)) I_{[0,∞)}(x),
where α > 0, λ > 0 and Γ(·) is the gamma function. The mean and the variance of the gamma rv are
E(X) = α/λ,   var(X) = α/λ².
4. χ²(ν): A rv X has a Chi-square distribution with ν degrees of freedom iff X is gamma distributed with parameters α = ν/2 and λ = 1/2. This distribution is used extensively in interval estimation and hypothesis testing. Its values are tabulated.
Exercise 1.8 We can write

E(X − b)² = E(X − E X + E X − b)²
 = E[ (X − E X) + (E X − b) ]²
 = E(X − E X)² + (E X − b)² + 2 E[ (X − E X)(E X − b) ]
 = E(X − E X)² + (E X − b)² + 2(E X − b) E(X − E X)
 = E(X − E X)² + (E X − b)²,  as E(X − E X) = 0.

This is minimized when (E X − b)² = 0, that is, when b = E X.
Exercise 1.9
var(aX + b) = E(aX + b)² − {E(aX + b)}²
 = E(a²X² + 2abX + b²) − (a²{E X}² + 2ab E X + b²)
 = a² E X² − a²{E X}² = a² var(X).
Exercise 1.10 Let X ∼ Bin(n, p). The pmf of X is

P(X = x) = C(n, x) p^x (1 − p)^{n−x},  x = 0, 1, . . . , n.

To obtain the mgf of X we will use the Binomial formula

∑_{x=0}^n C(n, x) u^x v^{n−x} = (u + v)^n.

Hence, we may write

M_X(t) = E e^{tX} = ∑_{x=0}^n e^{tx} C(n, x) p^x (1 − p)^{n−x}
 = ∑_{x=0}^n C(n, x) (e^t p)^x (1 − p)^{n−x}
 = (e^t p + 1 − p)^n.

Now, assume that X ∼ Poisson(λ). Then

P(X = x) = e^{−λ} λ^x / x!,  x = 0, 1, 2, . . .

Here we will use the Taylor series expansion of e^u at zero, i.e.,

e^u = ∑_{x=0}^∞ u^x / x!.

Hence, we have

M_X(t) = E e^{tX} = ∑_{x=0}^∞ e^{tx} e^{−λ} λ^x / x!
 = e^{−λ} ∑_{x=0}^∞ (λe^t)^x / x!
 = e^{−λ} e^{λe^t} = e^{λ(e^t − 1)}.
Exercise 1.11 The pdf of a standard normal rv Z is

f_Z(z) = (1/√(2π)) e^{−z²/2} I_R(z).

To find the distribution of the transformed rv Y = g(Z) = µ + σZ we will apply Theorem 1.11. The inverse mapping at y is z = g^{−1}(y) = (y − µ)/σ. Assuming σ > 0, the function g(·) is increasing. Then, we can write

f_Y(y) = f_Z(g^{−1}(y)) |d g^{−1}(y)/dy|
 = (1/√(2π)) e^{−(y−µ)²/(2σ²)} · (1/σ)
 = (1/(σ√(2π))) e^{−(y−µ)²/(2σ²)}.

This is the pdf of a normal distribution with parameters µ ∈ R and σ² ∈ R₊, i.e., Y ∼ N(µ, σ²).
Exercise 1.12
(a) Let M_X(t) be the mgf of a random variable X. Then, for Y = a + bX, where a and b are constants, we can write

M_Y(t) = E e^{tY} = E e^{t(a+bX)} = E(e^{ta} e^{tbX}) = e^{ta} E e^{tbX} = e^{ta} M_X(tb).

(b) First we derive the mgf of a standard normal rv Z:

M_Z(t) = E e^{tZ}
 = ∫_{−∞}^∞ e^{tz} (1/√(2π)) e^{−z²/2} dz
 = ∫_{−∞}^∞ (1/√(2π)) e^{−(z² − 2tz)/2} dz
 = ∫_{−∞}^∞ (1/√(2π)) e^{−(z² − 2tz + t²)/2 + t²/2} dz
 = ∫_{−∞}^∞ (1/√(2π)) e^{−(z−t)²/2} e^{t²/2} dz
 = e^{t²/2} ∫_{−∞}^∞ (1/√(2π)) e^{−(z−t)²/2} dz
 = e^{t²/2},

as the integrand in the last integral is the pdf of a rv with mean t and variance 1. Now, we can calculate the mgf of Y ∼ N(µ, σ²) using the result of part (a), that is,

M_Y(t) = E(e^{t(µ+σZ)}) = e^{tµ} M_Z(tσ) = e^{tµ} e^{t²σ²/2} = e^{(2tµ + t²σ²)/2}.
Exercise 1.13 Let us write the table of counts given for smoking status and gender in terms of proportions, as follows:

            G = 0   G = 1
   S = 1     0.20    0.10    0.30
   S = 2     0.32    0.05    0.37
   S = 3     0.08    0.25    0.33
             0.60    0.40    1.00

where S is a random variable denoting 'smoking status' such that S(s) = 1, S(q) = 2, S(n) = 3, and G is a random variable denoting 'gender' such that G(m) = 0, G(f) = 1. The table gives the distribution of X = (G, S) as well as the marginal distributions of G and of S.
1. pG (0) = 0.6.
2. pX (0, 1) = 0.2.
3. pS (1) + pS (2) = 0.30 + 0.37 = 0.67.
4. pX(1, 1) + pX(0, 2) = 0.10 + 0.32 = 0.42.
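The look-ups above follow directly from the joint table; a minimal sketch, assuming the proportions reconstructed above (plain Python, no libraries):

```python
# Marginal and joint look-ups from the table of proportions p(g, s) for X = (G, S).
p = {(0, 1): 0.20, (0, 2): 0.32, (0, 3): 0.08,
     (1, 1): 0.10, (1, 2): 0.05, (1, 3): 0.25}

pG = {g: sum(v for (gg, s), v in p.items() if gg == g) for g in (0, 1)}
pS = {s: sum(v for (g, ss), v in p.items() if ss == s) for s in (1, 2, 3)}
print(pG[0])              # ~0.60
print(p[(0, 1)])          # 0.20
print(pS[1] + pS[2])      # ~0.67
print(p[(0, 1)] / pG[0])  # P(S=1 | G=0) = 1/3, used in Exercise 1.14
```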
Exercise 1.14 Here we have conditional probabilities.
1. pS(S = 1|G = 0) = 0.20/0.60 = 1/3.
2. pG(G = 0|S = 1) = 0.20/0.30 = 2/3.
Exercise 1.15 The first equality is a special case of Lemma 1.2 with g(Y ) = Y .
To show the second equality we will work on the RHS of the equation.
var(Y|X) = E(Y²|X) − [E(Y|X)]².
So
E[var(Y|X)] = E[E(Y²|X)] − E{[E(Y|X)]²} = E(Y²) − E{[E(Y|X)]²}.
Also,
var(E[Y|X]) = E{[E(Y|X)]²} − {E[E(Y|X)]}² = E{[E(Y|X)]²} − [E(Y)]².
Hence, adding these two expressions we obtain
E[var(Y|X)] + var(E[Y|X]) = E(Y²) − [E(Y)]² = var(Y).
Exercise 1.16 (See Example 1.25)
Denote by a success a message which gets into the server and by X the number of successes in Y trials. Then

X|Y ∼ Bin(Y, p) with p = 1/3,   Y ∼ Poisson(λ) with λ = 21.

As in Example 1.25,

X ∼ Poisson(λp) = Poisson(7).

Hence

E(X) = λp = 7,   var(X) = λp = 7.
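The Poisson-thinning identity X ∼ Poisson(λp) can be checked by simulation; a minimal sketch, assuming NumPy:

```python
# Simulate Y ~ Poisson(21) messages and keep each with success probability 1/3;
# the thinned count X should have mean and variance close to 21 * (1/3) = 7.
import numpy as np

rng = np.random.default_rng(0)
lam, p, reps = 21, 1 / 3, 100_000
y = rng.poisson(lam, size=reps)
x = rng.binomial(y, p)          # X | Y ~ Bin(Y, p)
print(x.mean(), x.var())        # both approximately 7
```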
Exercise 1.17 Here we have a two-dimensional rv X = (X₁, X₂) which denotes the length of life of two components, with joint pdf equal to

f_X(x₁, x₂) = (1/8) x₁ e^{−(x₁+x₂)/2} I_{{(x₁,x₂): x₁>0, x₂>0}}.

1. The probability that the length of life of each of the two components will be greater than 100 hours is:

P_X(X₁ > 1, X₂ > 1) = ∫₁^∞ ∫₁^∞ (1/8) x₁ e^{−(x₁+x₂)/2} dx₁ dx₂
 = (1/8) ∫₁^∞ e^{−x₂/2} ( ∫₁^∞ x₁ e^{−x₁/2} dx₁ ) dx₂   (the inner integral equals 6e^{−1/2}, by parts)
 = (1/8) ∫₁^∞ e^{−x₂/2} · 6e^{−1/2} dx₂
 = (3/2) e^{−1}.

That is, the probability that the length of life of each of the two components is greater than 100 hours is (3/2)e^{−1}.

2. Now we are interested in component II only, so we need to calculate the marginal pdf of the length of its life:

f_{X₂}(x₂) = ∫₀^∞ (1/8) x₁ e^{−(x₁+x₂)/2} dx₁
 = (1/8) e^{−x₂/2} ∫₀^∞ x₁ e^{−x₁/2} dx₁   (the integral equals 4, by parts)
 = (1/2) e^{−x₂/2}.

Now,

P_{X₂}(X₂ > 2) = ∫₂^∞ (1/2) e^{−x₂/2} dx₂ = e^{−1}.

That is, the probability that the length of life of component II will be bigger than 200 hours is e^{−1}.
3. We can write

f_X(x₁, x₂) = ( (1/8) x₁ e^{−x₁/2} ) ( e^{−x₂/2} ) = g(x₁) h(x₂).

The joint pdf can be written as a product of two functions, one depending on x₁ only and the other on x₂ only, for all pairs (x₁, x₂), and the domains do not depend on each other. Hence, the random variables X₁ and X₂ are independent.
4. Write g(X₁) = 1/X₁ and h(X₂) = X₂. Then, by part 2 of Theorem 1.13 we have

E[g(X₁) h(X₂)] = E[g(X₁)] E[h(X₂)] = E(1/X₁) E(X₂).

The marginal pdfs of X₁ and X₂, respectively, are

f_{X₁}(x₁) = (1/4) x₁ e^{−x₁/2},   f_{X₂}(x₂) = (1/2) e^{−x₂/2}.

Hence,

E(1/X₁) = (1/4) ∫₀^∞ (1/x₁) x₁ e^{−x₁/2} dx₁ = (1/4) ∫₀^∞ e^{−x₁/2} dx₁ = 1/2,

E(X₂) = (1/2) ∫₀^∞ x₂ e^{−x₂/2} dx₂ = 2.

Finally,

E(X₂/X₁) = E(1/X₁) E(X₂) = 1.
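The integrals above can be verified numerically; a minimal sketch, assuming SciPy and NumPy:

```python
# Numerical checks for Exercise 1.17 with f(x1, x2) = (1/8) x1 exp(-(x1 + x2)/2).
import numpy as np
from scipy.integrate import quad, dblquad

f = lambda x1, x2: 0.125 * x1 * np.exp(-0.5 * (x1 + x2))

# P(X1 > 1, X2 > 1); dblquad integrates the inner variable (x2) first.
p1, _ = dblquad(lambda x2, x1: f(x1, x2), 1, np.inf, lambda x1: 1, lambda x1: np.inf)
print(p1, 1.5 * np.exp(-1))                              # both ~0.5518

# P(X2 > 2) from the marginal pdf of X2.
p2, _ = quad(lambda x2: 0.5 * np.exp(-0.5 * x2), 2, np.inf)
print(p2, np.exp(-1))                                    # both ~0.3679

# E(1/X1): the integrand (1/x1) * (1/4) x1 e^{-x1/2} simplifies to (1/4) e^{-x1/2}.
e_inv_x1, _ = quad(lambda x1: 0.25 * np.exp(-0.5 * x1), 0, np.inf)
e_x2, _ = quad(lambda x2: 0.5 * x2 * np.exp(-0.5 * x2), 0, np.inf)
print(e_inv_x1, e_x2, e_inv_x1 * e_x2)                   # 0.5, 2.0, 1.0
```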
Exercise 1.18 Take the nonnegative function of t, g(t) = var(tX₁ + X₂) ≥ 0. Then,

g(t) = var(tX₁ + X₂) = E[ (tX₁ + X₂ − (tµ₁ + µ₂))² ]
 = E[ (t(X₁ − µ₁) + (X₂ − µ₂))² ]
 = E[ (X₁ − µ₁)² t² + 2(X₁ − µ₁)(X₂ − µ₂) t + (X₂ − µ₂)² ]
 = var(X₁) t² + 2 cov(X₁, X₂) t + var(X₂).

1. g(t) ≥ 0, hence this quadratic function of t has at most one real root. The discriminant is therefore non-positive:

∆ = 4 cov²(X₁, X₂) − 4 var(X₁) var(X₂) ≤ 0.

Hence,

cov²(X₁, X₂) ≤ var(X₁) var(X₂),

that is,

cov²(X₁, X₂) / (var(X₁) var(X₂)) ≤ 1,

and so

−1 ≤ ρ(X₁, X₂) ≤ 1.

2. |ρ(X₁, X₂)| = 1 if and only if the discriminant is equal to zero, that is, if and only if g(t) has a single (double) root. But since (t(X₁ − µ₁) + (X₂ − µ₂))² ≥ 0, the expected value g(t) = E[ (t(X₁ − µ₁) + (X₂ − µ₂))² ] = 0 if and only if

P( [t(X₁ − µ₁) + (X₂ − µ₂)]² = 0 ) = 1.

This is equivalent to

P( t(X₁ − µ₁) + (X₂ − µ₂) = 0 ) = 1.

It means that P(X₂ = aX₁ + b) = 1 with a = −t and b = µ₁t + µ₂, where t is the root of g(t), that is

t = − cov(X₁, X₂) / var(X₁).

Hence, a = −t has the same sign as ρ(X₁, X₂).
Exercise 1.19 We have

f_{X,Y}(x, y) = 8xy for 0 ≤ x < y ≤ 1, and 0 otherwise.
(a) Variables X and Y are not independent because their ranges are not independent.
(b) We have cov(X, Y) = E(XY) − E(X) E(Y).

E(XY) = ∫₀¹ ∫ₓ¹ xy · 8xy dy dx = 4/9.

To calculate E(X) and E(Y) we need the marginal pdfs of X and of Y:

f_X(x) = ∫ₓ¹ 8xy dy = 4x(1 − x²) on (0, 1);

f_Y(y) = ∫₀^y 8xy dx = 4y³ on (0, 1).

Now we can calculate the expectations, that is,

E(X) = ∫₀¹ x · 4x(1 − x²) dx = 8/15,   E(Y) = ∫₀¹ y · 4y³ dy = 4/5.

Hence,

cov(X, Y) = 4/9 − (8/15)(4/5) = 4/225 ≈ 0.01778.
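A symbolic check of the covariance calculation, assuming SymPy is available:

```python
# Symbolic check of E(XY), E(X), E(Y) and cov(X, Y) for f(x, y) = 8xy on 0 <= x < y <= 1.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
f = 8 * x * y
E_xy = sp.integrate(x * y * f, (y, x, 1), (x, 0, 1))   # 4/9
E_x = sp.integrate(x * f, (y, x, 1), (x, 0, 1))        # 8/15
E_y = sp.integrate(y * f, (y, x, 1), (x, 0, 1))        # 4/5
print(E_xy, E_x, E_y, E_xy - E_x * E_y)                # covariance 4/225
```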
(c) The transformation and the inverses are, respectively,

u = x/y and v = y;   x = uy = uv and y = v.

Then the Jacobian of the transformation is

J = det [ v  u ; 0  1 ] = v.

So, by the theorem for a bivariate transformation we can write

f_{U,V}(u, v) = f_{X,Y}(uv, v) |J| = 8uv³.

The support of (U, V) is

D = {(u, v) : 0 ≤ u < 1, 0 < v ≤ 1}.
(d) The joint pdf for (U, V ) is a product of two functions: one depends on u only and the other on
v only. Also, the ranges of U and V are independent. Hence, U and V are independent random
variables.
(e) U and V are independent, hence cov(U, V ) = 0.
Exercise 1.20
(a) Here u = x/(x + y) and v = x + y. Hence, the inverses are

x = uv and y = v − uv.

The Jacobian of the transformation is J = v.
Furthermore, the joint pdf of (X, Y) (by independence) is

f_{X,Y}(x, y) = λ² e^{−λ(x+y)}.

Hence, by the transformation theorem we get

f_{U,V}(u, v) = λ² e^{−λv} v

and the support for (U, V) is

D = {(u, v) : 0 < u < 1, 0 < v < ∞}.
(b) Random variables U and V are independent because their joint pdf can be written as a product
of two functions, one depending on v only and the other on u only (this is a constant) and their
domains are independent.
(c) We will find the marginal pdf of U:

f_U(u) = ∫₀^∞ λ² e^{−λv} v dv = 1

on (0, 1). That is, U ∼ Uniform(0, 1).
(d) V = X + Y is a sum of two independent identically distributed Exp(λ) random variables, and its pdf λ²v e^{−λv}, obtained above, is that of the Erlang(2, λ) distribution.
Exercise 1.21
(a) By definition of expectation we can write

E(Y^a) = ∫₀^∞ y^a · (y^{α−1} λ^α e^{−λy} / Γ(α)) dy
 = ∫₀^∞ (y^{a+α−1} λ^α e^{−λy} / Γ(α)) · (λ^a Γ(a + α))/(λ^a Γ(a + α)) dy
 = (Γ(a + α)/(λ^a Γ(α))) ∫₀^∞ (y^{a+α−1} λ^{a+α} e^{−λy} / Γ(a + α)) dy   (the integrand is the pdf of a gamma rv, so the integral equals 1)
 = Γ(a + α)/(λ^a Γ(α)).
Now, the parameters of a gamma distribution have to be positive, hence a + α > 0.
(b) To obtain E(Y) we apply the above result with a = 1. This gives

E(Y) = Γ(1 + α)/(λ Γ(α)) = α Γ(α)/(λ Γ(α)) = α/λ.

For the variance we have var(Y) = E(Y²) − [E(Y)]², so we need to find E(Y²). Again, we will use the result of part (a), now with a = 2. That is,

E(Y²) = Γ(2 + α)/(λ² Γ(α)) = (1 + α) Γ(1 + α)/(λ² Γ(α)) = (1 + α) α Γ(α)/(λ² Γ(α)) = α(1 + α)/λ².

Hence, the variance is

var(Y) = α(1 + α)/λ² − (α/λ)² = α/λ².

(c) X ∼ χ²_ν is a special case of the gamma distribution with α = ν/2 and λ = 1/2. Hence, from (b) we obtain

E(X) = α/λ = (ν/2)/(1/2) = ν,   var(X) = α/λ² = (ν/2)/(1/4) = 2ν.
Exercise 1.22 Here we have Y_i ∼ N(0, 1) iid, Ȳ = (1/5) ∑_{i=1}^5 Y_i, and Y₆ ∼ N(0, 1) independently of the other Y_i.

(a) Y_i ∼ N(0, 1) iid, hence Y_i² ∼ χ²₁ iid. Then, by Corollary 1.1 we get

W = ∑_{i=1}^5 Y_i² ∼ χ²₅.

(b) U = ∑_{i=1}^5 (Y_i − Ȳ)² = 4S²/σ², where S² = (1/4) ∑_{i=1}^5 (Y_i − Ȳ)² and σ² = 1. Hence, as shown in Example 1.34,

U = 4S²/σ² ∼ χ²₄.

(c) U ∼ χ²₄ and Y₆² ∼ χ²₁, and they are independent. Hence, by Corollary 1.1 we get

U + Y₆² ∼ χ²₅.

(d) Y₆ ∼ N(0, 1) and W ∼ χ²₅, independently. Hence, by Theorem 1.19, we get

Y₆ / √(W/5) ∼ t₅.
Exercise 1.23 Let X = (X₁, X₂, . . . , Xₙ)ᵀ be a vector of mutually independent rvs with mgfs M_{X₁}(t), M_{X₂}(t), . . . , M_{Xₙ}(t), and let a₁, a₂, . . . , aₙ and b₁, b₂, . . . , bₙ be fixed constants. Then the mgf of the random variable Z = ∑_{i=1}^n (a_i X_i + b_i) is

M_Z(t) = e^{t ∑_{i=1}^n b_i} ∏_{i=1}^n M_{X_i}(a_i t).

Proof. By definition of the mgf we can write

M_Z(t) = E e^{tZ} = E e^{t ∑_{i=1}^n (a_i X_i + b_i)}
 = E e^{t ∑_{i=1}^n a_i X_i + t ∑_{i=1}^n b_i} = e^{t ∑_{i=1}^n b_i} E e^{t ∑_{i=1}^n a_i X_i}
 = e^{t ∑_{i=1}^n b_i} E ∏_{i=1}^n e^{t a_i X_i} = e^{t ∑_{i=1}^n b_i} ∏_{i=1}^n E e^{t a_i X_i}   (by independence)
 = e^{t ∑_{i=1}^n b_i} ∏_{i=1}^n M_{X_i}(t a_i).
Exercise 1.24 Here we will use the result of Theorem 1.22. We know that the mgf of a normal rv X with mean µ and variance σ² is

M_X(t) = e^{(2tµ + t²σ²)/2}.

Hence, by Theorem 1.22 with a_i = 1/n and b_i = 0 for all i = 1, . . . , n, we can write

M_{X̄}(t) = ∏_{i=1}^n M_{X_i}(t/n) = ∏_{i=1}^n e^{(2tµ/n + t²σ²/n²)/2}
 = e^{n(2tµ/n + t²σ²/n²)/2} = e^{(2tµ + t²σ²/n)/2}.

This is the mgf of a normal rv with mean µ and variance σ²/n, that is,

X̄ ∼ N(µ, σ²/n).
Exercise 1.25
Here g(X) = Y = AX, hence h(Y) = X = A^{−1}Y = BY, where B = A^{−1}. The Jacobian is

J_h(y) = det( ∂h(y)/∂y ) = det( ∂(By)/∂y ) = det B = det A^{−1} = 1/det A.

Hence, by Theorem 1.25 we have

f_Y(y) = f_X(h(y)) |J_h(y)| = f_X(A^{−1}y) · (1/|det A|).
Exercise 1.26
A multivariate normal rv has the pdf of the form

f_X(x) = (2π)^{−n/2} (det V)^{−1/2} exp{ −(1/2)(x − µ)ᵀ V^{−1} (x − µ) }.

Then, by the result of Exercise 1.25, for X ∼ Nₙ(µ, V) and Y = AX we have

f_Y(y) = f_X(A^{−1}y) · (1/|det A|)
 = (1/|det A|) (2π)^{−n/2} (det V)^{−1/2} exp{ −(1/2)(A^{−1}y − µ)ᵀ V^{−1} (A^{−1}y − µ) }
 = (1/|det A|) (2π)^{−n/2} (det V)^{−1/2} exp{ −(1/2)(A^{−1}(y − Aµ))ᵀ V^{−1} A^{−1}(y − Aµ) }
 = (1/|det A|) (2π)^{−n/2} (det V)^{−1/2} exp{ −(1/2)(y − Aµ)ᵀ (A^{−1})ᵀ V^{−1} A^{−1} (y − Aµ) }
 = (2π)^{−n/2} (det V (det A)²)^{−1/2} exp{ −(1/2)(y − Aµ)ᵀ (A V Aᵀ)^{−1} (y − Aµ) }
 = (2π)^{−n/2} (det(A V Aᵀ))^{−1/2} exp{ −(1/2)(y − Aµ)ᵀ (A V Aᵀ)^{−1} (y − Aµ) }.

This is the pdf of a multivariate normal rv with mean Aµ and variance-covariance matrix A V Aᵀ.
CHAPTER 2
Exercise 2.1 Suppose that Y = (Y₁, . . . , Yₙ) is a random sample from an Exp(λ) distribution. Then we may write

f_Y(y) = ∏_{i=1}^n λ e^{−λ y_i} = λⁿ e^{−λ ∑_{i=1}^n y_i} × 1 = g_λ(T(y)) · h(y),

with g_λ(T(y)) = λⁿ e^{−λ T(y)} and h(y) = 1. It follows that T(Y) = ∑_{i=1}^n Y_i is a sufficient statistic for λ.
Exercise 2.2 Suppose that Y = (Y₁, . . . , Yₙ) is a random sample from an Exp(λ) distribution. Then the ratio of the joint pdfs at two different realizations of Y, x and y, is

f(x; λ)/f(y; λ) = (λⁿ e^{−λ ∑_{i=1}^n x_i}) / (λⁿ e^{−λ ∑_{i=1}^n y_i}) = e^{λ( ∑_{i=1}^n y_i − ∑_{i=1}^n x_i )}.

The ratio is constant in λ iff ∑_{i=1}^n y_i = ∑_{i=1}^n x_i. Hence, by Lemma 2.1, T(Y) = ∑_{i=1}^n Y_i is a minimal sufficient statistic for λ.
Exercise 2.3 The Y_i are identically distributed, hence have the same expectation, say E(Y), for all i = 1, . . . , n. Here, for y ∈ [0, θ], we have:

E(Y) = (2/θ²) ∫₀^θ y(θ − y) dy = (2/θ²) ∫₀^θ (θy − y²) dy = (2/θ²) [θ y²/2 − y³/3]₀^θ = θ/3.

Bias:

E(T(Y)) = E(3Ȳ) = 3 (1/n) ∑_{i=1}^n E(Y_i) = 3 (1/n) n (θ/3) = θ.

That is, bias(T(Y)) = E(T(Y)) − θ = 0, so T(Y) is unbiased.
Variance: The Y_i are identically distributed, hence have the same variance, say var(Y), for all i = 1, . . . , n, where

var(Y) = E(Y²) − [E(Y)]².

We need to calculate E(Y²):

E(Y²) = (2/θ²) ∫₀^θ y²(θ − y) dy = (2/θ²) ∫₀^θ (θy² − y³) dy = (2/θ²) [θ y³/3 − y⁴/4]₀^θ = θ²/6.

Hence var(Y) = E(Y²) − [E(Y)]² = θ²/6 − θ²/9 = θ²/18. This gives

var(T(Y)) = 9 var(Ȳ) = 9 (1/n²) ∑_{i=1}^n var(Y_i) = 9 (1/n²) n (θ²/18) = θ²/(2n).

Consistency:
T(Y) is unbiased, so it is enough to check whether its variance tends to zero as n tends to infinity. Indeed, as n → ∞ we have θ²/(2n) → 0, that is, T(Y) = 3Ȳ is a consistent estimator of θ.
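A quick Monte Carlo check of the bias and variance above (a sketch, assuming NumPy; samples from f(y) = 2(θ − y)/θ² are drawn by inverting the cdf F(y) = (2θy − y²)/θ²):

```python
# Check E(3*Ybar) = theta and var(3*Ybar) = theta**2 / (2n) by simulation.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000
u = rng.uniform(size=(reps, n))
y = theta * (1.0 - np.sqrt(1.0 - u))      # inverse-cdf sampling from f(y)
t = 3.0 * y.mean(axis=1)
print(t.mean(), theta)                    # both ~2.0
print(t.var(), theta**2 / (2 * n))        # both ~0.2
```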
Exercise 2.4 We have X_i ∼ Bern(p) iid for i = 1, . . . , n. Also, X̄ = (1/n) ∑_{i=1}^n X_i.

(a) For an estimator ϑ̂ to be consistent for ϑ we require that MSE(ϑ̂) → 0 as n → ∞. We have

MSE(ϑ̂) = var(ϑ̂) + [bias(ϑ̂)]².

We will now calculate the variance and bias of p̂ = X̄.

E(X̄) = (1/n) ∑_{i=1}^n E(X_i) = (1/n) np = p.

Hence X̄ is an unbiased estimator of p.

var(X̄) = (1/n²) ∑_{i=1}^n var(X_i) = (1/n²) npq = pq/n.

Hence, MSE(X̄) = pq/n → 0 as n → ∞, that is, X̄ is a consistent estimator of p.

(b) The estimator ϑ̂ is asymptotically unbiased for ϑ if E(ϑ̂) → ϑ as n → ∞, that is, the bias tends to zero as n → ∞. Here we have

E(p̂q̂) = E[X̄(1 − X̄)] = E[X̄ − X̄²] = E[X̄] − E[X̄²].

Note that we can write

E[X̄²] = var(X̄) + [E(X̄)]² = pq/n + p².

That is,

E(p̂q̂) = p − (pq/n + p²) = pq(n − 1)/n → pq as n → ∞.

Hence, the estimator is asymptotically unbiased for pq.
Exercise 2.5 Here we have a single parameter p and g(p) = p. By definition the CRLB(p) is

CRLB(p) = {dg(p)/dp}² / E{ −d² log P(Y = y; p)/dp² },     (1)

where the joint pmf of Y = (Y₁, . . . , Yₙ)ᵀ, with Y_i ∼ Bern(p) independently, is

P(Y = y; p) = ∏_{i=1}^n p^{y_i} (1 − p)^{1−y_i} = p^{∑_{i=1}^n y_i} (1 − p)^{n − ∑_{i=1}^n y_i}.

For the numerator of (1) we get dg(p)/dp = 1. Further, for brevity denote P = P(Y = y; p). For the denominator of (1) we calculate

log P = ∑_{i=1}^n y_i log p + (n − ∑_{i=1}^n y_i) log(1 − p)

and

d log P/dp = ∑_{i=1}^n y_i / p + (∑_{i=1}^n y_i − n)/(1 − p),

d² log P/dp² = −∑_{i=1}^n y_i / p² + (∑_{i=1}^n y_i − n)/(1 − p)².

Hence, since E(Y_i) = p for all i, we get

E{ −d² log P/dp² } = E( ∑_{i=1}^n Y_i / p² ) − E( (∑_{i=1}^n Y_i − n)/(1 − p)² )
 = np/p² − (np − n)/(1 − p)² = n/(p(1 − p)).

Hence, CRLB(p) = p(1 − p)/n.

Since var(Ȳ) = p(1 − p)/n, var(Ȳ) achieves the bound and so Ȳ has the minimum variance among all unbiased estimators of p.
Exercise 2.6 From lectures, we know that the joint pdf of independent normal rvs is

f(y; µ, σ²) = (2πσ²)^{−n/2} exp{ −(1/(2σ²)) ∑_{i=1}^n (y_i − µ)² }.

Denote f = f(y; µ, σ²). Taking the log of the pdf we obtain

log f = −(n/2) log(2πσ²) − (1/(2σ²)) ∑_{i=1}^n (y_i − µ)².

Thus, we have

∂ log f/∂µ = (1/σ²) ∑_{i=1}^n (y_i − µ)

and

∂ log f/∂σ² = −n/(2σ²) + (1/(2σ⁴)) ∑_{i=1}^n (y_i − µ)².

It follows that

∂² log f/∂µ² = −n/σ²,
∂² log f/∂µ ∂σ² = −(1/σ⁴) ∑_{i=1}^n (y_i − µ),
∂² log f/∂(σ²)² = n/(2σ⁴) − (1/σ⁶) ∑_{i=1}^n (y_i − µ)².

Hence, taking the expectation of each of the second derivatives we obtain the Fisher information matrix

M = [ n/σ²   0
      0      n/(2σ⁴) ].

Now, let g(µ, σ²) = µ + σ². Then we have ∂g/∂µ = 1 and ∂g/∂σ² = 1. So

CRLB(g(µ, σ²)) = (1, 1) M^{−1} (1, 1)ᵀ = (1, 1) [ σ²/n   0
                                                  0      2σ⁴/n ] (1, 1)ᵀ = σ²/n + 2σ⁴/n = (σ²/n)(1 + 2σ²).
Exercise 2.7 Suppose that Y₁, . . . , Yₙ are independent Poisson(λ) random variables. Then we know that T = ∑_{i=1}^n Y_i is a sufficient statistic for λ.
Now, we need to find the distribution of T. We showed in Exercise 1.10 that the mgf of a Poisson(λ) rv is

M_Y(z) = e^{λ(e^z − 1)}.

Hence, we may write (we use the argument z so as not to confuse it with the values of T, denoted by t)

M_T(z) = ∏_{i=1}^n M_{Y_i}(z) = ∏_{i=1}^n e^{λ(e^z − 1)} = e^{nλ(e^z − 1)}.

Hence, T ∼ Poisson(nλ), and so its probability mass function is

P(T = t) = (nλ)^t e^{−nλ} / t!,   t = 0, 1, . . .

Next, suppose that

E{h(T)} = ∑_{t=0}^∞ h(t) (nλ)^t e^{−nλ} / t! = 0

for all λ > 0. Then we have

∑_{t=0}^∞ (h(t)/t!) (nλ)^t = 0

for all λ > 0. Thus, every coefficient h(t)/t! is zero, so that h(t) = 0 for all t = 0, 1, 2, . . ..
Since T takes on values t = 0, 1, 2, . . . with probability 1 it means that

P{h(T) = 0} = 1

for all λ. Hence, T = ∑_{i=1}^n Y_i is a complete sufficient statistic.
Exercise 2.8 S = ∑_{i=1}^n Y_i is a complete sufficient statistic for λ. We have seen that T = Ȳ = S/n is a MVUE for λ. Now, we will find a unique MVUE of φ = λ². We have

E(T²) = E( S²/n² ) = (1/n²) E(S²)
 = (1/n²) ( var(S) + [E(S)]² )
 = (1/n²) ( nλ + n²λ² )
 = λ/n + λ² = (1/n) E(T) + λ².

It means that

E( T² − (1/n) T ) = λ²,

i.e., T² − (1/n)T = Ȳ² − (1/n)Ȳ is an unbiased estimator of λ². It is a function of a complete sufficient statistic, hence it is the unique MVUE of λ².
Exercise 2.9 We may write

P(Y = y; λ) = λ^y e^{−λ} / y!
 = (1/y!) exp{ y log λ − λ }
 = (1/y!) exp{ (log λ) y − λ }.

Thus, we have a(λ) = log λ, b(y) = y, c(λ) = −λ and h(y) = 1/y!. That is, P(Y = y; λ) has a representation of the form required by Definition 2.10.
Exercise 2.10 (a) Here, for y > 0, we have

f(y|λ, α) = (λ^α / Γ(α)) y^{α−1} e^{−λy}
 = exp{ log(λ^α/Γ(α)) } y^{α−1} e^{−λy}
 = exp{ −λy + (α − 1) log y + log(λ^α/Γ(α)) }.

This has the required form of Definition 2.10, where p = 2 and

a₁(λ, α) = −λ,        b₁(y) = y,
a₂(λ, α) = α − 1,     b₂(y) = log y,
c(λ, α) = log(λ^α/Γ(α)),   h(y) = 1.

(b) By Theorem 2.8 (lecture notes) we have that

S₁(Y) = ∑_{i=1}^n Y_i and S₂(Y) = ∑_{i=1}^n log Y_i

are joint complete sufficient statistics for λ and α.
Exercise 2.11 To obtain the Method of Moments estimator we compare the population and the sample moments. For a one-parameter distribution we obtain θ̂ as the solution of

E(Y) = Ȳ.     (2)

Here, for y ∈ [0, θ], we have:

E(Y) = (2/θ²) ∫₀^θ y(θ − y) dy = (2/θ²) ∫₀^θ (θy − y²) dy = (2/θ²) [θ y²/2 − y³/3]₀^θ = θ/3.

Then by (2) we get the method of moments estimator of θ:

θ̂ = 3Ȳ.
Exercise 2.12 (a) First, we will show that the distribution belongs to an exponential family. Here, for y > 0 and known α, we have

f(y|λ, α) = (λ^α / Γ(α)) y^{α−1} e^{−λy}
 = y^{α−1} (λ^α / Γ(α)) e^{−λy}
 = y^{α−1} exp{ log(λ^α/Γ(α)) } e^{−λy}
 = y^{α−1} exp{ −λy + log(λ^α/Γ(α)) }.

This has the required form of Definition 2.11, where p = 1 and

a(λ) = −λ,   b(y) = y,   c(λ) = log(λ^α/Γ(α)),   h(y) = y^{α−1}.

By Theorem 2.8 (lecture notes) we have that

S(Y) = ∑_{i=1}^n Y_i

is the complete sufficient statistic for λ.
(b) The likelihood function is

L(λ; y) = ∏_{i=1}^n (λ^α / Γ(α)) y_i^{α−1} e^{−λ y_i}
 = ∏_{i=1}^n (λ^α / Γ(α)) e^{(α−1) log y_i} e^{−λ y_i}
 = (λ^α / Γ(α))ⁿ e^{(α−1) ∑_{i=1}^n log y_i} e^{−λ ∑_{i=1}^n y_i}.

Then the log-likelihood is

l(λ; y) = log L(λ; y) = αn log λ − n log Γ(α) + (α − 1) ∑_{i=1}^n log y_i − λ ∑_{i=1}^n y_i.

Then, we obtain the following derivative of l(λ; y) with respect to λ:

dl/dλ = αn/λ − ∑_{i=1}^n y_i.

This, set to zero, gives

λ̂ = αn / ∑_{i=1}^n y_i = α/ȳ.

Hence, MLE(λ) = α/Ȳ. So, we get

MLE[g(λ)] = MLE(1/λ) = 1/MLE(λ) = Ȳ/α = (1/(αn)) ∑_{i=1}^n Y_i = (1/(αn)) S(Y).

That is, MLE[g(λ)] is a function of the complete sufficient statistic.
(c) To show that it is an unbiased estimator of g(λ) we calculate:

E[ĝ(λ)] = E( (1/(αn)) ∑_{i=1}^n Y_i ) = (1/(αn)) ∑_{i=1}^n E(Y_i) = (1/(αn)) n (α/λ) = 1/λ.

It is an unbiased estimator and a function of a complete sufficient statistic, hence, by Corollary 2.2 (given in Lectures), it is the unique MVUE(g(λ)).
Exercise 2.13 (a) The likelihood is

L(β₀, β₁; y) = (2πσ²)^{−n/2} exp{ −(1/(2σ²)) ∑_{i=1}^n (y_i − β₀ − β₁ x_i)² }.

Now, maximizing this is equivalent to minimizing

S(β₀, β₁) = ∑_{i=1}^n (y_i − β₀ − β₁ x_i)²,

which is the criterion we use to find the least squares estimators. Hence, the maximum likelihood estimators are the same as the least squares estimators.

(c) The estimates of β₀ and β₁ are

β̂₀ = Ȳ − β̂₁ x̄ = 94.123,
β̂₁ = ( ∑_{i=1}^n x_i Y_i − n x̄ Ȳ ) / ( ∑_{i=1}^n x_i² − n x̄² ) = −1.266.

Hence the estimate of the mean response at a given temperature x is

Ê(Y|x) = 94.123 − 1.266 x.

For the temperature x = 40 degrees we obtain the estimate of the expected hardness Ê(Y|x = 40) = 43.483.
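The estimates in part (c) come from the closed-form least-squares formulas; a minimal sketch, assuming NumPy (the hardness/temperature data from the exercise are not reproduced here, and the helper name ls_estimates is ad hoc):

```python
# Least-squares (= maximum likelihood) estimates for y_i = b0 + b1 * x_i + error.
import numpy as np

def ls_estimates(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum(x**2) - n * x.mean()**2
    b1 = (np.sum(x * y) - n * x.mean() * y.mean()) / sxx
    b0 = y.mean() - b1 * x.mean()
    return b0, b1

# With the hardness/temperature data of the exercise this gives b0 = 94.123 and
# b1 = -1.266, so the fitted mean response at x = 40 is b0 + 40 * b1 = 43.483.
```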
Exercise 2.14 The LS estimator of β₁ is

β̂₁ = ( ∑_{i=1}^n x_i Y_i − n x̄ Ȳ ) / ( ∑_{i=1}^n x_i² − n x̄² ).

We will see that it has a normal distribution and is unbiased, and we will find its variance. Now, normality is clear from the fact that we may write

β̂₁ = ( ∑_{i=1}^n x_i Y_i − x̄ ∑_{i=1}^n Y_i ) / ( ∑_{i=1}^n x_i² − n x̄² ) = ∑_{i=1}^n ((x_i − x̄)/S_xx) Y_i,

where S_xx = ∑_{i=1}^n x_i² − n x̄², so that β̂₁ is a linear function of Y₁, . . . , Yₙ, each of which is normally distributed. Next, we have

E(β̂₁) = (1/S_xx) ∑_{i=1}^n (x_i − x̄) E(Y_i)
 = (1/S_xx) { ∑_{i=1}^n x_i E(Y_i) − x̄ ∑_{i=1}^n E(Y_i) }
 = (1/S_xx) { ∑_{i=1}^n x_i (β₀ + β₁ x_i) − x̄ ∑_{i=1}^n (β₀ + β₁ x_i) }
 = (1/S_xx) { n β₀ x̄ + β₁ ∑_{i=1}^n x_i² − n β₀ x̄ − n β₁ x̄² }
 = (1/S_xx) { ∑_{i=1}^n x_i² − n x̄² } β₁
 = (1/S_xx) S_xx β₁ = β₁.

Finally, since the Y_i are independent, we have

var(β̂₁) = (1/S_xx²) ∑_{i=1}^n (x_i − x̄)² var(Y_i)
 = (1/S_xx²) ∑_{i=1}^n (x_i − x̄)² σ²
 = (1/S_xx²) S_xx σ² = σ²/S_xx.

Hence, β̂₁ ∼ N(β₁, σ²/S_xx) and a 100(1 − α)% confidence interval for β₁ is

β̂₁ ± t_{n−2, α/2} √(S²/S_xx),

where S² = (1/(n − 2)) ∑_{i=1}^n (Y_i − β̂₀ − β̂₁ x_i)² is the MVUE for σ².
Exercise 2.15 (a) The likelihood is

L(θ; y) = ∏_{i=1}^n θ y_i^{θ−1} = θⁿ ( ∏_{i=1}^n y_i )^{θ−1},

and so the log-likelihood is

ℓ(θ; y) = n log θ + (θ − 1) log( ∏_{i=1}^n y_i ).

Thus, solving the equation

dℓ/dθ = n/θ + log( ∏_{i=1}^n y_i ) = 0,

we obtain the maximum likelihood estimator of θ as θ̂ = −n / log( ∏_{i=1}^n Y_i ).

(b) Since

d²ℓ/dθ² = −n/θ²,

we have

CRLB(θ) = 1 / E( −d²ℓ/dθ² ) = θ²/n.

Thus, for large n, θ̂ is approximately N(θ, θ²/n).

(c) Here we have to replace CRLB(θ) with its estimator to obtain the approximate pivot

Q(Y, θ) = (θ̂ − θ) / √(θ̂²/n) ∼ AN(0, 1).

This gives

P( −z_{α/2} < (θ̂ − θ)/√(θ̂²/n) < z_{α/2} ) ≃ 1 − α,

where z_{α/2} is such that P(|Z| < z_{α/2}) = 1 − α, Z ∼ N(0, 1). It may be rearranged to yield

P( θ̂ − z_{α/2} √(θ̂²/n) < θ < θ̂ + z_{α/2} √(θ̂²/n) ) ≃ 1 − α.

Hence, an approximate 100(1 − α)% confidence interval for θ is

θ̂ ± z_{α/2} √(θ̂²/n).

Finally, the approximate 90% confidence interval for θ is

θ̂ ± 1.6449 √(θ̂²/n),

where θ̂ = −n / log( ∏_{i=1}^n Y_i ).
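A minimal sketch of how θ̂ and the approximate 90% interval would be computed from a sample y (assuming NumPy; the helper name theta_ci is ad hoc and no data are specified in the exercise):

```python
# MLE and approximate 90% CI for theta in the model f(y) = theta * y**(theta - 1), 0 < y < 1.
import numpy as np

def theta_ci(y, z=1.6449):
    y = np.asarray(y, float)
    n = len(y)
    theta_hat = -n / np.sum(np.log(y))      # MLE of theta
    half = z * np.sqrt(theta_hat**2 / n)    # z * sqrt(estimated CRLB)
    return theta_hat, (theta_hat - half, theta_hat + half)
```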
Exercise 2.16 (a) For a Poisson distribution we have MLE(λ) = λ̂ = Ȳ and

λ̂ ∼ AN(λ, λ/n).

Hence,

λ̂₁ − λ̂₂ ∼ AN( λ₁ − λ₂, λ₁/n₁ + λ₂/n₂ ).

So, after standardization, we get

( λ̂₁ − λ̂₂ − (λ₁ − λ₂) ) / √( λ₁/n₁ + λ₂/n₂ ) ∼ AN(0, 1).

Hence, the approximate pivot for λ₁ − λ₂ is

Q(Y, λ₁ − λ₂) = ( λ̂₁ − λ̂₂ − (λ₁ − λ₂) ) / √( λ̂₁/n₁ + λ̂₂/n₂ ) ∼ AN(0, 1).

Then, for z_{α/2} such that P(|Z| < z_{α/2}) = 1 − α, Z ∼ N(0, 1), we may write

P( −z_{α/2} < ( λ̂₁ − λ̂₂ − (λ₁ − λ₂) ) / √( λ̂₁/n₁ + λ̂₂/n₂ ) < z_{α/2} ) ≃ 1 − α,

which gives

P( λ̂₁ − λ̂₂ − z_{α/2} √( λ̂₁/n₁ + λ̂₂/n₂ ) < λ₁ − λ₂ < λ̂₁ − λ̂₂ + z_{α/2} √( λ̂₁/n₁ + λ̂₂/n₂ ) ) ≃ 1 − α.

That is, a 100(1 − α)% CI for λ₁ − λ₂ is

Ȳ₁ − Ȳ₂ ± z_{α/2} √( Ȳ₁/n₁ + Ȳ₂/n₂ ).

(b) Denote:
Y_i - density of seedlings of tree A in square meter area i around the tree;
X_i - density of seedlings of tree B in square meter area i around the tree.
Then, we may assume that Y_i ∼ Poisson(λ₁) iid and X_i ∼ Poisson(λ₂) iid.
We are interested in the difference in the mean density, i.e., in λ₁ − λ₂.
From the data we get:

λ̂₁ − λ̂₂ = 1,   λ̂₁/n₁ + λ̂₂/n₂ = 17/70.

Hence, the approximate 99% CI for λ₁ − λ₂ is

[ 1 − 2.5758 √(17/70), 1 + 2.5758 √(17/70) ] = [−0.269, 2.269].

The CI includes zero, hence, at the 1% significance level, there is no evidence to reject H₀: λ₁ − λ₂ = 0 against H₁: λ₁ − λ₂ ≠ 0; that is, there is no evidence to say, at the 1% significance level, that tree A produced a higher density of seedlings than tree B did.
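The numerical interval can be checked directly from the quoted summaries λ̂₁ − λ̂₂ = 1 and λ̂₁/n₁ + λ̂₂/n₂ = 17/70; a minimal sketch using only the standard library:

```python
# Approximate 99% CI for lambda1 - lambda2 from the summary statistics above.
import math

diff, var_hat, z = 1.0, 17 / 70, 2.5758
half = z * math.sqrt(var_hat)
print(diff - half, diff + half)    # approximately -0.269 and 2.269
```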
Exercise 2.17 (a) Y_i ∼ Bern(p) iid, i = 1, . . . , n, and we are interested in testing the hypothesis H₀: p = p₀ against H₁: p = p₁. The likelihood is

L(p; y) = p^{∑_{i=1}^n y_i} (1 − p)^{n − ∑_{i=1}^n y_i}

and so we get the likelihood ratio:

λ(y) = L(p₀; y) / L(p₁; y)
 = ( p₀^{∑ y_i} (1 − p₀)^{n − ∑ y_i} ) / ( p₁^{∑ y_i} (1 − p₁)^{n − ∑ y_i} )
 = (p₀/p₁)^{∑ y_i} ( (1 − p₀)/(1 − p₁) )^{n − ∑ y_i}
 = ( p₀(1 − p₁) / (p₁(1 − p₀)) )^{∑ y_i} ( (1 − p₀)/(1 − p₁) )ⁿ.

Then, the critical region is

R = { y : λ(y) ≤ a },

where a is a constant chosen to give significance level α. It means that we reject the null hypothesis if

( p₀(1 − p₁) / (p₁(1 − p₀)) )^{∑ y_i} ( (1 − p₀)/(1 − p₁) )ⁿ ≤ a,

which is equivalent to

( p₀(1 − p₁) / (p₁(1 − p₀)) )^{∑ y_i} ≤ b,

or, after taking logs of both sides, to

∑_{i=1}^n y_i log( p₀(1 − p₁) / (p₁(1 − p₀)) ) ≤ c,

where b and c are constants chosen to give significance level α.
When p₁ > p₀ we have

log( p₀(1 − p₁) / (p₁(1 − p₀)) ) < 0.

Hence, the critical region can be written as

R = { y : ȳ ≥ d },

for some constant d chosen to give significance level α.
By the central limit theorem, we have that (when the null hypothesis is true, i.e., p = p₀):

Ȳ ∼ AN( p₀, p₀(1 − p₀)/n ).

Hence,

Z = (Ȳ − p₀) / √( p₀(1 − p₀)/n ) ∼ AN(0, 1)

and we may write

α ≃ P(Ȳ ≥ d | p = p₀) = P(Z ≥ z_α),

where z_α = (d − p₀) / √( p₀(1 − p₀)/n ). Hence d = p₀ + z_α √( p₀(1 − p₀)/n ) and the critical region is

R = { y : ȳ ≥ p₀ + z_α √( p₀(1 − p₀)/n ) }.

(b) The critical region does not depend on p₁, hence it is the same for all p₁ > p₀ and so there is a uniformly most powerful test for H₀: p = p₀ against H₁: p > p₀.
The power function is

β(p) = P(Y ∈ R | p)
 = P( Ȳ ≥ p₀ + z_α √( p₀(1 − p₀)/n ) | p )
 = P( (Ȳ − p)/√( p(1 − p)/n ) ≥ ( p₀ + z_α √( p₀(1 − p₀)/n ) − p ) / √( p(1 − p)/n ) )
 ≃ 1 − Φ{ g(p) },

where

g(p) = ( p₀ + z_α √( p₀(1 − p₀)/n ) − p ) / √( p(1 − p)/n )

and Φ denotes the cumulative distribution function of the standard normal distribution.
Question 2.18 (a) Let us denote by Y_i ∼ Bern(p) the response of mouse i to the drug candidate. Then, from Exercise 2.17, we have the critical region

R = { y : ȳ ≥ p₀ + z_α √( p₀(1 − p₀)/n ) }.

Here p₀ = 0.1, n = 30, α = 0.05, z_α = 1.6449. It gives

R = { y : ȳ ≥ 0.19 }.

From the sample we have p̂ = ȳ = 6/30 = 0.2, so there is evidence to reject the null hypothesis at the significance level α = 0.05.

(b) The power function is

β(p) ≃ 1 − Φ{ g(p) },  where g(p) = ( p₀ + z_α √( p₀(1 − p₀)/n ) − p ) / √( p(1 − p)/n ).

When n = 30, p₀ = 0.1 and p = 0.2 we obtain, for z_{0.05} = 1.6449,

g(0.2) = −0.1356 and Φ(−0.1356) = 1 − Φ(0.1356) = 1 − 0.5539 = 0.4461.

This gives power equal to β(0.2) = 0.5539, which means that the probability of a type II error is 0.4461, a rather high value. This is because the value of p under the alternative hypothesis is close to the value under the null hypothesis and also the number of observations is not large.
To find what n is needed to obtain power β(0.2) = 0.8 we calculate:

g(p) = ( 0.1 + z_{0.05} √(0.09/n) − 0.2 ) / √(0.16/n) = 0.75 z_{0.05} − 0.25 √n.

For β(p) ≃ 1 − Φ{ g(p) } to be equal to 0.8 we need Φ{ g(p) } = 0.2. From statistical tables we obtain g(p) = −0.8416. Hence, for z_{0.05} = 1.6449,

n = (4 × 0.8416 + 3 × 1.6449)² = 68.9.

At least n = 69 mice are needed to obtain a test with power as high as 0.8 for detecting that the proportion is 0.2 rather than 0.1.
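The power and sample-size figures in this solution can be reproduced with the normal approximation; a minimal sketch, assuming SciPy:

```python
# Power of the test of H0: p = 0.1 vs H1: p > 0.1 at p = 0.2, and the n needed for power 0.8.
from math import sqrt, ceil
from scipy.stats import norm

p0, p1, alpha = 0.1, 0.2, 0.05
z = norm.ppf(1 - alpha)                                   # 1.6449

def power(n):
    g = (p0 + z * sqrt(p0 * (1 - p0) / n) - p1) / sqrt(p1 * (1 - p1) / n)
    return 1 - norm.cdf(g)

print(round(power(30), 4))                                # ~0.5539
z_beta = norm.ppf(0.8)                                    # 0.8416
n_req = ((z * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))) / (p1 - p0))**2
print(n_req, ceil(n_req))                                 # ~68.9 -> 69
```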