HW 1 - Solutions
Intermediate Statistics - 10-705 / 36-705
Fall 2015
Problem 1: Wasserman 1.1 [20 points]
Case An ↑ A (increasing sequence)

Recall the construction B_1 = A_1 and B_j = A_j \ A_{j−1} for j ≥ 2. We need to prove: 1) B_j ∩ B_k = ∅ for all j ≠ k; 2) A_n = ∪_{i=1}^n A_i = ∪_{i=1}^n B_i and ∪_{i=1}^∞ B_i = ∪_{i=1}^∞ A_i.

1. It suffices to prove B_j ∩ B_k = ∅ for all j < k, without loss of generality. First of all, note that we can rewrite B_j = A_j \ A_{j−1} = A_j ∩ A_{j−1}^C. We have

   B_j ∩ B_k = (A_j ∩ A_{j−1}^C) ∩ (A_k ∩ A_{k−1}^C).

   Since j < k, we have A_j ⊆ A_{k−1}, so that A_j ∩ A_{k−1}^C = ∅. Thus, B_j ∩ B_k = ∅.
2. It is easy to see that A_n = ∪_{i=1}^n A_i since (A_n) is an increasing sequence. We can use induction to prove ∪_{i=1}^n A_i = ∪_{i=1}^n B_i.

   • BASE CASE. If n = 1, then ∪_{i=1}^n B_i = B_1 = A_1 = ∪_{i=1}^n A_i.

   • INDUCTIVE STEP. Now suppose ∪_{i=1}^n B_i = ∪_{i=1}^n A_i = A_n for some arbitrary n ∈ ℕ.
   Then:

   ∪_{i=1}^{n+1} B_i = (∪_{i=1}^n B_i) ∪ B_{n+1}
                     = (∪_{i=1}^n A_i) ∪ B_{n+1}
                     = A_n ∪ (A_{n+1} ∩ A_n^C)
                     = (A_n ∪ A_n^C) ∩ (A_n ∪ A_{n+1})
                     = Ω ∩ A_{n+1} = A_{n+1} = ∪_{i=1}^{n+1} A_i.
Finally, note that ∪_{i=1}^∞ A_i = ∪_{n=1}^∞ ∪_{i=1}^n A_i and ∪_{i=1}^∞ B_i = ∪_{n=1}^∞ ∪_{i=1}^n B_i, so ∪_{i=1}^n A_i = ∪_{i=1}^n B_i for all n implies

∪_{i=1}^∞ A_i = ∪_{n=1}^∞ ∪_{i=1}^n A_i = ∪_{n=1}^∞ ∪_{i=1}^n B_i = ∪_{i=1}^∞ B_i.

With these facts, countable additivity gives the continuity: P(A_n) = ∑_{i=1}^n P(B_i) → ∑_{i=1}^∞ P(B_i) = P(∪_{i=1}^∞ B_i) = P(∪_{i=1}^∞ A_i) = P(A).
Alternate proof. Since B_i = A_i \ A_{i−1} ⊆ A_i for all i, clearly ∪_{i=1}^∞ B_i ⊆ ∪_{i=1}^∞ A_i. To prove the reverse inclusion, let x ∈ ∪_{i=1}^∞ A_i and let k be the smallest number such that x ∈ A_k. Then x ∉ A_{k−1} and hence x ∈ A_k \ A_{k−1} = B_k. Therefore ∪_{i=1}^∞ B_i ⊇ ∪_{i=1}^∞ A_i.
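As a quick sanity check (not part of the graded solution), the following Python sketch verifies the two claims on a concrete increasing family; the choice A_n = {1, ..., n} is an arbitrary example.

```python
from itertools import combinations

# Increasing family A_n = {1, ..., n}, so A_1 ⊆ A_2 ⊆ ...
N = 10
A = [set(range(1, n + 1)) for n in range(1, N + 1)]

# Disjointification: B_1 = A_1, B_n = A_n \ A_{n-1}
B = [A[0]] + [A[n] - A[n - 1] for n in range(1, N)]

# 1) the B_n are pairwise disjoint
assert all(Bj.isdisjoint(Bk) for Bj, Bk in combinations(B, 2))

# 2) partial unions agree: ∪_{i≤n} B_i = ∪_{i≤n} A_i = A_n
for n in range(N):
    assert set().union(*B[: n + 1]) == set().union(*A[: n + 1]) == A[n]
```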
Case An ↓ A (decreasing sequence)

Convergence of set sequences is preserved under complementation. Thus, if A_n is a decreasing sequence converging to A, then A_n^C is an increasing sequence converging to A^C. Therefore, following the steps used for the increasing case we obtain the continuity

P(A_n^C) → P(A^C),

which implies

P(A_n) = 1 − P(A_n^C) → 1 − P(A^C) = P(A).
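For intuition only, a small Monte Carlo sketch of continuity from above: under the Uniform(0, 1) measure, the (hypothetical example) decreasing family A_n = (0, 1/n) has A_n ↓ ∅, and the empirical P(A_n) shrinks to P(∅) = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)  # draws from the Uniform(0, 1) measure

# Decreasing family A_n = (0, 1/n) with A_n ↓ ∅, so P(A_n) should ↓ 0
for n in [1, 2, 5, 10, 100, 1000]:
    print(n, np.mean(u < 1 / n))  # empirical P(A_n) ≈ 1/n → 0
```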
Problem 2: Wasserman 1.4 [0 points]

Case (∪_{i∈I} A_i)^C = ∩_{i∈I} A_i^C

a) “(∪_{i∈I} A_i)^C ⊆ ∩_{i∈I} A_i^C”. For any x ∈ (∪_{i∈I} A_i)^C, x ∉ ∪_{i∈I} A_i ⇒ there is no i ∈ I such that x ∈ A_i ⇒ x ∈ A_i^C for all i ∈ I ⇒ x ∈ ∩_{i∈I} A_i^C.

b) “(∪_{i∈I} A_i)^C ⊇ ∩_{i∈I} A_i^C”. For any x ∈ ∩_{i∈I} A_i^C, x ∈ A_i^C for all i ∈ I ⇒ x ∉ A_i for all i ∈ I ⇒ x ∉ ∪_{i∈I} A_i ⇒ x ∈ (∪_{i∈I} A_i)^C.

Case (∩_{i∈I} A_i)^C = ∪_{i∈I} A_i^C

Let B_i = A_i^C, i ∈ I. Then, as shown above, (∪_{i∈I} B_i)^C = ∩_{i∈I} B_i^C ⇔ ∪_{i∈I} B_i = (∩_{i∈I} B_i^C)^C ⇔ ∪_{i∈I} A_i^C = (∩_{i∈I} A_i)^C.
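A minimal Python sanity check of both identities on finite sets; the universe Ω = {0, ..., 9} and the sets A_i are arbitrary example choices.

```python
# De Morgan's laws on a finite universe; Ω and the A_i are arbitrary examples
omega = set(range(10))
A = [{1, 2, 3}, {2, 3, 4, 7}, {3, 5, 9}]

union = set().union(*A)
inter = set.intersection(*A)
comp = lambda s: omega - s  # complement relative to Ω

assert comp(union) == set.intersection(*[comp(a) for a in A])
assert comp(inter) == set().union(*[comp(a) for a in A])
```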
Problem 3: Wasserman 1.5 [15 points]
The sample space can be defined in terms of the number of tosses needed to obtain 2 heads in total, S = {2, 3, ...}, where we start from 2 because to get two heads we need at least 2 tosses; S = {HH, HTH, THH, HTTH, ...} is an alternative valid representation. Now note that “k tosses are needed to get 2 heads” ⇔ “on the first k − 1 tosses we get exactly 1 head, and then we get 1 head at the k-th toss”. Let X be the number of tosses needed to get 2 heads, and C_i be the result of the i-th toss. Since we are dealing with a fair coin, P(C_i = H) = P(C_i = T) = 1/2. Thus
P(X = k) = P(C_1 = H) × P(C_2 = T) × ... × P(C_{k−1} = T) × P(C_k = H)
         + P(C_1 = T) × P(C_2 = H) × ... × P(C_{k−1} = T) × P(C_k = H)
         + ...
         + P(C_1 = T) × P(C_2 = T) × ... × P(C_{k−1} = H) × P(C_k = H)
         = (k − 1)/2^k,

since there are k − 1 possible positions for the first head and each sequence of k tosses has probability (1/2)^k.
Alternate solution. Define the random variable W_i = 0 if the i-th toss is a tail and W_i = 1 if it is a head. Then W_1, ..., W_k are i.i.d. Bernoulli(p = 1/2), so that ∑_{i=1}^{k−1} W_i ∼ Binomial(k − 1, 1/2) and

P(X = k) = P(∑_{i=1}^{k−1} W_i = 1) × P(W_k = 1)
         = [(k−1 choose 1) (1/2)^1 (1 − 1/2)^{k−2}] × (1/2)
         = ((k − 1)!/(k − 2)!) (1/2^k)
         = (k − 1)/2^k
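A short Monte Carlo sketch (illustrative only) comparing the empirical distribution of the number of tosses needed for 2 heads against the formula (k − 1)/2^k:

```python
import numpy as np

rng = np.random.default_rng(0)

def tosses_until_two_heads():
    """Flip a fair coin until 2 heads have appeared; return the number of tosses."""
    heads = tosses = 0
    while heads < 2:
        heads += rng.integers(2)  # 1 = head, 0 = tail
        tosses += 1
    return tosses

samples = np.array([tosses_until_two_heads() for _ in range(100_000)])
for k in range(2, 8):
    print(k, np.mean(samples == k), (k - 1) / 2**k)  # empirical vs. (k−1)/2^k
```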
Problem 4: Wasserman 1.6 [0 points]
The condition “P(A) = P(B) ⇔ |A| = |B|” implies P({i}) = P({j}) for all i, j ∈ Ω, which in turn implies there exists c such that P({i}) = c for all i ∈ Ω. Yet, for c = 0 we get P(Ω) = ∑_{i∈Ω} P({i}) = 0, and for any c > 0 we have P(Ω) = ∑_{i∈Ω} P({i}) = ∞ (since Ω is countably infinite); neither satisfies the second axiom of probability, P(Ω) = 1 (see Definition 1.5).
Problem 5: Wasserman 1.7 [0 points]
We want to construct a sequence of events B_1, B_2, ... such that:

1. B_j ∩ B_k = ∅ for all j ≠ k;
2. ∪_{i=1}^∞ B_i = ∪_{i=1}^∞ A_i;
3. B_i ⊆ A_i for all i.

Conditions 1), 2), and 3) will imply P(∪_{i=1}^∞ A_i) = P(∪_{i=1}^∞ B_i) = ∑_{i=1}^∞ P(B_i) ≤ ∑_{i=1}^∞ P(A_i).
Let B_n = A_n \ ∪_{i=1}^{n−1} A_i. We now show that conditions 1), 2), and 3) are satisfied by the sequence B_1, B_2, ....
1. It suffices to prove B_j ∩ B_k = ∅ for all j < k, without loss of generality. First of all, note that we can rewrite B_j = A_j ∩ (∪_{i=1}^{j−1} A_i)^C. We have

   B_j ∩ B_k = (A_j ∩ (∪_{i=1}^{j−1} A_i)^C) ∩ (A_k ∩ (∪_{i=1}^{k−1} A_i)^C).

   Since j < k, A_j ⊆ ∪_{i=1}^{k−1} A_i, so that A_j ∩ (∪_{i=1}^{k−1} A_i)^C = ∅. Thus, B_j ∩ B_k = ∅.

2. We use induction to show that ∪_{i=1}^n B_i = ∪_{i=1}^n A_i for all n ∈ ℕ.

   • BASE STEP. If n = 1, then ∪_{i=1}^n B_i = B_1 = A_1 = ∪_{i=1}^n A_i.

   • INDUCTIVE STEP. Now suppose ∪_{i=1}^n B_i = ∪_{i=1}^n A_i for some arbitrary n ∈ ℕ. Then:
   ∪_{i=1}^{n+1} B_i = (∪_{i=1}^n B_i) ∪ B_{n+1}
                     = (∪_{i=1}^n A_i) ∪ B_{n+1}
                     = (∪_{i=1}^n A_i) ∪ (A_{n+1} ∩ (∪_{i=1}^n A_i)^C)
                     = ((∪_{i=1}^n A_i) ∪ (∪_{i=1}^n A_i)^C) ∩ ((∪_{i=1}^n A_i) ∪ A_{n+1})
                     = Ω ∩ (∪_{i=1}^{n+1} A_i) = ∪_{i=1}^{n+1} A_i
   Then note that ∪_{i=1}^∞ A_i = ∪_{n=1}^∞ ∪_{i=1}^n A_i and ∪_{i=1}^∞ B_i = ∪_{n=1}^∞ ∪_{i=1}^n B_i, so ∪_{i=1}^n A_i = ∪_{i=1}^n B_i for all n implies

   ∪_{i=1}^∞ A_i = ∪_{n=1}^∞ ∪_{i=1}^n A_i = ∪_{n=1}^∞ ∪_{i=1}^n B_i = ∪_{i=1}^∞ B_i.
   Alternate proof. Since B_i ⊆ A_i for all i (as shown below at point 3), clearly ∪_{i=1}^∞ B_i ⊆ ∪_{i=1}^∞ A_i. To prove the reverse inclusion, let x ∈ ∪_{i=1}^∞ A_i and let k be the smallest number such that x ∈ A_k. Then x ∉ ∪_{i=1}^{k−1} A_i and hence x ∈ A_k ∩ (∪_{i=1}^{k−1} A_i)^C = B_k. Therefore ∪_{i=1}^∞ B_i ⊇ ∪_{i=1}^∞ A_i.
3. B_i ⊆ A_i for all i. It is easy to see that B_i = A_i ∩ (∪_{j=1}^{i−1} A_j)^C ⊆ A_i for all i.
Comment. Compare this proof with the proof of Problem 1.1. Where do the two proofs differ?
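For illustration (not part of the solution), the following Python sketch checks conditions 1)–3) and the resulting union bound under a uniform probability on a finite universe; Ω and the sets A_i are arbitrary examples:

```python
from itertools import combinations

# Uniform probability on a finite universe; Ω and the A_i are arbitrary examples
omega = set(range(100))
A = [set(range(0, 30)), set(range(20, 60)), set(range(50, 70)), {5, 95}]
P = lambda s: len(s) / len(omega)

# B_n = A_n \ (A_1 ∪ ... ∪ A_{n−1})
B = [A[n] - set().union(*A[:n]) for n in range(len(A))]

assert all(Bj.isdisjoint(Bk) for Bj, Bk in combinations(B, 2))  # condition 1
assert set().union(*B) == set().union(*A)                       # condition 2
assert all(b <= a for b, a in zip(B, A))                        # condition 3
assert P(set().union(*A)) <= sum(P(a) for a in A)               # union bound
```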
Problem 6: Wasserman 2.20 [20 points]
• Let Z = X − Y, where X and Y are independent Uniform(0, 1) random variables. We can easily see that Z ∈ [−1, 1]. To calculate F_Z(z) we first consider the case −1 ≤ z ≤ 0:

  F_Z(z) = ∫_0^{1+z} ∫_{x−z}^1 1 dy dx = z²/2 + z + 1/2.

  Now for 0 < z ≤ 1, F_Z(z) is given by:

  F_Z(z) = 1 − ∫_z^1 ∫_0^{x−z} 1 dy dx = −z²/2 + z + 1/2.

  So the density of Z is given by:

  f_Z(z) = 1 + z  if −1 ≤ z ≤ 0,
           1 − z  if 0 < z ≤ 1,
           0      otherwise,

  which is known as the triangular distribution.
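A brief simulation sketch (illustrative only) comparing the empirical CDF of Z = X − Y against the formula derived above:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(size=(2, 1_000_000))
z = x - y

F = lambda t: t**2 / 2 + t + 0.5 if t <= 0 else -(t**2) / 2 + t + 0.5
for t in [-0.75, -0.25, 0.0, 0.25, 0.75]:
    print(t, np.mean(z <= t), F(t))  # empirical vs. theoretical F_Z(t)
```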
• Let W = X/Y, so that W ∈ (0, ∞). We have:

  P(W ≤ w) = P(X ≤ wY) = ∫_0^1 ∫_0^{min(1, wy)} 1 dx dy = ∫_0^1 min(1, wy) dy.

  Consider now the case w ≤ 1; then the integral becomes:

  ∫_0^1 wy dy = w/2.

  Similarly, for w > 1 we get:

  ∫_0^{1/w} wy dy + ∫_{1/w}^1 1 dy = 1 − 1/(2w).

  So the density of W is given by:

  f_W(w) = 1/2      if 0 ≤ w ≤ 1,
           1/(2w²)  if w > 1,
           0        otherwise.
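And similarly for W = X/Y, again just a sanity-check sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(size=(2, 1_000_000))
w = x / y

F = lambda t: t / 2 if t <= 1 else 1 - 1 / (2 * t)
for t in [0.25, 0.5, 1.0, 2.0, 10.0]:
    print(t, np.mean(w <= t), F(t))  # empirical vs. theoretical F_W(t)
```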
Problem 7: Wasserman 2.21 [0 points]
Here X_1, ..., X_n are i.i.d. Exponential(β) and Y = max{X_1, ..., X_n}. We have

P(Y ≤ y) = P(max{X_1, ..., X_n} ≤ y)
         = P(X_1 ≤ y, ..., X_n ≤ y)
         = ∏_{i=1}^n P(X_i ≤ y)   (by independence)
         = P(X_1 ≤ y)^n
         = 0 for y ≤ 0, and (1 − e^{−y/β})^n for y > 0
         = (1 − e^{−y/β})^n I_{(0,∞)}(y).

Thus, the pdf is f_Y(y) = (n/β) (1 − e^{−y/β})^{n−1} e^{−y/β} I_{(0,∞)}(y).
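A short simulation sketch checking the derived CDF; the values n = 5 and β = 2 are arbitrary examples:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5, 2.0  # arbitrary example values
y = rng.exponential(scale=beta, size=(1_000_000, n)).max(axis=1)

F = lambda t: (1 - np.exp(-t / beta)) ** n  # derived CDF of Y
for t in [1.0, 3.0, 6.0, 10.0]:
    print(t, np.mean(y <= t), F(t))  # empirical vs. theoretical
```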
Problem 8: Wasserman 3.2 [15 points]
a) “P(X = c) = 1 ⇒ V(X) = 0”. For any integer k,

E[X^k] = ∫_ℝ x^k dP_X = c^k.

Thus V(X) = E[X²] − E[X]² = c² − c² = 0.
b) “P(X = c) = 1 ⇐ V(X) = 0”. By Chebyshev’s inequality, for all ε > 0,

P(|X − E[X]| > ε) ≤ V(X)/ε² = 0.

Letting ε ↓ 0 and using continuity of probability (Problem 1), P(|X − E[X]| > 0) = 0. Thus, P(X = c) = 1 for c = E[X].
Alternate solution for b). Assume P(X = E[X]) < 1, so that there exists ε_0 > 0 such that P((X − E[X])² > ε_0) > 0. Then, since (X − E[X])² is nonnegative, the property shown in Problem 3.7 applies and

0 = V(X) = ∫_0^∞ P((X − E[X])² > ε) dε
         ≥ ∫_0^{ε_0} P((X − E[X])² > ε) dε
         ≥ ∫_0^{ε_0} P((X − E[X])² > ε_0) dε
         = ε_0 P((X − E[X])² > ε_0) > 0,

which is a contradiction. Hence, P(X = c) = 1 with c = E[X].
Problem 9: Wasserman 3.3 [15 points]
Here X_1, ..., X_n are i.i.d. Uniform(0, 1) and Y_n = max{X_1, ..., X_n}. We have:

P(Y_n ≤ y) = P(max{X_1, ..., X_n} ≤ y)
           = P(X_1 ≤ y, ..., X_n ≤ y)
           = P(X_1 ≤ y) × ... × P(X_n ≤ y)
           = P(X_1 ≤ y)^n
           = 0 for y < 0, y^n for y ∈ [0, 1], and 1 for y > 1
           = y^n I_{[0,1]}(y) + I_{(1,∞)}(y).

Since Y_n is nonnegative, to compute E[Y_n] we can use the property shown in Problem 3.7:

E[Y_n] = ∫_0^∞ P(Y_n > y) dy
       = ∫_0^∞ (1 − P(Y_n ≤ y)) dy
       = ∫_0^1 (1 − y^n) dy
       = [y − y^{n+1}/(n+1)]_0^1
       = 1 − 1/(n+1)
       = n/(n+1)
Alternate solution. We can note that the pdf of Y_n is n y^{n−1} I_{[0,1]}(y), which is the pdf of a Beta(n, 1) distribution, whose expectation is known to be n/(n+1).
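A quick Monte Carlo sketch (illustrative only) of E[Y_n] = n/(n + 1):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [1, 2, 5, 20]:
    y = rng.uniform(size=(1_000_000, n)).max(axis=1)
    print(n, y.mean(), n / (n + 1))  # empirical E[Y_n] vs. n/(n+1)
```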
Problem 10: Wasserman 3.7 [15 points]
By integration by parts:

∫_0^∞ P(X > x) dx = ∫_0^∞ (1 − F(x)) dx
                  = [x (1 − F(x))]_0^∞ + ∫_0^∞ x f(x) dx
                  = lim_{x→∞} x (1 − F(x)) − 0 + E[X]
                  = E[X],

where lim_{x→∞} x (1 − F(x)) = 0 whenever E[X] < ∞.
Alternate solution. Let f_X(x) be a pdf of X. Then

∫_0^∞ P(X > x) dx = ∫_0^∞ ∫_x^∞ f_X(t) dt dx
                  = ∫_0^∞ ∫_0^t f_X(t) dx dt   (Fubini’s theorem)
                  = ∫_0^∞ t f_X(t) dt
                  = E[X]
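A quick numerical sketch of the identity; the choice X ~ Exponential(β = 2), for which P(X > x) = e^{−x/β} and E[X] = β, is an arbitrary example:

```python
import numpy as np

beta = 2.0  # arbitrary example; X ~ Exponential(β) has E[X] = β
x = np.linspace(0.0, 60.0, 600_001)  # fine grid; 60 effectively truncates the tail
survival = np.exp(-x / beta)         # P(X > x) = e^{-x/β}
print(survival.sum() * (x[1] - x[0]), beta)  # ∫_0^∞ P(X > x) dx ≈ E[X]
```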