ETH Zürich, Spring 2017
Lecturer: Prof. Dr. Sara van de Geer
Coordinator: Thomas Cayé
Probability and Statistics
Solution sheet 5
Solution 5.1
(a) Remember that $x \bmod 1 = x - \lfloor x \rfloor$. Note that the chord is longer than a side of the triangle if $(V-U) \bmod 1 \in \left(\tfrac13, \tfrac23\right)$. Hence
\[
P\Big((V-U) \bmod 1 \in \big(\tfrac13,\tfrac23\big)\Big)
= \iint_{(0,1)^2} \mathbf{1}_{\{(y-x)\bmod 1 \in (\frac13,\frac23)\}}\,dy\,dx
= \int_0^1\!\int_0^1 \mathbf{1}_{\{y \in (\frac13+x,\,\frac23+x) \bmod 1\}}\,dy\,dx
= \int_0^1 \frac13\,dx = \frac13,
\]
since for every fixed $x$ the admissible set of $y$ has Lebesgue measure $\frac13$.
(b) Let $r$ be the point chosen on the radius; it is a uniform random variable over $[0,1]$. The length of the chord is $2\sqrt{1-r^2}$. Then
\[
P\Big(2\sqrt{1-r^2} \ge \sqrt{3}\Big) = P\Big(1-r^2 \ge \tfrac34\Big) = P\Big(r \le \tfrac12\Big) = \frac12.
\]
(c) Let $(x,y)$ be the point chosen in the circle; it is a uniform random variable over $B(0,1)$. The length of the chord is $2\sqrt{1-(x^2+y^2)}$. Thus
\[
P\Big(2\sqrt{1-(x^2+y^2)} \ge \sqrt{3}\Big) = P\Big(1-(x^2+y^2) \ge \tfrac34\Big)
= P\Big(x^2+y^2 \le \tfrac14\Big)
= \iint_{B(0,\frac12)} \frac{1}{\pi}\,dx\,dy = \frac14.
\]
Updated: March 24, 2017
(d) This is not a contradiction. What this shows is that there is no canonical way to "pick a chord uniformly at random" in a circle: we have to specify the probability measure we are interested in before asking for the probability of an event.
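The three answers can be checked numerically. The following is a small Monte Carlo sketch (not part of the solution sheet; the sample size and seed are arbitrary) that samples a chord under each of the three measures and estimates the probability that it exceeds the triangle side length $\sqrt{3}$:

```python
import math
import random

random.seed(0)
N = 200_000
SIDE = math.sqrt(3)  # side length of the inscribed equilateral triangle

# (a) Two uniform endpoints: the chord is long iff (V - U) mod 1 in (1/3, 2/3).
long_a = sum(1/3 < (random.random() - random.random()) % 1 < 2/3
             for _ in range(N)) / N

# (b) Uniform point r on a radius: the chord has length 2*sqrt(1 - r^2).
long_b = sum(2 * math.sqrt(1 - random.random() ** 2) > SIDE
             for _ in range(N)) / N

# (c) Uniform midpoint in the disk (rejection sampling from the square):
#     the chord has length 2*sqrt(1 - (x^2 + y^2)).
hits = kept = 0
while kept < N:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        kept += 1
        hits += 2 * math.sqrt(1 - (x * x + y * y)) > SIDE
long_c = hits / N

print(long_a, long_b, long_c)  # close to 1/3, 1/2 and 1/4 respectively
```

The three estimates disagree exactly as parts (a)-(c) predict, which is the content of part (d).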
Solution 5.2 Let us suppose, first, that $\mu_1 = \mu_2 = 0$. We just have to use the convolution formula:
\begin{align*}
f_{X_1+X_2}(x)
&= \frac{1}{2\pi\sigma_1\sigma_2}\int_{-\infty}^{\infty}\exp\Big(-\frac{y^2}{2\sigma_1^2}\Big)\exp\Big(-\frac{(x-y)^2}{2\sigma_2^2}\Big)\,dy\\
&= \frac{1}{2\pi\sigma_1\sigma_2}\int_{-\infty}^{\infty}\exp\Big(-\frac{\sigma_2^2y^2+\sigma_1^2(x-y)^2}{2\sigma_1^2\sigma_2^2}\Big)\,dy\\
&= \frac{1}{2\pi\sigma_1\sigma_2}\int_{-\infty}^{\infty}\exp\Big(-\frac{\sigma_2^2y^2+\sigma_1^2x^2+\sigma_1^2y^2-2\sigma_1^2xy}{2\sigma_1^2\sigma_2^2}\Big)\,dy\\
&= \frac{\exp\big(-\frac{x^2}{2\sigma_2^2}\big)}{2\pi\sigma_1\sigma_2}\int_{-\infty}^{\infty}\exp\bigg(-(\sigma_1^2+\sigma_2^2)\,\frac{y^2-\frac{2\sigma_1^2xy}{\sigma_1^2+\sigma_2^2}}{2\sigma_1^2\sigma_2^2}\bigg)\,dy\\
&= \frac{\exp\big(-\frac{x^2}{2\sigma_2^2}+\frac{\sigma_1^2x^2}{2\sigma_2^2(\sigma_1^2+\sigma_2^2)}\big)}{2\pi\sigma_1\sigma_2}\int_{-\infty}^{\infty}\exp\bigg(-\frac{\big(y-\frac{\sigma_1^2x}{\sigma_1^2+\sigma_2^2}\big)^2}{2\sigma_1^2\sigma_2^2/(\sigma_1^2+\sigma_2^2)}\bigg)\,dy\\
&= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_1^2+\sigma_2^2}}\exp\Big(-\frac{x^2}{2(\sigma_1^2+\sigma_2^2)}\Big),
\end{align*}
since the remaining Gaussian integral equals $\sqrt{2\pi}\,\sigma_1\sigma_2/\sqrt{\sigma_1^2+\sigma_2^2}$. This is the density of a $N(0,\sigma_1^2+\sigma_2^2)$ random variable.
For the general case note that $X_i-\mu_i$ is distributed as $N(0,\sigma_i^2)$. So $(X_1-\mu_1)+(X_2-\mu_2)\sim N(0,\sigma_1^2+\sigma_2^2)$, and therefore $X_1+X_2\sim N(\mu_1+\mu_2,\,\sigma_1^2+\sigma_2^2)$.
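As a sanity check, the result can be verified empirically; the sketch below (parameters chosen arbitrarily for illustration) compares the sample mean and variance of $X_1+X_2$ with $\mu_1+\mu_2$ and $\sigma_1^2+\sigma_2^2$:

```python
import random
import statistics

random.seed(0)
mu1, sigma1 = 1.0, 2.0   # illustrative parameters
mu2, sigma2 = -0.5, 1.5
N = 100_000

samples = [random.gauss(mu1, sigma1) + random.gauss(mu2, sigma2)
           for _ in range(N)]

mean = statistics.fmean(samples)     # should be near mu1 + mu2 = 0.5
var = statistics.pvariance(samples)  # should be near sigma1^2 + sigma2^2 = 6.25
print(mean, var)
```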
Solution 5.3
(a) The cumulative distribution function of $Y$ for $y \ge 0$ is given by
\[
F_Y(y) = P(Y \le y) = P(-\sqrt{y} \le X \le \sqrt{y}) = 2F_X(\sqrt{y}) - 1.
\]
Then, taking the derivative, we have
\[
f_Y(y) = f_X(\sqrt{y})\,y^{-\frac12}\mathbf{1}_{y\ge0} = c\,e^{-\frac{y}{2}}y^{-\frac12}\mathbf{1}_{y\ge0}.
\]
(b) By the convolution formula we have:
\begin{align*}
f_{Y_1+Y_2}(x)
&= \int_0^x f_Y(x-y)f_Y(y)\,dy\,\mathbf{1}_{x\ge0}\\
&= c_1^2 \int_0^x (x-y)^{-\frac12}e^{-\frac{x-y}{2}}\,y^{-\frac12}e^{-\frac{y}{2}}\,dy\,\mathbf{1}_{x\ge0}\\
&= c_1^2\, e^{-\frac{x}{2}} \int_0^x (x-y)^{-\frac12}y^{-\frac12}\,dy\,\mathbf{1}_{x\ge0}\\
&= c_1^2 \int_0^1 x\,(x-ux)^{-\frac12}(ux)^{-\frac12}\,du\; e^{-\frac{x}{2}}\mathbf{1}_{x\ge0} \qquad (y = ux)\\
&= c_1^2 \int_0^1 (1-u)^{-\frac12}u^{-\frac12}\,du\; e^{-\frac{x}{2}}\mathbf{1}_{x\ge0}.
\end{align*}
This is the density of an exponential random variable with parameter $\frac12$.
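With $X_i$ standard normal, the claim says that $Y_1+Y_2 = X_1^2+X_2^2$ is exponential with parameter $\frac12$ (the $\chi_2^2$ distribution). A quick empirical check (seed and sample size arbitrary):

```python
import math
import random
import statistics

random.seed(0)
N = 100_000
samples = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(N)]

mean = statistics.fmean(samples)          # Exp(1/2) has mean 2
tail = sum(v > 2.0 for v in samples) / N  # Exp(1/2): P(Y1 + Y2 > 2) = e^{-1}
print(mean, tail, math.exp(-1))
```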
(c) By the previous questions, the base case is true. Now let us prove the inductive step. Suppose that the proposition is true for $n-1$; then
\begin{align*}
f_{\sum_{i=1}^n Y_i}(x)
&= \int_0^x f_Y(x-y)\,f_{\sum_{i=1}^{n-1} Y_i}(y)\,dy\,\mathbf{1}_{x\ge0}\\
&= c_1 c_{n-1} \int_0^x (x-y)^{-\frac12}e^{-\frac{x-y}{2}}\,y^{\frac{n-1}{2}-1}e^{-\frac{y}{2}}\,dy\,\mathbf{1}_{x\ge0}\\
&= c_1 c_{n-1}\, e^{-\frac{x}{2}} \int_0^1 (x-xu)^{-\frac12}(xu)^{\frac{n-1}{2}-1}\,x\,du\,\mathbf{1}_{x\ge0} \qquad (y = xu)\\
&= c_1 c_{n-1} \int_0^1 (1-u)^{-\frac12}u^{\frac{n-1}{2}-1}\,du\; e^{-\frac{x}{2}}x^{\frac{n}{2}-1}\mathbf{1}_{x\ge0},
\end{align*}
which is of the form $c_n\,e^{-\frac{x}{2}}x^{\frac{n}{2}-1}\mathbf{1}_{x\ge0}$, as claimed.
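The induction shows that $\sum_{i=1}^n X_i^2$ has density proportional to $e^{-x/2}x^{n/2-1}$ on $[0,\infty)$, i.e. the $\chi_n^2$ density, whose mean is $n$ and whose variance is $2n$. A rough simulation check ($n$ and sample size chosen arbitrarily):

```python
import random
import statistics

random.seed(0)
n, N = 5, 100_000
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(N)]

mean = statistics.fmean(samples)     # chi^2_n has mean n = 5
var = statistics.pvariance(samples)  # and variance 2n = 10
print(mean, var)
```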
Solution 5.4
(a) To find the density we differentiate the cumulative distribution function
\[
F(t) := P(X \le t) = 1 - P(X \ge t) = 1 - e^{-\lambda t}.
\]
Then its density is
\[
f(t) := F'(t) = \lambda e^{-\lambda t}.
\]
We can calculate its mean as
\[
E(X) = \int_0^\infty t\,\lambda e^{-\lambda t}\,dt = -te^{-\lambda t}\Big|_0^\infty + \int_0^\infty e^{-\lambda t}\,dt = \frac{1}{\lambda}.
\]
Its second moment is
\[
E\big(X^2\big) = \int_0^\infty \lambda t^2 e^{-\lambda t}\,dt = -t^2e^{-\lambda t}\Big|_0^\infty + 2\int_0^\infty te^{-\lambda t}\,dt = \frac{2}{\lambda^2}.
\]
Finally,
\[
\mathrm{Var}(X) = E\big(X^2\big) - E(X)^2 = \frac{1}{\lambda^2}.
\]
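These moments can be checked against simulated draws; a minimal sketch ($\lambda$ and sample size illustrative):

```python
import random
import statistics

random.seed(0)
lam, N = 2.0, 200_000
samples = [random.expovariate(lam) for _ in range(N)]

mean = statistics.fmean(samples)     # should be near 1/lam = 0.5
var = statistics.pvariance(samples)  # should be near 1/lam^2 = 0.25
print(mean, var)
```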
(b) We have to compute
\[
P(\min\{X_1,X_2\} > t) = P(X_1 > t,\, X_2 > t) = P(X_1 > t)\,P(X_2 > t) = e^{-\lambda_1 t}e^{-\lambda_2 t} = e^{-(\lambda_1+\lambda_2)t}.
\]
This is exactly the survival function of an $E(\lambda_1+\lambda_2)$ random variable, hence $\min\{X_1,X_2\} \sim E(\lambda_1+\lambda_2)$.
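Empirically, the minimum of two independent exponentials does behave like an $E(\lambda_1+\lambda_2)$ draw; a sketch with arbitrary rates:

```python
import math
import random
import statistics

random.seed(0)
lam1, lam2, N = 1.0, 2.0, 200_000
mins = [min(random.expovariate(lam1), random.expovariate(lam2))
        for _ in range(N)]

mean = statistics.fmean(mins)          # E(lam1 + lam2) has mean 1/3
tail = sum(v > 0.5 for v in mins) / N  # survival at 0.5: e^{-(lam1+lam2)*0.5}
print(mean, tail, math.exp(-1.5))
```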
(c) It holds that
\[
P(Y \ge t+h \mid Y \ge h) = \frac{P(Y \ge t+h)}{P(Y \ge h)} = e^{-\lambda t} = P(Y \ge t).
\]
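The memoryless identity can also be seen on simulated data; the sketch below (rate, $t$ and $h$ chosen arbitrarily) compares the empirical conditional tail with the unconditional one:

```python
import math
import random

random.seed(0)
lam, t, h, N = 1.5, 0.7, 0.4, 400_000
samples = [random.expovariate(lam) for _ in range(N)]

survived_h = [v for v in samples if v >= h]
cond = sum(v >= t + h for v in survived_h) / len(survived_h)
uncond = sum(v >= t for v in samples) / N
print(cond, uncond, math.exp(-lam * t))  # all three should be close
```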
(d) We have
\[
G(t+h) = P(Y \ge t+h) = \frac{P(Y \ge t+h)}{P(Y \ge h)}\,P(Y \ge h) = P(Y \ge t+h \mid Y \ge h)\,P(Y \ge h) = G(t)G(h).
\]
(e) First we will prove by induction that for all $n \in \mathbb{N}$ and $(a_n)_{n\in\mathbb{N}} \subseteq \mathbb{R}$ we have $G\big(\sum_{i=1}^n a_i\big) = \prod_{i=1}^n G(a_i)$. It holds when $n = 1$. Then, assuming the statement is true for some $n \ge 1$,
\[
G\Big(\sum_{i=1}^{n+1} a_i\Big) = G(a_{n+1})\,G\Big(\sum_{i=1}^{n} a_i\Big) = \prod_{i=1}^{n+1} G(a_i),
\]
where we first used part (d) and then the induction hypothesis. Take $m, n \in \mathbb{N}$; we have
\[
G(1)^m = G\Big(\sum_{i=1}^m 1\Big) = G\Big(\sum_{i=1}^n \frac{m}{n}\Big) = G\Big(\frac{m}{n}\Big)^n
\quad\Longrightarrow\quad G(1)^{\frac{m}{n}} = G\Big(\frac{m}{n}\Big).
\]
(f) Finally, take $t \in \mathbb{R}_+$ and $(t_n)_{n\in\mathbb{N}}, (s_n)_{n\in\mathbb{N}} \subseteq \mathbb{Q}$ so that $t_n \nearrow t$ and $s_n \searrow t$. Since $G$ is non-increasing, this yields
\[
G(t_n) \ge G(t) \ge G(s_n), \qquad \text{i.e.} \qquad G(1)^{t_n} \ge G(t) \ge G(1)^{s_n},
\]
and letting $n \to \infty$ gives $G(t) = G(1)^t$. Finally we have
\[
P(Y \ge t) = G(1)^t = e^{-\ln\big(\frac{1}{G(1)}\big)\,t},
\]
hence $Y \sim E\big(\ln\frac{1}{G(1)}\big)$.
Solution 5.5 We compute first the cumulative distribution function of $\Lambda$ given $X = 1$. By hypothesis,
\[
P(X=1 \mid \Lambda) = E(\mathbf{1}_{\{X=1\}} \mid \Lambda) = e^{-\Lambda}\Lambda.
\]
Hence
\[
P(X=1) = E(\mathbf{1}_{\{X=1\}}) = E\big[E(\mathbf{1}_{\{X=1\}} \mid \Lambda)\big]
= \int_0^\infty e^{-\lambda}\lambda f(\lambda)\,d\lambda
= \int_0^\infty e^{-\lambda}\lambda\,2e^{-2\lambda}\,d\lambda
= \int_0^\infty 2\lambda e^{-3\lambda}\,d\lambda = \frac{2}{9}.
\]
Since for any $\lambda > 0$ the variable $\mathbf{1}_{\{\Lambda\le\lambda\}}$ is $\sigma(\Lambda)$-measurable, the definition of conditional expectation gives
\[
P(X=1,\,\Lambda\le\lambda) = E\big[\mathbf{1}_{\{X=1\}}\mathbf{1}_{\{\Lambda\le\lambda\}}\big]
= E\big[E(\mathbf{1}_{\{X=1\}} \mid \Lambda)\,\mathbf{1}_{\{\Lambda\le\lambda\}}\big]
= E\big[e^{-\Lambda}\Lambda\,\mathbf{1}_{\{\Lambda\le\lambda\}}\big],
\]
and the cumulative distribution function of $\Lambda$ given $X = 1$ is
\[
P(\Lambda \le \lambda \mid X=1) = \frac{P(X=1,\,\Lambda\le\lambda)}{P(X=1)} = \frac{\int_0^\lambda 2se^{-3s}\,ds}{P(X=1)}.
\]
Differentiating the cumulative distribution function with respect to $\lambda$, we obtain the probability density function
\[
f(\lambda \mid X=1) = 9\lambda e^{-3\lambda}.
\]
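Taking, as in the computation above, the prior density $f(\lambda) = 2e^{-2\lambda}$ and $X \mid \Lambda \sim$ Poisson($\Lambda$), the posterior $9\lambda e^{-3\lambda}$ is a Gamma(2, rate 3) density with mean $\frac23$. A simulation sketch (sample size arbitrary; the Poisson sampler is a simple illustrative helper):

```python
import math
import random

random.seed(0)

def poisson(lam):
    # Product-of-uniforms method (Knuth); adequate for small lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

N = 300_000
kept = []
for _ in range(N):
    lam = random.expovariate(2.0)  # prior density 2 * exp(-2 * lam)
    if poisson(lam) == 1:
        kept.append(lam)

p_x1 = len(kept) / N               # P(X = 1) = 2/9
post_mean = sum(kept) / len(kept)  # Gamma(2, 3) posterior has mean 2/3
print(p_x1, post_mean)
```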