ETH Zürich, FS 2017
D-MATH
Prof. Dr. A.-S. Sznitman
Coordinator: Chong Liu

Applied Stochastic Processes
Solution sheet 7
Solution 7.1
(a) For x ≥ 0 define h_x(t) := (1 − F(t + x))1{t≥0}. Then h_x ≥ 0 is non-increasing and satisfies

    ∫_0^∞ h_x(t) dt = ∫_0^∞ (1 − F(t + x)) dt ≤ ∫_0^∞ (1 − F(t)) dt = ∫_0^∞ P[S1 > t] dt = E[S1] = µ < ∞.

Hence h_x is directly Riemann integrable (d.R.i.) by (2.85) in the lecture notes.
Putting the results of Exercise 6.2 a) and b) together with Smith’s key renewal theorem, we
get
    lim_{t→∞} Z(x,y)(t) = lim_{t→∞} Z(0,x+y)(t − x) = lim_{t→∞} Z(0,x+y)(t)
                        = (1/µ) ∫_0^∞ h_{x+y}(u) du = (1/µ) ∫_{x+y}^∞ (1 − F(u)) du.    (1)

For x, y ≥ 0 set G∞(x, y) := (1/µ) ∫_{x+y}^∞ (1 − F(u)) du. Note that

    G∞(0, 0) = (1/µ) ∫_0^∞ (1 − F(u)) du = µ/µ = 1.

Moreover, for x, y ∈ R define the function Ḡ∞ by

    Ḡ∞(x, y) :=  G∞(−x, −y)   if x, y ≤ 0,
                 G∞(−x, 0)    if x ≤ 0, y > 0,
                 G∞(0, −y)    if x > 0, y ≤ 0,
                 1            if x, y > 0.
It follows directly from the definition of Ḡ∞ that Ḡ∞ is [0, 1]-valued and continuous (and a
fortiori right-continuous). In addition, it is not difficult to check that for (x1, y1), (x2, y2) ∈ R²
with x1 < x2 and y1 < y2 we have

    Ḡ∞(x2, y2) − Ḡ∞(x1, y2) − Ḡ∞(x2, y1) + Ḡ∞(x1, y1) ≥ 0.

Moreover, we have Ḡ∞(0, 0) = 1 and lim_{(x,y)→(−∞,−∞)} Ḡ∞(x, y) = 0.
In conclusion, Ḡ∞ is the distribution function of a two-dimensional random vector supported
on (−∞, 0]². Put differently, there exists a random vector (A∞, E∞) valued in [0, ∞)² such
that Ḡ∞ is the distribution function of (−A∞, −E∞) and we have

    G∞(x, y) = P[A∞ ≥ x, E∞ ≥ y],    x, y ≥ 0.

For t ≥ 0, denote by Ḡt the distribution function of (−At, −Et) and define the function Gt
on [0, ∞)² by Gt(x, y) := P[At ≥ x, Et ≥ y]. Note that Ḡt and Gt have the same relationship
as Ḡ∞ and G∞.
We proceed to show that (At , Et ) converges in distribution to (A∞ , E∞ ) as t → ∞. This is
clearly equivalent to showing that (−At , −Et ) converges in distribution to (−A∞ , −E∞ ) as
t → ∞, which in turn by continuity of Ḡ∞ is equivalent to showing that

    lim_{t→∞} Ḡt(x, y) = Ḡ∞(x, y) for all (x, y) ∈ R².

Using the relationship between Ḡt and Gt and between Ḡ∞ and G∞, respectively, the latter is
equivalent to establishing that

    lim_{t→∞} Gt(x, y) = G∞(x, y) for all x, y ≥ 0.    (2)
Observe that Z(x,y)(t) = Gt(x, y+) for all t, x, y ≥ 0, where Gt(x, y+) = lim_{u↓y} Gt(x, u). By
(1) it follows that

    lim_{t→∞} Gt(x, y+) = G∞(x, y) for all x, y ≥ 0.

Using continuity of G∞, monotonicity of Gt(x, y+) in y and lim_{u↑y} Gt(x, u+) = Gt(x, y), it
is an easy exercise in analysis to derive (2).
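Remark (numerical illustration, not part of the solution): the convergence in (2) can be checked by simulation. The sketch below, in Python, uses the illustrative choice of Gamma(2,1) lifetimes (so µ = 2 and G∞(x, y) = e^{−(x+y)}(2 + x + y)/2); the lifetime law, the observation time and the sample size are arbitrary assumptions, not part of the exercise.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative lifetime law (not from the exercise): S_i ~ Gamma(2, 1), so mu = E[S_1] = 2
    # and (1/mu) * int_{x+y}^infty (1 - F(u)) du = exp(-(x+y)) * (2 + x + y) / 2.
    mu = 2.0
    t_obs = 30.0        # "large" time at which the age A_t and residual life E_t are observed
    n_paths = 50_000

    def age_and_residual(t):
        """Run one renewal process past time t; return (A_t, E_t)."""
        T = 0.0
        while True:
            S = rng.gamma(shape=2.0, scale=1.0)
            if T + S > t:
                return t - T, T + S - t
            T += S

    samples = np.array([age_and_residual(t_obs) for _ in range(n_paths)])
    A, E = samples[:, 0], samples[:, 1]

    def G_inf(x, y):
        z = x + y
        return np.exp(-z) * (2.0 + z) / mu

    for x, y in [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]:
        emp = np.mean((A >= x) & (E >= y))
        print(f"x={x}, y={y}:  empirical P[A_t >= x, E_t >= y] = {emp:.4f},  limit = {G_inf(x, y):.4f}")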
(b) A∞ and E∞ are independent if and only if for all x, y ≥ 0 we have
P [A∞ ≥ x, E∞ ≥ y] = P [A∞ ≥ x]P [E∞ ≥ y].
Define the function g by g(z) := (1/µ) ∫_z^∞ (1 − F(u)) du, z ≥ 0. By part a) it follows that for all
x, y ≥ 0 we have

    P[A∞ ≥ x, E∞ ≥ y] = G∞(x, y) = g(x + y).
In particular, by Dynkin’s lemma and using (2.95), (2.96) in the lecture notes, A∞ and E∞
are independent if and only if for all x, y ≥ 0 we have
g(x + y) = g(x)g(y).
As g is continuous with g(0) = G∞(0, 0) = 1, this functional equation forces g(z) = e^{αz} for some
α ∈ R, and α < 0 since lim_{z→∞} g(z) = 0. Moreover, we know that

    e^{αz} = g(z) = (1/µ) ∫_z^∞ (1 − F(s)) ds,    z ≥ 0.

Differentiating both sides yields

    αe^{αz} = −(1/µ)(1 − F(z)),    z ≥ 0.
Hence, we have F(z) = 1 + µαe^{αz} for z ≥ 0. Plugging in z = 0 and using F(0) ≥ 0 shows that
α ≥ −1/µ. Writing λ := −α ∈ (0, 1/µ], we obtain

    F(t) = (1 − λµe^{−λt})1{t≥0} = (1 − λµ)1{t≥0} + λµ(1 − e^{−λt})1{t≥0},    t ≥ 0.

In conclusion, A∞ and E∞ are independent if and only if F is the mixture of a Dirac distribution
at 0 and an exponential distribution with parameter λ ∈ (0, 1/µ], with weights (1 − λµ) and λµ,
respectively.
Remark: For λ = 1/µ, N is a Poisson process with parameter 1/µ.
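Remark (numerical illustration, not part of the solution): the following sketch checks the factorization g(x + y) = g(x)g(y) numerically for the Dirac/exponential mixture above, and shows that it fails for a lifetime distribution that is not of this form. The parameters µ = 2, λ = 0.3 and the Gamma(2,1) comparison case are illustrative assumptions, not taken from the sheet.

    import numpy as np
    from scipy.integrate import quad

    # Illustrative parameters, not from the sheet: mean mu = 2 and mixture parameter lambda = 0.3.
    mu, lam = 2.0, 0.3

    def tail_mixture(u):
        # 1 - F(u) = lam*mu*exp(-lam*u) for the mixture of a Dirac at 0 (weight 1 - lam*mu)
        # and an Exp(lam) distribution (weight lam*mu); its mean is mu.
        return lam * mu * np.exp(-lam * u)

    def tail_gamma(u):
        # 1 - F(u) for Gamma(2, 1) lifetimes (also mean 2), which is not such a mixture
        return np.exp(-u) * (1.0 + u)

    def g(tail, z):
        val, _ = quad(tail, z, np.inf)
        return val / mu

    x, y = 1.0, 2.0
    for name, tail in [("mixture", tail_mixture), ("Gamma(2,1)", tail_gamma)]:
        print(f"{name}: g(x+y) = {g(tail, x + y):.4f},  g(x)*g(y) = {g(tail, x) * g(tail, y):.4f}")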
Solution 7.2 The stochastic processes described in a) and b) are Markov chains, while the one in
c) is not. Let Yn denote the number which shows up in the n-th roll.
(a) We have Xn = (Xn−1 + 1)1{Yn < 6}. Thus, (Xn)n∈N is a Markov chain with state space N0.
For i, j ∈ {0, 1, 2, . . .}:

    ri,j = 1/6   if j = 0,
           5/6   if j = i + 1,
           0     otherwise.
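Remark (numerical illustration, not part of the solution): these transition probabilities can be checked empirically; the starting state i = 3 and the run length below are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate X_n = (X_{n-1} + 1) * 1{Y_n < 6} and estimate the transition
    # probabilities out of the (arbitrarily chosen) state i = 3.
    n_steps = 200_000
    Y = rng.integers(1, 7, size=n_steps)             # i.i.d. die rolls Y_1, ..., Y_n
    X = np.zeros(n_steps + 1, dtype=int)
    for n in range(1, n_steps + 1):
        X[n] = X[n - 1] + 1 if Y[n - 1] < 6 else 0

    i = 3
    visits = np.where(X[:-1] == i)[0]
    print("to 0    :", np.mean(X[visits + 1] == 0), " (should be close to 1/6)")
    print("to i + 1:", np.mean(X[visits + 1] == i + 1), " (should be close to 5/6)")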
(b) Then Xn = max{Xn−1, Yn}. Hence, (Xn)n∈N is a Markov chain with state space {1, . . . , 6}.
We obtain the following transition probabilities for 1 ≤ i, j ≤ 6:

    ri,j = 0     if j < i,
           i/6   if j = i,
           1/6   if j > i.
Furthermore, noting that ri,j(n) = P[max{Y1, Y2, . . . , Yn} = j | X0 = i] for j > i, we have

    ri,j(n) = 0                         if j < i,
              (i/6)^n                   if j = i,
              (j/6)^n − ((j − 1)/6)^n   if j > i.
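Remark (numerical illustration, not part of the solution): the closed form for ri,j(n) can be checked against the n-th power of the one-step transition matrix; n = 5 below is an arbitrary choice.

    import numpy as np

    # One-step transition matrix of X_n = max{X_{n-1}, Y_n} and a check of the
    # closed-form n-step probabilities r_{i,j}(n); n = 5 is an arbitrary choice.
    R = np.zeros((6, 6))
    for i in range(1, 7):
        for j in range(1, 7):
            if j == i:
                R[i - 1, j - 1] = i / 6
            elif j > i:
                R[i - 1, j - 1] = 1 / 6

    n = 5
    Rn = np.linalg.matrix_power(R, n)

    def r_n(i, j, n):
        if j < i:
            return 0.0
        if j == i:
            return (i / 6) ** n
        return (j / 6) ** n - ((j - 1) / 6) ** n

    closed = np.array([[r_n(i, j, n) for j in range(1, 7)] for i in range(1, 7)])
    print("max |R^n - closed form| =", np.max(np.abs(Rn - closed)))   # should be ~1e-16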
(c) The transition probabilities at time n depend not only on Xn , but also on Xn−1 . For example,
    P[X4 = 6 | X3 = 6] = P[Y3 = 6 | X3 = 6] + P[Y3 < 6, Y4 = 6 | X3 = 6]
                       = 6/11 + (5/11) · (1/6) = 41/66 < 1 = P[X4 = 6 | X3 = 6, X2 = 1].
Therefore, this is not a Markov chain.
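Remark (numerical illustration, not part of the solution): the exercise statement for c) is not reproduced on this sheet; assuming, consistently with the computation above, that Xn = max{Yn−1, Yn}, the following sketch estimates both conditional probabilities by simulation.

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed definition for part c): X_n = max{Y_{n-1}, Y_n} (consistent with the
    # computation above; the exercise statement itself is not reproduced on this sheet).
    n_runs = 500_000
    Y = rng.integers(1, 7, size=(n_runs, 4))         # columns: Y_1, Y_2, Y_3, Y_4
    X2 = np.maximum(Y[:, 0], Y[:, 1])
    X3 = np.maximum(Y[:, 1], Y[:, 2])
    X4 = np.maximum(Y[:, 2], Y[:, 3])

    p1 = np.mean(X4[X3 == 6] == 6)
    p2 = np.mean(X4[(X3 == 6) & (X2 == 1)] == 6)
    print(f"P[X4=6 | X3=6]        approx {p1:.3f}   (exact 41/66 = {41/66:.3f})")
    print(f"P[X4=6 | X3=6, X2=1]  approx {p2:.3f}   (exact 1)")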
Solution 7.3
(a) M (t) = E[Xt ], X0 = 1. At time t = 0, there exists one initial member of the population. Let
N be the number of offspring of the initial member. Then pk = P [N = k], m = E[N ]. Let L1
be the lifetime of the initial member.
    M(t) = E[Xt] = E[Xt 1{L1 > t}] + E[Xt 1{L1 ≤ t}].

On {L1 > t}, we have Xt = 1; on {L1 ≤ t}, Xt is equal in distribution to Σ_{k=1}^N X^{(k)}_{t−L1}, where
the X^{(k)} are independent copies of the process X. Therefore

    M(t) = P[L1 > t] + E[Σ_{k=1}^N X^{(k)}_{t−L1} 1{L1 ≤ t}]
         = (1 − G(t)) + E[N] E[X^{(1)}_{t−L1} 1{L1 ≤ t}]
         = (1 − G(t)) + m ∫_0^t E[X^{(1)}_{t−L1} 1{L1 ≤ t} | L1 = y] dG(y)
         = (1 − G(t)) + m ∫_0^t M(t − y) dG(y) = (1 − G(t)) + ∫_0^t M(t − y) d(mG)(y).
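Remark (numerical illustration, not part of the solution): the renewal-type equation for M(t) can be solved numerically by discretizing the convolution. The sketch below uses the illustrative (Markovian) assumption G = Exp(1) and m = 2, for which M(t) = e^{(m−1)t} in closed form, so the scheme can be validated; the discretization introduces an error of order h.

    import numpy as np

    # Numerically solve M(t) = (1 - G(t)) + m * int_0^t M(t - y) dG(y) on a grid.
    # Illustrative choice (not from the exercise): G = Exp(1) lifetimes and m = 2,
    # for which the exact solution is M(t) = exp((m - 1) * t).
    m = 2.0
    h, T = 0.01, 5.0
    t = np.arange(0.0, T + h, h)
    G = 1.0 - np.exp(-t)
    dG = np.diff(G, prepend=0.0)                      # increments G(t_k) - G(t_{k-1})

    M = np.zeros_like(t)
    for k in range(len(t)):
        # simple Riemann-sum approximation of the convolution int_0^{t_k} M(t_k - y) dG(y)
        conv = np.dot(M[k - 1::-1], dG[1:k + 1]) if k > 0 else 0.0
        M[k] = (1.0 - G[k]) + m * conv

    print("numerical M(T):", M[-1])
    print("exact     M(T):", np.exp((m - 1.0) * T))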
(b) The renewal equation for M(t) is not a proper renewal equation, since the measure mG has
total mass mG(∞) = m > 1. We can fix this by exponential tilting. Define

    l(s) = ∫_0^∞ e^{sy} d(mG)(y) = m ∫_0^∞ e^{sy} dG(y).
We have l(0) = m > 1, hence there exists s* ∈ (−∞, 0) such that l(s*) = 1. We have

    M = h + M ∗ H,

where h = 1 − G and H = mG. Set M*(y) = e^{s*y} M(y), h*(y) = e^{s*y} h(y) and dH*(y) =
e^{s*y} dH(y); then

    M* = h* + M* ∗ H*

and H*(∞) = ∫_0^∞ dH*(y) = ∫_0^∞ e^{s*y} dH(y) = l(s*) = 1. h* is directly Riemann integrable,
hence we can apply Smith's key renewal theorem and obtain

    lim_{t→∞} M*(t) = (∫_0^∞ e^{s*y} h(y) dy) / (∫_0^∞ y e^{s*y} dH(y)).
M*(t) = e^{s*t} M(t), hence

    M(t) ∼ e^{−s*t} (∫_0^∞ e^{s*y} h(y) dy) / (∫_0^∞ y e^{s*y} dH(y))
         = e^{−s*t} (∫_0^∞ e^{s*y} (1 − G(y)) dy) / (m ∫_0^∞ y e^{s*y} dG(y)).
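Remark (numerical illustration, not part of the solution): for a concrete non-exponential lifetime distribution one can compute s* and the limiting constant numerically and compare with a numerical solution of the renewal equation. The sketch below uses the illustrative assumptions of Gamma(2,1) lifetimes and m = 2, so that l(s) = m/(1 − s)² and s* = 1 − √2; these choices are not part of the exercise.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.integrate import quad

    # Malthusian asymptotics for an illustrative non-exponential choice (not from the
    # exercise): Gamma(2,1) lifetimes (density y*exp(-y)) and m = 2, so that
    # l(s) = m / (1 - s)^2 and s* = 1 - sqrt(2).
    m = 2.0
    dens = lambda y: y * np.exp(-y)                   # dG/dy
    tail = lambda y: np.exp(-y) * (1.0 + y)           # 1 - G(y)

    l = lambda s: m * quad(lambda y: np.exp(s * y) * dens(y), 0.0, np.inf)[0]
    s_star = brentq(lambda s: l(s) - 1.0, -5.0, -1e-8)

    num = quad(lambda y: np.exp(s_star * y) * tail(y), 0.0, np.inf)[0]
    den = m * quad(lambda y: y * np.exp(s_star * y) * dens(y), 0.0, np.inf)[0]
    C = num / den                                     # predicted limit of exp(s* t) * M(t)

    # Solve the renewal equation on a grid (same scheme as in the sketch after part (a)).
    h, T = 0.01, 15.0
    t = np.arange(0.0, T + h, h)
    dG = np.diff(1.0 - tail(t), prepend=0.0)
    M = np.zeros_like(t)
    for k in range(len(t)):
        conv = np.dot(M[k - 1::-1], dG[1:k + 1]) if k > 0 else 0.0
        M[k] = tail(t[k]) + m * conv

    print("s* =", s_star, " (exact: 1 - sqrt(2) =", 1.0 - np.sqrt(2.0), ")")
    print("exp(s* T) * M(T) =", np.exp(s_star * T) * M[-1], "  vs  C =", C)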