Discrete-time Markov Chain: Transient Behavior

Introduction to Stochastic Models
GSLM 54100
Outline

• discrete-time Markov chain
• transient behavior
Transient Behavior

• {Xn} for the weather condition:

  Xn = 0 if it rained both today and yesterday,
       1 if it rained today but not yesterday,
       2 if it rained yesterday but not today,
       3 if it rained neither today nor yesterday

          [ 0.7  0    0.3  0   ]
      P = [ 0.5  0    0.5  0   ]
          [ 0    0.4  0    0.6 ]
          [ 0    0.2  0    0.8 ]

• P(it will rain tomorrow | X0 = 2) = P(X1 = 1 | X0 = 2) = p_{2,1} = 0.4
Transient Behavior

• {Xn} for the weather condition:

  Xn = 0 if it rained both today and yesterday,
       1 if it rained today but not yesterday,
       2 if it rained yesterday but not today,
       3 if it rained neither today nor yesterday

          [ 0.7  0    0.3  0   ]
      P = [ 0.5  0    0.5  0   ]
          [ 0    0.4  0    0.6 ]
          [ 0    0.2  0    0.8 ]

• P(it will rain 10 days from now | X0 = 2)
  = P(X10 = 0 | X0 = 2) + P(X10 = 1 | X0 = 2)
  = p^(10)_{2,0} + p^(10)_{2,1}
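As a numerical check, the 10-step probabilities can be read off the 10th power of the transition matrix (a sketch using NumPy; the matrix is exactly the weather chain defined above):

```python
import numpy as np

# Weather chain: states 0..3 as defined on the slide.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])

P10 = np.linalg.matrix_power(P, 10)

# Rain 10 days from now <=> X10 is 0 or 1 (it rained on that day).
p_rain_in_10 = P10[2, 0] + P10[2, 1]
print(p_rain_in_10)
```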
Example 4.10

• 2 balls in an urn
• randomly picking one out, replacing it by a ball of the same color w.p. 0.8 and of the opposite color w.p. 0.2
• initially both balls are red
• P(there are 2 red balls in the urn after 4 selections) = ?
• P(the fifth selected ball is red) = ?
Example 4.10

• Xn = the # of red balls in the urn after the nth selection and subsequent replacement
• X0 = 2 (initially both balls are red)

          [ 0.8  0.2  0   ]
      P = [ 0.1  0.8  0.1 ]
          [ 0    0.2  0.8 ]

(transition diagram with arcs p_{0,0}, p_{0,1}, p_{1,0}, p_{1,1}, p_{1,2} omitted)
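The matrix above can also be generated from the sampling rule itself rather than entered by hand. A sketch, assuming the replacement rule stated in the example (same color w.p. 0.8, opposite color w.p. 0.2); the helper name `urn_matrix` is my own:

```python
import numpy as np

def urn_matrix(n_balls=2, p_same=0.8):
    # state i = number of red balls among n_balls in the urn
    P = np.zeros((n_balls + 1, n_balls + 1))
    for i in range(n_balls + 1):
        pick_red = i / n_balls
        if i > 0:
            # a red ball is picked and switched to blue: i -> i - 1
            P[i, i - 1] += pick_red * (1 - p_same)
        if i < n_balls:
            # a blue ball is picked and switched to red: i -> i + 1
            P[i, i + 1] += (1 - pick_red) * (1 - p_same)
        # whichever color is picked, it is kept w.p. p_same: i -> i
        P[i, i] += p_same
    return P

P = urn_matrix()
print(P)
```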
Example 4.10

• P(there are 2 red balls in the urn after 4 selections)
  = P(X4 = 2 | X0 = 2) = p^(4)_{2,2} = 0.4872

          [ 0.8  0.2  0   ]
      P = [ 0.1  0.8  0.1 ]
          [ 0    0.2  0.8 ]

                    [ 0.66  0.32  0.02 ]
      P^(2) = P^2 = [ 0.16  0.68  0.16 ]
                    [ 0.02  0.32  0.66 ]

                    [ 0.4872  0.4352  0.0776 ]
      P^(4) = P^4 = [ 0.2176  0.5648  0.2176 ]
                    [ 0.0776  0.4352  0.4872 ]
Example 4.10

• P(the 5th selected ball is red)
  = P(5th selection is red | X4 = 0) P(X4 = 0 | X0 = 2)
  + P(5th selection is red | X4 = 1) P(X4 = 1 | X0 = 2)
  + P(5th selection is red | X4 = 2) P(X4 = 2 | X0 = 2)
  = (0) p^(4)_{2,0} + (0.5) p^(4)_{2,1} + (1) p^(4)_{2,2}
  = 0 + (0.5)(0.4352) + (1)(0.4872)
  = 0.7048

                    [ 0.4872  0.4352  0.0776 ]
      P^(4) = P^4 = [ 0.2176  0.5648  0.2176 ]
                    [ 0.0776  0.4352  0.4872 ]
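Both answers can be reproduced in a few lines (a sketch; the second computation is exactly the conditioning on X4 shown above, with P(5th red | X4 = j) = j/2):

```python
import numpy as np

# Urn chain of Example 4.10: state = number of red balls.
P = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
])

P4 = np.linalg.matrix_power(P, 4)

# P(2 red balls after 4 selections | X0 = 2)
p_two_red = P4[2, 2]

# Condition on the urn content X4: P(5th selection red | X4 = j) = j/2.
p_fifth_red = sum((j / 2) * P4[2, j] for j in range(3))

print(p_two_red, p_fifth_red)
```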
Example 4.11

• balls are successively and randomly distributed among 8 urns
• P(3 nonempty urns after 9 balls are distributed) = ?
• Xn = the # of nonempty urns after n balls have been distributed; Xn ∈ {0, 1, …, 8}
• p_{i,i} = i/8 = 1 − p_{i,i+1}, i = 0, 1, …, 8
• required answer = p^(9)_{0,3} = p^(8)_{1,3} (since p_{0,1} = 1)
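A quick check of this setup: build the chain from p_{i,i} = i/8, raise it to the 9th power, and read off the (0, 3) entry (a sketch; the printed value can also be verified against a direct combinatorial count, as in the test below):

```python
import numpy as np
from math import comb

# Occupancy chain of Example 4.11: Xn = number of nonempty urns.
n_urns = 8
P = np.zeros((n_urns + 1, n_urns + 1))
for i in range(n_urns + 1):
    P[i, i] = i / n_urns              # ball lands in an occupied urn
    if i < n_urns:
        P[i, i + 1] = 1 - i / n_urns  # ball opens a new urn

p9 = np.linalg.matrix_power(P, 9)[0, 3]
print(p9)
```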
Unconditional Probability

• P(Xn = j) = Σ_i P(X0 = i) p^(n)_{i,j}, summing over all states i
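In code, this identity is just a row vector (the initial distribution) times P^n. A sketch using the weather chain from the earlier slides; the initial distribution `alpha` below is made up for illustration:

```python
import numpy as np

# Weather chain from the earlier slides.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])

alpha = np.array([0.25, 0.25, 0.25, 0.25])  # assumed P(X0 = i), illustrative only
n = 5

# P(Xn = j) = sum_i P(X0 = i) p^(n)_{ij}  <=>  alpha @ P^n
unconditional = alpha @ np.linalg.matrix_power(P, n)
print(unconditional)
```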
Example 4.12 of Ross

• the amount of money of a pensioner, who receives 2 (thousand dollars) at the beginning of each month
• expenses in a month = i w.p. 1/4, i = 1, 2, 3, 4
• if the money on hand is insufficient, only what is on hand is spent (capital drops to 0, not below)
• any capital above 3 at the end of a month is disposed of
• at a particular month (the time reference), he has 5 after receiving his payment
• P(the pensioner's capital is ever 1 or less within the following four months) = ?
Example 4.12 of Ross

• Xn = the amount of money that the pensioner has at the end of month n
• Xn+1 = min{[Xn + 2 − Dn]+, 3}, where Dn ~ discrete uniform on {1, 2, 3, 4} and [x]+ = max{x, 0}
• starting with X0 = 3, Xn ∈ {0, 1, 2, 3}

          [ 0.75  0.25  0     0    ]
      P = [ 0.5   0.25  0.25  0    ]
          [ 0.25  0.25  0.25  0.25 ]
          [ 0     0.25  0.25  0.5  ]
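Rather than tabulating the 16 entries by hand, P can be generated directly from the recursion (a sketch; each expense value d carries probability 1/4):

```python
import numpy as np

# Pensioner chain of Example 4.12: X_{n+1} = min{[X_n + 2 - D]+, 3},
# with D uniform on {1, 2, 3, 4}.
n_states = 4
P = np.zeros((n_states, n_states))
for i in range(n_states):
    for d in (1, 2, 3, 4):
        j = min(max(i + 2 - d, 0), 3)  # clip below at 0, cap above at 3
        P[i, j] += 0.25
print(P)
```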
Example 4.12 of Ross

• to find P(the capital of the pensioner is ever 1 or less at any time within the following 4 months)
• with X0 = 3: will the chain visit state 0 or 1 for some n ≤ 4?

          [ 0.75  0.25  0     0    ]
      P = [ 0.5   0.25  0.25  0    ]
          [ 0.25  0.25  0.25  0.25 ]
          [ 0     0.25  0.25  0.5  ]

(transition diagram of the chain on states {0, 1, 2, 3} omitted)
0.75
 0.5
P
0.25

 0
0 
0.25 0.25 0 
0.25 0.25 0.25

0.25 0.25 0.5 
0.25


0
Example 4.12 of Ross
0
0 
 1

Q   0.5 0.25 0.25
0.25 0.25 0.5 
starting with X0 = 3, whether the chain has ever visited state 0 or
1 on or before n depends on the transitions within {2, 3}
merging states 0 and 1 into a super state A
0.5
0.75
0.5
0
0.25
0.5
3
0.25
0.25
0.25
0.25
0.25
1
A
0.25
1
3
0.25
0.25
0.25
0.25
2
0.25
2
0.25
0.25
0.25
14
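With Q in hand, the required probability is the entry of Q^4 from state 3 into the super state A (a sketch; the value below follows from this particular Q):

```python
import numpy as np

# Merged chain of Example 4.12 on states (A, 2, 3), where A = {0, 1}.
Q = np.array([
    [1.0,  0.0,  0.0 ],  # A: absorbing, q_{A,A} = 1
    [0.5,  0.25, 0.25],  # 2: q_{2,A} = p_{2,0} + p_{2,1} = 0.5
    [0.25, 0.25, 0.5 ],  # 3: q_{3,A} = p_{3,0} + p_{3,1} = 0.25
])

Q4 = np.linalg.matrix_power(Q, 4)

# P(capital ever <= 1 within 4 months | X0 = 3) = q^(4)_{3,A}
p_hit = Q4[2, 0]
print(p_hit)
```

With this Q the printed value is 0.78515625, i.e. roughly 0.785.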
Probability of Ever Visiting a Set of States by Period n

• a Markov chain [pij]
• A: a set of special states
• to find P(ever visiting states in A on or before period n | X0 = i)
• defining a super state A, indicating that some state in A has ever been visited
• the first visiting time of A: N = min{n: Xn ∈ A}
• a new Markov chain:

  Wn = Xn, if n < N,
       A,  if n ≥ N.
Probability of Ever Visiting a Set of States by Period n

• transition probability matrix Q = [qij]:

  qij = pij,              if i, j ∉ A,
  qiA = Σ_{j∈A} pij,      if i ∉ A,
  qAA = 1.
Probability of Visiting a Particular State at n and Skipping a
Particular Set of States for k ∈ {1, …, n−1}

• P(Xn = j, Xk ∉ A, k = 1, …, n−1 | X0 = i)
  = P(Wn = j | X0 = i) = P(Wn = j | W0 = i)
  = q^(n)_{i,j}   (for j ∉ A)
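The construction on the last slides can be packaged generically: delete the rows and columns of A, append one absorbing super state, and take matrix powers. The function below is my own sketch of that recipe (the name and interface are not from the slides):

```python
import numpy as np

def ever_visit_prob(P, A, i, n):
    """P(visit some state in A on or before period n | X0 = i),
    computed via the merged chain W with super state A appended last."""
    A = set(A)
    keep = [s for s in range(len(P)) if s not in A]  # states outside A
    m = len(keep)
    Q = np.zeros((m + 1, m + 1))
    for r, s in enumerate(keep):
        for c, t in enumerate(keep):
            Q[r, c] = P[s, t]                  # q_ij = p_ij for i, j not in A
        Q[r, m] = sum(P[s, t] for t in A)      # q_iA = sum_{j in A} p_ij
    Q[m, m] = 1.0                              # q_AA = 1: A is absorbing
    Qn = np.linalg.matrix_power(Q, n)
    return Qn[keep.index(i), m]                # q^(n)_{i,A}
```

Applied to the pensioner chain of Example 4.12, `ever_visit_prob(P, {0, 1}, 3, 4)` recovers the four-month hitting probability of {0, 1}.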