Discrete Time Markov Chain

Kristine Joy E. Carpio
Department of Mathematics
De La Salle University
SEAMS School 2016
“Topics in Stochastic Analysis”
Outline
Markov Chains
Introduction
Chapman-Kolmogorov Equations
Classification of States
Limit Theorems
Absorbing Markov Chain
References
Markov Chains

Introduction
Markov chains were introduced in 1906 by Andrei Andreyevich
Markov (1856-1922) and were named in his honor.
Definition 1
A stochastic process X = {Xn | n ≥ 0} on a countable set S is a
Markov Chain if, for any i, j ∈ S and n ≥ 0,
P{Xn+1 = j | X0 , . . . , Xn } = P{Xn+1 = j | Xn },
P{Xn+1 = j | Xn = i} = pij .
Here pij is the probability that the Markov chain jumps from state i to state j.
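As an illustration (not part of the original slides), a chain with a given transition matrix can be simulated step by step by drawing each next state from the row of the current state; the matrix P below is a hypothetical example.

```python
import numpy as np

def simulate_chain(P, start, n_steps, rng):
    """Simulate n_steps transitions of a Markov chain with transition matrix P."""
    path = [start]
    state = start
    for _ in range(n_steps):
        # The next state is drawn from row `state` of P.
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

# Hypothetical two-state transition matrix: each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
rng = np.random.default_rng(0)
path = simulate_chain(P, start=0, n_steps=10, rng=rng)
print(path)
```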
Transition Matrix
The transition probability pij represents the probability that the process will, when in state i, make a transition to state j.
These probabilities are nonnegative, and the process must make a transition into some state, which leads to

pij ≥ 0,  i, j ≥ 0;      ∑_{j=0}^∞ pij = 1,  i = 0, 1, . . . .
Let P be the matrix of one-step transition probabilities pij , so that

    [ p00  p01  p02  · · · ]
    [ p10  p11  p12  · · · ]
P = [  ⋮    ⋮    ⋮         ]
    [ pi0  pi1  pi2  · · · ]
    [  ⋮    ⋮    ⋮         ]
Chapman-Kolmogorov Equations

n-Step Probabilities
Definition 2 (n-step transition probabilities)
The n-step transition probability p^n_ij denotes the probability that a process in state i will be in state j after n additional transitions. That is,

p^n_ij = P{Xk+n = j | Xk = i},   n ≥ 0, i, j ≥ 0.
This probability can be obtained by computing the matrix power P^n.
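For instance (a sketch with a made-up transition matrix), the n-step probabilities can be read off a matrix power:

```python
import numpy as np

# Hypothetical one-step transition matrix.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# p^n_ij is the (i, j) entry of the n-th matrix power of P.
P4 = np.linalg.matrix_power(P, 4)
print(P4[0, 1])  # probability of going from state 0 to state 1 in 4 steps
```

Each row of P4 is again a probability distribution, as every row of any power of P must be.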
Chapman-Kolmogorov Equations
Let P^(n) be the matrix of n-step transition probabilities p^n_ij. It follows that

P^(m+n) = P^(m) · P^(n).

In particular,

P^(2) = P^(1+1) = P · P = P^2,

and by induction

P^(n) = P^(n−1+1) = P^(n−1) · P = P^n.

The multiplication property of matrices, P^(m+n) = P^(m) · P^(n) for m, n ≥ 1, yields the Chapman-Kolmogorov equations

p^(m+n)_ij = ∑_{k∈S} p^m_ik p^n_kj,   i, j ∈ S.
The Chapman-Kolmogorov equations were named for their independent formulation by Sydney Chapman (a Trinity College graduate, 1888-1970) and Andrey Kolmogorov (1903-1987).
Unconditional Distribution
Let the probability distribution of the initial state be

αi = P{X0 = i},   i ≥ 0   ( ∑_{i=0}^∞ αi = 1 ).
The unconditional distribution of the state at time n can be
obtained by conditioning on the initial state.
If α = (αi ) is the probability vector representing the starting distribution, then the probability that the chain is in state j after n steps is

P{Xn = j} = (αP^n)_j ,

the jth entry of the row vector αP^n.
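This conditioning on the initial state can be sketched numerically (the matrix and initial distribution below are made up):

```python
import numpy as np

# Hypothetical chain and initial distribution alpha.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
alpha = np.array([0.5, 0.5])  # P{X0 = 0} = P{X0 = 1} = 0.5

# Unconditional distribution of X_n: the row vector alpha @ P^n.
n = 3
dist_n = alpha @ np.linalg.matrix_power(P, n)
print(dist_n)  # dist_n[j] = P{X_n = j}
```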
Classification of States
Definition 3 (absorbing state)
State j is said to be absorbing when it is never left once entered, that is, pjj = 1.
Definition 4 (accessible state)
State j is said to be accessible from state i if p^n_ij > 0 for some n ≥ 0.
Two states i and j that are accessible to each other are said to
communicate, and we write i ↔ j.
Properties of the relation of communication:
i. State i communicates with state i, all i ≥ 0.
ii. If state i communicates with j, then state j communicates
with state i.
iii. If state i communicates with state j, and state j
communicates with state k, then state i communicates with
state k.
Two states that communicate are said to be in the same class.
The Markov chain is said to be irreducible if there is only one
class, that is, if all states communicate with each other.
Return Time
Let f^n_ij be the probability that, starting in state i, the first transition into state j occurs at time n. We have

f^0_ij = 0,
f^n_ij = P{Xn = j, Xk ≠ j, k = 1, . . . , n − 1 | X0 = i}.
Let

fij = ∑_{n=1}^∞ f^n_ij

denote the probability that, starting in state i, the process will ever make a transition into state j. State i is recurrent if fii = 1 and transient if fii < 1.
If state i is recurrent then, starting in state i, the process will reenter state i again and again, in fact infinitely often. If state i is transient then, starting in state i, the number of time periods that the process will be in state i has a geometric distribution with finite mean 1/(1 − fii).
A Frog on a Lily Pad
Figure 1: In this example, a frog hops about seven lily pads. The
numbers next to arrows indicate the probabilities that he jumps to a
neighboring lily pad.
Theorem 5
State i is recurrent if

∑_{n=1}^∞ p^n_ii = ∞

and transient if

∑_{n=1}^∞ p^n_ii < ∞.
This leads to the conclusion that in a finite-state Markov chain
not all states can be transient.
Corollary 6
If state i is recurrent, and state i communicates with state j,
then j is recurrent.
This implies that recurrence, and hence transience, is a class property. This, along with the previous result that not all states in a finite Markov chain can be transient, leads to the conclusion that all states of a finite irreducible Markov chain are recurrent.
Limit Theorems
Limiting Probabilities
For an important class of Markov chains, as n goes to infinity p^n_ij converges to a value that is the same for all i: there is a limiting probability that the process will be in state j after a large number of transitions, and this value is independent of the initial state.
Definition 7 (period)
State i has period d if p^n_ii = 0 whenever n is not divisible by d,
and d is the largest integer with this property. A state with
period 1 is called aperiodic.
Periodicity is a class property.
Definition 8 (positive recurrent)
A recurrent state i is positive recurrent if, starting from i, the
expected time until the process returns to state i is finite.
Recurrent states that are not positive recurrent are called null
recurrent.
Positive recurrence is a class property.
Definition 9 (ergodic)
Positive recurrent, aperiodic states are called ergodic.
In a finite state Markov chain all recurrent states are positive
recurrent.
Theorem 10
For an irreducible ergodic Markov chain, lim_{n→∞} p^n_ij exists and is independent of i. Furthermore, letting

πj = lim_{n→∞} p^n_ij ,   j ≥ 0,

then πj is the unique nonnegative solution of

πj = ∑_{i=0}^∞ πi pij ,   j ≥ 0,      ∑_{j=0}^∞ πj = 1.
The long run proportions πj , j ≥ 0, are often called stationary
probabilities. If
P(X0 = j) = πj ,
j≥0
then
P(Xn = j) = πj
for all n, j ≥ 0.
Let µjj be the expected number of transitions until a Markov chain, starting in state j, returns to that state. Since, on average, the chain spends 1 unit of time in state j for every µjj units of time, it follows that

πj = 1/µjj .
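As a sketch (with an assumed two-state irreducible, aperiodic chain), the stationary probabilities can be computed by solving πP = π together with ∑ πj = 1, and the expected return times recovered as µjj = 1/πj:

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])  # hypothetical irreducible, aperiodic chain
n_states = P.shape[0]

# Solve pi P = pi, sum(pi) = 1 as a linear system:
# (P^T - I) pi = 0, with one redundant equation replaced by sum(pi) = 1.
A = P.T - np.eye(n_states)
A[-1, :] = 1.0
b = np.zeros(n_states)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

mu = 1.0 / pi  # expected return times mu_jj = 1 / pi_j
print(pi, mu)
```

Replacing one balance equation by the normalization ∑ πj = 1 is what makes the system nonsingular, since the balance equations alone only determine π up to a constant.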
Example
Harry used to play semipro basketball where he was a defensive
specialist. His scoring productivity per game fluctuated
between three states: 1 (scored 0 or 1 point), 2 (scored between
2 and 5 points), 3 (scored more than 5 points). Inevitably, if
Harry scored a lot of points in one game, his jealous
teammates refused to pass him the ball in the next game, so his
productivity in the next game was nil. The team statistician,
upon observing the transition between states, concluded these
transitions could be modelled by a Markov chain with
transition matrix

    [  0   1/3  2/3 ]
P = [ 1/3   0   2/3 ] .
    [  1    0    0  ]
1. What is the long-run proportion of games in which Harry had a high scoring game?
2. The salary structure in the semipro leagues includes incentives for scoring. Harry was paid $40/game for a high scoring performance, $30/game when he scored between 2 and 5 points, and only $20/game when he scored nil. What was the long-run earning rate of our hero?
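A sketch of how these two questions could be answered numerically, using the transition matrix of the example and the dollar figures from the problem statement:

```python
import numpy as np

# Transition matrix from the example (states 1, 2, 3 as rows/columns).
P = np.array([[0.0, 1/3, 2/3],
              [1/3, 0.0, 2/3],
              [1.0, 0.0, 0.0]])

# Stationary distribution: solve pi P = pi with entries summing to 1.
A = P.T - np.eye(3)
A[-1, :] = 1.0          # replace one balance equation by sum(pi) = 1
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

pay = np.array([20.0, 30.0, 40.0])  # $ per game in states 1, 2, 3
rate = pi @ pay                     # long-run earning rate per game
print(pi, rate)
```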
Absorbing Markov Chain
Definition 11
A Markov chain is absorbing if it has at least one absorbing state and it is possible to go from every state to an absorbing state (not necessarily in one step).
Probability of Absorption
Let an arbitrary absorbing Markov chain have r absorbing states and t transient states. We can write the transition matrix in such a way that the first t states are transient and the last r states are absorbing. The transition matrix then has the canonical form

P = [ Q  R ]          (1)
    [ 0  I ]

where Q is a t × t matrix, R is t × r, 0 is the r × t zero matrix, and I is the r × r identity matrix.
Theorem 12
In an absorbing Markov chain, the probability that the process
will be absorbed is 1.
Fundamental Matrix
Theorem 13
For an absorbing Markov chain P, the matrix N = (I − Q)−1 is
called the fundamental matrix for P. The entry nij of N gives
the expected number of times that the process is in the transient
state j if it started in the transient state i.
The sum of the entries in the ith row of N gives the expected
time required before being absorbed starting from the ith
transient state.
Absorption Probabilities
Theorem 14
Let bij be the probability that an absorbing chain will be absorbed
in the absorbing state j if it starts from a transient state i. Let
B be the matrix with entries bij . Then B is a t × r matrix and
B = NR,
where N is the fundamental matrix and R is the canonical form
in Equation (1).
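As an illustration (a hypothetical symmetric gambler's-ruin chain, not taken from the slides), with the transient states ordered before the absorbing ones as in Equation (1), the fundamental matrix N and the absorption probabilities B = NR can be computed directly:

```python
import numpy as np

# Symmetric gambler's ruin on {0, 1, 2, 3}: states 0 and 3 absorb.
# Canonical ordering: transient states (1, 2), then absorbing states (0, 3).
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])   # transitions among transient states
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])   # transient -> absorbing transitions

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix N = (I - Q)^(-1)
t = N.sum(axis=1)                 # expected steps before absorption, per start state
B = N @ R                         # B[i, j] = P{absorbed in state j | start in transient state i}

print(N)
print(t)
print(B)
```

Here each row of B sums to 1, reflecting Theorem 12: absorption eventually occurs with probability 1.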
References
1. C.M. Grinstead & J.L. Snell. Introduction to Probability.
American Mathematical Society, 2nd Revised Edition,
1997.
2. S.I. Resnick. Adventures in Stochastic Processes.
Birkhäuser Boston, 1992.
3. S.M. Ross. Introduction to Probability Models. Elsevier,
8th Edition, 2003.
4. S.M. Ross. Stochastic Processes. John Wiley & Sons, Inc,
2nd Edition, 1996.
5. H. Tijms. Understanding Probability: Chance Rules in
Everyday Life. Cambridge University Press, 2004.