
Al-Imam Mohammad Ibn Saud University
CS433
Modeling and Simulation
Lecture 06 – Part 02
Discrete Markov Chains
http://10.2.230.10:4040/akoubaa/cs433/
11 Nov 2008
Dr. Anis Koubâa
Goals for Today
• Practical example for modeling a system using Markov Chains
• State Holding Time
• State Probability and Transient Behavior
Example
• Learn how to find a model of a given system
• Learn how to extract the state space
Example: Two Processors System
Consider a two-processor computer system where time is divided into time slots and that operates as follows:

• At most one job can arrive during any time slot, and this can happen with probability α.
• Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
• If both processors are busy, then the job is lost.
• When a processor is busy, it can complete the job with probability β during any one time slot.
• If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).

Q1. Describe the automaton that models this system (not included).
Q2. Describe the Markov Chain that describes this model.
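Before deriving the chain formally, the slotted dynamics above can be sanity-checked by direct simulation. The following is a minimal Python sketch (ours, not part of the original slides; the function name `simulate` and its parameters are our own choices). It tracks the number of busy processors, applying departures before arrivals as the rules require:

```python
import random

def simulate(alpha, beta, num_slots, seed=0):
    """Simulate the two-processor slotted system.

    State = number of busy processors (0, 1, or 2).
    Departures are applied before arrivals, as the model specifies.
    Returns the fraction of slots spent in each state.
    """
    rng = random.Random(seed)
    state = 0
    counts = [0, 0, 0]
    for _ in range(num_slots):
        counts[state] += 1
        # Each busy processor completes its job with probability beta.
        state -= sum(1 for _ in range(state) if rng.random() < beta)
        # At most one arrival per slot, with probability alpha; it is
        # accepted only if a processor is free after the departures.
        if rng.random() < alpha and state < 2:
            state += 1
    return [c / num_slots for c in counts]

print(simulate(alpha=0.5, beta=0.7, num_slots=200_000))
```

With α = 0.5 and β = 0.7 the printed fractions should come out near [0.40, 0.50, 0.11], the long-run behavior of the chain whose transition matrix is derived below.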
Example: Automaton (not included)
• Let the state be the number of jobs currently processed by the system; the state space is then X = {0, 1, 2}.
• Event set: a: job arrival, d: job departure.
• Feasible event set:
  If X = 0, then Γ(X) = {a}
  If X = 1, 2, then Γ(X) = {a, d}
State Transition Diagram:
[Diagram: states 0, 1, 2; each arc is labeled with the event combinations that trigger it (a, d, and joint occurrences such as a,d).]
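As a concrete representation, the feasible event sets can be written down as a small table. The sketch below is a hypothetical Python encoding (ours, not from the slides):

```python
# Feasible event sets Γ(x) for the automaton: 'a' = arrival, 'd' = departure.
GAMMA = {
    0: {"a"},
    1: {"a", "d"},
    2: {"a", "d"},
}
```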
Example: Alternative Automaton (not included)
• Let (X1, X2) indicate whether processors 1 and 2 are busy, with Xi ∈ {0, 1}.
• Event set: a: job arrival, di: job departure from processor i.
• Feasible event set:
  If X = (0,0), then Γ(X) = {a}
  If X = (1,0), then Γ(X) = {a, d1}
  If X = (0,1), then Γ(X) = {a, d2}
  If X = (1,1), then Γ(X) = {a, d1, d2}
State Transition Diagram:
[Diagram: states 00, 10, 01, 11; each arc is labeled with the events (a, d1, d2 and their joint occurrences) that trigger it.]
Example: Markov Chain
• In the State Transition Diagram of the Markov Chain, each transition is simply marked with its transition probability.

[Diagram: states 0, 1, 2; arcs labeled with the transition probabilities p00, p01, p10, p11, p12, p20, p21, p22.]
p00 = 1 − α
p01 = α
p02 = 0
p10 = β(1 − α)
p11 = (1 − α)(1 − β) + αβ
p12 = α(1 − β)
p20 = β²(1 − α)
p21 = 2β(1 − β)(1 − α) + αβ²
p22 = (1 − β)² + 2αβ(1 − β)

(For example, p21 collects the two ways of going from 2 busy processors to 1: exactly one departure with no arrival, or two departures followed by an accepted arrival.)
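These expressions are easy to check numerically. Here is a short Python sketch (ours, not from the slides) that builds P from α and β and verifies that each row sums to 1:

```python
def transition_matrix(alpha, beta):
    """Transition matrix of the two-processor chain.

    State = number of busy processors; departures occur before arrivals.
    """
    a, b = alpha, beta
    P = [
        [1 - a,          a,                            0.0],
        [b * (1 - a),    (1 - a) * (1 - b) + a * b,    a * (1 - b)],
        [b**2 * (1 - a), 2*b*(1 - b)*(1 - a) + a*b**2, (1 - b)**2 + 2*a*b*(1 - b)],
    ]
    assert all(abs(sum(row) - 1.0) < 1e-12 for row in P), "rows must sum to 1"
    return P

for row in transition_matrix(0.5, 0.7):
    print([round(p, 3) for p in row])
# [0.5, 0.5, 0.0]
# [0.35, 0.5, 0.15]
# [0.245, 0.455, 0.3]
```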
Example: Markov Chain
• Suppose that α = 0.5 and β = 0.7; then

  P = [pij] = | 0.5    0.5    0    |
              | 0.35   0.5    0.15 |
              | 0.245  0.455  0.3  |
State Holding Time
How long does the chain stay in one state before it moves to another?
State Holding Times
P  A  B | C   P  A | B C   P  B | C 
10



Suppose that at point k, the Markov Chain has transitioned into state
Xk=i. An interesting question is how long it will stay at state i.
Let V(i) be the random variable that represents the number of time
slots that Xk=i.
We are interested on the quantity Pr{V(i) = n}
Pr V  i   n   Pr X k  n  i ,  X k  n 1  i ,..., X k 1  i  | X k  i 
 Pr X k  n  i | X k  n 1  i ,..., X k  i  
Pr X k  n 1  i ,..., X k 1  i | X k  i 
 Pr X k  n  i | X k  n 1  i   Pr X k  n 1  i | X k  n  2 ..., X k  i  
Pr X k  n  2  i ,..., X k 1  i | X k  i 
10
State Holding Times
Applying the Markov (memoryless) property to each factor and repeating the same conditioning step:

Pr{V(i) = n} = Pr{X_{k+n} ≠ i | X_{k+n−1} = i}
    · Pr{X_{k+n−1} = i | X_{k+n−2} = i, …, X_k = i}
    · Pr{X_{k+n−2} = i, …, X_{k+1} = i | X_k = i}
  = (1 − p_ii) · Pr{X_{k+n−1} = i | X_{k+n−2} = i}
    · Pr{X_{k+n−2} = i | X_{k+n−3} = i, …, X_k = i}
    · Pr{X_{k+n−3} = i, …, X_{k+1} = i | X_k = i}
  = …

Pr{V(i) = n} = (1 − p_ii) · p_ii^(n−1)

• This is the Geometric Distribution with parameter p_ii.
• Clearly, V(i) has the memoryless property.
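A quick empirical check (our own sketch, not part of the slides): staying in state i for exactly n slots means taking n − 1 self-loops of probability p_ii and then leaving, so simulating the self-loop directly should reproduce the geometric pmf:

```python
import random

def holding_time_pmf(pii, n_max, trials=100_000, seed=1):
    """Estimate Pr{V(i) = n} by simulating the self-loop of state i."""
    rng = random.Random(seed)
    counts = [0] * (n_max + 2)
    for _ in range(trials):
        n = 1
        while rng.random() < pii:      # stay one more slot with prob. pii
            n += 1
        counts[min(n, n_max + 1)] += 1
    return [c / trials for c in counts]

pii = 0.5                              # e.g. p11 in the example chain
est = holding_time_pmf(pii, n_max=5)
for n in range(1, 6):
    # estimated vs. exact (1 - pii) * pii^(n-1)
    print(n, round(est[n], 3), round((1 - pii) * pii**(n - 1), 3))
```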
State Probabilities
• A quantity we are usually interested in is the probability of finding the chain in each state at step k, so we define
  π_i(k) = Pr{X_k = i}
• For all possible states, we define the (row) vector
  π(k) = [π_0(k), π_1(k), …]
• Using total probability, we can write
  π_i(k) = Σ_j Pr{X_k = i | X_{k−1} = j} · Pr{X_{k−1} = j}
         = Σ_j p_ji(k) · π_j(k−1)
• In vector form, one can write
  π(k) = π(k−1) P(k)
  or, for a homogeneous Markov Chain,
  π(k) = π(k−1) P
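In code, one step of this recursion is a single vector–matrix multiplication. A minimal Python sketch for the homogeneous case (the helper name `step` is ours):

```python
def step(pi, P):
    """One step of the recursion: returns pi(k) given pi(k-1) and P."""
    return [sum(pi[j] * P[j][i] for j in range(len(pi)))
            for i in range(len(pi))]
```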
State Probabilities Example
• Suppose that

  P = | 0.5    0.5    0    |
      | 0.35   0.5    0.15 |
      | 0.245  0.455  0.3  |

  with π(0) = [1 0 0]. Find π(k) for k = 1, 2, …

  π(1) = π(0) P = [1 0 0] P = [0.5 0.5 0]

• This gives the transient behavior of the system.
• In general, the transient behavior is obtained by solving the difference equation
  π(k) = π(k−1) P
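Iterating the recursion with the `step` helper from the sketch above makes the transient behavior visible; the long-run values in the comment are our own computation, not from the slides:

```python
def step(pi, P):   # same helper as in the earlier sketch
    return [sum(pi[j] * P[j][i] for j in range(len(pi)))
            for i in range(len(pi))]

P = [
    [0.5,   0.5,   0.0],
    [0.35,  0.5,   0.15],
    [0.245, 0.455, 0.3],
]
pi = [1.0, 0.0, 0.0]                       # pi(0)
for k in range(1, 21):
    pi = step(pi, P)                       # pi(k) = pi(k-1) P
    if k in (1, 2, 20):
        print(k, [round(x, 4) for x in pi])
# k=1 prints [0.5, 0.5, 0.0], matching the slide; by k=20 the vector has
# essentially converged to the stationary distribution, about
# [0.3987, 0.4952, 0.1061].
```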