
HIDDEN MARKOV CHAINS
Prof. Alagar Rangan
Dept of Industrial Engineering
Eastern Mediterranean University
North Cyprus
Source: Probability Models
Sheldon M. Ross
MARKOV CHAINS
Toss a coin repeatedly.
Denote Head = 1, Tail = 0.
Let Y_n = outcome of the nth toss:
P(Y_n = 1) = p
P(Y_n = 0) = q
Y_1, Y_2, … are iid random variables.
S_n = Y_1 + Y_2 + … + Y_n
S_n is the accumulated number of Heads in the first n trials.
 
S_{n+1} = S_n + Y_{n+1}

P(S_{n+1} = j+1 | S_n = j) = p
P(S_{n+1} = j | S_n = j) = q

S_n ~ Markov chain;
Time n = 0, 1, 2, …
States j = 0, 1, 2, …
Xn ~ Markov Chain if

P(X_{n+1} = j | X_n = i, X_{n-1}, X_{n-2}, …, X_1)
   = P(X_{n+1} = j | X_n = i) = p_ij (say)

One-step Transition Probability Matrix (rows and columns indexed by the states 0, 1, 2, …):

      | p_00  p_01  p_02  … |
P  =  | p_10  p_11  p_12  … |
      | p_20  p_21  p_22  … |
      |   .     .     .     |
n-step Transition Probabilities

p_ij^(n) = P(X_{m+n} = j | X_m = i)

The corresponding matrix:

         | p_00^(n)  p_01^(n)  … |
P^(n) =  | p_10^(n)  p_11^(n)  … |
         |    .         .        |
Simple Results:
 (a) P^(n) = P^n
 (b) Expected sojourn time in state j = 1 / (1 − p_jj)
 (c) The steady-state probabilities π = (π_0, π_1, …) satisfy π P = π, with Σ_j π_j = 1.
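Results (a) and (c) can be checked numerically. The sketch below is a minimal plain-Python illustration with a made-up 2×2 chain (the matrix is not from the notes): raising P to a high power makes every row approach the steady-state vector.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-th power of P: by result (a), the n-step transition matrix P^(n)."""
    result = [[1.0 if i == j else 0.0 for j in range(len(P))]
              for i in range(len(P))]            # identity matrix
    for _ in range(n):
        result = mat_mul(result, P)
    return result

# Illustrative 2x2 one-step matrix (made up for this sketch).
P = [[0.7, 0.3],
     [0.4, 0.6]]

# For large n every row of P^n is close to the steady-state vector,
# which for this P is (4/7, 3/7).
P100 = mat_pow(P, 100)
print(P100[0])
```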
Examples:

Weather Forecasting
States: Dry day, Wet day.
X_n = state of the nth day.

           Dry  Wet
P =  Dry | .8    .2 |
     Wet | .6    .4 |

Communication System
A signal passes through a series of stages 1, 2, 3, ….
States: signals 0, 1.
X_n = signal leaving the nth stage of the system.

P = |  p    1−p |
    | 1−q    q  |

where p and q are the probabilities that a 0 and a 1, respectively, leave a stage unchanged.

Moods of a Professor
States: cheerful (C), ok (O), unhappy (U).
X_n = mood of the Professor on the nth day.

          C    O    U
P =  C | .5   .4   .1 |
     O | .3   .4   .3 |
     U | .2   .3   .5 |
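As an illustration of the n-step result P^(n) = P^n, the two-day forecast for the weather chain can be obtained by squaring its matrix; a minimal sketch:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weather chain from the example. States: 0 = Dry, 1 = Wet.
P = [[0.8, 0.2],   # Dry -> Dry .8, Dry -> Wet .2
     [0.6, 0.4]]   # Wet -> Dry .6, Wet -> Wet .4

# P^2 gives the two-step probabilities P(X_{n+2} = j | X_n = i):
# P(dry two days from now | dry today) = .8*.8 + .2*.6 = .76
P2 = mat_mul(P, P)
print(round(P2[0][0], 2))  # 0.76
```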
Hidden Markov Chain Models
Let X_n be a Markov chain with one-step Transition Probability Matrix P = (p_ij).
Let S be a set of signals.
A signal from S is emitted each time the Markov chain enters a state.
If the Markov chain enters state j, then the signal s is emitted with probability p(s | j), with Σ_{s∈S} p(s | j) = 1:

P(S_1 = s | X_1 = j) = p(s | j)
P(S_n = s | X_1, S_1, X_2, S_2, …, S_{n−1}, X_n = j) = p(s | j)

The above model, in which the sequence of signals S_1, S_2, … is observed while the sequence of the underlying Markov chain states X_1, X_2, … is unobserved, is called a hidden Markov chain model.
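A minimal simulator of this model may help fix ideas. The transition and emission numbers below are illustrative placeholders of my choosing; in a real application only the `signals` list would be observable, while `states` stays hidden.

```python
import random

# Illustrative two-state chain and two-signal emission table.
P = {0: {0: 0.9, 1: 0.1},            # p_ij
     1: {0: 0.0, 1: 1.0}}
emit = {0: {"a": 0.99, "u": 0.01},   # p(s | j)
        1: {"a": 0.96, "u": 0.04}}

def step(dist):
    """Draw one outcome from a {value: probability} distribution."""
    r, acc = random.random(), 0.0
    for value, prob in dist.items():
        acc += prob
        if r < acc:
            return value
    return value  # guard against float rounding

def simulate(n, start):
    """Run the chain n steps; each visited state emits one signal."""
    x, states, signals = start, [], []
    for _ in range(n):
        states.append(x)
        signals.append(step(emit[x]))  # signal emitted on entering x
        x = step(P[x])                 # next state of the hidden chain
    return states, signals

states, signals = simulate(5, start=0)
print(signals)  # observed; 'states' would be hidden in practice
```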

pt i 
ps j 
signal
time
X
X
n 1
i
j
p
ij
n
state of the chain
Examples:

Production Process
States: Good state (1), Poor state (2).
Signals: acceptable (a), unacceptable (u).

P = | .9   .1 |
    |  0    1 |

p(a | 1) = .99, p(u | 1) = .01
p(a | 2) = .96, p(u | 2) = .04
Moods of the Professor
States: C, O, U.
Signals: grades high average, grades average.

Condition of a Patient subject to Therapy
States: Improving, Deteriorating.
Signals: red cell count high, red cell count low.

Signal Processing
States: signal 0 sent, signal 1 sent.
Signals: received as 0, received as 1.
Let S^n = (S_1, S_2, …, S_n) be the random vector of the first n signals.
For a fixed sequence of signals, let s^n = (s_1, s_2, …, s_n) and

F_n(j) = P(X_n = j, S^n = s^n).

Then

P(X_n = j | S^n = s^n) = P(X_n = j, S^n = s^n) / P(S^n = s^n) = F_n(j) / Σ_i F_n(i).
It can be shown that

F_n(j) = p(s_n | j) Σ_i F_{n−1}(i) p_ij,   n = 2, 3, …

Now starting with

F_1(i) = P(X_1 = i, S_1 = s_1) = p_i p(s_1 | i),

we can recursively determine F_2(i), F_3(i), …, up to F_n(i).

Note:
P(S^n = s^n) = Σ_j F_n(j).
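The forward recursion translates directly into a short function. This is a sketch; the names `init` (the distribution of X_1), `P`, and `emit` (the table of p(s | j)) are of my choosing, with states indexed 0, …, m−1.

```python
def forward(init, P, emit, signals):
    """Return [F_n(0), ..., F_n(m-1)] for the observed signal sequence."""
    m = len(init)
    # F_1(j) = p_j * p(s_1 | j)
    F = [init[j] * emit[j][signals[0]] for j in range(m)]
    for s in signals[1:]:
        # F_k(j) = p(s_k | j) * sum_i F_{k-1}(i) * p_ij
        F = [emit[j][s] * sum(F[i] * P[i][j] for i in range(m))
             for j in range(m)]
    return F
```

P(S^n = s^n) is then `sum(F)`, and P(X_n = j | S^n = s^n) is `F[j] / sum(F)`.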
We can also compute the above using a backward recursion, with

B_k(i) = P(S_{k+1} = s_{k+1}, …, S_n = s_n | X_k = i).
Example:
Let
P_11 = .9, P_12 = .1
P_21 = 0, P_22 = 1

p(a | 1) = .99, p(u | 1) = .01
p(a | 2) = .96, p(u | 2) = .04

Let P(X_1 = 1) = .8.
Let the first 3 items produced be a, u, a, i.e. s^3 = (a, u, a).

F_1(1) = p_1 p(s_1 | 1) = (.8)(.99) = .792
F_1(2) = p_2 p(s_1 | 2) = (.2)(.96) = .192

Similarly calculating F_2(i) and F_3(i) using the recursion,

P(X_3 = 1 | s^3 = (a, u, a)) = F_3(1) / (F_3(1) + F_3(2)) = .364
Predicting the states.

Suppose the first n observed signals are s^n = (s_1, …, s_n). We wish to predict the first n states of the Markov chain using this data.

Case 1
We wish to maximize the expected number of states that are correctly predicted. For each k = 1, 2, …, n, we calculate P(X_k = j | S^n = s^n) and choose the j which maximizes it as the predictor of X_k.

Case 2
A different problem arises if we regard the sequence of states as a single entity. For instance, in signal processing, while X_1, X_2, …, X_n may be the actual message sent, S_1, S_2, …, S_n would be what is received. Thus the objective is to predict the actual message in its entirety.

Let X^k = (X_1, X_2, …, X_k). Our problem is to find the sequence of states i_1, i_2, …, i_n that maximizes

P(X^n = (i_1, i_2, …, i_n) | S^n = s^n) = P(X^n = (i_1, i_2, …, i_n), S^n = s^n) / P(S^n = s^n).
To solve the above we let

V_k(j) = max_{i_1, i_2, …, i_{k−1}} P(X^{k−1} = (i_1, i_2, …, i_{k−1}), X_k = j, S^k = s^k).

We can show, using probabilistic arguments, that

V_k(j) = p(s_k | j) max_i p_ij V_{k−1}(i).

Starting with

V_1(j) = P(X_1 = j, S_1 = s_1) = p_j p(s_1 | j),

we can recursively determine V_n(j) for each j.
This procedure is known as the Viterbi Algorithm.
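The recursion can be sketched in code. Backpointers (`back`) are added so that the maximizing state sequence itself can be read off; this bookkeeping is an implementation detail not spelled out above. States are indexed 0, …, m−1.

```python
def viterbi(init, P, emit, signals):
    """Most likely state sequence given the observed signals."""
    m = len(init)
    # V_1(j) = p_j * p(s_1 | j)
    V = [init[j] * emit[j][signals[0]] for j in range(m)]
    back = []
    for s in signals[1:]:
        prev, step, V = V, [], []
        for j in range(m):
            # V_k(j) = p(s_k | j) * max_i p_ij * V_{k-1}(i)
            i_best = max(range(m), key=lambda i: prev[i] * P[i][j])
            step.append(i_best)
            V.append(emit[j][s] * prev[i_best] * P[i_best][j])
        back.append(step)
    # Trace the backpointers from the maximizing final state.
    j = max(range(m), key=lambda j: V[j])
    path = [j]
    for step in reversed(back):
        j = step[j]
        path.append(j)
    return list(reversed(path))

# With the production-process numbers and observed signals a, u, a:
P = [[0.9, 0.1], [0.0, 1.0]]
emit = [{"a": 0.99, "u": 0.01}, {"a": 0.96, "u": 0.04}]
print(viterbi([0.8, 0.2], P, emit, ["a", "u", "a"]))
# [1, 1, 1]: the single most likely path stays in the poor state (2)
```

Note how this differs from Case 1: the Viterbi path maximizes the probability of the whole sequence, which need not pick the per-step maximizers.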