
In the name of GOD.
Sharif University of Technology
Stochastic Processes CE 695 Dr. H.R. Rabiee
Homework 6 (Markov Chains)
For matrix multiplications, it is recommended to use matrix-based software such as Matlab or Octave.
1. Consider the Markov chain given in figure 1.
Figure 1: Problem 1
(a) If the initial probability distribution is π = [1/2, 1/4, 1/12, 1/6], what is the probability distribution after two steps (π^(2))?
(b) Find the steady-state probability distribution of this process (i.e. π* such that π*P = π*).
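A minimal Octave sketch of both computations; the transition matrix P below is only a placeholder, since the actual entries must be read off figure 1:

    % Placeholder transition matrix -- replace with the chain of figure 1.
    P = [0.5 0.5 0.0 0.0;
         0.2 0.3 0.5 0.0;
         0.0 0.3 0.3 0.4;
         0.1 0.0 0.4 0.5];
    pi0 = [1/2 1/4 1/12 1/6];        % given initial distribution
    pi2 = pi0 * P^2                  % part (a): distribution after two steps
    % Part (b): pi* is the left eigenvector of P for eigenvalue 1.
    [V, D] = eig(P');
    [~, k] = min(abs(diag(D) - 1));  % eigenvalue closest to 1
    piStar = V(:, k)' / sum(V(:, k))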
2. A marksman is shooting at a target. Every time he hits the target his confidence for the next shot goes up, and he hits the target the next time with probability 0.9. Every time he misses the target his confidence falls, and he hits the target the next time with probability 0.6. Draw the Markov chain of this process. Over a long period of time, what is his average success rate in hitting the target?
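Here the two-state chain (state 1: the last shot hit, state 2: the last shot missed) is fully specified, so the long-run success rate can be sanity-checked numerically in Octave:

    % States: 1 = last shot hit, 2 = last shot missed.
    P = [0.9 0.1;
         0.6 0.4];
    % Solve pi*P = pi* together with sum(pi*) = 1 as a linear system.
    A = [P' - eye(2); ones(1, 2)];
    piStar = (A \ [0; 0; 1])';
    successRate = piStar(1)          % long-run fraction of shots that hit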
3. At an airport security checkpoint there are a metal detector, an explosive residue detector and a very long line of people waiting to get checked
through. (Assume the line is so long we can consider it infinitely long for
this exercise. Also assume that a person may not enter the metal detector
until there is no one waiting to enter the explosive detector.) Every traveler must go through these two checks, metal detector first, followed by
the explosive detector. Assume that the number of seconds required for
the metal detector is geometrically distributed with an expected time of 10
seconds and that the number of seconds required for the explosive detector is also geometrically distributed with an expected time of 20 seconds.
Graph the Markov chain describing the states of the metal detector and
explosive detector.
(a) What is the probability that each detector is in use?
(b) What is the probability that both detectors are simultaneously in
use?
Note: Recall that the expected value of a geometrically distributed random variable with parameter p is 1/p.
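One way to sanity-check the answers is a discrete-time simulation; the sketch below, in Octave, assumes a particular ordering of events within each second (the explosive detector resolves first), which is a modeling choice, not part of the problem statement:

    % Per-second completion probabilities (expected times 10 s and 20 s).
    pM = 1/10; pE = 1/20;
    T = 1e6;                           % seconds to simulate
    eBusy = false; blocked = false;    % M always holds a person (infinite line)
    mUse = 0; eUse = 0; bothUse = 0;
    for t = 1:T
      if eBusy && rand() < pE, eBusy = false; end  % E may finish this second
      if blocked && ~eBusy                         % blocked person enters E
        eBusy = true; blocked = false;
      elseif ~blocked && rand() < pM               % M finishes its scan
        if eBusy, blocked = true; else, eBusy = true; end
      end
      mUse = mUse + ~blocked;            % M is actively scanning unless blocked
      eUse = eUse + eBusy;
      bothUse = bothUse + (eBusy && ~blocked);
    end
    [mUse eUse bothUse] / T              % estimates for parts (a) and (b)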
4. Repeat problem 3 under the assumption that each detector is equally
likely to finish in exactly 10 seconds or exactly 20 seconds. What is the
probability that both detectors are busy?
5. In problem 3, drop the underlined assumption (i.e. the assumption that a person may not enter the metal detector until there is no one waiting to enter the explosive detector). How many states do we need to model the problem with a Markov chain?
6. Define the hidden Markov model (π, P, B) with the following parameters:
• Three states S1, S2, S3, alphabet A = {1, 2, 3}
• P =
    [ 0    0.5  0.5 ]
    [ 1    0    0   ]
    [ 0    1    0   ]
• π = (1, 0, 0)^T
• B =
    [ 0.5  0.5  0   ]
    [ 0.5  0    0.5 ]
    [ 0    0.5  0.5 ]
where P describes the transition probabilities and Bij is the probability of observing j when the process is in state i. What are all the possible state sequences for the following observed sequences O, and what is P(O | π, P, B)?
(a) O = 1, 2, 3.
(b) O = 1, 3, 1.
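After enumerating the state sequences by hand, P(O | π, P, B) can be cross-checked with the forward algorithm; a short Octave sketch using the matrices given above:

    % Forward algorithm: alpha(i) = P(O_1..O_t, state_t = S_i) after step t.
    P  = [0 0.5 0.5; 1 0 0; 0 1 0];
    B  = [0.5 0.5 0; 0.5 0 0.5; 0 0.5 0.5];
    p0 = [1 0 0];                        % initial distribution pi
    for O = {[1 2 3], [1 3 1]}           % parts (a) and (b)
      o = O{1};
      alpha = p0 .* B(:, o(1))';         % initialization
      for t = 2:length(o)
        alpha = (alpha * P) .* B(:, o(t))';   % induction step
      end
      probO = sum(alpha)                 % P(O | pi, P, B)
    end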
7. Consider a person who walks randomly on the vertices of a graph G, starting from a node v. At each step, he moves to one of the adjacent vertices, each with the same probability. He continues until he reaches vertex u. Let Tuv be the random variable representing the time at which he reaches u. Give an example in which adding an edge to G that reduces the length of the shortest path between v and u nevertheless increases the expected value of Tuv.
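Candidate graphs can be tested by estimating the expected hitting time with simulation; a sketch in Octave, where the adjacency matrix A and the labels of v and u are placeholders to fill in:

    % Monte-Carlo estimate of E[Tuv] for a simple random walk on G.
    A = [0 1 0; 1 0 1; 0 1 0];       % placeholder adjacency matrix
    v = 1; u = 3;                    % start and target vertices
    R = 1e5; total = 0;              % number of walks to average over
    for r = 1:R
      x = v; steps = 0;
      while x ~= u
        nbrs  = find(A(x, :));             % neighbours of the current vertex
        x     = nbrs(randi(numel(nbrs)));  % uniform step to a neighbour
        steps = steps + 1;
      end
      total = total + steps;
    end
    Tuv = total / R                  % compare before/after adding an edge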
8. A Markov process is described in figure 2. This process starts from either S1 (with probability q) or S2 (with probability 1 − q). Suppose that α is a positive parameter.
(a) Find the steady-state probabilities π3, π4, π5, π6, π7.
(b) Is your answer to the previous part applicable to the case α = 0? Why?
(c) (Optional) Analyze states 1 and 2 as time goes to infinity.
Figure 2: Problem 8
9. Prove the following theorems:
a) In an irreducible Markov chain, either all states are transient, all states
are recurrent null, or all states are recurrent positive.
b) In a finite-state Markov chain, all recurrent states are positive, and
it is impossible that all states are transient. If the Markov chain is also
irreducible, then it has no transient states.
c) In a Markov chain, the states can be divided, in a unique manner, into
irreducible sets of recurrent states and a set of transient states.
10. A random walker moves on the states of the Markov chain given in figure 3, according to its transition probabilities, until he arrives at an absorbing state.
Figure 3: Problem 10
Each state has a reward written on it. Each time he visits a state, he gains its reward (if he visits a state n times, he gains its reward n times). He starts with the initial probability distribution π = [1/2, 1/4, 1/8, 1/8, 0].
(a) What will be his expected total gain?
(b) What will be his expected total gain if he must pay $1 for each step?
(c) What will be his expected total gain if the rewards are as follows (and he pays nothing per step)?
state 1: $0, state 2: $0, state 3: $0, state 4: $10, state 5: $25
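A sketch of the standard absorbing-chain computation in Octave; every numerical value below is a placeholder (the real transition blocks, rewards, and transient/absorbing split must be read off figure 3):

    % Expected total reward in an absorbing chain via the fundamental matrix.
    Q   = [0.2 0.3 0.1; 0.1 0.4 0.2; 0.3 0.1 0.2];  % transient -> transient
    R   = [0.4 0.0; 0.3 0.0; 0.1 0.3];              % transient -> absorbing
    rT  = [1; 2; 3];               % per-visit rewards of the transient states
    rA  = [10; 25];                % rewards of the absorbing states (paid once)
    p0T = [1/2 1/4 1/8];           % initial mass on the transient states
    p0A = [1/8 0];                 % initial mass on the absorbing states
    N   = inv(eye(3) - Q);         % N(i,j) = expected visits to j starting at i
    gain = p0T * (N*rT + N*R*rA) + p0A * rA  % part (a): expected total reward
    % Part (b): subtract the expected number of steps, p0T * N * ones(3, 1).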
11. Let P be the transition matrix of the Markov chain given in figure 4.
Figure 4: Problem 11
(a) Find lim_{n→∞} P^n.
(b) Find the expected number of steps the process will take before absorption if the process:
i. starts from state 1.
ii. starts from state 2.
iii. starts from state 3.
iv. starts with the initial probability distribution π = [1/2, 1/4, 1/12, 1/6].
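A numerical sketch in Octave; the transition matrix is a placeholder for the chain of figure 4, with states ordered so the transient ones come first and the last state assumed absorbing:

    % Placeholder transition matrix -- replace with the chain of figure 4.
    P = [0.1 0.4 0.3 0.2;
         0.2 0.2 0.1 0.5;
         0.3 0.1 0.1 0.5;
         0.0 0.0 0.0 1.0];
    P^1000                        % part (a): numerical stand-in for lim P^n
    Q = P(1:3, 1:3);              % transient block
    N = inv(eye(3) - Q);          % fundamental matrix
    t = N * ones(3, 1)            % parts i.-iii.: expected steps to absorption
    [1/2 1/4 1/12] * t            % part iv.: initial mass already on the
                                  % absorbing state contributes zero steps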