
M.1 MARKOV PROCESSES
(1) A Markov process is a finite sequence of experiments in which
1. Each experiment has the same set of possible outcomes.
2. The probabilities of the outcomes depend only on the outcome of the preceding experiment.
The outcome at any stage of a Markov process is called the state of the experiment.
(2) We can write the conditional probabilities from the tree diagram of the process in a matrix, called the
transition matrix or stochastic matrix, denoted by T. T will have the following
form:
T = [t_{ij}],   where t_{ij} = P(next state is i | current state is j),
so the jth column of T lists the outcome probabilities when the process is currently in state j.
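As a concrete sketch, consider a hypothetical two-state weather chain (state 0 = sunny, state 1 = rainy); the chain and its probabilities are made up for illustration and are not from the notes. Column j holds the outcome probabilities given current state j:

    import numpy as np

    # Entry (i, j) = P(next state is i | current state is j),
    # so each column must sum to 1.
    T = np.array([
        [0.9, 0.5],   # P(sunny | sunny), P(sunny | rainy)
        [0.1, 0.5],   # P(rainy | sunny), P(rainy | rainy)
    ])

    assert np.allclose(T.sum(axis=0), 1.0)  # each column sums to 1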
(3) First, we need a column matrix X_0. Let the ith element of X_0 be the initial
probability of the ith state. In other words, X_0 represents the initial probability
distribution.
Then, to find the probability distribution X_1 of the first stage, we simply
multiply X_0 by the transition matrix T, namely,
X_1 = T X_0
We repeat this, observing that the distribution X_n for the nth stage can be found
by computing
X_n = T X_{n-1} = T^2 X_{n-2} = \cdots = T^n X_0
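A minimal computational sketch of this iteration, reusing the hypothetical weather matrix above with an assumed initial distribution X_0 = (0.3, 0.7):

    import numpy as np

    T = np.array([[0.9, 0.5],
                  [0.1, 0.5]])
    X = np.array([0.3, 0.7])   # assumed initial distribution X_0

    # X_n = T X_{n-1}: update the distribution one stage at a time.
    for n in range(1, 4):
        X = T @ X
        print(f"X_{n} = {X}")

    # Equivalently, X_n = T^n X_0 in a single step.
    X3 = np.linalg.matrix_power(T, 3) @ np.array([0.3, 0.7])
    assert np.allclose(X, X3)

In practice, updating X one stage at a time is cheaper than forming the power T^n explicitly, though both give the same X_n.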
(4) We say that a matrix T is a stochastic or transition matrix if
1. the matrix T is square,
2. every entry of T is between 0 and 1 (including 0 and 1), and
3. the entries in each column of T sum to 1.
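These three conditions translate directly into a check; here is a small sketch (the function name is_stochastic is ours, not from the notes):

    import numpy as np

    def is_stochastic(T, tol=1e-9):
        """Check the three defining conditions of a (column-)stochastic matrix."""
        T = np.asarray(T, dtype=float)
        square = T.ndim == 2 and T.shape[0] == T.shape[1]               # condition 1
        in_range = bool(np.all((T >= 0) & (T <= 1)))                    # condition 2
        columns_sum_to_one = np.allclose(T.sum(axis=0), 1.0, atol=tol)  # condition 3
        return square and in_range and columns_sum_to_one

    print(is_stochastic([[0.9, 0.5], [0.1, 0.5]]))   # True
    print(is_stochastic([[0.9, 0.6], [0.1, 0.5]]))   # False: second column sums to 1.1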
(5) If there are n states in a Markov process, then the transition matrix is n × n.