12 MARKOV PROCESSES
Continuous-time Markov chains (CTMCs)
In the preceding lectures we have studied discrete-time Markov chains
(DTMCs).
Definition of discrete-time Markov chain:
A stochastic process {Xn , n ≥ 0} with state space S is called a discrete-time
Markov chain (DTMC) if for all i and j in S we have
P (Xn+1 = j | Xn = i, Xn−1, . . . , X0) = P (Xn+1 = j | Xn = i).
Given the present Xn and the history X0 , . . . , Xn−1 of the process, the future
Xn+1 only depends on the present and not on the past.
In this chapter we study stochastic processes in continuous time having a
similar property.
Properties of the exponential distribution
In the description of continuous-time Markov chains an important role is
played by the exponential distribution.
A random variable T is exponentially distributed with parameter λ
(Notation: T ∼ Exp(λ)) if we have
F(t) = P(T ≤ t) = 0 if t ≤ 0, and F(t) = 1 − e^(−λt) if t > 0.
The corresponding probability density function is given by
f(t) = F′(t) = 0 if t ≤ 0, and f(t) = λe^(−λt) if t > 0.
Interpretation of the probability density function: for a small interval of length dt,
P(t ≤ T ≤ t + dt) ≈ f(t) dt.
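As a quick sanity check of the distribution function, the following sketch (not part of the lecture; the values of λ and t are illustrative) compares the empirical CDF of simulated Exp(λ) samples with 1 − e^(−λt):

```python
# Sketch: empirically checking F(t) = 1 - e^(-lambda*t) by sampling
# from Exp(lambda) with Python's standard library.
import math
import random

random.seed(42)
lam = 2.0   # rate parameter lambda (illustrative value)
t = 0.5
samples = [random.expovariate(lam) for _ in range(100_000)]

empirical = sum(s <= t for s in samples) / len(samples)   # fraction with T <= t
theoretical = 1 - math.exp(-lam * t)
print(abs(empirical - theoretical))  # small, on the order of 1/sqrt(n)
```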
Expectation and variance
If T is exponentially distributed with parameter λ, then we have
E(T) = ∫_{−∞}^{∞} t f(t) dt = ∫_{0}^{∞} λt e^(−λt) dt = 1/λ

and

E(T²) = ∫_{−∞}^{∞} t² f(t) dt = ∫_{0}^{∞} λt² e^(−λt) dt = 2/λ².

Hence

Var(T) = E(T²) − (E(T))² = 1/λ².
Memoryless property
If the random variable T is exponentially distributed, then we have for all
s, t > 0:
P (T > s + t | T > s) = P (T > t).
In words:
If the lifetime of a machine is at least s, then the probability that the remaining lifetime is at least t is equal to the probability that the lifetime of a new
machine is at least t.
In other words, the current age of a machine does not influence the remaining lifetime of the machine. This property is called the memoryless property
of the exponential distribution.
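The memoryless property can be observed directly in simulation. In this sketch (λ, s, t are illustrative), the conditional survival probability given T > s is compared with the unconditional one:

```python
# Sketch: checking P(T > s + t | T > s) = P(T > t) by simulation.
import random

random.seed(1)
lam, s, t = 1.0, 0.7, 1.2
xs = [random.expovariate(lam) for _ in range(300_000)]

survived_s = [x for x in xs if x > s]            # condition on T > s
cond = sum(x > s + t for x in survived_s) / len(survived_s)
uncond = sum(x > t for x in xs) / len(xs)
print(cond, uncond)  # both close to e^(-1.2)
```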
Constant failure-rate
If the random variable T is exponentially distributed with parameter λ, then
for dt small and for all t, we have
P(T ≤ t + dt | T > t) = P(t < T ≤ t + dt) / P(T > t)
                      = (e^(−λt) − e^(−λ(t+dt))) / e^(−λt)
                      = 1 − e^(−λ dt)
                      ≈ λ dt.
The last step follows from the Taylor series of e^x:

e^x = Σ_{k=0}^{∞} x^k / k!
Conclusion: A machine with an exponentially distributed lifetime has a constant failure rate (compare with property of geometric distribution).
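The constancy of the failure rate can be checked numerically: the conditional failure probability over a small interval dt is the same at every age t (λ and dt below are illustrative):

```python
# Sketch: P(T <= t+dt | T > t) ~= lambda*dt, independent of t
# (constant failure rate of the exponential distribution).
import math

lam, dt = 2.0, 1e-4
for t in [0.0, 1.0, 5.0]:
    hazard = (math.exp(-lam * t) - math.exp(-lam * (t + dt))) / math.exp(-lam * t)
    print(t, hazard, lam * dt)  # hazard is approximately 2e-4 for every t
```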
Minimum of exponentially distributed random variables
Assume T1 ∼ Exp(λ1), T2 ∼ Exp(λ2), . . . , Tk ∼ Exp(λk) with
T1, T2, . . . , Tk independent, and define T = min(T1, T2, . . . , Tk).
Then we have:
1. T ∼ Exp(λ) with λ = λ1 + · · · + λk ,
2. P(T = Ti) = λi/λ,
3. P(T > t | T = Ti) = e^(−λt).
In words:
1. the minimum of exponentially distributed random variables is again exponentially distributed,
2. the probability that Ti is the minimum, is proportional to the parameter
λi,
3. the time until the occurrence of the first event is independent of which
of the events occurs first.
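Properties 1 and 2 can be verified by simulation for k = 2 (the rates λ1, λ2 below are illustrative):

```python
# Sketch: simulating T = min(T1, T2) with T1 ~ Exp(l1), T2 ~ Exp(l2) to
# check E(T) = 1/(l1+l2) and P(T = T1) = l1/(l1+l2).
import random

random.seed(7)
l1, l2 = 1.0, 3.0
n = 200_000
wins1 = 0      # number of runs where T1 is the minimum
total = 0.0    # running sum of min(T1, T2)
for _ in range(n):
    t1 = random.expovariate(l1)
    t2 = random.expovariate(l2)
    total += min(t1, t2)
    wins1 += t1 < t2

print(total / n, 1 / (l1 + l2))    # mean of T, close to 1/4
print(wins1 / n, l1 / (l1 + l2))   # P(T = T1), close to 1/4
```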
Definition of continuous-time Markov chain:
A stochastic process {X(t), t ≥ 0} with state space S is called a continuous-time Markov chain (CTMC) if for all i and j in S and for all s, t ≥ 0 we
have
P (X(s+t) = j | X(s) = i, X(u), 0 ≤ u < s) = P (X(s+t) = j | X(s) = i).
In words:
"Given the present X(s) and the past X(u), 0 ≤ u < s of the process, the
future X(s + t) only depends on the present and not on the past."
Just as before, we restrict our attention to time-homogeneous Markov processes, i.e., continuous-time Markov chains with the property that, for all
s, t ≥ 0,
P (X(s + t) = j | X(s) = i) = P (X(t) = j | X(0) = i) = pi,j (t).
The probabilities pi,j (t) are called transition probabilities and the N × N
matrix
P(t) = [ p1,1(t)  p1,2(t)  . . .  p1,N(t)
         p2,1(t)  p2,2(t)  . . .  p2,N(t)
           ...      ...             ...
         pN,1(t)  pN,2(t)  . . .  pN,N(t) ]
is called the transition probability matrix.
Just as in the discrete-time case we have the following properties:
pi,j(t) ≥ 0,  for all i and j in S and t ≥ 0,

Σ_{j=1}^{N} pi,j(t) = 1,  for all i in S and t ≥ 0,

pi,j(s + t) = Σ_{k∈S} pi,k(s) · pk,j(t),  for all i and j in S and s, t ≥ 0.
Hence, the matrix P (t) is for all t a stochastic matrix and we have
P (s + t) = P (s)P (t).
How do we describe a continuous-time Markov chain?
A discrete-time Markov chain is described by the 1-step transition probability
matrix. The n-step transition probability matrix then equals P (n) = P n .
An analogous description in the continuous-time case is not possible: there
is no "smallest time-step"!
Therefore, we describe a CTMC in a different way by making use of the
memoryless property of the exponential distribution.
From the fact that for a Markov process the future, given present and past, is
independent of the past, it follows immediately that the amount of time the
process stays uninterruptedly in a certain state is memoryless, and hence
exponentially distributed.
Description continuous-time Markov chain
A continuous-time Markov chain is a stochastic process {X(t) : t ≥ 0}
with the following random evolution:
• the process stays an exponential amount of time, say with parameter ri ,
in state i,
• after that the process jumps, independent of the history of the process,
with probability pi,j to state j ,
• then, the process stays an exponential amount of time, say with parameter
rj , in state j ,
• after that the process jumps, independent of the history of the process,
with probability pj,k to state k ,
• and so on.
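The random evolution described above can be sketched directly in code. The rates r_i and jump probabilities p_ij below are illustrative choices, not values from the text:

```python
# Sketch: simulating a CTMC by alternating exponential holding times
# (parameter r_i in state i) and jumps with probabilities p_ij.
import random

random.seed(3)
r = {0: 2.0, 1: 1.0}              # holding-time rates r_i (illustrative)
p = {0: {1: 1.0}, 1: {0: 1.0}}    # jump probabilities p_ij (illustrative)

def simulate(horizon, state=0):
    """Return the list of (jump_time, new_state) pairs up to `horizon`."""
    clock, path = 0.0, []
    while True:
        clock += random.expovariate(r[state])   # exponential holding time
        if clock > horizon:
            return path
        states, probs = zip(*p[state].items())
        state = random.choices(states, weights=probs)[0]
        path.append((clock, state))

path = simulate(horizon=10.0)
print(path[:3])
```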
Visualization continuous-time Markov chain
Just as for a discrete-time Markov chain, we can visualize a continuous-time
Markov chain using a directed graph.
If we can jump from state i to state j , we draw an arc from node i to node j
with the value of
ri,j = ripi,j
written next to it.
The quantity ri,j is called the transition rate from state i to state j .
Interpretation of transition rate:
The probability that the process jumps in a small interval of length dt from
state i to state j equals ri,j dt.
Alternatively: whenever the process is in state i, it attempts the jump from state i to state j at rate ri,j per unit of time.
The rate matrix R
R = [ r1,1  r1,2  . . .  r1,N
      r2,1  r2,2  . . .  r2,N
       ...   ...          ...
      rN,1  rN,2  . . .  rN,N ]
plays a similar role for a CTMC as the 1-step transition probability matrix P
does for a DTMC. Once you know the initial distribution and the rate matrix
of the CTMC, the random evolution of the process is completely described.
Warning:
In many other books, the authors use the so-called generator matrix Q instead of the rate matrix R. For i ≠ j, qi,j = ri,j , but for i = j they define
qi,i = −ri (instead of ri,i = 0).
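The conversion from R to Q described in the warning is mechanical; a minimal sketch (the two-state matrix and the values of λ and µ are illustrative):

```python
# Sketch: building the generator matrix Q from the rate matrix R:
# q_ij = r_ij for i != j, and q_ii = -r_i = -(sum of row i of R).
def generator_from_rates(R):
    """Convert a rate matrix (list of lists, zero diagonal) into a generator."""
    n = len(R)
    Q = [row[:] for row in R]
    for i in range(n):
        Q[i][i] = -sum(R[i][j] for j in range(n) if j != i)
    return Q

lam, mu = 1.0, 4.0          # illustrative numeric rates
R = [[0.0, lam],
     [mu, 0.0]]
Q = generator_from_rates(R)
print(Q)  # [[-1.0, 1.0], [4.0, -4.0]]
```

Note that every row of Q sums to zero, which is the usual defining property of a generator.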
Example 6.5
Machine is alternately operating and in repair. The time it operates, B , is
random: B ∼ Exp(µ). The repair time, R, is also random: R ∼ Exp(λ).
B and R are independent.
X(t) = state of machine at time t.
(State 0 : machine in repair; State 1 : machine is operating)
{X(t), t ≥ 0} is continuous-time Markov chain with state space S = {0, 1}.
At any time t we have remaining time in state 1 ∼ Exp(µ) or remaining
time in state 0 ∼ Exp(λ). Hence, r1 = µ and r0 = λ.
Furthermore, p1,0 = p0,1 = 1, so that r0,1 = r0 ∗ p0,1 = λ and r1,0 =
r1 ∗ p1,0 = µ. Hence,
R = [ 0  λ
      µ  0 ]
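A small simulation of this alternating machine confirms the two holding-time parameters: the mean operating time is 1/µ and the mean repair time is 1/λ (the numeric values of λ and µ below are illustrative):

```python
# Sketch: simulating the operate/repair cycles of Example 6.5 and checking
# the mean holding times 1/mu (operating) and 1/lambda (in repair).
import random

random.seed(11)
lam, mu = 2.0, 0.5   # repair rate lambda, failure rate mu (illustrative)
up_times = [random.expovariate(mu) for _ in range(100_000)]    # B ~ Exp(mu)
down_times = [random.expovariate(lam) for _ in range(100_000)] # R ~ Exp(lambda)

print(sum(up_times) / len(up_times), 1 / mu)        # close to 2.0
print(sum(down_times) / len(down_times), 1 / lam)   # close to 0.5
```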
Example 6.6
Two independent machines, 1 and 2.
Time that machine i (i = 1, 2) is operating is Bi ∼ Exp(µ).
Repair time of machine i is Ri ∼ Exp(λ).
Each machine has its own repairman.
X(t) = number of machines operating at time t.
Then, {X(t), t ≥ 0} is a CTMC with state space S = {0, 1, 2}.
State 0:
remaining time in 0 = min (remaining R1 , remaining R2 ) ∼ Exp(2λ).
Hence, r0 = 2λ.
If a machine is repaired, then the system jumps to state 1 with probability
p0,1 = 1. Hence, r0,1 = r0 ∗ p0,1 = 2λ.
State 1:
remaining time in 1 = min(remaining B1, remaining R2) or min(remaining R1, remaining B2) ∼ Exp(λ + µ). Hence, r1 = λ + µ.
Operating machine fails before the other machine is repaired: transition to state 0 occurs with probability p1,0 = µ/(λ + µ). Hence, r1,0 = r1 ∗ p1,0 = µ.
Repair of a machine is completed before the other machine fails: transition to state 2 occurs with probability p1,2 = λ/(λ + µ). Hence, r1,2 = r1 ∗ p1,2 = λ.
State 2:
remaining time in 2 = min (remaining B1 , remaining B2 ) ∼ Exp(2µ).
Hence, r2 = 2µ.
If a machine fails, then the system jumps to state 1 with probability p2,1 = 1.
Hence, r2,1 = r2 ∗ p2,1 = 2µ.
Rate matrix R is given by

R = [ 0   2λ  0
      µ   0   λ
      0   2µ  0 ]
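Building this rate matrix in code makes it easy to check that the total rate out of each state matches the derived values r0 = 2λ, r1 = λ + µ, r2 = 2µ (the numeric values below are illustrative):

```python
# Sketch: rate matrix of Example 6.6 as a function of lambda and mu, with a
# check that the row sums equal the holding-time rates r0, r1, r2.
def rate_matrix(lam, mu):
    return [[0.0, 2 * lam, 0.0],
            [mu, 0.0, lam],
            [0.0, 2 * mu, 0.0]]

lam, mu = 1.5, 2.0   # illustrative values
R = rate_matrix(lam, mu)
out_rates = [sum(row) for row in R]   # total rate out of each state
print(out_rates)  # [2*lam, lam + mu, 2*mu] = [3.0, 3.5, 4.0]
```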