Lecture 19
Markov chains in insurance
Introduction
We will describe how certain types of Markov processes can be used to
model behavior that is useful in insurance applications.
The focus in this lecture is on applications.
Continuous time Markov chains (1)
A continuous time Markov chain defined on a finite or countably infinite
state space S is a stochastic process X_t, t ≥ 0, such that for any 0 ≤ s ≤ t
P(X_t = x | I_s) = P(X_t = x | X_s),
where
I_s = all information generated by X_u for u ∈ [0, s].
Hence, when calculating the probability P(X_t = x | I_s), the only thing that
matters is the value of X_s. This is the Markov property.
Here and onwards, all states we consider are assumed to be elements of S.
Continuous time Markov chains (2)
We only consider time-homogeneous Markov chains, which means that all
Markov chains X_t we consider have the property
P(X_{s+t} = y | X_s = x) = P(X_t = y | X_0 = x).
We call the function
p_t(x, y) = P(X_t = y | X_0 = x)
the transition function.
Note that
P(X_t = x | I_s) = {Markov property}
= P(X_t = x | X_s)
= {definition of the transition function}
= p_t(X_s, x).
Continuous time Markov chains (3)
The transition intensity from state x to state y, x ≠ y, is defined by
\[
\lambda(x, y) = \lim_{t \downarrow 0} \frac{p_t(x, y)}{t}.
\]
It is not unusual to define a Markov process in terms of its transition
intensities.
Note that λ(x, y) is a constant for fixed states x and y – it is neither
dependent on time nor on any randomness.
Continuous time Markov chains (4)
Using the transition intensities, we can define the intensity matrix:
\[
\Lambda(x, y) =
\begin{cases}
\lambda(x, y) & \text{if } x \neq y \\
-\sum_{z \neq x} \lambda(x, z) & \text{if } x = y
\end{cases}
\]
Let
P(t) = [p_t(x, y)]
be the matrix of transition probabilities.
In general it holds that
P'(t) = ΛP(t) = P(t)Λ.
These are the backward and forward equations, respectively.
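The backward and forward equations above are both solved by the matrix exponential P(t) = exp(tΛ). A minimal numerical sketch, using a hypothetical two-state intensity matrix and `scipy.linalg.expm`:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state chain: intensity lam from state 0 to an absorbing state 1.
lam = 0.1
Lambda = np.array([[-lam, lam],
                   [0.0,  0.0]])

t = 2.0
P = expm(t * Lambda)   # P(t) = exp(t*Lambda) solves P'(t) = Lambda P(t) = P(t) Lambda

# Check the forward equation numerically: P'(t) ~ (P(t+h) - P(t)) / h
h = 1e-6
P_prime = (expm((t + h) * Lambda) - P) / h
print(np.allclose(P_prime, P @ Lambda, atol=1e-4))  # True
```

Since ΛP(t) = P(t)Λ for exp(tΛ), the same finite-difference check also verifies the backward equation.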
The Poisson process
A Poisson process is a Markov process with intensity matrix
\[
\Lambda =
\begin{pmatrix}
-\lambda & \lambda & 0 & 0 & \cdots \\
0 & -\lambda & \lambda & 0 & \cdots \\
0 & 0 & -\lambda & \lambda & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}
\]
It is a counting process: the only transitions possible are from n to n + 1.
We can solve the equation for the transition probabilities to get
\[
P(X(t) = n) = e^{-\lambda t} \frac{(\lambda t)^n}{n!}, \quad n = 0, 1, 2, \ldots
\]
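So X(t) is Poisson distributed with mean λt. A minimal sketch of the pmf above, with hypothetical values of λ and t, checking that the probabilities sum to one:

```python
import math

lam, t = 2.0, 3.0   # hypothetical rate and time horizon

def poisson_pmf(n, lam, t):
    """P(X(t) = n) = e^{-lam*t} (lam*t)^n / n! for a Poisson process of rate lam."""
    return math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)

# The probabilities over n = 0, 1, 2, ... sum to 1 (truncated far into the tail here).
total = sum(poisson_pmf(n, lam, t) for n in range(100))
print(round(total, 6))  # 1.0
```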
A whole-life insurance model (1)
Let us consider a simple model of a whole-life insurance.
To fit into the Markovian model, we assume a constant force of mortality λ.
This means that we have the picture

    Alive --λ--> Dead
A whole-life insurance model (2)
Let us define:
State 1 = Alive
State 2 = Dead
This implies that the intensity matrix is given by
\[
\Lambda =
\begin{pmatrix}
-\lambda & \lambda \\
0 & 0
\end{pmatrix}
\]
Note that a row of zeros for a state means that that state is absorbing.
9 / 14
A more complex life insurance model (1)
In the previous models we only considered whether the individual was alive or
dead.
In some cases we want to know if the individual is alive and healthy, or
alive and an invalid.
This leads to the following model:

    Healthy --µ1--> Invalid    Invalid --µ2--> Healthy
    Healthy --λ1--> Dead       Invalid --λ2--> Dead
A more complex life insurance model (2)
With
State 1 = Healthy
State 2 = Invalid
State 3 = Dead
the intensity matrix is given by
\[
\Lambda =
\begin{pmatrix}
-\lambda_1 - \mu_1 & \mu_1 & \lambda_1 \\
\mu_2 & -\lambda_2 - \mu_2 & \lambda_2 \\
0 & 0 & 0
\end{pmatrix}
\]
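The state probabilities at any time follow from P(t) = exp(tΛ) as before. A minimal sketch with hypothetical intensities, checking that each row of P(t) is a probability distribution and that Dead is absorbing:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensities: healthy->invalid mu1, invalid->healthy mu2,
# healthy->dead lam1, invalid->dead lam2.
mu1, mu2, lam1, lam2 = 0.2, 0.1, 0.05, 0.15
Lambda = np.array([[-lam1 - mu1, mu1,         lam1],
                   [mu2,         -lam2 - mu2, lam2],
                   [0.0,         0.0,         0.0]])

P = expm(5.0 * Lambda)   # transition probabilities over 5 time units

print(np.allclose(P.sum(axis=1), 1.0))     # each row sums to 1: True
print(np.allclose(P[2], [0.0, 0.0, 1.0]))  # Dead (state 3) is absorbing: True
```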
A model of more than one life (1)
So far we have only considered one individual.
Now assume that an insurance company has insured n individuals.
For each of the individuals, the force of mortality is the constant λ, and
the individuals die independently of each other.
Let N be the number of individuals alive after 1 year.
What is the distribution of N?
A model of more than one life (2)
Let X be the Markov process connected to one individual’s state:
\[
X(t) =
\begin{cases}
1 & \text{if the individual is alive at time } t \\
2 & \text{if the individual is dead at time } t
\end{cases}
\]
For one individual, the probability of being alive after 1 year is
P(X(1) = 1) = e^{−λ}.
Since the individuals die independently of each other, it follows that
N ∼ Bin(n, e^{−λ}).
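A small simulation sketch of this conclusion, with hypothetical values of λ and n: each individual survives the year independently with probability p = e^{−λ}, so the empirical mean of N should be close to the binomial mean np.

```python
import math
import random

lam, n = 0.02, 1000   # hypothetical force of mortality and portfolio size
random.seed(1)

p = math.exp(-lam)    # one-year survival probability for a single life

# Simulate the portfolio repeatedly; N should behave like Bin(n, p).
sims = [sum(random.random() < p for _ in range(n)) for _ in range(2000)]
mean_N = sum(sims) / len(sims)
print(abs(mean_N - n * p) < 2.0)  # empirical mean close to n*p: True
```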
Reference
I have partly used
Basic Life Insurance Mathematics by Ragnar Norberg
in this lecture.