
The pricing of a stop-loss reinsurance contract
using the Kalman-Bucy filter
Angelos Dassios
Department of Statistics
London School of Economics and Political Science
Ji-Wook Jang
Actuarial Studies
University of New South Wales, Sydney
Abstract
We use a doubly stochastic Poisson process (or Cox process) to model the claim
arrival process for common events. The shot noise process is used for the claim
intensity function within the Cox process. The Cox process with shot noise intensity
is examined using the theory of piecewise deterministic Markov processes. Since the
claim intensity is not observable, we employ state estimation on the basis of the
number of claims, i.e. we obtain the Kalman-Bucy filter. In order to use the
Kalman-Bucy filter, the claim arrival process (i.e. the Cox process) and the claim
intensity (i.e. the shot noise process) are transformed and approximated by a
two-dimensional Gaussian process. We derive a pricing formula for a stop-loss
reinsurance contract for common events, using the conditional distribution of the
claim intensity obtained by the Kalman-Bucy filter. The gross premium for a
stop-loss reinsurance contract is obtained by levying a security loading on the net
premium.
Keywords: Doubly stochastic Poisson process; Shot noise process; Piecewise
deterministic Markov processes theory; Stop-loss reinsurance contract; The Kalman-Bucy filter.
JEL Classification: G13
Angelos Dassios
Department of Statistics
London School of Economics and Political Science
Houghton Street
London WC2A 2AE
United Kingdom
Tel: +44 20 7955 7749
Fax: +44 20 7955 7416
E-Mail: [email protected]
Ji-Wook Jang
Actuarial Studies
University of New South Wales
Sydney, NSW 2052
Australia
Tel: +61 2 9385 3360
Fax: +61 2 9385 1883
E-Mail: [email protected]
1. Introduction
Insurance companies have traditionally used reinsurance contracts to hedge
themselves against losses from common events.
Let ℵ_i be the claim amounts, which are assumed to be independent and identically
distributed with distribution function H(u) (u > 0). Then the total loss in excess of a
retention limit b, up to time t, is

( ∑_{i=1}^{N_t} ℵ_i − b )^+    (1.1)

where N_t is the number of claims up to time t and
( ∑_{i=1}^{N_t} ℵ_i − b )^+ = max( ∑_{i=1}^{N_t} ℵ_i − b, 0 ).

Let C_t = ∑_{i=1}^{N_t} ℵ_i be the total amount of claims up to time t. Then

( ∑_{i=1}^{N_t} ℵ_i − b )^+ = (C_t − b)^+.    (1.2)

Therefore the stop-loss reinsurance premium at time 0 is

E{ (C_t − b)^+ }.    (1.3)
Throughout the paper, for simplicity, we assume interest rates to be constant.
In insurance modelling, the Poisson process has been used as a claim arrival process.
Extensive discussion of the Poisson process, from both applied and theoretical
viewpoints, can be found in Cramér (1930), Cox & Lewis (1966), Bühlmann (1970),
Cinlar (1975), and Medhi (1982). However, there has been a significant volume of
literature that questions the appropriateness of the Poisson process in insurance
modelling (see Seal (1983) and Beard et al.(1984)) and more specifically for rainfall
modelling (see Smith (1980) and Cox & Isham (1986)).
For some events, the assumption that the resulting claims occur according to a
Poisson process is inadequate. Therefore an alternative point process needs to be
used to generate the claim arrival process. We will employ a doubly stochastic
Poisson process, or Cox process (see Cox (1955), Bartlett (1963), Serfozo (1972),
Grandell (1976, 1991), Bremaud (1981) and Lando (1994)).
The shot noise process can be used as the parameter of the doubly stochastic Poisson
process to measure the number of claims due to common events (see Cox & Isham
(1980, 1986) and Klüppelberg & Mikosch (1995)). As time passes, the shot noise
process decreases as more and more claims are settled. This decrease continues until
another event occurs, which results in a positive jump in the shot noise process.
The shot noise process is particularly useful as a claim arrival model since it captures
the frequency and magnitude of common events and the time period needed for their
effect to die away. Therefore we will use it as the claim intensity function generating
the doubly stochastic Poisson process. We will adopt the shot noise process used by
Cox & Isham (1980):
λ_t = λ_0 e^{−δt} + ∑_{i: s_i < t} y_i e^{−δ(t − s_i)}

where:
λ_0  initial value of λ_t;
y_i  jump size of primary event i (i.e. magnitude of the contribution of primary event i to the intensity), with E(y_i) < ∞;
s_i  time at which primary event i occurs, where s_i < t;
δ    rate of exponential decay (the intensity decays but never reaches zero);
ρ    the rate of primary event arrival.

[Figure: a sample path of the shot noise intensity λ_t against t.]
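The decay-and-jump dynamics above are straightforward to simulate exactly from the primary-event times. The sketch below is an illustration we add here (it is not part of the original paper, whose authors used S-Plus); it samples λ_T for Poisson event arrivals with rate ρ and exponential jump sizes:

```python
import math
import random

def simulate_shot_noise(lam0, delta, rho, T, jump_sampler, rng):
    """Sample lambda_T = lam0*e^{-delta*T} + sum_{s_i < T} y_i*e^{-delta*(T - s_i)}.

    Primary events arrive as a Poisson process with rate rho; each event i
    contributes a jump y_i that then decays exponentially at rate delta.
    """
    lam = lam0 * math.exp(-delta * T)
    t = rng.expovariate(rho)  # first primary-event time
    while t < T:
        lam += jump_sampler(rng) * math.exp(-delta * (T - t))
        t += rng.expovariate(rho)
    return lam

rng = random.Random(42)
# Parameters echoing Example 5.1: delta = 0.5, rho = 100, Exponential(1) jumps,
# lambda_0 = 200, which equals the stationary mean mu1*rho/delta = 200.
paths = [simulate_shot_noise(200.0, 0.5, 100.0, 5.0,
                             lambda r: r.expovariate(1.0), rng)
         for _ in range(2000)]
mean_est = sum(paths) / len(paths)
```

Because λ_0 is set to the stationary mean μ_1ρ/δ = 200, the sample average of λ_T should stay close to 200; the horizon T = 5 and the number of replications are arbitrary choices for the illustration.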
The piecewise deterministic Markov processes theory developed by Davis (1984) is a
powerful mathematical tool for examining non-diffusion models. We present
definitions and important properties of the Cox and shot noise processes with the aid
of piecewise deterministic processes theory (see Dassios (1987) and Dassios &
Embrechts (1989)). This theory is used to calculate the distribution of the number of
claims and the mean of the number of claims. These are important factors in the
pricing of any reinsurance product.
Since the claim intensity is unobservable, state estimation is employed to derive
the distribution of the claim intensity. One of the methods available is the Kalman-Bucy
filter. We derive pricing models for stop-loss reinsurance contracts for common
events using the conditional distribution of the claim intensity obtained by the
Kalman-Bucy filter. The asymptotic (stationary) distribution of the claim intensity
can also be used to derive a pricing model for a stop-loss reinsurance contract for
common events (see Dassios (1987), Dassios & Jang (1998a, 1998b) and Jang (1998,
2000)).
2. Doubly stochastic Poisson process and shot noise process
Under a doubly stochastic Poisson process, or Cox process, the claim intensity
function is assumed to be stochastic. The Cox process is an appropriate claim
arrival process when claims are driven by an underlying stochastic environment.
However, little work has been done to develop this assumption further in an insurance
context. We will now proceed to examine the doubly stochastic Poisson process as
the claim arrival process.
The doubly stochastic Poisson process provides flexibility by letting the intensity not
only depend on time but also allowing it to be a stochastic process. Therefore the
doubly stochastic Poisson process can be viewed as a two step randomisation
procedure. A process λ t is used to generate another process N t by acting as its
intensity. That is, N t is a Poisson process conditional on λ t which itself is a
stochastic process (if λ t is deterministic then N t is a Poisson process).
Many alternative definitions of a doubly stochastic Poisson process can be given. We
will offer the one adopted by Bremaud (1981).
Definition 2.1 Let (Ω, F, P) be a probability space with information structure F.
The information structure F is the filtration F = {ℑ_t^λ, t ∈ [0, T]}; it consists of
σ-algebras ℑ_t^λ on Ω, for each point t in the time interval [0, T], representing the
information available at time t. Let N_t be a point process adapted to the history
ℑ_t^λ and let λ_t be a non-negative process. Suppose that λ_t is ℑ_t^λ-measurable,
t ≥ 0, and that

∫_0^t λ_s ds < ∞   almost surely (no explosions).
If for all 0 ≤ t_1 ≤ t_2 and u ∈ ℜ

E{ e^{iu(N_{t_2} − N_{t_1})} | ℑ_{t_1}^λ } = exp{ (e^{iu} − 1) ∫_{t_1}^{t_2} λ_s ds },    (2.1)

then N_t is called a ℑ_t^λ-doubly stochastic Poisson process with intensity λ_t.
•
(2.1) gives us

Pr{ N_{t_2} − N_{t_1} = k | λ_s ; t_1 ≤ s ≤ t_2 } = e^{−∫_{t_1}^{t_2} λ_s ds} ( ∫_{t_1}^{t_2} λ_s ds )^k / k!.    (2.2)
Now let us look at the shot noise process described in the previous section. The shot
noise process can never reach 0, since the decay is exponential with constant rate δ.
Jumps arrive according to a Poisson process with rate ρ, and the jump sizes are
generally distributed with distribution function G(y) (y > 0). If the jump size
distribution is exponential, its density is g(y) = αe^{−αy}, y > 0, α > 0.
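Equation (2.2) says that, conditional on the intensity path, the number of claims is Poisson with mean ∫λ_s ds; for shot noise that integral has a closed form in the event times and jumps. The following sketch is our illustration (not from the paper) of drawing N_T this way:

```python
import math
import random

def poisson_draw(mu, rng):
    """Poisson sample via Knuth's product method, splitting large means
    into chunks so that exp(-chunk) does not underflow."""
    n = 0
    while mu > 0.0:
        chunk = min(mu, 30.0)
        mu -= chunk
        L = math.exp(-chunk)
        p = 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            n += 1
    return n

def cox_count(lam0, delta, rho, T, rng):
    """Draw N_T for the Cox process with shot-noise intensity: conditional on
    the path, N_T ~ Poisson(X_T), where
    X_T = lam0*(1 - e^{-delta*T})/delta
          + sum_i y_i*(1 - e^{-delta*(T - s_i)})/delta."""
    X = lam0 * (1.0 - math.exp(-delta * T)) / delta
    t = rng.expovariate(rho)
    while t < T:
        X += rng.expovariate(1.0) * (1.0 - math.exp(-delta * (T - t))) / delta
        t += rng.expovariate(rho)
    return poisson_draw(X, rng)

rng = random.Random(7)
counts = [cox_count(200.0, 0.5, 100.0, 1.0, rng) for _ in range(1000)]
mean_N = sum(counts) / len(counts)
# With lambda_0 at the stationary mean, E(N_1) = (mu1*rho/delta)*1 = 200.
```

The parameter values again echo Example 5.1 (δ = 0.5, ρ = 100, Exponential(1) jumps); the sample size is an arbitrary choice for the illustration.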
If λ_t is a Markov process, the generator of the process (λ_t, t) acting on a function
f(λ, t) belonging to its domain is given by

A f(λ, t) = ∂f/∂t − δλ ∂f/∂λ + ρ{ ∫_0^∞ f(λ + y, t) dG(y) − f(λ, t) }.    (2.3)

It is sufficient that f(λ, t) is differentiable w.r.t. λ and t for all λ, t, and that
| ∫_0^∞ f(λ + y, t) dG(y) − f(λ, t) | < ∞, for f(λ, t) to belong to the domain of the
generator A.
Let us derive the mean and variance of λ_t assuming that λ_0 is given.

Theorem 2.2 Let λ_t be a shot noise process. Assuming that we know λ_0,

E(λ_t | λ_0) = μ_1ρ/δ + (λ_0 − μ_1ρ/δ) e^{−δt}    (2.4)

where μ_1 = ∫_0^∞ y dG(y).
Proof
Set f(λ) = λ in (2.3); then

A λ = −δλ + μ_1ρ.

From E(λ_t | λ_0) − λ_0 = E[ ∫_0^t { A λ_s | λ_0 } ds ],

E(λ_t | λ_0) = λ_0 − δ ∫_0^t E(λ_s | λ_0) ds + ∫_0^t μ_1ρ ds.

Differentiating w.r.t. t,

dE(λ_t | λ_0)/dt = −δ E(λ_t | λ_0) + μ_1ρ.

Solving the differential equation,

E(λ_t | λ_0) = μ_1ρ/δ + (λ_0 − μ_1ρ/δ) e^{−δt}.
•
Lemma 2.3 Let λ_t be as defined. Assuming that we know λ_0,

E(λ_t² | λ_0) = λ_0² e^{−2δt} + (2μ_1ρ/δ)(λ_0 − μ_1ρ/δ)(e^{−δt} − e^{−2δt}) + ( μ_1²ρ²/δ² + μ_2ρ/(2δ) )(1 − e^{−2δt})    (2.5)

where μ_2 = ∫_0^∞ y² dG(y).
Proof
Set f(λ) = λ² in (2.3); then

A λ² = −2δλ² + 2μ_1ρλ + μ_2ρ.

From E(λ_t² | λ_0) − λ_0² = E[ ∫_0^t { A λ_s² | λ_0 } ds ],

E(λ_t² | λ_0) = λ_0² − 2δ ∫_0^t E(λ_s² | λ_0) ds + 2 ∫_0^t μ_1ρ E(λ_s | λ_0) ds + ∫_0^t μ_2ρ ds.

Differentiating w.r.t. t,

dE(λ_t² | λ_0)/dt = −2δ E(λ_t² | λ_0) + 2μ_1ρ E(λ_t | λ_0) + μ_2ρ.

Multiplying by e^{2δt},

d/dt [ e^{2δt} E(λ_t² | λ_0) ] = e^{2δt} [ 2μ_1ρ E(λ_t | λ_0) + μ_2ρ ].

Solving the differential equation,

E(λ_t² | λ_0) = λ_0² e^{−2δt} + (2μ_1ρ/δ)(λ_0 − μ_1ρ/δ)(e^{−δt} − e^{−2δt}) + ( μ_1²ρ²/δ² + μ_2ρ/(2δ) )(1 − e^{−2δt}).
•
Corollary 2.4 Let λ_t be as defined. Assuming that we know λ_0,

Var(λ_t | λ_0) = ( μ_2ρ/(2δ) )(1 − e^{−2δt}).    (2.6)

Proof
Var(λ_t | λ_0) = E(λ_t² | λ_0) − { E(λ_t | λ_0) }².
Therefore (2.6) follows immediately from (2.5) and (2.4).
•
Similarly, the asymptotic (stationary) mean and variance can be obtained from
theorem 2.2 and corollary 2.4.
Corollary 2.5 Let N_t, λ_t be as defined. Furthermore, if λ_t is stationary, that is λ_0
has the stationary distribution, then

E(λ_t) = μ_1ρ/δ.    (2.7)

Proof
Let t → ∞ in (2.4) and the corollary follows immediately.
•

Corollary 2.6 Let λ_t be as defined. If λ_t is stationary then

Var(λ_t) = μ_2ρ/(2δ).    (2.8)

Proof
Let t → ∞ in (2.6) and the corollary follows immediately.
•
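The moment formulas (2.4)-(2.8) are easy to tabulate. The snippet below is added here for illustration; it encodes them and checks that (2.5) is consistent with (2.4) and (2.6):

```python
import math

def mean_lambda(lam0, delta, rho, mu1, t):
    """E(lambda_t | lambda_0), equation (2.4)."""
    stat = mu1 * rho / delta
    return stat + (lam0 - stat) * math.exp(-delta * t)

def second_moment_lambda(lam0, delta, rho, mu1, mu2, t):
    """E(lambda_t^2 | lambda_0), equation (2.5)."""
    stat = mu1 * rho / delta
    e1, e2 = math.exp(-delta * t), math.exp(-2.0 * delta * t)
    return (lam0 ** 2 * e2
            + (2.0 * mu1 * rho / delta) * (lam0 - stat) * (e1 - e2)
            + (stat ** 2 + mu2 * rho / (2.0 * delta)) * (1.0 - e2))

def var_lambda(delta, rho, mu2, t):
    """Var(lambda_t | lambda_0), equation (2.6)."""
    return mu2 * rho / (2.0 * delta) * (1.0 - math.exp(-2.0 * delta * t))

# Example 5.1 inputs: delta = 0.5, rho = 100, mu1 = 1, mu2 = 2 give the
# stationary mean mu1*rho/delta = 200 and stationary variance
# mu2*rho/(2*delta) = 200, the limits in (2.7) and (2.8).
```

As t grows, the conditional mean and variance converge to the stationary values of corollaries 2.5 and 2.6, and Var = E(λ_t²) − E(λ_t)² ties the three formulas together.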
As mentioned earlier, the asymptotic (stationary) distribution of the claim intensity
can also be used to derive a pricing model for a stop-loss reinsurance contract for
common events (see Dassios (1987), Dassios & Jang (1998a, 1998b) and Jang (1998,
2000)).
3. Transformations and approximations
We have obtained E(λ_t) when λ_t is stationary. Therefore the mean of the number of
points in a fixed time interval, E(N_t), can easily be found:

E(N_t) = (μ_1ρ/δ) t.    (3.1)

We can also obtain

Var( ∫_0^t λ_s ds | λ_0 ) = Var( X_t | λ_0 ) = { (μ_2/δ²) t − (2μ_2/δ³)(1 − e^{−δt}) + (μ_2/(2δ³))(1 − e^{−2δt}) } ρ    (3.2)

and, when λ_t is stationary,

Var( ∫_0^t λ_s ds ) = Var( X_t ) = { (μ_2/δ²) t + (μ_2/δ³) e^{−δt} − μ_2/δ³ } ρ,    (3.3)

where X_t = ∫_0^t λ_s ds (the aggregated process).
In practical situations we observe claims, and we want to filter the 'noise' out and
'estimate' the value of λ_t at any time. This is useful for the pricing of a reinsurance
contract as it helps us estimate the distribution of λ_0 from past data. In order to find
an estimate of λ_t based on the observations {N_s; 0 ≤ s ≤ t}, we assume ρ is large and
start by transforming the processes λ_t, N_t and C_t using

Z_t^{(ρ)} = (λ_t − μ_1ρ/δ) / √(μ_2ρ/(2δ)),   i.e.   λ_t = μ_1ρ/δ + √(μ_2ρ/(2δ)) Z_t^{(ρ)},    (3.4)

W_t^{(ρ)} = (N_t − (μ_1ρ/δ) t) / √(μ_2ρ/(2δ)),   i.e.   N_t = (μ_1ρ/δ) t + √(μ_2ρ/(2δ)) W_t^{(ρ)}    (3.5)

and

U_t^{(ρ)} = (C_t − m_1 (μ_1ρ/δ) t) / √(μ_2ρ/(2δ)),   i.e.   C_t = m_1 (μ_1ρ/δ) t + √(μ_2ρ/(2δ)) U_t^{(ρ)},    (3.6)

where m_1 = ∫_0^∞ u dH(u).
We start with a proposition used by Ethier & Kurtz (1986).

Proposition 3.1 For n = 1, 2, …, let {ℑ_t^n} be a filtration and let M_n be an
{ℑ_t^n}-local martingale with sample paths in D_{ℜ^d}[0, ∞) and M_n(0) = 0. Let
A_n = ((A_n^{ij})) be symmetric d × d matrix-valued processes such that A_n^{ij} has
sample paths in D_ℜ[0, ∞) and A_n(t) − A_n(s) is nonnegative definite for 0 ≤ s < t.
Assume that

lim_{n→∞} E[ sup_{t≤T} | A_n^{ij}(t) − A_n^{ij}(t−) | ] = 0,

lim_{n→∞} E[ sup_{t≤T} | M_n(t) − M_n(t−) |² ] = 0,

and, for i, j = 1, 2, …, d, that

M_n^i(t) M_n^j(t) − A_n^{ij}(t)

is an {ℑ_t^n}-local martingale. If for each t ≥ 0 and i, j = 1, 2, …, d,

A_n^{ij}(t) → c_{ij}(t)

in probability, where C = ((c_{ij})) is a continuous, symmetric, d × d matrix-valued
function defined on [0, ∞), satisfying C(0) = 0 and ∑_{i,j} (c_{ij}(t) − c_{ij}(s)) ξ_i ξ_j ≥ 0
for 0 ≤ s < t and ξ ∈ ℜ^d, then

M_n ⇒ X

in law, where X is a process with independent Gaussian increments such that
X^i X^j − c_{ij} are (local) martingales with respect to {ℑ_t^X}.
•
Let us now define

V_t^{(ρ)} = (J_t − μ_1ρt) / √(μ_2ρ/(2δ)),   L_t^{(ρ)} = (N_t − ∫_0^t λ_s ds) / √(μ_2ρ/(2δ))   and   Q_t^{(ρ)} = (C_t − m_1 N_t) / √(μ_2ρ/(2δ)),

where J_t = ∑_{i=1}^{M_t} y_i, M_t is the total number of primary-event jumps up to
time t and X_t = ∫_0^t λ_s ds.
Lemma 3.2 Let V_t^{(ρ)}, L_t^{(ρ)} and Q_t^{(ρ)} be as defined and ρ → ∞. Then

( V_t^{(ρ)}, L_t^{(ρ)}, Q_t^{(ρ)} ) ⇒ ( √(2δ) B_t^{(1)}, √(2μ_1/μ_2) B_t^{(2)}, k √(2μ_1/μ_2) B_t^{(3)} )    (3.7)

in law, where B_t^{(1)}, B_t^{(2)} and B_t^{(3)} are three independent standard Brownian
motions and k² = ∫_0^∞ u² dH(u) − ( ∫_0^∞ u dH(u) )² (the variance of the claim size).
Proof
The generator of the process V_t^{(ρ)} acting on a function f(v) is given by

A f(v) = − ( μ_1ρ / √(μ_2ρ/(2δ)) ) ∂f/∂v + ρ{ ∫_0^∞ f( v + y/√(μ_2ρ/(2δ)) ) dG(y) − f(v) }.    (3.8)

Set f(v) = v². Then

A v² = 2δ.

The generator of the process (X_t, N_t, C_t, λ_t, J_t, t) acting on a function
f(x, n, c, λ, j, t) is given by

A f(x, n, c, λ, j, t) = ∂f/∂t + λ ∂f/∂x − δλ ∂f/∂λ + ρ{ ∫_0^∞ f(x, n, c, λ + y, j + y, t) dG(y) − f(x, n, c, λ, j, t) } + λ{ ∫_0^∞ f(x, n + 1, c + u, λ, j, t) dH(u) − f(x, n, c, λ, j, t) }.    (3.9)

Clearly, for f(x, n, c, λ, j, t) to belong to the domain of the generator A, it is essential
that f(x, n, c, λ, j, t) is differentiable w.r.t. x, c, λ, t for all x, n, c, λ, j, t and that

| ∫_0^∞ f(·, λ + y, ·) dG(y) − f(·, λ, ·) | < ∞   and   | ∫_0^∞ f(·, c + u, ·) dH(u) − f(·, c, ·) | < ∞.

Set f(x, n, c, λ, j, t) = ( (n − x)/√(μ_2ρ/(2δ)) )² and f(x, n, c, λ, j, t) = ( (c − m_1n)/√(μ_2ρ/(2δ)) )². Then

A[ ( (n − x)/√(μ_2ρ/(2δ)) )² ] = 2δλ/(μ_2ρ)   and   A[ ( (c − m_1n)/√(μ_2ρ/(2δ)) )² ] = k² · 2δλ/(μ_2ρ),

where m_1 = ∫_0^∞ u dH(u), m_2 = ∫_0^∞ u² dH(u) and k² = m_2 − m_1².

For any f in the domain, f(X_t) − ∫_0^t A f(X_s) ds is a martingale (the process solves
the 'martingale problem'). Hence ( V_t^{(ρ)} )² − 2δt, ( L_t^{(ρ)} )² − ∫_0^t ( 2δλ_s/(μ_2ρ) ) ds
and ( Q_t^{(ρ)} )² − ∫_0^t k² ( 2δλ_s/(μ_2ρ) ) ds are martingales.
As can be seen from (3.2) (see also (3.3)), Var( ∫_0^t λ_s ds ) = K(t) ρ. Therefore, by
Chebyshev's inequality, as ρ → ∞,

Pr{ | ∫_0^t ( 2δλ_s/(μ_2ρ) ) ds − (2μ_1/μ_2) t | > ε } ≤ (2δ/μ_2)² Var( ∫_0^t λ_s ds ) / (ρ²ε²) = (2δ/μ_2)² K(t) ρ / (ρ²ε²) → 0    (3.10)

and

Pr{ | ∫_0^t k² ( 2δλ_s/(μ_2ρ) ) ds − k² (2μ_1/μ_2) t | > ε } ≤ k⁴ (2δ/μ_2)² Var( ∫_0^t λ_s ds ) / (ρ²ε²) = k⁴ (2δ/μ_2)² K(t) ρ / (ρ²ε²) → 0.    (3.11)

Therefore from (3.10) and (3.11),

∫_0^t ( 2δλ_s/(μ_2ρ) ) ds → (2μ_1/μ_2) t   and   ∫_0^t k² ( 2δλ_s/(μ_2ρ) ) ds → k² (2μ_1/μ_2) t

in probability. Set

f(x, n, c, λ, j, t) = ( (n − x)/√(μ_2ρ/(2δ)) )( (j − μ_1ρt)/√(μ_2ρ/(2δ)) ),

f(x, n, c, λ, j, t) = ( (c − m_1n)/√(μ_2ρ/(2δ)) )( (j − μ_1ρt)/√(μ_2ρ/(2δ)) )

and

f(x, n, c, λ, j, t) = ( (c − m_1n)/√(μ_2ρ/(2δ)) )( (n − x)/√(μ_2ρ/(2δ)) ).

Then

A[ ( (n − x)/√(μ_2ρ/(2δ)) )( (j − μ_1ρt)/√(μ_2ρ/(2δ)) ) ] = 0,   A[ ( (c − m_1n)/√(μ_2ρ/(2δ)) )( (j − μ_1ρt)/√(μ_2ρ/(2δ)) ) ] = 0   and   A[ ( (c − m_1n)/√(μ_2ρ/(2δ)) )( (n − x)/√(μ_2ρ/(2δ)) ) ] = 0.    (3.12)
Hence from proposition 3.1,

V_t^{(ρ)} = (J_t − μ_1ρt) / √(μ_2ρ/(2δ)) ⇒ √(2δ) B_t^{(1)},    (3.13)

L_t^{(ρ)} = (N_t − ∫_0^t λ_s ds) / √(μ_2ρ/(2δ)) ⇒ √(2μ_1/μ_2) B_t^{(2)}    (3.14)

and

Q_t^{(ρ)} = (C_t − m_1 N_t) / √(μ_2ρ/(2δ)) ⇒ k √(2μ_1/μ_2) B_t^{(3)}    (3.15)

in law, where B_t^{(1)}, B_t^{(2)} and B_t^{(3)} are three independent standard Brownian
motions. Therefore (3.7) follows immediately from (3.12), (3.13), (3.14) and (3.15).
•
Let us now prove the main result of this section.

Theorem 3.3 Let Z_t^{(ρ)}, W_t^{(ρ)} and U_t^{(ρ)} be as defined and ρ → ∞. Then
Z_t^{(ρ)}, W_t^{(ρ)} and U_t^{(ρ)} converge in law to Z_t, W_t and U_t, where

dZ_t = −δZ_t dt + √(2δ) dB_t^{(1)},    (3.16)

dW_t = Z_t dt + √(2μ_1/μ_2) dB_t^{(2)},    (3.17)

dU_t = m_1 dW_t + k √(2μ_1/μ_2) dB_t^{(3)} = m_1 Z_t dt + √(2m_2μ_1/μ_2) dB_t^{(4)},    (3.18)

where B_t^{(1)}, B_t^{(2)}, B_t^{(3)} are three independent standard Brownian motions and

B_t^{(4)} = ( m_1 √(2μ_1/μ_2) B_t^{(2)} + k √(2μ_1/μ_2) B_t^{(3)} ) / √( (2μ_1/μ_2)(m_1² + k²) )

(also a standard Brownian motion).
Proof
Z_t^{(ρ)}, W_t^{(ρ)} and U_t^{(ρ)} can be written as

Z_t^{(ρ)} = (λ_t − μ_1ρ/δ) / √(μ_2ρ/(2δ)) = (λ_0 − μ_1ρ/δ) e^{−δt} / √(μ_2ρ/(2δ)) + (J_t − μ_1ρt) / √(μ_2ρ/(2δ)) − δ ∫_0^t e^{−δ(t−u)} (J_u − μ_1ρu) / √(μ_2ρ/(2δ)) du,    (3.19)

W_t^{(ρ)} = (N_t − (μ_1ρ/δ)t) / √(μ_2ρ/(2δ)) = (N_t − ∫_0^t λ_s ds) / √(μ_2ρ/(2δ)) + ∫_0^t (λ_s − μ_1ρ/δ) / √(μ_2ρ/(2δ)) ds    (3.20)

and

U_t^{(ρ)} = (C_t − m_1(μ_1ρ/δ)t) / √(μ_2ρ/(2δ)) = (C_t − m_1 N_t) / √(μ_2ρ/(2δ)) + m_1 (N_t − (μ_1ρ/δ)t) / √(μ_2ρ/(2δ)).    (3.21)

Therefore by the continuous mapping theorem (see Billingsley (1968)) and lemma 3.2,
(3.19), (3.20) and (3.21) converge to

Z_t = Z_0 e^{−δt} + √(2δ) ∫_0^t e^{−δ(t−s)} dB_s^{(1)},    (3.22)

W_t = ∫_0^t Z_s ds + √(2μ_1/μ_2) B_t^{(2)}    (3.23)

and

U_t = m_1 W_t + k √(2μ_1/μ_2) B_t^{(3)}.    (3.24)

From (3.23) and (3.24), we have

dU_t = m_1 dW_t + k √(2μ_1/μ_2) dB_t^{(3)} = m_1 Z_t dt + m_1 √(2μ_1/μ_2) dB_t^{(2)} + k √(2μ_1/μ_2) dB_t^{(3)}.    (3.25)

Since a linear combination of independent standard Brownian motions is, after
normalisation, again a standard Brownian motion, this completes the proof of the
theorem.
•
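The limiting system (3.16)-(3.18) is straightforward to simulate. The following Euler-Maruyama sketch is our illustration (step size, horizon and sample size are arbitrary choices); it drives (Z, W, U) with three independent Brownian motions, building dU from dW and B^{(3)} exactly as in (3.25):

```python
import math
import random

def simulate_ZWU(mu1, mu2, m1, m2, delta, T, n_steps, rng):
    """Euler-Maruyama discretisation of
        dZ = -delta*Z dt + sqrt(2*delta) dB1,
        dW = Z dt + sqrt(2*mu1/mu2) dB2,
        dU = m1 dW + k*sqrt(2*mu1/mu2) dB3,  with k^2 = m2 - m1^2,
    started at Z_0 = W_0 = U_0 = 0."""
    dt = T / n_steps
    sdt = math.sqrt(dt)
    k = math.sqrt(m2 - m1 * m1)
    Z = W = U = 0.0
    for _ in range(n_steps):
        dB1 = rng.gauss(0.0, sdt)
        dB2 = rng.gauss(0.0, sdt)
        dB3 = rng.gauss(0.0, sdt)
        dW = Z * dt + math.sqrt(2.0 * mu1 / mu2) * dB2
        U += m1 * dW + k * math.sqrt(2.0 * mu1 / mu2) * dB3
        W += dW
        Z += -delta * Z * dt + math.sqrt(2.0 * delta) * dB1
    return Z, W, U

rng = random.Random(1)
zs = [simulate_ZWU(1.0, 2.0, 1.0, 3.0, 0.5, 2.0, 200, rng)[0]
      for _ in range(2000)]
var_Z = sum(z * z for z in zs) / len(zs)
# Z is an Ornstein-Uhlenbeck process; Var(Z_T | Z_0 = 0) = 1 - exp(-2*delta*T).
```

The empirical variance of Z_T should be close to 1 − e^{−2δT}, which also confirms that the stationary variance of Z is 1, consistent with the normalisation in (3.4).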
Theorem 3.3 has proved that Z_t, W_t and U_t are normally distributed. As a result of
this, we define λ̃_t, Ñ_t and C̃_t as Gaussian approximations of λ_t, N_t and C_t:

λ̃_t = μ_1ρ/δ + √(μ_2ρ/(2δ)) Z_t,   i.e.   Z_t = (λ̃_t − μ_1ρ/δ) / √(μ_2ρ/(2δ)),    (3.26)

Ñ_t = (μ_1ρ/δ) t + √(μ_2ρ/(2δ)) W_t,   i.e.   W_t = (Ñ_t − (μ_1ρ/δ)t) / √(μ_2ρ/(2δ))    (3.27)

and

C̃_t = m_1 (μ_1ρ/δ) t + √(μ_2ρ/(2δ)) U_t,   i.e.   U_t = (C̃_t − m_1(μ_1ρ/δ)t) / √(μ_2ρ/(2δ)).    (3.28)
4. The Kalman-Bucy filter and the distribution of Z_t
We will derive the conditional distribution of Z_t, given {W_s; 0 ≤ s ≤ t}, by the
Kalman-Bucy filter, where

dZ_t = −δZ_t dt + √(2δ) dB_t^{(1)}    (4.1)

and

dW_t = Z_t dt + √(2μ_1/μ_2) dB_t^{(2)}.    (4.2)

Let us begin with a proposition used by Øksendal (1992) (see theorem 6.10 in chapter
IV).

Proposition 4.1 The solution Ẑ_t = E(Z_t | W_s; 0 ≤ s ≤ t) of the 1-dimensional linear
filtering problem

dZ_t = F(t) Z_t dt + C(t) dB_t^{(1)};   F(t), C(t) ∈ ℜ,    (4.3)

dW_t = G(t) Z_t dt + D(t) dB_t^{(2)};   G(t), D(t) ∈ ℜ    (4.4)

satisfies the stochastic differential equation

dẐ_t = { F(t) − G²(t)S(t)/D²(t) } Ẑ_t dt + ( G(t)S(t)/D²(t) ) dW_t;   Ẑ_0 = E(Z_0),    (4.5)

where S(t) = E[ (Z_t − Ẑ_t)² ] satisfies the Riccati equation

dS/dt = 2F(t)S(t) − ( G²(t)/D²(t) ) S²(t) + C²(t),   S(0) = E[ {Z_0 − E(Z_0)}² ] = Var(Z_0).    (4.6)
•
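For constant coefficients, the Riccati equation (4.6) is easy to integrate numerically, which also yields the stationary estimation error. The sketch below is our addition: for the model (4.1)-(4.2) we have F = −δ, C = √(2δ), G = 1, D = √(2μ_1/μ_2), and with δ = 0.5, μ_1 = 1, μ_2 = 2 the stationary error variance solves S² + S − 1 = 0, i.e. S_∞ = (√5 − 1)/2:

```python
import math

def riccati_S(F, C, G, D, S0, t_end, n_steps=20000):
    """Integrate dS/dt = 2*F*S - (G^2/D^2)*S^2 + C^2  (equation (4.6))
    from S(0) = S0 to t_end with the classical fourth-order Runge-Kutta."""
    ratio = G * G / (D * D)

    def f(S):
        return 2.0 * F * S - ratio * S * S + C * C

    h = t_end / n_steps
    S = S0
    for _ in range(n_steps):
        k1 = f(S)
        k2 = f(S + 0.5 * h * k1)
        k3 = f(S + 0.5 * h * k2)
        k4 = f(S + h * k3)
        S += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return S

# Coefficients of the model (4.1)-(4.2) with the Example 5.1 inputs.
delta, mu1, mu2 = 0.5, 1.0, 2.0
S_inf = riccati_S(-delta, math.sqrt(2.0 * delta), 1.0,
                  math.sqrt(2.0 * mu1 / mu2), 0.0, 20.0)
```

By t = 20 the transient has died out, so `S_inf` should agree with the golden-ratio root (√5 − 1)/2 ≈ 0.618 to high precision.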
Theorem 4.2 Let (Z_t, W_t) be a two-dimensional normal process satisfying the system
of equations (4.1) and (4.2). Then the estimate of Z_t based on the observed
{W_s; 0 ≤ s ≤ t} is

Ẑ_t = E(Z_t | W_s; 0 ≤ s ≤ t) = exp{ ∫_0^t Ψ(s) ds } Ẑ_0 + ( μ_2/(2μ_1) ) ∫_0^t exp{ ∫_s^t Ψ(u) du } S(s) dW_s    (4.7)

where S(0) = a²,

S(s) = q [ (a² + p + q) e^{cs} + (a² + p − q) ] / [ (a² + p + q) e^{cs} − (a² + p − q) ] − p    (4.8)

with the constants p = 2δμ_1/μ_2, q = √(p(p + 2)) and c = 2√(δ(δ + μ_2/μ_1)), and

Ψ(s) = −δ − ( μ_2/(2μ_1) ) S(s).    (4.9)
Proof
Let S(0) = a². From (4.6), with F(t) = −δ, C(t) = √(2δ), G(t) = 1 and
D(t) = √(2μ_1/μ_2), the Riccati equation reads

dS/dt = −2δ S(t) − ( μ_2/(2μ_1) ) S²(t) + 2δ,

whose right-hand side has roots −p ± q, with p = 2δμ_1/μ_2 and q = √(p(p + 2)).
Solving this equation gives

S(t) = q [ (a² + p + q) e^{ct} + (a² + p − q) ] / [ (a² + p + q) e^{ct} − (a² + p − q) ] − p    (4.10)

with c = 2√(δ(δ + μ_2/μ_1)); S(t) moves monotonically from S(0) = a² to the
stationary value q − p. Therefore from (4.5), (4.10) offers the solution for Ẑ_t of
the form

Ẑ_t = E(Z_t | W_s; 0 ≤ s ≤ t) = exp{ ∫_0^t Ψ(s) ds } Ẑ_0 + ( μ_2/(2μ_1) ) ∫_0^t exp{ ∫_s^t Ψ(u) du } S(s) dW_s

where

Ψ(s) = F(s) − G²(s)S(s)/D²(s) = −δ − ( μ_2/(2μ_1) ) S(s).
•
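A reconstruction of the closed form (4.8) can be checked against a direct numerical integration of the Riccati equation for the model coefficients F = −δ, C = √(2δ), G = 1, D = √(2μ_1/μ_2). The sketch below is our addition:

```python
import math

def S_closed(delta, mu1, mu2, a2, s):
    """Closed-form Riccati solution (4.8) with p = 2*delta*mu1/mu2,
    q = sqrt(p*(p+2)) and c = 2*sqrt(delta*(delta + mu2/mu1)); a2 = S(0)."""
    p = 2.0 * delta * mu1 / mu2
    q = math.sqrt(p * (p + 2.0))
    c = 2.0 * math.sqrt(delta * (delta + mu2 / mu1))
    A = a2 + p - q
    B = a2 + p + q
    e = math.exp(c * s)
    return q * (B * e + A) / (B * e - A) - p

def S_numeric(delta, mu1, mu2, a2, s, n=20000):
    """Reference: RK4 on dS/dt = -2*delta*S - (mu2/(2*mu1))*S^2 + 2*delta."""
    def f(S):
        return -2.0 * delta * S - mu2 / (2.0 * mu1) * S * S + 2.0 * delta

    h = s / n
    S = a2
    for _ in range(n):
        k1 = f(S)
        k2 = f(S + 0.5 * h * k1)
        k3 = f(S + 0.5 * h * k2)
        k4 = f(S + h * k3)
        S += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return S
```

The two evaluations should agree to near machine precision for any S(0) = a² ≥ 0, and S_closed(·, 0) returns a² itself.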
We have obtained E(Z_t | W_s; 0 ≤ s ≤ t) = Ẑ_t. Now let us try to find the distribution
of Z_t.

Corollary 4.3 Let Z_t, W_t, Ẑ_t and S(t) be as defined. Then

E( e^{−γZ_t} | W_s; 0 ≤ s ≤ t ) = exp{ −γ Ẑ_t + ½ γ² S(t) }.    (4.11)

Proof
From theorem 4.2, the fact that Var(Z_t | W_s; 0 ≤ s ≤ t) = S(t), and the fact that Z_t is
normally distributed given W_s; 0 ≤ s ≤ t, the result follows immediately.
•
5. Pricing of a stop-loss reinsurance contract for common events using the
Kalman-Bucy filter
We have transformed and approximated λ_t and N_t as normal variables Z_t and W_t,
from which we have obtained the distribution of Z_t. Now let us derive the pricing
model for a stop-loss reinsurance contract for common events using the normal
variables Z_t and W_t. Having assumed that ρ → ∞, this approach can be used for the
pricing of reinsurance contracts when we expect quite a few claims in the near future.
The stop-loss reinsurance premium at time t is

E[ ( ∑_{i=1}^{N_T − N_t} ℵ_i − b )^+ | N_s; 0 ≤ s ≤ t ]    (5.1)

where all symbols have previously been defined.
Let C_T − C_t be the total amount of claims between times t and T. Then from (5.1),
the stop-loss reinsurance premium at time t is

E[ ( ∑_{i=1}^{N_T − N_t} ℵ_i − b )^+ | N_s; 0 ≤ s ≤ t ] = E[ {(C_T − C_t) − b}^+ | N_s; 0 ≤ s ≤ t ].    (5.2)
Since we have obtained C̃_t and Ñ_t, which are Gaussian approximations of C_t and N_t,
we will use these approximations (see (3.27) and (3.28)). Therefore set
C̃_t = m_1 (μ_1ρ/δ) t + √(μ_2ρ/(2δ)) U_t in (5.2); then

E[ {(C̃_T − C̃_t) − b}^+ | Ñ_s; 0 ≤ s ≤ t ] = E[ { √(μ_2ρ/(2δ)) (U_T − U_t) + m_1 (μ_1ρ/δ)(T − t) − b }^+ | W_s; 0 ≤ s ≤ t ].    (5.3)

From (5.3), we can see that the mean and variance of U_T − U_t need to be determined
to obtain the stop-loss reinsurance premium. Therefore let us derive the expected
value and variance of U_T − U_t.
Lemma 5.1 Let Z_t, W_t, U_t and Ẑ_t be as defined. Then

E(U_T − U_t | W_s; 0 ≤ s ≤ t) = m_1 ( (1 − e^{−δ(T−t)})/δ ) Ẑ_t    (5.4)

and

Var(U_T − U_t | W_s; 0 ≤ s ≤ t) = (m_1/δ)² { (1 − e^{−δ(T−t)})² S(t) − e^{−2δ(T−t)} + 4e^{−δ(T−t)} − 3 } + ( 2m_1²/δ + 2m_2μ_1/μ_2 )(T − t).    (5.5)

Proof
From (3.18),

U_T − U_t = m_1 ∫_t^T Z_s ds + √(2m_2μ_1/μ_2) ∫_t^T dB_s^{(4)}.    (5.6)

Set (3.22) in (5.6); then

U_T − U_t = m_1 ( (1 − e^{−δ(T−t)})/δ ) Z_t + m_1 √(2δ) ∫_t^T ( (1 − e^{−δ(T−u)})/δ ) dB_u^{(1)} + √(2m_2μ_1/μ_2) ∫_t^T dB_s^{(4)}.    (5.7)

Taking expectations in (5.7), (5.4) follows immediately. Moreover,

Var(U_T − U_t | W_s; 0 ≤ s ≤ t) = E{ (U_T − U_t)² | W_s; 0 ≤ s ≤ t } − { E(U_T − U_t | W_s; 0 ≤ s ≤ t) }².    (5.8)

Therefore (5.5) follows immediately from (5.7) and (5.4).
•
We can now find the stop-loss reinsurance premium at time t based on the
observations {W_s; 0 ≤ s ≤ t}.

Theorem 5.2 Let Z_t, W_t, Ẑ_t and S(t) be as defined. Then

E[ {(C̃_T − C̃_t) − b}^+ | W_s; 0 ≤ s ≤ t ] = √( μ_2ρΣ/(4δπ) ) e^{−L²/2} + { √(μ_2ρ/(2δ)) Ω + m_1μ_1ρ(T − t)/δ − b } Φ(−L)    (5.9)

where

Ω = E(U_T − U_t | W_s; 0 ≤ s ≤ t) = m_1 ( (1 − e^{−δ(T−t)})/δ ) Ẑ_t,

Σ = Var(U_T − U_t | W_s; 0 ≤ s ≤ t) = (m_1/δ)² { (1 − e^{−δ(T−t)})² S(t) − e^{−2δ(T−t)} + 4e^{−δ(T−t)} − 3 } + ( 2m_1²/δ + 2m_2μ_1/μ_2 )(T − t),

L = (K − Ω)/√Σ,   K = ( b − m_1(μ_1ρ/δ)(T − t) ) / √(μ_2ρ/(2δ))

and Φ(·) is the cumulative normal distribution function.
Proof
Set U = U_T − U_t in (5.3); then

E[ {(C̃_T − C̃_t) − b}^+ | W_s; 0 ≤ s ≤ t ] = E[ { √(μ_2ρ/(2δ)) (U_T − U_t) + m_1(μ_1ρ/δ)(T − t) − b }^+ | W_s; 0 ≤ s ≤ t ]

= ∫_K^∞ ( √(μ_2ρ/(2δ)) υ + m_1(μ_1ρ/δ)(T − t) − b ) (1/√(2πΣ)) e^{−(υ−Ω)²/(2Σ)} dυ    (5.10)

where Ω = E(U_T − U_t | W_s; 0 ≤ s ≤ t) and Σ = Var(U_T − U_t | W_s; 0 ≤ s ≤ t) are as in
theorem 5.2 and

K = ( b − m_1(μ_1ρ/δ)(T − t) ) / √(μ_2ρ/(2δ)).

Set y = (υ − Ω)/√Σ in (5.10) and put L = (K − Ω)/√Σ; then (5.9) follows immediately.
•
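The integral evaluated in the proof is the standard normal 'stop-loss' expectation E[(X − b)^+] for X ~ N(mean, var); (5.9) is this expectation with mean = √(μ_2ρ/(2δ)) Ω + m_1μ_1ρ(T − t)/δ and var = (μ_2ρ/(2δ)) Σ. A small sketch we add for illustration:

```python
import math

def normal_stop_loss(mean, var, b):
    """E[(X - b)^+] for X ~ N(mean, var):
    sqrt(var)*phi(L) + (mean - b)*Phi(-L), with L = (b - mean)/sqrt(var).
    This is exactly the structure of the premium formula (5.9)."""
    sd = math.sqrt(var)
    L = (b - mean) / sd
    phi_L = math.exp(-0.5 * L * L) / math.sqrt(2.0 * math.pi)
    Phi_minus_L = 0.5 * math.erfc(L / math.sqrt(2.0))  # = 1 - Phi(L)
    return sd * phi_L + (mean - b) * Phi_minus_L
```

At-the-mean retention (b = mean) gives sd/√(2π); a retention far below the mean gives approximately mean − b, and far above it the value vanishes.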
Let us obtain the gross premium for a stop-loss reinsurance contract at time t by
levying a security loading on the net premium. This can be achieved by multiplying
λ_t by (1 + θ), where 0 ≤ θ ≤ 1, so that λ_t changes to (1 + θ)λ_t. How large a security
loading should be levied depends on the insurance company's attitude towards
insurance risk.
Corollary 5.3 Let Z_t, W_t, Ẑ_t and S(t) be as defined. Then

E*[ {(C̃_T − C̃_t) − b}^+ | W_s; 0 ≤ s ≤ t ] = √( (1 + θ)²μ_2ρΣ*/(4δπ) ) e^{−L*²/2} + { √( (1 + θ)²μ_2ρ/(2δ) ) Ω* + m_1(1 + θ)μ_1ρ(T − t)/δ − b } Φ(−L*)    (5.11)

where E*[·] is the gross premium for the stop-loss reinsurance contract,

Ω* = m_1 ( (1 − e^{−δ(T−t)})/δ ) (1 + θ) Ẑ_t,

Σ* = (m_1/δ)² { (1 − e^{−δ(T−t)})² (1 + θ)² S(t) − e^{−2δ(T−t)} + 4e^{−δ(T−t)} − 3 } + ( 2m_1²/δ + 2m_2μ_1/((1 + θ)μ_2) )(T − t),

L* = (K* − Ω*)/√Σ*,   K* = ( b − m_1(1 + θ)(μ_1ρ/δ)(T − t) ) / √( (1 + θ)²μ_2ρ/(2δ) )

and Φ(·) is the cumulative normal distribution function.

Proof
If we replace μ_1, μ_2, Ẑ_t and S(t) with (1 + θ)μ_1, (1 + θ)²μ_2, (1 + θ)Ẑ_t and
(1 + θ)²S(t) respectively in (5.9), then (5.11) follows.
•
The following example illustrates the calculation of gross premiums for a stop-loss
reinsurance contract using the pricing model derived previously.

Example 5.1
The numerical values used to simulate the claim arrival process are δ = 0.5 and
λ_0 = 200. We assume that ρ = 100, i.e. the interarrival times between jumps are
exponential with mean 0.01, and that the jump sizes are exponential with mean 1, i.e.
y ~ Exponential(1). S-Plus was used to generate random values and to simulate the
claim arrival process.
The numerical values used to calculate (4.7) and (5.11) are

Ẑ_0 = 0, S(0) = 0, θ = 0.1, μ_1 = 1, μ_2 = 2, m_1 = 1, m_2 = 3, t = 1, T = 2,
b = 0, 200, 210, 220, 230, 240

and

E*(C_T − C_t) = E*(N_T − N_t) E(ℵ) = ( (1 + θ)μ_1ρ/δ ) m_1 = 220.

By computing (4.7) and (5.11) using MAPLE and S-Plus, where Ẑ_1 = 0.5579152, the
gross premiums for the stop-loss reinsurance contract at each retention level b are
shown in Table 5.1.
Table 5.1

Retention level b    Gross reinsurance premium
  0                  227.512939
200                   30.049486
210                   22.209532
220                   15.521060
230                   10.171605
240                    6.202363
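Table 5.1 can be reproduced directly from the Riccati solution (4.8), the gross-premium formula (5.11) and the normal stop-loss integral. The self-contained sketch below is our reconstruction of the computation (not the authors' original S-Plus/MAPLE code); it matches the tabulated premiums closely:

```python
import math

def gross_premium(b, delta=0.5, rho=100.0, mu1=1.0, mu2=2.0, m1=1.0, m2=3.0,
                  theta=0.1, t=1.0, T=2.0, Z_hat=0.5579152, a2=0.0):
    """Gross stop-loss premium (5.11) with the Example 5.1 inputs."""
    # S(t) from the Riccati solution (4.8) with S(0) = a2
    p = 2.0 * delta * mu1 / mu2
    q = math.sqrt(p * (p + 2.0))
    c = 2.0 * math.sqrt(delta * (delta + mu2 / mu1))
    e = math.exp(c * t)
    S = (q * ((a2 + p + q) * e + (a2 + p - q))
         / ((a2 + p + q) * e - (a2 + p - q)) - p)
    # security loading: mu1 -> (1+theta)*mu1, mu2 -> (1+theta)^2*mu2,
    # Z_hat -> (1+theta)*Z_hat, S -> (1+theta)^2*S
    th = 1.0 + theta
    tau = T - t
    ed = math.exp(-delta * tau)
    Omega = m1 * (1.0 - ed) / delta * th * Z_hat
    Sigma = ((m1 / delta) ** 2 * ((1.0 - ed) ** 2 * th * th * S
                                  - ed * ed + 4.0 * ed - 3.0)
             + (2.0 * m1 * m1 / delta + 2.0 * m2 * mu1 / (th * mu2)) * tau)
    scale = math.sqrt(th * th * mu2 * rho / (2.0 * delta))
    mean = scale * Omega + m1 * th * mu1 * rho / delta * tau
    sd = math.sqrt(scale * scale * Sigma)
    # normal stop-loss value E[(X - b)^+] for X ~ N(mean, sd^2)
    L = (b - mean) / sd
    return (sd * math.exp(-0.5 * L * L) / math.sqrt(2.0 * math.pi)
            + (mean - b) * 0.5 * math.erfc(L / math.sqrt(2.0)))
```

For b = 0 the value is essentially the expected excess claims, ≈ 227.51, and the premium decreases in the retention level b, consistent with Table 5.1.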
References
Bartlett, M. S. (1963) : The spectral analysis of point processes, J. R. Stat. Soc., 25,
264-296.
Beard, R.E., Pentikainen, T. and Pesonen, E. (1984) : Risk Theory, 3rd Edition,
Chapman & Hall, London.
Billingsley, P. (1968) : Convergence of Probability Measures, John Wiley & Sons,
USA.
Bremaud, P. (1981) : Point Processes and Queues: Martingale Dynamics, Springer-Verlag, New York.
Bühlmann, H. (1970) : Mathematical Methods in Risk Theory, Springer-Verlag,
Berlin-Heidelberg.
Cinlar, E. (1975) : Introduction to Stochastic Processes, Prentice-Hall, Englewood
Cliffs.
Cox, D. R. (1955) : Some statistical methods connected with series of events, J. R.
Stat. Soc. B, 17, 129-164.
Cox, D. R. and Isham, V. (1980) : Point Processes, Chapman & Hall, London.
Cox, D. R. and Isham, V. (1986) : The virtual waiting time and related processes,
Adv. Appl. Prob. 18, 558-573.
Cox, D. R. and Lewis, P. A. W. (1966) : The Statistical Analysis of Series of Events,
Metheun & Co. Ltd., London.
Cox, J. and Ross, S. (1976) : The valuation of options for alternative stochastic
processes, Journal of Financial Economics, 3 , 145-166.
Cramér, H. (1930) : On the Mathematical Theory of Risk, Skand. Jubilee Volume,
Stockholm.
Dassios, A. (1987) : Insurance, Storage and Point Process: An Approach via
Piecewise Deterministic Markov Processes, Ph. D Thesis, Imperial College, London.
Dassios, A. and Embrechts, P. (1989) : Martingales and insurance risk, Commun.
Stat.-Stochastic Models, 5(2), 181-217.
Dassios, A. and Jang, J. (1998a) : Linear filtering in reinsurance, Working Paper,
Department of Statistics, The London School of Economics and Political Science
(LSERR 41).
Dassios, A. and Jang, J. (1998b) : The Cox process in reinsurance, Working Paper,
Department of Statistics, The London School of Economics and Political Science
(LSERR 42).
Davis, M. H. A. (1984) : Piecewise deterministic Markov processes: A general class
of non diffusion stochastic models, J. R. Stat. Soc. B, 46, 353-388.
Ethier, S. N. and Kurtz, T. G. (1986) : Markov Processes Characterization and
Convergence, John Wiley & Sons, Inc., USA.
Gerber, H. U. (1979) : An Introduction to Mathematical Risk Theory, S. S. Huebner
Foundation for Insurance Education, Philadelphia.
Grandell, J. (1976) : Doubly Stochastic Poisson Processes, Springer-Verlag, Berlin.
Grandell, J. (1991) : Aspects of Risk Theory, Springer-Verlag, New York.
Harrison, J. M. and Kreps, D. M. (1979) : Martingales and arbitrage in multiperiod
markets, Journal of Economic Theory, 20, 381-408.
Harrison, J. M. and Pliska, S. (1981) : Martingales and stochastic integrals in the
theory of continuous trading, Stochastic Processes and Their Applications, 11, 215-260.
Jang, J. (1998) : Doubly Stochastic Point Processes in Reinsurance and the Pricing of
Catastrophe Insurance Derivatives, Ph. D Thesis, The London School of Economics
and Political Science.
Jang, J. (2000) : Doubly Stochastic Poisson Process and the Pricing of Catastrophe
Reinsurance Contract; Proceedings of 31st International ASTIN Colloquium, Istituto
Italiano degli Attuari.
Klüppelberg, C. and Mikosch, T. (1995) : Explosive Poisson shot noise processes with
applications to risk reserves, Bernoulli, 1, 125-147.
Lando, D. (1994) : On Cox processes and credit risky bonds, University of
Copenhagen, The Department of Theoretical Statistics, Pre-print.
Medhi, J. (1982) : Stochastic Processes, Wiley Eastern Limited, New Delhi.
Øksendal, B. (1992) : Stochastic Differential Equations, Springer-Verlag, Berlin.
Ryan, J. (1994) : Advantages of Insurance Futures, The Actuary, April, 20.
Seal, H. L. (1983) : The Poisson process: Its failure in risk theory, Insurance:
Mathematics and Economics, 2, 287-288.
Serfozo, R. F. (1972) : Conditional Poisson processes, J. Appl. Prob., 9, 288-302.
Smith, J. A. (1980) : Point Process Models of Rainfall, Ph. D Thesis, The Johns
Hopkins University, Baltimore, Maryland.
Sondermann, D. (1991) : Reinsurance in arbitrage-free markets, Insurance:
Mathematics and Economics, 10, 191-202.