CDF Formulation for Solving an Optimal Reinsurance Problem

Chengguo Weng* and Shengchao Zhuang
Department of Statistics and Actuarial Science
University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada

July 28, 2015

Abstract
An innovative cumulative distribution function (CDF) based method is proposed for deriving optimal reinsurance contracts for an insurer to maximize its survival probability. The optimal reinsurance model is a non-convex constrained stochastic optimization problem, and the CDF based method transforms it into a linear problem of determining an optimal CDF over a corresponding feasible set. Compared to the existing literature, our proposed CDF formulation provides a more transparent derivation of the optimal solutions, and, more interestingly, it enables us to solve a more complex model with an extra background risk.

Key Words: CDF formulation, Lagrangian dual method, optimal reinsurance, survival probability maximization, background risk.
* Corresponding author. Tel: +001 (519) 888-4567 ext. 31132. Email: Weng ([email protected]), Zhuang ([email protected]).
1 Model Setup

1.1 Preliminaries
Let (Ω, F, P) be a probability space on which all the random variables in this paper are defined. We consider a constrained non-convex stochastic optimal decision problem from an insurance context. The problem is formulated from the perspective of an insurance company (insurer). We assume that the insurer has an initial capital of W and is subject to a loss X, which is a nonnegative random variable with a support of [0, M] for some M ∈ (0, ∞]. For the case of M = ∞, the support of X is interpreted as [0, ∞), and throughout the paper we assume E[X] < ∞.
The stochastic optimization problem considered in the paper is to determine an optimal reinsurance purchase strategy against the risk X for the insurer to maximize its survival probability. Mathematically, it is to find an optimal partition of X into f(X) and r(X) so that

    X = f(X) + r(X),

where f : [0, M] → [0, M] is a measurable function and so is r. In their economic meanings, f(X) represents the portion of loss that is ceded to a reinsurer, and r(X) is the residual loss retained by the insurer. In the context of optimal reinsurance, the ceded loss function f is often restricted to the set C as given below for the solution (Cui, et al., 2013; Zheng, et al., 2015):

    C := { f(·) : 0 ≤ f(x) ≤ x and r(x) := x − f(x) is non-decreasing ∀ x ∈ [0, M] }.        (1)

The non-decreasing assumption on the retained loss function r is imposed to reduce the moral hazard risk. If the retained loss function r is not non-decreasing, the insurer may encourage the policyholders to claim more so as to reduce its own retained loss but increase the ceded loss on the reinsurer.
In exchange for covering partial risk for the insurer, the reinsurer charges a premium on the insurer as compensation. Generally, this premium is always positive and is computed according to a certain premium principle Π, so that the reinsurance premium is given by Π(f(X)). In our work, we consider the expected value principle and compute the reinsurance premium by

    Π(f(X)) = (1 + ρ)E[f(X)],

where ρ > 0 is the loading factor.
1.2 Optimal Reinsurance Model in Absence of Background Risk
From the preceding subsection, the insurer's net wealth in the presence of reinsurance is given by

    T := W − r(X) − (1 + ρ)E[f(X)],                                                            (2)

and the insurer's survival probability is P(W − r(X) − (1 + ρ)E[f(X)] ≥ 0). Thus, with the reinsurance premium constrained at a level of π > 0, the optimal reinsurance problem of maximizing the above survival probability is equivalent to the following one:
Problem 1.1

    max_{f ∈ C}   P(W − r(X) − π ≥ 0),
    s.t.          (1 + ρ)E[f(X)] = π,                                                          (3)

where π is a reinsurance premium budget satisfying 0 < π ≤ (1 + ρ)E[X].
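To make the objective and the budget constraint of Problem 1.1 concrete, the following Python sketch (illustrative only; the exponential loss and the values of W, ρ and π are assumptions, not taken from the paper) evaluates the survival probability of two feasible contracts that exhaust the same premium budget.

```python
import numpy as np

# Illustrative sketch, not from the paper. Assumptions: X ~ Exponential(mean m = 10),
# W = 15, rho = 0.2, premium budget pi = 9. Two feasible contracts with the same
# premium are compared: a quota-share and a stop-loss treaty.
rng = np.random.default_rng(2)
m, W, rho, pi = 10.0, 15.0, 0.2, 9.0
x = rng.exponential(m, size=2_000_000)

alpha = pi / ((1 + rho) * m)                # quota-share f(x) = alpha * x meets the budget
t0 = -m * np.log(pi / ((1 + rho) * m))      # stop-loss f(x) = (x - t0)_+ meets the budget

for name, f in (("quota-share", alpha * x), ("stop-loss", np.maximum(x - t0, 0.0))):
    premium = (1 + rho) * f.mean()                    # close to the budget pi = 9
    survival = np.mean(W - (x - f) - pi >= 0)         # P(W - r(X) - pi >= 0), r = x - f
    print(name, round(premium, 3), round(survival, 4))
```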
The research of optimal reinsurance has remained a fascinating area since the seminal papers of Borch (1960) and Arrow (1963), where explicit solutions were derived for minimizing the variance of the insurer's retained loss (or equivalently the variance of T in (2)) and maximizing the expected utility of its terminal wealth T, respectively, subject to a given net reinsurance premium of E[f(X)]. These two classic results have been extended in a number of important directions. Just to name a few, Gajek and Zagrodny (2000) and Kaluszka (2001) generalized Borch's result by considering the standard deviation premium principle and a class of general premium principles, respectively. Young (1999) generalized Arrow's result by assuming Wang's premium principle. Among the recent studies on optimal reinsurance, risk measures including VaR and CVaR have been extensively used; see, for instance, Cai, et al. (2008), Balbás, et al. (2009), Tan, et al. (2009), Tan, et al. (2011), Cheung (2010), Chi and Tan (2013), Asimit, et al. (2013), Chi and Weng (2013), and Cheung, et al. (2014).
1.3 Optimal Reinsurance Model in Presence of Background Risk

In insurance practice, there are some risks which are not insurable and can potentially occur along with other insurable risks. Moreover, the administration fee on the insurer to settle insurance claims is often not fully deterministic, and not insured. We generally refer to the aggregate of these risks which occur along with the underlying risk X as background risk, and model the risk by a nonnegative random variable Y.
The optimal decision problems with background risk in an insurance context have been intensively studied, but most of these are within a utility maximization framework. For example, Gollier (1996) considered the optimal insurance purchase strategy by maximizing the expected utility of a policyholder. For another example, Dana and Scarsini (2007) characterized the optimal risk sharing strategy between two parties, both being expected utility maximizers. In the literature, it is common to assume that the background risk is statistically independent of the insurable risk X, because such an assumption leads to much more tractable models and works as a good approximation in the presence of weak dependence; see, for example, Mahul (2000), Gollier and Pratt (1996), Courbage et al. (2007), and Schlesinger (2013).
Taking the background risk into account, we compute the terminal net wealth of the insurer as

    W − r(X) − Y − (1 + ρ)E[f(X)],

and accordingly the optimal reinsurance model of maximizing the survival probability goes as follows:
Problem 1.2

    max_{f ∈ C}   P(W − r(X) − Y − π ≥ 0),
    s.t.          (1 + ρ)E[f(X)] = π.                                                          (4)
1.4 Some Remarks
Problem 1.1 has been studied by Gajek and Zagrodny (2004), and analytical optimal solutions have been derived. However, their results apply only when the reinsurance premium budget π is small enough, and they fail to clearly outline the range of the budget for their results to apply. Moreover, their approach involves a sophisticated application of the Neyman-Pearson Lemma and is mathematically abstruse. Further, their method cannot be readily used to solve Problem 1.2, where the background risk is additionally considered. For these reasons, in the present paper we propose an innovative CDF based method to solve the above two problems.

Our CDF based method first transforms each of them into a problem of determining an optimal CDF over a corresponding feasible set, and then combines with a Lagrangian dual method to derive the optimal solutions in a transparent way. The idea behind our method is to equivalently reformulate each of the original two problems into a linear functional optimization problem so as to facilitate the application of a pointwise optimization procedure. Our CDF based method enables us to derive the optimal solutions of Problem 1.1 for any premium budget π over the whole range of (0, (1 + ρ)E[X]]. More significantly, the transparency of the method enables us to derive analytical optimal reinsurance contracts in the presence of an independent background risk, i.e., the solutions of Problem 1.2, which is otherwise non-tractable.
As one can discover later, our procedure of deriving the optimal solutions can be applied in parallel if the objective function in Problem 1.1 (or Problem 1.2) were changed to

    P(W − r(X) − π ≥ A)   (or   P(W − r(X) − Y − π ≥ A))

for some constant A, which represents the net wealth the insurer targets to achieve. Such an optimal decision problem is typically called a goal-reaching model, which was proposed by Kulldorff (1993) and studied extensively in the literature, e.g., Browne (1999, 2000). When A = 0, the goal-reaching model reduces to the survival probability maximization model.
The rest of the paper proceeds as follows.
Section 2 develops the CDF formulation and
some preliminary results from the Lagrangian dual method. Sections 3 and 4 solve Problems
1.1 and 1.2, respectively. Section 5 concludes the paper.
2 CDF Formulation

Hereafter, when it is stated that a functional V(F) of a CDF F is non-decreasing (non-increasing) in F, we mean that V(F₁) ≥ V(F₂) if F₁(x) ≥ F₂(x) (correspondingly, F₁(x) ≤ F₂(x)) for all x ≥ 0. For a given distribution F of a nonnegative random variable, we define its inverse as F^{-1}(y) := inf{ z ∈ ℝ₊ : F(z) ≥ y }.
Denote

    R := { r(·) : r(x) ≡ x − f(x) ∀ x ∈ [0, M], f ∈ C }.

Then, it follows from (1) that

    R = { r(·) : r(0) = 0, 0 ≤ r(x) ≤ x ∀ x ∈ [0, M], and r(x) is non-decreasing on [0, M] },

and thus, Problem 1.1 can be equivalently rewritten, in terms of the retained loss function r, as follows:

    max_{r ∈ R}   P(r(X) ≤ W − π),
    s.t.          E[r(X)] = π̃,                                                                 (5)

where π̃ := E[X] − π/(1 + ρ) and 0 ≤ π̃ < E[X] for 0 < π ≤ (1 + ρ)E[X]. From the formulation of (5), a necessary condition to have a nonempty feasible set for (5) is given by 0 ≤ π̃ ≤ E[X], and a trivial solution in the case of π̃ = E[X] is given by r(x) = x, x ∈ [0, M].
We can similarly rewrite Problem 1.2, in terms of the retained loss function, as follows:

    max_{r ∈ R}   P(r(X) + Y ≤ W − π),
    s.t.          E[r(X)] = π̃.                                                                 (6)

Once a solution, say r*, is obtained for (5) (or (6)), then f*(x) := x − r*(x), x ∈ [0, M], is a solution to Problem 1.1 (correspondingly, Problem 1.2). Thus, we shall focus on studying (5) and (6) for optimal solutions.
The following two assumptions will be used in the rest of the paper.

Assumption 2.1  Both the left and right derivatives of the CDF F_X(x) of X exist and are strictly positive over [0, M].

Assumption 2.2  Y has a non-increasing probability density function g_Y(·), and is statistically independent of X.

Remark 1  (a) Assumption 2.1 allows the random variable X to have a probability mass of p at 0, which is a common model for an insurance loss. Assumption 2.1 is certainly satisfied if X has a piecewise continuous density function over (0, M]. The assumption also implies that F_X is continuous and strictly increasing over [0, M].
(b) Assumption 2.1 also implies, by the Inverse Function Theorem, that both the left and right derivatives of F_X^{-1} exist and are strictly positive; we denote them by (F_X^{-1})'_−(u) and (F_X^{-1})'_+(u), u ∈ (0, 1), respectively.
(c) Furthermore, it is worth noting that a great number of typical distributions for insurance loss modelling satisfy Assumption 2.2, such as the Pareto distribution, the exponential distribution, and so on.
Notably, (6) reduces to (5) if we set Y = 0 almost surely. Nevertheless, our CDF method for solving (6) relies on Assumption 2.2, whereas the condition of Y = 0 almost surely contradicts Assumption 2.2. Therefore, the solution of (5) cannot be directly retrieved from that of (6), and we need to analyze both problems separately.

To solve (5) and (6), we reformulate each of them into a problem of identifying an optimal CDF over a feasible set, and recover the optimal ceded loss functions for the original problems (5) and (6) from those identified optimal CDFs via Lemma 2.2 in the sequel.
To proceed, we define

    F* := { F(·) : F(·) is the CDF of r(X), r ∈ R },

and

    F** := { F(·) : F(·) is a CDF with F(t) ≥ F_X(t) ∀ 0 ≤ t ≤ M }.                            (7)

The equivalence between the two sets F* and F** is shown in Lemma 2.1 below.

Lemma 2.1  Assume that Assumption 2.1 holds; then F* = F**.
Proof. On one hand, for every r ∈ R, r(x) ≤ x ∀ x ∈ [0, M], and thus we must have P(r(X) ≤ x) ≥ F_X(x) ∀ x ∈ [0, M]. This means F* ⊆ F**. On the other hand, for any F̃ ∈ F**, we define r̃(s) = F̃^{-1}(F_X(s)) for s ∈ [0, M] to get 0 ≤ r̃(s) ≤ s, where the second inequality follows from the fact that F̃(x) ≥ F_X(x) ∀ x ∈ [0, M]. Moreover, since F_X is continuous and strictly increasing over [0, M] due to Assumption 2.1, for any t ∈ [0, M] there exists y ∈ [0, M] such that F̃(t) = F_X(y). Hence, we obtain P(r̃(X) ≤ t) = P(F̃^{-1}(F_X(X)) ≤ t) = P(F_X(X) ≤ F̃(t)) = P(F_X(X) ≤ F_X(y)) = P(X ≤ y) = F̃(t), by Assumption 2.1 again. This means that F̃ ∈ F*, and thus, F** ⊆ F*.
It is obvious that the reinsurance premium budget constraint E[r(X)] = π̃ in problems (5) and (6) is equivalent to

    ∫₀^M [1 − F_r(t)] dt = π̃,

where F_r denotes the CDF of r(X). Moreover, the objective in (5) is F_r(W − π), which also merely depends on the CDF of r(X). Such an observation motivates us to consider the following CDF formulation:

    max_{F ∈ F**}   U₀(F) := F(W − π),
    s.t.            ∫₀^M [1 − F(t)] dt = π̃.                                                    (8)
For (6), where the background risk is considered, we apply Assumption 2.2 to rewrite its objective function as follows:

    P(r(X) + Y ≤ W − π) = ∫₀^{W−π} P(r(X) ≤ W − π − s) dF_Y(s)
                        = ∫₀^{W−π} P(r(X) ≤ t) g_Y(W − π − t) dt
                        = ∫₀^{W−π} F_r(t) g_Y(W − π − t) dt,

which is a linear functional of F_r. Accordingly, regarding (6), we propose the following CDF formulation:

    max_{F ∈ F**}   U₁(F) := ∫₀^{W−π} F(t) g_Y(W − π − t) dt,
    s.t.            ∫₀^M [1 − F(t)] dt = π̃.                                                    (9)
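The identity underlying U₁ can be checked numerically. The Python sketch below (illustrative assumptions only: exponential X and Y, a particular retained loss function in R, and assumed values of W and π that are not from the paper) compares the integral ∫₀^{W−π} F_r(t) g_Y(W − π − t) dt with a Monte Carlo estimate of P(r(X) + Y ≤ W − π).

```python
import numpy as np

# Illustrative numerical check of the rewriting above, not from the paper. Assumptions:
# X ~ Exponential(mean 10), Y ~ Exponential(mean 5) independent of X (its density is
# non-increasing, as in Assumption 2.2), r(x) = min(x, t0) with t0 = 8, and W - pi = 12.
rng = np.random.default_rng(1)
mean_x, mean_y, t0, c = 10.0, 5.0, 8.0, 12.0           # c stands for W - pi

F_X = lambda t: 1.0 - np.exp(-t / mean_x)
g_Y = lambda s: np.exp(-s / mean_y) / mean_y           # non-increasing density of Y
F_r = lambda t: np.where(t < t0, F_X(t), 1.0)          # CDF of r(X) = min(X, t0)

# Left-hand side: integral of F_r(t) * g_Y(c - t) over [0, c] (trapezoidal rule).
t = np.linspace(0.0, c, 100_001)
vals = F_r(t) * g_Y(c - t)
lhs = np.sum((vals[1:] + vals[:-1]) / 2.0) * (t[1] - t[0])

# Right-hand side: P(r(X) + Y <= c) by Monte Carlo.
x = rng.exponential(mean_x, size=2_000_000)
y = rng.exponential(mean_y, size=2_000_000)
rhs = np.mean(np.minimum(x, t0) + y <= c)

print(lhs, rhs)   # the two values agree up to Monte Carlo error (about 0.687 here)
```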
For presentation convenience, we refer to (5) and (6) as the "retained loss function (RLF) formulation", in contrast to the term "CDF formulation" for (8) and (9). The equivalence between the RLF formulation and the CDF formulation is given in Lemma 2.2 below.

Lemma 2.2  An element F* ∈ F** solves (8) (or (9)) if and only if r*, defined by r*(x) = (F*)^{-1}(F_X(x)) for x ∈ [0, M], solves (5) (correspondingly (6)).
Proof. We only show the relationship between (9) and (6), as the result can be similarly proved for (8) and (5). We achieve the proof by contradiction. We first consider the "if" part. Assume that F* ∈ F** is not an optimal solution to (9), i.e., there exists another element, say F̂ ∈ F**, such that U₁(F̂) > U₁(F*). Then, we define r̂(x) = (F̂)^{-1}(F_X(x)), ∀ x ∈ [0, M], to get

    P(r̂(X) + Y ≤ W − π) = U₁(F̂) > U₁(F*) = P(r*(X) + Y ≤ W − π),

which means that r* cannot be a solution to (6). Thus, if r* solves (6), then F* must solve (9).

To show the "only if" part, we assume that r* as given is not a solution to (6), so that there exists another element r̂ ∈ R such that

    P(r̂(X) + Y ≤ W − π) > P(r*(X) + Y ≤ W − π).

As we have already seen in the proof of Lemma 2.1, the CDF F̂ of r̂(X) belongs to F**, and F* is the CDF of r*(X); thus, the last display further implies U₁(F̂) > U₁(F*), which means that F* is not a solution to (9). Therefore, if F* solves (9), then r* must be a solution to (6).
Remark 2  According to Lemma 2.2, once we solve the CDF formulation ((8) or (9)) and obtain an optimal solution F* ∈ F**, we can construct a solution r* to the RLF formulation ((5) or (6), correspondingly) by r*(x) = (F*)^{-1}(F_X(x)) for x ∈ [0, M], and obtain an optimal ceded loss function f*(x) = x − r*(x), x ∈ [0, M].
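The recovery step in Remark 2 is easy to carry out numerically. Below is a hedged Python sketch (assuming an exponential loss and a candidate optimal CDF of the stop-loss form derived later in Section 3; both are assumptions made for the illustration) that evaluates r*(x) = (F*)^{-1}(F_X(x)) through the generalized inverse and then the ceded loss f*(x) = x − r*(x).

```python
import numpy as np

# Illustrative sketch of the construction in Remark 2, not from the paper. Assumptions:
# X ~ Exponential(mean 10) and F* of the stop-loss form F*(t) = F_X(t) for t < t0 and
# F*(t) = 1 for t >= t0, with t0 = 8 chosen arbitrarily.
mean_loss, t0 = 10.0, 8.0
F_X = lambda t: 1.0 - np.exp(-t / mean_loss)
F_star = lambda t: np.where(t < t0, F_X(t), 1.0)

def F_star_inv(y, grid):
    """Generalized inverse inf{z >= 0 : F*(z) >= y}, evaluated on a fine grid."""
    vals = F_star(grid)
    return np.array([grid[np.argmax(vals >= yi)] for yi in np.atleast_1d(y)])

grid = np.linspace(0.0, 100.0, 200_001)
x = np.linspace(0.0, 30.0, 7)
r_star = F_star_inv(F_X(x), grid)     # retained loss r*(x) = (F*)^{-1}(F_X(x))
f_star = x - r_star                   # ceded loss f*(x) = x - r*(x)
print(np.round(r_star, 3))            # approximately min(x, t0): retain losses up to t0
print(np.round(f_star, 3))            # approximately max(x - t0, 0): cede the excess
```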
In view that the objectives and constraints in (8) and (9) are all linear functionals of the decision variable F and the feasible set F** is convex, we exploit the Lagrangian dual method to solve both problems. This entails introducing a Lagrangian multiplier λ and considering the following auxiliary problem

    max_{F ∈ F**}   V₀(λ, F) := F(W − π) + λ ( ∫₀^M [1 − F(t)] dt − π̃ )                       (10)

for (8), and another auxiliary problem

    max_{F ∈ F**}   V₁(λ, F) := ∫₀^{W−π} F(t) g_Y(W − π − t) dt + λ ( ∫₀^M [1 − F(t)] dt − π̃ )    (11)

for (9). Once (10) (or (11)) is solved with an optimal solution F_λ(·) for each λ ∈ ℝ, one can determine λ* ∈ ℝ by solving ∫₀^M [1 − F_{λ*}(t)] dt = π̃. Then, we can show that F* := F_{λ*} is an optimal solution to (8) (or (9)), as in Lemma 2.3 below.
Lemma 2.3  Assume that, for every λ ∈ ℝ, F_λ(·) solves (10) (or (11)), and there exists a constant λ* ∈ ℝ satisfying ∫₀^M [1 − F_{λ*}(t)] dt = π̃. Then, F* := F_{λ*} solves (8) (or (9), correspondingly).
Proof. We generally denote the objective in (8) (or (9)) by U(F), and let u(π̃) denote its optimal value. Then, it follows that

    u(π̃) = max { U(F) : F ∈ F**, ∫₀^M [1 − F(t)] dt = π̃ }
          = max { U(F) + λ* ( ∫₀^M [1 − F(t)] dt − π̃ ) : F ∈ F**, ∫₀^M [1 − F(t)] dt = π̃ }
          ≤ max { U(F) + λ* ( ∫₀^M [1 − F(t)] dt − π̃ ) : F ∈ F** }
          = U(F*) ≤ u(π̃),

which implies that F* is optimal to (8) (or (9)).

3 Solutions to Problem 1.1
In this section, we study the optimal solutions when there is no background risk, i.e., the solutions to Problem 1.1. We analyze the solutions in two scenarios separately (W − π ≥ M and W − π < M), depending on the relative magnitude of W − π and M. When M = ∞, only the case of W − π < M is possible.

We first consider the case with W − π ≥ M, where for any feasible f ∈ C we have P(W − r(X) − π ≥ 0) ≥ P(r(X) ≤ M) ≥ F_X(M) = 1. Therefore, every element in the feasible set of Problem 1.1 satisfying the constraint is a solution. We summarize this result in Theorem 3.1 below.
Theorem 3.1  Assume that Assumption 2.1 and W − π ≥ M hold. Then, any f* ∈ C satisfying (1 + ρ)E[f*(X)] = π is an optimal ceded loss function to Problem 1.1.
In the rest of the section, we assume W − π < M, which is automatically satisfied for M = ∞, i.e., when X has an unbounded support of [0, ∞). According to the analysis in the preceding section, we can solve the CDF formulation (8) and apply Lemma 2.2 to get the optimal retained loss function and the optimal ceded loss function. By virtue of Lemma 2.3, we first analyze the solution of (10) for each λ. To this end, we note that F(W − π) ∈ [F_X(W − π), 1] for every F ∈ F** and consider the following sub-problems indexed by u ∈ [F_X(W − π), 1]:

    max_{F ∈ F**}   V₀(λ, F) := u + λ ( ∫₀^M [1 − F(t)] dt − π̃ ),
    s.t.            F(W − π) = u.                                                              (12)

If we could derive an optimal solution F*_{λ,u} and get the optimal value ξ(u) := V₀(λ, F*_{λ,u}) of (12) for each u ∈ [F_X(W − π), 1], then F_λ := F*_{λ,u*} is an optimal solution to (10), where

    u* = argmax_{u ∈ [F_X(W−π), 1]} ξ(u).

Let q = F_X^{-1}(u) for u ∈ [F_X(W − π), 1]. Since W − π > 0 and F_X(x) is continuous and strictly increasing on [0, M] according to Assumption 2.1, we have u = F_X(q) and q ∈ [W − π, M]. Thus, u* = F_X(q*) with

    q* = argmax_{q ∈ [W−π, M]} ξ(F_X(q)).

In the subsequent analysis, we follow such a procedure to solve (10) for λ = 0 and λ > 0, respectively. Its solution for λ < 0 is not relevant when we invoke Lemma 2.3 for the optimal CDF of (8) and is thus omitted.
Case 3.1  λ = 0.

In this case, the objective in (12) is independent of F and ξ(u) = u, and thus, an optimal solution F*_λ can be any F ∈ F** which satisfies

    F(W − π) = 1.                                                                              (13)

Case 3.2  λ > 0.
In this case, V₀(λ, F) is non-increasing in F, and the pointwise smallest CDF F ∈ F** which satisfies F(W − π) = u solves (12). Therefore, from the definition of F** in (7), a solution to (12) is given by

    F*_{λ,u}(t) = { F_X(t),   if 0 ≤ t < W − π,
                    u,         if W − π ≤ t < F_X^{-1}(u),
                    F_X(t),    if F_X^{-1}(u) ≤ t ≤ M,                                         (14)

and as a consequence,

    ξ(u) = V₀(λ, F*_{λ,u})
         = u + λ ∫₀^{W−π} [1 − F_X(t)] dt + λ ∫_{W−π}^{F_X^{-1}(u)} (1 − u) dt + λ ∫_{F_X^{-1}(u)}^{M} [1 − F_X(t)] dt − λπ̃.

By taking the left derivative of ξ(u), we obtain

    ξ'_−(u) = 1 + λ [ ∫_{W−π}^{F_X^{-1}(u)} (−1) dt + (F_X^{-1})'_−(u)(1 − F_X(F_X^{-1}(u))) ] − λ (F_X^{-1})'_−(u) [1 − F_X(F_X^{-1}(u))]
            = λ [ (1 + λ(W − π))/λ − F_X^{-1}(u) ],   u ∈ [F_X(W − π), 1].

We can similarly derive its right derivative, and indeed, it is the same as the left derivative obtained above. This means that ξ(u) is differentiable over [F_X(W − π), 1) with ξ'(u) = λ[(1 + λ(W − π))/λ − F_X^{-1}(u)], which is decreasing in u. This further implies that ξ(u) is a concave function of u and attains its maximum at

    u* = max{ F_X(W − π), F_X((1 + λ(W − π))/λ) } = F_X((1 + λ(W − π))/λ).

Thus, one solution to (10) is given by F*_λ = F*_{λ,u*} as defined in (14).
With the analysis in the above Cases 3.1 and 3.2, we can readily apply Lemma 2.3 to analyze the solutions to the CDF formulation (8). Denote

    π̄₀ = ∫₀^{W−π} [1 − F_X(t)] dt.                                                             (15)

Then, depending on the magnitude of π̃ relative to π̄₀, the optimal solution of (8) can be obtained as summarized in Proposition 3.1 below.
Proposition 3.1  Assume that Assumption 2.1 and W − π < M hold.
(a) If 0 ≤ π̃ ≤ π̄₀, then one optimal CDF to (8) is given by

    F*(t) = { F_X(t),  if 0 ≤ t < t₀,
              1,        if t₀ ≤ t ≤ M,                                                         (16)

where t₀ ∈ [0, W − π] is such that ∫₀^M [1 − F*(t)] dt = π̃.
(b) If π̄₀ < π̃ < E[X], then one optimal CDF to (8) is given by

    F*(t) = { F_X(t),    if 0 ≤ t < W − π,
              F_X(t₀),   if W − π ≤ t < t₀,
              F_X(t),    if t₀ ≤ t ≤ M,

where t₀ ∈ [W − π, M] is such that ∫₀^M [1 − F*(t)] dt = π̃.

Proof. (a) Given π̃ ∈ [0, π̄₀], there obviously exists a constant t₀ ∈ [0, W − π] for F*(t) defined in (16) to satisfy ∫₀^M [1 − F*(t)] dt = π̃, and F* satisfies the condition (13). Therefore, by Lemma 2.3, F* as given by (16) is one solution to (8).
(b) For a ∈ [W − π, M], define

    F_a(t) := { F_X(t),   if 0 ≤ t < W − π,
                F_X(a),   if W − π ≤ t < a,
                F_X(t),   if a ≤ t ≤ M,

and q(a) := ∫₀^M [1 − F_a(t)] dt. Obviously, q(a) is a continuous and non-increasing function of a with q(W − π) = E[X] and q(M) = π̄₀. Therefore, for any π̃ ∈ (π̄₀, E[X]), there exists t₀ ∈ (W − π, M] such that q(t₀) = π̃, which means that F_{t₀} solves (10) for t₀ = [1 + λ(W − π)]/λ, i.e., λ = 1/[t₀ − (W − π)], as previously shown in Case 3.2. Thus, the desired result follows from Lemma 2.3.
The optimal ceded loss functions for the case of
W −π < M
can be consequently obtained
by combining Proposition 3.1 and Lemma 2.2, and we summarize the results in Theorem 3.2
below.
Theorem 3.2  Assume that Assumption 2.1 and W − π < M hold.
(a) If (1 + ρ)(E[X] − π̄₀) ≤ π ≤ (1 + ρ)E[X], then one optimal ceded loss function to Problem 1.1 is given by

    f*(t) = { 0,        if 0 ≤ t < t₀,
              t − t₀,   if t₀ ≤ t ≤ M,

where t₀ is such that (1 + ρ)E[f*(X)] = π.
(b) If 0 < π < (1 + ρ)(E[X] − π̄₀), then one optimal ceded loss function to Problem 1.1 is given by

    f*(t) = { 0,              if 0 ≤ t < W − π,
              t − (W − π),    if W − π ≤ t < t₀,
              0,              if t₀ ≤ t ≤ M,

where t₀ is such that (1 + ρ)E[f*(X)] = π.

Proof. Since π̃ = E[X] − π/(1 + ρ), the conditions on π given in both parts are equivalent to those two in terms of π̃ in Proposition 3.1, respectively, whereby the desired results follow from Proposition 3.1 and Lemma 2.2.
Remark 3  Part (b) of Theorem 3.2 indicates that a truncated stop-loss reinsurance is optimal for a reinsurance premium budget smaller than (1 + ρ)(E[X] − π̄₀). The optimality of truncated stop-loss reinsurance has been established in Corollary 1 of Gajek and Zagrodny (2004) for the same Problem 1.1. However, as we have commented in Section 1.4, their derivation of the solution involves a complicated application of the Neyman-Pearson Lemma and is much more mathematically abstruse than our CDF method. Furthermore, Gajek and Zagrodny (2004) fail to identify the range of the premium budget π for such a solution as clearly as we do in Theorem 3.2. They also fail to discover the optimality of a stop-loss reinsurance for the case of a large premium budget as stipulated in part (a) of Theorem 3.2.
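For a concrete sense of Theorem 3.2, the following Python sketch (with an assumed exponential loss and assumed values of W and ρ that are not from the paper) computes π̄₀, decides which part of the theorem applies for a given budget π, and solves the premium equation (1 + ρ)E[f*(X)] = π for the threshold t₀.

```python
import numpy as np

# Illustrative sketch for Theorem 3.2, not from the paper. Assumptions: X ~ Exponential
# (mean m = 10), so M = infinity and W - pi < M automatically; W = 15; loading rho = 0.2.
m, W, rho = 10.0, 15.0, 0.2

def stop_loss_t0(pi):
    # Part (a): (1 + rho) E[(X - t0)_+] = pi, with E[(X - t0)_+] = m * exp(-t0 / m).
    return -m * np.log(pi / ((1 + rho) * m))

def truncated_stop_loss_t0(pi):
    # Part (b): f*(t) = t - d on [d, t0) with d = W - pi. For the exponential loss,
    # E[f*(X)] = exp(-d/m) * (m - (m + L) * exp(-L/m)) with L = t0 - d; bisect on L.
    d = W - pi
    expected_ceded = lambda L: np.exp(-d / m) * (m - (m + L) * np.exp(-L / m))
    lo, hi = 0.0, 1e4
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if (1 + rho) * expected_ceded(mid) < pi else (lo, mid)
    return d + 0.5 * (lo + hi)

for pi in (2.0, 9.0):                                   # a small and a large budget
    pi_bar0 = m * (1.0 - np.exp(-(W - pi) / m))         # pi_bar_0 of (15)
    if pi >= (1 + rho) * (m - pi_bar0):                  # condition of part (a)
        print(pi, "stop-loss, t0 =", round(stop_loss_t0(pi), 4))
    else:                                                # condition of part (b)
        print(pi, "truncated stop-loss, t0 =", round(truncated_stop_loss_t0(pi), 4))
```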
4 Solutions to Problem 1.2

In this section, we study the optimal solutions when the insurer has a background risk, i.e., the solutions of Problem 1.2 or, equivalently, (6). Based on our analysis in Section 2, we need to solve (9) and apply Lemma 2.2 to get the optimal ceded loss functions. To solve (9), we first investigate the solutions of (11) and consequently invoke Lemma 2.3 for optimal reinsurance contracts.

Similar to Section 3, the analysis is divided into the two cases of W − π ≥ M and W − π < M.

4.1 Assume W − π ≥ M.
In this case, we can rewrite the objective function in (11) as follows:

    V₁(λ, F) = ∫₀^M F(t) [g_Y(W − π − t) − λ] dt + 1 − F_Y(W − π − M) + λ(M − π̃).

Hence, (11) reduces to

    max_{F ∈ F**}   ∫₀^M F(t) [g_Y(W − π − t) − λ] dt + 1 − F_Y(W − π − M) + λ(M − π̃),        (17)

of which one optimal solution follows from (7) as

    F*_λ(t) = { F_X(t),        if 0 ≤ t ≤ M and g_Y(W − π − t) − λ < 0,
                any constant,  if 0 ≤ t ≤ M and g_Y(W − π − t) − λ = 0,
                1,             if 0 ≤ t ≤ M and g_Y(W − π − t) − λ > 0,                        (18)

provided that F*_λ is an appropriate CDF in F**.

With the aid of Assumption 2.2 and Lemma 2.3, we can obtain a solution to (9) as given in Proposition 4.1 below.
Proposition 4.1  Suppose that Assumptions 2.1 and 2.2 hold and W − π ≥ M is satisfied. Then, one optimal solution to (9) is given by

    F*(t) = { F_X(t),  if 0 ≤ t < t₀,
              1,        if t₀ ≤ t ≤ M,                                                         (19)

where t₀ is such that ∫₀^{t₀} [1 − F*(t)] dt = π̃.

Proof. For a ∈ [0, M], we define

    F_a(t) := { F_X(t),  if 0 ≤ t < a,
                1,        if a ≤ t ≤ M,

and q(a) := ∫₀^M [1 − F_a(t)] dt. Obviously, q(a) is a continuous and non-decreasing function of a with q(0) = 0 and q(M) = E[X]. Thus, for any π̃ ∈ [0, E[X]), we can find t₀ such that q(t₀) = π̃. Let λ* = g_Y(W − π − t₀). Then, by virtue of Lemma 2.3, F* given in (19) solves (9) if we can show that it is a solution to (17) for λ = λ*. Indeed, since g_Y(W − π − t) is a non-decreasing function of t, we have g_Y(W − π − t) − λ* ≤ 0 for t ∈ [0, t₀) and g_Y(W − π − t) − λ* ≥ 0 for t ∈ (t₀, M], whereby it follows from (18) that F* is a solution to (17) for λ = λ*. Thus, the proof is complete.
By invoking Lemma 2.2, we can derive an optimal ceded loss function to solve Problem 1.2.

Theorem 4.1  Assume that Assumptions 2.1 and 2.2 hold and W − π ≥ M is satisfied. One optimal solution to Problem 1.2 is given by

    f*(x) = { 0,        if 0 ≤ x < t₀,
              x − t₀,   if t₀ ≤ x ≤ M,                                                         (20)

where t₀ is determined by (1 + ρ)E[f*(X)] = π.

Proof. It is a direct consequence of Proposition 4.1 and Lemma 2.2.

4.2 Assume W − π < M.
In this case, the objective function in (11) can be rewritten as follows:

    V₁(λ, F) = ∫₀^{W−π} F(t) [g_Y(W − π − t) − λ] dt − λ ∫_{W−π}^M F(t) dt + λ(M − π̃).        (21)
We follow the same procedure as applied in Section 3 to solve (11). Because F(W − π) ∈ [F_X(W − π), 1] for every F ∈ F**, we consider the following sub-problems indexed by u ∈ [F_X(W − π), 1]:

    max_{F ∈ F**}   V₁(λ, F),
    s.t.            F(W − π) = u.                                                              (22)

If we could derive an optimal solution F*_{λ,u} and the optimal value η(u) := V₁(λ, F*_{λ,u}) of (22) for each u ∈ [F_X(W − π), 1], then F_λ := F*_{λ,u*} with

    u* = argmax_{u ∈ [F_X(W−π), 1]} η(u)

is an optimal solution to (11).

We analyze the solutions of (22) for λ in three different ranges: λ = 0, 0 < λ ≤ g_Y((W − π)⁻), and g_Y((W − π)⁻) ≤ λ ≤ g_Y(0⁺), respectively. The case of λ > g_Y(0⁺) is not relevant in order to apply Lemma 2.3 for the optimal CDF of (9) and is thus omitted.
Case 4.1  λ = 0.

From (21), in this case, V₁(λ, F) = ∫₀^{W−π} F(t) g_Y(W − π − t) dt, which is non-decreasing in F. Therefore, F*_λ(t) = 1, t ∈ [0, M], is an optimal solution to (11).

Case 4.2  0 < λ ≤ g_Y((W − π)⁻).
In this case, g_Y(W − π − t) ≥ λ for t ∈ [0, W − π]. Therefore, by virtue of (21), a solution to (22) is given by

    F*_{λ,u}(t) = { u,        if 0 ≤ t < F_X^{-1}(u),
                    F_X(t),   if F_X^{-1}(u) ≤ t ≤ M,

which, along with the condition u ∈ [F_X(W − π), 1], further implies

    η(u) := V₁(λ, F*_{λ,u})
          = ∫₀^{W−π} u [g_Y(W − π − t) − λ] dt − λ ∫_{W−π}^{F_X^{-1}(u)} u dt − λ ∫_{F_X^{-1}(u)}^M F_X(t) dt + λ(M − π̃).

We compute the left derivative of η(u) as follows:

    η'_−(u) = ∫₀^{W−π} [g_Y(W − π − t) − λ] dt − λ [ ∫_{W−π}^{F_X^{-1}(u)} dt + u·(F_X^{-1})'_−(u) ] + λ F_X(F_X^{-1}(u)) (F_X^{-1})'_−(u)
            = λ [ F_Y(W − π)/λ − F_X^{-1}(u) ],   u ∈ (F_X(W − π), 1).

We can similarly derive its right derivative, and indeed, it is the same as the left derivative we just obtained. Thus, η(u) is differentiable over u ∈ (F_X(W − π), 1) with η'(u) = λ[F_Y(W − π)/λ − F_X^{-1}(u)], which is decreasing in u. This implies that η(u) is a concave function for u ∈ [F_X(W − π), 1] and attains its maximum over [F_X(W − π), 1] at

    u* = min{ max{ F_X(W − π), F_X(F_Y(W − π)/λ) }, 1 }.

We further note that

    F_Y(W − π) = ∫₀^{W−π} g_Y(y) dy ≥ ∫₀^{W−π} g_Y((W − π)⁻) dy = (W − π) g_Y((W − π)⁻),

and thus, F_Y(W − π)/λ ≥ W − π for 0 < λ ≤ g_Y((W − π)⁻). This implies

    u* = min{ F_X(F_Y(W − π)/λ), 1 } = F_X( min{ F_Y(W − π)/λ, M } ),

and

    F*_λ(t) = F*_{λ,u*}(t) = { F_X( min{ F_Y(W − π)/λ, M } ),   if 0 ≤ t < min{ F_Y(W − π)/λ, M },
                               F_X(t),                           if min{ F_Y(W − π)/λ, M } ≤ t ≤ M.

Notably, in the case of M ≤ F_Y(W − π)/λ, we have F*_λ(t) = 1 for t ∈ [0, M].

Case 4.3  g_Y((W − π)⁻) ≤ λ ≤ g_Y(0⁺).
In this case, there exists t₀ ∈ [0, W − π] such that g_Y(W − π − t) − λ ≤ 0 for t ∈ [0, t₀) and g_Y(W − π − t) − λ ≥ 0 for t ∈ (t₀, W − π], where [0, t₀) = ∅ for t₀ = 0. Hence, in view of (21) and the fact that F(t) ≥ F_X(t) ∀ t ∈ [0, M] and F(W − π) = u for every feasible F in (22), a solution to (22) is given by

    F*_{λ,u}(t) = { F_X(t),   if 0 ≤ t < t₀,
                    u,         if t₀ ≤ t < F_X^{-1}(u),
                    F_X(t),    if F_X^{-1}(u) ≤ t ≤ M.

Accordingly,

    η(u) := V₁(λ, F*_{λ,u})
          = ∫₀^{t₀} F_X(t) [g_Y(W − π − t) − λ] dt + ∫_{t₀}^{W−π} u [g_Y(W − π − t) − λ] dt
            − λ ∫_{W−π}^{F_X^{-1}(u)} u dt − λ ∫_{F_X^{-1}(u)}^M F_X(t) dt + λ(M − π̃).

Similarly to the previous case, we respectively consider the left and right derivatives of η(u) and find that they coincide for u ∈ (F_X(W − π), 1) and are given by

    η'(u) = λ [ F_Y(W − π − t₀)/λ + t₀ − F_X^{-1}(u) ],

which is non-increasing as a function of u ∈ [F_X(W − π), 1]. This implies that η(u) is concave and attains its maximum over [F_X(W − π), 1] at

    u* = min{ max{ F_X(W − π), F_X( F_Y(W − π − t₀)/λ + t₀ ) }, 1 }.

Since g_Y(t) is non-increasing in t and g_Y(W − π − t) ≥ λ for t ∈ (t₀, W − π],

    F_Y(W − π − t₀)/λ + t₀ = ∫_{t₀}^{W−π} [ g_Y(W − π − t)/λ ] dt + t₀ ≥ ∫_{t₀}^{W−π} dt + t₀ = W − π.

Hence,

    u* = min{ F_X( F_Y(W − π − t₀)/λ + t₀ ), 1 } = F_X( min{ F_Y(W − π − t₀)/λ + t₀, M } ) = F_X( H(t₀, λ) ),

and

    F*_λ(t) = F*_{λ,u*}(t) = { F_X(t),           if 0 ≤ t < t₀,
                               F_X( H(t₀, λ) ),   if t₀ ≤ t < H(t₀, λ),
                               F_X(t),            if H(t₀, λ) ≤ t ≤ M,                          (23)

where

    H(t, λ) = min{ F_Y(W − π − t)/λ + t, M }.                                                  (24)
With the analysis in the above Cases 4.1, 4.2 and 4.3, Lemma 2.3 can be readily invoked for the solution of (11). The solution is given in Propositions 4.2 and 4.3, respectively, for a small π̃ and a large π̃. To proceed, we define
    d̃ := F_Y(W − π)/g_Y((W − π)⁻)   and   φ(F) := ∫₀^M [1 − F(t)] dt,  F ∈ F**.               (25)

We further write

    π̄₁ := ∫₀^{d̃} [1 − F_X(t)] dt.                                                              (26)
Proposition 4.2  Assume that Assumptions 2.1 and 2.2 hold. For 0 ≤ d ≤ M, define

    F_d(t) := { F_X(d),   if 0 ≤ t < d,
                F_X(t),   if d ≤ t ≤ M.

Then, for each π̃ ∈ [0, π̄₁], there exists a constant t₀ ∈ [d̃, M] to satisfy φ(F_{t₀}) = π̃, and F_{t₀} solves (9).

Proof. Based on the analysis in Case 4.2, F_d solves (11) with λ = F_Y(W − π)/d for d ∈ [d̃, M]. Moreover, it is easy to check that φ(F_d) is continuous as a function of d with lim_{d→M} φ(F_d) = 0 and lim_{d→d̃} φ(F_d) = φ(F_{d̃}) = π̄₁. Thus, by the Intermediate Value Theorem, there exists a constant t₀ ∈ [d̃, M] such that φ(F_{t₀}) = π̃, and consequently, by Lemma 2.3, F_{t₀} is one solution to (9).
To derive the solution to (9) for π̃ larger than π̄₁, some extra notations are necessary. For each a ∈ [0, W − π], we denote

    Λ_a := [ lim_{t→a⁻} g_Y(W − π − t), lim_{t→a⁺} g_Y(W − π − t) ].

Further, for each a ∈ [0, W − π] and λ_a ∈ Λ_a, we define

    F^{(a,λ_a)}(t) := { F_X(t),            if 0 ≤ t < a,
                        F_X( H(a, λ_a) ),   if a ≤ t < H(a, λ_a),
                        F_X(t),             if H(a, λ_a) ≤ t ≤ M,                               (27)

so that

    φ(F^{(a,λ_a)}) = ∫₀^a [1 − F_X(t)] dt + [H(a, λ_a) − a]·[1 − F_X(H(a, λ_a))] + ∫_{H(a,λ_a)}^M [1 − F_X(t)] dt.

Since g_Y(t) is non-increasing in t, given any λ_a ∈ Λ_a, we have g_Y(W − π − t) − λ_a ≤ 0 for t ∈ [0, a) and g_Y(W − π − t) − λ_a ≥ 0 for t ∈ (a, W − π]. Therefore, according to the analysis in Case 4.3, F^{(a,λ_a)} is a solution to (22) for any λ_a ∈ Λ_a. Moreover, it is worth noting that Λ_a = {g_Y(W − π − a)}, which is a singleton set, at any continuity point a of g_Y(W − π − a). Therefore, for any interval (s₁, s₂) over which g_Y(W − π − t) is continuous, φ(F^{(a,λ_a)}) is a continuous function of a.
Proposition 4.3  Assume that Assumptions 2.1 and 2.2 hold. For each π̃ ∈ [π̄₁, E[X]), there exists a constant a ∈ [0, W − π] and λ_a ∈ Λ_a such that F^{(a,λ_a)} solves (9).

Proof. Based on the analysis in Case 4.3 and Lemma 2.3, it is sufficient to show the existence of a ∈ [0, W − π] and λ_a ∈ Λ_a satisfying φ(F^{(a,λ_a)}) = π̃ for each π̃ ∈ [π̄₁, E[X]). Note that F^{(a,λ_a)} = F_{d̃} for a = 0 and λ_a = lim_{t→0⁺} g_Y(W − π − t) = g_Y((W − π)⁻), and F^{(a,λ_a)} = F_X for a = W − π and any λ_a ∈ Λ_{W−π}. These two cases lead to φ(F^{(a,λ_a)}) = φ(F_{d̃}) = π̄₁ and φ(F^{(a,λ_a)}) = E[X], respectively. Therefore, it is sufficient for us to assume π̃ ∈ (π̄₁, E[X]) in the rest of the proof.

Define

    S_π̃ := { t ∈ [0, W − π] : ∃ λ_t ∈ Λ_t such that φ(F^{(t,λ_t)}) > π̃ },

and t₀ := sup S_π̃. If g_Y(W − π − t) is continuous at t₀, then it is continuous over a neighbourhood of t₀, say (t₀ − δ, t₀ + δ) for some constant δ > 0, because it is a monotone function. In this case, Λ_t = {g_Y(W − π − t)} for each t ∈ (t₀ − δ, t₀ + δ), and therefore, φ(F^{(a,λ_a)}) is a continuous function of a over (t₀ − δ, t₀ + δ) with λ_a = g_Y(W − π − a), which in turn implies that, given any ε > 0,

    φ(F^{(s,λ_s)}) − ε ≤ φ(F^{(t₀,λ_{t₀})}) ≤ φ(F^{(s,λ_s)}) + ε

whenever s ∈ (t₀ − κ, t₀ + κ) for some constant κ ∈ (0, δ). On one hand, by the supremum property of t₀, there exists s₁ ∈ (t₀ − κ, t₀) with s₁ ∈ S_π̃ such that

    φ(F^{(t₀,λ_{t₀})}) ≥ φ(F^{(s₁,λ_{s₁})}) − ε ≥ π̃ − ε.

On the other hand, there exists s₂ ∈ (t₀, t₀ + κ) with s₂ ∉ S_π̃ such that

    φ(F^{(t₀,λ_{t₀})}) ≤ φ(F^{(s₂,λ_{s₂})}) + ε ≤ π̃ + ε.

Letting ε → 0 in the last two displays, we get φ(F^{(t₀,λ_{t₀})}) = π̃.

Otherwise, we assume that g_Y(W − π − t) is discontinuous at t₀, and define

    g⁻_{t₀} := lim_{t→t₀⁻} g_Y(W − π − t)   and   g⁺_{t₀} := lim_{t→t₀⁺} g_Y(W − π − t).

Since g_Y(W − π − t) is monotone as a function of t, it has at most countably many jumps, which in turn implies that there exists a constant δ > 0 such that g_Y(W − π − t) is continuous over (t₀ − δ, t₀) and (t₀, t₀ + δ). Therefore, φ(F^{(s,λ_s)}) is a continuous function of s over (t₀ − δ, t₀) and (t₀, t₀ + δ) with

    lim_{s→t₀⁻} φ(F^{(s,λ_s)}) = φ(F^{(t₀, g⁻_{t₀})})   and   lim_{s→t₀⁺} φ(F^{(s,λ_s)}) = φ(F^{(t₀, g⁺_{t₀})}).

By the supremum property of t₀, there exists s₁ ∈ (t₀ − δ, t₀) with s₁ ∈ S_π̃ such that

    φ(F^{(t₀, g⁻_{t₀})}) ≥ φ(F^{(s₁,λ_{s₁})}) − ε ≥ π̃ − ε.

On the other hand, there exists s₂ ∈ (t₀, t₀ + δ) with s₂ ∉ S_π̃ such that

    φ(F^{(t₀, g⁺_{t₀})}) ≤ φ(F^{(s₂,λ_{s₂})}) + ε ≤ π̃ + ε.

Letting ε → 0 in the last two displays, we obtain

    φ(F^{(t₀, g⁺_{t₀})}) ≤ π̃ ≤ φ(F^{(t₀, g⁻_{t₀})}).

Since φ(F^{(t₀,λ)}) with t₀ fixed is a continuous function of λ, there must exist some λ_{t₀} ∈ Λ_{t₀} such that φ(F^{(t₀,λ_{t₀})}) = π̃.
The optimal ceded loss function for Problem 1.2 can be retrieved from the optimal CDF
that is derived in Propositions 4.2 and 4.3 by invoking Lemma 2.2.
Theorem 4.2  Assume that Assumptions 2.1 and 2.2 hold.
(a) For each (1 + ρ)(E[X] − π̄₁) ≤ π ≤ (1 + ρ)E[X], one solution to Problem 1.2 is given by

    f*(x) = { 0,        if 0 ≤ x < t₀,
              x − t₀,   if t₀ ≤ x ≤ M,

where t₀ is determined by (1 + ρ)E[f*(X)] = π.
(b) For each 0 < π < (1 + ρ)(E[X] − π̄₁), there exists a constant a ∈ [0, W − π] and λ_a ∈ Λ_a such that one optimal solution to Problem 1.2 is given by

    f^{(a,λ_a)}(x) = { 0,       if 0 ≤ x < a,
                       x − a,   if a ≤ x < H(a, λ_a),
                       0,       if H(a, λ_a) ≤ x ≤ M,                                          (28)

where (1 + ρ)E[f^{(a,λ_a)}(X)] = π.

Proof. Since π̃ = E[X] − π/(1 + ρ), the conditions on π given in both parts are equivalent to those two in terms of π̃ in Propositions 4.2 and 4.3, respectively. The result of part (a) follows from Lemma 2.2 and Proposition 4.2 as follows:

    f*(x) = x − r*(x) = x − (F_{t₀})^{-1}(F_X(x)) = { 0,        if 0 ≤ x < t₀,
                                                      x − t₀,   if t₀ ≤ x ≤ M,

where t₀ and F_{t₀} are as given in Proposition 4.2. The result of part (b) can be proved similarly by using Lemma 2.2 and Proposition 4.3 as

    f*(x) = x − r*(x) = x − (F^{(a,λ_a)})^{-1}(F_X(x)) = { 0,       if 0 ≤ x < a,
                                                           x − a,   if a ≤ x < H(a, λ_a),
                                                           0,       if H(a, λ_a) ≤ x ≤ M,

where a and λ_a are as given in Proposition 4.3.
Remark 4  It is interesting to compare the solutions between Problems 1.1 and 1.2. For the case of W − π ≥ M, their solutions are given in Theorems 3.1 and 4.1, respectively. In this case, only a stop-loss reinsurance is shown to be optimal for Problem 1.2, whereas any reinsurance treaty satisfying the premium budget constraint is optimal to Problem 1.1.

The solutions for the case of W − π < M are given in Theorems 3.2 and 4.2 for the two problems, respectively. For both problems, the optimal reinsurance contract is a stop-loss treaty for a larger premium budget π and a truncated stop-loss treaty for a small premium budget π. This means that, in the latter case with a small premium budget, the optimal strategy for the insurer is to entirely sacrifice the protection against the occurrence of a large loss. The critical point at which the optimal solution transits from a stop-loss treaty to a truncated stop-loss one also differs between these two problems. In the presence of Assumption 2.2, g_Y is non-increasing, and thus, it follows from (25) that

    d̃ = F_Y(W − π)/g_Y((W − π)⁻) ≥ W − π.

It in turn follows from (15) and (26) that π̄₀ ≤ π̄₁, and therefore,

    (1 + ρ)(E[X] − π̄₁) ≤ (1 + ρ)(E[X] − π̄₀).
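The comparison in Remark 4 can be illustrated numerically. The Python sketch below (with assumed exponential distributions for X and Y and assumed values of W and π, none of which come from the paper) computes d̃, π̄₀ and π̄₁ and confirms that d̃ ≥ W − π and π̄₀ ≤ π̄₁ in this example.

```python
import numpy as np

# Illustrative sketch for Remark 4, not from the paper. Assumptions: X ~ Exponential
# (mean m = 10), Y ~ Exponential(mean mu = 5), so g_Y is non-increasing as required by
# Assumption 2.2; W = 15 and premium budget pi = 3, hence W - pi = 12 < M = infinity.
m, mu, W, pi = 10.0, 5.0, 15.0, 3.0
c = W - pi

d_tilde = (1.0 - np.exp(-c / mu)) / (np.exp(-c / mu) / mu)   # F_Y(W-pi) / g_Y((W-pi)-)
pi_bar0 = m * (1.0 - np.exp(-c / m))          # (15): integral of 1 - F_X over [0, W - pi]
pi_bar1 = m * (1.0 - np.exp(-d_tilde / m))    # (26): integral of 1 - F_X over [0, d_tilde]

print("d_tilde =", round(d_tilde, 3), ">= W - pi =", c)
print("pi_bar0 =", round(pi_bar0, 3), "<=", "pi_bar1 =", round(pi_bar1, 3))
```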
5 Conclusion

In the present paper, we propose an innovative cumulative distribution function (CDF) based method to solve a constrained and generally non-convex stochastic optimization problem, which arises from the area of optimal reinsurance and targets to design the optimal reinsurance contract for an insurer to maximize its survival probability or, more generally, to solve a goal-reaching model. It is an important decision problem for insurance companies in their risk management. Our proposed method reformulates the optimization problem into a functional linear programming problem of determining an optimal CDF over a corresponding feasible set. The linearity of the CDF formulation allows us to conduct a pointwise optimization procedure, combined with the Lagrangian dual method, to solve the problem. Compared with the existing literature, our proposed CDF based method is more technically convenient and transparent in the derivation of optimal solutions. Moreover, our proposed CDF based method can be readily applied for analytical solutions in the presence of background risk. The inclusion of background risk leads to a more complex problem, and the analytical solutions are obtained for the first time.
Acknowledgements

Weng thanks the financial support from the Natural Sciences and Engineering Research Council of Canada and the Society of Actuaries Centers of Actuarial Excellence Research Grant. Zhuang acknowledges financial support from the Department of Statistics and Actuarial Science, University of Waterloo.
References
[1] Arrow, K.J., 1963. Uncertainty and the welfare economics of medical care. American Economic Review 53, 941-973.
[2] Asimit, A.V., Badescu, A.M., Cheung, K.C., 2013. Optimal reinsurance in the presence of
counterparty default risk. Insurance: Mathematics and Economics 53, 590-697.
20
[3] Balbás, A., Balbás, B., Heras, A., 2009. Optimal reinsurance with general risk measures.
Insurance: Mathematics and Economics 44, 374-384.
[4] Borch, K., 1960. An attempt to determine the optimum amount of stop loss reinsurance.
In: Transactions of the 16th International Congress of Actuaries, Vol. I. 597-610.
[5] Browne, S., 1999. Reaching goals by a deadline: digital options and continuous-time active
portfolio management. Advances in Applied Probability 31(2), 551-577.
[6] Browne, S., 2000. Risk-constrained dynamic active portfolio management. Management
Science 46(9), 1188-1199.
[7] Cai, J., Tan, K.S., Weng, C., Zhang, Y., 2008. Optimal Reinsurance under VaR and CTE
Risk Measures. Insurance: Mathematics and Economics 43, 185-196.
[8] Cheung, K.C., 2010. Optimal reinsurance revisited - a geometric approach. ASTIN Bulletin 40, 221-239.
[9] Cheung, K.C., Sung, K.C.J., Yam, S.C.P., Yung, S.P., 2014. Optimal reinsurance under
general law-invariant risk measures. Scandinavian Actuarial Journal 2014(1), 72-91.
[10] Chi, Y., Tan, K.S., 2013. Optimal reinsurance with general premium principles. Insurance:
Mathematics and Economics 52, 180-189.
[11] Chi, Y., Weng, C., 2013. Optimal reinsurance subject to Vajda condition. Insurance: Mathematics and Economics 53, 179-189.
[12] Courbage, C., Rey, B., 2007. Precautionary saving in the presence of other risks. Economic
Theory 32, 417-424.
[13] Cui, W., Yang, J., Wu, L., 2013. Optimal reinsurance minimizing the distortion risk measure under general reinsurance premium principles. Insurance: Mathematics and Economics 53, 74-85.
[14] Dana, R.-A., Scarsini, M., 2007. Optimal risk sharing with background risk. Journal of
Economic Theory 133, 152-176.
[15] Gajek, L., Zagrodny, D., 2000. Insurer's optimal reinsurance strategies. Insurance: Mathematics and Economics 27, 105-112
[16] Gajek, L., Zagrodny, D., 2004. Reinsurance arrangements maximizing insurer's survival
probability. Journal of Risk and Insurance 71, 421-435.
[17] Gollier, C., 1996. Optimal insurance of approximate losses. The Journal of Risk and Insurance 63, 369-380.
[18] Gollier, C., Pratt, J., 1996. Risk vulnerability and the tempering effect of background risk.
Econometrica 64(5), 1109-1123.
21
[19] Kaluszka, M., 2001. Optimal reinsurance under mean-variance premium principles. Insurance: Mathematics and Economics 28, 61-67.
[20] Kulldorff, M., 1993. Optimal control of favorable games with a time limit. SIAM Journal on Control and Optimization 31(1), 52-69.
[21] Mahul, O., 2000. Optimal insurance design with random initial wealth. Economics Letters
69, 353-358.
[22] Schlesinger, H., 2013. The theory of insurance demand. In: Handbook of Insurance. New York: Springer, 167-184.
[23] Tan, K.S., Weng, C., Zhang, Y., 2009. VaR and CTE criteria for optimal quota-share and
stop-loss reinsurance. North American Actuarial Journal 13, 450-482.
[24] Tan, K.S., Weng, C., Zhang, Y., 2011. Optimality of general reinsurance contracts under
CTE risk measure. Insurance: Mathematics and Economics 49, 175-187.
[25] Young, V.R., 1999. Optimal insurance under Wang's premium principle. Insurance: Mathematics and Economics 25, 109-122.
[26] Zheng, Y., Wei, C., Yang, J., 2015. Optimal reinsurance under distortion risk measures and
expected value premium principle for reinsurer. Journal of Systems Science and Complexity
28, 122-143.