An optimal consumption problem in finite time with a
constraint on the ruin probability
Peter Grandits∗
Institut für Wirtschaftsmathematik
TU Wien
Wiedner Hauptstraße 8-10, A-1040 Wien
Austria
September 2013
Keywords: optimal consumption, singular control problem, free boundary value problem
AMS subject classifications: 49J20, 35R37, 45J05
Abstract
In this paper we want to investigate the following problem: For a given upper bound for the
ruin probability, maximize the expected discounted consumption of an investor in finite time.
The endowment of the agent is modeled by Brownian motion with positive drift. We give an
iterative algorithm for the solution of the problem, where in each step an unconstrained, but
penalized, problem is solved. For the discontinuous value function V (t, x) of the penalized problem
we show that it is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman
equation. Moreover, we characterize the optimal strategy as a barrier strategy with a continuous
barrier function.
1 Introduction
In Actuarial Mathematics there are two main paradigms of optimization for insurance companies:
On the one hand, we have the classical approach of minimizing the ruin probability of the company. The seminal paper in this direction was [19], in which the author gives an upper bound for
the ruin probability of a company, which faces a claim evolution described by a compound Poisson
process. Meanwhile this result has been generalized in a vast number of directions. E.g., one could
ask for the optimal investment strategy of a company, investing in the stock market, if the aim is to
keep the ruin probability as low as possible; see e.g. [12], [21], and [24].
Whereas in this approach the main focus lies on the safety of the investor, it was often criticized as too
conservative. Hence, B. de Finetti suggested in his classical paper [10] to maximize the expectation
of the discounted dividends paid out by the company, see e.g. [2] and [24] and the references therein
for an overview on the literature devoted to this topic.
The classical model for the wealth process of an insurance company is the so-called Cramér-Lundberg
process, basically a compound Poisson process. In the case of very frequent and small claims the
so-called diffusion approximation is often used, i.e. the endowment of the company is given by Brownian
motion with a certain linear positive drift µt and a certain volatility σ. It is this model we shall use
∗ email: [email protected], tel. +43-1-58801-10512, fax +43-1-5880110599
in this paper exclusively. Now, one main drawback of the “de Finetti approach” is that in an infinite
time horizon setting, and in all relevant models, it leads to an optimal payment stream, generating
almost sure ruin of the company. E.g., in our diffusion setting it turns out (see [25], or [1]) that it is
optimal to use a so called barrier strategy with constant barrier. This means the investor pays out
just enough to keep the endowment below a certain level.
In two recent papers [13],[14] the de Finetti problem was solved in a diffusion setting, but for finite
time horizon T . It turned out that in this case a barrier strategy is again optimal, but now the barrier
function b(t) is time-dependent, with asymptotic behavior b(t) ∼ σ√(−(T − t) ln(T − t)) for t → T.
Interestingly, in the paper [6] it was shown that, for a Brownian motion reflected at a barrier like
this, there occurs a so-called heat atom. This means - if we denote the reflected process by Xt - that
one has P(XT = 0) > 0. It is not difficult to show that this also holds if we replace Brownian motion
by Brownian motion with drift, which is absorbed at zero, in the sense that P(τ = T) ≡ 1 − κ0 > 0
(with τ denoting the ruin time). Hence, the optimal barrier strategy of [13],[14] clearly solves the
de Finetti problem, if we impose the constraint that the company is ruined with a probability not
larger than κ0 . The aim of the present paper is to study this constrained problem for a general κ
with κ1 < κ < κ0 , where κ1 is the ruin probability of a company, which does not pay out at all. We
conclude this Introduction with an outline of the paper. In section 2 we define our model and present
an iterative algorithm, solving the problem described above. In each step of the algorithm one has to
solve the classical de Finetti problem, but with an additional penalty term in the target functional.
Moreover, we present in this section the main mathematical result, namely the characterization of
the optimal strategy solving the penalized problem, which turns out to be a barrier strategy with
continuous barrier function and b(T ) = 0. In the rest of the paper we deal exclusively with this
penalized problem and show in section 3 that its value function V (t, x) is upper-semicontinuous on
[0, T] × ℝ⁺₀ and continuous on [0, T] × ℝ⁺₀ \ {(T, 0)}. In section 4 we show a dynamic programming
principle (DPP) for our problem, and using this DPP, we show in section 5 that the value function
is the unique viscosity solution of the proper Hamilton-Jacobi-Bellman (HJB) equation in its variational form. The proof follows standard methods as in [23], [11], but some parts need a
modification due to the discontinuity of V (t, x) at (T, 0) and the discontinuity of the state process.
For convenience of the reader we give full proofs. This concludes the first part of the paper.
If one uses viscosity methods, one very often shows - as in our case - the validity of the HJB-equation
using the DPP. But this does not give us a characterization of the optimal strategy to be
used, which is the aim of the second part of the paper. In section 6 we consider an approximating problem for our penalized problem. We show that the solution of a certain approximating free
boundary value problem (FBVP) is a viscosity solution of the HJB equation and therefore, by part
one, the value function of our approximating problem. The methods to do this are similar to [13].
In principle one uses Green’s functions and the principle of smooth fit. Two remarks are important:
a) As in [13] we solve the FBVP on a symmetric region in the x-variable. For a motivation to do
this, see the Introduction of [13]. b) Before using the Green’s function method, one has to subtract
the discontinuity at (T, 0) in a proper way from the value function; then one can apply the whole
machinery to the regularized “value function” Ṽ . Moreover, we show in section 6 that the optimal strategy for the approximating problem is to use a barrier strategy, and we provide the barrier
function as the C 1 -solution of a certain integral equation. In the following section 7 we show that
the approximating barrier functions of section 6 converge to a continuous limit, and that this limit
provides the barrier function for a barrier strategy, which is the optimal strategy for the penalized
problem. Concluding remarks and a numerical example finish the paper.
Before we conclude this introduction, let us note that, although the original motivation for this paper
comes from Actuarial Mathematics, one can replace the dividend payments by some arbitrary consumption of an agent whose endowment is described by Brownian motion with drift. Hence
we will always speak in the sequel of an optimal consumption problem. Let us also mention that our
problem is a pure finite horizon problem, meaning that the endowment of the agent is zero at time
T , and he is not concerned what happens afterwards. One can easily adapt the setting by replacing
the level x = 0 by an arbitrary level x = x0 > 0. This would yield an optimal way to reach the
endowment level x0 at time T with a certain probability (and without falling below this level in the
meantime).
Let us conclude this introduction with some relevant literature. For the first part, note that the literature on viscosity methods in Financial and Actuarial Mathematics is vast. Let us mention, besides
the standard monographs [11], [23], the lecture notes [5] and all the references therein. Concerning
a consumption problem with constraint, an interesting paper is [9], where the author studies an
optimization problem with drawdown constraint in finite time. He is able to characterize the value
function as the unique viscosity solution of the associated HJB equation. Some numerics for computing the
value function are also provided. Concerning optimization of dividends, let us mention the papers [26]
and [22]. For the second part of our paper, some seminal works are [3] and [20].
Finally, let us mention a follow-up paper [15], which is in preparation, where we study the asymptotic
behavior of our optimal barrier function for t → T .
2 The model and transformation of the original problem to a penalty problem
We model the endowment of the agent at time t by the following process

Xt = x + µt + σWt − Ct,   0 ≤ t ≤ T.   (1)
Here x, σ, µ are positive constants, describing the initial endowment, the volatility and the drift of the
process. Wt denotes standard Brownian motion on a filtration F, which fulfills the usual conditions
of right continuity and completeness. We further assume that Ct is an admissible process, i.e. C ∈ A,
where A consists of all F-adapted, nondecreasing, càglàd processes fulfilling ∆Ct ≤ Xt. Since Ct
describes the accumulated consumption, this means that we are not allowed to consume a lump sum
larger than our current endowment.
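For illustration only (this sketch is an addition of this edit, not part of the paper's analysis), the constant κ1 - the ruin probability of an agent who never consumes - can be estimated by a simple Euler/Monte Carlo discretization of the endowment process (1); all parameter values below are assumptions of this example.

```python
import numpy as np

def ruin_probability_no_consumption(x, mu, sigma, T, n_steps=500, n_paths=20000, seed=0):
    """Monte Carlo estimate of kappa_1 = P(tau < T) for the uncontrolled
    endowment X_t = x + mu*t + sigma*W_t (Euler scheme on a fixed grid;
    crossings between grid points are ignored)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, float(x))
    ruined = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        X += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        ruined |= X <= 0.0   # record paths that have hit the ruin level
    return ruined.mean()
```

The estimate is slightly downward biased, since continuous-time crossings between grid points are missed; a finer grid (or a Brownian-bridge correction) reduces this bias.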
Our original extremal problem with constraint is the following one:

(P0)   J0(0, x, C) ≡ E[∫_0^τ e^{−βs} dCs + e^{−βτ} Xτ] → max,   (2)
       subject to P(τ < T) ≤ κ,

where τ denotes the time of ruin, i.e. τ ≡ inf{s > 0 | Xs = 0} ∧ T, and κ is a constant with 0 < κ1 <
κ < 1, where κ1 is the ruin probability of an agent not consuming at all. Note that the symbol ∫_0 is
meant to include a possible jump at time s = 0, and that, for definiteness, we set Xs ≡ 0, Cs ≡ Cτ+,
for s > τ.
We now formulate a new problem with penalty and show in the subsequent Lemma that we can
use the solution of the latter problem as the solution of the former one. The penalization problem
is as follows:

(P1)   J1(0, x, C) ≡ E[∫_0^τ e^{−βs} dCs + e^{−βτ} (γ̄ + Xτ) 1_{τ=T}] → max,   (3)

where 1_{·} denotes the indicator function, whereas γ̄ is some positive constant, controlling the weight
for the penalization of premature ruin (resp. a reward for “staying alive”).
Lemma 2.1 a) Assume that (P1) has a solution C* ∈ A with corresponding ruin time τ*, and
assume further that P(τ* < T) = κ; then C* also solves the problem (P0).
b) Denote by C^γ, J_1^γ, p^γ the optimal strategy, the target functional J1, and the survival probability of
C^γ for the parameter γ ≡ e^{−βT} γ̄. Then γ0 < γ1 implies p^{γ0} ≤ p^{γ1}.
Proof. a) We argue by contradiction. So assume there exists a strategy C̃ with

E[∫_0^{τ̃} e^{−βs} dC̃s + e^{−βτ̃} X̃_{τ̃}] − E[∫_0^{τ*} e^{−βs} dC*_s + e^{−βτ*} X*_{τ*}] = ρ,   ρ > 0,
P(τ̃ < T) = κ − δ,   δ ≥ 0.   (4)
Here τ̃, X̃ and τ*, X* correspond to the strategies C̃ and C*, respectively. We now claim that the strategy
C̃ gives a better target functional in (P1) than C*, which would be a contradiction. We find

J1(0, x, C̃) = E[∫_0^{τ̃} e^{−βs} dC̃s + e^{−βτ̃} X̃_{τ̃}] + γ̄ P(τ̃ = T) e^{−βT}
 = ρ + E[∫_0^{τ*} e^{−βs} dC*_s + e^{−βτ*} X*_{τ*}] + γ̄ e^{−βT} (1 − κ + δ)
 = ρ + E[∫_0^{τ*} e^{−βs} dC*_s + e^{−βτ*} X*_{τ*}] + γ̄ e^{−βT} P(τ* = T) + δ γ̄ e^{−βT}
 = ρ + δ γ̄ e^{−βT} + J1(0, x, C*),   (5)

which is the desired contradiction.
b) We argue again by contradiction. So assume p^{γ0} > p^{γ1}. We claim that this would imply

J_1^{γ1}(C^{γ0}) > J_1^{γ1}(C^{γ1}),   (6)

which would clearly be a contradiction to the optimality of C^{γ1}. Now this can be written as

J_1^{γ0}(C^{γ0}) + (γ1 − γ0) p^{γ0} > J_1^{γ0}(C^{γ1}) + (γ1 − γ0) p^{γ1},

which is obviously true. ⊓⊔
So, assuming that we can solve problem (P1 ), the algorithm to solve (P0 ) would be the following:
1.) Choose a positive number γ̄ and solve problem (P1 ).
2.) Using the optimal strategy of 1.), find the corresponding ruin probability by simulation.
3.) If it is too high (low), i.e. larger (smaller) than κ, increase (decrease) the value of γ̄.
4.) Iterate the steps 1.)-3.), until the corresponding ruin probability is sufficiently close to κ.
Note that for γ̄ → ∞ we will end up with κ1 , defined above, and for γ̄ = 0 we have the unconstrained
problem solved in [13],[14] with corresponding ruin probability κ0 .
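In code, steps 1.)-4.) amount to a monotone root search over γ̄, since by Lemma 2.1 b) a larger γ̄ yields a (weakly) larger survival probability. The sketch below is our own illustration: `ruin_prob_of_gamma` is a hypothetical stand-in for steps 1.) and 2.), i.e. for solving (P1) at the given γ̄ and estimating the resulting ruin probability by simulation.

```python
import math

def calibrate_gamma(ruin_prob_of_gamma, kappa, gamma_lo=0.0, gamma_hi=100.0, tol=1e-4):
    """Bisection on the penalty weight gamma_bar: by Lemma 2.1 b) the ruin
    probability of the optimal strategy is nonincreasing in gamma_bar, so the
    target level kappa can be bracketed between gamma_lo and gamma_hi."""
    while gamma_hi - gamma_lo > tol:
        mid = 0.5 * (gamma_lo + gamma_hi)
        if ruin_prob_of_gamma(mid) > kappa:
            gamma_lo = mid   # ruin still too likely: penalize premature ruin more
        else:
            gamma_hi = mid   # ruin probability below kappa: penalize less
    return 0.5 * (gamma_lo + gamma_hi)

# toy stand-in for the simulated map gamma_bar -> ruin probability
toy = lambda g: 0.6 * math.exp(-0.05 * g) + 0.1
gamma_star = calibrate_gamma(toy, kappa=0.3)
```

Any monotone one-dimensional root finder would do; bisection is shown because it only needs the monotonicity guaranteed by Lemma 2.1 b).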
Hence, from now on we shall only consider the problem (P1), and we now state our main result,
the proof of which we postpone to section 7.
Theorem 2.1 The solution of problem (P1) is given by a barrier strategy with a continuous barrier
function b(t), 0 ≤ t ≤ T, with b(T) = 0, i.e. the optimal consumption process and the optimal state
process are given by

C*_0 = 0,   C*_t = max_{0≤s≤t} [x + µs + σWs − b(s)]⁺,   t > 0,
X*_t = x + µt + σWt − C*_t.   (7)

Moreover, we have lim_{ϵk→0} b^{(ϵk)}(t) = b(t) pointwise, where the b^{(ϵk)} are C¹-solutions of the integral
equation (64), and the sequence ϵk can be constructed explicitly (see Lemma 7.1).
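As a numerical illustration of (7) - our own sketch, not part of the paper: the barrier b is passed in as an arbitrary function, whereas in Theorem 2.1 it is the specific solution of the integral equation (64), and absorption at the ruin level is omitted for brevity - the consumption process is simply the running maximum of the positive excess over the barrier:

```python
import numpy as np

def barrier_strategy_path(x, mu, sigma, T, barrier, n_steps=1000, seed=1):
    """Simulate one path of the pair (C_t, X_t) defined by (7):
    C_t = max_{0<=s<=t} [x + mu*s + sigma*W_s - b(s)]^+ (running maximum),
    X_t = x + mu*t + sigma*W_t - C_t, so that X_t never exceeds b(t)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)
    dW = sigma * np.sqrt(T / n_steps) * rng.standard_normal(n_steps)
    Y = x + mu * t + np.concatenate(([0.0], np.cumsum(dW)))    # uncontrolled endowment
    C = np.maximum.accumulate(np.maximum(Y - barrier(t), 0.0))  # cumulative consumption
    X = Y - C                                                   # controlled endowment
    return t, C, X
```

By construction C is nondecreasing and X_t ≤ b(t) for every t, which is exactly the "pay out just enough to stay below the barrier" behavior described in the Introduction.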
3 Continuity properties of the value function
In this section we shall prove upper-semicontinuity for the value function of problem (P1). We first
approximate our target functional and show continuity of the approximating value function. Let
Q ≡ {(t, x) | 0 < t < T, x > 0}, which implies that τ is the exit time from Q for our underlying process
Xt. In order to formulate our approximating problem, we need a function fn : [0, T] → ℝ⁺₀ with
the following properties: fn ∈ C^∞([0, T]), fn(s) = 0 on the interval [0, T(1 − 1/n)], fn is increasing on
[T(1 − 1/n), T], and fn(T) = γ ≡ e^{−βT} γ̄. Obviously, one can easily construct such a function explicitly.
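For concreteness, one explicit choice of fn (our own sketch, using the standard smooth-step construction; the paper only requires that such a function exists) is:

```python
import math

def f_n(s, T, n, gamma):
    """A C^infinity function on [0, T] with f_n = 0 on [0, T(1 - 1/n)],
    f_n increasing on [T(1 - 1/n), T], and f_n(T) = gamma."""
    def g(u):
        # g is C^infinity with g(u) = 0 for u <= 0 and g(u) > 0 for u > 0
        return math.exp(-1.0 / u) if u > 0.0 else 0.0
    u = (s - T * (1.0 - 1.0 / n)) / (T / n)   # rescale [T(1-1/n), T] to [0, 1]
    return gamma * g(u) / (g(u) + g(1.0 - u))
```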
Since we deal in the sequel with problem (P1) only, we define, by a slight abuse of notation, the
approximating target functionals Jn as

Jn(t, x, C) ≡ E[∫_t^τ e^{−βs} dCs + e^{−βτ} Xτ^{(t,x,C)} + fn(τ)].   (8)

Here Xs^{(t,x,C)} ≡ x + µ(s − t) + σ(Ws − Wt) − (Cs − Ct), s ≥ t, and τ is the corresponding ruin time.
Let now the value function of our new problem be given by

Vn(t, x) ≡ sup_{C∈A} Jn(t, x, C).   (9)
In order to show continuity of Vn, we will have to compare ruin times for different starting points
(t, x). More precisely, we shall show that, if the starting points are close enough, one can find a new
strategy such that the corresponding ruin time is close to the ruin time of the original strategy.

Lemma 3.1 Let C ∈ A, and τ ≡ inf{s > t | (s, Xs^{(t,x,C)}) ∉ Q}. Then we have:
a) Let (t, x) ∈ Q̄ \ {(T, 0)}, and P(τ = T) = ϕ ≥ 0. Let further (t̂, x̂) ≡ (t + ∆t, x + ∆x) ∈ Q̄;
then we can find a strategy Ĉ ∈ A such that, with τ̂ ≡ inf{s > t̂ | (s, Xs^{(t̂,x̂,Ĉ)}) ∉ Q}, we have

P(τ̂ = T) → ϕ,   (10)

for (∆t, ∆x) → 0.
b) Let (t, x) ∈ Q̄. For (t̂, x̂) ∈ Q̄, there exists a strategy Ĉ ∈ A with corresponding ruin time τ̂, such
that

||τ̂ − τ||_{L²(P)} → 0,   (11)

for (∆t, ∆x) → 0, uniformly in (t, x).
We defer the proof to the Appendix.
The next proposition deals with continuity properties of the approximating value function Vn (t, x).
Proposition 3.1 The value function Vn(t, x) of our auxiliary problem is continuous on [0, T] × ℝ⁺₀.
Moreover, Vn − e^{−βt} x is uniformly continuous on the same set.
Proof. We first rearrange the target functional Jn:

Jn(t, x, C) ≡ E[∫_t^τ e^{−βs} dCs + e^{−βτ} Xτ^{(t,x,C)} + fn(τ)]
 = E[∫_t^τ e^{−βs} (µ ds + σ dWs − dXs^{(t,x,C)}) + e^{−βτ} Xτ^{(t,x,C)} + fn(τ)]
 = (µ/β)(e^{−βt} − E[e^{−βτ}]) + e^{−βt} x − β E[∫_t^τ e^{−βs} Xs^{(t,x,C)} ds] + E[fn(τ)],   (12)

where we have used the defining equation for the state process Xs^{(t,x,C)} in the first step, and integration by parts in the second one.
Let ϵ > 0 be given. By definition, there exists an admissible strategy C̄ with exit time τ , s.t.
Vn(t, x) − Jn(t, x, C̄) ≤ ϵ.   (13)
We apply Lemma 3.1 (with C replaced by C̄) and get a new admissible strategy Ĉ with exit time τ̂ ,
s.t.

||τ̂ − τ||_{L²} = o(1),   (14)

if (∆t, ∆x) → 0, uniformly in (t, x). In the rest of this proof we understand the o(1)-terms to be
uniform in (t, x). For the difference of the target functionals we get, using our new representation
(12),
Jn(t, x, C̄) − e^{−βt} x − Jn(t + ∆t, x + ∆x, Ĉ) + e^{−β(t+∆t)} (x + ∆x)
 ≤ (µ/β) |e^{−βt} − e^{−β(t+∆t)}| + (µ/β) |E[e^{−βτ̂}] − E[e^{−βτ}]|
 + β |E[∫_t^τ e^{−βs} Xs^{C̄} ds] − E[∫_{t+∆t}^{τ̂} e^{−βs} Xs^{Ĉ} ds]|
 + |E[fn(τ)] − E[fn(τ̂)]| ≡ I1 + I2 + I3 + I4.   (15)
I1 is clearly o(1), and for I2 we get

I2 ≤ D||τ − τ̂||_{L¹} ≤ D||τ − τ̂||_{L²} = o(1),   (16)

where we have used the fact that the function e^{−βt} is Lipschitz on [0, T], and Lemma 3.1b). For the
same reason, I4 is o(1), so we are left with I3 and get
I3 ≤ β E[∫_u^{min(τ̂,τ)} e^{−βs} |Xs^{Ĉ} − Xs^{C̄}| ds] + β E[1_{τ̂≤τ} ∫_{τ̂}^{τ} e^{−βs} Xs^{C̄} ds]
 + β E[1_{τ̂>τ} ∫_{τ}^{τ̂} e^{−βs} Xs^{Ĉ} ds] + β E[1_{∆t≥0} ∫_t^{u} e^{−βs} Xs^{C̄} ds]
 + β E[1_{∆t<0} ∫_{t̂}^{u} e^{−βs} Xs^{Ĉ} ds] ≡ I31 + I32 + I33 + I34 + I35,   (17)
where we have used the definition u ≡ max(t, t̂). Concerning I31, we note that on the set
G^{(t̂,x̂)} ≡ {ω | 0 ≤ X_{u+}^{C̄} − X_{u+}^{Ĉ} ≤ |∆t|^{1/3} + |∆x|} the integrand in I31 is o(1). Moreover, we have
P(G^{(t̂,x̂)}) = 1 − o(1), hence I31 = o(1).
For I32 and I33, we find, by Cauchy-Schwarz and the fact that sup_{s∈[0,T]} |Xs^{C}| certainly has a finite
(uniformly in C) L²-norm,

I32, I33 ≤ D||τ − τ̂||_{L²} = o(1).   (18)
Finally, I34 and I35 are obviously o(1). Summarizing, we have

Jn(t, x, C̄) − e^{−βt} x − Jn(t + ∆t, x + ∆x, Ĉ) + e^{−β(t+∆t)} (x + ∆x) = o(1),   (19)

for (∆t, ∆x) → 0. Using (13), this implies

Vn(t, x) − e^{−βt} x − Vn(t̂, x̂) + e^{−βt̂} x̂ ≤ ϵ + o(1),   (20)

which implies the uniform lower-semicontinuity of Wn ≡ Vn(t, x) − e^{−βt} x on [0, T] × ℝ⁺₀.
Concerning the uniform upper-semicontinuity, we argue by contradiction. So assume that there
exists an ϵ0 > 0, s.t. Wn(x⃗m + y⃗m) ≥ Wn(x⃗m) + ϵ0 for all m. Here we have used the notation
x⃗m ≡ (tm, xm) ∈ [0, T] × ℝ⁺₀, as well as y⃗m ≡ (sm, ym) → 0 for m → ∞. Hence, we can find a
sequence of strategies C^m ∈ A, s.t.

Jn(x⃗m + y⃗m, C^m) − e^{−β(tm+sm)} (xm + ym) ≥ Vn(x⃗m) − e^{−βtm} xm + ϵ0/2   (21)

for all m, or

Jn(x⃗m + y⃗m, C^m) − e^{−β(tm+sm)} (xm + ym) ≥ Jn(x⃗m, C) − e^{−βtm} xm + ϵ0/2,   (22)

for all m and all C ∈ A.
We now replace C̄ in the proof of the lower-semicontinuity by our C^m, and (t, x) there by our
x⃗m + y⃗m. In the same way as above, we get an analogue of inequality (19), which is an
obvious contradiction to (22). ⊓⊔
If we replace, in the definition of the approximating target functional Jn (cf. eq. (8)), the function
fn(τ) by f(τ) ≡ γ 1_{τ=T}, the corresponding value function V (t, x) is the value function of our
problem (P1). We now provide the continuity result for V (t, x).
Proposition 3.2 a) The function W(t, x) ≡ V(t, x) − e^{−βt} x is uniformly upper-semicontinuous on
[0, T] × ℝ⁺₀ (hence V(t, x) is upper-semicontinuous).
b) On the set {(t, x) | 0 ≤ t ≤ T, 0 ≤ x} \ {(T, 0)} the value function V(t, x) is continuous and fulfills
the growth condition V(t, x) ≤ D(1 + x) for some model-dependent positive constant D.
Proof. a) By Lemma 3.1b) we know that τ̂ → τ in L¹(P) for (∆t, ∆x) → 0. Since this convergence
also holds weakly, we get by [4], I.4,

P(τ = T) − P(τ̂ = T) ≥ −o(1),   (23)

where we understand o(1) always as a positive quantity. Using this relation, we get instead of (19)
in the proof of Proposition 3.1

J(t, x, C̄) − e^{−βt} x − J(t + ∆t, x + ∆x, Ĉ) + e^{−β(t+∆t)} (x + ∆x) ≥ −o(1).   (24)

The rest of the proof is the same as the proof of the uniform upper-semicontinuity in Proposition
3.1.
b) The continuity works analogously to the proof of Proposition 3.1. Indeed, the only point where
the explicit form of fn enters is in the estimate for |E[fn(τ) − fn(τ̂)]|. But for f(τ) = γ 1_{τ=T}, we
can now use Lemma 3.1a).
Finally, we show the linear growth estimate:

J(t, x, C) = E[∫_t^τ e^{−βs} dCs + e^{−βτ} Xτ^{C} + γ 1_{τ=T}] ≤ V^∞(x) + E[Xτ^{C=0}] + γ
 ≤ V^∞(x) + x + µT + γ ≤ D(1 + x),   (25)

for all C ∈ A. Here we have used the notation V^∞(x) for the value function of our dividend
maximization problem, but on infinite time horizon and without penalty term. It is well known, see
e.g. [1], Th. 3.2, that it satisfies V^∞(x) ≤ D(1 + x). ⊓⊔
4 Dynamic programming principle
In this section we prove a dynamic programming principle (DPP) for our approximating problem
(8), as well as for our original problem (P1). We start with the approximating problem and prove the
result for the original one by a limiting procedure. Using the notation Ψn(τ, Xτ^C) ≡ e^{−βτ} Xτ^{t,x,C} +
fn(τ), we rewrite the target functional Jn in the shorter form

Jn(t, x, C) ≡ E[∫_t^τ e^{−βs} dCs + Ψn(τ, Xτ^C)].   (26)

In the following we use the notation T_{t,T} for stopping times with values in [t, T].
Proposition 4.1 The value function Vn(t, x) for the problem (8) fulfills the (DPP) in the following
form:
a) For all (t, x) ∈ [0, T] × ℝ⁺₀, C ∈ A, and θ ∈ T_{t,T} we have

Vn(t, x) ≥ E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(τ ∧ θ, X_{τ∧θ}^{(t,x)})].

b) For all δ > 0, there exists a C ∈ A, s.t. for all θ ∈ T_{t,T} we find

Vn(t, x) − δ ≤ E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(τ ∧ θ, X_{τ∧θ}^{(t,x)})].
Proof. Step 1: Let C be an arbitrary admissible strategy. Then we have

Jn(t, x, C) = E[∫_t^{τ∧θ} e^{−βs} dCs + ∫_{τ∧θ}^{τ} e^{−βs} dCs + Ψn(τ, Xτ^C)]
 = E[∫_t^{τ∧θ} e^{−βs} dCs + E[∫_{τ∧θ}^{τ} e^{−βs} dCs + Ψn(τ, Xτ^C) | F_{τ∧θ}]].

Since the inner conditional expectation depends on the past actually only via X_{τ∧θ}, this inner
expectation is given by Jn(θ ∧ τ, X_{θ∧τ}^C, C), and we arrive at

Jn(t, x, C) ≤ E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(θ ∧ τ, X_{θ∧τ}^{(t,x)})],

for all admissible strategies C and stopping times θ. Hence, by applying inf_{θ∈T_{t,T}} on the right hand
side, followed by sup_{C∈A} on both sides, we arrive at

Vn(t, x) ≤ sup_{C∈A} inf_{θ∈T_{t,T}} E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(θ ∧ τ, X_{θ∧τ}^{(t,x)})].   (27)
Step 2: Here we shall use the following Lemma, the proof of which we postpone to the Appendix.

Lemma 4.1 Let ϵ > 0 be given. Then there exists a partition Q^{i,j}, (i, j) ∈ I ≡ {1, 2, ..., M} × ℕ₀
of [0, T] × ℝ⁺₀ and a family of admissible strategies C^{i,j}, s.t. Jn(t, x, C^{i,j}) ≥ Vn(t, x) − 3ϵ for all
(t, x) ∈ Q^{i,j}, for all (i, j).

Let now C be an arbitrary admissible strategy and θ ∈ T_{t,T}; then we define

Ĉs ≡ Cs 1_{[t,τ∧θ]}(s) + 1_{]τ∧θ,τ]}(s) ∑_{(i,j)∈I} Cs^{i,j} 1_{(τ∧θ, X_{τ∧θ}^{(t,x,C)}) ∈ Q^{i,j}}.   (28)
Hence,

Vn(t, x) ≥ Jn(t, x, Ĉ) = E[∫_t^{τ∧θ} e^{−βs} dCs + ∫_{τ∧θ}^{τ} e^{−βs} dĈs + Ψn(τ, Xτ^{Ĉ})]
 = E[∫_t^{τ∧θ} e^{−βs} dCs + E[∫_{τ∧θ}^{τ} e^{−βs} dĈs + Ψn(τ, Xτ^{Ĉ}) | F_{τ∧θ}]].   (29)

Since the inner conditional expectation depends on the past actually only via X_{τ∧θ}, this inner
expectation can be estimated from below - using Lemma 4.1 - by Vn(τ ∧ θ, X_{τ∧θ}^{(t,x)}) − 3ϵ. So we get

Vn(t, x) ≥ E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(τ ∧ θ, X_{τ∧θ}^{(t,x)})] − 3ϵ.
Since θ, C and ϵ were arbitrary, we conclude

Vn(t, x) ≥ sup_{C∈A} sup_{θ∈T_{t,T}} E[∫_t^{τ∧θ} e^{−βs} dCs + Vn(θ ∧ τ, X_{θ∧τ}^{(t,x)})].   (30)

Combining (27) and (30), we get, using Remark 3.3.3 of the monograph [23], our result. ⊓⊔
We now replace the functions fn, which we have used to define our approximating target functional,
by

f(s) = 0 for s ∈ [0, T[,   f(T) = γ,

and demand that fn(s) → f(s) pointwise, in a monotone decreasing way. The next Lemma shows
that the Jn really are approximating target functionals.
Lemma 4.2 We have pointwise

Jn(t, x, C) → J(t, x, C) ≡ E[∫_t^τ e^{−βs} dCs + Ψ(τ, Xτ^C)],

for all admissible C, where Ψ(τ, Xτ^C) ≡ e^{−βτ} Xτ^{t,x,C} + f(τ). Moreover, one finds

lim_{n→∞} Vn(t, x) ≥ V(t, x).

Proof. By monotone convergence, we have Jn(t, x, C) → J(t, x, C) pointwise, for all admissible
C. Since Vn(t, x) is monotonically decreasing and bounded below, we get existence of the pointwise
limit lim_{n→∞} Vn(t, x) ≡ V̄(t, x). Clearly one has Vn(t, x) ≥ V(t, x) for all n and (t, x), hence
V̄(t, x) ≥ V(t, x) for all (t, x). ⊓⊔
Using this approximation result, we shall show in the next proposition that, for the full control
problem (P1), we have a (DPP) again.

Proposition 4.2 For the target functional J(t, x, C) ≡ E[∫_t^τ e^{−βs} dCs + Ψ(τ, Xτ^C)], the (DPP)
holds in the same form as it is formulated in Proposition 4.1.

Proof. Step 1: We first note that the admissible strategies A are the same for the target functionals
Jn and J. So let C ∈ A, and let θ be an arbitrary stopping time in T_{t,T}. In the proof of Proposition
4.1, step 1, we found

Jn(t, x, C) = E[∫_t^{τ∧θ} e^{−βs} dCs + Jn(θ ∧ τ, X_{θ∧τ}^C, C)].

Going to the limit n → ∞, using Lemma 4.2 and the obvious fact J(s, x, C) ≤ V(s, x), this gives

J(t, x, C) ≤ E[∫_t^{τ∧θ} e^{−βs} dCs + V(θ ∧ τ, X_{θ∧τ}^{(t,x)})].   (31)

Exactly as in the proof of Proposition 4.1, step 1, we find

V(t, x) ≤ sup_{C∈A} inf_{θ∈T_{t,T}} E[∫_t^{τ∧θ} e^{−βs} dCs + V(θ ∧ τ, X_{θ∧τ}^{(t,x)})].   (32)
Step 2: We now formulate a Lemma analogous to the one for the approximating problem. Again, we
defer its proof to the Appendix.

Lemma 4.3 Let ϵ > 0 be given. Then there exists a partition Q^{i,j}, (i, j) ∈ I ≡ {1, 2, ..., M} × ℕ₀
of [0, T] × ℝ⁺₀ and a family of admissible strategies C^{i,j}, s.t. J(t, x, C^{i,j}) ≥ V(t, x) − 3ϵ, for all
(t, x) ∈ Q^{i,j} and all (i, j).
We define Ĉ as in the proof of Proposition 4.1 and get, in the same way as there,

J(t, x, Ĉ) ≥ E[∫_t^{τ∧θ} e^{−βs} dCs + V(τ ∧ θ, X_{τ∧θ}^{(t,x)})] − 3ϵ.

Using J(s, x, Ĉ) ≤ V(s, x), we conclude

V(t, x) ≥ E[∫_t^{τ∧θ} e^{−βs} dCs + V(τ ∧ θ, X_{τ∧θ}^{(t,x)})] − 3ϵ,   (33)

for all admissible C and arbitrary ϵ and θ. Having proved the inequalities (32) and (33), the rest of
the proof follows exactly as in the proof of Proposition 4.1. ⊓⊔
5 Viscosity solution property of the value function

The HJB-equation in its variational form for our problem (P1) is given by

min{−LV, −e^{−βt} + Vx} = 0,   (34)

where LV ≡ Vt + µVx + (σ²/2)Vxx. We shall prove in this section that our value function is the
unique viscosity solution of this equation. We start with the viscosity supersolution property. The
proof of the following proposition is in principle analogous to [23], Proposition 4.3.1, but since the
proof there uses the continuity of the state process in an essential way, we have to do a bit of
extra work. For the convenience of the reader we shall give a full proof.
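Heuristically (our own sketch, assuming for the moment that V is smooth; the viscosity framework below makes this precise), the two branches of (34) come from comparing the two elementary actions in the DPP:

```latex
\begin{aligned}
&\text{wait, consuming nothing on } [t,t+h]:\quad
 V(t,x) \;\ge\; \mathbb{E}\,V\bigl(t+h,\,X_{t+h}\bigr)
 \;=\; V(t,x) + \mathcal{L}V(t,x)\,h + o(h)
 \;\;\Longrightarrow\;\; -\mathcal{L}V \ge 0,\\
&\text{consume a lump sum } \delta>0:\quad
 V(t,x) \;\ge\; e^{-\beta t}\delta + V(t,x-\delta)
 \;\;\Longrightarrow\;\; -e^{-\beta t} + V_x \ge 0,
\end{aligned}
```

and at an optimum one of the two actions is locally optimal, so one of the two inequalities binds, which is exactly min{−LV, −e^{−βt} + Vx} = 0.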
Proposition 5.1 The value function V(t, x) is a viscosity supersolution of
min{−LV, −e^{−βt} + Vx} = 0.

Proof. Let V∗(t, x) be the lower-semicontinuous envelope of V (see e.g. [23], section 4.2 for a
definition of this concept). Let further (t̄, x̄) ∈ Q be a minimizer of V∗ − ϕ with

V∗(t̄, x̄) = ϕ(t̄, x̄),   (35)

where ϕ ∈ C^{1,2}(Q). We have to show that

min{−Lϕ(t̄, x̄), −e^{−βt̄} + ϕx(t̄, x̄)} ≥ 0.   (36)

Let now (tm, xm) → (t̄, x̄) for m → ∞, with V(tm, xm) → V∗(t̄, x̄). Moreover, let

γm ≡ V(tm, xm) − ϕ(tm, xm) → 0,   (37)

for m → ∞.
We now choose θm ≡ tm + hm, with a strictly positive sequence hm, s.t. hm → 0 and γm/hm → 0 if
m → ∞. Furthermore, we choose Cs ≡ C_{tm}, s ≥ tm, in the (DPP) of Proposition 4.2a) and get

V(tm, xm) ≥ E[V(τm ∧ θm, X_{τm∧θm}^{(tm,xm)})],

where τm is the corresponding ruin time. By (35), we have V ≥ V∗ ≥ ϕ, which implies, using (37)
and Itô's Lemma,

ϕ(tm, xm) + γm ≥ ϕ(tm, xm) + E[∫_{tm}^{τm∧θm} Lϕ(s, Xs^{(tm,xm)}) ds].

Cancelling ϕ(tm, xm) and dividing by hm gives

γm/hm ≥ E[(1/hm) ∫_{tm}^{τm∧θm} Lϕ(s, Xs^{(tm,xm)}) ds].
Since (t̄, x̄) lies in the open set Q, and since Xs, s ≥ tm, is now just Brownian motion with drift, we
can conclude that, for m large enough, i.e. m ≥ M(ω), τm ∧ θm = θm = tm + hm holds. Going to the
limit in the above inequality implies - using dominated convergence and the fact that ϕ is smooth
enough -

0 ≥ Lϕ(t̄, x̄).   (38)

In the second step of our proof we choose the strategy

Cs ≡ C_{tm} for s ≤ tm,   Cs ≡ C_{tm} + δ for s > tm,

for some δ with 0 < δ < xm, for all m. The (DPP) Proposition 4.2a) yields

V(tm, xm) ≥ E[V(τm ∧ θm, X_{τm∧θm}^{(tm,xm)})] + e^{−βtm} δ.

(37), V ≥ V∗ ≥ ϕ and dominated convergence imply, after the limit m → ∞,

ϕ(t̄, x̄) ≥ ϕ(t̄, x̄ − δ) + e^{−βt̄} δ.

Dividing by δ gives, after δ → 0,

ϕx(t̄, x̄) ≥ e^{−βt̄}.   (39)

Together with (38), this implies (36). ⊓⊔
Our next proposition shows that the value function is a viscosity subsolution too.

Proposition 5.2 The value function V(t, x) is a viscosity subsolution of min{−LV, −e^{−βt} + Vx} = 0.

Proof. Since, by Proposition 3.2, the value function V(t, x) is upper-semicontinuous, we have
V = V^∗. Now, let (t̄, x̄) ∈ Q be a maximizer of V − ϕ with

V(t̄, x̄) = ϕ(t̄, x̄),

where ϕ ∈ C^{1,2}(Q). We have to show that

min{−Lϕ(t̄, x̄), −e^{−βt̄} + ϕx(t̄, x̄)} ≤ 0.   (40)

Let (tm, xm) → (t̄, x̄) for m → ∞, with V(tm, xm) → V(t̄, x̄). Moreover, let

γm ≡ V(tm, xm) − ϕ(tm, xm) → 0,   (41)

for m → ∞.
We argue by contradiction. So assume that we have

−Lϕ(t̄, x̄) > 0,   −e^{−βt̄} + ϕx(t̄, x̄) > 0.   (42)

By continuity, this implies the existence of ϵ0, η > 0 with

−Lϕ(t, x) > ϵ0 > 0,   −e^{−βt} + ϕx(t, x) > ϵ0 > 0,   (43)

on a sphere K((t̄, x̄); η) ⊂ Q with radius η.
Let θm be the exit time of X^{(tm,xm)} from Km ≡ K((tm, xm); η/2) ⊂ K((t̄, x̄); η), where the inclusion
holds for m large enough. For h > 0, θm + h is a stopping time, and we apply our (DPP) Proposition
4.2b) for this stopping time. Hence, for all δ > 0, there exists an admissible strategy C^m, s.t.

V(tm, xm) − δ ≤ E[∫_{tm}^{τ∧(θm+h)} e^{−βs} dCs^m + V(τ ∧ (θm + h), X_{τ∧(θm+h)}^{(tm,xm)})].   (44)

Let now λm ≡ [X_{θm+}^{(tm,xm)}, X_{θm}^{(tm,xm)}] ∩ ∂Km, where ∂Km denotes the boundary of Km. Using this
definition, we have

V(τ ∧ (θm + h), X_{τ∧(θm+h)}^{(tm,xm)}) − ϕ(θm, X_{θm}^{(tm,xm)})
 = V(τ ∧ (θm + h), X_{τ∧(θm+h)}^{(tm,xm)}) − V(θm, λm) + V(θm, λm) − ϕ(θm, X_{θm}^{(tm,xm)}).
Hence,

lim sup_{h→0} V(τ ∧ (θm + h), X_{τ∧(θm+h)}^{(tm,xm)}) − ϕ(θm, X_{θm}^{(tm,xm)})
 ≤ V(θm, X_{θm+}^{(tm,xm)}) − V(θm, λm) + V(θm, λm) − ϕ(θm, X_{θm}^{(tm,xm)})
 ≤ −e^{−βθm} (λm − X_{θm+}^{(tm,xm)}) + ϕ(θm, λm) − ϕ(θm, X_{θm}^{(tm,xm)})
 ≤ −e^{−βθm} (X_{θm}^{(tm,xm)} − X_{θm+}^{(tm,xm)}) − ϵ0 (X_{θm}^{(tm,xm)} − λm).   (45)

For the first inequality we have used the upper-semicontinuity of V (cf. Proposition 3.2). The
argument for the second one is exactly as in [11], VIII, Lemma 3.1 (moreover, we use ϕ ≥ V to
estimate the third term). Finally, for the third inequality we use

ϕ(θm, λm) − ϕ(θm, X_{θm}^{(tm,xm)}) ≤ ϕx(θm, X̂_{θm}) (λm − X_{θm}^{(tm,xm)}) ≤ −(e^{−βθm} + ϵ0)(X_{θm}^{(tm,xm)} − λm),

with X̂_{θm} ∈ [λm, X_{θm}], employing (43). For notational convenience we now suppress the upper index
(tm, xm) of the state process. Itô's formula implies
ϕ(θm, X_{θm}) = ϕ(tm, xm) + ∫_{tm}^{θm} Lϕ(s, Xs) ds − ∫_{tm}^{θm} ϕx(s, Xs) dCs^{m,c} + ∑_{s∈J_X, tm≤s<θm} (ϕ(s, Xs+) − ϕ(s, Xs))
 ≤ ϕ(tm, xm) − ϵ0 (θm − tm) − ∫_{tm}^{θm} e^{−βs} dCs^{m,c} − ϵ0 ∫_{tm}^{θm} dCs^{m,c} + ∑_{s∈J_X, tm≤s<θm} ϕx(s, X̃s) ∆Xs
 ≤ ϕ(tm, xm) − ϵ0 (θm − tm) − ∫_{tm}^{θm} e^{−βs} dCs^{m,c} − ϵ0 ∫_{tm}^{θm} dCs^{m,c} + ∑_{s∈J_X, tm≤s<θm} (e^{−βs} + ϵ0) ∆Xs
 = ϕ(tm, xm) − ϵ0 (θm − tm) − ∫_{tm}^{θm} e^{−βs} dCs^m − ϵ0 ∫_{tm}^{θm} dCs^m.   (46)

Here we have used (43), the notation Cs^{m,c} for the continuous part of Cs^m, J_X for the jump times of
X, and X̃s ∈ [Xs+, Xs]. Combining (45) and (46) gives
lim sup_{h→0} V(τ ∧ (θm + h), X_{τ∧(θm+h)}) − ϕ(tm, xm)
 ≤ −ϵ0 (X_{θm} − λm) − ϵ0 (θm − tm) − ∫_{[tm,θm]} e^{−βs} dCs^m − ϵ0 ∫_{tm}^{θm} dCs^m,   (47)
where the last but one integral includes a possible jump at θm too. Applying lim sup_{h→0} in (44)
gives

−δ ≤ E[∫_{[tm,θm]} e^{−βs} dCs^m + lim sup_{h→0} V(τ ∧ (θm + h), X_{τ∧(θm+h)})] − V(tm, xm).

Using (47) and taking the expected value, we end up with

−δ ≤ −ϵ0 E[X_{θm} − λm] − ϵ0 E[θm − tm] − ϵ0 E[∫_{tm}^{θm} dCs^m] − γm.   (48)
We now define the stopping time θ̂m ≡ inf{s > tm |Xs ∈
/ K̂m ≡ K((tm , xm ); η/4) ⊂ Km } and the set
Am ≡ {ω|θm = θ̂m }, which means that, if we leave K̂m on Am , then we leave also Km . Obviously,
on the set Am we have |Xθm − λm | ≥ η/4. We distinguish now several cases:
Case 1: P
I (Am ) ≥ 1/2.
In this case we can give the following upper estimate for the r.h.s. of (48): − ϵ08η − γm .
Case 2: $P(A_m^c) \ge 1/2$.
Set now $B_m \equiv \{\omega \,|\, \int_{t_m}^{\theta_m} dC_s^m \ge \eta/10\}$ and distinguish two subcases:
Case 2.1: $P(A_m^c \cap B_m) \ge 1/4$.
This is the case where we leave only the smaller sphere, not the larger one, and where the integral term on the right hand side of (48) gives a significant contribution. We find $-\frac{\epsilon_0\eta}{40} - \gamma_m$ as an upper estimate for the r.h.s. of (48).
Case 2.2: $P(A_m^c \cap B_m^c) \ge 1/4$.
We define $\tilde\theta_m \equiv \inf\{s > t_m \,|\, x_m + \mu(s-t_m) + \sigma(W_s - W_{t_m}) \notin \hat K_m\}$ and observe that on $B_m^c$ we have $\tilde\theta_m \le \theta_m$. Moreover, one has $\mathbb{E}[\tilde\theta_m - t_m] \ge D$ for some generic constant D, depending on η, µ and σ. Hence, $\mathbb{E}[\theta_m - t_m] \ge D$, and for the upper bound for the r.h.s. of (48) we find $-\epsilon_0 D - \gamma_m$.
Therefore, we have in all cases
$$\text{r.h.s. of (48)} \le -\epsilon_0 D - \gamma_m, \qquad (49)$$
for all m. Hence, by (48), $-\delta \le -\epsilon_0 D - \gamma_m$ for all m, which is a contradiction for m large and δ small enough. □
Propositions 5.1 and 5.2 imply
Theorem 5.1 The value function V (t, x) is a viscosity solution of min{−LV, −e−βt + Vx } = 0.
Once one knows that the value function is a viscosity solution of the HJB equation, it is important to know whether this solution is unique. We clarify this in the rest of the section. The proof follows [23], but since our value function V(t,x) is discontinuous at (T,0), some modifications are necessary. We start with a comparison principle.
Proposition 5.3 Let u be a viscosity subsolution of $\min\{-LV, -e^{-\beta t} + V_x\} = 0$, which fulfills
$$\begin{aligned}
u(T,x) &= \gamma + e^{-\beta T}x, &&\gamma>0,\ x\ge 0,\\
u(t,0) &= 0, && 0\le t<T,\\
0 \le u &\le D(1+x), && x\ge 0,\\
u &\ \text{is continuous on } \bar Q\setminus\{(T,0)\},
\end{aligned} \qquad (50)$$
with a positive constant D. If v is a supersolution fulfilling the same conditions (50), we have u ≤ v on $\bar Q$.
Proof. We first transform the HJB equation into an equivalent one, using $V = e^{\lambda_1 t + \lambda_2 x}\tilde V$ for some real $\lambda_i$. This yields (replacing now, for convenience, $\tilde V$ again by V)
$$\min\{-\tilde L V,\ -e^{-\nu_2 t - \nu_3 x} + V_x + \nu_3 V\} = 0, \qquad (51)$$
with $\tilde L V \equiv \nu_1 V + V_t + \tilde\mu V_x + \frac{\sigma^2}{2}V_{xx}$, $\nu_1 \equiv \lambda_1 + \lambda_2\mu + \frac{\sigma^2}{2}\lambda_2^2$, $\tilde\mu \equiv \mu + \sigma^2\lambda_2$, $\nu_2 \equiv \beta + \lambda_1$, $\nu_3 \equiv \lambda_2$. In the sequel we shall choose a $\lambda_2 > 0$, arbitrarily large, and $\lambda_1 \equiv -\lambda_2^3$. All together, we have the following choice of parameters
$$\begin{aligned}
\nu_1 &< 0 &&\text{and of the magnitude } -\lambda_2^3,\\
\nu_2 &< 0 &&\text{and of the magnitude } -\lambda_2^3,\\
\nu_3 &> 0 &&\text{and of the magnitude } \lambda_2,\\
\tilde\mu &> 0 &&\text{and of the magnitude } \lambda_2.
\end{aligned} \qquad (52)$$
Our proof will be by contradiction. So assume that
$$K \equiv \sup_{(t,x)\in[0,T]\times\mathbb{R}_0^+}(u-v) = \sup_{(t,x)\in[0,T[\times O}(u-v) > 0, \qquad (53)$$
for some open bounded set O in $\mathbb{R}_0^+$. Here we have used the method of penalized supersolutions (u and v grow only linearly!) for the last equality above (see e.g. [23], Th. 4.4.3, step 2 and Th. 4.4.4, step 1).
Let now $w \equiv 10K e^{-\nu_3 x-\nu_3(T-t)}$ with $\nu_3 > 0$ from above, and let U be the small open rectangle at the corner (T,0), i.e. $U \equiv \{(t,x)\,|\,T - \frac{1}{\nu_3} < t < T,\ 0 < x < \frac{1}{\nu_3}\}$. We want to show that
$$u - v - w \le 0 \qquad (54)$$
holds on $\bar Q\setminus U$. To do this, we first show that v + w is a supersolution, too. Indeed, we get $-\tilde L w = (-\nu_1 - \nu_3 + \tilde\mu\nu_3 - \frac{\sigma^2}{2}\nu_3^2)w > 0$. Moreover, one has $w_x + \nu_3 w = 0$, and both together imply
$$\min\{-\tilde L(v+w),\ -e^{-\nu_2 t - \nu_3 x} + (v+w)_x + \nu_3(v+w)\} \ge 0, \qquad (55)$$
hence the asserted supersolution property. One can explicitly check that we have $(u - v - w) \le 0$ on the set $\partial(\bar Q\setminus U)$.
We now show (54) by contradiction. So assume
$$0 < M \equiv \sup_{\bar Q\setminus U}(u-v-w) = \max_{(\bar Q\setminus U)\cap L}(u-v-w) = \max_{((\bar Q\setminus U)\cap L)^o}(u-v-w), \qquad (56)$$
where $A^o$ denotes the interior of the set A, and where L is the compact set given by $L \equiv [0,T]\times[0,z]$, for some z > 0. (Here we have again used the method of penalized supersolutions.)
We now apply the variable-doubling technique and define $\Phi_\epsilon(t,s,x,y) \equiv u(t,x)-v(s,y)-w(s,y)-\phi_\epsilon(t,s,x,y)$, with $\phi_\epsilon(t,s,x,y) \equiv \frac{(t-s)^2+(x-y)^2}{2\epsilon}$. Since $\Phi_\epsilon$ is continuous on $((\bar Q\setminus U)\cap L)\times((\bar Q\setminus U)\cap L)$, let us denote its maximum by $M_\epsilon$ and its maximizer by $(t_\epsilon, s_\epsilon, x_\epsilon, y_\epsilon)$. By passing to subsequences, we can assume that $(t_\epsilon, s_\epsilon, x_\epsilon, y_\epsilon) \to (\bar t, \bar s, \bar x, \bar y)$ holds, for $\epsilon \to 0$. As in [23], Theorem 4.4.4, we infer that
$$\bar t = \bar s,\quad \bar x = \bar y,\quad \phi_\epsilon(t_\epsilon,s_\epsilon,x_\epsilon,y_\epsilon) \to 0,\quad M_\epsilon \to M,\quad (u-v-w)(\bar t,\bar x) = M, \qquad (57)$$
with $(\bar t,\bar x) \in ((\bar Q\setminus U)\cap L)^o$, because of (56). (Note that the form of the HJB equation doesn't enter into this argument!)
Now, Ishii's Lemma in the form of [23], Remark 4.4.9, gives the existence of real numbers K, N, s.t.
$$\Big(\tfrac{1}{\epsilon}(t_\epsilon - s_\epsilon),\ \tfrac{1}{\epsilon}(x_\epsilon - y_\epsilon),\ K\Big) \in \mathcal{P}^{2,+} u(t_\epsilon, x_\epsilon),\qquad
\Big(\tfrac{1}{\epsilon}(t_\epsilon - s_\epsilon),\ \tfrac{1}{\epsilon}(x_\epsilon - y_\epsilon),\ N\Big) \in \mathcal{P}^{2,-} (v+w)(s_\epsilon, y_\epsilon), \qquad (58)$$
with $c^2 K - d^2 N \le \frac{3}{\epsilon}(c-d)^2$ for any real numbers c, d. Hence, by setting c = d, we get
$$K \le N. \qquad (59)$$
The superjet-, resp. subjet-characterization of sub-, resp. supersolutions provides
$$\min\Big\{-\nu_1 u(t_\epsilon,x_\epsilon) - \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) - \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) - \frac{\sigma^2}{2}K,\ -e^{-\nu_2 t_\epsilon-\nu_3 x_\epsilon} + \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) + \nu_3 u(t_\epsilon,x_\epsilon)\Big\} \le 0,$$
$$\min\Big\{-\nu_1 (v+w)(s_\epsilon,y_\epsilon) - \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) - \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) - \frac{\sigma^2}{2}N,\ -e^{-\nu_2 s_\epsilon-\nu_3 y_\epsilon} + \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) + \nu_3 (v+w)(s_\epsilon,y_\epsilon)\Big\} \ge 0. \qquad (60)$$
Now, (60) is equivalent to
$$\max\Big\{\nu_1 u(t_\epsilon,x_\epsilon) + \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) + \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) + \frac{\sigma^2}{2}K,\ e^{-\nu_2 t_\epsilon-\nu_3 x_\epsilon} - \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) - \nu_3 u(t_\epsilon,x_\epsilon)\Big\} \ge 0,$$
$$\max\Big\{\nu_1 (v+w)(s_\epsilon,y_\epsilon) + \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) + \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) + \frac{\sigma^2}{2}N,\ e^{-\nu_2 s_\epsilon-\nu_3 y_\epsilon} - \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) - \nu_3 (v+w)(s_\epsilon,y_\epsilon)\Big\} \le 0, \qquad (61)$$
and (61)₂ implies
$$\nu_1 (v+w)(s_\epsilon,y_\epsilon) + \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) + \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) + \frac{\sigma^2}{2}N \le 0,\qquad
e^{-\nu_2 s_\epsilon-\nu_3 y_\epsilon} - \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) - \nu_3 (v+w)(s_\epsilon,y_\epsilon) \le 0. \qquad (62)$$
We distinguish two cases:
Case 1: $\nu_1 u(t_\epsilon,x_\epsilon) + \frac{1}{\epsilon}(t_\epsilon-s_\epsilon) + \frac{\tilde\mu}{\epsilon}(x_\epsilon-y_\epsilon) + \frac{\sigma^2}{2}K \ge 0$ for some subsequence, again denoted by ϵ, tending to zero.
The assumption of Case 1 and (62)₁ imply $\nu_1\big(u(t_\epsilon,x_\epsilon) - (v+w)(s_\epsilon,y_\epsilon)\big) + \frac{\sigma^2}{2}(K-N) \ge 0$, from which one gets, after going to the limit $\epsilon\to 0$, $\nu_1 M + \frac{\sigma^2}{2}(K-N) \ge 0$. By (52), (56) and (59), this is a contradiction.
Case 2: $e^{-\nu_2 t_\epsilon-\nu_3 x_\epsilon} - \frac{1}{\epsilon}(x_\epsilon-y_\epsilon) - \nu_3 u(t_\epsilon,x_\epsilon) \ge 0$ for some subsequence, again denoted by ϵ, tending to zero.
The assumption of Case 2 and (62)₂ imply $-\nu_3\big(u(t_\epsilon,x_\epsilon)-(v+w)(s_\epsilon,y_\epsilon)\big) + e^{-\nu_2 t_\epsilon-\nu_3 x_\epsilon} - e^{-\nu_2 s_\epsilon-\nu_3 y_\epsilon} \ge 0$, and after going to the limit, $-\nu_3 M \ge 0$, which is, by (52) and (56), a contradiction.
Hence, our assumption of a positive supremum of (u − v − w) on $\bar Q\setminus U$ is wrong. Therefore, we have $u - v - w \le 0$ on $\bar Q\setminus U$, and also on $\bar Q\setminus V$, if we define $V \equiv \{(t,x)\,|\,T - \frac{1}{\sqrt{\nu_3}} < t < T,\ 0 < x < \frac{1}{\sqrt{\nu_3}}\}$ (remember that $\nu_3$ is large!). Since w can be made arbitrarily small on $\bar Q\setminus V$, and as the same holds for V, we conclude
$$u - v \le 0 \quad\text{on } \bar Q\setminus\{(t,0)\cup(T,x),\ t\in[0,T],\ x\in\mathbb{R}_0^+\}.$$
Because we have assumed u = v on $\{(t,0)\cup(T,x),\ t\in[0,T],\ x\in\mathbb{R}_0^+\}$, we finally get u ≤ v on $\bar Q$. □
As usual, the comparison result implies the following uniqueness assertion.
Theorem 5.2 There is only one viscosity solution of $\min\{-LV, -e^{-\beta t} + V_x\} = 0$ fulfilling the conditions (50), and this solution is the value function of problem (P₁).
Remark 5.1 Let us note that this theorem holds unaltered, if we replace the term $X_\tau$ in the target functional (P₁) by $K(X_\tau)$, for some function $K(x) \in C^2([0,\infty))$ with K(0) = 0 and fulfilling a linear growth condition. (Clearly, we have to replace the r.h.s. of (50)₁ by $\gamma + e^{-\beta T}K(x)$.)
6 Construction of an approximating value function and an approximation of the optimal strategy
Similarly to the case γ = 0 (see [13]), we construct here an approximating value function for our problem (P₁), and then pass to the limit in the following section.
6.1 Derivation of the basic integral equation and a verification theorem
We start with the definition of an approximation for the expression in the bracket of the terminal condition $V(T,x) = e^{-\beta T}(\bar\gamma + x)$, which we call $H^{(\epsilon)}(x)$. For the new terminal condition we are able to solve our problem rather explicitly, using the method of smooth fit and the Green's function methodology. In the sequel we will actually solve a moving boundary problem not for V, but for a transformed function v, which, in the "no action" region, fulfills the backward heat equation instead of LV = 0. Moreover, we will do this in a region which is symmetric w.r.t. x = 0. Hence, we have to define the terminal condition also on some interval of the form [−ϵ, 0].
We formulate the conditions $H^{(\epsilon)}(x)$ should obey.
Standing Assumption For ϵ small enough, say ϵ ≤ ϵ₀, the approximating final gain function $H^{(\epsilon)}(x)$ fulfills the following assumptions (we shall write H instead of $H^{(\epsilon)}$ in the sequel to simplify notation!):
(i) $H(x) \in C^4([-\epsilon,\epsilon]\setminus\{0\}) \cap C^2([-\epsilon,\infty)\setminus\{0\})$ with existing $H^{(IV)}(0\pm)$,
(ii) $H(0) = H(0+) = \bar\gamma$, $H(0-) = -\bar\gamma$, $H(\epsilon) = \epsilon + H_0 + \bar\gamma$,
(iii) $H'(0+) = H_1$, $H'(0-) = H'(0+) + 2\hat\mu\bar\gamma$, $H'(\epsilon) = 1$,
(iv) $H''(0+) = -2\hat\mu H_1 - \hat\mu^2\bar\gamma$, $H''(0-) = H''(0+) - 2\hat\mu^2\bar\gamma$, $H''(\epsilon) = 0$,
(v) $H'''(0+) = 3\hat\mu^2 H_1 + H_3$, $H'''(0-) = H'''(0+) + 2\hat\mu^3\bar\gamma$, $H'''(\epsilon-) = 2\hat\beta$,
(vi) $H^{(IV)}(0+) = -4\hat\mu H_3 - 4\hat\mu^3 H_1 + 5\hat\mu^4\bar\gamma$, $H^{(IV)}(0-) = H^{(IV)}(0+) - 7\hat\mu^4\bar\gamma$, $H^{(IV)}(\epsilon-) = \frac{4\hat\beta}{\sigma^2}H_4 - 4\hat\mu\hat\beta$,
(vii) $H(-x) = -H(x)e^{2\hat\mu x}$, for $x \in\,]0,\epsilon]$,
(viii) $H'(x) \ge 0$, $H''(x) \le 0$, for $x\in[-\epsilon,\epsilon]$,
(ix) $H(x) = x + H_0 + \bar\gamma$, for $x \ge \epsilon$,
(x) $2\hat\mu H''(x) + H'''(x) > 0$, for $x\in[-\epsilon,\epsilon]$.
Here H₀ and H₁ denote constants depending only on ϵ, with the asymptotic behavior $\lim_{\epsilon\to 0}\frac{H_0(\epsilon)}{\epsilon^2} = \frac{2}{7}\hat\mu + \frac{2\hat\mu^2\bar\gamma}{7}$, respectively $\lim_{\epsilon\to 0}\frac{H_1(\epsilon)-1}{\epsilon} = \hat\mu + \frac{\hat\mu^2\bar\gamma}{2}$. H₄ can be chosen arbitrarily and independently of ϵ, and for H₃ we choose $H_3 = 6\hat\mu^2 + 5\hat\mu^3\bar\gamma$. Moreover, we use the notation $\hat\mu \equiv \frac{\mu}{\sigma^2}$ and $\hat\beta \equiv \frac{\beta}{\sigma^2}$.
Remarks. (i) In the sequel we shall subtract the discontinuity of the value function at (t,x) = (T,0). The conditions (i)-(vi) will guarantee a regularized value function which is smooth enough to apply our machinery. Condition (vii) is necessary to guarantee an odd function v(t,x). Finally, the conditions (viii)-(x) are basically used to prove that the solution of a certain free boundary value problem (FBVP) also satisfies the proper inequalities for our HJB equation.
(ii) Note that the inequalities of (viii) and (x) above hold for x = 0 as well, if we choose the right-, resp. left-handed limits given in the points before.
The existence of such a function H is assured by the following
Lemma 6.1 For ϵ small enough, say ϵ ≤ ϵ₀, there exists a function H(x) fulfilling the conditions of the Standing Assumption.
For a proof we refer to the Appendix.
Remark. Note that in the rest of this section we fix some ϵ ≤ ϵ₀, and we denote $V(t,x) \equiv \sup_{\mathcal A}\mathbb{E}\big[\int_t^\tau e^{-\beta s}\,dC_s + e^{-\beta\tau}H(X_\tau)\mathbb{1}_{\{\tau=T\}}\big]$.
Before we proceed, we introduce some notation.
Definition 6.1
$$\lambda \equiv \beta + \frac{\mu^2}{2\sigma^2},\qquad \hat\lambda \equiv \frac{\lambda}{\sigma^2},\qquad d_1 \equiv 2\hat\mu\hat\beta + 2\hat\mu^3,\qquad \tilde\gamma \equiv e^{-\lambda T}\bar\gamma,$$
$$d(t) \equiv e^{\beta t}\int_t^T \big(\mu e^{-\beta s} - \beta e^{-\beta s}b(s)\big)\,ds + (H_0+\bar\gamma)e^{\beta(t-T)},$$
$$F(t,x) = \frac{\tilde\gamma}{\sqrt{2\pi(T-t)}\,\sigma}\int_{-\infty}^{\infty}\mathrm{sgn}(\xi)\,e^{-\frac{(x-\xi)^2}{2\sigma^2(T-t)}}\,d\xi,$$
$$\tilde B(t) \equiv e^{\hat\mu b(t)-\lambda t}\big(2\hat\beta + 3\hat\mu^2 + \hat\mu^3(b(t)+d(t))\big) - F_{xxx}(t,b(t)),$$
$$\tilde D(t) \equiv e^{\hat\mu b(t)-\lambda t}\Big(4\hat\mu\hat\beta + 4\hat\mu^3 + \hat\mu^4(b(t)+d(t)) + \frac{4\hat\beta}{\sigma^2}b'(t)\Big) - F_{xxxx}(t,b(t)),$$
$$K(x,T-t;\xi,s) \equiv \frac{1}{\sqrt{2\pi\sigma^2(T-t-s)}}\,e^{-\frac{(x-\xi)^2}{2\sigma^2(T-t-s)}},$$
$$h(x) \equiv e^{\hat\mu x-\lambda T}H(x),\qquad \tilde h(x) = h(x) - \tilde\gamma\,\mathrm{sgn}(x),$$
$$I \equiv \{(t,x)\,|\,0\le t\le T,\ 0\le x\le b(t)\},\qquad II \equiv \{(t,x)\,|\,0\le t\le T,\ x\ge b(t)\},$$
$$G_1 \equiv \{(t,x)\,|\,-b(t)<x<b(t),\ 0<t<T\},\qquad G_2 \equiv \{(t,x)\,|\,-b(t)<x,\ 0<t<T\},$$
and $\bar G_i$ denote their closures.
A few explanations are in order: We adopt the convention sgn(x) = 1 for x ≥ 0, sgn(x) = −1 for x < 0. F(t,x) is the solution of the Cauchy problem
$$F_t(t,x) + \frac{\sigma^2}{2}F_{xx}(t,x) = 0,\qquad F(T,x) = \tilde\gamma\,\mathrm{sgn}(x).$$
The value function in the region x ≥ b(t) will be given by $V(t,x) = e^{-\beta t}(x + d(t))$. Therefore, d(t) can be interpreted as the gain of behaving optimally instead of paying out everything immediately. $\tilde B(t)$ and $\tilde D(t)$ will turn out to be the values of derivatives of a transformed value function at the free boundary b(t) (more details see below). K(·,·;·,·) is the well known Green's function of the backward heat equation. Finally, h is the terminal value of the transformed value function v(t,x), which will solve the backward heat equation. The next Theorem 6.1 characterizes our value function.
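Since the terminal datum $\tilde\gamma\,\mathrm{sgn}(x)$ is just a scaled step function, the Gaussian integral defining F can be evaluated in closed form as $F(t,x) = \tilde\gamma\,\mathrm{erf}\big(x/(\sigma\sqrt{2(T-t)})\big)$. The following sketch (the parameter values for T, σ and γ̃ are made up for illustration) checks this against direct quadrature of the defining integral and verifies numerically that F solves the backward heat equation $F_t + \frac{\sigma^2}{2}F_{xx} = 0$:

```python
import math

T, sigma, g = 1.0, 0.3, 0.5          # illustrative values for T, sigma, gamma_tilde

def F(t, x):
    """Closed form: gamma_tilde * erf(x / (sigma * sqrt(2 (T - t))))."""
    return g * math.erf(x / (sigma * math.sqrt(2.0 * (T - t))))

def F_quad(t, x, n=40000, width=8.0):
    """Trapezoidal quadrature of the defining integral, truncated at +/- width std devs."""
    s = sigma * math.sqrt(T - t)
    lo, hi = x - width * s, x + width * s
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        xi = lo + i * h
        sgn = 1.0 if xi >= 0 else -1.0          # convention sgn(0) = 1
        w = 0.5 if i in (0, n) else 1.0         # trapezoidal weights
        total += w * sgn * math.exp(-(x - xi) ** 2 / (2.0 * s * s))
    return g * h * total / (math.sqrt(2.0 * math.pi * (T - t)) * sigma)

# closed form agrees with the defining integral
assert abs(F(0.5, 0.2) - F_quad(0.5, 0.2)) < 1e-3

# backward heat equation F_t + sigma^2/2 F_xx = 0, checked by central differences
dt = dx = 1e-4
t0, x0 = 0.5, 0.2
F_t = (F(t0 + dt, x0) - F(t0 - dt, x0)) / (2 * dt)
F_xx = (F(t0, x0 + dx) - 2 * F(t0, x0) + F(t0, x0 - dx)) / dx ** 2
assert abs(F_t + 0.5 * sigma ** 2 * F_xx) < 1e-4
```

The closed form also makes the terminal discontinuity visible: as $t \uparrow T$, F(t,x) tends to ±γ̃ for any fixed x ≠ 0.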
Theorem 6.1 (i) Assume a function V(t,x) and the barrier function b(t) solve the following system A:
$$\begin{cases}
V(T,x) = e^{-\beta T}H(x), & x \ge 0,\\
V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} = 0,\quad V_x \ge e^{-\beta t}, & \text{on } \{(t,x)\,|\,0\le x\le b(t),\ 0\le t\le T\}\setminus\{(T,0)\},\\
V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} \le 0,\quad V_x = e^{-\beta t}, & \text{for } x\ge b(t),\ 0\le t\le T,\\
V(t,0) = 0, & 0\le t<T,\\
V(t,x),\,V_t(t,x),\,V_x(t,x),\,V_{xx}(t,x) \text{ continuous on } [0,T]\times\mathbb{R}_0^+\setminus\{(T,0)\},\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),
\end{cases}$$
then V is the value function of $\sup_{\mathcal A}\mathbb{E}\big[\int_t^\tau e^{-\beta s}\,dC_s + e^{-\beta\tau}H(X_\tau)\mathbb{1}_{\{\tau=T\}}\big]$.
(ii) The optimal strategy C* and the optimal state process X* are given by
$$C_0^* = 0,\qquad C_t^* = \max_{0\le s\le t}\big[x+\mu s+\sigma W_s - b(s)\big]^+,\quad t>0,\qquad X_t^* = x+\mu t+\sigma W_t - C_t^*,$$
which means that C* is a barrier strategy with barrier function b(t).
(iii) Moreover, the function b(t) is the C¹-solution of the following integral equation
$$b'(t) = \frac{\sigma^2}{2\hat\beta}\,e^{-\hat\mu b(t)+\lambda t}\Big\{\int_0^{T-t} ds\int_{-b(T-s)}^{b(T-s)} K_x(b(t),T-t;\xi,s)\,\tilde B'(T-s)\,d\xi \qquad (63)$$
$$\quad + \frac{\sigma^2}{2}\int_0^{T-t} K_x(b(t),T-t;b(T-s),s)\,\tilde D(T-s)\,ds + \frac{\sigma^2}{2}\int_0^{T-t} K_x(b(t),T-t;-b(T-s),s)\,\tilde D(T-s)\,ds$$
$$\quad + \int_{-b(T)}^{b(T)} K_x(b(t),T-t;\xi,0)\big(\tilde h'''(\xi) - \tilde B(T)\big)\,d\xi\Big\} - \frac{\sigma^2}{2\hat\beta}\Big(d_1 + \frac{\hat\mu^4}{2}\big(b(t)+d(t)\big)\Big). \qquad (64)$$
Proof. We shall prove (i) and (ii) now and consider (iii), once we have transformed the problem to a proper free boundary value problem.
(i) A function V(t,x) fulfilling system A above is clearly a viscosity solution of $\min\{-LV, -e^{-\beta t} + V_x\} = 0$. Moreover, the conditions (50) are satisfied as well. Hence, we can conclude by Theorem 5.2, resp. Remark 5.1, that V is the value function.
(ii) First of all note that, when we prove (iii) in the sequel, we shall also construct a solution to system A, which is by (i) the value function of our problem. Hence, the value function fulfills the conditions of system A.
By [7], Theorem 3.8, one knows that $P\big((\tau, X_\tau^*) = (T,0)\big) = 0$. (Note that this is not true for a general admissible strategy C; hence the first step of the standard proof of (ii), as it is given e.g. in [13], Theorem 3.2.(ii), step 1, is not applicable, and we had to use the viscosity concept.) Since apart from the point (T,0) the function V(t,x) is smooth enough, one can use the same proof as in [13], Theorem 3.2.(ii), step 2, after a localization argument with $\tau_n \equiv \tau\wedge\inf\{t>0\,|\,V_x(t,X_t) = n\} \to \tau$ a.s., to get rid of the expectation of the stochastic integral in Itô's formula. □
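The barrier strategy of part (ii) is easy to illustrate by a path simulation: one tracks the uncontrolled endowment $Y_s = x + \mu s + \sigma W_s$ and takes $C_t^*$ as the running maximum of $(Y_s - b(s))^+$, so that $X^* = Y - C^*$ is kept at or below the barrier. A minimal sketch on an Euler grid (the barrier function and all parameter values below are made up for illustration; the true b solves (64)):

```python
import math
import random

def simulate_barrier(x0, mu, sigma, T, n, b, seed=0):
    """Simulate C*_t = max_{0<=s<=t} (x0 + mu s + sigma W_s - b(s))^+ on an n-step grid.
    Returns the sampled triples (t, X*_t, C*_t)."""
    rng = random.Random(seed)
    dt = T / n
    t, Y, C = 0.0, x0, 0.0          # Y is the uncontrolled path x0 + mu t + sigma W_t
    out = []
    for _ in range(n + 1):
        C = max(C, Y - b(t), 0.0)   # running maximum of (Y_s - b(s))^+
        out.append((t, Y - C, C))   # X* = Y - C* is reflected at the barrier
        Y += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return out

bfun = lambda t: 0.8 + 0.1 * (1.0 - t)   # hypothetical barrier, for illustration only
path = simulate_barrier(0.5, 0.4, 0.3, 1.0, 2000, bfun)
assert all(X <= bfun(t) + 1e-12 for (t, X, _) in path)          # X* never exceeds b(t)
assert all(a[2] <= c[2] for a, c in zip(path, path[1:]))        # C* is nondecreasing
```

By construction the consumption C* increases only when the uncontrolled path pushes X* against the barrier, which is exactly the "local time" behavior of a singular control.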
In order to prove (iii), we shall successively transform the problem A of Theorem 6.1, by means of a series of lemmata, into a proper free boundary value problem, which will characterize our barrier function. Our first Lemma transforms the original problem into a new one, where the former region I is now chosen symmetrically to x = 0. Moreover, it shows that the correct inequalities assumed in region I, resp. II, in Theorem 6.1 follow, if we assume C⁴-regularity (except at the point (T,0)) up to and including the boundary.
Lemma 6.2 A function V(t,x) and a boundary function b(t) solving the system B:
$$\begin{cases}
V(T,x) = e^{-\beta T}H(x), & x\in[-\epsilon,\infty),\\
V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} = 0 & \text{in } G_1\setminus\{(T,0)\},\\
V_x = e^{-\beta t}, & x\ge b(t),\ 0\le t\le T,\\
V(t,0) = 0, & 0\le t<T,\\
V, V_t, V_{tt}, V_x, V_{xx}, V_{xxx}, V_{xxxx} \in C(\bar G_1\setminus\{(T,0)\}),\\
V, V_t, V_x, V_{xx} \in C(\bar G_2\setminus\{(T,0)\}),\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
solve also system A of Theorem 6.1.
Proof. What we actually have to show is that the following inequalities hold:
$$V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} \le 0,\qquad \text{for } (t,x)\in II, \qquad (65)$$
$$V_x \ge e^{-\beta t},\qquad \text{for } (t,x)\in I\setminus\{(T,0)\}. \qquad (66)$$
The proof of (65) is completely analogous to the proof in Lemma 3.2 of [13], with the only alteration that we now have $d(T) = H_0 + \bar\gamma$, which implies
$$d(t) = e^{\beta t}\int_t^T \big(\mu e^{-\beta s} - \beta e^{-\beta s}b(s)\big)\,ds + (H_0+\bar\gamma)e^{\beta(t-T)}. \qquad (67)$$
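As a sanity check on the role of d: from (67), $d(T) = H_0 + \bar\gamma$ by construction, differentiating gives $d'(t) = \beta d(t) - \mu + \beta b(t)$, and a short computation with $V(t,x) = e^{-\beta t}(x + d(t))$ yields $V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} = \beta e^{-\beta t}(b(t) - x)$, which is ≤ 0 precisely for $x \ge b(t)$, i.e. in region II. A numerical sketch (β, µ, the barrier b and the constant $H_0+\bar\gamma$ are made-up illustrative values):

```python
import math

beta, mu, T = 0.2, 0.5, 1.0
H0g = 0.3                                   # stands in for H_0 + gamma_bar
b = lambda s: 0.8 + 0.1 * (T - s)           # hypothetical C^1 barrier

def d(t, n=4000):
    """Trapezoidal evaluation of (67)."""
    h = (T - t) / n
    acc = 0.0
    for i in range(n + 1):
        s = t + i * h
        w = 0.5 if i in (0, n) else 1.0
        acc += w * (mu * math.exp(-beta * s) - beta * math.exp(-beta * s) * b(s))
    return math.exp(beta * t) * acc * h + H0g * math.exp(beta * (t - T))

# terminal value and the ODE d'(t) = beta d(t) - mu + beta b(t)
assert abs(d(T) - H0g) < 1e-12
dt, t0 = 1e-4, 0.4
d_prime = (d(t0 + dt) - d(t0 - dt)) / (2 * dt)
assert abs(d_prime - (beta * d(t0) - mu + beta * b(t0))) < 1e-5

# in region II (x >= b(t)): LV = beta e^{-beta t} (b(t) - x) <= 0
for x in (b(t0), b(t0) + 0.5, b(t0) + 2.0):
    assert beta * math.exp(-beta * t0) * (b(t0) - x) <= 0.0
```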
We proceed with inequality (66) and start with the calculation of the left hand boundary condition of V, i.e. V(t,−b(t)). Using the right hand boundary conditions $V(t,b(t)) = e^{-\beta t}(b(t)+d(t))$, $V_x(t,b(t)) = e^{-\beta t}$, $V_{xx}(t,b(t)) = 0$, the transformation $v(t,x) = V(t,x)e^{\hat\mu x - \frac{\mu^2}{2\sigma^2}t}$ and the fact that v is an odd function in x, gives after some elementary algebra:
$$\begin{aligned}
V(t,-b(t)) &= -e^{-\beta t+2\hat\mu b(t)}(b(t)+d(t)),\\
V_x(t,-b(t)) &= e^{-\beta t+2\hat\mu b(t)}\big(1+2\hat\mu(b(t)+d(t))\big),\\
V_{xx}(t,-b(t)) &= e^{-\beta t+2\hat\mu b(t)}\big({-4\hat\mu}-4\hat\mu^2(b(t)+d(t))\big).
\end{aligned} \qquad (68)$$
For the transformed function v(t,x) one gets
$$\begin{aligned}
v(t,\pm b(t)) &= \pm e^{-\lambda t+\hat\mu b(t)}(b(t)+d(t)) \equiv \pm g(t),\\
v_x(t,\pm b(t)) &= e^{-\lambda t+\hat\mu b(t)}\big(1+\hat\mu(b(t)+d(t))\big) \equiv k(t),\\
v_{xx}(t,\pm b(t)) &= \pm e^{-\lambda t+\hat\mu b(t)}\big(2\hat\mu+\hat\mu^2(b(t)+d(t))\big) \equiv \pm m(t).
\end{aligned} \qquad (69)$$
Moreover, v(t,x) fulfills the following boundary value problem
$$\hat L v(t,x) \equiv v_t + \frac{\sigma^2}{2}v_{xx} = 0,\qquad v(t,\pm b(t)) = \pm g(t),\qquad v(T,x) = e^{\hat\mu x-\lambda T}H(x) \equiv h(x). \qquad (70)$$
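The exponential change of variables behind (70) can be sanity-checked numerically: if V solves $V_t + \mu V_x + \frac{\sigma^2}{2}V_{xx} = 0$, then $v(t,x) = V(t,x)e^{\hat\mu x - \frac{\mu^2}{2\sigma^2}t}$ solves the driftless backward heat equation $v_t + \frac{\sigma^2}{2}v_{xx} = 0$. A sketch using the simple explicit solution $V(t,x) = x + \mu(T-t)$ (the parameter values are made up):

```python
import math

mu, sigma, T = 0.5, 0.8, 1.0
mu_hat = mu / sigma ** 2

def V(t, x):
    """An explicit solution of V_t + mu V_x + sigma^2/2 V_xx = 0."""
    return x + mu * (T - t)

def v(t, x):
    """Transformed function; should solve the backward heat equation."""
    return V(t, x) * math.exp(mu_hat * x - mu ** 2 / (2 * sigma ** 2) * t)

# check v_t + sigma^2/2 v_xx = 0 by central differences at a sample point
h = 1e-4
t0, x0 = 0.3, 0.7
v_t = (v(t0 + h, x0) - v(t0 - h, x0)) / (2 * h)
v_xx = (v(t0, x0 + h) - 2 * v(t0, x0) + v(t0, x0 - h)) / h ** 2
assert abs(v_t + 0.5 * sigma ** 2 * v_xx) < 1e-4
```

The same cancellation ($\hat\mu$ removing the drift, $\frac{\mu^2}{2\sigma^2}$ removing the zeroth-order term it creates) is what turns system B into the heat-equation systems below.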
The function h(x) has a discontinuity at x = 0, and using the Standing Assumption on H one finds $h(0\pm) = \pm\tilde\gamma$. Subtracting out this jump, we define h̃ as $\tilde h(x) \equiv h(x) - \tilde\gamma\,\mathrm{sgn}(x)$. Our definition of H in the Standing Assumption is chosen in a way s.t. we have h(−x) = −h(x) for $x\in[-\epsilon,\epsilon]$, $h''(0+) = 0$, and $h^{(IV)}(0+) = 0$. This implies that $\tilde h \in C^4([-\epsilon,\epsilon])$.
We now decompose the function v into a discontinuous part fulfilling a Cauchy problem and a regular part fulfilling a BVP, i.e. $v \equiv v^{(1)} + v^{(2)}$ with
$$\hat L v^{(1)} = 0,\qquad v^{(1)}(T,x) = \tilde\gamma\,\mathrm{sgn}(x), \qquad (71)$$
respectively
$$\hat L v^{(2)} = 0,\qquad v^{(2)}(t,\pm b(t)) = \pm g(t) - v^{(1)}(t,\pm b(t)),\qquad v^{(2)}(T,x) = \tilde h(x). \qquad (72)$$
Note that the function on the r.h.s. of (72)₂ is in C¹([0,T]), since away from the point (T,0) the function $v^{(1)}(t,x)$ is smooth, and $g\in C^1([0,T])$ by the assumptions of our Lemma. Since the boundary data and the final data are obviously compatible, we get by Gevrey's result, see e.g. [13], Lemma 3.5, that $v_x^{(2)} \in C(\bar G_1)$. Hence, $v_x^{(2)}$ fulfills the following BVP
$$\hat L v_x^{(2)} = 0,\qquad v_x^{(2)}(t,\pm b(t)) = k(t) - v_x^{(1)}(t,b(t)),\qquad v_x^{(2)}(T,x) = \tilde h'(x). \qquad (73)$$
Again the function on the r.h.s. of (73)₂ is in C¹([0,T]), and since the boundary and final data are compatible, we conclude by Gevrey's result that $v_{xx}^{(2)} \in C(\bar G_1)$.
We now approximate the function $v^{(1)}$ by smooth functions $v^{(1,\delta)}(t,x)$, which fulfill
$$\hat L v^{(1,\delta)} = 0,\qquad v^{(1,\delta)}(T,x) = \phi^{(\delta)}(x), \qquad (74)$$
where $\phi^{(\delta)}$ fulfills $\phi^{(\delta)} \in C^\infty(\mathbb{R})$, $\phi^{(\delta)} \to \tilde\gamma\,\mathrm{sgn}(x)$ pointwise except at x = 0, for δ → 0. Moreover, we impose: the $\phi^{(\delta)}(x)$ are odd functions, fulfill $\phi^{(\delta)\prime}(x) \ge 0$, $\phi^{(\delta)\prime\prime}(x) \le 0$ for x ≥ 0, and $\phi^{(\delta)}(x) \equiv \tilde\gamma\,\mathrm{sgn}(x)$ for $|x| \ge \delta/2$. The solution is well known and given by
$$v^{(1,\delta)}(t,x) = \frac{1}{\sqrt{2\pi(T-t)}\,\sigma}\int_{-\infty}^{\infty}\phi^{(\delta)}(\xi)\,e^{-\frac{(x-\xi)^2}{2\sigma^2(T-t)}}\,d\xi.$$
Obviously, one has
$$v^{(1,\delta)} \to v^{(1)},\qquad v_x^{(1,\delta)} \to v_x^{(1)},\qquad v_{xx}^{(1,\delta)} \to v_{xx}^{(1)}, \qquad (75)$$
for δ → 0, pointwise in $\bar G_1\setminus\{(T,0)\}$ and uniformly on each compactum K with $K \subset \bar G_1\setminus\{(T,0)\}$.
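One concrete family $\phi^{(\delta)}$ with the properties imposed after (74) can be built from the standard $C^\infty$ bump $\psi(u) = e^{-1/(1-u^2)}$ on (−1,1): integrating the even bump ψ from 0, normalizing, and rescaling to width δ/2 yields an odd, nondecreasing function, concave on x ≥ 0, which equals $\tilde\gamma\,\mathrm{sgn}(x)$ for $|x| \ge \delta/2$. A numerical sketch of this construction (the concrete bump and all numbers are our own illustrative choices, not taken from the paper):

```python
import math

def psi(u):
    """Even C-infinity bump supported on (-1, 1)."""
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

N_STEPS = 5000
H = 1.0 / N_STEPS
NORM = sum(psi((i + 0.5) * H) for i in range(N_STEPS)) * H   # midpoint rule, int_0^1 psi

def S(y):
    """Odd smooth step: S = +-1 for |y| >= 1, increasing, concave for y >= 0."""
    if y >= 1.0:
        return 1.0
    if y <= -1.0:
        return -1.0
    sign = 1.0 if y >= 0 else -1.0
    n = max(1, int(abs(y) / H))
    h = abs(y) / n
    return sign * sum(psi((i + 0.5) * h) for i in range(n)) * h / NORM

def phi_delta(x, delta=0.2, gamma_t=0.5):
    """Smooth odd approximation of gamma_t * sgn(x), exact for |x| >= delta/2."""
    return gamma_t * S(2.0 * x / delta)

# properties required after (74): flat tails, oddness, monotonicity
assert abs(phi_delta(0.3) - 0.5) < 1e-12 and abs(phi_delta(-0.3) + 0.5) < 1e-12
assert abs(phi_delta(0.04) + phi_delta(-0.04)) < 1e-9
xs = [i * 0.01 for i in range(-16, 17)]
vals = [phi_delta(x) for x in xs]
assert all(a <= b + 1e-9 for a, b in zip(vals, vals[1:]))
```

Because ψ vanishes to all orders at ±1, the rescaled step glues smoothly onto the constant tails, so $\phi^{(\delta)}$ is indeed $C^\infty$.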
Let $v^{(\delta)} \equiv v^{(1,\delta)} + v^{(2)}$, which gives the following system for $v^{(\delta)}$
$$\hat L v^{(\delta)} = 0,\qquad v^{(\delta)}(t,\pm b(t)) = v^{(1,\delta)}(t,\pm b(t)) \pm \big(g(t) - v^{(1)}(t,b(t))\big),\qquad v^{(\delta)}(T,x) = \phi^{(\delta)}(x) + h(x) - \tilde\gamma\,\mathrm{sgn}(x). \qquad (76)$$
Because of (75), we have
$$v^{(\delta)} \to v,\qquad v_x^{(\delta)} \to v_x,\qquad v_{xx}^{(\delta)} \to v_{xx}, \qquad (77)$$
for δ → 0, pointwise in $\bar G_1\setminus\{(T,0)\}$ and uniformly on each compactum K with $K \subset \bar G_1\setminus\{(T,0)\}$, or (with $V^{(\delta)} \equiv v^{(\delta)} e^{-\hat\mu x+\frac{\mu^2}{2\sigma^2}t}$),
$$V^{(\delta)} \to V,\qquad V_x^{(\delta)} \to V_x,\qquad V_{xx}^{(\delta)} \to V_{xx}, \qquad (78)$$
for δ → 0, pointwise in $\bar G_1\setminus\{(T,0)\}$ and uniformly on each compactum K with $K \subset \bar G_1\setminus\{(T,0)\}$.
Since $v^{(1,\delta)} \in C^\infty$, we have $v_{xx}^{(\delta)} \in C(\bar G_1)$, and $v_x^{(\delta)}$ fulfills
$$\hat L v_x^{(\delta)} = 0,\qquad v_x^{(\delta)}(t,\pm b(t)) = v_x^{(1,\delta)}(t,\pm b(t)) + k(t) - v_x^{(1)}(t,b(t)) \equiv k^{(\delta)}(t),\qquad v_x^{(\delta)}(T,x) = \phi^{(\delta)\prime}(x) + \tilde h'(x). \qquad (79)$$
Furthermore, we have $\tilde h'(x) = h'(x)$ for x ≠ 0, and $\tilde h'(0) = h'(0+) = h'(0-)$. Our Standing Assumption on H, the transformation formula between H and h, as well as the oddness of h(x), imply $\tilde h'(x) > 0$ for $x\in[-\epsilon,\epsilon]$, which in turn gives
$$v_x^{(\delta)}(T,x) > 0,\qquad \text{for } x\in[-\epsilon,\epsilon]. \qquad (80)$$
Since we know that the convergence $k^{(\delta)}(t) \to k(t) > 0$, for δ → 0, is uniform on [0,T] by (77), we conclude that $k^{(\delta)}(t) > 0$ on [0,T] for δ small enough. The maximum principle implies by (79) and (80) that $v_x^{(\delta)} > 0$ on $\bar G_1$, for small enough δ, especially $v_x^{(\delta)}(t,0) > 0$ for 0 ≤ t ≤ T. Hence, transforming to $V^{(\delta)}$ gives, by the oddness of $v^{(\delta)}$,
$$V_{xx}^{(\delta)}(t,0) < 0,\qquad \text{for } t\in[0,T], \qquad (81)$$
and δ small enough. Defining $g^{(\delta)}$ and $m^{(\delta)}$ analogously to $k^{(\delta)}$ and exploiting (75) gives $g^{(\delta)}(t) \to g(t)$, $k^{(\delta)}(t) \to k(t)$, $m^{(\delta)}(t) \to m(t)$, uniformly on [0,T], hence $V_{xx}^{(\delta)}(t,b(t)) \to V_{xx}(t,b(t))$, uniformly on [0,T], i.e. there exists a positive function ν(δ) → 0 with δ → 0, s.t.
$$V_{xx}^{(\delta)}(t,b(t)) \le \nu(\delta),\qquad \text{for } t\in[0,T]. \qquad (82)$$
For the terminal value we claim that
$$V_{xx}^{(\delta)}(T,x) \le 0,\qquad \text{for } x\in[0,\epsilon]. \qquad (83)$$
By our transformation law this is equivalent to $(v_{xx}^{(\delta)} - 2\hat\mu v_x^{(\delta)} + \hat\mu^2 v^{(\delta)})(T,x) \le 0$ or, if we use our terminal condition (76), equivalent to $\phi^{(\delta)\prime\prime}(x) + \tilde h''(x) - 2\hat\mu\phi^{(\delta)\prime}(x) - 2\hat\mu\tilde h'(x) + \hat\mu^2\phi^{(\delta)}(x) + \hat\mu^2\tilde h(x) \le 0$. Now, the sum of the second, the fourth, and the part $\hat\mu^2 h$ of the sixth term is less than or equal to zero, by our Standing Assumption (viii). So it remains to show $\phi^{(\delta)\prime\prime}(x) - 2\hat\mu\phi^{(\delta)\prime}(x) + \hat\mu^2(\phi^{(\delta)}(x) - \tilde\gamma) \le 0$, which is true by our assumptions on the function $\phi^{(\delta)}$, formulated after (74). Hence, (83) is true.
Since $V_{xx}^{(\delta)} \in C(\bar G_1)$, the maximum principle, together with (81), (82), (83), implies $V_{xx}^{(\delta)}(t,x) \le \nu(\delta)$ on $\bar G_1$. Employing (78) and going to the limit δ → 0 gives $V_{xx}(t,x) \le 0$ on $\bar G_1\setminus\{(T,0)\}$, which by $V_x(t,b(t)) = e^{-\beta t}$ finishes the proof. □
Since we want to apply the Green’s function method for the backward heat equation, we remove the
drift of the problem by a simple exponential transformation, and we formulate the problem now for
a new dependent variable v.
Lemma 6.3 (i) If a function v(t,x) and a boundary function b(t) solve the system C:
$$\begin{cases}
v(t,x) \text{ is an odd function in } x \text{ on } \bar G_1, \text{ except } v(T,0) = \tilde\gamma,\\
v(T,x) = e^{\hat\mu x-\lambda T}H(x) \equiv h(x), & x\in[-\epsilon,\infty),\\
v_t + \frac{\sigma^2}{2}v_{xx} = 0 & \text{in } G_1\setminus\{(T,0)\},\\
v(t,x) = e^{-\lambda t+\hat\mu x}(x+d(t)), & x\ge b(t),\ 0\le t\le T,\\
v, v_t, v_{tt}, v_x, v_{xx}, v_{xxx}, v_{xxxx} \in C(\bar G_1\setminus\{(T,0)\}),\\
v, v_t, v_x, v_{xx} \in C(\bar G_2\setminus\{(T,0)\}),\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
then V(t,x), defined by $V(t,x) \equiv e^{-\hat\mu x+\frac{\mu^2}{2\sigma^2}t}v(t,x)$, and b(t) solve the system B.
Remark: Note that, in comparison to the system C in [13], Lemma 3.3, we have skipped the condition C₅ there. It was used there to prove the property B₅, which we no longer have in our new system B.
(ii) The transformed terminal condition h(x) fulfills
$$\begin{cases}
h(x)\in C^4([-\epsilon,\epsilon]\setminus\{0\})\cap C^2([-\epsilon,\infty)\setminus\{0\}), \text{ with existing } h^{(IV)}(0\pm),\\
h(0)=h(0+)=\tilde\gamma,\quad h(0-)=-\tilde\gamma,\quad h(-x)=-h(x) \text{ for } x>0,\quad h''(0\pm)=h''''(0\pm)=0,\\
-h(-\epsilon)=h(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(b(T)+d(T)\big),\\
h'(-\epsilon)=h'(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(1+\hat\mu(b(T)+d(T))\big),\\
-h''(-\epsilon)=h''(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(2\hat\mu+\hat\mu^2(b(T)+d(T))\big),\\
h'''(-\epsilon+)=h'''(\epsilon-)=e^{-\lambda T+\hat\mu b(T)}\big(2\hat\beta+3\hat\mu^2+\hat\mu^3(b(T)+d(T))\big),\\
-h^{(IV)}(-\epsilon+)=h^{(IV)}(\epsilon-)=e^{-\lambda T+\hat\mu b(T)}\big(\tfrac{4\hat\beta}{\sigma^2}H_4+4\hat\mu\hat\beta+4\hat\mu^3+\hat\mu^4(b(T)+d(T))\big).
\end{cases}$$
Proof. The proof consists of simple calculations. Note that the asserted values of h and its derivatives at ϵ are the same as in [13], with the only difference that we now have a different value for d(T). □
In the following Lemma we define a function which we shall use repeatedly in the sequel.
Lemma 6.4 Let F(t,x) be the solution of the Cauchy problem
$$\hat L F(t,x) = 0,\qquad F(T,x) = \tilde\gamma\,\mathrm{sgn}(x).$$
Then F(t,x) has the following representation: $F(t,x) = \frac{\tilde\gamma}{\sqrt{2\pi(T-t)}\,\sigma}\int_{-\infty}^{\infty}\mathrm{sgn}(\xi)\,e^{-\frac{(x-\xi)^2}{2\sigma^2(T-t)}}\,d\xi$, and $\frac{\partial^{k+l}}{\partial t^k\partial x^l}F(t,x)$ is continuous on $\bar G_1\setminus\{(T,0)\}$, resp. uniformly continuous on each compact set $K\subset \bar G_1\setminus\{(T,0)\}$, for $k,l\in\mathbb{N}_0$. Moreover, we have $\hat L\big(\frac{\partial^{k+l}}{\partial t^k\partial x^l}F(t,x)\big) = 0$ on $\bar G_1\setminus\{(T,0)\}$. Finally, note that $F_x(t,x) = \frac{\tilde\gamma}{\sigma}\sqrt{\frac{2}{\pi(T-t)}}\,e^{-\frac{x^2}{2\sigma^2(T-t)}}$ is the fundamental solution of our backward heat equation.
By subtracting the function F (t, x) from v, we get a regularized version of v, namely ṽ(t, x) ≡
v(t, x) − F (t, x). In the sequel we shall consider ṽ and derive a sequence of BVPs for it. The proof
of the following Lemma follows obviously from Lemma 6.3.
Lemma 6.5 (i) If a function ṽ(t,x) and a boundary function b(t) solve the system C₁:
$$\begin{cases}
\tilde v(t,x) \text{ is an odd function in } x \text{ on } \bar G_1,\\
\tilde v(T,x) = h(x) - \tilde\gamma\,\mathrm{sgn}(x) \equiv \tilde h(x), & x\in[-\epsilon,\infty),\\
\tilde v_t + \frac{\sigma^2}{2}\tilde v_{xx} = 0 & \text{in } G_1,\\
\tilde v(t,x) = e^{-\lambda t+\hat\mu x}(x+d(t)) - F(t,x), & x\ge b(t),\ 0\le t\le T,\\
\tilde v, \tilde v_t, \tilde v_{tt}, \tilde v_x, \tilde v_{xx}, \tilde v_{xxx}, \tilde v_{xxxx} \in C(\bar G_1),\\
\tilde v, \tilde v_t, \tilde v_x, \tilde v_{xx} \in C(\bar G_2),\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
then v(t,x), defined by $v(t,x) \equiv \tilde v(t,x) + F(t,x)$, and b(t) solve the system C.
(ii) The transformed terminal condition h̃(x) fulfills
$$\begin{cases}
\tilde h(x)\in C^4([-\epsilon,\epsilon])\cap C^2([-\epsilon,\infty)),\\
\tilde h(x) \text{ is an odd function on } [-\epsilon,\epsilon],\\
-\tilde h(-\epsilon)=\tilde h(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(b(T)+d(T)\big)-\tilde\gamma,\\
\tilde h'(-\epsilon)=\tilde h'(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(1+\hat\mu(b(T)+d(T))\big),\\
-\tilde h''(-\epsilon)=\tilde h''(\epsilon)=e^{-\lambda T+\hat\mu b(T)}\big(2\hat\mu+\hat\mu^2(b(T)+d(T))\big),\\
\tilde h'''(-\epsilon+)=\tilde h'''(\epsilon-)=e^{-\lambda T+\hat\mu b(T)}\big(2\hat\beta+3\hat\mu^2+\hat\mu^3(b(T)+d(T))\big),\\
-\tilde h^{(IV)}(-\epsilon+)=\tilde h^{(IV)}(\epsilon-)=e^{-\lambda T+\hat\mu b(T)}\big(\tfrac{4\hat\beta}{\sigma^2}H_4+4\hat\mu\hat\beta+4\hat\mu^3+\hat\mu^4(b(T)+d(T))\big).
\end{cases}$$
Our next result shows that, if we choose proper boundary conditions for ṽ, then ṽ automatically has the smoothness asserted in Lemma 6.5.
Lemma 6.6 If a function ṽ(t,x) and a boundary function b(t) solve the system D:
$$\begin{cases}
\tilde v(T,x) = \tilde h(x), & x\in[-\epsilon,\infty),\\
\tilde v_t + \frac{\sigma^2}{2}\tilde v_{xx} = 0 & \text{in } G_1,\\
\tilde v(t,\pm b(t)) = \pm e^{-\lambda t+\hat\mu b(t)}(b(t)+d(t)) \mp F(t,b(t)), & 0\le t\le T,\\
\tilde v_x(t,\pm b(t)) = e^{-\lambda t+\hat\mu b(t)}\big(1+\hat\mu(b(t)+d(t))\big) - F_x(t,b(t)), & 0\le t\le T,\\
\tilde v(t,x) = e^{\hat\mu x-\lambda t}(x+d(t)) - F(t,x), & (t,x)\in II,\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
then ṽ(t,x) and b(t) solve the system C₁, especially the smoothness conditions formulated there. Moreover, we have the following explicit representations:
$$\begin{aligned}
\tilde v_{xx}(t,\pm b(t)) &= \pm e^{-\lambda t+\hat\mu b(t)}\big(2\hat\mu+\hat\mu^2(b(t)+d(t))\big) \mp F_{xx}(t,b(t)) \equiv \pm\tilde A(t),\\
\tilde v_{xxx}(t,\pm b(t)\mp) &= e^{-\lambda t+\hat\mu b(t)}\big(2\hat\beta+3\hat\mu^2+\hat\mu^3(b(t)+d(t))\big) - F_{xxx}(t,b(t)) \equiv \tilde B(t),\\
\tilde v_{xxxx}(t,\pm b(t)\mp) &= \pm e^{-\lambda t+\hat\mu b(t)}\Big(\frac{4\hat\beta}{\sigma^2}b'(t)+4\hat\mu\hat\beta+4\hat\mu^3+\hat\mu^4(b(t)+d(t))\Big) \mp F_{xxxx}(t,b(t)) \equiv \pm\tilde D(t).
\end{aligned} \qquad (84)$$
Proof. The proof is completely analogous to the proof of Lemma 3.4 in [13]. We only have to modify the boundary conditions, using our function F(t,x).
We now start to derive similar systems for the derivatives of ṽ.
Lemma 6.7 If a function ũ(t,x) and a boundary function b(t) solve the system E:
$$\begin{cases}
\tilde u(T,x) = \tilde h'(x), & x\in[-\epsilon,\infty),\\
\tilde u_t + \frac{\sigma^2}{2}\tilde u_{xx} = 0 & \text{in } G_1,\\
\tilde u(t,\pm b(t)) = e^{-\lambda t+\hat\mu b(t)}\big(1+\hat\mu(b(t)+d(t))\big) - F_x(t,b(t)) \equiv \tilde R(t), & 0\le t\le T,\\
\tilde u_x(t,\pm b(t)) = \pm\tilde A(t), & 0\le t\le T,\\
\tilde u(t,x) = e^{\hat\mu x-\lambda t}\big(1+\hat\mu(x+d(t))\big) - F_x(t,x), & (t,x)\in II,\\
\tilde u_{xx}\in C(\bar G_1),\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
then $\tilde v(t,x) \equiv -\tilde M(t) + \int_{-b(t)}^x \tilde u(t,z)\,dz$ with $\tilde M(t) \equiv e^{\hat\mu b(t)-\lambda t}(b(t)+d(t)) - F(t,b(t))$, and b(t) solve system D of Lemma 6.6.
Note the slight abuse of notation, but we don't want to introduce further notation for the ṽ defined in this lemma.
Proof. The proof is completely analogous to the proof of Lemma 3.7 in [13]. We just have to use the relation $\hat L F(t,b(t)) = 0$ at one instance. □
We reach our last transformation, which will give us the system we shall solve.
Lemma 6.8 If q̃(t,x) and a boundary function b(t) solve the system F:
$$\begin{cases}
\tilde q(T,x) = \tilde h'''(x) - \tilde B(T), & x\in[-\epsilon,\infty) \text{ (replace, for } x=\epsilon,\ \tilde h'''(\epsilon) \text{ by } \tilde h'''(\epsilon-)),\\
\tilde q_t + \frac{\sigma^2}{2}\tilde q_{xx} = -\tilde B'(t) & \text{in } G_1,\\
\tilde q(t,\pm b(t)) = 0, & 0\le t\le T,\\
\tilde q_x(t,\pm b(t)) = \pm\tilde D(t), & 0\le t\le T,\\
\tilde q(t,x) = e^{\hat\mu x-\lambda t}\big(3\hat\mu^2+\hat\mu^3(x+d(t))\big)-\tilde B(t)-F_{xxx}(t,x), & (t,x)\in II\setminus\{(t,x)\,|\,0\le t\le T,\ x=b(t)\},\\
b(t)>0 \text{ for } 0\le t\le T,\quad b(T)=\epsilon,\quad b\in C^1([0,T]),\\
b(t)+d(t)>0, & 0\le t\le T,
\end{cases}$$
then
$$\tilde u(t,x) \equiv \tilde R(t) - \tilde A(t)\big(x+b(t)\big) + \int_{-b(t)}^x dy\int_{-b(t)}^y \tilde q(t,z)\,dz + \tilde B(t)\Big(\frac{x^2-b^2(t)}{2} + b(t)\big(x+b(t)\big)\Big)$$
and b(t) solve the system E of Lemma 6.7.
Proof. Again, the proof is completely analogous to the proof of Lemma 3.8 in [13]. This time we just have to use the relations $\hat L F_x(t,b(t)) = 0$ and $F_{xx}(T,b(T)) = 0$. □
As announced above, we now construct a solution of the system F, where we use the Green's function K(·,·;·,·) defined in Definition 6.1. Moreover, we assume that our basic integral equation (64) for b(t) has a positive C¹-solution.
Proposition 6.1 Let b(t) be a solution of equation (64), fulfilling
$$b(t)>0 \text{ for } 0\le t\le T,\qquad b(T)=\epsilon,\qquad b\in C^1([0,T]),\qquad b(t)+d(t)>0,\ 0\le t\le T. \qquad (85)$$
Then q̃(t,x), defined as
$$\begin{aligned}
\tilde q(t,x) &\equiv \int_0^{T-t} ds\int_{-b(T-s)}^{b(T-s)} K(x,T-t;\xi,s)\,\tilde B'(T-s)\,d\xi\\
&\quad + \frac{\sigma^2}{2}\int_0^{T-t} K(x,T-t;b(T-s),s)\,\tilde D(T-s)\,ds + \frac{\sigma^2}{2}\int_0^{T-t} K(x,T-t;-b(T-s),s)\,\tilde D(T-s)\,ds\\
&\quad + \int_{-b(T)}^{b(T)} K(x,T-t;\xi,0)\big(\tilde h'''(\xi)-\tilde B(T)\big)\,d\xi \qquad (86)
\end{aligned}$$
in G₁, and
$$\tilde q(t,x) \equiv e^{\hat\mu x-\lambda t}\big(3\hat\mu^2+\hat\mu^3(x+d(t))\big)-\tilde B(t)-F_{xxx}(t,x),\quad (t,x)\in II\setminus\{(t,x)\,|\,0\le t\le T,\ x=b(t)\}, \qquad (87)$$
solve system F.
Proof. For the proof see Proposition 3.1 of [13], because $\tilde B(t)$, $\tilde D(t)$, $\tilde h(x)$ have the same smoothness properties as B(t), D(t), h(x). □
6.2 Solution of the integral equation
We have now derived an integral equation for our barrier, and we shall now prove the existence of a solution of this equation. We start with a lemma which asserts the existence of a local solution near the terminal date t = T.
Lemma 6.9 For δ small enough, depending only on ϵ and $||\tilde h^{(IV)}||_{C([-\epsilon,\epsilon])}$, i.e. $\delta = \delta(\epsilon, ||\tilde h^{(IV)}||_{C([-\epsilon,\epsilon])})$, the integral equation (64) has a strictly positive $C^1([T-\delta,T])$-solution.
Proof. We note that, in comparison to eq. (11) in [13], we just have to replace B′(t), D(t), h(x) by $\tilde B'(t)$, $\tilde D(t)$, $\tilde h(x)$. This means that we have to find estimates for the extra terms $F_{xxxx}(t,b(t))$ and $\frac{d}{dt}F_{xxx}(t,b(t))$. But these are, exactly as the previous terms D(t), B′(t), affine in b′(t), with coefficients which are smooth functionals of the barrier b(s), t ≤ s ≤ T. □
To summarize our results so far: by Lemma 6.9 and Proposition 6.1 we have a local solution of the system F′, which is the system F without the last condition F₇. This gives (modulo a proof of F₇), via the series of Lemmata 6.2 to 6.8, a local solution of our original problem. We shall proceed as follows. We assume that we have found, by iteration of our solution method, a local interval (T−η, T], where we have constructed a solution of system F′. By Lemma 6.9 the length of a consecutive solution interval depends on $||(v^{(\epsilon)})^{(IV)}(t,x)||_{C([-b(t),b(t)])}$ and $\inf_{s\in[t,T]} b(s)$, for some t > T−η arbitrarily close to T−η (see the proof of Lemma 6.9 for the second dependency). This means that, if we find a-priori bounds on $||v^{(IV)}(t,x)||_{C([-b(t),b(t)])}$ and $\inf_{s\in[t,T]} b(s)$, we can continue the solution up to t = 0. By the maximum principle for $v^{(IV)}(t,x)$ and the boundary condition (84)₃, it suffices to give a-priori bounds on $\inf_{s\in[t,T]} b(s)$, $\sup_{s\in[t,T]} b(s)$ and $\sup_{s\in[t,T]} |b'(s)|$. This then gives us a global solution of F′, and in a final step we show that F₇ is also satisfied.
We start with the estimate of $\inf_{s\in(T-\eta,T]} b(s)$.
Lemma 6.10 There exists a possibly large, but fixed number L, such that $\inf_{s\in(T-\eta,T]} b(s) > \frac{\epsilon}{L} \equiv \epsilon'$.
Proof. The proof is identical to the proof of Lemma 5.1 in [13], since the only point where the form of the terminal function H(x) is used is inequality (46) there, where one has to estimate $\mathbb{E}[H_2(X_T)]$. So, exchanging the old terminal function for the new one gives just a different constant. □
We proceed with the estimation of sups∈(T −η,T ] b(s).
Lemma 6.11 There exists a constant W , depending only on the model parameters, the function H
and on ϵ, such that sups∈(T −η,T ] b(s) ≤ W .
Proof. Let r_n be a sequence, s.t. r_n ↓ T − η, and define Ṽ(t, x) ≡ e^{−µ̂x+µ²t/(2σ²)} ṽ(t, x). For Ṽ one has
LṼ_{xt} = 0 in G₁, and, after some elementary transformations,

Ṽ_{xt}(t, b(t)−) = −βe^{−βt} + e^{−µ̂b(t)+µ²t/(2σ²)} ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)),
Ṽ_{xt}(t, −b(t)+) = (−β − 2µµ̂)e^{−βt+2µ̂b(t)} + e^{+µ̂b(t)+µ²t/(2σ²)} ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)),   (88)

where F^{(i)} denotes the i-th x-derivative of F, and c_i some generic, model-depending constants, which
may vary from place to place, and where c₀ in (88)₂ fulfills c₀ < 0. Define now V̂ ≡ Ṽ_{xt} − ρ, for some
positive constant ρ, to be chosen later. This gives for V̂ the system
V̂(T, x) = K(x),
V̂(t, b(t)−) = −βe^{−βt} + e^{−µ̂b(t)+µ²t/(2σ²)} ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)) − ρ,
V̂(t, −b(t)+) = (−β − 2µµ̂)e^{−βt+2µ̂b(t)} + e^{+µ̂b(t)+µ²t/(2σ²)} ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)) − ρ.   (89)
Since we know by the previous Lemma 6.10 that b(t) ≥ C, for some generic (ϵ-dependent) constant
C, one concludes that |∑_{i=0}^{3} c_i F^{(i)}(t, b(t))| ≤ C holds. Note also that for b(t) large, |F^{(i)}(t, b(t))|,
i = 1, 2, 3, becomes small. Hence, we can choose a model-depending positive constant ρ, s.t. we have by
the maximum principle V̂ < 0 on G₁. Introducing w ≡ V̂/(−βe^{−βt}) gives w > 0 on G₁ on the one hand,
and the system
w(T, x) = K(x)/(−βe^{−βT}),
w(t, b(t)) = 1 + ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)) e^{−µ̂b(t)+λt} + (ρ/β) e^{βt},
w(t, −b(t)) = e^{2µ̂b(t)} (1 + 2µ²/(βσ²)) + ∑_{i=0}^{3} c_i F^{(i)}(t, b(t)) e^{+µ̂b(t)+λt} + (ρ/β) e^{βt} ≡ M(t),
Lw = βw,   (90)

with c₀ > 0 in (90)₃, on the other hand. Since we know that w is positive, it fulfills the maximum
principle.
Moreover, M(t) ∈ C¹(]T − η, T]) and M*(t) ≡ sup_{s∈[t,T]} M(s) is absolutely continuous w.r.t. the
Lebesgue measure on [r_n, T] for all n. Let now I ≡ {t ∈ [r_n, T] | M*(t) = M(t), M(t) ≥ Γ}, for
some constant Γ > ||K(x)/(−βe^{−βT})||_{C([−ϵ,ϵ])}. We choose Γ large enough, s.t. t ∈ I implies w(t, −b(t)) =
M*(t) > w(s, b(s)), for all s ∈ [t, T]. By the maximum principle this implies w_x(t, −b(t)) ≤ 0, or
V̂_x(t, −b(t)+) ≥ 0, or Ṽ_{xxt}(t, −b(t)+) ≥ 0, for t ∈ I. Using the boundary condition for Ṽ_{xxt}, one
arrives at
2β̂b′(t) + 4µβ̂ + 4µ³/σ⁴ + e^{−µ̂b(t)+λt} ∑_{i=0}^{4} c_i F^{(i)}(t, b(t)) ≥ 0,   t ∈ I,

or

b′(t) ≥ −C,   t ∈ I,   (91)

for a generic positive (ϵ-dependent) constant C.
We set now C⁺ ≡ 1 + 2µ²/(βσ²), view the functions M and b as functions of the backwards time ν ≡ T − t
and calculate the derivative of M(ν) for ν, with t(ν) ∈ I. Note that, since with Γ large also M(ν)
becomes large, we can infer that b(ν) becomes large, hence |F^{(i)}(t, b(ν))|, i = 1, 2, 3, become small,
whereas F^{(0)}(t, b(ν)) is approximately γ̃ > 0. We get
M′(ν) = 2µ̂b′(ν)C⁺e^{2µ̂b(ν)} + µ̂e^{µ̂b(ν)+λt} b′(ν) ∑_{i=0}^{3} c_i F^{(i)}(t, b(ν)) − λe^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F^{(i)}(t, b(ν))
+ e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F^{(i+1)}(t, b(ν)) b′(ν) − e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F_t^{(i+1)}(t, b(ν)) − ρe^{βt} ≤

b′(ν) { 2µ̂C⁺e^{2µ̂b(ν)} + µ̂e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F^{(i)}(t, b(ν)) + e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F^{(i+1)}(t, b(ν)) }
− e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F_t^{(i+1)}(t, b(ν)).

Because we have M(ν) > C⁺e^{2µ̂b(ν)}/2 > −e^{µ̂b(ν)+λt} ∑_{i=0}^{3} c_i F_t^{(i+1)}(t, b(ν)), we can estimate the r.h.s.
above by b′(ν){...} + M(ν). Moreover, we have

µ̂C⁺e^{2µ̂b(ν)} ≤ {...} ≤ 3µ̂C⁺e^{2µ̂b(ν)},
which implies that, in case of b′(ν) ≥ 0, we get the upper estimate 3b′(ν)µ̂C⁺e^{2µ̂b(ν)} + M(ν) ≤
Ce^{2µ̂b(ν)} + M(ν) ≤ CM(ν), for some generic positive constant C. In case of b′(ν) < 0, we get the
upper estimate M(ν). Summing up, we have, for t(ν) ∈ I,

M′(ν) ≤ CM(ν).   (92)

This is obviously sufficient for M(t) ≤ C, t ∈ [r_n, T], uniformly in n. By the formula for M(t) the
same holds true for b(t). ⊓⊔
We proceed with an upper estimate of b′ on (T − η, T]. Its bound also depends on the bound W,
which we obtained in the previous Lemma.
Lemma 6.12 There exists a constant U, depending only on the model parameters, the function H,
ϵ and the constant W of Lemma 6.11, such that sup_{s∈(T−η,T]} b′(s) ≤ U.
Proof. Let t_n be a decreasing sequence in the interval (T − η, T], s.t. b′(t_n) is a maximizer of b′
in the interval [t_n, T]. If the sequence b′(t_n) is bounded, there is nothing to show. So let us assume
that b′(t_n) becomes large with increasing n. We fix a large n and define b ≡ b′(t_n). The goal is to
construct an upper bound for b, independent of n.
Let t₀ ≡ t_n + 1/√b. We further define w as the solution of the following boundary value problem.

w(t₀, x) = −βe^{−βt₀},   x ∈ (−∞, −b(t₀)],
w(t₀, x) = max{V_{xt}(t₀, x), −βe^{−βt₀}},   x ∈ (−b(t₀), b(t₀)],
w(t₀, x) = −βe^{−βt₀},   x ∈ (b(t₀), x₀],
w(t, x) = −βe^{−βt},   for x₀ − x = b(t₀ − t), t ∈ [t_n, t₀],
w_t + µw_x + (σ²/2)w_{xx} = 0,   for x₀ − x > b(t₀ − t), t ∈ (t_n, t₀),   (93)

where x₀ solves (−b(t_n) + x₀)/(t₀ − t_n) = b. So we have a boundary value problem in a wedge with
small (b is large!) angle. The maximum principle for w implies
w(t, −b(t)) ≥ −βe−βt ,
w(t, b(t)) ≥ −βe−βt ,
for t ∈ [tn , t0 ]. The maximum principle applied for w − Vxt on G0 ≡ {(t, x)| − b(t) ≤ x ≤ b(t), tn ≤
t ≤ t0 }, gives w − Vxt ≥ 0 on G0 (for the boundary values of Vxt see [13], eq. 51). Using the fact
that w and Vxt agree at (tn , b(tn )), we conclude (wx − Vxtx ) (tn , b(tn )) ≤ 0, or
wx (tn , b(tn )) ≤ Vxtx (tn , b(tn )).
(94)
Now we split w into three functions

w = w^{(1)} + w^{(2)} + w^{(3)}   (95)

and define w^{(1)} ≡ −βe^{−βt₀}. For w^{(2)} and w^{(3)} we shall define proper BVPs, s.t. (95) holds.
w^{(2)} is defined as the solution of the following BVP

w^{(2)}(t₀, x) = 0,   x ∈ (−∞, x₀],
w^{(2)}(t, x) = −βe^{−βt} + βe^{−βt₀},   for x₀ − x = b(t₀ − t), t ∈ [t_n, t₀],
w_t^{(2)} + µw_x^{(2)} + (σ²/2)w_{xx}^{(2)} = 0,   for x₀ − x > b(t₀ − t), t ∈ (t_n, t₀).   (96)
The proof of

w_x^{(2)}(t_n, b(t_n)) ≥ −c√b   (97)

is exactly as in [13], Lemma 5.3, and we remain with the problem for w^{(3)}, where the new terminal
condition enters.
We define w^{(3)} as the solution of the following BVP

w^{(3)}(t₀, x) = 0,   x ∈ (−∞, −b(t₀)],
w^{(3)}(t₀, x) = max{V_{xt}(t₀, x), −βe^{−βt₀}} + βe^{−βt₀},   x ∈ (−b(t₀), b(t₀)],
w^{(3)}(t₀, x) = 0,   x ∈ (b(t₀), x₀],
w^{(3)}(t, x) = 0,   for x₀ − x = b(t₀ − t), t ∈ [t_n, t₀],
w_t^{(3)} + µw_x^{(3)} + (σ²/2)w_{xx}^{(3)} = 0,   for x₀ − x > b(t₀ − t), t ∈ (t_n, t₀).   (98)
Let now

M > max{ ||b||_{C(]T−η,T])}, sup_{x∈[−b(t₀),b(t₀)]} ( max{V_{xt}(t₀, x), −βe^{−βt₀}} + βe^{−βt₀} ) }.   (99)

This constant depends only on the quantities mentioned in the formulation of the Lemma. (Note
that we shall show in the next section, Lemma 7.3, that, for a special sequence ϵ_k → 0, it depends
only on the model parameters.) Define now w^{(4)} as the solution of

w^{(4)}(t₀, x) = M·1_{[−M,M]}(x),   x ∈ (−∞, x₀],
w^{(4)}(t, x) = 0,   for x₀ − x = b(t₀ − t), t ∈ [t_n, t₀],
w_t^{(4)} + µw_x^{(4)} + (σ²/2)w_{xx}^{(4)} = 0,   for x₀ − x > b(t₀ − t), t ∈ (t_n, t₀).   (100)

The maximum principle implies w^{(4)} ≥ w^{(3)} and, since we have w^{(4)}(t_n, b(t_n)) = w^{(3)}(t_n, b(t_n)), one
concludes

w_x^{(4)}(t_n, b(t_n)) ≤ w_x^{(3)}(t_n, b(t_n)).   (101)
Introducing the new coordinates

z = −(x − x₀) − b(t₀ − t),   τ = t₀ − t,   (102)

and the new dependent variable u ≡ w^{(4)} e^{(b̃/σ²)z + (b̃²/(2σ²))τ}, where b̃ ≡ b − µ holds, gives the
following system for u(τ, z):

u(τ = 0, z) = M e^{(b̃/σ²)z} 1_{[x₀−M, x₀+M]}(z),   z ∈ [0, ∞),
u(τ, z = 0) = 0,   τ ∈ (0, τ_n),
u_τ = (σ²/2)u_{zz},   for z > 0, τ ∈ (0, τ_n),   (103)

which has the solution u(τ, z) = (M/(√(2πτ)σ)) ∫_{x₀−M}^{x₀+M} e^{(b̃/σ²)ξ} ( e^{−(z−ξ)²/(2σ²τ)} − e^{−(z+ξ)²/(2σ²τ)} ) dξ.
By elementary calculation one gets the value of u_z(τ, z = 0), and transforming back to the original
variables gives

w_x^{(4)}(t = t_n, x = b(t_n)) = −√(2/π) (M b^{1/4}/σ) e^{−b̃²/(2σ²√b)} ( e^{−(x₀−M)²√b/(2σ²)} − e^{−(x₀+M)²√b/(2σ²)} ),

hence,

w_x^{(4)}(t = t_n, x = b(t_n)) ≥ −c,   (104)

for a generic positive constant c, depending on the model parameters and M. We conclude by (101),
(97), (95) and (94) that V_{xtx}(t_n, b(t_n)) ≥ −c√b − c holds, i.e., by the boundary condition for V_{xtx},

−2β̂e^{−βt_n} b ≥ −c√b − c,   (105)

which implies immediately b ≤ c. ⊓⊔
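As a sanity check, the image-method solution of (103) can be compared with a direct finite-difference discretization of the same absorbing-boundary heat problem. The following sketch is not part of the proof; all parameter values are hypothetical and chosen only for illustration.

```python
import numpy as np

# Heat equation u_tau = (sigma^2/2) u_zz on z > 0 with absorbing boundary
# u(tau, 0) = 0, solved by the method of images (odd reflection of the
# initial datum across z = 0), as in the closed form following (103).
sigma, b_tilde, M, x0 = 1.0, 2.0, 0.5, 1.0  # hypothetical values

def u_images(tau, z):
    # quadrature of the image-method integral
    xi = np.linspace(x0 - M, x0 + M, 4001)
    kern = (np.exp(-(z - xi)**2/(2*sigma**2*tau))
            - np.exp(-(z + xi)**2/(2*sigma**2*tau)))
    y = M*np.exp(b_tilde*xi/sigma**2)*kern
    return float(np.sum((y[:-1] + y[1:])*(xi[1] - xi[0]))/2.0
                 / (np.sqrt(2*np.pi*tau)*sigma))

# explicit finite differences for the same problem
dz, dtau, tau_end = 0.01, 2e-5, 0.1
z = np.arange(0.0, 6.0, dz)
u = M*np.exp(b_tilde*z/sigma**2)*((z >= x0 - M) & (z <= x0 + M))
u[0] = 0.0
for _ in range(int(tau_end/dtau)):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2])/dz**2
    u = u + dtau*(sigma**2/2)*lap
    u[0] = 0.0     # absorbing boundary at z = 0
    u[-1] = 0.0    # far-field cutoff

err = max(abs(u_images(tau_end, z[i]) - u[i]) for i in range(1, len(z) - 1, 25))
```

The subtracted ("image") kernel enforces u(τ, 0) = 0 exactly; with the grid above the two solutions agree closely.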
Our last a-priori estimate gives a lower bound on b′(s).
Lemma 6.13 There exists a positive constant Z, depending only on the model parameters, the
function H and on ϵ, such that inf_{s∈(T−η,T]} b′(s) ≥ −Z.
Proof. The proof works analogously to the proof of Lemma 5.4 in [13]. Indeed, instead of the
integral equation (11) there, we work with our new integral equation (64), i.e. we have to replace
B′, D and h by B̃′, D̃ and h̃. By this replacement we generate a new term in (77), stemming from
(d/dt)(−F_{xxx}(t, b(t))). But the new term can be incorporated in −c + cg′(T − s) there (with ϵ-dependent
constants c). The replacement of D by D̃ in (78) leads to a term which can be incorporated in
cb(T − s) + c there. Hence, (79) and (80) are true as well. (81) can be dealt with as (78) before, and
finally, the replacement of h by h̃ doesn't generate alterations at all. ⊓⊔
Now that we have found our a-priori estimates, we can provide the final piece of the proof of Theorem
6.1.
Proof of Theorem 6.1(iii). Remembering what we have said after the proof of Lemma 6.9, it
remains to show that F7 holds true for the constructed barrier b(t), i.e.

b(t) + d(t) > 0,   0 ≤ t ≤ T.

This can be done in exactly the same way as in the proof of [13], Theorem 3.2 (iii). ⊓⊔

7   Construction of the value function and the optimal strategy of the original problem
We shall find the value function of problem (P₁) by choosing a special sequence ϵ_k → 0 in our
approximating problems and then letting k → ∞. The sequence is chosen in a way, s.t. the
approximating barriers b^{(k)} ≡ b^{(ϵ_k)} are decreasing. We start with
Lemma 7.1 Let ϵ₀ be as in the Standing assumptions of Section 6. Then there exists a sequence
ϵ_k ≤ ϵ₀, strictly monotone decreasing to zero, such that

(d/dx) H^{(ϵ_{k+1})}(x) ≡ H^{(k+1)′}(x) < H^{(k)′}(x),   x ∈ [0, ϵ_{k+1}],   (106)

where for x = 0 we understand H^{(k)′}(0) as H^{(k)′}(0+).
Proof. The proof is completely analogous to the proof of Lemma 3.1 in [14]. We just have to replace
the value µ̂ there by µ̂ + µ̂²γ̄/2, because of our new value for H₁ in the Standing assumptions. ⊓⊔
We now provide the announced monotonicity result.
Lemma 7.2 For the sequence ϵ_k, constructed in Lemma 7.1, one has

b^{(ϵ_{k+1})}(t) ≡ b^{(k+1)}(t) < b^{(k)}(t),   t ∈ [0, T].   (107)

Proof. The proof is basically the same as the proof of Lemma 3.2 in [14], if one takes into account
the following points.
1.) Although the functions V_x^{(k)}(T, x) are only piecewise C³, the function Y(t, x) ≡ V_x^{(k)}(t, x) −
V_x^{(k+1)}(t, x) is still a smooth solution of LY = 0 in G₁^{(k+1)} ≡ {(t, x) | −b^{(k+1)}(t) ≤ x ≤ b^{(k+1)}(t), t₀ ≤
t ≤ T}, since the discontinuity at (T, 0) is subtracted out. Here t₀ is the assumed "first" intersection
point of b^{(k)} and b^{(k+1)} (the proof there is by contradiction).
2.) The fact that Y(t, x) > 0 for (t, x) ∈ {(t, x) | t = T, x ∈ [0, ϵ_{k+1}]} follows from the previous
Lemma. For (t, x) ∈ {(t, x) | t = T, x ∈ [−ϵ_{k+1}, 0)}, one can use our Standing Assumptions on H^{(k)}.
3.) The fact that on the set {(t, x) | t₀ < t < T, x = b^{(k+1)}(t)} we find Y ≥ 0 follows in our case by
the boundary conditions for V_x^{(k)} and V_x^{(k+1)} on their right-hand boundaries, and by the concavity
of V^{(k)}, see the last line of the proof of Lemma 6.2.
The rest of the proof is identical to the proof of Lemma 3.2 in [14]. ⊓⊔
In order to let k go to infinity, thereby finding our limiting barrier, which will serve as the optimal
barrier for the original problem, the following Lemma on the derivatives of the b^{(k)} will be helpful.
Lemma 7.3 Let ϵ_k be the sequence of Lemma 7.2, and 0 < t₀ < T. Then we have sup_{0≤t≤t₀} b^{(k)′}(t) ≤
U, for a constant U depending only on the model parameters and t₀, and which is independent of k.
Proof. Checking the proof of Lemma 6.12, one sees that we only have to check that the constant
M, defined in (99), is independent of k. Sufficient for this is sup_{x∈[−b^{(k)}(t₀),b^{(k)}(t₀)]} V_{xt}^{(k)}(t, x) ≤ C,
for a generic k-independent constant C, depending on t₀ and the model parameters. By LV_x^{(k)} = 0
this is equivalent to inf_{x∈[−b^{(k)}(t₀),b^{(k)}(t₀)]} V_{xxx}^{(k)}(t, x) ≥ −C or, after transforming to the v-variable,
inf_{x∈[−b^{(k)}(t₀),b^{(k)}(t₀)]} v_{xxx}^{(k)}(t, x) ≥ −C. By definition we have v^{(k)} ≡ ṽ^{(k)} + F(t, x), which implies that
it is sufficient that

inf_{x∈[−b^{(k)}(t₀),b^{(k)}(t₀)]} ṽ_{xxx}^{(k)}(t, x) ≥ −C,   (108)

for a generic k-independent constant C, depending on t₀ and the model parameters. We remember
(see Lemma 6.6) that ṽ_{xxx}^{(k)} solves the system

ṽ_{xxx}^{(k)}(T, x) = h̃^{(k)′′′}(x),   x ∈ [−ϵ_k, ϵ_k],      L̂ṽ_{xxx}^{(k)} = 0 in G₁^{(k)},
ṽ_{xxx}^{(k)}(t, ±b^{(k)}(t)∓) = e^{−λt+µ̂b^{(k)}(t)} ( 2β̂ + 3µ̂² + µ̂³(b^{(k)}(t) + d^{(k)}(t)) ) − F_{xxx}(t, b^{(k)}(t)) ≡ B̃^{(k)}(t).   (109)
We now assert that the following is true:
Claim 1. ||B̃^{(k)}(t)||_{L^p(0,T)} ≤ C, uniformly in k, for all 1 ≤ p < ∞.
For the first term of B̃^{(k)} this is obvious, since the b^{(k)} are uniformly bounded, and for F_{xxx}(t, b^{(k)}(t))
this follows from the explicit formula F_{xxx}(t, x) = C/(T−t)^{5/2} (x² − σ²(T−t)) e^{−x²/(2σ²(T−t))} and
Claim 2. Let 0 < δ₀ < √3 be an arbitrary fixed number. Then there exists a Δ > 0, independent of
k, s.t.

inf_k b^{(k)}(t)/( σ√(−(T−t) ln(T−t)) ) ≥ δ₀,   (110)

for t ∈ [T − Δ, T].
We prove Claim 2 by contradiction. So let f^{(k)}(ν) ≡ b^{(k)}(ν)/g(ν), with ν ≡ T − t, and g(ν) ≡ σ√(−ν ln ν).
Furthermore, let f(ν) ≡ inf_k f^{(k)}(ν), and assume that there exists a sequence ν_l → 0, s.t. f(ν_l) < δ₀,
for all l ∈ ℕ. Hence, there exist numbers K(l), s.t.

f^{(k)}(ν_l) ≤ δ₁,   (111)

for some δ₀ < δ₁ < √3, and for all k ≥ K(l).
Note that in the following we use alternately the physical time t and the backwards running time
ν, which is a slight abuse of notation. Since the barrier strategy with barrier function b^{(k)} is the
optimal strategy, we get, using the notation V_k for the value function with terminal gain H^{(k)} and
using Ĥ^{(k)}(x) ≡ H^{(k)}(x) − γ̄, for all k ≥ K(l)
V_k(t_l, √3 g(ν_l)) = E[ ∫_{t_l+}^{τ_l^{(k)}} e^{−βs} dC_s^{(k)} + e^{−βτ_l^{(k)}} Ĥ^{(k)}(X_{τ_l^{(k)}}^{(k)}) + γ1_{τ_l^{(k)}=T} ] + e^{−βt_l}( √3 g(ν_l) − b^{(k)}(ν_l) ) =

(µ/β)( e^{−βt_l} − E[e^{−βτ_l^{(k)}}] ) − E[ e^{−βs} X_s^{(k)} |_{s=t_l+}^{s=τ_l^{(k)}} + β ∫_{t_l+}^{τ_l^{(k)}} X_s^{(k)} e^{−βs} ds ] + E[ e^{−βτ_l^{(k)}} Ĥ^{(k)}(X_{τ_l^{(k)}}^{(k)}) ] +
γℙ(τ_l^{(k)} = T) + e^{−βt_l}( √3 g(ν_l) − b^{(k)}(ν_l) ) ≤

(µ/β)( e^{−βt_l} − e^{−βT} ) − E[ e^{−βτ_l^{(k)}} X_{τ_l^{(k)}}^{(k)} ] + E[ e^{−βτ_l^{(k)}} Ĥ^{(k)}(X_{τ_l^{(k)}}^{(k)}) ] + γℙ(τ_l^{(k)} = T) + e^{−βt_l} √3 g(ν_l) ≤

e^{−βT}( µν_l + O(ν_l²) ) + 2ϵ_k + γℙ(τ_l^{(k)} = T) + e^{−βt_l} √3 g(ν_l).   (112)

Here we have used dX_s^{(k)} = µ ds + σ dW_s − dC_s^{(k)} and integration by parts in the first equality, τ_l^{(k)} ≤ T
and the nonnegativity of X_s in the first inequality, and finally, Ĥ^{(k)}(X_{τ_l^{(k)}}) − X_{τ_l^{(k)}} ≤ 2ϵ_k in the
last inequality. Note that the Landau O above is uniform in k. We are now looking for an upper
estimate of ℙ(τ_l^{(k)} = T) and denote by τ̂_l^{(k)} the ruin time if we do not consume anything after the
lump sum at t_l. Clearly ℙ(τ_l^{(k)} = T) < ℙ(τ̂_l^{(k)} = T), and we estimate the latter probability by

ℙ(τ̂_l^{(k)} = T) = ℙ( inf_{0≤s≤ν_l} (µs + σW_s) ≥ −b^{(k)}(ν_l) ) ≤ erf(r(ν_l)) ≤ 1 − e^{−r(ν_l)²}/( 2√π r(ν_l) ),   (113)

with r(ν_l) ≡ (µ/σ)√(ν_l/2) + δ₁ g(ν_l)/(σ√(2ν_l)). Here we used for the last but one inequality standard
results (see e.g. [17]), and the last inequality holds for r(ν_l) ≥ 1/2, which we assume w.l.o.g. Noting that
r(ν_l) ≤ δ₂ √(−ln(ν_l)/2), for some δ₂, with δ₀ < δ₁ < δ₂ < √3, we end up, using (112) and (113), at

V_k(t_l, √3 g(ν_l)) ≤ e^{−βT}( µν_l + O(ν_l²) ) + 2ϵ_k + e^{−βt_l} √3 g(ν_l) + γ − Cν_l^{δ₃},   (114)

for a constant δ₃ < 3/2, arbitrarily close to 3/2, for all l and for all k ≥ K(l), with the Landau O
uniform in k.
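The survival probability in (113) also has a classical closed form via the reflection principle, ℙ( inf_{0≤s≤ν}(µs + σW_s) ≥ −b ) = Φ((b+µν)/(σ√ν)) − e^{−2µb/σ²} Φ((−b+µν)/(σ√ν)), of which the erf expression above is a convenient estimate. A quick Monte Carlo cross-check of the closed form, with hypothetical parameter values:

```python
import numpy as np
from math import erf, exp, sqrt

def Phi(x):
    # standard normal cdf
    return 0.5*(1.0 + erf(x/sqrt(2.0)))

def survival_prob(mu, sigma, b, nu):
    """P( inf_{0<=s<=nu} (mu*s + sigma*W_s) >= -b ), reflection principle."""
    s = sigma*sqrt(nu)
    return Phi((b + mu*nu)/s) - exp(-2.0*mu*b/sigma**2)*Phi((-b + mu*nu)/s)

# Monte Carlo cross-check on a fine Euler grid (hypothetical parameters)
rng = np.random.default_rng(0)
mu, sigma, b, nu, n_paths, n_steps = 0.5, 1.0, 1.0, 1.0, 20000, 2000
dt = nu/n_steps
x = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    x += mu*dt + sigma*sqrt(dt)*rng.standard_normal(n_paths)
    alive &= (x >= -b)
mc = alive.mean()
```

The discrete monitoring slightly overestimates survival, so the comparison is only up to a modest tolerance.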
We now consider a barrier strategy with barrier √3 g(ν), for 0 ≤ ν ≤ ν_l, and denote it by C^{√3}. We
get for the target functional, using integration by parts as before,

J(t_l, √3 g(ν_l), C^{√3}) = (µ/β)( e^{−βt_l} − E[e^{−βτ_l}] ) + e^{−βt_l} √3 g(ν_l) − E[ β ∫_{t_l}^{τ_l} X_s e^{−βs} ds ] + γℙ(τ_l = T).   (115)

Considering the first term, we use the estimate −E[e^{−βτ_l}] ≥ −e^{−βT}ℙ(τ_l = T) − e^{−βt_l}ℙ(τ_l < T) and
arrive at

(µ/β)( e^{−βt_l} − E[e^{−βτ_l}] ) ≥ e^{−βT}( µν_l + O(ν_l²) − Cν_l ℙ(τ_l < T) ),   (116)

for some generic positive constant C. We now use a result from the Appendix, namely Lemma 9.2,
which provides the upper estimate ℙ(τ_l < T) ≤ Cν_l^{3/2} √(−ln ν_l), for l large enough, say l ≥ L. All
together, and observing that X_s ≤ √3 g(ν) holds for X_s in the last integral of (115), we conclude
that

J(t_l, √3 g(ν_l), C^{√3}) ≥ e^{−βT} µν_l − Cν_l^{3/2} √(−ln ν_l) + e^{−βt_l} √3 g(ν_l) + γ,   (117)

for all l ≥ L. Since the target functional should be less than or equal to the value function, this gives,
using (114),

0 ≤ 2ϵ_k − Cν_l^{δ₃},

for all l ≥ L̄, and for all k ≥ K(l). Letting k tend to infinity, this is an obvious contradiction, proving
Claim 2, and therefore also Claim 1.
One checks that ||h̃^{(k)′′′}(x)||_{L¹(−ϵ_k,ϵ_k)} is uniformly bounded, and the fact that B̃^{(k)}(t) is uniformly
bounded above implies that ||ṽ_{xxx+}^{(k)}||_{L¹(−b^{(k)}(t),b^{(k)}(t))} is uniformly bounded in t ∈ [0, T] and k. Hence,
in order to show
Claim 3. ||ṽ_{xxx}^{(k)}||_{L¹(−b^{(k)}(t),b^{(k)}(t))} is uniformly bounded in t ∈ [0, T] and k,
it suffices to show that ∫_t^T ( (d/ds) ∫_{−b^{(k)}(s)}^{b^{(k)}(s)} ṽ_{xxx}^{(k)}(s, x) dx ) ds is uniformly bounded in t ∈ [0, T] and k.
We calculate

(d/ds) ∫_{−b^{(k)}(s)}^{b^{(k)}(s)} ṽ_{xxx}^{(k)}(s, x) dx = ∫_{−b^{(k)}(s)}^{b^{(k)}(s)} ṽ_{xxxs}^{(k)}(s, x) dx + 2b^{(k)′}(s) ṽ_{xxx}^{(k)}(s, b^{(k)}(s)) =
−σ² ṽ_{xxxx}^{(k)}(s, b^{(k)}(s)) + 2b^{(k)′}(s) ṽ_{xxx}^{(k)}(s, b^{(k)}(s)) = −σ²( D^{(k)}(s) − F_{xxxx}(s, b^{(k)}(s)) ) +
2b^{(k)′}(s)( B^{(k)}(s) − F_{xxx}(s, b^{(k)}(s)) ) = ( −σ² D^{(k)}(s) + 2b^{(k)′}(s) B^{(k)}(s) ) +
( σ² F_{xxxx}(s, b^{(k)}(s)) − 2b^{(k)′}(s) F_{xxx}(s, b^{(k)}(s)) ) ≡ J₁(s) + J₂(s).   (118)
Note that one can transform the homogeneous BVP for ṽ_{xxx}^{(k)} above with inhomogeneous boundary
data to an inhomogeneous equation with homogeneous boundary data, s.t. the inhomogeneity is an
L²(G₁^{(k)}) function (actually it is bounded for fixed k). Hence, one can use standard L²-theory (see
e.g. [18]) to get ṽ_{xxxxx}^{(k)} ∈ L²(G₁^{(k)}), which makes the formal calculations above rigorous. In addition,
we have used the boundary data for ṽ_{xxx}^{(k)} and the validity of the backwards heat equation.
Using the backwards heat equation for F_{xx}, one calculates (d/ds)( 2F_{xx}(s, b^{(k)}(s)) ) = −J₂(s). Hence,
we get

∫_t^T J₂(s) ds = 2F_{xx}(t, b^{(k)}(t)) − 2F_{xx}(T, b^{(k)}(T)) ≤ C,   (119)

for a generic constant C, because of Claim 2 and the explicit form of F_{xx}(t, x) = C x/(T−t)^{3/2} e^{−x²/(2σ²(T−t))}.
Concerning J₁, one calculates

−σ² D^{(k)}(s) + 2b^{(k)′}(s) B^{(k)}(s) = e^{−λs+µ̂b^{(k)}(s)} ( −C − C(b^{(k)}(s) + d^{(k)}(s)) + Cb^{(k)′}(s) +
Cb^{(k)′}(s)b^{(k)}(s) + Cb^{(k)′}(s)d^{(k)}(s) ) ≡ J₃(s) + J₄(s) + J₅(s) + J₆(s) + J₇(s),   (120)

with some generic positive constants C. We claim that

∫_t^T J_i(s) ds ≤ C   (121)

holds, for some positive constant C, for i = 3, 4, ..., 7, uniformly in t ∈ [0, T] and k. We show this
only for J₆, because the other cases work analogously, or are even simpler. One calculates

e^{−λs+µ̂b^{(k)}(s)} b^{(k)′}(s) b^{(k)}(s) = C (d/ds)( e^{−λs+µ̂b^{(k)}(s)} b^{(k)}(s) ) + C e^{−λs+µ̂b^{(k)}(s)} b^{(k)}(s) − C e^{−λs+µ̂b^{(k)}(s)} b^{(k)′}(s).

From this, (121) for i = 6 is immediate. (For the estimate of the last term on the r.h.s. one can use
the result for J₅, which can obviously be proved without using the case i = 6.) (118)-(121) provide
Claim 3.
Let now t₀ be arbitrarily given, with 0 ≤ t₀ < T. Set t₁ ≡ t₀ + (T − t₀)/2 and consider the BVP for ṽ_{xxx}^{(k)} on
the time interval [0, t₁]. By Claim 2 one obviously has B̃^{(k)}(t) ≥ −C(t₀) on [0, t₁], with the constant
independent of k. Moreover, Claim 3 provides ||ṽ_{xxx}^{(k)}(t₁, x)||_{L¹(−b^{(k)}(t₁),b^{(k)}(t₁))} ≤ C. Both together
give ṽ_{xxx}^{(k)}(t, x) ≥ −C(t₀), for t ∈ [0, t₀] and x ∈ [−b^{(k)}(t), b^{(k)}(t)], with the constant independent of
k, which concludes our proof (see (108)). ⊓⊔
We conclude our a-priori estimates with a uniform lower bound on our sequence of barrier functions.
Lemma 7.4 Let t < T. Then we have

inf_{s∈[0,t]} b^{(k)}(s) > c(T − t),   (122)

for all k, where c is a positive function, which can be chosen independently of k.
The proof of this is identical to the proof of [14], Lemma 3.4. The next result provides our final
barrier function, which in turn provides the optimal strategy of our original problem.
Lemma 7.5 Let ϵ_k be the sequence of Lemma 7.2; then the limit lim_{k→∞} b^{(k)}(t) exists pointwise for
all t ∈ [0, T]. Let us denote this limit by b(t). Then we have: b(t) is left-continuous with existing
limits from the right (caglad). Moreover, the (at most denumerable) jumps are downwards, and at
t = T, b(t) is continuous with b(T) = 0.
Proof. The pointwise existence of the limit is obvious, since our sequence is monotone decreasing
and bounded below. Moreover we get the upper semicontinuity of b(t).
Let now t0 ∈ [0, T [ be arbitrarily chosen, and consider the interval t ∈ [0, t0 ]. Define gk (t) ≡
b(k) (t) + U (T − t), with U stemming from Lemma 7.3. Hence, the gk are monotone decreasing for
fixed k and fulfill gk+1 (t) ≤ gk (t). Clearly, the limiting function g(t) ≡ limk→∞ gk (t) has at most
denumerable jumps downwards and is caglad (because of its upper semicontinuity).
Obviously, these properties carry over to the function b(t), and the only thing we still have to prove
is the continuity of b(t) at t = T . But upper semicontinuity gives lim supt→T b(t) ≤ b(T ) = 0, and
we trivially have lim inf t→T b(t) ≥ 0. So we get limt→T b(t) = 0 = b(T ), finishing our proof.
⊓⊔
Our next proposition shows that the barrier b(t), constructed above, indeed provides the optimal
strategy for our original problem (P₁), eq. (3).
Let J(t, x, C) be the target functional of problem (P1 ), and V (t, x) its value function. Similarly let
J (k) (t, x, C) be the target functional with our approximating terminal values H (k) , and V (k) (t, x) its
value function.
Proposition 7.1 Let b(t) be the barrier function constructed in Lemma 7.5 above, and C^{(b)} the
corresponding barrier strategy. Then we have V(t, x) = J(t, x, C^{(b)}), i.e. the barrier strategy with barrier
b(t) is optimal for our problem (P₁).
Proof. Let C^{(k)} be the optimizers of our approximating problems, i.e.
C_t^{(k)} ≡ sup_{0≤s≤t} [x + µs + σW_s − b^{(k)}(s)]₊, and we also have C_t^{(b)} ≡ sup_{0≤s≤t} [x + µs + σW_s − b(s)]₊.
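In discrete time, this Skorokhod-type formula for the barrier strategy is a running maximum, which makes it straightforward to simulate. A minimal sketch; the barrier function below is a hypothetical placeholder, and, unlike in the paper, the process is not stopped at ruin:

```python
import numpy as np

def barrier_strategy(x, mu, sigma, b, T, n, rng):
    """On a time grid, C_t = sup_{s<=t} [x + mu*s + sigma*W_s - b(s)]_+ and
    the controlled state X_t = x + mu*t + sigma*W_t - C_t (ruin not stopped)."""
    t = np.linspace(0.0, T, n + 1)
    w = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n))))*np.sqrt(T/n)
    free = x + mu*t + sigma*w                        # uncontrolled endowment
    cons = np.maximum.accumulate(np.maximum(free - b(t), 0.0))
    return t, free - cons, cons

rng = np.random.default_rng(1)
b = lambda s: 0.5*(1.0 - s/10.0)                     # hypothetical barrier
t, X, C = barrier_strategy(1.0, 0.04, np.sqrt(0.02), b, 10.0, 10000, rng)
```

By construction C is nondecreasing and X(t) ≤ b(t) at every grid point; C increases only at times when the free path attains a new maximum above the barrier.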
We assert now
Claim 1: J(t, x, C^{(k)}) → J(t, x, C^{(b)}), for k → ∞.
Integration by parts, as in the proof of Lemma 7.3, gives the following representation of our target
functional:

J(t, x, C^{(k)}) = e^{−βt} x + (µ/β)( e^{−βt} − E[e^{−βτ^{(k)}}] ) − E[ β ∫_t^{τ^{(k)}} e^{−βs} X_s^{(k)} ds ] + γℙ(τ^{(k)} = T),   (123)

where τ^{(k)} and X^{(k)} correspond to the strategy C^{(k)}. An analogous formula holds (with τ and X) for
the strategy C^{(b)}. Denote the last three terms of the r.h.s. of (123) by K₁^{(k)}, K₂^{(k)}, K₃^{(k)}, resp. by
K₁, K₂, K₃. We have to show K_i^{(k)} → K_i, for k → ∞ and i = 1, 2, 3. We start with i = 3.
Let A^{(k)} ≡ {ω | τ^{(k)} < T} and A ≡ {ω | τ < T}. Clearly one has A^{(1)} ⊂ A^{(2)} ⊂ ... ⊂ A. Now, ruin can
occur only continuously by Brownian motion, since possible jumps of X_t^{(k)} resp. X_t are generated
by the barrier functions, and these are strictly positive for t < T. Let ω ∈ A, i.e. X_τ(ω) = 0 with
τ < T. Since the X^{(k)} inherit, via the C^{(k)}, the pointwise monotone convergence for fixed ω, we
conclude that there exists a K(ω), s.t. τ^{(k)}(ω) < T for all k ≥ K, or ω ∈ A^{(k)} for all k ≥ K. Hence,

lim_{k→∞} ℙ(A^{(k)}) = ℙ(A).   (124)
One also has lim_{k→∞} τ^{(k)} = τ a.s., which gives by dominated convergence

lim_{k→∞} E[e^{−βτ^{(k)}}] = E[e^{−βτ}].   (125)

Using the boundedness of the X^{(k)} one gets

lim_{k→∞} E[ β ∫_t^{τ^{(k)}} e^{−βs} X_s^{(k)} ds ] = E[ β ∫_t^{τ} e^{−βs} X_s ds ].   (126)

Hence, our Claim 1 is proved.
By Theorem 6.1 we have J^{(k)}(t, x, C^{(k)}) = V^{(k)}(t, x). Moreover, since H^{(k)}(x) converges uniformly
to H(x), for k → ∞, we have

lim_{k→∞} V^{(k)}(t, x) = V(t, x).   (127)

Since we have J^{(k)}(t, x, C) → J(t, x, C), uniformly in C, we conclude, using Claim 1,

lim_{k→∞} V^{(k)}(t, x) = J(t, x, C^{(b)}),   (128)

which, together with (127), implies V(t, x) = J(t, x, C^{(b)}). ⊓⊔
Now that we know that our barrier function b(t) gives the optimal strategy, we can prove that it is
actually a continuous function.
Proposition 7.2 The optimal barrier function b(t) is continuous for 0 ≤ t ≤ T.
Proof. We know already that b(t) is caglad (with jumps downwards) and denote the corresponding
barrier strategy by C^{(b)}. We now argue by contradiction. So assume that there exists a jump at t = t₀,
i.e. 0 < K ≡ b(t₀) − b(t₀+). Let δ > 0 be arbitrarily small (w.l.o.g. we can, because of left-continuity,
assume that there is no further jump in [t₀ − δ, t₀]), and consider the point (t, x) ≡ (t₀ − δ, b(t₀ − δ)).
Moreover, let C̃ be the following strategy: At time t₀ − δ consume K/2 and then use the barrier
strategy with barrier function b(t) − K/2, for t ∈ [t₀ − δ, t₀], and afterwards use C^{(b)} again. We claim
that

J(t, x, C̃) > J(t, x, C^{(b)}),   (129)

for δ small enough.
Before this, we prove
Claim 1: ℙ( A_δ ≡ {X^{C̃}(t₀) > b(t₀+)} ) = 1 − o(δⁿ), for arbitrary n ∈ ℕ.
Let Ĉ be the barrier strategy with barrier function b̂ ≡ b(t₀+) + K/4 on [t₀ − δ, t₀]. W.l.o.g. (b is
left-continuous!) we assume b̂(t) < b̃(t) on [t₀ − δ, t₀], hence,

X^{Ĉ}(t₀) < X^{C̃}(t₀).   (130)

Now, the probability of a reflected Brownian motion to reach a maximum larger than K/4 in a time
interval of length δ is o(δⁿ), for arbitrary n ∈ ℕ. Indeed, if we denote by τ the stopping time to reach
the level K/4, one has by [27], eq. (7), E[e^{−βτ}] ≤ const./e^{C√β} = o(β^{−n}), for some positive constant C,
and for β → ∞. Standard theorems about Laplace transforms (see e.g. [8]) imply ℙ(τ ≤ δ) = o(δⁿ).
Hence, by (130), Claim 1 is proved.
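The o(δⁿ) rate can also be seen from an elementary Gaussian tail bound: for driftless reflected Brownian motion, which equals |W| in law, ℙ(sup_{[0,δ]} |W_s| ≥ a) ≤ 4e^{−a²/(2δ)}, and e^{−a²/(2δ)} vanishes faster than any power of δ. A numerical illustration, with the level a = K/4 for a hypothetical jump size K:

```python
from math import exp

a = 0.25  # the level K/4 for a hypothetical jump size K = 1

def tail_bound(delta):
    # P( sup_{0<=s<=delta} |W_s| >= a ) <= 4 exp(-a^2/(2 delta))
    # (reflection principle plus the Gaussian Chernoff bound)
    return 4.0*exp(-a*a/(2.0*delta))

# the bound decays faster than any fixed power of delta, here delta^5:
ratios = [tail_bound(d)/d**5 for d in (1e-2, 1e-3, 1e-4)]
```

With drift the same superpolynomial decay holds, with modified constants.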
For the new strategy C̃ we get the estimate

E[ ∫_{[t₀−δ,T]} e^{−βs} dC̃_s 1_{A_δ} ] = e^{−β(t₀−δ)} (K/2) ℙ(A_δ) + E[ ∫_{(t₀−δ,T]} e^{−βs} dC̃_s 1_{A_δ} ] =
e^{−β(t₀−δ)} (K/2) ℙ(A_δ) + E[ ∫_{(t₀−δ,t₀)} e^{−βs} dC̃_s 1_{A_δ} ] + E[ ∫_{[t₀,T]} e^{−βs} dC̃_s 1_{A_δ} ] =
e^{−β(t₀−δ)} (K/2) ℙ(A_δ) + E[ ∫_{[t₀−δ,t₀)} e^{−βs} dC_s^{(b)} 1_{A_δ} ] − e^{−βt₀} (K/2) ℙ(A_δ) + E[ ∫_{[t₀,T]} e^{−βs} dC_s^{(b)} 1_{A_δ} ] ≥
const. δ + E[ ∫_{[t₀−δ,T]} e^{−βs} dC_s^{(b)} 1_{A_δ} ] = const. δ + E[ ∫_{[t₀−δ,T]} e^{−βs} dC_s^{(b)} ] − o(δⁿ),   (131)

for some positive constant and arbitrary n. Here we have used in the last but one equality the fact
that b(t) has no jump at t₀ − δ, and for the last equality Claim 1. Clearly, E[ ∫_{[t₀−δ,T]} e^{−βs} dC̃_s 1_{A_δ} ] =
E[ ∫_{[t₀−δ,T]} e^{−βs} dC̃_s ] − o(δⁿ) holds as well, implying

E[ ∫_{[t₀−δ,T]} e^{−βs} dC̃_s ] ≥ E[ ∫_{[t₀−δ,T]} e^{−βs} dC_s^{(b)} ] + const. δ,

proving (129) and finishing the proof. ⊓⊔
Combining Theorem 6.1, Proposition 7.1 and Proposition 7.2 provides our main result, Theorem 2.1.
8   A numerical example and concluding remarks
Note that from the motivation of our problem in Actuarial mathematics it is natural to assume a
positive drift µ, since this corresponds to a positive “security loading” there. But, of course, it is
interesting to pose the same question for nonpositive µ as well. In the case of the unconstrained
and unpenalized problem of [13], we could show there (see section 7) that it is optimal to consume
everything immediately. For our penalized problem we conjecture that the optimal boundary will
hit the level x = 0 at some time t0 , since the positivity Lemma 6.10 uses µ > 0 in a crucial way.
This would mean that it is optimal to consume everything immediately only if our current time t is
smaller than a certain value t₀. We leave this for future research.
We conclude the paper with a numerical example, illustrating the algorithm of section 2 to solve the
basic problem (P0 ).
Example Let T = 10, µ = 0.04, β = 0.02, σ 2 = 0.02, (t, x) = (0, 1) and the upper bound for the
ruin probability κ = 0.05.
We basically used an Euler scheme for getting a solution of the integral equation (64) (with ϵ chosen
very small, i.e. ϵ = 0.001, and 100 nodes for the discretization), which provides an approximation of
the solution of our penalized problems. Then we used Monte Carlo simulation evaluating the state
process Xt at the nodes of our Euler scheme. For calculating the approximate ruin probability for
the different values of γ̄, we simulated 106 paths. The results are as follows.
− γ̄ = 0: ℙ(ruin) ≈ 15.53%
− γ̄ = 0.25: ℙ(ruin) ≈ 7.6%
− γ̄ = 0.5: ℙ(ruin) ≈ 4.64%
− γ̄ = 0.48: ℙ(ruin) ≈ 4.8%
− γ̄ = 0.458: ℙ(ruin) ≈ 4.97%
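For readers who want to reproduce the Monte Carlo step, the following sketch simulates the controlled endowment under a barrier strategy and estimates the ruin probability. The constant barrier used here is a hypothetical placeholder, not the solution of the integral equation (64), so the resulting number is not comparable to the table above:

```python
import numpy as np

# Monte Carlo step of the algorithm: simulate the state X = free - cons under
# a barrier strategy and count ruined paths.  Parameters follow the example
# (T = 10, mu = 0.04, sigma^2 = 0.02, x = 1); b_level is a placeholder.
T, mu, sigma, x0, b_level = 10.0, 0.04, np.sqrt(0.02), 1.0, 0.6
rng = np.random.default_rng(3)
n_paths, n_steps = 20000, 2000
dt = T/n_steps
free = np.full(n_paths, x0)                 # x + mu*t + sigma*W_t
cons = np.maximum(free - b_level, 0.0)      # lump-sum consumption at t = 0
ruined = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    free = free + mu*dt + sigma*np.sqrt(dt)*rng.standard_normal(n_paths)
    cons = np.maximum(cons, free - b_level)  # reflection at the barrier
    ruined |= (free - cons <= 0.0)           # state X = free - cons hits 0
p_ruin = ruined.mean()
```

Replacing b_level by the barrier computed from (64), and refining the grid near maturity where the true barrier tends to zero, gives estimates of the kind reported above.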
As approximation for the value function of (P₀) we get V(0, 1) ≈ 0.20879. Finally, Figure 1 shows the
barriers for the indicated values of γ̄. Let us remark that the numerical solution is by no means at a
final stage. There is still space for improvement concerning accuracy and efficiency. E.g., note that
the barrier in the unconstrained case is almost smooth, whereas for γ̄ > 0 there is some oscillating
effect near maturity. Two possible reasons are: On the one hand, we have used in the first interval
of the Euler scheme the known asymptotics near t = T (see [14] for the unconstrained case and [15]
for the constrained case) to improve the performance. Now, for γ̄ = 0 we know the first two terms in
the asymptotic expansion, whereas for γ̄ > 0 we have proved only the first and conjectured the order
of the second one so far. This is one possible reason for the better accuracy in the unconstrained
case. The other one is that the function F(t, x) appearing in the integral equation of the constrained
case is numerically critical.
9   Appendix
Proof of Lemma 3.1. In the following proof we will use the notation XsC for the underlying
process, if the starting point is clear from the context.
a) Let us assume first 0 ≤ t < T, 0 < x. With u(t) ≡ max(t, t̂), define

κ ≡ κ₁ ∧ κ₂ ∧ T ≡ inf{ s ≥ u(t) | X_s^{t̂,x̂,0} ≥ X_{s+}^{t,x,C} } ∧ inf{ s ≥ t̂ | X_s^{t̂,x̂,0} = 0 } ∧ T,   (132)

which is a stopping time, since C is adapted and the Brownian filtration is assumed to be right-continuous.
Figure 1: The optimal barrier (solid line, γ̄ = 0.458) vs. γ̄ = 0.25 (dashed-dotted) vs. the optimal
barrier in the unconstrained case (dashed line, γ̄ = 0)
We provide now the definition of the new strategy Ĉ:

Ĉ_s ≡ Ĉ_{t̂},   s ∈ [t̂, κ],
ΔĈ_κ ≡ −( X_{κ+}^{t,x,C} − X_κ^{t̂,x̂,0} ),   if κ₂ ∧ T > κ,
Ĉ_s ≡ C_s,   s > κ, if κ₂ ∧ T > κ,
Ĉ_s ≡ Ĉ_{t̂},   s > κ, if κ₂ ∧ T ≤ κ.   (133)

Hence, after time t̂ we run our endowment without paying out anything, until the first moment
our trajectory is above or equal to the level of the old trajectory. Then we jump down to this old
trajectory and use the given strategy C.
Let us define the following set B^{(t̂,x̂)} ≡ {ω | 0 ≤ −X_{u(t̂)+}^{Ĉ} + X_{u(t̂)+}^{C} ≤ |Δt|^{1/3} + |Δx|} ∩ {ω | τ = T}. By
construction we have

ℙ( B^{(t̂,x̂)} ) → ℙ(τ = T),   for (Δt, Δx) → 0.   (134)
We distinguish 3 cases:
Case 1: ω ∈ B₁ ≡ {ω ∈ B^{(t̂,x̂)} | C_T − C_{u+} > 0}.
Let now |Δt|^{1/3} + |Δx| ≤ Δ(ω). As C is left-continuous, one gets the existence of a δ(ω) > 0, s.t.
C_{T−s} − C_{u+} ≥ (C_T − C_{u+})/2, for 0 ≤ s ≤ δ. Hence, we arrive at

κ(ω) ≤ T − δ(ω),   for Δ ≤ (C_T − C_{u+})/2.   (135)
Furthermore, we know, since for ω ∈ B we have τ(ω) = T, that, for all ϵ > 0,

inf_{u<s≤T−ϵ} { X_{u+}^{C} + µ(s − u) + σ(W_s − W_u) − (C_s − C_{u+}) } > 0.   (136)

Choosing now ϵ as δ(ω)/2 gives

inf_{u<s≤T−δ/2} { X_{u+}^{C} + µ(s − u) + σ(W_s − W_u) − (C_s − C_{u+}) } ≡ ν(ω) > 0,   (137)

which implies, for Δ < ν,

inf_{u<s≤T−δ/2} { X_{u+}^{Ĉ} + µ(s − u) + σ(W_s − W_u) − (C_s − C_{u+}) } > 0.   (138)

Hence,

inf_{u<s≤T−δ/2} { X_{u+}^{Ĉ} + µ(s − u) + σ(W_s − W_u) } > 0.   (139)

Therefore, by (135), X_{κ+}^{Ĉ} > 0 for Δ small enough. Finally, we arrive at

τ̂ = τ = T,   (140)

for (Δt, Δx) small enough on the set B₁.
Case 2: ω ∈ B₂ ≡ {ω ∈ B | C_T − C_{u+} = 0 and X_{u+}^{C} + µ(T − u) + σ(W_T − W_u) > 0}.
In this case we choose Ĉ_s = Ĉ_{u+} for s > u and get X_T^{Ĉ} ≥ X_T^{C} − Δ, so that we have, for Δ small
enough, τ̂ = T for all ω in B₂.
Case 3: ω ∈ B₃ ≡ {ω ∈ B | C_T − C_{u+} = 0 and X_{u+}^{C} + µ(T − u) + σ(W_T − W_u) = 0}.
We first note that ℙ(B₃) = 0 and hence, by (134), one gets

ℙ(B₁ ∪ B₂) → ℙ(τ = T),   for (Δt, Δx) → 0.   (141)

So, by the cases 1 and 2, we conclude that

lim inf_{(Δt,Δx)→0} ℙ(τ̂ = T) ≥ ℙ(τ = T).   (142)

Finally, since we have by construction ℙ(τ̂ = T) ≤ ℙ(τ = T), we get

lim_{(Δt,Δx)→0} ℙ(τ̂ = T) = ℙ(τ = T).   (143)
The cases (t = T, x > 0) and (0 ≤ t < T, x = 0) are trivial, hence assertion a) is proved.
Let us finally remark that for (t = T, x = 0) assertion a) is wrong, since one has ℙ({τ = T}) = 1,
but if we approach the point (T, 0) flatly enough, say linearly, e.g. as x = T − t̂, then ℙ({τ̂ < T}) is
clearly not an o(1) for t̂ → T.
Proof of assertion b). We first consider the case 0 < t < T, 0 < x.
Let G ≡ {ω | (X_{u+}^{C} − X_{u+}^{Ĉ}) ≤ |Δt|^{1/3} + |Δx|}, where Ĉ, u stem from part a). Hence,

ℙ(G^c) → 0,   (144)

for (Δt, Δx) → 0, uniformly in (t, x). We split up the probability space Ω and arrive at

||(τ̂ − τ)||_{L²} ≤ ||(τ̂ − τ)1_G||_{L²} + ||(τ̂ − τ)1_{G^c}||_{L²}.   (145)

Now, the second term tends to zero by (144), whereas for the first term it is sufficient that

||ν₁ − ν₂||_{L²} → 0,   (146)

for Δx ↘ 0, uniformly in x, where we use

ν₁ ≡ inf{t > 0 | x + µt + σW_t = 0} ∧ T,   ν₂ ≡ inf{t > 0 | x − Δx + µt + σW_t = 0} ∧ T.   (147)

This, on the other hand, is implied by Lemma 9.1, which gives assertion b) for (0 < t < T, 0 < x).
The case t = T is trivial, and for (t < T, x = 0) one can again use Lemma 9.1. ⊓⊔
37
Lemma 9.1 Let X_t ≡ ∆x + µt + σW_t, with small positive ∆x, and let further τ ≡ inf{t > 0 | X_t = 0} ∧ T. Then we have ∥τ∥_{L²(IP)} ≤ const · ∆x, where the constant depends only on the model parameters.
Proof. By [17], (3.5.12), τ has the defective density

f(t) = (∆x/√(2πt³)) e^{−(∆x+µt)²/(2t)} 1_{[0,T]}(t)   (148)

and point mass IP(τ = T) = ∫_T^∞ (∆x/√(2πs³)) e^{−(∆x+µs)²/(2s)} ds (note that the defective mass of paths that never reach zero also contributes to IP(τ = T); it is itself of order ∆x and hence covered by the bound below). We get

IE[τ²] = ∫_0^T (∆x/√(2π)) t^{1/2} e^{−(∆x+µt)²/(2t)} dt + T² ∫_T^∞ (∆x/√(2πt³)) e^{−(∆x+µt)²/(2t)} dt.   (149)

In the following D denotes a generic constant, which may vary from place to place, and which depends only on the model parameters. So we get as an upper estimate for the r.h.s. above

D ∫_0^T ∆x t^{1/2} e^{−(∆x+µt)²/(2t)} dt + D ∫_T^∞ (∆x/√(t³)) e^{−(∆x+µt)²/(2t)} dt
≤ D∆x ( (∆x)³ ∫_{(∆x)²/(2T)}^∞ w^{−5/2} e^{−w} dw + (1/∆x) ∫_0^{(∆x)²/(2T)} w^{−1/2} e^{−w} dw ) ≤ D∆x,   (150)

which concludes the proof. ⊓⊔
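The ∆x-scaling in Lemma 9.1 is easy to probe by simulation. Below is a quick Monte Carlo sanity check (not part of the proof; the parameter values, the Euler grid, and the Brownian-bridge crossing correction are all illustrative choices): it estimates IE[τ²] for two values of ∆x and checks that the estimate roughly doubles when ∆x does, in line with the linear bound in (150).

```python
import math
import random

def mean_tau_sq(dx, mu=1.0, sigma=1.0, T=1.0, n_paths=8000, n_steps=250, seed=1):
    """Monte Carlo estimate of E[tau^2], tau = inf{t : dx + mu*t + sigma*W_t = 0} ^ T."""
    rng = random.Random(seed)
    dt = T / n_steps
    sdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = dx
        tau = T                      # paths that never hit keep tau = T
        for k in range(n_steps):
            x_new = x + mu * dt + sigma * sdt * rng.gauss(0.0, 1.0)
            if x_new <= 0.0:         # crossing visible on the grid
                tau = (k + 0.5) * dt
                break
            # Brownian-bridge correction: the path may have dipped below 0
            # between grid points although both endpoints are positive
            if rng.random() < math.exp(-2.0 * x * x_new / (sigma * sigma * dt)):
                tau = (k + 0.5) * dt
                break
            x = x_new
        total += tau * tau
    return total / n_paths

m1 = mean_tau_sq(0.02)
m2 = mean_tau_sq(0.04)
```

The bridge correction compensates for barrier crossings that a discretely sampled path would otherwise miss; without it the hitting probability is systematically underestimated. With these parameters one finds m2/m1 ≈ 2.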
Proof of Lemma 4.1. We define our partition as follows:

Q_{i,j} ≡ {(t, x) | (i−1)T/M < t < iT/M, j/M < x < (j+1)/M} ∪ {(t, x) | t = iT/M, j/M ≤ x < (j+1)/M} ∪ {(t, x) | (i−1)T/M < t ≤ iT/M, x = j/M},   for i = 2, 3, ..., M, j ∈ IN₀,

and Q_{1,j} ≡ [0, T/M] × [j/M, (j+1)/M[, for j ∈ IN₀.
Let now C^{(i,j)} be an admissible strategy for X^{t_i,x_j}, with (t_i, x_j) ≡ ((i−1)T/M, j/M), (i, j) ∈ I. By our definition of the half-open rectangles Q_{i,j}, this strategy is also admissible for all (t, x) ∈ Q_{i,j}, for all (i, j) ∈ I. Similarly as in the proof of Proposition 3.1 (cf. eq. (19) there), one finds that, for all ϵ > 0, there exists an M > 0, large enough, s.t.

|J_n(t, x, C^{(i,j)}) − e^{−βt}x − (J_n(t_i, x_j, C^{(i,j)}) − e^{−βt_i}x_j)| < ϵ,   (151)

for all (t, x) ∈ Q_{i,j} and for all (i, j) ∈ I. Since V_n − e^{−βt}x is, by Proposition 3.1, uniformly continuous, we also have

|V_n(t, x) − e^{−βt}x − (V_n(t_i, x_j) − e^{−βt_i}x_j)| < ϵ,   (152)

for all (t, x) ∈ Q_{i,j} and for all (i, j) ∈ I. We now choose C^{(i,j)} so that one has

J_n(t_i, x_j, C^{(i,j)}) − e^{−βt_i}x_j ≥ V_n(t_i, x_j) − e^{−βt_i}x_j − ϵ.   (153)

By (151), (153) and (152) one concludes that

J_n(t, x, C^{(i,j)}) − e^{−βt}x ≥ V_n(t, x) − e^{−βt}x − 3ϵ

holds for all (t, x) ∈ Q_{i,j} and for all (i, j) ∈ I. ⊓⊔
Proof of Lemma 4.3. Instead of the estimate (151) in the proof of Lemma 4.1, we claim that we now have

J(t, x, C^{(i,j)}) − e^{−βt}x − J(t_i, x_j, C^{(i,j)}) + e^{−βt_i}x_j > −ϵ,   (154)

for all (t, x) ∈ Q_{i,j}, for all (i, j) ∈ I and M large enough. Indeed, to get (151) we used (19), and the point where the function f_n entered this estimate was via an estimate of |IE[f_n(τ) − f_n(τ̂)]|. This is now replaced by IE[f(τ̂) − f(τ)] ≥ 0, by the definition of our partition and the points (t_i, x_j). Moreover, using the uniform upper semicontinuity of W (cf. Proposition 3.2a), we can replace the estimate (152) by

V_n(t_i, x_j) − e^{−βt_i}x_j − V_n(t, x) + e^{−βt}x > −ϵ,   (155)

for all (t, x) ∈ Q_{i,j}, for all (i, j) ∈ I and M large enough. Finally, we can take (153) without alterations, and this, together with (154) and (155), gives our assertion. ⊓⊔
Proof of Lemma 6.1. One can show, in a manner analogous to [13], Lemma 3.1, that the following construction of H(x) is successful. H(x) ≡ H̄(x) + H̃(x), and we first give the definitions of H̄(x) and H̃(x) for x ∈ [0, ϵ]:

H̄(x) ≡ γ̄ + H₁x − (µ̂ + µ̂²γ̄/2) H₁x² + α ( x⁷/840 − ϵx⁶/240 + ϵ²x⁵/240 ),

where α depends on the model parameters and has the asymptotic (for ϵ → 0) behavior α ∼ 120(2µ̂ + µ̂²γ̄)/ϵ⁵. For H̃(x) one can take

H̃(x) = x³(x − ϵ)³ ( α₀ + α₁x + α₂x² + α₃x³ ),

with

α₀ = −a/(6ϵ³),   α₁ = −a/(2ϵ⁴) − c/(24ϵ³),
α₂ = 3a/(2ϵ⁵) + b/ϵ⁵ + c/(12ϵ⁴) − d/(24ϵ⁴),   α₃ = −5a/(6ϵ⁶) − 5b/(6ϵ⁶) − c/(24ϵ⁵) + d/(24ϵ⁵),

and

a ≡ 3µ̂²H₁ + H₃,   b ≡ 2β̂,   c ≡ −4µ̂H₃ − 4µ̂³H₁ + 5µ̂⁴γ̄,   d ≡ (4β̂/σ²)H₄ − 4µ̂β̂.

For negative x we use the definition H(−x) = −H(x)e^{2µ̂x}, x > 0, which stems from the fact that we assume that the transformed function h(x) = H(x)e^{µ̂x−λT}, with λ ≡ β + µ²/(2σ²), is an odd function. ⊓⊔
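The oddness claim can be checked in one line from the two definitions just given: for x > 0,

```latex
h(-x) = H(-x)\,e^{-\hat{\mu}x-\lambda T}
      = -H(x)\,e^{2\hat{\mu}x}\,e^{-\hat{\mu}x-\lambda T}
      = -H(x)\,e^{\hat{\mu}x-\lambda T}
      = -h(x),
```

so the reflection rule H(−x) = −H(x)e^{2µ̂x} is exactly the one forced by requiring h to be odd.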
Lemma 9.2 Let b(t) ≡ σ√(−3(T − t) ln(T − t)), t > T − 1, and let X_s, s ∈ [t, T], be the process x + µ(s − t) + σ(W_s − W_t) reflected on the barrier b(t). Then we have:

IP( inf_{s∈[t,T]} X_s ≥ 0 | X_t = b(t) ) ≤ K(t) ∼ 1 − C (T − t)^{3/2} / √(−ln(T − t)),   (156)

IP( inf_{s∈[t,T]} X_s ≥ 0 | X_t = b(t) ) ≥ L(t) ∼ 1 − C (T − t)^{3/2} √(−ln(T − t)),   (157)

for some positive constant C, and for t → T.
Proof. In the following we use the abbreviation ν = T − t for the time to maturity T, and we use IP_t(·) as a shortcut for IP(· | X_t = b(t)). We start with the proof of (156).
One simply estimates the survival probability of our reflected process X_t by the survival probability of an agent who consumes nothing, which is evidently larger. I.e.,

IP_t( inf_{s∈[t,T]} X_s ≥ 0 ) ≤ IP( inf_{s∈[0,ν]} {σ√(−3ν ln ν) + µs + σW_s} ≥ 0 )
≤ 1 − IP( sup_{s∈[0,ν]} W_s ≥ √(−3ν ln ν) + (µ/σ)ν )
= 1 − √(2/π) ∫_{(µ/σ)√ν + √(−3 ln ν)}^∞ e^{−z²/2} dz ∼ 1 − √(2/π) ν^{3/2}/√(−3 ln ν),   (158)

for ν → 0. Here we have used in the last equality a well-known formula for the distribution of the supremum of Brownian motion (see e.g. [17], eq. (2.8.4)), and for the last asymptotic equivalence the asymptotic behavior of the standard normal distribution.
In order to prove (157), we first note that setting µ = 0 makes the probability in question certainly smaller, so we keep to this case in the proof. Let X̂ be our process X_t, but not stopped at the ruin time τ. So we have X̂_T ∈ (−∞, 0]. Define D ≡ {τ < T} and consider, on the set D, the stopping time τ₁ ≡ inf{t > τ | X̂_t = ±b(t)}. Set D₁ ≡ {X̂_{τ₁} = b(τ₁)} and D₂ ≡ {X̂_{τ₁} = −b(τ₁)}, which implies by symmetry IP_t(D₁|D) = IP_t(D₂|D) = 1/2. Since the process X̂_t is not reflected on the lower branch −b(t), the probability IP_t(X̂_T < 0 | X̂_{τ₁} = −b(τ₁)) is certainly very high by the proof of (156), and in any case we have

IP_t(X̂_T < 0 | X̂_{τ₁} = −b(τ₁)) > 1/2,

for ν small enough, which implies

IP_t(X̂_T < 0) ≥ IP_t(X̂_T < 0 | D₂) IP_t(D₂) > IP_t(D)/4.   (159)
The process X̂_t is considered in [6], and if one has IP_t(X̂_T = 0) > 0, the authors call this phenomenon a heat atom. Moreover, they show that this occurs iff the reflecting boundary is a so-called upper function of Brownian motion. In the second part of the proof of their Theorem 2.2 the authors show, in our notation, that IP_t(X̂_T = b(T)) ≥ IP(A) > 0, where the set A is defined as

A ≡ {b(t) + σ(W_T − W_t) > b(T) = 0} ∩ {σW_{T−s} − σW_T < b(T − s) − b(T), ∀s ∈ (0, T − t)} ≡ A₁ ∩ A₂.

Now we obviously have IP(A₁) ∼ 1 − C ν^{3/2}/√(−ln ν), for ν → 0, and, by time reversal and the classical considerations concerning the Kolmogorov test as in [16], p. 34,

IP(A₂) ≥ 1 − ∫_0^ν (b(s)/(σ√(2πs³))) e^{−b(s)²/(2σ²s)} ds ∼ 1 − C(−ln ν)ν^{3/2}.

Hence, IP(A) ≥ L₁(ν) ∼ 1 − C(−ln ν)ν^{3/2}, and finally, using (159),

IP_t( inf_{s∈[t,T]} X_s ≥ 0 ) = 1 − IP_t(D) ≥ 1 − 4IP_t(X̂_T < 0) ≥ −3 + 4IP(A) ≥ L(ν) ∼ 1 − Cν^{3/2}√(−ln ν),   (160)

which concludes the proof. ⊓⊔
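The Gaussian-tail step in the proof of (156) can be sanity-checked numerically. The following snippet (illustrative, with µ = 0 for simplicity) compares the exact tail √(2/π)∫_a^∞ e^{−z²/2} dz = erfc(a/√2), evaluated at a = √(−3 ln ν), with the claimed asymptotic equivalent √(2/π) ν^{3/2}/√(−3 ln ν); the ratio tends to 1 as ν → 0:

```python
import math

def exact_tail(nu):
    # P(sup_{s<=nu} W_s >= sqrt(-3 nu ln nu)) = erfc(a/sqrt(2)), a = sqrt(-3 ln nu)
    a = math.sqrt(-3.0 * math.log(nu))
    return math.erfc(a / math.sqrt(2.0))

def tail_asymptotic(nu):
    # claimed equivalent in (158): sqrt(2/pi) * nu^{3/2} / sqrt(-3 ln nu)
    return math.sqrt(2.0 / math.pi) * nu ** 1.5 / math.sqrt(-3.0 * math.log(nu))

r4 = exact_tail(1e-4) / tail_asymptotic(1e-4)   # ratio at nu = 1e-4
r8 = exact_tail(1e-8) / tail_asymptotic(1e-8)   # ratio at nu = 1e-8, closer to 1
```

The relative error behaves like 1/a² = −1/(3 ln ν), which is why the convergence is only logarithmic in ν.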
References
[1] S. Asmussen and M. Taksar, Controlled diffusion models for optimal dividend pay-out, Ins.
Math. Econom., 20 (1997), pp. 1-15.
[2] B. Avanzi, Strategies for dividend distribution: a review, N. Am. Actuar. J. 13, no. 2 (2009),
pp. 217-251.
[3] V.E. Beneš, L.A. Shepp and H.S. Witsenhausen, Some solvable stochastic control problems,
Stochastics, 4 (1980), pp. 39-83.
[4] P. Billingsley, Convergence of probability measures, Wiley Series in Probability and Statistics:
Probability and Statistics, John Wiley and Sons, Inc., New York, 1999.
[5] B. Bouchard, Introduction to stochastic control of mixed diffusion processes, viscosity solutions and applications in finance and insurance, Lecture notes, Université Paris-IX Dauphine,
Ceremade and CREST, 2007.
[6] K. Burdzy, Z. Chen and J. Sylvester, The heat equation and reflected Brownian motion in
time-dependent domains. II. Singularities of solutions, J. Funct. Anal. 204, no. 1 (2003), pp.
1-34.
[7] K. Burdzy, Z. Chen and J. Sylvester, The heat equation and reflected Brownian motion in
time-dependent domains, Ann. Probab. 32, no. 1B, (2004), pp. 775-804.
[8] G. Doetsch, Handbuch der Laplace-Transformation. Band I: Theorie der Laplace-Transformation, Birkhäuser Verlag, Basel-Stuttgart, 1971.
[9] R. Elie, Finite time Merton strategy under drawdown constraint: a viscosity solution approach,
Appl. Math. Optim. 58, no. 3, (2008), pp. 411-431.
[10] B. de Finetti, Su un’ impostazione alternativa della teoria collettiva del rischio, Transactions
of the XVth International Congress of Actuaries 2 (1957), pp. 433-443.
[11] W.H. Fleming and H.M. Soner, Controlled Markov processes and viscosity solutions, Springer-Verlag, New York, 1993.
[12] J. Gaier, P. Grandits and W. Schachermayer, Asymptotic ruin probabilities and optimal investment, Annals of Applied Probability 13, no.3 (2003), pp. 1054-1076.
[13] P. Grandits, Optimal consumption in a Brownian model with absorption and finite time horizon,
Applied Math. and Optimization 67 (2013), pp. 197-241.
[14] P. Grandits, Existence and asymptotic behavior of an optimal barrier for an optimal consumption problem in a Brownian model with absorption and finite time horizon, to appear in Applied
Math. and Optimization.
[15] P. Grandits, Asymptotic behavior of an optimal barrier in a constrained optimal consumption problem, Preprint, TU Vienna.
[16] K. Itô and H.P. McKean, Diffusion processes and their sample paths, Die Grundlehren der
mathematischen Wissenschaften, Band 125. Springer-Verlag, Berlin-New York, 1974.
[17] I. Karatzas and S. Shreve, Brownian motion and stochastic calculus, Springer-Verlag, New
York, 1991.
[18] O. A. Ladyženskaja, V.A. Solonnikov and N.N. Uralceva, Linear and quasilinear equations of
parabolic type, American Mathematical Society, Providence, Rhode Island, 1968.
[19] F. Lundberg, I. Approximerad Framställning av Sannolikhetsfunktionen, II. Återförsäkring av Kollektivrisker, Almqvist and Wiksell, Uppsala, 1903.
[20] P. van Moerbeke, On optimal stopping and free boundary problems, Arch. Rational Mech. Anal.,
60 (1976), pp. 101-148.
[21] J. Paulsen and H.K. Gjessing, Ruin theory with stochastic return on investments, Advances in
Applied Probability 29 (1997), pp. 965-985.
[22] J. Paulsen, Optimal dividend payouts for diffusions with solvency constraints, Finance Stoch.
7, no. 4 (2003), pp.457-473.
[23] H. Pham, Continuous-time stochastic control and optimization with financial applications,
Springer-Verlag, 2009.
[24] H. Schmidli, Stochastic control in insurance, Springer-Verlag, 2008.
[25] S.E. Shreve, J.P. Lehoczky and D.P. Gaver, Optimal consumption for general diffusions with
absorbing and reflecting barriers, SIAM J. Control Optim., 22 (1984), pp. 55-75.
[26] M. Taksar, Optimal risk and dividend distribution control models for an insurance company,
Math. Methods Oper. Res. 51, no. 1 (2000), pp. 1-42.
[27] R. Williams, Asymptotic variance parameters for the boundary local times of reflected Brownian
motion on a compact interval, J. Appl. Probab. 29, no. 4 (1992), pp. 996-1002.