
Smooth Value Functions for a Class of Nonsmooth Utility
Maximization Problems
Baojun Bian∗, Sheng Miao†and Harry Zheng‡
Abstract. In this paper we prove that there exists a smooth classical solution to the HJB
equation for a large class of constrained problems with utility functions that are not necessarily differentiable or strictly concave. The value function is smooth if the optimal control
satisfies an exponential moment condition or if the value function is continuous on the closure of its domain. The key idea is to work on the dual control problem and the dual HJB
equation. We construct a smooth, strictly convex solution to the dual HJB equation and
show that its conjugate function is a smooth, strictly concave solution to the primal HJB
equation satisfying the terminal and boundary conditions.
Key words. nonsmooth utility maximization, dual control problem, classical solution to
HJB equation, verification theorem, smooth value function.
AMS subject classifications. 90C46, 49L20
1 Introduction
There has been extensive research in utility maximization. Two main methods are stochastic
control and convex duality. The classical stochastic control approach requires the underlying
state process be Markovian and applies the dynamic programming principle and Ito’s lemma
to derive a nonlinear parabolic PDE (the HJB equation) for the optimal value function. If
there is a classical solution to the HJB equation one may then apply the verification theorem
to show that the value function is smooth and find the optimal control as a byproduct. The
convex duality approach requires the objective functional be concave and the state equation
be linear. It first solves a static maximization problem and applies convex analysis to show the
existence of the optimal solutions to the primal and dual problems and establishes their dual
relationship. It then uses the martingale representation theorem or more general optional
decomposition theorem to super-replicate the optimal terminal wealth/consumption. For
excellent expositions of these two methods in utility maximization, see [6, 8, 9, 11] and
references therein.
The smoothness of the value function is a highly desirable property. One normally has
to impose some conditions to ensure that. One key condition is the uniform ellipticity of the
diffusion coefficient, which is not satisfied for the standard wealth process as long as doing
nothing is a feasible portfolio trading strategy. When the trading constraint set is a closed
convex cone, the utility function is strictly concave, continuously differentiable and satisfies
some growth conditions, and the market is complete, the value function is a classical solution
to the HJB equation; see [8]. When the constraint set is the whole space and the utility
function is of power or logarithmic type, the value function has a closed-form expression.
∗ Department of Mathematics, Tongji University, Shanghai 200092, China. [email protected]. Research of this author was supported by NSFC No. 10671144 and the National Basic Research Program of China (2007CB814903).
† Department of Mathematics, Imperial College, London SW7 2BZ, UK. [email protected]
‡ Corresponding author. Department of Mathematics, Imperial College, London SW7 2BZ, UK. [email protected].
For general non-smooth and/or non-strictly-concave utility functions it is not clear if there
exist smooth solutions to the HJB equation. To deal with the lack of a priori knowledge of
the differentiability of the value function one may use a weak solution concept and characterize the value function as a unique viscosity solution to the HJB equation. Due to the
stability property of viscosity solutions one may solve the HJB equation numerically. It is in
general difficult to show the differentiability of the value function even if it is known to be a
viscosity solution to the HJB equation (but see the remarkable paper [13]). The lack of
differentiability of the value function makes it impossible to apply the verification theorem to
find the optimal control.
Consider a financial market consisting of one bank account and n stocks. The discounted
price process $S = (S^1, \ldots, S^n)'$ of the $n$ risky assets is modelled by
$$ dS_t = \operatorname{diag}(S_t)\,(b(t)\,dt + \sigma(t)\,dW_t), \quad 0 \le t \le T, $$
with the initial price $S_0 = s$, where $\operatorname{diag}(S_t)$ is an $n \times n$ matrix with diagonal elements $S_t^i$ and
all other elements zero, b and σ are deterministic continuous vector-valued and nonsingular
matrix-valued functions of time t, representing the stock excess returns and volatilities, respectively, and W is an n-dimensional standard Brownian motion on a complete probability
space (Ω, F , P ), endowed with a natural filtration {Ft } generated by W . The discounted
wealth process X satisfies the SDE
$$ dX_t = X_t\,(\pi_t' b(t)\,dt + \pi_t'\sigma(t)\,dW_t), \qquad X_0 = x_0, \tag{1} $$
where $\pi_t = (\pi_t^1, \ldots, \pi_t^n)'$ are progressively measurable control processes satisfying $\pi_t \in K$,
a closed convex cone, a.s. for t ∈ [0, T ]. In our notation we write time t in parentheses for
deterministic functions (e.g., b(t), σ(t)) and in subscript for stochastic processes (e.g., St , πt ).
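As a quick illustration of the wealth dynamics (1), the following sketch simulates the discounted wealth process under a constant-proportion strategy by an Euler-Maruyama scheme; the coefficients b, sigma and the strategy below are illustrative choices of ours, not parameters from the paper.

```python
import numpy as np

def simulate_wealth(x0, b, sigma, pi, T=1.0, n_steps=250, n_paths=10000, seed=0):
    """Euler-Maruyama scheme for dX_t = X_t (pi' b dt + pi' sigma dW_t).

    b: (n,) excess return vector, sigma: (n, n) volatility matrix,
    pi: (n,) constant proportions held in the n stocks (an element of K).
    Returns samples of the terminal wealth X_T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    drift = float(pi @ b) * dt               # pi' b dt
    vol = pi @ sigma                         # row vector pi' sigma
    X = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, len(b)))
        X *= 1.0 + drift + dW @ vol          # multiplicative Euler step
        X = np.maximum(X, 0.0)               # keep the wealth nonnegative
    return X

# Illustrative data: two stocks, cone constraint K = R^2_+ (no short selling).
b = np.array([0.05, 0.03])
sigma = np.array([[0.20, 0.00], [0.05, 0.15]])
XT = simulate_wealth(x0=1.0, b=b, sigma=sigma, pi=np.array([0.4, 0.2]))
print("E[X_T] ~", XT.mean())
```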
A standard utility maximization problem is given by
$$ \sup_\pi\, E[U(X_T)] \quad \text{subject to (1)} \tag{2} $$
where U is a utility function which is continuous, increasing, concave, and U (0) = 0.
Denote by V (t, x) the value function of (2) for 0 ≤ t ≤ T and x ≥ 0. The HJB equation
is given by
$$ -\frac{\partial V}{\partial t}(t,x) - H(t, x, V_x(t,x), V_{xx}(t,x)) = 0, \quad x > 0,\ t < T, \tag{3} $$
with the terminal condition $V(T,x) = U(x)$ and the boundary condition $V(t,0) = 0$, where $H$ is the Hamiltonian, defined by
$$ H(t, x, p, M) = \sup_{\pi \in K}\Big\{ \pi' b(t)\,xp + \frac12|\sigma(t)'\pi|^2 x^2 M \Big\}, \tag{4} $$
and $\frac{\partial V}{\partial t}$ is the partial derivative of $V$ with respect to $t$; $V_x$ and $V_{xx}$ are defined similarly.
The main contribution of this paper is that we show that there exists a smooth classical solution to the HJB equation (3) for a large class of constrained problems with utility functions
that are not necessarily differentiable or strictly concave (Theorem 3.8), and that the value
function is smooth if the optimal control satisfies an exponential moment condition (Theorem 4.1) or if the value function is continuous on the closure of its domain (Theorem 5.5).
The key idea is to work on the dual control problem and the dual HJB equation. We show
that there is a smooth, strictly convex solution to the dual HJB equation and its conjugate
function is a smooth, strictly concave solution to the primal HJB equation satisfying the
terminal and boundary conditions.
The rest of the paper is organized as follows. Section 2 reviews the existence results
of nonsmooth utility maximization and characterizes the dual control problem. Section 3
constructs the smooth solutions to the primal and dual HJB equations. Section 4 proves the
verification theorem under an exponential moment condition for the optimal control. Section
5 shows that the primal and dual value functions are smooth if they are continuous on the
closure of the domains with the comparison method for viscosity solutions. Section 6 gives
two applications, one is the efficient frontier of utility and conditional VaR of utility loss, the
other is the monotonicity of absolute risk aversion measures.
2 Dual Control Problem
In this section we briefly review the main results on the existence of the optimal solutions to
the primal and dual problems and characterize the dual control problem. We focus on the
dual domain for the application of stochastic control theory. Almost all work in the literature on
utility maximization is for continuously differentiable and strictly concave utility functions.
The main references for nonsmooth utility maximization are [4, 5, 15, 16].
To use the duality method to study the value function of the utility maximization problem
(2) we need first to formulate a dual minimization problem with a well defined dual domain.
The choice of the dual domain is often problem specific. For a complete market generated
by Brownian motions [8] provides an explicit construction of the dual process which is very
useful in proving the dual relation of the primal and dual value functions. The approach in
[8] crucially depends on the differentiability and strict concavity of the utility function as
the inverse function of the marginal utility is extensively used. The results of [8] cannot be
directly applied to the problem of this paper.
For general semimartingale asset price processes the duality method is normally used to
show the existence of optimal solutions to the primal and dual problems and to establish
their dual relation. There are several definitions of dual variables depending on the primal
problem formulation. [9] chooses the set of dual variables consisting of nonnegative supermartingale processes Y with Y0 = y such that XY are supermartingales for all admissible
wealth processes X with initial endowment x, while [4] takes nonnegative random variables
Y in L1 such that E[XT Y ] ≤ xy for all admissible terminal wealth XT .
Consider a security market consisting of $d+1$ assets, one bond and $d$ stocks. Assume
the bond price $S^0$ equals one and the (discounted) stock price $S = (S^i)_{1\le i\le d}$ is modeled by a $(0,\infty)^d$-valued
semimartingale on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{0\le t\le T}, P)$. Let $K$ be a closed
convex cone. Denote by Ξ the set of admissible trading strategies such that every ξ ∈ Ξ is a
predictable process, integrable with respect to S and valued in K a.s. for all t.
The wealth process is defined by initial capital $x$ and admissible strategy $\xi$ as follows:
$$ X_t^{x,\xi} = x + \int_0^t \xi_v\,dS_v. $$
The set of nonnegative wealth processes with initial value x is defined by
$$ \mathcal{X}_+(x) := \{ X^{x,\xi} : \xi \in \Xi \text{ and } X_t^{x,\xi} \ge 0 \text{ for all } t \in [0,T] \} \tag{5} $$
and the set of terminal values of nonnegative wealth processes is defined by $\mathcal{X}_+^T(x) := \{X_T^{x,\xi} : X^{x,\xi} \in \mathcal{X}_+(x)\}$. The problem of maximizing the expected utility of the terminal wealth is given by
$$ V(x) = \sup_{X \in \mathcal{X}_+^T(x)} E[U(X)], $$
where U is an increasing concave utility function defined on the positive real line. The dual
problem is formulated as
$$ \tilde V(y) := \inf_{Y \in \mathcal{Y}_+^T(y)} E[\tilde U(Y)], $$
where $\tilde U$ is the dual function of $U$, defined by
$$ \tilde U(y) := \sup_{x \ge 0}\{U(x) - xy\}, $$
and $\mathcal{Y}_+^T(y)$ is the set of dual variables, defined by
$$ \mathcal{Y}_+^T(y) := \{ Y \in L^0_+ : E[XY] \le xy \text{ for all } x \in \mathbb{R}_+ \text{ and } X \in \mathcal{X}_+^T(x) \}. $$
We now state the main theorem on the existence and the dual relation of the primal and
dual problems, see [4], Theorem 3.2, and [15], Theorem 5.1.
Theorem 2.1 Let the following assumptions be satisfied: 1. the set of equivalent local martingale measures for $S$ is nonempty; 2. the set $K$ is a closed convex polyhedral cone; 3. $AE_0(\tilde U) := \limsup_{y\to 0}\sup_{q\in\partial\tilde U(y)} \frac{|q|y}{\tilde U(y)} < \infty$; and 4. $W(x) := \inf_{y>0}(\tilde V(y) + xy) < \infty$ for some $x > 0$. Then
1. There exist $\bar y \ge 0$ and $\bar Y \in \mathcal{Y}_+^T(\bar y)$ such that $\tilde V(\bar y) = E[\tilde U(\bar Y)]$ and $W(x) = \tilde V(\bar y) + x\bar y$.
2. There exists $\bar X \in \mathcal{X}_+^T(x)$ such that $V(x) = E[U(\bar X)]$.
3. $V(x) = W(x)$, $E[\bar X\bar Y] = x\bar y$, and $\bar X \in -\partial\tilde U(\bar Y)$.
Remark 2.2 The assumptions are related to the no-arbitrage condition of a financial market,
the existence of a constrained optional decomposition, and the asymptotic elasticity of utility
functions.
Proposition 2.3 Let the assumptions of Theorem 2.1 be satisfied and the function $U$ be strictly increasing. If $\bar Y$ is an optimal dual solution of $W(x)$, then $\bar Y > 0$ a.s.
Proof. We first show that if $\bar Y(\omega) = 0$ for some $\omega \in \Omega$, then the optimal $\bar X(\omega) = \infty$. From [15], Theorem 5.1, $\eta = \bar X(\omega) \in -\partial\tilde U(\bar Y(\omega)) = -\partial\tilde U(0)$. Since $\tilde U$ is a convex function, $\tilde U(z) \ge \tilde U(0) - \eta(z-0)$ for all $z > 0$, that is, $\tilde U(z) \ge U(\infty) - \eta z$. Moreover $\tilde U(z) = \sup_{x>0}(U(x) - xz) = U(\bar x) - \bar x z$ if and only if $z \in \partial U(\bar x)$. Assume $\eta < \infty$; choose any $\bar x > \eta$ and then $\bar z \in \partial U(\bar x)$ with $\bar z > 0$. We have $\tilde U(\bar z) = U(\bar x) - \bar x\bar z \ge \tilde U(0) - \eta\bar z$, which implies $0 > U(\bar x) - U(\infty) \ge \bar z(\bar x - \eta) > 0$. This is a contradiction. We can now show that $\bar Y > 0$ a.s. Assume there exists $A \subset \Omega$ with $P(A) > 0$ and $\bar Y(\omega) = 0$ for $\omega \in A$; then $\bar X(\omega) = \infty$ for $\omega \in A$ from the discussion above. For any equivalent probability measure $Q$ we have $E_Q[\bar X] = E_Q[\bar X 1_A] + E_Q[\bar X 1_{A^c}] = \infty$, a contradiction to the budget constraint and the no-arbitrage condition. □
The dual domain $\mathcal{Y}_+^T(y)$ is a set of random variables. To formulate a dual control problem, we need the dual domain to consist of stochastic processes, not just random variables. It is suggested in [9] that a natural dual process domain is
$$ \mathcal{Y}_+(y) = \{ Y \ge 0 : Y_0 = y \text{ and } XY \text{ is a supermartingale for all } X \in \mathcal{X}_+(x) \}. \tag{6} $$
This indeed serves our purpose. We have the following equivalent results of Theorem 2.1.
Theorem 2.4 Let the assumptions of Theorem 2.1 be satisfied. Then
1. There exist $y^* \ge 0$ and $Y^* \in \mathcal{Y}_+(y^*)$ such that $\tilde V(y^*) = E[\tilde U(Y_T^*)]$ and $W(x) = \tilde V(y^*) + xy^*$, where $W(x) := \inf_{y>0}(\tilde V(y) + xy)$.
2. There exists $X^* \in \mathcal{X}_+(x)$ such that $V(x) = E[U(X_T^*)]$.
3. $V(x) = W(x)$, $E[X_T^* Y_T^*] = xy^*$, and $X_T^* \in -\partial\tilde U(Y_T^*)$.
Proof. Define
$$ \hat V(y) = \inf_{Y \in \mathcal{Y}_+(y)} E[\tilde U(Y_T)]. $$
It is obvious that if $Y \in \mathcal{Y}_+(y)$ then $Y_T \in \mathcal{Y}_+^T(y)$, and we have $\tilde V(y) \le \hat V(y)$. From Theorem 2.1 (1), there exist $\bar y \ge 0$ and $\bar Y \in \mathcal{Y}_+^T(\bar y)$ such that $W(x) = E[\tilde U(\bar Y)] + x\bar y$. Since $E[X\bar Y] \le x\bar y$ for all $x > 0$ and $X \in \mathcal{X}_+^T(x)$ we have
$$ \bar Y \in \{ h \in L^0_+(\Omega, \mathcal{F}, P) : 0 \le h \le Y_T \text{ for some } Y \in \mathcal{Y}_+(\bar y) \} $$
by [9], Proposition 3.1 (ii). Let $y^* = \bar y$. We can find a $Y^* \in \mathcal{Y}_+(y^*)$ such that $\bar Y \le Y_T^*$. Since $\tilde U$ is a decreasing function we have $E[\tilde U(\bar Y)] \ge E[\tilde U(Y_T^*)]$, which implies that $\tilde V(y) \ge \hat V(y)$. Therefore
$$ \hat V(y) \le E[\tilde U(Y_T^*)] \le E[\tilde U(\bar Y)] = \tilde V(y) \le \hat V(y). $$
That gives $\hat V(y) = \tilde V(y) = E[\tilde U(Y_T^*)] = E[\tilde U(\bar Y)]$ and (1) is proved. (3) can be proved in the same way as [15], Lemma 5.8, and is omitted here. □
Remark 2.5 We know that $\bar Y \le Y_T^*$ but we have not claimed that $\bar Y = Y_T^*$. This would be the case if one chose the seemingly natural stochastic process $Y_t^* = E[\bar Y \mid \mathcal{F}_t]$ for $0 \le t \le T$. However, it is not clear whether $X^* Y^*$ is a supermartingale with this construction.
We now continue to use the control process $\pi_t$ instead of $\xi_t$; they are related by $\pi_t^i = \xi_t^i S_t^i / X_t$. The domains of the primal and dual problems are given by (5) and (6), respectively.
Proposition 2.6 Let $\tilde K$ be the positive polar cone of $K$, i.e., $\tilde K = \{p : p'v \ge 0\ \forall v \in K\}$. Then the optimal value of the dual problem can be characterized by
$$ \tilde V(y) = \inf_{Y \in \bar{\mathcal{Y}}_+(y)} E[\tilde U(Y_T)], $$
where $\bar{\mathcal{Y}}_+(y)$ is the set of processes $Y$ satisfying
$$ dY_t = -Y_t\,(\sigma(t)^{-1}v_t + \theta(t))'\,dW_t, \qquad Y_0 = y, $$
$v$ is progressively measurable with $v_t \in \tilde K$ a.s. for all $t$, and $\theta(t) = \sigma(t)^{-1}b(t)$.
Proof. Since the filtration is generated by diffusion processes, the Doob–Meyer decomposition theorem implies that the positive supermartingale $Y \in \mathcal{Y}_+(y)$ can be decomposed as
$$ Y_t = y\,\mathcal{E}(-\alpha' \cdot W)_t\, D_t, $$
where $\mathcal{E}$ is the Doléans-Dade exponential and $D$ is a positive nonincreasing process with $D_0 = 1$. From the fact that $\tilde U$ is decreasing and $D_t \le 1$ for all $t$ we have
$$ \tilde V(y) = \inf_{Y} E[\tilde U(Y_T)], $$
where $Y$ satisfies the SDE
$$ dY_t = -Y_t\,\alpha_t'\,dW_t $$
with the initial condition $Y_0 = y$. Ito's lemma implies
$$ d(X_t Y_t) = X_t Y_t(\pi_t' b(t) - \pi_t'\sigma(t)\alpha_t)\,dt + X_t Y_t(\pi_t'\sigma(t) - \alpha_t')\,dW_t. $$
The supermartingale property of $XY$ implies $\pi_t' b(t) - \pi_t'\sigma(t)\alpha_t \le 0$ a.s. for all $t$. Define $v_t := -\sigma(t)(\theta(t) - \alpha_t)$. Then $\alpha_t = \sigma(t)^{-1}v_t + \theta(t)$ and
$$ -\pi_t' v_t \le 0, \quad \forall\,\pi_t \in K \ \text{a.s.}, $$
which leads to $v_t \in \tilde K$ a.s. for all $t$. □
Denote by $\tilde V(t,y)$ the value function of the dual problem, i.e.,
$$ \tilde V(t,y) = \inf_{Y \in \bar{\mathcal{Y}}_+(y)} E[\tilde U(Y_T) \mid Y_t = y]. $$
Then the dual HJB equation is given by
$$ \frac{\partial\tilde V}{\partial t}(t,y) + \inf_{\tilde\pi \in \tilde K}\Big\{ \frac12|\theta(t) + \sigma(t)^{-1}\tilde\pi|^2\, y^2\, \tilde V_{yy}(t,y) \Big\} = 0, \quad y > 0,\ t < T, \tag{7} $$
with the terminal condition $\tilde V(T,y) = \tilde U(y)$. It is easy to verify that $\tilde V(t,y)$ is convex in $y$ for fixed $t \in [0,T]$. Denote by $\hat\pi(t)$ the unique minimizer of $f(\tilde\pi) = |\theta(t) + \sigma(t)^{-1}\tilde\pi|^2$ over $\tilde\pi \in \tilde K$ and set $\hat\theta(t) = \theta(t) + \sigma(t)^{-1}\hat\pi(t)$. Equation (7) is then equivalent to the linear PDE
$$ \frac{\partial\tilde V}{\partial t}(t,y) + \frac12|\hat\theta(t)|^2 y^2 \tilde V_{yy}(t,y) = 0, \quad y > 0,\ 0 \le t < T. \tag{8} $$
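In practice the coefficient $\hat\theta(t)$ appearing in (8) is obtained from a finite-dimensional convex projection. The sketch below carries this out for the short-selling-constrained case $K = \mathbb{R}^n_+$ (so $\tilde K = \mathbb{R}^n_+$, cf. Remark 3.4) by minimizing $f(\tilde\pi) = |\theta(t) + \sigma(t)^{-1}\tilde\pi|^2$ over $\tilde\pi \ge 0$ with a generic bound-constrained solver; the choice of solver and the sample coefficients are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def pi_hat_theta_hat(b, sigma):
    """Minimize f(pi~) = |theta + sigma^{-1} pi~|^2 over pi~ >= 0 (the polar
    cone of K = R^n_+) and return the minimizer pi_hat and theta_hat."""
    theta = np.linalg.solve(sigma, b)              # theta(t) = sigma(t)^{-1} b(t)
    sigma_inv = np.linalg.inv(sigma)

    def f(pi_tilde):
        return float(np.sum((theta + sigma_inv @ pi_tilde) ** 2))

    n = len(b)
    res = minimize(f, x0=np.ones(n), bounds=[(0.0, None)] * n)
    pi_hat = res.x
    return pi_hat, theta + sigma_inv @ pi_hat

# Illustrative data with one negative excess return, so the constraint binds.
b = np.array([0.05, -0.02])
sigma = np.array([[0.20, 0.00], [0.05, 0.15]])
pi_hat, theta_hat = pi_hat_theta_hat(b, sigma)
print("pi_hat =", pi_hat, "  |theta_hat| =", np.linalg.norm(theta_hat))
```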
Remark 2.7 In general a wealth process with both investment and consumption is given by
$$ dX_t = X_t(\pi_t' b(t)\,dt + \pi_t'\sigma(t)\,dW_t) - c_t\,dt, $$
where $c$ is a nonnegative consumption rate process. The utility maximization problem becomes
$$ \sup_{\pi, c}\, E\Big[\int_0^T U_1(t, c_t)\,dt + U(X_T)\Big], $$
where $U_1(t,c)$ is a running utility at time $t$ with consumption rate $c$. One can again formulate a dual problem with the same dual state process set $\bar{\mathcal{Y}}_+(y)$. The dual minimization problem becomes
$$ \tilde V(y) = \inf_{Y \in \bar{\mathcal{Y}}_+(y)} E\Big[\int_0^T \tilde U_1(t, Y_t)\,dt + \tilde U(Y_T)\Big], $$
where $\tilde U_1(t,\cdot)$ is the dual function of $U_1(t,\cdot)$. The dual HJB equation is a linear PDE with an extra term $\tilde U_1(t,y)$, i.e.,
$$ \frac{\partial\tilde V}{\partial t}(t,y) + \frac12|\hat\theta(t)|^2 y^2 \tilde V_{yy}(t,y) + \tilde U_1(t,y) = 0, \quad y > 0,\ 0 \le t < T. $$
One can show the existence of a classical solution to the dual HJB equation and then construct
a classical solution to the primal HJB equation. To focus on the key ideas of our approach
we only consider the investment case in this paper and will discuss the extension to the
investment/consumption and other cases in a separate paper.
Remark 2.8 There are several reasons that the dual HJB equation is a linear PDE in
this paper. Firstly, discounted asset price process S follows a geometric Brownian motion,
which decouples the dual process Y and the primal process X from the supermartingale
process XY . Secondly, trading constraint set K is a cone which ensures that dual controls
$v_t \in \tilde K$ a.s. for all $t$, where $\tilde K$ is the positive polar cone of $K$; otherwise $v$ should satisfy an
integrability condition $E[\int_0^T \delta_{-K}(v_s)\,ds] < \infty$, where $\delta_{-K}$ is the support function of $-K$, see
[8] for details. Thirdly, the primal problem involves only one state variable X which makes
the dual problem have also just one state variable Y . If the primal problem had several
correlated state stochastic processes, for example, wealth process X and stochastic variance
process V in a Heston model, then the dual problem would involve the correlated state
stochastic processes Y and V and it would be highly unlikely that the dual HJB equation
could be reduced to a linear PDE. In summary, the special structure of the wealth process
(1) with the cone constraint leads to a linear dual HJB equation (8). In general, the dual
HJB equation is a nonlinear PDE which is likely to be as complicated and difficult to solve
as the primal HJB equation.
3 Smooth Solutions to HJB Equation
We assume that U and θ̂ satisfy the following conditions.
Assumption 3.1 Utility function U is a continuous, increasing, concave function on [0, ∞)
satisfying U (0) = 0, U (∞) = limx→∞ U (x) = ∞, and
$$ 0 \le U(x) \le L(1 + x^p), \quad x \ge 0, \tag{9} $$
for some constants L > 0, 0 < p < 1.
Remark 3.2 Condition $U(0) = 0$ can be replaced by $U(0) > -\infty$. If $U \in C^1(0,\infty)$, then $U'(\infty) = 0$ from (9). We do not assume that the Inada condition holds. If $U$ satisfies Assumption 3.1, then $\tilde U$ is a continuous decreasing convex function satisfying $\tilde U(0) = \infty$, $\tilde U(\infty) = 0$, and
$$ 0 \le \tilde U(y) \le \sup_{x>0}\{L(1+x^p) - yx\} \le \hat L(1 + y^{\frac{p}{p-1}}), \quad y > 0, \tag{10} $$
where $\hat L = \max\{L,\ (Lp)^{\frac{1}{1-p}}(p^{-1} - 1)\}$. In particular, if $\tilde U \in C^1$, then $\tilde U'(\infty) = 0$.
Assumption 3.3 θ̂ is continuous on [0, T ] and there is a positive constant θ0 such that
|θ̂(t)| ≥ θ0 for all t ∈ [0, T ].
Remark 3.4 Assumption 3.3 is automatically satisfied if all components of $b(t)$ are positive, a natural condition as $b(t)$ represents the stock excess returns, and $K$ is either the whole space $\mathbb{R}^n$ (no trading constraints) or the positive orthant $\mathbb{R}^n_+$ (short selling constraints). The positive polar cone $\tilde K$ is then either $\{0\}$ or $\mathbb{R}^n_+$ and the optimal solution is $\hat\pi(t) = 0$ for all $t$. Therefore $\hat\theta(t) = \theta(t)$ is a nonzero continuous vector-valued function on $[0,T]$ and $\theta_0$ is the minimum value of $|\theta|$ on $[0,T]$.
Consider the linear SDE
$$ d\hat Y_s = -\hat Y_s\,\hat\theta(s)'\,dW_s, \quad s \ge t, $$
with the initial value $\hat Y_t = y$. Denote by $\hat Y_s^{t,y}$ its unique strong solution and define a function $\hat V$ on $[0,T]\times(0,\infty)$ by
$$ \hat V(t,y) = E[\tilde U(\hat Y_T^{t,y})]. $$
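Because $\hat\theta$ is deterministic, $\hat Y_T^{t,y}$ is lognormal and $\hat V(t,y)$ can be estimated directly by Monte Carlo. The sketch below does this for an illustrative nonsmooth utility satisfying Assumption 3.1; the utility, the constant $|\hat\theta|$ and the grid sizes are our own choices, not the paper's.

```python
import numpy as np

# Illustrative nonsmooth utility: increasing, concave, U(0) = 0, with a kink
# at x = 1, and 0 <= U(x) <= 2(1 + x^{1/2}), so Assumption 3.1 holds (p = 1/2).
def U(x):
    return np.minimum(2.0 * np.sqrt(x), 1.0 + np.sqrt(x))

def U_tilde(w, x_max=100.0, n_x=10001):
    """Dual function U~(w) = sup_{x >= 0} {U(x) - x w}, approximated on a grid."""
    x = np.linspace(0.0, x_max, n_x)
    return np.max(U(x)[None, :] - np.outer(np.atleast_1d(w), x), axis=1)

def V_hat_mc(t, y, theta_hat_norm=0.3, T=1.0, n_paths=50000, seed=0):
    """Monte Carlo estimate of V^(t, y) = E[U~(Y^_T^{t,y})].

    log(Y^_T^{t,y}/y) is Gaussian with mean -s2/2 and variance s2, where
    s2 = |theta_hat|^2 (T - t); U~ is tabulated on a grid and interpolated."""
    rng = np.random.default_rng(seed)
    s2 = theta_hat_norm ** 2 * (T - t)
    Y_T = y * np.exp(-np.sqrt(s2) * rng.standard_normal(n_paths) - 0.5 * s2)
    w_grid = np.linspace(Y_T.min(), Y_T.max(), 1000)
    return np.interp(Y_T, w_grid, U_tilde(w_grid)).mean()

print("V_hat(0, 1.0) ~", V_hat_mc(0.0, 1.0))
```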
Lemma 3.5 $\hat V$ satisfies
$$ 0 \le \hat V(t,y) \le K(1 + y^{\frac{p}{p-1}}), \quad t \in [0,T], $$
for some positive constant K. Furthermore, V̂ is continuous on [0, T ] × (0, ∞) and is a
viscosity solution to linear PDE (8).
Proof. Define $Z_s = (\hat Y_s^{t,y})^{\frac{p}{p-1}}$ for $s \ge t$. Ito's lemma implies that $Z$ satisfies the SDE
$$ dZ_s = Z_s\Big( \frac{p}{2(p-1)^2}|\hat\theta(s)|^2\,ds - \frac{p}{p-1}\hat\theta(s)'\,dW_s \Big) $$
with initial value $Z_t = y^{\frac{p}{p-1}}$. Therefore
$$ Z_T = y^{\frac{p}{p-1}}\exp\Big( \frac{p}{2(p-1)^2}\int_t^T |\hat\theta(s)|^2\,ds \Big)\,\Gamma_t, $$
where
$$ \Gamma_t = \exp\Big( \int_t^T \hat\eta(s)'\,dW_s - \frac12\int_t^T |\hat\eta(s)|^2\,ds \Big) $$
and $\hat\eta(s) = \frac{p}{1-p}\hat\theta(s)$. Since $E(\Gamma_t) = E(\Gamma_T) = 1$ we have from (10) that
$$ \hat V(t,y) \le \hat L(1 + E[Z_T]) \le K(1 + y^{\frac{p}{p-1}}), $$
where $K = \hat L\, e^{\frac{p}{2(p-1)^2}\int_0^T|\hat\theta(s)|^2 ds}$. Furthermore, for all $0 \le t \le T$ and $y \ge y_0$ for any fixed $y_0 > 0$ we have
$$ E[(\tilde U(\hat Y_T^{t,y}))^2] \le K^2 E[(1 + y^{\frac{p}{p-1}}\Gamma_t)^2] \le 2K^2\Big(1 + y_0^{\frac{2p}{p-1}}\, e^{\int_0^T|\hat\eta(s)|^2 ds}\Big), $$
which implies that $\{\tilde U(\hat Y_T^{t,y}) : 0 \le t \le T,\ y \ge y_0\}$ is a class of uniformly integrable random variables. From the continuity of $\tilde U$ and $\hat Y_T^{t,y}$ with respect to $t$ and $y$ we conclude that $\hat V$ is continuous on $[0,T]\times(0,\infty)$. Since $\hat V(t,y) = E[\hat V(\tau, \hat Y_\tau^{t,y})]$ for any stopping time $\tau \ge t$, it is straightforward to show that $\hat V$ is a viscosity solution to (8), see [11]. □
Next we show that V̂ is smooth and strictly convex in y and is a classical solution to (8).
Since Ũ is only continuous and convex, we must improve the regularity and convexity. The
regularity is well known in the PDE theory. The key to improving convexity is connected to
the convexity preservation and constant rank principle for solutions to PDEs, see [1, 2, 10].
The technique used here is likely to be useful in solving other problems involving nonlinear
equations.
Lemma 3.6 The function $\hat V \in C^{1,\infty}([0,T)\times(0,\infty))$ is a classical solution to (8). Furthermore, for every $t \in [0,T)$, the function $\hat V(t,\cdot)$ is strictly decreasing and strictly convex with the following limiting properties:
$$ \lim_{y\to 0}\hat V(t,y) = \infty, \quad \lim_{y\to\infty}\hat V(t,y) = 0, \quad \lim_{y\to 0}\hat V_y(t,y) = -\infty, \quad \lim_{y\to\infty}\hat V_y(t,y) = 0. \tag{11} $$
Proof. Define $v(t,z) = \hat V(t, e^z)$. Then $v(t,z) \le K(1 + e^{\frac{p}{p-1}z})$ and $v$ is a continuous viscosity solution to the linear PDE
$$ \frac{\partial v}{\partial t} + \frac12|\hat\theta(t)|^2(v_{zz} - v_z) = 0, \quad z \in \mathbb{R},\ 0 \le t < T, \tag{12} $$
with the terminal condition $v(T,z) = \tilde U(e^z)$ for $z \in \mathbb{R}$. Assumption 3.3 ensures that (12) is a uniformly parabolic PDE and $v$ is a classical solution, given by ([7], Chapter 1)
$$ v(t,z) = \frac{1}{2\sqrt{\pi\tau}}\,e^{-\frac14\tau}\int_{-\infty}^{\infty} e^{-\frac{(z-x)^2}{4\tau}}\,e^{-\frac12(x-z)}\,\tilde U(e^x)\,dx, $$
where $\tau = \frac12\int_t^T |\hat\theta(s)|^2\,ds$. Hence
$$ \hat V(t,y) = \frac{1}{2\sqrt{\pi\tau}}\,e^{-\frac14\tau}\int_0^{\infty} e^{-\frac{(\ln x)^2}{4\tau}}\,x^{-\frac32}\,\tilde U(yx)\,dx \tag{13} $$
and $\hat V \in C^{1,\infty}([0,T)\times(0,\infty))$. Since $\tilde U$ is decreasing and convex, it follows from (13) that $\hat V$ is decreasing and convex for fixed $t \in [0,T)$. Hence
$$ \hat V_y(t,y) \le 0, \qquad \hat V_{yy}(t,y) \ge 0 $$
for every $t \in [0,T)$. Differentiating (8) twice, we conclude that $w(t,y) = \hat V_{yy}(t,y)$ is a nonnegative classical solution to the equation
$$ \frac{\partial w}{\partial t} + \frac12|\hat\theta(t)|^2 y^2 w_{yy} + 2|\hat\theta(t)|^2 y w_y + |\hat\theta(t)|^2 w = 0, \quad y > 0,\ 0 \le t < T. $$
If $w(t_0, y_0) = 0$ for some $(t_0, y_0)$ with $t_0 < T$, then $(t_0, y_0)$ is a minimum point of $w$ and $\hat V_{yy}(t,y) = w(t,y) = 0$ for all $(t,y) \in (t_0, T)\times(0,\infty)$ by the strong maximum principle ([7], Chapter 2). This implies that $\hat V(t,y)$ is linear in $y$ for any fixed $t \in (t_0, T]$; in particular, $\tilde U(y)$ is linear. This is a contradiction and we conclude that $\hat V_{yy}(t,y) > 0$ for every $t \in [0,T)$. Similarly, we deduce that $\hat V_y(t,y) < 0$ for every $t \in [0,T)$.
We can easily prove the limiting properties (11) with the help of (13) and Remark 3.2. For example, to show $\lim_{y\to 0}\hat V(t,y) = \infty$, we simply note that
$$ \hat V(t,y) \ge \Big( \frac{1}{2\sqrt{\pi\tau}}\,e^{-\frac14\tau}\int_0^1 e^{-\frac{(\ln x)^2}{4\tau}}\,x^{-\frac32}\,dx \Big)\,\tilde U(y), $$
which leads to the required limit. To prove $\lim_{y\to\infty}\hat V(t,y) = 0$, we estimate, for $y > 1$ and $a > 0$, that
$$ \hat V(t,y) \le \frac{1}{2\sqrt{\pi\tau}}\,e^{-\frac14\tau}\Big( \hat L\int_0^a e^{-\frac{(\ln x)^2}{4\tau}}\,x^{-\frac32}\big(1 + x^{\frac{p}{p-1}}\big)\,dx + \tilde U(ay)\int_a^{\infty} e^{-\frac{(\ln x)^2}{4\tau}}\,x^{-\frac32}\,dx \Big), $$
which leads to the required limit.
Since $\hat V(t,y)$ is a convex smooth function in $y$ for fixed $t \in [0,T)$, we conclude that $\hat V_y(t,y)$ is increasing in $y$. Suppose $\lim_{y\to 0}\hat V_y(t,y) = A > -\infty$. Then $\hat V_y(t,y) \ge A$ and $\hat V(t,1) - \hat V(t,y) \ge \int_y^1 \hat V_y(t,x)\,dx \ge A(1-y)$ for $0 < y < 1$, which contradicts $\lim_{y\to 0}\hat V(t,y) = \infty$. Similarly, we deduce that $\lim_{y\to\infty}\hat V_y(t,y) = 0$ for every $t \in [0,T)$. □
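The representation (13) also allows $\hat V$ to be evaluated by one-dimensional quadrature, which is convenient for checking the limiting behaviour (11) numerically. The sketch below does this after the substitution $x = e^u$, for an illustrative dual function $\tilde U(w) = 1/w$ (the dual of $U(x) = 2\sqrt{x}$) and a constant $|\hat\theta|$; both choices are ours and are made only so that a closed-form answer is available for comparison.

```python
import numpy as np

def U_tilde(w):
    # Dual of the power utility U(x) = 2*sqrt(x): U~(w) = 1/w.
    return 1.0 / w

def V_hat_quad(t, y, theta_hat_norm=0.3, T=1.0, n=4001, width=10.0):
    """Evaluate V^(t, y) from representation (13) by a Riemann sum.

    With x = e^u, (13) becomes
      V^(t,y) = e^{-tau/4} / (2 sqrt(pi tau)) *
                int exp(-u^2/(4 tau) - u/2) U~(y e^u) du,
    where tau = 1/2 * int_t^T |theta_hat(s)|^2 ds (constant |theta_hat| here)."""
    tau = 0.5 * theta_hat_norm ** 2 * (T - t)
    u = np.linspace(-width * np.sqrt(2.0 * tau), width * np.sqrt(2.0 * tau), n)
    du = u[1] - u[0]
    integrand = np.exp(-u ** 2 / (4.0 * tau) - 0.5 * u) * U_tilde(y * np.exp(u))
    return np.exp(-tau / 4.0) / (2.0 * np.sqrt(np.pi * tau)) * integrand.sum() * du

# Sanity check: for U~(w) = 1/w one gets V^(t, y) = e^{2 tau} / y exactly.
t, y = 0.0, 2.0
tau = 0.5 * 0.3 ** 2 * (1.0 - t)
print(V_hat_quad(t, y), "vs exact", np.exp(2.0 * tau) / y)
```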
Let $Y(t,\cdot)$ be the inverse function of $-\hat V_y(t,\cdot)$, i.e.,
$$ -\hat V_y(t, Y(t,x)) = x, \qquad Y(t, -\hat V_y(t,y)) = y, $$
for fixed $t \in [0,T)$. $Y(t,x)$ is well defined on $[0,T)\times(0,\infty)$ from Lemma 3.6. Since $\hat V \in C^{1,\infty}([0,T)\times(0,\infty))$ and $\hat V_{yy}(t,y) > 0$, the inverse function $Y \in C^{1,\infty}([0,T)\times(0,\infty))$ by the implicit function theorem. Let
$$ u(t,x) = \inf_{y>0}\{\hat V(t,y) + xy\}. \tag{14} $$
We now show that u is a classical solution to the HJB equation (3). We need the following
result which is similar to [17], Lemma 3.2.
Lemma 3.7 Let $a$ be a given number. Let $\hat\pi(t)$ be the unique minimizer of the convex function
$$ f(\tilde\pi) = |\operatorname{sgn}(a)\theta(t) + \sigma(t)^{-1}\tilde\pi|^2 = |\sigma(t)^{-1}(\operatorname{sgn}(a)b(t) + \tilde\pi)|^2 $$
over $\tilde\pi \in \tilde K$, where $\operatorname{sgn}(a)$ is the sign function which equals $1$ if $a > 0$ and $-1$ if $a < 0$. Denote $\hat\theta(t) = \operatorname{sgn}(a)\theta(t) + \sigma(t)^{-1}\hat\pi(t)$. Then $\pi^*(t) = \frac{|a|}{2}\nabla f(\hat\pi(t)) = |a|(\sigma(t)')^{-1}\hat\theta(t)$ is the unique minimizer of the convex function
$$ g(\pi) = \frac12|\pi'\sigma(t)|^2 - a\,\pi' b(t) $$
over $\pi \in K$, where $\nabla f$ is the gradient of $f$. Furthermore,
$$ g(\pi^*(t)) = -\frac12 a^2|\hat\theta(t)|^2. $$
Proof. Since $\tilde K$ is a convex cone, we see that $f(\eta\hat\pi)$ attains its minimum at $\eta = 1$. Hence $\hat\pi(t)'\nabla f(\hat\pi(t)) = 0$. Furthermore, for any given $q \in \tilde K$, $f(\eta\hat\pi(t) + (1-\eta)q)$ attains its minimum at $\eta = 1$, which implies $q'\nabla f(\hat\pi(t)) \ge \hat\pi(t)'\nabla f(\hat\pi(t)) = 0$; we conclude that $\nabla f(\hat\pi(t)) \in K$. Direct computation yields
$$ \nabla f(\hat\pi(t)) = 2(\sigma(t)\sigma(t)')^{-1}(\operatorname{sgn}(a)b(t) + \hat\pi(t)) = 2(\sigma(t)')^{-1}\hat\theta(t) $$
and
$$ \nabla g(\pi) = \sigma(t)\sigma(t)'\pi - a\,b(t). $$
Let $\pi^*(t) = \frac{|a|}{2}\nabla f(\hat\pi(t)) = |a|(\sigma(t)')^{-1}\hat\theta(t)$. Then $\pi^*(t) \in K$ and simple algebra shows that
$$ (\pi^*(t))'\nabla g(\pi^*(t)) = 0, \qquad \pi'\nabla g(\pi^*(t)) \ge 0 $$
for all $\pi \in K$, which implies that $\pi^*(t)$ is the unique minimizer of $g$ over $\pi \in K$. Furthermore,
$$ g(\pi^*(t)) = \frac12 a^2|\hat\theta(t)|^2 - a^2\hat\theta(t)'\sigma(t)^{-1}b(t) = -\frac12 a^2|\hat\theta(t)|^2. \qquad \Box $$
We now state the main result of this section.
Theorem 3.8 Assume K is a closed convex cone and Assumptions 3.1 and 3.3 hold. Then
there exists a function u ∈ C 0 ([0, T ] × [0, ∞)) ∩ C 1,2 ([0, T ) × (0, ∞)) which is a classical
solution to the HJB equation (3). The maximum of the Hamiltonian H is achieved at
$$ \pi^*(t,x) = -(\sigma(t)')^{-1}\hat\theta(t)\,\frac{u_x(t,x)}{x\,u_{xx}(t,x)} $$
and π ∗ (t, x) ∈ K. Furthermore, u(t, x) is strictly increasing and strictly concave in x for
fixed t ∈ [0, T ) with u(T, x) = U (x) and u(t, 0) = 0, and 0 ≤ u(t, x) ≤ K̃(1 + xp ) for some
constant K̃.
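A minimal numerical illustration of Theorem 3.8 is to recover $u$ and $\pi^*$ from the dual side: minimize $\hat V(t,y) + xy$ over a grid of $y$ as in (14), read off $u_x = Y(t,x)$ and $u_{xx} = -1/\hat V_{yy}(t, Y(t,x))$, and plug them into the formula for $\pi^*$. The sketch below does this for the power utility $U(x) = \frac1\alpha x^\alpha$ in a one-stock market, where $\hat V(t,y) = \frac{1-\alpha}{\alpha}y^{\frac{\alpha}{\alpha-1}}e^{\frac{\alpha\tau}{(\alpha-1)^2}}$ (cf. Remark 6.1) and the exact answers $u(t,x) = U(x)e^{\frac{\alpha\tau}{1-\alpha}}$ and the Merton proportion are available for comparison; all parameter values are illustrative.

```python
import numpy as np

alpha, theta, sigma, T, t = 0.5, 0.3, 0.2, 1.0, 0.0
tau = 0.5 * theta ** 2 * (T - t)

def V_hat(y):
    # Closed-form dual value function for U(x) = x^alpha / alpha.
    return (1.0 - alpha) / alpha * y ** (alpha / (alpha - 1.0)) \
        * np.exp(alpha * tau / (alpha - 1.0) ** 2)

def u_and_pi_star(x):
    """u(t,x) = inf_{y>0} {V_hat(y) + x y} over a log-grid, plus the feedback
    control pi* = -(theta_hat/sigma) * u_x / (x u_xx), using the dual relations
    u_x = Y(t,x) and u_xx = -1 / V_hat_yy(t, Y(t,x))."""
    y = np.exp(np.linspace(-10.0, 10.0, 200001))
    i = int(np.argmin(V_hat(y) + x * y))
    y_star = y[i]                                  # the minimizer Y(t, x)
    u_val = V_hat(y_star) + x * y_star
    h = 1e-4 * y_star                              # central difference for V_yy
    V_yy = (V_hat(y_star + h) - 2.0 * V_hat(y_star) + V_hat(y_star - h)) / h ** 2
    u_x, u_xx = y_star, -1.0 / V_yy
    return u_val, -(theta / sigma) * u_x / (x * u_xx)

x = 2.0
u_num, pi_num = u_and_pi_star(x)
print(u_num, "vs", x ** alpha / alpha * np.exp(alpha * tau / (1.0 - alpha)))
print(pi_num, "vs", theta / (sigma * (1.0 - alpha)))   # Merton proportion
```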
Proof. Let $u(t,x)$ be defined by (14). We have, for $(t,x) \in [0,T)\times(0,\infty)$,
$$ u(t,x) = \hat V(t, Y(t,x)) + x\,Y(t,x), $$
which yields the regularity of $u(t,x)$. Direct computation yields
$$ \frac{\partial u}{\partial t}(t,x) = \frac{\partial\hat V}{\partial t}(t, Y(t,x)), \qquad u_x(t,x) = Y(t,x), \qquad u_{xx}(t,x) = -\frac{1}{\hat V_{yy}(t, Y(t,x))}. $$
Since $Y(t,x) > 0$ and $\hat V_{yy}(t, Y(t,x)) > 0$ for fixed $0 \le t < T$, the function $u(t,\cdot)$ is strictly increasing and strictly concave. Substituting $y = Y(t,x)$ into equation (8) we get
$$ \frac{\partial u}{\partial t}(t,x) - \frac12|\hat\theta(t)|^2\,\frac{u_x^2(t,x)}{u_{xx}(t,x)} = 0. \tag{15} $$
We conclude by Lemma 3.7 that $u$ is a classical solution to the HJB equation (3) and that the maximum of the Hamiltonian is achieved at $\pi^*(t,x)$. Furthermore, from Lemma 3.5 we have
$$ u(t,x) \le \inf_{y>0}\{K(1 + y^{\frac{p}{p-1}}) + xy\} \le \tilde K(1 + x^p), $$
where $\tilde K = K + \frac1p\big(\frac{1-p}{Kp}\big)^{p-1}$. □

4 Verification Theorem
Theorem 3.8 confirms that there is a classical solution u to the HJB equation (3) and the
Hamiltonian achieves its maximum at a point π ∗ in K, i.e., there is a classical solution to the
nonlinear PDE (15). We now show that the value function V is indeed a smooth classical
solution to the HJB equation (3) with the optimal feedback control π ∗ . Since the drift and
diffusion terms in SDE (1) do not satisfy the uniform Lipschitz continuity and linear growth
conditions due to the unboundedness of the control set K, we do not know if solutions to
SDE (1) are square integrable and cannot directly apply the method of localization and the
dominated convergence theorem to prove the verification theorem, see [11] for details. We
need to assume an extra exponential moment condition for the optimal trading strategy π ∗
to make the verification theorem work. The next theorem is the main result of this section.
Theorem 4.1 Let $u$ be given as in Theorem 3.8. Then $V(t,x) \le u(t,x)$ on $[0,T]\times(0,\infty)$. Furthermore, if SDE (1) admits a unique nonnegative strong solution $\bar X$ with the feedback control $\pi^*$ defined in Theorem 3.8, and $\pi^*$ satisfies the exponential moment condition
$$ E\Big[\exp\Big( \frac12\int_0^T |\pi^*(s, \bar X_s^{t,x})'\sigma(s)|^2\,ds \Big)\Big] < \infty, \tag{16} $$
then $V(t,x) = u(t,x)$ on $[0,T]\times[0,\infty)$ and $\pi^*$ is an optimal Markovian control.
Proof. Since $u$ is a smooth classical solution to the HJB equation (3) we have, for all $(t,x) \in [0,T)\times(0,\infty)$ and $\pi \in K$, that
$$ \frac{\partial u}{\partial t}(t,x) + u_x(t,x)\,x\,\pi' b(t) + \frac12 u_{xx}(t,x)\,x^2|\pi'\sigma(t)|^2 \le 0, \tag{17} $$
and the equality holds in (17) if $\pi = \pi^*(t,x)$. For any $s \in [t,T)$ and any admissible control $\pi$, we have, by Ito's lemma and (17), that
$$ u(s, X_s^{t,x}) \le u(t,x) + \int_t^s u_x(v, X_v^{t,x})\,X_v^{t,x}\,\pi_v'\sigma(v)\,dW_v, $$
where $X_s^{t,x}$ is a solution to SDE (1) with trading strategy $\pi$ and initial condition $X_t^{t,x} = x$. Since $u$ is nonnegative, the local martingale $\int_t^s u_x(v, X_v^{t,x})X_v^{t,x}\pi_v'\sigma(v)\,dW_v$ is bounded below and is therefore a supermartingale. We have
$$ E[u(s, X_s^{t,x})] \le u(t,x). $$
Let $s$ tend to $T$. We apply Fatou's lemma to get, noting also the continuity and the terminal condition of $u$, that
$$ E[U(X_T^{t,x})] \le u(t,x), $$
which gives $V(t,x) \le u(t,x)$.
It is slightly more involved to show $V(t,x) = u(t,x)$ with the optimal trading strategy $\pi^*$, as Fatou's lemma is not enough for the equality and the dominated convergence theorem cannot be applied. For any $s \in [t,T)$, stopping time $\tau \in [t,\infty)$, and optimal control $\pi^*$ satisfying (16), we have, by Ito's lemma and (17), that
$$ u(s\wedge\tau, \bar X_{s\wedge\tau}^{t,x}) = u(t,x) + \int_t^{s\wedge\tau} u_x(v, \bar X_v^{t,x})\,\bar X_v^{t,x}\,\pi^*(v, \bar X_v^{t,x})'\sigma(v)\,dW_v. \tag{18} $$
Let
$$ \tau = \tau_n = \inf\Big\{ s \ge t : \int_t^s |u_x(v, \bar X_v^{t,x})\,\bar X_v^{t,x}\,\pi^*(v, \bar X_v^{t,x})'\sigma(v)|^2\,dv \ge n \Big\}; $$
then the stopped process $\big\{\int_t^{s\wedge\tau} u_x(v, \bar X_v^{t,x})\bar X_v^{t,x}\pi^*(v, \bar X_v^{t,x})'\sigma(v)\,dW_v,\ t \le s \le T\big\}$ is a martingale. Taking expectations in (18) leads to
$$ E[u(s\wedge\tau_n, \bar X_{s\wedge\tau_n}^{t,x})] = u(t,x). \tag{19} $$
Since $0 < p < 1$ we may choose $\alpha \in (1, 1/p)$ and show, by Theorem 3.8 and the convexity of the function $x^\alpha$, that
$$ u(s\wedge\tau_n, \bar X_{s\wedge\tau_n}^{t,x})^\alpha \le \big(\tilde K(1 + (\bar X_{s\wedge\tau_n}^{t,x})^p)\big)^\alpha \le \tilde K^\alpha 2^{\alpha-1}\big(1 + (\bar X_{s\wedge\tau_n}^{t,x})^{\alpha p}\big) = \tilde K^\alpha 2^{\alpha-1}\Big(1 + x^{\alpha p}\,\Gamma_{s\wedge\tau_n}\exp\Big(\int_t^{s\wedge\tau_n}\Psi_v\,dv\Big)\Big), $$
where
$$ \Gamma_{s\wedge\tau_n} = \exp\Big( \int_t^{s\wedge\tau_n} (\alpha p)\,\pi^*(v, \bar X_v^{t,x})'\sigma(v)\,dW_v - \frac12(\alpha p)^2\int_t^{s\wedge\tau_n} |\pi^*(v, \bar X_v^{t,x})'\sigma(v)|^2\,dv \Big), $$
$$ \Psi_v = (\alpha p)\,\pi^*(v, \bar X_v^{t,x})'b(v) - \frac12(\alpha p)(1-\alpha p)\,|\pi^*(v, \bar X_v^{t,x})'\sigma(v)|^2. $$
Simple algebra shows that
$$ \Psi_v = -\frac12(\alpha p)(1-\alpha p)\Big|\pi^*(v, \bar X_v^{t,x})'\sigma(v) - \frac{1}{1-\alpha p}\,\sigma(v)^{-1}b(v)\Big|^2 + \frac{\alpha p}{2(1-\alpha p)}|\sigma(v)^{-1}b(v)|^2 \le \frac{\alpha p}{2(1-\alpha p)}|\theta(v)|^2. $$
Therefore,
$$ u(s\wedge\tau_n, \bar X_{s\wedge\tau_n}^{t,x})^\alpha \le \tilde K^\alpha 2^{\alpha-1}\Big( 1 + x^{\alpha p}\,\Gamma_{s\wedge\tau_n}\exp\Big(\int_t^T \frac{\alpha p}{2(1-\alpha p)}|\theta(v)|^2\,dv\Big)\Big). $$
Finally, since $\pi^*$ satisfies (16) and $0 < \alpha p < 1$, we know that $\Gamma_{s\wedge\tau_n}$ is a martingale by Novikov's condition, which implies
$$ E[u(s\wedge\tau_n, \bar X_{s\wedge\tau_n}^{t,x})^\alpha] \le \tilde K^\alpha 2^{\alpha-1}\Big( 1 + x^{\alpha p}\exp\Big(\int_t^T \frac{\alpha p}{2(1-\alpha p)}|\theta(v)|^2\,dv\Big)\Big). $$
We conclude that $\{u(s\wedge\tau_n, \bar X_{s\wedge\tau_n}^{t,x}) : n \ge 1\}$ is a family of uniformly integrable random variables. Since $\tau_n \uparrow \infty$ a.s. as $n \to \infty$ and $u \in C^0([0,T]\times[0,\infty))$, we may let $n$ tend to infinity in (19) to get
$$ E[u(s, \bar X_s^{t,x})] = u(t,x). $$
We can apply exactly the same argument as above and let $s$ tend to $T$ to get
$$ u(t,x) = E[U(\bar X_T^{t,x})] \le V(t,x). $$
We have proved $V(t,x) = u(t,x)$ and the optimal feedback control is $\pi^*$. □
5 Smoothness of Value Functions
In this section we show that if the value function is continuous on the closure of its domain
then it is in fact smooth. Admissible trading strategies π are not assumed to satisfy (16) and
therefore the verification theorem (Theorem 4.1) cannot be applied.
Theorem 5.1 Assume that $\tilde V$ is continuous on $[0,T]\times(0,\infty)$. Then $\tilde V = \hat V$.
Proof. Since $\hat\pi \in \tilde K$ is an admissible control for the dual problem, we have
$$ 0 \le \tilde V(t,y) \le E[\tilde U(\hat Y_T^{t,y})] \le K(1 + y^{\frac{p}{p-1}}), \quad y > 0. $$
Hence $0 \le \tilde V(t,y),\ \hat V(t,y) \le K(1 + y^{\frac{p}{p-1}})$. Let $h(y) = y + y^{-m}$ with $m > \frac{p}{1-p}$. Then
$$ h(y) > 0, \qquad h''(y) > 0 $$
for $y > 0$. Let
$$ w(t,y) = \frac{\tilde V(t,y) - \hat V(t,y)}{e^{\lambda t}h(y)}, $$
where $\lambda$ is a constant to be determined later. Then $w$ is continuous on $[0,T]\times(0,\infty)$ and
$$ w(T,y) = 0, \qquad \lim_{y\to 0} w(t,y) = 0, \qquad \lim_{y\to\infty} w(t,y) = 0. $$
Next, we prove that
$$ w(t,y) \le 0 \quad \text{for } (t,y) \in [0,T]\times(0,\infty). $$
If it were not true, we would find a point $(t_0, y_0) \in [0,T)\times(0,\infty)$ such that
$$ w_0 := w(t_0, y_0) = \sup_{[0,T)\times(0,\infty)} w(t,y) > 0. $$
Let
$$ \phi(t,y) := \hat V(t,y) + w_0 e^{\lambda t}h(y) $$
be a test function which satisfies $\tilde V(t,y) \le \phi(t,y)$ and $\tilde V(t_0, y_0) = \phi(t_0, y_0)$. Since $\tilde V$ is a viscosity subsolution to (7) we have
$$ \frac{\partial\phi}{\partial t} + \inf_{\tilde\pi \in \tilde K}\Big\{ \frac12|\theta(t) + \sigma(t)^{-1}\tilde\pi|^2 y^2 \phi_{yy} \Big\} \ge 0 $$
at $(t_0, y_0)$. Substituting $\phi$ into the inequality, also noting that $\phi_{yy} > 0$ and $\hat V$ is a solution to (8), we get
$$ w_0 e^{\lambda t}\Big( \frac12|\hat\theta(t)|^2 y^2 h'' + \lambda h \Big) \ge 0 $$
at $(t_0, y_0)$. Substituting $h(y) = y + y^{-m}$ into the above inequality, we obtain
$$ \lambda y_0^{m+1} + \lambda + \frac12|\hat\theta(t_0)|^2 m(m+1) \ge 0. $$
This leads to a contradiction if we choose $\lambda < -\frac12\theta_1^2 m(m+1)$, where $\theta_1 = \max_{0\le t\le T}|\hat\theta(t)|$. This proves that $w(t,y) \le 0$, i.e., $\tilde V(t,y) \le \hat V(t,y)$, for all $(t,y) \in [0,T]\times(0,\infty)$. Similarly, we can show that $\hat V(t,y) \le \tilde V(t,y)$. □
Lemma 5.2 Let Assumption 3.1 hold. Then the value function $V(t,x)$ satisfies
$$ 0 \le V(t,x) \le \tilde L(1 + x^p) $$
for some constant $\tilde L$.
Proof. Define a stochastic process for $0 \le t \le T$ by
$$ \Gamma_t = \exp\Big( -\int_0^t \theta(s)'\,dW_s - \frac12\int_0^t |\theta(s)|^2\,ds \Big). $$
Assumption 3.3 and Novikov's condition imply that $\Gamma$ is a positive martingale. Define an equivalent probability measure $Q$ by $\frac{dQ}{dP} = \Gamma_T$. Then Girsanov's theorem implies that $W_t' = W_t + \int_0^t \theta(s)\,ds$ is a $Q$-Brownian motion and the wealth process $X$ is a $Q$-supermartingale for $0 \le t \le T$. Therefore $E_Q[X_T] \le x$. Note also that
$$ \frac{dP}{dQ} = \tilde\Gamma_T := \exp\Big( \int_0^T \theta(s)'\,dW_s' - \frac12\int_0^T |\theta(s)|^2\,ds \Big). $$
Let $\tilde p = 1/p$ and $\tilde q = 1/(1-p)$. Applying the Holder inequality, we get
$$ E[X_T^p] = E_Q\Big[X_T^p\,\frac{dP}{dQ}\Big] \le \big(E_Q[(X_T^p)^{\tilde p}]\big)^{1/\tilde p}\big(E_Q[\tilde\Gamma_T^{\tilde q}]\big)^{1/\tilde q} \le x^p\big(E_Q[\tilde\Gamma_T^{\tilde q}]\big)^{1/\tilde q}. $$
Since
$$ \tilde\Gamma_T^{\tilde q} = \exp\Big( \int_0^T \tilde q\,\theta(s)'\,dW_s' - \frac12\int_0^T \tilde q^2|\theta(s)|^2\,ds \Big)\exp\Big( \frac12\int_0^T (\tilde q^2 - \tilde q)|\theta(s)|^2\,ds \Big) $$
and $W'$ is a $Q$-Brownian motion, we get
$$ E_Q[\tilde\Gamma_T^{\tilde q}] = \exp\Big( \frac12\int_0^T (\tilde q^2 - \tilde q)|\theta(s)|^2\,ds \Big), $$
which results in
$$ \big(E_Q[\tilde\Gamma_T^{\tilde q}]\big)^{1/\tilde q} = \exp\Big( \frac{p}{2(1-p)}\int_0^T |\theta(s)|^2\,ds \Big). $$
Putting everything together, we get from (9) that
$$ V(t,x) = \sup_{\pi \in \mathcal{A}(t,x)} E[U(X_T)] \le L\Big(1 + \sup_{\pi \in \mathcal{A}(t,x)} E[X_T^p]\Big) \le \tilde L(1 + x^p), $$
where $\tilde L = L\,e^{\frac{p}{2(1-p)}\int_0^T|\theta(s)|^2\,ds}$. □
Lemma 5.3 Assume the value function $V$ is continuous on $[0,T]\times[0,\infty)$. Then $V$ is a viscosity solution to the HJB equation
$$ -\frac{\partial V}{\partial t}(t,x) - H(t, x, V_x(t,x), V_{xx}(t,x)) = 0 $$
with the terminal condition $V(T,x) = U(x)$ and the boundary condition $V(t,0) = 0$.
Proof. Since $K$ is a cone we know that the Hamiltonian $H$ defined in (4) is $\infty$ if $M > 0$ and is either $0$ or $\infty$ if $M = 0$. Therefore, if $H$ is positive at some point $(t,x,p,M)$ we must have $M < 0$. Applying Lemma 3.7, we can write
$$ H(t,x,p,M) = x^2 M \inf_{\pi \in K}\Big\{ \pi' b(t)\frac{p}{xM} + \frac12|\sigma(t)'\pi|^2 \Big\} = -\frac{p^2}{2M}|\hat\theta(t)|^2. $$
It is clear that $H$ is continuous at any point where it is positive. The remaining proof that $V$ is a viscosity supersolution and a subsolution is the same as that of [11], Prop. 4.3.1 and Prop. 4.3.2. The only difference is that we do not use the function $G$ as in the proof of Prop. 4.3.2 for the subsolution property. The function $G$ ensures that $H$ is continuous at any point where it is positive, which has been established directly here. □
Remark 5.4 In fact, we only need to assume that the value function $V$ is locally bounded on $[0,T]\times[0,\infty)$ to get the viscosity property. We then need to define its upper-semicontinuous envelope $V^*$ and lower-semicontinuous envelope $V_*$ on $[0,T]\times[0,\infty)$ by
$$ V^*(t,x) = \limsup_{(t',x')\to(t,x)} V(t',x'), \qquad V_*(t,x) = \liminf_{(t',x')\to(t,x)} V(t',x'), $$
and use $V^*$ (and $V_*$) instead of $V$ in the definition of viscosity subsolution (and supersolution). Since $V$ may be discontinuous at the boundary of $[0,T]\times[0,\infty)$, it is much more subtle to define the proper terminal and boundary conditions, see [11, 14] for details. This is the main reason we assume that $V$ is continuous on $[0,T]\times[0,\infty)$. In general, one needs to add some strong conditions to ensure the continuity of $V$ on the closure of its domain, see [6].
We can now state the main result of this section.
Theorem 5.5 Assume that the value function V is continuous on [0, T ] × [0, ∞). Then
V = u and V is a classical solution to the HJB equation (3).
Proof. Let $H = \max\{\tilde L, \tilde K\}$. Then $0 \le V(t,x), u(t,x) \le H(1 + x^p)$. Let $h(x) = (x+1)^q$ with $p < q < 1$. Then
$$ h(x) > 0, \qquad h'(x) > 0, \qquad h''(x) < 0 $$
for $x \ge 0$. Define
$$ w(t,x) = \frac{V(t,x) - u(t,x)}{e^{\lambda t}h(x)}, $$
where $\lambda$ is a constant to be determined later. Then $w \in C([0,T]\times[0,\infty))$ and
$$ w(T,x) = 0, \qquad w(t,0) = 0, \qquad \lim_{x\to\infty} w(t,x) = 0. $$
We now show that
$$ w(t,x) \le 0 \quad \text{for all } (t,x) \in [0,T]\times[0,\infty). $$
If it were not true, we would find a point $(t_0, x_0) \in (0,T)\times(0,\infty)$ such that
$$ w_0 := w(t_0, x_0) = \sup_{[0,T]\times[0,\infty)} w(t,x) > 0. $$
Let
$$ \phi(t,x) = u(t,x) + w_0 e^{\lambda t}h(x) $$
be a test function which satisfies $V(t,x) \le \phi(t,x)$ and $V(t_0, x_0) = \phi(t_0, x_0)$. Since $V$ is a viscosity subsolution to (3) we have
$$ \frac{\partial\phi}{\partial t} + \sup_{\pi \in K}\Big\{ \frac12|\pi'\sigma(t)|^2 x^2 \phi_{xx} + \pi' b(t)\,x\,\phi_x \Big\} \ge 0 $$
at $(t_0, x_0)$. Substituting $\phi$ into the inequality, also noting that $u$ is a solution to (3), we get
$$ w_0 e^{\lambda t}\Big( \sup_{\pi \in K}\Big\{ \frac12|\pi'\sigma(t)|^2 x^2 h'' + \pi' b(t)\,x\,h' \Big\} + \lambda h \Big) \ge 0 $$
at $(t_0, x_0)$. Applying Lemma 3.7 we obtain
$$ -\frac12|\hat\theta(t_0)|^2\,\frac{h'^2(x_0)}{h''(x_0)} + \lambda h(x_0) \ge 0. $$
Since $h(x) = (x+1)^q$, the above inequality becomes
$$ -\frac12|\hat\theta(t_0)|^2\,\frac{q(x_0+1)^q}{q-1} + \lambda(x_0+1)^q \ge 0. $$
This leads to a contradiction if we choose $\lambda < -\frac{q}{2(1-q)}\theta_1^2$. This proves that $V(t,x) \le u(t,x)$. Similarly, we can show $u(t,x) \le V(t,x)$. □
6 Applications
In this section we present two examples which can be solved with the main results of the
paper. The first one is the efficient frontier of utility and CVaR and the second one is the
preservation of monotonicity of the absolute risk aversion.
6.1 Efficient Frontier of Utility and CVaR
In the standard utility maximization theory the risk is not considered. However, in practice one often needs to find the optimal tradeoff between return and risk. This is the fundamental idea of Markowitz's mean-variance efficient frontier theory. In [18] the problem of the efficient frontier of utility and CVaR is discussed. A utility loss random variable $Z$ is defined by $Z = U(x_0) - U(X_T)$, which represents the risk associated with a trading strategy $\pi$ in comparison with the risk-free strategy $\pi = 0$. Two common risk measures are VaR and CVaR. Given a number $\beta \in (0,1)$ (close to 1), the $\beta$-VaR of $Z$ is defined by
$$ \mathrm{VaR}_\beta = \min\{z : P(Z \le z) \ge \beta\} $$
and the $\beta$-CVaR of $Z$ is defined by
$$ \mathrm{CVaR}_\beta = \text{mean of the } \beta\text{-tail distribution of } Z, $$
where the $\beta$-tail distribution $F_\beta(z)$ is defined by
$$ F_\beta(z) = \begin{cases} 0 & \text{for } z < \mathrm{VaR}_\beta, \\ \dfrac{P(Z\le z) - \beta}{1-\beta} & \text{for } z \ge \mathrm{VaR}_\beta. \end{cases} $$
A fundamental minimization formula is established in [12], Theorem 10, to compute $\mathrm{VaR}_\beta$ and $\mathrm{CVaR}_\beta$ by solving a convex minimization problem in which the minimum value is $\mathrm{CVaR}_\beta$ and the left endpoint of the minimum solution set gives $\mathrm{VaR}_\beta$. Specifically,
$$ \mathrm{CVaR}_\beta = \min_y\big[\,y + \delta E(Z - y)^+\,\big], $$
where $\delta = (1-\beta)^{-1}$. If $y^*$ is the left endpoint of the minimum solution set, then $\mathrm{VaR}_\beta = y^*$.
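The minimization formula of [12] is easy to apply to a sample of the utility loss $Z$. The sketch below estimates $\mathrm{VaR}_\beta$ and $\mathrm{CVaR}_\beta$ from simulated losses; the loss distribution used here is an arbitrary stand-in, chosen only because its tail quantities are known in closed form for checking.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def var_cvar(Z, beta=0.95):
    """Rockafellar-Uryasev: CVaR_beta = min_y [ y + E(Z - y)^+ / (1 - beta) ];
    the empirical beta-quantile is reported as VaR_beta (it attains the
    minimum for a continuous loss distribution)."""
    delta = 1.0 / (1.0 - beta)
    objective = lambda y: y + delta * np.mean(np.maximum(Z - y, 0.0))
    res = minimize_scalar(objective, bounds=(Z.min(), Z.max()), method="bounded")
    return np.quantile(Z, beta), res.fun

rng = np.random.default_rng(0)
Z = rng.standard_normal(200000)        # stand-in sample of utility losses
var95, cvar95 = var_cvar(Z)
# For a standard normal loss: VaR_0.95 ~ 1.645, CVaR_0.95 ~ phi(1.645)/0.05 ~ 2.06.
print("VaR_0.95 ~", var95, "  CVaR_0.95 ~", cvar95)
```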
The following optimization problem is discussed in [18]:
$$ \sup_\pi\big( E[U(X_T)] - \lambda\,\mathrm{CVaR}_\beta \big) \quad \text{subject to (1)}, $$
where $\lambda$ is a nonnegative parameter. $\lambda = 0$ corresponds to the utility maximization while $\lambda \to \infty$ corresponds to the CVaR minimization. The efficient frontier of utility and CVaR can be found with a two-stage optimization procedure. The first stage is to solve a parametric utility maximization problem
$$ u(x_0, y) = \sup_\pi E[U^y(X_T)] \quad \text{subject to (1)}, \tag{20} $$
where
$$ U^y(x) = U(x) - \lambda\delta(U(x_0) - U(x) - y)^+ + \lambda\delta(U(x_0) - y)^+, $$
and the second stage is to solve a one-dimensional continuous concave maximization problem
$$ u(x_0) = \sup_y\big( u(x_0, y) - \lambda\delta(U(x_0) - y)^+ - \lambda y \big). \tag{21} $$
The first stage problem (20) is a nonsmooth utility maximization problem even if the original utility function $U$ is smooth. [18] shows the existence of an optimal solution to (20) for every fixed $y$ under some additional conditions on $U$ (strictly increasing, strictly concave, $C^1$, and $U'(0) = \infty$) and the existence of an optimal solution to (21).
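To make the first-stage objective concrete, the sketch below builds the modified utility $U^y$ of (20) for a power utility $U$ and evaluates its dual $\tilde U^y(w) = \sup_{x\ge 0}\{U^y(x) - xw\}$ by a grid search; the parameter values and the grid bounds are illustrative assumptions, and the grid-based supremum is only an approximation.

```python
import numpy as np

alpha, lam, beta, x0 = 0.5, 1.0, 0.95, 1.0
delta = 1.0 / (1.0 - beta)
c1, U0 = lam * delta, x0 ** alpha / alpha       # c1 = lambda*delta, U0 = U(x0)

def U(x):
    return x ** alpha / alpha

def U_y(x, y):
    """Modified (nonsmooth) utility of the first-stage problem (20)."""
    return U(x) - c1 * np.maximum(U0 - U(x) - y, 0.0) + c1 * max(U0 - y, 0.0)

def U_y_dual(w, y, x_max=200.0, n=200001):
    """U~^y(w) = sup_{x >= 0} { U^y(x) - x w }, approximated on an x-grid."""
    x = np.linspace(0.0, x_max, n)
    return float(np.max(U_y(x, y) - x * w))

# U^y has a kink where U(x) = U(x0) - y, i.e. at x = (alpha*(U0 - y))**(1/alpha).
y = 0.5
for w in (0.2, 1.0, 5.0):
    print("w =", w, " U~^y(w) ~", U_y_dual(w, y))
```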
Note that U y inherits the properties (except differentiability) of U for every fixed y.
Therefore Theorems 3.8 and 4.1 hold true for the first stage problem (20) if U satisfies
Assumption 3.1. In particular, we know that there exists a classical solution to the HJB
equation (3) with U being replaced by U y and that the value function is equal to that
smooth solution if admissible trading strategies satisfy the integrability condition (16) for
every fixed y. This opens a way to finding numerically the optimal value and the optimal
feedback control for parametric problem (20).
Remark 6.1 Although we know that for every fixed $y$ there is a smooth solution $u$ to the HJB equation (3), we may not be able to find a closed-form expression for $u$. For example, let $U(x) = \frac1\alpha x^\alpha$ with $0 < \alpha < 1$, $n = 1$, $b(t) = b$, and $\sigma(t) = \sigma$. Denote $c_1 = \lambda\delta$, $c_2 = U(x_0) - y$, and $\tau = \frac12\frac{b^2}{\sigma^2}(T-t)$. If $c_1 = 0$ or $c_2 \le 0$, i.e., $\lambda = 0$ or $y \ge U(x_0)$, then $U^y(x) = U(x)$, the dual function is $\tilde U^y(w) = \frac{1-\alpha}{\alpha}w^{\frac{\alpha}{\alpha-1}}$, the dual value function is $\hat V^y(t,z) = \frac{1-\alpha}{\alpha}z^{\frac{\alpha}{\alpha-1}}e^{\frac{\alpha\tau}{(\alpha-1)^2}}$, and the solution to the HJB equation (3) is $u^y(t,x) = U(x)e^{\frac{\alpha\tau}{1-\alpha}}$. However, if $c_1 > 0$ and $c_2 > 0$, i.e., $\lambda > 0$ and $y < U(x_0)$, then (the details of the derivation are left to the reader)
$$ \tilde U^y(w) = \begin{cases} \dfrac{1-\alpha}{\alpha}\,w^{\frac{\alpha}{\alpha-1}}(1+c_1)^{\frac{1}{1-\alpha}} & \text{for } (1+c_1)(\alpha c_2)^{\frac{\alpha-1}{\alpha}} \le w, \\[1ex] c_2(1+c_1) - (\alpha c_2)^{\frac1\alpha}\,w & \text{for } (\alpha c_2)^{\frac{\alpha-1}{\alpha}} \le w \le (1+c_1)(\alpha c_2)^{\frac{\alpha-1}{\alpha}}, \\[1ex] \dfrac{1-\alpha}{\alpha}\,w^{\frac{\alpha}{\alpha-1}} + c_1 c_2 & \text{for } w \le (\alpha c_2)^{\frac{\alpha-1}{\alpha}}, \end{cases} $$
and the dual value function is
$$ \begin{aligned} \hat V^y(t,z) = {}& \frac{1-\alpha}{\alpha}\,z^{\frac{\alpha}{\alpha-1}}e^{\frac{\alpha\tau}{(\alpha-1)^2}}\bigg[ (1+c_1)^{\frac{1}{1-\alpha}}\,\Phi\Big(\frac{(\alpha+1)\sqrt\tau}{(\alpha-1)\sqrt2} - \frac{\ln B}{\sqrt{2\tau}}\Big) + \Phi\Big(\frac{\ln A}{\sqrt{2\tau}} - \frac{(\alpha+1)\sqrt\tau}{(\alpha-1)\sqrt2}\Big) \bigg] \\ & - (\alpha c_2)^{\frac1\alpha}\,z\,\bigg[ \Phi\Big(\frac{\ln B}{\sqrt{2\tau}} - \sqrt{\frac\tau2}\Big) - \Phi\Big(\frac{\ln A}{\sqrt{2\tau}} - \sqrt{\frac\tau2}\Big) \bigg] \\ & + c_2(1+c_1)\,\Phi\Big(\frac{\ln B}{\sqrt{2\tau}} + \sqrt{\frac\tau2}\Big) - c_2\,\Phi\Big(\frac{\ln A}{\sqrt{2\tau}} + \sqrt{\frac\tau2}\Big), \end{aligned} $$
where $A = \frac1z(\alpha c_2)^{\frac{\alpha-1}{\alpha}}$, $B = (1+c_1)A$, and $\Phi$ is the cumulative distribution function of a standard normal variable. To construct explicitly the smooth solution $u^y$ to the HJB equation (3) one has to first find the explicit solution $z$ of the equation $-\frac{\partial}{\partial z}\hat V^y(t,z) = x$, which seems an impossible task. Nonetheless, we know there is a classical solution to the HJB equation (3) and we can use the finite difference method to find its numerical solution.
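As suggested at the end of Remark 6.1, one simple way to compute $u^y$ numerically is to solve the linear dual PDE (8) backwards in time by finite differences, working in the log variable $z = \ln y$ where the equation has constant coefficients (cf. (12)), and then to take the conjugate (14). The sketch below is such an illustration, not the authors' implementation: the explicit scheme, the truncation of the $z$-domain with frozen boundary values, and the test data (a power utility, for which the exact $u$ is known) are all choices made for the example.

```python
import numpy as np

def solve_dual_pde(U_tilde, theta_hat=0.3, T=1.0,
                   z_min=-8.0, z_max=8.0, n_z=401, n_t=2000):
    """Explicit finite-difference scheme for the dual PDE in z = ln y:
        v_t + 0.5*theta_hat^2*(v_zz - v_z) = 0,   v(T, z) = U~(e^z).
    Returns the grid z and the approximation of v(0, z)."""
    z = np.linspace(z_min, z_max, n_z)
    dz, dt = z[1] - z[0], T / n_t
    assert 0.5 * theta_hat ** 2 * dt / dz ** 2 <= 0.5, "explicit scheme unstable"
    v = U_tilde(np.exp(z))                     # terminal condition at time T
    for _ in range(n_t):                       # march backwards from T to 0
        v_zz = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dz ** 2
        v_z = (v[2:] - v[:-2]) / (2.0 * dz)
        v_new = v.copy()                       # boundary values kept frozen
        v_new[1:-1] = v[1:-1] + dt * 0.5 * theta_hat ** 2 * (v_zz - v_z)
        v = v_new
    return z, v

def conjugate(z, v, x):
    """u(t, x) = inf_{y>0} {v(t, ln y) + x y}, minimized over the z-grid."""
    return float(np.min(v + x * np.exp(z)))

# Illustrative check with U(x) = 2*sqrt(x), i.e. U~(w) = 1/w and alpha = 1/2:
z, v0 = solve_dual_pde(lambda w: 1.0 / w)
x, tau = 2.0, 0.5 * 0.3 ** 2 * 1.0
print(conjugate(z, v0, x), "vs exact", 2.0 * np.sqrt(x) * np.exp(tau))
```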
6.2 Monotonicity of Absolute Risk Aversion Measure
In this subsection we assume $U \in C^2$. The Arrow-Pratt measure of absolute risk aversion for a utility function $U$ is defined by
$$ R(x) = -\frac{U''(x)}{U'(x)}. $$
$R$ is a constant for exponential utility functions and is a decreasing function for power and logarithmic utility functions. Since $R(x) = -(\ln U'(x))'$, it is clear that $R$ is increasing (decreasing) if and only if $\ln U'(x)$ is concave (convex). For the value function $V(t,x)$ with $V(T,x) = U(x)$ we may define a dynamic Arrow-Pratt measure of absolute risk aversion by
$$ R(t,x) = -\frac{V_{xx}(t,x)}{V_x(t,x)}, $$
provided all derivatives are well defined. The monotonicity properties of optimal investment strategies are discussed in [3], which shows that $R(t,x)$ inherits the monotonicity of $R(x)$ with the martingale approach. Here we give a new proof with the PDE approach and the duality method. We also extend the results of [3] as we do not need the Inada condition. The next result is needed in proving the monotonicity of $R(t,x)$.
Lemma 6.2 Suppose that $\tilde U \in C^1$ and $y\tilde U'(y) - \tilde U(y)$ is convex (concave) in $y$. Then $y\hat V_y(t,y) - \hat V(t,y)$ is strictly convex (concave) in $y$ for $t < T$.
Proof. Let $w(t,y) = y\hat V_y(t,y) - \hat V(t,y)$. From (13), we get
$$ w(t,y) = \frac{1}{2\sqrt{\pi\tau}}\,e^{-\frac14\tau}\int_0^\infty e^{-\frac{(\ln x)^2}{4\tau}}\,x^{-\frac32}\,\big[(yx)\tilde U'(yx) - \tilde U(yx)\big]\,dx. $$
This implies the convexity of $w(t,y)$ in $y$ for $t < T$. A simple computation confirms that $w$ is a solution to the linear PDE (8). As in Lemma 3.6, we deduce that if $w_{yy}(t_0, y_0) = 0$ for some $(t_0, y_0)$ with $t_0 < T$, then $\tilde U(y) = C_1 y\ln y + C_2 y + C_3$ with constants $C_1, C_2, C_3$, which contradicts Assumption 3.1. Therefore $w(t,y)$ is strictly convex in $y$ for $t < T$. □
The next theorem shows that $\ln V_x(t,x)$ preserves the convexity (concavity) of $\ln U'(x)$.
Theorem 6.3 Let the assumptions of Theorem 3.8 hold and let $U \in C^1$ be strictly increasing and strictly concave. Then $\ln V_x(t,x)$ is strictly convex (concave) for $t < T$ if $\ln U'(x)$ is convex (concave).
Proof. Assume $\ln U'(x)$ is concave. As in the proof of Lemma 4.1 in [3], we conclude that $\tilde U'(e^z)$ is convex. Then $y\tilde U'(y) - \tilde U(y)$ is convex. From Lemma 6.2, we see that $y\hat V_y(t,y) - \hat V(t,y)$ is strictly convex for $t < T$, i.e., $(y\hat V_y(t,y) - \hat V(t,y))_{yy} > 0$ for $t < T$. A direct computation implies
$$ (y\hat V_y(t,y) - \hat V(t,y))_{yy} = (y\hat V_{yy})_y = \frac{(\ln V_x)_{xx}}{V_{xx}\,((\ln V_x)_x)^2}. $$
Since $V$ is strictly concave in $x$ for $t \in [0,T)$, we conclude that $(\ln V_x)_{xx} < 0$ and $\ln V_x(t,x)$ is strictly concave for $t < T$. □
Corollary 6.4 In addition to the assumptions of Theorem 6.3, assume that U ∈ C 2 . Then
R(t, ∙) inherits the monotonicity of R for t < T . Furthermore, R(t, ∙) is strictly increasing
(decreasing) for t < T if R is increasing (decreasing).
Acknowledgement. The authors thank the referees for their constructive comments and
suggestions which have helped to improve the previous versions of the paper. The authors
also thank Martin Schweizer, Nizar Touzi, and the audience at an Oxford University seminar
by Harry Zheng for the useful discussions on the contents of the paper.
References
[1] B. Bian and P. Guan, Convexity preserving for fully nonlinear parabolic integrodifferential equations, Methods Appl. Anal., 15 (2008), pp. 39-52.
[2] B. Bian and P. Guan, A microscopic convexity principle for nonlinear partial differential
equations, Invent. Math., 177 (2009), pp. 307-335.
[3] C. Borell, Monotonicity properties of optimal investment strategies for log-Brownian
asset prices, Math. Finance, 17 (2007), pp. 143-153.
[4] B. Bouchard, N. Touzi, and A. Zeghal, Dual formulation of the utility maximization
problem: the case of nonsmooth utility, Ann. Appl. Probab., 14 (2004), pp. 678-717.
[5] G. Deelstra, H. Pham, and N. Touzi, Dual formulation of the utility maximization
problem under transaction costs, Ann. Appl. Probab., 11 (2001), pp. 1353-1383.
[6] W. Fleming and M. Soner, Controlled Markov Processes and Viscosity Solutions,
Springer, 1993.
[7] A. Friedman, Partial Differential Equations of Parabolic Type, Prentice-Hall, 1964.
[8] I. Karatzas and S.E. Shreve, Methods of Mathematical Finance, Springer, 1998.
[9] D. Kramkov and W. Schachermayer, The asymptotic elasticity of utility functions and
optimal investment in incomplete markets, Ann. Appl. Probab., 9 (1999), pp. 904-950.
[10] P. Lions and M. Musiela, Convexity of solutions of parabolic equations, C. R.
Math. Acad. Sci. Paris, 342 (2006), pp. 915-921.
[11] H. Pham, Continuous-time Stochastic Control and Optimization with Financial Applications, Springer, 2009.
[12] R.T. Rockafellar and S. Uryasev, Conditional value-at-risk for general loss distributions,
J. Banking Finance, 26 (2002), pp. 1443-1471.
[13] S. Shreve and M. Soner, Optimal investment and consumption with transaction costs,
Ann. Appl. Probab., 4 (1994), pp. 609-692.
[14] N. Touzi, Stochastic Control Problems, Viscosity Solutions, and Application to Finance,
Scuola Normale Superiore, 2002.
[15] N. Westray and H. Zheng, Constrained nonsmooth utility maximization without
quadratic inf-convolution, Stochastic Process. Appl., 119 (2009), pp. 1561-1579.
[16] N. Westray and H. Zheng, Minimal sufficient conditions for a primal optimizer in nonsmooth utility maximization, Finance Stoch., forthcoming, 2010.
[17] G. Xu and S. Shreve, A duality method for optimal consumption and investment under
short-selling prohibition, II: Constant market coefficients, Ann. Appl. Probab., 2 (1992), pp. 314-328.
[18] H. Zheng, Efficient frontier of utility and CVaR, Math. Methods Oper. Res., 70 (2009),
pp. 129-148.