
Random $G$-Expectations

Marcel Nutz*

First version: September 11, 2010. This version: June 28, 2012.

*Dept. of Mathematics, Columbia University, New York, [email protected]
Abstract. We construct a time-consistent sublinear expectation in the setting of volatility uncertainty. This mapping extends Peng's $G$-expectation by allowing the range of the volatility uncertainty to be stochastic. Our construction is purely probabilistic and based on an optimal control formulation with path-dependent control sets.

Keywords: $G$-expectation, volatility uncertainty, stochastic domain, risk measure, time-consistency

AMS 2000 Subject Classifications: primary 93E20, 91B30; secondary 60H30

Acknowledgements: Financial support by Swiss National Science Foundation Grant PDFM2-120424/1 is gratefully acknowledged. The author thanks Shige Peng, Mete Soner and Nizar Touzi for stimulating discussions as well as Laurent Denis, Sebastian Herrmann and the anonymous referees for helpful comments.
1 Introduction
The so-called $G$-expectation as introduced by Peng [13, 14] is a dynamic nonlinear expectation which advances the notions of $g$-expectations (Peng [10]) and backward SDEs (Pardoux and Peng [9]). Moreover, it yields a stochastic representation for a specific PDE and a risk measure for volatility uncertainty in financial mathematics (Avellaneda et al. [1], Lyons [6]). The concept of volatility uncertainty also plays a key role in the existence theory for second order backward SDEs (Soner et al. [18]), which were introduced as representations for a large class of fully nonlinear second order parabolic PDEs (Cheridito et al. [3]).

The $G$-expectation is a sublinear operator defined on a class of random variables on the canonical space $\Omega$. Intuitively, it corresponds to the worst-case expectation in a model where the volatility of the canonical process $B$ is seen as uncertain, but is postulated to take values in some bounded set $D$. The symbol $G$ then stands for the support function of $D$. If $\mathcal{P}_G$ is the set of martingale laws on $\Omega$ under which the volatility of $B$ behaves accordingly, the $G$-expectation at time $t=0$ may be expressed as the upper expectation $\mathcal{E}^G_0(X) := \sup_{P\in\mathcal{P}_G} E^P[X]$. This description is due to Denis et al. [4]. See also Denis and Martini [5] for a general study of related capacities.
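As a purely illustrative aside (not part of the paper), the following Python sketch approximates such an upper expectation $\sup_{P} E^P[X]$ over a finite family of constant-volatility martingale laws; the payoff, the volatility range standing in for $D$, and the discretization are all assumptions made for the example.

```python
import numpy as np

def upper_expectation(payoff, vols, n_paths=50_000, n_steps=100, T=1.0, seed=0):
    """Approximate sup_P E^P[payoff(B_T)] over constant-volatility martingale laws.

    Each sigma in `vols` plays the role of a point of the (here scalar) set D;
    the maximum over this finite family approximates the upper expectation.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    best = -np.inf
    for sigma in vols:
        dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
        B_T = sigma * dB.sum(axis=1)          # B has volatility sigma under this law
        best = max(best, payoff(B_T).mean())  # worst-case (upper) expectation
    return best

# Example: a call-type payoff under volatility uncertainty sigma in [0.1, 0.4]
print(upper_expectation(lambda x: np.maximum(x, 0.0), vols=np.linspace(0.1, 0.4, 7)))
```

Refining the volatility family and the time grid tightens this lower approximation of the supremum.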
For positive times $t$, the $G$-expectation is extended to a conditional expectation $\mathcal{E}^G_t(X)$ with respect to the filtration $(\mathcal{F}_t)_{t\ge 0}$ generated by $B$. When $X = f(B_T)$ for some sufficiently regular function $f$, then $\mathcal{E}^G_t(X)$ is defined via the solution of the nonlinear heat equation $\partial_t u - G(u_{xx}) = 0$ with boundary condition $u|_{t=0} = f$. The mapping $\mathcal{E}^G_t$ can be extended to random variables of the form $X = f(B_{t_1},\dots,B_{t_n})$ by a stepwise evaluation of the PDE and finally to a suitable completion of the space of all such random variables. As a result, one obtains a family $(\mathcal{E}_t)_{t\ge 0}$ of conditional $G$-expectations satisfying the semigroup property $\mathcal{E}_s\circ\mathcal{E}_t = \mathcal{E}_s$ for $s\le t$, also called the time-consistency property in this context. For an exhaustive overview of $G$-expectations and related literature we refer to Peng's recent ICM paper [16] and survey [15].
In this paper, we develop a formulation where the set $D$ is allowed to be path-dependent; i.e., we replace $D$ by a set-valued process $\mathbf{D} = \{\mathbf{D}_t(\omega)\}$. Intuitively, this means that the function $G(\cdot)$ is replaced by a random function $G(t,\omega,\cdot)$ and that the a priori bounds on the volatility can be adjusted to the observed evolution of the system, which is highly desirable for applications. Our main result is the existence of a time-consistent family $(\mathcal{E}_t)_{t\ge 0}$ of sublinear operators corresponding to this formulation. When $\mathbf{D}$ depends on $\omega$ in a Markovian way, $\mathcal{E}_t$ can be seen as a stochastic representation for a class of state-dependent nonlinear heat equations $\partial_t u - G(x,u_{xx}) = 0$ which are not covered by [18].
At time $t=0$, we again have a set $\mathcal{P}$ of probability measures and define $\mathcal{E}_0(X) := \sup_{P\in\mathcal{P}} E^P[X]$. For $t>0$, we want to have

$$\mathcal{E}_t(X) = \sup_{P\in\mathcal{P}} E^P[X|\mathcal{F}_t] \quad\text{in some sense.} \tag{1.1}$$

The main difficulty here is that the set $\mathcal{P}$ is not dominated by a finite measure. Moreover, as the resulting problem is non-Markovian in an essential way, the PDE approach outlined above seems unfeasible. We shall adopt the framework of regular conditional probability distributions and define, for each $\omega\in\Omega$, a quantity $\mathcal{E}_t(X)(\omega)$ by conditioning $X$ and $\mathbf{D}$ (and hence $\mathcal{P}$) on the path $\omega$ up to time $t$,

$$\mathcal{E}_t(X)(\omega) := \sup_{P\in\mathcal{P}(t,\omega)} E^P[X^{t,\omega}], \quad \omega\in\Omega. \tag{1.2}$$

Then the right hand side is well defined since it is simply a supremum of real numbers. This approach gives direct access to the underlying measures and allows for control-theoretic methods. There is no direct reference to the function $G$, so that $G$ is no longer required to be finite and we can work with an unbounded domain $\mathbf{D}$. The final result is the construction of a random variable $\mathcal{E}_t(X)$ which makes (1.1) rigorous in the form

$$\mathcal{E}_t(X) = \operatorname{ess\,sup}^{(P,\mathcal{F}_t)}_{P'\in\mathcal{P}(t,P)} E^{P'}[X|\mathcal{F}_t] \quad P\text{-a.s. for all } P\in\mathcal{P},$$

where $\mathcal{P}(t,P) = \{P'\in\mathcal{P}:\ P'=P \text{ on } \mathcal{F}_t\}$ and $\operatorname{ess\,sup}^{(P,\mathcal{F}_t)}$ denotes the essential supremum with respect to the collection of $(P,\mathcal{F}_t)$-nullsets.
The approach via (1.2) is strongly inspired by the formulation of stochastic target problems in Soner et al. [17]. There, the situation is more nonlinear in the sense that instead of taking conditional expectations on the right hand side, one solves under each $P$ a backward SDE with terminal value $X$. On the other hand, those problems have (by assumption) a deterministic domain with respect to the volatility, which corresponds to a deterministic set $\mathbf{D}$ in our case, and therefore their control sets are not path-dependent.

The path-dependence of $\mathcal{P}(t,\omega)$ constitutes the main difficulty in the present paper. E.g., it is not obvious under which conditions $\omega\mapsto\mathcal{E}_t(X)(\omega)$ in (1.2) is even measurable. The main problem turns out to be the following. In our formulation, the time-consistency of $(\mathcal{E}_t)_{t\ge 0}$ takes the form of a dynamic programming principle. The proof of such a result generally relies on a pasting operation performed on controls from the various conditional problems. However, we shall see that the resulting control in general violates the constraint given by $\mathbf{D}$ when $\mathbf{D}$ is stochastic. This feature, reminiscent of viability or state-constrained problems, renders our problem quite different from other known problems with path-dependence, such as the controlled SDE studied by Peng [11]. Our construction is based on a new notion of regularity which is tailored such that we can perform the necessary pastings at least on certain well-chosen controls.
One motivation for this work is to provide a model for superhedging in financial markets with a stochastic range of volatility uncertainty. Given a contingent claim $X$, this is the problem of finding the minimal capital $x$ such that by trading in the stock market $B$, one can achieve a financial position greater or equal to $X$ at time $T$. From a financial point of view, it is crucial that the trading strategy be universal; i.e., it should not depend on the uncertain scenario $P$. It is worked out in Nutz and Soner [8] that the (right-continuous version of) the process $\mathcal{E}(X)$ yields the dynamic superhedging price; in particular, $\mathcal{E}_0(X)$ corresponds to the minimal capital $x$. Since the universal superhedging strategy is constructed from the quadratic covariation process of $\mathcal{E}(X)$ and $B$, it is crucial for their arguments that our model yields $\mathcal{E}(X)$ as a single, aggregated process. One then obtains an optional decomposition of the form

$$\mathcal{E}(X) = \mathcal{E}_0(X) + \int Z\,dB - K,$$

where $K$ is an increasing process whose terminal value $K_T\ge 0$ indicates the difference between the financial position at time $T$ and the claim $X$.
The remainder of this paper is organized as follows. Section 2 introduces the basic setup and notation. In Section 3 we formulate the control problem (1.2) for uniformly continuous random variables and introduce a regularity condition on $\mathbf{D}$. Section 4 contains the proof of the dynamic programming principle for this control problem. In Section 5 we extend $\mathcal{E}$ to a suitable completion.
2 Preliminaries
We fix a constant $T>0$ and let $\Omega := \{\omega\in C([0,T];\mathbb{R}^d):\ \omega_0=0\}$ be the canonical space of continuous paths equipped with the uniform norm $\|\omega\|_T := \sup_{0\le s\le T}|\omega_s|$, where $|\cdot|$ is the Euclidean norm. We denote by $B$ the canonical process $B_t(\omega) = \omega_t$, by $P_0$ the Wiener measure, and by $\mathbb{F} = \{\mathcal{F}_t\}_{0\le t\le T}$ the raw filtration generated by $B$. Unless otherwise stated, probabilistic notions requiring a filtration (such as adaptedness) refer to $\mathbb{F}$.
A probability measure $P$ on $\Omega$ is called local martingale measure if $B$ is a local martingale under $P$. We recall from Bichteler [2, Theorem 7.14] that, via the integration-by-parts formula, the quadratic variation process $\langle B\rangle(\omega)$ can be defined pathwise for all $\omega$ outside an exceptional set which is a $P$-nullset for every local martingale measure $P$. Taking componentwise limits, we can then define the $\mathbb{F}$-progressively measurable process

$$\hat a_t(\omega) := \limsup_{n\to\infty} n\big[\langle B\rangle_t(\omega) - \langle B\rangle_{t-1/n}(\omega)\big], \quad 0<t\le T,$$

taking values in the set of $d\times d$-matrices with entries in the extended real line. We also set $\hat a_0 := 0$.

Let $\mathcal{P}_W$ be the set of all local martingale measures $P$ such that $t\mapsto\langle B\rangle_t$ is absolutely continuous $P$-a.s. and $\hat a$ takes values in $\mathbb{S}^{>0}_d$ $dt\times P$-a.e., where $\mathbb{S}^{>0}_d\subset\mathbb{R}^{d\times d}$ denotes the set of strictly positive definite matrices. Note that $\hat a$ is then the quadratic variation density of $B$ under any $P\in\mathcal{P}_W$.
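As a side illustration (ours, not the paper's), the next Python sketch mimics the pathwise definition of $\hat a$ on a discretized path: it difference-quotients the realized quadratic variation over a shrinking window, the discrete analogue of $\limsup_n n\big[\langle B\rangle_t-\langle B\rangle_{t-1/n}\big]$; the simulated path and the window size are assumptions made for the example.

```python
import numpy as np

def qv_density(path, dt, window):
    """Discrete analogue of a_hat_t = limsup_n n(<B>_t - <B>_{t-1/n}).

    `path` is a sampled scalar path on a grid of spacing `dt`;
    `window` is the time length over which the quadratic variation
    increment is averaged -- it plays the role of 1/n.
    """
    increments = np.diff(path)
    qv = np.concatenate([[0.0], np.cumsum(increments**2)])   # realized <B>
    k = max(int(round(window / dt)), 1)
    a_hat = np.full_like(path, np.nan)
    a_hat[k:] = (qv[k:] - qv[:-k]) / (k * dt)
    return a_hat

# Sanity check on a path with time-varying volatility sigma_t
rng = np.random.default_rng(1)
dt, T = 1e-4, 1.0
t = np.arange(0.0, T + dt, dt)
sigma = 0.2 + 0.1 * np.sin(2 * np.pi * t)                    # true volatility
path = np.concatenate([[0.0],
                       np.cumsum(sigma[:-1] * rng.normal(0, np.sqrt(dt), len(t) - 1))])
est = qv_density(path, dt, window=0.01)
print(np.nanmax(np.abs(est[100:] - sigma[100:]**2)))         # should be small-ish
```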
As in [4, 17, 18] we shall use the so-called strong formulation of volatility uncertainty in this paper; i.e., we consider a subclass of $\mathcal{P}_W$ consisting of the laws of stochastic integrals with respect to a fixed Brownian motion. The latter is taken to be the canonical process $B$ under $P_0$: we define $\mathcal{P}_S\subset\mathcal{P}_W$ to be the set of laws

$$P^\alpha := P_0\circ(X^\alpha)^{-1}, \quad\text{where}\quad X^\alpha_t := \int_0^t \alpha^{1/2}_s\,dB_s \ \ (P_0\text{-Itô integral}), \quad t\in[0,T]. \tag{2.1}$$

Here $\alpha$ ranges over all $\mathbb{F}$-progressively measurable processes with values in $\mathbb{S}^{>0}_d$ satisfying $\int_0^T|\alpha_t|\,dt<\infty$ $P_0$-a.s. The stochastic integral is the Itô integral under $P_0$, constructed as an $\mathbb{F}$-progressively measurable process with right-continuous and $P_0$-a.s. continuous paths, and in particular without passing to the augmentation of $\mathbb{F}$ (cf. Stroock and Varadhan [19, p. 97]).
2.1 Shifted Paths and Regular Conditional Distributions
We now introduce the notation for the conditional problems of our dynamic programming. Since $\Omega$ is the canonical space, we can construct for any probability measure $P$ on $\Omega$ and any $(t,\omega)\in[0,T]\times\Omega$ the corresponding regular conditional probability distribution $P^\omega_t$; cf. [19, Theorem 1.3.4]. We recall that $P^\omega_t$ is a probability kernel on $\mathcal{F}_t\times\mathcal{F}_T$; i.e., it is a probability measure on $(\Omega,\mathcal{F}_T)$ for fixed $\omega$ and $\omega\mapsto P^\omega_t(A)$ is $\mathcal{F}_t$-measurable for each $A\in\mathcal{F}_T$. Moreover, the expectation under $P^\omega_t$ is the conditional expectation under $P$:

$$E^{P^\omega_t}[X] = E^P[X|\mathcal{F}_t](\omega) \quad P\text{-a.s.}$$

whenever $X$ is $\mathcal{F}_T$-measurable and bounded. Finally, $P^\omega_t$ is concentrated on the set of paths that coincide with $\omega$ up to $t$,

$$P^\omega_t\big\{\omega'\in\Omega:\ \omega'=\omega \text{ on } [0,t]\big\} = 1. \tag{2.2}$$
Next, we fix $0\le s\le t\le T$ and define the following shifted objects. We denote by $\Omega^t := \{\omega\in C([t,T];\mathbb{R}^d):\ \omega_t=0\}$ the shifted canonical space, by $B^t$ the canonical process on $\Omega^t$, by $P^t_0$ the Wiener measure on $\Omega^t$, and by $\mathbb{F}^t = \{\mathcal{F}^t_u\}_{t\le u\le T}$ the (raw) filtration generated by $B^t$. For $\omega\in\Omega^s$, the shifted path $\omega^t\in\Omega^t$ is defined by $\omega^t_u := \omega_u-\omega_t$ for $t\le u\le T$ and if furthermore $\tilde\omega\in\Omega^t$, then the concatenation of $\omega$ and $\tilde\omega$ at $t$ is the path

$$(\omega\otimes_t\tilde\omega)_u := \omega_u\,\mathbf{1}_{[s,t)}(u) + (\omega_t+\tilde\omega_u)\,\mathbf{1}_{[t,T]}(u), \quad s\le u\le T.$$

If $\bar\omega\in\Omega$, we note the associativity $\bar\omega\otimes_s(\omega\otimes_t\tilde\omega) = (\bar\omega\otimes_s\omega)\otimes_t\tilde\omega$. Given an $\mathcal{F}^s_T$-measurable random variable $\xi$ on $\Omega^s$ and $\omega\in\Omega^s$, we define the shifted random variable $\xi^{t,\omega}$ on $\Omega^t$ by

$$\xi^{t,\omega}(\tilde\omega) := \xi(\omega\otimes_t\tilde\omega), \quad \tilde\omega\in\Omega^t.$$

Clearly $\tilde\omega\mapsto\xi^{t,\omega}(\tilde\omega)$ is $\mathcal{F}^t_T$-measurable and $\xi^{t,\omega}$ depends only on the restriction of $\omega$ to $[s,t]$. For a random variable $\psi$ on $\Omega$, the associativity of the concatenation yields $(\psi^{s,\bar\omega})^{t,\omega} = \psi^{t,\bar\omega\otimes_s\omega}$.
We note that for an $\mathbb{F}^s$-progressively measurable process $\{X_u,\ u\in[s,T]\}$, the shifted process $\{X^{t,\omega}_u,\ u\in[t,T]\}$ is $\mathbb{F}^t$-progressively measurable. If $P$ is a probability on $\Omega^s$, the measure $P^{t,\omega}$ on $\mathcal{F}^t_T$ defined by

$$P^{t,\omega}(A) := P^\omega_t(\omega\otimes_t A), \quad A\in\mathcal{F}^t_T, \quad\text{where}\quad \omega\otimes_t A := \{\omega\otimes_t\tilde\omega:\ \tilde\omega\in A\},$$

is again a probability by (2.2). We then have

$$E^{P^{t,\omega}}\big[\xi^{t,\omega}\big] = E^{P^\omega_t}[\xi] = E^P\big[\xi\big|\mathcal{F}^s_t\big](\omega) \quad P\text{-a.s.}$$
In analogy to the above, we also introduce the set $\mathcal{P}^t_W$ of martingale measures on $\Omega^t$ under which the quadratic variation density process $\hat a^t$ of $B^t$ is well defined with values in $\mathbb{S}^{>0}_d$, and the subset $\mathcal{P}^t_S\subseteq\mathcal{P}^t_W$ induced by $(P^t_0,B^t)$-stochastic integrals of $\mathbb{F}^t$-progressively measurable integrands. (By convention, $\mathcal{P}^T_S = \mathcal{P}^T_W$ consists of the unique probability on $\Omega^T = \{0\}$.) Finally, we denote by $\Omega^s_t := \{\omega|_{[s,t]}:\ \omega\in\Omega^s\}$ the restriction of $\Omega^s$ to $[s,t]$ and note that $\Omega^s_t$ can be identified with $\{\omega\in\Omega^s:\ \omega_u=\omega_t$ for $u\in[t,T]\}$.
3 Formulation of the Control Problem
We start with a closed set-valued process $\mathbf{D}:\Omega\times[0,T]\to 2^{\mathbb{S}^+_d}$ taking values in the positive semidefinite matrices; i.e., $\mathbf{D}_t(\omega)$ is a closed set of matrices for each $(t,\omega)\in[0,T]\times\Omega$. We assume that $\mathbf{D}$ is progressively measurable in the sense that for every compact $K\subset\mathbb{S}^+_d$, the lower inverse image $\{(t,\omega):\ \mathbf{D}_t(\omega)\cap K\neq\emptyset\}$ is a progressively measurable subset of $[0,T]\times\Omega$. In particular, the value of $\mathbf{D}_t(\omega)$ depends only on the restriction of $\omega$ to $[0,t]$.
In view of our setting with a nondominated set of probabilities, we shall introduce topological regularity. As a first step to obtain some stability, we consider laws under which the quadratic variation density of $B$ takes values in a uniform interior of $\mathbf{D}$. For a set $D\subseteq\mathbb{S}^+_d$ and $\delta>0$, we define the $\delta$-interior $\operatorname{Int}^\delta D := \{x\in D:\ B_\delta(x)\subseteq D\}$, where $B_\delta(x)$ denotes the open ball of radius $\delta$.
Definition 3.1. Given $(t,\omega)\in[0,T]\times\Omega$, we define $\mathcal{P}(t,\omega)$ to be the collection of all $P\in\mathcal{P}^t_S$ for which there exists $\delta = \delta(t,\omega,P)>0$ such that

$$\hat a^t_s(\tilde\omega)\in\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s(\tilde\omega) \quad\text{for } ds\times P\text{-a.e. } (s,\tilde\omega)\in[t,T]\times\Omega^t.$$

Furthermore, if $\delta^*$ denotes the supremum of all such $\delta$, we define the positive quantity $\deg(t,\omega,P) := (\delta^*/2)\wedge 1$. We note that $\mathcal{P}(0,\omega)$ does not depend on $\omega$ and denote this set by $\mathcal{P}$.

The formula $(\delta^*/2)\wedge 1$ ensures that $\deg(t,\omega,P)$ is finite and among the admissible $\delta$. The following is the main regularity condition in this paper.
Definition 3.2. We say that $\mathbf{D}$ is uniformly continuous if for all $\delta>0$ and $(t,\omega)\in[0,T]\times\Omega$ there exists $\varepsilon = \varepsilon(t,\omega,\delta)>0$ such that $\|\omega-\omega'\|_t\le\varepsilon$ implies

$$\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s(\tilde\omega)\subseteq\operatorname{Int}^\varepsilon\mathbf{D}^{t,\omega'}_s(\tilde\omega) \quad\text{for all } (s,\tilde\omega)\in[t,T]\times\Omega^t.$$

If the dimension is $d=1$ and $\mathbf{D}$ is a random interval, this property is related to the uniform continuity of the processes delimiting the interval; see also Example 3.8.
Assumption 3.3. We assume throughout that $\mathbf{D}$ is uniformly continuous and such that $\mathcal{P}(t,\omega)\neq\emptyset$ for all $(t,\omega)\in[0,T]\times\Omega$.
This assumption is in force for the entire paper. We now introduce the value function which will play the role of the sublinear (conditional) expectation. We denote by $\mathrm{UC}_b(\Omega)$ the space of bounded uniformly continuous functions on $\Omega$.
Definition 3.4. Given $\xi\in\mathrm{UC}_b(\Omega)$, we define for each $t\in[0,T]$ the value function

$$V_t(\omega) := V_t(\xi)(\omega) := \sup_{P\in\mathcal{P}(t,\omega)} E^P\big[\xi^{t,\omega}\big], \quad \omega\in\Omega.$$
Until Section 5, the function $\xi$ is fixed and often suppressed in the notation. The following result will guarantee enough separability for our proof of the dynamic programming principle; it is a direct consequence of the preceding definitions.
Lemma 3.5. Let $(t,\omega)\in[0,T]\times\Omega$ and $P\in\mathcal{P}(t,\omega)$. Then there exists $\varepsilon = \varepsilon(t,\omega,P)>0$ such that $P\in\mathcal{P}(t,\omega')$ and $\deg(t,\omega',P)\ge\varepsilon$ whenever $\|\omega-\omega'\|_t\le\varepsilon$.

Proof. Let $\delta := \deg(t,\omega,P)$. Then, by definition,

$$\hat a^t_s(\tilde\omega)\in\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s(\tilde\omega) \quad\text{for } ds\times P\text{-a.e. } (s,\tilde\omega)\in[t,T]\times\Omega^t.$$

Let $\varepsilon = \varepsilon(t,\omega,\delta)$ be as in Definition 3.2 and $\omega'$ such that $\|\omega-\omega'\|_t\le\varepsilon$; then $\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s(\tilde\omega)\subseteq\operatorname{Int}^\varepsilon\mathbf{D}^{t,\omega'}_s(\tilde\omega)$ by Assumption 3.3 and hence

$$\hat a^t_s(\tilde\omega)\in\operatorname{Int}^\varepsilon\mathbf{D}^{t,\omega'}_s(\tilde\omega) \quad\text{for } ds\times P\text{-a.e. } (s,\tilde\omega)\in[t,T]\times\Omega^t.$$

That is, $P\in\mathcal{P}(t,\omega')$ and $\deg(t,\omega',P)\ge\varepsilon(t,\omega,P) := (\varepsilon/2)\wedge 1$.
A first consequence of the preceding lemma is the measurability of $V_t$. We denote $\|\omega\|_t := \sup_{0\le s\le t}|\omega_s|$.

Corollary 3.6. Let $\xi\in\mathrm{UC}_b(\Omega)$. The value function $\omega\mapsto V_t(\xi)(\omega)$ is lower semicontinuous for $\|\cdot\|_t$ and in particular $\mathcal{F}_t$-measurable.

Proof. Fix $\omega\in\Omega$ and $P\in\mathcal{P}(t,\omega)$. Since $\xi$ is uniformly continuous, there exists a modulus of continuity $\rho^{(\xi)}$ such that

$$|\xi(\omega)-\xi(\omega')|\le\rho^{(\xi)}\big(\|\omega-\omega'\|_T\big) \quad\text{for all } \omega,\omega'\in\Omega.$$

It follows that for all $\tilde\omega\in\Omega^t$,

$$\big|\xi^{t,\omega}(\tilde\omega)-\xi^{t,\omega'}(\tilde\omega)\big| = \big|\xi(\omega\otimes_t\tilde\omega)-\xi(\omega'\otimes_t\tilde\omega)\big| \le \rho^{(\xi)}\big(\|\omega\otimes_t\tilde\omega-\omega'\otimes_t\tilde\omega\|_T\big) = \rho^{(\xi)}\big(\|\omega-\omega'\|_t\big). \tag{3.1}$$

Consider a sequence $(\omega^n)$ such that $\|\omega-\omega^n\|_t\to 0$. The preceding lemma shows that $P\in\mathcal{P}(t,\omega^n)$ for all $n\ge n_0 = n_0(t,\omega,P)$ and thus

$$\begin{aligned}
\liminf_{n\to\infty} V_t(\omega^n) &= \liminf_{n\to\infty}\ \sup_{P'\in\mathcal{P}(t,\omega^n)} E^{P'}\big[\xi^{t,\omega^n}\big]\\
&\ge \liminf_{n\to\infty}\ \sup_{P'\in\mathcal{P}(t,\omega^n)} \Big[E^{P'}\big[\xi^{t,\omega}\big]-\rho^{(\xi)}\big(\|\omega-\omega^n\|_t\big)\Big]\\
&= \liminf_{n\to\infty}\ \sup_{P'\in\mathcal{P}(t,\omega^n)} E^{P'}\big[\xi^{t,\omega}\big]\\
&\ge E^P\big[\xi^{t,\omega}\big].
\end{aligned}$$

As $P\in\mathcal{P}(t,\omega)$ was arbitrary, we conclude that $\liminf_n V_t(\omega^n)\ge V_t(\omega)$.

We note that the obtained regularity of $V_t$ is significantly weaker than the uniform continuity of $\xi$; this is a consequence of the state-dependence in our problem. Indeed, the above proof shows that if $\mathcal{P}(t,\omega)$ is independent of $\omega$, then $V_t$ is again uniformly continuous with the same modulus of continuity as $\xi$ (see also [17]). Similarly, in Peng's construction of the $G$-expectation, the preservation of Lipschitz-constants arises because the nonlinearity in the underlying PDE has no state-dependence.
Remark 3.7. Since $\xi$ is bounded and continuous, the value function $V_t(\xi)$ remains unchanged if $\mathcal{P}(t,\omega)$ is replaced by its weak closure (in the sense of weak convergence of probability measures). As an application, we show that we retrieve Peng's $G$-expectation under a nondegeneracy condition.

Given $G$, we recall from [4, Section 3] that there exists a compact and convex set $D\subset\mathbb{S}^+_d$ such that $2G$ is the support function of $D$ and such that $\mathcal{E}^G_0(\psi) = \sup_{P\in\mathcal{P}_G} E^P[\psi]$ for sufficiently regular $\psi$, where

$$\mathcal{P}_G := \big\{P^\alpha\in\mathcal{P}_S:\ \alpha_t(\omega)\in D \text{ for } dt\times P_0\text{-a.e. } (t,\omega)\in[0,T]\times\Omega\big\}.$$

We make the additional assumption that $D$ has nonempty interior $\operatorname{Int} D$. In the scalar case $d=1$, this precisely rules out the trivial case where $\mathcal{E}^G_0$ is an expectation in the usual sense.

We then choose $\mathbf{D} := D$. In this deterministic situation, our formulation boils down to

$$\mathcal{P} = \bigcup_{\delta>0}\big\{P^\alpha\in\mathcal{P}_S:\ \alpha_t(\omega)\in\operatorname{Int}^\delta D \text{ for } dt\times P_0\text{-a.e. } (t,\omega)\in[0,T]\times\Omega\big\}.$$

Clearly $\mathcal{P}\subset\mathcal{P}_G$, so it remains to show that $\mathcal{P}$ is dense. To this end, fix a point $\alpha^*\in\operatorname{Int} D$ and let $P^\alpha\in\mathcal{P}_G$; i.e., $\alpha$ takes values in $D$. Then for $0<\varepsilon<1$, the process $\alpha^\varepsilon := \varepsilon\alpha^*+(1-\varepsilon)\alpha$ takes values in $\operatorname{Int}^\delta D$ for some $\delta>0$, due to the fact that the disjoint sets $\partial D$ and $\{\varepsilon\alpha^*+(1-\varepsilon)x:\ x\in D\}$ have positive distance by compactness. We have $P^{\alpha^\varepsilon}\in\mathcal{P}$ and it follows by dominated convergence for stochastic integrals that $P^{\alpha^\varepsilon}\to P^\alpha$ for $\varepsilon\to 0$.

While this shows that we can indeed recover the $G$-expectation, we should mention that if one wants to treat only deterministic sets $\mathbf{D}$, one can use a much simpler construction than in this paper, and in particular there is no need to use the sets $\operatorname{Int}^\delta\mathbf{D}$ at all.
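As a small numerical aside (ours, not the paper's), the next Python sketch evaluates the support-function relation behind this remark, namely $G(q)=\tfrac12\sup_{p\in D}\operatorname{tr}(pq)$ with the trace pairing, over a finite sample of a compact convex set $D$ of positive definite matrices; the particular parametrization of $D$ is an assumption made for the example.

```python
import numpy as np

def support_function_G(q, D_sample):
    """G(q) = (1/2) * sup_{p in D} tr(p q), approximated over a finite sample of D."""
    return 0.5 * max(np.trace(p @ q) for p in D_sample)

# Illustrative D: matrices p = c*I + r*S with c in [0.5, 1.5], |r| <= 0.2 and
# S a fixed symmetric direction -- a crude finite sample of a compact convex set.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
D_sample = [c * np.eye(2) + r * S
            for c in np.linspace(0.5, 1.5, 11)
            for r in np.linspace(-0.2, 0.2, 11)]

q = np.array([[1.0, 0.3], [0.3, -0.5]])
print(support_function_G(q, D_sample))
```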
Next, we give an example where our continuity assumption on $\mathbf{D}$ is satisfied.
Example 3.8. We consider the scalar case $d=1$. Let $a,b:[0,T]\times\Omega\to\mathbb{R}$ be progressively measurable processes satisfying $0\le a<b$. Assume that $a$ is uniformly continuous in $\omega$, uniformly in time; i.e., that for all $\delta>0$ there exists $\varepsilon>0$ such that

$$\|\omega-\omega'\|_T\le\varepsilon \quad\text{implies}\quad \sup_{0\le s\le T}|a_s(\omega)-a_s(\omega')|\le\delta. \tag{3.2}$$

Assume that $b$ is uniformly continuous in the same sense. Then the random interval

$$\mathbf{D}_t(\omega) := [a_t(\omega),\,b_t(\omega)]$$

is uniformly continuous. Indeed, given $\delta>0$, there exists $\varepsilon'=\varepsilon'(\delta)>0$ such that $|a_s(\omega)-a_s(\omega')|<\delta/2$ for all $0\le s\le T$ whenever $\|\omega-\omega'\|_T\le\varepsilon'$, and the same for $b$. We set $\varepsilon := \varepsilon'\wedge\delta/2$. Then for $\omega,\omega'$ such that $\|\omega-\omega'\|_t\le\varepsilon$, we have that $\|\omega\otimes_t\tilde\omega-\omega'\otimes_t\tilde\omega\|_T = \|\omega-\omega'\|_t\le\varepsilon$ and hence

$$\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s(\tilde\omega) = \big[a_s(\omega\otimes_t\tilde\omega)+\delta,\ b_s(\omega\otimes_t\tilde\omega)-\delta\big] \subseteq \big[a_s(\omega'\otimes_t\tilde\omega)+\varepsilon,\ b_s(\omega'\otimes_t\tilde\omega)-\varepsilon\big] = \operatorname{Int}^\varepsilon\mathbf{D}^{t,\omega'}_s(\tilde\omega)$$

for all $(s,\tilde\omega)\in[t,T]\times\Omega^t$.
A multivariate version of the previous example runs as follows. Let $A:[0,T]\times\Omega\to\mathbb{S}^{>0}_d$ and $r:[0,T]\times\Omega\to[0,\infty)$ be two progressively measurable processes which are uniformly continuous in the sense of (3.2) and define the set-valued process

$$\mathbf{D}_t(\omega) := \big\{\Gamma\in\mathbb{S}^{>0}_d:\ |\Gamma-A_t(\omega)|\le r_t(\omega)\big\}.$$

Then $\mathbf{D}$ is uniformly continuous; the proof is a direct extension of the above.
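For concreteness, here is a small Python sketch (illustrative only; the specific path functionals chosen for $a$ and $b$ are our own assumptions) that builds such a random interval $\mathbf{D}_t(\omega)=[a_t(\omega),b_t(\omega)]$ from uniformly continuous functionals of the running path and tests membership in its $\delta$-interior.

```python
import numpy as np

def a_func(path_up_to_t):
    """Lower volatility bound: a Lipschitz functional of the running maximum."""
    return 0.04 + 0.01 * min(np.max(np.abs(path_up_to_t)), 1.0)

def b_func(path_up_to_t):
    """Upper volatility bound, strictly above a."""
    return 0.16 + 0.02 * min(np.max(np.abs(path_up_to_t)), 1.0)

def D_interval(path, t_idx):
    """The random interval D_t(omega) = [a_t(omega), b_t(omega)]."""
    return a_func(path[:t_idx + 1]), b_func(path[:t_idx + 1])

def in_delta_interior(x, interval, delta):
    """Check x in Int^delta [a, b] = [a + delta, b - delta]."""
    a, b = interval
    return a + delta <= x <= b - delta

path = np.array([0.0, 0.2, -0.1, 0.5, 0.3])
iv = D_interval(path, 3)
print(iv, in_delta_interior(0.1, iv, delta=0.01))
```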
We close this section by a remark relating the random $G$-expectations to a class of state-dependent nonlinear heat equations.

Remark 3.9. We consider a Markovian case of Example 3.8, where the functions delimiting $\mathbf{D}$ depend only on the current state. Indeed, let $a,b:\mathbb{R}\to\mathbb{R}$ be bounded, uniformly continuous functions such that $0\le a\le b$ and $b-a$ is bounded away from zero, and define

$$\mathbf{D}_t(\omega) := [a(\omega_t),\,b(\omega_t)].$$
(Of course, an additional time-dependence could also be included.) Moreover, let $f:\mathbb{R}\to\mathbb{R}$ be a bounded, uniformly continuous function and consider

$$-\partial_t u - G(x,u_{xx}) = 0, \quad u(T,\cdot) = f; \qquad G(x,q) := \sup_{a(x)\le p\le b(x)} pq/2. \tag{3.3}$$

We claim that the (unique, continuous) viscosity solution $u$ of (3.3) satisfies

$$u(0,x) = V_0(\xi) \quad\text{for}\quad \xi := f(x+B_T). \tag{3.4}$$

Indeed, by the standard Hamilton-Jacobi-Bellman theory, $u$ is the value function of the control problem

$$u(0,x) = \sup_\alpha E^{P_0}\big[f(x+X^\alpha_T)\big], \quad\text{where}\quad X^\alpha_t = \int_0^t\alpha^{1/2}_s\,dB_s,$$

and $\alpha$ varies over all positive, progressively measurable processes satisfying $\alpha_t\in\mathbf{D}(X^\alpha_t)$ $dt\times P_0$-a.e. For each such $\alpha$, let $P^\alpha$ be the law of $X^\alpha$; then clearly

$$u(0,x) = \sup_\alpha E^{P^\alpha}\big[f(x+B_T)\big].$$

It follows from (the proof of) Lemma 4.2 below that the laws $\{P^\alpha\}$ are in one-to-one correspondence with $\mathcal{P}$, if Definition 3.1 is used with $\delta=0$ (i.e., we use $\mathbf{D}$ instead of its interior). Let $G^\delta(x,q) := \sup_{a(x)+\delta\le p\le b(x)-\delta} pq/2$ be the nonlinearity corresponding to $\operatorname{Int}^\delta\mathbf{D}$ and let $u^\delta$ be the viscosity solution of the corresponding equation (3.3). Then the above yields

$$u(0,x)\ge V_0(\xi)\ge u^\delta(0,x)$$

for $\delta>0$ small (so that $b-a\ge 2\delta$). It follows from the comparison principle and stability of viscosity solutions that $u^\delta(t,x)$ increases monotonically to $u(t,x)$ as $\delta\downarrow 0$; as a result, we have (3.4).
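The identification (3.4) can be probed numerically. The following Python sketch (our own illustration; the bounds $a$, $b$, the payoff $f$, the grid and the explicit scheme are assumptions, not the paper's method) solves the state-dependent equation (3.3) backwards in time by finite differences and reads off $u(0,x)$, the quantity identified with $V_0(\xi)$.

```python
import numpy as np

# State-dependent volatility bounds and terminal condition (illustrative choices)
a = lambda x: 0.05 + 0.02 / (1.0 + x**2)
b = lambda x: 0.20 + 0.05 / (1.0 + x**2)
f = lambda x: np.minimum(np.abs(x), 1.0)

def G(x, q):
    """G(x, q) = sup_{a(x) <= p <= b(x)} p*q/2: take b(x) if q >= 0, else a(x)."""
    return 0.5 * np.where(q >= 0.0, b(x) * q, a(x) * q)

def solve_u0(T=1.0, x_max=4.0, nx=401, nt=20000):
    """Explicit backward scheme for -du/dt - G(x, u_xx) = 0 with u(T, .) = f."""
    x = np.linspace(-x_max, x_max, nx)
    dx = x[1] - x[0]
    dt = T / nt                        # small enough for stability with b <= 0.25
    u = f(x)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt * G(x, uxx)         # step from time t to t - dt
        u[0], u[-1] = u[1], u[-2]      # crude boundary condition
    return x, u

x, u0 = solve_u0()
print(float(np.interp(0.0, x, u0)))    # approximates u(0,0) = V_0(f(0 + B_T))
```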
4 Dynamic Programming

The main goal of this section is to prove the dynamic programming principle for $V_t(\xi)$, which corresponds to the time-consistency property of our sublinear expectation. For the case where $\mathbf{D}$ is deterministic and $V_t(\xi)\in\mathrm{UC}_b(\Omega)$, the relevant arguments were previously given in [17].
4.1 Shifting and Pasting of Measures
As usual, one inequality in the dynamic programming principle will be the consequence of an invariance property of the control sets.

Lemma 4.1 (Invariance). Let $0\le s\le t\le T$ and $\bar\omega\in\Omega$. If $P\in\mathcal{P}(s,\bar\omega)$, then $P^{t,\omega}\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$ for $P$-a.e. $\omega\in\Omega^s$.

Proof. It is shown in [17, Lemma 4.1] that $P^{t,\omega}\in\mathcal{P}^t_S$ and that under $P^{t,\omega}$, the quadratic variation density of $B^t$ coincides with the shift of $\hat a^s$:

$$\hat a^t_u(\tilde\omega) = (\hat a^s_u)^{t,\omega}(\tilde\omega) \quad\text{for } du\times P^{t,\omega}\text{-a.e. } (u,\tilde\omega)\in[t,T]\times\Omega^t \tag{4.1}$$

and $P$-a.e. $\omega\in\Omega^s$. Let $\delta := \deg(s,\bar\omega,P)$; then

$$\hat a^s_u(\omega')\in\operatorname{Int}^\delta\mathbf{D}^{s,\bar\omega}_u(\omega') \quad\text{for } du\times P\text{-a.e. } (u,\omega')\in[s,T]\times\Omega^s$$

and hence

$$\hat a^s_u(\omega\otimes_t\tilde\omega)\in\operatorname{Int}^\delta\mathbf{D}^{s,\bar\omega}_u(\omega\otimes_t\tilde\omega) \quad\text{for } du\times P^{t,\omega}\text{-a.e. } (u,\tilde\omega)\in[t,T]\times\Omega^t$$

for $P$-a.e. $\omega\in\Omega^s$. Now (4.1) shows that for $du\times P^{t,\omega}$-a.e. $(u,\tilde\omega)\in[t,T]\times\Omega^t$ we have

$$\hat a^t_u(\tilde\omega) = (\hat a^s_u)^{t,\omega}(\tilde\omega) = \hat a^s_u(\omega\otimes_t\tilde\omega)\in\operatorname{Int}^\delta\mathbf{D}^{s,\bar\omega}_u(\omega\otimes_t\tilde\omega) = \operatorname{Int}^\delta\mathbf{D}^{t,\bar\omega\otimes_s\omega}_u(\tilde\omega)$$

for $P$-a.e. $\omega\in\Omega^s$; i.e., $P^{t,\omega}\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$.
The dynamic programming principle is intimately related to a stability property of the control sets under a pasting operation. More precisely, it is necessary to collect $\varepsilon$-optimizers from the conditional problems over $\mathcal{P}(t,\omega)$ and construct from them a control in $\mathcal{P}$ (if $s=0$). As a first step, we give a tractable criterion for the admissibility of a control. We recall the process $X^\alpha$ from (2.1) and note that since it has continuous paths $P_0$-a.s., $X^\alpha$ can be seen as a transformation of the canonical space under the Wiener measure.

Lemma 4.2. Let $(t,\omega)\in[0,T]\times\Omega$ and $P = P^\alpha\in\mathcal{P}^t_S$. Then $P\in\mathcal{P}(t,\omega)$ if and only if there exists $\delta>0$ such that

$$\alpha_s(\tilde\omega)\in\operatorname{Int}^\delta\mathbf{D}^{t,\omega}_s\big(X^\alpha(\tilde\omega)\big) \quad\text{for } ds\times P^t_0\text{-a.e. } (s,\tilde\omega)\in[t,T]\times\Omega^t.$$
Proof. We first note that

$$\langle B^t\rangle = \int_t^\cdot \hat a^t_u(B^t)\,du \quad P^\alpha\text{-a.s.} \qquad\text{and}\qquad \langle X^\alpha\rangle = \int_t^\cdot \alpha_u(B^t)\,du \quad P^t_0\text{-a.s.}$$

Recalling that $P^\alpha = P^t_0\circ(X^\alpha)^{-1}$, we have that the $P^\alpha$-distribution of $\big(B^t,\int_t^\cdot\hat a^t_u(B^t)\,du\big)$ coincides with the $P^t_0$-distribution of $\big(X^\alpha,\int_t^\cdot\alpha_u(B^t)\,du\big)$. By definition, $P^\alpha\in\mathcal{P}(t,\omega)$ if and only if there exists $\delta>0$ such that

$$\hat a^t(B^t)\in\operatorname{Int}^\delta\mathbf{D}^{t,\omega}(B^t) \quad ds\times P^\alpha\text{-a.e. on } [t,T]\times\Omega^t,$$

and by the above this is further equivalent to

$$\alpha(B^t)\in\operatorname{Int}^\delta\mathbf{D}^{t,\omega}(X^\alpha) \quad ds\times P^t_0\text{-a.e. on } [t,T]\times\Omega^t.$$

This was the claim.
To motivate the steps below, we first consider the admissibility of pastings in general. We can paste given measures $P = P^\alpha\in\mathcal{P}_S$ and $\hat P = P^{\hat\alpha}\in\mathcal{P}^t_S$ at time $t$ to obtain a measure $\bar P$ on $\Omega$, and we shall see that $\bar P = P^{\bar\alpha}$ for

$$\bar\alpha_u(\omega) = \mathbf{1}_{[0,t)}(u)\,\alpha_u(\omega) + \mathbf{1}_{[t,T]}(u)\,\hat\alpha_u\big(X^\alpha(\omega)^t\big).$$

Now assume that $P\in\mathcal{P}$ and $\hat P\in\mathcal{P}(t,\hat\omega)$. By the previous lemma, these constraints may be formulated as $\alpha\in\operatorname{Int}^\delta\mathbf{D}(X^\alpha)$ and $\hat\alpha\in\operatorname{Int}^\delta\mathbf{D}^{t,\hat\omega}(X^{\hat\alpha})$, respectively. If $\mathbf{D}$ is deterministic, we immediately see that $\bar\alpha(\omega)\in\operatorname{Int}^\delta\mathbf{D}$ for all $\omega\in\Omega$ and therefore $\bar P\in\mathcal{P}$. However, in the stochastic case we merely obtain that the constraint on $\bar\alpha(\omega)$ is satisfied for $\omega$ such that $X^\alpha(\omega)^t = \hat\omega$. Therefore, we typically have $\bar P\notin\mathcal{P}$.

The idea to circumvent this difficulty is that, due to the formulation chosen in the previous section, there exists a neighborhood $B(\hat\omega)$ of $\hat\omega$ such that $\hat P\in\mathcal{P}(t,\omega')$ for all $\omega'\in B(\hat\omega)$. Therefore, the constraint $\bar\alpha\in\operatorname{Int}^\delta\mathbf{D}(X^{\bar\alpha})$ is verified on the preimage of $B(\hat\omega)$ under $X^\alpha$. In the next lemma, we exploit the separability of $\Omega$ to construct a sequence of $\hat P$'s such that the corresponding neighborhoods cover the space $\Omega$, and in Proposition 4.4 below we shall see how to obtain an admissible pasting from this sequence. We denote $\|\omega\|_{[s,t]} := \sup_{s\le u\le t}|\omega_u|$.
Lemma 4.3 (Separability). Let $0\le s\le t\le T$ and $\bar\omega\in\Omega$. Given $\varepsilon>0$, there exist a sequence $(\hat\omega^i)_{i\ge1}$ in $\Omega^s$, an $\mathcal{F}^s_t$-measurable partition $(E^i)_{i\ge1}$ of $\Omega^s$, and a sequence $(P^i)_{i\ge1}$ in $\mathcal{P}^t_S$ such that

(i) $\|\omega-\hat\omega^i\|_{[s,t]}\le\varepsilon$ for all $\omega\in E^i$,

(ii) $P^i\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$ for all $\omega\in E^i$ and $\inf_{\omega\in E^i}\deg(t,\bar\omega\otimes_s\omega,P^i)>0$,

(iii) $V_t(\bar\omega\otimes_s\hat\omega^i)\le E^{P^i}\big[\xi^{t,\bar\omega\otimes_s\hat\omega^i}\big]+\varepsilon$.

Proof. Fix $\varepsilon>0$ and let $\hat\omega\in\Omega^s$. By definition of $V_t(\bar\omega\otimes_s\hat\omega)$ there exists $P(\hat\omega)\in\mathcal{P}(t,\bar\omega\otimes_s\hat\omega)$ such that

$$V_t(\bar\omega\otimes_s\hat\omega)\le E^{P(\hat\omega)}\big[\xi^{t,\bar\omega\otimes_s\hat\omega}\big]+\varepsilon.$$

Furthermore, by Lemma 3.5, there exists $\varepsilon(\hat\omega) = \varepsilon(t,\bar\omega\otimes_s\hat\omega,P(\hat\omega))>0$ such that $P(\hat\omega)\in\mathcal{P}(t,\bar\omega\otimes_s\omega')$ and $\deg(t,\bar\omega\otimes_s\omega',P(\hat\omega))\ge\varepsilon(\hat\omega)$ for all $\omega'\in B(\varepsilon(\hat\omega),\hat\omega)\subseteq\Omega^s$. Here $B(\varepsilon,\hat\omega) := \{\omega'\in\Omega^s:\ \|\hat\omega-\omega'\|_{[s,t]}<\varepsilon\}$ denotes the open $\|\cdot\|_{[s,t]}$-ball. By replacing $\varepsilon(\hat\omega)$ with $\varepsilon(\hat\omega)\wedge\varepsilon$ we may assume that $\varepsilon(\hat\omega)\le\varepsilon$.

As the above holds for all $\hat\omega\in\Omega^s$, the collection $\{B(\varepsilon(\hat\omega),\hat\omega):\ \hat\omega\in\Omega^s\}$ forms an open cover of $\Omega^s$. Since the (quasi-)metric space $(\Omega^s,\|\cdot\|_{[s,t]})$ is separable and therefore Lindelöf, there exists a countable subcover $(B^i)_{i\ge1}$, where $B^i := B(\varepsilon(\hat\omega^i),\hat\omega^i)$. As a $\|\cdot\|_{[s,t]}$-open set, each $B^i$ is $\mathcal{F}^s_t$-measurable and

$$E^1 := B^1, \quad E^{i+1} := B^{i+1}\setminus(E^1\cup\dots\cup E^i), \quad i\ge1$$

defines a partition of $\Omega^s$. It remains to set $P^i := P(\hat\omega^i)$ and note that $\inf_{\omega\in E^i}\deg(t,\bar\omega\otimes_s\omega,P^i)\ge\varepsilon(\hat\omega^i)>0$ for each $i\ge1$.

For $A\in\mathcal{F}^s_T$ we denote $A^{t,\omega} = \{\tilde\omega\in\Omega^t:\ \omega\otimes_t\tilde\omega\in A\}$.
Proposition 4.4 (Pasting). Let $0\le s\le t\le T$, $\bar\omega\in\Omega$ and $P\in\mathcal{P}(s,\bar\omega)$. Let $(E^i)_{0\le i\le N}$ be a finite $\mathcal{F}^s_t$-measurable partition of $\Omega^s$. For $1\le i\le N$, assume that $P^i\in\mathcal{P}^t_S$ are such that $P^i\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$ for all $\omega\in E^i$ and $\inf_{\omega\in E^i}\deg(t,\bar\omega\otimes_s\omega,P^i)>0$. Then

$$\bar P(A) := P(A\cap E^0) + \sum_{i=1}^N E^P\big[P^i(A^{t,\omega})\mathbf{1}_{E^i}(\omega)\big], \quad A\in\mathcal{F}^s_T$$

defines an element of $\mathcal{P}(s,\bar\omega)$. Furthermore,

(i) $\bar P = P$ on $\mathcal{F}^s_t$,

(ii) $\bar P^{t,\omega} = P^{t,\omega}$ for $P$-a.e. $\omega\in E^0$,

(iii) $\bar P^{t,\omega} = P^i$ for $P$-a.e. $\omega\in E^i$ and $1\le i\le N$.

Proof. We first show that $\bar P\in\mathcal{P}(s,\bar\omega)$. The proof that $\bar P\in\mathcal{P}^s_S$ is the same as in [17, Appendix, Proof of Claim (4.19)]; the observation made there is that if $\alpha,\alpha^i$ are the $\mathbb{F}^s$- resp. $\mathbb{F}^t$-progressively measurable processes such that $P = P^\alpha$ and $P^i = P^{\alpha^i}$, then $\bar P = P^{\bar\alpha}$ for $\bar\alpha$ defined by

$$\bar\alpha_u(\omega) := \mathbf{1}_{[s,t)}(u)\,\alpha_u(\omega) + \mathbf{1}_{[t,T]}(u)\Big[\alpha_u(\omega)\mathbf{1}_{E^0}\big(X^\alpha(\omega)\big) + \sum_{i=1}^N\alpha^i_u(\omega^t)\mathbf{1}_{E^i}\big(X^\alpha(\omega)\big)\Big]$$

for $(u,\omega)\in[s,T]\times\Omega^s$. To show that $\bar P\in\mathcal{P}(s,\bar\omega)$, it remains to check that

$$\hat a^s_u(\omega)\in\operatorname{Int}^\delta\mathbf{D}^{s,\bar\omega}_u(\omega) \quad\text{for } du\times\bar P\text{-a.e. } (u,\omega)\in[s,T]\times\Omega^s$$

for some $\delta>0$. Indeed, this is clear for $s\le u\le t$ since both sides are adapted and $\bar P = P$ on $\mathcal{F}^s_t$ by (i), which is proved below. In view of Lemma 4.2 it remains to show that

$$\bar\alpha_u(\omega)\in\operatorname{Int}^\delta\mathbf{D}^{s,\bar\omega}_u\big(X^{\bar\alpha}(\omega)\big) \quad\text{for } du\times P^s_0\text{-a.e. } (u,\omega)\in[t,T]\times\Omega^s. \tag{4.2}$$

Let $A^i := \{X^\alpha\in E^i\}\in\mathcal{F}^s_t$ for $0\le i\le N$. Note that $A^i$ is defined up to a $P^s_0$-nullset since $X^\alpha$ is defined as an Itô integral under $P^s_0$. Let $\omega\in A^0$; then $X^\alpha(\omega)\in E^0$ and thus $\bar\alpha_u(\omega) = \alpha_u(\omega)$ for $t\le u\le T$. With $\delta^0 := \deg(s,\bar\omega,P)$, Lemma 4.2 shows that

$$\bar\alpha_u(\omega) = \alpha_u(\omega)\in\operatorname{Int}^{\delta^0}\mathbf{D}^{s,\bar\omega}_u\big(X^\alpha(\omega)\big) = \operatorname{Int}^{\delta^0}\mathbf{D}^{s,\bar\omega}_u\big(X^{\bar\alpha}(\omega)\big) \quad\text{for } du\times P^s_0\text{-a.e. } (u,\omega)\in[t,T]\times A^0.$$

Next, consider $1\le i\le N$ and $\omega^i\in E^i$. By assumption, $P^i\in\mathcal{P}(t,\bar\omega\otimes_s\omega^i)$ and

$$\deg(t,\bar\omega\otimes_s\omega^i,P^i)\ge\delta^i := \inf_{\omega\in E^i}\deg(t,\bar\omega\otimes_s\omega,P^i)>0.$$

We set $\delta := \min\{\delta^0,\dots,\delta^N\}>0$; then Lemma 4.2 yields

$$\alpha^i_u(\tilde\omega)\in\operatorname{Int}^{\delta}\mathbf{D}^{t,\bar\omega\otimes_s\omega^i}_u\big(X^{\alpha^i}(\tilde\omega)\big) = \operatorname{Int}^{\delta}\mathbf{D}^{s,\bar\omega}_u\big(\omega^i\otimes_t X^{\alpha^i}(\tilde\omega)\big) \quad\text{for } du\times P^t_0\text{-a.e. } (u,\tilde\omega)\in[t,T]\times\Omega^t.$$

Now let $\omega\in A^i$ for some $1\le i\le N$. Applying the previous observation with $\omega^i := X^\alpha(\omega)\in E^i$, we deduce that

$$\bar\alpha_u(\omega) = \alpha^i_u(\omega^t)\in\operatorname{Int}^{\delta}\mathbf{D}^{s,\bar\omega}_u\big(X^\alpha(\omega)\otimes_t X^{\alpha^i}(\omega^t)\big) = \operatorname{Int}^{\delta}\mathbf{D}^{s,\bar\omega}_u\big(X^{\bar\alpha}(\omega)\big) \quad\text{for } du\times P^s_0\text{-a.e. } (u,\omega)\in[t,T]\times A^i.$$

More precisely, we have used here the following two facts. Firstly, to pass from $du\times P^t_0$-nullsets to $du\times P^s_0$-nullsets, we have used that if $G\subset\Omega^t$ is a $P^t_0$-nullset, then $P^s_0\{\omega\in\Omega^s:\ \omega^t\in G\} = P^t_0(G) = 0$ since the canonical process $B^s$ has $P^s_0$-independent increments. Secondly, we have used that $\psi(\omega) := X^\alpha(\omega)\otimes_t X^{\alpha^i}(\omega^t) = X^{\bar\alpha}(\omega)$ for $\omega\in A^i$. Indeed, for $s\le u<t$ we have $\psi_u(\omega) = X^\alpha_u(\omega) = X^{\bar\alpha}_u(\omega)$, while for $t\le u\le T$, $\psi_u(\omega)$ equals

$$\int_s^t\alpha^{1/2}\,dB(\omega) + \int_t^u(\alpha^i)^{1/2}\,dB^t(\omega^t) = \int_s^u(\bar\alpha)^{1/2}\,dB(\omega) = X^{\bar\alpha}_u(\omega),$$

where the first and third integrals are Itô integrals under $P^s_0$ and the second is under $P^t_0$. As $P^s_0\big[\bigcup_{i=0}^N A^i\big] = 1$, we have proved (4.2) and therefore $\bar P\in\mathcal{P}(s,\bar\omega)$.

It remains to show (i)-(iii). These assertions are fairly standard; we include the proofs for completeness.

(i) Let $A\in\mathcal{F}^s_t$; we show that $\bar P(A) = P(A)$. Indeed, for $\omega\in\Omega$, the question whether $\omega\in A$ depends only on the restriction of $\omega$ to $[s,t]$. Therefore,

$$P^i(A^{t,\omega}) = P^i\{\tilde\omega:\ \omega\otimes_t\tilde\omega\in A\} = \mathbf{1}_A(\omega), \quad 1\le i\le N,$$

and thus $\bar P(A) = \sum_{i=0}^N E^P[\mathbf{1}_{A\cap E^i}] = P(A)$.

(ii), (iii) Let $F\in\mathcal{F}^t_T$; we show that

$$\bar P^{t,\omega}(F) = P^{t,\omega}(F)\mathbf{1}_{E^0}(\omega) + \sum_{i=1}^N P^i(F)\mathbf{1}_{E^i}(\omega) \quad P\text{-a.s.}$$

Using the definition of conditional expectation and (i), this is equivalent to the following equality for all $\Lambda\in\mathcal{F}^s_t$,

$$\bar P\{\omega\in\Lambda:\ \omega^t\in F\} = P\{\omega\in\Lambda\cap E^0:\ \omega^t\in F\} + \sum_{i=1}^N P^i(F)\,P(\Lambda\cap E^i).$$

For $A := \{\omega\in\Lambda:\ \omega^t\in F\}$ we have $A^{t,\omega} = \{\tilde\omega\in F:\ \omega\otimes_t\tilde\omega\in\Lambda\}$ and since $\Lambda\in\mathcal{F}^s_t$, $A^{t,\omega}$ equals $F$ if $\omega\in\Lambda$ and is empty otherwise. Thus the definition of $\bar P$ yields $\bar P(A) = P(A\cap E^0) + \sum_{i=1}^N E^P\big[P^i(F)\mathbf{1}_\Lambda(\omega)\mathbf{1}_{E^i}(\omega)\big] = P(A\cap E^0) + \sum_{i=1}^N P^i(F)\,P(\Lambda\cap E^i)$ as desired.
We remark that the above arguments apply also to a countably infinite partition $(E^i)_{i\ge1}$, provided that $\inf_{i\ge1}\inf_{\omega\in E^i}\deg(t,\omega,P^i)>0$. However, this condition is difficult to guarantee. A second observation is that the results of this subsection are based on the regularity property of $\omega\mapsto\mathcal{P}(t,\omega)$ stated in Lemma 3.5, but make no use of the continuity of $\xi$ or the measurability of $V_t(\xi)$.
4.2 Dynamic Programming Principle

We can now prove the key result of this paper. We recall the value function $V_t = V_t(\xi)$ from Definition 3.4 and denote by $\operatorname{ess\,sup}^{(P,\mathcal{F}_s)}$ the essential supremum of a family of $\mathcal{F}_s$-measurable random variables with respect to the collection of $(P,\mathcal{F}_s)$-nullsets.

Theorem 4.5. Let $0\le s\le t\le T$. Then

$$V_s(\omega) = \sup_{P\in\mathcal{P}(s,\omega)} E^P\big[(V_t)^{s,\omega}\big] \quad\text{for all } \omega\in\Omega. \tag{4.3}$$

With $\mathcal{P}(s,P) := \{P'\in\mathcal{P}:\ P'=P$ on $\mathcal{F}_s\}$, we also have

$$V_s = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[V_t|\mathcal{F}_s] \quad P\text{-a.s. for all } P\in\mathcal{P} \tag{4.4}$$

and in particular

$$V_s = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[\xi|\mathcal{F}_s] \quad P\text{-a.s. for all } P\in\mathcal{P}. \tag{4.5}$$
Proof. (i) We first show the inequality $\le$ in (4.3). Fix $\bar\omega\in\Omega$ as well as $P\in\mathcal{P}(s,\bar\omega)$. Lemma 4.1 shows that $P^{t,\omega}\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$ for $P$-a.e. $\omega\in\Omega^s$, yielding the inequality in

$$E^{P^{t,\omega}}\big[(\xi^{s,\bar\omega})^{t,\omega}\big] = E^{P^{t,\omega}}\big[\xi^{t,\bar\omega\otimes_s\omega}\big] \le \sup_{P'\in\mathcal{P}(t,\bar\omega\otimes_s\omega)} E^{P'}\big[\xi^{t,\bar\omega\otimes_s\omega}\big] = V_t(\bar\omega\otimes_s\omega) = V^{s,\bar\omega}_t(\omega) \quad\text{for } P\text{-a.e. } \omega\in\Omega^s,$$

where $V^{s,\bar\omega}_t := (V_t)^{s,\bar\omega}$. Since $V_t$ is measurable by Corollary 3.6, we can take $P(d\omega)$-expectations on both sides to obtain that

$$E^P\big[\xi^{s,\bar\omega}\big] = E^P\Big[E^{P^{t,\omega}}\big[(\xi^{s,\bar\omega})^{t,\omega}\big]\Big] \le E^P\big[V^{s,\bar\omega}_t\big].$$

Thus taking supremum over $P\in\mathcal{P}(s,\bar\omega)$ yields the claim.
(ii) We now show the inequality $\ge$ in (4.3). Fix $\bar\omega\in\Omega$ and $P\in\mathcal{P}(s,\bar\omega)$ and let $\delta>0$. We start with a preparatory step.

(ii.a) We claim that there exists a $\|\cdot\|_{[s,t]}$-compact set $E\in\mathcal{F}^s_t$ with $P(E)>1-\delta$ such that the restriction $V^{s,\bar\omega}_t(\cdot)|_E$ is uniformly continuous for $\|\cdot\|_{[s,t]}$. In particular, there exists then a modulus of continuity $\rho^{(V^{s,\bar\omega}_t|_E)}$ such that

$$\big|V^{s,\bar\omega}_t(\omega)-V^{s,\bar\omega}_t(\omega')\big| \le \rho^{(V^{s,\bar\omega}_t|_E)}\big(\|\omega-\omega'\|_{[s,t]}\big) \quad\text{for all } \omega,\omega'\in E.$$

Indeed, since $P$ is a Borel measure on the Polish space $\Omega^s_t$, there exists a compact set $K = K(P,\delta)\subset\Omega^s_t$ such that $P(K)>1-\delta/2$. As $V^{s,\bar\omega}_t$ is $\mathcal{F}^s_t$-measurable (and thus Borel-measurable as a function on $\Omega^s_t$), there exists by Lusin's theorem a closed set $\Lambda = \Lambda(P,\delta)\subseteq\Omega^s_t$ such that $P(\Lambda)>1-\delta/2$ and such that $V^{s,\bar\omega}_t|_\Lambda$ is $\|\cdot\|_{[s,t]}$-continuous. Then $E' := K\cap\Lambda\subset\Omega^s_t$ is compact and hence the restriction of $V^{s,\bar\omega}_t$ to $E'$ is even uniformly continuous. It remains to set $E := \{\omega\in\Omega^s:\ \omega|_{[s,t]}\in E'\}$.
(ii.b) Let $\varepsilon>0$. We apply Lemma 4.3 to $E$ (instead of $\Omega^s$) and obtain a sequence $(\hat\omega^i)$ in $E$, an $\mathcal{F}^s_t$-measurable partition $(E^i)$ of $E$, and a sequence $(P^i)$ in $\mathcal{P}^t_S$ such that

(a) $\|\omega-\hat\omega^i\|_{[s,t]}\le\varepsilon$ for all $\omega\in E^i$,

(b) $P^i\in\mathcal{P}(t,\bar\omega\otimes_s\omega)$ for all $\omega\in E^i$ and $\inf_{\omega\in E^i}\deg(t,\bar\omega\otimes_s\omega,P^i)>0$,

(c) $V_t(\bar\omega\otimes_s\hat\omega^i)\le E^{P^i}\big[\xi^{t,\bar\omega\otimes_s\hat\omega^i}\big]+\varepsilon$.

Let $A_N := E^1\cup\dots\cup E^N$ for $N\ge1$. In view of (a)-(c), we can apply Proposition 4.4 to the finite partition $\{A^c_N,E^1,\dots,E^N\}$ of $\Omega^s$ and obtain a measure $\bar P = \bar P_N\in\mathcal{P}(s,\bar\omega)$ such that $\bar P = P$ on $\mathcal{F}^s_t$ and

$$\bar P^{t,\omega} = \begin{cases} P^{t,\omega} & \text{for } \omega\in A^c_N,\\ P^i & \text{for } \omega\in E^i,\ 1\le i\le N.\end{cases}$$

Since $\xi$ is uniformly continuous, we obtain similarly as in (3.1) that there exists a modulus of continuity $\rho^{(\xi)}$ such that

$$\big|\xi^{t,\bar\omega\otimes_s\omega}-\xi^{t,\bar\omega\otimes_s\omega'}\big| \le \rho^{(\xi)}\big(\|\omega-\omega'\|_{[s,t]}\big).$$
Let $\omega\in E^i\subset\Omega^s$ for some $1\le i\le N$. Then, using (a) and (c),

$$\begin{aligned}
V^{s,\bar\omega}_t(\omega) &\le V^{s,\bar\omega}_t(\hat\omega^i) + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)\\
&\le E^{P^i}\big[\xi^{t,\bar\omega\otimes_s\hat\omega^i}\big] + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)\\
&\le E^{P^i}\big[\xi^{t,\bar\omega\otimes_s\omega}\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)\\
&= E^{\bar P^{t,\omega}}\big[\xi^{t,\bar\omega\otimes_s\omega}\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)\\
&= E^{\bar P^{t,\omega}}\big[(\xi^{s,\bar\omega})^{t,\omega}\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)\\
&= E^{\bar P}\big[\xi^{s,\bar\omega}\,\big|\,\mathcal{F}^s_t\big](\omega) + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon)
\end{aligned}$$

for $\bar P$-a.e. (and thus $P$-a.e.) $\omega\in E^i$. This holds for all $1\le i\le N$. As $P = \bar P$ on $\mathcal{F}^s_t$, taking $P$-expectations yields

$$E^P\big[V^{s,\bar\omega}_t\mathbf{1}_{A_N}\big] \le E^{\bar P}\big[\xi^{s,\bar\omega}\mathbf{1}_{A_N}\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon).$$

Recall that $\bar P = \bar P_N$. Using dominated convergence on the left hand side, and on the right hand side that $\bar P_N(E\setminus A_N) = P(E\setminus A_N)\to 0$ as $N\to\infty$ and that

$$E^{\bar P_N}\big[\xi^{s,\bar\omega}\mathbf{1}_{A_N}\big] = E^{\bar P_N}\big[\xi^{s,\bar\omega}\mathbf{1}_E\big] - E^{\bar P_N}\big[\xi^{s,\bar\omega}\mathbf{1}_{E\setminus A_N}\big] \le E^{\bar P_N}\big[\xi^{s,\bar\omega}\mathbf{1}_E\big] + \|\xi\|_\infty\,\bar P_N(E\setminus A_N), \tag{4.6}$$

we conclude that

$$E^P\big[V^{s,\bar\omega}_t\mathbf{1}_E\big] \le \limsup_{N\to\infty} E^{\bar P_N}\big[\xi^{s,\bar\omega}\mathbf{1}_E\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon) \le \sup_{P'\in\mathcal{P}(s,\bar\omega,t,P)} E^{P'}\big[\xi^{s,\bar\omega}\mathbf{1}_E\big] + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V^{s,\bar\omega}_t|_E)}(\varepsilon),$$

where $\mathcal{P}(s,\bar\omega,t,P) := \{P'\in\mathcal{P}(s,\bar\omega):\ P'=P$ on $\mathcal{F}^s_t\}$. As $\varepsilon>0$ was arbitrary, this shows that

$$E^P\big[V^{s,\bar\omega}_t\mathbf{1}_E\big] \le \sup_{P'\in\mathcal{P}(s,\bar\omega,t,P)} E^{P'}\big[\xi^{s,\bar\omega}\mathbf{1}_E\big].$$

Finally, since $P'(E) = P(E)>1-\delta$ for all $P'\in\mathcal{P}(s,\bar\omega,t,P)$ and $\delta>0$ was arbitrary, we obtain by an argument similar to (4.6) that

$$E^P\big[V^{s,\bar\omega}_t\big] \le \sup_{P'\in\mathcal{P}(s,\bar\omega,t,P)} E^{P'}\big[\xi^{s,\bar\omega}\big] \le \sup_{P'\in\mathcal{P}(s,\bar\omega)} E^{P'}\big[\xi^{s,\bar\omega}\big] = V_s(\bar\omega).$$

The claim follows as $P\in\mathcal{P}(s,\bar\omega)$ was arbitrary. This completes the proof of (4.3).
(iii) The next step is to prove that

$$V_t \le \operatorname{ess\,sup}^{(P,\mathcal{F}_t)}_{P'\in\mathcal{P}(t,P)} E^{P'}[\xi|\mathcal{F}_t] \quad P\text{-a.s. for all } P\in\mathcal{P}. \tag{4.7}$$

Fix $P\in\mathcal{P}$. We use the previous step (ii) for the special case $s=0$ and obtain that for given $\varepsilon>0$ there exists for each $N\ge1$ a measure $\bar P_N\in\mathcal{P}(t,P)$ such that

$$V_t(\omega) \le E^{\bar P_N}[\xi|\mathcal{F}_t](\omega) + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V_t|_E)}(\varepsilon) \quad\text{for } P\text{-a.e. } \omega\in E^1\cup\dots\cup E^N.$$

Therefore, since $E = \bigcup_{i\ge1}E^i$, we have

$$V_t(\omega) \le \sup_{N\ge1} E^{\bar P_N}[\xi|\mathcal{F}_t](\omega) + \rho^{(\xi)}(\varepsilon) + \varepsilon + \rho^{(V_t|_E)}(\varepsilon) \quad\text{for } P\text{-a.e. } \omega\in E.$$

We recall that the set $E$ depends on $\delta$, but not on $\varepsilon$. Thus letting $\varepsilon\to0$ yields

$$V_t\mathbf{1}_E \le \operatorname{ess\,sup}^{(P,\mathcal{F}_t)}_{P'\in\mathcal{P}(t,P)}\big(E^{P'}[\xi|\mathcal{F}_t]\mathbf{1}_E\big) = \Big(\operatorname{ess\,sup}^{(P,\mathcal{F}_t)}_{P'\in\mathcal{P}(t,P)} E^{P'}[\xi|\mathcal{F}_t]\Big)\mathbf{1}_E \quad P\text{-a.s.},$$

where we have used that $E\in\mathcal{F}_t$. In view of $P(E)>1-\delta$, the claim follows by taking the limit $\delta\to0$.
(iv) We now prove the inequality $\le$ in (4.4); we shall reduce this claim to its special case (4.7). Fix $P\in\mathcal{P}$. For any $P'\in\mathcal{P}(s,P)$ we have that $(P')^{t,\omega}\in\mathcal{P}(t,\omega)$ for $P'$-a.e. $\omega\in\Omega$ by Lemma 4.1. Thus we can infer from (4.3), applied with $s:=t$ and $t:=T$, that

$$V_t(\omega) \ge E^{(P')^{t,\omega}}\big[\xi^{t,\omega}\big] = E^{P'}[\xi|\mathcal{F}_t](\omega) \quad P'\text{-a.s.}$$

and in particular that $E^{P'}[V_t|\mathcal{F}_s]\ge E^{P'}[\xi|\mathcal{F}_s]$ $P'$-a.s. on $\mathcal{F}_s$, hence also $P$-a.s. This shows that

$$\operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[V_t|\mathcal{F}_s] \ge \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[\xi|\mathcal{F}_s] \quad P\text{-a.s.}$$

But (4.7), applied with $s$ instead of $t$, yields that the right hand side dominates $V_s$. This proves the claim.

(v) It remains to show the inequality $\ge$ in (4.4). Fix $P\in\mathcal{P}$ and $P'\in\mathcal{P}(s,P)$. Since $(P')^{s,\omega}\in\mathcal{P}(s,\omega)$ for $P'$-a.s. $\omega\in\Omega$ by Lemma 4.1, (4.3) yields

$$V_s(\omega) \ge E^{(P')^{s,\omega}}\big[V^{s,\omega}_t\big] = E^{P'}[V_t|\mathcal{F}_s](\omega) \quad P'\text{-a.s. on } \mathcal{F}_s,$$

and hence also $P$-a.s. The claim follows as $P'\in\mathcal{P}(s,P)$ was arbitrary.
5 Extension to the Completion
So far, we have studied the value function $V_t = V_t(\xi)$ for $\xi\in\mathrm{UC}_b(\Omega)$. We now set $\mathcal{E}_t(\xi) := V_t$ and extend this operator to a completion of $\mathrm{UC}_b(\Omega)$ by the usual procedure (e.g., Peng [12]). The main result in this section is that the dynamic programming principle carries over to the extension.
Given $p\in[1,\infty)$ and $t\in[0,T]$, we define $L^p_\mathcal{P}(\mathcal{F}_t)$ to be the space of $\mathcal{F}_t$-measurable random variables $X$ satisfying

$$\|X\|_{L^p_\mathcal{P}} := \sup_{P\in\mathcal{P}}\|X\|_{L^p(P)} < \infty, \quad\text{where}\quad \|X\|^p_{L^p(P)} := E^P\big[|X|^p\big].$$

More precisely, we take equivalence classes with respect to $\mathcal{P}$-quasi-sure equality so that $L^p_\mathcal{P}(\mathcal{F}_t)$ becomes a Banach space. (Two functions are equal $\mathcal{P}$-quasi-surely, $\mathcal{P}$-q.s. for short, if they are equal $P$-a.s. for all $P\in\mathcal{P}$.) Furthermore, $\mathbb{L}^p_\mathcal{P}(\mathcal{F}_t)$ is defined as the $\|\cdot\|_{L^p_\mathcal{P}}$-closure of $\mathrm{UC}_b(\Omega_t)\subseteq L^p_\mathcal{P}(\mathcal{F}_t)$. For brevity, we shall sometimes write $\mathbb{L}^p_\mathcal{P}$ for $\mathbb{L}^p_\mathcal{P}(\mathcal{F}_T)$ and $L^p_\mathcal{P}$ for $L^p_\mathcal{P}(\mathcal{F}_T)$.

Lemma 5.1. Let $p\in[1,\infty)$. The mapping $\mathcal{E}_t$ on $\mathrm{UC}_b(\Omega)$ is 1-Lipschitz for the norm $\|\cdot\|_{L^p_\mathcal{P}}$,

$$\|\mathcal{E}_t(\xi)-\mathcal{E}_t(\psi)\|_{L^p_\mathcal{P}} \le \|\xi-\psi\|_{L^p_\mathcal{P}} \quad\text{for all } \xi,\psi\in\mathrm{UC}_b(\Omega).$$

As a consequence, $\mathcal{E}_t$ uniquely extends to a Lipschitz-continuous mapping $\mathcal{E}_t:\mathbb{L}^p_\mathcal{P}(\mathcal{F}_T)\to L^p_\mathcal{P}(\mathcal{F}_t)$.
Proof. Note that $|\xi-\psi|^p$ is again in $\mathrm{UC}_b(\Omega)$. The definition of $\mathcal{E}_t$ and Jensen's inequality imply that

$$|\mathcal{E}_t(\xi)-\mathcal{E}_t(\psi)|^p \le \mathcal{E}_t(|\xi-\psi|)^p \le \mathcal{E}_t(|\xi-\psi|^p).$$

Therefore,

$$\|\mathcal{E}_t(\xi)-\mathcal{E}_t(\psi)\|_{L^p_\mathcal{P}} \le \sup_{P\in\mathcal{P}} E^P\big[\mathcal{E}_t(|\xi-\psi|^p)\big]^{1/p} = \sup_{P\in\mathcal{P}} E^P\big[|\xi-\psi|^p\big]^{1/p},$$

where the equality is due to (4.3).
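To illustrate the norm $\|X\|_{L^p_{\mathcal P}}=\sup_{P\in\mathcal P}\|X\|_{L^p(P)}$ appearing above, here is a minimal Monte Carlo sketch (ours; a finite family of constant-volatility laws stands in for $\mathcal P$ and the path functional is an illustrative choice) that estimates the supremum by simulating under each law.

```python
import numpy as np

def sup_Lp_norm(payoff, vols, p=2, n_paths=50_000, n_steps=50, T=1.0, seed=0):
    """Estimate ||X||_{L^p_P} = sup_P E^P[|X|^p]^{1/p} over constant-volatility laws."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    norms = []
    for sigma in vols:
        dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
        paths = np.cumsum(sigma * dB, axis=1)               # B under the law P^sigma
        X = payoff(paths)
        norms.append(np.mean(np.abs(X) ** p) ** (1.0 / p))  # L^p(P^sigma)-norm
    return max(norms)

# X = (capped) running maximum of the canonical process, a bounded path functional
payoff = lambda paths: np.minimum(paths.max(axis=1), 2.0)
print(sup_Lp_norm(payoff, vols=[0.1, 0.2, 0.3, 0.4]))
```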
Since we shall use $\mathbb{L}^p_\mathcal{P}$ as the domain of $\mathcal{E}_t$, we also give an explicit description of this space. We say that (an equivalence class) $X\in L^1_\mathcal{P}$ is $\mathcal{P}$-quasi uniformly continuous if $X$ has a representative $X'$ with the property that for all $\varepsilon>0$ there exists an open set $G\subset\Omega$ such that $P(G)<\varepsilon$ for all $P\in\mathcal{P}$ and such that the restriction $X'|_{\Omega\setminus G}$ is uniformly continuous. We define $\mathcal{P}$-quasi continuity in an analogous way and denote by $C_b(\Omega)$ the space of bounded continuous functions on $\Omega$. The following is very similar to the results in [4].
Proposition 5.2. Let $p\in[1,\infty)$. The space $\mathbb{L}^p_\mathcal{P}$ consists of all $X\in L^p_\mathcal{P}$ such that $X$ is $\mathcal{P}$-quasi uniformly continuous and $\lim_{n\to\infty}\|X\mathbf{1}_{\{|X|\ge n\}}\|_{L^p_\mathcal{P}} = 0$. If $\mathbf{D}$ is uniformly bounded, then $\mathbb{L}^p_\mathcal{P}$ coincides with the $\|\cdot\|_{L^p_\mathcal{P}}$-closure of $C_b(\Omega)\subset L^p_\mathcal{P}$ and "uniformly continuous" can be replaced by "continuous".
Proof. For the first part, it suffices to go through the proof of [4, Theorem 25] and replace continuity by uniform continuity everywhere. The only difference is that one has to use a refined version of Tietze's extension theorem which yields uniformly continuous extensions (cf. Mandelkern [7]).

If $\mathbf{D}$ is uniformly bounded, $\mathcal{P}$ is a set of laws of continuous martingales with uniformly bounded quadratic variation density and therefore $\mathcal{P}$ is tight. Together with the aforementioned extension theorem we derive that $C_b(\Omega)$ is contained in $\mathbb{L}^p_\mathcal{P}$ and now the result follows from [4, Theorem 25].
Before extending the dynamic programming principle, we prove the following auxiliary result which shows in particular that the essential suprema
in Theorem 4.5 can be represented as increasing limits. This is a consequence
of a standard pasting argument which involves only controls with the same
history and hence there are no problems of admissibility as in Section 4.
Lemma 5.3. Let $\tau$ be an $\mathbb{F}$-stopping time and $X\in L^1_\mathcal{P}(\mathcal{F}_T)$. For each $P\in\mathcal{P}$ there exists a sequence $P_n\in\mathcal{P}(\tau,P)$ such that

$$\operatorname{ess\,sup}^{(P,\mathcal{F}_\tau)}_{P'\in\mathcal{P}(\tau,P)} E^{P'}[X|\mathcal{F}_\tau] = \lim_{n\to\infty} E^{P_n}[X|\mathcal{F}_\tau] \quad P\text{-a.s.},$$

where the limit is increasing and $\mathcal{P}(\tau,P) := \{P'\in\mathcal{P}:\ P'=P$ on $\mathcal{F}_\tau\}$.

Proof. It suffices to show that the set $\{E^{P'}[X|\mathcal{F}_\tau]:\ P'\in\mathcal{P}(\tau,P)\}$ is $P$-a.s. upward filtering. Indeed, we prove that for $\Lambda\in\mathcal{F}_\tau$ and $P_1,P_2\in\mathcal{P}(\tau,P)$ there exists $\bar P\in\mathcal{P}(\tau,P)$ such that

$$E^{\bar P}[X|\mathcal{F}_\tau] = E^{P_1}[X|\mathcal{F}_\tau]\mathbf{1}_\Lambda + E^{P_2}[X|\mathcal{F}_\tau]\mathbf{1}_{\Lambda^c} \quad P\text{-a.s.};$$

then the claim follows by letting $\Lambda := \{E^{P_1}[X|\mathcal{F}_\tau] > E^{P_2}[X|\mathcal{F}_\tau]\}$.
Similarly as in Proposition 4.4, we define

$$\bar P(A) := E^P\big[P_1(A|\mathcal{F}_\tau)\mathbf{1}_\Lambda + P_2(A|\mathcal{F}_\tau)\mathbf{1}_{\Lambda^c}\big], \quad A\in\mathcal{F}_T. \tag{5.1}$$

Let $\alpha,\alpha^1,\alpha^2$ be such that $P^\alpha = P$, $P^{\alpha^1} = P_1$ and $P^{\alpha^2} = P_2$. The fact that $P = P_1 = P_2$ on $\mathcal{F}_\tau$ translates to $\alpha = \alpha^1 = \alpha^2$ $du\times P_0$-a.e. on $[\![0,\tau(X^\alpha)[\![$, and with this observation we have as in Proposition 4.4 that $\bar P = P^{\bar\alpha}\in\mathcal{P}_S$ for the $\mathbb{F}$-progressively measurable process

$$\bar\alpha_u(\omega) := \mathbf{1}_{[\![0,\tau(X^\alpha)[\![}(u)\,\alpha_u(\omega) + \mathbf{1}_{[\![\tau(X^\alpha),T]\!]}(u)\big[\alpha^1_u(\omega)\mathbf{1}_\Lambda\big(X^\alpha(\omega)\big) + \alpha^2_u(\omega)\mathbf{1}_{\Lambda^c}\big(X^\alpha(\omega)\big)\big].$$

Since $P,P_1,P_2\in\mathcal{P}$, Lemma 4.2 yields that $\bar P\in\mathcal{P}$. Moreover, we have $\bar P = P$ on $\mathcal{F}_\tau$ and $\bar P^{\tau(\omega),\omega} = P_1^{\tau(\omega),\omega}$ for $\omega\in\Lambda$ and $\bar P^{\tau(\omega),\omega} = P_2^{\tau(\omega),\omega}$ for $\omega\in\Lambda^c$. Thus $\bar P$ has the required properties.
We now show that the extension $\mathcal{E}_t$ from Lemma 5.1 again satisfies the dynamic programming principle.

Theorem 5.4. Let $0\le s\le t\le T$ and $X\in\mathbb{L}^1_\mathcal{P}$. Then

$$\mathcal{E}_s(X) = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}\big[\mathcal{E}_t(X)\big|\mathcal{F}_s\big] \quad P\text{-a.s. for all } P\in\mathcal{P} \tag{5.2}$$

and in particular

$$\mathcal{E}_s(X) = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[X|\mathcal{F}_s] \quad P\text{-a.s. for all } P\in\mathcal{P}. \tag{5.3}$$
Proof. Fix $P\in\mathcal{P}$. Given $\varepsilon>0$, there exists $\psi\in\mathrm{UC}_b(\Omega)$ such that

$$\|\mathcal{E}_s(X)-\mathcal{E}_s(\psi)\|_{L^1_\mathcal{P}} \le \|X-\psi\|_{L^1_\mathcal{P}} \le \varepsilon.$$

For any $P'\in\mathcal{P}(s,P)$, we also note the trivial identity

$$E^{P'}[X|\mathcal{F}_s] - \mathcal{E}_s(X) = E^{P'}[X-\psi|\mathcal{F}_s] + \big(E^{P'}[\psi|\mathcal{F}_s]-\mathcal{E}_s(\psi)\big) + \big(\mathcal{E}_s(\psi)-\mathcal{E}_s(X)\big) \quad P\text{-a.s.} \tag{5.4}$$

(i) We first prove the inequality $\le$ in (5.3). By (4.5) and Lemma 5.3 there exists a sequence $(P_n)$ in $\mathcal{P}(s,P)$ such that

$$\mathcal{E}_s(\psi) = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[\psi|\mathcal{F}_s] = \lim_{n\to\infty} E^{P_n}[\psi|\mathcal{F}_s] \quad P\text{-a.s.} \tag{5.5}$$

Using (5.4) with $P' := P_n$ and taking $L^1(P)$-norms we find that

$$\begin{aligned}
\big\|E^{P_n}[X|\mathcal{F}_s]-\mathcal{E}_s(X)\big\|_{L^1(P)} &\le \|X-\psi\|_{L^1(P_n)} + \big\|E^{P_n}[\psi|\mathcal{F}_s]-\mathcal{E}_s(\psi)\big\|_{L^1(P)} + \big\|\mathcal{E}_s(\psi)-\mathcal{E}_s(X)\big\|_{L^1(P)}\\
&\le \big\|E^{P_n}[\psi|\mathcal{F}_s]-\mathcal{E}_s(\psi)\big\|_{L^1(P)} + 2\varepsilon.
\end{aligned}$$

Now bounded convergence and (5.5) yield that

$$\limsup_{n\to\infty}\big\|E^{P_n}[X|\mathcal{F}_s]-\mathcal{E}_s(X)\big\|_{L^1(P)} \le 2\varepsilon.$$

Since $\varepsilon>0$ was arbitrary, this implies that there is a sequence $\tilde P_n\in\mathcal{P}(s,P)$ such that $E^{\tilde P_n}[X|\mathcal{F}_s]\to\mathcal{E}_s(X)$ $P$-a.s. In particular, we have proved the claimed inequality.
(ii) We now complete the proof of (5.3). By Lemma 5.3 we can choose a sequence $(P'_n)$ in $\mathcal{P}(s,P)$ such that

$$\operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[X|\mathcal{F}_s] = \lim_{n\to\infty} E^{P'_n}[X|\mathcal{F}_s] \quad P\text{-a.s.},$$

with an increasing limit. Let $A_n := \{E^{P'_n}[X|\mathcal{F}_s]\ge\mathcal{E}_s(X)\}$. As a result of Step (i), the sets $A_n$ increase to $\Omega$ $P$-a.s. Moreover,

$$0 \le \big(E^{P'_n}[X|\mathcal{F}_s]-\mathcal{E}_s(X)\big)\mathbf{1}_{A_n} \nearrow \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[X|\mathcal{F}_s] - \mathcal{E}_s(X) \quad P\text{-a.s.}$$

By (5.4) with $P' := P'_n$ and by the first equality in (5.5), we also have that

$$E^{P'_n}[X|\mathcal{F}_s] - \mathcal{E}_s(X) \le E^{P'_n}[X-\psi|\mathcal{F}_s] + \mathcal{E}_s(\psi) - \mathcal{E}_s(X) \quad P\text{-a.s.}$$

Taking $L^1(P)$-norms and using monotone convergence, we deduce that

$$\begin{aligned}
\Big\|\operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[X|\mathcal{F}_s] - \mathcal{E}_s(X)\Big\|_{L^1(P)} &= \lim_{n\to\infty}\big\|\big(E^{P'_n}[X|\mathcal{F}_s]-\mathcal{E}_s(X)\big)\mathbf{1}_{A_n}\big\|_{L^1(P)}\\
&\le \limsup_{n\to\infty}\|X-\psi\|_{L^1(P'_n)} + \|\mathcal{E}_s(\psi)-\mathcal{E}_s(X)\|_{L^1(P)}\\
&\le 2\varepsilon.
\end{aligned}$$

Since $\varepsilon>0$ was arbitrary, this proves (5.3).
(iii) It remains to show (5.2) for a given $P\in\mathcal{P}$. In view of (5.3), it suffices to prove that

$$\operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}[X|\mathcal{F}_s] = \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'\in\mathcal{P}(s,P)} E^{P'}\Big[\operatorname{ess\,sup}^{(P',\mathcal{F}_t)}_{P''\in\mathcal{P}(t,P')} E^{P''}[X|\mathcal{F}_t]\,\Big|\,\mathcal{F}_s\Big] \quad P\text{-a.s.}$$

The inequality $\le$ is obtained by considering $P'' := P'\in\mathcal{P}(t,P')$ on the right hand side. To see the converse inequality, fix $P'\in\mathcal{P}(s,P)$ and choose by Lemma 5.3 a sequence $(P''_n)$ in $\mathcal{P}(t,P')$ such that

$$\operatorname{ess\,sup}^{(P',\mathcal{F}_t)}_{P''\in\mathcal{P}(t,P')} E^{P''}[X|\mathcal{F}_t] = \lim_{n\to\infty} E^{P''_n}[X|\mathcal{F}_t] \quad P'\text{-a.s.},$$

with an increasing limit. Then monotone convergence and the observation that $\mathcal{P}(t,P')\subseteq\mathcal{P}(s,P)$ yield

$$E^{P'}\Big[\operatorname{ess\,sup}^{(P',\mathcal{F}_t)}_{P''\in\mathcal{P}(t,P')} E^{P''}[X|\mathcal{F}_t]\,\Big|\,\mathcal{F}_s\Big] = \lim_{n\to\infty} E^{P''_n}[X|\mathcal{F}_s] \le \operatorname{ess\,sup}^{(P,\mathcal{F}_s)}_{P'''\in\mathcal{P}(s,P)} E^{P'''}[X|\mathcal{F}_s] \quad P\text{-a.s.}$$

As $P'\in\mathcal{P}(s,P)$ was arbitrary, this proves the claim.
We note that (5.3) determines $\mathcal{E}_s(X)$ $\mathcal{P}$-q.s. and can therefore be used as an alternative definition. For most purposes, it is not necessary to go back to the construction. Relation (5.2) expresses the time-consistency property of $\mathcal{E}_t$. With a mild abuse of notation, it can also be stated as

$$\mathcal{E}_s(\mathcal{E}_t(X)) = \mathcal{E}_s(X), \quad 0\le s\le t\le T, \quad X\in\mathbb{L}^1_\mathcal{P};$$

indeed, the domain of $\mathcal{E}_s$ has to be slightly enlarged for this statement as in general we do not know whether $\mathcal{E}_t(X)\in\mathbb{L}^1_\mathcal{P}$.
We close by summarizing some of the basic properties of $\mathcal{E}_t$.

Proposition 5.5. Let $X,X'\in\mathbb{L}^p_\mathcal{P}$ for some $p\in[1,\infty)$ and let $t\in[0,T]$. Then the following relations hold $\mathcal{P}$-q.s.

(i) $\mathcal{E}_t(X)\ge\mathcal{E}_t(X')$ if $X\ge X'$,

(ii) $\mathcal{E}_t(X+X') = \mathcal{E}_t(X)+X'$ if $X'$ is $\mathcal{F}_t$-measurable,

(iii) $\mathcal{E}_t(\eta X) = \eta^+\mathcal{E}_t(X)+\eta^-\mathcal{E}_t(-X)$ if $\eta$ is $\mathcal{F}_t$-measurable and $\eta X\in\mathbb{L}^1_\mathcal{P}$,

(iv) $\mathcal{E}_t(X)-\mathcal{E}_t(X') \le \mathcal{E}_t(X-X')$,

(v) $\mathcal{E}_t(X+X') = \mathcal{E}_t(X)+\mathcal{E}_t(X')$ if $\mathcal{E}_t(-X') = -\mathcal{E}_t(X')$,

(vi) $\|\mathcal{E}_t(X)-\mathcal{E}_t(X')\|_{L^p_\mathcal{P}} \le \|X-X'\|_{L^p_\mathcal{P}}$.
Proof. Statements (i)-(iv) follow directly from (5.2). The argument for (v) is as in [15, Proposition III.2.8]: we have $\mathcal{E}_t(X+X')-\mathcal{E}_t(X')\le\mathcal{E}_t(X)$ by (iv), while $\mathcal{E}_t(X+X')\ge\mathcal{E}_t(X)-\mathcal{E}_t(-X') = \mathcal{E}_t(X)+\mathcal{E}_t(X')$ by (iv) and the assumption on $X'$. Of course, (vi) is contained in Lemma 5.1.
References

[1] M. Avellaneda, A. Levy, and A. Parás. Pricing and hedging derivative securities in markets with uncertain volatilities. Appl. Math. Finance, 2(2):73–88, 1995.

[2] K. Bichteler. Stochastic integration and $L^p$-theory of semimartingales. Ann. Probab., 9(1):49–89, 1981.

[3] P. Cheridito, H. M. Soner, N. Touzi, and N. Victoir. Second-order backward stochastic differential equations and fully nonlinear parabolic PDEs. Comm. Pure Appl. Math., 60(7):1081–1110, 2007.

[4] L. Denis, M. Hu, and S. Peng. Function spaces and capacity related to a sublinear expectation: application to $G$-Brownian motion paths. Potential Anal., 34(2):139–161, 2011.

[5] L. Denis and C. Martini. A theoretical framework for the pricing of contingent claims in the presence of model uncertainty. Ann. Appl. Probab., 16(2):827–852, 2006.

[6] T. J. Lyons. Uncertain volatility and the risk-free synthesis of derivatives. Appl. Math. Finance, 2(2):117–133, 1995.

[7] M. Mandelkern. On the uniform continuity of Tietze extensions. Arch. Math., 55(4):387–388, 1990.

[8] M. Nutz and H. M. Soner. Superhedging and dynamic risk measures under volatility uncertainty. To appear in SIAM J. Control Optim., 2010.

[9] E. Pardoux and S. Peng. Adapted solution of a backward stochastic differential equation. Systems Control Lett., 14(1):55–61, 1990.

[10] S. Peng. Backward SDE and related $g$-expectation. In Backward Stochastic Differential Equations, volume 364 of Pitman Res. Notes Math. Ser., pages 141–159. Longman, 1997.

[11] S. Peng. Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Math. Appl. Sin. Engl. Ser., 20(2):191–214, 2004.

[12] S. Peng. Nonlinear expectations and nonlinear Markov chains. Chinese Ann. Math. Ser. B, 26(2):159–184, 2005.

[13] S. Peng. $G$-expectation, $G$-Brownian motion and related stochastic calculus of Itô type. In Stochastic Analysis and Applications, volume 2 of Abel Symp., pages 541–567, Springer, Berlin, 2007.

[14] S. Peng. Multi-dimensional $G$-Brownian motion and related stochastic calculus under $G$-expectation. Stochastic Process. Appl., 118(12):2223–2253, 2008.

[15] S. Peng. Nonlinear expectations and stochastic calculus under uncertainty. Preprint arXiv:1002.4546v1, 2010.

[16] S. Peng. Backward stochastic differential equation, nonlinear expectation and their applications. In R. Bhatia, editor, Proceedings of the International Congress of Mathematicians 2010, Hyderabad, India, volume 1, pages 393–432, World Scientific, Singapore, 2011.

[17] H. M. Soner, N. Touzi, and J. Zhang. Dual formulation of second order target problems. To appear in Ann. Appl. Probab., 2010.

[18] H. M. Soner, N. Touzi, and J. Zhang. Wellposedness of second order backward SDEs. To appear in Probab. Theory Related Fields, 2010.

[19] D. Stroock and S. R. S. Varadhan. Multidimensional Diffusion Processes. Springer, New York, 1979.