TRANSACTIONS OF THE
AMERICAN MATHEMATICAL SOCIETY
Volume 361, Number 8, August 2009, Pages 4045–4076
S 0002-9947(09)04732-1
Article electronically published on April 1, 2009
VALUE FUNCTIONS AND THE DIRICHLET PROBLEM
FOR ISAACS EQUATION IN A SMOOTH DOMAIN
JAY KOVATS
Abstract. In this paper, we investigate probabilistic solutions of the Dirichlet problem for the elliptic Isaacs equation in a smooth bounded domain in
Euclidean space.
0. Introduction
We wish to obtain a probabilistic representation for the viscosity solution of the
Dirichlet problem for the (possibly degenerate) elliptic Isaacs equation
min max {L(y, z, x)v(x) + f (y, z, x)} = 0 in D,
z∈Z y∈Y
(0.1)
v = g on ∂D,
where L(y, z, x)u := tr[a(y, z, x)uxx ]+b(y, z, x)·ux −c(y, z, x)u, D ⊂ Ed is a smooth
bounded domain and g ∈ C(D̄). Here, the d×d matrix a(y, z, x) is nonnegative definite. We assume that our coefficients, a, b, c, f are uniformly continuous, uniformly
bounded and Lipschitz continuous in x (uniformly in y, z), with c ≥ 0. Y, Z are
compact metric spaces. There are two Isaacs equations. The upper Isaacs equation
F + and the lower Isaacs equation F − arise in the theory of stochastic differential
games (see [ES], [Fr], [I]) and are defined by
(0.2)
F + [u] : = min max {tr[a(y, z, x)uxx ] + b(y, z, x) · ux − c(y, z, x)u + f (y, z, x)} = 0,
z∈Z y∈Y
F − [u] : = max min {tr[a(y, z, x)uxx ] + b(y, z, x) · ux − c(y, z, x)u + f (y, z, x)} = 0.
y∈Y z∈Z
By our conditions on the coefficients, we know (see [IL]) that when the equation
in (0.1) is nondegenerate, i.e. ∃ 0 < λ ≤ Λ for which ∀y ∈ Y, z ∈ Z, x ∈ D, λId ≤
a(y, z, x) ≤ ΛId , and ∂D satisfies a uniform exterior sphere condition, there exists a
unique viscosity solution v ∈ C(D̄) to the Dirichlet problem (0.1). A unique (continuous) viscosity solution also exists in the degenerate elliptic case when ∂D ∈ C 2 ,
inf y,z,x c(y, z, x) ≥ c0 > 0 and inf y,z a(y, z, x)n(x), n(x) > 0 on ∂D, where n(x) is
the outward unit normal to D at x ∈ ∂D. Under the assumption of the existence of
an appropriate global barrier, we show that the value functions v + and v − , given
in (0.7) and (0.8), are continuous viscosity solutions for the corresponding Dirichlet
problems for F + and F − , respectively. Hence, in the aforementioned cases where
uniqueness is guaranteed, v + and v − are the continuous viscosity solutions for the
corresponding Dirichlet problems. Our results completely cover the nondegenerate
Received by the editors May 7, 2007.
2000 Mathematics Subject Classification. Primary 35B65, 35J60, 49N70, 91A05.
c
2009
American Mathematical Society
4045
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4046
JAY KOVATS
case, since for uniformly elliptic equations with bounded coefficients, global barriers
always exist when ∂D is smooth. (See Assumption 1.0 and the discussion which
follows.)
From the viewpoint of partial differential equations, it would be of interest to
have an explicit form, especially in the nondegenerate case, for the viscosity solution
of (0.1), which is of “Perron-type”, as proved by Ishii. Solutions of the nondegenerate Isaacs equation are of interest because (i) any uniformly elliptic equation of
the form F (uxx , x) = 0 can be shown to be of Isaacs type (see [CC2] (2003)) and
(ii) the Isaacs equation is an example of a second-order partial differential equation
which is, in general, neither convex nor concave in uxx (i.e., the Isaacs operator
F (m, p, r, x) is neither convex nor concave in m). The C 2+α regularity theory
hasn’t been extended to solutions of even the simplest such equations F (uxx ) = 0,
i.e. F = F (m), except in special cases (see [CC2]).
We recall (1982) that the Evans-Krylov theorem states that if u ∈ C 2 (B) satisfies
the uniformly elliptic equation F (uxx ) = 0, where F = F (m) is either convex or
2+α
(B). In 1989, Caffarelli (see [CC1])
concave, then ∃ α ∈ (0, 1) for which u ∈ Cloc
extended this result to continuous viscosity solutions of F (uxx ) = 0, where F is
either convex or concave in m. But what can be said about the viscosity solution
of even the simplest equation in d = 3, i.e.,
∆v + (vx1 x1 )+ − (vx2 x2 )− = 0 in B1 ,
(0.3)
v = g on ∂B1 ,
for arbitrary fixed smooth g? This equation is uniformly elliptic and of Isaacs type,
since it can be written as
y
z
(0.4)
max min tr[a(y, z)vxx ] = 0, where a(y, z) =
.
1
1≤y≤2 1≤z≤2
The operator in (0.3) is of the form
min {max{L1 v, L2 v}, max{L3 v, L4 v}} ,
where L1 v = ∆v = (1, 1, 1),L2 v = (2, 1, 1), L3 v =(1, 2, 1), L4 v = (2, 2, 1) and not of
the “3-operator” form min L̃1 v, max{L̃2 v, L̃3 v} , to which recent C 2+α regularity
results (see [CC2]) apply. Any viscosity solution of ∆v + (vx1 x1 )+ − (vx2 x2 )− = 0
must be locally C 1,α for some α ∈ (0, 1) but what more can be said about these
solutions? Since the expression inside the max min in (0.4) is an affine function of
y, z, trivially the Isaaacs condition max min = min max is satisfied and (0.3) can
also be written as max {min{L1 v, L3 v}, min{L2 v, L4 v}} = 0. It is possible that
an explicit representation, albeit a probabilistic one, could shed light on regularity
properties of solutions. (See [Ka] for an explicit representation of viscosity solutions
of fully nonlinear degenerate parabolic equations in the whole space.)
From the viewpoint of applications, solutions of Isaacs equations are intuitively
thought of as “value” functions of a stochastic differential game of survival between
two players, which we loosely describe as follows. We have a probability space
(Ω, F, P), a filtration of σ-algebras {Ft }t≥0 of Ω, complete with respect to (F, P),
and a d1 -dimensional Wiener process (wt , Ft ) on (Ω, F, P). For controls y, z (Ft progressively measurable processes available to players I, II respectively) and x ∈ D,
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4047
we have a solution xy,z,x
of the stochastic equation
t
(0.5)
xt = x +
t
t
σ(yr , zr , xr ) dwr +
b(yr , zr , xr ) dr.
0
0
Here, σ is a d × d1 matrix and b is a vector in Ed , sufficiently smooth to ensure
(0.5) has a solution for any x ∈ Ed and y, z. For fixed x ∈ D and choice of controls
y, z for the solution of (0.5), we associate a functional
J(y, z, x) :=
τ
Ey,z
x
−ϕr
f (yr , zr , xr )e
0
(0.6)
where
=
ϕy,z,x
t
−ϕτ
dr + g(xτ )e
,
t
c(ys , zs , xy,z,x
) ds,
s
0
y,z,x
τ y,z,x = τD
:= inf{t ≥ 0 : xy,z,x
∈
/ D}, and xy,z,x
is the solution of the stochastic
t
t
equation (0.5). The idea of the game is that player I chooses controls y = yt (ω) for
(0.5) in order to maximize J, while player II chooses controls z = zt (ω) for (0.5) to
minimize J. Each player “moves”, step-by-step, in fixed time intervals, into which
the lifetime of the game is divided. There are two values for this game: an upper
value (of J) in which player I (the maximizer) has the advantage, and a lower value
in which player II (the minimizer) has the advantage. By “advantage”, we mean
that the player knows how his opponent has moved, before he moves. Games of
survival in which f ≥ 0 and g ≡ 0 are called generalized pursuit-evasion games,
the most famous of which is the pursuit-evasion game: f ≡ 1, c ≡ 0. In this case,
from
J(y, z, x) = Eτ y,z,x , the mean exit-time or capture time of the process xy,z,x
t
the domain D. Here, player I, wishing to maximize the mean capture time, is the
evader, while player II, wanting to minimize the mean capture time, is the pursuer.
See [Fr] for a rigorous treatment of deterministic, nonstochastic differential games.
Definition 0.1. An admissible control for player I (respectively II) is an Ft progressively measurable function yt (ω) (respectively zt (ω)), defined on [0, ∞) × Ω,
with values in Y (respectively Z). The set of all admissible controls for player I
(respectively II) is denoted by M (respectively N ). Controls y 1 , y 2 ∈ M are said
to be equal on [s, t] if P{|yr1 − yr2 | = 0, a.e. r ∈ [s, t]} = 1. An analogous statement
holds for controls in N .
Definition 0.2. An admissible strategy for player I (respectively II) is a mapping
α : N → M (respectively β : M → N ) which preserves the indistinguishability of
controls. That is, if z 1 , z 2 ∈ N and z 1 = z 2 on [s, t], then α(z 1 ) = α(z 2 ) on [s, t].
The set of all admissible strategies for player I (respectively II) is denoted by Γ
(respectively ∆).
Definition 0.3. We define the upper value v + of the differential game by
(0.7)
v + (x) := sup inf J(α(z), z, x)
α∈Γ z∈N
τ
−ϕr
−ϕτ
= sup inf Eα,z
f
(α(z)
,
z
,
x
)e
dr
+
g(x
)e
.
r r
r
τ
x
α∈Γ z∈N
0
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4048
JAY KOVATS
and the lower value v − of the differential game by
(0.8)
v − (x) := inf sup J(y, β(y), x)
β∈∆ y∈M
= inf sup Ey,β
x
β∈∆ y∈M
τ
f (yr , β(y)r , xr )e−ϕr dr + g(xτ )e−ϕτ .
0
In the special case τ = +∞, g ≡ 0 and the lower bound for c is appropriately large,
it follows from [FS] that both v + and v − satisfy a dynamic programming principle.
From this it can be shown that v + is the unique bounded viscosity solution of
the upper Isaacs equation F + [v] = 0 in Ed and v − is the unique bounded viscosity
solution of the lower Isaacs equation F − [v] = 0 in Ed . Hence if the Isaacs condition
holds, i.e. F + = F − , we immediately have that v + = v − , in which case the
differential game is said to have value. (See also Świȩch [S], 1996.) In [FS] (1988),
Fleming and Souganidis examined probabilistic expressions for the viscosity solution
of the Cauchy problem for the parabolic Isaacs equation in a strip HT := [0, T )×Ed ,
with terminal data function g(x) and discount factor c ≡ 0, where the coefficients
depended on t and x (as well as controls y, z). They showed that if f and the
coefficients of the process are bounded, uniformly continuous, Lipschitz in (t, x),
and g ∈ Cb0,1 (Ed ), then the upper value function
(0.9)
T
α,z
f (r, xr , α(z)r , zr ) dr + g(xT ) , 0 ≤ t ≤ T, x ∈ Ed
v(t, x) = sup inf Et,x
α∈Γ(t) z∈N (t)
t
is the unique viscosity solution of the Cauchy problem: F + [u]+ut = 0 in HT , u(T, x)
= g(x), x ∈ Ed , and satisfies the dynamic programming principle: ∀ s with t ≤ s ≤
T,
s
f
(r,
x
,
α(z)
,
z
)
dr
+
v(s,
x
)
.
(0.10)
v(t, x) = sup inf Eα,z
r
r
r
s
t,x
α∈Γ(t) z∈N (t)
α(z),z
Here, for t ≤ s ≤ T, xs
t
(t, x) is the solution of the stochastic differential equation
s
σ(r, xr , yr , zr ) dwr +
b(r, xr , yr , zr ) dr
s
xs = x +
t
t
with control y = α(z), while Γ(t), N (t) denote, respectively, strategies for player I
and controls for player II, defined on [t, T ]. Corresponding statements hold for the
lower value function and the lower Isaacs equation.
If we define St,T g(x) = v(t, x), the dynamic programming principle (0.10) is the
statement that for 0 ≤ t ≤ s ≤ T , St,T = St,s ◦ Ss,T as operators on Cb0,1 (Ed ). If
the coefficients of our process and f are independent of t, and our process is at the
point x at the initial time t = 0, (0.9) can be rewritten as
T −t
α,z
inf E0,x
f (α(z)r , zr , xr ) dr + g(xT −t ) .
v(t, x) = S0,T −t g(x) = sup
α∈Γ(0,T ) z∈N (0,T )
0
That is, the right-hand side depends, for t ∈ [0, T ], only on differences T −t. Setting
Q(T − t)g(x) := S0,T −t g(x), (0.10) reads: for t ≤ s ≤ T , Q(T − t) = Q(s − t) ◦
Q(T −s). Since s−t, T −s ∈ [0, T ] were arbitrary, we have Q(t1 +t2 ) = Q(t1 )◦Q(t2 )
for any t1 , t2 ∈ [0, T ] such that t1 + t2 ∈ [0, T ]. That is, the family of operators
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4049
{Q(t)} forms a nonlinear semigroup on Cb0,1 (Ed ), where
t
α,z
(0.11)
Q(t)g(x) := sup inf Ex
f (α(z)r , zr , xr ) dr + g(xt )
α∈Γ z∈N
0
and Γ, N denote, respectively, stategies for player 1 and controls for player 2 on
[0, T ].
It also follows from Nisio’s results in [N1] that Q(t)g is a semigroup on Cb (Ed )
(bounded, uniformly continuous functions on Ed ). In 1988, M. Nisio (see [N1])
studied probabilistic solutions for the Cauchy problem for the parabolic Isaacs
equation with coefficients independent of t. The expressions for v + (t, x) and v − (t, x)
in [N1], however, differ from those found in [FS], as the notion of strategy is not
made explicit. Actually, the [FS] results use a discretization technique very similar
to that used in [N1]. Loosely stated, in the definition of the upper value function
from [N1], player I can freely choose controls (to maximize J) in a given time
interval, while player II can choose only among constant controls in that interval
(to minimize J). Similarly, for the lower value function, player II can freely choose
controls, while player I can use only constant controls. So, in a sense, the notion
of strategy is implicitly built into the game. Under the Isaacs condition F + = F − ,
Nisio showed that both v + (t, x) and v − (t, x) are viscosity solutions of the Cauchy
problem for the Isaacs equation: F [v] − vt = 0 in HT , v(0, x) = g(x), x ∈ Ed .
In 1993, Fleming and Nisio (see [FN]) examined value functions and min-max
equations associated with a stochastic differential game in a Hilbert space, with
dynamics governed by the controlled (time-homogeneous) Zakai equation. We refer
the reader to §§5 and 6, wherein the authors use a time-homogeneous analogue of
§2 in [FS] to prove a dynamic programming principle for the corresponding value
functions. Our formulation of value functions is consistent with that of both [FN]
and [N1].
Under the assumptions that (i) D ⊂ Ed is a domain for which there exists a
global barrier relative to L0 (y, z, x) (Assumption 1.0), and (ii) the coefficients of
the diffusion process (0.5) are uniformly bounded and Lipschitz, ((1.1) ), we show
that the following forms a nonlinear semigroup on C(D̄):
t∧τ
α,z
−ϕr
−ϕt∧τ
(0.12) J(t)g(x) = sup inf Ex
f (α(z)r , zr , xr )e
dr + g(xt∧τ )e
,
α∈Γ z∈N
0
where for x ∈ D, α ∈ Γ and z ∈ N , τ = τ α(z),z,x is the first exit time of the process
α(z),z,x
α(z),z,x
xt
from D, and xt
is the solution of the stochastic differential equation
(0.5) with y = α(z). From this fact it follows (see Theorem 1.4, the last remark in
§1 and (1.6)) that v + in (0.7) satisfies a dynamic programming principle, which in
turn implies that v + is a continuous viscosity solution of the upper Issacs equation
in D satisfying v = g on ∂D. The same holds for v − in (0.8) and the lower Isaacs
equation.
1. Continuity properties of J(t, x, y, z) and related functionals
In this section, we establish continuity properties of the functional
t∧τ
y,z
−ϕr
−ϕt∧τ
(1.1)
J(t, x, y, z) = Ex
f (yr , zr , xr )e
dr + g(xt∧τ )e
,
0
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4050
JAY KOVATS
where x ∈ D, y ∈ M, z ∈ N and g ∈ Cb (D). For simplicity, we assume that
t ∈ [0, T ]. In the case τ y,z,x = +∞, corresponding to D = Ed , it is a straightforward matter to show that J is uniformly continuous on [0, T ] × Ed × M × N
(see Proposition 2.2 in [N1]). Continuity and semigroup properties of the Bellman
functional (only one player) are established for the case D = Ed in §§3.1-3.3 of
[K1], §§5.1, 5.2 of [BL] and §§1, 2 of [N2]. Continuity and semigroup properties of
the Bellman functional for nondegenerate processes in a smooth, bounded domain
D are addressed in §2 of [LM1]. In general, for the case D = Ed , J will not be
continuous in x even in the linear (0-player) case. To offset this, we assume the
existence of a global barrier (see Assumption 1.0 below).
T
We define a distance function on M, by ρ̃M (y 1 , y 2 ) = E 0 ρY (yr1 , yr2 ) dr, for
y 1 , y 2 ∈ M, where ρY is the metric on Y . We may always assume that ρY < 1,
since otherwise we use the metric given by ρY := π2 tan−1 (ρY ). We define a distance
function on N in the same way. Fix t ∈ [0, T ], x ∈ Ed . We assume that for
y, y 1 , y 2 ∈ Y , z, z 1 , z 2 ∈ Z, x, x1 , x2 ∈ Ed ,
(1.1)
|h(y 1 , z 1 , x1 ) − h(y 2 , z 2 , x2 )| ≤ C{|x1 − x2 | + ρY (y 1 , y 2 ) + ρZ (z 1 , z 2 )},
|h(y, z, x)| ≤ C,
for h = σ, b, c, f . Then for y 1 , y 2 ∈ M, z 1 , z 2 ∈ N , x1 , x2 ∈ Ed , Itô’s formula yields
for t ∈ [0, T ],
E|xyt
1
,z 1 ,x1
− xyt
2
,z 2 ,x2 2
| ≤ |x1 − x2 |2
t
1 1 1
2 2 2
+ (3C 2 + 4C)E
|xyr ,z ,x − xyr ,z ,x |2 + ρ2Y (yr1 , yr2 ) + ρ2Z (zr1 , zr2 ) dr
0
≤ |x1 − x2 |2
t
+ (3C 2 + 4C)
E|xyr
1
,z 1 ,x1
− xyr
2
| dr + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) .
,z 2 ,x2 2
0
Hence Gronwall’s inequality gives, for any t ∈ [0, T ],
(1.2) E|xyt
1
,z 1 ,x1
− xyt
2
| ≤ C2 eC2 t |x1 − x2 |2 + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) ,
,z 2 ,x2 2
where C2 > 1 is a constant, depending only on C. This estimate, along with the
BDG inequality implies
(1.3)
1 1 1
2 2 2
E sup |xyt ,z ,x − xyt ,z ,x |2 ≤ C4 eC4 T |x1 − x2 |2 + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) ,
0≤t≤T
where C4 depends only on C. Define L0 (y, z, x)u(x) := tr[a(y, z, x)uxx (x)] +
b(y, z, x) · ux (x). Our main assumption throughout this paper is
Assumption 1.0. ∃ ψ ∈ C 2 (D̄) such that ∀ y ∈ Y, z ∈ Z, L0 (y, z)ψ ≤ −1 in
D, ψ ≡ 0 on ∂D.
Observe that in the nondegenerate case, that is, λId ≤ a(y, z, x) ≤ ΛId , a uniform
global barrier always exists if ∂D ∈ C 2 . For example, if D = Br (x0 ), an explicit
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4051
C ∞ (D) global barrier is given by ψ(x) = cosh(kr)−cosh(k|x−x0 |), for k sufficiently
large. More generally, if ∂D satisfies a uniform exterior sphere condition, then
the solution ψ ∈ C 2+α (D̄) of the Dirichlet problem for the convex Bellman-Pucci
equation
M+ (ψxx , λ, Λ) + C|ψx | + 1 = 0 in D,
ψ = 0 on ∂D,
where C is as in (1.1) , will be a global barrier with respect to L0 (y, z) on D. This
follows immediately from our assumption λId ≤ a(y, z, x) ≤ ΛId , and the definition
of M+ . Note that we can express the above pde as
sup
(a,b)∈Aλ,Λ ×B 1 (0)
{tr[aψxx ] + Cb · ψx } + 1 = 0,
where Aλ,Λ denotes the set of symmetric d × d symmetric matrices with eigenvalues
in [λ, Λ]. (See pp. 14-15 in [CC1] for the definition of M+ .)
In the degenerate case, global barriers occur in certain cases. For example, say
D := {x ∈ Ed : ψ(x) > 0}, where ψ is C 2 (Ed ) and uniformly concave, with
,
ψxx ≤ −κId , for some κ > 0. If tr[a(y, z, x)] ≥ Λ(x) > κ1 , and |ψx (x)| ≤ κΛ(x)−1
C
where C is as in (1.1) , then ψ will be a barrier for L0 on D, since
L0 (y, z, x)ψ ≤ −κtr[a(y, z, x)] + |b(y, z, x)||ψx (x)| ≤ −κΛ(x) + C|ψx (x)| ≤ −1.
Of course, here we’ve assumed that a(y, z, x) has at least one positive eigenvalue in
D.
The following is a variation of a lemma due to M. Safonov.
k
k
k
Lemma 1.1. For k = 1, 2 and xkt := xyt ,z ,x , let τ k be the first exit time of xkt
from the open set D. Then for a finite constant K = K(ψ),
E|τ 1 ∧ T − τ 2 ∧ T | ≤ K · E sup |x1t − x2t |.
0≤t≤T
Proof. Define τ := τ ∧ τ . Note that ψ ≥ 0 in D. By Ito’s formula,
τ k ∧T
τ k ∧T
1 dt ≤ −E
Lkt ψ(xkt ) dt
E(τ k ∧ T − τ ∧ T ) = E
τ ∧T
τ ∧T
= E ψ(xkτ∧T ) − ψ(xkτk ∧T ) ≤ E I{τ <T, τ =τ k } ψ(xkτ ).
1
2
On the set {τ < T, τ = τ k }, we have ψ(xkτ ) = |ψ(x1τ ) − ψ(x2τ )|. Moreover, since
τ = τ 1 ∧ τ 2 , the sets {τ = τ 1 } and {τ = τ 2 } are disjoint. Therefore,
E|τ 1 ∧ T − τ 2 ∧ T | =
2
E(τ k ∧ T − τ ∧ T ) ≤ E I{τ <T } |ψ(x1τ ) − ψ(x2τ )|
k=1
≤ K · E I{τ <T } |x1τ − x2τ | ≤ K · E sup |x1t − x2t |,
0≤t≤T
where
K :=
sup
x∈D, y∈∂D
|ψ(x) − ψ(y)|
ψ(x)
= sup
< ∞.
|x − y|
dist
(x, ∂D)
x∈D
This lemma and estimate (1.3) immediately give
(1.4)
E|τ y
1
,z 1 ,x1
2
2
2
∧ T − τ y ,z ,x ∧ T |
≤ K C4 eC4 T |x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) .
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4052
JAY KOVATS
Theorem 1.2. For g ∈ Cb (D), the function J(t, x, y, z) in (1) is uniformly continuous on [0, T ] × D × M × N .
Proof. First, we show uniform continuity on D × M × N . So fix t ∈ [0, T ], x1 , x2 ∈
D, y 1 , y 2 ∈ M, z 1 , z 2 ∈ N . For convenience, we use the following notation: for
t
k k
k
k k k
k = 1, 2, we set τ k = τ y ,z ,x , xkt = xyt ,z ,x , ϕkt = 0 c(yrk , zrk , xkr ) dr. Then
J(t, x1 , y 1 , z 1 ) − J(t, x2 , y 2 , z 2 )
1
t∧τ
=E
1
f (yr1 , zr1 , x1r )e−ϕr
dr −
0
t∧τ 2
2
f (yr2 , zr2 , x2r )e−ϕr
dr
0
1
2
+ E g x1t∧τ 1 e−ϕt∧τ 1 − g x2t∧τ 2 e−ϕt∧τ 2
:= J1 + J2 ,
where
t∧τ 2
t∧τ 1
1 1
1 −ϕ1r
2 2
2 −ϕ2r
f (yr , zr , xr )e
dr −
f (yr , zr , xr )e
dr |J1 | ≤ E 0
0
t∧τ 1 1
2
≤E
f (yr1 , zr1 , x1r )e−ϕr − f (yr2 , zr2 , x2r )e−ϕr dr
0
t∧τ 1
2 2
2 −ϕ2r
+ E
f (yr , zr , xr )e
dr t∧τ 2
t
t 1
2
f (yr1 , zr1 , x1r ) − f (yr2 , zr2 , x2r ) dr
≤ f ∞
E e−ϕr − e−ϕr dr + E
0
0
+ f ∞ E|t ∧ τ 1 − t ∧ τ 2 |.
By the Mean Value Theorem,
|e−ϕr − e−ϕr | ≤ |ϕ1r − ϕ2r |
r
≤
|c(ys1 , zs1 , x1s ) − c(ys2 , zs2 , x2s )| ds
0
r
ρY (ys1 , ys2 ) + ρZ (zs1 , zs2 ) + |x1s − x2s | ds.
≤C
1
2
0
Similarly
f (yr1 , zr1 , x1r ) − f (yr2 , zr2 , x2r ) dr ≤ C
t
0
t
ρY (yr1 , yr2 ) + ρZ (zr1 , zr2 ) + |x1r − x2r | dr.
0
From inequalities (1.2), (1.4), the definitions of ρ̃M , ρ̃N , and the fact that ρ̃M ≤ T ,
we have
|J1 | ≤ eC6 T N7 |x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) ,
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4053
where N7 = N7 (K, C), and C6 depends only on C. Splitting up J2 in the usual
way, and again using the Mean Value Theorem, we get
|J2 | ≤ g∞ E ϕ1 1 − ϕ2 2 + E g x1 1 − g x2 2 ,
t∧τ
t∧τ
t∧τ
t∧τ
|ϕ1t∧τ 1 − ϕ2t∧τ 2 | ≤ |ϕ1t∧τ 1 − ϕ1t∧τ 2 | + |ϕ1t∧τ 2 − ϕ2t∧τ 2 |
t∧τ 2
1
2
ρY (yr1 , yr2 ) + ρZ (zr1 , zr2 ) + |x1r − x2r | dr.
≤ C|t ∧ τ − t ∧ τ | + C
0
By inequality (1.4), we get, for some constant N8 = N8 (K, C),
E ϕ1t∧τ 1 − ϕ2t∧τ 2 ≤ eC5 T N8 |x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) .
Fix ε > 0. Since g ∈ Cb (D), ∃ δ(ε) > 0 such that for ξ, ξ ∈ D, |ξ − ξ | < δ(ε)
implies |g(ξ) − g(ξ )| < ε. Then
E g x1 1 − g x2 2 ≤ 2g∞ P x1 1 − x2 2 ≥ δ + ε.
t∧τ
|x1t∧τ 1
t∧τ
− x2t∧τ 2 |
|x1t∧τ 1
t∧τ
− x1t∧τ 2 | + |x1t∧τ 2
t∧τ
− x2t∧τ 2 |,
Writing
≤
the BDG inequality and
inequality (1.4) yield
E x1t∧τ 1 − x1t∧τ 2 ≤ N CEx |t ∧ τ 1 − t ∧ τ 2 | + CE|t ∧ τ 1 − t ∧ τ 2 |
≤ N C E|t ∧ τ 1 − t ∧ τ 2 | + CE|t ∧ τ 1 − t ∧ τ 2 |
≤ N3 eC4 T |x1 − x2 |1/2 + 4 ρ̃M (y 1 , y 2 ) + 4 ρ̃N (z 1 , z 2 )
+|x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) ,
where N3 = N3 (K, C). By (1.3),
E x1t∧τ 2 − x2t∧τ 2 ≤ E sup x1s − x2s 0≤s≤T
≤ C4 eC4 T |x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) .
Thus, since ρ̃ ≤ T (since ρ < 1), we get, for some N4 = N4 (K, C),
E x1t∧τ 1 − x2t∧τ 2 ≤ N4 (1 + diam(D) + T 1/4 )eC4 T |x1 − x2 |1/2 + 4 ρ̃M (y 1 , y 2 )
+ 4 ρ̃N (z 1 , z 2 )
= N6 |x1 − x2 |1/2 + 4 ρ̃M (y 1 , y 2 ) + 4 ρ̃N (z 1 , z 2 ) ,
where N6 = N6 (C, D, K, T ). By Chebyshev’s inequality,
N6 1
P x1t∧τ 1 − x2t∧τ 2 ≥ δ ≤
|x − x2 |1/2 + 4 ρ̃M (y 1 , y 2 ) + 4 ρ̃N (z 1 , z 2 ) ,
δ
and hence, for t ∈ [0, T ], and some constant N9 = N9 (C, K, |g∞ ), we get
2g∞ N6 1
C7 T
1/4
|J1 + J2 | ≤ N9 e
( diam(D) ∨ T ) +
|x − x2 |1/2 + 4 ρ̃M (y 1 , y 2 )
δ
+ 4 ρ̃N (z 1 , z 2 ) + ε.
Hence ∀ε > 0, ∃ δ = δ (ε, K, C, D, T, |g∞ ) > 0 such that ∀x1 , x2 ∈ D, y 1 , y 2 ∈
M, z 1 , z 2 ∈ N , |x1 − x2 | + ρ̃M (y 1 , y 2 ) + ρ̃N (z 1 , z 2 ) < δ ⇒ |J(t, x1 , y 1 , z 1 ) −
J(t, x2 , y 2 , z 2 )| < 2ε. That is, for t ∈ [0, T ], J(t, x, y, z, g) is uniformly continuous
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4054
JAY KOVATS
on D × M × N . The proof that J is continuous in t is easier. For fixed t, s ≥ 0, x ∈
D, y ∈ M, z ∈ N ,
|J(t, x, y, z) − J(s, x, y, z)|
t∧τ
−ϕr
−ϕt∧τ
+ Ey,z
≤ Ey,z
f
(y
,
z
,
x
)e
dr
− g(xs∧τ )e−ϕs∧τ |
r r
r
x x |g(xt∧τ )e
s∧τ
y,z
−ϕt∧τ
− e−ϕs∧τ |
≤ f ∞ Ey,z
x |t ∧ τ − s ∧ τ | + Ex |g(xt∧τ )||e
+ Ey,z
x |g(xt∧τ ) − g(xs∧τ )|
y,z
≤ f ∞ |t − s| + g∞ Ey,z
x |ϕs∧τ − ϕt∧τ | + Ex |g(xt∧τ ) − g(xs∧τ )|
≤ |t − s|(f ∞ + Cg∞ ) + Ey,z
x |g(xt∧τ ) − g(xs∧τ )|,
t∧τ
2
y,z 2
Ey,z
|x
−
x
|
≤
2
E
σ(y
,
z
,
x
)
dr
t∧τ
s∧τ
r r
r
x
x s∧τ
t∧τ
2 y,z +Ex |b(yr , zr , xr )| dr s∧τ
≤ 2C |t − s| + |t − s|2 .
2
Since g is uniformly continuous in D, ∀ε > 0, ∃ δ(ε) > 0 such that ∀ ξ, ξ ∈ D,
|ξ − ξ | < δ(ε) implies |g(ξ) − g(ξ )| < ε. Hence by Chebyshev’s inequality,
y,z
Ey,z
x |g(xt∧τ ) − g(xs∧τ )| ≤ ε + 2g∞ Px {|xt∧τ − xs∧τ | ≥ δ}
4C 2 g∞ |t − s| + |t − s|2 .
≤ε+
2
δ
Thus, for any (x, y, z) ∈ D × M × N ,
|J(t, x, y, z) − J(s, x, y, z)| ≤ |t − s|(f ∞ + Cg∞ )
4C 2 g∞ |t − s| + |t − s|2 + ε.
+
2
δ
Thus ∀ ε > 0, ∃δ = δ (ε, C, g∞ ), such that |t − s| < δ implies | sup J(t, x, y, z) −
x,y,z
sup J(s, x, y, z)| < 2ε. Thus J is (uniformly) continuous in t, uniformly with respect
x,y,z
to x, y, z.
The fact that J and the related functional (1.5) below are uniformly bounded,
independent of T , follows from Assumption 1.0 and Itô’s formula, since for every
x ∈ D, y ∈ M, z ∈ N and t > 0,
y,z
0 ≤ Ey,z
x ψ(xτ ∧t ) = ψ(x) + Ex
τ ∧t
L0 (yr , zr , xr )ψ(xr ) dr ≤ ψ(x) − Ey,z
x [τ ∧ t].
0
Hence by the monotone convergence theorem, supy,z,x Eτ y,z,x ≤ maxD ψ(x) :=
|ψ|0,D . This inequality, our assumptions on f , and the elementary estimate
sup inf J(t, x1 , y, z) − sup inf J(t, x2 , y, z) ≤ sup sup |J(t, x1 , y, z) − J(t, x2 , y, z)|
y
z
y
z
y
z
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4055
immediately imply
Corollary 1.3. For g ∈ C(D̄) and t ∈ [0, T ], for M̂ ⊂ M, N̂ ⊂ N , the function
(1.5)
J(t)g(x) := sup inf
y∈M̂ z∈N̂
t∧τ
−ϕr
Ey,z
x
f (yr , zr , xr )e
−ϕt∧τ
dr + g(xt∧τ )e
0
belongs to Cb (D).
Remark. Since our continuity (in x) estimates rely entirely on continuity estimates
for J(t, x, y, z) which are uniform in y, z, our corollary holds for functionals J(t)g
in which inf and sup are reversed. This statement holds as well for the next few
theorems. By our mean exit time estimate, ∀ x ∈ D, |J(t)g(x)| ≤ N f ∞ + g∞ ,
where N is independent of t. Moreover, it follows from the proof of Theorem 1.2
0,β/2
that g ∈ Cb0,β (D) implies J(t)g ∈ Cb
(D), β ∈ (0, 1]. Furthermore, our previous
estimate (1.2), obtained for t ∈ [0, T ], can be obtained for ∀ t ≥ 0, simply by
redefining our metric
∞ on M, N to be independent of T ; e.g., instead of ρ̃M , define
ρ∗M (y 1 , y 2 ) := E 0 ρY (yr1 , yr2 )e−r dr. It is clear then that (1.2) (with ρ∗ in place
of ρ and a different C2 ) will hold for all t ≥ 0.
Theorem 1.4. For x ∈ D and g ∈ C(D̄), define
τ
y,z
−ϕr
−ϕτ
(1.6)
v(x) = sup inf Ex
f (yr , zr , xr )e
dr + g (xτ ) e
.
y∈M̂ z∈N̂
0
Then for J in (1.5), we have
lim J(t)g(x) = v(x).
t→∞
Proof. Recall that ψ ≥ 0 in D. Setting µ =
1
2|ψ|∞ ,
Itô’s formula gives
τ
µτ
0 = Ey,z
= ψ(x) + Ey,z
x ψ(xτ )e
x
[L0 (yr , zr , xr )ψ(xr ) + µψ(xr )]eµr dr
0
1
≤ ψ(x) − Ey,z
2 x
τ
eµr dr,
0
µτ
from which it follows that Ey,z
≤ 2. Hence for x ∈ D,
x e
τ
y,z
|f (yr , zr , xr )|e−ϕr dr
|J(t)g(x) − v(x)| ≤ sup sup Ex
y∈M z∈N
t∧τ
+
Ey,z
x
g(xt∧τ )e−ϕt∧τ − g(xτ )e−ϕτ .
But
τ
Ey,z
x
t∧τ
|f (yr , zr , xr )|e−ϕr dr ≤ f ∞ Ey,z
x [(τ − t)Iτ >t ]
C4 − µt
2
e 2.
≤ C Ey,z
Py,z
x τ
x {τ > t} ≤
µ
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4056
JAY KOVATS
τ
By the Mean Value Theorem, e−ϕt∧τ − e−ϕτ ≤ ϕτ − ϕt∧τ = t∧τ c(yr , zr , xr ) dr ≤
C(τ − t ∧ τ ) and hence
−ϕt∧τ
g(xt∧τ )e−ϕt∧τ − g(xτ )e−ϕτ ≤ g∞ Ey,z
− e−ϕτ )
Ey,z
x
x (e
+ Ey,z
x |g(xt∧τ ) − g(xτ )|
y,z
≤ g∞ CEy,z
x [(τ − t)Iτ >t ] + 2g∞ Px {τ > t}
≤ 4g∞ e− 2 ( C
µ + 1).
µt
Putting all this together yields
|J(t)g(x) − v(x)| ≤ 4e−
µt
2
C
µ
+ g∞ ( C
µ + 1) .
Remark. By the proof of our previous theorem, we know that for g ∈ Cb (D), J(t)g
is uniformly continuous in t on [0, ∞). If we knew J(t) were a semigroup on some
subset of Cb (D), then our function v(x) = limt→∞ J(t)g(x) would satisfy a dynamic
programming principle, provided v ∈ Cb (D). Indeed, in this case, for any t ≥ 0,
(1.7)
v(x) = lim J(t + s)g(x) = lim J(t)[J(s)g(x)]
s→∞
s→∞
= J(t) lim J(s)g(x) = J(s)v(x)
s→∞
t∧τ
y,z
−ϕr
−ϕt∧τ
= sup inf Ex
f (xr , yr , zr )e
dr + v(xt∧τ )e
.
y∈M̂ z∈N̂
0
2. Continuity properties of value-type functions
By functions of value-type, we mean functions of the general form (1.6) (and
with inf/sup reversed). The continuity results in this section hold, in particular,
for the probabilistic solutions (0.7),(0.8) of the Isaacs equations to be introduced in
§3, which we call the value functions. From the theory of pde it is known that in
the nondegenerate case, continuous viscosity solutions of the Dirichlet problem for
Isaacs equations with bounded, Lipschitz coefficients (arbitrary continuous boundary values) are locally C 1,α (D). In the degenerate case, with positive discount
factor, viscosity solutions are unique (see theorem II.2 in [IL]) and, in the case of a
global barrier, locally Lipschitz in D. We prove the Lipschitz continuity of functions
of value-type (see Theorem 2.3) under the additional assumption that g ∈ C 2 (D).
First, we give a few preliminaries. Letting T → ∞ in the proof of Lemma 1.1
and using the fact that supy,z,x Eτ y,z,x ≤ |ψ|0,D yields
Ey,z |τ 1 − τ 2 | ≤ K · Ey,z |x1τ − x2τ |,
µτ
≤ 2,
where τ = τ 1 ∧ τ 2 . Recalling that ∃ µ = µ(ψ) > 0 for which supy,z,x Ey,z
x e
∀ t > 0, we have
Ey,z |x1τ − x2τ | = Ey,z |x1τ − x2τ |Iτ ≤t + Ey,z |x1τ − x2τ |Iτ >t
≤ Ey,z sup |x1s − x2s | + diam(D) · Py,z {τ 1 > t}
0≤s≤t
≤ Ce |x1 − x2 | + 2diam(D)e−µt .
Ct
The right-hand side is of the form AeCt +Be−µt , which for small |x1 −x2 | attains its
absolute minimum over (0, ∞), at the point t = t0 satisfying ACeCt0 = Bµe−µt0 .
This immediately implies
(2.1)
Ey,z |x1τ − x2τ | ≤ N · |x1 − x2 |α ,
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4057
and hence
(2.2)
Ey,z |τ 1 − τ 2 | ≤ KN · |x1 − x2 |α ,
τ
µ
and N = N (C, µ, diam(D)). Estimates of Ey,z τ 0 |x1r − x2r | dr and
where α = C+µ
τ
Ey,z 0 |x1r − x2r | dr are handled the same way. For example, for any t > 0,
τ
τ
|x1r − x2r | dr = Ey,z τ
|x1r − x2r | dr · Iτ ≤t
Ey,z τ
0
0 τ
y,z
+E
|x1r − x2r | dr · Iτ >t
τ
0
Ey,z |x1r − x2r | dr + diam(D)Ey,z τ 2 · Iτ >t
0
t
1
2
eC2 r dr
≤ t C2 |x − x |
0
√
+ diam(D) Ey,z τ 4 · Py,z {τ > t}
t
≤t
e(1+C2 )t
7
+ diam(D) 2 e−µt/2 ,
≤ |x1 − x2 | √
µ
C2
and taking the minimum over t ∈ (0, ∞) gives for β =
N (C, µ, diam(D)),
τ
(2.3)
Ey,z τ
|x1r − x2r | dr ≤ N · |x1 − x2 |β .
µ
2(1+C2 )+µ
and N =
0
We use these estimates to establish continuity properties of value-type functions.
For g ∈ C(D̄), y ∈ M, z ∈ N , x ∈ D, we define
τ
−ϕr
−ϕτ
J(y, z, x, g) := Ey,z
f
(y
,
z
,
x
)e
dr
+
g(x
)e
,
r
r
r
τ
x
0
where
t
=
ϕy,z,x
t
c(ys , zs , xy,z,x
) ds,
s
0
y,z,x
:= inf{t ≥ 0 : xy,z,x
∈
/ D}, and xy,z,x
is the solution of our stochastic
τ y,z,x = τD
t
t
integral equation.
Theorem 2.1. Let g ∈ C(D̄), and let M̂ ⊂ M, N̂ ⊂ N . Then the function
v(x) := sup inf J(y, z, x, g)
y∈M̂ z∈N̂
belongs to C(D̄).
Proof. We will show that (i) v ∈ Cb (D) and (ii) ∀ x0 ∈ ∂D,
lim
x∈D,x→x0
v(x) = g(x0 ).
The fact that v is bounded follows from the uniform boundedness of f, g and the
estimate supy,z,x Eτ y,z,x ≤ |ψ|0,D . To prove (i), fix x1 , x2 ∈ D. Omitting the
obvious dependence on g, we have
1
2
1
2 |v(x ) − v(x )| = sup inf J(y, z, x ) − sup inf J(y, z, x )
y∈M̂ z∈N̂
y∈M̂ z∈N̂
1
2
≤ sup sup J(y, z, x ) − J(y, z, x ) .
y∈M z∈N
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4058
JAY KOVATS
1
2
For fixed y ∈ M, z ∈ N and x1 , x2 ∈ D, set τ = τ y,z,x ∧ τ y,z,x = τ 1 ∧ τ 2 . Then
1
τ2
τ
1
2
y,z
1 −ϕ1r
2 −ϕ2r
J(y, z, x ) − J(y, z, x ) = E
f (yr , zr , xr )e
dr −
f (yr , zr , xr )e
dr
0
0
−ϕ1τ 1
+ Ey,z g(x1τ 1 )e
− g(x2τ 2 )e−ϕτ 2
2
:= T1 + T2 .
Writing the expression inside the expectation of T1 (in obvious abbreviated notation) as
τ
τ
τ
τ1
2
1 −ϕ1r
1 −ϕ1r
−ϕ2r
−ϕ2r 1
2
fr e
dr +
fr [e
−e
] dr +
e
[fr − fr ] dr +
fr2 e−ϕr dr,
τ
0
|T1 | ≤ CE
τ2
0
by our previous estimates, we have
τ
r
(τ − τ ) + C E
|x1s − x2s | ds dr
0
0
τ
y,z
1
2
|xr − xr | dr + CEy,z (τ 2 − τ )
+ CE
0
τ
y,z 1
≤ 2CE |τ − τ 2 | + C 2 Ey,z τ
|x1s − x2s | ds + CEy,z
y,z
1
2
y,z
0
τ
|x1r − x2r | dr.
0
As usual, by the uniform continuity of g in D̄, ∀ ε > 0, ∃ δ(ε) > 0 with
|T2 | ≤ g∞ Ey,z |ϕ1τ 1 − ϕ2τ 2 | + Ey,z |g(x1τ 1 ) − g(x2τ 2 )|
τ
≤ g∞ 2C Ey,z |τ 1 − τ 2 | + Ey,z
|x1r − x2r | dr
0
+ 2g∞ Py,z {|x1τ 1 − x2τ 2 | > δ} + ε.
By Chebyshev’s inequality,
3 y,z 1
E |xτ 1 − x1τ | + Ey,z |x1τ − x2τ | + Ey,z |x2τ − x2τ 2 | .
δ
1
1
τ
τ
1
1
1
− xτ ≤ σ(yr , zr , xr ) dwr + b(yr , zr , xr ) dr ,
τ
τ
Py,z {|x1τ 1 − x2τ 2 | > δ} ≤
From
1
x 1
τ
we have
Ey,z |x1τ 1 − x1τ | ≤ N CEy,z (τ 1 − τ )1/2 + CEy,z (τ 1 − τ )
≤ N1 (C) Ey,z |τ 1 − τ 2 |1/2 + Ey,z |τ 1 − τ 2 |
1/2
+ Ey,z |τ 1 − τ 2 | ,
≤ N1 (C) Ey,z |τ 1 − τ 2 |
and hence
y,z 1
2
y,z
|T2 | ≤ C2 g∞ E |τ − τ | + E
0
τ
1/2
N3 (C) y,z 1
E |τ − τ 2 |
δ
y,z 1
2
y,z 1
2
+E |τ − τ | + E |xτ − xτ |
+ ε.
|x1r − x2r | dr +
Estimates (2.1)-(2.3) and routine calculations give
N4 1
1
2 α
1
2 γ1
2 γ2
|x − x |
|T1 + T2 | ≤ C4 |x − x | + C2 g∞ |x − x | +
+ ε,
δ
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4059
where α, γ1 , γ2 depend only on C, µ, D with the constants depending additionally
on K. From this it follows that ∀ ε > 0, ∃δ = δ (ε, C, D, µ, K, g∞ ) such that
∀x1 , x2 ∈ D, if |x1 − x2 | < δ , then
|v(x1 ) − v(x2 )| ≤ sup sup J(y, z, x1 ) − J(y, z, x2 ) < 2ε,
y∈M z∈N
which proves (i). To prove (ii), observe that
τ
y,z −ϕr
y,z −ϕτ
|v(x)−g(x)| ≤ sup sup Ex f (yr , zr , xr )e
dr + Ex g(xτ )e
− g(x) .
y∈M z∈N
0
Applying Itô’s theorem to the barrier function ψ gives supy,z Ey,z
x τ ≤ ψ(x). By
the uniform continuity of g in D̄, and then Chebyshev’s inequality, we have ∀ ε >
0, ∃ δ(ε) > 0 with
−ϕτ
g(xτ )e−ϕτ − g(x) ≤ g∞ Ey,z
) + Ey,z
Ey,z
x
x (1 − e
x |g(xτ ) − g(x)|
≤ g∞ CEy,z
x τ +
2g∞ y,z
Ex |xτ − x| + ε.
δ
This, f ∞ ≤ C and the inequalities
y,z 1/2
y,z
Ey,z
|x
−
x|
≤
C
(E
τ
)
+
E
τ
≤
C
ψ(x)
+
ψ(x)
τ
x
x
x
yield, for x ∈ D and any ε > 0,
|v(x) − g(x)| ≤ Cψ(x) (1 + g∞ ) +
2Cg∞ ψ(x) + ψ(x) + ε.
δ(ε)
From this, the inequality |v(x) − g(x0 )| ≤ |v(x) − g(x)| + |g(x) − g(x0 )|, and the
fact that limx→x0 ψ(x) = 0, it follows that ∀ ε > 0, ∃ δ = δ (ε, C, g∞ ) such that
∀x ∈ D, if |x − x0 | < δ , then |v(x) − g(x0 )| < 3ε, proving (ii).
Remark. Using the same argument as in (ii), we can show that for x0 ∈ ∂D,
lim J(t)g(x) = g(x0 ). This, along with Corollary 1.3, implies that for g ∈
x∈D,x→x0
C(D̄), J(t)g ∈ C(D̄). Using the same techniques as in (i), we can show that if
g ∈ C 0,β (D̄), β ∈ (0, 1], then there is a constant N = N (d, µ, C, K, D, β) and
γ = γ(d, µ, C, D, β) such that for any x1 , x2 ∈ D,
|v(x1 ) − v(x2 )| ≤ N (1 + g0,β ) · |x1 − x2 |γ .
We now prove the Lipschitz continuity of the value-type functions, under the
additional assumptions that (i) g ∈ C 2 (D) and (ii) the lower bound of the discount
factor is large compared to the Lipschitz constant for coefficients σ and b (assumption (2.4) below). Our techniques are similar to those found in Theorems 2.3 in
both [L] and [LM1]. More precisely, assumption (ii) is that
(2.4)
c(y, z, x) = c0 > µ0 , where
inf
x∈Ed
(y,z)∈Y ×Z
µ0 = sup
x,x ∈Ed
(y,z)∈Y ×Z
σ(y, z, x) − σ(y, z, x )2
(b(y, z, x) − b(y, z, x )) · (x − x )
+
2
2|x − x |
|x − x |2
.
Lemma 2.2. For all admissible controls y ∈ M, z ∈ N and any Ft -Markov time
θ,
2
Ey,z xxθ − xxθ e−2µ0 θ ≤ |x − x |2 .
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4060
JAY KOVATS
Proof. Writing
xy,z,x
− xy,z,x
= x − x +
t
t
t
[σ(yr , zr , xxr ) − σ(yr , zr , xxr )] dwr
0
t
+
[b(yr , zr , xxr ) − b(yr , zr , xxr )] dr,
0
the definition of µ0 , and Itô’s formula applied to the function z → |z|2 yields
Ey,z |xxT ∧θ − xxT ∧θ |2 e−2µ0 T ∧θ
= |x − x |2
+ Ey,z
T ∧θ tr (σ(yr , zr , xxr ) − σ(yr , zr , xxr ))(σ(yr , zr , xxr ) − σ(yr , zr , xxr ))∗
0
+ 2(b(yr , zr , xxr ) − b(yr , zr , xxr )) · (xxr − xxr ) − 2µ0 |xxr − xxr |2 e−2µ0 r dr.
≤ |x − x |2 .
Theorem 2.3. Under the above assumptions, there is a constant
N = N (C, c0 , µ0 , |g|2,D , |ψx |0,D )
such that for any x , x ∈ D,
1
2
|v(x1 ) − v(x2 )| ≤ N |x1 − x2 |.
Proof. For any x1 , x2 ∈ D, as before,
1
2
1
2 |v(x ) − v(x )| = sup inf J(y, z, x ) − sup inf J(y, z, x )
y∈M̂ z∈N̂
y∈M̂ z∈N̂
1
2
≤ sup sup J(y, z, x ) − J(y, z, x ) .
y∈M z∈N
For any y ∈ M, z ∈ N and x1 , x2 ∈ D, using the same “1,2” notation for convenience,
1
τ2
τ
1
2
y,z
1 −ϕ1r
2 −ϕ2r
fr e
dr −
fr e
dr
J(y, z, x ) − J(y, z, x ) = E
0
y,z
+E
0
1
g(x1τ 1 )e−ϕτ 1
− g(x2τ 2 )e−ϕτ 2
2
:= T1 + T2 .
As before, and using the fact that c ≥ c0 ,
∞
1
2
|T1 | ≤
Ey,z Ir≤τ 1 |fr1 ||e−ϕr − e−ϕr | dr
1
τ
2
+
Ey,z Ir≤τ 1 e
|fr1 − fr2 | dr + Ey,z |fr2 |e−ϕr dr 2
τ
0
∞
∞
1
2
≤C
Ey,z |e−ϕr − e−ϕr | dr + C
e−c0 r Ey,z |x1r − x2r | dr
0
(2.5)
∞
0
−ϕ2r
1
2
C
+ Ey,z e−c0 τ − e−c0 τ .
c0
0
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4061
By the Mean Value Theorem, e−ϕt − e−ϕt = e−ξ (ϕ2t − ϕ1t ), with ξ ≥ c0 t (since
t
c ≥ c0 ). Moreover, |ϕ1t − ϕ2t | ≤ C 0 |x1s − x2s | ds. Hence
t
1
2
Ey,z |x1s − x2s | ds.
(2.6)
Ey,z |e−ϕt − e−ϕt | ≤ Ey,z e−ξ |ϕ1t − ϕ2t | ≤ Ce−c0 t
1
2
0
By Lemma 2.2, Ey,z |x1t − x2t | ≤ |x1 − x2 |eµ0 t . Hence by (2.6),
1
2
C 1
Ey,z |e−ϕt − e−ϕt | ≤
|x − x2 |e(µ0 −c0 )t ,
µ0
which by (2.5), immediately yields
∞
C 2 |x1 − x2 | ∞ (µ0 −c0 )r
|T1 | ≤
e
dr + C|x1 − x2 |
e(µ0 −c0 )r dr
µ0
0
0
(2.7)
C y,z −c0 τ 1
−c0 τ 2 + E e
−e
.
c0
By assumption (iii), Itô’s formula implies ∀ x ∈ D, y ∈ M, z ∈ N ,
τ
y,z
y,z
0 = Ex ψ( xτ ) = ψ(x) + Ex
L0 (yr , zr , xr )ψ(xr ) dr ≤ ψ(x) − Ey,z
x τ.
0
Hence Eτ y,z,x ≤ |ψ|0,D . Itô’s formula implies that the process
t∧τ
βty,z,x := ψ(xt∧τ )e−c0 (t∧τ ) +
e−c0 r dr,
0
defined for x ∈ D, y ∈ M, z ∈ N , is an Ft -supermartingale. As before, set τ =
1
2
τ y,z,x ∧ τ y,z,x = τ 1 ∧ τ 2 . Since τ ≤ τ 1 , the definition of supermartingale implies
1
1
1 y,z −c0 τ
e
E
− e−c0 τ ≤ Ey,z ψ x1τ e−c0 τ − ψ x1τ 1 e−c0 τ
c0
= Ey,z ψ x1τ − ψ x2τ e−c0 τ
1
+ Ey,z ψ x2τ e−c0 τ − ψ x1τ 1 e−c0 τ .
Note that the quantity inside the expectation of our last summand is 0 a.s. on the
set τ 2 ≤ τ 1 , since ψ ≡ 0 on ∂D. By the Mean Value Theorem, c0 > µ0 , and Lemma
2.2,
1
1 y,z −c0 τ
e
E
− e−c0 τ ≤ 2Ey,z e−c0 τ ψ x1τ − ψ x2τ c0
≤ 2ψx ∞ Ey,z e−c0 τ x1τ − x2τ ≤ 2ψx ∞ |x1 − x2 |,
and hence
1
2
Ey,z e−c0 τ − e−c0 τ ≤ 4c0 ψx ∞ |x1 − x2 |,
which by (2.7) gives
C2
C
(2.8)
|T1 | ≤ |x1 − x2 |
+
+ 4Cψx ∞ .
µ0 (c0 − µ0 ) c0 − µ0
To estimate T2 , we write
2 y,z 1 −ϕ1τ 1
g(xτ 1 )e
− g(x2τ 2 )e−ϕτ 2 E
1
1 1
2
≤ Ey,z g(x1τ 1 )e−ϕτ 1 − g(x1τ )e−ϕτ + Ey,z g(x1τ )e−ϕτ − g(x2τ )e−ϕτ 2
2 + Ey,z g(x2τ )e−ϕτ − g(x2τ 2 )e−ϕτ 2 .
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4062
JAY KOVATS
The first and third summands are estimated using Itô’s formula
τ1
1 1
y,z 1 −ϕ1τ 1
g(xτ 1 )e
− g(x1τ )e−ϕτ ≤ Ey,z
|L(yr , zr , xr )g(x1r )|e−ϕr dr
E
τ
≤ N (C, d)g2,∞ E
y,z
τ1
e−c0 r dr
τ
≤ 2N g2,∞ ψx ∞ |x1 − x2 |.
We handle the middle summand in the usual way:
1
2
1
2
(2.9) g(x1τ )e−ϕτ − g(x2τ )e−ϕτ ≤ g∞ e−ϕτ − e−ϕτ + e−c0 τ gx ∞ |x1τ − x2τ |.
τ
∞
1
2
As before, e−ϕτ − e−ϕτ ≤ Ce−c0 τ 0 |x1r −x2r | dr ≤ C 0 e−c0 r |x1r −x2r | dr. Taking
the expectation in (2.9) and using Lemma 2.2, along with c0 > µ0 , yields
1
2
g∞ C
Ey,z g(x1τ )e−ϕτ − g(x2τ )e−ϕτ ≤ |x1 − x2 |
+ gx ∞ ,
c0 − µ0
and hence
g∞ C
+ gx ∞ ,
|T2 | ≤ |x − x | 4Cg2,∞ ψx ∞ +
c0 − µ0
1
2
which, along with (2.8) gives
|v(x1 ) − v(x2 )| ≤ sup sup J(y, z, x1 , g) − J(y, z, x2 , g) ≤ N |x1 − x2 |.
y∈M z∈N
3. Semigroup properties
In this section, we establish the dynamic programming principle for the upper
and lower value functions v + (x), v − (x) defined in (3.14), (3.15). We do this by
showing that the upper and lower value functions V + (t)g, V − (t)g, defined in (3.4),
form a semigroup on C(D̄) (see the remark at the end of §1). Our setup is as
follows: (Ω, F, P) is a probablity space on which a d-dimensional Brownian motion
(wt , Ft ) is defined. Here, for our filtration {Ft }t≥0 of σ-algebras of Ω, we take
Ft := Ftw = σ(ws : 0 ≤ s ≤ t). We adapt the weak formulation of the differential
game in which our probability space may vary, and hence admissible controls may
be defined on another probability space (e.g. the canonical space). In the strong
formulation of a differential game, the probability space is fixed. Consequently,
controls may be defined only on the original space. The weak formulation has the
obvious practical advantage of a wider class of admissible controls for the players.
Even in the one-player setting, there are known examples in which optimal controls
do not exist in the strong formulation, but do exist in the weak formulation.
Observe that by condition (1.1) : if (Ω , G, P ) is any probability space on which a
d-dimensional Brownian motion (βt , Gt ) is defined, and if y = y.(ω ), z = z.(ω ) are
{Gt }-progressively measurable processes with values in Y and Z respectively, then
to the equation
there exists a (P -a.s.) unique, Gt -measurable solution xt = xy,z,x
t
(defined on (Ω , G, P ))
t
t
σ(yr , zr , xr ) dβr +
b(yr , zr , xr ) dr.
xt = x +
0
0
We further recall that if (Ω, F, P) is a probablity space on which (wt , Ft ) is a ddimensional Brownian motion, then for any s ≥ 0, Fs and σ(ws+h − ws : h ≥ 0)
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4063
θs w
are P-independent σ-algebras. In particular, for 0 ≤ s ≤ t, Fsw and Ft−s
are
independent, where for ω ∈ Ω and h ∈ [0, t − s], θs w(ω) is the Brownian motion
defined by θs wh (ω) := ws+h (ω) − ws (ω). Note that for any ω ∈ Ω, w[0,s] (ω) :=
w.|[0,s] (ω) ∈ C([0, s], Ed ), while θs w[0,t−s] (ω) ∈ C([s, t], Ed ).
Analogously, if C := C([0, ∞), Ed ), A = A∞ := σ(xt : t ≥ 0), and W =
W (d) is the d-dimensional Wiener measure on (C, A), then (xt , At ) is the canonical
Brownian motion on (C, A, W), where At := σ(xs : 0 ≤ s ≤ t). (See §§2.2, 2.3 in
θs x
are W-independent σ-algebras,
[SV].) In particular, for 0 ≤ s ≤ t, As and At−s
where θs xh := xh+s − xs is also a Brownian motion under W. We identify C[0, t]
with C[0, s] × C[s, t] under the map π : C[0, t] → C[0, s] × C[s, t], defined by π(x.) =
(x.1 , x.2 ) := (x[0,s] , θs x[0,t−s] ). Moreover, as a Brownian motion has independent
increments, this map induces the identification W0,t = W0,s ⊗ Ws,t , where W0,s
and Ws,t denote the Wiener measures on C[0, s], C[s, t], respectively.
The weak formulation of the differential game plays on the fact that admissible
controls may be viewed as a.e. Borel functions of that space’s Brownian motion
(see p. 6 of [K1], and Lemma 1.5.6 in [N3]). Whether an expectation is taken
with respect to P or W is therefore unimportant, as expectations with respect
to P are expectations with respect to W. For example, say that y = {yt (ω)} is
admissible and that yt (ω) = ȳt (w[0,t] (ω)) for P-a.e. ω ∈ Ω, for some Borel function
ȳ : [0, t] × C[0, t] → Y . By the property of Wiener measure, for our fixed Brownian
motion w.(ω) on Ω, we have W(B) = P(w.−1 (B)) ∀ B ∈ A, and hence Wt (B) =
−1
(B)) ∀ B ∈ At . Thus, for any function f : (Y, B(Y )) → (E1 , B(E1 )), we
P(w[0,t]
have
EP f (yt ) :=
f (yt (ω)) P(dω)
Ω
−1
=
f (ȳt (w[0,t] (ω))) P(dω) =
f (ȳt (x.)) Pw[0,t]
(dx.)
Ω
C[0,t]
=
f (ȳt (x.)) Wt (dx.) := EWt f (ȳt ).
C[0,t]
Following the convention in [FN] and [N1], we use E to denote expectation, either
with respect to P or W.
In [FS], the authors prove the dynamic programming principle in the timeinhomogeneous setting, using a strong formulation of the differential game in the
canonical space. In the time-homogeneous setting of [FN], and using the same
definitions of control and strategy as in [FS], the authors prove a corresponding
dynamic programming principle using a weak formulation of the game. It is interesting to note that the “-optimal” strategies chosen in the “strong” proof of
the dynamic programming principle in [FS] (see (2.10), (2.11) p. 307) are identical
to those chosen in the corresponding “weak” proof given in [FN] (see (5.14) p. 90,
(5.24) p. 91, with, of course, α, β and M, N switched). The weak formulation of
the game is also taken in the time-homogeneous setting in [N1]. Our formulation
of the game is consistent with that of both [N1] and [FN].
Definition 3.1. An admissible control for player I (respectively II) is an Ftw progressively measurable process yt (ω) (respectively zt (ω)), having values in Y
(respectively Z). The set of all admissible controls for player I (respectively II) is
denoted by M (respectively N ). We say that admissible controls y 1 , y 2 in M are
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4064
JAY KOVATS
equal on [s, t] if P{|yr1 − yr2 | = 0, a.e. r ∈ [s, t]} = 1, with the analogous statement
holding for controls in N .
Definition 3.2. An admissible strategy for player I (respectively II) is a mapping
α : N → M (respectively β : M → N ) which preserves equality of controls. That
is, if z 1 , z 2 ∈ N and z 1 = z 2 on [s, t], then α(z 1 ) = α(z 2 ) on [s, t]. The set of all
admissible strategies for player I (respectively II) is denoted by Γ (respectively ∆).
For simplicity, we suppose t ∈ [0, T ]. Let π = {0 = t0 < t1 < · · · < tn = T } be a
partition of [0, T ] and let π = max1≤i≤n (ti − ti−1 ) denote its mesh.
Definition 3.3. y ∈ M (respectively z ∈ N ) is a π-admissible control for player
I (respectively II) if yt = ytj (respectively zt = ztj ) ∀ t ∈ [tj , tj+1 ). The set of all
π-admissible controls for player I (respectively II) is denoted by Mπ (respectively
Nπ ).
Definition 3.4. α ∈ Γ is a π-admissible strategy for player I if α : N → Mπ has
the further properties that (i) α(z)r is z-independent for r ∈ [0, t1 ), and (ii) z = z̄
on [0, tj ) implies α(z)tj = α(z̄)tj . Corresponding defintions hold for π-admissible
strategies β : M → Nπ for player II. The set of all π-admissible strategies for player
I (respectively II) is denoted by Γπ (respectively ∆π ).
As in [FN], we define the π-upper value function Vπ+ and π-lower value function
by
Vπ−
Vπ+ (t)g(x) = inf sup J(t, x, y, β(y), g),
β∈∆π y∈M
(3.2)
Vπ− (t)g(x)
= sup inf J(t, x, α(z), z, g),
α∈Γπ z∈N
where J(t, x, y, z, g) is as in (1.1). Taking α(N ) = M̂ and β(M) = N̂ in Corollary
1.3, along with the remark after Theorem 2.1, yields that Vπ+ (t)g, Vπ− (t)g ∈ C(D̄)
and are continuous in t, for g ∈ C(D̄). Moreover, Vπ+ g, Vπ− g are continuous in t, x
independent of the partition π of [0, T ]. We define the upper value function V +
and lower value function V − by
V + (t)g(x) =
(3.4)
inf
sup J(t, x, y, β(y), g),
sup
inf J(t, x, α(z), z, g).
β∈∪n ∆πn y∈M
V − (t)g(x) =
α∈∪n Γπn z∈N
Say that πn ≤ πn+1 ; i.e., the set of partition points for πn is contained in the set of
partition points for πn+1 . Then Nπn ⊂ Nπn+1 , and hence ∆πn ⊂ ∆πn+1 . The same
holds for Mπn and Γπn . From this it follows that
V + g ≤ Vπ+n+1 g ≤ Vπ+n g,
Vπ−n g ≤ Vπ−n+1 g ≤ V − g
and hence
(3.5)
V + (t)g(x) = lim Vπ+n (t)g(x),
n→∞
V − (t)g(x) = lim Vπ−n (t)g(x).
n→∞
It follows from our previous arguments that if πn < πn+1 with limn→∞ πn = 0,
the value functions V + , V − do not depend on a sequence {πn }.
Remark. As in [N1], we can describe V + and V − without reference to strategies.
For example, for any fixed partition π, we can write
Vπ+ (t)g(x) = inf sup J(t, x, y, β(y), g) = inf sup J(t, x, y, z, g).
β∈∆π y∈M
z∈Nπ y∈M
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4065
Indeed, ∀ ε > 0, ∃ β ε ∈ ∆π with supy∈M J(y, β ε (y)) < inf β∈∆π supy∈M J(y, β(y))
+ ε. But ∀ y ∈ M, β ε (y) ∈ Nπ ; hence
inf sup J(y, z) < inf sup J(y, β(y)) + ε,
z∈Nπ y∈M
β∈∆π y∈M
giving us one inequality. On the other hand, fix any z ∈ Nπ and consider the
constant strategy β̂ : M → Nπ defined by β̂(y) ≡ z. It is trivial to verify that β̂ ∈
∆π and hence supy∈M J(y, z) = supy∈M J(y, β̂(y)) ≥ inf β∈∆π supy∈M J(y, β(y)).
Since z ∈ Nπ was arbitrary, we have the other inequality. A similar argument yields
Vπ− (t)g(x) = sup inf J(t, x, α(z), z, g) = sup inf J(t, x, y, z, g).
α∈Γπ z∈N
y∈Mπ z∈N
Hence from (3.2) and (3.5), we can write, as in [N1],
(3.6)
V + (t)g(x) = lim
inf
V − (t)g(x) = lim
sup
sup J(t, x, y, z, g),
n→∞ z∈Nπn y∈M
n→∞ y∈Mπ
inf J(t, x, y, z, g).
n
z∈N
We will use formulation (3.6) of the value functions in proving the semigroup property. We establish the semigroup property for V + (t) (the proof for V − (t) is similar), generalizing Theorem 2 in [N1], which corresponds to the case D = Ed ,
τ y,z,x = +∞. Since value functions (as given by (3.5)) are independent of the
sequence of partitions whose mesh tends to zero, we take for our πn , the “approximate” partition of [0, T ], where tj = j2−n , 0 ≤ j ≤ [2n T ] + 1. We define
M(n, j) to be the set of all Y -valued, Ftw -adapted processes defined on the interval
I(n, j) := [ 2jn , j+1
2n ]. N (n, j) is defined analogously. We identify, for example, N
with N (n, 0) × N (n, 1) × · · · . We define the set of all constant strategies for player
II on the interval I(n, j) as
Nc (n, j) = z ∈ N (n, j) : zt = z jn , ∀ t ∈ I(n, j) .
2
Setting y = (y 0 , y 1 , . . . , y l ), z = (z 0 , z 1 , . . . , z l ), where y k , z k ∈ M(n, k), Nc (n, k)
respectively, we define the lower value function V + , for l = [2n T ] by
V + (t)g(x) = lim
inf
sup
inf
sup
n→∞ Nc (n,0) M(n,0) Nc (n,1) M(n,1)
· · · inf
sup J(t, x, y 0 y 1 · · · y l , z 0 z 1 · · · z l , g).
Nc (n,l) M(n,l)
Under the identification Nπn = Nc (n, 0) × Nc (n, 1) × · · · × Nc (n, l), the above
expression is exactly the expression for V + (t)g(x) given in (3.6) (see Theorem 1.4.1
in [Fr]). As in [N1], to show the semigroup property for V + (t), set ∆ = 2−n and
define
Vn g(x) = inf sup J(∆, x, y, z, g) := inf S(∆, z)g(x).
z∈Z y∈M
z∈Z
By Bellman’s principle, for any z ∈ Z, S(t, z)g(x) is a semigroup on Cb (D), for
0 ≤ t ≤ T . Since Vn g ∈ Cb (D) for g ∈ Cb (D), we consider the set
N (x) = {z ∈ Z : Vn g(x) = S(∆, z)g(x)}.
By the continuity of Vn and S, N (x) is easily seen to be nonempty and compact.
Hence, by Theorem 12.1.10 in [SV], a Borel selector z̄(·) = z̄(·, ∆, g) of N (x) exists;
i.e., z̄ is a Borel function on Ed and z̄(x) ∈ N (x). The following is the “stopped
process” analogue of lemma 3.1 in [N1] (see §10.2 in [D]).
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
4066
JAY KOVATS
Lemma 3.5. For z j ∈ Nc (n, j) and y j ∈ M(n, j), j = 0, . . . , k − 1, we have
sup
inf
z∈Nc (n,k) y∈M(n,k)
J((k + 1)∆, x, y 0 · · · y k−1 y, z 0 · · · z k−1 z, g)
= J(k∆, x, y 0 · · · y k−1 , z 0 · · · z k−1 , Vn g).
Proof. Set ŷ = y 0 · · · y k−1 , ẑ = z 0 · · · z k−1 . Observe that for F̃k∆ := {A ∈ F0 :
{A, τ ŷ,ẑ > k∆} ∈ Fk∆ }:
(k+1)∆∧τ
−ϕ
ŷy,ẑz
−ϕr
(k+1)∆∧τ
Ex
f ((ŷy)r , (ẑz)r , xr )e
dr + g x(k+1)∆∧τ e
F̃k∆
k∆∧τ
−ϕk∆∧τ ŷ,ẑ,x
=e
Ey,z
xŷ,ẑ,x ŷ,ẑ,x
k∆∧τ
∆∧τ
−ϕr
f (yr , zr , xr )e
−ϕ∆∧τ
dr + g (x∆∧τ ) e
0
, y, z, g)
= e−ϕk∆∧τ ŷ,ẑ,x J(∆, xŷ,ẑ,x
k∆∧τ ŷ,ẑ,x
ŷ,ẑ,x
≤ e−ϕk∆∧τ ŷ,ẑ,x S(∆, z)g xk∆∧τ ŷ,ẑ,x ,
from which it follows that
J((k + 1)∆, x, ŷy, ẑz, g) ≤ J(k∆, x, ŷ, ẑ, S(∆, z)g).
Since y ∈ M(n, k) was arbitrary,
sup
J((k + 1)∆, x, ŷy, ẑz, g) ≤ J(k∆, x, ŷ, ẑ, S(∆, z)g);
y∈M(n,k)
hence
inf
sup
z∈Nc (n,k) y∈M(n,k)
J((k + 1)∆, x, ŷy, ẑz, g) ≤
inf
z∈Nc (n,k)
J(k∆, x, ŷ, ẑ, S(∆, z)g)
≤ J(k∆, x, ŷ, ẑ, Vn g),
ŷ,ẑ,x
ŷ,ẑ,x
=
V
at
z
=
z̄
x
∈ Nc (n, k).
since S(∆, z)g xŷ,ẑ,x
g
x
n
ŷ,ẑ,x
ŷ,ẑ,x
ŷ,ẑ,x
k∆∧τ
k∆∧τ
k∆∧τ
We now show the other inequality. By the uniform continuity of J(∆, x, y, z, g)
in x ∈ D, z ∈ Z (uniformly w.r.t. y ∈ M), ∀ ε > 0, ∃ δ(ε, g) > 0 s.t. |x − x | <
δ and |z − z | < δ imply
ε
(3.7)
sup |J(∆, x, y, z, g) − J(∆, x , y, z , g)| < .
3
y∈M
Let {Dn }n≥1 be a partition of Ed × Z, with diam(Dn ) < δ. Fix (xn , zn ) ∈ Dn . By
definition of S(∆, zn )g(xn ), ∃ yn∗ ∈ M(n, 0) s.t.
ε
(3.8)
S(∆, zn )g(xn ) < J(∆, xn , yn∗ , zn , g) + .
3
Since yn∗ ∈ M(n, 0), yn∗ is w.-adapted. Moreover, since controls may be considered as a.e. Borel functions of the Brownian motion, there exists a Borel
function yn : [0, ∆] × C([0, ∆], Ed ) → Y , which is progressively measurable and
yn∗ (t, ω) = yn (t, w.(ω)) for P-a.e. ω ∈ Ω. Now define ȳ = ȳ(·, ∆, g) by
ȳ(t, w., x, z) =
yn (t, w.)IDn (x, z).
n
Then ȳ ∈ M(n, 0), and by (3.7), (3.8), for (x, z) ∈ D × Z (hence in Dn , for some
n) and by the continuity of S(∆, z)g(x) in x ∈ D, z ∈ Z,
(3.9)
S(∆, z)g(x) < J(∆, x, ȳ, z, g) + ε.
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
DIRICHLET PROBLEM FOR ISAACS EQUATION IN A SMOOTH DOMAIN
4067
For z ∈ Nc (n, k), and t ∈ I(n, k), define
ȳ2 (t) = ȳ(t − k∆, θk∆ w., xŷ,ẑ,x
k∆ , z),
where θt ws = ws+t − wt . Then ȳ2 ∈ M(n, k) and so
(3.10)
J((k + 1)∆, x, ŷy, ẑz, g) ≥ J((k + 1)∆, x, ŷ ȳ2 , ẑz, g).
sup
y∈M(n,k)
As before, and by (3.9),
J((k + 1)∆, x, ŷy, ẑ z̄2 , g)
k∆∧τ ŷ,ẑ,x
ŷ,ẑ,x
−ϕr
−ϕk∆∧τ ŷ,ẑ,x
=E
f (ŷr , ẑr , xr )e
dr + J(∆, xk∆∧τ ŷ,ẑ,x , ȳ2 , z, g)e
0
k∆∧τ ŷ,ẑ,x
≥E
0
e−ϕk∆∧τ ŷ,ẑ,x − ε
f (ŷr , ẑr , xr )e−ϕr dr + S(∆, z)g xŷ,ẑ,x
k∆∧τ ŷ,ẑ,x
k∆∧τ ŷ,ẑ,x
≥E
−ϕr
f (ŷr , ẑr , xr )e
0
ŷ,ẑ,x
−ϕk∆∧τ ŷ,ẑ,x
dr + Vn g xk∆∧τ ŷ,ẑ,x e
−ε
= J(k∆, x, ŷ, ẑ, Vn g) − ε.
Hence by (3.9), (3.10),
sup
J((k + 1)∆, x, ŷy, ẑz, g) ≥ J(k∆, x, ŷ, ẑ, Vn g) − ε.
y∈M(n,k)
Since z ∈ Nc (n, k) was arbitrary, inf z∈Nc (n,k) supy∈M(n,k) J((k+1)∆, x, ŷy, ẑz, g) ≥
J(k∆, x, ŷ, ẑ, Vn g) − ε. Letting ε → 0, the lemma is proved.
Remark. Starting at the innermost
sup
inf
z k ∈Nc (n,k) y k ∈M(n,k)
and working backwards,
using Lemma 3.5 at each step gives
(3.11)
inf
sup
z 0 ∈Nc (n,0) y 0 ∈M(n,0)
···
inf
sup
z k ∈Nc (n,k) y k ∈M(n,k)
J((k + 1)∆, x, y 0 · · · y k , z 0 · · · z k , g)
= J(0, x, y 0 , z 0 , Vnk+1 g) = Vnk+1 g(x).
Theorem 3.6. V + (t) is a semigroup on C(D̄).
Proof. From the definition Vn g(x) = inf z∈Z S( 21n , z)g(x), Vn is a monotone operator, i.e. g ≤ h ⇒ Vn g(x) ≤ Vn h(x). From the
of S(t, z),
semigroupproperties
1
, z)g(x) = S( 21n + 21n , z)g(x) = S( 21n , z) S( 21n , z)g(x) ≥ Vn S( 21n , z)g(x)
S( 2n−1
≥ Vn (Vn g(x)), and hence Vn−1 g(x) ≥ Vn2 g(x). We now define, for t = 2kn ,
Vn (t)g(x) := Vnk g(x).
For any binary t, Vn (t)g is decreasing in n. This follows by induction on k. Indeed,
for k = 1, i.e. t = 21n ,
2
Vn (t)g(x) = Vn g(x) ≥ Vn+1
g(x) := Vn+1 (t)g(x),
Assume the statement is true for k = m. For t =
for t =
2
2n+1
=
m+1
2n ,
2m
Vn (t)g(x) = Vnm+1 g(x) = Vnm (Vn g(x)) ≥ Vn+1
(Vn g(x))
2m+2
2m
2
≥ Vn+1 Vn+1 g(x) = Vn+1 g(x) = Vn+1 (t)g(x).
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
1
2n .
4068
JAY KOVATS
For t = 2kj , j ≤ n, Vjk g(x) =: Vj (t)g(x) ≥ Vn (t)g(x) =: Vnk g(x). Since J is uniformly
bounded for g ∈ Cb (D), by (3.11) we have that {Vn (t)g : n ≥ j} is a totally bounded
subset of Cb (D), for t = 2kj . Hence for any binary t, limn→∞ Vn (t)g(x) exists. Set
p(t, x, g) = lim Vn (t)g(x).
n→∞
+
By the uniform continuity of V (t)g on [0, T ], p(t, x, g) is continuous in t. By
+
(3.11), the fact that Vn (t)g(x) = Vnk+1 g(x), for t = k+1
2n , and the definition of V ,
+
+
we have p(t, x, g) = V (t)g(x) for binary t. By the continuity of V (t)g(x) in t,
V + (t)g(x) exists for any t. Observe that for binary t, Vn (t)g is a semigroup for
fixed n, since for t = 2kn , s = 2ln ,
Vn (t + s)g(x) = Vnk+l g(x) = Vnk Vnl g(x) = Vn (s) (Vn (t)g(x)) .
Hence for binary t, s,
V + (t + s)g(x) = lim Vn (t + s)g(x) = lim Vn (t) (Vn (s)g(x)) .
n→∞
n→∞
Finally, from
|Vn (s) (Vn (t)g(x)) − V + (s) V + (t)g(x) |
≤|Vn (s)(Vn (t)g(x))−Vn (s)(V + (t)g(x))|+|Vn (s)(V + (t)g(x))−V (s)(V (t)g(x))|
≤ Vn (t)g − V + (t)g + Vn (s) V + (t)g − V (s) (V (t)g) ,
(*)
we see that limn→∞ Vn (s) (Vn (t)g(x)) = V + (s) (V + (t)g(x)), and hence by the
above, for binary t, s,
V + (t + s)g(x) = V + (t) V + (s)g(x) .
For arbitrary t ≥ 0, set κn (t) = 2−n [2n t]. Then, by the continuity of V + (t)g(x) in
t,
V + (t + s)g(x) = lim V + (κn (t) + κn (s))g(x) = lim V + (κn (t)) V + (κn (s))g(x)
n→∞
n→∞
and again by (∗), we see that, for arbitrary t, s ∈ [0, T ],
V + (t + s)g(x) = V + (t) V + (s)g(x) .
Hence by (3.6) and the remark, the function
(3.12)
V + (t)g(x) = lim
inf
sup J(t, x, y, β(z), g)
n→∞ β∈∆πn y∈M
is a semigroup on C(D̄). As in Proposition 5.4 in [FN], we have
inf sup J(t, x, y, β(y), g) = sup inf J(t, x, α(z), z, g)
α∈Γ z∈Nπ
β∈∆π y∈M
and hence we can rewrite (3.12) as
(3.13) V + (t)g(x) = lim sup inf J(t, x, α(z), z, g) = sup inf J(t, x, α(z), z, g).
n→∞ α∈Γ z∈Nπn
α∈Γ z∈N
A similar argument shows that the semigroup property is also satisfied by the lower value function
$$V^-(t)g(x) = \lim_{n\to\infty}\,\inf_{\beta\in\Delta}\,\sup_{y\in M_{\pi_n}} J(t, x, y, \beta(y), g) = \inf_{\beta\in\Delta}\,\sup_{y\in M} J(t, x, y, \beta(y), g).$$
Hence by (1.7) in the remark following Theorem 1.4, we see that the upper and lower value functions
$$v^+(x) := \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_x^{\alpha(z),z}\!\left[\int_0^{\tau} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + g(x_\tau)\,e^{-\varphi_\tau}\right], \tag{3.14}$$
$$v^-(x) := \inf_{\beta\in\Delta}\,\sup_{y\in M} E_x^{y,\beta(y)}\!\left[\int_0^{\tau} f(y_r, \beta(y)_r, x_r)\,e^{-\varphi_r}\,dr + g(x_\tau)\,e^{-\varphi_\tau}\right] \tag{3.15}$$
satisfy the dynamic programming principle. Here, $x_t^{\alpha(z),z,x}$ denotes the solution of $x_t = x + \int_0^t \sigma(y_r, z_r, x_r)\,dw_r + \int_0^t b(y_r, z_r, x_r)\,dr$ with $y = \alpha(z)$, and $\tau^{\alpha(z),z,x} = \inf\{t \ge 0 : x_t^{\alpha(z),z,x} \notin D\}$. Since $z \in N$, $z_t$ is $\mathcal F_t$-adapted. Since $\alpha \in \Gamma$, $\alpha(z) \in M$; hence $\alpha(z)_t$ is $\mathcal F_t$-adapted. Thus, $x_t^{\alpha(z),z,x}$ is $\mathcal F_t$-adapted and continuous in $t$. Since $D$ is open, $\tau^{\alpha(z),z,x}$ is a Markov time with respect to $\mathcal F_t$.
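For concreteness, the dynamic programming principle for $v^+$ can be written out in the form in which it is used in the proof of Theorem 4.3 below; the following display is a sketch of that form (with the exit time truncated at an arbitrary $h > 0$), not a restatement of (1.7) itself:
$$v^+(x) = \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_x^{\alpha(z),z}\!\left[\int_0^{\tau\wedge h} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + v^+(x_{\tau\wedge h})\,e^{-\varphi_{\tau\wedge h}}\right], \qquad h > 0.$$
In words: the game may be stopped at time $\tau\wedge h$ and restarted from $x_{\tau\wedge h}$, with the value function itself serving as the terminal reward.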
4. Infinitesimal generator of the nonlinear semigroup
and Isaacs equation
Theorem 4.1. For $g \in C_b^2(D)$ and $x \in D$, define
$$J(t)g(x) = \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_x^{\alpha,z}\!\left[\int_0^{t\wedge\tau} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + g(x_{t\wedge\tau})\,e^{-\varphi_{t\wedge\tau}}\right].$$
Let $L(y,z,x)u := \operatorname{tr}[a(y,z,x)u_{xx}] + b(y,z,x)\cdot u_x - c(y,z,x)u$. Then
$$\lim_{t\to 0^+}\frac{J(t)g(x) - g(x)}{t} = \min_{z\in Z}\max_{y\in Y}\{L(y,z,x)g(x) + f(y,z,x)\}. \tag{1}$$
For convenience, let us denote $p(y,z,x) := L(y,z,x)g(x) + f(y,z,x)$ and $p(x) := \min_{z\in Z}\max_{y\in Y} p(y,z,x)$. First we need a lemma.
Lemma 4.2. Given $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon, C, g)$ such that $\forall\, t < \delta$, $\forall\, y \in M$ and $z \in N$,
$$\left|E_x^{y,z}\!\left[\int_0^{t\wedge\tau} p(y_r, z_r, x_r)\,e^{-\varphi_r}\,dr - \int_0^{t\wedge\tau} p(y_r, z_r, x)\,dr\right]\right| \le \varepsilon t. \tag{2}$$
Proof. Since $g \in C_b^2(D)$, the coefficients and $f$ are uniformly bounded, and $1 - e^{-\varphi_r^{y,z,x}} \le \varphi_r^{y,z,x} \le Cr$, the left-hand side of (2) is majorized by
$$\begin{aligned}
E_x^{y,z}\int_0^{t\wedge\tau}\bigl|p(y_r, z_r, x_r)\,e^{-\varphi_r} - p(y_r, z_r, x)\bigr|\,dr
&\le E_x^{y,z}\int_0^{t\wedge\tau}\Bigl[\,|p(y_r, z_r, x_r)|\bigl(1 - e^{-\varphi_r}\bigr) + |p(y_r, z_r, x_r) - p(y_r, z_r, x)|\,\Bigr]\,dr \\
&\le E_x^{y,z}\int_0^{t\wedge\tau}\Bigl[\,C^2\bigl(\|g\|_2 + 1\bigr)\,r + |p(y_r, z_r, x_r) - p(y_r, z_r, x)|\,\Bigr]\,dr.
\end{aligned}$$
Obviously, $C^2(\|g\|_2 + 1)\,\tfrac{t^2}{2} < Ct\varepsilon$, provided $t < \delta_1(\varepsilon, C, g)$. To estimate the second integrand, we use the uniform (in $y, z$) Lipschitz continuity of the coefficients and $f$, as well as the uniform boundedness of the coefficients:
$$\begin{aligned}
|p(y_r, z_r, x_r) - p(y_r, z_r, x)| &\le \bigl|\operatorname{tr}[a(y_r, z_r, x_r)g_{xx}(x_r)] - \operatorname{tr}[a(y_r, z_r, x)g_{xx}(x)]\bigr| \\
&\quad + \bigl|b(y_r, z_r, x_r)\cdot g_x(x_r) - b(y_r, z_r, x)\cdot g_x(x)\bigr| \\
&\quad + \bigl|c(y_r, z_r, x_r)g(x_r) - c(y_r, z_r, x)g(x)\bigr| \\
&\quad + \bigl|f(y_r, z_r, x_r) - f(y_r, z_r, x)\bigr|.
\end{aligned}$$
The first three summands on the right are estimated in exactly the same way. For example, denoting $\gamma = (y, z)$,
$$\bigl|\operatorname{tr}[a(\gamma_r, x_r)g_{xx}(x_r)] - \operatorname{tr}[a(\gamma_r, x)g_{xx}(x)]\bigr| \le \|a(\gamma_r, x_r)\|\,\|g_{xx}(x_r) - g_{xx}(x)\| + \|g_{xx}(x)\|\,\|a(\gamma_r, x_r) - a(\gamma_r, x)\|.$$
Since $g \in C_b^2(D)$, $\exists\,\delta = \delta(\varepsilon, g)$ such that $\forall\, x, x' \in D$, $|x - x'| < \delta$ implies that $\|g_{xx}(x) - g_{xx}(x')\| < \varepsilon$. Thus
$$E_x^{\gamma}\int_0^{t\wedge\tau}\bigl|\operatorname{tr}[a(\gamma_r, x_r)g_{xx}(x_r)] - \operatorname{tr}[a(\gamma_r, x)g_{xx}(x)]\bigr|\,dr \le C\,E_x^{\gamma}\int_0^{t\wedge\tau}\|g_{xx}(x_r) - g_{xx}(x)\|\,dr + \|g_{xx}\|_\infty\,E_x^{\gamma}\int_0^{t}|x_r - x|\,dr,$$
and hence
$$E_x^{\gamma}\int_0^{t\wedge\tau}\|g_{xx}(x_r) - g_{xx}(x)\|\,dr \le 2\|g_{xx}\|_\infty\,E_x^{\gamma}\int_0^{t\wedge\tau} I_{\{|x_r - x|\ge\delta\}}\,dr + \varepsilon t \le 2\|g_{xx}\|_\infty\, t\,\sup_{0\le r\le t} P_x^{\gamma}\{|x_r - x| \ge \delta\} + \varepsilon t.$$
But for $0 \le r \le t$,
$$P_x^{\gamma}\{|x_r - x| \ge \delta\} \le \frac{E_x^{\gamma}|x_r - x|}{\delta} \le \frac{2C\sqrt{r}\,e^{\sqrt{r}}}{\delta} \le \frac{2C\sqrt{t}\,e^{\sqrt{t}}}{\delta}.$$
Thus
$$E_x^{\gamma}\int_0^{t\wedge\tau}\bigl|\operatorname{tr}[a(\gamma_r, x_r)g_{xx}(x_r)] - \operatorname{tr}[a(\gamma_r, x)g_{xx}(x)]\bigr|\,dr \le C\,\|g_{xx}\|_\infty\, t\left(\frac{3C\sqrt{t}\,e^{\sqrt{t}}}{\delta} + 1\right) + \varepsilon t < 2Ct\varepsilon,$$
provided $t < \delta_2(\varepsilon, C, g)$. Moreover, this same $\delta_2$ works for the next two summands.
Finally,
$$E_x^{\gamma}\int_0^{t\wedge\tau}|f(\gamma_r, x_r) - f(\gamma_r, x)|\,dr \le C\,E_x^{\gamma}\int_0^{t\wedge\tau}|x_r - x|\,dr \le C\int_0^{t} E_x^{\gamma}|x_r - x|\,dr \le 2C^2\, t\sqrt{t}\,e^{\sqrt{t}} < Ct\varepsilon,$$
provided $t < \delta_3(\varepsilon, C)$. Hence the left-hand side of (2) is $\le 8C\varepsilon t$ for all $t < \delta(\varepsilon, C, g) := \delta_1 \wedge \delta_2 \wedge \delta_3$, which proves (2).
Proof of Theorem 4.1. Since $g \in C^2(D)$, Itô's formula yields
$$E_x^{\alpha,z}\bigl[g(x_{t\wedge\tau})\,e^{-\varphi_{t\wedge\tau}}\bigr] = g(x) + E_x^{\alpha,z}\int_0^{t\wedge\tau} L(\alpha(z)_r, z_r, x_r)g(x_r)\,e^{-\varphi_r}\,dr.$$
Thus
$$J(t)g(x) - g(x) = \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_x^{\alpha,z}\int_0^{t\wedge\tau}\bigl(L(\alpha(z)_r, z_r, x_r)g(x_r) + f(\alpha(z)_r, z_r, x_r)\bigr)\,e^{-\varphi_r}\,dr. \tag{3}$$
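As a brief sketch of the computation behind the last two displays (assuming, as is consistent with the bound $1 - e^{-\varphi_r} \le \varphi_r \le Cr$ used in Lemma 4.2 and with $a = \frac12\sigma\sigma^*$, that $\varphi_t$ is the accumulated discount $\int_0^t c(\alpha(z)_r, z_r, x_r)\,dr$), Itô's formula applied to $g(x_t)\,e^{-\varphi_t}$ gives
$$d\bigl(g(x_t)\,e^{-\varphi_t}\bigr) = e^{-\varphi_t}\bigl[L(\alpha(z)_t, z_t, x_t)g(x_t)\,dt + g_x(x_t)\cdot\sigma(\alpha(z)_t, z_t, x_t)\,dw_t\bigr];$$
taking expectations up to the stopping time $t\wedge\tau$ kills the stochastic-integral term and yields the first identity, and adding $E_x^{\alpha,z}\int_0^{t\wedge\tau} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr - g(x)$ to both sides and optimizing over $\alpha$ and $z$ gives (3).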
Recalling that $p(y, z, x) := L(y, z, x)g(x) + f(y, z, x)$ and $p(x) := \min_{z\in Z}\max_{y\in Y} p(y, z, x)$, by (3), $\forall\,\alpha \in \Gamma$ and $z \in N$,
$$\begin{aligned}
&\left|\frac{J(t)g(x) - g(x)}{t} - \min_{z\in Z}\max_{y\in Y}\{L(y,z,x)g(x) + f(y,z,x)\}\right| \\
&\qquad = \left|\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr - p(x)\right| \\
&\qquad \le \left|\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr - \sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr\right| \\
&\qquad\quad + \left|\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x)\right| \\
&\qquad \le \frac1t\,\sup_{y\in M}\,\sup_{z\in N}\left|E_x^{y,z}\!\left[\int_0^{t\wedge\tau} p(y_r, z_r, x_r)\,e^{-\varphi_r}\,dr - \int_0^{t\wedge\tau} p(y_r, z_r, x)\,dr\right]\right| \\
&\qquad\quad + \left|\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x)\right|.
\end{aligned} \tag{4}$$
By Lemma 4.2, we need only estimate the last summand in (4). Fix any constant $\bar z \in Z$ and consider the constant control $z_t \equiv \bar z$. For any $\alpha \in \Gamma$,
$$\begin{aligned}
\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr
&\le \frac1t\, E_x^{\alpha,\bar z}\int_0^{t\wedge\tau} p(\alpha(\bar z)_r, \bar z, x)\,dr
\le \max_{y\in Y} p(y, \bar z, x)\, E_x^{\alpha,\bar z}\!\left[\frac{t\wedge\tau^{\alpha(\bar z),\bar z,x}}{t}\right] \\
&= -\max_{y\in Y} p(y, \bar z, x)\left(1 - E_x^{\alpha,\bar z}\!\left[\frac{t\wedge\tau}{t}\right]\right) + \max_{y\in Y} p(y, \bar z, x) \\
&\le \tilde C\left(1 - E_x^{\alpha,\bar z}\!\left[\frac{t\wedge\tau}{t}\right]\right) + \max_{y\in Y} p(y, \bar z, x).
\end{aligned}$$
Since $\bar z \in Z$ was arbitrary, taking $\inf_{\bar z\in Z}$ and using the inequality $\inf(h_1 + h_2) \le \sup h_1 + \inf h_2$ yields
$$\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x) \le \tilde C\,\sup_{\bar z\in Z}\left(1 - E_x^{\alpha,\bar z}\!\left[\frac{t\wedge\tau}{t}\right]\right) \le \tilde C\left(1 - \inf_{z\in N} E_x^{\alpha,z}\!\left[\frac{t\wedge\tau}{t}\right]\right), \tag{5}$$
where $\tilde C = C(\|g\|_{2,D} + 1)$. Recall that if $x \in D$ with $\operatorname{dist}(x, \partial D) \ge \kappa$ and for any $(y, z) \in M \times N$, for $t > 0$ sufficiently small (depending only on $\kappa$, $|b|_\infty$), there exists a constant $N$, depending only on $\|\sigma\|_\infty$, such that
$$P\{\tau^{y,z,x} \le t\} \le \frac{N\cdot t^2}{\kappa^4}. \tag{6}$$
Fix $x \in D$, $z \in N$, $\alpha \in \Gamma$ and choose $t > 0$ so small that (6) holds,
$$E_x^{\alpha,z}\!\left[\frac{t\wedge\tau}{t}\right] \ge P_x^{\alpha,z}\{\tau > t\} \ge 1 - \frac{N\cdot t^2}{\kappa^4},$$
and hence
$$1 - \inf_{z\in N} E_x^{\alpha,z}\!\left[\frac{t\wedge\tau}{t}\right] \le \frac{N\cdot t^2}{\kappa^4}. \tag{7}$$
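For orientation, here is a hedged sketch of why an estimate of the form (6) holds under the standing boundedness assumptions (it is not the argument referred to above, only a standard way to obtain the stated dependence on the constants): if $\operatorname{dist}(x, \partial D) \ge \kappa$ and $t \le \kappa/(2|b|_\infty)$, then on $\{\tau^{y,z,x} \le t\}$ the drift contributes at most $|b|_\infty t \le \kappa/2$ to the displacement, so the stochastic integral must reach $\kappa/2$; Chebyshev's inequality with fourth moments and the Burkholder-Davis-Gundy inequality then give
$$P\{\tau^{y,z,x} \le t\} \le P\Bigl\{\sup_{s\le t}\Bigl|\int_0^s \sigma(y_r, z_r, x_r)\,dw_r\Bigr| \ge \tfrac{\kappa}{2}\Bigr\} \le \frac{16}{\kappa^4}\,E\sup_{s\le t}\Bigl|\int_0^s \sigma\,dw_r\Bigr|^4 \le \frac{N\cdot t^2}{\kappa^4},$$
with $N$ depending only on $\|\sigma\|_\infty$, which matches the dependence stated before (6).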
In particular, if $x \in K$, where $K \subset D$ is compact with $\operatorname{dist}(K, \partial D) > 0$, we can take $\kappa = \operatorname{dist}(K, \partial D)$ in (7). Since $\alpha \in \Gamma$ was arbitrary, (5) and (7) yield
$$\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x) \le \frac{\tilde C N\cdot t^2}{\kappa^4} < \varepsilon \tag{8}$$
for t < δ(ε, κ, g, C). On the other hand, for any strategy α ∈ Γ, we obviously have
$$\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr \ge \inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr. \tag{9}$$
Following [ES], [FS], fix $x \in D$ and set $p(x) := \min_{z\in Z}\max_{y\in Y} p(y, z, x)$. Then $\forall\, z \in Z$, $\exists\, y = y(z) \in Y$ for which $p(x) \le p(y(z), z, x)$. Since $p(y, \cdot, x)$ is continuous and $Z$ is compact, $p(y, \cdot, x)$ is uniformly continuous in $Z$. Hence $\forall\,\varepsilon > 0$, $\exists\,\delta(\varepsilon)$ so that $\forall\,\zeta, z \in Z$, $|\zeta - z| < \delta$ implies $|p(y, \zeta, x) - p(y, z, x)| < \varepsilon$. In particular, for $z \in Z$ and the $y = y(z) \in Y$ described above, if $\zeta \in Z$ with $|\zeta - z| < \delta$, then $p(x) - \varepsilon < p(y(z), \zeta, x)$. But $Z$ is compact, so we can find $r_1, \ldots, r_n > 0$, $z_1, \ldots, z_n \in Z$ with $Z \subset \bigcup_{i=1}^n B_{r_i}(z_i)$, $r_i < \delta$. Hence $\forall\,\zeta \in Z$, if $|\zeta - z_i| < r_i$, then $p(x) - \varepsilon < p(y_i, \zeta, x)$, where $y_i = y_i(z_i)$ is as above. Now define a map $\Phi : Z \to Y$ by $\Phi(z) = y_k$ if $z \in B_{r_k}(z_k) \setminus \bigcup_{i<k} B_{r_i}(z_i)$ for $k = 1, \ldots, n$. Then for all $z \in Z$, $p(x) - \varepsilon < p(\Phi(z), z, x)$. Now define $\alpha_\varepsilon : N \to M$ by $\alpha_\varepsilon(z)_r = \Phi(z_r)$ for $z \in N$. Then $\alpha_\varepsilon \in \Gamma$. That is, for $z \in N$, $\alpha_\varepsilon(z) \in M$, since $\forall\, B \in \mathcal B(Y)$, $(\alpha_\varepsilon(z)_r)^{-1}(B) = z_r^{-1}(\Phi^{-1}(B)) \in \mathcal F_r$, since $\Phi$ is Borel measurable and $z_t$ is $\mathcal F_t$-adapted. Then for any $z \in N$ and any $t > 0$, for $0 \le r \le t$, we have
$$p(x) - \varepsilon < p(\alpha_\varepsilon(z)_r, z_r, x),$$
which immediately gives
$$\bigl(p(x) - \varepsilon\bigr)\, E_x^{\alpha_\varepsilon,z}\!\left[\frac{t\wedge\tau}{t}\right] \le \frac1t\, E_x^{\alpha_\varepsilon,z}\int_0^{t\wedge\tau} p(\alpha_\varepsilon(z)_r, z_r, x)\,dr.$$
This yields
$$-\tilde C\left(1 - E_x^{\alpha_\varepsilon,z}\!\left[\frac{t\wedge\tau}{t}\right]\right) \le \bigl(\varepsilon - p(x)\bigr)\left(1 - E_x^{\alpha_\varepsilon,z}\!\left[\frac{t\wedge\tau}{t}\right]\right) \le \frac1t\, E_x^{\alpha_\varepsilon,z}\int_0^{t\wedge\tau} p(\alpha_\varepsilon(z)_r, z_r, x)\,dr - p(x) + \varepsilon.$$
Taking $\inf_{z\in N}$ and then using (9) yields
$$-\tilde C\left(1 - \inf_{z\in N} E_x^{\alpha_\varepsilon,z}\!\left[\frac{t\wedge\tau}{t}\right]\right) \le \sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x) + \varepsilon.$$
By (7), we get (again taking $\kappa = \operatorname{dist}(K, \partial D)$),
$$-\frac{2\tilde C N\cdot t^2}{\kappa^4} \le \sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x) + \varepsilon.$$
Along with (8), this immediately implies that for $t < \delta(\varepsilon, \kappa, g, C)$ (the same one as above),
$$\left|\sup_{\alpha\in\Gamma}\,\inf_{z\in N}\,\frac1t\, E_x^{\alpha,z}\int_0^{t\wedge\tau} p(\alpha(z)_r, z_r, x)\,dr - p(x)\right| < 2\varepsilon.$$
This, (2) and (4) imply that for any $x \in D$ with $\operatorname{dist}(x, \partial D) > 0$ and any $\varepsilon > 0$, $\exists\,\delta = \delta(\varepsilon, \kappa, g, C)$ such that for $t < \delta$,
$$\left|\frac{J(t)g(x) - g(x)}{t} - \min_{z\in Z}\max_{y\in Y}\{L(y, z, x)g(x) + f(y, z, x)\}\right| < 3\varepsilon.$$
Theorem 4.3. The upper value function $v^+$ is a viscosity solution of the upper Isaacs equation
$$F^+[u](x) := \min_{z\in Z}\max_{y\in Y}\{L(y, z, x)u(x) + f(y, z, x)\} = 0 \quad\text{in } D,$$
and the lower value function $v^-$ is a viscosity solution of the lower Isaacs equation
$$F^-[u](x) := \max_{y\in Y}\min_{z\in Z}\{L(y, z, x)u(x) + f(y, z, x)\} = 0 \quad\text{in } D,$$
where $L(y, z, x)u := \operatorname{tr}[a(y, z, x)u_{xx}] + b(y, z, x)\cdot u_x - c(y, z, x)u$.
Proof. Since $g \in C(\bar D)$, both $v^+, v^- \in C(\bar D)$, by Theorem 2.1. We prove the theorem only for $v^+$, since the proof for $v^-$ is similar. So set
$$v(x) = v^+(x) := \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_x^{\alpha,z}\!\left[\int_0^{\tau} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + g(x_\tau)\,e^{-\varphi_\tau}\right].$$
First, we show that v is a subsolution. Fix x0 ∈ D with dist(x0 , ∂D) ≥ δ. Since
v ∈ C(D), let ψ ∈ C 2 (D) such that v − ψ has a local maximum at x0 . We want
to show F + [ψ](x0 ) ≥ 0. Without loss of generality, we may assume this local
maximum is a global maximum and v(x0 ) = ψ(x0 ). That is, v(x) ≤ ψ(x) in D and
v(x0 ) = ψ(x0 ). By the dynamic programming principle, for any h > 0 we have
$$\psi(x_0) = v(x_0) = \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_{x_0}^{\alpha,z}\!\left[\int_0^{\tau\wedge h} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + v(x_{\tau\wedge h})\,e^{-\varphi_{\tau\wedge h}}\right]$$
$$\le \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_{x_0}^{\alpha,z}\!\left[\int_0^{\tau\wedge h} f(\alpha(z)_r, z_r, x_r)\,e^{-\varphi_r}\,dr + \psi(x_{\tau\wedge h})\,e^{-\varphi_{\tau\wedge h}}\right], \tag{1}$$
where for any $z \in N$ and $\alpha \in \Gamma$, $x_t^{\alpha(z),z,x_0}$ is a solution of
$$x_t = x_0 + \int_0^t \sigma(y_r, z_r, x_r)\,dw_r + \int_0^t b(y_r, z_r, x_r)\,dr$$
with $y = \alpha(z)$, and $\tau^{\alpha(z),z,x_0} = \inf\{t \ge 0 : x_t^{\alpha(z),z,x_0} \notin D\}$. By Itô's formula,
$$E_{x_0}^{\alpha,z}\bigl[\psi(x_{\tau\wedge h})\,e^{-\varphi_{\tau\wedge h}}\bigr] = \psi(x_0) + E_{x_0}^{\alpha,z}\int_0^{\tau\wedge h} L(\alpha(z)_r, z_r, x_r)\psi(x_r)\,e^{-\varphi_r}\,dr,$$
and hence by (1),
$$0 \le \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_{x_0}^{\alpha,z}\int_0^{\tau\wedge h}\bigl[f(\alpha(z)_r, z_r, x_r) + L(\alpha(z)_r, z_r, x_r)\psi(x_r)\bigr]\,e^{-\varphi_r}\,dr. \tag{2}$$
Denoting the above expectation by $q(h, \alpha(z), z, x_0)$, (2) gives that
$$0 \le \sup_{\alpha\in\Gamma}\,\inf_{z\in N} q(h, \alpha(z), z, x_0).$$
But for any α ∈ Γ and z ∈ N , α(z) ∈ M. Fix any constant z̄ ∈ Z and recall
Z ⊂ N . For any α ∈ Γ, we have
$$\inf_{z\in N} q(h, \alpha(z), z, x_0) \le q(h, \alpha(\bar z), \bar z, x_0) \le \sup_{y\in M} q(h, y, \bar z, x_0),$$
and since α ∈ Γ was arbitrary, we have 0 ≤ supα∈Γ inf z∈N q(h, α(z), z, x0 ) ≤
supy∈M q(h, y, z̄, x0 ). For convenience, set p(y, z, x) := L(y, z, x)ψ(x) + f (y, z, x).
Then by (2), since h > 0,
$$0 \le \sup_{y\in M} E_{x_0}^{y,\bar z}\,\frac1h\int_0^{\tau\wedge h} p(y_r, \bar z, x_r)\,e^{-\varphi_r}\,dr \le \sup_{y\in M} E_{x_0}^{y,\bar z}\,\frac1h\int_0^{\tau\wedge h}\Bigl(\max_{y\in Y} p(y, \bar z, x_r)\Bigr)\, e^{-\varphi_r}\,dr, \tag{3}$$
where $x_t^{y,\bar z,x_0}$ is a solution of $x_t = x_0 + \int_0^t \sigma(y_r, \bar z, x_r)\,dw_r + \int_0^t b(y_r, \bar z, x_r)\,dr$ and $\tau^{y,\bar z,x_0} = \inf\{t \ge 0 : x_t^{y,\bar z,x_0} \notin D\}$. By the same techniques as in Theorem 4.1, letting $h \to 0^+$ in (3) yields $0 \le \max_{y\in Y} p(y, \bar z, x_0)$, and since $\bar z \in Z$ was arbitrary,
$$0 \le \min_{z\in Z}\max_{y\in Y} p(y, z, x_0) = F^+[\psi](x_0),$$
and so v is a subsolution. The proof that v is a supersolution of F + [v] = 0 proceeds the same way and uses an argument similar to the argument of Theorem 4.1
following (4.9).
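For completeness, here is a hedged sketch of how that supersolution step begins (the limiting argument is exactly the one of Theorem 4.1 and is not repeated): if $v - \psi$ has a global minimum at $x_0$ with $v(x_0) = \psi(x_0)$, then $v \ge \psi$ in $D$, and the dynamic programming principle together with Itô's formula gives, for every $h > 0$,
$$0 \ge \sup_{\alpha\in\Gamma}\,\inf_{z\in N} E_{x_0}^{\alpha,z}\int_0^{\tau\wedge h}\bigl[f(\alpha(z)_r, z_r, x_r) + L(\alpha(z)_r, z_r, x_r)\psi(x_r)\bigr]\,e^{-\varphi_r}\,dr.$$
Inserting the $\varepsilon$-optimal strategy $\alpha_\varepsilon$ constructed in the proof of Theorem 4.1, now with $p(y, z, x) := L(y, z, x)\psi(x) + f(y, z, x)$, and letting $h \to 0^+$ yields $\min_{z\in Z}\max_{y\in Y} p(y, z, x_0) \le \varepsilon$ for every $\varepsilon > 0$, i.e. $F^+[\psi](x_0) \le 0$.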
In addition to the assumptions (1.1), our results have been derived under the assumption of the existence of a global barrier. It is well known that if $\partial D$ satisfies a uniform exterior sphere condition and $a = \frac12\sigma\sigma^*$ is nondegenerate, such a barrier exists. (See the discussion following Assumption 1.0.) For Isaacs equations with uniformly bounded and uniformly (in $y, z$) Lipschitz coefficients, continuous viscosity solutions of the Dirichlet problem with continuous boundary values are unique (see III.1, V.1 of [IL]). Existence and uniqueness of continuous viscosity solutions also holds (see Theorem II.2 in [IL]) in the degenerate case when $\partial D \in C^2$, $\inf_{y,z,x} c(y, z, x) \ge c_0 > 0$, and $\inf_{y,z}\,\langle a(y, z, x)n(x), n(x)\rangle > 0$ on $\partial D$, where $n(x)$ is the outward unit normal to $D$ at $x \in \partial D$. Since we have shown that $v^+, v^- \in C(\bar D)$ for arbitrary $g \in C(\bar D)$, the following result is an immediate consequence of Proposition II.1 in [IL].
Theorem 4.4. In addition to (1.1), suppose that either (i) $\partial D$ satisfies a uniform exterior sphere condition and $a = \frac12\sigma\sigma^*$ is nondegenerate, or (ii) $\partial D \in C^2$, the equation is degenerate elliptic, $\inf_{y,z,x} c(y, z, x) \ge c_0 > 0$, $\inf_{y,z}\,\langle a(y, z, x)n(x), n(x)\rangle > 0$ on $\partial D$, and a global barrier exists. Then the upper value function $v^+$ is the unique viscosity solution of the Dirichlet problem for the upper Isaacs equation
$$F^+[v](x) := \min_{z\in Z}\max_{y\in Y}\{L(y, z, x)v(x) + f(y, z, x)\} = 0 \quad\text{in } D, \qquad v = g \quad\text{on } \partial D,$$
and the lower value function $v^-$ is the unique viscosity solution of the Dirichlet problem for the lower Isaacs equation
$$F^-[v](x) := \max_{y\in Y}\min_{z\in Z}\{L(y, z, x)v(x) + f(y, z, x)\} = 0 \quad\text{in } D, \qquad v = g \quad\text{on } \partial D,$$
for arbitrary $g \in C(\bar D)$, where $L(y, z, x)u := \operatorname{tr}[a(y, z, x)u_{xx}] + b(y, z, x)\cdot u_x - c(y, z, x)u$.
Corollary 4.5. Under either of the assumptions of Theorem 4.4, if the Isaacs
min-max condition is satisfied, that is, if
$$F^+(m, p, r, x) = F^-(m, p, r, x) \qquad \forall\,(m, p, r, x) \in S^d\times\mathbb R^d\times\mathbb R\times D,$$
then v + = v − in D and the differential game has value.
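As an illustration of the min-max condition (a standard sufficient condition stated here only as a hedged example; it is not asserted in the paper): if the data separate in the two controls, say $a = a_1(y, x) + a_2(z, x)$, $b = b_1(y, x) + b_2(z, x)$, $c = c_1(y, x) + c_2(z, x)$ and $f = f_1(y, x) + f_2(z, x)$, then, writing $L_i$ for the operator built from $a_i, b_i, c_i$,
$$\min_{z\in Z}\max_{y\in Y}\{L(y, z, x)u + f(y, z, x)\} = \max_{y\in Y}\{L_1(y, x)u + f_1(y, x)\} + \min_{z\in Z}\{L_2(z, x)u + f_2(z, x)\} = \max_{y\in Y}\min_{z\in Z}\{L(y, z, x)u + f(y, z, x)\},$$
so $F^+ = F^-$ and, by Corollary 4.5, the game has value.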
References
[BL] A. Bensoussan and J.L. Lions, Applications des Inéquations Variationnelles en Contrôle Stochastique, Dunod, Paris, 1978. MR0513618 (58:23923)
[CC1] L. Caffarelli and X. Cabré, Fully Nonlinear Elliptic Equations, Amer. Math. Soc., Providence, R.I., 1995. MR1351007 (96h:35046)
[CC2] L. Caffarelli and X. Cabré, Interior C^{2,α} Regularity Theory for a Class of Nonconvex Fully Nonlinear Elliptic Equations, J. Math. Pures Appl. 82 (2003), 573-612. MR1995493 (2004f:35049)
[D] E.B. Dynkin, Markov Processes, Vol. 1, Berlin, 1965; English transl., Grundl. Math. Wiss. Vol. 121, Springer-Verlag. MR0193671 (33:1887)
[ES] L.C. Evans and P.E. Souganidis, Differential Games and Representation Formulas for Solutions of Hamilton-Jacobi-Isaacs Equations, Indiana Univ. Math. J. 33 (1984), 773-797. MR756158 (86d:90185)
[FN] W.H. Fleming and M. Nisio, Differential Games for Stochastic Partial Differential Equations, Nagoya Math. J. 131 (1993), 75-107. MR1238634 (94h:93074)
[FS] W.H. Fleming and P.E. Souganidis, On the Existence of Value Functions of Two-Player, Zero-Sum Stochastic Differential Games, Indiana Univ. Math. J. 38 (1989), 293-314. MR997385 (90e:93089)
[Fr] A. Friedman, Differential Games, Wiley, New York, 1971. MR0421700 (54:9696)
[GT] D. Gilbarg and N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd ed., Springer-Verlag, New York, 1983. MR737190 (86c:35035)
[I] R. Isaacs, Differential Games, Wiley, New York, 1965. MR0210469 (35:1362)
[IL] H. Ishii and P.L. Lions, Viscosity Solutions of Fully Nonlinear Second-Order Elliptic Partial Differential Equations, J. Diff. Equat. 83 (1990), 26-78. MR1031377 (90m:35015)
[Ish] H. Ishii, On Uniqueness and Existence of Viscosity Solutions of Fully Nonlinear Second-Order Elliptic PDEs, Comm. Pure Appl. Math. 42 (1989), 14-45. MR973743 (89m:35070)
[Ka] M.A. Katsoulakis, A Representation Formula and Regularizing Properties for Viscosity Solutions of Second-Order Fully Nonlinear Degenerate Parabolic Equations, Nonlinear Analysis 24 (1995), 147-158. MR1312585 (95m:35039)
[K1] N.V. Krylov, Controlled Diffusion Processes, Nauka, Moscow, 1977; English transl., Springer-Verlag, New York, 1980. MR601776 (82a:60062)
[K2] N.V. Krylov, Smoothness of the Payoff Function for a Controlled Diffusion Process in a Domain, Izv. Akad. Nauk SSSR Ser. Mat., vol. 34, 1990, pp. 65-95; English transl. in Math. USSR Izv. MR992979 (90f:93040)
[K3] N.V. Krylov, On Controlled Diffusion Processes with Unbounded Coefficients, Izv. Akad. Nauk SSSR Ser. Mat., vol. 19, 1982, pp. 41-64; English transl. in Math. USSR Izv.
[L] P.L. Lions, Optimal Control of Diffusion Processes and Hamilton-Jacobi-Bellman Equations. Part 1: The Dynamic Programming Principle and Applications, Communications in PDE 8 (1983), 1101-1174. MR709164 (85i:49043a)
[LM1] P.L. Lions and J.L. Menaldi, Optimal Control of Stochastic Integrals and Hamilton-Jacobi-Bellman Equations I, II, SIAM J. Control Optim. 20 (1982), no. 1, 58-81, 82-95. MR642179 (83c:49039)
[N1] M. Nisio, Stochastic Differential Games and Viscosity Solutions of Isaacs Equations, Nagoya Math. J. 110 (1988), 163-184. MR945913 (90b:93100)
[N2] M. Nisio, Some Remarks on Stochastic Optimal Control, Proceedings of the 3rd USSR-Japan Symposium on Probability Theory, Springer-Verlag, 1976, pp. 446-460. MR0439373 (55:12266)
[N3] M. Nisio, Stochastic Control Theory, ISI Lecture Notes 9, Macmillan, India, 1981.
[S] A. Święch, Another Approach to the Existence of Value Functions of Stochastic Differential Games, J. Math. Anal. Appl. 204 (1996), 884-897. MR1422779 (97j:90091)
[SV] D.W. Stroock and S.R.S. Varadhan, Multidimensional Diffusion Processes, Springer-Verlag, Berlin, 1979. MR532498 (81f:60108)
Department of Mathematical Sciences, Florida Institute of Technology, Melbourne,
Florida 32901
E-mail address: [email protected]