### SAMPLE PATH PROPERTIES OF VOLTERRA PROCESSES

Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 359-377
www.serialspublications.com

LEONID MYTNIK* AND EYAL NEUMAN*
Abstract. We consider the regularity of sample paths of Volterra processes. These processes are defined as stochastic integrals
$$M(t) = \int_0^t F(t,r)\,dX(r), \qquad t \in \mathbb{R}_+,$$
where $X$ is a semimartingale and $F$ is a deterministic real-valued function. We derive information on the modulus of continuity of these processes under regularity assumptions on the function $F$ and show that $M(t)$ has "worst" regularity properties at times of jumps of $X(t)$. We apply our results to obtain the optimal Hölder exponent for fractional Lévy processes.
1. Introduction and Main Results

1.1. Volterra Processes. A Volterra process is a process given by
$$M(t) = \int_0^t F(t,r)\,dX(r), \qquad t \in \mathbb{R}_+, \tag{1.1}$$
where $\{X(t)\}_{t \ge 0}$ is a semimartingale and $F(t,r)$ is a bounded deterministic real-valued function of two variables, sometimes called a kernel. One of the questions addressed in the research of Volterra and related processes is the study of their regularity properties; this is also the main goal of this paper. Before we describe our results, let us give a short introduction to this area. First, note that one-dimensional fractional processes, which are close relatives of Volterra processes, have been studied extensively in the literature. One-dimensional fractional processes are usually defined by
$$X(t) = \int_{-\infty}^{\infty} F(t,r)\,dL(r), \tag{1.2}$$
where $L(r)$ is some stochastic process and $F(t,r)$ is some specific kernel. For example, when $L(r)$ is a two-sided standard Brownian motion and
$$F(t,s) = \frac{1}{\Gamma(H+1/2)}\big[(t-s)_+^{H-1/2} - (-s)_+^{H-1/2}\big],$$
$X$ is called fractional Brownian motion with Hurst index $H$ (see e.g. Chapter 1.2 of [3] and Chapter 8.2 of [11]). It is also known that fractional Brownian motion with Hurst index $H$ is Hölder
Received 2011-11-3; Communicated by F. Viens.
2000 Mathematics Subject Classification. Primary 60G17, 60G22; Secondary 60H05.
Key words and phrases. Sample path properties, fractional processes, Lévy processes.
* This research is partly supported by a grant from the Israel Science Foundation.
continuous with any exponent less than $H$ (see e.g. [8]). Another prominent example is the fractional $\alpha$-stable Lévy process, which can also be defined via (1.2), with $L(r)$ a two-sided $\alpha$-stable Lévy process and
$$F(t,r) = a\{(t-r)_+^d - (-r)_+^d\} + b\{(t-r)_-^d - (-r)_-^d\}.$$
Takashima in [15] studied path properties of this process under the following conditions on the parameters: $1 < \alpha < 2$, $0 < d < 1 - \alpha^{-1}$ and $-\infty < a, b < \infty$, $|a| + |b| \neq 0$. It is proved in [15] that $X$ is a self-similar process. Denote the jumps of $L(t)$ by $\Delta_L(t) := L(t) - L(t-)$, $-\infty < t < \infty$. It is also proved in [15] that
$$\lim_{h \downarrow 0} (X(t+h) - X(t))\,h^{-d} = a\,\Delta_L(t), \qquad 0 < t < 1, \ P\text{-a.s.},$$
$$\lim_{h \downarrow 0} (X(t) - X(t-h))\,h^{-d} = -b\,\Delta_L(t), \qquad 0 < t < 1, \ P\text{-a.s.}$$
Note that in his proof Takashima relied heavily on the self-similarity of the process $X$.
Another well-studied process is the so-called fractional Lévy process, which again is defined via (1.2) for a specific kernel $F(t,r)$, with $L(r)$ a two-sided Lévy process. For example, Marquardt in [10] defined it as follows.

Definition 1.1 (Definition 3.1 in [10]). Let $L = \{L(t)\}_{t \in \mathbb{R}}$ be a two-sided Lévy process on $\mathbb{R}$ with $E[L(1)] = 0$, $E[L(1)^2] < \infty$ and without a Brownian component. Let $F(t,r)$ be the following kernel function:
$$F(t,r) = \frac{1}{\Gamma(d+1)}\big[(t-r)_+^d - (-r)_+^d\big].$$
For fractional integration parameter $0 < d < 0.5$, the stochastic process
$$M_d(t) = \int_{-\infty}^{\infty} F(t,r)\,dL(r), \qquad t \in \mathbb{R},$$
is called a fractional Lévy process.

As for the regularity properties of the fractional Lévy process $M_d$ defined above, Marquardt in [10] used an isometry for $M_d$ and the Kolmogorov continuity criterion to prove that the sample paths of $M_d$ are $P$-a.s. locally Hölder continuous of any order $\beta < d$. Moreover, she proved that for every modification of $M_d$ and for every $\beta > d$:
$$P(\{\omega \in \Omega : M_d(\cdot, \omega) \notin C^{\beta}[a,b]\}) > 0,$$
where $C^{\beta}[a,b]$ is the space of Hölder continuous functions of index $\beta$ on $[a,b]$. Note that in this paper we improve the result of Marquardt and show that for $d \in (0, 0.5)$ the sample paths of $M_d$ are $P$-a.s. Hölder continuous of any order $\beta \le d$.
The regularity properties of analogous multidimensional processes have also been studied. For example, consider the process
$$\hat{M}(t) = \int_{\mathbb{R}^m} F(t,r)\,L(dr), \qquad t \in \mathbb{R}^N, \tag{1.3}$$
where $L(dr)$ is some random measure and $F$ is a real-valued function of two variables. A number of important results have been derived recently by Ayache, Roueff and Xiao in [1], [2], on the regularity properties of $\hat{M}(t)$ for some particular choices of $F$ and $L$. As for earlier work on the subject we refer to Kôno and Maejima [5], [6]. Recently, the regularity of related fractional processes was studied by Maejima and Shieh in [7]. We should also mention the book of Samorodnitsky and Taqqu [13] and the work of Marcus and Rosiński [9], where the regularity properties of processes related to $\hat{M}(t)$ in (1.3) were also studied.
1.2. Functions of Smooth Variation. In this section we state our assumptions on the kernel function $F(s,r)$ in (1.1). First we introduce the following notation. Denote
$$f^{(n,m)}(s,r) \equiv \frac{\partial^{n+m} f(s,r)}{\partial s^n\,\partial r^m}, \qquad \forall n, m = 0, 1, \ldots.$$
We also define the following sets in $\mathbb{R}^2$:
$$E = \{(s,r) : -\infty < r \le s < \infty\}, \qquad \tilde{E} = \{(s,r) : -\infty < r < s < \infty\}.$$
We denote by $K$ a compact set in $E$, $\tilde{E}$ or $\mathbb{R}$, depending on the context. We define the following spaces of functions, which are essential for the definition of functions of smooth variation and regular variation.

Definition 1.2. Let $C_+^{(k)}(E)$ denote the space of functions $f$ from a domain $E$ in $\mathbb{R}^2$ to $\mathbb{R}$ satisfying
1. $f$ is continuous on $E$;
2. $f$ has continuous partial derivatives of order $k$ on $\tilde{E}$;
3. $f$ is strictly positive on $\tilde{E}$.

Note that functions of smooth variation of one variable have been studied extensively in the literature; [4] is the standard reference for these and related functions. Here we generalize the definition of functions of smooth variation to functions on $\mathbb{R}^2$.
Definition 1.3. Let $\rho > 0$. Let $f \in C_+^{(2)}(E)$ satisfy, for every compact set $K \subset \mathbb{R}$,
a) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in K} \left| \frac{h f^{(0,1)}(t, t-h)}{f(t, t-h)} + \rho \right| = 0,$
b) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in K} \left| \frac{h f^{(1,0)}(t+h, t)}{f(t+h, t)} - \rho \right| = 0,$
c) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in K} \left| \frac{h^2 f^{(1,1)}(t, t-h)}{f(t, t-h)} + \rho(\rho-1) \right| = 0,$
d) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in K} \left| \frac{h^2 f^{(0,2)}(t, t-h)}{f(t, t-h)} - \rho(\rho-1) \right| = 0.$
Then $f$ is called a function of smooth variation of index $\rho$ at the diagonal, and we write $f \in SR_\rho^2(0+)$.

It is easy to check that $f \in SR_\rho^2(0+)$, for $\rho > 0$, satisfies $f(t,t) = 0$ for all $t$. The simplest example of a function of smooth variation in $SR_\rho^2(0+)$ is $f(t,r) = (t-r)^\rho$. Another example is $f(t,r) = (t-r)^\rho |\log(t-r)|^\eta$, where $\eta \in \mathbb{R}$.
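For the prototype kernel $f(t,r) = (t-r)^\rho$, condition a) can be checked directly: since $f^{(0,1)}(t,r) = -\rho(t-r)^{\rho-1}$, the quantity $h f^{(0,1)}(t, t-h)/f(t, t-h)$ equals $-\rho$ identically. A minimal numerical sketch (the function names below are ours, not from the paper):

```python
def f(t, r, rho):
    """Prototype smoothly varying kernel f(t, r) = (t - r)**rho."""
    return (t - r) ** rho

def f01(t, r, rho):
    """Partial derivative in r: f^{(0,1)}(t, r) = -rho * (t - r)**(rho - 1)."""
    return -rho * (t - r) ** (rho - 1)

def condition_a(t, h, rho):
    """Quantity h * f^{(0,1)}(t, t-h) / f(t, t-h) + rho from Definition 1.3(a);
    it must tend to 0 as h -> 0 (for this kernel it is 0 up to rounding)."""
    return h * f01(t, t - h, rho) / f(t, t - h, rho) + rho

print([condition_a(1.0, h, 0.4) for h in (1e-1, 1e-3, 1e-5)])
```

For the logarithmically perturbed kernel $(t-r)^\rho|\log(t-r)|^\eta$ the same quantity equals $\eta/|\log h|$, which vanishes only in the limit $h \downarrow 0$; this illustrates why the conditions are stated as limits rather than identities.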
1.3. Main Results.

Convention: From now on we consider a semimartingale $\{X(t)\}_{t \ge 0}$ such that $X(0) = 0$ $P$-a.s. Without loss of generality we further assume that $X(0-) = 0$, $P$-a.s.

In this section we present our main results. The first theorem gives information about the regularity of the increments of the process $M$.

Theorem 1.4. Let $F(t,r)$ be a function of smooth variation of index $d \in (0,1)$ and let $\{X(t)\}_{t \ge 0}$ be a semimartingale. Define
$$M(t) = \int_0^t F(t,r)\,dX(r), \qquad t \ge 0.$$
Then,
$$\lim_{h \downarrow 0} \frac{M(s+h) - M(s)}{F(s+h, s)} = \Delta_X(s), \qquad \forall s \in [0,1], \ P\text{-a.s.},$$
where $\Delta_X(s) = X(s) - X(s-)$.

The information about the regularity of the sample paths of $M$ given in the above theorem is very precise when the process $X$ is discontinuous. In fact, it shows that at a jump point $s$ the increment of the process behaves like $F(s+h, s)\Delta_X(s)$.
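This jump behavior is easy to see numerically. For a finite-activity pure-jump semimartingale $X$ the integral $M(t) = \int_0^t F(t,r)\,dX(r)$ reduces to a finite sum over jump times, so the ratio in Theorem 1.4 can be evaluated directly. A sketch with the example kernel $F(t,r) = (t-r)^d$ and hypothetical jump data (not from the paper):

```python
d = 0.4
# hypothetical jump times and sizes of a pure-jump semimartingale X on [0, 1]
jumps = [(0.2, 1.5), (0.5, -0.7), (0.8, 0.3)]

def F(t, r):
    return (t - r) ** d

def M(t):
    # for finite-activity pure-jump X, the stochastic integral is a finite sum
    return sum(F(t, r) * dx for r, dx in jumps if r <= t)

def ratio(s, h):
    """(M(s+h) - M(s)) / F(s+h, s); by Theorem 1.4 this tends to Delta_X(s)."""
    return (M(s + h) - M(s)) / F(s + h, s)

for h in (1e-2, 1e-4, 1e-6):
    # at the jump time s = 0.5 the ratio approaches Delta_X(0.5) = -0.7;
    # at the continuity point s = 0.3 it approaches 0
    print(h, ratio(0.5, h), ratio(0.3, h))
```

The ratio recovers the jump size at jump times and vanishes at continuity points, which is exactly the dichotomy described above.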
In the next theorem we give a uniform-in-time bound on the increments of the process $M$.

Theorem 1.5. Let $F(t,r)$ and $\{M(t)\}_{t \ge 0}$ be as in Theorem 1.4. Then
$$\lim_{h \downarrow 0} \sup_{0 < s < t < 1,\ |t-s| \le h} \frac{|M(t) - M(s)|}{F(t,s)} = \sup_{s \in [0,1]} |\Delta_X(s)|, \qquad P\text{-a.s.}$$
Our next result, which is in fact a corollary of the previous theorem, improves the result of Marquardt from [10].

Theorem 1.6. Let $d \in (0, 0.5)$. The sample paths of a fractional Lévy process $\{M_d(t)\}_{t \ge 0}$ are $P$-a.s. Hölder continuous of order $d$ at any point $t \in \mathbb{R}$.

In Sections 2 and 3 we prove Theorems 1.4 and 1.5. In Section 4 we prove Theorem 1.6.
2. Proof of Theorem 1.4

The proofs of Theorems 1.4 and 1.5 use ideas of Takashima [15], but do not use the self-similarity assumed there. The goal of this section is to prove Theorem 1.4. First we prove the integration by parts formula in Lemma 2.1. Then, in Lemma 2.2, we decompose the increment $M(t+h) - M(t)$ into two components and analyze the limiting behavior of each component. This allows us to prove Theorem 1.4.

In the following lemma we refer to functions in $C^{(1)}(E)$, which is the space of functions from Definition 1.2 without the condition that $f > 0$ on $\tilde{E}$. It is easy to show that functions of smooth variation satisfy the assumptions of this lemma.

Lemma 2.1. Let $X$ be a semimartingale such that $X(0) = 0$ a.s. Let $F(t,r)$ be a function in $C^{(1)}(E)$ satisfying $F(t,t) = 0$ for all $t \in \mathbb{R}$. Denote $f(t,r) \equiv F^{(0,1)}(t,r)$. Then,
$$\int_0^t F(t,r)\,dX(r) = -\int_0^t f(t,r)X(r)\,dr, \qquad P\text{-a.s.}$$
Proof. Denote $F_t(r) = F(t,r)$. By Corollary 2 in Section 2.6 of [12] we have
$$\int_0^t F_t(r-)\,dX(r) = X(t)F_t(t) - \int_0^t X(r-)\,dF_t(r) - [X, F_t]_t. \tag{2.1}$$
By the hypothesis, $X(t)F_t(t) = 0$. Since $F_t(\cdot)$ has a continuous derivative and is therefore of bounded variation, it is easy to check that $[X, F_t]_t = 0$, $P$-a.s. Finally, since $X$ is a semimartingale, it has càdlàg sample paths (see the definition in Chapter 2.1 of [12]), so we immediately have
$$\int_0^t f(t,r)X(r-)\,dr = \int_0^t f(t,r)X(r)\,dr.$$
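The identity of Lemma 2.1 can be sanity-checked numerically for a piecewise-constant (pure-jump) semimartingale, for which the left-hand side is a finite sum. A sketch with the example kernel $F(t,r) = (t-r)^d$ and hypothetical jump data; the midpoint rule below is just one convenient quadrature for the right-hand side:

```python
d = 0.4
jumps = [(0.2, 1.5), (0.5, -0.7), (0.8, 0.3)]  # hypothetical (time, size) pairs

def F(t, r):                 # kernel with F(t, t) = 0, as Lemma 2.1 requires
    return (t - r) ** d

def f(t, r):                 # f = F^{(0,1)}, the partial derivative in r
    return -d * (t - r) ** (d - 1)

def X(r):                    # piecewise-constant pure-jump semimartingale
    return sum(dx for ri, dx in jumps if ri <= r)

t = 0.9
# left-hand side: int_0^t F(t, r) dX(r) is a finite sum over the jump times
lhs = sum(F(t, ri) * dx for ri, dx in jumps if ri <= t)

# right-hand side: -int_0^t f(t, r) X(r) dr by the midpoint rule (the integrand
# has an integrable singularity at r = t, so convergence is slow but adequate)
n = 200_000
w = t / n
rhs = -w * sum(f(t, (k + 0.5) * w) * X((k + 0.5) * w) for k in range(n))
print(lhs, rhs)              # the two sides agree up to quadrature error
```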
Convention and Notation. In this section we use the notation $F(t,r)$ for a smoothly varying function of index $d$ (that is, $F \in SR_d^2(0+)$), where $d$ is some number in $(0,1)$. We denote by $f(t,r) \equiv F^{(0,1)}(t,r)$ its smooth derivative, of index $d-1$, and let $SD_{d-1}^2(0+)$ denote the set of such smooth derivative functions of index $d-1$.
In the following lemma we present a decomposition of the increments of the process $Y(t)$ that will be the key to the proof of Theorem 1.4.

Lemma 2.2. Let
$$Y(t) = \int_0^t f(t,r)X(r)\,dr, \qquad t \ge 0.$$
Then we have
$$Y(t+\delta) - Y(t) = J_1(t,\delta) + J_2(t,\delta), \qquad \forall t \ge 0, \ \delta > 0,$$
where
$$J_1(t,\delta) = \delta \int_0^1 f(t+\delta,\, t+\delta-\delta v)\,X(t+\delta-\delta v)\,dv \tag{2.2}$$
and
$$J_2(t,\delta) = \delta \int_0^{t/\delta} [f(t+\delta,\, t-\delta v) - f(t,\, t-\delta v)]\,X(t-\delta v)\,dv. \tag{2.3}$$
Proof. For any $t \in [0,1]$, $\delta > 0$ we have
$$Y(t+\delta) - Y(t) = \int_0^{t+\delta} f(t+\delta, r)X(r)\,dr - \int_0^t f(t,r)X(r)\,dr \tag{2.4}$$
$$= \int_t^{t+\delta} f(t+\delta, r)X(r)\,dr + \int_0^t [f(t+\delta, r) - f(t,r)]X(r)\,dr.$$
The change of variables $r = t+\delta-\delta v$ in the first integral and $r = t-\delta v$ in the second completes the proof.

The next propositions are crucial for analyzing the behavior of $J_1$ and $J_2$ from the above lemma.
Proposition 2.3. Let $f(t,r) \in SD_{d-1}^2(0+)$, where $d \in (0,1)$. Let $X(r)$ be a semimartingale. Denote
$$g_\delta(t,v) = \frac{f(t+\delta,\, t-\delta v) - f(t,\, t-\delta v)}{f(t+\delta,\, t)}, \qquad t \in [0,1], \ v \ge 0, \ \delta > 0.$$
Then
$$\lim_{\delta \downarrow 0} \left| \int_0^{t/\delta} g_\delta(t,v)X(t-\delta v)\,dv + \frac{1}{d}X(t-) \right| = 0, \qquad \forall t \in [0,1], \ P\text{-a.s.}$$

Proposition 2.4. Let $f(t,r) \in SD_{d-1}^2(0+)$, where $d \in (0,1)$. Let $X(r)$ be a semimartingale. Denote
$$f_\delta(t,v) = \frac{f(t+\delta,\, t+\delta-\delta v)}{f(t+\delta,\, t)}, \qquad t \in [0,1], \ v \ge 0, \ \delta > 0.$$
Then
$$\lim_{\delta \downarrow 0} \left| \int_0^1 f_\delta(t,v)X(t+\delta(1-v))\,dv - \frac{1}{d}X(t) \right| = 0, \qquad \forall t \in [0,1], \ P\text{-a.s.}$$
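For the prototype kernel $F(t,r) = (t-r)^d$, so that $f(t,r) = -d(t-r)^{d-1}$, both normalized kernels can be computed in closed form. This is a sketch for the example kernel only, but it shows where the constant $1/d$ in the propositions comes from:

```latex
% Prototype kernel: F(t,r) = (t-r)^d, hence f(t,r) = F^{(0,1)}(t,r) = -d(t-r)^{d-1}.
\[
f_\delta(t,v) = \frac{f(t+\delta,\, t+\delta-\delta v)}{f(t+\delta,\, t)}
             = \frac{-d\,(\delta v)^{d-1}}{-d\,\delta^{d-1}} = v^{d-1},
\qquad
\int_0^1 v^{d-1}\,dv = \frac{1}{d},
\]
\[
g_\delta(t,v) = \frac{f(t+\delta,\, t-\delta v) - f(t,\, t-\delta v)}{f(t+\delta,\, t)}
             = (1+v)^{d-1} - v^{d-1},
\qquad
\int_0^{\infty}\big[(1+v)^{d-1} - v^{d-1}\big]\,dv = -\frac{1}{d}.
\]
```

In this special case the limits in Propositions 2.3 and 2.4 amount to the convergence of $X(t-\delta v)$ and $X(t+\delta(1-v))$ to the one-sided limits of $X$ at $t$.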
We first give a proof of Theorem 1.4 based on the above propositions and then return to the proofs of the propositions.

Proof of Theorem 1.4: From Lemma 2.1 we have
$$M(t+\delta) - M(t) = -(Y(t+\delta) - Y(t)), \qquad P\text{-a.s.}, \tag{2.5}$$
where
$$Y(t) = \int_0^t f(t,r)X(r)\,dr.$$
By Lemma 2.2, for every $t \ge 0$, $\delta > 0$, we have
$$Y(t+\delta) - Y(t) = J_1(t,\delta) + J_2(t,\delta). \tag{2.6}$$
For the first integral we get
$$\frac{J_1(t,\delta)}{\delta f(t+\delta, t)} = \int_0^1 f_\delta(t,v)X(t+\delta(1-v))\,dv.$$
Now we apply Proposition 2.4 to get
$$\lim_{\delta \downarrow 0} \frac{J_1(t,\delta)}{\delta f(t+\delta, t)} = \frac{1}{d}X(t), \qquad \forall t \in [0,1], \ P\text{-a.s.} \tag{2.7}$$
For the second integral we get
$$\frac{J_2(t,\delta)}{\delta f(t+\delta, t)} = \int_0^{t/\delta} g_\delta(t,v)X(t-\delta v)\,dv. \tag{2.8}$$
By Proposition 2.3 we get
$$\lim_{\delta \downarrow 0} \frac{J_2(t,\delta)}{\delta f(t+\delta, t)} = -\frac{1}{d}X(t-), \qquad \forall t \in [0,1], \ P\text{-a.s.} \tag{2.9}$$
Combining (2.7) and (2.9) with (2.6) we get
$$\lim_{\delta \downarrow 0} \frac{Y(t+\delta) - Y(t)}{\delta f(t+\delta, t)} = \frac{1}{d}\Delta_X(t), \qquad \forall t \in [0,1], \ P\text{-a.s.} \tag{2.10}$$
Recall that $F^{(0,1)}(t,r) = f(t,r)$, where by our assumptions $F \in SR_d^2(0+)$. It is straightforward to verify that
$$\lim_{h \downarrow 0} \sup_{t \in [0,1]} \left| \frac{F(t, t-h)}{h f(t, t-h)} + \frac{1}{d} \right| = 0. \tag{2.11}$$
Then by (2.5), (2.10) and (2.11) we get
$$\lim_{\delta \downarrow 0} \frac{M(t+\delta) - M(t)}{F(t+\delta, t)} = \Delta_X(t), \qquad \forall t \in [0,1], \ P\text{-a.s.} \tag{2.12}$$
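For the prototype kernel $F(t,r) = (t-r)^d$ the verification of (2.11) is immediate (again a sketch for the example kernel only; the general case follows from Lemma 2.5):

```latex
\[
\frac{F(t, t-h)}{h\, f(t, t-h)} = \frac{h^d}{h \cdot (-d)\, h^{d-1}} = -\frac{1}{d},
\]
```

so the expression inside the absolute value in (2.11) vanishes identically for this kernel.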
Now we are going to prove Propositions 2.3 and 2.4. First let us state a few properties of $SR_\rho^2(0+)$ functions. These properties are simple extensions of properties of smoothly varying functions (see Chapter 1 of [4]).

Lemma 2.5. Let $f$ be an $SR_d^2(0+)$ function for some $d \in (0,1)$. Then $f \in R_d^2(0+)$, $f^{(0,1)} \in R_{d-1}^2(0+)$, and
a) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in [0,1]} \left| \frac{f(t, t-hv)}{f(t, t-h)} - v^d \right| = 0$, uniformly in $v \in (0, a]$;
b) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in [0,1]} \left| \frac{f(t+hv, t)}{f(t+h, t)} - v^d \right| = 0$, uniformly in $v \in (0, a]$;
c) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in [0,1]} \left| \frac{f^{(0,1)}(t, t-hv)}{f^{(0,1)}(t, t-h)} - v^{d-1} \right| = 0$, uniformly in $v \in [a, \infty)$;
d) $\displaystyle \lim_{h \downarrow 0} \sup_{t \in [0,1]} \left| \frac{f^{(0,1)}(t+hv, t)}{f^{(0,1)}(t+h, t)} - v^{d-1} \right| = 0$, uniformly in $v \in [a, \infty)$;
for any $a \in (0, \infty)$.
Next, we state two lemmas dealing with the properties of the functions $f_\delta$ and $g_\delta$. We omit the proofs, as they are straightforward consequences of Lemma 2.5 and properties of smoothly varying functions.

Lemma 2.6. Let $f(t,r) \in SD_{d-1}^2(0+)$, where $d \in (0,1)$. Let $g_\delta(t,v)$ be defined as in Proposition 2.3. Then for every $h_0 \in (0,1]$:
(a) $\displaystyle \lim_{\delta \downarrow 0} \sup_{h_0 \le t \le 1} \int_{h_0/\delta}^{t/\delta} |g_\delta(t,v)|\,dv = 0$;
(b) $\displaystyle \lim_{\delta \downarrow 0} \sup_{0 \le t \le 1} \left| \int_0^{h_0/\delta} g_\delta(t,v)\,dv + \frac{1}{d} \right| = 0$;
(c) $\displaystyle \lim_{\delta \downarrow 0} \sup_{h_0 \le t \le 1} \left| \int_0^{t/\delta} g_\delta(t,v)\,dv + \frac{1}{d} \right| = 0$;
(d) $\displaystyle \lim_{\delta \downarrow 0} \sup_{h_0 \le t \le 1} \left| \int_0^{h_0/\delta} |g_\delta(t,v)|\,dv - \frac{1}{d} \right| = 0$.

Lemma 2.7. Let $f(t,r) \in SD_{d-1}^2(0+)$, where $d \in (0,1)$. Let $f_\delta(t,v)$ be defined as in Proposition 2.4. Then:
(a) $\displaystyle \lim_{\delta \downarrow 0} \sup_{0 \le t \le 1} \left| \int_0^1 |f_\delta(t,v)|\,dv - \frac{1}{d} \right| = 0$;
(b) $\displaystyle \lim_{\delta \downarrow 0} \sup_{0 \le t \le 1} \left| \int_0^1 f_\delta(t,v)\,dv - \frac{1}{d} \right| = 0$.
Now we will use Lemmas 2.6 and 2.7 to prove Propositions 2.3 and 2.4. At this point we also introduce notation for the supremum norm of càdlàg functions on $[0,1]$:
$$\|f\|_\infty = \sup_{0 \le t \le 1} |f(t)|, \qquad f \in D_{\mathbb{R}}[0,1],$$
where $D_{\mathbb{R}}[0,1]$ is the class of real-valued càdlàg functions on $[0,1]$. Since $X$ is a càdlàg process we have
$$\|X\|_\infty < \infty, \qquad P\text{-a.s.} \tag{2.13}$$
More generally, for every interval $I \subset \mathbb{R}$, $D_{\mathbb{R}}(I)$ denotes the class of real-valued càdlàg functions on $I$.
Proof of Proposition 2.3: Consider the following decomposition:
$$\int_0^{t/\delta} g_\delta(t,v)X(t-\delta v)\,dv + \frac{1}{d}X(t-)$$
$$= \int_0^{t/\delta} g_\delta(t,v)[X(t-\delta v) - X(t-)]\,dv + X(t-)\left[ \int_0^{t/\delta} g_\delta(t,v)\,dv + \frac{1}{d} \right]$$
$$=: J_1(\delta, t) + J_2(\delta, t). \tag{2.14}$$
By (2.13) and Lemma 2.6(c) we immediately get that for any arbitrarily small $h_0 > 0$,
$$\lim_{\delta \downarrow 0} \sup_{h_0 \le t \le 1} |J_2(\delta, t)| = 0, \qquad P\text{-a.s.}$$
Since $h_0$ was arbitrary and $X(0-) = X(0) = 0$, we get
$$\lim_{\delta \downarrow 0} |J_2(\delta, t)| = 0, \qquad \forall t \in [0,1], \ P\text{-a.s.}$$
Now to finish the proof it is enough to show that, $P$-a.s., for every $t \in [0,1]$,
$$\lim_{\delta \downarrow 0} |J_1(\delta, t)| = 0. \tag{2.15}$$
For any $h_0 \in [0, t]$ we can decompose $J_1$ as follows:
$$J_1(\delta, t) = \int_0^{h_0/\delta} g_\delta(t,v)[X(t-\delta v) - X(t-)]\,dv + \int_{h_0/\delta}^{t/\delta} g_\delta(t,v)[X(t-\delta v) - X(t-)]\,dv$$
$$=: J_{1,1}(\delta, t) + J_{1,2}(\delta, t). \tag{2.16}$$
Let $\varepsilon > 0$ be arbitrarily small. $X$ is a càdlàg process; therefore, for $P$-a.e. $\omega$ and every $t \in [0,1]$ we can fix $h_0 \in [0, t]$ small enough that
$$|X(t-\delta v, \omega) - X(t-, \omega)| < \varepsilon, \qquad \text{for all } v \in (0, h_0/\delta]. \tag{2.17}$$
Let us choose such an $h_0$ for the decomposition (2.16). Then by (2.17) and Lemma 2.6(d) we can pick $\delta' > 0$ such that for every $\delta \in (0, \delta')$ we have
$$|J_{1,1}(\delta, t)| \le \frac{2\varepsilon}{d}. \tag{2.18}$$
Now let us treat $J_{1,2}$. By (2.13) and Lemma 2.6(a) we get
$$|J_{1,2}(\delta, t)| \le 2\|X\|_\infty \int_{h_0/\delta}^{t/\delta} |g_\delta(t,v)|\,dv \to 0, \quad \text{as } \delta \downarrow 0, \ P\text{-a.s.} \tag{2.19}$$
Combining (2.18) and (2.19), we get (2.15), and this completes the proof.
Proof of Proposition 2.4: We consider the following decomposition:
$$\int_0^1 f_\delta(t,v)X(t+\delta(1-v))\,dv - \frac{1}{d}X(t)$$
$$= \int_0^1 f_\delta(t,v)[X(t+\delta(1-v)) - X(t)]\,dv + X(t)\left[ \int_0^1 f_\delta(t,v)\,dv - \frac{1}{d} \right]$$
$$=: J_1(\delta, t) + J_2(\delta, t).$$
Now the proof follows along the same lines as that of Proposition 2.3. By (2.13) and Lemma 2.7(b) we have
$$\lim_{\delta \downarrow 0} \sup_{0 \le t \le 1} |J_2(\delta, t)| = 0, \qquad P\text{-a.s.}$$
Hence to complete the proof it is enough to show that, $P$-a.s., for every $t \in [0,1]$,
$$\lim_{\delta \downarrow 0} |J_1(\delta, t)| = 0.$$
Let $\varepsilon > 0$ be arbitrarily small. $X$ is a càdlàg process; therefore, for $P$-a.e. $\omega$ and every $t \in [0,1]$ we can fix $h_0$ small enough that
$$|X(t+\delta(1-v), \omega) - X(t, \omega)| < \varepsilon, \qquad \text{for all } v \in (0, h_0/\delta]. \tag{2.20}$$
Then by (2.20) and Lemma 2.7(a) we easily get
$$\lim_{\delta \downarrow 0} |J_1(\delta, t)| = 0, \qquad \forall t \in [0,1], \ P\text{-a.s.}$$
3. Proof of Theorem 1.5

Recall that by Lemma 2.1 we have
$$M(t) - M(s) = -(Y(t) - Y(s)), \qquad 0 \le s < t, \tag{3.1}$$
where
$$Y(s) = \int_0^s f(s,r)X(r)\,dr, \qquad s > 0.$$
Then by Lemma 2.2 we get
$$\frac{Y(s+\delta) - Y(s)}{\delta f(s+\delta, s)} = \frac{J_1(s,\delta)}{\delta f(s+\delta, s)} + \frac{J_2(s,\delta)}{\delta f(s+\delta, s)}, \qquad \delta > 0. \tag{3.2}$$
Recall that $J_1$ and $J_2$ are defined in (2.2) and (2.3).

Convention: Denote by $\Gamma \subset \Omega$ the set of paths of $X(\cdot, \omega)$ which are right continuous and have left limits. By the assumptions of the theorem, $P(\Gamma) = 1$. In what follows we deal with $\omega \in \Gamma$. Therefore, for every $\varepsilon > 0$ and $t > 0$ there exists $\eta = \eta(\varepsilon, t, \omega) > 0$ such that
$$|X(t-) - X(s)| \le \varepsilon \quad \text{for all } s \in [t-\eta, t), \qquad |X(t) - X(s)| \le \varepsilon \quad \text{for all } s \in [t, t+\eta]. \tag{3.3}$$
Let us fix an arbitrary $\varepsilon > 0$. The interval $[0,1]$ is compact; therefore there exist points $t_1, \ldots, t_m$ that define a cover of $[0,1]$ as follows:
$$[0,1] \subset \bigcup_{k=1}^m \left( t_k - \frac{\eta_k}{2},\ t_k + \frac{\eta_k}{2} \right),$$
where we denote $\eta_k = \eta(\varepsilon, t_k)$. Note that if $|\Delta_X(s)| > 2\varepsilon$ then $s = t_k$ for some $k$. We can also construct this cover in such a way that
$$\inf_{k \in \{2, \ldots, m\}} \left( t_k - \frac{\eta_k}{2} \right) \ge t_1. \tag{3.4}$$
Also, since $X(t)$ is right continuous at $0$, we can choose $t_1$ sufficiently small that
$$\sup_{t \in (0,\ t_1 + \eta_1/2)} |X(t)| \le \varepsilon. \tag{3.5}$$
Denote
$$B_k = \left( t_k - \eta_k,\ t_k + \eta_k \right), \qquad B_k^* = \left( t_k - \frac{\eta_k}{2},\ t_k + \frac{\eta_k}{2} \right). \tag{3.6}$$
Note that the coverings $B_k$ and $B_k^*$ built above are random: they depend on the particular realization of $X$. For the rest of this section we will work with a particular realization of $X(\cdot, \omega)$, with $\omega \in \Gamma$, and the corresponding coverings $B_k$, $B_k^*$. All the constants that appear below may depend on $\omega$, and the inequalities should be understood $P$-a.s.

Let $s, t \in B_k^*$ and denote $\delta = t - s$. Recall the notation from Propositions 2.3 and 2.4. Let us decompose $\frac{J_i(s,\delta)}{\delta f(s+\delta, s)}$, $i = 1, 2$, as follows:
$$\frac{J_1(s,\delta) + J_2(s,\delta)}{\delta f(s+\delta, s)} = X(t_k-)\left[ \int_0^1 f_\delta(s,v)\,dv + \int_0^{s/\delta} g_\delta(s,v)\,dv \right]$$
$$+ \Delta_X(t_k)\left[ \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}\,dv + \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v \ge t_k\}}\,dv \right]$$
$$+ \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) < t_k\}}[X(s+\delta(1-v)) - X(t_k-)]\,dv$$
$$+ \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v < t_k\}}[X(s-\delta v) - X(t_k-)]\,dv$$
$$+ \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}[X(s+\delta(1-v)) - X(t_k+)]\,dv$$
$$+ \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v \ge t_k\}}[X(s-\delta v) - X(t_k+)]\,dv$$
$$=: D_1(k,s,\delta) + D_2(k,s,\delta) + \cdots + D_6(k,s,\delta),$$
where $\mathbf{1}$ is the indicator function. The proof of Theorem 1.5 will follow as we handle the terms $D_i$, $i = 1, \ldots, 6$, via a series of lemmas.
Lemma 3.1. There exists a sufficiently small $h_{3.1} > 0$ such that
$$|D_1(k,s,\delta)| \le \left( |X(t_k-)| + \frac{4}{d} \right)\varepsilon, \qquad \forall k \in \{1, \ldots, m\}, \ s \in B_k^*, \ \delta \in (0, h_{3.1}). \tag{3.7}$$

Proof. By Lemma 2.6(c), Lemma 2.7(b) and our assumptions on the covering, we get (3.7) for $k = 2, \ldots, m$. As for $k = 1$, by (3.5) we get
$$|X(t_1-)| \le \varepsilon. \tag{3.8}$$
By Lemma 2.6(b) we have
$$\sup_{0 \le s \le 1} \left| \int_0^{(t_1+\eta_1)/\delta} g_\delta(s,v)\,dv + \frac{1}{d} \right| < \varepsilon/2. \tag{3.9}$$
Hence by Lemma 2.6(c), (3.8) and (3.9), for sufficiently small $\delta$ we get
$$\sup_{s \in B_1^*} |D_1(1,s,\delta)| \le \varepsilon \sup_{s \in B_1^*} \left[ \int_0^1 f_\delta(s,v)\,dv + \int_0^{(t_1+\eta_1)/\delta} |g_\delta(s,v)|\,dv \right] \le \frac{4}{d}\varepsilon,$$
and (3.7) follows.
To handle the $D_2$ term we need the following lemma.

Lemma 3.2. Let $g_\delta(s,v)$ and $f_\delta(s,v)$ be defined as in Propositions 2.3 and 2.4. Then there exists $h_{3.2} > 0$ such that for all $\delta \in (0, h_{3.2})$,
$$\left| \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}\,dv + \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v \ge t_k\}}\,dv \right| \le \frac{1}{d} + \varepsilon, \qquad \forall k \ge 1, \ s \in [0,1].$$

Proof. We introduce the following notation:
$$I_1(s,\delta) = \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}\,dv, \qquad I_2(s,\delta) = \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v \ge t_k\}}\,dv.$$
From Definition 1.3 it follows that there exists $h_1 > 0$ such that for every $\delta \in (0, h_1)$, $v \in (0, \frac{h_1}{2\delta})$ and $s \in [0,1]$,
$$g_\delta(s,v) \le 0 \tag{3.10}$$
and
$$f_\delta(s,v) \ge 0. \tag{3.11}$$
By Lemma 2.6(a), we can fix a sufficiently small $h_2 \in (0, h_1/2)$ such that for every $\delta \in (0, h_2)$ we have
$$\int_{h_1/(2\delta)}^{s/\delta} |g_\delta(s,v)|\,dv \le \varepsilon/2, \qquad \forall s \in [h_1/2, 1], \tag{3.12}$$
where $\varepsilon$ was fixed when building the covering $\{B_k^*\}_{k=1}^m$. Then, by (3.12), we have
$$|I_1(s,\delta) + I_2(s,\delta)| \le \left| I_1 + \int_0^{(s \wedge \frac{h_1}{2})/\delta} g_\delta(s,v)\mathbf{1}_{\{v \le \frac{s-t_k}{\delta}\}}\,dv \right| + \varepsilon/2 \tag{3.13}$$
for all $s \in [0,1]$, $\delta \in (0, h_2)$.
By (3.11) and the choice of $h_2 \in (0, h_1/2)$, we get
$$I_1(s,\delta) \ge 0, \qquad \forall s \in [0,1], \ \delta \in (0, h_2). \tag{3.14}$$
By (3.10) we have
$$\int_0^{(s \wedge \frac{h_1}{2})/\delta} g_\delta(s,v)\mathbf{1}_{\{v \le \frac{s-t_k}{\delta}\}}\,dv \le 0, \qquad \forall s \in [0,1], \ \delta \in (0, h_2). \tag{3.15}$$
Then by (3.13), (3.14) and (3.15) we get
$$|I_1(s,\delta) + I_2(s,\delta)| \le \max\left( \int_0^1 f_\delta(s,v)\,dv,\ \left| \int_0^{(s \wedge \frac{h_1}{2})/\delta} g_\delta(s,v)\,dv \right| \right) + \varepsilon/2 \tag{3.16}$$
for all $s \in [0,1]$, $\delta \in (0, h_2)$.
By (3.16), Lemma 2.7(b) and Lemma 2.6(d) we can fix $h_{3.2}$ sufficiently small such that
$$|I_1(s,\delta) + I_2(s,\delta)| \le \frac{1}{d} + \varepsilon,$$
and we are done.
Note that
$$|D_2(k,s,\delta)| = |\Delta_X(t_k)| \left| \int_0^1 f_\delta(s,v)\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}\,dv + \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v \ge t_k\}}\,dv \right|. \tag{3.17}$$
The immediate corollary of Lemma 3.2 and (3.17) is:

Corollary 3.3.
$$|D_2(k,s,\delta)| \le |\Delta_X(t_k)| \left( \varepsilon + \frac{1}{d} \right), \qquad \forall k \in \{1, \ldots, m\}, \ s \in [0,1], \ \delta \in (0, h_{3.2}).$$

One can easily deduce the next corollary of Lemma 3.2.

Corollary 3.4. There exists $h_{3.4} > 0$ such that
$$\left| \sup_{s \in B_k^*} |D_2(k,s,\delta)| - \frac{1}{d}|\Delta_X(t_k)| \right| \le \varepsilon\,|\Delta_X(t_k)|, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h_{3.4}). \tag{3.18}$$

Proof. By Corollary 3.3 we have
$$\sup_{s \in B_k^*} |D_2(k,s,\delta)| \le \frac{1}{d}|\Delta_X(t_k)| + |\Delta_X(t_k)|\,\varepsilon, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h_{3.2}).$$
To get (3.18) it is enough to find $s \in B_k^*$ and $h_{3.4} \in (0, h_{3.2})$ such that for all $\delta \in (0, h_{3.4})$,
$$|D_2(k,s,\delta)| \ge \frac{1}{d}|\Delta_X(t_k)| - |\Delta_X(t_k)|\,\varepsilon, \qquad \forall k \in \{1, \ldots, m\}. \tag{3.19}$$
Picking $s = t_k$ we get
$$|D_2(k,t_k,\delta)| = |\Delta_X(t_k)| \left| \int_0^1 f_\delta(t_k,v)\,dv \right|.$$
Then by Lemma 2.7(b), (3.19) follows and we are done.
The term $|D_3(k,s,\delta)| + |D_5(k,s,\delta)|$ is bounded by the following lemma.

Lemma 3.5. There exists a sufficiently small $h_{3.5}$ such that
$$|D_3(k,s,\delta)| + |D_5(k,s,\delta)| \le \frac{4}{d}\,\varepsilon, \qquad \forall k \in \{1, \ldots, m\}, \ s \in B_k^*, \ \forall \delta \in (0, h_{3.5}).$$

Proof. By the construction of $B_k$ we get
$$|D_3(k,s,\delta)| \le \varepsilon \int_0^1 |f_\delta(s,v)|\mathbf{1}_{\{s+\delta(1-v) < t_k\}}\,dv, \qquad \forall s \in B_k^*, \ k \in \{1, \ldots, m\}, \ \delta \in (0, \eta_k/2), \tag{3.20}$$
and
$$|D_5(k,s,\delta)| \le \varepsilon \int_0^1 |f_\delta(s,v)|\mathbf{1}_{\{s+\delta(1-v) \ge t_k\}}\,dv, \qquad \forall s \in B_k^*, \ k \in \{1, \ldots, m\}, \ \delta \in (0, \eta_k/2). \tag{3.21}$$
From (3.20) and (3.21) we get
$$|D_3(k,s,\delta)| + |D_5(k,s,\delta)| \le 2\varepsilon \int_0^1 |f_\delta(s,v)|\,dv, \qquad \forall k \in \{1, \ldots, m\}, \ s \in B_k^*, \ \delta \in (0, \eta/2),$$
where $\eta = \min_{1 \le k \le m} \eta_k$. By Lemma 2.7(a) the result follows.
Next we bound $|D_4(k,s,\delta)|$ in the following lemma.

Lemma 3.6. There exists $h_{3.6} > 0$ such that for all $\delta \in (0, h_{3.6})$:
$$|D_4(k,s,\delta)| \le \varepsilon \left( \frac{2}{d} + 2\|X\|_\infty \right), \qquad \forall s \in B_k^*, \ k \in \{1, \ldots, m\}. \tag{3.22}$$

Proof. Let $\varepsilon > 0$ be arbitrarily small and fix $k \in \{1, \ldots, m\}$. First we consider the case $s - t_k > 0$:
$$\left| \int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v < t_k\}}[X(s-\delta v) - X(t_k-)]\,dv \right| \tag{3.23}$$
$$\le \int_{(s-t_k)/\delta}^{\frac{s-t_k}{\delta} + \frac{\eta_k}{2\delta}} |g_\delta(s,v)|\,|X(s-\delta v) - X(t_k-)|\,dv + \left| \int_{\frac{s-t_k}{\delta} + \frac{\eta_k}{2\delta}}^{s/\delta} \mathbf{1}_{\{t_k > \eta_k/2\}}\, g_\delta(s,v)[X(s-\delta v) - X(t_k-)]\,dv \right|$$
$$=: |I_1(k,s,\delta)| + |I_2(k,s,\delta)|.$$
Note that the indicator in $I_2(k,s,\delta)$ ensures that $s/\delta > (s-t_k)/\delta + \eta_k/(2\delta)$. By the definition of $B_k$ in (3.6) and by Lemma 2.6(d), there exists $h_1 > 0$ such that for every $\delta \in (0, h_1)$ we have, uniformly in $s \in B_k^* \cap [t_k, 1]$,
$$|I_1(k,s,\delta)| \le \frac{2}{d}\,\varepsilon. \tag{3.24}$$
By Lemma 2.6(a), there exists $h_2 \in (0, h_1)$ such that for every $\delta \in (0, h_2)$ we have, uniformly in $s \in B_k^* \cap [t_k, 1]$ (note that if $t_k \le \eta_k/2$ then $I_2(k,s,\delta) = 0$),
$$|I_2(k,s,\delta)| \le 2\|X\|_\infty\,\varepsilon. \tag{3.25}$$
By (3.23), (3.24) and (3.25) we get (3.22).

Now consider the case $s \le t_k$, $s \in B_k^*$, $k \in \{1, \ldots, m\}$. Then we have
$$\int_0^{s/\delta} g_\delta(s,v)\mathbf{1}_{\{s-\delta v < t_k\}}[X(s-\delta v) - X(t_k-)]\,dv$$
$$= \int_0^{\eta_k/(2\delta)} g_\delta(s,v)[X(s-\delta v) - X(t_k-)]\,dv + \int_{\eta_k/(2\delta)}^{s/\delta} g_\delta(s,v)[X(s-\delta v) - X(t_k-)]\,dv$$
$$=: J_1(k,s,\delta) + J_2(k,s,\delta). \tag{3.26}$$
Note that if $v \in (0, \eta_k/(2\delta))$, $s \le t_k$ and $s \in B_k^*$, then $s - \delta v \in (t_k - \eta_k, t_k)$. Hence, by the construction of $B_k$ we have
$$\sup_{v \in (0, \eta_k/(2\delta))} |X(s-\delta v) - X(t_k-)| \le \varepsilon, \qquad \forall s \le t_k, \ s \in B_k^*. \tag{3.27}$$
By (3.27) and Lemma 2.6(d), there exists $h_4 \in (0, h_2)$ such that
$$|J_1(k,s,\delta)| \le \frac{2}{d}\,\varepsilon, \qquad \forall \delta \in (0, h_4), \ s \le t_k, \ s \in B_k^*, \ k \in \{1, \ldots, m\}. \tag{3.28}$$
By Lemma 2.6(a), there exists $h_{3.6} \in (0, h_4)$ such that
$$|J_2(k,s,\delta)| \le 2\|X\|_\infty\,\varepsilon, \qquad \forall \delta \in (0, h_{3.6}), \ s \le t_k, \ s \in B_k^*, \ k \in \{1, \ldots, m\}. \tag{3.29}$$
Combining (3.28) and (3.29) with (3.26), the result follows.
$|D_6(k,s,\delta)|$ is bounded in the following lemma.

Lemma 3.7. There exists $h_{3.7} > 0$ such that for all $\delta \in (0, h_{3.7})$:
$$|D_6(k,s,\delta)| \le \frac{2\varepsilon}{d}, \qquad \forall s \in B_k^*, \ k \in \{1, \ldots, m\}.$$

Proof. Recall that
$$B_k^* = \left( t_k - \frac{\eta_k}{2},\ t_k + \frac{\eta_k}{2} \right), \qquad k \in \{1, \ldots, m\}.$$
Note that
$$|D_6(k,s,\delta)| = 0, \qquad \forall s \in \left( t_k - \frac{\eta_k}{2},\ t_k \right], \ k \in \{1, \ldots, m\}. \tag{3.30}$$
Hence we only treat the case $s > t_k$, $s \in B_k^*$. One can easily see that in this case
$$D_6(k,s,\delta) = \int_0^{(s-t_k)/\delta} g_\delta(s,v)[X(s-\delta v) - X(t_k+)]\,dv.$$
By the construction of $B_k^*$, for every $s \in B_k^*$ with $s > t_k$ we have
$$|X(s-\delta v) - X(t_k)| \le \varepsilon \qquad \text{for all } v \in (0, (s-t_k)/\delta]. \tag{3.31}$$
Notice that if $s \in B_k^*$ and $s > t_k$, then $0 < s - t_k < \eta_k/2$ for all $k = 1, \ldots, m$. Denote $\eta = \max_{k=1,\ldots,m} \eta_k$. Then by (3.31) and Lemma 2.6(d) we can pick $h_{3.7} > 0$ such that for every $\delta \in (0, h_{3.7})$ we have
$$|D_6(k,s,\delta)| \le \frac{2\varepsilon}{d}, \qquad \forall s \in \left( t_k,\ t_k + \frac{\eta_k}{2} \right), \ k \in \{1, \ldots, m\}. \tag{3.32}$$
Then by (3.30) and (3.32), for all $\delta \in (0, h_{3.7})$, the result follows.
Now we are ready to complete the proof of Theorem 1.5. By Lemmas 3.1, 3.5, 3.6, 3.7 and Corollary 3.4, there exist $h^*$ small enough and $C_{3.33} = 6\|X\|_\infty + \frac{12}{d}$ such that
$$\sup_{s \in B_k^*} \left| \frac{|J_1(s,\delta) + J_2(s,\delta)|}{|\delta f(s+\delta, s)|} - \frac{1}{d}|\Delta_X(t_k)| \right| \le \varepsilon \cdot C_{3.33}, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h^*), \ P\text{-a.s.} \tag{3.33}$$
Recall that $F^{(0,1)}(s,r) = f(s,r)$, where $F(s,r) \in SR_d^2(0+)$ is a positive function. Then, by (2.11), we can choose $h \in (0, h^*)$ small enough that
$$\sup_{s \in B_k^*} \left| \frac{|J_1(s,\delta) + J_2(s,\delta)|}{F(s+\delta, s)} - |\Delta_X(t_k)| \right| \le \varepsilon \cdot C_{3.34}, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h), \tag{3.34}$$
where $C_{3.34} = 2C_{3.33} + 1$. By (3.34) and Lemma 2.2 we get
$$\sup_{|t-s| \le \delta,\ s \in B_k^*} \left| \frac{|Y(t) - Y(s)|}{F(t,s)} - |\Delta_X(t_k)| \right| \le \varepsilon \cdot C_{3.34}, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h).$$
From Lemma 2.1 we have
$$\sup_{|t-s| \le \delta,\ s \in B_k^*} \left| \frac{|M(t) - M(s)|}{F(t,s)} - |\Delta_X(t_k)| \right| \le \varepsilon \cdot C_{3.34}, \qquad \forall k \in \{1, \ldots, m\}, \ \delta \in (0, h), \ P\text{-a.s.} \tag{3.35}$$
By the construction of the covering $B_k$, for any point $s \notin \{t_1, \ldots, t_m\}$ we have $|\Delta_X(s)| \le 2\varepsilon$. Set $C_{3.36} = C_{3.34} + 2$. Then, by (3.35) we get
$$\left| \sup_{0 < s < t < 1,\ |t-s| \le \delta} \frac{|M(t) - M(s)|}{F(t,s)} - \sup_{s \in [0,1]} |\Delta_X(s)| \right| \le \varepsilon \cdot C_{3.36}, \qquad \delta \in (0, h), \ P\text{-a.s.} \tag{3.36}$$
Since $C_{3.36}$ is independent of $m$, and since $\varepsilon$ was arbitrarily small, the result follows.
4. Proof of Theorem 1.6

In this section we prove Theorem 1.6. We first need the following lemma.

Lemma 4.1. Let $L(t)$ be a two-sided Lévy process with $E[L(1)] = 0$, $E[L(1)^2] < \infty$ and without a Brownian component. Then for $P$-a.e. $\omega$, for any $t \in \mathbb{R}$ and $a \le 0$ such that $t > a$, there exists $\delta' \in (0, t-a)$ such that
$$\left| \int_{-\infty}^{a} [(t+\delta-r)^{d-1} - (t-r)^{d-1}]L(r)\,dr \right| \le C \cdot |\delta|, \qquad \forall\, |\delta| \le \delta', \tag{4.1}$$
where $C$ is a constant that may depend on $\omega$, $t$ and $\delta'$.
Proof. Fix an arbitrary $t \in \mathbb{R}$ and pick $\delta' \in (0, t-a)$. For all $|\delta| \le \delta'$, substituting $u = -r$,
$$\int_{-\infty}^{a} [(t+\delta-r)^{d-1} - (t-r)^{d-1}]L(r)\,dr$$
$$= \int_{-a}^{N} [(t+\delta+u)^{d-1} - (t+u)^{d-1}]L_2(u)\,du + \int_{N}^{\infty} [(t+\delta+u)^{d-1} - (t+u)^{d-1}]L_2(u)\,du$$
$$=: I_{2,1}(N,\delta) + I_{2,2}(N,\delta), \tag{4.2}$$
where $L_2(u) := L(-u)$. Now we use a result on the long-time behavior of Lévy processes. By Proposition 48.9 of [14], if $E[L_2(1)] = 0$ and $E[L_2(1)^2] < \infty$, then
$$\limsup_{s \to \infty} \frac{L_2(s)}{(2s \log\log s)^{1/2}} = (E[L_2(1)^2])^{1/2}, \qquad P\text{-a.s.} \tag{4.3}$$
Recall that $d < 0.5$. Hence by (4.3) we can pick $N = N(\omega) > 0$ large enough that
$$|I_{2,2}(N,\delta)| \le \int_{N}^{\infty} |(t+\delta+u)^{d-1} - (t+u)^{d-1}|\,u^{1/2+\varepsilon}\,du \le C \cdot |\delta|, \qquad \forall \delta \in (-\delta', \delta'), \ P\text{-a.s.} \tag{4.4}$$
On the other hand, for $\delta$ small enough,
$$|I_{2,1}(N,\delta)| = \left| \int_{-a}^{N} [(t+\delta+u)^{d-1} - (t+u)^{d-1}]L_2(u)\,du \right| \le C\,\|L_2\|_{[0,N]}\,|\delta| \cdot (t-a)^{d-1}, \qquad \forall \delta \in (-\delta', \delta'), \tag{4.5}$$
where
$$\|L_2\|_{[0,N]} = \sup_{u \in [0,N]} |L_2(u)|.$$
Then, by (4.4) and (4.5), we get for $d < 1/2$
$$|I_{2,1}(N,\delta) + I_{2,2}(N,\delta)| < C|\delta|, \qquad \forall \delta \in (-\delta', \delta'), \tag{4.6}$$
and by combining (4.2) with (4.6) the result follows.
376
LEONID MYTNIK AND EYAL NUEMAN
Proof of Theorem 1.6. By Theorem 3.4 in [10] we have
Md (t) =
1
О“(d)
в€ћ
в€’в€ћ
[(t в€’ r)dв€’1
в€’ (в€’r)dв€’1
+
+ ]L(r)dr, t в€€ R, P в€’ a.s.
(4.7)
We prove the theorem for the case of t > 0. The proof for the case of t в‰¤ 0 can
be easily adjusted along the similar lines. We can decompose Md (t) as follows:
Md (t)
=
1
О“(d)
t
(t в€’ r)dв€’1 L(r)dr +
0
1
О“(d)
0
[(t в€’ r)dв€’1 в€’ (в€’r)dв€’1 ]L(r)dr
в€’в€ћ
= Md1 (t) + Md2 (t), t в€€ (0, 1), P в€’ a.s.
By Lemma 2.1 we have
Md1 (t) =
1
О“(d + 1)
t
(t в€’ r)d dLr , t в€€ R+ , P в€’ a.s.
0
By Theorem 1.5 we have
lim
hв†“0
sup
0<s<t<1, |tв€’s|в‰¤h
О“(d + 1)
|Md1 (t) в€’ Md1 (s)|
= sup |в€†X (s)|, P в€’ a.s.
hd
sв€€[0,1]
Therefore, P -a.s. П‰, for any t в€€ (0, 1), there exists Оґ1 > 0 and C1 > 0 such that
|Md1 (t + Оґ) в€’ Md1 (t)| в‰¤ C1 |Оґ|d , в€ЂОґ в€€ (в€’Оґ1 , Оґ1 ).
(4.8)
By Lemma 4.1, P -a.s. П‰, for any t в€€ (0, 1), there exists Оґ2 > 0 and C2 = C2 (П‰, t) >
0 such that
|Md2 (t + Оґ) в€’ Md2 (t)| в‰¤ C2 |Оґ|, в€ЂОґ в€€ (в€’Оґ2 , Оґ2 ).
(4.9)
Hence by (4.8) and (4.9), $P$-a.s. $\omega$, for any $t \in (0,1)$ we can fix $\delta_3$ and $C = C(\omega,t)$ such that
$$|M_d(t+\delta) - M_d(t)| \le C |\delta|^{d}, \quad \forall \delta \in (-\delta_3, \delta_3),$$
and we are done.
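The mechanism behind the proof can be seen numerically: the smooth part $M_d^2$ contributes $O(|\delta|)$, while the kernel applied to a jump of size $J$ contributes $J\delta^d/\Gamma(d+1)$, so the Hölder exponent at a jump time is exactly $d$. A minimal sketch, assuming a pure-jump driver with hand-placed jump times and sizes (illustrative values, not the paper's general construction):

```python
from math import gamma

# Hölder behaviour of M_d^1 at a jump of L: for a pure-jump driver with jumps
# J_i at times r_i, Lemma 2.1 gives M_d^1(t) = (1/Gamma(d+1)) sum_i J_i (t - r_i)_+^d.
d = 0.4
jumps = [(0.2, 1.5), (0.5, -2.0)]      # (r_i, J_i), chosen for illustration

def M1(t):
    return sum(J * max(t - r, 0.0) ** d for r, J in jumps) / gamma(d + 1)

# Near the jump at r_0 = 0.5 the normalized increment recovers the jump size:
r0, J0 = jumps[1]
for delta in (1e-4, 1e-6, 1e-8):
    ratio = gamma(d + 1) * (M1(r0 + delta) - M1(r0)) / delta ** d
    print(round(ratio, 4))   # approaches J0 = -2.0 as delta -> 0
```

The contribution of the earlier jump vanishes like $\delta^{1-d}$ after the normalization, which is why the limit isolates the jump at $r_0$.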
Acknowledgment. Both authors thank an anonymous referee for the careful
reading of the manuscript, and for a number of useful comments and suggestions
that improved the exposition.
References
1. Ayache, A., Roueff, F. and Xiao, Y.: Local and asymptotic properties of linear fractional stable sheets. C. R. Math. Acad. Sci. Paris 344(6) (2007) 389-394.
2. Ayache, A., Roueff, F. and Xiao, Y.: Linear fractional stable sheets: wavelet expansion and sample path properties. Stochastic Process. Appl. 119(4) (2009) 1168-1197.
3. Biagini, F., Hu, Y., Øksendal, B. and Zhang, T.: Stochastic calculus for fractional Brownian motion and applications, Probability and its Applications, Springer-Verlag London Ltd., London, 2008.
4. Bingham, N. H., Goldie, C. M. and Teugels, J. L.: Regular variation (Encyclopedia of Mathematics and its Applications), Cambridge University Press, 1987.
5. Kôno, N. and Maejima, M.: Hölder continuity of sample paths of some self-similar stable processes, Tokyo J. Math. 14(1) (1991) 93-100.
6. Maejima, M.: On a class of self-similar processes, Z. Wahrsch. Verw. Gebiete 62(2) (1983) 235-245.
7. Maejima, M. and Shieh, N. R.: Sample paths of fractional Lévy processes, Private communication.
8. Mandelbrot, B. B. and Van Ness, J. W.: Fractional Brownian motions, fractional noises and applications, SIAM Review 10(4) (1968) 422-437.
9. Marcus, M. B. and Rosiński, J.: Continuity and boundedness of infinitely divisible processes: a Poisson point process approach, Journal of Theoretical Probability 18(1) (2005) 109-160.
10. Marquardt, T.: Fractional Lévy processes with an application to long memory moving average processes, Bernoulli 12(6) (2006) 1099-1126.
11. Nualart, D.: Malliavin calculus and its applications, American Mathematical Society, 2009.
12. Protter, P. E.: Stochastic integration and differential equations, Springer-Verlag, Berlin Heidelberg, 2004.
13. Samorodnitsky, G. and Taqqu, M. S.: Stable non-Gaussian random processes, Chapman & Hall, 1994.
14. Sato, K.: Lévy processes and infinitely divisible distributions, Cambridge University Press, 1999.
15. Takashima, K.: Sample path properties of ergodic self-similar processes, Osaka Journal of Mathematics 26(1) (1989) 159-189.
Leonid Mytnik: Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa, 3200, Israel
Eyal Neuman: Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa, 3200, Israel
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 379-402
www.serialspublications.com
STOCHASTIC CALCULUS FOR GAUSSIAN PROCESSES
AND APPLICATION TO HITTING TIMES
PEDRO LEI AND DAVID NUALART*
Abstract. In this paper we establish a change-of-variable formula for a class of Gaussian processes with a covariance function satisfying minimal regularity and integrability conditions. The existence of the local time and a version of Tanaka's formula are derived. These results are applied to a general class of self-similar processes that includes the bifractional Brownian motion. On the other hand, we establish a comparison result on the Laplace transform of the hitting time for a fractional Brownian motion with Hurst parameter $H < \frac{1}{2}$.
1. Introduction
There has been a recent interest in establishing change-of-variable formulas for a general class of Gaussian processes which are not semimartingales, using techniques of Malliavin calculus. The basic example of such a process is the fractional Brownian motion, and, since the pioneering work by Decreusefond and Üstünel [8], different versions of the Itô formula have been established (see the recent monograph by Biagini, Hu, Øksendal and Zhang [4] and the references therein).
In [1] the authors have considered the case of a Gaussian Volterra process of the form $X_t = \int_0^t K(t,s)\, dW_s$, where $W$ is a Wiener process and $K(t,s)$ is a square integrable kernel satisfying some regularity and integrability conditions, and they have proved a change-of-variable formula for a class of processes which includes the fractional Brownian motion with Hurst parameter $H > \frac14$. A more intrinsic approach based on the covariance function (instead of the kernel $K$) has been developed by Cheridito and Nualart in [5] for the fractional Brownian motion. In this paper an extended divergence operator is introduced in order to establish an Itô formula in the case of an arbitrary Hurst parameter $H \in (0,1)$. In [13], Kruk, Russo and Tudor have developed a stochastic calculus for a continuous Gaussian process $X = \{X_t,\ t \in [0,T]\}$ with covariance function $R(s,t) = E(X_t X_s)$ which has a bounded planar variation. This corresponds to the case of the fractional Brownian motion with Hurst parameter $H \ge \frac12$. In [12] Kruk and Russo have extended the stochastic calculus for the Skorohod integral to the case of Gaussian processes with a singular covariance, which includes the case of the fractional Brownian motion with Hurst parameter $H < \frac12$. The approach of [12] based on
Received 2012-1-29; Communicated by Hui-Hsiung Kuo.
2000 Mathematics Subject Classification. Primary 60H07, 60G15; Secondary 60G18.
Key words and phrases. Skorohod integral, Itô's formula, local time, Tanaka's formula, self-similar processes, fractional Brownian motion, hitting time.
* D. Nualart is supported by the NSF grant DMS-1208625.
the duality relationship of Malliavin calculus and the introduction of an extended domain for the divergence operator is related to the method used in the present paper, although there are clear differences in the notation and basic assumptions. In [15], Mocioalca and Viens have constructed the Skorohod integral and developed a stochastic calculus for Gaussian processes having a covariance structure of the form $E[|B_t - B_s|^2] \sim \gamma^2(|t-s|)$, where $\gamma$ satisfies some minimal regularity conditions. In particular, the authors have been able to consider processes with a logarithmic modulus of continuity, and even processes which are not continuous.
The purpose of this paper is to extend the methodology introduced by Cheridito and Nualart in [5] to the case of a general Gaussian process whose covariance function $R$ is absolutely continuous in one variable and whose derivative satisfies an appropriate integrability condition, without assuming that $R$ has bounded planar variation. The main result is a general Itô formula formulated in terms of the extended divergence operator, proved in Section 3. As an application we establish the existence of a local time in $L^2$ and a version of Tanaka's formula in Section 4. In Section 5 the results of the previous sections are applied to the case of a general class of self-similar processes that includes the bifractional Brownian motion with parameters $H \in (0,1)$ and $K \in (0,1]$ and the extended bifractional Brownian motion with parameters $H \in (0,1)$ and $K \in (1,2)$ such that $HK \in (0,1)$. Finally, using the stochastic calculus developed in Section 3, we have been able, in Section 6, to generalize the results by Decreusefond and Nualart (see [7]) on the distribution of the hitting time to the case of a fractional Brownian motion with Hurst parameter $H < \frac12$. More precisely, we prove that if the Hurst parameter is less than $\frac12$, then the hitting time $\tau_a$, for $a > 0$, satisfies $E(\exp(-\alpha \tau_a^{2H})) \ge e^{-a\sqrt{2\alpha}}$ for any $\alpha > 0$.
2. Preliminaries
Let $X = \{X_t,\ t \in [0,T]\}$ be a continuous Gaussian process with zero mean and covariance function $R(s,t) = E(X_t X_s)$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$. For the sake of simplicity we will assume that $X_0 = 0$. Consider the following condition on the covariance function:
(H1) For all $t \in [0,T]$, the map $s \mapsto R(s,t)$ is absolutely continuous on $[0,T]$, and for some $\alpha > 1$,
$$\sup_{0 \le t \le T} \int_0^T \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds < \infty.$$
Our aim is to develop a stochastic calculus for the Gaussian process X, assuming
condition (H1). In this section we introduce some preliminaries.
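Condition (H1) can be probed numerically for a concrete covariance. A minimal sketch, assuming the fractional Brownian covariance $R(s,t) = \frac12(s^{2H} + t^{2H} - |t-s|^{2H})$ with the illustrative choice $H = 0.7$, for which $\partial R/\partial s$ is available in closed form:

```python
# Numerical check of (H1): R(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H})/2, so
# dR/ds = H * (s^{2H-1} + sign(t-s)*|t-s|^{2H-1}); the supremum over t of the
# integral in (H1) is approximated by a midpoint Riemann sum over a grid.
H, T, alpha = 0.7, 1.0, 2.0   # illustrative parameters
n = 2000

def dRds(s, t):
    sgn = 1.0 if t > s else -1.0
    return H * (s ** (2*H - 1) + sgn * abs(t - s) ** (2*H - 1))

def integral(t):
    h = T / n
    # midpoints avoid the (integrable) endpoint behaviour at s = 0 and s = t
    return sum(abs(dRds((i + 0.5) * h, t)) ** alpha for i in range(n)) * h

sup_val = max(integral((j + 1) * T / 50) for j in range(50))
print(sup_val)   # finite, consistent with (H1) for this alpha
```

For $H > \frac12$ the derivative stays bounded on $[0,T]^2$, so any $\alpha > 1$ works here; the interesting constraints on $\alpha$ appear for $H < \frac12$, as discussed in Section 3.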
Denote by $\mathcal{E}$ the space of step functions on $[0,T]$. We define on $\mathcal{E}$ the scalar product
$$\langle 1_{[0,t]}, 1_{[0,s]} \rangle_{\mathcal{H}} = R(t,s).$$
Let $\mathcal{H}$ be the Hilbert space defined as the closure of $\mathcal{E}$ with respect to this scalar product. The mapping $1_{[0,t]} \mapsto X_t$ can be extended to a linear isometry from $\mathcal{H}$ into the Gaussian subspace of $L^2(\Omega)$ spanned by the random variables $\{X_t,\ t \in [0,T]\}$. This Gaussian subspace is usually called the first Wiener chaos of the Gaussian process $X$. The image of an element $\varphi \in \mathcal{H}$ by this isometry will be a Gaussian random variable denoted by $X(\varphi)$. For example, if $X = B$ is a standard Brownian motion, then the Hilbert space $\mathcal{H}$ is isometric to $L^2([0,T])$, and $B(\varphi)$ is the Wiener integral $\int_0^T \varphi_t\, dB_t$. A natural question is whether the elements of the space $\mathcal{H}$ can be identified with real-valued functions on $[0,T]$; in this case, $X(\varphi)$ will be interpreted as the stochastic integral of the function $\varphi$ with respect to the process $X$. For instance, in the case of the fractional Brownian motion with Hurst parameter $H \in (0,1)$, this question has been discussed in detail by Pipiras and Taqqu in the references [18, 19].
We are interested in extending the inner product $\langle \varphi, 1_{[0,t]} \rangle_{\mathcal{H}}$ to elements $\varphi$ that are not necessarily in the space $\mathcal{H}$. Suppose first that $\varphi \in \mathcal{E}$ has the form
$$\varphi = \sum_{i=1}^n a_i 1_{[0,t_i]},$$
where $0 \le t_i \le T$. Then the inner product $\langle \varphi, 1_{[0,t]} \rangle_{\mathcal{H}}$ can be expressed as follows:
$$\langle \varphi, 1_{[0,t]} \rangle_{\mathcal{H}} = \sum_{i=1}^n a_i R(t_i, t) = \sum_{i=1}^n a_i \int_0^{t_i} \frac{\partial R}{\partial s}(s,t)\, ds = \int_0^T \varphi(s)\, \frac{\partial R}{\partial s}(s,t)\, ds. \tag{2.1}$$
If $\beta$ is the conjugate of $\alpha$, i.e. $\frac{1}{\alpha} + \frac{1}{\beta} = 1$, applying Hölder's inequality we obtain
$$\bigg| \int_0^T \varphi(s)\, \frac{\partial R}{\partial s}(s,t)\, ds \bigg| \le \|\varphi\|_{\beta}\, \bigg( \sup_{0 \le t \le T} \int_0^T \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds \bigg)^{\frac{1}{\alpha}}.$$
Therefore, if (H1) holds, we can extend the inner product $\langle \varphi, 1_{[0,t]} \rangle_{\mathcal{H}}$ to functions $\varphi \in L^{\beta}([0,T])$ by means of formula (2.1), and the mapping $\varphi \mapsto \langle \varphi, 1_{[0,t]} \rangle_{\mathcal{H}}$ is continuous in $L^{\beta}([0,T])$. This leads to the following definition.
Definition 2.1. Given $\varphi \in L^{\beta}([0,T])$ and $\psi = \sum_{j=1}^m b_j 1_{[0,t_j]} \in \mathcal{E}$, we set
$$\langle \varphi, \psi \rangle_{\mathcal{H}} = \sum_{j=1}^m b_j \int_0^T \varphi(s)\, \frac{\partial R}{\partial s}(s, t_j)\, ds.$$
In particular, this implies that for any $\varphi$ and $\psi$ as in the above definition,
$$\langle \varphi 1_{[0,t]}, \psi \rangle_{\mathcal{H}} = \int_0^t \varphi(s)\, d\langle 1_{[0,s]}, \psi \rangle_{\mathcal{H}}. \tag{2.2}$$
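Formula (2.1) can be sanity-checked numerically: for the step function $\varphi = 1_{[0,u]}$ the extended inner product must reduce to $R(u,t)$. A sketch, again assuming the fractional Brownian covariance with the illustrative value $H = 0.7$:

```python
# Check of (2.1): for phi = 1_[0,u], int_0^T phi(s) dR/ds(s,t) ds = R(u,t).
H = 0.7   # illustrative Hurst parameter

def R(s, t):
    return 0.5 * (s ** (2*H) + t ** (2*H) - abs(t - s) ** (2*H))

def dRds(s, t):
    sgn = 1.0 if t > s else -1.0
    return H * (s ** (2*H - 1) + sgn * abs(t - s) ** (2*H - 1))

def inner(u, t, n=50000):
    # midpoint rule for int_0^u dR/ds(s,t) ds
    h = u / n
    return sum(dRds((i + 0.5) * h, t) for i in range(n)) * h

u, t = 0.3, 0.8
print(inner(u, t), R(u, t))   # the two values agree up to quadrature error
```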
3. Stochastic Calculus for the Skorohod Integral
Following the argument of Alós, Mazet and Nualart in [1], in this section we establish a version of Itô's formula. In order to do this we first discuss the extended divergence operator for a continuous Gaussian stochastic process $X = \{X_t,\ t \in [0,T]\}$ with mean zero and covariance function $R(s,t) = E(X_t X_s)$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$, satisfying condition (H1), and such that $X_0 = 0$. The Gaussian family $\{X(\varphi),\ \varphi \in \mathcal{H}\}$ introduced in Section 2 is an isonormal Gaussian process associated with the Hilbert space $\mathcal{H}$, and we can construct the Malliavin calculus with respect to this process (see [17] and the references therein for a more complete presentation of this theory).
We denote by $\mathcal{S}$ the space of smooth and cylindrical random variables of the form
$$F = f(X(\varphi_1), \ldots, X(\varphi_n)), \tag{3.1}$$
where $f \in C_b^\infty(\mathbb{R}^n)$ ($f$ is an infinitely differentiable function which is bounded together with all its partial derivatives) and, for $1 \le i \le n$, $\varphi_i \in \mathcal{E}$. The derivative operator, denoted by $D$, is defined by
$$DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(X(\varphi_1), \ldots, X(\varphi_n))\, \varphi_i,$$
if $F \in \mathcal{S}$ is given by (3.1). In this sense, $DF$ is an $\mathcal{H}$-valued random variable. For any real number $p \ge 1$ we introduce the seminorm
$$\|F\|_{1,p} = \big( E(|F|^p) + E(\|DF\|_{\mathcal{H}}^p) \big)^{\frac1p},$$
and we denote by $\mathbb{D}^{1,p}$ the closure of $\mathcal{S}$ with respect to this seminorm. More generally, for any integer $k \ge 1$, we denote by $D^k$ the $k$th derivative operator, and by $\mathbb{D}^{k,p}$ the closure of $\mathcal{S}$ with respect to the seminorm
$$\|F\|_{k,p} = \bigg( E(|F|^p) + \sum_{j=1}^k E\big(\|D^j F\|_{\mathcal{H}^{\otimes j}}^p\big) \bigg)^{\frac1p}.$$
The divergence operator $\delta$ is introduced as the adjoint of the derivative operator. More precisely, an element $u \in L^2(\Omega; \mathcal{H})$ belongs to the domain of $\delta$ if there exists a constant $c_u$ depending on $u$ such that
$$|E(\langle u, DF \rangle_{\mathcal{H}})| \le c_u \|F\|_2,$$
for any smooth random variable $F \in \mathcal{S}$. For any $u \in \mathrm{Dom}\,\delta$, $\delta(u) \in L^2(\Omega)$ is then defined by the duality relationship $E(F\delta(u)) = E(\langle u, DF \rangle_{\mathcal{H}})$ for any $F \in \mathbb{D}^{1,2}$, and in the above inequality we can take $c_u = \|\delta(u)\|_2$. The space $\mathbb{D}^{1,2}(\mathcal{H})$ is included in the domain of the divergence.
If the process $X$ is a Brownian motion, then $\mathcal{H}$ is $L^2([0,T])$ and $\delta$ is an extension of the Itô stochastic integral. Motivated by this example, we would like to interpret $\delta(u)$ as a stochastic integral for $u$ in the domain of the divergence operator. However, it may happen that the process $X$ itself does not belong to $L^2(\Omega; \mathcal{H})$. For example, this is true if $X$ is a fractional Brownian motion with Hurst parameter $H \le \frac14$ (see [5]). For this reason, we need to introduce an extended domain of the divergence operator.
Definition 3.1. We say that a stochastic process $u \in L^1(\Omega; L^{\beta}([0,T]))$ belongs to the extended domain of the divergence $\mathrm{Dom}^E \delta$ if
$$|E(\langle u, DF \rangle_{\mathcal{H}})| \le c_u \|F\|_2,$$
for any smooth random variable $F \in \mathcal{S}$, where $c_u$ is some constant depending on $u$. In this case, $\delta(u) \in L^2(\Omega)$ is defined by the duality relationship
$$E(F\delta(u)) = E(\langle u, DF \rangle_{\mathcal{H}}),$$
for any $F \in \mathcal{S}$.
Note that the pairing $\langle u, DF \rangle_{\mathcal{H}}$ is well defined because of Definition 2.1.
In general, the domains $\mathrm{Dom}\,\delta$ and $\mathrm{Dom}^E \delta$ are not comparable, because $u \in \mathrm{Dom}\,\delta$ takes values in the abstract Hilbert space $\mathcal{H}$ and $u \in \mathrm{Dom}^E \delta$ takes values in $L^{\beta}([0,T])$. In the particular case of the fractional Brownian motion with Hurst parameter $H < \frac12$ we have (see [7])
$$\mathcal{H} = I_{T^-}^{\frac12 - H}(L^2) \subset L^{\frac1H}([0,T]),$$
and assumption (H1) holds for any $\alpha < \frac{1}{1-2H}$. As a consequence, if $\beta$ is the conjugate of $\alpha$, then $\beta > \frac{1}{2H}$, so $\mathcal{H} \subset L^{\beta}([0,T])$ and $\mathrm{Dom}\,\delta \subset \mathrm{Dom}^E \delta$.
If $u$ belongs to $\mathrm{Dom}^E \delta$, we will make use of the notation
$$\delta(u) = \int_0^T u_s\, \delta X_s,$$
and we will write $\int_0^t u_s\, \delta X_s$ for $\delta(u 1_{[0,t]})$, provided $u 1_{[0,t]} \in \mathrm{Dom}^E \delta$.
We are going to prove a change-of-variable formula for $F(t, X_t)$ involving the extended divergence operator. Let $F(t,x)$ be a function in $C^{1,2}([0,T] \times \mathbb{R})$ (the partial derivatives $\frac{\partial F}{\partial x}$, $\frac{\partial^2 F}{\partial x^2}$ and $\frac{\partial F}{\partial t}$ exist and are continuous). Consider the following growth condition.
(H2) There exist positive constants $c$ and $\lambda < \frac14 (\sup_{0 \le t \le T} R(t,t))^{-1}$ such that
$$\sup_{0 \le t \le T} \Big( |F(t,x)| + \Big|\frac{\partial F}{\partial x}(t,x)\Big| + \Big|\frac{\partial^2 F}{\partial x^2}(t,x)\Big| + \Big|\frac{\partial F}{\partial t}(t,x)\Big| \Big) \le c \exp(\lambda |x|^2). \tag{3.2}$$
Using the integrability properties of the supremum of a Gaussian process, condition (3.2) implies
$$E\Big( \sup_{0 \le t \le T} |F(t, X_t)|^2 \Big) \le c^2 E\Big( \exp\big(2\lambda \sup_{0 \le t \le T} |X_t|^2\big) \Big) < \infty,$$
and the same property holds for the partial derivatives $\frac{\partial F}{\partial x}$, $\frac{\partial^2 F}{\partial x^2}$ and $\frac{\partial F}{\partial t}$. We need the following additional condition on the covariance function.
(H3) The function $R_t := R(t,t)$ has bounded variation on $[0,T]$.
Theorem 3.2. Let $F$ be a function in $C^{1,2}([0,T] \times \mathbb{R})$ satisfying (H2). Suppose that $X = \{X_t,\ t \in [0,T]\}$ is a zero mean continuous Gaussian process with covariance function $R(t,s)$, such that $X(0) = 0$, satisfying (H1) and (H3). Then for each $t \in [0,T]$ the process $\{\frac{\partial F}{\partial x}(s, X_s)\, 1_{[0,t]}(s),\ 0 \le s \le T\}$ belongs to the extended domain of the divergence $\mathrm{Dom}^E \delta$ and the following holds:
$$F(t, X_t) = F(0,0) + \int_0^t \frac{\partial F}{\partial s}(s, X_s)\, ds + \int_0^t \frac{\partial F}{\partial x}(s, X_s)\, \delta X_s + \frac12 \int_0^t \frac{\partial^2 F}{\partial x^2}(s, X_s)\, dR_s. \tag{3.3}$$
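Taking expectations in (3.3) removes the divergence term (it has zero mean), leaving $E F(t,X_t) = F(0,0) + \int_0^t E \frac{\partial F}{\partial s}\, ds + \frac12 \int_0^t E \frac{\partial^2 F}{\partial x^2}\, dR_s$. A quick numerical sanity check of this consequence, assuming the fBm variance $R_t = t^{2H}$ and the test function $F(t,x) = \cos x$ (so that $E\cos X_t = e^{-R_t/2}$); both parameter values are arbitrary illustrations:

```python
from math import exp

# Expectation form of the Ito formula (3.3) with F(t,x) = cos(x) and a
# centered Gaussian process with variance R_t = t^{2H}:
#   exp(-R_t/2) = 1 - (1/2) * int_0^t exp(-R_s/2) dR_s.
H, t = 0.3, 1.7
R = lambda s: s ** (2 * H)

n = 100000
h = t / n
# midpoint Riemann-Stieltjes sum for int_0^t exp(-R_s/2) dR_s
integral = sum(exp(-R((i + 0.5) * h) / 2) * (R((i + 1) * h) - R(i * h))
               for i in range(n))
lhs = exp(-R(t) / 2)
rhs = 1 - 0.5 * integral
print(lhs, rhs)   # the two sides agree up to discretization error
```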
Proof. Suppose that $G$ is a random variable of the form $G = I_n(h^{\otimes n})$, where $I_n$ denotes the multiple stochastic integral of order $n$ with respect to $X$ and $h$ is a step function on $[0,T]$. The set of all these random variables forms a total subset of $L^2(\Omega)$. Taking into account Definition 3.1 of the extended divergence operator, it is enough to show that for any such $G$,
$$E(G F(t,X_t)) - E(G F(0,0)) - \int_0^t E\Big(G \frac{\partial F}{\partial s}(s,X_s)\Big)\, ds - \frac12 \int_0^t E\Big(G \frac{\partial^2 F}{\partial x^2}(s,X_s)\Big)\, dR_s = E\Big( \Big\langle DG,\ 1_{[0,t]}(\cdot)\, \frac{\partial F}{\partial x}(\cdot, X_\cdot) \Big\rangle_{\mathcal{H}} \Big). \tag{3.4}$$
First we reduce the problem to the case where the function $F$ is smooth in $x$. For this purpose we replace $F$ by
$$F_k(t,x) = k \int_{-1}^{1} F(t, x-y)\, \varepsilon(ky)\, dy,$$
where $\varepsilon$ is a nonnegative smooth function supported on $[-1,1]$ such that $\int_{-1}^{1} \varepsilon(y)\, dy = 1$. The functions $F_k$ are infinitely differentiable in $x$ and their derivatives satisfy the growth condition (3.2) with some constants $c_k$ and $\lambda_k$.
Suppose first that $G$ is a constant, that is, $n = 0$. The right-hand side of Equality (3.4) vanishes. On the other hand, we can write
$$E(G F_k(t,X_t)) = G \int_{\mathbb{R}} F_k(t,x)\, p(R_t, x)\, dx,$$
where $p(\sigma, x) = (2\pi\sigma)^{-1/2} \exp(-x^2/2\sigma)$. We know that $\frac{\partial p}{\partial \sigma} = \frac12 \frac{\partial^2 p}{\partial x^2}$. As a consequence, integrating by parts, we obtain
$$\begin{aligned} E(G F_k(t,X_t)) - G F(0,0) - G \int_0^t \int_{\mathbb{R}} \frac{\partial F_k}{\partial s}(s,x)\, p(R_s,x)\, dx\, ds &= \frac12\, G \int_0^t \int_{\mathbb{R}} F_k(s,x)\, \frac{\partial^2 p}{\partial x^2}(R_s, x)\, dx\, dR_s \\ &= \frac12\, G \int_0^t \int_{\mathbb{R}} \frac{\partial^2 F_k}{\partial x^2}(s,x)\, p(R_s, x)\, dx\, dR_s \\ &= \frac12 \int_0^t E\Big( G\, \frac{\partial^2 F_k}{\partial x^2}(s, X_s) \Big)\, dR_s, \end{aligned}$$
which completes the proof of (3.4) when $G$ is constant.
Suppose now that $n \ge 1$. In this case $E(G) = 0$. On the other hand, using the fact that the multiple stochastic integral $I_n$ is the adjoint of the iterated derivative operator $D^n$, we obtain
$$E(G F_k(t,X_t)) = E(I_n(h^{\otimes n}) F_k(t,X_t)) = E\Big( \Big\langle h^{\otimes n},\ \frac{\partial^n F_k}{\partial x^n}(t,X_t)\, 1_{[0,t]}^{\otimes n} \Big\rangle_{\mathcal{H}^{\otimes n}} \Big) = E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big)\, \langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n. \tag{3.5}$$
Note that $E(G F_k(t,X_t))$ is the product of two factors. Therefore, its differential will be expressed as the sum of two terms:
$$d\big( E(G F_k(t,X_t)) \big) = \langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n\, d\Big[ E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big) \Big] + E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big)\, d\langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n. \tag{3.6}$$
Using again the integration by parts formula and the fact that the Gaussian density satisfies the heat equation, we obtain
$$\begin{aligned} d\, E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big) &= d \int_{\mathbb{R}} \frac{\partial^n F_k}{\partial x^n}(t,x)\, p(R_t,x)\, dx \\ &= \int_{\mathbb{R}} \frac{\partial^{n+1} F_k}{\partial t \partial x^n}(t,x)\, p(R_t,x)\, dx\, dt + \frac12 \int_{\mathbb{R}} \frac{\partial^n F_k}{\partial x^n}(t,x)\, \frac{\partial^2 p}{\partial x^2}(R_t,x)\, dx\, dR_t \\ &= \int_{\mathbb{R}} \frac{\partial^{n+1} F_k}{\partial t \partial x^n}(t,x)\, p(R_t,x)\, dx\, dt + \frac12 \int_{\mathbb{R}} \frac{\partial^{n+2} F_k}{\partial x^{n+2}}(t,x)\, p(R_t,x)\, dx\, dR_t \\ &= E\Big( \frac{\partial^{n+1} F_k}{\partial t \partial x^n}(t,X_t) \Big)\, dt + \frac12\, E\Big( \frac{\partial^{n+2} F_k}{\partial x^{n+2}}(t,X_t) \Big)\, dR_t. \end{aligned} \tag{3.7}$$
Equation (3.5) applied to $\frac{\partial^2 F_k}{\partial x^2}$ and to $\frac{\partial F_k}{\partial t}$ yields
$$E\Big( G\, \frac{\partial^2 F_k}{\partial x^2}(t,X_t) \Big) = E\Big( \frac{\partial^{n+2} F_k}{\partial x^{n+2}}(t,X_t) \Big)\, \langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n \tag{3.8}$$
and
$$E\Big( G\, \frac{\partial F_k}{\partial t}(t,X_t) \Big) = E\Big( \frac{\partial^{n+1} F_k}{\partial t \partial x^n}(t,X_t) \Big)\, \langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n, \tag{3.9}$$
respectively. Then, substituting (3.8), (3.9) and (3.7) into the first summand on the right-hand side of (3.6), we obtain
$$d\big( E(G F_k(t,X_t)) \big) = E\Big( G\, \frac{\partial F_k}{\partial t}(t,X_t) \Big)\, dt + \frac12\, E\Big( G\, \frac{\partial^2 F_k}{\partial x^2}(t,X_t) \Big)\, dR_t + E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big)\, d\langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^n.$$
Therefore, to show (3.4), it only remains to check that
$$E\Big( \Big\langle DG,\ 1_{[0,t]}(\cdot)\, \frac{\partial F_k}{\partial x}(\cdot, X_\cdot) \Big\rangle_{\mathcal{H}} \Big) = n \int_0^t E\Big( \frac{\partial^n F_k}{\partial x^n}(s,X_s) \Big)\, \langle h, 1_{[0,s]} \rangle_{\mathcal{H}}^{n-1}\, d\langle h, 1_{[0,s]} \rangle_{\mathcal{H}}. \tag{3.10}$$
Using the fact that $DG = n I_{n-1}(h^{\otimes(n-1)})\, h$, we get
$$E\Big( \Big\langle DG,\ 1_{[0,t]}(\cdot)\, \frac{\partial F_k}{\partial x}(\cdot, X_\cdot) \Big\rangle_{\mathcal{H}} \Big) = n\, \Big\langle h,\ 1_{[0,t]}(\cdot)\, E\Big( I_{n-1}(h^{\otimes(n-1)})\, \frac{\partial F_k}{\partial x}(\cdot, X_\cdot) \Big) \Big\rangle_{\mathcal{H}}.$$
Then, taking into account (2.2), we can write
$$d\Big( E\Big( \Big\langle DG,\ 1_{[0,t]}(\cdot)\, \frac{\partial F_k}{\partial x}(\cdot, X_\cdot) \Big\rangle_{\mathcal{H}} \Big) \Big) = n\, E\Big( I_{n-1}(h^{\otimes(n-1)})\, \frac{\partial F_k}{\partial x}(t,X_t) \Big)\, d\langle h, 1_{[0,t]} \rangle_{\mathcal{H}}.$$
Finally, using again that $I_{n-1}$ is the adjoint of the derivative operator yields
$$E\Big( I_{n-1}(h^{\otimes(n-1)})\, \frac{\partial F_k}{\partial x}(t,X_t) \Big) = E\Big( \frac{\partial^n F_k}{\partial x^n}(t,X_t) \Big)\, \langle h, 1_{[0,t]} \rangle_{\mathcal{H}}^{n-1},$$
which allows us to complete the proof for the function Fk . Finally, it suffices to
let k tend to infinity.
4. Local Time
In this section, we will apply the Itô formula obtained in Section 3 to derive a version of Tanaka's formula involving the local time of the process $X$. In order to do this we first discuss the existence of the local time for a continuous Gaussian stochastic process $X = \{X_t,\ t \in [0,T]\}$ with mean zero, defined on a complete probability space $(\Omega, \mathcal{F}, P)$, with covariance function $R(s,t)$. We impose the following additional condition, which is stronger than (H3):
(H3a) The function $R_t = R(t,t)$ is increasing on $[0,T]$, and $R_t > 0$ for any $t > 0$.
The local time $L_t(x)$ of the process $X$ (with respect to the measure induced by the variance function) is defined, if it exists, as the density of the occupation measure
$$m_t(B) = \int_0^t 1_B(X_s)\, dR_s, \quad B \in \mathcal{B}(\mathbb{R}),$$
with respect to the Lebesgue measure. That is, for any bounded and measurable function $g$ we have the occupation formula
$$\int_{\mathbb{R}} g(x) L_t(x)\, dx = \int_0^t g(X_s)\, dR_s.$$
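The occupation formula can be illustrated on a simulated path: the histogram of path values, weighted by the increments of $R_s$, approximates the occupation density, and both sides of the formula should then agree. A rough Monte Carlo sketch for standard Brownian motion ($R_s = s$; the step count, bin width and seed are arbitrary choices):

```python
import random, math

# Occupation-measure sketch for Brownian motion (R_s = s).
random.seed(0)
t, n = 1.0, 100000
dt = t / n
x, path = 0.0, []
for _ in range(n):
    x += random.gauss(0.0, math.sqrt(dt))
    path.append(x)

g = lambda y: y * y
direct = sum(g(x) for x in path) * dt            # int_0^t g(X_s) dR_s

# occupation measure binned over [-5, 5] (the path essentially never leaves it)
bins, lo, hi = 1000, -5.0, 5.0
w = (hi - lo) / bins
mass = [0.0] * bins
for x in path:
    j = int((x - lo) / w)
    if 0 <= j < bins:
        mass[j] += dt
binned = sum(g(lo + (j + 0.5) * w) * mass[j] for j in range(bins))

print(direct, binned)   # close, up to binning and discretization error
```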
Following the computations in [6] based on Wiener chaos expansions, we can give sufficient conditions for the local time $L_t(x)$ to exist and to belong to $L^2(\Omega)$ for any fixed $t \in [0,T]$ and $x \in \mathbb{R}$. We denote by $H_n$ the $n$th Hermite polynomial, defined for $n \ge 1$ by
$$H_n(x) = \frac{(-1)^n}{n!}\, e^{\frac{x^2}{2}}\, \frac{d^n}{dx^n}\big( e^{-\frac{x^2}{2}} \big),$$
and $H_0 = 1$. For $s, t \ne 0$ set $\rho(s,t) = \frac{R(s,t)}{\sqrt{R_s R_t}}$. For all $n \ge 1$ and $t \in [0,T]$ we define
$$\alpha_n(t) = \int_0^t \int_0^u \frac{|\rho(u,v)|^n}{\sqrt{n}}\, \frac{dR_v\, dR_u}{\sqrt{R_u R_v}},$$
and we introduce the following condition on the covariance function $R(s,t)$:
(H4) $\sum_{n=1}^{\infty} \alpha_n(T) < \infty$.
The following proposition is an extension of the result on the existence and Wiener chaos expansion of the local time for the fractional Brownian motion proved by Coutin, Nualart and Tudor in [6]. Recall that for all $\varepsilon > 0$ and $x \in \mathbb{R}$, $p(\varepsilon, x) = (2\pi\varepsilon)^{-1/2} \exp(-x^2/2\varepsilon)$.
Proposition 4.1. Suppose that $X = \{X_t,\ t \in [0,T]\}$ is a zero mean continuous Gaussian process with covariance function $R(t,s)$, satisfying conditions (H3a) and (H4) and with $X(0) = 0$. Then, for each $a \in \mathbb{R}$ and $t \in [0,T]$, the random variables
$$\int_0^t p(\varepsilon, X_s - a)\, dR_s$$
converge in $L^2(\Omega)$ to the local time $L_t(a)$, as $\varepsilon$ tends to zero. Furthermore, the local time $L_t(a)$ has the following Wiener chaos expansion:
$$L_t(a) = \sum_{n=0}^{\infty} \int_0^t R_s^{-\frac n2}\, p(R_s, a)\, H_n\Big( \frac{a}{\sqrt{R_s}} \Big)\, I_n(1_{[0,s]}^{\otimes n})\, dR_s. \tag{4.1}$$
Proof. Applying Stroock's formula, we can compute the Wiener chaos expansion of the random variable $p(\varepsilon, X_s - a)$ for any $s > 0$ as it has been done in [6], and we obtain
$$p(\varepsilon, X_s - a) = \sum_{n=0}^{\infty} \beta_{n,\varepsilon}(s)\, I_n(1_{[0,s]}^{\otimes n}), \tag{4.2}$$
where
$$\beta_{n,\varepsilon}(s) = (R_s + \varepsilon)^{-\frac n2}\, p(R_s + \varepsilon, a)\, H_n\Big( \frac{a}{\sqrt{R_s + \varepsilon}} \Big). \tag{4.3}$$
From (4.2), integrating with respect to the measure $dR_s$, we deduce the Wiener chaos expansion
$$\int_0^t p(\varepsilon, X_s - a)\, dR_s = \sum_{n=0}^{\infty} \int_0^t \beta_{n,\varepsilon}(s)\, I_n(1_{[0,s]}^{\otimes n})\, dR_s. \tag{4.4}$$
We need to show that this expression converges in $L^2(\Omega)$ to the right-hand side of Equation (4.1), denoted by $\Lambda_t(a)$, as $\varepsilon$ tends to zero. For every $n$ and $s$ we have $\lim_{\varepsilon \to 0} \beta_{n,\varepsilon}(s) = \beta_n(s)$, where
$$\beta_n(s) = R_s^{-\frac n2}\, p(R_s, a)\, H_n\Big( \frac{a}{\sqrt{R_s}} \Big).$$
We claim that
$$|\beta_{n,\varepsilon}(s)| \le c\, \frac{2^{n/2}}{n!}\, \Gamma\Big( \frac{n+1}{2} \Big)\, R_s^{-\frac{n+1}{2}}. \tag{4.5}$$
In fact, from the properties of Hermite polynomials it follows that
$$H_n(y)\, e^{-y^2/2} = (-1)^{[\frac n2]}\, \frac{2^{n/2}}{n!\sqrt{\pi}}\, 2 \int_0^{\infty} s^n e^{-s^2} g(ys\sqrt{2})\, ds,$$
where $g(r) = \cos r$ for $n$ even, and $g(r) = \sin r$ for $n$ odd. Thus, $|g|$ is dominated by $1$, and this implies
$$\big| H_n(y)\, e^{-y^2/2} \big| \le c\, \frac{2^{n/2}}{n!}\, \Gamma\Big( \frac{n+1}{2} \Big).$$
Substituting this estimate into (4.3) yields (4.5). The estimate (4.5) implies that, for any $n \ge 1$, the integral $\int_0^t \beta_n(s)\, I_n(1_{[0,s]}^{\otimes n})\, dR_s$ is well defined as a random variable in $L^2(\Omega)$, and it is the limit in $L^2(\Omega)$ of $\int_0^t \beta_{n,\varepsilon}(s)\, I_n(1_{[0,s]}^{\otimes n})\, dR_s$ as $\varepsilon$ tends to zero. In fact, (4.5) implies that
$$\Big\| \int_0^t \beta_n(s)\, I_n(1_{[0,s]}^{\otimes n})\, dR_s \Big\|_2 \le \int_0^t |\beta_n(s)|\, \big\| I_n(1_{[0,s]}^{\otimes n}) \big\|_2\, dR_s \le c\, \frac{2^{n/2}}{\sqrt{n!}}\, \Gamma\Big( \frac{n+1}{2} \Big) \int_0^t R_s^{-\frac{n+1}{2}}\, R_s^{\frac n2}\, dR_s \le c\, \frac{2^{n/2}}{\sqrt{n!}}\, \Gamma\Big( \frac{n+1}{2} \Big)\, \sqrt{R_t}.$$
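The Hermite bound behind (4.5) is easy to probe numerically. A sketch, where the constant $c = 1/\sqrt{\pi}$ is what the integral representation above gives, and the recursion $(n+1)H_{n+1}(x) = xH_n(x) - H_{n-1}(x)$ follows from the classical probabilists' recurrence under the $1/n!$ normalization used here:

```python
from math import exp, gamma, pi, sqrt

# Check |H_n(y) e^{-y^2/2}| <= (1/sqrt(pi)) * 2^{n/2} * Gamma((n+1)/2) / n!
# for the 1/n!-normalized Hermite polynomials.
def hermite_values(y, nmax):
    vals = [1.0, y]                       # H_0 = 1, H_1(y) = y
    for n in range(1, nmax):
        vals.append((y * vals[n] - vals[n - 1]) / (n + 1))
    return vals

nmax = 15
worst = 0.0
for i in range(-400, 401):
    y = i / 40.0                          # grid on [-10, 10]
    h = hermite_values(y, nmax)
    for n in range(nmax + 1):
        bound = 2 ** (n / 2) * gamma((n + 1) / 2) / (gamma(n + 1) * sqrt(pi))
        worst = max(worst, abs(h[n]) * exp(-y * y / 2) / bound)

print(worst)   # stays at or below 1 up to rounding; equality holds e.g. at n = 0, y = 0
```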
t
For n = 0, ОІn,Оµ (s) = p(Rs + Оµ, a), and clearly 0 p(Rs + Оµ, a)dRs converges to
t
0 p(Rs , a)dRs as Оµ tends to zero. In the same way, using dominated convergence,
we can prove that
t
(ОІn,Оµ (s) в€’ ОІn (s)) In (1вЉ—n
[0,s] )dRs
lim
Оµв†’0
0
Set
t
О±n,Оµ = E
0
= 0.
2
2
ОІn,Оµ (s)In (1вЉ—n
[0,s] )dRs
.
To show the convergence in $L^2(\Omega)$ of the series (4.4) to the right-hand side of (4.1), it suffices to prove that $\sup_{\varepsilon} \sum_{n=1}^{\infty} \alpha_{n,\varepsilon} < \infty$. Using (4.5) and Stirling's formula, we have
$$\begin{aligned} \alpha_{n,\varepsilon} &= \int_0^t \int_0^t E\big( I_n(1_{[0,u]}^{\otimes n})\, I_n(1_{[0,v]}^{\otimes n}) \big)\, \beta_{n,\varepsilon}(u)\, \beta_{n,\varepsilon}(v)\, dR_v\, dR_u \\ &= 2\, n! \int_0^t \int_0^u R(u,v)^n\, \beta_{n,\varepsilon}(u)\, \beta_{n,\varepsilon}(v)\, dR_v\, dR_u \\ &\le c\, \frac{2^n}{n!}\, \Gamma\Big( \frac{n+1}{2} \Big)^2 \int_0^t \int_0^u |R(u,v)|^n\, (R_u R_v)^{-\frac{n+1}{2}}\, dR_v\, dR_u \\ &\le c \int_0^t \int_0^u \frac{|\rho(u,v)|^n}{\sqrt{n}}\, \frac{dR_v\, dR_u}{\sqrt{R_v R_u}} = c\, \alpha_n(t). \end{aligned}$$
Therefore, taking into account hypothesis (H4), we conclude that
$$\sup_{\varepsilon} \sum_{n=1}^{\infty} \alpha_{n,\varepsilon} \le c \sum_{n=1}^{\infty} \alpha_n(T) < \infty,$$
and this proves the convergence in $L^2(\Omega)$ of the series (4.4) to a limit denoted by $\Lambda_t(a)$.
Finally, we have to show that $\Lambda_t(a)$ is the local time $L_t(a)$. The above estimates are uniform in $a \in \mathbb{R}$. Therefore, we can deduce that the convergence of $\int_0^t p(\varepsilon, X_s - a)\, dR_s$ to $\Lambda_t(a)$ holds in $L^2(\Omega \times \mathbb{R}, P \times \mu)$, for any finite measure $\mu$. As a consequence, for any continuous function $g$ with compact support we have that
$$\int_{\mathbb{R}} \int_0^t p(\varepsilon, X_s - a)\, dR_s\, g(a)\, da$$
converges in $L^2(\Omega)$, as $\varepsilon$ tends to zero, to $\int_{\mathbb{R}} \Lambda_t(a)\, g(a)\, da$. Clearly, this sequence also converges to $\int_0^t g(X_s)\, dR_s$. Hence,
$$\int_{\mathbb{R}} \Lambda_t(a)\, g(a)\, da = \int_0^t g(X_s)\, dR_s,$$
which implies that $\Lambda_t(a)$ is a version of the local time $L_t(a)$.
Corollary 4.2. Condition (H4) holds if
$$\int_0^T \int_0^T \frac{1 - \ln(1 - |\rho(u,v)|)}{\sqrt{R_v R_u}\, \sqrt{1 - |\rho(u,v)|}}\, dR_v\, dR_u < \infty. \tag{4.6}$$
Proof. We can write
$$\sum_{n=1}^{\infty} \alpha_n(T) \le \frac12 \int_0^T \int_0^T \varphi(|\rho(u,v)|)\, \frac{dR_v\, dR_u}{\sqrt{R_v R_u}},$$
where $\varphi(x) = \sum_{n=1}^{\infty} \frac{x^n}{\sqrt{n}}$. If we define $g(x) = \varphi(x)\sqrt{1-x}$ for every $x \in [0,1)$, then
$$g(x) = \sum_{n=1}^{\infty} \frac{x^n}{\sqrt{n}}\, \sqrt{1-x} = \sum_{n(1-x)<1} \frac{x^n}{\sqrt{n}}\, \sqrt{1-x} + \sum_{n(1-x)\ge 1} \frac{x^n}{\sqrt{n}}\, \sqrt{1-x} \le \sum_{n(1-x)<1} \frac{x^n}{n} + \sum_{n=0}^{\infty} x^n (1-x) \le 1 - \ln(1-x),$$
and the result follows.
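The elementary bound $\varphi(x)\sqrt{1-x} \le 1 - \ln(1-x)$ used above is simple to verify numerically (the series is truncated, which only makes the left-hand side smaller):

```python
from math import log, sqrt

# phi(x) = sum_{n>=1} x^n / sqrt(n) satisfies phi(x)*sqrt(1-x) <= 1 - ln(1-x).
def phi(x, nterms=200000):
    total, xn = 0.0, 1.0
    for n in range(1, nterms + 1):
        xn *= x
        total += xn / sqrt(n)
        if xn < 1e-18:          # remaining terms are negligible
            break
    return total

for x in [0.1, 0.5, 0.9, 0.99, 0.999]:
    lhs = phi(x) * sqrt(1 - x)
    rhs = 1 - log(1 - x)
    assert lhs <= rhs
    print(round(lhs, 4), round(rhs, 4))
```

As $x \uparrow 1$ the left-hand side tends to $\sqrt{\pi}$ while the right-hand side diverges, so the bound is far from tight near $1$; its point is only the integrability in (4.6).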
Notice that the Wiener chaos expansion (4.1) can also be written as
$$L_t(a) = \sum_{n=0}^{\infty} I_n\bigg( \int_{s_1 \vee \cdots \vee s_n}^t R_s^{-\frac n2}\, p(R_s, a)\, H_n\Big( \frac{a}{\sqrt{R_s}} \Big)\, dR_s \bigg).$$
In the particular case $a = 0$, the Wiener chaos expansion of $L_t(0)$ can be written as
$$L_t(0) = \sum_{k=0}^{\infty} \int_0^t R_s^{-k-\frac12}\, \frac{(-1)^k}{\sqrt{2\pi}\, 2^k\, k!}\, I_{2k}(1_{[0,s]}^{\otimes 2k})\, dR_s.$$
Using arguments of Fourier analysis, it is proved in [3] that if the covariance function $R(s,t)$ satisfies
$$\int_0^T \int_0^T \big( R_u + R_v - 2R(u,v) \big)^{-\frac12}\, dR_u\, dR_v < \infty, \tag{4.7}$$
then for any $t \in [0,T]$ the local time $L_t$ of $X$ exists and is square integrable, i.e. $E \int_{\mathbb{R}} L_t^2(x)\, dx < \infty$. We can write
$$R_u + R_v - 2R(u,v) = R_u + R_v - 2\rho(u,v)\sqrt{R_u R_v} = \big( \sqrt{R_u} - \sqrt{R_v} \big)^2 + 2\sqrt{R_u R_v}\,\big(1 - \rho(u,v)\big) \ge 2\sqrt{R_u R_v}\,\big(1 - \rho(u,v)\big).$$
Therefore, condition (4.7) is implied by
$$\int_0^T \int_0^T \frac{dR_u\, dR_v}{\sqrt[4]{R_u R_v}\, \sqrt{1 - \rho(u,v)}} < \infty, \tag{4.8}$$
which can be compared with the above assumption (4.6). Notice that both conditions have different consequences. In fact, (4.8) implies $E \int_{\mathbb{R}} L_t^2(x)\, dx < \infty$, whereas (4.6) implies only that $E(L_t^2(x)) < \infty$ for each $x$.
We can now establish the following version of the Tanaka formula.
Theorem 4.3. Suppose that $X = \{X_t,\ 0 \le t \le T\}$ is a zero-mean continuous Gaussian process, with $X_0 = 0$, and such that the covariance function $R(s,t)$ satisfies conditions (H1), (H3a) and (H4). Let $y \in \mathbb{R}$. Then, for any $0 < t \le T$, the process $\{1_{(y,\infty)}(X_s)\, 1_{[0,t]}(s),\ 0 \le s \le T\}$ belongs to $\mathrm{Dom}^E \delta$ and the following holds:
$$\delta\big( 1_{(y,\infty)}(X_\cdot)\, 1_{[0,t]}(\cdot) \big) = (X_t - y)^+ - (-y)^+ - \frac12\, L_t(y).$$
Proof. Let $\varepsilon > 0$ and for all $x \in \mathbb{R}$ set
$$f_{\varepsilon}(x) = \int_{-\infty}^{x} \int_{-\infty}^{v} p(\varepsilon, z - y)\, dz\, dv.$$
Theorem 3.2 implies that
$$f_{\varepsilon}(X_t) = f_{\varepsilon}(0) + \int_0^t f_{\varepsilon}'(X_s)\, \delta X_s + \frac12 \int_0^t f_{\varepsilon}''(X_s)\, dR_s.$$
Then we have that $f_{\varepsilon}'(X_s)\, 1_{[0,t]}(s)$ converges to $1_{(y,\infty)}(X_s)\, 1_{[0,t]}(s)$ in $L^2(\Omega \times \mathbb{R})$ and $f_{\varepsilon}(X_t)$ converges to $(X_t - y)^+$ in $L^2(\Omega)$. Finally, by Proposition 4.1, $\int_0^t f_{\varepsilon}''(X_s)\, dR_s$ converges to $L_t(y)$ in $L^2(\Omega)$. This completes the proof.
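Taking expectations in the Tanaka formula (the divergence term has zero mean) gives $E(X_t - y)^+ - (-y)^+ = \frac12 E L_t(y) = \frac12 \int_0^t p(R_s, y)\, dR_s$. For standard Brownian motion ($R_s = s$) both sides admit closed or near-closed forms, which gives a quick consistency check; this is an illustration, not part of the proof:

```python
from math import sqrt, pi, exp, erf

# Expectation form of the Tanaka formula for Brownian motion (R_s = s):
#   E(X_t - y)^+ - (-y)^+  =  (1/2) * int_0^t p(s, y) ds,
# where p(s, y) = (2*pi*s)^{-1/2} exp(-y^2/(2s)).
def pdf(u):
    return exp(-u * u / 2) / sqrt(2 * pi)

def cdf(u):
    return 0.5 * (1 + erf(u / sqrt(2)))

t, y = 1.0, 0.4
lhs = sqrt(t) * pdf(y / sqrt(t)) - y * cdf(-y / sqrt(t)) - max(-y, 0.0)

n = 100000
h = t / n
rhs = 0.5 * sum(exp(-y * y / (2 * s)) / sqrt(2 * pi * s)
                for s in ((i + 0.5) * h for i in range(n))) * h

print(lhs, rhs)   # the two sides agree up to quadrature error
```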
5. Example: Self-Similar Processes
In this section, we are going to apply the results of the previous sections to the
case of a self-similar centered Gaussian process X. Suppose that X = {Xt , t в‰Ґ 0}
is a stochastic process defined on a complete probability space (в„¦, F , P ). We say
that X is self-similar with exponent H в€€ (0, 1) if for any a > 0, the processes
{X(at), t в‰Ґ 0} and {aH X(t), t в‰Ґ 0} have the same distribution. It is well-known
that fractional Brownian motion is the only H-self-similar centered Gaussian process with stationary increments. Suppose that X = {Xt , t в‰Ґ 0} is a continuous
Gaussian centered self-similar process with exponent H. Let R(s, t) be the covariance function of X. To simplify the presentation we assume E(X12 ) = 1. The
process X satisfies the condition (H3a) because
Rt = R(t, t) = t2H R(1, 1) = t2H .
The function R is homogeneous of order 2H, that is, for a > 0 and s, t в‰Ґ 0, we
have
R(as, at) = E(Xas Xat ) = E(aH Xs aH Xt ) = a2H R(s, t).
For any x в‰Ґ 0, we define
П• (x) = R(1, x).
STOCHASTIC CALCULUS FOR GAUSSIAN PROCESSES
391
Notice that for any $x > 0$,
$$\varphi(x) = R(1,x) = x^{2H} R\Big( \frac1x, 1 \Big) = x^{2H}\, \varphi\Big( \frac1x \Big).$$
On the other hand, applying the Cauchy-Schwarz inequality we get that the function $\varphi$ satisfies $|\varphi(x)| \le x^H$ for all $x \in [0,1]$. The next proposition provides simple sufficient conditions on the function $\varphi$ for the process $X$ to satisfy the assumptions (H1) and (H4).
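Both the homogeneity of $R$ and the Cauchy-Schwarz bound on $\varphi$ can be checked directly on a concrete covariance. A sketch, assuming the fractional Brownian covariance with an illustrative $H$:

```python
H = 0.35   # illustrative Hurst parameter

# Self-similarity checks for R(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H})/2:
# homogeneity R(as, at) = a^{2H} R(s,t), and |phi(x)| = |R(1,x)| <= x^H on [0,1].
def R(s, t):
    return 0.5 * (s ** (2*H) + t ** (2*H) - abs(t - s) ** (2*H))

a, s, t = 3.7, 0.6, 1.9
print(abs(R(a*s, a*t) - a ** (2*H) * R(s, t)))   # ~0 (homogeneity)

ok = all(abs(R(1.0, i / 100)) <= (i / 100) ** H + 1e-12 for i in range(1, 101))
print(ok)
```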
Proposition 5.1. Suppose that $X = \{X_t,\ t \ge 0\}$ is a zero mean continuous self-similar Gaussian process with exponent of self-similarity $H$ and covariance function $R(s,t)$. Let $\varphi(x) = R(1,x)$. Then:
(i) (H1) holds on any interval $[0,T]$ for $\alpha > 1$ if $\alpha(2H-1) + 1 > 0$ and $\varphi$ is absolutely continuous and satisfies
$$\int_0^1 |\varphi'(x)|^{\alpha}\, dx < \infty. \tag{5.1}$$
(ii) (H4) holds on any interval $[0,T]$ if for some $\varepsilon > 0$ and for all $x \in [0,1]$,
$$|\varphi(x)| \le x^{H+\varepsilon}. \tag{5.2}$$
Proof. We first prove (i). We write
$$\int_0^T \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds = \int_0^t \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds + \int_t^T \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds.$$
For $s \le t$, $R(s,t) = t^{2H} \varphi(\frac st)$ and $\frac{\partial R}{\partial s}(s,t) = t^{2H-1} \varphi'(\frac st)$. Applying (5.1) and the change of variables $x = \frac st$, we have
$$\int_0^t \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds = \int_0^t t^{\alpha(2H-1)} \Big| \varphi'\Big( \frac st \Big) \Big|^{\alpha} ds = t^{\alpha(2H-1)+1} \int_0^1 |\varphi'(x)|^{\alpha}\, dx. \tag{5.3}$$
For $s > t$, $R(s,t) = s^{2H} \varphi(\frac ts)$ and
$$\frac{\partial R}{\partial s}(s,t) = 2H s^{2H-1}\, \varphi\Big( \frac ts \Big) - s^{2H-2}\, t\, \varphi'\Big( \frac ts \Big).$$
Then,
$$\int_t^T \Big| \frac{\partial R}{\partial s}(s,t) \Big|^{\alpha} ds \le C \bigg( \int_t^T \Big( s^{2H-1} \Big| \varphi\Big( \frac ts \Big) \Big| \Big)^{\alpha} ds + \int_t^T \Big( s^{2H-2}\, t\, \Big| \varphi'\Big( \frac ts \Big) \Big| \Big)^{\alpha} ds \bigg).$$
With the change of variables $x = \frac ts$ we can write
$$\int_t^T \Big( s^{2H-1} \Big| \varphi\Big( \frac ts \Big) \Big| \Big)^{\alpha} ds \le \|\varphi\|_{\infty}^{\alpha}\, t^{(2H-1)\alpha+1} \int_{t/T}^1 x^{\alpha(1-2H)-2}\, dx = \frac{\|\varphi\|_{\infty}^{\alpha}}{\alpha(1-2H)-1}\, \big[ t^{(2H-1)\alpha+1} - T^{(2H-1)\alpha+1} \big], \tag{5.4}$$
and
$$\int_t^T \Big( s^{2H-2}\, t\, \Big| \varphi'\Big( \frac ts \Big) \Big| \Big)^{\alpha} ds \le t^{(2H-1)\alpha+1} \int_{t/T}^1 |\varphi'(x)|^{\alpha}\, x^{(2-2H)\alpha-2}\, dx \le t^{(2H-1)\alpha+1} \Big( \frac tT \Big)^{(2-2H)\alpha-2} \int_{t/T}^1 |\varphi'(x)|^{\alpha}\, dx \le \frac{t^{\alpha-1}}{T^{(2-2H)\alpha-2}} \int_{t/T}^1 |\varphi'(x)|^{\alpha}\, dx. \tag{5.5}$$
Now, (H1) follows from (5.3), (5.4) and (5.5).
In order to show (ii), we need to show that
$$\sum_{n=1}^{\infty} \alpha_n(T) = \sum_{n=1}^{\infty} \int_0^T \int_0^u \frac{1}{\sqrt{n}}\, \frac{|R(u,v)|^n}{(R_u R_v)^{\frac{n+1}{2}}}\, dR_v\, dR_u < \infty.$$
For any $0 < v < u$ we have $R(u,v) = u^{2H} \varphi(\frac vu)$, and the change of variables $x = \frac vu$ yields
$$\begin{aligned} \alpha_n(T) &= \frac{(2H)^2}{\sqrt{n}} \int_0^T \int_0^u |R(u,v)|^n\, (uv)^{H(1-n)-1}\, dv\, du \\ &= \frac{(2H)^2}{\sqrt{n}} \int_0^T \int_0^1 |R(1,x)|^n\, u^{2H-1}\, x^{H(1-n)-1}\, dx\, du \\ &= \frac{2H T^{2H}}{\sqrt{n}} \int_0^1 |\varphi(x)|^n\, x^{H(1-n)-1}\, dx \\ &\le \frac{2H T^{2H}}{\sqrt{n}} \int_0^1 x^{n\varepsilon + H - 1}\, dx = \frac{2H T^{2H}}{\sqrt{n}}\, \frac{1}{n\varepsilon + H}. \end{aligned}$$
Therefore, we have
$$\sum_{n=1}^{\infty} \alpha_n(T) \le \frac{2H T^{2H}}{\varepsilon} \sum_{n=1}^{\infty} n^{-\frac32} < \infty. \tag{5.6}$$
This completes the proof of (ii).
Example 5.2. The bifractional Brownian motion is a centered Gaussian process
X = {BtH,K , t в‰Ґ 0}, with covariance
R(t, s) = RH,K (t, s) = 2в€’K ((t2H + s2H )K в€’ |t в€’ s|2HK ),
(5.7)
where H в€€ (0, 1) and K в€€ (0, 1]. We refer to HoudrВґe and Villa [10] for the definition
and basic properties of this process. Russo and Tudor [20] have studied several
properties of the bifractional Brownian motion and analyzed the case HK = 12 .
Tudor and Xiao [21] have derived small ball estimates and have proved a version
of the ChungвЂ™s law of the iterated logarithm for the bifractional Brownian motion.
In [14], the authors have shown a decomposition of the bifractional Brownian
motion with parameters H and K into the sum of a fractional Brownian motion
with Hurst parameter HK plus a stochastic process with absolutely continuous
trajectories. The stochastic calculus with respect to the bifractional Brownian
motion has been recently developed in the references [13] and [12]. A Tanaka
formula for the bifractional Brownian motion in the case HK ≤ 1/2 has been obtained by Es-Sebaiy
and Tudor in [9]. A multidimensional Itô formula for the bifractional Brownian
motion has been established in [2].
Note that, if K = 1 then $B^{H,1}$ is a fractional Brownian motion with Hurst
parameter $H\in(0,1)$, and we denote this process by $B^H$. Bifractional Brownian
motion is a self-similar Gaussian process with non-stationary increments if K is not
equal to 1.
Set
$$\varphi(x) = 2^{-K}\big((1+x^{2H})^K - (1-x)^{2HK}\big).$$
Then
$$\varphi'(x) = 2^{1-K}HK\big[x^{2H-1}(1+x^{2H})^{K-1} + (1-x)^{2HK-1}\big],$$
which implies that (i) in Proposition 5.1 holds for α such that $\alpha(2HK-1) > -1$.
Notice that
$$\varphi(x) \le \frac{1}{2^K}\big[1 + x^{2H} - (1-x)^{2H}\big]^K. \qquad(5.8)$$
Then, if $2H \le 1$,
$$1 + x^{2H} - (1-x)^{2H} \le 2x^{2H}, \qquad(5.9)$$
and when $2H > 1$,
$$1 + x^{2H} - (1-x)^{2H} \le x + x^{2H} \le 2x. \qquad(5.10)$$
From the inequalities (5.8), (5.9) and (5.10) we obtain
$$\frac{\varphi(x)}{x^{HK}} = \frac{(1+x^{2H})^K - (1-x)^{2HK}}{2^K x^{HK}} \le x^{\min(H,1-H)K}. \qquad(5.11)$$
Then condition (ii) in Proposition 5.1 holds with $\varepsilon = \min(H,1-H)K$. As a
consequence, the results in Sections 3, 4 and 5 hold for the bifractional Brownian
motion.
Bardina and Es-Sebaiy considered in [2] an extension of bifractional Brownian
motion with parameters $H\in(0,1)$, $K\in(1,2)$ and $HK\in(0,1)$ with covariance
function (5.7). By the same arguments as above, Proposition 5.1 holds in this case
with $\varepsilon = \min(H,1-H)K$ in condition (ii). Thus, the results in Sections 3, 4 and
5 hold for this extension of the bifractional Brownian motion.
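As an illustration (not from the paper), the final inequality (5.11) can be probed numerically for a few sample parameter pairs; the grid and the (H, K) values below are arbitrary choices.

```python
# Illustrative numerical check of (5.11): for the bifractional kernel
#   phi(x) = 2^{-K}((1 + x^{2H})^K - (1 - x)^{2HK}),
# we test phi(x) <= x^{HK} * x^{min(H,1-H)K} on a grid in (0, 1).
# The (H, K) pairs are arbitrary sample values, not from the paper.
for H, K in [(0.3, 0.8), (0.7, 0.5), (0.5, 1.0)]:
    e = min(H, 1 - H) * K
    for i in range(1, 1000):
        x = i / 1000.0
        phi = 2 ** (-K) * ((1 + x ** (2 * H)) ** K - (1 - x) ** (2 * H * K))
        assert phi <= x ** (H * K + e) + 1e-9, (H, K, x)
print("inequality (5.11) holds on the sampled grid")
```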
6. Hitting Times
Suppose that $X = \{X_t,\ t\ge 0\}$ is a zero mean continuous Gaussian process with
covariance function R(t, s), satisfying (H1) and (H3) on any interval [0, T]. We
also assume that X(0) = 0. Moreover, we assume the following conditions:
(H5) $\limsup_{t\to\infty} X_t = +\infty$ almost surely.
(H6) For any $0\le s < t$, we have $E(|X_t - X_s|^2) > 0$.
(H7) For any continuous function f,
$$r \mapsto \int_0^t f(s)\,\frac{\partial R}{\partial s}(s,r)\,ds$$
is continuous on $[0,\infty)$.
For any a > 0, we denote by $\tau_a$ the hitting time defined by
$$\tau_a = \inf\{t\ge 0,\ X_t = a\} = \inf\{t\ge 0,\ X_t \ge a\}. \qquad(6.1)$$
The map $a\mapsto\tau_a$ is increasing and left continuous with right limits.
We are interested in the distribution of the random variable $\tau_a$. The explicit
form of this distribution is known only in some special cases such as the standard
Brownian motion. In this case the Laplace transform of the hitting time $\tau_a$ is
given by
$$E(e^{-\alpha\tau_a}) = e^{-a\sqrt{2\alpha}},$$
for all α > 0. This can be proved, for instance, using the exponential martingale
$$M_t = e^{\lambda X_t - \frac{1}{2}\lambda^2 t},$$
and Doob's optional stopping theorem. In the general case, the exponential process
$$M_t = \exp\Big(\lambda X_t - \frac{1}{2}\lambda^2 R_t\Big) \qquad(6.2)$$
is no longer a martingale. However, if we apply (3.2) for the divergence integral, we
have
$$M_t = 1 + \lambda\delta(M 1_{[0,t]}) = 1 + \lambda\delta_t(M). \qquad(6.3)$$
(6.3)
Substituting t by $\tau_a$ and taking the expectation in Equation (6.3), Decreusefond
and Nualart have established in [7] an inequality of the form $E(e^{-\alpha R_{\tau_a}}) \le e^{-a\sqrt{2\alpha}}$,
assuming that the partial derivative of the covariance $\frac{\partial R}{\partial s}(t,s)$ is nonnegative and
continuous. This includes the case of the fractional Brownian motion with Hurst
parameter H > 1/2. The purpose of this section is to derive the converse inequality in
the singular case where the partial derivative of the covariance is not continuous,
assuming $\frac{\partial R}{\partial s}(t,s)\le 0$ for s < t (which includes the case of the fractional Brownian
motion with Hurst parameter H < 1/2), completing the analysis initiated in [7].
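The classical Brownian identity recalled above can be tested by simulation. The following Monte Carlo sketch is purely illustrative and not from the paper; the step size, horizon and path count are arbitrary, and the Euler discretization slightly overestimates the hitting time, so the tolerance is loose.

```python
import math, random

# Monte Carlo sanity check (illustrative) of
#   E[exp(-alpha * tau_a)] = exp(-a * sqrt(2 * alpha))
# for standard Brownian motion, via an Euler discretization of the path.
random.seed(1)
a, alpha = 1.0, 0.5
dt, t_max, n_paths = 0.01, 50.0, 4000
sqdt = math.sqrt(dt)
acc = 0.0
for _ in range(n_paths):
    x, t = 0.0, 0.0
    while x < a and t < t_max:
        x += random.gauss(0.0, 1.0) * sqdt
        t += dt
    if x >= a:
        acc += math.exp(-alpha * t)
    # paths not reaching a by t_max would contribute ~exp(-alpha*t_max) ~ 0
est = acc / n_paths
exact = math.exp(-a * math.sqrt(2 * alpha))
print(est, exact)
assert abs(est - exact) < 0.05
```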
As in the case of the Brownian motion we would like to substitute t by П„a in both
sides of Equation (6.3) and then take the mathematical expectation in both sides
of the equality. It is convenient to introduce also an integral in a, and, following
the approach developed in [7], we claim that the following result holds.
Proposition 6.1. Suppose X satisfies (H1), (H3), (H5), (H6) and (H7); then
$$\int_0^{\infty} E(M_{\tau_a})\psi(a)\,da = c - \lim_{\delta\to 0}\lambda E\left(\int_0^{S_T} d\tau_y \int_0^1 \psi(y)\,d\eta \int_0^{\infty} p_{\delta}\big((\tau_{y+}\wedge T)\eta + \tau_y(1-\eta) - s\big)M_s \frac{\partial R}{\partial s}(\tau_y, s)\,ds\right), \qquad(6.4)$$
where p is an infinitely differentiable function with support on [−1, 1] such that
$\int_{-1}^1 p(x)\,dx = 1$, ψ(x) is a nonnegative smooth function with compact support
contained in $(0,\infty)$ such that $\int_0^{\infty}\psi(a)\,da = c$, and we use the notation $p_{\varepsilon}(x) = \frac{1}{\varepsilon}p(\frac{x}{\varepsilon})$.
Before proving this proposition we need several technical lemmas. The first
lemma is an integration by parts formula, and it is a consequence of the definition
of the extended divergence operator given in Definition 2.1.
Lemma 6.2. For any t > 0 and any random variable of the form
$F = f(X_{t_1},\dots,X_{t_n})$, where f is an infinitely differentiable function which is
bounded together with all its partial derivatives, we have
$$E(F\delta_t(M)) = E\left(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_{t_1},\dots,X_{t_n})\int_0^t M_s\frac{\partial R}{\partial s}(t_i,s)\,ds\right), \qquad(6.5)$$
where $\delta_t(M)$ is given in Equation (6.3).
Proof. Using the Definition 2.1 of the extended divergence operator and Equation
(2.1) we can write
$$E(F\delta_t(M)) = E\big(\langle DF, M1_{[0,t]}\rangle_{\mathcal H}\big)
= E\left(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_{t_1},\dots,X_{t_n})\big\langle 1_{[0,t_i]}, M1_{[0,t]}\big\rangle_{\mathcal H}\right)
= E\left(\sum_{i=1}^n \frac{\partial f}{\partial x_i}(X_{t_1},\dots,X_{t_n})\int_0^t M_s\frac{\partial R}{\partial s}(t_i,s)\,ds\right),$$
which completes the proof of the lemma.
For any a > 0, we know that $P(\tau_a < \infty) = 1$ by condition (H5). Set
$$S_t = \sup_{s\in[0,t]} X_s.$$
We know that for all t > 0, $S_t$ belongs to $\mathbb D^{1,2}$ and $DS_t = 1_{[0,\tau_{S_t}]}$ (see [7] and
[11]). Following the approach developed in [7], we introduce a regularization of
the hitting time $\tau_a$, and we establish its differentiability in the sense of Malliavin
calculus.
Lemma 6.3. Suppose that φ is a nonnegative smooth function with compact support in $(0,\infty)$ and define for any T > 0,
$$Y = \int_0^{\infty}\varphi(a)(\tau_a\wedge T)\,da.$$
The random variable Y belongs to the space $\mathbb D^{1,2}$, and
$$D_r Y = -\int_0^{S_T}\varphi(y)1_{[0,\tau_y]}(r)\,d\tau_y.$$
Proof. First, it is clear that Y is bounded because φ has compact support. On
the other hand, for any r > 0, we can write
$$\{\tau_a > r\} = \{S_r < a\}.$$
Applying Fubini's theorem, we have
$$Y = \int_0^{\infty}\varphi(a)\int_0^{\tau_a\wedge T} d\theta\, da
= \int_0^{\infty}\int_0^{\infty}\varphi(a)1_{\{\theta<\tau_a\wedge T\}}\,d\theta\, da
= \int_0^{T}\int_0^{\infty}\varphi(a)1_{\{\theta<\tau_a\}}\,da\, d\theta
= \int_0^{T}\int_{S_\theta}^{\infty}\varphi(a)\,da\, d\theta.$$
The function $\psi(x) = \int_x^{\infty}\varphi(a)\,da$ is continuously differentiable with a bounded
derivative, so $\psi(S_\theta)\in\mathbb D^{1,2}$ for any $\theta\in[0,T]$ because $S_\theta\in\mathbb D^{1,2}$ (see, for instance,
[17]). Finally, we can show that $Y = \int_0^T \psi(S_\theta)\,d\theta$ belongs to $\mathbb D^{1,2}$ approximating
the integral by Riemann sums. Hence, taking the Malliavin derivative of Y, we
obtain
$$D_r Y = -\int_0^T \varphi(S_\theta)D_r S_\theta\, d\theta
= -\int_0^T \varphi(S_\theta)1_{[0,\tau_{S_\theta}]}(r)\,d\theta
= -\int_0^{S_T}\varphi(y)1_{[0,\tau_y]}(r)\,d\tau_y,$$
where the last equality holds by changing variable $S_\theta = y$, which is equivalent to
$\theta = \tau_y$.
The following lemma provides an explicit formula for the expectation
$E(p(Y)\delta_t(M))$, where p is a smooth function with compact support.
Lemma 6.4. Suppose X satisfies (H1), (H3), (H5), (H6) and (H7). Then, for
any infinitely differentiable function p with compact support,
$$E(p(Y)\delta_t(M)) = -E\left(\int_0^t M_s\, p'(Y)\int_0^{S_T}\varphi(y)\frac{\partial R}{\partial s}(\tau_y,s)\,d\tau_y\, ds\right). \qquad(6.6)$$
Proof. Consider the random variable
$$Y = \int_0^{\infty}\varphi(a)(\tau_a\wedge T)\,da = \int_0^{T}\int_{S_\theta}^{\infty}\varphi(a)\,da\,d\theta = \int_0^T \xi(S_\theta)\,d\theta,$$
where $\xi(x) = \int_x^{\infty}\varphi(a)\,da$. Let $\{D_N,\ N\ge 1\}$ be an increasing sequence of finite
subsets of [0, T] such that $\cup_{N=1}^{\infty}D_N$ is dense in [0, T]. Set $D_N = \{\sigma_i,\ 0=\sigma_0<\sigma_1<\cdots<\sigma_N=T\}$ and $D_N^{\theta} = D_N\cap[0,\theta]$, and
$$S_\theta^N = \max\{X_t,\ t\in D_N^{\theta}\} = \max\{X_{\sigma_0},\dots,X_{\sigma(\theta)}\},$$
where $\sigma(\theta) = \sup D_N^{\theta}$. We also write $S_k^N = S^N_{\sigma_k}$. Define
$$Y_N = \int_0^T \xi(S_\theta^N)\,d\theta = \sum_{k=1}^N(\sigma_k-\sigma_{k-1})\,\xi(\max\{X_{\sigma_0},\dots,X_{\sigma_{k-1}}\}) = \sum_{k=1}^N(\sigma_k-\sigma_{k-1})\,\xi(S^N_{k-1}).$$
Then, taking into account that $X_{\sigma_0} = X_0 = 0$, $p(Y_N)$ is a Lipschitz function F of
the N − 1 variables $\{X_{\sigma_1},\dots,X_{\sigma_{N-1}}\}$, namely,
$$p(Y_N) = F(X_{\sigma_1},\dots,X_{\sigma_{N-1}}) = p\left(\sum_{k=2}^N(\sigma_k-\sigma_{k-1})\,\xi(S^N_{k-1})\right),$$
and, for all $1\le i\le N-1$, the derivative of F with respect to $x_i$ is
$$\frac{\partial F}{\partial x_i} = -p'(Y_N)\sum_{k=i+1}^N(\sigma_k-\sigma_{k-1})\,\varphi(S^N_{k-1})\,1_{\{S^N_{k-1}=X_{\sigma_i}\}}.$$
By (6.5), we have
$$E(p(Y_N)\delta_t(M)) = E\left(-p'(Y_N)\sum_{i=1}^{N-1}\sum_{k=i+1}^N(\sigma_k-\sigma_{k-1})\varphi(S^N_{k-1})1_{\{S^N_{k-1}=X_{\sigma_i}\}}\int_0^t M_s\frac{\partial R}{\partial s}(\sigma_i,s)\,ds\right)$$
$$= -E\left(p'(Y_N)\sum_{k=2}^N(\sigma_k-\sigma_{k-1})\varphi(S^N_{k-1})\int_0^t M_s\Big(\sum_{i=1}^{k-1}\frac{\partial R}{\partial s}(\sigma_i,s)1_{\{S^N_{k-1}=X_{\sigma_i}\}}\Big)ds\right)$$
$$= -E\left(p'(Y_N)\int_{\sigma_1}^T\varphi(S^N_\theta)\int_0^t M_s\frac{\partial R}{\partial s}(\sigma^{\theta,N},s)\,ds\,d\theta\right) + R_N,$$
where
$$\sigma^{\theta,N} = \sum_{k=1}^N\sum_{i=0}^{k-1}\sigma_i\,1_{(\sigma_{k-1},\sigma_k]}(\theta)\,1_{\{\max(X_{\sigma_0},\dots,X_{\sigma_{k-1}})=X_{\sigma_i}\}},$$
and the remainder term $R_N$ is given by
$$R_N = -\varphi(0)\sum_{k=2}^N(\sigma_k-\sigma_{k-1})E\left(p'(Y_N)1_{\{\max(X_{\sigma_0},\dots,X_{\sigma_{k-1}})=0\}}\int_0^t M_s\frac{\partial R}{\partial s}(0,s)\,ds\right).$$
As N tends to infinity, $R_N$ converges to
$$-\varphi(0)\int_0^T E\left(p'(Y)1_{\{S_\theta=0\}}\int_0^t M_s\frac{\partial R}{\partial s}(0,s)\,ds\right)d\theta = 0,$$
because $S_\theta$ has an absolutely continuous distribution for any θ > 0. On the other
hand, we claim that for all θ, $\sigma^{\theta,N}$ converges to $\tau_{S_\theta}$ almost surely as N goes to
infinity. This is a consequence of the fact that X is continuous and the maximum
is almost surely attained at a unique point by condition (H6). In addition, $p'(Y_N)$
converges to $p'(Y)$ and $\varphi(S^N_\theta)$ converges to $\varphi(S_\theta)$ almost surely. Therefore, by
condition (H7), $\int_0^t M_s\frac{\partial R}{\partial s}(\sigma^{\theta,N},s)\,ds$ converges pointwise to $\int_0^t M_s\frac{\partial R}{\partial s}(\tau_{S_\theta},s)\,ds$. On
the other hand, by condition (H1),
$$\left|\int_0^t M_s\frac{\partial R}{\partial s}(\sigma^{\theta,N},s)\,ds\right| \le \left(\int_0^T M_s^{\beta}\,ds\right)^{\frac{1}{\beta}}\sup_{0\le t\le T}\left(\int_0^T\Big|\frac{\partial R}{\partial s}(s,t)\Big|^{\alpha}ds\right)^{\frac{1}{\alpha}},$$
so by the dominated convergence theorem, we obtain
$$E(p(Y)\delta_t(M)) = -E\left(p'(Y)\int_0^T\varphi(S_\theta)\int_0^t M_s\frac{\partial R}{\partial s}(\tau_{S_\theta},s)\,ds\,d\theta\right).$$
Finally, the change of variable $S_\theta = y$ yields
$$E(p(Y)\delta_t(M)) = -E\left(\int_0^t M_s\,p'(Y)\int_0^{S_T}\varphi(y)\frac{\partial R}{\partial s}(\tau_y,s)\,d\tau_y\,ds\right),$$
which completes the proof of the lemma.
Proof of Proposition 6.1. Define
$$Y_{\varepsilon,a} = \int_0^{\infty}\varphi_\varepsilon(x-a)(\tau_x\wedge T)\,dx = \frac{1}{\varepsilon}\int_{a-\varepsilon}^{a}(\tau_x\wedge T)\,dx = \int_0^1(\tau_{a-\varepsilon\xi}\wedge T)\,d\xi,$$
where $\varphi_\varepsilon(x) = \frac{1}{\varepsilon}1_{[-1,0]}(\frac{x}{\varepsilon})$, and by convention $\tau_x = 0$ if x < 0. Lemma 6.4 can be
extended to the function $x\mapsto\varphi_\varepsilon(x-a)$ and to the random variable $Y_{\varepsilon,a}$ for any
fixed a. Therefore, from (6.3) and Lemma 6.4 we obtain
$$\int_0^{\infty}E(p_\delta(Y_{\varepsilon,a}-t)M_t)\,dt = 1 + \lambda\int_0^{\infty}E\big(p_\delta(Y_{\varepsilon,a}-t)\delta(M1_{[0,t]})\big)\,dt$$
$$= 1 - \lambda\int_0^{\infty}E\left(\int_0^t M_s\,p'_\delta(Y_{\varepsilon,a}-t)\int_0^{S_T}\varphi_\varepsilon(y-a)\frac{\partial R}{\partial s}(\tau_y,s)\,d\tau_y\,ds\right)dt$$
$$= 1 - \lambda\int_0^{\infty}E\left(p_\delta(Y_{\varepsilon,a}-s)M_s\int_0^{S_T}\varphi_\varepsilon(y-a)\frac{\partial R}{\partial s}(\tau_y,s)\,d\tau_y\right)ds, \qquad(6.7)$$
where the last equality holds by integration by parts. Multiplying by ψ(a) and
integrating with respect to the variable a yields
$$\int_{\mathbb R}\psi(a)\int_0^{\infty}E(p_\delta(Y_{\varepsilon,a}-t)M_t)\,dt\,da$$
$$= c - \lambda E\left(\int_0^{S_T}d\tau_y\int_{\mathbb R}\int_0^{\infty}p_\delta(Y_{\varepsilon,a}-s)M_s\frac{\partial R}{\partial s}(\tau_y,s)\,ds\,\varphi_\varepsilon(y-a)\psi(a)\,da\right)$$
$$= c - \lambda E\left(\int_0^{S_T}d\tau_y\,\frac{1}{\varepsilon}\int_y^{y+\varepsilon}\psi(a)\int_0^{\infty}p_\delta(Y_{\varepsilon,a}-s)M_s\frac{\partial R}{\partial s}(\tau_y,s)\,ds\,da\right)$$
$$= c - \lambda E\left(\int_0^{S_T}d\tau_y\int_0^{\infty}\Big(\int_0^1\psi(y+\varepsilon\eta)\,p_\delta(Y_{\varepsilon,y+\varepsilon\eta}-s)\,d\eta\Big)M_s\frac{\partial R}{\partial s}(\tau_y,s)\,ds\right),$$
where the last equation holds by the change of variable $a = y + \varepsilon\eta$. Next, consider
$$Y_{\varepsilon,y+\varepsilon\eta} = \int_0^1(\tau_{y+\varepsilon\eta-\varepsilon\xi}\wedge T)\,d\xi = \int_0^{\eta}(\tau_{y+\varepsilon\eta-\varepsilon\xi}\wedge T)\,d\xi + \int_{\eta}^1(\tau_{y+\varepsilon\eta-\varepsilon\xi}\wedge T)\,d\xi.$$
Taking the limit as ε goes to zero, and using the fact that τ is left continuous
with right limits, we obtain
$$\lim_{\varepsilon\to 0}\int_0^{\eta}(\tau_{y+\varepsilon\eta-\varepsilon\xi}\wedge T)\,d\xi = \int_0^{\eta}(\tau_{y+}\wedge T)\,d\xi = (\tau_{y+}\wedge T)\eta,$$
$$\lim_{\varepsilon\to 0}\int_{\eta}^1(\tau_{y+\varepsilon\eta-\varepsilon\xi}\wedge T)\,d\xi = \int_{\eta}^1(\tau_y\wedge T)\,d\xi = \tau_y(1-\eta).$$
This implies that
$$\lim_{\varepsilon\to 0}\int_0^1\psi(y+\varepsilon\eta)\,p_\delta(Y_{\varepsilon,y+\varepsilon\eta}-s)\,d\eta = \int_0^1\psi(y)\,p_\delta\big((\tau_{y+}\wedge T)\eta+\tau_y(1-\eta)-s\big)\,d\eta.$$
This allows us to compute the limit of the right-hand side of Equation (6.7) as ε
tends to zero, using the dominated convergence theorem. In fact,
$$\int_0^1\psi(y+\varepsilon\eta)\,p_\delta(Y_{\varepsilon,y+\varepsilon\eta}-s)\,d\eta \le K,$$
where K is a constant, and assuming $\mathrm{supp}(p_\delta)\subseteq[0,T+\delta]$, we have, using condition
(H1),
$$E\left(\int_0^{S_T}d\tau_y\int_0^{T+\delta}M_s\Big|\frac{\partial R}{\partial s}(\tau_y,s)\Big|\,ds\right)
\le E\left(\int_0^{S_T}d\tau_y\Big(\int_0^{T+\delta}|M_s|^{\beta}ds\Big)^{\frac{1}{\beta}}\Big(\int_0^{T+\delta}\Big|\frac{\partial R}{\partial s}(\tau_y,s)\Big|^{\alpha}ds\Big)^{\frac{1}{\alpha}}\right)$$
$$\le T\,E\left(\Big(\int_0^{T+\delta}|M_s|^{\beta}ds\Big)^{\frac{1}{\beta}}\sup_{z\in[0,T+\delta]}\Big(\int_0^{T+\delta}\Big|\frac{\partial R}{\partial s}(z,s)\Big|^{\alpha}ds\Big)^{\frac{1}{\alpha}}\right) < \infty.$$
On the other hand, we know that $\lim_{\varepsilon\to 0}Y_{\varepsilon,a} = \tau_a\wedge T = \tau_a$ since $\tau_a\le T$.
Therefore,
$$\int_{\mathbb R}\psi(a)\int_0^{\infty}E(p_\delta(\tau_a-t)M_t)\,dt\,da = c - \lambda E\left(\int_0^{S_T}d\tau_y\int_0^1\psi(y)\,d\eta\int_0^{\infty}p_\delta\big((\tau_{y+}\wedge T)\eta+\tau_y(1-\eta)-s\big)M_s\frac{\partial R}{\partial s}(\tau_y,s)\,ds\right). \qquad(6.8)$$
Finally, for the left-hand side of (6.8) we have
$$\lim_{\delta\to 0}\int_{\mathbb R}\psi(a)\int_0^{\infty}E(p_\delta(\tau_a-t)M_t)\,dt\,da = \int_0^{\infty}E(M_{\tau_a})\psi(a)\,da,$$
which implies the desired result.
Proposition 6.1 implies the following inequalities which are the main result of
this section.
Theorem 6.5. Assume that X satisfies (H1), (H3), (H5), (H6) and (H7).
(i) If $\frac{\partial R}{\partial s}(t,s)\ge 0$ for all s > t, then for all α, a > 0, we have
$$E(\exp(-\alpha R_{\tau_a})) \le e^{-a\sqrt{2\alpha}}. \qquad(6.9)$$
(ii) If $\frac{\partial R}{\partial s}(t,s)\le 0$ for all s > t, then for all α, a > 0, we have
$$E(\exp(-\alpha R_{\tau_a})) \ge e^{-a\sqrt{2\alpha}}. \qquad(6.10)$$
Proof. If we assume $\frac{\partial R}{\partial s}(t,s)\ge 0$, Proposition 6.1 implies
$$\int_0^{\infty}E(M_{\tau_a})\psi(a)\,da \le c.$$
Therefore, $E(M_{\tau_a})\le 1$, namely,
$$E\Big(\exp\Big(\lambda a - \frac{1}{2}\lambda^2 R_{\tau_a}\Big)\Big) \le 1,$$
for any λ > 0, which implies (6.9).
To show (ii), we choose $p_\delta$ such that $p_\delta(x-y) = 0$ if x > y. Then, in the
integral with respect to ds appearing in the right-hand side of (6.8) we can assume
that $s > (\tau_{y+}\wedge T)\eta + \tau_y(1-\eta) \ge \tau_y$, which implies $\frac{\partial R}{\partial s}(\tau_y,s)\le 0$. Then,
$$\int_0^{\infty}E(M_{\tau_a})\psi(a)\,da \ge c,$$
which allows us to conclude the proof as in the case (i).
Theorem 6.5 tells us that the Laplace transform of the random variable $R_{\tau_a}$ can
be compared with the Laplace transform of the hitting time of the ordinary Brownian motion at the level a, under some monotonicity conditions on the covariance
function. This has consequences for the moments of $R_{\tau_a}$. In the case (i),
the inequality (6.9) implies for any r > 0,
$$E(R_{\tau_a}^{-r}) = \frac{1}{\Gamma(r)}\int_0^{\infty}E(e^{-\alpha R_{\tau_a}})\alpha^{r-1}\,d\alpha
\le \frac{1}{\Gamma(r)}\int_0^{\infty}e^{-a\sqrt{2\alpha}}\alpha^{r-1}\,d\alpha
= \frac{2^r\,\Gamma(r+\frac{1}{2})}{\sqrt{\pi}}\,a^{-2r}. \qquad(6.11)$$
On the other hand, for 0 < r < 1,
$$E(R_{\tau_a}^{r}) = \frac{r}{\Gamma(1-r)}\int_0^{\infty}\big(1-E(e^{-\alpha R_{\tau_a}})\big)\alpha^{-r-1}\,d\alpha
\ge \frac{r}{\Gamma(1-r)}\int_0^{\infty}\big(1-e^{-a\sqrt{2\alpha}}\big)\alpha^{-r-1}\,d\alpha. \qquad(6.12)$$
As a consequence, $E(R_{\tau_a}^r) = +\infty$ for $r\in(\frac{1}{2},1)$.
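The closed-form evaluation of the integral in (6.11) (via the substitution $u = a\sqrt{2\alpha}$ and the duplication formula for the Gamma function) can be verified numerically; the values of a and r below are arbitrary illustrative choices, not from the paper.

```python
import math

# Numerical check of the closed-form value used in (6.11):
#   (1/Gamma(r)) * ∫_0^∞ exp(-a*sqrt(2*alpha)) * alpha^(r-1) d(alpha)
#     = 2^r * Gamma(r + 1/2) / (sqrt(pi) * a^(2r)).
# Substituting alpha = u^2/(2 a^2) turns the integral into
#   2^(1-r) * a^(-2r) * ∫_0^∞ e^{-u} u^{2r-1} du.
a, r = 1.3, 0.7

def gamma_integral(n=400000, umax=40.0):
    # midpoint rule for ∫_0^umax e^{-u} u^{2r-1} du
    h = umax / n
    return sum(math.exp(-(i + 0.5) * h) * ((i + 0.5) * h) ** (2 * r - 1)
               for i in range(n)) * h

lhs = 2 ** (1 - r) * a ** (-2 * r) * gamma_integral() / math.gamma(r)
rhs = 2 ** r * math.gamma(r + 0.5) / (math.sqrt(math.pi) * a ** (2 * r))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-4
```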
In the case (ii), the inequality (6.10) implies for any r > 0,
$$E(R_{\tau_a}^{-r}) = \frac{1}{\Gamma(r)}\int_0^{\infty}E(e^{-\alpha R_{\tau_a}})\alpha^{r-1}\,d\alpha
\ge \frac{1}{\Gamma(r)}\int_0^{\infty}e^{-a\sqrt{2\alpha}}\alpha^{r-1}\,d\alpha
= \frac{2^r\,\Gamma(r+\frac{1}{2})}{\sqrt{\pi}}\,a^{-2r}. \qquad(6.13)$$
On the other hand, for 0 < r < 1,
$$E(R_{\tau_a}^{r}) = \frac{r}{\Gamma(1-r)}\int_0^{\infty}\big(1-E(e^{-\alpha R_{\tau_a}})\big)\alpha^{-r-1}\,d\alpha
\le \frac{r}{\Gamma(1-r)}\int_0^{\infty}\big(1-e^{-a\sqrt{2\alpha}}\big)\alpha^{-r-1}\,d\alpha, \qquad(6.14)$$
and, hence, $E(R_{\tau_a}^r) < \infty$ for $r\in(0,\frac{1}{2})$.
Example 6.6. Consider the case of a fractional Brownian motion with Hurst parameter
$H\in(0,1)$. Recall that
$$R_H(t,s) = \frac{1}{2}\big(t^{2H}+s^{2H}-|t-s|^{2H}\big).$$
Conditions (H5), (H6) and (H7) are satisfied. We can write
$$\frac{\partial R_H}{\partial s}(t,s) = H\big(s^{2H-1}+\mathrm{sign}(t-s)|t-s|^{2H-1}\big)$$
for all $s,t\in[0,T]$.
If H > 1/2, then $\frac{\partial R_H}{\partial s}(t,s)\ge 0$ for all s, t, and by (6.9) in Theorem 6.5,
$E(\exp(-\alpha\tau_a^{2H}))\le e^{-a\sqrt{2\alpha}}$. This implies that $E(\tau_a^p) = +\infty$ for any p > H, and $\tau_a$
has finite negative moments of all orders.
If H < 1/2, then $\frac{\partial R_H}{\partial s}(t,s)\le 0$ for s > t, and by (6.10) in Theorem 6.5,
$E(\exp(-\alpha\tau_a^{2H}))\ge e^{-a\sqrt{2\alpha}}$. This implies that $E(\tau_a^p) < +\infty$ for any p < H.
In [16], Molchan proved that for the fractional Brownian motion with Hurst
parameter $H\in(0,1)$,
$$P(\tau_a > t) = t^{H-1+o(1)},$$
as t tends to infinity. As a consequence, $E(\tau_a^p) < \infty$ if p < 1 − H and $E(\tau_a^p) = \infty$
if p > 1 − H, which is stronger than the integrability results mentioned above.
Acknowledgment. We would like to thank an anonymous referee for the helpful comments.
References
1. Alòs, E., Mazet, O. and Nualart, D.: Stochastic calculus with respect to Gaussian processes, Ann. Probab. 29 (2001) 766–801.
2. Bardina, X. and Es-Sebaiy, K.: An extension of bifractional Brownian motion, Commun. on Stoch. Anal. 5 (2011) 333–340.
3. Berman, S. M.: Local times and sample function properties of stationary Gaussian processes, Trans. Amer. Math. Soc. 137 (1969) 277–299.
4. Biagini, F., Hu, Y., Øksendal, B. and Zhang, T.: Stochastic Calculus for Fractional Brownian Motion and Applications, Springer, 2008.
5. Cheridito, P. and Nualart, D.: Stochastic integral of divergence type with respect to the fractional Brownian motion with Hurst parameter H < 1/2, Ann. Institut Henri Poincaré 41 (2005) 1049–1081.
6. Coutin, L., Nualart, D. and Tudor, C. A.: Tanaka formula for the fractional Brownian motion, Stoch. Process. Appl. 94 (2001) 301–315.
7. Decreusefond, L. and Nualart, D.: Hitting times for Gaussian processes, Ann. Probab. 36 (2008) 319–330.
8. Decreusefond, L. and Üstünel, A. S.: Stochastic analysis of the fractional Brownian motion, Potential Anal. 10 (1999) 177–214.
9. Es-Sebaiy, K. and Tudor, C. A.: Multidimensional bifractional Brownian motion: Itô and Tanaka formulas, Stoch. Dyn. 7 (2007) 365–388.
10. Houdré, C. and Villa, J.: An example of infinite dimensional quasi-helix, Contemporary Mathematics 366 (2003) 195–201.
11. Kim, J. and Pollard, D.: Cube root asymptotics, Annals of Statistics 18 (1990) 191–219.
12. Kruk, I. and Russo, F.: Malliavin-Skorohod calculus and Paley-Wiener integral for covariance singular processes. Preprint.
13. Kruk, I., Russo, F. and Tudor, C. A.: Wiener integrals, Malliavin calculus and covariance structure measure, Journal of Functional Analysis 249 (2007) 92–142.
14. Lei, P. and Nualart, D.: A decomposition of the bi-fractional Brownian motion and some applications, Statist. Probab. Lett. 79 (2009) 619–624.
15. Mocioalca, O. and Viens, F. G.: Skorohod integration and stochastic calculus beyond the fractional Brownian scale, Journal of Functional Analysis 222 (2005) 385–434.
16. Molchan, G. M.: On the maximum of fractional Brownian motion, Theory Probab. Appl. 44 (2000) 97–102.
17. Nualart, D.: The Malliavin Calculus and Related Topics (Probability and Its Applications), 2nd ed., Springer, 2006.
18. Pipiras, V. and Taqqu, M. S.: Integration questions related to fractional Brownian motion, Probab. Theory Related Fields 118 (2000) 251–291.
19. Pipiras, V. and Taqqu, M. S.: Are classes of deterministic integrands for fractional Brownian motion on an interval complete?, Bernoulli 6 (2001) 873–897.
20. Russo, F. and Tudor, C. A.: On the bifractional Brownian motion, Stoch. Process. Appl. 5 (2006) 830–856.
21. Tudor, C. A. and Xiao, Y.: Sample path properties of bifractional Brownian motion, Bernoulli 13 (2007) 1023–1052.
Pedro Lei: Department of Mathematics, University of Kansas, Lawrence, KS 66045
David Nualart: Department of Mathematics, University of Kansas, Lawrence, KS 66045
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 403-407
www.serialspublications.com
AN ESTIMATE FOR BOUNDED SOLUTIONS OF THE
HERMITE HEAT EQUATION
BISHNU PRASAD DHUNGANA
Abstract. An estimate on the partial derivatives of the Mehler kernel
E(x, ξ, t) for t > 0 is first established. Particularly for 0 < t < 1, it extends
the estimate given by S. Thangavelu in his monograph Lectures on Hermite
and Laguerre Expansions on the order of the partial derivative of the Mehler
kernel with respect to the space variable. Furthermore, for each $m\in\mathbb N_0$,
a growth estimate on the partial derivative $\frac{\partial^m U(x,t)}{\partial x^m}$ of all
bounded solutions U(x, t) of the Cauchy Dirichlet problem for the Hermite
heat equation is established.
1. Introduction
As introduced in [1], we denote by E(x, ξ, t) the Mehler kernel defined by
$$E(x,\xi,t) = \begin{cases}\sum_{k=0}^{\infty}e^{-(2k+1)t}h_k(x)h_k(\xi), & t > 0,\\ 0, & t\le 0,\end{cases}$$
where the $h_k$'s are $L^2$-normalized Hermite functions defined by
$$h_k(x) = \frac{(-1)^k e^{x^2/2}}{\sqrt{2^k k!\sqrt{\pi}}}\,\frac{d^k}{dx^k}e^{-x^2}, \qquad x\in\mathbb R.$$
Moreover the explicit form of E(x, ξ, t) for t > 0 is
$$E(x,\xi,t) = \frac{e^{-t}\,\exp\Big(-\frac{1}{2}\,\frac{1+e^{-4t}}{1-e^{-4t}}(x-\xi)^2 - \frac{1-e^{-2t}}{1+e^{-2t}}\,x\xi\Big)}{\sqrt{\pi(1-e^{-4t})}}.$$
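As a quick consistency check (not part of the paper), the series definition of the Mehler kernel can be compared numerically with the explicit formula above, generating the Hermite functions by the standard three-term recurrence $h_0(x)=\pi^{-1/4}e^{-x^2/2}$, $h_{k+1}(x)=x\sqrt{2/(k+1)}\,h_k(x)-\sqrt{k/(k+1)}\,h_{k-1}(x)$. The evaluation point and truncation order are arbitrary.

```python
import math

# Compare the Hermite-series definition of the Mehler kernel E(x, xi, t)
# with the closed form, using the standard recurrence for the
# L^2-normalized Hermite functions.
def hermite_functions(x, n):
    h = [math.pi ** (-0.25) * math.exp(-x * x / 2.0)]
    for k in range(n - 1):
        nxt = x * math.sqrt(2.0 / (k + 1)) * h[-1]
        if k >= 1:
            nxt -= math.sqrt(k / (k + 1.0)) * h[-2]
        h.append(nxt)
    return h

x, xi, t, n = 0.7, -0.3, 0.5, 60
hx, hxi = hermite_functions(x, n), hermite_functions(xi, n)
series = sum(math.exp(-(2 * k + 1) * t) * hx[k] * hxi[k] for k in range(n))

q2, q4 = math.exp(-2 * t), math.exp(-4 * t)
closed = (math.exp(-t)
          * math.exp(-0.5 * (1 + q4) / (1 - q4) * (x - xi) ** 2
                     - (1 - q2) / (1 + q2) * x * xi)
          / math.sqrt(math.pi * (1 - q4)))
print(series, closed)
assert abs(series - closed) < 1e-10
```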
We note that for each ξ ∈ ℝ, E(x, ξ, t) satisfies the Hermite heat equation. In
(Theorem 3.1, [2]), we proved that
$$U(x,t) = \int_0^{\infty}\{E(x,\xi,t) - E(x,-\xi,t)\}\,\varphi(\xi)\,d\xi \qquad(1.1)$$
is the unique bounded solution of the following Cauchy Dirichlet problem for the
Hermite heat equation
$$\begin{cases}\big(\frac{\partial}{\partial t}-\frac{\partial^2}{\partial x^2}+x^2\big)U(x,t) = 0, & x > 0,\ t > 0,\\ U(x,0) = \varphi(x), & x > 0,\\ U(0,t) = 0, & t > 0,\end{cases} \qquad(1.2)$$
Received 2012-1-10; Communicated by K. Saitô.
2000 Mathematics Subject Classification. Primary 33C45; Secondary 35K15.
Key words and phrases. Hermite functions, Mehler kernel, Hermite heat equation.
where φ is a continuous and bounded function on [0, ∞) with φ(0) = 0.
It is not necessary that every bounded solution of the Hermite heat equation
should satisfy a fixed growth behavior on its mth partial derivative with respect to
the space variable. However, since the solution U(x, t) in (1.1) is the unique solution
of (1.2), it is natural to make an effort to obtain a fixed growth estimate
on $\frac{\partial^m U(x,t)}{\partial x^m}$. But it is not as easy as one might anticipate. To find a growth estimate
on $\frac{\partial^m U(x,t)}{\partial x^m}$, we first require an estimate on $\frac{\partial^m E(x,\xi,t)}{\partial x^m}$. Note that an
estimate on the partial derivatives of the heat kernel
$$E(x,t) = \begin{cases}(4\pi t)^{-\frac{1}{2}}e^{-\frac{x^2}{4t}}, & t > 0,\\ 0, & t\le 0,\end{cases}$$
with respect to the space variable has been given in [3]:
$$\Big|\frac{\partial^m E(x,t)}{\partial x^m}\Big| \le C^m t^{-\frac{1+m}{2}}\,(m!)^{\frac{1}{2}}\,e^{-\frac{ax^2}{4t}}, \qquad t > 0,$$
where C is some constant and a can be taken as close as desired to 1 such that
0 < a < 1.
Though the estimates of the following types on the Mehler kernel for 0 < t < 1
and B independent of x, ξ and t,
$$\Big|\frac{\partial E(x,\xi,t)}{\partial x}\Big| \le C t^{-1} e^{-\frac{B}{t}|x-\xi|^2}, \qquad(1.3)$$
$$\Big|\frac{\partial^2 E(x,\xi,t)}{\partial x\,\partial\xi}\Big| \le C t^{-\frac{3}{2}} e^{-\frac{B}{t}|x-\xi|^2},$$
are provided in [4], the estimate on the partial derivatives of the Mehler kernel of
all orders with respect to the space variable is yet to be established.
Lemma 2.1, which gives an estimate on $\frac{\partial^m E(x,\xi,t)}{\partial x^m}$ for each nonnegative integer m,
is therefore a novelty of this paper; as an application it yields
$$t^{\frac{m}{2}}e^{-mt}\Big|\frac{\partial^m U(x,t)}{\partial x^m}\Big| \le M \quad\text{in } [0,\infty)\times[0,\infty)$$
for some constant M, the main objective and the final part of this paper.
2. Main Results
Lemma 2.1. Let E(x, ξ, t) be the Mehler kernel and $m\in\mathbb N_0$. Then for some
constants a with 0 < a < 1 and A := A(a) > 0,
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{\sqrt{m!}\;e^{(A+1)m}\,e^{mt}}{\sqrt{\pi}\,2^{1+\frac{m}{2}}\,t^{\frac{m+1}{2}}}\;e^{-\frac{ae^{-2t}(x-\xi)^2}{1-e^{-4t}}}.$$
Proof. By the Cauchy integral formula, we have
$$\frac{\partial^m E(x,\xi,t)}{\partial x^m} = \frac{m!}{2\pi i}\int_{\Gamma_R}\frac{E(\zeta,\xi,t)}{(\zeta-x)^{m+1}}\,d\zeta
= \frac{m!\,e^{-t}}{2\pi^{\frac{3}{2}} i}\int_{\Gamma_R}\frac{\exp\Big(-\frac{1}{2}\,\frac{1+e^{-4t}}{1-e^{-4t}}(\zeta-\xi)^2-\frac{1-e^{-2t}}{1+e^{-2t}}\zeta\xi\Big)}{(\zeta-x)^{m+1}(1-e^{-4t})^{\frac{1}{2}}}\,d\zeta,$$
where $\Gamma_R$ is a circle of radius R in the complex plane ℂ with center at x. With
$\zeta = x + Re^{i\theta}$, we have
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{m!\,e^{-t}}{2\pi^{\frac{3}{2}}R^m\sqrt{1-e^{-4t}}}\int_0^{2\pi}\Big|\exp\Big(-\frac{1}{2}\,\frac{1+e^{-4t}}{1-e^{-4t}}(x-\xi+Re^{i\theta})^2-\frac{1-e^{-2t}}{1+e^{-2t}}(x+Re^{i\theta})\xi\Big)\Big|\,d\theta.$$
Then, writing S for $x + R\cos\theta$, we have
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{m!\,e^{-t}}{2\pi^{\frac{3}{2}}R^m\sqrt{1-e^{-4t}}}\int_0^{2\pi}e^{-\big(\frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}\{\xi-S\}^2+\frac{1-e^{-2t}}{1+e^{-2t}}\xi S\big)}\;e^{\frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}R^2}\,d\theta.$$
Let $P = \frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}$ and $Q = \frac{1-e^{-2t}}{1+e^{-2t}}$. Then P > 0 and Q > 0 since t is positive. Now,
using the inequality
$$P\{\xi-(x+R\cos\theta)\}^2 + Q\,\xi(x+R\cos\theta) \ge \Big(P-\frac{Q}{2}\Big)\{\xi-(x+R\cos\theta)\}^2,$$
we have
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{m!\,e^{-t}}{2\pi^{\frac{3}{2}}R^m\sqrt{1-e^{-4t}}}\int_0^{2\pi}e^{-\frac{e^{-2t}}{1-e^{-4t}}(x-\xi+R\cos\theta)^2+\frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}R^2}\,d\theta$$
$$\le \frac{m!\,e^{-t}}{\sqrt{\pi}\,R^m\sqrt{1-e^{-4t}}}\;e^{-\frac{e^{-2t}}{1-e^{-4t}}\tilde x^2+\frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}R^2},$$
where $\tilde x = x-\xi-R$ or 0 or $x-\xi+R$. Since the ratio $e^{bR^2}/R^m$ attains its
minimum at $R^2 = \frac{m}{2b}$ where $b = \frac{1}{2}\frac{1+e^{-4t}}{1-e^{-4t}}$, we have
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{m!\,e^{-t}\,e^{\frac{m}{2}}}{\sqrt{\pi}\sqrt{1-e^{-4t}}\,m^{\frac{m}{2}}}\left(\frac{1+e^{-4t}}{1-e^{-4t}}\right)^{\frac{m}{2}}\exp\Big(-\frac{e^{-2t}}{1-e^{-4t}}\tilde x^2\Big). \qquad(2.1)$$
But with 0 < a < 1 and $|\beta|\le 1$,
$$e^{-\frac{e^{-2t}}{1-e^{-4t}}(x-\xi+\beta R)^2} = e^{-\frac{ae^{-2t}}{1-e^{-4t}}(x-\xi)^2}\,e^{-\frac{e^{-2t}}{1-e^{-4t}}\big[(1-a)(x-\xi)^2+2(x-\xi)\beta R+\beta^2R^2\big]}$$
$$= e^{-\frac{ae^{-2t}}{1-e^{-4t}}(x-\xi)^2}\,e^{-\frac{(1-a)e^{-2t}}{1-e^{-4t}}\big(x-\xi+\frac{\beta R}{1-a}\big)^2}\,e^{\frac{Ae^{-2t}}{1-e^{-4t}}\beta^2R^2}
\le e^{-\frac{ae^{-2t}}{1-e^{-4t}}(x-\xi)^2}\,e^{\frac{Ae^{-2t}}{1-e^{-4t}}R^2},$$
where $A = \frac{a}{1-a}$. Then clearly
$$e^{-\frac{e^{-2t}}{1-e^{-4t}}\tilde x^2} \le e^{-\frac{ae^{-2t}}{1-e^{-4t}}(x-\xi)^2}\,e^{\frac{Ae^{-2t}}{1-e^{-4t}}R^2}.$$
Using $R^2 = \frac{m(1-e^{-4t})}{1+e^{-4t}}$ and the inequalities $\frac{e^{-2t}}{1+e^{-4t}}\le\frac{1}{2}$ and $\frac{1+e^{-4t}}{1-e^{-4t}}\le\frac{e^{2t}}{2t}$ for every
t > 0, (2.1) reduces to
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{\sqrt{m!}\;e^{(A+1)m}}{\sqrt{\pi}}\,\frac{e^{-t}}{\sqrt{1-e^{-4t}}}\,\frac{e^{mt}}{2^{\frac{m}{2}}t^{\frac{m}{2}}}\;e^{-\frac{ae^{-2t}(x-\xi)^2}{1-e^{-4t}}}. \qquad(2.2)$$
Furthermore, since
$$\frac{e^{-t}}{\sqrt{1-e^{-4t}}} \le \frac{1}{2\sqrt{t}}$$
for every t > 0, we obtain
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{\sqrt{m!}\;e^{(A+1)m}\,e^{mt}}{\sqrt{\pi}\,2^{1+\frac{m}{2}}\,t^{\frac{m+1}{2}}}\;e^{-\frac{ae^{-2t}(x-\xi)^2}{1-e^{-4t}}}. \qquad(2.3)$$
This completes the proof.
Remark 2.2. For 0 < t < 1, in view of (2.3) and $-\frac{e^{-2t}}{1-e^{-4t}}\le-\frac{1}{8t}$, it is easy to see
that
$$\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big| \le \frac{\sqrt{m!}\;e^{(A+3)m}}{\sqrt{\pi}\,2^{1+\frac{m}{2}}\,t^{\frac{m+1}{2}}}\;e^{-\frac{a(x-\xi)^2}{8t}},$$
which extends the estimate (1.3) to the order m > 1 of the partial derivative
of E(x, ξ, t) with respect to the variable x.
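The elementary inequality invoked in Remark 2.2 amounts to $1/(2\sinh 2t) \ge 1/(8t)$ on (0, 1), since $e^{-2t}/(1-e^{-4t}) = 1/(e^{2t}-e^{-2t})$. This can be confirmed numerically (an illustrative check, not from the paper):

```python
import math

# Check e^{-2t}/(1 - e^{-4t}) = 1/(2*sinh(2t)) >= 1/(8t) on a grid in (0, 1),
# i.e. -e^{-2t}/(1 - e^{-4t}) <= -1/(8t) as used in Remark 2.2.
for i in range(1, 1000):
    t = i / 1000.0
    lhs = math.exp(-2 * t) / (1 - math.exp(-4 * t))
    assert lhs >= 1 / (8 * t) - 1e-12, t
print("inequality holds on the sampled grid")
```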
Theorem 2.3. Every bounded solution of the Cauchy Dirichlet problem for the
Hermite heat equation
$$\begin{cases}\big(\frac{\partial}{\partial t}-\frac{\partial^2}{\partial x^2}+x^2\big)U(x,t) = 0, & x > 0,\ t > 0,\\ U(x,0) = \varphi(x), & x > 0,\\ U(0,t) = 0, & t > 0,\end{cases} \qquad(2.4)$$
satisfies the following growth estimate
$$t^{\frac{m}{2}}e^{-mt}\Big|\frac{\partial^m U(x,t)}{\partial x^m}\Big| \le M \quad\text{in } [0,\infty)\times[0,\infty),$$
where $m\in\mathbb N_0$ and M is some constant.
Proof. From (Theorem 3.1, [2]), every bounded solution of the Cauchy Dirichlet
problem (2.4) for the Hermite heat equation is of the form
$$U(x,t) = \int_0^{\infty}\{E(x,\xi,t)-E(x,-\xi,t)\}\,\varphi(\xi)\,d\xi,$$
where φ is a continuous and bounded function on [0, ∞) with φ(0) = 0 and
E(x, ξ, t) is the Mehler kernel. We write
$$U(x,t) = \int_0^{\infty}\{E(x,\xi,t)-E(x,-\xi,t)\}\,\varphi(\xi)\,d\xi = \int_{\mathbb R}E(x,\xi,t)\,h(\xi)\,d\xi,$$
where
$$h(\xi) = \begin{cases}\varphi(\xi), & \xi\ge 0,\\ -\varphi(-\xi), & \xi < 0.\end{cases}$$
From (2.2), we have
$$\Big|\frac{\partial^m U(x,t)}{\partial x^m}\Big| \le \int_{\mathbb R}\Big|\frac{\partial^m E(x,\xi,t)}{\partial x^m}\Big|\,|h(\xi)|\,d\xi
\le \frac{\|h\|_{\infty}\sqrt{m!}\;e^{(A+1)m}\,e^{(m-1)t}}{\sqrt{\pi}\sqrt{1-e^{-4t}}\,(2t)^{\frac{m}{2}}}\int_{\mathbb R}e^{-\frac{ae^{-2t}(x-\xi)^2}{1-e^{-4t}}}\,d\xi.$$
Under the change of variable $\frac{\sqrt{a}\,e^{-t}}{\sqrt{1-e^{-4t}}}(\xi-x) = s$ and integrating, we have
$$\Big|\frac{\partial^m U(x,t)}{\partial x^m}\Big| \le \frac{\|h\|_{\infty}\sqrt{m!}\;e^{(A+1)m}\,e^{mt}}{2^{\frac{m}{2}}\sqrt{a}\;t^{\frac{m}{2}}}.$$
Clearly
$$t^{\frac{m}{2}}e^{-mt}\Big|\frac{\partial^m U(x,t)}{\partial x^m}\Big| \le M \quad\text{in } [0,\infty)\times[0,\infty)$$
if we take $M = \frac{\|h\|_{\infty}\sqrt{m!}\;e^{(A+1)m}}{2^{\frac{m}{2}}\sqrt{a}}$.
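The Gaussian integral evaluated by the substitution in the proof can be checked by direct quadrature; the values of a, t and x below are arbitrary illustrative choices, not from the paper.

```python
import math

# Check the Gaussian integral used in the proof:
#   ∫_R exp(-a e^{-2t} (x - xi)^2 / (1 - e^{-4t})) d(xi)
#     = sqrt(pi) * sqrt(1 - e^{-4t}) * e^t / sqrt(a),
# obtained via the substitution sqrt(a)*e^{-t}*(xi - x)/sqrt(1-e^{-4t}) = s.
a, t, x = 0.9, 0.4, 0.2
c = a * math.exp(-2 * t) / (1 - math.exp(-4 * t))

n, L = 200000, 60.0          # midpoint rule on [x - L, x + L]
h = 2 * L / n
num = sum(math.exp(-c * ((x - L + (i + 0.5) * h - x) ** 2))
          for i in range(n)) * h
exact = math.sqrt(math.pi * (1 - math.exp(-4 * t))) * math.exp(t) / math.sqrt(a)
print(num, exact)
assert abs(num - exact) < 1e-6
```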
References
1. Dhungana, B. P.: An example of nonuniqueness of the Cauchy problem for the Hermite heat equation, Proc. Japan Acad. 81, Ser. A, no. 3 (2005), 37–39.
2. Dhungana, B. P. and Matsuzawa, T.: An existence result of the Cauchy Dirichlet problem for the Hermite heat equation, Proc. Japan Acad. 86, Ser. A, no. 2 (2010), 45–47.
3. Matsuzawa, T.: A calculus approach to hyperfunctions. II, Trans. Amer. Math. Soc. 313, no. 2 (1989), 619–654.
4. Thangavelu, S.: Lectures on Hermite and Laguerre Expansions, Princeton University Press, Princeton, 1993.
Bishnu Prasad Dhungana: Department of Mathematics, Mahendra Ratna Campus, Tribhuvan University, Kathmandu, Nepal
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 409-419
www.serialspublications.com
FEYNMAN-KAC FORMULA FOR THE SOLUTION OF
CAUCHY'S PROBLEM WITH TIME DEPENDENT
LÉVY GENERATOR
AROLDO PÉREZ
Abstract. We exploit an equivalence between the well-posedness of the homogeneous Cauchy problem for a time dependent Lévy generator L, and the
well-posedness of the martingale problem for L, to obtain the Feynman-Kac
representation of the solution of
$$\frac{\partial u(t,x)}{\partial t} = L(t)u(t,x) + c(t,x)u(t,x), \quad t > 0,\ x\in\mathbb R^d,$$
$$u(0,x) = \varphi(x), \quad \varphi\in C_0^2(\mathbb R^d),$$
where c is a bounded continuous function.
1. Introduction
Solutions of many partial differential equations can be represented as expectation functionals of stochastic processes; such representations are known as Feynman-Kac formulas. See [8],
[12] and [13] for pioneering work on these representations.
Feynman-Kac formulas are useful to investigate properties of partial differential equations in terms of appropriate stochastic models, as well as to study probabilistic properties of Markov processes by means of related partial differential
equations; see e.g. [9] for the case of diffusion processes. Feynman-Kac formulas
naturally arise in the potential theory for Schrödinger equations [4], in systems of
relativistic interacting particles with an electromagnetic field [11], and in mathematical finance [6], where they provide a bridge between the probabilistic and
the PDE representations of pricing formulae. In recent years there has been a
growing interest in the use of Lévy processes to model market behaviours (see
e.g. [1] and references therein). This leads one to consider Feynman-Kac formulas
for Cauchy problems with Lévy generators. Also, Feynman-Kac representations
have been used recently to determine conditions under which positive solutions
of semi-linear equations exhibit finite-time blow up; see [2] for the autonomous
case and [14] and [17] for the nonautonomous one. A well-known reference on
the interplay between the Cauchy problem for second-order differential operators
and the martingale problem for diffusion processes is [20]. In particular, in [20]
it is proved that existence and uniqueness for the Cauchy problem associated to
Received 2012-1-31; Communicated by V. Perez-Abreu.
2000 Mathematics Subject Classification. 60H30, 60G51, 60G46, 35K55.
Key words and phrases. Feynman-Kac formula, Lévy generator, martingale problem, Cauchy problem.
a diffusion operator is equivalent to existence and uniqueness of the martingale
problem for the same operator.
The purpose of this paper is to prove that such an equivalence also holds for
equations with Lévy generators and their corresponding martingale problems, and
to provide a Feynman-Kac representation of the solution of the Cauchy problem.
Let $C_0^2(\mathbb R^d)$ be the space of $C^2$-functions $\varphi:\mathbb R^d\to\mathbb R$ vanishing at infinity.
The Lévy generators we consider here are given by the expression
$$L(t)\varphi(x) = \frac{1}{2}\sum_{i,j=1}^d a_{ij}(t,x)\frac{\partial^2\varphi(x)}{\partial x_i\partial x_j} + \sum_{i=1}^d b_i(t,x)\frac{\partial\varphi(x)}{\partial x_i} + \int_{\mathbb R^d}\left(\varphi(x+y)-\varphi(x)-\frac{\langle y,\nabla\varphi(x)\rangle}{1+|y|^2}\right)\mu(t,x,dy), \qquad(1.1)$$
$t\ge 0$, $\varphi\in C_0^2(\mathbb R^d)$, where $\langle x,y\rangle = \sum_{i=1}^d x_iy_i$ and $a:[0,\infty)\times\mathbb R^d\to S_d^+$. Here $S_d^+$
is the space of symmetric, non-negative definite, square real matrices of order d,
$b:[0,\infty)\times\mathbb R^d\to\mathbb R^d$, and $\mu(t,x,\cdot)$ is a Lévy measure, that is, $\mu(t,x,\cdot)$ is a σ-finite
measure on $\mathbb R^d$ that satisfies $\mu(t,x,\{0\}) = 0$ and $\int_{\mathbb R^d}|y|^2(1+|y|^2)^{-1}\mu(t,x,dy) < \infty$
for all $t\ge 0$ and $x\in\mathbb R^d$. These operators represent the infinitesimal generators
of the most general stochastically continuous, $\mathbb R^d$-valued Markov processes with
independent increments. We establish the equivalence (see Theorem 4.3 below)
between the existence and uniqueness of solutions of the homogeneous Cauchy
problem
$$\frac{\partial u(t,x)}{\partial t} = L(t)u(t,x), \quad t > s\ge 0,\ x\in\mathbb R^d, \qquad(1.2)$$
$$u(s,x) = \varphi_s(x), \quad \varphi_s\in C_0^2(\mathbb R^d),$$
and that of solutions of the martingale problem for $\{L(t)\}_{t\ge 0}$ on $C_0^2(\mathbb R^d)$. In
order to achieve this, we use some ideas introduced in [20], several properties
of the Howland semigroup (see e.g. [3]), and some results about the martingale
problem given in [7]. By means of this equivalence, using the Howland evolution
semigroup (whose definition is based on the classical idea of considering "time" to
be a new variable in order to transform a nonautonomous Cauchy problem into an
autonomous one) and Theorem 9.7, p. 298 from [5] (in the autonomous case), we
are able to prove (see Theorem 5.1 below) that the solution of the Cauchy problem
(5.1), given below, admits the representation
$$u(t,x) = E_x\left(\varphi(X(t))\exp\Big(\int_0^t c(t-s,X(s))\,ds\Big)\right),$$
where $X\equiv\{X(t)\}_{t\ge 0}$ is a strong Markov process on $\mathbb R^d$ with respect to the
filtration $\mathcal G_t^X = \mathcal F_{t+}^X$, which is right continuous and quasi-left continuous, and
solves the martingale problem for $\{L(t)\}_{t\ge 0}$ on $C_0^2(\mathbb R^d)$. Here $E_x$ denotes the
expectation with respect to the process X starting at x.
FEYNMAN-KAC FORMULA
2. Non-negativity of Solutions
Let us consider the Lévy generator defined in (1.1). It is known (see e.g. [18]) that the space C_c^∞(R^d) of continuous functions g : R^d → R, having compact support and possessing continuous derivatives of all orders, is a core for the common domain D of the family of linear operators {L(t)}_{t≥0}, and that C_0^2(R^d) ⊂ D. Notice that {L(t)}_{t≥0} satisfies the positive maximum principle, namely:

$$\text{If } \varphi(x) = \sup_{y\in\mathbb{R}^d}\varphi(y) \ge 0,\ \varphi \in D, \text{ then } L(t)\varphi(x) \le 0 \text{ for all } t \ge 0. \qquad (2.1)$$
In fact, if φ(x) = sup_{y∈R^d} φ(y) ≥ 0, then ∇φ(x) = 0 and (∂²φ(x)/∂x_i∂x_j)_{1≤i,j≤d} is a symmetric non-positive definite matrix. Since

$$L(t)\varphi(x) = \frac{1}{2}\,\mathrm{trace}\big(a(t,x)\,H\varphi(x)\big) + \langle b(t,x),\nabla\varphi(x)\rangle + \int_{\mathbb{R}^d}\left[\varphi(x+y) - \varphi(x) - \frac{\langle y,\nabla\varphi(x)\rangle}{1+|y|^2}\right]\mu(t,x,dy),$$

where b(t,x) = (b_i(t,x))_{1≤i≤d}, a(t,x) = (a_{ij}(t,x))_{1≤i,j≤d} and Hφ(x) is the Hessian matrix, we have L(t)φ(x) ≤ 0, because trace(AB) ≤ 0 if A ∈ S_d^+ and B ∈ S_d^−, and by assumption φ(x+y) − φ(x) ≤ 0 for all y ∈ R^d.
We note also that (2.1) implies that L(t)φ ≤ 0 for any nonnegative constant function φ ∈ D. In fact, here L(t)φ = 0 for all functions which do not depend on space.
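As a quick numerical aside (not part of the original argument), the positive maximum principle (2.1) can be checked for a concrete one-dimensional generator; the diffusion coefficient a = 1, the drift b = 0.7, the Gaussian jump measure, and the test function φ(x) = e^{−x²} below are all illustrative assumptions. At the global maximum x = 0 the drift term vanishes, the diffusion term is negative, and every jump lowers φ:

```python
import numpy as np

# Positive maximum principle (2.1), illustrated in d = 1:
#   L phi(x) = 0.5*a*phi''(x) + b*phi'(x)
#              + int [phi(x+y) - phi(x) - y*phi'(x)/(1+y^2)] mu(dy),
# with mu a standard Gaussian density (a finite Levy measure).

phi = lambda x: np.exp(-x**2)
phi1 = lambda x: -2.0 * x * np.exp(-x**2)            # phi'
phi2 = lambda x: (4.0 * x**2 - 2.0) * np.exp(-x**2)  # phi''

def L_phi(x, a=1.0, b=0.7):
    y = np.linspace(-8.0, 8.0, 40001)
    mu = np.exp(-y**2 / 2.0) / np.sqrt(2.0 * np.pi)  # jump intensity
    integrand = (phi(x + y) - phi(x) - y * phi1(x) / (1.0 + y**2)) * mu
    jump = np.sum(integrand) * (y[1] - y[0])         # Riemann sum
    return 0.5 * a * phi2(x) + b * phi1(x) + jump

print(L_phi(0.0))  # negative, as (2.1) requires at a nonnegative maximum
```

With these choices the value is 1/√3 − 2 ≈ −1.423: the diffusion term contributes −1 and the jump integral contributes 1/√3 − 1 < 0.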
Let us now turn to the differential equation (1.2).
We are going to assume that the homogeneous Cauchy problem (1.2) is well-posed, and that the evolution family {U(t,s)}_{t≥s≥0} that solves (1.2) is an evolution family of contractions, i.e., a family of operators on C_0(R^d) such that:
(i) U(t,s) = U(t,r)U(r,s) and U(t,t) = I for all t ≥ r ≥ s ≥ 0 (here I is the identity operator);
(ii) for each φ ∈ C_0(R^d), the function (t,s) → U(t,s)φ is continuous for t ≥ s ≥ 0;
(iii) ‖U(t,s)‖ ≤ 1 for all t ≥ s ≥ 0.
Proposition 2.1. Assume that u is a classical solution of (1.2) such that u(t,·) ∈ C_0^2(R^d) for all t ≥ s ≥ 0, and that 0 ≤ φ_s ≡ φ ∈ C_0^2(R^d). Then u(t,x) ≥ 0 for all t ≥ s ≥ 0 and x ∈ R^d.

Proof. Suppose that for some r > s,

$$\inf_{x\in\mathbb{R}^d} u(r,x) = -c < 0.$$

Fix T > r and let δ > 0 be such that δ < c/(T−s). Define on [s,T] × R^d the function

$$v_\delta(t,x) = u(t,x) + \delta(t-s),$$

which coincides with u when t = s. Clearly, this function has a negative infimum on [s,T] × R^d and tends to the positive constant δ(t−s) as |x| → ∞ when t ∈
(s,T] is fixed. Consequently, v_δ has a global (negative) minimum at some point (t₀,x₀) ∈ (s,T] × R^d. This implies that

$$\frac{\partial v_\delta}{\partial t}(t_0,x_0) \le 0,$$

and by the positive maximum principle (2.1),

$$L(t_0)\,v_\delta(t_0,x_0) \ge 0.$$

Therefore

$$\left(\frac{\partial v_\delta}{\partial t} - L(t_0)\,v_\delta\right)(t_0,x_0) \le 0.$$

On the other hand, since u solves (1.2),

$$\left(\frac{\partial v_\delta}{\partial t} - L(t_0)\,v_\delta\right)(t_0,x_0) = \left(\frac{\partial u}{\partial t} - L(t_0)\,u\right)(t_0,x_0) + \delta - (t_0-s)\,\big(L(t_0)\delta\big)(x_0) \ge \delta,$$

since the first term vanishes and L(t₀) annihilates constant functions. This contradicts the previous inequality and proves the proposition.
Corollary 2.2. The differential equation (1.2) can have at most one solution u such that u(t,·) ∈ C_0^2(R^d), t ≥ s ≥ 0.
3. Markov Process Associated to the Evolution Family
From Proposition 2.1 we deduce that U(t,s) is a nonnegative contraction on C_0(R^d) for t ≥ s ≥ 0, i.e., {U(t,s)}_{t≥s≥0} is an evolution family of contractions such that U(t,s)φ ≥ 0 for each φ ≥ 0. This follows from the fact that C_0^2(R^d) is dense in C_0(R^d), and that, by definition of {U(t,s)}_{t≥s≥0}, for any φ_s ∈ C_0^2(R^d) the function

$$u(t,x) = U(t,s)\,\varphi_s(x)$$

is a solution of (1.2) such that u(t,·) ∈ C_0^2(R^d). Thus, by the Riesz representation theorem for nonnegative functionals on C_0(R^d) (see e.g. [21], p. 5), we have that for each t ≥ s ≥ 0 and x ∈ R^d there exists a measure P(s,x,t,·) on the Borel σ-field B(R^d) of R^d, such that P(s,x,t,R^d) ≤ 1 and

$$U(t,s)\,\varphi(x) = \int_{\mathbb{R}^d} \varphi(y)\,P(s,x,t,dy), \quad \varphi \in C_0(\mathbb{R}^d).$$
Since {U(t,s)}_{t≥s≥0} is an evolution family, in order to prove that P(s,x,t,Γ), t ≥ s ≥ 0, x ∈ R^d, Γ ∈ B(R^d), is a transition probability function, it suffices to show that P(s,x,t,·) is a probability measure for all t ≥ s ≥ 0, x ∈ R^d. To this end, from the evolution family {U(t,s)}_{t≥s≥0} on C_0(R^d) we define the family of operators {T(t)}_{t≥0} on C_0([0,∞) × R^d) by

$$(T(t)f)(r,x) = \begin{cases} U(r, r-t)\,f(r-t, x), & r > t \ge 0,\ x \in \mathbb{R}^d, \\ U(r, 0)\,f(0, x), & 0 \le r \le t. \end{cases} \qquad (3.1)$$
Notice that {T(t)}_{t≥0} is a positivity-preserving, strongly continuous semigroup of contractions on C_0([0,∞) × R^d), which is called the Howland semigroup of
{U(t,s)}_{t≥s≥0}. Let us denote by A the infinitesimal generator of {T(t)}_{t≥0}, and define the operator Â by

$$\hat{A}f(r,x) = -\frac{\partial f(r,x)}{\partial r} + L(r)\,f(r,x), \quad r \ge 0,\ x \in \mathbb{R}^d, \qquad (3.2)$$
whose domain is the space of functions f, differentiable in t, such that f(t,·) ∈ C_0^2(R^d) and Âf ∈ C_0([0,∞) × R^d). Let us denote by D the linear span of all functions f ∈ C_0([0,∞) × R^d) of the form

$$f(t,x) \equiv f_{\alpha,\varphi}(t,x) = \alpha(t)\,\varphi(x), \quad \alpha \in C_c^1([0,\infty)) \text{ and } \varphi \in C_0^2(\mathbb{R}^d), \qquad (3.3)$$

where C_c^1([0,∞)) is the space of continuous functions on [0,∞) having compact support and continuous first derivative. Then

$$(T(t)f_{\alpha,\varphi})(r,x) = \begin{cases} \alpha(r-t)\,U(r, r-t)\,\varphi(x), & r > t \ge 0,\ x \in \mathbb{R}^d, \\ \alpha(0)\,U(r, 0)\,\varphi(x), & 0 \le r \le t,\ x \in \mathbb{R}^d. \end{cases}$$
Thus,

$$(Af_{\alpha,\varphi})(r,x) = \frac{d}{dt}\,(T(t)f_{\alpha,\varphi})(r,x)\Big|_{t=0} = -\alpha'(r)\,\varphi(x) + \alpha(r)\,L(r)\varphi(x) = -\frac{\partial f_{\alpha,\varphi}(r,x)}{\partial r} + L(r)\,f_{\alpha,\varphi}(r,x) = \hat{A}f_{\alpha,\varphi}(r,x). \qquad (3.4)$$

Since D is dense in C_0([0,∞) × R^d), this proves that the operator A is the closure of Â|_D.
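The Howland construction (3.1) can be sketched numerically for a scalar nonautonomous problem; everything concrete below (the rate λ(r) = −(1+r) and the test points) is an illustrative assumption, not part of the paper. For u′(t) = λ(t)u(t) the evolution family is U(t,s) = exp(∫_s^t λ(r) dr), a contraction when λ ≤ 0, and the semigroup property T(t₁)T(t₂) = T(t₁+t₂) of the operators (3.1) can be checked pointwise:

```python
import math

# Scalar sketch of the Howland semigroup (3.1) for u'(t) = lam(t)*u(t),
# lam(r) = -(1 + r), so U(t, s) = exp(int_s^t lam(r) dr) <= 1.

def lam_integral(s, t):
    # integral of lam(r) = -(1 + r) over [s, t], computed in closed form
    return -((t - s) + 0.5 * (t ** 2 - s ** 2))

def U(t, s):
    return math.exp(lam_integral(s, t))

def T(t, f):
    """Howland semigroup acting on functions f(r, x), as in (3.1)."""
    def Tf(r, x):
        if r > t:
            return U(r, r - t) * f(r - t, x)
        return U(r, 0.0) * f(0.0, x)
    return Tf

f = lambda r, x: math.cos(r) * x          # arbitrary test function
lhs = T(1.3, T(0.4, f))(2.0, 5.0)         # T(t1) T(t2) f
rhs = T(1.7, f)(2.0, 5.0)                 # T(t1 + t2) f
print(abs(lhs - rhs))                     # ~0: semigroup property holds
```

The check works because U(r, r−t₁) U(r−t₁, r−t₁−t₂) = U(r, r−t₁−t₂), which is exactly the evolution-family identity (i) behind (3.1).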
Notice that the infinitesimal generator A of the Howland semigroup {T(t)}_{t≥0} is conservative. Hence {T(t)}_{t≥0} is a Feller semigroup, i.e. {T(t)}_{t≥0} is a positivity-preserving, strongly continuous semigroup of contractions on C_0([0,∞) × R^d) whose infinitesimal generator is conservative. Therefore (see [7], Theorem 2.7, p. 169), there exists a time-homogeneous Markov process {Z(t)}_{t≥0} with state space [0,∞) × R^d and sample paths in the Skorohod space D_{[0,∞)×R^d}[0,∞), such that

$$(T(t)f)(r,x) = \int_{[0,\infty)\times\mathbb{R}^d} f(\upsilon,y)\,P\big(t,(r,x),d(\upsilon,y)\big), \quad f \in C_0([0,\infty)\times\mathbb{R}^d),$$

where P(t,(r,x),Γ), t ≥ 0, (r,x) ∈ [0,∞) × R^d, Γ ∈ B([0,∞) × R^d), is a transition probability function for {Z(t)}_{t≥0}. Recalling that

$$U(t,s)\,\varphi(x) = \int_{\mathbb{R}^d} \varphi(y)\,P(s,x,t,dy), \quad \varphi \in C_0(\mathbb{R}^d),$$
we obtain, by definition of {T(t)}_{t≥0}, that for any r > t ≥ 0,

$$\int_{[0,\infty)\times\mathbb{R}^d} f(\upsilon,y)\,P(t,(r,x),d\upsilon\,dy) = \int_{\mathbb{R}^d} f(r-t,y)\,P(r-t,x,r,dy) = \int_{[0,\infty)\times\mathbb{R}^d} f(\upsilon,y)\,P(r-t,x,r,dy)\,\delta_{r-t}(d\upsilon), \quad f \in C_0([0,\infty)\times\mathbb{R}^d),$$

where δ_l denotes the measure with unit mass at l, and for any 0 ≤ r ≤ t,

$$\int_{[0,\infty)\times\mathbb{R}^d} f(\upsilon,y)\,P(t,(r,x),d\upsilon\,dy) = \int_{\mathbb{R}^d} f(0,y)\,P(0,x,r,dy) = \int_{[0,\infty)\times\mathbb{R}^d} f(\upsilon,y)\,P(0,x,r,dy)\,\delta_0(d\upsilon), \quad f \in C_0([0,\infty)\times\mathbb{R}^d).$$

Therefore

$$P\big(t,(r,x),C\times\Gamma\big) = \begin{cases} P(r-t,x,r,\Gamma)\,\delta_{r-t}(C), & r > t \ge 0, \\ P(0,x,r,\Gamma)\,\delta_0(C), & 0 \le r \le t, \end{cases} \qquad (3.5)$$

where C ∈ B([0,∞)) and Γ ∈ B(R^d).
Since P(t,(r,x),·) is a probability measure on ([0,∞) × R^d, B([0,∞) × R^d)), it follows from (3.5) that P(s,x,t,·) is a probability measure on (R^d, B(R^d)). Thus, if there exists an evolution family of contractions {U(t,s)}_{t≥s≥0} on C_0(R^d) that solves the homogeneous Cauchy problem (1.2), then there also exists a transition probability function

$$P(s,x,t,\Gamma), \quad t \ge s \ge 0,\ x \in \mathbb{R}^d,\ \Gamma \in \mathcal{B}(\mathbb{R}^d), \qquad (3.6)$$

such that

$$U(t,s)\,\varphi(x) = \int_{\mathbb{R}^d} \varphi(y)\,P(s,x,t,dy), \quad \varphi \in C_0(\mathbb{R}^d).$$
4. The Cauchy Problem and the Martingale Problem
Let {X(t)}_{t≥0} be an R^d-valued Markov process with sample paths in D_{R^d}[0,∞), and with the transition probability function given by (3.6) (see [10], Theorem 3, p. 79).

Lemma 4.1. For any φ ∈ C_0^2(R^d) and s ≥ 0,

$$M(t) = \varphi(X(t)) - \int_s^t L(\upsilon)\,\varphi(X(\upsilon))\,d\upsilon, \quad t \ge s,$$

is a martingale after time s with respect to the filtration F_t^X = σ(X(υ), υ ≤ t).
Proof. Let φ ∈ C_0^2(R^d) and t > r ≥ s. Then, almost surely,

$$
\begin{aligned}
E\big[M(t)\,\big|\,\mathcal{F}_r^X\big] &= \int_{\mathbb{R}^d}\varphi(y)\,P(r,X(r),t,dy) - \int_r^t\int_{\mathbb{R}^d} L(\upsilon)\varphi(y)\,P(r,X(r),\upsilon,dy)\,d\upsilon - \int_s^r L(\upsilon)\varphi(X(\upsilon))\,d\upsilon \\
&= U(t,r)\,\varphi(X(r)) - \int_r^t U(\upsilon,r)\,L(\upsilon)\varphi(X(r))\,d\upsilon - \int_s^r L(\upsilon)\varphi(X(\upsilon))\,d\upsilon \\
&= U(t,r)\,\varphi(X(r)) - \int_r^t \frac{\partial U(\upsilon,r)\,\varphi(X(r))}{\partial\upsilon}\,d\upsilon - \int_s^r L(\upsilon)\varphi(X(\upsilon))\,d\upsilon \\
&= U(t,r)\,\varphi(X(r)) - \big[U(t,r)\,\varphi(X(r)) - \varphi(X(r))\big] - \int_s^r L(\upsilon)\varphi(X(\upsilon))\,d\upsilon \\
&= \varphi(X(r)) - \int_s^r L(\upsilon)\varphi(X(\upsilon))\,d\upsilon = M(r).
\end{aligned}
$$
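The martingale property of Lemma 4.1 can be probed by simulation in the simplest concrete case (all choices below, including Brownian motion as X, L = ½ d²/dx², φ(x) = e^{−x²}, the step size and the tolerance, are illustrative assumptions): E[M(t)] must stay equal to M(0) = φ(X(0)).

```python
import numpy as np

# Monte Carlo check that M(t) = phi(X(t)) - int_0^t L phi(X(v)) dv has
# constant expectation for X Brownian motion and L = (1/2) d^2/dx^2.
rng = np.random.default_rng(0)

phi = lambda x: np.exp(-x**2)
Lphi = lambda x: 0.5 * (4.0 * x**2 - 2.0) * np.exp(-x**2)  # (1/2) phi''

n_paths, n_steps, t = 100_000, 200, 1.0
dt = t / n_steps
x = np.zeros(n_paths)            # X(0) = 0 for every path
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += Lphi(x) * dt     # left-point rule for the time integral
    x += np.sqrt(dt) * rng.standard_normal(n_paths)

M = phi(x) - integral
print(M.mean())  # close to phi(0) = 1
```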
Let {Y(t)}_{t≥0} be the time-homogeneous Markov process on [0,∞) with transition probabilities

$$P(t,r,\Gamma) = \delta_{r-t}(\Gamma), \quad \Gamma \in \mathcal{B}([0,\infty)),\ t, r \ge 0.$$

Then the Markov semigroup {V(t)}_{t≥0} associated to {Y(t)}_{t≥0} is given by

$$V(t)f(r) = f(r-t), \quad r \ge t \ge 0,$$

and its generator Q satisfies Qf = −f′ on the space D(Q) = C_c^1([0,∞)).
Lemma 4.2. Let {X(t)}_{t≥0} and {Y(t)}_{t≥0} be as above. Then {(Y(t), X(t))}_{t≥0} is a Markov process with values in [0,∞) × R^d which has the same distribution as the Markov process {Z(t)}_{t≥0} whose semigroup is given by (3.1).
Proof. We consider the space of functions D defined in (3.3). Since the Markov process {Y(t)}_{t≥0} is a solution of the martingale problem for Q and, by Lemma 4.1, the Markov process {X(t)}_{t≥0} is a solution of the martingale problem for {L(t)}_{t≥0} on C_0^2(R^d), we have (see [7], Theorem 10.1, p. 253) that the Markov process {(Y(t), X(t))}_{t≥0} is a solution of the martingale problem for Â|_D, where Â is the operator defined in (3.2). This implies (see [7], Theorem 4.1, p. 182) that the processes {Z(t)}_{t≥0} and {(Y(t), X(t))}_{t≥0} have the same finite-dimensional distributions. But, since these processes have sample paths in D_{[0,∞)×R^d}[0,∞), it follows (see [7], Corollary 4.3, p. 186) that they have the same distribution on D_{[0,∞)×R^d}[0,∞).
Theorem 4.3. There exists an evolution family of contractions {U(t,s)}_{t≥s≥0} on C_0(R^d), which is unique and solves the homogeneous Cauchy problem (1.2), if and only if there exists a Markov process {X(t)}_{t≥0} on R^d, unique in distribution, that solves the martingale problem for {L(t)}_{t≥0} on C_0^2(R^d).

Proof. The necessity follows from Lemma 4.1. Assume that {X(t)}_{t≥0} is a Markov process on R^d, unique in distribution, that solves the martingale problem for {L(t)}_{t≥0} on C_0^2(R^d). Let P(s,x,t,Γ) be a transition function for {X(t)}_{t≥0}, and let

$$U(t,s)f(x) = \int_{\mathbb{R}^d} P(s,x,t,dy)\,f(y), \quad t \ge s \ge 0,\ f \in C_0(\mathbb{R}^d).$$

Then {U(t,s)}_{t≥s≥0} is a positivity-preserving evolution family of contractions on C_0(R^d). Let {T(t)}_{t≥0} be the semigroup on C_0([0,∞) × R^d) defined from the evolution family {U(t,s)}_{t≥s≥0} on C_0(R^d) by (3.1). Then, for φ ∈ C_0^2(R^d) and α ∈ C_c^1([s,∞)) satisfying α(s) = 1,

$$u(t,x) \equiv U(t,s)\,\varphi(x) = \big(T(t-s)\,\alpha(\cdot)\varphi(\cdot)\big)(t,x), \quad t \ge s.$$
Hence, due to (3.2), (3.4) and the strong continuity of the semigroup {T(t)}_{t≥0},

$$
\begin{aligned}
\frac{\partial u(t,x)}{\partial t} &= \lim_{h\to 0}\frac{(T(t-s+h)\,\alpha(\cdot)\varphi(\cdot))(t+h,x) - (T(t-s)\,\alpha(\cdot)\varphi(\cdot))(t,x)}{h} \\
&= \lim_{h\to 0}\frac{(T(t-s+h)\,\alpha(\cdot)\varphi(\cdot))(t+h,x) - (T(t-s+h)\,\alpha(\cdot)\varphi(\cdot))(t,x)}{h} \\
&\qquad + \lim_{h\to 0}\frac{(T(t-s+h)\,\alpha(\cdot)\varphi(\cdot))(t,x) - (T(t-s)\,\alpha(\cdot)\varphi(\cdot))(t,x)}{h} \\
&= \lim_{h\to 0} T(h)\,\frac{(T(t-s)\,\alpha(\cdot)\varphi(\cdot))(t+h,x) - (T(t-s)\,\alpha(\cdot)\varphi(\cdot))(t,x)}{h} + \hat{A}\big(T(t-s)\,\alpha(\cdot)\varphi(\cdot)\big)(t,x) \\
&= L(t)\big(T(t-s)\,\alpha(\cdot)\varphi(\cdot)\big)(t,x) = L(t)\,U(t,s)\,\varphi(x).
\end{aligned}
$$

This shows clearly that the positivity-preserving evolution family of contractions {U(t,s)}_{t≥s≥0} solves the homogeneous Cauchy problem (1.2). Uniqueness follows from Corollary 2.2.
Conditions on the functions a : [0,∞) × R^d → S_d^+, b : [0,∞) × R^d → R^d and the Lévy measure µ(t,x,Γ) that ensure existence of a unique Markov process {X(t)}_{t≥0} on R^d that solves the martingale problem for {L(t)}_{t≥0} on C_c^∞(R^d) are given in [15] and [19]. In particular, in [19] it is proved that if a : [0,∞) × R^d → S_d^+ and b : [0,∞) × R^d → R^d are bounded and continuous, and µ(t,x,·) is a Lévy measure such that

$$\int_\Gamma |y|^2\big(1+|y|^2\big)^{-1}\,\mu(t,x,dy) \text{ is bounded and continuous in } (t,x)$$

for every Γ ∈ B(R^d \ {0}), then there exists a unique (in law) strong Markov process {X(t)}_{t≥0} on R^d that solves the martingale problem for {L(t)}_{t≥0} on C_c^∞(R^d).
5. The Feynman-Kac Formula
Assume now that there exists a Markov process X = {X(t)}_{t≥0} on R^d with respect to the filtration G_t^X = F_{t+}^X, right continuous and quasi-left continuous, that solves the martingale problem for {L(t)}_{t≥0} on C_0^2(R^d). Then, by Theorem 4.3, there exists a unique evolution family of contractions {U(t,s)}_{t≥s≥0} on C_0(R^d) that solves the homogeneous Cauchy problem (1.2).
We consider the Cauchy problem

$$\frac{\partial u(t,x)}{\partial t} = L(t)\,u(t,x) + c(t,x)\,u(t,x), \quad t > 0,\ x \in \mathbb{R}^d, \qquad u(0,x) = \varphi(x), \quad \varphi \in C_0^2(\mathbb{R}^d), \qquad (5.1)$$

where c is a given bounded continuous function. Any classical solution u(t,x) of (5.1) satisfies the integral equation

$$u(t,x) = U(t,0)\,\varphi(x) + \int_0^t U(t,r)\,(cu)(r,x)\,dr. \qquad (5.2)$$

Any solution of the integral equation (5.2) is called a mild solution of the Cauchy problem (5.1).
Let us consider now the Cauchy problem

$$\frac{\partial u(t,(r,x))}{\partial t} = A\,u(t,(r,x)) + c(r,x)\,u(t,(r,x)), \qquad u(0,(r,x)) = \bar\varphi(r,x), \qquad (5.3)$$

where φ̄ ∈ C_0([0,∞) × R^d) is such that φ̄(0,x) = φ(x), x ∈ R^d. As before, the classical solution of (5.3) satisfies the integral equation

$$u(t,(s,x)) = T(t)\,\bar\varphi(s,x) + \int_0^t T(t-r)\,\big(c\,u(r,\cdot)\big)(s,x)\,dr. \qquad (5.4)$$

Let u(t,(s,x)) be a solution of the integral equation (5.4). Using the definition of T(t) given in (3.1) and the assumption that φ̄(0,x) = φ(x), x ∈ R^d, we obtain

$$u(t,(t,x)) = T(t)\,\bar\varphi(t,x) + \int_0^t T(t-r)\,\big(c\,u(r,\cdot)\big)(t,x)\,dr = U(t,0)\,\varphi(x) + \int_0^t U(t,r)\,\big(c\,u(r,\cdot)\big)(r,x)\,dr.$$

Hence u(t,x) ≡ u(t,(t,x)), t ≥ 0, satisfies the integral equation (5.2), i.e. u(t,x) is the mild solution of the Cauchy problem (5.1).
Theorem 5.1. Let X = {X(t)}_{t≥0} be a strong Markov process on R^d with respect to the filtration G_t^X = F_{t+}^X, right continuous and quasi-left continuous, that solves the martingale problem for {L(t)}_{t≥0} on C_0^2(R^d). Then the classical solution u(t,x) of (5.1) on [0,T) × R^d admits the representation

$$u(t,x) = E_x\left[\varphi(X(t))\,\exp\left(\int_0^t c(t-s,X(s))\,ds\right)\right]. \qquad (5.5)$$
Proof. Let β > sup_{(r,x)∈[0,∞)×R^d} c(r,x), and consider the function

$$V(r,x) = \beta - c(r,x), \quad (r,x) \in [0,\infty)\times\mathbb{R}^d.$$

It follows from [5], Theorem 9.7, p. 298, that the classical solution v(t,(r,x)) of the Cauchy problem

$$\frac{\partial v(t,(r,x))}{\partial t} = A\,v(t,(r,x)) - V(r,x)\,v(t,(r,x)), \qquad v(0,(r,x)) = \bar\varphi(r,x), \qquad (5.6)$$

with t ∈ [0,T] and (r,x) ∈ [0,∞) × R^d, admits the representation

$$v(t,(r,x)) = E_{(r,x)}\left[\bar\varphi(Z(t))\,\exp\left(-\int_0^t V(Z(s))\,ds\right)\right] = e^{-\beta t}\,E_{(r,x)}\left[\bar\varphi(Z(t))\,\exp\left(\int_0^t c(Z(s))\,ds\right)\right], \qquad (5.7)$$

where E_{(r,x)} denotes the expectation with respect to the process {Z(t)}_{t≥0} starting at (r,x).
On the other hand, if u(t,(r,x)) is a classical solution of (5.3) for t ∈ [0,T], (r,x) ∈ [0,∞) × R^d, then clearly

$$e^{-\beta t}\,u(t,(r,x)), \quad t \in [0,T],\ (r,x) \in [0,\infty)\times\mathbb{R}^d,$$

is a classical solution of (5.6). Thus, using (5.7) and uniqueness of solutions of problem (5.3), we obtain that u(t,(r,x)) admits the representation

$$u(t,(r,x)) = E_{(r,x)}\left[\bar\varphi(Z(t))\,\exp\left(\int_0^t c(Z(s))\,ds\right)\right];$$

see [16], Theorem 1.2, p. 184. Thus, due to Lemma 4.2 and the definition of φ̄(·,·),

$$
\begin{aligned}
u(t,x) \equiv u(t,(t,x)) &= E_{(t,x)}\left[\bar\varphi(Z(t))\,\exp\left(\int_0^t c(Z(s))\,ds\right)\right] \\
&= E\left[\bar\varphi(Z(t))\,\exp\left(\int_0^t c(Z(s))\,ds\right)\,\Big|\,Z(0)=(t,x)\right] \\
&= E\left[\bar\varphi(Y(t),X(t))\,\exp\left(\int_0^t c(Y(s),X(s))\,ds\right)\,\Big|\,(Y(0),X(0))=(t,x)\right] \\
&= E\left[\varphi(X(t))\,\exp\left(\int_0^t c(t-s,X(s))\,ds\right)\,\Big|\,X(0)=x\right] \\
&= E_x\left[\varphi(X(t))\,\exp\left(\int_0^t c(t-s,X(s))\,ds\right)\right],
\end{aligned}
$$

and, since u(t,x) ≡ u(t,(t,x)) satisfies the integral equation (5.2), by uniqueness of the mild solution to (5.1) (see e.g. [16]) it follows that the solution u(t,x) of (5.1) on [0,T] × R^d admits the representation (5.5).
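As a numerical illustration of the representation (5.5) (an assumption-laden sketch, not the paper's general setting): take X Brownian motion, so L = ½Δ, a constant potential c(t,x) ≡ c₀, and φ(x) = e^{−x²}. Then u(t,0) = e^{c₀t} E[exp(−W_t²)] = e^{c₀t}(1+2t)^{−1/2}, which Monte Carlo reproduces:

```python
import numpy as np

# Monte Carlo evaluation of (5.5) for Brownian motion with constant
# potential c(t, x) = c0 and phi(x) = exp(-x^2):
#   u(t, 0) = exp(c0*t) * E exp(-W_t^2) = exp(c0*t) / sqrt(1 + 2*t).
rng = np.random.default_rng(1)

t, c0, n_paths = 0.5, 0.8, 1_000_000
w_t = np.sqrt(t) * rng.standard_normal(n_paths)   # exact samples of W_t
u_mc = np.mean(np.exp(-w_t**2)) * np.exp(c0 * t)
u_exact = np.exp(c0 * t) / np.sqrt(1.0 + 2.0 * t)
print(u_mc, u_exact)  # agree to about three decimals
```

Because c is constant, the exponential functional exp(∫₀ᵗ c ds) = e^{c₀t} is deterministic and only W_t needs to be sampled; for nonconstant c the time integral would have to be discretized along each path.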
References
1. Applebaum, D.: Lévy Processes and Stochastic Calculus, Cambridge University Press, 2004.
2. Birkner, M., López-Mimbela, J. A. and Wakolbinger, A.: Blow-up of semilinear PDE's at the critical dimension. A probabilistic approach, Proc. Amer. Math. Soc. 130 (2002) 2431–2442.
3. Chicone, C. and Latushkin, Y.: Evolution Semigroups in Dynamical Systems and Differential Equations, Amer. Math. Soc., 1999.
4. Chung, K. L. and Zhao, Z.: From Brownian Motion to Schrödinger's Equation, Springer-Verlag, 1995.
5. Dynkin, E. B.: Markov Processes I, Springer-Verlag, 1965.
6. Etheridge, A.: A Course in Financial Calculus, Cambridge University Press, 2002.
7. Ethier, S. and Kurtz, T. G.: Markov Processes: Characterization and Convergence, John Wiley & Sons, 1986.
8. Feynman, R. P.: Space-time approach to non-relativistic quantum mechanics, Rev. Mod. Phys. 20 (1948) 367–387.
9. Freidlin, M.: Functional Integration and Partial Differential Equations, Princeton University Press, 1985.
10. Gihman, I. I. and Skorohod, A. V.: The Theory of Stochastic Processes II, Springer-Verlag, 1975.
11. Ichinose, T. and Tamura, H.: Imaginary-time path integral for a relativistic spinless particle in an electromagnetic field, Commun. Math. Phys. 105 (1986) 239–257.
12. Kac, M.: On distributions of certain Wiener functionals, Trans. Amer. Math. Soc. 65 (1949) 1–13.
13. Kac, M.: On some connections between probability theory and differential and integral equations, in: Proc. 2nd Berkeley Symp. Math. Stat. and Prob. (1951) 189–215.
14. Kolkovska, E. T., López-Mimbela, J. A. and Pérez, A.: Blow up and life span bounds for a reaction-diffusion equation with a time-dependent generator, Electron. J. Diff. Eqns. 10 (2008) 1–18.
15. Komatsu, T.: Markov processes associated with certain integro-differential operators, Osaka J. Math. 10 (1973) 271–303.
16. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, 1983.
17. Pérez, A. and Villa, J.: Blow-up for a system with time-dependent generators, ALEA Lat. Am. J. Probab. Math. Stat. 7 (2010) 207–215.
18. Sato, K.: Lévy Processes and Infinitely Divisible Distributions, Cambridge University Press, 1999.
19. Stroock, D. W.: Diffusion processes associated with Lévy generators, Z. Wahrscheinlichkeitstheorie verw. Gebiete 32 (1975) 209–244.
20. Stroock, D. W. and Varadhan, S. R. S.: Multidimensional Diffusion Processes, Springer-Verlag, 1979.
21. Zimmer, R. J.: Essential Results of Functional Analysis, The University of Chicago Press, 1990.
Aroldo Pérez: Universidad Juárez Autónoma de Tabasco, División Académica de Ciencias Básicas, Km. 1 Carretera Cunduacán-Jalpa de Méndez, C.P. 86690, A.P. 24, Cunduacán, Tabasco, México.
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 421-435
www.serialspublications.com
KRYLOV–VERETENNIKOV EXPANSION FOR COALESCING STOCHASTIC FLOWS
ANDREY A. DOROGOVTSEV*
noise functionals related to a stochastic flow. A generalization of the Krylov–Veretennikov expansion is presented. An analog of this expansion for the Arratia flow is derived.
Introduction
In this article we present the form of the kernels in the Itô–Wiener expansion for functionals of a dynamical system driven by an additive Gaussian white noise. The best-known example of such an expansion is the Krylov–Veretennikov representation [11]:

$$f(y(t)) = T_t f(u) + \sum_{k=1}^{\infty} \int_{\Delta_k(0;t)} T_{t-\tau_k}\, b\partial\, T_{\tau_k-\tau_{k-1}} \cdots b\partial\, T_{\tau_2-\tau_1}\, b\partial\, T_{\tau_1} f(u)\,dw(\tau_1)\ldots dw(\tau_k),$$

where f is a bounded measurable function, y is a solution of the SDE

$$dy(t) = a(y(t))\,dt + b(y(t))\,dw(t)$$

with smooth and nondegenerate coefficients, {T_t; t ≥ 0} is the semigroup of operators related to the SDE, and ∂ is the symbol of differentiation.
A family of substitution operators of the SDE's solution into a function can be treated as a multiplicative Gaussian white noise functional. In the first section of this article we consider a family {G_{s,t}; 0 ≤ s ≤ t < +∞} of strong random operators (see Definition 1.1) in a Hilbert space which is an operator-valued multiplicative functional of the Gaussian white noise. It turns out that the precise form of the kernels in the Itô–Wiener expansion can be found for a wide class of operator-valued multiplicative functionals using some simple algebraic relations. The obtained formula covers the Krylov–Veretennikov case and gives a representation for different objects such as Brownian motion in a Lie group, etc.
The representation obtained in the first section may be useful in studying the properties of a dynamical system with an additive Gaussian white noise. On the
Received 2011-7-22; Communicated by the editors.
2000 Mathematics Subject Classification. Primary 60J57; Secondary 60 H25, 60 K53.
Key words and phrases. Multiplicative functionals, white noise, stochastic semi-group, Arratia flow.
* This article was partially supported by the State fund for fundamental researches of Ukraine
and The Russian foundation for basic research, grant F40.1/023.
other hand, there exist cases when a dynamical system is obtained as a limit, in a certain sense, of systems driven by the Gaussian white noise. A limiting system can be highly irregular [2, 5, 10]. One example of such a system is the Arratia flow [2] of coalescing Brownian particles on the real line. The trajectories of individual particles in this flow are Brownian motions, but the whole flow cannot be built from the Gaussian noise in a regular way [13]. Nevertheless, it is possible to construct the n-point motion of the Arratia flow from the pieces of the trajectories of n independent Wiener processes. Correspondingly, a function of the n-point motion of the Arratia flow has an Itô–Wiener expansion based on the initial Wiener processes. This expansion depends on the way of construction (coalescing description). We present such an expansion in terms of an infinite family of expectation operators related to all manner of coalescence of the trajectories in the Arratia flow. To do this we first obtain an analog of the Krylov–Veretennikov expansion for the Wiener process stopped at zero.
This paper is divided into three parts. The first section is devoted to multiplicative operator-valued functionals of Gaussian white noise. The second part contains the definition and necessary facts about the Arratia flow. In the last section we present a family of Krylov–Veretennikov expansions for the n-point motion of the Arratia flow.
1. Multiplicative White Noise Functionals
In this part we present the Itô–Wiener expansion for a semigroup of strong random linear operators in a Hilbert space. Such operators in a space of functions can be generated by the flow of solutions to a stochastic differential equation. In this case our expansion turns into the well-known Krylov–Veretennikov representation [11]. In the case when these operators have a different origin, we obtain a new representation for the semigroup.
Let us start with the definition and examples of strong random operators in a Hilbert space. Let H denote a separable real Hilbert space with norm ‖·‖ and inner product (·,·). As usual, (Ω, F, P) denotes a complete probability space.

Definition 1.1. A strong linear random operator in H is a continuous linear map from H to L_2(Ω, P, H).

Remark 1.2. The notion of a strong random operator was introduced by A. V. Skorokhod [14]. In his definition Skorokhod used convergence in probability rather than convergence in the square mean.
Consider some typical examples of strong random operators.

Example 1.3. Let H be l_2 with the usual inner product and {ξ_n; n ≥ 1} be an i.i.d. sequence with finite second moment. Then the map

$$l_2 \ni x = (x_n)_{n\ge 1} \mapsto Ax = (\xi_n x_n)_{n\ge 1}$$

is a strong random operator. In fact,

$$E\|Ax\|^2 = \sum_{n=1}^{\infty} x_n^2\, E\xi_1^2,$$

and the linearity is obvious. Note that pathwise the operator A need not be well-defined. For example, if the {ξ_n; n ≥ 1} have the standard normal distribution, then with probability one

$$\sup_{n\ge 1}|\xi_n| = +\infty.$$
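A seeded simulation of Example 1.3 (the truncation level, sample count and tolerance below are arbitrary assumptions) confirms the identity E‖Ax‖² = ‖x‖² E ξ₁² for standard normal ξ_n:

```python
import numpy as np

# Example 1.3: A x = (xi_n * x_n) with i.i.d. standard normal xi_n, so
# E ||A x||^2 = ||x||^2 * E xi_1^2 = ||x||^2.
rng = np.random.default_rng(2)

n, n_samples = 50, 100_000
x = 1.0 / np.arange(1, n + 1)              # a fixed l2 vector (truncated)
xi = rng.standard_normal((n_samples, n))   # rows = independent copies of xi
norms_sq = np.sum((xi * x) ** 2, axis=1)   # ||A x||^2 sample by sample
print(norms_sq.mean(), np.sum(x ** 2))     # the two values agree
```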
An interesting set of examples of strong random operators can be found in the theory of stochastic flows. Let us recall the definition of a stochastic flow on R [12].

Definition 1.4. A family {φ_{s,t}; 0 ≤ s ≤ t} of random maps of R to itself is referred to as a stochastic flow if the following conditions hold:
(1) For any 0 ≤ s_1 ≤ s_2 ≤ … ≤ s_n < ∞, the maps φ_{s_1,s_2}, …, φ_{s_{n-1},s_n} are independent.
(2) For any s, t, r ≥ 0, φ_{s,t} and φ_{s+r,t+r} are equidistributed.
(3) For any r ≤ s ≤ t and u ∈ R, φ_{s,t}(φ_{r,s}(u)) = φ_{r,t}(u), and φ_{r,r} is the identity map.
(4) For any u ∈ R, φ_{0,t}(u) → u in probability as t → 0.
Stochastic flows arise as solutions to stochastic differential equations with smooth coefficients. Namely, if φ_{s,t}(u) is the solution to the stochastic differential equation

$$dy(t) = a(y(t))\,dt + b(y(t))\,dw(t) \qquad (1.1)$$

starting at the point u at time s and evaluated at time t, then under smoothness conditions on the coefficients a and b the family {φ_{s,t}} satisfies the conditions of Definition 1.4 [12]. Another example of a stochastic flow is the Harris flow consisting of Brownian particles [5]. In this flow φ_{0,t}(u) is, for every u ∈ R, a Brownian martingale with respect to a common filtration, and

$$d\langle \varphi_{0,t}(u_1), \varphi_{0,t}(u_2)\rangle = \Gamma\big(\varphi_{0,t}(u_1) - \varphi_{0,t}(u_2)\big)\,dt$$

for some positive definite function Γ with Γ(0) = 1.
For a given stochastic flow one can try to construct a corresponding family of strong random operators as follows.
Example 1.5. Let H = L_2(R). Define

$$G_{s,t}f(u) = f(\varphi_{s,t}(u)).$$

Let us check that in both cases mentioned above G_{s,t} satisfies Definition 1.1. For the Harris flow we have

$$E\int_{\mathbb{R}} f(\varphi_{s,t}(u))^2\,du = \int_{\mathbb{R}}\int_{\mathbb{R}} f(v)^2\,p_{t-s}(u-v)\,du\,dv = \int_{\mathbb{R}} f(v)^2\,dv.$$

Here p_r denotes the Gaussian density with zero mean and variance r. To get an estimate for the flow generated by a stochastic differential equation, let us suppose that the coefficients a and b are bounded Lipschitz functions and b is separated from zero. Under such conditions φ_{s,t}(u) has a density which can be estimated from above by a Gaussian density [1]. Consequently we have the inequality

$$E\int_{\mathbb{R}} f(\varphi_{s,t}(u))^2\,du \le c\int_{\mathbb{R}} f(v)^2\,dv.$$
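The composition property of Definition 1.4(3) holds exactly for the Euler scheme of (1.1) when the same Brownian increments are reused; a seeded sketch (the coefficients a, b below are illustrative choices that are bounded, Lipschitz, with b separated from zero):

```python
import numpy as np

# Euler scheme for dy = a(y) dt + b(y) dw, checking the flow property
# phi_{s,t}(phi_{r,s}(u)) = phi_{r,t}(u) with SHARED increments.
rng = np.random.default_rng(3)

a = lambda y: np.sin(y)                # bounded Lipschitz drift
b = lambda y: 1.0 + 0.5 * np.cos(y)    # bounded, separated from zero

dt = 0.01
dw = np.sqrt(dt) * rng.standard_normal(300)   # shared Brownian increments

def flow(u, i, j):
    """Euler approximation of phi_{r,s}(u) for r = i*dt, s = j*dt."""
    y = u
    for k in range(i, j):
        y = y + a(y) * dt + b(y) * dw[k]
    return y

u = 0.3
direct = flow(u, 0, 300)                     # phi_{0, 3}(u)
composed = flow(flow(u, 0, 120), 120, 300)   # phi_{1.2, 3}(phi_{0, 1.2}(u))
print(direct, composed)                      # identical
```

The two values coincide because composing the Euler recursion over [0, 1.2] and then [1.2, 3] performs exactly the same sequence of arithmetic operations as running it over [0, 3].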
As was shown in Example 1.3, a strong random operator in general is not a family of bounded linear operators in H indexed by the points of a probability space. Despite this, the composition of such operators can be properly defined (see [3] for a detailed construction, in the case of dependent nonlinear operators, via the Wick product). Here we will consider only the case when the strong random operators A and B are independent. In this case both A and B have measurable modifications, and one can define for u ∈ H, ω ∈ Ω,

$$AB(u,\omega) := A\big(B(u,\omega), \omega\big),$$

and prove that the value AB(u) does not depend on the choice of modifications. Note that the operators from the previous example satisfy the semigroup property, and that for the flow generated by a stochastic differential equation these operators are measurable with respect to increments of the Wiener process. In this section we will consider a general situation of this kind and study the structure of semigroups of strong random operators measurable with respect to a Gaussian white noise. The white noise framework is presented in [3, 6, 8]; here we just recall the necessary facts and definitions.
Let's start with a description of the noise. Let H_0 be a separable real Hilbert space. Consider a new Hilbert space H̃ = H_0 ⊗ L_2([0;+∞)), where the inner product is defined by the formula

$$\tilde H \ni f, g \mapsto \langle f, g\rangle = \int_0^{\infty} (f(t), g(t))_0\,dt.$$

Definition 1.6. A Gaussian white noise ξ in H̃ is a family {⟨ξ,h⟩; h ∈ H̃} of jointly Gaussian random variables which is linear with respect to h ∈ H̃ and such that, for every h, ⟨ξ,h⟩ has mean zero and variance ‖h‖².

Let H̃_{s,t} be the product H_0 ⊗ L_2([s;t]), which can be naturally considered as a subspace of H̃. Define the σ-fields F_{s,t} = σ{⟨ξ,h⟩; h ∈ H̃_{s,t}}, 0 ≤ s ≤ t < +∞.
Definition 1.7. A family {G_{s,t}; 0 ≤ s ≤ t < +∞} of strong random operators in H is a multiplicative functional of ξ if the following conditions hold:
(1) G_{s,t} is measurable with respect to F_{s,t};
(2) G_{s,s} is the identity operator for every s;
(3) G_{s_1,s_3} = G_{s_2,s_3} G_{s_1,s_2} for s_1 ≤ s_2 ≤ s_3.

Remark 1.8. Taking an orthonormal basis {e_n} in H_0, one can replace ξ by the sequence of independent Wiener processes {w_n(t) = ⟨ξ, e_n ⊗ 1_{[0;t]}⟩; t ≥ 0}. We use ξ in order to simplify notation and to treat simultaneously the cases of finitely and infinitely many processes {w_n}.
Example 1.9. Let us define x(u,s,t) as the solution to the Cauchy problem for (1.1) which starts from the point u at the moment s. Using the flow property one can easily verify that the family of operators {G_{s,t}f(u) = f(x(u,s,t))} in L_2(R) is a multiplicative functional of the Gaussian white noise ẇ in L_2([0;+∞)).
Now we are going to introduce the notion of a homogeneous multiplicative functional. Let us recall that every square integrable random variable α measurable with respect to ξ can be uniquely expressed as a series of multiple Wiener integrals [8]:

$$\alpha = E\alpha + \sum_{k=1}^{\infty} \int_{\Delta_k(0;+\infty)} a_k(\tau_1,\ldots,\tau_k)\,\xi(d\tau_1)\ldots\xi(d\tau_k),$$
where

$$\Delta_k(s;t) = \{(\tau_1,\ldots,\tau_k) : s \le \tau_1 \le \ldots \le \tau_k \le t\}, \qquad a_k \in L_2\big(\Delta_k(0;+\infty), H_0^{\otimes k}\big),\ k \ge 1.$$

Here in the multiple integrals we consider the white noise ξ as a Gaussian H_0-valued random measure on [0;+∞). In terms of the above-mentioned orthonormal basis {e_n} in H_0 and the sequence of independent Wiener processes {w_n}, one can rewrite the multiple integrals as

$$\int_{\Delta_k(0;+\infty)} a_k(\tau_1,\ldots,\tau_k)\,\xi(d\tau_1)\ldots\xi(d\tau_k) = \sum_{n_1,\ldots,n_k} \int_{\Delta_k(0;+\infty)} a_k(\tau_1,\ldots,\tau_k)(e_{n_1},\ldots,e_{n_k})\,dw_{n_1}(\tau_1)\ldots dw_{n_k}(\tau_k).$$

Define the shift of α for r ≥ 0 as follows:

$$\theta_r\alpha = E\alpha + \sum_{k=1}^{\infty} \int_{\Delta_k(r;+\infty)} a_k(\tau_1-r,\ldots,\tau_k-r)\,\xi(d\tau_1)\ldots\xi(d\tau_k).$$
Definition 1.10. A multiplicative functional {G_{s,t}} is homogeneous if for every s ≤ t and r ≥ 0,

$$\theta_r G_{s,t} = G_{s+r,t+r}.$$

Note that the family {G_{s,t}} from Example 1.9 is a homogeneous functional. From now on we will consider only homogeneous multiplicative functionals of ξ. For a homogeneous functional {G_{s,t}} one can define the expectation operators

$$T_t u = E\,G_{0,t}u, \quad u \in H,\ t \ge 0.$$

Since the family {G_{s,t}} is homogeneous, {T_t} is a semigroup of bounded operators in H. Under well-known conditions the semigroup {T_t} can be described by its generator. However, the family {G_{s,t}} cannot be recovered from this semigroup, as the following simple example shows.
Example 1.11. Define {G¹_{s,t}} and {G²_{s,t}} in the space L_2(R) as follows:

$$G^1_{s,t}f(u) = T_{t-s}f(u),$$

where {T_t} is the heat semigroup, and

$$G^2_{s,t}f(u) = f(u + w(t) - w(s)),$$

where w is a standard Wiener process. It is evident that

$$E\,G^2_{s,t}f(u) = T_{t-s}f(u) = E\,G^1_{s,t}f(u).$$
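The identity EG² = G¹ in Example 1.11 can be checked by Monte Carlo (the choices of f, u, s, t below are illustrative): for f(x) = e^{−x²} the heat semigroup is explicit, T_τ f(u) = (1+2τ)^{−1/2} exp(−u²/(1+2τ)).

```python
import numpy as np

# Example 1.11: E G2_{s,t} f(u) = E f(u + w(t) - w(s)) equals the heat
# semigroup G1_{s,t} f(u) = T_{t-s} f(u).  Check for f(x) = exp(-x^2).
rng = np.random.default_rng(4)

def T_heat(u, tau):
    # closed form of T_tau f for f(x) = exp(-x^2)
    return np.exp(-u**2 / (1.0 + 2.0 * tau)) / np.sqrt(1.0 + 2.0 * tau)

u, s, t, n = 0.7, 0.2, 1.0, 1_000_000
incr = np.sqrt(t - s) * rng.standard_normal(n)   # samples of w(t) - w(s)
mc = np.mean(np.exp(-(u + incr) ** 2))           # E f(u + w(t) - w(s))
print(mc, T_heat(u, t - s))                      # the two values agree
```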
To recover a multiplicative functional uniquely we have to add some information to {T_t}. It can be done in the following way. For f ∈ H define an operator acting from H₀ to H by the rule
$$
A(f)(h) = \lim_{t\to 0+}\frac{1}{t}\,E\,G_{0,t}f\,\big(\xi,\,h\otimes 1_{[0;\,t]}\big). \tag{1.2}
$$
Example 1.12. Let the family {G_{s,t}} be defined as in Example 1.9. Now H = L₂(R) and the noise ξ is defined on L₂([0;+∞)) as ẇ. Then for f ∈ L₂(R) (now H₀ = R and it makes sense only to take h = 1)
$$
A(f)(u) = \lim_{t\to 0+}\frac{1}{t}\,E\,f(x(u,t))\,w(t).
$$
Suppose that f has two bounded continuous derivatives. Then, using Itô's formula, one can get
$$
E\,f(x(u,t))\,w(t) = \int_0^t E\,f'(x(u,s))\,b(x(u,s))\,ds,
$$
and
$$
\frac{1}{t}\,E\,f(x(\cdot,t))\,w(t) \to f'(\cdot)\,b(\cdot)\ \text{in } L_2(\mathbb R),\qquad t\to 0+.
$$
Consequently, for "good" functions
$$
A f = b\,f'.
$$
Definition 1.13. An element u of H belongs to the domain of definition D(A) of A if the limit (1.2) exists for every h ∈ H₀ and defines a Hilbert–Schmidt operator A(u): H₀ → H. The operator A is referred to as the random generator of {G_{s,t}}.
Now we can formulate the main statement of this section, which describes the structure of homogeneous multiplicative functionals from ξ.

Theorem 1.14. Suppose that for every t > 0, T_t(H) ⊂ D(A), and that the kernels of the Itô–Wiener expansion for G_{0,t} are continuous with respect to the time variables. Then G_{0,t} has the following representation:
$$
G_{0,t}(u) = T_t u + \sum_{k=1}^{\infty}\int_{\Delta_k(0;t)} T_{t-\tau_k}\,A\,T_{\tau_k-\tau_{k-1}}\dots A\,T_{\tau_1}u\,d\xi(\tau_1)\dots d\xi(\tau_k). \tag{1.3}
$$
Proof. Let us denote the kernels of the Itô–Wiener expansion of G_{0,t}(u) by {a^t_k(u,τ₁,…,τ_k); k ≥ 0}. Since
$$
a^t_0(u) = E\,G_{0,t}(u),
$$
then
$$
a^t_0(u) = T_t u.
$$
Since
$$
G_{0,t+s}(u) = G_{t,t+s}\big(G_{0,t}(u)\big)
$$
and G_{t,t+s} = θ_t G_{0,s}, then
$$
a^{t+s}_1(u,\tau_1) = T_s\,a^t_1(u,\tau_1)\,1_{\tau_1<t} + a^s_1\big(T_t u,\,\tau_1-t\big)\,1_{t\le\tau_1\le t+s}. \tag{1.4}
$$
Using this relation one can get
$$
a^t_1(u,\tau_1) = T_{t-\tau_1}\,a^{\tau_1}_1(u,\tau_1),\qquad
a^t_1(u,\tau_1) = a^{t-\tau_1}_1\big(T_{\tau_1}u,\,0\big). \tag{1.5}
$$
The condition of the theorem implies that for v = T_{τ₁}u and every h ∈ H₀ there exists the limit
$$
A(v)h = \lim_{t\to 0+}\frac{1}{t}\,E\,G_{0,t}(v)\,\big(\xi,\,h\otimes 1_{[0;\,t]}\big)
= \lim_{t\to 0+}\frac{1}{t}\int_0^t a^t_1(v,\tau_1)\,h\,d\tau_1.
$$
Now, by continuity of a₁,
$$
a^0_1\big(T_{\tau_1}u,\,0\big) = A\big(T_{\tau_1}u\big).
$$
Finally,
$$
a^t_1(u,\tau_1) = T_{t-\tau_1}\,A\,T_{\tau_1}u.
$$
The case k ≥ 2 can be proved by induction. Suppose that we have the representation (1.3) for a^t_j, j ≤ k, and consider a^{t+s}_{k+1}. Using the multiplicative and homogeneity properties one can get
$$
a^{t+s}_{k+1}(u,\tau_1,\dots,\tau_{k+1})\,1_{\{0\le\tau_1\le\dots\le\tau_k\le t\le\tau_{k+1}\le t+s\}}
= a^s_1\big(a^t_k(u,\tau_1,\dots,\tau_k),\,\tau_{k+1}-t\big)
$$
$$
= T_{s+t-\tau_{k+1}}\,A\,T_{\tau_{k+1}-t}\,a^t_k(u,\tau_1,\dots,\tau_k)
= T_{s+t-\tau_{k+1}}\,A\,T_{\tau_{k+1}-t}\,T_{t-\tau_k}\,A\dots A\,T_{\tau_1}u
= T_{s+t-\tau_{k+1}}\,A\,T_{\tau_{k+1}-\tau_k}\,A\dots A\,T_{\tau_1}u.
$$
The theorem is proved. □
Consider some examples of application of the representation (1.3).
Example 1.15. Consider the multiplicative functional from Example 1.9. Suppose that the coefficients a, b have infinitely many bounded derivatives. Then it can be proved that x(u,t) has infinitely many stochastic derivatives [15]. Consequently, for a smooth function f, the first kernel in the Itô–Wiener expansion of f(x(u,t)) can be expressed as follows:
$$
a^t_1(u,\tau) = E\,Df(x(u,t))(\tau). \tag{1.6}
$$
Indeed, for an arbitrary h ∈ L₂([0;+∞)),
$$
\int_0^t a^t_1(u,\tau)\,h(\tau)\,d\tau = E\,f(x(u,t))\int_0^t h(\tau)\,dw(\tau)
= E\int_0^t Df(x(u,t))(\tau)\,h(\tau)\,d\tau,
$$
which gives us the expression (1.6). The required continuity of a₁ follows from a well-known expression for the stochastic derivative of x [8]. As was mentioned in Example 1.12, the operator A coincides with b·d/du on smooth functions. Finally, the expression (1.3) turns into the well-known Krylov–Veretennikov expansion [11] for f(x(u,t)):
$$
f(x(u,t)) = T_t f(u) + \sum_{k=1}^{\infty}\int_{\Delta_k(0;t)} T_{t-\tau_k}\,b\partial\,T_{\tau_k-\tau_{k-1}}\dots b\partial\,T_{\tau_1}f(u)\,dw(\tau_1)\dots dw(\tau_k).
$$
Remark 1.16. The expression (1.3) can be applied to multiplicative functionals,
which are not generated by a stochastic flow.
Example 1.17. Let L be a matrix Lie group with the corresponding Lie algebra 𝔄, dim 𝔄 = n. Consider an L-valued homogeneous multiplicative functional {G_{s,t}} from ξ. Suppose that {G_{0,t}} is a semimartingale with respect to the filtration generated by ξ, and let {G_{s,t}} be continuous with respect to s, t with probability one. This means, in particular, that {G_{0,t}} is a multiplicative Brownian motion in L [7]. Then G_{0,t} is a solution to the following SDE:
$$
dG_{0,t} = G_{0,t}\,dM_t,\qquad G_{0,0} = I.
$$
Here {M_t; t ≥ 0} is an 𝔄-valued Brownian motion obtained from G by the rule [7]
$$
M_t = P\text{-}\lim_{\Delta\to 0+}\ \sum_{k=0}^{[t/\Delta]}\big(G_{k\Delta,(k+1)\Delta} - I\big). \tag{1.7}
$$
Since G_{0,t} is a semimartingale with respect to the filtration of ξ, M_t has the same property. The representation (1.7) shows that M_t − M_s is measurable with respect to the σ-field F_{s,t}, and for arbitrary r ≥ 0
$$
\theta_r(M_t - M_s) = M_{t+r} - M_{s+r}.
$$
Considering the Itô–Wiener expansion of M_t − M_s, one can easily check that
$$
M_t = \int_0^t Z\,d\xi(\tau) \tag{1.8}
$$
with a deterministic matrix Z. We will prove (1.8) in the one-dimensional case. Suppose that M_t has the following Itô–Wiener expansion with respect to ξ:
$$
M_t = \sum_{k=1}^{\infty}\int_{\Delta_k(t)} a_k(t,\tau_1,\dots,\tau_k)\,d\xi(\tau_1)\dots d\xi(\tau_k).
$$
Then for k ≥ 2 the corresponding kernel a_k satisfies the relation
$$
a_k(t+s,\tau_1,\dots,\tau_k) = a_k(t,\tau_1,\dots,\tau_k)\,1_{\{\tau_1,\dots,\tau_k\le t\}}
+ a_k(s,\tau_1-t,\dots,\tau_k-t)\,1_{\{\tau_1,\dots,\tau_k\ge t\}}.
$$
Iterating this relation for t = jt/n, j = 1,…,n, one can verify that a_k ≡ 0. For k = 1 the same arguments give a_k ≡ const. Consequently, the equation for G can be rewritten using ξ as
$$
dG_{0,t} = G_{0,t}\,Z\,d\xi(t). \tag{1.9}
$$
Now the elements of the Itô–Wiener expansion from Theorem 1.14 can be determined as follows:
$$
T_t = E\,G_{0,t},\qquad A = Z.
$$
Consequently,
$$
G_{0,t} = T_t + \sum_{k=1}^{\infty}\int_{\Delta_k(0;t)} T_{t-\tau_k}\,Z\,T_{\tau_k-\tau_{k-1}}\dots Z\,T_{\tau_1}\,d\xi(\tau_1)\dots d\xi(\tau_k).
$$
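As a quick sanity check of the last expansion in the scalar case (our sketch with illustrative values of Z, t, x; not part of the paper): for the one-dimensional equation (1.9) with ξ = ẇ, T_t reduces to the identity, the k-fold iterated integral over Δ_k(0;t) equals the Hermite polynomial h_k(w(t), t) with generating function exp(θx − θ²t/2), and the series must therefore sum to the stochastic exponential exp(Zw(t) − Z²t/2).

```python
import math

def hermite_h(x, t, k_max):
    """h_k(x,t) from exp(θx − θ²t/2) = Σ_k θ^k h_k(x,t), via the recursion
    (k+1) h_{k+1} = x h_k − t h_{k-1}, with h_0 = 1, h_1 = x."""
    h = [1.0, x]
    for k in range(1, k_max):
        h.append((x * h[k] - t * h[k - 1]) / (k + 1))
    return h

Z, t, x = 0.7, 1.3, 0.5   # arbitrary illustrative values
series = sum(Z ** k * hk for k, hk in enumerate(hermite_h(x, t, 30)))
print(series, math.exp(Z * x - Z ** 2 * t / 2))   # the two values coincide
```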
2. The Arratia Flow
When trying to obtain an analog of the representation (1.3) for a stochastic flow which is not generated by a stochastic differential equation with smooth coefficients, we are faced with the difficulty that there is no Gaussian random vector field which would generate the flow. This circumstance arises from the possibility of coalescence of particles in the flow. We will consider one of the best known examples of such stochastic flows, the Arratia flow. Let us start with the precise definition.
Definition 2.1. The Arratia flow is a random field {x(u,t); u ∈ R, t ≥ 0} with the following properties:
1) all x(u,·), u ∈ R, are Wiener martingales with respect to the joint filtration;
2) x(u,0) = u, u ∈ R;
3) for all u₁ ≤ u₂, t ≥ 0, x(u₁,t) ≤ x(u₂,t);
4) the joint characteristic of x(u₁,·) and x(u₂,·) equals
$$
\big\langle x(u_1,\cdot),\,x(u_2,\cdot)\big\rangle_t = \int_0^t 1_{\{\tau(u_1,u_2)\le s\}}\,ds,
$$
where
$$
\tau(u_1,u_2) = \inf\{t : x(u_1,t) = x(u_2,t)\}.
$$
It follows from properties 1)–3) that individual particles in the Arratia flow move as Brownian particles and coalesce after meeting. Property 4) reflects the independence of the particles before meeting. It was proved in [4] that the Arratia flow has a modification which is a càdlàg process on R with values in C([0;+∞)). From now on we assume that we are dealing with such a modification. We will construct the Arratia flow using a sequence of independent Wiener processes {w_k; k ≥ 1}. Suppose that {r_k; k ≥ 1} is an enumeration of the rational numbers of R. To construct the Arratia flow, put w_k(0) = r_k, k ≥ 1, and define
$$
x(r_1,t) = w_1(t),\qquad t\ge 0.
$$
If x(r₁,·),…,x(r_n,·) have already been constructed, then define
$$
\sigma_{n+1} = \inf\Big\{t : \prod_{k=1}^{n}\big(x(r_k,t)-w_{n+1}(t)\big) = 0\Big\},
$$
$$
x(r_{n+1},t) = \begin{cases} w_{n+1}(t), & t\le\sigma_{n+1},\\[2pt] x(r_{k^*},t), & t\ge\sigma_{n+1}, \end{cases}
$$
where
$$
w_{n+1}(\sigma_{n+1}) = x(r_{k^*},\sigma_{n+1}),\qquad
k^* = \min\{l : w_{n+1}(\sigma_{n+1}) = x(r_l,\sigma_{n+1})\}.
$$
In this way we construct a family of processes x(r,·), r ∈ Q, which satisfies conditions 1)–4) of Definition 2.1.
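The recursive construction above is easy to imitate on a time grid. The following sketch (our illustration, with finitely many starting points and grid detection of the meeting times in place of the exact σ_{n+1}) builds each new path from its own Wiener process until it first meets an already-built path, and glues it to that path afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

def coalescing_paths(starts, T=1.0, n_steps=2000):
    """Grid version of the construction: path k follows its own Wiener
    process w_k until the first grid time it crosses an earlier path,
    and coincides with that path from then on (coalescence)."""
    dt = T / n_steps
    n = len(starts)
    w = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n, n_steps)), axis=1)
    w = np.hstack([np.zeros((n, 1)), w]) + np.asarray(starts, float)[:, None]
    x = w.copy()
    for k in range(1, n):
        for t in range(1, n_steps + 1):
            # grid analogue of σ_{k+1}: first crossing of an existing path
            hits = [j for j in range(k)
                    if (x[j, t - 1] - x[k, t - 1]) * (x[j, t] - x[k, t]) <= 0.0]
            if hits:
                x[k, t:] = x[hits[0], t:]   # stick to the met path forever
                break
    return x

x = coalescing_paths([0.0, 0.05, 0.1, 1.5])
```

Once two paths agree at some grid time they agree at all later times, which mirrors the coalescence behaviour behind properties 3)–4) of Definition 2.1.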
Lemma 2.2. For every u ∈ R the random functions x(r,·) converge uniformly on compacts with probability one as r → u. For rational u the limit coincides with x(u,·) defined above. The resulting random field {x(u,t); u ∈ R, t ≥ 0} satisfies the conditions of Definition 2.1.

Proof. Consider a sequence of rational numbers {r_{n_k}; k ≥ 1} which converges to some u ∈ R \ Q. Without loss of generality one can suppose that this sequence decreases. For every t ≥ 0, {x(r_{n_k},t); k ≥ 1} converges with probability one as a bounded monotone sequence. Denote
$$
x(u,t) = \lim_{k\to\infty} x(r_{n_k},t).
$$
Note that for arbitrary r′, r′′ ∈ Q and t ≥ 0
$$
E\sup_{[0;\,t]}\big(x(r',s)-x(r'',s)\big)^2 \le C\big(|r'-r''| + (r'-r'')^2\big). \tag{2.1}
$$
Here the constant C does not depend on r′ and r′′. Inequality (2.1) follows from the fact that the difference x(r′,·) − x(r′′,·) is a Wiener process with variance 2, started at r′ − r′′ and stopped at 0. Monotonicity and (2.1) imply that the first assertion of the lemma holds. Note that for every t ≥ 0
$$
\mathcal F_t = \sigma\big(x(r,s);\ r\in\mathbb Q,\ s\in[0;t]\big) = \sigma\big(x(r,s);\ r\in\mathbb R,\ s\in[0;t]\big).
$$
Using standard arguments one can easily verify that for every u ∈ R, x(u,·) is a Wiener martingale with respect to the filtration (F_t)_{t≥0}, and that the inequality
$$
x(u_1,t) \le x(u_2,t)
$$
remains true for all u₁ ≤ u₂. Consequently, for all u₁, u₂ ∈ R, x(u₁,·) and x(u₂,·) coincide after meeting. It follows from (2.1) and property 4) for x(r,·) with rational r that
$$
\big\langle x(u_1,\cdot),\,x(u_2,\cdot)\big\rangle_t = 0
$$
for
$$
t < \inf\{s : x(u_1,s) = x(u_2,s)\}.
$$
Hence, the family {x(u,t); u ∈ R, t ≥ 0} satisfies Definition 2.1. □
This lemma shows that the Arratia flow is generated by the initial countable system of independent Wiener processes {w_k; k ≥ 1}. From this lemma one can easily obtain the following statement.

Corollary 2.3. The σ-field
$$
\mathcal F^x_{0+} := \bigcap_{t>0}\sigma\big(x(u,s);\ u\in\mathbb R,\ 0\le s\le t\big)
$$
is trivial modulo P.

The proof of this statement follows directly from the fact that the Wiener process has the same property [9].
3. The Krylov–Veretennikov Expansion for the n-point Motion of the Arratia Flow

We begin this section with an analog of the Krylov–Veretennikov expansion for the Wiener process stopped at zero. For the Wiener process w define the moment of the first hitting of zero
$$
\tau = \inf\{t : w(t) = 0\}
$$
and put w̄(t) = w(τ ∧ t). For a measurable bounded f: R → R define
$$
T_t(f)(u) = E_u f(\bar w(t)).
$$
The following statement holds.

Lemma 3.1. For a measurable bounded function f: R → R and u ≥ 0,
$$
f(\bar w(t)) = T_t f(u) + \sum_{k=1}^{\infty}\int_{\Delta_k(t)} T_{r_1}\,\frac{\partial}{\partial v_1}\,T_{r_2-r_1}\dots\frac{\partial}{\partial v_k}\,T_{t-r_k}f(v_k)\,dw(r_1)\dots dw(r_k). \tag{3.1}
$$
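The semigroup T_t of the stopped process can be probed by simulation. A quick Monte Carlo sketch (our illustration, with arbitrary u and t): since w̄(t) = 0 on {τ ≤ t} and w̄(t) = w(t) otherwise, we must have E_u w̄(t) = u (the martingale property) and P_u(w̄(t) = 0) = erfc(u/√(2t)); a grid simulation reproduces both up to discretization bias.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)

def stopped_bm(u, t, n_paths, n_steps=400):
    """Samples of w̄(t) = w(τ ∧ t): simulate w on a grid and set the value
    to 0 on paths whose running minimum reaches 0 (grid proxy for τ ≤ t)."""
    dt = t / n_steps
    paths = u + np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    final = paths[:, -1].copy()
    final[paths.min(axis=1) <= 0.0] = 0.0
    return final

u, t = 0.8, 1.0
x = stopped_bm(u, t, 200_000)
print(x.mean())            # ≈ u: the stopped process is a martingale
print((x == 0.0).mean())   # ≈ erfc(u / sqrt(2 t)): probability of hitting 0 by time t
```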
Proof. Let us use the Fourier–Wiener transform. For φ ∈ C([0;+∞), R) ∩ L₂([0;+∞), R) define the stochastic exponential
$$
\mathcal E(\varphi) = \exp\Big(\int_0^{+\infty}\varphi(s)\,dw(s) - \frac12\int_0^{+\infty}\varphi(s)^2\,ds\Big).
$$
Suppose that a random variable α has the Itô–Wiener expansion
$$
\alpha = a_0 + \sum_{k=1}^{\infty}\int_{\Delta_k(t)} a_k(r_1,\dots,r_k)\,dw(r_1)\dots dw(r_k).
$$
Then
$$
E\,\alpha\,\mathcal E(\varphi) = a_0 + \sum_{k=1}^{\infty}\int_{\Delta_k(t)} a_k(r_1,\dots,r_k)\,\varphi(r_1)\dots\varphi(r_k)\,dr_1\dots dr_k. \tag{3.2}
$$
Consequently, to find the Itô–Wiener expansion of α it is enough to find E α E(φ) as an analytic functional of φ. Note that
$$
E_u f(\bar w(t))\,\mathcal E(\varphi) = E_u f(\bar y(t)),
$$
where the process ȳ is obtained from the process
$$
y(t) = w(t) + \int_0^t\varphi(r)\,dr
$$
in the same way as w̄ from w. To find E_u f(ȳ(t)), consider the case when f is a continuous bounded function with f(0) = 0. Let F be the solution to the following boundary problem on [0;+∞) × [0;T]:
$$
\frac{\partial}{\partial t}F(u,t) = -\frac12\,\frac{\partial^2}{\partial u^2}F(u,t) - \varphi(t)\,\frac{\partial}{\partial u}F(u,t),\qquad
F(u,T) = f(u),\quad F(0,s) = 0,\ s\in[0;T], \tag{3.3}
$$
$$
F \in C^2\big((0;+\infty)\times(0;T)\big)\cap C\big([0;+\infty)\times[0;T]\big).
$$
Then F(u,0) = E_u f(ȳ(T)). To check this relation, note that F satisfies
$$
\frac{\partial}{\partial u}F(0,s) = \frac{\partial^2}{\partial u^2}F(0,s) = 0,\qquad s\in[0;T].
$$
Consider the process F(ȳ(s), s) on the interval [0;T]. Using Itô's formula one can get
$$
F(\bar y(T),T) = F(u,0) + \int_0^{T\wedge\tau}\Big(\frac{\partial}{\partial s}F(\bar y(s),s) + \frac12\,\frac{\partial^2}{\partial u^2}F(\bar y(s),s) + \varphi(s)\,\frac{\partial}{\partial u}F(\bar y(s),s)\Big)\,ds
+ \int_0^{T\wedge\tau}\frac{\partial}{\partial u}F(\bar y(s),s)\,dw(s).
$$
By (3.3) the ds-integrand vanishes, and taking expectations gives
$$
F(u,0) = E_u f(\bar y(T)).
$$
The problem (3.3) can be solved using the semigroup {T_t; t ≥ 0}. It can be obtained from (3.3) that
$$
F(u,s) = T_{T-s}f(u) + \int_s^T\varphi(r)\,T_{r-s}\,\frac{\partial}{\partial u}F(u,r)\,dr. \tag{3.4}
$$
Solving (3.4) by the iteration method one can get the series
$$
F(u,s) = T_{T-s}f(u) + \sum_{k=1}^{\infty}\int_{\Delta_k(s;T)} T_{r_1-s}\,\frac{\partial}{\partial v_1}\,T_{r_2-r_1}\dots\frac{\partial}{\partial v_k}\,T_{T-r_k}f(v_k)\,\varphi(r_1)\dots\varphi(r_k)\,dr_1\dots dr_k.
$$
The last formula means that the Itô–Wiener expansion of f(w̄(t)) has the form
$$
f(\bar w(t)) = T_t f(u) + \sum_{k=1}^{\infty}\int_{\Delta_k(t)} T_{r_1}\,\frac{\partial}{\partial v_1}\,T_{r_2-r_1}\dots\frac{\partial}{\partial v_k}\,T_{t-r_k}f(v_k)\,dw(r_1)\dots dw(r_k). \tag{3.5}
$$
To consider the general case, note that for t > 0 and c ∈ R
$$
\frac{\partial}{\partial v}\,T_t c \equiv 0.
$$
Consequently, (3.5) remains true for an arbitrary bounded continuous f. Now the statement of the lemma can be obtained using approximation arguments. The lemma is proved. □
The same idea can be used to obtain the Itô–Wiener expansion for a function of the Arratia flow. The n-point motion of the Arratia flow was constructed in Section 2 from independent Wiener processes. Consequently, a function of this n-point motion must have an Itô–Wiener expansion in terms of these processes. We will treat such an expansion as the Krylov–Veretennikov expansion for the Arratia flow.

Here there is a new circumstance compared to the case when the flow is generated by an SDE with smooth coefficients. Namely, there are many different ways to construct the trajectories of the Arratia flow from the initial Wiener processes, and the form of the Itô–Wiener expansion will depend on the way the trajectories are constructed. In [2] Arratia described different ways of constructing the colliding Brownian motions from independent Wiener processes. We present here a more general approach by considering a broad class of constructions, and we find the Itô–Wiener expansion for it. To describe our method we will need some preliminary notations and definitions.
Definition 3.2. An arbitrary set of the kind {i, i+1, …, j}, where i, j ∈ N, i ≤ j, is called a block.

Definition 3.3. A representation of the block {1, 2, …, n} as a union of disjoint blocks is called a partition of the block {1, 2, …, n}.

Definition 3.4. We say that a partition π₂ follows from a partition π₁ if it coincides with π₁ or if it is obtained by the union of two subsequent blocks of π₁.
We will consider sequences of partitions {π₀, …, π_l}, where π₀ is the trivial partition, π₀ = {{1}, {2}, …, {n}}, and every π_{i+1} follows from π_i. The set of all such sequences will be denoted by R. Denote by R_k the set of all sequences from R that have exactly k matching pairs π_i = π_{i+1}. The set R₀ of strongly decreasing sequences we denote by R̄. For every sequence {π₀, …, π_k} from R̄ each π_{i+1} is obtained from π_i by the union of two subsequent blocks. It is evident that the length of every sequence from R̄ is less than or equal to n. Let us associate with every partition π a vector λ_π ∈ Rⁿ with the following property: for each block {s, …, t} from π,
$$
\sum_{q=s}^{t}\lambda_{\pi q}^2 = 1.
$$
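These combinatorial objects are small enough to enumerate directly. The sketch below (our illustration; the helper names are hypothetical) lists all partitions of {1,…,n} into blocks of consecutive integers, implements the "follows" relation of Definition 3.4, and produces one admissible vector λ_π (uniform weights inside each block).

```python
from itertools import combinations
import math

def partitions_into_blocks(n):
    """All partitions of {1,…,n} into blocks of consecutive integers
    (Definitions 3.2-3.3): choose a set of cut points between i and i+1."""
    result = []
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0, *cuts, n)
            result.append(tuple(tuple(range(a + 1, b + 1))
                                for a, b in zip(bounds, bounds[1:])))
    return result

def follows(pi2, pi1):
    """Definition 3.4: π2 follows from π1 iff it equals π1 or is obtained
    by merging two subsequent blocks of π1."""
    merged = [pi1[:i] + (pi1[i] + pi1[i + 1],) + pi1[i + 2:]
              for i in range(len(pi1) - 1)]
    return pi2 == pi1 or pi2 in merged

def uniform_lambda(pi, n):
    """One admissible λ_π: weights 1/√|block| inside each block, so the
    squares over every block sum to 1."""
    lam = [0.0] * n
    for block in pi:
        for q in block:
            lam[q - 1] = 1.0 / math.sqrt(len(block))
    return lam
```

For n = 3 there are 2^{n−1} = 4 such partitions, and the trivial partition is followed by exactly the two single-merge partitions and itself.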
We will use the mapping λ as a rule for constructing the n-point motion of the Arratia flow. Suppose now that {w_k; k = 1,…,n} are independent Wiener processes starting at the points u₁ < … < u_n. We are going to construct the trajectories {x₁,…,x_n} of the Arratia flow starting at u₁ < … < u_n from pieces of the trajectories of {w_k; k = 1,…,n}. Assume that we have already built the trajectories of {x₁,…,x_n} up to a certain moment of coalescence τ. At this moment a partition π of {1,2,…,n} naturally arises: two numbers i and j belong to the same block of π if and only if x_i(τ) = x_j(τ). Consider one block {s,…,t} in π. Define the processes x_s,…,x_t after the moment τ and up to the next moment of coalescence in the whole system {x₁,…,x_n} by the rule
$$
x_i(t) = x_i(\tau) + \sum_{q=s}^{t}\lambda_{\pi q}\big(w_q(t)-w_q(\tau)\big).
$$
Proceeding in the same way, we obtain a family {x_k; k = 1,…,n} of continuous square integrable martingales with respect to the initial filtration generated by {w_k; k = 1,…,n} with the following properties:
1) for every k = 1,…,n, x_k(0) = u_k;
2) for every k = 1,…,n−1, x_k(t) ≤ x_{k+1}(t);
3) the joint characteristic of x_i and x_j satisfies the relation
$$
d\langle x_i, x_j\rangle(t) = 1_{t\ge\tau_{ij}}\,dt,
$$
where τ_{ij} = inf{s : x_i(s) = x_j(s)}.
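The normalization Σ_{q=s}^t λ²_{πq} = 1 is exactly what keeps each coalesced particle a standard Brownian motion: the combined increment Σ_q λ_{πq}(w_q(t) − w_q(τ)) has variance t − τ, and all particles of the block receive the same increment. A short numerical sketch of this point (our illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# one block {s,…,t} of size 3, with the uniform admissible weights
lam = np.full(3, 1.0 / np.sqrt(3.0))          # squares sum to 1 over the block
dt, n = 1e-3, 100_000
dw = rng.normal(0.0, np.sqrt(dt), (n, 3))     # increments of w_s,…,w_t

dx = dw @ lam   # the common increment given to every particle of the block
print(dx.var()) # ≈ dt, so each x_i inside the block is again Brownian
```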
It can be proved [10] that the processes {x_k; k = 1,…,n} form the n-point motion of the Arratia flow starting from the points u₁ < … < u_n. We have constructed it from the independent Wiener processes {w_k; k = 1,…,n}, and the way of construction depends on the mapping λ. To describe the Itô–Wiener expansion for functions of {x_k(t); k = 1,…,n} it is necessary to introduce operators related to a sequence of partitions π̃ ∈ R̄. Denote by τ₀ = 0 < τ₁ < … < τ_{n−1} the moments of coalescence for {x_k(t); k = 1,…,n} and by ν̃ = {π₀, ν₁,…,ν_{n−1}} the related random sequence of partitions; namely, the numbers i and j belong to the same block of the partition ν_k if and only if x_i(t) = x_j(t) for τ_k ≤ t. Define, for a bounded measurable function f: Rⁿ → R,
$$
T^{\tilde\pi}_t f(u_1,\dots,u_n) = E\,f\big(x_1(t),\dots,x_n(t)\big)\,1_{\{\nu_1=\pi_1,\dots,\nu_k=\pi_k,\ \tau_k\le t<\tau_{k+1}\}}.
$$
Now let κ be an arbitrary partition and let u₁ ≤ u₂ ≤ … ≤ u_n be such that u_i = u_j if and only if i and j belong to the same block of κ. One can formally define the n-point motion of the Arratia flow starting at u₁ ≤ u₂ ≤ … ≤ u_n, assuming that trajectories starting at coinciding points also coincide. Then for a strongly decreasing sequence of partitions π̃ = {κ, π₁,…,π_k} the operator T^π̃_t is defined by the same formula as above.

The next theorem is the Krylov–Veretennikov expansion for the n-point motion of the Arratia flow.
Theorem 3.5. For a bounded measurable function f: Rⁿ → R the following representation takes place:
$$
f\big(x_1(t),\dots,x_n(t)\big) = \sum_{\tilde\pi\in\bar R} T^{\tilde\pi}_t f(u_1,\dots,u_n)
+ \sum_{i=1}^{n}\sum_{\tilde\pi\in R_1}\lambda_{\pi_1 i}\int_0^t T^{\tilde\pi_1}_{s_1}\,\partial_i\,T^{\tilde\pi_2}_{t-s_1} f(u_1,\dots,u_n)\,dw_i(s_1)
$$
$$
+ \sum_{i_1,i_2=1}^{n}\sum_{\tilde\pi\in R_2}\lambda_{\pi_1 i_1}\lambda_{\pi_2 i_2}\int_{\triangle_2(t)} T^{\tilde\pi_1}_{s_1}\,\partial_{i_1}\,T^{\tilde\pi_2}_{s_2-s_1}\,\partial_{i_2}\,T^{\tilde\pi_3}_{t-s_2} f(u_1,\dots,u_n)\,dw_{i_1}(s_1)\,dw_{i_2}(s_2) + \dots
$$
$$
+ \sum_{i_1,\dots,i_k=1}^{n}\sum_{\tilde\pi\in R_k}\prod_{j=1}^{k}\lambda_{\pi_j i_j}\int_{\triangle_k(t)} T^{\tilde\pi_1}_{s_1}\,\partial_{i_1}\,T^{\tilde\pi_2}_{s_2-s_1}\dots\partial_{i_k}\,T^{\tilde\pi_{k+1}}_{t-s_k} f(u_1,\dots,u_n)\,dw_{i_1}(s_1)\dots dw_{i_k}(s_k) + \dots
$$
In this formula we use the following notations. For a sequence π̃ ∈ R_k the partitions π₁,…,π_k are the left elements of the equalities in π̃ = {…π₁ = …π₂ = …π_k = …}, and π̃₁,…,π̃_{k+1} are the strictly decreasing pieces of π̃. The symbol ∂_i denotes differentiation with respect to the variable corresponding to the block of the partition which contains i; for example, if i ∈ {s,…,t} then
$$
\partial_i f = \sum_{q=s}^{t} f'_q.
$$

The proof of the theorem can be obtained by induction, adapting the ideas of Lemma 3.1. One has to consider subsequent boundary value problems and then use the probabilistic interpretation of the Green's functions for these problems. The corresponding routine calculations are omitted.
Acknowledgment. The author wishes to thank an anonymous referee for careful reading of the article and helpful suggestions.
References

1. Aronson, D. G.: Bounds for the fundamental solution of a parabolic equation, Bull. Amer. Math. Soc. 73 (1967) 890–896.
2. Arratia, R.: Coalescing Brownian motion on the line, PhD thesis, University of Wisconsin–Madison (1979).
3. Dorogovtsev, A. A.: Stochastic Analysis and Random Maps in Hilbert Space, VSP, Utrecht, 1994.
4. Dorogovtsev, A. A.: Some remarks on a Wiener flow with coalescence, Ukrainian Mathematical Journal 57 (2005) 1550–1558.
5. Harris, T. E.: Coalescing and noncoalescing stochastic flows in R¹, Stochastic Processes and their Applications 17 (1984) 187–210.
6. Hida, T., Kuo, H.-H., Potthoff, J., Streit, L.: White Noise – An Infinite Dimensional Calculus, Kluwer, Dordrecht, 1993.
7. Holevo, A. S.: An analog of the Itô decomposition for multiplicative processes with values in a Lie group, The Indian Journal of Statistics 53, Ser. A, Pt. 2 (1991) 158–161.
8. Janson, S.: Gaussian Hilbert Spaces, Cambridge University Press, Cambridge, 1997.
9. Kallenberg, O.: Foundations of Modern Probability, Springer-Verlag, New York, 1997.
10. Konarovskii, V. V.: On an infinite system of diffusing particles with coalescing, Theory Probab. Appl. 55 (2010) 159–169.
11. Krylov, N. V., Veretennikov, A. Yu.: Explicit formulae for the solutions of stochastic differential equations, Math. USSR Sb. 29, No. 2 (1976) 239–256.
12. Kunita, H.: Stochastic Flows and Stochastic Differential Equations, Cambridge University Press, Cambridge, 1990.
13. Le Jan, Y., Raimond, O.: Flows, coalescence and noise, Ann. Probab. 32 (2004) 1247–1315.
14. Skorokhod, A. V.: Random Linear Operators, D. Reidel Publishing Company, Dordrecht, Holland, 1983.
15. Watanabe, S.: Lectures on Stochastic Differential Equations and Malliavin Calculus, Tata Institute of Fundamental Research, Bombay, 1984.
Andrey A. Dorogovtsev: Institute of Mathematics, National Academy of Sciences of Ukraine, 3 Tereschenkivska St., Kiev-4, 01601, Ukraine
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 437-450
www.serialspublications.com
QWN-CONSERVATION OPERATOR AND ASSOCIATED WICK DIFFERENTIAL EQUATION

HABIB OUERDIANE AND HAFEDH RGUIGUI

Abstract. In this paper we introduce the quantum white noise (QWN) conservation operator N^Q acting on the nuclear algebra of white noise operators L(F_θ(S′_C(R)), F*_θ(S′_C(R))) endowed with the Wick product. Similarly to the classical case, we give a useful integral representation in terms of the QWN-derivatives {D⁻_t, D⁺_t; t ∈ R} for the QWN-conservation operator, from which it follows that the QWN-conservation operator is a Wick derivation. Via this property, a relation between the Cauchy problem associated to the QWN-conservation operator and the Wick differential equation is worked out.
1. Introduction

Piech [21] initiated the study of an infinite dimensional analogue of a finite dimensional Laplacian on an infinite dimensional abstract Wiener space. This infinite dimensional Laplacian (called the number operator) has been extensively studied in [16, 17] and the references cited therein. In particular, Kuo [15] formulated the number operator as a continuous linear operator acting on the space of test white noise functionals (E). By virtue of the general theory based on an infinite dimensional analogue of Schwartz's distribution theory, the Lebesgue measure on R and the Gel'fand triple
$$
S(\mathbb R) \subset L^2(\mathbb R) \subset S'(\mathbb R) \tag{1.1}
$$
are replaced respectively by the Gaussian measure µ on S′(R) and the Gel'fand triple of the test function space F_θ(S′_C(R)) and the generalized function space F*_θ(S′_C(R)),
$$
F_\theta\big(S'_{\mathbb C}(\mathbb R)\big) \subset L^2\big(S'(\mathbb R),\mu\big) \subset F^*_\theta\big(S'_{\mathbb C}(\mathbb R)\big); \tag{1.2}
$$
see [5] for more details. If we employ a discrete coordinate, the number operator N has the expression
$$
N = \sum_{k=1}^{\infty}\partial^*_{e_k}\partial_{e_k}, \tag{1.3}
$$
where {e_n; n ≥ 0} is an arbitrary orthonormal basis for L²(R), ∂_{e_k} denotes the derivative in the direction e_k acting on F_θ(S′_C(R)), and ∂*_{e_k} is the adjoint
Received 2012-1-22; Communicated by the editors.
2000 Mathematics Subject Classification. Primary 60H40; Secondary 46A32, 46F25, 46G20.
Key words and phrases. Wick differential equation, Wick derivation, QWN-conservation operator, QWN-derivatives.
of ∂_{e_k}. For details see [16], [17]. In [2], the conservation operator N(K), for K ∈ L(S′_C(R), S′_C(R)), is given by
$$
N(K)\varphi(x) = \sum_{n=0}^{\infty}\big\langle x^{\otimes n},\,n\,(K\otimes I^{\otimes(n-1)})\varphi_n\big\rangle, \tag{1.4}
$$
from which it is obvious that N(I) = N. Using the S-transform, it is well known that N(K) is a Wick derivation of distributions (see [4]); moreover, we have
$$
N(K) = \int_{\mathbb R^2}\tau_K(s,t)\,x(s)\diamond a_t\,ds\,dt. \tag{1.5}
$$
In the present paper, using the idea of QWN-derivatives introduced by Ji and Obata in [9, 8], we extend some results of [2] to the QWN setting. For B₁, B₂ ∈ L(S′_C(R), S′_C(R)), the QWN-analogue N^Q_{B₁,B₂} stands for the appropriate QWN counterpart of the conservation operator in (1.3). In the first main result we show that N^Q_{B₁,B₂} has a functional integral representation in terms of the QWN-derivatives {D⁻_t, D⁺_t; t ∈ R} and a suitable Wick product ⋄ on the class of white noise operators, as a quantum white noise analogue of (1.5). The second remarkable feature is that N^Q_{B₁,B₂} behaves as a Wick derivation of operators. This enables us to give a relation between the Cauchy problem associated to N^Q_{B₁,B₂},
$$
\frac{\partial}{\partial t}U_t = N^Q_{B_1,B_2}U_t,\qquad U_0 \in L\big(F_\theta(S'_{\mathbb C}(\mathbb R)),\,F^*_\theta(S'_{\mathbb C}(\mathbb R))\big), \tag{1.6}
$$
and the Wick differential equation introduced in [10]:
$$
DY = G \diamond Y,\qquad G \in L\big(F_\theta(S'_{\mathbb C}(\mathbb R)),\,F^*_\theta(S'_{\mathbb C}(\mathbb R))\big), \tag{1.7}
$$
where D is a Wick derivation. It is well known (see [10]) that if there exists an operator Y in the algebra L(F_θ(S′_C(R)), F*_θ(S′_C(R))) such that DY = G and wexp Y := e^{⋄Y} is defined in L(F_θ(S′_C(R)), F*_θ(S′_C(R))), then every solution to (1.7) is of the form
$$
\Xi = (\operatorname{wexp} Y)\diamond F,
$$
where F ∈ L(F_θ(S′_C(R)), F*_θ(S′_C(R))) satisfies DF = 0. More precisely, an important example of the Wick differential equation associated with the QWN-conservation operator is studied, where the solution of a system of equations of type (1.7) is given explicitly in terms of the solution of the Cauchy problem associated to the QWN-conservation operator.
The paper is organized as follows. In Section 2, we briefly recall well-known results on the nuclear algebra of entire holomorphic functions. In Section 3, we reformulate in our setting the creation and annihilation derivatives as well as their adjoints; then we introduce the QWN-conservation operator acting on L(F_θ(S′_C(R)), F*_θ(S′_C(R))). As a main result, we give a useful integral representation for the QWN-conservation operator, from which it follows that it is a Wick derivation. In Section 4, we find a connection between the solution of a continuous system of QWN-differential equations and the solution of the Cauchy problem associated to the QWN-conservation operator N^Q_{B₁,B₂}.
2. Preliminaries

Let H be the real Hilbert space of square integrable functions on R with norm |·|₀, and let E ≡ S(R) and E′ ≡ S′(R) be the Schwartz space of rapidly decreasing C^∞-functions and the space of tempered distributions, respectively. Then the Gel'fand triple (1.1) can be reconstructed in a standard way (see Ref. [17]) from the harmonic oscillator A = 1 + t² − d²/dt² and H. The eigenvalues of A are 2n + 2, n = 0, 1, 2, …; the corresponding eigenfunctions {e_n; n ≥ 0} form an orthonormal basis for L²(R), and each e_n is an element of E. In fact E is a nuclear space equipped with the Hilbertian norms
$$
|\xi|_p = |A^p\xi|_0,\qquad \xi\in E,\ p\in\mathbb R,
$$
and we have
$$
E = \operatorname{proj\,lim}_{p\to\infty}E_p,\qquad E' = \operatorname{ind\,lim}_{p\to\infty}E_{-p},
$$
where, for p ≥ 0, E_p is the completion of E with respect to the norm |·|_p and E_{−p} is the topological dual space of E_p. We denote by N = E + iE and N_p = E_p + iE_p, p ∈ Z, the complexifications of E and E_p, respectively.
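The eigenvalue statement is easy to check numerically. The sketch below (our illustration; the grid size and window are arbitrary choices) verifies A e₀ = 2·e₀ for the first Hermite function e₀(t) = π^{−1/4}e^{−t²/2} using a central second difference.

```python
import numpy as np

# Check that e_0(t) = π^{-1/4} exp(-t²/2) satisfies A e_0 = 2 e_0
# for the harmonic oscillator A = 1 + t² − d²/dt² (eigenvalue 2n+2, n = 0).
t = np.linspace(-8, 8, 4001)
h = t[1] - t[0]
e0 = np.pi ** -0.25 * np.exp(-t ** 2 / 2)
d2 = (np.roll(e0, -1) - 2 * e0 + np.roll(e0, 1)) / h ** 2   # central 2nd difference
Ae0 = (1 + t ** 2) * e0 - d2
# away from the grid boundary, A e_0 ≈ 2 e_0
print(np.max(np.abs(Ae0[100:-100] - 2 * e0[100:-100])))
```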
Throughout the paper we fix a Young function θ, i.e. a continuous, convex and increasing function defined on R₊ which satisfies the two conditions θ(0) = 0 and lim_{x→∞} θ(x)/x = +∞. The polar function θ* of θ, defined by
$$
\theta^*(x) = \sup_{t\ge 0}\big(tx - \theta(t)\big),\qquad x\ge 0,
$$
is also a Young function. For more details, see Refs. [5], [12] and [18]. For a complex Banach space (B, ‖·‖), let H(B) denote the space of all entire functions on B, i.e. of all continuous C-valued functions on B whose restrictions to all affine lines of B are entire on C. For each m > 0 we denote by Exp(B, θ, m) the space of all entire functions on B with θ-exponential growth of finite type m, i.e.
$$
\operatorname{Exp}(B,\theta,m) = \Big\{ f\in H(B);\ \|f\|_{\theta,m} := \sup_{z\in B}|f(z)|\,e^{-\theta(m\|z\|)} < \infty \Big\}.
$$
The projective system {Exp(N_{−p}, θ, m); p ∈ N, m > 0} and the inductive system {Exp(N_p, θ, m); p ∈ N, m > 0} give the two spaces
$$
F_\theta(N') = \operatorname{proj\,lim}_{p\to\infty;\,m\downarrow 0}\operatorname{Exp}(N_{-p},\theta,m),\qquad
G_\theta(N) = \operatorname{ind\,lim}_{p\to\infty;\,m\to 0}\operatorname{Exp}(N_p,\theta,m). \tag{2.1}
$$
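A quick numerical illustration of the polar function (our sketch, not part of the paper; the two sample Young functions are arbitrary choices): the Legendre-type supremum can be approximated on a grid and compared with the closed forms θ*(x) = x²/2 for θ(t) = t²/2 and θ*(x) = x·ln x − x + 1 for θ(t) = eᵗ − 1, x ≥ 1.

```python
import math

def polar(theta, x, t_max=50.0, n=200_000):
    """Numerical polar transform θ*(x) = sup_{t≥0} (t·x − θ(t)),
    approximated on a uniform grid over [0, t_max]."""
    return max(i * t_max / n * x - theta(i * t_max / n) for i in range(n + 1))

x = 3.0
quad = lambda t: t * t / 2.0            # Young function with θ* = θ
print(polar(quad, x))                   # ≈ x²/2 = 4.5

exp1 = lambda t: math.exp(t) - 1.0      # another Young function
print(polar(exp1, x))                   # ≈ x·ln x − x + 1
```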
It is noteworthy that, for each ξ ∈ N, the exponential function
$$
e_\xi(z) := e^{\langle z,\xi\rangle},\qquad z\in N',
$$
belongs to F_θ(N′), and the set of such test functions spans a dense subspace of F_θ(N′).

We are interested in continuous linear operators from F_θ(N′) into its topological dual space F*_θ(N′). The space of such operators is denoted by L(F_θ(N′), F*_θ(N′)) and is assumed to carry the bounded convergence topology. A typical example of elements of L(F_θ(N′), F*_θ(N′)) that will play a key role in our development is Hida's white noise operators a_t. For z ∈ N′ and φ(x) = Σ_{n=0}^∞ ⟨x^{⊗n}, f_n⟩ in F_θ(N′), the holomorphic derivative of φ at x ∈ N′ in the direction z is defined by
$$
(a(z)\varphi)(x) := \lim_{\lambda\to 0}\frac{\varphi(x+\lambda z) - \varphi(x)}{\lambda}. \tag{2.2}
$$
One can check that the limit always exists, that a(z) ∈ L(F_θ(N′), F_θ(N′)), and that a*(z) ∈ L(F*_θ(N′), F*_θ(N′)), where a*(z) is the adjoint of a(z); i.e., for Φ ∈ F*_θ(N′) and φ ∈ F_θ(N′), ⟨⟨a*(z)Φ, φ⟩⟩ = ⟨⟨Φ, a(z)φ⟩⟩. If z = δ_t ∈ E′ we simply write a_t instead of a(δ_t), and the pair {a_t, a*_t} will be referred to as the QWN-process. In quantum field theory, a_t and a*_t are called the annihilation and creation operators at the point t ∈ R. By a straightforward computation we have
$$
a_t e_\xi = \xi(t)\,e_\xi,\qquad \xi\in N. \tag{2.3}
$$
Similarly, for ψ ∈ G_{θ*}(N) with Taylor expansion ψ(ξ) = Σ_n ⟨ψ_n, ξ^{⊗n}⟩, where ψ_n ∈ (N′)^{⊗n}, we use the common notation a(z)ψ for the derivative (2.2). The Wick symbol of Ξ ∈ L(F_θ(N′), F*_θ(N′)) is by definition [17] a C-valued function on N × N given by
$$
\sigma(\Xi)(\xi,\eta) = \langle\langle\Xi e_\xi,\,e_\eta\rangle\rangle\,e^{-\langle\xi,\eta\rangle},\qquad \xi,\eta\in N. \tag{2.4}
$$
By a density argument, every operator in L(F_θ(N′), F*_θ(N′)) is uniquely determined by its Wick symbol. Moreover, if G_{θ*}(N ⊕ N) denotes the nuclear space obtained as in (2.1) by replacing N_p by N_p ⊕ N_p (see [12]), we have the following characterization theorem for operator Wick symbols.

Theorem 2.1. (See Ref. [12]) The Wick symbol map yields a topological isomorphism between L(F_θ(N′), F*_θ(N′)) and G_{θ*}(N ⊕ N).

In the remainder of this paper, for the sake of the reader's convenience, we simply use the name symbol for the transformation σ.
Let µ be the standard Gaussian measure on E′, uniquely specified by its characteristic function
$$
e^{-\frac12|\xi|_0^2} = \int_{E'}e^{i\langle x,\xi\rangle}\,\mu(dx),\qquad \xi\in E.
$$
In all the remainder of this paper we assume that the Young function θ satisfies the condition
$$
\limsup_{x\to\infty}\frac{\theta(x)}{x^2} < +\infty. \tag{2.5}
$$
It is shown in Ref. [5] that, under this condition, we have the nuclear Gel'fand triple (1.2). Moreover, we observe that L(F_θ(N′), F_θ(N′)), L(F*_θ(N′), F_θ(N′)) and L(L²(E′,µ), L²(E′,µ)) can be considered as subspaces of L(F_θ(N′), F*_θ(N′)). Furthermore, identified with its restriction to F_θ(N′), each operator Ξ in the space L(F*_θ(N′), F*_θ(N′)) will be considered as an element of L(F_θ(N′), F*_θ(N′)), so that we have the continuous inclusions
$$
L\big(F^*_\theta(N'),F^*_\theta(N')\big) \subset L\big(F_\theta(N'),F^*_\theta(N')\big),\qquad
L\big(F^*_\theta(N'),F_\theta(N')\big) \subset L\big(F_\theta(N'),F^*_\theta(N')\big).
$$
QWN-CONSERVATION OPERATOR AND ASSOCIATED DIFFERENTIAL EQUATION
441
It is a fundamental fact in QWN theory [17] (see also [12]) that every white noise operator Ξ ∈ L(F_θ(N′), F_θ*(N′)) admits a unique Fock expansion

    Ξ = Σ_{l,m=0}^∞ Ξ_{l,m}(κ_{l,m}),   (2.6)

where, for each pair l, m ≥ 0, κ_{l,m} ∈ (N^{⊗(l+m)})′_{sym(l,m)} and Ξ_{l,m}(κ_{l,m}) is the integral kernel operator characterized via the symbol transform by

    σ(Ξ_{l,m}(κ_{l,m}))(ξ, η) = ⟨κ_{l,m}, η^{⊗l} ⊗ ξ^{⊗m}⟩,   ξ, η ∈ N.   (2.7)

This can be formally reexpressed as

    Ξ_{l,m}(κ_{l,m}) = ∫_{R^{l+m}} κ_{l,m}(s₁, …, s_l, t₁, …, t_m) a*_{s₁} ··· a*_{s_l} a_{t₁} ··· a_{t_m} ds₁ ··· ds_l dt₁ ··· dt_m.

In this way Ξ_{l,m}(κ_{l,m}) can be considered as an operator polynomial of degree l + m with the distribution κ_{l,m} ∈ (N^{⊗(l+m)})′_{sym(l,m)} as coefficient; and therefore every white noise operator is a "function" of the QWN. This gives a natural idea for defining the derivatives of an operator Ξ ∈ L(F_θ(N′), F_θ*(N′)) with respect to the QWN coordinate system {a_t, a*_t ; t ∈ R}.
From Refs. [7] and [8] (see also Refs. [9] and [1]), we summarize the formalism of QWN-derivatives. For ζ ∈ N, a(ζ) extends to a continuous linear operator from F_θ*(N′) into itself (denoted by the same symbol), and a*(ζ), restricted to F_θ(N′), is a continuous linear operator from F_θ(N′) into itself. Thus, for any white noise operator Ξ ∈ L(F_θ(N′), F_θ*(N′)), the commutators

    [a(ζ), Ξ] = a(ζ)Ξ − Ξ a(ζ),   [a*(ζ), Ξ] = a*(ζ)Ξ − Ξ a*(ζ),

are well-defined white noise operators in L(F_θ(N′), F_θ*(N′)). The QWN-derivatives are defined by

    D_ζ⁺ Ξ = [a(ζ), Ξ],   D_ζ⁻ Ξ = −[a*(ζ), Ξ].   (2.8)

These are called the creation derivative and annihilation derivative of Ξ, respectively.
3. QWN-Conservation Operator

In the following technical lemma, by using the symbol transform σ, we reformulate the QWN-derivatives D_z^± as natural QWN counterparts of the partial derivatives ∂_{1,x₁} ≡ ∂/∂x₁ and ∂_{2,x₂} ≡ ∂/∂x₂ on the space of entire functions of two variables g(x₁, x₂) in G_θ*(N ⊕ N). More precisely, for x₁, x₂, z ∈ N,

    (∂_{1,z} g)(x₁, x₂) := lim_{λ→0} [g(x₁ + λz, x₂) − g(x₁, x₂)]/λ,   (3.1)
    (∂_{2,z} g)(x₁, x₂) := lim_{λ→0} [g(x₁, x₂ + λz) − g(x₁, x₂)]/λ.   (3.2)

Then, in view of Theorem 2.1 and using the same calculus technique as in [3], we have the following lemma.
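The directional derivatives (3.1)–(3.2) can be checked symbolically in a finite-dimensional stand-in: applied to the exponential symbol exp(⟨a, η⟩ + ⟨b, ξ⟩) that appears later in Section 3, the first-slot derivative in direction z produces the factor ⟨b, z⟩. A sketch with sympy, assuming N is replaced by R² and all pairings by dot products (the concrete vectors a, b, z are hypothetical choices):

```python
import sympy as sp

lam = sp.Symbol('lambda_')
xi = sp.Matrix(sp.symbols('xi1 xi2'))
eta = sp.Matrix(sp.symbols('eta1 eta2'))
# hypothetical concrete vectors standing in for a, b in N' and the direction z
a, b, z = sp.Matrix([1, 2]), sp.Matrix([3, 4]), sp.Matrix([5, 6])

# exponential symbol exp(<a,eta> + <b,xi>)
g = sp.exp((a.T * eta)[0] + (b.T * xi)[0])

# (3.1): shift the first slot by lam*z and differentiate at lam = 0
shifted = g.subs(list(zip(list(xi), list(xi + lam * z))))
d1 = sp.diff(shifted, lam).subs(lam, 0)

residual = sp.simplify(d1 - (b.T * z)[0] * g)
print(residual)  # 0
```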
442
HABIB OUERDIANE AND HAFEDH RGUIGUI
Lemma 3.1. Let z ∈ N. The creation derivative and annihilation derivative of Ξ ∈ L(F_θ(N′), F_θ*(N′)) are given by

    D_z⁻ Ξ = σ^{−1}(∂_{1,z} σ(Ξ))   and   D_z⁺ Ξ = σ^{−1}(∂_{2,z} σ(Ξ)).

Moreover, their adjoints are given by

    (D_z⁻)* Ξ = σ^{−1}(∂*_{1,z} σ(Ξ))   and   (D_z⁺)* Ξ = σ^{−1}(∂*_{2,z} σ(Ξ)).
In the remainder of this paper we need the action of D_z^± on the operator Ξ_{l,m}(κ_{l,m}) for given l, m ≥ 0 and κ_{l,m} ∈ (N^{⊗(l+m)})′_{sym(l,m)}. For z ∈ N, a direct computation of the partial derivatives of the identity (2.7) in the direction z gives

    ∂_{1,z} σ(Ξ_{l,m}(κ_{l,m}))(ξ, η) = m ⟨κ_{l,m}, η^{⊗l} ⊗ (ξ^{⊗(m−1)} ⊗ z)⟩ = σ(m Ξ_{l,m−1}(κ_{l,m} ⊗₁ z))(ξ, η)   (3.3)

and

    ∂_{2,z} σ(Ξ_{l,m}(κ_{l,m}))(ξ, η) = l ⟨κ_{l,m}, (z ⊗ η^{⊗(l−1)}) ⊗ ξ^{⊗m}⟩ = σ(l Ξ_{l−1,m}(z ⊗₁ κ_{l,m}))(ξ, η),   (3.4)

where, for z_p ∈ (N^{⊗p})′ and ξ_{l+m−p} ∈ N^{⊗(l+m−p)}, p ≤ l + m, the contractions z_p ⊗_p κ_{l,m} and κ_{l,m} ⊗_p z_p are defined by

    ⟨z_p ⊗_p κ_{l,m}, ξ_{l+m−p}⟩ = ⟨κ_{l,m}, z_p ⊗ ξ_{l+m−p}⟩,
    ⟨κ_{l,m} ⊗_p z_p, ξ_{l+m−p}⟩ = ⟨κ_{l,m}, ξ_{l+m−p} ⊗ z_p⟩.
Similarly, denoting by ∂*_{1,z} and ∂*_{2,z} the adjoint operators of ∂_{1,z} and ∂_{2,z} respectively, we get

    ∂*_{1,z} σ(Ξ_{l,m}(κ_{l,m}))(ξ, η) = σ(Ξ_{l,m+1}(κ_{l,m} ⊗ z))(ξ, η)   (3.5)
    ∂*_{2,z} σ(Ξ_{l,m}(κ_{l,m}))(ξ, η) = σ(Ξ_{l+1,m}(z ⊗ κ_{l,m}))(ξ, η).   (3.6)

Note that, from [3] and the above discussion, for z ∈ N the QWN-derivatives D_z^± and (D_z^±)* are continuous linear operators from L(F_θ*(N′), F_θ(N′)) into itself and from L(F_θ(N′), F_θ*(N′)) into itself, i.e., D_z^±, (D_z^±)* ∈ L(L(F_θ*(N′), F_θ(N′))) ∩ L(L(F_θ(N′), F_θ*(N′))).
Theorem 3.2. For z ∈ N and Ξ ∈ L(F_θ(N′), F_θ*(N′)), we have

    (D_z⁺)* Ξ = a*(z) ⋄ Ξ,   (D_z⁻)* Ξ = a(z) ⋄ Ξ.

Proof. Let Ξ = Σ_{l,m=0}^∞ Ξ_{l,m}(κ_{l,m}) ∈ L(F_θ(N′), F_θ*(N′)). Using (3.5), (3.6) and Lemma 3.1, we get

    (D_z⁺)* Ξ = Σ_{l,m=0}^∞ Ξ_{l+1,m}(z ⊗ κ_{l,m})   (3.7)

and

    (D_z⁻)* Ξ = Σ_{l,m=0}^∞ Ξ_{l,m+1}(κ_{l,m} ⊗ z).   (3.8)
On the other hand,

    σ(a*(z) ⋄ Ξ)(ξ, η) = σ(a*(z))(ξ, η) · σ(Ξ)(ξ, η)
        = Σ_{l,m=0}^∞ ⟨κ_{l,m}, η^{⊗l} ⊗ ξ^{⊗m}⟩ ⟨z, η⟩
        = Σ_{l,m=0}^∞ ⟨z ⊗ κ_{l,m}, η^{⊗(l+1)} ⊗ ξ^{⊗m}⟩
        = σ( Σ_{l,m=0}^∞ Ξ_{l+1,m}(z ⊗ κ_{l,m}) )(ξ, η).

Then, for Ξ = Σ_{l,m=0}^∞ Ξ_{l,m}(κ_{l,m}) ∈ L(F_θ(N′), F_θ*(N′)), we get

    a*(z) ⋄ Ξ = Σ_{l,m=0}^∞ Ξ_{l+1,m}(z ⊗ κ_{l,m}).   (3.9)
Similarly, we obtain

    σ(a(z) ⋄ Ξ)(ξ, η) = σ(a(z))(ξ, η) σ(Ξ)(ξ, η)
        = Σ_{l,m=0}^∞ ⟨κ_{l,m}, η^{⊗l} ⊗ ξ^{⊗m}⟩ ⟨z, ξ⟩
        = Σ_{l,m=0}^∞ ⟨κ_{l,m} ⊗ z, η^{⊗l} ⊗ ξ^{⊗(m+1)}⟩
        = σ( Σ_{l,m=0}^∞ Ξ_{l,m+1}(κ_{l,m} ⊗ z) )(ξ, η),

from which we get

    a(z) ⋄ Ξ = Σ_{l,m=0}^∞ Ξ_{l,m+1}(κ_{l,m} ⊗ z).   (3.10)

Hence, by (3.7), (3.8), (3.9) and (3.10) we get the desired statement.
3.1. Representation of the QWN-Conservation operator. For locally convex spaces X and Y we denote by L(X, Y) the set of all continuous linear operators from X into Y. Let B₁ and B₂ be in L(N′, N′). For Ξ ∈ L(F_θ(N′), F_θ*(N′)), define N^Q_{B₁,B₂}(σ(Ξ)) to be

    N^Q_{B₁,B₂}(σ(Ξ))(ξ, η) = Σ_{j=0}^∞ ∂*_{1,e_j} ∂_{1,B₂* e_j} σ(Ξ)(ξ, η) + Σ_{j=0}^∞ ∂*_{2,e_j} ∂_{2,B₁* e_j} σ(Ξ)(ξ, η).   (3.11)

Using calculus techniques from [2, 3, 12, 13], one can show that N^Q_{B₁,B₂}(σ(Ξ)) belongs to G_θ*(N ⊕ N), which justifies the following definition.
444
HABIB OUERDIANE AND HAFEDH RGUIGUI
Definition 3.3. For Ξ ∈ L(F_θ(N′), F_θ*(N′)), we define the QWN-conservation operator at Ξ by

    N^Q_{B₁,B₂} Ξ = σ^{−1}(N^Q_{B₁,B₂}(σ(Ξ))).   (3.12)

As a straightforward fact, the QWN-conservation operator is a continuous linear operator from L(F_θ(N′), F_θ*(N′)) into itself.
For later use, define the operator Ξ^{a,b} for a, b ∈ N′ by

    Ξ^{a,b} ≡ Σ_{l,m=0}^∞ Ξ_{l,m}(κ_{l,m}(a, b)) ∈ L(F_θ(N′), F_θ*(N′)),

where κ_{l,m}(a, b) = (1/(l! m!)) a^{⊗l} ⊗ b^{⊗m}. It is noteworthy that {Ξ^{a,b} ; a, b ∈ N′} spans a dense subspace of L(F_θ(N′), F_θ*(N′)).
Proposition 3.4. The QWN-conservation operator admits on L(F_θ(N′), F_θ*(N′)) the following representation:

    N^Q_{B₁,B₂} = Σ_{j=1}^∞ (D_{e_j}⁺)* D⁺_{B₁* e_j} + Σ_{j=1}^∞ (D_{e_j}⁻)* D⁻_{B₂* e_j}.   (3.13)
Proof. From the fact that

    σ(Ξ^{a,b})(ξ, η) = exp{⟨a, η⟩ + ⟨b, ξ⟩},   ξ, η ∈ N, a, b ∈ N′,

and using (3.11), we compute

    N^Q_{B₁,B₂}(σ(Ξ^{a,b}))(ξ, η) = e^{⟨a,η⟩+⟨b,ξ⟩} Σ_{j=0}^∞ ( ⟨e_j, B₂ b⟩⟨e_j, ξ⟩ + ⟨e_j, B₁ a⟩⟨e_j, η⟩ )
        = ( ⟨B₂ b, ξ⟩ + ⟨B₁ a, η⟩ ) e^{⟨a,η⟩+⟨b,ξ⟩}
        = ( ⟨B₂ b, ξ⟩ + ⟨B₁ a, η⟩ ) σ(Ξ^{a,b})(ξ, η).
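The middle step above is the completeness relation for the orthonormal basis {e_j}: Σ_j ⟨e_j, B₂b⟩⟨e_j, ξ⟩ = ⟨B₂b, ξ⟩. A quick finite-dimensional check, with Rⁿ and its standard basis as a stand-in for N (the random matrix and vectors are hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# hypothetical finite-dimensional data: B2 a matrix, b and xi vectors in R^n
B2 = rng.normal(size=(n, n))
b, xi = rng.normal(size=n), rng.normal(size=n)

e = np.eye(n)  # standard orthonormal basis e_0, ..., e_{n-1}
lhs = sum((e[j] @ (B2 @ b)) * (e[j] @ xi) for j in range(n))
rhs = (B2 @ b) @ xi
print(np.isclose(lhs, rhs))  # True
```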
On the other hand, we get

    σ( Σ_{j=1}^∞ (D_{e_j}⁺)* D⁺_{B₁* e_j} Ξ^{a,b} )(ξ, η) = Σ_{j=1}^∞ ⟨e_j, η⟩ ⟨B₁* e_j, a⟩ e^{⟨a,η⟩+⟨b,ξ⟩},

    σ( Σ_{j=1}^∞ (D_{e_j}⁻)* D⁻_{B₂* e_j} Ξ^{a,b} )(ξ, η) = Σ_{j=1}^∞ ⟨e_j, ξ⟩ ⟨B₂* e_j, b⟩ e^{⟨a,η⟩+⟨b,ξ⟩},

which gives

    σ( Σ_{j=1}^∞ (D_{e_j}⁻)* D⁻_{B₂* e_j} Ξ^{a,b} + Σ_{j=1}^∞ (D_{e_j}⁺)* D⁺_{B₁* e_j} Ξ^{a,b} )(ξ, η) = ( ⟨B₁ a, η⟩ + ⟨B₂ b, ξ⟩ ) σ(Ξ^{a,b})(ξ, η).   (3.14)

Hence, the representation (3.13) follows from (3.14), the identity (3.17) and a density argument.
Remark 3.5. A straightforward computation using the symbol map shows that the QWN-conservation operator defined in this paper on L(F_θ(N′), F_θ*(N′)) coincides on L(F_θ*(N′), F_θ(N′)) with the QWN-conservation operator defined in [3], and coincides with its adjoint on L(F_θ(N′), F_θ*(N′)); hence the QWN-conservation operator is symmetric.
3.2. QWN-Conservation operator as a Wick derivation. It is shown in [12] that G_θ*(N ⊕ N) is closed under pointwise multiplication. Then, for any Ξ₁, Ξ₂ ∈ L(F_θ(N′), F_θ*(N′)), there exists a unique Ξ ∈ L(F_θ(N′), F_θ*(N′)) such that σ(Ξ) = σ(Ξ₁)σ(Ξ₂). The operator Ξ will be denoted Ξ₁ ⋄ Ξ₂ and referred to as the Wick product of Ξ₁ and Ξ₂. It is noteworthy that, endowed with the Wick product ⋄, L(F_θ(N′), F_θ*(N′)) becomes a commutative algebra.
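Since the symbol map turns ⋄ into pointwise multiplication of symbols, in a one-mode toy model the Wick product of two integral kernel operators amounts to the Cauchy convolution of their coefficient families, which makes the commutativity plain. A sketch — the dict encoding and the scalar coefficients are illustrative assumptions, not the paper's nuclear-space setting:

```python
# One-mode toy model (illustrative assumption): an operator is identified with
# its Wick symbol f(xi, eta) = sum_{l,m} c[(l, m)] * eta**l * xi**m, stored as
# a dict of coefficients.  sigma(Xi1 <> Xi2) = sigma(Xi1) * sigma(Xi2), so the
# Wick product convolves the coefficient families.
def wick_product(c1, c2):
    out = {}
    for (l1, m1), v1 in c1.items():
        for (l2, m2), v2 in c2.items():
            key = (l1 + l2, m1 + m2)
            out[key] = out.get(key, 0.0) + v1 * v2
    return out

a_star = {(1, 0): 2.0}  # symbol 2*eta (creation-type)
a_ann = {(0, 1): 3.0}   # symbol 3*xi (annihilation-type)

print(wick_product(a_star, a_ann))  # {(1, 1): 6.0}
print(wick_product(a_ann, a_star) == wick_product(a_star, a_ann))  # True
```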
Since (L(F_θ(N′), F_θ*(N′)), ⋄) is a topological algebra, each white noise operator Ξ₀ in L(F_θ(N′), F_θ*(N′)) gives rise to an operator-valued Wick multiplication operator

    Ξ ↦ Ξ₀ ⋄ Ξ ∈ L(F_θ(N′), F_θ*(N′)),   Ξ ∈ L(F_θ(N′), F_θ*(N′)).

In fact this is a continuous operator. We then adopt the following slightly more general definition: a linear operator D from L(F_θ(N′), F_θ*(N′)) into itself is called a Wick derivation (see [11]) if

    D(Ξ₁ ⋄ Ξ₂) = D(Ξ₁) ⋄ Ξ₂ + Ξ₁ ⋄ D(Ξ₂),   Ξ₁, Ξ₂ ∈ L(F_θ(N′), F_θ*(N′)).

As a nontrivial example, we study in this paper the QWN-conservation operator N^Q_{B₁,B₂} on an appropriate subset of L(F_θ(N′), F_θ*(N′)); more precisely, from L(F_θ*(N′), F_θ(N′)) into itself. As in [11], one can prove that D is a continuous Wick derivation from L(F_θ(N′), F_θ*(N′)) into itself if and only if there exist white noise operator coefficients F, G ∈ N ⊗ L(F_θ(N′), F_θ*(N′)) such that

    D = ∫_R F(t) ⋄ D_t⁺ dt + ∫_R G(t) ⋄ D_t⁻ dt,   (3.15)

where F(t), G(t) ∈ L(F_θ(N′), F_θ*(N′)) are identified with QWN Wick operators with parameter t. In fact, t ↦ F(t) and t ↦ G(t) are L(F_θ(N′), F_θ*(N′))-valued processes on R; namely, F(t, x) and G(t, x) are elements of N ⊗ L(F_θ(N′), F_θ*(N′)) ≅ N ⊗ F_θ*(N′) ⊗ F_θ*(N′).
Theorem 3.6. For B₁, B₂ ∈ L(N′, N′), the QWN-conservation operator is a Wick derivation with coefficients

    F = ∫_R τ_{B₁}(s, ·) a*_s ds   and   G = ∫_R τ_{B₂}(s, ·) a_s ds,

i.e., the QWN-conservation operator admits, on L(F_θ(N′), F_θ*(N′)), the integral representation

    N^Q_{B₁,B₂} = ∫_{R²} τ_{B₁}(s, t) a*_s ⋄ D_t⁺ ds dt + ∫_{R²} τ_{B₂}(s, t) a_s ⋄ D_t⁻ ds dt.   (3.16)
Proof. By a straightforward computation using (3.11), we obtain

    N^Q_{B₁,B₂}(σ(Ξ^{a,b}))(ξ, η) = e^{⟨a,η⟩+⟨b,ξ⟩} Σ_{j=0}^∞ ( ⟨e_j, B₂ b⟩⟨e_j, ξ⟩ + ⟨e_j, B₁ a⟩⟨e_j, η⟩ )
        = ( ⟨B₂ b, ξ⟩ + ⟨B₁ a, η⟩ ) e^{⟨a,η⟩+⟨b,ξ⟩}
        = ( ⟨B₂ b, ξ⟩ + ⟨B₁ a, η⟩ ) σ(Ξ^{a,b})(ξ, η).   (3.17)

On the other hand, denote

    N^{Q−}(B₂) ≡ ∫_{R²} τ_{B₂}(s, t) a_s ⋄ D_t⁻ ds dt,   N^{Q+}(B₁) ≡ ∫_{R²} τ_{B₁}(s, t) a*_s ⋄ D_t⁺ ds dt.

Then, from the fact that

    σ(Ξ^{a,b})(ξ, η) = exp{⟨a, η⟩ + ⟨b, ξ⟩},   ξ, η ∈ N, a, b ∈ N′,

and using (3.3) and (3.5), we compute

    σ(N^{Q−}(B₂) Ξ^{a,b})(ξ, η) = Σ_{l,m} ∫_{R²} τ_{B₂}(s, t) σ(a_s)(ξ, η) σ(D_t⁻ Ξ_{l,m}(κ_{l,m}(a, b)))(ξ, η) ds dt
        = e^{⟨a,η⟩+⟨b,ξ⟩} ∫_{R²} τ_{B₂}(s, t) b(t) ξ(s) ds dt
        = ⟨B₂ b, ξ⟩ σ(Ξ^{a,b})(ξ, η).   (3.18)

Similarly, using (3.4) and (3.6), we obtain

    σ(N^{Q+}(B₁) Ξ^{a,b})(ξ, η) = ⟨B₁ a, η⟩ σ(Ξ^{a,b})(ξ, η).   (3.19)

Hence, by Theorem 2.1 and a density argument, we complete the proof.
4. Application to a Wick Differential Equation

In this section we give an important example of a differential equation associated with the QWN-conservation operator, where the solution of (1.7) is given explicitly in terms of the solution of the associated Cauchy problem (1.6). Let us start by studying the Cauchy problem. Let B₁, B₂ ∈ L(N′, N′) be such that {B₁ⁿ ; n = 1, 2, …} and {B₂ⁿ ; n = 1, 2, …} are equicontinuous. For Ξ = Σ_{l,m=0}^∞ Ξ_{l,m}(κ_{l,m}) in L(F_θ(N′), F_θ*(N′)) with κ_{l,m} ∈ (N^{⊗(l+m)})′_{sym(l,m)}, the transformation G^Q_t is defined by

    G^Q_t σ(Ξ)(ξ, η) := Σ_{l,m=0}^∞ ⟨ ((e^{tB₁})^{⊗l} ⊗ (e^{tB₂})^{⊗m}) κ_{l,m}, η^{⊗l} ⊗ ξ^{⊗m} ⟩.   (4.1)

It is easy to show that G^Q_t σ(Ξ) belongs to G_θ*(N ⊕ N); see [2]. Then, using Theorem 2.1, there exists a continuous linear operator G^Q_t acting on the nuclear algebra L(F_θ(N′), F_θ*(N′)) such that

    G^Q_t Ξ = σ^{−1}(G^Q_t σ(Ξ)).
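The transform (4.1) is built from the operators e^{tB₁} and e^{tB₂}, so it inherits the one-parameter group law G^Q_t ∘ G^Q_{−s} = G^Q_{t−s} used in the proof of Theorem 4.2. A numeric check of the underlying matrix identity, assuming a diagonal generator as a finite-dimensional stand-in for B₁, B₂:

```python
import numpy as np

# Illustrative assumption: a diagonal generator B on R^3, so e^{tB} is a
# diagonal matrix and the group law can be checked directly.
bdiag = np.array([0.5, -1.0, 2.0])
E = lambda u: np.diag(np.exp(u * bdiag))  # e^{uB}
t, s = 0.7, 0.3

print(np.allclose(E(t) @ E(-s), E(t - s)))  # True
```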
Similarly to the classical case studied in [2] and the scalar case studied in [6], we have the following lemma.

Lemma 4.1. The solution of the Cauchy problem (1.6) associated with the QWN-conservation operator is given by U_t = G^Q_t U₀.

Let β be a Young function satisfying condition (2.5) and put θ = (e^β − 1)*. For Υ ∈ L(F_β(N′), F_β*(N′)) the Wick exponential

    wexp(Υ) = Σ_{n=0}^∞ (1/n!) Υ^{⋄n}

belongs to L(F_θ(N′), F_θ*(N′)); see [12]. In the following we study the Wick differential equation for white noise operators of the form

    D_t Ξ_t = Π_t ⋄ Ξ_t,   Ξ₀ ∈ L(F_β(N′), F_β*(N′)),   (4.2)

where Π_t ∈ L(F_β(N′), F_β*(N′)) and

    D_t := ∂/∂t − N^Q_{B₁,B₂}.

Eq. (4.2) is referred to as a Wick differential equation associated to the QWN-conservation operator.
Theorem 4.2. The unique solution of the Wick differential equation (4.2) is given by

    Ξ_t = G^Q_t( Ξ₀ ⋄ wexp( ∫₀ᵗ G^Q_{−s}(Π_s) ds ) ) = G^Q_t(Ξ₀) ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds ).   (4.3)

Proof. Applying the operator ∂/∂t to the right-hand side of Eq. (4.3), we get

    ∂/∂t Ξ_t = (∂/∂t G^Q_t(Ξ₀)) ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds )
        + G^Q_t(Ξ₀) ⋄ Π_t ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds )
        + G^Q_t(Ξ₀) ⋄ N^Q_{B₁,B₂}( ∫₀ᵗ G^Q_{t−s}(Π_s) ds ) ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds ).

Then, using Lemma 4.1 and Theorem 3.6, we get

    ∂/∂t Ξ_t = N^Q_{B₁,B₂}(Ξ_t) + Π_t ⋄ Ξ_t,

which shows that Ξ_t is a solution of (4.2). Now let Ξ_t be an arbitrary solution of (4.2) and put

    F_t = Ξ_t ⋄ wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ).
Then we have

    ∂/∂t F_t = (∂/∂t Ξ_t) ⋄ wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) + N^Q_{B₁,B₂}( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) ⋄ F_t − Π_t ⋄ F_t
        = (∂/∂t Ξ_t − Π_t ⋄ Ξ_t) ⋄ wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) + N^Q_{B₁,B₂}( wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) ) ⋄ Ξ_t.

Using Eq. (4.2) and the fact that N^Q_{B₁,B₂} is a Wick derivation, we obtain

    ∂/∂t F_t = N^Q_{B₁,B₂}(Ξ_t) ⋄ wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) + Ξ_t ⋄ N^Q_{B₁,B₂}( wexp( −∫₀ᵗ G^Q_{t−s}(Π_s) ds ) ) = N^Q_{B₁,B₂}(F_t),

from which we deduce that D_t F_t = 0. Then, by Lemma 4.1, we get

    F_t = G^Q_t(F₀) = G^Q_t(Ξ₀).

Therefore, we deduce that

    Ξ_t = G^Q_t(Ξ₀) ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds ).
Now, using (4.1), we obtain

    σ(G^Q_t(Ξ))(ξ, η) = σ(Ξ)( (e^{tB₂})* ξ, (e^{tB₁})* η ).

Then, using the definition of the Wick product of two operators, for every Ξ₁, Ξ₂ ∈ L(F_β(N′), F_β*(N′)) we get

    σ(G^Q_t(Ξ₁ ⋄ Ξ₂))(ξ, η) = σ(Ξ₁ ⋄ Ξ₂)( (e^{tB₂})* ξ, (e^{tB₁})* η )
        = σ(Ξ₁)( (e^{tB₂})* ξ, (e^{tB₁})* η ) · σ(Ξ₂)( (e^{tB₂})* ξ, (e^{tB₁})* η )
        = σ(G^Q_t(Ξ₁))(ξ, η) · σ(G^Q_t(Ξ₂))(ξ, η),

from which we deduce that

    G^Q_t(Ξ₁ ⋄ Ξ₂) = G^Q_t(Ξ₁) ⋄ G^Q_t(Ξ₂).
Hence, we get

    G^Q_t( Ξ₀ ⋄ wexp( ∫₀ᵗ G^Q_{−s}(Π_s) ds ) ) = G^Q_t(Ξ₀) ⋄ wexp( ∫₀ᵗ G^Q_t G^Q_{−s}(Π_s) ds )
        = G^Q_t(Ξ₀) ⋄ wexp( ∫₀ᵗ G^Q_{t−s}(Π_s) ds ),

which completes the proof.
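The structure of Theorem 4.2 survives in a commutative toy model: replace the Wick algebra by ordinary functions of one variable y, the QWN-conservation operator by the derivation N f = y f′, and G^Q_t by the dilation (G_t f)(y) = f(e^t y), which satisfies d/dt G_t f = N G_t f and is multiplicative, as G^Q_t is. The solution formula can then be verified symbolically; all concrete choices (Ξ₀(y) = y², Π_s(y) = s·y) are illustrative assumptions:

```python
import sympy as sp

t, y, s = sp.symbols('t y s', real=True)

# Commutative stand-ins: N f = y f'(y), (G_t f)(y) = f(exp(t) y)
Xi0 = y**2        # initial condition
Pi = s * y        # inhomogeneity Pi_s evaluated at y

# Solution formula of Theorem 4.2: G_t(Xi0) * exp( int_0^t G_{t-s}(Pi_s) ds )
u = Xi0.subs(y, sp.exp(t) * y) * sp.exp(
    sp.integrate(Pi.subs(y, sp.exp(t - s) * y), (s, 0, t)))

# Check d/dt u = N u + Pi_t * u  and  u|_{t=0} = Xi0
residual = sp.simplify(sp.diff(u, t) - y * sp.diff(u, y) - (t * y) * u)
initial = sp.simplify(u.subs(t, 0) - Xi0)
print(residual, initial)  # 0 0
```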
References

1. Accardi, L., Barhoumi, A., and Ji, U. C.: Quantum Laplacians on generalized operators on Boson Fock space, Probability and Mathematical Statistics, Vol. 31 (2011), 1–24.
2. Barhoumi, A., Ouerdiane, H., and Rguigui, H.: Generalized Euler heat equation, Quantum Probability and White Noise Analysis, Vol. 25 (2010), 99–116.
3. Barhoumi, A., Ouerdiane, H., and Rguigui, H.: QWN-Euler operator and associated Cauchy problem, Infinite Dimensional Analysis, Quantum Probability and Related Topics, Vol. 15, No. 1 (2012).
4. Chung, D. M. and Chung, T. S.: Wick derivations on white noise functionals, J. Korean Math. Soc. 33 (1996), No. 4.
5. Gannoun, R., Hachaichi, R., Ouerdiane, H., and Rezgui, A.: Un théorème de dualité entre espaces de fonctions holomorphes à croissance exponentielle, J. Funct. Anal., Vol. 171 (2000), 1–14.
6. Ji, U. C.: Quantum extensions of Fourier-Gauss and Fourier-Mehler transforms, J. Korean Math. Soc., Vol. 45, No. 6 (2008), 1785–1801.
7. Ji, U. C. and Obata, N.: Generalized white noise operator fields and quantum white noise derivatives, Séminaires & Congrès, Vol. 16 (2007), 17–33.
8. Ji, U. C. and Obata, N.: Annihilation-derivative, creation-derivative and representation of quantum martingales, Commun. Math. Phys., Vol. 286 (2009), 751–775.
9. Ji, U. C. and Obata, N.: Quantum stochastic integral representations of Fock space operators, Stochastics: An International Journal of Probability and Stochastic Processes, Vol. 81, Nos. 3-4 (2009), 367–384.
10. Ji, U. C. and Obata, N.: Quantum white noise derivatives and associated differential equations for white noise operators, Quantum Probability and White Noise Analysis, Vol. 25 (2010), 42–54.
11. Ji, U. C. and Obata, N.: Implementation problem for the canonical commutation relation in terms of quantum white noise derivatives, Journal of Mathematical Physics 51, 123507 (2010).
12. Ji, U. C., Obata, N., and Ouerdiane, H.: Analytic characterization of generalized Fock space operators as two-variable entire functions with growth condition, Infinite Dimensional Analysis, Quantum Probability and Related Topics, Vol. 5, No. 3 (2002), 395–407.
13. Ji, U. C., Obata, N., and Ouerdiane, H.: Quantum Lévy Laplacian and associated heat equation, J. Funct. Anal., Vol. 249, No. 1 (2007), 31–54.
14. Kuo, H.-H.: Potential theory associated with Uhlenbeck–Ornstein process, J. Funct. Anal. 21 (1976), 63–75.
15. Kuo, H.-H.: On Laplacian operator of generalized Brownian functionals, Lect. Notes in Math. 1203 (1986), 119–128.
16. Kuo, H.-H.: White Noise Distribution Theory, CRC Press, Boca Raton, 1996.
17. Obata, N.: White Noise Calculus and Fock Spaces, Lecture Notes in Mathematics 1577, Springer-Verlag, 1994.
18. Obata, N.: Quantum white noise calculus based on nuclear algebras of entire functions, Trends in Infinite Dimensional Analysis and Quantum Probability (Kyoto 2001), RIMS No. 1278, 130–157.
19. Ouerdiane, H.: Fonctionnelles analytiques avec condition de croissance, et application à l'analyse gaussienne, Japan. J. Math. (N.S.) 20 (1994), No. 1, 187–198.
20. Ouerdiane, H.: Noyaux et symboles d'opérateurs sur des fonctionnelles analytiques gaussiennes, Japan. J. Math., Vol. 21 (1995), 223–234.
21. Piech, M. A.: Parabolic equations associated with the number operator, Trans. Amer. Math. Soc. 194 (1974), 213–222.
Habib Ouerdiane: Department of Mathematics, Faculty of Sciences of Tunis, University of Tunis El-Manar, 1060 Tunis, Tunisia
Hafedh Rguigui: Department of Mathematics, Faculty of Sciences of Tunis, University of Tunis El-Manar, 1060 Tunis, Tunisia
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 451-470
www.serialspublications.com
SDE SOLUTIONS IN THE SPACE OF SMOOTH RANDOM
VARIABLES
YELIZ YOLCU OKUR, FRANK PROSKE, AND HASSILAH BINTI SALLEH
Abstract. In this paper we analyze properties of a dual pair (G, G*) of spaces of smooth and generalized random variables on a Lévy white noise space. We show that G ⊂ L²(µ), which shares properties with a Fréchet algebra, contains a larger class of solutions of Itô equations driven by pure jump Lévy processes. Further, a characterization of (G, G*) in terms of the S-transform is given. We propose (G, G*) as an attractive alternative to the Meyer-Watanabe test function and distribution space (D∞, D−∞) [30] to study strong solutions of SDE's.
Received 2012-1-31; Communicated by Hui-Hsiung Kuo.
2000 Mathematics Subject Classification. Primary 60H07, 60J75; Secondary 65C30, 60H40.
Key words and phrases. Strong solutions of SDE's with jumps; Malliavin calculus; white noise analysis.

1. Introduction

Gel'fand triples or dual pairs of spaces of random variables have proved to be very useful in the study of various problems of stochastic analysis. Important applications pertain e.g. to the analysis of the regularity of solutions of the Zakai equation in non-linear filtering theory, positive distributions in potential theory, the construction of local time of Lévy processes, or the Clark-Ocone formula for the hedging of contingent claims in mathematical finance. See e.g. [4, 6, 8, 28] and the references therein.

The most prominent examples of dual pairs in stochastic and infinite dimensional analysis are ((S), (S)*) of Hida and (D∞, D−∞) of Meyer and Watanabe. See [6], [8] and [30]. The Hida test function and distribution space ((S), (S)*) has been successfully applied e.g. to quantum field theory, the theory of stochastic partial differential equations, and the construction of Feynman integrals ([6, 7]). One of the most interesting properties of the distribution space (S)* is that it accommodates the singular white noise, which can be viewed as the time-derivative of Brownian motion. The latter provides a favorable setting for the study of stochastic differential equations (see [24]). See also [15], where the authors derived an explicit representation for strong solutions of Itô equations. From an analytic point of view, the pair ((S), (S)*) also exhibits the nice feature that it can be characterized by the powerful tool of the S-transform [6]. It is also worth mentioning that test functions in (S) admit continuous versions on the white noise probability space. However, Brownian motion is not contained in (S), since elements in (S) have chaos expansions with kernels in the Schwartz test function space. Therefore (S) does not seem to be suitable for the study of SDE's. It turns out that the test function space D∞ is more appropriate for the investigation of solutions of SDE's than (S), since it carries a larger class of solutions of Itô equations. However, a severe deficiency of the pair (D∞, D−∞) compared to ((S), (S)*) is that it lacks the availability of characterization-type theorems.
In this paper we propose a dual pair (G, G*) of smooth and generalized random variables on a Lévy white noise space which meets the following two important requirements: a richer class of solutions of (pure jump) Lévy noise driven Itô equations belongs to the test function space G, and, on the other hand, (G, G*) allows for a characterization-type theorem.

The pair (G, G*) has been studied in the Gaussian case in [2, 13, 22, 29]. See also [4] and the references therein for the case of Lévy processes. Similarly to the Gaussian case, G is defined by means of exponential weights of the number operator on a Lévy white noise space. The space G comprises the test functions in (S) and is included in the space D∞,2 ⊃ D∞. The important question whether G contains a bigger class of Itô jump diffusions has not been addressed so far in the literature. We will give an affirmative answer to this problem. For example, one can more or less directly show (Section 4) that solutions of compound Poisson driven SDE's are contained in G. Furthermore, we will discuss a characterization of Lévy noise functionals in terms of the S-transform by using the concept of Bargmann-Segal spaces (see [5]). We believe that the pair (G, G*) could serve as an alternative tool to (D∞, D−∞) for the study of Lévy noise functionals. By approximating general Lévy measures by finite ones and by using the "nice" topologies of G and G*, it is conceivable that one can construct solutions to SDE's (with discontinuous coefficients) driven by more general Lévy processes (such as the variance gamma process or even Lévy processes of unbounded variation). See e.g. [17].

The paper is organized as follows: In Section 2 we introduce the framework of our paper, that is, we briefly elaborate some basic concepts of white noise analysis for Lévy processes and give the definitions of the pairs (D∞, D−∞) and (G, G*). In Section 3 we discuss some properties of (G, G*) and provide a characterization theorem. In Section 4 we verify that a bigger class of SDE solutions actually lives in G.
2. Framework

In this section, we concisely recall some concepts of white noise analysis of pure jump Lévy processes, as developed in [14, 15]. This theory presents a framework which is suitable for all pure jump Lévy processes. For general information about white noise theory, see [6, 10, 11] and [19]. We conclude this section with a discussion of the dual pairs (D∞, D−∞), (G, G*) and ((S), (S)*).

2.1. White noise analysis of Lévy processes. A Lévy process L(t) is defined as a stochastic process on R₊ which starts in zero and has stationary and independent increments. It is a canonical example of a semimartingale, and it is uniquely determined by the characteristic triplet

    (B_t, C_t, µ̂) = (a·t, σ·t, dt ν(dz)),   (2.1)
where a, σ are constants and ν is the Lévy measure on R₀ = R \ {0}. For more details on Lévy processes, see e.g. [1, 3, 9, 25, 26]. In this paper, we are only dealing with the case of pure jump Lévy processes without drift, i.e. (2.1) with a = σ = 0.
We want to work with a white noise measure constructed on the nuclear algebra S̃(X) introduced in [15]. Here X := R × R₀. For that purpose, recall that S(R) is the Schwartz space of test functions on R and S′(R) is its dual, the space of tempered distributions. The space S̃(X), a variation of the Schwartz space on X, is defined as the quotient algebra

    S̃(X) = S(X)/N_π,   (2.2)

where S(X) is a closed subspace of S(R²), given by

    S(X) := { φ(t, z) ∈ S(R²) : φ(t, 0) = (∂φ/∂z)(t, 0) = 0 },   (2.3)

and the closed ideal N_π in S(X) is defined as

    N_π := { φ ∈ S(X) : ‖φ‖_{L²(π)} = 0 }.   (2.4)

The space S̃(X) is a nuclear algebra with a compatible system of norms given by

    ‖φ̂‖_{p,π} := inf_{ψ ∈ N_π} ‖φ + ψ‖_p,   p ≥ 0,   (2.5)

where ‖·‖_p, p ≥ 0, are the norms of S(R²). Moreover the Cauchy-Bunjakowski inequality holds; that is, for all p ∈ N there exists an M_p such that for all φ̂, ψ̂ ∈ S̃(X) we have

    ‖φ̂ ψ̂‖_{p,π} ≤ M_p ‖φ̂‖_{p,π} ‖ψ̂‖_{p,π}.

We denote by S̃′(X) its dual. For further information, see [15].
Next, we define the (pure jump) Lévy white noise probability measure µ on the Borel sets of Ω = S̃′(X) by means of the Bochner-Minlos-Sazonov theorem:

    ∫_{S̃′(X)} e^{i⟨ω,φ⟩} dµ(ω) = exp( ∫_X (e^{iφ} − 1) dπ )   (2.6)

for all φ ∈ S̃(X), where ⟨ω, φ⟩ := ω(φ) denotes the action of ω ∈ S̃′(X) on φ ∈ S̃(X). For ω ∈ S̃′(X) and φ ∈ S̃(X), define the exponential functional

    ẽ(φ, ω) := (e(·, ω) ∘ l)(φ) = exp( ⟨ω, ln(1 + φ)⟩ − ∫_X φ(x) λ ⊗ ν(dx) )

as a function of φ ∈ S_{q₀}(X) for functions φ ∈ S_{q₀}(X) satisfying φ(x) > −1, where l(x) = ln(1 + x) is analytic in a neighborhood of zero with l(0) = 0 for all x ∈ X. See [15].
Denote by S̃(X)^{⊗̂n} the n-th completed symmetric tensor product of S̃(X) with itself. Since ẽ(φ, ω) is holomorphic in φ around zero for φ(x) > −1, it can be expanded into a power series. Furthermore, there exist generalized Charlier polynomials C_n(ω) ∈ (S̃(X)^{⊗̂n})′ such that

    ẽ(φ, ω) = Σ_{n≥0} (1/n!) ⟨C_n(ω), φ^{⊗n}⟩   (2.7)

for φ in a certain neighborhood of zero. One shows that

    { ⟨C_n(·), φ^{(n)}⟩ : φ^{(n)} ∈ S̃(X)^{⊗̂n}, n ∈ N₀ }   (2.8)

is a total set of L²(µ). Further, one observes that for all n, m, φ^{(n)} ∈ S̃(X)^{⊗̂n}, ψ^{(m)} ∈ S̃(X)^{⊗̂m} the orthogonality relation

    ∫_{S̃′(X)} ⟨C_n(ω), φ^{(n)}⟩ ⟨C_m(ω), ψ^{(m)}⟩ µ(dω) = δ_{n,m} n! (φ^{(n)}, ψ^{(m)})_{L²(X^n, π^n)}   (2.9)

holds, where δ_{n,m} is the Kronecker symbol, equal to 1 if n = m and 0 otherwise. Using (2.9) and a density argument we can extend ⟨C_n(ω), φ^{(n)}⟩ to act on φ^{(n)} ∈ L²(X^n, π^n) for a.e. ω. The functionals ⟨C_n(ω), φ^{(n)}⟩ can be regarded as n-fold iterated stochastic integrals of functions φ^{(n)} ∈ L²(X^n, π^n) with respect to the compensated Poisson random measure

    Ñ(dt, dz) = N(dt, dz) − ν(dz)dt,

where N(Λ₁, Λ₂) := ⟨ω, 1_{Λ₁×Λ₂}⟩ for Λ₁ ⊂ R and Λ₂ ⊂ R₀ such that zero is not in the closure of Λ₂, defined on our white noise probability space

    (Ω, F, P) = (S̃′(X), B(S̃′(X)), µ).
In this setting, a square integrable pure jump Lévy process L(t) can be represented as

    L(t) = ∫₀ᵗ ∫_{R₀} z Ñ(ds, dz).
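For a finite Lévy measure this representation is just a compensated compound Poisson sum, and the second-moment identity behind the Itô isometry (2.10) — Var L(T) = T ∫ z² ν(dz) for the first chaos — can be sanity-checked by simulation. A sketch under the illustrative assumption ν = λ·N(0,1) with λ = 2 (so the theoretical variance is λT·E[Z²] = 2):

```python
import numpy as np

# Monte Carlo sketch (illustrative assumption: finite Levy measure
# nu = lam * N(0,1), i.e. a compound Poisson process; E[Z] = 0, so the
# compensator term vanishes).  Theory: Var L(T) = T * lam * E[Z^2] = 2 here.
rng = np.random.default_rng(42)
lam, T, M = 2.0, 1.0, 100_000
K = rng.poisson(lam * T, size=M)                      # jump counts per path
L = np.array([rng.normal(size=k).sum() for k in K])   # L(T) samples

print(L.var())  # close to 2.0
```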
Denote by L̂²(X^n, π^n) the space of square integrable functions φ^{(n)}(t₁, z₁, …, t_n, z_n) which are symmetric in the n pairs (t₁, z₁), …, (t_n, z_n). Then one infers from (2.7)–(2.9) the Lévy-Itô chaos representation property of square integrable Lévy functionals: for all F ∈ L²(µ), there exists a unique sequence of φ^{(n)} ∈ L̂²(X^n, π^n) such that

    F(ω) = Σ_{n≥0} ⟨C_n(ω), φ^{(n)}⟩

for a.e. ω. Moreover, we have the Itô isometry

    ‖F‖²_{L²(µ)} = Σ_{n≥0} n! ‖φ^{(n)}‖²_{L²(X^n, π^n)}.   (2.10)
2.2. The spaces (D∞, D−∞), (G, G*) and ((S), (S)*). In our search for appropriate candidates of subspaces of L²(µ) in which strong solutions of SDE's live, we shall focus on the Meyer-Watanabe test function and distribution spaces (D∞, D−∞) and the dual pair (G, G*) of smooth and generalized random variables on the Lévy white noise space.

The Meyer-Watanabe test function space D∞ for pure jump Lévy processes (see e.g. [4, 31, 32]) is defined as a dense subspace of L²(µ) endowed with the topology given by the seminorms

    ‖F‖_{k,p} = ( E[|F|^p] + Σ_{j=1}^k E[ ‖D^j_{·,·} F‖^p_{L²(π^j)} ] )^{1/p},   k ∈ N, p ≥ 1,   (2.11)

with

    D^j_{t₁,z₁,…,t_j,z_j} F(ω) := D_{t₁,z₁} D_{t₂,z₂} ··· D_{t_j,z_j} F(ω)

for F ∈ D∞, where D_{t,z} stands for the Malliavin derivative in the direction of the (square integrable) pure jump Lévy process L(t), t ≥ 0. D_{·,·} is defined as a mapping

    D : D_{1,2} → L²(µ × π)

given by

    D_{t,z} F = Σ_{n≥1} n ⟨C_{n−1}(·), φ^{(n)}(·, t, z)⟩,   (2.12)

if F ∈ L²(µ) with chaos expansion F = Σ_{n≥0} ⟨C_n(·), φ^{(n)}⟩ satisfies

    Σ_{n≥1} n · n! ‖φ^{(n)}‖²_{L²(π^n)} < ∞.   (2.13)

The domain D_{1,2} of D_{·,·} is the space of all F ∈ L²(µ) such that inequality (2.13) holds. See [4, 23, 31, 32] for further information.
The Meyer-Watanabe distribution space D−∞ is defined as the (topological) dual of D∞. If one combines the transfer principle from the Wiener space (or Gaussian white noise space) to the Poisson space, as devised in [23], with the results of [30], one finds that solutions of non-degenerate jump SDE's exist in D∞. This is a striking feature which pays dividends in the analysis of Lévy functionals. However, it seems not that easy to set up a characterization-type theorem for (D∞, D−∞) in the sense of [21]. Consequently, other Gel'fand triples have been studied to overcome this deficiency. In [22] the authors study the pair (G, G*) and provide sufficient conditions in terms of the S-transform to characterize (G, G*). Using Bargmann-Segal spaces, a complete characterization of this pair (and of a scale of closely related pairs) is obtained in [5] in the Gaussian case.

We will show in Sections 3 and 4 that (G, G*) can be characterized by means of the S-transform on the Lévy noise space and that G contains a richer class of solutions of jump SDE's. These two properties make (G, G*) an interesting alternative to (D∞, D−∞) for analyzing functionals of Lévy processes.
456
YELIZ YOLCU OKUR, FRANK PROSKE, AND HASSILAH BINTI SALLEH
The test function space $G$ is a subspace of $L^2(\mu)$ which is constructed by means of exponential weights of the Ornstein-Uhlenbeck or number operator. Denoted by $N$, this operator acts on the elements of $L^2(\mu)$ by multiplying the $n$-th homogeneous chaos by $n \in \mathbb{N}_0$. The space of smooth random variables $G$ is defined as the collection of all
\[
f = \sum_{n \ge 0} \langle C_n(\cdot), \varphi^{(n)} \rangle \in L^2(\mu) \tag{2.14}
\]
such that
\[
\|f\|^2_q := \|e^{qN} f\|^2_{L^2(\mu)} < \infty
\]
for all $q \ge 0$. The latter condition is equivalent to
\[
\|f\|^2_q = \sum_{n \ge 0} n! \, e^{2qn} \, \|\varphi^{(n)}\|^2_{L^2(X^n, \pi^n)} < \infty \tag{2.15}
\]
for all $q \ge 0$. The space $G$ is endowed with the topology given by the family of norms $\|\cdot\|_q$, $q \ge 0$. Its topological dual is the space of generalized random variables $G^*$.
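The summability condition (2.15) is straightforward to check numerically for concrete kernel norms. The following Python sketch (an editorial illustration, not part of the paper; the toy kernel norms $\|\varphi^{(n)}\| = r^n/n!$ are an assumed choice) compares partial sums of (2.15) against the closed form $\sum_n n!\,e^{2qn}(r^n/n!)^2 = \sum_n (e^{2q}r^2)^n/n! = \exp(e^{2q}r^2)$, which is finite for every $q \ge 0$, so such an $f$ lies in $G$.

```python
import math

def g_norm_sq(q, kernel_l2_norms):
    """Partial sum of (2.15): sum_n n! * e^{2qn} * ||phi^(n)||^2."""
    return sum(math.factorial(n) * math.exp(2 * q * n) * c**2
               for n, c in enumerate(kernel_l2_norms))

# Assumed toy chaos coefficients ||phi^(n)|| = r^n / n!: then
# ||f||_q^2 = sum_n (e^{2q} r^2)^n / n! = exp(e^{2q} r^2) < infinity for all q.
r, q = 0.7, 1.0
norms = [r**n / math.factorial(n) for n in range(60)]
approx = g_norm_sq(q, norms)
exact = math.exp(math.exp(2 * q) * r**2)
print(approx, exact)  # the truncated series matches the closed form
```

Rapidly decaying kernels thus belong to every $G_q$, while kernel norms decaying slower than $c^n/\sqrt{n!}$ for all $c$ fail (2.15) for large $q$.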
Let us turn our attention to the S-transform, which is a fundamental concept of white noise distribution theory and serves as a tool to characterize elements of the Hida test function space $(S)$ and the Hida distribution space $(S)^*$. See [6] or [15] for a precise definition of the pair $((S), (S)^*)$. The S-transform of $\Phi \in (S)^*$, denoted by $S(\Phi)$, is defined as the dual pairing
\[
S(\Phi)(\varphi) := \langle\!\langle \Phi, \tilde e(\varphi, \cdot) \rangle\!\rangle, \qquad \varphi \in S_{\mathbb{C}}(X), \tag{2.16}
\]
where $\tilde e(\varphi, \cdot) = \sum_{n=0}^{\infty} \langle C_n(\cdot), \varphi^{\otimes n} \rangle$ and $S_{\mathbb{C}}(X)$ is the complexification of $S(X)$. The S-transform is a monomorphism, that is, if
\[
S(\Phi) = S(\Psi) \quad \text{for } \Phi, \Psi \in (S)^*, \quad \text{then} \quad \Phi = \Psi.
\]
One verifies, e.g., that
\[
S(\dot{\tilde N}(t,z))(\varphi) = \varphi(t,z), \tag{2.17}
\]
$\dot{\tilde N}(t,z)$ being the white noise in $(S)^*$ of the compensated Poisson random measure $\tilde N(dt,dz)$, and $\varphi \in S_{\mathbb{C}}(X)$. We refer the reader to [6] or [15] for more information on the Hida test function space $(S)$ and the Hida distribution space $(S)^*$.
Finally, we give the important definition of the Wick or Wick-Grassmann product, which can be considered a tensor algebra multiplication on the Fock space. The Wick product of two distributions $\Phi, \Psi \in (S)^*$, denoted by $\Phi \diamond \Psi$, is the unique element in $(S)^*$ such that
\[
S(\Phi \diamond \Psi)(\varphi) = S(\Phi)(\varphi) \, S(\Psi)(\varphi) \tag{2.18}
\]
for all $\varphi \in S_{\mathbb{C}}(X)$. As an example one finds that
\[
\langle C_n(\omega), \varphi^{(n)} \rangle \diamond \langle C_m(\omega), \psi^{(m)} \rangle = \langle C_{n+m}(\omega), \varphi^{(n)} \hat\otimes \psi^{(m)} \rangle \tag{2.19}
\]
for $\varphi^{(n)} \in (S(X))^{\hat\otimes n}$ and $\psi^{(m)} \in (S(X))^{\hat\otimes m}$. The latter and (2.7) imply that
\[
\tilde e(\varphi, \omega) = \exp^\diamond \big( \langle \omega, \varphi \rangle \big) \tag{2.20}
\]
SDE SOLUTIONS
457
for $\varphi \in S(X)$. The Wick exponential $\exp^\diamond(X)$ of an $X \in (S)^*$ is defined as
\[
\exp^\diamond(X) = \sum_{n \ge 0} \frac{1}{n!} X^{\diamond n}, \tag{2.21}
\]
provided the sum converges in $(S)^*$, where $X^{\diamond n} = X \diamond \dots \diamond X$. We mention that the following chain of continuous inclusions is valid:
\[
(S) \hookrightarrow G \hookrightarrow L^2(\mu) \hookrightarrow G^* \hookrightarrow (S)^*.
\]
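In one Gaussian variable the Wick product of (2.18)-(2.19) has a concrete coefficient description that can be checked by hand: writing random variables in the (probabilists') Hermite basis, (2.19) says the $n$-th and $m$-th chaos multiply into the $(n+m)$-th chaos, i.e. the Wick product is plain convolution of Hermite coefficient sequences, while the pointwise product mixes orders. A small Python sketch (a one-dimensional Gaussian toy model, not the Lévy setting of the paper):

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_n

def wick(a, b):
    """Wick product in the Hermite basis: by (2.19), chaos orders add,
    so the coefficient sequences simply convolve."""
    return np.convolve(a, b)

# f = g = He_1(x) = x, i.e. coefficients [0, 1] in the Hermite basis.
x = [0.0, 1.0]
print(wick(x, x))         # [0, 0, 1] -> He_2(x) = x^2 - 1 (the Wick square)
print(He.hermemul(x, x))  # [1, 0, 1] -> x^2 = He_2(x) + He_0(x), pointwise
```

Since the S-transform of $\mathrm{He}_n$ in this toy model is $t^n$, the multiplicativity (2.18) corresponds exactly to multiplying generating polynomials, which is what the convolution implements.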
3. Properties of the Spaces $G$ and $G^*$

In the Gaussian case the space $G$ has the nice feature of being stable under pointwise multiplication of random variables. More precisely, $G$ is a Fréchet algebra. See [13, 22]. In the Lévy setting, we can show the following:
Theorem 3.1. Suppose that our Lévy measure $\nu$ satisfies the moment condition
\[
\int_{R_0} |z|^n \, \nu(dz) < \infty
\]
for all $n \in \mathbb{N}$. Let $F, G$ be in $G$ with chaos expansions $F = \sum_{n \ge 0} \langle C_n(\cdot), \varphi^{(n)} \rangle$ and $G = \sum_{n \ge 0} \langle C_n(\cdot), \phi^{(n)} \rangle$. Define $K_R = \{(t,z) \in \mathbb{R} \times R_0 : |(t,z)| < R\}$, $R > 0$. Assume that
\[
\sup_{n \ge 0} \sqrt{n!} \, \|\varphi^{(n)}\|_{L^\infty(X^n, \pi^n)} < \infty \tag{3.1}
\]
and
\[
\sup_{n \ge 0} \sqrt{n!} \, \|\phi^{(n)}\|_{L^\infty(X^n, \pi^n)} < \infty. \tag{3.2}
\]
In addition, require that there exists an $R > 0$ such that the compact supports of $\varphi^{(n)}$ and $\phi^{(n)}$ are contained in $(K_R)^n$, i.e.,
\[
\operatorname{supp} \varphi^{(n)}, \ \operatorname{supp} \phi^{(n)} \subseteq (K_R)^n
\]
for all $n \ge 0$. Then $F \cdot G \in G$. In particular, let $\lambda_0 = \frac{\ln(\pi(K_R))}{4} + \ln(4R) + \ln(\sqrt 2 + 2)$ and assume that $F, G \in G_\lambda$ for $\lambda > 2\lambda_0$. Then $F \cdot G \in G_{\lambda - \nu}$ for all $\nu > \lambda_0 + \frac{\lambda}{2}$.
Proof. Let $F, G \in G_\lambda \subset G$ for some $\lambda \in \mathbb{R}$ with $F = \sum_{n \ge 0} \langle C_n(\cdot), \varphi^{(n)} \rangle$ and $G = \sum_{m \ge 0} \langle C_m(\cdot), \phi^{(m)} \rangle$. Then
\[
\|F\|^2_\lambda = \sum_{n \ge 0} n! \, e^{2\lambda n} \|\varphi^{(n)}\|^2_{L^2(\pi^n)} < \infty, \qquad
\|G\|^2_\lambda = \sum_{m \ge 0} m! \, e^{2\lambda m} \|\phi^{(m)}\|^2_{L^2(\pi^m)} < \infty.
\]
By the product formula in [12], we obtain the following:
\[
\langle C_n(\cdot), \varphi^{(n)} \rangle \cdot \langle C_m(\cdot), \phi^{(m)} \rangle
= \sum_{k=0}^{m \wedge n} \sum_{r=0}^{(m \wedge n)-k} k! \, r! \binom{m}{k} \binom{n}{k} \binom{m-k}{r} \binom{n-k}{r} \big\langle C_{m+n-2k-r}(\cdot), \varphi^{(n)} \hat\otimes^r_k \phi^{(m)} \big\rangle,
\]
where $\varphi \hat\otimes^r_k \phi$, for $\varphi \in \hat L^2(X^n)$, $\phi \in \hat L^2(X^m)$, $0 \le k \le m \wedge n$, $0 \le r \le m \wedge n - k$, is the symmetrization of the function $\varphi \otimes^r_k \phi$ on $X^{n-k-r} \times X^{m-k-r} \times X^r$ given by
\[
\varphi \otimes^r_k \phi(A, B, Z) := \prod_{z \in Z} p_2(z) \int_{X^k} \varphi(A, Z, Y) \, \phi(Y, Z, B) \, d\pi^{\otimes k}(Y)
\]
for $(A, B, Z) \in X^{n-k-r} \times X^{m-k-r} \times X^r$. Here
\[
\prod_{z \in Z} p_2(z) := z_1 \cdot z_2 \cdots z_r
\]
when $Z = ((t_1, z_1), (t_2, z_2), \dots, (t_r, z_r))$. Because of Lemma 3.4 in [12], we know that
\[
\|\varphi^{(n)} \hat\otimes^r_k \phi^{(m)}\|_{L^2(R^{m+n-2k-r})}
\le R^r \, (\pi(K_R))^{\frac{m+n-2r}{4}} \big( \|\phi^{(m)}\|_{L^\infty} \|\varphi^{(n)}\|_{L^2(R^n)} \big)^{1/2} \big( \|\varphi^{(n)}\|_{L^\infty} \|\phi^{(m)}\|_{L^2(R^m)} \big)^{1/2}.
\]
Moreover, using the conditions (3.1) and (3.2), we obtain the following inequality:
\[
\begin{aligned}
\big\| \langle C_n(\cdot), \varphi^{(n)} \rangle & \cdot \langle C_m(\cdot), \phi^{(m)} \rangle \big\|_{\lambda-\nu} \\
&\le \sum_{k=0}^{m \wedge n} \sum_{r=0}^{(m \wedge n)-k} k! \, r! \binom{m}{k} \binom{n}{k} \binom{m-k}{r} \binom{n-k}{r} \big\| \langle C_{m+n-2k-r}(\cdot), \varphi^{(n)} \hat\otimes^r_k \phi^{(m)} \rangle \big\|_{\lambda-\nu} \\
&\le \sum_{k=0}^{m \wedge n} \sum_{r=0}^{(m \wedge n)-k} k! \, r! \binom{m}{k} \binom{n}{k} \binom{m-k}{r} \binom{n-k}{r} \sqrt{(m+n-2k-r)!} \; e^{\lambda(m+n)} e^{-\nu(m+n)} e^{-(\lambda-\nu)(2k+r)} \, \|\varphi^{(n)} \hat\otimes^r_k \phi^{(m)}\|_{L^2(R^{m+n-2k-r})} \\
&\le \mathrm{const} \cdot \big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \big\|^{1/2}_\lambda \big\| \langle C_m(\cdot), \phi^{(m)} \rangle \big\|^{1/2}_\lambda \, e^{-\nu(m+n)} e^{\frac{\lambda}{2}(m+n)} \, 2^{m+n} \\
&\qquad \times \sum_{k=0}^{m \wedge n} k! \binom{m}{k} \binom{n}{k} \sqrt{(m+n-2k)!} \, e^{-2(\lambda-\nu)k}
\sum_{r=0}^{(m \wedge n)-k} R^r \, (\pi(K_R))^{\frac{m+n-2r}{4}} \, \frac{r! \, (m+n-2k-r)!}{\sqrt{n!} \sqrt{m!} \, (m+n-2k)!},
\end{aligned}
\]
for $\nu < \lambda$. From now on, without loss of generality, we assume that $\pi(K_R) > e$ and $R \ge 1$. It is clear that
\[
\frac{r! \, (m+n-2k-r)!}{\sqrt{n!} \sqrt{m!} \, (m+n-2k)!} \le \frac{r!}{(n \wedge m)!} \cdot \frac{(m+n-2k-r)!}{(m+n-2k)!} \le 1,
\]
\[
\sum_{r=0}^{m \wedge n} R^r \, (\pi(K_R))^{\frac{m+n-2r}{4}}
\le (m \wedge n) \, R^{m+n} e^{\frac{m+n}{4} \ln(\pi(K_R))}
< (2R)^{m+n} e^{\frac{m+n}{4} \ln(\pi(K_R))},
\]
and
\[
\sum_{k=0}^{m \wedge n} k! \binom{m}{k} \binom{n}{k} \sqrt{(m+n-2k)!} \; e^{-2(\lambda-\nu)k}
\le \Big( \frac{\sqrt 2}{\sqrt 2 - 1} \Big)^{1/2} (\sqrt 2 + 2)^{m+n}.
\]
Therefore,
\[
\big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \cdot \langle C_m(\cdot), \phi^{(m)} \rangle \big\|_{\lambda-\nu}
\le H \, \big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \big\|^{1/2}_\lambda \big\| \langle C_m(\cdot), \phi^{(m)} \rangle \big\|^{1/2}_\lambda \, \sigma^n \sigma^m,
\]
where $H$ is a constant and $\sigma = 4R \, e^{-\nu + \lambda/2 + \ln(\pi(K_R))/4} (\sqrt 2 + 2)$. Then,
\[
\begin{aligned}
\|F \cdot G\|_{\lambda-\nu}
&= \Big\| \sum_{m,n=0}^{\infty} \langle C_n(\cdot), \varphi^{(n)} \rangle \langle C_m(\cdot), \phi^{(m)} \rangle \Big\|_{\lambda-\nu}
\le \sum_{m,n=0}^{\infty} \big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \langle C_m(\cdot), \phi^{(m)} \rangle \big\|_{\lambda-\nu} \\
&\le H \Big( \sum_{n=0}^{\infty} \sigma^n \big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \big\|^{1/2}_\lambda \Big) \Big( \sum_{n=0}^{\infty} \sigma^n \big\| \langle C_n(\cdot), \phi^{(n)} \rangle \big\|^{1/2}_\lambda \Big) \\
&\le H \Big( \sum_{n=0}^{\infty} \sigma^{\frac{4}{3}n} \Big)^{3/4} \Big( \sum_{n=0}^{\infty} \big\| \langle C_n(\cdot), \varphi^{(n)} \rangle \big\|^2_\lambda \Big)^{1/4} \Big( \sum_{n=0}^{\infty} \sigma^{\frac{4}{3}n} \Big)^{3/4} \Big( \sum_{n=0}^{\infty} \big\| \langle C_n(\cdot), \phi^{(n)} \rangle \big\|^2_\lambda \Big)^{1/4} \\
&\le H \Big( \frac{1}{1 - \sigma^{4/3}} \Big)^{3/2} \|F\|^{1/2}_\lambda \, \|G\|^{1/2}_\lambda,
\end{aligned}
\]
if $\sigma < 1$, i.e. $\nu > \frac{\lambda}{2} + \lambda_0$, where $\lambda_0 = \frac{\ln(\pi(K_R))}{4} + \ln(4R) + \ln(\sqrt 2 + 2)$. $\square$

Remark 3.2. Note that, in the conditions (3.1) and (3.2), $\sqrt{n!}$ can be replaced by $\sqrt[8]{n!^{3}}$.
Define the space $D_{\infty,2} \supset D_\infty$ as
\[
D_{\infty,2} = \operatorname*{proj\,lim}_{k \to \infty} D_{k,2} \tag{3.3}
\]
and denote by $D_{-\infty,2}$ its topological dual. Then it is apparent from the definition of $G$ that
\[
G \subset D_{\infty,2} \subset L^2(\mu) \subset D_{-\infty,2} \subset G^*. \tag{3.4}
\]
If $L(t)$ is a Poisson process, then a transfer principle to Poisson spaces based on exponential distributions (see [23]) gives
\[
G \subset D_\infty \subset L^2(\mu) \subset D_{-\infty} \subset G^*.
\]
Finally, we want to discuss the characterization of the spaces $G$ and $G^*$ in terms of the S-transform. For this purpose assume a densely defined operator $A$ on $L^2(X, \pi)$ such that
\[
A \xi_j = \lambda_j \xi_j, \qquad j \ge 1,
\]
where $1 < \lambda_1 \le \lambda_2 \le \dots$ and $\{\xi_j\}_{j \ge 1} \subset S(X)$ is an orthonormal basis of $L^2(X, \pi)$. Further, we require that there exists an $\alpha > 0$ such that $A^{-\alpha/2}$ is Hilbert-Schmidt. Then let us denote by $\mathcal S$ the standard countably Hilbert space constructed from $A$ (see [19]). An application of the Bochner-Minlos theorem leads to a Gaussian measure $\mu_G$ on $\mathcal S'$ (the dual of $\mathcal S$) such that
\[
\int_{\mathcal S'} e^{i \langle \omega, \varphi \rangle} \, \mu_G(d\omega) = e^{-\frac{1}{2} \|\varphi\|^2_{L^2(X,\pi)}}
\]
for all $\varphi \in \mathcal S$. It is well known that each element $f$ in $L^2(\mu_G)$ has the chaos representation
\[
f = \sum_{n \ge 0} \langle H_n(\cdot), \varphi^{(n)} \rangle \tag{3.5}
\]
for unique $\varphi^{(n)} \in L^2(X^n, \pi^n)$, $n \ge 0$, where the $H_n(\omega) \in (\mathcal S^{\otimes n})'$ are generalized Hermite polynomials. Comparing (2.14) with (3.5), we observe that the mapping
\[
\mathcal U : L^2(\mu) \longrightarrow L^2(\mu_G) \tag{3.6}
\]
given by
\[
\sum_{n \ge 0} \langle C_n(\omega), \varphi^{(n)} \rangle \longmapsto \sum_{n \ge 0} \langle H_n(\omega), \varphi^{(n)} \rangle
\]
is a unitary isomorphism between the spaces $L^2(\mu)$ and $L^2(\mu_G)$. In the following, let us denote by $S_G$ the S-transform on the Gaussian Hida distribution space $(S)^*_{\mu_G}$, which is defined as
\[
S_G(\Phi)(\varphi) = \langle\!\langle \Phi, e(\varphi, \cdot) \rangle\!\rangle, \qquad \Phi \in (S)^*_{\mu_G}, \tag{3.7}
\]
where
\[
e(\varphi, \omega) = e^{\langle \omega, \varphi \rangle - \frac{1}{2} \|\varphi\|^2_{L^2(X,\pi)}}.
\]
See [6]. Our characterization of $(G, G^*)$ requires the concept of the Bargmann-Segal space (see [27], [5] and the references therein):
Definition 3.3. Let $\mu_{G, \frac{1}{2}}$ be the Gaussian measure on $\mathcal S'$ associated with the characteristic function $C(\varphi) := e^{-\frac{1}{4} \|\varphi\|^2_{L^2(X,\pi)}}$. Introduce the measure $\nu$ on $\mathcal S'_{\mathbb C}$ given by
\[
\nu(dz) = \mu_{G, \frac{1}{2}}(dx) \times \mu_{G, \frac{1}{2}}(dy),
\]
where $z = x + iy$. Further, denote by $\mathcal P$ the collection of all projections $P$ of the form
\[
P z = \sum_{j=1}^{m} \langle z, \xi_j \rangle \xi_j, \qquad z \in \mathcal S'_{\mathbb C}.
\]
The Bargmann-Segal space $E^2(\nu)$ is the space consisting of all entire functions $f : L^2_{\mathbb C}(X, \mu_G) \to \mathbb C$ such that
\[
\sup_{P \in \mathcal P} \int_{\mathcal S'_{\mathbb C}} |f(Pz)|^2 \, \nu(dz) < \infty.
\]
So we obtain from Theorems 7.1 and 7.3 in [5] the following result:

Theorem 3.4. (i) The smooth random variable $\varphi$ belongs to $G$ if and only if
\[
S_G(\mathcal U(\varphi))(\lambda \,\cdot) \in E^2(\nu)
\]
for all $\lambda > 0$.
(ii) The generalized random variable $\Phi$ is an element of $G^*$ if and only if there is a $\lambda > 0$ such that
\[
S_G(\mathcal U(\Phi))(\lambda \,\cdot) \in E^2(\nu).
\]
Remark 3.5. The connection between $S_G \circ \mathcal U$ and $S$ in (2.16) is given by the following relation: Since
\[
\mathcal U \big( \langle C_n(\cdot), \varphi_1^{\otimes n_1} \hat\otimes \dots \hat\otimes \varphi_k^{\otimes n_k} \rangle \big) = \langle H_n(\cdot), \varphi_1^{\otimes n_1} \hat\otimes \dots \hat\otimes \varphi_k^{\otimes n_k} \rangle
\]
for $\varphi_1, \dots, \varphi_k \in L^2(X, \pi)$, $n_i \ge 1$ with $n_1 + \dots + n_k = n$, we find (see (2.19)) that
\[
S \big( \langle C_n(\cdot), \varphi_1^{\otimes n_1} \hat\otimes \dots \hat\otimes \varphi_k^{\otimes n_k} \rangle \big) = \big( S(\langle C_1(\cdot), \varphi_1 \rangle) \big)^{n_1} \cdots \big( S(\langle C_1(\cdot), \varphi_k \rangle) \big)^{n_k}
\]
as well as
\[
S_G \circ \mathcal U \big( \langle C_n(\cdot), \varphi_1^{\otimes n_1} \hat\otimes \dots \hat\otimes \varphi_k^{\otimes n_k} \rangle \big) = \big( S_G \circ \mathcal U(\langle C_1(\cdot), \varphi_1 \rangle) \big)^{n_1} \cdots \big( S_G \circ \mathcal U(\langle C_1(\cdot), \varphi_k \rangle) \big)^{n_k}.
\]
We conclude this section with a sufficient condition for a Hida distribution to be an element of $G$:

Theorem 3.6. Let $Q$ be a positive quadratic form on $L^2(X, \pi)$ with finite trace. Further, let $\Phi$ be in $(S)^*$. Assume that for every $\epsilon > 0$ there exists a $K(\epsilon) > 0$ such that
\[
|S_G(\mathcal U(\Phi))(z\varphi)| \le K(\epsilon) \, e^{\epsilon |z|^2 Q(\varphi, \varphi)}
\]
holds for all $\varphi \in \mathcal S$ and $z \in \mathbb C$. Then $\Phi \in G$.

Proof. The proof is a direct consequence of the proof of Theorem 4.1 in [22]. $\square$
Example 3.7. Let $\gamma \in L^2(X, \pi)$ with $\gamma > -1$ and $\epsilon > 0$. Then
\[
Y(t) := \exp^\diamond \big( \langle C_1(\omega), \chi_{[0,t]} \gamma \rangle \big)
\]
is the solution of
\[
dY(t) = Y(t^-) \int_{R_0} \gamma(t, u) \, \tilde N(dt, du).
\]
So we get
\[
|S_G(\mathcal U(Y(t)))(z\varphi)|
\le \exp \Big( \Big| \int_X \chi_{[0,t]} z \varphi(x) \gamma(x) \, \pi(dx) \Big| \Big)
\le K(\epsilon) \exp \big( \epsilon |z|^2 Q(\varphi, \varphi) \big),
\]
where $K(\epsilon) = e^{1/(4\epsilon)}$ and
\[
Q(\varphi, \varphi) = \Big( \int_X \chi_{[0,t]} \gamma(x) \varphi(x) \, \pi(dx) \Big)^2.
\]
Thus $Y(t) \in G$.
4. Solutions of SDE's in $G$

In this section, we deal with strong solutions of pure jump Lévy stochastic differential equations of the type
\[
X(t) = x + \int_0^t \int_{R_0} \gamma(s, X(s^-), z) \, \tilde N(ds, dz) \tag{4.1}
\]
for $X(0) = x \in \mathbb R$, where $\gamma : [0,T] \times \mathbb R \times R_0 \to \mathbb R$ satisfies the linear growth and Lipschitz conditions
\[
\int_{R_0} |\gamma(t,x,z)|^2 \, \nu(dz) \le C (1 + |x|^2), \tag{4.2}
\]
\[
\int_{R_0} |\gamma(t,x,z) - \gamma(t,y,z)|^2 \, \nu(dz) \le K |x - y|^2, \tag{4.3}
\]
where $C$, $K$ and $M$ are constants such that $|\gamma(t,x,z)| < M$ for all $x, y \in \mathbb R$, $0 \le t \le T$. Note that since $\gamma$ satisfies the conditions (4.2) and (4.3), there exists a unique solution $X = \{X(t),\ t \in [0,T]\}$ with the initial condition $X(0) = x$. If $\nu(R_0) < \infty$ (i.e. $X(t)$, $t \ge 0$, is compound Poissonian), we will prove that $X(t) \in G$, $t \ge 0$. To this end we need some auxiliary results:
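When $\nu(R_0) < \infty$, the driving noise in (4.1) is compound Poissonian and solutions can be simulated exactly jump by jump: between jumps the state only moves by the compensator drift, and at a jump time with mark $z$ the state changes by $\gamma(t, X(t^-), z)$. The Python sketch below is an editorial illustration, not taken from the paper: the bounded Lipschitz coefficient $\gamma(t,x,z) = \tfrac12 \tanh(x) z$ and the uniform jump-size law are assumed choices. Since the jump-size law has mean zero and $\gamma$ is linear in $z$, the compensator term $\int \gamma \, d\nu \, ds$ vanishes, the compensated integral is a martingale, and $E[X(T)] = x$.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(t, x, z):
    # Assumed coefficient: bounded and Lipschitz in x, as required by (4.2)-(4.3).
    return 0.5 * np.tanh(x) * z

def simulate(x0=1.0, T=1.0, lam=2.0, n_paths=20000):
    """Exact jump-time simulation of (4.1) with nu = lam * Uniform(-1, 1).
    The z-marginal has mean zero, so the compensator drift drops out."""
    XT = np.full(n_paths, x0)
    for p in range(n_paths):
        t, x = 0.0, x0
        while True:
            t += rng.exponential(1.0 / lam)   # next arrival of the Poisson clock
            if t > T:
                break
            z = rng.uniform(-1.0, 1.0)        # jump mark ~ nu / nu(R_0)
            x += gamma(t, x, z)               # state jump gamma(t, X(t-), z)
        XT[p] = x
    return XT

XT = simulate()
print(XT.mean())  # close to x0 = 1.0: the compensated integral is a martingale
```

The same scheme underlies the Picard approximations (4.6): replacing `x` inside `gamma` by the previous iterate's path reproduces one Picard step.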
Lemma 4.1. Let $\{X_n\}_{n=0}^{\infty}$ be a sequence of random variables converging to $X$ in $L^2(\mu)$. Suppose that
\[
\sup_n \|X_n\|_{k,2} < \infty
\]
for some $k \ge 1$. Then $X \in D_{k,2}$ and $D^k_{\cdot,\cdot} X_n$, $n \ge 0$, converges to $D^k_{\cdot,\cdot} X$ in the sense of the weak topology of $L^2(\mu \times (\lambda \times \nu)^k)$.

Proof. First note that $\sup_n \|X_n\|_{k,2} < \infty$ is equivalent to
\[
\sup_n \big\| (1+N)^{\frac{k}{2}} X_n \big\|_{L^2(\mu)} < \infty.
\]
By weak compactness, there exists a subsequence $\{X_{n_i}\}_{i=1}^{\infty}$ such that $(1+N)^{\frac{k}{2}} X_{n_i}$ converges weakly to some element $\alpha \in L^2(\mu \times (\lambda \times \nu)^k)$. Then for any $Y$ in the domain of $(1+N)^{\frac{k}{2}}$, it follows from the self-adjointness of $N$ that
\[
E\big[ X (1+N)^{\frac{k}{2}} Y \big]
= \lim_{i \to \infty} E\big[ X_{n_i} (1+N)^{\frac{k}{2}} Y \big]
= \lim_{i \to \infty} E\big[ \big( (1+N)^{\frac{k}{2}} X_{n_i} \big) Y \big]
= E[\alpha Y].
\]
Therefore $\alpha = \big( (1+N)^{\frac{k}{2}} \big)^* X = (1+N)^{\frac{k}{2}} X$. For the proof in the Brownian motion case, see e.g. [18]. $\square$
For notational convenience, we shall from now on identify Malliavin derivatives of the same order, that is, we set $D^N_{r,z} X(t) = D^N_{r_1, z_1, r_2, z_2, \dots, r_N, z_N} X(t)$.

Lemma 4.2. Let $X(t)$, $0 \le t \le T$, be defined as in Equation (4.1). Then $X(t) \in D_{\infty,2}$, i.e. $D^N_{\cdot,\cdot} X(t)$ exists for all $N \ge 1$.
We need the following results to prove this lemma:

Proposition 4.3. Let $X \in D_{1,2}$ and let $f$ be a real continuous function on $\mathbb R$. Then $f(X) \in D_{1,2}$ and
\[
D_{t,z} f(X) = f(X + D_{t,z} X) - f(X). \tag{4.4}
\]
Proof. See e.g. [4]. $\square$
Lemma 4.4. Let $X(t)$, $0 \le t \le T$, be defined as in Equation (4.1). Then the $N$-th Malliavin derivative of $X(t)$ can be written as
\[
\begin{aligned}
D^N_{r,z} X(t)
&= \int_r^t \int_{R_0} \sum_{k=0}^{N} (-1)^k \binom{N}{k} \gamma \Big( s, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(s^-), \xi \Big) \tilde N(ds, d\xi) \\
&\quad + N \sum_{k=0}^{N-1} (-1)^k \binom{N-1}{k} \gamma \Big( r, \sum_{i=0}^{N-k-1} \binom{N-k-1}{i} D^i_{r,z} X(r^-), z \Big),
\end{aligned} \tag{4.5}
\]
for $N \ge 1$ and $D^0_{r,z} X(t) := X(t)$.

Proof. We prove the equality (4.5) by induction. Since the proof is based on straightforward calculations, we include it in the Appendix. $\square$
Now we are ready to prove Lemma 4.2.

Proof. Let us consider the Picard approximations $X_n(t)$ to $X(t)$ given by
\[
X_{n+1}(t) = x + \int_0^t \int_{R_0} \gamma(s, X_n(s^-), z) \, \tilde N(ds, dz) \tag{4.6}
\]
for $n \ge 0$ and $X_0(t) = x$. We want to show by induction on $n$ that $X_n(t)$ belongs to $D_{N,2}$ and
\[
\phi_{n+1,N}(t) \le k_1 + k_2 \sum_{j=1}^{N} \int_0^t \phi_{n,j}(u) \, du
\]
for all $n \ge 0$, $N \ge 1$ and $t \in [0,T]$, where
\[
\phi_{n+1,N}(t) := \sup_{0 \le r \le t} E \Big[ \int_{R_0^N} \sup_{r \le s \le t} \big| D^N_{r,z} X_{n+1}(s) \big|^2 \, \nu(dz) \dots \nu(dz) \Big] < \infty.
\]
Note that
\[
\begin{aligned}
D^N_{r,z} X_{n+1}(t)
&= \int_r^t \int_{R_0} \sum_{k=0}^{N} (-1)^k \binom{N}{k} \gamma \Big( s, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X_n(s^-), \xi \Big) \tilde N(ds, d\xi) \\
&\quad + N \sum_{k=0}^{N-1} (-1)^k \binom{N-1}{k} \gamma \Big( r, \sum_{i=0}^{N-k-1} \binom{N-k-1}{i} D^i_{r,z} X_n(r^-), z \Big),
\end{aligned} \tag{4.7}
\]
with $D^0_{r,z} X_n(s^-) := X_n(s^-)$. See Lemma 4.4 for a proof. Then, by Doob's maximal inequality, Fubini's theorem, the Itô isometry, and Equations (4.2) and (4.3), we get
\[
\begin{aligned}
\sum_{j=1}^{N} & E \Big[ \int_{R_0^j} \sup_{r \le s \le t} \big| D^j_{r,z} X_{n+1}(s) \big|^2 \, (\nu(dz))^j \Big] \\
&= \sum_{j=1}^{N} E \Big[ \int_{R_0^j} \sup_{r \le s \le t} \Big| \int_r^s \int_{R_0} \sum_{k=0}^{j} (-1)^k \binom{j}{k} \gamma \Big( u, \sum_{i=0}^{j-k} \binom{j-k}{i} D^i_{r,z} X_n(u^-), \xi \Big) \tilde N(du, d\xi) \\
&\qquad\qquad + j \sum_{k=0}^{j-1} (-1)^k \binom{j-1}{k} \gamma \Big( r, \sum_{i=0}^{j-k-1} \binom{j-k-1}{i} D^i_{r,z} X_n(r^-), z \Big) \Big|^2 (\nu(dz))^j \Big] \\
&\le 2 \sum_{j=1}^{N} E \Big[ \int_{R_0^j} \sup_{r \le s \le t} \Big| \int_r^s \int_{R_0} \sum_{k=0}^{j} (-1)^k \binom{j}{k} \gamma \Big( u, \sum_{i=0}^{j-k} \binom{j-k}{i} D^i_{r,z} X_n(u^-), \xi \Big) \tilde N(du, d\xi) \Big|^2 (\nu(dz))^j \Big] \\
&\qquad + 2 \sum_{j=1}^{N} j^2 \, E \Big[ \int_{R_0^j} \Big| \sum_{k=0}^{j-1} (-1)^k \binom{j-1}{k} \gamma \Big( r, \sum_{i=0}^{j-k-1} \binom{j-k-1}{i} D^i_{r,z} X_n(r^-), z \Big) \Big|^2 (\nu(dz))^j \Big] \\
&\le 8 \sum_{j=1}^{N} E \Big[ \int_{R_0^j} \int_r^t \int_{R_0} \Big| \sum_{k=0}^{j} (-1)^k \binom{j}{k} \gamma \Big( u, \sum_{i=0}^{j-k} \binom{j-k}{i} D^i_{r,z} X_n(u^-), \xi \Big) \Big|^2 \nu(d\xi) \, du \, (\nu(dz))^j \Big] \\
&\qquad + 2 \sum_{j=1}^{N} j^2 \, E \Big[ \int_{R_0^j} \Big| \sum_{k=0}^{j-1} (-1)^k \binom{j-1}{k} \gamma \Big( r, \sum_{i=0}^{j-k-1} \binom{j-k-1}{i} D^i_{r,z} X_n(r^-), z \Big) \Big|^2 (\nu(dz))^j \Big] \\
&\le k_1 + k_2 \sum_{j=1}^{N} \sum_{i=0}^{j} E \Big[ \int_{R_0^j} \int_r^t \big| D^i_{r,z} X_n(u^-) \big|^2 \, du \, (\nu(dz))^j \Big],
\end{aligned} \tag{4.8}
\]
for some constants $k_1$ and $k_2$. Applying a discrete version of Gronwall's inequality to Equation (4.8), we get
\[
\sup_n \|X_n\|_{N,2} < \infty
\]
for all $N \ge 1$. Moreover, note that
\[
E \Big[ \sup_{0 \le s \le T} |X_n(s) - X(s)|^2 \Big] \to 0
\]
as $n$ goes to infinity, by the Picard approximation. Hence, by Lemma 4.1 we conclude that $X(t) \in D_{\infty,2}$. $\square$
Theorem 4.5. Let $X(t)$ be the strong solution of the SDE
\[
dX(t) = \int_{R_0} \gamma(t, X(t^-), z) \, \tilde N(dt, dz) \tag{4.9}
\]
with $X(0) = x \in \mathbb R$. Assume that $\gamma : [0,T] \times \mathbb R \times R_0 \to \mathbb R$ satisfies the conditions (4.2) and (4.3). Then
\[
X(t) \in G_q
\]
for all $q \in \mathbb R$ and for all $0 \le t \le T$.

Proof. Using the isometry $\mathcal U : L^2(\mu) \to L^2(\mu_G)$ in (3.6) and Meyer's inequality (see e.g. [20]), we obtain that
\[
\|N^n X(t)\|^2 \le C_n \big( \|D^{2n}_{\cdot,\cdot} X(t)\|^2_{L^2((\lambda \times \nu)^{2n} \times \mu)} + \|X(t)\|^2_{L^2(\mu)} \big),
\]
where $C_n \ge 0$ is a constant depending on $n$. The proof of Meyer's inequality in [20], or Theorem 1.5.1 in [18], shows that $C_n$ is given by
\[
C_n = M^{n-1} \prod_{j=1}^{n-1} \Big( 1 + \frac{1}{j} \Big)^{\frac{j}{2}}, \qquad n \ge 1,
\]
for a universal constant $M$. We see that
\[
C_n \le M^{n-1} e^{\frac{n-1}{2}}, \qquad n \ge 1.
\]
Thus we get
\[
\|X(t)\|_q \le \|e^{qN} X(t)\|_{L^2(\mu)}
\le \sum_{n \ge 0} \frac{q^n}{n!} \|N^n X(t)\|_{L^2(\mu)}
\le \sum_{n \ge 0} \frac{q^n}{n!} M^{\frac{n-1}{2}} e^{\frac{n-1}{4}} \big( \|D^{2n}_{\cdot,\cdot} X(t)\|_{L^2((\lambda \times \nu)^{2n} \times \mu)} + \|X(t)\|_{L^2(\mu)} \big).
\]
On the other hand, it follows from Equation (4.5) that
\[
\|D^{2n}_{\cdot,\cdot} X(t)\|_{L^2((\lambda \times \nu)^{2n} \times \mu)}
\le L \Big( (n+1) 2^{2n} + n^3 \sum_{k=0}^{n-1} \binom{n-k}{k} 2^{n-k} \Big)
\le L \cdot 2^{3n+1}
\]
for a constant $L \ge 0$. Hence we get
\[
\|X(t)\|_q \le (L+1) \, e^{16 \sqrt[4]{e} \sqrt{M} \, q} \, \|X(t)\|_{L^2(\mu)} < \infty. \qquad \square
\]
Remark 4.6. We mention that the proof of Theorem 4.5 also carries over to backward stochastic differential equations (BSDE's) of the type
\[
Y(t) = x + \int_t^T f(s, Y(s), Z(s, \cdot)) \, ds - \int_t^T \int_{R_0} Z(s^-, z) \, \tilde N(ds, dz), \tag{4.10}
\]
provided e.g. that the driver $f$ is bounded and fulfills a linear growth and Lipschitz condition and $\nu(R_0) < \infty$.
Appendix: Proof of Lemma 4.4

For $N = 1$, we have
\[
\begin{aligned}
D_{r,z} X(t)
&= \int_r^t \int_{R_0} \sum_{k=0}^{1} (-1)^k \binom{1}{k} \gamma \Big( s, \sum_{i=0}^{1-k} \binom{1-k}{i} D^i_{r,z} X(s^-), \xi \Big) \tilde N(ds, d\xi) + \gamma(r, X(r^-), z) \\
&= \int_r^t \int_{R_0} \big[ \gamma(s, X(s^-) + D_{r,z} X(s^-), \xi) - \gamma(s, X(s^-), \xi) \big] \tilde N(ds, d\xi) + \gamma(r, X(r^-), z).
\end{aligned}
\]
Let us assume that (4.5) holds for $N \ge 1$. Hence
\[
\begin{aligned}
D^{N+1}_{r,z} X(t)
&= D_{r,z} \big( D^N_{r,z} X(t) \big) \\
&= D_{r,z} \Big[ \int_r^t \int_{R_0} \sum_{k=0}^{N} (-1)^k \binom{N}{k} \gamma \Big( s, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(s^-), \xi \Big) \tilde N(ds, d\xi) \\
&\qquad\quad + N \sum_{k=0}^{N-1} (-1)^k \binom{N-1}{k} \gamma \Big( r, \sum_{i=0}^{N-k-1} \binom{N-k-1}{i} D^i_{r,z} X(r^-), z \Big) \Big].
\end{aligned}
\]
Applying the chain rule (4.4) to each term yields
\[
\begin{aligned}
D^{N+1}_{r,z} X(t)
&= \int_r^t \int_{R_0} \sum_{k=0}^{N} (-1)^k \binom{N}{k} \Big[ \gamma \Big( s, \sum_{i=0}^{N-k} \binom{N-k}{i} \big( D^i_{r,z} + D^{i+1}_{r,z} \big) X(s^-), \xi \Big) \\
&\qquad\qquad - \gamma \Big( s, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(s^-), \xi \Big) \Big] \tilde N(ds, d\xi) \\
&\quad + \sum_{k=0}^{N} (-1)^k \binom{N}{k} \gamma \Big( r, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(r^-), z \Big) \\
&\quad + N \sum_{k=0}^{N-1} (-1)^k \binom{N-1}{k} \Big[ \gamma \Big( r, \sum_{i=0}^{N-k-1} \binom{N-k-1}{i} \big( D^i_{r,z} + D^{i+1}_{r,z} \big) X(r^-), z \Big) \\
&\qquad\qquad - \gamma \Big( r, \sum_{i=0}^{N-k-1} \binom{N-k-1}{i} D^i_{r,z} X(r^-), z \Big) \Big].
\end{aligned}
\]
Note that
\[
\sum_{i=0}^{N-k} \binom{N-k}{i} \big( D^i_{r,z} + D^{i+1}_{r,z} \big) X(s^-) = \sum_{i=0}^{N-k+1} \binom{N-k+1}{i} D^i_{r,z} X(s^-)
\]
and
\[
\sum_{i=0}^{N-k-1} \binom{N-k-1}{i} \big( D^i_{r,z} + D^{i+1}_{r,z} \big) X(r^-) = \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(r^-).
\]
Hence, collecting terms and shifting the summation index $k \mapsto k-1$ where appropriate, the $k$-sums recombine by Pascal's rule $\binom{N}{k} + \binom{N}{k-1} = \binom{N+1}{k}$, and we arrive at
\[
\begin{aligned}
D^{N+1}_{r,z} X(t)
&= \int_r^t \int_{R_0} \sum_{k=0}^{N+1} (-1)^k \binom{N+1}{k} \gamma \Big( s, \sum_{i=0}^{N-k+1} \binom{N-k+1}{i} D^i_{r,z} X(s^-), \xi \Big) \tilde N(ds, d\xi) \\
&\quad + (N+1) \sum_{k=0}^{N} (-1)^k \binom{N}{k} \gamma \Big( r, \sum_{i=0}^{N-k} \binom{N-k}{i} D^i_{r,z} X(r^-), z \Big),
\end{aligned}
\]
which is (4.5) with $N$ replaced by $N+1$. $\square$
Acknowledgment. The authors would like to thank N. Privault and C. Scheid.
References

1. Applebaum, D.: Lévy Processes and Stochastic Calculus, Cambridge University Press, UK, 2004.
2. Benth, F. E., Løkka, A.: Anticipative calculus for Lévy processes and stochastic differential equations, Stochast. Stochast. Rep. 76 (2004) 191-211.
3. Bertoin, J.: Lévy Processes, Cambridge University Press, Cambridge, 1996.
4. Di Nunno, G., Øksendal, B., Proske, F.: Malliavin Calculus for Lévy Processes with Applications to Finance, Springer-Verlag, 2009.
5. Grothaus, M., Kondratiev, Y. G., Streit, L.: Complex Gaussian analysis and the Bargmann-Segal space, Methods of Functional Analysis and Topology 3 (1997) 46-64.
6. Hida, T., Kuo, H.-H., Potthoff, J., Streit, L.: White Noise: An Infinite Dimensional Calculus, Kluwer, 1993.
7. Holden, H., Øksendal, B., Ubøe, J., Zhang, T.: Stochastic Partial Differential Equations, Birkhäuser Verlag, 1996.
8. Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes, Elsevier, North-Holland, 1989.
9. Jacod, J., Shiryaev, A. N.: Limit Theorems for Stochastic Processes, Springer, Berlin, Heidelberg, New York, 1987.
10. Kachanovsky, N. A.: On biorthogonal approach to a construction of non-Gaussian analysis and application to the Poisson analysis on the configuration space, Methods of Functional Analysis and Topology 6 (2000) 13-21.
11. Kuo, H.-H.: White Noise Distribution Theory, Probability and Stochastics Series, CRC Press, Boca Raton, 1996.
12. Lee, Y.-J., Shih, H.-H.: The product formula of multiple Lévy-Itô integrals, Bull. of the Inst. of Math. Academia Sinica 32 (2004) 71-95.
13. Lindsay, M., Maassen, H.: An integral kernel approach to noise, in: Quantum Probability and Applications II (1988) 192-208, Springer.
14. Løkka, A., Øksendal, B., Proske, F.: Stochastic partial differential equations driven by Lévy space time white noise, Annals of Appl. Prob. 14 (2004) 1506-1528.
15. Løkka, A., Proske, F.: Infinite dimensional analysis of pure jump Lévy processes on the Poisson space, Math. Scand. 98 (2006) 237-261.
16. Meyer-Brandis, T., Proske, F.: On the existence and explicit representability of strong solutions of Lévy noise driven SDE's with irregular coefficients, Comm. Math. Sci. 4 (2006) 129-154.
17. Meyer-Brandis, T., Proske, F.: Construction of strong solutions of SDE's via Malliavin calculus, Journal of Funct. Anal. 258 (2010) 3922-3953.
18. Nualart, D.: The Malliavin Calculus and Related Topics, Springer-Verlag, Berlin, Heidelberg, 2006.
19. Obata, N.: White Noise Calculus and Fock Space, LNM 1577, Springer-Verlag, Berlin, 1994.
20. Pisier, G.: Riesz transforms: A simple analytic proof of P. A. Meyer's inequality, Séminaire de Probabilités XXIII 1321 (1988) 485-501.
21. Potthoff, J., Streit, L.: A characterization of Hida distributions, Journal of Functional Analysis 101 (1991) 212-229.
22. Potthoff, J., Timpel, M.: On a dual pair of spaces of smooth and generalized random variables, Potential Analysis 4 (1995) 637-654.
23. Privault, N.: A transfer principle from Wiener to Poisson space and applications, Journal of Functional Analysis 132 (1995) 335-360.
24. Proske, F.: Stochastic differential equations - some new ideas, Stochastics 79 (2007) 563-600.
25. Protter, P.: Stochastic Integration and Differential Equations, Springer-Verlag, Berlin, 1990.
26. Sato, K.: Lévy Processes and Infinitely Divisible Distributions, Cambridge Studies in Advanced Mathematics, Vol. 68, Cambridge University Press, Cambridge, 1999.
27. Segal, I. E.: Lectures at the 1960 Summer Seminar in Applied Mathematics, Boulder, Colorado, 1960.
28. Üstünel, A. S.: An Introduction to Analysis on Wiener Space, Springer, 1995.
29. Üstünel, A. S., Zakai, M.: Transformation of Measure on Wiener Space, Springer Monographs in Mathematics, 2000.
30. Watanabe, S.: On Stochastic Differential Equations and Malliavin Calculus, Tata Institute of Fundamental Research, Vol. 73, Springer-Verlag, 1979.
31. Wu, L.: Construction de l'opérateur de Malliavin sur l'espace de Poisson, LNM 1247, Séminaire de Probabilités XXI, 1987.
32. Wu, L.: Inégalité de Sobolev sur l'espace de Poisson, LNM 1247, Séminaire de Probabilités XXI, 1987.
Yeliz Yolcu Okur: Institute of Applied Mathematics, Middle East Technical University, 06800 Ankara, Turkey

Frank Proske: Centre of Mathematics for Applications (CMA), University of Oslo, Norway

Hassilah Binti Salleh: Department of Mathematics, Universiti Malaysia Terengganu, 21030 Kuala Terengganu, Malaysia
Serials Publications
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 471-486
www.serialspublications.com

SOLUTIONS OF LINEAR ELLIPTIC EQUATIONS IN GAUSS-SOBOLEV SPACES

PAO-LIU CHOW

Abstract. The paper is concerned with a class of linear elliptic equations in a Gauss-Sobolev space setting. They arise from the stationary solutions of the corresponding parabolic equations. For nonhomogeneous elliptic equations, under appropriate conditions, the existence and uniqueness theorem for strong solutions is given. Then it is shown that the associated resolvent operator is compact. Based on this result, we shall prove a Fredholm alternative theorem for the elliptic equation and a Sturm-Liouville type theorem for the eigenvalue problem of a symmetric elliptic operator.
1. Introduction

The subject of parabolic equations in infinite dimensions has been studied by many authors (see, e.g., the papers [6, 7, 17, 3] and the books [8, 9]). As in finite dimensions, an elliptic equation may be regarded as the equation for the stationary solution of some parabolic equation, if it exists, as time goes to infinity. Among early works in the abstract Wiener space, the infinite-dimensional Laplace equation was treated in the context of potential theory by Gross [13], and a nice exposition of the connection between infinite-dimensional elliptic and parabolic equations was given by Daleskii [6]. More recently, in the book [9] by Da Prato and Zabczyk, the authors gave a detailed treatment of infinite-dimensional elliptic equations in spaces of continuous functions, where the solutions are considered as the stationary solutions of the corresponding parabolic equations. Similarly, in [4], we considered a class of semilinear parabolic equations in an $L^2$-Gauss-Sobolev space and showed that, under suitable conditions, their stationary solutions are the mild solutions of the related elliptic equations. So far, in studying the elliptic problem, most results rely on its connection to the parabolic equation, which is the Kolmogorov equation of some diffusion process in a Hilbert space. However, for partial differential equations in finite dimensions, the theory of elliptic equations is considered in its own right, independent of related parabolic equations [12]. Therefore it is worthwhile to generalize this approach to elliptic equations in infinite dimensions. In the present paper, similar to the finite-dimensional case, we shall begin with a class of linear elliptic equations in an $L^2$-Sobolev space setting with respect to a suitable Gaussian measure. It will be

Received 2012-8-26; Communicated by the editors.
2000 Mathematics Subject Classification. Primary 60H; Secondary 60G, 35K55, 35K99, 93E.
Key words and phrases. Elliptic equation in infinite dimensions, Gauss-Sobolev space, strong solutions, compact resolvent, eigenvalue problem.
shown that several basic results for linear elliptic equations in finite dimensions can be extended to their infinite-dimensional counterparts. In passing it is worth noting that the infinite-dimensional Laplacians on a Lévy-Gel'fand triple were treated by Barhoumi, Kuo and Ouerdiane [1].

To be specific, the paper is organized as follows. In Section 2 we recall some basic results on Gauss-Sobolev spaces needed in the subsequent sections. Section 3 pertains to the strong solutions of some linear elliptic equations in a Gauss-Sobolev space, where the existence and uniqueness Theorem 3.2 is proved. Section 4 contains a key result (Theorem 4.1) showing that the resolvent of the elliptic operator is compact. Based on this result, the Fredholm alternative Theorem 4.4 is proved. In Section 5 we first characterize the spectral properties of the elliptic operator in Theorem 5.1. Then the eigenvalue problem for the symmetric part of the elliptic operator is studied, and the results are summarized in Theorems 5.2 and 5.3. They show that the eigenvalues are positive, nondecreasing, with finite multiplicity, and that the set of normalized eigenfunctions forms a complete orthonormal basis in the Hilbert space $\mathcal H$ consisting of all $L^2(\mu)$-functions, where $\mu$ is an invariant measure defined in Theorem 2.1. Moreover, the principal eigenvalue is shown to be simple and can be characterized by a variational principle.
2. Preliminaries

Let $H$ be a real separable Hilbert space with inner product $(\cdot,\cdot)$ and norm $|\cdot|$. Let $V \subset H$ be a Hilbert subspace with norm $\|\cdot\|$. Denote the dual space of $V$ by $V'$ and their duality pairing by $\langle \cdot, \cdot \rangle$. Assume that the inclusions $V \subset H \cong H' \subset V'$ are dense and continuous [15].

Suppose that $A : V \to V'$ is a continuous closed linear operator with domain $\mathcal D(A)$ dense in $H$, and $W_t$ is an $H$-valued Wiener process with covariance operator $R$. Consider the linear stochastic equation in a distributional sense:
\[
\begin{aligned}
d u_t &= A u_t \, dt + d W_t, \qquad t > 0, \\
u_0 &= h \in H.
\end{aligned} \tag{2.1}
\]
Assume that the following conditions (A) hold:

(A.1) Let $A : V \to V'$ be a self-adjoint, coercive operator such that
\[
\langle A v, v \rangle \le -\beta \|v\|^2
\]
for some $\beta > 0$, and $(-A)$ has positive eigenvalues $0 < \alpha_1 \le \alpha_2 \le \dots \le \alpha_n \le \dots$, counting the finite multiplicity, with $\alpha_n \uparrow \infty$ as $n \to \infty$. The corresponding orthonormal set of eigenfunctions $\{e_n\}$ is complete.

(A.2) The resolvent operator $R_\lambda(A)$ and the covariance operator $R$ commute, so that $R_\lambda(A) R = R \, R_\lambda(A)$, where $R_\lambda(A) = (\lambda I - A)^{-1}$, $\lambda \ge 0$, with $I$ being the identity operator in $H$.

(A.3) The covariance operator $R : H \to H$ is a self-adjoint operator with a finite trace, $\operatorname{Tr} R < \infty$.
It follows from (A.2) and (A.3) that $\{e_n\}$ is also the set of eigenfunctions of $R$, with eigenvalues $\{\rho_n\}$ such that
\[
R e_n = \rho_n e_n, \qquad n = 1, 2, \dots, \tag{2.2}
\]
where $\rho_n > 0$ and $\sum_{n=1}^{\infty} \rho_n < \infty$.
By applying Theorem 4.1 in [5] for invariant measures and a direct calculation, we have the following theorem.

Theorem 2.1. Under conditions (A), the stochastic equation (2.1) has a unique invariant measure $\mu$ on $H$, which is a centered Gaussian measure with covariance operator $\Gamma = -\frac{1}{2} A^{-1} R$.
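In matrix form, the content of Theorem 2.1 is the classical Lyapunov relation: the stationary covariance $\Gamma$ of $du_t = A u_t\,dt + dW_t$ with noise covariance $R$ solves $A\Gamma + \Gamma A^* + R = 0$, and when $A$ and $R$ commute (condition (A.2)) this reduces to $\Gamma = -\frac{1}{2}A^{-1}R$, i.e. $\gamma_n = \rho_n/(2\alpha_n)$ eigenvalue by eigenvalue. A finite-dimensional Python check (an illustrative surrogate for the Hilbert space setting, with assumed small diagonal matrices):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Finite-dimensional surrogate: A self-adjoint negative definite, R commuting with A.
alpha = np.array([1.0, 3.0, 5.0])   # eigenvalues of -A, increasing as in (A.1)
rho = np.array([0.5, 0.2, 0.1])     # eigenvalues of R, summable as in (A.3)
A = np.diag(-alpha)
R = np.diag(rho)

# The stationary covariance solves A G + G A^T = -R (continuous Lyapunov equation).
G = solve_continuous_lyapunov(A, -R)

print(np.diag(G))                    # rho_n / (2 alpha_n), cf. gamma_n in Remark 2.2(1)
print(-0.5 * np.linalg.inv(A) @ R)   # Theorem 2.1: Gamma = -(1/2) A^{-1} R
```

Without commutativity, $\Gamma = \int_0^\infty e^{tA} R e^{tA}\,dt$ still solves the same Lyapunov equation, but it is no longer $-\frac{1}{2}A^{-1}R$; the solver handles that case too.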
Remark 2.2. We make the following two remarks:
(1) It is easy to check that the e_n's are also eigenfunctions of Γ, so that
        Γ e_n = γ_n e_n,  n = 1, 2, ···,    (2.3)
    where γ_n = ρ_n/(2α_n).
(2) Let e^{tA}, t ≥ 0, denote the semigroup of operators on H generated by A. Without condition (A.2), the covariance operator of the invariant measure µ is given by Γ = ∫_0^∞ e^{tA} R e^{tA} dt, which cannot be evaluated in closed form. Though an L²(µ)-theory can still be developed in the subsequent analysis, one then needs to impose conditions which are not easily verifiable.
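The eigenvalue relation (2.3), γ_n = ρ_n/(2α_n), is easy to check numerically on a toy spectrum. The sketch below is purely illustrative: the choices α_n = n² and ρ_n = 1/n² are assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical spectra (illustration only): eigenvalues of (-A) and of R.
n = np.arange(1, 1001)
alpha = n.astype(float) ** 2        # 0 < alpha_1 <= alpha_2 <= ..., alpha_n -> infinity
rho = 1.0 / n.astype(float) ** 2    # rho_n > 0 with sum(rho_n) < infinity, i.e. Tr R < infinity

# The covariance Gamma = -(1/2) A^{-1} R acts diagonally on {e_n},
# so its eigenvalues are gamma_n = rho_n / (2 alpha_n), cf. (2.3).
gamma = rho / (2.0 * alpha)

# Gamma inherits the finite trace from R, since gamma_n <= rho_n / (2 alpha_1).
assert np.all(gamma > 0)
assert gamma.sum() <= rho.sum() / (2.0 * alpha[0])
print(f"Tr R ≈ {rho.sum():.4f}, Tr Γ ≈ {gamma.sum():.4f}")
```

Any other trace-class ρ_n and divergent α_n would do; the point is only that Γ is again a positive trace-class operator.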
Let H = L²(H, µ) be the Hilbert space consisting of real-valued functionals Φ on H, with norm
    |||Φ||| = { ∫_H |Φ(v)|² µ(dv) }^{1/2}
and inner product [·,·] given by
    [Θ, Φ] = ∫_H Θ(v)Φ(v) µ(dv),  for Θ, Φ ∈ H.
Let n = (n_1, n_2, ···, n_k, ···), where n_k ∈ Z_+, be a sequence of nonnegative integers, and let Z = {n : |n| = Σ_{k=1}^∞ n_k < ∞}, so that n_k = 0 except for a finite number of the n_k's. Let h_m(r) be the normalized one-dimensional Hermite polynomial of degree m. For v ∈ H, define a Hermite (polynomial) functional of degree n by
    H_n(v) = Π_{k=1}^∞ h_{n_k}[ℓ_k(v)],
where we set ℓ_k(v) = (v, Γ^{−1/2}e_k) and Γ^{−1/2} denotes a pseudo-inverse of Γ^{1/2}. For a smooth functional Φ on H, let DΦ and D²Φ denote the Fréchet derivatives of the first and second orders, respectively. The differential operator
    𝒜Φ(v) = (1/2) Tr[R D²Φ(v)] + ⟨Av, DΦ(v)⟩    (2.4)
is well defined for a polynomial functional Φ such that DΦ(v) lies in the domain D(A) of A. However, this condition is rather restrictive on Φ. For ease of calculation, in place of Hermite functionals, we introduce an exponential family E_A(H) of functionals as follows [8]:
    E_A(H) := Span{Re Φ_h, Im Φ_h : h ∈ D(A)},    (2.5)
where Φ_h(v) := exp{i(h, v)}. It is known that E_A(H) ⊂ D(𝒜) is dense in H. For Φ ∈ E_A(H), the right-hand side of (2.4) is well defined.
Returning to the Hermite functionals, it is known that the following holds [2]:

Proposition 2.3. The set of all Hermite functionals {H_n : n ∈ Z} forms a complete orthonormal system in H. Moreover we have
    𝒜H_n(v) = −λ_n H_n(v),  ∀ n ∈ Z,
where λ_n = Σ_{k=1}^∞ n_k α_k.
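Concretely, the eigenvalue λ_n = Σ_k n_k α_k and the value of a Hermite functional can be assembled from one-dimensional normalized Hermite polynomials. In the sketch below, the spectrum α_k = k and the multi-index are made-up illustrations, and the coordinates ℓ_k(v) are replaced by given numbers, since evaluating (v, Γ^{−1/2}e_k) requires the full operator setup.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def h(m, x):
    """Normalized probabilists' Hermite polynomial h_m(x) = He_m(x)/sqrt(m!)."""
    coef = np.zeros(m + 1)
    coef[m] = 1.0
    return He.hermeval(x, coef) / math.sqrt(math.factorial(m))

# A multi-index n with |n| < infinity: only finitely many nonzero entries.
n_idx = {1: 2, 3: 1}            # n_1 = 2, n_3 = 1, all other n_k = 0
alpha = lambda k: float(k)      # assumed spectrum of (-A), for illustration only

# lambda_n = sum_k n_k alpha_k  (Proposition 2.3)
lam = sum(nk * alpha(k) for k, nk in n_idx.items())
print("lambda_n =", lam)        # 2*1 + 1*3 = 5

# H_n(v) = prod_k h_{n_k}(l_k(v)); since h_0 = 1, only nonzero n_k contribute.
def hermite_functional(ell):    # ell[k] stands in for l_k(v) = (v, Gamma^{-1/2} e_k)
    return np.prod([h(nk, ell[k]) for k, nk in n_idx.items()])

print("H_n at a sample point:", hermite_functional({1: 0.3, 3: -1.2}))
```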
We now introduce the L²-Gauss-Sobolev spaces. By Proposition 2.3, any Φ ∈ H can be expressed as
    Φ = Σ_{n∈Z} Φ_n H_n,
where Φ_n = [Φ, H_n] and |||Φ|||² = Σ_n |Φ_n|² < ∞.
Let H_m denote the Gauss-Sobolev space of order m defined by
    H_m = {Φ ∈ H : |||Φ|||_m < ∞}
for any integer m, where the norm is
    |||Φ|||_m = |||(I − 𝒜)^{m/2}Φ||| = { Σ_n (1 + λ_n)^m |Φ_n|² }^{1/2},    (2.6)
with I being the identity operator in H = H_0. For m ≥ 1, the dual space H_m′ of H_m is denoted by H_{−m}, and the duality pairing between them will be denoted by ⟨·,·⟩_m, with ⟨·,·⟩_1 = ⟨·,·⟩. Clearly, the sequence of norms {|||Φ|||_m} is increasing, that is,
    |||Φ|||_m ≤ |||Φ|||_{m+1}
for any integer m, and, by identifying H with its dual H′, we have
    H_m ⊂ H_{m−1} ⊂ ··· ⊂ H_1 ⊂ H ⊂ H_{−1} ⊂ ··· ⊂ H_{−m+1} ⊂ H_{−m},  for m ≥ 1,
and the inclusions are dense and continuous. Of course the spaces H_m can be defined for any real number m, but they are not needed in this paper.
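For coefficient sequences the norms (2.6) are elementary to compute. A small sketch (with made-up coefficients and eigenvalues, illustration only) shows that |||Φ|||_m grows with m, which is exactly the monotonicity behind the inclusion chain above.

```python
import numpy as np

# Toy data (illustration only): expansion coefficients Phi_n and eigenvalues lambda_n.
lam = np.array([0.0, 1.0, 2.0, 5.0, 9.0])        # lambda_n >= 0, nondecreasing
phi = np.array([1.0, 0.5, -0.3, 0.1, 0.05])      # Phi_n = [Phi, H_n]

def sobolev_norm(phi, lam, m):
    """|||Phi|||_m = { sum_n (1 + lambda_n)^m |Phi_n|^2 }^{1/2}, cf. (2.6)."""
    return np.sqrt(np.sum((1.0 + lam) ** m * phi ** 2))

norms = [sobolev_norm(phi, lam, m) for m in range(-2, 3)]
print([round(x, 4) for x in norms])

# The sequence of norms is nondecreasing in m, matching H_m ⊂ H_{m-1} ⊂ ··· ⊂ H_{-m}.
assert all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))
```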
Owing to the use of the invariant measure µ, it is possible to develop an L²-theory of infinite-dimensional parabolic and elliptic equations connected to stochastic PDEs. To do so, as in the finite-dimensional case, integration by parts is an indispensable technique. In the abstract Wiener space, integration by parts with respect to the Wiener measure was obtained by Kuo [14]. As a generalization to the Gaussian invariant measure µ, the following integration by parts formula holds (see Lemma 9.2.3 of [9]). In this setting, instead of the usual derivative DΦ, it is more natural to use the R-derivative
    D_RΦ = R^{1/2} DΦ,
which can be regarded as a Gross derivative, i.e. the derivative of Φ in the directions of H_R := R^{1/2}H.
Proposition 2.4. Let g ∈ H_R and Φ, Ψ ∈ H_1. Then we have
    ∫_H (D_RΦ, g) Ψ dµ = −∫_H Φ (D_RΨ, g) dµ + ∫_H (v, Γ^{−1/2}g) Φ Ψ dµ.    (2.7)
The following properties of 𝒜 are crucial in the subsequent analysis. For now, let the differential operator 𝒜 given by (2.4) be defined on the set of Hermite polynomial functionals. In fact it can be extended to a self-adjoint linear operator in H. To this end, let P_N be the projection operator in H onto the subspace spanned by the Hermite polynomial functionals of degree N, and define 𝒜_N = P_N 𝒜. Then the following theorem holds (Theorem 3.1, [2]).

Theorem 2.5. The sequence {𝒜_N} converges strongly to a linear symmetric operator 𝒜 : H_2 → H, so that, for Φ, Ψ ∈ H_2, the second integration by parts formula holds:
    ∫_H (𝒜Φ) Ψ dµ = ∫_H (𝒜Ψ) Φ dµ = −(1/2) ∫_H (D_RΦ, D_RΨ) dµ.    (2.8)
Moreover, 𝒜 has a self-adjoint extension, still denoted by 𝒜, with domain dense in H.
In particular, for m = 1, it follows from (2.6) and (2.8) that:

Corollary 2.6. The H_1-norm can be defined as
    |||Φ|||_1 = { |||Φ|||² + (1/2)|||D_RΦ|||² }^{1/2},    (2.9)
for all Φ ∈ H_1, where D_RΦ := R^{1/2}DΦ.

Remark 2.7. In (2.9) the factor 1/2 is retained for convenience. It also becomes clear that the space H_1 consists of all L²(µ)-functionals whose R-derivatives are µ-square-integrable.
Let the functions F : H → H and G : H → R be bounded and continuous. For Q ∈ L²((0,T); H) and Θ ∈ H, consider the initial-value problem for the parabolic equation:
    ∂Ψ_t(v)/∂t = 𝒜Ψ_t(v) − (F(v), D_RΨ_t(v)) − G(v)Ψ_t(v) + Q_t(v),    (2.10)
    Ψ_0(v) = Θ(v),
for 0 < t < T, v ∈ H, where 𝒜 is given by (2.4). Suppose that the conditions of Theorem 4.2 in [3] are met. Then the following proposition holds.
Proposition 2.8. The initial-value problem (2.10) for the parabolic equation has a unique solution Ψ ∈ C([0,T]; H) ∩ L²((0,T); H_1) such that
    sup_{0≤t≤T} |||Ψ_t|||² + ∫_0^T |||Ψ_s|||_1² ds ≤ K(T){ 1 + |||Θ|||² + ∫_0^T |||Q_s|||² ds },    (2.11)
where K(T) is a positive constant depending on T.
Moreover, when Q_t = Q is independent of t, it was shown that, as t → ∞, the solution Ψ_t of (2.10) approaches the mild solution Φ of the linear elliptic equation
    −𝒜Φ(v) + (F(v), D_RΦ(v)) + G(v)Φ(v) = Q(v),    (2.12)
or, for α > 0 and 𝒜_α = 𝒜 + α, Φ satisfies the equation
    Φ(v) = 𝒜_α^{−1}{ (F(v), D_RΦ(v)) + (G(v) + α)Φ(v) − Q(v) },  v ∈ H.
In what follows, we shall study the strong solutions (to be defined) of equation (2.12) in an L²-Gauss-Sobolev space setting and the related eigenvalue problems.
3. Solutions of Linear Elliptic Equations

Let L denote the linear elliptic operator defined by
    LΦ = −𝒜Φ + ℱΦ + GΦ,  Φ ∈ E_A(H),    (3.1)
where 𝒜 is given by (2.4) and
    ℱΦ = (F(·), D_RΦ(·)).    (3.2)
Then, for Φ ∈ H_1 and Q ∈ H, the elliptic equation (2.12) can be written as
    LΦ = Q    (3.3)
in a generalized sense. Multiplying the equation (3.1) by Ψ ∈ H_1 and integrating the resulting equation with respect to µ, we obtain
    ∫_H (LΦ)Ψ dµ = ∫_H { (1/2)(D_RΦ, D_RΨ) + (ℱΦ)Ψ + (GΦ)Ψ } dµ,    (3.4)
where the second integration by parts formula (2.8) was used.
Associated with L, we define a bilinear form B(·,·) : H_1 × H_1 → R as follows:
    B(Φ, Ψ) = ∫_H { (1/2)(D_RΦ, D_RΨ) + (ℱΦ)Ψ + (GΦ)Ψ } dµ
             = (1/2)[D_RΦ, D_RΨ] + [ℱΦ, Ψ] + [GΦ, Ψ],    (3.5)
for Φ, Ψ ∈ H_1.
Now consider generalized solutions of the elliptic equation (3.3). There are several notions of generalized solution, such as mild solution, strict solution and so on (see [9]). Here, for Q ∈ H_{−1}, a generalized solution Φ is said to be a strong (or variational) solution of problem (3.3) if Φ ∈ H_1 and it satisfies the equation
    B(Φ, Ψ) = ⟨Q, Ψ⟩,  for all Ψ ∈ H_1.    (3.6)
Lemma 3.1 (Energy inequalities). Suppose that F : H → H and G : H → R are bounded and continuous. Then the following inequalities hold. There exists a constant b > 0 such that
    |B(Φ, Ψ)| ≤ b |||Φ|||_1 |||Ψ|||_1,  for Φ, Ψ ∈ H_1,    (3.7)
and, for any ε ∈ (0, 1/2), B satisfies the coercivity condition:
    B(Φ, Φ) ≥ (1/2 − ε)|||D_RΦ|||² + (δ − β²/(4ε))|||Φ|||²,  for Φ ∈ H_1,    (3.8)
where β = sup_{v∈H} |F(v)| and δ = inf_{v∈H} G(v).

Proof. From the equations (3.2) and (3.5), we have
    |B(Φ, Ψ)| = | ∫_H { (1/2)(D_RΦ, D_RΨ) + (F, D_RΦ)Ψ + (GΦ)Ψ } dµ |
              ≤ (1/2)|||D_RΦ||| |||D_RΨ||| + |||(F, D_RΦ)||| |||Ψ||| + |||GΦ||| |||Ψ|||
              ≤ (1/2)|||D_RΦ||| |||D_RΨ||| + β|||D_RΦ||| |||Ψ||| + γ|||Φ||| |||Ψ|||,    (3.9)
where β = sup_{v∈H} |F(v)| and γ = sup_{v∈H} |G(v)|. It follows from (3.9) that
    |B(Φ, Ψ)| ≤ b |||Φ|||_1 |||Ψ|||_1
for some suitable constant b > 0.
By setting Ψ = Φ in (3.2) and (3.5), we obtain
    B(Φ, Φ) = ∫_H { (1/2)(D_RΦ, D_RΦ) + (F, D_RΦ)Φ + (GΦ)Φ } dµ
            = (1/2)|||D_RΦ|||² + [(F, D_RΦ), Φ] + [GΦ, Φ]
            ≥ (1/2)|||D_RΦ|||² − β|||D_RΦ||| |||Φ||| + δ|||Φ|||²,    (3.10)
where δ = inf_{v∈H} G(v). For any ε > 0, we have
    β|||D_RΦ||| |||Φ||| ≤ ε|||D_RΦ|||² + (β²/(4ε))|||Φ|||².    (3.11)
By making use of (3.11) in (3.10), we get the desired inequality (3.8):
    B(Φ, Φ) ≥ (1/2 − ε)|||D_RΦ|||² + (δ − β²/(4ε))|||Φ|||²,
which completes the proof.
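The only elementary inequality used in the proof besides Cauchy–Schwarz is the ε-form of Young's inequality (3.11). A quick numerical check (random inputs, illustration only) confirms β·a·b ≤ ε·a² + (β²/4ε)·b², with a standing for |||D_RΦ||| and b for |||Φ|||:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check beta*a*b <= eps*a^2 + (beta^2/(4*eps))*b^2 on random positive inputs.
for _ in range(1000):
    a, b, beta = rng.uniform(0.0, 10.0, size=3)
    eps = rng.uniform(1e-3, 0.5)
    assert beta * a * b <= eps * a**2 + (beta**2 / (4.0 * eps)) * b**2 + 1e-9

# Consequence, cf. (3.8): from B(Phi,Phi) >= a^2/2 - beta*a*b + delta*b^2 one gets
# B(Phi,Phi) >= (1/2 - eps)*a^2 + (delta - beta^2/(4*eps))*b^2.
print("Young's epsilon-inequality verified on 1000 random samples")
```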
With the aid of the energy estimates, under suitable conditions on F and G,
the following existence theorem can be established.
Theorem 3.2 (Existence of strong solutions). Suppose the functions F : H → H and G : H → R are bounded and continuous. Then there is a constant α_0 ≥ 0 such that for each α > α_0 and for any Q ∈ H_{−1}, the elliptic problem
    L_αΦ := LΦ + αΦ = Q    (3.12)
has a unique strong solution Φ ∈ H_1.

Proof. By definition of a strong solution, we have to show that there exists a unique solution Φ ∈ H_1 satisfying the variational equation
    B_α(Φ, Ψ) := B(Φ, Ψ) + α[Φ, Ψ] = ⟨Q, Ψ⟩,  for all Ψ ∈ H_1.    (3.13)
To this end we apply the Lax–Milgram theorem [18] in the real separable Hilbert space H_1. By Lemma 3.1, the inequality (3.7) holds similarly for B_α with a different constant b_1 > 0:
    |B_α(Φ, Ψ)| ≤ b_1 |||Φ|||_1 |||Ψ|||_1,  for Φ, Ψ ∈ H_1.    (3.14)
In particular, we take ε = 1/4 and α_0 = |δ − β²| in the inequality (3.11), which is then used in (3.13) to give
    B_α(Φ, Φ) ≥ (1/4)|||D_RΦ|||² + η|||Φ|||²,    (3.15)
where η = α − α_0 > 0 by assumption. It follows that
    B_α(Φ, Φ) ≥ κ|||Φ|||_1²,    (3.16)
for κ = min{1/4, η}. In view of (3.14) and (3.15), the bilinear form B_α(·,·) satisfies the hypotheses of the Lax–Milgram theorem. For Q ∈ H_{−1}, ⟨Q, ·⟩ defines a bounded linear functional on H_1. Hence there exists a function Φ ∈ H_1 which is the unique solution of the equation
    B_α(Φ, Ψ) = ⟨Q, Ψ⟩  for all Ψ ∈ H_1.
Remark 3.3. By writing
    B_α(Φ, Ψ) = ⟨L_αΦ, Ψ⟩,
it follows from Theorem 3.2 that the mapping L_α : H_1 → H_{−1} is an isomorphism.

Corollary 3.4. Suppose that F ∈ C_b(H; H) and G ∈ C_b(H) are such that
    inf_{v∈H} G(v) > sup_{v∈H} |F(v)|².    (3.17)
Then there exists a unique strong solution Φ ∈ H_1 of the equation LΦ = Q.

Proof. This follows from the fact that, under condition (3.17), the inequality (3.16) holds with α = 0.
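In the diagonal special case F ≡ 0 and G ≡ g constant (an assumption made here for illustration; the theorem covers general bounded continuous F and G), the operator acts coordinatewise on the Hermite expansion, so (3.12) reduces to (λ_n + g + α)Φ_n = Q_n and the strong solution can be written down explicitly:

```python
import numpy as np

# Assumed diagonal setting (F = 0, G = const): toy eigenvalues on Hermite modes.
lam = np.array([0.0, 1.0, 2.0, 3.0, 5.0])    # lambda_n, eigenvalues of (-script A)
g, alpha = 0.5, 1.0                           # constant G and the shift in L_alpha
Q = np.array([1.0, -0.4, 0.2, 0.0, 0.1])      # coefficients Q_n = [Q, H_n]

# L_alpha Phi = Q decouples: (lambda_n + g + alpha) * Phi_n = Q_n.
phi = Q / (lam + g + alpha)

# Verify the defining relation mode by mode.
residual = (lam + g + alpha) * phi - Q
assert np.allclose(residual, 0.0)
print("solution coefficients:", np.round(phi, 4))
```

Since λ_n + g + α ≥ g + α > 0, the division is always well defined, mirroring the unique solvability asserted by Theorem 3.2.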
4. Compact Resolvent and Fredholm Alternative

For U ∈ H, consider the elliptic problem:
    L_λΦ := LΦ + λΦ = U,    (4.1)
where λ > α_0 is a real parameter. By Theorem 3.2, the problem (4.1) has a unique strong solution Φ ∈ H_1 satisfying
    B_λ(Φ, Ψ) = ⟨L_λΦ, Ψ⟩ = ⟨U, Ψ⟩,    (4.2)
for all Ψ ∈ H_1. For each U ∈ H, let us express the solution of (4.1) as
    Φ = L_λ^{−1}U.    (4.3)
Denote the resolvent operator K_λ of L on H by
    K_λU := L_λ^{−1}U    (4.4)
for all U ∈ H. In the following theorem, we show that the resolvent operator K_λ : H → H is compact.
Theorem 4.1 (Compact resolvent). Under the conditions of Theorem 3.2 with λ = α, the resolvent operator K_α : H → H is bounded, linear and compact.

Proof. By the estimate (3.16) and equation (3.13), we have
    κ|||Φ|||_1² ≤ B_α(Φ, Φ) = [U, Φ] ≤ |||U||| |||Φ|||_1,
which, in view of (4.3) and (4.4), implies
    |||Φ|||_1 = |||K_αU|||_1 ≤ C|||U|||,    (4.5)
for C = 1/κ. Hence the linear operator K_α : H → H is bounded.
To show compactness, let {U_n} be a bounded sequence in H with |||U_n||| ≤ C_0 for some C_0 > 0 and each n ≥ 1. Define Φ_n = K_αU_n. Then, by (4.5), we obtain
    |||Φ_n|||_1 ≤ (1/κ)|||U_n||| ≤ C_1,    (4.6)
where C_1 = C_0/κ. It follows that {Φ_n} is a bounded sequence in the separable Hilbert space H_1 and, hence, there exists a subsequence, denoted by {Φ_k} for simplicity, which converges weakly to some Φ, written Φ_k ⇀ Φ in H_1. To show that this subsequence converges strongly in H, by Proposition 2.3 we can express
    Φ = Σ_{n∈Z} φ_n H_n  and  Φ_k = Σ_{n∈Z} φ_{n,k} H_n,    (4.7)
where φ_n = [Φ, H_n] and φ_{n,k} = [Φ_k, H_n]. For any integer N > 0, let Z_N = {n ∈ Z : 1 ≤ |n| ≤ N} and Z_N^+ = {n ∈ Z : |n| > N}. By the orthogonality of the Hermite functionals H_n, we get
    |||Φ − Φ_k|||² = Σ_{n∈Z_N} (φ_n − φ_{n,k})² + Σ_{n∈Z_N^+} (φ_n − φ_{n,k})²
                  ≤ Σ_{n∈Z_N} (φ_n − φ_{n,k})² + (1/(α_1N)) Σ_{n∈Z_N^+} (1 + λ_n)(φ_n − φ_{n,k})²
                  ≤ Σ_{n∈Z_N} [Φ − Φ_k, H_n]² + (1/(α_1N)) |||Φ − Φ_k|||_1²,    (4.8)
where we used 1 + λ_n ≥ λ_n ≥ α_1|n| > α_1N for n ∈ Z_N^+. Since Φ_k ⇀ Φ, by a theorem on weak convergence in H_1 (p. 120, [18]), the subsequence {Φ_k} is bounded:
    sup_{k≥1} |||Φ_k|||_1 ≤ C  and  |||Φ|||_1 ≤ C,  for some constant C > 0.
For the last term in the inequality (4.8), given any ε > 0, there is an integer N_0 > 0 such that
    (1/(α_1N)) |||Φ − Φ_k|||_1² ≤ 4C²/(α_1N) < ε/2,  for N > N_0.    (4.9)
Again, by the weak convergence of {Φ_k}, for the given N_0 we have
    lim_{k→∞} Σ_{n∈Z_{N_0}} (φ_n − φ_{n,k})² = lim_{k→∞} Σ_{n∈Z_{N_0}} [Φ − Φ_k, H_n]² = 0.
Therefore there is an integer m > 0 such that
    Σ_{n∈Z_{N_0}} [Φ − Φ_k, H_n]² < ε/2    (4.10)
for k > m. Now the estimates (4.8)–(4.10) imply
    lim_{k→∞} |||Φ − Φ_k||| = 0,
which proves the compactness of the resolvent K_α.
Due to the coercivity condition (3.16), the compactness of the resolvent operator implies the following fact.

Theorem 4.2. The embedding of H_1 into H is compact.

In the Wiener space setting, a direct proof of this important result was given in [10] and [16].
To define the adjoint operator of L, we first introduce the divergence operator D⋆. Let F ∈ C_b¹(H; H) be expanded in terms of the eigenfunctions {e_k} of A:
    F = Σ_{k=1}^∞ f_k e_k,
where f_k = (F, e_k). Then the divergence D⋆F of F is defined as
    D⋆F := Tr(DF) = Σ_{k=1}^∞ (Df_k, e_k).
Recall the covariance operator Γ = −(1/2)A^{−1}R of µ. We shall need the following integration by parts formula.

Lemma 4.3. Suppose that the function F : H → Γ(H) ⊂ H is bounded, continuous and differentiable, such that
    sup_{v∈H} |Tr DF(v)| < ∞    (4.11)
and
    sup_{v∈H} |(Γ^{−1}F(v), v)| < ∞.    (4.12)
Then, for Φ, Ψ ∈ H_1, the following equation holds:
    ∫_H (DΦ, F)Ψ dµ = −∫_H Φ D⋆(ΨF) dµ + ∫_H (Γ^{−1}F(v), v) Φ Ψ dµ.    (4.13)
Proof. The proof is similar to that of Lemma 9.2.3 in [8], so it will only be sketched. Let
    F_n := Σ_{k=1}^n f_k e_k,    (4.14)
with f_k = (F, e_k). Then the sequence {F_n} converges strongly to F in H. In view of (4.14), we have
    ∫_H (DΦ, F_n)Ψ dµ = Σ_{k=1}^n ∫_H (DΦ, e_k) f_k Ψ dµ.    (4.15)
By invoking the first integration-by-parts formula (2.7),
    ∫_H (DΦ, e_k) f_k Ψ dµ = −∫_H Φ (D(Ψf_k), e_k) dµ + ∫_H (v, Γ^{−1}e_k) f_k Φ Ψ dµ,
so that (4.15) yields
    ∫_H (DΦ, F_n)Ψ dµ = −Σ_{k=1}^n ∫_H Φ D⋆(Ψ f_k e_k) dµ + Σ_{k=1}^n ∫_H (v, Γ^{−1}e_k) f_k Φ Ψ dµ
                      = −∫_H Φ D⋆(ΨF_n) dµ + ∫_H (v, Γ^{−1}F_n(v)) Φ Ψ dµ(v).    (4.16)
Now the formula (4.13) follows from (4.16) by taking the limit termwise as n → ∞.
Let Φ, Ψ ∈ E_A(H) and let F and G be given as in Lemma 4.3. Then we can write
    [LΦ, Ψ] = [Φ, L⋆Ψ],
where L⋆ is the formal adjoint of L defined by
    L⋆Ψ := −𝒜Ψ + D⋆(ΨF) − ⟨Γ^{−1}F(v), v⟩Ψ − GΨ.    (4.17)
The associated bilinear form B⋆ : H_1 × H_1 → R is given by
    B⋆(Φ, Ψ) = B(Ψ, Φ),
for all Φ, Ψ ∈ H_1. For Q ∈ H, consider the adjoint problem
    L⋆Ψ = Q.    (4.18)
A function Ψ ∈ H_1 is said to be a strong solution of (4.18) provided that
    B⋆(Ψ, Φ) = [Q, Φ]  for all Φ ∈ H_1.
Now, for U ∈ H, consider the nonhomogeneous problem
    LΦ = U,    (4.19)
and the related homogeneous problems
    LΦ = 0    (4.20)
and
    L⋆Ψ = 0.    (4.21)
Let N and N⋆ denote, respectively, the subspaces of solutions of (4.20) and (4.21) in H_1. Then, by applying the Fredholm theory of compact operators [18], we can prove the following theorem.

Theorem 4.4 (Fredholm alternative). Let L and L⋆ be defined by (3.1) and (4.17), respectively, in which F satisfies the conditions (4.11) and (4.12) of Lemma 4.3.
(1) Exactly one of the following statements is true:
    (a) For each Q ∈ H, the nonhomogeneous problem (4.19) has a unique strong solution.
    (b) The homogeneous problem (4.20) has a nontrivial solution.
(2) If case (b) holds, the null space N is finite-dimensional and its dimension equals that of N⋆.
(3) The nonhomogeneous problem (4.19) has a solution if and only if
    [U, Ψ] = 0  for all Ψ ∈ N⋆.
Proof. To prove the theorem, we convert the differential equations into equivalent Fredholm-type equations involving a compact operator. To proceed, let α be given as in Theorem 4.1 and rewrite equation (4.19) as
    L_αΦ := LΦ + αΦ = αΦ + U.
By Theorem 4.1, the equation (4.19) is equivalent to the equation
    Φ = K_α(αΦ + U),
which can be rewritten as the Fredholm equation
    (I − T)Φ = Q,    (4.22)
where I is the identity operator on H, T = αK_α and Q = K_αU. Since K_α : H → H is compact, T is also compact, and Q belongs to H. By applying the Fredholm alternative theorem [11] to equation (4.22), the equivalent statements (1)–(3) hold for the Fredholm operator (I − T). Due to the equivalence of the problems (4.19) and (4.22), the theorem is thus proved.
5. Spectrum and Eigenvalue Problem

For λ ∈ R, consider strong solutions of the eigenvalue problem:
    LΦ = λΦ.    (5.1)
Here, for simplicity, we treat only the case of real solutions. As usual, a nontrivial solution Φ of (5.1) is called an eigenfunction, and the corresponding real number λ an eigenvalue of L. The (real) spectrum Σ of L consists of all of its eigenvalues.

Theorem 5.1 (Spectral property). The spectrum Σ of L is at most countable. If the set Σ is infinite, then Σ = {λ_k ∈ R : k ≥ 1} with λ_k ≤ λ_{k+1}, each eigenvalue of finite multiplicity, and λ_k → ∞ as k → ∞.

Proof. By taking a real number α, rewrite the equation (5.1) as
    L_αΦ := LΦ + αΦ = (λ + α)Φ.    (5.2)
Taking α > α_0 as in Theorem 4.1, the equation (5.2) can be converted into the eigenvalue problem for the resolvent operator
    K_αΦ = ρΦ,    (5.3)
where
    ρ = 1/(λ + α).    (5.4)
By Theorem 4.1, the resolvent operator K_α on H is compact. Therefore its spectrum Σ_α is discrete. If the spectrum is infinite, then Σ_α = {ρ_k ∈ R : k ≥ 1} with ρ_k ≥ ρ_{k+1}, each of finite multiplicity, and lim_{k→∞} ρ_k = 0. It now follows from equation (5.4) that the spectrum Σ of L has the asserted properties, with λ_k = 1/ρ_k − α.
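The passage between the spectrum of L and that of its compact resolvent is the scalar map ρ = 1/(λ + α) of (5.4). A sketch with toy eigenvalues (illustration only):

```python
import numpy as np

alpha = 2.0                                  # any alpha > alpha_0, cf. Theorem 4.1
lam = np.array([0.5, 1.0, 1.0, 3.0, 7.0])    # toy spectrum of L: nondecreasing, finite multiplicities

rho = 1.0 / (lam + alpha)                    # eigenvalues of K_alpha, cf. (5.4)
assert np.all(np.diff(rho) <= 0)             # rho_k is non-increasing, tending to 0

lam_back = 1.0 / rho - alpha                 # invert: lambda_k = 1/rho_k - alpha
assert np.allclose(lam_back, lam)
print("rho =", np.round(rho, 4))
```

The map is order-reversing, which is why a decreasing null sequence ρ_k for the compact operator corresponds to an increasing unbounded sequence λ_k for L.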
Remark 5.2. As in finite dimensions, the eigenvalue problem (5.1) may be generalized to the case of complex-valued solutions in a complex Hilbert space. In this case the eigenvalues λ_k may be complex.

As a special case, set F ≡ 0 in L; the reduced operator L_0 is given by
    L_0Φ = (−𝒜)Φ + GΦ.
Clearly L_0 is a formally self-adjoint operator, i.e. L_0 = L_0⋆. The corresponding bilinear form is
    B_0(Φ, Ψ) = ⟨L_0Φ, Ψ⟩ = (1/2)[R DΦ, DΨ] + [GΦ, Ψ],    (5.5)
for Φ, Ψ ∈ H_1. Consider the eigenvalue problem:
    L_0Φ = λΦ.    (5.6)
For the special case when G ≡ 0, L_0 = −𝒜. The eigenvalues and eigenfunctions of (−𝒜) were given explicitly in Proposition 2.3. The results show that the eigenvalues can be ordered as a non-decreasing sequence {λ_k}, and the corresponding eigenfunctions are orthonormal Hermite polynomial functionals. With a smooth perturbation by G, similar results hold for the eigenvalue problem (5.6), as stated in the following theorem.

Theorem 5.3 (Symmetric eigenvalue problem). Suppose that G : H → R_+ is a bounded, continuous and positive function, and that there is a constant δ > 0 such that G(v) ≥ δ for all v ∈ H. Then the following statements hold:
(1) Each eigenvalue of L_0 is positive with finite multiplicity. The set Σ_0 of eigenvalues forms a non-decreasing sequence (counting multiplicity)
    0 < λ_1 ≤ λ_2 ≤ ··· ≤ λ_k ≤ ···
such that λ_k → ∞ as k → ∞.
(2) There exists an orthonormal basis {Φ_k : k = 1, 2, ···} of H, where each Φ_k is an eigenfunction, obtained as a strong solution of
    L_0Φ_k = λ_kΦ_k,  for k = 1, 2, ···.
Proof. By the assumption on G, it is easy to check that the bilinear form B_0 : H_1 × H_1 → R satisfies the conditions of Theorem 4.1. Hence the inverse K_0 := L_0^{−1} is a self-adjoint compact operator in H. As in the proof of Theorem 5.1, by converting the problem (5.6) into an equivalent eigenvalue problem for K_0, the statements (1) and (2) follow from the well-known spectral properties of a self-adjoint compact operator in a separable Hilbert space.
As in finite dimensions, the smallest, or principal, eigenvalue λ_1 can be characterized by a variational principle.

Theorem 5.4 (The principal eigenvalue). The principal eigenvalue is given by the variational formula
    λ_1 = inf_{Φ∈H_1, |||Φ|||≠0} B_0(Φ, Φ) / |||Φ|||².    (5.7)
Proof. To verify the variational principle, note that a strong solution Φ of equation (5.6) must satisfy
    B_0(Φ, Φ) = λ[Φ, Φ],
or
    λ = J(Φ) := B_0(Φ, Φ) / |||Φ|||²,    (5.8)
provided that |||Φ||| ≠ 0. In view of the coercivity condition
    B_0(Φ, Φ) ≥ κ|||Φ|||_1²    (5.9)
for some κ > 0, J is bounded from below, so that the infimum of J is given by
    λ⋆ = inf_{Ψ∈H_1, |||Ψ|||≠0} J(Ψ),    (5.10)
which can also be written as
    λ⋆ = inf_{Ψ∈H_1, |||Ψ|||=1} Q(Ψ),    (5.11)
where we set Q(Φ) = B_0(Φ, Φ).
To show that λ⋆ = λ_1 and that the minimizer of Q gives rise to the principal eigenfunction Φ_1, choose a minimizing sequence {Ψ_n} in H_1 with |||Ψ_n||| = 1 such that Q(Ψ_n) → λ⋆ as n → ∞. By (5.9) and the boundedness of the sequence {Ψ_n} in H_1, the compact embedding theorem (Theorem 4.2) implies the existence of a subsequence, denoted by {Ψ_k}, which converges in H to a function Ψ ∈ H_1 with |||Ψ||| = 1. Since Q is a quadratic functional, by the parallelogram law and equation (5.11) we have
    Q((Ψ_j − Ψ_k)/2) = (1/2)(Q(Ψ_j) + Q(Ψ_k)) − Q((Ψ_j + Ψ_k)/2)
                     ≤ (1/2)(Q(Ψ_j) + Q(Ψ_k)) − λ⋆|||(Ψ_j + Ψ_k)/2|||² → 0
as j, k → ∞. Again, by (5.9), we deduce that {Ψ_k} is a Cauchy sequence in H_1 which converges to Ψ ∈ H_1 with Q(Ψ) = λ⋆. Now, for Θ ∈ H_1 and t ∈ R, let
    f(t) = J(Ψ + tΘ).
As is well known in the calculus of variations, for J to attain its minimum at Ψ it is necessary that
    f′(0) = 2{B_0(Ψ, Θ) − λ⋆[Ψ, Θ]} = 0,
which shows that Ψ is a strong solution of
    L_0Ψ = λ⋆Ψ.
Hence, in view of (5.8), we conclude that λ⋆ = λ_1 and Ψ is the eigenfunction associated with the principal eigenvalue.
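In a finite-dimensional truncation, the variational formula (5.7) is the classical Rayleigh-quotient principle and can be checked against an eigensolver. The sketch below uses a random symmetric positive definite matrix as a stand-in for the form B_0 (an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for B_0 on a finite-dimensional subspace: a symmetric positive
# definite matrix B, so B_0(phi, phi) ~ phi^T B phi and |||phi|||^2 ~ phi^T phi.
M = rng.normal(size=(6, 6))
B = M @ M.T + 0.1 * np.eye(6)

eigvals, eigvecs = np.linalg.eigh(B)      # eigh returns eigenvalues in ascending order
lambda_1 = eigvals[0]                     # principal eigenvalue

def rayleigh(phi):
    return (phi @ B @ phi) / (phi @ phi)

# The quotient at the principal eigenvector attains lambda_1 ...
assert np.isclose(rayleigh(eigvecs[:, 0]), lambda_1)
# ... and random trial vectors never go below it, matching the infimum in (5.7).
trials = rng.normal(size=(1000, 6))
assert all(rayleigh(phi) >= lambda_1 - 1e-9 for phi in trials)
print(f"principal eigenvalue ≈ {lambda_1:.4f}")
```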
Acknowledgement. This work was prepared for presentation at the Special Session on Stochastic Analysis in honor of Professor Hui-Hsiung Kuo at the 2012 AMS Annual Meeting in Boston.
References

1. Barhoumi, A., Kuo, H.-H., Ouerdiane, H.: Infinite-dimensional Laplacians on a Lévy–Gel'fand triple; Comm. Stoch. Analy. 1 (2007) 163–174
2. Chow, P. L.: Infinite-dimensional Kolmogorov equations in Gauss-Sobolev spaces; Stoch. Analy. and Applic. 14 (1996) 257–282
3. Chow, P. L.: Infinite-dimensional parabolic equations in Gauss-Sobolev spaces; Comm. Stoch. Analy. 1 (2007) 71–86
4. Chow, P. L.: Singular perturbation and stationary solutions of parabolic equations in Gauss-Sobolev spaces; Comm. Stoch. Analy. 2 (2008) 289–306
5. Chow, P. L., Khasminskii, R. Z.: Stationary solutions of nonlinear stochastic evolution equations; Stoch. Analy. and Applic. 15 (1997) 677–699
6. Daletskii, Yu. L.: Infinite dimensional elliptic operators and parabolic equations connected with them; Russian Math. Surveys 22 (1967) 1–53
7. Da Prato, G.: Some results on elliptic and parabolic equations in Hilbert space; Rend. Mat.
8. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge, UK, 1992
9. Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces. Cambridge University Press, Cambridge, 2002
10. Da Prato, G., Malliavin, P., Nualart, D.: Compact families of Wiener functionals; C. R. Acad. Sci. Paris 315 (1992) 1287–1291
11. Dunford, N., Schwartz, J.: Linear Operators, Part I. Interscience Pub., New York, 1958
12. Gilbarg, D., Trudinger, N. S.: Elliptic Partial Differential Equations of Second Order. Springer, Berlin, 1998
13. Gross, L.: Potential theory in Hilbert spaces; J. Funct. Analy. 1 (1967) 123–181
14. Kuo, H.-H.: Integration by parts for abstract Wiener measures; Duke Math. J. 41 (1974) 373–379
15. Lions, J. L., Magenes, E.: Nonhomogeneous Boundary-value Problems and Applications. Springer-Verlag, New York, 1972
16. Peszat, S.: On a Sobolev space of functions of infinite numbers of variables; Bull. Pol. Acad. Sci. 98 (1993) 55–60
17. Piech, M. A.: The Ornstein-Uhlenbeck semigroup in an infinite dimensional L² setting; J. Funct. Analy. 18 (1975) 271–285
18. Yosida, K.: Functional Analysis. Springer-Verlag, New York, 1967.
P.L. Chow: Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA
Communications on Stochastic Analysis
Vol. 6, No. 3 (2012) 487-511
Serials Publications
www.serialspublications.com
TEMPORAL CORRELATION OF DEFAULTS IN SUBPRIME
SECURITIZATION
ERIC HILLEBRAND, AMBAR N. SENGUPTA, AND JUNYUE XU
ABSTRACT. We examine the subprime market beginning with a subprime mortgage, followed by a portfolio of such mortgages and then a series of such portfolios. We obtain an explicit formula for the relationship between loss distribution and seniority-based interest rates. We establish a link between the dynamics of house price changes and the dynamics of default rates in the Gaussian copula framework by specifying a time series model for a common risk factor. We show analytically and in simulations that serial correlation propagates from the common risk factor to default rates. We simulate prices of mortgage-backed securities using a waterfall structure and find that subsequent vintages of these securities inherit temporal correlation from the common risk factor.
1. Introduction
In this paper we (i) derive closed-form mathematical formulas (4.3) and (4.12) connecting interest rates paid by tranches of Collateralized Debt Obligations (CDOs) and the
corresponding loss distributions, (ii) present a two-step Gaussian copula model (Proposition 6.1) governing correlated CDOs, and (iii) study the behavior of correlated CDOs both
mathematically and through simulations. The context and motivation for this study is the
investigation of mortgage-backed securitized structures built out of subprime mortgages
that were at the center of the crisis that began in 2007. Our investigation demonstrates,
both theoretically and numerically, how serial correlation in the evolution of the common factor, reflecting the general level of home prices, propagates into a correlated accumulation of losses in tranches of securitized structures based on subprime mortgages
of specific vintages. The key feature of these mortgages is the short time horizon to default/prepayment, which makes it possible to model the corresponding residential mortgage-backed securities (RMBS) as forming one-period CDOs. We explain the difference in behavior between RMBS based on subprime mortgages and those based on prime mortgages
in Table 1 and related discussions.
During the subprime crisis, beginning in 2007, subprime mortgages created at different times have defaulted one after another. Figure 1, lower panel, shows the time series
of serious delinquency rates of subprime mortgages from 2002 to 2009. (By definition
of the Mortgage Bankers Association, seriously delinquent mortgages refer to mortgages
that have either been delinquent for more than 90 days or are in the process of foreclosure.) Defaults of subprime mortgages are closely connected to house price fluctuations,
as suggested, among others, by [26] (see also [4, 16, 29].) Most subprime mortgages
Received 2012-5-22; Communicated by the editors.
2000 Mathematics Subject Classification. 62M10; 91G40.
Key words and phrases. Mortgage-backed securities, CDO, vintage correlation, Gaussian copula.
[FIGURE 1. Two-Year Changes in U.S. House Price and Subprime ARM Serious Delinquency Rates. Upper panel: US Home Price Index Changes (two-year rolling window). Lower panel: US Subprime Adjustable-Rate-Mortgage Serious Delinquency Rates (%). Both panels span 2002–2009.]
"U.S. home price two-year rolling changes" are two-year overlapping changes in the S&P Case-Shiller U.S. National Home Price index. "Subprime ARM Serious Delinquency Rates" are obtained from the Mortgage Bankers Association. Both series cover the first quarter of 2002 to the second quarter of 2009.
are Adjustable-Rate Mortgages (ARM). This means that the interest rate on a subprime mortgage is fixed at a relatively low level for a "teaser" period, usually two to three years, after which it increases substantially. Gorton [26] points out that the interest rate usually resets to such a high level that it "essentially forces" a mortgage borrower to refinance or default after the teaser period. Therefore, whether the mortgage defaults or not is largely determined by the borrower's access to refinancing. At the end of the teaser period, if the value of the house is much greater than the outstanding principal of the loan, the borrower is likely to be approved for a new loan since the house serves as collateral. On the other hand, if the value of the house is less than the outstanding principal of the loan, the borrower is unlikely to be able to refinance and has to default.
We analyze how the dynamics of housing prices propagate, through the dynamics of
defaults, to the dynamics of tranche losses in securitized structures based on subprime
mortgages. To this end, we introduce the notion of vintage correlation, which captures
the correlation of default rates in mortgage pools issued at different times. Under certain assumptions, vintage correlation is the same as serial correlation. After showing that
changes in a housing index can be regarded as a common risk factor of individual subprime mortgages, we specify a time series model for the common risk factor in the Gaussian copula framework. We show analytically and in simulations that the serial correlation
of the common risk factor introduces vintage correlation into default rates of pools of
subprime mortgages of subsequent vintages. In this sense, serial correlation propagates
from the common risk factor to default rates. In simulations of the price behavior of
Mortgage-Backed Securities (MBS) over different cohorts, we find that the price of MBS
also exhibits vintage correlation, which is inherited from the common risk factor.
One of our objectives in this paper is to provide a formal examination of one of the important causes of the current crisis. (For different perspectives on the causes and effects of the subprime crisis, see also [12, 15, 20, 27, 39, 42, 43].) Vintage correlation in default rates and MBS prices also has implications for asset pricing. To price some derivatives, for example forward-starting CDOs, it is necessary to predict default rates of credit assets created at some future time. Knowing the serial correlation of default probabilities can improve the quality of such predictions. For risk management in general, some credit asset portfolios may consist of credit derivatives of different cohorts. Vintage correlation of credit asset performance affects these portfolios' risks. For instance, suppose there is a portfolio consisting of two subsequent vintages of the same MBS. If the vintage correlation of the MBS price is close to one, the payoff of the portfolio has a variance almost twice as big as if there were no vintage correlation.
2. The Subprime Structure
In a typical subprime mortgage, the loan is amortized over a long period, usually 30
years, but at the end of the first two (or three) years the interest rate is reset to a significantly higher level; a substantial prepayment fee is charged at this time if the loan is paid
off. The aim is to force the borrower to repay the loan (and obtain a new one), and the
prepayment fee essentially represents extraction of equity from the property, assuming
the property has increased in value. If there is sufficient appreciation in the price of the
property then both lender and borrower win. However, if the property value decreases
then the borrower is likely to default.
Let us make a simple and idealized model of the subprime mortgage cashflow. Let
P0 = 1 be the price of the property at time 0, when a loan of the same amount is taken
to purchase the property (or against the property as collateral). At time T the price of
the property is PT , and the loan is terminated, resulting either in a prepayment fee k
plus outstanding loan amount or default, in which case the lender recovers an amount R.
For simplicity of analysis at this stage we assume 0 interest rate up to time T ; we can
view the interest payments as being built into k or R, ignoring, as a first approximation,
defaults prior to time T (for more on early defaults see [7]). The borrower refinances if
P_T is above a threshold P_* (say, the present value of future payments on a new loan) and
defaults otherwise. Thus the net cashflow to the lender is

    k·1_{[P_T > P_*]} − (1 − R)·1_{[P_T ≤ P_*]},        (2.1)
with all payments and values normalized to time-0 money. The expected earning is therefore

    (k + 1 − R)·P(P_T > P_*) − (1 − R),

for the probability measure P being used. We will not need this expected value but observe
simply that a default occurs when P_T < P_*, and so, if log P_T is Gaussian then default
occurs for a particular mortgage if a suitable standard Gaussian variable takes a value
below some threshold.
E. HILLEBRAND, A. N. SENGUPTA, AND J. XU
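The one-period cashflow (2.1) is easy to explore numerically. The sketch below estimates P(P_T > P_*) by Monte Carlo for a lognormal P_T and plugs it into the expected-earning formula; the drift, volatility, fee k, recovery R, and threshold P_* used here are hypothetical, illustrative values, not taken from the paper.

```python
import math
import random

def expected_earning(k, recovery, p_refinance):
    """Lender's expected earning (k + 1 - R) P(P_T > P_*) - (1 - R) from eq. (2.1)."""
    return (k + 1.0 - recovery) * p_refinance - (1.0 - recovery)

def refinance_probability(mu, sigma, p_star, n_paths=100_000, seed=7):
    """Monte Carlo estimate of P(P_T > P_*) when log P_T ~ N(mu, sigma^2)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_paths) if math.exp(rng.gauss(mu, sigma)) > p_star)
    return hits / n_paths

# Hypothetical parameters: 5% drift and 10% volatility of log P_T over the
# teaser period, threshold P_* = 1 (the time-0 price), fee k = 0.05, recovery R = 0.6.
p = refinance_probability(mu=0.05, sigma=0.10, p_star=1.0)
earning = expected_earning(k=0.05, recovery=0.6, p_refinance=p)
```

Note that the earning is k when refinancing is certain and −(1 − R) when default is certain, matching the two indicator terms of (2.1).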
It is clear that nothing like the above model would apply to prime mortgages. The
main risk (for the lender) associated to a long-term prime mortgage is that of prepayment,
though, of course, default risk is also present. A random prepayment time embedded
into the amortization schedule makes it a different problem to value a prime mortgage.
In contrast, for the subprime mortgage the lender is relying on the prepayment fee and
even the borrower hopes to extract equity on the property through refinancing under the
assumption that the property value goes up in the time span [0, T ]. (The prepayment
fee feature has been controversial; see, for example, [14, page 50-51].) We refer to the
studies [7, 14, 26] for details on the economic background, evolution and ramifications of
the subprime mortgage market, which went through a major expansion in the mid 1990s.
3. Portfolio Default Model
Securitization makes it possible to have a much larger pool of potential investors in
a given market. For mortgages the securitization structure has two sides: (i) assets are
mortgages; (ii) liabilities are debts tranched into seniority levels. In this section we briefly
examine the default behavior in a portfolio of subprime mortgages (or any assets that have
default risk at the end of a given time period). In Section 4 we will examine a model
structure for distributing the losses resulting from defaults across tranches.
For our purposes consider N subprime mortgages, issued at time 0 and (p)repaid or
defaulting at time T , each of amount 1. In this section, for the sake of a qualitative
understanding we assume 0 recovery, and that a default translates into a loss of 1 unit (we
neglect interest rates, which can be built in for a more quantitatively accurate analysis).
Current models of home price indices go back to the work of Wyngarden [49], where
indices were constructed by using prices from repeated sales of the same property at
different times (from which property price changes were calculated). Bailey et al. [3]
examined repeated sales data and developed a regression-based method for constructing
an index of home prices. This was further refined by Case and Shiller [13] into a form that,
in extensions and reformulations, has become an industry-wide standard. The method in
[13] is based on the following model for the price P_{iT} of house i at time T:

    log P_{iT} = C_T + H_{iT} + N_{iT},        (3.1)

where C_T is the log-price at time T across a region (city, in their formulation), H_{iT} is
a mean-zero Gaussian random walk (with the same variance for all i), and N_{iT} is a house-specific random error of zero mean and constant variance (not dependent on i). The three
terms on the right in equation (3.1) are independent, and (N_{iT}) is a sale-specific fluctuation
that is serially uncorrelated; a variety of correlation structures could be introduced in
modifications of the model. We will return to this later in equation (5.7) (with slightly
different notation), where we will consider different values of T. For now we focus on a
portfolio of N subprime mortgages i ∈ {1, . . . , N} with a fixed value of T.
Let X_i be the random variable

    X_i = (log P_{iT} − m_i) / s_i,        (3.2)

where m_i is the mean and s_i is the standard deviation of log P_{iT} with respect to some
probability measure of interest (for example, the market pricing risk-neutral measure).
Keeping in mind (3.1) we assume that

    X_i = √ρ Z + √(1 − ρ) ε_i        (3.3)

for some ρ > 0, where (Z, ε_1, . . . , ε_N) is a standard Gaussian in R^{N+1} with independent
components. Mortgage i defaults when X_i crosses below a threshold X_*, so that the
assumed common default probability for the mortgages is

    P[X_i < X_*] = E[1_{[X_i < X_*]}].        (3.4)
The total number of defaults, or portfolio loss (with our assumptions), is

    L = ∑_{j=1}^{N} 1_{[X_j < X_*]}.        (3.5)

The cash inflow at time T is the random variable

    S(T) = ∑_{j=1}^{N} 1_{[X_j ≥ X_*]}.        (3.6)
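A minimal simulation of the one-factor model (3.3) and the loss (3.5) illustrates how the common factor Z drives the default count. The parameters below (100 loans, ρ = 0.3, a 10% marginal default probability) are hypothetical, for illustration only.

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_losses(n_mortgages, rho, x_star, n_pools=2000, seed=42):
    """Portfolio loss L = sum_j 1[X_j < X_*] of eq. (3.5), with
    X_i = sqrt(rho) Z + sqrt(1 - rho) eps_i as in eq. (3.3)."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_pools):
        z = rng.gauss(0.0, 1.0)  # common factor Z, one draw per pool
        losses.append(sum(
            1 for _ in range(n_mortgages)
            if math.sqrt(rho) * z + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0) < x_star))
    return losses

# Hypothetical parameters: 100 loans, rho = 0.3, threshold X_* = Phi^{-1}(0.10),
# so each loan defaults marginally with probability 10%.
x_star = -1.2815515655446004
losses = simulate_losses(100, 0.3, x_star)
mean_loss = sum(losses) / len(losses)
```

The mean loss is close to N·Φ(X_*) = 10, while the common factor fattens the tails of the loss distribution relative to the independent case.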
Pooling of investment funds and lending them for property mortgages is natural and
has long been in practice (see Bogue [9, page 73]). In the modern era Ginnie Mae issued
the first MBS in 1970 in "pass through" form which did not protect against prepayment
risk. In 1983 Freddie Mac issued Collateralized Mortgage Obligations (CMOs) that had
a waterfall-like structure and seniority classes with different maturities. The literature on
securitization is vast (see, for instance, [19, 36, 41]).
4. Tranche Securitization: Loss Distribution and Tranche Rates
In this section we derive a relation between the loss distribution in a cashflow CDO and
the interest rates paid by the tranches. We make the simplifying assumption that all losses
and payments occur at the end of one period. This assumption is not unreasonable for
subprime mortgages that have a short interest-rate reset period, which we take effectively
as the lifetime of the mortgage (at the end of which it either pays back in full with interest
or defaults). We refer to the constituents of the portfolio as "loans", though they could
be other instruments. Figure 2 illustrates the structure of the portfolio and cashflows. As
pointed out by [10, page xvii] there is "very little research or literature" available on cash
CDOs; the complex waterfall structures that govern cashflows of such CDOs are difficult
to model in a mathematically sound way. For technical descriptions of cashflow waterfall
structures, see [23, Chapter 14].
Consider a portfolio of N loans, each with face value of one unit. Let S(T ) be the cash
inflow from the investments made by the portfolio at time T , the end of the investment
period. Next consider investors named 1, 2, . . . , M , with investor j investing amount Ij .
The most senior investor, labeled 1, receives an interest rate r1 (return per unit investment
over the full investment period) if at all possible; this investorвЂ™s cash inflow at time T is
    Y_1(T) = min{ S(T), (1 + r_1) I_1 }.        (4.1)

Proceeding in this way, investor j has payoff
[Figure 2. Illustration of schematic structure of MBS: a pool of 100 mortgages (a typical mortgage: principal $1, annual interest rate 9%, maturity 15 years) backing an MBS with tranches Senior (70% of capital, 6% interest rate), Mezzanine (25%, 15%), Subordinate (4%, 20%), and Equity (1%, N/A).]
    Y_j(T) = min{ S(T) − ∑_{1 ≤ i < j} Y_i(T),  (1 + r_j) I_j }.        (4.2)
Using the market pricing measure (risk-neutral measure) Q we should have

    E_Q[Y_j(T)] = (1 + R_0) I_j,        (4.3)

where R_0 is the risk-free interest rate for the period of investment.
Given a model for S(T), the rates r_j can be worked out, in principle, recursively from
equation (4.2) as follows. Using the distribution of S(T) we can back out the value of the
supersenior rate r_1 from

    E_Q[min{ S(T), (1 + r_1) I_1 }] = E_Q[Y_1(T)] = (1 + R_0) I_1.        (4.4)

Now we use this value of r_1 in the equation for Y_2(T):

    E_Q[min{ S(T) − Y_1(T), (1 + r_2) I_2 }] = E_Q[Y_2(T)] = (1 + R_0) I_2,        (4.5)

and (numerically) invert this to obtain the value of r_2 implied by the market model. Note
that in equation (4.5) the random variable Y_1(T) on the left is given by equation (4.1)
using the already computed value of r_1. Proceeding in this way yields the full spectrum
of tranche rates r_j.
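The recursion (4.4)-(4.5) can be sketched as follows: given Monte Carlo samples of S(T), each rate r_j is backed out by a one-dimensional root search (bisection here), and the paid amounts are subtracted before moving down the capital structure. A minimal sketch, not the paper's implementation:

```python
def tranche_rates(s_samples, tranche_sizes, r0):
    """Back out tranche rates recursively from eqs. (4.2)-(4.5): solve
    E_Q[min(S(T) - sum_{i<j} Y_i(T), (1 + r_j) I_j)] = (1 + R0) I_j
    for each tranche in order of seniority, given samples of S(T)."""
    n = len(s_samples)
    residual = list(s_samples)   # cash left after paying the more senior tranches
    rates = []
    for size in tranche_sizes:
        def gap(r):
            # E[min(residual, (1+r) I_j)] - (1 + R0) I_j; nondecreasing in r
            payoff = sum(min(s, (1.0 + r) * size) for s in residual) / n
            return payoff - (1.0 + r0) * size
        lo, hi = 0.0, 10.0       # bisection bracket for the promised rate
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if gap(mid) < 0.0:
                lo = mid
            else:
                hi = mid
        r = 0.5 * (lo + hi)
        rates.append(r)
        residual = [s - min(s, (1.0 + r) * size) for s in residual]
    return rates
```

As a sanity check, for a deterministic cash inflow S(T) = 1.2 and two tranches of size 0.5 with R_0 = 0.02, both tranches are fully covered and each rate comes out at the risk-free 2%.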
Now we turn to a continuum model for tranches, again with one time period. Consider
an idealized securitization structure ABS. Investors are subordinatized by a seniority parameter y ∈ [0, 1]. An investor in a thin "tranchelet" [y, y + δy] invests the amount δy and
is promised an interest rate of r(y) (return on unit investment for the entire investment
period) if there is no default. In this section we consider only one time period, at the end
of which the investment vehicle closes.
Thus, if there is sufficient return on the investment made by the ABS, a tranche [a, b] ⊂
[0, 1] will be returned the amount

    ∫_a^b (1 + r(y)) dy.

In particular, assuming that the total initial investment in the portfolio is normalized to
one, the maximum promised possible return to all the investors is ∫_0^1 (1 + r(y)) dy. The
portfolio loss is

    L = ∫_0^1 (1 + r(y)) dy − S(T),        (4.6)
where S(T ) is the total cash inflow, all assumed to occur at time T , from investments
made by the ABS. Note that L is a random variable, since S(T ) is random.
Consider a thin tranche [y, y + δy]. If S(T) is greater than the maximum amount
promised to investors in the tranche [y, 1], that is if

    S(T) > ∫_y^1 (1 + r(s)) ds,        (4.7)

then the tranche [y, y + δy] receives its maximum promised amount (1 + r(y)) δy. (If
S(T) is insufficient to cover the more senior investors, the tranchelet [y, y + δy] receives
nothing.) The condition (4.7) is equivalent to

    L < ∫_0^y (1 + r(s)) ds,        (4.8)

as can be seen from the relation (4.6). Thus, the thin tranche receives the amount

    1_{[L < ∫_0^y (1 + r(s)) ds]} (1 + r(y)) δy.
Using the risk-neutral probability measure Q, we have then

    Q[ L < ∫_0^y (1 + r(s)) ds ] (1 + r(y)) δy = (1 + R_0) δy,        (4.9)

where R_0 is the risk-free interest rate for the period of investment. Thus,

    (1 + r(y)) F_L( ∫_0^y (1 + r(s)) ds ) = 1 + R_0,        (4.10)

where F_L is the distribution function of the loss L with respect to the measure Q.
Let λ(·) be the function given by

    λ(y) = ∫_0^y (1 + r(s)) ds,        (4.11)

which is strictly increasing as a function of y, with slope > 1 (assuming the rates r(·) are
positive). Hence λ(·) is invertible. Then the loss distribution function is obtained as

    F_L(l) = (1 + R_0) / (1 + r(λ^{-1}(l))).        (4.12)

If r(y) are the market rates then the market-implied loss distribution function F_L is given
by (4.12). On the other hand, if we have a prior model for the loss distribution F_L then
the implied rates r(y) can be computed numerically using (4.12).
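Equation (4.12) turns a rate curve r(·) into an implied loss distribution once λ of (4.11) is inverted. A sketch, assuming a positive rate function and inverting λ numerically on a grid:

```python
import bisect

def loss_cdf_from_rates(rate_fn, r0, l, n_grid=10_000):
    """Market-implied loss distribution F_L(l) = (1 + R0)/(1 + r(lambda^{-1}(l)))
    of eq. (4.12), with lambda(y) = int_0^y (1 + r(s)) ds from eq. (4.11)
    tabulated by the trapezoidal rule and inverted by interpolation."""
    ys = [i / n_grid for i in range(n_grid + 1)]
    lam = [0.0]
    for i in range(1, n_grid + 1):
        step = ys[i] - ys[i - 1]
        lam.append(lam[-1]
                   + 0.5 * ((1.0 + rate_fn(ys[i - 1])) + (1.0 + rate_fn(ys[i]))) * step)
    if l <= 0.0:
        return (1.0 + r0) / (1.0 + rate_fn(0.0))
    if l >= lam[-1]:
        return 1.0          # beyond the maximum promised amount lambda(1)
    i = bisect.bisect_right(lam, l)
    t = (l - lam[i - 1]) / (lam[i] - lam[i - 1])
    y = ys[i - 1] + t * (ys[i] - ys[i - 1])
    return (1.0 + r0) / (1.0 + rate_fn(y))
```

For instance, with a flat rate curve r(y) ≡ 0.10 and R_0 = 0.02, the formula gives F_L(l) = 1.02/1.10 throughout [0, λ(1)).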
A real tranche is a "thick" segment [a, b] ⊂ [0, 1] and offers investors some rate r_{[a,b]}.
This rate could be viewed as obtained from the balance equation

    (1 + r_{[a,b]})(b − a) = ∫_a^b (1 + r(y)) dy,

which means that the tranche rate is the average of the rates over the tranche:

    r_{[a,b]} = (1/(b − a)) ∫_a^b r(y) dy.        (4.13)
5. Modeling Temporal Correlation in Subprime Securitization
We turn now to the study of a portfolio consisting of several CDOs (each homogeneous) belonging to different vintages. We model the loss by a "multi-stage" copula,
one operating within each CDO and the other across the different CDOs. The motivation comes from the subprime context. Each CDO is comprised of subprime mortgages
of a certain vintage, all with a common default/no-default decision horizon (typically two
years). It is important to note that we do not compare losses at different times for the same
CDO; we thus avoid problems in using a copula model across different time horizons.
Definition 5.1 (Vintage Correlation). Suppose we have a pool of mortgages created
at each time v = 1, 2, . . . , V. Denote the default rates of each vintage observed at
a fixed time T > V as p_1, p_2, . . . , p_V, respectively. We define the vintage correlation
φ_j := Corr(p_1, p_j) for j = 2, 3, . . . , V as the default correlation between the j-th
vintage and the first vintage.
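Definition 5.1 can be sketched as a small estimator, with the expectations in Corr(p_1, p_j) taken as cross-sectional averages (over producers in the wine example below, over pools for mortgages). The input layout, a matrix of default rates indexed by pool and vintage, is an assumption for illustration:

```python
def vintage_correlation(default_rates):
    """Vintage correlations phi_j = Corr(p_1, p_j), j = 2, ..., V (Definition 5.1),
    estimated from default_rates[pool][vintage], with expectations taken as
    averages over the cross-section of pools."""
    n_vintages = len(default_rates[0])
    cols = [[row[j] for row in default_rates] for j in range(n_vintages)]

    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    return [corr(cols[0], cols[j]) for j in range(1, n_vintages)]
```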
As an example of vintage correlation, consider wines of different vintages. Suppose
there are several wine producers that have produced wines of ten vintages from 2011
to 2020. The wines are packaged according to vintages and producers, that is, one box
contains one vintage by one producer. In the year 2022, all boxes are opened and the percentage of wines that have gone bad is obtained for each box. Consider the correlation of
fractions of bad wines between the first vintage and subsequent vintages. This correlation
is what we call vintage correlation.
The definition of vintage correlation can be extended easily to the case where the base
vintage is not the first vintage but any one of the other vintages. Obviously, vintage correlation is very similar to serial correlation. There are two main differences. First, the
consideration is at a specific time in the future. Second, in calculating the correlation
between any two vintages, the expected values are averages over the cross-section. That
is, in the wine example, expected values are averages over producers. In mortgage pools,
they are averages over different mortgage pools. Only if we assume the same stochastic
structure for the cross-section and for the time series of default rates are vintage correlation
and serial correlation equivalent. We do not have to make this assumption to obtain
our main results. Making this assumption, however, does not invalidate any of the results either. Therefore, we use the terms "vintage correlation" and "serial correlation"
interchangeably in our paper.
To model vintage correlation in subprime securitization, we use the Gaussian copula
approach of Li [34], widely used in industry to model default correlation across names.
The literature on credit risk pricing with copulas and other models has grown substantially
in recent years and an exhaustive review is beyond the scope of this paper; monographs include [8, 18, 32, 40, 44]. Other works include, for example, [1, 2, 5, 6, 10, 17, 22, 24, 30,
33, 35, 45, 46]. There are approaches to model default correlation other than default-time
copulas. One method relies on the so-called structural model, which goes back to Merton's (1974) work on pricing corporate debt. An essential point of the structural model
is that it links the default event to some observable economic variables. The paper [31]
extends the model to a multi-issuer scenario, which can be applied to price corporate debt
CDO. It is assumed that a firm defaults if its credit index hits a certain barrier. Therefore,
correlation between credit indices determines the correlation of default events. The advantage of a structural model is that it gives economic meaning to underlying variables.
Other approaches to CDO pricing are found, for example, in [28] and in [47]. The work
[11] provides a comparison of common CDO pricing models.
In our model each mortgage i of vintage v has a default time τ_{v,i}, which is a random
variable representing the time at which the mortgage defaults. If the mortgage never
defaults, this value is infinity. If we assume that the distribution of τ_{v,i} is the same across
all mortgages of vintage v, we have

    F_v(s) = P[τ_{v,i} < s],  ∀ i = 1, 2, . . . , N,        (5.1)

where the index i denotes individual mortgages and the index v denotes vintages. We
assume that F_v is continuous and strictly increasing. Given this information, for each
vintage v the Gaussian copula approach provides a way to obtain the joint distribution of
the τ_{v,i} across i. Generally, a copula is a joint distribution function

    C(u_1, u_2, . . . , u_N) = P(U_1 ≤ u_1, U_2 ≤ u_2, . . . , U_N ≤ u_N),

where U_1, U_2, . . . , U_N are N uniformly distributed random variables that may be correlated.
It can be easily verified that the function

    C[F_1(x_1), F_2(x_2), . . . , F_N(x_N)] = G(x_1, x_2, . . . , x_N)        (5.2)

is a multivariate distribution function with marginals given by the distribution functions
F_1(x_1), F_2(x_2), . . . , F_N(x_N). Sklar [48] proved the converse, showing that for an arbitrary multivariate distribution function G(x_1, x_2, . . . , x_N) with continuous marginal distribution functions F_1(x_1), F_2(x_2), . . . , F_N(x_N), there exists a unique C such that equation
(5.2) holds. Therefore, in the case of default times, there is a C_v for each vintage v such
that

    C_v[F_v(t_1), F_v(t_2), . . . , F_v(t_N)] = G_v(t_1, t_2, . . . , t_N),        (5.3)
where G_v on the right is the joint distribution function of (τ_{v,1}, . . . , τ_{v,N}). Since we
assume F_v to be continuous and strictly increasing, we can find a standard Gaussian
random variable X_{v,i} such that

    Φ(X_{v,i}) = F_v(τ_{v,i}),  ∀ v = 1, 2, . . . , V;  i = 1, 2, . . . , N,        (5.4)

or equivalently,

    τ_{v,i} = F_v^{-1}(Φ(X_{v,i})),  ∀ v = 1, 2, . . . , V;  i = 1, 2, . . . , N,        (5.5)

where Φ is the standard normal distribution function. To see that this is correct, observe
that

    P[τ_{v,i} ≤ s] = P[Φ(X_{v,i}) ≤ F_v(s)] = P[X_{v,i} ≤ Φ^{-1}(F_v(s))] = Φ(Φ^{-1}(F_v(s))) = F_v(s).
The Gaussian copula approach assumes that the joint distribution of (X_{v,1}, . . . , X_{v,N}) is
a multivariate normal distribution function Φ_N. Thus the joint distribution function of
default times τ_{v,i} is obtained once the correlation matrix of the X_{v,i} is known. A standard
simplification in practice is to assume that the pairwise correlations between different X_{v,i}
are the same across i. Suppose that the value of this correlation is ρ_v for each vintage v.
Consider the following definition:

    X_{v,i} := √ρ_v Z_v + √(1 − ρ_v) ε_{v,i},  ∀ i = 1, 2, . . . , N;  v = 1, 2, . . . , V,        (5.6)

where the ε_{v,i} are i.i.d. standard Gaussian random variables and Z_v is a standard Gaussian
random variable independent of the ε_{v,i}. It can be shown easily that in each vintage v, the
variables X_{v,i} defined in this way have exactly the joint distribution function Φ_N.
Using the information above, for each vintage v, the Gaussian copula approach obtains
the joint distribution function G_v for default times as follows. First, N Gaussian random
variables X_{v,i} are generated according to equation (5.6). Second, from equation (5.5) a set
of N default times τ_{v,i} is obtained, which has the desired joint distribution function G_v.
In equation (5.6), the common factor Z_v can be viewed as a latent variable that captures
the default risk in the economy, and ε_{v,i} is the idiosyncratic risk for each mortgage. The
variable X_{v,i} can be viewed as a state variable for each mortgage. The parameter ρ_v is
the correlation between any two individual state variables. It is obvious that the higher the
value of ρ_v, the greater the correlation between the default times of different mortgages.
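The two-step recipe above, drawing X_{v,i} from (5.6) and mapping through (5.5), can be sketched for a single vintage. The exponential marginal F(t) = 1 − exp(−t/120) used here is a hypothetical choice:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_default_times(n, rho, inv_marginal, rng):
    """One vintage of the Gaussian copula: draw X_i = sqrt(rho) Z + sqrt(1-rho) eps_i
    as in eq. (5.6), then map to default times tau_i = F^{-1}(Phi(X_i)) via eq. (5.5);
    inv_marginal is F^{-1} for the marginal default-time distribution F."""
    z = rng.gauss(0.0, 1.0)                      # common factor for the vintage
    return [inv_marginal(normal_cdf(math.sqrt(rho) * z
                                    + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)))
            for _ in range(n)]

# Hypothetical exponential marginal F(t) = 1 - exp(-t/120), t in months.
inv_exp = lambda u: -120.0 * math.log(1.0 - u)
rng = random.Random(1)
taus = sample_default_times(1000, 0.4, inv_exp, rng)
```

Each τ_i keeps the marginal F, while default times within a vintage are correlated through the single draw of Z.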
Assume that we have a pool of N mortgages i = 1, . . . , N for each vintage v =
1, . . . , V. Each individual mortgage within a pool has the same initiation date v and
interest adjustment date v′ > v. Let Y_{v,i} be the change in the logarithm of the price P_{v,i}
of borrower i's (of vintage v) house during the teaser period [v, v′]. From equation (3.1),
we can deduce that

    Y_{v,i} := log P_{v′,i} − log P_{v,i} = ΔC_v + e_{v,i},        (5.7)

where ΔC_v := log C_{v′} − log C_v is the change in the logarithm of a housing market index
C_v, and the e_{v,i} are i.i.d. normal random variables for all i = 1, 2, . . . , N, and v = 1, 2, . . . , V.
As outlined in the introduction, default rates of subprime ARMs depend on house price
changes during the teaser period. If the house price fails to increase substantially or even
declines, the mortgage borrower cannot refinance, absent other substantial improvements
in income or asset position, and has to default shortly after the interest rate is reset to
a high level. We assume that the default, if it happens, occurs at time v′. Therefore, we
assume that a mortgage defaults if and only if Y_{v,i} < Y^*, where Y^* is a predetermined
threshold.
We can now give a structural interpretation of the common risk factor Z_v in the Gaussian copula framework. Define

    Z_v := ΔC_v / σ_{ΔC},        (5.8)

where σ_{ΔC} is the unconditional standard deviation of ΔC_v. Then we have

    Y_{v,i} = Z_v σ_{ΔC} + e_{v,i}.
Further standardizing Y_{v,i}, we have

    X_{v,i} := Y_{v,i}/σ_Y = (Z_v σ_{ΔC} + e_{v,i})/σ_Y
             = (σ_{ΔC}/√(σ²_{ΔC} + σ²_e)) Z_v + (σ_e/√(σ²_{ΔC} + σ²_e)) ε_{v,i},

where σ_e is the standard deviation of e_{v,i}, and ε_{v,i} := e_{v,i}/σ_e. The third equality follows
from the fact that

    σ²_Y = σ²_{ΔC} + σ²_e.

Define

    ρ := σ²_{ΔC} / (σ²_{ΔC} + σ²_e).

Then

    X_{v,i} = √ρ Z_v + √(1 − ρ) ε_{v,i},  ∀ i = 1, 2, . . . , N;  v = 1, 2, . . . , V.        (5.9)

Note that equation (5.9) has exactly the same form as equation (5.6). The default event is
defined as X_{v,i} < X^*, where

    X^* := Y^* / √(σ²_{ΔC} + σ²_e).

Let

    τ_{v,i} := F_v^{-1}(Φ(X_{v,i}))  and  τ_v^* := F_v^{-1}(Φ(X^*));

then the default event can be defined equivalently as τ_{v,i} ≤ τ_v^*. The comparison between
equation (5.9) and (5.6) shows that the common risk factor Z_v in the Gaussian copula
model for subprime mortgages can be interpreted as a standardized change in a house
price index. This is consistent with our remarks in the context of (3.2) that the Case-Shiller
model provides a direct justification for using the Gaussian copula, with the common
risk factor being the housing price index.
In light of this structural interpretation, the common risk factor Z_v is very likely to
be serially correlated across subsequent vintages. More specifically, we find that Z_v is
proportional to a moving average of monthly log changes in a housing price index. To see
this, let v be the time of origination and v′ be the end of the teaser period. Then,

    ΔC_v = ∫_v^{v′} d log I_τ,

where I is the house price index. For example, if we measure house price index changes
quarterly, as in the case of the Case-Shiller housing index, we have

    ΔC_v = ∑_{τ ∈ [v, v′]} (log I_τ − log I_{τ−1}),        (5.10)

where the unit of τ is a quarter. If we model this index by some random shock arriving
each quarter, equation (5.10) is a moving average process. Therefore, from equation (5.8)
we know that Z_v has positive serial correlation. Figure 1 shows that the time series of
Case-Shiller index changes exhibits strong autocorrelation, and is possibly integrated of
order one.
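The moving-average structure behind (5.10) is easy to check in simulation: if ΔC_v is a moving sum of i.i.d. quarterly shocks over a w-quarter teaser window, then Corr(ΔC_v, ΔC_{v+k}) = (w − k)/w for lags k < w and zero beyond. A sketch with a hypothetical 8-quarter window:

```python
import random

def common_factor_series(n_vintages, window, seed=3):
    """Delta C_v as the moving sum (5.10) of i.i.d. quarterly index shocks over a
    teaser window of `window` quarters; overlapping windows make the standardized
    factor Z_v = Delta C_v / sigma_DC a serially correlated moving-average process."""
    rng = random.Random(seed)
    shocks = [rng.gauss(0.0, 1.0) for _ in range(n_vintages + window)]
    return [sum(shocks[v:v + window]) for v in range(n_vintages)]

def autocorr(xs, lag):
    """Sample autocorrelation at the given lag."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# For a moving sum over 8 quarters, theory gives lag-1 autocorrelation 7/8.
series = common_factor_series(5000, window=8)
```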
6. The Main Theorems: Vintage Correlation in Default Rates
Since the common risk factor is likely to be serially correlated, we examine the implications for the stochastic properties of mortgage default rates. We specify a time series
model for the common risk factor in the Gaussian copula and determine the relationship
between the serial correlation of the default rates and that of the common risk factor.
Proposition 6.1 (Default Probabilities and Numbers of Defaults). Let k = 1, 2, . . . , N,

    X_k = √ρ Z + √(1 − ρ) ε_k  and  X′_k = √ρ′ Z′ + √(1 − ρ′) ε′_k,        (6.1)

with

    Z′ = φZ + √(1 − φ²) u,        (6.2)

where ρ, ρ′ ∈ (0, 1), φ ∈ (−1, 1), and Z, ε_1, . . . , ε_N, ε′_1, . . . , ε′_N, u are mutually independent standard Gaussians. Consider next the number of X_k that fall below some threshold
X_*, and the number of X′_k below X′_*:

    A = ∑_{k=1}^{N} 1_{[X_k ≤ X_*]}  and  A′ = ∑_{k=1}^{N} 1_{[X′_k ≤ X′_*]},        (6.3)

where X_* and X′_* are constants. Then

    Cov(A, A′) = N² Cov(p, p′),        (6.4)

where

    p = p(Z) := P[X_k ≤ X_* | Z] = Φ( (X_* − √ρ Z)/√(1 − ρ) )  and  p′ = P[X′_k ≤ X′_* | Z′] = p′(Z′).        (6.5)

Moreover, the correlation between A and A′ equals the correlation between p and p′, in
the limit as N → ∞.
Proof. We first show that

    E[AA′] = E[ E[A | Z] E[A′ | Z′] ].        (6.6)

Note that A is a function of Z and ε = (ε_1, . . . , ε_N), and A′ is a function (indeed, the
same function as it happens) of Z′ and ε′ = (ε′_1, . . . , ε′_N). Now for any non-negative
bounded Borel functions f and g on R, and any non-negative bounded Borel functions F
and G on R × R^N, we have, on using self-evident notation,

    E[f(Z) g(Z′) F(Z, ε) G(Z′, ε′)]
      = ∫ f(z) g(φz + √(1 − φ²) x) F(z, y_1, . . . , y_N) G(z′, y′_1, . . . , y′_N) dΦ(z, x, y, y′)
      = ∫ f(z) g(z′) ( ∫ F(z, y) dΦ(y) ) ( ∫ G(z′, y′) dΦ(y′) ) dΦ(z, x)        (6.7)
      = E[ f(Z) g(Z′) E[F(Z, ε) | Z] E[G(Z′, ε′) | Z′] ].

This says that

    E[F(Z, ε) G(Z′, ε′) | Z, Z′] = E[F(Z, ε) | Z] E[G(Z′, ε′) | Z′].        (6.8)

Taking expectation on both sides of equation (6.8) with respect to Z and Z′, we obtain

    E[F(Z, ε) G(Z′, ε′)] = E[ E[F(Z, ε) | Z] E[G(Z′, ε′) | Z′] ].        (6.9)

Substituting F(Z, ε) = A and G(Z′, ε′) = A′, we have equation (6.6) and

    E[AA′] = E[ E[A | Z] E[A′ | Z′] ] = E[Np · Np′] = N² E[pp′].        (6.10)

The last line is due to the fact that, conditional on Z, A is a sum of N independent indicator
variables and follows a binomial distribution with parameters N and p. Applying (6.9)
again with F(Z, ε) = A and G(Z′, ε′) = 1, or indeed, much more directly by repeated
expectations, we have

    E[A] = N E[p]  and  E[A′] = N E[p′].        (6.11)

Hence we conclude that

    Cov(A, A′) = E[AA′] − E[A]E[A′] = N² E[pp′] − N² E[p]E[p′] = N² Cov(p, p′).

We have

    Var(A) = E[ E[A² | Z] ] − N²(E[p])² = N E[p(1 − p)] + N² Var(p).        (6.12)

Similarly,

    Var(A′) = N E[p′(1 − p′)] + N² Var(p′).

Putting everything together, we have for the correlations:

    Corr(A, A′) = Corr(p, p′) / ( √(1 + E[p(1 − p)]/(N Var(p))) · √(1 + E[p′(1 − p′)]/(N Var(p′))) )        (6.13)
                → Corr(p, p′)  as N → ∞.
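Proposition 6.1 can be checked by Monte Carlo: simulate (A, A′) from (6.1)-(6.3) together with the conditional probabilities (p, p′) of (6.5), and compare the two correlations for a moderately large N. The parameter values below are illustrative:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_counts(n, rho, rho2, phi, x_star, x_star2, n_rep=3000, seed=11):
    """Draw (A, A') of eq. (6.3) from the model (6.1)-(6.2), together with the
    conditional default probabilities (p, p') of eq. (6.5)."""
    rng = random.Random(seed)
    As, A2s, ps, p2s = [], [], [], []
    for _ in range(n_rep):
        z = rng.gauss(0.0, 1.0)
        z2 = phi * z + math.sqrt(1.0 - phi * phi) * rng.gauss(0.0, 1.0)   # eq. (6.2)
        a = sum(1 for _ in range(n)
                if math.sqrt(rho) * z + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0) <= x_star)
        a2 = sum(1 for _ in range(n)
                 if math.sqrt(rho2) * z2 + math.sqrt(1.0 - rho2) * rng.gauss(0.0, 1.0) <= x_star2)
        As.append(a)
        A2s.append(a2)
        ps.append(normal_cdf((x_star - math.sqrt(rho) * z) / math.sqrt(1.0 - rho)))
        p2s.append(normal_cdf((x_star2 - math.sqrt(rho2) * z2) / math.sqrt(1.0 - rho2)))
    return As, A2s, ps, p2s

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

# Illustrative parameters: N = 200, rho = rho' = 0.3, phi = 0.8, X_* = X'_* = -1.
As, A2s, ps, p2s = simulate_counts(200, 0.3, 0.3, 0.8, -1.0, -1.0)
```

With N = 200 the finite-N correction factor in (6.13) is already close to one, so the two correlations nearly coincide.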
Theorem 6.2 (Vintage Correlation in Default Rates). Consider a pool of N mortgages
created at each time v, where N is fixed. Suppose within each vintage v, defaults are
governed by a Gaussian copula model as in equations (5.1), (5.5), and (5.6), with common risk factor Z_v being a zero-mean stationary Gaussian process. Assume further that
ρ_v = Corr(X_{v,i}, X_{v,j}), the correlation parameter for state variables X_{v,i} of individual mortgages of vintage v, is positive. Then A_v and A_{v′}, the numbers of defaults
observed at time T within mortgage vintages v and v′, are correlated if and only if
φ_{v,v′} = Corr(Z_v, Z_{v′}) ≠ 0, where Z_v is the common Gaussian risk factor process.
Moreover, in the large portfolio limit, Corr(A_v, A_{v′}) approaches a limiting value determined by φ_{v,v′}, ρ_v, and ρ_{v′}.
Proof. Conditional on the common risk factor Z_v, the number of defaults A_v is a sum of
N independent indicator variables and follows a binomial distribution. More specifically,

    P(A_v = k | Z_v) = (N choose k) p_v^k (1 − p_v)^{N−k},        (6.14)

where p_v is the default probability conditional on Z_v, i.e.,

    p_v = P(τ_{v,i} ≤ τ_v^* | Z_v) = P(X_{v,i} ≤ X_v^* | Z_v),

with

    X_v^* = Φ^{-1}(F_v(T)),

where F_v(T) is the probability of default before time T. Then

    p_v = P(X_{v,i} ≤ X_v^* | Z_v) = Φ(Z_v^*),        (6.15)

where

    Z_v^* = (X_v^* − √ρ_v Z_v) / √(1 − ρ_v).        (6.16)
Similarly,

    p_{v′} = Φ(Z_{v′}^*),        (6.17)

where

    Z_{v′}^* = (X_{v′}^* − √ρ_{v′} Z_{v′}) / √(1 − ρ_{v′}).        (6.18)

Note that if Z_v and Z_{v′} are jointly Gaussian with correlation coefficient φ_{v,v′}, we can
write

    Z_{v′} = φ_{v,v′} Z_v + √(1 − φ²_{v,v′}) u_{v,v′}  for v′ > v,        (6.19)

where u_{v,v′} is a standard Gaussian independent of Z_v. Combining equations (6.16),
(6.18) and (6.19), we have

    Z_{v′}^* = a φ_{v,v′} Z_v^* + (X_{v′}^* − b φ_{v,v′} X_v^*)/√(1 − ρ_{v′}) − ( √(ρ_{v′}(1 − φ²_{v,v′})) / √(1 − ρ_{v′}) ) u_{v,v′},        (6.20)

where

    a = √( ρ_{v′}(1 − ρ_v) / (ρ_v(1 − ρ_{v′})) ),   b = √( ρ_{v′} / ρ_v ).
Then

    Cov(p_v, p_{v′}) = Cov( Φ(Z_v^*), Φ(Z_{v′}^*) )
      = Cov( Φ( a φ_{v,v′} Z_v^* + (X_{v′}^* − b φ_{v,v′} X_v^*)/√(1 − ρ_{v′}) − ( √(ρ_{v′}(1 − φ²_{v,v′})) / √(1 − ρ_{v′}) ) u_{v,v′} ),  Φ(Z_v^*) ).        (6.21)

Since a > 0 when ρ_v, ρ_{v′} ∈ (0, 1), the covariance and the correlation between p_v
and p_{v′} are determined by φ_{v,v′}, ρ_v, and ρ_{v′}. They are nonzero if and only if
φ_{v,v′} ≠ 0. Applying Proposition 6.1, we know that

    Corr(A_v, A_{v′}) = Corr(p_v, p_{v′}) / ( √(1 + E[p_v(1 − p_v)]/(N Var(p_v))) · √(1 + E[p_{v′}(1 − p_{v′})]/(N Var(p_{v′}))) ),  ∀ v ≠ v′.        (6.22)

Therefore, A_v and A_{v′} have nonzero correlation as long as p_v and p_{v′} do.
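The "if and only if" in Theorem 6.2 can be illustrated numerically. Writing Cov(p_v, p_{v′}) = P[X ≤ X_v^*, X′ ≤ X_{v′}^*] − Φ(X_v^*)Φ(X_{v′}^*) for standard Gaussians (X, X′) with correlation α = φ_{v,v′}√(ρ_v ρ_{v′}) (a representation derived in the proof of Theorem 6.3 below), the covariance vanishes at φ_{v,v′} = 0 and grows with it. The quadrature and parameter choices here are illustrative:

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cov_default_probs(phi, rho_v, rho_w, x_v, x_w, n=4000, lim=8.0):
    """Cov(p_v, p_{v'}) = P[X <= X_v^*, X' <= X_{v'}^*] - Phi(X_v^*) Phi(X_{v'}^*),
    where (X, X') are standard Gaussians with correlation
    alpha = phi * sqrt(rho_v * rho_{v'}); the joint probability is computed by
    trapezoidal quadrature of the integral from the proof of Theorem 6.3."""
    alpha = phi * math.sqrt(rho_v * rho_w)
    if abs(alpha) < 1e-15:
        return 0.0                        # independent factors: zero covariance
    h = (x_v + lim) / n
    total = 0.0
    for i in range(n + 1):
        w = -lim + i * h
        val = normal_cdf((x_w - alpha * w) / math.sqrt(1.0 - alpha * alpha)) * normal_pdf(w)
        total += 0.5 * val if i in (0, n) else val
    return total * h - normal_cdf(x_v) * normal_cdf(x_w)
```

For ρ_v = ρ_{v′} = 0.4 and X_v^* = X_{v′}^* = −1, the covariance is zero at φ = 0 and strictly increases with φ, as Theorem 6.3 proves in general.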
Equations (6.21) and (6.22) provide closed-form expressions for the serial correlation
of the default rates p_v of different vintages and of the numbers of defaults A_v. However, we
cannot directly read from equation (6.21) how the vintage correlation of default rates
depends on φ_{v,v′}. The theorem below (whose proof extends an idea from [37]) shows that
this dependence is always positive.

Theorem 6.3 (Dependence on Common Risk Factor). Under the same settings as in
Theorem 6.2, assume that both the serial correlation φ_{v,v′} of the common risk factor and
the individual state variable correlations ρ_v, ρ_{v′} are positive. Then the number A_v of
defaults in the vintage-v cohort by time T is positively correlated with the number A_{v′} in
the vintage-v′ cohort. Moreover, this correlation is an increasing function of the serial
correlation parameter φ_{v,v′} of the common risk factor.
Proof. We will use the notation established in Proposition 6.1. We can assume that
v ≠ v′. Recall that in the Gaussian copula model, name i in the vintage-v cohort defaults by time T if the standard Gaussian variable X_{v,i} falls below the threshold X_v^*. The
unconditional default probability is

    P[X_{v,i} ≤ X_v^*] = Φ(X_v^*).

For the covariance, we have

    Cov(A_v, A_{v′}) = ∑_{k,l=1}^{N} Cov( 1_{[X_{v,k} ≤ X_v^*]}, 1_{[X_{v′,l} ≤ X_{v′}^*]} )
                     = N² Cov( 1_{[X ≤ X_v^*]}, 1_{[X′ ≤ X_{v′}^*]} ),        (6.23)

where X, X′ are jointly Gaussian, each standard Gaussian, with mean zero and covariance

    E[XX′] = E[X_{v,k} X_{v′,l}],

which is the same for all pairs k, l, since v ≠ v′. This common value of the covariance
arises from the covariance between Z_v and Z_{v′} along with the covariance between any
X_{v,k} and Z_v; it is

    Cov(X, X′) = φ_{v,v′} √(ρ_v ρ_{v′}).        (6.24)

Now since X, X′ are jointly Gaussian, we can express them in terms of two independent
standard Gaussians:

    W_1 := X,
    W_2 := ( X′ − φ_{v,v′} √(ρ_v ρ_{v′}) X ) / √(1 − ρ_v ρ_{v′} φ²_{v,v′}).        (6.25)

We can check readily that these are standard Gaussians with zero covariance, and

    X = W_1,
    X′ = φ_{v,v′} √(ρ_v ρ_{v′}) W_1 + √(1 − ρ_v ρ_{v′} φ²_{v,v′}) W_2.        (6.26)

Let

    α = φ_{v,v′} √(ρ_v ρ_{v′}).

The assumption that the ρ's and φ_{v,v′} are positive (and, of course, less than 1) implies
that 0 < α < 1.
Note that the covariance between p_v and p_{v′} can be expressed as

    Cov(p_v, p_{v′}) = E[p_v p_{v′}] − E[p_v]E[p_{v′}]
      = E[ E[1_{[X_{v,i} ≤ X_v^*]} | Z_v] E[1_{[X_{v′,i} ≤ X_{v′}^*]} | Z_{v′}] ] − E[p_v]E[p_{v′}]
      = E[ 1_{[X_{v,i} ≤ X_v^*]} 1_{[X_{v′,i} ≤ X_{v′}^*]} ] − E[p_v]E[p_{v′}]
      = P[ X_{v,i} ≤ X_v^*, X_{v′,i} ≤ X_{v′}^* ] − E[p_v]E[p_{v′}]
      = P[ W_1 ≤ X_v^*, αW_1 + √(1 − α²) W_2 ≤ X_{v′}^* ] − E[p_v]E[p_{v′}]
      = ∫_{−∞}^{X_v^*} Φ( (X_{v′}^* − αw_1)/√(1 − α²) ) ϕ(w_1) dw_1 − E[p_v]E[p_{v′}],

where ϕ(·) is the probability density function of the standard normal distribution. The
third equality follows from equation (6.9). The fifth equality follows from equation (6.26).
The unconditional expectation of p_v is independent of α, because

    E[p_v] = E[ P(X_{v,i} ≤ X_v^* | Z_v) ] = Φ(X_v^*).        (6.27)
It follows that
$$\begin{aligned}
\frac{\partial}{\partial \alpha} \mathrm{Cov}(p_v, p_{v'})
&= \int_{-\infty}^{X_v^*} \varphi(w_1)\, \varphi\Bigl(\frac{X_{v'}^* - \alpha w_1}{\sqrt{1-\alpha^2}}\Bigr)\, \frac{\partial}{\partial \alpha}\Bigl(\frac{X_{v'}^* - \alpha w_1}{\sqrt{1-\alpha^2}}\Bigr)\, dw_1 \\
&= \int_{-\infty}^{X_v^*} \varphi(w_1)\, \varphi\Bigl(\frac{X_{v'}^* - \alpha w_1}{\sqrt{1-\alpha^2}}\Bigr)\, \frac{-w_1 + \alpha X_{v'}^*}{(1-\alpha^2)^{3/2}}\, dw_1 \\
&= -\frac{1}{(1-\alpha^2)^{3/2}} \int_{-\infty}^{X_v^*} (w_1 - \alpha X_{v'}^*)\, \varphi\Bigl(\frac{X_{v'}^* - \alpha w_1}{\sqrt{1-\alpha^2}}\Bigr)\, \varphi(w_1)\, dw_1. \tag{6.28}
\end{aligned}$$
The last two terms in the integrand simplify to
$$\varphi\Bigl(\frac{X_{v'}^* - \alpha w_1}{\sqrt{1-\alpha^2}}\Bigr)\, \varphi(w_1) = \frac{1}{2\pi} \exp\Bigl(-\frac{(w_1 - \alpha X_{v'}^*)^2 + X_{v'}^{*2}(1-\alpha^2)}{2(1-\alpha^2)}\Bigr). \tag{6.29}$$
Substituting equation (6.29) into (6.28), we have
$$\frac{\partial}{\partial \alpha} \mathrm{Cov}(p_v, p_{v'}) = -\frac{\exp\bigl(-\tfrac{X_{v'}^{*2}}{2}\bigr)}{2\pi\, (1-\alpha^2)^{3/2}} \int_{-\infty}^{X_v^*} (w_1 - \alpha X_{v'}^*)\, \exp\Bigl(-\frac{(w_1 - \alpha X_{v'}^*)^2}{2(1-\alpha^2)}\Bigr)\, dw_1.$$
Make a change of variable and let
$$y := \frac{w_1 - \alpha X_{v'}^*}{\sqrt{1-\alpha^2}}.$$
It follows, upon further simplification, that
$$\frac{\partial}{\partial \alpha} \mathrm{Cov}(p_v, p_{v'}) = \frac{1}{2\pi \sqrt{1-\alpha^2}}\, \exp\Bigl(-\frac{X_v^{*2} - 2\alpha X_v^* X_{v'}^* + X_{v'}^{*2}}{2(1-\alpha^2)}\Bigr) > 0. \tag{6.30}$$
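As a sanity check on the derivation, the closed form (6.30) can be compared against a finite-difference derivative of the integral representation of $\mathrm{Cov}(p_v, p_{v'})$ obtained above. This is an illustrative sketch using only the standard library, not the authors' code; the thresholds $X_v^* = -1$, $X_{v'}^* = -1.5$ and the value $\alpha = 0.4$ are arbitrary example choices:

```python
import math

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    # Standard normal density.
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def cov_p(alpha, xv, xvp, lo=-10.0, step=1e-4):
    # Trapezoidal approximation of the integral representation above:
    #   int_{-inf}^{X_v^*} Phi((X_v'^* - alpha w)/sqrt(1-alpha^2)) phi(w) dw
    #     - Phi(X_v^*) Phi(X_v'^*)
    s = math.sqrt(1.0 - alpha * alpha)
    n = int(round((xv - lo) / step))
    total = 0.0
    for k in range(n + 1):
        w = lo + k * step
        f = Phi((xvp - alpha * w) / s) * phi(w)
        total += f if 0 < k < n else 0.5 * f
    return total * step - Phi(xv) * Phi(xvp)

def dcov_closed(alpha, xv, xvp):
    # Closed form (6.30).
    s2 = 1.0 - alpha * alpha
    num = xv * xv - 2.0 * alpha * xv * xvp + xvp * xvp
    return math.exp(-num / (2.0 * s2)) / (2.0 * math.pi * math.sqrt(s2))

# Hypothetical thresholds and alpha, for the check only.
xv, xvp, a, h = -1.0, -1.5, 0.4, 1e-3
fd = (cov_p(a + h, xv, xvp) - cov_p(a - h, xv, xvp)) / (2.0 * h)
print(fd, dcov_closed(a, xv, xvp))
```

The two printed numbers agree to several decimal places, and both are positive, as (6.30) asserts.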
Thus, we have shown that the partial derivative of the covariance with respect to $\alpha$ is positive. Since
$$\alpha = \sqrt{\rho_v \rho_{v'}}\, \phi_{v,v'},$$
TEMPORAL CORRELATION OF DEFAULTS
TABLE 1. Default Probabilities Through Time ($F(\tau)$).

Subprime
Time (Month)           12     24     36     72     144
Default Probability    0.04   0.10   0.12   0.13   0.14

Prime
Time (Month)           12     24     36     72     144
Default Probability    0.01   0.02   0.03   0.04   0.05
with the $\rho$'s and $\phi_{v,v'}$ assumed to be positive, we know that the partial derivatives of the covariance with respect to $\phi_{v,v'}$, $\rho_v$, and $\rho_{v'}$ are also positive everywhere. Note that the unconditional variance of $p_v$ is independent of $\phi_{v,v'}$ (although dependent on the $\rho$'s), which can be seen from equation (6.15). It follows that the serial correlation of $p_v$ has a positive partial derivative with respect to $\phi_{v,v'}$. Recall equation (6.21), which shows that the covariance of $p_v$ and $p_{v'}$ is zero for any value of the $\rho$'s when $\phi_{v,v'} = 0$. This result, together with the positive partial derivative of the covariance with respect to $\phi_{v,v'}$, ensures that the covariance, and thus the vintage correlation, of $p_v$ and $p_{v'}$ is always positive. From equation (6.22), noticing that both the expectation and the variance of $p_v$ are independent of $\phi_{v,v'}$, we know that the correlation between $A_v$ and $A_{v'}$ must also be positive everywhere and monotonically increasing in $\phi_{v,v'}$.
7. Monte Carlo Simulations
In this section, we study the link between serial correlation in a common risk factor
and vintage correlation in pools of mortgages in two sets of simulations: First, a series
of mortgage pools is simulated to illustrate the analytical results of Section 6. Second, a
waterfall structure is simulated to study temporal correlation in MBS.
7.1. Vintage Correlation in Mortgage Pools. We conduct a Monte Carlo simulation to study how serial correlation of a common risk factor propagates into vintage correlation in default rates. We simulate default times for individual mortgages according to equations (5.1), (5.5), and (5.6). From the simulated default times, the default rate of a pool of mortgages is calculated. In each simulation, we construct a cohort of $N = 100$ homogeneous mortgages in every month $v = 1, 2, \ldots, 120$. We simulate a monthly time series of the common risk factor $Z_v$, which is assumed to have an AR(1) structure with unconditional mean zero and variance one,
$$Z_v = \phi Z_{v-1} + \sqrt{1-\phi^2}\, u_v, \qquad \forall\, v = 2, 3, \ldots, 120. \tag{7.1}$$
The errors $u_v$ are i.i.d. standard Gaussian. The initial observation $Z_1$ is a standard normal random variable. We report the case where $\phi = 0.95$. Each mortgage $i$ issued at time $v$ has a state variable $X_{v,i}$ assigned to it that determines its default time. The time series properties of $X_{v,i}$ follow equation (5.6). The error $\varepsilon_i$ in equation (5.6) is independent of $u_v$.
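The common-factor recursion (7.1) can be sketched as below, with $\phi = 0.95$ as in the text. The path here is made much longer than 120 months purely so that the sample variance and lag-one autocorrelation estimates are stable; it is an illustration, not the simulation code used in the paper:

```python
import math
import random

random.seed(7)

phi = 0.95        # AR(1) coefficient of the common risk factor
T = 100_000       # long path, only to stabilize the estimates below

z = [random.gauss(0.0, 1.0)]              # Z_1 ~ N(0, 1)
for _ in range(T - 1):
    u = random.gauss(0.0, 1.0)            # i.i.d. standard Gaussian u_v
    # Equation (7.1): the sqrt(1 - phi^2) scaling keeps Var(Z_v) = 1.
    z.append(phi * z[-1] + math.sqrt(1.0 - phi * phi) * u)

mean = sum(z) / T
var = sum((x - mean) ** 2 for x in z) / T
lag1 = sum((z[t] - mean) * (z[t + 1] - mean) for t in range(T - 1)) / (T * var)
print(var, lag1)
```

The printed unconditional variance is close to one and the lag-one sample autocorrelation is close to $\phi$, which is exactly the property the scaling $\sqrt{1-\phi^2}$ is designed to deliver.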
FIGURE 3. Serial Correlation in Default Rates of Subprime Mortgages. [Figure: three panels — default rates across vintages, vintage correlation by lag, and the sample partial autocorrelation function.]
To simulate the actual default rates of mortgages, we need to specify the marginal distribution functions of default times $F_v(\cdot)$ as in equation (5.1). We define a function $F(\cdot)$, which takes a time period as argument and returns the default probability of a mortgage within that time period since its initiation. We assume that this $F(\cdot)$ is fixed across different vintages, which means that mortgages of different cohorts have the same unconditional default probability in the next $S$ periods from their initiation, where $S = 1, 2, \ldots$. It is easy to verify that $F_v(T) = F(T - v)$. The values of the function $F(\cdot)$ are specified in Table 1, for both subprime and prime mortgages. Intermediate values of $F(\cdot)$ are linearly interpolated from this table. While these values are in the same range as actual default rates of subprime and prime mortgages in the last ten years, their specification is rather arbitrary, as it has little impact on the stochastic structure of the simulated default rates. We set the observation time $T$ to be 144, which is two years after the creation of the last vintage, as we need to give the last vintage some time window to have possible default events. For example, in each month from 1998 to 2007, 100 mortgages are created.
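The linear interpolation of Table 1 can be sketched as follows (subprime row; the function name `F`, the interpolation from $(0,0)$ below the first knot, and the flat extrapolation beyond month 144 are our assumptions for the illustration):

```python
# Knots of the unconditional default probability F(.) from Table 1
# (subprime row): (months since initiation, default probability).
SUBPRIME = [(12, 0.04), (24, 0.10), (36, 0.12), (72, 0.13), (144, 0.14)]

def F(tau, table=SUBPRIME):
    # Default probability within tau months of initiation,
    # piecewise-linear between the tabulated knots.
    if tau <= table[0][0]:
        # Below the first knot, interpolate from (0, 0) (assumption).
        return table[0][1] * tau / table[0][0]
    for (t0, p0), (t1, p1) in zip(table, table[1:]):
        if tau <= t1:
            return p0 + (p1 - p0) * (tau - t0) / (t1 - t0)
    return table[-1][1]   # flat beyond the last tabulated month (assumption)

print(F(24), F(30))   # a tabulated value and an interpolated one
```

For instance, month 30 lies halfway between the knots at 24 and 36, so its value is midway between 0.10 and 0.12.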
We need to consider two cases, subprime and prime. For the subprime case, every vintage is given a two-year window to default, so the unconditional default probability is constant across vintages. Prime mortgages, on the other hand, have decreasing default probability through subsequent vintages. For example, in our simulation, the first vintage has a time window of 144 months to default, the second vintage has 143 months, the third has 142 months, and so on. Therefore, older vintages are more likely to default by observation time $T$ than newer vintages. This fixed ex-post observation time of defaults is one difference that distinguishes vintage correlation from serial correlation.
FIGURE 4. Serial Correlation in Default Rates of Prime Mortgages. [Figure: three panels — default rates across vintages, vintage correlation by lag, and the sample partial autocorrelation function.]
We construct a time series $\tau_{v,i}$ of default times of mortgage $i$ issued at time $v$ according to equation (5.5). (Note that this is not a time series of default times for a single mortgage, since a single mortgage defaults only once or never. Rather, the index $i$ is a placeholder for a position in a mortgage pool. In this sense, $\tau_{v,i}$ is the time series of default times of mortgages in position $i$ in the pool over vintages $v$.) Time series of default rates $\bar{A}_v$ are computed as:
$$\bar{A}_v(\tau_v^*) = \frac{\#\{\text{mortgages for which } \tau_{v,i} \leq \tau_v^*\}}{N}.$$
In the subprime case, $\tau_v^* = 24$; in the prime case, $\tau_v^* = T - v$ varies across vintages.
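The bookkeeping from default times to default rates $\bar{A}_v$ can be sketched as below. The default times here are uniform random placeholders rather than draws from the Gaussian copula of equations (5.1), (5.5), and (5.6), since the point is only the mapping from $\tau_{v,i}$ to per-vintage default rates:

```python
import random

random.seed(1)

N = 100          # mortgages per vintage cohort
V = 120          # monthly vintages
TAU_STAR = 24    # subprime window to default, tau_v^*, in months

# Placeholder default times tau[v][i]; in the paper these come from
# the Gaussian copula, here they are uniform draws for illustration.
tau = [[random.randint(1, 300) for _ in range(N)] for _ in range(V)]

# Default rate per vintage: share of pool positions i whose default
# time falls within the window tau_v^*.
A_bar = [sum(1 for t in tau[v] if t <= TAU_STAR) / N for v in range(V)]
print(min(A_bar), max(A_bar))
```

In the prime case one would replace the constant `TAU_STAR` by the vintage-dependent window `T - v`.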
The simulation is repeated 1000 times. For the subprime case, the average simulated
default rates are plotted in Figure 3. For the prime case, average simulated default rates
are plotted in Figure 4. Note that because of the decreasing time window to default, the
default rates in Figure 4 have a decreasing trend.
In the subprime case, we can use the sample autocorrelation and partial autocorrelation
functions to estimate vintage correlation, because the unconditional default probability is
constant across vintages, so that averaging over different vintages and averaging over
different pools is the same. In the prime case, we have to calculate vintage correlation
proper. Since we have 1000 Monte Carlo observations of default rates for each vintage,
we can calculate the correlation between two vintages using those samples. For the partial
autocorrelation function, we simply demean the series of default rates and obtain the usual
partial autocorrelation function. We plot the estimated vintage correlation in the second
rows of Figures 3 and 4 for subprime and prime cases, respectively. As can be seen, the
correlation of the default rates of the first vintage with later vintages decreases geometrically. In both cases, the estimated first-order coefficient of default rates is close to, but less than, $\phi = 0.95$, the AR(1) coefficient of the common risk factor. The partial autocorrelation functions are plotted in the third rows of Figures 3 and 4. They are significant only at lag one. This phenomenon is also observed when we set $\phi$ to other values. Both the
sample autocorrelation and partial autocorrelation functions indicate that the default rates
follow a first-order autoregressive process, similar to the specification of the common risk
factor. However, compared with the subprime case, the default rates of prime mortgages
seem to have longer memory.
The similarity between the magnitude of the autocorrelation coefficient of the default rates and that of the common risk factor can be explained by the following Taylor expansion. Taylor-expanding equation (6.15) at $Z_v^* = 0$ to first order, we have
$$p_v \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\, Z_v^*. \tag{7.2}$$
Since $p_v$ is approximately linear in $Z_v^*$, which is a linear transformation of $Z_t$, it follows a stochastic process that has approximately the same serial correlation as $Z_t$.
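Equation (7.2) is the first-order expansion $\Phi(z) \approx \tfrac{1}{2} + \varphi(0)\,z = \tfrac{1}{2} + z/\sqrt{2\pi}$ of the standard normal CDF at zero. A quick numerical check of how tight this approximation is for small arguments:

```python
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def linear(z):
    # First-order Taylor expansion as in (7.2): 1/2 + z / sqrt(2*pi).
    return 0.5 + z / math.sqrt(2.0 * math.pi)

for z in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(z, round(Phi(z), 4), round(linear(z), 4))
```

The error of the linear approximation is of order $z^3$, so for $|z| \leq 0.5$ the two columns agree to within about 0.01, which is why $p_v$ inherits the serial correlation of the factor so closely.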
7.2. Vintage Correlation in Waterfall Structures. We have already shown, using the Gaussian copula approach, that the time series of default rates in mortgage pools inherits vintage correlation from the serial correlation of the common risk factor. We now study how this affects the performance of assets such as MBS that are securitized from the mortgage pool in a so-called waterfall. The basic elements of the simulation are: (i) A time line of 120 months and an observation time $T = 144$. (ii) The mortgage contract has a principal of \$1, a maturity of 15 years, and an annual interest rate of 9%. Fixed monthly payments are received until the mortgage defaults or is paid in full. A pool of 100 such mortgages is created every month. (iii) There is a pool of 100 units of MBS, each of principal \$1, securitized from each month's mortgage cohort. There are four tranches: the senior tranche, the mezzanine tranche, the subordinate tranche, and the equity tranche. The senior tranche consists of the top 70% of the face value of all mortgages created in each month; the mezzanine tranche consists of the next 25%; the subordinate tranche consists of the next 4%; the equity tranche has the bottom 1%. Each senior MBS pays an annual interest rate of 6%; each mezzanine MBS pays 15%; each subordinate MBS pays 20%. The equity tranche does not pay interest but retains residual profits, if any.
The basic setup of the simulation is illustrated in Figure 2. For a cohort of mortgages issued at time $v$ and the MBS derived from it, the securitization process works as follows. At the end of each month, each mortgage either defaults or makes a fixed monthly payment. The method to determine default is the same as we have used before: mortgage $i$ issued at time $v$ defaults at $\tau_{v,i}$, which is generated by the Gaussian copula approach according to equations (5.1), (5.5), and (5.6). We consider both subprime and prime scenarios, as in the case of default rates. For subprime mortgages, we assume that each individual mortgage receives a prepayment of the outstanding principal at the end of the teaser period if it has not defaulted, so default events and cash flows only happen within the teaser period. For the prime case, there is no such restriction. Again, we assume the common risk factor to follow an AR(1) process with first-order autocorrelation coefficient $\phi = 0.95$. The cross-name correlation coefficient $\rho$ is set to 0.5. The unconditional default probabilities over time are obtained from Table 1.
FIGURE 5. Serial Correlation in Principal Losses of Subprime MBS. [Figure: panels for the Subordinate, Mezzanine, and Equity tranches; rows show vintage correlation and partial autocorrelation by lag.] The first row plots the vintage correlation of the principal loss of each tranche. The correlation is estimated using the sample autocorrelation function. The second row plots the partial autocorrelation functions.
If a mortgage has not defaulted, the interest payments received from it are used to
pay the interest specified on the MBS from top to bottom. Thus, the cash inflow is used
to pay the senior tranche first (6% of the remaining principal of the senior tranche at
the beginning of the month). The residual amount, if any, is used to pay the mezzanine
tranche, after that the subordinate tranche, and any still remaining funds are collected
in the equity tranche. If the cash inflow passes a tranche threshold but does not cover
the following tranche, it is prorated to the following tranche. Any residual funds after
all the non-equity tranches have been paid add to the principal of the equity tranche.
Principal payments are processed analogously. We assume a recovery rate of 50% on
the outstanding principal for defaulted mortgages. The 50% loss of principal is deducted
from the principal of the lowest ranked outstanding MBS.
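The top-to-bottom interest allocation described above can be sketched as a sequential-pay loop. The tranche sizes (70/25/4 of a face value of 100) and coupon rates follow the simulation setup, while the function itself (name, signature, one-month scope) is a simplified illustration, not the authors' implementation; principal payments and losses are handled analogously:

```python
def pay_interest(cash, tranches):
    """Allocate one month's interest cash top-down.

    cash: interest collected from the mortgage pool this month.
    tranches: list of (name, outstanding_principal, annual_rate),
    ordered senior first. Returns (payments dict, residual); the
    residual is what would flow to the equity tranche.
    """
    payments = {}
    for name, principal, annual_rate in tranches:
        due = principal * annual_rate / 12.0   # monthly interest due
        paid = min(cash, due)                  # shortfall tranche is prorated
        payments[name] = paid
        cash -= paid
    return payments, cash

# Hypothetical pool of face value 100, split 70/25/4 with the
# coupon rates from the setup (equity holds the remaining 1).
tranches = [("senior", 70.0, 0.06),
            ("mezzanine", 25.0, 0.15),
            ("subordinate", 4.0, 0.20)]

# With 0.6 of interest cash, the senior tranche is paid in full
# (0.35 due), the mezzanine tranche is prorated, and nothing
# reaches the subordinate or equity tranches.
pays, residual = pay_interest(0.6, tranches)
print(pays, residual)
```

This reproduces the rule in the text: each tranche is paid its due in order of seniority, a tranche that cannot be covered receives whatever cash remains, and any surplus after all non-equity tranches is collected by the equity tranche.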
Before we examine the vintage correlation of the present value of MBS tranches, we
look at the time series of total principal loss across MBS tranches. In our simulations, no
loss of principal occurred for the senior tranche. The series of expected principal losses
of other tranches and their sample autocorrelation and sample partial autocorrelation are
plotted in Figures 5 and 6 for subprime and prime scenarios respectively. We use the
same method to obtain the autocorrelation functions for prime mortgages as in the case of
default rates. The correlograms show that the expected loss of principal for each tranche
follows an AR(1) process.
The series of present values of cash flows for each tranche and their sample autocorrelation and partial autocorrelation functions are plotted in Figures 7 and 8 for subprime
FIGURE 6. Serial Correlation in Principal Losses of Prime MBS. [Figure: panels for the Subordinate, Mezzanine, and Equity tranches; rows show vintage correlation and partial autocorrelation by lag.] The first row plots the vintage correlation of the principal loss of each tranche. The correlation is estimated using the correlation between the first and subsequent vintages, each of which has a Monte Carlo sample size of 1000. The second row plots the partial autocorrelation functions of the demeaned series of principal losses.
and prime scenarios, respectively. The senior tranche displays a significant first-order autocorrelation coefficient due to losses in interest payments, although there are no losses in principal. The partial autocorrelation functions, which have significant positive values for more than one lag, suggest that the cash flows may not follow an AR(1) process, due to the high non-linearity. However, the estimated vintage correlation still decreases over vintages, as in an AR(1) process, which indicates that our findings for default rates can be extended to cash flows.
Acknowledgment. Discussions with Darrell Duffie and Jean-Pierre Fouque helped improve the paper. We thank seminar and conference participants at Stanford University,
LSU, UC Merced, UC Riverside, UC Santa Barbara, the 2010 Midwest Econometrics
Group Meetings in St. Louis, Maastricht University, and CREATES. EH acknowledges
support from the Danish National Research Foundation. ANS acknowledges research
support received as a Mercator Guest Professor at the University of Bonn in 2011 and
2012, and discussions with Sergio Albeverio and Claas Becker.
FIGURE 7. Cashflow correlation in Subprime MBS. [Figure: panels for the Senior, Mezzanine, Subordinate, and Equity tranches; rows show vintage correlation and partial autocorrelation by lag.] The first row plots the vintage correlation of the cash flow received by each tranche. The correlation is estimated using the sample autocorrelation function. The second row plots the partial autocorrelation functions.
FIGURE 8. Cashflow correlation in Prime MBS. [Figure: panels for the Senior, Mezzanine, and Subordinate tranches; rows show vintage correlation and partial autocorrelation by lag.] The first row plots the vintage correlation of the cash flow received by each tranche. The correlation is estimated using the correlation between the first and subsequent vintages, each of which has a Monte Carlo sample size of 1000. The second row plots the partial autocorrelation functions of the demeaned series of cash flows.
ERIC HILLEBRAND: CREATES - Center for Research in Econometric Analysis of Time Series, Department of Economics and Business, Aarhus University, Denmark