Existence, uniqueness and almost sure asymptotic estimates of the solutions to neutral stochastic functional differential equations driven by pure jumps
Wei Mao 1, Quanxin Zhu 2, Xuerong Mao 3
1. School of Mathematics and Information Technology, Jiangsu Second Normal University, Nanjing 210013, Jiangsu, P. R. China;
2. School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, Jiangsu, P. R. China;
3. Department of Mathematics and Statistics, University of Strathclyde, Glasgow G1 1XH, UK.
Abstract
In this paper, we are concerned with neutral stochastic functional differential equations driven by pure jumps (NSFDEwPJs). We prove the existence and uniqueness of the solution to NSFDEwPJs whose coefficients satisfy a local Lipschitz condition. In addition, we establish $p$th moment exponential estimates and almost sure asymptotic estimates of the solution to NSFDEwPJs.
Key words: Neutral stochastic functional differential equations, Pure jumps, Existence and uniqueness, Exponential estimates, Almost sure asymptotic estimates.
1. Introduction
Stochastic delay differential equations (SDDEs) have come to play an important role in many branches of science and industry. Such models have been used with great success in a variety of application areas, including biology, epidemiology, mechanics, economics and finance. In the past few decades, the qualitative theory of SDDEs has been studied intensively by many scholars; we refer to S. E. A. Mohammed [1], X. Mao [2-5,9], E. Buckwar [6], U. Kuchler [7], Y. Hu [8], D. Xu [10], F. Wu [11], J. Appleby [12], I. Gyongy [13] and the references therein. Recently, motivated by the theory of aeroelasticity, a class of neutral stochastic equations has also received a great deal of attention and much work has been done on such equations. For example, conditions for the existence and stability of the analytical solution are given in [14-20], while various efficient computational methods have been developed and their convergence and stability studied in [21-25].
However, all the equations in the works mentioned above are driven by white noise perturbations with continuous initial data, and white noise is not always an appropriate way to interpret real data. In real phenomena, the state of a neutral stochastic delay equation may be perturbed by abrupt pulses or extreme events, so a mathematical framework beyond purely Brownian perturbations is needed. In particular, we incorporate Lévy-type perturbations with jumps into neutral stochastic delay equations to model such abrupt changes.
In this paper, we study the following neutral stochastic functional differential equation with pure jumps (NSFDEwPJs):
$$
d[x(t)-D(x_t)] = f(x_t,t)\,dt + \int_U h(x_t,u)\,N_{\bar p}(dt,du), \qquad t_0\le t\le T. \tag{1}
$$
To the best of our knowledge, there is no literature concerned with the existence and asymptotic estimates of the solution to NSFDEwPJs (1). On the one hand, we prove that equation (1) has a unique solution in the sense of the $L^p$ norm. We do not use a fixed point theorem; instead, we obtain the solution of equation (1) via successive approximations. On the other hand, we study $p$th moment exponential estimates and almost sure asymptotic estimates of the solution to equation (1). By using the Itô formula, the Taylor formula and the Burkholder-Davis inequality, we show that the $p$th moment of the solution grows at most exponentially with exponent $M$, and that the exponential estimates imply almost sure asymptotic estimates. Although the analysis follows the ideas in [2], the results on the existence and uniqueness of the solution in [2] cannot be extended to the jump case directly. Unlike Brownian motion, whose sample paths are almost surely continuous, the Poisson random measure $N_{\bar p}(dt,du)$ generates a jump process whose sample paths are right-continuous with left limits. Therefore, there is a great difference between the stochastic integral with respect to Brownian motion and the one with respect to the Poisson random measure. It should be pointed out that the proof for NSFDEwPJs is certainly not a straightforward generalization of that for NSFDEs without jumps, and some new techniques are developed to cope with the difficulties caused by the Poisson random measure.
The rest of the paper is organized as follows. In Section 2, we introduce some notation and the hypotheses concerning equation (1). In Section 3, the existence and uniqueness of the solution to equation (1) are investigated. In Section 4, we prove that the $p$th moment of the solution grows at most exponentially with exponent $M$ and show that the exponential estimates imply almost sure asymptotic estimates.
2. Preliminaries
Let $(\Omega,\mathcal{F},P)$ be a complete probability space equipped with a filtration $(\mathcal{F}_t)_{t\ge t_0}$ satisfying the usual conditions (i.e. it is right continuous and $\mathcal{F}_{t_0}$ contains all $P$-null sets). Let $\tau>0$ and let $D([-\tau,0];\mathbb{R}^n)$ denote the family of all right-continuous functions with left-hand limits $\varphi$ from $[-\tau,0]$ to $\mathbb{R}^n$. The space $D([-\tau,0];\mathbb{R}^n)$ is equipped with the norm $\|\varphi\|=\sup_{-\tau\le t\le 0}|\varphi(t)|$, where $|x|=\sqrt{x^\top x}$ for any $x\in\mathbb{R}^n$. If $A$ is a vector or matrix, its trace norm is denoted by $|A|=\sqrt{\operatorname{trace}(A^\top A)}$, while its operator norm is denoted by $\|A\|=\sup\{|Ax|:|x|=1\}$. $D^b_{\mathcal{F}_{t_0}}([-\tau,0];\mathbb{R}^n)$ denotes the family of all almost surely bounded, $\mathcal{F}_{t_0}$-measurable, $D([-\tau,0];\mathbb{R}^n)$-valued random variables $\xi=\{\xi(\theta):-\tau\le\theta\le 0\}$. Let $t_0\ge 0$ and $p\ge 2$, and let $L^p_{\mathcal{F}_{t_0}}([-\tau,0];\mathbb{R}^n)$ denote the family of all $\mathcal{F}_{t_0}$-measurable, $D([-\tau,0];\mathbb{R}^n)$-valued random variables $\varphi=\{\varphi(\theta):-\tau\le\theta\le 0\}$ such that $E\sup_{-\tau\le\theta\le 0}|\varphi(\theta)|^p<\infty$.
Let $(U,\mathcal{B}(U))$ be a measurable space and $\pi(du)$ a $\sigma$-finite measure on it. Let $\{\bar p=\bar p(t),\,t\ge t_0\}$ be a stationary $\mathcal{F}_t$-Poisson point process on $U$ with characteristic measure $\pi$. Then, for $A\in\mathcal{B}(U\setminus\{0\})$ with $0\notin\bar A$ (the closure of $A$), the Poisson counting measure $N_{\bar p}$ is defined by
$$
N_{\bar p}((t_0,t]\times A):=\#\{t_0<s\le t:\ \bar p(s)\in A\}=\sum_{t_0<s\le t} I_A(\bar p(s)),
$$
where $\#$ denotes the cardinality of a set. For simplicity, we write $N_{\bar p}(t,A):=N_{\bar p}((t_0,t]\times A)$. It follows from [26] that there exists a $\sigma$-finite measure $\pi$ satisfying
$$
E[N_{\bar p}(t,A)]=\pi(A)t, \qquad P(N_{\bar p}(t,A)=n)=\frac{e^{-t\pi(A)}(\pi(A)t)^n}{n!}.
$$
This measure $\pi$ is called the Lévy measure. The compensated measure $\tilde N_{\bar p}$ is then defined by
$$
\tilde N_{\bar p}([t_0,t],A):=N_{\bar p}([t_0,t],A)-t\pi(A), \qquad t>t_0.
$$
We refer to Ikeda and Watanabe [26] for details on Poisson point processes.
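To fix ideas, the following minimal Python sketch simulates the jump times and marks of such a Poisson random measure, assuming (purely for illustration) that $\pi$ is a finite measure, namely the uniform measure of total mass $\lambda$ on $U=[-1,1]$; the parameter names are hypothetical and the snippet is not part of the analysis below.

```python
import numpy as np

def simulate_poisson_random_measure(t0, t, lam=2.0, rng=None):
    """Jump times and marks of a Poisson random measure on (t0, t] x U, assuming the
    Levy measure pi(du) = (lam/2) du on U = [-1, 1], so that pi(U) = lam: the jump
    times form a Poisson process of rate lam and the marks are i.i.d. with law pi/lam."""
    rng = np.random.default_rng() if rng is None else rng
    n_jumps = rng.poisson(lam * (t - t0))            # N((t0, t] x U) ~ Poisson(pi(U)(t - t0))
    times = np.sort(rng.uniform(t0, t, size=n_jumps))
    marks = rng.uniform(-1.0, 1.0, size=n_jumps)
    return times, marks

def counting_measure(times, marks, t, in_A):
    """N((t0, t] x A): number of jumps up to time t whose mark lies in the set A."""
    return int(np.sum((times <= t) & in_A(marks)))

# Example with t0 = 0: E[N(t, A)] should be close to pi(A) * t over many samples.
times, marks = simulate_poisson_random_measure(0.0, 5.0, lam=2.0)
in_A = lambda u: u > 0.0                             # A = (0, 1], so pi(A) = 1
N_tA = counting_measure(times, marks, 5.0, in_A)
print(N_tA, N_tA - 1.0 * 5.0)                        # counting measure and its compensated value
```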
The integral version of equation (1) is
$$
x(t)-D(x_t)=x(t_0)-D(x_{t_0})+\int_{t_0}^t f(x_s,s)\,ds+\int_{t_0}^t\int_U h(x_s,u)\,N_{\bar p}(ds,du), \tag{2}
$$
where
$$
x_t=\{x(t+\theta):-\tau\le\theta\le 0\}
$$
is regarded as a $D([-\tau,0];\mathbb{R}^n)$-valued stochastic process, and $f:D([-\tau,0];\mathbb{R}^n)\times[t_0,T]\to\mathbb{R}^n$ and $h:D([-\tau,0];\mathbb{R}^n)\times U\to\mathbb{R}^n$ are both Borel-measurable functions. The initial condition $x_{t_0}$ is given by
$$
x_{t_0}=\xi=\{\xi(t):-\tau\le t\le 0\}\in L^p_{\mathcal{F}_{t_0}}([-\tau,0];\mathbb{R}^n),
$$
that is, $\xi$ is an $\mathcal{F}_{t_0}$-measurable $D([-\tau,0];\mathbb{R}^n)$-valued random variable with $E\|\xi\|^p<\infty$. $\tilde N_{\bar p}(dt,du)$ denotes the compensated Poisson random measure
$$
\tilde N_{\bar p}(dt,du)=N_{\bar p}(dt,du)-\pi(du)dt,
$$
where $\pi(du)$ is the Lévy measure associated with $N_{\bar p}$.
To study the existence and asymptotic estimates of the solution to equation (1), we consider the following hypotheses.
(H1) $D(0)=0$ and, for all $\varphi,\psi\in D([-\tau,0];\mathbb{R}^n)$, there exists a constant $k_0\in(0,1)$ such that
$$
|D(\varphi)-D(\psi)|\le k_0\|\varphi-\psi\|. \tag{3}
$$
(H2) For all $\varphi,\psi\in D([-\tau,0];\mathbb{R}^n)$, $t\in[t_0,T]$ and $u\in U$, there exist two positive constants $k$ and $L_0$ such that
$$
|f(\varphi,t)-f(\psi,t)|^2\vee\int_U|h(\varphi,u)-h(\psi,u)|^2\,\pi(du)\le k\|\varphi-\psi\|^2, \tag{4}
$$
$$
|f(0,t)|^2\vee|h(0,u)|^2\le L_0. \tag{5}
$$
(H3) For all $\varphi,\psi\in D([-\tau,0];\mathbb{R}^n)$, $p\ge 2$ and $u\in U$, there exists a positive constant $L$ such that
$$
|h(\varphi,u)-h(\psi,u)|^p\le L\|\varphi-\psi\|^p|u|^p, \tag{6}
$$
where $\pi(U)<\infty$ and $\int_U|u|^p\,\pi(du)<\infty$.
Clearly, (H2) and (H3) imply the linear growth conditions
$$
|f(\varphi,t)|^2\vee\int_U|h(\varphi,u)|^2\,\pi(du)\le L_1(1+\|\varphi\|^2) \tag{7}
$$
and
$$
\int_U|h(\varphi,u)|^p\,\pi(du)\le L_2(1+\|\varphi\|^p), \tag{8}
$$
where $L_1$ and $L_2$ are two positive constants.
In fact, for any $\varphi\in D([-\tau,0];\mathbb{R}^n)$ and $t\in[t_0,T]$, it follows from (4) and (5) that
$$
|f(\varphi,t)|^2\le 2[|f(\varphi,t)-f(0,t)|^2+|f(0,t)|^2]\le 2(k\|\varphi\|^2+L_0)\le L_1(1+\|\varphi\|^2)
$$
and
$$
\int_U|h(\varphi,u)|^2\,\pi(du)\le 2\Big[\int_U|h(\varphi,u)-h(0,u)|^2\,\pi(du)+\int_U|h(0,u)|^2\,\pi(du)\Big]
\le 2(k\|\varphi\|^2+L_0\pi(U))\le L_1(1+\|\varphi\|^2),
$$
where $L_1=\max\{2k,\,2L_0,\,2L_0\pi(U)\}$. Similarly, for any $\varphi\in D([-\tau,0];\mathbb{R}^n)$ and $t\in[t_0,T]$, it follows from (5) and (6) that
$$
\int_U|h(\varphi,u)|^p\,\pi(du)\le 2^{p-1}\int_U\big(|h(\varphi,u)-h(0,u)|^p+|h(0,u)|^p\big)\pi(du)
\le 2^{p-1}L\int_U|u|^p\,\pi(du)\,\|\varphi\|^p+2^{p-1}L_0^{\frac p2}\pi(U)\le L_2(1+\|\varphi\|^p),
$$
where $L_2=\max\{2^{p-1}L\int_U|u|^p\,\pi(du),\,2^{p-1}L_0^{\frac p2}\pi(U)\}$. Hence, the linear growth conditions (7) and (8) are satisfied.
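As a simple illustration (not taken from the analysis below), the hypotheses are satisfied, for instance, by linear discrete-delay coefficients
$$
D(\varphi)=k_0\varphi(-\tau),\qquad f(\varphi,t)=a\varphi(0)+b\varphi(-\tau),\qquad h(\varphi,u)=c\varphi(-\tau)u,
$$
with $k_0\in(0,1)$, constants $a,b,c\in\mathbb{R}$ and a finite measure $\pi$ satisfying $\int_U|u|^2\,\pi(du)<\infty$ and $\int_U|u|^p\,\pi(du)<\infty$: indeed $|D(\varphi)-D(\psi)|\le k_0\|\varphi-\psi\|$, $|f(\varphi,t)-f(\psi,t)|^2\le 2(a^2+b^2)\|\varphi-\psi\|^2$, $\int_U|h(\varphi,u)-h(\psi,u)|^2\,\pi(du)\le c^2\int_U|u|^2\,\pi(du)\,\|\varphi-\psi\|^2$ and $|h(\varphi,u)-h(\psi,u)|^p\le|c|^p\|\varphi-\psi\|^p|u|^p$, so (H1)-(H3) hold with $k=\max\{2(a^2+b^2),\,c^2\int_U|u|^2\,\pi(du)\}$ and $L=|c|^p$, while (5) holds with any $L_0>0$ since $f(0,t)=0$ and $h(0,u)=0$.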
Now we present the definition of the solution to equation (1).
Definition 2.1 A right-continuous process with left limits $x=\{x(t),\,t\in[t_0,T]\}$ ($t_0<T<\infty$) is called a solution of equation (1) if
(1) $x(t)$ is $\mathcal{F}_t$-adapted and $x=\{x(t),\,t\in[t_0,T]\}$ is $\mathbb{R}^n$-valued;
(2) $\int_{t_0}^T|x(t)|^2\,dt<\infty$ a.s.;
(3) $x_{t_0}=\xi$ and, for every $t_0\le t\le T$,
$$
x(t)-D(x_t)=x(t_0)-D(x_{t_0})+\int_{t_0}^t f(x_s,s)\,ds+\int_{t_0}^t\int_U h(x_s,u)\,N_{\bar p}(ds,du)\quad\text{a.s.}
$$
A solution $x(t)$ is said to be unique if any other solution $y(t)$ is indistinguishable from it, that is,
$$
P\{x(t)=y(t)\ \text{for all}\ t\in[t_0,T]\}=1.
$$
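For readers who wish to visualise such solutions, the following Python sketch performs a naive fixed-step Euler-type simulation of equation (1) with a single discrete delay, using the hypothetical linear coefficients $D(\varphi)=k_0\varphi(-\tau)$, $f(\varphi,t)=a\varphi(0)+b\varphi(-\tau)$, $h(\varphi,u)=c\varphi(-\tau)u$ and a compound-Poisson jump input; all parameter values are illustrative assumptions, and this is not the approximation scheme analysed in the paper.

```python
import numpy as np

def simulate_nsfde_pure_jumps(T=5.0, tau=1.0, dt=1e-3,
                              k0=0.5, a=-1.0, b=0.2, c=0.3,
                              lam=2.0, xi=1.0, seed=0):
    """Fixed-step path of d[x(t) - D(x_t)] = f(x_t, t) dt + int_U h(x_t, u) N(dt, du)
    with D(phi) = k0*phi(-tau), f(phi, t) = a*phi(0) + b*phi(-tau), h(phi, u) = c*phi(-tau)*u,
    jumps driven by a compound Poisson measure of rate lam with uniform marks on [-1, 1],
    and a constant initial segment xi on [-tau, 0]."""
    rng = np.random.default_rng(seed)
    n_hist = int(round(tau / dt))                  # grid points covering the delay interval
    n_steps = int(round(T / dt))
    x = np.full(n_hist + n_steps + 1, float(xi))   # x[i] approximates x(-tau + i*dt)
    for i in range(n_hist, n_hist + n_steps):
        x_now, x_del = x[i], x[i - n_hist]         # x(t) and x(t - tau)
        n_jumps = rng.poisson(lam * dt)            # jumps of the Poisson measure on this step
        jump = c * x_del * rng.uniform(-1.0, 1.0, size=n_jumps).sum()
        # advance y(t) = x(t) - D(x_t), then recover x(t + dt)
        y_next = (x_now - k0 * x_del) + (a * x_now + b * x_del) * dt + jump
        x[i + 1] = y_next + k0 * x[i + 1 - n_hist] # D(x_{t+dt}) uses x(t + dt - tau), already known
    return np.linspace(-tau, T, x.size), x

t_grid, path = simulate_nsfde_pure_jumps()
print(path[-1])
```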
3. The existence and uniqueness theorem
In this section, we establish the existence and uniqueness of the solution to
equation (1) under the Lipschitz condition and the local Lipschitz condition.
Define $x^0_{t_0}=\xi$ and $x^0(t)=\xi(0)$ for $t\in[t_0,T]$. Let $x^n_{t_0}=\xi$, $n=1,2,\dots$, and define the sequence of successive approximations to equation (1) by
$$
x^n(t)-D(x^n_t)=\xi(0)-D(\xi)+\int_{t_0}^t f(x^{n-1}_s,s)\,ds+\int_{t_0}^t\int_U h(x^{n-1}_s,u)\,N_{\bar p}(ds,du),\qquad n\ge 1. \tag{9}
$$
Theorem 3.1 Let $p\ge 2$ and suppose that the coefficients of equation (1) satisfy conditions (H1)-(H3). Then equation (1) has a unique solution $x(t)$ on $[t_0,T]$ in the sense of the $L^p$-norm.
In order to prove this theorem, let us present three useful lemmas.
Lemma 3.1 [2] Let $p\ge 2$, $\varepsilon>0$ and $a,b\in\mathbb{R}$. Then
$$
|a+b|^p\le\big[1+\varepsilon^{\frac1{p-1}}\big]^{p-1}\Big(|a|^p+\frac{|b|^p}{\varepsilon}\Big). \tag{10}
$$
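(For the reader's convenience we recall the standard verification of (10): by the Hölder inequality for sums with exponents $\frac{p}{p-1}$ and $p$,
$$
|a+b|\le|a|+|b|=1\cdot|a|+\varepsilon^{\frac1p}\cdot\big(\varepsilon^{-\frac1p}|b|\big)\le\big[1+\varepsilon^{\frac1{p-1}}\big]^{\frac{p-1}{p}}\Big(|a|^p+\frac{|b|^p}{\varepsilon}\Big)^{\frac1p},
$$
and raising both sides to the power $p$ gives (10).)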
Lemma 3.2 Under conditions (H1)-(H3), there exists a positive constant $c$ such that
$$
E\sup_{t_0\le t\le T}|x^n(t)|^p\le c, \tag{11}
$$
where $c=(c_3+c_4(T-t_0)E\|\xi\|^p)e^{c_4(T-t_0)}$, and $c_3$, $c_4$ are the two positive constants of (25).
Proof: For any $\varepsilon>0$, it follows from Lemma 3.1 that
$$
|x^n(t)|^p=|D(x^n_t)+x^n(t)-D(x^n_t)|^p\le\big[1+\varepsilon^{\frac1{p-1}}\big]^{p-1}\Big(|x^n(t)-D(x^n_t)|^p+\frac{|D(x^n_t)|^p}{\varepsilon}\Big). \tag{12}
$$
By (H1), one gets
$$
|x^n(t)|^p\le\big[1+\varepsilon^{\frac1{p-1}}\big]^{p-1}\Big(|x^n(t)-D(x^n_t)|^p+\frac{k_0^p\|x^n_t\|^p}{\varepsilon}\Big). \tag{13}
$$
Letting $\varepsilon=\big[\frac{k_0}{1-k_0}\big]^{p-1}$ and taking the expectation on both sides of (13), we have
$$
E\sup_{t_0\le s\le t}|x^n(s)|^p\le k_0E\sup_{t_0\le s\le t}\|x^n_s\|^p+\frac1{(1-k_0)^{p-1}}E\sup_{t_0\le s\le t}|x^n(s)-D(x^n_s)|^p. \tag{14}
$$
On the other hand, we have
$$
E\sup_{t_0\le s\le t}\|x^n_s\|^p\le E\sup_{t_0-\tau\le s\le t}|x^n(s)|^p\le E\|\xi\|^p+E\sup_{t_0\le s\le t}|x^n(s)|^p. \tag{15}
$$
Combining (14) and (15), we obtain
$$
E\sup_{t_0\le s\le t}|x^n(s)|^p\le\frac{k_0}{1-k_0}E\|\xi\|^p+\frac1{(1-k_0)^p}E\sup_{t_0\le s\le t}|x^n(s)-D(x^n_s)|^p. \tag{16}
$$
By (9) and the inequality $|a+b+c|^p\le 3^{p-1}[|a|^p+|b|^p+|c|^p]$, we have
$$
E\sup_{t_0\le s\le t}|x^n(s)-D(x^n_s)|^p\le 3^{p-1}\Big(E|\xi(0)-D(\xi)|^p+E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s f(x^{n-1}_\sigma,\sigma)\,d\sigma\Big|^p
+E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,N_{\bar p}(d\sigma,du)\Big|^p\Big):=3^{p-1}\sum_{i=1}^3 I_i. \tag{17}
$$
Let us estimate the terms introduced above. Letting $\varepsilon=k_0^{p-1}$, it follows from Lemma 3.1 that
$$
I_1\le(1+k_0)^pE\|\xi\|^p. \tag{18}
$$
By the Hölder inequality and the linear growth condition (7), one gets
$$
I_2\le(t-t_0)^{p-1}E\int_{t_0}^t|f(x^{n-1}_s,s)|^p\,ds\le(t-t_0)^{p-1}E\int_{t_0}^t\big[L_1(1+\|x^{n-1}_s\|^2)\big]^{\frac p2}ds
\le(t-t_0)^{p-1}(2L_1)^{\frac p2}E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds. \tag{19}
$$
For the third term $I_3$ in (17), we have
$$
I_3=E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\tilde N_{\bar p}(d\sigma,du)+\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\pi(du)d\sigma\Big|^p
$$
$$
\le 2^{p-1}E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\tilde N_{\bar p}(d\sigma,du)\Big|^p+2^{p-1}E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\pi(du)d\sigma\Big|^p, \tag{20}
$$
where $\tilde N_{\bar p}(dt,du):=N_{\bar p}(dt,du)-\pi(du)dt$. For the last term of (20), using the Hölder inequality and condition (7), we obtain
$$
E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\pi(du)d\sigma\Big|^p
\le E\Big[\int_{t_0}^t ds\Big]^{p-1}\int_{t_0}^t\Big|\int_U h(x^{n-1}_s,u)\,\pi(du)\Big|^p ds
$$
$$
\le(t-t_0)^{p-1}E\int_{t_0}^t\Big[\int_U\pi(du)\Big]^{\frac p2}\Big[\int_U|h(x^{n-1}_s,u)|^2\,\pi(du)\Big]^{\frac p2}ds
\le(t-t_0)^{p-1}[\pi(U)]^{\frac p2}E\int_{t_0}^t\big[L_1(1+\|x^{n-1}_s\|^2)\big]^{\frac p2}ds
$$
$$
\le(t-t_0)^{p-1}(2L_1)^{\frac p2}[\pi(U)]^{\frac p2}E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds. \tag{21}
$$
Now let us estimate the martingale part in (20). By Kunita's estimates (see Kunita [27] and Applebaum [28]), conditions (7)-(8) and the properties of the stochastic integral with respect to a Poisson random measure, there exists a positive real number $c_p$ such that
$$
E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U h(x^{n-1}_\sigma,u)\,\tilde N_{\bar p}(d\sigma,du)\Big|^p
\le c_p\Big\{E\int_{t_0}^t\Big[\int_U|h(x^{n-1}_s,u)|^2\,\pi(du)\Big]^{\frac p2}ds+E\int_{t_0}^t\int_U|h(x^{n-1}_s,u)|^p\,\pi(du)ds\Big\}
$$
$$
\le c_p\Big\{E\int_{t_0}^t\big[L_1(1+\|x^{n-1}_s\|^2)\big]^{\frac p2}ds+L_2E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds\Big\}
\le c_p\big[(2L_1)^{\frac p2}+L_2\big]E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds. \tag{22}
$$
Inserting (21) and (22) into (20), we obtain
$$
I_3\le c_1E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds, \tag{23}
$$
where $c_1=2^{p-1}\big[(2L_1)^{\frac p2}(t-t_0)^{p-1}(\pi(U))^{\frac p2}+c_p\big((2L_1)^{\frac p2}+L_2\big)\big]$. Therefore,
$$
E\sup_{t_0\le s\le t}|x^n(s)-D(x^n_s)|^p\le 3^{p-1}(1+k_0)^pE\|\xi\|^p+c_2E\int_{t_0}^t(1+\|x^{n-1}_s\|^p)\,ds, \tag{24}
$$
where $c_2=3^{p-1}\big[(t-t_0)^{p-1}(2L_1)^{\frac p2}+c_1\big]$. Combining (16) and (24), we have
$$
E\sup_{t_0\le s\le t}|x^n(s)|^p\le c_3+c_4E\int_{t_0}^t\|x^{n-1}_s\|^p\,ds, \tag{25}
$$
where $c_3=\big[\frac{k_0}{1-k_0}+\frac{3^{p-1}(1+k_0)^p}{(1-k_0)^p}\big]E\|\xi\|^p+\frac{c_2}{(1-k_0)^p}(T-t_0)$ and $c_4=\frac{c_2}{(1-k_0)^p}$.
For any $r\ge 1$,
$$
\max_{1\le n\le r}E\sup_{t_0\le s\le t}|x^n(s)|^p\le c_3+c_4\int_{t_0}^t\max_{1\le n\le r}E\sup_{t_0\le\sigma\le s}|x^{n-1}(\sigma)|^p\,ds
\le c_3+c_4\int_{t_0}^t\Big(E\|\xi\|^p+\max_{1\le n\le r}E\sup_{t_0\le\sigma\le s}|x^n(\sigma)|^p\Big)ds.
$$
From the Gronwall inequality, we derive that
$$
\max_{1\le n\le r}E\sup_{t_0\le t\le T}|x^n(t)|^p\le(c_3+c_4(T-t_0)E\|\xi\|^p)e^{c_4(T-t_0)}.
$$
Since $r$ is arbitrary, we must have
$$
E\sup_{t_0\le t\le T}|x^n(t)|^p\le(c_3+c_4(T-t_0)E\|\xi\|^p)e^{c_4(T-t_0)}, \tag{26}
$$
which shows that the desired result holds with $c=(c_3+c_4(T-t_0)E\|\xi\|^p)e^{c_4(T-t_0)}$.
Lemma 3.3 Let the conditions of Theorem 3.1 hold. Then $\{x^n(t)\}$ ($n\ge 0$) defined by (9) is a Cauchy sequence in $D([t_0,T];\mathbb{R}^n)$.
Proof: For $n\ge 1$ and $t\in[t_0,T]$, it follows from (9) that
$$
x^{n+1}(t)-x^n(t)=D(x^{n+1}_t)-D(x^n_t)+\int_{t_0}^t[f(x^n_s,s)-f(x^{n-1}_s,s)]\,ds
+\int_{t_0}^t\int_U[h(x^n_s,u)-h(x^{n-1}_s,u)]\,N_{\bar p}(ds,du). \tag{27}
$$
Similarly to the analysis of (14), applying Lemma 3.1 and taking the expectation of $\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p$, we have
$$
E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p\Big)\le k_0E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p\Big)
+\frac1{(1-k_0)^{p-1}}E\sup_{t_0\le s\le t}\big|[x^{n+1}(s)-x^n(s)]-[D(x^{n+1}_s)-D(x^n_s)]\big|^p. \tag{28}
$$
Consequently,
$$
E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p\Big)\le\frac1{(1-k_0)^p}E\sup_{t_0\le s\le t}\big|[x^{n+1}(s)-x^n(s)]-[D(x^{n+1}_s)-D(x^n_s)]\big|^p. \tag{29}
$$
The elementary inequality $|a+b|^p\le 2^{p-1}(|a|^p+|b|^p)$ implies that
$$
E\sup_{t_0\le s\le t}\big|[x^{n+1}(s)-x^n(s)]-[D(x^{n+1}_s)-D(x^n_s)]\big|^p
\le 2^{p-1}\Big[E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s[f(x^n_\sigma,\sigma)-f(x^{n-1}_\sigma,\sigma)]\,d\sigma\Big|^p
+E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U[h(x^n_\sigma,u)-h(x^{n-1}_\sigma,u)]\,N_{\bar p}(d\sigma,du)\Big|^p\Big]. \tag{30}
$$
Applying the Hölder inequality and (H2), we obtain
$$
E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s[f(x^n_\sigma,\sigma)-f(x^{n-1}_\sigma,\sigma)]\,d\sigma\Big|^p
\le(t-t_0)^{p-1}E\int_{t_0}^t|f(x^n_s,s)-f(x^{n-1}_s,s)|^p\,ds
\le(t-t_0)^{p-1}k^{\frac p2}E\int_{t_0}^t\|x^n_s-x^{n-1}_s\|^p\,ds. \tag{31}
$$
By Kunita's estimates, the Hölder inequality and (H2)-(H3), there exists a positive constant $c_5$ such that
$$
E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U[h(x^n_\sigma,u)-h(x^{n-1}_\sigma,u)]\,N_{\bar p}(d\sigma,du)\Big|^p
\le 2^{p-1}E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U[h(x^n_\sigma,u)-h(x^{n-1}_\sigma,u)]\,\tilde N_{\bar p}(d\sigma,du)\Big|^p
+2^{p-1}E\sup_{t_0\le s\le t}\Big|\int_{t_0}^s\int_U[h(x^n_\sigma,u)-h(x^{n-1}_\sigma,u)]\,\pi(du)d\sigma\Big|^p
$$
$$
\le 2^{p-1}(t-t_0)^{p-1}(\pi(U))^{\frac p2}E\int_{t_0}^t\Big[\int_U|h(x^n_s,u)-h(x^{n-1}_s,u)|^2\,\pi(du)\Big]^{\frac p2}ds
+c_p2^{p-1}\Big\{E\int_{t_0}^t\Big[\int_U|h(x^n_s,u)-h(x^{n-1}_s,u)|^2\,\pi(du)\Big]^{\frac p2}ds
+E\int_{t_0}^t\int_U|h(x^n_s,u)-h(x^{n-1}_s,u)|^p\,\pi(du)ds\Big\}
$$
$$
\le c_5E\int_{t_0}^t\|x^n_s-x^{n-1}_s\|^p\,ds, \tag{32}
$$
where $c_5=2^{p-1}\big\{[(t-t_0)^{p-1}(\pi(U))^{\frac p2}+c_p]k^{\frac p2}+c_pL\int_U|u|^p\,\pi(du)\big\}$. Hence, inserting (30)-(32) into (29) yields
$$
E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p\Big)\le c_6\int_{t_0}^t E\Big(\sup_{t_0\le\sigma\le s}|x^n(\sigma)-x^{n-1}(\sigma)|^p\Big)ds, \tag{33}
$$
where $c_6=2^{p-1}\frac1{(1-k_0)^p}\big[k^{\frac p2}(T-t_0)^{p-1}+c_5\big]$.
Setting $\varphi_n(t)=E\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p$, we have
$$
\varphi_n(t)\le c_6\int_{t_0}^t\varphi_{n-1}(s_1)\,ds_1\le c_6^2\int_{t_0}^t ds_1\int_{t_0}^{s_1}\varphi_{n-2}(s_2)\,ds_2\le\cdots
\le c_6^n\int_{t_0}^t ds_1\int_{t_0}^{s_1}ds_2\cdots\int_{t_0}^{s_{n-1}}\varphi_0(s_n)\,ds_n. \tag{34}
$$
From Kunita's estimates, the Hölder inequality and conditions (7)-(8), we have
$$
\varphi_0(t)=E\sup_{t_0\le s\le t}|x^1(s)-x^0(s)|^p\le c_0. \tag{35}
$$
Substituting (35) into (34) and integrating the right-hand side, we obtain
$$
E\Big(\sup_{t_0\le s\le t}|x^{n+1}(s)-x^n(s)|^p\Big)\le c_0\frac{(c_6(t-t_0))^n}{n!}. \tag{36}
$$
Taking $t=T$ in (36), we have
$$
E\Big(\sup_{t_0\le t\le T}|x^{n+1}(t)-x^n(t)|^p\Big)\le c_0\frac{(c_6(T-t_0))^n}{n!}. \tag{37}
$$
Then, using the Chebyshev inequality, one gets
$$
P\Big(\sup_{t_0\le t\le T}|x^{n+1}(t)-x^n(t)|>\frac1{2^n}\Big)\le c_0\frac{(2^pc_6(T-t_0))^n}{n!}.
$$
Since $\sum_{n=0}^\infty c_0\frac{(2^pc_6(T-t_0))^n}{n!}<\infty$, the Borel-Cantelli lemma yields
$$
P\Big(\sup_{t_0\le t\le T}|x^{n+1}(t)-x^n(t)|\le\frac1{2^n}\ \text{for all sufficiently large}\ n\Big)=1. \tag{38}
$$
(38) implies that, almost surely, $\{x^n(t)\}_{n=1,2,\dots}$ is a Cauchy sequence on $[t_0,T]$ under $\sup|\cdot|$. In order to work in a complete separable metric space of paths, we equip $D([t_0,T];\mathbb{R}^n)$ with the metric of P. Billingsley [29]: for any $x,y\in D([t_0,T];\mathbb{R}^n)$,
$$
d(x,y)=\inf_{\lambda\in\Lambda}\Big\{\sup_{t_0\le t\le T}|x(t)-y(\lambda(t))|+\sup_{t_0\le s<t\le T}\Big|\log\frac{\lambda(t)-\lambda(s)}{t-s}\Big|\Big\},
$$
where $\Lambda=\{\lambda=\lambda(t):\lambda\ \text{is strictly increasing and continuous on}\ [t_0,T],\ \lambda(t_0)=t_0,\ \lambda(T)=T\}$. Under $d$, $D([t_0,T];\mathbb{R}^n)$ is a complete metric space. Taking $\lambda(t)=t$ shows that $d(x,y)\le\sup_{t_0\le t\le T}|x(t)-y(t)|$, so $\{x^n(t)\}_{n\ge 1}$ is also a Cauchy sequence under $d(\cdot,\cdot)$. The proof is complete.
Proof of Theorem 3.1 Uniqueness. Let $x(t)$ and $y(t)$ be two solutions of equation (1). Then, for $t\in[t_0,T]$, by Kunita's estimates and the Hölder inequality, we have
$$
E\sup_{t_0\le s\le t}|x(s)-y(s)|^p\le c\int_{t_0}^t E\sup_{t_0\le u\le s}|x(u)-y(u)|^p\,ds. \tag{39}
$$
Therefore, using the Gronwall inequality, we get
$$
E\sup_{t_0\le s\le t}|x(s)-y(s)|^p=0,\qquad t\in[t_0,T],
$$
which implies that $x(t)=y(t)$ a.s. for all $t\in[t_0,T]$.
Existence. We derive from Lemma 3.3 that $\{x^n(t)\}_{n=1,2,\dots}$ is a Cauchy sequence in $D([t_0,T];\mathbb{R}^n)$. Hence, there exists a process $x(t)\in D([t_0,T];\mathbb{R}^n)$ such that $d(x^n(\cdot),x(\cdot))\to 0$ as $n\to\infty$. Letting $n\to\infty$ on both sides of (9), for all $t\in[t_0,T]$, we can then show that $x(t)$ is a solution of equation (1). The proof of Theorem 3.1 is complete.
Next, we relax the Lipschitz conditions (H2)-(H3) and replace them by the following local Lipschitz conditions.
(H4) For all $\varphi,\psi\in D([-\tau,0];\mathbb{R}^n)$, $t\in[t_0,T]$, $u\in U$ and $\|\varphi\|\vee\|\psi\|\le n$, there exist two positive constants $k_n$ and $L_0$ such that
$$
|f(\varphi,t)-f(\psi,t)|^2\vee\int_U|h(\varphi,u)-h(\psi,u)|^2\,\pi(du)\le k_n\|\varphi-\psi\|^2. \tag{40}
$$
(H5) For all $\varphi,\psi\in D([-\tau,0];\mathbb{R}^n)$, $p\ge 2$, $u\in U$ and $\|\varphi\|\vee\|\psi\|\le n$, there exists a positive constant $L_n$ such that
$$
|h(\varphi,u)-h(\psi,u)|^p\le L_n\|\varphi-\psi\|^p|u|^p, \tag{41}
$$
where $\pi(U)<\infty$ and $\int_U|u|^p\,\pi(du)<\infty$.
Then, Theorem 3.1 can be generalized as Theorem 3.2.
Theorem 3.2 Let conditions (H1), (H4) and (H5) hold. Then equation (1) has a unique solution $x(t)$ on $[t_0,T]$ in the sense of the $L^p$-norm. Moreover, there exists a constant $c$ such that
$$
E\sup_{t_0\le t\le T}|x(t)|^p\le c.
$$
Proof: For each $n\ge 1$, define the truncation functions
$$
f_n(\varphi,t)=\begin{cases} f(\varphi,t), & \text{if}\ \|\varphi\|\le n,\\ f\big(\tfrac{n\varphi}{\|\varphi\|},t\big), & \text{if}\ \|\varphi\|>n,\end{cases} \tag{42}
$$
and
$$
h_n(\varphi,u)=\begin{cases} h(\varphi,u), & \text{if}\ \|\varphi\|\le n,\\ h\big(\tfrac{n\varphi}{\|\varphi\|},u\big), & \text{if}\ \|\varphi\|>n.\end{cases} \tag{43}
$$
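As an informal illustration of this localisation, a direct translation of (42)-(43) into Python for scalar path segments sampled on a grid might read as follows; the coefficient functions below are hypothetical placeholders, not the coefficients of any specific model in this paper.

```python
import numpy as np

def truncate_coefficient(coeff, n):
    """Return the truncation coeff_n: coeff_n(phi, ...) = coeff(phi, ...) when ||phi|| <= n,
    and coeff_n(phi, ...) = coeff(n*phi/||phi||, ...) otherwise, cf. (42)-(43)."""
    def coeff_n(phi, *args):
        norm = np.max(np.abs(phi))              # sup norm of the (scalar) segment phi
        if norm <= n:
            return coeff(phi, *args)
        return coeff(n * phi / norm, *args)
    return coeff_n

# Hypothetical linear coefficients, used only to exercise the truncation.
f = lambda phi, t: -phi[-1] + 0.2 * phi[0]      # f(phi, t) = -phi(0) + 0.2*phi(-tau)
h = lambda phi, u: 0.3 * phi[0] * u             # h(phi, u) = 0.3*phi(-tau)*u
f_2, h_2 = truncate_coefficient(f, 2.0), truncate_coefficient(h, 2.0)
print(f_2(np.array([5.0, 5.0]), 0.0))           # evaluated at the rescaled segment
```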
Then $f_n$ and $h_n$ satisfy conditions (H2) and (H3). From Theorem 3.1, the equation
$$
x_n(t)=\xi(0)+D((x_n)_t)-D(\xi)+\int_{t_0}^t f_n((x_n)_s,s)\,ds+\int_{t_0}^t\int_U h_n((x_n)_s,u)\,N_{\bar p}(ds,du) \tag{44}
$$
has a unique solution $x_n(t)$. Moreover, $x_{n+1}(t)$ is the unique solution of the equation
$$
x_{n+1}(t)=\xi(0)+D((x_{n+1})_t)-D(\xi)+\int_{t_0}^t f_{n+1}((x_{n+1})_s,s)\,ds+\int_{t_0}^t\int_U h_{n+1}((x_{n+1})_s,u)\,N_{\bar p}(ds,du). \tag{45}
$$
By (44) and (45), we have
$$
x_{n+1}(t)-x_n(t)=D((x_{n+1})_t)-D((x_n)_t)+\int_{t_0}^t[f_{n+1}((x_{n+1})_s,s)-f_n((x_n)_s,s)]\,ds
+\int_{t_0}^t\int_U[h_{n+1}((x_{n+1})_s,u)-h_n((x_n)_s,u)]\,N_{\bar p}(ds,du). \tag{46}
$$
For any fixed $n\ge 1$, define the stopping time
$$
\tau_n=T\wedge\inf\{t\in[t_0,T]:\|(x_n)_t\|\ge n\}.
$$
Taking the expectation of $\sup|x_{n+1}(s)-x_n(s)|^p$ and applying Lemma 3.1, we deduce that
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p\le\frac1{(1-k_0)^{p-1}}E\sup_{t_0\le s\le t\wedge\tau_n}\big|[x_{n+1}(s)-x_n(s)]-[D((x_{n+1})_s)-D((x_n)_s)]\big|^p
+k_0E\Big(\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p\Big). \tag{47}
$$
Therefore,
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p
\le\frac1{(1-k_0)^p}2^{p-1}\Big\{E\Big(\sup_{t_0\le s\le t\wedge\tau_n}\Big|\int_{t_0}^s[f_{n+1}((x_{n+1})_\sigma,\sigma)-f_n((x_n)_\sigma,\sigma)]\,d\sigma\Big|^p\Big)
$$
$$
+E\Big(\sup_{t_0\le s\le t\wedge\tau_n}\Big|\int_{t_0}^s\int_U[h_{n+1}((x_{n+1})_\sigma,u)-h_n((x_n)_\sigma,u)]\,N_{\bar p}(d\sigma,du)\Big|^p\Big)\Big\}
=:\frac1{(1-k_0)^p}2^{p-1}(J_1+J_2). \tag{48}
$$
By the Hölder inequality and adding and subtracting terms on the right-hand side, we have
$$
J_1\le(t\wedge\tau_n-t_0)^{p-1}E\int_{t_0}^{t\wedge\tau_n}|f_{n+1}((x_{n+1})_s,s)-f_n((x_n)_s,s)|^p\,ds
$$
$$
\le(t\wedge\tau_n-t_0)^{p-1}E\int_{t_0}^{t\wedge\tau_n}\big\{2^{p-1}|f_{n+1}((x_{n+1})_s,s)-f_{n+1}((x_n)_s,s)|^p+2^{p-1}|f_{n+1}((x_n)_s,s)-f_n((x_n)_s,s)|^p\big\}ds. \tag{49}
$$
Kunita's estimates imply that
$$
J_2\le c_7E\int_{t_0}^{t\wedge\tau_n}\Big[\int_U|h_{n+1}((x_{n+1})_s,u)-h_n((x_n)_s,u)|^2\,\pi(du)\Big]^{\frac p2}ds
+c_p2^{p-1}E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_{n+1})_s,u)-h_n((x_n)_s,u)|^p\,\pi(du)ds
$$
$$
\le c_7E\int_{t_0}^{t\wedge\tau_n}\Big\{\int_U\big[2|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^2+2|h_{n+1}((x_n)_s,u)-h_n((x_n)_s,u)|^2\big]\pi(du)\Big\}^{\frac p2}ds
$$
$$
+c_p2^{2p-2}\Big\{E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^p\,\pi(du)ds
+E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_n)_s,u)-h_n((x_n)_s,u)|^p\,\pi(du)ds\Big\}, \tag{50}
$$
where $c_7=2^{p-1}\big[(t\wedge\tau_n-t_0)^{p-1}(\pi(U))^{\frac p2}+c_p\big]$. Combining (48)-(50), it follows that
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p
\le\frac1{(1-k_0)^p}2^{p-1}(T-t_0)^{p-1}E\int_{t_0}^{t\wedge\tau_n}\big\{2^{p-1}|f_{n+1}((x_{n+1})_s,s)-f_{n+1}((x_n)_s,s)|^p+2^{p-1}|f_{n+1}((x_n)_s,s)-f_n((x_n)_s,s)|^p\big\}ds
$$
$$
+\frac1{(1-k_0)^p}2^{p-1}\Big\{c_7E\int_{t_0}^{t\wedge\tau_n}\Big\{\int_U\big[2|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^2+2|h_{n+1}((x_n)_s,u)-h_n((x_n)_s,u)|^2\big]\pi(du)\Big\}^{\frac p2}ds
$$
$$
+c_p2^{2p-2}\Big\{E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^p\,\pi(du)ds
+E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_n)_s,u)-h_n((x_n)_s,u)|^p\,\pi(du)ds\Big\}\Big\}. \tag{51}
$$
For $t_0\le t\le\tau_n$, we have
$$
f_{n+1}((x_n)_t,t)=f_n((x_n)_t,t)=f((x_n)_t,t),\qquad h_{n+1}((x_n)_t,u)=h_n((x_n)_t,u)=h((x_n)_t,u). \tag{52}
$$
By (52), we get from (51) that
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p
\le\frac1{(1-k_0)^p}2^{2p-2}(T-t_0)^{p-1}E\int_{t_0}^{t\wedge\tau_n}|f_{n+1}((x_{n+1})_s,s)-f_{n+1}((x_n)_s,s)|^p\,ds
$$
$$
+\frac1{(1-k_0)^p}2^{p-1}\Big\{c_7E\int_{t_0}^{t\wedge\tau_n}\Big\{\int_U2|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^2\,\pi(du)\Big\}^{\frac p2}ds
$$
$$
+c_p2^{2p-2}E\int_{t_0}^{t\wedge\tau_n}\int_U|h_{n+1}((x_{n+1})_s,u)-h_{n+1}((x_n)_s,u)|^p\,\pi(du)ds\Big\}. \tag{53}
$$
By the local Lipschitz conditions (H4) and (H5), we have
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p\le c_8E\int_{t_0}^{t\wedge\tau_n}\|(x_{n+1})_s-(x_n)_s\|^p\,ds
\le c_8\int_{t_0}^t E\sup_{t_0\le\sigma\le s\wedge\tau_n}|x_{n+1}(\sigma)-x_n(\sigma)|^p\,ds, \tag{54}
$$
where $c_8=\frac1{(1-k_0)^p}\big\{k_{n+1}^{\frac p2}\big[2^{2p-2}(T-t_0)^{p-1}+2^{\frac{3p}2-2}c_7\big]+c_p2^{3p-3}L_{n+1}\int_U|u|^p\,\pi(du)\big\}$. From (54) and the Gronwall inequality, we get
$$
E\sup_{t_0\le s\le t\wedge\tau_n}|x_{n+1}(s)-x_n(s)|^p=0, \tag{55}
$$
which yields
$$
x_{n+1}(t)=x_n(t)\quad\text{for}\ t\in[t_0,\tau_n]. \tag{56}
$$
It follows that $\tau_n$ is nondecreasing, and $\tau_n\uparrow T$ a.s. as $n\to\infty$. By the linear growth conditions (7) and (8), for almost all $\omega\in\Omega$ there exists an integer $n_0=n_0(\omega)$ such that $\tau_n=T$ for $n\ge n_0$. Now define $x(t)$ by $x(t)=x_{n_0}(t)$ for $t\in[t_0,T]$. We next verify that $x(t)$ is a solution of equation (1). By (56), $x(t\wedge\tau_n)=x_n(t\wedge\tau_n)$, and it follows from (44) that
$$
x(t\wedge\tau_n)-D(x_{t\wedge\tau_n})=\xi(0)-D(\xi)+\int_{t_0}^{t\wedge\tau_n}f_n(x_s,s)\,ds+\int_{t_0}^{t\wedge\tau_n}\int_U h_n(x_s,u)\,N_{\bar p}(ds,du)
$$
$$
=\xi(0)-D(\xi)+\int_{t_0}^{t\wedge\tau_n}f(x_s,s)\,ds+\int_{t_0}^{t\wedge\tau_n}\int_U h(x_s,u)\,N_{\bar p}(ds,du). \tag{57}
$$
Letting $n\to\infty$ on both sides of (57), we obtain
$$
x(t\wedge T)-D(x_{t\wedge T})=\xi(0)-D(\xi)+\int_{t_0}^{t\wedge T}f(x_s,s)\,ds+\int_{t_0}^{t\wedge T}\int_U h(x_s,u)\,N_{\bar p}(ds,du),
$$
that is,
$$
x(t)-D(x_t)=\xi(0)-D(\xi)+\int_{t_0}^t f(x_s,s)\,ds+\int_{t_0}^t\int_U h(x_s,u)\,N_{\bar p}(ds,du),
$$
which implies that $x(t)$ is a solution of equation (1). By stopping the process $x(t)$, uniqueness of the solution to equation (1) is obtained. Moreover, by the proof of Theorem 3.1, we easily obtain that $E\sup_{t_0\le t\le T}|x(t)|^p\le c$. The proof is complete.
4. Asymptotic estimates for solutions
In this section, we will give the exponential estimate of the solution to
equation (1).
By the definition $\tilde N_{\bar p}(dt,du):=N_{\bar p}(dt,du)-\pi(du)dt$, we can rewrite equation (1) as
$$
d[x(t)-D(x_t)]=F(t,x_t)\,dt+\int_U h(x_t,u)\,\tilde N_{\bar p}(dt,du), \tag{58}
$$
where $F(t,x_t)=f(x_t,t)+\int_U h(x_t,u)\,\pi(du)$.
Let $C^{2,1}(\mathbb{R}^n\times[t_0-\tau,T);\mathbb{R}_+)$ denote the family of all nonnegative functions $V(x,t)$ on $\mathbb{R}^n\times[t_0-\tau,T)$ which are continuously twice differentiable with respect to $x$ and once with respect to $t$. For a $V\in C^{2,1}(\mathbb{R}^n\times[t_0-\tau,T);\mathbb{R}_+)$, one can define the Kolmogorov operator $LV$ as follows:
$$
LV(x,y,t)\equiv V_t(x-D(y),t)+V_x(x-D(y),t)F(t,y)
+\int_U\big[V(x-D(y)+h(y,u),t)-V(x-D(y),t)-V_x(x-D(y),t)h(y,u)\big]\pi(du), \tag{59}
$$
where
$$
V_t(x,t)=\frac{\partial V(x,t)}{\partial t},\qquad V_x(x,t)=\Big(\frac{\partial V(x,t)}{\partial x_1},\cdots,\frac{\partial V(x,t)}{\partial x_n}\Big).
$$
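As an illustrative computation (not used in the sequel), taking $V(x,t)=|x|^2$ gives $V_t=0$ and $V_x(x,t)=2x^\top$, so (59) reduces to
$$
LV(x,y,t)=2(x-D(y))^\top F(t,y)+\int_U|h(y,u)|^2\,\pi(du),
$$
since $|x-D(y)+h(y,u)|^2-|x-D(y)|^2-2(x-D(y))^\top h(y,u)=|h(y,u)|^2$.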
First, we establish the $p$th moment exponential estimate of the solution to equation (1).
Theorem 4.1 Let $\{x(t),\,t_0\le t\le T\}$ be a solution of equation (1) whose coefficients satisfy conditions (H1) and (H2), and assume that for a given integer $p\ge 2$ and any $2\le q\le p$ there exists a positive constant $K$ such that
$$
\int_U|h(\varphi,u)|^q\,\pi(du)\le K\|\varphi\|^q. \tag{60}
$$
Then, for any $t_0\le t\le T$,
$$
E\sup_{t_0-\tau\le s\le t}|x(s)|^p\le[1+(1+c_{12})E\|\xi\|^p]e^{M(t-t_0)}, \tag{61}
$$
where $M=\frac{2(c_9+c_{10}+c_{11})}{(1-k_0)^p}$, and $c_9$, $c_{10}$, $c_{11}$, $c_{12}$ are the positive constants of (68), (74), (80) and (82).
Proof: Let $V(x(t)-D(x_t),t)=1+|x(t)-D(x_t)|^p$; then $V_t(x(t)-D(x_t),t)=0$. Applying the Itô formula to $V(x(t)-D(x_t),t)$, we obtain
$$
V(x(t)-D(x_t),t)=V(x(t_0)-D(x_{t_0}),t_0)+\int_{t_0}^t LV(x(s),x_s,s)\,ds
+\int_{t_0}^t\int_U\big[V(x(s)-D(x_s)+h(x_s,u),s)-V(x(s)-D(x_s),s)\big]\tilde N_{\bar p}(ds,du). \tag{62}
$$
By (59), we have
$$
1+|x(t)-D(x_t)|^p=1+|x(t_0)-D(x_{t_0})|^p+p\int_{t_0}^t|x(s)-D(x_s)|^{p-2}[x(s)-D(x_s)]^\top F(s,x_s)\,ds
$$
$$
+\int_{t_0}^t\int_U\big\{(1+|x(s)-D(x_s)+h(x_s,u)|^p)-(1+|x(s)-D(x_s)|^p)
-p|x(s)-D(x_s)|^{p-2}[x(s)-D(x_s)]^\top h(x_s,u)\big\}\pi(du)ds
$$
$$
+\int_{t_0}^t\int_U\big\{(1+|x(s)-D(x_s)+h(x_s,u)|^p)-(1+|x(s)-D(x_s)|^p)\big\}\tilde N_{\bar p}(ds,du). \tag{63}
$$
Taking the expectation on both sides of (63), one gets
$$
E\sup_{t_0\le s\le t}(1+|x(s)-D(x_s)|^p)
\le 1+E|\xi(0)-D(\xi)|^p+pE\int_{t_0}^t|x(s)-D(x_s)|^{p-1}|F(s,x_s)|\,ds
$$
$$
+pE\sup_{t_0\le s\le t}\int_{t_0}^s\int_U|x(\sigma)-D(x_\sigma)|^{p-2}[x(\sigma)-D(x_\sigma)]^\top h(x_\sigma,u)\,\tilde N_{\bar p}(d\sigma,du)
$$
$$
+E\sup_{t_0\le s\le t}\int_{t_0}^s\int_U\big\{|x(\sigma)-D(x_\sigma)+h(x_\sigma,u)|^p-|x(\sigma)-D(x_\sigma)|^p
-p|x(\sigma)-D(x_\sigma)|^{p-2}[x(\sigma)-D(x_\sigma)]^\top h(x_\sigma,u)\big\}N_{\bar p}(d\sigma,du)
$$
$$
\le 1+(1+k_0)^pE\|\xi\|^p+\sum_{i=1}^3 Q_i, \tag{64}
$$
where
$$
Q_1=pE\int_{t_0}^t|x(s)-D(x_s)|^{p-1}|F(s,x_s)|\,ds,
$$
$$
Q_2=pE\sup_{t_0\le s\le t}\int_{t_0}^s\int_U|x(\sigma)-D(x_\sigma)|^{p-2}[x(\sigma)-D(x_\sigma)]^\top h(x_\sigma,u)\,\tilde N_{\bar p}(d\sigma,du),
$$
$$
Q_3=E\sup_{t_0\le s\le t}\int_{t_0}^s\int_U\big\{|x(\sigma)-D(x_\sigma)+h(x_\sigma,u)|^p-|x(\sigma)-D(x_\sigma)|^p
-p|x(\sigma)-D(x_\sigma)|^{p-2}[x(\sigma)-D(x_\sigma)]^\top h(x_\sigma,u)\big\}N_{\bar p}(d\sigma,du).
$$
Let us estimate $Q_1$. By the basic inequality
$$
a^rb^{1-r}\le ra+(1-r)b,\qquad r\in[0,1],
$$
we derive that
$$
a^{p-1}b\le\frac{\varepsilon_1(p-1)}{p}a^p+\frac1{p\varepsilon_1^{p-1}}b^p,
$$
where $a,b,\varepsilon_1>0$. Hence,
$$
Q_1\le pE\int_{t_0}^t\Big[\frac{\varepsilon_1(p-1)}{p}|x(s)-D(x_s)|^p+\frac1{p\varepsilon_1^{p-1}}|F(s,x_s)|^p\Big]ds
\le pE\int_{t_0}^t\Big[\frac{\varepsilon_1(p-1)}{p}(1+k_0)^p\|x_s\|^p+\frac1{p\varepsilon_1^{p-1}}|F(s,x_s)|^p\Big]ds. \tag{65}
$$
By Lemma 3.1, we have
$$
E\int_{t_0}^t|F(s,x_s)|^p\,ds\le E\int_{t_0}^t\big[1+\varepsilon^{\frac1{p-1}}\big]^{p-1}\Big[\Big|\int_U h(x_s,u)\,\pi(du)\Big|^p+\frac{|f(x_s,s)|^p}{\varepsilon}\Big]ds
\le(2L)^{\frac p2}E\int_{t_0}^t\big[1+\varepsilon^{\frac1{p-1}}\big]^{p-1}\Big[\Big((\pi(U))^{\frac p2}+\frac1{\varepsilon}\Big)(1+\|x_s\|^p)\Big]ds. \tag{66}
$$
Letting $\varepsilon=(2L)^{p-1}$, we then get
$$
E\int_{t_0}^t|F(s,x_s)|^p\,ds\le(1+2L)^{\frac{3p}2-1}(\pi(U))^{\frac p2}E\int_{t_0}^t(1+\|x_s\|^p)\,ds. \tag{67}
$$
Inserting (67) into (65) and letting $\varepsilon_1=\frac{1+2L}{1+k_0}$, we obtain
$$
Q_1\le pE\int_{t_0}^t\Big[\frac{\varepsilon_1(p-1)(1+k_0)^p}{p}\|x_s\|^p+\frac{(1+2L)^{\frac{3p}2-1}(\pi(U))^{\frac p2}}{p\varepsilon_1^{p-1}}(1+\|x_s\|^p)\Big]ds
\le c_9E\int_{t_0}^t(1+\|x_s\|^p)\,ds, \tag{68}
$$
where $c_9=(1+2L)^p(1+k_0)^{p-1}[p+(\pi(U))^{\frac p2}]$. We now estimate $Q_2$.
By using the Burkholder-Davis inequality, there exists a positive constant $\tilde c_p$ such that
$$
Q_2\le p\tilde c_pE\Big[\int_{t_0}^t\int_U|x(s)-D(x_s)|^{2p-2}|h(x_s,u)|^2\,\pi(du)ds\Big]^{\frac12}
\le p\tilde c_pE\Big[\sup_{t_0\le s\le t}|x(s)-D(x_s)|^p\Big(\int_{t_0}^t\int_U|x(s)-D(x_s)|^{p-2}|h(x_s,u)|^2\,\pi(du)ds\Big)\Big]^{\frac12}. \tag{69}
$$
Further, for any $\varepsilon>0$, the Young inequality implies that
$$
Q_2\le p\tilde c_p\Big[\varepsilon E\sup_{t_0\le s\le t}|x(s)-D(x_s)|^p\Big]^{\frac12}\Big[\frac1{\varepsilon}E\Big(\int_{t_0}^t\int_U|x(s)-D(x_s)|^{p-2}|h(x_s,u)|^2\,\pi(du)ds\Big)\Big]^{\frac12}
$$
$$
\le\frac{p\tilde c_p\varepsilon}{2}E\sup_{t_0\le s\le t}|x(s)-D(x_s)|^p+\frac{p\tilde c_p}{2\varepsilon}E\int_{t_0}^t\int_U|x(s)-D(x_s)|^{p-2}|h(x_s,u)|^2\,\pi(du)ds. \tag{70}
$$
Letting $\varepsilon=\frac1{p\tilde c_p}$, we obtain
$$
Q_2\le\frac12E\sup_{t_0\le s\le t}|x(s)-D(x_s)|^p+\frac12p^2\tilde c_p^2E\int_{t_0}^t\int_U|x(s)-D(x_s)|^{p-2}|h(x_s,u)|^2\,\pi(du)ds. \tag{71}
$$
ap−2 b2 ≤
ε2 (p − 2) p
1
a + p−2 bp ,
p
pε2 2
a, b, ε2 > 0,
and condition (60), we have
∫ t∫
E
|x(s) − D(xs )|p−2 |h(xs , u)|2 π(du)ds
t0 U
∫ t∫
(p − 2)ε2
≤
E
|x(s) − D(xs )|p π(du)ds
p
t
U
∫ t ∫0
2
+ p−2 E
|h(xs , u)|p π(du)ds]
0
U
pε2 2
∫ t∫
(p − 2)ε2 (1 + k0 )p
]E
≤ [
||xs ||p π(du)ds
p
t0 U
∫ t
2
||xs ||p ds]
+ p−2 KE
2
0
pε2
Letting ε2 =
(72)
1
,
(1+k0 )2
∫ t∫
|x(s) − D(xs )|p−2 |h(xs , u)|2 π(du)ds
t0 U
∫ t
2K
(p − 2)
p−2
≤ [
π(U ) +
](1 + k0 ) E
||xs ||p ds.
p
p
t0
E
Inserting (73) into (71), we obtain
∫ t
1
Q2 ≤ c10 E
||xs ||p ds + E sup |x(s) − D(xs )|p ,
2 t0 ≤s≤t
t0
22
(73)
(74)
where $c_{10}=\frac12p\tilde c_p^2[(p-2)\pi(U)+2K](1+k_0)^{p-2}$. Finally, let us estimate $Q_3$. Since $N_{\bar p}(dt,du)=\tilde N_{\bar p}(dt,du)+\pi(du)dt$ and $\tilde N_{\bar p}(dt,du)$ is a martingale measure, we get
$$
Q_3=E\int_{t_0}^t\int_U\big\{|x(s)-D(x_s)+h(x_s,u)|^p-|x(s)-D(x_s)|^p
-p|x(s)-D(x_s)|^{p-2}[x(s)-D(x_s)]^\top h(x_s,u)\big\}\pi(du)ds. \tag{75}
$$
We note that it has the form
$$
E\int_{t_0}^t\int_U\big\{f(x(s)-D(x_s)+h(x_s,u))-f(x(s)-D(x_s))-f'(x(s)-D(x_s))h(x_s,u)\big\}\pi(du)ds, \tag{76}
$$
where $f(x)=|x|^p$. Using the Taylor formula, there exists a positive constant $M_p$ such that, for $p\ge 2$,
$$
f(x(s)-D(x_s)+h(x_s,u))-f(x(s)-D(x_s))-f'(x(s)-D(x_s))h(x_s,u)
$$
$$
=|x(s)-D(x_s)+h(x_s,u)|^p-|x(s)-D(x_s)|^p-p|x(s)-D(x_s)|^{p-2}[x(s)-D(x_s)]^\top h(x_s,u)
\le M_p\big[|x(s)-D(x_s)+h(x_s,u)|^{p-2}|h(x_s,u)|^2\big]. \tag{77}
$$
Again, the basic inequality $|a+b|^{p-2}\le 2^{p-3}(|a|^{p-2}+|b|^{p-2})$ and the Young inequality imply that
$$
Q_3\le M_pE\int_{t_0}^t\int_U\big[|x(s)-D(x_s)+h(x_s,u)|^{p-2}|h(x_s,u)|^2\big]\pi(du)ds
\le M_p2^{p-3}E\int_{t_0}^t\int_U\big[(|x(s)-D(x_s)|^{p-2}+|h(x_s,u)|^{p-2})|h(x_s,u)|^2\big]\pi(du)ds
$$
$$
\le M_p2^{p-3}\frac{p-2}{p}(1+k_0)^{p-2}\pi(U)E\int_{t_0}^t\|x_s\|^p\,ds
+M_p2^{p-3}\Big(1+\frac{2(1+k_0)^{p-2}}{p}\Big)E\int_{t_0}^t\int_U|h(x_s,u)|^p\,\pi(du)ds. \tag{78}
$$
By (60), we have
$$
E\int_{t_0}^t\int_U|h(x_s,u)|^p\,\pi(du)ds\le KE\int_{t_0}^t\|x_s\|^p\,ds. \tag{79}
$$
Substituting (79) into (78),
$$
Q_3\le c_{11}\int_{t_0}^t E\|x_s\|^p\,ds, \tag{80}
$$
where $c_{11}=M_p2^{p-3}\big[\frac{p-2}{p}(1+k_0)^{p-2}\pi(U)+\big(1+\frac{2(1+k_0)^{p-2}}{p}\big)K\big]$. Combining (64), (68), (74) and (80), we obtain
$$
E\sup_{t_0\le s\le t}(1+|x(s)-D(x_s)|^p)\le 2+2(1+k_0)^pE\|\xi\|^p+2(c_9+c_{10}+c_{11})\int_{t_0}^t E(1+\|x_s\|^p)\,ds. \tag{81}
$$
On the other hand, by Lemma 3.1, we have
$$
E\sup_{t_0\le s\le t}|x(s)|^p\le\frac{k_0}{1-k_0}E\|\xi\|^p+\frac1{(1-k_0)^p}E\sup_{t_0\le s\le t}(1+|x(s)-D(x_s)|^p)
\le c_{12}E\|\xi\|^p+\frac{2(c_9+c_{10}+c_{11})}{(1-k_0)^p}\int_{t_0}^t E(1+\|x_s\|^p)\,ds, \tag{82}
$$
where $c_{12}=\frac{k_0}{1-k_0}+\frac{2+2(1+k_0)^p}{(1-k_0)^p}$. Consequently,
$$
E\Big(1+\sup_{t_0-\tau\le s\le t}|x(s)|^p\Big)\le 1+(1+c_{12})E\|\xi\|^p+\frac{2(c_9+c_{10}+c_{11})}{(1-k_0)^p}\int_{t_0}^t E\Big(1+\sup_{t_0-\tau\le\sigma\le s}|x(\sigma)|^p\Big)ds. \tag{83}
$$
Therefore, applying the Gronwall inequality, we get
$$
E\Big(1+\sup_{t_0-\tau\le s\le t}|x(s)|^p\Big)\le[1+(1+c_{12})E\|\xi\|^p]e^{M(t-t_0)},
$$
where $M=\frac{2(c_9+c_{10}+c_{11})}{(1-k_0)^p}$. This completes the proof.
The next result shows that the exponential estimate implies an almost sure asymptotic estimate, and we give an upper bound for the sample Lyapunov exponent.
Theorem 4.2 Under conditions (H1)-(H2), we have
$$
\limsup_{t\to\infty}\frac1t\log|x(t)|\le\frac{L[(1+k_0^2)+3+2\pi(U)+2\tilde c_2^2]}{(1-k_0)^2}\qquad\text{a.s.} \tag{84}
$$
That is, the sample Lyapunov exponent of the solution is not greater than $\frac{L[(1+k_0^2)+3+2\pi(U)+2\tilde c_2^2]}{(1-k_0)^2}$.
Proof: For each $n=1,2,\dots$, it follows from Theorem 4.1 (taking $p=2$) that
$$
E\Big(\sup_{t_0+n-1\le t\le t_0+n}|x(t)|^2\Big)\le\beta e^{\gamma n},
$$
where $\beta=\frac{k_0}{1-k_0}E\|\xi\|^2+\frac{2[3+2\pi(U)+2\tilde c_2^2]L(T-t_0)}{(1-k_0)^2}$ and $\gamma=\frac{2L[(1+k_0^2)+3+2\pi(U)+2\tilde c_2^2]}{(1-k_0)^2}$. Hence, for any $\varepsilon>0$, by the Chebyshev inequality it follows that
$$
P\Big\{\omega:\sup_{t_0+n-1\le t\le t_0+n}|x(t)|^2>e^{(\gamma+\varepsilon)n}\Big\}\le\beta e^{-\varepsilon n}.
$$
Since $\sum_{n=0}^\infty\beta e^{-\varepsilon n}<\infty$, by the Borel-Cantelli lemma we deduce that there exists an integer $n_0$ such that
$$
\sup_{t_0+n-1\le t\le t_0+n}|x(t)|^2\le e^{(\gamma+\varepsilon)n}\quad\text{a.s. for}\ n\ge n_0.
$$
Thus, for almost all $\omega\in\Omega$, if $t_0+n-1\le t\le t_0+n$ and $n\ge n_0$, then
$$
\frac1t\log|x(t)|=\frac1{2t}\log(|x(t)|^2)\le\frac{(\gamma+\varepsilon)n}{2(t_0+n-1)}. \tag{85}
$$
Taking the limit superior as $t\to\infty$ in (85) leads to the almost sure exponential estimate
$$
\limsup_{t\to\infty}\frac1t\log|x(t)|\le\frac{\gamma+\varepsilon}2\qquad\text{a.s.}
$$
The required assertion (84) follows because $\varepsilon>0$ is arbitrary.
5. Conclusion
In this paper, we prove the existence and uniqueness of the solution to NSFDEs with pure jumps under a local Lipschitz condition. Meanwhile, by using the Itô formula, the Taylor formula and the Burkholder-Davis inequality, we establish $p$th moment exponential estimates and almost sure asymptotic estimates of the solution to NSFDEs with pure jumps.
6. Acknowledgements
The authors would like to thank the Royal Society of Edinburgh, the Qing Lan Project of Jiangsu Province (2012), the NNSF of China (11071037, 11401261, 61374080), the NSF of Higher Education Institutions of Jiangsu Province (13KJB110005), the grant of Jiangsu Second Normal University (JSNU-ZY-02) and the Jiangsu Government Overseas Study Scholarship for their financial support.
References
[1] S. E. A. Mohammed, Stochastic Functional Differential Equations, Longman Scientific and Technical, 1984.
[2] X. Mao, Stochastic Differential Equations and Applications, Horwood, New York, 1997.
[3] X. Mao, Razumikhin-type theorems on exponential stability of stochastic functional differential equations, Stoch. Processes Appl., 65 (1996), 233-250.
[4] X. Mao, A note on the LaSalle-type theorems for stochastic differential delay equations, J. Math. Anal. Appl., 268 (2002), 125-142.
[5] X. Mao, M. Rassias, Khasminskii-type theorems for stochastic differential delay equations, Stoch. Anal. Appl., 23 (2005), 1045-1069.
[6] E. Buckwar, Introduction to the numerical analysis of stochastic delay differential equations, J. Comput. Appl. Math., 125 (2000), 297-307.
[7] U. Kuchler, E. Platen, Strong discrete time approximation of stochastic differential equations with time delay, Math. Comput. Simulation, 54 (2000), 189-205.
[8] Y. Hu, S. E. A. Mohammed, F. Yan, Discrete-time approximation of stochastic delay equations: the Milstein scheme, Ann. Probab., 32 (2004), 265-314.
[9] X. Mao, Exponential stability of equidistant Euler-Maruyama approximations of stochastic differential delay equations, J. Comput. Appl. Math., 200 (2007), 297-316.
[10] D. Xu, Z. Yang, Y. Huang, Existence-uniqueness and continuation theorems for stochastic functional differential equations, J. Differential Equations, 245 (2008), 1681-1703.
[11] F. Wu, X. Mao, L. Szpruch, Almost sure exponential stability of numerical solutions for SDDEs, Numer. Math., 115 (2010), 681-697.
[12] J. Appleby, X. Mao, H. Wu, On the almost sure running maxima of solutions of affine stochastic functional differential equations, SIAM J. Math. Anal., 42 (2010), 646-678.
[13] I. Gyongy, S. Sabanis, A note on Euler approximations for stochastic differential equations with delay, Appl. Math. Optimization, 68 (2013), 391-412.
[14] X. Mao, Razumikhin-type theorems on exponential stability of neutral stochastic functional differential equations, SIAM J. Math. Anal., 28 (1997), 389-401.
[15] X. Mao, Asymptotic properties of neutral stochastic differential delay equations, Stochastics and Stochastic Reports, 68 (2000), 273-295.
[16] V. Kolmanovskii, N. Koroleva, T. Maizenberg, X. Mao, A. Matasov, Neutral stochastic differential delay equations with Markovian switching, Stoch. Anal. Appl., 21 (2003), 819-847.
[17] Q. Luo, X. Mao, Y. Shen, New criteria on exponential stability of neutral stochastic differential delay equations, Syst. Control Lett., 55 (2006), 826-834.
[18] L. Huang, F. Deng, Razumikhin-type theorems on stability of neutral stochastic functional differential equations, IEEE Transactions on Automatic Control, 53 (2008), 1718-1723.
[19] H. Bao, J. Cao, Existence and uniqueness of solutions to neutral stochastic functional differential equations with infinite delay, Appl. Math. Comput., 215 (2009), 1732-1743.
[20] F. Wu, S. Hu, C. Huang, Robustness of general decay stability of nonlinear neutral stochastic functional differential equations with infinite delay, Syst. Control Lett., 59 (2010), 195-202.
[21] F. Wu, X. Mao, Numerical solutions of neutral stochastic functional differential equations, SIAM J. Numer. Anal., 46 (2008), 1821-1841.
[22] S. Jankovic, M. Vasilova, M. Krstic, Some analytic approximations for neutral stochastic functional differential equations, Appl. Math. Comput., 217 (2010), 3615-3623.
[23] W. Wang, Y. Chen, Mean-square stability of semi-implicit Euler method for nonlinear neutral stochastic delay differential equations, Appl. Numer. Math., 61 (2011), 696-701.
[24] S. Zhou, Z. Fang, Numerical approximation of nonlinear neutral stochastic functional differential equations, J. Appl. Math. Comput., 41 (2013), 427-445.
[25] Z. Yu, The improved stability analysis of the backward Euler method for neutral stochastic delay differential equations, Int. J. Comput. Math., 90 (2013), 1489-1494.
[26] N. Ikeda, S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North Holland Publishing Company, Amsterdam, Oxford, New York, 1989.
[27] H. Kunita, Stochastic differential equations based on Lévy processes and stochastic flows of diffeomorphisms, in Real and Stochastic Analysis, New Perspectives, M. M. Rao (ed.), 305-373, Birkhauser, Boston, Basel, Berlin, 2004.
[28] D. Applebaum, Lévy Processes and Stochastic Calculus, Cambridge University Press, 2004; second edition, 2009.
[29] P. Billingsley, Convergence of Probability Measures, John Wiley and Sons, Inc., New York-London-Sydney, 1968.