Theory of Set Differential Equations
in Metric Spaces
V. Lakshmikantham
T. Gnana Bhaskar
J. Vasundhara Devi
Department of Mathematical Sciences
Florida Institute of Technology
Melbourne FL 32901
USA
Contents

Preface

1 Preliminaries
1.1 Introduction
1.2 Compact Convex Subsets of Rn
1.3 The Hausdorff Metric
1.4 Support Functions
1.5 Continuity and Measurability
1.6 Differentiation
1.7 Integration
1.8 Subsets of Banach Spaces
1.9 Notes and Comments

2 Basic Theory
2.1 Introduction
2.2 Comparison Principles
2.3 Local Existence and Uniqueness
2.4 Local Existence and Extremal Solutions
2.5 Monotone Iterative Technique
2.6 Global Existence
2.7 Approximate Solutions
2.8 Existence of Euler Solutions
2.9 Proximal Normal and Flow Invariance
2.10 Existence, Upper Semicontinuous Case
2.11 Notes and Comments

3 Stability Theory
3.1 Introduction
3.2 Lyapunov-like Functions
3.3 Global Existence
3.4 Stability Criteria
3.5 Nonuniform Stability Criteria
3.6 Criteria for Boundedness
3.7 Set Differential Systems
3.8 The Method of Vector Lyapunov Functions
3.9 Nonsmooth Analysis
3.10 Lyapunov Stability Criteria
3.11 Notes and Comments

4 Connection to FDEs
4.1 Introduction
4.2 Preliminaries
4.3 Lyapunov-like Functions
4.4 Connection with SDEs
4.5 Upper Semicontinuous Case Continued
4.6 Impulsive FDEs
4.7 Hybrid FDEs
4.8 Another Formulation
4.9 Notes and Comments

5 Miscellaneous Topics
5.1 Introduction
5.2 Impulsive Set Differential Equations (SDEs)
5.3 Monotone Iterative Technique
5.4 Set Differential Equations with Delay
5.5 Impulsive Set Differential Equations with Delay
5.6 Set Difference Equations
5.7 Set Differential Equations with Causal Operators
5.8 Lyapunov-like Functions in Kc(Rd+)
5.9 Set Differential Equations in (Kc(E), D)
5.10 Notes and Comments

References

Index
Preface
The study of analysis in metric spaces has gained importance in recent times. It has been realized that many results of differential calculus and set valued analysis, including the inverse function theorem, do not really rely upon the linear structure and can therefore be adapted to the nonlinear setting of metric spaces and exploited there. Moreover, the concept of the differential equation governing evolution in metric spaces has been suitably formulated.
Multivalued differential equations (now known as set differential equations (SDEs)) generated by multivalued differential inclusions have been introduced in a semilinear metric space consisting of all nonempty, compact, convex subsets of a finite or infinite dimensional base space. The basic existence and uniqueness results for such SDEs have been investigated, and their solutions have compact, convex values. These generated SDEs have also been employed as a tool to prove, in a unified way, the existence of solutions of multivalued differential inclusions. The multifunctions involved in this setup take compact, but not necessarily convex, values in the base space utilized.
Since fuzzy set theory and its applications have been extensively investigated, owing to the increase of industrial interest in fuzzy control, the theory of fuzzy differential equations (FDEs) has recently been initiated in an appropriate metric space. In view of the inherent disadvantage resulting from the fuzzification of the derivative employed in the original formulation of FDEs, an alternative formulation, based upon a family of multivalued differential inclusions derived from the fuzzy maps involved in the FDEs, has recently been suggested to reflect the rich behaviour of the corresponding ordinary differential equation before fuzzification.
The investigation of the theory of SDEs as an independent discipline has certain advantages. For example, when the set valued mapping is single valued, it is clear that the Hukuhara derivative and the integral utilized in formulating the SDEs reduce to the ordinary vector derivative and integral, and therefore the results obtained in the framework of SDEs become the corresponding results of ordinary differential systems if the base space is Rn. On the other hand, if the base space is a Banach space, we get from the corresponding SDEs the differential equations in a Banach space. Moreover, one has only a semilinear metric space to work with in the SDE setup, compared to the complete normed linear space that one employs in the usual study of an ordinary differential system. As indicated earlier, the SDEs that are generated by multivalued differential inclusions when the needed convexity is missing form a natural vehicle for proving
the existence results for multivalued differential inclusions. Also, one can utilize
SDEs profitably to investigate FDEs. Consequently, the study of the theory of
SDEs has recently been growing very rapidly and is still in the initial stages.
Nonetheless, there exists sufficient literature to warrant assembling the existing
fundamental results in a unified way to understand and appreciate the intricacies and advantages involved, so as to pave the way for further advancement of
this important branch of differential equations as an independent subject area.
It is in this spirit that we see the importance of the present monograph. Accordingly, we provide a systematic account of recent developments, describe the current state of the useful theory, show the essential unity achieved, and initiate several new extensions to other types of SDEs.
In Chapter 1, we assemble the preliminary material, providing the necessary tools, including the calculus of set valued maps, relevant to the later development. Chapter 2 is devoted to the investigation of the fundamental theory of SDEs, covering various comparison principles, existence and uniqueness, continuous dependence, the existence of extremal solutions obtained by suitably introducing a partial order in the metric space, the monotone iterative technique using lower and upper solutions, and global existence under the continuity assumption for SDEs. We also discuss, utilizing the methods of nonsmooth analysis, existence and flow invariance results without any continuity assumption, in terms of Euler solutions. Finally, we consider the case of upper semicontinuity in the framework of Caratheodory and prove an existence result in a general setup.
In Chapter 3, we extend Lyapunov stability theory to SDEs, employing
Lyapunov-like functions, proving first suitable comparison results in terms of
such functions. The stability and boundedness criteria are obtained by choosing
appropriate initial values in terms of the Hukuhara difference to eliminate the
undesirable part of the solutions of SDEs, so that the rich behaviour of the corresponding ODEs, from which SDEs are generated, is preserved. The methods
of vector Lyapunov-like functions and the perturbing Lyapunov-like functions
are discussed in detail. Also, employing lower semicontinuous Lyapunov-like
functions and utilizing nonsmooth analysis, stability results are described under weaker assumptions.
Chapter 4 deals with the interconnection between SDEs and fuzzy differential equations (FDEs). For this purpose, the necessary tools are provided for formulating FDEs, and basic results are proved, including the stability theory of Lyapunov. Then the interconnection between FDEs and SDEs is explored via a sequence of multivalued differential inclusions, suitably generating SDEs as described earlier. Impulsive effects are then incorporated into FDEs, and it is shown how impulses can help to improve the qualitative behaviour of solutions of FDEs. Hybrid fuzzy differential equations are introduced and their stability properties are discussed. Another concept of differential equations in metric spaces, which can be applied to the study of FDEs, is also considered.
Chapter 5 is devoted to initiating several topics in the setup of SDEs, such as impulsive SDEs, SDEs with time delay, set difference equations, and SDEs involving causal maps, which cover several types of SDEs including integro-differential equations. Some important basic results are provided for each type
of SDEs. We then introduce Lyapunov-like functions whose values are in some
metric space, prove suitable comparison results and study stability theory in
this general set up. This study includes the methods of single, vector, matrix
and cone-valued Lyapunov-like functions by an appropriate choice of the metric
space. Since the basic space utilized to define the metric space (Kc (Rn), D) is
restricted, for convenience of understanding, to Rn, we indicate how one can
extend most of the results described when we choose a Banach space E instead
of Rn , so that we have the corresponding metric space (Kc (E), D) to work with.
Finally, notes and comments are provided for each chapter.
Some of the important features of the monograph are as follows:
1. It is the first book that attempts to describe the theory of set differential equations as an independent discipline.
2. It incorporates the recent general theory of set differential equations, discusses the interconnections between set differential equations and fuzzy differential equations, and uses both smooth and nonsmooth analysis for investigation.
3. It exhibits several new areas of study by providing the initial apparatus for further advancement.
4. It is a timely introduction to a subject that follows the present trend of studying analysis and differential equations in metric spaces.
This monograph will be very useful to experts and their doctoral students who work in nonlinear analysis in general. It will also be a good reference book for engineers and computer scientists, since it also covers fuzzy dynamics.
We are immensely thankful to Professors Alex Tolstanogov, Donta Satyanarayana and S. Leela for their valuable suggestions and help and to Mrs. Donn
Miller-Kermani for the excellent CRC typing of the manuscript. We are pleased,
indeed, to offer our heartfelt thanks to Ms. Janie Wardle for her interest and
cooperation in our projects.
Chapter 1
Preliminaries
1.1 Introduction
Recently the study of set differential equations (SDEs) was initiated in a metric
space and some basic results of interest were obtained. The investigation of
set differential equations as an independent subject has some advantages. For
example, when the set valued mapping is single valued, it is easy to see that the
Hukuhara derivative, and the integral utilized in formulating the SDEs reduce
to the ordinary vector derivative and the integral and therefore, the results
obtained in this new framework become the corresponding results in ordinary
differential systems. Also, we have only a semilinear complete metric space to
work with in the present setup, compared to the normed linear space that one
employs in the usual study of ordinary differential systems.
Furthermore, SDEs that are generated by multivalued differential inclusions, when the multivalued functions involved do not possess convex values, can be used as a tool for studying multivalued differential inclusions. Moreover, one can utilize SDEs indirectly to profitably investigate fuzzy differential equations, since the original formulation of fuzzy differential equations suffers from a grave disadvantage and does not reflect the rich behavior of the corresponding differential equations without fuzziness. This is due to the fact that the diameter of any solution of a fuzzy differential equation increases as time increases, because of the necessity of the fuzzification of the derivative involved.
In order to formulate the set differential equations in a metric space, we
need some background material, since the metric space involved consists of
all nonempty compact, convex sets in finite or infinite dimensional space. In
Section 1.2, we define the necessary ingredients of such sets restricting ourselves
to the Euclidean n-space Rn . Since the difference of any two sets in Kc (Rn)
(set of all nonempty, compact, convex sets in Rn) is not defined in general,
conditions for the existence of the difference are provided in this section. Section
1.3 introduces the Hausdorff metric D[·, ·] for Kc (Rn ) and lists its properties.
Support functions are defined in Section 1.4, where they are utilized to create
a mapping that makes it possible to embed the metric space (Kc (Rn ), D) into
a complete cone in a specified Banach space.
In Section 1.5, the continuity and measurability properties of mappings into
the metric space are dealt with. Section 1.6 investigates the concept of differentiation of such mappings and its behavior. In Section 1.7, we consider the
theory of integration of these mappings and the needed properties. Section 1.8
summarizes the corresponding situation when the elements of the metric space
considered are from a Banach space. Notes and comments are listed in Section
1.9.
1.2 Compact Convex Subsets of Rn
We shall consider the following three spaces of nonempty subsets of Rn , namely,
(i) Kc (Rn ) consisting of all nonempty compact convex subsets of Rn ;
(ii) K(Rn ) consisting of all nonempty compact subsets of Rn;
(iii) C(Rn) consisting of all nonempty closed subsets of Rn .
Recall that a nonempty subset A of Rn is convex if for all a1 , a2 ∈ A and all
λ ∈ [0, 1], the point
a = λa1 + (1 − λ)a2
(1.2.1)
belongs to A. For any nonempty subset A of Rn, we denote by coA its convex
hull, that is the totality of points a of the form (1.2.1) or, equivalently, the
smallest convex subset containing A. Clearly
A ⊆ co A = co(co A)
(1.2.2)
with A = coA if A is convex. Moreover, coA is closed (compact) if A is closed
(compact).
Let A and B be two nonempty subsets of Rn and let λ ∈ R. We define Minkowski addition and scalar multiplication by

A + B = {a + b : a ∈ A, b ∈ B}   (1.2.3)

and

λA = {λa : a ∈ A}.   (1.2.4)

Then we have the following proposition.
Proposition 1.2.1 The spaces C(Rn), K(Rn ) and Kc (Rn ) are closed under
the operations of addition and scalar multiplication. In fact, the following properties hold:
(i) A + θ = θ + A = A, where θ, the zero element of Rn, is treated as a singleton;
(ii) (A + B) + C = A + (B + C)
(iii) A + B = B + A
(iv) A + C = B + C implies A = B
(v) 1 · A = A
(vi) λ(A + B) = λA + λB
(vii) (λ + µ)A = λA + µA
where A, B, C ∈ Kc(Rn) and λ, µ ∈ R+.
Proof We only give the proof of (iv), the rest being simple to prove. Let A, B, C ∈ Kc(Rn). We show that A ≠ B implies A + C ≠ B + C. Suppose, for example, that there exists a point a ∈ A which does not belong to B. Through a there pass hyperplanes which are disjoint from B. Let one of these hyperplanes be P, and let P′ be the support hyperplane of C which is parallel to P and such that, if we move P′ parallel to itself onto P, then C moves onto a compact convex set located on the same side of P as B. If c is a point of C ∩ P′, then a + c ∉ B + C. This completes the proof.
In general, A + (−A) ≠ {θ}. This fact is illustrated by the following example.
Example 1.2.1 Let A = [0, 1] so that (−1)A = [−1, 0], and therefore
A + (−1)A = [0, 1] + [−1, 0] = [−1, 1].
Thus, adding (−1) times a set does not constitute a natural operation of subtraction.
This leads us to the following definition.
Definition 1.2.1 For fixed A and B in Kc(Rn), if there exists an element C ∈ Kc(Rn) such that A = B + C, then we say that the Hukuhara difference of A and B exists, and it is denoted by A − B.
When the Hukuhara difference exists it is unique. This follows from (iv) of
Proposition 1.2.1.
The following example explains the above definition.
Example 1.2.2 From Example 1.2.1, we get
[−1, 1] − [−1, 0] = [0, 1] and [−1, 1] − [0, 1] = [−1, 0].
Note that the Hukuhara difference A − B is different from the set
A + (−B) = {a + (−b) : a ∈ A, b ∈ B}.
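The interval computations in Examples 1.2.1 and 1.2.2 are easy to mechanize. The following Python sketch is our own illustration (the function names are ours, not part of the text): it represents elements of Kc(R) as closed intervals (lo, hi) and implements Minkowski addition (1.2.3), scalar multiplication (1.2.4), and the Hukuhara difference of Definition 1.2.1, which for intervals exists exactly when the minuend is at least as wide as the subtrahend.

def minkowski_sum(A, B):
    # A + B = {a + b : a in A, b in B}, cf. (1.2.3); intervals are pairs (lo, hi)
    return (A[0] + B[0], A[1] + B[1])

def scalar_mult(lam, A):
    # lam*A = {lam*a : a in A}, cf. (1.2.4); reorder endpoints when lam < 0
    lo, hi = lam * A[0], lam * A[1]
    return (min(lo, hi), max(lo, hi))

def hukuhara_diff(A, B):
    # The C with A = B + C, if it exists (Definition 1.2.1); for intervals
    # C = (A.lo - B.lo, A.hi - B.hi), and it exists only if diam(A) >= diam(B).
    C = (A[0] - B[0], A[1] - B[1])
    return C if C[0] <= C[1] else None   # None: the difference does not exist

A = (0.0, 1.0)
print(minkowski_sum(A, scalar_mult(-1.0, A)))   # (-1.0, 1.0): A + (-1)A != {0}
print(hukuhara_diff((-1.0, 1.0), (-1.0, 0.0)))  # (0.0, 1.0), as in Example 1.2.2
print(hukuhara_diff((0.0, 0.0), (0.0, 1.0)))    # None: no translate of [0,1] fits in {0}

When the Hukuhara difference exists, the interval returned above satisfies B + C = A by construction, in line with the uniqueness remark following Definition 1.2.1.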
The next proposition provides the necessary and sufficient condition for the
existence of the Hukuhara difference A − B.
Proposition 1.2.2 Let A, B ∈ Kc(Rn). For the difference A − B to exist, it is necessary and sufficient that the following condition hold: if a ∈ ∂A, there exists at least one point c such that

a ∈ B + c ⊂ A.   (1.2.5)
Proof Necessity: Suppose the difference A − B exists, and let C = A − B. Then A = B + C. If a ∈ ∂A, then a ∈ B + C, that is, a = b + c where b ∈ B and c ∈ C. Also, if z ∈ B, then z + c ∈ A, and therefore (1.2.5) is satisfied.

Sufficiency: Suppose (1.2.5) holds. Consider the set C = {x : B + x ⊆ A}. Clearly C is compact and we have B + C ⊆ A. Now, if d and d′ ∈ C, then we have B + d ⊆ A and B + d′ ⊆ A, from which we obtain

(1 − λ)(B + d) + λ(B + d′) ⊂ A, for 0 ≤ λ ≤ 1.   (1.2.6)

We can write the left-hand side of (1.2.6) as B + z with z = (1 − λ)d + λd′. Hence z ∈ C and C is convex.

Let u ∈ A. A straight line through u meets ∂A at two points a and a′. By hypothesis there exist elements d and d′ in C such that a ∈ B + d and a′ ∈ B + d′. We can write u = (1 − λ)a + λa′ with 0 < λ < 1. Then u ∈ B + x, where x = (1 − λ)d + λd′ ∈ C. Hence A ⊆ B + C. Thus A = B + C and the proof is complete.
We note that a necessary condition for the Hukuhara difference A − B to
exist is that some translate of B is a subset of A. However, in general, the
Hukuhara difference need not exist as is seen from the following example.
Example 1.2.3 The difference {0} − [0, 1] does not exist, since no translate of [0, 1] can ever be contained in the singleton set {0}.
1.3 The Hausdorff Metric
Let x be a point in Rn and A a nonempty subset of Rn. The distance d(x, A) from x to A is defined by

d(x, A) = inf{‖x − a‖ : a ∈ A}.   (1.3.1)

Thus d(x, A) = d(x, Ā) ≥ 0, and d(x, A) = 0 if and only if x ∈ Ā, the closure of A ⊆ Rn.

We shall call the subset

S_ε(A) = {x ∈ Rn : d(x, A) < ε}   (1.3.2)

an ε-neighborhood of A. Its closure is the subset

S̄_ε(A) = {x ∈ Rn : d(x, A) ≤ ε}.   (1.3.3)

In particular, we shall write

S̄₁ⁿ = S̄₁(θ),   (1.3.4)
which is obviously a compact subset of Rn. Note also that

S̄_ε(A) = A + ε S̄₁ⁿ,   (1.3.5)

for any ε > 0 and any nonempty subset A of Rn. For convenience we shall sometimes write S(A, ε) and S̄(A, ε).
Now, let A and B be nonempty subsets of Rn. We define the Hausdorff separation of B from A by

dH(B, A) = sup{d(b, A) : b ∈ B}   (1.3.6)

or, equivalently,

dH(B, A) = inf{ε > 0 : B ⊆ A + ε S̄₁ⁿ}.

We have dH(B, A) ≥ 0, with dH(B, A) = 0 if and only if B ⊆ Ā. Also, the triangle inequality

dH(B, A) ≤ dH(B, C) + dH(C, A)

holds for all nonempty subsets A, B and C of Rn. In general, however,

dH(A, B) ≠ dH(B, A).
We define the Hausdorff distance between nonempty subsets A and B of Rn by
D(A, B) = max{dH (A, B), dH (B, A)},
(1.3.7)
which is symmetric in A and B. Consequently,

(a) D(A, B) ≥ 0 with D(A, B) = 0 if and only if Ā = B̄;
(b) D(A, B) = D(B, A);
(c) D(A, B) ≤ D(A, C) + D(C, B),   (1.3.8)

for any nonempty subsets A, B and C of Rn.

If we restrict our attention to nonempty closed subsets of Rn, we find that the Hausdorff distance (1.3.7) is a metric, known as the Hausdorff metric. Thus (C(Rn), D) is a metric space.
In fact, we have
Proposition 1.3.1 (C(Rn ), D) is a complete separable metric space in which
K(Rn ) and Kc (Rn ) are closed subsets. Hence, (K(Rn ), D) and (Kc (Rn ), D) are
also complete separable metric spaces.
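To make the definitions (1.3.6) and (1.3.7) concrete, the following Python sketch (our illustration; the function names are ours) computes the Hausdorff separation and the Hausdorff distance for finite point sets in Rn, which may be viewed as discrete stand-ins for compact sets. It also shows that the separation dH alone is not symmetric.

import math

def dist(x, a):
    return math.sqrt(sum((xi - ai) ** 2 for xi, ai in zip(x, a)))

def d_point_set(x, A):
    # d(x, A) = inf{ ||x - a|| : a in A }, cf. (1.3.1)
    return min(dist(x, a) for a in A)

def d_H(B, A):
    # Hausdorff separation of B from A, cf. (1.3.6)
    return max(d_point_set(b, A) for b in B)

def D(A, B):
    # Hausdorff distance, cf. (1.3.7)
    return max(d_H(A, B), d_H(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.0), (0.0, 3.0)]
print(d_H(A, B), d_H(B, A))   # 0.0 and 3.0: the separation is not symmetric
print(D(A, B))                # 3.0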
The following properties of the Hausdorff metric will be useful later.
We start by stating a proposition dealing with the invariance of the Hausdorff
metric.
Proposition 1.3.2 If A, B ∈ Kc (Rn) and C ∈ K(Rn ) then,
D(A + C, B + C) = D(A, B).
(1.3.9)
We need the following result which deals with the law of cancellation to proceed
further.
Lemma 1.3.1 Let A, B ∈ Kc(Rn) and C ∈ K(Rn). If A + C ⊆ B + C, then A ⊆ B.
Proof Let a be any element of A. We need to show that a ∈ B. Given any c₁ ∈ C, we have a + c₁ ∈ B + C, that is, there exist b₁ ∈ B and c₂ ∈ C with a + c₁ = b₁ + c₂. For the same reason, there exist b₂ ∈ B and c₃ ∈ C with a + c₂ = b₂ + c₃. Repeat the procedure indefinitely and sum the first n of the equations obtained. We get
$$na + \sum_{i=1}^{n} c_i = \sum_{i=1}^{n} b_i + \sum_{i=2}^{n+1} c_i,$$
which implies
$$na + c_1 = \sum_{i=1}^{n} b_i + c_{n+1}.$$
Then,
$$a = \frac{1}{n}\sum_{i=1}^{n} b_i + \frac{c_{n+1}}{n} - \frac{c_1}{n}.$$
Set $x_n = \frac{1}{n}\sum_{i=1}^{n} b_i$. Thus
$$a = x_n + \frac{c_{n+1}}{n} - \frac{c_1}{n}.$$
We observe that xₙ ∈ B for all n, because B is convex, and (c_{n+1} − c₁)/n → 0 as n → ∞ since C is compact. Thus xₙ converges to a, and since B is compact, a ∈ B. This shows that A ⊆ B; in particular, if A + C = B + C then A = B. This completes the proof of the lemma.
Proof of Proposition 1.3.2 Let λ ≥ 0 and let S denote the closed unit sphere of the space. Consider the following inclusions:

(1) A + λS ⊃ B,
(2) B + λS ⊃ A,
(3) A + C + λS ⊃ B + C,
(4) B + C + λS ⊃ A + C.

Put d₁ = D(A, B) and d₂ = D(A + C, B + C). Then d₁ is the infimum of the positive numbers λ for which (1) and (2) hold. Similarly, d₂ is the infimum of the positive numbers λ for which (3) and (4) hold. Since (3) and (4) follow from (1) and (2) respectively, by adding C, we have d₁ ≥ d₂. Conversely, since by Lemma 1.3.1 canceling C is allowed in (3) and (4), we obtain d₁ ≤ d₂, which proves the proposition.
Proposition 1.3.3 If A, B ∈ K(Rn), then

D(co A, co B) ≤ D(A, B).   (1.3.10)

If A, A′, B, B′ ∈ Kc(Rn), then

D(tA, tB) = t D(A, B) for all t ≥ 0,   (1.3.11)

D(A + A′, B + B′) ≤ D(A, B) + D(A′, B′).   (1.3.12)

Further,

D(A − A′, B − B′) ≤ D(A, B) + D(A′, B′),   (1.3.13)

provided the differences A − A′ and B − B′ exist. Moreover, with β = max{λ, µ}, we have

D(λA, µB) ≤ β D(A, B) + |λ − µ| [D(A, θ) + D(B, θ)]   (1.3.14)

and

D(λA, λB) = λ D(A − B, θ), if A − B exists.   (1.3.15)
Proof Since (1.3.10) and (1.3.11) are obvious, we begin with the proof of (1.3.12). For all a ∈ A and u ∈ A′, compactness of B and B′ ensures that there exist b(a) ∈ B and v(u) ∈ B′ such that
$$\inf_{b \in B} |a - b| = |a - b(a)|, \qquad \inf_{v \in B'} |u - v| = |u - v(u)|. \tag{1.3.16}$$
From the relation
$$|a + u - b(a) - v(u)| \le |a - b(a)| + |u - v(u)|$$
and (1.3.16), it follows that
$$\sup_{a \in A,\, u \in A'} \ \inf_{b \in B,\, v \in B'} |a + u - b - v| \le \sup_{a \in A} \inf_{b \in B} |a - b| + \sup_{u \in A'} \inf_{v \in B'} |u - v|.$$
From the above and the analogous inequality obtained by interchanging the roles of A with B and A′ with B′, we obtain (1.3.12).

We now prove (1.3.13). Using Proposition 1.3.2, we find that
$$D(A - A', B - B') = D(A - A' + A' + B',\ B - B' + B' + A') = D(A + B', B + A') \le D(A, B) + D(A', B'),$$
where the last inequality follows from (1.3.12).
To prove (1.3.14), consider, for λ − µ ≥ 0,

D(λA, µB) ≤ µ D(A, B) + (λ − µ) D(A, θ),

and, if λ − µ ≤ 0,

D(λA, µB) ≤ λ D(A, B) + (µ − λ) D(B, θ).

The relations above, put together, prove (1.3.14). The proof of (1.3.15) follows from Proposition 1.3.2.
Next, we define the magnitude of a nonempty subset A of Rn by

‖A‖ = sup{‖a‖ : a ∈ A},   (1.3.17)

or, equivalently,

‖A‖ = D(θ, A).   (1.3.18)

Here ‖A‖ is finite, and the supremum in (1.3.17) is attained, when A ∈ K(Rn). From (1.3.11) it obviously follows that

‖tA‖ = t‖A‖, for all t ≥ 0.   (1.3.19)

Moreover, (1.3.8) and (1.3.18) yield

|‖A‖ − ‖B‖| ≤ D(A, B),   (1.3.20)

for all A, B ∈ K(Rn).
We say that a subset U of K(Rn) or of Kc(Rn) is uniformly bounded if there exists a finite constant c(U) such that

‖A‖ ≤ c(U), for all A ∈ U.   (1.3.21)
We then have the following simple characterization of compactness.
Proposition 1.3.4 A nonempty subset A of the metric space (K(Rn ), D) or
(Kc(Rn), D) is compact if and only if it is closed and uniformly bounded.
Set inclusion induces a partial ordering on K(Rn). Write A ≤ B if and only if A ⊆ B, where A, B ∈ K(Rn). Then

L(B) = {A ∈ K(Rn) : B ≤ A},  U(B) = {A ∈ K(Rn) : A ≤ B},   (1.3.22)

are closed subsets of K(Rn) for any B ∈ K(Rn). In fact, from Proposition 1.3.4, U(B) is a compact subset of K(Rn).
Proposition 1.3.5 U(B) is a compact subset of K(Rn ).
This assertion remains true with Kc (Rn ) replacing K(Rn ) everywhere.
Sequences of nested subsets in (K(Rn ), D) have the following useful intersection and convergence properties.
Proposition 1.3.6 Let {Aj} ⊂ K(Rn) satisfy

· · · ⊆ Aj ⊆ · · · ⊆ A2 ⊆ A1.

Then A = ∩_{j=1}^∞ Aj ∈ K(Rn) and

D(An, A) → 0 as n → ∞.   (1.3.23)

On the other hand, if A1 ⊆ A2 ⊆ · · · ⊆ Aj ⊆ · · · and A = ∪_{j=1}^∞ Aj ∈ K(Rn), then (1.3.23) holds.
1.4 Support Functions
Let A be a nonempty subset of Rn. The support function of A is defined for all p ∈ Rn by

s(p, A) = sup{⟨p, a⟩ : a ∈ A},   (1.4.1)

which may take the value +∞ when A is unbounded. However, when A is a compact, convex subset of Rn the supremum is always attained and the support function s(·, A) : Rn → R is well defined. Indeed,

|s(p, A)| ≤ ‖A‖ ‖p‖,   (1.4.2)

for all p ∈ Rn, and

|s(p, A) − s(q, A)| ≤ ‖A‖ ‖p − q‖,   (1.4.3)

for all p, q ∈ Rn.

Further, for all p ∈ Rn,

s(p, A) ≤ s(p, B), if A ⊆ B,   (1.4.4)

and

s(p, co(A ∪ B)) ≤ max{s(p, A), s(p, B)}.   (1.4.5)
The support function s(p, A) is uniquely paired to the subset A ∈ Kc (Rn ) in
the sense that s(p, A) = s(p, B) for all p ∈ Rn if and only if A = B when A
and B are restricted to Kc (Rn ). It also preserves set addition and nonnegative
scalar multiplication. That is, for all p ∈ Rn,

s(p, A + B) = s(p, A) + s(p, B),   (1.4.6)

which, in particular, reduces to

s(p, A + {x}) = s(p, A) + ⟨p, x⟩,   (1.4.7)

for any x ∈ Rn, and

s(p, tA) = t s(p, A), t ≥ 0.   (1.4.8)

For a fixed A ∈ Kc(Rn), s(p, A) is positively homogeneous:

s(tp, A) = t s(p, A), t ≥ 0,   (1.4.9)

for all p ∈ Rn, and subadditive:

s(p1 + p2, A) ≤ s(p1, A) + s(p2, A),   (1.4.10)

for all p1, p2 ∈ Rn. Moreover, combining (1.4.9) and (1.4.10) we see that s(·, A) is a convex function, that is, it satisfies

s(λp1 + (1 − λ)p2, A) ≤ λ s(p1, A) + (1 − λ) s(p2, A),   (1.4.11)

for all p1, p2 ∈ Rn and λ ∈ [0, 1].
The nonempty compact convex subsets of Rn are uniquely characterized by
such functions.
Proposition 1.4.1 For every continuous, positively homogeneous and subadditive function s : Rn → R there exists a unique nonempty compact convex subset

A = {x ∈ Rn : ⟨p, x⟩ ≤ s(p) for all p ∈ Rn},
which has s as its support function.
The Hausdorff metric is related to the support function for A, B ∈ Kc (Rn ),
since we have
D(A, B) = sup{|s(p, A) − s(p, B)| : p ∈ S^{n−1}},   (1.4.12)

where S^{n−1} = {p ∈ Rn : ‖p‖ = 1} is the unit sphere in Rn.
Let C(S^{n−1}) denote the Banach space of continuous functions f : S^{n−1} → R with the supremum norm

‖f‖ = sup{|f(p)| : p ∈ S^{n−1}}.

One can use the support function to embed the metric space (Kc(Rn), D) isometrically as a positive cone in C(S^{n−1}). For this, define j : Kc(Rn) → C(S^{n−1}) by j(A)(·) = s(·, A), for each A ∈ Kc(Rn).
From the properties of the support function, j is a univalent mapping satisfying

j(A + B) = j(A) + j(B),   (1.4.13)

and

j(tA) = t j(A), t ≥ 0,   (1.4.14)

with

‖j(A) − j(B)‖ = D(A, B),   (1.4.15)

for all A, B ∈ Kc(Rn).

The desired positive cone is the image j(Kc(Rn)) in C(S^{n−1}). Obviously j is continuous, as is its inverse j^{−1} : j(Kc(Rn)) → Kc(Rn).
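As a numerical illustration of relation (1.4.12) and of the isometry (1.4.15), the following Python sketch is our own illustration: it assumes finite point sets standing in for their convex hulls and approximates the supremum over S^{n−1} by a finite grid of directions, comparing the support function expression with the Hausdorff distance of Section 1.3.

import math

def support(p, A):
    # s(p, A) = sup{ <p, a> : a in A }, cf. (1.4.1); for a finite A this equals
    # the support function of the convex hull co A.
    return max(p[0] * a[0] + p[1] * a[1] for a in A)

def directions(m):
    # m points on the unit circle S^1
    return [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
            for k in range(m)]

def D_support(A, B, m=720):
    # Approximation of D(A, B) = sup_{p in S^{n-1}} |s(p, A) - s(p, B)|, cf. (1.4.12)
    return max(abs(support(p, A) - support(p, B)) for p in directions(m))

A = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square
B = [(x + 2, y) for (x, y) in A]       # the square translated by (2, 0)
print(D_support(A, B))                 # 2.0, the Hausdorff distance D(A, B)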
1.5 Continuity and Measurability
We consider mappings F from a domain T in Rk into the metric space (Kc (Rn ), D).
Thus F : T → Kc (Rn ) or equivalently,
F (t) ∈ Kc (Rn ), for all t ∈ T.
(1.5.1)
We shall call such a mapping F a (compact convex) set valued mapping from T
to Rn .
The usual definition of continuity of mappings between metric spaces applies here. We shall say that a set valued mapping F satisfying (1.5.1) is continuous at t0 in T if for every ε > 0 there exists a δ = δ(ε, t0) > 0 such that

D[F(t), F(t0)] < ε,   (1.5.2)

for all t ∈ T with ‖t − t0‖ < δ.

Alternatively, we can write (1.5.2) in terms of the convergence of sequences, that is,

lim_{tn → t0} D[F(tn), F(t0)] = 0,   (1.5.3)

for all sequences {tn} in T with tn → t0 ∈ T.
Using the Hausdorff separation dH and neighborhoods, we see that (1.5.2) is the combination of

dH(F(t), F(t0)) < ε,   (1.5.4)

that is,

F(t) ⊂ S_ε(F(t0)) ≡ F(t0) + ε S₁ⁿ,   (1.5.5)

and

dH(F(t0), F(t)) < ε,   (1.5.6)

that is,

F(t0) ⊂ S_ε(F(t)) ≡ F(t) + ε S₁ⁿ,   (1.5.7)

for all t ∈ T with ‖t − t0‖ < δ. As before, S₁ⁿ = {x ∈ Rn : ‖x‖ < 1} is the open unit ball in Rn. If the mapping F satisfies (1.5.4), (1.5.5), we say that it is upper semicontinuous at t0, and that it is lower semicontinuous at t0 if it satisfies (1.5.6), (1.5.7). Thus, F is continuous at t0 if and only if it is both lower semicontinuous and upper semicontinuous at t0. A set valued mapping can be lower semicontinuous without being upper semicontinuous, and vice versa.
Example 1.5.1 The set valued mapping F from R into R defined by

F(t) = {0} for t = 0,  F(t) = [0, 1] for t ∈ R \ {0},

is lower semicontinuous, but not upper semicontinuous, at t0 = 0. On the other hand, F : R → R defined by

F(t) = [0, 1] for t = 0,  F(t) = {0} for t ∈ R \ {0},

is upper semicontinuous, but not lower semicontinuous, at t0 = 0.
Example 1.5.2 A single valued mapping f : R → R+ is upper (lower) semicontinuous if the set valued mapping F defined by F (t) = [0, f(t)] is upper (lower)
semicontinuous.
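The one-sided nature of the two semicontinuity notions in Example 1.5.1 can be checked numerically: with F(0) = {0} and F(t) = [0, 1] otherwise, the separation dH(F(0), F(t)) stays 0 as t → 0, while dH(F(t), F(0)) does not. The short Python sketch below is our own illustration, with intervals (lo, hi) standing in for the sets.

def d_H_interval(B, A):
    # d_H(B, A) = sup_{b in B} d(b, A) for intervals A = (a0, a1), B = (b0, b1);
    # the supremum is attained at an endpoint of B.
    return max(A[0] - B[0], B[1] - A[1], 0.0)

def F(t):
    # the first map of Example 1.5.1
    return (0.0, 0.0) if t == 0 else (0.0, 1.0)

for t in [0.1, 0.01, 0.001]:
    # lower semicontinuity at 0 needs d_H(F(0), F(t)) -> 0: holds (always 0.0);
    # upper semicontinuity at 0 needs d_H(F(t), F(0)) -> 0: fails (stays 1.0).
    print(t, d_H_interval(F(0), F(t)), d_H_interval(F(t), F(0)))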
If the mapping is continuous, upper semicontinuous or lower semicontinuous
at every t0 ∈ T, we shall replace the qualifier ‘at t0 ’ with ‘on T ’, or omit it
altogether.
We say that a set valued mapping F from T into Rn is Lipschitz continuous with Lipschitz constant L if

D[F(t′), F(t)] ≤ L‖t′ − t‖,   (1.5.8)

for all t′, t ∈ T. A Lipschitz continuous mapping is obviously continuous.

The distance d(x, F(t)) of F(t) from a point x ∈ Rn satisfies

|d(x, F(t)) − d(y, F(t′))| ≤ ‖x − y‖ + D[F(t), F(t′)],   (1.5.9)

for all x, y ∈ Rn and t′, t ∈ T. Thus d(·, F(·)) : Rn × T → R+ is continuous whenever F is continuous, and Lipschitz continuous whenever F is Lipschitz continuous. Since the magnitude ‖F(t)‖ = D[F(t), θ], a similar assertion holds for ‖F(·)‖ : T → R+ with the same Lipschitz constant when F is Lipschitz continuous.
We saw in Section 1.4 that the support function s(·, A) of an element A ∈
Kc (Rn) can be used to form an isometric embedding j : Kc (Rn ) → C(S n−1)
with j(A)(·) = s(·, A).
Thus, if F : T → Kc (Rn ), the support function s(·, F (t)), for each t ∈ T
defines a mapping j(F(·)) : T → C(S^{n−1}). From (1.4.12) we have

sup{|s(p, F(t′)) − s(p, F(t))| : p ∈ S^{n−1}} = D[F(t′), F(t)].   (1.5.10)

So j(F(·)) is continuous or Lipschitz continuous (with the same Lipschitz constant) whenever F is continuous or Lipschitz continuous.

Combining (1.4.3) and (1.5.10) we obtain

|s(x, F(t′)) − s(y, F(t))| ≤ ‖F(t)‖ ‖x − y‖ + D[F(t′), F(t)],   (1.5.11)

for all t′, t ∈ T and x, y ∈ S̄₁ⁿ = {x ∈ Rn : ‖x‖ ≤ 1}.

Thus the support function s(x, F(t)), considered as a mapping s(·, F(·)) : S̄₁ⁿ × T → R, is continuous or Lipschitz continuous whenever F is continuous or Lipschitz continuous.
Let B(Rk) and B(Kc(Rn)) denote the σ-algebras of Borel subsets of Rk and of (Kc(Rn), D), respectively. Adopting the usual definition of Borel measurability of a mapping between metric spaces, we shall say that a mapping F : T → Kc(Rn) is measurable if

{t ∈ T : F(t) ∈ B} ∈ B(Rk) for all B ∈ B(Kc(Rn)).   (1.5.12)
We shall write F^{−1}(A) = {t ∈ T : F(t) ∩ A ≠ ∅} for any subset A of Rn. Then we have
Proposition 1.5.1 The following assertions are equivalent:
(i) F : T → Kc (Rn) is measurable;
(ii) F^{−1}(B) ∈ B(Rk) for all B ∈ B(Rn);
(iii) F −1(O) ∈ B(Rk ) for all open subsets O of Rn;
(iv) F −1(C) ∈ B(Rk ) for all closed subsets C of Rn ;
(v) d(x, F (·)) : T → R is measurable for each x ∈ Rn;
(vi) kF (·)k : T → R is measurable;
(vii) s(x, F (·)) : T → R is measurable for each x ∈ Rn.
In (v)-(vii) we mean measurability of the single valued mapping T → R with respect to the Borel σ-algebras B(Rk) and B(R). Such mappings are measurable if they are continuous. For set valued mappings we have
Proposition 1.5.2 F : T → Kc (Rn ) is measurable if it is upper semicontinuous or lower semicontinuous, and hence if it is continuous.
For set valued mappings, measurability is also preserved on taking limits.
Proposition 1.5.3 Let Fi : T → Kc(Rn) be measurable for i = 1, 2, 3, . . . and suppose that lim_{i→∞} D[Fi(t), F(t)] = 0 for almost all t ∈ T. Then F : T → Kc(Rn) is measurable.
A selector of a set valued mapping F from T into Rn is a single valued
mapping f : T → Rn such that
f(t) ∈ F (t) for all t ∈ T.
(1.5.13)
Proposition 1.5.4 If F : T → Kc (Rn ) is measurable then it has a measurable
selector f : T → Rn .
The following result, known as the Castaing Representation Theorem, gives
an additional characterization of measurability of a set valued mapping.
Theorem 1.5.1 F : T → Kc(Rn) is measurable if and only if there exists a sequence {fi} of measurable selectors of F such that

F(t) = cl{fi(t) : i = 1, 2, · · ·},   (1.5.14)

for each t ∈ T.
If the set valued mapping is at least lower semicontinuous, then it has continuous selectors. In fact, any point in F (t) is attainable by a continuous selector.
Proposition 1.5.5 Let F : T → Kc (Rn ) be lower semicontinuous. Then for
each x ∈ F (t) and t ∈ T there is a continuous selector f of F such that f(t) = x.
On the other hand, a set valued mapping need not have a continuous selector
if it is only upper semicontinuous.
Example 1.5.3 The set valued mapping F from R to R defined by

F(t) = {−1} if t < 0,  F(t) = [−1, 1] if t = 0,  F(t) = {+1} if t > 0,

is upper semicontinuous, but has no continuous selectors. Note that F is not lower semicontinuous at t = 0.
When the set valued mapping F is Lipschitz continuous, it has Lipschitz continuous selectors satisfying the attainability property of Proposition 1.5.5.
Let ≤ be the partial ordering on Kc(Rn) induced by set inclusion, that is, A ≤ B if and only if A ⊆ B. We say that a mapping F : T → Kc(Rn) has a ≤-maximum at t0 ∈ T if

F(t) ≤ F(t0) for all t ∈ T,   (1.5.15)

and a ≤-minimum at t0 ∈ T if

F(t0) ≤ F(t) for all t ∈ T.   (1.5.16)

1.6 Differentiation
We begin with the definition of the Hukuhara derivative.
Definition 1.6.1 Let I be an interval of real numbers and let a multifunction U : I → Kc(Rn) be given. U is Hukuhara differentiable at a point t0 ∈ I if there exists DH U(t0) ∈ Kc(Rn) such that the limits
$$\lim_{\Delta t \to 0^+} \frac{U(t_0 + \Delta t) - U(t_0)}{\Delta t} \tag{1.6.1}$$
and
$$\lim_{\Delta t \to 0^+} \frac{U(t_0) - U(t_0 - \Delta t)}{\Delta t} \tag{1.6.2}$$
both exist and are equal to DH U(t0).
Clearly, implicit in the definition of DH U(t0) is the existence of the differences U(t0 + ∆t) − U(t0) and U(t0) − U(t0 − ∆t) for all ∆t > 0 sufficiently small. Using the difference quotient in (1.6.2) is not equivalent to using the difference quotient in
$$\lim_{\Delta t \to 0^-} \frac{U(t_0 + \Delta t) - U(t_0)}{\Delta t}, \tag{1.6.2'}$$
contrary to the situation for ordinary functions from I into a topological vector space. In general, the existence of A − B, for A, B ∈ Kc(Rn), implies nothing about the existence of B − A.

The integral of a continuous multifunction F : [a, b] → Kc(Rn) is defined in Hukuhara [1,2], where it is shown that $D_H \int_a^t F(s)\,ds = F(t)$. In order that such a result hold, one must use (1.6.2) instead of (1.6.2′), since the difference quotient in (1.6.2′) may not exist, as shown in the following example.
Example 1.6.1 Let A ∈ Kc(Rn) and define F(t) = A, t ∈ R; then for any t > 0, we have $\int_0^t F(s)\,ds = tA$. Taking U(t) = tA, t > 0, we see that the difference quotient (1.6.2′) does not exist.
The following proposition illustrates an important property of the Hukuhara derivative.
Proposition 1.6.1 If the multifunction U : I → Kc (Rn ) is Hukuhara differentiable on I, then the real valued function t → diam(U (t)), t ∈ I is nondecreasing
on I.
Proof If U is Hukuhara differentiable at a point t0 ∈ I, then there is a δ(t0 ) > 0,
such that U (t0 + ∆t) − U (t0) and U (t0) − U (t0 − ∆t) are defined for 0 < ∆t <
δ(t0). Since A − B, for A, B ∈ Kc(Rn), is defined only if some translate of B is contained in A, the difference A − B exists only if diam(A) ≥ diam(B). Let t1, t2 ∈ I
be fixed with t1 < t2 . Then for each τ ∈ [t1, t2] there is a δ(τ ) > 0 such that
diam(U (s)) ≤ diam(U (τ )), for s ∈ [τ −δ(τ ), τ ], and diam(U (s)) ≥ diam(U (τ )),
for s ∈ [τ, τ + δ(τ )]. The collection
{Iτ : τ ∈ [t1 , t2], Iτ = (τ − δ(τ ), τ + δ(τ ))},
forms an open covering of [t1, t2]. Choose a finite subcover Iτ1 , · · · , IτN with
τi < τi+1 . We then arrive at diam(U (t1 )) ≤ diam(U (τ1 )) and diam(U (τN )) ≤
diam(U(t2)). There is no loss of generality in assuming Iτi ∩ Iτi+1 ≠ ∅, i = 1, · · · , N − 1. Thus for each i = 1, · · · , N − 1, there exists an si ∈ Iτi ∩ Iτi+1
with τi < si < τi+1 , and hence
diam(U (τi )) ≤ diam(U (si )) ≤ diam(U (τi+1 )).
Therefore we have diam(U (t1 )) ≤ diam(U (t2 )), which proves the proposition.
The example given below utilizes the above proposition.

Example 1.6.2 Let U(t) = (2 + sin t) S̄₁ⁿ (S̄₁ⁿ is the closed unit ball in Rn). U is not Hukuhara differentiable on (0, 2π), since diam(U(t)) = 2(2 + sin t) is not nondecreasing on (0, 2π).
Remark 1.6.1 Note that the existence of the limits in (1.6.1) and (1.6.2) is
not used in the proof of Proposition 1.6.1. In fact, instead of the hypothesis that
U (t) is Hukuhara differentiable on I, one could substitute the assumption that
for each t ∈ I, the differences U (t + ∆t) − U (t) and U (t) − U (t − ∆t) both exist
for all sufficiently small ∆t > 0.
Also, we observe that a multifunction U : I → Kc(Rn) being Hukuhara differentiable on I with diam(U(t)) > 0 for t ∈ I need not imply that U is monotone with respect to set inclusion. The following example illustrates this fact.
Example 1.6.3 If U(t) = [t, 2t], 0 < t < 1, then DH U(t) = [1, 2], 0 < t < 1, and yet U(t1) ⊄ U(t2) and U(t2) ⊄ U(t1) for any t1, t2 with 0 < t1 < t2 < 1.
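For interval valued mappings the Hukuhara derivative can be read off from the endpoint functions, and the difference quotients in (1.6.1), (1.6.2) can be checked numerically. The following Python sketch (our illustration, reusing the interval representation of Section 1.2) does this for U(t) = [t, 2t] of Example 1.6.3, for which DH U(t) = [1, 2].

def hukuhara_diff(A, B):
    # Hukuhara difference of intervals (lo, hi); None if it does not exist
    C = (A[0] - B[0], A[1] - B[1])
    return C if C[0] <= C[1] else None

def U(t):
    return (t, 2.0 * t)

def forward_quotient(t, dt):
    # [U(t + dt) - U(t)] / dt, cf. (1.6.1)
    C = hukuhara_diff(U(t + dt), U(t))
    return (C[0] / dt, C[1] / dt)

def backward_quotient(t, dt):
    # [U(t) - U(t - dt)] / dt, cf. (1.6.2)
    C = hukuhara_diff(U(t), U(t - dt))
    return (C[0] / dt, C[1] / dt)

t0 = 0.5
for dt in [0.1, 0.01, 0.001]:
    # both quotients equal (1.0, 2.0) up to rounding, so DH U(t0) = [1, 2]
    print(dt, forward_quotient(t0, dt), backward_quotient(t0, dt))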
We now proceed to prove the following standard result on the real line for
the Hukuhara derivative.
Proposition 1.6.2 The set valued mapping U is constant if and only if we have
DH U = 0
(1.6.3)
identically on I.
Proof If U is a constant, the result follows. Conversely, suppose (1.6.3) holds. For a fixed t0 ∈ I, if t > t0, using (1.3.15) we get

D[U(t), U(t0)] = D[U(t) − U(t0), θ],

which gives
$$\lim_{t \to t_0^+} \frac{D[U(t), U(t_0)]}{|t - t_0|} = 0.$$
Similarly, if t < t0, we obtain
$$\lim_{t \to t_0^-} \frac{D[U(t), U(t_0)]}{|t - t_0|} = 0,$$
and hence
$$\lim_{t \to t_0} \frac{D[U(t), U(t_0)]}{|t - t_0|} = 0. \tag{1.6.4}$$
Fixing t1 ∈ I, from the inequality

|D[U(t1), U(t)] − D[U(t1), U(t0)]| ≤ D[U(t), U(t0)],

upon dividing by |t − t0| and using (1.6.4), we conclude that the real valued function t → D[U(t1), U(t)] has zero derivative and is therefore constant; since it is zero at t1, it must be identically zero. Thus U(t) = U(t1) for all t ∈ I, that is, U is constant.
1.7 Integration
Let F : [0, 1] → Kc(Rn) and let S(F) denote the set of integrable selectors of F over [0, 1]. Then the Aumann integral of F over [0, 1] is defined as
$$\int_0^1 F(t)\,dt = \left\{ \int_0^1 f(t)\,dt : f \in S(F) \right\}. \tag{1.7.1}$$
If S(F) ≠ ∅, then the Aumann integral exists and F is said to be Aumann integrable.
We shall say that F is integrally bounded on [0, 1] if there exists an integrable function g : [0, 1] → R such that

‖F(t)‖ ≤ g(t), for almost all t ∈ [0, 1].   (1.7.2)

If such an F has measurable selectors, then they are also integrable and S(F) is nonempty.
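For an interval valued mapping F(t) = [f_lo(t), f_hi(t)] the Aumann integral can be computed from the endpoints: it is the interval with endpoints ∫ f_lo and ∫ f_hi. The Python sketch below is our own illustration (a Riemann sum stands in for the Lebesgue integral, and the endpoint functions are assumed integrable with f_lo ≤ f_hi).

def aumann_integral(f_lo, f_hi, a=0.0, b=1.0, n=100000):
    # midpoint-rule approximation of the integrals of the endpoint functions
    h = (b - a) / n
    ts = [a + (k + 0.5) * h for k in range(n)]
    return (sum(f_lo(t) for t in ts) * h, sum(f_hi(t) for t in ts) * h)

# F(t) = [t, 2t] on [0, 1]; the Aumann integral is [1/2, 1].
print(aumann_integral(lambda t: t, lambda t: 2.0 * t))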
Theorem 1.7.1 If F : [0, 1] → Kc(Rn) is measurable and integrally bounded, then it is Aumann integrable over each [a, s] ⊂ [0, 1], with $\int_a^s F(t)\,dt \in K_c(\mathbb{R}^n)$ for all s ∈ [a, 1].
For a set valued mapping F as in Theorem 1.7.1, the Castaing Representation Theorem 1.5.1 applies and provides a sequence {fi} of integrable selectors which are pointwise dense in F. Moreover,
$$\int_0^1 F(t)\,dt = \mathrm{cl}\left\{ \int_0^1 f_i(t)\,dt : i = 1, 2, \cdots \right\}, \tag{1.7.3}$$
and so we need only consider these selectors to evaluate $\int_0^1 F(t)\,dt$.
The Aumann integrability of a mapping F : [0, 1] → Kc (Rn) is fundamentally related to the Bochner integrability of its support function.
Theorem 1.7.2 Suppose that F : [0, 1] → Kc(Rn) is measurable. Then F is Aumann integrable if and only if s(·, F(·)) : [0, 1] → C(S^{n−1}) is Bochner integrable, in which case
$$s\Big(\cdot, \int_0^1 F(t)\,dt\Big) = \int_0^1 s(\cdot, F(t))\,dt, \tag{1.7.4}$$
where the integral on the right is the Bochner integral.

From (1.7.4), we obtain the pointwise equality
$$s\Big(p, \int_0^1 F(t)\,dt\Big) = \int_0^1 s(p, F(t))\,dt \tag{1.7.5}$$
for all p ∈ S^{n−1}, where the integral on the right is now the Lebesgue integral.
Using Theorem 1.7.2, we find that the Aumann integral satisfies
$$\int_0^1 (F(t) + G(t))\,dt = \int_0^1 F(t)\,dt + \int_0^1 G(t)\,dt, \tag{1.7.6}$$
$$\int_a^c F(t)\,dt = \int_a^b F(t)\,dt + \int_b^c F(t)\,dt, \quad 0 \le a \le b \le c \le 1, \tag{1.7.7}$$
and
$$\int_0^1 \lambda F(t)\,dt = \lambda \int_0^1 F(t)\,dt, \quad \lambda \in \mathbb{R}, \tag{1.7.8}$$
for all Aumann integrable F, G : [0, 1] → Kc(Rn), with
$$\int_0^1 F(t)\,dt \subseteq \int_0^1 G(t)\,dt \quad \text{if } F(t) \subseteq G(t) \text{ for all } t \in [0, 1]. \tag{1.7.9}$$
In addition, the Aumann integral uniquely determines its integrand.
Proposition 1.7.1 If F, G : [0, 1] → Kc(Rn) are Aumann integrable with $\int_0^1 F(t)\,dt = \int_0^1 G(t)\,dt$, then F(t) = G(t) for almost all t ∈ [0, 1].
Similarly the following convergence properties can be established.
Theorem 1.7.3 Let Fi, F : [0, 1] → Kc(Rn), i = 1, 2, · · ·, be measurable and uniformly integrally bounded. If Fi(t) → F(t) for all t ∈ [0, 1] as i → ∞, then
$$A_i = \int_0^1 F_i(t)\,dt \ \to\ A = \int_0^1 F(t)\,dt \quad \text{as } i \to \infty. \tag{1.7.10}$$
Theorem 1.7.4 Let Fi : [0, 1] → Kc(Rn), i = 1, 2, · · ·, be measurable and uniformly integrally bounded, and suppose that $A_i = \int_0^1 F_i(t)\,dt \to A \in K_c(\mathbb{R}^n)$ as i → ∞. Then there exists a measurable mapping F : [0, 1] → Kc(Rn) such that $A = \int_0^1 F(t)\,dt$.
Theorem 1.7.5 If F, G : [0, 1] → Kc(Rn) are integrable, then so also is D[F(·), G(·)] : [0, 1] → R, and
$$D\Big(\int_0^1 F(t)\,dt, \int_0^1 G(t)\,dt\Big) \le \int_0^1 D[F(t), G(t)]\,dt. \tag{1.7.11}$$
Integration and differentiation of set valued mappings F : [0, 1] → Kc (Rn ) are
essentially inverse operations.
Proposition 1.7.2 Let F : [0, 1] → Kc(Rn) be measurable and integrally bounded. Then
$$\lim_{\Delta t \to 0^+} \frac{1}{\Delta t} \int_{t_0}^{t_0 + \Delta t} F(t)\,dt = F(t_0), \tag{1.7.12}$$
for almost all t0 ∈ [0, 1]. In particular, (1.7.12) holds at all t0 ∈ [0, 1) when F is continuous.
Theorem 1.7.6 Let F : [0, 1] → Kc(Rn) be measurable and integrally bounded. Then A : [0, 1] → Kc(Rn) defined by
$$A(t) = \int_0^t F(s)\,ds, \tag{1.7.13}$$
for all t ∈ [0, 1], is Hukuhara differentiable for almost all t0 ∈ (0, 1), with Hukuhara derivative DH A(t0) = F(t0).
A counterpart of the first Fundamental Theorem of Calculus,
$$F(t_1) = F(t_0) + \int_{t_0}^{t_1} D_H F(t)\,dt, \quad 0 \le t_0 \le t_1 \le 1, \tag{1.7.14}$$
holds for a Hukuhara differentiable F : [0, 1] → Kc(Rn) with continuous Hukuhara derivative DH F on [0, 1].
1.8 Subsets of Banach Spaces
Let E be a real Banach space with norm ‖·‖ and the metric generated by it. Let (2^E)_b be the collection of all nonempty bounded subsets of E with the Hausdorff pseudometric
$$D[A, B] = \max\Big\{ \sup_{x \in B} d(x, A),\ \sup_{y \in A} d(y, B) \Big\}, \tag{1.8.1}$$
where d(x, A) = inf{d(x, y) : y ∈ A}, A, B ∈ (2^E)_b. Denote by K(E) (Kc(E)) the collection of all nonempty compact (compact convex) subsets of E, which are considered as subspaces of (2^E)_b. We note that on K(E) (Kc(E)) the topology of the space (2^E)_b induces the Hausdorff metric. Also, K(E) (Kc(E)) is a complete metric space, and K(E) (Kc(E)) is separable if E is a separable space.
It is known that if the space Kc (E) is equipped with the natural algebraic
operations of addition and nonnegative scalar multiplication, then Kc (E) becomes a semilinear metric space which can be embedded as a complete cone in
a corresponding Banach space. See Tolstonogov [1] and Brandao et al. [1].
Let T = [0, a], a > 0. Then a mapping F : T → K(E) is said to be strongly measurable if it is almost everywhere (a.e.) on T a pointwise limit of a sequence Fn : T → K(E), n ≥ 1, of step mappings.
If D(F(t), Θ) ≤ λ(t) a.e. on T, where λ(t) is summable on T and Θ is the zero element of E, regarded as a one-point set, then F is said to be integrally bounded on T. For a set valued mapping F : T → E, we shall denote by $(A)\int_{T_0} F(s)\,ds$ the integral in the sense of Aumann on the measurable set T0 ⊂ T, that is,
$$(A)\int_{T_0} F(s)\,ds = \left\{ \int_{T_0} f(s)\,ds : f \text{ is a Bochner integrable selector of } F \right\}.$$
For a strongly measurable mapping F : T → Kc(E), the integral $\int_{T_0} F(s)\,ds$ in the sense of Bochner is introduced in a natural way, since, as pointed out earlier, Kc(E) can be embedded as a complete cone into a corresponding Banach space.
If a multifunction F : T → E with compact convex values is strongly measurable and integrally bounded, then
$$\int_{T_0} F(s)\,ds = (A)\int_{T_0} F(s)\,ds \tag{1.8.2}$$
on the measurable set T0 ⊂ T.
Let A, B ∈ Kc(E). The set C ∈ Kc(E) satisfying A = B + C is known as the Hukuhara difference of the sets A and B and is denoted by the symbol A − B. We say that the mapping F : T → Kc(E) has a Hukuhara derivative DH F(t0) at a point t0 ∈ T if there exists an element DH F(t0) ∈ Kc(E) such that
$$\lim_{h \to 0^+} \frac{F(t_0 + h) - F(t_0)}{h} \quad \text{and} \quad \lim_{h \to 0^+} \frac{F(t_0) - F(t_0 - h)}{h}$$
exist in the topology of Kc(E) and are equal to DH F(t0).
By embedding Kc(E) as a complete cone in a corresponding Banach space and taking into account the result on differentiation of the Bochner integral, we find that if
$$F(t) = X_0 + \int_0^t \Phi(s)\,ds, \qquad X_0 \in K_c(E), \tag{1.8.3}$$
where Φ : T → Kc(E) is integrable in the sense of Bochner, then DH F(t) exists a.e. on T and the equality
$$D_H F(t) = \Phi(t) \quad \text{a.e. on } T \tag{1.8.4}$$
holds.
Let µ be Lebesgue measure, L the σ-algebra of Lebesgue measurable subsets of J = [t0, b], t0 ≥ 0, b ∈ (t0, ∞), and E a metrizable space. A multifunction F : J → E with closed values is measurable if the set F^{−1}(B) = {t ∈ J : F(t) ∩ B ≠ ∅} is measurable for each closed subset B of E. If F : Y → E, where Y is a topological space, then measurability of F means measurability when Y is assigned the σ-algebra BY of Borel subsets of Y. Similarly, if F : J × Y → E, then the measurability of F is defined in terms of the product σ-algebra L ⊗ BY generated by the sets A × B, where A ∈ L and B ∈ BY. If E is separable, then for a multifunction F : J → E with compact values the definitions of strong measurability and measurability are equivalent.
A multifunction F from a topological space Y into the space E is upper semicontinuous (usc) at a point y0 ∈ Y if, for any ε > 0, there exists a neighborhood U(y0) of the point y0 such that F(y) ⊂ F(y0) + ε·B for all y ∈ U(y0), where B is the unit ball of E. A multifunction F : Y → E is said to be usc if it is usc at every point y0 ∈ Y. For a multifunction F : Y → E with compact values the definition of usc is equivalent to the following: the set F^{−1}(U) is closed for each closed subset U of E.
Let us recall that the Hausdorff metric (1.8.1) satisfies the following properties:

D[U + W, V + W] = D[U, V],   (1.8.5)
D[λU, λV] = λ D[U, V],   (1.8.6)
D[U, V] ≤ D[U, W] + D[W, V],   (1.8.7)

for all U, V, W ∈ Kc(E) and λ ∈ R+. Also, for U ∈ Kc(E), we set

D[U, Θ] = ‖U‖ = sup{‖u‖ : u ∈ U}.   (1.8.8)

1.9 Notes and Comments
The preliminary material, including the calculus of set valued maps assembled in this chapter, is taken from Arstein [1], Banks and Jacobs [1], De Blasi and Iervolino [1], Diamond and Kloeden [1], Hukuhara [1,2], Radström [1] and Tolstonogov [1]. See also, for further details, Aubin and Frankowska [1], Aumann [1], Castaing and Valadier [1], Debreu [1], Hermes [1], Hausdorff [1], Lay [1] and Rockafeller [1].
Chapter 2
Basic Theory
2.1 Introduction
This chapter is devoted to the basic theory of set differential equations (SDEs).
We begin Section 2.2 with the formulation of the initial value problem of SDEs
in the metric space (Kc (Rn ), D). Utilizing the properties of the Hausdorff metric
D[·, ·] and employing the known theory of differential and integral inequalities,
we establish a variety of comparison results, that are required for later discussion. Section 2.3 deals with the convergence of successive approximations of the
initial value problem (IVP) under the general uniqueness assumption of Perron
type, using a comparison function, which is rather instructive. The continuous
dependence of solutions relative to the initial conditions is also studied under
the same conditions. In Section 2.4, we investigate an existence result of Peano’s
type and then consider the existence of extremal solutions of SDE. For this purpose, one needs to introduce a partial order in (Kc (Rn), D), prove the required
comparison result for strict inequalities, and then, utilizing it, discuss the existence of extremal solutions. Having the notion of maximal solution for SDE,
we then prove the comparison result analogous to the well known comparison
theorem in the ordinary differential system.
The monotone iterative technique is considered for SDE in Section 2.5, employing the method of upper and lower solutions. The results considered are so
general that they contain several special cases of interest. Section 2.6 contains
a global existence result, and Section 2.7 considers the error estimate between
the solutions and approximate solutions of SDEs.
In Section 2.8, we discuss the IVP of SDE without assuming any continuity of
the function involved and obtain the existence of an Euler solution which reduces
to the actual solution when the continuity assumption is made. Using the
proximal normal aiming condition, we study in Section 2.9 the flow invariance
of solutions relative to a closed set. Here weak and strong flow invariance
results are considered in terms of nonsmooth analysis. Section 2.10 establishes
an existence result for SDE when the function involved is upper semicontinuous.
Conditions are also provided to obtain the function involved in the SDE from a multifunction with merely compact values, by suitably utilizing convexification.
Notes and comments are provided in Section 2.11.
2.2 Comparison Principles
Let us consider the initial value problem (IVP) for the set differential equation

DH U = F(t, U), U(t0) = U0 ∈ Kc(Rn), t0 ≥ 0,   (2.2.1)

where F ∈ C[R+ × Kc(Rn), Kc(Rn)] and DH U is the Hukuhara derivative of U. A mapping U ∈ C¹[J, Kc(Rn)], where J = [t0, t0 + a], a > 0, is said to be a solution of (2.2.1) on J if it satisfies (2.2.1) on J.
Since U(t) is continuously differentiable, we have
$$U(t) = U_0 + \int_{t_0}^{t} D_H U(s)\,ds, \quad t \in J. \tag{2.2.2}$$
We therefore associate with the IVP (2.2.1) the following integral equation:
$$U(t) = U_0 + \int_{t_0}^{t} F(s, U(s))\,ds, \quad t \in J, \tag{2.2.3}$$
where the integral in (2.2.3) is the Hukuhara integral. Observe also that U(t) is a solution of (2.2.1) if and only if it satisfies (2.2.3) on J.
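To get a concrete feel for (2.2.1)–(2.2.3), consider the scalar interval case: if U(t) = [u_lo(t), u_hi(t)] and F acts endpointwise, the SDE decouples into ordinary differential equations for the endpoints, provided the width of the solution stays nondecreasing (as Proposition 1.6.1 requires). The following Python sketch is our own illustration under these assumptions, not a general solver: it integrates DH U = U, U(0) = [1, 2], by Euler's method applied to the endpoints; the exact solution is U(t) = [e^t, 2e^t].

import math

def euler_interval_sde(f_lo, f_hi, U0, t0=0.0, T=1.0, n=1000):
    # endpointwise Euler scheme for DH U = F(t, U), with F acting on the endpoints
    lo, hi = U0
    h = (T - t0) / n
    for k in range(n):
        t = t0 + k * h
        lo, hi = lo + h * f_lo(t, lo), hi + h * f_hi(t, hi)
        assert lo <= hi   # the endpoints must stay ordered
    return (lo, hi)

approx = euler_interval_sde(lambda t, x: x, lambda t, x: x, (1.0, 2.0))
print(approx)                  # close to (e, 2e) = (2.718..., 5.436...)
print(math.e, 2.0 * math.e)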
Utilizing the properties of the Hausdorff metric D[·, ·] and the integral, and
employing the known theory of differential and integral inequalities for ordinary
differential equations, we shall first establish the following comparison principles,
which we need for later discussion.
Theorem 2.2.1 Assume that F ∈ C[J × Kc(Rn), Kc(Rn)] and, for t ∈ J and U, V ∈ Kc(Rn),

D[F(t, U), F(t, V)] ≤ g(t, D[U, V]),   (2.2.4)

where g ∈ C[J × R+, R+] and g(t, w) is monotone nondecreasing in w for each t ∈ J. Suppose further that the maximal solution r(t, t0, w0) of the scalar differential equation

w′ = g(t, w), w(t0) = w0 ≥ 0,   (2.2.5)

exists on J. Then, if U(t), V(t) are any two solutions of (2.2.1) through (t0, U0), (t0, V0) respectively on J, it follows that

D[U(t), V(t)] ≤ r(t, t0, w0), t ∈ J,   (2.2.6)

provided D[U0, V0] ≤ w0.
Proof Set m(t) = D[U(t), V(t)], so that m(t0) = D[U0, V0] ≤ w0. Then, in view of the properties of the metric D, we get
$$\begin{aligned}
m(t) &= D\Big[U_0 + \int_{t_0}^{t} F(s, U(s))\,ds,\ V_0 + \int_{t_0}^{t} F(s, V(s))\,ds\Big] \\
&\le D\Big[U_0 + \int_{t_0}^{t} F(s, U(s))\,ds,\ U_0 + \int_{t_0}^{t} F(s, V(s))\,ds\Big] \\
&\quad + D\Big[U_0 + \int_{t_0}^{t} F(s, V(s))\,ds,\ V_0 + \int_{t_0}^{t} F(s, V(s))\,ds\Big] \\
&= D\Big[\int_{t_0}^{t} F(s, U(s))\,ds,\ \int_{t_0}^{t} F(s, V(s))\,ds\Big] + D[U_0, V_0].
\end{aligned}$$
Now, using the properties of the integrals and condition (2.2.4), we observe that
$$\begin{aligned}
m(t) &\le m(t_0) + \int_{t_0}^{t} D[F(s, U(s)), F(s, V(s))]\,ds \\
&\le m(t_0) + \int_{t_0}^{t} g(s, D[U(s), V(s)])\,ds \\
&= m(t_0) + \int_{t_0}^{t} g(s, m(s))\,ds, \quad t \in J.
\end{aligned}$$
Now, applying Theorem 1.9.2 given in Lakshmikantham and Leela [1], we conclude that

m(t) ≤ r(t, t0, w0), t ∈ J.

This establishes Theorem 2.2.1.
Remark 2.2.1 If we employ the theory of differential inequalities instead of
integral inequalities, we can dispense with the monotone character of g(t, w)
assumed in Theorem 2.2.1. This is the content of the next comparison principle.
Theorem 2.2.2 Let the assumptions of Theorem 2.2.1 hold, except for the nondecreasing property of g(t, w) in w. Then the conclusion (2.2.6) is valid.
Proof For small h > 0, the Hukuhara differences U(t + h) − U(t) and V(t + h) − V(t) exist, and we have, for t ∈ J,

m(t + h) − m(t) = D[U(t + h), V(t + h)] − D[U(t), V(t)].

Using property (1.3.8) of D, we get
$$D[U(t+h), V(t+h)] \le D[U(t+h), U(t) + hF(t, U(t))] + D[U(t) + hF(t, U(t)), V(t+h)]$$
and
$$D[U(t) + hF(t, U(t)), V(t+h)] \le D[V(t) + hF(t, V(t)), V(t+h)] + D[U(t) + hF(t, U(t)), V(t) + hF(t, V(t))].$$
Also, we observe that
$$\begin{aligned}
D[U(t) + hF(t, U(t)), V(t) + hF(t, V(t))] &\le D[U(t) + hF(t, U(t)), U(t) + hF(t, V(t))] \\
&\quad + D[U(t) + hF(t, V(t)), V(t) + hF(t, V(t))] \\
&= D[hF(t, U(t)), hF(t, V(t))] + D[U(t), V(t)].
\end{aligned}$$
Hence, it follows that
$$\begin{aligned}
\frac{m(t+h) - m(t)}{h} &\le \frac{1}{h} D[U(t+h), U(t) + hF(t, U(t))] + \frac{1}{h} D[V(t) + hF(t, V(t)), V(t+h)] \\
&\quad + \frac{1}{h} D[hF(t, U(t)), hF(t, V(t))],
\end{aligned}$$
and consequently, in view of the properties of D and the fact that U(t), V(t) are solutions of (2.2.1), we find that
$$\begin{aligned}
D^+ m(t) &\le \limsup_{h \to 0^+} \frac{1}{h}\big[m(t+h) - m(t)\big] \\
&\le \limsup_{h \to 0^+} D\Big[\frac{U(t+h) - U(t)}{h},\ F(t, U(t))\Big] + \limsup_{h \to 0^+} D\Big[F(t, V(t)),\ \frac{V(t+h) - V(t)}{h}\Big] \\
&\quad + D[F(t, U(t)), F(t, V(t))].
\end{aligned}$$
Here we have used the fact that, writing Z(t, h) = U(t + h) − U(t),
$$D[U(t+h), U(t) + hF(t, U(t))] = D[U(t) + Z(t, h), U(t) + hF(t, U(t))] = D[Z(t, h), hF(t, U(t))] = D[U(t+h) - U(t), hF(t, U(t))].$$
Since U and V are solutions of (2.2.1), the first two terms on the right vanish, so that D+ m(t) ≤ D[F(t, U(t)), F(t, V(t))] ≤ g(t, m(t)), t ∈ J. The conclusion (2.2.6) now follows from Theorem 1.4.1 in Lakshmikantham and Leela [1].
The next comparison result provides an estimate under weaker assumptions.
Theorem 2.2.3 Assume that F ∈ C[J × Kc(Rn), Kc(Rn)] and
$$\limsup_{h \to 0^+} \frac{1}{h}\big[D[U + hF(t, U), V + hF(t, V)] - D[U, V]\big] \le g(t, D[U, V]), \quad t \in J,$$
where U, V ∈ Kc(Rn) and g ∈ C[J × R+, R]. Suppose that the maximal solution r(t, t0, w0) of (2.2.5) exists on J. Then the conclusion of Theorem 2.2.1 is valid.
Proof Proceeding as in the proof of Theorem 2.2.2, we see that
$$\begin{aligned}
m(t+h) - m(t) &= D[U(t+h), V(t+h)] - D[U(t), V(t)] \\
&\le D[U(t+h), U(t) + hF(t, U(t))] + D[V(t) + hF(t, V(t)), V(t+h)] \\
&\quad + D[U(t) + hF(t, U(t)), V(t) + hF(t, V(t))] - D[U(t), V(t)].
\end{aligned}$$
Hence
$$\begin{aligned}
D^+ m(t) &= \limsup_{h \to 0^+} \frac{1}{h}\big[m(t+h) - m(t)\big] \\
&\le \limsup_{h \to 0^+} \frac{1}{h}\big[D[U(t) + hF(t, U(t)), V(t) + hF(t, V(t))] - D[U(t), V(t)]\big] \\
&\quad + \limsup_{h \to 0^+} D\Big[\frac{U(t+h) - U(t)}{h},\ F(t, U(t))\Big] + \limsup_{h \to 0^+} D\Big[F(t, V(t)),\ \frac{V(t+h) - V(t)}{h}\Big] \\
&\le g(t, D[U(t), V(t)]) = g(t, m(t)), \quad t \in J.
\end{aligned}$$
The conclusion follows as before by Theorem 1.4.1 in Lakshmikantham and Leela [1], and the proof is complete.
We wish to remark that in Theorem 2.2.3, g(t, w) need not be nonnegative
and therefore the estimate in Theorem 2.2.3 would be finer than the estimates
in Theorems 2.2.1 and 2.2.2.
As special cases of Theorems 2.2.1, 2.2.2 and 2.2.3, we have the following
important corollaries.
Corollary 2.2.1 Assume that F ∈ C[J × Kc (Rn), Kc (Rn )] and either
(a) D[F (t, U ), θ] ≤ g(t, D[U, θ]) or
(b) lim sup_{h→0⁺} (1/h)[D[U + hF(t, U), θ] − D[U, θ]] ≤ g(t, D[U, θ]), where g ∈ C[J × R+, R].
Then, if D[U0 , θ] ≤ w0, we have
D[U (t), θ] ≤ r(t, t0, w0), t ∈ J,
where r(t, t0, w0) is the maximal solution of (2.2.5) on J.
Corollary 2.2.2 The function g(t, w) = λ(t)w, where λ(t) ≥ 0 is continuous, is admissible in Theorem 2.2.1 to give
m(t) ≤ m(t0) + ∫_{t0}^{t} λ(s)m(s) ds,  t ∈ J.
The Gronwall inequality then implies
m(t) ≤ m(t0) exp(∫_{t0}^{t} λ(s) ds),  t ∈ J,
which shows that (2.2.6) reduces to
D[U(t), V(t)] ≤ D[U0, V0] exp(∫_{t0}^{t} λ(s) ds),  t ∈ J.
Corollary 2.2.3 The function g(t, w) = −λ(t)w, with λ(t) as in Corollary
2.2.2, is also admissible in Theorem 2.2.3, and we get,
D[U(t), V(t)] ≤ D[U0, V0] exp(−∫_{t0}^{t} λ(s) ds),  t ∈ J.
If λ(t) = λ > 0, we find that
D[U (t), V (t)] ≤ D[U0 , V0 ] e−λ(t−t0) , t ∈ J.
If J = [t0, ∞), we see that limt→∞ D[U (t), V (t)] = 0, showing the advantage of
Theorem 2.2.3.
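The estimates of Corollaries 2.2.2 and 2.2.3 can be illustrated on a simple linear example. The Python sketch below is an illustration added here, not part of the original text: for the interval equation DH U = λU with λ > 0, the endpoint functions satisfy u_i′ = λ u_i (so the width is nondecreasing and the Hukuhara derivative exists), and the growth bound of Corollary 2.2.2 holds with equality; the decay bound of Corollary 2.2.3 is shown in the degenerate singleton case, where DH U = −λU reduces to the scalar equation u′ = −λu. The choice of F and the initial sets is ours.

```python
import math

lam, t0 = 0.7, 0.0

def D(A, B):
    """Hausdorff distance between intervals."""
    return max(abs(A[0] - B[0]), abs(A[1] - B[1]))

def solution(U0):
    """Exact solution of DH U = lam * U through the interval U0 at t0."""
    return lambda t: tuple(u * math.exp(lam * (t - t0)) for u in U0)

U, V = solution((1.0, 2.0)), solution((1.5, 2.2))
for t in (0.0, 0.5, 1.0, 2.0):
    bound = D(U(t0), V(t0)) * math.exp(lam * (t - t0))      # Corollary 2.2.2
    assert D(U(t), V(t)) <= bound + 1e-12

# Corollary 2.2.3, singleton case: DH U = -lam * U becomes u' = -lam * u.
u0, v0 = 1.0, 3.0
u = lambda t: u0 * math.exp(-lam * (t - t0))
v = lambda t: v0 * math.exp(-lam * (t - t0))
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(u(t) - v(t)) <= abs(u0 - v0) * math.exp(-lam * (t - t0)) + 1e-12
print("estimates of Corollaries 2.2.2 and 2.2.3 verified on a linear example")
```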
2.3 Local Existence and Uniqueness
We shall begin by proving an existence and uniqueness result under assumptions more general than a Lipschitz-type condition, which exhibits the idea of the comparison principle.
Theorem 2.3.1 Assume that
(a) F ∈ C[R0, Kc(Rn)] and D[F(t, U), θ] ≤ M0 on R0, where R0 = J × B(U0, b) and B(U0, b) = {U ∈ Kc(Rn) : D[U, U0] ≤ b};
(b) g ∈ C[J × [0, 2b], R+], g(t, w) ≤ M1 on J × [0, 2b], g(t, 0) ≡ 0, g(t, w) is nondecreasing in w for each t ∈ J, and w(t) ≡ 0 is the only solution of
w′ = g(t, w),  w(t0) = 0,   (2.3.1)
on J;
(c) D[F(t, U), F(t, V)] ≤ g(t, D[U, V]) on R0.
Then the successive approximations defined by
U_{n+1}(t) = U0 + ∫_{t0}^{t} F(s, Un(s)) ds,  n = 0, 1, 2, . . . ,   (2.3.2)
exist on J0 = [t0, t0 + η), where η = min{a, b/M}, M = max{M0, M1}, as continuous functions and converge uniformly to the unique solution U(t) of the IVP (2.2.1) on J0.
Proof Using the properties of the Hausdorff metric, we get by induction
D[U_{n+1}(t), U0] = D[ U0 + ∫_{t0}^{t} F(s, Un(s)) ds, U0 ]
 = D[ ∫_{t0}^{t} F(s, Un(s)) ds, θ ]
 ≤ ∫_{t0}^{t} D[F(s, Un(s)), θ] ds
 ≤ M0(t − t0) ≤ M0 η ≤ b,
and consequently the successive approximations {Un(t)} are well defined on J0.
We shall next define the successive approximations of (2.3.1) as follows:
w0(t) = M(t − t0),  t ∈ J0,
w_{n+1}(t) = ∫_{t0}^{t} g(s, wn(s)) ds,  n = 0, 1, 2, . . .   (2.3.3)
An easy induction, in view of the monotone character of g(t, w) in w, proves that {wn(t)} are well defined and
0 ≤ w_{n+1}(t) ≤ wn(t),  t ∈ J0.   (2.3.4)
Since |w′_n(t)| ≤ g(t, w_{n−1}(t)) ≤ M1, we conclude by the Ascoli–Arzelà theorem and the monotonicity of the sequence {wn(t)} that
lim_{n→∞} wn(t) = w(t)
uniformly on J0. It is also clear that w(t) satisfies (2.3.1) and therefore, by condition (b), w(t) ≡ 0 on J0.
We observe that
D[U1(t), U0] ≤ ∫_{t0}^{t} D[F(s, U0), θ] ds ≤ M(t − t0) ≡ w0(t).
Assume that for some k > 1 we have
D[Uk(t), U_{k−1}(t)] ≤ w_{k−1}(t) on J0.
Since
D[U_{k+1}(t), Uk(t)] ≤ ∫_{t0}^{t} D[F(s, Uk(s)), F(s, U_{k−1}(s))] ds,
using condition (c) and the monotone character of g(t, w), we get
D[U_{k+1}(t), Uk(t)] ≤ ∫_{t0}^{t} g(s, D[Uk(s), U_{k−1}(s)]) ds ≤ ∫_{t0}^{t} g(s, w_{k−1}(s)) ds = wk(t).
Thus by induction, the estimate
D[U_{n+1}(t), Un(t)] ≤ wn(t),  t ∈ J0,   (2.3.5)
is true for all n.
Letting u(t) = D[U_{n+1}(t), Un(t)], t ∈ J0, the proof of Theorem 2.2.2 shows that
D⁺u(t) ≤ g(t, D[Un(t), U_{n−1}(t)]) ≤ g(t, w_{n−1}(t)),  t ∈ J0.
Now let n ≤ m. Setting v(t) = D[Un(t), Um(t)], we obtain from (2.3.2)
D⁺v(t) ≤ D[DH Un(t), DH Um(t)] = D[F(t, U_{n−1}(t)), F(t, U_{m−1}(t))]
 ≤ D[F(t, Un(t)), F(t, U_{n−1}(t))] + D[F(t, Un(t)), F(t, Um(t))] + D[F(t, Um(t)), F(t, U_{m−1}(t))]
 ≤ g(t, w_{n−1}(t)) + g(t, w_{m−1}(t)) + g(t, D[Un(t), Um(t)])
 ≤ g(t, v(t)) + 2g(t, w_{n−1}(t)),  t ∈ J0.
Here we have used the arguments of the proof of Theorem 2.2.2, the monotone character of g(t, w), and the fact that w_{m−1} ≤ w_{n−1}, since n ≤ m and {wn(t)} is a decreasing sequence. The comparison Theorem 1.4.1 in Lakshmikantham and Leela [1] yields the estimate
v(t) ≤ rn(t),  t ∈ J0,
where rn(t) is the maximal solution of
r′_n = g(t, rn) + 2g(t, w_{n−1}(t)),  rn(t0) = 0,   (2.3.6)
for each n. Since 2g(t, w_{n−1}(t)) → 0 uniformly on J0 as n → ∞, it follows by Lemma 1.3.1 in Lakshmikantham and Leela [1] that rn(t) → 0 as n → ∞ uniformly on J0. This implies, in view of the estimate v(t) ≤ rn(t) and the definition of v(t), that Un(t)
converges uniformly to U (t), and clearly U (t) is a solution of (2.2.1).
To show uniqueness, let V (t) be another solution of (2.2.1), on J0 . Then
setting m(t) = D[U (t), V (t)] and noting that m(t0 ) = 0, we get, as before,
D⁺m(t) ≤ g(t, m(t)),  t ∈ J0,
and m(t) ≤ r(t, t0, 0), t ∈ J0, by Theorem 2.2.1. Since r(t, t0, 0) ≡ 0 by assumption, we get U(t) ≡ V(t) on J0, proving the theorem.
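The successive approximations (2.3.2) can be computed explicitly in simple cases. The Python sketch below is an illustration added here, not part of the original text: it runs the Picard iteration for the interval equation DH U = U with U0 = [1, 2], whose exact solution is U(t) = [e^t, 2e^t]; the right-hand side F(t, U) = U, the grid, and the helper picard_step are our choices, and integration of an interval-valued map acts endpoint-wise.

```python
import numpy as np

# Successive approximations (2.3.2) for DH U = U, U(0) = [1, 2].
# Intervals are stored as rows [lower, upper].
t = np.linspace(0.0, 1.0, 201)
U0 = np.array([1.0, 2.0])

def F(t_val, U):                      # chosen example: F(t, U) = U
    return U

def picard_step(U_curr):
    """U_{n+1}(t) = U0 + integral_0^t F(s, U_n(s)) ds, via a cumulative trapezoid rule."""
    vals = np.array([F(s, u) for s, u in zip(t, U_curr)])
    integral = np.concatenate(
        [np.zeros((1, 2)),
         np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t)[:, None], axis=0)])
    return U0 + integral

U_n = np.tile(U0, (len(t), 1))        # U_0(t) = U0
for _ in range(25):
    U_n = picard_step(U_n)

exact = np.column_stack([np.exp(t), 2 * np.exp(t)])
print("max endpoint error:", np.max(np.abs(U_n - exact)))   # quadrature error only
```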
We shall discuss, in the next result, the continuous dependence of solutions on the initial values. We need the following lemma before we proceed.
Lemma 2.3.1 Let F ∈ C[J × Kc(Rn), Kc(Rn)] and let
G(t, r) = max{D[F(t, U), θ] : D[U, U0] ≤ r}.
Assume that r∗(t, t0, 0) is the maximal solution of
w′ = G(t, w),  w(t0) = 0, on J.
Let U(t) = U(t, t0, U0) be the solution of (2.2.1). Then
D[U(t), U0] ≤ r∗(t, t0, 0),  t ∈ J.
Proof Define m(t) = D[U(t), U0], t ∈ J. Then Corollary 2.2.1 shows that
D⁺m(t) ≤ D[DH U(t), θ] = D[F(t, U(t)), θ] ≤ max_{D[U, U0] ≤ m(t)} D[F(t, U), θ] = G(t, m(t)).
This implies, by Theorem 1.4.1 in Lakshmikantham and Leela [1], that
D[U(t), U0] ≤ r∗(t, t0, 0),  t ∈ J,
proving the lemma.
Theorem 2.3.2 Suppose that the assumptions (a), (b), (c) of Theorem 2.3.1
hold. Assume further that the solutions w(t, t0, w0) of (2.2.5) through every
point (t0 , w0) are continuous with respect to (t0, w0). Then the solutions U (t) =
U (t, t0, U0 ) of (2.2.1) are continuous relative to (t0, U0 ).
Proof Let U (t) = U (t, t0, U0), V (t) = V (t, t0, V0), U0 , V0 ∈ Kc (Rn), be the two
solutions of (2.2.1). Then defining m(t) = D[U (t), V (t)], we get from Theorem
2.2.1, the estimate
D[U (t), V (t)] ≤ r (t, t0, D[U0, V0]) , t ∈ J.
Since limU0 →V0 r (t, t0, D[U0, V0 ]) = r(t, t0, 0) uniformly on J and by hypothesis
r(t, t0, 0) ≡ 0, it follows that limU0 →V0 U (t, t0, U0) = V (t, t0, V0) uniformly and
hence continuity of U (t, t0, U0) relative to U0 is valid.
To prove the continuity relative to t0, we let U (t) = U (t, t0, U0 ), V (t) =
V (t, τ0, U0) be the two solutions of (2.2.1) and let τ0 > t0 . As before, setting
m(t) = D[U (t), V (t)], noting that m(τ0 ) = D[U (τ0 ), U0 ], we obtain from Lemma
2.3.1,
m(τ0 ) ≤ r∗ (τ0 , t0, 0),
and consequently, by Theorem 2.2.1, we arrive at
m(t) ≤ r̃(t),  t ≥ τ0,
where r̃(t) = r̃(t, τ0, r∗(τ0, t0, 0)) is the maximal solution of (2.2.5) through (τ0, r∗(τ0, t0, 0)). Since r∗(t0, t0, 0) = 0, we have
lim_{τ0→t0} r̃(t, τ0, r∗(τ0, t0, 0)) = r̃(t, t0, 0)
uniformly on J. By hypothesis r̃(t, t0, 0) ≡ 0, which proves the continuity of U(t, t0, U0) with respect to t0, and the proof is complete.
2.4 Local Existence and Extremal Solutions
We begin by proving the local existence result corresponding to Peano’s theorem
for the IVP (2.2.1). For this purpose, we need the Ascoli-Arzela theorem suitably
generalized in the present set up, which we state below, see Morales[1].
Since Kc (Rn ) is a closed subset of K(Rn ), and the family of compact sets
included in a closed ball of Rn is compact, the following Ascoli-Arzela theorem
holds.
Theorem 2.4.1 If {Un(t)} is a sequence of equicontinuous and equibounded
multimappings defined on an interval J, we can extract a subsequence that converges uniformly to a continuous multimapping U (t) on J.
Using Theorem 2.4.1, we can prove the local existence result for the IVP
(2.2.1).
Theorem 2.4.2 Assume that F ∈ C[R0, Kc(Rn)], where R0 = J × B[U0, b], B[U0, b] = {U ∈ Kc(Rn) : D[U, U0] ≤ b}, and D[F(t, U), θ] ≤ M on R0. Then there exists at least one solution U(t) of the IVP (2.2.1) on J0 = [t0, t0 + α], where α = min{a, b/M}. Here, as before, J = [t0, t0 + a], a > 0.
Proof Let U0(·) ∈ C¹[[t0 − δ, t0], Kc(Rn)], δ > 0, be such that U0(t0) = U0, D[U0(t), U0] ≤ b and D[DH U0(t), θ] ≤ M. Consider 0 < ε ≤ δ and define
U_ε(t) = U0(t),  t0 − δ ≤ t ≤ t0,
U_ε(t) = U0 + ∫_{t0}^{t} F(s, U_ε(s − ε)) ds,  t0 ≤ t ≤ t0 + α1,   (2.4.1)
where α1 = min{α, ε}. We then have, using the properties (1.3.8) and (1.7.11) of D,
D[U_ε(t), U0] = D[ U0 + ∫_{t0}^{t} F(s, U_ε(s − ε)) ds, U0 ]
 = D[ ∫_{t0}^{t} F(s, U_ε(s − ε)) ds, θ ]
 ≤ ∫_{t0}^{t} D[F(s, U_ε(s − ε)), θ] ds
 ≤ M(t − t0) ≤ Mα1 ≤ b.
Furthermore,
DH U_ε(t) = F(t, U_ε(t − ε)) on [t0, t0 + α1],   (2.4.2)
and therefore
D[DH U_ε(t), θ] ≤ M.   (2.4.3)
If α1 < α, we can use (2.4.1) to extend U_ε(t), satisfying the relations (2.4.2) and (2.4.3), to [t0 − δ, t0 + α2], where α2 = min{α, 2ε}. Continuing this process, we arrive at an a.e. differentiable function U_ε(t) such that (2.4.1), (2.4.2) and (2.4.3) hold on [t0 − δ, t0 + α]. Thus {U_ε(t)} forms a family of equicontinuous and uniformly bounded functions. By Theorem 2.4.1, we obtain a decreasing sequence {ε_n} such that ε_n → 0 as n → ∞ and U(t) = lim_{n→∞} U_{ε_n}(t) exists uniformly for t0 − δ ≤ t ≤ t0 + α. Since F is uniformly continuous, we see that F(t, U_{ε_n}(t − ε_n)) → F(t, U(t)) uniformly as n → ∞ on J0. This allows term-by-term integration in (2.4.1) with ε = ε_n and α1 = α, which yields
U(t) = U0 + ∫_{t0}^{t} F(s, U(s)) ds,  t ∈ J0.
Hence U (t) is a solution of (2.2.1) on J0 and the proof is complete.
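The delayed-argument construction (2.4.1) is also easy to carry out numerically, because on [t0, t0 + α1] the integrand only involves values of U_ε that are already known. The Python sketch below is an illustration added here, not part of the original text; the example F(t, U) = U, the grid, and the step size are our assumptions, and U_ε only approximates the limit solution for small ε.

```python
import numpy as np

# Delayed-argument approximation (2.4.1) for DH U = F(t, U) with F(t, U) = U,
# U0 = [1, 2], and U0(t) = U0 on [t0 - eps, t0].
t0, alpha, eps = 0.0, 1.0, 0.05
h = eps / 10.0                                     # grid step dividing eps
grid = np.arange(t0 - eps, t0 + alpha + h / 2, h)
U = np.tile(np.array([1.0, 2.0]), (len(grid), 1))  # U_eps(t) = U0 for t <= t0

def F(t_val, U_val):
    return U_val

lag = int(round(eps / h))                          # number of grid steps in eps
start = int(round(eps / h))                        # index of t0 in the grid
for i in range(start, len(grid) - 1):
    # left-endpoint quadrature of U0 + integral_{t0}^{t} F(s, U_eps(s - eps)) ds
    U[i + 1] = U[i] + h * F(grid[i], U[i - lag])

print("U_eps(t0 + alpha) =", U[-1])
print("limit solution    =", np.array([np.exp(alpha), 2 * np.exp(alpha)]))
```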
In order to discuss the existence of extremal solutions for the IVP (2.2.1),
we require a comparison result which demands introducing a partial order in
the metric space (Kc (Rn ), D).
We denote by K (K⁰) the subfamily of Kc(Rn) consisting of sets U ∈ Kc(Rn) such that any u ∈ U is a nonnegative (positive) vector of n components satisfying ui ≥ 0 (ui > 0) for i = 1, 2, . . . , n. Thus K is a cone in Kc(Rn) and K⁰ is the nonempty interior of K. We can therefore induce a partial ordering in Kc(Rn) as follows.
Definition 2.4.1 For any U, V ∈ Kc(Rn), if there exists a Z ∈ Kc(Rn) such that Z ∈ K (Z ∈ K⁰) and
U = V + Z,   (2.4.4)
then we write U ≥ V (U > V). Similarly, one can define U ≤ V (U < V).
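For compact intervals in Kc(R), the ordering of Definition 2.4.1 can be tested directly: the Hukuhara difference Z = U − V exists iff the width of U is at least that of V, in which case Z = [u1 − v1, u2 − v2], and U ≥ V iff in addition Z lies in the cone K, i.e. u1 − v1 ≥ 0. The small Python sketch below is an illustration added here, not part of the original text; the helper names are ours.

```python
def hukuhara_difference(U, V):
    """Z with U = V + Z, if it exists (requires width(U) >= width(V))."""
    (u1, u2), (v1, v2) = U, V
    z1, z2 = u1 - v1, u2 - v2
    return (z1, z2) if z2 >= z1 else None        # None: difference does not exist

def geq(U, V):
    """U >= V in the sense of Definition 2.4.1 (cone K of intervals in [0, oo))."""
    Z = hukuhara_difference(U, V)
    return Z is not None and Z[0] >= 0           # Z is a nonnegative interval

print(geq((1.0, 4.0), (0.5, 2.0)))   # True:  Z = [0.5, 2.0] lies in K
print(geq((1.0, 2.0), (0.5, 3.0)))   # False: Hukuhara difference does not exist
print(geq((0.0, 3.0), (0.5, 2.0)))   # False: Z = [-0.5, 1.0] leaves the cone K
```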
We are now in a position to define the maximal and minimal solutions of
(2.2.1).
Definition 2.4.2 Let R(t) be a solution of the set differential equation (2.2.1).
Then we say that R(t) is the maximal solution of (2.2.1) if, for every solution
U (t) of (2.2.1) existing on J0 , we have
U (t) ≤ R(t), t ∈ J0.
(2.4.5)
We define the minimal solution of (2.2.1) similarly by reversing the inequality
in (2.4.5).
We now state and prove the basic comparison result.
Theorem 2.4.3 Assume that
(i) F ∈ C[R+ × Kc (Rn ), Kc (Rn)], F (t, U ) is monotone nondecreasing in U for
each t ∈ R+ ; that is, whenever U ≤ V, we have
F (t, U ) ≤ F (t, V ), t ∈ R+ ;
(ii) V, W ∈ C 1[R+ , Kc (Rn )],
DH V < F (t, V ) and DH W ≥ F (t, W ), t ∈ R+ ;
(2.4.6)
(iii) V (t0 ) < W (t0).
Then,
V (t) < W (t), t ≥ t0 .
(2.4.7)
Proof Let t1 > 0 be the supremum of all positive numbers δ > 0 such that
V (t0 ) < W (t0 ) implies V (t) < W (t) on [t0, δ].
Clearly t1 > t0 and V (t1) ≤ W (t1 ). Now using the nondecreasing nature of
F (t, U ) in U and the assumption (ii), we arrive at
DH V (t1 ) < F (t1 , V (t1)) ≤ F (t1, W (t1)) ≤ DH W (t1).
It therefore follows that there exists an η > 0 satisfying
V (t) − W (t) > V (t1 ) − W (t1 ), t1 − η < t < t1 .
This implies that t1 > t0 cannot be the supremum due to the continuity of the
functions involved, and hence the relation (2.4.7) holds, completing the proof.
Remark 2.4.1 It is clear that the inequalities 2.4.6 can be replaced by
DH V ≤ F (t, V ) and DH W > F (t, W ), t ∈ R+ ,
respectively, to get the conclusion of Theorem 2.4.3.
We are now ready to prove the existence of extremal solutions of (2.2.1).
Theorem 2.4.4 Let the assumptions of Theorem 2.4.2 hold, and suppose further that F(t, U) is nondecreasing in U for each t ∈ J. Then the IVP (2.2.1) possesses the extremal solutions on J′ = [t0, t0 + α0], where α0 = min{a, b/(2M + b)}.
Proof Let ε = (ε1, ε2, · · · , εn) > 0 be such that ‖ε‖ ≤ b/2. Then consider, for each positive integer N, the following IVP for t ∈ J,
DH U = F(t, U) + ε/N,  U(t0) = U0 + ε/N.   (2.4.8)
We observe that F_N(t, U) = F(t, U) + ε/N is defined and continuous on R = J × B[U0 + ε/N, b/2], where B[U0 + ε/N, b/2] = {U ∈ Kc(Rn) : D[U, U0 + ε/N] ≤ b/2}. Also, D[F_N(t, U), θ] ≤ M + b/2 on R. Hence we deduce from Theorem 2.4.2 that (2.4.8) has a solution U_N(t, ε) ∈ Kc(Rn) on J′. For 0 < ε2 < ε1 ≤ ε, we see that
U_N(t0, ε2) < U_N(t0, ε1),
DH U_N(t, ε2) ≤ F(t, U_N(t, ε2)) + ε2/N,
DH U_N(t, ε1) > F(t, U_N(t, ε1)) + ε2/N,  on J′.
We can apply Theorem 2.4.3 (in fact, Remark 2.4.1) to get
U_N(t, ε2) < U_N(t, ε1) on J′.
Since the family of functions {U_N(t, ε)} is equicontinuous and uniformly bounded on J′, it follows by Theorem 2.4.1 that there exists a decreasing sequence {ε/N_k} such that ε/N_k → 0 as k → ∞ and the uniform limit
R(t) = lim_{k→∞} U_{N_k}(t, ε)   (2.4.9)
exists on J′. Obviously R(t0) = U0. The uniform continuity of F implies that F(t, U_{N_k}(t, ε)) tends uniformly to F(t, R(t)) as k → ∞, and thus term-by-term integration is applicable to
U_{N_k}(t, ε) = U0 + ε/N_k + ∫_{t0}^{t} F(s, U_{N_k}(s, ε)) ds,
which in turn yields that the limit R(t) is a solution of (2.2.1) on J′.
We shall next show that R(t) is the required maximal solution of the IVP (2.2.1) on J′. For this purpose, let U(t) be any solution of (2.2.1) on J′ and observe that
U(t0) = U0 < U0 + ε/N = U_N(t0, ε),
DH U(t) < F(t, U(t)) + ε/N,
DH U_N(t, ε) ≥ F(t, U_N(t, ε)) + ε/N,  on J′.
We then obtain from Theorem 2.4.3 (or Remark 2.4.1) that
U(t) < U_N(t, ε) on J′.
The uniqueness of the maximal solution R(t) shows that U_N(t, ε) tends uniformly to R(t) on J′ as N → ∞. This proves that R(t) is the maximal solution of the IVP (2.2.1). Similarly, one can prove the existence of the minimal solution ρ(t) of the IVP (2.2.1) by considering the IVP
DH U = F(t, U) − ε/N,  U(t0) = U0 − ε/N,
and proceeding with a suitable change of arguments. Hence the proof is complete.
With the notion of maximal solution and a way to obtain it in hand, one can prove the following comparison result, analogous to the well-known comparison result for ordinary differential systems.
Theorem 2.4.5 Assume that the conditions of Theorem 2.4.4 are satisfied. Suppose that M ∈ C¹[J, Kc(Rn+)] and
DH M(t) ≤ F(t, M(t)),  M(t0) ≤ U0.   (2.4.10)
Then M(t) ≤ R(t) on J′.
Proof Let U_N(t, ε) be a solution of (2.4.8) on J′. Then
M(t0) < U0 + ε/N,
DH U_N(t, ε) > F(t, U_N(t, ε)) on J′.
This, together with (2.4.10), yields, by Theorem 2.4.3,
M(t) < U_N(t, ε) on J′.
The last inequality, in view of (2.4.9), proves the assertion of the theorem.
2.5 Monotone Iterative Technique
The method of lower and upper solutions coupled with the monotone iterative
technique offers an effective and flexible mechanism to provide constructive existence results for nonlinear problems. In the development of this technique,
one uses the fact that when the right hand side is not monotone, it can be made
monotone by adding a suitable function. A generalization of this idea has been
recently developed where one considers the situation when the right hand side
can be split into the difference of two monotone functions. This unified setting
provides very general results which cover several known cases of importance in
addition to providing new results.
In this section, we develop the monotone iterative technique, in the same
general set up. See Ladde, Lakshmikantham and Vatsala [1], Pao [1], and Köksal
and Lakshmikantham [1] for details.
In the previous section we introduced a partial ordering in the metric space
(Kc (Rn), D) and proved the comparison Theorem 2.4.3 for strict inequalities,
which was essential for discussing the existence of extremal solutions for IVPs
of set differential equations. We now prove first the following basic result on
nonstrict set differential inequalities.
Theorem 2.5.1 Assume that
(i) V, W ∈ C 1[R+ , Kc (Rn )], F ∈ C[R+ × Kc (Rn ), Kc (Rn)], F (t, X) is monotone nondecreasing in X for each t ∈ R+ and
DH V ≤ F (t, V ), DH W ≥ F (t, W ), t ∈ R+ ;
(ii) for any X, Y ∈ Kc (Rn ) such that X ≥ Y, t ∈ R+ ,
F (t, X) ≤ F (t, Y ) + L(X − Y )
for some L > 0.
Then V (t0 ) ≤ W (t0 ) implies
V (t) ≤ W (t), t ≥ t0.
(2.5.1)
Proof Let ε = (ε1, ε2, . . . , εn) > 0 and define W̃ = W + εe^{2Lt}. Since V(t0) ≤ W(t0) < W̃(t0), it is enough to prove that
V(t) < W̃(t),  t ≥ t0,   (2.5.2)
to arrive at the conclusion (2.5.1), in view of the fact that ε > 0 is arbitrary.
Let t1 > 0 be the supremum of all positive numbers δ > 0 such that V(t0) < W̃(t0) implies V(t) < W̃(t) on [t0, δ]. It is clear that t1 > t0 and V(t1) ≤ W̃(t1). From this it follows, using the nondecreasing nature of F and condition (ii), that
DH V(t1) ≤ F(t1, V(t1)) ≤ F(t1, W̃(t1)) ≤ F(t1, W(t1)) + L(W̃(t1) − W(t1))
 ≤ DH W(t1) + Lεe^{2Lt1} < DH W(t1) + 2Lεe^{2Lt1} = DH W̃(t1).
Consequently, it follows that there exists an η > 0 satisfying
V (t) − W̃ (t) > V (t1 ) − W̃ (t1), t1 − η < t < t1 .
This implies that t1 > t0 cannot be the supremum in view of the continuity of
the functions involved and therefore the relation (2.5.2) is true, which, in turn,
leads to the conclusion (2.5.1). The proof is complete.
The following corollary is useful.
Corollary 2.5.1 Let V, W ∈ C 1[R+ , Kc (Rn )], σ ∈ C[R+ , Kc(Rn )]. Suppose
that
DH V ≤ σ, DH W ≥ σ, for t ≥ t0 .
Then V (t) ≤ W (t), t ≥ t0 , provided V (t0) ≤ W (t0 ).
In order to develop the monotone iterative technique, we shall consider the
following set differential equation,
DH U = F(t, U) + G(t, U),  U(0) = U0 ∈ Kc(Rn),   (2.5.3)
where F, G ∈ C[J × Kc(Rn), Kc(Rn)] and J = [0, T].
We need the following definition which gives various possible notions of lower
and upper solutions relative to (2.5.3).
Definition 2.5.1 Let V, W ∈ C 1 [J, Kc(Rn )]. Then V, W are said to be
(a) natural lower and upper solutions of (2.5.3) if
DH V ≤ F (t, V ) + G(t, V ), DH W ≥ F (t, W ) + G(t, W ), t ∈ J; (2.5.4)
(b) coupled lower and upper solutions of type I of (2.5.3) if
DH V ≤ F (t, V ) + G(t, W ), DH W ≥ F (t, W ) + G(t, V ), t ∈ J; (2.5.5)
(c) coupled lower and upper solutions of type II of (2.5.3) if
DH V ≤ F (t, W ) + G(t, V ), DH W ≥ F (t, V ) + G(t, W ), t ∈ J; (2.5.6)
(d) coupled lower and upper solutions of type III of (2.5.3) if
DH V ≤ F (t, W ) + G(t, W ), DH W ≥ F (t, V ) + G(t, V ), t ∈ J. (2.5.7)
We observe that whenever we have V (t) ≤ W (t), t ∈ J, if F (t, X) is nondecreasing in X for each t ∈ J and G(t, Y ) is nonincreasing in Y for each t ∈ J,
the lower and upper solutions defined by (2.5.4) and (2.5.7) reduce to (2.5.6)
and consequently, it is sufficient to investigate the cases (2.5.5) and (2.5.6).
We are now in a position to prove the following result.
Theorem 2.5.2 Assume that
(A1) V, W ∈ C 1[J, Kc (Rn )] are coupled lower and upper solutions of type I
relative to (2.5.3) with V (t) ≤ W (t), t ∈ J;
(A2) F, G ∈ C[J ×Kc (Rn), Kc (Rn )], F (t, X) is nondecreasing in X and G(t, Y )
is nonincreasing in Y , for each t ∈ J;
(A3) F and G map bounded sets into bounded sets in Kc (Rn ).
Then there exist monotone sequences {Vn(t)}, {Wn(t)} in Kc (Rn ) such that
Vn (t) → ρ(t), Wn (t) → R(t) in Kc (Rn ) and (ρ, R) are the coupled minimal and
maximal solutions of (2.5.3) respectively, that is, they satisfy
DH ρ = F(t, ρ) + G(t, R),  ρ(0) = U0,
DH R = F(t, R) + G(t, ρ),  R(0) = U0, on J.
Proof For each n ≥ 0, define the unique solutions Vn+1 (t), Wn+1 (t) by
DH V_{n+1} = F(t, Vn) + G(t, Wn),  V_{n+1}(0) = U0,   (2.5.8)
DH W_{n+1} = F(t, Wn) + G(t, Vn),  W_{n+1}(0) = U0,  t ∈ J,   (2.5.9)
where V (0) ≤ U0 ≤ W (0). We set V0 = V, W0 = W .
Our aim is to prove
V0 ≤ V1 ≤ V2 ≤ ... ≤ Vn ≤ Wn ≤ ... ≤ W2 ≤ W1 ≤ W0 , t ∈ J.
(2.5.10)
Since V0 is the coupled lower solution of type I of (2.5.3), we have using the
fact V0 ≤ W0 and the nondecreasing character of F ,
DH V0 ≤ F (t, V0) + G(t, W0).
Also from (2.5.8), we get for n = 0,
DH V1 = F (t, V0) + G(t, W0).
Consequently, following the proof of Theorem 2.5.1, we arrive at V0 ≤ V1 on
J. A similar argument shows that W1 ≤ W0 on J. We next prove V1 ≤ W1 on
J. For this purpose consider
DH V1 = F(t, V0) + G(t, W0),
DH W1 = F(t, W0) + G(t, V0),
V1(0) = W1(0) = U0.
Then, the monotone nature of F and G respectively yield
DH V1 ≤ F (t, W0) + G(t, W0), DH W1 ≥ F (t, W0) + G(t, W0), t ∈ J.
We therefore have, by Corollary 2.5.1, V1 ≤ W1 on J. As a result, we obtain
V0 ≤ V1 ≤ W1 ≤ W0 on J.
(2.5.11)
Assume that for some j > 1, we have
Vj−1 ≤ Vj ≤ Wj ≤ Wj−1 on J.
(2.5.12)
Vj ≤ Vj+1 ≤ Wj+1 ≤ Wj on J.
(2.5.13)
Then we show that
To do this, consider
DH Vj = F (t, Vj−1) + G(t, Wj−1), Vj (0) = U0 ,
DH Vj+1 = F (t, Vj ) + G(t, Wj ) ≥ F (t, Vj−1) + G(t, Wj−1),
t ∈ J.
Here we have employed (2.5.12) and the monotone nature of F and G. Corollary 2.5.1 now gives Vj ≤ Vj+1 on J. Similarly, we can get Wj+1 ≤ Wj on
J.
Next we show that Vj+1 ≤ Wj+1 , t ∈ J. We have from (2.5.8) and (2.5.9)
DH V_{j+1} = F(t, Vj) + G(t, Wj),  V_{j+1}(0) = U0,
DH W_{j+1} = F(t, Wj) + G(t, Vj),  W_{j+1}(0) = U0,  t ∈ J.
Using (2.5.12) and the monotone character of F and G, we arrive at
DH V_{j+1} ≤ F(t, Wj) + G(t, Wj),  DH W_{j+1} ≥ F(t, Wj) + G(t, Wj),  t ∈ J,
and therefore Corollary 2.5.1 implies that Vj+1 ≤ Wj+1, t ∈ J. Hence (2.5.13)
follows and consequently, by induction (2.5.10) is valid for all n. Clearly the
sequences {Vn}, {Wn } are uniformly bounded on J.
To show that they are equicontinuous, consider for any s < t, where t, s ∈ J,
D[Vn(t), Vn(s)] = D[ U0 + ∫_{0}^{t} {F(ξ, V_{n−1}(ξ)) + G(ξ, W_{n−1}(ξ))} dξ, U0 + ∫_{0}^{s} {F(ξ, V_{n−1}(ξ)) + G(ξ, W_{n−1}(ξ))} dξ ]
 = D[ ∫_{0}^{t} {F(ξ, V_{n−1}(ξ)) + G(ξ, W_{n−1}(ξ))} dξ, ∫_{0}^{s} {F(ξ, V_{n−1}(ξ)) + G(ξ, W_{n−1}(ξ))} dξ ]
 ≤ ∫_{s}^{t} D[F(ξ, V_{n−1}(ξ)) + G(ξ, W_{n−1}(ξ)), θ] dξ ≤ M|t − s|.
Here we have utilized the properties of the integral and the metric D, together with the fact that F + G is bounded, since {Vn}, {Wn} are uniformly bounded. Hence
{Vn (t)} is equicontinuous on J. The corresponding Ascoli’s Theorem 2.4.1 now
gives a subsequence {Vnk (t)} which converges uniformly to ρ(t) ∈ Kc (Rn ), t ∈
J, and since {Vn(t)} is a monotone nondecreasing sequence, the entire sequence
{Vn (t)} converges uniformly to ρ(t) on J.
Similar arguments apply to the sequence {Wn (t)} and Wn (t) → R(t) uniformly on J. It therefore follows, using the integral representation of (2.5.8)
and (2.5.9) that ρ(t), R(t) satisfy
DH ρ(t) = F (t, ρ(t)) + G(t, R(t)), ρ(0) = U0 ,
DH R(t) = F (t, R(t)) + G(t, ρ(t)), R(0) = U0 , t ∈ J,
(2.5.14)
and that
V0 ≤ ρ ≤ R ≤ W0 , t ∈ J.
(2.5.15)
Next we claim that (ρ, R) are coupled minimal and maximal solutions of
(2.5.3); that is, if U(t) is any solution of (2.5.3) such that
V0 ≤ U ≤ W0,  t ∈ J,   (2.5.16)
then
V0 ≤ ρ ≤ U ≤ R ≤ W0,  t ∈ J.   (2.5.17)
Suppose that for some n,
Vn ≤ U ≤ Wn on J.
(2.5.18)
Then we have, using the monotone nature of F, G and (2.5.18),
DH U = F(t, U) + G(t, U) ≥ F(t, Vn) + G(t, Wn),  U(0) = U0,
DH V_{n+1} = F(t, Vn) + G(t, Wn),  V_{n+1}(0) = U0.
Corollary 2.5.1 yields Vn+1 ≤ U on J. Similarly Wn+1 ≥ U on J. Hence by
induction (2.5.18) is true for all n ≥ 1. Now taking the limit as n → ∞, we get
(2.5.17), proving the claim. The proof is therefore complete.
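The iterates (2.5.8)–(2.5.9) can be computed by direct quadrature, since each right-hand side uses only the previous pair of functions. The Python sketch below is an illustration added here, not part of the original text: it treats the degenerate case of singleton sets with G ≡ 0, i.e. the scalar problem u′ = F(u) = 1 − e^{−u}, u(0) = 1 on J = [0, 1], with lower solution V0 ≡ 1 and upper solution W0(t) = 1 + t (our chosen example). It verifies numerically the monotone bracketing (2.5.10).

```python
import numpy as np

# Monotone iterates for u' = F(u) = 1 - exp(-u), u(0) = 1, F nondecreasing.
t = np.linspace(0.0, 1.0, 401)
u0 = 1.0
F = lambda u: 1.0 - np.exp(-u)

def iterate(prev):
    """V_{n+1}(t) = u0 + integral_0^t F(V_n(s)) ds via a cumulative trapezoid rule."""
    vals = F(prev)
    integral = np.concatenate(
        [[0.0], np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))])
    return u0 + integral

V, W = np.full_like(t, 1.0), 1.0 + t          # V0, W0
for n in range(8):
    V_new, W_new = iterate(V), iterate(W)
    assert np.all(V - 1e-10 <= V_new) and np.all(W_new <= W + 1e-10)  # monotone
    assert np.all(V_new <= W_new + 1e-10)                             # bracketing
    V, W = V_new, W_new
print("gap after 8 iterations:", np.max(W - V))   # the sequences squeeze together
```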
Corollary 2.5.2 If, in addition to the assumptions of Theorem 2.5.2, F and
G satisfy, whenever X ≥ Y, X, Y ∈ Kc (Rn),
F (t, X) ≤ F (t, Y ) + N1 (X − Y )
and
G(t, X) + N2 (X − Y ) ≥ G(t, Y ),
where N1, N2 > 0, then ρ = R = U is the unique solution of (2.5.3).
Proof Since ρ ≤ R, we have R = ρ + m, or m = R − ρ. Now
DH ρ + DH m = DH R = F(t, R) + G(t, ρ)
 ≤ F(t, ρ) + N1 m + G(t, R) + N2 m
 = DH ρ + (N1 + N2)m,
which means
DH m ≤ (N1 + N2)m,  m(0) = 0,
which by Theorem 2.5.1 leads to R ≤ ρ on J, proving the uniqueness of ρ = R = U and completing the proof.
Several remarks are now in order.
Remark 2.5.1 (1) In Theorem 2.5.2, if G(t, Y ) ≡ 0, then we get a result
when F is nondecreasing.
(2) In (1) above, suppose that F is not nondecreasing, but F̃ (t, X) = F (t, X)+
M X is nondecreasing in X for each t ∈ J, for some M > 0, then one can
consider the IVP
DH U + M U = F̃ (t, U ), U (0) = U0 ,
where F̃ (t, X) = F (t, X) + M X to obtain the same conclusion as in (1).
To see this, use the transformation Ũ (t) = U (t)eM t so that
DH Ũ = [DH U + M U ]eM t = F̃ (t, Ũ e−M t )eM t ≡ F0(t, Ũ ), Ũ (0) = U0 .
(2.5.19)
Clearly (2.5.19) has Ṽ (t) = V (t)eM t as a lower solution and W̃ (t) =
W (t)eM t as an upper solution, and therefore we have the same conclusion
as in (1). Here we assume that DH Ũ exists.
(3) If F(t, X) ≡ 0 in Theorem 2.5.2, then we obtain the result for G nonincreasing.
(4) If, in (3) above, G is not monotone but there exist a function G̃(t, U) that is nonincreasing in U for each t ∈ J and a constant M > 0 such that
G(t, U) = MU + G̃(t, U),  that is,  G̃(t, U) = G(t, U) − MU,
then, setting U(t) = Ũ(t)e^{Mt}, we obtain
DH Ũ = G0(t, Ũ),  Ũ(0) = U0,   (2.5.20)
where G0(t, Ũ ) = G̃(t, ŨeM t )e−M t . In this case, we need to assume that
(2.5.20) has coupled lower and upper solutions to get the same conclusion
as in (3).
(5) Suppose that in Theorem 2.5.2, G(t, Y ) is nonincreasing in Y and F (t, X)
is not monotone but F̃ (t, X) = F (t, X) + M X, M > 0 is nondecreasing in
X. Then we consider the IVP
DH U + MU = F̃(t, U) + G(t, U),  U(0) = U0.   (2.5.21)
The transformation in (2) yields the conclusion by Theorem 2.5.2 in this
case as well.
(6) If in Theorem 2.5.2, F is nondecreasing and G is not monotone then we
suppose that there exists a function G̃(t, U ) and a constant M > 0 as in
(4) and consider the IVP
DH Ũ = F0(t, Ũ) + G0(t, Ũ),  Ũ(0) = U0,   (2.5.22)
where F0(t, Ũ) = F(t, Ũe^{Mt})e^{−Mt} and G0(t, Ũ) = G̃(t, Ũe^{Mt})e^{−Mt}.
(7) If both F and G are not monotone in Theorem 2.5.2, then suppose that
there are functions F̃ (t, U ), G̃(t, U ), and a constant M > 0 such that
F̃ (t, U ) + G̃(t, U ) + M U = F (t, U ) + G(t, U ), where F̃ (t, U ) is nondecreasing in U and G̃(t, U ) is nonincreasing in U . Now the transformation
U (t) = Ũ (t)eM t gives,
DH Ũ = F0(t, Ũ) + G0(t, Ũ),  Ũ(0) = U0,   (2.5.22∗)
where F0 (t, Ũ) = F̃ (t, Ũ eM t )e−M t, G0(t, Ũ ) = G̃(t, ŨeM t )e−M t . Assuming that (2.5.22∗) has coupled lower and upper solutions of type I, one gets
the same conclusion by Theorem 2.5.2.
Let us next consider utilizing the coupled lower and upper solutions of type
II. In this case, we don’t need to assume the existence of coupled lower and
upper solutions of type II of (2.5.3) since one can construct them under the
given assumptions. However, we have to pay a price to get monotone flows,
by assuming certain conditions on the second iterates. Also, we get alternative
sequences which are monotone but complicated.
Theorem 2.5.3 Assume that (A2 ) and (A3 ) of Theorem 2.5.2 hold. Then for
any solution U (t) of (2.5.3) with V0 ≤ U ≤ W0 on J, we have the iterates
{Vn }, {Wn} satisfying
V0 ≤ V2 ≤ · · · ≤ V2n ≤ U ≤ V2n+1 ≤ · · · ≤ V3 ≤ V1 on J,   (2.5.23)
W1 ≤ W3 ≤ · · · ≤ W2n+1 ≤ U ≤ W2n ≤ · · · ≤ W2 ≤ W0 on J,   (2.5.24)
provided V0 ≤ V2 , W2 ≤ W0 on J, where the iterative schemes are given by
DH Vn+1 = F (t, Wn) + G(t, Vn), Vn+1 (0) = U0 ,
(2.5.25)
DH Wn+1 = F (t, Vn) + G(t, Wn), Wn+1 (0) = U0 , on J.
(2.5.26)
Moreover, the monotone sequences {V2n}, {V2n+1}, {W2n}, {W2n+1} converge in Kc(Rn) to ρ, R, R∗, ρ∗, respectively, and these limits verify
DH R = F (t, R∗) + G(t, ρ), R(0) = U0 ,
DH ρ = F (t, ρ∗ ) + G(t, R), ρ(0) = U0 ,
DH R∗ = F (t, R) + G(t, ρ∗), R∗ (0) = U0 ,
DH ρ∗ = F (t, ρ) + G(t, R∗), ρ∗ (0) = U0 , on J.
Proof We shall first show that coupled lower and upper solutions V0 , W0 of
type II of (2.5.3) exist on J satisfying V0 ≤ W0 on J. For this purpose, consider
the IVP
DH Z = F (t, θ) + G(t, θ), Z(0) = U0 .
(2.5.27)
Let Z(t) be the unique solution of (2.5.27) which exists on J. Define V0 , W0 by
R0 + V0 = Z and W0 = Z + R0,
where the positive vector R0 = (R01, R02, ..., R0n) is chosen sufficiently large so
that we have V0 ≤ θ ≤ W0 on J. Then using the monotone character of F and
G, we arrive at
DH V0 = DH Z = F (t, θ) + G(t, θ) ≤ F (t, W0) + G(t, V0),
V0 (0) = Z(0) − R0 ≤ Z(0) = U0 .
Similarly, DH W0 ≥ F (t, V0) + G(t, W0), W0 (0) ≥ U0 . Thus V0 , W0 are the
coupled lower and upper solutions of type II of (2.5.3).
Let U (t) be any solution of (2.5.3) such that V0 ≤ U ≤ W0 on J. We shall
show that
V0 ≤ V2 ≤ U ≤ V3 ≤ V1 , W1 ≤ W3 ≤ U ≤ W2 ≤ W0 on J.
(2.5.28)
Since U is a solution of (2.5.3), we have, using the monotone character of F and
G, and the fact V0 ≤ U ≤ W0 ,
DH U = F (t, U ) + G(t, U ) ≤ F (t, W0) + G(t, V0), U (0) = U0 ,
and V1 satisfies
DH V1 = F (t, W0) + G(t, V0), V1 (0) = U0 , on J.
Hence Corollary (2.5.1) yields U ≤ V1 on J. Similarly, W1 ≤ U on J.
(2.5.29)
Next we show that V2 ≤ U on J. Note that
DH V2 = F (t, W1) + G(t, V1), V2 (0) = U0 ,
and because of monotonicity of F and G, we get
DH U = F (t, U ) + G(t, U ) ≥ F (t, W1) + G(t, V1), U (0) = U0 on J.
Corollary 2.5.1 therefore gives V2 ≤ U on J. A similar argument shows that
U ≤ W2 on J. Next we find utilizing the assumption V0 ≤ V2 , W2 ≤ W0 on J
and monotonicity of F and G,
DH V3 = F (t, W2) + G(t, V2) ≤ F (t, W0) + G(t, V0), V3 (0) = U0 on J.
This together with (2.5.29) shows by Corollary 2.5.1 that V3 ≤ V1, on J. In the
same way one can show that W1 ≤ W3 on J. Also, employing similar reasoning,
one can prove that U ≤ V3 and W3 ≤ U on J, proving the relations (2.5.28).
Now assuming for some n > 2, the inequalities
V2n−4 ≤ V2n−2 ≤ U ≤ V2n−1 ≤ V2n−3,
W2n−3 ≤ W2n−1 ≤ U ≤ W2n−2 ≤ W2n−4, on J,
hold, it can be shown, employing similar arguments that
V2n−2 ≤ V2n ≤ U ≤ V2n+1 ≤ V2n−1,
W2n−1 ≤ W2n+1 ≤ U ≤ W2n ≤ W2n−2, on J.
Thus by induction (2.5.23) and (2.5.24) are valid for all n = 0, 1, 2, · · ·.
Since Vn, Wn ∈ Kc(Rn) for all n, employing reasoning similar to that in Theorem 2.5.2, we conclude that the limits lim_{n→∞} V2n = ρ, lim_{n→∞} V2n+1 = R, lim_{n→∞} W2n+1 = ρ∗, and lim_{n→∞} W2n = R∗ exist in Kc(Rn), uniformly on J. It therefore follows, by suitable use of the integral representations of (2.5.25) and (2.5.26), that ρ, ρ∗, R, R∗ satisfy the corresponding set differential equations given in Theorem 2.5.3 on J. Also, from (2.5.23) and (2.5.24), we arrive at
ρ ≤ U ≤ R, ρ∗ ≤ U ≤ R∗ on J.
The proof is therefore complete.
Corollary 2.5.3 Under the assumptions of Theorem 2.5.3 if F and G satisfy
the assumptions of Corollary 2.5.2, then ρ = ρ∗ = R = R∗ = U is the unique
solution of (2.5.3).
Proof Let q1 + ρ = R, q2 + ρ∗ = R∗ so that q1 , q2 ≥ 0 on J, since ρ ≤ R and
ρ∗ ≤ R∗ on J. It then follows using the assumptions, that
DH (q1 + q2) ≤ (N1 + N2 )(q1 + q2), q1(0) + q2(0) = 0 on J.
This implies that q1 + q2 ≤ 0 on J and consequently we get
U = ρ = R and ρ∗ = R∗ = U on J,
which proves the claim of Corollary 2.5.3.
Theorem 2.5.3 also admits several remarks corresponding to those of Theorem 2.5.2. To avoid repetition, we do not list them again. For similar results which unify the monotone iterative technique, refer to Lakshmikantham and Köksal [1].
2.6 Global Existence
We consider the set differential equation
DH U = F (t, U ), U (t0) = U0 ∈ Kc (Rn),
(2.6.1)
where F ∈ C[R+ × Kc (Rn ), Kc (Rn )]. In this section, we shall investigate the
global existence of solutions for t ≥ t0 . Assuming local existence, we shall prove
the following global existence result.
Theorem 2.6.1 Assume that F ∈ C[R+ × Kc (Rn), Kc (Rn )] and
D[F (t, U ), θ] ≤ g(t, D[U, θ]), (t, U ) ∈ R+ × Kc (Rn),
where g ∈ C[R2+, R+ ], g(t, w) is nondecreasing in w for each t ∈ R+ and the
maximal solution r(t, t0 , w0) of (2.2.5) exists on [t0, ∞). Suppose further that
F is smooth enough to guarantee local existence of solutions of (2.6.1) for any
(t0 , U0) ∈ R+ × Kc (Rn ). Then the largest interval of existence of any solution
U (t, t0, U0 ) of (2.6.1) such that D[U0 , θ] ≤ w0 is [t0 , ∞).
Proof Let U (t) = U (t, t0, U0 ) be any solution of (2.6.1) with D[U0 , θ] = w0,
which exists on [t0, β), t0 < β < ∞ and the value of β cannot be increased.
Define m(t) = D[U (t), θ]. Then Corollary 2.2.1 shows that
m(t) ≤ r(t, t0, D[U0, θ]),  t0 ≤ t < β.
For any t1, t2 such that t0 < t1 < t2 < β, we have
D[U(t1), U(t2)] = D[ U0 + ∫_{t0}^{t1} F(s, U(s)) ds, U0 + ∫_{t0}^{t2} F(s, U(s)) ds ]
 = D[ ∫_{t1}^{t2} F(s, U(s)) ds, θ ]
 ≤ ∫_{t1}^{t2} D[F(s, U(s)), θ] ds
 ≤ ∫_{t1}^{t2} g(s, D[U(s), θ]) ds.   (2.6.2)
The relation (2.6.2) and the nondecreasing nature of g(t, w) now yield
D[U(t1), U(t2)] ≤ ∫_{t1}^{t2} g(s, r(s, t0, w0)) ds = r(t2, t0, w0) − r(t1, t0, w0).   (2.6.3)
Since lim_{t→β⁻} r(t, t0, w0) exists and is finite by hypothesis, taking the limit as t1, t2 → β⁻ and using the Cauchy criterion for convergence, it follows from (2.6.3) that lim_{t→β⁻} U(t, t0, U0) exists.
We define
U(β, t0, U0) = lim_{t→β⁻} U(t, t0, U0)
and consider the initial value problem
DH U = F(t, U),  U(β) = U(β, t0, U0).
By the assumed local existence, we see that U(t, t0, U0) can be continued beyond β, contradicting our assumption that β cannot be increased. Hence every solution U(t, t0, U0) of (2.6.1) such that D[U0, θ] ≤ w0 exists globally on [t0, ∞) and the proof is complete.
Remark 2.6.1 Since r(t, t0, w0) is nondecreasing because of the fact that g(t, w) ≥
0, if we assume that r(t, t0, w0) is bounded on [t0, ∞) it follows that
limt→∞ r(t, t0, w0) exists and is finite. This, together with (2.6.2) which now
holds for t ∈ [t0, ∞), implies that limt→∞ U (t, t0, U0) = Y ∈ Kc (Rn) exists.
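Remark 2.6.1 can be illustrated on a concrete example. The Python sketch below is an illustration added here, not part of the original text: we take F(t, U) = e^{−t}A with A = [0, 1], so that D[F(t, U), θ] ≤ e^{−t} =: g(t, w), the maximal solution r(t) = w0 + 1 − e^{−t} is bounded, and the explicit solution U(t) = U0 + (1 − e^{−t})A converges in the Hausdorff metric to U0 + A. The specific choice of A and U0 is ours.

```python
import math

U0 = (1.0, 2.0)
A = (0.0, 1.0)

def U(t):
    """Exact solution of DH U = exp(-t) * A, U(0) = U0 (intervals)."""
    c = 1.0 - math.exp(-t)
    return (U0[0] + c * A[0], U0[1] + c * A[1])

def D(X, Y):
    return max(abs(X[0] - Y[0]), abs(X[1] - Y[1]))

limit = (U0[0] + A[0], U0[1] + A[1])          # the limit set Y = U0 + A
for t in (1.0, 5.0, 10.0, 20.0):
    print(f"t = {t:5.1f}   D[U(t), U0 + A] = {D(U(t), limit):.3e}")
```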
2.7 Approximate Solutions
We shall obtain an error estimate between the solutions and approximate solutions of IVP (2.6.1). Let us define the notion of approximate solutions.
Definition 2.7.1 A function V(t) = V(t, t0, V0, ε), ε > 0, is said to be an ε-approximate solution of the IVP (2.6.1) if
V ∈ C[R+, Kc(Rn)],  V(t0, t0, V0, ε) = V0,  and
D[DH V(t), F(t, V(t))] ≤ ε,  t ≥ t0.
In case ε = 0, V(t) is a solution of (2.6.1).
Theorem 2.7.1 Assume that F ∈ C[R+ × Kc(Rn), Kc(Rn)] and, for t ≥ t0, U, V ∈ Kc(Rn),
D[F(t, U), F(t, V)] ≤ g(t, D[U, V]),   (2.7.1)
where g ∈ C[R²+, R+]. Suppose that r(t) = r(t, t0, w0, ε) is the maximal solution of
w′ = g(t, w) + ε,  w(t0) = w0 ≥ 0,   (2.7.2)
existing for t ≥ t0. Let U(t) = U(t, t0, U0) be any solution of (2.6.1) and V(t) = V(t, t0, V0, ε) be an ε-approximate solution of the IVP (2.6.1) existing for t ≥ t0. Then
D[U(t), V(t)] ≤ r(t, t0, w0, ε),  t ≥ t0,   (2.7.3)
provided D[U0, V0] ≤ w0.
Proof We proceed as in the proof of Theorem 2.2.3, with m(t) = D[U(t), V(t)], until we arrive at
D⁺m(t) ≤ lim sup_{h→0⁺} D[(U(t+h) − U(t))/h, F(t, U(t))]
   + lim sup_{h→0⁺} D[F(t, V(t)), (V(t+h) − V(t))/h]
   + D[F(t, U(t)), F(t, V(t))],  t ≥ t0.
This implies, using the definition of an ε-approximate solution and (2.7.1), the differential inequality
D⁺m(t) ≤ g(t, m(t)) + ε,  t ≥ t0,
and m(t0) ≤ w0. The stated estimate follows from Theorem 1.4.1 in Lakshmikantham and Leela [1].
The following corollary provides the well-known error estimate between a solution and an ε-approximate solution of (2.6.1).
Corollary 2.7.1 The function g(t, w) = Lw, L > 0, is admissible in Theorem 2.7.1 to yield
D[U(t, t0, U0), V(t, t0, V0, ε)] ≤ D[U0, V0] e^{L(t−t0)} + (ε/L)(e^{L(t−t0)} − 1),  t ≥ t0.   (2.7.4)
Proof Since (2.7.2) in this case reduces to
w′ = Lw + ε,  w(t0) = D[U0, V0],   (2.7.5)
it is easy to obtain the estimate (2.7.4) by solving the linear differential equation (2.7.5).
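The estimate (2.7.4) can be checked numerically in the singleton case, where the SDE reduces to a scalar ODE. The Python sketch below is an illustration added here, not part of the original text: with F(t, u) = Lu, the function V solving v′ = Lv + ε is an ε-approximate solution (D[DH V, F(t, V)] = ε), and the computed distance coincides with the bound (2.7.4). The constants L, ε, U0, V0 are our choices.

```python
import math

L, eps, t0 = 1.5, 0.01, 0.0
U0, V0 = 1.0, 1.2

def U(t):                                     # exact solution of u' = L*u
    return U0 * math.exp(L * (t - t0))

def V(t):                                     # eps-approximate solution: v' = L*v + eps
    return V0 * math.exp(L * (t - t0)) + (eps / L) * (math.exp(L * (t - t0)) - 1.0)

for t in (0.0, 0.5, 1.0, 2.0):
    err = abs(U(t) - V(t))
    bound = abs(U0 - V0) * math.exp(L * (t - t0)) + (eps / L) * (math.exp(L * (t - t0)) - 1.0)
    assert err <= bound + 1e-12
    print(f"t = {t:4.1f}   |U - V| = {err:.4f}   bound (2.7.4) = {bound:.4f}")
```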
2.8 Existence of Euler Solutions
We consider the initial value problem (IVP) for the set differential equation
DH U = F(t, U),  U(t0) = U0 ∈ Kc(Rn),   (2.8.1)
where F is any function from [t0, T] × Kc(Rn) to Kc(Rn). Let
π = [t0, t1, . . . , tN = T]   (2.8.2)
be a partition of [t0, T ].
Consider the interval [t0, t1]. Observe that the right hand side of the set
differential equation
DH U (t) = F (t0, U0 ),
U (t0) = U0 ,
on [t0, t1] is a constant. Therefore, this IVP clearly has a unique solution U (t) =
U (t, t0 , U0) on [t0, t1].
Define the node U1 = U (t1 ) and iterate next by considering on [t1, t2] the
IVP
DH U = F(t1, U1),  U(t1) = U1 ∈ Kc(Rn).
The next node is U2 = U (t2) and we proceed this way till an arc Uπ = Uπ (t)
has been defined on all [t0, T ]. We employ the notation Uπ to emphasize the
role played by the particular partition π in determining Uπ which is the Euler
polygonal arc corresponding to the partition π. The diameter µπ of the partition
π is given by
µπ = max {ti − ti−1 : 1 ≤ i ≤ N }.
(2.8.3)
Definition 2.8.1 By an Euler solution of (2.8.1) we mean any arc U = U (t)
which is the uniform limit of Euler polygonal arcs UπJ , corresponding to some
sequence πJ such that πJ → 0, where this means the convergence of the diameters µπJ → 0 as J → ∞.
Clearly the corresponding number NJ of the partition points in πJ must then
go to infinity. Obviously, the Euler arc satisfies the initial condition U (t0 ) = U0 .
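The node construction above is straightforward to implement. The Python sketch below is an illustration added here, not part of the original text: it builds Euler polygonal arcs for the interval equation DH U = F(t, U) with the chosen example F(t, U) = U, and shows the nodes approaching the limit U(t) = [e^t, 2e^t] as the partition diameter shrinks.

```python
import numpy as np

def F(t, U):                                     # chosen example: F(t, U) = U
    return U

def euler_arc(U0, t0, T, N):
    """Nodes of the Euler polygonal arc U_pi on a uniform partition with N steps."""
    ts = np.linspace(t0, T, N + 1)
    nodes = [np.asarray(U0, dtype=float)]
    for i in range(N):
        h = ts[i + 1] - ts[i]
        # on [t_i, t_{i+1}] the right-hand side is frozen at the left node
        nodes.append(nodes[-1] + h * F(ts[i], nodes[-1]))
    return ts, np.array(nodes)

for N in (10, 100, 1000):                        # finer and finer partitions pi_J
    ts, nodes = euler_arc([1.0, 2.0], 0.0, 1.0, N)
    exact = np.column_stack([np.exp(ts), 2 * np.exp(ts)])
    print(f"N = {N:5d}   max endpoint deviation from the limit = "
          f"{np.max(np.abs(nodes - exact)):.2e}")
```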
We can now prove the following result on existence of an Euler solution for
(2.8.1).
Theorem 2.8.1 Assume that
(i) D[F(t, A), θ] ≤ g(t, D[A, θ]), (t, A) ∈ [t0, T] × Kc(Rn), where g ∈ C[[t0, T] × R+, R+] and g(t, u) is nondecreasing in (t, u);
(ii) the maximal solution r(t) = r(t, t0, u0) of the scalar differential equation
u′ = g(t, u),  u(t0) = u0 ≥ 0,   (2.8.4)
exists on [t0, T].
Then,
(a) there exists at least one Euler solution U(t) = U(t, t0, U0) to the IVP (2.8.1), which satisfies a Lipschitz condition;
(b) any Euler solution U(t) of (2.8.1) satisfies the relation
D[U(t), U0] ≤ r(t, t0, u0) − u0,  t ∈ [t0, T],   (2.8.5)
where u0 = D[U0, θ].
Proof Let π be the partition of [t0, T] defined by (2.8.2) and let Uπ = Uπ(t) denote the corresponding arc, with the nodes of Uπ represented by U0, U1, . . . , UN.
Let us set Uπ(t) = Ui(t) on ti ≤ t ≤ ti+1, i = 0, 1, . . . , N − 1, and observe that Ui(ti) = Ui, i = 0, 1, . . . , N − 1.
On the interval (ti, ti+1) we have
D[DH Uπ(t), θ] = D[F(ti, Ui), θ] ≤ g(ti, D[Ui, θ]).   (2.8.6)
On the interval [t0, t1], we obtain
D[U1(t), U0] = D[ U0 + ∫_{t0}^{t} F(t0, U0) ds, U0 ]
 = D[ ∫_{t0}^{t} F(t0, U0) ds, θ ]
 ≤ ∫_{t0}^{t} D[F(t0, U0), θ] ds
 ≤ ∫_{t0}^{t} g(t0, D[U0, θ]) ds
 ≤ ∫_{t0}^{t} g(s, r(s)) ds
 = r(t, t0, D[U0, θ]) − D[U0, θ]
 ≤ r(T, t0, D[U0, θ]) − D[U0, θ] = M (say).
Here we have employed the properties of the metric D and the integral, the monotone character of g(t, u) in (t, u), and the fact that r(t, t0, D[U0, θ]) ≥ 0 is nondecreasing in t.
Similarly, on [t1, t2], we get
D[U2(t), U0] = D[ U1 + ∫_{t1}^{t} F(t1, U1) ds, U0 ]
 = D[ U0 + ∫_{t0}^{t1} F(t0, U0) ds + ∫_{t1}^{t} F(t1, U1) ds, U0 ]
 = D[ ∫_{t0}^{t1} F(t0, U0) ds + ∫_{t1}^{t} F(t1, U1) ds, θ ]
 ≤ ∫_{t0}^{t1} D[F(t0, U0), θ] ds + ∫_{t1}^{t} D[F(t1, U1), θ] ds
 ≤ ∫_{t0}^{t1} g(s, r(s)) ds + ∫_{t1}^{t} g(s, r(s)) ds = ∫_{t0}^{t} g(s, r(s)) ds
 ≤ r(T, t0, D[U0, θ]) − D[U0, θ] = M.
Proceeding in this way, we arrive at
D[Ui(t), U0] ≤ r(T, t0, D[U0, θ]) − D[U0, θ] = M, on [ti, ti+1].
Hence it follows that
D[ Uπ (t), U0 ] ≤ M, on [t0, T ].
Also, from (2.8.6), we obtain
D[DH Uπ(t), θ] ≤ g(T, r(T)) = r′(T, t0, D[U0, θ]) = k, say.
Consequently, using similar arguments, we find, for t0 ≤ s ≤ t ≤ T,
D[Uπ(t), Uπ(s)] = D[ U0 + ∫_{t0}^{t} DH Uπ(τ) dτ, U0 + ∫_{t0}^{s} DH Uπ(τ) dτ ]
 ≤ ∫_{s}^{t} D[DH Uπ(τ), θ] dτ
 ≤ ∫_{s}^{t} g(τ, r(τ)) dτ
 = r(t) − r(s) = r′(σ)(t − s) ≤ k(t − s)
for some σ such that s ≤ σ ≤ t, proving that Uπ(t) is Lipschitz of rank k on [t0, T].
Now, let πJ be a sequence of partitions of [t0, T ] such that πJ → 0, that is
such that µπJ → 0 and therefore NJ → ∞. Then the corresponding polygonal
arcs UπJ on [t0, T ] all satisfy
UπJ (t0 ) = U0 , D[UπJ (t), U0] ≤ M and D[DH UπJ (t), θ] ≤ k on [t0, T ].
Hence the family {UπJ} is equicontinuous and uniformly bounded and, as a consequence, the Ascoli–Arzelà Theorem 2.4.1 guarantees the existence of a subsequence which converges uniformly to a function U(t) on [t0, T] that is Lipschitz of rank k and thus absolutely continuous on [t0, T]. Thus, by definition, U(t) is an Euler solution of the IVP (2.8.1) on [t0, T] and the claim of the theorem follows. The
inequality (2.8.5) in part (b) is inherited by U (t) from the sequence of polygonal
arcs generating it when we identify T with t. Hence the proof is complete.
If F (t, U ) in (2.8.1) is assumed to be continuous, then one can show that
U (t) actually satisfies the IVP (2.8.1).
Theorem 2.8.2 Under the assumptions of Theorem 2.8.1, if we suppose in
addition that F ∈ C[[t0, T ] × Kc (Rn ), Kc (Rn )], then U (t) is a solution of
(2.8.1).
Proof Let UπJ(t) denote a sequence of polygonal arcs for the IVP (2.8.1) converging uniformly to an Euler solution U(t) on [t0, T]. Clearly, the arcs UπJ(t) all lie in B(U0, M) and satisfy a Lipschitz condition of the same rank k. Since a continuous function is uniformly continuous on compact sets, for any given ε > 0 one can find a δ > 0 such that
t, t∗ ∈ [t0, T],  U, U∗ ∈ {UπJ},  |t − t∗| < δ,  D[U, U∗] < δ
implies D[F(t, U), F(t∗, U∗)] < ε.
Let J be sufficiently large so that the partition diameter µπJ satisfies µπJ < δ and kµπJ < δ. For any t which is not one of the finitely many points at which UπJ(t) is a node, we have DH UπJ(t) = F(t̃, UπJ(t̃)) for some t̃ within µπJ < δ of t. Since D[UπJ(t), UπJ(t̃)] ≤ kµπJ < δ, we get
D[DH UπJ(t), F(t, UπJ(t))] = D[F(t̃, UπJ(t̃)), F(t, UπJ(t))] < ε.
It follows that, for any t ∈ [t0, T],
D[ UπJ(t), UπJ(t0) + ∫_{t0}^{t} F(τ, UπJ(τ)) dτ ]
 = D[ UπJ(t0) + ∫_{t0}^{t} DH UπJ(τ) dτ, UπJ(t0) + ∫_{t0}^{t} F(τ, UπJ(τ)) dτ ]
 = D[ ∫_{t0}^{t} DH UπJ(τ) dτ, ∫_{t0}^{t} F(τ, UπJ(τ)) dτ ]
 ≤ ∫_{t0}^{t} D[DH UπJ(τ), F(τ, UπJ(τ))] dτ
 ≤ ε(t − t0) ≤ ε(T − t0).
Letting J → ∞, we obtain from this
D[ U(t), U0 + ∫_{t0}^{t} F(τ, U(τ)) dτ ] ≤ ε(T − t0).
Since ε is arbitrary, it follows that
U(t) = U0 + ∫_{t0}^{t} F(τ, U(τ)) dτ,  t ∈ [t0, T],
which implies that U(t) is C¹ and therefore
DH U(t) = F(t, U(t)),  U(t0) = U0,  t ∈ [t0, T].
The proof is therefore complete.
Remark 2.8.1 We can extend the notion of an Euler solution of (2.8.1) from
the interval [t0, T ] to [t0, ∞), if we define F and g on [t0, ∞) instead of [t0, T ]
and assume that the maximal solution r(t) exists on [t0, ∞) and show that an
Euler solution exists on every [t0, T ] where T ∈ (t0, ∞).
2.9 Proximal Normal and Flow Invariance
Let Ω ⊂ Kc (Rn ) be a nonempty, closed set. Assume that for any U ∈ Kc (Rn)
such that U and Ω are disjoint, and for any S ∈ Ω, there exists a Z ∈ Kc (Rn)
such that U = S + Z. Then U − S is called the Hukuhara difference. Suppose
now that, for any U ∈ Kc (Rn ) there is an element S ∈ Ω whose distance to U
is minimal, that is,
D0[U, Ω] = ‖U − S‖ = inf_{S0 ∈ Ω} ‖U − S0‖.   (2.9.1)
Then S is called a projection of U onto Ω. The set of all such elements is denoted by projΩ(U). The element U − S will be called the proximal normal direction to Ω at S. Any nonnegative multiple ξ = t(U − S), t ≥ 0, is called a proximal normal to Ω at S. The set of all ξ obtained in this way is said to be the proximal normal cone to Ω at S and is denoted by NΩP(S).
Definition 2.9.1 The system (Ω, F ) is said to be weakly invariant provided that
for all U0 ∈ Ω, there exists an Euler solution U (t) of (2.8.1) on [t0, ∞) such
that U (t0 ) = U0 and U (t) ∈ Ω, t ≥ t0.
Before proceeding further we introduce the following notation.
For any A ∈ Kc(Rn), we have D[A, θ] = ‖A‖ = sup_{a∈A} ‖a‖, and we define, for any A, B ∈ Kc(Rn),
< A, B > = sup{a · b : a ∈ A, b ∈ B},
so that we obtain the relation
‖A + B‖² ≤ ‖A‖² + ‖B‖² + 2 < A, B >.
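This inequality can be checked concretely on boxes (Cartesian products of intervals) in Kc(Rn), for which both ‖A‖ and < A, B > are attained at corners and can therefore be computed exactly by enumerating corners. The Python sketch below is an illustration added here, not part of the original text; the helper names and the random test data are ours.

```python
import itertools
import random

def corners(box):
    """All corner points of a box given as [(l1, u1), ..., (ln, un)]."""
    return itertools.product(*box)

def norm(box):
    """||A|| = sup of the Euclidean norm over A (attained at a corner)."""
    return max(sum(c * c for c in p) ** 0.5 for p in corners(box))

def pairing(A, B):
    """<A, B> = sup{a.b : a in A, b in B} (attained at corners, since a.b is bilinear)."""
    return max(sum(a * b for a, b in zip(p, q))
               for p in corners(A) for q in corners(B))

def minkowski_sum(A, B):
    return [(la + lb, ua + ub) for (la, ua), (lb, ub) in zip(A, B)]

random.seed(1)
for _ in range(200):
    A = [tuple(sorted(random.uniform(-2, 2) for _ in range(2))) for _ in range(3)]
    B = [tuple(sorted(random.uniform(-2, 2) for _ in range(2))) for _ in range(3)]
    lhs = norm(minkowski_sum(A, B)) ** 2
    rhs = norm(A) ** 2 + norm(B) ** 2 + 2 * pairing(A, B)
    assert lhs <= rhs + 1e-9
print("||A + B||^2 <= ||A||^2 + ||B||^2 + 2<A, B> verified on 200 random boxes")
```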
We can now prove the following result which offers sufficient conditions in terms
of proximal normal for the weak invariance of the system (Ω, F ).
Theorem 2.9.1 Let F and g satisfy the conditions of Theorem 2.8.1 on [t0, ∞),
t0 ≥ 0. Suppose that U (t) = U (t, t0, U0) is an Euler solution of (2.8.1) on
[t0 , ∞), which lies in an open set Q ⊂ Kc (Rn ). Suppose also that for every
(t, Z) ∈ [t0, ∞) × Q, the proximal aiming condition is satisfied: namely, there
exists an S ∈ projΩ(Z) such that
2 < F (t, Z), Z − S > ≤ q(t, D02 [Z, Ω]),
(2.9.2)
where q ∈ C[[t0, ∞) × R+ , R]. Assume that r(t) = r(t, t0, u0) is the maximal
solution of the scalar differential equation
u′ = q(t, u),  u(t0) = u0 ≥ 0,
existing on [t0, ∞). Then we have
D02 [U (t), Ω] ≤ r(t, t0, D02[U0 , Ω]), t0 ≤ t < ∞.
(2.9.3)
If, in addition r(t, t0, 0) ≡ 0 then U0 ∈ Ω implies U (t) ∈ Ω, t ≥ t0 , that is, the
system (Ω, F ) is weakly invariant.
Proof Let Uπ be one polygonal arc in the sequence converging uniformly to U
as per the definition of Euler solution of (2.8.1). We denote as before, its nodes
at ti by Ui , i = 0, 1, ....., N and hence U0 = U (t0). Let Uπ (t) be in Q for all
t0 ≤ t ≤ T , where T ∈ (t0 , ∞). Accordingly, there exists for each i, an element
Si ∈ projΩ (Ui ) such that
2 < F(ti, Ui), Ui − Si > ≤ q(ti, D0²[Ui, Ω]).
As in Theorem 2.8.1, letting D[DH Uπ(t), θ] ≤ k, we find
D0²[U1, Ω] ≤ ‖U1 − S0‖²,  since S0 ∈ Ω.
We note that U1 = U0 + Z1, where Z1 = F(t0, U0)(t1 − t0), and U0 = S0 + Z0, and therefore we get successively
D0²[U1, Ω] ≤ ‖Z1 + Z0‖² ≤ ‖Z1‖² + ‖Z0‖² + 2 < Z1, Z0 >
 ≤ k²(t1 − t0)² + D0²[U0, Ω] + 2∫_{t0}^{t1} < DH Uπ(t0), Z0 > dt
 ≤ k²(t1 − t0)² + D0²[U0, Ω] + 2∫_{t0}^{t1} < F(t0, U0), U0 − S0 > dt
 ≤ k²(t1 − t0)² + D0²[U0, Ω] + q(t0, D0²[U0, Ω])(t1 − t0).
Since similar estimates hold at any node, we obtain
D0²[Ui, Ω] ≤ k²(ti − t_{i−1})² + D0²[U_{i−1}, Ω] + q(t_{i−1}, D0²[U_{i−1}, Ω])(ti − t_{i−1}),
and therefore it follows that
D0²[Ui, Ω] ≤ D0²[U0, Ω] + k² Σ_{J=1}^{i} (tJ − t_{J−1})² + Σ_{J=1}^{i} q(t_{J−1}, D0²[U_{J−1}, Ω])(tJ − t_{J−1})
 ≤ D0²[U0, Ω] + k² µπ Σ_{J=1}^{i} (tJ − t_{J−1}) + Σ_{J=1}^{i} q(t_{J−1}, D0²[U_{J−1}, Ω])(tJ − t_{J−1})
 ≤ D0²[U0, Ω] + k² µπ (T − t0) + Σ_{J=1}^{i} q(t_{J−1}, D0²[U_{J−1}, Ω])(tJ − t_{J−1}).
Consider now the sequence UπJ(t) of polygonal arcs converging to U(t). Since the last estimate is true at every node, µπJ → 0 as J → ∞, and the same k applies to each UπJ, we deduce in the limit the integral inequality
D0²[U(t), Ω] ≤ D0²[U0, Ω] + ∫_{t0}^{t} q(s, D0²[U(s), Ω]) ds,  t0 ≤ t ≤ T,   (2.9.4)
for every T ∈ (t0 , ∞). If we know that q(t, u) is nondecreasing in u, then we
can apply the theory of integral inequalities (see Theorem 1.9.2 in Lakshmikantham and Leela [1]), to arrive at
D02 [U (t), Ω] ≤ r(t, t0, D02[U0 , Ω]), t ≥ t0.
(2.9.5)
If, on the other hand, q(t, u) is not nondecreasing in u, we can obtain instead
of (2.9.4), the following integral inequality for any t0 ≤ t ≤ t + h ≤ T, h > 0,
employing similar reasoning,
D0²[U(t + h), Ω] ≤ D0²[U(t), Ω] + ∫_{t}^{t+h} q(s, D0²[U(s), Ω]) ds,   (2.9.6)
from which we obtain, setting m(t) = D0²[U(t), Ω], the differential inequality
D⁺m(t) ≤ q(t, m(t)),  m(t0) = D0²[U0, Ω],   (2.9.7)
where D⁺m(t) is a Dini derivative.
Applying now the theory of differential inequalities (see Theorem 1.4.1 in Lakshmikantham and Leela [1]), we arrive at the same estimate (2.9.5). If r(t, t0, 0) ≡ 0, then the assumption U0 ∈ Ω implies that U(t) ∈ Ω for t ≥ t0, and therefore the system (Ω, F) is weakly invariant as claimed. The proof is hence complete.
We shall next discuss the strong invariance of the system (Ω, F ).
Definition 2.9.2 The system (Ω, F ) is said to be strongly invariant if every
Euler solution U (t) of (2.8.1) existing on [t0, ∞) for which U (t0) = U0 ∈ Ω,
satisfies U (t) ∈ Ω, t ≥ t0 .
We can now prove the following result for strong invariance of Euler solutions
of (2.8.1).
Theorem 2.9.2 Let the assumptions of Theorem 2.8.1 hold. Suppose that F
satisfies the generalized Lipschitz condition
< F (t, A), C > ≤ < F (t, B), C > + L kCk2,
(2.9.8)
where L > 0 and, for those A, B ∈ Kc(Rn) admitting a Hukuhara difference, C ∈ Kc(Rn) is the element with A = B + C. Then, if the proximal normal condition
< F (t, B), C > ≤ 0,
(2.9.9)
holds, we have the strong invariance of the system (Ω, F ).
Proof Let V (t) be any Euler solution of (2.8.1) on [t0, ∞) with V (t0 ) = U0 ∈ Ω.
By Theorem 2.8.1, there exists a M > 0 such that
D[V (t), U0 ] ≤ M on [t0, T ], for any T ∈ (t0 , ∞).
If U lies in B[U0, M ] and S ∈ projΩ (U ) then we have
D[S, U0 ] ≤ D[S, U ] + D[U, U0] ≤ 2D[U, U0 ] ≤ 2M,
which implies that S ∈ B[U0, 2M ]. Let L be the Lipschitz constant for F in
B[U0, 2M ] and consider any U ∈ B[U0 , M ] and S ∈ projΩ (U ). Then U − S ∈
NΩP (S).
Consequently, using (2.9.8) together with the proximal normal condition < F(t, S), U − S > ≤ 0 (which is (2.9.9), since S ∈ Ω and U − S ∈ NΩP(S)), we get
< F(t, U), U − S > ≤ L D0²[U, Ω].   (2.9.10)
The relation (2.9.10) shows that the proximal aiming condition (2.9.2) of Theorem 2.9.1 holds with q(t, w) = 2Lw, and therefore we obtain from Theorem 2.9.1
D0²[V(t), Ω] ≤ D0²[U0, Ω] e^{2L(t−t0)},  t ≥ t0.   (2.9.11)
Since U0 ∈ Ω is assumed, we get readily from (2.9.11), V (t) ∈ Ω, t ≥ t0 and
the proof is complete.
2.10 Existence, Upper Semicontinuous Case
We shall consider the IVP for the set differential equation
DH U = F(t, U),  U(t0) = U0 ∈ Kc(Rn),   (2.10.1)
where F is a function from J × Kc(Rn) to Kc(Rn). By a solution of (2.10.1)
where F is a function from J × Kc (Rn ) to Kc (Rn ). By a solution of (2.10.1)
we mean an absolutely continuous function U : J → Kc (Rn ), U (t0 ) = U0 ,
whose derivative DH U (t), in the sense of Hukuhara, satisfies (2.10.1) almost
everywhere (a.e.) on J = [t0, b], t0 ≥ 0, b ∈ (t0 , ∞).
In what follows, by F : J × Kc(Rn) → Kc(Rn) we mean that F is a single-valued function, and when we write F : J × Kc(Rn) → Rn, it means that F is a multifunction defined on the metric space J × Kc(Rn) with values in Rn. It will be clear from the context whether we consider F as a single-valued function or as a multifunction.
Let F : J × Kc (Rn ) → Rn be a multifunction with compact, convex values
and V be a compact convex subset of C(J, Rn). Then a function V (t) = {x(t) :
x(·) ∈ V}, t ∈ J, is continuous from J to Kc(Rn). If the multifunction t → F(t, V(t)) is measurable, then there exists a measurable selector of F(t, V(t)). For a compact convex subset V ⊂ C(T, Rn), we denote by T(V0, F, V), V0 = V(t0), the collection of all functions x : J → Rn representable as
x(t) = x0 + ∫_{t0}^{t} v(s) ds,  t ∈ J,  x0 ∈ V0,   (2.10.2)
where v(s) is a Bochner integrable selector of F(s, V(s)). Let us list the following
assumptions:
(i) F : J × Kc (Rn ) → Rn is a multifunction with compact, convex values and
F (t, A) is monotone nondecreasing with respect to A ∈ Kc (Rn);
(ii) the map (t, U) → F(t, U) is L ⊕ B(Kc(Rn)) measurable;
(iii) the map U → F(t, U) is usc for almost all t ∈ J;
(iv) ‖F(t, U)‖ ≤ g(t, ‖U‖), where g : J × R+ → R+ is a Carathéodory function, integrally bounded on bounded subsets of J × R+, g(t, r) is monotone nondecreasing in r a.e. in t ∈ J, and r(t) = r(t, t0, r0) is the maximal solution of the scalar differential equation
r′ = g(t, r),  r(t0) = r0 ≥ 0,   (2.10.3)
existing on J.
We are now in a position to prove the following existence result.
Theorem 2.10.1 Assume that conditions (i) to (iv) hold. Then for any U0 ∈
Kc (Rn), there exists a solution U : J → Kc (Rn ) of the IVP (2.10.1) on J.
Proof According to (ii), (iii) and Theorem 2.2 and Remark 2.1 in Tolstonogov
[2] for any ε > 0, there exists a compact set Tε ⊂ J, µ(J\Tε ) ≤ ε such that
the restriction of F(t, A) to Tε × Kc(Rn) is usc. Then, for any continuous function V : J → Kc(Rn), the restriction of F(t, V(t)) to Tε is usc. Hence the multifunction t → F(t, V(t)) is measurable.
Let V ⊂ C(T, Rn ) be a compact set of C(J, Rn ) then we can define the
multivalued operator T (V0 , F, V ), V0 = V (0) by using (2.10.2). We note also
that T (V0 , F, V ) is monotone relative to V in view of the monotonicity of F (t, A)
with respect to A assumed in (i).
Now let r0 = max{kxk : x ∈ V0 } and r(t) = r(t, t0, r0) be the maximal
solution of (2.10.3) on J. We denote by U0 the set of all absolutely continuous functions x : J → Rn, x(t0) ∈ V0, whose derivatives x′(t) satisfy the estimate ‖x′(t)‖ ≤ r′(t) a.e. on J.
This implies that kx(t)k ≤ r(t), t ∈ J, for any x(·) ∈ U0 and therefore U0 is
a convex, bounded and equicontinuous subset of C(T, Rn ).
Using Theorem 1.5 in Tolstonogov [1], we obtain that U0 is a closed subset of C(T, Rn). Hence U0 is a convex compact subset of C(T, Rn), U0(t0) = V0, and the multifunction U0(t) is continuous from J to Kc(Rn).
Set U1 = T(V0, F, U0). By assumption (iv) we have
‖F(t, U0(t))‖ ≤ g(t, ‖U0(t)‖) ≤ g(t, r(t)) = r′(t) a.e.   (2.10.4)
and, for any x(·) ∈ T(V0, F, U0), it follows, using (2.10.4), that
‖x′(t)‖ = ‖v(t)‖ ≤ ‖F(t, U0(t))‖ ≤ r′(t) a.e.
Hence U1 = T(V0, F, U0) ⊂ U0. By analogy with U0 we can prove that U1 is a compact convex subset of C(J, Rn) and U1(t0) = V0.
We now define U2 = T (V0 , F, U1), and, since U1 ⊂ U0 , it follows because
of the monotone nature of T (V0 , F, V ) in V , that T (V0 , F, U1) ⊂ T (V0 , F, U0).
Thus U2 ⊂ U1 .
Continuing this process, we obtain a sequence {Uk}, k ≥ 1, of compact convex sets of C(J, Rn), decreasing relative to inclusion.
Hence U = ∩_{k=0}^{∞} Uk is a nonempty compact convex subset of C(J, Rn), and the sequence {Uk} converges to U in the Hausdorff metric on the space of all nonempty, closed, bounded sets of C(J, Rn). It is clear that U(t0) = V0.
Since U ⊂ Uk, k ≥ 0, we have
T(V0, F, U) ⊂ T(V0, F, U_{k−1}) ⊂ U_{k−1},  k ≥ 1,
and therefore
T(V0, F, U) ⊂ ∩_{k=0}^{∞} Uk = U.   (2.10.5)
It is easy to prove that U(t) = ∩_{k=0}^{∞} Uk(t), t ∈ J, and that the sequence Uk(t), k ≥ 1, converges pointwise on J to U(t).
Let x(·) ∈ U. Then x(·) ∈ T(V0, F, U_{k−1}), k ≥ 1. Hence x(t) is absolutely continuous and
x′(t) ∈ F(t, U_{k−1}(t)) a.e., k ≥ 1.
Then
x′(t) ∈ F(t, U(t)) a.e.   (2.10.6)
due to (iii) and the convergence of Uk(t), k ≥ 1, to U(t) in Kc(Rn).
As a consequence of (2.10.6) we have
x(·) ∈ T(V0, F, U).   (2.10.7)
Combining (2.10.5) and (2.10.7), we obtain
T(V0, F, U) = U.   (2.10.8)
It is easy to see from (2.10.8) and (1.8.2) that
U(t) = V0 + ∫_{0}^{t} F(s, U(s)) ds,  t ∈ J.   (2.10.9)
Taking into consideration (1.8.3), (1.8.4) and (2.10.9), we obtain
DH U(t) = F(t, U(t)) a.e. on J
and U(0) = V0, and the proof is complete.
Let Γ : J × Rn → Rn be a multifunction. We need the following hypotheses
H(Γ) : Γ : J × Rn → Rn is a multifunction with compact values such that
(i) (t, x) → Γ(t, x) is L ⊕ BRn measurable;
(ii) x → Γ(t, x) is usc a.e. on J;
(iii)
kΓ(t, x)k = sup{kyk : y ∈ Γ(t, x)} ≤ g(t, kxk).
(2.10.10)
Consider a multifunction Φ : J × Kc (Rn ) → Rn defined by
Φ(t, U ) = co Γ(t, U ), U ∈ Kc (Rn ),
(2.10.11)
where co denotes closed convex hull.
Lemma 2.10.1 Suppose that hypotheses H(Γ) hold. Then there exists a multifunction F : J × Kc (Rn ) → Rn such that assumptions (i) to (iv) hold and
F (t, U ) = Φ(t, U ), U ∈ Kc (Rn ) a.e. on J,
where Φ(t, U ) is defined by (2.10.11).
Proof Let Tk , k ≥ 1, be a sequence
S∞ of closed subsets of J increasing with
respect to inclusion such that µ(J\ k=1 Tk ) = 0. For every t ∈ Tk the multifunction x → Γ(t, x) is usc. Fix k ≥ 1. Then the restriction of Γ(t, x) on Tk × E
is L ⊕ BRn measurable.
From Theorem 2.2 in Tolstonogov [2] we know that for every ε > 0 there
exists a closed set Tε ⊂ Tn , µ(Tk \Tε ) ≤ ε such that the restriction of multifunction on Tε × Rn is usc with respect to (t, x) ∈ Tε × Rn . Since the multifunction
x → Γ(t, x) is usc for every t ∈ Tε , the set Γ(t, A), A ∈ Kc (Rn ), t ∈ Tε is
compact subset of Rn .
Let us show that the restriction of Γ(t, A) to Tε × Kc (Rn) is usc. To this end
we have to prove that restriction of Γ(t, U ) on Tε × M is usc for any compact
set M ⊂ Kc (Rn), Tolstonogov [2].
S
Let M ⊂ Kc (Rn ) be a compact set. Then the set M = { U ; U ∈ M} is
compact set of Rn. Since Γ(t, x) is usc on Tε × M , there exists a compact set
Q ⊂ Rn such that
Γ(t, x) ⊂ Q, t ∈ Tε , x ∈ M.
(2.10.12)
From (2.10.12) it follows that upper semicontinuity of restriction Γ(t, U ) on
Tε × M is equivalent to closedness of graph of restriction Γ(t, U ) on Tε × M.
Let tm → t, tm ∈ Tε , Um → U in Kc (Rn ), Um ∈ M, and ym → y, ym ∈
Γ(t, Um ). Then there exists xm ∈ Um such that ym ∈ Γ(t, xm ). Since xm ∈
M, m ≥ 1, without loss of generality we can suppose that xm → x. It is clear
that x ∈ U.
From the upper semicontinuity of Γ(t, x) on Tε × M it follows that y ∈
Γ(t, x) ⊂ Γ(t, U ). It means that the restriction of Γ(t, U ) on Tε × M has closed
graph in Tε × Kc (Rn ) × Rn. Hence the restriction of Γ(t, U ) on Tε × M is usc.
Then the restriction of Φ(t, U) = co Γ(t, U) on Tε × M is usc. Therefore the restriction of Φ(t, U) on Tε × Kc(Rn) is usc.
By using similar arguments we can prove that for every t ∈ Tk , k ≥ 1, the
multifunction Φ(t, U ) is usc.
In this case Theorem 2.2 in Tolstonogov [2] tells us that the restriction of the multifunction Φ(t, U) on Tk × Kc(Rn), k ≥ 1, is L ⊕ BKc(Rn) measurable. Hence for every t ∈ ∪_{k=1}^∞ Tk the multifunction U → Φ(t, U) is usc and the restriction of the multifunction Φ(t, U) on (∪_{k=1}^∞ Tk) × Kc(Rn) is L ⊕ BKc(Rn) measurable.
Set
F(t, U) = Φ(t, U), t ∈ ∪_{k=1}^∞ Tk, U ∈ Kc(Rn),
F(t, U) = Θ, t ∈ J \ ∪_{k=1}^∞ Tk, U ∈ Kc(Rn),
where Θ is the zero element of E, which is regarded as a one-point set. It is clear that the multifunction U → F(t, U) is usc for every t ∈ J and is monotone nondecreasing with respect to U ∈ Kc(Rn).
Since (J \ ∪_{k=1}^∞ Tk) × Kc(Rn) is a Borel subset of J × Kc(Rn), the multifunction F(t, U) is L ⊕ BKc(Rn) measurable. From (2.10.10) it follows that the multifunction F(t, U) satisfies the assumption (iv). The theorem is proved.
Remark 2.10.1 If the multifunction Γ(t, x) is L ⊕ BRn measurable and the multifunction x → Γ(t, x) is usc for every t ∈ J, then the multifunction co Γ(t, U) is L ⊕ BKc(Rn) measurable and the multifunction U → co Γ(t, U) is usc for every t ∈ J.
Corollary 2.10.1 Assume that the multifunction Γ satisfies hypotheses H(Γ).
Then there exists, for any U0 ∈ Kc (Rn), a solution U (t) = U (t, t0, U0 ) ∈ Kc (Rn)
of the IVP (2.10.1) with the multifunction Φ(t, U ) defined by (2.10.11).
Proof By Lemma 2.10.1, without loss of generality we can assume that the multifunction Φ(t, U) satisfies conditions (i) to (iv). Then the Corollary follows from Theorem 2.10.1.
2.11 Notes and Comments
For the formulation of SDEs in the metric space (Kc (Rn), D) and the initiation
of preliminary results of existence, uniqueness and extremal solutions with a
suitable partial order, see Brandao Lopes Pinto, De Blasi, and Iervolino [1], and
De Blasi and Iervolino [1]. For the case of SDEs in the metric space (Kc (E), D),
E being a Banach space, as a tool to prove existence results of multivalued
differential inclusions without compactness and convexity and for several general
results, refer to Tolstonogov [1]. The results of Sections 2.2 and 2.3 are taken from
Lakshmikantham, Leela and Vatsala [2]. For the contents of Section 2.4, see
Lakshmikantham and Vasundhara Devi[1], which are analogous to the results
of Brandao Lopes Pinto, De Blasi, and Iervolino [1] in the present framework.
Monotone Iterative Technique of Section 2.5 is from Lakshmikantham and
Vatsala [1]. For earlier results on monotone iterative technique for ordinary and
partial differential equations see Ladde, Lakshmikantham and Vatsala [1] and
more recent general results Lakshmikantham and Köksal [1]. Sections 2.6 and
2.7 contain new results parallel to the corresponding theorems in ODE. For the
existence of Euler solutions and flow invariance in Sections 2.8 and 2.9 in terms
of nonsmooth analysis, see Gnana Bhaskar and Lakshmikantham [1]. For more
information on nonsmooth analysis see Clarke, Ledyaev, Stern, and Wolenski
[1]. For some generalizations refer to Gnana Bhaskar and Lakshmikantham [2,
4, 5]. The existence of solutions in the usc case covered in Section 2.10 is adapted from Lakshmikantham and Tolstonogov [1].
Chapter 3
Stability Theory
3.1 Introduction
In this chapter, we investigate stability theory via Lyapunov-like functions. We
shall also initiate the development of set differential systems using generalized
metric spaces.
In Section 3.2, we prove a comparison theorem in terms of Lyapunov-like
functions which serves as a vehicle for the discussion of the stability theory of
Lyapunov. Some special cases of the comparison result are given which are useful later. Section 3.3 considers a global existence result for solutions of SDE, in
terms of Lyapunov-like functions using Zorn’s lemma. Simple stability results
are established in Section 3.4. Here an example is worked out to demonstrate
the problems encountered in the study of the stability theory of SDE, in view of the fact that diam U(t) = kU(t)k is nondecreasing as t increases. A way to avoid the problems
generated is suggested, which leads in certain cases to choosing an appropriate subset of the solution. The stability criteria are obtained in the suggested
format throughout. Section 3.5 discusses non-uniform stability criteria under
less restrictive conditions employing perturbing Lyapunov-like functions. The
criteria for boundedness of solutions are dealt with in Section 3.6, where various
definitions of boundedness are also given. The results proved also include the
method of perturbing Lyapunov functions.
In Section 3.7, we embark on initiating the study of set differential systems,
the consideration of which leads to generalized metric spaces, in terms of which
are proved comparison results utilizing vector Lyapunov-like functions. Section
3.8 develops the method of vector Lyapunov-like functions and stability criteria
in this set up. Since we get a comparison differential system in this situation,
the study of which is sometimes difficult, we provide certain results to reduce
the study of comparison systems to a single comparison equation.
We begin to utilize, in Section 3.9, the tools of nonsmooth analysis to investigate the stability results via lower semicontinuous Lyapunov-like functions.
Employing the connection between proximal normal theory and subdifferentials
of lower semicontinuous functions, we provide the necessary framework in this
section. In Section 3.10, we prove stability criteria for Euler solutions of SDE,
utilizing the tools provided in Section 3.9. Notes and comments are given in
Section 3.11.
3.2 Lyapunov-like Functions
We consider the IVP for set differential equations
DH U = F (t, U ),
U (t0) = U0 ∈ Kc (Rn ),
(3.2.1)
where F ∈ C[R+ × Kc (Rn ), Kc (Rn )]. To investigate the qualitative behaviour
of solutions of (3.2.1), the following comparison result in terms of a Lyapunov-like function is very important and can be proved via the theory of ordinary
differential inequalities. Here the Lyapunov-like function serves as a vehicle
to transform the set differential equation into a scalar comparison differential
equation. Therefore, it is enough to consider the qualitative properties of the
simpler comparison equation, under suitable conditions for the Lyapunov-like
function.
We also require the IVP for the scalar differential equation
w′ = g(t, w), w(t0) = w0 ≥ 0, (3.2.2)
where g ∈ C[R2+, R].
Theorem 3.2.1 Assume that
(i) V ∈ C[R+ × Kc (Rn), R+ ] and | V (t, A) − V (t, B) | ≤ L D[A, B], where L
is the local Lipschitz constant , for A, B ∈ Kc (Rn), t ∈ R+ ;
(ii) g ∈ C[R2+, R] and for t ∈ R+, A ∈ Kc(Rn),
D+ V(t, A) ≡ lim sup_{h→0+} (1/h)[V(t + h, A + hF(t, A)) − V(t, A)] ≤ g(t, V(t, A)). (3.2.3)
Then, if U (t) = U (t, t0, U0) is any solution of (3.2.1) existing on [t0, T ) such
that V (t0 , U0) ≤ w0 , we have
V (t, U (t)) ≤ r(t, t0, w0), t ∈ [t0, T ),
(3.2.4)
where r(t, t0, w0) is the maximal solution of (3.2.2) existing on [t0, T ).
Proof Let U (t) = U (t, t0, U0) be any solution of (3.2.1) existing on [t0, T ).
Define m(t) = V (t, U (t)) so that m(t0 ) = V (t0 , U0) ≤ w0. Now for small h > 0,
m(t + h) − m(t)
= V (t + h, U (t + h)) − V (t, U (t))
= V (t + h, U (t + h)) − V (t + h, U (t) + hF (t, U (t)))
+V (t + h, U (t) + hF (t, U (t))) − V (t, U (t))
≤ LD[U (t + h), U (t) + hF (t, U (t))]
+V (t + h, U (t) + hF (t, U (t))) − V (t, U (t)),
using the Lipschitz condition given in (i). Thus
D+ m(t) = lim sup_{h→0+} (1/h)[m(t + h) − m(t)]
≤ D+ V(t, U(t)) + L lim sup_{h→0+} (1/h) D[U(t + h), U(t) + hF(t, U(t))].
Since
(1/h) D[U(t + h), U(t) + hF(t, U(t))] = D[(U(t + h) − U(t))/h, F(t, U(t))],
we find that
lim sup_{h→0+} (1/h) D[U(t + h), U(t) + hF(t, U(t))] = lim sup_{h→0+} D[(U(t + h) − U(t))/h, F(t, U(t))] = D[DH U(t), F(t, U(t))] ≡ 0,
since U (t) is a solution of (3.2.1). We therefore have the scalar differential
inequality
D+ m(t) ≤ g(t, m(t)), m(t0 ) ≤ w0,
which yields, as before, the estimate
m(t) ≤ r(t, t0, w0),
t ∈ [t0, T ).
This proves the claimed estimate of the Theorem.
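The following minimal sketch (with an assumed interval-valued example, not taken from the text) checks the estimate (3.2.4) numerically. For DH U = U with U(t) = [u1(t), u2(t)], the equation reduces to u1′ = u1, u2′ = u2; choosing V(t, A) = D[A, θ] and g(t, w) = w, condition (3.2.3) holds and the theorem predicts V(t, U(t)) ≤ r(t) = w0 e^{t−t0}.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Interval-valued illustration of Theorem 3.2.1 (assumed example):
# DH U = U with U(t) = [u1(t), u2(t)] reduces to u1' = u1, u2' = u2.
# With V(t, A) = D[A, theta] = max(|u1|, |u2|) and g(t, w) = w,
# the comparison estimate is V(t, U(t)) <= r(t) = w0 * exp(t - t0).

def rhs(t, y):
    return y                                      # u1' = u1, u2' = u2

u0 = np.array([0.5, 2.0])                         # U0 = [0.5, 2.0]
sol = solve_ivp(rhs, (0.0, 2.0), u0, dense_output=True)

for t in np.linspace(0.0, 2.0, 5):
    u1, u2 = sol.sol(t)
    V = max(abs(u1), abs(u2))                     # V(t, U(t))
    r = max(abs(u0[0]), abs(u0[1])) * np.exp(t)   # maximal solution of w' = w
    assert V <= r + 1e-8
    print(f"t = {t:4.2f}  V = {V:8.4f}  r = {r:8.4f}")
```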
The following Corollaries are useful.
Corollary 3.2.1 The function g(t, w) = 0 is admissible in Theorem 3.2.1 to
yield the estimate
V (t, U (t)) ≤ V (t0 , U0),
t ∈ [t0, T ).
Corollary 3.2.2 If, in Theorem 3.2.1, we assume that g(t, w) = −αw, α > 0,
we get the relation
V (t, U (t)) ≤ V (t0 , U0) exp(−α(t − t0 )),
t ∈ [t0, T ).
Corollary 3.2.3 If, in Theorem 3.2.1, we strengthen the assumption on D+ V (t, U )
to
D+ V (t, U ) + c[w(t, U )] ≤ g(t, V (t, U )),
where w ∈ C[R+ × Kc (Rn ), R+ ], c ∈ K and g(t, w) is nondecreasing in w for
each t ∈ [t0, T), then we get
V(t, U(t)) + ∫_{t0}^t c[w(s, U(s))] ds ≤ r(t, t0, V(t0, U0)), t ∈ [t0, T), (3.2.5)
whenever w0 = V(t0, U0).
Proof Set L(t, U(t)) = V(t, U(t)) + ∫_{t0}^t c[w(s, U(s))] ds and note that
D+ L(t, U(t)) ≤ D+ V(t, U(t)) + c[w(t, U(t))] ≤ g(t, V(t, U(t))) ≤ g(t, L(t, U(t))),
using the monotone character of g(t, w). We then get immediately by Theorem 3.2.1 the desired estimate (3.2.5).
3.3 Global Existence
Employing the comparison Theorem 3.2.1, we shall prove the following global
existence result.
Theorem 3.3.1 Assume that
(i) F ∈ C[R+ × Kc (Rn ), Kc(Rn )], F maps bounded sets onto bounded sets,
and there exists a local solution of (3.2.1) for every (t0 , U0), t0 ≥ 0 and
U0 ∈ Kc (Rn );
(ii) V ∈ C[R+ × Kc (Rn), R+ ]; | V (t, A) − V (t, B) | ≤ L D[A, B] where L is
the local Lipschitz constant, for A, B ∈ Kc (Rn ), t ∈ R+ , V (t, A) → ∞ as
D[A, θ] → ∞ uniformly for [0, T ] for every T > 0 and for t ∈ R+ , A ∈
Kc (Rn ),
D+ V (t, A) ≤ g(t, V (t, A)),
where g ∈ C[R2+ , R], D+ V (t, A) is as defined in Theorem 3.2.1.;
(iii) The maximal solution r(t) = r(t, t0, w0) of (3.2.2) exists on [t0 , ∞), and
is positive whenever w0 > 0.
Then, for every U0 ∈ Kc (Rn ) such that V (t0 , U0) ≤ w0 , the IVP (3.2.1) has a
solution U (t) on [t0, ∞) which satisfies
V (t, U (t)) ≤ r(t), t ≥ t0 .
Proof Let S denote the set of all functions U defined on IU = [t0, cU ) with
values in Kc (Rn) such that U (t) is a solution of (3.2.1) on IU and
V(t, U(t)) ≤ r(t) on IU.
Define a partial order ≤ on S as follows:
the relation U ≤ V means that IU ⊆ IV and V(t) = U(t) on IU.
We shall first show that S is nonempty. By (i) there exists a solution U (t)
of (3.2.1) defined on IU = [t0, cU ). Setting m(t) = V (t, U (t)), t ∈ IU and using
assumption (ii), we get by Theorem 3.2.1, that
V (t, U (t)) ≤ r(t),
t ∈ IU ,
where r(t) is the maximal solution of (3.2.2). This proves that U ∈ S and hence
S is not empty.
If (Uβ)β is a chain in (S, ≤), then there is a uniquely defined map V on IV = [t0, supβ cUβ) that coincides with Uβ on IUβ. Clearly, V ∈ S and therefore V
is an upperbound of (Uβ )β in (S, ≤). Then Zorn’s lemma assures the existence
of a maximal element Z in (S, ≤). The proof of the Theorem is complete if we
show that cZ = ∞.
Suppose that it is not true, so that cZ < ∞. Since r(t) is assumed to exist on
[t0, ∞), r(t) is bounded on IZ . Since V (t, A) → ∞ as D[A, θ] → ∞ uniformly
in t on [t0, cZ ] , the relation V (t, Z(t)) ≤ r(t) on IZ implies that D[Z(t), θ] is
bounded on IZ . By (i), this shows that there is a constant M > 0 such that
D[F (t, Z(t)), θ] ≤ M
on IZ .
We then have, for all t1, t2 ∈ IZ, t1 ≤ t2,
D[Z(t2), Z(t1)] ≤ ∫_{t1}^{t2} D[F(s, Z(s)), θ] ds ≤ M(t2 − t1),
which proves that Z is Lipschitzian on IZ and consequently has a continuous extension Z0(t) on [t0, cZ].
By continuity, we get
Z0(cZ) = U0 + ∫_{t0}^{cZ} F(s, Z0(s)) ds.
This implies that Z0 (t) is a solution of (3.2.1) on [t0, cZ ] and obviously V (t, Z0 (t)) ≤
r(t), t ∈ [t0, cZ ].
Consider the IVP
DH U = F (t, U ),
U (cZ ) = Z0 (cZ ).
By the assumed local existence there is a solution U0 (t) on [cZ , cZ + δ), δ > 0.
Define
Z1(t) = Z0(t) for t0 ≤ t ≤ cZ, and Z1(t) = U0(t) for cZ ≤ t ≤ cZ + δ.
Clearly, Z1 (t) is a solution of (3.2.1) on [t0, cZ + δ) and repeating the argument,
we get
V (t, Z1(t)) ≤ r(t), t ∈ [t0, cZ + δ).
This contradicts the maximality of Z and hence cZ = ∞. The proof is complete.
3.4 Stability Criteria
Having necessary comparison results in terms of Lyapunov-like functions, it is
easy to establish the stability results for the set differential equations (SDE)
(3.2.1) analogous to the original Lyapunov results and their extensions. However, in order to investigate the stability criteria for the trivial solution of
(3.2.1), we need to employ, in a natural way, the measure D[U (t), θ] = kU (t)k =
diam U(t) for t ≥ t0, where U(t) = U(t, t0, U0) is the solution of (3.2.1). This
implies by Proposition 1.6.1 that diam U(t) is nondecreasing in t ≥ t0. This
is due to the fact that, in the generation of the SDE from ordinary differential
equations(ODEs), certain undesirable elements may enter the solution U (t) and
the measure to be employed, namely, kU (t)k is therefore unsuitable to develop
stability theory without some adjustment. Recall that SDE (3.2.1) reduces to
ODE when U (t) is single valued and SDE (3.2.1) can be generated from ODE
as well. The latter is done as follows.
We let F(t, A) = co f(t, A), A ∈ Kc(Rn), where f ∈ C[R+ × Rn, Rn] arises from the ODE
u′ = f(t, u), u(t0) = u0 ∈ Rn. (3.4.1)
Consequently, the solutions u(t) of the ODE (3.4.1) are embedded in the solution U(t) of the SDE (3.2.1).
Since the cause of the problem in SDE is the requirement of the existence of
Hukuhara differences in formulating SDE, we need to incorporate the Hukuhara
difference in the initial conditions also, in order to match the behavior of solutions of SDE with the corresponding ODE. This is precisely what we plan to
do.
Suppose that the Hukuhara difference exists for any given initial values
U0 , V0 ∈ Kc (Rn ) so that U0 − V0 = W0 is defined. Then we consider the
stability of the solution U (t, t0, U0 − V0 ) = U (t, t0, W0) of (3.2.1). Before presenting the stability results, in this new set up, let us present some examples to
illustrate our approach.
Consider the ODE
u′ = −u, u(0) = u0 ∈ R, (3.4.2)
and the corresponding SDE
DH U = −U, U (0) = U0 ∈ Kc (R).
(3.4.3)
Since the values of the solution of (3.4.3) are interval functions, the equation (3.4.3) can be written as
[u1′, u2′] = (−1)U = [−u2, −u1], (3.4.4)
where U(t) = [u1(t), u2(t)] and U(0) = [u10, u20]. The relation (3.4.4) is equivalent to the system of equations
u1′ = −u2, u1(0) = u10,
u2′ = −u1, u2(0) = u20,
whose solution, for t ≥ 0, is
u1(t) = (1/2)[u10 + u20]e^{−t} + (1/2)[u10 − u20]e^t,
u2(t) = (1/2)[u20 + u10]e^{−t} + (1/2)[u20 − u10]e^t. (3.4.5)
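A quick numerical check of (3.4.5), with assumed initial values, integrates the equivalent system u1′ = −u2, u2′ = −u1 and compares it with the closed form; it also confirms that the interval width u2 − u1 grows like e^t, the behaviour discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check of (3.4.5) (illustrative initial values, not from the text).
u10, u20 = 1.0, 3.0

def rhs(t, u):
    u1, u2 = u
    return [-u2, -u1]                              # u1' = -u2, u2' = -u1

sol = solve_ivp(rhs, (0.0, 2.0), [u10, u20], rtol=1e-10, atol=1e-12,
                dense_output=True)

def closed_form(t):
    u1 = 0.5*(u10 + u20)*np.exp(-t) + 0.5*(u10 - u20)*np.exp(t)
    u2 = 0.5*(u20 + u10)*np.exp(-t) + 0.5*(u20 - u10)*np.exp(t)
    return np.array([u1, u2])

for t in (0.5, 1.0, 2.0):
    assert np.allclose(sol.sol(t), closed_form(t), atol=1e-6)
print("closed form (3.4.5) confirmed; interval width u2 - u1 grows like e^t")
```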
Given U0 ∈ Kc(R), if there exist V0, W0 ∈ Kc(R) such that U0 = V0 + W0, then the Hukuhara difference U0 − V0 = W0 exists. Let us choose
U0 = [u10, u20], V0 = (1/2)[(u10 − u20), (u20 − u10)],
so that
W0 = (1/2)[(u10 + u20), (u20 + u10)].
If u10 ≠ −u20, then we have for t ≥ 0,
U(t, U0) = (1/2)[−(u20 − u10), (u20 − u10)]e^t + (1/2)[(u10 + u20), (u10 + u20)]e^{−t},
U(t, V0) = (1/2)[(u10 − u20), (u20 − u10)]e^t, and
U(t, W0) = (1/2)[(u10 + u20), (u10 + u20)]e^{−t}.
If, on the other hand, u10 = −u20, we choose U0 = [−d, d] with d = u20. Then U0 = V0 and W0 = [0, 0]. This choice eliminates the term with e−t and we retain only the undesirable part of the solution. The other situation is to choose U0 = [c, c] for some c, which eliminates the term with et and retains only the desirable part of the solution compared with the ODE. Even when U0 is chosen as U0 = [−d, d], we can find V0 = [c − d, c + d] for some c so that we have V0 = U0 + W0, where W0 = [c, c].
We note that for a general initial value U0, the solution of the SDE (3.4.3) contains both desired and undesired parts compared to the solution of the ODE (3.4.2). In order to isolate the desired part of the solution U(t, U0) of (3.4.3) that matches the solution of the ODE (3.4.2), we need to use initial values satisfying the desired Hukuhara difference of the given two initial values.
If, on the other hand, we have the SDE as
DH U = λ(t)U,
U (0) = U0 ,
(3.4.6)
which is generated by
u′ = λ(t)u, u(0) = u0, (3.4.7)
where λ(t) > 0 is a real-valued function on R+ such that λ ∈ L1(R+), then we see that, with a similar computation,
U(t, U0) = U0 exp(∫_0^t λ(s) ds), t ≥ 0,
for any U0 ∈ Kc(Rn).
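A small numerical sketch of this case, with the assumed integrable rate λ(t) = 1/(1 + t)² (an illustrative choice, not from the text), confirms that the factor exp(∫_0^t λ(s) ds) stays bounded, here by e, which is what yields the stability of the trivial solution of (3.4.6).

```python
import numpy as np
from scipy.integrate import quad

# Illustration of (3.4.6)-(3.4.7) with lambda(t) = 1/(1+t)^2 (assumed choice).
# The integral of lambda over [0, infinity) equals 1, so every endpoint of
# U(t, U0) = U0 * exp(int_0^t lambda(s) ds) stays within the factor e of its
# initial value, giving stability of the trivial solution.

lam = lambda t: 1.0 / (1.0 + t)**2

def factor(t):
    value, _ = quad(lam, 0.0, t)
    return np.exp(value)

U0 = np.array([0.2, 1.5])                          # an interval U0 = [0.2, 1.5]
for t in (1.0, 10.0, 100.0):
    Ut = U0 * factor(t)
    print(t, Ut, "bound:", U0 * np.e)
    assert np.all(Ut <= U0 * np.e + 1e-12)
```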
Hence we get the stability of the trivial solution of (3.4.6). In this case, we
note that the solutions of both equations (3.4.6) and (3.4.7) match, providing
the same stability results. There is no necessity, therefore, to choose the initial
values as before, since the undesirable part of the solution does not exist among
solutions of (3.4.6). Consequently, it does not matter, whether we use the
Hukuhara difference or not, we get the same conclusion. In order to be consistent
and to take care of all the situations, most of the results in this chapter are
formulated in terms of Hukuhara differences of initial values.
In order to consider the stability of the trivial solution of (3.2.1), let us
assume that F (t, θ) = θ, the solutions are unique and exist for all t ≥ t0 . Also,
we assume, as a standard hypothesis, that the Hukuhara difference U0 −V0 = W0
exists, since we suppose that U0 = V0 + W0. Consequently, we utilize the
solutions U (t) = U (t, t0, U0 − V0) = U (t, t0, W0), throughout.
Let us start with the following result on equi-stability.
Theorem 3.4.1 Assume that the following hold:
(i) V ∈ C[R+ × S(ρ), R+ ], |V (t, U1) − V (t, U2 )| ≤ L D[U1 , U2], L > 0 and
for (t, U ) ∈ R+ × S(ρ), where S(ρ) = [U ∈ Kc (Rn ) : D[U, θ] < ρ],
D+ V(t, U) ≡ lim sup_{h→0+} (1/h)[V(t + h, U + hF(t, U)) − V(t, U)] ≤ 0; (3.4.8)
(ii) b(kU k) ≤ V (t, U ) ≤ a(t, kU k), for (t, U ) ∈ R+ × S(ρ) where
b, a(t, .) ∈ K = {σ ∈ C[[0, ρ), R+] : σ(0) = 0 and σ(ω) is increasing in
ω}.
Then, the trivial solution of (3.2.1) is equi-stable.
Proof Let 0 < ε < ρ and t0 ∈ R+ , be given. Choose a δ = δ(t0 , ε) such that
a(t0 , δ) < b(ε).
(3.4.9)
We claim that with this δ, equi-stability holds. If not, there would exist a
solution U (t) = U (t, t0, W0) of (3.2.1) and t1 > t0 such that
kU (t1)k = ε and kU (t)k ≤ ε < ρ,
t 0 ≤ t ≤ t1 ,
(3.4.10)
whenever kW0k < δ. By Corollary 3.2.1, we then have
V (t, U (t)) ≤ V (t0 , W0),
t 0 ≤ t ≤ t1 .
Consequently, using (ii), (3.4.9) and (3.4.10), we arrive at the following contradiction:
b(ε) = b(kU (t1)k) ≤ V (t1 , U (t1)) ≤ V (t0 , W0) ≤ a(t0 , kW0k) ≤ a(t0 , δ) < b(ε).
Hence equi-stability holds, completing the proof.
The next result provides sufficient conditions for equi-asymptotic stability.
In fact, it gives exponential asymptotic stability.
Theorem 3.4.2 Let the assumptions of Theorem 3.4.1 hold, except that the
estimate (3.4.8) be strengthened to
D+ V (t, U ) ≤ −βV (t, U ),
(t, U ) ∈ R+ × S(ρ).
Then the trivial solution of (3.2.1) is equi-asymptotically stable.
(3.4.11)
Proof Clearly, the trivial solution of (3.2.1) is equi-stable. Hence taking ε = ρ
and designating δ0 = δ(t0 , ρ) , we have by Theorem 3.4.1,
kW0 k < δ0 implies kU (t)k < ρ,
t ≥ t0 ,
where U (t) = U (t, t0, W0) as before.
Consequently, we get from the assumption (3.4.11) the estimate
V(t, U(t)) ≤ V(t0, W0) exp[−β(t − t0)], t ≥ t0.
Given ε > 0, we choose T = T(t0, ε) = (1/β) ln[a(t0, δ0)/b(ε)] + 1. Then it is easy to see that
b(kU(t)k) ≤ V(t, U(t)) ≤ a(t0, δ0) e^{−β(t−t0)} < b(ε), t ≥ t0 + T.
The proof is complete.
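For concreteness, the following snippet evaluates the waiting time T of the proof for assumed comparison functions a, b and decay rate β (illustrative choices, not from the text) and verifies the final inequality a(t0, δ0)e^{−βT} < b(ε).

```python
import numpy as np

# Arithmetic check of the waiting time in Theorem 3.4.2 with the illustrative
# choices a(t0, r) = 2r, b(r) = r/2, beta = 0.5 (assumed data).
beta, delta0, eps = 0.5, 1.0, 0.1
a = lambda t0, r: 2.0 * r
b = lambda r: 0.5 * r

T = (1.0 / beta) * np.log(a(0.0, delta0) / b(eps)) + 1.0
# For t >= t0 + T the decay estimate gives a(t0, delta0) * exp(-beta*T) < b(eps).
assert a(0.0, delta0) * np.exp(-beta * T) < b(eps)
print("T =", T)
```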
We shall next consider the uniform stability criteria.
Theorem 3.4.3 Assume that, for (t, U ) ∈ R+ × S(ρ) ∩ S c (η) for each 0 < η <
ρ, V ∈ C[R+ × S(ρ) ∩ S c (η), R+ ], we have,
|V (t, U1) − V (t, U2)| ≤ L D[U1 , U2 ], L > 0,
D+ V (t, U ) ≤ 0,
(3.4.12)
and
b(kU k) ≤ V (t, U ) ≤ a(kU k),
a, b ∈ K.
(3.4.13)
Then the trivial solution of (3.2.1) is uniformly stable.
Proof Let 0 < ε < ρ and t0 ∈ R+ be given. Choose δ = δ(ε) > 0 such that
a(δ) < b(ε). Then we claim that with this δ, uniform stability follows. If not,
there would exist a solution U (t) of (3.2.1), and a t2 > t1 > t0 satisfying
kU(t1)k = δ, kU(t2)k = ε and δ ≤ kU(t)k ≤ ε < ρ, t1 ≤ t ≤ t2. (3.4.14)
Taking η = δ, we get from (3.4.12), the estimate
V (t2 , U (t2)) ≤ V (t1, U (t1 )),
and therefore, (3.4.13) and (3.4.14), together with the definition of δ, yield
b(ε)
=
≤
b(kU (t2)k) ≤ V (t2 , U (t2)) ≤ V (t1, U (t1 ))
a(kU (t1)k) = a(δ) < b(ε).
This contradiction proves uniform stability, completing the proof.
Finally, we shall prove uniform asymptotic stability.
Theorem 3.4.4 Let the assumptions of Theorem 3.4.3 hold except that (3.4.12)
is strengthened to
D+ V (t, U ) ≤ −c(kU k), c ∈ K.
(3.4.15)
Then the trivial solution of (3.2.1) is uniformly asymptotically stable.
Proof By Theorem 3.4.3, uniform stability follows. Now, for ε = ρ, we designate δ0 = δ0 (ρ). This means,
kW0 k < δ0 implies kU (t)k < ρ,
t ≥ t0 .
In view of the uniform stability, it is enough to show that there exists a t∗ with t0 ≤ t∗ ≤ t0 + T, where T = 1 + a(δ0)/c(δ), such that
kU(t∗)k < δ. (3.4.16)
If this is not true, δ ≤ kU(t)k for t0 ≤ t ≤ t0 + T. Then, (3.4.15) gives
V(t, U(t)) ≤ V(t0, W0) − ∫_{t0}^t c(kU(s)k) ds, t0 ≤ t ≤ t0 + T.
As a result, we have, in view of the choice of T ,
0 ≤ V (t0 + T, U (t0 + T )) ≤ a(δ0 ) − c(δ)T < 0
a contradiction. Hence there exists a t∗ satisfying (3.4.16) and uniform stability
then shows that
kW0k < δ0 implies kU (t)k < ε,
t ≥ t0 + T,
and the proof is complete.
3.5 Nonuniform Stability Criteria
In section 3.4, we discussed stability results parallel to Lyapunov’s original theorems for set differential equations. We note that in proving nonuniform stability concepts, one needs to impose assumptions throughout R+ × S(ρ), whereas
to investigate uniform stability notions it is enough to assume conditions in
R+ × S(ρ) ∩ S c (η) for 0 < η < ρ, where S c (η) denotes the complement of S(η).
The question therefore arises whether one can prove nonuniform stability notions under less restrictive assumptions. The answer is yes and one needs to
employ the method of perturbing Lyapunov functions to achieve this. This is
what we plan to do in this section.
We begin with the following result which provides nonuniform stability criteria under weaker assumptions.
Theorem 3.5.1 Assume that
(i) V1 ∈ C[R+ × S(ρ), R+ ], | V1(t, U1) − V1(t, U2) | ≤ L1 D[U1, U2], L1 > 0,
V1 (t, U ) ≤ a0(t, kU k), where a0 ∈ C[R+ × [0, ρ), R+ ] and a0 (t, .) ∈ K for
each t ∈ R+ .
(ii) D+ V1 (t, U ) ≤ g1(t, V1(t, U )), (t, U ) ∈ R+ × S(ρ), where g1 ∈ C[R2+ , R] and
g1 (t, 0) ≡ 0;
(iii) for every η > 0, there exists a Vη ∈ C[R+ × S(ρ) ∩ S c (η), R+ ],
| Vη (t, U1) − Vη (t, U2 ) |≤ Lη D[U1 , U2 ]
b(kU k) ≤ Vη (t, U ) ≤ a(kU k), a, b ∈ K;
and
D+ V1 (t, U ) + D+ Vη (t, U ) ≤ g2(t, V1 (t, U ) + Vη (t, U ))
for (t, U ) ∈ R+ × S(ρ) ∩ S c (η), where g2 ∈ C[R2+, R] and g2(t, 0) ≡ 0;
(iv) the trivial solution w1 ≡ 0 of
w1′ = g1(t, w1), w1(t0) = w10 ≥ 0,
(3.5.1)
is equi-stable;
(v) the trivial solution w2 ≡ 0 of
w2′ = g2(t, w2), w2(t0) = w20 ≥ 0,
(3.5.2)
is uniformly stable.
Then, the trivial solution of (3.2.1) is equi-stable.
Proof Let 0 < ε < ρ and t0 ∈ R+ , be given. Since the trivial solution of (3.5.2)
is uniformly stable, given b(ε) > 0 and t0 ∈ R+ , there exists a δ0 = δ0 (ε) > 0
satisfying
0 < w20 < δ0 implying w2(t, t0, w20) < b(ε), t ≥ t0,
(3.5.3)
where w2(t, t0, w20) is any solution of (3.5.2). In view of the hypothesis on a(w),
there is a δ2 = δ2(ε) > 0 such that
a(δ2) < δ0/2. (3.5.4)
Since the trivial solution of (3.5.1) is equi-stable, given δ0/2 > 0 and t0 ∈ R+, we can find a δ∗ = δ∗(t0, ε) > 0 such that
0 < w10 < δ∗ implies w1(t, t0, w10) < δ0/2, t ≥ t0, (3.5.5)
where w1(t, t0, w10) is any solution of (3.5.1). Choose w10 = V1 (t0 , W0). Since
V1 (t, U ) ≤ a0 (t, kU k), we see that there exists δ1 = δ1 (t0, ε) > 0 satisfying
kW0k < δ1 and a0(t0 , kW0k) < δ ∗ ,
(3.5.6)
simultaneously. Define δ = min(δ1 , δ2).
Then, we claim that
kW0 k < δ implies kU (t)k < ε, t ≥ t0 ,
(3.5.7)
for any solution U (t) = U (t, t0, W0) of (3.2.1). If this is false, there would exist
a solution U (t) of (3.2.1) with kW0 k < δ and t1, t2 > t0 such that
kU (t1 )k = δ2 , kU (t2 )k = ε,
and
δ2 ≤ kU (t)k ≤ ε < ρ,
(3.5.8)
for t1 ≤ t ≤ t2. We let η = δ2 so that the existence of a Vη satisfying hypothesis
(iii) is assured. Hence setting
m(t) = V1 (t, U (t)) + Vη (t, U (t)),
t ∈ [t1, t2],
we obtain the differential inequality
D+ m(t) ≤ g2 (t, m(t)),
t 1 ≤ t ≤ t2 ,
which yields
V1(t2 , U (t2)) + Vη (t2 , U (t2)) ≤ r2 (t2, t1, w20),
(3.5.9)
where w20 = V1 (t1 , U (t1)) + Vη (t1 , U (t1)), and r2(t, t1, w20) is the maximal
solution of (3.5.2). We also have, because of assumptions (i) and (ii),
V1 (t1 , U (t1)) ≤ r1(t1 , t0, w10),
with w10 = V1 (t0 , W0) where r1(t, t0 , w10) is the maximal solution of (3.5.1).
By (3.5.5) and (3.5.6), we get
V1(t1, U(t1)) < δ0/2. (3.5.10)
Also, by (3.5.4), (3.5.8) and assumption (iii), we arrive at
Vη(t1, U(t1)) ≤ a(δ2) < δ0/2. (3.5.11)
Thus, (3.5.10), (3.5.11) and the definition of w20 show that w20 < δ0, which, in view of (3.5.3), implies w2(t2, t1, w20) < b(ε). It then follows from (3.5.9),
V1 (t, U ) ≥ 0 and assumption (iii),
b(ε) = b(kU (t2)k) ≤ V1(t2 , U (t2)) ≤ r2(t2 , t1, w20) < b(ε).
This contradiction proves equi-stability of the trivial solution of (3.2.1) since
(3.5.7) is then true. The proof is complete.
The next result offers conditions for equi-asymptotic stability.
Theorem 3.5.2 Let the assumptions of Theorem 3.5.1 hold except that condition (ii) is strengthened to
D+ V1 (t, U ) ≤ −c(w(t, U )) + g1 (t, V1(t, U )), c ∈ K,
(ii∗ )
w ∈ C[R+ × S(ρ), R+ ],
| w(t, U1 ) − w(t, U2) | ≤ N D[U1 , U2], N > 0,
and D+ w(t, U ) is bounded above or below. Then, the trivial solution of (3.2.1)
is equi-asymptotically stable, if g1(t, w) is monotone nondecreasing in w and
w(t, U ) ≥ b0(kU k), b0 ∈ K.
(3.5.12)
Proof By Theorem 3.5.1, the trivial solution of (3.2.1) is equi-stable. Hence
letting ε = ρ so that δ0 = δ(t0, ρ), we get, by equi-stability,
kW0k < δ0 implies kU(t)k < ρ, t ≥ t0.
We shall show that, for any solution U (t) of (3.2.1) with kW0k < δ0 , we have
limt→∞ w(t, U (t)) = 0. This and (3.5.12) imply limt→∞ kU (t)k = 0, completing
the proof.
Suppose that lim sup_{t→∞} w(t, U(t)) ≠ 0. Then there would exist two divergent sequences {t′i}, {t″i} and a σ > 0 satisfying
(a) w(t′i, U(t′i)) = σ/2, w(t″i, U(t″i)) = σ and w(t, U(t)) ≥ σ/2, t ∈ (t′i, t″i),
or
(b) w(t′i, U(t′i)) = σ, w(t″i, U(t″i)) = σ/2 and w(t, U(t)) ≥ σ/2, t ∈ (t′i, t″i).
Suppose that D+ w(t, U(t)) ≤ M. Then using (a) we obtain
σ/2 = σ − σ/2 = w(t″i, U(t″i)) − w(t′i, U(t′i)) ≤ M(t″i − t′i),
which shows that t″i − t′i ≥ σ/(2M) for each i. Hence by (ii∗) and Corollary 3.2.3 we have
V1(t, U(t)) ≤ r1(t, t0, w10) − Σ_{i=1}^n ∫_{t′i}^{t″i} c[w(s, U(s))] ds, t ≥ t0.
Since w10 = V1(t0, W0) ≤ a0(t0, kW0k) ≤ a0(t0, δ0) < δ∗(ρ), we get from (3.5.5) that w1(t, t0, w10) < δ0(ρ)/2, t ≥ t0. We thus obtain
0 ≤ V1(t, U(t)) ≤ δ0(ρ)/2 − c(σ/2) (σ/(2M)) n.
For sufficiently large n, we get a contradiction and therefore lim sup_{t→∞} w(t, U(t)) = 0. Since w(t, U) ≥ b0(kU(t)k) by assumption, it follows that lim_{t→∞} kU(t)k = 0 and the proof is complete.
The following remarks are relevant.
Remark 3.5.1 The functions g1(t, w) = g2 (t, w) ≡ 0 are admissible in Theorem 3.5.1, and so the same conclusion can be reached. If V1 (t, U ) ≡ 0 and
g1 (t, w) ≡ 0, then we get uniform stability from Theorem 3.5.1. If, on the other
hand, Vη (t, U ) ≡ 0, g2(t, w) ≡ 0 and V1 (t, U ) ≥ b(kU k), b ∈ K, then Theorem
3.5.1 yields equi-stability. We note that known results on equi-stability require
the assumption to hold everywhere in S(ρ) and Theorem 3.5.1 relaxes such a
requirement considerably by the method of perturbing Lyapunov functions.
Remark 3.5.2 The functions g1(t, w) ≡ g2(t, w) ≡ 0 are admissible in Theorem 3.5.2 to yield equi-asymptotic stability. Similarly, if Vη(t, U) ≡ 0 and g2(t, w) ≡ 0 with V1(t, U) ≥ b(kUk), b ∈ K, the same conclusion holds. If V1(t, U) ≡ 0
and g1(t, w) ≡ 0 in Theorem 3.5.1, to get uniform asymptotic stability, one
needs to strengthen the estimate on D+ Vη (t, U ). This we state as a corollary.
Corollary 3.5.1 Suppose that the assumptions of Theorem 3.5.1 hold with V1(t, U) ≡ 0, g1(t, w) ≡ 0. Suppose further that
D+ Vη (t, U ) ≤ −c[w(t, U )]+g2(t, Vη (t, U )), (t, U ) ∈ R+ ×S(ρ)∩S c (η), (3.5.13)
where w ∈ C[R+ × S(ρ), R+ ], w(t, U ) ≥ b(kU k), c, b ∈ K and g2 (t, w) is nondecreasing in w. Then, the trivial solution of (3.2.1) is uniformly asymptotically
stable.
Proof The trivial solution of (3.2.1) is uniformly stable by Remark 3.5.1 in the
present case. Hence taking ε = ρ and designating δ0 = δ(ρ), we have
kW0 k < δ0 implies kU (t)k < ρ, t ≥ t0.
To prove uniform attractivity, let 0 < ε < ρ be given. Let δ = δ(ε) > 0 be the number relative to ε in uniform stability. Choose T = b(ρ)/c(δ) + 1. Then we shall show that there exists a t∗ ∈ [t0, t0 + T] such that w(t∗, U(t∗)) < b(δ) for any solution U(t) of (3.2.1) with kW0k < δ0.
solution U (t) of (3.2.1) with kW0k < δ0 .
If this is not true, w(t, U (t)) ≥ b(δ), t ∈ [t0, t0 +T ]. Now using the assumption
(3.5.13) and arguing as in Corollary 3.2.3, we get
Z t0 +T
0 ≤ Vη (t0 + T, U (t0 + T )) ≤ r2(t0 + T, t0 , w20) −
w(s, U (s)) ds.
t0
This yields, since r2 (t, t0, w20) < b(ρ) and the choice of T ,
0 ≤ Vη (t0 + T, U (t0 + T )) ≤ b(ρ) − c(δ)T < 0,
which is a contradiction. Hence there exists a t∗ ∈ [t0, t0 + T] satisfying
w(t∗, U(t∗)) < b(δ), which implies kU(t∗)k < δ. Consequently, it follows by
uniform stability that
kW0 k < δ0 implies kU (t)k < ε, t ≥ t0 + T,
and the proof is complete.
3.6 Criteria for Boundedness
We shall, in this section, investigate the boundedness of solutions of the set
differential equation
DH U = F (t, U ), U (t0) = U0 ∈ Kc (Rn),
(3.6.1)
where F ∈ C[R+ ×Kc (Rn ), Kc (Rn )]. Corresponding to the definitions of various
stability notions given in section 3.4, we also have boundedness concepts, which
we define below.
Definition 3.6.1 The solution of (3.6.1) is said to be
(B1) equi-bounded, if for any α > 0 and t0 ∈ R+ , there exists a β = β(t0 , α) >
0 such that
kW0 k < α implies kU (t)k < β, t ≥ t0 ;
(B2) uniform-bounded, if β in (B1)does not depend on t0 ;
(B3) quasi-equi-ultimately bounded for a bound B if for each α ≥ 0, t0 ∈ R+ ,
there exists a B > 0 and a T = T (t0 , α) > 0 such that
kW0k < α implies kU (t)k < B, t ≥ t0 + T.
(B4) quasi-uniform ultimately bounded if T in (B3) is independent of t0 ;
(B5) equi-ultimately bounded, if (B1) and (B3) hold simultaneously;
(B6) uniform ultimately bounded if (B2) and (B4) hold simultaneously;
(B7) equi-Lagrange stable if (B1) and (S3) hold;
(B8) uniformly Lagrange stable if (B2) and (S4) hold.
Using the comparison results of section 3.2, we shall prove simple boundedness results.
Theorem 3.6.1 Assume that
(i) V ∈ C[R+ × Kc (Rn ), R+ ], | V (t, U1) − V (t, U2 ) |≤ LD[U1 , U2 ], L > 0,
and for (t, U ) ∈ R+ × Kc (Rn), D+ V (t, U ) ≤ 0;
(ii) b(kU k) ≤ V (t, U ) ≤ a(t, kU k) , for (t, U ) ∈ R+ × Kc (Rn ) where b, a(t, .) ∈
K = [σ ∈ C[R+ , R+ ] : σ(ω) is increasing in ω and σ(w) → ∞ as w → ∞].
Then, (B1) holds.
Proof Let α > 0 and t0 ∈ R+ be given. Choose β = β(t0, α) such that
a(t0, α) < b(β).
(3.6.2)
With this β, (B1) holds. If this is not true, there would exist a solution
U (t) = U (t, t0, W0) of (3.6.1) and a t1 > t0 such that
kU (t1)k = β
and
kU (t)k ≤ β, t0 ≤ t ≤ t1 .
Assumption (i) and Corollary 3.2.1 show that
V (t, U (t)) ≤ V (t0 , W0),
t 0 ≤ t ≤ t1 .
As a result, condition (ii) and (3.6.2) yield
b(β)
= b(kU (t1 )k) ≤ V (t1, U (t1)) ≤ V (t0, W0 )
≤ a(t0 , kW0k) < a(t0 , α) < b(β).
This contradiction proves (B1) and we are done.
For uniform boundedness, the following result is obtained under weaker assumptions.
Theorem 3.6.2 Assume that
(i) V ∈ C[R+ × S c (ρ), R+ ], where ρ may be large; |V (t, U1 ) − V (t, U2 )| ≤
LD[U1 , U2 ], and for (t, U ) ∈ R+ × S c (ρ),
D+ V (t, U ) ≤ 0;
(ii) b(kU k) ≤ V (t, U ) ≤ a(kU k) , for (t, U ) ∈ R+ ×S c (ρ) where a, b ∈ K, which
are defined only on [ρ, ∞).
Then, (B2) holds.
Proof The proof is similar to the proof of Theorem 3.6.1 except that the choice
of β is now made so that a(α) < b(β) and consequently β is independent of t0 .
Also, α > ρ for the proof since the assumptions are only for S c (ρ). However, if
0 < α ≤ ρ, we can take β = β(ρ) and again the proof follows.
We shall give a typical result that offers conditions for equi-ultimate boundedness, that is for (B5).
Theorem 3.6.3 Let the assumptions of Theorem 3.6.1 hold except that we
strengthen the estimate on D+ V (t, U ) as
D+ V (t, U ) ≤ −ηV (t, U ), η > 0, (t, U ) ∈ R+ × Kc (Rn ),
(3.6.3)
and suppose that condition (ii) holds for kU k ≥ B. Then (B5) holds.
Proof Clearly (B1) is obtained from Theorem 3.6.1. Hence
kW0 k < α implies kU (t)k < β, t ≥ t0 .
Now (3.6.3) yields the estimate
V (t, U (t)) ≤ V (t0 , W0)e−η(t−t0 ) ,
t ≥ t0 .
(3.6.4)
Let T = (1/η) ln[a(t0, α)/b(B)] and suppose that, for some t ≥ t0 + T, kU(t)k ≥ B. Then we get from (3.6.4)
b(B) ≤ b(kU(t)k) ≤ V(t, U(t)) < a(t0, α)e^{−ηT} = b(B).
This contradiction proves (B5) and the proof is complete.
Finally, we shall provide a result on nonuniform boundedness property using
the method of perturbing Lyapunov functions.
Theorem 3.6.4 Assume that
(i) ρ > 0, V1 ∈ C[R+ × S(ρ), R+ ], V1 is bounded for (t, U ) ∈ R+ × ∂S(ρ), and
|V1(t, U1) − V1(t, U2)| ≤ L1 D[U1, U2], L1 > 0,
D+ V1(t, U) = lim sup_{h→0+} (1/h)[V1(t + h, U + hF(t, U)) − V1(t, U)] ≤ g1(t, V1), (t, U) ∈ R+ × S c(ρ),
where g1 ∈ C[R2+, R];
(ii) V2 ∈ C[R+ × S c (ρ), R+ ],
b(kU k) ≤ V2 (t, U ) ≤ a(kU k),
a, b ∈ K,
D+ V1 (t, U ) + D+ V2(t, U ) ≤ g2 (t, V1(t, U ) + V2 (t, U )),
g2 ∈ C[R2+ , R],
(iii) the scalar differential equations
w1′ = g1(t, w1), w1(t0) = w10 ≥ 0, (3.6.5)
and
w2′ = g2(t, w2), w2(t0) = w20 ≥ 0, (3.6.6)
are equi-bounded and uniformly bounded respectively.
Then the system (3.6.1) is equi-bounded.
Proof Let B1 > ρ and t0 ∈ R+ be given. Let
α1 = α1(t0, B1) = max{α0, α∗},
where α0 = max{V1(t0, W0) : W0 ∈ cl{S(B1) ∩ S c(ρ)}}
and α∗ ≥ V1(t, U) for (t, U) ∈ R+ × ∂S(ρ).
Since equation (3.6.5) is equi-bounded, given α1 > 0 and t0 ∈ R+, there exists a β0 = β(t0, α1) such that
w1(t, t0, w10) < β0 , t ≥ t0 ,
(3.6.7)
provided w10 < α1, where w1(t, t0, w10) is any solution of (3.6.5). Let α2 = a(B1) + β0; then the uniform boundedness of equation (3.6.6) yields that
w2(t, t0 , w20) < β1 (α2), t ≥ t0,
(3.6.8)
provided w20 < α2, where w2 (t, t0, w20) is any solution of (3.6.6). Choose B2
satisfying
b(B2 ) > β1 (α2).
(3.6.9)
We now claim that W0 ∈ S(B1 ) implies that
U (t, t0, W0) ∈ S(B2 ),
for t ≥ t0 , where U (t, t0, W0 ) is any solution of (3.6.1).
If it is not true, there exists a solution U(t, t0, W0) of (3.6.1) with W0 ∈ S(B1), such that for some t∗ > t0, kU(t∗, t0, W0)k = B2. Since B1 > ρ, there
are two possibilities to consider:
(1) U (t, t0, W0 ) ∈ S c (ρ) for t ∈ [t0, t∗ ];
(2) there exists a t̄ ≥ t0 such that U(t̄, t0, W0) ∈ ∂S(ρ) and U(t, t0, W0) ∈ S c(ρ) for t ∈ [t̄, t∗].
If (1) holds, we can find t1 > t0 such that
U(t1, t0, W0) ∈ ∂S(B1), U(t∗, t0, W0) ∈ ∂S(B2), and U(t, t0, W0) ∈ S c(B1), t ∈ [t1, t∗]. (3.6.10)
Setting m(t) = V1 (t, U (t, t0, W0)) + V2 (t, U (t, t0, W0)) for t ∈ [t1, t∗ ], and then
using Theorem 3.2.1, we can obtain the differential inequality
D+ m(t) ≤ g2 (t, m(t)),
t ∈ [t1, t∗ ],
m(t) ≤ γ2 (t, t1, m(t1 )),
t ∈ [t1, t∗],
and so,
where γ2 (t, t1, v0) is the maximal solution of (3.6.6) with γ2 (t1 , t1, v0) = v0.
Thus,
V1 (t∗ , U (t∗ , t0, W0)) + V2 (t∗ , U (t∗, t0 , W0))
≤ γ2 (t∗ , t1, V1(t1 , U (t1, t0, W0)) + V2 (t1 , U (t1, t0, W0))).
(3.6.11)
Similarly, we also have
V1 (t1 , U (t1, t0, W0)) ≤ γ1 (t1 , t0, V1(t0 , W0)),
where γ1 (t, t0, u0) is the maximal solution of (3.6.5).
(3.6.12)
Set w10 = V1 (t0 , W0) < α1. Then
V1(t1, U(t1, t0, W0)) ≤ γ1(t1, t0, V1(t0, W0)) ≤ β0,
since (3.6.7) holds.
Furthermore, V2 (t1, U (t1 , t0, W0)) ≤ a(B1 ). Consequently, we have
w20 = V1 (t1 , U (t1, t0, W0)) + V2 (t1 , U (t1, t0, W0)) ≤ β0 + a(B1 ) = α2. (3.6.13)
Combining (3.6.8),(3.6.9),(3.6.10) and (3.6.13), we obtain
b(B2 ) ≤ m(t∗ ) ≤ γ(t∗ ) ≤ β1 (α2) < b(B2 ),
(3.6.14)
which is a contradiction.
If case (2) holds, we also arrive at the inequality (3.6.11), where t1 > t̄ satisfies (3.6.10). We then have, in place of (3.6.12), the relation
V1(t1, U(t1, t0, W0)) ≤ γ1(t1, t̄, V1(t̄, U(t̄, t0, W0))).
Since U(t̄, t0, W0) ∈ ∂S(ρ) and V1(t̄, U(t̄, t0, W0)) ≤ α∗ ≤ α1, arguing as before, we get the contradiction (3.6.14). This proves that for any given B1 > ρ, t0 > 0, there exists a B2 such that
W0 ∈ S(B1) implies U(t, t0, W0) ∈ S(B2), t ≥ t0.
For B1 < ρ, we set B2 (t0 , B1) = B2 (t0 , ρ) and hence the proof is complete.
3.7 Set Differential Systems
In this section we shall attempt to study the set differential system, given by
DH U = F (t, U ),
U (t0) = U0 ,
(3.7.1)
where F ∈ C[R+ × Kc (Rn)N , Kc (Rn)N ], U ∈ Kc (Rn )N , Kc (Rn )N = (Kc (Rn ) ×
Kc (Rn ) × ..... × Kc (Rn), N times), U = (U1 , ...., UN ) such that for each i, 1 ≤
i ≤ N, Ui ∈ Kc (Rn). Note also U0 ∈ Kc (Rn )N .
We have the following two possibilities to measure the new variables U, U0, F.
(1) Define D0[U, V] = Σ_{i=1}^N D[Ui, Vi], where U, V ∈ Kc(Rn)N, and employ the metric space (Kc(Rn)N, D0).
(2) Define D̃ : Kc(Rn)N × Kc(Rn)N → RN+ such that
D̃[U, V] = (D[U1, V1], D[U2, V2], . . . , D[UN, VN]),
and employ the generalized metric space (Kc(Rn)N, D̃).
If we utilize option (1) above, the assumption (2.2.4) of Theorem 2.2.1 appears as
D0[F(t, U), F(t, V)] = Σ_{i=1}^N D[fi(t, U), fi(t, V)] ≤ g(t, D0[U, V]). (3.7.2)
On the other hand, if we choose option (2) then the assumption (2.2.4) takes
the form
D̃[F (t, U ), F (t, V )] ≤ G(t, D̃[U, V ]),
(3.7.3)
where G ∈ C[R+ × RN , RN ].
In this case, condition (2.2.4) reduces to
D̃[F(t, U), F(t, V)] ≤ S D̃[U, V], (3.7.4)
where S = (Sij ) is an N × N matrix with Sij ≥ 0, for all i, j, which corresponds
to the generalized contractive condition. Of course, the matrix S needs to satisfy
a suitable condition, that is , for some k > 1, S k must be an A−matrix, which
means I − S k is positive definite, where I is the identity matrix. For details of
generalized spaces and contraction mapping theorem in this set up, see Bernfeld
and Lakshmikantham [1].
Moreover, in order to arrive at the corresponding estimate (3.2.4) of Theorem
3.2.1, for example , one is required to utilize the corresponding theory of systems
of differential inequalities, which demands that G(t, w) have the quasi-monotone
property, which is defined as follows:
w1 ≤ w2 and w1i = w2i for some i, 1 ≤ i ≤ N,
implies
Gi(t, w1) ≤ Gi(t, w2),
w1 , w2 ∈ RN .
If G(t, w) = Aw, where A is an N × N matrix, then the quasi-monotone property reduces to requiring aij ≥ 0, i ≠ j.
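The linear case is easy to test numerically: the sketch below (with an assumed matrix A, not from the text) checks the off-diagonal sign condition and spot-checks the quasi-monotone property from its definition.

```python
import numpy as np

def is_quasimonotone_linear(A, tol=0.0):
    """For G(t, w) = A w, the quasi-monotone property holds exactly when all
    off-diagonal entries of A are nonnegative (the criterion stated above)."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    return np.all(off >= -tol)

# Illustrative matrix (assumed data, not from the text).
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])
print(is_quasimonotone_linear(A))      # True: off-diagonal entries are >= 0

# Spot check of the definition: if w1 <= w2 componentwise and w1[i] == w2[i],
# then (A w1)[i] <= (A w2)[i].
w1, w2, i = np.array([1.0, 0.5]), np.array([1.0, 2.0]), 0
print((A @ w1)[i] <= (A @ w2)[i])      # True
```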
The method of vector Lyapunov-like functions has been very effective in the
investigation of the qualitative properties of large-scale differential systems.
We shall extend this technique to set differential systems (3.7.1) where, as we
shall see, both metrics described above are very useful. For this purpose, let us
prove the following comparison result in terms of vector Lyapunov-like functions
relative to the set differential system (3.7.1). We note that the inequalities
between vectors in RN are to be understood as componentwise.
Theorem 3.7.1 Assume that V ∈ C[R+ × Kc(Rn)N, RN+],
|V(t, U1) − V(t, U2)| ≤ A D̃[U1, U2],
where A is an N × N matrix with nonnegative elements, and for (t, U) ∈ R+ × Kc(Rn)N,
D+ V(t, U) ≤ G(t, V(t, U)), (3.7.5)
where G ∈ C[R+ × RN+, RN]. Suppose further that G(t, w) is quasi-monotone in w for each t ∈ R+ and r(t) = r(t, t0, w0) is the maximal solution of
w′ = G(t, w), w(t0) = w0 ≥ 0, (3.7.6)
existing for t ≥ t0. Then
V(t, U(t)) ≤ r(t), t ≥ t0, (3.7.7)
where U(t) = U(t, t0, W0) is any solution of (3.7.1) existing for t ≥ t0.
Proof Let U (t) be any solution of (3.7.1) existing for t ≥ t0 .
Define m(t) = V (t, U (t)) so that m(t0 ) = V (t0, W0 ) ≤ w0 . Now for small
h > 0, we have, in view of Lipschitz conditions,
m(t + h) − m(t)
=
V (t + h, U (t + h)) − V (t, U (t))
≤
AD̃[U (t + h), U (t) + hF (t, U (t))]
+V (t + h, U (t) + hF (t, U (t))) − V (t, U (t)).
It follows therefore that
D+ m(t) = lim sup_{h→0+} (1/h)[m(t + h) − m(t)]
≤ D+ V(t, U(t)) + A lim sup_{h→0+} (1/h) D̃[U(t + h), U(t) + hF(t, U(t))].
Since DH U is assumed to exist, we see that U (t + h) = U (t) + Z(t) where
Z(t) = Z(t, h) is the Hukuhara difference for small h > 0. Hence utilizing the
properties of D̃[U, V ] we obtain,
D̃[U(t + h), U(t) + hF(t, U(t))] = D̃[U(t) + Z(t), U(t) + hF(t, U(t))]
= D̃[Z(t), hF(t, U(t))]
= D̃[U(t + h) − U(t), hF(t, U(t))].
As a result, we get
(1/h) D̃[U(t + h), U(t) + hF(t, U(t))] = D̃[(U(t + h) − U(t))/h, F(t, U(t))]
and consequently
lim sup_{h→0+} (1/h) D̃[U(t + h), U(t) + hF(t, U(t))] = lim sup_{h→0+} D̃[(U(t + h) − U(t))/h, F(t, U(t))] = D̃[DH U(t), F(t, U(t))] = 0,
since U (t) is a solution of (3.7.1). We therefore have the vectorial differential
inequality,
D+ m(t) ≤ G(t, m(t)), m(t0 ) ≤ w0 , t ≥ t0,
which by the theory of differential inequalities for systems (Lakshmikantham
and Leela [1]) yields
m(t) ≤ r(t), t ≥ t0 ,
proving the claimed estimate (3.7.7).
The following corollary of Theorem 3.7.1 is interesting.
Corollary 3.7.1 The function G(t, w) = Aw, where A is an N × N matrix
satisfying aij ≥ 0, i 6= j, is admissible in Theorem 3.7.1 and yields the estimate
V(t, U(t)) ≤ V(t0, W0) e^{A(t−t0)}, t ≥ t0.

3.8 The Method of Vector Lyapunov Functions
We shall prove a typical result that gives sufficient conditions in terms of vector
Lyapunov-like functions for the stability properties of the trivial solution of the
set differential system (3.7.1).
Theorem 3.8.1 Assume that
(i) G ∈ C[R+ × RN+, RN], G(t, 0) ≡ 0 and G(t, w) is quasi-monotone nondecreasing in w for each t ∈ R+;
(ii) V ∈ C[R+ × S(ρ), RN+], |V(t, U1) − V(t, U2)| ≤ A D̃[U1, U2], where A is a nonnegative N × N matrix and the function
V0(t, U) = Σ_{i=1}^N Vi(t, U) (3.8.1)
satisfies
b(D0[U, θ]) ≤ V0(t, U) ≤ a(D0[U, θ]), a, b ∈ K;
(iii) F ∈ C[R+ × S(ρ), Kc (Rn )N ], F (t, θ) ≡ θ and
D+ V (t, U ) ≤ G(t, V (t, U )),
(t, U ) ∈ R+ × S(ρ),
where S(ρ) = [U ∈ Kc (Rn)N : D0 [U, θ] < ρ].
Then, the stability properties of the trivial solution of (3.7.6) imply the corresponding stability properties of the trivial solution of (3.7.1).
Proof We shall prove only equi-asymptotic stability of the trivial solution of
(3.7.1). For this purpose, let us first prove equi-stability . Let 0 < ε < ρ
and t0 ∈ R+ , be given. Assume that the trivial solution of (3.7.6) is equiasymptotically stable. Then, it is equi-stable. Hence given b(ε) > 0 and t0 ∈ R+ ,
there exists a δ1 = δ1(t0, ε) > 0 such that
Σ_{i=1}^N wi0 < δ1 implies Σ_{i=1}^N wi(t, t0, w0) < b(ε), t ≥ t0, (3.8.2)
where w(t, t0, w0) is any solution of (3.7.6). Choose w0 = V (t0 , W0) and a
δ = δ(t0, ε) > 0 satisfying
a(δ) < δ1 .
(3.8.3)
Let D0 [W0, θ] < δ. Then, we claim that D0 [U (t), θ] < ε, t ≥ t0 , for any solution
U (t) = U (t, t0, W0) of (3.7.1). If this is not true, there would exist a solution
U (t) of (3.7.1) with D0 [W0, θ] < δ and a t1 > t0 such that
D0 [U (t1), θ] = ε and D0 [U (t), θ] ≤ ε < ρ, t0 ≤ t ≤ t1 .
(3.8.4)
Hence we have by Theorem 3.7.1,
V (t, U (t)) ≤ r(t, t0 , w0),
t 0 ≤ t ≤ t1 ,
(3.8.5)
where r(t, t0, w0) is the maximal solution of (3.7.6). Since
V0 (t0, W0 ) ≤ a(D0 [W0 , θ]) < a(δ) < δ1 ,
the relations (3.8.2), (3.8.3), (3.8.4) and (3.8.5) yield
b(ε) ≤ V0 (t1, U (t1 )) ≤ r0 (t1, t0, w0) < b(ε),
where r0(t, t0, w0) = Σ_{i=1}^N ri(t, t0, w0). This contradiction proves that the trivial solution of (3.7.1) is equi-stable.
Suppose next that the trivial solution of (3.7.6) is quasi-equi-asymptotically stable. Set ε = ρ and δ̂0 = δ(t0, ρ). Let 0 < η < ρ. Then given b(η) and t0 ∈ R+, there exist δ1∗ = δ1∗(t0, η) > 0 and T = T(t0, η) > 0 satisfying
Σ_{i=1}^N wi0 < δ1∗ implies Σ_{i=1}^N wi(t, t0, w0) < b(η), t ≥ t0 + T. (3.8.6)
Choosing w0 = V (t0 , W0 ) as before, we find δ0∗ = δ0 (t0 ) > 0 such that a(δ0∗ ) <
δ1∗ . Let δ0 = min(δ1∗ , δ0∗) and D0 [W0 , θ] < δ0 . This implies D0 [U (t), θ] < ρ,
t ≥ t0 and therefore the estimate (3.8.5) holds for all t ≥ t0.
Suppose now that there is a sequence {tk }, tk ≥ t0 +T, tk → ∞ as k → ∞,
and η ≤ D0 [U (tk ), θ], where U (t) is any solution of (3.7.1) with D0 [W0 , θ] < δ0 .
In view of (3.8.6), this leads to the contradiction
b(η) ≤ V0(tk , U (tk )) ≤ r0(tk , t0, w0) < b(η).
Hence the trivial solution of (3.7.1) is equi-asymptotically stable and the proof
is complete.
In order to apply the method of vector Lyapunov functions to concrete problems, it is necessary to know the properties of solutions of the comparison system
(3.7.6), which is difficult in general, except when G(t, w) = Aw, where A is a
quasi-monotone N × N stability matrix. Hence we shall present some simple
and useful techniques to deal with this problem.
We shall first prove a result which reduces the study of the properties of
solutions of (3.7.6) to that of a scalar differential equation
v′ = G0(t, v), v(t0) = v0 ≥ 0, (3.8.7)
where G0 ∈ C[R2+, R]. Specifically, we have the following result.
Lemma 3.8.1 Assume that L ∈ C¹[R+, RN+], G ∈ C[R+ × RN+, RN], G0 ∈ C[R2+, R] and G, G0 are smooth enough to assure the existence and uniqueness of solutions for t ≥ t0 of (3.7.6) and (3.8.7) respectively. Suppose further that for (t, v) ∈ R2+,
G(t, L(v)) ≤ (dL(v)/dv) G0(t, v).
Then w0 ≤ L(v0) implies
w(t, t0, w0) ≤ L(v(t, t0, v0)), t ≥ t0, (3.8.8)
where w(t, t0, w0), v(t, t0, v0) are the solutions of (3.7.6) and (3.8.7) respectively.
Proof Set m(t) = L(v(t, t0, v0)), so that m(t0) = L(v0) ≥ w0 and
m′(t) = (dL(v(t, t0, v0))/dv) G0(t, v(t, t0, v0)) ≥ G(t, L(v(t, t0, v0))) = G(t, m(t)).
Hence, by the comparison Theorem 1.4.1 in Lakshmikantham and Leela [1], we get the stated result in view of the uniqueness of solutions.
Let us give an example to illustrate Lemma 3.8.1. Suppose that G1 = −2w1^{3/2}, G2 = −2w2^3 + 2w1w2^2, so that
w1′ = −2w1^{3/2}, w2′ = −2w2^3 + 2w1w2^2. (3.8.9)
Choosing L1(v) = (3/5)v^{3/2}, L2(v) = v and
G0(t, v) = −(2/5)v^3 for 0 ≤ v < 1, G0(t, v) = −(2/5)v^{5/2} for 1 ≤ v,
the assumptions of Lemma 3.8.1 are satisfied. Clearly, the trivial solution of (3.8.7) is uniformly asymptotically stable and therefore the trivial solution of (3.8.9) is also uniformly asymptotically stable.
Lemma 3.8.2 Assume that Q ∈ C¹[RN+, R+], G ∈ C[R+ × RN+, RN], G0 ∈ C[R2+, R] and for (t, w) ∈ R+ × RN+,
(dQ(w)/dw) G(t, w) ≤ G0(t, Q(w)). (3.8.10)
Then, any solution w(t) = w(t, t0, w0) of (3.7.6) existing for t ≥ t0 satisfies
Q(w(t)) ≤ v(t), t ≥ t0,
where v(t) = v(t, t0, v0) is the maximal solution of (3.8.7) existing for t ≥ t0, provided Q(w0) ≤ v0.
Proof Let w(t) = w(t, t0, w0) be any solution of (3.7.6) existing for t ≥ t0 .
Set p(t) = Q(w(t)). Then, we have
p′(t) = (dQ(w(t))/dw) G(t, w(t)) ≤ G0(t, Q(w(t))) = G0(t, p(t)),
and p(t0) ≤ v0. Hence by Theorem 1.4.1 in Lakshmikantham and Leela [1],
it follows that p(t) ≤ v(t), t ≥ t0 , where v(t) is the maximal solution of (3.8.7).
Hence the proof is complete.
As an example, consider the case G(t, w) = Aw, where A is an N × N matrix with aij ≥ 0, i ≠ j, and A is quasi-diagonally dominant, that is, for some di > 0,
di |aii| > Σ_{j=1, j≠i}^N dj |aij|. (3.8.11)
Choosing Q(w) = Σ_{i=1}^N di wi, we see that (3.8.10) is satisfied by G0(t, v) = −γv for some γ > 0 in view of (3.8.11). Consequently, the trivial solution of (3.8.7) is exponentially asymptotically stable, which implies that the trivial solution of (3.7.6) has the same property.
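The following sketch works this reduction out numerically for an assumed 2 × 2 matrix A = [[−3, 1], [1, −3]] and weights d = (1, 1) (illustrative data, not from the text): with Q(w) = w1 + w2, inequality (3.8.10) holds for G0(t, v) = −2v, and the computed Q(w(t)) indeed stays below the comparison solution v(t).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Concrete check of the reduction in Lemma 3.8.2 (assumed data).
# A has nonnegative off-diagonal entries and is quasi-diagonally dominant with
# d = (1, 1); with Q(w) = w1 + w2 the inequality (3.8.10) holds for
# G0(t, v) = -2 v, so Q(w(t)) <= v(t) whenever Q(w0) <= v0.

A = np.array([[-3.0, 1.0], [1.0, -3.0]])
w0 = np.array([1.0, 2.0])
v0 = w0.sum()                                  # Q(w0) <= v0

w = solve_ivp(lambda t, w: A @ w, (0.0, 5.0), w0, dense_output=True)
for t in np.linspace(0.0, 5.0, 6):
    Q = w.sol(t).sum()                         # Q(w(t)) = w1 + w2
    v = v0 * np.exp(-2.0 * t)                  # solution of v' = -2 v
    assert Q <= v + 1e-8
    print(f"t = {t:3.1f}  Q(w(t)) = {Q:9.5f}  v(t) = {v:9.5f}")
```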
3.9 Nonsmooth Analysis
In the previous sections of this chapter, we developed several results in stability
theory by utilizing continuous Lyapunov-like functions and investigated analogous results parallel to standard Lyapunov stability theory for set differential
equations.
In this and the next section, we concern ourselves with Lyapunov stability theory employing Lyapunov-like functions which are only lower semicontinuous (lsc), and this requires introducing the concepts and results of nonsmooth analysis, extended suitably to the present set up. We have already sketched in Sections 2.8 and 2.9 some results related to the existence of Euler solutions and flow invariance in terms of proximal normals. There is an intimate connection
between proximal normal theory and subdifferentials of lower semicontinuous
functions that we develop before proceeding to build Lyapunov theory in the
framework of lsc functions and set differential equations. We shall embark on
this aspect in this section and consider Lyapunov’s theory in the following section.
Let us start with proximal normals again, since we did not provide in Section 2.9 all the necessary tools needed for our purpose.
Recall that D[A, θ] = kAk = supa∈A kak, and
kA + Bk2 ≤ kAk2 + kBk2 + 2 < A, B >,
(3.9.1)
where for A, B ∈ Kc (Rn ),
< A, B > = sup{(a · b) : a ∈ A, b ∈ B}.
(3.9.2)
Let f : Kc (Rn) → R. Let U, V ∈ Kc (Rn) be such that there exists a Z ∈ Kc (Rn)
satisfying V = U + Z. Then V − U, the Hukuhara difference of V and U, exists.
If there exists an element A(U ) ∈ Kc (Rn ) such that
|f(V ) − f(U )− < A(U ), Z > | ≤ ε kZk, ε > 0,
where < A(U ), Z > is defined as in (3.9.2), then we say that fU (U ) = A(U ) is
the derivative of f at U . We note that fU : Kc (Rn ) → Kc (Rn ).
Consider next, F (U ) = fU (U ). For A, B ∈ Kc (Rn ), we first define (A, B) as
an element of Rn whose ith component is given by
(A, B)i = sup{(aibi ) : a ∈ A, b ∈ B}, 1 ≤ i ≤ n.
Suppose there exists a map f˜ defined for each U ∈ Kc (Rn ), mapping Kc (Rn)
into K̃c (Rn ) = {the set of compact, convex subsets in Kc (Rn )}.
Then, we define
D[F(V), F(U) + << f̃(U), Z >>] ≤ ε kZk, ε > 0,
where
<< f˜(U ), Z >> = {(B, Z) : B ∈ f˜(U ) and f˜ : Kc (Rn ) → K̃c (Rn )}.
Then, we say that FU (U ) = f˜(U ) is the derivative of F at U .
With these preliminaries, we consider, as in Section 2.8, the set differential
equation
DH U = F (t, U ), U (t0 ) = U0 ∈ Kc (Rn ),
(3.9.3)
where F : [t0, ∞)×Kc (Rn) → Kc (Rn ) is any function, and the scalar differential
equation
u′ = g(t, u), u(t0) = u0 ≥ 0, (3.9.4)
where g ∈ C[R2+, R+], as in (2.8.4).
Let Ω ⊂ Kc (Rn ) be a nonempty, closed set. Let U ∈ Kc (Rn) be not lying
in Ω. Suppose that the Hukuhara difference U − S exists for every S ∈ Ω. That
is, for each S ∈ Ω, there exists a Z ∈ Kc (Rn ) such that U = S + Z. Suppose
further that there exists an element S ∈ Ω whose distance to U is minimal, that is,
D0[U, Ω] = kU − Sk = inf_{S0 ∈ Ω} kU − S0k. (3.9.5)
We call such an S ∈ Ω a projection of U onto Ω and denote the set of all such
elements by projΩ (U ). The element U − S will be called the proximal normal
direction to Ω at S. Any nonnegative multiple ξ = t(U − S), t ≥ 0, we call a
proximal normal to Ω at S. The set of all ξ obtained in this manner is said to
be the proximal normal cone to Ω at S and is denoted by NΩP(S). If S ∈ Ω is such that S ∉ projΩ(U) for any U ∈ Kc(Rn) not in Ω, then we set NΩP(S) = {θ}. When S ∉ Ω, NΩP(S) is not defined.
Suppose that S ∈ projΩ(U). Then kU − S̃k ≥ kU − Sk for all S̃ ∈ Ω. As a result, using (3.9.1), we have
kU − S̃k2 = kU − S + S − S̃k2 ≤ kU − Sk2 + kS − S̃k2 + 2 < U − S, S − S̃ >,
which implies
< U − S, S̃ − S > ≤ (1/2) kS̃ − Sk2 for all S̃ ∈ Ω. (3.9.6)
However, any element C = U − S, satisfying (3.9.6) need not be such that
S ∈ projΩ (U ). Consequently, we set NΩP (S) = {θ}, for any ξ = t(U − S),
satisfying
< ξ, S̃ − S > ≤ σ kS̃ − Sk2 for all S̃ ∈ Ω,
(3.9.7)
where σ = σ(ξ, S) > 0, but S 6∈ projΩ (U ). Thus, we assume as an axiom, the
following proposition.
Proposition 3.9.1 For any given δ > 0,
exists a σ = σ(ξ, S) > 0, such that
< ξ, S̃ − S >
≤
σkS̃ − Sk2 ,
ξ ∈ NΩP (S), if and only if there
for all
S̃ ∈ Ω ∩ B(S, δ).
(3.9.8.)
It can be verified that S ∈ projΩ (U ) is equivalent to S ∈ projΩ (S + δ(U − S)).
We shall next consider the subgradients of lower semicontinuous(lsc) functions.
Let f : Kc(Rn) → (−∞, ∞] be a lsc function with dom(f) = {X ∈ Kc(Rn) : f(X) < ∞}. Suppose that (ζ, −λ) ∈ Kc(Rn) × R belongs to NPepi(f)(X, r). It can be verified that (i) λ ≥ 0, (ii) r = f(X) if λ > 0, and (iii) λ = 0 if r > f(X).
An element ζ ∈ Kc(Rn) is said to be the proximal subgradient of f at X ∈ dom(f) provided that (ζ, −1) ∈ Nepi(f)(X, f(X)), where epi(f) = {(X, r) ∈ Kc(Rn) × R : f(X) ≤ r}.
The set of all such ζ is denoted by ∂P f(X) and is referred to as the proximal subdifferential or P-subdifferential. Let us note that, because a cone is involved, if λ > 0 and (ζ, −λ) ∈ NPepi(f)(X, f(X)), then ζ/λ ∈ ∂P f(X).
Also, the following result concerns the approximation of horizontal proximal
normals to epigraphs, by nonhorizontal proximal normals, and is needed for our
later use.
Theorem 3.9.1 Let f : Kc(Rn) → R be a lsc function and let (θ, 0) ∈ Nepi(f)(X, f(X)). Then, for every ε > 0 there exist X′ ∈ X + εB and (ζ, −λ) ∈ Nepi(f)(X′, f(X′)) such that
λ > 0, | f(X′) − f(X) | < ε, k(θ, 0) − (ζ, −λ)k < ε. (3.9.9)
We are now in a position to prove the following proximal subgradient inequality.
Theorem 3.9.2 Let f : Kc (Rn ) → (−∞, ∞] be a lsc function and let X ∈
dom(f). Then, ζ ∈ ∂P f(X) if and only if there exist positive numbers σ and η such that
f(Y ) ≥ f(X)+ < ζ, Y − X > −σkY − Xk2
(3.9.10)
for all Y ∈ B(X, η).
Proof Suppose that (3.9.10) holds. We then have, by adding σ(α − f(X))2,
α − f(X) + σ[kY − Xk2 + (α − f(X))2 ] ≥ < ζ, Y − X >
for all α ≥ f(Y ). This implies,
< (ζ, −1), [(Y, α) − (X, f(X))] > ≤ σk(Y, α) − (X, f(X))k2
for all points (Y, α) ∈ epi(f) near (X, f(X)).
In view of Proposition 3.9.1, this implies,
(ζ, −1) ∈ NPepi(f)(X, f(X)).
Conversely, suppose that (ζ, −1) ∈ NPepi(f)(X, f(X)). Then there exists a δ > 0 such that
(X, f(X)) ∈ projepi(f)((X, f(X)) + δ(ζ, −1)).
This implies,
kδ(ζ, −1)k2 ≤ k[(X, f(X)) + δ(ζ, −1)] − (Y, α)k2,
for all (Y, α) ∈ epi(f).
Upon taking α = f(Y ), we get from the last inequality
δ 2 + δ 2kζk2 ≤ kX − Y + δζk2 + (f(X) − f(Y ) − δ)2 ,
which can be rewritten as
(f(Y ) − f(X) + δ)2 ≥ δ 2 + 2δ < ζ, Y − X > −kX − Y k2.
(3.9.11)
Clearly, the right-hand side of (3.9.11) is positive for all Y sufficiently near X, that is, Y ∈ B(X, η) for some η > 0. Since f is lsc, by shrinking η if necessary, we can make sure that
f(Y) − f(X) + δ > 0, for all Y ∈ B(X, η).
Consequently, taking square roots in (3.9.11) we get
f(Y) ≥ h(Y) = f(X) − δ + [δ2 + 2δ < ζ, Y − X > −kY − Xk2]^{1/2} (3.9.12)
for all Y ∈ B(X, η).
Considering the function h(Y ), we can calculate and show that H(X) =
hX (X) = ζ, and HX (X) is bounded by say 2σ > 0 in a neighbourhood of X.
Hence the function h satisfies the inequality, for some η > 0,
h(Y ) ≥ h(X)+ < ζ, Y − X > −σkY − Xk2 , for all Y ∈ B(X, η). (3.9.13)
But then, noting that f(X) = h(X), the relations (3.9.12) and (3.9.13) yield
f(Y) ≥ f(X) + < ζ, Y − X > −σkY − Xk2 (3.9.14)
for all Y ∈ B(X, η), which is (3.9.10) as desired. The proof is complete.
3.10 Lyapunov Stability Criteria
In this section, we plan to provide the Lyapunov stability criteria for Euler solutions of set differential equation (3.9.3). We begin with the following definition
of the Lyapunov function.
Let V : R+ × Kc (Rn ) → (−∞, ∞] be an lsc function with domain dom(V ) =
[(t, X) ∈ R+ × Kc (Rn ) : V (t, X) < ∞].
Definition 3.10.1 The pair (V, F ) is said to be weakly decreasing if for any U0
there exists an Euler solution U (t) = U (t, t0, U0) of (3.9.3) satisfying
V (t, U (t)) ≤ V (t0 , U0), t ≥ t0 ≥ 0.
One can easily verify that (V, F) is weakly decreasing if and only if (epi V, {1} × F(t, U) × {0}) is weakly invariant.
In order to deduce the Lyapunov theory of stability from the present set up,
we need a sufficient condition, which assures the weakly decreasing nature of
the system (V, F ).
Theorem 3.10.1 (V, F ) is weakly decreasing if for all (θ, ζ) ∈ ∂P V (t, Z), (t, Z) ∈
dom(V ), we have
θ + < F (t, Z), ζ > ≤ 0.
(3.10.1)
Proof Suppose that for (t, Z) ∈ dom(V ), (θ, ζ) ∈ ∂P V (t, Z), and (3.10.1)
holds. (V, F ) is weakly decreasing if and only if (epiV, {1} × F × {0}) is weakly
invariant.
To show that (epi V, {1} × F × {0}) is weakly invariant, it suffices to show that for any (θ, ζ, λ) ∈ N^P_{epi V}(t, Z, r),
< ({1}, F (t, Z), {0}), (θ, ζ, λ) > ≤ 0.
Let (θ, ζ, λ) ∈ N^P_{epi V}(t, Z, r). Then λ ≤ 0.
If λ < 0, then, since the proximal normal cone is a cone, (θ/(−λ), ζ/(−λ), −1) ∈ N^P_{epi V}(t, Z, r), which in turn leads to (θ/(−λ), ζ/(−λ)) ∈ ∂P V (t, Z). By hypothesis, we get θ/(−λ) + < F (t, Z), ζ/(−λ) > ≤ 0. This implies θ + < F (t, Z), ζ > ≤ 0, (t, Z) ∈ dom(V ), which is the required condition.
In case λ = 0, we have (θ, ζ, 0) ∈ N^P_{epi V}(t, Z, V (t, Z)). We invoke Theorem 3.9.1 to deduce the existence of sequences (θi , ζi , −εi ) ∈ N^P_{epi V}(ti , Zi , V (ti , Zi )), with εi > 0, and (ti , Zi , V (ti , Zi )) such that (θi , ζi , −εi ) → (θ, ζ, 0) and (ti , Zi , V (ti , Zi )) → (t, Z, V (t, Z)).
Then, as in the case λ < 0 above, θi + < F (ti , Zi ), ζi > ≤ 0. Since F is locally bounded, the sequence F (ti , Zi ) is bounded. Passing to a subsequence, we may suppose that F (ti , Zi ) converges to F (t, Z).
This in turn implies θ + < F (t, Z), ζ > ≤ 0 and hence the proof is complete.
We shall now consider the Lyapunov theory of stability, employing the lsc
functions V (t, X) so that we can utilize the set of proximal subdifferentials of
V , namely ∂P V (t, X), for providing sufficient conditions for Lyapunov stability
in the weak sense. We shall derive the weak stability results from Theorem
3.10.1. We assume that F (t, θ) ≡ θ so that we can discuss the weak stability of
X = θ. As in section 3.4, we assume that, given U0 , V0 ∈ Kc (Rn), the Hukuhara
difference U0 − V0 ≡ X0 exists, and we consider solutions
U (t) = U (t, t0, X0 ) or X(t) = X(t, t0 , X0) for stability purposes.
We list the following conditions concerning V :
(i) V : R+ × Kc (Rn ) → [0, ∞] is lsc with dom V = {(t, X) ∈ R+ × Kc (Rn ) :
V (t, X) < ∞} and V (t, X) is positive definite.
(ii) the sets [V ]q = {(t, X) ∈ R+ × Kc (Rn ) : V (t, X) ≤ q} are compact for
every q > 0.
(iii) θ + < F (t, X), ζ > −G(t, V (t, X)) ≤ 0 for (t, X) ∈ R+ × Kc (Rn ), (θ, ζ) ∈ ∂P V (t, X), where G ∈ C[R2+ , R], G(t, 0) ≡ 0, and G satisfies a nonlinear growth condition similar to that on F (see Theorem 2.8.1).
(iv) θ + < F (t, X), ζ > +W (t, X) ≤ 0 for (t, X) ∈ R+ × Kc (Rn ), (θ, ζ) ∈ ∂P V (t, X), where W ∈ C[R+ × Kc (Rn ), R+ ] and F is bounded on bounded sets. Here W (t, X) is positive definite and satisfies a nonlinear growth condition similar to that on F .
We can now prove the following result on weak stability.
Theorem 3.10.2 Assume that (i), (ii) and (iii) hold. Then the weak stability
properties of the scalar differential equation
w′ − G(t, w) = 0,  w(t0 ) = w0 = V (t0 , X0 ),  (3.10.2)
imply the corresponding weak stability properties of X = θ of (3.9.3).
Proof Let us define Q : R+ ×Kc (Rn )×R → (−∞, ∞] by Q(t, X, w) = V (t, X)−
w and the function F̃ (t, X, w) = F (t, X) × {−G(t, w)}. We observe that F̃ is
also an usc function satisfying the nonlinear growth condition similar to the one
satisfied by F .
Let us claim that the system (Q, F̃ ) is weakly decreasing. We wish to apply
Theorem 3.10.1 and so let (θ, ζ, η) ∈ ∂P Q(t, X, w).
Then, (θ, ζ) ∈ ∂P V (t, X) and η = −1.
The condition θ + < F (t, X), ζ > −G(t, V (t, X)) ≤ 0, provides the condition
that
θ + < (F (t, X), −G(t, V (t, X))), (ζ, −1) > ≤ 0,
which verifies the inequality of Theorem 3.10.1.
Hence we deduce the existence of an Euler solution (X(t), w(t)) corresponding to F̃ through (t0 , X0 , w0 ) satisfying
Q(t, X(t), w(t)) ≤ Q(t0 , X0 , w0 ) = V (t0 , X0 ) − w0 = 0,  t ≥ t0 ,
which reduces to the comparison principle
V (t, X(t)) ≤ w(t),  t ≥ t0 ,  (3.10.3)
where X(t) = X(t, t0 , X0 ) is a solution of (3.9.3), and w(t) is a solution of
(3.10.2).
Once we have the comparison principle given by (3.10.3), if we suppose that the trivial solution w = 0 of (3.10.2) possesses any weak stability property, it follows, employing the standard arguments (see Lakshmikantham and Leela [1] and Lakshmikantham, Leela and Martynyuk [1]), that the corresponding weak stability property of X = θ of (3.9.3) holds. Hence the proof is complete.
In a similar manner, one can prove the following result relative to the condition (iv).
Theorem 3.10.3 Assume that (i), (ii) and (iv) hold. Then X = θ of (3.9.3) is weakly asymptotically stable.
Proof In this case we set Q(t, X, y) = V (t, X) + y and F̃ (t, X, y) = F (t, X) × {W (t, X)} and, similar to the proof of Theorem 3.10.2, we can deduce from Theorem 2.8.1 the existence of a solution (X, y) corresponding to F̃ at (X0 , 0) such that
Q(t, X(t), y(t)) ≤ Q(t0 , X0 , 0) = V (t0 , X0 ),  t ≥ 0.
This in turn reduces to
V (t, X(t)) + ∫_0^t W (s, X(s)) ds ≤ V (t0 , X0 ),  t ≥ 0,  (3.10.4)
where X(t) is a solution of (3.9.3).
The weak stability of X = θ of (3.9.3) follows immediately from (3.10.4)
using (i) and (ii) in a straightforward way.
To prove kX(t)k → 0 as t → ∞, we observe that (3.10.4) implies that V (t, X(t)) and ∫_0^t W (s, X(s)) ds are bounded on R+ , as is X(t). Since F is bounded on bounded sets, we also have DH X(t) bounded, and as a consequence X(t) is globally Lipschitz. With this information, it is now standard to show that kX(t)k → 0 as t → ∞, for otherwise ∫_0^∞ W (s, X(s)) ds would diverge. Hence the proof is complete.
3.11 Notes and Comments
Lyapunov-like functions and needed comparison theorems including global existence described in Sections 3.2 and 3.3 are from Lakshmikantham, Leela and
Vatsala [2]. The stability criteria provided in Section 3.4, parallel to the original Lyapunov results for ODEs, are from Gnana Bhaskar and Vasundhara Devi
[1]. The idea of utilizing the concept of Hukuhara difference in choosing the
initial values suitably to delete any undesirable part of the solutions that may
be present in certain cases, is from Lakshmikantham, Leela and Vasundhara
Devi [1]. The example worked to demonstrate this idea is also from the above
mentioned paper. The example uses interval analysis, for which, see Moore [1]
and Markov [1].
The contents of Sections 3.5 and 3.6, where the method of perturbing Lyapunov functions is employed to prove nonuniform stability and boundedness results, are from Gnana Bhaskar and Vasundhara Devi [2]. For the idea of perturbing Lyapunov functions pertaining to ODEs, refer to Lakshmikantham and Leela [4]. See also Lakshmikantham, Leela and Martynyuk [1]. The introduction
of set differential systems and extension of the method of vector Lyapunov functions for such systems reported in Sections 3.7 and 3.8, is adapted from Gnana
Bhaskar and Vasundhara Devi [3]. For more details on the method of vector Lyapunov functions, see Lakshmikantham, Matrosov and Sivasundaram [1].
The criteria described in Sections 3.9 and 3.10 are taken from Gnana Bhaskar
and Lakshmikantham [3], where the ideas of nonsmooth analysis and weaker
lower semicontinuous functions are employed.
Chapter 4
Connection to Fuzzy Differential Equations
4.1 Introduction
When a real world problem is transferred into a deterministic IVP of ordinary
differential equations, namely,
x′ = f(t, x),  x(t0 ) = x0 ,
we cannot usually be sure that the model is perfect. If the underlying structure
of the model depends upon subjective choices, one way to incorporate these into
the model, is to utilize the aspect of fuzziness, which leads to the consideration
of fuzzy differential equations (FDEs). There exists a substantial literature on the
theory of FDEs. The intricacies involved in incorporating fuzziness into the
theory of ordinary differential equations pose a certain disadvantage and other
possibilities are being explored to address this problem. One of the approaches
is to transform FDEs into multivalued differential inclusions so as to employ
the existing theory of differential inclusions. Another approach is to connect
FDEs to SDEs and examine the interconnection between them. In this chapter
we shall be concerned with the latter approach, since the former framework is
already known, see Lakshmikantham and Mohapatra [1].
In Section 4.2, we provide a short account of the necessary preliminary material on fuzzy set theory, formulate FDEs and list the required known results
concerning local and global existence, uniqueness, continuous dependence of solutions and comparison results. Section 4.3 is devoted to the investigation of
Lyapunov stability theory through Lyapunov-like functions. For this purpose,
we develop a comparison result in terms of Lyapunov-like functions and then
give some simple stability results. We sketch an example to expose the difficulties involved in general and suggest a way out in those cases where there is a
problem, by suitably choosing the initial value in order to weed out the undesirable part of the solution. This may be considered parallel to partial stability
or conditional stability in ODEs. The interconnection of FDEs with SDEs is
explored in Section 4.4. We indicate also the alternative formulation of fuzzy
IVPs into a sequence of multivalued differential inclusions and the advantage of
the approach.
In Section 4.5, we continue to study the interconnection for the case when
the function involved in the SDE is only upper semicontinuous and prove some
results parallel to the continuous case considered in Section 4.4. Section 4.6
introduces impulses into FDEs and shows how the impulses help to overcome
the disadvantage that exists in the study of FDEs. Some important results
relative to impulsive fuzzy differential equations are proved in this section and
a familiar example is discussed to point out the advantage gained by adding
impulses. Hybrid fuzzy differential equations are considered in Section 4.7 and
the required structure is developed for the investigation of the stability of such
systems. Section 4.8 introduces another concept of differential equations in a
metric space which can be applied to study FDEs. Notes and comments are
given in Section 4.9.
4.2 Preliminaries
Let Kc (Rn ) denote the family of all nonempty, compact, convex subsets of Rn .
If α, β ∈ R and A, B ∈ Kc (Rn ), then
α(A + B) = αA + αB,
α(βA) = (αβ)A,
1A = A
and if α, β ≥ 0, then (α + β)A = αA + βA.
Let I = [t0, t0 + a], t0 ≥ 0 and a > 0 and E n be the set of all functions
u : Rn → [0, 1] such that u satisfies (i)-(iv) mentioned below :
(i) u is normal, that is , there exists an x0 ∈ Rn such that u(x0) = 1;
(ii) u is fuzzy convex , that is , for x, y ∈ Rn and 0 ≤ λ ≤ 1,
u(λx + (1 − λ)y) ≥ min{u(x), u(y)};
(iii) u is upper semicontinuous;
(iv) [u]0 = closure of {x ∈ Rn : u(x) > 0} is compact.
For 0 < α ≤ 1, we denote [u]α = {x ∈ Rn : u(x) ≥ α}. Then from (i)-(iv), it
follows that the α− level sets [u]α ∈ Kc (Rn ), for 0 ≤ α ≤ 1.
Let D[A, B] be the Hausdorff distance between the sets A, B ∈ Kc (Rn ). We
define
D0 [u, v] = sup_{0≤α≤1} D[[u]α , [v]α ],  (4.2.1)
which is a metric in E n , and (E n , D0) is a complete metric space.
We list the following properties of D0 [u, v], which are similar to D[A, B]
where A, B ∈ Kc (Rn) :
D0 [u + w, v + w] = D0 [u, v],
(4.2.2)
D0 [λu, λv] = |λ|D0[u, v],
(4.2.3)
D0 [u, v] ≤ D0 [u, w] + D0 [w, v],
(4.2.4)
for all u, v, w ∈ E n and λ ∈ R.
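As a quick illustration of the metric (4.2.1) and the properties above, the following sketch computes D0 for two symmetric triangular fuzzy numbers from their α-level sets, using the fact that the Hausdorff distance between intervals is the larger of the endpoint gaps; the helper names and the particular fuzzy numbers are illustrative assumptions.

```python
import numpy as np

def levels_triangular(center, spread, alphas):
    """alpha-level sets [center-(1-alpha)*spread, center+(1-alpha)*spread]
    of a symmetric triangular fuzzy number (an illustrative element of E^1)."""
    return center - (1 - alphas) * spread, center + (1 - alphas) * spread

def hausdorff_interval(a1, b1, a2, b2):
    """Hausdorff distance between the intervals [a1, b1] and [a2, b2]."""
    return np.maximum(np.abs(a1 - a2), np.abs(b1 - b2))

alphas = np.linspace(0.0, 1.0, 101)
u_lo, u_hi = levels_triangular(0.0, 1.0, alphas)   # fuzzy number centred at 0
v_lo, v_hi = levels_triangular(0.5, 2.0, alphas)   # fuzzy number centred at 0.5
D0 = np.max(hausdorff_interval(u_lo, u_hi, v_lo, v_hi))   # sup over alpha, as in (4.2.1)
print(D0)   # 1.5 here: the supremum is attained at alpha = 0, where the level sets are widest
```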
For x, y ∈ E n if there exists a z ∈ E n such that x = y +z, then z is called the
Hukuhara difference of x and y and is denoted by x − y. A mapping F : I → E n
is differentiable at t ∈ I if there exists a DH F (t) ∈ E n such that the limits
lim_{h→0+} [F (t + h) − F (t)]/h  and  lim_{h→0+} [F (t) − F (t − h)]/h  (4.2.5)
exist and are equal to DH F (t). Here the limits are taken in the metric space (E n , D0 ).
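For interval-valued level sets (the case E 1 ), the Hukuhara difference has a simple description: [a1 , a2 ] − [b1 , b2 ] exists exactly when a2 − a1 ≥ b2 − b1 , in which case it equals [a1 − b1 , a2 − b2 ]. A minimal sketch, with an illustrative helper name, encodes this check.

```python
def hukuhara_diff(a, b):
    """Hukuhara difference of intervals a = [a1, a2] and b = [b1, b2]:
    the interval c with a = b + c, if it exists (the width of a must be at
    least the width of b); returns None otherwise."""
    c = (a[0] - b[0], a[1] - b[1])
    return c if c[0] <= c[1] else None

print(hukuhara_diff((0.0, 3.0), (1.0, 2.0)))   # (-1.0, 1.0): exists, since [1,2] + [-1,1] = [0,3]
print(hukuhara_diff((1.0, 2.0), (0.0, 3.0)))   # None: the difference would have negative width
```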
Moreover, if F : I → E n is continuous, then it is integrable and
∫_{t0}^{t2} F = ∫_{t0}^{t1} F + ∫_{t1}^{t2} F,  t0 ≤ t1 ≤ t2 ≤ t0 + a.
Further, if F, G : I → E n are integrable and λ ∈ R, then the following properties of the integral hold:
∫ (F + G) = ∫ F + ∫ G;
∫ λF = λ ∫ F, λ ∈ R;
D0 [F, G] is integrable;
D0 [∫ F, ∫ G] ≤ ∫ D0 [F, G].
Finally, let F : I → E n be continuous. Then the integral G(t) = ∫_{t0}^{t} F (s) ds is differentiable and DH G(t) = F (t). Furthermore,
F (t) − F (t0 ) = ∫_{t0}^{t} DH F (s) ds.
See for details Lakshmikantham and Mohapatra[1].
Consider the fuzzy differential system
DH u = f(t, u),
u(t0 ) = u0,
(4.2.6)
where f ∈ C[I × E n , E n] and I = [t0, t0 + a], t0 ≥ 0, a > 0.
Before proceeding further, we note that a mapping u : I → E n is a solution
of the initial value problem (4.2.6) if and only if it is continuous and satisfies
the integral equation
u(t) = u0 + ∫_{t0}^{t} f(s, u(s)) ds,  t ∈ I.
Using the properties of D0 [u, v] and of the integral listed above, and the theory
of differential and integral inequalities, one can establish the following results
concerning comparison principles, existence and uniqueness, continuous dependence and global existence of solutions of (4.2.6). We merely state such results
whose proofs are almost similar to the corresponding results for set differential
equations discussed in Chapter 2. One can also see the proofs in Lakshmikantham and Mohapatra [1].
Theorem 4.2.1 Assume that f ∈ C[I × E n, E n ] and for t ∈ I, u, v ∈ E n,
D0 [f(t, u), f(t, v)] ≤ g(t, D0 [u, v]),
where g ∈ C[I × R+ , R+ ] and g(t, w) is nondecreasing in w for each t. Suppose
further that the maximal solution r(t) = r(t, t0, w0) of the scalar differential
equation
w′ = g(t, w),  w(t0 ) = w0 ≥ 0,  (4.2.7)
exists on I. Then, if u(t), v(t) are any two solutions of (4.2.6) through (t0 , u0),
(t0 , v0) respectively on I, and if D0 [u0, v0] ≤ w0, we have
D0 [u(t), v(t)] ≤ r(t, t0, w0),
t ∈ I.
(4.2.8)
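As a numerical sanity check of the estimate (4.2.8) at a single α-level, one can take f(t, u) = (−1)u, for which D0 [f(t, u), f(t, v)] = D0 [u, v], so g(t, w) = w is admissible and r(t, t0 , w0 ) = w0 e^{t−t0}. The sketch below is a minimal illustration under these assumptions; the closed-form level sets it uses are the ones derived for this equation later, in (4.3.6)-(4.3.7), and the initial level sets are illustrative choices.

```python
import numpy as np

def levels(a, b, t):
    """Level sets of the solution of D_H u = (-1)u with [u(0)]^alpha = [a, b]
    (the formulas (4.3.6)-(4.3.7) of Section 4.3)."""
    return np.array([0.5*(a - b)*np.exp(t) + 0.5*(a + b)*np.exp(-t),
                     0.5*(b - a)*np.exp(t) + 0.5*(a + b)*np.exp(-t)])

t = np.linspace(0.0, 2.0, 9)
u = levels(-1.0, 2.0, t)                  # solution through [u0]^alpha = [-1, 2]
v = levels(-0.5, 1.0, t)                  # solution through [v0]^alpha = [-0.5, 1]
D0 = np.max(np.abs(u - v), axis=0)        # Hausdorff distance of the level sets at each t
w0 = 1.0                                  # D0[u0, v0] = max(0.5, 1.0)
print(bool(np.all(D0 <= w0*np.exp(t) + 1e-12)))   # True: the comparison bound (4.2.8) holds
```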
Remark 4.2.1 If we employ the theory of differential inequalities instead of
integral inequalities, we can dispense with the monotone character of g(t, w)
assumed in Theorem 4.2.1. This is the next comparison principle.
Theorem 4.2.2 Let the assumptions of Theorem 4.2.1 hold except for the nondecreasing property of g(t, w) in w. Then the conclusion (4.2.8) is valid.
The next comparison result provides an estimate under weaker assumptions.
Theorem 4.2.3 Assume that f ∈ C[I × E n, E n ] and
lim sup_{h→0+} (1/h) [D0 [u + hf(t, u), v + hf(t, v)] − D0 [u, v]] ≤ g(t, D0 [u, v]),  t ∈ I,
where g ∈ C[I × R+ , R], u, v ∈ E n . Assume that the maximal solution r(t, t0 , w0 ) of (4.2.7) exists on I. Then the conclusion of Theorem 4.2.1 is valid.
We wish to remark that in Theorem 4.2.3, g(t, w) need not be non-negative, and
therefore the estimate (4.2.8) would be finer than the estimates in Theorems
4.2.1 and 4.2.2.
As a special case of Theorems 4.2.1, 4.2.2, 4.2.3 we have the following important corollary.
Corollary 4.2.1 Assume that f ∈ C[I × E n , E n] and either
(a) D0 [f(t, u), 0̂ ] ≤ g(t, D0 [u, 0̂ ]) or
(b) lim sup_{h→0+} (1/h) [D0 [u + hf(t, u), 0̂ ] − D0 [u, 0̂ ]] ≤ g(t, D0 [u, 0̂ ]),  t ∈ I,
where g ∈ C[I × R+ , R]. Then, if D0 [u0 , 0̂ ] ≤ w0 , we have
D0 [u(t), 0̂ ] ≤ r(t, t0 , w0 ),  t ∈ I,
where r(t, t0 , w0 ) is the maximal solution of (4.2.7) on I and 0̂ ∈ E n is defined by 0̂ (x) = 1 if x = 0 and 0̂ (x) = 0 if x ≠ 0.
Theorem 4.2.4 Assume that
(a) f ∈ C[R0 , E n ], D0 [f(t, u), 0̂ ] ≤ M0 on R0 , where R0 = I × B(u0 , b), B(u0 , b) = {u ∈ E n : D0 [u, u0 ] ≤ b}, and
(b) g ∈ C[I × [0, 2b], R+], 0 ≤ g(t, w) ≤ M1 on I × [0, 2b], g(t, 0) = 0, g(t, w)
is nondecreasing in w for each t ∈ I and w(t) ≡ 0 is the unique solution
of (4.2.7) on I;
(c) D0 [f(t, u), f(t, v)] ≤ g(t, D0 [u, v]) on R0.
Then the successive approximations defined by
u_{n+1}(t) = u0 + ∫_{t0}^{t} f(s, un (s)) ds,  n = 0, 1, 2, . . . ,
exist on [t0 , t0 + η], where η = min{a, b/M }, M = max{M0 , M1 }, as continuous functions and converge uniformly to the unique solution u(t) of the IVP (4.2.6) on [t0 , t0 + η].
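The successive approximations can be carried out numerically at a fixed α-level. The sketch below does this for the interval-valued case with right-hand side level sets [f(t, u)]α = [u1 , u2 ] (that is, DH u = u), so the iterates should converge to [u1 (0)e^t , u2 (0)e^t ]; the grid, the number of iterations and this concrete choice of f are illustrative assumptions, not part of the theorem.

```python
import numpy as np

t0, a, n = 0.0, 1.0, 2001
t = np.linspace(t0, t0 + a, n)
dt = t[1] - t[0]
u0 = np.array([-1.0, 1.0])                       # illustrative [u0]^alpha = [-1, 1]

def integral(y):
    """Cumulative trapezoidal integral from t0, applied to each endpoint function."""
    out = np.zeros_like(y)
    out[:, 1:] = np.cumsum(0.5 * (y[:, 1:] + y[:, :-1]) * dt, axis=1)
    return out

u = np.tile(u0[:, None], (1, n))                 # u_0(t) = u0
for k in range(8):
    u_next = u0[:, None] + integral(u)           # u_{n+1}(t) = u0 + integral of f(s, u_n(s))
    print(k, np.max(np.abs(u_next - u)))         # sup-distance between successive iterates shrinks
    u = u_next
print(np.max(np.abs(u - u0[:, None] * np.exp(t))))   # close to the exact level sets u0 * e^t
```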
Theorem 4.2.5 Suppose that the assumptions of Theorem 4.2.4 hold and, further, that the solutions w(t, t0 , w0 ) of (4.2.7) through every point (t0 , w0 ) are
continuous with respect to (t0 , w0). Then the solutions u(t, t0, u0) of (4.2.6) are
continuous relative to (t0 , u0).
Theorem 4.2.6 Assume that f ∈ C[R+ × E n, E n] and
D0 [f(t, u), 0̂ ] ≤ g(t, D0 [ u, 0̂ ] ), (t, u) ∈ R+ × E n ,
where g ∈ C[R2+ , R+ ], g(t, w) is nondecreasing in w for each t ∈ R+ and
the maximal solution r(t, t0, w0) of (4.2.7) exists on [t0, ∞). Suppose further
that f is smooth enough to guarantee local existence of solutions of (4.2.6) for
any (t0 , u0) ∈ R+ × E n . Then the largest interval of existence of any solution
u(t, t0, u0) of (4.2.6) such that D0 [u0, 0̂ ] ≤ w0 is [t0, ∞).
4.3 Lyapunov-like Functions
Consider the fuzzy differential equation
DH u = f(t, u),
u(t0 ) = u0,
(4.3.1)
where f ∈ C[R+ × S(ρ), E n ] and S(ρ) = {u ∈ E n : D0 [u, 0̂ ] < ρ}.
We assume that f(t, 0̂) = 0̂, so that we have the trivial solution for (4.3.1).
To investigate the stability criteria, the following comparison result in terms
of the Lyapunov-like function is very important and can be proved via the theory of ordinary differential inequalities. Here the Lyapunov-like function serves
as a vehicle to transform the fuzzy differential equation into a scalar comparison differential equation, and therefore it is enough to consider the qualitative
properties of the simpler comparison equation.
Theorem 4.3.1 Assume that
(i) V ∈ C[R+ × S(ρ), R+ ] and | V (t, u1) − V (t, u2) | ≤ L D0 [u1, u2], where
L > 0,
(ii) D+ V (t, u) ≡ lim sup_{h→0+} (1/h)[V (t + h, u + hf(t, u)) − V (t, u)] ≤ g(t, V (t, u)), where g ∈ C[R2+ , R].
Then, if u(t) is any solution of (4.3.1) existing on (t0, ∞) such that V (t0 , u0) ≤
w0 , we have
V (t, u(t)) ≤ r(t, t0, w0), t ≥ t0,
where r(t, t0, w0) is the maximal solution of the scalar differential equation
w′ = g(t, w),  w(t0 ) = w0 ≥ 0,  (4.3.2)
existing on [t0, ∞).
Proof Let u(t) be any solution of (4.3.1) existing on [t0, ∞). Define m(t) =
V (t, u(t)) so that m(t0 ) = V (t0, u0) ≤ w0. For small h > 0,
m(t + h) − m(t) = V (t + h, u(t + h)) − V (t, u(t))
= V (t + h, u(t + h)) − V (t + h, u(t) + hf(t, u(t))) + V (t + h, u(t) + hf(t, u(t))) − V (t, u(t))
≤ L D0 [u(t + h), u(t) + hf(t, u(t))] + V (t + h, u(t) + hf(t, u(t))) − V (t, u(t)),
using the Lipschitz condition given in (i). Thus
D+ m(t) = lim sup_{h→0+} (1/h)[m(t + h) − m(t)]
≤ D+ V (t, u(t)) + L lim sup_{h→0+} (1/h) D0 [u(t + h), u(t) + hf(t, u(t))].
Let u(t + h) = u(t) + z(t, h), where z(t, h) is the Hukuhara difference for small
h > 0, which is assumed to exist. Hence employing the properties of D0 [u, v],
we see that
D0 [u(t + h), u(t) + hf(t, u(t))] = D0 [u(t) + z(t, h), u(t) + hf(t, u(t))]
= D0 [z(t, h), hf(t, u(t))]
= D0 [u(t + h) − u(t), hf(t, u(t))].
Consequently,
(1/h) D0 [u(t + h), u(t) + hf(t, u(t))] = D0 [(u(t + h) − u(t))/h, f(t, u(t))],
and therefore,
lim sup_{h→0+} (1/h) D0 [u(t + h), u(t) + hf(t, u(t))] = lim sup_{h→0+} D0 [(u(t + h) − u(t))/h, f(t, u(t))]
= D0 [DH u(t), f(t, u(t))] ≡ 0,
since u(t) is a solution of (4.3.1). We therefore have the scalar differential
inequality
D+ m(t) ≤ g(t, m(t)), m(t0 ) ≤ w0, t ≥ t0 ,
which yields by the theory of differential inequalities (see Lakshmikantham and
Leela [1])
m(t) ≤ r(t, t0, w0), t ≥ t0 .
This proves the claimed estimate of the theorem.
The following corollaries are useful.
Corollary 4.3.1 The function g(t, w) ≡ 0 is admissible in Theorem 4.3.1 to
yield the estimate
V (t, u(t)) ≤ V (t0 , u0), t ≥ t0 .
Corollary 4.3.2 If, in Theorem 4.3.1, we strengthen the assumption on D+ V (t, u)
to
D+ V (t, u) ≤ −c[w(t, u)] + g(t, V (t, u)),
where w ∈ C[R+ × S(ρ), R+ ], c ∈ K = {a ∈ C[[0, ρ), R+] : a(w) is increasing in
w and a(0) = 0}, and g(t, w) is nondecreasing in w for each t ∈ R+ , then we
get the estimate
V (t, u(t)) + ∫_{t0}^{t} c[w(s, u(s))] ds ≤ r(t, t0 , w0 ),  t ≥ t0 ,
whenever V (t0, u0) ≤ w0.
Proof Set L(t, u(t)) = V (t, u(t)) + ∫_{t0}^{t} c[w(s, u(s))] ds, and note that
D+ L(t, u(t)) ≤ D+ V (t, u(t)) + c[w(t, u(t))] ≤ g(t, V (t, u(t))) ≤ g(t, L(t, u(t))),
using the monotone character of g(t, w). We then get immediately by Theorem
4.3.1, the estimate
L(t, u(t)) ≤ r(t, t0, w0), t ≥ t0,
where u(t) is any solution of (4.3.1). This implies the stated estimate.
A simple example of V (t, u) is D0 [u, 0̂ ], so that
D+ V (t, u) = lim sup_{h→0+} (1/h)[D0 [u + hf(t, u), 0̂ ] − D0 [u, 0̂ ]].
Having the necessary comparison results in terms of Lyapunov-like functions, it
is easy to establish stability results analogous to original Lyapunov results for
fuzzy differential equations.
We shall assume that (4.3.1) possesses the trivial solution, and the solutions
exist and are unique for t ≥ 0.
Recall that the approach in the formulation of fuzzy differential equation
(FDE) (4.3.1) is based on the fuzzification of the differential operator, whose
values are in E n and therefore suffers from the disadvantage ( in view of Corollary 2.5.1 in Lakshmikantham and Mohapatra[1]) that the solution u(t) of (4.3.1)
has the property that diam[u(t)]α is nondecreasing as t increases. Consequently,
it is concluded that this formulation of FDE is not suitable for reflecting the rich
behaviour of the solutions of the corresponding ordinary differential equation
(ODE). As a result, following the suggestion of Hüllermeier[1], an alternative
formulation leading to multivalued differential inclusions has been investigated,
which we shall discuss in the next section. Here we shall utilize the Hukuhara
difference in the initial values in such a way that a subset of the solutions of
(4.3.1) matches the behaviour of the solutions of the corresponding ODE.
For this purpose, we suppose that for any u0 , v0 ∈ E n, there exists a z0 ∈ E n
such that Hukuhara difference u0 − v0 = z0 exists. Then, we consider the
stability of the solutions u(t, t0, u0 − v0 ) = u(t, t0, z0 ) of (4.3.1) with respect to
the trivial solution of (4.3.1). This approach helps to delete an undesirable part
of the solution generated.
On the other hand, if the FDE is given by
DH u = λ(t)u, λ ∈ C[R+ , R+ ], λ ∈ L1(R+ ),
the foregoing situation does not arise, and the FDE matches the behaviour of the corresponding ODE. Hence there is no need to use the Hukuhara difference in the initial values to obtain the desirable part of the solution.
Let us consider the following standard example. Let a ∈ E 1 have level sets [a]α = [a_1^α , a_2^α ] for α ∈ I = [0, 1] and suppose that a solution u : [0, T ] → E 1 of the fuzzy differential equation
DH u = au,  u(0) = u0 ∈ E 1 ,  (4.3.3)
on E 1 has level sets [u(t)]α = [u_1^α (t), u_2^α (t)], for α ∈ I and t ∈ [0, T ].
The Hukuhara derivative DH u also has level sets
[du/dt]α = [du_1^α /dt (t), du_2^α /dt (t)],
for α ∈ I and t ∈ [0, T ].
By the extension principle, the fuzzy set f(t, u(t)) = au(t) has level sets
[au(t)]α = [min{a_1^α u_1^α (t), a_2^α u_1^α (t), a_1^α u_2^α (t), a_2^α u_2^α (t)}, max{a_1^α u_1^α (t), a_2^α u_1^α (t), a_1^α u_2^α (t), a_2^α u_2^α (t)}],
for all α ∈ I and t ∈ [0, T ].
Thus the fuzzy differential equation (4.3.3) is equivalent to the coupled system of ordinary differential equations
du_1^α /dt = min{a_1^α u_1^α , a_2^α u_1^α , a_1^α u_2^α , a_2^α u_2^α },
du_2^α /dt = max{a_1^α u_1^α , a_2^α u_1^α , a_1^α u_2^α , a_2^α u_2^α },  (4.3.4)
for α ∈ I.
For a = χ{−1} ∈ E 1, the fuzzy differential equation (4.3.3) becomes
DH u = −u,
u(0) = u0,
(4.3.5)
and the system of ordinary differential equations (4.3.4) reduces to
du_1^α /dt = −u_2^α ,  du_2^α /dt = −u_1^α ,
for α ∈ I.
α
If the level sets of the initial value u0 ∈ E 1 are [u0]α = [uα
01 , u02] for α ∈ I,
α
α
then the level sets of the solution u of (4.3.5)are given by, [u(t)] = [uα
1 (t), u2 (t)]
where
1 α
1 α
α
t
α
−t
uα
(4.3.6)
1 (t) = (u01 − u02 )e + (u01 + u02)e ,
2
2
1 α
1 α
α
t
α
−t
uα
(4.3.7)
2 (t) = (u02 − u01 )e + (u01 + u02)e ,
2
2
for 0 ≤ α ≤ 1 and t ≥ 0.
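A short numerical check of the formulas (4.3.6)-(4.3.7) at one α-level: integrating the coupled system du1 /dt = −u2 , du2 /dt = −u1 and comparing with the closed-form expressions. The initial level set below is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

u01, u02 = -1.0, 2.0                       # illustrative [u0]^alpha = [-1, 2]
sol = solve_ivp(lambda t, y: [-y[1], -y[0]], (0.0, 3.0), [u01, u02],
                rtol=1e-8, atol=1e-10, dense_output=True)

t = np.linspace(0.0, 3.0, 7)
u1_exact = 0.5*(u01 - u02)*np.exp(t) + 0.5*(u01 + u02)*np.exp(-t)    # (4.3.6)
u2_exact = 0.5*(u02 - u01)*np.exp(t) + 0.5*(u01 + u02)*np.exp(-t)    # (4.3.7)
u1_num, u2_num = sol.sol(t)
print(np.max(np.abs(u1_num - u1_exact)), np.max(np.abs(u2_num - u2_exact)))
# both errors are tiny; note diam[u(t)]^alpha = (u02 - u01)*e^t grows even though
# the crisp equation x' = -x is asymptotically stable
```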
Let us suppose that for v0 , z0 ∈ E 1 such that the Hukuhara difference u0 − v0 = z0 exists, we have
[u0 ]α = [v0 + z0 ]α = [v0 ]α + [z0 ]α .
Since [u0 ]α = [u_01^α , u_02^α ], let us choose
[v0 ]α = [(u_01^α − u_02^α )/2, (u_02^α − u_01^α )/2],
so that
[z0 ]α = [(u_01^α + u_02^α )/2, (u_02^α + u_01^α )/2].
Then it follows, assuming u_01^α ≠ −u_02^α , that
[u(t, u0 )]α = [−(u_02^α − u_01^α )/2, (u_02^α − u_01^α )/2] e^t + [(u_01^α + u_02^α )/2, (u_02^α + u_01^α )/2] e^{−t} ,
[u(t, v0 )]α = [(u_01^α − u_02^α )/2, (u_02^α − u_01^α )/2] e^t , and
[u(t, z0 )]α = [(u_01^α + u_02^α )/2, (u_02^α + u_01^α )/2] e^{−t} ,  t ≥ 0.
If, on the other hand, u_01^α = −u_02^α , that is, [u0 ]α = [−d^α , d^α ] with d^α = u_02^α , then we choose [v0 ]α = [c^α − d^α , c^α + d^α ] for some c^α , so that [z0 ]α = [c^α , c^α ] and [v0 ]α = [u0 ]α + [z0 ]α , the roles of u0 and v0 being interchanged.
α
We note that, in general, for any initial value [u0]α for which uα
01 6= u02 ,
the solution of (4.3.5) contains both desired and undesired parts of solution
compared with the solution of the corresponding ODE. In order to isolate the
desired part of the solution u(t, u0) of (4.3.5) that matches the solution of ODE,
we need to utilize the initial values satisfying the desired Hukuhara difference.
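The following sketch carries out the splitting just described at one α-level (the numbers are illustrative): it forms [v0 ]α and [z0 ]α from a given [u0 ]α , verifies [u0 ]α = [v0 ]α + [z0 ]α , and shows that the solution through z0 decays like the crisp ODE while the solution through v0 carries the expanding part.

```python
import numpy as np

u01, u02 = -1.0, 3.0                          # illustrative [u0]^alpha
v0 = ((u01 - u02)/2, (u02 - u01)/2)           # symmetric part: carries the e^t growth
z0 = ((u01 + u02)/2, (u02 + u01)/2)           # degenerate part: [1, 1] here

def solution(a, b, t):
    """Level sets (4.3.6)-(4.3.7) of the solution of D_H u = -u with [u(0)]^alpha = [a, b]."""
    return (0.5*(a - b)*np.exp(t) + 0.5*(a + b)*np.exp(-t),
            0.5*(b - a)*np.exp(t) + 0.5*(a + b)*np.exp(-t))

t = np.linspace(0.0, 4.0, 5)
print(bool(np.allclose(np.add(v0, z0), (u01, u02))))   # True: the Hukuhara split reproduces u0
print(solution(*z0, t))    # both endpoints equal e^{-t}: matches the crisp ODE x' = -x
print(solution(*v0, t))    # endpoints -2*e^t and 2*e^t: the undesirable expanding part
```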
We are now ready to prove the following stability results by means of Lyapunov-like functions utilizing the solutions u(t, t0 , z0 ) = u(t, t0 , u0 − v0 ) of (4.3.1). For
this purpose, we list a typical definition of stability so that others can be formulated on the basis of this and standard definitions in stability theory.
Definition 4.3.1 The trivial solution of (4.3.1) is said to be equi-stable if for
each ε > 0 and t0 ≥ 0, there exists a δ = δ(t0, ε) > 0 such that D0 [z0 , 0̂ ] < δ
implies D0 [u(t, t0, z0), 0̂ ] < ε, t ≥ t0.
Theorem 4.3.2 Assume that the following hold:
(i) V ∈ C[R+ × S(ρ), R+ ], |V (t, u1 ) − V (t, u2 )| ≤ L D0 [u1 , u2 ], L > 0, and for (t, u) ∈ R+ × S(ρ), where S(ρ) = {u ∈ E n : D0 [u, 0̂ ] < ρ},
D+ V (t, u) ≡ lim sup_{h→0+} (1/h)[V (t + h, u + hf(t, u)) − V (t, u)] ≤ 0;  (4.3.8)
(ii) b(D0 [u, 0̂ ]) ≤ V (t, u) ≤ a(t, D0[u, 0̂ ]) , for (t, u) ∈ R+ × S(ρ) where
b, a(t, .) ∈ K= {σ ∈ C[[0, ρ), R+] : σ(0) = 0 and σ(ω) is increasing in ω}.
Then, the trivial solution of (4.3.1) is equi-stable.
Proof Let 0 < ε < ρ and t0 ∈ R+ be given. Choose a δ = δ(t0 , ε) such that
a(t0 , δ) < b(ε).
(4.3.9)
We claim that with this δ, equi-stability holds. If not, there would exist a
solution u(t) = u(t, t0, z0) of (4.3.1) with D0 [z0 , 0̂] < δ and t1 > t0 such that
D0 [u(t1), 0̂ ] = ε
and D0 [u(t), 0̂ ] ≤ ε < ρ,
t0 ≤ t ≤ t1 .
By Corollary 4.3.1, we then have
V (t, u(t)) ≤ V (t0, z0 ),
t 0 ≤ t ≤ t1 .
(4.3.10)
Consequently, using (ii), (4.3.9) and (4.3.10), we arrive at the following contradiction:
b(ε) = b(D0 [u(t1), 0̂ ]) ≤ V (t1, u(t1)) ≤ V (t0 , z0) ≤ a(t0 , D0[z0 , 0̂ ]) < b(ε).
Hence equi-stability holds, completing the proof.
The next result provides sufficient conditions for equi-asymptotic stability.
In fact, it gives exponential asymptotic stability.
Theorem 4.3.3 Let the assumptions of Theorem 4.3.2 hold except that the
estimate (4.3.8) be strengthened to
D+ V (t, u) ≤ −βV (t, u),
(t, u) ∈ R+ × S(ρ).
(4.3.11)
Then the trivial solution of (4.3.1) is equi-asymptotically stable.
Proof Clearly, the trivial solution of (4.3.1) is equi-stable. Hence taking ε = ρ
and designating δ0 = δ(t0 , ρ), we have, by Theorem 4.3.2,
D0 [z0 , 0̂ ] < δ0 implies D0 [u(t), 0̂ ] < ρ,
t ≥ t0 .
Consequently, we get from the assumption (4.3.11), the estimate
V (t, u(t)) ≤ V (t0 , z0 ) exp[−β(t − t0 )],  t ≥ t0 .
Given ε > 0, we choose T = T (t0 , ε) = (1/β) ln(a(t0 , δ0 )/b(ε)) + 1. Then it is easy to see that
b(D0 [u(t), 0̂ ]) ≤ V (t, u(t)) ≤ a(t0 , δ0 ) e^{−β(t−t0 )} < b(ε),  t ≥ t0 + T.
The proof is complete.
We shall next consider the uniform stability criteria.
Theorem 4.3.4 Assume that, for (t, u) ∈ R+ ×S(ρ)∩S c (η) for each 0 < η < ρ,
V ∈ C[R+ × S(ρ) ∩ S c (η), R+ ], |V (t, u1) − V (t, u2)| ≤ LD0 [u1, u2], L > 0,
D+ V (t, u) ≤ 0,
(4.3.12)
and
b(D0 [u, 0̂ ]) ≤ V (t, u) ≤ a(D0 [u, 0̂ ]),
a, b ∈ K.
(4.3.13)
Then the trivial solution of (4.3.1) is uniformly stable.
Proof Let 0 < ε < ρ and t0 ∈ R+ be given. Choose δ = δ(ε) > 0 such that
a(δ) < b(ε). Then we claim that with this δ, uniform stability follows. If not,
there would exist a solution u(t) = u(t, t0, z0) of (4.3.1), and a t2 > t1 > t0
satisfying
D0 [u(t1 ), 0̂ ] = δ,  D0 [u(t2 ), 0̂ ] = ε,  and  δ ≤ D0 [u(t), 0̂ ] ≤ ε < ρ,  t1 ≤ t ≤ t2 .  (4.3.14)
Taking η = δ, we get from (4.3.12), the estimate
V (t2 , u(t2)) ≤ V (t1 , u(t1)).
This, together with (4.3.13), (4.3.14), and the definition of δ, yields
b(ε) = b(D0 [u(t2 ), 0̂ ]) ≤ V (t2 , u(t2 )) ≤ V (t1 , u(t1 )) ≤ a(D0 [u(t1 ), 0̂ ]) = a(δ) < b(ε).
This contradiction proves uniform stability, completing the proof.
Finally, we shall prove uniform asymptotic stability.
Theorem 4.3.5 Let the assumptions of Theorem 4.3.4 hold except that (4.3.12)
is strengthened to
D+ V (t, u) ≤ −c(D0 [u, 0̂ ]),
c ∈ K.
(4.3.15)
Then the trivial solution of (4.3.1) is uniformly asymptotically stable.
Proof By Theorem 4.3.4, uniform stability follows. And, for ε = ρ, we designate δ0 = δ0 (ρ). This means,
D0 [z0, 0̂ ] < δ0 implies D0 [u(t), 0̂ ] < ρ,
t ≥ t0 ,
where u(t) = u(t, t0, z0) is the solution of (4.3.1).
In view of the uniform stability, it is enough to show that there exists a t* such that t0 ≤ t* ≤ t0 + T , where T = 1 + a(δ0 )/c(δ), and
D0 [u(t*), 0̂ ] < δ.  (4.3.16)
If this is not true, then δ ≤ D0 [u(t), 0̂ ] for t0 ≤ t ≤ t0 + T . Then (4.3.15) gives
V (t, u(t)) ≤ V (t0 , z0 ) − ∫_{t0}^{t} c(D0 [u(s), 0̂ ]) ds,  t0 ≤ t ≤ t0 + T.
As a result, we have, in view of the choice of T ,
0 ≤ V (t0 + T, u(t0 + T )) ≤ a(δ0 ) − c(δ)T < 0,
a contradiction. Hence there exists a t∗ satisfying (4.3.16) and from uniform
stability we conclude that
D0 [z0 , 0̂ ] < δ0 implies D0 [u(t), 0̂ ] < ε,
and the proof is complete.
t ≥ t0 + T,
4.4 Connection with Set Differential Equations
Recall that the IVP for fuzzy differential equations proposed in Section 4.2, is
of the type
DH u = f(t, u), u(t0 ) = u0 ∈ E n ,
(4.4.1)
where f ∈ C[R+ × E n , E n], for which basic results are listed there. As noted in
Section 4.3., this approach is based on the fuzzification of the differential operator, whose values are in E n and therefore suffers from the disadvantage that the
solution u(t) of (4.4.1) has the property that diam[u(t)]α is nondecreasing as t
increases. Consequently, this formulation cannot fully reflect the rich behaviour
of solutions of corresponding ordinary differential equations.
Recently, Hüllermeier [1] has suggested an alternative formulation of fuzzy
IVPs by replacing the RHS of a system of ordinary differential equations by a
fuzzy function
f : R+ × Rn → E n ,
with the initial fuzzy set x0 ∈ E n, so that one can consider the fuzzy multivalued
differential inclusion
x′ ∈ f(t, x),  x(t0 ) = x0 ∈ E n ,  (4.4.2)
on J = [t0, T ], where now f is defined from R+ × Rn → E n , rather than
R+ ×E n → E n , as in (4.4.1). However, instead of (4.4.2), a family of multivalued
differential inclusions,
x′_β ∈ F (t, xβ ; β),  xβ (t0 ) ∈ [x0 ]β ,  0 ≤ β ≤ 1,  (4.4.3)
is investigated on J where F (t, x, β) ≡ [f(t, x)]β and F (t, x, 0) = supp(f(t, x)).
The idea is that the set of all solutions Sβ (x0 , T ), t0 ≤ t ≤ T , would be β− level
of a fuzzy set S(x0 , T ), in the sense that all attainable sets Aβ (x0, t), t0 < t ≤ T,
are levels of a fuzzy set on Rn. Considering S(x0 , T ) to be the solution of (4.4.1)
thus captures both uncertainty and the rich behaviour of differential inclusion
in one and the same technique.
For this purpose, the standard results of multivalued differential inclusions,
under the usual conditions on F in (4.4.3) yield that the attainable set Aβ (x0 , t)
is a compact subset of Rn . If F is assumed to be quasiconcave in addition, one can
conclude, under reasonable assumptions, utilizing the representation theorem,
the existence of a fuzzy set u(t) such that [u(t)]β = Aβ (x0, t) with a similar
relation for the solution set Sβ (x0 , T ). See for details Lakshmikantham and
Mohapatra [1].
In the literature, the following example is often quoted to demonstrate the
advantage gained by the alternative approach when compared to the original
one.
Consider the crisp initial value problem of ODE with unknown initial value
x0 , that is ,
x′ = −x,  x(0) = x0 ∈ [−1, 1].  (4.4.4)
The solution of (4.4.4), when restricted to the interval [−1, 1], is
x(t) = [−e^{−t} , e^{−t} ],  t ≥ 0.
The fuzzy differential equation corresponding to (4.4.4) in E 1 is
DH x = −x,  x(0) = x0 = [−1, 1], x0 ∈ E 1 .  (4.4.5)
Suppose that [x]β = [x_1^β , x_2^β ] and [DH x]β = [dx_1^β /dt, dx_2^β /dt] are the β-level sets for 0 ≤ β ≤ 1. By the extension principle, (4.4.5) becomes
dx_1^β /dt = −x_2^β ,  dx_2^β /dt = −x_1^β ,  0 ≤ β ≤ 1.  (4.4.6)
The solution of (4.4.6) is given by x_1^β (t) = −e^t , x_2^β (t) = e^t , and therefore the fuzzy function x(t) solving (4.4.5) is [x(t)]β = [−e^t , e^t ], t ≥ 0, which shows that diam[x(t)]β → ∞ as t → ∞.
In the framework of Hüllermeier, on the other hand, fuzzy differential equation (4.4.5) is replaced by the family of inclusions
x′_β ∈ F (t, xβ ; β) = [−x_2^β , −x_1^β ],  xβ (0) = [−1, 1],  (4.4.7)
which has a fuzzy solution set S([−1, 1], T ), and fuzzy attainable set Aβ ([−1, 1], t),
0 ≤ t ≤ T respectively which are defined by β− level sets
Sβ ([−1, 1], t) = {x(·) : x(t) ∈ [−e^{−t} , e^{−t} ]},  (4.4.8)
Aβ ([−1, 1], t) = [−e^{−t} , e^{−t} ],  0 ≤ t ≤ T,  (4.4.9)
which matches the kind of behaviour a fuzzification of the ordinary differential
equation (4.4.4) should have.
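A minimal sketch of the contrast just described for x′ = −x, x0 ∈ [−1, 1]: sampling initial points and integrating the crisp equation reproduces the attainable set (4.4.9), while the H-derivative formulation gives the expanding level sets [−e^t , e^t ]. The sample size and the time instant below are illustrative choices.

```python
import numpy as np

t = 2.0
x0 = np.linspace(-1.0, 1.0, 201)           # initial points of the crisp IVP (4.4.4)
x_t = x0 * np.exp(-t)                      # exact flow of x' = -x
print(x_t.min(), x_t.max())                # roughly [-e^{-2}, e^{-2}]: the attainable set (4.4.9)
print(-np.exp(t), np.exp(t))               # [-e^{2}, e^{2}]: the level set of the FDE solution (4.4.5)
```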
The new approach thus shows that fuzzification has no effect on the behaviour of solutions. A question then arises: why should we bother to introduce fuzziness into an ODE originally modelled without it? In fact, when one incorporates into the ODE other phenomena such as randomness, delay, or uncertainties such as fuzziness, or even transforms the ODE into a difference equation or generates a set differential equation (SDE), to name a few, it is natural that the corresponding dynamic system should exhibit the effect of such phenomena. It is
not natural to expect always to have the same behaviour as that of the solutions of ODE from which the new dynamic systems are generated. Generally
speaking, the theory of the corresponding dynamic systems is a lot richer than
the theory of ODEs and therefore it would be interesting to investigate it as an
independent discipline.
Seen from this point of view, the original formulation of FDEs does meet the
criteria. Let us give some examples, including the often quoted one described
earlier, to show other possibilities. Let us start with the ODEs, that is, crisp
IVPs:
(1a) u′ = −u, or (1b) u′ + u = 0,  u(0) = u0 ,
(2a) u′ = u, or (2b) u′ − u = 0,  u(0) = u0 .
Clearly the solutions of (1) and (2) are u(t) = u0 e−t and u(t) = u0et , t ≥ 0,
respectively.
The corresponding IVPs for FDEs are, respectively,
(i) DH u = (−1)u,  u(0) = u0 ∈ E 1 ,
(ii) DH u + u = 0,  u(0) = u0 ∈ E 1 ,
(iii) DH u = u,  u(0) = u0 ∈ E 1 ,
(iv) DH u + (−1)u = 0,  u(0) = u0 ∈ E 1 .
Since (ii) and (iv) are not essentially FDE’s, we introduce as a forcing term
a σ ∈ E 1 and consider the following FDE’s:
(ii∗) DH u + u = σ(t),  u(0) = u0 ∈ E 1 ,
(iv∗) DH u + (−1)u = σ(t),  u(0) = u0 ∈ E 1 .
Suppose that the solutions of these FDEs have level sets
[u(t)]α = [u_1^α (t), u_2^α (t)],  [u0 ]α = [α − 1, 1 − α],  [DH u]α = [du_1^α /dt, du_2^α /dt],
and σ^α (t) = [(α − 1), (1 − α)]e^{−t} , for 0 ≤ α ≤ 1. The FDEs (i), (ii∗), (iii) and (iv∗) reduce to the following systems of ODEs:
(I) [du_1^α /dt, du_2^α /dt] = [−u_2^α , −u_1^α ],
(II∗) [du_1^α /dt, du_2^α /dt] + [u_1^α , u_2^α ] = [(α − 1), (1 − α)]e^{−t} ,
(III) [du_1^α /dt, du_2^α /dt] = [u_1^α , u_2^α ],
(IV∗) [du_1^α /dt, du_2^α /dt] + [−u_2^α , −u_1^α ] = [(α − 1), (1 − α)]e^{−t} ,
with the same initial condition [u0 ]α = [α − 1, 1 − α]. The solutions of these equations, using the methods of interval analysis, are
(Ia) [u(t)]α = [α − 1, 1 − α] e^t ,
(II∗a) [u(t)]α = [α − 1, 1 − α] e^{−t} (1 + t),
(IIIa) [u(t)]α = [α − 1, 1 − α] e^t ,
(IV∗a) [u(t)]α = [α − 1, 1 − α] e^{−t} (1 + t).
The solutions (Ia) and (II ∗ a) represent typical behaviours corresponding to
ODE from which they are generated. They show that introducing fuzziness into
ODEs sometimes destroys the good behaviour of solutions and helps at some other times. For a discussion of how the solution behaviour depends on the choice of
the forcing term, see Kaleva [3].
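As a check of (II∗a), the forced level-set system can be integrated numerically and compared with the stated decaying solution; the value of α below is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.25
# Level-set system of (ii*): du1/dt + u1 = (alpha-1)e^{-t}, du2/dt + u2 = (1-alpha)e^{-t},
# with initial level set [alpha-1, 1-alpha].
rhs = lambda t, y: [-y[0] + (alpha - 1)*np.exp(-t), -y[1] + (1 - alpha)*np.exp(-t)]
sol = solve_ivp(rhs, (0.0, 5.0), [alpha - 1, 1 - alpha],
                rtol=1e-8, atol=1e-10, dense_output=True)

t = np.linspace(0.0, 5.0, 6)
exact = np.array([(alpha - 1)*np.exp(-t)*(1 + t), (1 - alpha)*np.exp(-t)*(1 + t)])  # (II*a)
print(np.max(np.abs(sol.sol(t) - exact)))   # small: the numerical solution matches (II*a)
```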
On the other hand, if we consider the family of multivalued differential inclusions (MDI) for (IV∗), we obtain
0 ∈ u′_α + [−u_2^α , −u_1^α ] + [(α − 1), (1 − α)]e^{−t} ,  u(0) = [α − 1, 1 − α],
which has as its attainable set
A_α [[α − 1, 1 − α], t] = [α − 1, 1 − α] (3e^t − e^{−t} )/2 ,  t ≥ 0.
The foregoing analysis of examples indicates a variety of behaviours of solutions of FDEs compared to those of the ODEs from which they are generated, when the initial values are chosen appropriately. Thus, it appears that the study of FDEs as originally formulated is much richer than expected and needs further investigation.
We next state a known result that relates the solution of a set differential equation to the attainable set of a multivalued differential inclusion (see Tolstonogov [1]).
Theorem 4.4.1 Assume that F ∈ C[R+ × Rn, Kc (Rn)];
D[F (t, x), F (t, y)] ≤ g(t, kx − yk),  (t, x), (t, y) ∈ R+ × Rn ,
and D[F (t, x), θ] ≤ q(t, kxk), (t, x) ∈ R+ × Rn , where g and q satisfy the
following assumptions. The functions g, q ∈ C[R2+, R+ ], g(t, w), q(t, w) are
nondecreasing in w for each t ∈ R+ , w(t) ≡ 0 is the only solution of the scalar
differential equation
w′ = g(t, w),  w(t0 ) = 0,
on [t0, ∞) and r(t, t0, w0) is the maximal solution of the scalar differential equation
w′ = q(t, w),  w(t0 ) = w0 > 0,
existing on [t0, ∞). Then, there exists a unique solution U (t) = U (t, t0, U0 ) on
[t0 , ∞) of IVP (3.2.1) and the attainable set A(U0 , t) of differential inclusion
x′ ∈ F (t, x),  x(t0 ) ∈ U0 ,
satisfying A(U0 , t) ⊂ U (t), t0 ≤ t < ∞.
Finally, we need the following representation result (see Lakshmikantham
and Mohapatra [1]).
Theorem 4.4.2 Let Yβ ⊂ Rn , 0 ≤ β ≤ 1 be a family of compact subsets
satisfying
(i) Yβ ∈ K(Rn ) for all 0 ≤ β ≤ 1;
(ii) Yβ ⊆ Yα whenever α ≤ β, α, β ∈ [0, 1];
(iii) Yβ = ∩_{i=1}^{∞} Yβi , for any nondecreasing sequence βi → β in [0, 1].
Then, there is a fuzzy set u ∈ Dn , such that [u]β = Yβ . If Yβ are also convex,
then u ∈ E n . (Here Dn denotes the set of usc normal fuzzy sets with compact
support and thus E n ⊂ Dn ). Conversely, the level sets [u]β , of any u ∈ E n, are
convex and satisfy these conditions.
It should be noted that Theorem 4.4.2 can be easily generalized from Rn to
a Banach space.
We propose here another formulation of the fuzzy differential equation (4.4.1) by a set differential equation which is generated by the β-level sets of the right-hand side of (4.4.1), where f : R+ × Rn → E n , as before.
For this purpose, consider the level set for each β, 0 ≤ β ≤ 1, and write
F (t, x; β) = [f(t, x)]β ∈ Kc (Rn ).
Next generate the mapping H : R+ × Kc (Rn ) × I → Kc (Rn ), I = [0, 1] by
defining
H(t, A; β) = coF (t, A; β)
(4.4.10)
for each A ∈ Kc (Rn ). Then consider the family of set differential equations given
by
DH Uβ = H(t, Uβ ; β),  Uβ (t0 ) = U0β ∈ Kc (Rn ),
(4.4.11)
on [t0, T ], T > t0 , where DH Uβ is the Hukuhara derivative for each β.
Let us list the following conditions.
(i) F (t, x; β) is quasi-concave, that is,
1. for (t, x) ∈ R+ × Rn , α, β ∈ I, F (t, x; α) ⊇ F (t, x; β), whenever
α ≤ β;
2. if βn is a nondecreasing sequence in I converging to β, then for (t, x) ∈ R+ × Rn , ∩_{n=1}^{∞} F (t, x; βn ) = F (t, x; β);
(ii) D[H(t, A; β), H(t, B; β)] ≤ g(t, D[A, B]), for t ∈ R+ , A, B ∈ Kc (Rn),
β ∈ I;
(iii) g ∈ C[R2+, R+ ], g(t, 0) ≡ 0, g(t, w) is nondecreasing in w for each t ∈ R+
and w(t) ≡ 0 is the only solution of
w′ = g(t, w),  w(t0 ) = 0, for t ≥ t0 ;
(iv) D[H(t, A; α), H(t, A; β)] ≤ L|α − β|, α, β ∈ I,
(t, A) ∈ R+ × Kc (Rn ), L > 0.
We are now in a position to prove the following result.
Theorem 4.4.3 Suppose that the conditions (i) to (iv) are satisfied. Then there
exists a unique solution Uβ (t) = Uβ (t, t0, U0β ) ∈ Kc (Rn ), β ∈ I of (4.4.11) and
Uβ (t) is quasiconcave in β for t ≥ t0 . Moreover, there exists a fuzzy set u(t) ∈ E n
such that
[u(t)]β = Uβ (t), t ≥ t0.
Proof Since f is continuous on R+ × Rn , F (t, x; β) is also continuous for
(t, x, β) ∈ R+ × Rn × I. This implies that H(t, A; β) is continuous map for
(t, A; β) ∈ R+ × Kc (Rn ) × I. Consequently, by Theorems 2.3.1 and 2.6.1 it
follows that there exists a unique solution Uβ (t) = U (t, t0, U0β ) ∈ Kc (Rn ), for
t ≥ t0 of (4.4.11).
We first show that if α ≤ β, then Uβ (t) ⊆ Uα (t) for t ≥ t0. From the
definition of quasi-concavity of F (t, x; β), it follows that H(t, A; β) is also quasiconcave in β. Let Uα (t), Uβ (t) be the solution of
DH Uα = H(t, Uα ; α),  Uα (t0 ) = U0 ∈ Kc (Rn ),
DH Uβ = H(t, Uβ ; β),  Uβ (t0 ) = U0 ∈ Kc (Rn ),  α ≤ β.
Then we find, using quasi-concavity,
DH Uα = H(t, Uα ; α) ⊇ H(t, Uα ; β),  α ≤ β.
But H(t, Uα; β) = H(t, Uβ ; β) because of Uα (t0 ) = U0 = Uβ (t0 ) and therefore
Uα (t) ≡ Uβ (t) by uniqueness of solutions of (4.4.11).
Thus it is clear that Uβ (t) ⊆ Uα (t), α ≤ β, t ≥ t0 .
We shall next prove that if βn is a nondecreasing sequence, βn ∈ I, converging to β, then Uβn (t) → Uβ (t), uniformly on compact subsets of [t0 , ∞). For
this purpose, set m(t) = D[Uβn (t), Uβ (t)] and note that D[U0βn , U0β ] = m(t0 ).
We shall assume that U0βn → U0β , as n → ∞. Then employing the properties
of the metric D, the definition of Hukuhara derivative and the conditions (ii)
and (iv), we arrive at the scalar differential inequality,
D+ m(t) ≤ g(t, m(t)) + L|βn − β|, m(t0 ) = D[U0βn , U0β ], t ≥ t0 .
Hence by Lemma 1.3.1 in (Lakshmikantham and Leela [1]), we obtain
m(t) ≤ rn(t, t0 , ηn),
on any compact set J ⊂ [t0 , ∞), where ηn = D[U0βn , U0β ] and rn (t, t0, ηn) is
the maximal solution of
w′ = g(t, w) + L|βn − β|,
w(t0 ) = ηn , on J.
By assumption (iii), rn(t, t0, ηn) → r(t, t0, 0) ≡ 0, uniformly on J as n → ∞.
Since βn → β as n → ∞, it follows that m(t) ≡ 0 on J, which in turn implies
that D[Uβn (t), Uβ (t)] → 0 as n → ∞. Thus, Uβ (t) is quasi-concave in β ∈ I, for
t ≥ t0. Consequently, by Theorem 4.4.2, there exists a fuzzy set u(t) ∈ E n such
that [u(t)]β = Uβ (t), t ≥ t0 , and this completes the proof.
To find the connection between the solution Uβ (t) of (4.4.11) and the attainable set Aβ (U0 , t) of (4.4.3), we have the following result.
Theorem 4.4.4 Let F ∈ C[R+ × Rn × I, Kc (Rn)] satisfy the assumptions of
Theorem 4.4.1 for each β ∈ I = [0, 1] and assume that it is also quasiconcave in
β as well. Then there exists a unique solution Uβ (t) = Uβ (t, t0 , U0 ) of (4.4.11) for t ≥ t0 , and the attainable set Aβ (U0 , t) of the inclusion (4.4.3) is such that
Aβ (U0 , t) ⊂ Uβ (t, t0, U0), t ≥ t0 .
(4.4.12)
Proof It is easy to verify that when F (t, x; β) satisfies the assumptions required in Theorem 4.4.1, the desired conditions in Theorems 2.3.1 and 2.6.1
are also satisfied, in view of the monotone nondecreasing nature of the functions g(t, w), q(t, w), the definition of D and the fact H(t, A; β) is generated by
F (t, x; β). We have assumed conditions in terms of H(t, A; β) since the set differential equations are treated as an independent subject. Thus for each β ∈ I,
Theorem 4.4.1 yields the relation (4.4.12). Also, both Uβ (t) and Aβ (U0 , t) satisfy the assumptions of Theorem 4.4.2., because one can prove similarly the
quasi-concavity of Aβ (U0 , t). Therefore, there exist fuzzy sets u(t), v(t) ∈ E n
such that
[v(t)]β = Aβ (U0 , t) and [u(t)]β = Uβ (t), t ≥ t0 .
The proof is complete.
We note that, in general, since Aβ (U0 , t) is only compact and not convex,
only (4.4.12) holds. Equality in (4.4.12) is valid only in some special cases.
Recalling (4.4.7) in the example, let us generate the set differential equation
from F in (4.4.7), that is
H(t, A; β) = coF (t, A; β), for A ∈ Kc (R),
and thus,
DH Uβ = −Uβ , Uβ (0) = U0β ∈ Kc (R).
(4.4.13)
Since the values of the solution (4.4.13) are intervals, the equation (4.4.13) can
be written as,
[u′_{1β} , u′_{2β} ] = [−u2β , −u1β ],  (4.4.14)
where Uβ = [u1β , u2β ]. The relation (4.4.14) is equivalent, taking U0β = [u10β , u20β ], to the system of equations
u′_{1β} = −u2β ,  u1β (0) = u10β ,
u′_{2β} = −u1β ,  u2β (0) = u20β ,
whose solution corresponds to (4.3.6) and (4.3.7), duly altered to the present framework, for 0 ≤ β ≤ 1 and t ≥ 0:
u1β (t) = (1/2)[u10β + u20β ]e^{−t} + (1/2)[u10β − u20β ]e^t ,
u2β (t) = (1/2)[u20β + u10β ]e^{−t} + (1/2)[u20β − u10β ]e^t .
Given U0β ∈ Kc (R), there exist V0β , W0β ∈ Kc (R) such that U0β = V0β + W0β , and hence the Hukuhara difference U0β − V0β = W0β exists. Choose
V0β = (1/2)[(u10β − u20β ), (u20β − u10β )],
so that
W0β = (1/2)[(u10β + u20β ), (u20β + u10β )].
It then follows, assuming that u10β ≠ −u20β , that
Uβ (t, U0β ) = (1/2)[−(u20β − u10β ), (u20β − u10β )]e^t + (1/2)[(u10β + u20β ), (u10β + u20β )]e^{−t} ,
Uβ (t, V0β ) = (1/2)[(u10β − u20β ), (u20β − u10β )]e^t , and
Uβ (t, W0β ) = (1/2)[(u10β + u20β ), (u10β + u20β )]e^{−t} ,  t ≥ 0.
It therefore follows that
Aβ (W0β , t) = Uβ (t, W0β ) ⊂ Uβ (t, U0β ),  t ≥ 0.  (4.4.16)
4.5 Upper Semicontinuous Case Continued
Recall that from the fuzzy differential equation (4.4.1) we generated the sequence
of set differential equations given by
DH Uβ = H(t, Uβ ; β), Uβ (t0 ) = U0β ∈ Kc (Rn ),
(4.5.1)
where H(t, A; β) = coF (t, A; β) and F (t, X; β) = [f(t, X)]β , 0 ≤ β ≤ 1 (see equations (4.4.10) and (4.4.11)).
In this section, we continue to investigate the upper semicontinuous (usc)
case discussed in Section 2.9 and prove some results parallel to the continuous
case considered in Section 4.4.
Let us list the following hypotheses, H(F).
F : J × Rn × I → Rn , I = [0, 1], J = [t0, b], t0 ≥ 0 and b ∈ [t0, ∞), is a
multifunction with compact convex values such that
(a) (t, X) → F (t, X; β) is L × B(Rn ) measurable for every β ∈ I;
(b) for every (x, β) ∈ Rn × I and ε > 0 for almost every t ∈ J there exists
δ > 0 such that for all α, β − δ < α ≤ β, y ∈ x + ε.B, the inclusion
F (t, y; α) ⊂ F (t, x; β) + ε.B
(4.5.2)
holds;
(c) F (t, x; β) is quasiconcave, that is for x ∈ Rn, α, β ∈ I, α ≤ β,
F (t, x; α) ⊃ F (t, x; β) a.e.
(4.5.3)
(d) the inequality (2.9.10) holds for every β ∈ I.
Lemma 4.5.1 Assume that hypotheses H(F) hold. Then for every β ∈ I the
multifunction H(t, A; β) has all properties listed in Lemma 2.9.1. and
(i) H(t, A; β) is quasiconcave a.e.;
(ii) if Uk ∈ Kc (Rn ), k ≥ 1, is a nonincreasing sequence with respect to inclusion, and βk ∈ I, k ≥ 1, is a nondecreasing sequence converging to β, then the sequence H(t, Uk ; βk ), k ≥ 1, converges a.e. to H(t, ∩_{k=1}^{∞} Uk ; β) in the space Kc (Rn ).
Proof By (4.5.2) for fixed β ∈ I the multifunction x → F (t, x; β) is usc for
almost every t ∈ J. Hence for fixed β ∈ I the multifunction (t, x) → H(t, x; β)
for almost every t ∈ J has compact convex values and possesses all properties
listed in Lemma 2.9.1.
According to (4.5.3) for every A ∈ Kc (Rn ),
F (t, A; α) ⊃ F (t, A; β), α ≤ β, a.e.
(4.5.4)
Hence (i) is true.
Let a sequence Uk ∈ Kc (Rn ), k ≥ 1 be nonincreasing with respect to inclusion, and βk ∈ I, k ≥ 1, be a nondecreasing sequence converging to β. Then
V = ∩_{k=1}^{∞} Uk is a nonempty compact convex set and the sequence Uk , k ≥ 1, converges to V in Kc (Rn ). By using (4.5.4) and the monotonicity of F (t, U ; β) with
respect to U we obtain
F (t, V ; β) ⊂ ∩_{k=1}^{∞} F (t, V ; βk ) ⊂ ∩_{k=1}^{∞} F (t, Uk ; βk ).  (4.5.5)
Let y ∈ ∩_{k=1}^{∞} F (t, Uk ; βk ). Then there exists a sequence xk ∈ Uk ⊂ U1 such that
y ∈ F (t, xk; βk ), k ≥ 1. Since xk ∈ U1 , k ≥ 1, without loss of generality we can
assume that xk , k ≥ 1, converges to x. It is clear that x ∈ V. Take any ε > 0.
According to (4.5.2) there exists a number N ≥ 1 such that
y ∈ F (t, xk; βk ) ⊂ F (t, x; β) + ε.B
(4.5.6)
for all k ≥ N. Since ε > 0 is arbitrary, y ∈ F (t, x; β) ⊂ F (t, V ; β). Hence, by (4.5.5),
F (t, V ; β) ⊂ ∩_{k=1}^{∞} F (t, Uk ; βk ) ⊂ F (t, V ; β).  (4.5.7)
The inclusion (4.5.7) tells us that the sequence F (t, Uk ; βk ), k ≥ 1 converges to
F (t, V ; β) in Kc (Rn ). Hence the sequence H(t, Uk ; βk ) = coF (t, Uk ; βk ), k ≥ 1,
converges to H(t, ∩_{k=1}^{∞} Uk ; β) in Kc (Rn ). Thus the lemma is proved.
We can now prove the following result which provides the connection between the fuzzy differential equation (4.4.1) and the sequence of set differential
equations (4.5.1).
Theorem 4.5.1 Assume that the multifunction F (t, x; β) satisfies hypotheses
H(F ). Then there exists a solution Uβ (t) = Uβ (t, t0, U0β ) ∈ Kc (Rn ), β ∈ I, t ∈ J
of the equation (4.5.1). If the solution Uβ (t), β ∈ I, is unique then Uβ (t) is
quasiconcave for t ∈ J. Moreover, there exists a fuzzy set u(t) ∈ E n such that
[u(t)]β = Uβ (t), β ∈ I, t ∈ J and fuzzy set t → u(t) is continuous from J to E n .
Proof The existence of solution Uβ (t), β ∈ I follows from Lemma 4.5.1. and
Corollary 2.9.1. Let us show that Uβ (t), t ∈ J, is quasiconcave if the solution
Uβ (t), β ∈ I, is unique.
Let α < β and Uα (t) be a solution of equation (4.5.1). Then
Uα (t) = U0α + ∫_{t0}^{t} H(s, Uα (s); α) ds,  t ∈ J.  (4.5.8)
Denote by Vα the collection of all functions x : J → Rn representable as
x(t) = x0 + ∫_{t0}^{t} v(s) ds,  t ∈ J,  x0 ∈ U0α ,  (4.5.9)
where v(s) is a Bochner integrable selector of H(s, Uα (s); α). Then Vα is a compact convex subset of C(J, Rn ) and Vα (t) = Uα (t), t ∈ J.
Using (1.8.2) , (4.5.8), (4.5.9) and the definition of the operator T (V0 , F, V )
we obtain
Vα = T (U0α , Hα, Vα).
(4.5.10)
Let Vβ0 be a collection of all functions x : J → Rn representable as (4.5.9) with
x0 ∈ U0β and v(s) being a Bochner integrable selector of H(s, Uα (s); α).
As has been shown in the proof of Theorem 2.9.1, Vβ0 is a compact convex
set of C(J, Rn),
Vβ0 = T (U0β , Hα, Vα ),
(4.5.11)
and
Vβ0 ⊂ Vα,
(4.5.12)
because U0β ⊂ U0α.
From quasiconcavity and monotonicity of H(t, A; α) and (4.5.11) , (4.5.12)
it follows
Vβ1 = T (U0β , Hβ , Vβ0 ) ⊂ T (U0β , Hα , Vβ0 ) ⊂ T (U0β , Hα , Vα ) = Vβ0 ,  (4.5.13)
and Vβ1 is a compact convex subset of C(J, Rn ).
We now define Vβ2 = T (U0β , Hβ , Vβ1). Because Vβ1 ⊂ Vβ0 we have
Vβ2 = T (U0β , Hβ , Vβ1 ) ⊂ T (U0β , Hβ , Vβ0 ) = Vβ1.
Continuing this process we obtain a sequence {Vβk }, k ≥ 1, of compact convex
subsets of C(J, Rn) decreasing relative to the inclusion. Repeating the proof
k
of Theorem 2.9.1 we obtain that Uβ = ∩∞
k=1 Vβ = T (U0β , Hβ , Uβ ) and Uβ (t) =
{x(t) : x(.) ∈ Uβ } is a solution of equation (4.5.1). Since Uβ ⊂ Vβk ⊂ Vα then
Uβ (t) ⊂ Vα (t) = Uα (t), t ∈ J.
For the construction of Uβ (t) we use the solution Uα (t) of equation (4.5.1).
Since the equation (4.5.1) has a unique solution, the solution Uβ (t) does not
depend on α < β. Hence Uβ (t) ⊂ Uα (t), t ∈ J, for any α, β ∈ I, α < β.
Let r0 = max{kxk : x ∈ U00 } and let r(t) = r(t, t0 , r0 ) be the maximal solution of (2.9.3) on J. By hypothesis H(F)(d) we have
kUβ (t)k ≤ kU0β k + ∫_{t0}^{t} kH(s, Uβ (s); β)k ds ≤ kU0β k + ∫_{t0}^{t} g(s, kUβ (s)k) ds,  t ∈ J.
Hence, by Lemma 1.3.1 in Lakshmikantham and Leela [1], we obtain
kH(t, Uβ (t); β)k ≤ g(t, r(t)) = r′(t).  (4.5.14)
If t* ≤ t, then
Uβ (t) = Uβ (t*) + ∫_{t*}^{t} H(s, Uβ (s); β) ds.  (4.5.15)
If t ≤ t*, then
Uβ (t*) = Uβ (t) + ∫_{t}^{t*} H(s, Uβ (s); β) ds.  (4.5.16)
Taking into consideration (1.3.9), (4.5.14), (4.5.15), (4.5.16), we obtain
D(Uβ (t), Uβ (t*)) ≤ | ∫_{t*}^{t} r′(s) ds |,  β ∈ I.
Hence the family of functions Uβ (.), β ∈ I is equicontinuous from J to Kc (Rn ).
Let βk ∈ I, k ≥ 1, be any nondecreasing sequence converging to β. We claim that Uβ (t) = ∩_{k=1}^{∞} Uβk (t), t ∈ J. Since for every t ∈ J the sequence Uβk (t) is nonincreasing with respect to inclusion, the sequence Uβk (t) converges pointwise to a function V (t) = ∩_{k=1}^{∞} Uβk (t) in Kc (Rn ). Moreover, the sequence Uβk (t), k ≥ 1, converges uniformly to V (t) because the sequence Uβk (t) is equicontinuous. Hence V : J → Kc (Rn ) is continuous.
Taking into consideration the statement (ii) of Lemma 4.5.1 we obtain
H(t, Uβk (t); βk ) → H(t, V (t); β) in Kc (Rn ).
(4.5.17)
From (4.5.14) it follows that
D(H(t, V (t); β), H(t, Uβk (t); βk )) ≤ 2r′(t).  (4.5.18)
Let
U (t) = U0β + ∫_{t0}^{t} H(s, V (s); β) ds.
Then the inequality
D(U (t), Uβk (t)) ≤ D(U0β , U0βk ) + ∫_{t0}^{t} D(H(s, V (s); β), H(s, Uβk (s); βk )) ds,  t ∈ J,  (4.5.19)
holds. Since U0β = [x0 ]β , we have
lim_{k→∞} D(U0β , U0βk ) = 0.  (4.5.20)
Now, from (4.5.18)-(4.5.20) and the Lebesgue bounded convergence theorem, we obtain lim_{k→∞} D(U (t), Uβk (t)) = 0.
Since Uβk (t) → V (t), then U (t) = V (t), t ∈ J, and the equality
V (t) = U0β + ∫_{t0}^{t} H(s, V (s); β) ds,  t ∈ J,
is true.
Hence V (t) is a solution of equation (4.5.1). Due to the uniqueness of the solution of equation (4.5.1),
Uβ (t) = V (t) = ∩_{k=1}^{∞} Uβk (t),  t ∈ J.
Consequently, by Theorem 4.4.2, there exists a fuzzy set u(t) ∈ E n such that [u(t)]β = Uβ (t), t ∈ J. Since the family Uβ (t), β ∈ I, is equicontinuous, the fuzzy set u(t) is continuous from J to E n , and this completes the proof.
Remark 4.5.1 In the original formulation of fuzzy differential equations (FDEs), the function f in (4.4.1) is assumed to be continuous in order to prove several basic results. This implies that the function F (t, x; β) is continuous for each β. In Section 4.4, several results are investigated under the assumption of continuity. If the function f is assumed to be usc, then F (t, x; β) is usc for each β and, consequently, the standard results of multivalued inclusions can be utilized to capture
the rich behaviour of solutions of inclusions. See Lakshmikantham and Mohapatra [1] for further details. In this section, we took a step further to study set
differential equations (SDEs) which are generated by FDEs, since SDEs have
several advantages.
Remark 4.5.2 If u ∈ Dn , that is u is not assumed fuzzy convex , then the
level set [u]β need not be convex. Hence, when the fuzzy convexity is discarded,
[f(t, x)]β = F (t, x; β) need not be convex. Nonetheless, the generated function
H(t, A; β) is convex and therefore, we can still apply our results.
4.6 Impulsive Fuzzy Differential Equations
Let P C denote the class of piecewise continuous functions from R+ to R with
discontinuities of first kind only at t = tk , k = 1, 2, ... . We need the following
known result (see Lakshmikantham, Bainov and Simeonov [1]).
Theorem 4.6.1 Assume that
(A0) The sequence {tk } satisfies 0 ≤ t0 < t1 < t2, ..., with limk→∞ tk = ∞;
(A1) m ∈ P C 1[R+ , R] and m(t) is left continuous at tk , k = 1, 2, ...;
(A2) for k = 1, 2, . . . and t ≥ t0 ,
m′(t) ≤ g(t, m(t)),  t ≠ tk ,  m(t0 ) ≤ w0 ,  m(t_k^+) ≤ ψk (m(tk )),  (4.6.1)
where g ∈ C[R+ × R, R], ψk : R → R, and ψk (w) is nondecreasing in w;
(A3) r(t) = r(t, t0 , w0 ) is the maximal solution of
w′ = g(t, w),  t ≠ tk ,  w(t_k^+) = ψk (w(tk )),  w(t0 ) = w0 ,  tk > t0 ≥ 0,  (4.6.2)
existing on [t0 , ∞). Then m(t) ≤ r(t), t ≥ t0 .
Proof For t ∈ [t0, t1], we have by the classical comparison theorem m(t) ≤ r(t).
Hence using the facts that ψ1(w) is nondecreasing in w and m(t1 ) ≤ r(t1) we
obtain
+
m(t+
1 ) ≤ ψ1 (m(t1 )) ≤ ψ1 (r(t1 )) = w1 .
Now, for t1 < t ≤ t2 , it follows, using again the classical comparison theorem, that m(t) ≤ r(t), where r(t) = r(t, t1 , w_1^+ ) is the maximal solution of (4.6.2) on the
interval t1 ≤ t ≤ t2. Moreover, as before, we get
m(t_2^+) ≤ ψ2 (m(t2 )) ≤ ψ2 (r(t2 )) = w_2^+ .
Repeating the arguments, we finally arrive at the desired result, and the proof is complete.
Let us consider now the impulsive fuzzy differential equation
u′ = f(t, u),  t ≠ tk ,
u(t_k^+) = u(tk ) + Ik (u(tk )),  (4.6.3)
u(t0 ) = u0 ,
where (A0) holds and f : R+ × E n → E n , Ik : E n → E n , f is continuous in (tk−1 , tk ] × E n and, for each u ∈ E n , lim f(t, v) = f(t_k^+ , u) exists as (t, v) → (t_k^+ , u).
We assume that, on each interval (tk−1 , tk ], there exists a unique solution u(t) of (4.6.3). As a result, employing the impulsive condition in (4.6.3) at each tk , we can define the solution u(t) on the entire interval [t0 , ∞).
Theorem 4.6.2 Assume that f ∈ C[R+ × E n, E n] and
lim sup_{h→0+} (1/h) [D0 [u + hf(t, u), v + hf(t, v)] − D0 [u, v]] ≤ g(t, D0 [u, v]),  t ∈ R+ , u, v ∈ E n , t ≠ tk ,
where g ∈ C[R+ × R+ , R]. Suppose that
D0 [u + Ik (u), v + Ik (v)] ≤ ψk (D0 [u, v])
where ψk : R+ → R+ and ψk (w) is nondecreasing in w, and suppose that the maximal solution r(t) = r(t, t0 , w0 ) of (4.6.2) exists for t ≥ t0 . Then
D0 [u(t), v(t)] ≤ r(t),
t ≥ t0 ,
where u(t), v(t) are the solutions of (4.6.3) existing on [t0, ∞).
Proof Proceeding as in the proof of Theorem 4.3.1, we find that, for t ≠ tk ,
m(t + h) − m(t) = D0 [u(t + h), v(t + h)] − D0 [u(t), v(t)]
≤ D0 [u(t + h), u(t) + hf(t, u(t))] + D0 [v(t) + hf(t, v(t)), v(t + h)]
+ D0 [u(t) + hf(t, u(t)), v(t) + hf(t, v(t))] − D0 [u(t), v(t)].
Hence
D+ m(t)
=
≤
1
lim sup [m(t + h) − m(t)]
h
+
h→0
1
+ ≤ lim sup [D0 (u(t) + hf(t, u(t)), v(t) + hf(t, v(t))]
h→0+ h
−D0 [u(t), v(t)]
u(t + h) − u(t)
+ lim sup D0 [
, f(t, u(t))]
h
+
h→0
v(t + h) − v(t)
+ lim sup D0 [f(t, v(t)),
], t 6= tk ,
h
+
h→0
g(t, D0 [u(t), v(t)]) = g(t, m(t)), t 6= tk .
Also, for t = tk ,
m(t+
k) =
=
≤
+
D0 [u(t+
k ), v(tk )]
D0 [u(tk ) + Ik (u(tk )), v(tk ) + Ik (v(tk ))]
ψk (D0 [u(tk ), v(tk )] = ψk (m(tk )).
We therefore obtain from Theorem 4.6.1, the stated result, namely,
D0 [u(t), v(t)] ≤ r(t),
t ≥ t0 ,
where r(t) = r(t, t0, w0) is the maximal solution of (4.6.2) provided D0 [u0, v0] ≤
w0 , completing the proof.
Let V : R+ × E^n → R+. Then V is said to belong to class V0 if
(i) V is continuous in (tk−1, tk] × E^n and, for each u ∈ E^n, k = 1, 2, ..., lim_{(t,v)→(tk+,u)} V(t, v) = V(tk+, u) exists;
(ii) V satisfies |V(t, u) − V(t, v)| ≤ L D0[u, v], L ≥ 0.
For (t, u) ∈ (tk−1, tk] × E^n, we define

    D+ V(t, u) = lim sup_{h→0+} (1/h)[V(t + h, u + hf(t, u)) − V(t, u)];

then we can prove the following comparison theorem.
Theorem 4.6.3 Let V : R+ × E^n → R+ and V ∈ V0. Suppose that

    D+ V(t, u) ≤ g(t, V(t, u)),  t ≠ tk,                         (4.6.4)
    V(t, u + Ik(u)) ≤ ψk(V(t, u)),  t = tk,                      (4.6.5)

where g : R+^2 → R is continuous in (tk−1, tk] × R+ and, for each w ∈ R+, lim_{(t,z)→(tk+,w)} g(t, z) = g(tk+, w) exists, and ψk : R+ → R is nondecreasing. Let r(t) be the maximal solution of the scalar impulsive differential equation (4.6.2) existing for t ≥ t0. Then V(t0+, u0) ≤ w0 implies

    V(t, u(t)) ≤ r(t),  t ≥ t0.
Proof Let u(t) = u(t, t0, u0) be any solution of (4.6.3) existing for t ≥ t0 such that V(t0+, u0) ≤ w0. Define m(t) = V(t, u(t)) for t ≠ tk. Then, using standard arguments, we arrive at the differential inequality

    D+ m(t) ≤ g(t, m(t)),  t ≠ tk.

From (4.6.5), we get for t = tk,

    m(tk+) = V(tk+, u(tk+)) = V(tk+, u(tk) + Ik(u(tk))) ≤ ψk(V(tk, u(tk))) = ψk(m(tk)).

Hence, by Theorem 4.6.1, m(t) ≤ r(t), t ≥ t0, which proves the claim of Theorem 4.6.3.
Some special cases of g(t, w) and ψk (w) which are instructive and useful are
given below as a corollary.
Corollary 4.6.1 In Theorem 4.6.3, suppose that
(i) g(t, w) ≡ 0, ψk(w) = w for all k; then V(t, u(t)) is nonincreasing in t and V(t, u(t)) ≤ V(t0+, u0), t ≥ t0;
(ii) g(t, w) ≡ 0, ψk(w) = dk w, dk ≥ 0 for all k; then

    V(t, u(t)) ≤ V(t0+, u0) Π_{t0<tk<t} dk,  t ≥ t0;

(iii) g(t, w) = −αw, α > 0, ψk(w) = dk w, dk ≥ 0 for all k; then

    V(t, u(t)) ≤ V(t0+, u0) (Π_{t0<tk<t} dk) e^{−α(t−t0)},  t ≥ t0;

(iv) g(t, w) = λ'(t)w, ψk(w) = dk w, dk ≥ 0 for all k, λ ∈ C^1[R+, R+]; then

    V(t, u(t)) ≤ V(t0+, u0) (Π_{t0<tk<t} dk) exp[λ(t) − λ(t0)],  t ≥ t0.
Recall the example considered in Section 4.3 and note that when we choose [x0]^α = [α − 1, 1 − α], 0 ≤ α ≤ 1, we get

    [x(t)]^α = [(α − 1), (1 − α)]e^t = [−1, 1](1 − α)e^t,  t ≥ 0.

In particular, diam [x(t)]^α = 2(1 − α)e^t, t ≥ 0.
In order to show the effect of impulses, we now introduce the impulsive condition as in (4.6.3), writing [xk+]^α = [x1k^{+α}, x2k^{+α}] and [xk]^α = [x1k^α, x2k^α] for each k, where xk = x(tk) and xk+ = x(tk+), and choosing the impulses so that, levelwise, [xk+]^α = dk [xk]^α with dk ≥ 0. Because of the impulse condition, [x(t)]^α reduces to

    [x(t)]^α = [(α − 1) Π_{0<tk<t} dk, (1 − α) Π_{0<tk<t} dk] e^t,  t ≥ 0.

It follows that, if dk, tk satisfy the condition

    tk+1 + ln dk ≤ tk,

then x = 0 is stable, and if tk+1 + β ln dk ≤ tk with 0 < β < 1, then x = 0 is asymptotically stable.
This demonstrates that the impulsive action helps to obtain stability of the FDE without utilizing the Hukuhara difference for the initial values, as we proposed in Section 4.3.
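To make the effect of the impulses concrete, here is a small numerical illustration in Python. It is not part of the original text; the impulse times tk = k, the level α = 0.25, and the jumps dk = exp((tk − tk+1)/β) with β = 0.5 are hypothetical choices made only so that the condition tk+1 + β ln dk ≤ tk holds.

import numpy as np

# Minimal numerical sketch (not from the text): level sets of the impulsive
# fuzzy solution [x(t)]^alpha = [(alpha-1)*P(t), (1-alpha)*P(t)] * e^t, where
# P(t) = prod_{0 < t_k < t} d_k.  Hypothetical impulse times t_k = k and jumps
# d_k chosen so that t_{k+1} + beta*ln(d_k) <= t_k with beta = 0.5.

beta = 0.5
tk = np.arange(1.0, 21.0)                  # impulse times t_1, ..., t_20
dk = np.exp((tk[:-1] - tk[1:]) / beta)     # d_k = exp((t_k - t_{k+1})/beta) < 1

def level_set(t, alpha):
    """Endpoints of [x(t)]^alpha under the impulsive condition."""
    P = np.prod(dk[: np.searchsorted(tk[:-1], t, side="left")])
    return (alpha - 1.0) * P * np.exp(t), (1.0 - alpha) * P * np.exp(t)

for t in [0.5, 5.0, 10.0, 20.0]:
    lo, hi = level_set(t, alpha=0.25)
    print(f"t = {t:5.1f}   diam [x(t)]^0.25 = {hi - lo:.3e}")
# The diameter 2(1-alpha)*P(t)*e^t decays, illustrating asymptotic stability.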
4.7 Hybrid Fuzzy Differential Equations
The problem of stabilizing a continuous plant, governed by a differential equation, through interaction with a discrete-time controller has recently been investigated. This study leads to the consideration of hybrid systems. In this section, we shall extend this approach to fuzzy differential equations.
Consider the hybrid fuzzy differential system

    u'(t) = f(t, u(t), λk(z)),  u(tk) = z,                       (4.7.1)

on [tk, tk+1] for any fixed z ∈ E^n, k = 0, 1, 2, ..., where f ∈ C[R+ × E^n × E^n, E^n] and λk ∈ C[E^n, E^n]. Here we assume that 0 ≤ t0 < t1 < t2 < ... are such that tk → ∞ as k → ∞, and that existence and uniqueness of solutions of the hybrid system hold on each [tk, tk+1]. To be specific, the system is of the form

    u0'(t) = f(t, u0(t), λ0(u0)),  u0(t0) = u0,  t0 ≤ t ≤ t1,
    u1'(t) = f(t, u1(t), λ1(u1)),  u1(t1) = u1,  t1 ≤ t ≤ t2,
    ...
    uk'(t) = f(t, uk(t), λk(uk)),  uk(tk) = uk,  tk ≤ t ≤ tk+1,
    ...

where uk = uk−1(tk) for each k. By the solution of (4.7.1), we therefore mean the function

    u(t) = u(t, t0, u0) =
        u0(t),   t0 ≤ t ≤ t1,
        u1(t),   t1 ≤ t ≤ t2,
        ...
        uk(t),   tk ≤ t ≤ tk+1,
        ...

We note that the solutions of (4.7.1) are differentiable in each interval (tk, tk+1) for any fixed uk ∈ E^n and k = 0, 1, 2, ....
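The piecewise structure above is easy to mirror computationally. The following Python sketch (not from the text) treats only the crisp scalar special case of (4.7.1): the controller value λk(uk) is frozen at the left endpoint of each interval [tk, tk+1] and a plain Euler scheme is used in between; the choices f(t, u, λ) = −2u + λ and λk(z) = z/2 are hypothetical.

import numpy as np

# Minimal sketch (crisp scalar special case, not from the text) of the hybrid
# structure of (4.7.1): on each [t_k, t_{k+1}] the value lam_k(u_k) is frozen
# and an ODE step is carried out with Euler's method.

def f(t, u, lam):            # assumed right-hand side f(t, u, lambda_k(z))
    return -2.0 * u + lam

def lam(k, z):               # assumed discrete controller lambda_k(z)
    return 0.5 * z

def solve_hybrid(u0, t_grid, steps=200):
    u, traj = u0, [(t_grid[0], u0)]
    for k in range(len(t_grid) - 1):
        frozen = lam(k, u)                     # lambda_k(u_k), fixed on the interval
        h = (t_grid[k + 1] - t_grid[k]) / steps
        t = t_grid[k]
        for _ in range(steps):                 # Euler steps on [t_k, t_{k+1}]
            u = u + h * f(t, u, frozen)
            t += h
        traj.append((t_grid[k + 1], u))        # u_{k+1} = u_k(t_{k+1})
    return traj

for t, u in solve_hybrid(1.0, np.arange(0.0, 5.0)):
    print(f"u({t:.0f}) = {u:.4f}")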
Let V ∈ C[E^n, R+]. For t ∈ [tk, tk+1] and u, z ∈ E^n, we define

    D+ V(u; z) = lim sup_{h→0+} (1/h)[V(u + hf(t, u, λk(z))) − V(u)].

We can then prove the following comparison theorem in terms of the Lyapunov-like function V.
Theorem 4.7.1 Assume that
(i) V ∈ C[E^n, R+] and V(u) satisfies |V(u) − V(v)| ≤ L D0[u, v], L > 0, for u, v ∈ E^n;
(ii) D+ V(u; z) ≤ g(t, V(u), σk(V(z))), t ∈ (tk, tk+1], where g ∈ C[R+^3, R], σk ∈ C[R+, R+], u, z ∈ E^n, k = 0, 1, 2, ...;
(iii) the maximal solution r(t) = r(t, t0, w0) of the hybrid scalar differential equation

    w' = g(t, w(t), σk(wk)),  t ∈ (tk, tk+1],
    w(tk) = wk,  k = 0, 1, 2, ...,                               (4.7.2)

exists on [t0, ∞).
Then any solution u(t) = u(t, t0, u0) of (4.7.1) such that V(u0) ≤ w0 satisfies the estimate

    V(u(t)) ≤ r(t),  t ≥ t0.
Proof Let u(t) be any solution of (4.7.1) existing on [t0, ∞) and set m(t) = V(u(t)). Then, using (i) and (ii) and proceeding as in the proofs of Theorem 4.3.1 and Theorem 4.6.2, we get the differential inequality

    D+ m(t) ≤ g(t, m(t), σk(mk))  for tk < t ≤ tk+1,

where mk = V(u(tk)). For t ∈ [t0, t1], since m(t0) = V(u0) ≤ w0, we obtain

    V(u0(t)) ≤ r0(t, t0, w0),  t0 ≤ t ≤ t1,

where r0(t) = r0(t, t0, w0) is the maximal solution of

    w0' = g(t, w0, σ0(w0)),  w0(t0) = w0 ≥ 0,  t0 ≤ t ≤ t1,

and u0(t) is the solution of

    u0' = f(t, u0(t), λ0(u0)),  u0(t0) = u0,  t0 ≤ t ≤ t1.

Similarly, for t ∈ [t1, t2], it follows that

    V(u1(t)) ≤ r1(t, t1, w1),  t1 ≤ t ≤ t2,

where w1 = r0(t1, t0, w0) and r1(t, t1, w1) is the maximal solution of

    w1' = g(t, w1, σ1(w1)),  w1(t1) = w1 ≥ 0,  t1 ≤ t ≤ t2,

and u1(t) is the solution of

    u1' = f(t, u1(t), λ1(u1)),  u1(t1) = u1,  t1 ≤ t ≤ t2.

Proceeding similarly, we obtain

    V(uk(t)) ≤ rk(t, tk, wk),  tk ≤ t ≤ tk+1,

where uk(t) is the solution of

    uk'(t) = f(t, uk(t), λk(uk)),  uk(tk) = uk,  tk ≤ t ≤ tk+1,

and rk(t, tk, wk) is the maximal solution of

    wk' = g(t, wk(t), σk(wk)),  wk(tk) = wk,  tk ≤ t ≤ tk+1,

with wk = rk−1(tk, tk−1, wk−1). Thus, defining r(t, t0, w0), the maximal solution of the comparison hybrid system (4.7.2), as

    r(t, t0, w0) =
        r0(t, t0, w0),   t0 ≤ t ≤ t1,
        r1(t, t1, w1),   t1 ≤ t ≤ t2,
        ...
        rk(t, tk, wk),   tk ≤ t ≤ tk+1,
        ...

and taking w0 = V(u0), we obtain the desired estimate

    V(u(t)) ≤ r(t),  t ≥ t0.

The proof is therefore complete.
Consider now the hybrid impulsive fuzzy differential system given by

    u' = f(t, u, λk(tk, uk)),  t ∈ [tk, tk+1],
    u(tk+) = u(tk) + Ik(u(tk)),  t = tk,                         (4.7.3)
    u(t0+) = u0,

where f ∈ C[R+ × E^n × E^n, E^n], Ik : E^n → E^n, λk ∈ C[R+ × E^n, E^n], and k = 0, 1, 2, ... .
We assume that I0(u0) = 0 and the existence of solutions of the system

    u' = f(t, u, λk(tk, z)),  t ∈ (tk, tk+1],
    u(tk+) = z + Ik(z),  t = tk,                                 (4.7.4)
    u(t0+) = u0

on (tk, tk+1] for any fixed z ∈ E^n and all k = 0, 1, 2, ... .
Note that the solutions of (4.7.4) are piecewise continuous functions with points of discontinuity of the first kind at t = tk, at which they are assumed to be left continuous.
Let V : R+ × E^n → R+. Then V is said to belong to class V0 if
(i) V is continuous in (tk, tk+1] × E^n and, for each u ∈ E^n, k = 1, 2, ..., lim_{(t,v)→(tk+,u)} V(t, v) = V(tk+, u) exists;
(ii) V is locally Lipschitzian in u. Then we define, as before,

    D+ V(t, u, z) = lim sup_{h→0+} (1/h)[V(t + h, u + hf(t, u, λk(tk, z))) − V(t, u)].
We need the following comparison result.
Theorem 4.7.2 Assume that
(i) V ∈ C[R+ × E^n, R+], V(t, u) is locally Lipschitzian in u, that is, |V(t, u) − V(t, v)| ≤ L D0[u, v], L > 0, and

    D+ V(t, u, z) ≤ g(t, V(t, u), σk(tk, z)),  t ∈ (tk, tk+1],

for u, z ∈ E^n, where σk ∈ C[R+ × E^n, R], g ∈ C[R+^3, R];
(ii) there exist ψk ∈ C[R+, R+], ψk(w) nondecreasing in w, such that

    V(t, u + Ik(u)) ≤ ψk(V(t, u)),  k = 1, 2, ...,  u ∈ E^n;

(iii) the maximal solution r(t) = r(t, t0, w0) of the scalar hybrid impulsive differential equation

    w' = g(t, w, η(tk, wk)),  t ∈ (tk, tk+1],
    w(tk+) = ψk(w(tk)),  t = tk,                                 (4.7.5)
    w(t0) = w0 ≥ 0,

exists on [t0, ∞), where η ∈ C[R+^2, R] and wk = V(tk, uk). Then any solution u(t) = u(t, t0, u0) of (4.7.3) satisfies

    V(t, u(t)) ≤ r(t, t0, w0),  t ≥ t0,

provided w0 ≥ V(t0, u0).
The proof of this comparison theorem follows along lines similar to Theorem 4.7.1, defining u(t) and r(t) suitably piece by piece; we omit it to avoid monotony. Having the foregoing comparison result at our disposal, we can formulate stability criteria for the solutions of (4.7.3) relative to any kind of stability, such as Lyapunov stability, practical stability, or stability in terms of two different measures, which includes several known stability concepts as well as the new concept of stability covering Lyapunov and orbital stability as special cases. We simply state a typical result, whose proof can be constructed based on stability criteria for impulsive differential equations.
Theorem 4.7.3 Assume that
(i) V ∈ V0 and V (t, u) is positive definite and decrescent;
(ii) D+ V (t, u, z) ≤ g(t, V (t, u), σk(tk , z)), t ∈ (tk , tk+1], u, z ∈ E n, σk , g are
as defined in Theorem 4.7.2 ;
(iii) V (t, u + Ik (u)) ≤ ψk (V (t, u)), t = tk , u ∈ E n, where ψk (w) is nondecreasing in w, as in Theorem 4.7.2.
Then the stability properties of the trivial solution w = 0 of (4.7.5) imply the corresponding stability properties of the solutions of (4.7.3).
All that is needed to obtain any particular kind of stability property of (4.7.3) is to require the positive definiteness and decrescence of V(t, u) suitably, relative to what that particular stability notion demands. For example, if we want stability criteria in terms of two different measures, say (h0, h), then V(t, u) needs to satisfy positive definiteness relative to h and decrescence with respect to h0, where h0 is finer than h. See Lakshmikantham and Liu [1] for details.
4.8 Another Formulation
Consider a differential equation in a given space X,

    u'(t) = f(t, u(t)),  t ∈ [0, T],                             (4.8.1)

with a specific initial condition

    u(0) = u0,                                                   (4.8.2)

where T > 0 and f : [0, T] × X → X.
Now, suppose that X is a Banach space with norm ||·|| inducing a distance d. If f is continuous, then equation (4.8.1) is equivalent to

    lim_{h→0} ||u(t + h) − u(t) − f(t, u(t))h|| / h = 0,  t ∈ [0, T).

Therefore, any solution of (4.8.1) satisfies

    lim_{h→0+} d(u(t + h), F(t, h, u(t))) / h = 0,               (4.8.3)

where

    F : [0, T] × R+ × X → X,  F(t, h, u) = u + hf(t, u).         (4.8.4)

With the notation of Kloeden, Sadovsky and Vasiyeva [1], (4.8.1) is equivalent to

    u(t + dt) − u(t) − D_{t,u(t)}(dt) = o(dt),                   (4.8.5)

with

    D_{t,u}(dt) = f(t, u)dt = F(t, dt, u) − u.                   (4.8.6)
There are three possible definitions for continuous processes in a Banach space. Formulation (4.8.5) allows the study of nonsmooth systems such as “stop nonlinearities” and is called an equation with a nonlinear differential. With an adequate choice of the nonlinear differential D it is possible to obtain existence results for classical ordinary differential equations, Carathéodory differential equations, and differential inclusions with maximal monotone operators.
However, (4.8.1) and (4.8.5) both have the same shortcoming: one needs an algebraic structure in the underlying space. On the other hand, (4.8.3) seems adequate to study the evolution of a process in a metric space, making it possible to obtain results of calculus and differential equations without employing any concept of derivative or requiring that the underlying metric space be linear. This motivates the following definition.
Let (X, d) be a complete metric space and F : [0, T ] × R+ × X → X. We
shall consider (X, F ) as a metric differential equation in the following sense: A
function u : [0, T ] → X is a solution of the metric differential equation (X, F )
with initial condition (4.8.2) if u satisfies (4.8.3) and u(0) = u0 .
This conception is related to the concept of quasi-differential equations and
to mutations in a metric space. See Melnik [1], Panasyuk [1], Plotnikov [1],
Aubin [1,2].
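As a quick sanity check of condition (4.8.3) in the classical case X = R^n with F(t, h, u) = u + hf(t, u), the following Python sketch (not from the text) evaluates the difference quotient for the hypothetical scalar example u' = −u, whose exact solution u(t) = e^{−t} is known; the quotient is O(h), as a solution in the sense of (4.8.3) requires.

import numpy as np

# Minimal sketch (not from the text): for X = R^n with F(t, h, u) = u + h f(t, u),
# (4.8.3) says d(u(t+h), u(t) + h f(t, u(t)))/h -> 0 as h -> 0+.  We check this
# numerically for the hypothetical example u' = -u, u(t) = exp(-t).

f = lambda t, u: -u
u = lambda t: np.exp(-t)

t = 1.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = abs(u(t + h) - (u(t) + h * f(t, u(t)))) / h
    print(f"h = {h:.0e}   d(u(t+h), F(t,h,u(t)))/h = {quotient:.2e}")
# The quotient is O(h), consistent with u being a solution in the sense of (4.8.3).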
Of course, if X = R^n and F is given by (4.8.4) with f continuous, then we have a classical ordinary differential equation, i.e., a continuous dynamical system.
If X = E^n, note that for a function f : [0, T] × E^n → E^n, (4.8.4) makes sense, and we can reconsider a fuzzy differential equation as a metric differential equation in the metric space E^n. As we know, in E^n the difference of two elements is not always well defined, which precludes us from using (4.8.5) to study fuzzy differential equations.
We recall that a fuzzy subset of R^n is just a map u : R^n → [0, 1], where u(x) is the grade of membership of x ∈ R^n in the fuzzy set. For each α ∈ (0, 1], the α-level set is [u]^α = {x ∈ R^n : u(x) ≥ α}. The support of u, denoted by [u]^0, is the closure of the union of all its level sets. Of course, any classical subset A ⊂ R^n is identified with its characteristic function χ_A. The set of normal, fuzzy convex, upper semicontinuous functions with compact support is denoted by E^n.
It is possible to define addition in E^n levelwise: for u, v ∈ E^n, [u + v]^α = [u]^α + [v]^α, α ∈ [0, 1]. For c ≠ 0, scalar multiplication is also defined levelwise: for u ∈ E^n, [cu]^α = c[u]^α. Note that it is not possible to define 0u levelwise, since 0u = χ_∅ ∉ E^n. We define u − v = u + (−1)v. Observe that u + v = χ{0} implies u = −v, but u = −v does not imply, in general, that u + v = χ{0}. Indeed, for example, for u = χ[0,1] we have −u = χ[−1,0] and u − u = χ[−1,1].
The distance between elements of E^n is given by the supremum of the Hausdorff distances between the level sets: for u, v ∈ E^n,

    D0[u, v] = sup_{α∈[0,1]} D[[u]^α, [v]^α].

Thus, (E^n, D0) is a complete metric space (see Lakshmikantham and Mohapatra [1]). Moreover, for u, v, w, z ∈ E^n and c, c' ≠ 0, we have

    D0[cu, cv] = |c| D0[u, v],
    D0[u + w, v + w] = D0[u, v],
    D0[u + w, v + z] ≤ D0[u, v] + D0[w, z],
    D0[cu, c'u] = |c − c'| D0[u, χ{0}].

If a ∈ R^n, then χ{a} ∈ E^n, and for a, b ∈ R^n, D0[χ{a}, χ{b}] = |a − b|.
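For readers who want to experiment, the following Python sketch (not from the text) evaluates D0 on a grid of α-levels for hypothetical triangular fuzzy numbers on R, using the fact that the Hausdorff distance between two compact intervals is the maximum of the distances between their left and their right endpoints.

import numpy as np

# Minimal sketch (not from the text): D0[u, v] = sup_alpha D[[u]^alpha, [v]^alpha]
# approximated on a grid of alpha's, for hypothetical triangular fuzzy numbers.

def tri_level(a, b, c, alpha):
    """alpha-level set [a + alpha(b-a), c - alpha(c-b)] of the triangular number (a,b,c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def hausdorff_interval(I, J):
    return max(abs(I[0] - J[0]), abs(I[1] - J[1]))

def D0(u, v, grid=np.linspace(0.0, 1.0, 101)):
    return max(hausdorff_interval(tri_level(*u, a), tri_level(*v, a)) for a in grid)

u = (0.0, 1.0, 2.0)            # triangular fuzzy number "about 1"
v = (1.0, 2.0, 4.0)            # triangular fuzzy number "about 2"
print("D0[u, v] =", D0(u, v))  # sup over alpha of the level-set distances
print("D0[u, u] =", D0(u, u))  # 0, as required of a metric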
Now, consider a map f : E n → E n .
Definition 4.8.1 For fixed u, v ∈ E^n we say that f is differentiable at u in the direction v if there exists w ∈ E^n such that

    lim_{h→0+} D0[f(u + hv), f(u) + hw] / h = 0.

We say that w is the derivative of f at u in the direction v and write w = f'(u)v.
Definition 4.8.2 For a function u : [0, T] → E^n and t ∈ [0, T) we say that u is differentiable at t if there exists w ∈ E^n such that

    lim_{h→0+} D0[u(t + h), u(t) + hw] / h = 0,

and we write DH u(t) = w.
Example 4.8.1 Let u0 ∈ E^n and take u : [0, T] → E^n, u(t) = u0. Then DH u(t) = χ{0} for every t ∈ [0, T). Indeed,

    lim_{h→0+} D0[u(t + h), u(t) + hχ{0}] / h = lim_{h→0+} D0[u0, u0 + hχ{0}] / h = lim_{h→0+} D0[u0, u0] / h = 0.
Example 4.8.2 For u0, u1 ∈ E^n let u(t) = u0 + tu1. Then DH u(t) = u1, since

    lim_{h→0+} D0[u(t + h), u(t) + hu1] / h = lim_{h→0+} D0[u0 + (t + h)u1, u0 + tu1 + hu1] / h = 0.
Example 4.8.3 Let f : [0, T] → R^n be differentiable. Define f̂ : [0, T] → E^n, f̂(t) = χ{f(t)}. Then f̂ is differentiable and DH f̂(t) = χ{f'(t)}.
Example 4.8.4 For any λ > 0 and u0 ∈ E^n, the function u(t) = e^{λt} u0 satisfies DH u(t) = λu(t). Indeed,

    lim_{h→0+} (1/h) D0[u(t + h), u(t) + hλe^{λt} u0]
        = lim_{h→0+} (1/h) D0[e^{λ(t+h)} u0, e^{λt} u0 + hλe^{λt} u0]
        = lim_{h→0+} (e^{λt}/h) D0[e^{λh} u0, (1 + λh)u0]
        = lim_{h→0+} (e^{λt}/h)(e^{λh} − (1 + λh)) D0[u0, χ{0}] = 0.
Definition 4.8.3 Given v : [0, T] → E^n, a primitive of v is a function u : [0, T] → E^n such that DH u(t) = v(t) a.e. on [0, T], i.e., for almost all t ∈ [0, T] we have

    lim_{h→0+} D0[u(t + h), u(t) + hv(t)] / h = 0.

If u is a primitive of v satisfying the initial condition (4.8.2), we say that u is a primitive starting at u0.
For example, a primitive of v(t) = e^t u0 is v itself.
Lemma 4.8.1 If u : [0, T] → E^n is differentiable at t, then u is right continuous at t.
Proof Let DH u(t) = v(t). For every ε > 0, there exists δ > 0 such that for h ∈ (0, δ) we have

    D0[u(t + h), u(t)] ≤ D0[u(t + h), u(t) + hv(t)] + D0[u(t) + hv(t), u(t)]
                       ≤ εh + D0[hv(t), χ{0}] = εh + hD0[v(t), χ{0}].

This shows the right continuity of u at t.
Lemma 4.8.2 Suppose that v : [0, T] → E^n is piecewise constant; then v has a primitive starting at u0. Moreover, if t0 = 0 < t1 < t2 < ... < tk = T and v(t) = vi ∈ E^n for t ∈ (ti, ti+1), i = 0, 1, ..., k − 1, it is possible to construct a Lipschitz continuous primitive with Lipschitz constant equal to max_{0≤i≤k−1} D0[vi, χ{0}].
Proof Let u(0) = u0 and, for t ∈ (0, t1), define u(t) = u0 + tv0. Thus DH u(t) = v0 = v(t) for every t ∈ [0, t1). For i ≥ 1, set u(ti) = u(ti−1) + (ti − ti−1)vi−1, and for t ∈ (ti, ti+1), u(t) = u(ti) + (t − ti)vi. It is clear that DH u(t) = v(t) for every t ∈ (ti, ti+1), i = 0, 1, ..., k − 1. Now, if t, τ ∈ [ti, ti+1], then

    D0[u(t), u(τ)] = D0[u(ti) + (t − ti)vi, u(ti) + (τ − ti)vi]
                   = D0[(t − ti)vi, (τ − ti)vi]
                   = |t − τ| D0[vi, χ{0}].

For arbitrary t, τ ∈ [0, T], suppose that t ∈ [ti, ti+1] and τ ∈ [tj, tj+1], i < j. Hence,

    D0[u(t), u(τ)] ≤ D0[u(t), u(ti+1)] + D0[u(ti+1), u(ti+2)] + · · · + D0[u(tj−1), u(tj)] + D0[u(tj), u(τ)]
                   ≤ |t − ti+1| D0[vi, χ{0}] + |ti+2 − ti+1| D0[vi+1, χ{0}] + · · · + |tj − tj−1| D0[vj−1, χ{0}] + |τ − tj| D0[vj, χ{0}]
                   ≤ max_{0≤i≤k−1} D0[vi, χ{0}] |τ − t|.
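The construction in the proof of Lemma 4.8.2 is entirely explicit, and the following Python sketch (not from the text) reproduces it for an interval-valued step function represented by its endpoints; the breakpoints and values chosen are hypothetical, and addition and nonnegative scalar multiplication act endpoint-wise, which is how the Minkowski operations look on intervals.

import numpy as np

# Minimal sketch (not from the text): the Lipschitz primitive of Lemma 4.8.2 for
# an interval-valued step function v, stored by its endpoints.

breaks = np.array([0.0, 1.0, 2.0, 3.0])                  # 0 = t_0 < t_1 < ... < t_k = T
values = [np.array([0.0, 1.0]),                          # v_0 on (t_0, t_1)
          np.array([1.0, 3.0]),                          # v_1 on (t_1, t_2)
          np.array([0.5, 0.5])]                          # v_2 on (t_2, t_3)

def primitive(t, u0=np.array([0.0, 0.0])):
    """u(t) = u(t_i) + (t - t_i) v_i, built node by node as in the lemma."""
    u = u0.copy()
    for i in range(len(values)):
        if t <= breaks[i + 1]:
            return u + (t - breaks[i]) * values[i]
        u = u + (breaks[i + 1] - breaks[i]) * values[i]   # u(t_{i+1})
    return u

for t in [0.5, 1.0, 1.5, 2.5, 3.0]:
    print(f"u({t}) = {primitive(t)}")
# The resulting u is Lipschitz with constant max_i D0[v_i, chi_{0}] = max_i max|v_i|.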
Lemma 4.8.3 Let v1, v2 : [0, T] → E^n have primitives u1, u2. Suppose that the function s ∈ [0, T] → D0[v1(s), v2(s)] is integrable on [0, T] (for example, if v1 and v2 are piecewise continuous). Then

    D0[u1(t), u2(t)] ≤ D0[u1(0), u2(0)] + ∫_0^t D0[v1(s), v2(s)] ds.        (4.8.7)

Proof Define ξ(t) = D0[u1(t), u2(t)], t ∈ [0, T]. We have

    ξ(t + h) − ξ(t) ≤ D0[u1(t + h), u1(t) + hv1(t)] + D0[u1(t) + hv1(t), u2(t) + hv1(t)]
                      + D0[u2(t) + hv1(t), u2(t) + hv2(t)] + D0[u2(t) + hv2(t), u2(t + h)] − D0[u1(t), u2(t)]
                    = D0[u1(t + h), u1(t) + hv1(t)] + D0[u1(t), u2(t)] + hD0[v1(t), v2(t)]
                      + D0[u2(t) + hv2(t), u2(t + h)] − D0[u1(t), u2(t)].

Hence,

    (ξ(t + h) − ξ(t))/h ≤ (1/h) D0[u1(t + h), u1(t) + hv1(t)] + D0[v1(t), v2(t)] + (1/h) D0[u2(t) + hv2(t), u2(t + h)].

Therefore, D+ ξ(t) ≤ D0[v1(t), v2(t)], t ∈ [0, T], and

    ξ(t) ≤ ξ(0) + ∫_0^t D0[v1(s), v2(s)] ds.
Corollary 4.8.1 (Uniqueness) : For a given initial state we have at most one
continuous primitive.
Lemma 4.8.4 Let u be a continuous primitive of v. Then u satisfies the inequality

    (1/h) D0[u(t + h), u(t) + hv(t)] ≤ (1/h) ∫_0^h D0[v(t + s), v(t)] ds.    (4.8.8)

Moreover, if sup_{τ∈[0,T]} D0[v(τ), χ{0}] = k < +∞, then u is Lipschitz continuous with Lipschitz constant k.
Proof Fix t0 ∈ [0, T] and note that

    µ : [0, T] → E^n,  µ(t) = u(t0) + tv(t0)

is a primitive of the constant function v(t0) with µ(0) = u(t0). Also, the function

    u_{t0} : [0, T − t0] → E^n,  u_{t0}(t) = u(t + t0)

is a primitive of v_{t0} : [0, T − t0] → E^n, v_{t0}(t) = v(t + t0), with u_{t0}(0) = u(t0). Using (4.8.7) we get

    D0[µ(h), u_{t0}(h)] ≤ D0[µ(0), u_{t0}(0)] + ∫_0^h D0[v(t0), v(s + t0)] ds,

that is,

    D0[u(t0) + hv(t0), u(h + t0)] ≤ ∫_0^h D0[v(t0 + s), v(t0)] ds.

Dividing by h we obtain (4.8.8).
Now, the constant function u(t) is a primitive of the constant function χ{0} starting at u(t). Let t' > t. Hence,

    D0[u(t'), u(t)] = D0[u_t(t' − t), u(t)] ≤ ∫_0^{t'−t} D0[v_t(s), χ{0}] ds
                    = ∫_0^{t'−t} D0[v(t + s), χ{0}] ds ≤ k(t' − t).
We now prove the main result of this section: any continuous function has a primitive.
Theorem 4.8.1 Let v : [0, T] → E^n be continuous. Then there exists a unique primitive of v starting at a given u0.
Proof Given ε > 0, there exists δ > 0 such that D0[v(t), v(s)] < ε whenever |t − s| < δ, since v is uniformly continuous on [0, T]. Take m > T/δ, h = T/m, and define the piecewise constant function

    vm : [0, T] → E^n,  vm(t) = v(ih),  t ∈ (ih, (i + 1)h),  i = 0, 1, 2, ..., m − 1.

For any t ∈ [0, T] with t ∈ (ih, (i + 1)h), we then have

    D0[vm(t), v(t)] = D0[v(ih), v(t)] < ε,

since |t − ih| < h = T/m < δ. Also, for every m = 1, 2, ... and t ∈ [0, T] we have

    D0[vm(t), χ{0}] ≤ sup_{τ∈[0,T]} D0[v(τ), χ{0}] = k < ∞.

In view of Lemma 4.8.2, let um be the primitive of vm starting at u0. By Lemma 4.8.4, um is Lipschitz continuous with Lipschitz constant k. Now, using Lemma 4.8.3, for l, m > T/δ and t ∈ [0, T] we have

    D0[um(t), ul(t)] ≤ ∫_0^t D0[vm(s), vl(s)] ds ≤ 2εt ≤ 2Tε.               (4.8.9)

In consequence, for every t ∈ [0, T] the sequence {um(t)}_{m=1}^∞ is a Cauchy sequence in E^n. Therefore, there exists lim_{m→∞} um(t) = u(t). Passing to the limit as l → ∞ in (4.8.9), we see that {um}_{m=1}^∞ converges uniformly to u.
Now, for m = 1, 2, ... consider the functions

    µm : [0, T] → E^n,  µm(t) = um(t) + hvm(t),

and

    µ : [0, T] → E^n,  µ(t) = u(t) + hv(t).

Using (4.8.8) we can write

    D0[um(t + h), um(t) + hvm(t)] ≤ ∫_0^h D0[vm(t + s), vm(t)] ds.

On the other hand,

    lim_{m→∞} D0[um(t + h), um(t) + hvm(t)] = D0[u(t + h), u(t) + hv(t)],

and

    lim_{m→∞} ∫_0^h D0[vm(t + s), vm(t)] ds = ∫_0^h D0[v(t + s), v(t)] ds.

Hence,

    (1/h) D0[u(t + h), u(t) + hv(t)] ≤ (1/h) ∫_0^h D0[v(t + s), v(t)] ds.

Now, v is uniformly continuous on [0, T] and hence

    lim_{h→0+} (1/h) ∫_0^h D0[v(t + s), v(t)] ds = 0,

uniformly in t ∈ [0, T]. Therefore,

    lim_{h→0+} (1/h) D0[u(t + h), u(t) + hv(t)] = 0,

uniformly in t ∈ [0, T], and u is a primitive of v.
For f : E^n → E^n, consider the fuzzy differential equation

    DH u(t) = f(u(t)),  t ∈ [0, T],                              (4.8.10)

with the fuzzy initial condition

    u(0) = u0 ∈ E^n.                                             (4.8.11)

Definition 4.8.4 We say that u : [0, T] → E^n is a solution of (4.8.10)-(4.8.11) if u is a primitive of f(u) starting at u0.
For u ∈ C([0, T], E^n), define the function

    Fu : [0, T] → E^n,  [Fu](t) = f(u(t)),

and denote by Gu the unique continuous primitive of Fu starting at u0. Hence a function u ∈ C([0, T], E^n) is a solution of the initial value problem (4.8.10)-(4.8.11) if and only if u coincides with Gu.
Theorem 4.8.2 Suppose that f : E^n → E^n is such that there exists k ≥ 0 with

    D0[f(x), f(y)] ≤ k D0[x, y],  x, y ∈ E^n.                    (4.8.12)

Then the fuzzy initial value problem (4.8.10),(4.8.11) has a unique solution.
Proof In the space C([0, T], E^n), consider the metric

    D̃[u1, u2] = sup_{t∈[0,T]} D0[u1(t), u2(t)] e^{−kt}.

Thus, using Lemma 4.8.3, we have for any t ∈ [0, T],

    D0[[Gu1](t), [Gu2](t)] ≤ ∫_0^t D0[[Fu1](s), [Fu2](s)] ds
                           = ∫_0^t D0[f(u1(s)), f(u2(s))] ds
                           ≤ k ∫_0^t D0[u1(s), u2(s)] ds
                           = k ∫_0^t e^{ks} e^{−ks} D0[u1(s), u2(s)] ds
                           ≤ k ∫_0^t e^{ks} D̃[u1, u2] ds = (e^{kt} − 1) D̃[u1, u2].

Hence

    D̃[Gu1, Gu2] ≤ [1 − e^{−kT}] D̃[u1, u2],

so G is a contraction and has a unique fixed point.
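The contraction argument suggests computing the solution by successive approximations u_{n+1} = G u_n. The following Python sketch (not from the text) does this for the hypothetical interval-valued problem DH u = f(u) = u with u(0) = [1, 2] on [0, 1], whose exact solution is e^t [1, 2] by Example 4.8.4; the primitive Gu is evaluated with the trapezoidal rule, taken endpoint-wise.

import numpy as np

# Minimal sketch (not from the text): successive approximations for the fixed
# point of G in Theorem 4.8.2, with intervals stored by their endpoints on a
# time grid and [Gu](t) = u0 + integral_0^t f(u(s)) ds computed endpoint-wise.

T, m = 1.0, 200
ts = np.linspace(0.0, T, m + 1)
u0 = np.array([1.0, 2.0])

def G(u):                                        # u has shape (m+1, 2)
    fu = u                                       # f(u) = u in this example
    integral = np.concatenate(
        [np.zeros((1, 2)),
         np.cumsum(0.5 * (fu[1:] + fu[:-1]) * np.diff(ts)[:, None], axis=0)])
    return u0 + integral

u = np.tile(u0, (m + 1, 1))                      # start the iteration at the constant u0
for n in range(30):
    u_new = G(u)
    if np.max(np.abs(u_new - u)) < 1e-12:        # iterates have converged
        break
    u = u_new

print("iterations:", n, "  u(1) ~", u[-1], "  exact e*[1,2] =", np.e * u0)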
4.9 Notes and Comments
The preliminaries introduced for the formulation of fuzzy differential equations and the basic results reported for such equations in Section 4.2 are adapted from Lakshmikantham and Mohapatra [1]. For further results on basic fuzzy set theory see Kaleva [1, 2], Seikkala [1], Vorobiev and Seikkala [1], and O'Regan, Lakshmikantham and Nieto [1]. For the results of Section 4.3 concerning stability criteria in terms of Lyapunov-like functions, with the necessary comparison principles, see Gnana Bhaskar, Lakshmikantham and Vasundhara Devi [1]. To eliminate the possibly undesirable part of the solutions, the Hukuhara difference in the initial conditions is employed, suitably extending the ideas given in Lakshmikantham, Leela and Vasundhara Devi [1]. For the suggestion to reduce FDEs to a sequence of multivalued differential equations, see Hüllermeier [1]. See Diamond [1], Diamond and Watson [1] and Lakshmikantham and Mohapatra [1], where Hüllermeier's approach is exploited fruitfully. For recent results in this connection see Agarwal, O'Regan and Lakshmikantham [1]. For results on multivalued differential equations see Deimling [1]. The interconnection between FDEs and SDEs sketched in Section 4.4, dealing with the continuous case, is taken from Lakshmikantham, Leela and Vatsala [1]. See Lakshmikantham and Tolstonogov [1] for similar results in the usc case described in Section 4.5, and also Tolstonogov [1]. The introduction to the theory of impulsive FDEs in Section 4.6 and hybrid FDEs in Section 4.7, and the corresponding results reported, are adapted from Lakshmikantham and Vatsala [2]. See Lakshmikantham and Nieto [1] for the formulation of differential equations in metric spaces and the related results for FDEs studied in Section 4.8. For other kinds of formulations of differential equations in metric spaces, see also Aubin [1], Kloeden, Sadovsky and Vasiyeva [1], Melnik [1], Panasyuk [1] and Plotnikov [1].
Chapter 5
Miscellaneous Topics
5.1 Introduction
This chapter is devoted to the introduction of several topics in the framework of set differential equations that need further investigation.
We begin Section 5.2 with set differential equations involving impulsive effects and prove certain basic results. In Section 5.3, we consider impulsive set differential equations and develop the fruitful monotone iterative technique in a general setup, so as to include several important special results.
Section 5.4 is dedicated to the investigation of set differential equations with delay. Here we provide some fundamental results for such equations. In Section 5.5, we introduce impulses into the study of SDEs with delay and consider suitable interesting results. The discussion of set difference equations forms the content of Section 5.6. Employing causal or nonanticipative maps of Volterra type, we discuss, in Section 5.7, set differential equations involving such maps and extend appropriate basic results to them. Lyapunov-like functions whose values are in the metric space (Kc(R+^d), D) are introduced in Section 5.8. A necessary comparison theorem in terms of such Lyapunov-like functions is proved using a suitable partial order, in order to discuss qualitative properties of solutions of set differential systems. The general setup considered for Lyapunov-like functions covers not only the existing single, vector, matrix and cone-valued Lyapunov function theory, but also provides a very general framework for further progress.
Since throughout this book we have employed the metric space (Kc(R^n), D) for the investigation of several situations, in Section 5.9 we indicate how one can extend most of the results discussed to the metric space (Kc(E), D), where E is any real Banach space, with suitable modifications demanded by the infinite dimensional framework. Finally, we provide notes and comments in Section 5.10.
5.2 Impulsive Set Differential Equations
Many evolution processes are characterized by the fact that at certain moments
of time they experience a change of state abruptly. These processes are subject
to short term perturbations whose duration is negligible in comparison to the
duration of the process. Consequently, it is natural to suppose that these perturbations act instantaneously, that is, in the form of impulses. Thus impulsive
differential equations have become a natural description of observed evolution
phenomena of several real-world problems. The study of impulsive differential equations has grown into a well-deserved discipline, and a systematic treatment of the theory is available. There has been much progress in the investigation of impulsive dynamic systems of other kinds.
In this section, we shall extend the ideas of impulsive ordinary differential equations to set differential equations and investigate some basic properties.
We first introduce the following notation.
(i) Let {tk} be a sequence such that 0 ≤ t1 < t2 < ... < tk < ... with limk→∞ tk = ∞.
(ii) F ∈ PC[R+ × Kc(R^n), Kc(R^n)] means that F : R+ × Kc(R^n) → Kc(R^n) is continuous in (tk−1, tk] × Kc(R^n) for each k = 1, 2, ..., and, for each U ∈ Kc(R^n), k = 1, 2, ..., lim_{(t,Y)→(tk+,U)} F(t, Y) = F(tk+, U) exists.
(iii) g ∈ PC[R+ × R+, R] if g : (tk−1, tk] × R+ → R is continuous and, for each w ∈ R+, lim_{(t,z)→(tk+,w)} g(t, z) = g(tk+, w) exists.
(iv) h ∈ PC^1[R+, Kc(R^n)] means that h ∈ PC[R+, Kc(R^n)] and h is differentiable in each interval (tk−1, tk).
Now, consider the impulsive set differential equation (ISDE) given by

    DH U = F(t, U),  t ≠ tk,
    U(tk+) = Ik(U(tk)),  t = tk,                                 (5.2.1)
    U(t0) = U0 ∈ Kc(R^n),

where F ∈ PC[R+ × Kc(R^n), Kc(R^n)], Ik : Kc(R^n) → Kc(R^n), and {tk} is a sequence of points such that 0 ≤ t0 < t1 < t2 < ... < tk < ... with limk→∞ tk = ∞.
Definition 5.2.1 By a solution U(t, t0, U0) of the impulsive set differential equation (5.2.1), we mean a piecewise continuous function on [t0, ∞) which is left continuous in each subinterval (tk, tk+1] and is given by

    U(t, t0, U0) =
        U0(t, t0, U0),    t0 ≤ t ≤ t1,
        U1(t, t1, U1+),   t1 < t ≤ t2,                           (5.2.2)
        ...
        Uk(t, tk, Uk+),   tk < t ≤ tk+1,
        ...

where Uk(t, tk, Uk+) is the solution of the set differential equation

    DH U = F(t, U),  U(tk+) = Uk+ = Ik(U(tk)).
We begin with a basic differential inequality result, which is a useful tool in studying the monotone method in the impulsive setup later.
Theorem 5.2.1 Assume that
(i) V, W ∈ PC^1[R+, Kc(R^n)], F ∈ PC[R+ × Kc(R^n), Kc(R^n)], F(t, X) is monotone nondecreasing in X for each t ∈ R+, and

    DH V ≤ F(t, V),  t ≠ tk,
    V(tk+) ≤ Ik(V(tk)),  t = tk,

and

    DH W ≥ F(t, W),  t ≠ tk,
    W(tk+) ≥ Ik(W(tk)),  t = tk,  k = 1, 2, ...;

(ii) Ik : Kc(R^n) → Kc(R^n), Ik(U) is nondecreasing in U for each k;
(iii) for any X, Y ∈ Kc(R^n) such that X ≥ Y and t ∈ R+,

    F(t, X) ≤ F(t, Y) + L(X − Y)  for some L > 0.

Then V(0) ≤ W(0) implies V(t) ≤ W(t), t ≥ 0.
Proof Consider J = [0, t1] and let V(0) ≤ W(0). Then, applying Theorem 2.5.1, we have V(t) ≤ W(t) on J. This implies V(t1) ≤ W(t1), and since I1(U) is nondecreasing,

    V(t1+) ≤ I1(V(t1)) ≤ I1(W(t1)) ≤ W(t1+).

Thus V(t1+) ≤ W(t1+). Next set J = (t1, t2] and apply Theorem 2.5.1 to get V(t) ≤ W(t) on (t1, t2].
Proceeding as before, we can obtain the conclusion of the theorem.
Corollary 5.2.1 Let V, W ∈ PC^1[R+, Kc(R^n)], p, σ ∈ C[R+, Kc(R^n)],

    DH V ≤ σ,  t ≠ tk,
    V(tk+) ≤ p(tk),  t = tk,

and

    DH W ≥ σ,  t ≠ tk,
    W(tk+) ≥ p(tk),  t = tk.

Then V(t0) ≤ W(t0) implies

    V(t) ≤ W(t),  t ≥ t0.
We first prove an existence theorem for impulsive set differential equations with fixed moments of impulse.
Theorem 5.2.2 Suppose that
(i) F ∈ PC[R+ × Kc(R^n), Kc(R^n)];
(ii) D[F(t, U), θ] ≤ g(t, D[U, θ]), t ≠ tk, where g ∈ PC[R+^2, R+] and g(t, w) is nondecreasing in (t, w);
(iii) D[U(tk+), θ] ≤ ψk(D[U(tk), θ]), t = tk;
(iv) ψk(w) is a nondecreasing function of w;
(v) r(t, t0, w0) is the maximal solution of the impulsive scalar differential equation

    w' = g(t, w),  t ≠ tk,
    w(tk+) = ψk(w(tk)),  t = tk,                                 (5.2.3)
    w(t0) = w0,

existing on [0, ∞);
(vi) {tk} is a sequence of points of impulse with 0 ≤ t0 < t1 < t2 < ... < tk < ... and limk→∞ tk = ∞.
Then there exists a solution of the ISDE (5.2.1).
Before we proceed with the proof, let us define the notion of a maximal
solution of (5.2.3).
Definition 5.2.2 By a maximal solution r(t, t0, w0) of the impulsive differential equation (5.2.3), we mean the solution r(t, t0, w0) defined by

    r(t, t0, w0) =
        r0(t, t0, w0),    t0 ≤ t ≤ t1,
        r1(t, t1, r1+),   t1 < t ≤ t2,
        ...
        rk(t, tk, rk+),   tk < t ≤ tk+1,
        ...

satisfying the relation

    w(t, t0, w0) ≤ r(t, t0, w0),  t ∈ R+,

for every solution w(t, t0, w0) of (5.2.3), where rk+ = ψk(rk−1(tk)).
We now present the proof of Theorem 5.2.2.
Proof Set J0 = [t0, t1] and restrict F to J0 × Kc(R^n). Consider the set differential equation

    DH U = F(t, U),  U(t0) = U0,  on J0.

Then the hypotheses of Theorem 2.8.2 are satisfied with D[U0, θ] = w0. Hence there exists a solution U0(t, t0, U0) of this set differential equation such that D[U0(t), θ] ≤ r(t, t0, w0) on J0.
For t = t1, U0(t1) = U0(t1, t0, U0). Set U1+ = U0(t1+) = I1(U0(t1)). From the hypotheses,

    D[U1+, θ] = D[I1(U0(t1)), θ] ≤ ψ1(D[U0(t1), θ]) ≤ ψ1(r(t1)) = r(t1+).

Put J1 = [t1, t2] and consider the set differential equation

    DH U = F(t, U),  t ∈ J1,  U(t1+) = U1+.

Then again, restricting F to the domain J1 × Kc(R^n), the hypotheses of Theorem 2.8.2 are satisfied, and thus there exists a solution U1(t, t1, U1+), t ∈ J1, of the set differential equation restricted to J1. We have

    U1(t2) = U1(t2, t1, U1+)  and  U1(t2+) = I2(U1(t2)).

Set U2+ = U1(t2+) and J2 = [t2, t3]. Now, repeating the above process, we obtain the existence of a solution of the impulsive set differential equation.
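The proof is constructive: one solves the SDE on each [tk, tk+1], applies the jump Ik at the right endpoint, and restarts. The following Python sketch (not from the text) carries this out for the hypothetical interval-valued problem DH U = U with jumps Ik(U) = dk U, storing U by its endpoints and using Euler steps on each piece; since the diameter of U is nondecreasing between impulses, the endpoint-wise stepping is consistent with the Hukuhara derivative here.

import numpy as np

# Minimal sketch (not from the text): the interval-by-interval construction in
# the proof of Theorem 5.2.2, for the hypothetical problem D_H U = U on each
# piece with jumps U(t_k^+) = I_k(U(t_k)) = d_k U(t_k).

tks = [0.0, 1.0, 2.0, 3.0, 4.0]         # impulse instants t_0 < t_1 < ...
dks = [0.2, 0.2, 0.2, 0.2]              # jump factors d_k for I_k(U) = d_k U
U = np.array([1.0, 2.0])                # U_0, stored as [left endpoint, right endpoint]

def F(t, U):
    return U                            # right-hand side of the SDE on each piece

for k in range(len(tks) - 1):
    h = (tks[k + 1] - tks[k]) / 400
    for i in range(400):                # solve D_H U = F(t, U) on [t_k, t_{k+1}]
        U = U + h * F(tks[k] + i * h, U)
    print(f"U(t_{k+1}) ~ {U}")
    U = dks[k] * U                      # impulse: U(t_{k+1}^+) = I_{k+1}(U(t_{k+1}))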
We next give a basic comparison theorem in the impulsive set differential equation setup.
Theorem 5.2.3 Assume that
(i) F ∈ PC[R+ × Kc(R^n), Kc(R^n)];
(ii) for t ∈ R+, t ≠ tk, U, V ∈ Kc(R^n),

    D[F(t, U), F(t, V)] ≤ g(t, D[U, V]),                         (5.2.4)

where g ∈ PC[R+^2, R+];
(iii) D[U(tk+), V(tk+)] ≤ ψk(D[U(tk), V(tk)]), where ψk(w) is a nondecreasing function of w.
Further, suppose that the maximal solution r(t, t0, w0) of the impulsive scalar differential equation (5.2.3) exists on R+.
If U(t), V(t) are any two solutions of the ISDE (5.2.1) through U(t0) = U0 and V(t0) = V0, U0, V0 ∈ Kc(R^n), existing on J, then we have

    D[U(t), V(t)] ≤ r(t, t0, w0),  t ∈ R+,

provided D[U0, V0] ≤ w0.
Proof We set J0 = [t0, t1] and restrict the domain of F to J0 × Kc(R^n). Then F is continuous on this domain and, further, the hypotheses of Theorem 2.2.1 are satisfied. Hence we can conclude that

    D[U(t), V(t)] ≤ r(t, t0, w0),  t ∈ J0,

which implies D[U(t1), V(t1)] ≤ r(t1, t0, w0). Now, using the hypothesis for t = t1+, we have

    D[U(t1+), V(t1+)] ≤ ψ1(D[U(t1), V(t1)]) ≤ ψ1(r(t1, t0, w0)) = r(t1+),

since ψ1 is a nondecreasing function. Thus

    D[U(t1+), V(t1+)] ≤ r(t1+).                                  (5.2.5)

Next, set J1 = [t1, t2] and dom F = J1 × Kc(R^n). Then, using the inequalities (5.2.4), (5.2.5) and applying Theorem 2.2.1, we conclude that

    D[U(t), V(t)] ≤ r(t, t0, w0),  t ∈ J1.

Repeating the above process, the conclusion of the theorem is obtained.
We now state a corollary which will be a useful tool in our work.
Corollary 5.2.2 Assume that
(i) F ∈ PC[R+ × Kc(R^n), Kc(R^n)];
(ii) D[F(t, U), θ] ≤ g(t, D[U, θ]), t ≠ tk, where g ∈ PC[R+^2, R];
(iii) D[U(tk+), θ] ≤ ψk(D[U(tk), θ]), t = tk, and ψk(w) is nondecreasing in w;
(iv) r(t, t0, w0) is the maximal solution of the impulsive scalar differential equation (5.2.3).
Then, if D[U0, θ] ≤ w0, we have

    D[U(t), θ] ≤ r(t, t0, w0),  t ∈ J.
We next prove a comparison theorem using Lyapunov-like functions. This will help us to study stability criteria for the ISDE (5.2.1). Before proceeding further, we make the following assumption.
Let V : R+ × Kc(R^n) → R+. We say that V belongs to the class V0 if
(i) V is continuous in (tk−1, tk] × Kc(R^n) and, for each U ∈ Kc(R^n), k = 1, 2, ..., lim_{(t,Y)→(tk+,U)} V(t, Y) = V(tk+, U) exists;
(ii) |V(t, A) − V(t, B)| ≤ L D[A, B] for A, B ∈ Kc(R^n), t ∈ R+, where L is the local Lipschitz constant.
Theorem 5.2.4 Assume that
(i) V ∈ V0;
(ii) for t ∈ R+, U ∈ Kc(R^n),

    D+ V(t, U) ≤ g(t, V(t, U)),  t ≠ tk,

where g ∈ C[R+^2, R];
(iii) V(tk+, U(tk+)) ≤ ψk(V(tk, U(tk))), t = tk, where ψk(w) is nondecreasing in w.
Further, suppose that r(t, t0, w0) is the maximal solution of the impulsive scalar differential equation (5.2.3) existing on R+. Then, if U(t) = U(t, t0, U0) is any solution of (5.2.1) existing on R+ such that V(t0, U0) ≤ w0, we have

    V(t, U(t)) ≤ r(t, t0, w0),  t ∈ R+.

Proof Set J0 = [t0, t1]. Applying Theorem 3.2.1 on J0 × Kc(R^n), we obtain

    V(t, U(t)) ≤ r(t, t0, w0),  t ∈ J0.

At t = t1, V(t1, U(t1)) ≤ r(t1, t0, w0). Since ψ1 is a nondecreasing function,

    ψ1(V(t1, U(t1))) ≤ ψ1(r(t1, t0, w0)) = r(t1+).

Hence, proceeding as in the earlier theorems, step by step, we can prove the conclusion of the theorem.
We now define the stability properties of the null solution of the impulsive set differential equation (5.2.1).
Definition 5.2.3 Let U(t) = U(t, t0, U0) be any solution of the ISDE (5.2.1). Then the trivial solution U(t) ≡ θ is said to be
(S1) stable, if for each ε > 0 and t0 ∈ R+ there exists a δ = δ(t0, ε) > 0 such that D[U0, θ] < δ implies D[U(t), θ] < ε for t ≥ t0;
(S2) attractive, if for each ε > 0 and t0 ∈ R+ there exist δ0 = δ0(t0) > 0 and a T = T(t0, ε) > 0 such that D[U0, θ] < δ0 implies

    D[U(t), θ] < ε  for  t ≥ t0 + T.

The other definitions can be formulated similarly.
We denote B(U0, b) = {U ∈ Kc(R^n) : D[U, U0] ≤ b} and
K = {σ ∈ C[R+, R+] : σ(0) = 0 and σ(t) is strictly increasing in t}.
The following theorem connects the stability properties of the trivial solution
of ISDE (5.2.1) with the stability properties of the trivial solution of the impulsive scalar differential equation (5.2.3) through the Lyapunov-like function.
In order to obtain the trivial solution for ISDE (5.2.1) we assume that
F (t, θ) ≡ θ and Ik (θ) = θ for all k.
Theorem 5.2.5 Assume that
(i) V : R+ × B(θ, b) → R+, V ∈ V0, and

    D+ V(t, U) ≤ g(t, V(t, U)),  t ≠ tk,

where g : R+ × R+ → R+, g(t, 0) ≡ 0, and g satisfies A0(ii);
(ii) there exists b0 > 0 such that U ∈ B(θ, b0) implies Ik(U) ∈ B(θ, b) for all k, and

    V(tk, Ik(U(tk))) ≤ ψk(V(tk, U(tk))),  U ∈ B(θ, b0),

where ψk : R+ → R+ is nondecreasing and ψk(0) = 0;
(iii) b(D[U, θ]) ≤ V(t, U) ≤ a(D[U, θ]), where a, b ∈ K.
Then the stability properties of the trivial solution of the impulsive scalar differential equation (5.2.3) imply the corresponding stability properties of the trivial solution of the ISDE (5.2.1).
Proof Let 0 < ε < b* = min(b0, b) and t0 ∈ R+ be given. Suppose the trivial solution of (5.2.3) is stable. Then, given b(ε) > 0, there exists a δ1 = δ1(t0, ε) > 0 such that

    0 ≤ w0 < δ1  implies  w(t, t0, w0) < b(ε),  t ≥ t0,

where w(t, t0, w0) is any solution of (5.2.3).
Let w0 = a(D[U0, θ]) and choose δ2 = δ2(ε) such that a(δ2) < δ1. Define δ = min(δ1, δ2). With this δ, we claim that if D[U0, θ] < δ then D[U(t), θ] < ε, t ≥ t0, where U(t) = U(t, t0, U0) is any solution of the ISDE (5.2.1).
Suppose this does not hold. Then there exist a solution U(t) = U(t, t0, U0) of the ISDE (5.2.1) with D[U0, θ] < δ and a t* > t0 such that tk < t* ≤ tk+1 for some k, satisfying ε ≤ D[U(t*), θ] and D[U(t), θ] < ε for t0 ≤ t ≤ tk. Since 0 < ε < b0, from condition (ii) we have

    D[U(tk+), θ] = D[Ik(U(tk)), θ] < b,  and  D[U(tk), θ] < ε.

Hence, we can find a t' such that tk < t' ≤ t* satisfying

    ε ≤ D[U(t'), θ] < b.

Setting m(t) = V(t, U(t)) for t0 ≤ t ≤ t' and using the hypotheses (i) and (ii), we get from Theorem 5.2.4 the estimate

    V(t, U(t)) ≤ r(t, t0, a(D[U0, θ])),  t0 ≤ t ≤ t',

where r(t, t0, w0) is the maximal solution of the impulsive scalar differential equation (5.2.3).
Now consider

    b(ε) ≤ b(D[U(t'), θ]) ≤ V(t', U(t')) ≤ r(t', t0, a(D[U0, θ])) < b(ε),

which is a contradiction. This proves that the trivial solution U(t) ≡ θ of the ISDE (5.2.1) is stable.
Next, suppose that w ≡ 0 of (5.2.3) is uniformly stable. Then clearly δ is independent of t0, and this gives the uniform stability of U ≡ θ of the ISDE (5.2.1).
Let us now suppose that w ≡ 0 of (5.2.3) is asymptotically stable. This implies that U ≡ θ of the ISDE (5.2.1) is stable. Hence, setting ε = b* and δ0* = δ(t0, b*), we have

    D[U0, θ] < δ0*  implies  D[U(t), θ] < b*,  t ≥ t0.           (5.2.6)

To prove attractivity, let 0 < ε < b* and t0 ∈ R+. Since w ≡ 0 of (5.2.3) is attractive, given b(ε) > 0 and t0 ∈ R+, there exist a δ1' = δ1'(t0) > 0 and a T = T(t0, ε) > 0 such that 0 ≤ w0 < δ1' implies

    w(t, t0, w0) < b(ε)  for  t ≥ t0 + T.

Then, using (5.2.6) and reasoning as in the earlier case, we get

    V(t, U(t)) ≤ r(t, t0, a(D[U0, θ])).

Thus we obtain

    b(D[U(t), θ]) ≤ V(t, U(t)) ≤ r(t, t0, a(D[U0, θ])) < b(ε),  t ≥ t0 + T,

which implies

    D[U(t), θ] < ε,  t ≥ t0 + T.

Thus U ≡ θ is attractive and hence asymptotically stable.
We now consider the example given in 3.4.3 and illustrate how impulses control the behavior of the solutions, so that the device of using the Hukuhara difference in the initial values becomes redundant in this case.
Example 5.2.1 Consider the set differential equation

    DH U = (−1)U,  U(0) = U0 ∈ Kc(R).                            (5.2.7)

Since the values of the solution U(t) of (5.2.7) are intervals, the equation (5.2.7) can be written as

    [u1', u2'] = (−1)U = [−u2, −u1],                             (5.2.8)

where U = [u1, u2] and U0 = [u10, u20]. Recall that the solution is given by

    u1(t) = (1/2)[u10 + u20]e^{−t} + (1/2)[u10 − u20]e^t,
    u2(t) = (1/2)[u20 + u10]e^{−t} + (1/2)[u20 − u10]e^t.        (5.2.9)

If U0 = [u0, u0], that is, U0 is a singleton, we get from (5.2.9)

    U(t) = [u1(t), u2(t)] = u0 e^{−t},  t ≥ 0.

In this situation, the impulses have no role to play and hence we can take

    U(tk+) = U(tk),  k = 1, 2, ....

If, on the other hand, we take U0 = [−u0, u0], then (5.2.9) reduces to

    U(t) = [−u0, u0]e^t,  t ≥ 0.
Suppose that we choose the impulses as

    U(tk+) = dk U(tk)  for  t = tk,                              (5.2.10)

where the dk's satisfy 0 < dk < 1 and

    tk+1 + ln dk ≤ tk  for all k;                                (5.2.11)

then the solution of the corresponding ISDE (5.2.7), (5.2.10) is given by

    U(t) = U0 Π_{0<tk<t} dk e^t,  t ≥ 0.                         (5.2.12)

We know that D[U(t), θ] = ||U(t)||; therefore

    ||U(t)|| ≤ ||U0|| Π_{0<tk<t} dk e^t,  t ≥ 0.                 (5.2.13)

Choosing δ = (ε/2) e^{−t1} and using (5.2.11), it follows that ||U(t)|| < ε, t ≥ 0, provided that ||U0|| < δ. Hence the stability of the trivial solution of (5.2.7), (5.2.10) follows.
To prove asymptotic stability, we strengthen the assumption (5.2.11) to

    tk+1 + ln(α dk) ≤ tk  for all k,  where α > 1.

Then dk ≤ (1/α) exp[tk − tk+1]. Using this estimate on dk in (5.2.13), we see that

    lim_{t→∞} ||U(t)|| = 0.

Thus, the trivial solution U ≡ θ of the ISDE (5.2.7), (5.2.10) is asymptotically stable.
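For a numerical check, the following Python sketch (not from the text) integrates the coupled system (5.2.8) with Euler steps, applies the hypothetical impulses U(tk+) = dk U(tk) at tk = k with dk = e^{tk − tk+1}/α and α = 2 (so that tk+1 + ln(α dk) ≤ tk holds), and compares ||U(t)|| with the closed form (5.2.12)-(5.2.13).

import numpy as np

# Minimal sketch (not from the text): Example 5.2.1 with U_0 = [-u_0, u_0].
# Writing U(t) = [u_1(t), u_2(t)], (5.2.8) is u_1' = -u_2, u_2' = -u_1, and at
# each t_k we apply U(t_k^+) = d_k U(t_k).

u0 = 1.0
U = np.array([-u0, u0])
tks = np.arange(1.0, 11.0)                       # hypothetical impulse times t_k = k
alpha = 2.0
dks = np.exp(tks[:-1] - tks[1:]) / alpha         # t_{k+1} + ln(alpha d_k) <= t_k holds

h, t, k = 1e-3, 0.0, 0
while t < 10.0 - 1e-9:
    U = U + h * np.array([-U[1], -U[0]])         # Euler step for (5.2.8)
    t += h
    if k < len(dks) and t >= tks[k] - 1e-9:      # impulse at t_k
        U = dks[k] * U
        k += 1

closed_form = u0 * np.prod(dks[:k]) * np.exp(t)  # ||U(t)|| from (5.2.12)-(5.2.13)
print("numerical   ||U(10)|| =", max(abs(U)))
print("closed form ||U(10)|| =", closed_form)    # both decay: asymptotic stability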
Remark 5.2.1 If U , F and I in (5.2.1) are single-valued mappings then the
Hukuhara derivative and integral reduce to the ordinary derivative and integral.
Consequently, the impulsive set differential equation (5.2.1) reduces to the corresponding ordinary impulsive differential system. Thus the results obtained in
this section include the corresponding results of such equations as a very special
case.
5.3 Monotone Iterative Technique
We develop the monotone iterative technique for ISDEs corresponding to the various notions of upper and lower solutions of the SDE (2.5.3). We can define similar concepts for the ISDE

    DH U = F(t, U) + G(t, U),  t ≠ tk,
    U(tk+) = Ik(U(tk)) + Jk(U(tk)),  t = tk,                     (5.3.1)
    U(0) = U0 ∈ Kc(R^n),

where F, G ∈ PC[J × Kc(R^n), Kc(R^n)], Ik, Jk : Kc(R^n) → Kc(R^n) for each k, and 0 < t1 < t2 < ... < tk < ..., with limk→∞ tk = T and J = [0, T].
Definition 5.3.1 Let V, W ∈ PC^1[J, Kc(R^n)]. Then V, W are said to be
(a) coupled lower and upper solutions of type I of (5.3.1) if

    DH V ≤ F(t, V) + G(t, W),  t ≠ tk,
    V(tk+) ≤ Ik(V(tk)) + Jk(W(tk)),  t = tk,                     (5.3.2)
    V(0) ≤ U0,

and

    DH W ≥ F(t, W) + G(t, V),  t ≠ tk,
    W(tk+) ≥ Ik(W(tk)) + Jk(V(tk)),  t = tk,                     (5.3.3)
    W(0) ≥ U0;

(b) coupled lower and upper solutions of type II of (5.3.1) if

    DH V ≤ F(t, W) + G(t, V),  t ≠ tk,
    V(tk+) ≤ Ik(W(tk)) + Jk(V(tk)),  t = tk,                     (5.3.4)
    V(0) ≤ U0,

and

    DH W ≥ F(t, V) + G(t, W),  t ≠ tk,
    W(tk+) ≥ Ik(V(tk)) + Jk(W(tk)),  t = tk,                     (5.3.5)
    W(0) ≥ U0.
Theorem 5.3.1 Assume that
(A1) V, W ∈ PC^1[J, Kc(R^n)] are coupled lower and upper solutions of type I relative to (5.3.1) with V(t) ≤ W(t), t ∈ J;
(A2) F, G ∈ C[J × Kc(R^n), Kc(R^n)], F(t, X) is nondecreasing in X and G(t, Y) is nonincreasing in Y for each t ∈ J, and F, G map bounded sets into bounded sets;
(A3) Ik(U) is continuous and nondecreasing in U, and Jk(U) is continuous and nonincreasing in U, for each k = 1, 2, ....
Then there exist monotone sequences {Vn(t)}, {Wn(t)} in Kc(R^n) such that Vn(t) → ρ(t), Wn(t) → R(t) in Kc(R^n), and (ρ, R) are the coupled minimal and maximal solutions of type I of (5.3.1), respectively; that is, they satisfy the relations

    DH ρ = F(t, ρ) + G(t, R),  t ≠ tk,
    ρ(tk+) = Ik(ρ(tk)) + Jk(R(tk)),  t = tk,                     (5.3.6)
    ρ(0) = U0,

and

    DH R = F(t, R) + G(t, ρ),  t ≠ tk,
    R(tk+) = Ik(R(tk)) + Jk(ρ(tk)),  t = tk,                     (5.3.7)
    R(0) = U0,

for t ∈ J.
Proof Consider, for each n ≥ 0, the ISDEs given by

    DH Vn+1 = F(t, Vn) + G(t, Wn),  t ≠ tk,
    Vn+1(tk+) = Ik(Vn(tk)) + Jk(Wn(tk)),  t = tk,                (5.3.8)
    Vn+1(0) = U0,

and

    DH Wn+1 = F(t, Wn) + G(t, Vn),  t ≠ tk,
    Wn+1(tk+) = Ik(Wn(tk)) + Jk(Vn(tk)),  t = tk,                (5.3.9)
    Wn+1(0) = U0,

where V(0) ≤ U0 ≤ W(0).
It is clear that the equations (5.3.8) and (5.3.9) have unique solutions, say Vn+1(t) and Wn+1(t), t ∈ J. We set V0(t) = V(t) and W0(t) = W(t), t ∈ J. Our aim is to prove

    V0 ≤ V1 ≤ V2 ≤ ... ≤ Vn ≤ Wn ≤ ... ≤ W2 ≤ W1 ≤ W0,  t ∈ J.   (5.3.10)

From the hypotheses, we have that V0 and W0 are coupled lower and upper solutions of type I of (5.3.1). Setting Vn = V0 and Wn = W0 in (5.3.8) and (5.3.9), we get V1(t) and W1(t), t ∈ J, the unique solutions of (5.3.8) and (5.3.9). We now claim
(i) V0 ≤ V1, (ii) V1 ≤ W1, and (iii) W1 ≤ W0, for t ∈ J.
To prove (i), consider the equation (5.3.8) with n = 0:

    DH V1 = F(t, V0) + G(t, W0),  t ≠ tk,
    V1(tk+) = Ik(V0(tk)) + Jk(W0(tk)),  t = tk,                  (5.3.11)
    V1(0) = U0,

and, from hypothesis (A1), we have

    DH V0 ≤ F(t, V0) + G(t, W0),  t ≠ tk,
    V0(tk+) ≤ Ik(V0(tk)) + Jk(W0(tk)),  t = tk,                  (5.3.12)
    V0(0) ≤ U0.

Now, arguing as in Theorem 2.5.1, we get

    V0(t) ≤ V1(t),  t ∈ (tk−1, tk].

Using the fact that V0(tk+) ≤ V1(tk+) at each t = tk, we get V0(t) ≤ V1(t), t ∈ J.
Next, to prove (ii), we consider the relations (5.3.8), (5.3.9) with n = 0 and use the monotone properties of F and G, and of Ik and Jk for each k = 1, 2, .... Then we arrive at the following inequalities:

    DH V1 ≤ F(t, W0) + G(t, W0),  t ≠ tk,
    V1(tk+) ≤ Ik(W0(tk)) + Jk(W0(tk)),  t = tk,
    V1(0) = U0,

and

    DH W1 ≥ F(t, W0) + G(t, W0),  t ≠ tk,
    W1(tk+) ≥ Ik(W0(tk)) + Jk(W0(tk)),  t = tk,
    W1(0) = U0,

which yield, from Corollary 5.2.1,

    V1(t) ≤ W1(t),  t ∈ J.

Proceeding as in the proof of (i), we obtain W1(t) ≤ W0(t), t ∈ J, which is (iii).
Thus, we have

    V0(t) ≤ V1(t) ≤ W1(t) ≤ W0(t)  for t ∈ J.

Assume that for some j > 1 we have

    Vj−1(t) ≤ Vj(t) ≤ Wj(t) ≤ Wj−1(t)  for t ∈ J.                (5.3.13)

We then prove that Vj(t) ≤ Vj+1(t) ≤ Wj+1(t) ≤ Wj(t) for t ∈ J. Consider

    DH Vj = F(t, Vj−1) + G(t, Wj−1),  t ≠ tk,
    Vj(tk+) = Ik(Vj−1(tk)) + Jk(Wj−1(tk)),  t = tk,              (5.3.14)
    Vj(0) = U0,

and

    DH Vj+1 = F(t, Vj) + G(t, Wj),  t ≠ tk,
    Vj+1(tk+) = Ik(Vj(tk)) + Jk(Wj(tk)),  t = tk,                (5.3.15)
    Vj+1(0) = U0.

Using the nondecreasing nature of F and the nonincreasing nature of G, for each t ∈ J, and also the nondecreasing nature of Ik and the nonincreasing nature of Jk, for each k, along with the relation (5.3.13), we arrive at

    DH Vj+1 ≥ F(t, Vj−1) + G(t, Wj−1),  t ≠ tk,
    Vj+1(tk+) ≥ Ik(Vj−1(tk)) + Jk(Wj−1(tk)),  t = tk,            (5.3.16)
    Vj+1(0) ≥ U0.

By applying Corollary 5.2.1 to the equations (5.3.14) and (5.3.16) we get

    Vj(t) ≤ Vj+1(t),  t ∈ J.

Similarly, we can show that Wj+1(t) ≤ Wj(t), t ∈ J.
We next prove that Vj+1(t) ≤ Wj+1(t), t ∈ J. Taking n = j in (5.3.8) and (5.3.9), we have

    DH Vj+1 = F(t, Vj) + G(t, Wj),  t ≠ tk,
    Vj+1(tk+) = Ik(Vj(tk)) + Jk(Wj(tk)),  t = tk,                (5.3.17)
    Vj+1(0) = U0,

and

    DH Wj+1 = F(t, Wj) + G(t, Vj),  t ≠ tk,
    Wj+1(tk+) = Ik(Wj(tk)) + Jk(Vj(tk)),  t = tk,                (5.3.18)
    Wj+1(0) = U0.
Again, using the fact that G is nonincreasing in Y for each t and Jk(U) is nonincreasing in U, and that F is nondecreasing in X for each t and Ik(U) is nondecreasing in U for each k = 1, 2, ..., in the relations (5.3.17), (5.3.18), we have

    DH Vj+1 ≤ F(t, Wj) + G(t, Wj),  t ≠ tk,
    Vj+1(tk+) ≤ Ik(Wj(tk)) + Jk(Wj(tk)),  t = tk,
    Vj+1(0) ≤ U0,

and

    DH Wj+1 ≥ F(t, Wj) + G(t, Wj),  t ≠ tk,
    Wj+1(tk+) ≥ Ik(Wj(tk)) + Jk(Wj(tk)),  t = tk,
    Wj+1(0) ≥ U0,  t ∈ J,

which, on using Corollary 5.2.1, yields

    Vj+1 ≤ Wj+1  for t ∈ J.

Thus we have the sequences of functions {Vn}, {Wn}, which are piecewise continuous and satisfy the relation (5.3.10). Clearly these sequences are uniformly bounded on J. In each subinterval [tk, tk+1], the sequences {Vn} and {Wn} are equicontinuous; hence, using the Arzelà–Ascoli theorem on each subinterval, we can show that the entire sequence {Vn(t)} converges uniformly to ρ(t) on [tk, tk+1] and {Wn(t)} converges uniformly to R(t) on [tk, tk+1].
Since Ik, Jk are continuous functions for each k = 1, 2, ..., we obtain from

    lim_{n→∞} Vn(tk+) = lim_{n→∞} [Ik(Vn−1(tk)) + Jk(Wn−1(tk))]

that ρ(tk+) = Ik(ρ(tk)) + Jk(R(tk)), and similarly R(tk+) = Ik(R(tk)) + Jk(ρ(tk)).
We now consider the integral equations

    Vn+1(t) = U0 + ∫_0^t [F(s, Vn(s)) + G(s, Wn(s))] ds,
    Wn+1(t) = U0 + ∫_0^t [F(s, Wn(s)) + G(s, Vn(s))] ds.

Taking limits as n → ∞ and using the uniform continuity of F and G on each subinterval [tk, tk+1], we get (5.3.6) and (5.3.7). Further, V0 ≤ ρ ≤ R ≤ W0 for t ∈ J.
We next claim that (ρ, R) are the coupled minimal and maximal solutions of the ISDE (5.3.1). To prove this, we show that if U(t) is any solution of (5.3.1) such that V0 ≤ U ≤ W0 for t ∈ J, then

    V0 ≤ ρ ≤ U ≤ R ≤ W0  for t ∈ J.                              (5.3.19)
Suppose that for some n,

    Vn ≤ U ≤ Wn,  t ∈ J.                                         (5.3.20)

Using (5.3.20) along with the monotone properties of F, G for each t, and of Ik, Jk for each k, we arrive at

    DH U ≥ F(t, Vn) + G(t, Wn),  t ≠ tk,
    U(tk+) ≥ Ik(Vn(tk)) + Jk(Wn(tk)),  t = tk,
    U(0) ≥ U0,

and

    DH Vn+1 = F(t, Vn) + G(t, Wn),  t ≠ tk,
    Vn+1(tk+) = Ik(Vn(tk)) + Jk(Wn(tk)),  t = tk,
    Vn+1(0) = U0,

which yields, on using Corollary 5.2.1, Vn+1(t) ≤ U(t), t ∈ J. Similarly, Wn+1(t) ≥ U(t) for t ∈ J. This holds for all n. Hence, taking limits as n → ∞, we arrive at the relation (5.3.19), thus proving our claim.
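To see the scheme (5.3.8)-(5.3.9) in action, the following Python sketch (not from the text) runs it in the special case G ≡ 0, Jk ≡ 0 (treated in Remark 5.3.1(1) below) for a hypothetical scalar problem u' = u with a single impulse u(0.5+) = u(0.5)/2 and u(0) = 1, using V0 ≡ 0 and W0(t) = 10 e^{2t} as lower and upper solutions; each iterate is obtained by a quadrature, because the right-hand side involves only the previous iterate, and the Vn increase while the Wn decrease toward the solution.

import numpy as np

# Minimal sketch (not from the text): monotone iterates (5.3.8)-(5.3.9) with
# G = 0, J_k = 0 for the hypothetical scalar impulsive problem
#   u' = F(t, u) = u on [0, 1], t != 0.5,  u(0.5+) = I_1(u(0.5)) = u(0.5)/2,  u(0) = 1.

ts = np.linspace(0.0, 1.0, 2001)
jump_idx = np.searchsorted(ts, 0.5)              # index of the impulse time t_1 = 0.5
F, I1 = (lambda t, u: u), (lambda u: 0.5 * u)

def next_iterate(prev):
    """V_{n+1}(t) = u0 + int_0^t F(s, V_n) ds, restarted from I_1(V_n(t_1)) after t_1."""
    rhs = F(ts, prev)
    integral = np.concatenate([[0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(ts))])
    out = 1.0 + integral
    out[jump_idx + 1:] = I1(prev[jump_idx]) + (integral[jump_idx + 1:] - integral[jump_idx])
    return out

V = np.zeros_like(ts)                 # V_0 = 0 (lower solution)
W = 10.0 * np.exp(2.0 * ts)           # W_0 (upper solution)
for n in range(25):
    V, W = next_iterate(V), next_iterate(W)

print("V_25(1) =", V[-1], "  W_25(1) =", W[-1])
print("exact u(1) = e/2 =", 0.5 * np.e)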
Corollary 5.3.1 Suppose that, in addition to the assumptions of Theorem 5.3.1, the following hold:
(i) F and G satisfy, for X, Y ∈ Kc(R^n) with X ≥ Y,

    F(t, X) ≤ F(t, Y) + N1(X − Y),  N1 ≥ 0,
    G(t, X) + N2(X − Y) ≥ G(t, Y),  N2 ≥ 0;

(ii) for each k, Ik and Jk satisfy

    Ik(X) ≤ Ik(Y) + M1k(X − Y),  M1k ≥ 0,
    Jk(X) + M2k(X − Y) ≥ Jk(Y),  M2k ≥ 0,

with M1k + M2k < 1.
Then ρ = U = R is the unique solution of the ISDE (5.3.1).
Proof Since ρ ≤ R, we have R = ρ + m, or m = R − ρ. Now

    DH ρ + DH m = DH R = F(t, R) + G(t, ρ),  t ≠ tk,
                ≤ F(t, ρ) + N1 m + G(t, R) + N2 m,  t ≠ tk,
                = DH ρ + (N1 + N2)m,  t ≠ tk,

and, for t = tk,

    m(tk+) + ρ(tk+) = R(tk+)
                    = Ik(R(tk)) + Jk(ρ(tk))
                    ≤ Ik(ρ(tk)) + Jk(R(tk)) + M1k m(tk) + M2k m(tk)
                    = ρ(tk+) + (M1k + M2k)m(tk).
Thus we have

    DH m ≤ N m,  t ≠ tk,
    m(tk+) ≤ Mk m(tk),  t = tk,
    m(0) = 0,

with N = N1 + N2 > 0 and Mk = M1k + M2k, 0 < Mk < 1 for each k. Using a special case of Theorem 1.4.1 in Lakshmikantham, Bainov and Simeonov [1], we obtain m(t) ≤ 0, that is, R ≤ ρ. Hence ρ = U = R is the unique solution of the ISDE (5.3.1).
Remark 5.3.1
(1) In Theorem 5.3.1, if G(t, Y) ≡ 0 and Jk(U) ≡ 0 for every k, then we get the result when F is nondecreasing in X for each t and Ik(U) is nondecreasing in U for every k.
(2) In (1) above, suppose that F is not nondecreasing in X for every t and, for every k, Ik(X) is not nondecreasing in X, but F̃(t, X) = F(t, X) + MX, M > 0, is nondecreasing in X and Ĩk(X) = Ik(X) + Nk X is nondecreasing in X, for Nk > 0.
Now we consider the IVP of the ISDE

    DH U + MU = F̃(t, U),  t ≠ tk,
    U(tk+) + Nk U(tk) = Ĩk(U(tk)),  t = tk,
    U(0) = U0.

Then we obtain the same conclusion as in (1). To see this, consider the transformation

    Ũ(t) = U(t)e^{Mt},  t ≠ tk,
    Ũ(t) = (1/(1 + Nk)) U(t),  t = tk;

then

    DH Ũ = F̃(t, Ũe^{−Mt})e^{Mt} = F0(t, Ũ),  t ≠ tk,
    Ũ(tk+) + Nk Ũ(tk) = [1 + Nk] Ĩk(Ũ(tk)/(1 + Nk)) = I°k(Ũ(tk)),  t = tk,
    Ũ(0) = U0.

For this system

    Ṽ(t) = V(t)e^{Mt},  t ≠ tk,   Ṽ(t) = (1 + Nk)V(t),  t = tk,

and

    W̃(t) = W(t)e^{Mt},  t ≠ tk,   W̃(t) = (1 + Nk)W(t),  t = tk,

are lower and upper solutions. Here we have assumed that DH Ũ exists.
(3) If F(t, X) ≡ 0 and Ik(X) ≡ 0, k = 1, 2, ..., in Theorem 5.3.1, then we obtain the result for G(t, Y) nonincreasing in Y for each t and Jk(Y) nonincreasing in Y for each k = 1, 2, ....
(4) If, in (3) above, G and Jk, for each k, are not monotone but
(i) there exist a function G̃(t, Y) which is nonincreasing in Y for each t ∈ J and a constant M > 0 such that G(t, Y) = MY + G̃(t, Y), that is, G̃(t, Y) = G(t, Y) − MY, and
(ii) there exist functions J̃k(Y) which are nonincreasing in Y for each k and constants Nk, 0 < Nk < 1, for each k, such that

    Jk(Y) = Nk Y + J̃k(Y),

then, using the transformation

    U(t) = Ũ(t)e^{Mt},  t ≠ tk,
    U(t) = (1/(1 − Nk)) Ũ(t),  t = tk,

we obtain

    DH Ũ = G0(t, Ũ),  t ≠ tk,
    Ũ(tk+) = J°k(Ũ(tk)),  t = tk,                                (5.3.21)
    Ũ(0) = U0,

where G0(t, Ũ) = G̃(t, Ũe^{Mt})e^{−Mt} and J°k(Ũ(tk)) = [1 − Nk][J̃k((1/(1 − Nk))Ũ(tk)) + Nk Ũ(tk)].
In this case we need to assume that (5.3.21) has coupled lower and upper solutions of type I, to get the same conclusion as in (3).
(5) Suppose that, in Theorem 5.3.1, G(t, Y) is nonincreasing in Y and F(t, X) is not monotone, but F̃(t, X) = F(t, X) + MX, M > 0, is nondecreasing in X. Further, suppose that Jk(Y) is nonincreasing in Y for each k and Ik(U) is not monotone, but Ĩk(U) = Ik(U) + Nk U, Nk > 0, is nondecreasing in U. Then consider the IVP of the ISDE

    DH U + MU = F̃(t, U) + G(t, U),  t ≠ tk,
    U(tk+) + Nk U(tk) = Ĩk(U(tk)) + Jk(U(tk)),  t = tk,          (5.3.22)
    U(0) = U0.

In this case also, we obtain the same conclusion as in Theorem 5.3.1 by utilizing the transformation used in (2).
(6) Suppose that F(t, X) is nondecreasing in X and Ik is nondecreasing for each k, but G(t, Y) is not monotone in Y for each t ∈ J and Jk is not nonincreasing for each k. Then we assume that there exist functions G̃(t, Y) and J̃k(Y) and constants M, Nk > 0 as in (4). Now, we consider the IVP

    DH U = F(t, U) + G̃(t, U) + MU,  t ≠ tk,
    U(tk+) = Ik(U(tk)) + J̃k(U(tk)) + Nk U(tk),  t = tk,          (5.3.23)
    U(0) = U0.

Then, using the transformation

    U(t) = Ũ(t)e^{Mt},  t ≠ tk,
    U(t) = (1/(1 − Nk)) Ũ(t),  t = tk,

we get

    DH Ũ = F0(t, Ũ) + G0(t, Ũ),  t ≠ tk,
    Ũ(tk+) = I°k(Ũ(tk)) + J°k(Ũ(tk)),  t = tk,                   (5.3.24)
    Ũ(0) = U0,

where, for t ≠ tk, F0(t, Ũ) = F(t, Ũe^{Mt})e^{−Mt}, G0(t, Ũ) = G̃(t, Ũe^{Mt})e^{−Mt}, J°k(Ũ(tk)) = [1 − Nk][J̃k((1/(1 − Nk))Ũ(tk)) + Nk Ũ(tk)], and I°k(Ũ(tk)) = Ik((1/(1 − Nk))Ũ(tk)).
If we assume that the system (5.3.24) has coupled lower and upper solutions of type I, then we get by Theorem 5.3.1 the same conclusion.
(7) If in Theorem 5.3.1 both F and G are not monotone and, for each k, Ik and Jk are not monotone, then we suppose that
(i) there exist functions F̃(t, U), G̃(t, U) and a constant M > 0 such that
F̃(t, U) + G̃(t, U) = F(t, U) + G(t, U) + MU,
where F̃(t, U) is nondecreasing in U and G̃(t, U) is nonincreasing in U;
(ii) there exist functions Ĩk and J̃k and constants Nk, with 0 < Nk < 1 for each k, such that
Ĩk(U) + J̃k(U) = Ik(U) + Jk(U) + Nk U,
where Ĩk is nondecreasing in U and J̃k is nonincreasing in U for each k.
Now, using the transformation
U(t) = Ũ(t)e^{Mt} for t ≠ tk, and U(t) = (1/(1 − Nk)) Ũ(t) for t = tk,
we get
DH Ũ = F0(t, Ũ) + G0(t, Ũ), t ≠ tk,
Ũ(tk+) = I°k(Ũ(tk)) + J°k(Ũ(tk)), t = tk,
Ũ(0) = U0.
Assuming that the above ISDE has coupled lower and upper solutions of type I, we obtain the conclusion of Theorem 5.3.1.
Next, we try to utilize the coupled lower and upper solution of type II in
our study. In this case, we need not assume the existence of coupled lower and
upper solutions of type II of (5.3.1) as we can construct them under the given
assumptions. But this leads to assumptions on second iterates. Further, we get
complicated alternative sequences which are monotone.
Theorem 5.3.2 Assume that (A2) and (A3) of Theorem 5.3.1 hold. Then for
any solution U (t) of (5.3.1) with V0 ≤ U ≤ W0, t ≥ 0, we have the iterates
{Vn }, {Wn} satisfying
V0 ≤ V2 ≤ · · · ≤ V2n ≤ U ≤ V2n+1 ≤ · · · ≤ V3 ≤ V1 on R+
(5.3.25)
W1 ≤ W3 ≤ · · · ≤ W2n+1 ≤ U ≤ W2n ≤ · · · ≤ W2 ≤ W0 on R+ ,
(5.3.26)
provided V0 ≤ V2 , W2 ≤ W0 on J, where the iterative schemes are given by
DH Vn+1 = F(t, Wn) + G(t, Vn), t ≠ tk,
Vn+1(tk+) = Ik(Wn(tk)) + Jk(Vn(tk)), t = tk,      (5.3.27)
Vn+1(0) = U0,
and
DH Wn+1 = F(t, Vn) + G(t, Wn), t ≠ tk,
Wn+1(tk+) = Ik(Vn(tk)) + Jk(Wn(tk)), t = tk,      (5.3.28)
Wn+1(0) = U0 on J.
Moreover, the monotone sequences {V2n}, {V2n+1}, {W2n}, {W2n+1} in Kc(Rn) converge to ρ, R, ρ∗, R∗ in Kc(Rn), respectively, and verify
DH R = F(t, R∗) + G(t, ρ), t ≠ tk,
R(tk+) = Ik(R∗(tk)) + Jk(ρ(tk)), t = tk,
R(0) = U0;

DH ρ = F(t, ρ∗) + G(t, R), t ≠ tk,
ρ(tk+) = Ik(ρ∗(tk)) + Jk(R(tk)), t = tk,
ρ(0) = U0;

DH R∗ = F(t, R) + G(t, ρ∗), t ≠ tk,
R∗(tk+) = Ik(R(tk)) + Jk(ρ∗(tk)), t = tk,
R∗(0) = U0;

and

DH ρ∗ = F(t, ρ) + G(t, R∗), t ≠ tk,
ρ∗(tk+) = Ik(ρ(tk)) + Jk(R∗(tk)), t = tk,
ρ∗(0) = U0,
respectively on J.
Proof First, we prove the existence of coupled lower and upper solutions V0 , W0
of type II, satisfying V0 (t) ≤ W0 (t), t ∈ J.
To achieve this, consider
DH Z = F(t, θ) + G(t, θ), t ≠ tk,
Z(tk+) = Ik(θ) + Jk(θ), t = tk,
Z(0) = U0.
Let Z(t) be the unique solution which exists on J. Define V0 and W0 by
R0 + V0 = Z and W0 = Z + R0,
where the positive vector R0 = (R01, · · · , R0n) is chosen sufficiently large so
that we have V0 ≤ θ ≤ W0 on J.
Next, using the monotone character of F, G, Ik and Jk, for each k, we get, for t ≠ tk,
DH V0 = DH Z = F(t, θ) + G(t, θ) ≤ F(t, W0) + G(t, V0),
and
V0(tk+) ≤ Z(tk+) = Ik(θ) + Jk(θ) ≤ Ik(W0(tk)) + Jk(V0(tk)),
and V0(0) ≤ U0.
Similarly,
DH W0 ≥ F(t, V0) + G(t, W0), t ≠ tk,
W0(tk+) ≥ Ik(V0(tk)) + Jk(W0(tk)), t = tk,
W0(0) ≥ U0.
Thus V0 and W0 are coupled lower and upper solutions of type II of (5.3.1).
Let U (t) be any solution of (5.3.1) such that V0 ≤ U ≤ W0 on J. We prove
that,
V0 ≤ V2 ≤ U ≤ V3 ≤ V1 and W1 ≤ W3 ≤ U ≤ W2 ≤ W0 on J.
(5.3.29)
The monotonicity of F and G, Ik and Jk along with the facts V0 ≤ U ≤ W0
and U is a solution of (5.3.1) gives,
DH U = F(t, U) + G(t, U) ≤ F(t, W0) + G(t, V0), t ≠ tk,
U(tk+) = Ik(U(tk)) + Jk(U(tk)) ≤ Ik(W0(tk)) + Jk(V0(tk)), t = tk,
U(0) ≤ U0.
The relations (5.3.27) for n = 0 are
DH V1 = F(t, W0) + G(t, V0), t ≠ tk,
V1(tk+) = Ik(W0(tk)) + Jk(V0(tk)), t = tk,
V1(0) = U0.
On using Corollary 5.2.1, for U and V1 , we get,
U ≤ V1 on J.
Again, setting n = 1 in (5.3.27), we get
DH V2 = F(t, W1) + G(t, V1) ≤ F(t, U) + G(t, U), t ≠ tk,
V2(tk+) = Ik(W1(tk)) + Jk(V1(tk)) ≤ Ik(U(tk)) + Jk(U(tk)), t = tk,
V2(0) = U0.
Now, since U is a solution of (5.3.1), the above differential inequalities of V2 ,
along with Corollary 5.2.1, imply V2 ≤ U on J. Thus we have V2 ≤ U ≤ V1 on
J.
Similarly , we can show that U ≤ W2 on J.
Further, using the fact, V0 ≤ V2 and W2 ≤ W0 on J along with the properties
of F, G, Ik and Jk for each k, we have,
DH V3 = F(t, W2) + G(t, V2) ≤ F(t, W0) + G(t, V0), t ≠ tk,
V3(tk+) = Ik(W2(tk)) + Jk(V2(tk)) ≤ Ik(W0(tk)) + Jk(V0(tk)), t = tk,
V3(0) = U0.
Now considering equations (5.3.27) with n = 0 and using the Corollary 5.2.1,
we get V3 ≤ V1 on J. In a similar fashion, we can show that W1 ≤ W3 on J.
Also, we get U ≤ V3 and W3 ≤ U on J, thus proving the relations (5.3.29).
Now assume for some n > 2, the inequalities,
V2n−4 ≤ V2n−2 ≤ U ≤ V2n−1 ≤ V2n−3,
(5.3.30)
W2n−3 ≤ W2n−1 ≤ U ≤ W2n−2 ≤ W2n−4,
(5.3.31)
hold on J. We claim that
V2n−2 ≤ V2n ≤ U ≤ V2n+1 ≤ V2n−1,
(5.3.32)
W2n−1 ≤ W2n+1 ≤ U ≤ W2n ≤ W2n−2, on J.
(5.3.33)
Taking n = 2n − 1 in the equations (5.3.27) and using the relations (5.3.30), (5.3.31) and the monotone character of F, G, Ik and Jk for each k yields
DH V2n = F(t, W2n−1) + G(t, V2n−1) ≤ F(t, U) + G(t, U), t ≠ tk,
V2n(tk+) ≤ Ik(W2n−1(tk)) + Jk(V2n−1(tk)) ≤ Ik(U(tk)) + Jk(U(tk)), t = tk,
V2n(0) ≤ U0.
Since U is a solution of (5.3.1), U satisfies the reverse inequalities.
Now arguing as in the proof of Theorem 2.5.1, we get V2n ≤ U on J.
We now show W2n ≤ W2n−2. Setting n = 2n − 1 in relations (5.3.28), and
using the hypothesis,
DH W2n = F (t, V2n−1) + G(t, W2n−1)
≤ F (t, V2n−3) + G(t, W2n−3),
t 6= tk ,
W2n (t+
k)
= Ik (V2n−1(tk )) + Jk (W2n−1(tk ))
≤ Ik (V2n−3(tk )) + Jk (W2n−3(tk )), t = tk ,
W2n(0) = U0 .
Observing that by setting n = 2n − 3 in (5.3.28), we get the equalities
(reverse inequalities) of the above relations with W2n replaced by W2n−2. Corollary 5.2.1 then gives W2n ≤ W2n−2 on J.
The proofs of the remaining relations are an exact repetition of the above discussion and are therefore omitted. Thus we conclude the relations (5.3.32) and
(5.3.33). By induction on n, we have the conclusion of the relations (5.3.25) and
(5.3.26).
Since Vn , Wn ∈ P C 1[J, Kc (Rn)] for all n, reasoning as in Theorem 5.3.1, we
get
lim_{n→∞} V2n = ρ,  lim_{n→∞} V2n+1 = R,  lim_{n→∞} W2n+1 = ρ∗,  and  lim_{n→∞} W2n = R∗
exist over each subinterval [tk , tk+1], and the convergence is uniform in each
subinterval.
Further, for each t = tk,
lim_{n→∞} V2n(tk+) = Ik(lim_{n→∞} W2n−1(tk)) + Jk(lim_{n→∞} V2n−1(tk)),
which implies ρ(tk+) = Ik(ρ∗(tk)) + Jk(R(tk)).
With a similar reasoning, we observe that the functions ρ∗ , R, R∗, satisfy
their corresponding impulse conditions, at t = tk . Also, using the integral
representation for the differential equations in (5.3.27) and (5.3.28) suitably,
we obtain that ρ, ρ∗ , R, R∗ satisfy their corresponding impulsive set differential
equations, given in the statement of the theorem.
Also from (5.3.25) and (5.3.26), we get
ρ ≤ U ≤ R and ρ∗ ≤ U ≤ R∗ , on J.
Thus the proof is complete.
Corollary 5.3.2 Assume that the hypotheses of Theorem 5.3.2 hold. Further, suppose that F, G, Ik and Jk satisfy the hypotheses of Corollary 5.3.1. Then
ρ = ρ∗ = R = R∗ = U is the unique solution of the ISDE (5.3.1).
Proof Since ρ ≤ R and ρ∗ ≤ R∗ , let q1 + ρ = R and q2 + ρ∗ = R∗ . Then
considering DH (q1 + q2) and using the hypothesis, we get
DH(q1 + q2) ≤ (N1 + N2)(q1 + q2), t ≠ tk,
q1(tk+) + q2(tk+) ≤ (M1k + M2k)(q1 + q2), t = tk,
(q1 + q2)(0) = 0.
Using Theorem 1.4.1 in Lakshmikantham, Bainov, Simeonov [1] in this context,
since N1 + N2 ≥ 0 and 0 < M1k + M2k < 1, we get
(q1 + q2)(t) ≤ 0,
t ∈ J,
which means R + R∗ ≤ ρ + ρ∗ ≤ R + R∗. This gives U = ρ = R = ρ∗ = R∗,
and hence the solution is unique.
Remark 5.3.2 Corresponding to Remark 5.3.1, we can make similar remarks following from Theorem 5.3.2. To avoid monotony we do not list them.
Remark 5.3.3 The impulsive set differential equation (5.3.1) reduces to an ordinary impulsive differential equation if F, G, Ik, Jk are all single-valued mappings. In this case, Theorems 5.3.1 and 5.3.2, along with the remarks, give rise to many new results in the theory of impulsive differential equations.
5.4 Set Differential Equations with Delay
In ordinary differential-difference equations or, more generally, in differential equations with delay, the history exerts its influence in a significant way on the future of solutions. There are several applications in which the future depends on the past history (finite or infinite). This area of differential equations with delay, usually known as functional differential equations, has been investigated as an independent subject and is very interesting.
In this section, we shall incorporate delay in the formulation of set differential
equations and provide some basic results of interest. We start by describing the
set differential equation with delay.
Given any τ > 0, consider C0 = C[[−τ, 0], Kc(Rn)]. For any Φ, Ψ ∈ C0 ,
define the metric
D0[Φ, Ψ] = max_{−τ ≤ s ≤ 0} D[Φ(s), Ψ(s)].
Also, we write ‖Φ‖0 = D0[Φ, θ].
Suppose that J0 = [t0 − τ, t0 + a], a > 0. Let U ∈ C[J0, Kc (Rn )]. For any
t ≥ t0, t ∈ J0 , let Ut denote a translation of the restriction of U to the interval
[t − τ, t]. That is, Ut ∈ C0 is defined by Ut (s) = U (t + s), −τ ≤ s ≤ 0.
Consider the set differential equation with finite delay given by
DH U = F (t, Ut),
Ut0 = Φ0 ∈ C0
(5.4.1)
where F ∈ C[J × C0, Kc (Rn )] and J = [t0, t0 + a].
The following existence result is obtained using the contraction principle.
Theorem 5.4.1 Assume that
D[F (t, Φ), F (t, Ψ)] ≤ KD0 [Φ, Ψ],
K >0
(5.4.2)
for t ∈ J, Φ, Ψ ∈ C0 . Then the IVP (5.4.1) possesses a unique solution U (t)
on J0.
Proof Consider the set of functions U ∈ C[J0, Kc(Rn )] such that U (t) =
Φ0 (t), t0 − τ ≤ t ≤ t0 and U ∈ C[J, Kc(Rn )] with U (t0 ) = Φ0 (0) with
Φ0 (t) ∈ Kc (Rn ), − τ ≤ t ≤ 0.
Define the metric on C[J0, Kc(Rn)] by
D1(U, V) = max_{t0 − τ ≤ t ≤ t0 + a} D[U(t), V(t)]e^{−λt},      (5.4.3)
where λ > 0 is chosen suitably later.
Next, define the operator T on C[J0, Kc(Rn)] by
TU(t) = Φ0(t), t0 − τ ≤ t ≤ t0,
TU(t) = Φ0(0) + ∫_{t0}^{t} F(s, Us) ds, t ∈ J.      (5.4.4)
Then, for −τ ≤ s ≤ 0,
D[TU(t0 + s), TV(t0 + s)] = D[Φ0(t0 + s), Φ0(t0 + s)] = 0.
We get, using the properties of the Hausdorff metric (1.3.9) and (1.7.11), for t ∈ J,
D[TU(t), TV(t)] = D[Φ0(0) + ∫_{t0}^{t} F(ξ, Uξ) dξ, Φ0(0) + ∫_{t0}^{t} F(ξ, Vξ) dξ]
= D[∫_{t0}^{t} F(ξ, Uξ) dξ, ∫_{t0}^{t} F(ξ, Vξ) dξ]
≤ ∫_{t0}^{t} D[F(ξ, Uξ), F(ξ, Vξ)] dξ
≤ K ∫_{t0}^{t} D0[Uξ, Vξ] dξ.
Consider
∫_{t0}^{t} D[U(ξ + s), V(ξ + s)] dξ ≤ ∫_{t0−τ}^{t} D[U(σ), V(σ)] dσ
≤ ∫_{t0−τ}^{t} max_{t0−τ ≤ σ ≤ t0+a} [D[U(σ), V(σ)]e^{−λσ}] e^{λσ} dσ
= D1[U, V] ∫_{t0−τ}^{t} e^{λσ} dσ
= D1[U, V] (1/λ)[e^{λt} − e^{λ(t0−τ)}]
≤ (1/λ) D1[U, V] e^{λt}.
Thus we obtain
K ∫_{t0}^{t} D0[Uξ, Vξ] dξ ≤ (K/λ) D1[U, V] e^{λt},
which implies that, on J0,
e^{−λt} D[TU(t), TV(t)] ≤ (K/λ) D1[U, V].
Choosing λ = 2K and taking the maximum over t on J0, we have
D1[TU, TV] ≤ (1/2) D1[U, V],
which means that the operator T on C[J0, Kc(Rn )] is a contraction. Thus there
exists a unique fixed point U ∈ C[J0, Kc (Rn )] of T by the contraction principle.
Hence U (t) = U (t0 , Φ0)(t) is the unique solution of the IVP (5.4.1).
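To make the fixed-point construction concrete, here is a minimal numerical sketch (ours, not the authors') of the Picard operator T in (5.4.4), applied to the interval-valued delay equation DH U = −U(t − τ) on Kc(R); the delay, step size, initial function and tolerance are illustrative assumptions.

```python
import numpy as np

# Sketch of the operator T in (5.4.4) for DH U = -U(t - tau) on intervals of
# Kc(R).  Intervals are stored as pairs [lower, upper]; -[u1,u2] = [-u2,-u1].
tau, a, h = 1.0, 2.0, 0.01
t = np.arange(-tau, a + h, h)          # grid for J0 = [t0 - tau, t0 + a], t0 = 0
n0 = int(round(tau / h))               # index of t0 = 0

phi0 = np.stack([np.cos(t[:n0 + 1]) - 0.1,      # illustrative initial function
                 np.cos(t[:n0 + 1]) + 0.1], 1)  # on [-tau, 0]

def F(k, U):
    """F(t, U_t) = -U(t - tau) as an interval."""
    lo, hi = U[k - n0]
    return np.array([-hi, -lo])

def picard(U):
    """(TU)(t) = Phi0(0) + int_{t0}^{t} F(s, U_s) ds (left Riemann sums)."""
    V = U.copy()
    V[:n0 + 1] = phi0
    acc = np.zeros(2)
    for k in range(n0, len(t) - 1):
        acc = acc + h * F(k, U)
        V[k + 1] = phi0[-1] + acc
    return V

def dist(U, V):
    """Sup over the grid of the Hausdorff distance between interval values."""
    return np.max(np.abs(U - V))

U = np.tile(phi0[-1], (len(t), 1))     # start from the constant function Phi0(0)
U[:n0 + 1] = phi0
for i in range(60):
    U_new = picard(U)
    if dist(U_new, U) < 1e-10:
        break
    U = U_new
print(i, dist(picard(U), U))           # small residual: approximate fixed point
```

Because T is a contraction in the weighted metric D1, the iterates converge to the unique solution regardless of the interval-valued starting guess.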
We now prove a comparison theorem in this context, which is a useful tool
in proving the global existence theorem.
Theorem 5.4.2 Assume that F ∈ C[R+ × C0, Kc(Rn)] and D[F(t, Φ), F(t, Ψ)] ≤ g(t, D0[Φ, Ψ]) for t ∈ R+, Φ, Ψ ∈ C0, where g ∈ C[R²+, R+]. Let r(t) = r(t, t0, w0) be the maximal solution of
w′ = g(t, w), w(t0) = w0 ≥ 0,
existing for t ≥ t0.
Then D0 [Φ0, Ψ0] ≤ w0 implies
D[U (t), V (t)] ≤ r(t),
t ≥ t0 ,
where U (t) = U (t0 , Φ0)(t) and V (t) = V (t0 , Ψ0)(t) are the solutions of (5.4.1).
Proof Since U (t), V (t) are solutions of (5.4.1) for small h > 0, the differences
U (t + h) − U (t), V (t + h) − V (t) exist. Now for t ∈ R+ , set m(t) = D[U (t), V (t)].
Then, using the properties of the Hausdorff metric, (1.3.8) and (1.3.9), we have
m(t + h) − m(t) = D[U(t + h), V(t + h)] − D[U(t), V(t)]
≤ D[U(t + h), U(t) + hF(t, Ut)] + D[U(t) + hF(t, Ut), V(t) + hF(t, Vt)] + D[V(t) + hF(t, Vt), V(t + h)] − D[U(t), V(t)]
≤ D[U(t + h), U(t) + hF(t, Ut)] + D[V(t) + hF(t, Vt), V(t + h)] + hD[F(t, Ut), F(t, Vt)],
from which we get, using (1.3.8) and (1.3.9) again,
[m(t + h) − m(t)]/h ≤ D[(U(t + h) − U(t))/h, F(t, Ut)] + D[F(t, Vt), (V(t + h) − V(t))/h] + D[F(t, Ut), F(t, Vt)].
Taking the limit supremum as h → 0+ gives
D+m(t) = lim sup_{h→0+} [m(t + h) − m(t)]/h ≤ D[F(t, Ut), F(t, Vt)] ≤ g(t, D0[Ut, Vt]) = g(t, |mt|0).
The above inequality, along with the fact that |mt0 |0 = D0 [Φ0, Ψ0] ≤ w0, implies
from the comparison theorem for ordinary delay differential equations (Lakshmikantham and Leela [1]), that
D[U (t), V (t)] ≤ r(t),
t ≥ t0 .
We are now ready to prove the global existence theorem.
Theorem 5.4.3 Let F ∈ C[R+ × C0, Kc(Rn)] and, for (t, Φ) ∈ R+ × C0,
D[F(t, Φ), θ] ≤ g(t, D0[Φ, θ]),
where g ∈ C[R²+, R+] and g(t, w) is nondecreasing in w for each t ∈ R+. Assume that the solutions w(t, t0, w0) of w′ = g(t, w), w(t0) = w0, exist for t ≥ t0, and that F is smooth enough to assure local existence. Then the largest interval of existence of any solution U(t0, Φ0)(t) of (5.4.1) is [t0, ∞).
Proof Suppose that U (t0, Φ0)(t) is a solution of (5.4.1) existing on some interval [t0 −τ, β), where t0 < β < ∞. Assume that β cannot be increased. Define
for t ∈ [t0 − τ, β),
m(t) = D[U (t0, Φ0)(t), θ]
mt = D[Ut (t0 , Φ0), θ] and
|mt|0 = D0 [Ut(t0 , Φ0), θ]
Then reasoning and proceeding as in the comparison theorem, we get the differential inequality
D+ m(t) ≤ g(t, |mt |0),
t0 ≤ t < β.
Choosing |mt0 |0 = D0 [Φ0, θ] ≤ w0, we arrive at
D[U (t0 , Φ0)(t), θ] ≤ r(t, t0, w0),
t0 ≤ t < β.
Now g(t, w) ≥ 0 implies that r(t, t0, w0) is nondecreasing in t, which further
yields
D[Ut (t0, Φ0), θ] ≤ r(t, t0, w0), t0 ≤ t < β.
(5.4.5)
Consider t1, t2 such that t0 < t1 < t2 < β; then, using (1.3.8) and (1.7.11), we get
D[U(t0, Φ0)(t1), U(t0, Φ0)(t2)] = D[U(t0, Φ0)(t1), U(t0, Φ0)(t1) + ∫_{t1}^{t2} F(s, Us) ds]
= D[θ, ∫_{t1}^{t2} F(s, Us) ds]
≤ ∫_{t1}^{t2} D[F(s, Us), θ] ds
≤ ∫_{t1}^{t2} g(s, D0[Us(t0, Φ0), θ]) ds.
Next, using the fact that g is monotonically nondecreasing in w and the relation (5.4.5) in the above inequality, we obtain
D[U(t0, Φ0)(t1), U(t0, Φ0)(t2)] ≤ ∫_{t1}^{t2} g(s, r(s, t0, w0)) ds = r(t2, t0, w0) − r(t1, t0, w0).      (5.4.6)
If we let t1, t2 → β in the above relation (5.4.6), then limt→β− U (t0 , Φ0)(t)
exists, because of Cauchy’s criterion for convergence.
We now define U (t0, Φ0)(β) = limt→β− U (t0, Φ0)(t) and consider Ψ0 =
Uβ (t0 , Φ0)
as the new initial function at t = β. The assumption of local existence
implies that there exists a solution U (β, Ψ0 )(t) of (5.4.1) on [β −τ, β +α], α > 0.
This means that the solution U (t0 , Φ0)(t) can be continued beyond β, which is
contrary to our assumption that the value of β cannot be increased. Hence the
theorem.
Next, we present a result on nonuniform practical stability of (5.4.1) using perturbing Lyapunov functions. See Lakshmikantham, Leela and Martynyuk [1] for details.
Before proceeding further, we need the following classes of functions:
K = {a ∈ C[[0, A], R+] : a(0) = 0 and a(u) is strictly increasing},
CK = {σ ∈ C[R+ × [0, A], R+] : σ(t, ·) ∈ K for each t ∈ R+}.
We now define practical stability in this context.
Definition 5.4.1 The system (5.4.1) is said to be practically stable if, given (λ, A) with 0 < λ < A, D0[Φ0, θ] < λ implies D[U(t), θ] < A, t ≥ t0, for some t0 ∈ R+.
We set
S(A) = { U ∈ Kc (Rn) : D[U, θ] < A}
and
Ω(A) = {Φ ∈ C0 : D0 [Φ, θ] < A}.
We are now in a position to prove the following result on practical stability.
Theorem 5.4.4 Assume that (i) 0 < λ < A ;
(ii) V1 ∈ C[ R+ × S(A) × Ω(A), R+ ],
for (t, U1, Φ), (t, U2, Φ) ∈ R+ × S(A) × Ω(A)
|V1 (t, U1, Φ) − V1(t, U2 , Φ)| ≤ L1 D[U1 , U2], L1 > 0;
for each (t, U, Φ) ∈ R+ × S(A) × Ω(A),
V1 (t, U, Φ) ≤ a1 (t, D0[Φ, θ]), a1 ∈ CK,
and
D+V1(t, U, Φ) ≡ lim sup_{h→0+} (1/h)[V1(t + h, U + hF(t, Ut), Ut+h) − V1(t, U, Ut)] ≤ g1(t, V1(t, U, Φ)),
where g1 ∈ C[R²+, R];
(iii) V2 ∈ C[ R+ × S(A) × Ω(A), R+ ],
for (t, U1, Φ), (t, U2, Φ) ∈ R+ × S(A) × Ω(A),
|V2(t, U1 , Φ) − V2 (t, U2 , Φ)| ≤ L2 D[U1 , U2], L2 > 0;
for each (t, U, Φ) ∈ R+ × S(A) × Ω(A),
b(D[U, θ]) ≤ V2(t, U, Φ) ≤ a2(D0[Φ, θ]),
D+V1(t, U, Φ) + D+V2(t, U, Φ) ≤ g2(t, V1(t, U, Φ) + V2(t, U, Φ)),
where a2, b ∈ K and g2 ∈ C[R²+, R];
(iv) a1(t0 , λ) + a2 (λ) < b(A) for some t0 ∈ R+ ;
(v) u0 < a1(t0, λ) implies u(t, t0, u0) < a1(t0, λ) for t ≥ t0, where u(t, t0, u0) is any solution of
u′ = g1(t, u), u(t0) = u0,      (5.4.7)
and v0 < a1(t0, λ) + a2(λ) implies
v(t, t0, v0) < b(A), t ≥ t0,
for every t0 ∈ R+, where v(t, t0, v0) is any solution of
v′ = g2(t, v), v(t0) = v0 ≥ 0.      (5.4.8)
Then the system (5.4.1) is practically stable.
Proof We have to prove that, given 0 < λ < A, D0[Φ0, θ] < λ implies D[U(t), θ] < A, t ≥ t0, where U(t) = U(t0, Φ0)(t) is any solution of (5.4.1). Suppose this is not true; then there exist t2 > t1 > t0 and a solution U(t0, Φ0)(t) of (5.4.1) such that
D[U(t1), θ] = λ and D[U(t2), θ] = A      (5.4.9)
and λ ≤ D[U(t), θ] ≤ A for t1 ≤ t ≤ t2. Now, using the hypotheses (iii) and (v) and the standard arguments, we get
V1(t, U(t0, Φ0)(t), Ut(t0, Φ0)) + V2(t, U(t0, Φ0)(t), Ut(t0, Φ0)) ≤ r2(t, t1, V1(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)) + V2(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)))      (5.4.10)
for t1 ≤ t ≤ t2, where r2(t, t1, v0) is the maximal solution of (5.4.8) through (t1, v0), and v0 = V1(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)) + V2(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)).
Similarly, condition (ii) gives the estimate
V1 (t, U (t0, Φ0)(t), Ut (t0 , Φ0)) ≤ r1(t, t0, V1(t0 , Φ0(0), Φ0)), t0 ≤ t ≤ t1 ,
where r1 (t, t0, u0) is the maximal solution of (5.4.7) with u0 = V1 (t0 , Φ0(0), Φ0).
Since D0 [Φ0, θ] < λ, using hypothesis (ii)
V1 (t0 , Φ0(0), Φ0) ≤ a1(t0 , D0[Φ0 , θ]) ≤ a1(t0 , λ).
Also, we have
V2 (t1 , U (t0, Φ0)(t1 ), Ut1 (t0 , Φ0)) ≤ a2 (D0 [Ut1 , θ]) ≤ a2(λ).
Thus we get
V1 (t1, U (t0, Φ0)(t1 ), Ut1 (t0 , Φ0)) + V2 (t1, U (t0 , Φ0)(t1), Ut1 (t0 , Φ0))
≤ a1(t0 , λ) + a2 (λ).
Now using the relation (5.4.10) and the hypothesis (v), we obtain
V1 (t2 , U (t0, Φ0)(t2 ), Ut2 (t0 , Φ0)) + V2 (t2, U (t0, Φ0)(t2 ), Ut2 (t0 , Φ0))
≤ r2 (t2, t1 , a1(t0, λ) + a2(λ)) < b(A).
(5.4.11)
However, using the relation (5.4.9) and the hypothesis (ii) and (iii), we get
V1 (t2, U (t0, Φ0)(t2 ), Ut2 (t0 , Φ0)) + V2 (t2, U (t0 , Φ0)(t2), Ut2 (t0 , Φ0))
≥ V2(t2 , U (t0, Φ0)(t2 ), Ut2 (t0 , Φ0))
≥ b(D[U (t2), θ]) = b(A),
which contradicts (5.4.11). Thus the proof of our claim.
We next study the nonuniform boundedness property for the system (5.4.1).
We define the concept of boundedness as follows.
Definition 5.4.2 The differential system (5.4.1) is said to be
(1) Equibounded, if for any α > 0 and t0 ∈ R+ , there exists a β where β =
β(t0 , α) > 0 such that
D0 [Φ0, θ] < α implies D[U (t), θ] < β, t ≥ t0 ,
where U (t) = U (t0 , Φ0)(t) is any solution of (5.4.1).
(2) Uniformly bounded, if β in (1) does not depend on t0.
The following theorem uses the method of perturbing Lyapunov functions to obtain the nonuniform boundedness property. For that purpose, set
S(ρ) = {U ∈ Kc(Rn) : D[U, θ] < ρ} and S̃(ρ) = {Φ ∈ C : D0[Φ, θ] < ρ}.
Theorem 5.4.5 Assume that
(i) ρ > 0, V1 ∈ C[R+ × S(ρ) × S̃(ρ), R+], V1 is bounded for (t, U, Φ) ∈ R+ × ∂S(ρ) × ∂S̃(ρ),
|V1(t, U1, Φ) − V1(t, U2, Φ)| ≤ L1 D[U1, U2], L1 > 0,
and, for (t, U, Φ) ∈ R+ × S^c(ρ) × S̃^c(ρ),
D+V1(t, U, Φ) ≡ lim sup_{h→0+} (1/h)[V1(t + h, U + hF(t, Ut), Ut+h) − V1(t, U, Ut)] ≤ g1(t, V1(t, U, Φ)),
where g1 ∈ C[R²+, R];
(ii) V2 ∈ C[R+ × S^c(ρ) × S̃^c(ρ), R+],
b(D[U, θ]) ≤ V2(t, U, Φ) ≤ a(D0[Φ, θ]),
D+V1(t, U, Φ) + D+V2(t, U, Φ) ≤ g2(t, V1(t, U, Φ) + V2(t, U, Φ)),
where a, b ∈ K and g2 ∈ C[R²+, R];
(iii) the scalar differential equations
w1′ = g1(t, w1), w1(t0) = w10 ≥ 0,      (5.4.12)
w2′ = g2(t, w2), w2(t0) = w20 ≥ 0,      (5.4.13)
are equibounded and uniformly bounded, respectively.
Then the system (5.4.1) is equibounded.
Proof Let B1 > ρ and t0 ∈ R+ be given. Let
α0 = max{V1(t0, U0, Φ0) : U0 = Φ0(0) ∈ cl{S(B1) ∩ S^c(ρ)}, Φ0 ∈ cl{S̃(B1) ∩ S̃^c(ρ)}},
α∗ ≥ V1(t, U, Φ) for (t, U, Φ) ∈ R+ × ∂S(ρ) × ∂S̃(ρ),
and set α1 = α1(t0, B1) = max(α0, α∗).
Since the scalar differential equation (5.4.12) is equibounded, given α1 > 0
and t0 ∈ R+ , there exists a β0 = β0 (t0 , α1) such that
w1(t, t0, w0) < β0 ,
t ≥ t0 ,
(5.4.14)
provided w10 < α1, where w1(t, t0, w10) is any solution of (5.4.12).
Let α2 = a(B1 )+β0 . Then the uniform boundedness of the equation (5.4.13)
yields
w2 (t, t0, w20) < β1 (α2), t ≥ t0
(5.4.15)
provided w20 < α2, where w2(t, t0, w20) is any solution of (5.4.13).
Choose B2 satisfying
b(B2 ) > β1 (α2).
(5.4.16)
We now claim that Φ0 ∈ S̃(B1) implies that U(t) ∈ S(B2) for t ≥ t0, where U(t) = U(t0, Φ0)(t) is any solution of (5.4.1).
If this is not true, there exists a solution U(t0, Φ0)(t) of (5.4.1) with Φ0 ∈ S̃(B1) such that, for some t∗ > t0, D[U(t0, Φ0)(t∗), θ] = B2. Since B1 > ρ, there are two possibilities to consider:
(i) U(t0, Φ0)(t) ∈ S^c(ρ) for t ∈ [t0, t∗];
(ii) there exists a t̄ ≥ t0 such that U(t0, Φ0)(t̄) ∈ ∂S(ρ) and U(t0, Φ0)(t) ∈ S^c(ρ) for t ∈ [t̄, t∗].
If (i) holds, we can find a t1 > t0 such that:
U (t0, Φ0)(t1 ) ∈ ∂S(B1 ),
U (t0 , Φ0)(t∗ ) ∈ ∂S(B2 ),
U (t0 , Φ0)(t) ∈ S c (B1 ), t ∈ [t1, t∗ ].
(5.4.17)
Setting m(t) = V1(t, U(t0, Φ0)(t), Ut(t0, Φ0)) + V2(t, U(t0, Φ0)(t), Ut(t0, Φ0)) for t ∈ [t1, t∗] and using the standard arguments, we obtain the differential inequality
D+m(t) ≤ g2(t, m(t)), t ∈ [t1, t∗].
It then follows from the Comparison Theorem 1.4.1 of Lakshmikantham and Leela [1] that
m(t) ≤ r2(t, t1, m(t1)), t ∈ [t1, t∗],
where r2(t, t1, w20) is the maximal solution of (5.4.13) with
r2(t1, t1, w20) = w20 = V1(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)) + V2(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)).
Thus,
V1(t∗, U(t0, Φ0)(t∗), Ut∗(t0, Φ0)) + V2(t∗, U(t0, Φ0)(t∗), Ut∗(t0, Φ0)) ≤ r2(t∗, t1, w20).      (5.4.18)
Similarly, we get
V1(t, U(t0, Φ0)(t), Ut(t0, Φ0)) ≤ r1(t, t0, V1(t0, U(t0, Φ0)(t0), Ut0(t0, Φ0))), t0 ≤ t ≤ t1,      (5.4.19)
where r1(t, t0, u0) is the maximal solution of (5.4.12) with
u0 = V1(t0, U(t0, Φ0)(t0), Ut0(t0, Φ0)) = V1(t0, Φ0(0), Φ0).
Setting w10 = V1(t0, Φ0(0), Φ0) < α1 and using the relation (5.4.14), we get
V1(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)) ≤ r1(t1, t0, w10) ≤ β0.
Furthermore, V2(t1, U(t0, Φ0)(t1), Ut1(t0, Φ0)) ≤ a(B1), and we have w20 ≤ β0 + a(B1) = α2.
Now combining (5.4.15), (5.4.16),(5.4.17) we have
b(B2 ) ≤ m(t∗ ) ≤ r(t∗ ) ≤ β1 (α2) < b(B2 ),
(5.4.20)
which is a contradiction.
If case (ii) holds, we also arrive at the inequality (5.4.18), where t1 > t̄ satisfies (5.4.17). We then obtain, in place of (5.4.19), the relation
V1(t1, U(t0, Φ0)(t1), Φt1) ≤ r1(t1, t̄, V1(t̄, U(t0, Φ0)(t̄), Φt̄)),
since U(t0, Φ0)(t̄) ∈ ∂S(ρ) and
V1(t̄, U(t0, Φ0)(t̄), Φt̄) ≤ α∗ ≤ α1;
arguing as before, we get a contradiction.
This proves that, for any given B1 > ρ and t0 > 0, there exists a B2 such that Φ0 ∈ S̃(B1) implies U(t0, Φ0)(t) ∈ S(B2), t ≥ t0. For the case B1 < ρ, we take B2(t0, B1) = B2(t0, ρ), and hence the proof.
5.5 Impulsive Set Differential Equations with Delay
In this section we establish basic results in the theory of impulsive set differential
equations with delay.
Consider the impulsive set differential equation with delay
DH U = F(t, Ut), t ≠ tk,
Utk+ = Ik(Utk), t = tk,      (5.5.1)
Ut0 = Φ0 ∈ C,
where F ∈ PC[R+ × C, Kc(Rn)], Ik : C → C with C = C[[−τ, 0], Kc(Rn)], and {tk} is a sequence of points such that 0 ≤ t0 < t1 < · · · < tk < · · · with lim_{k→∞} tk = ∞.
By a solution of (5.5.1) we mean a piecewise continuous function U(t0, Φ0)(t) on [t0, ∞), left continuous on each (tk, tk+1], defined by
U(t0, Φ0)(t) = Φ0, t0 − τ ≤ t ≤ t0,
U(t0, Φ0)(t) = U0(t0, Φ0)(t), t0 ≤ t ≤ t1,
U(t0, Φ0)(t) = U1(t1, Φ1)(t), t1 < t ≤ t2,
. . .
U(t0, Φ0)(t) = Uk(tk, Φk)(t), tk < t ≤ tk+1,      (5.5.2)
. . .
where Uk(tk, Φk)(t) is the solution of the set differential equation with delay
DH U = F(t, Ut), Utk+ = Φk, k = 0, 1, 2, . . . .
We will first prove an existence theorem for impulsive set differential equations
with delay.
Theorem 5.5.1 Assume that
(i) F ∈ PC[R+ × C, Kc(Rn)];
(ii) D[F(t, Φ), θ] ≤ g(t, D0[Φ, θ]), t ≠ tk, where g ∈ PC[R²+, R+] and g(t, w) is nondecreasing in w for each t ∈ R+;
(iii) D0[Ik(Utk), θ] ≤ Jk(D0[Utk, θ]), t = tk, where Jk(w) is a nondecreasing function of w;
(iv) r(t, t0, w0) is the maximal solution of the impulsive scalar differential equation
w′ = g(t, w), t ≠ tk,
w(tk+) = Jk(w(tk)), t = tk,      (5.5.3)
w(t0) = w0,
existing on [t0, ∞), and F is smooth enough to assure local existence.
Then there exists a solution of (5.5.1) on [t0, ∞).
Proof Set J0 = [t0, t1] and restrict F to J0 × C. Note that F is continuous on
J0 × C.
Consider on J0 the set differential equation
DH U = F (t, Ut),
Ut0 = Φ0 .
Then, the hypothesis of Theorem 5.4.3 is satisfied, and hence there exists a
solution U0 (t0 , Φ0)(t), t ∈ J0 , for the set differential equation with delay on J0 .
Now, for t = t1, U0(t1) = U0(t0, Φ0)(t1) and U0,t1+ = I1(U0,t1). Set Φ1 = U0,t1+, let J1 = (t1, t2], and consider the set differential equation with delay
DH U = F(t, Ut), t ∈ J1,  Ut1+ = Φ1.
Once again, restricting the domain of F to J1 × C and employing the impulsive condition in (iii), the hypothesis of Theorem 5.4.3 is satisfied, and thus there exists a solution U1(t1, Φ1)(t), t ∈ J1, satisfying the set differential equation with delay restricted to J1.
We have U1(t2) = U1(t1, Φ1)(t2) and U1,t2+ = I2(U1,t2). Set Φ2 = U1,t2+ and let J2 = (t2, t3]. Repeating the above process, we get the existence of a solution of the impulsive set differential equation with delay on [t0, ∞).
Next, we give a basic comparison theorem for impulsive set differential equations
with delay.
Theorem 5.5.2 Assume that
(i) F ∈ PC[R+ × C, Kc(Rn)];
(ii) D[F(t, Φ), F(t, Ψ)] ≤ g(t, D0[Φ, Ψ]) for t ∈ R+, t ≠ tk, Φ, Ψ ∈ C, where g ∈ PC[R²+, R+];
(iii) D0[Ik(Utk), Ik(Vtk)] ≤ Jk(D0[Utk, Vtk]), t = tk, where Jk(w) is a nondecreasing function of w;
(iv) r(t) = r(t, t0, w0) is the maximal solution of the scalar impulsive differential equation (5.5.3) existing on [t0, ∞).
Then, if U (t) = U (t0 , Φ0)(t) and V (t) = V (t0 , Ψ0)(t) are any two solutions of
(5.5.1) on [t0, ∞), we have
D[U (t), V (t)] ≤ r(t), t ≥ t0 ,
provided D0 [Φ0, Ψ0 ] ≤ w0.
Proof We set J0 = [t0, t1] and restrict the domain of F to J0 × C. Then F
is continuous on this domain and the hypothesis of Theorem 5.4.2 is satisfied.
Hence we have that
D[U (t), V (t)] ≤ r(t), t ∈ J0 ,
which implies that D[U(t1), V(t1)] ≤ r(t1). Using hypothesis (iii) for t = t1+, we get
D0[Ut1+, Vt1+] = D0[I1(Ut1), I1(Vt1)] ≤ J1(D0[Ut1, Vt1]) ≤ J1(r(t1)) ≡ r(t1+).
Thus
D0[Ut1+, Vt1+] ≤ r(t1+).      (5.5.4)
Next, restrict the domain of F to J1 × C, where J1 = (t1, t2]. Then, using (ii), (5.5.4) and Theorem 5.4.2, we can conclude that
D[U(t), V(t)] ≤ r(t, t0, w0), t ∈ J1.
Repeating the above process, the conclusion of the theorem is obtained.
We shall next extend a typical result in Lyapunov-like theory.
Let V : R+ × Kc(Rn) × C → R+. Then V is said to belong to class V0 if
(A1) V(t, U, Φ) is continuous on (tk−1, tk] × Kc(Rn) × C and, for each U ∈ Kc(Rn), Φ ∈ C, k = 1, 2, · · ·,
lim_{(t,W,Φ)→(tk+,U,Φ)} V(t, W, Φ) = V(tk+, U, Φ)
exists;
(A2) V ∈ C[(tk−1, tk] × Kc(Rn) × C, R+] satisfies
|V(t, U, Φ) − V(t, W, Φ)| ≤ L D[U, W], L > 0.
For (t, U, Φ) ∈ (tk−1, tk] × Kc(Rn) × C, we define
D+V(t, U, Φ) = lim sup_{h→0+} (1/h)[V(t + h, U + hF(t, Ut), Ut+h) − V(t, U, Ut)].
To investigate stability criteria the following comparison result in terms of a
Lyapunov function on product spaces is needed. (See Lakshmikantham, Leela
and Sivasundaram [1]).
Theorem 5.5.3 Suppose that
(i) V : R+ × Kc (Rn) × C → R+ and V ∈ V0 .
(ii) D+V(t, U, Φ) ≤ g(t, V(t, U, Φ)), t ≠ tk, where g : (tk−1, tk] × R+ → R is continuous and, for each w ∈ R+, lim_{(t,z)→(tk+,w)} g(t, z) = g(tk+, w) exists;
(iii) V(tk+, U(t0, Φ0)(tk), Utk+(t0, Φ0)) ≤ Jk[V(tk, U(t0, Φ0)(tk), Utk(t0, Φ0))], t = tk, and Jk(w) is nondecreasing in w.
Let r(t) = r(t, t0, w0) be the maximal solution of the scalar impulsive differential equation (5.5.3) existing on t ≥ t0. Then
V(t, U(t0, Φ0)(t), Ut(t0, Φ0)) ≤ r(t), t ≥ t0,
where U(t0, Φ0)(t) is any solution of the impulsive set differential equation with delay (5.5.1) existing on t ≥ t0.
Proof Let U (t0 , Φ0)(t) be any solution of (5.5.1) existing on [t0, ∞). Define
m(t) = V (t, U (t0, Φ0)(t), Ut (t0 , Φ0)), so that
m(t0 ) = V (t0, U (t0, Φ0)(t0 ), Φ0 ), and suppose that m(t0 ) ≤ w0.
Now, for t ∈ (tk−1, tk], k = 1, 2, · · ·,
m(t + h) − m(t) = V(t + h, U(t0, Φ0)(t + h), Ut+h(t0, Φ0)) − V(t, U(t0, Φ0)(t), Ut(t0, Φ0))
= V(t + h, U(t0, Φ0)(t + h), Ut+h(t0, Φ0)) − V(t + h, U(t0, Φ0)(t) + hF(t, Ut), Ut+h(t0, Φ0)) + V(t + h, U(t0, Φ0)(t) + hF(t, Ut), Ut+h(t0, Φ0)) − V(t, U(t0, Φ0)(t), Ut(t0, Φ0))
≤ L D[U(t0, Φ0)(t) + hF(t, Ut), U(t0, Φ0)(t + h)] + V(t + h, U(t0, Φ0)(t) + hF(t, Ut), Ut+h(t0, Φ0)) − V(t, U(t0, Φ0)(t), Ut(t0, Φ0)),
using (A2). Thus
D+m(t) = lim sup_{h→0+} (1/h)[m(t + h) − m(t)]
≤ D+V(t, U(t0, Φ0)(t), Ut(t0, Φ0)) + L lim sup_{h→0+} D[U(t0, Φ0)(t + h), U(t0, Φ0)(t) + hF(t, Ut)].
Using the properties of the Hausdorff metric D and the fact that U(t0, Φ0)(t) is a solution of (5.5.1), it is not difficult to show that
lim sup_{h→0+} D[U(t0, Φ0)(t + h), U(t0, Φ0)(t) + hF(t, Ut)] = 0.
Therefore, using (ii), we have
D+m(t) ≤ D+V(t, U(t0, Φ0)(t), Ut(t0, Φ0)) ≤ g(t, V(t, U(t0, Φ0)(t), Ut(t0, Φ0))) = g(t, m(t)).
For t = tk, we get from (iii)
m(tk+) = V(tk+, U(t0, Φ0)(tk), Utk+(t0, Φ0)) ≤ Jk[V(tk, U(t0, Φ0)(tk), Utk(t0, Φ0))] = Jk[m(tk)].
Hence, by Theorem 4.6.1, we get
m(t) ≤ r(t), t ≥ t0.
We will now define stability of the null solution of an impulsive set differential
equation with delay.
Definition 5.5.1 Let U(t) = U(t0, Φ0)(t) be any solution of (5.5.1). Then the trivial solution U(t) ≡ θ is said to be stable if for each ε > 0 and t0 ∈ R+ there exists a δ = δ(t0, ε) > 0 such that D0[Φ0, θ] < δ implies D[U(t), θ] < ε, t ≥ t0.
The other definitions can be formulated similarly. (See Lakshmikantham
and Leela [1]).
We set, as before,
S(ρ) = {U ∈ Kc(Rn) : D[U, θ] < ρ},
S̃(ρ) = {Φ ∈ C : D0[Φ, θ] < ρ},
K = {a ∈ C[R+, R+] : a(0) = 0 and a(u) is strictly increasing}.
We shall now give a typical result on stability criteria.
In order for (5.5.1) to possess the trivial solution, we assume that F(t, θ) ≡ θ and Ik(θ) ≡ θ for all k.
Theorem 5.5.4 Assume that
(i) V : R+ × S(ρ) × S̃(ρ) → R+, V ∈ V0, and D+V(t, U, Φ) ≤ g(t, V(t, U, Φ)), t ≠ tk, where g : R²+ → R, g(t, 0) ≡ 0, and g satisfies the assumptions given in Theorem 5.5.3;
(ii) there exists ρ0 > 0 such that Utk ∈ S̃(ρ0) implies Ik(Utk) ∈ S̃(ρ) for all k, and
V(tk+, U(t0, Φ0)(tk), Utk+(t0, Φ0)) ≤ Jk[V(tk, U(t0, Φ0)(tk), Utk(t0, Φ0))], t = tk, Utk ∈ S̃(ρ0),
where Jk : R+ → R+ is nondecreasing and Jk(0) = 0 for all k;
(iii) b(D[U, θ]) ≤ V(t, U, Φ) ≤ a(D0[Φ, θ]), where a, b ∈ K.
Then the stability properties of the trivial solution of (5.5.3) imply the corresponding stability properties of the trivial solution of (5.5.1).
Proof Let 0 < ε < min(ρ, ρ0) and t0 ∈ R+ be given. Suppose that the trivial solution of (5.5.3) is stable. Then, given b(ε) > 0 and t0 ∈ R+, there exists a δ1 = δ1(t0, ε) > 0 such that 0 ≤ w0 < δ1 implies w(t, t0, w0) < b(ε), t ≥ t0, where w(t, t0, w0) is any solution of (5.5.3). Let w0 = a(D0[Φ0, θ]) and choose a δ = δ(t0, ε) such that a(δ) < δ1.
We claim that with this δ, D0[Φ0, θ] < δ implies D[U(t0, Φ0)(t), θ] < ε, t ≥ t0, for any solution U(t0, Φ0)(t) of (5.5.1). If this is not true, there would exist a solution U(t) = U(t0, Φ0)(t) of (5.5.1) with D0[Φ0, θ] < δ and a t∗ > t0 satisfying tk < t∗ ≤ tk+1 for some k, ε ≤ D[U(t0, Φ0)(t∗), θ], and D[U(t0, Φ0)(t), θ] < ε, t0 ≤ t ≤ tk.
Since 0 < ε < ρ0, condition (ii) shows that D[U(t0, Φ0)(tk), θ] < ε and D0[Utk+(t0, Φ0), θ] = D0[Ik(Utk(t0, Φ0)), θ] < ρ. Hence we can find a t̂ with tk < t̂ ≤ t∗ satisfying
ε ≤ D[U(t0, Φ0)(t̂), θ] < ρ.
Now, setting m(t) = V(t, U(t0, Φ0)(t), Ut(t0, Φ0)), t0 ≤ t ≤ t̂, and using hypotheses (i), (ii) and Theorem 5.5.3, we get the estimate
V(t, U(t0, Φ0)(t), Ut(t0, Φ0)) ≤ r(t, t0, a(D0[Φ0, θ])), t0 ≤ t ≤ t̂,
where r(t, t0, w0) is the maximal solution of (5.5.3). We are then led, because of (iii), to the contradiction
b(ε) ≤ b(D[U(t0, Φ0)(t̂), θ]) ≤ V(t̂, U(t0, Φ0)(t̂), Ut̂(t0, Φ0)) ≤ r(t̂, t0, a(D0[Φ0, θ])) ≤ r(t̂, t0, a(δ)) < r(t̂, t0, δ1) < b(ε),
which proves that the trivial solution of (5.5.1) is stable.
Example 5.5.1 Consider the set differential equation with delay on R
DH U = −U (t − τ ), U0 = [φ1, φ2],
(5.5.5)
where U (t) = [u1(t), u2(t)].
This can be written as
[u′1, u′2] = [−u2(t − τ), −u1(t − τ)],
which is equivalent to the system of ordinary differential equations with delay
u′1 = −u2(t − τ), u1,0 = φ1,
u′2 = −u1(t − τ), u2,0 = φ2,      (5.5.6)
which reduces to
u″1 = u1(t − 2τ),
u″2 = u2(t − 2τ).      (5.5.7)
Suppose the initial functions are given by
φ1(s) = ((u10 − u20)/2) e^{λ1 s} + ((u10 + u20)/2) e^{−λ2 s},
φ2(s) = ((u20 − u10)/2) e^{λ1 s} + ((u10 + u20)/2) e^{−λ2 s},   −2τ ≤ s ≤ 0.      (5.5.8)
We choose λ1, λ2 > 0 such that
λ1² = e^{−2λ1 τ}, λ2² = e^{2λ2 τ},
so that e^{λ1 t}, e^{−λ2 t} satisfy (5.5.7). As a result, we have
u1(t) = c1 e^{λ1 t} + c2 e^{−λ2 t},
u2(t) = c3 e^{λ1 t} + c4 e^{−λ2 t},   t ≥ 0.      (5.5.9)
Using (5.5.6), (5.5.8) and (5.5.9) at t = 0, we compute the values of c1, c2, c3 and c4, finding c1 = (u10 − u20)/2, c2 = (u10 + u20)/2, c3 = (u20 − u10)/2 and c4 = (u10 + u20)/2.
Hence, the solutions (5.5.9) are given by
u1(t) = ((u10 − u20)/2) e^{λ1 t} + ((u10 + u20)/2) e^{−λ2 t},
u2(t) = ((u20 − u10)/2) e^{λ1 t} + ((u10 + u20)/2) e^{−λ2 t},   t ≥ 0.      (5.5.10)
Thus, the solution of (5.5.7) is given by (5.5.8) and (5.5.10), where φ1(0) = u10 and φ2(0) = u20.
Case 1: If u20 = u10 = u0, then (5.5.10) reduces to
u1(t) = u0 e^{−λ2 t}, u2(t) = u0 e^{−λ2 t}, t ≥ 0,
or
U(t) = [u0, u0] e^{−λ2 t}, t ≥ 0.
In this case impulses have no role to play.
Case 2: If u10 = −u20 = −u0, then (5.5.10) reduces to
u1(t) = −u0 e^{λ1 t}, u2(t) = u0 e^{λ1 t}, t ≥ 0,
or
U(t) = [−u0, u0] e^{λ1 t}, t ≥ 0.
Now suppose we introduce impulses into the set differential equation with delay (5.5.5) at t = tk, k = 1, 2, . . ., as
Utk+ = ak Utk,  ak > 0.      (5.5.11)
Then the solution of the impulsive set differential equation with delay (5.5.5), (5.5.11) is given by
U(t) = ∏_{0 < tk < t} ak [−u0, u0] e^{λ1 t}, t ≥ 0.      (5.5.12)
If the ak's satisfy
λ1 tk+1 + ln ak ≤ λ1 tk for all k,      (5.5.13)
then ak ≤ e^{λ1(tk − tk+1)}, and using this estimate in (5.5.12) we obtain
‖U(t)‖ ≤ ‖U0‖ e^{λ1 t1},
where ‖U(t)‖ = D[U(t), θ]. Choosing δ = (ε/2) e^{−λ1 t1}, it follows that ‖U(t)‖ < ε, t ≥ 0, provided ‖U0‖ < δ. Hence the stability of the trivial solution of (5.5.5) and (5.5.11) follows.
For asymptotic stability, we strengthen the assumption (5.5.13) to
λ1 tk+1 + ln(α ak) ≤ λ1 tk for all k, where α > 1.      (5.5.14)
Then ak ≤ (1/α) e^{λ1(tk − tk+1)}, and using this estimate in (5.5.12) we obtain
lim sup_{t→∞} ‖U(t)‖ = 0.
Thus the trivial solution of the impulsive set differential equation with delay (5.5.5) and (5.5.11) is asymptotically stable.
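As a quick numerical illustration (ours, not from the text) of the decay just established, the sketch below evaluates (5.5.12) in Case 2 for impulse times tk = k and factors ak chosen so that (5.5.14) holds, and checks the bound ‖U(t)‖ ≤ ‖U0‖ e^{λ1 t1}; all numerical choices (τ, u0, α, the impulse times) are illustrative.

```python
import numpy as np

# Check of the impulsive decay in Example 5.5.1, Case 2 (illustrative values).
tau, u0, alpha = 1.0, 1.0, 1.5

# lambda1 solves lambda^2 = exp(-2*lambda*tau); bisection on (0, 1).
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mid**2 - np.exp(-2.0 * mid * tau) < 0.0:
        lo = mid
    else:
        hi = mid
lam1 = 0.5 * (lo + hi)

tk = np.arange(1.0, 21.0)                       # impulse times t1, t2, ...
ak = np.exp(lam1 * (tk - (tk + 1.0))) / alpha   # a_k chosen so (5.5.14) holds

def norm_U(t):
    """||U(t)|| for U(t) = prod_{tk < t} a_k [-u0, u0] e^{lam1 t}."""
    prod = np.prod(ak[tk < t]) if np.any(tk < t) else 1.0
    return prod * u0 * np.exp(lam1 * t)

ts = np.linspace(0.0, 20.0, 401)
vals = np.array([norm_U(t) for t in ts])
print("lambda1 =", round(lam1, 4))
print("max over [0,20]:", vals.max())            # bounded by u0 * e^{lam1 * t1}
print("value at t = 20:", vals[-1])              # decays towards zero
assert vals.max() <= u0 * np.exp(lam1 * tk[0]) + 1e-12
```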
5.6 Set Difference Equations
It is well known that difference equations appear as the natural description of observed evolution phenomena, because most measurements of time-evolving variables are discrete; as such, these equations are, in their own right, important models. More importantly, difference equations also appear in the study of discretization methods for differential equations. Consequently, initial value problems (IVPs) of difference equations should be formulated as
Δun ≡ un+1 − un = F(n, un), un0 = u0, n ≥ n0 ≥ 0,
to represent discretizations of the corresponding ODEs. Nonetheless, in the literature on difference equations we usually find the following type of formulation:
un+1 = f(n, un), un0 = u0, n ≥ n0 ≥ 0,
where, of course, f(n, un) = un + F(n, un).
In this section, we plan to discuss set difference equations parallel to set
differential equations, and develop the theory of such equations. We shall only
provide a few typical results so as to advance the investigation of set difference
equations further, since the theory of such equations is a lot richer than the
corresponding SDEs.
Let N denote the natural numbers and N+ = N ∪ {0}. We denote by N+n0 the set
N+n0 = {n0, n0 + 1, · · · , n0 + l, · · ·},
with l ∈ N+ and n0 ∈ N+.
We consider the set difference equation of the form
Un+1 = F(n, Un), Un0 = U0,      (5.6.1)
where F : N+n0 × Kc(Rq) → Kc(Rq) is continuous in U for each n and Un ∈ Kc(Rq) for each n ≥ n0.
Since we shall be using n in difference equations, we employ the metric space (Kc(Rq), D) in place of the (Kc(Rn), D) used earlier; this will avoid confusion. The possibility of obtaining the values of solutions of (5.6.1) recursively is very important and does not have a counterpart in other kinds of equations. For this reason, we sometimes reduce continuous problems to approximate difference problems. For simple set difference equations we can find solutions in closed form. However, deducing information on the qualitative and quantitative behavior of solutions of (5.6.1) by the comparison principle is, as usual, very effective.
We need the following comparison principle for ordinary difference equations.
See Lakshmikantham and Trigiante [1] for details.
Theorem 5.6.1 Let n ∈ N+n0, r ≥ 0, and let g(n, r) be a nondecreasing function in r for each n. Suppose that for each n ≥ n0 the inequalities
yn+1 ≤ g(n, yn),      (5.6.2)
zn+1 ≥ g(n, zn),      (5.6.3)
hold. If yn0 ≤ zn0, then yn ≤ zn for all n ≥ n0.
Corollary 5.6.1 Let n ∈ N+n0, kn ≥ 0 and yn+1 ≤ kn yn + pn. Then
yn ≤ yn0 ∏_{s=n0}^{n−1} ks + Σ_{s=n0}^{n−1} ps ∏_{τ=s+1}^{n−1} kτ,   n ≥ n0.      (5.6.4)

Corollary 5.6.2 (Discrete Gronwall Inequality) Let n ∈ N+n0, kn ≥ 0 and
yn+1 ≤ yn0 + Σ_{s=n0}^{n} [ks ys + ps].
Then
yn ≤ yn0 ∏_{s=n0}^{n−1} (1 + ks) + Σ_{s=n0}^{n−1} ps ∏_{τ=s+1}^{n−1} (1 + kτ)      (5.6.5)
   ≤ yn0 exp(Σ_{s=n0}^{n−1} ks) + Σ_{s=n0}^{n−1} ps exp(Σ_{τ=s+1}^{n−1} kτ),   n ≥ n0.
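The following small sketch (ours, not from the text) checks the discrete Gronwall bound (5.6.5) numerically on a sequence that saturates the hypothesis of Corollary 5.6.2; the sequence length and random data are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the discrete Gronwall bound (5.6.5), n0 = 0.
rng = np.random.default_rng(0)
N = 30
k = rng.uniform(0.0, 0.5, N)        # k_n >= 0
p = rng.uniform(0.0, 1.0, N)        # p_n >= 0
y = np.empty(N + 1)
y[0] = 1.0
for n in range(N):                  # build y saturating the hypothesis
    y[n + 1] = y[0] + np.sum(k[: n + 1] * y[: n + 1] + p[: n + 1])

for n in range(1, N + 1):           # verify the bound for each n
    bound = y[0] * np.prod(1.0 + k[:n]) + sum(
        p[s] * np.prod(1.0 + k[s + 1 : n]) for s in range(n)
    )
    assert y[n] <= bound + 1e-6
print("discrete Gronwall bound verified for", N, "steps")
```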
The following theorem estimates the solution of the set difference equation in terms of the solution of the scalar difference equation
zn+1 = g(n, zn), zn0 = z0,      (5.6.6)
where g(n, r) is continuous in r for each n and nondecreasing in r for each n. We prove the following result.
Theorem 5.6.2 Assume that F(n, U) is continuous in U for each n and
D[F(n, U), θ] = ‖F(n, U)‖ ≤ g(n, ‖U‖),      (5.6.7)
where g(n, r) is given in (5.6.6). Then ‖Un0‖ ≤ zn0 implies
‖Un+1‖ ≤ zn+1 for n ≥ n0.      (5.6.8)
Proof Set yn+1 = ‖Un+1‖, so that we get
yn+1 = ‖F(n, Un)‖ ≤ g(n, ‖Un‖) = g(n, yn), n ≥ n0.
Let zn+1 be the solution of (5.6.6) with zn0 = yn0. Then Theorem 5.6.1 yields immediately
yn+1 ≤ zn+1, n ≥ n0,
which implies (5.6.8), completing the proof.
The assumption (5.6.7) can be replaced by the weaker condition
D[F(n, U), θ] = ‖F(n, U)‖ ≤ ‖U‖ + w(n, ‖U‖), U ∈ Kc(Rq).
Now set g(n, r) = r + w(n, r) and assume that g(n, r) is nondecreasing in r for each n. This version of Theorem 5.6.2 is more suitable because w(n, r) need not be positive, and hence the solutions of (5.6.6) could have better properties. This observation is useful in extending the Lyapunov-like method for (5.6.1).
Let V : N+n0 × Kc(Rq) → R+ be a given function. We have the following comparison result.
Theorem 5.6.3 Let V(n, U) given above satisfy
V(n + 1, Un+1) ≤ V(n, Un) + w(n, V(n, Un)) ≡ g(n, V(n, Un)), n ≥ n0.      (5.6.9)
Then V(n0, Un0) ≤ zn0 implies
V(n + 1, Un+1) ≤ zn+1, n ≥ n0,
where zn+1 = zn+1(n0, zn0) is the solution of (5.6.6).
Proof Set yn+1 = V(n + 1, Un+1), so that yn0 = V(n0, Un0) ≤ zn0 and
yn+1 ≤ yn + w(n, yn), n ≥ n0.
Consequently, g(n, r) = r + w(n, r). Hence, by Theorem 5.6.1, we get
yn+1 ≤ zn+1, n ≥ n0,
where zn+1 is the solution of (5.6.6). This implies the stated estimate.
Using Theorem 5.6.3., we can prove the stability results for the solutions of
set difference equation (5.6.1).
Theorem 5.6.4 Let the assumptions of Theorem 5.6.3 hold. Suppose further that
b(‖U‖) ≤ V(n, U) ≤ a(‖U‖),
where a, b ∈ K, n ∈ N+n0 and U ∈ Kc(Rq). Then the stability properties of the trivial solution of (5.6.6) imply the corresponding stability properties of the trivial solution of (5.6.1).
Proof Suppose that the trivial solution of (5.6.6) is asymptotically stable. Then it is stable. Let ε > 0 and n0 ∈ N be given. Then, given b(ε) > 0 and n0 ∈ N, there exists a δ1 = δ1(n0, ε) such that
0 ≤ zn0 < δ1 implies zn+1 < b(ε), n ≥ n0.
Choose δ = δ(n0, ε) satisfying a(δ) < δ1. Then Theorem 5.6.3 gives
V(n + 1, Un+1) ≤ zn+1, n ≥ n0,
which shows that
b(‖Un+1‖) ≤ V(n + 1, Un+1) ≤ zn+1, n ≥ n0.
Let ‖Un0‖ < δ. Choose zn0 = V(n0, Un0), so that we have
zn0 ≤ a(‖Un0‖) ≤ a(δ) < δ1.
We then get
b(‖Un+1‖) < b(ε), n ≥ n0,
which implies the stability of the trivial solution of (5.6.1).
For asymptotic stability, we observe that
b(‖Un+1‖) ≤ V(n + 1, Un+1) ≤ zn+1, n ≥ n0.
Since zn+1 → 0 as n → ∞, we get ‖Un+1‖ → 0 as n → ∞. The proof is complete.
As an example, take g(n, r) = an r, where an ∈ R. Then the solution of
zn+1 = an zn, zn0 = z0,
is given by
zn = z0 ∏_{i=n0}^{n−1} ai.
We have the following two cases.
(a) If |∏_{i=n0}^{n−1} ai| ≤ M(n0), then |zn| ≤ |z0| M(n0), and therefore it is enough to take δ(ε, n0) = ε/M(n0) to get stability.
(b) If lim_{n→∞} |∏_{i=n0}^{n−1} ai| = 0, then asymptotic stability results.
Consequently, Theorem 5.6.3 yields the corresponding stability properties of the trivial solution of (5.6.1).
Corresponding to the familiar example in SDE, namely DH U = (−1)U, let us consider the set difference equation
ΔUn ≡ Un+1 − Un = (−1)Un, U0 = U^0 ∈ Kc(R).
Of course, we need to assume that the Hukuhara difference Un+1 − Un exists for all n. Letting Un = [un, vn], U0 = U^0 = [u0, v0], and using the interval methods as before, we solve the ordinary difference equations
un+1 = un − vn, vn+1 = vn − un,
which yield
un+1 = 2^n [u0 − v0], vn+1 = 2^n [v0 − u0],
and therefore
Un+1 = [u0 − v0, v0 − u0] 2^n.
Thus diam Un+1 → ∞ as n → ∞.
If, on the other hand, we consider the set difference equation
Un+1 = (−1)Un, U0 = U^0 ∈ Kc(R),
we get, using the interval methods,
un+1 = −vn, vn+1 = −un,
which give us
U2n = [u0, v0], U2n+1 = [−v0, −u0] = (−1)U^0.
Hence diam Un = diam U^0, which is finite.
As a last example, if we analyze
Un+1 = (1/2) Un, U0 = U^0 = [u0, v0],
we find that Un = (1/2)^n U^0 and, consequently, diam Un → 0 as n → ∞.
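The three recursions above are easy to reproduce numerically. The sketch below (ours, not from the text) iterates them on interval endpoints and tracks the diameters; the initial interval [1, 2] and the number of steps are illustrative.

```python
# Iterating the three interval recursions and tracking diam Un = vn - un.
def iterate(step, u0, v0, n_steps=10):
    u, v = u0, v0
    diams = [v - u]
    for _ in range(n_steps):
        u, v = step(u, v)
        diams.append(v - u)
    return diams

# Delta Un = (-1)Un, i.e. Un+1 = Un + (-1)Un = [un - vn, vn - un]
hukuhara = lambda u, v: (u - v, v - u)
# Un+1 = (-1)Un = [-vn, -un]
reflect = lambda u, v: (-v, -u)
# Un+1 = (1/2) Un
halve = lambda u, v: (0.5 * u, 0.5 * v)

print("diam, Delta Un = (-1)Un:", iterate(hukuhara, 1.0, 2.0))  # doubles: 2^n
print("diam, Un+1 = (-1)Un    :", iterate(reflect, 1.0, 2.0))   # constant
print("diam, Un+1 = Un / 2    :", iterate(halve, 1.0, 2.0))     # tends to 0
```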
5.7 Set Differential Equations with Causal Operators
We shall devote this section to extend certain basic results to set differential
equations (SDEs) with causal or nonanticipative maps of Volterra type, since
such equations provide a unified treatment of the basic theory of SDEs, SDEs
with delay and set integro differential equations, which in turn include ordinary
dynamic systems of the corresponding types.
Let E = C[[t0, T], Kc(Rn)] with norm
D0[U, θ] = sup_{t0 ≤ t ≤ T} D[U(t), θ].
Definition 5.7.1 Suppose that Q ∈ C[E, E]. Then Q is said to be a causal map, or a nonanticipative map, if U(s) = V(s), t0 ≤ s ≤ t ≤ T, where U, V ∈ E, implies (QU)(s) = (QV)(s), t0 ≤ s ≤ t.
We define the IVP for SDE with causal map, using the Hukuhara derivative
as follows:
DH U (t) = (QU )(t), U (t0) = U0 ∈ Kc (Rn).
(5.7.1)
Before we proceed to prove an existence and uniqueness result for (5.7.1), we
need the following comparison results.
Theorem 5.7.1 Assume that m ∈ C[J, R+], g ∈ C[J × R+, R+] and, for t ∈ J = [t0, T],
D−m(t) ≤ g(t, |m|0(t)),
where |m|0(t) = sup_{t0 ≤ s ≤ t} |m(s)|. Suppose that r(t) = r(t, t0, w0) is the maximal solution of the scalar differential equation
w′ = g(t, w), w(t0) = w0 ≥ 0,      (5.7.2)
existing on J. Then m(t0) ≤ w0 implies m(t) ≤ r(t), t ∈ J.
Proof To prove the stated inequality, it is enough to prove that
m(t) < w(t, t0, w0, ε), t ≥ t0, t ∈ J,      (5.7.3)
where w(t, t0, w0, ε) is any solution of
w′ = g(t, w) + ε, w(t0) = w0 + ε, ε > 0,
since lim_{ε→0+} w(t, t0, w0, ε) = r(t, t0, w0).
If (5.7.3) is not true, there exists a t1 > t0 such that m(t1) = w(t1, t0, w0, ε) and m(t) < w(t, t0, w0, ε), t0 ≤ t < t1, in view of the fact that m(t0) < w0 + ε. Hence,
D−m(t1) ≥ w′(t1, t0, w0, ε) = g(t1, w(t1, t0, w0, ε)) + ε.      (5.7.4)
Now g(t, w) ≥ 0 implies that w(t, t0, w0, ε) is nondecreasing in t, and this gives
|m|0(t1) = w(t1, t0, w0, ε) = m(t1),
which in turn yields
D−m(t1) ≤ g(t1, |m|0(t1)) = g(t1, w(t1, t0, w0, ε)),
which contradicts (5.7.4). Hence the theorem follows.
Next we consider an estimate of any two solutions of (5.7.1) in terms of the
maximal solution of (5.7.2) utilizing Theorem 5.7.1.
We define D0 [U, V ](t) = maxt0 ≤s≤t D[U (s), V (s)].
Theorem 5.7.2 Let Q ∈ C[E, E] be a causal map such that for t ∈ J,
D[(QU )(t), (QV )(t)] ≤ g(t, D0 [U, V ](t)),
(5.7.5)
where g ∈ C[J × R+ , R+ ]. Suppose further that the maximal solution r(t, t0, w0)
of the differential equation (5.7.2) exists on J. Then, if U (t), V (t) are any two
solutions of (5.7.1) through U (t0 ) = U0 , V (t0) = V0, U0 , V0 ∈ Kc (Rn ) on J
respectively, we have
D[U (t), V (t)] ≤ r(t, t0, w0), t ∈ J,
(5.7.6)
provided D[U0 , V0 ] ≤ w0.
Proof Set m(t) = D[U (t), V (t)]. Then m(t0 ) = D[U0 , V0 ] ≤ w0 . Now for small
h > 0, t ∈ J, consider m(t + h) = D[U (t + h), V (t + h)]. Using the property
(1.3.8) of the Hausdorff metric D, we get successively, the following relations:
m(t + h) ≤ D[U(t + h), U(t) + h(QU)(t)] + D[U(t) + h(QU)(t), V(t + h)]
≤ D[U(t + h), U(t) + h(QU)(t)] + D[U(t) + h(QU)(t), V(t) + h(QV)(t)] + D[V(t) + h(QV)(t), V(t + h)]
≤ D[U(t + h), U(t) + h(QU)(t)] + D[U(t) + h(QU)(t), U(t) + h(QV)(t)] + D[U(t) + h(QV)(t), V(t) + h(QV)(t)] + D[V(t) + h(QV)(t), V(t + h)].
Next using the property (1.3.9) of the Hausdorff metric D and the fact that the
Hukuhara differences U (t + h) − U (t) and V (t + h) − V (t) exist for small h > 0,
we arrive at,
m(t + h) ≤ D[U(t) + Z(t, h), U(t) + h(QU)(t)] + D[h(QU)(t), h(QV)(t)] + D[U(t), V(t)] + D[V(t) + h(QV)(t), V(t) + Y(t, h)],
where U(t + h) = U(t) + Z(t, h) and V(t + h) = V(t) + Y(t, h). Again the property (1.3.9) gives
m(t + h) ≤ D[Z(t, h), h(QU)(t)] + D[h(QU)(t), h(QV)(t)] + D[U(t), V(t)] + D[h(QV)(t), Y(t, h)].
Since the Hukuhara differences exist, we can replace Z(t, h) and Y(t, h) with U(t + h) − U(t) and V(t + h) − V(t), respectively. This gives, on subtracting m(t) and dividing both sides by h > 0,
[m(t + h) − m(t)]/h ≤ D[(U(t + h) − U(t))/h, (QU)(t)] + D[(QU)(t), (QV)(t)] + D[(QV)(t), (V(t + h) − V(t))/h].
Now, taking limit supremum as h → 0+ and using the fact that U (t) and V (t)
are solutions of (5.7.1) along with assumption (5.7.5), we obtain
D+m(t) ≤ D[(QU)(t), (QV)(t)] ≤ g(t, D0[U, V](t)) = g(t, |m|0(t)), t ∈ J.
Now, Theorem 5.7.1 guarantees the stated conclusion and the proof is complete.
Corollary 5.7.1 Let Q ∈ C[E, E] be a causal map and
D[(QU )(t), θ] ≤ g(t, D0 [U, θ](t)),
where g ∈ C[J × R+ , R+ ]. Also, suppose that r(t, t0 , w0) is the maximal solution
of the scalar differential equation (5.7.2). If U (t, t0, U0 ) is any solution of (5.7.1)
through (t0 , U0 ) with U0 ∈ Kc (Rn), then
D[U0 , θ] ≤ w0 implies D[U (t), θ] ≤ r(t, t0, w0), t ∈ J.
Let us begin by proving a local existence result using successive approximations.
Theorem 5.7.3 Assume that
(a) Q ∈ C[B, E], where B = B(U0, b) = {U ∈ E : D0[U, U0] ≤ b}, is a causal map and D0[(QU), θ](t) ≤ M1 on B;
(b) g ∈ C[J × [0, 2b], R+], g(t, w) ≤ M2 on J × [0, 2b], g(t, 0) ≡ 0, g(t, w) is nondecreasing in w for each t ∈ J, and w(t) ≡ 0 is the only solution of
w′ = g(t, w), w(t0) = 0, on J;      (5.7.7)
(c) D[(QU)(t), (QV)(t)] ≤ g(t, D0[U, V](t)) on B.
Then the successive approximations defined by
Un+1(t) = U0 + ∫_{t0}^{t} (QUn)(s) ds, n = 0, 1, 2, · · ·,      (5.7.8)
exist on J0 = [t0, t0 + η), where η = min[T − t0, b/M] and M = max(M1, M2), and converge uniformly to the unique solution U(t) of (5.7.1).
Proof For t ∈ J0, we have, by induction, using the properties (1.3.9) and (1.7.11) of the Hausdorff metric D,
D[Un+1(t), U0] = D[U0 + ∫_{t0}^{t} (QUn)(s) ds, U0] = D[∫_{t0}^{t} (QUn)(s) ds, θ]
≤ ∫_{t0}^{t} D[(QUn)(s), θ] ds ≤ ∫_{t0}^{t} D0[QUn, θ](t) ds ≤ M1(t − t0) ≤ M(t − t0) ≤ b,
which shows that the successive approximations are well defined on J0.
Next, we define successive approximations for the problem (5.7.7) as follows:
w0(t) = M(t − t0),
wn+1(t) = ∫_{t0}^{t} g(s, wn(s)) ds, t ∈ J0, n = 0, 1, 2, · · ·.
Then
w1(t) = ∫_{t0}^{t} g(s, w0(s)) ds ≤ M2(t − t0) ≤ M(t − t0) = w0(t).
Assume for some k > 1 and t ∈ J0 that wk(t) ≤ wk−1(t). Then, using the monotonicity of g, we get
wk+1(t) = ∫_{t0}^{t} g(s, wk(s)) ds ≤ ∫_{t0}^{t} g(s, wk−1(s)) ds = wk(t).
Hence, the sequence {wk(t)} is monotone decreasing. Since w′k(t) = g(t, wk−1(t)) ≤ M2, t ∈ J0, we conclude by the Ascoli–Arzela theorem and the monotonicity of the sequence {wk(t)} that
lim_{n→∞} wn(t) = w(t)
uniformly on J0. Since w(t) satisfies equation (5.7.7), we get w(t) ≡ 0 on J0 from condition (b). Observing that, for each t ∈ J0 and t0 ≤ s ≤ t,
D[U1(s), U0] = D[U0 + ∫_{t0}^{s} (QU0)(ξ) dξ, U0] = D[∫_{t0}^{s} (QU0)(ξ) dξ, θ]
≤ ∫_{t0}^{s} D[(QU0)(ξ), θ] dξ ≤ D0[(QU0), θ](s)(s − t0) ≤ D0[(QU0), θ](t)(t − t0)
≤ M1(t − t0) ≤ M(t − t0) = w0(t),
which implies that D0[U1, U0](t) ≤ w0(t).
We assume for some k > 1 that
D0[Uk, Uk−1](t) ≤ wk−1(t), t ∈ J0.
Consider, for any t ∈ J0 and t0 ≤ s ≤ t,
D[Uk+1(s), Uk(s)] ≤ ∫_{t0}^{s} D[(QUk)(ξ), (QUk−1)(ξ)] dξ ≤ ∫_{t0}^{s} g(ξ, D0[Uk, Uk−1](ξ)) dξ
≤ ∫_{t0}^{s} g(ξ, wk−1(ξ)) dξ ≤ ∫_{t0}^{t} g(ξ, wk−1(ξ)) dξ = wk(t),
which further gives
D0[Uk+1, Uk](t) ≤ wk(t), t ∈ J0.
Thus we conclude that
D0[Un+1, Un](t) ≤ wn(t),      (5.7.9)
for t ∈ J0 and for all n = 0, 1, 2, · · · .
We claim that {Un(t)} is a Cauchy sequence. To show this, let n ≤ m. Setting v(t) = D[Un(t), Um(t)] and using (5.7.8), we get
D+v(t) ≤ D[DH Un(t), DH Um(t)] = D[(QUn−1)(t), (QUm−1)(t)]
≤ D[(QUn−1)(t), (QUn)(t)] + D[(QUn)(t), (QUm)(t)] + D[(QUm)(t), (QUm−1)(t)]
≤ g(t, D0[Un−1, Un](t)) + g(t, D0[Un, Um](t)) + g(t, D0[Um−1, Um](t))
≤ g(t, wn−1(t)) + g(t, |v|0(t)) + g(t, wn−1(t)) = g(t, |v|0(t)) + 2g(t, wn−1(t)).
The above inequalities yield, on using Theorem 5.7.1, the estimate
v(t) ≤ rn(t), t ∈ J0,
where rn(t) is the maximal solution of
r′n(t) = g(t, rn) + 2g(t, wn−1(t)), rn(t0) = 0,
for each n. Since 2g(t, wn−1(t)) → 0 as n → ∞ uniformly on J0, it follows by Lemma 1.3.1 in Lakshmikantham and Leela [1] that rn(t) → 0 as n → ∞ uniformly on J0. This implies from (5.7.9) that Un(t) converges uniformly to U(t) on J0, and clearly U(t) is a solution of (5.7.1).
To prove uniqueness, let V (t) be another solution of (5.7.1) on J0. Set m(t) =
D[U (t), V (t)]. Then m(t0 ) = 0 and
D+ m(t) ≤ g(t, |m|0(t)), t ∈ J0 .
Since m(t0 ) = 0, it follows from Theorem 5.7.1 that
m(t) ≤ r(t, t0, 0), t ∈ J0,
where r(t, t0, 0) is the maximal solution of (5.7.7). The assumption (b) now
shows that U (t) = V (t), t ∈ J0, proving uniqueness.
Assuming local existence, we next discuss a global existence result.
Theorem 5.7.4 Let Q ∈ C[E, E] be a causal map such that
D[(QU )(t), θ] ≤ g(t, D0 [U, θ](t)),
(5.7.10)
where g ∈ C[R2+ , R+ ], g(t, w) is nondecreasing in w for each t ∈ R+ and the
maximal solution r(t) = r(t, t0 , w0) of (5.7.2) exists on [t0, ∞). Suppose further
that Q is smooth enough to guarantee the local existence of solutions of (5.7.1)
for any (t0 , U0) ∈ R+ × Kc (Rn ). Then, the largest interval of existence for any
solution U (t, t0 , U0) of (5.7.1) is [t0, ∞), whenever D[U0 , θ] ≤ w0.
Proof Suppose that U (t) = U (t, t0, U0 ) is any solution of (5.7.1) existing on
[t0, β), t0 < β < ∞, with D[U0 , θ] ≤ w0, and the value of β cannot be increased.
Define m(t) = D[U (t), θ] and note that m(t0 ) ≤ w0 . Then it follows that,
D+ m(t) ≤ D[DH U (t), θ] ≤ D[(QU )(t), θ] ≤ g(t, D0 [U, θ](t)).
Using Comparison Theorem 5.7.1, we obtain
m(t) ≤ r(t), t0 ≤ t < β.
(5.7.11)
For any t1, t2 such that t0 < t1 < t2 < β, using (5.7.10) and the properties of the Hausdorff metric D,
D[U(t1), U(t2)] = D[∫_{t0}^{t1} (QU)(s) ds, ∫_{t0}^{t2} (QU)(s) ds] ≤ ∫_{t1}^{t2} D[(QU)(s), θ] ds ≤ ∫_{t1}^{t2} g(s, D0[U, θ](s)) ds.
Employing the estimate (5.7.11) and the monotonicity of g(t, w), we find
D[U(t1), U(t2)] ≤ ∫_{t1}^{t2} g(s, r(s)) ds = r(t2) − r(t1).
Since lim_{t→β−} r(t, t0, w0) exists, letting t1, t2 → β− we see that the Cauchy criterion for convergence is satisfied, and therefore lim_{t→β−} U(t, t0, U0) = Uβ exists.
We then consider the IVP
DH U (t) = (QU )(t),
U (β) = Uβ .
As we have assumed the local existence, we note that U (t, t0, U0 ) can be continued beyond β, contradicting our assumption that β cannot be increased. Thus
every solution U (t, t0, U0 ) of (5.7.1) such that D[U0 , θ] ≤ w0 exists globally on
[t0, ∞) and hence the proof.
To prove a comparison result in terms of Lyapunov-like functions, we need
the following known result from Lakshmikantham and Rama Mohana Rao [1].
Lemma 5.7.1 Let g0, g ∈ C[R²+, R] be such that
g0(t, w) ≤ g(t, w), (t, w) ∈ R²+.      (5.7.12)
Then the right maximal solution r(t, t0, w0) of (5.7.2) and the left maximal solution η(t, T0, v0) of
v′ = g0(t, v), v(T0) = v0 ≥ 0,      (5.7.13)
satisfy the relation
r(t, t0, w0) ≤ η(t, T0, v0), t ∈ [t0, T0],
whenever r(T0, t0, w0) ≤ v0.
Theorem 5.7.5 Assume that
(i) L ∈ C[R+ × Kc(Rn), R+] and L(t, U) is locally Lipschitzian in U, i.e.
|L(t, U) − L(t, V)| ≤ K D[U, V], U, V ∈ Kc(Rn), t ∈ R+;
(ii) g0, g ∈ C[R+ × R+, R], g0(t, w) ≤ g(t, w), η(t, T0, v0) is the left maximal solution of
v′ = g0(t, v), v(T0) = v0 ≥ 0,
existing on t0 ≤ t ≤ T0, and r(t, t0, w0) is the maximal solution of (5.7.2) existing on [t0, ∞);
(iii) D−L(t, U(t)) ≤ g(t, L(t, U(t))) on Ω, where
Ω = {U ∈ E : L(s, U(s)) ≤ η(s, t, L(t, U(t))), t0 ≤ s ≤ t},
and
D−L(t, U(t)) = lim inf_{h→0−} (1/h)[L(t + h, U(t) + h(QU)(t)) − L(t, U(t))].
Then we have
L(t, U(t, t0, U0)) ≤ r(t, t0, w0), t ≥ t0,      (5.7.14)
whenever L(t0, U0) ≤ w0.
Proof To prove (5.7.14), we set m(t) = L(t, U(t, t0, U0)), t ≥ t0, so that m(t0) = L(t0, U0) ≤ w0. Let w(t, ε) be any solution of
w′ = g(t, w) + ε, w(t0) = w0 + ε,
for sufficiently small ε > 0. Then, since r(t, t0, w0) = lim_{ε→0+} w(t, ε), it is enough to prove that
m(t) < w(t, ε) for t ≥ t0.
If this is not true, there exists a t1 > t0 such that m(t1) = w(t1, ε) and m(t) < w(t, ε), t0 ≤ t < t1. This implies that
D−m(t1) ≥ w′(t1, ε) = g(t1, m(t1)) + ε.      (5.7.15)
Now consider the left maximal solution η(s, t1, m(t1)) of (5.7.13) with v(t1) = m(t1) on the interval t0 ≤ s ≤ t1. By Lemma 5.7.1, we have
r(s, t0, w0) ≤ η(s, t1, m(t1)), s ∈ [t0, t1].
Since
r(t1, t0, w0) = lim_{ε→0+} w(t1, ε) = m(t1) = η(t1, t1, m(t1))
and m(s) ≤ w(s, ε) for t0 ≤ s ≤ t1, it follows that
m(s) ≤ r(s, t0 , w0) ≤ η(s, t1 , m(t1 )), t0 ≤ s ≤ t1.
This inequality implies that hypothesis (iii) holds for U (s, t0, U0 ) on
t0 ≤ s ≤ t1 , and hence, standard computation yields,
D− m(t1 ) ≤ g(t1 , m(t1 ))
which contradicts the relation (5.7.15). Thus m(t) ≤ r(t, t0, w0), t ≥ t0 and the
proof is complete.
The above comparison result in terms of Lyapunov-like functions is a useful tool for establishing appropriate stability and boundedness results for set differential equations with causal maps (5.7.1), analogous to the original Lyapunov results and their extensions. In order to match the behavior of solutions of (5.7.1) with that of the corresponding ordinary differential equation with a causal map, we need to use the existence of the Hukuhara difference U0 − V0 = W0 in the initial condition, as in Section 3.3, and study the stability or boundedness criteria for U(t, t0, U0 − V0) = U(t, t0, W0) of (5.7.1).
We present a simple example in Kc (R).
Consider
DH U(t) = − ∫_0^t U(s) ds,   U(0) = U0 ∈ Kc(R).
Then, using interval methods, we get
u1′(t) = − ∫_0^t u2(s) ds,   u2′(t) = − ∫_0^t u1(s) ds,
where U = [u1, u2] and U0 = [u10, u20]. Clearly this yields
u1^(4) = u1,   u1(0) = u10,
u2^(4) = u2,   u2(0) = u20,
whose solutions are given by
u1(t) = ((u10 − u20)/2)·((e^t + e^{−t})/2) + ((u10 + u20)/2)·cos t,
u2(t) = ((u20 − u10)/2)·((e^t + e^{−t})/2) + ((u10 + u20)/2)·cos t.
That is, for t ≥ 0,
U(t, 0, U0) = [−(u20 − u10)/2, (u20 − u10)/2]·((e^t + e^{−t})/2) + [(u10 + u20)/2, (u10 + u20)/2]·cos t.
Thus, choosing
V0 = [−(u20 − u10)/2, (u20 − u10)/2],
so that W0 = U0 − V0 = [(u10 + u20)/2, (u10 + u20)/2], we obtain
U(t, 0, W0) = [(u10 + u20)/2, (u10 + u20)/2]·cos t,   t ≥ 0,
which implies the stability of the trivial solution of (5.7.1).
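As a quick check (carried out here for convenience; it is not part of the original argument, and we assume u10 ≤ u20), note that (e^t + e^{−t})/2 = cosh t, so that
u1′(t) = ((u10 − u20)/2) sinh t − ((u10 + u20)/2) sin t,
while
− ∫_0^t u2(s) ds = −((u20 − u10)/2) sinh t − ((u10 + u20)/2) sin t = u1′(t),
and similarly for u2′. Moreover, the length u2(t) − u1(t) = (u20 − u10) cosh t is nondecreasing for t ≥ 0, so DH U(t) = [u1′(t), u2′(t)], and the interval-valued function above indeed satisfies the given equation.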
We next give an example which illustrates that one can also obtain asymptotic stability for SDEs with causal maps.
Consider the following differential equation,
DH U = −aU − b ∫_0^t U(s) ds,   U(0) = U0 ∈ Kc(R),   a, b > 0.   (5.7.16)
As before, we take U(t) = [u1(t), u2(t)], U0 = [u10, u20], which reduces to
u1′ = −a u2 − b ∫_0^t u2(s) ds,   u2′ = −a u1 − b ∫_0^t u1(s) ds,
and hence
u1^(4) = a^2 u1″ + 2ab u1′ + b^2 u1,   u2^(4) = a^2 u2″ + 2ab u2′ + b^2 u2.
Choosing a = 1 and b = 2, the characteristic equation λ^4 − λ^2 − 4λ − 4 = 0 factors as (λ + 1)(λ − 2)(λ^2 + λ + 2) = 0, with roots −1, 2 and (−1 ± i√7)/2; using the initial conditions, we find
u1(t) = (1/6)(u10 − u20) e^{−t} + (1/3)(u10 − u20) e^{2t}
        + e^{−t/2} [ (1/2)(u10 + u20) cos(√7 t/2) − (1/(2√7))(u10 + u20) sin(√7 t/2) ],
u2(t) = (1/6)(u20 − u10) e^{−t} + (1/3)(u20 − u10) e^{2t}
        + e^{−t/2} [ (1/2)(u20 + u10) cos(√7 t/2) − (1/(2√7))(u20 + u10) sin(√7 t/2) ].
Thus it follows that
U(t) = (u20 − u10) [−1/6, 1/6] e^{−t} + (u20 − u10) [−1/3, 1/3] e^{2t}
      + (u10 + u20) [1/2, 1/2] e^{−t/2} cos(√7 t/2)
      − (u10 + u20) [1/(2√7), 1/(2√7)] e^{−t/2} sin(√7 t/2).
Now, choosing u10 = u20 eliminates the undesirable terms (in particular the growing e^{2t} term), and we therefore obtain asymptotic stability of the zero solution of (5.7.16).
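The endpoint formulas above can also be checked numerically by rewriting the integro-differential system as a first-order ODE system for (u1, u2, ∫u1, ∫u2). The following is a minimal Python sketch of such a check (added here; it assumes NumPy and SciPy are available, and the sample values u10 = 1, u20 = 3 and the horizon t ∈ [0, 4] are purely illustrative):

import numpy as np
from scipy.integrate import solve_ivp

a, b = 1.0, 2.0
u10, u20 = 1.0, 3.0  # illustrative sample endpoints, not from the text

def rhs(t, y):
    # y = [u1, u2, v1, v2], where v1, v2 are the running integrals of u1, u2;
    # this turns the integro-differential system into an ordinary ODE system.
    u1, u2, v1, v2 = y
    return [-a*u2 - b*v2, -a*u1 - b*v1, u1, u2]

sol = solve_ivp(rhs, (0.0, 4.0), [u10, u20, 0.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 4.0, 9)
w = np.sqrt(7.0) / 2.0
osc = np.exp(-t/2.0) * (0.5*np.cos(w*t) - np.sin(w*t)/(2.0*np.sqrt(7.0)))
u1_exact = (u10 - u20)*(np.exp(-t)/6.0 + np.exp(2.0*t)/3.0) + (u10 + u20)*osc
u2_exact = (u20 - u10)*(np.exp(-t)/6.0 + np.exp(2.0*t)/3.0) + (u10 + u20)*osc
num = sol.sol(t)
# Both printed discrepancies should be negligible relative to the solution size.
print(np.max(np.abs(num[0] - u1_exact)), np.max(np.abs(num[1] - u2_exact)))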
5.8 Lyapunov-like Functions in Kc(Rd+)
Recall that in Section 2.3, a partial order in (Kc(Rn), D) was introduced and, employing it, the existence of extremal solutions and a suitable comparison result were discussed. In this section, using that comparison result, we shall consider Lyapunov-like functions whose values are in (Kc(Rd+), D). For this purpose, we shall also utilize the set differential systems developed in Section 3.7, and consequently, we consider the set differential system, namely,
DH U = F(t, U),   U(t0) = U0 ∈ Kc(Rn)N,   (5.8.1)
where F ∈ C[R+ × Kc (Rn )N , Kc(Rn )N ], Kc (Rn )N = Kc (Rn) × Kc (Rn ) × · · · ×
Kc (Rn ), N times. Since we need a partial order in (Kc (Rd ), D), we shall assume
that we have introduced it in a similar way. We shall begin by proving a comparison result in terms of Lyapunov-like functions whose values are in Kc (Rd+ ).
We also need a map
ρ : Kc(Rn)N → Kc(Rd+).   (5.8.2)
Theorem 5.8.1 Assume that
(i) L ∈ C[R+ × Kc(Rn)N, Kc(Rd+)] and, whenever the Hukuhara difference A − B of A, B ∈ Kc(Rn)N exists,
L(t, A) ≤ L(t, B) + Kρ[A − B],   (5.8.3)
where K ≥ 0 is a local Lipschitz constant;
(ii) G ∈ C[R+ × Kc(Rd+), Kc(Rd)], the Hukuhara difference
L(t + h, U(t + h)) − L(t, U(t))   (5.8.4)
exists, and for t ∈ R+, U ∈ Kc(Rn)N,
D+ L(t, U) ≡ lim sup_{h→0+} (1/h) [L(t + h, U + hF(t, U)) − L(t, U)] ≤ G(t, L(t, U)).   (5.8.5)
Then, if U(t) = U(t, t0, U0) is any solution of (5.8.1) existing for t ≥ t0 such that L(t0, U0) ≤ R0, we have
L(t, U(t)) ≤ R(t, t0, R0),   t ≥ t0,   (5.8.6)
where R(t, t0, R0) is the maximal solution of the SDE
DH R = G(t, R),   R(t0) = R0 ∈ Kc(Rd+),   (5.8.7)
existing for t ≥ t0.
Proof Define m(t) = L(t, U (t)), where U (t) = U (t, t0 , U0) is any solution of
(5.8.1) existing on [t0 , ∞). Clearly, m(t) ∈ Kc (Rd+ ) for each t ∈ [t0, ∞) and
m(t0 ) ≤ R0. Now for small h > 0,
m(t + h) = L(t + h, U(t + h)) = L(t + h, U(t) + hF(t, U(t)) + o(h))
         = L(t, U(t)) + Z(t, U(t); h)
because of (5.8.4). It therefore follows, using (5.8.3), that
m(t + h) − m(t) = Z(t, U(t); h) = L(t + h, U(t) + hF(t, U(t)) + o(h)) − L(t, U(t))
               ≤ L(t + h, U(t) + hF(t, U(t))) − L(t, U(t)) + Kρ(o(h)).
Hence we arrive at
D+ m(t) ≡ lim sup_{h→0+} (1/h) [m(t + h) − m(t)]
        ≤ lim sup_{h→0+} (1/h) Kρ(o(h)) + lim sup_{h→0+} (1/h) [L(t + h, U(t) + hF(t, U(t))) − L(t, U(t))]
        ≤ D+ L(t, U(t)) ≤ G(t, L(t, U(t))) = G(t, m(t)),   t ≥ t0,
in view of (5.8.5). This implies by Theorem 2.4.5,
L(t, U (t)) = m(t) ≤ R(t, t0, R0), t ≥ t0 ,
R(t, t0, R0) being the maximal solution of (5.8.7). The proof is complete.
Employing the estimate (5.8.6), we can now discuss the desired qualitative
properties of solutions U (t) of set differential system (5.8.1). For this purpose,
we need suitable notions of stability for SDEs (5.8.1) and (5.8.7).
Definition 5.8.1 The trivial solution of (5.8.1) is said to be equi-stable if, given
ε > 0 and t0 ∈ R+, there exists a δ = δ(t0, ε) > 0 such that
D0[W0, θ] < δ implies D0[U(t, t0, W0), θ] < ε,   t ≥ t0,
where W0 is chosen in such a way that U0 = V0 + W0 , that is, the Hukuhara
difference U0 − V0 exists and is equal to W0 .
Definition 5.8.2 The trivial solution of (5.8.7) is said to be equi-stable if, given
ε > 0 and t0 ∈ R+, there exists a δ = δ(t0, ε) > 0 such that
‖Q0‖ < δ implies ‖R(t, t0, Q0)‖ < ε,   t ≥ t0,
where the Hukuhara difference R0 − S0 = Q0 is assumed to exist and
‖R‖ = sup[‖r‖ : r ∈ R],   R ∈ Kc(Rd+).
One can construct the other definitions of various stability and boundedness
concepts based on the foregoing definitions. We shall now state a typical result
on stability.
Theorem 5.8.2 Assume that conditions (i), (ii) of Theorem 5.8.1 are satisfied.
Suppose further that
b(D0[U, θ]) ≤ ‖L(t, U)‖ ≤ a(D0[U, θ]),   a, b ∈ K.   (5.8.8)
Then the stability properties of the trivial solution of (5.8.7) imply the corresponding stability properties of the trivial solution of (5.8.1).
One can construct the proof of the theorem following the standard proofs
employed already, once we have the estimates (5.8.6) and (5.8.8).
Since the comparison system in the present situation is also an SDE and not a scalar differential equation as before, it would be necessary to choose (when connecting the initial values of the two SDEs) a(D0[W0, θ]) = ‖Q0‖, so that ‖L(t0, W0)‖ ≤ ‖Q0‖ holds, which is required when we utilize the estimate (5.8.6).
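To make this concrete, here is a rough sketch of the equi-stability part (added here; it assumes in addition that the partial order in Kc(Rd+) is norm monotone, i.e. A ≤ B implies ‖A‖ ≤ ‖B‖, and that the initial values are connected so that L(t0, W0) ≤ Q0 with a(D0[W0, θ]) = ‖Q0‖, as indicated above). Given ε > 0 and t0 ∈ R+, equi-stability of the trivial solution of (5.8.7) yields a δ1 = δ1(t0, b(ε)) > 0 such that ‖Q0‖ < δ1 implies ‖R(t, t0, Q0)‖ < b(ε), t ≥ t0. Choose δ > 0 with a(δ) < δ1. Then D0[W0, θ] < δ gives ‖Q0‖ = a(D0[W0, θ]) < δ1 and therefore, by (5.8.8), (5.8.6) and norm monotonicity,
b(D0[U(t, t0, W0), θ]) ≤ ‖L(t, U(t, t0, W0))‖ ≤ ‖R(t, t0, Q0)‖ < b(ε),   t ≥ t0,
so that D0[U(t, t0, W0), θ] < ε for t ≥ t0, which is the equi-stability of the trivial solution of (5.8.1).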
The use of a comparison SDE in Kc(Rd+) provides a very general setup, which includes several possibilities. For example, if R and G in (5.8.7) are single valued (singleton valued) maps, then, as observed earlier, (5.8.7) reduces to an ordinary differential system, and, consequently, there results the method of vector Lyapunov-like functions.
Similarly, if d = 1, then the Lyapunov-like functions and the comparison
system reduce to interval valued maps, which, of course, include as a special
case, the usual Lyapunov-like method. There is a need to further explore this
framework to get interesting results.
5.9 Set Differential Equations in (Kc(E), D)
In Section 1.8, we provided the necessary background for the metric space
(Kc (E), D), where E is a Banach space. As we have seen, SDEs generated by
the differential inclusions can be used as a tool to prove existence results for differential inclusions. The results of Section 4.4 offer such results in (Kc(Rn), D).
In this Section, we shall only indicate some basic results similar to the ones
proved in Chapter 2.
Let Γ : J × Kc(E) → K(E) = {all nonempty compact subsets of E}, and assume Γ is continuous on J × Kc(E), where J = [t0, t0 + a]. Consider the differential inclusion
x′ ∈ Γ(t, x),   x(t0) = x0 ∈ E.   (5.9.1)
Then we can define a mapping F : J × Kc(E) → Kc(E) by F(t, A) = co Γ(t, A), the closed convex hull of Γ(t, A), where A ∈ Kc(E). Since Γ is assumed to be continuous, F is continuous as well. We can now define the SDE
DH U = F(t, U),   U(t0) = U0 ∈ Kc(E),   (5.9.2)
where, as before, DH U is the Hukuhara derivative.
One can prove several results parallel to the ones we have investigated for
SDEs in the metric space (Kc(Rn), D). In fact, Tolstonogov [1] presents a
systematic study of differential inclusions in a Banach space as well as SDEs
generated by the differential inclusions. This reference has several general results
for SDEs.
We shall therefore list a couple of results which can be proved almost parallel
to the results considered already in (Kc (Rn ), D). We begin with the following
comparison result.
Theorem 5.9.1 Assume that F ∈ C[J × Kc(E), K(E)] and for t ∈ J, U, V ∈ Kc(E),
D[F(t, U), F(t, V)] ≤ g(t, D[U, V]),   (5.9.3)
where g ∈ C[J × R+, R+]. Suppose further that the maximal solution r(t) = r(t, t0, w0) of the scalar differential equation
w′ = g(t, w),   w(t0) = w0 ≥ 0,   (5.9.4)
exists on J. Then, if U (t) = U (t, t0, U0 ), V (t) = V (t, t0, V0 ) are any two solutions of (5.9.2) existing on J, we have the estimate
D[U (t), V (t)] ≤ r(t, t0, w0), t ∈ J,
provided D[U0 , V0 ] ≤ w0.
The proof follows exactly as in the proof of Theorem 2.2.1 with suitable modifications and is hence omitted. The following corollary is immediate and will be useful later.
Corollary 5.9.1 Assume that F ∈ C[J × Kc (E), Kc (E)] and
D[F(t, U), θ] ≤ g(t, D[U, θ]),   (5.9.5)
for t ∈ J, U ∈ Kc (E), where g satisfies the same assumptions as in Theorem 5.9.1. Then, if U (t) = U (t, t0, U0 ) is any solution of (5.9.2) existing on
J, D[U0 , θ] ≤ w0 implies
D[U (t), θ] ≤ r(t, t0, w0), t ∈ J,
r(t, t0 , w0) being the maximal solution of (5.9.4) existing on J.
We recall that D[U, θ] = ‖U‖ = sup[‖u‖ : u ∈ U], U ∈ Kc(E).
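For orientation, consider the simplest instance of the corollary (a sketch added here, not taken from the text): if (5.9.5) holds with g(t, w) = Lw for some constant L ≥ 0, then the maximal solution of (5.9.4) is r(t, t0, w0) = w0 e^{L(t − t0)}, and Corollary 5.9.1 with w0 = D[U0, θ] gives
‖U(t)‖ = D[U(t), θ] ≤ D[U0, θ] e^{L(t − t0)},   t ∈ J,
so that solutions of (5.9.2) can grow at most exponentially on J.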
Next, we state an existence and uniqueness result under assumptions more
general than the Lipschitz type condition, which provides the idea inherent in
the comparison principle.
Theorem 5.9.2 Assume that
(i) F ∈ C[R0, Kc(E)] where R0 = J × B(U0 , b), B(U0 , b) = {U ∈ Kc (E) :
D[U, U0 ] ≤ b} and D[F (t, U ), θ] ≤ M0 on R0;
(ii) g ∈ C[J × [0, 2b], R+], g(t, w) ≤ M1 on J × [0, 2b], g(t, 0) ≡ 0, g(t, w) is
monotone nondecreasing in w for each t ∈ J, and w(t) ≡ 0 is the only
solution of
w′ = g(t, w),   w(t0) = 0 on J;
(iii) D[F (t, U ), F (t, V )] ≤ g(t, D[U, V ]) for t ∈ J, U, V ∈ R0.
Then the successive approximations defined by
Un+1(t) = U0 + ∫_{t0}^{t} F(s, Un(s)) ds,   n = 0, 1, 2, · · · ,   (5.9.6)
exist on J0 = [t0, t0 + η], where η = min(a, b/M), M = max[M0, M1], as continuous functions and converge to the unique solution U(t) = U(t, t0, U0) of (5.9.2) on J0.
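A typical non-Lipschitzian choice satisfying hypothesis (ii), offered here as an illustration rather than taken from the text, is g(t, w) = h(w) with h continuous and nondecreasing on [0, 2b], h(0) = 0, h(w) > 0 for w > 0, and ∫_{0+} dw/h(w) = ∞ (an Osgood-type condition); the last requirement guarantees that w(t) ≡ 0 is the only solution of w′ = h(w), w(t0) = 0, while continuity on the compact set J × [0, 2b] supplies the bound M1.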
The proof of Theorem 5.9.2 proceeds similarly to the proof of Theorem 2.3.1 and, therefore, we omit it. We thus find that several results proved in (Kc(Rn), D) can be adapted to (Kc(E), D), with additional conditions whenever necessary to match the Banach space setting. For example, to prove Peano's theorem, we need to impose a suitable condition in terms of the measure of noncompactness and also employ the corresponding Ascoli-Arzelà theorem.
As we indicated earlier, there are several results in Tolstonogov [1] in this framework which connect differential inclusions in a Banach space with the associated SDEs when the multifunction involved is not convex valued, not continuous, or even not compact convex valued. The proofs of the corresponding results then become more complicated, but they can be constructed.
5.10 Notes and Comments
Section 5.2 begins by introducing impulses to SDEs and obtains basic results.
This material is from Vasundhara Devi [1]. Section 5.3 extends the monotone
iterative technique to impulsive SDEs and this is adapted from Vasundhara
Devi and Vatsala [1]. In Section 5.4, delay is incorporated in the SDEs, and the
fundamental results described are from Vasundhara Devi and Vatsala [2]. For
ordinary differential equations with delay, refer to Lakshmikantham and Leela
[3] and Hale [1]. For practical stability for ODE see Lakshmikantham, Leela
and Martynyuk [1]. Monotone iterative technique for SDEs with delay was
developed in McRae and Vasundhara Devi [2]. An interesting combination of
impulses and delay was infused into the SDEs, and basic results were obtained
in McRae and Vasundhara Devi [1] which form Section 5.5.
Section 5.6 deals with set difference equations, the material of which is compiled from Gnana Bhaskar and Shaw [1]. For the basic theory of difference
equations, see Lakshmikantham and Trigiante [1]. The results on set differential
equations with causal maps investigated in Section 5.7 are adapted from the
papers of Drici, McRae and Vasundhara Devi [1,2]. For a good reference for
differential equations with causal operators, see Corduneanu [1]. See also Lakshmikantham and Rama Mohana Rao [1].
Also, for results in abstract spaces, see Lakshmikantham and Leela [3]. Section 5.8 investigates the concept of Lyapunov-like functions whose values are in
metric spaces, which includes single, vector, matrix and cone valued Lyapunov
functions as special cases. For details, see Lakshmikantham and Vasundhara
Devi [1]. Finally, in Section 5.9 we indicate how one can extend most of the results obtained in the metric space (Kc (Rn ), D) to the metric space (Kc (E), D),
where E is a Banach space. For details and more general results see Tolstonogov
[1].
References
Ambrosio, L. and Cassano, F.S.
[1] Lecture notes on “Analysis in Metric Spaces”, Scuola Normale Superiore,
PISA, 2000.
Ambrosio, L. and Tilli, P.
[1] Selected topics on ”Analysis in Metric Spaces”, Scuola Normale Superiore,
PISA, 2000.
Agarwal, R.P., O’Regan, D. and Lakshmikantham, V.
[1] Fuzzy Volterra Integral equations : a stacking theorem approach, Nonlinear
Analysis 55, 2003, 299-312.
Artstein, Z.
[1] On the calculus of closed set-valued functions, Indiana Univ. Math. J., 24 (1974), 433-441.
Aubin, J.-P.
[1] Mutational and Morphological Analysis, Birkhauser, Berlin, 1999.
[2] Mutational equations in metric spaces, Set Valued Analysis, 1 (1993), 3-16.
Aubin, J.-P. and Frankowska, H.
[1] Set Valued Analysis, Birkhäuser, Basel, 1990.
Aumann, R.J.
[1] Integrals of set-valued functions, JMAA 12(1965), 1-12.
Banks, H.T. and Jacobs, M.Q.
[1] A differential calculus of multifunctions, JMAA 29 (1970), 246-272.
Brandao Lopes Pinto, A.J., De Blasi, F.S. and Iervolino, F.
[1] Uniqueness and existence theorems for differential equations with convex-valued solutions, Boll. Un. Mat. Ital. 3 (1970), 47-54.
Castaing, C. and Valadier, M.
[1] Convex analysis and measurable multifunctions, Springer-Verlag, Berlin 1977.
Corduneanu, C.
[1] Functional Equations with Causal Operators, Taylor & Francis, New York,
2003.
Clarke, F.H., Yu.S.Ledyaev, Stern, R.J. and Wolenski, P.R.
[1] Nonsmooth Analysis and Control Theory, Springer Verlag, New York, 1998.
De Blasi, F.S. and Iervolino, F.
[1] Equazioni differenziali con soluzioni a valore compatto convesso, Boll. U.M.I. 4, 2 (1969), 491-501.
Debreu, G.
[1] Integration of correspondences, in: Proc. Fifth Berkeley Symp. Math. Statist. Probab., Vol. 2, Part 1, Univ. California Press, Berkeley, CA (1967), 351-372.
Deimling, K.
[1] Multivalued Differential Equations, Walter de Gruyter, New York, 1992.
Diamond, P. [1] Stability and periodicity in fuzzy differential equations, IEEE
Trans. Fuzzy Systems, 8, (2000), 583-590.
Diamond, P. and Kloeden, P.
[1] Metric spaces of fuzzy sets, World Scientific, Singapore 1994.
Diamond, P. and Watson, P.
[1] Regularity of solution sets for differential inclusions quasi-concave in parameter, Appl. Math. Lett.13(2000), 31-35.
Drici, Z., McRae, F.A. and Vasundhara Devi, J.
[1] Basic results for Set Differential Equations with Causal maps.(to appear)
[2] Stability results for Set Differential Equations with Causal maps.(to appear)
Gnana Bhaskar, T., Lakshmikantham, V.
[1] Set Differential Equations and Flow Invariance, Appl. Anal. 82 (2003), 357-368.
[2] Generalized flow invariance for differential inclusions, To appear in Int.J.
Nonlinear Diff.Eq. TMA.
[3] Lyapunov stability for set differential equations, Dyn. Sys. Appl. 13(1),
2004, 1-10.
[4] Generalized proximal normals and flow invariance for differential inclusions,
Nonlinear Studies, 10(1), 2003, 29-37
[5] Generalized proximal normals and Lyapunov stability, Nonlinear Studies,
10(4), 2003, 335-341.
Gnana Bhaskar, T., Lakshmikantham, V. and Vasundhara Devi, J.
[1] Revisiting fuzzy differential equations, Non. Anal. TMA 58(3-4), 2004, 351-358.
Gnana Bhaskar, T. and Shaw, M.
[1] Stability results for Set Difference Equations, Dynamical Systems and Applications, 13 (2004), 479-485.
Gnana Bhaskar, T. and Vasundhara Devi, J.
[1] Stability Criteria in Set Differential Equations, to appear in Math.Comp.Mod.
[2] Nonuniform stability and boundedness criteria for Set Differential Equations,
Applicable Analysis, 84, No. 2 (2005), 131-142.
[3] Set Differential Systems and Vector Lyapunov functions, to appear Appl.Math.Comp.
Hale, J.
[1] Functional Differential Equations, Springer Verlag, New York, 1971.
Hausdorff, F.
[1] Set Theory, Chelsea, New York 1957.
Hermes, H.
[1] Calculus of set-valued functions and control, J. Math. Mech., 18 (1968), 47-59.
Hukuhara, M.
[1] Sur l'application semi-continue dont la valeur est un compact convexe, Funkcial. Ekvac. 10 (1967), 48-66.
[2] Intégration des applications mesurables dont la valeur est un compact convexe, Funkcial. Ekvac. 10 (1967), 205-223.
Hüllermeier, E.
[1] An approach to modelling and simulation of uncertain dynamical systems,
Int. J. Uncertainty, Fuzziness & Knowledge-Based Systems, 5 (1997), 117-137.
Kaleva, O.
[1] Fuzzy differential equations, Fuzzy Sets and Systems, 24(1987), 301-317.
[2] The Cauchy problem for fuzzy differential equations, Fuzzy Sets and Systems, 35(1990), 389-396.
[3] A note on fuzzy differential equations, to appear in Nonlinear Analysis: Theory and Methods (2005).
Kloeden, P.E., Sadovsky, B.N. and Vasilyeva, I.E.
[1] Quasi-flows and equations with nonlinear differentials, Nonlinear Analysis
51 (2002), 1143-1158.
Ladde, G.S. , Lakshmikantham, V. and Vatsala, A.S.
[1] Monotone Iterative Techniques for Nonlinear Differential Equations, Pitman,
London, 1985.
Lakshmikantham, V.,
[1] The connection between set and fuzzy differential equations, (to appear).
Lakshmikantham, V. and Bainov, D.D. and Simeonov, P.S.
[1] Theory of Impulsive Differential Equations, World Scientific, Singapore,
1989.
Lakshmikantham, V. and Koksal, S.
[1] Monotone Flows and Rapid Convergence for Nonlinear Partial Differential
Equations, Taylor & Francis, New York, 2003.
Lakshmikantham, V., and Leela, S.
[1] Differential and Integral Inequalities, Vol. I and II, Academic Press, New
York, 1969.
[2] On perturbing Lyapunov functions, Math.Sys.Theory 10(1976), 85-90.
[3] Nonlinear Differential Equations in Abstract Spaces, Pergamon Press, Oxford, 1981.
Lakshmikantham, V., Leela, S. and Martynyuk, A.A.
[1] Stability Analysis of Nonlinear Systems, Marcel Dekker, 1969.
[2] Practical Stability of Nonlinear Systems, World Scientific, Singapore, 1990.
Lakshmikantham, V., Leela, S. and Vasundhara Devi, J.,
[1] Stability theory for set differential equations, Dynamics of Continuous, Discrete, and Impulsive Systems (DCDIS) Series A, 11 (2004), 181-190.
Lakshmikantham, V., Leela, S. and Vatsala, A.S.
[1] Set valued hybrid differential equations and stability in terms of two measures, J. Hybrid.Syst. 2(2002), 169-188.
[2] Interconnection between set and fuzzy differential equations, Nonlinear Analysis, 54(2), 2003, 351-360.
Lakshmikantham, V. and Liu, X.Z.
[1] Stability analysis in terms of two measures, World Scientific, Singapore, 1993.
Lakshmikantham, V., Mohapatra, R.
[1] Theory of Fuzzy Differential Equations and Inclusions, Taylor & Francis,
2003.
Lakshmikantham, V., Matrosov, V. and Sivasundaram, S.
[1] Vector Lyapunov Functions and Stability Analysis of Nonlinear Systems,
Kluwer Academic Publishers, Dordrecht 1991.
Lakshmikantham, V., and Nieto, J.
[1] Differential Equations in metric spaces: an introduction and application to
fuzzy differential equations, DCDIS, 10 (2003), 991-1000.
Lakshmikantham, V. and Rama Mohana Rao, M.
[1] Theory of Integro-Differential Equations, Gordon and Breach, Amsterdam,
1995.
Lakshmikantham, V. and Tolstonogov, A.
[1] Existence and interrelation between set and fuzzy differential equations,
Nonlinear Analysis TMA, 55(2003), 255-268.
Lakshmikantham, V. and Trigiante, D.
[1] Theory of Difference Equations, Numerical Methods and Applications, 2nd
Edition, Marcel Dekker, New York, 2002.
Lakshmikantham, V., and Vasundhara Devi, J.
[1] Lyapunov-like functions in metric spaces, Dynamical Systems and Applications, 13 (2004), 553-559.
Lakshmikantham, V., Vatsala, A.S.
[1] Set differential equations and monotone flows, Nonlinear Dyn. Stability Theory, 3(2), 2003, 33-43.
[2] Hybrid Impulsive Fuzzy Differential Equations, (to appear).
Lay, S.R.
[1] Convex sets and their applications, John Wiley and Sons, NY, 1982.
Markov, S.M.
[1] Existence and uniqueness of solutions of the interval differential equation
X′ = F(t, X), Comptes Rendus de l'Académie Bulgare des Sciences, 31 (1978), 1519-1522.
McRae, F.A. and Vasundhara Devi, J.
[1] Impulsive Set Differential Equations with Delay, to appear.
[2] Monotone Iterative Techniques for Set Differential Equations with Delay, to
appear.
Mel’nik, T.A.
[1] An iterative method for solving systems of quasi differential equations with
slow and fast variables, Differential Equations, 33 (1997), 1335-1340.
Moore, R.E.
[1] Methods and applications of interval analysis, SIAM, Philadelphia, 1979.
Morales, P.
[1] Non-Hausdorff Ascoli theory, Dissertation Math, Rozprawy Math. 119 (1974).
O’Regan, D., Lakshmikantham, V. and Nieto, J.
[1] Initial and boundary value problems for fuzzy differential equations, Nonlinear Analysis, TMA 54 (2003), 405-415.
Panasyuk, A.I.
[1] Quasi-differential equations in metric spaces, Differential Equations, 21(1985),
914-921.
Plotnikov, V.A.
[1] Controlled quasi-differential equations and some of their properties, Differential Equations, 34, (1998), 1332-1336.
Rådström, H.
[1] An embedding theorem for spaces of convex sets, Proc. Amer. Math. Soc. 3 (1952), 165-169.
Rockafellar, R.T.
[1] Convex analysis, Princeton University Press, Princeton, N.J., 1970.
Seikkala, S.
[1] On the fuzzy initial value problem, Fuzzy Sets and Systems, 24 (1987), 319-330.
Tolstonogov, A.
[1] Differential Inclusions in a Banach Space, Kluwer Academic Publishers, Dordrecht, 2000.
[2] On the Scorza-Dragoni theorem for multifunctions with variable domain of definition, Matematicheskie Zametki, 48, N 5 (1990), 109-120 (in Russian).
Vasundhara Devi, J. and Vatsala, A.S.
[1] Monotone Iterative Technique for Impulsive and Set Differential Equations,
Nonlinear Studies, 11, No. 4 (2004), 639-658.
[2] A study of set differential equations with delay, Dynamics of Continuous,
Discrete, and Impulsive Systems (DCDIS) Series A: Mathematical Analysis, 11
(2004), 287-300.
Vasundhara Devi, J.
[1] Basic Results in Set Differential Equations, Nonlinear Studies, 10, No.3
(2003), 259-272.
Vorobiev, D., Seikkala, S.
[1] Towards the theory of fuzzy differential equations, Fuzzy Sets and Systems,
125(2), 2002, 231-237.
Index
Approximate Solutions, 50
Ascoli-Arzela Theorem, 33
equi-bounded, 79
Euler solution
existence, 52
weak asymptotic stability, 95
Existence
for impulsive SDE with delay,
171
global, for SDE with delay, 165
impulsive SDE, 140
in a metric space, 134
Existence for SDE
global, 49, 68
Peano’s type, 36
successive approximations, 32,
185
USC case, 60
extension principle, 105
extremal solutions
existence, 38
for impulses, 149, 157
via monotone iterates, 42, 46
bounded solutions
equi, 79
nonuniformly, 80
uniformly, 80
Castaing Representation Theorem,
17
causal map, 183
comparison theorem for any two solutions of
impulsive FDE, 121
impulsive SDE, 141
SDE, 28, 29
SDE with delay, 163
comparison theorem for Lyapunov-like functions for
fuzzy differential equations, 102
hybrid FDE, 125
hybrid impulsive FDE, 127
impulsive FDE, 123
impulsive SDE, 143
impulsive SDE with delay, 174
SDE, 66
SDE with causal operator, 190
set differential systems, 84
comparison theorem for set differential
nonstrict inequalities, 40, 139
nonstrict inequalities, with impulses, 114
strict inequalities, 37
continuous dependence, 35
Hausdorff
metric, 9
separation, 9
Hukuhara
derivative, 18
difference, 7
integral
Aumann, 20
Bochner, 21
integrally bounded, 20
Lipschitz continuous, 16
lower and upper solutions
coupled, 42
epigraph, 91
natural, 41
lower semicontinuous, 15
maximal and minimal solutions, 37
metric differential equation, 129
Monotone iterative technique for SDE,
42
partial ordering in Kc (Rn), 37
proximal
aiming condition, 56
normal, 56
selector, 17
set differential inequality, 40
stability
equi, 72, 106
equi, asymptotic, 72, 76, 107
nonuniform, 74
practical for SDE with delay,
166
uniform, 73, 107
uniform, asymptotic, 73, 78, 108
stability properties for
impulsive SDE, 145
impulsive SDE with delay, 175
set difference equations, 181
strongly invariant, 58
strongly measurable, 23
subdifferential, 91
subgradient, 91
support function, 13
uniform bounded solution, 79
upper semicontinuous, 15
weakly decreasing, 93
weakly invariant, 56