
SET-MARKOV PROCESSES
By
Raluca M. Balan, B.Sc., M.S.
December 2004
A Thesis
submitted to the Faculty of Graduate and Postdoctoral Studies
in partial fulfillment of the requirements
for the degree of
Doctor of Philosophy in Mathematics*
© Copyright 2005 by Raluca M. Balan, B.Sc., M.S., Ottawa, Canada
* The Ph.D. Program is a joint program with Carleton University, administered by the Ottawa-Carleton Institute of Mathematics and Statistics.
Abstract
This thesis introduces a type of Markov property, called the ‘set-Markov’ property,
that can be defined for set-indexed processes, and in particular for multiparameter
processes. This property is stronger than the ‘sharp Markov’ property that has been
introduced earlier in the literature. For processes indexed by the real line, the set-Markov property coincides with the classical Markov property.
An important class of set-Markov processes is the class of 'Q-Markov' processes, where Q
is a family of transition probabilities satisfying a Chapman-Kolmogorov type relationship. Two constructions are indicated for a Q-Markov process, as applications of
Kolmogorov’s extension theorem; the first construction is valid only under a certain
geometrical condition.
A Q-Markov process can be associated to a ‘suitably indexed’ collection of classical
Markov processes; therefore, ‘the generator’ of the process is defined as the collection
of all the generators of these classical Markov processes. It is proved that under
certain conditions, one can construct a Q-Markov process knowing its generator.
Processes with independent increments are Q-Markov; they are characterized by
‘convolution systems’ of distributions. In particular, Lévy processes have a double characterization in terms of their characteristic functions and in terms of their
generators. Some other examples of Q-Markov processes are considered, including
empirical processes. For each of these examples, the generator provides us with a
means of constructing the process.
‘Adapted sets’ and ‘optional sets’ generalize the classical notions of stopping time
and optional time. Using these sets, the strong Markov properties associated with a Q-Markov process and with a sharp Markov process, respectively, are considered.
Acknowledgements
I would like to thank my supervisor Dr. Gail Ivanoff for the immense amount of work
and energy that she has dedicated to this hard-to-achieve goal, for teaching me the
craft of research, and for keeping up my morale during the difficult periods of doubt
and disillusionment, which are inevitable in the process of learning to do research.
I would also like to thank my former professors from the University of Bucharest,
Dr. Monica Dumitrescu, Dr. Ion Cuculescu and Dr. Gheorghe Rautu for being
wonderful and dedicated teachers.
Dedication
I dedicate this work to my parents for their love and support.
Contents
Abstract

Acknowledgements

Dedication

1 Introduction

2 The Set-Indexed Framework
2.1 The Indexing Collection
2.2 Examples of Indexing Collections
2.3 Set-Indexed Processes
2.4 Sample Path Properties. Flows
2.5 The Collection of Projected Flows

3 Set-Markov Processes
3.1 Set-Markov Processes. General Properties
3.2 Q-Markov Processes. The Construction under SHAPE
3.3 Q-Markov Processes. The General Construction
3.4 Transition Semigroups

4 The Generator
4.1 The Definition of the Generator
4.2 The Construction using the Generator
4.3 The Construction using the Generator along the Projected Flows

5 Processes with Independent Increments
5.1 General Properties
5.2 The Compound Poisson Process
5.3 Construction of a Process with Independent Increments
5.4 Lévy Processes
5.5 Existence of a Cadlag Version of a Lévy Process

6 Examples
6.1 The Empirical Process
6.2 A Bivariate Process
6.3 A Jump Process

7 Strong Markov Properties
7.1 Adapted Sets and Optional Sets
7.2 The Strong Sharp Markov Property
7.3 The Strong Q-Markov Property

A Conditioning
A.1 Conditional Independence
A.2 Conditional Distribution

B Stochastic Processes on the Real Line
B.1 Lévy Processes
B.2 The Strong Q-Markov Property
B.3 Metric Entropy of Classes of Sets

Bibliography
Chapter 1
Introduction
The Markov property is without doubt one of the most appealing notions in the theory of stochastic processes, and many processes modelling physical phenomena enjoy it, simply because it is the natural expression of the law under which various systems evolve. To explain it in a few words, this law requires that the future behaviour of the system be completely independent of its past, given its present status; equivalently, in order to predict the future development of certain events, one does not need to know anything about the past, the present being all that is needed.
The first attempt to generalize the notions of 'time' and 'Markov property' (which are intrinsically related) was made in [51] (as early as 1948): a process $(X_z)_{z \in \mathbb{R}^2_+}$ indexed by the positive quadrant of the plane is said to be 'Markov with respect to a set $A \subseteq \mathbb{R}^2_+$' if $\mathcal{F}_A$ and $\mathcal{F}_{A^c}$ are conditionally independent given $\mathcal{F}_{\partial A}$, where $\mathcal{F}_D = \sigma(X_z; z \in D)$ for any $D \subseteq \mathbb{R}^2_+$. This definition (which is now known as the sharp Markov property) began to become popular as further steps were taken in the study of two-parameter processes, and processes such as the Brownian sheet (apparently first introduced by a statistician, in [45]) and the Poisson sheet were found to be natural generalizations of the classical Brownian motion and Poisson process.
The big question which stayed open for quite a few decades was: what is the largest
class of sets A for which the Brownian sheet, the Poisson sheet, or some of their
associated processes are sharp Markov?
But let us forget for the moment about the Markov property and ask ourselves a
natural question: leaving aside the pure theoretical aspect, why would anyone go into
the study of processes indexed by the plane, by $\mathbb{R}^d$, or even by more general objects?
The answer to this question comes in different forms. A possible application to
statistics of the Brownian sheet was suggested in [44]. In [4] it is claimed that some
two-parameter jump processes are known to have application in reliability theory.
Cabaña’s famous problem of the vibrating string is the most cited instance where one
can actually identify a concrete object (i.e. the displacement of the point x of the
string at moment t) with a two-parameter process. More recently, in [19] it is pointed
out that the stochastic wave equation in two spatial dimensions driven by a special
type of noise is known in the literature as having some physical [11] and geophysical
[58] applications. This is a particular case of the following general problem: suppose
one is given a physical system governed by a partial differential equation; suppose
then that the system is perturbed randomly, perhaps by some sort of white noise and
we want to find out how it evolves in time. The solution will be a multiparameter
stochastic process.
Returning to our discussion about the Markov property, a quick review of the
literature to date on this subject reveals the following.
At least four possible definitions were proposed for a Markov process in the continuous, multiparameter case.
The first and oldest one, the sharp Markov property, has produced the most research activity, and due to the constant efforts made in this area there are not many questions still waiting for an answer. But let us proceed chronologically.
In 1976, it was proved that: 1. the Brownian sheet fails to have the sharp Markov property with respect to a very simple region, the triangle with vertices $(0,0)$, $(0,1)$ and $(1,0)$ (see [69]); and 2. a process indexed by the plane which is Euclidean invariant and ergodic cannot satisfy the sharp Markov property for all open sets (see [18]). This led to the conclusion that, instead of the sharp σ-field $\mathcal{F}_{\partial A}$, one has to consider a larger one, called the germ σ-field, which is defined as $\mathcal{G}_{\partial A} = \cap_G \mathcal{F}_G$, where the intersection is taken over all open sets $G$ containing $\partial A$. The new Markov property, for which $\mathcal{F}_A$ and $\mathcal{F}_{A^c}$ are conditionally independent given $\mathcal{G}_{\partial A}$, was called the Lévy Markov property or germ Markov property (this definition was first introduced
in [55] and it was extensively examined in the Gaussian case, for instance in [62], [42];
some of its properties are discussed in [52]).
In 1984, it was proved that: 1. the Brownian sheet satisfies the germ Markov
property with respect to all open sets (see [59]), which, according to [53], is equivalent
to saying that the germ Markov property holds with respect to all sets; and 2. all
processes with independent increments are sharp Markov with respect to finite unions
of rectangles (see [64]).
For more than a decade it was thought that, in fact, for the Brownian sheet, the
finite unions of rectangles are the only sets for which the sharp Markov property holds.
An intuitive reason for this is given in [70], using the propagation of ‘singularities’
(points where the law of the iterated logarithm breaks down) in the sheet. The
first instance where the sharp Markov property was shown to hold for a domain
whose boundary contains no vertical or horizontal segment was the result of [21]
for domains bounded by ‘separation lines’ (continuous non-increasing curves that
meet both coordinate axes). For these sets it is actually shown that the sharp σ-field coincides with the germ σ-field. With this, the subject was revived and pursued
with even more fervour. By now, one thing was clear: from the point of view of
the sharp Markov property, Gaussian processes behave completely differently from
point processes. Intuitively, one can compare the Poisson sheet, whose discontinuities propagate along horizontal and vertical lines and for which one can get a lot of information just by looking at the boundary of a set, with the Brownian sheet, whose singularities criss-cross the plane horizontally and vertically in a symmetric way, so that it is impossible to say which way they are travelling by looking only at the boundary
of the set.
For point processes a partial answer was given in [56]: under certain assumptions
a point process with independent increments satisfies the sharp Markov property with
respect to a certain class of bounded open sets, a class which contains the triangles
and the lower layers (a set $A \subseteq \mathbb{R}^2_+$ is a lower layer if $z \in A \Rightarrow [0,z] \subseteq A$). This result
was an extension of a result of [16] (which appears also in [17]) in which it is proved
that the Poisson sheet satisfies the sharp Markov property with respect to the class
of bounded, relatively convex open sets.
The complete answer was given in 1992 in the form of two thorough papers written
by Dalang and Walsh.
In [23], the result of [21] is extended to domains whose boundaries are singular
curves of bounded variation; in this paper it is also shown that the Brownian sheet
satisfies the sharp Markov property with respect to ‘almost all’ Jordan curves, where
‘almost all’ has to be understood in the sense of Baire category.
Multiparameter processes with independent increments have been in the literature
for many years, their definition being a straightforward generalization of the classical
one: they have to assume independent values over disjoint hyper-parallelepipeds. It
was proved in [1] that most of the classical properties of processes with independent
increments on the real line can be transferred to multiparameter processes, including
the Lévy-Itô path decomposition of the stochastically continuous part. However,
it was not so straightforward to identify the appropriate type of Markov property
satisfied by these processes. The main result of [22] states that for jump processes with
independent increments which have only positive jumps, the sharp Markov property
holds for all bounded open sets; in the case of a process which has negative jumps as
well, the minimal splitting field turns out to be larger than the sharp field.
The third definition for a Markov process indexed by the plane was introduced
in [61] and it was first studied in the Gaussian case. It was proved there that a
Gaussian process is Markov (in the specified sense) if and only if it is an integral
process with respect to the Brownian sheet (the stochastic integral in the plane had
been introduced and studied in [15]). The authors of [34] used the same definition
of the Markov property, which was generalized and studied thoroughly in [46]. The
basic idea in this third Markov property, which for simplicity will be called the KLM-Markov property ('KLM' is an abbreviation for the names of the three authors of [46]), is the separation of parameters, i.e. the simultaneous definition of a horizontal Markov property and a vertical Markov property. (Clearly, the partial order $z = (s,t) \leq z' = (s',t')$, defined by $s \leq s'$ and $t \leq t'$ and induced by the Cartesian coordinates, plays a crucial role.) What is interesting about this definition is the fact that it has two equivalent formulations, in terms of the strict-sense history $\mathcal{F}_z = \sigma(X_{z'}; z' \in R_z)$ and the wide-sense history $\mathcal{F}_z^* = \sigma(X_{z'}; z' \in R_z^*)$ associated with the point $z$ in the plane
(here $R_z = \{z'; z' \leq z\}$ and $R_z^* = \{z'; z \not\leq z'\}$). Note that these are the two histories with respect to which the weak martingales and, respectively, the strong martingales are defined in the plane.
To be precise, we say that a two-parameter process $(X_z)_{z \in \mathbb{R}^2_+}$ is KLM-*Markov if for any $z$ the σ-fields $\mathcal{F}_z^*$ and $\sigma(X_{z'}; z' \in R_z^{*c})$ are conditionally independent given $\sigma(X_{z'}; z' \in \partial R_z^*)$, and moreover the predictor $E[h(X_{z_1}, \ldots, X_{z_n})|\mathcal{F}_z^*]$, for $z_1, \ldots, z_n \in R_z^{*c}$ and $h$ measurable and bounded, depends only on the values $X_{s,t_i}, X_{s_i,t}, X_z$ of the process at the points obtained by projecting $z_i$, $i = 1, \ldots, n$, onto the boundary of $R_z^*$. The definition is analogous in the strict-sense formulation: we say that a two-parameter process $(X_z)_{z \in \mathbb{R}^2_+}$ is KLM-Markov if for any $z$ the σ-fields $\mathcal{F}_z$ and $\sigma(X_{z'}; z' \in R_z^c)$ are conditionally independent given $\sigma(X_{z'}; z' \in \partial R_z)$, and moreover the predictor $E[h(X_{z_1}, \ldots, X_{z_n})|\mathcal{F}_z]$, for $z_1, \ldots, z_n \in R_z^c$ and $h$ measurable and bounded, depends only on the values $X_{z \wedge z_i}$ of the process at the points obtained by projecting $z_i$, $i = 1, \ldots, n$, onto the boundary of $R_z$. It is proved in [46] that in the definition of the KLM-*Markov property we can consider $n = 1$ and that the KLM-*Markov property is equivalent to the KLM-Markov property.
An easy but important exercise, not mentioned by the authors of the paper, is to show that all processes with independent increments are KLM-Markov. (By an 'increment' of a two-parameter process $X$ we understand the value $\Delta_{[z,z']}X$ of the process over the rectangle $[z,z']$, defined by $\Delta_{[z,z']}X = X_{z'} - X_{st'} - X_{s't} + X_z$ for $z = (s,t)$, $z' = (s',t')$.) To prove this, let $z' \in R_z^{*c}$ and write $X_{z'} = \Delta_{[z,z']}X + X_{st'} + X_{s't} - X_z$. The result follows since $\Delta_{[z,z']}X$ is independent of $\mathcal{F}_z^*$.
On the other hand the authors proved that any KLM-Markov process is sharp
Markov with respect to the finite unions of rectangles and germ Markov with respect
to the relatively convex bounded open sets.
Given a two-parameter homogeneous semigroup $P = (P_z)_{z \in \mathbb{R}^2_+}$, we say that the KLM-Markov process $X$ defined on the probability space $(\Omega, \mathcal{F}, P^x)$ is a realization of the semigroup $P$ if $X_0 = x$ a.s. and $E^x[h(X_z)] = P_z h(x)$ for any $z$ and any measurable and bounded function $h$. In [54] it is proved that such a realization exists and, moreover, that it can be chosen such that its trajectories are cadlag, i.e. right-continuous and with quadrantal limits.
The fourth definition of a Markov property in the plane was introduced in [72]. However, in this paper the problem is treated from a completely different point of view, since the authors consider processes parametrized by smooth curves in $\mathbb{R}^2_+$. Such a path-parametrized process $Y = (Y_\gamma)_\gamma$, indexed over all continuous curves in $\mathbb{R}^2_+$ that are either increasing or decreasing, is said to be $\gamma$-Markov if for every connected open set $D$ with a piecewise monotone boundary the σ-fields $\sigma(Y_\gamma; \gamma \subseteq D)$ and $\sigma(Y_\gamma; \gamma \subseteq D^c)$ are conditionally independent given $\sigma(Y_\gamma; \gamma \subseteq \partial D)$. On the other hand, if $X = (X_z)_{z \in \mathbb{R}^2_+}$ is an arbitrary process indexed by points in $\mathbb{R}^2_+$ and $Y$ is a path-parametrized process, then the process $\tilde{Y} = (\tilde{Y}_\gamma)_\gamma$ defined as $\tilde{Y}_\gamma = (Y_\gamma, X_{\gamma_0}, X_{\gamma_1})$ (where $\gamma_0, \gamma_1$ are the endpoints of $\gamma$) is said to be $\gamma^+$-Markov if for every connected open set $D$ with a piecewise monotone boundary the σ-fields $\sigma(\tilde{Y}_\gamma; \gamma \subseteq D)$ and $\sigma(\tilde{Y}_\gamma; \gamma \subseteq D^c)$ are conditionally independent given $\sigma(\tilde{Y}_\gamma; \gamma \subseteq \partial D)$. An example of such path-parametrized processes is the following: let $W$ be the planar white noise with $W([0,z]) = W_z$ and, for each smooth curve $\gamma$, let $A(\gamma), B(\gamma)$ be its vertical and horizontal shadows; set $Y_\gamma := (W(A(\gamma)), W(B(\gamma)))$ and $\tilde{Y}_\gamma := (Y_\gamma, W_{\gamma_0}, W_{\gamma_1})$; then $Y$ is $\gamma$-Markov and $\tilde{Y}$ is $\gamma^+$-Markov.
The case of Markov processes indexed by a discrete partially ordered set was considered by various authors ([14], [48]) for solving stochastic optimal control problems, the Markovian nature of the model allowing us to discard any unnecessary information when formulating such a problem. More precisely, if $(S, \leq)$ is a countable partially ordered set (with a minimal element) such that each $s \in S$ has a finite set $U(s) = \{\delta_1(s), \ldots, \delta_d(s)\}$ of direct successors and finitely many predecessors (think of it as being $\mathbb{N}^d$), and $(\mathcal{F}_s)_{s \in S}$ is an $S$-indexed filtration, then a stopping point is a random point $\nu$ in $S$ satisfying $\{\nu = s\} \in \mathcal{F}_s$ $\forall s$, and a strategy is an increasing sequence $(\sigma_n)_n$ of random points with $\tau = \inf\{n; \sigma_n = \sigma_{n+1}\}$, $\sigma_{n+1} \in U(\sigma_n)$ if $n < \tau$, $\sigma_n = \sigma_\tau$ if $n \geq \tau$, and $\sigma_{n+1}$ $\mathcal{F}_{\sigma_n}$-measurable. We will consider only stopping points $\nu$ which are accessible by means of a strategy, i.e. $\nu = \sigma_\tau$.
If for each $s \in S$ the reward obtained if we stop at $s$ is given by the random variable $Z_s$, and there is no advantage in taking either one of the paths that lead to $s$, to solve the optimal stopping problem associated with $\{Z_s; s \in S\}$ means to determine the accessible stopping point $\nu^*$ which maximizes the expected reward $E[Z_\nu]$. If in addition there is a running reward, given by the random variable $Z_{s,u}$, which is obtained any time we take the decision of going from $s$ to its direct successor $u$, then we are facing an optimal control problem for which we will have to find the best strategy $\sigma^*$ which maximizes $E[\sum_{n=0}^{\tau-1} Z_{\sigma_n,\sigma_{n+1}} + Z_{\sigma_\tau}]$. (Note that this also gives us the best stopping point $\nu^* = \sigma^*_{\tau^*}$.)
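To make these objects concrete, here is a minimal Python sketch (hypothetical, not from the thesis) of this control problem on the grid $\{0, \ldots, N\}^2$ ordered coordinatewise, with deterministic rewards so that expectations are trivial; the names `Z_stop`, `Z_run`, `successors` and `V` are all illustrative choices, and the value $V(s)$ is computed by backward induction over the direct successors $\delta_1(s) = s + (1,0)$, $\delta_2(s) = s + (0,1)$:

```python
from functools import lru_cache

N = 3
# stopping reward Z_s and running reward Z_{s,u} (both hypothetical choices)
Z_stop = {(i, j): float(i * j) for i in range(N + 1) for j in range(N + 1)}
def Z_run(s, u):
    return -0.5  # a constant cost for each decision to move from s to u

def successors(s):
    # direct successors of s, staying inside the grid {0,...,N}^2
    i, j = s
    return [u for u in ((i + 1, j), (i, j + 1)) if max(u) <= N]

@lru_cache(maxsize=None)
def V(s):
    # best total reward from s: either stop now, or move on and continue
    moves = [Z_run(s, u) + V(u) for u in successors(s)]
    return max([Z_stop[s]] + moves)

print(V((0, 0)))  # 6.0: walk to (3,3) (6 steps at -0.5 each) and stop there (reward 9)
```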
The usefulness of the Markov property in this set-up is the following. Let $Y = (Y_s)_{s \in S}$ be a fixed $S$-indexed process. If $Y$ is a Markov chain (in the sense that the σ-fields $\sigma(Y_u; u \leq s)$ and $\sigma(Y_u; u \geq s)$ are conditionally independent given $\sigma(Y_s)$ for any $s \in S$), then for any choice of the rewards $Z_s, Z_{s,u}$, $s \in S$, $u \in U(s)$, such that $Z_s$ is $\sigma(Y_s)$-measurable and $Z_{s,u}$ is $\sigma(Y_s, Y_u)$-measurable, the optimal control problem associated with them is solved by a strategy $\sigma^*$ satisfying the condition: $\sigma^*_{n+1}$ is $\sigma(Y_{\sigma^*_n}, \sigma^*_n)$-measurable $\forall n$ (any strategy satisfying this condition is called a Markov strategy). In other words, in the case when the underlying probabilistic model is a Markov chain, it is enough to solve the optimal control problem over the restricted class of Markov strategies. This result (whose converse is also true) is proved in [48].
In the same context, the authors of [14] considered Markov chains $(Y_s)_{s \in S}$ for which the mechanism of transition is known: $P(Y_{\delta_i(s)} = y \mid Y_u, u \leq s) = P_i(Y_s, y)$, $i = 1, \ldots, d$, where $P_1, \ldots, P_d$ are transition functions.
The literature is rather scarce when it comes to the strong Markov property for processes indexed by partially ordered sets. In the case of processes indexed by the plane, the obvious notion of stopping point (introduced for the first time in [57], in complete analogy with the classical case) does not prove to be fruitful for the Markov property. Instead, various authors considered stopping lines. A random decreasing line $L$ is called a stopping line with respect to the filtration $(\mathcal{F}_z)_z$ if for any $z \in \mathbb{R}^2_+$ we have $\{z \leq L\} \in \mathcal{F}_z$. (A 'decreasing' line is the image of a continuous curve $\gamma$ on $\mathbb{R}^2_+$ such that $\gamma(0)$ belongs to the $y$-axis, $\gamma(1)$ belongs to the $x$-axis, and $\theta \leq \theta'$ implies $\gamma_1(\theta) \leq \gamma_1(\theta')$ and $\gamma_2(\theta) \geq \gamma_2(\theta')$; to each decreasing line $l$ one can associate the set $D(l) := \{z; \exists z' \in l, z \leq z'\}$.)
In [56] it is proved that every strictly simple point process $N$ which is sharp Markov with respect to the sets $D(l)$ for every decreasing line $l$, and whose intensity is absolutely continuous with respect to the Lebesgue measure, is strong Markov in the sense that the σ-fields $\mathcal{F}_{D(L)}$ and $\mathcal{F}_{D(L)^c}$ are conditionally independent given $\mathcal{F}_L$ for any stopping line $L$, where $\mathcal{F}_A := \sigma(\{N_z 1_A(z), 1_A(z); z \in \mathbb{R}^2_+\})$ for an arbitrary random set $A$. They also proved that $\mathcal{F}_{D(L)} = \{F \in \mathcal{F}; F \cap \{L \leq l\} \in \mathcal{F}_{D(l)}$ for any decreasing line $l\}$, which is similar to the definition of the stopped σ-field in the classical case.
The same problem was studied in the last section of [72], but for path-parametrized processes. The path-parametrized process $\tilde{Y}$ is said to be strong Markov if for any stopping line $L$, the σ-fields $\mathcal{F}^*_{D(L)}$ and $\mathcal{F}^*_{D(L)^c}$ are conditionally independent given $\mathcal{F}^*_L$, where $\mathcal{F}^*_{D(L)} = \sigma(\{\tilde{Y}_{\gamma \cap D(L)}; \gamma \subseteq \mathbb{R}^2_+\})$, $\mathcal{F}^*_{D(L)^c} = \sigma(\{\tilde{Y}_{\gamma \cap D(L)^c}; \gamma \subseteq \mathbb{R}^2_+\})$ and $\mathcal{F}^*_L = \sigma(\{\tilde{Y}_\gamma; \gamma \subseteq L\})$.
A different type of strong Markov property was introduced in [31] (as early as 1977), in the general context of 'random fields', which in this paper are increasing collections $(\mathcal{F}_V)_V$ of σ-fields, indexed over the closed subsets $V$ of an $n$-dimensional space. Such a random field is said to be locally Markov if for any closed sets $V$ and $W$ with $V \cap W = \emptyset$, $\mathcal{F}_V$ and $\mathcal{F}_W$ are conditionally independent given $\mathcal{F}_{\partial V}$. A random set $T$ whose values are closed sets is said to be a Markov random set if $\{T \subseteq V\} \in \mathcal{F}_V$ for any closed set $V$. Two σ-fields are associated with each Markov random set $T$: $\mathcal{A}_T = \cap_{\varepsilon > 0} \mathcal{A}_T^\varepsilon$ and $\mathcal{B}_T = \cap_{\varepsilon > 0} \mathcal{B}_T^\varepsilon$, where $\mathcal{A}_T^\varepsilon = \sigma(\{F \cap \{T^\varepsilon \supseteq V\}; F \in \mathcal{F}_V, V \text{ closed}\}) \vee \sigma(T)$ and $\mathcal{B}_T^\varepsilon = \sigma(\{F \cap \{(\partial T)^\varepsilon \supseteq V\}; F \in \mathcal{F}_V, V \text{ closed}\}) \vee \sigma(T)$. The main result is that for a locally Markov random field $(\mathcal{F}_V)_V$ and any Markov random set $T$ whose values are compact sets, if $V$ is an arbitrary closed set and $F \in \mathcal{F}_V$, we have $P[F|\mathcal{A}_T] = P[F|\mathcal{B}_T]$ a.s. on $\{T \cap V = \emptyset\}$.
In a much more recent paper [33], another type of strong Markov property is introduced for processes indexed by a partially ordered discrete set $T$. Namely, a $T$-indexed process $X$ is said to be Markov if for any $t \in T$, the σ-fields $\sigma(X_{t'}; t' \leq t)$ and $\sigma(X_{t'}; t' \geq t)$ are conditionally independent given $X_t$. When $T$ is a finite or countable set, a random element $\tau$ in $T$ is called a splitting element if for any $t \in T$ we can write $\{\tau = t\} = F_1 \cap F_2$ with $F_1 \in \sigma(X_{t'}; t' \leq t)$ and $F_2 \in \sigma(X_{t'}; t' \geq t)$. To each splitting element $\tau$ one can associate the σ-fields $\mathcal{A}_1(\tau) = \sigma(\{F \cap \{\tau = t\}; F \in \sigma(X_{t'}; t' \leq t), t \in T\})$ and $\mathcal{A}_2(\tau) = \sigma(\{F \cap \{\tau = t\}; F \in \sigma(X_{t'}; t' \geq t), t \in T\})$. It is stipulated there that for a Markov process $X$, the σ-fields $\mathcal{A}_1(\tau)$ and $\mathcal{A}_2(\tau)$ are conditionally independent given $(\tau, X_\tau)$ if and only if $\tau$ is a splitting element.
In the 1990s, several authors again became interested in the sharp or germ Markov property, this time for processes which are solutions of stochastic partial differential equations (PDEs). In [60] it is proved that the solution of the Cauchy problem for a quasi-linear parabolic stochastic PDE driven by a space-time white noise is germ Markov and that, if the Cauchy problem is replaced by a problem with periodic boundary conditions, then the germ Markov property holds only in the linear case. A similar result was proved in [24] for the elliptic equation.
Because stochastic PDEs in more than one spatial dimension driven by white noise do not have real-valued process solutions (see [71]), there has been little study of Markov properties of solutions to these equations and, in particular, no study of the sharp Markov property. However, the Markov property indicates how information flows through space-time, so it would be interesting to study this property for equations which do have real-valued solutions. Therefore, equations driven by some other types of noise were taken into consideration. Partially motivated by some physical applications, the authors of [19] considered the wave equation in two spatial dimensions driven by a noise which is white in time and has a smooth spatial covariance. The same wave equation in two spatial dimensions, but this time driven by a Lévy point process, was studied in [20]. They showed that the solution is sharp Markov for domains bounded by a plane if and only if the angle between the plane and the time axis is at least $\pi/4$; the sharp Markov property also holds for bounded polyhedra.
The final remark which will conclude our review of the literature on this subject
is the following.
The next level of generality in the theory of stochastic processes indexed by partially ordered sets was to consider the class of processes with independent increments,
indexed by a collection $\mathcal{A}$ of closed subsets of the $d$-dimensional unit cube $[0,1]^d$.
These processes, called ‘set-indexed processes with independent increments’, have
to be finitely additive, and to assume independent values over disjoint sets. They
were also called ‘random measures’ in the terminology of [2], or ‘ID processes’ in the
terminology of [8] (which assumed in addition the stationarity of the increments).
In [2] the (set-indexed) Lévy processes are introduced as 'stochastically continuous' processes with independent increments, where set convergence is defined using the metric $d_\lambda(A,B) := \lambda(A \triangle B)$, $\lambda$ being the Lebesgue measure; it is proved that the distribution of such a process is completely characterized by a system of infinitely divisible distributions on the real line. Concentrating only on the jump part of a (stationary) Lévy process, the authors of [8] and [2] obtained in 1984 (independently of each other) some very similar conditions under which a 'cadlag' version exists; these conditions relate the underlying Lévy measure of the process to the metric entropy of the class $\mathcal{A}$.
A completely new approach in the modern theory of stochastic processes indexed by partially ordered sets was initiated and developed by Ivanoff and Merzbach for the martingale case (see for example [36], [25], [37], [38], [41]), powerful results like the Doob-Meyer decomposition and martingale representations finding their analogues in this context. In this framework, the processes that are taken into consideration are indexed by a collection $\mathcal{A}$ of closed subsets of a topological space $T$. If we identify this class with the class of all rectangles $[0,z]$, $z \in \mathbb{R}^d$, and we denote by $X_z$ the value of the set-indexed process at the rectangle $[0,z]$, then $(X_z)_z$ will be a $d$-parameter process; hence, we can view this theory as a generalization of the theory of multiparameter processes.
In 1998 a new paper [39] was written by Ivanoff and Merzbach, this time dealing
with a certain Markov property for processes indexed by the collection $\mathcal{A}$. This
paper motivated the beginning of the present research in the area of certain Markov
properties for set-indexed processes and led to the writing of this work.
When trying to define a Markov property for processes indexed by the collection $\mathcal{A}$ of sets (which will be introduced formally in Chapter 2), there is a natural question
that comes to mind: is it possible to find an analogous sharp Markov property in
this general context? The answer to this question is positive and the analogy to the
multiparameter theory is complete in the sense that, if we identify our set-indexed
process with a multiparameter process, then the sharp Markov property defined in
the general framework becomes exactly the sharp Markov property for finite unions
of rectangles.
To make things more precise, it is said in [39] that an $\mathcal{A}$-indexed process $X$ is sharp Markov if $\mathcal{F}_B \perp \mathcal{F}_{B^c} \mid \mathcal{F}_{\partial B}$ for any $B \in \mathcal{A}(u)$, where $\perp$ denotes conditional independence (see Appendix A.1) and $\mathcal{F}_B = \sigma(\{X_A; A \in \mathcal{A}, A \subseteq B\})$, $\mathcal{F}_{B^c} = \sigma(\{X_A; A \in \mathcal{A}, A \not\subseteq B\})$ and $\mathcal{F}_{\partial B} = \sigma(\{X_A; A \in \mathcal{A}, A \subseteq B, A \not\subseteq B^\circ\})$. (Throughout this work we will denote by $\mathcal{A}(u)$ the class of all finite unions of sets in $\mathcal{A}$; this collection is a natural object to work with if we recall that in the theory of two-parameter processes the increments are usually defined over the rectangles with sides parallel to the coordinate axes but which do not necessarily have the origin as a vertex, and any such rectangle $C$ can be written as the difference $C = A \setminus B$, where $A$ is a rectangle in $\mathcal{A}$ and $B$ is the union of two rectangles from $\mathcal{A}$ which are contained in $A$; in other words, for an additive set-indexed process $X$, the value $X_C$ can be identified exactly with the increment $X_A - X_B$.)
A set-indexed process $(X_A)_{A \in \mathcal{A}}$ is said to have independent increments if $X_{C_1}, \ldots, X_{C_n}$ are independent whenever $C_1 := A_1 \setminus B_1, \ldots, C_n := A_n \setminus B_n$ are pairwise disjoint increments with $A_i \in \mathcal{A}$, $B_i \in \mathcal{A}(u)$. The authors of [39] also considered another type of Markov property (the set-indexed analogue of the KLM-Markov property), stronger than the sharp Markov property, which is satisfied by all processes with independent increments: a process $X = (X_A)_{A \in \mathcal{A}}$ is called Markov if $\forall B \in \mathcal{A}(u)$, $\forall A_1, \ldots, A_k \in \mathcal{A}$ with $A_i \not\subseteq B$, and for any measurable bounded function $h : \mathbb{R}^k \to \mathbb{R}$, we have
$$E[h(X_{A_1}, \ldots, X_{A_k})|\mathcal{F}_B] = E[h(X_{A_1}, \ldots, X_{A_k})|\mathcal{F}_{\partial B} \cap \mathcal{F}_{\cup_{i=1}^k A_i}].$$
It is proved there that under certain circumstances the sharp Markov property and
the Markov property are equivalent. Moreover, a sharp Markov process is also strong
Markov with respect to any ‘stopping set’ in a sense that will be specified later on, and
consequently, any sharp Markov point process is strong Markov (roughly speaking, a
stopping set is any random set $\xi$ which satisfies $\{A \subseteq \xi\} \in \mathcal{F}_A$ for all $A \in \mathcal{A}$).
As in the case of two-parameter processes, constructing a general process which
is sharp Markov (even with respect to such a small class of sets as the class of finite
unions of rectangles) is not an easy task and the question of existence of such a process
is not a trivial matter. For planar processes the problem was solved by the author of
[54], who constructed a KLM-Markov process which, as we know (Theorem 3.7 of [46]), is sharp Markov with respect to the finite unions of rectangles. The trajectories of
the process constructed in [54] also have nice regularity properties.
The main type of Markov property that we will consider throughout this work will be different from the ones introduced in [39] and will be called 'set-Markov', to emphasize the fact that it can be defined in the general context of set-indexed processes (in fact, the set-Markov property implies the sharp Markov property, but not the Markov property). This definition requires that the value $X_{A \setminus B}$ of the process over the increment $A \setminus B$ be conditionally independent of the history $\mathcal{F}_B$ given the present status $X_B$. We believe that this is a natural definition because it captures the essence of the Markov property in terms of the increments, allowing us to have a perfect analogy with the classical case (for which we know that for a Markov process $X$, the increment $X_t - X_s$, $s < t$, is conditionally independent of the past history $\mathcal{F}_s$ given $X_s$). Moreover, from the technical point of view, since the collection $\mathcal{C}$ of all increments is a semialgebra (and hence $\mathcal{C}(u)$, the collection of all finite unions of sets in $\mathcal{C}$, is an algebra of sets, in other words a collection for which the usual operations with sets are permitted), this definition will enable us to make all the computations with objects indexed by the same family.
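Written schematically (a paraphrase of the definition that will be given formally in Chapter 3, not a verbatim statement of it):
$$X_{A \setminus B} \;\perp\; \mathcal{F}_B \;\big|\; X_B \qquad \text{for } A \in \mathcal{A},\ B \in \mathcal{A}(u),$$
which mirrors the classical statement $X_t - X_s \perp \mathcal{F}_s \mid X_s$ for $s < t$.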
This work is organized as follows:
In Chapter 2 we will present the general definitions and properties that are specific
to the set-indexed framework; in particular, we will examine the additivity property
of a set-indexed process and we will formally introduce the notion of ‘flow’ as an
increasing map from an interval [0, a] of the real line to the class A(u) of sets.
In Chapter 3 we will first define the set-Markov property and we will examine some
of its basic properties: all processes with independent increments are set-Markov; a
set-Markov process becomes Markov in the classical sense, when it is transported by
a flow (it is clear that by using different flows, time can get very easily compressed or
decompressed, without affecting in any way the ‘distance’ between the arrival sets;
so, the notion of homogeneity is meaningless for set-Markov processes). Next we will introduce a sub-class of set-Markov processes, called Q-Markov, for which there exists a 'transition system' $Q$ characterizing the transitions from a state $B \in \mathcal{A}(u)$ to another state of the form $B' \in \mathcal{A}(u)$ with $B \subseteq B'$. For Q-Markov processes we will indicate two types of constructions as applications of Kolmogorov's extension theorem.
The first construction is valid only if the indexing collection $\mathcal{A}$ satisfies a certain geometric property called SHAPE (in particular, this construction is perfectly suited for
multiparameter processes), while the second construction is valid in the general case.
For both constructions one has to impose a natural ‘consistency’ assumption on the
transition system Q, which basically requires that the finite-dimensional distributions
of the process be uniquely defined, up to a permutation.
In Chapter 4 we will define ‘the generator’ of a Q-Markov process as the collection
of all the generators of classical Markov processes obtained as ‘traces’ of the original
set-indexed process over some (suitably chosen) flows. This definition will permit us
to translate the previous ‘consistency’ assumption in terms of the generators, and
therefore to identify the necessary and sufficient conditions that have to be satisfied
by a collection of one-dimensional generators so that they become the generator of a
set-indexed Q-Markov process.
In Chapter 5 we will examine the class of set-indexed processes with independent
increments, in the light of their set-Markov property. The object that will allow
us to reconstruct such a process will no longer be the transition system $Q$ but a 'convolution' family $(F_C)_{C \in \mathcal{C}}$ of probability measures on $\mathbb{R}$ which will be identified with the distributions of the process over the sets in $\mathcal{C}$. The sub-class of Lévy processes
will be introduced very much in the spirit of the classical theory, and the starting
point will be the existing theory of multidimensional Lévy processes. In particular,
for these processes the consistency condition satisfied by the generators has a very
simple form.
In Chapter 6 we will examine three examples of Q-Markov processes which do not
have independent increments: the empirical process, a bivariate process and a simple
example of a ‘jump’ process. Again, for these processes, the consistency condition
satisfied by the generator is much simpler than the one we derived in the general case.
In Chapter 7 we introduce the notions of ‘adapted set’ and ‘optional set’ as natural
generalizations of the notions of ‘stopping time’ and ‘optional time’. Using these
objects we will examine the strong Markov properties that can be associated with the sharp Markov property and the Q-Markov property.
Finally, at the end of this work we have included two appendix chapters which
contain generally known facts that have been used occasionally for deriving certain
results: while Chapter A contains some classical results about conditioning, in Chapter B we have gathered some results about stochastic processes indexed by the real
line.
Chapter 2
The Set-Indexed Framework
This chapter introduces the general definitions, properties and assumptions that are
used in the theory of set-indexed processes.
2.1 The Indexing Collection
Let $T$ be a compact Hausdorff topological space.
Definition 2.1.1 A collection $\mathcal{A}$ of compact subsets of $T$ is called an indexing collection if it satisfies the following properties:

1. it contains $\emptyset$ and $T$;

2. $\forall A, B \in \mathcal{A}$: $A, B \neq \emptyset \Rightarrow A \cap B \neq \emptyset$;

3. it is closed under arbitrary intersections; and

4. (Separability from above) there exist an increasing sequence of finite sub-semilattices $(\mathcal{A}_n)_n$ of $\mathcal{A}$ which are closed under intersections (and contain $\emptyset$ and $T$) and a sequence $(g_n)_n$ of functions $g_n : \mathcal{A} \to \mathcal{A}_n(u)$ (where $\mathcal{A}_n(u)$ is the class of unions of sets in $\mathcal{A}_n$) such that:

(a) $g_n$ preserves arbitrary intersections and finite unions, i.e. if $(A_\alpha)_{\alpha \in \Lambda}$ is an arbitrary collection of sets in $\mathcal{A}$, then $g_n(\cap_{\alpha \in \Lambda} A_\alpha) = \cap_{\alpha \in \Lambda} g_n(A_\alpha)$, and if $A_1, \ldots, A_k, A'_1, \ldots, A'_m \in \mathcal{A}$ are such that $\cup_{i=1}^k A_i = \cup_{j=1}^m A'_j$, then $\cup_{i=1}^k g_n(A_i) = \cup_{j=1}^m g_n(A'_j)$;

(b) $A \subseteq g_n(A)^\circ$ for any $A \in \mathcal{A}$;

(c) $g_{n+1}(A) \subseteq g_n(A)$ for any $A \in \mathcal{A}$;

(d) $A = \cap_n g_n(A)$ for any $A \in \mathcal{A}$;

(e) $g_n(\emptyset) = \emptyset$.
Comments 2.1.2

1. The sequence $(g_n)_n$ will play the role of the 'dyadic' approximations for the indexing sets. They will permit us to approximate each set in $\mathcal{A}$ 'strictly from above'.

2. If $A \in \mathcal{A}$ and $A \neq T$, then there exists a rank $N$ such that $g_n(A) \neq T$ for all $n \geq N$. Also, if $A \in \mathcal{A}$, $A \neq \emptyset$, then $g_n(A) \neq \emptyset$ for all $n$. This is true because $A = \cap_n g_n(A)$.

3. Let $\emptyset' := \cap_{A \in \mathcal{A} \setminus \{\emptyset\}} A$ be the minimal set in $\mathcal{A}$. The role played by the set $\emptyset'$ will be similar to that played by $0$ in the classical case of processes indexed by $\mathbb{R}_+$. Note that we can also write $\emptyset' = \cap_n \cap_{A \in \mathcal{A}_n \setminus \{\emptyset\}} A$: let $x$ be such that $x$ lies in any set $A \in \mathcal{A}_n \setminus \{\emptyset\}$ for any $n$; then $x$ lies in any set $B \in \mathcal{A}_n(u) \setminus \{\emptyset\}$ for any $n$; in particular, $x \in g_n(A)$ $\forall n$, $\forall A \in \mathcal{A}$, and hence $x \in \cap_n g_n(A) = A$ $\forall A \in \mathcal{A}$. Denoting $A_n = \cap_{A \in \mathcal{A}_n \setminus \{\emptyset\}} A$, $(A_n)_n$ will be a decreasing sequence of compact nonempty sets with intersection $\emptyset'$. By the finite intersection property, $\emptyset' \neq \emptyset$. We will always assume that $\emptyset' \in \mathcal{A}'$ if $\mathcal{A}'$ is a finite sub-semilattice of $\mathcal{A}$. (In particular, $\emptyset' \in \mathcal{A}_n$ for any $n$.)

4. Usually the finite sub-semilattices $\mathcal{A}_n$ and the functions $g_n$ are chosen such that $g_n(A) = \cap_{D \in \mathcal{A}_n(u),\, A \subseteq D^\circ} D$.

5. The function $g_n$ has the monotone property, i.e. if $A, A' \in \mathcal{A}$ are such that $A \subseteq A'$ then $g_n(A) \subseteq g_n(A')$. (To see this, write $A' = A \cup A'$ and then, because $g_n$ preserves finite unions, $g_n(A') = g_n(A) \cup g_n(A') \supseteq g_n(A)$.)
Let $\mathcal{A}(u)$ be the class of all finite unions of sets in $\mathcal{A}$. Note that $\mathcal{A}(u)$ is closed under finite intersections and finite unions. Each function $g_n$ has a unique extension to $\mathcal{A}(u)$, defined by:
$$g_n(B) := \cup_{A \in \mathcal{A}, A \subseteq B}\, g_n(A).$$
Note that if $B = \cup_{i=1}^k A_i$ is an arbitrary set in $\mathcal{A}(u)$ with $A_i \in \mathcal{A}$, then $g_n(B) = \cup_{i=1}^k g_n(A_i)$.
Lemma 2.1.3 (a) The functions $g_n$ preserve finite intersections and finite unions on $\mathcal{A}(u)$.

(b) (Lemma 2.1.2, [41]) For any $B \in \mathcal{A}(u)$ we have $B = \cap_n g_n(B)$, with $g_{n+1}(B) \subseteq g_n(B)$.

(c) The functions $g_n$ have the monotone property on $\mathcal{A}(u)$.
Proof: (a) It is clear that $g_n$ preserves finite unions of sets in $\mathcal{A}(u)$. To show that $g_n$ also preserves finite intersections, let $B_i = \cup_{j=1}^{m_i} A_{ij}$, $i = 1, \ldots, k$, be sets in $\mathcal{A}(u)$ with $A_{ij} \in \mathcal{A}$. Then $\cap_{i=1}^k B_i = \cap_{i=1}^k \cup_{j=1}^{m_i} A_{ij} = \cup_{j_1=1}^{m_1} \cdots \cup_{j_k=1}^{m_k} \cap_{i=1}^k A_{ij_i}$ and hence
$$g_n(\cap_{i=1}^k B_i) = \cup_{j_1=1}^{m_1} \cdots \cup_{j_k=1}^{m_k} \cap_{i=1}^k g_n(A_{ij_i}) = \cap_{i=1}^k \cup_{j=1}^{m_i} g_n(A_{ij}) = \cap_{i=1}^k g_n(B_i).$$
(c) Let $B, B' \in \mathcal{A}(u)$ be such that $B \subseteq B'$. Each set $A \in \mathcal{A}$ which is contained in $B$ will also be contained in $B'$, and by the definition of $g_n$ it follows that $g_n(B) \subseteq g_n(B')$. □
A very important role in the theory of set-indexed processes will be played by the following collection of sets:
$$\mathcal{C} := \{C = A \setminus B;\ A \in \mathcal{A}, B \in \mathcal{A}(u)\}.$$
Writing an arbitrary set $C$ in $\mathcal{C}$ as $C = A \setminus (A \cap B)$, we can always assume that a set in $\mathcal{C}$ is of the form $C = A \setminus B$ with $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$, $B \subseteq A$. The value $X_C = X_A - X_B$ of an (additive!) set-indexed process $X$ over such a set $C$ will play the role of the increment $X_t - X_s$, $s < t$, from the classical theory of processes indexed by $\mathbb{R}_+$.
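For instance (a worked special case, using the identification $X_z = X_{[0,z]}$ and the planar increment recalled in Chapter 1): for $z = (s,t) \leq z' = (s',t')$ in $\mathbb{R}^2_+$, take $A = [0,z']$ and $B = [0,(s,t')] \cup [0,(s',t)]$, so that $C = A \setminus B = (s,s'] \times (t,t']$. By additivity, $X_B = X_{st'} + X_{s't} - X_z$, and therefore
$$X_C = X_A - X_B = X_{z'} - X_{st'} - X_{s't} + X_z = \Delta_{[z,z']}X.$$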
Lemma 2.1.4 Every set $C \in \mathcal{C}$ admits a representation $C = A_C \setminus B_C$ with $A_C \in \mathcal{A}$, $A_C \cap B_C \in \mathcal{A}(u)$ (called 'the maximal representation') such that for any other representation $C = A \setminus B$, $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$, we have $A_C \subseteq A$ and $B_C \supseteq B$.

Proof: Let us consider all the pairs $(A_i, B_i)$, $i \in I$, with the property that $A_i \in \mathcal{A}$, $B_i \in \mathcal{A}(u)$ and $C = A_i \setminus B_i$. Then $C = A_C \setminus B_C$, where $A_C := \cap_{i \in I} A_i$ and $B_C := \cup_{i \in I} B_i$. Since $\mathcal{A}$ is closed under arbitrary intersections, $A_C \in \mathcal{A}$. We will prove next that $A_C \cap B_C \in \mathcal{A}(u)$. Note that for every $i \in I$ we have $C = A_i \setminus B_i \supseteq A_C \setminus B_i \supseteq A_C \setminus B_C = C$. Hence $C = A_C \setminus B_i = A_C \setminus (A_C \cap B_i)$ for every $i \in I$, i.e. $A_C \cap B_i$ is the complement of $C$ in $A_C$ for every $i \in I$. Denote $B := A_C \cap B_i$ (regardless of which $i \in I$ we are considering). Clearly $B \in \mathcal{A}(u)$. Since $B = A_C \cap B_i$ for every $i \in I$, we also have $B = A_C \cap \cup_{i \in I} B_i = A_C \cap B_C$. □
In complete analogy with the class $\mathcal{C}$ we introduce its sub-classes
$$\mathcal{C}_n := \{C = A \setminus B;\ A \in \mathcal{A}_n, B \in \mathcal{A}_n(u)\}.$$
Note that $\mathcal{A} \subseteq \mathcal{C}$ and $\mathcal{A}_n \subseteq \mathcal{C}_n$.
Lemma 2.1.5 (a) Each set in $\mathcal{A}(u)$ (respectively $\mathcal{A}_n(u)$) can be expressed as a union of pairwise disjoint sets in $\mathcal{C}$ (respectively $\mathcal{C}_n$).

(b) The collections $\mathcal{C}$ and $\mathcal{C}_n$ are semi-algebras, i.e. they are closed under finite intersections and the complement of every set in $\mathcal{C}$ (respectively $\mathcal{C}_n$) can be expressed as a finite union of pairwise disjoint sets in $\mathcal{C}$ (respectively $\mathcal{C}_n$).

(c) The algebra generated by $\mathcal{C}$ (i.e. the minimal algebra containing $\mathcal{C}$) is exactly $\mathcal{C}(u)$, the class of finite unions of sets in $\mathcal{C}$.

(d) Each set in $\mathcal{C}(u)$ can be expressed as a finite union of pairwise disjoint sets in $\mathcal{C}$.
Proof: (a) Let $B = \cup_{i=1}^k A_i \in \mathcal{A}(u)$ with $A_i \in \mathcal{A}$. If we denote $C_i := A_i \setminus \cup_{j=1}^{i-1} A_j$ for $i = 1, \ldots, k$, then $C_1, \ldots, C_k$ are pairwise disjoint sets in $\mathcal{C}$ whose union is $B$.

(b) We will prove the statement only for $\mathcal{C}$, the same arguments working for $\mathcal{C}_n$ as well. If $C = A \setminus B$, $C' = A' \setminus B'$ are two sets in $\mathcal{C}$, with $A, A' \in \mathcal{A}$ and $B, B' \in \mathcal{A}(u)$, then $C \cap C' = (A \cap A') \setminus (B \cup B')$ is again a set in $\mathcal{C}$. Let $C = A \setminus B$ be an arbitrary set in $\mathcal{C}$ with $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$, $B \subseteq A$. Then $C^c = A^c \cup B$ and $A^c \cap B = \emptyset$. Note that $A^c = T \setminus A \in \mathcal{C}$ and $B$ can be expressed as a finite union of disjoint sets in $\mathcal{C}$.

(c) It is clear that $\mathcal{C}(u)$ is contained in any algebra which contains $\mathcal{C}$; hence $\mathcal{C}(u)$ is contained in the algebra generated by $\mathcal{C}$. On the other hand, $\mathcal{C}(u)$ itself is an algebra: clearly $\emptyset \in \mathcal{C}(u)$ and $\mathcal{C}(u)$ is closed under finite unions; to see that $\mathcal{C}(u)$ is also closed under complements, let $C = \cup_{i=1}^k C_i$ be an arbitrary set in $\mathcal{C}(u)$ with $C_i \in \mathcal{C}$; then, since $\mathcal{C}$ is a semialgebra, $C^c = \cap_{i=1}^k C_i^c = \cap_{i=1}^k \cup_{j=1}^{m_i} C_{ij} = \cup_{j_1=1}^{m_1} \cdots \cup_{j_k=1}^{m_k} \cap_{i=1}^k C_{ij_i} \in \mathcal{C}(u)$.

(d) This is a general property of any algebra of sets. □
Definition 2.1.6 (a) A representation $B = \cup_{i=1}^k A_i$ with $A_i \in \mathcal{A}$ is called extremal if $A_i \not\subseteq \cup_{j \neq i} A_j$ for each $i = 1, \ldots, k$.

(b) A representation $C = A \setminus B$ with $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$ is called extremal if the representation of $B$ is extremal.

Note that extremal representations always exist for sets in $\mathcal{A}(u)$ and $\mathcal{C}$, although they might not be unique. In order to examine the uniqueness of the extremal representations, one has to consider the following geometric assumption.
Assumption 2.1.7 (SHAPE) If $A, A_1, \ldots, A_k \in \mathcal{A}$ and $A \subseteq \cup_{i=1}^k A_i$, then there exists an index $i = 1, \ldots, k$ such that $A \subseteq A_i$.

Comments 2.1.8

1. Note that if SHAPE holds and $A \in \mathcal{A}$ is such that $A = \cup_{i=1}^k A_i$ for some $A_1, \ldots, A_k \in \mathcal{A}$, then there exists an index $i = 1, \ldots, k$ such that $A = A_i$.

2. Without loss of generality we can assume that any finite sub-semilattice of $\mathcal{A}$ (in particular $\mathcal{A}_n$) satisfies SHAPE: if $A_0 \subseteq \cup_{i=1}^k A_i$, then $A_0 = \cup_{i=1}^k (A_0 \cap A_i)$; replace $A_0$ in the finite sub-semilattice by $A_0 \cap A_i$, $i = 1, \ldots, k$.
Lemma 2.1.9 (a) If $\mathcal{A}$ satisfies SHAPE then any set in $\mathcal{A}(u)$ has a unique extremal representation.

(b) If $\mathcal{A}$ satisfies SHAPE then any set in $\mathcal{C}$ has a unique extremal representation of the form $C = A \setminus B$ with $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$ and $B \subseteq A$; moreover, this representation 'almost' coincides with the maximal representation, in the sense that $A = A_C$ and $B = A_C \cap B_C$.

Proof: (a) Say $B = \cup_{i=1}^k A_i = \cup_{j=1}^m A'_j$, $A_i, A'_j \in \mathcal{A}$, are two extremal representations of the set $B \in \mathcal{A}(u)$. By SHAPE each $A_i$ has to be contained in some $A'_{j_i}$ with $j_i = 1, \ldots, m$, and each $A'_{j_i}$, in turn, has to be contained in some $A_{l_i}$. Hence $A_i \subseteq A_{l_i}$ and, because the representation $B = \cup_{i=1}^k A_i$ is extremal, we must have $i = l_i$. This proves that $A_i = A'_{j_i}$ for each $i = 1, \ldots, k$. We get $\cup_{i=1}^k A'_{j_i} = \cup_{i=1}^k A_i = \cup_{j=1}^m A'_j$, which forces $\{j_1, \ldots, j_k\} = \{1, \ldots, m\}$, since the representation $B = \cup_{j=1}^m A'_j$ is extremal. Hence the two representations are the same.

(b) To avoid trivialities assume that $C \neq \emptyset$. Let $C = A \setminus B$ be an arbitrary representation of $C$ with $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$, $B \subseteq A$. By maximality $B \subseteq B_C$ and hence $A \setminus B_C \subseteq A \setminus B = C = A_C \setminus B_C \subseteq A \setminus B_C$. It follows that $C = A \setminus B_C$. Therefore $A = (A \setminus B_C) \cup (A \cap B_C) = C \cup (A \cap B_C) \subseteq A_C \cup B_C$. Because SHAPE holds and $A$ cannot be contained in $B_C$ (this would imply that $C = A \setminus B_C = \emptyset$), we get $A \subseteq A_C$ and hence $A = A_C$ by the definition of $A_C$. Finally, $B = A_C \cap B_C$, as these sets are both the complement of $C$ in $A = A_C$. □
Definition 2.1.10 Let $\mathcal{A}'$ be a finite sub-semilattice of $\mathcal{A}$ and $A \in \mathcal{A}'$. The set
$$C := A \setminus \cup_{A' \in \mathcal{A}',\, A \not\subseteq A'} A'$$
is called the left neighbourhood of $A$ in $\mathcal{A}'$.

Lemma 2.1.11 Let $\mathcal{A}'$ be a finite sub-semilattice of $\mathcal{A}$ and $A \in \mathcal{A}'$. If $C$ is the left neighbourhood of $A$ in $\mathcal{A}'$ then
$$C = A \setminus \cup_{A'' \in \mathcal{A}',\, A'' \subset A} A'',$$
where $\subset$ denotes strict inclusion.

Proof: It is clear that if $A'' \subset A$ then $A \not\subseteq A''$. On the other hand, taking an arbitrary set $A' \in \mathcal{A}'$ such that $A \not\subseteq A'$, then $A'' := A \cap A' \in \mathcal{A}'$ and $A'' \subset A$. □
Since the finite sub-semilattices of the indexing collection $\mathcal{A}$ will play a very important role in the theory of set-indexed Markov processes, having an ordering on the sets of a finite sub-semilattice proves to be a very useful tool in handling these objects. The ordering that we have in mind will be defined in such a manner that a set is never numbered before any of its subsets.

Definition 2.1.12 Let $\mathcal{A}'$ be a finite sub-semilattice of $\mathcal{A}$ which has $n$ distinct sets. The ordering $\{A_1, A_2, \ldots, A_n\}$ of the sets in $\mathcal{A}'$ is called consistent if $A_j \subset A_i \Rightarrow j < i$. By convention, we will always assume that $\emptyset' \in \mathcal{A}'$ and we set $A_0 := \emptyset'$.
It is clear that a consistent ordering of a finite semilattice $\mathcal{A}'$ always exists, but in general it may not be unique. (To prove the existence we proceed inductively: assume that the sets $A_1, \ldots, A_i \in \mathcal{A}'$ have already been counted and choose $A_{i+1} \in \mathcal{A}' \setminus \{A_1, \ldots, A_i\}$ such that all of its subsets that are in $\mathcal{A}'$ have already been counted, i.e. if there exists a set $A \in \mathcal{A}'$ with $A \subset A_{i+1}$ then $A = A_j$ for some $j \leq i$.)
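As an illustration (a hypothetical Python sketch, not part of the thesis), the inductive procedure above is easy to implement when the indexing sets are modelled as finite point sets; the same representation also lets us compute left neighbourhoods directly from Definition 2.1.10. The names `consistent_ordering`, `left_neighbourhood` and `rect` are illustrative:

```python
def consistent_ordering(semilattice):
    """Order the sets of a finite semilattice (given as frozensets) so
    that a set is never numbered before any of its strict subsets."""
    remaining = list(semilattice)
    ordering = []
    while remaining:
        # pick a set all of whose strict subsets have been counted
        A = next(A for A in remaining
                 if not any(B < A for B in remaining if B is not A))
        ordering.append(A)
        remaining.remove(A)
    return ordering

def left_neighbourhood(A, semilattice):
    """A minus the union of all sets of the semilattice that do not
    contain A (Definition 2.1.10)."""
    others = [B for B in semilattice if not (A <= B)]
    return A - frozenset().union(*others) if others else A

# Toy semilattice of 'rectangles' [0, x] on a grid, as point sets.
def rect(x1, x2):
    return frozenset((i, j) for i in range(x1 + 1) for j in range(x2 + 1))

lattice = [rect(0, 0), rect(1, 0), rect(0, 1), rect(1, 1), rect(2, 1)]
order = consistent_ordering(lattice)
# By Lemma 2.1.13, for a consistent ordering the left neighbourhoods
# C_i = A_i \ (A_0 u ... u A_{i-1}) are pairwise disjoint with union u_j A_j.
parts = [left_neighbourhood(A, lattice) for A in order]
```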
Lemma 2.1.13 (a) Let $\mathcal{A}'$ be a finite sub-semilattice of $\mathcal{A}$ and $\{A_0 = \emptyset', A_1, \ldots, A_n\}$ a consistent ordering of $\mathcal{A}'$. If $C_i$ is the left neighbourhood of $A_i$, then $C_0 = A_0$ and for each $i = 1, \ldots, n$
$$C_i = A_i \setminus (\cup_{j=0}^{i-1} A_j) \quad \text{and} \quad \cup_{j=0}^i C_j = \cup_{j=0}^i A_j.$$

(b) If $\{A_0 = \emptyset', A_1, \ldots, A_n\}$, $\{A_0 = \emptyset', A'_1, \ldots, A'_n\}$ are two consistent orderings of the same finite semilattice $\mathcal{A}'$, $C_i, C'_i$ are the left neighbourhoods of $A_i, A'_i$, and $A_i = A'_{\pi(i)}$ for a permutation $\pi$ of $\{1, \ldots, n\}$ with $\pi(1) = 1$, then
$$C_i = C'_{\pi(i)} \quad \forall i = 1, \ldots, n.$$

Proof: (a) By the definition of the consistent ordering, $\cup_{j=0}^{i-1} A_j \subseteq \cup_{A \in \mathcal{A}',\, A_i \not\subseteq A} A$ and $\cup_{A \in \mathcal{A}',\, A \subset A_i} A \subseteq \cup_{j=0}^{i-1} A_j$; the result follows using the two equivalent formulas for the left neighbourhood of the set $A_i$.

(b) This is clear because both $C_i$ and $C'_{\pi(i)}$ are the left neighbourhoods of the same set $A_i = A'_{\pi(i)}$. □
The following result will prove to be an extremely powerful tool when working
with finite semilattices for which we have to choose the most ‘appropriate’ consistent
ordering.
Proposition 2.1.14 (a) Let $B_1 \subseteq \cdots \subseteq B_m$ be sets in $\mathcal{A}(u)$. There exists a finite sub-semilattice $\mathcal{A}'$ of $\mathcal{A}$ and a consistent ordering $\{A_0 = \emptyset', A_1, \ldots, A_n\}$ of $\mathcal{A}'$ such that $B_l = \cup_{j=0}^{i_l} A_j$, $l = 1, \ldots, m$, for some indices $0 < i_1 \leq \ldots \leq i_m = n$.

(b) Let $\mathcal{A}' \subseteq \mathcal{A}''$ be finite sub-semilattices of $\mathcal{A}$. For any consistent ordering $\{A'_0 = \emptyset', A'_1, \ldots, A'_m\}$ of $\mathcal{A}'$ there exists a consistent ordering $\{A_0 = \emptyset', A_1, \ldots, A_n\}$ of $\mathcal{A}''$ such that, if $A'_l = A_{i_l}$, $l = 1, \ldots, m$, for some indices $i_1 \leq \ldots \leq i_m$, then
$$\cup_{s=0}^l A'_s = \cup_{j=0}^{i_l} A_j, \quad l = 1, \ldots, m.$$

Proof: (a) Let $B_l = \cup_{k=1}^{n_l} A_{kl}$, $A_{kl} \in \mathcal{A}$, be an extremal representation and let $\mathcal{A}'$ be the minimal finite sub-semilattice which contains all the sets $A_{kl}$, $l = 1, \ldots, m$; $k = 1, \ldots, n_l$. Consider first all the sets in $\mathcal{A}'$ that are contained in $B_1$ and order them consistently. Say these sets are $A_0 = \emptyset', A_1, \ldots, A_{i_1}$; then $B_1 = \cup_{j=0}^{i_1} A_j$. Next order consistently all the sets in $\mathcal{A}'$ that are contained in $B_2$ and have not been numbered yet. If we denote these sets by $A_{i_1+1}, \ldots, A_{i_2}$, then the ordering $\{A_0 = \emptyset', A_1, \ldots, A_{i_2}\}$ will be consistent: we cannot find a set $A_i$ with $i_1 < i \leq i_2$ which is contained in another set $A_j$ with $j \leq i_1$, because $A_j$ is contained in $B_1$ and $A_i$ is not. We have $B_2 = \cup_{j=0}^{i_2} A_j$. Continue this procedure until all the sets in $\mathcal{A}'$ have been ordered.

(b) In the first step, order consistently all the sets in $\mathcal{A}''$ which are contained in $A'_1$. Say these sets are $A_0 = \emptyset', A_1, \ldots, A_{i_1} = A'_1$; then $A_{i_1} = \cup_{j=0}^{i_1} A_j$. Next order consistently all the sets in $\mathcal{A}''$ which are contained in $A'_2$ and have not been numbered yet. If we denote these sets by $A_{i_1+1}, \ldots, A_{i_2} = A'_2$, then the ordering $\{A_0 = \emptyset', A_1, A_2, \ldots, A_{i_2}\}$ will be consistent and $A_{i_1} \cup A_{i_2} = \cup_{j=0}^{i_2} A_j$. Continue in the same manner until all the sets in $\mathcal{A}'$ have been ordered. Finally, order consistently the remaining sets in $\mathcal{A}''$. □
The next result will be used in the proof of Theorem 2.3.3, which states that if the indexing collection $\mathcal{A}$ satisfies SHAPE, then any set-indexed process can be extended in a unique additive manner to the class $\mathcal{C}(u)$.
Proposition 2.1.15 Let $\mathcal{A}'$ be a finite sub-semilattice of $\mathcal{A}$, $\{A_0 = \emptyset', A_1, \ldots, A_n\}$ a consistent ordering of $\mathcal{A}'$, and $C_i$ the left neighbourhood of $A_i$, $i = 0, \ldots, n$. Assume that there exist some indices $j_1 < \ldots < j_k$ such that
$$C := C_{j_1} \cup \ldots \cup C_{j_k} \in \mathcal{C}.$$
Then we can find a consistent ordering $\{A'_0 = \emptyset', A'_1, \ldots, A'_n\}$ of $\mathcal{A}'$ such that, if $C'_i$ is the left neighbourhood of $A'_i$, then $C_{j_1} = C'_{m+1}, C_{j_2} = C'_{m+2}, \ldots, C_{j_k} = C'_{m+k}$ for some index $m$ (i.e. $C_{j_1}, \ldots, C_{j_k}$ become consecutive left neighbourhoods).

Proof: Let $C = A \setminus B$, $A \in \mathcal{A}$, $B \in \mathcal{A}(u)$, $B \subseteq A$, be a representation of $C$ such that $A_{j_i} \subseteq A$ for all $i = 1, \ldots, k$. Since $A \setminus B = C = C_{j_1} \cup \ldots \cup C_{j_k}$, it follows that $B \cap C_{j_i} = \emptyset$ for all $i = 1, \ldots, k$. But $C_{j_i} \subseteq A_{j_i}$; hence $A_{j_i} \not\subseteq B$ for any $i$. This means that we can order the sets of the semilattice $\mathcal{A}'$ that are in $B$ before ordering the sets $A_{j_1}, \ldots, A_{j_k}$.

We claim that for each $i = 1, \ldots, k-1$ it is impossible to find a set $A_l$ of the semilattice $\mathcal{A}'$ such that $A_{j_i} \subset A_l \subset A_{j_{i+1}}$.

Proof of the claim: Assume that this is the case. Note that
$$C_l \subseteq A_l \setminus B.$$
(To see this, remember that $C_l = A_l \setminus \cup_{A' \in \mathcal{A}',\, A_l \not\subseteq A'} A'$; let $B = \cup_{i=1}^m A'_i$, $A'_i \in \mathcal{A}$, be a representation of $B$. If $A_l \subseteq A'_i$ for some $i = 1, \ldots, m$, then $A_{j_i} \subset A_l \subseteq A'_i \subseteq B$, which is a contradiction. Hence $A_l \not\subseteq A'_i$ for any $i = 1, \ldots, m$ and therefore $B = \cup_{i=1}^m A'_i \subseteq \cup_{A' \in \mathcal{A}',\, A_l \not\subseteq A'} A'$.)

On the other hand, $A_l \setminus B \subseteq A \setminus B = C$ because $A_l \subseteq A_{j_{i+1}} \subseteq A$. It follows that $C_l \subseteq C$, i.e. $l \in \{j_1, \ldots, j_k\}$. But because the ordering is consistent and $A_{j_i} \subset A_l \subset A_{j_{i+1}}$, we must have $j_i < l < j_{i+1}$, and this is a contradiction. The proof of the claim is complete.

Taking into account these facts, we can consider the following consistent ordering of $\mathcal{A}'$: first order consistently the sets of $\mathcal{A}'$ that are contained in $B$; next order consecutively the sets $A_{j_1}, \ldots, A_{j_k}$; finally, order consistently the remaining sets of $\mathcal{A}'$. □
2.2 Examples of Indexing Collections

In this section we will discuss several examples of topological spaces $T$ endowed with an indexing collection $\mathcal{A}$.
Example 2.2.1 Let $T = [0,1]^d$ and $\mathcal{A} = \{[0,x]; x \in T\} \cup \{\emptyset\}$ be the collection of the hyper-parallelepipeds in $T$ with sides parallel to the axes and having $0$ as a vertex (these are called 'rectangles'). Clearly the intersection of any two 'rectangles' is non-empty and $\mathcal{A}$ is closed under arbitrary intersections: $\cap_{\alpha \in \Lambda} [0, x_\alpha] = [0, \inf_{\alpha \in \Lambda} x_\alpha]$. (Here $\inf_{\alpha \in \Lambda} x_\alpha = (\inf_{\alpha \in \Lambda} x_{\alpha,i})_{i=1,\ldots,d}$ is defined with respect to the partial order induced by the coordinate axes: $x \leq y$ means $x_i \leq y_i$, $i = 1, \ldots, d$.)

To show that $\mathcal{A}$ is an indexing collection it remains to prove that $\mathcal{A}$ satisfies the 'separability from above' assumption. Take $\mathcal{A}_n = \{[0,x]; x_i = \frac{k_i}{2^n}, 0 \leq k_i \leq 2^n, i = 1, \ldots, d\} \cup \{\emptyset\}$ to be the 'dyadic' approximation semilattice. Then $(\mathcal{A}_n)_n$ is an increasing sequence of finite sub-semilattices which are closed under arbitrary intersections. Define $g_n : \mathcal{A} \to \mathcal{A}_n$ by taking $g_n([0,x])$ to be the smallest 'rectangle' with dyadic vertices which contains $[0,x]$ in its interior (i.e. $g_n([0,x]) := [0, (\frac{k_1}{2^n}, \ldots, \frac{k_d}{2^n})]$, where $k_i$ is chosen such that $\frac{k_i - 1}{2^n} \leq x_i < \frac{k_i}{2^n}$ if $x_i < 1$, or $k_i = 2^n$ if $x_i = 1$). Clearly $g_n(T) = T$. By definition set $g_n(\emptyset) := \emptyset$. It can be easily checked that the functions $g_n$ have the desired properties.

Note that $\emptyset' = \{0\}$ and the left neighbourhoods associated with the finite sub-semilattice $\mathcal{A}_n$ are the $\frac{1}{2^n}$-sided squares in $[0,1]^d$. In the case of this example the geometric property SHAPE holds: assume that $[0,x] \subseteq \cup_{j=1}^m [0, x_j]$; then there must be some index $j = 1, \ldots, m$ such that $x \in [0, x_j]$. But this implies $x \leq x_j$ and hence $[0,x] \subseteq [0,x_j]$.
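As a quick illustration (hypothetical code, not from the thesis), the dyadic approximation $g_n$ of this example can be sketched in Python, representing a rectangle $[0,x]$ by its upper corner $x$; the function name `g_n` follows the text:

```python
import math

def g_n(x, n):
    """Smallest dyadic 'rectangle' of order n containing [0, x] in its
    interior, as in Example 2.2.1: returns the upper corner of g_n([0, x])."""
    def coord(xi):
        if xi >= 1.0:
            return 1.0  # k_i = 2^n when x_i = 1
        # choose k_i with (k_i - 1)/2^n <= x_i < k_i/2^n
        return (math.floor(xi * 2 ** n) + 1) / 2 ** n
    return tuple(coord(xi) for xi in x)

# g_n([0, x]) decreases to [0, x] as n grows (property (d) of Definition 2.1.1):
for n in (1, 2, 3, 10):
    print(n, g_n((0.3, 0.7), n))  # (0.5, 1.0), (0.5, 0.75), (0.375, 0.75), ...
```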
Example 2.2.2 Let $T = [-1,1]^d$ and $\mathcal{A} = \{[0,x]; x \in T\} \cup \{\emptyset, T\}$ be the collection of all $2^d$-dimensional 'rectangles' with sides parallel to the axes and having $0$ as a vertex (here $[0,x] := \prod_{i=1}^d [0, x_i]$ and we make the convention that if $a, b \in \mathbb{R}$ then $[a,b] = [b,a] = [a \wedge b, a \vee b]$). Clearly $\mathcal{A}$ is closed under arbitrary intersections.

Consider $\mathcal{A}_n = \{[0,x]; x_i = \frac{k_i}{2^n}, -2^n \leq k_i \leq 2^n, i = 1, \ldots, d\} \cup \{\emptyset, T\}$. In order to have $[0,x] \subseteq (g_n([0,x]))^\circ$ we must define the function $g_n$ in such a way that its values will be sets in $\mathcal{A}_n(u)$ and not in $\mathcal{A}_n$ as in the case of the previous example. More precisely, if $|x_i| < 1$ for each $i = 1, \ldots, d$, define $g_n([0,x])$ to be the smallest $2^d$-dimensional 'rectangle' with dyadic vertices of order $n$ which contains $[0,x]$ in its interior (for example, if $d = 2$ and $0 < x_i < 1$, $i = 1, 2$, with $\frac{k_i - 1}{2^n} \leq x_i < \frac{k_i}{2^n}$, we define $g_n([0,x]) := [0, (\frac{k_1}{2^n}, \frac{k_2}{2^n})] \cup [0, (-\frac{1}{2^n}, \frac{k_2}{2^n})] \cup [0, (\frac{k_1}{2^n}, -\frac{1}{2^n})] \cup [0, (-\frac{1}{2^n}, -\frac{1}{2^n})]$). Clearly $g_n(T) = T$. Set by definition $g_n(\emptyset) := \emptyset$.

Note that in this case we have again $\emptyset' = \{0\}$, the left neighbourhoods associated with the finite sub-semilattice $\mathcal{A}_n$ are the $\frac{1}{2^n}$-sided squares in $[-1,1]^d$, and assumption SHAPE holds too.
Example 2.2.3 Let T = [0, 1]d and A be the collection of all compact lower layers of
T together with {∅}. (Recall that a set A ⊆ Rd is called a lower layer if [0, z] ⊆ A
whenever z ∈ A.) It is not difficult to observe that A is closed under arbitrary
intersections and that in this case A(u) = A. In order to be able to approximate
from above a lower layer we will have to consider An as the collection of finite unions
of ‘rectangles’ having all the vertices with dyadic coordinates of order n. We have
A_n(u) = A_n. Define g_n(A) := ∩_{B ∈ A_n, A ⊆ B°} B.
Again ∅′ = {0}, but in this case the assumption SHAPE does not hold: we can very easily think of three lower layers A_1, A_2, A_3 such that A_1 ⊆ A_2 ∪ A_3 but A_1 ⊈ A_2, A_1 ⊈ A_3. Note that in this case A is a lattice with ∨ = ∪ and ∧ = ∩.
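This failure can be checked mechanically. The following Python sketch (ours; the grid representation of lower layers is an illustrative assumption) exhibits the simplest such triple: since lower layers are closed under unions, taking A_1 = A_2 ∪ A_3 for two incomparable rectangles already violates SHAPE.

# Lower layers in [0,1]^2 represented by indicator sets on a grid.
N = 100

def rect(x1, x2):
    # the lower layer [0, (x1, x2)]
    return {(i, j) for i in range(N) for j in range(N)
            if i / N <= x1 and j / N <= x2}

A2 = rect(1.0, 0.5)
A3 = rect(0.5, 1.0)
A1 = A2 | A3          # a lower layer, since lower layers are closed under unions

print(A1 <= A2 | A3)  # True:  A1 is covered by A2 and A3
print(A1 <= A2)       # False: yet A1 is contained in neither of them,
print(A1 <= A3)       # False: so SHAPE fails for lower layers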
Example 2.2.4 Let T = [−1, 1]d and A be the collection of all compact lower layers
contained in T, together with {∅}. This example is completely similar to Example 2.2.3.
Example 2.2.5 Let T = B(0, t_0) (a compact ball in R^3) and A = {A_{R,t}; R = '[a,b] × [c,d]', 0 ≤ a < b < 2π, −π ≤ c < d ≤ π, t ∈ [0, t_0]}, where the set

A_{R,t} := {(r cos θ cos τ, r sin θ cos τ, r sin τ); θ ∈ [a,b], τ ∈ [c,d], r ∈ [0,t]}

can be interpreted as the history of the region

R := '[a,b] × [c,d]' = {(cos θ cos τ, sin θ cos τ, sin τ); θ ∈ [a,b], τ ∈ [c,d]}

of the Earth from the beginning until time t (here θ represents the longitude of a generic point in the region R, while τ is the latitude). Hence, the collection A can be identified with the history of the world until time t_0.
This example is similar to Example 2.2.1.
2.3 Set-Indexed Processes
The basic definitions and properties associated with set-indexed processes are discussed in this section, with special emphasis on the additivity property.
Definition 2.3.1 A set-indexed process X := (X_A)_{A∈A} has a unique additive extension to:

(a) the class A(u) if ∀A_1,...,A_n, A′_1,...,A′_m ∈ A with ∪_{i=1}^n A_i = ∪_{j=1}^m A′_j,

Σ_{i=1}^n X_{A_i} − Σ_{1≤i_1<i_2≤n} X_{A_{i_1}∩A_{i_2}} + ... + (−1)^{n+1} X_{A_1∩...∩A_n} = Σ_{j=1}^m X_{A′_j} − Σ_{1≤j_1<j_2≤m} X_{A′_{j_1}∩A′_{j_2}} + ... + (−1)^{m+1} X_{A′_1∩...∩A′_m} a.s.

(by the inclusion-exclusion principle);

(b) the class C if ∀A, A′ ∈ A, ∀B, B′ ∈ A(u) with A\B = A′\B′,

X_A − X_{A∩B} = X_{A′} − X_{A′∩B′} a.s.;

(c) the class C(u) if ∀C_1,...,C_n, C′_1,...,C′_m ∈ C with ∪_{i=1}^n C_i = ∪_{j=1}^m C′_j,

Σ_{i=1}^n X_{C_i} − Σ_{1≤i_1<i_2≤n} X_{C_{i_1}∩C_{i_2}} + ... + (−1)^{n+1} X_{C_1∩...∩C_n} = Σ_{j=1}^m X_{C′_j} − Σ_{1≤j_1<j_2≤m} X_{C′_{j_1}∩C′_{j_2}} + ... + (−1)^{m+1} X_{C′_1∩...∩C′_m} a.s.
A set-function x : A → R is said to have a unique additive extension to A(u), C, C(u)
if the previous respective relationships hold (everywhere this time).
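To see the inclusion–exclusion extension in action, here is a small Python sketch (ours; the additive set-function 'area of [0,z]' and the helper names are illustrative assumptions). It evaluates the extension of x to a finite union of rectangles and checks that two representations of the same union agree, as the definition requires.

from itertools import combinations

def x_rect(z):
    # an additive set-function on rectangles [0, z] in [0,1]^2: the area
    return z[0] * z[1]

def meet(*zs):
    # [0, z] ∩ [0, z'] = [0, z ∧ z'] componentwise
    return tuple(min(c) for c in zip(*zs))

def x_union(zs):
    # inclusion-exclusion extension of x to the union of the [0, z], z in zs
    n = len(zs)
    total = 0.0
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            total += (-1) ** (k + 1) * x_rect(meet(*(zs[i] for i in idx)))
    return total

# two representations of the same union (the third rectangle is redundant)
print(x_union([(1.0, 0.5), (0.5, 1.0)]))                # 0.75
print(x_union([(1.0, 0.5), (0.5, 1.0), (0.25, 0.25)]))  # 0.75 again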
Lemma 2.3.2 Let X := (XA )A∈A be a set-indexed process.
(a) If for every A, A_1,...,A_n ∈ A with A = ∪_{i=1}^n A_i we have

X_A = Σ_{i=1}^n X_{A_i} − Σ_{1≤i_1<i_2≤n} X_{A_{i_1}∩A_{i_2}} + ... + (−1)^{n+1} X_{A_1∩...∩A_n} a.s.   (1)

then X has a unique additive extension to A(u).

(b) If X has a unique additive extension to C and for every C, C_1,...,C_n ∈ C with C = ∪_{i=1}^n C_i we have

X_C = Σ_{i=1}^n X_{C_i} − Σ_{1≤i_1<i_2≤n} X_{C_{i_1}∩C_{i_2}} + ... + (−1)^{n+1} X_{C_1∩...∩C_n} a.s.   (2)

then X has a unique additive extension to C(u).
Proof: (a) Let A_1,...,A_n, A′_1,...,A′_m ∈ A be such that ∪_{i=1}^n A_i = ∪_{j=1}^m A′_j. We want to prove that

Σ_{k=1}^n (−1)^{k+1} Σ_{1≤i_1<...<i_k≤n} X_{A_{i_1}∩...∩A_{i_k}} = Σ_{l=1}^m (−1)^{l+1} Σ_{1≤j_1<...<j_l≤m} X_{A′_{j_1}∩...∩A′_{j_l}} a.s.

Because A_{i_1} ∩...∩ A_{i_k} = ∪_{j=1}^m (A_{i_1} ∩...∩ A_{i_k} ∩ A′_j) (where each of the sets A_{i_1} ∩...∩ A_{i_k} ∩ A′_j is in A because A is closed under intersections) and X is additive on A, we have

X_{A_{i_1}∩...∩A_{i_k}} = Σ_{l=1}^m (−1)^{l+1} Σ_{1≤j_1<...<j_l≤m} X_{A_{i_1}∩...∩A_{i_k}∩A′_{j_1}∩...∩A′_{j_l}} a.s.

Hence

Σ_{k=1}^n (−1)^{k+1} Σ_{1≤i_1<...<i_k≤n} X_{A_{i_1}∩...∩A_{i_k}} = Σ_{k=1}^n Σ_{l=1}^m (−1)^{k+1} (−1)^{l+1} Σ_{1≤i_1<...<i_k≤n} Σ_{1≤j_1<...<j_l≤m} X_{A_{i_1}∩...∩A_{i_k}∩A′_{j_1}∩...∩A′_{j_l}} a.s.

and the result follows since the expression on the right-hand side is symmetric in the A_i's and the A′_j's.

(b) The argument is identical to the one used for proving (a), by replacing the sets in A with sets in C. □
Theorem 2.3.3 If the indexing collection A satisfies SHAPE, then any set-indexed
process has a unique additive extension to C(u). Moreover, in this case, the additivity
relation holds everywhere.
Proof: Let X := (XA )A∈A be an arbitrary set-indexed process. We will proceed in
three steps.
(a) We will prove first that relation (1) holds. By Lemma 2.3.2.(a) it will follow
that the process X has a unique additive extension to A(u).
Let A = ∪_{i=1}^n A_i with A, A_1,...,A_n ∈ A. Since SHAPE holds, there exists an index j such that A = A_j. Reindexing the sets A_1,...,A_n we can assume that A = A_n. Then each A_i with i < n will be contained in A_n and hence the intersection A_i ∩ A_n will be simply A_i. In other words the sets A_{i_1} ∩...∩ A_{i_k} with i_j < n remain unchanged when intersected with A_n. Hence

Σ_{i=1}^n X_{A_i} − Σ_{1≤i_1<i_2≤n} X_{A_{i_1}∩A_{i_2}} + ... + (−1)^{n+1} X_{∩_{i=1}^n A_i}
= (Σ_{i=1}^{n−1} X_{A_i} + X_{A_n}) − (Σ_{1≤i_1<i_2≤n−1} X_{A_{i_1}∩A_{i_2}} + Σ_{i=1}^{n−1} X_{A_i∩A_n}) + (Σ_{1≤i_1<i_2<i_3≤n−1} X_{A_{i_1}∩A_{i_2}∩A_{i_3}} + Σ_{1≤i_1<i_2≤n−1} X_{A_{i_1}∩A_{i_2}∩A_n}) + ... + (−1)^{n+1} X_{∩_{i=1}^{n−1} A_i ∩ A_n}
= (Σ_{i=1}^{n−1} X_{A_i} + X_{A_n}) − (Σ_{1≤i_1<i_2≤n−1} X_{A_{i_1}∩A_{i_2}} + Σ_{i=1}^{n−1} X_{A_i}) + (Σ_{1≤i_1<i_2<i_3≤n−1} X_{A_{i_1}∩A_{i_2}∩A_{i_3}} + Σ_{1≤i_1<i_2≤n−1} X_{A_{i_1}∩A_{i_2}}) + ... + (−1)^{n+1} X_{∩_{i=1}^{n−1} A_i}
= X_{A_n}.
(b) Since SHAPE holds, any set in C has a unique extremal representation of
the form C = A\B with A ∈ A, B ∈ A(u), B ⊆ A (according to Lemma 2.1.9,
(b)). Hence we can extend the process X to C in a unique way, namely by defining
XC := XA − XB .
(c) We will prove that relation (2) holds. By Lemma 2.3.2.(b) it will follow that
the process X has a unique additive extension to C(u).
Let C = ∪_{i=1}^n C_i with C, C_1,...,C_n ∈ C. Without loss of generality we can assume that C ≠ ∅. We will proceed in two sub-steps.
(c1) Assume that C_1,...,C_n are consecutive left neighbourhoods in a finite sub-semilattice A′ of A. Let C = A_C \ B_C be the maximal representation of C and B′_C = A_C ∩ B_C. Say {A′_0 = ∅′, A′_1,...,A′_m} is a consistent ordering of A′ such that if C′_i is the left neighbourhood of the set A′_i, then B′_C = ∪_{i=1}^p C′_i and C_1 = C′_{p+1}, C_2 = C′_{p+2}, ..., C_n = C′_{p+n}.

Then A_C = B′_C ∪ C = B′_C ∪ ∪_{i=p+1}^{p+n} C′_i = ∪_{i=1}^{p+n} C′_i = ∪_{i=1}^{p+n} A′_i = B′_C ∪ ∪_{i=p+1}^{p+n} A′_i. Clearly A_C ⊈ B′_C. Also A_C ⊈ A′_i for any p+1 ≤ i < p+n (if this were the case, then ∪_{i=p+1}^{p+n} C′_i = C = A_C \ B′_C ⊆ A′_i \ B′_C = (A′_i ∪ B′_C) \ B′_C ⊆ (B′_C ∪ ∪_{j=p+1}^i A′_j) \ B′_C = (∪_{j=1}^i C′_j) \ (∪_{j=1}^p C′_j) = ∪_{j=p+1}^i C′_j, which is a contradiction). Using SHAPE and Comment 2.1.8.1 we can conclude that A_C = A′_{p+n}. It follows that A′_i ⊆ A′_{p+n} for every 1 ≤ i < p+n. Using the fact that each C′_i = A′_i \ ∪_{j=1}^{i−1} (A′_i ∩ A′_j) and knowing that X has a unique additive extension to A(u) and C, we get

Σ_{i=1}^n X_{C_i} = Σ_{i=p+1}^{p+n} X_{C′_i} = Σ_{i=p+1}^{p+n} X_{A′_i} − Σ_{i=p+1}^{p+n} Σ_{j=1}^{i−1} X_{A′_i∩A′_j} + Σ_{i=p+1}^{p+n} Σ_{1≤j_1<j_2<i} X_{A′_i∩A′_{j_1}∩A′_{j_2}} − ... + Σ_{i=p+1}^{p+n} (−1)^{i+1} X_{A′_1∩...∩A′_i}.
We start processing the terms which compose the last member of the previous equality. Write the first sum as Σ_{i=p+1}^{p+n−1} X_{A′_i} + X_{A′_{p+n}}. Because A′_j ⊆ A′_{p+n} for 1 ≤ j < p+n we can write the second sum as

Σ_{i=p+1}^{p+n−1} Σ_{j=1}^{i−1} X_{A′_i∩A′_j} + Σ_{j=1}^{p+n−1} X_{A′_{p+n}∩A′_j} = Σ_{i=p+1}^{p+n−1} Σ_{j=1}^{i−1} X_{A′_i∩A′_j} + Σ_{j=1}^p X_{A′_j} + Σ_{j=p+1}^{p+n−1} X_{A′_j}

and the first term of the first sum gets cancelled by the last term of the second sum. From the first sum we are left with X_{A′_{p+n}}. Next, because A′_{j_1} ∩ A′_{j_2} ⊆ A′_{p+n} for any 1 ≤ j_1 < j_2 < p+n, we can write the third sum as

Σ_{i=p+1}^{p+n−1} Σ_{1≤j_1<j_2≤i−1} X_{A′_i∩A′_{j_1}∩A′_{j_2}} + Σ_{1≤j_1<j_2≤p+n−1} X_{A′_{p+n}∩A′_{j_1}∩A′_{j_2}} = Σ_{i=p+1}^{p+n−1} Σ_{1≤j_1<j_2≤i−1} X_{A′_i∩A′_{j_1}∩A′_{j_2}} + Σ_{j_2=2}^{p+n−1} Σ_{j_1=1}^{j_2−1} X_{A′_{j_1}∩A′_{j_2}}
= Σ_{i=p+1}^{p+n−1} Σ_{1≤j_1<j_2≤i−1} X_{A′_i∩A′_{j_1}∩A′_{j_2}} + Σ_{j_2=2}^{p} Σ_{j_1=1}^{j_2−1} X_{A′_{j_1}∩A′_{j_2}} + Σ_{j_2=p+1}^{p+n−1} Σ_{j_1=1}^{j_2−1} X_{A′_{j_1}∩A′_{j_2}}.

The first term of the second sum gets cancelled by the last term of the third sum. From the second sum we are left with Σ_{j=1}^p X_{A′_j}. Continuing inductively we can conclude that Σ_{i=1}^n X_{C_i} is equal to

X_{A′_{p+n}} − Σ_{j=1}^p X_{A′_j} + Σ_{1≤j_1<j_2≤p} X_{A′_{j_1}∩A′_{j_2}} − ... + (−1)^p X_{A′_1∩...∩A′_p} = X_{A′_{p+n}} − X_{∪_{j=1}^p A′_j} = X_{A_C} − X_{B′_C} = X_C,

using the fact that X has a unique additive extension to A(u) and C.
(c2) Assume that C_1,...,C_n are arbitrary, not necessarily left neighbourhoods. Let C_i = A_i\B_i, B_i = ∪_{j=1}^{m_i} A_{ij}, i = 1,...,n, be the extremal representations of C_i with A_i, A_{ij} ∈ A, B_i ⊆ A_i; let A′ be the minimal finite sub-semilattice of A which contains the sets A_i, A_{ij}, and let {A′_0 = ∅′, A′_1,...,A′_m} be a consistent ordering of A′. Let C′_i be the left neighbourhood of the set A′_i.

Then each C_i = ∪_{j∈J_i} C′_j for some J_i ⊆ {1,...,m} and hence each C_{i_1} ∩...∩ C_{i_k} = ∪_{j∈J_{i_1,...,i_k}} C′_j with J_{i_1,...,i_k} = J_{i_1} ∩...∩ J_{i_k}.

Fix a k-tuple 1 ≤ i_1 < ... < i_k ≤ n. Using Proposition 2.1.15, we can find a consistent ordering of A′ such that the set C_{i_1} ∩...∩ C_{i_k} ∈ C becomes the union of some consecutive left neighbourhoods of A′. Hence X_{C_{i_1}∩...∩C_{i_k}} = Σ_{j∈J_{i_1,...,i_k}} X_{C′_j} by the previous step (c1). We have

Σ_{k=1}^n (−1)^{k+1} Σ_{1≤i_1<...<i_k≤n} X_{C_{i_1}∩...∩C_{i_k}} = Σ_{k=1}^n (−1)^{k+1} Σ_{1≤i_1<...<i_k≤n} Σ_{j∈J_{i_1,...,i_k}} X_{C′_j}

which is the same thing as Σ_{j∈J} X_{C′_j} with J = ∪_{i=1}^n J_i.

On the other hand, C = ∪_{i=1}^n C_i = ∪_{i=1}^n ∪_{j∈J_i} C′_j = ∪_{j∈J} C′_j and, using again Proposition 2.1.15, we can find a consistent ordering of A′ such that the sets C′_j, j ∈ J, become consecutive left neighbourhoods; hence X_C = Σ_{j∈J} X_{C′_j} by step (c1), and the conclusion follows. □
Important Note: Throughout this work, all equalities between the values of a process which has an additive extension to C(u) should be understood as almost sure equalities, even if this is not usually mentioned explicitly. The only instance in which these equalities actually hold everywhere is when the indexing collection A satisfies assumption SHAPE.
We will now introduce three different types of equivalence relations between set-indexed processes. The definitions are not specific to the set-indexed case; they are encountered in the context of processes with an arbitrary index set, and we include them only for the sake of completeness.
Definition 2.3.4 Let X := (XA )A∈A and Y := (YA )A∈A be two set-indexed processes.
We say that
(a) X and Y are indistinguishable if P (XA = YA ∀A ∈ A) = 1.
(b) X and Y are modifications of each other if P (XA = YA ) = 1 ∀A ∈ A.
(c) X and Y are versions of each other if they have the same finite dimensional
distributions.
Indistinguishability is the strongest of these forms of equivalence, while being versions of each other is the weakest; being modifications of each other lies between the two.
A sufficient condition for two processes X and Y to be versions of each other is
that they have the same finite dimensional distributions over all finite sub-semilattices
of A. To see this, note that given any finite sub-collection of A there exists a (unique)
minimal finite sub-semilattice containing it.
In what follows we will examine the information structure that can be associated
to a set-indexed process.
Definition 2.3.5 Let (Ω, F) be a measurable space. A collection (F_A)_{A∈A} of sub-σ-fields of F is called a filtration if F_A ⊆ F_{A′} whenever A, A′ ∈ A, A ⊆ A′.
We will always consider filtrations which are complete, i.e. {F ∈ F; P(F) = 0} ⊆ F_{∅′}.
Given a filtration (F_A)_{A∈A} we can extend it to a filtration indexed by A(u) by setting

F_B := ∨_{A∈A(B)} F_A,  B ∈ A(u)   (3)

where A(B) := {A ∈ A; A ⊆ B}. Then (F_B)_{B∈A(u)} is also a filtration, i.e. if B_1, B_2 ∈ A(u), B_1 ⊆ B_2, then F_{B_1} ⊆ F_{B_2}. (This is true because A(B_1) ⊆ A(B_2).)
Definition 2.3.6 A filtration (FB )B∈A(u) is outer-continuous if for every B ∈
A(u) we have FB = ∩n FBn for any decreasing sequence (Bn )n≥1 in A(u) such that
B = ∩n Bn .
Note that even if we assume that the original A-indexed filtration is outer-continuous
(i.e. for every A ∈ A we have FA = ∩n FAn for any decreasing sequence (An )n≥1 in A
such that A = ∩n An ), the extended filtration to A(u) may not be outer-continuous.
However, we do have the following result.
Lemma 2.3.7 (Lemma 1.4.1, [41]) Let (F_A)_{A∈A} be an outer-continuous filtration, (F_B)_{B∈A(u)} the extended filtration to A(u), and define

F^r_B := ∩_n F_{g_n(B)},  B ∈ A(u).

The family (F^r_B)_{B∈A(u)} is an outer-continuous filtration.
Definition 2.3.8 A set-indexed process X is called adapted with respect to a filtration (FA )A∈A if XA is FA -measurable for every A ∈ A.
Lemma 2.3.9 If X := (XA )A∈A is a set-indexed process which is adapted with respect
to a filtration (FA )A∈A , then XB is FB -measurable for any B ∈ A(u).
Proof: Let B = ∪_{i=1}^n A_i with A_i ∈ A. Since X has a unique additive extension to A(u), we have X_B = Σ_{k=1}^n (−1)^{k+1} Σ_{1≤i_1<...<i_k≤n} X_{A_{i_1}∩...∩A_{i_k}}. The result follows since all the variables X_{A_{i_1}∩...∩A_{i_k}} are F_B-measurable. □
At times it is useful to consider filtrations satisfying certain orthogonality properties.
Definition 2.3.10 A filtration (F_A)_{A∈A}
(a) satisfies condition (CI) if ∀B_1, B_2 ∈ A(u), F_{B_1} ⊥ F_{B_2} | F_{B_1∩B_2};
(b) satisfies condition (CO) if ∀B_1, B_2 ∈ A(u), F_{B_1} ⊥ F_{B_2} | F_{B_1} ∩ F_{B_2}.
((CI) and (CO) stand for 'conditional independence' and 'conditional orthogonality', respectively.)
Lemma 2.3.11 If a filtration (FA )A∈A satisfies condition (CI), then
FB1 ∩ FB2 = FB1 ∩B2 ∀B1 , B2 ∈ A(u).
In particular condition (CI) implies condition (CO).
Proof: Clearly FB1 ∩B2 ⊆ FBi , i = 1, 2. Conversely, take F ∈ FB1 ∩ FB2 ; then on one
hand 1F = E[1F |FB1 ] since 1F is FB1 -measurable and on the other hand, by the (CI)
property, since 1F is also FB2 -measurable we have 1F = E[1F |FB1 ] = E[1F |FB1 ∩B2 ];
hence 1_F is F_{B_1∩B_2}-measurable, i.e. F ∈ F_{B_1∩B_2}. □
For a fixed set-indexed process X := (X_A)_{A∈A}, the minimal filtration with respect to which X is adapted is given by F_A := σ({X_{A′}; A′ ∈ A(A)}). Note that if we extend this filtration to A(u) by formula (3), then we also have F_B = σ({X_A; A ∈ A(B)}) ∀B ∈ A(u). Moreover,

F_{B_1∪B_2} = F_{B_1} ∨ F_{B_2}  ∀B_1, B_2 ∈ A(u).

(Clearly F_{B_i} ⊆ F_{B_1∪B_2}, i = 1, 2; conversely, if A ∈ A(B_1 ∪ B_2) then A = (A ∩ B_1) ∪ (A ∩ B_2) and X_A = X_{A∩B_1} + X_{A∩B_2} − X_{A∩B_1∩B_2} (a.s.) is F_{B_1} ∨ F_{B_2}-measurable.)
In analogy with the minimal filtration (F_B)_{B∈A(u)} we define the following σ-fields, for an arbitrary set B ∈ A(u):

F_{∂B} := σ({X_A; A ∈ A(∂B)}),  F_{B^c} := σ({X_A; A ∈ A(B^c)})

where

A(∂B) := {A ∈ A; A ⊆ B, A ⊈ B°},  A(B^c) := {A ∈ A; A ⊈ B}.
In what follows we will consider two types of Markov properties for set-indexed processes. These definitions were first introduced in [39]; we merely extend them to arbitrary filtrations.
Definition 2.3.12 A set-indexed process X := (X_A)_{A∈A} is called

(a) sharp Markov with respect to a filtration (F_A)_{A∈A} if it is adapted and ∀B ∈ A(u)

F_B ⊥ F_{B^c} | F_{∂B};

(b) Markov with respect to a filtration (F_A)_{A∈A} if it is adapted and ∀B ∈ A(u), ∀A_i ∈ A, A_i ⊈ B, i = 1,...,k, and for every bounded measurable function h : R^k → R,

E[h(X_{A_1},...,X_{A_k}) | F_B] = E[h(X_{A_1},...,X_{A_k}) | F_{∂B} ∩ F_{∪_{i=1}^k A_i}].
The process X is called simply sharp Markov (respectively Markov) if it is sharp
Markov (respectively Markov) with respect to its minimal filtration.
Note that on the real line, the sharp Markov property is equivalent to the usual
Markov property.
From the definition it is clear that any Markov process with respect to a certain
filtration is sharp Markov with respect to the same filtration. Conversely, we have
the following result.
Theorem 2.3.13 (Theorem 3.1, [39]) If the filtration (FB )B∈A(u) satisfies condition (CO), then any sharp Markov process with respect to (FB )B∈A(u) is also a Markov
process with respect to (FB )B∈A(u) .
The definition of a set-indexed process with independent increments is similar to the definition of such a process on the plane; the role of the rectangles on the plane is taken by the sets in C.

Definition 2.3.14 A set-indexed process X := (X_A)_{A∈A} is said to have independent increments if X_∅ = X_{∅′} = 0 a.s. and X_{C_1},...,X_{C_k} are independent whenever C_1,...,C_k ∈ C are pairwise disjoint.
Note that if X is a process with independent increments then X_{C_1},...,X_{C_k} are independent whenever C_1,...,C_k ∈ C(u) are pairwise disjoint. (Write each C_i as the disjoint union of some sets C_{ij}, j = 1,...,m_i, in C; hence X_{C_i} is equal to Σ_{j=1}^{m_i} X_{C_{ij}} and we use the associativity of independence.)
We conclude this section with two general results from the theory of set-indexed processes.
Lemma 2.3.15 (Lemma 2.4, [39]) For any A ∈ A, B ∈ A(u) with A 6⊆ B, XA∩B
can be expressed as a linear combination of terms of the form XD with D ∈ A(∂B) ∩
A(A); hence XA∩B is F∂B ∩ FA -measurable.
Theorem 2.3.16 (Theorem 2.3, [39]) Any process with independent increments
is Markov.
2.4 Sample Path Properties. Flows
In this section we will introduce various notions of regularity for the sample paths of
a set-indexed process.
Definition 2.4.1 A set-function x : A → R is called
(a) monotone outer-continuous if for any decreasing sequence (An )n ⊆ A with
A := ∩n An , we have limn→∞ x(An ) = x(A);
(b) monotone inner-continuous if for any increasing sequence (An )n ⊆ A with
A := ∪n An ∈ A, we have limn→∞ x(An ) = x(A);
A set-indexed process X := (X_A)_{A∈A} is called monotone outer-continuous (respectively monotone inner-continuous) if all its sample paths are monotone outer-continuous (respectively monotone inner-continuous).
A set-function x : A → R with a unique additive extension to C(u) is said to
be monotone outer-continuous (respectively monotone inner-continuous) ‘on C(u)’ if
for any decreasing (respectively increasing) sequence (Cn )n ⊆ C(u) such that C :=
∩n Cn ∈ C(u) (respectively C := ∪n Cn ∈ C(u)), we have limn x(Cn ) = x(C).
Definition 2.4.2 A set-function x : A → R with a unique additive extension to C(u)
is called increasing if x(∅) = 0 and x(C) ≥ 0 for every C ∈ C.
Note that a monotone outer-continuous function which has a unique additive
extension to C(u) may not be monotone outer-continuous on C(u). However, we do
have the following result.
Proposition 2.4.3 (Proposition 1.4.8, [41]) If x : A → R is a monotone outer-continuous increasing set-function, then x is monotone outer-continuous on C(u) at ∅, i.e. for every decreasing sequence (C_n)_n ⊆ C(u) with ∩_n C_n = ∅, we have lim_{n→∞} x(C_n) = 0. Hence, any increasing monotone outer-continuous set-function x : A → R is countably additive on C(u) and therefore has a unique additive extension to a measure on σ(A).
Let us now formally introduce an object named ‘flow’, which will prove to be
extremely useful in exploiting the Markov property considered throughout this work.
Definition 2.4.4 Any map f : [0, a] → A(u) which is increasing with respect to the partial order induced by set inclusion is called a flow. A flow f is called
(a) right-continuous (respectively left-continuous) if for any t ∈ [0, a] and for
any decreasing (respectively increasing) sequence (tn )n with limn→∞ tn = t we
have f (t) = ∩n f (tn ) (respectively f (t) = ∪n f (tn ));
(b) continuous if it is right-continuous and left-continuous;
(c) simple if it is continuous and there exists a partition 0 = t_0 < t_1 < ... < t_n = a and some A-valued (continuous) flows f_i : [t_i, t_{i+1}] → A, i = 0,...,n−1, such that f(0) = ∅′ and

f(t) = ∪_{j=0}^{i} f_j(t_j) ∪ f_i(t),  t ∈ [t_i, t_{i+1}],  i = 0,...,n−1.
Comment 2.4.5 If f : [0, a] → A(u) is a right-continuous (respectively left-continuous) flow and x : A → R is a set-function with a unique additive extension to
C(u) which is monotone outer-continuous (respectively monotone inner-continuous)
on C(u), then x ◦ f is right-continuous (respectively left-continuous) on [0, a]. (In
particular, if f is a right-continuous flow and Λ is a finite positive measure on σ(A),
then Λ ◦ f becomes a distribution function on [0, a].)
The preceding comment may not be true if the set-function x is monotone outer-continuous (respectively monotone inner-continuous) on A but not on C(u). However, the comment is true if the flow f is simple. In order to prove this result, we need the following lemma, whose proof relies on the separability from above of the indexing collection A.
Lemma 2.4.6 (Lemma 5.1.6, [41]) Let A, B ∈ A1 , B ⊂ A such that B is minimal
in A (in A1 ). Then, there exists a continuous flow f : [0, 1] → A such that f (0) =
A, f (1) = B.
More precisely, we have the following result.
Proposition 2.4.7 Let x : A → R be a set-function with a unique additive extension to C(u). Then x is monotone outer-continuous (respectively monotone inner-continuous) if and only if for every simple flow f : [0, a] → A(u) the function x^f := x ∘ f is right-continuous (respectively left-continuous). (For the sufficiency part it is enough to consider only the A-valued continuous flows.)
Proof: For necessity, let f : [0, a] → A(u) be a simple flow of the form f(t) = ∪_{j=0}^{i} f_j(a_j) ∪ f_i(t), t ∈ [a_i, a_{i+1}], where 0 = a_0 < a_1 < ... < a_m = a is a partition of [0, a] and f_i : [a_i, a_{i+1}] → A are some A-valued (continuous) flows. Let t, t_n ∈ [0, a] be such that t_n → t. Suppose that t ∈ [a_i, a_{i+1}] for some i; then f(t) = B ∪ A for A := f_i(t) ∈ A and some B ∈ A(u). For n large enough, t_n ∈ [a_i, a_{i+1}] and f(t_n) = B ∪ A_n for A_n := f_i(t_n) ∈ A.

If the sequence (t_n)_n is decreasing, then (A_n)_n is a decreasing sequence of sets in A with intersection A, using the right-continuity of the flow f_i. Hence x(A_n) → x(A). Using the additivity of the set-function x, we can conclude that x^f_{t_n} = x(B ∪ A_n) = x(B) + x(A_n) − x(B ∩ A_n) → x(B) + x(A) − x(B ∩ A) = x(B ∪ A) = x^f_t. (Say B = ∪_{j=1}^m A′_j, A′_j ∈ A. Then x(B ∩ A_n) = Σ_{k=1}^m (−1)^{k+1} Σ_{j_1<...<j_k} x(A′_{j_1} ∩...∩ A′_{j_k} ∩ A_n) → Σ_{k=1}^m (−1)^{k+1} Σ_{j_1<...<j_k} x(A′_{j_1} ∩...∩ A′_{j_k} ∩ A) = x(B ∩ A), because each (A′_{j_1} ∩...∩ A′_{j_k} ∩ A_n)_n is a decreasing sequence of sets in A with intersection A′_{j_1} ∩...∩ A′_{j_k} ∩ A.)
If the sequence (t_n)_n is increasing, we get the same conclusion using the left-continuity of the flow f_i.
For sufficiency, let (A_n)_n be a decreasing sequence in A with intersection A. Let A_0 := T. For each A_{n+1} ⊆ A_n ⊆ A_{n−1} there exist two continuous A-valued flows f_{n−1}, f′_n such that f_{n−1}(t_{n−1}) = A_{n−1}, f_{n−1}(t_n) = A_n and f′_n(t′_n) = A_n, f′_n(t′_{n+1}) = A_{n+1} for some t_{n−1} ≤ t_n and t′_n ≤ t′_{n+1} (using Lemma 2.4.6). We can certainly find a linear rescaling f_n of the flow f′_n such that f_n(t_n) = A_n, that is, the set A_n is the image of the same point t_n under the two different flows. The sequence (t_n)_n is decreasing; let t* := lim_n t_n. Let f : [0, t_0] → A be defined by

f(t) := f_n(t) if t ∈ [t_n, t_{n+1}];  f(t) := g(t) if t ∈ [0, t*]

where g : [0, t*] → A is a continuous flow with g(0) = ∅′, g(t*) = A. Clearly f is a continuous A-valued flow. Since x^f is right-continuous it follows that x(A_n) = x^f_{t_n} → x^f_{t*} = x(A).

If the sequence (A_n)_n is increasing and ∪_n A_n := A ∈ A, then we start with A_0 := ∅′ and construct a continuous A-valued flow f such that f(t_n) = A_n ∀n, for an increasing sequence (t_n)_n with t* = lim_n t_n. We can set f(t*) := A. The left-continuity of x^f gives us again the desired conclusion x(A_n) → x(A). □
The next result will be occasionally used in this work.
Lemma 2.4.8 (Lemma 5.1.7, [41]) For any finite sub-semilattice A′ of A and for any consistent ordering {∅′ = A_0, A_1,...,A_n} of A′, there exists a simple flow f : [0, a] → A(u) which 'connects the sets of A′ in the sense of the ordering ord', i.e. f(t_i) = ∪_{j=0}^{i} A_j ∀i = 0,...,n, where 0 = t_0 < t_1 < ... < t_n = a is the partition corresponding to f.
The previous result has the following consequence.
Lemma 2.4.9 Given B_1,...,B_m ∈ A(u) such that B_1 ⊆ ... ⊆ B_m, there exists a simple flow f : [0, a] → A(u) and t_1,...,t_m ∈ [0, a], t_1 ≤ ... ≤ t_m, such that B_i = f(t_i), i = 1,...,m.

Proof: Using Proposition 2.1.14.(a), there exists a finite sub-semilattice A′ of A and a consistent ordering {A_0 = ∅′, A_1,...,A_n} of A′ such that B_l = ∪_{j=0}^{i_l} A_j, l = 1,...,m, for some 1 ≤ i_1 ≤ ... ≤ i_m = n. By Lemma 2.4.8, there exists a simple flow f corresponding to a partition 0 = t_0 ≤ t_1 ≤ ... ≤ t_n = a such that f(t_i) = ∪_{j=0}^{i} A_j for all i = 0,...,n. Hence f(t_{i_l}) = B_l for l = 1,...,m. □
The second type of regularity property for the sample paths of a set-indexed process is introduced using the Hausdorff distance d_H; for this we have to assume that T is metrizable.

Assume that (T, d) is a compact complete separable metric space and let K be the collection of all compact subsets of T. For each A ∈ K\{∅} and ε > 0 let

A^ε := {t ∈ T; d(A, t) ≤ ε}

where d(A, t) = inf_{s∈A} d(s, t) is the distance from A to t. The Hausdorff metric d_H is defined on K\{∅} by the formula:

d_H(A, B) = inf{ε > 0; A ⊆ B^ε, B ⊆ A^ε}.
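For intuition, here is a crude grid-based Python approximation of d_H (ours; an illustrative discretization, not an exact algorithm), applied to two rectangles of Example 2.2.1.

import math

def grid(z, N=40):
    # sample points of the rectangle [0, z]
    return [(z[0] * i / N, z[1] * j / N) for i in range(N + 1) for j in range(N + 1)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def hausdorff(P, Q):
    # d_H(P, Q) = maximum of the two directed Hausdorff distances
    forward = max(min(dist(p, q) for q in Q) for p in P)
    backward = max(min(dist(p, q) for p in P) for q in Q)
    return max(forward, backward)

A = grid((1.0, 0.5))
B = grid((0.5, 1.0))
print(hausdorff(A, B))   # approximately 0.5: each rectangle sticks out of the other by 1/2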
The following result says that monotone convergence is stronger than dH -convergence
(this result is valid only for compact spaces T ).
Lemma 2.4.10 (Lemma 1.3.1, [41]) Let (An )n be a monotone sequence of closed
subsets of T .
(a) If (An )n is decreasing with A = ∩n An , then limn→∞ dH (An , A) = 0.
(b) If (An )n is increasing with A = ∪n An , then limn→∞ dH (An , A) = 0.
A classical result says that K\{∅} is dH -compact (because T is compact). The
next result says that under certain (quite general) conditions the collection A\{∅}
will be dH -compact.
Lemma 2.4.11 (Lemma 1.3.3, [41]) Assume that the functions g_n are A-valued and that they satisfy the following condition:

d_H(A, g_n(A)) ≤ ε_n  ∀A ∈ A, ∀n ≥ 1

for a certain sequence (ε_n)_n converging monotonically to 0. Then A\{∅} is d_H-closed (and hence d_H-compact) in K\{∅}.
Here are the regularity properties that we have mentioned.
Definition 2.4.12 A set-function x : A → R is called

(a) outer-continuous if for every sequence (A_n)_n ⊆ A with lim_n d_H(A_n, A) = 0, A ∈ A, and A_n ⊇ A ∀n, we have lim_n x(A_n) = x(A);

(b) inner-continuous if for every sequence (A_n)_n ⊆ A with lim_n d_H(A_n, A) = 0, A ∈ A, and A_n ⊆ A° ∀n, we have lim_n x(A_n) = x(A);

(c) d_H-continuous if for every sequence (A_n)_n ⊆ A with lim_n d_H(A_n, A) = 0, A ∈ A, we have lim_n x(A_n) = x(A);

(d) cadlag if it is outer-continuous and it 'has inner limits', i.e. for every sequence (A_n)_n ⊆ A with lim_n d_H(A_n, A) = 0, A ∈ A, and A_n ⊆ A° ∀n, (x(A_n))_n converges.
A set-indexed process X := (X_A)_{A∈A} is called outer-continuous (respectively inner-continuous, or d_H-continuous, or cadlag) if all its sample paths are outer-continuous (respectively inner-continuous, or d_H-continuous, or cadlag).
Notation: The space of all cadlag set-functions x : A → R is denoted by D(A).

Remark: Various authors have considered similar notions of cadlaguity in the d-dimensional Euclidean cube I^d, replacing convergence in the Hausdorff distance with other types of convergence. The authors of [2] considered convergence in the metric d_λ defined by d_λ(A, B) := λ(A∆B), where λ denotes the Lebesgue measure on I^d and A∆B := (A\B) ∪ (B\A) is the usual symmetric difference of two sets; the authors of [8] considered the convergence A = lim inf A_n = lim sup A_n instead of d_H(A_n, A) → 0, where lim inf A_n := ∪_m ∩_{n≥m} A_n and lim sup A_n := ∩_m ∪_{n≥m} A_n.
Clearly any d_H-continuous function is both outer-continuous and inner-continuous. On the other hand, by Lemma 2.4.10 it follows immediately that any outer-continuous function is monotone outer-continuous. An inner-continuous function may not be monotone inner-continuous: if (A_n)_n is an increasing sequence of sets in A and A := ∪_n A_n ∈ A, we cannot conclude that A_n ⊆ A°. However, if A satisfies the following assumption

Assumption 2.4.13 (Separability from below) For any A ∈ A with a non-empty interior and for any ε > 0 there exists a set A_ε ∈ A such that A_ε ⊆ A° and d_H(A_ε, A) < ε,

then any inner-continuous set-function x : A → R which vanishes on all sets A with an empty interior is monotone inner-continuous (Section 7.1, [41]).
An intuitive candidate for a cadlag function would be a 'purely atomic' set-function, that is, a linear combination of purely atomic measures,

x(A) = Σ_{j=1}^k a_j I_{{z_j ∈ A}}

for some a_j ∈ R and some distinct points z_j ∈ T, j = 1,...,k. The z_j's are called the 'locations of the atoms' and the a_j's are the 'masses of the atoms'. Any purely atomic function can be extended to a unique finite signed measure on σ(A); if the masses of all its atoms are positive then this extension is a finite positive measure (hence it is monotone outer-continuous on σ(A)).
As observed by the author of [67], our intuition breaks down if one of the atoms falls exactly on the boundary of our compact space T. As a counterexample, consider the case when T is the unit square, A = {[0,z]; z ∈ T} and x has a single atom of mass 1 located on the boundary of T, say at (1, 1/2). Then x does not have an inner limit at any point z* = (1, v) with v ∈ (1/2, 1): if A_n = [0, z_n], z_n = (s_n, t_n), and z_n → z*, then x(A_n) = 0 if s_n < 1 and x(A_n) = 1 if s_n = 1. It was not noticed in [2] or [8] that a general purely atomic function may not be cadlag.
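The boundary-atom counterexample can be checked mechanically. In the following Python sketch (ours, illustrative only), the two displayed sequences both converge to [0, z*] in d_H from inside, yet give different limits of x.

def x(z, atom=(1.0, 0.5)):
    # purely atomic set-function: unit mass at 'atom', evaluated on [0, z]
    return 1.0 if atom[0] <= z[0] and atom[1] <= z[1] else 0.0

# two sequences z_n -> z* = (1, 0.75); both rectangles [0, z_n] converge
# to [0, z*] in the Hausdorff distance
inner = [(1.0 - 1.0 / n, 0.75) for n in range(4, 9)]  # s_n < 1
edge = [(1.0, 0.75 - 1.0 / n) for n in range(4, 9)]   # s_n = 1

print([x(z) for z in inner])  # [0.0, 0.0, 0.0, 0.0, 0.0]
print([x(z) for z in edge])   # [1.0, 1.0, 1.0, 1.0, 1.0]: x has no inner limit at z*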
To circumvent this difficulty we will adopt the technique introduced for the first
time by Ivanoff and Merzbach in [40].
Given D ⊆ T and ε > 0, define D^{−ε} := ∩_{A∈A, D⊆A^ε} A.
Definition 2.4.14 A set A ∈ A is proper if there exists an ε_0 > 0 such that (A^ε)^{−ε} = A for all 0 < ε < ε_0.

We note that, by definition, (A^ε)^{−ε} ⊆ A for any A ∈ A. It is not true that every set in A is proper. For example, if T = [0,1]^2 and A = {[0,z]; z ∈ T} then [0,z] is proper if and only if z = (s,t) where s ≠ 1 and t ≠ 1.
Now, for z ∈ T, define

A_z := ∩_{A∈A, z∈A} A

and let T* := {z ∈ T; A_z is proper}. Let P(A) be the space of all purely atomic functions with atoms located in T*. The next result appears in [67].
Theorem 2.4.15 Any purely atomic function with atoms located in T* is cadlag, i.e. P(A) ⊆ D(A).
In what follows we will introduce a sub-class of purely atomic functions whose atoms have positive masses, and the corresponding class of set-indexed processes.
Definition 2.4.16 A set-function x : A → R is called a point measure if there exists a finite subset {z_1,...,z_n} ⊆ T such that x(A) = Σ_{j=1}^n I_{{z_j ∈ A}} for every A ∈ A. A set-indexed process X := (X_A)_{A∈A} is called a point process if all its sample paths are point measures.
In other words, a point measure is a purely atomic set-function whose atoms have masses equal to 1. We will denote by D_1(A) the space of all point measures with atoms located in T*. Clearly D_1(A) ⊆ P(A) ⊆ D(A).
Proposition 2.4.17 If x : A → R is a purely atomic set-function, then for every flow f : [0, a] → A(u) with f(a) = T, the function x^f := x ∘ f is also purely atomic.

Proof: Assume x(A) = Σ_{j=1}^n a_j I_{{z_j ∈ A}}, A ∈ A, and let f be an arbitrary flow which reaches T. Set z^f_j := min{t ∈ [0, a]; z_j ∈ f(t)}. Then z_j ∈ f(t) if and only if z^f_j ∈ [0, t], and hence x^f_t = Σ_{j=1}^n a_j I_{{z^f_j ∈ [0,t]}}. □
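A quick Python illustration (ours; the flow and the atom data are made up for the example) of the proof's recipe z_j^f = min{t; z_j ∈ f(t)}, along the A-valued flow f(t) = [0, (t, t^2)] on the unit square.

import math

def hit_time(z):
    # z ∈ f(t) = [0, (t, t^2)] iff z[0] <= t and z[1] <= t^2,
    # so z^f = the least such t = max(z[0], sqrt(z[1]))
    return max(z[0], math.sqrt(z[1]))

atoms = [(0.2, 0.9), (0.8, 0.1)]
masses = [1.5, -2.0]

def x_f(t):
    # x^f(t) = total mass of the atoms reached by the flow up to time t
    return sum(a for (a, z) in zip(masses, atoms) if hit_time(z) <= t)

print([hit_time(z) for z in atoms])   # about [0.949, 0.8]
print(x_f(0.5), x_f(0.85), x_f(1.0))  # 0, -2.0, -0.5: a purely atomic path on [0, 1]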
Note: If x is a point measure, it may happen that x^f = x ∘ f is not a point measure, for a certain flow f. To see this, consider the following example: let x(A) := Σ_{j=1}^2 I_{{z_j ∈ A}}, A ∈ A, be a point measure such that z_1, z_2 ∈ ∂A for some set A ∈ A; let f be a flow such that f(t) = A and f(s) ⊆ A° for all s < t; then ∆x^f(t) = 2 and x^f is not a point measure.
The third type of regularity property for the sample paths of a set-indexed process uses the notion of 'flow'.

Notation: The space of all set-indexed processes X := (X_A)_{A∈A} with the property that for every simple flow f : [0, a] → A(u) the process X^f := (X_{f(t)})_{t∈[0,a]} has a cadlag modification, is denoted by D[S(A)].
The space D[S(A)] contains many important classes of set-indexed processes. Processes with purely atomic sample paths (in particular point processes) and processes which are monotone outer-continuous and monotone inner-continuous are cadlag on every simple flow. On the other hand, the 'strong martingales' (defined in [41]) and the 'Lévy processes' (which will be introduced in Chapter 5 of this work) possess cadlag modifications on every simple flow.
We will conclude this section with a note on 'separability'.

By the definition of the indexing collection (property 'Separability from above') and Lemma 2.4.10, it follows that the indexing collection A\{∅} is separable with respect to the metric d_H: its countable dense subset is the collection {g_n(A); A ∈ A, A ≠ ∅, n ≥ 1}.
We have the following general definition.
Definition 2.4.18 Let T be a separable metric space. An R-valued process (X_t)_{t∈T} (defined on a probability space (Ω, F, P)) is separable if there exists a countable dense subset D ⊆ T and a set N ∈ F with P(N) = 0 such that ∀ω ∉ N, ∀t ∈ T, there exists (t_n)_n ⊆ D with lim_n t_n = t and lim_n X_{t_n}(ω) = X_t(ω).
Following carefully the results given in Section 38 of [10] for the case T = [0, ∞), we realize that the same arguments work for an arbitrary separable metric space T, replacing the intervals with balls. Therefore, we have the following analogue of Theorem 38.1, [10], which will be used in Section 5.5 of this work.
Theorem 2.4.19 Let T be a separable metric space. Any R-valued process (X_t)_{t∈T} has an R-valued separable modification and an R-valued separable version.
2.5 The Collection of Projected Flows
Let f_0 : [0, 1] → A be a fixed continuous flow such that f_0(0) = ∅′ and f_0(1) = T. For each finite sub-semilattice A′ of A and for each consistent ordering ord of A′ we will define a simple flow f_{A′,ord} which will be regarded as the 'projection' of the original flow f_0 on the semilattice, in the sense of the prescribed ordering.
The collection of these simple flows will prove to be a useful tool in Section 4.3, where we derive an alternative version of the necessary and sufficient conditions that are imposed on the 'generator' of a Q-Markov process. The results of this section are not used anywhere else in this work.
Let A′ be a finite sub-semilattice of A and ord = {A_0 = ∅′, A_1,...,A_n} a consistent ordering of A′. For each i = 1,...,n−1 let

τ_i := inf{t ∈ [0,1]; f_0(t) ∩ A_{i+1} ⊈ ∪_{j=1}^i A_j}.

Define the following continuous A-valued flows

f_i(t) := f_0(t) ∩ A_i,  t ∈ [0,1]

and note that f_{i+1}(t) ⊆ ∪_{j=1}^i A_j for t < τ_i. Therefore, we will consider the flow f_{i+1} only on the interval [τ_i, 1].

The simple flow f := f_{A′,ord} that we will construct will 'connect' the sets of the semilattice A′ in the sense of the ordering ord and will have the same behaviour between the sets ∪_{j=1}^i A_j and ∪_{j=1}^{i+1} A_j as the flow f_{i+1}.
For t ∈ [0,1] let f(t) := f_1(t). We have f(0) = ∅′, f(1) = A_1, and the flow f is continuous and A-valued between these values. We would like next to go from A_1 to A_1 ∪ A_2 by means of a continuous flow which adds only sets in A to the initial set A_1. The best candidate is the flow f′_2(s) := A_1 ∪ f_2(s), s ∈ [τ_1, 1], the only problem being that we have to be able to paste this flow immediately after the flow f_1. The solution is a change of scale: we shift the interval [τ_1, 1] by 1 − τ_1 (in other words, instead of s ∈ [τ_1, 1] we will consider s + (1 − τ_1)) so that τ_1 becomes 1 and 1 becomes 1 + (1 − τ_1). For t ∈ [1, 1 + (1 − τ_1)] let f(t) := A_1 ∪ f_2(t − (1 − τ_1)) = A_1 ∪ (f_0(t − (1 − τ_1)) ∩ A_2).

We can continue in the same manner until we reach the set ∪_{j=1}^n A_j. In general we have, for each i = 0,...,n−1,

f(t) := (∪_{j=1}^i A_j) ∪ [f_0(t − Σ_{j=1}^i (1 − τ_j)) ∩ A_{i+1}],  t ∈ [t_i, t_{i+1}]

where t_0 = 0, t_1 = 1 and t_i = 1 + Σ_{j=1}^{i−1} (1 − τ_j), i = 2,...,n.

The flow f will be denoted f_{A′,ord} (since it depends on both the semilattice A′ and the ordering ord) and will be called the projection of the flow f_0 on the semilattice A′, with respect to the ordering ord.
Example 2.5.1 Let T = [0,1]^2, A = {[0,z]; z ∈ T}, let A′ be a finite sub-semilattice of A and ord = {A_0 = ∅′, A_1, A_2, A_3, A_4 = T} a consistent ordering of A′, where A_1 = [0, (1/2, 1/2)], A_2 = [0, (1, 1/2)], A_3 = [0, (1/2, 1)]. Suppose that f_0(t) = [0, (t, t^2)], t ∈ [0,1].

Then the projected flow f_{A′,ord} has the following definition:

f_{A′,ord}(t) := f_0(t) ∩ A_1                  if t ∈ [0, 1];
f_{A′,ord}(t) := A_1 ∪ [f_0(s) ∩ A_2]          if s := t − (1 − τ_1) ∈ [τ_1, 1];
f_{A′,ord}(t) := (A_1 ∪ A_2) ∪ [f_0(s) ∩ A_3]  if s := t − (1 − τ_1) − (1 − τ_2) ∈ [τ_2, 1];
f_{A′,ord}(t) := (A_1 ∪ A_2 ∪ A_3) ∪ f_0(s)    if s := t − (1 − τ_1) − (1 − τ_2) − (1 − τ_3) ∈ [τ_3, 1];

where τ_i := inf{t; f_0(t) ∩ A_{i+1} ⊈ ∪_{j=1}^i A_j}, i = 1, 2, 3.
(Figure: the path of the projected flow f_{A′,ord} in this case.)
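The change points τ_i of this example can be located numerically. The following Python sketch (ours; rectangles are encoded by their upper corners, and the containment test uses the SHAPE property of Example 2.2.1) recovers τ_1 = 1/2 and τ_2 = τ_3 = 1/√2.

def meet(u, v):
    # upper corner of [0, u] ∩ [0, v]
    return (min(u[0], v[0]), min(u[1], v[1]))

def contained(u, vs):
    # by SHAPE for rectangles: [0, u] ⊆ ∪_j [0, v_j] iff u <= v_j for some j
    return any(u[0] <= v[0] and u[1] <= v[1] for v in vs)

def f0(t):
    return (t, t * t)

A = [(0.5, 0.5), (1.0, 0.5), (0.5, 1.0), (1.0, 1.0)]  # A_1, A_2, A_3, A_4 = T

def tau(i, steps=10 ** 5):
    # τ_i = inf{t : f0(t) ∩ A_{i+1} not ⊆ A_1 ∪ ... ∪ A_i}, found by a grid scan
    for k in range(steps + 1):
        t = k / steps
        if not contained(meet(f0(t), A[i]), A[:i]):
            return t
    return 1.0

print([tau(i) for i in (1, 2, 3)])  # about [0.5, 0.7071, 0.7071]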
The following result says that until a certain moment the two flows f_0 and f have exactly the same path, and after that moment any increment on the path of the flow f_0 (which is contained in ∪_{j=1}^n A_j) can be recovered as the disjoint union of a finite number of increments on the path of the projected flow f.
Proposition 2.5.2 Let s_i := sup{t ∈ [0,1]; f_0(t) ⊆ ∪_{j=1}^i A_j}, i = 1,...,n. Define u_1 := s_1 and u_i := s_1 + Σ_{j=1}^{i−1} (1 − τ_j), i = 2,...,n. Then f_0(t) = f(t) for any t ≤ s_1 and

f_0(s_1 + ε) \ f_0(s_1) = ∪̇_{j=1}^i [f(u_j + ε) \ f(u_j)]

for any ε ∈ (0, s_i − s_1), i = 2,...,n.
Proof: The first assertion is clear. Let C_i be the left neighbourhood of A_i in A′. Let i = 2,...,n be fixed. For any ε < s_i − s_1, f_0(s_1 + ε) ⊆ ∪_{j=1}^i A_j = ∪̇_{j=1}^i C_j and hence we have

f_0(s_1 + ε) \ f_0(s_1) = ∪̇_{j=1}^i [(f_0(s_1 + ε) \ f_0(s_1)) ∩ C_j].

On the other hand,

(f_0(s_1 + ε) \ f_0(s_1)) ∩ C_j = (f_0(s_1 + ε) ∩ A_j) \ [(∪_{k=1}^{j−1} A_k) ∪ (f_0(s_1) ∩ A_j)]
= [(∪_{k=1}^{j−1} A_k) ∪ (f_0(s_1 + ε) ∩ A_j)] \ [(∪_{k=1}^{j−1} A_k) ∪ (f_0(s_1) ∩ A_j)]
= f(s_1 + Σ_{k=1}^{j−1} (1 − τ_k) + ε) \ f(s_1 + Σ_{k=1}^{j−1} (1 − τ_k))

by the definition of the projected flow f between the sets ∪_{k=1}^{j−1} A_k and ∪_{k=1}^{j} A_k. □
A consequence of the above proposition is the following: let X := (X_A)_{A∈A} be a set-indexed process which has a unique additive extension to C(u). Denote by X^0_t := X_{f_0(t)}, t ∈ [0,1], and X_t := X_{f(t)}, t ∈ [0, t_n], the projections of the process X over the flows f_0, f. Then X^0_t = X_t for any t ≤ s_1 and

X^0_{s_1+ε} − X^0_{s_1} = Σ_{j=1}^i (X_{u_j+ε} − X_{u_j})  for any ε ∈ (0, s_i − s_1), i = 2,...,n.
The next proposition describes the relationship between the projections f and g of the flow f_0 on the same finite semilattice A′, but with respect to two different orderings.

Proposition 2.5.3 If ord1 = {A_0 = ∅′, A_1,...,A_n} and ord2 = {A′_0 = ∅′, A′_1,...,A′_n} are two consistent orderings of the same finite semilattice A′, with A_i = A′_{π(i)} ∀i, where π is a permutation of {1,...,n} with π(1) = 1, and we denote f := f_{A′,ord1}, g := f_{A′,ord2} with f(t_i) = ∪_{j=1}^i A_j, g(u_i) = ∪_{j=1}^i A′_j, then

(a) t_1 = u_1 = 1 and t_i − t_{i−1} = u_{π(i)} − u_{π(i)−1} for every i = 2,...,n;

(b) f(t) = g(t) for any t ∈ [0, t_1] and

f(t_{i−1} + ε) \ f(t_{i−1}) = g(u_{π(i)−1} + ε) \ g(u_{π(i)−1})

for every ε ∈ (0, t_i − t_{i−1}) and for every i = 2,...,n.
Proof: (a) From the definition of the projected flow we have t_1 = u_1 = 1 and t_i = 1 + Σ_{j=1}^{i−1} (1 − τ_j), u_i = 1 + Σ_{j=1}^{i−1} (1 − τ′_j) for i = 2,...,n, where

τ_i := inf{t ∈ [0,1]; A_{i+1} ∩ f_0(t) ⊈ ∪_{j=1}^i A_j},  τ′_i := inf{t ∈ [0,1]; A′_{i+1} ∩ f_0(t) ⊈ ∪_{j=1}^i A′_j}.

Hence t_i − t_{i−1} = 1 − τ_{i−1} and u_{π(i)} − u_{π(i)−1} = 1 − τ′_{π(i)−1}. We will prove that τ_{i−1} = τ′_{π(i)−1} for any i = 2,...,n.

Note that τ_{i−1} is the first instance when A_i ∩ f_0(t) ⊈ ∪_{j=1}^{i−1} A_j. But A_i ∩ f_0(t) ⊆ ∪_{j=1}^{i−1} A_j if and only if A_i ∩ f_0(t) ⊆ ∪_{j=1}^{i−1} (A_j ∩ A_i) = ∪_{A∈A′, A⊊A_i} A. Similarly τ′_{π(i)−1} is the first instance when A′_{π(i)} ∩ f_0(t) ⊈ ∪_{j=1}^{π(i)−1} A′_j; but A′_{π(i)} ∩ f_0(t) ⊆ ∪_{j=1}^{π(i)−1} A′_j if and only if A′_{π(i)} ∩ f_0(t) ⊆ ∪_{A∈A′, A⊊A′_{π(i)}} A. The equality follows since A_i = A′_{π(i)}.
(b) Clearly f(t) = g(t) = f_0(t) ∩ A_1 for any t ∈ [0, t_1]. By the definition of f we have, for each i = 2,...,n,

f(t_{i−1} + ε) \ f(t_{i−1}) = [(∪_{j=1}^{i−1} A_j) ∪ (f_0(t_{i−1} + ε − Σ_{j=1}^{i−1} (1 − τ_j)) ∩ A_i)] \ (∪_{j=1}^{i−1} A_j)
= (f_0(τ_{i−1} + ε) ∩ A_i) \ (∪_{j=1}^{i−1} A_j) = f_0(τ_{i−1} + ε) ∩ C_i.

Similarly

g(u_{π(i)−1} + ε) \ g(u_{π(i)−1}) = f_0(τ′_{π(i)−1} + ε) ∩ C′_{π(i)}.

The result follows since τ_{i−1} = τ′_{π(i)−1} and C_i = C′_{π(i)}. □
A consequence of the above proposition is the following: let X := (X_A)_{A∈A} be a set-indexed process which has a unique additive extension to C(u). Denote by X′_t := X_{f(t)}, t ∈ [0, t_n], and X″_t := X_{g(t)}, t ∈ [0, u_n], the projections of the process X over the flows f, g. Then X′_t = X″_t for any t ≤ t_1 and

X′_{t_{i−1}+ε} − X′_{t_{i−1}} = X″_{u_{π(i)−1}+ε} − X″_{u_{π(i)−1}}

for any ε ∈ (0, t_i − t_{i−1}), i = 2,...,n. In other words, the trajectory of the process X′ can be constructed from the trajectory of the process X″ by permuting (according to the permutation π) the n − 1 pieces determined by the partition u_1 < u_2 < ... < u_n.
Chapter 3
Set-Markov Processes
Defining a Markov property for processes indexed by a partially ordered set is not a problem that can be tackled easily when one has the declared goal of proving that processes with such a property actually exist.
For processes indexed by the plane, different types of definitions were proposed,
the only one for which a constructive result exists being Lévy’s sharp Markov property
(see the introductory Chapter 1). For A-indexed processes, where A is an indexing
collection, there are already in the literature (see [41]) two possible definitions for
the Markov property called ‘sharp Markov property’ (analogous to the sharp Markov
property on the plane) and ‘Markov property’ (analogous to the Markov property
introduced by the authors of [46] on the plane). Having the merit of being as general as
possible, these definitions have also overcome the difficulty of working with processes
indexed by one class of objects (the points) and with a Markov property considered
with respect to another class of objects (the sets), because in their case everything is
defined in terms of the sets of the indexing collection. However, an existence result is
not yet available for these types of Markov properties in their full generality, although
we know that any process with independent increments is Markov (in the specified
sense).
Here we will consider a different type of definition for a set-indexed Markov process, such a process being called 'set-Markov'. The biggest advantage of this definition is, in our opinion, the fact that not only is it completely analogous to the Markov property on the line, but we can also very easily make use of the rich theory of Markov
processes on the line. In particular, exploiting this definition we will develop without
any difficulty a transition system theory paralleling the classical one and we will be
able to prove the desired existence result. The key ingredient to this promising theory
is the object named ‘flow’, introduced in Section 2.4, which will transport a closed
interval of the real line into our indexing class A(u).
In this chapter we will introduce the formal definition of set-Markov processes and
we will prove some of their properties. In particular we will prove that all processes
with independent increments are set-Markov and that the class of set-Markov processes is a sub-class of the class of sharp Markov processes. Given a transition system
Q we will restrict our attention to those set-Markov processes for which the transition
from one state to another is dictated by Q. We will call these processes Q-Markov
and we will construct them in two cases: when the geometrical property SHAPE
holds and when it does not. Finally, we will see that to each transition system Q one
can associate a transition semigroup of bounded linear operators, very much in the
spirit of the classical theory.
Most of the results of Section 3.1 and Section 3.3 appear in [7].
3.1 Set-Markov Processes. General Properties
In this section we will give the definition of a set-Markov process and we will discuss
its properties.
If F, G, H are three sub-σ-fields of the same probability space, we will use the notation F ⊥ H | G if F and H are conditionally independent given G, i.e., for every bounded H-measurable random variable Y, we have E[Y | F ∨ G] = E[Y | G]. (Basic properties of conditional independence are discussed in Appendix A.1.)
Definition 3.1.1 A set-indexed process X := (X_A)_{A∈A} is called set-Markov with respect to a filtration (F_A)_{A∈A} if it is adapted and ∀A ∈ A, ∀B ∈ A(u),

F_B ⊥ σ(X_{A\B}) | σ(X_B).   (4)

The process X is called simply set-Markov if it is set-Markov with respect to its minimal filtration.
Comments 3.1.2
1. In the above definition we can assume without loss of generality that A ⊈ B: if A ⊆ B, then X_{A\B} = 0 a.s.
2. It is easy to see that in the classical case (i.e. for processes indexed by the real line) the set-Markov property is equivalent to the usual Markov property: a process X := (X_t)_{t∈[0,a]} is Markov if and only if ∀s, t ∈ [0, a], s < t, we have F_s ⊥ σ(X_t − X_s) | σ(X_s).
We have the following immediate consequence of the definition.
Proposition 3.1.3 Any process with independent increments is set-Markov.
Proof: Let X := (X_A)_{A∈A} be a process with independent increments and (F_B)_{B∈A(u)} its minimal filtration. Using a monotone class argument, it is not difficult to see that for every A ∈ A, B ∈ A(u), X_{A\B} is independent of F_B (and hence X_{A\B} is also independent of X_B). □
The next result gives some equivalent definitions for the set-Markov property.

Proposition 3.1.4 (Characterization Properties) Let (F_A)_{A∈A} be a set-indexed filtration and X := (X_A)_{A∈A} an adapted set-indexed process. The following statements are equivalent:

1. the process X is set-Markov with respect to (F_A)_{A∈A};
2. ∀A ∈ A, B ∈ A(u), F_B ⊥ σ(X_{A∪B}) | σ(X_B);
3. ∀B, B′ ∈ A(u), B ⊆ B′, F_B ⊥ σ(X_{B′}) | σ(X_B).

Proof: The fact that 1 is equivalent to 2 follows from Lemma A.1.3, Appendix A.1, since X_{A∪B} = X_{A\B} + X_B a.s. and X_B is F_B-measurable. Clearly 3 implies 2; it remains to check that 2 implies 3. Let B, B′ ∈ A(u) be such that B ⊆ B′. Say B′ = ∪_{i=1}^k A_i, A_i ∈ A, is an arbitrary representation. Then B′ = B ∪ B′ = B ∪ ∪_{i=1}^k A_i and the result follows by induction on k: assume that the statement is true for k. Then

E[h(X_{B∪∪_{i=1}^{k+1} A_i}) | F_B] = E[E[h(X_{B∪∪_{i=1}^{k+1} A_i}) | F_{B∪∪_{i=1}^k A_i}] | F_B] = E[E[h(X_{B∪∪_{i=1}^{k+1} A_i}) | X_{B∪∪_{i=1}^k A_i}] | F_B] = E[h′(X_{B∪∪_{i=1}^k A_i}) | F_B]

which is σ(X_B)-measurable, by the induction hypothesis. □
Comment 3.1.5 Using a monotone class argument it is not difficult to see that a set-indexed process X := (X_A)_{A∈A} is set-Markov if and only if ∀A ∈ A, ∀B ∈ A(u), ∀A_i ∈ A, A_i ⊆ B, i = 1,...,m,

σ(X_{A_1},...,X_{A_m}) ⊥ σ(X_{A\B}) | σ(X_B).

(According to the above characterization, we can replace A\B with A ∪ B, or with B′ ∈ A(u), B ⊆ B′.) This property virtually says that we can 'discretize' the past history of a set-Markov process, exactly as in the classical case.
The following result says that a set-Markov process becomes Markov in the usual sense along any flow (i.e. along any increasing collection of sets in A(u), indexed by a closed interval of the real line). Moreover, when we restrict our attention only to the 'simple' flows, i.e. to those flows which are piecewise A-valued, this becomes in fact a sufficient condition for the set-Markov nature of the process. This result, while simple, is crucial since it provides us with the means to define the generator of a set-Markov process; this will be done in Chapter 4.
Proposition 3.1.6 Let (FA )A∈A be a set-indexed filtration and X := (XA )A∈A a
set-indexed process. Then X is set-Markov with respect to the filtration (FA )A∈A if
and only if for every simple flow f : [0, a] → A(u) the process X f := (Xf (t) )t∈[0,a]
is Markov with respect to the filtration (Ff (t) )t∈[0,a] . (For the necessity part we can
consider any flow, not only the simple ones.)
Comment 3.1.7 The filtration (Ff (t) )t∈[0,a] is not the minimal filtration associated
to the process X f even if the set-indexed filtration (FA )A∈A is the minimal filtration
associated to the process X.
Proof: From the classical theory of Markov processes, the process X^f is Markov with respect to the filtration (F_{f(t)})_t if and only if it is adapted and for every s < t we have

F_{f(s)} ⊥ σ(X_{f(t)}) | σ(X_{f(s)}).

This is equivalent to characterization property 3 of Proposition 3.1.4, since we know that whenever the sets B, B′ ∈ A(u) are such that B ⊆ B′ there exists a simple flow f and some s < t such that f(s) = B and f(t) = B′ (Lemma 2.4.9). □
Comment 3.1.8 To emphasize the importance of the above result, let us record that
a similar phenomenon happens in the martingale case. More precisely, a set-indexed
process X := (XA )A∈A is a ‘strong martingale’ with respect to the filtration (FA )A∈A
if and only if for every simple flow f : [0, a] → A(u) the process X f := (Xf (t) )t∈[0,a]
is a martingale with respect to the filtration (Ff (t) )t∈[0,a] (see Lemma 5.1.2, [41]).
A 'strong martingale' is an adapted set-indexed process X with a unique additive extension to C(u) which satisfies the condition E[X_C | G*_C] = 0 for all C ∈ C, where G*_C := ∨_{B∈A(u), B∩C=∅} F_B.
The next lemma shows that we can consider more than one set in the definition of the set-Markov property. In particular, this will imply that the set-Markov property is stronger than the sharp Markov property.
Lemma 3.1.9 Let X be a set-Markov process with respect to the filtration (F_A)_{A∈A}. Then ∀B ∈ A(u), ∀A_1,...,A_k ∈ A,

F_B ⊥ σ(X_{A_1\B},...,X_{A_k\B}) | σ(X_B).
Proof: Without loss of generality we can assume that A_i ⊈ B ∀i. Let h : R^k → R be an arbitrary bounded measurable function.

Say B = ∪_{j=k+1}^m A_j, A_j ∈ A. Let A′ be the minimal finite semilattice which contains the sets A_1,...,A_m, {A′_0 = ∅′, A′_1,...,A′_n} a consistent ordering of A′ and C′_i the left neighbourhood of A′_i in A′ for i = 1,...,n.

Say A_j = A′_{i_j} for j = 1,...,k; then A_j\B = A′_{i_j}\B = ∪_{i∈I_j} C′_i with I_j ⊆ {1,...,i_j} and X_{A_j\B} = Σ_{i∈I_j} X_{C′_i}.

Therefore we can say that h(X_{A_1\B},...,X_{A_k\B}) = h_1(X_{C′_{l_1}},...,X_{C′_{l_s}}), for a certain bounded measurable function h_1 : R^s → R and some l_1 ≤ ... ≤ l_s with C′_{l_i} ⊈ B.

In order to simplify the notation, let us denote D_i := C′_{l_i}, i = 1,...,s. Let B_i = B ∪ ∪_{k=1}^i D_k for i = 1,...,s. Then D_i = B_i\B_{i−1} and X_{D_i} = X_{B_i} − X_{B_{i−1}}; hence X_{D_1},...,X_{D_{s−1}} are F_{B_{s−1}}-measurable. Because each D_i is the left neighbourhood of A′_{l_i}, we also have B_i = B ∪ ∪_{k=1}^i A′_{l_k} and therefore D_i = (A′_{l_i} ∪ B_{i−1})\B_{i−1} = A′_{l_i}\B_{i−1}.

Using Proposition 3.1.4 (property 3), we have E[f(X_{D_s}) | F_{B_{s−1}}] = E[f(X_{D_s}) | X_{B_{s−1}}] for any bounded measurable function f. At this point we may apply Lemma A.1.3, Appendix A.1, to get

E[h_1(X_{D_1},...,X_{D_s}) | F_{B_{s−1}}] = E[h_1(X_{D_1},...,X_{D_s}) | X_{B_{s−1}}, X_{D_1},...,X_{D_{s−1}}].

So

E[h(X_{A_1\B},...,X_{A_k\B}) | F_B] = E[E[h_1(X_{D_1},...,X_{D_s}) | F_{B_{s−1}}] | F_B] = E[E[h_1(X_{D_1},...,X_{D_s}) | X_{B_{s−1}}, X_{D_1},...,X_{D_{s−1}}] | F_B].

Writing X_{B_{s−1}} = X_B + Σ_{k=1}^{s−1} X_{D_k} we get

E[h(X_{A_1\B},...,X_{A_k\B}) | F_B] = E[h_2(X_{D_1},...,X_{D_{s−1}}, X_B) | F_B].

Continuing in the same manner, reducing at each step another set D_i, we finally get

E[h(X_{A_1\B},...,X_{A_k\B}) | F_B] = E[h(X_{A_1\B},...,X_{A_k\B}) | X_B]. □
Proposition 3.1.10 If X := (X_A)_{A∈A} is a set-Markov process with respect to the filtration (F_A)_{A∈A}, then it is also sharp Markov with respect to the filtration (F_A)_{A∈A}.

Proof: Let B ∈ A(u), A_i ∈ A(B^c), i = 1,...,k, and h : R^k → R a bounded measurable function. Then, writing X_{A_i} = X_{A_i∩B} + X_{A_i\B} and using Lemma A.1.3, Appendix A.1, we get

E[h(X_{A_1},...,X_{A_k}) | F_B] = E[h(X_{A_1},...,X_{A_k}) | X_B, X_{A_i∩B}, i = 1,...,k]

which is F_{∂B}-measurable, by Lemma 2.3.15. □
In what follows we will give an important characterization of set-Markov processes that will be instrumental for the construction of these processes.

Proposition 3.1.11 A set-indexed process X := (X_A)_{A∈A} is set-Markov if and only if for every finite semilattice A′, for every consistent ordering {A_0 = ∅′, A_1,...,A_n} of A′ and for every i = 1,...,n−1,

σ(X_{A_0}, X_{A_0∪A_1},...,X_{∪_{j=0}^{i−1} A_j}) ⊥ σ(X_{∪_{j=0}^{i+1} A_j}) | σ(X_{∪_{j=0}^{i} A_j}).   (5)
Proof: We will use the characterization of the set-Markov property given by Proposition 3.1.4 (property 3). Necessity follows immediately.

For sufficiency, note first that equation (5) implies a similar equation where the union of the first i+1 sets is replaced by the union of the first i+p sets. In fact it is easily shown by induction on p that for every i ≤ n−p,

σ(X_{A_0}, X_{A_0∪A_1},...,X_{∪_{j=0}^{i−1} A_j}) ⊥ σ(X_{∪_{j=0}^{i+k} A_j}, k = 1,...,p) | σ(X_{∪_{j=0}^{i} A_j}).   (6)

(The statement for p = 1 is true by hypothesis. Assume next that the statement is true for p. The result for p+1 follows by Lemma A.1.4, Appendix A.1, since for each fixed i ≤ n−p−1, σ(X_{A_0}, X_{A_0∪A_1},...,X_{∪_{j=0}^{i−1} A_j}) ⊥ σ(X_{∪_{j=0}^{i+1} A_j}) | σ(X_{∪_{j=0}^{i} A_j}) and σ(X_{A_0}, X_{A_0∪A_1},...,X_{∪_{j=0}^{i} A_j}) ⊥ σ(X_{∪_{j=0}^{i+k} A_j}, k = 2,...,p+1) | σ(X_{∪_{j=0}^{i+1} A_j}).)

Consider now arbitrary sets B, B′ ∈ A(u) with B ⊆ B′. Using Comment 3.1.5, it is enough to show that ∀A′_l ∈ A, A′_l ⊆ B, l = 1,...,m, σ(X_{A′_1},...,X_{A′_m}) ⊥ σ(X_{B′}) | σ(X_B). Without loss of generality we can assume that the sets A′_1,...,A′_m form a finite semilattice (if not, there exists a finite semilattice which contains them) and that the ordering {A′_0 = ∅′, A′_1,...,A′_m} is consistent.

There exists a bijective map ψ such that (X_{A′_1}, X_{A′_2},...,X_{A′_m}) = ψ(X_{A′_1}, X_{A′_1∪A′_2},...,X_{∪_{l=1}^m A′_l}) a.s. Consequently, we have

σ(X_{A′_1}, X_{A′_2},...,X_{A′_m}) = σ(X_{A′_1}, X_{A′_1∪A′_2},...,X_{∪_{l=1}^m A′_l}).

By Proposition 2.1.14.(a), there exists a finite semilattice A″ and a consistent ordering {A_0 = ∅′, A_1,...,A_n} of A″ such that A′_1 = ∪_{j=0}^{i_1} A_j, A′_1 ∪ A′_2 = ∪_{j=0}^{i_2} A_j, ..., ∪_{l=1}^m A′_l = ∪_{j=0}^{i_m} A_j, B = ∪_{j=0}^{i_{m+1}} A_j and B′ = ∪_{j=0}^{i_{m+2}} A_j for some i_1 ≤ i_2 ≤ ... ≤ i_{m+2}. Using (6), it follows that σ(X_{A′_1}, X_{A′_1∪A′_2},...,X_{∪_{l=1}^m A′_l}) ⊥ σ(X_{B′}) | σ(X_B): take i := i_{m+1} and p := i_{m+2} − i_{m+1}. □
3.2 Q-Markov Processes. The Construction under SHAPE
In this section we will introduce a special class of set-Markov processes, called Q-Markov processes, for which the mechanism of transition from one state to another is completely known. This theory should be viewed as a generalization of the classical theory of Markov processes corresponding to a certain transition system.

We will assume that the indexing collection A satisfies SHAPE and we will prove that, under a certain consistency assumption, there exists a probability measure P on the product space (R^A, B(R)^A) under which the coordinate-variable process X := (X_A)_{A∈A} defined by X_A(x) := x_A is Q-Markov. In particular, this proves the existence of a multiparameter process which is sharp Markov with respect to the class of all finite unions of rectangles of the type [0, z].
We recall that by a transition probability on R we mean a function Q(x; Γ), x ∈ R, Γ ∈ B(R), which is measurable in x and a probability measure in Γ. Given two random variables X and Y, a transition probability Q is said to be a version of the conditional distribution of Y given X if P[Y ∈ Γ | X = x] = Q(x; Γ) for almost all x (with respect to the distribution of X). Such a version always exists, but it is clearly not unique. (Properties of conditional distributions are discussed in Appendix A.2.)
Note that if X := (X_t)_{t∈[0,a]} is a classical Markov process with initial distribution µ and for each s ≤ t, Q_{st} is a version of the conditional distribution of X_t given X_s, then the finite dimensional distribution of X (over an ordered (n+1)-tuple of the form 0 < t_1 < t_2 < ... < t_n) is given by

P(X_0 ∈ Γ_0, X_{t_1} ∈ Γ_1, X_{t_2} ∈ Γ_2, ..., X_{t_n} ∈ Γ_n) = ∫_{R^{n+1}} I_{Γ_0}(x_0) ∏_{i=1}^n I_{Γ_i}(x_i) Q_{t_{n−1}t_n}(x_{n−1}; dx_n) ... Q_{t_1t_2}(x_1; dx_2) Q_{0t_1}(x_0; dx_1) µ(dx_0)

for every Γ_0, Γ_1,...,Γ_n ∈ B(R).
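On a finite state space the integrals become sums and the kernels become stochastic matrices, so this formula can be evaluated directly. Here is a minimal Python sketch (ours; the chain, the times and the events are made-up illustrations), in which Q_{st} := P^{t−s}, so that the consistency relation discussed below holds by construction.

import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # one-step transition matrix on states {0, 1}
mu = np.array([0.5, 0.5])           # initial distribution

def Q(s, t):
    # transition kernel from (integer) time s to time t: Q_st = P^(t-s)
    return np.linalg.matrix_power(P, t - s)

def fdd(times, events):
    # P(X_0 in events[0], X_{t_1} in events[1], ...) by iterated summation
    p = mu * events[0]              # restrict the initial law to the first event
    prev = 0
    for t, e in zip(times, events[1:]):
        p = (p @ Q(prev, t)) * e    # propagate, then restrict to the next event
        prev = t
    return p.sum()

ev = [np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 1.0])]
print(fdd([1, 3], ev))              # P(X_0 = 0, X_3 = 1) = 0.1095, since the middle event is trivial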
Using this observation, it would seem that all we need to construct a Markov process is the initial distribution µ and the conditional distributions Q_{st}. However, for any s ≤ t ≤ u the conditional distribution of X_u given X_s has two versions: Q_{su}(x; Γ) and Q_{st}Q_{tu}(x; Γ) := ∫_R Q_{tu}(y; Γ) Q_{st}(x; dy) (by Proposition A.2.3, Appendix A.2); hence, for each fixed Γ ∈ B(R), these versions will coincide for almost all x (with respect to the distribution of X_s), the negligible set depending on the set Γ.
In theory, one is confronted with the problem of constructing a Markov process X := (X_t)_{t∈[0,a]} (and implicitly the probability space (Ω, F, P) on which it is defined) knowing the initial distribution µ and a system Q of transition probabilities Q_{st} (each Q_{st} will later be identified as a version of the conditional distribution of X_t given X_s). However, because the probability measure P is not yet defined, we have to require that for s ≤ t ≤ u, the versions Q_{su} and Q_{st}Q_{tu} coincide everywhere, i.e.

Q_{su}(x; Γ) = ∫_R Q_{tu}(y; Γ) Q_{st}(x; dy)  ∀x ∈ R, Γ ∈ B(R).
In this case, the family Q is called a (one-parameter) transition system. A Markov process X := (X_t)_{t∈[0,a]} for which Q_{st} is a version of the conditional distribution of X_t given X_s for every s ≤ t is called Q-Markov.
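On a finite state space the everywhere-validity of the Chapman-Kolmogorov relation is plain matrix equality, which makes it easy to check that a candidate family is a transition system. A hedged sketch, assuming kernels generated by powers of a single stochastic matrix (a choice made purely for illustration):

    import numpy as np

    # One-parameter transition system on a finite state space: Q_st := P^(t-s)
    # for a fixed stochastic matrix P and integer times s <= t.
    P = np.array([[0.9, 0.1],
                  [0.3, 0.7]])

    def Q(s, t):
        return np.linalg.matrix_power(P, t - s)

    # Chapman-Kolmogorov must hold for EVERY x, i.e. as a matrix identity.
    s, t, u = 0, 2, 5
    assert np.allclose(Q(s, u), Q(s, t) @ Q(t, u))
    print(Q(s, u))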
In what follows we will indicate how we can transfer these ideas to the set-indexed
case.
Definition 3.2.1 (a) For each B, B′ ∈ A(u), B ⊆ B′ let Q_{BB′} be a transition probability on R. The family Q := (Q_{BB′})_{B,B′∈A(u); B⊆B′} is called a transition system if ∀B ∈ A(u), Q_{BB}(x; ·) = δ_x and ∀B, B′, B″ ∈ A(u), B ⊆ B′ ⊆ B″

Q_{BB″}(x; Γ) = ∫_R Q_{B′B″}(y; Γ) Q_{BB′}(x; dy)  ∀x ∈ R, Γ ∈ B(R)

(b) Let Q := (Q_{BB′})_{B,B′∈A(u); B⊆B′} be a transition system. A set-indexed process X := (X_A)_{A∈A} is called Q-Markov with respect to a filtration (F_A)_{A∈A} if it is adapted and ∀B, B′ ∈ A(u), B ⊆ B′

P[X_{B′} ∈ Γ | F_B] = Q_{BB′}(X_B; Γ)  a.s.  ∀Γ ∈ B(R).
The process X is called simply Q-Markov if it is Q-Markov with respect to its
minimal filtration.
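As a concrete family satisfying Definition 3.2.1 (our illustration, not an example from the text), one can take Q_{BB′}(x; ·) to be the Gaussian law with mean x and variance m(B′) − m(B) for a finite measure m on the index space; the Chapman-Kolmogorov relation then amounts to the convolution of independent Gaussians. A Monte Carlo sketch, representing a set B only through the number m(B):

    import numpy as np

    # Illustrative set-indexed transition system:
    # Q_{B,B'}(x; .) = N(x, m(B') - m(B)), with B entering only via m(B).
    def sample_Q(x, mB, mBp, rng):
        # one draw from Q_{B,B'}(x; .), assuming m(B) <= m(B')
        return x + np.sqrt(mBp - mB) * rng.standard_normal()

    rng = np.random.default_rng(0)
    mB, mBp, mBpp, x0, N = 1.0, 2.5, 4.0, 0.0, 100_000

    # Chapman-Kolmogorov by Monte Carlo: going B -> B'' directly must give
    # the same law as composing B -> B' -> B''.
    direct = np.array([sample_Q(x0, mB, mBpp, rng) for _ in range(N)])
    composed = np.array([sample_Q(sample_Q(x0, mB, mBp, rng), mBp, mBpp, rng)
                         for _ in range(N)])
    print(direct.var(), composed.var())   # both close to m(B'') - m(B) = 3.0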
Note that a Q-Markov process is a set-Markov process for which Q_{BB′} is a version of the conditional distribution of X_{B′} given X_B, for every B, B′ ∈ A(u), B ⊆ B′: from the definition of the Q-Markov property, ∀B, B′ ∈ A(u), B ⊆ B′, P[X_{B′} ∈ Γ | F_B] = P[X_{B′} ∈ Γ | X_B], i.e. F_B ⊥ σ(X_{B′}) | σ(X_B).
According to the preceding discussion, if X is set-Markov and Q_{BB′} is an arbitrary version of the conditional distribution of X_{B′} given X_B for every B, B′ ∈ A(u), B ⊆ B′, then X is not necessarily Q-Markov, simply because the family Q of all the Q_{BB′}'s may not be a transition system.

Let Q := (Q_{BB′})_{B,B′∈A(u); B⊆B′} be a fixed transition system.
Comment 3.2.2 Using a monotone class argument, it is not difficult to see that a set-indexed process X := (X_A)_{A∈A} is Q-Markov if and only if ∀A ∈ A, ∀B ∈ A(u) and for every partition B = ∪_{i=1}^{p} C_i, C_i ∈ C

P[X_{A∪B} ∈ Γ | X_{C_1}, . . . , X_{C_p}] = Q_{B,A∪B}(X_B; Γ)  a.s.  ∀Γ ∈ B(R).
This property will be used in Chapter 6 in order to show that certain processes are
Q-Markov.
The next result follows exactly as Proposition 3.1.6 and gives the expected correspondence via flows.
Proposition 3.2.3 A set-indexed process X := (X_A)_{A∈A} is Q-Markov with respect to a filtration (F_A)_{A∈A} if and only if for every simple flow f : [0, a] → A(u) the process X^f := (X_{f(t)})_{t∈[0,a]} is Q^f-Markov with respect to the filtration (F_{f(t)})_{t∈[0,a]}, where Q^f_{st} := Q_{f(s),f(t)}. (For the necessity part we can consider any flow, not only the simple ones.)
The following result is the counterpart of Proposition 3.1.11; in particular, this result will allow us to write down the finite dimensional distributions of a set-indexed Q-Markov process in a closed form that is very similar to the classical case.
Proposition 3.2.4 A set-indexed process X := (X_A)_{A∈A} is Q-Markov if and only if for every finite semilattice A′, for every consistent ordering {A_0 = ∅′, A_1, . . . , A_n} of A′ and for every i = 1, . . . , n − 1

σ(X_{A_0}, X_{A_0∪A_1}, . . . , X_{∪_{j=0}^{i−1} A_j}) ⊥ σ(X_{∪_{j=0}^{i+1} A_j}) | σ(X_{∪_{j=0}^{i} A_j}) and

Q_{∪_{j=1}^{i} A_j, ∪_{j=1}^{i+1} A_j} is a version of the c.d. of X_{∪_{j=1}^{i+1} A_j} given X_{∪_{j=1}^{i} A_j}

(c.d. stands for ‘conditional distribution’).
Proof: Necessity follows immediately from the definition of a Q-Markov process. For sufficiency, note first that the process X is set-Markov by Proposition 3.1.11. Consider now arbitrary sets B, B′ ∈ A(u) with B ⊆ B′. We want to prove that Q_{BB′} is a version of the c.d. of X_{B′} given X_B.
By Proposition 2.1.14.(a), there exist a finite semilattice A′ and a consistent ordering {A_0 = ∅′, A_1, . . . , A_n} of A′ such that B = ∪_{j=0}^{i_1} A_j and B′ = ∪_{j=0}^{i_2} A_j for some i_1 ≤ i_2.
We can easily prove by induction on p that for every i ≤ n − p,

Q_{∪_{j=0}^{i} A_j, ∪_{j=0}^{i+p} A_j} is a version of the c.d. of X_{∪_{j=0}^{i+p} A_j} given X_{∪_{j=0}^{i} A_j}.
(The statement for p = 1 is true by hypothesis. Assume next that the statement is true for p. The result for p + 1 follows by Proposition A.2.3, Appendix A.2, since for each fixed i ≤ n − p − 1, Q_{∪_{j=0}^{i} A_j, ∪_{j=0}^{i+p} A_j} is a version of the c.d. of X_{∪_{j=0}^{i+p} A_j} given X_{∪_{j=0}^{i} A_j}, Q_{∪_{j=0}^{i+p} A_j, ∪_{j=0}^{i+p+1} A_j} is a version of the c.d. of X_{∪_{j=0}^{i+p+1} A_j} given X_{∪_{j=0}^{i+p} A_j}, and σ(X_{∪_{j=0}^{i} A_j}) ⊥ σ(X_{∪_{j=0}^{i+p+1} A_j}) | σ(X_{∪_{j=0}^{i+p} A_j}).)

In particular, this implies that Q_{BB′} is a version of the conditional distribution of X_{B′} given X_B: take i := i_1 and p := i_2 − i_1. □
The consequence of the previous proposition is that we can characterize a set-indexed Q-Markov process in terms of its finite dimensional distributions. More precisely, we have the following result.
Proposition 3.2.5 A set-indexed process X := (X_A)_{A∈A} is Q-Markov with initial distribution µ if and only if for every finite semilattice A′ and for every consistent ordering {A_0 = ∅′, A_1, . . . , A_n} of A′

P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1, X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n) = ∫_{R^{n+1}} I_{Γ_0}(x_0) ∏_{i=1}^{n} I_{Γ_i}(x_i) Q_{∪_{j=1}^{n−1} A_j, ∪_{j=1}^{n} A_j}(x_{n−1}; dx_n) . . . Q_{A_1,A_1∪A_2}(x_1; dx_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0)   (7)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Proof: We will use the characterization of the Q-Markov property given by Proposition 3.2.4.
For necessity, we have

P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1, X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n) =
∫_{Γ_0} P[X_{A_1} ∈ Γ_1, X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n | X_{A_0} = x_0] µ(dx_0) =
∫_{Γ_0} ∫_{Γ_1} P[X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n | X_{A_1} = x_1] Q_{∅′A_1}(x_0; dx_1) µ(dx_0)
because Q_{∅′A_1} is a version of the c.d. of X_{A_1} given X_{∅′} and

P[X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n | X_{A_1} = x_1, X_{A_0} = x_0] = P[X_{A_1∪A_2} ∈ Γ_2, . . . , X_{∪_{j=1}^{n} A_j} ∈ Γ_n | X_{A_1} = x_1]

(since σ(X_{A_0}) ⊥ σ(X_{A_1∪A_2}, . . . , X_{∪_{j=1}^{n} A_j}) | σ(X_{A_1})).
Continuing inductively in the same manner, at the last step we get the desired
relationship.
For sufficiency, we note first that µ is the distribution of X_{∅′} (take Γ_i = R; i = 1, . . . , n in (7)). Taking Γ_2 = . . . = Γ_n = R in (7) we obtain

P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1) = ∫_{Γ_0} ∫_{Γ_1} Q_{∅′A_1}(x_0; dx_1) µ(dx_0)  ∀Γ_0, Γ_1 ∈ B(R)

which proves that Q_{∅′A_1} is a version of the c.d. of X_{A_1} given X_{∅′}.
For every Γ_0, Γ_1, Γ_2 ∈ B(R)

∫_{Γ_0} ∫_{Γ_1} P[X_{A_1∪A_2} ∈ Γ_2 | X_{A_1} = x_1, X_{A_0} = x_0] Q_{∅′A_1}(x_0; dx_1) µ(dx_0) =
P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1, X_{A_1∪A_2} ∈ Γ_2) =
∫_{Γ_0} ∫_{Γ_1} Q_{A_1,A_1∪A_2}(x_1; Γ_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0),

using equation (7) with Γ_3 = . . . = Γ_n = R.
This implies that

P[X_{A_1∪A_2} ∈ Γ_2 | X_{A_1} = x_1, X_{A_0} = x_0] = Q_{A_1,A_1∪A_2}(x_1; Γ_2) (which depends only on x_1) = P[X_{A_1∪A_2} ∈ Γ_2 | X_{A_1} = x_1]

for P ◦ (X_{A_0}, X_{A_1})^{−1}-almost all (x_0, x_1) and for every Γ_2 ∈ B(R). Consequently, σ(X_{A_0}) ⊥ σ(X_{A_1∪A_2}) | σ(X_{A_1}) and Q_{A_1,A_1∪A_2} is a version of the c.d. of X_{A_1∪A_2} given X_{A_1}.
Finally, using an inductive argument, we can conclude that for every i ≤ n − 1, σ(X_{A_0}, X_{A_0∪A_1}, . . . , X_{∪_{j=0}^{i−1} A_j}) ⊥ σ(X_{∪_{j=0}^{i+1} A_j}) | σ(X_{∪_{j=0}^{i} A_j}) and Q_{∪_{j=1}^{i} A_j, ∪_{j=1}^{i+1} A_j} is a version of the c.d. of X_{∪_{j=1}^{i+1} A_j} given X_{∪_{j=1}^{i} A_j}, i.e. X is a Q-Markov process, according to Proposition 3.2.4. □
The next result says that the Q-Markov property of a set-indexed process depends
only on the finite-dimensional distributions of the process. Consequently, any version
of a Q-Markov process will automatically be Q-Markov.
Proposition 3.2.6 A set-indexed process X := (X_A)_{A∈A} is Q-Markov with initial distribution µ if and only if for every finite semilattice A′ and for every consistent ordering {A_0 = ∅′, A_1, . . . , A_n} of A′, if we suppose that A_i = ∪_{j∈I_i} C_j; i = 2, . . . , n for some subset I_i ⊆ {1, . . . , i}, where C_i is the left neighbourhood of A_i for each i = 1, . . . , n, then

P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1, X_{A_2} ∈ Γ_2, . . . , X_{A_n} ∈ Γ_n) = ∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (x_j − x_{j−1})) Q_{∪_{j=1}^{n−1} A_j, ∪_{j=1}^{n} A_j}(x_{n−1}; dx_n) . . . Q_{A_1,A_1∪A_2}(x_1; dx_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Proof: For each i = 2, . . . , n, A_i = ∪_{j∈I_i} C_j = ∪_{j∈I_i} [(∪_{l=1}^{j} A_l) \ (∪_{l=1}^{j−1} A_l)] and

X_{A_i} = ∑_{j∈I_i} X_{C_j} = ∑_{j∈I_i} (X_{∪_{l=1}^{j} A_l} − X_{∪_{l=1}^{j−1} A_l})  a.s. □
If the indexing collection A satisfies SHAPE, the construction of a set-indexed
Q-Markov process will be made using only the sets in A.
The following assumption requires that the distribution of the process over the sets A_0, A_1, . . . , A_n of a finite semilattice does not depend on the consistent ordering of the semilattice.
Assumption 3.2.7 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A′_0 = ∅′, A′_1, . . . , A′_n} are two consistent orderings of the same finite semilattice A′, with A_i = A′_{π(i)} ∀i, where π is a permutation of {1, . . . , n} with π(1) = 1, and we suppose that A_i = ∪_{j∈I_i} C_j for some subset I_i ⊆ {1, . . . , i}, where C_i is the left neighbourhood of A_i for each i = 1, . . . , n, then

∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (x_j − x_{j−1})) Q_{∪_{j=1}^{n−1} A_j, ∪_{j=1}^{n} A_j}(x_{n−1}; dx_n) . . . Q_{A_1,A_1∪A_2}(x_1; dx_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0) =   (8)
∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (y_{π(j)} − y_{π(j)−1})) Q_{∪_{j=1}^{n−1} A′_j, ∪_{j=1}^{n} A′_j}(y_{n−1}; dy_n) . . . Q_{A′_1,A′_1∪A′_2}(y_1; dy_2) Q_{∅′A′_1}(y_0; dy_1) µ(dy_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Proposition 3.2.8 Assumption 3.2.7 is a necessary condition for the existence of a
Q-Markov process (with initial distribution µ).
Proof: Suppose that there exists a set-indexed Q-Markov process X := (XA )A∈A
(with initial distribution µ). Let Γ0 , Γ1 , . . . , Γn ∈ B(R) be arbitrary. Then
P(X_{A_0} ∈ Γ_0, X_{A_1} ∈ Γ_1, X_{A_2} ∈ Γ_2, . . . , X_{A_n} ∈ Γ_n) = P(X_{A′_0} ∈ Γ_0, X_{A′_1} ∈ Γ_1, X_{A′_2} ∈ Γ_{π^{−1}(2)}, . . . , X_{A′_n} ∈ Γ_{π^{−1}(n)}).
By Proposition 3.2.6, the left-hand side of the previous equation is equal to the left-hand side of equation (8); its right-hand side is equal to

∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_{π^{−1}(i)}}(∑_{j∈I′_i} (y_j − y_{j−1})) Q_{∪_{j=1}^{n−1} A′_j, ∪_{j=1}^{n} A′_j}(y_{n−1}; dy_n) . . . Q_{A′_1,A′_1∪A′_2}(y_1; dy_2) Q_{∅′A′_1}(y_0; dy_1) µ(dy_0)
where A′_i = ∪_{j∈I′_i} C′_j for some subset I′_i ⊆ {1, . . . , i}, and C′_i is the left neighbourhood of A′_i for each i = 1, . . . , n.
Finally, ∪_{j∈I′_{π(i)}} C′_j = A′_{π(i)} = A_i = ∪_{j∈I_i} C_j = ∪_{j∈I_i} C′_{π(j)}, using Lemma 2.1.13.(b). Consequently, we have I′_{π(i)} = {π(j); j ∈ I_i}; hence

∏_{i=2}^{n} I_{Γ_{π^{−1}(i)}}(∑_{j∈I′_i} (y_j − y_{j−1})) = ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I′_{π(i)}} (y_j − y_{j−1})) = ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (y_{π(j)} − y_{π(j)−1}))

and the result follows. □
We are now ready to prove the main result of this section which says that, if
the indexing collection satisfies SHAPE, then Assumption 3.2.7 is also sufficient to
construct a Q-Markov process.
Theorem 3.2.9 Suppose that the indexing collection A satisfies SHAPE.
If µ is a probability measure on R and Q := (Q_{BB′})_{B⊆B′} is a transition system which satisfies Assumption 3.2.7, then there exists a probability measure P on the product space (R^A, B(R)^A) under which the coordinate-variable process X := (X_A)_{A∈A} defined on this space is Q-Markov with initial distribution µ.
Proof: For each k-tuple (A_1, . . . , A_k) of distinct sets in A we will define a probability measure µ_{A_1...A_k} on (R^k, B(R)^k) such that the system of all these probability measures satisfies the following two consistency conditions:
(C1) If (A′_1 . . . A′_k) is another ordering of the k-tuple (A_1 . . . A_k), say A_i = A′_{π(i)}, where π is a permutation of {1, . . . , k}, then

µ_{A_1...A_k}(Γ_1 × . . . × Γ_k) = µ_{A′_1...A′_k}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})

for every Γ_1, . . . , Γ_k ∈ B(R).
(C2) If A_1, . . . , A_k, A_{k+1} are k + 1 distinct sets in A, then

µ_{A_1...A_k}(Γ_1 × . . . × Γ_k) = µ_{A_1...A_kA_{k+1}}(Γ_1 × . . . × Γ_k × R)

for every Γ_1, . . . , Γ_k ∈ B(R).
By Kolmogorov's extension theorem (Theorem 36.1, [10]) there exists a probability measure P on (R^A, B(R)^A) under which the coordinate-variable process X := (X_A)_{A∈A} defined by X_A(x) := x_A has the measures µ_{A_1...A_k} as its finite dimensional distributions. The process X will have a unique additive extension to C(u) by Theorem 2.3.3 (and the additivity relationships will hold everywhere). Finally, the Q-Markov property of this process will follow from the way we choose its finite dimensional distributions µ_{A_0A_1...A_n} over the consistently ordered sets of a finite semilattice, according to Proposition 3.2.6.
Step 1 (Construction of the finite dimensional distributions)
Let (A_1, . . . , A_k) be a k-tuple of distinct sets in A. Let A′ be the minimal finite sub-semilattice of A which contains the sets A_1, . . . , A_k, {B_0 = ∅′, B_1, . . . , B_n} a consistent ordering of A′, and suppose that B_j = ∪_{v∈J_j} D_v; j = 2, . . . , n for some subset J_j ⊆ {1, . . . , j}, where D_j is the left neighbourhood of the set B_j for each j = 1, . . . , n. Define

µ_{B_0B_1...B_n}(Γ_0 × Γ_1 × . . . × Γ_n) := ∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{j=2}^{n} I_{Γ_j}(∑_{v∈J_j} (x_v − x_{v−1})) Q_{∪_{j=1}^{n−1} B_j, ∪_{j=1}^{n} B_j}(x_{n−1}; dx_n) . . . Q_{B_1,B_1∪B_2}(x_1; dx_2) Q_{∅′B_1}(x_0; dx_1) µ(dx_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Say A_i = B_{j_i} for some j_i ∈ {1, . . . , n}; i = 1, . . . , k, let α : R^{n+1} → R^k, α(x_0, x_1, . . . , x_n) := (x_{j_1}, . . . , x_{j_k}) and define

µ_{A_1...A_k} := µ_{B_0B_1...B_n} ◦ α^{−1}.

The fact that µ_{A_1...A_k} does not depend on the ordering of the semilattice A′ is a consequence of Assumption 3.2.7.
Step 2 (Consistency condition (C1))
Let (A′_1 . . . A′_k) be another ordering of the k-tuple (A_1 . . . A_k), with A_i = A′_{π(i)}, π being a permutation of {1, . . . , k}. Let A′ be the minimal finite sub-semilattice of A which contains the sets A_1, . . . , A_k, {B_0 = ∅′, B_1, . . . , B_n} a consistent ordering of A′ and suppose that B_j = ∪_{v∈J_j} D_v for some subset J_j ⊆ {1, . . . , j}, where D_j is the left neighbourhood of the set B_j for each j = 1, . . . , n. Say A_i = B_{j_i}, A′_i = B_{j′_i} and let α(x_0, x_1, . . . , x_n) := (x_{j_1}, . . . , x_{j_k}), β(x_0, x_1, . . . , x_n) := (x_{j′_1}, . . . , x_{j′_k}). We have j_i = j′_{π(i)}. Hence

α^{−1}(Γ_1 × . . . × Γ_k) = {(x_0, x_1, . . . , x_n); x_{j_1} ∈ Γ_1, . . . , x_{j_k} ∈ Γ_k}
= {(x_0, x_1, . . . , x_n); x_{j′_1} ∈ Γ_{π^{−1}(1)}, . . . , x_{j′_k} ∈ Γ_{π^{−1}(k)}}
= β^{−1}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})
and

µ_{A_1...A_k}(Γ_1 × . . . × Γ_k) = µ_{B_0B_1...B_n}(α^{−1}(Γ_1 × . . . × Γ_k))
= µ_{B_0B_1...B_n}(β^{−1}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)}))
= µ_{A′_1...A′_k}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})

for every Γ_1, . . . , Γ_k ∈ B(R).
Step 3 (Consistency Condition (C2))
Let A_1, . . . , A_k, A_{k+1} be k + 1 distinct sets in A. Let A′, A″ be the minimal finite sub-semilattices of A which contain the sets A_1, . . . , A_k, respectively A_1, . . . , A_k, A_{k+1}. Clearly A′ ⊆ A″. Using Proposition 2.1.14.(b), there exist a consistent ordering {B_0 = ∅′, B_1, . . . , B_n} of A″ and a consistent ordering {B′_0 = ∅′, B′_1, . . . , B′_m} of A′ such that, if B′_l = B_{i_l}; l = 1, . . . , m for some indices i_1 < i_2 < . . . < i_m, then ∪_{s=1}^{l} B′_s = ∪_{j=1}^{i_l} B_j for all l = 1, . . . , m. Let γ(x_0, x_1, . . . , x_n) := (x_0, x_{i_1}, . . . , x_{i_m}) and suppose for the moment that

µ_{B′_0B′_1...B′_m} = µ_{B_0B_1...B_n} ◦ γ^{−1}.   (9)
Say A_i = B_{j_i}; i = 1, . . . , k + 1 for some j_i ∈ {1, . . . , n} and define α(x_0, x_1, . . . , x_n) := (x_{j_1}, . . . , x_{j_{k+1}}). On the other hand, if we say that for each i = 1, . . . , k we have A_i = B′_{l_i} for some l_i ∈ {1, . . . , m}, then j_1 = i_{l_1}, . . . , j_k = i_{l_k}; define β(y_0, y_1, . . . , y_m) := (y_{l_1}, . . . , y_{l_k}). Then

β^{−1}(Γ_1 × . . . × Γ_k) = {(y_0, y_1, . . . , y_m); y_{l_1} ∈ Γ_1, . . . , y_{l_k} ∈ Γ_k}
γ^{−1}(β^{−1}(Γ_1 × . . . × Γ_k)) = {(x_0, x_1, . . . , x_n); x_{i_{l_1}} ∈ Γ_1, . . . , x_{i_{l_k}} ∈ Γ_k}
= {(x_0, x_1, . . . , x_n); x_{j_1} ∈ Γ_1, . . . , x_{j_k} ∈ Γ_k}
= α^{−1}(Γ_1 × . . . × Γ_k × R)
and

µ_{A_1...A_k}(Γ_1 × . . . × Γ_k) = µ_{B′_0B′_1...B′_m}(β^{−1}(Γ_1 × . . . × Γ_k))  (by definition)
= µ_{B_0B_1...B_n}(γ^{−1}(β^{−1}(Γ_1 × . . . × Γ_k)))
= µ_{B_0B_1...B_n}(α^{−1}(Γ_1 × . . . × Γ_k × R))
= µ_{A_1...A_kA_{k+1}}(Γ_1 × . . . × Γ_k × R)  (by definition)

for every Γ_1, . . . , Γ_k ∈ B(R).
In order to prove (9), let Γ_0, Γ_1, . . . , Γ_m ∈ B(R) be arbitrary. Suppose that B_j = ∪_{v∈J_j} D_v; j = 2, . . . , n for some subset J_j ⊆ {1, . . . , j}, where D_j is the left neighbourhood of the set B_j for each j = 1, . . . , n. Then
µ_{B_0B_1...B_n}(γ^{−1}(Γ_0 × Γ_1 × . . . × Γ_m)) =   (10)

∫_{R^{n+1}} I_{Γ_0×Γ_1×...×Γ_m}(γ(x_0, x_1, ∑_{j∈J_2}(x_j − x_{j−1}), . . . , ∑_{j∈J_n}(x_j − x_{j−1}))) Q_{∪_{j=1}^{n−1}B_j, ∪_{j=1}^{n}B_j}(x_{n−1}; dx_n) . . . Q_{B_1,B_1∪B_2}(x_1; dx_2) Q_{∅′B_1}(x_0; dx_1) µ(dx_0).

Note that γ(x_0, x_1, ∑_{j∈J_2}(x_j − x_{j−1}), . . . , ∑_{j∈J_n}(x_j − x_{j−1})) = (x_0, ∑_{j∈J_{i_1}}(x_j − x_{j−1}), . . . , ∑_{j∈J_{i_m}}(x_j − x_{j−1})).

On the other hand, suppose that B′_l = ∪_{w∈L_l} D′_w; l = 1, . . . , m for some subset L_l ⊆ {1, . . . , l}, where D′_l is the left neighbourhood of the set B′_l for each l = 1, . . . , m. Note that D′_l = (∪_{w=1}^{l} B′_w) \ (∪_{w=1}^{l−1} B′_w) = (∪_{j=1}^{i_l} B_j) \ (∪_{j=1}^{i_{l−1}} B_j) = ∪_{j=i_{l−1}+1}^{i_l} D_j and hence we have

∪_{v∈J_{i_l}} D_v = B_{i_l} = B′_l = ∪_{w∈L_l} D′_w = ∪_{w∈L_l} ∪_{v=i_{w−1}+1}^{i_w} D_v.

This implies that

J_{i_l} = ∪_{w∈L_l} {i_{w−1} + 1, i_{w−1} + 2, . . . , i_w}  ∀l = 1, . . . , m

and ∑_{j∈J_{i_l}}(x_j − x_{j−1}) = ∑_{w∈L_l} ∑_{j=i_{w−1}+1}^{i_w}(x_j − x_{j−1}) = ∑_{w∈L_l}(x_{i_w} − x_{i_{w−1}}). Therefore the integrand of (10) does not depend on the variables x_j, j ∉ {i_0, i_1, . . . , i_m}. By the definition of the transition system, the integral of (10) collapses to

∫_{R^{m+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_{i_1}) ∏_{l=2}^{m} I_{Γ_l}(∑_{w∈L_l}(x_{i_w} − x_{i_{w−1}})) Q_{∪_{j=1}^{i_{m−1}}B_j, ∪_{j=1}^{i_m}B_j}(x_{i_{m−1}}; dx_{i_m}) . . . Q_{∪_{j=1}^{i_1}B_j, ∪_{j=1}^{i_2}B_j}(x_{i_1}; dx_{i_2}) Q_{∅′, ∪_{j=1}^{i_1}B_j}(x_0; dx_{i_1}) µ(dx_0)

which is exactly the definition of µ_{B′_0B′_1...B′_m}(Γ_0 × Γ_1 × . . . × Γ_m) because ∪_{j=1}^{i_l} B_j = ∪_{s=1}^{l} B′_s for every l = 1, . . . , m. This concludes the proof of (9). □
Finally we will translate the preceding result in terms of flows. Let µ be an
arbitrary probability measure on R.
Definition 3.2.10 For each finite sub-semilattice A′ and for each consistent ordering ord = {A_0 = ∅′, A_1, . . . , A_n} of A′ pick one simple flow f := f_{A′,ord} which ‘connects the sets of A′ in the sense of the ordering ord’ (see Lemma 2.4.8). Let S be the collection of all flows f_{A′,ord}.
The next assumption will provide a necessary and sufficient condition which will allow us to reconstruct a transition system Q := (Q_{BB′})_{B⊆B′} from a class of transition systems {Q^f := (Q^f_{st})_{s<t}; f ∈ S}, if the indexing collection A satisfies SHAPE. It basically says that whenever we have two simple flows f, g ∈ S such that f(s) = g(u), f(t) = g(v) for some s < t, u < v, we must have Q^f_{st} = Q^g_{uv}.
Assumption 3.2.11 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A_0 = ∅′, A′_1, . . . , A′_n} are two consistent orderings of the same finite semilattice A′, such that A_1, . . . , A_k is a permutation of A′_1, . . . , A′_k, and we denote f := f_{A′,ord1}, g := f_{A′,ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A′_j, then

Q^f_{0t_1} = Q^g_{0u_1} and Q^f_{t_kt_n} = Q^g_{u_ku_n}.
The following assumption is easily seen to be equivalent to Assumption 3.2.7.
Assumption 3.2.12 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A_0 = ∅′, A′_1, . . . , A′_n} are two consistent orderings of the same finite semilattice A′ with A_i = A′_{π(i)}, where π is a permutation of {1, . . . , n} with π(1) = 1, we suppose that A_i = ∪_{j∈I_i} C_j for some subset I_i ⊆ {1, . . . , i}, where C_i is the left neighbourhood of A_i for each i = 1, . . . , n, and we denote f := f_{A′,ord1}, g := f_{A′,ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A′_j, then

∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (x_j − x_{j−1})) Q^f_{t_{n−1}t_n}(x_{n−1}; dx_n) . . . Q^f_{t_1t_2}(x_1; dx_2) Q^f_{t_0t_1}(x_0; dx_1) µ(dx_0) =   (11)
∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_i}(∑_{j∈I_i} (y_{π(j)} − y_{π(j)−1})) Q^g_{u_{n−1}u_n}(y_{n−1}; dy_n) . . . Q^g_{u_1u_2}(y_1; dy_2) Q^g_{u_0u_1}(y_0; dy_1) µ(dy_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
The following result is immediate.
Corollary 3.2.13 Suppose that the indexing collection A satisfies SHAPE.
If µ is a probability measure on R and {Q^f := (Q^f_{st})_{s<t}; f ∈ S} is a collection of transition systems which satisfies Assumption 3.2.11 and Assumption 3.2.12, then there exist a transition system Q := (Q_{BB′})_{B⊆B′} and a probability measure P on the product space (R^A, B(R)^A) under which:

1. the coordinate-variable process X := (X_A)_{A∈A} defined on this space is Q-Markov with initial distribution µ; and

2. ∀f ∈ S, X^f := (X_{f(t)})_t is Q^f-Markov.
Proof: Let B, B′ ∈ A(u) be such that B ⊆ B′. Let B = ∪_{j=1}^{m} A′_j, B′ = ∪_{l=1}^{p} A″_l be their unique extremal representations, A′ the minimal finite sub-semilattice of A which contains the sets A′_j, A″_l and ord = {A_0 = ∅′, A_1, . . . , A_n} a consistent ordering of A′ such that B = ∪_{j=1}^{k} A_j and B′ = ∪_{j=1}^{n} A_j. Denote f := f_{A′,ord} with f(t_i) = ∪_{j=1}^{i} A_j. We define Q_{BB′} := Q^f_{t_kt_n}.

The definition of Q_{BB′} does not depend on the ordering of the semilattice A′, because of Assumption 3.2.11.
The family Q := (Q_{BB′})_{B⊆B′} of all these transition probabilities is a transition system: if B, B′, B″ ∈ A(u) are such that B ⊆ B′ ⊆ B″, then there exist a finite semilattice A′ and a consistent ordering ord = {A_0 = ∅′, A_1, . . . , A_n} of A′, such that B = ∪_{j=1}^{k} A_j, B′ = ∪_{j=1}^{l} A_j, B″ = ∪_{j=1}^{m} A_j for some k ≤ l ≤ m; if we denote f := f_{A′,ord} with f(t_i) = ∪_{j=1}^{i} A_j, then

Q_{BB′} Q_{B′B″} = Q^f_{t_kt_l} Q^f_{t_lt_m} = Q^f_{t_kt_m} = Q_{BB″}

because Q^f is a one-parameter transition system.
Finally, it is not hard to see that the transition system Q satisfies Assumption 3.2.7, and the conclusion follows by Theorem 3.2.9. □
3.3 Q-Markov Processes. The General Construction
If the indexing collection A does not satisfy SHAPE, then the process that we constructed in the previous section may not have a unique additive extension to C(u); therefore this construction is not valid in general. In this section we will give the construction of a Q-Markov process in the general case, when assumption SHAPE does not necessarily hold. The construction will be made using increments (i.e. sets in C) instead of sets in A; more precisely, we will prove that there exists a probability measure P on the space (R^C, B(R)^C) under which the coordinate-variable process X := (X_C)_{C∈C} defined by X_C(x) := x_C, C ∈ C is Q-Markov. The (almost sure) additivity of this process will follow from the way we choose its finite dimensional distributions.
Let µ be a probability measure on R and Q := (Q_{BB′})_{B,B′∈A(u); B⊆B′} a fixed transition system.
We begin by giving another characterization of a Q-Markov process in terms of
its finite dimensional distributions over the sets in C.
Proposition 3.3.1 A set-indexed process X := (X_A)_{A∈A} is Q-Markov with initial distribution µ if and only if for every finite semilattice A′ and for every consistent ordering {A_0 = ∅′, A_1, . . . , A_n} of A′, if we denote with C_i the left neighbourhood of A_i for each i = 0, . . . , n, then

P(X_{C_0} ∈ Γ_0, X_{C_1} ∈ Γ_1, X_{C_2} ∈ Γ_2, . . . , X_{C_n} ∈ Γ_n) = ∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(x_i − x_{i−1}) Q_{∪_{j=1}^{n−1} A_j, ∪_{j=1}^{n} A_j}(x_{n−1}; dx_n) . . . Q_{A_1,A_1∪A_2}(x_1; dx_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0)   (12)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).

Proof: For each i = 2, . . . , n, C_i = (∪_{j=1}^{i} A_j) \ (∪_{j=1}^{i−1} A_j) and

X_{C_i} = X_{∪_{j=1}^{i} A_j} − X_{∪_{j=1}^{i−1} A_j}  a.s.
The result follows by Proposition 3.2.5.
□
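Proposition 3.3.1 also suggests a simulation recipe over a consistently ordered finite semilattice: sample x_0 from µ, then x_i from the kernel between the (i−1)-th and i-th unions, and read off the increments. A hedged sketch using the illustrative Gaussian transition system mentioned after Definition 3.2.1 (the measures of the successive unions are the only inputs; all numbers are made up):

    import numpy as np

    def simulate_increments(mu_sample, m_unions, rng):
        # m_unions[i] = m(A_1 u ... u A_i), with m_unions[0] = m(emptyset');
        # kernels are the illustrative Gaussian system N(x, m' - m).
        x = [mu_sample(rng)]                  # x_0, the value over emptyset'
        for i in range(1, len(m_unions)):
            var = m_unions[i] - m_unions[i - 1]
            x.append(x[-1] + np.sqrt(var) * rng.standard_normal())
        # X_{C_0} = x_0, X_{C_1} = x_1, X_{C_i} = x_i - x_{i-1} for i >= 2
        return [x[0], x[1]] + [x[i] - x[i - 1] for i in range(2, len(x))]

    rng = np.random.default_rng(1)
    print(simulate_increments(lambda r: 0.0, [0.0, 1.0, 1.8, 2.5], rng))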
In order to be able to construct a Q-Markov process in the general case we must
impose another consistency assumption on the transition system Q, which requires
that the distribution of the process over the left neighbourhoods C0 , C1 , . . . , Cn of a
finite sub-semilattice does not depend on the consistent ordering of the semilattice.
Assumption 3.3.2 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A_0 = ∅′, A′_1, . . . , A′_n} are two consistent orderings of the same finite semilattice A′ with A_i = A′_{π(i)} ∀i, where π is a permutation of {1, . . . , n} with π(1) = 1, then

∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(x_i − x_{i−1}) Q_{∪_{j=1}^{n−1} A_j, ∪_{j=1}^{n} A_j}(x_{n−1}; dx_n) . . . Q_{A_1,A_1∪A_2}(x_1; dx_2) Q_{∅′A_1}(x_0; dx_1) µ(dx_0) =   (13)
∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_i}(y_{π(i)} − y_{π(i)−1}) Q_{∪_{j=1}^{n−1} A′_j, ∪_{j=1}^{n} A′_j}(y_{n−1}; dy_n) . . . Q_{A′_1,A′_1∪A′_2}(y_1; dy_2) Q_{∅′A′_1}(y_0; dy_1) µ(dy_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Note: In Section 3.2, we have seen that Assumption 3.2.7 (which involves only
the finite dimensional distributions over the sets in A) is a necessary condition for
the existence of a Q-Markov process, and we have proved that, if SHAPE holds, then
this assumption is also sufficient. In this section, we will prove that Assumption 3.3.2
is also a necessary and sufficient condition for the existence of a Q-Markov process.
Therefore, in general Assumption 3.3.2 is stronger than Assumption 3.2.7, but if
SHAPE holds, then the two assumptions are equivalent.
The technical difference between these two assumptions is that they rely on the two different bijections which relate the vector (X_{A_0}, X_{A_0∪A_1}, . . . , X_{∪_{j=0}^{n} A_j}) with the vector (X_{A_0}, X_{A_1}, . . . , X_{A_n}) on one hand (for Assumption 3.2.7), respectively with the vector (X_{C_0}, X_{C_1}, . . . , X_{C_n}) on the other hand (for Assumption 3.3.2).
Proposition 3.3.3 Assumption 3.3.2 is a necessary condition for the existence of a
Q-Markov process (with initial distribution µ).
Proof: Suppose that there exists a set-indexed Q-Markov process X := (X_A)_{A∈A} (with initial distribution µ). Let Γ_0, Γ_1, . . . , Γ_n ∈ B(R) be arbitrary. By Lemma 2.1.13 we have C_i = C′_{π(i)}. Then

P(X_{C_0} ∈ Γ_0, X_{C_1} ∈ Γ_1, X_{C_2} ∈ Γ_2, . . . , X_{C_n} ∈ Γ_n) = P(X_{C′_0} ∈ Γ_0, X_{C′_1} ∈ Γ_1, X_{C′_2} ∈ Γ_{π^{−1}(2)}, . . . , X_{C′_n} ∈ Γ_{π^{−1}(n)}).
By Proposition 3.3.1, the left-hand side of the previous equation is equal to the left-hand side of equation (13); its right-hand side is equal to

∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_{π^{−1}(i)}}(y_i − y_{i−1}) Q_{∪_{j=1}^{n−1} A′_j, ∪_{j=1}^{n} A′_j}(y_{n−1}; dy_n) . . . Q_{A′_1,A′_1∪A′_2}(y_1; dy_2) Q_{∅′A′_1}(y_0; dy_1) µ(dy_0).

Finally, ∏_{i=2}^{n} I_{Γ_{π^{−1}(i)}}(y_i − y_{i−1}) = ∏_{i=2}^{n} I_{Γ_i}(y_{π(i)} − y_{π(i)−1}) and the result follows. □
The finite dimensional distributions of a Q-Markov process over the sets in C have to be defined so that they ensure the (almost sure) additivity of the process. The next result gives the definition of the finite dimensional distribution (of a Q-Markov process) over an arbitrary k-tuple of sets in C and shows that, if the transition system satisfies Assumption 3.3.2, then the definition will not depend on the extremal representations of these sets.
Lemma 3.3.4 Let Q := (Q_{BB′})_{B⊆B′} be a transition system satisfying Assumption 3.3.2. Let (C_1, . . . , C_k) be a k-tuple of distinct sets in C and suppose that each set C_i admits two extremal representations C_i = A_i \ ∪_{j=1}^{n_i} A_{ij} = A′_i \ ∪_{j=1}^{m_i} A′_{ij}. Let A′, A″ be the minimal finite sub-semilattices of A which contain the sets A_i, A_{ij}, respectively A′_i, A′_{ij}, {B_0 = ∅′, B_1, . . . , B_n}, {B′_0 = ∅′, B′_1, . . . , B′_m} two consistent orderings of A′, A″ and D_j, D′_l the left neighbourhoods of the sets B_j, B′_l for j = 1, . . . , n; l = 1, . . . , m. If each set C_i; i = 1, . . . , k can be written as C_i = ∪_{j∈J_i} D_j = ∪_{l∈L_i} D′_l for some J_i ⊆ {1, . . . , n}, L_i ⊆ {1, . . . , m}, then

∫_{R^{n+1}} ∏_{i=1}^{k} I_{Γ_i}(∑_{j∈J_i} (x_j − x_{j−1})) Q_{∪_{j=1}^{n−1} B_j, ∪_{j=1}^{n} B_j}(x_{n−1}; dx_n) . . . Q_{B_1,B_1∪B_2}(x_1; dx_2) Q_{∅′B_1}(x; dx_1) µ(dx) =   (14)
∫_{R^{m+1}} ∏_{i=1}^{k} I_{Γ_i}(∑_{l∈L_i} (y_l − y_{l−1})) Q_{∪_{l=1}^{m−1} B′_l, ∪_{l=1}^{m} B′_l}(y_{m−1}; dy_m) . . . Q_{B′_1,B′_1∪B′_2}(y_1; dy_2) Q_{∅′B′_1}(y; dy_1) µ(dy)

for every Γ_1, . . . , Γ_k ∈ B(R), with the convention x_0 = y_0 = 0.
Proof: Let Ã be the minimal finite sub-semilattice of A determined by the sets in A′ and A″; clearly A′ ⊆ Ã, A″ ⊆ Ã. Let ord1 = {E_0 = ∅′, E_1, . . . , E_N} and ord2 = {E′_0 = ∅′, E′_1, . . . , E′_N} be two consistent orderings of Ã such that, if B_j = E_{i_j}; j = 1, . . . , n and B′_l = E′_{k_l}; l = 1, . . . , m for some indices i_1 < i_2 < . . . < i_n, respectively k_1 < k_2 < . . . < k_m, then

B_1 = ∪_{p=1}^{i_1} E_p, B_1 ∪ B_2 = ∪_{p=1}^{i_2} E_p, . . . , ∪_{j=1}^{n} B_j = ∪_{p=1}^{i_n} E_p
B′_1 = ∪_{q=1}^{k_1} E′_q, B′_1 ∪ B′_2 = ∪_{q=1}^{k_2} E′_q, . . . , ∪_{l=1}^{m} B′_l = ∪_{q=1}^{k_m} E′_q.

Let π be the permutation of {1, . . . , N} such that E_p = E′_{π(p)}; p = 1, . . . , N.
Denote by H_p, H′_q the left neighbourhoods of E_p, E′_q with respect to the orderings ord1, ord2; clearly H_p = H′_{π(p)}, ∀p = 1, . . . , N. Note that D_j = (∪_{v=1}^{j} B_v) \ (∪_{v=1}^{j−1} B_v) = (∪_{p=1}^{i_j} E_p) \ (∪_{p=1}^{i_{j−1}} E_p) = ∪_{p=i_{j−1}+1}^{i_j} H_p; j = 1, . . . , n and similarly D′_l = ∪_{q=k_{l−1}+1}^{k_l} H′_q; l = 1, . . . , m. Hence

C_i = ∪_{j∈J_i} D_j = ∪_{j∈J_i} ∪_{p=i_{j−1}+1}^{i_j} H_p = ∪_{j∈J_i} ∪_{p=i_{j−1}+1}^{i_j} H′_{π(p)}
= ∪_{l∈L_i} D′_l = ∪_{l∈L_i} ∪_{q=k_{l−1}+1}^{k_l} H′_q

and we can conclude that

{π(p); p ∈ ∪_{j∈J_i} {i_{j−1} + 1, i_{j−1} + 2, . . . , i_j}} = ∪_{l∈L_i} {k_{l−1} + 1, k_{l−1} + 2, . . . , k_l}.
Assumption 3.3.2 combined with the previous relationship implies that

∫_{R^{N+1}} ∏_{i=1}^{k} I_{Γ_i}(∑_{j∈J_i} ∑_{p=i_{j−1}+1}^{i_j} (x_p − x_{p−1})) Q_{∪_{p=1}^{N−1} E_p, ∪_{p=1}^{N} E_p}(x_{N−1}; dx_N) . . . Q_{E_1,E_1∪E_2}(x_1; dx_2) Q_{∅′E_1}(x; dx_1) µ(dx) =
∫_{R^{N+1}} ∏_{i=1}^{k} I_{Γ_i}(∑_{j∈J_i} ∑_{p=i_{j−1}+1}^{i_j} (y_{π(p)} − y_{π(p)−1})) Q_{∪_{q=1}^{N−1} E′_q, ∪_{q=1}^{N} E′_q}(y_{N−1}; dy_N) . . . Q_{E′_1,E′_1∪E′_2}(y_1; dy_2) Q_{∅′E′_1}(y; dy_1) µ(dy) =
∫_{R^{N+1}} ∏_{i=1}^{k} I_{Γ_i}(∑_{l∈L_i} ∑_{q=k_{l−1}+1}^{k_l} (y_q − y_{q−1})) Q_{∪_{q=1}^{N−1} E′_q, ∪_{q=1}^{N} E′_q}(y_{N−1}; dy_N) . . . Q_{E′_1,E′_1∪E′_2}(y_1; dy_2) Q_{∅′E′_1}(y; dy_1) µ(dy)
with the convention x_0 = y_0 = 0. This gives us the desired relationship, because ∑_{p=i_{j−1}+1}^{i_j} (x_p − x_{p−1}) = x_{i_j} − x_{i_{j−1}}, the left-hand side integral collapses to an integral with respect to Q_{∪_{j=1}^{n−1} B_j, ∪_{j=1}^{n} B_j}(x_{i_{n−1}}; dx_{i_n}) . . . Q_{B_1,B_1∪B_2}(x_{i_1}; dx_{i_2}) Q_{∅′B_1}(x; dx_{i_1}) µ(dx), and a similar phenomenon happens in the right-hand side. □
We are now ready to prove the main result of this section which says that Assumption 3.3.2 is also sufficient to construct a Q-Markov process.
Theorem 3.3.5 If µ is a probability measure on R and Q := (Q_{BB′})_{B⊆B′} is a transition system which satisfies Assumption 3.3.2, then there exists a probability measure P on the product space (R^C, B(R)^C) under which the coordinate-variable process X := (X_C)_{C∈C} defined on this space is Q-Markov with initial distribution µ.
Proof: For each k-tuple (C_1, . . . , C_k) of distinct sets in C we will define a probability measure µ_{C_1...C_k} on (R^k, B(R)^k) such that the system of all these probability measures satisfies the following two consistency conditions:
(C1) If (C′_1 . . . C′_k) is another ordering of the k-tuple (C_1 . . . C_k), say C_i = C′_{π(i)}, where π is a permutation of {1, . . . , k}, then

µ_{C_1...C_k}(Γ_1 × . . . × Γ_k) = µ_{C′_1...C′_k}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})

for every Γ_1, . . . , Γ_k ∈ B(R).

(C2) If C_1, . . . , C_k, C_{k+1} are k + 1 distinct sets in C, then

µ_{C_1...C_k}(Γ_1 × . . . × Γ_k) = µ_{C_1...C_kC_{k+1}}(Γ_1 × . . . × Γ_k × R)

for every Γ_1, . . . , Γ_k ∈ B(R).
By Kolmogorov's extension theorem (Theorem 36.1, [10]), there exists a probability measure P on (R^C, B(R)^C) under which the coordinate-variable process X := (X_C)_{C∈C} defined by X_C(x) := x_C has the measures µ_{C_1...C_k} as its finite dimensional distributions. We will prove that the process X has an (almost surely) unique additive extension to C(u). The Q-Markov property of this process will follow from the way we choose its finite dimensional distributions µ_{C_0C_1...C_n} over the left neighbourhoods of a finite sub-semilattice, according to Proposition 3.3.1.
Step 1 (Construction of the finite dimensional distributions)
Let (C_1, . . . , C_k) be a k-tuple of distinct sets in C and C_i = A_i \ ∪_{j=1}^{n_i} A_{ij}; i = 1, . . . , k some extremal representations. Let A′ be the minimal finite sub-semilattice of A which contains the sets A_i, A_{ij}, {B_0 = ∅′, B_1, . . . , B_n} a consistent ordering of A′ and D_j the left neighbourhood of the set B_j for j = 1, . . . , n. Define

µ_{D_0D_1...D_n}(Γ_0 × Γ_1 × . . . × Γ_n) := ∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{j=2}^{n} I_{Γ_j}(x_j − x_{j−1}) Q_{∪_{j=1}^{n−1} B_j, ∪_{j=1}^{n} B_j}(x_{n−1}; dx_n) . . . Q_{B_1,B_1∪B_2}(x_1; dx_2) Q_{∅′B_1}(x_0; dx_1) µ(dx_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
Say C_i = ∪_{j∈J_i} D_j for some J_i ⊆ {1, . . . , n}; i = 1, . . . , k, let α : R^{n+1} → R^k, α(x_0, x_1, . . . , x_n) := (∑_{j∈J_1} x_j, . . . , ∑_{j∈J_k} x_j) and define

µ_{C_1...C_k} := µ_{D_0D_1...D_n} ◦ α^{−1}.

The fact that µ_{C_1...C_k} does not depend on the ordering of the semilattice A′ is a consequence of Assumption 3.3.2. The fact that µ_{C_1...C_k} does not depend on the extremal representations of the sets C_i is also a consequence of Assumption 3.3.2, using Lemma 3.3.4. Finally, we observe that the finite dimensional distributions are additive.
Step 2 (Consistency condition (C1))
Let (C′_1 . . . C′_k) be another ordering of the k-tuple (C_1 . . . C_k), with C_i = C′_{π(i)}, π being a permutation of {1, . . . , k}. Let C_i = A_i \ ∪_{j=1}^{n_i} A_{ij}; i = 1, . . . , k be extremal representations, A′ the minimal finite sub-semilattice of A which contains the sets A_i, A_{ij}, {B_0 = ∅′, B_1, . . . , B_n} a consistent ordering of A′ and D_j the left neighbourhood of the set B_j for each j = 1, . . . , n. Say C_i = ∪_{j∈J_i} D_j, C′_i = ∪_{j∈J′_i} D_j and
let α(x_0, x_1, . . . , x_n) := (∑_{j∈J_1} x_j, . . . , ∑_{j∈J_k} x_j), β(x_0, x_1, . . . , x_n) := (∑_{j∈J′_1} x_j, . . . , ∑_{j∈J′_k} x_j). We have J_i = J′_{π(i)}. Hence

α^{−1}(Γ_1 × . . . × Γ_k) = {(x_0, x_1, . . . , x_n); ∑_{j∈J_1} x_j ∈ Γ_1, . . . , ∑_{j∈J_k} x_j ∈ Γ_k}
= {(x_0, x_1, . . . , x_n); ∑_{j∈J′_1} x_j ∈ Γ_{π^{−1}(1)}, . . . , ∑_{j∈J′_k} x_j ∈ Γ_{π^{−1}(k)}}
= β^{−1}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})

and

µ_{C_1...C_k}(Γ_1 × . . . × Γ_k) = µ_{D_0D_1...D_n}(α^{−1}(Γ_1 × . . . × Γ_k))
= µ_{D_0D_1...D_n}(β^{−1}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)}))
= µ_{C′_1...C′_k}(Γ_{π^{−1}(1)} × . . . × Γ_{π^{−1}(k)})

for every Γ_1, . . . , Γ_k ∈ B(R).
Step 3 (Consistency Condition (C2))
Let C_1, . . . , C_k, C_{k+1} be k + 1 distinct sets in C and C_i = A_i \ ∪_{j=1}^{n_i} A_{ij}; i = 1, . . . , k + 1 some extremal representations. Let A′, A″ be the minimal finite sub-semilattices of A which contain the sets A_i, A_{ij}; i = 1, . . . , k; j = 1, . . . , n_i, respectively A_i, A_{ij}; i = 1, . . . , k + 1; j = 1, . . . , n_i. Clearly A′ ⊆ A″. Using Proposition 2.1.14.(b), there exist a consistent ordering {B_0 = ∅′, B_1, . . . , B_n} of A″ and a consistent ordering {B′_0 = ∅′, B′_1, . . . , B′_m} of A′ such that, if B′_l = B_{i_l}; l = 1, . . . , m for some indices i_1 < i_2 < . . . < i_m, then ∪_{s=1}^{l} B′_s = ∪_{j=1}^{i_l} B_j for all l = 1, . . . , m. For each j = 1, . . . , n; l = 1, . . . , m, let D_j, D′_l be the left neighbourhoods of B_j in A″, respectively of B′_l in A′, and recall that D′_l = ∪_{j=i_{l−1}+1}^{i_l} D_j for each l = 1, . . . , m. Let γ(x_0, x_1, . . . , x_n) := (x_0, ∑_{j=1}^{i_1} x_j, ∑_{j=i_1+1}^{i_2} x_j, . . . , ∑_{j=i_{m−1}+1}^{i_m} x_j) and for the moment suppose that

µ_{D′_0D′_1...D′_m} = µ_{D_0D_1...D_n} ◦ γ^{−1}.   (15)
Say C_i = ∪_{j∈J_i} D_j; i = 1, . . . , k + 1 for some J_i ⊆ {1, . . . , n} and define α(x_0, x_1, . . . , x_n) := (∑_{j∈J_1} x_j, . . . , ∑_{j∈J_{k+1}} x_j). On the other hand, if we say that for each i = 1, . . . , k we have C_i = ∪_{l∈L_i} D′_l for some L_i ⊆ {1, . . . , m}, then J_i = ∪_{l∈L_i} {i_{l−1} + 1, i_{l−1} + 2, . . . , i_l}; define β(y_0, y_1, . . . , y_m) := (∑_{l∈L_1} y_l, . . . , ∑_{l∈L_k} y_l). Then
β^{−1}(Γ_1 × . . . × Γ_k) = {(y_0, y_1, . . . , y_m); ∑_{l∈L_1} y_l ∈ Γ_1, . . . , ∑_{l∈L_k} y_l ∈ Γ_k}
γ^{−1}(β^{−1}(Γ_1 × . . . × Γ_k)) = {(x_0, x_1, . . . , x_n); ∑_{l∈L_i} ∑_{j=i_{l−1}+1}^{i_l} x_j ∈ Γ_i ∀i ≤ k}
= {(x_0, x_1, . . . , x_n); ∑_{j∈J_1} x_j ∈ Γ_1, . . . , ∑_{j∈J_k} x_j ∈ Γ_k}
= α^{−1}(Γ_1 × . . . × Γ_k × R)

and

µ_{C_1...C_k}(Γ_1 × . . . × Γ_k) = µ_{D′_0D′_1...D′_m}(β^{−1}(Γ_1 × . . . × Γ_k))  (by definition)
= µ_{D_0D_1...D_n}(γ^{−1}(β^{−1}(Γ_1 × . . . × Γ_k)))
= µ_{D_0D_1...D_n}(α^{−1}(Γ_1 × . . . × Γ_k × R))
= µ_{C_1...C_kC_{k+1}}(Γ_1 × . . . × Γ_k × R)  (by definition)

for every Γ_1, . . . , Γ_k ∈ B(R).
In order to prove (15), let Γ_0, Γ_1, . . . , Γ_m ∈ B(R) be arbitrary. Then

µ_{D_0D_1...D_n}(γ^{−1}(Γ_0 × Γ_1 × . . . × Γ_m)) =
∫_{R^{n+1}} I_{Γ_0×Γ_1×...×Γ_m}(γ(x_0, x_1, x_2 − x_1, . . . , x_n − x_{n−1})) Q_{∪_{j=1}^{n−1} B_j, ∪_{j=1}^{n} B_j}(x_{n−1}; dx_n) . . . Q_{B_1,B_1∪B_2}(x_1; dx_2) Q_{∅′B_1}(x_0; dx_1) µ(dx_0).

Note that γ(x_0, x_1, x_2 − x_1, . . . , x_n − x_{n−1}) = (x_0, x_{i_1}, x_{i_2} − x_{i_1}, . . . , x_{i_m} − x_{i_{m−1}}) and hence the integrand does not depend on the variables x_j, j ∉ {i_0, i_1, . . . , i_m}. By the definition of the transition system, the above integral collapses to

∫_{R^{m+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_{i_1}) ∏_{l=2}^{m} I_{Γ_l}(x_{i_l} − x_{i_{l−1}}) Q_{∪_{j=1}^{i_{m−1}} B_j, ∪_{j=1}^{i_m} B_j}(x_{i_{m−1}}; dx_{i_m}) . . . Q_{∪_{j=1}^{i_1} B_j, ∪_{j=1}^{i_2} B_j}(x_{i_1}; dx_{i_2}) Q_{∅′, ∪_{j=1}^{i_1} B_j}(x_0; dx_{i_1}) µ(dx_0)

which is exactly the definition of µ_{D′_0D′_1...D′_m}(Γ_0 × Γ_1 × . . . × Γ_m) because ∪_{j=1}^{i_l} B_j = ∪_{s=1}^{l} B′_s for every l = 1, . . . , m. This concludes the proof of (15).
Step 4 (Almost Sure Additivity of the Canonical Process)
We will show that the canonical process X on the space (R^C, B(R)^C) has an (almost surely) unique additive extension to C(u) (with respect to the probability measure P given by Kolmogorov's extension theorem). Let C, C_1, . . . , C_k ∈ C be such that C = ∪_{i=1}^{k} C_i, and suppose that C_i = A_i \ ∪_{j=1}^{n_i} A_{ij}; i = 1, . . . , k are extremal representations. Let A′ be the minimal finite sub-semilattice of A which contains the sets A_i, A_{ij}, {B_0 = ∅′, B_1, . . . , B_n} a consistent ordering of A′ and D_j the left neighbourhood of B_j. Assume that each C_i = ∪_{j∈J_i} D_j for some J_i ⊆ {1, . . . , n}. Because the finite dimensional distributions of X were chosen in an additive way, we have X_{C_i} = ∑_{j∈J_i} X_{D_j} a.s., X_{C_{i_1}∩C_{i_2}} = ∑_{j∈J_{i_1}∩J_{i_2}} X_{D_j} a.s., . . . , X_{∩_{i=1}^{k} C_i} = ∑_{j∈∩_{i=1}^{k} J_i} X_{D_j} a.s., and X_C = ∑_{j∈∪_{i=1}^{k} J_i} X_{D_j} a.s. Hence

∑_{i=1}^{k} X_{C_i} − ∑_{i_1<i_2} X_{C_{i_1}∩C_{i_2}} + . . . + (−1)^{k+1} X_{∩_{i=1}^{k} C_i} = ∑_{j∈∪_{i=1}^{k} J_i} X_{D_j} = X_C  a.s. □
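The last display is the inclusion-exclusion identity applied to the disjoint left neighbourhoods D_j; since each X_{C_i} is a sum of X_{D_j} over j ∈ J_i, the identity reduces to one about the index sets J_i. A small sanity-check sketch with illustrative index sets, using cardinality as a stand-in for the (additive) process:

    from itertools import combinations

    # Inclusion-exclusion over the index sets J_i: the alternating sum of
    # the cardinalities of the intersections equals the cardinality of the
    # union, exactly as in the display above with |.| in place of X.
    J = [{1, 2, 3}, {3, 4}, {2, 4, 5}]
    total = 0
    for r in range(1, len(J) + 1):
        for S in combinations(range(len(J)), r):
            inter = set.intersection(*(J[i] for i in S))
            total += (-1) ** (r + 1) * len(inter)
    assert total == len(set.union(*J))
    print(total)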
Finally we will translate the preceding result in terms of flows. Let S be a collection of simple flows as in Definition 3.2.10.
The next assumption will provide a necessary and sufficient condition which will allow us to reconstruct a transition system Q := (Q_{BB′})_{B⊆B′} from a class of transition systems {Q^f := (Q^f_{st})_{s<t}; f ∈ S}. It requires that whenever we have two simple flows f, g ∈ S such that f(s) = g(u), f(t) = g(v) for some s < t, u < v, we must have Q^f_{st} = Q^g_{uv}.
Assumption 3.3.6 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A_0 = ∅′, A′_1, . . . , A′_m} are two consistent orderings of some finite semilattices A′, A″ such that ∪_{j=1}^{n} A_j = ∪_{j=1}^{m} A′_j, ∪_{j=1}^{k} A_j = ∪_{j=1}^{l} A′_j for some k < n, l < m, and we denote f := f_{A′,ord1}, g := f_{A″,ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A′_j, then

Q^f_{0t_1} = Q^g_{0u_1} and Q^f_{t_kt_n} = Q^g_{u_lu_m}.
The following assumption is easily seen to be equivalent to Assumption 3.3.2.
In general, this assumption is stronger than the former Assumption 3.2.12 (which
involves only the finite dimensional distributions over the sets in A), but if SHAPE
holds, then the two assumptions are equivalent.
Assumption 3.3.7 If ord1 = {A_0 = ∅′, A_1, . . . , A_n} and ord2 = {A_0 = ∅′, A′_1, . . . , A′_n} are two consistent orderings of the same finite semilattice A′ with A_i = A′_{π(i)} ∀i, where π is a permutation of {1, . . . , n} with π(1) = 1, and we denote f := f_{A′,ord1}, g := f_{A′,ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A′_j, then

∫_{R^{n+1}} I_{Γ_0}(x_0) I_{Γ_1}(x_1) ∏_{i=2}^{n} I_{Γ_i}(x_i − x_{i−1}) Q^f_{t_{n−1}t_n}(x_{n−1}; dx_n) . . . Q^f_{t_1t_2}(x_1; dx_2) Q^f_{0t_1}(x_0; dx_1) µ(dx_0) =   (16)
∫_{R^{n+1}} I_{Γ_0}(y_0) I_{Γ_1}(y_1) ∏_{i=2}^{n} I_{Γ_i}(y_{π(i)} − y_{π(i)−1}) Q^g_{u_{n−1}u_n}(y_{n−1}; dy_n) . . . Q^g_{u_1u_2}(y_1; dy_2) Q^g_{0u_1}(y_0; dy_1) µ(dy_0)

for every Γ_0, Γ_1, . . . , Γ_n ∈ B(R).
The following result is immediate.
Corollary 3.3.8 If µ is a probability measure on R and {Q^f := (Q^f_{st})_{s<t}; f ∈ S} is a collection of transition systems which satisfies Assumption 3.3.6 and Assumption 3.3.7, then there exist a transition system Q := (Q_{BB′})_{B⊆B′} and a probability measure P on the product space (R^C, B(R)^C) under which:

1. the coordinate-variable process X := (X_C)_{C∈C} defined on this space is Q-Markov with initial distribution µ; and

2. ∀f ∈ S, X^f := (X_{f(t)})_t is Q^f-Markov.
Proof: Let B, B′ ∈ A(u) be such that B ⊆ B′. Let B = ∪_{j=1}^{m} A′_j, B′ = ∪_{l=1}^{p} A″_l be some extremal representations, A′ the minimal finite sub-semilattice of A which contains the sets A′_j, A″_l and ord = {A_0 = ∅′, A_1, . . . , A_n} a consistent ordering of A′ such that B = ∪_{j=1}^{k} A_j and B′ = ∪_{j=1}^{n} A_j. Denote f := f_{A′,ord} with f(t_i) = ∪_{j=1}^{i} A_j. We define Q_{BB′} := Q^f_{t_kt_n}.

The definition of Q_{BB′} does not depend on the extremal representations of B, B′, because of Assumption 3.3.6. Using the same argument as in the proof of Corollary 3.2.13, it follows that the family Q := (Q_{BB′})_{B⊆B′} is a transition system.
Finally, it is not hard to see that the transition system Q satisfies Assumption 3.3.2, and the conclusion follows by Theorem 3.3.5. □
3.4 Transition Semigroups
Let B(R) be the Banach space of all bounded measurable functions h : R → R, with the supremum norm ‖h‖ := sup_{x∈R} |h(x)|. Let L be a fixed closed subspace of B(R) and I the identity operator on L.

We recall that a bounded linear operator on L is a linear function T : L → L which has the property that ‖T‖ := sup_{h∈L, h≠0} ‖Th‖/‖h‖ < ∞. One example of a bounded linear operator on the whole space B(R) is the operator associated to a transition probability Q(x; dy) on R, defined by

(Th)(x) := ∫_R h(y) Q(x; dy),  x ∈ R.

A family T := (T_{st})_{s,t∈[0,a]; s<t} of bounded linear operators on L is called a (one-parameter) semigroup on L if T_{ss} = I ∀s and T_{st}T_{tu} = T_{su} ∀s < t < u. Clearly, if Q := (Q_{st})_{s<t} is a transition system on R and we denote with T_{st} the operator associated to Q_{st}, then the family (T_{st})_{s<t} will be an example of a semigroup on the whole space B(R).
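On a finite state space the operator associated to a kernel is matrix-vector multiplication applied to the vector of values of h, and the semigroup identity mirrors the Chapman-Kolmogorov relation. A minimal sketch (all matrices illustrative):

    import numpy as np

    # Operator associated to a transition probability on a finite state space:
    # (T h)(x) = sum_y h(y) Q(x; y), i.e. the matrix-vector product Q @ h.
    Q_st = np.array([[0.9, 0.1], [0.3, 0.7]])
    Q_tu = np.array([[0.5, 0.5], [0.2, 0.8]])
    h = np.array([1.0, -2.0])                 # a bounded function on {0, 1}

    Q_su = Q_st @ Q_tu                        # Chapman-Kolmogorov composition
    # Semigroup identity T_st T_tu = T_su, applied to h:
    assert np.allclose(Q_st @ (Q_tu @ h), Q_su @ h)
    print(Q_su @ h)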
Given a semigroup T on L, a classical Markov process (X_t)_{t∈[0,a]} with respect to a filtration (F_s)_s is called T-Markov with respect to the filtration (F_s)_s if ∀s < t

E[h(X_t) | F_s] = (T_{st}h)(X_s)  ∀h ∈ L.
This notion clearly generalizes the notion of Q-Markov process on R+ .
In this section we will take the same approach as above but for set-indexed processes. More precisely, we will introduce a more general class of set-Markov processes, called T-Markov, and we will investigate whether this class is indeed non-empty. In other words, we will give an answer to the following question: what are
the conditions that have to be imposed on the semigroup T which will ensure the
existence of a set-indexed T -Markov process? We will discover that the existence of
a T -Markov process forces the semigroup T to be almost surely the one associated to
a transition system, and therefore we will be in the position to invoke the results of
Sections 3.2 and 3.3.
We begin with a definition.
Definition 3.4.1 (a) For each B, B′ ∈ A(u) with B ⊆ B′ let T_{BB′} be a bounded linear operator on L. The family T := (T_{BB′})_{B,B′∈A(u); B⊆B′} is called a semigroup on L if ∀B ∈ A(u), T_{BB} = I and ∀B, B′, B″ ∈ A(u), B ⊆ B′ ⊆ B″

T_{BB′}T_{B′B″} = T_{BB″}.

(b) Let T := (T_{BB′})_{B,B′∈A(u); B⊆B′} be a semigroup on L. A set-Markov process X := (X_A)_{A∈A} with respect to a filtration (F_A)_{A∈A} is called T-Markov with respect to the filtration (F_A)_{A∈A} if ∀B, B′ ∈ A(u), B ⊆ B′

E[h(X_{B′}) | F_B] = (T_{BB′}h)(X_B)  ∀h ∈ L.

The process X is called simply T-Markov if it is T-Markov with respect to its minimal filtration.
In what follows we will obtain the finite-dimensional distribution of a T-Markov process X over an arbitrary finite semilattice A′ = {A_0 = ∅′, A_1, . . . , A_n} ordered consistently. Because of the additivity property of the process, it is enough to specify its finite-dimensional distribution over the collection of the first i-th unions of sets in A′, i = 1, . . . , n. This distribution will be obtained inductively. Assume that the initial distribution of the process is µ. At the first step, we get: ∀h_0 ∈ B(R)
E[h_0(X_{A_0})] = ∫_R h_0 dµ.
At the second step, we get: ∀h_0 ∈ B(R), ∀h_1 ∈ L

E[h_0(X_{A_0}) h_1(X_{A_0∪A_1})] = E[h_0(X_{∅′}) E[h_1(X_{A_1}) | F_{∅′}]] = E[h_0(X_{∅′}) (T_{∅′A_1}h_1)(X_{∅′})] = ∫_R h_0 · T_{∅′A_1}h_1 dµ.
In general, we get: ∀h_0 ∈ B(R), ∀h_1, . . . , h_n ∈ L

E[h_0(X_{A_0}) h_1(X_{A_0∪A_1}) . . . h_n(X_{∪_{j=0}^{n} A_j})] = ∫_R h_0 · T_{∅′A_1}(h_1 · T_{A_1,A_1∪A_2}(· · · (h_{n−1} · T_{∪_{j=0}^{n−1} A_j, ∪_{j=0}^{n} A_j} h_n) · · ·)) dµ.
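On a finite state space this nested formula is a right-to-left fold: starting from h_n, repeatedly apply the next operator and multiply pointwise by the next h_i, and finally integrate against µ. A hedged sketch with illustrative two-state data:

    import numpy as np

    # Right-to-left evaluation of E[h0(X_{A0}) h1(X_{A0 u A1}) ... ] on a
    # finite state space: g <- h_i * (T_i g), then integrate h0 * g vs. mu.
    mu = np.array([0.5, 0.5])
    T_ops = [np.array([[0.9, 0.1], [0.2, 0.8]]),     # T_{emptyset', A1}
             np.array([[0.7, 0.3], [0.4, 0.6]])]     # T_{A1, A1 u A2}
    h = [np.array([1.0, 1.0]),                       # h0
         np.array([0.5, 2.0]),                       # h1
         np.array([1.0, -1.0])]                      # h2

    g = h[-1]
    for i in range(len(T_ops) - 1, 0, -1):           # innermost operators first
        g = h[i] * (T_ops[i] @ g)
    g = T_ops[0] @ g
    print(mu @ (h[0] * g))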
The next result says that, if the subspace L is large enough, then the transition semigroup T and the initial distribution µ determine uniquely a T-Markov process. We say that a closed subspace L of B(R) is separating if for any probability measures P, Q on R

∫_R h dP = ∫_R h dQ  ∀h ∈ L  ⇒  P = Q.
An example of a separating subspace of the space B(R) is the space C0 (R) of all
continuous functions h : R → R which vanish at infinity.
Proposition 3.4.2 Assume that L is a separating subspace of B(R). Let T := (T_{BB′})_{B⊆B′} be a transition semigroup on L and µ a probability measure on R. Any two T-Markov processes with initial distribution µ have the same finite dimensional distributions.
Proof: Let X := (X_A)_{A∈A} and Y := (Y_A)_{A∈A} be two T-Markov processes with initial distribution µ. It is enough to prove that they have the same finite-dimensional distributions over any finite semilattice, or even over the collection of the first i-th unions of the sets of any finite semilattice (by the additivity property of the two processes). Let A′ = {A_0 = ∅′, A_1, . . . , A_n} be a finite semilattice ordered consistently. According to the previous formula we have

E[h_0(X_{A_0}) h_1(X_{A_0∪A_1}) . . . h_n(X_{∪_{j=0}^{n} A_j})] = E[h_0(Y_{A_0}) h_1(Y_{A_0∪A_1}) . . . h_n(Y_{∪_{j=0}^{n} A_j})]

for any h_0 ∈ B(R), h_1, . . . , h_n ∈ L. Since L is separating, this implies that the distribution of (X_{A_0}, X_{A_0∪A_1}, . . . , X_{∪_{j=0}^{n} A_j}) coincides with the distribution of (Y_{A_0}, Y_{A_0∪A_1}, . . . , Y_{∪_{j=0}^{n} A_j}), by Proposition III.4.6, [30]. □
The following result gives us the desired correspondence in terms of flows.
Proposition 3.4.3 A set-indexed process X := (X_A)_{A∈A} is T-Markov with respect to a filtration (F_A)_{A∈A} if and only if for every simple flow f : [0, a] → A(u) the process X^f := (X_{f(t)})_{t∈[0,a]} is T^f-Markov with respect to the filtration (F_{f(t)})_{t∈[0,a]}, where T^f_{st} := T_{f(s),f(t)}. (For the necessity part we can consider any flow, not only the simple ones.)
Note that if Q := (Q_{BB′})_{B⊆B′} is a transition system on R and T_{BB′} is the operator associated to Q_{BB′}, then the family T := (T_{BB′})_{B⊆B′} of these operators is a semigroup on the whole space B(R). It is clear that any Q-Markov process will be T-Markov and vice-versa.
The next result says that in fact any transition semigroup has to be almost surely
of this form, if there exists a T -Markov process associated to it.
Proposition 3.4.4 Let T := (T_{BB′})_{B⊆B′} be a semigroup on a closed subspace L and assume that there exists a T-Markov process X := (X_A)_{A∈A} associated to it. Then each operator T_{BB′} is ‘almost surely’ the restriction to L of the operator on B(R) associated to a transition probability Q_{BB′}, in the sense that ∀h ∈ L

(T_{BB′}h)(x) = ∫_R h(y) Q_{BB′}(x; dy)  P ◦ X_B^{−1}-a.s. (x).

Proof: For each B, B′ ∈ A(u), B ⊆ B′ let Q_{BB′}(x; dy) be a version of the conditional distribution of X_{B′} given X_B. By the definition of the T-Markov property, for each h ∈ L we have

(T_{BB′}h)(x) = E[h(X_{B′}) | X_B = x] = ∫_R h(y) Q_{BB′}(x; dy)  P ◦ X_B^{−1}-a.s. (x). □
The preceding proposition gives us a necessary condition that has to be satisfied by a transition semigroup T in order to ensure the existence of a T-Markov process. This condition clearly cannot be sufficient since it involves an ‘almost sure’ statement, which refers in fact to the probability distribution of X_B, which is not constructed yet. To be able to construct a T-Markov process we have in fact to assume that each T_{BB′} is the operator associated to a transition probability on R. The next result (which is an immediate consequence of Theorem 3.3.5) gives sufficient conditions for the existence of a T-Markov process.
Theorem 3.4.5 Let µ be a probability measure on R and T := (T_{BB′})_{B⊆B′} a semigroup on B(R) such that each T_{BB′} is the operator associated to a transition probability Q_{BB′} on R.

If the transition system Q := (Q_{BB′})_{B⊆B′} satisfies Assumption 3.3.2, then there exists a probability measure P on the product space (R^C, B(R)^C) under which the coordinate-variable process X := (X_C)_{C∈C} defined on this space is T-Markov with initial distribution µ.
For the sake of completeness we will also include the corresponding result in terms
of flows. Let S be a collection of simple flows as in Definition 3.2.10.
Corollary 3.4.6 Let µ be a probability measure on R and {T^f := (T^f_{st})_{s,t}; f ∈ S} a collection of semigroups on B(R) such that each T^f_{st} is the operator associated to a transition probability Q^f_{st} on R.

If the collection {Q^f := (Q^f_{st})_{s<t}; f ∈ S} of transition systems satisfies Assumption 3.3.6 and Assumption 3.3.7, then there exist a semigroup T := (T_{BB′})_{B⊆B′} on B(R) and a probability measure P on the product space (R^C, B(R)^C) under which:

1. the coordinate-variable process X := (X_C)_{C∈C} defined on this space is T-Markov with initial distribution µ; and

2. ∀f ∈ S, X^f := (X_{f(t)})_t is T^f-Markov.
Chapter 4
The Generator
In this chapter we will make use of flows to introduce the generator of a set-indexed Q-Markov process. Corollary 3.3.8 allows us to characterize a set-indexed Q-Markov process by a class {Q^f := (Q^f_{st})_{s<t}; f ∈ S} of transition systems, where S is a collection of simple flows as in Definition 3.2.10. These transition systems in turn are characterized by their corresponding generators. This observation permits us to define a generator for a set-indexed Q-Markov process as a class of generators indexed by the suitable class S of flows. There are two main results in this chapter. The first is that the generator defined in this way allows us to reconstruct the finite dimensional distributions of the process. The second gives a sufficient condition for an arbitrary class of generators to be the generator of a set-indexed Q-Markov process.
4.1 The Definition of the Generator
In this section we will introduce the formal definition of ‘the generator’ of a set-indexed Q-Markov process. Let S be a collection of simple flows as in Definition 3.2.10.

Let B(R) be the Banach space of all bounded measurable functions h : R → R, with the supremum norm ‖h‖ := sup_{x∈R} |h(x)|.
Let Q := (Q_{BB′})_{B⊆B′} be a transition system and X := (X_A)_{A∈A} a Q-Markov process (with respect to a filtration (F_A)_{A∈A}) with initial distribution µ.

For each flow f ∈ S, the process X^f is Q^f-Markov (with respect to the filtration (F_{f(t)})_t); let T^f := (T^f_{st})_{s<t} be its corresponding semigroup.
The backward generator of the process X^f at time s is a linear operator G^f_s on the Banach space B(R), defined by

G^f_s h := lim_{ε↘0} (T^f_{s−ε,s}h − h)/ε = −(∂^−/∂r) T^f_{rs}h |_{r=s}

whose domain is the subspace D(G^f_s) of those functions h for which the limit exists in the supremum norm topology. It is not difficult to verify that Kolmogorov's backward equation holds:

−(∂^−/∂s) T^f_{st}h = G^f_s T^f_{st}h,  if T^f_{st}h ∈ D(G^f_s).
If the function T^f_{st}h is strongly continuously differentiable in s, then Kolmogorov's backward equation has the following integral form:

T^f_{st}h − h = ∫_s^t G^f_u T^f_{ut}h du,  if T^f_{ut}h ∈ D(G^f_u) ∀u ∈ [s, t].
The forward generator of the process X^f at time s is defined by

G^{*f}_s h := lim_{ε↘0} (T^f_{s,s+ε}h − h)/ε = (∂^+/∂t) T^f_{st}h |_{t=s}

whose domain is the subspace D(G^{*f}_s) of those functions h for which the limit exists in the supremum norm topology. Kolmogorov's forward equation is:

(∂^+/∂t) T^f_{st}h = T^f_{st} G^{*f}_t h,  if h ∈ D(G^{*f}_t).
If the function T^f_{st}h is strongly continuously differentiable in t, then Kolmogorov's forward equation has the following integral form:

T^f_{st}h − h = ∫_s^t T^f_{su} G^{*f}_u h du,  if h ∈ D(G^{*f}_u) ∀u ∈ [s, t].
We will make the usual assumption that all the domains D(G^f_s) and D(G^{*f}_s) have a common subspace D, which is dense in B(R), such that for every s < t, T^f_{st}(D) ⊆ D and for every h ∈ D, the function T^f_{st}h is strongly continuously differentiable with respect to s and t with

−(∂^−/∂r) T^f_{rs}h |_{r=s} = (∂^+/∂t) T^f_{st}h |_{t=s}  ∀s.
The operator G^f_s = G^{*f}_s defined on D is called the generator of the process X^f at time s. A consequence of Kolmogorov's equations is that

−(∂/∂s) T^f_{st}h = (∂/∂t) T^f_{st}h = T^f_{st} G^f_t h = G^f_s T^f_{st}h  ∀h ∈ D.

Hence

T^f_{st}h − h = ∫_s^t (∂/∂v) T^f_{sv}h dv = ∫_s^t G^f_s T^f_{sv}h dv = ∫_s^t T^f_{sv} G^f_v h dv  ∀h ∈ D

and

T^f_{st}h − h = −∫_s^t (∂/∂v) T^f_{vt}h dv = ∫_s^t T^f_{vt} G^f_t h dv = ∫_s^t G^f_v T^f_{vt}h dv  ∀h ∈ D.
For each h ∈ D we have

E[h(X_{f(s+ε)}) − h(X_{f(s)}) | F_{f(s)}] = (T_{f(s),f(s+ε)}h)(X_{f(s)}) − h(X_{f(s)}) = ε(G^f_s h)(X_{f(s)}) + o(ε).

The generator on the simple flow f appears as a means of describing how the process moves along the flow from one set to another set, which is ‘infinitesimally close’.
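For intuition, this limit can be probed numerically in the simplest classical case: for the Brownian semigroup (T_ε h)(x) = E[h(x + √ε Z)] (our example, not the thesis's), the quotient ((T_ε h)(x) − h(x))/ε should approach (1/2)h″(x) as ε ↘ 0. A sketch using Gauss-Hermite quadrature for the expectation:

    import numpy as np

    # Finite-difference probe of a generator: for the Brownian semigroup,
    # (T_eps h)(x) = E[h(x + sqrt(eps) Z)] with Z ~ N(0,1), and the
    # difference quotient should approach (1/2) h''(x).
    def T_eps(h, x, eps, n=64):
        # probabilists' Gauss-Hermite quadrature for the Gaussian expectation
        nodes, weights = np.polynomial.hermite_e.hermegauss(n)
        return np.sum(weights * h(x + np.sqrt(eps) * nodes)) / np.sqrt(2 * np.pi)

    h = np.cos
    x, eps = 0.3, 1e-4
    print((T_eps(h, x, eps) - h(x)) / eps)   # difference quotient
    print(-0.5 * np.cos(x))                  # (1/2) h''(x) for h = cos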
The fact that each set in A(u) can be approached from different directions, according to the flow that we are looking at, which passes through that set, suggests the possibility of defining “the” generator of a set-indexed Q-Markov process as the collection of all generators of the process along a large enough collection of simple flows. (Intuitively, we can identify the set-indexed generator at the set B ∈ A(u) with the collection of the ‘directional’ derivatives of the Banach-valued map A(u) ∋ B′ ↦ T_{BB′}h.) More precisely, we have the following definition.
Definition 4.1.1 Let Q := (Q_{BB′})_{B⊆B′} be a transition system. The generator of a Q-Markov process X := (X_A)_{A∈A} is the collection {G^f := (G^f_s)_s; f ∈ S}, where G^f_s is the generator of the process X^f := (X_{f(t)})_t at time s.
Note that if we know the generator G^f := (G^f_s)_s of a set-indexed Q-Markov process on a certain flow f, we can in fact write down the generator of the process over any flow g which has the same path as f. More precisely, if two flows f : [0, a] → A(u) and g : [0, b] → A(u) are such that f(t) = g(α(t)) ∀t ∈ [0, a] for a certain continuous one-to-one map α : [0, a] → [0, b] which has a non-vanishing derivative denoted α′, then

G^f_s = α′(s) G^g_{α(s)} and D(G^f_s) = D(G^g_{α(s)}).
This is a consequence of the following lemma.
Lemma 4.1.2 Let X^1 := (X^1_t)_{t∈[0,b]} and X^2 := (X^2_t)_{t∈[0,a]} be two real-valued processes such that X^2_t := X^1_{α(t)} for all t ∈ [0, a], where α : [0, a] → [0, b] is a continuous one-to-one map. Let Q^1 := (Q^1_{st})_{s,t∈[0,b]; s<t} be a transition system and define Q^2_{st} := Q^1_{α(s),α(t)}. Then Q^2 := (Q^2_{st})_{s<t} is also a transition system and X^1 is Q^1-Markov if and only if X^2 is Q^2-Markov. In this case, if in addition the map α has a non-vanishing derivative α′, and we denote with G^2_s, G^1_{α(s)} the generators of X^2, X^1 at times s, respectively α(s), and with D(G^2_s), D(G^1_{α(s)}) their respective domains, then

G^2_s = α′(s) G^1_{α(s)} and D(G^2_s) = D(G^1_{α(s)}).
Proof: Clearly Q^2 is a transition system: for any s < t < u we have

Q^2_{st} Q^2_{tu} = Q^1_{α(s),α(t)} Q^1_{α(t),α(u)} = Q^1_{α(s),α(u)} = Q^2_{su}.

Assume that X^1 is Q^1-Markov and let s < t, x ∈ R, Γ ∈ B(R) be arbitrary. Then

P[X^2_t ∈ Γ | X^2_s = x] = P[X^1_{α(t)} ∈ Γ | X^1_{α(s)} = x] = Q^1_{α(s),α(t)}(x; Γ) = Q^2_{st}(x; Γ)

i.e. X^2 is Q^2-Markov.
Suppose next that α has a non-vanishing derivative α′. We will prove that D(G^2_s) ⊆ D(G^1_{α(s)}): if h ∈ D(G^2_s), then h ∈ D(G^1_{α(s)}) since the limit

(G^1_{α(s)}h)(x) = lim_{u↘α(s)} (1/(u − α(s))) ∫_R (h(y) − h(x)) Q^1_{α(s),u}(x; dy)
= lim_{t↘s} (1/(α(t) − α(s))) ∫_R (h(y) − h(x)) Q^1_{α(s),α(t)}(x; dy)
= lim_{t↘s} ((t − s)/(α(t) − α(s))) · (1/(t − s)) ∫_R (h(y) − h(x)) Q^2_{st}(x; dy) = (1/α′(s)) (G^2_s h)(x)

exists, uniformly in x ∈ R. Similarly D(G^1_{α(s)}) ⊆ D(G^2_s). □
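Lemma 4.1.2 can be sanity-checked numerically in the same Brownian example as above: if X^1 is Brownian motion, whose generator is (1/2)h″ at every time, and α(t) = t^2, then the time-changed process should have generator α′(s)·(1/2)h″ = s h″ at time s. A hedged sketch (all choices illustrative):

    import numpy as np

    # Check of Lemma 4.1.2: for X^2_t = X^1_{alpha(t)} with X^1 Brownian
    # motion and alpha(t) = t^2, the generator of X^2 at s is
    # alpha'(s) * (1/2) h''.
    def E_gauss(h, x, var, n=64):
        z, w = np.polynomial.hermite_e.hermegauss(n)
        return np.sum(w * h(x + np.sqrt(var) * z)) / np.sqrt(2 * np.pi)

    alpha = lambda t: t ** 2
    h = np.cos
    s, x, eps = 0.7, 0.3, 1e-5
    var = alpha(s + eps) - alpha(s)          # variance of Q^2_{s,s+eps}
    print((E_gauss(h, x, var) - h(x)) / eps) # difference quotient for G^2_s
    print(2 * s * (-0.5 * np.cos(x)))        # alpha'(s) * G^1_{alpha(s)} h (x)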
The general problem of finding the conditions that have to be imposed on a family (G_s)_s of linear operators on B(R) so that each G_s becomes the generator at time s of a classical Q-Markov process is not straightforward. In the homogeneous case, the Hille-Yosida theorem (Theorem I.2.6, [30]) gives the necessary and sufficient conditions for a linear operator G on B(R) to be the generator of a (strongly continuous positive contraction) semigroup T := (T_t)_t; however, there is no guarantee that each T_t is the operator associated to some transition probability Q_t. In the inhomogeneous case, things become even more complicated.
Therefore, in trying to determine when a given collection {G^f := (G^f_s)_s; f ∈ S} of families of linear operators on B(R) actually forms the generator of a set-indexed Q-Markov process, for ease of exposition we will assume that each operator G^f_s is the generator at time s of a semigroup T^f := (T^f_{st})_{s<t}, each T^f_{st} being the operator associated to a transition probability Q^f_{st}.
The main result of this section is an immediate consequence of Corollary 3.3.8. It basically says that the generator allows us to reconstruct the distribution of a set-indexed Q-Markov process (or, equivalently, its finite dimensional distributions) if certain consistency assumptions hold.
Theorem 4.1.3 Let $\mu$ be a probability measure on $\mathbb{R}$ and $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ a collection of families of linear operators on $B(\mathbb{R})$ such that each operator $G^f_s$ is defined on a dense subspace $D$ of $B(\mathbb{R})$ and is the generator at time $s$ of a semigroup $T^f := (T^f_{st})_{s<t}$, each $T^f_{st}$ being the operator associated to a transition probability $Q^f_{st}$. If the collection $\{Q^f := (Q^f_{st})_{s<t};\ f \in S\}$ of transition systems satisfies Assumption 3.3.6 and Assumption 3.3.7, then there exist a transition system $Q := (Q_{BB'})_{B \subseteq B'}$ and a probability measure $P$ on the product space $(\mathbb{R}^{\mathcal{C}}, \mathcal{B}(\mathbb{R})^{\mathcal{C}})$ under which:
1. the coordinate-variable process $X := (X_C)_{C \in \mathcal{C}}$ defined on this space is $Q$-Markov with initial distribution $\mu$; and
2. $\forall f \in S$, the generator of the process $X^f := (X_{f(t)})_t$ at time $s$ is an extension of the operator $G^f_s$.
In Chapter 5 and Chapter 6 we will focus on some particular examples of set-indexed $Q$-Markov processes (processes with independent increments, respectively empirical processes) and we will see that in their case, the consistency assumptions in terms of generators become much simpler.
4.2 The Construction using the Generator
In this section, we will find equivalent formulations in terms of the generators $G^f_s$ for Assumption 3.3.6 and Assumption 3.3.7, in the case of a general set-indexed $Q$-Markov process. These new assumptions, though notationally complex, will provide us with a set of necessary and sufficient conditions that have to be imposed on a collection of generators so that they become the generator of a set-indexed $Q$-Markov process. Most of the results of this section appear as the last section of [7].
Let $\mu$ be a probability measure on $\mathbb{R}$ and $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ a collection of families of linear operators on $B(\mathbb{R})$ such that each operator $G^f_s$ is defined on a dense subspace $D$ of $B(\mathbb{R})$ and is the generator at time $s$ of a semigroup $T^f := (T^f_{st})_{s<t}$, each $T^f_{st}$ being the operator associated to a transition probability $Q^f_{st}$.
Recall from the previous section that, in this case, the functions $T^f_{st}h$ with $h \in D$ are assumed to be strongly continuously differentiable with respect to $s$ and $t$; consequently, we have a set of four integral equations, from which we retain the following:
$$T^f_{st}h - h = \int_s^t G^f_vT^f_{vt}h\,dv \quad \forall h \in D. \tag{17}$$
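For instance, (17) can be checked numerically in the simplest homogeneous Gaussian case, where $T_{st}h(x) = E[h(x+\sqrt{t-s}\,Z)]$ with $Z \sim N(0,1)$ and $G_vh = \frac{1}{2}h''$. The sketch below is illustrative only; the setting and all names are chosen for the example and are not part of the construction.

```python
import numpy as np

# Numerical sanity check of the integral equation (17),
#   T_{st} h - h = int_s^t G_v T_{vt} h dv,
# for the homogeneous heat semigroup T_{st} h(x) = E[h(x + sqrt(t-s) Z)],
# whose generator is G_v h = (1/2) h''.  All names are illustrative.

def T(s, t, h):
    """Return the function x -> E[h(x + sqrt(t-s) Z)], Z ~ N(0,1)."""
    z, w = np.polynomial.hermite_e.hermegauss(80)   # Gauss-Hermite nodes for N(0,1)
    w = w / w.sum()                                 # normalize weights
    return lambda x: np.sum(w * h(x + np.sqrt(t - s) * z))

def G(hfun, x, dx=1e-3):
    """Second-difference approximation of (1/2) h''(x)."""
    return 0.5 * (hfun(x + dx) - 2.0 * hfun(x) + hfun(x - dx)) / dx**2

h = lambda y: np.cos(y)          # a smooth bounded test function (in C_b^2)
s, t, x = 0.0, 1.0, 0.3

lhs = T(s, t, h)(x) - h(x)
vs = np.linspace(s, t, 201)
rhs = np.trapz([G(T(v, t, h), x) for v in vs], vs)
print(lhs, rhs)                  # the two values should agree to ~1e-4
```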
The goal is to find the conditions that have to be satisfied by the collection $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ such that the collection $\{Q^f := (Q^f_{st})_{s<t};\ f \in S\}$ of transition systems satisfies Assumption 3.3.6 and Assumption 3.3.7. By invoking Theorem 4.1.3 we will then be able to conclude that there exist a set-indexed transition system $Q := (Q_{BB'})_{B \subseteq B'}$ and a probability measure $P$ on the product space $(\mathbb{R}^{\mathcal{C}}, \mathcal{B}(\mathbb{R})^{\mathcal{C}})$ under which the coordinate-variable process $X := (X_C)_{C \in \mathcal{C}}$ (defined on this space) is $Q$-Markov with generator $\{\mathcal{G}^f;\ f \in S\}$ and initial distribution $\mu$.
Using the integral equation (17), we will first give the equivalent form of Assumption 3.3.6 in terms of generators.
Assumption 4.2.1 If ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A_0 = \emptyset', A'_1, \ldots, A'_m\}$ are two consistent orderings of some finite semilattices $\mathcal{A}'$, $\mathcal{A}''$, such that $\cup_{j=1}^nA_j = \cup_{j=1}^mA'_j$ and $\cup_{j=1}^kA_j = \cup_{j=1}^lA'_j$ for some $k < n$, $l < m$, and we denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}'',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$, then for every $h \in D$
$$\int_0^{t_1}G^f_vT^f_{vt_1}h\,dv = \int_0^{u_1}G^g_vT^g_{vu_1}h\,dv \quad \text{and} \quad \int_{t_k}^{t_n}G^f_vT^f_{vt_n}h\,dv = \int_{u_l}^{u_m}G^g_vT^g_{vu_m}h\,dv.$$
We will prove next that the condition of Assumption 3.3.7 can also be written in
a certain form which admits an expression in terms of generators.
Let us introduce the following notational convention.
If $Q_1(x_1;dx_2), Q_2(x_2;dx_3), \ldots, Q_{n-1}(x_{n-1};dx_n)$ are transition probabilities on $\mathbb{R}$, $T_1, T_2, \ldots, T_{n-1}$ are their associated bounded linear operators, and $h : \mathbb{R}^n \to \mathbb{R}$ is a bounded measurable function, then
$$T_1T_2\cdots T_{n-1}[h(x_1,x_2,\ldots,x_n)](x_1) := \int_{\mathbb{R}^{n-1}}h(x_1,x_2,\ldots,x_n)\,Q_{n-1}(x_{n-1};dx_n)\cdots Q_2(x_2;dx_3)\,Q_1(x_1;dx_2).$$
If in addition $G$ is a linear operator on $B(\mathbb{R})$ with domain $D(G)$, and the function $h$ is chosen such that for every $x_1,\ldots,x_k \in \mathbb{R}$ the function
$$\int_{\mathbb{R}^{n-k}}h(x_1,x_2,\ldots,x_n)\,Q_{n-1}(x_{n-1};dx_n)\cdots Q_{k+1}(x_{k+1};dx_{k+2})\,Q_k(\,\cdot\,;dx_{k+1})$$
is in $D(G)$, then
$$T_1T_2\cdots T_{k-1}GT_k\cdots T_{n-1}[h(x_1,x_2,\ldots,x_n)](x_1) :=$$
$$\int_{\mathbb{R}^{k-1}}\Big(G\int_{\mathbb{R}^{n-k}}h(x_1,x_2,\ldots,x_n)\,Q_{n-1}(x_{n-1};dx_n)\cdots Q_{k+1}(x_{k+1};dx_{k+2})\,Q_k(\,\cdot\,;dx_{k+1})\Big)(x_k)\,Q_{k-1}(x_{k-1};dx_k)\cdots Q_2(x_2;dx_3)\,Q_1(x_1;dx_2).$$
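On a finite state space this convention is nothing but iterated matrix-vector multiplication; the following sketch (with illustrative names, for $n = 3$ and without the operator $G$) makes the pattern explicit.

```python
import numpy as np

# Illustration of the notational convention above on a finite state space:
# a transition probability becomes a row-stochastic matrix, and
# T1 T2 [h(x1,x2,x3)](x1) = sum_{x2,x3} h(x1,x2,x3) Q2[x2,x3] Q1[x1,x2].
# All names are illustrative.

rng = np.random.default_rng(0)
m = 4                                        # number of states
Q1 = rng.random((m, m)); Q1 /= Q1.sum(axis=1, keepdims=True)
Q2 = rng.random((m, m)); Q2 /= Q2.sum(axis=1, keepdims=True)
h = rng.random((m, m, m))                    # a bounded function h(x1, x2, x3)

# integrate out the innermost variable first: x3, then x2
inner = np.einsum('abc,bc->ab', h, Q2)       # sum_{x3} h(x1,x2,x3) Q2(x2; x3)
result = np.einsum('ab,ab->a', inner, Q1)    # sum_{x2} (...) Q1(x1; x2)
print(result)                                # the function x1 -> T1 T2 [h](x1)
```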
Let $\mu^f_{t_1}$ be the probability measure defined by $\mu^f_{t_1}(\Gamma) := \int_{\mathbb{R}}Q^f_{0t_1}(x;\Gamma)\,\mu(dx)$. The following result is crucial. It shows that condition (16) of the consistency Assumption 3.3.7 has an equivalent form in which both the left-hand side and the right-hand side are integrated with respect to the same transition probabilities $Q^g_{u_{\pi(j)-1},u_{\pi(j)}}$; moreover, this equivalent form can be translated in terms of the generators $(G^f_w)_{w\in[t_{i-1},t_i]}$ and $(G^g_v)_{v\in[u_{\pi(i)-1},u_{\pi(i)}]}$.
Lemma 4.2.2 Let ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A_0 = \emptyset', A'_1, \ldots, A'_n\}$ be two consistent orderings of the same finite semilattice $\mathcal{A}'$ with $A_i = A'_{\pi(i)}\ \forall i$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$. Suppose that $Q^f_{0t_1} = Q^g_{0u_1}$. The following statements are equivalent:
(a) For every $\Gamma_0, \Gamma_1, \ldots, \Gamma_n \in \mathcal{B}(\mathbb{R})$
$$\int_{\mathbb{R}^{n+1}}I_{\Gamma_0}(x_0)I_{\Gamma_1}(x_1)\prod_{i=2}^nI_{\Gamma_i}(x_i-x_{i-1})\,Q^f_{t_{n-1}t_n}(x_{n-1};dx_n)\cdots Q^f_{t_1t_2}(x_1;dx_2)\,Q^f_{0t_1}(x_0;dx_1)\,\mu(dx_0) = \tag{18}$$
$$\int_{\mathbb{R}^{n+1}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)\prod_{i=2}^nI_{\Gamma_i}(y_{\pi(i)}-y_{\pi(i)-1})\,Q^g_{u_{n-1}u_n}(y_{n-1};dy_n)\cdots Q^g_{u_1u_2}(y_1;dy_2)\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0);$$
(b) for each $i = 2,\ldots,n$, if we denote by $l_1 \le l_2 \le \ldots \le l_{2(i-1)}$ the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3), \ldots, \pi(i)-1, \pi(i)$, and by $p$ the index for which $\pi(i)-1 = l_{p-1}$, $\pi(i) = l_p$, then for every $h_2,\ldots,h_i \in B(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(T^f_{t_{i-1}t_i}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1) = \tag{19}$$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1);$$
(c) for each $i = 2,\ldots,n$, if we denote by $l_1 \le l_2 \le \ldots \le l_{2(i-1)}$ the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3), \ldots, \pi(i)-1, \pi(i)$, and by $p$ the index for which $\pi(i)-1 = l_{p-1}$, $\pi(i) = l_p$, then for every $h_2,\ldots,h_i \in D$ and for $\mu^f_{t_1}$-almost all $x_1$:
(c1) if $p = 2(i-1)$ we have
$$\int_{t_{i-1}}^{t_i}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(G^f_wT^f_{wt_i}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}}^{u_{l_p}}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}G^g_vT^g_{vu_{l_p}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dv;$$
(c2) if $p < 2(i-1)$ we have
$$\int_{t_{i-1}}^{t_i}T^g_{u_1u_{l_1}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(G^f_wT^f_{wt_i}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}}^{u_{l_p}}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}G^g_vT^g_{vu_{l_p}}T^g_{u_{l_p}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\cdot\big\{h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)-h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\big\}\Big](x_1)\,dv.$$
Proof: (a) ⇒ (b): Let $X$ be a $Q^f$-Markov process and $Y$ a $Q^g$-Markov process with the same initial distribution $\mu$. Then (a) is equivalent to saying that the distribution of $(X_0, X_{t_1}, X_{t_2}-X_{t_1}, \ldots, X_{t_n}-X_{t_{n-1}})$ coincides with the distribution of $(Y_0, Y_{u_1}, Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}}, \ldots, Y_{u_{\pi(n)}}-Y_{u_{\pi(n)-1}})$.
Since $Q^f_{0t_1} = Q^g_{0u_1}$, both $X_{t_1}$ and $Y_{u_1}$ have the same distribution $\mu^f_{t_1}$. Hence (a) is equivalent to saying that for $\mu^f_{t_1}$-almost all $x_1$, the conditional distribution of $(X_{t_2}-X_{t_1},\ldots,X_{t_n}-X_{t_{n-1}})$ given $X_{t_1}=x_1$ coincides with the conditional distribution of $(Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}},\ldots,Y_{u_{\pi(n)}}-Y_{u_{\pi(n)-1}})$ given $Y_{u_1}=x_1$.
For $i = 2$ we will use the following relationship: for every $\Gamma_2 \in \mathcal{B}(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$P[X_{t_2}-X_{t_1}\in\Gamma_2\mid X_{t_1}=x_1] = P[Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}}\in\Gamma_2\mid Y_{u_1}=x_1].$$
Using Lemma A.2.4, Appendix A.2, the left-hand side is $\int_{\mathbb{R}}I_{\Gamma_2+x_1}(x_2)\,Q^f_{t_1t_2}(x_1;dx_2)$, whereas on the right-hand side we have
$$\int_{\mathbb{R}^2}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}).$$
By a monotone class argument we can conclude that for every $h_2 \in B(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$\int_{\mathbb{R}}h_2(x_2)\,Q^f_{t_1t_2}(x_1;dx_2) = \int_{\mathbb{R}^2}h_2(x_1+y_{\pi(2)}-y_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) \tag{20}$$
which is the desired relationship for i = 2.
For $i = 3$ we will use the following relationship: for every $\Gamma_2, \Gamma_3 \in \mathcal{B}(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$P[X_{t_2}-X_{t_1}\in\Gamma_2,\,X_{t_3}-X_{t_2}\in\Gamma_3\mid X_{t_1}=x_1] = P[Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}}\in\Gamma_2,\,Y_{u_{\pi(3)}}-Y_{u_{\pi(3)-1}}\in\Gamma_3\mid Y_{u_1}=x_1].$$
Using Lemma A.2.4, Appendix A.2, the left-hand side can be written as
$$\int_{\mathbb{R}^2}I_{\Gamma_2+x_1}(x_2)\,I_{\Gamma_3+x_2}(x_3)\,Q^f_{t_2t_3}(x_2;dx_3)\,Q^f_{t_1t_2}(x_1;dx_2)$$
which becomes
$$\int_{\mathbb{R}^3}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3+x_1+y_{\pi(2)}-y_{\pi(2)-1}}(x_3)\,Q^f_{t_2t_3}(x_1+y_{\pi(2)}-y_{\pi(2)-1};dx_3)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1})$$
using equation (20).
Using also Lemma A.2.4, Appendix A.2, the right-hand side can be written as
$$\int_{\mathbb{R}^4}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3}(y_{\pi(3)}-y_{\pi(3)-1})\,Q^g_{u_{l_3}u_{l_4}}(y_{l_3};dy_{l_4})\,Q^g_{u_{l_2}u_{l_3}}(y_{l_2};dy_{l_3})\,Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(x_1;dy_{l_1})$$
where $l_1 \le l_2 \le l_3 \le l_4$ is the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3)$.
By a monotone class argument we can conclude that for every $h_2, h_3 \in B(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$\int_{\mathbb{R}^3}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x_3)\,Q^f_{t_2t_3}(x_1+y_{\pi(2)}-y_{\pi(2)-1};dx_3)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) = \tag{21}$$
$$\int_{\mathbb{R}^4}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y_{\pi(3)}-y_{\pi(3)-1})\,Q^g_{u_{l_3}u_{l_4}}(y_{l_3};dy_{l_4})\,Q^g_{u_{l_2}u_{l_3}}(y_{l_2};dy_{l_3})\,Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(x_1;dy_{l_1})$$
which is the desired relationship for $i = 3$.
The inductive argument is omitted since it is identical but notationally more complex.
(b) ⇒ (a): Let $\Gamma_0,\Gamma_1,\ldots,\Gamma_n \in \mathcal{B}(\mathbb{R})$ be arbitrary. Using the fact that $Q^f_{0t_1} = Q^g_{0u_1}$ and equation (20), the left-hand side of (18) becomes
$$\int_{\mathbb{R}^{n+2}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})I_{\Gamma_3+y_1+y_{\pi(2)}-y_{\pi(2)-1}}(x_3)I_{\Gamma_4+x_3}(x_4)\cdots I_{\Gamma_n+x_{n-1}}(x_n)\,Q^f_{t_{n-1}t_n}(x_{n-1};dx_n)\cdots Q^f_{t_3t_4}(x_3;dx_4)\,Q^f_{t_2t_3}(y_1+y_{\pi(2)}-y_{\pi(2)-1};dx_3)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(y_1;dy_{\pi(2)-1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0)$$
which in turn can be written as
$$\int_{\mathbb{R}^{n+3}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})I_{\Gamma_3}(y_{\pi(3)}-y_{\pi(3)-1})I_{\Gamma_4+y_1+\sum_{j=2}^3(y_{\pi(j)}-y_{\pi(j)-1})}(x_4)\cdots I_{\Gamma_n+x_{n-1}}(x_n)\,Q^f_{t_{n-1}t_n}(x_{n-1};dx_n)\cdots Q^f_{t_3t_4}\big(y_1+\sum_{j=2}^3(y_{\pi(j)}-y_{\pi(j)-1});dx_4\big)\,Q^g_{u_{l_3}u_{l_4}}(y_{l_3};dy_{l_4})\,Q^g_{u_{l_2}u_{l_3}}(y_{l_2};dy_{l_3})\,Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(y_1;dy_{l_1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0)$$
using equation (21), where $l_1 \le l_2 \le l_3 \le l_4$ is the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3)$.
Continuing in the same manner, at the last step we will get exactly the desired right-hand side of (18), since the increasing ordering of the values $\pi(2)-1, \pi(2), \ldots, \pi(n)-1, \pi(n)$ is exactly $1 \le 2 \le \ldots \le n$.
(b) ⇔ (c): The basic ingredient will be equation (17), which gives the integral
expression of a semigroup in terms of its generator.
Since $D$ is dense, we can assume that the functions $h_2,\ldots,h_i$ in the expression given by (b) are in $D$. Subtract
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)$$
from both sides of this expression.
On the left-hand side we will have
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(T^f_{t_{i-1}t_i}h_i-h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)$$
which can be written as
$$\int_{t_{i-1}}^{t_i}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(G^f_wT^f_{wt_i}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dw$$
using the integral expression (17) and Fubini's theorem.
On the right-hand side we have
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}\Big\{T^g_{u_{l_{p-1}}u_{l_p}}T^g_{u_{l_p}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](y_{l_{p-1}})$$
$$-\,T^g_{u_{l_{p-1}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](y_{l_{p-1}})\Big\}(x_1).$$
If we denote
$$h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_p}) := \int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_p}u_{l_{p+1}}}(y_{l_p};dy_{l_{p+1}})$$
and
$$h(x_1,y_{l_1},\ldots,y_{l_{p-1}}) := \int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}u_{l_{p+1}}}(y_{l_{p-1}};dy_{l_{p+1}})$$
then the right-hand side becomes
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}\big[(T^g_{u_{l_{p-1}}u_{l_p}}h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_{p-1}})-h(x_1,y_{l_1},\ldots,y_{l_{p-1}})\big](x_1) =$$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}\Big[h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_{p-1}})-h(x_1,y_{l_1},\ldots,y_{l_{p-1}})+\int_{u_{l_{p-1}}}^{u_{l_p}}(G^g_vT^g_{vu_{l_p}}h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_{p-1}})\,dv\Big](x_1)$$
since $h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot) \in D$ for every $x_1,y_{l_1},\ldots,y_{l_{p-1}}$. We have two cases:
Case 1) If $p = 2(i-1)$, then all the integrals with respect to $Q^g_{u_{l_p}u_{l_{p+1}}}, Q^g_{u_{l_{p+1}}u_{l_{p+2}}}, \ldots, Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}$ disappear in the preceding expressions. Hence
$$h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_{p-1}}) = h(x_1,y_{l_1},\ldots,y_{l_{p-1}})$$
and the result follows.
Case 2) If $p < 2(i-1)$, then
$$h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_{p-1}}) = (T^g_{u_{l_p}u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_{p-1}})$$
$$h(x_1,y_{l_1},\ldots,y_{l_{p-1}}) = (T^g_{u_{l_{p-1}}u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_{p-1}})$$
where
$$H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_{p+1}}) := \int_{\mathbb{R}^{2(i-1)-p-1}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}}).$$
To simplify the notation we will omit the arguments of the function $H$. Hence
$$h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_{p-1}})-h(x_1,y_{l_1},\ldots,y_{l_{p-1}}) = (T^g_{u_{l_p}u_{l_{p+1}}}H)(y_{l_{p-1}})-[T^g_{u_{l_{p-1}}u_{l_p}}(T^g_{u_{l_p}u_{l_{p+1}}}H)](y_{l_{p-1}}) = -\int_{u_{l_{p-1}}}^{u_{l_p}}(G^g_vT^g_{vu_{l_p}}(T^g_{u_{l_p}u_{l_{p+1}}}H))(y_{l_{p-1}})\,dv$$
since $H(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot) \in D$ for every $x_1,y_{l_1},\ldots,y_{l_{p-1}}$.
Using Fubini's theorem, the right-hand side becomes
$$\int_{u_{l_{p-1}}}^{u_{l_p}}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}\big[(G^g_vT^g_{vu_{l_p}}U(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_{p-1}})\big](x_1)\,dv$$
where
$$U(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_p}) := h_0(x_1,y_{l_1},\ldots,y_{l_{p-1}},y_{l_p})-(T^g_{u_{l_p}u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},\cdot))(y_{l_p}) =$$
$$\int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,\Big[h_i\big(x_1+\sum_{j=2}^{i}(y_{\pi(j)}-y_{\pi(j)-1})\big)-h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big]\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_p}u_{l_{p+1}}}(y_{l_p};dy_{l_{p+1}}).$$
This concludes the proof. □
The following assumption is the equivalent form of Assumption 3.3.7 in terms of generators.
Assumption 4.2.3 If ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A_0 = \emptyset', A'_1, \ldots, A'_n\}$ are two consistent orderings of the same finite semilattice $\mathcal{A}'$ with $A_i = A'_{\pi(i)}$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and we denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$, then the generators $\mathcal{G}^f$ and $\mathcal{G}^g$ satisfy condition (c) stated in Lemma 4.2.2.
Clearly Assumptions 4.2.1 and 4.2.3 are necessary conditions satisfied by the generator of any set-indexed Q-Markov process. The following result says that in fact,
these two assumptions are also sufficient for the construction of the process.
Theorem 4.2.4 Let $\mu$ be a probability measure on $\mathbb{R}$ and $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ a collection of families of linear operators on $B(\mathbb{R})$ such that each operator $G^f_s$ is defined on a dense subspace $D$ of $B(\mathbb{R})$ and is the generator at time $s$ of a semigroup $T^f := (T^f_{st})_{s<t}$, each $T^f_{st}$ being the operator associated to a transition probability $Q^f_{st}$. If the collection $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ satisfies Assumption 4.2.1 and Assumption 4.2.3, then there exist a transition system $Q := (Q_{BB'})_{B \subseteq B'}$ and a probability measure $P$ on the product space $(\mathbb{R}^{\mathcal{C}}, \mathcal{B}(\mathbb{R})^{\mathcal{C}})$ under which:
1. the coordinate-variable process $X := (X_C)_{C \in \mathcal{C}}$ defined on this space is $Q$-Markov with initial distribution $\mu$; and
2. $\forall f \in S$, the generator of the process $X^f := (X_{f(t)})_t$ at time $s$ is an extension of the operator $G^f_s$.
Proof: The collection $\{Q^f := (Q^f_{st})_{s<t};\ f \in S\}$ of transition systems satisfies Assumption 3.3.6 and Assumption 3.3.7 (by Lemma 4.2.2). The result then follows immediately from Theorem 4.1.3. □
4.3 The Construction using the Generator along the Projected Flows
In this section we will develop a new set of consistency conditions for the generator of a set-indexed $Q$-Markov process (stronger than those of the previous section) which will allow us to reconstruct the process when $S$ is the collection of projected flows introduced in Section 2.5. The new consistency conditions are even more complex than those of the previous section. However, we are able to prove that in this case, (the distributions of) the projections of the constructed process over a semilattice ordered consistently in two different ways can be obtained from each other via a permutation. The results of this section are not required anywhere else in this work.
Let $f_0 : [0,1] \to \mathcal{A}$ be a fixed continuous flow with $f_0(0) = \emptyset'$ and $f_0(1) = T$. For each finite sub-semilattice $\mathcal{A}'$ of $\mathcal{A}$ and for each consistent ordering ord $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ of $\mathcal{A}'$, let $f_{\mathcal{A}',\mathrm{ord}}$ be the projection of the flow $f_0$ on the semilattice $\mathcal{A}'$ with respect to the ordering ord. Let $S$ be the collection of all simple flows $f_{\mathcal{A}',\mathrm{ord}}$.
Let $\mu$ be a probability measure on $\mathbb{R}$ and $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ a collection of families of linear operators on $B(\mathbb{R})$, such that each operator $G^f_s$ is defined on a dense subspace $D$ of $B(\mathbb{R})$ and is the generator at time $s$ of a semigroup $T^f := (T^f_{st})_{s<t}$, each $T^f_{st}$ being the operator associated to a transition probability $Q^f_{st}$.
The first step is to find the relationship between the transition probabilities $Q^f_{t_{i-1},t_{i-1}+\epsilon}$ and $Q^g_{u_{\pi(i)-1},u_{\pi(i)-1}+\epsilon}$, and then to write down the new relationship between the generators $(G^f_w)_{w\in[t_{i-1},t_{i-1}+\epsilon]}$ and $(G^g_v)_{v\in[u_{\pi(i)-1},u_{\pi(i)-1}+\epsilon]}$, for every $0 < \epsilon \le t_i - t_{i-1}$.
Let $\mu^f_{t_1}$ be the probability measure defined by $\mu^f_{t_1}(\Gamma) := \int_{\mathbb{R}}Q^f_{0t_1}(x;\Gamma)\,\mu(dx)$.
Lemma 4.3.1 Let ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A'_0 = \emptyset', A'_1, \ldots, A'_n\}$ be two consistent orderings of the same finite semilattice $\mathcal{A}'$, with $A_i = A'_{\pi(i)}\ \forall i$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$. Suppose that $Q^f_{0t_1} = Q^g_{0u_1}$. The following statements are equivalent:
(a) for each $i = 2,\ldots,n$, if we denote by $l_1 \le l_2 \le \ldots \le l_{2(i-1)}$ the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3), \ldots, \pi(i)-1, \pi(i)$, and by $p$ the index for which $\pi(i)-1 = l_{p-1}$, $\pi(i) = l_p$, then for every $0 < \epsilon \le t_i-t_{i-1}$ and for every $\Gamma_0,\Gamma_1,\ldots,\Gamma_i \in \mathcal{B}(\mathbb{R})$
$$\int_{\mathbb{R}^{i+1}}I_{\Gamma_0}(x_0)I_{\Gamma_1}(x_1)\prod_{j=2}^{i-1}I_{\Gamma_j}(x_j-x_{j-1})\,I_{\Gamma_i}(x'_{i-1}-x_{i-1})\,Q^f_{t_{i-1},t_{i-1}+\epsilon}(x_{i-1};dx'_{i-1})\,Q^f_{t_{i-2}t_{i-1}}(x_{i-2};dx_{i-1})\cdots Q^f_{t_1t_2}(x_1;dx_2)\,Q^f_{0t_1}(x_0;dx_1)\,\mu(dx_0) = \tag{22}$$
$$\int_{\mathbb{R}^{2(i-1)+2}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)\prod_{j=2}^{i-1}I_{\Gamma_j}(y_{\pi(j)}-y_{\pi(j)-1})\,I_{\Gamma_i}(y'_{\pi(i)-1}-y_{\pi(i)-1})\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}(y'_{l_{p-1}};dy_{l_{p+1}})\,Q^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}(y_{l_{p-1}};dy'_{l_{p-1}})\,Q^g_{u_{l_{p-2}}u_{l_{p-1}}}(y_{l_{p-2}};dy_{l_{p-1}})\cdots Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(y_1;dy_{l_1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0);$$
(b) for each $i = 2,\ldots,n$, with $l_1 \le \ldots \le l_{2(i-1)}$ and $p$ as above, then for every $0 < \epsilon \le t_i-t_{i-1}$, for every $h_2,\ldots,h_i \in B(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(T^f_{t_{i-1},t_{i-1}+\epsilon}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1) = \tag{23}$$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+(y'_{\pi(i)-1}-y_{\pi(i)-1})\big)\Big](x_1);$$
(c) for each $i = 2,\ldots,n$, with $l_1 \le \ldots \le l_{2(i-1)}$ and $p$ as above, then for every $0 < \epsilon \le t_i-t_{i-1}$, for every $h_2,\ldots,h_i \in D$ and for $\mu^f_{t_1}$-almost all $x_1$:
(c1) if $p = 2(i-1)$ we have
$$\int_{t_{i-1}}^{t_{i-1}+\epsilon}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(G^f_wT^f_{w,t_{i-1}+\epsilon}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}}^{u_{l_{p-1}}+\epsilon}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}G^g_vT^g_{v,u_{l_{p-1}}+\epsilon}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+(y'_{\pi(i)-1}-y_{\pi(i)-1})\big)\Big](x_1)\,dv;$$
(c2) if $p < 2(i-1)$ we have
$$\int_{t_{i-1}}^{t_{i-1}+\epsilon}T^g_{u_1u_{l_1}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,(G^f_wT^f_{w,t_{i-1}+\epsilon}h_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}}^{u_{l_{p-1}}+\epsilon}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-3}}u_{l_{p-2}}}T^g_{u_{l_{p-2}}u_{l_{p-1}}}G^g_vT^g_{v,u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\cdot\big\{h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+(y'_{\pi(i)-1}-y_{\pi(i)-1})\big)-h_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})\big)\big\}\Big](x_1)\,dv.$$
Proof: The proof is identical to the proof of Lemma 4.2.2; the only difference is that we replace $t_i$ by $t_{i-1}+\epsilon$. □
Note: We give below the explicit form of the relationship given at (b) for $i = 2, 3$. For $i = 2$ we have
$$\int_{\mathbb{R}}h_2(x'_1)\,Q^f_{t_1,t_1+\epsilon}(x_1;dx'_1) = \int_{\mathbb{R}^2}h_2(x_1+y'_{\pi(2)-1}-y_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)-1}+\epsilon}(y_{\pi(2)-1};dy'_{\pi(2)-1})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}). \tag{24}$$
For $i = 3$ we have two cases: if $\pi(2) < \pi(3)$, then
$$\int_{\mathbb{R}^3}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x'_2)\,Q^f_{t_2,t_2+\epsilon}(x_1+y_{\pi(2)}-y_{\pi(2)-1};dx'_2)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) = \tag{25}$$
$$\int_{\mathbb{R}^4}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_{\pi(2)},u_{\pi(3)-1}}(y_{\pi(2)};dy_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1});$$
if $\pi(3) < \pi(2)$, then
$$\int_{\mathbb{R}^3}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x'_2)\,Q^f_{t_2,t_2+\epsilon}(x_1+y_{\pi(2)}-y_{\pi(2)-1};dx'_2)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) = \tag{26}$$
$$\int_{\mathbb{R}^4}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(2)-1}}(y'_{\pi(3)-1};dy_{\pi(2)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_1,u_{\pi(3)-1}}(x_1;dy_{\pi(3)-1}).$$
The second step will be to find the relationship between the transition probabilities $Q^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}$ and $Q^g_{u_{\pi(i)-1}+\epsilon,u_{\pi(i)-1}+\epsilon'}$, and then to write down the relationship between the generators $(G^f_w)_{w\in[t_{i-1}+\epsilon,\,t_{i-1}+\epsilon']}$ and $(G^g_v)_{v\in[u_{\pi(i)-1}+\epsilon,\,u_{\pi(i)-1}+\epsilon']}$ for arbitrary $0 \le \epsilon < \epsilon' \le t_i - t_{i-1}$.
Lemma 4.3.2 Let ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A'_0 = \emptyset', A'_1, \ldots, A'_n\}$ be two consistent orderings of the same finite semilattice $\mathcal{A}'$, with $A_i = A'_{\pi(i)}$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$. Suppose that $Q^f_{0t_1} = Q^g_{0u_1}$. The following statements are equivalent:
(a) for each $i = 2,\ldots,n$, if we denote by $l_1 \le l_2 \le \ldots \le l_{2(i-1)}$ the increasing ordering of the values $\pi(2)-1, \pi(2), \pi(3)-1, \pi(3), \ldots, \pi(i)-1, \pi(i)$, and by $p$ the index for which $\pi(i)-1 = l_{p-1}$, $\pi(i) = l_p$, then for every $0 \le \epsilon < \epsilon' \le t_i-t_{i-1}$ and for every $\Gamma_0,\Gamma_1,\ldots,\Gamma_i,\Gamma'_i \in \mathcal{B}(\mathbb{R})$
$$\int_{\mathbb{R}^{i+2}}I_{\Gamma_0}(x_0)I_{\Gamma_1}(x_1)\prod_{j=2}^{i-1}I_{\Gamma_j}(x_j-x_{j-1})\,I_{\Gamma_i}(x'_{i-1}-x_{i-1})\,I_{\Gamma'_i}(x''_{i-1}-x'_{i-1})\,Q^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}(x'_{i-1};dx''_{i-1})\,Q^f_{t_{i-1},t_{i-1}+\epsilon}(x_{i-1};dx'_{i-1})\,Q^f_{t_{i-2}t_{i-1}}(x_{i-2};dx_{i-1})\cdots Q^f_{t_1t_2}(x_1;dx_2)\,Q^f_{0t_1}(x_0;dx_1)\,\mu(dx_0) = \tag{27}$$
$$\int_{\mathbb{R}^{2(i-1)+3}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)\prod_{j=2}^{i-1}I_{\Gamma_j}(y_{\pi(j)}-y_{\pi(j)-1})\,I_{\Gamma_i}(y'_{\pi(i)-1}-y_{\pi(i)-1})\,I_{\Gamma'_i}(y''_{\pi(i)-1}-y'_{\pi(i)-1})\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}(y''_{l_{p-1}};dy_{l_{p+1}})\,Q^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}(y'_{l_{p-1}};dy''_{l_{p-1}})\,Q^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}(y_{l_{p-1}};dy'_{l_{p-1}})\,Q^g_{u_{l_{p-2}}u_{l_{p-1}}}(y_{l_{p-2}};dy_{l_{p-1}})\cdots Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(y_1;dy_{l_1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0);$$
(b) for each $i = 2,\ldots,n$, with $l_1 \le \ldots \le l_{2(i-1)}$ and $p$ as above, then for every $0 \le \epsilon < \epsilon' \le t_i-t_{i-1}$, for every $h_2,\ldots,h_i,h'_i \in B(\mathbb{R})$ and for $\mu^f_{t_1}$-almost all $x_1$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,(T^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}h'_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1) = \tag{28}$$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1);$$
(c) for each $i = 2,\ldots,n$, with $l_1 \le \ldots \le l_{2(i-1)}$ and $p$ as above, then for every $0 \le \epsilon < \epsilon' \le t_i-t_{i-1}$, for every $h_2,\ldots,h_i,h'_i \in D$ and for $\mu^f_{t_1}$-almost all $x_1$:
(c1) if $p = 2(i-1)$ we have
$$\int_{t_{i-1}+\epsilon}^{t_{i-1}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,(G^f_wT^f_{w,t_{i-1}+\epsilon'}h'_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}+\epsilon}^{u_{l_{p-1}}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}G^g_vT^g_{v,u_{l_{p-1}}+\epsilon'}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)\,dv;$$
(c2) if $p < 2(i-1)$ we have
$$\int_{t_{i-1}+\epsilon}^{t_{i-1}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,(G^f_wT^f_{w,t_{i-1}+\epsilon'}h'_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)\,dw =$$
$$\int_{u_{l_{p-1}}+\epsilon}^{u_{l_{p-1}}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}G^g_vT^g_{v,u_{l_{p-1}}+\epsilon'}T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}T^g_{u_{l_{p+1}},u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\cdot\big\{h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)-h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\big\}\Big](x_1)\,dv.$$
Proof: (a) ⇒ (b): Let $X$ be a $Q^f$-Markov process and $Y$ a $Q^g$-Markov process with the same initial distribution $\mu$. Then (a) is equivalent to saying that the distribution of $(X_0,X_{t_1},X_{t_2}-X_{t_1},\ldots,X_{t_{i-1}}-X_{t_{i-2}},X_{t_{i-1}+\epsilon}-X_{t_{i-1}},X_{t_{i-1}+\epsilon'}-X_{t_{i-1}+\epsilon})$ coincides with the distribution of $(Y_0,Y_{u_1},Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}},\ldots,Y_{u_{\pi(i-1)}}-Y_{u_{\pi(i-1)-1}},Y_{u_{\pi(i)-1}+\epsilon}-Y_{u_{\pi(i)-1}},Y_{u_{\pi(i)-1}+\epsilon'}-Y_{u_{\pi(i)-1}+\epsilon})$.
Since $Q^f_{0t_1}=Q^g_{0u_1}$, both $X_{t_1}$ and $Y_{u_1}$ have the same distribution $\mu^f_{t_1}$. Hence (a) is equivalent to saying that for $\mu^f_{t_1}$-almost all $x_1$, the conditional distribution of $(X_{t_2}-X_{t_1},\ldots,X_{t_{i-1}}-X_{t_{i-2}},X_{t_{i-1}+\epsilon}-X_{t_{i-1}},X_{t_{i-1}+\epsilon'}-X_{t_{i-1}+\epsilon})$ given $X_{t_1}=x_1$ coincides with the conditional distribution of $(Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}},\ldots,Y_{u_{\pi(i-1)}}-Y_{u_{\pi(i-1)-1}},Y_{u_{\pi(i)-1}+\epsilon}-Y_{u_{\pi(i)-1}},Y_{u_{\pi(i)-1}+\epsilon'}-Y_{u_{\pi(i)-1}+\epsilon})$ given $Y_{u_1}=x_1$.
For $i=2$ we will use the following relationship: for every $0\le\epsilon<\epsilon'\le t_2-t_1$ and for every Borel sets $\Gamma_2,\Gamma'_2$
$$P[X_{t_1+\epsilon}-X_{t_1}\in\Gamma_2,\,X_{t_1+\epsilon'}-X_{t_1+\epsilon}\in\Gamma'_2\mid X_{t_1}=x_1] = P[Y_{u_{\pi(2)-1}+\epsilon}-Y_{u_{\pi(2)-1}}\in\Gamma_2,\,Y_{u_{\pi(2)-1}+\epsilon'}-Y_{u_{\pi(2)-1}+\epsilon}\in\Gamma'_2\mid Y_{u_1}=x_1].$$
Using relationship (24) (which can be derived from equation (27) for $i=2$, $\epsilon=\epsilon'$), the left-hand side can be written as
$$\int_{\mathbb{R}^2}I_{\Gamma_2+x_1}(x'_1)\,I_{\Gamma'_2+x'_1}(x''_1)\,Q^f_{t_1+\epsilon,t_1+\epsilon'}(x'_1;dx''_1)\,Q^f_{t_1,t_1+\epsilon}(x_1;dx'_1) =$$
$$\int_{\mathbb{R}^3}I_{\Gamma_2}(y'_{\pi(2)-1}-y_{\pi(2)-1})\,I_{\Gamma'_2+x_1+y'_{\pi(2)-1}-y_{\pi(2)-1}}(x''_1)\,Q^f_{t_1+\epsilon,t_1+\epsilon'}(x_1+y'_{\pi(2)-1}-y_{\pi(2)-1};dx''_1)\,Q^g_{u_{\pi(2)-1},u_{\pi(2)-1}+\epsilon}(y_{\pi(2)-1};dy'_{\pi(2)-1})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}).$$
The right-hand side is
$$\int_{\mathbb{R}^3}I_{\Gamma_2}(y'_{\pi(2)-1}-y_{\pi(2)-1})\,I_{\Gamma'_2}(y''_{\pi(2)-1}-y'_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1}+\epsilon,u_{\pi(2)-1}+\epsilon'}(y'_{\pi(2)-1};dy''_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)-1}+\epsilon}(y_{\pi(2)-1};dy'_{\pi(2)-1})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}).$$
By a monotone class argument we can conclude that for every $0\le\epsilon<\epsilon'\le t_2-t_1$ and for every $h_2,h'_2\in B(\mathbb{R})$
$$\int_{\mathbb{R}^3}h_2(y'_{\pi(2)-1}-y_{\pi(2)-1})\,h'_2(x''_1)\,Q^f_{t_1+\epsilon,t_1+\epsilon'}(x_1+y'_{\pi(2)-1}-y_{\pi(2)-1};dx''_1)\,Q^g_{u_{\pi(2)-1},u_{\pi(2)-1}+\epsilon}(y_{\pi(2)-1};dy'_{\pi(2)-1})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) =$$
$$\int_{\mathbb{R}^3}h_2(y'_{\pi(2)-1}-y_{\pi(2)-1})\,h'_2(x_1+y''_{\pi(2)-1}-y_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1}+\epsilon,u_{\pi(2)-1}+\epsilon'}(y'_{\pi(2)-1};dy''_{\pi(2)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)-1}+\epsilon}(y_{\pi(2)-1};dy'_{\pi(2)-1})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1})$$
which is the desired relationship for $i=2$.
To prove the relationship for $i=3$ we will use the following relationship: for every $0\le\epsilon<\epsilon'\le t_3-t_2$ and for every Borel sets $\Gamma_2,\Gamma_3,\Gamma'_3$
$$P[X_{t_2}-X_{t_1}\in\Gamma_2,\,X_{t_2+\epsilon}-X_{t_2}\in\Gamma_3,\,X_{t_2+\epsilon'}-X_{t_2+\epsilon}\in\Gamma'_3\mid X_{t_1}=x_1] = \tag{29}$$
$$P[Y_{u_{\pi(2)}}-Y_{u_{\pi(2)-1}}\in\Gamma_2,\,Y_{u_{\pi(3)-1}+\epsilon}-Y_{u_{\pi(3)-1}}\in\Gamma_3,\,Y_{u_{\pi(3)-1}+\epsilon'}-Y_{u_{\pi(3)-1}+\epsilon}\in\Gamma'_3\mid Y_{u_1}=x_1].$$
The left-hand side is
$$\int_{\mathbb{R}^3}I_{\Gamma_2+x_1}(x_2)\,I_{\Gamma_3+x_2}(x'_2)\,I_{\Gamma'_3+x'_2}(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x'_2;dx''_2)\,Q^f_{t_2,t_2+\epsilon}(x_2;dx'_2)\,Q^f_{t_1t_2}(x_1;dx_2).$$
Using equation (20) (which can be derived from equation (27) for $i=2$, $\epsilon=0$, $\epsilon'=t_2-t_1$), this can be written as
$$\int_{\mathbb{R}^4}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3+x_1+y_{\pi(2)}-y_{\pi(2)-1}}(x'_2)\,I_{\Gamma'_3+x'_2}(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x'_2;dx''_2)\,Q^f_{t_2,t_2+\epsilon}(x_1+y_{\pi(2)}-y_{\pi(2)-1};dx'_2)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}). \tag{30}$$
At this point we may apply equation (27) for i = 3.
Case 1. Suppose that $\pi(2)<\pi(3)$. Using equation (25) (which can be derived from equation (27) for $i=3$, $\epsilon=\epsilon'$, if $\pi(2)<\pi(3)$), integral (30) can be written as
$$\int_{\mathbb{R}^5}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3}(y'_{\pi(3)-1}-y_{\pi(3)-1})\,I_{\Gamma'_3+x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1}}(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1};dx''_2)\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_{\pi(2)},u_{\pi(3)-1}}(y_{\pi(2)};dy_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}).$$
The right-hand side of (29) can be written as
$$\int_{\mathbb{R}^5}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3}(y'_{\pi(3)-1}-y_{\pi(3)-1})\,I_{\Gamma'_3}(y''_{\pi(3)-1}-y'_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(3)-1}+\epsilon'}(y'_{\pi(3)-1};dy''_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_{\pi(2)},u_{\pi(3)-1}}(y_{\pi(2)};dy_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}).$$
By a monotone class argument we can conclude that for every $0\le\epsilon<\epsilon'\le t_3-t_2$ and for every $h_2,h_3,h'_3\in B(\mathbb{R})$ we have
$$\int_{\mathbb{R}^5}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(y'_{\pi(3)-1}-y_{\pi(3)-1})\,h'_3(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1};dx''_2)\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_{\pi(2)},u_{\pi(3)-1}}(y_{\pi(2)};dy_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1}) =$$
$$\int_{\mathbb{R}^5}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(y'_{\pi(3)-1}-y_{\pi(3)-1})\,h'_3(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y''_{\pi(3)-1}-y_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(3)-1}+\epsilon'}(y'_{\pi(3)-1};dy''_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_{\pi(2)},u_{\pi(3)-1}}(y_{\pi(2)};dy_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1,u_{\pi(2)-1}}(x_1;dy_{\pi(2)-1})$$
which is the desired relationship for $i=3$, if $\pi(2)<\pi(3)$.
Case 2. Suppose that $\pi(3)<\pi(2)$. Using equation (26) (which can be derived from equation (27) for $i=3$, $\epsilon=\epsilon'$, if $\pi(3)<\pi(2)$), integral (30) can be written as
$$\int_{\mathbb{R}^5}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3}(y'_{\pi(3)-1}-y_{\pi(3)-1})\,I_{\Gamma'_3+x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1}}(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1};dx''_2)\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(2)-1}}(y'_{\pi(3)-1};dy_{\pi(2)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_1,u_{\pi(3)-1}}(x_1;dy_{\pi(3)-1}).$$
The right-hand side of (29) can be written as
$$\int_{\mathbb{R}^5}I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})\,I_{\Gamma_3}(y'_{\pi(3)-1}-y_{\pi(3)-1})\,I_{\Gamma'_3}(y''_{\pi(3)-1}-y'_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_{\pi(3)-1}+\epsilon',u_{\pi(2)-1}}(y''_{\pi(3)-1};dy_{\pi(2)-1})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(3)-1}+\epsilon'}(y'_{\pi(3)-1};dy''_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_1u_{\pi(3)-1}}(x_1;dy_{\pi(3)-1}).$$
By a monotone class argument we can conclude that for every $0\le\epsilon<\epsilon'\le t_3-t_2$ and for every $h_2,h_3,h'_3\in B(\mathbb{R})$ we have
$$\int_{\mathbb{R}^5}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(y'_{\pi(3)-1}-y_{\pi(3)-1})\,h'_3(x''_2)\,Q^f_{t_2+\epsilon,t_2+\epsilon'}(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y'_{\pi(3)-1}-y_{\pi(3)-1};dx''_2)\,Q^g_{u_{\pi(2)-1},u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(2)-1}}(y'_{\pi(3)-1};dy_{\pi(2)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_1,u_{\pi(3)-1}}(x_1;dy_{\pi(3)-1}) =$$
$$\int_{\mathbb{R}^5}h_2(y_{\pi(2)}-y_{\pi(2)-1})\,h_3(y'_{\pi(3)-1}-y_{\pi(3)-1})\,h'_3(x_1+y_{\pi(2)}-y_{\pi(2)-1}+y''_{\pi(3)-1}-y_{\pi(3)-1})\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_{\pi(3)-1}+\epsilon',u_{\pi(2)-1}}(y''_{\pi(3)-1};dy_{\pi(2)-1})\,Q^g_{u_{\pi(3)-1}+\epsilon,u_{\pi(3)-1}+\epsilon'}(y'_{\pi(3)-1};dy''_{\pi(3)-1})\,Q^g_{u_{\pi(3)-1},u_{\pi(3)-1}+\epsilon}(y_{\pi(3)-1};dy'_{\pi(3)-1})\,Q^g_{u_1u_{\pi(3)-1}}(x_1;dy_{\pi(3)-1})$$
which is the desired relationship for $i=3$, if $\pi(3)<\pi(2)$.
The general argument will be omitted since it is identical to the argument for the
case i = 3, but notationally more complex.
(b) ⇒ (a): Let $\Gamma_0,\Gamma_1,\ldots,\Gamma_i,\Gamma'_i\in\mathcal{B}(\mathbb{R})$ be arbitrary. Using the fact that $Q^f_{0t_1}=Q^g_{0u_1}$ and equation (20) (which can be derived from equation (28) for $i=2$, $\epsilon=0$, $\epsilon'=t_2-t_1$), the left-hand side of (27) becomes
$$\int_{\mathbb{R}^{i+3}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)I_{\Gamma_2}(y_{\pi(2)}-y_{\pi(2)-1})I_{\Gamma_3+y_1+y_{\pi(2)}-y_{\pi(2)-1}}(x_3)I_{\Gamma_4+x_3}(x_4)\cdots I_{\Gamma_{i-1}+x_{i-2}}(x_{i-1})\,I_{\Gamma_i+x_{i-1}}(x'_{i-1})\,I_{\Gamma'_i+x'_{i-1}}(x''_{i-1})\,Q^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}(x'_{i-1};dx''_{i-1})\,Q^f_{t_{i-1},t_{i-1}+\epsilon}(x_{i-1};dx'_{i-1})\,Q^f_{t_{i-2}t_{i-1}}(x_{i-2};dx_{i-1})\cdots Q^f_{t_3t_4}(x_3;dx_4)\,Q^f_{t_2t_3}(y_1+y_{\pi(2)}-y_{\pi(2)-1};dx_3)\,Q^g_{u_{\pi(2)-1}u_{\pi(2)}}(y_{\pi(2)-1};dy_{\pi(2)})\,Q^g_{u_1u_{\pi(2)-1}}(y_1;dy_{\pi(2)-1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0)$$
(we have formally replaced $x_2$ by $y_1+y_{\pi(2)}-y_{\pi(2)-1}$).
We continue in the same manner and use successively equation (28), first for $i=3$, $\epsilon=0$, $\epsilon'=t_3-t_2$, then for $i=4$, $\epsilon=0$, $\epsilon'=t_4-t_3$, and so on until we reach $i-1$, taking $\epsilon=0$, $\epsilon'=t_{i-1}-t_{i-2}$. The left-hand side of (27) becomes
$$\int_{\mathbb{R}^{2(i-1)+3}}I_{\Gamma_0}(y_0)I_{\Gamma_1}(y_1)\prod_{j=2}^{i-1}I_{\Gamma_j}(y_{\pi(j)}-y_{\pi(j)-1})\,I_{\Gamma_i+y_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})}(x'_{i-1})\,I_{\Gamma'_i+x'_{i-1}}(x''_{i-1})\,Q^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}(x'_{i-1};dx''_{i-1})\,Q^f_{t_{i-1},t_{i-1}+\epsilon}\big(y_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1});dx'_{i-1}\big)\,Q^g_{u_{l_{2(i-1)-1}}u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}}u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}u_{l_{p+1}}}(y_{l_{p-1}};dy_{l_{p+1}})\,Q^g_{u_{l_{p-2}}u_{l_{p-1}}}(y_{l_{p-2}};dy_{l_{p-1}})\cdots Q^g_{u_{l_1}u_{l_2}}(y_{l_1};dy_{l_2})\,Q^g_{u_1u_{l_1}}(y_1;dy_{l_1})\,Q^g_{0u_1}(y_0;dy_1)\,\mu(dy_0).$$
We use next equation (28) for $i$, taking $\epsilon=\epsilon'$, and we 'eliminate' the operator $Q^f_{t_{i-1},t_{i-1}+\epsilon}$ by inserting the operator $Q^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}$ at the appropriate place (formally we replace $x'_{i-1}$ by $y_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+(y'_{\pi(i)-1}-y_{\pi(i)-1})$). Finally, by using equation (28) for $i,\epsilon,\epsilon'$, we 'eliminate' the operator $Q^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}$ and we get exactly the desired right-hand side of (27) (formally we replace $x''_{i-1}$ by $y_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+(y''_{\pi(i)-1}-y_{\pi(i)-1})$).
(b) ⇔ (c): The basic ingredient will be equation (17), which gives the integral expression of a semigroup in terms of its generator.
Since $D$ is dense, we can assume that the functions $h_2,\ldots,h_i,h'_i$ in the expression given by (b) are in $D$. Subtract
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)$$
from both sides of this expression.
On the left-hand side we will have
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,(T^f_{t_{i-1}+\epsilon,t_{i-1}+\epsilon'}h'_i-h'_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)$$
which can be written as
$$\int_{t_{i-1}+\epsilon}^{t_{i-1}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,(G^f_wT^f_{w,t_{i-1}+\epsilon'}h'_i)\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](x_1)\,dw$$
using the integral expression (17) and Fubini's theorem.
On the right-hand side we have
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}\Big\{T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](y'_{l_{p-1}})$$
$$-\,T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}T^g_{u_{l_{p+1}}u_{l_{p+2}}}\cdots T^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}\Big[\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big](y'_{l_{p-1}})\Big\}(x_1).$$
If we denote
$$h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y''_{l_{p-1}}) := \int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)\,Q^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}},u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}(y''_{l_{p-1}};dy_{l_{p+1}})$$
and
$$h_{\epsilon}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}}) := \int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\,Q^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}},u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}(y'_{l_{p-1}};dy_{l_{p+1}})$$
then the right-hand side becomes
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}\big[(T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y'_{l_{p-1}})-h_\epsilon(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}})\big](x_1) =$$
$$T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}\Big[h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y'_{l_{p-1}})-h_\epsilon(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}})+\int_{u_{l_{p-1}}+\epsilon}^{u_{l_{p-1}}+\epsilon'}(G^g_vT^g_{v,u_{l_{p-1}}+\epsilon'}h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y'_{l_{p-1}})\,dv\Big](x_1)$$
since $h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot)\in D$ for every $x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}}$. We have two cases:
Case 1) If $p=2(i-1)$, then all the integrals with respect to $Q^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}$, $Q^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}$, $Q^g_{u_{l_{p+1}},u_{l_{p+2}}},\ldots,Q^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}$ disappear in the preceding expressions. Hence $h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y'_{l_{p-1}})=h_\epsilon(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}})$ and the result follows.
Case 2) If $p<2(i-1)$, then
$$h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y'_{l_{p-1}})=(T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y'_{l_{p-1}})$$
$$h_\epsilon(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}})=(T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y'_{l_{p-1}})$$
where
$$H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y_{l_{p+1}}) := \int_{\mathbb{R}^{2(i-1)-p-1}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\,Q^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}},u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}}).$$
To simplify the notation we will omit the arguments of the function $H$. Hence
$$h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y'_{l_{p-1}})-h_\epsilon(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}}) = (T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}H)(y'_{l_{p-1}})-[T^g_{u_{l_{p-1}}+\epsilon,u_{l_{p-1}}+\epsilon'}(T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}H)](y'_{l_{p-1}}) = -\int_{u_{l_{p-1}}+\epsilon}^{u_{l_{p-1}}+\epsilon'}(G^g_vT^g_{v,u_{l_{p-1}}+\epsilon'}(T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}H))(y'_{l_{p-1}})\,dv$$
since $H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot)\in D$ for every $x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}}$.
Using Fubini's theorem, the right-hand side becomes
$$\int_{u_{l_{p-1}}+\epsilon}^{u_{l_{p-1}}+\epsilon'}T^g_{u_1u_{l_1}}T^g_{u_{l_1}u_{l_2}}\cdots T^g_{u_{l_{p-2}}u_{l_{p-1}}}T^g_{u_{l_{p-1}},u_{l_{p-1}}+\epsilon}\big[(G^g_vT^g_{v,u_{l_{p-1}}+\epsilon'}U_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y'_{l_{p-1}})\big](x_1)\,dv$$
where
$$U_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y''_{l_{p-1}}) := h_{\epsilon'}(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},y''_{l_{p-1}})-(T^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}H(x_1,y_{l_1},\ldots,y_{l_{p-1}},y'_{l_{p-1}},\cdot))(y''_{l_{p-1}}) =$$
$$\int_{\mathbb{R}^{2(i-1)-p}}\prod_{j=2}^{i-1}h_j(y_{\pi(j)}-y_{\pi(j)-1})\,h_i(y'_{\pi(i)-1}-y_{\pi(i)-1})\,\Big[h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y''_{\pi(i)-1}-y_{\pi(i)-1}\big)-h'_i\big(x_1+\sum_{j=2}^{i-1}(y_{\pi(j)}-y_{\pi(j)-1})+y'_{\pi(i)-1}-y_{\pi(i)-1}\big)\Big]\,Q^g_{u_{l_{2(i-1)-1}},u_{l_{2(i-1)}}}(y_{l_{2(i-1)-1}};dy_{l_{2(i-1)}})\cdots Q^g_{u_{l_{p+1}},u_{l_{p+2}}}(y_{l_{p+1}};dy_{l_{p+2}})\,Q^g_{u_{l_{p-1}}+\epsilon',u_{l_{p+1}}}(y''_{l_{p-1}};dy_{l_{p+1}}).$$
This concludes the proof. □
We have the following assumption.
Assumption 4.3.3 If ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A_0 = \emptyset', A'_1, \ldots, A'_n\}$ are two consistent orderings of the same finite semilattice $\mathcal{A}'$ with $A_i = A'_{\pi(i)}\ \forall i$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and we denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$, then the generators $\mathcal{G}^f$ and $\mathcal{G}^g$ satisfy condition (c) stated in Lemma 4.3.2.
Here is the main result of this section.
Theorem 4.3.4 Let $\mu$ be a probability measure on $\mathbb{R}$ and $\{\mathcal{G}^f := (G^f_s)_s;\ f \in S\}$ a collection of families of linear operators on $B(\mathbb{R})$ such that each operator $G^f_s$ is defined on a dense subspace $D$ of $B(\mathbb{R})$ and is the generator at time $s$ of a semigroup $T^f := (T^f_{st})_{s<t}$, each $T^f_{st}$ being the operator associated to a transition probability $Q^f_{st}$. If the collection $\{\mathcal{G}^f;\ f \in S\}$ satisfies Assumption 4.2.1 and Assumption 4.3.3, then there exist a transition system $Q := (Q_{BB'})_{B \subseteq B'}$ and a probability measure $P$ on the product space $(\mathbb{R}^{\mathcal{C}},\mathcal{B}(\mathbb{R})^{\mathcal{C}})$ under which:
1. the coordinate-variable process $X := (X_C)_{C\in\mathcal{C}}$ defined on this space is $Q$-Markov with initial distribution $\mu$;
2. $\forall f \in S$, the generator of the process $X^f := (X_{f(t)})_t$ at time $s$ is an extension of the operator $G^f_s$;
3. if ord1 $= \{A_0 = \emptyset', A_1, \ldots, A_n\}$ and ord2 $= \{A'_0 = \emptyset', A'_1, \ldots, A'_n\}$ are two consistent orderings of the same finite semilattice $\mathcal{A}'$, with $A_i = A'_{\pi(i)}\ \forall i$, where $\pi$ is a permutation of $\{1,\ldots,n\}$ with $\pi(1) = 1$, and we denote $f := f_{\mathcal{A}',\mathrm{ord1}}$, $g := f_{\mathcal{A}',\mathrm{ord2}}$ with $f(t_i) = \cup_{j=1}^iA_j$, $g(u_i) = \cup_{j=1}^iA'_j$, then
$$P\circ(X^f_{t_{i-1}+\epsilon'}-X^f_{t_{i-1}+\epsilon})^{-1} = P\circ(X^g_{u_{\pi(i)-1}+\epsilon'}-X^g_{u_{\pi(i)-1}+\epsilon})^{-1}$$
for every $i = 2,\ldots,n$ and for every $0 \le \epsilon < \epsilon' \le t_i-t_{i-1}$.
Proof: The collection $\{Q^f := (Q^f_{st})_{s<t};\ f \in S\}$ of transition systems satisfies Assumption 3.3.6 and Assumption 3.3.7 (by Lemma 4.3.2). The result then follows immediately from Theorem 4.1.3. □
Chapter 5
Processes with Independent Increments
On the real line, processes with independent increments constitute a fundamental
class of stochastic processes; typical examples include Brownian motion, compound
Poisson processes, and stable processes. Historically, their systematic study began
in 1934, with the characterization of the infinitely divisible distributions given by
Lévy’s impressive paper [49]. It turned out that any such process can be written as
the sum of a non-random function, a discrete process, and a stochastically continuous
process. Moreover, any Lévy process (i.e. a stochastically continuous process with
independent increments) has a càdlàg version, which can be constructed as the sum
of a continuous Gaussian process and an independent jump process, obtained as a
limit of a sequence of independent compound Poisson processes.
On the other hand, processes with independent increments are prototypes of important classes of processes, like Markov processes and semimartingales. An immense amount of literature has been dedicated to them, including two excellent (and quite recent) monographs [9], [65]; a good all-time reference remains Chapter IV of [32].
In the present chapter, we will take the first step in the study of set-indexed
processes with independent increments and we will show that these processes can
also be described by means of convolution systems of distributions on the real line.
We want to emphasize that for most of the chapter, the underlying space $T$ need not be metrizable. Most of the results of this chapter appear in [6].
5.1 General Properties
In this section we will give some general properties of processes with independent
increments, with special emphasis on the set-Markov property. We will also introduce the two fundamental examples of such processes: the Brownian motion and the
Poisson process.
We begin by recalling that a set-indexed process $X := (X_A)_{A\in\mathcal{A}}$ is said to have independent increments if $X_\emptyset = X_{\emptyset'} = 0$ a.s., and $X_{C_1},\ldots,X_{C_k}$ are independent whenever $C_1,\ldots,C_k \in \mathcal{C}$ are pairwise disjoint (Definition 2.3.14).
Comment 5.1.1 Since each set in $\mathcal{C}(u)$ can be written as a finite union of pairwise disjoint sets in $\mathcal{C}$, it follows by the associativity of independence that the above property extends to the sets in $\mathcal{C}(u)$; namely, if $X$ is a process with independent increments, then $X_{C_1},\ldots,X_{C_k}$ are independent whenever $C_1,\ldots,C_k \in \mathcal{C}(u)$ are pairwise disjoint.
Let $X := (X_A)_{A\in\mathcal{A}}$ be a set-indexed process with independent increments. If we denote by $F_C$ the distribution of $X_C$ for each $C \in \mathcal{C}(u)$, then the family $(F_C)_{C\in\mathcal{C}(u)}$ of these probability measures satisfies the following conditions:
1. $F_\emptyset = F_{\emptyset'} = \delta_0$; and
2. $F_{C_1}*\ldots*F_{C_n} = F_{C'_1}*\ldots*F_{C'_m}$ whenever $C_1,\ldots,C_n,C'_1,\ldots,C'_m \in \mathcal{C}$ are such that $\dot\cup_{i=1}^nC_i = \dot\cup_{j=1}^mC'_j$. (Here $*$ denotes the convolution operator and $\dot\cup$ denotes a pairwise disjoint union.)
Any such family of distributions will be called a convolution system.
Proposition 5.1.2 Any process with independent increments is $Q$-Markov, with the transition system $Q$ given by
$$Q_{BB'}(x;\Gamma) := F_{B'\setminus B}(\Gamma-x) \quad \forall x\in\mathbb{R},\ \forall\Gamma\in\mathcal{B}(\mathbb{R})$$
for every $B,B'\in\mathcal{A}(u)$, $B\subseteq B'$, where $(F_C)_{C\in\mathcal{C}(u)}$ is the convolution system of the process.
Proof: We will prove first that the family $Q := (Q_{BB'})_{B\subseteq B'}$ is a transition system: let $B,B',B''\in\mathcal{A}(u)$ be such that $B\subseteq B'\subseteq B''$ and $x\in\mathbb{R}$, $\Gamma\in\mathcal{B}(\mathbb{R})$ arbitrary; then
$$\int_{\mathbb{R}}Q_{B'B''}(y;\Gamma)\,Q_{BB'}(x;dy) = \int_{\mathbb{R}}F_{B''\setminus B'}(\Gamma-y)\,F_{B'\setminus B}(dy-x) = \int_{\mathbb{R}}F_{B''\setminus B'}(\Gamma-x-z)\,F_{B'\setminus B}(dz) = (F_{B'\setminus B}*F_{B''\setminus B'})(\Gamma-x) = F_{B''\setminus B}(\Gamma-x) = Q_{BB''}(x;\Gamma)$$
because $(B'\setminus B)\cup(B''\setminus B') = B''\setminus B$.
Let $X := (X_A)_{A\in\mathcal{A}}$ be a process with independent increments. We recall that the process is automatically set-Markov (Proposition 3.1.3). We will prove that $Q_{BB'}$ is a version of the conditional distribution of $X_{B'}$ given $X_B$, for every $B,B'\in\mathcal{A}(u)$, $B\subseteq B'$ (see Definition A.2.1, Appendix A.2). Let $x\in\mathbb{R}$, $\Gamma\in\mathcal{B}(\mathbb{R})$ be arbitrary; then
$$P[X_{B'}\in\Gamma\mid X_B=x] = P[X_{B'\setminus B}\in\Gamma-x\mid X_B=x] = P(X_{B'\setminus B}\in\Gamma-x) = F_{B'\setminus B}(\Gamma-x) = Q_{BB'}(x;\Gamma)$$
by the independence of $X_{B'\setminus B}$ and $X_B$. □
We have the following conjecture.
Conjecture 5.1.3 If $X := (X_A)_{A\in\mathcal{A}}$ is a $Q$-Markov process with the transition system $Q$ given by the relationship $Q_{BB'}(x;\Gamma) := F_{B'\setminus B}(\Gamma-x)$; $B,B'\in\mathcal{A}(u)$, $B\subseteq B'$, where $(F_C)_{C\in\mathcal{C}(u)}$ is a convolution system, then $X$ has independent increments.
Attempt of proof: First note that the distribution of $X_{B'\setminus B}$ is $F_{B'\setminus B}$, $\forall B,B'\in\mathcal{A}(u)$, $B\subseteq B'$:
$$P[X_{B'\setminus B}\in\Gamma\mid X_B=x] = Q_{BB'}(x;\Gamma+x) = F_{B'\setminus B}(\Gamma)\ (\text{does not depend on } x) = P(X_{B'\setminus B}\in\Gamma).$$
We suppose that $X_{C_1},\ldots,X_{C_n}$ are independent for any pairwise disjoint sets $C_1,\ldots,C_n\in\mathcal{C}$ such that $\cup_{i=1}^nC_i = B'\setminus B$ for some $B\subseteq B'$ in $\mathcal{A}(u)$. Note that this is not clear even if we know that $P\circ X^{-1}_{\cup_{i=1}^nC_i} = F_{\cup_{i=1}^nC_i} = *_{i=1}^nF_{C_i} = *_{i=1}^n(P\circ X^{-1}_{C_i})$; this is where the conjecture lies.
Finally, if $C_1,\ldots,C_n$ are arbitrary pairwise disjoint sets in $\mathcal{C}$, $C_i = A_i\setminus\cup_{j=1}^{n_i}A_{ij}$ are some extremal representations, $\mathcal{A}'$ is the minimal finite sub-semilattice of $\mathcal{A}$ determined by the sets $A_i,A_{ij}$, $\{B_0=\emptyset',B_1,\ldots,B_m\}$ is a consistent ordering of $\mathcal{A}'$, and we denote by $D_j$ the left neighbourhood of $B_j$ in $\mathcal{A}'$, then $C_i = \cup_{j\in J_i}D_j$ and $X_{C_i} = \sum_{j\in J_i}X_{D_j}$ a.s. If the previous conjectured fact is true, then the variables $X_{D_1},\ldots,X_{D_m}$ will be independent, since we can write $\cup_{j=1}^mD_j = \cup_{j=1}^mB_j = B'\setminus B$ with $B=\emptyset'$ and $B'=\cup_{j=1}^mB_j$. By the associativity of independence, the variables $X_{C_1},\ldots,X_{C_n}$ will be independent. □
Comment 5.1.4 The semigroup associated to a process with independent increments with convolution system $(F_C)_{C\in\mathcal{C}(u)}$ is given by
$$(T_{BB'}h)(x) = \int_{\mathbb{R}}h(x+y)\,F_{B'\setminus B}(dy), \quad h\in B(\mathbb{R}),\ x\in\mathbb{R}$$
for every $B,B'\in\mathcal{A}(u)$, $B\subseteq B'$.
Let $\Lambda$ be a finite positive measure on $\sigma(\mathcal{A})$. Here are the two basic examples of set-indexed processes with independent increments.
Examples 5.1.5
1. A process $X := (X_A)_{A\in\mathcal{A}}$ with independent increments for which the distribution $F_C$ of each increment $X_C$, $C\in\mathcal{C}$ is normal with mean 0 and variance $\Lambda_C$ is called a Brownian motion (with variance measure $\Lambda$). Each probability measure $Q_{BB'}(x;\cdot)$ is also normal, with mean $x$ and variance $\Lambda_{B'\setminus B}$. The associated semigroup $T$ is given by:
$$(T_{BB'}h)(x) = \frac{1}{\sqrt{2\pi\Lambda_{B'\setminus B}}}\int_{\mathbb{R}}h(y)\exp\Big\{-\frac{(y-x)^2}{2\Lambda_{B'\setminus B}}\Big\}\,dy, \quad x\in\mathbb{R},\ h\in B(\mathbb{R}).$$
2. A process $X := (X_A)_{A\in\mathcal{A}}$ with independent increments for which the distribution $F_C$ of each increment $X_C$, $C\in\mathcal{C}$ is Poisson with mean $\Lambda_C$ is called a Poisson process with variance measure $\Lambda$. Each probability measure $Q_{BB'}(k;\cdot)$ is Poisson:
$$Q_{BB'}(k;\cdot) = e^{-\Lambda_{B'\setminus B}}\sum_{n\ge0}\frac{\Lambda^n_{B'\setminus B}}{n!}\,\delta_{k+n}, \quad k=0,1,2,\ldots$$
The semigroup $T$ is given by
$$(T_{BB'}h)(k) = e^{-\Lambda_{B'\setminus B}}\sum_{n\ge0}h(k+n)\frac{\Lambda^n_{B'\setminus B}}{n!}, \quad k=0,1,2,\ldots$$
for every bounded function $h : \{0,1,2,\ldots\}\to\mathbb{R}$.
Note: Alternatively, a Brownian motion with variance measure $\Lambda$ is a centered Gaussian process $X := (X_A)_{A\in\mathcal{A}}$ (with an additive extension to $\mathcal{C}(u)$), whose covariance function is given by
$$\mathrm{cov}(X_{A_1},X_{A_2}) = \Lambda_{A_1\cap A_2}, \quad \forall A_1,A_2\in\mathcal{A}.$$
The covariance function can be extended to $\mathcal{C}(u)$ by a similar relation.
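As an illustration (a simulation sketch, under the particular assumptions $T=[0,1]^2$, $\mathcal{A}$ the family of rectangles $[0,t_1]\times[0,t_2]$ and $\Lambda$ the Lebesgue measure, in which case the process is the Brownian sheet), a set-indexed Brownian motion can be simulated from its independent increments on a grid:

```python
import numpy as np

# Simulation sketch of a set-indexed Brownian motion in the simplest concrete
# setting: T = [0,1]^2, A = {[0,t1] x [0,t2]}, Lambda = Lebesgue measure
# (the Brownian sheet).  Illustrative code only.

rng = np.random.default_rng(1)
n = 100                                       # grid resolution
# independent N(0, cell area) increments attached to the grid cells
dW = rng.normal(0.0, 1.0 / n, size=(n, n))    # sd = sqrt(cell area) = 1/n
# X over the rectangle [0, i/n] x [0, j/n] is the cumulative sum of increments
X = dW.cumsum(axis=0).cumsum(axis=1)

# consistent with cov(X_A1, X_A2) = Lambda_{A1 ∩ A2}: Var of X over [0,1]^2
# is ~1 (many replications would confirm this numerically)
print(X[-1, -1], X[n // 2 - 1, n // 2 - 1])   # values at A = [0,1]^2 and [0,.5]^2
```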
In what follows we will show that our main tool from the previous chapters, the flow, is also extremely useful in this set-up.
Proposition 5.1.6 A set-indexed process $X := (X_A)_{A\in\mathcal{A}(u)}$ has independent increments if and only if for every simple flow $f : [0,a]\to\mathcal{A}(u)$ the process $X^f := (X_{f(t)})_{t\in[0,a]}$ has independent increments. (For the necessity part we can consider any flow, not only the simple ones.)
Proof: For necessity, let $f : [0,a]\to\mathcal{A}(u)$ be an arbitrary flow (not necessarily simple) and $s,t\in[0,a]$, $s<t$. Then $X^f_t-X^f_s = X_{f(t)\setminus f(s)}$ is independent of $\mathcal{F}_{f(s)} := \sigma(\{X_A;\ A\in\mathcal{A},\ A\subseteq f(s)\})$ and therefore it is independent of $\sigma(\{X_{f(u)};\ u\le s\})$ (which is contained in $\mathcal{F}_{f(s)}$). This is enough to conclude that the process $X^f$ has independent increments.
For sufficiency, let $C_1,\ldots,C_k\in\mathcal{C}$ be pairwise disjoint and $C_i = A_i\setminus\cup_{j=1}^{n_i}A_{ij}$, $i=1,\ldots,k$, some extremal representations. Let $\mathcal{A}'$ be the minimal finite sub-semilattice of $\mathcal{A}$ which contains the sets $A_i,A_{ij}$, $\{B_0=\emptyset',B_1,\ldots,B_n\}$ a consistent ordering of $\mathcal{A}'$, and $D_j$ the left neighbourhood of $B_j$ for each $j=1,\ldots,n$.
Say $C_i = \cup_{j\in J_i}D_j$ for a subset $J_i$ of $\{1,\ldots,n\}$. By Lemma 2.4.8, there exists a simple flow $f : [0,a]\to\mathcal{A}(u)$ such that $f(t_i) = \cup_{j=0}^iB_j$, where $t_0=0<t_1<\ldots<t_n\le a$ is the partition corresponding to $f$. Hence, for every $i=1,\ldots,k$
$$X_{C_i} = \sum_{j\in J_i}X_{D_j} = \sum_{j\in J_i}(X^f_{t_j}-X^f_{t_{j-1}}) \quad \text{a.s.}$$
These variables are independent by the associativity of independence. □
Let us consider our two basic examples, the Brownian motion and the Poisson process, in the light of the previous proposition. Let $\Lambda$ be a finite positive measure on $\sigma(\mathcal{A})$.
1. A set-indexed process $X := (X_A)_{A\in\mathcal{A}(u)}$ is a Brownian motion with variance measure $\Lambda$ if and only if for every simple flow $f : [0,a]\to\mathcal{A}(u)$ the process $X^f := (X_{f(t)})_{t\in[0,a]}$ is a Brownian motion with variance function $\Lambda^f := \Lambda\circ f$. If the function $\Lambda^f$ is differentiable, then the generator of the process $X^f$ at time $s$ is given by
$$(G^f_sh)(x) = \frac{1}{2}(\Lambda^f_s)'\,h''(x), \quad x\in\mathbb{R},$$
whose domain $D(G^f_s)$ contains the space $C^2_b(\mathbb{R})$ of all twice continuously differentiable functions $h : \mathbb{R}\to\mathbb{R}$ which have the property that $h,h',h''$ are bounded (see Example B.1.4.1, Appendix B.1).
2. A set-indexed process $X := (X_A)_{A\in\mathcal{A}(u)}$ is Poisson with variance measure $\Lambda$ if and only if for every simple flow $f : [0,a]\to\mathcal{A}(u)$ the process $X^f := (X_{f(t)})_{t\in[0,a]}$ is Poisson with variance function $\Lambda^f := \Lambda\circ f$. If the function $\Lambda^f$ is differentiable, then the generator of the process $X^f$ at time $s$ is given by
$$(G^f_sh)(k) = (\Lambda^f_s)'\cdot(h(k+1)-h(k)), \quad k=0,1,2,\ldots,$$
whose domain $D(G^f_s)$ is the space $B(\{0,1,2,\ldots\})$ of all bounded functions $h : \{0,1,2,\ldots\}\to\mathbb{R}$ (see Example B.1.4.2, Appendix B.1).
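The Poisson generator formula above can be checked numerically against its semigroup, since $(G^f_sh)(k) = \lim_{\epsilon\searrow0}\epsilon^{-1}\big((T^f_{s,s+\epsilon}h)(k)-h(k)\big)$. The following sketch does this under the illustrative assumption $\Lambda^f_t = t^2$ (chosen only for the example), so that $(\Lambda^f_s)' = 2s$:

```python
import numpy as np
from math import exp, factorial

# Numerical sketch checking the generator of a Poisson process along a flow:
# with variance function L(t) (illustrative choice L(t) = t^2),
# (G_s h)(k) = L'(s)(h(k+1) - h(k)) should match ((T_{s,s+eps} h)(k) - h(k))/eps.

L = lambda t: t ** 2                 # variance function Lambda^f (assumption)
h = lambda k: 1.0 / (1 + k)          # a bounded test function on {0,1,2,...}

def T(s, t, h, k, nmax=60):
    """Poisson semigroup: E[h(k + Poisson(L(t) - L(s)))]."""
    lam = L(t) - L(s)
    return sum(exp(-lam) * lam ** n / factorial(n) * h(k + n) for n in range(nmax))

s, k, eps = 1.0, 3, 1e-5
semigroup_rate = (T(s, s + eps, h, k) - h(k)) / eps
generator = 2 * s * (h(k + 1) - h(k))       # L'(s) = 2s
print(semigroup_rate, generator)            # should agree to ~1e-4
```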
For the remainder of this section we will concentrate on the variance function $\Lambda$ of an arbitrary set-indexed process with independent increments. We need the following definition.
Definition 5.1.7 A process $X := (X_A)_{A\in\mathcal{A}}$ is square-integrable if $E(X_A^2) < \infty\ \forall A\in\mathcal{A}$.
We prove that the variance function $\Lambda$ has a unique finite additive extension to $\mathcal{C}(u)$ (although, in general, we may not be able to extend it to a measure on $\sigma(\mathcal{A})$).
Proposition 5.1.8 Let $X := (X_A)_{A\in\mathcal{A}}$ be a square-integrable process with independent increments and let $\Lambda_A := \mathrm{Var}(X_A)$, $A\in\mathcal{A}$. Then $\Lambda$ has a unique additive extension to $\mathcal{C}(u)$ and $\Lambda_C = \mathrm{Var}(X_C)$, $\forall C\in\mathcal{C}(u)$.
Proof: We will show first that $\Lambda$ has a unique additive extension to $\mathcal{A}(u)$ by proving inductively that for every $A_1,\ldots,A_n\in\mathcal{A}$ we have
$$\mathrm{Var}(X_{\cup_{i=1}^nA_i}) = \sum_{i=1}^n\mathrm{Var}(X_{A_i})-\sum_{i<j}\mathrm{Var}(X_{A_i\cap A_j})+\ldots+(-1)^{n+1}\mathrm{Var}(X_{\cap_{i=1}^nA_i}).$$
The case $n=1$ is trivial. Assume that the statement is true for $n-1$ and let $A_1,\ldots,A_n$ be some arbitrary sets in $\mathcal{A}$. We have $\mathrm{Var}(X_{\cup_{i=1}^nA_i}) = \mathrm{Var}(X_{A_1\setminus\cup_{i=2}^nA_i})+\mathrm{Var}(X_{\cup_{i=2}^nA_i}) = \mathrm{Var}(X_{A_1})-\mathrm{Var}(X_{\cup_{i=2}^n(A_1\cap A_i)})+\mathrm{Var}(X_{\cup_{i=2}^nA_i})$, by independence. Now we use the induction hypothesis for $\mathrm{Var}(X_{\cup_{i=2}^n(A_1\cap A_i)})$ and $\mathrm{Var}(X_{\cup_{i=2}^nA_i})$. We set
$$\Lambda_{\cup_{i=1}^nA_i} \stackrel{\mathrm{def}}{=} \sum_{i=1}^n\Lambda_{A_i}-\sum_{i<j}\Lambda_{A_i\cap A_j}+\ldots+(-1)^{n+1}\Lambda_{\cap_{i=1}^nA_i}.$$
Trivially, this definition does not depend on the representation of $B = \cup_{i=1}^nA_i$.
To show that $\Lambda$ has a unique additive extension to $\mathcal{C}$ we will use independence again: let $A,B\in\mathcal{A}(u)$; then $\mathrm{Var}(X_{A\setminus B}) = \mathrm{Var}(X_A)-\mathrm{Var}(X_{A\cap B}) \stackrel{\mathrm{def}}{=} \Lambda_{A\setminus B}$. The existence of the unique additive extension of $\Lambda$ to $\mathcal{C}(u)$ is proved exactly as for $\mathcal{A}(u)$. □
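The additive extension used in this proof is an inclusion-exclusion formula, which can be sanity-checked on any genuine measure; the following sketch (with illustrative data only) verifies it for the counting measure on finite sets:

```python
from itertools import combinations

# The additive extension above is an inclusion-exclusion formula; a small
# check of it for the counting measure on finite sets (illustration only).

A = [{1, 2, 3}, {2, 3, 4}, {3, 5}]            # three "sets" A_1, A_2, A_3
Lam = len                                      # Lambda = counting measure

union = set().union(*A)
incl_excl = sum(
    (-1) ** (r + 1) * sum(Lam(set.intersection(*c)) for c in combinations(A, r))
    for r in range(1, len(A) + 1)
)
print(Lam(union), incl_excl)                   # both equal 5
```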
Comment 5.1.9 Note that if the variance function $\Lambda$ is monotone outer-continuous, then it can be extended to a measure on $\sigma(\mathcal{A})$ by Proposition 2.4.3.
We will conclude this section with a few remarks about the ‘homogeneity’ of the
projections of a process with independent increments over a suitable class of flows.
If X := (XA )A∈A is a square integrable process with independent increments whose
variance function Λ is monotone outer-continuous and monotone inner-continuous
and f : [0, a] → A(u) is a simple flow, then by Proposition 2.4.7, the function Λ ◦ f :
[0, a] → [0, b] is continuous (where b = Λ(f (a))). Therefore, for each s ∈ [0, b], there
exists some ts ∈ [0, a] such that (Λ ◦ f )(ts ) = s. The function
g(s) := f (ts ), s ∈ [0, b]
is clearly a flow. What is important is that the variance function Λ◦g of the projected
process X g is equal to the identity map, i.e. Var(Xsg ) = s, ∀s ∈ [0, b]. If the
distribution of each increment XC , C ∈ C is completely determined by its variance
ΛC (as it is the case of the Brownian motion and Poisson process), then we can
conclude that the projected process X g is ‘homogenous’, i.e. the distribution of each
increment Xtg − Xsg , s < t depends on s, t only through t − s.
If, in addition, the function Λ∘f is strictly increasing, then it is one-to-one and
$$g = f \circ (\Lambda \circ f)^{-1}.$$
In this case the flow g is simple too: say f(t) = ∪_{j=0}^{i} A_j ∪ f_i(t), t ∈ [t_i, t_{i+1}], i = 0, ..., n, for some t_0 = 0 < t_1 < ... < t_n = a and some A-valued flows f_i : [t_i, t_{i+1}] → A; denote s_i := Λ(∪_{j=0}^{i} A_j) and g_i := f_i ∘ (Λ∘f)^{-1} : [s_i, s_{i+1}] → A; for each s ∈ [s_i, s_{i+1}] we have g(s) = f((Λ∘f)^{-1}(s)) = ∪_{j=0}^{i} A_j ∪ f_i((Λ∘f)^{-1}(s)) = ∪_{j=0}^{i} A_j ∪ g_i(s).
Note also that the flow g follows exactly the same path as f .
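The time change above is easy to carry out numerically. The sketch below (ours, not part of the thesis) discretizes an assumed strictly increasing variance function Λ∘f on [0, a], inverts it, and checks that the reparametrized flow g has Var(X^g_s) = (Λ∘f)(t_s) = s; the quadratic choice of Λ∘f is an assumption made only for the example.

```python
import numpy as np

# Numerical sketch (ours) of the time change g(s) = f(t_s).
a = 2.0
t = np.linspace(0.0, a, 1001)
Lambda_f = t ** 2 / 2.0           # assumed Lambda o f, strictly increasing
b = Lambda_f[-1]                  # b = Lambda(f(a))

def t_of_s(s):
    """Numerical inverse of Lambda o f on [0, b]."""
    return np.interp(s, Lambda_f, t)

s = np.linspace(0.0, b, 5)
# Var(X^g_s) = (Lambda o f)(t_s) = s for every s in [0, b]:
print(np.allclose(np.interp(t_of_s(s), t, Lambda_f), s))
```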
5.2 The Compound Poisson Process
In this section we will introduce an important class of set-indexed processes with
independent increments, the compound Poisson processes (see Appendix B.1 for the
definition of the classical compound Poisson process).
We begin with the definition of a set-indexed compound Poisson process.
Definition 5.2.1 Let Π(dz; dx) be a finite positive measure on (T × R, σ(A) × B(R))
and let ΠA (Γ) := Π(A × Γ), A ∈ σ(A), Γ ∈ B(R). A process X := (XA )A∈A with
independent increments, for which the distribution FC of each increment XC , C ∈ C
has the log-characteristic function
$$\psi_C(u) = \int_{\mathbb{R}} (e^{iuy} - 1)\,\Pi_C(dy), \quad u \in \mathbb{R}$$
is called a compound Poisson process (corresponding to the measure Π).
Note: If the measure Π can be written as a product of measures Π(dz; dx) =
Λ(dz) × G(dx) for some probability measure G on R, then the process X is said to
have stationary increments (with respect to Λ), i.e. the distribution of each increment
XC , C ∈ C depends on C only through ΛC . In particular, if G = δ1 , then X is a
Poisson process with variance measure Λ.
Comments 5.2.2
1. If X := (X_A)_{A∈A} is a compound Poisson process (corresponding to a measure Π), then each increment X_C, C ∈ C has a compound Poisson distribution (corresponding to the measure Π_C).
2. A set-indexed process X := (XA )A∈A is compound Poisson (corresponding to
a measure Π) if and only if for every simple flow f : [0, a] → A(u) the process
X f := (Xf (t) )t∈[0,a] is compound Poisson (corresponding to the measure Πf
defined by Πf ([0, t]; Γ) := Π(f (t) × Γ)).
In this case, if the collection S can be chosen such that for every flow f ∈ S
the function Πft (Γ) is differentiable with respect to t (for every fixed Borel set
Γ ⊆ R), then the generator of the process X f at time s is given by
$$(G^f_s h)(x) = \int_{\mathbb{R}} (h(x+y) - h(x))\,(\Pi^f_s)'(dy), \quad x \in \mathbb{R}$$
whose domain D(G^f_s) is equal to the space B(R) (see Example B.1.4.2, Appendix
B.1).
We will conclude this section with the simultaneous construction of a compound
Poisson process, corresponding to a measure Π(dz; dx), and a Poisson process, whose
variance measure Λ is the first marginal of Π.
Theorem 5.2.3 Let Π(dz; dx) be a finite positive measure on (T × R, σ(A) × B(R))
and let ΛA := Π(A × R), A ∈ σ(A). If N is a Poisson random variable with mean
ΛT and (Zj , Xj )j≥1 is an independent sequence of independent identically distributed
random variables on T × R with distribution (1/Π(T × R)) · Π(dz; dx), then the processes
$$X_A := \sum_{j=1}^{N} X_j\, I_{\{Z_j \in A\}}, \qquad W_A := \sum_{j=1}^{N} I_{\{Z_j \in A\}}, \qquad A \in \mathcal{A}$$
are a compound Poisson process (corresponding to the measure Π), respectively a
Poisson process with variance measure Λ.
Proof: Clearly both X and W have unique countably additive extensions to σ(A).
In other words, for each fixed ω the set-functions A ↦ X_A(ω) and A ↦ W_A(ω) are a finite signed measure, respectively a finite positive measure on σ(A).
The simplest way to show that X is a compound Poisson process is by proving
that the joint characteristic function corresponding to a finite number of increments
XC1 , . . . , XCm over disjoint sets C1 , . . . Cm ∈ C is the product of the m characteristic
functions corresponding to C1 , . . . , Cm , i.e.
$$E\Big[\exp\Big\{i\sum_{k=1}^{m} u_k X_{C_k}\Big\}\Big] = \prod_{k=1}^{m} \exp\Big\{\int_{\mathbb{R}} (e^{iu_k x} - 1)\,\Pi_{C_k}(dx)\Big\}.$$
The argument is classical and its extension to the set-indexed case poses no particular problems (see, for instance, the argument on pages 14-15 of [1] for the planar case). The argument for the Poisson process is identical.
2
Comment 5.2.4 In the construction described above, both processes X and W have
purely atomic sample paths, if Λ({z}) = 0, ∀z ∈ T . As noted in Section 2.4 not all
purely atomic set-functions are cadlag, and a counterexample can be given even in
the case of processes indexed by the unit square. Fortunately, we know that if the
atoms of a purely atomic function fall in the ‘well-behaved’ part
T ∗ := {z ∈ T ; Az is proper}
of the space T (introduced in Section 2.4), then the function will be cadlag. In the case of our previous construction it suffices to require that the measure Π(dz; dx) be concentrated on T* × R in order to obtain the cadlaguity of the processes X and W.
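The construction of Theorem 5.2.3 is directly simulatable. The following sketch (ours, not part of the thesis) takes T = [0, 1] and Π(dz; dx) = Λ(dz) × G(dx), with Λ a multiple of Lebesgue measure and G standard normal; all of these choices are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of Theorem 5.2.3: X_A = sum_{j<=N} X_j 1{Z_j in A} is compound
# Poisson (measure Pi) and W_A = sum_{j<=N} 1{Z_j in A} is Poisson with
# variance measure Lambda = mu * Lebesgue on [0, 1] (an assumed choice).
mu = 10.0                                    # Lambda_T = Pi(T x R)
N = rng.poisson(mu)                          # Poisson number of points
Z = rng.uniform(0.0, 1.0, size=N)            # locations, law Lambda / mu
X = rng.standard_normal(size=N)              # jump sizes, law G = N(0, 1)

def X_A(lo, hi):                             # A = [lo, hi] stands in for A in A
    return X[(lo <= Z) & (Z <= hi)].sum()

def W_A(lo, hi):
    return int(((lo <= Z) & (Z <= hi)).sum())

print(X_A(0.0, 0.5), W_A(0.0, 0.5))          # increments over A = [0, 1/2]
```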
5.3 Construction of a Process with Independent Increments
In this section we will give two ways of constructing a process with independent increments: 1. using Kolmogorov's extension theorem; 2. using the property that any set-indexed process with independent increments can be associated with a 'large enough' and 'suitably indexed' collection of classical processes with independent increments (i.e. processes indexed by the real line).
The finite dimensional distributions of a process with independent increments over
the sets in C have to be defined so that they ensure the (almost sure) additivity of
the process. Our first result gives the definition of the finite dimensional distribution
(of a process with independent increments) over an arbitrary k-tuple of sets in C and
shows that, if the family (FC )C∈C is a convolution system, then the definition will
not depend on the extremal representations of these sets. This result is a version of
Lemma 3.3.4, in the case of processes with independent increments.
Lemma 5.3.1 Let (F_C)_{C∈C} be a convolution system. Let (C_1, ..., C_k) be a k-tuple of distinct sets in C and suppose that each set C_i admits two extremal representations C_i = A_i \ ∪_{j=1}^{n_i} A_{ij} = A'_i \ ∪_{j=1}^{m_i} A'_{ij}. Let A', A'' be the minimal finite sub-semilattices of A which contain the sets A_i, A_{ij}, respectively A'_i, A'_{ij}, let {B_0 = ∅', B_1, ..., B_n}, {B'_0 = ∅', B'_1, ..., B'_m} be two consistent orderings of A', A'', and let D_j, D'_l be the left neighbourhoods of the sets B_j, B'_l for j = 1, ..., n; l = 1, ..., m. If each set C_i, i = 1, ..., k, can be written as C_i = ∪_{j∈J_i} D_j = ∪_{l∈L_i} D'_l for some J_i ⊆ {1, ..., n}, L_i ⊆ {1, ..., m}, then
$$\int_{\mathbb{R}^n} \prod_{i=1}^{k} I_{\Gamma_i}\Big(\sum_{j\in J_i} x_j\Big)\, F_{D_n}(dx_n)\ldots F_{D_1}(dx_1) = \int_{\mathbb{R}^m} \prod_{i=1}^{k} I_{\Gamma_i}\Big(\sum_{l\in L_i} y_l\Big)\, F_{D'_m}(dy_m)\ldots F_{D'_1}(dy_1) \tag{31}$$
for every Γ1 , . . . , Γk ∈ B(R).
Proof: Let Ã be the minimal finite sub-semilattice of A determined by the sets in A' and A''; clearly A' ⊆ Ã, A'' ⊆ Ã. Let ord1 = {E_0 = ∅', E_1, ..., E_N} and ord2 = {E'_0 = ∅', E'_1, ..., E'_N} be two consistent orderings of Ã such that, if B_j = E_{i_j}, j = 1, ..., n and B'_l = E'_{k_l}, l = 1, ..., m for some indices i_1 < i_2 < ... < i_n, respectively k_1 < k_2 < ... < k_m, then
$$B_1 = \cup_{p=1}^{i_1} E_p, \quad B_1 \cup B_2 = \cup_{p=1}^{i_2} E_p, \quad \ldots, \quad \cup_{j=1}^{n} B_j = \cup_{p=1}^{i_n} E_p$$
$$B'_1 = \cup_{q=1}^{k_1} E'_q, \quad B'_1 \cup B'_2 = \cup_{q=1}^{k_2} E'_q, \quad \ldots, \quad \cup_{l=1}^{m} B'_l = \cup_{q=1}^{k_m} E'_q.$$
Let π be the permutation of {1, ..., N} such that E_p = E'_{π(p)}, p = 1, ..., N. Denote by H_p, H'_q the left neighbourhoods of E_p, E'_q with respect to the orderings ord1, ord2; clearly H_p = H'_{π(p)}, ∀p = 1, ..., N. Note that D_j = (∪_{v=1}^{j} B_v) \ (∪_{v=1}^{j-1} B_v) = (∪_{p=1}^{i_j} E_p) \ (∪_{p=1}^{i_{j-1}} E_p) = ∪_{p=i_{j-1}+1}^{i_j} H_p, j = 1, ..., n, and similarly D'_l = ∪_{q=k_{l-1}+1}^{k_l} H'_q, l = 1, ..., m. Hence
$$C_i = \cup_{j\in J_i} D_j = \cup_{j\in J_i} \cup_{p=i_{j-1}+1}^{i_j} H_p = \cup_{j\in J_i} \cup_{p=i_{j-1}+1}^{i_j} H'_{\pi(p)} = \cup_{l\in L_i} D'_l = \cup_{l\in L_i} \cup_{q=k_{l-1}+1}^{k_l} H'_q$$
and we can conclude that
$$\{\pi(p);\ p \in \cup_{j\in J_i}\{i_{j-1}+1, i_{j-1}+2, \ldots, i_j\}\} = \cup_{l\in L_i}\{k_{l-1}+1, k_{l-1}+2, \ldots, k_l\}.$$
This relationship, combined with the fact that
$$\int_{\mathbb{R}^N} h(x_1, x_2, \ldots, x_N)\, F_{H_N}(dx_N)\ldots F_{H_2}(dx_2) F_{H_1}(dx_1) = \int_{\mathbb{R}^N} h(y_{\pi(1)}, y_{\pi(2)}, \ldots, y_{\pi(N)})\, F_{H'_N}(dy_N)\ldots F_{H'_2}(dy_2) F_{H'_1}(dy_1)$$
for any bounded measurable function h, implies that
$$\int_{\mathbb{R}^N} \prod_{i=1}^{k} I_{\Gamma_i}\Big(\sum_{j\in J_i}\sum_{p=i_{j-1}+1}^{i_j} x_p\Big)\, F_{H_N}(dx_N)\ldots F_{H_1}(dx_1) = \int_{\mathbb{R}^N} \prod_{i=1}^{k} I_{\Gamma_i}\Big(\sum_{j\in J_i}\sum_{p=i_{j-1}+1}^{i_j} y_{\pi(p)}\Big)\, F_{H'_N}(dy_N)\ldots F_{H'_1}(dy_1)$$
$$= \int_{\mathbb{R}^N} \prod_{i=1}^{k} I_{\Gamma_i}\Big(\sum_{l\in L_i}\sum_{q=k_{l-1}+1}^{k_l} y_q\Big)\, F_{H'_N}(dy_N)\ldots F_{H'_1}(dy_1).$$
This gives us the desired relationship, because
$$F_{D_j} = \ast_{p=i_{j-1}+1}^{i_j} F_{H_p}, \quad j = 1, \ldots, n; \qquad F_{D'_l} = \ast_{q=k_{l-1}+1}^{k_l} F_{H'_q}, \quad l = 1, \ldots, m$$
and because, for all probability measures F, G on R and every h ∈ B(R),
$$\int_{\mathbb{R}^2} h(x+y)\,F(dx)\,G(dy) = \int_{\mathbb{R}} h(z)\,(F \ast G)(dz).$$
2
Here is the main result of this section, which says that the only ingredient that
is needed for the construction of a set-indexed process with independent increments
is a convolution system. This result is a simplified version of Theorem 3.3.5, in the
case of processes with independent increments.
Theorem 5.3.2 If (FC )C∈C is a convolution system, then there exists a probability
measure P on the product space (RC , B(R)C ) under which the coordinate-variable
process X := (XC )C∈C defined on this space is a process with independent increments
and the distribution of each increment XC , C ∈ C is FC .
Proof: As expected, the proof will be an application of Kolmogorov’s extension
theorem. The proof in the classical case can be found on pages 35-36 of [65]; in the
d-dimensional case, a straightforward proof is mentioned on page 12 of [1].
For each k-tuple (C_1, ..., C_k) of distinct sets in C we will define a probability measure µ_{C_1...C_k} on (R^k, B(R^k)) in the following manner: let C_i = A_i \ ∪_{j=1}^{n_i} A_{ij}, i = 1, ..., k, be extremal representations of these sets, A' the minimal finite sub-semilattice of A which contains the sets A_i, A_{ij}, {B_0 = ∅', B_1, ..., B_n} a consistent ordering of A', and D_j the left neighbourhood of B_j for j = 1, ..., n; if C_i = ∪̇_{j∈J_i} D_j for some J_i ⊆ {1, ..., n}, i = 1, ..., k, then define
$$\mu_{C_1\ldots C_k} := (F_{D_1} \times \ldots \times F_{D_n}) \circ \alpha^{-1}, \quad \text{where } \alpha(x_1, \ldots, x_n) := \Big(\sum_{j\in J_1} x_j, \ldots, \sum_{j\in J_k} x_j\Big).$$
It is clear that µ_{C_1...C_k} does not depend on the ordering of the semilattice A'; on the other hand, Lemma 5.3.1 shows that µ_{C_1...C_k} does not depend on the extremal representations of the sets C_i.
The fact that the family of all probability measures µ_{C_1...C_k} satisfies the two consistency conditions of Kolmogorov's extension theorem follows exactly as in the proof of Theorem 3.3.5. Therefore, there exists a probability measure P on (R^C, B(R)^C) such that the coordinate-variable process X := (X_C)_{C∈C} defined by X_C(x) := x_C has
the measures µC1 ...Ck as its finite dimensional distributions. Moreover, this process
has an (almost surely) unique additive extension to C(u), since its finite dimensional
distributions were chosen in an additive way.
Finally, it is not difficult to see that the process X has independent increments:
if C_1, ..., C_k are arbitrary pairwise disjoint sets in C, then each C_i can be written as C_i = ∪̇_{j∈J_i} D_j, where {D_0 = ∅', D_1, ..., D_n} is the collection of all left neighbourhoods associated to a certain finite sub-semilattice A'; hence the variables X_{C_i} = Σ_{j∈J_i} X_{D_j}, i = 1, ..., k, are independent, by the associativity of independence (the variables X_{D_j}, j = 1, ..., n, are independent since they are defined as projections under the product measure F_{D_1} × ... × F_{D_n}).
2
Note: The requirement that the family (FC )C∈C be a convolution system is more
than sufficient for the proof of the previous theorem! All we need for the construction
of a process with independent increments is a family (FC )C∈C which satisfies property
2 (from the definition of a convolution system) in a very particular instance only. More
precisely, it is enough to assume that for any finite sub-semilattices A' ⊆ Ã and for any consistent orderings ord = {B_0 = ∅', B_1, ..., B_n}, ord1 = {E_0 = ∅', E_1, ..., E_N} of A', respectively Ã, which are chosen such that, if B_j = E_{i_j}, j = 1, ..., n, then ∪_{v=1}^{j} B_v = ∪_{p=1}^{i_j} E_p, j = 1, ..., n, we have
$$F_{D_j} = \ast_{p=i_{j-1}+1}^{i_j} F_{H_p}, \quad \forall j = 1, \ldots, n$$
where D_j, H_p are the left neighbourhoods of B_j, E_p in A', respectively Ã.
We will conclude this section by giving the second construction of a process with independent increments, which is suggested by the fact that the projection of such a process on any simple flow is also a process with independent increments (indexed by the real line).
Before giving this second construction, we should mention that a family (F_{st})_{s<t} of probability measures on R is called a convolution system if F_{tt} = δ_0 for every t and F_{st} ∗ F_{tu} = F_{su} for every s < t < u.
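As a quick sanity check of this definition, the following sketch (ours, with the Gaussian family chosen purely for illustration) verifies numerically that F_{st} := N(0, t − s) satisfies the convolution property F_{st} ∗ F_{tu} = F_{su}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical check (ours) that F_st := N(0, t - s) forms a convolution
# system on the line: the sum of independent N(0, t - s) and N(0, u - t)
# draws is N(0, u - s), i.e. a draw from F_su.
s, t, u = 0.0, 0.7, 1.5
inc1 = rng.normal(0.0, np.sqrt(t - s), size=100_000)   # sample from F_st
inc2 = rng.normal(0.0, np.sqrt(u - t), size=100_000)   # sample from F_tu
conv = inc1 + inc2                                     # sample from F_st * F_tu
print(np.isclose(conv.var(), u - s, rtol=0.05))        # variance of F_su
```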
Let S be a collection of simple flows as in Definition 3.2.10.
The next assumption will provide a necessary and sufficient condition which will
allow us to reconstruct a convolution system (F_C)_{C∈C} from a collection {(F^f_{st})_{s<t}; f ∈ S} of convolution systems. It requires that whenever we have two flows f, g ∈ S such that f(t)\f(s) = g(v)\g(u) for some s < t, u < v, we must have F^f_{st} = F^g_{uv}. This
assumption is a version of Assumption 3.3.6 in the case of processes with independent
increments.
Assumption 5.3.3 If ord1 = {A_0 = ∅', A_1, ..., A_n} and ord2 = {A'_0 = ∅', A'_1, ..., A'_m} are two consistent orderings of some finite semilattices A', A'' such that (∪_{j=1}^{n} A_j) \ (∪_{j=1}^{k} A_j) = (∪_{j=1}^{m} A'_j) \ (∪_{j=1}^{l} A'_j) for some k < n, l < m, and we denote f := f_{A',ord1}, g := f_{A'',ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A'_j, then
$$F^f_{t_k t_n} = F^g_{u_l u_m}.$$
The following result is immediate. This result is a simplified version of Corollary
3.3.8 in the case of processes with independent increments.
Corollary 5.3.4 If {(F^f_{st})_{s<t}; f ∈ S} is a collection of convolution systems which satisfies Assumption 5.3.3, then there exists a probability measure P on the product space (R^C, B(R)^C) under which:
1. the coordinate-variable process X := (XC )C∈C defined on this space is a process
with independent increments; and
2. ∀f ∈ S, the distribution of each increment X^f_t − X^f_s, s < t, is F^f_{st}.
Proof: Let C be an arbitrary set in C and C = A\B, A ∈ A, B = ∪_{j=1}^{m} A'_j ∈ A(u) one of its extremal representations. Let A' be the minimal finite sub-semilattice of A which contains the sets A, A'_1, ..., A'_m, and ord = {A_0 = ∅', A_1, ..., A_n} a consistent ordering of A' such that B = ∪_{j=0}^{k} A_j, A ∪ B = ∪_{j=0}^{n} A_j for some k < n; let f := f_{A',ord} with f(t_i) = ∪_{j=1}^{i} A_j. Define
$$F_C := F^f_{t_k t_n}.$$
Note that this definition does not depend on the extremal representation of the set
C, because of Assumption 5.3.3.
Clearly F_∅ = F_{∅'} = δ_0. Finally, it is not difficult to see that the family (F_C)_{C∈C} satisfies property 2 (from the definition of a convolution system) in the particular instance mentioned in the note following Theorem 5.3.2: denote f := f_{Ã,ord1}, g := f_{A',ord} with f(t_i) = ∪_{p=1}^{i} E_p, i = 1, ..., N and g(u_j) = ∪_{v=1}^{j} B_v, j = 1, ..., n; then
$$F_{D_j} = F^g_{u_{j-1} u_j} = F^f_{t_{i_{j-1}} t_{i_j}} = \ast_{p=i_{j-1}+1}^{i_j} F^f_{t_{p-1} t_p} = \ast_{p=i_{j-1}+1}^{i_j} F_{H_p}$$
(using Assumption 5.3.3).
2
5.4 Lévy Processes
In this section, we will define a set-indexed Lévy process as a process with independent increments which is monotone outer-continuous in probability and monotone inner-continuous in probability (see Appendix B.1 for the definition and some properties of the classical Lévy process).
We will first show that the distribution of such a process is completely determined
by a system of infinitely divisible distributions. Then, using the construction given
by Corollary 5.3.4 for a process with independent increments, we will be able to
show that a Lévy process is also characterized by a ‘consistent’ collection of Lévy
generators.
We begin with the definition of a set-indexed Lévy process. We want to emphasize
that this definition does not require the existence of a metric on the underlying space
T.
Definition 5.4.1 A process X := (X_A)_{A∈A} with independent increments is called a Lévy process if X_{A_n} →^P X_A whenever the sequence (A_n)_n ⊆ A is chosen such that either A_n ⊇ A_{n+1} ∀n and A = ∩_n A_n, or A_n ⊆ A_{n+1} ∀n and A = ∪_n A_n ∈ A.
Note that this definition coincides with the definition of a Lévy process on the real
line, but as soon as we get to higher dimensions, we obtain a more general concept
than the one that exists in the literature. For instance, in [1] a Lévy (multiparameter)
process is defined as a process with independent increments which is continuous in
probability when a point z is approached from any direction, not only from the 'upper-right' or 'lower-left' quadrants. Similarly, the set-indexed Lévy process considered
in [2] is continuous in probability when a Borel set A is approached by an arbitrary
sequence (An )n , using the distance dλ (A, B) := λ(A∆B) (λ is the Lebesgue measure).
In our general case, by considering the monotone convergence of sets, we avoid
having to introduce either a metric or a measure on the space T (that would play the
role of the Lebesgue measure on the Euclidean space [0, 1]d ). The good news is that
we are still able to obtain infinitely divisible distributions for the increments of our
Lévy process. (In [8], the stochastic continuity aspect is simply ignored and a set-indexed Lévy process is defined as a process with independent and infinitely divisible
increments.) The difference between our approach and that of other authors is that we
will obtain monotone continuity properties of the generating triplets (γA , ΛA , ΠA ); in
[1], the usual continuity properties (i.e. a point is approached from all the quadrants)
have been obtained; in [2], the continuity properties are obtained in terms of the
metric dλ .
The following result gives the expected correspondence via flows.
Proposition 5.4.2 A set-indexed process X := (XA )A∈A is a Lévy process if and
only if for every simple flow f the process X f := (Xf (t) )t is Lévy. (For the sufficiency
part it is enough to consider only A-valued continuous flows.)
Proof: By Proposition 5.1.6, a necessary and sufficient condition for a set-indexed
process to have independent increments is that it also has independent increments on
every simple flow.
The same argument that we used for proving Proposition 2.4.7 can be used here for
showing that the set-indexed process X is monotone outer-continuous in probability
and monotone inner-continuous in probability if and only if for every simple flow f ,
the process X f is continuous in probability. The only difference is that now we are
dealing with the convergence in probability, instead of pointwise convergence. We do
not repeat this argument here.
2
Remark: Although it might not have been clear until now why we need our indexing collection A to be separable from above, we point out that the proof of Lemma 2.4.6 (which is used in the argument of the previous proof) relies precisely on this property.
Here is the main result of this section.
Theorem 5.4.3 (Characterization) (i) If X := (XA )A∈A is a Lévy process, then the
distribution of each XC , C ∈ C(u) is infinitely divisible.
The generating triplet (γC , ΛC , ΠC ) has the following properties:
(a) γ_{∅'} = 0; the function C ↦ γ_C is additive on C(u), monotone outer-continuous and monotone inner-continuous;
(b) Λ_{∅'} = 0; the function C ↦ Λ_C has a unique additive extension to a measure on σ(A);
(c) Π_{∅'} = 0; for any fixed Borel set Γ contained in some set {y; |y| > ε}, ε > 0, the function C ↦ Π_C(Γ) has a unique extension to a measure on σ(A).
(ii) Conversely, given any system (FC )C∈C of infinitely divisible distributions on
R with generating triplets (γC , ΛC , ΠC ) satisfying the conditions (a), (b), (c), there
exists a probability measure P on the product space (RC , B(R)C ) under which the
coordinate-variable process X := (XC )C∈C defined on this space is a Lévy process and
the distribution of each increment XC , C ∈ C is FC .
Proof: (i) Instead of trying to prove directly that the distribution of each increment
XC , C ∈ C is infinitely divisible, we will cheat a little bit and we will bring in the
heavy machinery of simple flows (which makes use of the additional structure that
we have for our indexing collection A).
Let C ∈ C be arbitrary, say C = A\B with A ∈ A, B ∈ A(u). By Lemma 2.4.9,
there exists a simple flow f such that f (s) = B, f (t) = A ∪ B for some s < t; the
process X f := (Xf (t) )t is Lévy, and hence the distribution of XC = Xtf − Xsf is
infinitely divisible (see Appendix B.1). If C ∈ C(u), the distribution of XC is also
infinitely divisible, using the fact that the process X has independent increments.
Clearly, the functions A ↦ γ_A, A ↦ Λ_A, A ↦ Π_A(Γ) (for any Borel set Γ ⊆ R) are additive on C(u). Moreover, these functions are continuous over any simple flow f, i.e. the functions t ↦ γ^f_t := γ_{f(t)}, t ↦ Λ^f_t := Λ_{f(t)}, t ↦ Π^f_t(Γ) := Π_{f(t)}(Γ) are continuous, where the set Γ is contained in some set {y; |y| > ε}, ε > 0 (see Appendix
B.1). Therefore, by Proposition 2.4.7, these functions are monotone outer-continuous
and monotone inner-continuous.
The functions Λ, Π are also increasing, i.e. ΛC ≥ 0, ΠC (Γ) ≥ 0, ∀C ∈ C; using Proposition 2.4.3, it follows that they have unique additive extensions to some
measures on σ(A).
(ii) Conversely, let (F_C)_{C∈C} be a system of infinitely divisible distributions on R
with generating triplets (γC , ΛC , ΠC ) satisfying the conditions (a), (b), (c). By the
additivity of the functions γ, Λ, Π, it follows that the family (FC )C∈C is a convolution
system. Using Theorem 5.3.2, there exists a probability measure P on the product
space (RC , B(R)C ) under which the coordinate-variable process X := (XC )C∈C defined
on this space is a process with independent increments and the distribution of each
increment XC , C ∈ C is FC .
This process will automatically be a Lévy process: let (An )n be a decreasing
sequence of sets in A with intersection A; then (An \A)n is a decreasing sequence
of sets in C with empty intersection, and by the monotone outer-continuity of the functions γ, Λ, Π it follows that X_{A_n \setminus A} →^D 0, i.e. X_{A_n} →^P X_A. (A similar argument may be used for an increasing sequence (A_n)_n, using the monotone inner-continuity of the functions γ, Λ, Π.)
2
Corollary 5.4.4 A process X := (XA )A∈A with independent increments is Lévy if
and only if the distribution FC of each increment XC , C ∈ C is infinitely divisible and
its characterizing triplet (γC , ΛC , ΠC ) satisfies properties (a), (b) and (c) stated in
Theorem 5.4.3.
Fundamental examples of Lévy processes are the Brownian motion and the compound Poisson process.
Note: If the probability measure F defined by F(A) := (1/Λ_T)·Λ_A, A ∈ σ(A), satisfies γ_A = F(A)·γ_T, Π_A = F(A)·Π_T, ∀A ∈ A, then the process X is said to have stationary increments (with respect to Λ), i.e. the distribution of each increment X_C, C ∈ C depends on C only through Λ_C.
The construction of a general Q-Markov process, knowing the generators of the
process along a suitable class of simple flows, is given by Theorem 4.2.4. In what
follows, we will give the simplified version of this result, in the case of processes with
independent increments.
Let S be a collection of simple flows as in Definition 3.2.10, and denote by f_A the simple flow in the class S which connects the sets of the semilattice A' := {A_0 = ∅', A_1 = A}, with f_A(t_A) = A.
The generator of a set-indexed Lévy process, which is weakly differentiable on any
simple flow f ∈ S, has a known form (see Appendix B.1). The next result says that
conversely, given a collection {G f := (Gsf )s ; f ∈ S} of families of Lévy generators, it
is possible to reconstruct a set-indexed Lévy process, providing certain consistency
conditions hold.
Let {G f := (Gsf )s ; f ∈ S} be a collection of families of Lévy generators with domains D(Gsf ) = Cb2 (R) and generating triplets (γsf , Λfs , Πfs ). The following assumption
is an equivalent form in terms of generators of Assumption 5.3.3.
Assumption 5.4.5 If ord1 = {A_0 = ∅', A_1, ..., A_n} and ord2 = {A'_0 = ∅', A'_1, ..., A'_m} are two consistent orderings of some finite semilattices A', A'' such that (∪_{j=1}^{n} A_j) \ (∪_{j=1}^{k} A_j) = (∪_{j=1}^{m} A'_j) \ (∪_{j=1}^{l} A'_j) for some k < n, l < m, and we denote f := f_{A',ord1}, g := f_{A'',ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A'_j, then
$$\gamma^f_{t_k t_n} = \gamma^g_{u_l u_m}, \quad \Lambda^f_{t_k t_n} = \Lambda^g_{u_l u_m}, \quad \Pi^f_{t_k t_n} = \Pi^g_{u_l u_m}$$
where γ^f_{st} := γ^f_t − γ^f_s, Λ^f_{st} := Λ^f_t − Λ^f_s, Π^f_{st} := Π^f_t − Π^f_s.
Clearly Assumption 5.4.5 is a necessary condition satisfied by any process with
independent increments which has Lévy generators over the class S of simple flows.
The following result is an immediate consequence of Corollary 5.3.4; it says that, in fact, this assumption is also sufficient for the construction of such a process.
Theorem 5.4.6 If {G f := (Gsf )s ; f ∈ S} is a collection of families of Lévy generators
with domains D(Gsf ) = Cb2 (R) and generating triplets (γsf , Λfs , Πfs ), which satisfies
Assumption 5.4.5, then there exists a probability measure P on the product space
(RC , B(R)C ) under which:
1. the coordinate-variable process X := (XC )C∈C defined on this space is a process
with independent increments; and
2. ∀f ∈ S, the generator of the process X f := (Xf (t) )t at time s is an extension of
the operator Gsf .
The process X is Lévy if the following additional condition holds: for any decreasing
(respectively increasing) sequence (An )n ⊆ A with A := ∩n An (respectively A :=
∪n An ∈ A) we have
$$\gamma^{f_n}_{t_n} \to \gamma^{f}_{t}, \quad \Lambda^{f_n}_{t_n} \to \Lambda^{f}_{t}, \quad \Pi^{f_n}_{t_n}(\Gamma) \to \Pi^{f}_{t}(\Gamma) \tag{32}$$
where t_n = t_{A_n}, f_n = f_{A_n}, ∀n, and t = t_A, f = f_A.
Proof: For each flow f ∈ S, f : [0, a] → A(u), and for each s, t ∈ [0, a], s < t, let F^f_{st} be the infinitely divisible distribution with generating triplet (γ^f_{st}, Λ^f_{st}, Π^f_{st}). Note that
each family (F^f_{st})_{s<t} is a convolution system. Let Q^f := (Q^f_{st})_{s<t} be the associated transition system, defined by Q^f_{st}(x; Γ) := F^f_{st}(Γ − x), x ∈ R, Γ ∈ B(R), and denote by T^f_{st} the operator associated to Q^f_{st}. Then the generator at time s of the semigroup T^f := (T^f_{st})_{s<t} is an extension of the operator G^f_s.
Moreover, by Assumption 5.4.5, the collection {(Fstf )s<t ; f ∈ S} of convolution systems satisfies Assumption 5.3.3; hence, by Corollary 5.3.4, there exists a probability
measure P on the product space (RC , B(R)C ) under which: 1. the coordinate-variable
process X := (XC )C∈C defined on this space is a process with independent increments;
and 2. for every flow f ∈ S the distribution of each increment Xf (t) − Xf (s) , s < t is
exactly Fstf .
Hence we can define some set-indexed functions (γA )A∈A , (ΛA )A∈A , (ΠA )A∈A (with
unique additive extensions to C(u)) such that the distribution of each increment
XC , C ∈ C is infinitely divisible with generating triplet (γC , ΛC , ΠC ). These functions
will be monotone outer-continuous and monotone inner-continuous on A (which is
equivalent to saying that the process X is Lévy) if (32) holds.
2
Our final remark is that set-indexed Lévy processes belong to the class D[S(A)]
(introduced in Section 2.4) since for every simple flow f , the (Lévy) process X f :=
(Xf (t) )t has a cadlag modification (see Appendix B.1).
5.5 Existence of a Cadlag Version of a Lévy Process
In this section we will include several remarks about the existence of a cadlag version
of a set-indexed Lévy process. These results are not new; rather, this section is an attempt to incorporate all the existing results from the literature into our context. We also
include a slight extension of Theorem 3.1, [8] about the existence of a cadlag version
of a jump Lévy process, with two minor corrections (one to the statement, the other
to the proof) of this result. We will make use of the notions of ‘metric entropy’ and
‘Vapnik-Cervonenkis class’, which are treated in Appendix B.3.
We recall that our notion of 'cadlag' is defined using the Hausdorff metric. Therefore, in this section we assume that T is a metric space.
Given a system (FC )C∈C of infinitely divisible distributions on R with generating
triplets (γC , ΛC , ΠC ) satisfying the conditions (a), (b), (c) stated in Theorem 5.4.3,
we want to construct an (almost surely) cadlag Lévy process X := (X_A)_{A∈A} such that the
distribution of each increment XC , C ∈ C is FC .
We will consider separately the Gaussian and the jump part. The building blocks
for the two constructions will be the Brownian motion and the compound Poisson
process.
1. The Gaussian Case
Theorem 5.5.1 Let X := (X_A)_{A∈A} be a Lévy process with Gaussian increments (i.e. Π_A = 0, ∀A ∈ A, where (γ_A, Λ_A, Π_A) denotes its generating triplet). If
1. all the approximation functions g_n are A-valued and d_H(A, g_n(A)) ≤ ε_n, ∀n, for some ε_n ↘ 0;
2. the collection A is totally bounded with respect to the pseudo-metric d_Λ defined by d_Λ(A, B) := Λ_{A∆B}, and the metric entropy H_{d_Λ} of A with respect to d_Λ satisfies
$$\int_0^1 (H_{d_\Lambda}(\varepsilon))^{1/2}\,d\varepsilon < \infty; \tag{33}$$
and
3. the functions γ and Λ are dH -continuous on A\{∅}
then the process X has a dH -continuous (and hence cadlag) version.
Proof: Since the function γ is dH -continuous, we can assume without loss of generality that γC = 0, ∀C ∈ C, i.e. X is a Brownian motion with variance measure Λ.
Moreover, we know that any set-indexed process has a separable version (Theorem
2.4.19); hence, without loss of generality, we will suppose that X is separable.
On the one hand, because condition (33) holds, the process X is d_Λ-continuous, using
Theorem 1.1, [3]. On the other hand, using the argument on page 3 of [3], this is
equivalent to saying that the process X is also dH -continuous, because its variance
measure Λ is dH -continuous (this argument requires that the indexing space A\{∅}
be dH -compact; this condition is satisfied by Lemma 2.4.11).
2
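Condition (33) is a Dudley-type entropy integral, and it is easy to probe numerically. The sketch below (ours, not part of the thesis) assumes the logarithmic entropy bound H(ε) ≤ c·log(1/ε) of the Vapnik-Cervonenkis case mentioned in the Note that follows, and checks that the integral is then finite (in closed form it equals √c·√π/2).

```python
import numpy as np

# Rough numerical illustration (ours) of the entropy condition (33)
# under the assumed bound H(eps) = c * log(1/eps).
c = 2.0
eps = np.linspace(1e-12, 1.0, 2_000_001)
integrand = np.sqrt(c * np.log(1.0 / eps))
# Trapezoid rule; the omitted piece on [0, 1e-12] is negligible.
value = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(eps)))
print(np.isclose(value, np.sqrt(c) * np.sqrt(np.pi) / 2.0, rtol=1e-3))
```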
Note: The fact that A is a Vapnik-Cervonenkis class in σ(A) is:
• a sufficient condition for the existence of a dH -continuous version of a Brownian
motion indexed by A, with dH -continuous variance measure Λ (condition (33)
is satisfied if A is a Vapnik-Cervonenkis class in σ(A), according to Theorem
1.10, [3]); and
• a necessary condition for the existence of a dΛ -bounded and uniformly continuous version of a Brownian motion indexed by A, with arbitrary variance
measure Λ (according to the argument on page 125 of [29]).
2. The Non-Gaussian Case
The following result extends and gives a minor correction to Theorem 3.1, [8]. Our
generalization has to be understood in two directions: in the first place, our indexing
sets lie in an arbitrary metric space T ; in the second place the stationarity condition
is replaced by a more general condition which requires that the Lévy measure of the
process be bounded by a stationary measure (so that it does not need to be stationary
itself). One correction is that we require that the measure Π be concentrated on
the well-behaved part T ∗ of the space T (introduced in Section 2.4); without this
requirement, the constructed process may fail to be cadlag. The second correction is
a minor technical point encountered in the proof.
Theorem 5.5.2 Let X := (X_A)_{A∈A} be a Lévy process with no Gaussian part (i.e. γ_C = 0, Λ_C = 0, ∀C ∈ C), whose Lévy measure Π(dz; dx) is concentrated on T* × R and whose first marginal Λ(A) := Π(A × R) has no atoms (i.e. Λ({z}) = 0, ∀z ∈ T), and suppose that Π is bounded from above by a stationary Lévy measure Π^1(dz; dx) = F^1(dz) × Π^1_T(dx), i.e.
$$\Pi(dz; dx) \le \Pi^1(dz; dx) = F^1(dz) \times \Pi^1_T(dx)$$
where F^1 is a probability measure on σ(A). Denote M^1(x) := Π^1_T((−∞, −x) ∪ (x, ∞)) and G^1(y) := y · inf{x > 0; M^1(y) < x}.
Suppose that the Lévy measure Π^1_T satisfies the following conditions:
(B1) lim sup_{x→0} x^2 |ln x|^9 M^1(x) < ∞;
(B2) for some τ > 0, x^τ M^1(x) increases as x ↘ 0;
and
$$\int_{\{|x|\le 1\}} |x|\,\Pi^1_T(dx) = \infty.$$
Assume that the indexing collection A is totally bounded with inclusion (in σ(A)) with respect to the probability measure F^1, and that its metric entropy with inclusion H_I(ε) satisfies the following condition:
(A1) H_I(ε) = ε^{−c_0} L(ε) for some constant c_0 > 1, where L is a slowly varying function near 0 such that ε^{(c_0+1)/2} H_I(ε) increases as ε ↘ 0.
If
$$\int_0^1 G^1\Big(\frac{H_I(\varepsilon)}{\varepsilon}\Big)\,d\varepsilon < \infty \tag{34}$$
then the process X possesses a cadlag version on A.
Proof: For the sake of completeness we include the proof, even if the arguments are
essentially the same as those used to prove Theorem 3.1, [8].
For each n ≥ 0, let X n := (XAn )A∈A be a cadlag compound Poisson process with
Lévy measure Πn (A × Γ) := Π(A × (Γ ∩ {y; an < |y| ≤ an−1 })), n ≥ 1; Π0 (A × Γ) :=
Π(A×(Γ∩{y; |y| > 1})), using Comment 5.2.4. (Here (an )n≥1 is a decreasing sequence
of positive constants converging to 0, a0 = 1.) We assume that the processes (X n )n≥0
are independent (by redefining them on a product space, if necessary).
Note that each Π^n(dz; dx) is a finite Lévy measure concentrated on T* × R:
$$\mu_n := \Pi^n(T \times \mathbb{R}) = \int_{\{y;\, a_n < |y| \le a_{n-1}\}} \Pi_T(dy) \le \frac{1}{a_n^2} \int_{|y|\le 1} y^2\,\Pi_T(dy) < \infty, \quad n \ge 1$$
$$\mu_0 := \Pi^0(T \times \mathbb{R}) = \int_{\{y;\,|y|>1\}} \Pi_T(dy) < \infty.$$
Let Z^n_A := X^n_A − E[X^n_A], n ≥ 1, and ‖Z^n‖_A := sup_{A∈A} |Z^n_A|. Note that the log-characteristic function of X^0_A is
$$\psi^0_A(u) = \int_{\{|y|>1\}} (e^{iuy} - 1)\,\Pi_A(dy)$$
whereas the log-characteristic function of Z^n_A is
$$\psi^n_A(u) = \int_{\{y;\,a_n<|y|\le a_{n-1}\}} (e^{iuy} - 1 - iuy)\,\Pi_A(dy).$$
Therefore, if we can define
$$X_A := X^0_A + \sum_{n\ge 1} Z^n_A \quad \text{uniformly in } A \in \mathcal{A}$$
then the process X := (X_A)_{A∈A} will be a cadlag Lévy process with no Gaussian part and Lévy measure Π.
In order to show that Σ_{n≥1} ‖Z^n‖_A converges a.s. it is enough to prove that there exists a summable sequence (α_n)_n of positive real numbers such that
$$\sum_{n\ge 1} P^*\big(\|Z^n\|_{\mathcal{A}} \ge \alpha_n\big) < \infty$$
using an extension of the Borel-Cantelli Lemma (P* denotes the outer measure induced by the underlying probability measure P; we use this measure because ‖Z^n‖_A may not be measurable). Our constants α_n will be of the form α_n := 4η_n, where η_n will be chosen later such that
$$\sum_{n\ge 1} \eta_n < \infty. \tag{35}$$
Let β ∈ (0, 1) (its value will be chosen later) and set δ_n := β^n, n ≥ 1. For each n ≥ 1 let A_n := A_{δ_n} denote a collection of measurable sets from σ(A) such that for any A ∈ A there exist two sets A^−_n, A^+_n ∈ A_n with the following properties: A^−_n ⊆ A ⊆ A^+_n and F(A^+_n \ A^−_n) ≤ δ_n. Without loss of generality we can assume that F(A^−_n ∆ A^−_{n−1}) ≤ δ_{n−1} for every n. Recall that the collection A_n can be chosen such that it has at most exp(2H_I(δ_n)) elements.
Let (γ_j)_{j≥1} be a sequence (to be chosen later) such that
$$\sum_{j\ge 1} \gamma_j = 1. \tag{36}$$
Let η_{nj} := η_n γ_j. Let M^1_n := M^1(a_n) and Q^1_{n−1} := ∫_{a_n<|x|≤a_{n−1}} x^2 Π^1_T(dx). Clearly Q^1_{n−1} ≤ a^2_{n−1} M^1_n.
Let (k_n)_{n≥1} be an increasing sequence of positive integers (to be chosen later) such that lim_{n→∞} k_n = ∞. For each A ∈ A we may write
$$Z^n_A = \sum_{j=1}^{k_n} \big(Z^n_{A^-_j} - Z^n_{A^-_{j-1}}\big) + \big(Z^n_A - Z^n_{A^-_{k_n}}\big) \tag{37}$$
where we set A^−_0 := ∅.
Hence P*(‖Z^n‖_A > 4η_n) = P*(∃A ∈ A : |Z^n_A| > 4η_n) ≤ P_{(I)} + P_{(II)}, where
$$P_{(I)} := \sum_{j=1}^{k_n} P^*\big(\exists A^-_j \in \mathcal{A}_j,\ A^-_{j-1} \in \mathcal{A}_{j-1},\ F(A^-_j \Delta A^-_{j-1}) \le \delta_{j-1} : |Z^n_{A^-_j} - Z^n_{A^-_{j-1}}| > 2\eta_{nj}\big)$$
and
$$P_{(II)} := \sum_{A^-_{k_n}, A^+_{k_n} \in \mathcal{A}_{k_n},\ F(A^+_{k_n} \setminus A^-_{k_n}) \le \delta_{k_n}} P^*\big(\exists A \in \mathcal{A},\ A^-_{k_n} \subseteq A \subseteq A^+_{k_n} : |Z^n_{A \setminus A^-_{k_n}}| > 2\eta_n\big).$$
Note that
$$P_{(I)} \le \sum_{j=1}^{k_n} \exp(2H_I(\delta_j))\, r_{nj} \quad \text{and} \quad P_{(II)} \le \exp(2H_I(\delta_{k_n}))\, r_n$$
where
$$r_{nj} := \max_{A^-_j \in \mathcal{A}_j,\ A^-_{j-1} \in \mathcal{A}_{j-1},\ F(A^-_j \Delta A^-_{j-1}) \le \delta_{j-1}} \big\{P(|Z^n_{A^-_j \setminus A^-_{j-1}}| > \eta_{nj}) + P(|Z^n_{A^-_{j-1} \setminus A^-_j}| > \eta_{nj})\big\}$$
and
$$r_n := \max_{A^-_{k_n}, A^+_{k_n} \in \mathcal{A}_{k_n},\ F(A^+_{k_n} \setminus A^-_{k_n}) \le \delta_{k_n}} P^*\Big(\sup_{A \in \mathcal{A}:\ A^-_{k_n} \subseteq A \subseteq A^+_{k_n}} |Z^n_{A \setminus A^-_{k_n}}| > 2\eta_n\Big).$$
In order to find an upper bound for r_{nj} we will use an inequality specific to infinitely divisible distributions. This inequality is given by Lemma 2.3, [8], except that we will use it for Lévy measures whose domain is {x; a' < |x| ≤ a}. (This is in fact the inequality that the authors of [8] are using too, even if they preferred to prove it only in the case when a' = 0; the same proof works for arbitrary a' > 0.)
In our case each Z^n_A is a random variable with an infinitely divisible distribution whose log-characteristic function is
$$\psi^n_A(u) = \int_{a_n<|x|\le a_{n-1}} (e^{iux} - 1 - iux)\,\Pi_A(dx).$$
Then θ_n := ∫_{a_n<|x|≤a_{n−1}} x^2 Π_A(dx) ≤ F(A) Q^1_{n−1} and, according to the previous lemma,
$$P(|Z^n_A| > x) \le 2\exp\Big\{-\frac{x^2}{2\big(F(A)Q^1_{n-1} + \frac{a_{n-1}x}{3}\big)}\Big\}$$
for every x > 0. Using this inequality for both A^−_j \ A^−_{j−1} and A^−_{j−1} \ A^−_j we get r_{nj} ≤ 2p_{nj}, where
$$p_{nj} := 2\exp\Big\{-\frac{\eta_{nj}^2}{2\big(\delta_{j-1}Q^1_{n-1} + \frac{a_{n-1}\eta_{nj}}{3}\big)}\Big\}.$$
Suppose that, for j = 1, ..., k_n,
$$2H_I(\delta_j) \le \frac{\eta_{nj}^2}{8\max\big(\delta_{j-1}Q^1_{n-1},\ \frac{a_{n-1}\eta_{nj}}{3}\big)} \tag{38}$$
and
$$\frac{\eta_{nj}^2}{8\max\big(\delta_{j-1}Q^1_{n-1},\ \frac{a_{n-1}\eta_{nj}}{3}\big)} \ge 2\log n + \log k_n. \tag{39}$$
Then
$$\exp(2H_I(\delta_j))\,p_{nj} \le 2\exp(-2\log n - \log k_n)$$
and
$$\sum_{j=1}^{k_n} \exp(2H_I(\delta_j))\,p_{nj} \le 2k_n \exp(-2\log n - \log k_n) = 2n^{-2}$$
which is summable.
We will now find a summable upper bound for the second term P_{(II)}.
Note first that for any set A ∈ σ(A) we have
$$|Z^n_A| \le |X^n_A| + \int_{a_n<|x|\le a_{n-1}} |x|\,\Pi_A(dx) \le a_{n-1}W^n_A + a_{n-1}F(A)M^1_n$$
where (W^n_A)_{A∈A} is the set-indexed Poisson process with mean E(W^n_A) = Π_A(Γ_n).
We begin by taking a closer look at r_n. Fix A^−_{k_n}, A^+_{k_n} ∈ A_{k_n} such that A^−_{k_n} ⊆ A^+_{k_n} and F(A^+_{k_n} \ A^−_{k_n}) ≤ δ_{k_n}. For each A ∈ A with A^−_{k_n} ⊆ A ⊆ A^+_{k_n} we have
$$|Z^n_{A\setminus A^-_{k_n}}| \le a_{n-1}W^n_{A^+_{k_n}\setminus A^-_{k_n}} + a_{n-1}\delta_{k_n}M^1_n.$$
Suppose that
$$\eta_n > e^2 a_{n-1}\delta_{k_n}M^1_n. \tag{40}$$
Then sup_{A∈A, A^−_{k_n}⊆A⊆A^+_{k_n}} |Z^n_{A\setminus A^−_{k_n}}| ≤ a_{n−1}W^n_{A^+_{k_n}\setminus A^−_{k_n}} + η_n and hence
$$P^*\Big(\sup_{A\in\mathcal{A},\ A^-_{k_n}\subseteq A\subseteq A^+_{k_n}} |Z^n_{A\setminus A^-_{k_n}}| > 2\eta_n\Big) \le P\big(W^n_{A^+_{k_n}\setminus A^-_{k_n}} > \eta_n/a_{n-1}\big).$$
At this point we will use an inequality which gives a bound on the tail of a Poisson distribution. This inequality is given by Lemma 2.2, [8].
In our case W^n_{A^+_{k_n}\setminus A^−_{k_n}} is a Poisson random variable with mean Π_{A^+_{k_n}\setminus A^−_{k_n}}(Γ_n) ≤ F(A^+_{k_n}\setminus A^−_{k_n})·Π^1_T(Γ_n) ≤ δ_{k_n}M^1_n. Using the previous lemma with s = η_n/a_{n−1} > e^2 δ_{k_n}M^1_n, we get
$$r_n \le \exp(-\eta_n/a_{n-1}).$$
Suppose that
$$H_I(\delta_{k_n}) \le \frac{\eta_n}{4a_{n-1}} \tag{41}$$
and
$$\frac{\eta_n}{4a_{n-1}} \ge \log n. \tag{42}$$
Then exp(2H_I(δ_{k_n}))·r_n ≤ exp(−η_n/(2a_{n−1})) ≤ exp(−2 log n) = n^{−2}, which is summable.
It remains to choose the constants an , ηn , β, γj , kn such that conditions (35), (36),
(38), (39), (40), (41), (42) hold. But this follows exactly as in the proof of Theorem
3.1, [8].
2
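The layering idea behind this proof can be illustrated in one dimension. The sketch below (ours, not the set-indexed construction itself) groups jumps into bands a_n < y ≤ a_{n−1}, builds a compensated compound Poisson variable per band, and sums them; the toy Lévy measure Π(dy) = y^{-3/2} dy on (0, 1] is an assumption chosen so that the band mass, mean and inverse CDF are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

def layer(lo, hi):
    """One compensated compound Poisson layer with jumps in (lo, hi]."""
    mass = 2.0 * (lo ** -0.5 - hi ** -0.5)          # Pi((lo, hi])
    n_jumps = rng.poisson(mass)
    u = rng.uniform(0.0, 1.0, size=n_jumps)
    jumps = (lo ** -0.5 - 0.5 * mass * u) ** -2.0   # inverse-CDF sampling
    mean = 2.0 * (hi ** 0.5 - lo ** 0.5)            # int_(lo,hi] y Pi(dy)
    return jumps.sum() - mean                       # centred layer Z^n

a = [2.0 ** -n for n in range(0, 12)]               # a_0 = 1 > a_1 > ... -> 0
X0 = 0.0                                            # no jumps above 1 here
X = X0 + sum(layer(a[n], a[n - 1]) for n in range(1, len(a)))
print(X)
```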
Comment 5.5.3 The preceding proof also contains a correction to the original proof of Theorem 3.1, [8]. In that paper it is proved that P*(‖Z^n‖_A > 6η_n) ≤ P_{(I)} + P'_{(II)}, where P'_{(II)} has exactly the same definition as our P_{(II)} with the constant 2η_n replaced by 4η_n. An unnecessarily large upper bound for P'_{(II)} is used, namely
$$P'_{(II)} \le \exp(2H_I(\delta_{k_n})) \cdot (2q_n)$$
where
$$q_n := \max_{A^-_{k_n}, A^+_{k_n} \in \mathcal{A}_{k_n},\ F(A^+_{k_n}\setminus A^-_{k_n}) \le \delta_{k_n}} P^*\Big(\sup_{A^-_{k_n} \subseteq B \subseteq A^+_{k_n}} |Z^n_B| > 2\eta_n\Big).$$
On page 168 of [8] it is claimed that
$$\sup_{A^-_{k_n} \subseteq B \subseteq A^+_{k_n}} |Z^n_B| \le a_{n-1}W^n_{A^+_{k_n}\setminus A^-_{k_n}} + a_{n-1}M^1_n\,|A^+_{k_n}\setminus A^-_{k_n}|$$
which may not be true. Our approach makes use of the upper bound r_n (instead of q_n) and circumvents this difficulty.
Chapter 6
Examples
In this chapter we will see some examples of Q-Markov processes which do not have
independent increments.
6.1 The Empirical Process
In this section we will examine the set-indexed empirical process as a Q-Markov process. In particular, we will prove that in this case the consistency condition imposed
on the generator has a very simple form.
Let S be a collection of simple flows as in Definition 3.2.10.
Proposition 6.1.1 Let (Zj )j=1,...,n be a sequence of independent identically distributed
random variables on T with distribution F . Let
$$X_A := \sum_{j=1}^{n} I_{\{Z_j \in A\}}, \quad A \in \mathcal{A}.$$
Then:
(i) the process X := (X_A)_{A∈A} is Q-Markov with the transition system Q given by
$$Q_{BB'}(k; \{m\}) := \binom{n-k}{m-k}\left(\frac{F(B'\setminus B)}{1-F(B)}\right)^{m-k}\left(1 - \frac{F(B'\setminus B)}{1-F(B)}\right)^{n-m}$$
for every k, m ∈ {0, 1, ..., n}, k ≤ m, and for every B, B' ∈ A(u), B ⊆ B';
(ii) for every right-continuous flow f : [0, a] → A(u) with f (a) = T , the process
X f := (Xf (t) )t∈[0,a] can be written as
$$X^f_t = \sum_{j=1}^{n} I_{\{Z^f_j \le t\}}, \quad t \in [0, a]$$
where (Zjf )j=1,...,n is a sequence of independent identically distributed random
variables on [0, a] with distribution F f := F ◦ f ; and
(iii) if the collection S can be chosen such that for every flow f ∈ S the function F^f is differentiable, then the generator of the process X^f at time s is
$$(G^f_s h)(k) = \begin{cases} \lambda_s(k)\cdot(h(k+1)-h(k)) & \text{if } k = 0, 1, \ldots, n-1 \\ 0 & \text{if } k = n \end{cases}$$
with
$$\lambda_s(k) := (n-k)\,\frac{(F^f)'(s)}{1-F^f(s)}$$
and domain D(G^f_s) equal to B({0, 1, ..., n}), the space of all functions h : {0, 1, ..., n} → R.
Comments 6.1.2
1. The process X is called the empirical process (of size n)
associated to the measure F ; this process can be extended uniquely to a finite
positive measure on σ(A), called the empirical measure.
2. According to (ii), each process X f , f ∈ S is a one-parameter empirical process
(of size n) associated to the measure F f .
3. If F({z}) = 0, ∀z ∈ T, then X is a point process since P(Z_i = Z_j) = 0, ∀i ≠ j.
Proof: (i) It is not difficult to see that the family Q := (Q_{BB'})_{B⊆B'} is a transition system. According to Comment 3.2.2, it is enough to prove that for every A ∈ A, B ∈ A(u), for every partition B = ∪_{i=1}^{p} C_i, C_i ∈ C, and for every k_i, m ∈ {0, 1, ..., n} with k := Σ_{i=1}^{p} k_i ≤ m,
$$P[X_{A\cup B} = m \mid X_{C_1} = k_1, \ldots, X_{C_p} = k_p] = Q_{B, A\cup B}(k; \{m\}).$$
We have
$$P(X_{C_1} = k_1, \ldots, X_{C_p} = k_p, X_{A\cup B} = m) = \frac{n!}{\prod_{i=1}^{p} k_i!\,(m-k)!\,(n-m)!} \prod_{i=1}^{p} (F(C_i))^{k_i}\,(F(A\setminus B))^{m-k}\,(1-F(A\cup B))^{n-m}$$
and
$$P(X_{C_1} = k_1, \ldots, X_{C_p} = k_p) = \frac{n!}{\prod_{i=1}^{p} k_i!\,(n-k)!} \prod_{i=1}^{p} (F(C_i))^{k_i}\,(1-F(B))^{n-k}.$$
By dividing these two relationships we get
$$P[X_{A\cup B} = m \mid X_{C_1} = k_1, \ldots, X_{C_p} = k_p] = \frac{(n-k)!}{(m-k)!\,(n-m)!} \cdot \frac{(F(A\setminus B))^{m-k}\,(1-F(A\cup B))^{n-m}}{(1-F(B))^{n-k}}$$
which concludes the proof of (i).
(ii) For each right-continuous flow f : [0, a] → A(u) with f(a) = T, the function F^f is right-continuous and increasing with F^f(a) = F(T) = 1; let
$$Z^f_j := \min\{t \in [0, a];\ Z_j \in f(t)\}.$$
Note that Z^f_j ≤ t if and only if Z_j ∈ f(t). Hence
$$X^f_t = X_{f(t)} = \sum_{j=1}^{n} I_{\{Z_j \in f(t)\}} = \sum_{j=1}^{n} I_{\{Z^f_j \le t\}}$$
where the random variables (Zjf )j≥1 are independent identically distributed with distribution F f .
(iii) For each f ∈ S, let Q^f_{st} := Q_{f(s)f(t)} and let T^f_{st} be the bounded linear operator associated to Q^f_{st}. We have
$$(T^f_{st}h)(k) - h(k) = \sum_{m=k+1}^{n} (h(m)-h(k))\,Q^f_{st}(k;\{m\}) = \sum_{m=k+1}^{n} (h(m)-h(k)) \binom{n-k}{m-k} (F^f(t)-F^f(s))^{m-k} \cdot \frac{(1-F^f(t))^{n-m}}{(1-F^f(s))^{n-k}}$$
for every function h ∈ B({0, 1, ..., n}). By definition, the generator of the process X^f at time s is
$$(G^f_s h)(k) := \lim_{t\to s} \frac{(T^f_{st}h)(k) - h(k)}{t-s} \quad \text{uniformly in } k.$$
The result follows since
$$\lim_{t\to s} \frac{(F^f(t)-F^f(s))^{m-k}}{t-s} = \begin{cases} (F^f)'(s) & \text{if } m = k+1 \\ 0 & \text{if } m > k+1. \end{cases}$$
2
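A small Monte Carlo sketch (ours, not part of the thesis) of Proposition 6.1.1, with T = [0, 1], A = {[0, t]} and F the uniform distribution, both assumptions made for the example: conditionally on X_B = k, the increment X_{B'} − X_B should be Binomial(n − k, F(B'\B)/(1 − F(B))), which we check through its conditional mean.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 50
trials = 20_000
B, Bp = 0.3, 0.6                 # B = [0, 0.3] contained in B' = [0, 0.6]
Z = rng.uniform(0.0, 1.0, size=(trials, n))
XB = (Z <= B).sum(axis=1)        # X_B counts points in B
XBp = (Z <= Bp).sum(axis=1)      # X_{B'} counts points in B'
k = 15                           # condition on X_B = k
sel = XB == k
p = (Bp - B) / (1.0 - B)         # F(B' \ B) / (1 - F(B))
# Conditional mean of the increment vs. the Binomial mean (n - k) * p:
print(XBp[sel].mean() - k, (n - k) * p)
```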
The next theorem shows that the finite dimensional distributions of a set-indexed
empirical process are completely characterized by its generator, provided that a certain consistency assumption holds.
Theorem 6.1.3 For each f ∈ S, f : [0, a] → A(u), let F f be a differentiable probability distribution function on [0, a].
If {G^f := (G^f_s)_s; f ∈ S} is a collection of families of linear operators on B({0, 1, ..., n}) given by
$$(G^f_s h)(k) = \begin{cases} \lambda_s(k)\cdot(h(k+1)-h(k)) & \text{if } k = 0, 1, \ldots, n-1 \\ 0 & \text{if } k = n \end{cases}$$
with
$$\lambda_s(k) := (n-k)\,\frac{(F^f)'(s)}{1-F^f(s)}$$
and domain D(G^f_s) equal to the space B({0, 1, ..., n}), and we assume that there exists
a probability measure F on (T, σ(A)) such that
F ◦ f = F f , ∀f ∈ S
then there exists a probability measure P on the product space {0, 1, . . . , n}C under
which:
1. the coordinate-variable process X := (XC )C∈C defined on this space is a version
of the empirical process (of size n) associated to the measure F ; and
2. ∀f ∈ S, the generator of the process X f := (Xf (t) )t at time s is Gsf .
Proof: For each f ∈ S, the operator G^f_s is the generator at time s of the semigroup T^f := (T^f_{st})_{s<t} associated to the transition system Q^f := (Q^f_{st})_{s<t} defined by
$$Q^f_{st}(k;\{m\}) := \binom{n-k}{m-k}\left(\frac{F^f(t)-F^f(s)}{1-F^f(s)}\right)^{m-k}\left(1-\frac{F^f(t)-F^f(s)}{1-F^f(s)}\right)^{n-m}$$
for every k, m ∈ {0, 1, . . . , n}, k ≤ m. The result will follow by Theorem 4.1.3,
provided that the conditions of this theorem are satisfied.
We will prove first that the collection {Qf ; f ∈ S} satisfies Assumption 3.3.6: let
f, g ∈ S be such that f (s) = g(u) and f (t) = g(v); we have
$$F^f(s) = F(f(s)) = F(g(u)) = F^g(u)$$
$$F^f(t) - F^f(s) = F(f(t)) - F(f(s)) = F(g(v)) - F(g(u)) = F^g(v) - F^g(u)$$
and hence Q^f_{st} = Q^g_{uv}.
We will prove next that the collection {Q^f; f ∈ S} satisfies Assumption 3.3.7 with µ := δ_0. Let ord1 = {A_0 = ∅', A_1, ..., A_n} and ord2 = {A_0 = ∅', A'_1, ..., A'_n} be two consistent orderings of the same finite semilattice A' with A_i = A'_{π(i)}, ∀i, where π is a permutation of {1, ..., n} with π(1) = 1, and denote f := f_{A',ord1}, g := f_{A',ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A'_j and t_0 = u_0 = 0. We want to prove that
$$\int_{\mathbb{R}^n} I_{\{k_1\}}(x_1)\prod_{i=2}^{n} I_{\{k_i\}}(x_i - x_{i-1})\,Q^f_{t_{n-1}t_n}(x_{n-1}; dx_n)\ldots Q^f_{0t_1}(0; dx_1) = \int_{\mathbb{R}^n} I_{\{k_1\}}(y_1)\prod_{i=2}^{n} I_{\{k_i\}}(y_{\pi(i)} - y_{\pi(i)-1})\,Q^g_{u_{n-1}u_n}(y_{n-1}; dy_n)\ldots Q^g_{0u_1}(0; dy_1) \tag{43}$$
for every k_i ∈ {0, 1, ..., n} with Σ_{i=1}^{n} k_i ≤ n.
The left-hand side of (43) is equal to
$$\frac{n!}{\prod_{i=1}^{n} k_i!\,(n-\sum_{i=1}^{n} k_i)!}\ \prod_{i=1}^{n}(F^f(t_i)-F^f(t_{i-1}))^{k_i}\,(1-F^f(t_n))^{n-\sum_{i=1}^{n} k_i} \tag{44}$$
since
$$Q^f_{0t_1}(0;\{k_1\}) = \frac{n!}{k_1!\,(n-k_1)!}(F^f(t_1))^{k_1}(1-F^f(t_1))^{n-k_1};$$
$$Q^f_{t_1t_2}(k_1;\{k_1+k_2\}) = \frac{(n-k_1)!}{k_2!\,(n-k_1-k_2)!}\cdot\frac{(F^f(t_2)-F^f(t_1))^{k_2}(1-F^f(t_2))^{n-k_1-k_2}}{(1-F^f(t_1))^{n-k_1}};$$
etc. On the right-hand side of (43) we write
$$\prod_{i=2}^{n} I_{\{k_i\}}(y_{\pi(i)} - y_{\pi(i)-1}) = \prod_{i=2}^{n} I_{\{l_i\}}(y_i - y_{i-1})$$
with li := kπ−1 (i) , and therefore this side becomes
$$\frac{n!}{\prod_{i=1}^{n} l_i!\,(n-\sum_{i=1}^{n} l_i)!}\ \prod_{i=1}^{n}(F^g(u_i)-F^g(u_{i-1}))^{l_i}\,(1-F^g(u_n))^{n-\sum_{i=1}^{n} l_i}. \tag{45}$$
Finally, we observe that (44) is equal to (45) since
$$F^f(t_i) - F^f(t_{i-1}) = F(C_i) = F(C'_{\pi(i)}) = F^g(u_{\pi(i)}) - F^g(u_{\pi(i)-1})$$
(where C_i, C'_i are the left neighbourhoods of A_i, A'_i) and hence
$$(F^f(t_i) - F^f(t_{i-1}))^{k_i} = (F^g(u_{\pi(i)}) - F^g(u_{\pi(i)-1}))^{l_{\pi(i)}}$$
for every i = 1, ..., n.
2
6.2 A Bivariate Process
In this section we will examine a bivariate Q-Markov process, whose construction is
based on the empirical process.
Let S be a collection of simple flows as in Definition 3.2.10.
Proposition 6.2.1 Let (Zj , Xj )j=1,...,n be a sequence of independent identically distributed random variables on T × R with distribution F × G. Let
$$X_A := \sum_{j=1}^{n} I_{\{Z_j \in A\}}, \qquad Y_A := \sum_{j=1}^{n} X_j\,I_{\{Z_j \in A\}}, \qquad A \in \mathcal{A}.$$
Then:
(i) the process (X, Y) := (X_A, Y_A)_{A∈A} is Q-Markov with the transition system Q given by
$$Q_{BB'}((k,y);\{m\}\times\Gamma) = Q_{BB'}(k;\{m\}\times\Gamma) := \binom{n-k}{m-k}\left(\frac{F(B'\setminus B)}{1-F(B)}\right)^{m-k}\left(1-\frac{F(B'\setminus B)}{1-F(B)}\right)^{n-m} G^{*m}(\Gamma)$$
for every k, m ∈ {0, 1, ..., n}, k ≤ m, for every y ∈ R, Γ ∈ B(R) and for every B, B' ∈ A(u), B ⊆ B' (here G^{*m} denotes the m-fold convolution G ∗ ... ∗ G);
(ii) for every right-continuous flow f : [0, a] → A(u) with f(a) = T, the processes X^f := (X_{f(t)})_{t∈[0,a]}, Y^f := (Y_{f(t)})_{t∈[0,a]} can be written as
$$X^f_t = \sum_{j=1}^{n} I_{\{Z^f_j \le t\}}, \qquad Y^f_t = \sum_{j=1}^{n} X_j\,I_{\{Z^f_j \le t\}}, \qquad t \in [0, a]$$
where (Z^f_j)_{j=1,...,n} is a sequence of independent identically distributed random variables on [0, a] with distribution F^f := F ∘ f; and
(iii) if the collection S can be chosen such that for every flow f ∈ S the function F^f is differentiable, then the generator of the process (X^f, Y^f) at time s is
$$(G^f_s h)(k, y) = \lambda_s(k)\cdot\int_{\mathbb{R}} (h(k+1, z) - h(k, y))\,G^{*(k+1)}(dz)$$
for k = 0, 1, ..., n−1; y ∈ R, with
$$\lambda_s(k) := (n-k)\,\frac{(F^f)'(s)}{1-F^f(s)}$$
and domain D(G^f_s) equal to B({0, 1, ..., n} × R), the space of all bounded functions h : {0, 1, ..., n} × R → R.
Comment 6.2.2 The process Y may not be set-Markov, as can be seen from the following example: let T = [0, 1], A = {[0, t]; t ∈ T}, let X_1, X_2 be independent random variables with common distribution (1/2)δ_1 + (1/2)δ_2, and let Z_1, Z_2 be i.i.d. random variables on T, independent of X_1, X_2; the process Y := (Y_t)_{t∈T} defined by
$$Y_t := \sum_{j=1}^{2} X_j\,I_{\{Z_j \in [0,t]\}}, \quad t \in [0, 1]$$
is not Markov: P[Y_1 > 2 | Y_{1/2} = 2] > 0 but P[Y_1 > 2 | Y_{1/2} = 2, Y_{1/4} = 1] = 0.
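The counterexample can be confirmed numerically; the following sketch (ours) estimates the two conditional probabilities by simulation, with Z_1, Z_2 taken uniform on [0, 1] (an assumption, since any atomless choice works).

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo check of Comment 6.2.2: knowing Y_{1/2} = 2 alone leaves
# {Y_1 > 2} possible, while also knowing Y_{1/4} = 1 forces Y_1 = 2.
m = 200_000
Z = rng.uniform(0.0, 1.0, size=(m, 2))
X = rng.choice([1, 2], size=(m, 2))
Y = lambda t: (X * (Z <= t)).sum(axis=1)
Y14, Y12, Y1 = Y(0.25), Y(0.5), Y(1.0)
print((Y1[Y12 == 2] > 2).mean())                 # strictly positive
print((Y1[(Y12 == 2) & (Y14 == 1)] > 2).mean())  # exactly 0
```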
Proof: (i) It is not difficult to see that the family Q := (Q_{BB'})_{B⊆B'} is a transition system. According to Comment 3.2.2, it is enough to prove that for every A ∈ A, B ∈ A(u), for every partition B = ∪_{i=1}^{p} C_i, C_i ∈ C, for every k_i, m ∈ {0, 1, ..., n} with k := Σ_{i=1}^{p} k_i ≤ m, for every y_i ∈ R and for every Γ_i, Γ ∈ B(R),
$$P[X_{A\cup B} = m, Y_{A\cup B} \in \Gamma \mid X_{C_1} = k_1, Y_{C_1} = y_1, \ldots, X_{C_p} = k_p, Y_{C_p} = y_p] = Q_{B,A\cup B}\Big(\Big(k, \sum_{i=1}^{p} y_i\Big); \{m\}\times\Gamma\Big)$$
or, equivalently,
$$P(X_{A\cup B} = m, Y_{A\cup B} \in \Gamma, X_{C_1} = k_1, Y_{C_1} \in \Gamma_1, \ldots, X_{C_p} = k_p, Y_{C_p} \in \Gamma_p) = \int_{\prod_{i=1}^{p} \Gamma_i} Q_{B,A\cup B}\Big(\Big(k, \sum_{i=1}^{p} y_i\Big); \{m\}\times\Gamma\Big)\,P_{(X_{C_1},Y_{C_1},\ldots,X_{C_p},Y_{C_p})}(k_1, dy_1, \ldots, k_p, dy_p) \tag{46}$$
for every Γ_1, ..., Γ_p ∈ B(R) (here P_{(X_{C_1},Y_{C_1},\ldots,X_{C_p},Y_{C_p})} denotes the distribution of the random vector (X_{C_1}, Y_{C_1}, \ldots, X_{C_p}, Y_{C_p})).
The result follows since the left-hand side of (46) is equal to
$$\frac{n!}{\prod_{i=1}^{p} k_i!\,(m-k)!\,(n-m)!}\prod_{i=1}^{p}(F(C_i))^{k_i}\,(F(A\setminus B))^{m-k}\,(1-F(A\cup B))^{n-m}\; G^{*k_1}(\Gamma_1)\ldots G^{*k_p}(\Gamma_p)\cdot G^{*m}(\Gamma) \tag{47}$$
the right-hand side is equal to
$$\frac{n!}{\prod_{i=1}^{p} k_i!\,(n-k)!}\prod_{i=1}^{p}(F(C_i))^{k_i}\,(1-F(B))^{n-k}\int_{\prod_{i=1}^{p}\Gamma_i} Q_{B,A\cup B}\Big(\Big(k,\sum_{i=1}^{p} y_i\Big);\{m\}\times\Gamma\Big)\,G^{*k_1}(dy_1)\ldots G^{*k_p}(dy_p) \tag{48}$$
and (47) and (48) are easily seen to be equal, using the definition of Q_{B,A∪B}.
(ii) The variables Zjf are the same as in the proof of Proposition 6.1.1.
(iii) For each f ∈ S, let Q^f_{st} := Q_{f(s)f(t)} and let T^f_{st} be the bounded linear operator associated to Q^f_{st}. We have
$$(T^f_{st}h)(k,y) - h(k,y) = \sum_{m=k}^{n}\int_{\mathbb{R}} (h(m,z)-h(k,y))\,Q^f_{st}((k,y);\{m\}\times dz) = \sum_{m=k}^{n}\binom{n-k}{m-k}(F^f(t)-F^f(s))^{m-k}\cdot\frac{(1-F^f(t))^{n-m}}{(1-F^f(s))^{n-k}}\int_{\mathbb{R}}(h(m,z)-h(k,y))\,G^{*m}(dz)$$
for every function h ∈ B({0, 1, ..., n} × R). By definition, the generator of the process (X^f, Y^f) at time s is
$$(G^f_s h)(k,y) := \lim_{t\to s}\frac{(T^f_{st}h)(k,y) - h(k,y)}{t-s} \quad \text{uniformly in } k, y.$$
The result follows since
$$\lim_{t\to s}\frac{(F^f(t)-F^f(s))^{m-k}}{t-s} = \begin{cases} (F^f)'(s) & \text{if } m = k+1 \\ 0 & \text{if } m > k+1. \end{cases}$$
2
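Here is a short simulation sketch (ours) of the bivariate process, with T = [0, 1], F uniform and G = N(1, 1), all choices made only for illustration: given m − k new points in B'\B, the increment Y_{B'} − Y_B should be a sum of m − k independent draws from G, i.e. have distribution G^{∗(m−k)}.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 40
trials = 50_000
Z = rng.uniform(0.0, 1.0, size=(trials, n))     # locations, law F
Xj = rng.normal(1.0, 1.0, size=(trials, n))     # marks, law G = N(1, 1)
B, Bp = 0.2, 0.5                                # B = [0, 0.2], B' = [0, 0.5]
XB = (Z <= B).sum(axis=1)
new = (Z > B) & (Z <= Bp)
dX = new.sum(axis=1)                            # X_{B'} - X_B
dY = (Xj * new).sum(axis=1)                     # Y_{B'} - Y_B
sel = (XB == 8) & (dX == 12)                    # condition on k = 8, m - k = 12
print(dY[sel].mean(), 12 * 1.0)                 # mean of G^{*12} is 12 * E[G]
```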
The next theorem shows that the finite dimensional distributions of the set-indexed process constructed in the previous proposition are completely characterized
by its generator, provided that a certain consistency assumption holds.
Theorem 6.2.3 For each f ∈ S, f : [0, a] → A(u), let F f be a differentiable probability distribution function on [0, a].
If {G^f := (G^f_s)_s; f ∈ S} is a collection of families of linear operators on B({0, 1, ..., n} × R) given by
$$(G^f_s h)(k, y) = \lambda_s(k)\cdot\int_{\mathbb{R}} (h(k+1, z) - h(k, y))\,G^{*(k+1)}(dz)$$
for k = 0, 1, ..., n−1; y ∈ R, with
$$\lambda_s(k) := (n-k)\,\frac{(F^f)'(s)}{1-F^f(s)}$$
and domain D(G^f_s) equal to the space B({0, 1, ..., n} × R), and we assume that there exists a probability measure F on (T, σ(A)) such that
$$F \circ f = F^f, \quad \forall f \in S$$
then there exists a probability measure P on the product space ({0, 1, . . . , n} × R)C
under which:
1. the coordinate-variable process (X, Y ) := (XC , YC )C∈C defined on this space is a
version of the process constructed in Proposition 6.2.1; and
2. ∀f ∈ S, the generator of the process (X f , Y f ) := (Xf (t) , Yf (t) )t at time s is Gsf .
Proof: For each f ∈ S, the operator G^f_s is the generator at time s of the semigroup T^f := (T^f_{st})_{s<t} associated to the transition system Q^f := (Q^f_{st})_{s<t} defined by
$$Q^f_{st}((k,y);\{m\}\times\Gamma) = Q^f_{st}(k;\{m\}\times\Gamma) := \binom{n-k}{m-k}\left(\frac{F^f(t)-F^f(s)}{1-F^f(s)}\right)^{m-k}\left(1-\frac{F^f(t)-F^f(s)}{1-F^f(s)}\right)^{n-m} G^{*m}(\Gamma)$$
for every k, m ∈ {0, 1, . . . , n}, k ≤ m and for every y ∈ R, Γ ∈ B(R). The result will
follow by Theorem 4.1.3, provided that the conditions of this theorem are satisfied.
The fact that the collection {Q^f; f ∈ S} satisfies Assumption 3.3.6 follows as for the empirical process (see the proof of Theorem 6.1.3), since Q^f_{st} depends on f, s, t only through F^f(s) and F^f(t).
It remains to prove that the collection {Q^f; f ∈ S} satisfies Assumption 3.3.7 with µ := δ_{(0,0)}. Let ord1 = {A_0 = ∅', A_1, ..., A_n} and ord2 = {A_0 = ∅', A'_1, ..., A'_n} be two consistent orderings of the same finite semilattice A' with A_i = A'_{π(i)}, ∀i, where π is a permutation of {1, ..., n} with π(1) = 1, and denote f := f_{A',ord1}, g := f_{A',ord2} with f(t_i) = ∪_{j=1}^{i} A_j, g(u_i) = ∪_{j=1}^{i} A'_j and t_0 = u_0 = 0. We want to prove that
$$\int_{\mathbb{R}^{2n}} I_{\{k_1\}\times\Gamma_1}(x_1, x'_1)\prod_{i=2}^{n} I_{\{k_i\}\times\Gamma_i}(x_i - x_{i-1},\, x'_i - x'_{i-1})\; Q^f_{t_{n-1}t_n}((x_{n-1}, x'_{n-1}); dx_n\times dx'_n)\ldots Q^f_{0t_1}((0,0); dx_1\times dx'_1)$$
$$= \int_{\mathbb{R}^{2n}} I_{\{k_1\}\times\Gamma_1}(y_1, y'_1)\prod_{i=2}^{n} I_{\{k_i\}\times\Gamma_i}(y_{\pi(i)} - y_{\pi(i)-1},\, y'_{\pi(i)} - y'_{\pi(i)-1})\; Q^g_{u_{n-1}u_n}((y_{n-1}, y'_{n-1}); dy_n\times dy'_n)\ldots Q^g_{0u_1}((0,0); dy_1\times dy'_1) \tag{49}$$
for every k_i ∈ {0, 1, ..., n} with Σ_{i=1}^{n} k_i ≤ n and for every Γ_1, ..., Γ_n ∈ B(R).
The left-hand side of (49) is equal to
$$\frac{n!}{\prod_{i=1}^{n} k_i!\,(n-\sum_{i=1}^{n} k_i)!}\prod_{i=1}^{n}(F^f(t_i)-F^f(t_{i-1}))^{k_i}\,(1-F^f(t_n))^{n-\sum_{i=1}^{n} k_i}\cdot\int_{\mathbb{R}^n} I_{\Gamma_1}(x'_1)\prod_{i=2}^{n} I_{\Gamma_i}(x'_i - x'_{i-1})\,G^{*k_n}(dx'_n)\ldots G^{*k_1}(dx'_1) \tag{50}$$
since
$$Q^f_{0t_1}(0;\{k_1\}\times dx'_1) = \frac{n!}{k_1!\,(n-k_1)!}(F^f(t_1))^{k_1}(1-F^f(t_1))^{n-k_1}\cdot G^{*k_1}(dx'_1);$$
$$Q^f_{t_1t_2}(k_1;\{k_1+k_2\}\times dx'_2) = \frac{(n-k_1)!}{k_2!\,(n-k_1-k_2)!}\cdot\frac{(F^f(t_2)-F^f(t_1))^{k_2}(1-F^f(t_2))^{n-k_1-k_2}}{(1-F^f(t_1))^{n-k_1}}\cdot G^{*k_2}(dx'_2);$$
etc. On the right-hand side of (49) we write
$$\prod_{i=2}^{n} I_{\{k_i\}\times\Gamma_i}(y_{\pi(i)} - y_{\pi(i)-1},\, y'_{\pi(i)} - y'_{\pi(i)-1}) = \prod_{i=2}^{n} I_{\{l_i\}\times\Sigma_i}(y_i - y_{i-1},\, y'_i - y'_{i-1})$$
with l_i := k_{π^{-1}(i)}, Σ_i := Γ_{π^{-1}(i)}, and therefore this side becomes
$$\frac{n!}{\prod_{i=1}^{n} l_i!\,(n-\sum_{i=1}^{n} l_i)!}\prod_{i=1}^{n}(F^g(u_i)-F^g(u_{i-1}))^{l_i}\,(1-F^g(u_n))^{n-\sum_{i=1}^{n} l_i}\cdot\int_{\mathbb{R}^n} I_{\Sigma_1}(y'_1)\prod_{i=2}^{n} I_{\Sigma_i}(y'_i - y'_{i-1})\,G^{*l_n}(dy'_n)\ldots G^{*l_1}(dy'_1). \tag{51}$$
In order to prove that (50) is equal to (51) we recall that
$$(F^f(t_i) - F^f(t_{i-1}))^{k_i} = (F^g(u_{\pi(i)}) - F^g(u_{\pi(i)-1}))^{l_{\pi(i)}}$$
for every i = 1, ..., n (see the proof of Theorem 6.1.3).
Finally, let s_i := Σ_{j=1}^{i} k_j and r_i := Σ_{j=1}^{i} l_j, i = 1, ..., n, with s_0 = r_0 := 0. We have s_1 = r_1 = k_1 and s_n = r_n = k := Σ_{i=1}^{n} k_i.
We consider the permutation ρ of {1, 2, ..., k} defined by
$$\rho(s_{\pi^{-1}(i)-1} + a) = r_{i-1} + a$$
for every a = 1, 2, ..., k_{π^{-1}(i)}; i = 1, ..., n (ρ is indeed a bijective map since k_{π^{-1}(i)} = l_i = s_{π^{-1}(i)} − s_{π^{-1}(i)-1} = r_i − r_{i-1}).
Knowing that in general ∫_R h(z)\,(F ∗ G)(dz) = ∫_{R^2} h(x+y)\,F(dx)\,G(dy), and using the change of variable u_j := v_{ρ(j)}, we have
$$\int_{\mathbb{R}^n} I_{\Gamma_1}(x'_1)\prod_{i=2}^{n} I_{\Gamma_i}(x'_i - x'_{i-1})\,G^{*k_n}(dx'_n)\ldots G^{*k_1}(dx'_1) = \int_{\mathbb{R}^k} I_{\Gamma_1}\Big(\sum_{j=1}^{s_1} u_j\Big)\prod_{i=2}^{n} I_{\Gamma_i}\Big(\sum_{j=s_{i-1}+1}^{s_i} u_j\Big)\,G(du_k)\ldots G(du_1)$$
$$= \int_{\mathbb{R}^k} I_{\Gamma_1}\Big(\sum_{j=1}^{s_1} v_{\rho(j)}\Big)\prod_{i=2}^{n} I_{\Gamma_i}\Big(\sum_{j=s_{i-1}+1}^{s_i} v_{\rho(j)}\Big)\,G(dv_k)\ldots G(dv_1)$$
$$= \int_{\mathbb{R}^k} I_{\Sigma_1}\Big(\sum_{l=1}^{r_1} v_l\Big)\prod_{i=2}^{n} I_{\Gamma_{\pi^{-1}(i)}}\Big(\sum_{l=r_{i-1}+1}^{r_i} v_l\Big)\,G(dv_k)\ldots G(dv_1)$$
$$= \int_{\mathbb{R}^n} I_{\Sigma_1}(y'_1)\prod_{i=2}^{n} I_{\Sigma_i}(y'_i - y'_{i-1})\,G^{*l_n}(dy'_n)\ldots G^{*l_1}(dy'_1).$$
This concludes the proof that (50) is equal to (51).
2
6.3 A Jump Process
In this section we will construct a simple Q-Markov process with ‘jumps’ and we will
see what consistency condition has to be imposed on the generator of this process.
Let S be a collection of simple flows as in Definition 3.2.10.
Proposition 6.3.1 Let (Zj )j≥1 be a sequence of independent identically distributed
random variables on T with distribution F , Λ an independent random variable with
density fΛ : (0, ∞) → R, and N a random variable such that:
• N is conditionally independent of (Zj )j≥1 given Λ; and
• the conditional distribution of N given Λ is Poisson with mean Λ.
Let
$$X_A := \sum_{j=1}^{N} I_{\{Z_j \in A\}}, \quad A \in \mathcal{A}.$$
Then:
Then:
(i) the process X := (X_A)_{A∈A} is Q-Markov with the transition system Q given by
$$Q_{BB'}(k;\{m\}) := \frac{(F(B'\setminus B))^{m-k}}{(m-k)!}\cdot\frac{\int_0^\infty e^{-\lambda F(B')}\lambda^m f_\Lambda(\lambda)\,d\lambda}{\int_0^\infty e^{-\lambda F(B)}\lambda^k f_\Lambda(\lambda)\,d\lambda}$$
for every k, m ∈ {0, 1, 2, ...}, k ≤ m, and for every B, B' ∈ A(u), B ⊆ B';
(ii) for every right-continuous flow f : [0, a] → A(u) with f(a) = T, the process X^f := (X_{f(t)})_{t∈[0,a]} can be written as
$$X^f_t = \sum_{j=1}^{N} I_{\{Z^f_j \le t\}}, \quad t \in [0, a]$$
where (Z^f_j)_{j≥1} is a sequence of independent identically distributed random variables on [0, a] with distribution F^f := F ∘ f; and
(iii) if the collection S can be chosen such that for every flow f ∈ S the function F f
is differentiable, then the generator of the process X f at time s is
(Gsf h)(k) = λs (k) · (h(k + 1) − h(k)),
with
k = 0, 1, 2, . . .
R ∞ −λF f (s) k+1
e
λ fΛ (λ)dλ
λs (k) := (F ) (s) · 0R ∞ −λF f (s) k
f 0
0
e
λ fΛ (λ)dλ
and domain D(Gsf ) equal to the space B({0, 1, 2 . . .}) of all bounded functions
h : {0, 1, 2, . . .} → R.
CHAPTER 6. EXAMPLES
159
Proof: (i) We know that for any disjoint sets C1 , . . . , Cp ∈ C, the variables XC1 , . . . ,
XCp are conditionally independent given Λ, and the distribution of each XC , C ∈ C
given Λ = λ is Poisson with mean λF (C).
From here we can conclude that ∀A ∈ A, ∀B ∈ A(u)
FB ⊥ σ(XA\B ) | σ(Λ)
where (FB )B∈A(u) is the minimal filtration of the process X.
(In order to show that E[h(XA\B )·Y |Λ] = E[h(XA\B )|Λ]E[Y |Λ] for any FB -measurable
function Y , it is enough to assume that Y =
Qp
i=1
hi (XCi ) for some disjoint Ci ∈
C, Ci ⊆ B and some hi ∈ B(R).)
Let h ∈ B(R) be arbitrary. Then ∀A ∈ A, ∀B ∈ A(u)
E[h(XA\B )|FB ] = E[E[h(XA\B )|FB ∨ σ(Λ)]|FB ]
= E[E[h(XA\B )|σ(Λ)]|FB ]
= E[h0 (Λ)|FB ].
In order to prove the process X is set-Markov, it is enough to show that FB ⊥
σ(Λ) | σ(XB ) or, equivalently, for any partition B = ∪pi=1 Ci , Ci ∈ C
E[h0 (Λ)|XC1 , . . . , XCp ] = E[h0 (Λ)|XB ].
Since Λ has an absolutely continuous distribution (with density fΛ (λ)), the distribution of Λ given XC , for C ∈ C(u) is also absolutely continuous; we denote with
fΛ|XC (λ|XC ) its density function.
Using Bayes theorem, if k =
Pp
i=1
ki we have
fΛ|XC1 ,...,XCp (λ|k1 , . . . , kp ) = R ∞
0
fXC1 ,...,XCp |Λ (k1 , . . . , kp |λ)fΛ (λ)
fXC1 ,...,XCp |Λ (k1 , . . . , kp |λ)fΛ (λ)dλ
Qp
1
· (λF (Ci ))ki ] · fΛ (λ)
ki !
−λF (Ci ) · 1 · (λF (C ))ki ] · f (λ)dλ
i
Λ
i=1 [e
ki !
i=1 [e
−λF (Ci )
= R ∞ Qp
0
=
·
(λF (B))k
· fΛ (λ)
k!
R∞
k
(λF
(B))
−λF (B) ·
· fΛ (λ)dλ
0 e
k!
e−λF (B) ·
fXB |Λ (k|λ)fΛ (λ)
0 fXB |Λ (k|λ)fΛ (λ)dλ
= fΛ|XB (λ|k).
= R∞
CHAPTER 6. EXAMPLES
160
Hence
Z
E[h0 (Λ)|XC1 , . . . , XCp ] =
Z
=
R
R
h0 (λ)fΛ|XC1 ,...,XCp (λ|XC1 , . . . , XCp )dλ
h0 (λ)fΛ|XB (λ|XB )dλ = E[h0 (Λ)|XB ].
It is not diffcult to see that the family Q := (QBB 0 )B⊆B 0 is a transition system. We
prove now that ∀A ∈ A, B ∈ A(u), QB,A∪B is a version of the conditional distribution
of XA∪B given XB : for 0 ≤ k ≤ m
P (XB = k, XA\B = m − k)
P (XB = k)
R∞
0 P [XB = k|Λ = λ] · P [XA\B = m − k|Λ = λ] · fΛ (λ)dλ
R∞
=
0 P [XB = k|Λ = λ] · fΛ (λ)dλ
R ∞ −λF (B) 1
1
· (λF (A\B))m−k ] · fΛ (λ)dλ
· k! · (λF (B))k ] · [e−λF (A\B) · (m−k)!
0 [e
P [XA∪B = m|XB = k] =
=
R∞
0
[e−λF (B) ·
1
k!
· (λF (B))k ] · fΛ (λ)dλ
R
(F (A\B))m−k 0∞ e−λF (A∪B) λm fΛ (λ)dλ
=
· R ∞ −λF (B) k
.
(m − k)!
λ fΛ (λ)dλ
0 e
(ii) The argument is the same as for the empirical process (see the proof of
Proposition 6.1.1).
(iii) For each f ∈ S, let Qfst := Qf (s)f (t) and Tstf the bounded linear operator
associated to Qfst . We have
(Tstf h)(k) − h(k) =
X
(h(m) − h(k))Qfst (k; {m}) =
m≥k+1
R
f
(F f (t) − F f (s))m−k 0∞ e−λF (t) λm fΛ (λ)dλ
(h(m) − h(k)) ·
· R ∞ −λF f (s) k
(m
−
k)!
λ fΛ (λ)dλ
0 e
m≥k+1
X
for every function h ∈ B({0, 1, 2, . . .}). By definition, the generator of the process
X f at time s is
(Gsf h)(k)
The result follows since
(Tstf h)(k) − h(k)
uniformly in k.
:= lim
t→s
t−s

(F f (t) − F f (s))m−k  (F f )0 (s) if m = k + 1
=
.
lim
t→s
t−s
0
if m > k + 1
CHAPTER 6. EXAMPLES
161
2
The next theorem shows that the finite dimensional distributions of the process
constructed in the previous proposition are completely characterized by its generator,
provided that a certain consistency assumption holds. This assumption is the same
as for the empirical process.
Theorem 6.3.2 For each f ∈ S, f : [0, a] → A(u), let F f be a differentiable probability distribution function on [0, a].
If {G f := (Gsf )s ; f ∈ S} is a collection of families of linear operators on B({0, 1, 2,
. . .}) given by
(Gsf h)(k) = λs (k) · (h(k + 1) − h(k)),
with
k = 0, 1, 2, . . .
R ∞ −λF f (s) k+1
e
λ fΛ (λ)dλ
λs (k) = (F ) (s) · 0R ∞ −λF f (s) k
f 0
0
e
λ fΛ (λ)dλ
and domain D(Gsf ) equal to the space B({0, 1, 2, . . .}), and we assume that there exists
a probability measure F on (T, σ(A)) such that
F ◦ f = F f , ∀f ∈ S
then there exists a probability measure P on the product space ({0, 1, 2, . . .}C under
which:
1. the coordinate-variable process X := (XC )C∈C defined on this space is a version
of the process constructed in Proposition 6.3.1; and
2. ∀f ∈ S, the generator of the process X f := (Xf (t) )t at time s is Gsf .
Proof: For each f ∈ S, the operator Gsf is the generator at time s of the semigroup
T f := (Tstf )s<t associated to the transition system Qf := (Qfst )s<t defined by
R
Qfst (k; {m})
f
(F f (t) − F f (s))m−k 0∞ e−λF (t) λm fΛ (λ)dλ
:=
· R ∞ −λF f (s) k
(m − k)!
λ fΛ (λ)dλ
0 e
for every k, m ∈ {0, 1, 2, . . .}, k ≤ m. The result will follow by Theorem 4.1.3, provided that the conditions of this theorem are satisfied.
CHAPTER 6. EXAMPLES
162
The fact that the collection {Qf ; f ∈ S} satisfies Assumption 3.3.6 follows as for
the empirical process (see the proof of Theorem 6.1.3) since Qfst depends only on f, s, t
only through F f (s) and F f (t).
It remains to prove that the collection {Qf ; f ∈ S} satisfies Assumption 3.3.7
with µ := δ0 . Let ord1={A0 = ∅0 , A1 , . . . , An } and ord2={A0 = ∅0 , A01 , . . . , A0n } be
two consistent orderings of the same finite semilattice A0 with Ai = A0π(i) , ∀i, where
π is a permutation of {1, . . . , n} with π(1) = 1, and denote f := fA0 ,ord1 , g := fA0 ,ord2
with f (ti ) = ∪ij=1 Aj , g(ui ) = ∪ij=1 A0j and t0 = u0 = 0. We want to prove that
Z
Rn
I{k1 } (x1 )
I{ki } (xi − xi−1 )Qftn−1 tn (xn−1 ; dxn ) . . . Qf0t1 (0; dx1 ) =
(52)
i=2
Z
Rn
n
Y
n
Y
I{k1 } (y1 )
i=2
I{ki } (yπ(i) − yπ(i)−1 )Qgun−1 un (yn−1 ; dyn ) . . . Qg0u1 (0; dy1 )
for every ki ∈ {0, 1, 2, . . .}.
The left-hand side of (52) is equal to
Qn
n
Y
1
(F f (ti ) − F f (ti−1 ))ki ·
Z ∞
i=1 ki ! i=1
since
Qf0t1 (0; {k1 }) =
0
e−λF
f (t
n)
Pn
λ
ki
· fΛ (λ)dλ
(53)
(F f (t1 ))k1 Z ∞ −λF f (t1 ) k1
·
e
λ fΛ (λ)dλ;
k1 !
0
R
Qft1 t2 (k1 ; {k1
i=1
f
(F f (t2 ) − F f (t1 ))k2 0∞ e−λF (t2 ) λk1 +k2 fΛ (λ)dλ
· R ∞ −λF f (t1 ) k1
+ k2 }) =
;
k2 !
λ fΛ (λ)dλ
0 e
etc. On the right-hand side of (52) we write
n
Y
I{ki } (yπ(i) − yπ(i)−1 ) =
i=2
n
Y
I{li } (yi − yi−1 )
i=2
with li := kπ−1 (i) , and therefore this side becomes
Qn
1
n
Y
(F g (ui ) − F g (ui−1 ))li ·
i=1 li ! i=1
Z ∞
0
e−λF
g (un)
λ
Pn
i=1
li
· fΛ (λ)dλ.
(54)
Finally, we observe that (53) is equal to (54) by the same argument used in the proof
of Theorem 6.1.3 for the empirical process.
2
Chapter 7
Strong Markov Properties
In this chapter we will introduce two different types of strong Markov properties
for set-indexed processes, which can be associated to the sharp Markov property,
respectively the Q-Markov property. In the first section, we will introduce the notions
of ‘adapted set’ and ‘optional set’ and we will define their associated σ-fields.
Most of the results of this chapter (except Section 7.3) appear in [5].
7.1
Adapted Sets and Optional Sets
In this section we will introduce adapted sets and optional sets. The ‘adapted set’
is a generalization of the classical notion of ‘stopping time’, the total ordering of the
real line being replaced by set-inclusion; we are not using the term ‘stopping set’ for
this notion, since this term has already been introduced in the literature ( [38], [39])
for a different object. The ‘optional set’ is a generalization of the classical notion
of ‘optional time’. (See Appendix B.2 for the definition and some properties of the
stopping times and optional times.)
We will assume that the approximating functions gn have the following definition:
gn (B) := ∩D∈An (u);B⊆D0 D, ∀B ∈ A(u).
This is the case of many examples of indexing collections, including the lower layers
of [0, 1]d (or [−1, 1]d ) and the rectangles [0, z], z ∈ [0, 1]d (or z ∈ [−1, 1]d ).
163
CHAPTER 7. STRONG MARKOV PROPERTIES
164
From the separability from above of the indexing collection A we have
∀D, B ∈ A(u), D ⊆ B 0 ⇒ ∃n such that gn (D) ⊆ B 0
(55)
since D ⊆ B 0 if and only if (B 0 )c ⊆ Dc = ∪n (gn (D))c and (B 0 )c is a compact set.
Let (FB )B∈A(u) be the extension to A(u) of a set-indexed filtration and (FBr )B∈A(u)
its minimal outer-continuous filtration, defined by FDr := ∩n Fgn (D) , D ∈ A(u). Then
FDr = ∩B∈A(u),D⊆B 0 FB , D ∈ A(u).
(56)
The following assumption gives the approximation from below for a set in A(u).
Assumption 7.1.1 For any B ∈ A(u) there exists a monotone increasing sequence
(dn (B))n≥1 ⊆ A(u) such that B 0 = ∪n dn (B), dn (B) ⊆ dn+1 (B)0 ∀n.
Consequently,
∀D, B ∈ A(u), D ⊆ B 0 ⇒ ∃n such that D ⊆ dn (B)0 .
(57)
A random set α is a function with values in A(u), defined on a measurable space
(Ω, F).
Definition 7.1.2 A random set α is called an adapted set of a filtration (FB )B∈A(u)
if {α ⊆ B} ∈ FB for every B ∈ A(u); it is called an optional set of a filtration
(FB )B∈A(u) if {α ⊆ B 0 } ∈ FB for every B ∈ A(u).
To any adapted set α we can associate the σ-field:
def
Fα = {F ∈ F : F ∩ {α ⊆ B} ∈ FB ∀B ∈ A(u)}.
To any optional set α we can associate the σ-field:
def
Fαr = {F ∈ F : F ∩ {α ⊆ B 0 } ∈ FB ∀B ∈ A(u)}.
The following facts are completely analogous to the classical case.
Lemma 7.1.3 (a) If α ≡ B ∈ A(u), then α is an adapted set and Fα = FB , Fαr =
FBr .
CHAPTER 7. STRONG MARKOV PROPERTIES
165
(b) A random set is an optional set of the filtration (FB )B∈A(u) if and only if it is
an adapted set of the filtration (FBr )B∈A(u) . In particular, any adapted set is an
optional set.
(c) If α is an optional set, then Fαr = {F ∈ F : F ∩ {α ⊆ B} ∈ FBr ∀B ∈ A(u)}. In
particular, if α is an adapted set then Fα ⊆ Fαr .
(d) If α is an optional set and β is an adapted set such that α ⊆ β 0 , then Fαr ⊆ Fβ .
(e) If α and β are adapted sets (respectively optional sets) such that α ⊆ β, then
Fα ⊆ Fβ (respectively Fαr ⊆ Fβr ).
Proof: (a) Clear.
(b) If α is an optional set of the filtration (FB )B∈A(u) , then {α ⊆ B} = ∩n≥m {α ⊆
gn (B)0 } ∈ Fgm (B) ∀m ≥ 1 and hence {α ⊆ B} ∈ ∩m Fgm (B) = FBr . Conversely, if α is
an adapted set of the filtration (FBr )B∈A(u) , then {α ⊆ B 0 } = ∪n {α ⊆ dn (B)} ∈ FB ,
because {α ⊆ dn (B)} ∈ Fdrn (B) ⊆ FB (using property (56) and the fact that dn (B) ⊆
B 0 ).
(c) Same type as argument as (b).
(d) For each F ∈ Fαr we have F ∩ {β ⊆ B} = (F ∩ {α ⊆ B 0 }) ∩ {β ⊆ B} ∈
FB ∀B ∈ A(u) i.e. F ∈ Fβ .
(e) Suppose that both α and β are adapted sets. For each F ∈ Fα we have
F ∩ {β ⊆ B} = (F ∩ {α ⊆ B}) ∩ {β ⊆ B} ∈ FB ∀B ∈ A(u) i.e. F ∈ Fβ . The case
when both α and β are optional sets is similar.
2
Comment 7.1.4 If α is a discrete random set i.e., it takes on only countably many
configurations, then {α = B} = {α ⊆ B}\ ∪D∈range(α),D⊂B {α ⊆ D} (where ⊂ denotes
strict inclusion) and {α ⊆ B} = ∪D∈range(α),D⊆B {α = D}.
Hence a discrete random set α is an adapted set (respectively an optional set) if
and only if {α = B} ∈ FB ∀B ∈ A(u) (respectively {α = B} ∈ FBr ∀B ∈ A(u)). In
this case
Fα = {F ∈ F : F ∩ {α = B} ∈ FB ∀B ∈ A(u)}
(respectively Fαr = {F ∈ F : F ∩ {α = B} ∈ FBr ∀B ∈ A(u)} ).
CHAPTER 7. STRONG MARKOV PROPERTIES
166
The next result allows us to get around the difficulty of defining ‘progressive
measurability’ in the set-indexed case.
Proposition 7.1.5 If X := (XA )A∈A is an adapted monotone outer-continuous process, then
(a) Xα is Fα -measurable for any discrete adapted set α; and
(b) Xα is Fαr -measurable for any optional set α.
Proof: (a) For any arbitrary a ∈ R, {Xα < a} ∩ {α = B} = {XB < a} ∩ {α = B} ∈
FB , ∀B ∈ A(u) i.e., {Xα < a} ∈ Fα .
(b) By the monotone outer-continuity of the process, Xα = limn Xgn (α) and
{Xα < a} = ∪N ≥m ∩n≥N {Xgn (α) < a} ∈ Fgm (α) , ∀m ≥ 1
using (a), since gn (α) is a discrete adapted set. Hence {Xα < a} ∈ ∩m Fgm (α) = Fαr .
2
Proposition 7.1.6 If α is an optional set then gn (α) is a (discrete) adapted set and
Fαr = ∩n Fgn (α) .
Proof: We claim that the following relation holds:
{gn (α) = B} = {α ⊆ B 0 }\
[
{α ⊆ D0 } ∀B ∈ A(u)
(58)
D∈An (u),D⊂B
where ⊂ denotes strict inclusion.
To see this, suppose that gn (α) = B. Clearly α ⊆ gn (α)0 = B 0 . If there were
some D ∈ An (u), D ⊂ B such that α ⊆ D0 , then by the definition of gn we would
have gn (α) ⊆ D, which is impossible. Conversely, suppose that α ⊆ B 0 and α 6⊆
D0 ∀D ∈ A(u), D ⊂ B. Since α ⊆ B 0 , there exists n such that gn (α) ⊆ B. Since
gn (α) ∈ A(u) and α ⊆ gn (α)0 , we cannot have gn (α) ⊂ B; hence gn (α) = B.
From (58) it follows that {gn (α) = B} ∈ FB ∀B ∈ A(u) i.e., gn (α) is an adapted
set.
CHAPTER 7. STRONG MARKOV PROPERTIES
167
Let us prove now that Fαr = ∩n Fgn (α) . Since α ⊆ gn (α)0 , we have Fαr ⊆ Fgn (α) by
Lemma 7.1.3, (d). Conversely, let F ∈ ∩n Fgn (α) . For each B ∈ A(u)
F ∩ {α ⊆ B} = ∩n≥m (F ∩ {gn (α) ⊆ gn (B)}) ∈ Fgm (B) , ∀m ≥ 1.
Hence F ∩ {α ⊆ B} ∈ ∩m Fgm (B) = FBr i.e. F ∈ Fαr .
2
We conclude this section with some additional properties of adapted sets and
optional sets.
Proposition 7.1.7 (a) If α is an adapted set (respectively an optional set), then for
every B ∈ A(u), {α ⊆ B} ∈ Fα (respectively {α ⊆ B} ∈ Fαr ).
(b) If α is a discrete adapted set, then for every B ∈ A(u), {α = B} ∈ Fα and
Fα |{α=B} = FB |{α=B} .
(c) If α is an optional set, then for every B ∈ A(u), {α = B} ∈ FBr , F ∩ {α = B} ∈
FBr , ∀F ∈ Fαr and
Fαr |{α=B} = FBr |{α=B} .
Proof: (a) Let α be an adapted set and let B ∈ A(u) be arbitrary. Then {α ⊆ B} ∩
{α ⊆ B 0 } = {α ⊆ B ∩ B 0 } ∈ FB∩B 0 ⊆ FB 0 for every B 0 ∈ A(u) i.e. {α ⊆ B} ∈ Fα .
The case when α is an optional set is similar.
(b) Recall the definition of a conditional σ-field F|S := {F ∩ S; F ∈ F}; note
that F|S = {F ∈ F; F ⊆ S} if S ∈ F.
Let B, B 0 ∈ A(u) be arbitrary. The set {α = B} ∩ {α ⊆ B 0 } coincides with the set
{α = B} if B ⊆ B 0 and it is ∅ otherwise; in both cases it belongs to FB 0 . Since this
happens for any B 0 ∈ A(u), by the definition of Fα it follows that {α = B} ∈ Fα .
As for the second equality, consider first F ∈ Fα ; then F ∩ {α = B} ∈ FB .
Conversely, take F ∈ FB , F ⊆ {α = B} (this is the general form of an element of
FB |{α=B} since {α = B} ∈ FB ). Because {α = B} ∈ Fα too, it is enough to show
that F ∈ Fα : let B 0 ∈ A(u) be arbitrary and note that the set F ∩{α ⊆ B 0 } coincides
with F if B ⊆ B 0 and it is ∅ otherwise; in both cases it belongs to FB 0 .
CHAPTER 7. STRONG MARKOV PROPERTIES
(c) For arbitrary m ≥ 1 we have {α = B} =
T
168
n≥m {gn (α)
= gn (B)} ∈ Fgm (B) .
Hence {α = B} ∈ ∩m Fgm (B) = FBr .
If F ∈ Fαr , then F ∩ {α = B} = ∩n≥m (F ∩ {gn (α) = gn (B)}) ∈ Fgm (B) ∀m ≥ 1.
Hence F ∩ {α = B} ∈ ∩m Fgm (B) = FBr .
Finally, to prove the last equality, consider first F ∈ Fαr . Clearly F ∩ {α = B} ∈
FBr . Since {α = B} ∈ FBr an arbitrary element of FBr |{α=B} is of the form F ∈ FBr
with F ⊆ {α = B}. Let us consider such an F . Note that F ∈ Fαr since for arbitrary
B 0 ∈ A(u) we have F ∩ {α ⊆ B 0 } = F ∩ {α = B} ∩ {α ⊆ B 0 } which is ∅ if B 6⊆ B 0 and
F if B ⊆ B 0 and in either case it is in FBr 0 . Finally F = F ∩ {α = B} ∈ Fαr |{α=B} .
2
7.2
The Strong Sharp Markov Property
In this section we will consider a strong Markov property for set-indexed processes
which can be associated to the sharp Markov property, using the optional sets. Our
approach is very similar to the one considered in [39] for stopping sets, the advantage
of our method being that we do not need to assume outer-continuity of either the
filtration or the process.
Let X := (XA )A∈A be a set-indexed process and (FA )A∈A its minimal filtration.
The following σ-fields have been introduced in [39] for any random set α:
def
FαX : = σ({XA I{A⊆α} , I{A⊆α} ; A ∈ A})
def
X
F∂α
: = σ({XA I{A⊆α,A6⊆α0 } , I{A⊆α,A6⊆α0 } ; A ∈ A})
def
FαXc : = σ({XA I{A6⊆α} , I{A6⊆α} ; A ∈ A}).
Note: We have {B ⊆ α} ∈ FαX ∩ FαXc , ∀B ∈ A(u): if B = ∪ki=1 Ai , Ai ∈ A, then
{B ⊆ α} = ∩ki=1 {Ai ⊆ α}.
X
Lemma 7.2.1 (Lemma 4.1, [39]) (a) If α ≡ B ∈ A(u) then FαX = FB , F∂α
=
F∂B , FαXc = FB c .
X
⊆ FαX .
(b) For any random set α we have F∂α
CHAPTER 7. STRONG MARKOV PROPERTIES
169
Lemma 7.2.2 (Lemma 4.6, [39]) If α is a discrete random set, then {α = B} ∈
FB ∀B ∈ A(u) and FαX |{α=B} ⊆ FαX .
Lemma 7.2.3 (a) For any discrete adapted set α we have FαX = Fα .
(b) For any optional set α we have Fαr = ∩n FgXn (α) , FgXn+1 (α) ⊆ FgXn (α) ∀n.
(c) For any optional set α we have FαX ⊆ Fαr .
Proof: (a) To prove that {A ⊆ α} ∈ Fα , note that {A ⊆ α} ∩ {α = B} is ∅ if
A 6⊆ B and it is exactly {α = B} otherwise; in either case this intersection lies in
FB . Similarly {XA ∈ Γ} ∩ {A ⊆ α} ∈ Fα for every Γ ∈ B(R). Conversely, let
F ∈ Fα . Using Lemma 7.2.2, F ∩ {α = B} ∈ FB |{α=B} = FαX |{α=B} ⊆ FαX since
X
{α = B} ∈ F∂α
⊆ FαX . Finally F = ∪B∈range(α) (F ∩ {α = B}) ∈ FαX .
(b) Using (a), FgXn (α) = Fgn (α) and hence Fαr = ∩n Fgn (α) = ∩n FgXn (α) .
(c) For each A ∈ A we have I{A⊆α} = limn I{A⊆gn (α)} and XA I{A⊆α} = limn XA
I{A⊆gn (α)} . Since I{A⊆gn (α)} and XA I{A⊆gn (α)} are FgXn (α) -measurable and the σ-fields
(FgXn (α) )n are decreasing, it follows that the limits I{A⊆α} and XA I{A⊆α} are measurable
with respect to the intersection ∩n FgXn (α) = Fαr .
2
The following theorem generalizes a classical result (see Proposition 4.1.3, [30]).
Theorem 7.2.4 (Theorem 4.7, [39]) If X := (XA )A∈A is a sharp Markov process,
then
X
FαX ⊥ FαXc | F∂α
for any discrete adapted set α.
Ideally, we would like to say that a sharp Markov process X := (XA )A∈A is strong
Markov if for any optional set α, we have
X
Fαr ⊥ FαXc | F∂α
(59)
X
, using Lemma 7.2.3.(c). However, we would
which would imply that FαX ⊥ FαXc | F∂α
rather not introduce a new terminology, since we could not find an example of a
process satisfying this property.
CHAPTER 7. STRONG MARKOV PROPERTIES
170
In what follows we will see how close we can get of the desired conditional independence (59) for an arbitrary sharp Markov process. The following σ-field has been
introduced in [39] for any random set α:
def
X
F[α,g
: = σ({XA 1{A⊆gn (α),A6⊆α0 } , 1{A⊆gn (α),A6⊆α0 } ; A ∈ A}).
n (α)]
Lemma 7.2.5 Suppose that gn (A) ∈ An , ∀A ∈ A, ∀n ≥ 1. Then for every optional
X
X
set α, F∂g
⊆ F[α,g
⊆ FgXn (α) .
n (α)
n (α)]
Proof: The first inclusion is valid for any random set α. To prove it, it will be
X
X
enough to show that the generators of F∂g
are in F[α,g
. Note that the set
n (α)
n (α)]
{A ⊆ gn (α)} ∩ {A 6⊆ gn (α)0 } can be written as the difference
({A ⊆ gn (α)} ∩ {A 6⊆ α0 }) \ ({A ⊆ gn (α)0 } ∩ {A 6⊆ α0 })
and the first set {A ⊆ gn (α)} ∩ {A 6⊆ α0 } lies in F[α,gn (α)] . As for the second set
{A ⊆ gn (α)0 } ∩ {A 6⊆ α0 }, (using (55) and the fact that A ⊆ gk (A)0 ) it can be
written as
∪k ({gk (A) ⊆ gn (α)} ∩ {A 6⊆ α0 }) = ∪k ∩m≥k ({gm (A) ⊆ gn (α)} ∩ {gm (A) 6⊆ α0 })
X
which belongs to F[α,g
since gm (A) ∈ A. A similar argument shows that {XA ∈
n (α)]
Γ} ∩ {A ⊆ gn (α)} ∩ {A 6⊆ gn (α)0 } ∈ F[α,gn (α)] for any Γ ∈ B(R).
To prove the second inclusion we will assume that α is an optional set and we will
X
show that the generators of F[α,g
are in FgXn (α) . Note that {A 6⊆ α0 } = ∩k {gk (A) 6⊆
n (α)]
α} ∈ FαX ⊆ Fαr ⊆ FgXn (α) by Lemma 7.2.3. Hence {A 6⊆ α0 } ∩ {A ⊆ gn (α)} ∈ FgXn (α) .
Finally, a similar argument shows that {XA ∈ Γ} ∩ {A 6⊆ α0 } ∩ {A ⊆ gn (α)} ∈ FgXn (α)
for any Γ ∈ B(R).
2
Note that the proof of the first inclusion in the above lemma works only for those
of our examples for which we have gn : A → An for every n ≥ 1. For one of our
2d -dimensional examples, this inclusion is still valid. More precisely, we have the
following result.
CHAPTER 7. STRONG MARKOV PROPERTIES
171
Lemma 7.2.6 If T = [−1, 1]d and A = {[0, x]; x ∈ T }, then for every optional set
X
X
α, F∂g
⊆ F[α,g
⊆ FgXn (α) .
n (α)
n (α)]
Proof: We will prove only the first inclusion. Using the same argument as in the
proof of the previous lemma, it is enough to show that
X
∩m≥n {gm (A) ∈ [α, gn (α)]} ∈ F[α,g
n (α)]
for every A ∈ A, n ≥ 1.
Let A ∈ A, n ≥ 1 be fixed. Assume that A is in the first quadrant. Then, if we
d
m
write gm (A) = ∪21 Am
i where Ai ∈ A is the component of gm (A) that is in the i-th
quadrant, we have A ⊆ Am
1 .
We claim that the following relation holds:
∩m≥n {gm (A) ∈ [α, gn (α)]} = {A ∈ [α, gn (α)]} ∩ ∩m≥n {Am
1 ∈ [α, gn (α)]}.
Assume that gm (A) ∈ [α, gn (α)] for every m ≥ n. Then A 6⊆ α0 and A ⊆
0
m
0
gm (A) ⊆ gn (α). For each m ≥ n we have Am
1 6⊆ α since A ⊆ A1 and A 6⊆ α . Also
m
Am
1 ⊆ gm (A) ⊆ gn (α). Hence A1 ∈ [α, gn (α)].
Conversely, let m ≥ n be fixed. Since A 6⊆ α0 and A ⊆ gm (A) we have gm (A) 6⊆
α0 .Because Am
1 ⊆ gn (α) we must have gm (A) ⊆ gn (α). Hence gm (A) ∈ [α, gn (α)] and
the proof is complete.
2
X
Since the σ-fields (F[α,g
) may not necessarily be monotonically decreasing,
n (α)] n
we let
X,m
F[α,g
:=
n (α)]
Note that
X
F∂α
⊆
X,m
F[α,g
n (α)]
_
N ≥n
X,m
X
X
F[α,g
and F∂α
:= ∩n F[α,g
.
N (α)]
n (α)]
since {A ⊆ α, A 6⊆ α0 } = ∩k≥n {A ⊆ gk (α), A 6⊆ α0 } ∈
X,m
X,m
F[α,g
, and similarly {XA ∈ Γ, A ⊆ α, A 6⊆ α0 } ∈ F[α,g
for any Γ ∈ B(R); hence
n (α)]
n (α)]
X
X
⊆ F∂α
. Then, for every optional set α
F∂α
X,m
X
⊆ F[α,g
⊆ FgXn (α) .
F∂g
n (α)
n (α)]
(60)
Theorem 7.2.7 Suppose that either gn (A) ∈ An , ∀A ∈ A, ∀n ≥ 1 or T = [−1, 1]d
and A = {[0, x]; x ∈ T }. If X := (XA )A∈A is a sharp Markov process, then for every
X
optional set α, Fαr ⊥ FαXc | F∂α
.
CHAPTER 7. STRONG MARKOV PROPERTIES
172
Proof: The argument is similar to the one used in the proof of Theorem 4.19, [39].
Let Y be a bounded, FαXc -measurable random variable. By a monotone class argument
it is enough to assume that
Y =
k
Y
hi (XAi I{Ai 6⊆α} , I{Ai 6⊆α} )
i=1
where Ai ∈ A, hi : R2 → R continuous and bounded.
Let Yn =
Qk
and bounded.
i=1
hi (XAi I{Ai 6⊆gn (α)} , I{Ai 6⊆gn (α)} ). Note that Yn is FgXn (α)c -measurable
Moreover, since {Ai ⊆ gn (α)}n≥1 is a decreasing sequence with
∩n {Ai ⊆ gn (α)} = {Ai ⊆ α} we have Y = limn Yn
X,m
X
] = E[Yn |F[α,g
].
Using Theorem 7.2.4 and (60), E[Yn |FgXn (α) ] = E[Yn |F∂g
n (α)
n (α)]
X,m
X
Since Fαr = ∩n FgXn (α) and F∂α
= ∩n F[α,g
we get
n (α)]
X,m
X
]
E[Y |Fαr ] = lim E[Yn |FgXn (α) ] = lim E[Yn |F[α,g
] = E[Y |F∂α
n (α)]
n
n
using a generalized form of the Martingale Convergence Theorem.
2
Corollary 7.2.8 Suppose that either gn (A) ∈ An , ∀A ∈ A, ∀n ≥ 1 or T = [−1, 1]d
and A = {[0, x]; x ∈ T }. If X := (XA )A∈A is a sharp Markov process, then for every
X
optional set α, FαX ⊥ FαXc | F∂α
.
Comment 7.2.9 The applicability of the preceding results extends to all sharp
Markov processes, without any requirement for regularity of sample paths or filtrations. Therefore, this holds for all processes with independent increments, including
the Brownian motion on the lower layers, which is known to be a.s. unbounded, and
therefore a.s. discontinuous.
7.3
The Strong Q-Markov Property
In this section we will introduce the strong Q-Markov property in the set-indexed
case. See Appendix B.2 for the strong Q-Markov property for processes indexed by
the real line.
We begin with the definition of a set-indexed strong Q-Markov process.
CHAPTER 7. STRONG MARKOV PROPERTIES
173
Definition 7.3.1 (a) A transition system Q := (QBB 0 )B⊆B 0 is called measurable
if for every flow f , the transition system Qf := (Qfst )s<t is measurable, where
Qfst := Qf (s),f (t) .
(b) A transition system Q := (QBB 0 )B⊆B 0 is called weakly monotone outercontinuous in B 0 if ∀B, B 0 ∈ A(u), B ⊆ B 0 , ∀(Bn0 )n ⊆ A(u) such that
0
⊆ Bn0 , ∀n and ∩n Bn0 = B 0
Bn+1
w
QBBn0 (x; ·) → QBB 0 (x; ·).
(c) Let Q := (QBB 0 )B⊆B 0 be a measurable transition system, which is weakly monotone outer-continuous in B 0 . A set-indexed process X := (XA )A∈A is called
strong Q-Markov with respect to a filtration (FB )B∈A(u) if it is adapted
and monotone outer-continuous and for every adapted set α = f (τ ) with values
on the path of a strictly increasing flow f , for every random set α0 = f (τ 0 ) with
values on the path of the same flow f , such that α ⊆ α0 and τ 0 is Fα -measurable,
and for every bounded continuous function h : R → R
Z
E[h(Xα0 )|Fα ] =
R
h(y)Qαα0 (Xα ; dy).
The process X is called simply strong Q-Markov if it is strong Q-Markov with
respect to its minimal filtration.
The next result gives us the form of an adapted set which takes values on the path
of a strictly increasing flow f .
Lemma 7.3.2 A random set α := f (τ ) with values on the path of a strictly increasing
flow f , is an adapted set with respect to a filtration (FB )B∈A(u) if and only if τ is a
stopping time with respect to the filtration (Ff (t) )t . In this case, Fα = Fτf .
Proof: For necessity, we have {τ ≤ t} = {f (τ ) ⊆ f (t)} ∈ Ff (t) for every t ∈ [0, a]
(using the fact that f is strictly increasing). For sufficiency, let B ∈ A(u) be arbitrary
and tB := inf{t ∈ [0, a]; f (t) 6⊆ B}; note that t ≤ tB if and only if f (t) ⊆ B. Hence
{α ⊆ B} = {f (τ ) ⊆ B} = {τ ≤ tB } ∈ Ff (tB ) ⊆ FB .
CHAPTER 7. STRONG MARKOV PROPERTIES
174
The equality of the two stopped σ-fields is proved similarly.
2
The next result gives the expected correspondence via flows.
Proposition 7.3.3 Let X := (XA )A∈A be a monotone outer-continuous process,
adapted with respect to a filtration (FA )A∈A .
The process X is strong Q-Markov with respect to the filtration (FA )A∈A if and
only if for every strictly increasing right-continuous flow f , the process X f := (Xf (t) )t
is strong Qf -Markov with respect to the filtration (Ff (t) )t , where Qfst := Qf (s),f (t) .
Proof: For every flow f , the transition system Qf is measurable and weakly rightcontinuous in t and the process X f is right-continuous and adapted with respect to
the filtration (Ff (t) )t . The result follows using Lemma 7.3.2 and Proposition B.2.5,
Appendix B.2.
2
Here are some consequences of the previous proposition.
Examples 7.3.4
1. A monotone outer-continuous Brownian motion with vari-
ance measure Λ is strong Q-Markov, where QBB 0 (x; ·) is the normal distribution
with mean 0 and variance ΛB 0 \B (see Proposition B.2.3.(a), Appendix B.2).
2. A monotone outer-continuous stationary compound Poisson process corresponding to the product measure Π(dz; dx) = Λ(dz) × G(dx) (where Λ is a finite positive measure on T and G is a probability measure on R) is strong Q-Markov,
where
Q
BB 0
(x; Γ) :=
n
X Λ B 0 \B
n≥0
n!
G∗n (Γ − x)
where G∗n denotes |G ∗ .{z
. . ∗ G} (see Proposition B.2.3.(b), Appendix B.2).
n times
We will conclude this section with another type of strong Q-Markov property
which is satisfied by the Brownian motion and the stationary compound Poisson
process.
CHAPTER 7. STRONG MARKOV PROPERTIES
175
A random set α is called an A-valued stopping set if α(ω) ∈ A, ∀ω and {α ⊇ A} ∈
FA , ∀A ∈ A; the associated σ-field is defined by
def
Fα = {F ∈ F : F ∩ {α ⊆ B} ∈ FB ∀B ∈ A(u)}.
Proposition 7.3.5 If X := (XA )A∈A is either a monotone outer-continuous Brownian motion with variance measure Λ or a monotone outer-continuous stationary compound Poisson process corresponding to a measure Π(dz; dx) := Λ(dz) × G(dx), then
for all A-valued stopping sets α and α0 such that α ⊆ α0 and Λα0 is Fα -measurable,
and for every h ∈ B(R)
Z
E[h(Xα0 )|Fα ] =
R
h(y)Qαα0 (Xα ; dy)
Proof: The idea is essentially the same as in the classical case (see Proposition B.2.3,
Appendix B.2).
(a) Let X := (XA )A∈A be a monotone outer-continuous Brownian motion with
variance measure Λ.
For each u ∈ R fixed, the process
1
2Λ
A
MA := eiuXA + 2 u
, A∈A
is a martingale on A: let A, A0 ∈ A be such that A ⊆ A0 ; then
1
iuX
E[MA0 |FA ] = MA · E[e
A0
iuX
= MA · E[e
A0
2Λ
\A + 2 u
\A ] · e
A0
\A |FA ]
1 2
u Λ 0
2
A
\A
= MA
(Note that the extension of the process M to A(u) is not of exponential type.)
Using the Optional Sampling Theorem for martingales indexed by directed sets
(Theorem 2.15, [47]) we can conclude that for any A-valued stopping sets α and α0
with α ⊆ α0 we have E[Mα0 |Fα ] = Mα ; because Λα0 is Fα -measurable
E[e
iuXα0
iuXα − 12 u2 Λ
|Fα ] = e
α0
\α =
and the result follows using Lemma 2.6.13, [43].
Z
R
eiuy Qαα0 (Xα ; dy).
CHAPTER 7. STRONG MARKOV PROPERTIES
176
(b) Let X := (XA )A∈A be a monotone outer-continuous stationary compound
Poisson process corresponding to a measure Π(dz; dx) := Λ(dz) × G(dx), where Λ is
a positive measure on T and G is a probability measure on R. Denote with ϕG the
characteristic function of G.
For each u ∈ R fixed, the process
MA := eiuXA −ΛA (ϕG (u)−1) , A ∈ A
is a martingale on A: if A, A0 ∈ A are such that A ⊆ A0 , then
iuX
\A −ΛA0 \A (ϕG (u)−1) |FA ]
iuX
\A ] · e−ΛA0 \A (ϕG (u)−1)
E[MA0 |FA ] = MA · E[e
= MA · E[e
A0
A0
= MA .
For any A-valued stopping sets α and α0 with α ⊆ α0 , we have E[Mα0 |Fα ] = Mα ;
because Λα0 is Fα -measurable
E[eiuXα0 |Fα ] = e
and the result follows.
2
iuXα +Λ
α0
\α (ϕG (u)−1) =
Z
R
eiuy Qαα0 (Xα ; dy)
Appendix A
Conditioning
In this appendix chapter we will discuss the notions of conditional independence and
conditional distribution, which are essential in the study of any Markov property.
These will rely on the notion of conditional expectation, as it is defined in the classical
literature (Section 34, [10]; Section 15-1, [12], etc.).
A.1
Conditional Independence
In this section we will discuss the notion of ‘conditional independence’ which is essential in the study of any type of Markov property.
We begin with a definition.
Definition A.1.1 Let F, G, H be sub-σ-fields of a probabiliy space (Ω, K, P ). We say
that F and H are conditionally independent given G, and we write F ⊥ H | G if
E[Y |F ∨ G] = E[Y |G]
for every bounded H-measurable random variable Y .
The next result gives an equivalent definition of the conditional independence; it
says that the σ-fields F and H are symmetric.
177
APPENDIX A. CONDITIONING
178
Lemma A.1.2 (Lemma 1.1, [53]) F and H are conditionally independent given G
if and only if
E[XY |G] = E[X|G]E[Y |G]
for every bounded F-measurable random variable X and for every bounded H-measurable
random variable Y .
The next result proved to be extremely useful in exploiting the properties of the
set-Markov processes.
Lemma A.1.3 Let G 0 ⊆ G be two sub-σ-fields in a certain probability space (Ω, F, P )
and X, Y two random variables on this space such that Y is G-measurable. If G ⊥
σ(X) | G 0 , then G ⊥ σ(X, Y ) | G 0 ∨ σ(Y ). (Here X, Y are allowed to be multidimensional, that is X : Ω → Rd , Y : Ω → Rk .)
Proof: By a monotone class argument it is enough to assume that h(x, y) = f (x)g(y)
where f and g are bounded and measurable. Then
E[f (X)g(Y )|G] = g(Y )E[f (X)|G] = g(Y )E[f (X)|G 0 ]
which is G 0 ∨ σ(Y )-measurable.
2
The following result has been occasionally used in this work.
Lemma A.1.4 Let F, G, H, K be four σ-fields in the same probability space. If F ⊥
H | G and (F ∨ G) ⊥ K | H, then F ⊥ (H ∨ K) | G.
Proof: Let Z be a bounded H ∨ K-measurable random variable. By a monotone
class argument we can suppose that Z = XY where X is a bounded H-measurable
random variable and Y is a bounded K-measurable random variable. We have
E[XY |F ∨ G] = E[X · E[Y |F ∨ G ∨ H]|F ∨ G]
= E[X · E[Y |H]|F ∨ G]
= E[X · E[Y |H]|G]
APPENDIX A. CONDITIONING
179
which is G-measurable.
2
Here is a list of other properties of the conditional independence.
Comments A.1.5
1. If F ⊥ H | G and F 0 ⊆ F , H0 ⊆ H, then F 0 ⊥ H0 | G.
2. If F ⊥ H | G, then (F ∨ G) ⊥ (H ∨ G) | G.
3. If F ⊥ H | G and G ⊆ G 0 ⊆ F ∨ G, then F ⊥ H | G 0 .
A.2
Conditional Distribution
In this section we will introduce the notion of conditional distribution and we will
discuss some of its properties.
Definition A.2.1 (a) A function Q(x; Γ), x ∈ R, Γ ∈ B(R) is called a transition
probability if Q(·; Γ) is a measurable function for every Γ ∈ B(R) and Q(x; ·)
is a probability measure for every x ∈ R.
(b) Let X, Y be random variables. A transition probability Q is called a version of
the conditional distribution of Y given X, if for every Γ ∈ B(R)
P [Y ∈ Γ|X = x] = Q(x; Γ) PX a.s.(x)
where PX is the distribution of X.
Note: By a monotone class argument, if Q is a version of the conditional distribution
of Y given X, then for every h ∈ B(R)
Z
E[h(Y )|X = x] =
R
h(y)Q(x; dy) PX a.s.(x).
(61)
Remark: A version of the conditional distribution always exists, for any random
variables X and Y (page 396, [12]).
The following property allows us to calculate the joint distribution of (X, Y ) knowing a version of a conditional distribution.
APPENDIX A. CONDITIONING
180
Theorem A.2.2 (Theorem 15-3C, [12]) If PX is the distribution of X and Q is
a version of the conditional distribution of Y given X, then ∀Γ1 , Γ2 ∈ B(R)
Z
P (X ∈ Γ1 , Y ∈ Γ2 ) =
Q(x; Γ2 )PX (dx).
Γ1
The following result is related to the Markov property and has been used several
times in this work.
Proposition A.2.3 Let X, Y, Z be some random variables such that
σ(X) ⊥ σ(Z) | σ(Y ).
(62)
If Q1 is a version of the conditional distribution of Y given X and Q2 is a version of
the conditional distribution of Z given Y , then the transition probability Q defined by
Z
Q(x; Γ) :=
R
Q2 (y; Γ)Q1 (x; dy), x ∈ R, Γ ∈ B(R)
is a version of the conditional distribution of Z given X.
Proof: Using equations (61) and (62) we have
P [Z ∈ Γ|X] = P [P [Z ∈ Γ|X, Y ]|X] = P [P [Z ∈ Γ|Y ]|X]
Z
= P [Q2 (Y ; Γ)|X] =
R
Q2 (Y ; Γ)Q1 (X; dy).
2
The following result was also used in this work.
Lemma A.2.4 Let X, Y, Z be three random variables such that σ(X) ⊥ σ(Z) | σ(Y ).
If we denote by QY |X a version of the conditional distribution of Y given X, then for
every bounded measurable function h : R2 → R
Z
E[h(Y, Z)|X = x] =
R
E[h(y, Z)|Y = y]QY |X (x; dy)
P ◦ X −1 a.s.(x).
(63)
Proof: Recall that by definition (see page 390 of [12]), E[h(Y, Z)|X = x] := α(x) is
the unique (P ◦ X −1 almost surely) measurable function α : R → R which satisfies
Z
Z
Γ1
α(x)(P ◦ X
−1
)(dx) =
X −1 (Γ1 )
h(Y, Z)dP
∀Γ1 ∈ B(R).
APPENDIX A. CONDITIONING
181
By a monotone class argument, it is enough to consider the case when h(y, z) =
IΓ2 (y)IΓ3 (z) for some Borel sets Γ2 , Γ3 . We will prove that the right-hand side of
equation (63) verifies the definition of E[h(Y, Z)|X = x]. Let Γ1 ∈ B(R) be arbitrary.
Denote PX := P ◦ X −1 and let QZ|Y be a version of the conditional distribution of Z
given Y . We have
Z
Γ1
Z
R
E[IΓ2 (y)IΓ3 (Z)|Y = y]QY |X (x; dy)PX (dx) =
Z
Γ1
Z
Γ1
Z
Γ2
Z
Γ2
E[IΓ3 (Z)|Y = y]QY |X (x; dy)PX (dx) =
Z
Γ3
QZ|Y (y; dz)QY |X (x; dy)PX (dx).
This in turn is equal to
Z
P (X ∈ Γ1 , Y ∈ Γ2 , Z ∈ Γ3 ) =
using the fact that σ(X) ⊥ σ(Z) | σ(Y ).
2
X −1 (Γ1 )
IΓ2 (Y )IΓ3 (Z)dP
Appendix B
Stochastic Processes on the Real
Line
In this appendix chapter we have gathered some classical results related to the theory
of stochastic processes indexed by the real line.
B.1
Lévy Processes
In this section we will introduce the Lévy processes indexed by the real line and we
will discuss some of their properties.
We begin by recalling the classical Lévy-Khintchine formula (Theorem 2.8.1, [65]):
if F is an infinitely divisible distribution on R, then its log-characteristic function
ψ(u) := log
R
R
eiuy F (dy), u ∈ R can be written uniquely as
Z
1 2 Z
iuy
ψ(u) = iγu − Λu +
(e − 1)Π(dy) +
(eiuy − 1 − iuy)Π(dy)
2
{|y|>1}
{0<|y|≤1}
where γ ∈ R, Λ ≥ 0, and Π is a Lévy measure on R, i.e. Π({0}) = 0 and
R
R (y
2
∧
1)Π(dy) < ∞. (γ, Λ, Π) is called the generating triplet of F .
In particular, an infinitely divisible distribution on R, whose log-characteristic
function is given by
Z
ψ(u) =
R
(eiuy − 1)Π(dy)
182
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
183
where Π is a finite positive measure on R, is called a compound-Poisson distribution
(corresponding to the measure Π).
Definition B.1.1 A process (Xt )t∈[0,a] with independent increments, which is continP
uous in probability (i.e. Xtn → Xt whenever limn tn = t) is called a Lévy process.
If (Xt )t∈[0,a] is a Lévy process, then:
1. the distribution Fst of each increment Xt − Xs , s < t is infinitely divisible (Theorem 2.9.1, [65]); in particular, if each Fst is a compound Poisson distribution,
then X is called a compound Poisson process;
2. if we denote with (γt , Λt , Πt ) the generating triplet of the distribution F0t , then
the functions γt , Λt , Πt (Γ) (for every fixed Borel set Γ contained in some set
{y; |y| > ²}, ² > 0) are continuous with respect to t (Theorem 2.9.8, [65]);
3. the process X has a cadlag modification (Theorem 19.2, [65]).
Note: If the probability distribution function F defined by F (t) :=
1
Λa
· Λt , t ∈ [0, a]
satisfies γt = F (t) · γa , Πt = F (t) · Πa , ∀t ∈ [0, a], then the Lévy process (Xt )t∈[0,a] is
said to have stationary increments (with respect to F ), i.e. the distribution of each
increment Xt − Xs , s < t depends on s, t only through the value F (t) − F (s).
Definition B.1.2 A Lévy process (Xt )t∈[0,a] for which the log-characteristic function
ψt (u) := log E[eiuXt ], u ∈ R is differentiable in t, uniformly in u on compact intervals,
is called weakly differentiable.
Proposition B.1.3 The generator of a weakly differentiable Lévy process with characterizing triplet (γt , Λt , Πt ) is
Z
1
(Gs h)(x) = γs0 h0 (x) + Λ0s h00 (x) +
(h(x + y) − h(x))Π0s (dy)+
2
{|y|>1}
Z
{|y|≤1}
(h(x + y) − h(x) − yh0 (x))Π0s (dy)
where γs0 , Λ0s , Π0s (Γ) denote the derivatives at t of the functions γt , Λt , Πt (Γ) (for any
fixed Borel set Γ contained in some set {y; |y| > ²}, ² > 0). The domain D(Gs )
contains the space Cb2 (R) of all twice continuously differentiable functions h : R → R
which have the property that h, h0 , h00 are bounded.
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
184
Proof: The homogeneous version of this result is treated in many standard textbooks
(e.g on pages 291-292 of [32], or on pages 285-286 of [63]). For the inhomogeneous
version we could not find a proof anywhere in the literature and therefore, for completeness, we include this proof here.
Let Fst be the distribution of Xt − Xs and (Tst h)(x) :=
R
R
h(x + y)Fst (dy) the
associated operator.
Let h ∈ Cb2 (R) be arbitrary. Without loss of generality we can assume that h
can be represented as h(x) =
R
R
R
R
eiux h̃(u)du for a certain integrable function h̃ with
u2 |h̃(u)|du < ∞. (The space of functions which admit this kind of representation
is dense in Cb2 (R); the operator Gs is closed.) If we denote with ψst (u) the logcharacteristic function of Xt − Xs , then
Z
Z Z
(Tst h)(x) =
R
R
eiu(x+y) h̃(u)duFst (dy) =
R
eiux h̃(u)eψst (u) du
Z
(Tst h)(x) − h(x) =
By definition (Gs h)(x) = limt&s
t&s R
eiux h̃(u)(eψst (u) − 1)du.
(Tst h)(x)−h(x)
t−s
uniformly in x; hence
Ã
Z
(Gs h)(x) = lim
R
iux
e
!
Z
∂ + ψst (u)
eψst (u) − 1
h̃(u) ·
du =
eiux h̃(u) ·
e
|t=s du =
t−s
∂t
R
Ã
Z
iux
R
e
!
∂+
h̃(u)
ψst (u) |t=s du
∂t
since eψst (u) |t=s = 1. The result follows since
Z
∂+
1
ψst (u)|t=s = ψs0 (u) = iγs0 u − Λ0s u2 +
(eiuy − 1)Π0s (dy)+
∂t
2
{|y|>1}
Z
{|y|≤1}
and h0 (x) =
R
R
(eiuy − 1 − iuy)Π0s (dy)
iueiux h̃(u)du, h00 (x) = −
R
R
u2 eiux h̃(u)du.
2
Here are some consequences of the previous result.
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
Examples B.1.4
185
1. If X is a Brownian motion whose variance function Λ is
differentiable, then the generator of the process X at time s is
1
(Gs h)(x) = Λ0s h00 (x), x ∈ R.
2
whose domain D(Gsf ) contains the space Cb2 (R).
2. If X is a compound Poisson process corresponding to the measure Π such that
the function Πt (Γ) is differentiable with respect to t (for any fixed Borel set
Γ ⊆ R), then the generator of the process X at time s is
Z
(Gs h)(x) =
R
(h(x + y) − h(x))Π0s (dy), x ∈ R.
whose domain D(Gsf ) is equal to the space B(R).
In particular, if X is a Poisson process whose variance measure Λ is differentiable, then the generator of the process X at time s is
(Gs h)(k) = Λ0s · (h(k + 1) − h(k)), k = 0, 1, 2, . . .
whose domain D(Gsf ) is the space B({0, 1, 2, . . . , }) of all bounded functions
h : {0, 1, 2, . . .} → R.
B.2
The Strong Q-Markov Property
In this section we will introduce the notions of stopping time and optional time and
the strong Q-Markov property for processes indexed by the real line.
Definition B.2.1 A random time τ : Ω → [0, a] is called a stopping time of a
filtration (Ft )t∈[0,a] if {τ ≤ t} ∈ Ft , ∀t ∈ [0, a]; it is called an optional time of a
filtration (Ft )t∈[0,a] if {τ < t} ∈ Ft , ∀t ∈ [0, a].
To any stopping time τ one can associate the σ-field
Fτ := {F : F ∩ {τ ≤ t} ∈ Ft ∀t ∈ [0, a]}.
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
186
To any optional time τ one can associate the σ-field
Fτ + := {F : F ∩ {τ < t} ∈ Ft ∀t ∈ [0, a]}.
Here are some basic properties of stopping times and optional times.
• τ is an optional time of the filtration (Ft )t∈[0,a] if and only if it is a stopping time
of the filtration (Ft+ )t∈[0,a] , defined by Ft+ := ∩s>t Fs (Corollary 1.2.4, [43]);
• if τ is a stopping time of the filtration (Ft )t∈[0,a] , then τ is Fτ -measurable (Problem 1.2.13, [43]);
• if τ is an optional time, then Fτ + = {F : F ∩ {τ ≤ t} ∈ Ft+ ∀t ∈ [0, a]}
(Problem 1.2.21, [43]);
• if (Xt )t∈[0,a] is a progressively measurable process, then Xτ is Fτ -measurable
for any stopping time τ (Proposition 1.2.18, [43]); consequently Xτ is Fτ + measurable for any optional time τ ;
• for any optional time τ , there exists a decreasing sequence (τn )n of discrete
stopping times with limn τn = τ , such that Fτ + = ∩n Fτn (Problems 1.2.23,
1.2.24, [43]).
Definition B.2.2 (a) A transition system Q := (Qst )s<t is called measurable if
for every Borel set Γ, the function (s, t, x) 7→ Qst (x; Γ) is measurable.
(b) Let Q := (Qst )s,t∈[0,a];s<t be a measurable transition system. A process (Xt )t∈[0,a]
is called strong Q-Markov with respect to a filtration (Ft )t∈[0,a] if it is progressively measurable and ∀t ∈ [0, a], for every stopping time τ ≤ a − t and
∀h ∈ B(R)
Z
E[h(Xτ +t )|Fτ ] =
R
h(y)Qτ,τ +t (Xτ ; dy).
We recall that an adapted right-continuous process is progressively measurable
(Proposition 1.1.13, [43]). Here are some examples of strong Q-Markov processes.
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
187
Proposition B.2.3 (a) A right-continuous Brownian motion with variance measure
Λ is strong Q-Markov, where Qst (x; ·) is the normal distribution with mean 0
and variance Λ((s, t]).
(b) A right-continuous stationary compound Poisson process corresponding to the
product measure Π(dz; dx) = Λ(dz)×G(dx) (where Λ is a finite positive measure
on [0, a] and G is a probability measure on R) is strong Q-Markov, where
Qst (x; Γ) :=
X Λ((s, t])n
n!
n≥0
G∗n (Γ − x)
and G∗n denotes |G ∗ .{z
. . ∗ G}.
n times
Proof: (a) The homogeneous version of this result is Proposition 2.6.15, [43]. For
the inhomogeneous version we could not find a proof anywhere in the literature and
and therefore, for completeness, we include this proof here.
Let (Xt )t∈[0,a] be a right-continuous Brownian motion with variance measure Λ.
Denote Λ(t) := Λ((0, t]). For each u ∈ R fixed, the process
1
2 Λ(t)
Mt := eiuXt + 2 u
, t ∈ [0, a]
is a martingale: for each s < t
1
2 Λ((s,t])
E[Mt |Fs ] = Ms · E[eiu(Xt −Xs )+ 2 u
= Ms · E[eiu(Xt −Xs ) ] · e
|Fs ]
1 2
u Λ((s,t])
2
= Ms .
Using the Optional Sampling Theorem (Theorem 1.3.22, [43]) we can conclude
that for any t ∈ [0, a] and for any stopping time τ ≤ a − t, we have E[Mτ +t |Fτ ] = Mτ ;
because Λ(τ + t) is Fτ -measurable
iuXτ +t
E[e
iuXτ − 21 u2 Λ((τ,τ +t])
|Fτ ] = e
Z
=
and the result follows using Lemma 2.6.13, [43].
R
eiuy Qτ,τ +t (Xτ ; dy).
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
188
(b) Let (Xt )t∈[0,a] be a stationary compound Poisson process corresponding to the
product measure Π(dz; dx) := Λ(dz) × G(dx), where Λ is a positive measure on T
and G is a probability measure on R. Denote by ϕG the characteristic function of G.
Denote Λ(t) := Λ((0, t]). For each u ∈ R fixed, the process
Mt := eiuXt −Λ(t)·(ϕG (u)−1) , t ∈ [0, a]
is a martingale: for each s < t
E[Mt |Fs ] = Ms · E[eiu(Xt −Xs )−Λ((s,t])·(ϕG (u)−1) |Fs ]
= Ms · E[eiu(Xt −Xs ) ] · e−Λ((s,t])·(ϕG (u)−1)
= Ms .
For any t ∈ [0, a] and for any stopping time τ ≤ a − t, we have E[Mτ +t |Fτ ] = Mτ ;
because Λ(τ + t) is Fτ -measurable
Z
E[e
iuXτ +t
iuXτ +Λ((τ,τ +t])·(ϕG (u)−1)
|Fτ ] = e
=
R
eiuy Qτ,τ +t (Xτ ; dy)
and the result follows.
2
Definition B.2.4 A transition system Q := (Qst )s<t is called weakly rightw
continuous in t if ∀x ∈ R, ∀s < t, ∀tn & t, we have Qstn (x; ·) → Qst (x; ·).
Proposition B.2.5 Let Q := (Qst )s<t a measurable transition system, which is
weakly right-continuous in t and (Xt )t∈[0,a] a right-continuous process, adapted with
respect to a filtration (Ft )t∈[0,a] .
The process (Xt )t∈[0,a] is strong Q-Markov with respect to the filtration (Ft )t∈[0,a] if
and only if for every stopping time τ , for every random time τ 0 ≤ a such that τ ≤ τ 0
and τ 0 is Fτ -measurable, and for every bounded continuous function h : R → R
Z
E[h(Xτ 0 )|Fτ ] =
R
h(y)Qτ τ 0 (Xτ ; dy).
Proof: The homogeneous version of this result is Proposition 2.6.17, [43].
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
Define τn0 := τ +
k
2n
on the set Ak := {τ +
k−1
2n
≤ τ0 < τ +
k
};
2n
189
note that τn0 & τ 0 .
By the strong Q-Markov property, for every bounded continuous function h : R → R
Z
E[h(Xτ + kn )|Fτ ] =
2
R
h(y)Qτ,τ + kn (Xτ ; dy).
2
Since Xτn0 = Xτ + kn on the set Ak ∈ Fτ , we have (using Problem 2.6.9, [43])
2
E[h(Xτn0 )|Fτ ] = E[h(Xτ + kn )|Fτ ] on Ak .
2
Also
R
R
h(y)Qτ τn0 (Xτ ; dy) =
R
R
h(y)Qτ,τ + kn (Xτ ; dy) on Ak and therefore
2
Z
E[h(Xτn0 )|Fτ ] =
R
h(y)Qτ τn0 (Xτ ; dy) on Ak .
Since (Ak )k forms a partition of the whole space Ω, we have
Z
E[h(Xτn0 )|Fτ ] =
R
h(y)Qτ τn0 (Xτ ; dy).
Take the limit as n → ∞ in this equality. The result follows using the Bounded
Convergence Theorem for Conditional Expectations (Theorem 15-1D, [12]), the rightcontinuity of X and the weak right-continuity in t of Q.
2
B.3
Metric Entropy of Classes of Sets
In this appendix section we will introduce the notions of metric entropy and metric
entropy with inclusion.
Definition B.3.1 Let (E, τ ) be a pseudo-metric space.
For any ² > 0 we say that E² ⊆ E is an ²-net if any point in E is within ²-distance
of a point in E² (i.e. E ⊆ ∪e∈E² B² (e), where B² (e) denotes the closed ball of radius ²
centered at e.) The space E is totally bounded with respect to τ if for every ² > 0
there exists a finite ²-net.
Denote by Nτ (²) the cardinality of the smallest ²-net i.e. Nτ (²) is the smallest
number of closed balls of radius ² that cover E. The function
Hτ (²) := log Nτ (²)
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
190
is called the metric entropy of E with respect to the metric τ .
Note that Hτ (²) takes on only positive values and it increases as ² gets small; also,
if ² > diam(T ) then Nτ (²) = 1 and Hτ (²) = 0. It is therefore only the behaviour of
Hτ (²) for small ² that will be of interest.
The second notion of metric entropy is defined in the following context:
Definition B.3.2 Let (T, B, µ) be a finite measure space and A ⊆ B be an arbitrary
class of measurable sets. The class A is totally bounded with inclusion (in
B) with respect to the measure µ if for every ² > 0 there exists a finite collection
+
−
+
A² ⊆ B such that for any A ∈ A there exists A−
² , A² ∈ A² with A² ⊆ A ⊆ A² and
−
µ(A+
² \A² ) ≤ ².
Denote by NI (²) the cardinality of the smallest collection A² . The function
HI (²) := log NI (²)
is called the metric entropy with inclusion of A (in B) with respect to µ.
If A is totally bounded with inclusion with respect to the measure µ, then it is
also totally bounded with respect to the pseudo-metric dµ defined by
dµ (A, B) := (µ(A∆B))1/2 , A, B ∈ A
(where A∆B := (A\B) ∪ (B\A) denotes the symmetric difference of the two sets).
The collection A² becomes an ²-net in the pseudo-metric space (A, dµ ). Therefore,
the cardinality Ndµ (²) of the smallest ²-net (with respect to dµ ) cannot exceed the
cardinality of the smallest A² . We have
Ndµ (²) ≤ NI (²) and Hdµ (²) ≤ HI (²).
We conclude this section with two examples of classes of sets for which we can
place some bounds on the metric entropy with inclusion.
The first example is Dudley’s class I(d, α, M ) of sets in Id (the d-dimensional unit
cube) whose boundaries are given by functions from the sphere S d−1 into Rd with
APPENDIX B. STOCHASTIC PROCESSES ON THE REAL LINE
191
derivatives of order ≤ α. (For a precise definition of I(d, α, M ) see [27].) For this
class of sets, the metric entropy with inclusion (in the class of all Borel sets of Id )
with respect to the Lebesgue measure is
HI (²) = O(1/²)(d−1)/α .
(Note that I(d, 2, M ) give us the class of all closed convex subsets of Id .)
The second example is introduced using the following notation. Let T be an
arbitrary set and A a collection of subsets of T . For any finite subset F ⊆ T let
∆A (F ) := card{A ∩ F ; A ∈ A}
mA (n) := max{∆A (F ); card(F ) = n}.
Note that if F has n elements then ∆A (F ) ≤ 2n ; hence mA (n) ≤ 2n . Set
V (A) = inf{n; mA (n) < 2n } (inf ∅ = ∞).
The number V (A) is called the Vapnik-Cervonenkis (VC) index of the class A. The
class A is called a Vapnik-Cervonenkis class if its VC index is finite.
An example of such a class is the class A = {[a, b]; a < b} of rectangles in Rd for
which V (A) = 2d + 1.
Lemma B.3.3 (Lemma 7.13, [28]) Let (T, B, F ) be a probability space and A ⊆ B
a class of sets which is totally bounded with inclusion (in B) with respect to the
probability measure F ; let HI (²) be its metric entropy with inclusion. If A is a VapnikCervonenkis class in B with VC index v, then there exists a constant K = K(v) (not
depending on F ) such that for every 0 < ² ≤ 12 ,
HI (²) ≤ K − v log ² − v log | log ²|.
Hence for a Vapnik-Cervonenkis class we have
HI (²) = O(log1/²).
Bibliography
[1] Adler, R.J., Monrad, D., Scissors R.H. and Wilson, R. (1983). Representations,
decompositions and sample function continuity of random fields with independent increments. Stoch. Proc. Appl. 15, pages 3-30.
[2] Adler, R.J. and Feigin P.D. (1984). On the cadlaguity of random measures.
Ann. Probab. 12, pages 615-630.
[3] Adler, R.J. (1990). An introduction to continuity, extrema, and related topics
for general Gaussian processes, Lect. Notes-Monograph Series, 12.
[4] Al-Hussaini, A. and Elliott, R.J. (1981). Stochastic calculus for a two-parameter
jump process. Lect. Notes in Math. 863, pages 233-244.
[5] Balan, R.M. (2001). A strong Markov property for set-indexed processes. Stat.
Probab. Letters, to appear.
[6] Balan, R.M. (2001). Set-indexed processes processes with independent increments. Submitted to J. Theor. Probab.
[7] Balan, R.M. and Ivanoff, B.G. (2001). A Markov property for set-indexed processes. Submitted to J. Theor. Probab.
[8] Bass, R.F. and Pyke R. (1984). The existence of set-indexed Lévy processes. Z.
Wahrsch. verw. Gebiete 66, pages 157-172.
[9] Bertoin J. (1996). Lévy processes. Cambridge University Press.
[10] Billingsley P. (1995). Probability and Measure, John Wiley and Sons, New York.
192
BIBLIOGRAPHY
193
[11] Biswas, S.K. and Ahmed N.U. (1985). Stabilisation of systems governed by
the wave equation in the presence of the distributed white noise. IEEE Trans.
Automat Control AC-30, pages 1043-1045.
[12] Burrill, C.W. (1972). Measure, Integration, and Probability, McGraw-Hill, New
York.
[13] Cabaña, E.M. (1972). On barrier problems for the vibrating string. Z. Wahrsch.
verw. Gebiete 22, pages 13-24.
[14] Cairoli, R. and Dalang, R.C. (1996). Sequential Stochastic Optimization, John
Wiley, New York.
[15] Cairoli, R. and Walsh, J.B. (1975). Stochastic integrals in the plane. Acta Math.
134, pages 111-183.
[16] Carnal, E. (1979). Processus markoviens à plusieurs paramètres. Thesis, École
Polytechnique Fédérale de Lausanne.
[17] Carnal, E. and Walsh, J.B. (1991). Markov properties for certain random fields.
Stochastic Analysis (E. Meyer-Wolf, E. Merzbach, A. Schwartz, eds.), Academic
Press, New York, pages 91-110.
[18] Constantinescu, F. and Thalheimer, W. (1976). Remarks on generalized Markov
processes. J. Funct. Analysis 23, pages 33-38.
[19] Dalang, R.C. and Frangos, N. (1998). The stochastic wave equation in two
spacial dimensions. Ann. Probab. 26, pages 187-212.
[20] Dalang, R.C. and Hou, Q. (1997). On Markov properties of Lévy waves in two
dimensions. Stoch. Proc. Appl. 72, pages 265-287.
[21] Dalang, R.C. and Russo, F. (1988). A prediction problem for the Brownian
sheet. J. Multiv. Anal. 26, pages 16-47.
[22] Dalang, R.C. and Walsh, J.B. (1992). The sharp Markov property for Lévy
sheets. Ann. Probab. 20, pages 591-626.
BIBLIOGRAPHY
194
[23] Dalang, R.C. and Walsh, J.B. (1992). The sharp Markov property for the Brownian sheet and related processes. Acta Math. 168, pages 153-218.
[24] Donati-Martin, C. and Nualart, D. (1994). Markov property for elliptical
stochastic PDE. Stochastics Stochastic Rep. 46, pages 107-115.
[25] Dozzi, M., Ivanoff, B.G. and Merzbach, E. (1994). Doob-Meyer decomposition
for set-indexed sub-martingales. J. Theory Probab. 7, pages 499-525.
[26] Dudley, R.M. (1973). Sample functions of the Gaussian process. Ann. Probab.
1, pages 66-103.
[27] Dudley, R.M. (1974). Metric entropy of some classes of sets with differentiable
boundaries. J. Approx. Theory 10, pages 227-236.
[28] Dudley, R.M. (1978). Central limit theorems for empirical measures. Ann.
Probab. 6, pages 899-929.
[29] Dudley, R.M. (1984). A course on empirical processes. Lect. Notes Math. 1097,
pages 1-130.
[30] Ethier, S.N. and Kurtz, T.G. (1986). Markov Processes. Characterization and
Convergence, John Wiley, New York.
[31] Evstigneev, I.V. (1977). Markov times for random fields. Theory Probab. Appl.
22, pages 563-569.
[32] Gihman, I.I. and Skorohod, A.V. (1975). The theory of stochastic processes II,
Springer-Verlag, New York.
[33] Greenwood, P. and Evstigneev, I.V. (1990). A Markov evolving random field
and splitting random elements. Theory Probab. Appl. 37, pages 40-42.
[34] Guyon, X. and Prum, B. (1979). Propriétés markoviennes de certains processus
indicés par R2+ . Rap. tec. bf 79, Univ. Paris-Sud.
BIBLIOGRAPHY
195
[35] Itô, K. (1942). On stochastic processes I. Infinitely divisible laws of probability.
Japn J. Math. 18, pages 261-301.
[36] Ivanoff, B.G., Merzbach, E. and Schiopu-Kratina, I. (1993). Predictability and
stopping on lattices of sets. Probab. Theory Related Fields 97, pages 433-446.
[37] Ivanoff, B.G. and Merzbach, E. (1994). A martingale characterization of the
set-indexed Poisson process. Stoch. Stoch. Reports 51, pages 69-82.
[38] Ivanoff, B.G. and Merzbach, E. (1995). Stopping and set-indexed local martingales. Stoch. Proc. Appl. 57, pages 83-98.
[39] Ivanoff, B.G. and Merzbach, E. (1998). Set-indexed Markov processes, in
Stochastic Models, Conference Proceedings Series of the Canadian mathematical
Society.
[40] Ivanoff, B.G. and Merzbach, E. (1999). A Skorokhod topology for a class of
set-indexed functions. (to appear)
[41] Ivanoff, B.G. and Merzbach, E. (2000). Set-Indexed Martingales, Chapman &
Hall/CRC, Boca Raton, London.
[42] Kallianpur, G. and Mandrekar, V. (1974). The Markov property for generalized
Gaussian random fields. Ann. Inst. Fourier 24, pages 143-164.
[43] Karatzas, I. and Shreve, S.E. (1991). Brownian Motion and Stochastic Calculus,
Springer-Verlag, New York.
[44] Kiefer, J. (1972). Skorohod embedding of multivariate RV’s and the sample DF.
Z. Wahrsch. verw. Gebiete 24, pages 1-35.
[45] Kitagawa, T. (1951). Analysis of variance applied to function space. Mem. Fac.
Sci. 6, Kyusu U., pages 41-53.
[46] Korezlioglu, H., Lefort, P. and Mazziotto, G. (1981). Une propriété markovienne
et diffusions associées. Lect. Notes in Math. 863, Springer-Verlag, pages 245276.
BIBLIOGRAPHY
196
[47] Kurtz, T.G. (1980). The optional sampling theorem for martingales indexed by
directed sets. Ann Probab. 8, pages 675-681.
[48] Lawler, G.F. and Vanderbei, R.J. (1981). Markov strategies for optimal control
probelms indexed by a partially ordered set. Ann. Probab. 11, pages 642-647.
[49] Lévy, P. (1934). Sur les intégrales dont éléments sont des variables
aléatoiresindépendentes. Ann. Scuola Norm. Sup. Pisa Ser. II 3, pages 337366.
[50] Lévy, P. (1937). Théorie de l’addition des variables aléatoires., Gauthier-Villars,
Paris.
[51] Lévy. P. (1948). Exemples de processus doubles de Markoff. C.R.A.S. 226, pages
307-308.
[52] Mandrekar, V. (1976). Germ-field Markov property for multiparameter processes. Lect. Notes in math. 511, pages 78-85.
[53] Mandrekar, V. (1983). Markov properties for random fields. Probability Analysis
and Related Topics (A.J. Bharucha-Reid, ed.) 3, pages 161-193.
[54] Mazziotto, G. (1988). Two-parameter Hunt processes and a potential theory.
Ann. Probab. 16, pages 600-619.
[55] McKean, J.H.P. (1963). Brownian motion with a several-dimensional time. Th.
Probab. Appl. 8, pages 335-354.
[56] Merzbach, E. and Nualart, D. (1990). Markov properties for point processes on
the plane. Ann Probab. 18, pages 342-358.
[57] Meyer, P.A. (1981) Théorie élémentaire des processus à deux indices. Lect. Notes
in Math 863, pages 1-39.
[58] Miller, R.N. (1990). Tropical data assimilation with simulated data: the impact
of the tropical ocean and global atmosphere thermal array for the ocean. Journal
of Geophysical Research 95 11, pages 461-482.
BIBLIOGRAPHY
197
[59] Nualart, D. (1980). Propriedad de Markov para functiones aleatorias gaussianas.
Cuadern. Estadistica Mat. Univ. Granada Ser. A Prob. 5, pages 30-43.
[60] Nualart, D. and Pardoux, E. (1994). Markov field properties of solutions of
white noise driven quasi-linear parabolic PDE’s. Stochastics Stochastic Rep.
48, pages 17-44.
[61] Nualart, D. and Sanz, M. (1979). A Markov property for two-parameter Gaussian processes. Stochastica 3, pages 1-16.
[62] Pitt, L.D. (1971). A Markov property for Gaussian processes with multidimensional parameter. Arch. Rat. Mech. Anal. 43, pages 367-397.
[63] Revuz, D. and Yor, M. (1999). Continuous Martingales and Brownian motion,
Third Edition, Springer-Verlag, New York.
[64] Russo, F. (1984). Etude de la propriété de Markov étroite en relation avec les
processus planaires à accroissements indépendantes. Lect. Notes in Math. 1059,
pages 353-378.
[65] Sato, K. (1999). Lévy processes and Infinitely Divisible Distributions. Cambridge
University Press.
[66] Skorohod, A.V. (1996). Lectures on the theory of stochastic processes, VSP
Utrecht (The Netherlands)- TBiMC Kiev (UKraine).
[67] Slonowsky, D. (1998). Central limit theorems for set-indexed processes. Ph.D.
thesis, Univ. of Ottawa.
[68] Tanabe, H. (1979). Equations of Evolution, Pitman, London.
[69] Walsh, J.B. (1976-1977). Martingales with a multidimensional parameter and
stochastic integrals in the plane. Cours III cycle, Laboratoire de Probabilités,
Univ. Paris VI.
[70] Walsh, J.B. (1982). Propagation of singularities in the Brownian sheet. Ann.
Probab. 10, pages 279-288.
BIBLIOGRAPHY
198
[71] Walsh, J.B. (1984). An introduction to stochastic partial differential equations.
Lect. Notes in Math. 1180, pages 65-437.
[72] Wong, E. and Zakai, M. (1985). Markov processes in the plane. Stochastics 15,
pages 311-333.