Gualtierotti, Antonio F.; (1972)Some problems related to equivalence of measures: extension of cylinder set measures and a martingale transformation."

r
* The research in this report was supported in part by the Office of Naval
Research under Grant No. N00014-67-A-032l-0006.
1
Ph.D. dissertation under the direction of C. R. Baker.
Reproduction in whole or in part is permitted
for any purpose of the
United States Government
•
SOME PROBLEMS RELATED TO EQUIVALENCE OF MEASURES:
EXTENSION OF CYLINDER SET MEASURES
AND A MARTINGALE TRANSFORMATION*,l
Antonio F. Gualtierotti
Department of Statistios
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 834
July, 1972
ANTONIO F. GUALTIEROTTI. Some Problems Related To Equivalence of
Measures: Extension of Cylinder Set Measures and a Martingale
Transformation. (Under the direction of CHARLES R. BAKER.)
Let
E
be a linear one-to-one continuous map of the real and sep-
arable Hilbert space
with
E
H into the real and separable Hilbert space
having dense range.
Gaussian cylinder set measures on
defined by weak covariance operators, are considered.
measure., and the map
E,
K,
H,
Such a cylinder set
induces a Gaussian cylinder set measure on
K.
The first result of this thesis is a characterization of the norm of the
spaces
K for which the induced
on the Borel sets of
K.
probability measures on
;-
(
measures on
H
m~~sure
extends to a probability measure
This characterization is then used to study two
K,
induced by
E
from two Gaussian cylinder set
with known weak covariance operators.
Conditions are
obtained for the equivalence or orthogonality of the induced measures, and
representations of the Radon-Nikodyrn derivative are given for the case when
equivalence holds.
These results are expressed in terms of
E
and the
weak covariance operators of the original cylinder set measures on
H.
The next problem considered is that of translating a continuous L 2
bounded martingale, and making an associated absolutely continuous substitution of measure.
Conditions are obtained for the translated process
to be a martingale with res~rthe new measure, and also for the translated process to have the sa"me natural increasing process as the original
martingale.
induced on
The problem of determlning equivalence of the two measures
t~e'
space of continuous functions by the original martingale and
by its translation, both with respect to the original probability measure,
is discussed.
The problems considered in this thesis are related to the statistical
theory of signal detection.
TABLE OF CoNTENTS
CHAPTER
PAGE
ACKNOWLEDGMENTS
I
II
•
III
INTRODUCTION
1.1 The Detection Problem and Some Related
Mathematical Questions
1.2 Some of the Known Results
1.3A Discussion of the Contents of this Thesis
EXTENS:lON:OF GAUSSIAN CYLINDER SET MEASURES
2.1 Notation and Conventions
2.2 Hilbertian Norms and Extension of Gaussian
Cylinder Set Measures
2.3 Application to the Detection Problem
2.4 A Revi ew of Chapt,er II
iii
1
3
9
14
14
52
65
TRANSFpRMATION OF L2-BOUNDED CONTINUOUS MARTINGALES:
AN EXTENSION OF GIRSANOV'S THEOREM
3.1 A Summary of Results on Stochastic Calculus for
Martingales
3.2 An Extension of Girsanov's Theorem
3.3 Theorem 3.2.1 and the Detection Problem
3.4 A Review of Chapter III
104
108
BIBLIOGRAPHY
109
66
76
ACKNOWLEDGMENTS
I wish to thank cordially Dr. C. R. Baker, my thesis adviser, for
his help during the preparation of this work and for introducing me to
communication theory.
I am grateful to Dr. S. Cambanis and
Dr. B. S. Rajput for their active encouragement and to Dr. B. J. Pettis,
Dr. G. Simons and Dr. W. L. Smith, the other members of my committee
for their patience.
Last but not least, I wish to thank Cynthia
Grossman for her fine typing.
I also wish to acknowledge the financial support offered to me
during my studies by the Department of Statistics of the University of
North Carolina at Chapel Hill and the Office of Naval Research under
Contract NOOOI4-67-A-0321-0006 •
•
CHAPTER I
INTRODUCTION
Sections 1.1 and 1.2 of this introduction are a rearrangement of a
survey paper by Baker [197la].
and Root [1963].
Some items corne also from Baker [1970b]
These sources are often quoted "verbatim", specially
when the ,formulation is well suited to our needs.
In the "statistical theory of signal detection", we are concerned
with problems of interest in communication engineering involving statistical inference from stochastic processes.
1.1.
1.1.1.
waveform
The Detection Problem and Some Related Mathematical Questions
Statement of the basic signal detection problem.
{y(t), t
E
[O,T]}
An observed
is known to be a sample function from one
of two stochastic processes, each corresponding to an hypothesis
i = 0,1.
process
Under
H '
O
H. ,
1
it is supposed that a sample function from a "noise"
N has been observed and, under
HI'
that
is a realization of a "signal-pIus-noise" process
wishes to discriminate between hypotheses
H and
O
{y(t), t
S + N.
E
[O,T]}
The observer
HI.
A complete solution to the problem requires that one answer the
following questions:
1)
Is the mathematical model
2)
What Is the optimum operation
rea~wnahJe?
Oil
til(' oIHlI·rvl·d wllvl'fonll'{
3)
Given a specific procedure for deciding between
H
O
and
HI'
what is its performance?
1.1.2.
Singular and non-singular detection problems.
There are classes
of decision problems that involve a model of the kind described in 1.1.1
for which a correct decision, or correct inference, can be made with
probability one.
Such problems are called singular.
If we require from
a given model that it always detects the presence of a signal (modulo
a zero-probability set) and if this requirement forces a probability of
"false alarm" equal to one, then the model is called non-singular.
One
distinguishes the intermediate cases by labeling them as intermediate
singular problems.
It has been argued that well-posed detection prob-
lems should yield a non-singular answer and non-singularity is thus a
first rough measure of the adequacy of the model used.
The "singularity" question is closely related to the concept of
probability measures on function spaces since it leads to statements regarding the measure of sets in an appropriate sample space.
Indeed, if
the stochastic processes under consideration have adequate features, they
induce on some appropriate space of functions, containing the sample
and
paths, two measures
respectively.
Then the condition
for non-singular detection is:
IJN(A)
=
1
if and only if
IIS+N(A)
= 1,
[or all measurable sets A.
In other words, the detection will be non-singular if the induced measures are equivalent
1.1.3.
(n
and singular if they are orthogonal (1).
Likelihood ratios.
When the detection problem is non-singular,
the optimum operation on the data according to seve'raJ criteria (e.g.
Bayes, Neyman-Pearson) consists in computing the value of the likelihood ratio and comparing this value with a threshold.
2
If the latter is
exceeded, one decides "signal present"; otherwise, the decision is
"noise" only.
The likelihood ratio is the Radon-Nikodym derivative of
the two measures induced on the sample function space by the two stochastic processes considered.
The computation of the Radon-Nikodym de-
rivative is thus a second important mathematical problem related to detection theory.
1.1.4.
Associated questions.
Since the measures considered are in-
duced by stochastic processes, one might ask, for instance, what are the
signals leading to non-singular detection, for a fixed type of "noise".
One might also wish to describe their characteristics.
This brings
forth a third category of problems, those of representation.
1.2.
Some of the Known Results
Most of the work in the area has been directed towards the theory
of detecting or characterizing information-bearing signals immersed in
noise'with GaUSSian statistics.
to be done.
In the non-Gaussian cases, much remains
If one considers the techniques involved in such investi-
gations, it is possible to delineate three broad categories, even though
they often overlap.
1.2.1.
The "no ise" and "signal-plus-noise" processes are both Gaussian
In this case, when almost all sample functions considered are squareintegrable, one can deal with the question of singularity without using
the time structure of the processes.
It is then possible to use the
rich knowledge about Hilbert spaces that is available.
We start by giving some "background" material that will be used
constantly afterwar;ds without specific reference.
J
Denote by
uct
(.,.)
H
a real and separable Hilbert space with inner prod-
r.
and Borel a-field
Definition 1.2.1.1.
A "covariance operator" is any function mapping
H
into
H
that
is linear, bounded, non-negative, self-adjoint and trace-class.
Proposition 1.2.1.1.
~
Let
~
Then
the
(Mourier [1953])
be a 'pro~ability measure on
has a mean element
m
(H, r)
and suppose that
R defined by
and a covariance operator
formulae~
fH (h,u) d~(h)
(m,u)
fH (h-m,u)
(Ru,V)
for all
u
and
v
in
(h-m,v)
d~(h)
H.
Definition 1.2.1.2.
~
Let
be a probability measure on
(H,
r).
be Gaussian if, for any bounded linear functional
measure on
~o<.,h)-l
B(ffi),
fH
jJ
H,
the
(Mourier [1953])
is a Gaussian probability measure on
Ilhll 2dll(h)
II,
on
corresponds to a normal distribution.
(H, 1'),
"
Convl'n;t'ly, for each cOVari<lllCl' operator
of
(.,h)
is said to
the Borel sets of the real line, defined by
Proposition 1.2.1.2.
If
~
Then
I{
Oil
there exists a Gaussian probability measun' on
4
II
and ('Ielllenl
(II, 1')
hay ing
III
R as covariance operator and
m as mean element.
Definition 1.2.1.3.
~
Let
be a probability measure on
~
functional
is a map from
H
(H, f).
Its characteristic
to the complex numbers defined by the
relation:
for all
Hu)
u
in
H.
Many results about measures can be expressed in terms of their
,
characteristic' functionals, as on the real line.
In particular, we will
make use of the following proposition.
Proposition 1.2.1.3.
Let
R
ato~ on' H.
be a linear, bounded, self-adjoint and non-negative oper-
~(u)
The function
= exp(-~<Ru,u»
~
functional of a probability measure
is trace-class.
era tor of
~.
f
Then
I Ihl 12d~(h)
H
Moreover
on
<
00
(H,
and
is the characteristic
f)
R
if and only if
R
is the covariance op-
is Gaussian.
~
The main result on non-singularity for Gaussian measures is given
in the following theorem:
Theorem 1.2.1. 1 .
Suppose that
~l
(resp. ~2) is a Gaussian probability measure on
with mean
m
l
(resp. m )
2
(H, r)
R2 )·
and covariance operator
R
1
(resp.
Then:
1)
either
2)
IJ 1
a)
~1
l
~2
or
~l 1 ~2
if and only if
~2
IlI
==
-
ffi
2
( R([R1+R2rlo) (I~(I\) d"IWll'H lite rangl' 01
operator A)
5
lltl'
=
R
b)
1:
1:
R 2(I+T)R 2
122
with
I
identity operator of H
T
Hilbert-Schmidt operator on H that does not
have -1 among its eigenvalues.
1) was obtained independently by Feldman [1958] and Hajek [1958].
2) is due to Rao-Varadarajan [1963]
and Kallianpur-Odaira [1963].
Similar results are given in a paper by Root [1963].
Definition 1.2.1.4.
~
Let
a
denote a probability measure on
~
quadratic form with respect to
all
h
in
H
(with respect to
L(h)
where
A
~),
(H, f).
L
is said to be
if it can be written, for almost
as
(Ah,h) + constant
is an operator on
H,
closed, densely defined, self-adjoint
~{V(A)}
and satisfying the relation
=
(V(A)
1
is the domain of the
operator A).
Possibly the most comprehensive results for the likelihood ratio
are those of Rao-Varadarajan [1963] which include the following theorem.
Theorem 1 .2. 1 .2.
Suppose that
(II,
I')
IJ
I
and
P
2
art' (;aussian pro!>;i1Ji I ity mt',uwres
of the likelihood ratio.
a)
I\(R )
1
b)
(R
-J
I
R(R )
L
-1
),;
R
)R '
2
2
to the whole of
Then
PL.
PI
with zero nll'an, and that
I.
is
,\
Dt'noLI'!>y
quadrat
j(-
I.
011
the logarithm
form i r and only i I
has a !>oundL'd and Iii l!>erl-Sl'llIlIidl I'x tens i Oil
H.
6
When
L
is a quadratic form, the closure
A of
exists, is a closed symmetric operator such that
almost all
where
IS I
h
R
l
-1
~Z{V(A)}
-R
-1
Z
=1
H,
in
* * and
S
is the operator satisfying the equation R = R SR
l
Z
Z
m·
its determinant (lSi = TrjClO=l A. J
A. = eigenvalue of S,
J
multiplicity of
and, for
J
m. =
J
A ).
j
Remark 1.2.1.1.
An additional feature of Guassian probability measures is that they
behave very much like normal distributions on Euclidean spaces and that
questions concerning their possible properties can be answered in a
"nothing or everything" fashion: we have seen that Gaussian measures are
either equivalent or orthogonal; also the subspaces have measure zero or
one and many zero-one laws about them are known (Baker [1971],
Kallianpur [1970]).
1.2.2.
The "no ise" process is the Wiener process.
In this case there
are problems that do not require that the distribution of the "signal"
be known in order to obtain conditions for equivalence and also an expression for the Radon-Nikodym derivative.
chastic calculus as developed by Ito [1951].
The basic tool here is stoOne of the most recent re-
suIts in this area (Theorem 1.2.2.1 below) is due to Kadota and Shepp
[1970]; another proof of the same result has been given by Kailath and
Zakai [1971] •
•
7
Theorem 1.2.2.1.
Let
process
where
P
w
{W ' tE[O,T]}
t
Y
be the standard Wiener process.
by the formula:
{Zt' tE[O,T]}
has almost all its sample paths in
be the measure induced on
probability
Define
{C[O,T], B(C[O,T])}
W: n
and the map
P
{Wt(W), tE[O,T]}.
Py
~
C[O,T]
a)
Py «
b)
if
Z
if
2
E Jl Zt dt
0
P
W,
L [O,T].
2
Let
by the underlying
defined by
similarly.
pendent of the future increments of
c)
Define the
W(w)
Then, provided
Z
is inde-
one has:
w
Py
is uniformly bounded,
+-
00
and
Z
and
--
P
w
W are mutually independent,
then
exp
where
{J: f(t,xt)dx(t) - ~ J: f 2 (t'X t )dt}
f(t,x ) = E(ZtIYs' s ~ t)y=x
t
where
x =
{x(t), tE[O,T]} E C[O,T].
1.2.3.
Neither the
II
s igna1
11
nor the
II
no ise ll are Gaussian.
We will
comment on results concerning independent "signa]" and "noise", "noise"
and "signal-pIus-noise" decomposable, Markov, or wi til independlmt
increments.
Theorem 1.2.3.1.
Let
and
N
(SI,
(Baker [1970a])
A, P)
be the basic: probability space and suppose that
S
are two measurable and independent processes that takc' values
f
in
(H, I').
N (resp.
Denote by
llN
(resp.
P +
S N
'
)Jh+N)
the measure induced by
S+N, h+N) and P on (H, r), h being an element (non-random)
8
in H.
i~
'then,
).Ih+N - ).IN
for almost all
in
h
H,
with respect to
In other words, the detection problem is non-singular if it is nonsingular for almost all signal sample functions.
Extensive work concerning measures corresponding to processes with
independent increments and general Markov processes has been done by
A general exposition can be found in a paper by Gikman and
Skorokhod.
Skorokhod [1966 ].'
Firtally, Feldman [1971] has initiated investigations for a class of
processes called decomposable.
Definition 1.2.3.1.
A stochastic process
X is called decomposable if
i)
X is indexed by a family of sets
ii)
for every finite disjoint family
Sl"",Sn
S
n
in
S,
X_\,OO
S - Ln=l
S,
of sets in
s 1 , ... ,Xsn are independent
oo
for every disjoint decomposition of S, S = L
S
n=l n'
the random variables
iii)
S
X
X
S'
S
and
with probability one.
n
The cases considered in the present section are not investigated
in this thesis: they are included to produce an overall picture of the
problem at the present time.
1.3.
A Discussion of the Contents of this Thesis
In 1.1.1, it was indicated that an important feature of the detection: problem involves the adequacy of the chosen model.
concerns the representation of the "noise".
Part of this
A Gaussian "white noise" is
often "defined" as a Gaussian process whose covariance function is a
delta function, the "derivative" of the covariance function
9
llIin(s,t)
Since such a definition is only "formal", in order to deal with
mathematical objects one has to choose workable definitions.
One ap-
proach is to interpret the problem in terms of the integrated signal and
noise:
this is what happens in Theorem 1.2.2.1.
Such a representation
is obtained by taking the time structure of the processes into consideration.
In terms of sample space properties, since the delta function
is supposed to act as an identity, it is possible to interpret it as the
kernel of the identity operator.
authors like Hida [1970].
This is the attitude adopted by
There is one drawback to such an approach.
Since the identity operator is not trace-class, there is no probability
measure associated with it.
Gross [1970, a summary], who was interested
in finding an analogue of Lebesgue measure for infinite dimensional
linear spaces, considered this problem, thereby introducing the notion
of abstract Wiener space.
Definition 1.3.1.
An abstract Wiener space is a triple
Hilbert space and
injection of
H
B
a real Banach space,
into
B
B
i: H
with dense range.
associated with the identity operator
induce on
(i, H, B).
I
a cylinder set measure and
on
B
~
B
H
is a real
is a continuous
The cylinder set measure
H
and the injection
i
is so chosen that this
cylinder set measure extends to a probability measure on
Sato [1969] characterized the cases for which
B
B.
is a Hilbert
space.
Theorem 1.3.1.
(Sato [1969])
/\ Ililbertian norm
II-II
011
a separahJ(~ fliJIlt'rt space is admis-
sible (that is, leads to a countably-additive extension
10
of a cylinder
set measure as in 1.3.1) if and only if there exists a one-to-one
Hilbert-Schmidt operator
in
H,
I· I
where
Th~re
S
on
H such that
is the initial norm on
Ishl
I Ihl I
for all
h
H.
are many other ways to obtain a "noise" that is Gaussian, but
not "white", as many in fact as there are weak covariance operators
that is, linear, bounded, positive and self-adjoint maps from
H
R,
to
H.
The first result in this thesis is a characterization of the Hilbertian
norms that permit an extension of the Gaussian cylinder set measure induced by
R
and the appropriate injection (definitions are given in
Chapter II).
characterizat~?n is
This
then applied to the non-singu-
larity problem for the extended measures.
Roughly speaking, it is shown
that non-singularity holds if weak covariance operators behave, with
respect to each other, as covariance operators would.
The Radon-
Nikodym derivative, under certain additional hypotheses, is then expressed in terms of the weak covariance operators and the norm of the
extension space.
This forms Chapter II.
In Chapter III, in an attempt to relax the condition that the
"noise" be Gaussian, we consider a class of processes that share with
the Wiener process many properties:
LZ-bounded or square integrable
continuous martingales.
Definition 1.3.2.
An LZ-bounded martingale is a martingale
su P
tElR
+ EM t Z
<
M for which
00.
The approach of Kadota and Shepp (Theorelll 1.2.2.1) consists in
looking at the finite dimensional distributions
limiting behaviour.
;IIIIJ
In studying L1H'lr
Since, with the new hypotheses, there an- no
11
distributions available, one has to rely on the approach to the same
results pursued by Kailath and Zakai [1971].
Their derivation practi-
cally relies on a single theorem of Girsanov [1960] that describes the
transformation of a Wiener process under an absolutely continuous substitution of measure.
Definition 1.3.3.
W = {Wt,At,P}
Let
(X,P)
be a Wiener process in
with respect to
Xo
where
X
+
¢,~)
(W,
J°t ¢s dWs + Jot
is independent of
o
[O,T].
An Ito process
is a process of the form
~s ds,
t
E
[O,T],
Wand the right hand side of (1) makes
sense and yields a continuous process.
Sufficient conditions for the
latter are
~
(2)
¢
and
(3)
¢t
(4)
¢
is in
L [0,T]
2
almost surely with respect to
P.
(5)
~
is in
Ll[O,T]
almost surely with respect to
P.
is measurable with respect to
Theorem 1.3.2.
(6)
f
J:
s
d(~ (Ill)
(H)
W
L
r dW
u
u
T
. '
l'xp(Zo(1
W
t
for all
be a Wiener process in
that satisfies (2), (3) and (4).
Z t (f)
(7)
At
t
in
[O,T].
(Girsanov [1960, Theorem 1])
W = {Wt,At,P}
Let
process
are measurable.
-
j:
»
f clu,
u
- J2
r
1'2 du
s
u
d I' (ell)
t
,
[O,'!'] •
12
Let
[O,T].
Consider a
Assume
(9)
Q(rl)
=
Then
W = {wt,At,Q}
(W,~,~+~f)
1.
is a Wiener process in
[O,T]
and
(X,Q)
is an Ito process.
It is shown in Chapter III that there is an extension of Girsanov's
theorem (Theorem 1. 3.2) valid for L -bounded, continuous mar tinga1es,
2
provided an additibna1 assumption is made.
A condition for the equi-
valent of equality (9) is given and a discussion of the obstacles met
in obtaining a final answer, that is in proving the equivalent for
\
martingales of Theorem 1.2.2.1, is offered.
Remark 1.3.1.
One observation reveals how the passage from the Wiener process to
martingales cart be achieved: the process
of (6) is a martin-
gale from which is subtracted half of its associated natural increasing
process (definitions are given in Chapter III).
Such processes have re-
markab1e properties revealed by the use of stochastic calculus for martinga1es, as developed by Kunita and Watanabe [1967] and extended by
Meyer and the Strasbourg group, Do1eans-Dade [1970] in particular.
Since most of the literature on the subject is in French, a
summary of results is included at the beginning of Chapter III.
II
CHAPTER
EXTENSION OF GAUSSIAN CYLINDER SET MEASURES
2.1.
Notation and Conventions
N
positive integers
JR
n
real numbers (lR is the cartesian product of n copies of JR)
H
Hilbert space with scalar product
K
Hilbert space with scalar product [0,0]
V(A)
domain of the operator
A
<0,0)
and norm
and norm
I loll
[1 0 1]
(set of elements for which A is
defined)
Ker(A)
kernel of the operator
A (set of elements that A sends to
zero)
R(A)
range of the operator
A
(set of images by A of elements in
V(A) )
B(B)
Borel sets of the topological space
f[8]
image by the map
S
closure of the set
f
of the set
S
B
8
in the appropr i;l
tl' SP;Il:('
ThL' end of each definition, statement of 1t'l1IlIIa, proposilioll,
theorem and proof is signaled by a square
2.2.
II.
Hilbertian Norms and Extension of Gaussian Cylinder Set
Measures
In this section, we characterize the Hilbertian norms leading to
countable extension of Gaussian cylinder set measures.
giving some necessary definitions and results.
We start by
These can be found in
Baker [1972] and, in somewhat less detail, in Balakrishnan [1971] or
Gelfand and Vilenkin [1964].
2.2.1.
Cylinder sets and Gaussian cylinder set measures.
Definition 2.2.1.1.
Let
n
B(lR ).
If
{hl, ... ,h }
n
be a finite subset of
A cylinder set
CH[hl, ... ,h ; B]
n
then
CH[hl, •.• ,h n ; B]
B an element of
is defined by the formula:
H is a finite dimensional subspace of
O
{hl, .•• ,h },
n
Hand
H containing the set
is said to be based on
H . 0
O
Proposition 2.2.1.1.
A set
C
is a cylinder set in
H with n-dimensional base space if
and only i f
(2)
B
C
H J.
o
H is an n-dimensional subspace of
O
where
ment of
set
+
H and
B is a unique ele-
B is then called the base or base set of the cylinder
B(H )·
O
C. 0
Proposition 2.2.1.2.
i)
ii)
The cylinder sets of
A(H)
generates
H form an algebra
A(H).
B(H). []
Definition 2.2.1.2.
A cylinder set measure
to
1.I
on
lR such tha t :
15
A(H)
is a set function from
A(H)
i)
ii)
iii)
for all
~
(H)
if
C
in
A (H),
0
$
~
(C)
$
1
=1
{C , n E
n
N}
is a collection of disjoint cylinder sets with
~{U
c.onunon base space, then
nEN Cn
}
= I nE N ~{C n }. 0
Definition 2.2.1.3.
~
Let
be a cylinder set measure on
A(H).
For fixed
h
in
H,
define
{;\h, A
Let
then
~h
~
E
IR}.
denote the probability measure induced on
B(H )
by
h
~.
If
is said to have bounded one-dimensional variance. 0
Definition 2.2.1.4.
~
Let
be a cylinder set measure on
mensional variance.
(5)
J
(m,h)
(x,h)
~
Definition 2.2.1.5.
~
The mean of
d~h(x),
joint and non-negative operator in
with bounded one-di-
is an element
for all
A weak covariance operator on
A(H)
II
h
in
m
H such that
in
H. 0
is any Ii IH'ar, hounded, se1f-:l0-
II. II
Definition 2.2.1.6.
Let
measure
IJ
be a cylinder set ml'asure on
Phon
lli.
by the formula:
16
A(II) .
IlL'f i
Ill'
"
proh"b [1
[l
Y
(6)
e
If
~h{xE~I<x,h> E A},
Ph(A)
for
A
B (IR) •
in
corresponds to a normal distribution for all
Ph
the cylinder set measure
~
h
in
H,
then
is called Gaussian. 0
Proposition 2.2.1.3.
There is a one-to-one correspondence between the class of all zeromean Gaussian cylinder set measures on
having bounded one-dimen-
A(H)
sional variance and the class of all weak covariance operators on
~
The correspondence between such a measure
operator
R
H.
and its weak covariance
is defined by
(7)
where
H.
-n,k
= {Ah+vk,
~h,k
and
in
v
and
IR}
B(~,k)
by
is a weak covariance operator and if
~R
is the measure induced on
~.
0
Proposition 2.2.1.4.
If
R
c
is the associated
zero-mean Gaussian cylinder set measure, then
<P {B; «<Rh., h . » ) ~ . l}
1
<P{B; «<Rhi,hj»)~,j=l}
where
J
1,J=
is the measure of
B with respect to
the normal distribution with zero-mean and covariance matrix
Proposition 2.2.1.5.
If
f: JRn
-+
IR is Borel measurable and if
g(h)
]7
g: H
-+
JR is defined by
where
hl, ... ,h
are fixed in
n
f
lRn
2.2.2.
H,
then
f(x) dHx;
«(Rh.,h.))~
.-l}·
1 J
1,J-
0
Some useful observations concerning Hand K.
Lemma 2.2.2.1.
Let
(. , -)
H
be a real and separable Hilbert space with scalar product
and associated norm
product on
H
H with respect to
H
K.
(K, [
Let
[I~
I]
What follows concerns
[- , -]
[I- I].
with associated norm
tion of
into
II-II.
and by
E
denote another scalar
Denote by
E
the natural injection of
as a map from
then
is an injective bounded linear map from
range and
[-'-]K
(11)
K
H
is weaker than the norm
h
E
in
[I-I]
for all
and some
(H,
II- I I)
to
I I- I I,
that is, if
a > 0,
H
to
K
with dense
is a real and separable Hilbert space with scalar product
satisfying the relation:
[h,h']
[Eh, Eh ' ] K
for all
In the sequel we will write
Proof:
(11).
(I 2)
the comple-
I-I]) .
If the norm
a nd
K
The claim on
Since, on
It '
H,
[-, -]
hand
[Eh,Eh']K
E
=
in
K.
[Eh,Eh']. 0
is obvious from its definition, (10) and
is a scalar product, we have for all
i nil,
til , Ii' I
h'
4I {[I 1i+1l' II 2 - I IIi-Ii' I J2} •
This gives, from (11),
18
h
(13)
By
[Eh,Eh'lK
contin~ity,
t {[ I
2
I
Eh+Eh' lK
=
-
[IEh-Eh' I l/}.
the same formula (13) holds on
ner product space.
Since
K
K
is a Hilbert space.
suffi~ient
to notice that any dense set in
E
To see that
that is a dense set in
H
K,
which is thus an in-
[I· IlK
is complete in the
lows that
an image by
K,
K
norm,
it fol-
is separable, it is
for the stronger norm has
because the norm of the lat-
0
ter is weaker.
Lellll1a 2.2.2.2.
Let
Hand
K
denote real and separable Hilbert spaces with re-
spective scalar products
[I· Il.
and
into
<.,.>
Suppose that
E
and
[·,·l
and associated norms
is a continuous linear injection of
11·1 I
H
K with dense range.
To avoid some repetitions, we will denote a set of elements {E,H,K}
having the properties stated above by (E,H,K) and refer to it as the
triple (E,H,K).
i)
(14)
E*,
Then:
the adjoint of
=
<h, E*k)
E*
[Eh,kl
iii)
r~*
is inj ec tive
E**
=
E
1
(E- )*
and
E = U (E*E) 2
U*E
Proof:
•
=
where
;k
(E*E)2
and
=
h
in
Hand
K
The formula
in
K.
K
to
H whose
U
H
to
K.
(E*)-l
is a unitary map from
E
R(E)~
Also
k
(E*E)2. 0
E*U
20.7
Claim i) is Theorem
which applies since
H.
for all
K.
;k
iv)
exists and is defined by the relation
is a bounded linear transformation from
domain is
ii)
E,
of Bachman-Narici [1966, p.363l
is a bounded linear transformation defined on
= R(E)~ =
Ker(E*)
]9
holds for the bounded linear
operator
But
E
E
[rom
H
to
K
(Baehman-Narici [1966, p. 363, 20.37]).
has dense range by hypothesis and thus
Since
is one-to-one, as was stated in ii).
defined on the range of
(E*)-l
E,
E
K.
Thm. 30.3.(3)]).
=
E
(E*E)
he
2
from
H
to
and zero on its kernel.
one-to-one and
U*
E
= U(E*E)~
is an isometry because
So
U
E
has dense range
is unitary.
U*
R
(E,H,K)
on
11·11
norlll
H
and
and the
(see
that defines a
E
determine
and we suppose that
extends to a Gaussian probability measure
llIore
(Halmos
We start by con-
we have a triple
a Gaussian cylinder set measure
OIlC
is
The second claim of
zero-mean Gaussian cylinder set measure
First we IH'cd
E
D
Lemma 2.2.2.2) and a weak covariance operator
till'
for a
U
Extension of Gaussian cylinder set measures.
nl'cessary prop('rL ies of
From
But
sidering the following situation:
K
is
is isometric on the range of
third follows by taking adjoints.
on
=
U
iv) follows from the first by premultiplying both sides by
2.2.3.
is
K.
is an isometry since
[1967, p. 69, Corollary 2]).
E
Now to claim iv).
Halmos [1967, p. 68, Problem 105], we have that
U
-1
(Bachman-Narici [1966, p.357,
So we checked claim iii).
partial isometry
Since
E*
E
l
(E- )*
Thus
(Bachman-Narici [1966, p. 354, Thm. 20.2]).
E**
So
is injective,
which is dense in
bounded, it is closed and thus
= {a}.
Ker(E*)
Oil
Oil
K.
1':1111
We wish to givp
11':11, II' III.
1('lIIm;l.
Lemma 2.2.3.1.
Consider the triple
H
(E,II,K),
the weak cov;lrianct' operaLor
and its associate zero-mean Gaussian cylinder St'l
relation
2.0
111(';1
SlI rl'
I{
C
II,{
•
on
-1
c
(15)
]JR
defines on
0
E
K a Gaussian zero-mean cylinder set measure such that the
vector
k
E
i
K,
i = 1, ...
,n,
has a mu1tinorma1 distribution with mean zero and covariance matrix
(16)
«(RE*k.,
~
Proof:
2. 2 .1. 1) .
(17)
Let
E*k.»)~
j-1. 0
J~, C [k , •.• ,k ; B]
K 1
n
K
(Def.
Then,
E-1 {C [k , ••• ,k ; B]}
n
K 1
=
(by Def.)
{hER I ([Eh,k ],.··, [Eh,k ]) E B}
1
n
=
(by (14»
{hERI «h,E*k1 ), ••. ,(h,E*kn » E B}
=
(by Def.)
Thus the inverse image by
in
be a cylinder set in
E of a cylinder set in
K
R and consequently the definition (15) of
is a cylinder set
makes sense.
Moreover, by (17) and Proposition 2.2.1.4,
(18)
v
c
R
{C [k , ..• ,k ; B]}
K 1
n
=
C
]JR {C R[E*k1 , ••• ,E*k ; B]}
n
~{B; «(RE*ki,E*kj»)~,j=l}. 0
The proof of the next proposition is adapted from Sato [1969, p.
76, Thm. 3].
Proposition 2.2.3.1.
Suppose we are given a triple
(E,II,K)
and lhat
I{
is all lnJ(·c-
tive weak covariance operator with associated zero-mean Gaussian
21
cylinder set measure
JJ
c
R
as in Lemma 2.2.3.1. (15) and
Define
•
suppose that it extends to a probability measure
covariance operator
on
K
with weak
R.
Then there exists a self-adjoint Hilbert-Schmidt operator
H
and a bounded linear extension
all
h
in
of
SR
-he
2
to
H
on
such that, for
H,
[IEll
(19)
T
S
IIThll.
0
Remark 2.2.3.1.
a), S),
The proof of Proposition 2.2.3.1. is given in four steps
y) and 8).
Only in part 8) do we make use of the hypothesis that
an injection.
Since we wish to refer to parts
when the hypothesis that
R
R
is
a), S) and y) later on,
is injective will be dropped, the proofs of
a), S) and y) will be carried out as if
R
were a general, not neces-
sarily one-to-one, weak covariance operator.
Proof of Proposition 2.2.3.1:
By definition, for
k
and
l
k
in
2
K,
=
(20)
But, by Lemma 2.2.3.1, the vector
([o,k ],[o,k ])
l
2
has a bivariate nor-
mal distribution with mean zero and covariance matrix
«(RE*k,' ,E,':k.)): . 1.
'1
.1
1 ,J =
Consequently, we also have, for
inK,
('! I )
I· , k I I I . , k ') I
1<
"1<
•
get
22
k
l
and
k
2
A first consequence of (22) is, for all
(23)
k
in
K,
=
A second is that
(24)
R =
ERE*.
No ~ N be an index set and
Let
of eigenvectors of
R.
Let
A -~
n
n
the family
the
i)
./
R(R)
in
H.
forms a complete orthonormal set in H .
O
are orthonormal.
{f , nEN }
n
o
f
n
's
We have indeed, by the choice of the
(25»
n
~ E* e .
n
H denote the closure of
O
a)
and
A
Define
-
f
be an orthonormal set
R belonging to the non-zero eigenvalues
spanning the range of
(25)
{en' nEN }
o
f 's
and
n
e 's
n
(relation
and relation (22),
on,m
=
ii)
the
form a complete set for
fn's
First we need to show that the
nition (25), the
that
R(R)
and
f
n
R(R~)
have the same closure in
we need to show is that
k:
Ker (R 2)
that is
~
Ker (R).
k:
II R2h"
Ker(R) = Ker (R~).
Moreover, i f
= 0
and
h
H .
O
Since, by Defi-
R(R~), it will be sufficient to prove
are in
's
fn's are in
H
O
h
and equality of the two sets follows.
23
To that effect all
But it is clear that
is in
is in
H.
Ker(R),
J"
Ker(R~).
then
<Rh,h)
=
{,
So
Kedl{) :.- Ker (1{2)
0,
Suppose now that
o
for all
n
No.
in
We have by (25) and (14) successively, for all
n
in
N0'
-~
~
[ER h,e ].
n
n
A
So, since, for all
n
N0'
in
A
is strictly positive, we get from
n
(27) and our hypothesis (26), that
o
(28)
e 's
n
NO.
No '
where
m
is one of the
in
to obtain a complete orthonormal set in
{g , m E
denoted by
E
N
is also an index set.
If
then (28) gives
'
(29)
(for
g
m
e 's
is not one of the
If
n
{e ,
n
Complete the set
K,
for all
n
'
we still have, from (14),
(30)
But, from (23) and our choice of
(31)
II R
l-2
E*g
m
II
gm'
we get
o.
So from (30) and (31) we deduce
o
(32)
if
for all
L
(29) and
(32) then yield
in
n
No.
I,
[!fo:lChIJ = 0
or
= (),
R:'h
lH'cause
E
is;1ll
1
injection.
Conse<jlwntly
latter set is
R(R) ,
Ker(R).
we must have
h =
h
is in
Ker(R:')
and
Since we started with
I).
Thi~; proves ii).
h
WI'
in
l};lve Sl'l'n that
11 ,
0
that
lht'
is in
Remark 2.2.3.2.
Let
H[f]
denote the linear manifold consisting of all finite
linear combinations of elements of
H[f]
is dense in
8)
{fn' n E No}.
H •
O
Construction of the operator
Let
S
Then a) shows that
S on
sel~:adjoint
be the bounded,
H •
O
and compact operator on
H
O
defined by
(33)
=
S
that is
L
nE N0
A
n
~
LnE
n
@ f
o
N
A <
n
0
00.
orthonormal set in H
and
O
thus
S
For any
Since
k
(35)
S
is in
for all
Moreover,
An
in
H.
{f, f E No}
n
n
H
and
O
is a complete
n
SR~E*k = ~E*R~k
H
and
O
{fn' n E NO}
is a complete ortho-
o
(8)],
we have, premultiplying (34) by
L <~E*k,fn)Sfn·
nEN
in
= H[f] = HO•
R(S)
o
=
A
But, according to (22) and the definition of
(37)
is Hilbert-
is strictly positive for all
By (25) and (33) applied to (35), we get
(36)
S
L <~E*k,fn>fn·
nEN
is bounded
SR~E*k =
K,
h
[a)],
H
O
~E*k =
in
k
R 2 E*k
normal set in
Since
,
is strictly positive on
y)
(34)
n
A ~<h f)f
' n n'
n
Sh = \
LnEN
Schmidt since
f
-~
~{;
>. .' [R • k
n
and
n
'
I.
l'
II
S,
Thus, substituting (37) into (36), we obtain
1,
(38)
I
SR2E~~k
ntN o
1,
1('E*
Now
from
K
is a continuous map from
to
to
K
~
(Lemma 2.2.2.2. ,i» and
H
continuous map.
H
since
E*
is bounded
is the square root of a
R
Thus (38) can be written as
(39)
The last equality of (39) holds because the
the range of
0)
R
I I]
[ Eh
span the closure of
e 's
n
which is also the closure of the range of
=
II Th II
for aU
in
h
where
H.,
~J.'
2
R
•
is the bounded
T
tineal' extension of
As noted in Remark 2.2.3.1., we assume throughout this section
0) that
R
is injective.
We have by definition:
(40)
sup{ I [x,Eh]
[ IEh I]
I,
x
in
[Ixl] ~ l}.
K,
Using relation (14), (40) becomes
(41)
sup { I( E*x , h) I,
[IEhl]
x
in
I I]
[ x
K,
To proceed further we need to prove that
noU ce first that
\{~
l'sis:
I{
is
injective.
R~
Indeed
;llId
IC"
has dense range.
We
inj('ctiv(' hy Itypotlt-
is
I
silw('
Lilli:;,
l}.
:0;
I
1t;IV('
:;;1111('
\«'rlH'I,
I{"
I:; ;1I:;1l
inj""1 iVI'.
I
11111
I':'/(
illj""1 iv(' hy
I:;
1."111111:1
/./.:I./.,ii),
:;Il
11t;11
I("I,:A
I:; 0111'
III
.. ,1
w('
havl' lhaL
formation
({'
is all
injectioll.
Now,
B mapping the Hilbert space
26
j
or allY houlld,'"
HI
I illl';II' trail:;"
into the Hilbert space
H ,
2
we have (Bachman-Narici [1966, p.363, 20.3.7.])
R(B)~
(42)
= Ker(B*).
If we apply (42) to
R~, which is self-adjoint, we get that R(R~) = K.
-k
We also need the fact that
R 2S
that the latter range is dense in
H.
and the second from the inclusion
H[f]
k
R(R 2E*)
is well defined on
and
The first point follows from
k
R(R 2E*)
~
y)
(Remark 2.2.3.2. and
relation (25».
R-~~ =
So from (41), (23) an4 the relation
on
H), and the fact that
in
H,
is dense in
(identity operator
H
R(R~E*)
K and
dense
we obtain:
(43)
But
R(R~)
I
=
yields, when applied to (43), taking into account the remark
y)
in the preceeding paragraph,
(44)
SUP{I<R-~(S~E*x),h>l, x
[IEhl]
SUP{I<R-~S(~E*x),h>l,
=
Now, from
in K,
x in K,
IIR~E*xll
I I~E*xl I
:0;
l}
:0;
l}.
and (23), we obtain
y)
(45)
k
R(R 2E*x)
Since
operator on
defined on
is dense in
H.
Let
(45) proves that
-k
Thus
H.
H,
(R 2S)*,
T
the adjoint of
= (R-~S)*.
which we denote by
(46)
T
)
-k
SR".
27
!{
-k
R 2S
-~
'2S,
is a bounded
is bounded and
I
Then
T
is an extension of
SR"
Then, from (44) and (46), and the above remarks, it follows that
sup{
=
(47)
I\R-~sx,h;l,
sup{!\x,Th;l,
x
x
in
II x II ::;
II x II : ; I}
in
H,
H,
I}
II ThJJ.
(33),
(46) and (47) prove Proposition 2.2.3.1. 0
Remark 2.2.3.3.
T
has dense range.
Proof:
II Th II.
T
Consequently
such maps.
R(S)
Since
R(S)~
thus
~
is an injection since
R(T),
S
k
TR 2
S
E
is an injection and
[!EhIJ
is an injection as a composition of
is self-adjoint (Proposition 2.2.3.l.,SJ)
Ker(S),
S
has dense range.
But
S
so that
T
has also dense range.
U
k
TR 2
and
implies
Remark 2.2.3.4.
k
T
V(T*T)2
Proof:
for
V
unitary.
By the polar decomposition,
etry on the closure of
The range of
V
k
R{(T*T)2}
T
=
k
V(T*T) 2
for
So
an isom-
and zero on its orthogonal complement.
is equal to the closure of the range of
dense by Remark 2.2.3.3.
V
Vis an onto map.
Also
T
V
which is
is isometric
I
on
II,
since
'I'
lwing an injecl ion impl
injection. II
Remark 2.2.3.5.
i)
ii)
iii)
E*E
RE
I~*RE
T7<T.
ERT*T.
T*TRT*T
L8
j('S
111;11
('1'7<'1'):'
is
;\11
Proof:
For all hand h' in H we have, by (14), [Eh,Eh') =
But Proposition 2.2.3.1 also gives [Eh,Eh') = (Th,Th').
(E*Eh,h').
Thus, for all hand h' in H, the equality (E*Eh,h') = (T*Th,h')
which means that E*E = T*T
and i) is checked.
Part ii) follows from (24) and part i) since
RE
holds,
R = ERE*
implies
ERE*E = ERT*T.
By (24) again, R = ERE*.
Thus E*RE = E*ERE*E = T*TRT*T, by i).
k
k
2
But TR 2 = S = R2T* implies TRT* = S and we thus obtain iii). 0
Remark 2.2.3.6.
"*
R2E
=
Proof:
k
k
ERk2T.
For all k in K, we have, by Proposition 2.2.3.1, y),
~L
SR 2E*k = R2E*R~k.
-k
k
~L
So R 2SR 2E*k = E*R~k.
k
~k
thus, for all k in K, T*R 2E*k = E*R 2k.
have
(T*R~E*k,h)=(E*R~k,h) which
But, by (46), T*
and
So, for h in Hand k in K, we
becomes
(E*k,~Th)
=
(E*~k,h).
k
~k
~k
~k
plying (14) we get [k,ER 2Th) = [R 2k,Eh) = [k,R 2Eh), that is R2E =
Proposition 2.2.3.2.
then R~ =
If the hypotheses of Proposition 2.2.3.1 hold,
where
W is a unitary operator from K to H. 0
Proof:
By the polar decomposition and relation (24)
where:W is a partial isometry from K to H.
W is an isometry.
k
Since R2 E* is one-to-one,
We also have R(E*)L = Ker(E**) = Ker(E) = {e}, that
is E* has dense range. Thus
R~E*
has dense range and consequently W is
unitary (the facts just asserted are justified by Problem 105 and Cor~.I.
oUaries 1 and 2 of Halmos [1967, pp.68-69]).
29
(48) thl'n yields Ri ' =
Corollary 2.2.3.1.
If the hypotheses of Proposition 2.2.3.1 hold, then T
WE and S
1.,
WER'2. LJ
Proof:
~k
By Proposition 2.2.3.2 we have R2E
~~
R2E
2.2.3.6 asserts that
=
But Remark
~
k
ER 2T.
So, since Rand E are injections,
Y
1.,
WE = T and WER~ = TR 2 = S. 0
Corollary 2.2.3.2.
If we make the hypotheses of Proposition 2.2.3.1, R
W is the unitary operator of Proposition 2.2.3.2.
Proof:
From Corollary 2.2.3.1 we have T
=
U
WE.
But Remark 2.2.3.5,
So E*RE = E*W*S2WE or R = W*S2W. 0
iii) yields E*RE = T*S2T.
Remark 2.2.3.7.
If W is the unitary operator of Proposition 2.2.3.2, U the unitary
operator of Lemma 2.2.2.2, iv) and V the unitary operator of Remark
2.2.3.4, we have WU
Proof:
= V.
k
k
E = U{E*E)2 = U{T*T)2 = UV*T using successively Lemma
2.2.2.2, iv), Remark 2.2.3.5, i) and Remark 2.2.3.4.
But Corollary 2.2.3.1 asserts WE = T.
=
So WE
WUV*T.
Consequently WUV* = I or V = WU.IJ
Corollary 2.2.3.3.
,~I
If
we make the hypothesis of Proposition 2.2.3.1, (((1(")
Proof:
1
I,
,~,I
/((I':R:').
II1Il
"I
I{'''
=
I':I{:'W
1
/(I{''').
LillIl
From Propositiol1 2.2.'3.2 we hav('
TIIlJ:,
,,,I
;lIsp
ilJll'lil'~;
",I,
I~(I':I{:')" /~(I<:').
1{:'Wt,
I.
11111
is l'1ll'l'I,,·d. II
3()
I~(I·:I{")
~,I
1
I{"
I':I{'''W.
,,,I
I~{I{''')
Tillis
I
c
,
I<IC
wld"il }'.iv<':;
1~(I':l\q)
1
I<II~ (1\ ') I
,llld
:.11
III<'
;1:;:"'1
Notation 2.2.3.1.
From now on, 'unless specifically stated, we adopt the following
notation:
R(R)
R (0')
=
=
Ker(R)
=
R(ii)
k
Ker (R 2)
=
=
R(~)
~k
~
Ker (R 2).
Ker(R)
.-",JI
Lerrma 2.2.3.2.
We adopt the assumptions of Proposition 2.2.3.1 but drop the requirement that
R be an injection.
Then, with Notation 2.2.3.1, we have
Proof:
k
~
K
l
Pick k in K ·
2
k2
II R E*h II
=
= E[H l ]. 0
We first notice that (14) and (23) are still valid.
E[R(R 2 )]
a)
K
l
o.
"'*:
2
Then
So, for all
R k = 0
h
in
and from (23) we get
H, using (14), the following relatim
holds:
(49)
Since
to
<h,~E*k)
0
k
K
2
(:,)
<R~h,E*k)
=
E[H ]
I
h
c
in
[ER~h,k].
k
ER 2 h
K , (49) implies that
is arbitrary in
for all
=
2
H,
tha t is
J,;
E [ R (R')] ::. K
.1
2
=
is orthogonal
KI
K
l
By definition
HI
R(R1i ).
But, since
E
is continuous, we
have
E[R(0')].
E[H ]
l
Now, by
Consequently
a)
and it
then followH l hn L
3J
Iq III
I
~ KI •
Suppose that
k
is an element in
K
l
orthogonal to
E[H ].
l
To show that claim yJ is valid, it is sufficient to prove that
By hypothesis, we have, for all
(h,E*k) = 0
written, by (14), as
/ R~h E*k\
\
'
/
we will then have
(h,R~E1Ch) = 0 for all h
~k
[ IR2k I] =
(23) ,
with
k
in
O.
for all
0
H.
in
h
in
h
k
which can be
that is
k
II R2E*k II = 0
is in
k =
H,
O.
=
"A fortiori"
Hl ·
in
This means
we must have
K2 '
[Eh,k] = 0,
HI'
for all
In other words
.1
K
1
in
h
k
and, by
Since we started
K .
2
e. 0
Remark 2.2.3.8.
~k
G*R 2 where
i)
ii)
K
l
G
is invariant under
k
ER 2E*
Proof:
G*.
is a bounded self-adjoint operator on
fying the following inequality for all
I IR4!'E*k II
IIEII
IjEl1
=
K.
is a bounded linear operator on
[IR~kl],
k
in
K
satis-
k
[IER 2E*kl] ~
K:
where we make use of (23).
Taking
squares and writing norms as scalar products we obtain
The latter inequality and a result from Baker [1970b, Coroll. l.a)] imply that
J--
",h:
R(ER 2E*) ~ R(R 2 )
linear operator
Wl' w;lnt
K
But
thl'll
: ;l \('
II
..
I{
i"
(I ('II S(' i
l'!{'k, 1l,1~1
,
C'"
," K I
---
K ,
1
11
I I k-I{k 11 I I
111;11
EW-E*
R G
I "11.1 ~;
is;1 Cauchy
~
1.
~
quence in
=
I h, '1-(' ( •x i ~: (
HI
n
10
/. ( ,
..
( )
; I~ ;
St'qUl'IH'('
ill
1.-
SO
m
and there is an
11
32
~:
a
ill
~;(''1"t'III'"
K
S i 1)(' ('
1
Ik ,
II'
II
~J/
II R 2 E*R;9 (k -k ) II.
k
So pick
K,
II
in
I ('11.1 ~ ; 10
K,
II
II Rk n -l{k m I]
for some bounded
Taking adjoints, we obtain i).
K.
now to show that
t Ill' rail)',' ' oJ
ill
G on
rvk
2
1.)
or that
,_,1/
{R 2 E1c){'''k, ne
n
HI
and,
N}
for which
NI
i III III i I V.
h('C,IIIS('
or
is a Cauchy
C'I),
Sl'-
tends to zero as
~k
tends to infinity.
k ~k
[IEh-G*R 2k I] = [IEh-ER 2E*R 2k I]
n
n
converges towards
n
Eh
when
n
our choice of the sequence
verges towards
G*k
as
k
~J..,
ER2E*R~k
But by i),
~k
IIEII Ilh-R E*R 2k
:0;
tends to infinity.
we also have that
n
tends to infinity.
~k
= G*R 2k
n
k2
~k
so that
G*R 2k
n
n
II.
Thus
However, by
G*Rk
n
This means
n'
Eh
con-
= G*k.
Definition 2.2.3.1.
With the assumptions of Lemma 2.2.3.2, we define
restriction of the weak covariance operator
real and separable Hilbert space
scalar product of
.-
H,
HI
R
to
R
l
HI'
to be the
Then, on the
whose scalar product is the
is an injective weak covariance operator. 0
R
l
Theorem 2.2.3.1.
Suppose we are given a triple
(E, H, K)
and that
R
is a weak
covariance operator with associated zero-mean Gaussian cylinder set measure
c
as in Lemma 2.2.3.1. «15»
Define
]JR •
extends to a probability measure
v
R
on
K.
Then there exists a Hilbert-Schmidt operator
bounded linear operator
for all
h
in
Proof:
HI'
[\Eh
As before
(1)
that extends
T
l
and suppose that it
SlRl
-~
Sl
on
HI
to
HI
such that,
and a
I]
R will denote the covariance operator of
(1)
vI{'
; B] and PH ' respectively, denote a cylinder
C [h
, ... ,h
n
H1 l
1
set in HI (that is, la~l) is in HI for i = 1, ... ,n) and the pro-
Let
1
jection of
H with range
HI'
We then have
(50)
(
B}
We can thus define a cylinder set measure
on
by the
formula
(51)
Jl
c
R
oP
-1
H
1
A similar procedure provides a cylinder set measure
on
by the formula:
c
-1
\!R oP
K
1
(52)
where
P
is the projection mapping
K
1
Since
K
1
K
K
1
K ,
1
so that
which are Borel sets of
(Parthasarathy [1967,p.5, Thm. 1.9]).
on
B(K )
1
\!R
is a probability measure since
1
ki ) , ...
definition,
is
K
induces a measure
Thus
,k~l)
CK[kil), ...
\!R
\!R(K )
1
1
(Ito [1970, p.181,Thm]).
c
\!R.
~
is an extension of
k = k(l). Then, by
K1
BI = {k(KI([k(l),k;l)], ... ,[k(I),kl~I)J) (
be elements of
,k~l);
=
K
1
and
P
'I'hw-;
c: , I k (, I ) ,
1\
B(K )
1
by the formula:
We are going to prove that
Ill.
K .
1
is closed, it is a Borel set in
the class of all subsets of
Let
onto
••• , I, ( I );
1\
I
.
c. K I k
II
,
II
(I)
I
' ••• ,I,
(I)
II
'I'11('n
(by (53»
34
;
1\
I.
e'
(by (54)
=
(55)
But
K
l
,
v '
being the support of
has probability one and conse-
R
quently
(56)
But, since
v
R
extends
v c
by hypothesis, we also have
R
=
(57)
However, by (50), "mutatis mutandis",
--
-1
(1)
(1)
P
{CK1[k l , ... ,kn ; B]}.
Kl
Thus
c
-1
(1)
(1)
{CK1[k l , ... ,k
v R OP
; B]},
n
Kl
=
which becomes, due to (52),
(58)
From (55), (56), (57) and (58), we deduce
(59)
In other words, the probability measure
~
of the Gaussian cylinder set measure
~
We now want to show that
on
sure
K
l
by the restriction
~
~jR
c
defined in (57).
c
vI{
E
l
of
v
v
c
R
on
R
on
B(K )
l
A(K ).
l
is the cylinder set measure induced
E
to
H
l
and the eylindt·r set mea-
But (58) and (59) give also
(60)
35
is an extension
If we compare the right hand side of (60) with (16) of Lemma 2.2.3.1,
we see that
(61)
¢{B;
Now Lemma 2.2.3.2. shows that
to
K
l
with dense range.
E
l
as well.
«\RE*k~l) ,E*k~l»))~
l
]
. 1}
l,]=
is a bounded linear map from
E
1
HI
So the properties of Lemma 2.2.2.2 hold for
In particular, if
h(l)
is in
HI
and
k(l)
in
K ,
1
we
have
So, from (62) we get
(63)
,-1
(1)
E1 {C [k
K1 l
(1)
;
n
, ... ,k
B]}
{hCl\H11([Elh(l),kil)J, ...
,[E1h(l),k~1)])
( B}
e'
B]
CHI [E~i~k(1)E*k(1)
1
' ... , 1 n ;
.
(63) together with (51) then yields
(64)
~
~R
c
oE
~
-1
1
)J R
{C
K1
[k
(1)
1
, ... ,k
(1)
n
;
B]}
[*k(l)
E*k(l)]}
'HI E l l " ' " 1 n
; B
c{C'
IJ I{ c"I' "]-ltc''II)
[J'*k(l)
"I
L
,'j<kCI). 1',11.
11
'
' ••. , "1
C(5)
Il l{
('I C' [I"~I (l)
'II
'.', <)
,"'<1 (I)
, ••• , '. ') \ 11
;
1'.
II.
Hut, by definition,
( ()£:.6)
c{C [!'*k(l)
)JR'H~L1
, · · ·E'*k(l)'BI}
,'In'·
36
(1
) JI*k(l»))11
I
1 i
' 'I j
i,j=J'
'P{B' CC\UE>'<k
,
1'\.
If we compare (61), (66) and (65), we see that we are left to prove the
equality
•
But (14) gives
[k(l) ,Eh(l)]
(E*k(l) ,h(l»,
which, compared with
(62) yields
(68)
Now
E*k (1)
PH E*k (1) .
1
1
<RE*k~),Ekk ~l»
= (RP
J
1
H1
E*k~l) ,PH E*k~l»
1
1
J
have established (67).
Thus far we have shown that i f
sure
\l
R
B(K):,
on
if
\l
R
\l
c
R
extends to a probability mea-
is the restriction of
\l
R
to
K ,
1
if
c
\l
R
\l
c
R
is the cylinder set measure induced on K by
and the projection
1
c
map P
and i f \lR
is the cylinder set measure induced on HI by
K
1
c
c
c
and the projection map PH ' then \l
extends \l
and \l
is
\lR
R
R
R
1
c
induced on K by \lR
and the map E
of the triple (E ,H ,K )· To
l
1
1 1 1
~
'e
~
~
~
~
complete the proof it thus suffices, by Proposition 2.2.3.1, to show
that
~
c
is the Gaussian cylinder set measure defined by
\lR
But
~
c
\lR {C
(1)
(1)
[h l , .•. ,hn ; B]}
H1
=
c
-1
(1)
(1)
; B]}
\lR OP
{C [h
, .•• ,h
l
n
H1
Hl
(by (51»
(by (50»
(by Prop. 2.2.1.4)
<P{B,'
«(Rh(l) h~l»»~.
1
'J
1,]=1
37
}
(by Def. of R )
1
on
~
Thus, by Proposition 2.2.1.4,
R .
set measure defined by
~R
c
must be the Gaussian cylinder
This completes the proof of Theorem
l
2.2.3.1. 0
To prove the converse of Theorem 2.2.3.1, we will first need the
preliminary facts and lemmas that follow.
Let
T
be an operator inH
that is bounded, linear and injective.
Define
f: HxH
(69)
+
R by the relation
(69) makes sense since
f
T
is bounded.
is an inner product on
[ Ihi] ~
II T II
Ilh II
the norm
E
[I·
I],
H
for all
than the original norm.
f(h,h')
=
T
being linear and injective,
that will be denoted by
h
Thus
in
H,
l
Since
the resulting norm is weaker
the completion of
K ,
[.,.].
H
with respect to
yields a real and separable Hilbert space and the map
that identifies
H
K is a bounded linear
with a dense subspace of
injection with dense range (Lemma 2.2.2.1).
Lemma 2.2.3.3.
If
T
is a bounded linear injection with dense range and if
defined as above, then, for fixed
such that
Proof:
T
is.
{h , nE
n
T
in
such that
for
H,
h' in
H
in
H,
(h,Th')
in
K
[k, Eh' I. II
H, it is possible to pick a sequence
II h-T 2h n II
converges to zero as
2
nfinity and further that
[IETh -ETh
n
m
goes to infinity.
I]
II
tends to
{ETh, n(
NI
is;1 Cauchy
II T h
n
= 11'I'2 h n _T 2 h m II,
So there is
k
[l
38
in
n
n
But then we also have that
j
k
there exists a
is a bounded linear injection with dense range, since
N}
in
in
and, for all
h
grows to
K,
2
Ilhll
So, given
to infinity.
n
=
[Ikj]
h
is
K
K
Ilhll
;IS
S('l(U('Ill'('
which tends to zero as
such that
tends
[ Ik-I':'I'h 11 II
11
lends
to zero as
[ Ikl]
n
when
goes to infinity, which implies that
grows to infinity.
n
Moreover
[Ikl] = Ilhll·
(h,Th')
But
[ IETh
n
I]
=
[IETh
n
I]
tends to
2
II T h II
n
and thus
2
= limn (T h n ,Th') = limn[EThn,Eh ' ] =
[k,Eh ' ]. 0
Lemma
2~2.3.4.
If
complete orthonormal set for
Proof:
for all
exists a
for all
in
K
h'
in
H.
such that
n
Ilhll
= 8.
h
is a complete or tho-
o
n
and thus
h
=
and
N,
in
since
[Ikl] = 0,
=
(Ten,Te n )
=0
(h,Te n )
But, by Lemma 2.2.3.2, there
[Ikl] = jlhll
Thus, for all
which implies
Hence
K.
N and show that
in
n
{Te ,nEN}
To check completeness, we suppose that
k
[k,Ee],
then
Orthonormality follows from the relation
= 0 n,m
n
K,
is a
H. 0
normal set for
[Ee ,Ee ]
n
m
{Een,nEN} ~ E[H]
is as in Lemma 2.2.3.3 and if
T
(h,Th')
we have
{Ee ,nEN}
0
=
[k,Eh ' ]
= (h,Ten ) =
is complete in
n
0
8.
Proposition 2.2.3.3.
Let
A(H)
lJ
c
R
denote the zero-mean Gaussian cylinder set measure on
associated with the injective weak covariance operator
a Hilbert-Schmidt self-adjoint operator
S
a bounded linear injective extension
to
If
I IThl
I
K
is the completion of
lJ
c
R
0
-1
E
H into
K,
H such that
Pick
SR
-he
has
2
H.
H with respect to the norm
induced by the scalar product
the injection map of
T
on
R.
[h,h']
= (Th,Th') and
[Ihl)
E
is
then the cylinder set measure
extends to a probability measure
on
K.
II
Remark 2.2.3.9.
I
If
I{
Is trace-clasH,
hy l .. ldllg
S
I{:',
WI'
will
olltllill
'I' ~
I
and the result is well-known.
So we suppose that
R
is not trace-
class.
Proof of Proposition 2.213.2.
the function '<j>(k)
a)
The function
respect to
<j>
= f
K
exp(i[x,k]) dvRc(x)
&8
continuous on
is well-defined by Proposition 2.2.1.5.
the random variable
with mean zero and variance
K
With
has a normal distribution
[ " ,k]
\RE*k,E*k) (Lemma 2.2.3.1).
So by Propo-
sition 2.2.1.5
(70)
<j> (k)
Also, by (14),
[ERE*k, k],
(RE*k,E*kJ
exp(-~[ERE*k,k]).
<j>(k)
Now
ERE*
is a continuous map from
tinuous maps.
Consequently
For all
S)
&n
h
H,
K
[ERE*k,k]
exp(-~[ERE*k,k]) =
and so is
so that (70) becomes
to
K as a composition of con-
is a continuous function of
<j>(k).
<j>(Eh) =
exp(-~[IER~Thl]2.
We first show that the random variable
v c,
spect to
R
(R'P~Th, '1"'< Til
1I('P~TIt)
=
J.
K(h) = i A Eh ,A( ml
Consider
as a subspace'
of cylinder sets based on
cylinder sets based on
II
into
K
[",Eh]
has, with re-
a normal distribution with mean zero and variance
{AT1<TIt,A( IHJ
strictcd to
K
II
AT)~Th
K(h)
or
II.
A
I.vt
and
It
restricted to
H(T*Th).
are probabil ity mvasurt·s.
A
Tltv
K
It
A K are of the form
Ii
I
{kE K [k, AEh ] E B, B c B (lR), Ac:n~} ,
40
I<
1)('
I'
alld
;111<1
"
til('
be tlte o-algebra
is measurable with n~spl'ct to
cylinder Sl'ts in
as a subspace' of
I)
-;,1
~',('I> r;\
or
A K and
re'-
It
inclusion lIlap
Silll'l'
I':
III('
() r
e-
AH
T*Th
the cylinder sets of
are of the form
{h'EHI(h',>"'T*Th) E B', B' E B(JR), >..'ElR}
and
1
E- {kEKI [k,>..Eh] E B}
=
=
{h'EHI [Eh' ,>..Eh] E B}
{h'EHI(Th',>..Th) E B}
Consequently, we can apply the change of variable formula to the measure space
c
H
(H, AT*Th' ~R ),
surable transformation
the measurable space
(K,
E and the measurable function
A K ),
the mea-
h
exp(i[',Eh])
on
K to obtain (Ha1mos [1950, p. 163, Thm, C]):
f
(71)
exp(i[x,Eh])
c
d~R
oE
-1
(x)
=
Since
~
f
exp(i[Ey,Eh]) d~Rc(y).
H
K
c
c -1
oE
= v
R
R
and
[Ey,Eh] = (Ty,Th) = (y,T*Th),
we get, from
(71) and Proposition 2.2.1.5,
But, since
T
is an extension of
taking into account that
S
SR
-k
2,
k
TR 2
S
and thus
is self-adjoint by hypothesis.
R~T*T
ST,
Then
(73)
We point out for further use the equality
(74)
II STh II
Thus (72) and (73) yield
y)
$(Eh)
ql (k) = exp (-~[ IBk I] 2)
Dchmidt operator on
= exp(-~[IER~Thl]),
the desired result.
k
B a If-Uber·!.-
roY' all
1,n
K and
K.
The latter claim is suffJeient to prove that
41
l.·xtends
lo
a probability measure on
K
{Ee , nEN} ~ E[H]
n
Let
(Parthasarathy [1967, p. 164, Example]).
K,
be a complete orthonormal set for
which always exists (Helmberg [1969, p. 49]).
Since
S
is self-adjoint
and injective, as the product of two injective maps, it has dense range.
k
TR 2 = S
But the equality
implies that the range of
range of
S.
2.2.3.3.
We thus obtain that
set for
Thus
T
contains the
has also dense range and we can apply Lemma
T
H and, sinc:e
S
{Te
is a complete orthonormal
n' nEN}
00
is Hilbert-Schmidt,
Ln=ll IS(Te n )
11
2
<
00
,
so that, by (74) ,
(75)
<
Define
Extend
to
let
B
B,
00
K,
an operator in
by the relation
linearly to the manifold generated by
the span of
Bk = 8.
B
by continuity.
K; for
B)
and y) •
h
k
n
= ER 2 Te .
n
{Ee , nEN}
n
For
and then
in
k
e-
is Hilbert-Schmidt by (75) •
a)~
We are now left to put the pieces together: by
on
BEe
in the "dense" set
H,
<j>(Eh) =
<j>
is continuous
exp(-~[IBEhl]2),
Consequently, the latter equality for all
k
in
by
K,
and
Proposition 2.2.3.Z. is proved. 0
In order to abandon the requirement that
R
be injective, we will
need a few properties of the direct sum of two Hilbert spaces.
HI
and
HZ
(-'·/1
and
product
H
where
So let
be two Hilbert spaces with respective scalar products
(-,°/ 2 .
The direct sum of
HI
and
H
2
is the cartesian
= Hl xH 2 endowed with the scalar product:
(hl,h )
Z
and
(hi,h
Z)
are elements of
42
H.
HI
can be identified
H
l
with
e
{8}
x
H
l
written
Ell
and
as if
HZ
H.
spaces of
with
HZ
{8}
and
H
l
and the direct sum can be
HZ
were complementary orthogonal sub-
HZ
H
l
Conversely, i f
onal 8ubspaces of
x
and
are complementary orthog-
HZ
the latter can be considered as the direct sum of
H,
I
H
cind HZ are separable and if {e(l) , nEN} and
and HZ' If H
l
n
l
!
{e(Z) mEN} are complet~ orthonormal sets for H and HZ respecl
m '
tively, then
{(e~l) ,8), (8,e~Z», nand m in N}
is a complete orthonormal set for the direct sum.
Theorem 2.2.3.2.

Let μ_R^c be the Gaussian cylinder set measure on H associated with the weak covariance operator R in H. Write H as the direct sum of H_1 and H_2, where R is injective on H_1 and vanishes on H_2, and let R_1 denote the restriction of R to H_1. We adopt Notation and Definition 2.2.3.1 for H_1 and R_1: S R_1^{-1/2} has a bounded injective extension T_1 to H_1, where S is a self-adjoint Hilbert-Schmidt operator in H_1. Let K_1 be the completion of H_1 with respect to the norm [|h_1|] = ||T_1 h_1|| for h_1 in H_1, and form the direct sum K of K_1 and H_2. Then the injection map E of H into K induces on K a cylinder set measure ν_R^c = μ_R^c ∘ E^{-1} that extends to a probability measure on K.

Proof:

From (50), we have

    μ_R^c{C_H[h_1^(1),...,h_n^(1); B]} = μ_{R_1}^c{C_{H_1}[h_1^(1),...,h_n^(1); B]}

for h_1^(1),...,h_n^(1) in H_1, since the covariance of the vector random variable ((·,h_1^(1)),...,(·,h_n^(1))) and the elements in the covariance matrix of μ_{R_1}^c are both of the form (R_1 h_i^(1), h_j^(1)) for i,j = 1,...,n (with respect to R_1, by Proposition 2.2.1.4). Thus it is possible to define a cylinder set measure on H_1 and denote it by μ_{R_1}^c, as in (51). The injection map E_1 of H_1 into K_1 then induces on K_1 a cylinder set measure that extends to a probability measure ν_{R_1} on K_1 (Proposition 2.2.3.3). Let G_1 denote the covariance operator of ν_{R_1}.

We first need to remark that E[H] is dense in K; this follows from the fact that E_1[H_1] is dense in K_1, K_1 being the completion of H_1. We now define an operator G on K by the relation

    G(k^(1), h^(2)) = (G_1 k^(1), θ),     k^(1) in K_1, h^(2) in H_2,

where θ is the null element of H_2 and K_1 is considered as a subspace of K by the inclusion i of K_1 into K. G is clearly well defined, linear and defined everywhere in K. G is also a covariance operator. Indeed, it is non-negative, for

    [G(k^(1),h^(2)), (k^(1),h^(2))] = [G_1 k^(1), k^(1)]_1 ≥ 0

(by (76), the definition of the scalar product on K, and the fact that G_1 is a covariance). G is self-adjoint, because, as above, we can write

    [G(k_1^(1),h_1^(2)), (k_2^(1),h_2^(2))] = [G_1 k_1^(1), k_2^(1)]_1 = [k_1^(1), G_1 k_2^(1)]_1 = [(k_1^(1),h_1^(2)), G(k_2^(1),h_2^(2))].

G is bounded, for the following relations obtain:

    ||G(k^(1),h^(2))|| = ||G_1 k^(1)||_1 ≤ ||G_1|| ||k^(1)||_1 ≤ ||G_1|| ||(k^(1),h^(2))||.

Let {g_n, n ∈ N'} be a complete orthonormal set for K_1 and {e_m, m ∈ N''} be a complete orthonormal set for H_2. Then {(g_n,θ), (θ,e_m), n ∈ N', m ∈ N''} is a complete orthonormal set in K and we have:

    Σ_{n∈N'} [G(g_n,θ),(g_n,θ)] + Σ_{m∈N''} [G(θ,e_m),(θ,e_m)] = Σ_{n∈N'} [G_1 g_n, g_n]_1 < ∞.

That is, G is trace-class. Consequently, G determines a probability measure ν_R on K.

We want to show that ν_R is an extension of the cylinder set measure obtained on K by means of the injection map E = (E_1, I_{H_2}), where E_1 is the injection of H_1 into K_1 and I_{H_2} is the identity operator of H_2. An arbitrary cylinder set in K is of the form {C_K[k_1,...,k_n; B]} with arbitrary elements k_1,...,k_n in K. They are of the form k_i = (k_i^(1), h_i^(2)), with k_i^(1) in K_1 and h_i^(2) in H_2. Then

    E^{-1}{C_K[k_1,...,k_n; B]} = E^{-1}{k ∈ K : ([k,k_1],...,[k,k_n]) ∈ B} = {h ∈ H : ([Eh,k_1],...,[Eh,k_n]) ∈ B}.

But

    [Eh, k_i] = [E_1 P_{H_1} h, k_i^(1)] + (P_{H_2} h, h_i^(2)),

so that

(77)    E^{-1}{C_K[k_1,...,k_n; B]} = {h ∈ H : ([E_1 P_{H_1} h, k_i^(1)] + (P_{H_2} h, h_i^(2)), i = 1,...,n) ∈ B}.

Now, by Lemma 2.2.2.3, applied to E_1, H_1 and K_1 and the corresponding map E_1*, we have

(78)    [E_1 P_{H_1} h, k_i^(1)] = (P_{H_1} h, E_1* k_i^(1)),

which can be written, using (68),

(79)    [E_1 P_{H_1} h, k_i^(1)] = (P_{H_1} h, P_{H_1} E_1* k_i^(1)),

where E_1* is the map of Lemma 2.2.2.3, that is, the map of K_1 into H_1. So we can write, from (78) and (79),

(80)    [Eh, k_i] = (h, E_1* k_i^(1) + h_i^(2)).

Substituting (80) into (77), we obtain a cylinder set in H, which, in turn, by the definition of μ_R^c, gives

(81)    μ_R^c ∘ E^{-1}{C_K[k_1,...,k_n; B]} = Φ{B; ((R E_1* k_i^(1), E_1* k_j^(1)))_{i,j=1}^n},

since R vanishes on H_2. However,

(82)    ν_R{C_K[k_1,...,k_n; B]} = Φ{B; (([G k_i, k_j]))_{i,j=1}^n}

(ν_R extends the cylinder set measure determined by G; Lemma 2.2.3.1), and also

(83)    [G k_i, k_j] = [G_1 k_i^(1), k_j^(1)]_1 = (R_1 E_1* k_i^(1), E_1* k_j^(1)) = (R E_1* k_i^(1), E_1* k_j^(1))

(definition of G, definition of μ_{R_1}^c, and Proposition 2.2.1.4). Finally, we can write

(84)    ν_R{C_K[k_1,...,k_n; B]} = Φ{B; (((R E_1* k_i^(1), E_1* k_j^(1))))_{i,j=1}^n}     (by (82) and (83)).

From (81) and (84), we then get

    ν_R{C_K[k_1,...,k_n; B]} = μ_R^c ∘ E^{-1}{C_K[k_1,...,k_n; B]}.

This concludes the proof of Theorem 2.2.3.2. □
Remark 2.2.3.10.

If R is not trace-class, the ν_R-measure (the extended one) of the set E[H] is zero.

Proof: The injection map E is a bounded linear operator from H into K. Its range has measure zero or one, and has measure one if and only if ν_R = μ ∘ E^{-1} for some Gaussian measure μ on H, that is, if and only if there is a covariance operator S in H such that R̃ = ESE*, where R̃ is the covariance operator of ν_R (Baker [1971c, p. 7, Thm. 2, c)]). But we know, by Remark 2.2.3.5, that R̃ = ERE*. Thus, if R is not trace-class, we must have ν_R(E[H]) = 0. □
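The step from R̃ = ESE* = ERE* to the non-existence of such a covariance S when R is not trace-class can be spelled out as follows (a brief sketch, using only the injectivity and dense range of E stated earlier):

    \[
    ESE^* = ERE^* \;\Rightarrow\; E\,(S-R)\,E^* = 0
    \;\Rightarrow\; (S-R)E^* = 0 \quad (E\ \text{injective}),
    \]

and, since E is injective, E* has dense range in H, so the bounded operator S − R vanishes, i.e. S = R; a covariance operator is trace-class, hence no such S exists when R is not trace-class.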
Remark 2.2.3.11.

Suppose that, when given an injective weak covariance operator R in H, we can find a self-adjoint Hilbert-Schmidt operator S providing a bounded extension T of SR^{-1/2} to H. Then we have an extended measure on K, say ν_R, with associated covariance operator R̃, where K is obtained through the inner product [Eh,Eh'] = (Th,Th'). But then, by Proposition 2.2.3.1, we also get that [Eh,Eh'] = (T'h,T'h'), where T' is the extension to H of S'R^{-1/2} for some self-adjoint Hilbert-Schmidt operator S'. We are interested in the relation of T to T'.

By Proposition 2.2.3.3, the characteristic function of ν_R is determined by R̃, where, for a certain complete orthonormal set {Ee_n, n ∈ N} ⊆ E[H], R̃ can be computed from the inner product on K. So, by Proposition 1.2.1.3, R̃ = B*B, where B is the operator determined on E[H] by BEh = ER^{1/2}Th. But, for h and h' in H,

    (TR^{1/2}Th, Th') = [Eh, ER^{1/2}Th'] = [BEh, Eh'],

and, since TR^{1/2} = S is self-adjoint,

    (TR^{1/2}Th, Th') = (STh, Th') = (Th, STh') = [Eh, BEh'].

The same computation with T' in place of T gives [Eh, BEh'] = [Eh, ER^{1/2}T'h'], because, by Remark 2.2.3.6, B depends on ν_R alone. Consequently ER^{1/2}T = ER^{1/2}T', that is, R^{1/2}T = R^{1/2}T', and, since R^{1/2} is injective, T = T'.

This remark will allow us, in Sections 2.3 and 2.4, to use properties of the elements appearing in the proofs of Theorems 2.2.3.1 and 2.2.3.2 without having to distinguish between the case when we start with extended measures and the case when we look for a space where extension obtains.
2.2.4. Some examples.

As an illustration of the preceding results, we consider the following cases, which include many useful problems of reproducing kernel Hilbert space type.
C~)
Suppose that
Q(s,t): [O,l]x[O,l]
m
is a product measurable
function such that
f~f~
and
(The latter condition is not a restriction since we
t
in
[0,1].
could define a function
Q2(s,t)dtds
G
<
~
>
and
Q(s,t)
=
Q(t,s)
such that the operator defined by
48
for
C
has
s
the same range as the operator defined by
(using the relation
R(Q)
Q and
= R{(Q*Q)~}».
Assume also that
1 2
fOf (s)ds
if
is symmetric
G
Let
0.
>
denote the
[0,1].
set of all square integrable, Lebesgue-measurable functions on
Denote also by
= f 1O Q(s,t)f(t)dt.
[Q(f)](s)
two elements in
[0,1]
L2 [0,1]
Q the map from
Let
H
L2 [0,1]
to
= {Q(f), f
E
Lebesgu~-~easure.
Elements in
equivalence classes and the equivalence class of
f
Q(f).
1
O f(s)g(s)ds.
product
L [0,1]}
2
and define
H to be equal if they are equal almost everywhere on
with respect to
noted by
defined by
Define an inner product on
Q(f)
H by
H are thus
will also be de-
=
(Qf,Qg)
It is easy to verify that with respect to this inner
H is a real and separable Hilbert space.
Now define a new inner product on
1
= fO
[Qf] (s)[Qg](s)ds.
[Qf,Qg]
H,
Completing
induced by this new scalar product yields
[.,.],
by the relation
H with respect to the norm
L [0,1],
L [0,1]
where
2
2
L [0,1]
2
the usual Hilbert space of equivalence classes obtained from
with the inner product
The function
defines a self-adjoint bounded linear operator in
where
Df
L [0,1],
2
is
Q(s,t)
say
D,
represents the equivalence class, determined by
1
Q(s,t)f(s)ds, for f E L [0,1]. But Q(s,t) also defines a bounded
2
O
linear operator in H, say D , if we let [D (Q(f»](s) =
l
l
f
1 1
fOf
O Q(s,t)Q(t,u)dtf(u)du.
wherein
H.
D
l
is clearly linear and defined every-
Itisboundedsince
IIDII Ilfil L
=
IIDII IIQ(f)II H·
IIQ[Q(f)]II H =IID(f)II L
We also note that the L 2-norm is
2
weaker (on H) than the H-norm, since
IIDII
2
11 f
lli
= IIDI1
2
2
IIQ(f)II~.
actually the completion of
'
2
[Q(f),Q(f)] =
IID(f)11~2
We next note that
L [0,l]
2
is
H under the norm obtained from the inner
49
product
[Q(f),Q(g)] = <Dl[Q(f)],Dl[Q(g)]>,
since
\Dl[Q(f)],Dl[Q(g)]) = \Q[Q(f)],Q[Q(g)]) = [Q(f),Q(g)].
We thus have the
precise mathematical framework for applying Proposition 2.2.3.3.
Let
E: H
2.2.3.3, if
L [0,1]
2
+
R
be the natural injection.
is any weak covariance operator in
corresponding cylinder set measure on
probability measure
then
H,
~
Hand
c
R
Moreover, in this case
R
oE
-1
is the identity operator in
R = EE*
(relatiort (24)) and
From this, one concludes
2
D
R=
~R
c
the
extends to a
S = D R~
1
i f and only i f
~
Thus i f
Hilbert-Schmid t.
2.2.3.5).
By Proposition
for
H,
RE= ED
S
S = D •
l
2
1
since, for fixed
(Remark
f
in
IIE[D~(O]IIL
L2 [0,1],
Using this,
= IIE[Q2(0]II L
22222
II E[D l (0] - D (0 II L = 0.
2
From the above results, it is clear that any Gaussian measure on L_2[0,1] can be obtained as an extension of the canonical Gaussian cylinder set measure on the range of the square root of its covariance operator. This result can clearly be extended to any real and separable Hilbert space.
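In the notation used above, this can be summarized as follows (a sketch only; R_0 is written here for the covariance operator of the given Gaussian measure):

    \[
    \mu \ \text{Gaussian on } L_2[0,1],\ \ \operatorname{cov}(\mu)=R_0
    \;\Longrightarrow\;
    \mu = \mu^{c}\circ E^{-1}\ \text{(extended)},\qquad
    H = R_0^{1/2}\bigl(L_2[0,1]\bigr),\ \ E : H \hookrightarrow L_2[0,1],
    \]

where H carries the inner product (R_0^{1/2}f, R_0^{1/2}g)_H = (f,g)_{L_2}, so that E*g = R_0 g and EE* = R_0, and μ^c is the canonical (identity-covariance) Gaussian cylinder set measure on H.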
Instead of starting with a covariance operator and looking for
S)
"subspaces" and cylinder set measures on them that extend to the measure determined by this covariance operator, we can do the opposite and
try to obtain a completion.
Since completions are at best difficult to
describe, we restrict attention to reproducing
k~rnel
Hilbert spaces.
The separable ones can always be obtained as follows (Dieudonne [1970,
p. 131]):
that
rn=l
J
on a set
f 2(x)
n
f (x) = (i) n) ;
n
<
let
pick a family of functions
X,
for all
00
H
=
x
{f (x)
a.
in
X
1::=1
define a scalar product by the relation:
50
(f
n' nENI
such
(for example, X = [0,1]
a. f (x),
n n
Ci
=
(a.n)d:~2} ;
on
and
H
00
L
=
a
n=l
Then
H
S
=
n n
is a real and separable reproducing kernel Hilbert space with
reproducing kernel:
00
L
K(x,y)
f
n=l
are of the form
H
tor in
n
(y).
is a complete orthonormal set and the bounded linear operators
{f , nEm
n
in
(x) f
n
Qf
a
= f
qa ,
where
is a bounded linear opera-
Q
We will consider only the simplest case, for which all the
12·
operators considered are diagonal.
Then, if
R
is an injective, weak
covariance operator,
00
L
(Rf )(x)
a
with
R
r
> 0
n
are the
We suppose
and bounded.
r
n
's
r a f (x)
n n n
n=l
r
\'00
Ln=l
=
n
The eigenvalues of
00.
with the associated eigenvectors
{f}.
If
and
SR -~
n
S
is
Hilbert-Schmidt, then
00
(Sf ) (x)
with
\'00
Ln =
1 s 2 <
n
h:2
n '
SR-~fn (x)
s r -~
have bounded extension,
cr
n=l
Since
00.
L
=
a
n n
for some positive
c.
s a f
n n n
(x)
= s r -~f
n n
n
(x)
must be bounded.
So we require
Thus, to obtain an appropriate
sufficient to pick an 12-sequence that satisfies, for all
the relation
Let
K
s
2
< cr
n
a
for some positive
be the set of functions
00
f
n
(x)
Define then on
L
=
n=1 r
K
s
n
f
a
n
c.
of the form:
2
s
2
n
--a
n
r
n
n=l
00
%n.
with
f (x)
n n
L
the scalar product
=
\'00
Ln =
1 (s 2/ r )
n
n
51
as.
n n
must
<
m
.
Isn I
T,
it is
in
N,
$
That
K
with the given scalar product is a Hilbert space is shown
f .
as in the case of
That
K
~
Moreover, if
2
is obvious since
H
Let us determine the map
f
f
H
in
a
f
and
we thus require
a'
For
(E*f (3) n'
I:=l
(a )
n
/r
(s
n
are in
13
of Lemma 2.2.2.2.
I
r'
We must have, for
b(S )
n
is in
(Sn)
is dense in
K.
a b(S),
n
n
n=l
the evaluation map, we get
2
H
then
Noticing that
(fa E*f (3)'
/ r ) 0'.13=
n
n n
H,
That
is complete for
which is indeed an f -sequence, since
bounded.
f
is bounded.
m:NJ
E*
2
and
a
n
K, [Efalfsl
in
S
2
n
{f ,
n
is seen by noticing that
K
s
f
=
where
Ef
a
b(S) =
n
(s 2/ r ) 13 ,
n
n
n
K and
(s 2/ r )
n
is
n
Consequently,
00
I
s
n=l
2
n
r
13 f
n
n n
(x).
,00 1 (s 4/ r 2) a S.
n
n
n n
Ln=
[Rf , f ]
n
that is
n
R
Thus
is trace-class.
This shows exactly what is done in the procedure of Theorem 2.2.3.2: to have an extension, one needs a trace-class operator, which is obtained by "erasing" the effect of the initial operator. As a consequence one attributes a different measure to the same sets.
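As a concrete illustration of the condition s_n^2 ≤ c r_n used above (a sketch with one admissible choice of sequences, introduced only for this example):

    \[
    r_n=\frac1n,\qquad s_n=\frac1n
    \;\Longrightarrow\;
    \sum_n s_n^2=\sum_n \frac1{n^2}<\infty
    \quad\text{and}\quad
    s_n^2=\frac1{n^2}\le 1\cdot r_n ,
    \]

so (s_n) is an ℓ_2-sequence satisfying s_n^2 ≤ c r_n with c = 1, while the diagonal operator with eigenvalues r_n = 1/n is bounded and injective but not trace-class, which is precisely the case in which the extension procedure is needed.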
2.3. Application to the Detection Problem

In this section, we suppose that, on H, we have two weak covariance operators R_1 and R_2 determining the cylinder set measures μ_{R_1}^c and μ_{R_2}^c respectively. We also suppose that we can find some triple (E, H, K) such that the cylinder set measures induced on K by μ_{R_1}^c and μ_{R_2}^c respectively extend to probability measures. We would like to decide about the equivalence of those extended measures just by looking at R_1 and R_2 and, if equivalence holds, to express the Radon-Nikodym derivative in terms of R_1 and R_2.

2.3.1. Equivalence of extended measures.
Hypothesis 2.3.1.1.

We suppose that we are given a triple (E, H, K) and, on H, two injective weak covariance operators R_1 and R_2 such that the cylinder set measures ν_{R_1}^c = μ_{R_1}^c ∘ E^{-1} and ν_{R_2}^c = μ_{R_2}^c ∘ E^{-1} extend to Gaussian probability measures ν_{R_1} and ν_{R_2} on K respectively. R̃_1 then denotes the covariance operator of ν_{R_1} and R̃_2 the covariance operator of ν_{R_2}.
Remark 2.3.1.1.
We know that on
formula
Sl
S2'
1'1
and
E[H]
the norm of
II T2h ll,
T R~
2 2
I ITlhl I
and
are self-adjoint injective Hilbert-Schmidt operators,
S2
=
bounded extensions of
1'2
I ITlhl I
=
But
V
2
TlRl
-~
and
= S1 and
S2 R2
-~
respectively.
and the formula, valid for all
C,
~{(C(x+y),x+y) - (C(x-y),(x-y)}
(Cx,y)
and
where
SlRl
IIT2 h l I
bounded self-adjoint operators
VI
~
[IEhll =
From the equality
where
K can be obtained by the
l'
1
are unitary.
=
J.-
V (1'*1')2
and
III
l'
2
=
It thus follows that
1,;-
V (1'*'1' )'2
211'
V!T
l
=
V~T2
or
Consequently, to be able
that is
to obtain a "simultaneous" extension, we will need to find self-adjoint
operators
Sl
and
S2
such that, "modulo" a unitary operator, we have
the equality
that is, for appropriate
solve the equation
We will denote the operator
'1'*1' = 1'*1'
1 1
2 2
by
~.
Theorem 2.3.1.1.
If under Hypothesis 2.3.1.1, the following conditions are equivalent.
~hc~~~hc
R = R 2 (I+B)R 2 ,
2
2
l
i)
and
B
that
-}-
-}-
R = R (I+B)R ,
l
2
2
B
where
R
-}-
-1
= ER
2
-}W ,
2 2
~hc~~~hc
where
W
2
that
(89)
=
2
ER E*
1
W
2
out
Since
E
E
W2BW~
H,
W
2
E*
n
in
is an isometry.
N,
K,
that is
B
I
has dense range, we can cancel
~
-}-
-}-
R = R (I+W2BW~)R2 .
l
2
So we are
To that effect, pick a
is
since
I IW2kl I
= 0
W*
2
is an isometry, and it
or
k
0,
again because
The same reason justifies the equality
I ncN Ilw2ilw~enI12=XncN [lilw~enl]2
because
or
say
an orthonormal set of vectors of
for all
We
is Hilbert-Schmidt (resp. trace-class) and
complete orthonormal set in
o
H.
is the identity operator
does not belong to its eigenvalues.
-1
such
Thus the equality
W2W~
H,
is injective and
left to show that
that
to
E* from (79) to obtain
and
K to
ER E* = ER -}-W (I+B)W*R -}-E*
12222
K
is unitary from
H
H
From Proposition 2.2.3.2 we have that
R = ERlE*.
l
can be written
2
H.
K such
0
is a unitary operator from
R 2(I+B)R 2
of
K
is the identity operator on
does not belong to its eigenvalues.
moreover know ((24»
Since
I
is a Hilbert-Schmidt operator (resp. trace-class) in
Proof that i) implies ii):
~
is the identity operator of
does not belong to its eigenvalues.
ii)
that
I
is a Hilbert-Schmidt operator (resp. trace-class) in
-1
and
where
is Hilbert-Schmidt.
So
54
and the right hand si.de is Unitt>
W2BW~
is Hilbert-SclmJidt in
II.
If
~
B
L
is trace-class, we hvve
nE
N
(w2Bw~en,en) =
[BW~en,W~en]
LnEN
~
which is finite and this proves that
suppose that
-1
associated with
or
2
-1
Consequently
-1.
Finally,
be an eigenvector
W*W BW*h
222
= -W*h
2
B with
is an eigenvector of
W*h
2
This is in contradiction with the hypothesis
Proof that ii) implies i):
E*
h
that is
W2BW~.
cannot then be an eigenvalue of
ceeding one.
Let
-1.
associated eigenvalue
i) and
is trace-class.
W2BW~.
is an eigenvalue of
= -W*h.
2
BW*h
W2BW~
The proof is very similar to the pre-
Pre- and post-multiplying
= R2~(I+B)R2~
R
l
respectively and writing the identity
I
of
H
as
by
W2W~'
E
and
we get,
using (24) and Proposition 2.2.3.2,
=
But
W~W2
is
I,
orthonormal set in
set in
BW k
2
K
if
H whenever
K,
{W f , nEN}
2 n
{f, nEN}
n
is a complete
is a complete orthonormal
and the following equalities obtain:
W~BW2
Then
in
K
the identity of
is Hilbert-Schmidt in
B
= -W 2 k,
is in
H.
K
if
B
Again the assumption
is in
H
W~BW2k
and trace-class
= -k
means
a contradiction. 0
Remark 2.3.1.2.
Theorem 2.3.1.1 shows that, in order to have equivalence, one needs
weak covariance operators that relate to each other as "bona fide" covariance operators do.
2.3.2. Computation of the Radon-Nikodym derivative.

We stated in the introduction (1.1.3) that, when the detection problem is non-singular, in order to perform the optimum operation on the data, it is necessary to compute the Radon-Nikodym derivative of the equivalent measures under consideration. We thus make Hypothesis 2.3.1.1 and suppose that the resulting problem is non-singular. We then wish to compute the Radon-Nikodym derivative from R_1 and R_2.
Lemma 2.3.2.1.
R
6~RZ6~
T
T
and
Z
have the same eigenvalues.
E
=
1:
U(E*E)2.
value
A-
f
If
then
n'
A-
eigenvalue
iated eigenvalue
11 ,
n
iated eigenvalue
11 .
r
e
n
=
A- f
nn
and,
~~RZ~~U*fn = An U*f.
n
Rzue
n
=
ERZE*Ue
Ue
~~RZ~~
n
be the unitary operator satisfying
R
with associated eigen-
Z
~~RZ~~
E
= U~~,
RZf
n
=
A-Zfn.
{U*e}
n
with as soc-
R
Z
with as soc-
Suppose conversely that
6~RZ6~
T
T
(Z4) then implies
U~R Z~~U*f n
Af
U(11 e )
n n
=
Then
11 Ue .
n n
is trace-class follows from the
is a complete orthonormal set in
is a complete orthonormal set in
or
nn
~~RZ~~en = 11 nn
e.
= U(~~Rz~~)u*ue n = U(~~Rz~~)e n =
n
~~RZ~~
is an eigenvector of
Suppose that
because
with associated
is trace-class. 0
The fact that the self-adjoint
fact that
see Remark Z.3.l.l)
is an eigenvector of
then
Proof of Lemma 2.3.2.1:
ER E*f
Z
n
U
is an eigenvector of
n
and i f
n
Let
is an eigenvector of
n
U*f
A
T
(for the definition of
H
whenever
{e}
n
K. 0
Proposition 2.3.2.1.
We make Hypothesis Z.3.l.l and suppose that
R
l
Then
i)
if
B
is Hilbert-Schmidt and -1 is not among its eigenvalues,
(k)
lim L (k)
n
n
for almost every
56
k
in
K
where:
a)
L (k) =
n
b)
{s
c)
Y (k)
p
-~
In 1{(~ - l)Y 2(k) + log s }
sp
p
p
p=
is the set of eigenvalues of
pEN}
p'
I
a(p) X (k)
q
q
,
q
eigenvectors
{g ,nEN}
n
ii)
{e , nEN}'"
n
A
associated with
e.
q
B
dV
dV
R
1
Proof:
q
~~R2~~
g
is obtained from the
and the eigenvectors
- \ a(p) e
p - Lq q
q
and
A~
xq (k)
6~
T R T
2
being the eigenvalue of
is trace-class and -1 is not among its eigenvalues, then
00
=
(k)
-~
R
2
for almost all
of
I + B by writing
of
= Aq -~[k ' W*e]
2 q'
if
a(p)
q
where
I + B
k
in
I
n=l
00
(2-_
s
1) Y 2(k)
n
n
-
~
L
log s
n=l
n'
K. 0
This is an immediate consequence of Rao-Varadarajan [1963,
p. 318] if we make the following remarks:
a)
to obtain the eigenvalues and eigenvectors of
R ,
2
it is suf-
~~R ~~
ficient to compute the eigenvectors and eigenvalues of
2
(Lemma 2.3.2.1).
S)
W~(I+B)W2'
to obtain the eigenvalues and eigenvectors of
is sufficient to compute the eigenvalues and eigenvectors of
since the equality
(I+B)g
n
= s g
n n
implies that, for
h
in
it
I + B,
H,
We are now going to consider conditions under which the likelihood ratio is a quadratic form, provided Hypothesis 2.3.1.1 holds. We know by Theorem 1.2.1.2 that there are two conditions that are necessary and sufficient: the first relates to the ranges of the covariance operators under consideration, the other is concerned with the existence of a bounded extension of a certain operator. Since the second condition can be dealt with without additional hypotheses, we are going to state Theorems 2.3.2.1 and 2.3.2.2 with the condition R(R̃_1) = R(R̃_2). We then investigate what additional assumptions on R_1 and R_2 are necessary in order to insure that this range condition holds. This is the content of Propositions 2.3.2.3 and 2.3.2.4.
Proposition 2.3.2.2.
If we make the Hypothesis Z.3.1.1, then
bounded, Hilbert-Schmidt extension to
6-~
T
(R
-1
l
-1
-R
Z
)R
Proof:
But
u~~
E
~
Z
K
if and only if
has a bounded Hilbert-Schmidt extension to
H.
0
We first notice the following equalities:
implies
~~U* , which, in turn, gives
E*
E*-l
= U~-~ .
So
~-l
(R
(90)
Now
l
~k
RZ~E
~ ~
WZR Z k
~-l ~ ~
R )R
Z
Z
k
ER 2T
and
Z Z
U~-~ (R-1 - R-1 )R ~W .
l
Z
Z Z
~k
R 2E
Z
~
= ER Z
WZE
= WZER Z~WZk = TZR Z~WZk = SZWZk.
give
WZE
= TZ• Thus
We then get the equivalence:
~-l ~ k ~ k
I
~ k
[I (R~-l
-R )R 2 [R 2kJ J $ a[I R 2k lJ if and only if
l
Z
Z
Z
Z
I 1~~(R~l_R;l)RZ~[SZWZkJ II ~ al ISZWZkl I, using (90),
SZWZk.
So
and
WZR Z k
=
(R~l - R;l)R Z;; and ~-~(R~l - R;1)R2~; extend simultaJ
r
~,
neously to bounded operators.
Moreover, if
{en' IJd~} :- R(R :')
2
for
because
58
SOIlll'
f
n
,
is a
and,
and the sum over both sides converges or diverges simultaneously. 0
Theorem 2.3.2.1.
If we make Hypothesis 2.3.1.1 and if we also suppose that
v
_
R
1
log (dv /dv ) is a quadratic form if and only if
' then L
R
R
R
~
~
6-~ 1_1 ~l
~
R(R ) = R(R ) and ~ (R -R )R
has a bounded extension to H that
Z
l
l
2
2
v
is Hilbert-Schmidt. 0
Proof:
This is a consequence of Rao-Varadarajan [1963, p. 326,
Thm. 6.2 (our Theorem 1.2.1.2)] together with Proposition 2.3.2.2. 0
Lemma 2.3.2.2.

Let U_0 be a unitary operator from H to K and let A and Ã be linear operators on H and K respectively such that U_0 V(A) = V(Ã) and Ã = U_0 A U_0*. Then
   i) A admits a closure if and only if Ã admits a closure;
   ii) A is closed if and only if Ã is closed;
   iii) A is symmetric if and only if Ã is symmetric. □
Proof:
Suppose that
V(A)
such that
UOh n
and
h
,
UOh n
n
-+
and
h
are in
AUOh n
UOAU~UOhn
UOhO I .
Since
A admits a closure.
= UoAh n
Ah n -+ h
and
O
h ' -+ h
n
'
UOV(A)
= Va),
UOhO
and
and
admits a closure
A admits a closure.
on
AUOh n
~
A
Uh
~
-+
Take
UOhO
I
-+
=
UOh,
AUOh n
now that
= UOAh n
UOhO'
that is
closed.
-+
UOhO'
h
E
in
Then
Ah n ' -+ h ' .
O
UOh n
I
-+
U h'
o '
= UOAh '-+
n
that is h = h
O
O
= UOhO ,
I
I
The converse holds by symmetry
A is closed.
Since
V(A)
h'
n
UOAU~UOhn I
~
~uppose
hand
n
and
A is
UOAh
Take
h
closed,
59
UOh
in
E
V(A)
such that
V(A)
= UOhO or i\h = h O'
The converse also holds by symmetry.
Assertion iii) is obvious. II
n
Thus
A
is
Theorem 2.3.2.2.
If we make Hypothesis 2.3.1.1, assume that
L
log (dv
~
where
/dv
)
R
2
with
A
S
and that
R
2
,,-~
is the closure of
c
T
-1
(R
l
-1 ,,-~
-R
2
)T
and
S
the Hilbert-Schmidt operator satisfying the relation
= R2~ (I+S)R 2~ • 0
Proof:
This is also a consequence of Rao-Varadarajan [1963, p.326,
Thm. 6.2] and Lemma 3.2.2.3.
,,-~
T
- v
R
1
is a quadratic form, then
= UAc
U*
'
A
c
I+W~SW2'
R
l
R
1
v
(R
-1
l
-1
-R
2
,,-~
)T
•
Then
Indeed, let
V(A) = R(~~R ~~)
V(A) =
which implies
U*y
Conversely, if
y
an onto map.
U* R( ~
R ).
2
=
Thus
then
Uy
Consequently
R(R )
2
E
A=
V(A)
To apply
= ER 2E* =
R(R )· Then y = R X = U~~R2~~U*x,
2
2
and thus U*R(R ) ~ R(~~R2~~)'
2
belong to
y
and
2
the lemma we first need to show that
Let
and
Y
for
= U~~R 2~~U*x' '
Uy
or
uV(A) ,
E
R
2
since
U*
is
And so
U*R(R )·
2
UR(6~R26~)
T
T
= R(R 2 )·
~
Then, by Lemma 2.3.2.2, since
a closure
A
c
and
is closed.
UA U*
c
Since
UA U*
c
~
We are going to show that actually
A
c
first check that the two operators
domain.
So letting
x
be in
A
c
V(UA U*)
c
and
and
UA U*
c
y
A
c
Yn
that
Uy
tends to
n
U*y
tends to
y
and
and
Ay
n
tends to some
UAU~' [Uy
n
]
~
in
V(A )
c
and
A y
c
~
In other words,
A y
c
= UyO'
But
A U*y
c
= UA c U*y. II
60
to
UyO'
UAU*,
= UAc U*.
it
We
have the same
such that
there exists, by definition an extension, a sequence
that
A admits
c
extends
~
must extend
A ,
A admits a closure
YO'
Yn
x
in
= UAc U*y '
V(A)
lt then follows
Consequently,
= Yo and thus
such
UA U*y
c
y
= llyO'
is
Lemma 2.3.2.3.
Assume Hypothesis 2.3.1.1
a)
R(R )
l
b)
~
R(R )
l
If
~
R(R )
2
R(R )
l
If
R(R )
l
~
R(R ) ,
2
then
R - l R [R(E*)]
l
2
i f and only i f
R(R ) ,
2
~
~
R(E*).
then
R(R ) . i f and only i f
2
(R;lRl)*[R(E)]
~
R(E). 0
Proof:
linear operator in
H.
Suppose first that
postmultiplying by
(using (24».
in
~
R(E*).
the relation
GE*k = E*k'.
Hence
Premultiplying by
~
= R2G
R
l
By hypothesis, for each
such that
K
E*
G[R(E*)]
k
yields
K,
in
E and
= ER 2GE*
R
l
there exists a
range(GE*)
c
range(E*),
k'
so that
~
there exists a bounded linear operator
~
Consequently,
linear,
F
~
in
k
such that
~
= ER2GE* = ER 2E*F = R2F
R
l
GE* = E*F.
~
and, since
F
is bounded and
R(R ) ~ R(R ).
l
2
R(R ) ~ R(R ).
l
2
Suppose conversely that
~
bounded and linear
F
in
K.
GE* = E*F,
R E*
1
= R2E*F.
which shows that
But
~
R
l
R
1
= R2G
~
R(E*).
= R2F,
where
G[R(E*)]
b)
operator in
R
l
= R2F
Using (24) and the fact that
~
to-one, we can write
Then
for some
E is one-
and consequently
~
F
is a bounded linear
K.
As in the previous paragraph, we start by supposing that
F*[R(E)] ~ R(E)
~
and obtain
F*E = EG,
where
G is a bounded linear
~
operator in
R G*.
2
H.
This means
Since
RlE* = R2E*F,
R(R ) E R(R ).
l
2
61
we get
R E*
1
=
R G*E*
2
or
R1
=
R = R G,
1
2
R(R ) 5: R(R ),
1
2
Conversely, i f
G in
and thus
H
ER GE*,
2
that is
for some bounded linear
e
~
~
= ER 2GE*. But R1 = R2F and so ER 2E*F
E*F = GE* or FE
EG* and F[R(E)] 5: R(E). 0
R
1
Proposition 2.3.2.3.
If we assume Hypothesis Z.3.l.l and if
R(R )
2
1
R - R [R(E*)] 5: R(E*)
2
l
if and only if
= R(R Z)
R(R )
1
then
R(R )
l
1
R - R [R(E*)] 5:
l
2
and
R(E*). 0
Proof:
Apply Lemma 2.3.2.3.a) twice. 0
Lemma 2.3.2.4.
If we suppose that
Sand
Proposition 2.2.3.1 hold, then
and
k:
(T*T)2
T
is self-adjoint, commutes with
R,
= T. 0
k:
Indeed
TR 2
commute,
TR
Proof:
R~
R commute and if the hypotheses of
S
and
S
is self-adjoint,
k:
TR 2
= S.
Thus, since
k:
k:
SR 2
R2 S
k:
S
R
and
k:
= R2 TR 2
commute implies
TR~ = Rk:2 T.
or
But, since
k:
= R2 T* and so T = T*. 0
Proposition 2.3.2.2.
Assume Hypothesis Z.3.1.1 and suppose that
for
i
= 1,2.
Proof:
Then
R(R )
l
= R(R 2 )
Suppose first that
1
if and only if
R(R )
1
= R(R 2 )·
and
R.
S.
1
commute
R(R )
1
Then
R
1
~
C
is linear, bounded and has bounded inverse.
~
ERZE*G
RlTU* = RZTU*G,
or
Lemma Z.3.Z.4,
II U*CUh II
=
R
1
which gives
= R2U*GU.
[I CUh I] ;: :
I I]
y [ Uh
having bounded inverse.
Now
U*GU
= y
II h II,
Consequently
have equal ranges.
i
with bounded inverse for which
It
follows that
RlT = R2TU*GU
ER 1':*
1
or, from
has a bounded inverse since
U being an isometry
R(R )
1
= R(R Z).
and
G
Suppose now that
Then there is a linear and bounded
R
1
= RZG.
62
By Lemma 2.3.2.4,
R T
1
=
G
e~
RZTG.
Then
ER 1TU*
=
~
ERZTU*UGU*
or
~
= R2UGU*
R1
and, as above,
UGU*
is a bounded linear operator with bounded inverse. 0
Remark 2.3.2.2.
In general,
for
R~
Rand
S2
do not commute.
the multiplication operator on
tf(t).
As an example, we choose
L [O,l]
Z
defined by
It is obviously bounded, positive and self-adjoint.
Let
T be
defined by
(Tf)(t)
For
t
<
t
~
S, s
=s
s, min(t,s)
-1
min(t,s) =
st
1
=
fo
and so
~
1.
s
min(t~
f(s)ds.
s
-1
= 1.
min(t,s)
Consequently
s
-1
For
min(t,s)
is bounded by
one and
111£ liZ = f1o {f1OS -1
So
T
is bounded.
min(t,s)f(s)ds }2dt
Moreover
as can be checked easily.
R~
and
S
do not.
k
TR 2
=S
However,
where
f
- 1,
2
II f 11
dt
~
II f 11
has kernel
S
sZ
R and
2
•
min(s,t)
do not commute since
Indeed, we have
=
For
J:
~
this gives
t
t
-(1 - -)
2
3
and
t
t
J:
Z(l _ !.)
2
sf(slds + t
2
J:
f(slds.
respectively.
Remark 2.3.2.3.
If we make Hypothesis Z.3.1.1, then
only if
R
i
=
In
v(i)h(i)
n
n
and bounded and the
0
h(i),s
n
R
i
where the
and
Si
)i),s
n
commute if and
are non-negative
form n family of eigenveclorH 01
63
Proof:
1
p(i)
n
where
and
R.
A (i) .
1
So, i f
n
H(i)
n '
to
R.
1
is the projection on the eigenspace of
to the eigenvalue
of
R.
and
1
S.
(h(ii, ... ,h(i»
n,
n, Pn
is self-
(h(ii,· .. ,h(i) ) made up
n,
n, p
. ~(i) .
the restriction of R. to
But
1
n
R(n)
i
'
of eigenvectors of
1
A(i)
k
'v
a basis
H(i)
n
corresponding
S.
commute, the restriction
1
the eigenspace co rr es pond ing to
So there is in
adjoint.
R P (i) = P(i)R
1 n
n
i'
commute i f and only i f
S.
are eigenvectors of
S.
also.
1
R.
1
v (i) A(i)h (i)
n
n
n
s.R.h(i)
1 1 n
has the stated representation, then
Conversely, if
R.S.h(i). 0
1 1 n
Remark 2.3.2.4.
If we make Hypothesis 2.3.1.1 and suppose also that
eigenvalues
A
1
and
Si
has only
commute if and
Ln
v(i)f(i) ® f(i) where the / i ) 's are non-negative
n
n
.
n
n
(i)
f(i) = [A (i)]-~ R.~E*e(i)
the eigenvector of
with e
n
n
1
n
n
R.
only i f
R.
of multiplicity one, then
n
R
i
1
and bounded,
~
R.
associated with
1
Proof:
A..
1
Suppose indeed that
S~R.f(i) = R.S~f(i).
1
1
n
1
1
n
<S~R.
1
1
n'
1
1
m
#
A(i)
m'
R.~E*e(i) =
n
1
f(i),s
n
n
commute.
1
=
n
or
[A
(i)]-~S~R.~E*e(i)
n
1
\
n
1
A(i)<R.%E*e (i) f (i»
n
1
n' m
f(i».
n' m
1
n
we must have
are complete in
f(i».
n' m
1
since
IR.~E*e(i)
f(i»
n'm
\1
R.~E*e(i) ~
n
1
H.
or
This gives
= A (i)<R.~E*e(i)
v(i)f(i)
n
n '
Then
n
1
f(i)
m
n #
Since, for
o
for
for
# n.
m
m # n
m,
Thus
and the
We can rewrite the last equality as
R.f(i)
1
S.
= A (i)/R.~E*e(i)
A(i)<R.~E*e(i) f(i»
m
1
n' m
A(i)
n
1
1
E*e (i) f (i»
n' m
IR.tE*e(i) S~f(i»
\
and
1
S~R.{[A (i)]-~R.~E*e(i)} =
So
A(i)R.{[A (i)]-~ ~E*e(i)}
n
1
n
Ri
n'
Thus
R.
R.
n
1
is bounded,
The converse is as in the previous remark. II
64
\}
(i)
n
2.4. A Review of Chapter II

We first considered a weak covariance operator R on H and its associated zero-mean Gaussian cylinder-set measure μ_R^c. We assumed that it is possible to find a real and separable Hilbert space K, and a one-to-one bounded linear map E from H to K, such that E[H] is dense in K and μ_R^c ∘ E^{-1} extends to a probability measure on K. With these assumptions, we determined the form of the inner product on K, relating this inner product to the inner product of H, to R, and to certain operators in H. The main result was given in Theorem 2.2.3.1. This gave necessary conditions to obtain a countably-additive extension. We then showed that these necessary conditions are also sufficient for the case where K contains H as a subspace, and E is the natural injection. This result was given as Theorem 2.2.3.2.

We next considered two weak covariance operators in H and their associated zero-mean Gaussian cylinder-set measures. We assumed that the two measures had a countably additive extension to the same space K. We then considered the problem of determining equivalence or orthogonality of the two extended measures. It was shown that necessary and sufficient conditions for equivalence can be given completely in terms of the properties of the two weak covariance operators in H. Finally, for the case where the two extended measures are equivalent, we obtained conditions on the weak covariance operators which imply that the likelihood ratio (for the extended measures) has certain specified functional forms.
CHAPTER III

TRANSFORMATION OF L2-BOUNDED CONTINUOUS MARTINGALES: AN EXTENSION OF GIRSANOV'S THEOREM

In this chapter, we consider the detection problem for a noise that is an L2-bounded, continuous martingale. Our objective is to determine conditions for non-singularity, and to characterize the Radon-Nikodym derivative when the problem is non-singular. The procedure that we consider is similar to the method developed by Kailath and Zakai [1971] for the Wiener process, in which use of a theorem of Girsanov [1960, p. 287, Thm. 1] is fundamental. We obtain an extension of this theorem, but this extension is no longer sufficient to give a solution to the particular detection problem considered. This point will be further discussed.

In Section 1, there is a summary of results on stochastic calculus for martingales. In Section 2, we develop an extension of Girsanov's theorem and in Section 3, we discuss its relations to detection theory. A review of the principal results is contained in Section 4.

3.1. A Summary of Results on Stochastic Calculus for Martingales

Most of the results are taken from C. Doleans-Dade and P. A. Meyer [1970, pp. 77-107]. Additional information is contained in P. Courrège [1962-1963, pp. 6.01-7.20] and H. Kunita and S. Watanabe [1967, pp. 209-245].
3.1.1. Basic hypotheses and definitions

a. (Ω, A, P) is a complete probability space.

b. {A_t, t ∈ ℝ_+ = [0,∞)} is a family of sub-σ-algebras of A with the following properties:
   i) it is increasing: A_s ⊆ A_t if s ≤ t;
   ii) it is right continuous: A_t = A_{t+} = ∩_{s'>t} A_{s'} for every t in ℝ_+;
   iii) A_0 contains the sets of A having P-measure zero.

c. A process X = {X_t, t ∈ ℝ_+} on ℝ_+ × Ω is said to be adapted (to {A_t, t ∈ ℝ_+}) if X_t is A_t-measurable for every t ∈ ℝ_+.

d. On ℝ_+ × Ω we consider the following σ-algebras:
   A_1: the σ-algebra generated by the adapted processes whose paths are (with P-probability one) right continuous functions with left limits.
   A_2: the σ-algebra generated by the stochastic intervals [S,T] where S and T are stopping times and S is accessible (see f.).
   A_3: the σ-algebra generated by the adapted processes whose paths are continuous to the left.
A_1, A_2, A_3 depend on the specific probability space under consideration and the specific family of σ-fields chosen. The σ-algebras A_1, A_2, A_3 satisfy the inclusion property: A_3 ⊆ A_2 ⊆ A_1 [Meyer, 1966, Ch. VII, Thm. 45 and Ch. VIII, Section 2]. Moreover A_2 = A_3 when {A_t, t ∈ ℝ_+} is free of times of discontinuity. The functions X: ℝ_+ × Ω → ℝ measurable with respect to A_3 are said to be predictable or very well measurable, and those measurable with respect to A_1 are said to be well measurable.

e. X and Y are said to be indistinguishable if, for almost all ω, X_t(ω) = Y_t(ω) for every t.

f. A stopping time T is predictable if there exists a sequence R_n of increasing stopping times that converges to T almost surely and such that, for all n, R_n < T almost surely on {ω: T(ω) > 0} (henceforth denoted {T > 0}). A stopping time T is totally inaccessible if it is not almost surely infinite and if, for any increasing sequence of stopping times R_n bounded above by T,
   P{ω ∈ Ω: lim_n R_n(ω) = T(ω) < ∞, R_n(ω) < T(ω) for all n in N} = 0.
A stopping time T is accessible if, for every totally inaccessible stopping time S, P{ω: T(ω) = S(ω) < ∞} = 0.
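For orientation, one standard family satisfying a. and b. (a sketch only; the Wiener process is used here purely as an example and is not otherwise assumed) is the augmented natural filtration of a Brownian motion W:

    \[
    A_t \;=\; \sigma\bigl(\sigma\{W_s : s \le t\}\cup \mathcal N\bigr),
    \qquad \mathcal N = \{A \in A : P(A)=0\},
    \]

which is increasing by construction, contains the P-null sets, and is right continuous, so that A_t = ∩_{s'>t} A_{s'} as required in b.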
3.1.2. L2-bounded martingales

a. An adapted process M is an L_p-bounded martingale (p-integrable, 1 ≤ p < ∞) if
   i) almost all paths of M are right-continuous and have left limits at all t ∈ ℝ_+ (then M is said to be standard);
   ii) M is a martingale;
   iii) sup_{t∈ℝ_+} E|M_t|^p < ∞.
M^2 will denote the class of L_2-bounded martingales that are zero at the origin. Indistinguishable martingales are considered to be the same process. M_c^2 consists of those elements of M^2 with almost all paths continuous. Let M_{i,∞}, i = 1,2, denote the almost sure limit of M_{i,t} as t tends to infinity, M_i belonging to M^2 for i = 1,2 (then M_{i,∞} exists). Then
   j) the mapping (M_1,M_2) ↦ E[M_{1,∞} M_{2,∞}] is an inner product on M^2;
   jj) M^2 with the inner product of j) is a Hilbert space and M_c^2 is a closed subspace.

b. A process A is an increasing process if:
   i) A_0(ω) = 0 almost surely;
   ii) A_t(ω) is increasing on ℝ_+ for almost every ω;
   iii) A_t(ω) is right continuous on ℝ_+ for almost every ω;
   iv) A is adapted.
A process A is an integrable increasing process if:
   j) A is increasing;
   jj) EA_∞ < ∞.
A process A is a natural increasing process if:
   k) A is increasing;
   kk) A is integrable;
   kkk) for every bounded, right continuous martingale X,
        E ∫_0^∞ X_s dA_s = E ∫_0^∞ X_{s-} dA_s,
   where X_{s-} is the process defined by X_{t-}(ω) = lim_{s↑t, s<t} X_s(ω) for t > 0.
When a process is increasing and integrable, it is natural if and only if it is predictable.

c. For every martingale M in M^2, there is a unique natural increasing process A such that M^2 − A is a martingale. The characterizing property of A is, for s ≤ t,
        E[M_t^2 − M_s^2 | A_s] = E[A_t − A_s | A_s].
In the latter equality, one can replace s and t by stopping times S and T. The usual notation for A is (M) or (M,M).

d. For M and N in M^2, one defines (M,N) = ½{(M+N,M+N) − (M) − (N)}. Then the characterizing properties of (M,N) are:
   i) (M,N) is the difference of two natural increasing processes;
   ii) (M,N) − MN is a martingale.
When (M,N) = 0, one says that the martingales M and N are orthogonal; MN is then a martingale.
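As a simple worked illustration of c. and d. (a sketch only; the Wiener process stopped at a fixed time T is used just for this example):

    \[
    M_t = W_{t\wedge T}\in M^2_c
    \;\Longrightarrow\;
    (M,M)_t = t\wedge T,
    \quad\text{since}\quad
    E[M_t^2 - M_s^2 \mid A_s] = (t\wedge T)-(s\wedge T),
    \]
    \[
    (M,N)=\tfrac12\{(M+N,M+N)-(M)-(N)\}
    \;\Longrightarrow\;
    (M,\widetilde M)=0
    \]

for two independent such processes M and M̃, so that M M̃ is a martingale, i.e. M and M̃ are orthogonal in the sense of d.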
3.1.3. Stochastic integrals for predictable processes with respect to square integrable martingales

a. If A is the difference of two increasing processes, and thus a process of bounded variation, denote by |A| the total variation of A, the set function obtained as follows: for each ω, A defines a set function, since it is the difference of two increasing functions. The total variation defines a measure through the equality |A| = ∫_0^∞ d|A|, and it is this measure that we denote also by |A| in what follows. For each ω, we can then consider ∫ f d|A| as a Lebesgue-Stieltjes integral. L^1(A) will denote the set of predictable processes G such that ∫_0^∞ |G_s| d|A|_s is finite. For G in L^1(A), ∫_0^t G_s dA_s is defined as a Stieltjes path integral and is an adapted process.

b. For M in M^2, denote by L^2(M) the set of predictable processes C for which E{∫_0^∞ C_s^2 d(M)_s} < ∞. Then:
   i) the map C ↦ [E{∫_0^∞ C_s^2 d(M)_s}]^{1/2} is a semi-norm on L^2(M);
   ii) for every C in L^2(M) and every M in M^2, there exists a unique C(M) in M^2 such that, for every N in M^2, C is in L^1((M,N)) and
        (C(M),N)_t = ∫_0^t C_s d(M,N)_s;
   iii) if ΔX_s = X_s − X_{s-}, then Δ{C(M)}_s = C_s ΔM_s for all s;
   iv) if M is continuous, C(M) is continuous.
C(M) is called the stochastic integral of the process C with respect to the martingale M and one writes C(M)_t = ∫_0^t C_s dM_s.

If M and N are in M^2 and C and D are in L^2(M) and L^2(N) respectively, then CD is in L^1((M,N)) and
        (C(M),D(N))_t = ∫_0^t C_s D_s d(M,N)_s.
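As a brief worked illustration of b. (a sketch only; the elementary process below is introduced just for this example), for a simple predictable process C_s = Σ_i c_i 1_{(t_i, t_{i+1}]}(s) with each c_i bounded and A_{t_i}-measurable:

    \[
    C(M)_t \;=\; \int_0^t C_s\,dM_s
    \;=\; \sum_i c_i\,\bigl(M_{t\wedge t_{i+1}} - M_{t\wedge t_i}\bigr),
    \qquad
    E\{C(M)_\infty^2\} \;=\; E\Bigl\{\int_0^\infty C_s^{\,2}\, d(M,M)_s\Bigr\},
    \]

so the semi-norm of b. i) is exactly the M^2-norm of the integral, which is how the definition extends from elementary processes to all of L^2(M).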
3.1.4. Decomposition of M^2 martingales.

For every martingale M in M^2, there is a decomposition of M, M = M_1 + M_2 + M_3, with the following properties:
   i) M_1, M_2, M_3 ∈ M^2;
   ii) M_1 is continuous;
   iii) the jumps of M_2 are accessible and those of M_3 totally inaccessible (that is, M_2 carries the discontinuities of M that can be expressed in the form M_T − M_{T-}, where T is an accessible stopping time, and M_3 the discontinuities of M that can be expressed in the form M_T − M_{T-}, where T is totally inaccessible);
   iv) if N is in M^2 and N does not have common discontinuities with M_2 (respectively M_3), then N and M_2 (respectively M_3) are orthogonal.
The usual notation is:
   M^c = M_1,    M^d = M_2 + M_3,    M^{dp} = M_2,    M^{dq} = M_3
(q stands for quasi-left continuous). When M^c = 0, M is said to be a compensated sum of jumps. This is equivalent to say that M is orthogonal to M_c^2.
3.1.5. Second increasing process associated with an L2-bounded martingale.

Let M be in M^2 and {M,M}_t = Σ_{s≤t} (ΔM_s)^2. One has, for r < t,
   i) E{Σ_{s∈(r,t]} (ΔM_s)^2 | A_r} ≤ E{M_t^2 − M_r^2 | A_r};
   ii) if M^c = 0, M^2 − {M,M} is a martingale.
For any M in M^2, define:
   j) [M,M]_t = (M^c,M^c)_t + {M,M}_t;
   jj) [M,N] = ½{[M+N,M+N] − [M,M] − [N,N]}.
Then:
   k) M^2 − [M,M] is a martingale;
   kk) [M,N]_t = (M^c,N^c)_t + Σ_{s≤t} ΔM_s ΔN_s;
   kkk) (M,M) − [M,M] is a martingale;
   kkkk) [M,N] = 0 occurs if and only if M^c and N^c are orthogonal and M and N have no common discontinuities.
If M is in M^2, the set L^2(M) of predictable processes C defined in 3.1.3.b coincides with the set of predictable processes C for which E{∫_0^∞ C_s^2 d[M,M]_s} < ∞. Moreover, for every C in L^2(M), the stochastic integral, defined in 3.1.3.b also, is the unique element of M^2 such that, for every N in M^2,
        [C(M),N]_t = ∫_0^t C_s d[M,N]_s
almost surely for all t. If M is a compensated sum of jumps, so is C(M).

Remark. If N is a stochastic integral, that is, if N = D(M'), then
        [C(M),D(M')]_t = ∫_0^t C_s D_s d[M,M']_s.

3.1.6. Local martingales.
a. A local martingale M is a right continuous process, adapted to the family {A_t, t ∈ ℝ_+}, for which it is possible to find an increasing sequence of stopping times T_n tending to infinity and such that M_{t∧T_n} is a uniformly integrable martingale (i.e. sup_{t∈ℝ_+} E|M_{t∧T_n}| < ∞ and, for every ε > 0, there is a δ > 0 such that A ∈ A, P(A) ≤ δ implies E[1_A |M_{t∧T_n}|] ≤ ε for all t ∈ ℝ_+). When M is continuous, the sequence of stopping times T_n can be taken as
        T_n = inf{t ≥ 0 such that |M_t| ≥ n}   if this set is not empty,
        T_n = ∞                                if this set is empty.
We sometimes write M^(n) for M_{t∧T_n}. L will denote the local martingales that are zero at the origin. Local martingales M in L can be decomposed as M = M^c + M^d, where M^c and M^d are in L, M^c is continuous, and M^d is orthogonal to every continuous element of L.

b. One can also associate an increasing process to every element of L. The formalism is the same as in 3.1.5, but now MN − [M,N] is only a local martingale.

c. There is a stochastic calculus for local martingales, very similar in its properties to the stochastic calculus for L_2-bounded martingales. Its main result is a very useful "change of variables" formula described in 3.1.7. A locally bounded process C is a predictable process for which there exists an increasing sequence of stopping times T_n tending to infinity for which C_{t∧T_n} I_{[T_n>0]}(t) is a bounded process. For example, if U is a process with paths that are right continuous and have finite left limits, then U_- is locally bounded. For a local martingale M in L, one can write M_{t∧T_n} = M̃_t^(n) + V_t^(n), where T_n is the sequence of a., M̃^(n) is in M^2 and V^(n) is a process with finite expected total variation which is also an L_1-bounded martingale, right continuous and zero at the origin. For C locally bounded, one obtains a unique local martingale C(M) such that, for every N in L, [C(M),N] = C[M,N], by letting C(M)_{t∧T_n} = C(M^(n))_t.
3.1.7. Semi-martingales.

a. A semi-martingale X is a process that can be written as the sum of three elements: X = X_0 + M + V, where X_0 is A_0-measurable, M is a local martingale in L and V is a process of bounded variation, right continuous and zero at the origin. This terminology is due to Meyer [1967, p. 107] but the concept was introduced by Fisk [1965] who used the term of quasi-martingale.

b. Let X be the semi-martingale X = X_0 + M + V. The integral of C with respect to X, denoted C(X), is defined to be the semi-martingale C(X) = C(M) + C(V) for the processes C such that C(M) and C(V) each exist.

c. The "change of variables" formula has the following content. For any semi-martingale X and any function F in C^2(ℝ), one has

    F(X_t) = F(X_0) + ∫_0^t F'(X_{s-}) dX_s + ½ ∫_0^t F''(X_{s-}) d(X^c,X^c)_s
             + Σ_{s≤t} {F(X_s) − F(X_{s-}) − F'(X_{s-})[X_s − X_{s-}]},

where the last term converges almost surely for every t; F(X_t) is a semi-martingale. When M and V are continuous, the formula simplifies and one can replace the integration limits by any stopping times. It takes the following form:

    F(X_t) = F(X_0) + ∫_0^t F'(X_s) dX_s + ½ ∫_0^t F''(X_s) d(X^c,X^c)_s.

As corollaries one has the following integration by parts formulae:
   i) if M and N are in L, then MN − [M,N] is a local martingale and
        M_t N_t = [M,N]_t + ∫_0^t M_{s-} dN_s + ∫_0^t N_{s-} dM_s;
   ii) if V is of bounded variation and M is in L, then
        M_t V_t − ∫_0^t M_s dV_s = ∫_0^t V_{s-} dM_s
      is a local martingale.
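As a brief worked instance of c. (a sketch only), take F(x) = x^2 and a continuous local martingale X = M with X_0 = 0; the simplified formula gives

    \[
    M_t^2 \;=\; 2\int_0^t M_s\,dM_s \;+\; (M^c,M^c)_t ,
    \]

so M^2 − (M,M) is a local martingale, which is corollary i) with N = M (here [M,M] = (M,M) because M is continuous).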
3.2. An Extension of Girsanov's Theorem
To extend Girsanov's theorem to the case of martingales, it is necessary to make some basic hypotheses in order to insure that all mathematical objects be well defined. These hypotheses are essentially those that a reasonable application of the stochastic calculus reviewed in Section 3.1 requires. These assumptions and the related notation will be kept unchanged throughout the remainder of this chapter.

(1) (Ω, A, P) is a complete probability space.

(2) {A_t, t ∈ ℝ_+} is a family of sub-σ-fields of A such that:
    i) s ≤ t implies A_s ⊆ A_t;
    ii) for every t in ℝ_+, A_t contains the sets of A that have P-measure zero;
    iii) the family is right-continuous.

(3) {M_t, A_t, A_t, t ∈ ℝ_+, P} denotes an L_2-bounded (square integrable) continuous martingale that is almost surely zero at the origin. A is the natural increasing process associated with the martingale M. The processes M and A are adapted.

(4) {φ_t, A_t, t ∈ ℝ_+} denotes a predictable process such that E_P[∫_0^∞ φ_t^2 dA_t] < ∞.

(5) We will often write N_t for ∫_0^t φ_s dM_s and B_t for ∫_0^t φ_s^2 dA_s.

(6) {W_t, t ∈ ℝ_+} denotes a measurable process such that E_P[∫_0^∞ |W_s| dA_s] < ∞.

(7) Z_s^t(φ) = (N_t − N_s) − ½(B_t − B_s); the argument φ is sometimes omitted.
Theorem 3.2.1.

Let A be such that A_∞ ≤ k < ∞ almost surely. Suppose that φ and g are predictable processes that satisfy the relations

(8)    E_P[∫_0^∞ g_t^2 dA_t] < ∞   and   E_P[∫_0^∞ φ_t^2 dA_t] < ∞.

Let f be well measurable and satisfy E_P[∫_0^∞ |f_t| dA_t] < ∞. Define X by the relation

(9)    X_t = ∫_0^t f_s dA_s + ∫_0^t g_s dM_s

and Z by the relation

(10)   Z_0^t = ∫_0^t φ_s dM_s − ½ ∫_0^t φ_s^2 dA_s.

Suppose that

(11)   E_P[exp(Z_0^∞)] = 1.

Define Q by the relation

(12)   dQ = exp(Z_0^∞) dP

and M̄ by the relation

(13)   M̄_t = M_t − ∫_0^t φ_s dA_s.

Then

(14)   {M̄_t, A_t, A_t, t ∈ ℝ_+, Q} is an L_2-bounded continuous martingale that is zero at the origin,

and

(15)   X_t = ∫_0^t (f_s + φ_s g_s) dA_s + ∫_0^t g_s dM̄_s

almost surely with respect to Q. □
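For orientation (a sketch only, not part of the theorem), the classical Girsanov situation is recovered by taking for M a Wiener process stopped at a fixed time T, so that A_t = t ∧ T and A_∞ = T:

    \[
    Z_0^\infty=\int_0^T \varphi_s\,dW_s-\tfrac12\int_0^T\varphi_s^2\,ds,
    \qquad dQ=\exp(Z_0^\infty)\,dP,
    \qquad
    \bar M_t \;=\; W_{t\wedge T}-\int_0^{t\wedge T}\varphi_s\,ds ,
    \]

and M̄ is then a continuous Q-martingale with the same increasing process t ∧ T, that is, a Wiener process on [0,T] under Q, which is Girsanov's original statement [1960].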
The proof of the theorem will be given in a sequence of propositions. For this we need certain properties of the process Z ((10)), which will be established first.

Property 3.2.1.

{N_t, B_t, A_t, t ∈ ℝ_+, P} is a continuous square integrable martingale that is almost surely zero at the origin. □

Proof: The property is a restatement of 3.1.3.b, iv) for φ and M and definitions (5). □
Property 3.2.2.

E_P[exp(Z_s^t)] ≤ 1 and E_P[exp(Z_s^t) | A_s] ≤ 1 almost surely. □

Proof: Z_s^t is a semi-martingale and F(x) = exp(x) is a function in C^2(ℝ). So by the change of variable formula (3.1.7.c):

    F(Z_0^t) = F(Z_0^0) + ∫_0^t F'(Z_0^s) dZ_0^s + ½ ∫_0^t F''(Z_0^s) d(N,N)_s;

that is,

(16)    exp(Z_0^t) = 1 + ∫_0^t exp(Z_0^s) dN_s.

Since the change of variable formula yields a semi-martingale, (16) gives a semi-martingale. Since exp(Z_0^s) is continuous, it is predictable and locally bounded and thus ∫_0^t exp(Z_0^s) dN_s is a local martingale, for N is one. So the semi-martingale of (16) is actually a local martingale. Then, for some sequence T_n of stopping times increasing to infinity, exp(Z_0^{t∧T_n}) = 1 + ∫_0^{t∧T_n} exp(Z_0^s) dN_s is a uniformly integrable martingale. Consequently, almost surely P, E_P[exp(Z_0^{t∧T_n}) | A_s] = exp(Z_0^{s∧T_n}), and multiplying both sides by exp(−Z_0^{s∧T_n}) we get E_P[exp(Z_{s∧T_n}^{t∧T_n}) | A_s] = 1, because exp(Z_0^{s∧T_n}) is A_s-measurable. Now, almost surely P, lim_n exp(Z_{s∧T_n}^{t∧T_n}) = exp(Z_s^t) and thus, by the conditional form of Fatou's lemma, we obtain, almost surely P,

    E_P[exp(Z_s^t) | A_s] = E_P[lim_n exp(Z_{s∧T_n}^{t∧T_n}) | A_s] ≤ lim_n E_P[exp(Z_{s∧T_n}^{t∧T_n}) | A_s] ≤ 1.

As a consequence, we also get E_P[exp(Z_s^t)] ≤ 1. □
Remark 3.2.1.
Property 3.2.2 can also be obtained using an approach described in
Dynkin [1965, pp. 234-236].
Although not as straightforward, the latter
technique yields finer results, the most important being Property 3.2.4
below.
Pro perty 3. 2. 2.
I
If
for all
Y
t
S
u,
t/J dA ,
u u
almost surely
then
t
E exp(Y )
s
P
and if
1. 0
Consequently, Property 3.2.2 follows from Property 3.2.2.', when
-1;(/) 2 •
u
79
Proof:
For
v
g (v)
n
E
lR+ '
=
{:
let
e
v :s; n
+
~(v-n-2)(v-n)
3
+~
n
<
v
n+l
v
~
n+l.
<
Then
2
(lR)
a.
gn
b.
lim
c.
O:s; g'(v) :s; 1
n
d.
-~
Let now
V
2
E C
n
:s;
for all
g (v) = v
n
g~(v)
t :: ft
s
and
:s; 0
1/J dA
u
and
v
g' (v) = 0
n
g"(v) = 0
n
and consider
u
for
v
v
for
Y
s
t
[n+l,oo)
E
E
c
[n+l,oo) •
N t+V t.
s
s
Since
t t ~. s} is a process of bounded variation for fixed s and
s '
{N t
t ~ s} is a square integrable martingale for every fixed s.
s '
Y t is a continuous semi-martingale for fixed s and we can thus apply
x
{V
F(x)
the differential rule (3.1.7.c) to the function
process
Y t.
s
= g
n
x
(e )
and the
We obtain:
which gives
t
g (exp(y »
n
s
g'(exp(yu»exp(Yu) dN u
n
s
s
s
- 1
I
+ 1>
t
u
u
u
{g~(exp(Ys»exp(2Ys)
u
S
(19)
So
t
g (exp(Y »
n
s
- 1
I
t
u
u
g~(exp(Ys»exp(Ys) dN
s
+
+
u
s
2
u
u
}g'
(exp(Y
»exp(Y
) dA u
. u n
s
s
t.
(tjJ ~ep
Js
u
4;Jt
g"(exp(y u »exp(2Y u ) dB u.
s
80
n
u
+ gl~(exp(Y)('xp(Ys)}dBs·
s
s
s
u
u
is a continuous adapted process and is thus
g'(exp(Y »exp(Y )
n
s
s
Now
00
u 2
u
oexp(Y s )] dB s ]
u
u
·exp(Y )dN
s
s
~
(n+l)
2002
JO~
u
dAu
00.
<
u
Ep[J s [g'(exp(Y
».
n
s
Moreover, from c., we get that
predictable.
J t g , (exp(Yu ».
Consequently
s n
s
By assumption (17)
is a martingale with zero expectation.
and c., we get that the second term on the right hand side of (19) is
(~O).
negative
From d. it follows that the third term on the right
hand side of (19) is negative.
for all
~
n
1.
t
Consequently
Ep[g (exp(Y »] - 1
n
s
~
0
Thus, from Fatou's lemma, it follows that, taking b.
into account,
t
Ep[lim g (exp(y »]
n
n
s
=
$
t
lim Ep[g (exp(y »]
-n
s
$
1. 0
n
Property 3.2.3.
.-
Suppose that
c
~
<
"00
+ t,j,'l'u 2 ~
,I,
'l'u
u,
for all
0
for all
P
almost surely
and
u,
almost surely
A00
~
k
<
00
P,
almost surely
P.
A and
Then, for all
exp(ckll.l-A
2
I).
Proof:
AN
s
t
+ j1V
Let
~u
A~ u
==
for all
,
t
s
2 t
s
and
u,
Ep[exp(AN +A V )]
s
t
t
t, Ep[exp(AN +l.lV )]
s
s
1/J u
==
2
A 1/Ju •
almost surely
$
1.
Then
P.
= A21/J u"2
+kA2~2U =
~+J[q:
u
u
Hence by Property
But
t
Remark 3.2.2.
Property 3.2.3 insures, in particular, the existence of moments of
,
~
0
A2 {1/J +t~2} ~ 0
u
u
3.2.2.',
lR and for all
in
l.l
N t.
s
81
Property 3.2.4.
:
k
<
Su~pose
< C for all
u
Iw
uI,
that
almost surely
00
P.
surely
wu
and that
t
Ep[exp(Zs(~»]
Then
Proof:
P
almost surely
-~<p
2
for all
u
P,
A ~
that
00
u, almost
= 1. 0
With the given assumptions, the second term of the right
hand side of (19) is zero.
The first term of the same right hand side
has zero expectation and thus, taking expectations on both sides of
(19), we obtain the equality:
(20)
t
g"(exp(Z u »exp(2Zu )dB
Ep[g (exp(Z »]
n
n
s
s
s
s
J
.
But point d. at the beginning of the proof of Property 3.2.2.' gives
u
s
-g"(exp(Z »
n
u
2 su INul
exp(2Z ) ~ exp( s~u~t s)
Obviously,
for all
(21)
s
almost surely
u,
u
s
-g"(exp(Z »exp(2Z
n
P.
u
H 2
s u
~2 ~ 2c
and, by hypothesis,
u
So
~ ~2
X[n,oo) {ex p (zu)}exP [2 sug INU I)2C.
s
s~u-t
s
u
u
exp(Z) ~ n implies exp(N ) ~ n, which in turn gives
s
s
sup INul
exp(s~u~t
s) ~ n. Thus, from (21), we obtain
But,
u
u
s
s
-g"(exp(Z )exp(2Z)¢
n
2
u
~
3cX [n,oo) {ex p [ s~u~t
sup INul)}
s
Taking expectations, we get, because
3ckE
fx
L'L[n,m)
Aoo
<
{ex p [ sup
k
s'U'L
<
INUl]}
S
Now, with the hypotheses made, the moments of
82
00,
exp[2ss~.uuP~tINusl).
almost surely
(.xp('l sup
s'
U'
l
p,
INI~IJII.
~
s
exp(s~~~tIN~I) exist
(c. Doleans [1970, p. 247, Thm. 3]).
So we can use Schwarz's inequality
Ep[X[n,00){exP(s~~~tIN~I)}exp(2s~~~tINMI)]~
to obtain that
Ep[xfn,~){exP(s~~~tIN~I)}]~Ep[exp(4s~~~tIN~I)]~
3ckE p [exp(4 s~u~t
sup IN~I)]~'
(22)
0
~
t
-EPU
s
and,letting
k
O
=
also
kOp~(exp( s-u_t
~u~ IN~I) ~
u
g"(exp(Zu»exp(2Z )dB u
n
s
s
s
Thus (20) and (22) yield
and the right hand side tends to zero as
n
tends to infinity. 0
Remark 3.2.3.
The next property can be proved as in Girsanov [1960, pp. 289-291], since the proof requires only Property 3.2.2 and attributes of
conditional expectations.
Property 3.2.5.
Define the measure
(23)
Denote by
measure
a)
IQ(f)
Q
A by the relation
00
exp(ZO) dP.
=
dQ
Q on
(thus
the integral of the function
IQ(f)
= EQ[f]
i f the random variable
measurable, one has
S)
(24)
suppose that
Q(~l)
=
00
Ep[exp(ZO)]
if
X
1.
Then
83
~
f
0
with respect to the
= 1).
almost surely and
Then:
X
is A t
t
i)
1
Ep[exp(Zs)]
Ep[exp(zt)IA]
s
s
ii)
iii)
if
Moreover, if
X ~ 0
1
almost surely and
X is At-measurable and
EQ[XIA s ]
=
integrable, one has
t
-~
iv)
Q
X is At-measurable, one has
Ep[X exp(Zs) lAs]
t ~ s.
for
0
Remark 3.2.4.

In Property 3.2.1, it was noted that N_t is an L_2-bounded martingale whose increasing process is B_t. Both N_t and B_t are almost surely P-finite random variables and consequently Z_0^∞ > −∞ almost surely P. Thus exp(Z_0^∞) > 0 almost surely, so that the measures P and Q of Property 3.2.5 are equivalent.
Property 3.2.6.
There exists a sequence of uniformly bounded, predictable processes
cjJ (n)
s,
tends to
t
exp(Z (cjJ»
s
for all
t
and
almost surely. 0
Proof:
From Courrege [1962-1963, p.7.09, Proposition 3], there
exists a sequence of uniformly bounded, predictable processes
cjJ(n)
such that
i)
Ep[f~
ii)
Ep[f~
:S
1
2n
2
But, from Neveu [1965, p. 133, Proposition IV.s.2] we get that,
ft (cj> -cjJ(n»dM
o
s
s
s
being a martingale, for all strictly positive
cp{suplft (cjJ S -cjJ(n»dM
s
s
lR+
Since
E[ IXI]
I>
c}
0
:S
2
E [X ],
we get
84
c's,
c+~plr
e
lR+
0
{EPU:
Thus, for
c
I
(<j> -<j> (n))dM
s s
s
>
c}
<
(<j> _<j>(n))2
s s
S;:{EPU: (<j> s _<j>(n))2
s
dAJf
dAJf
::;
1
::;
n
2
0,
>
I
c
n=l
p{suplft (<j> _<j>(n)) dM
lR
0
s s
s
I>
c}
::;
1
+
which is a sufficient condition for almost sure convergence of
f~ <j>(n)dMs
f~ <j>s dM s (Cramer-Leadbetter [1967, p. 41, 3.5.2]).
2 2
la -b 1 = l(a-b)2+2b(a-brIAo~ (a_b)2 + 21bl la-bl. Thus
Also
to
t
<j>2 dA _ ft <j>(n)2 dA
s
0 s
0
1f o s
ooo
f
I
(<j> _<j>(n))2 dA +
s s
s
So
and consequently for
<j>2 dA
o
s
-f
to
c > 0,
<j>~n)2 dAs
I}
>
c
c
Thus
o < 00,
::;
-
C
for all
ft <j>2 dA
o
s
zt(rp),
H
c >
s
o.
Again this implies that
almost surely.
o
s
Then, by the definition of
s
converges to
zt(<j> (n))
and
S
we have that
so does then
ft <j>(n)2 dA
zt(11(n))
S
exp (Z t (<j> (n)))
H
converges to
towa rdH
Zt(rll)
H
exp (Z l «(1')). II
A
almOHl
Hlln'ly anel
Property 3.2.7.
If
Q(Q) = 1
cesses and
then
and
¢(n)
exp(zt(¢(n)))
s
exp(zt(¢(n)))
s
Proof:
is a sequence of bounded predictable pro-
converges to
converges to
exp(zt(¢))
s
in probability,
in Ll-norm. 0
exp(Zt(¢))
s
Lemma 5 of Girsanov's paper [1960, p. 292] depends only on
Ep[exp(z~(¢(n)))] = Ep[exP(Z:(¢))] = 1
the fact that
and so holds as
well in the present set up (Property 3.2.5.i) and Property 3.2.4). 0
Proposition 3.2.1.
A
Suppose that
Let
M
t
k <
~
00
00
almost surely
p
Q(Q)
and that
1.
t
Mt -fa ¢u dAu .
Then
E
[Mt 2] ~
k
Let
=
Q
Proof:
U
t
<
00
for all
t. 0
Q and
Property 3.2.5 (S, iii), the definitions of
M
t
yield that
Now, from the properties of conditional expectations for positive random
variables, we get:
Ep[Ep[(Mt-U t )
00
tool
2
exp(ZO) exp(Zt) At]]
I
Ep[Ep[exp(Zt) At]' (Mt-U t )
Ep [(M -U )2
t t
since, from Property 3.2.5 (S, ii)) ,
respect to
P,
M
variation (since
that is,
M
t
and the function
t
exp(ZO)]
exp(z~)]
Ep [ ex p (z'~) IAt] = 1.
Now, with
i.s the sum of a martingale and a process of hounded
t
A
k,
L
2
e]
eml'n ls willi
is a semi-martingall'.
is continuous.
2
By
n'sJH'('l
to
dA
it
rl'
assllmption this seml·-marLillgall·
So we can apply the differential rule (J.l.J.e) to
F(x)
= x2
to get
86
I. I ) ,
-- ...
M
t
F(M )
t
F(M )
O
+~
F' (M )dM
F' (M )dU
s
s - J:
s
s
=
J:
J:
F"(M )dA .
s
s
So
-2
M
t
ft 2M dM - Jt
o
s s
o
2M dU +
s
s
~
ft 2dA
s
0
or
-2
M
t
=
2{ft M dM - Jt M dU } + A .
o sst
o s s
t
t
s
exp (ZO) = l+JOexp(ZO)dN '
s
Now relation (16) gives
-2
t
M exp(ZO)
t
=
Atexp(Z~)
+
z{f:
Thus
f:
MsdM s -
MsdU s }
exp(z~)
+ 2{Jt M dM - ft M dU }·ft
o s s
0 s s
0
dN .
s
Equivalently,
=
(25)
At
exp(Z~) +
2{f: MsdMs - Jo MdU }
t
s
+ 2{ft M dM ft exp(ZOs)dN - ft
o s s 0
s
o
Since
M
s
and
cally bounded.
s
exp(ZO)
SO
s
MdU
s
ftexp(ZoS)dN }.
s 0
s
are continuous, they are predictable and 10-
J~ MsdM s
J~ eXP(Z~)dNs are local martingales
and
and one can apply to the last two terms of (25) the integration by parts
formulae (3.l.7.c. i) and ii)) which give
Jot -fs0 MudM~l exp(ZoS)dN s + ft0
87
IJs exp(ZUo)dN
~o
lM dM
~ s s
since all the processes concerned are continuous.
If we use (26) and
(27) in (25), we obtain:
At
+
exp(z~) + 2{J: MsdM s
- J: MsdU s }
2WMdM .JeZdi + J: U: MudMJeXP(Z~)dNs + J: U: eXP(Z~)dNJMsdMs}
- 2{J:
U: exP(Z~)dNJMsdUs
+
U: MUduJexP(Z~)dNs}'
J:
Putting together some of the above terms and using repeatedly relation
(16), we obtain:
At exp
- 2
(z~)
+
2
I: ~ I:
+
f: I~
exp
+
I:
exp
(Z~)dNJMsdMs
(Z~)dNJMsdUs
l
- JS M dU exp(ZoS)dN
o u ~
s
+ 2 Jt M exp(ZOs)dU
o
+ 2
s
M dM
u
u
s
- ISo Mu dU u eXP(Z~)dN s
The third and fifth terms of the right hand side cancel out and we
finally obtain:
88
fto
2
r;S M dM ~o u u
fS0 MuduJeXP(zoS)
u
dN •
s
We notice again that
are continuous processes, thus predictable and locally bounded.
Con-
sequently, the integrals in (28}~are local martingales and thus there
exist
sequences of stopping times
{T } and
{S}, tending to infinity,
n
p
tAT
such that the expected values of these terms evaluated at
are zero.
n
AS
p
Thus
=
But
Am ~ k.
ii».
~
tAT AS
Ep[Aoo exp(ZO n p)] ~ k
So that
by Property 3.2.5 (8)
2
E [M ]
Q t
Repeated application of Fatou's lemma finally yields
k. 0
Proposition 3.2.2.
A
If
00
~
k
{Mt,At,tElR+,Q}
Proof:
(29)
-
(M
almost surely
00
P
and
Q(Q)
t
- M ) exp (Z )
s
= 1,
then
is a martingale. 0
One has, using the definition of
-
t
<
s
M and relation (16)
t
=
{(M t -M s ) - (Ut-U s )}exp(Z s )
=
(M
=
(M -M )(1 + ft exp(Zu)dN ) - (Ut-U )exp(zt)
t s
sus
s
s
=
(M -M ) + (M -M )
t
t
t
)exp(Z ) - (U - U )exp(Z )
sst
s
s
- M
s
t
s
t
t
- (U -U )exp(Z ).
t
Ii
H9
S
ft
s
exp(Zu)dN
s
u
-
Now we use integration by parts (3.l.7.c) i) and ii)) on the local
(M -M)
martingales
t
s
and
t
(30)
(M -M )
t
s
Js
exp(Zu)dN
s
U
J: U:
+
exp (Zv)dN
s
jl dM
U
+ Jt (MU -M s )exp(Zu)dN
.
s
U
s
But, by the fundamental property of stochastic integrals (3.1.5 and the
following remark)
(31)
dMU
,f'
t exp(ZU) d[M,N]
s
U
exP(ZU)dNJ
s
U
s
fs
t
t exp(ZU)dU •
s
U
Js
=
Also, by (16)
=
(32)
Jt(exp(zt)-l)dM
s
s
t exp(ZU)dM
s
U
Js
U
- (M -M )
t
s
and
(33)
ft
S
(M -M )exp(Zu)dN
U s
s
U
.
Jt M exp(Z U)dN
U
S
U
M Jt exp(Zu)dN
S
S
U
s
S
J:
U
M exp(Z )dN
U
S
U
t
M (exp (Z )-1).
S
S
So (30) becomes with the help of (31) , (32) and (33) ,
(34)
(M t -M s )
ft
s
exp(Zu)dN
s
U
t
fs
ft
exp(Zu)dU +
exp(Zu)dM
sus
u
s
- (M -M ) +
ts
ft
M exp(ZlJ)dN
11
S
II
S
t
- M (exp(Z )-1).
s
s
Again from the integration by parts formula (3.1.7.c.i)), we obtain:
90
r
u
u
Jt .
exp(Z s )dM
. u + s M\,1 exp(Z s )dNu
s
(35)
s
= exp(-ZO)
=
exp(-z~)
{fs eXP(Z~)dMu +
=
r
s
Mud
exp(Z~)}
{list (M exp(Z) - [M,eXp(Z)])}
Putting (35) into (34), we get
t
Js
exp(Zu)dU - (M -M ) - M (exp(zt)-l)
s
u
t s
s
s
Consequently, (29) becomes
(M -M ) exp(Zt)
t
s
s
(36)
Whenever
Z
[M,e]t
exp(Zu)dU - (U -U ) exp(zt)
s
u
t s
s
is a square integrable martingale,
is a martingale.
This is in particular the case when
~
is
uniformly bounded, for one then has
=
and, if
c
is the bound of
recalling that
~,
one also has
p. 247, Th. 3]).
k
is the bound of
(C. Doleans [1970,
So, in case
~
is uniformly bounded, from (36) we de-
rive the equality
(37)
Ep [(Mt-M )exp (zt)
s
s
IA s ]
(the third term of the left hand side of (36) disappears by Property
91
Now, the integration by parts formula (3.l.7.c.ii»
3.2.5 (6) ii».
gives
(38)
f
u
t
exp(Z )dU - exp (Z ) (U -U )
s
t s
s
u
¢
is uniformly bounded,
s
When
all fixed
s
t
$
JR +
in
-f
u
(U -U ) d exp(Z ).
u s
s
s
t
exp(Z )
s
is an L -bounded martingale for
2
(Properties 3.2.3 and 3.2.4).
Moreover,
U
is continuous, bounded, thus predictable and such that
Ep[f~
2
Ut d\exP(Z:»t] <
Consequently, the right hand side of (38)
00.
-
is a martingale and from (37) we get
by Property 3.2.5 (y)
t
M exp(ZO)
t
is a martingale with respect to
In case
¢(n)
t
shows that for uniformly bounded
P,
Q (depending on
martingale with respect to
sequence
-
Ep[(Mt-M s )exp(Z s )IA s ]
so that
cP,
M
t
is a
is not uniformly bounded, we can approximate it by a
¢
Then
as in Property 3.2.6.
Mt (n)
M _f t ¢ (n)dA
t
0 s
s
Define
1
sup
if
u
I
n,p
(t)
1
p
sup
if
JA [I o,p
t
1Mu I
$
p
otherwise.
As'
in
(t)M (0) -
=
$
(t)
o
A
t
otherwise
u
I
$
=
o
lim lim
p
n
which
¢).
a square integrable martingale by what precedes.
Then, for all
0,
t
I
n,p
(s)M (0)] exp(Zl(el>
S
JA (M -Ms )exp(Zt(CP»dP.
s
t
92
s
(n»)dl'
is
Let indeed $\mu_n$ and $\mu$ be defined by $d\mu_n = \exp(Z_s^t(\phi(n)))\,dP$ and $d\mu = \exp(Z_s^t(\phi))\,dP$; $\mu$ and $\mu_n$ are probability measures (by hypothesis for $\mu$ and Property 3.2.4 for $\mu_n$). Since $\exp(Z_s^t(\phi(n)))$ converges to $\exp(Z_s^t(\phi))$ in $L_1$ (Property 3.2.7), $\mu_n(B)$ converges to $\mu(B)$ for all $B$ in $A$. Define also $f_n = I_{n,p}(t)\tilde M_t(n) - I_{n,p}(s)\tilde M_s(n)$; $f_n$ converges almost surely to $I_p(t)\tilde M_t - I_p(s)\tilde M_s$ (proof of Property 3.2.6) and $|f_n| \le 2p$ for all $n$. So we can apply a theorem of Royden [1968, 18. Proposition, p. 232] to get

$\displaystyle \lim_n \int_A \bigl[I_{n,p}(t)\tilde M_t(n) - I_{n,p}(s)\tilde M_s(n)\bigr]\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP = \int_A \bigl[I_p(t)\tilde M_t - I_p(s)\tilde M_s\bigr]\exp\bigl(Z_s^t(\phi)\bigr)\,dP.$

Now $|I_p(t)\tilde M_t - I_p(s)\tilde M_s|\exp(Z_s^t(\phi)) \le (|\tilde M_t| + |\tilde M_s|)\exp(Z_s^t(\phi))$ and the latter is integrable. Since $I_p(t)\tilde M_t - I_p(s)\tilde M_s$ converges almost surely to $\tilde M_t - \tilde M_s$, we have by dominated convergence that

$\displaystyle \lim_p \int_A \bigl[I_p(t)\tilde M_t - I_p(s)\tilde M_s\bigr]\exp\bigl(Z_s^t(\phi)\bigr)\,dP = \int_A (\tilde M_t - \tilde M_s)\exp\bigl(Z_s^t(\phi)\bigr)\,dP,$

which proves our claim.
We are now going to prove that this integral is zero.
Since $\tilde M_t(n)\exp(Z_0^t(\phi(n)))$ is a martingale with respect to $P$, the following relation is valid:

$\displaystyle \int_A \bigl[I_{n,p}(t)\tilde M_t(n) - I_{n,p}(s)\tilde M_s(n)\bigr]\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP = \int_A \bigl[I_{n,p}(t)\tilde M_t(n) - I_{n,p}(s)\tilde M_t(n)\bigr]\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP.$
The same argument as presented above proves that
$\displaystyle \lim_p \lim_n \int_A I_{n,p}(t)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP = \int_A \tilde M_t\exp\bigl(Z_s^t(\phi)\bigr)\,dP.$

So, to conclude, it will be sufficient to show that

$\displaystyle \lim_p \lim_n \int_A I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP = \int_A \tilde M_t\exp\bigl(Z_s^t(\phi)\bigr)\,dP.$
Now

(39)  $\displaystyle \int_A I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP = \int_{A\cap[\omega:\,I_{n,p}(t)=1]} I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP + \int_{A\cap[\omega:\,I_{n,p}(t)=0]} I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP.$

But $[\omega: I_{n,p}(t)=1] \subset [\omega: I_{n,p}(s)=1]$ and thus

$\chi_{[\omega:\,I_{n,p}(t)=1]}\, I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr) = \chi_{[\omega:\,I_{n,p}(t)=1]}\, I_{n,p}(t)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr).$
The first term in the right hand side of (39) is therefore equal to

$\displaystyle \int_A I_{n,p}(t)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP$

and, as shown above, this integral converges to $\int_A \tilde M_t\exp(Z_s^t(\phi))\,dP$ as $n$ tends to infinity. We are left to show that the second term is zero.
But

$\displaystyle \Bigl(\int_{A\cap[\omega:\,I_{n,p}(t)=0]} I_{n,p}(s)\tilde M_t(n)\exp\bigl(Z_s^t(\phi(n))\bigr)\,dP\Bigr)^2 \le \int_A I_{n,p}(s)\tilde M_t(n)^2\,d\mu_n \cdot \int_{A\cap[\omega:\,I_{n,p}(t)=0]} \exp\bigl(Z_s^t(\phi(n))\bigr)\,dP,$

where we apply Schwarz's inequality (with respect to $\mu_n$). But $\int_A I_{n,p}(s)\tilde M_t(n)^2\,d\mu_n \le k$ by Proposition 3.2.1.
So, since $\exp(Z_s^t(\phi(n)))$ converges to $\exp(Z_s^t(\phi))$, we apply the same theorem of Royden mentioned above and the dominated convergence theorem to the right hand side of the last inequality. It is seen that the right hand side tends to zero, for $n$ and then $p$ tending to infinity, since $I_{n,p}(t)$ tends to $I_p(t)$ and $\mu\{A\cap[\omega: I_p(t)=0]\}$ then goes to zero. Consequently, $\tilde M_t\exp(Z_0^t)$ is a martingale. $\Box$
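To make the preceding conclusion concrete, the following is a small Monte Carlo sketch that is not part of the thesis, written under the simplest identifications $M = W$ (a Wiener process), $A_t = t$ and a constant bounded $\phi$; it only checks numerically that the expectation of $\tilde M_t\exp(Z_0^t)$ stays at its initial value $0$.

```python
import numpy as np

# Hypothetical numerical check (not part of the thesis): in the special case
# M = W (Brownian motion), A_t = t and a constant bounded phi, the process
#   Mtilde_t * exp(Z_0^t),  with  Mtilde_t = W_t - phi*t  and
#   Z_0^t = phi*W_t - 0.5*phi^2*t,
# should be a P-martingale started at 0, so its mean should stay near 0.

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 200, 1.0
dt = T / n_steps
phi = 0.7  # the bound c of phi

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)

Z = phi * W - 0.5 * phi**2 * t    # Z_0^t for constant phi
Mtilde = W - phi * t              # translated process
X = Mtilde * np.exp(Z)            # candidate P-martingale

for k in (n_steps // 4, n_steps // 2, n_steps - 1):
    print(f"t = {t[k]:.2f}   E_P[ Mtilde_t exp(Z_0^t) ] = {X[:, k].mean():+.4f}")
```

Any systematic drift of these sample means away from zero, beyond the Monte Carlo error, would indicate a failure of the martingale property in this special case.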
In Proposition 3.2.3 below, we have to verify the characterizing
property (40).
To this effect we need a result due to Neveu [1964,
p. 125, Principe].
Neveu gives the supermartingale version of it.
The
same proof, "mutatis mutandis", works for submartingales and we give it
in the following lemma.
Lemma. Let $X$ and $Y$ be submartingales on $(\Omega, A, P)$ with respect to $\{A_t,\ t\in\mathbb{R}_+\}$. Let $T$ be a stopping time and suppose that $X_T \le Y_T$ almost surely $P$. Define

$Z_t = X_t$ on $[t < T]$,  $Z_t = Y_t$ on $[t \ge T]$.

Then $Z$ is a submartingale. $\Box$

Proof: Let $s < t$ and define the stopping time $R = (T\vee s)\wedge t$, so that $R = s$ on $[T \le s]$, $R = T$ on $[s < T \le t]$, $R = t$ on $[T > t]$, and $A_s \subset A_R \subset A_t$. Also

$Z_t = 1_{[t<T]}X_t + 1_{[t\ge T]}Y_t,$

and $[t \ge T]$ belongs to $A_R$. Since $X$ and $Y$ are submartingales, $E[X_t\mid A_R] \ge X_R$ and $E[Y_t\mid A_R] \ge Y_R$, so that

$E[Z_t\mid A_R] \ge 1_{[t<T]}X_R + 1_{[t\ge T]}Y_R.$

Now $1_{[t\ge T]} = 1_{[T\le s]} + 1_{[s<T\le t]}$; on $[s < T \le t]$ we have $R = T$ and $Y_R = Y_T \ge X_T = X_R$, on $[T \le s]$ we have $R = s$ and $Y_R = Y_s$, and on $[t < T]$ we have $R = t$. Thus

$E[Z_t\mid A_R] \ge 1_{[T>s]}X_R + 1_{[T\le s]}Y_s.$

Finally, since $[T\le s] \in A_s$, $Y_s$ is $A_s$-measurable and $E[X_R\mid A_s] \ge X_s$ (optional sampling, $s \le R \le t$), so

$E[Z_t\mid A_s] = E\bigl[E[Z_t\mid A_R]\mid A_s\bigr] \ge 1_{[T>s]}X_s + 1_{[T\le s]}Y_s = Z_s. \quad\Box$
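As an illustration only (a toy discrete-time setting chosen for this note, not taken from the text), the pasting in the lemma can be exercised numerically; the script below merely checks the weaker consequence that $n \mapsto E[Z_n]$ is nondecreasing when a simple random walk is pasted with the constant submartingale $+p$ at the first time the walk reaches $\pm p$.

```python
import numpy as np

# Toy discrete-time illustration (assumed set-up): X is a simple symmetric random
# walk (a martingale, hence a submartingale), Y is the constant submartingale +p,
# and T is the first time |X| reaches p, so that X_T <= Y_T.  The pasted process Z
# equals X before T and +p from T on.  We only check that E[Z_n] is nondecreasing.

rng = np.random.default_rng(1)
n_paths, n_steps, p = 100_000, 60, 3

steps = rng.choice([-1, 1], size=(n_paths, n_steps))
X = np.concatenate([np.zeros((n_paths, 1), dtype=int), np.cumsum(steps, axis=1)], axis=1)

hit = np.abs(X) >= p
T = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps + 1)  # first hitting index

n_grid = np.arange(n_steps + 1)
Z = np.where(n_grid[None, :] < T[:, None], X, p)  # X before T, +p at and after T

means = Z.mean(axis=0)
print("E[Z_n], n = 0..10:", np.round(means[:11], 3))
print("nondecreasing up to Monte Carlo error:", bool(np.all(np.diff(means) > -1e-2)))
```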
Proposition 3.2.3.
If we suppose that $A_\infty \le k < \infty$ almost surely $P$, that $Q(\Omega) = 1$ and that

$\displaystyle \lim_p\; p\,P\bigl\{\sup_{t\in\mathbb{R}_+} |\tilde M_t^2 - A_t|\exp\bigl(Z_0^t(\phi)\bigr) \ge p\bigr\} = 0,$

then the natural increasing process associated with $\tilde M$ is $A$. $\Box$

Proof: Since there is a unique natural increasing process $\tilde A$ such that

(40)  $E_Q\bigl[\tilde M_t^2 - \tilde M_s^2 \mid A_s\bigr] = E_Q\bigl[\tilde A_t - \tilde A_s \mid A_s\bigr]$  almost surely $Q$,

it is sufficient to show that, with respect to $Q$, $A$ is natural and increasing and satisfies relation (40).
Since $A$ is increasing and uniformly bounded almost surely with respect to $P$, and since $Q$ is equivalent to $P$, $A$ will retain these properties with respect to $Q$.

To check that $A$ is natural, we have to consider a positive, bounded and right continuous martingale with respect to $Q$, say $Y$. $Y$ must then be positive, bounded and right continuous with respect to $P$. We now show that $Y_t\exp(Z_0^t)$ is a martingale with respect to $P$: from Property 3.2.5 (β) iv) we have

$E_Q[Y_t\mid A_s] = E_P\bigl[Y_t\exp(Z_s^t)\mid A_s\bigr].$

But $Y$ is a $Q$-martingale and thus

$Y_s = E_P\bigl[Y_t\exp(Z_s^t)\mid A_s\bigr].$

Multiplying both sides by $\exp(Z_0^s)$, which is $A_s$-measurable, and noticing that $\exp(Z_0^s)\exp(Z_s^t) = \exp(Z_0^t)$, we obtain

$Y_s\exp(Z_0^s) = E_P\bigl[Y_t\exp(Z_0^t)\mid A_s\bigr].$

Let

$T_n = \inf\{t$ such that $Y_t\exp(Z_0^t) \ge n\}$ if the set $\{\cdot\}$ is not empty, $T_n = \infty$ otherwise.

Then $Y_{t\wedge T_n}\exp(Z_0^{t\wedge T_n})$ is a bounded, positive, right-continuous martingale with respect to $P$ and, since $A$ is natural, we can write

(41)  $\displaystyle E_P\Bigl[\int_0^\infty Y_{t\wedge T_n}\exp\bigl(Z_0^{t\wedge T_n}\bigr)\,dA_t\Bigr] = E_P\Bigl[\int_0^\infty \bigl(Y_{\cdot\wedge T_n}\bigr)_{t-}\exp\bigl(Z_0^{t\wedge T_n}\bigr)\,dA_t\Bigr].$

But

(42)  $\displaystyle E_P\Bigl[\int_0^\infty Y_{t\wedge T_n}\exp\bigl(Z_0^{t\wedge T_n}\bigr)\,dA_t\Bigr] = E_P\Bigl[\int_0^{T_n} Y_t\exp(Z_0^t)\,dA_t + \int_{(T_n,\infty)} Y_{T_n}\exp\bigl(Z_0^{T_n}\bigr)\,dA_t\Bigr] \le E_P\Bigl[\int_0^{T_n} Y_t\exp(Z_0^t)\,dA_t\Bigr] + n\,E_P[A_\infty - A_{T_n}]$

and also

(43)  $\displaystyle E_P\Bigl[\int_0^\infty \bigl(Y_{\cdot\wedge T_n}\bigr)_{t-}\exp\bigl(Z_0^{t\wedge T_n}\bigr)\,dA_t\Bigr] = E_P\Bigl[\int_0^{T_n} Y_{t-}\exp(Z_0^t)\,dA_t + \int_{(T_n,\infty)} Y_{T_n}\exp\bigl(Z_0^{T_n}\bigr)\,dA_t\Bigr],$

each term being finite since $Y_{t\wedge T_n}\exp(Z_0^{t\wedge T_n}) \le n$ and $A_\infty \le k$.
From (41), (42) and (43) we deduce

$\displaystyle E_P\Bigl[\int_0^{T_n} Y_t\exp(Z_0^t)\,dA_t\Bigr] = E_P\Bigl[\int_0^{T_n} Y_{t-}\exp(Z_0^t)\,dA_t\Bigr].$

By monotone convergence, the same relation is valid if the upper limit is infinity. Now $\exp(Z_0^t)$ is a local martingale and $\int_0^t Y_s\,dA_s$ is of bounded variation.
So, by the integration by parts formula (3.1.7.c.ii)), we get

(44)  $\displaystyle \exp(Z_0^t)\int_0^t Y_s\,dA_s - \int_0^t \exp(Z_0^s)\,Y_s\,dA_s = \int_0^t \Bigl(\int_0^s Y_u\,dA_u\Bigr)\,d\exp(Z_0^s),$

and both sides of (44) are local martingales with respect to $P$. The same is true if we replace $Y_s$ by $Y_{s-}$. Since $\int_0^s Y_u\,dA_u$ and $\int_0^s Y_{u-}\,dA_u$ are uniformly bounded, those local martingales will be martingales as soon as $\exp(Z_0^t)$ is an $L_2$-bounded martingale, and this occurs in particular when $\phi$ is uniformly bounded. So we approximate $\phi$ by a sequence $\phi(n)$ as in Proposition 3.2.6. Then, taking expectations,

(45)  $\displaystyle E_P\Bigl[\exp\bigl(Z_0^t(\phi(n))\bigr)\int_0^t Y_s\,dA_s\Bigr] = E_P\Bigl[\int_0^t \exp\bigl(Z_0^s(\phi(n))\bigr)\,Y_s\,dA_s\Bigr]$

and

(46)  $\displaystyle E_P\Bigl[\exp\bigl(Z_0^t(\phi(n))\bigr)\int_0^t Y_{s-}\,dA_s\Bigr] = E_P\Bigl[\int_0^t \exp\bigl(Z_0^s(\phi(n))\bigr)\,Y_{s-}\,dA_s\Bigr].$

We have just shown that the right hand sides are equal (for $t = \infty$). But $\exp(Z_0^\infty(\phi(n)))$ converges in $L_1$ to $\exp(Z_0^\infty(\phi))$ and, since $\int_0^\infty Y_s\,dA_s$ and $\int_0^\infty Y_{s-}\,dA_s$ are uniformly bounded, the left hand sides of (45) and (46) (for $t = \infty$) converge to $E_P[\exp(Z_0^\infty(\phi))\int_0^\infty Y_s\,dA_s]$ and $E_P[\exp(Z_0^\infty(\phi))\int_0^\infty Y_{s-}\,dA_s]$ respectively. Consequently,

(47)  $\displaystyle E_Q\Bigl[\int_0^\infty Y_t\,dA_t\Bigr] = E_Q\Bigl[\int_0^\infty Y_{t-}\,dA_t\Bigr].$

However, (47) is necessary and sufficient for $A$ to be natural (Meyer [1966, p. 112, T 19]).
We now prove that $A$ is the natural increasing process associated with $\tilde M$. Denote by $a(t)$ the local martingale $(\tilde M_t^2 - A_t)\exp(Z_0^t)$ (Proposition 3.2.1 (28)). Since it is continuous, it is possible to stop it the first time $|a(t)|$ crosses the level $p$ and obtain a uniformly integrable martingale (3.1.6.a)). So if $S_p$ is the associated stopping time, $a(t\wedge S_p)$ is a uniformly integrable martingale. Define $c(p,t) \equiv -p$ and $d(p,t) \equiv +p$. Then, since $a(S_p) = \pm p$, we get $c(p,S_p) \le a(S_p)$ and $a(S_p) \le d(p,S_p)$. So

$\alpha^+(t) = a(t)$ if $t < S_p$, $\alpha^+(t) = c(p,t)$ if $t \ge S_p$,

defines a supermartingale (Neveu's principle) and

$\alpha^-(t) = a(t)$ if $t < S_p$, $\alpha^-(t) = d(p,t)$ if $t \ge S_p$,

defines a submartingale (lemma). Thus, for $A$ in $A_s$ and $s \le t$, we have (resp. super- and submartingale inequality)

(48)  $\displaystyle \int_A \alpha^+(t)\,dP \le \int_A \alpha^+(s)\,dP$  and  $\displaystyle \int_A \alpha^-(t)\,dP \ge \int_A \alpha^-(s)\,dP.$

The first inequality of (48) can be written

$\displaystyle \int_{A\cap[t<S_p]} a(t)\,dP + \int_{A\cap[t\ge S_p]} c(p,t)\,dP \le \int_{A\cap[s<S_p]} a(s)\,dP + \int_{A\cap[s\ge S_p]} c(p,s)\,dP$

or

(49)  $\displaystyle \int_{A\cap[t<S_p]} a(t)\,dP - \int_{A\cap[s<S_p]} a(s)\,dP \le p\bigl\{P\{A\cap[t\ge S_p]\} - P\{A\cap[s\ge S_p]\}\bigr\}.$

Now, if $a(t)$ has crossed the level $p$ at time $s \le t$, it will have crossed it at time $t$, and thus $[s \ge S_p] \subset [t \ge S_p]$. Consequently, the right hand side of (49) can be majorized by $p\,P\{A\cap[t\ge S_p]\}$. Letting $p$ tend to infinity, by our hypothesis, we obtain

$\displaystyle \int_A a(t)\,dP \le \int_A a(s)\,dP,$

that is, $a(t)$ is a supermartingale. The second inequality of (48) will give

$\displaystyle \int_{A\cap[t<S_p]} a(t)\,dP - \int_{A\cap[s<S_p]} a(s)\,dP \ge -p\,P\{[t \ge S_p]\}$

and again, in the limit, we get $\int_A a(t)\,dP \ge \int_A a(s)\,dP$. So $a(t)$ is also a submartingale. It is then a martingale. We have thus proved that $a(t) = (\tilde M_t^2 - A_t)\exp(Z_0^t)$ is a martingale with respect to $P$; $\tilde M_t^2 - A_t$ is then a martingale with respect to $Q$ and, by unicity, $A$ is the natural increasing process associated with $\tilde M$. $\Box$
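The conclusion of Proposition 3.2.3 can be made plausible numerically. The sketch below is an added illustration under simplifying assumptions ($M = W$, $A_t = t$, a deterministic $\phi$), none of which come from the thesis: translating $M$ by the bounded-variation term $\int_0^t \phi_s\,dA_s$ leaves the realized quadratic variation, and hence the increasing process, essentially unchanged.

```python
import numpy as np

# Added illustration under simplifying assumptions (M = W, A_t = t, phi_t = cos t):
# translating M by the bounded-variation term int_0^t phi_s dA_s does not change
# the realized quadratic variation, consistent with A remaining the increasing
# process associated with Mtilde.

rng = np.random.default_rng(2)
n_steps, T = 200_000, 1.0
dt = T / n_steps
t = dt * np.arange(1, n_steps + 1)

dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
W = np.cumsum(dW)
Mtilde = W - np.cumsum(np.cos(t) * dt)   # W_t - int_0^t cos(s) ds (left-point rule)

def realized_qv(path):
    increments = np.diff(np.concatenate(([0.0], path)))
    return float(np.sum(increments ** 2))

print("realized [M]_T       =", round(realized_qv(W), 4))        # close to A_T = 1
print("realized [Mtilde]_T  =", round(realized_qv(Mtilde), 4))   # close to 1 as well
```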
The next proposition shows how the integral $\int f\,dM$ is transformed under the change of measure.

Proposition 3.2.4. Suppose that $A_\infty \le k$ almost surely $P$ and that $f$ is a predictable process such that $E_P[\int_0^\infty f_t^2\,dA_t] < \infty$. We then have, almost surely,

$\displaystyle \int_0^t f_s\,dM_s = \int_0^t f_s\,d\tilde M_s + \int_0^t f_s\phi_s\,dA_s.$

Proof: With the given hypotheses, $\int f\,d\tilde M$ is an $L_2$-bounded martingale. But the integral $\int f\,dM$ can also be defined as a limit in probability (Courrège [1962-1963, p. 7-18, Th. 4]). It is this property that is used below.

There is a sequence of stochastic step functions $f(n)$ such that $E_P[\int_0^\infty [f_t - f_t(n)]^2\,dA_t] \le 1/2^n$ (Courrège [1962-1963, p. 133, Proposition IV.5.2]). So, we get that

$\displaystyle \sum_{n=1}^\infty P\Bigl\{\int_0^\infty [f_t - f_t(n)]^2\,dA_t > 2^{-n/2}\Bigr\} < \infty$

and consequently that

(50)  $\displaystyle \lim_n \int_0^\infty [f_t(n) - f_t]^2\,dA_t = 0$ almost surely $P$.

But, since $P$ and $Q$ are equivalent, the same statement is true with respect to $Q$ also. Now, for the integral $\int f\,dM$ to be defined in probability, it is sufficient that $\int_0^\infty f_t^2\,dA_t < \infty$ almost surely, and then the following inequality holds (Courrège [1962-1963, p. 7-18, Th. 4]):

(51)  $\displaystyle E\Bigl[\frac{|\int f\,dM|}{1 + |\int f\,dM|}\Bigr] \le 3\,E\Bigl[\frac{(\int_0^\infty f_t^2\,dA_t)^{1/2}}{1 + (\int_0^\infty f_t^2\,dA_t)^{1/2}}\Bigr].$

So, since convergence in the metric $E[|X|/(1+|X|)]$ is equivalent to convergence in probability, applying (50) and the dominated convergence theorem to the right hand side of (51) with $f$ replaced by $f - f(n)$, we get

(52)  $\displaystyle \lim_n \int_0^t f_s(n)\,dM_s = \int_0^t f_s\,dM_s$ in $P$-probability.

Repeating the operation with respect to $Q$ and replacing $M$ in (51) by $\tilde M$, we also get

(53)  $\displaystyle \lim_n \int_0^t f_s(n)\,d\tilde M_s = \int_0^t f_s\,d\tilde M_s$ in $Q$-probability.

For some subsequence, (52) and (53) will hold almost surely. Now, since

$\displaystyle \int_0^t (f_s - f_s(n))\phi_s\,dA_s \le \Bigl\{\int_0^t (f_s - f_s(n))^2\,dA_s\Bigr\}^{1/2}\Bigl\{\int_0^t \phi_s^2\,dA_s\Bigr\}^{1/2},$

approximation by simple functions of the integral with respect to $A$ follows from (50). But for simple functions one has

$\displaystyle \int f\,dM = \sum_k f_{T_k}\,(M_{T_{k+1}} - M_{T_k}),$  so that  $\displaystyle \int f\,dM - \int f\,d\tilde M = \int f\phi\,dA,$

and the proposition follows by passing to the limit along the subsequence above. $\Box$
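The identity of Proposition 3.2.4 is, for simple integrands, a purely algebraic statement, and this can be seen on discretized paths. The following sketch (an illustration with assumed choices of $f$ and $\phi$, in the special case $M = W$, $A_t = t$; it is not the general proof) checks the discrete analogue of $\int f\,dM = \int f\,d\tilde M + \int f\phi\,dA$.

```python
import numpy as np

# Discretized pathwise check (illustrative assumptions: M = W, A_t = t, chosen f, phi):
# the Riemann-Ito sums satisfy  sum f dM = sum f dMtilde + sum f phi dt,
# mirroring  int f dM = int f dMtilde + int f phi dA  of the proposition.

rng = np.random.default_rng(3)
n_steps, T = 100_000, 1.0
dt = T / n_steps
t = dt * np.arange(n_steps)

phi = np.sin(t) + 0.5       # bounded "translation" integrand
f = np.cos(3.0 * t)         # predictable (left-point) integrand
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)

dM = dW
dMtilde = dW - phi * dt     # increments of the translated process

lhs = np.sum(f * dM)
rhs = np.sum(f * dMtilde) + np.sum(f * phi * dt)
print("sum f dM                 =", lhs)
print("sum f dMtilde + f phi dt =", rhs)   # equal up to floating-point error
```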
Remark 3.2.5.
The proof of Theorem 3.2.1 is now immediate: the first assertion forms the content of Propositions 3.2.1, 3.2.2 and 3.2.3, and the second the content of Proposition 3.2.4.
Remark 3.2.6.
As pointed out by Girsanov in his paper [1960, p. 296, Remark], all hypotheses made, with the exception of the equality $E_P[\exp(Z_0^\infty)] = 1$ and condition (14), are used to define the different mathematical objects. So it is useful to know conditions insuring the validity of the above equality. We already know that if $\phi$ is uniformly bounded then the desired equality holds. Proposition 3.2.5 gives a slightly more general condition. The idea of the proof is essentially contained in the proof of Theorem 2 of Hitsuda [1968, p. 308].
Proposition 3.2.5.
Suppose one assumes the hypotheses stated in Theorem 3.2.1, but replaces the assumption $E_P[\exp(Z_0^\infty)] = 1$ by the following: let $T_n$ be the increasing sequence of stopping times for which $\exp(Z_0^{t\wedge T_n})$ is a uniformly integrable martingale; define $dQ_n = \exp(Z_0^{T_n})\,dP$ and then assume that $E_{Q_n}[\int_0^\infty \phi_t^2\,dA_t] \le c < \infty$ for all $n$. Then $E_P[\exp(Z_0^\infty)] = 1$. $\Box$
Remark 3.2.7.
The proposition shows that if $\int_0^\infty \phi_t^2\,dA_t \le c_0$ almost surely, then $E_P[\exp(Z_0^\infty)] = 1$. In particular, this is the case when $\phi$ is uniformly bounded. Moreover, applying Fatou's lemma, we have that, if the stated condition holds, then $E_Q[\int_0^\infty \phi_t^2\,dA_t] < \infty$, that is, $\int_0^t \phi_s\,d\tilde M_s$ is a martingale with respect to $Q$.
Proof of Proposition 3.2.5.
Let $D_t = \exp(Z_0^t)$. $D_t$ is a local martingale with continuous paths (Property 3.2.2) and $D_{t\wedge T_n}$ is a uniformly integrable martingale for each $n$, $T_n$ being a sequence of stopping times tending to infinity. From (16) it follows that $E_P[D_{t\wedge T_n}] = 1$. The latter, plus the uniform integrability property mentioned, imply that $\lim_{t\to\infty} E_P[D_{t\wedge T_n}] = E_P[D_{T_n}] = 1$. So it will be sufficient to prove that $\{D_{T_n}\}$ is a uniformly integrable family, because then, $D_{T_n}$ being positive, $E_P[D_\infty] = \lim_n E_P[D_{T_n}] = 1$.

Let

$I_n(t) = 1$ if $t \le T_n$,  $I_n(t) = 0$ if $t > T_n$.

Then $\phi_t(n) = \phi_t I_n(t)$ and, with the notation of the theorem,

$\tilde M_t(n) = M_t - \int_0^t \phi_s(n)\,dA_s$,  $D_t(n) = \exp\Bigl(\int_0^t \phi_s(n)\,dM_s - \tfrac{1}{2}\int_0^t \phi_s(n)^2\,dA_s\Bigr)$,

and $E_P[D_\infty(n)] = 1$. So, by what precedes, $\tilde M_t(n)$ is a continuous, $L_2$-bounded martingale, whose associated natural increasing process is $A_t$, with respect to the measure $dQ_n = D_\infty(n)\,dP$. So, by Proposition 3.2.4,

$\displaystyle \log D_t(n) = \int_0^t \phi_s(n)\,dM_s - \tfrac{1}{2}\int_0^t \phi_s(n)^2\,dA_s = \int_0^t \phi_s(n)\,d\tilde M_s(n) + \int_0^t \phi_s(n)^2\,dA_s - \tfrac{1}{2}\int_0^t \phi_s(n)^2\,dA_s = \int_0^t \phi_s(n)\,d\tilde M_s(n) + \tfrac{1}{2}\int_0^t \phi_s(n)^2\,dA_s.$

Thus

$\displaystyle E_{Q_n}[\log D_\infty(n)] = E_{Q_n}\Bigl[\int_0^\infty \phi_s(n)\,d\tilde M_s(n)\Bigr] + \tfrac{1}{2}\,E_{Q_n}\Bigl[\int_0^\infty \phi_s(n)^2\,dA_s\Bigr].$

Since $E_{Q_n}[\int_0^\infty \phi_s(n)^2\,dA_s] \le c < \infty$, $\int_0^t \phi_s(n)\,d\tilde M_s(n)$ is a martingale with respect to $Q_n$ and thus its expectation is zero. So

$E_P[D_\infty(n)\log D_\infty(n)] = E_{Q_n}[\log D_\infty(n)] \le \tfrac{c}{2} < \infty$  for all $n$,

which is a sufficient condition for $\{D_\infty(n)\} = \{D_{T_n}\}$ to be uniformly integrable.
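For completeness, the standard estimate behind this last step can be recalled; it is the usual de la Vallée Poussin-type argument and is added here only as a reminder, not as part of the original text. Since $x\log x \ge -e^{-1}$ and, on $[D_\infty(n) > \lambda]$ with $\lambda > 1$, $D_\infty(n) \le D_\infty(n)\log D_\infty(n)/\log\lambda$, one has

$\displaystyle \sup_n E_P\bigl[D_\infty(n)\,1_{[D_\infty(n) > \lambda]}\bigr] \;\le\; \frac{c/2 + e^{-1}}{\log\lambda} \;\longrightarrow\; 0 \qquad (\lambda\to\infty),$

so the family $\{D_\infty(n)\}$ is indeed uniformly integrable.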
Remark 3.2.8.
If, for some $T$, $A_T \le k$ almost surely (for example $A_t = t$), then the theorem is valid on $[0,T]$, which is the situation occurring in Girsanov's theorem.
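For orientation, the situation just mentioned can be written out in its familiar form; this restatement of the classical result is added for the reader, under the identifications $M = W$, $A_t = t$ of the remark. If

$\displaystyle E_P\Bigl[\exp\Bigl(\int_0^T \phi_s\,dW_s - \tfrac{1}{2}\int_0^T \phi_s^2\,ds\Bigr)\Bigr] = 1,$

then $\tilde W_t = W_t - \int_0^t \phi_s\,ds$, $0 \le t \le T$, is a Wiener process with respect to $dQ = \exp\bigl(\int_0^T \phi_s\,dW_s - \tfrac{1}{2}\int_0^T \phi_s^2\,ds\bigr)\,dP$.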
3.3.
Theorem 3.2.1 and the Detection Problem.
As mentioned in the introduction, Girsanov's theorem can be used to study the detection problem for which the noise is a Wiener process and the signal a function of the form $\int_0^t f_s\,ds$, where $f$ is an adapted measurable process such that $\int_0^T f_t^2\,dt \le K < \infty$ almost surely $P$. If we neglect technical details, this approach can be described as follows. One can define a probability $Q$ by the relation

$\displaystyle dQ = \exp\Bigl[\int_0^T (-f_s)\,dW_s - \tfrac{1}{2}\int_0^T f_s^2\,ds\Bigr]\,dP.$

With respect to $Q$,

$\displaystyle Y_t = \int_0^t f_s\,ds + W_t$

is a Wiener process (this is Girsanov's theorem). But since $P$ and $Q$ are equivalent, the measures $P_Y$ and $Q_Y$ on $C[0,T]$, defined by the relations $P_Y\{B\} = P\{Y\in B\}$ and $Q_Y\{B\} = Q\{Y\in B\}$, are equivalent. Now, since there is a unique Wiener measure on $C[0,T]$, $Q_Y(A) = P_W(A)$, where $P_W(A) = P\{W\in A\}$. Consequently, the considered detection problem is non-singular: $P_Y \sim P_W$. An advantage of this approach is that it yields fairly easily the Radon-Nikodym derivative.
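A numerical illustration of this reweighting is added here as a sketch; the constant signal $f$, the single terminal functional and all numerical choices below are assumptions for the illustration, not part of the thesis.

```python
import numpy as np

# Monte Carlo sketch (illustrative only): with dQ = exp(-f W_T - 0.5 f^2 T) dP and a
# constant signal f, the variable Y_T = f T + W_T has, under Q, the N(0, T) law of a
# Wiener process at time T.  We estimate two Q-moments by reweighting P-samples.

rng = np.random.default_rng(4)
n_paths, T, f = 400_000, 1.0, 1.5

W_T = rng.normal(0.0, np.sqrt(T), size=n_paths)
Y_T = f * T + W_T
dQdP = np.exp(-f * W_T - 0.5 * f**2 * T)   # Girsanov density for constant f

print("E_Q[Y_T]   =", np.average(Y_T, weights=dQdP))      # should be close to 0
print("E_Q[Y_T^2] =", np.average(Y_T**2, weights=dQdP))   # should be close to T = 1
```

The same reweighting, applied to functionals of the whole path, is what underlies the likelihood-ratio computation for the detection problem.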
We extended Girsanov's theorem in order to tackle the detection problem for which the "noise" is a continuous, $L_2$-bounded martingale $M$ and the "signal" a function of the form $\int_0^t f_s\,dA_s$, where $f$ is predictable and satisfies $E[\int_0^\infty f_s^2\,dA_s] < \infty$. The approach described for the Wiener process seems a reasonable one to take in the present set-up, since very little is known about the distribution properties of the signal and the noise. Unfortunately, the extension of Girsanov's theorem does not provide sufficient information, since we do not know under which conditions two continuous, $L_2$-bounded martingales with the same associated natural increasing process induce, on the space of continuous functions, measures that are either equal or equivalent. Knowledge of the latter, however, is necessary to obtain the non-singularity of the problem considered, as can be seen from the sketch given above for the Wiener case.
We expect that martingales (continuous, $L_2$-bounded) with the same associated natural increasing process will induce, at least in certain cases, equivalent measures on the space of continuous functions, for the following reasons. First, those martingales can be obtained by sampling Brownian motions (Kunita-Watanabe [1967, p. 218, Th. 3.1]); that is, if $M$ and $M'$ are the martingales and $A$ the associated natural increasing process, there are Wiener processes $W$ and $W'$ such that $M_t = W_{A_t}$ and $M'_t = W'_{A_t}$. Further, the class of martingales for which equivalence holds is non-void, since it contains the Wiener process. In the following lemma, we show that equivalence holds whenever $A$ is non-random and strictly increasing.
Lemma 3.3.1. Suppose that $(M_t, A_t, A_t, P)$ and $(N_t, A_t, A_t, Q)$ are continuous square integrable martingales on $(\Omega, A)$ for which $A_t$ is non-random and strictly increasing ($t\in[0,T]$). Define $P_M$ and $Q_N$ on $C[0,T]$ as follows: $P_M\{B\} = P\{M\in B\}$, $Q_N\{B\} = Q\{N\in B\}$. Then $P_M = Q_N$.

Proof: Let

$a_t = \inf\{s$ such that $A_s > t\}$ if the set $\{\cdot\}$ is not empty, $a_t = \infty$ if the set $\{\cdot\}$ is empty.

Then, by the optional sampling theorem, $M_{a_t}$ and $N_{a_t}$ are Brownian motions and thus

$P\{M_a \in S\} = Q\{N_a \in S\}.$

Consider now the map $\Lambda: C[0,T] \to C[0,T]$ defined by

$(\Lambda f)(t) = f(A_t) = \pi_{A_t} f,$

where $\pi_x$ denotes the evaluation map at $x$. $\Lambda$ is a measurable map: indeed, the sets of the form $\{f: \pi_x f \in B\}$, $B$ a Borel set of $\mathbb{R}$, generate the Borel sets $B\{C[0,T]\}$, and thus it will be enough to show that $\Lambda^{-1}\{f \in C[0,T]: \pi_x f \in B\}$ belongs to $B\{C[0,T]\}$. But

$\Lambda^{-1}\{f: \pi_x f \in B\} = \{g: \pi_x \Lambda g \in B\} = \{g: (\Lambda g)(x) \in B\} = \{g: \pi_{A_x} g \in B\},$

and the latter set is in $B\{C[0,T]\}$. Also

$P_{M_a}\circ\Lambda^{-1}\{f: \pi_t f \in B\} = P_{M_a}\{g: \pi_{A_t} g \in B\} = P\{M_{a_{A_t}} \in B\} = P\{M_t \in B\} = P_M\{f: \pi_t f \in B\}.$

Since the cylinder sets are a determining class, it follows that $P_{M_a}\circ\Lambda^{-1} = P_M$. Similarly $Q_{N_a}\circ\Lambda^{-1} = Q_N$. Thus, since $P_{M_a} = Q_{N_a}$, $P_M = Q_N$. $\Box$
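The mechanism used in the proof can also be seen numerically; the sketch below is an added illustration with one particular deterministic, strictly increasing $A$ (an assumption made only for this example): it builds two time-changed processes $W_{A_t}$ and $W'_{A_t}$ from independent Wiener processes and compares their marginal variances, which both equal $A_t$.

```python
import numpy as np

# Added illustration of the time-change mechanism: with a deterministic, strictly
# increasing A (chosen below), the processes M_t = W_{A_t} and N_t = W'_{A_t} built
# from independent Wiener processes have the same finite-dimensional laws;
# in particular Var M_t = Var N_t = A_t.

rng = np.random.default_rng(5)
n_paths, n_steps, T = 20_000, 500, 1.0
dt = T / n_steps
t = dt * np.arange(1, n_steps + 1)
A = t + 0.5 * np.sin(2 * np.pi * t) / (2 * np.pi)   # A' = 1 + 0.5 cos(2 pi t) > 0
dA = np.diff(np.concatenate(([0.0], A)))

def time_changed_paths():
    # Gaussian increments with variance dA, i.e. W sampled along the clock A
    return np.cumsum(rng.normal(0.0, np.sqrt(dA), size=(n_paths, n_steps)), axis=1)

M, N = time_changed_paths(), time_changed_paths()
k = n_steps // 2
print("A_t          =", round(float(A[k]), 4))
print("Var M_t (MC) =", round(float(M[:, k].var()), 4))
print("Var N_t (MC) =", round(float(N[:, k].var()), 4))
```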
With the above lemma, it is possible to show that the resulting detection problem is non-singular when $P$ and $Q$ are equivalent and $Q$ is determined from $P$ by $N$, with $N$ of the form considered in Theorem 3.2.1. When $A$ is non-random, however, the martingale corresponds only to a change of scale for a Brownian motion, and thus little has been accomplished. Thus the crucial problem of the equivalence of the induced measures remains to be solved, as does the problem of defining the likelihood ratio.
3.4. A Review of Chapter III.
In this chapter, we have shown that, for an $L_2$-bounded continuous martingale $M$ with associated uniformly bounded natural increasing process $A$, the translation $\tilde M_t = M_t - \int_0^t \phi_s\,dA_s$ is again an $L_2$-bounded continuous martingale, for appropriate $\phi$, but with respect to a measure $Q$ equivalent to $P$, $P$ being the original probability measure. This is the content of Propositions 3.2.1 and 3.2.2. We also proved that $A$ is a natural process with respect to $Q$ and that, assuming

$P\bigl\{\sup_{t\in\mathbb{R}_+} |\tilde M_t^2 - A_t|\exp\bigl(Z_0^t(\phi)\bigr) \ge n\bigr\}$

goes to zero fast enough as $n$ tends to infinity, $A$ is the natural increasing process associated with $\tilde M$ (here $Z_0^t(\phi) = \int_0^t \phi_s\,dM_s - \tfrac{1}{2}\int_0^t \phi_s^2\,dA_s$). This is Proposition 3.2.3. The extension of Girsanov's result is a consequence of these three propositions. We have also included a discussion of the problems encountered in determining the equivalence of the induced measures.
BIBLIOGRAPHY
G. Bachman and L. Narici (1966). Functional Analysis. Academic Press, New York.

C. R. Baker (1970a). On equivalence of probability measures. Institute of Statistics Mimeo Series No. 701, Department of Statistics, U.N.C., Chapel Hill, N.C. 27514.

C. R. Baker (1970b). On covariance operators. Institute of Statistics Mimeo Series No. 712, Department of Statistics, U.N.C., Chapel Hill, N.C. 27514.

C. R. Baker (1971a). Detection and information theory. Third Southeastern Symposium on System Theory, Georgia Institute of Technology, Atlanta, Georgia.

C. R. Baker (1971b). Zero-one laws for Gaussian measures on Banach space. Institute of Statistics Mimeo Series No. 785, Department of Statistics, U.N.C., Chapel Hill, N.C. 27514.

C. R. Baker (1972). Lecture notes for STAT 242. Department of Statistics, U.N.C., Chapel Hill, N.C. 27514.

A. V. Balakrishnan (1971). Introduction to optimization theory in Hilbert space. Lecture Notes in Operations Research and Mathematical Systems No. 42, Springer Verlag, Heidelberg.

Ph. Courrège (1962-1963). Intégrales stochastiques et martingales de carré intégrable. Séminaire Brelot, Choquet, Deny (Théorie du Potentiel), 7e année, exposé 7.

H. Cramér and M. R. Leadbetter (1967). Stationary and Related Stochastic Processes. Wiley, New York.

J. Dieudonné (1970). Foundations of Modern Analysis. Academic Press, New York.

C. Doléans-Dade (1970). Diffusions à coefficients continus: le problème. Séminaire des probabilités IV. Lecture Notes in Mathematics No. 124, Springer Verlag, Heidelberg.

C. Doléans-Dade and P. A. Meyer (1970). Intégrales stochastiques par rapport aux martingales locales. Séminaire des probabilités IV. Lecture Notes in Mathematics No. 124, Springer Verlag, Heidelberg.

E. B. Dynkin (1965). Markov Processes. Academic Press, New York.

J. Feldman (1958). Equivalence and perpendicularity of Gaussian processes. Pacific Journal of Mathematics, 8, 699-708.

J. Feldman (1971). Decomposable processes and continuous products of probability spaces. Journal of Functional Analysis, 8, 1-51.

D. L. Fisk (1965). Quasi-martingales. Transactions of the American Mathematical Society, 120, 369-389.

I. M. Gelfand and N. Y. Vilenkin (1964). Applications of Harmonic Analysis. Academic Press, New York.

I. I. Gikhman and A. V. Skorokhod (1966). On the densities of probability measures in function space. Russian Mathematical Surveys, 21, 83-156.

I. W. Girsanov (1960). On transforming a certain class of stochastic processes by absolutely continuous substitution of measures. Theory of Probability and its Applications, 5, 285-301. (English translation.)

L. Gross (1970). Abstract Wiener measure and infinite dimensional potential theory. Lecture Notes in Mathematics No. 140, Springer Verlag, Heidelberg.

J. Hájek (1963). On linear statistical problems in stochastic processes. Czechoslovak Journal of Mathematics, 12, 404-443.

P. R. Halmos (1950). Measure Theory. Van Nostrand, New Jersey.

P. R. Halmos (1967). A Hilbert Space Problem Book. Van Nostrand, New Jersey.

C. Helmberg (1969). Introduction to Spectral Theory in Hilbert Space. North-Holland, Amsterdam.

T. Hida (1970). Stationary Stochastic Processes. Princeton University Press, New Jersey.

M. Hitsuda (1968). Representation of Gaussian processes equivalent to the Wiener process. Osaka Journal of Mathematics, 5, 299-312.

K. Ito (1951). On stochastic differential equations. Memoirs of the American Mathematical Society No. 4, 1-51.

K. Ito (1970). The topological support of Gauss measure on Hilbert space. Nagoya Journal of Mathematics, 38, 181-183.

T. T. Kadota and L. A. Shepp (1970). Conditions for absolute continuity between a certain pair of probability measures. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 16, 250-260.

T. Kailath and M. Zakai (1971). Absolute continuity and Radon-Nikodym derivatives for certain measures relative to Wiener measure. Annals of Mathematical Statistics, 42, 130-140.

G. Kallianpur (1970). Zero-one laws for Gaussian processes. Transactions of the American Mathematical Society, 149, 575-587.

G. Kallianpur and H. Oodaira (1963). The equivalence and singularity of Gaussian measures. Chapter 19, Proceedings of the Symposium on Time-Series Analysis, Ed. M. Rosenblatt, Wiley, New York.

H. Kunita and S. Watanabe (1967). On square integrable martingales. Nagoya Journal of Mathematics, 30, 209-245.

P. A. Meyer (1966). Probability and Potentials. Blaisdell, Massachusetts.

P. A. Meyer (1967). Intégrales stochastiques. Séminaire des probabilités I. Lecture Notes in Mathematics No. 39, Springer Verlag, Heidelberg.

E. Mourier (1953). Eléments aléatoires dans un espace de Banach. Annales de l'Institut Henri Poincaré, 13, 161-224.

J. Neveu (1964). Deux remarques sur la théorie des martingales. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 3, 122-127.

J. Neveu (1965). Mathematical Foundations of the Calculus of Probability. Holden-Day, San Francisco.

K. R. Parthasarathy (1967). Probability Measures on Metric Spaces. Academic Press, New York.

W. L. Root (1963). Singular Gaussian measures in detection theory. Chapter 20, Proceedings of the Symposium on Time-Series Analysis, Ed. M. Rosenblatt, Wiley, New York.

H. L. Royden (1968). Real Analysis. MacMillan, London.

H. Sato (1969). Gaussian measure on Banach space and abstract Wiener measure. Nagoya Journal of Mathematics, 36, 65-81.