NONLINEAR ANALYSIS OF SPHERICALLY INVARIANT
PROCESSES AND ITS RAMIFICATIONS*

by

Steel T. Huang†

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1028
September, 1975

* Ph.D. dissertation under the direction of Stamatis Cambanis.
† This research was supported by the Air Force Office of Scientific
Research under Grant AFOSR-75-2796.
HUANG, STEEL T. Nonlinear Analysis of Spherically Invariant Processes
and Its Ramifications. (Under the direction of STAMATIS CAMBANIS.)

The objective of this work is to study the structure of the nonlinear
space of spherically invariant processes (which constitute a natural
generalization of Gaussian processes) and to define multiple Wiener
integrals for spherically invariant processes and hence for Gaussian
processes, thus generalizing the notion originally introduced for a
Wiener process. We obtain an orthogonal decomposition of the nonlinear
space and an integral representation for its L_2-functionals, which is
expressed in terms of multiple Wiener integrals. These results are
applied to nonlinear estimation and prediction theory, the nonlinear
noise theory, and the equivalence of spherically invariant processes.
ACKNOWLEDGEMENTS

I would like to express my gratitude to my advisor Professor S.
Cambanis for his constant stream of ideas and unfailing patience. He has
not only given me helpful advice in my academic work, but also shown
great concern for me personally.

I am grateful to Professors G. Simons and W. Hoeffding for intro-
ducing me to the theory of Probability and Statistics, and to Professors
M. R. Leadbetter and C. R. Baker for leading me into the field of
Stochastic Processes.

Above all I wish to acknowledge my indebtedness to my parents and
my wife. It is their love, encouragement and understanding that made
all this possible.
CONTENTS

Acknowledgements ................................................... iv

I.    INTRODUCTION AND SUMMARY
      1. Introduction ............................................... 1
      2. Summary .................................................... 2

II.   THE HILBERT SPACES Λ_2(R) AND Λ^2(R)
      1. The Hilbert Spaces Λ_2(R) and Λ^2(R) ...................... 5
      2. Tensor Products of Λ_2(R) and Λ^2(R) ..................... 17
      3. Fourier Transform on Λ^2(⊗^n R) .......................... 21

III.  NONLINEAR SPACES AND MULTIPLE WIENER INTEGRALS FOR
      SPHERICALLY INVARIANT PROCESSES
      1. Spherically Invariant Processes (SIP's) .................. 24
      2. Nonlinear Spaces ......................................... 30
      3. Multiple Wiener Integrals ................................ 43

IV.   EQUIVALENCE AND A 0-1 LAW FOR SPHERICALLY INVARIANT PROCESSES
      1. Equivalence of Spherically Invariant Processes ........... 53
      2. A 0-1 Law ................................................ 67

V.    NONLINEAR ESTIMATION AND PREDICTION
      1. Nonlinear Estimation ..................................... 73
      2. Nonlinear Prediction ..................................... 81

VI.   NONLINEAR NOISE ............................................. 87

VII.  APPENDIX
      1. Tensor Product of Hilbert Spaces ......................... 97
      2. Hermite Polynomials ..................................... 100

BIBLIOGRAPHY ..................................................... 102
I. INTRODUCTION AND SUMMARY

1. Introduction

By a stochastic process X = (X_t, t∈T) we mean a family of random
variables (r.v.'s) X_t, t∈T, defined on a probability space (Ω,B,P).
T is an arbitrary index set but very often it is an interval on the real
line; B is usually taken to be B(X), the σ-field generated by the pro-
cess X, or B̄(X), the completion of B(X) with respect to (w.r.t.) the
measure P. For each ω∈Ω, X(ω) represents the corresponding sample
path of the process, which is an element of R^T, the space of all
functions defined on T. X is called a coordinate process if Ω = R^T
and X_t(ω) = ω(t), where B is the σ-field generated by cylinder sets of
R^T. X is called a second order process if EX_t² < ∞ for all t∈T.
Now consider a second order process X with zero mean. There are
two Hilbert spaces closely related to such a process. The first one is
the nonlinear space of X, L_2(X) = L_2(Ω,B(X),P) (consisting of all
r.v.'s on (Ω,B,P) = (Ω,B(X),P) with finite second moment) with inner
product <ξ,η> = Eξη. Elements of L_2(X) are called (nonlinear)
L_2-functionals of X. (Note that every B(X)-measurable function θ is
of the form θ(ω) = φ(X(ω)) where φ is a B(R^T)-measurable function.)
The second Hilbert space is the linear space of X, H(X), the subspace
of L_2(X) spanned by X_t, t∈T. Elements of H(X) are called linear
L_2-functionals of X.

It is well known that a linear problem on the process X can be cast
as an optimization or a representation problem on the Hilbert space
H(X), and a nonlinear problem on the process X can be cast as an
optimization or a representation problem on the Hilbert space L_2(X).
The linear problems on second order processes have been studied
extensively over the past two decades and a satisfactory theory has been
developed, especially for the stationary case (see e.g. Doob [1953],
Gikhman and Skorokhod [1969]). The basic approach is to establish an
isomorphism between H(X) and a function Hilbert space (e.g. L_2-spaces,
reproducing kernel Hilbert spaces), and then to transform the original
problem to an optimization or a representation problem on the function
space, and solve it. This approach, usually called the Hilbert space
method, has been fruitful for the linear theory. It is only natural to
ask whether the same approach will work for nonlinear problems. The
analogy does exist. However it requires knowledge of the structure of
the nonlinear space L_2(X). It is clear that the structure of the
linear space H(X) depends on the process X only through its covariance
function, while the structure of the nonlinear space L_2(X) is much more
complex and depends on the process X through all its finite dimensional
distributions.

The most useful notion in the study of the nonlinear space of a
Wiener process is the multiple Wiener integral. This notion was first
introduced by Wiener [1938], who termed it "Polynomial Chaos", and was
redefined in a somewhat deeper way by Ito [1951]. Ito showed that his
multiple integrals of different degree have the important property of
being mutually orthogonal and also presented their connection with the
celebrated Fourier-Hermite expansion of L_2-functionals of Cameron and
Martin [1947]. Subsequently, Ito [1956] extended to general processes
with stationary independent increments the Wiener-Ito expansion of
L_2-functionals in terms of multiple Wiener integrals.

In his important work on nonlinear problems Wiener [1958]
reinterpreted the multiple Wiener integrals for a Wiener process in an
extremely simple and intuitive way and made some interesting
applications. Neveu [1968] and Kallianpur [1970] studied the connection
between the nonlinear space of a Gaussian process and the tensor
products of its linear space, which sheds new light and gives more
insight on the structure of the nonlinear space.
2. Summary

The objective of this work is to study the structure of the nonlinear
space of spherically invariant processes (which constitute a natural
generalization of Gaussian processes) and to define multiple Wiener
integrals for spherically invariant processes and hence for general
Gaussian processes, thus generalizing the notion originally introduced
for a Wiener process. These results are applied to the nonlinear
estimation and prediction theory, the nonlinear noise theory, and the
equivalence of spherically invariant measures.

In Chapter II we develop the structure of the Λ_2- and Λ^2-spaces,
which are our choice of function Hilbert spaces in the Hilbert space
method approach to the linear and nonlinear problems.

In Chapter III we obtain an orthogonal decomposition of the
nonlinear space of a spherically invariant process and an integral
representation for its L_2-functionals. The integral representation is
expressed in terms of multiple Wiener integrals. The general properties
of spherically invariant processes are also studied.

In Chapter IV we study the equivalence of spherically invariant
processes and obtain an explicit expression for the Radon-Nikodym
derivative; we also derive a 0-1 law for spherically invariant processes
and give some applications.

In Chapter V we solve the general nonlinear estimation problem for
spherically invariant processes, and hence for Gaussian processes. As
an application we derive a lower bound on the mean square prediction
error for a class of nonlinear prediction problems.

In Chapter VI we introduce a new class of noises and show that they
possess the same important properties as the white noise.

In the Appendix we briefly recapitulate some results on tensor
products of Hilbert spaces and on Hermite polynomials which are used
extensively throughout the whole work.
II. THE HILBERT SPACES Λ_2(R) AND Λ^2(R)

Throughout this chapter T will be an interval, closed or open,
bounded or unbounded, and X = (X_t, t∈T) a second order process with
zero mean and covariance function R(t,s). Integrals over T will be
denoted by the integral sign with no subscript, and 1_E will denote the
characteristic function of the set E.

It is shown in Loève [1955, p. 472] that the following two integrals

    I(f) = R-∫ f(t) dX_t ,
    J(f) = R-∫ f(t) X_t dt

can be defined as the mean square limits of the corresponding sequences
of approximating Riemann sums if and only if the following double
Riemann integrals exist

    R-∫∫ f(t)f(s) d²R(t,s) ,
    R-∫∫ f(t)f(s) R(t,s) dt ds ,

and then I(f) and J(f) are random variables with mean zero and
variances the corresponding double Riemann integrals.
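The mean-zero and variance statements lend themselves to a quick numerical sanity check. The sketch below is not from the dissertation: it assumes X is a Wiener process on T = [0,1] (so the double Riemann integral of f(t)f(s) d²R(t,s) concentrates on the diagonal and equals ∫ f²(t) dt) and uses an arbitrary integrand f(t) = sin 2πt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate I(f) = ∫ f(t) dX_t by Riemann sums over many simulated
# Wiener paths; compare the empirical mean and variance with the
# theoretical values 0 and ∫∫ f(t)f(s) d²R(t,s) = ∫ f²(t) dt = 1/2.
n = 500                                    # grid cells on T = [0, 1]
t = np.linspace(0.0, 1.0, n + 1)
f = np.sin(2 * np.pi * t[:-1])             # f at the left endpoints

n_paths = 20000
dX = rng.normal(0.0, np.sqrt(1.0 / n), size=(n_paths, n))  # Wiener increments
I_f = dX @ f                               # one Riemann sum per path

emp_mean, emp_var = I_f.mean(), I_f.var()
theory_var = np.sum(f ** 2) / n            # discrete ∫ f²(t) dt = 1/2
print(emp_mean, emp_var, theory_var)
```

With 20000 paths the empirical variance typically agrees with 1/2 to about two decimal places.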
1. The Hilbert Spaces Λ_2(R) and Λ^2(R)

Consider the set S_I of all step functions

    f(t) = Σ_n c_n 1_{(a_n,b_n]}(t) ,    (a_n,b_n] ⊂ T ,

on T. S_I is clearly a linear space and for all f,g ∈ S_I we have

    E ∫ f(t) dX_t = 0 ,
    E ( ∫ f(t) dX_t ∫ g(t) dX_t ) = ∫∫ f(t)g(s) d²R(t,s) ,

where the double integral is defined in the obvious way. Two step func-
tions f,g will be considered identical if

    ∫∫ (f(t)-g(t))(f(s)-g(s)) d²R(t,s) = 0 .

If we define for f,g ∈ S_I

    <f,g> = ∫∫ f(t)g(s) d²R(t,s) ,

then (S_I, <·,·>) is an inner product space. Indeed <f,g> has the
ordinary bilinear and symmetric properties, <f,f> = E(∫ f dX)² ≥ 0, and
<f,f> = 0 only when f is the zero element of S_I according to the
convention introduced above.

Now let Λ_2(R) be the completion of S_I, so that it is a Hilbert
space with inner product denoted again by <·,·>. A typical element in
Λ_2(R) is a Cauchy sequence of step functions. However, we will find it
convenient to treat elements in Λ_2(R) as "formal" functions in t∈T
and to write ∫∫ f(t)g(s) d²R(t,s) for the inner product <f,g> (see
Theorem 1 for a partial justification).
Notice that for f ∈ S_I the integral ∫ f(t) dX_t depends on X only
through its increments. Thus we may suppose without loss of generality
that there is a point t_0 ∈ T such that X_{t_0} = 0 a.e.

Under this assumption we can establish an isomorphism between
Λ_2(R) and H(X) as follows. The map

    S_I → H(X):  f ↦ ∫ f dX

preserves inner products and hence it can be extended to an isomorphism
on Λ_2(R) to a closed subspace of H(X). But the set {1_t, t∈T}, where
1_t = 1_{(t_0,t]} for t ≥ t_0 and 1_t = -1_{(t,t_0]} for t < t_0,
generates H(X), and 1_t ∈ S_I. It follows that the isomorphism is onto
H(X), i.e.

    Λ_2(R) ≅ H(X) .

We denote this isomorphism by I and we define the integral of
f ∈ Λ_2(R) with respect to X (which we write as ∫ f(t) dX_t, following
our convention to view elements of Λ_2(R) as formal functions) by

    ∫ f(t) dX_t = I(f) .

The properties of this integral follow from those of I and are the
analogues of the properties of the integral when X has orthogonal
increments (see e.g. Doob [1953]). The integral is thus defined for
"functions" in Λ_2(R) besides the step functions, and it is of interest
to identify usual functions in Λ_2(R). Two such classes of functions
are identified in the following.
Under the additional assumption that R(t,s) is of bounded varia-
tion on every bounded subset of T×T, Cramér [1951] defined Λ_2(R) as
the completion (with respect to the same inner product) of the set S*_I
of all functions f whose double Riemann integral

    R-∫∫ f(t)f(s) d²R(t,s)

exists. However, Cramér's definition is not appropriate for the general
case (where R is not necessarily of bounded variation on bounded sets)
since then 1_t may not be in Λ_2(R) and thus Λ_2(R) may not be isomor-
phic to H(X). In this sense our definition of Λ_2(R) is the appro-
priate generalization of the definition given by Cramér.

That S*_I is always (even when R may not be of bounded variation)
a subspace of Λ_2(R), and that for f ∈ S*_I, I(f) = R-∫ f(t) dX_t,
follow immediately from the fact that for f ∈ S*_I the existence of
R-∫ f(t) dX_t is equivalent to the existence of lim ∫ f_n dX_t, where
the f_n ∈ S_I are the approximating Riemann sums for R-∫ f(t) dX_t.
It can also be shown that when R is of bounded variation on bounded
sets then the two definitions of Λ_2(R) coincide. We show instead the
following result, which is more useful for our purposes.
R(t,s) is said to be of bounded variation on [a,b] × [c,d] if for
all N,M and points a = t_0 < t_1 < ... < t_N = b,
c = s_0 < s_1 < ... < s_M = d the sum

    Σ_{n=1}^{N} Σ_{m=1}^{M} | R(t_n,s_m) - R(t_n,s_{m-1}) - R(t_{n-1},s_m) + R(t_{n-1},s_{m-1}) |

is bounded. Also R is said to be of bounded variation on every finite
domain of T×T if it is of bounded variation on every
[a,b] × [c,d] ⊂ T×T. Such an R determines uniquely a σ-finite signed
measure on the Borel subsets of T×T, denoted again by R, such that

    R((t,t'] × (s,s']) = R(t',s') - R(t',s) - R(t,s') + R(t,s) .

Let L_I be the set of all measurable functions f on T such that the
following Lebesgue integrals are finite

    ∫∫ |f(t)f(s)| d²|R|(t,s) < ∞ ,
    ∫∫ |f(t)| 1_{(a,b]}(s) d²|R|(t,s) < ∞

for all (a,b] ⊂ T, where |R| is the total variation measure of R. We
say that the function f in L_I represents an element in Λ_2(R) if
there is an f_1 ∈ Λ_2(R) such that for all g ∈ S_I

    <f_1,g> = ∫∫ f(t)g(s) d²R(t,s) .

Notice that if such an f_1 exists it is unique, since S_I is dense in
Λ_2(R). We will then denote f_1 by f and we will write f ∈ Λ_2(R).
With this convention we have the following result.
THEOREM 1. Let R(t,s) be of bounded variation on every finite
domain of T×T. Then L_I is a dense subset of Λ_2(R). Also if
f_1, f_2 ∈ L_I and ∫∫ |f_1(t)f_2(s)| d²|R|(t,s) < ∞, then

    <f_1,f_2> = ∫∫ f_1(t)f_2(s) d²R(t,s) .

PROOF. Let E be a bounded Borel subset of T. We will prove that
1_E ∈ Λ_2(R), i.e. there is an f ∈ Λ_2(R) such that for all g ∈ S_I,
<f,g> = ∫∫ 1_E(t)g(s) d²R(t,s). Let I be a finite interval containing
E (so that |R|(I×I) < ∞). We can always find I_n ⊂ I, n = 1,2,...,
with each I_n a finite union of half open intervals, such that

    |R|((I_n Δ E)×I) → 0 .

Since

    |R(I_n×I_m) - R(E×E)| ≤ |R|((I_n Δ E)×E) + |R|((I_m Δ E)×I) → 0

it follows that <1_{I_n},1_{I_m}> → R(E×E) and thus {1_{I_n}}_{n=1}^∞
is a Cauchy sequence in Λ_2(R). Define f ∈ Λ_2(R) by f = lim 1_{I_n}.

We first show that f does not depend on the approximating sequence
I_n, n = 1,2,.... Let I'_n, n = 1,2,..., be another such approximating
sequence and f' = lim 1_{I'_n}. Then for each interval J ⊂ I we have

    |<f-f', 1_J>| = lim |<1_{I_n} - 1_{I'_n}, 1_J>|
                  = lim |R(I_n×J) - R(I'_n×J)|
                  ≤ lim { |R|((I_n Δ E)×J) + |R|((I'_n Δ E)×J) }
                  ≤ lim { |R|((I_n Δ E)×I) + |R|((I'_n Δ E)×I) } = 0 .

If M_I is the closed subspace of Λ_2(R) generated by all 1_{(a,b]},
(a,b] ⊂ I, then it is clear from the above that f, f' ∈ M_I and
f - f' ⊥ M_I. It follows that f = f'.
We now show that f does not depend on the finite interval I con-
taining E. Let J be another finite interval containing E and let
J_n, n = 1,2,..., have the same properties as I_n, n = 1,2,....
Define I'_n = I_n ∩ J and J'_n = J_n ∩ I. Then I'_n, J'_n ⊂ I∩J and

    |R|((I'_n Δ E)×(I∩J)) → 0 ,    |R|((J'_n Δ E)×(I∩J)) → 0 ,

since (I'_n Δ E)×(I∩J) ⊂ (I_n Δ E)×I, (J'_n Δ E)×(I∩J) ⊂ (J_n Δ E)×J,
and |R| is a measure. From the result of the previous paragraph applied
to I∩J we have

    lim 1_{I_n} = lim 1_{I'_n} = lim 1_{J'_n} = lim 1_{J_n}

and thus f does not depend on I. Hence f is well-defined.

Now fix J = (a,b] ⊂ T and let I be a finite interval containing
E∪J and I_n, n = 1,2,..., as before, i.e. 1_{I_n} → f. Then

    |R(I_n×J) - R(E×J)| ≤ |R|((I_n Δ E)×I) → 0 .

Hence <f,1_J> = lim R(I_n×J) = R(E×J), and since J is arbitrary it
follows that for all g ∈ S_I

    <f,g> = ∫∫ 1_E(t)g(s) d²R(t,s) .

Thus we have shown that 1_E ∈ Λ_2(R).
Now if E and F are bounded Borel subsets of T, denoting by 1_E and
1_F also the corresponding elements in Λ_2(R), we have
<1_E,1_F> = R(E×F). Indeed if I is a finite interval containing E∪F
and 1_{I_n} → 1_E, 1_{J_m} → 1_F as before, we have (since R is
symmetric)

    |R(I_n×J_m) - R(E×F)| ≤ |R|(I_n×(J_m Δ F)) + |R|((I_n Δ E)×F)
                          ≤ |R|((J_m Δ F)×I) + |R|((I_n Δ E)×I) → 0

and thus <1_E,1_F> = lim R(I_n×J_m) = R(E×F).

It then follows that if φ is a simple function in L_I with bounded
support, then φ ∈ Λ_2(R), and if φ_1, φ_2 are two such functions then

    <φ_1,φ_2> = ∫∫ φ_1(t)φ_2(s) d²R(t,s) .
Now let f ∈ L_I. Then there exist simple functions φ_n,
n = 1,2,..., with bounded support such that |φ_n| ↑ |f| on T. It
follows from the Bounded Convergence Theorem that φ_n, n = 1,2,..., is
a Cauchy sequence in Λ_2(R); denote its limit by f_1. Then for all
g ∈ S_I

    <f_1,g> = lim <φ_n,g> = lim ∫∫ φ_n(t)g(s) d²R(t,s)
            = ∫∫ f(t)g(s) d²R(t,s) ,

again by the Bounded Convergence Theorem, since f ∈ L_I and g ∈ S_I
imply ∫∫ |f(t)g(s)| d²|R|(t,s) < ∞. Since the values of <f_1,g> for
g ∈ S_I determine f_1 uniquely, the last equality implies that f_1 is
uniquely determined by f, independently of the approximating sequence
φ_n. It follows that f ∈ Λ_2(R) and thus L_I ⊂ Λ_2(R). Since L_I
contains S_I, it is dense in Λ_2(R).

For the last statement of the theorem, with the obvious notation,
we have

    <f_1,f_2> = lim <φ_{1,n},φ_{2,n}>
              = lim ∫∫ φ_{1,n}(t)φ_{2,n}(s) d²R(t,s)
              = ∫∫ f_1(t)f_2(s) d²R(t,s) ,

where the additional assumption on f_1, f_2 makes the Bounded
Convergence Theorem applicable. Q.E.D.
Consider the set S_J of all functions f on T such that the Riemann
integral R-∫∫ f(t)f(s) R(t,s) dt ds exists and is finite. S_J is a
linear space. Two functions f and g in S_J will be considered
identical if

    R-∫∫ (f(t)-g(t))(f(s)-g(s)) R(t,s) dt ds = 0 .

For f,g ∈ S_J we define ∫ f(t) X_t dt = R-∫ f(t) X_t dt and then we
have

    E ( ∫ f(t) X_t dt ∫ g(t) X_t dt ) = R-∫∫ f(t)g(s) R(t,s) dt ds .

Define for f,g ∈ S_J

    <f,g> = R-∫∫ f(t)g(s) R(t,s) dt ds .

Then (S_J, <·,·>) becomes an inner product space. Λ^2(R) is defined to
be the completion of the inner product space S_J and so it is a Hilbert
space. Again a typical element in Λ^2(R) is a sequence of functions
convergent in norm. However, formally we shall treat elements in Λ^2(R)
as functions and write ∫∫ f(t)g(s) R(t,s) dt ds as the inner product
<f,g>.
In order to establish an isomorphism between H(X) and Λ^2(R) we
shall assume that X is mean square continuous, which is equivalent to
the continuity of the covariance function R(t,s). Consider the sequence
of functions n·1_{(τ-1/n, τ]}(t), where τ is an interior point of T.
It is easy to show that this sequence is a Cauchy sequence in Λ^2(R),
whose limit is denoted by δ_τ, and that

    X_τ = l.i.m. ∫ n·1_{(τ-1/n, τ]}(t) X_t dt .

Then, the map

    S_J → H(X):  f ↦ ∫ f(t) X_t dt

preserves the inner product and its range includes X_τ for all interior
points τ of T (a set which is linearly dense in H(X) by mean square
continuity). Hence it can be extended to an isomorphism of Λ^2(R) onto
H(X), i.e.

    Λ^2(R) ≅ H(X) .

The isomorphism is denoted by J and for f ∈ Λ^2(R) we define

    ∫ f(t) X_t dt = J(f) .
A useful connection between the integrals I and J and the spaces
Λ_2 and Λ^2 can be established as follows. Let

    Z_t = ∫_{t_0}^{t} X_u du ,

where t_0 is an arbitrary but fixed point in T. Then
J(1_{(t_0,t]}) = Z_t (where 1_{(t_0,t]} ∈ Λ^2(R) since R is
continuous), and hence H(Z) = H(X).

THEOREM 2. If X is mean square continuous then Λ^2(R) = Λ_2(Γ),
where Γ(t,s) = EZ_tZ_s, and for all f ∈ Λ^2(R) = Λ_2(Γ)

    ∫ f(t) X_t dt = ∫ f(t) dZ_t .

PROOF. We will prove first that the existence of

    R-∫∫_{T×T} f(t)f(s) R(t,s) dt ds

is equivalent to that of

    R-∫∫_{T×T} f(t)f(s) d²Γ(t,s) .

We may assume that T is a closed interval by the very definition of a
Riemann integral. We also assume that |f(t)| ≤ M for all t∈T;
otherwise neither Riemann integral will exist. Consider the typical
Riemann sums

    R_J = Σ_i Σ_j f(t_i) f(s_j) R(t_i,s_j) |A_i| |B_j| ,
    R_I = Σ_i Σ_j f(t_i) f(s_j) Γ(A_i×B_j) ,

where {A_i}, {B_j} are interval partitions of T, t_i ∈ A_i, s_j ∈ B_j,
and |A_i|, |B_j| denote the lengths of these intervals. By the uniform
continuity of R(t,s) we have that for every ε > 0,
|R_I - R_J| ≤ M²|T|²ε as max(|A_i|,|B_j|) → 0. We thus conclude that

    R-∫∫ f(t)f(s) R(t,s) dt ds = R-∫∫ f(t)f(s) d²Γ(t,s)

and the existence of one side implies that of the other side. Note that
S*_I is dense in Λ_2(Γ) (since Γ is continuous), so that
Λ^2(R) = Λ_2(Γ). For a step function f we clearly have J(f) = I(f),
and hence this is true for all f ∈ Λ_2(Γ) by the continuity of I and J.
Q.E.D.
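Theorem 2 can be illustrated numerically. The sketch below is an illustration, not part of the text: it takes X to be a simulated Wiener path on T = [0,1] and f(t) = cos πt (both arbitrary choices), and compares the Riemann sum for J(f) = ∫ f(t)X_t dt with the Riemann-Stieltjes sum for ∫ f(t) dZ_t, where Z_t = ∫_0^t X_u du.

```python
import numpy as np

rng = np.random.default_rng(1)

# One Wiener path X on a fine grid over T = [0, 1]; Z_t = int_0^t X_u du.
fine = 20000
tf = np.linspace(0.0, 1.0, fine + 1)
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / fine), fine))])
Z = np.concatenate([[0.0], np.cumsum(0.5 * (X[1:] + X[:-1]) / fine)])  # trapezoid rule

def f(t):
    return np.cos(np.pi * t)

# J(f): Riemann sum of f(t) X_t dt on the fine grid.
J_f = np.sum(f(tf[:-1]) * X[:-1]) / fine

# int f(t) dZ_t: Riemann-Stieltjes sum against Z on a coarser grid.
idx = np.arange(0, fine + 1, 200)          # 101 coarse grid points
I_f = np.sum(f(tf[idx[:-1]]) * np.diff(Z[idx]))

print(J_f, I_f)                            # the two sums nearly coincide
```

The two sums approximate the same random variable; their difference is of the order of the coarse grid spacing times the variation of f.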
Λ^2(R) may contain interesting classes of functions larger than
S_J. Let L_J be the set of all measurable functions f on T such that
the following Lebesgue integrals are finite

    ∫∫ |f(t)f(s)R(t,s)| dt ds < ∞ ,
    ∫∫ |f(t)| 1_{(a,b]}(s) |R(t,s)| dt ds < ∞

for all (a,b] ⊂ T. We will follow the same convention (as for Λ_2) in
treating functions f in L_J as elements of Λ^2(R): f represents an
element of Λ^2(R) if there is an f_1 ∈ Λ^2(R) such that for all g in a
dense subset of Λ^2(R)

    <f_1,g> = ∫∫ f(t)g(s) R(t,s) dt ds .

With this convention the following is a corollary of Theorems 1 and 2.
COROLLARY 3. Let R(t,s) be continuous on T×T. Then L_J is a
dense subset of Λ^2(R). Also if f_1, f_2 ∈ L_J and
∫∫ |f_1(t)f_2(s)R(t,s)| dt ds < ∞, then

    <f_1,f_2> = ∫∫ f_1(t)f_2(s) R(t,s) dt ds .

We now remark on the relation between the Λ_2-, Λ^2-spaces and the
L_2-spaces. The spaces Λ_2(R) and Λ^2(R) are generalizations of
L_2-spaces; in general, however, they are larger than L_2-spaces. As an
example, consider a continuous covariance function R(t,s) on
[a,b] × [a,b] and let Λ^2(R) be defined as before. Any function f in
L_2([a,b],dt) belongs to Λ^2(R) since

    ∫∫ |f(t)f(s)R(t,s)| dt ds ≤ max |R(t,s)| ( ∫ |f(t)| dt )²
                               ≤ max |R(t,s)| |b-a| ∫ f²(t) dt < ∞ .

However, δ_t ∈ Λ^2(R) for all interior points t of T, while δ_t is not
in L_2([a,b],dt), since X_t = J(δ_t) and

    R(t,s) = ∫∫ δ_t(u) δ_s(v) R(u,v) du dv .
Nevertheless, there is a special case where Λ_2(R) reduces to an
L_2-space. Let X be a zero mean process with orthogonal increments.
Assume X_{t_0} = 0 a.e. for some fixed t_0 ∈ T. Then F(t) = EX_t² if
t ≥ t_0 and F(t) = -EX_t² if t < t_0 is non-decreasing, and thus

    R(t,s) = F(t_0 ∨ (t∧s)) - F(t_0 ∧ (t∨s))

is of bounded variation on every finite domain of T×T, and the
associated measure concentrates on the diagonal t = s of T×T. In this
case Λ_2(R) = L_2(T,F(dt)). In particular, if X is the Wiener process,
Λ_2(R) = L_2(T,dt).
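The identity Λ_2(R) = L_2(T,F(dt)) for an orthogonal-increment process can be checked by simulation. In the sketch below (an illustration with arbitrary choices, not from the text) X has orthogonal increments with F(t) = EX_t² = t² on T = [0,1], and the isometry E(∫ f dX)² = ∫ f²(t) dF(t) is verified by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

# Orthogonal-increment process with F(t) = E X_t^2 = t^2 on T = [0, 1],
# realized with independent Gaussian increments of variance dF.
n, n_paths = 400, 40000
t = np.linspace(0.0, 1.0, n + 1)
dF = np.diff(t ** 2)                          # F-measure of each grid cell
dX = rng.normal(0.0, 1.0, size=(n_paths, n)) * np.sqrt(dF)

f = t[:-1] + 1.0                              # integrand at left endpoints
I_f = dX @ f                                  # int f dX, path by path

emp_var = I_f.var()
theory = np.sum(f ** 2 * dF)                  # int f^2(t) dF(t) = 17/6
print(emp_var, theory)
```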
2. Tensor Products of Λ_2(R) and Λ^2(R)

Now let us study the tensor product spaces ⊗^n Λ_2(R) and
⊗^n Λ^2(R). Consider the set S_I^(n) of all step functions
K(t_1,...,t_n) on T^n. Define the following function on
S_I^(n) × S_I^(n):

    <1_{I_1×...×I_n}, 1_{J_1×...×J_n}>
        = ∫∫ 1_{I_1}(t) 1_{J_1}(s) d²R(t,s) ... ∫∫ 1_{I_n}(t) 1_{J_n}(s) d²R(t,s)

(where the I_i, J_j are bounded half open intervals in T); <K,G> for
general K, G ∈ S_I^(n) can be defined in a similar manner, and we
identify K with G if <K-G, K-G> = 0. Then (S_I^(n), <·,·>) is an
inner product space and we shall denote by Λ_2(⊗^n R) the completion of
S_I^(n). Since {1_{I_1} ⊗ ... ⊗ 1_{I_n}} is a complete set in
⊗^n Λ_2(R), we have

    Λ_2(⊗^n R) ≅ ⊗^n Λ_2(R) .

Let S_J^(n) be the set of functions of the form

    K(t_1,...,t_n) = Σ_{k=1}^{N} f_1^(k)(t_1) ... f_n^(k)(t_n)

where the f's belong to S_J. S_J^(n) is a linear space. Define on
S_J^(n) × S_J^(n)

    <f_1(t_1)...f_n(t_n), g_1(t_1)...g_n(t_n)>
        = R-∫∫ f_1(t)g_1(s)R(t,s) dt ds ... R-∫∫ f_n(t)g_n(s)R(t,s) dt ds
        = <f_1 ⊗...⊗ f_n, g_1 ⊗...⊗ g_n>_{⊗^n Λ^2(R)}

for f_i, g_j ∈ S_J, and identify K with G if <K-G, K-G> = 0. With
this observation, and with the fact that {f_1 ⊗...⊗ f_n} is a complete
set in ⊗^n Λ^2(R), we conclude that (S_J^(n), <·,·>) is an inner
product space and that its completion, which is denoted by Λ^2(⊗^n R),
is isomorphic to ⊗^n Λ^2(R).

As for the spaces Λ_2(R) and Λ^2(R), we will treat elements of
Λ_2(⊗^n R) and Λ^2(⊗^n R) as "formal" functions and we will write the
inner product in a formal integral form. As before, under some
conditions, elements of Λ_2(⊗^n R) and Λ^2(⊗^n R) will be representable
by functions on T^n in the corresponding sense and in this case we will
identify the elements of Λ_2 and Λ^2 with the functions (see Theorem 4
and Corollary 7). The important point here is that we have identified
the abstract tensor product spaces ⊗^n Λ_2(R) and ⊗^n Λ^2(R) with the
(nearly) function spaces Λ_2(⊗^n R) and Λ^2(⊗^n R). From now on we
will make no distinction between ⊗^n Λ_2(R) and Λ_2(⊗^n R), and
between ⊗^n Λ^2(R) and Λ^2(⊗^n R).

Let R be of bounded variation on every finite domain of T×T and
let L_I^(n) be the set of all measurable functions K on T^n such that
the following Lebesgue integrals are finite

    ∫...∫ |K(t_1,...,t_n) K(s_1,...,s_n)| d²|R|(t_1,s_1) ... d²|R|(t_n,s_n) < ∞ ,
    ∫...∫ |K(t_1,...,t_n)| 1_{I_1×...×I_n}(s_1,...,s_n) d²|R|(t_1,s_1) ... d²|R|(t_n,s_n) < ∞

for all bounded half open intervals I_1,...,I_n ⊂ T. The following
theorem can be proven like Theorem 1 and thus its proof is omitted.
THEOREM 4. Let R(t,s) be of bounded variation on every finite
domain of T×T. Then L_I^(n) is a dense subset of Λ_2(⊗^n R). Also if
K_1, K_2 ∈ L_I^(n) and

    ∫...∫ |K_1(t_1,...,t_n) K_2(s_1,...,s_n)| d²|R|(t_1,s_1) ... d²|R|(t_n,s_n) < ∞ ,

then

    <K_1,K_2> = ∫...∫ K_1(t_1,...,t_n) K_2(s_1,...,s_n) d²R(t_1,s_1) ... d²R(t_n,s_n) .

COROLLARY 5. L_2(T^p, d^p t) = ⊗^p L_2(T,dt).

PROOF. Taking R(t,s) to be the covariance function of a Wiener
process, Theorem 4 implies L_2(T^p, d^p t) ⊂ ⊗^p L_2(T,dt). On the
other hand, the correspondence f_1 ⊗...⊗ f_p ↔ f_1(t_1)...f_p(t_p)
preserves inner products. Q.E.D.

THEOREM 6. If R(t,s) is continuous on T×T, then
Λ^2(⊗^n R) = Λ_2(⊗^n Γ).

PROOF. This follows immediately from the facts that the set
{f_1 ⊗...⊗ f_n} is complete in both Λ^2(⊗^n R) and Λ_2(⊗^n Γ), and
that the two inner products are identical on this set. Q.E.D.
Let L_J^(n) be the set of all measurable functions K on T^n such
that the following Lebesgue integrals are finite

    ∫...∫ |K(t_1,...,t_n) K(s_1,...,s_n) R(t_1,s_1)...R(t_n,s_n)| dt_1 ds_1 ... dt_n ds_n < ∞ ,
    ∫...∫ |K(t_1,...,t_n)| 1_{I_1×...×I_n}(s_1,...,s_n) |R(t_1,s_1)...R(t_n,s_n)| dt_1 ds_1 ... dt_n ds_n < ∞

for all bounded half open intervals I_1,...,I_n ⊂ T. With the usual
corresponding convention the following is a corollary of Theorems 4
and 6.

COROLLARY 7. Let R(t,s) be continuous on T×T. Then L_J^(n) is a
dense subset of Λ^2(⊗^n R). Also if K_1, K_2 ∈ L_J^(n) and

    ∫...∫ |K_1(t_1,...,t_n) K_2(s_1,...,s_n) R(t_1,s_1)...R(t_n,s_n)| dt_1 ds_1 ... dt_n ds_n < ∞ ,

then

    <K_1,K_2> = ∫...∫ K_1(t_1,...,t_n) K_2(s_1,...,s_n) R(t_1,s_1)...R(t_n,s_n) dt_1 ds_1 ... dt_n ds_n .
Finally let us consider the symmetric tensor products ⊗̂^n Λ_2(R)
and ⊗̂^n Λ^2(R). For K ∈ Λ_2(⊗^n R) define

    K̃(t_1,...,t_n) = (1/n!) Σ_π K(t_{π_1},...,t_{π_n})

where the sum is over all permutations π = (π_1,...,π_n) of (1,...,n);
K̃ is well-defined since K is a "function". K̃ is called the symmetric
version of K. If K = K̃ then K is said to be a symmetric function.
Let Λ_2(⊗̂^n R) be the subspace of all symmetric functions in
Λ_2(⊗^n R). Then Λ_2(⊗̂^n R) is a Hilbert space and it is easy to show
that ⊗̂^n Λ_2(R) ≅ Λ_2(⊗̂^n R) under the correspondence

    f_1 ⊗̂ ... ⊗̂ f_n ↔ (f_1(t_1)...f_n(t_n))~ .

Similarly, let Λ^2(⊗̂^n R) be the subspace of all symmetric functions
in Λ^2(⊗^n R). Then we can show that ⊗̂^n Λ^2(R) ≅ Λ^2(⊗̂^n R)
(under the natural correspondence). As before, we shall hereon identify
⊗̂^n Λ_2(R) with Λ_2(⊗̂^n R) and ⊗̂^n Λ^2(R) with Λ^2(⊗̂^n R).
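The symmetric version K̃ is easy to compute for a discretized kernel. The short sketch below (illustrative only; the grid and the kernel K(t_1,t_2) = t_1 sin t_2 are arbitrary) averages a kernel over all permutations of its argument axes and checks that the result is a symmetric function and that symmetrization is idempotent.

```python
import math
from itertools import permutations

import numpy as np

def symmetrize(K):
    """Average a discretized kernel K(t_1,...,t_n) over all permutations
    of its arguments (array axes play the role of t_1,...,t_n)."""
    n = K.ndim
    return sum(np.transpose(K, p) for p in permutations(range(n))) / math.factorial(n)

# Example: K(t1,t2) = f(t1) g(t2); its symmetric version is
# (f(t1)g(t2) + f(t2)g(t1)) / 2, the kernel of the symmetric tensor.
t = np.linspace(0.0, 1.0, 50)
K = np.outer(t, np.sin(t))
Ks = symmetrize(K)
print(np.allclose(Ks, Ks.T))              # True: the result is symmetric
```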
3. Fourier Transform on Λ^2(⊗^n R)

Consider a continuous stationary covariance function
R(t,s) = R(t-s), t,s ∈ R. By Bochner's theorem we have

    R(t-s) = ∫ e^{i(t-s)λ} dμ(λ)

with μ a finite measure on the Borel sets of R. We shall define the
Fourier transform of every f ∈ Λ^2(⊗^n R), and for this it is
convenient to complexify the space Λ^2(⊗^n R). For f ∈ L_1(R^n), let
f̂ be its Fourier transform and recall that f̂ ∈ C_0(R^n), the set of
all continuous functions vanishing at infinity.

LEMMA 8. (i) L_1(R^n) is a dense subspace of Λ^2(⊗^n R).
(ii) If f,g ∈ L_1(R^n) then f̂, ĝ ∈ L_2(R^n, μ^n) and

    <f,g>_{Λ^2(⊗^n R)} = <f̂,ĝ>_{L_2(R^n,μ^n)} .

PROOF. (i) Let f ∈ L_1(R^n). Then

    ∫...∫ |f(t_1,...,t_n) f(s_1,...,s_n) R(t_1,s_1)...R(t_n,s_n)| dt_1 ds_1 ... dt_n ds_n
        ≤ μ(R)^n ||f||²_{L_1(R^n)} < ∞ .

Similarly the second condition in the definition of L_J^(n) is
verified, and thus f ∈ L_J^(n). By Corollary 7, f ∈ Λ^2(⊗^n R).
Since R is continuous, by Theorem 6, Λ^2(⊗^n R) = Λ_2(⊗^n Γ). Since
the set of all step functions is dense in Λ_2(⊗^n Γ), it follows that
L_1(R^n) is dense in Λ^2(⊗^n R).

(ii) Let f,g ∈ L_1(R^n). Then |f̂| ≤ ||f||_{L_1(R^n)} implies

    ∫...∫ |f̂(λ_1,...,λ_n)|² dμ(λ_1)...dμ(λ_n) ≤ ||f||²_{L_1(R^n)} μ(R)^n < ∞

and thus f̂ ∈ L_2(R^n, μ^n). Since by (i) f, g ∈ L_J^(n), it follows
by Corollary 7 that

    <f,g> = ∫...∫ f(t_1,...,t_n) g(s_1,...,s_n) R(t_1,s_1)...R(t_n,s_n) dt_1 ds_1 ... dt_n ds_n .

Substituting for R its Bochner representation and interchanging the
order of integration by Fubini's theorem we obtain
<f,g> = <f̂,ĝ>_{L_2(R^n,μ^n)}. Q.E.D.
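Lemma 8(ii) can be checked numerically for n = 1. In the sketch below (an illustration, not from the text) the stationary kernel is R(t-s) = exp(-|t-s|), whose Bochner measure is the Cauchy measure dμ = dλ/(π(1+λ²)) with total mass 1; the two test functions are Gaussians, and both sides of the isometry are approximated by Riemann sums.

```python
import numpy as np

# Left side: <f,g> with the kernel R(t-s) = exp(-|t-s|).
t = np.linspace(-6.0, 6.0, 1201)
dt = t[1] - t[0]
f = np.exp(-t ** 2)
g = np.exp(-(t - 1.0) ** 2)
Rmat = np.exp(-np.abs(t[:, None] - t[None, :]))
lhs = f @ Rmat @ g * dt * dt

# Right side: <fhat, ghat> in L2(mu), mu the Cauchy measure
# dlam / (pi (1 + lam^2)), with fhat(lam) = int f(t) exp(-i t lam) dt.
lam = np.linspace(-30.0, 30.0, 1501)
dlam = lam[1] - lam[0]
E = np.exp(-1j * np.outer(lam, t))
fh, gh = E @ f * dt, E @ g * dt
rhs = np.sum(fh * np.conj(gh) / (np.pi * (1.0 + lam ** 2))).real * dlam

print(lhs, rhs)                 # the two inner products agree
```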
THEOREM 9. The map F: L_1(R^n) → L_2(R^n, μ^n): f ↦ f̂ can be
extended to an isomorphism from Λ^2(⊗^n R) onto L_2(R^n, μ^n). The
extended map is denoted again by F and is called the Fourier transform
on Λ^2(⊗^n R). We also write f̂ for F(f).

PROOF. By Lemma 8, F preserves inner products and thus it can be
extended to an isomorphism F on Λ^2(⊗^n R) to a closed subspace of
L_2(R^n, μ^n). We now show that F is onto. Consider
A = {f̂: f ∈ L_1(R^n)}, which is a subalgebra of C_0(R^n). It is well
known that A is self-adjoint, separates points and vanishes at no
point. Thus by the Stone-Weierstrass theorem any f ∈ C_0(R^n) is the
uniform limit of functions in A. This implies that A is dense in
L_2(R^n, μ^n) (since μ^n is a finite measure); and since C_0(R^n) is
clearly dense in L_2(R^n, μ^n), it follows that F is onto. Q.E.D.

Heuristically, if we let R(t,s) be the covariance function of the
Gaussian white noise Ẇ (the derivative of the Wiener process W), then
R(t,s) is a delta function δ_0(t-s) and μ is the Lebesgue measure. In
this case Λ^2(⊗^n R) = Λ_2(⊗^n Γ) (where Γ(t,s) = EW_tW_s) and F
becomes the ordinary Fourier transform on L_2(R^n).

Denote the translation (by τ = (τ_1,...,τ_n)) of a function f by
f_τ(t_1,...,t_n) = f(t_1+τ_1,...,t_n+τ_n). Then we have

THEOREM 10. If f ∈ Λ^2(⊗^n R) then f_τ ∈ Λ^2(⊗^n R) and
<f_τ,g_τ> = <f,g> for all f, g ∈ Λ^2(⊗^n R).

PROOF. For f ∈ L_1(R^n) we have f_τ ∈ L_1(R^n) and
(f_τ)^(λ) = e^{iτ·λ} f̂(λ), so that <f̂_τ,ĝ_τ> = <f̂,ĝ> (Lemma 8).
Since R is stationary, all these assertions hold for the inner product
of Λ^2(⊗^n R). Hence they hold for all f, g ∈ Λ^2(⊗^n R) by
continuity. Q.E.D.
NONLINEAR SPACES
AND ~ULTIPLE ---WIENER
'--"
INTEGRALS FOR SPHERICALLY mVARI.AI~T PROCESSES
III.
The main effort of this chapter is to obtain an orthogonal decomposition of the nonlinear space of a
~hericaZly
(Theorem 12) and an integral representation
(Theorem 21).
fOT
invariant process (SIP)
its L.,-functiona1s
t.:.
The integral representation is expressed in terms of
multiple Wiener integrals (HWP's) which are defined in Section 3.
The
general properties of SIP's are studied in Section 1.
1.
Spherically Invariant Processes
(SIP~s).
It is well known that all mean square estimation problems on Gaussian processes have linear solutions and that Gaussian processes are closed under linear operations. Vershik [1964] showed that these two properties do not uniquely characterize the Gaussian processes. They do, however, characterize the class of SIP's.

Let X = (X_t, t∈T) be a second order process with mean m(t) and covariance function R(t,s). Then X is said to be a SIP if all r.v.'s in H(X−m) having the same variance have the same distribution. A SIP is a mixture of Gaussian processes. It can be determined by its mean m(t), a covariance function R(t,s), and a probability distribution F(α) on ℝ⁺; the characteristic function of X_{t₁},...,X_{t_k} (t₁,...,t_k ∈ T) is given by

(1)  E exp(i Σ_j u_j X_{t_j}) = exp(i Σ_j u_j m(t_j)) ∫ exp(−(α/2) Σ_{i,j} R(t_i,t_j) u_i u_j) dF(α)

(Vershik [1964] and Nagornyi [1974]). Such a SIP determined by m(t), R(t,s) and F(α) will be denoted, in short, by SIP(m,R;F).

A probability measure P on the sample space (ℝᵀ, B(ℝᵀ)) is said to be a spherically invariant measure (SIM) if it is induced by a SIP; or, equivalently, if the coordinate process on (ℝᵀ, B(ℝᵀ), P) is a SIP. A SIM induced by a SIP(m,R;F) will be denoted by SIM(m,R;F).

THEOREM 1. Let P be a probability measure on (ℝᵀ, B(ℝᵀ)) such that the characteristic functions of its finite dimensional distributions are given by (1), and for each α ≥ 0 let P_α be a Gaussian measure on (ℝᵀ, B(ℝᵀ)) with mean m(t) and covariance function αR(t,s). Then

(i)  P(E) = ∫ P_α(E) dF(α),  ∀ E ∈ B(ℝᵀ).

Furthermore, for any measurable function θ on (ℝᵀ, B(ℝᵀ), P) that is nonnegative or integrable we have

(ii)  Eθ = ∫ E_α θ dF(α),

where Eθ = ∫ θ dP and E_α θ = ∫ θ dP_α.
PROOF. (i) For each fixed cylinder set E, P_α(E) is a measurable function of α; and the family of sets E such that P_α(E) is α-measurable is a σ-field. Therefore P_α(E) is α-measurable for every E, so P(E) = ∫ P_α(E) dF(α) is well-defined and is easily checked to be a probability measure. The characteristic functions of its finite dimensional distributions are of course given by (1). Since the characteristic functions of finite dimensional distributions uniquely determine a probability measure on (ℝᵀ, B(ℝᵀ)), we have

P(E) = ∫ P_α(E) dF(α),  ∀ E ∈ B(ℝᵀ).

(ii) This holds for θ = 1_E by (i), and thus for θ a simple function. Hence it holds for θ nonnegative by considering a sequence of simple functions increasing to θ. For θ integrable consider the positive and negative parts separately. Q.E.D.

REMARK. Theorem 1(i) was proven by Gualtierotti [1974] for P a SIM on a separable Hilbert space.
COROLLARY 2. P is a SIM(m,R;F) with ∫ α dF(α) < ∞ if and only if

P(E) = ∫ P_α(E) dF(α),  ∀ E ∈ B(ℝᵀ),

where each P_α is a Gaussian measure with mean m and covariance function αR.

PROOF. Let X be the coordinate process on (ℝᵀ, B(ℝᵀ), P), so that the characteristic functions of its finite dimensional distributions are given by (1). Then X is of second order if and only if ∫ α dF(α) < ∞, since by Theorem 1(ii)

E(X_t − m(t))² = ∫ α R(t,t) dF(α).

The result now follows from Theorem 1(i). Q.E.D.

It follows from Corollary 2 that if X is a coordinate SIP(m,R;F) then under each P_α, X is a Gaussian process with mean m(t) and covariance function αR(t,s). The following lemma partially generalizes this fact and it is central to the study of the nonlinear spaces of SIP's.
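The mixture formula in Theorem 1(ii) lends itself to a direct numerical check. The following sketch is a modern illustration outside the dissertation's text; the two-point mixing distribution F, the variance r, and the moment functional θ(x) = x⁴ are chosen purely for convenience. It verifies that Eθ computed under the mixture agrees with ∫ E_α θ dF(α).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Mixing distribution F: two-point, P(alpha=0.5)=0.6, P(alpha=2.0)=0.4
# (illustrative only; any F on [0, inf) works).
alphas = np.array([0.5, 2.0])
weights = np.array([0.6, 0.4])
r = 1.3  # R(t,t), the variance of X_t under P_1 -- illustrative value

# Gauss-Hermite (probabilists') nodes: integrate against the N(0,1) density.
nodes, w = hermegauss(40)
w = w / np.sqrt(2 * np.pi)  # normalize so the weights sum to 1

def E_alpha(theta, alpha):
    """E_alpha theta(X_t): X_t ~ N(0, alpha*r) under P_alpha."""
    return np.sum(w * theta(np.sqrt(alpha * r) * nodes))

theta = lambda x: x ** 4
# Theorem 1(ii): E theta = integral of E_alpha theta dF(alpha)
mixture_moment = np.sum(weights * [E_alpha(theta, a) for a in alphas])
# For a Gaussian, E x^4 = 3 sigma^4, so the mixture gives sum_k w_k 3 (a_k r)^2
closed_form = np.sum(weights * 3 * (alphas * r) ** 2)
print(mixture_moment, closed_form)  # the two values agree
```

The same pattern checks any integrable θ: integrate against each Gaussian component, then average over F.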
LEMMA 3. Suppose X is a zero mean coordinate SIP and suppose {ξₙ} is a sequence in H(X). Then we may take particular versions of the ξₙ such that under each P_α the r.v.'s ξₙ are jointly Gaussian with zero mean and covariance

(i)  E_α ξᵢξⱼ = α E₁ ξᵢξⱼ,  ∀ α ≥ 0,

and, moreover, for any measurable function θ on (ℝ^∞, B(ℝ^∞)) we have

(ii)  E_α θ(ξ₁,ξ₂,...) = E₁ θ(√α ξ₁, √α ξ₂, ...)

whenever one of these expectations exists.

PROOF. For each r.v. ξₙ ∈ H(X) there is a sequence {ℓᵢ⁽ⁿ⁾, i≥1} of finite linear combinations of the coordinates x_t such that ℓᵢ⁽ⁿ⁾(x) → ξₙ(x) as i → ∞ a.e. [P]. Let

Cₙ = {x ∈ ℝᵀ: ℓᵢ⁽ⁿ⁾(x) converges as i → ∞}

and let C = ∩ₙ Cₙ. (Note that C is clearly a measurable linear space.) Since by Theorem 1,

1 = P(C) = ∫ P_α(C) dF(α),

there is α₀ such that P_{α₀}(C) = 1. We may assume α₀ > 0, because otherwise dF(α) concentrates at α = 0 and in this case the lemma obviously holds by taking each ξₙ to be identically 0. Bring in a zero mean Gaussian process Y on some probability space (Ω₀,B₀,Q) with covariance R; under P_α the coordinate process has the same distribution as √α Y, so the convergence on C under P_{α₀} carries over to every P_α. It follows that P_α(C) = 1 ∀ α ≥ 0. Now take each ξₙ to be lim ℓᵢ⁽ⁿ⁾(x) for x ∈ C and 0 for x ∉ C. Thus {ℓᵢ⁽ⁿ⁾, n≥1, i≥1} is jointly Gaussian under each P_α, and hence {ξₙ} is a Gaussian family under each P_α.

Since (i) is implied by (ii), we need to prove (ii) only. It holds for θ = 1_E, E ∈ B(ℝ^∞), since as before

P_α[(ξ₁,ξ₂,...) ∈ E] = P₁[(√α ξ₁, √α ξ₂,...) ∈ E],  ∀ α ≥ 0.

Consider now a nonnegative θ. Then θ is the increasing limit of a sequence of simple functions φₙ, and by the Monotone Convergence Theorem we have

E_α θ(ξ₁,ξ₂,...) = lim E_α φₙ(ξ₁,ξ₂,...) = lim E₁ φₙ(√α ξ₁, √α ξ₂,...) = E₁ θ(√α ξ₁, √α ξ₂,...).

(ii) is now evident. Q.E.D.
We now introduce a r.v. A which will play an important role in studying the nonlinear space of a SIP as well as in defining the multiple Wiener integrals for it. In the following discussion we assume that X is a coordinate SIP(0,R;F) and that X is nondegenerate, i.e., H(X) does not have finite dimension.

Pick an orthogonal sequence {ζₙ} from H(X) with E ζₙ² = a₁ = ∫ α dF(α). By Lemma 3 we may assume that under each P_α, {ζₙ} is a sequence of independent zero mean Gaussian r.v.'s with E_α ζₙ² = α. Define

Aₙ = (1/n) Σ_{i=1}^n ζᵢ².

By the Law of Large Numbers, we have

lim Aₙ = α  a.e. [P_α].

Let C* = {x ∈ ℝᵀ: Aₙ(x) converges}. Then

P(C*) = ∫ P_α(C*) dF(α) = ∫ dF(α) = 1.

Thus Aₙ converges a.e. [P] and its limit is denoted by A. We remark that A = α a.e. [P_α], and A has distribution function F, since

P(A ≤ a) = ∫ P_α(A ≤ a) dF(α) = ∫ 1_{[0,a]}(α) dF(α) = F(a).

If we put Ω_α = {x ∈ ℝᵀ: A(x) = α}, then P_α(Ω_α) = 1, and thus the probability measures P_α, α ≥ 0, are mutually singular.

In the following we will introduce an alternative way of defining the r.v. A, and we will give an interesting example of a sample continuous martingale whose family of σ-fields is not continuous.
Consider the same sequence {ζₙ} as above, and define

W_t = Σ_{n=1}^∞ ζₙ ∫₀ᵗ eₙ(s) ds,  0 ≤ t ≤ 1,

where {eₙ} is a CONS in L₂[0,1]. This sum, under each P_α, converges a.e. and in mean square, and (W_t, 0 ≤ t ≤ 1) is a Wiener process with variance parameter α (e.g. see Shepp [1966]). Let

Aₙ(t) = Σ_{j=1}^{2ⁿ} [W(j2⁻ⁿt) − W((j−1)2⁻ⁿt)]².

By a theorem of Lévy (Doob [1953]),

lim Aₙ(t) = αt  a.e. [P_α].

Applying the same argument as before we can show that for each t, Aₙ(t) converges a.e. [P]; its limit is the quadratic variation of (W_s, 0 ≤ s ≤ t), denoted by A(t). The r.v. A is defined to be A(1). Since A = α a.e. [P_α], we have A(t) = At.

Using the fact that under each P_α, W is a Wiener process with variance parameter α, it is easy to see that W is a sample continuous martingale under P. Let B_t = B(W_s, 0 ≤ s ≤ t), 0 ≤ t ≤ 1; we will show that B_t is not continuous at t = 0. By definition A(t) = At is B_t-measurable, and hence A is B_t-measurable for every t > 0. Suppose A is not a constant (i.e., dF(α) does not concentrate at one point). Then ∩_{t>0} B_t is nontrivial. But B₀ is trivial because W₀ = 0 a.e. [P]. Therefore B_t is not continuous at t = 0.
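The pathwise recovery of the mixing variable A from the quadratic variation, as in Lévy's theorem above, can be illustrated numerically. This sketch is a modern illustration outside the text; the hidden value α = 1.7, the grid size, and the seed are arbitrary.

```python
import numpy as np

# Dyadic quadratic variation A_n(1) = sum_j [W(j/n) - W((j-1)/n)]^2.
# Under P_alpha the path is a Brownian motion with variance parameter alpha,
# so A_n(1) -> alpha a.e.: the quadratic variation recovers A pathwise.
rng = np.random.default_rng(0)

def quadratic_variation(path):
    inc = np.diff(path)
    return np.sum(inc ** 2)

n = 2 ** 16                      # number of dyadic increments on [0, 1]
alpha = 1.7                      # the (hidden) value of A on this sample path
dW = rng.normal(0.0, np.sqrt(alpha / n), size=n)  # increments of sqrt(alpha)*W
W = np.concatenate([[0.0], np.cumsum(dW)])        # sampled path, W_0 = 0

A_hat = quadratic_variation(W)
print(A_hat)   # close to alpha
```

The estimator's standard deviation is of order α√(2/n), so refining the dyadic grid tightens the recovery, in line with the a.e. convergence of Aₙ(t).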
2. Nonlinear Spaces

Consider a second order process X = (X_t, t∈T). We shall briefly recapitulate some results on the nonlinear space L₂(X) which will be useful to us later. We may assume that X has zero mean. Suppose that every ξ ∈ H(X) has all moments finite. Let P be the linear space of all polynomials in elements of H(X), and for p ≥ 0 let P_p be the linear space of all polynomials in P of degree at most p; hence P₀ is the set of all constants. For p ≥ 1 let Q_p be the set of all polynomials in P_p orthogonal to P_{p−1}. Denote by Q̄_p the closure of Q_p in L₂(X). The closure of P_p is called the p-th polynomial chaos and Q̄_p is called the p-th homogeneous chaos. The following theorem is well known.

THEOREM 4. [Neveu 1968] If e^ξ ∈ L₂(X) for every ξ ∈ H(X), then {e^ξ, ξ ∈ H(X)} is a complete set in L₂(X); furthermore P is a dense subspace of L₂(X) and hence

L₂(X) = ⊕_{p≥0} Q̄_p.
PROOF. We will show that {e^ξ, ξ ∈ H(X)} is complete in L₂(X). Let η ∈ L₂(X) be such that E η e^ξ = 0 for every ξ ∈ H(X). There exists a sequence {ξₙ} in H(X) such that η is B(ξₙ, n ≥ 1)-measurable [Doob 1953]. Since η is integrable we have by the standard Martingale Convergence Theorem

η = lim E(η | ξ₁,...,ξₙ)  a.e.,

and E(η | ξ₁,...,ξₙ) = g(ξ₁,...,ξₙ) is a Borel measurable function of ξ₁,...,ξₙ. Since

E η e^{t₁ξ₁+···+tₙξₙ} = 0,  ∀ t₁,...,tₙ ∈ ℝ,

which implies E[E(η|ξ₁,...,ξₙ) e^{t₁ξ₁+···+tₙξₙ}] = 0, and since {e^{t₁ξ₁+···+tₙξₙ}, t₁,...,tₙ ∈ ℝ} is complete in L₂(ξ₁,...,ξₙ) (by an application of the Stone–Weierstrass Theorem), it follows that E(η|ξ₁,...,ξₙ) = 0 a.e., i.e., g(ξ₁,...,ξₙ) = 0 a.e. Thus η = 0 a.e. and we conclude that {e^ξ, ξ ∈ H(X)} is complete.

Suppose now e^ξ ∈ L₂(X) for ξ ∈ H(X). Then we can apply the Bounded Convergence Theorem to e^ξ = l.i.m. Σ_{p=0}^n ξᵖ/p! and conclude that P is a dense subspace of L₂(X). Q.E.D.

We now proceed to study the nonlinear space of a nondegenerate SIP(0,R;F). In order to have all moments of ξ ∈ H(X) finite we introduce the following "moment" condition:

(M)  The moment generating function of F exists, i.e., ∫ e^{αt} dF(α) < ∞, ∀ t ∈ ℝ.

In particular (M) implies that all moments a_p = ∫ αᵖ dF(α), p ≥ 0, of F are finite. Under the condition (M) we have for ξ ∈ H(X)

E |ξ|ᵖ = ∫ E_α |ξ|ᵖ dF(α) < ∞  (by Lemma 3),

since under P_α, ξ is a zero mean Gaussian variable.

COROLLARY 5. If X is a SIP(0,R;F) satisfying (M) then L₂(X) = ⊕_{p≥0} Q̄_p.
PROOF. It suffices to show that e^ξ ∈ L₂(X) for every ξ ∈ H(X). We may assume that X is a coordinate process. Also we may assume by Lemma 3 that under each P_α, ξ is a zero mean Gaussian variable with variance ασ² (σ² = (1/a₁)Eξ²). Thus

E e^{2ξ} = ∫ E_α e^{2ξ} dF(α) = ∫ e^{2ασ²} dF(α) < ∞  (by (M)).

Q.E.D.

REMARK. The use of Lemma 3 in the proof of Corollary 5 is typical. We will hereafter use Lemma 3 without comment.

COROLLARY 6. [Wiener 1938] If X is a zero mean Gaussian process then L₂(X) = ⊕_{p≥0} Q̄_p.

PROOF. Note that a Gaussian process is a SIP with dF(α) concentrated at α = 1. Q.E.D.
LEMMA 7. Suppose H is a Hilbert space. Then {ξ^{⊗p}, ξ ∈ H} is complete in the symmetric tensor power of H.

PROOF. Let {e_γ, γ ∈ Γ} be a CONS in H. Then

{e_{γ₁}^{⊗p₁} ⊗ ··· ⊗ e_{γ_k}^{⊗p_k}:  γ₁,...,γ_k ∈ Γ, p₁+···+p_k = p}

is (after normalization) a CONS in the symmetric tensor power. Let η be orthogonal to {ξ^{⊗p}, ξ ∈ H} and write η in terms of this CONS with coefficients a^{p₁...p_k}_{γ₁...γ_k}. Consider the inner product of η and (s₁e_{γ₁}+···+s_ke_{γ_k})^{⊗p}; we have

0 = ⟨η, (s₁e_{γ₁}+···+s_ke_{γ_k})^{⊗p}⟩ = Σ_{p₁+···+p_k=p} c_{p₁...p_k} a^{p₁...p_k}_{γ₁...γ_k} s₁^{p₁}···s_k^{p_k},  ∀ s₁,...,s_k,

with nonzero constants c_{p₁...p_k}. Since this polynomial identity holds for infinitely many (s₁,...,s_k), it follows that every a^{p₁...p_k}_{γ₁...γ_k} = 0, and thus η = 0. Therefore {ξ^{⊗p}, ξ ∈ H} is complete. Q.E.D.
Our first step in studying the nonlinear space of a SIP is to investigate to what extent it is determined by the structure of the linear space.

THEOREM 8. If X is a nondegenerate coordinate SIP(0,R;F) satisfying (M), then there exists a unique isomorphism φ from ⊕_{p≥0} H^{⊗p}(X) onto a subspace of L₂(X) such that

φ(e^{⊗ξ}) = e^{ξ − (A/2a₁)Eξ²},  where  e^{⊗ξ} = Σ_{p≥0} (a_p/(p! a₁ᵖ))^{½} ξ^{⊗p},  ξ ∈ H(X).

Moreover, φ is an isomorphism between ⊕_{p≥0} H^{⊗p}(X) and L₂(X) if and only if dF is concentrated at one point.
PROOF. The convergence of the series defining e^{⊗ξ} results from the following:

‖e^{⊗ξ}‖² = Σ_{p≥0} (a_p/(p! a₁ᵖ)) ‖ξ‖^{2p} = ∫ e^{α‖ξ‖²/a₁} dF(α) < ∞  (by (M)),

where ‖ξ‖² = Eξ². Thus e^{⊗ξ} is well-defined. Next we show that the set {e^{⊗ξ}, ξ ∈ H(X)} is complete in ⊕_{p≥0} H^{⊗p}(X). Let η = Σ_{p≥0} η_p, η_p ∈ H^{⊗p}(X), be orthogonal to this set. For u ∈ ℝ and ξ ∈ H(X) we have

0 = ⟨e^{⊗uξ}, η⟩ = Σ_{p≥0} (a_p/(p! a₁ᵖ))^{½} uᵖ ⟨ξ^{⊗p}, η_p⟩,

so that ⟨ξ^{⊗p}, η_p⟩ = 0, ∀ ξ ∈ H(X); by Lemma 7 this implies η_p = 0 and thus η = 0. On the other hand we have

E[e^{ξ−(A/2a₁)Eξ²} e^{η−(A/2a₁)Eη²}] = ∫ E_α(e^{ξ+η}) e^{−(α/2a₁)(Eξ²+Eη²)} dF(α) = ∫ e^{(α/a₁)Eξη} dF(α) = ⟨e^{⊗ξ}, e^{⊗η}⟩.

Thus φ preserves the inner products and therefore it can be extended uniquely to the closed subspace spanned by {e^{⊗ξ}, ξ ∈ H(X)}, which is indeed ⊕_{p≥0} H^{⊗p}(X). The first statement of the theorem is now proved.

Suppose dF(α) is concentrated at α = a. Then A = a a.e., and in this case {e^{ξ−(a/2a₁)Eξ²}, ξ ∈ H(X)} is complete in L₂(X) by Theorem 4. Thus φ is an onto map and hence an isomorphism. Conversely, suppose dF is not concentrated at one point; then there exists a nontrivial r.v. f(A) with zero mean. Note that

E[f(A) e^{ξ−(A/2a₁)Eξ²}] = ∫ f(α) E_α(e^ξ) e^{−(α/2a₁)Eξ²} dF(α) = ∫ f(α) dF(α) = 0.

It then follows that f(A) is orthogonal to the range of φ, so that φ is not onto. Q.E.D.

COROLLARY 9. [Neveu 1968] If X is a zero mean Gaussian process, then there exists a unique isomorphism from ⊕_{p≥0} H^{⊗p}(X) onto L₂(X) such that

e^{⊗ξ} ↦ e^{ξ−½Eξ²},  where  e^{⊗ξ} = Σ_{p≥0} (1/p!)^{½} ξ^{⊗p},  ξ ∈ H(X),

and {e^{ξ−½Eξ²}, ξ ∈ H(X)} is a complete set in L₂(X).
PROOF. The omission of the assumption that X is a nondegenerate coordinate process needs explanation. Note that in the proof of Theorem 8 this assumption is only used to guarantee the existence of the r.v. A. But in the Gaussian case dF(α) concentrates at α = 1, and thus the r.v. A always exists and equals 1. Therefore this assumption can be dropped. The assertion now follows from Theorem 8. Q.E.D.

By the isomorphism φ in Theorem 8 we shall hereafter identify ⊕_{p≥0} H^{⊗p}(X) with a subspace of L₂(X).
THEOREM 10. Suppose X is a nondegenerate coordinate SIP(0,R;F) satisfying (M); then for ξ ∈ H(X)

ξ^{⊗p} = (Aᵖ/(p! a_p))^{½} H_{p,‖ξ‖²}((a₁/A)^{½} ξ),

where H_{p,σ²} denotes the Hermite polynomial of degree p with variance parameter σ². More generally, for a finite orthogonal sequence ξ₁,...,ξ_k in H(X),

ξ₁^{⊗p₁} ⊗ ··· ⊗ ξ_k^{⊗p_k} = (Aᵖ/(p! a_p))^{½} Π_{i=1}^k H_{p_i,‖ξᵢ‖²}((a₁/A)^{½} ξᵢ),  p = p₁+···+p_k.

PROOF. Since {ξ₁,...,ξ_k} is an orthogonal set in H(X), {ξ₁^{⊗p₁}⊗···⊗ξ_k^{⊗p_k}: p₁,...,p_k ≥ 0} is an orthogonal set in ⊕_{p≥0} H^{⊗p}(X), and the following series converges in ⊕_{p≥0} H^{⊗p}(X):

e^{⊗Σuᵢξᵢ} = Σ_{p≥0} (a_p/(p! a₁ᵖ))^{½} (Σᵢ uᵢξᵢ)^{⊗p} = Σ_{p₁,...,p_k≥0} (a_p/(p! a₁ᵖ))^{½} (p!/(p₁!···p_k!)) u₁^{p₁}···u_k^{p_k} ξ₁^{⊗p₁}⊗···⊗ξ_k^{⊗p_k}.

On the other hand, in L₂(X),

φ(e^{⊗Σuᵢξᵢ}) = e^{Σᵢuᵢξᵢ − (A/2a₁)Σᵢuᵢ²Eξᵢ²} = Π_{i=1}^k Σ_{pᵢ≥0} (uᵢ^{pᵢ}/pᵢ!) (A/a₁)^{pᵢ/2} H_{pᵢ,‖ξᵢ‖²}((a₁/A)^{½}ξᵢ),

since under each P_α the r.v. (a₁/α)^{½}ξᵢ is Gaussian with variance Eξᵢ². The theorem now follows by comparing the coefficients of u₁^{p₁}···u_k^{p_k} in both developments. Q.E.D.
COROLLARY 11. [Neveu 1968] Suppose X is a zero mean Gaussian process. Then for ξ ∈ H(X)

ξ^{⊗p} = (1/p!)^{½} H_{p,‖ξ‖²}(ξ).

More generally, for a finite orthogonal sequence ξ₁,...,ξ_k in H(X),

ξ₁^{⊗p₁} ⊗ ··· ⊗ ξ_k^{⊗p_k} = (1/p!)^{½} Π_{i=1}^k H_{p_i,‖ξᵢ‖²}(ξᵢ),  p = p₁+···+p_k.
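The Hermite polynomials H_{p,σ²} with a variance parameter, used in Theorems 10–14, satisfy the generating function exp(ux − u²σ²/2) = Σ_p (uᵖ/p!) H_{p,σ²}(x), which is exactly the identity behind the coefficient comparisons above. A quick numerical sketch (modern code, outside the text; the values of u, σ², x are arbitrary):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, exp

# H_{p,s}(x) := s^{p/2} He_p(x / sqrt(s)): Hermite polynomial of degree p
# with variance parameter s, so that {H_{p,s}(xi)/sqrt(p!)} is orthonormal
# when xi ~ N(0, s).  Generating function:
#   exp(u x - u^2 s / 2) = sum_p (u^p / p!) H_{p,s}(x).
def hermite_var(p, s, x):
    coeffs = [0] * p + [1]                 # selects He_p in hermeval
    return s ** (p / 2) * hermeval(x / np.sqrt(s), coeffs)

u, s, x = 0.7, 2.0, 1.1                    # illustrative values
lhs = exp(u * x - u ** 2 * s / 2)
rhs = sum(u ** p / factorial(p) * hermite_var(p, s, x) for p in range(25))
print(lhs, rhs)   # the two sides agree
```

Truncating the series at p = 25 is ample here since the terms decay factorially.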
Before we prove the main theorem of this section we introduce a "continuity" condition on F:

(C₀)  F(α) is continuous at α = 0.

L₂(A) denotes the nonlinear space of the r.v. A, which is a subspace of L₂(X). An element in L₂(A) is of the form f(A), f ∈ L₂(dF).
THEOREM 12. If X is a nondegenerate coordinate SIP(0,R;F) satisfying (M) and (C₀), then there exists a unique isomorphism ψ from L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)) onto L₂(X) such that

ψ(f(A) ⊗ e^{⊗ξ}) = f(A) e^{(a₁/A)^{½}ξ − ½Eξ²}

for f ∈ L₂(dF) and ξ ∈ H(X), where e^{⊗ξ} = Σ_{p≥0} (1/p!)^{½} ξ^{⊗p}.

REMARK. We have defined e^{⊗ξ} in two different ways (cf. Theorem 8). However, the latter always appears in the form f(A) ⊗ e^{⊗ξ}, and hence there will be no confusion.
PROOF. The proof is similar to that of Theorem 8. The convergence of the series defining e^{⊗ξ} results from

‖e^{⊗ξ}‖² = Σ_{p≥0} ‖ξ^{⊗p}‖²/p! = e^{‖ξ‖²};

thus e^{⊗ξ} is well-defined, and {e^{⊗ξ}, ξ ∈ H(X)} is complete in ⊕_{p≥0} H^{⊗p}(X) as shown in the proof of Theorem 8. Note that {f(A) ⊗ e^{⊗ξ}: f(A) ∈ L₂(A), ξ ∈ H(X)} is then a complete set in L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)).

Let f, g ∈ L₂(dF(α)) and ξ, η ∈ H(X). Then we have

⟨f(A)⊗e^{⊗ξ}, g(A)⊗e^{⊗η}⟩ = ⟨f(A), g(A)⟩_{L₂(A)} ⟨e^{⊗ξ}, e^{⊗η}⟩ = ∫ f(α)g(α) dF(α) · e^{Eξη};

on the other hand, f(A) e^{(a₁/A)^{½}ξ−½Eξ²} is well-defined because of (C₀), and

⟨f(A)e^{(a₁/A)^{½}ξ−½Eξ²}, g(A)e^{(a₁/A)^{½}η−½Eη²}⟩_{L₂(X)} = ∫ f(α)g(α) E_α(e^{(a₁/α)^{½}(ξ+η)}) e^{−½Eξ²−½Eη²} dF(α) = ∫ f(α)g(α) e^{Eξη} dF(α),

where we have used the fact that under each P_α (α > 0), (a₁/α)^{½}ξ is Gaussian with variance Eξ². We may now conclude that the map

ψ: f(A) ⊗ e^{⊗ξ} ↦ f(A) e^{(a₁/A)^{½}ξ−½Eξ²}

preserves inner products and can be extended uniquely to an isomorphism from L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)) into L₂(X). We claim that ψ is onto; the proof is given after Theorem 13. Thus the theorem is proved. Q.E.D.

As before, by the isomorphism ψ in Theorem 12, we shall hereafter identify L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)) with L₂(X).
THEOREM 13. Suppose X is a nondegenerate coordinate SIP(0,R;F) satisfying (M) and (C₀). Then for f(A) ∈ L₂(A) and ξ ∈ H(X),

f(A) ⊗ ξ^{⊗p} = f(A) (1/p!)^{½} H_{p,‖ξ‖²}((a₁/A)^{½}ξ).

More generally, for a finite orthogonal set {ξ₁,...,ξ_k} in H(X) and f(A) ∈ L₂(A),

f(A) ⊗ (ξ₁^{⊗p₁}⊗···⊗ξ_k^{⊗p_k}) = f(A) (1/p!)^{½} Π_{i=1}^k H_{p_i,‖ξᵢ‖²}((a₁/A)^{½}ξᵢ),  p = p₁+···+p_k.

PROOF. The proof is essentially the same as that of Theorem 10, hence it is omitted. Q.E.D.

REMARK. It is easy to deduce from Theorem 13 that f(A)θ ∈ L₂(X) for f(A) ∈ L₂(A) and θ ∈ ⊕_{p≥0} H^{⊗p}(X), and that f(A) ⊗ θ = f(A)θ.
PROOF OF THE CLAIM in the proof of Theorem 12. Since {e^ξ, ξ ∈ H(X)} is complete in L₂(X), it is sufficient to show that e^ξ belongs to the range of ψ, ξ ∈ H(X). Consider the following series in L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)):

Σ_{p≥0} e^{(A/2a₁)Eξ²} (A/a₁)^{p/2} (1/p!)^{½} ⊗ ξ^{⊗p}.

The series converges since its squared norm is

Σ_{p≥0} ∫ e^{(α/a₁)Eξ²} [((α/a₁)Eξ²)ᵖ/p!] dF(α) = ∫ e^{(2α/a₁)Eξ²} dF(α) < ∞

(by Fubini's Theorem and (M)). From Theorem 13,

ψ(Σ_{p≥0} e^{(A/2a₁)Eξ²} (A/a₁)^{p/2} (1/p!)^{½} ⊗ ξ^{⊗p}) = e^{(A/2a₁)Eξ²} Σ_{p≥0} ((A/a₁)^{p/2}/p!) H_{p,‖ξ‖²}((a₁/A)^{½}ξ) = e^ξ,

where the series converges a.e. [P] and in L₂(X). Thus e^ξ ∈ ψ{L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X))} and the proof is complete. Q.E.D.
In the following, it is convenient to assume that the index set Γ is linearly ordered. This is no restriction since every set can be linearly ordered.

THEOREM 14. If X is a nondegenerate coordinate SIP(0,R;F) satisfying (M) and (C₀), and if {ξ_γ, γ∈Γ} and {eₙ(A), n=1,...,N} (N may be infinite) are CONS's in H(X) and L₂(A) respectively, then the family

M = {eₙ(A) ⊗ (p!/Π_γ p_γ!)^{½} ⊗_γ ξ_γ^{⊗p_γ}:  n=1,...,N, p_γ ≥ 0, p = Σ_γ p_γ < ∞}

is a CONS in L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)); or, equivalently, the family

{eₙ(A) Π_γ (1/p_γ!)^{½} H_{p_γ,‖ξ_γ‖²}((a₁/A)^{½}ξ_γ):  n=1,...,N, p_γ ≥ 0, Σ_γ p_γ < ∞}

is a CONS in L₂(X). Hence every θ ∈ L₂(X) has a unique orthogonal development with respect to this family.

PROOF. It is easy to check that

L₂(A) ⊗ (⊕_{p≥0} H^{⊗p}(X)) = ⊕_{p≥0} (L₂(A) ⊗ H^{⊗p}(X)).

It is clear that, for each p, the elements of M with Σ_γ p_γ = p form a complete set in L₂(A) ⊗ H^{⊗p}(X); thus the set M is complete in ⊕_{p≥0} (L₂(A) ⊗ H^{⊗p}(X)). It is also clear that M is an orthogonal set. To show that every element in M has norm 1, it suffices to show

(i)  ‖ξ_{γ₁}^{⊗p₁} ⊗ ··· ⊗ ξ_{γ_k}^{⊗p_k}‖² = p₁!···p_k!/p!.

But by definition the symmetrized product is

ξ_{γ₁}^{⊗p₁} ⊗ ··· ⊗ ξ_{γ_k}^{⊗p_k} = (1/p!) Σ_ν ξ_{ν₁} ⊗ ··· ⊗ ξ_{ν_p},

where ν = (ν₁,...,ν_p) is a permutation of the indices γ₁,...,γ_k, each γᵢ repeated pᵢ times, and the sum is over all such permutations. Thus the norm in (i) equals

(ii)  (1/p!)² Σ_{ν,μ} ⟨ξ_{ν₁}⊗···⊗ξ_{ν_p}, ξ_{μ₁}⊗···⊗ξ_{μ_p}⟩,

and each summand is either 0 or 1. Consider the number of nonzero summands: the first term in the inner product can vary in p! ways, and for each such term the second term can vary only in p₁!···p_k! ways (for a nonzero inner product). Thus (ii) equals (1/p!)² p! p₁!···p_k! = p₁!···p_k!/p!, and the proof of the first assertion is complete.

The second assertion is now a direct consequence of Theorems 12 and 13. Q.E.D.
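The counting argument in (ii) can be checked by brute force. The sketch below (a modern illustration, with the multiset {ξ₁,ξ₁,ξ₂,ξ₂}, i.e. p₁ = p₂ = 2 and p = 4, chosen for convenience) enumerates all pairs of permutations and counts the nonzero inner products, which for orthonormal ξ's are exactly the termwise-equal pairs.

```python
from itertools import permutations
from math import factorial

# Counting the nonzero summands in (ii): for orthonormal xi's, a pair of
# permutations (nu, mu) contributes 1 exactly when nu = mu termwise.
# The count should be p! * p1! * ... * pk!.
multiset = ['a', 'a', 'b', 'b']            # p1 = p2 = 2, p = 4 (illustrative)
perms = list(permutations(multiset))       # 4! position-permutations
nonzero = sum(1 for nu in perms for mu in perms if nu == mu)
p, p1, p2 = 4, 2, 2
print(nonzero, factorial(p) * factorial(p1) * factorial(p2))   # 96 96
```

Dividing by (p!)² gives (1/p!)²·96 = p₁!p₂!/p! = 1/6, in agreement with (i).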
Theorem 14 for Gaussian processes is the celebrated theorem of Cameron and Martin, which was a milestone in the study of nonlinear spaces of Gaussian processes.

COROLLARY 15. [Cameron and Martin 1947] If X is a zero mean Gaussian process and if {ξ_γ, γ ∈ Γ} is a CONS in H(X), then

{Π_γ (1/p_γ!)^{½} H_{p_γ}(ξ_γ):  γ ∈ Γ, p_γ ≥ 0, Σ_γ p_γ < ∞}

is a CONS in L₂(X).
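The orthonormality underlying the Cameron–Martin basis — E[H_p(ξ)H_q(ξ)] = p! δ_{pq} for ξ ~ N(0,1) — is easy to confirm by Gauss–Hermite quadrature. A numerical sketch (modern code, outside the text, using probabilists' Hermite polynomials He_p):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# For xi ~ N(0,1): E[He_p(xi) He_q(xi)] = p! delta_{pq}, so the family
# {He_p(xi)/sqrt(p!)} is orthonormal in L_2.  Checked by quadrature.
nodes, w = hermegauss(60)
w = w / np.sqrt(2 * np.pi)          # quadrature against the N(0,1) density

def He(p, x):
    return hermeval(x, [0] * p + [1])

G = np.array([[np.sum(w * He(p, nodes) * He(q, nodes))
               / np.sqrt(factorial(p) * factorial(q))
               for q in range(6)] for p in range(6)])
print(np.round(G, 6))   # the 6x6 identity matrix
```

With 60 nodes the quadrature is exact for the polynomial integrands involved, so the Gram matrix is the identity up to floating-point error.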
A final remark before closing this section is the following: we have assumed while studying the nonlinear space that X is a coordinate process. This is a convenient assumption rather than a restriction. For if X is defined on an arbitrary probability space (Ω,B,P) with B the σ-field generated by X, then L₂(X) = L₂(Ω,B,P) is isomorphic to L₂(ℝᵀ, B(ℝᵀ), PX⁻¹) under the correspondence θ(X(ω)) ↔ θ(x). (Note that every B-measurable function is of the form θ(X(ω)) for some B(ℝᵀ)-measurable function θ(x), and the r.v.'s θ(X(ω)) and θ(x) have the same probability distribution.)
3. Multiple Wiener Integrals (MWI's)

We shall define the multiple Wiener integrals of the following types:

I_p(f) = ∫···∫ f(t₁,...,t_p) dX_{t₁} ··· dX_{t_p},

J_p(f) = ∫···∫ f(t₁,...,t_p) X_{t₁} ··· X_{t_p} dt₁ ··· dt_p,

where X = (X_t, t∈T) is a SIP(0,R;F). Introduce the following "integrability" conditions on X:

(I)  X is zero mean and vanishes at some point t₀ ∈ T (i.e. X_{t₀} = 0 a.e.), with T an interval;

(J)  X is zero mean and mean square continuous, with T an interval.

We will always assume (I) while dealing with the integrals I_p, p ≥ 0, and (J) while dealing with the integrals J_p, p ≥ 0.
P
The following lemma is well known and is the key in t:liener's definition of liHI" s for a Hiener process.
LEI"jHA 16.
If
{~1 ' ••• , E;;n}
is a Gaussian jami ly with zero means then
E E;;1···E;; n = I*n* E ~.E;;.
~ J
where the sum is over all ways of dividing n
terms into pairs and the
product is over aU pair8 in this way of dividing.
to be
(Empty
sum is defined
0.)
PROOF.
Consider the moment generating function
=e
~Lt.t.a ..
~ J 1J
1 (1..)P (\'L 't .La .. )1'
-_ ~~ -1'2
p
1 J 1J
where
a .. = E
1J
E;;.E;; ..
1 J
Since
is exactly the coefficient of the term
t 1 ... t n
in
~,
we can conclude
Q.E.D.
this lemma after a little algebra.
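Lemma 16 can be checked directly by enumerating pair partitions. A sketch (modern code, outside the text; the 2×2 covariance and the function names are ours):

```python
import numpy as np

# Lemma 16 (the Wick / Isserlis formula): for a zero mean Gaussian family,
# E xi_1 ... xi_n = sum over pairings of the product of E xi_i xi_j.
def pairings(idx):
    """Enumerate all ways of dividing the index list into pairs."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for k, j in enumerate(rest):
        for p in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, j)] + p

def wick_moment(cov, idx):
    return sum(np.prod([cov[i, j] for i, j in p]) for p in pairings(list(idx)))

cov = np.array([[1.0, 0.3], [0.3, 2.0]])    # an illustrative covariance
# Known special cases: E xi_1^4 = 3 sigma_1^4, and
# E xi_1^2 xi_2^2 = sigma_1^2 sigma_2^2 + 2 (E xi_1 xi_2)^2.
print(wick_moment(cov, [0, 0, 0, 0]))        # 3.0
print(wick_moment(cov, [0, 0, 1, 1]))        # 1*2 + 2*0.09 = 2.18
```

The number of pairings of 2p terms is (2p−1)!! (3 for four terms, 15 for six), which is also what the enumeration produces.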
LEMMA 17. Let X be a zero mean SIP with a_{n/2} < ∞ (n even) and let ξ₁,...,ξₙ ∈ H(X). Then E ξ₁···ξₙ = 0 for n odd, and for n = 2p

E ξ₁···ξₙ = (a_p/a₁ᵖ) Σ* Π* E ξᵢξⱼ.

PROOF. We may assume X is a coordinate process. Then

E ξ₁···ξₙ = ∫ E_α ξ₁···ξₙ dF(α) = ∫ Σ*Π* E_α ξᵢξⱼ dF(α)  (by Lemma 16)
= ∫ αᵖ dF(α) · Σ*Π* E₁ ξᵢξⱼ = (a_p/a₁ᵖ) Σ*Π* E ξᵢξⱼ.

Q.E.D.
The following theorem is essential to our approach in defining MWI's for SIP's, and it has its own independent interest.

THEOREM 18. Let H be an infinite dimensional Hilbert space, and let

S = {φ₁⊗···⊗φ_p:  {φᵢ, 1≤i≤p} an orthogonal set in H},
Ŝ = {φ₁⊗̂···⊗̂φ_p:  {φᵢ, 1≤i≤p} an orthogonal set in H},

where ⊗̂ denotes the symmetrized tensor product. Then S and Ŝ are complete in H^{⊗p} and Ĥ^{⊗p} respectively.

PROOF. Pick a CONS {e_γ, γ ∈ Γ} in H. To show that S is complete, it suffices to show that each e_{γ₁}⊗···⊗e_{γ_p} can be approximated by linear combinations of elements in S. For {γ₁,...,γ_p} distinct we have e_{γ₁}⊗···⊗e_{γ_p} ∈ S; thus only the case that not all the γ's are distinct needs consideration. Suppose that {γ₁,...,γ_p} has k distinct elements γ₁,...,γ_k, each repeated p₁,...,p_k times, with p₁+···+p_k = p and k ≤ p. We may choose an orthonormal set {f₁,...,fₙ; ...; f_{(k−1)n+1},...,f_{kn}} in H such that

e_{γᵢ} = n^{−½}(f_{(i−1)n+1} + ··· + f_{in}),  1 ≤ i ≤ k.

We will show only the existence of f₁,...,fₙ; the existence of the others can be shown similarly. Take n−1 elements e_{β₁},...,e_{β_{n−1}} from {e_γ, γ∈Γ} \ {e_{γ₁},...,e_{γ_k}}, and let f₁,...,fₙ be the images of e_{γ₁},e_{β₁},...,e_{β_{n−1}} under an isometry of their span chosen so that e_{γ₁} = n^{−½}(f₁+···+fₙ); such an isometry exists since n^{−½}(1,...,1) is a unit vector in ℝⁿ.

Now, expanding each e_{γᵢ},

e_{γ₁}⊗···⊗e_{γ_p} = n^{−p/2} Σ f_{j₁}⊗···⊗f_{j_p},

where the sum is over all admissible indices j₁,...,j_p. Divide this sum into two parts Σ' and Σ'', where Σ' is the sum over all terms with distinct f's — each such term is an element of S, the f's being orthonormal — and Σ'' is the sum of the remaining terms. Then

‖n^{−p/2} Σ''‖² ≤ 1 − n!/((n−p)! nᵖ) → 0  as n → ∞.

So, by choosing n sufficiently large, e_{γ₁}⊗···⊗e_{γ_p} can be made arbitrarily close to the span of S. Thus the first assertion is proved. The second assertion is now obvious. Q.E.D.
We can now proceed to define the MWI's I_p, p ≥ 0. Suppose X is a nondegenerate SIP(0,R;F) with covariance function r(t,s); note that r(t,s) = a₁R(t,s). I₀ is defined to be the identity map on ℝ, and I₁ is defined to be the isomorphism Λ₂(r) → H(X): f ↦ ∫ f dX.

For p ≥ 2, let S = {φ₁⊗···⊗φ_p: {φᵢ} an orthogonal set in Λ₂(r)} and consider the map

Ĩ_p: S → L₂(X):  φ₁⊗···⊗φ_p ↦ ∫φ₁dX ··· ∫φ_p dX.

Then for φ₁,...,φ_p and ψ₁,...,ψ_p,

E[Ĩ_p(φ₁⊗···⊗φ_p) Ĩ_p(ψ₁⊗···⊗ψ_p)] = E[∫φ₁dX ··· ∫φ_p dX ∫ψ₁dX ··· ∫ψ_p dX] = (a_p/a₁ᵖ) Σ_π Π_{i=1}^p ⟨φᵢ, ψ_{π(i)}⟩_{Λ₂(r)},

where the second equality follows from Lemma 17 and the fact that E(∫φᵢdX ∫φⱼdX) = E(∫ψᵢdX ∫ψⱼdX) = 0 for i ≠ j. On the other hand,

⟨φ₁⊗̂···⊗̂φ_p, ψ₁⊗̂···⊗̂ψ_p⟩ = (1/p!) Σ_π Π_{i=1}^p ⟨φᵢ, ψ_{π(i)}⟩,

where the sums are over all permutations π of {1,...,p}. Thus

⟨Ĩ_p(φ₁⊗···⊗φ_p), Ĩ_p(ψ₁⊗···⊗ψ_p)⟩_{L₂(X)} = (p! a_p/a₁ᵖ) ⟨φ₁⊗̂···⊗̂φ_p, ψ₁⊗̂···⊗̂ψ_p⟩,

and hence (a₁ᵖ/(p! a_p))^{½} Ĩ_p is a map on Ŝ which preserves inner products. Theorem 18 implies that Ĩ_p can be extended uniquely to a constant multiple of an isomorphism on Λ̂₂(⊗ᵖr), the subspace of symmetric functions in Λ₂(⊗ᵖr). We define

I_p(f) = Ĩ_p(f̃)  for f ∈ Λ₂(⊗ᵖr),

where f̃ is the symmetric version of f.
We now summarize our results as follows.

THEOREM 19. If X is a nondegenerate SIP(0,R;F) satisfying (I) and a_p < ∞, then the MWI I_p (p ≥ 0) can be defined uniquely on Λ₂(⊗ᵖr) (r = a₁R) such that I_p: Λ₂(⊗ᵖr) → L₂(X) is a bounded linear operator with

I_p(φ₁⊗···⊗φ_p) = ∫φ₁dX ··· ∫φ_p dX  for {φᵢ} an orthogonal set in Λ₂(r).

Moreover, (a₁ᵖ/(p! a_p))^{½} I_p restricted to the subspace Λ̂₂(⊗ᵖr) is an isomorphism onto ⊗̂ᵖH(X).

The MWI J_p can be defined in exactly the same fashion, and we have the following

THEOREM 19'. If X is a nondegenerate SIP(0,R;F) satisfying (J) and a_p < ∞, then the MWI J_p can be defined uniquely on Λ₂(⊗ᵖr) such that J_p: Λ₂(⊗ᵖr) → L₂(X) is a bounded linear operator with

J_p(φ₁⊗···⊗φ_p) = ∫φ₁(t)X_t dt ··· ∫φ_p(t)X_t dt  for {φᵢ} an orthogonal set in Λ₂(r).

Moreover, (a₁ᵖ/(p! a_p))^{½} J_p restricted to the subspace Λ̂₂(⊗ᵖr) is an isomorphism onto ⊗̂ᵖH(X).
We now give another, less intuitive definition for I_p. Since Λ₂(r) is isomorphic to H(X) under the isomorphism I: f ↦ ∫ f dX, Λ₂(⊗ᵖr) is isomorphic to H^{⊗p}(X); denote this isomorphism by I^{⊗p}. For φ₁,...,φ_p orthogonal in Λ₂(r) we have

I^{⊗p}(φ₁⊗̂···⊗̂φ_p) = (∫φ₁dX) ⊗̂ ··· ⊗̂ (∫φ_p dX) = (a₁ᵖ/(p! a_p))^{½} I_p(φ₁⊗···⊗φ_p)  (by Theorem 10),

which suggests defining, on Λ₂(⊗ᵖr),

I_p(f) = (p! a_p/a₁ᵖ)^{½} I^{⊗p}(f̃).

This definition for I_p is consistent with the previous one since the two coincide on Ŝ, which is complete in Λ̂₂(⊗ᵖr).

The following results are immediate consequences of Theorems 10 and 13 and the fact that (a₁ᵖ/(p! a_p))^{½} I_p is a constant multiple of an isomorphism.
THEOREM 20. Let X be a nondegenerate coordinate SIP(0,R;F) satisfying (I) and (M). Then the MWI's I_p, p ≥ 0, have the following properties:

I_p(af+bg) = aI_p(f) + bI_p(g),  a,b ∈ ℝ,
I_p(f) = I_p(f̃),
⟨I_p(f), I_q(g)⟩ = 0  if p ≠ q,
⟨I_p(f), I_p(g)⟩ = (p! a_p/a₁ᵖ) ⟨f̃, g̃⟩_{Λ₂(⊗ᵖr)},

and, for {φᵢ} an orthogonal set in Λ₂(r),

I_p(φ₁^{⊗p₁}⊗···⊗φ_k^{⊗p_k}) = (A/a₁)^{p/2} Π_{i=1}^k H_{p_i,‖φᵢ‖²}((a₁/A)^{½} ∫φᵢdX),  p = p₁+···+p_k.
THEOREM 21. Let X be a nondegenerate coordinate SIP(0,R;F) satisfying (I), (M) and (C₀), and let {eₙ, n=1,...,N} (N may be infinite) be a CONS in L₂(dF). Then every θ ∈ L₂(X) has an orthogonal development as follows:

θ = Σ_{n≥1} Σ_{p≥0} eₙ(A) I_p(f_p⁽ⁿ⁾),  f_p⁽ⁿ⁾ ∈ Λ̂₂(⊗ᵖr);

and the development is unique: if θ = Σ_{n≥1} Σ_{p≥0} eₙ(A) I_p(g_p⁽ⁿ⁾) then f_p⁽ⁿ⁾ = g_p⁽ⁿ⁾ for all n and p.

PROOF. Pick a CONS {φ_γ, γ ∈ Γ} in Λ₂(r) and let ξ_γ = ∫φ_γ dX; then {ξ_γ, γ ∈ Γ} is a CONS in H(X). By Theorem 14 we can write θ as an orthogonal development in the family of Theorem 14. Grouping, for each n, the terms with Σ_γ p_γ = p and collecting the coefficients into a symmetric kernel f_p⁽ⁿ⁾ in Λ̂₂(⊗ᵖr), we get θ = Σ_{n≥1} Σ_{p≥0} eₙ(A) I_p(f_p⁽ⁿ⁾). The uniqueness is now clear. Q.E.D.

The corresponding results for the MWI's J_p are the following.

THEOREM 20'. Let X be a nondegenerate coordinate SIP(0,R;F) satisfying (J), (M) and (C₀). Then the MWI's J_p can be defined uniquely on Λ₂(⊗ᵖr) such that the following hold:

J_p(af+bg) = aJ_p(f) + bJ_p(g),  a,b ∈ ℝ,
J_p(f) = J_p(f̃),
⟨J_p(f), J_q(g)⟩ = 0  if p ≠ q,
⟨J_p(f), J_p(g)⟩ = (p! a_p/a₁ᵖ) ⟨f̃, g̃⟩_{Λ₂(⊗ᵖr)}.
THEOREM 21'. Let X be a nondegenerate coordinate SIP(0,R;F) satisfying (J), (M) and (C₀), and let {eₙ, n=1,2,...,N} (N may be infinite) be a CONS in L₂(dF). Then every θ ∈ L₂(X) has an orthogonal development as follows:

θ = Σ_{n=1}^N Σ_{p≥0} eₙ(A) J_p(f_p⁽ⁿ⁾),  f_p⁽ⁿ⁾ ∈ Λ̂₂(⊗ᵖr).

If θ = Σ_{n=1}^N Σ_{p≥0} eₙ(A) J_p(g_p⁽ⁿ⁾) then f_p⁽ⁿ⁾ = g_p⁽ⁿ⁾ for all n and p.
COROLLARY 22. If X is a Gaussian process with covariance R and satisfies (I), then the MWI's I_p, p ≥ 0, have the following properties:

I_p(af+bg) = aI_p(f) + bI_p(g),  a,b ∈ ℝ,
I_p(f) = I_p(f̃),
⟨I_p(f), I_q(g)⟩ = 0  if p ≠ q,
⟨I_p(f), I_p(g)⟩ = p! ⟨f̃, g̃⟩_{Λ₂(⊗ᵖR)}.

COROLLARY 23. If X is a Gaussian process with covariance R and satisfies (I), then every L₂-functional θ of X has an orthogonal development

θ = Σ_{p≥0} I_p(f_p),  f_p ∈ Λ̂₂(⊗ᵖR);

and if θ = Σ_{p≥0} I_p(g_p) then f_p = g_p, p ≥ 0.

COROLLARY 22'. If X is a Gaussian process with covariance R and satisfies (J), then the MWI's J_p, p ≥ 0, have the following properties:

J_p(af+bg) = aJ_p(f) + bJ_p(g),  a,b ∈ ℝ,
J_p(f) = J_p(f̃),
⟨J_p(f), J_q(g)⟩ = 0  if p ≠ q,
⟨J_p(f), J_p(g)⟩ = p! ⟨f̃, g̃⟩_{Λ₂(⊗ᵖR)}.

COROLLARY 23'. If X is a Gaussian process with covariance R and satisfies (J), then every L₂-functional θ of X has an orthogonal development

θ = Σ_{p≥0} J_p(f_p),  f_p ∈ Λ̂₂(⊗ᵖR);

and if θ = Σ_{p≥0} J_p(g_p) then f_p = g_p, p ≥ 0.
In Section 1, we combine the representation theorem for SIP's and the dichotomy theorem for Gaussian processes to derive some results on the equivalence of SIP's (Theorems 5, 7 and 8). (Two processes are said to be equivalent if the induced measures are equivalent.) In Section 2, we derive a 0-1 law for SIP's (Corollary 12) which partially extends the celebrated 0-1 law for Gaussian processes.

1. Equivalence of Spherically Invariant Processes

We first state the general theorem concerning the equivalence and the singularity of two Gaussian processes.

THEOREM 1. [Neveu 1968] Let X = (X_t, t∈T) be a zero mean Gaussian process on the probability space (Ω,B,P), and let R be the covariance function of X. Let Q be a second probability on (Ω,B) under which X is a Gaussian process with mean m and covariance function S. Then either P ~ Q or P ⊥ Q. In order that P ~ Q, it is necessary and sufficient that there exist y ∈ H(X) and U ∈ H(X)⊗H(X) such that

(1)  m(t) = ⟨X_t, y⟩_{H(X)},

(2)  S(t,s) − R(t,s) = ⟨X_t⊗X_s, U⟩_{H(X)⊗H(X)},

and the Hilbert–Schmidt operator U on H(X) does not admit −1 as its eigenvalue. When P ~ Q, the Radon–Nikodym derivative of Q w.r.t. P is given by

(3)  dQ/dP = c e^{−½Z + Y},

where Y ∈ H(X) and Z ∈ L₂(X) are determined by y and U, and c is a normalizing constant.

REMARK. (i) An element in H(X)⊗H(X) can be viewed as an element in L₂(X) or as a Hilbert–Schmidt operator on H(X) (cf. Appendix). (ii) X in (3) represents the sample function X(ω).
Since conditions (1), (2) are very difficult to verify and the expression (3) is very difficult to evaluate, it may be advantageous to restate Theorem 1 as follows.

THEOREM 2. Under the assumptions in Theorem 1 and the additional assumption that X satisfies (I) (resp. (J)), either P ~ Q or P ⊥ Q. In order that P ~ Q it is necessary and sufficient that there exist h ∈ Λ₂(R) (resp. Λ̃₂(R)) and K ∈ Λ₂(R)⊗Λ₂(R) (resp. Λ̃₂(R)⊗Λ̃₂(R)) satisfying

(1') and (2'), the conditions (1) and (2) expressed in terms of h and K via the isomorphism Λ₂(R) → H(X): f ↦ ∫f dX (resp. f ↦ ∫f(t)X_t dt),

and such that the Hilbert–Schmidt operator K on Λ₂(R) (resp. Λ̃₂(R)) does not admit −1 as its eigenvalue. When P ~ Q,

(3')  dQ/dP(x) = c e^{½∬H(t,s)dX_t dX_s + ∫h(t)dX_t}  (resp. c e^{½∬H(t,s)X_t X_s dtds + ∫h(t)X_t dt}),
where K = Σᵢ uᵢ eᵢ⊗eᵢ and H = Σᵢ λᵢ eᵢ⊗eᵢ for an orthonormal sequence {eᵢ} in Λ₂(R) (resp. Λ̃₂(R)), with (1−λᵢ)(1+uᵢ) = 1, and c is a normalizing constant.

PROOF. By the isomorphism Λ₂(R) → H(X): f ↦ ∫f dX (resp. f ↦ ∫f(t)X_t dt) we can write ξᵢ = ∫eᵢdX and Y = ∫h dX. Then it is easily seen that (1') and (2') are equivalent to (1) and (2). (3') is equivalent to (3) since

½∬H(t,s)dX_t dX_s = ½ Σᵢ λᵢ ∬eᵢ(t)eᵢ(s)dX_t dX_s = ½ Σᵢ λᵢ [(∫eᵢdX)² − 1],

which differs from −½Z by a constant. Q.E.D.
If we take X to be a Wiener process under P, we get the following well known theorem due to Shepp [1966].

COROLLARY 3. Let X be a Wiener process with T = [0,T] in Theorem 2. Then P ~ Q if and only if there exist m' ∈ L₂(T) and a symmetric function K ∈ L₂(T×T) for which

m(t) = ∫₀ᵗ m'(u)du,

S(t,s) − min(t,s) = ∫₀ˢ ∫₀ᵗ K(u,v)dudv,

and the Hilbert–Schmidt operator K on L₂(T) does not admit −1 as its eigenvalue. When P ~ Q,

(3'')  dQ/dP(x+m) = [δ(−1)e^{tr HK}]^{−½} e^{½∬H(t,s)dX_t dX_s + ∫m'(t)dX_t + ½∫m'(t)²dt},

where δ(λ) is the Carleman–Fredholm determinant of K and H is the Fredholm resolvent of K at −1.

PROOF. Adopt the notation in Theorem 2. Under the present assumption we have Λ₂(R) = L₂(T) and Λ₂(R)⊗Λ₂(R) = L₂(T×T) (see Appendix), and it is immediate that the conditions above are equivalent to (1') and (2'). Now assume P ~ Q; we will show that (3') implies (3''). Suppose as in Theorem 2 that K = Σᵢ uᵢ eᵢ⊗eᵢ and set m' = (I+K)h (I denotes the identity operator). Since H is the Fredholm resolvent of K at −1, it is determined by the operator equation

(i)  (I−H)(I+K) = I.

Write H = Σᵢ λᵢ eᵢ⊗eᵢ. Then (i) implies that (1−λᵢ)(1+uᵢ) = 1 and h = (I−H)m'. (3') now becomes

dQ/dP(x) = c e^{½Σᵢλᵢ[(∫eᵢdX)²−1] + ∫(I−H)m'dX}.

It may be shown that ∫f d(x+m) = ∫f dx + ∫f(t)m'(t)dt for all f ∈ L₂(T). Thus we have

½Σᵢλᵢ[(∫eᵢd(x+m))² − 1] = ½∬H(t,s)dX_t dX_s + ∫Hm'dX + ½⟨Hm',m'⟩,

∫(I−H)m' d(x+m) = ∫(I−H)m'dX + ⟨(I−H)m', m'⟩,

which together imply

dQ/dP(x+m) = c e^{½∬H(t,s)dX_t dX_s + ∫m'dX + ½∫m'(t)²dt}.

(3'') follows by noting that, from the definitions of δ(λ) and tr HK,

δ(−1)e^{tr HK} = Π(1+uᵢ)e^{−uᵢ} · e^{Σᵢλᵢuᵢ} = [Π(1−λᵢ)e^{λᵢ}]^{−1}

(since (1−λᵢ)(1+uᵢ) = 1). Q.E.D.
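The resolvent relation (i) behind (3'') ties the eigenvalues of K and H together as (1−λᵢ)(1+uᵢ) = 1. A finite-rank numerical sketch (modern code, outside the text; the random symmetric K is arbitrary):

```python
import numpy as np

# If H solves (I - H)(I + K) = I, then H and K share eigenvectors and
# (1 - lambda_i)(1 + u_i) = 1, where u_i, lambda_i are the eigenvalues
# of K and H respectively.
rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
K = B @ B.T / 10          # symmetric, eigenvalues >= 0, so I + K is invertible

I = np.eye(5)
H = I - np.linalg.inv(I + K)         # the (finite-rank) resolvent of K at -1

u = np.sort(np.linalg.eigvalsh(K))
lam = np.sort(np.linalg.eigvalsh(H))
print(np.max(np.abs((1 - lam) * (1 + u) - 1)))   # ~0: the eigenvalue relation
```

Since λ = u/(1+u) is increasing in u, sorting both spectra matches corresponding eigenvalues.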
When
P and
((l-A.)(l+u.)=l).
1
1
When P and Q are equivalent, the expression (3') serves as a test statistic for the Neyman-Pearson test of the hypothesis

H₀: an observed sample path of X is realized under P

against the alternative

H₁: an observed sample path of X is realized under Q.

Thus one desires to be able to calculate dQ/dP for each sample function of X. Note however that the expression (3') involves the integrals ∫∫H(t,s)dX_t dX_s and ∫h(t)dX_t (respectively ∫∫H(t,s)X_t X_s dt ds and ∫h(t)X_t dt), which are not defined for each individual sample path but in the mean square sense. It is therefore of interest to see if these integrals can be determined through sample paths. The answer is affirmative, and we shall show this for ∫∫H(t,s)dX_t dX_s only.
Suppose that X is continuous in probability. Let {t₁,t₂,…} be a dense subset of the interval T and let B_k(X) = B(X_{t₁},…,X_{t_k}), k ≥ 1. Then it is known that B(X) = B(X_{t_k}, k≥1), and thus by the martingale convergence theorem we have

E(∫∫H(t,s)dX_t dX_s | X_{t₁},…,X_{t_n}) → ∫∫H(t,s)dX_t dX_s  a.e. [P].

Orthonormalize X_{t₁},…,X_{t_n} to obtain ξ₁,…,ξ_n, i.e.

ξᵢ = Σ_{j=1}^{i} b_{ij} X_{t_j},  i ≤ n.

Write ξᵢ = ∫eᵢ dX, where each eᵢ is a step function. We may adjoin e_{n+1}, e_{n+2},… to e₁,…,e_n such that the resulting sequence is orthonormal in Λ₂(R) and satisfies

H = Σ_{i,j} ⟨H, eᵢ⊗eⱼ⟩ eᵢ⊗eⱼ = Σ_{i,j} a_{ij} eᵢ⊗eⱼ.

Then

∫∫H(t,s)dX_t dX_s = Σ_{i,j} a_{ij}(ξᵢξⱼ − δ_{ij}).

Now

(4)  E(∫∫H(t,s)dX_t dX_s | X_{t₁},…,X_{t_n}) = E(Σ_{i,j} a_{ij}(ξᵢξⱼ−δ_{ij}) | ξ₁,…,ξ_n) = Σ_{i,j=1}^{n} a_{ij}(ξᵢξⱼ − δ_{ij}).

Thus ∫∫H(t,s)dX_t dX_s can be determined as the a.e. limit of (4), which is obtainable through each sample path.
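The limit (4) can be illustrated numerically. The sketch below assumes X is a standard Wiener process on [0,1] and takes the constant kernel H ≡ 1 (both purely illustrative choices); orthonormalizing X_{t₁},…,X_{t_n} in grid order yields the normalized increments ξᵢ, and the finite-rank sum Σ a_ij(ξᵢξⱼ − δ_ij) reproduces the double Wiener integral ∫∫H dX_t dX_s = W(1)² − 1:

```python
import numpy as np

# Assumptions (illustrative): X is a standard Wiener process on [0,1], H(t,s) = 1.
# For an equally spaced grid, orthonormalizing X_{t_1},...,X_{t_n} in order gives
# the normalized increments xi_i, and a_ij = <H, e_i (x) e_j> = dt for every i, j.
rng = np.random.default_rng(0)
n = 512
dt = 1.0 / n
dX = rng.normal(0.0, np.sqrt(dt), n)      # Brownian increments over the grid cells
W1 = dX.sum()                             # W(1)

xi = dX / np.sqrt(dt)                     # orthonormalized observations
a = np.full((n, n), dt)                   # coefficients of H in the basis e_i (x) e_j

# finite-rank approximation (4): sum_{i,j<=n} a_ij (xi_i xi_j - delta_ij)
approx = (a * (np.outer(xi, xi) - np.eye(n))).sum()
exact = W1**2 - 1.0                       # the double Wiener integral of H = 1
print(approx, exact)
```

For this rank-one kernel the grid approximation is already exact path by path; for a general kernel the sum converges a.e. as the grid is refined.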
From now on X = (X_t, t∈T) will be a coordinate SIP(0,R;F) on (ℝ^T, B(ℝ^T), P). Recall that R is a covariance function and F is a probability distribution on ℝ⁺ with finite first moment α₁. P is a mixture of Gaussian measures; specifically,

(5)  P(E) = ∫ P_a(E) dF(a),

where each P_a (a≥0) is a zero mean Gaussian measure with covariance function aR. Suppose X is nondegenerate and let A be the r.v. associated with X as defined in Chapter III. Let H(X) and H_a(X) be the linear space of X under P and P_a respectively. We will use the following

LEMMA 4. If {ξᵢ} is a sequence in H(X), then we may take particular versions of each ξᵢ so that {ξᵢ} is a sequence in H_a(X) for every a ≥ 0. Conversely, if {ξᵢ} is a sequence in H_{a₀}(X) for a fixed a₀, then we may take particular versions of each ξᵢ so that {ξᵢ} is a sequence in H(X) and in H_a(X) for every a ≥ 0.

PROOF. This is implicit in the proof of Lemma 3 in Chapter III.  Q.E.D.

Now consider a second probability measure Q on (ℝ^T, B(ℝ^T)) under which X is a SIP(m,S;G). Then

(6)  Q(E) = ∫ Q_a(E) dG(a),

where each Q_a (a≥0) is a Gaussian measure with mean m and covariance aS. We are interested in the equivalence and mutual singularity of the measures P and Q.
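The mixture representation (5) says that a SIP is, in law, a Gaussian process with a random scale. A minimal simulation (the exponential mixing distribution F and the Brownian covariance R below are illustrative assumptions) checks the resulting second moments E X_t X_s = α₁ R(t,s), α₁ = ∫ a dF(a):

```python
import numpy as np

# Assumptions (illustrative): F exponential with mean alpha_1 = 1, R the Brownian
# covariance min(t,s) at positive times, X = sqrt(A) * G with A ~ F independent of G.
rng = np.random.default_rng(1)
t = np.linspace(0.2, 1.0, 5)
R = np.minimum.outer(t, t)                 # covariance matrix, positive definite here
L = np.linalg.cholesky(R)

m = 200_000
A = rng.exponential(1.0, size=m)           # mixing variable, E A = alpha_1 = 1
G = rng.normal(size=(m, 5)) @ L.T          # zero mean Gaussian with covariance R
X = np.sqrt(A)[:, None] * G                # the spherically invariant sample

emp = X.T @ X / m                          # empirical E X_t X_s
err = np.abs(emp - R).max()                # should be close to alpha_1 * R = R
print(err)
```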
THEOREM 5. Suppose that P₁ ∼ Q₁. Then

(i) if F ∼ G, P ∼ Q;
(ii) if F ⊥ G, P ⊥ Q;
(iii) if neither F ∼ G nor F ⊥ G, neither P ∼ Q nor P ⊥ Q.

PROOF. (i) follows from (5), (6) and the fact that P_a ∼ Q_a for a ≥ 0. Suppose F ⊥ G. Then there exists E ∈ B(ℝ⁺) such that F(E) = G(E') = 1. Since A = a a.e. [P_a] and a.e. [Q_a] (P_a ∼ Q_a), we have

P(A∈E) = ∫ P_a(A∈E) dF(a) = F(E) = 1

and

Q(A∈E') = ∫ Q_a(A∈E') dG(a) = G(E') = 1,

which implies (ii). The proof of (iii) is analogous to that of (ii).  Q.E.D.

REMARK. This theorem was first stated in Gualtierotti [1974] for SIM's on a separable Hilbert space.
If P ∼ Q, it is of interest to find an expression of dQ/dP in terms of ρ_a = dQ_a/dP_a a.e. [P_a] and dG/dF. To this end we prepare the following

LEMMA 6. Suppose P₁ ∼ Q₁ and F ∼ G. If ρ_{A(x)}(x) is a measurable function, then

(7)  dQ/dP (x) = ρ_{A(x)}(x) · (dG/dF)(A(x))  a.e. [P].

PROOF. For E ∈ B(ℝ^T),

∫_E ρ_A (dG/dF)(A) dP = ∫ [∫_E ρ_a (dG/dF)(a) dP_a] dF(a) = ∫ [∫_E ρ_a dP_a] dG(a) = ∫ Q_a(E) dG(a) = Q(E).

Thus dQ/dP = ρ_A (dG/dF)(A) a.e. [P].  Q.E.D.

REMARK. The measurability of ρ_A is not automatic, since each ρ_a can be arbitrarily changed on a set of P_a-measure 0.

In the following we assume that F satisfies (C₀).

THEOREM 7. Suppose R = S and F ∼ G. Then either P ∼ Q or P ⊥ Q. In order that P ∼ Q, it is necessary and sufficient that m belong to the reproducing kernel Hilbert space of R (m ∈ R(R)), i.e., that there exists Y ∈ H₁(X) for which

(8)  m(t) = ⟨X_t, Y⟩_{H₁(X)} = E₁(X_t Y),  t ∈ T.

When P ∼ Q,

(9)  dQ/dP = e^{A⁻¹(Y − ½E₁Y²)} (dG/dF)(A).

PROOF. Suppose m ∉ R(R). Then by a well known theorem of Fortet [1973] we have P_a ⊥ Q_a for every a > 0, so that P ⊥ Q. Conversely, assume (8). We may take a version of Y such that Y ∈ H_a(X) for every a ≥ 0 (Lemma 4). Then m(t) = E_a(X_t · a⁻¹Y), and thus by Theorem 1, P_a ∼ Q_a and

ρ_a = dQ_a/dP_a = e^{a⁻¹(Y − ½E₁Y²)}.

Now P ∼ Q follows from Theorem 5(i), and (9) follows from Lemma 6.  Q.E.D.

REMARK. Sytaya [1969] derived the expression (9) for SIM's on a separable Hilbert space and F = G.
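The Gaussian factor in (9) can be checked numerically. In the finite-dimensional sketch below (R, y and the values of a are illustrative assumptions), m = Ry so that Y = ⟨y,X⟩ represents m as in (8), and ρ_a = exp{a⁻¹(Y − ½E₁Y²)} carries P_a into the mean-shifted measure Q_a simultaneously for every a > 0:

```python
import numpy as np

# Assumptions (illustrative): dimension 3, covariance R, y defining Y = <y, X>,
# so the admissible mean is m = R y and E_1 Y^2 = y' R y.
rng = np.random.default_rng(2)
d, n = 3, 400_000
R = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.5], [0.2, 0.5, 1.0]])
y = np.array([0.2, -0.1, 0.15])
m = R @ y
E1Y2 = y @ R @ y

L = np.linalg.cholesky(R)
results = {}
for a in (0.5, 2.0):
    X = np.sqrt(a) * rng.normal(size=(n, d)) @ L.T        # sample of P_a
    rho = np.exp((X @ y - 0.5 * E1Y2) / a)                # density of Q_a w.r.t. P_a
    # E_{P_a} rho = 1 and E_{P_a}[rho * X] = m, for every a > 0
    results[a] = (rho.mean(), (rho[:, None] * X).mean(axis=0))
print(results, m)
```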
Two covariance functions R and S are said to be equivalent (R ∼ S) if the corresponding zero mean Gaussian measures P₁ and Q₁ are equivalent. We now state the main result of this section.
THEOREM 8. Suppose R ∼ S and F ∼ G. Then either P ∼ Q or P ⊥ Q; and P ∼ Q if and only if m ∈ R(R). When P ∼ Q, there exist Y ∈ H(X) and U ∈ H(X)⊗H(X) such that

(10)  m(t) = ⟨X_t, Y⟩_{H(X)} + ⟨X_t⊗Y, U⟩_{H(X)⊗H(X)},

(11)  S(t,s) − R(t,s) = α₁⁻¹ ⟨X_t⊗X_s, U⟩_{H(X)⊗H(X)},

the Hilbert-Schmidt operator U on H(X) does not admit −1 as its eigenvalue, and

(12)  dQ/dP = [Πᵢ(1−λᵢ)e^{λᵢ}]^{1/2} exp{−(α₁/2A) Z + A⁻¹(Y − ½ Σᵢ (1+uᵢ)⟨Y,ξᵢ⟩²_{H(X)})} · (dG/dF)(A),

where {ξᵢ} is an orthonormal sequence in H(X), U = Σᵢ uᵢ ξᵢ⊗ξᵢ ∈ H(X)⊗H(X), Z = Σᵢ λᵢ(ξᵢ² − A/α₁), (1−λᵢ)(1+uᵢ) = 1, and Y = Σᵢ ⟨Y,ξᵢ⟩_{H(X)} ξᵢ.

PROOF. Bring in the probability measure Q' defined by Q'(E) = Q(E+m), E ∈ B(ℝ^T). It is clear that under Q', X is a SIP(0,S;G);
thus P ∼ Q' because R ∼ S and F ∼ G (Theorem 5). From Theorem 7, either Q ∼ Q' or Q ⊥ Q', and Q ∼ Q' if and only if m ∈ R(S); since R ∼ S, R(S) = R(R). The first part of the theorem now follows by noting that Q ∼ Q' implies P ∼ Q, and Q ⊥ Q' implies P ⊥ Q.

Suppose P ∼ Q. Then m ∈ R(S) = R(R), and it follows from Theorem 2 that there exist Y₁ ∈ H₁(X) and U₁ = Σᵢ uᵢ ξᵢ'⊗ξᵢ' ∈ H₁(X)⊗H₁(X), where {ξᵢ'} is an orthonormal sequence in H₁(X) and U₁ does not admit −1 as its eigenvalue, such that

(i)  m(t) = ⟨X_t, Y₁⟩_{H₁(X)} + ⟨X_t⊗Y₁, U₁⟩_{H₁(X)⊗H₁(X)},

(ii)  S(t,s) − R(t,s) = Σᵢ uᵢ ⟨X_t,ξᵢ'⟩_{H₁(X)} ⟨X_s,ξᵢ'⟩_{H₁(X)}.

We may assume by Lemma 4 that {ξᵢ'} is a sequence in H_a(X) for every a ≥ 0. Note that

E ξᵢ'ξⱼ' = α₁ δᵢⱼ  and  E_a ξᵢ'ξⱼ' = a δᵢⱼ,  ∀ a ≥ 0, i,j ≥ 1.

Define

(iii)  ξᵢ = ξᵢ'/√α₁  (i ≥ 1),  which are elements of H(X) with ⟨ξᵢ,ξⱼ⟩_{H(X)} = δᵢⱼ,

(iv)  Y = Y₁/α₁ ∈ H(X),

and U = Σᵢ uᵢ ξᵢ⊗ξᵢ ∈ H(X)⊗H(X). Note that

(v)  Σᵢ ⟨Y,ξᵢ⟩_{H(X)} ξᵢ = α₁⁻¹ Σᵢ ⟨Y₁,ξᵢ'⟩_{H₁(X)} ξᵢ' = Y₁/α₁ = Y  (by (iv)),

(vi)  ⟨Y,ξᵢ⟩_{H(X)} = ⟨Y₁,ξᵢ'⟩_{H₁(X)}/√α₁  and  ⟨X_t,ξᵢ⟩_{H(X)} = √α₁ ⟨X_t,ξᵢ'⟩_{H₁(X)}.

It is important to keep in mind that under each P_a, Y ∈ H_a(X) and U ∈ H_a(X)⊗H_a(X). In the following computations we will repeatedly use the fact that

(vii)  if ξ, η ∈ H(X) ∩ H_a(X), then ⟨ξ,η⟩_{H_a(X)} = (a/α₁)⟨ξ,η⟩_{H(X)}.

We now show (10) and (11):

⟨X_t,Y⟩_{H(X)} + ⟨X_t⊗Y, U⟩_{H(X)⊗H(X)}
  = ⟨X_t,Y₁⟩_{H₁(X)} + Σᵢ uᵢ ⟨X_t,ξᵢ⟩_{H(X)} ⟨Y,ξᵢ⟩_{H(X)}
  = ⟨X_t,Y₁⟩_{H₁(X)} + Σᵢ uᵢ ⟨X_t,ξᵢ'⟩_{H₁(X)} ⟨Y₁,ξᵢ'⟩_{H₁(X)}  (by (iii), (iv), (vi))
  = m(t)  (by (i)),

and

α₁⁻¹ ⟨X_t⊗X_s, U⟩_{H(X)⊗H(X)} = α₁⁻¹ Σᵢ uᵢ ⟨X_t,ξᵢ⟩_{H(X)} ⟨X_s,ξᵢ⟩_{H(X)} = Σᵢ uᵢ ⟨X_t,ξᵢ'⟩_{H₁(X)} ⟨X_s,ξᵢ'⟩_{H₁(X)} = S(t,s) − R(t,s)  (by (ii), (iii), (vi)).

In order to show (12), we first find an expression for ρ_a = dQ_a/dP_a. According to Theorem 1, it is necessary to determine the Y_a and U_a corresponding to the pair (P_a, Q_a); the same calculations as those carried out above for a = 1 give

Y_a = a⁻¹ Y₁ = (α₁/a) Y ∈ H_a(X),  U_a = a⁻¹ Σᵢ uᵢ ξᵢ'⊗ξᵢ' = (α₁/a) U ∈ H_a(X)⊗H_a(X),

and {(α₁/a)^{1/2} ξᵢ} is an orthonormal sequence in H_a(X) (by (iii) and (vii)). We have, by Theorem 1, under P_a,

ρ_a = [Πᵢ(1−λᵢ)e^{λᵢ}]^{1/2} exp{−(α₁/2a) Z_a + a⁻¹(Y − ½ Σᵢ (1+uᵢ)⟨Y,ξᵢ⟩²_{H(X)})},

where Z_a = Σᵢ λᵢ(ξᵢ² − a/α₁) and (1−λᵢ)(1+uᵢ) = 1. We can now conclude (12) from Lemma 6, replacing a by A. Thus the theorem is proved.  Q.E.D.
As in the Gaussian case, Theorem 8 can be restated as follows.

THEOREM 9. Suppose R ∼ S and F ∼ G. Then either P ∼ Q or P ⊥ Q, and P ∼ Q if and only if m ∈ R(R). If P ∼ Q and if, in addition, α₂ < ∞ and X satisfies (I) (resp. (J)), then there exist h ∈ Λ₂(α₁R) (resp. Λ̂₂(α₁R)) and K ∈ Λ₂(α₁R)⊗Λ₂(α₁R) (resp. Λ̂₂(α₁R)⊗Λ̂₂(α₁R)) such that

m(t) = ⟨1_t, h⟩_{Λ₂(α₁R)} + ⟨1_t⊗h, K⟩_{Λ₂(α₁R)⊗Λ₂(α₁R)}  (resp. ⟨δ_t, h⟩_{Λ̂₂(α₁R)} + ⟨δ_t⊗h, K⟩_{Λ̂₂(α₁R)⊗Λ̂₂(α₁R)}),

S(t,s) − R(t,s) = α₁⁻¹ ⟨1_t⊗1_s, K⟩_{Λ₂(α₁R)⊗Λ₂(α₁R)}  (resp. α₁⁻¹ ⟨δ_t⊗δ_s, K⟩_{Λ̂₂(α₁R)⊗Λ̂₂(α₁R)}),

the Hilbert-Schmidt operator K does not admit −1 as its eigenvalue, and (12) holds with ξᵢ = ∫eᵢ dX and ⟨Y,ξᵢ⟩_{H(X)} = ⟨h,eᵢ⟩_{Λ₂(α₁R)} (resp. with the Λ̂₂-inner products), where H = Σᵢ λᵢ eᵢ⊗eᵢ ∈ Λ₂(α₁R)⊗Λ₂(α₁R) (resp. Λ̂₂(α₁R)⊗Λ̂₂(α₁R)), {eᵢ} is an orthonormal sequence in Λ₂(α₁R) (resp. Λ̂₂(α₁R)), K = Σᵢ uᵢ eᵢ⊗eᵢ, (1−λᵢ)(1+uᵢ) = 1, and h = Σᵢ ⟨h,eᵢ⟩_{Λ₂(α₁R)} eᵢ (resp. Σᵢ ⟨h,eᵢ⟩_{Λ̂₂(α₁R)} eᵢ).

PROOF. The proof is analogous to that of Theorem 2.  Q.E.D.
0-1 Law

The following celebrated 0-1 law for Gaussian measures is due to Kallianpur (see Cambanis and Rajput [1973] or Le Page [1973]).

THEOREM 10. Let P be a Gaussian measure on (ℝ^T, B(ℝ^T)) and let G be a B(ℝ^T)-measurable (additive) subgroup of ℝ^T. Then P(G) = 0 or 1.

In the application of Theorem 10 it happens very often that P has zero mean and G is a linear subspace. Under these additional assumptions we can extend Theorem 10 to the following
THEOREM 11. Let P on (ℝ^T, B(ℝ^T)) be a SIM(0,R;F) with F({0}) = ε, and let L be a B(ℝ^T)-measurable linear subspace of ℝ^T. Then P(L) = ε or 1. Furthermore, P(L) = 1 if and only if there exists α > 0 such that P_α(L) = 1. (Here B̄_a(ℝ^T), a ≥ 0, denotes the completion of B(ℝ^T) w.r.t. P_a; the conclusions also hold when L is only B̄_a(ℝ^T)-measurable for every a ≥ 0.)

PROOF. We may assume that ε < 1; otherwise the theorem clearly holds. Since P₀(L) = 1 (L contains 0), we have

(i)  P(L) = ∫ P_a(L) dF(a) = ε + ∫_{(0,∞)} P_a(L) dF(a).

Fix α₀ > 0. Since L is a linear space (the map x ↦ (α₀/a)^{1/2}x carries P_a into P_{α₀} and leaves L invariant),

(ii)  P_a(L) = P_{α₀}(L)  ∀ a > 0,

and by Theorem 10,

(iii)  P_{α₀}(L) = 0 or 1.

Now (i), (ii) and (iii) yield P(L) = ε + P_{α₀}(L)(1−ε), so that P(L) = ε or 1. The second assertion is now obvious. Finally, if L is B̄_a(ℝ^T)-measurable for every a ≥ 0, then there exist L* ∈ B(ℝ^T) and N* with P_a(N*) = 0 such that L = L* ∪ N*; thus P_a(L) = P_a(L*) and P(L) = P(L*), and the above argument applies to L*.  Q.E.D.
COROLLARY 12. Let P on (ℝ^T, B(ℝ^T)) be a SIM(0,R;F) satisfying (C₀), and let L be a B(ℝ^T)-measurable linear subspace of ℝ^T. Then P(L) = 0 or 1. Furthermore, P(L) = 1 if and only if there exists α > 0 such that P_α(L) = 1; the same holds if L is only B̄_a(ℝ^T)-measurable for every a ≥ 0.
By applying Corollary 12 we can obtain a number of 0-1 laws for path properties of a zero mean SIP. For example, we can show that if X = (X_t, t∈T) is a separable zero mean SIP satisfying (C₀), where T is an interval on the real line, then with probability 0 or 1 the paths of X are

(i) differentiable on T,
(ii) absolutely continuous on every compact subinterval of T,
(iii) continuous on T,
(iv) bounded on T,
(v) free of oscillatory discontinuities on T.

(Cambanis and Rajput [1973] proved these results for separable Gaussian processes, but their proof is also available for the present case.) Indeed most known 0-1 laws for Gaussian processes extend to zero mean SIP's, since their properties correspond to linear subspaces. Also, by Corollary 12, necessary and sufficient conditions for each alternative (whenever known) remain the same for zero mean SIP's. Some sample path properties for which necessary and sufficient conditions are known are integrability, continuity (stationary case), absolute continuity, and analyticity. For example, a necessary and sufficient condition for the p-th order sample integrability (1≤p<∞) of a measurable Gaussian process (and hence also, SIP) with covariance function R is

∫_T R^{p/2}(t,t) dt < ∞  [Rajput 1972];

a necessary and sufficient condition for the sample continuity of a separable stationary Gaussian (and hence also, SI) process is

Σ_n q^{−n} H^{1/2}(q^{−n}) < ∞  for some q > 1,

where H is the metric entropy for this process [Fernique 1974]. (H is defined at the end of this section.) In order to illustrate the techniques employed in proving 0-1 laws, we sketch, in the following, the proof of the 0-1 law and of a sufficient condition for sample continuity of a SIP.
Let X = (X_t, t∈[a,b]), defined on the probability space (Ω, B(X), P), be a separable (w.r.t. closed sets) SIP(0,R;F) with zero mean and covariance R, satisfying (C₀). Let μ be the induced SIM on (ℝ^{[a,b]}, B(ℝ^{[a,b]})); then

μ(E) = ∫ μ_a(E) dF(a),  ∀ E ∈ B(ℝ^{[a,b]}),

where each μ_a is a Gaussian measure. Let S ⊂ [a,b] be the separating set and let Ω₀ be the exceptional set in the definition of separability of X. Consider the following sets:

L* = {x ∈ ℝ^{[a,b]}: x(·) is uniformly continuous on S},
M* = {ω ∈ Ω: X_·(ω) is uniformly continuous on S},
M = {ω ∈ Ω: X_·(ω) is continuous}.

L* is easily shown to be a B(ℝ^{[a,b]})-measurable linear subspace of ℝ^{[a,b]}, and M* = X⁻¹(L*) ∈ B(X). By separability it follows that

P(M) = P(M*) = μ(L*),

which is either 0 or 1 by Corollary 12. Therefore with probability 0 or 1 the paths of X are continuous.
To show that the best known sufficient condition for the sample continuity of Gaussian processes remains the same for SIP's, we need a lemma which is of independent interest.

LEMMA 13. There exist probability measures P_a, a ≥ 0, on (Ω, B(X)) such that μ_a = P_a ∘ X⁻¹. Hence under each P_a, X is a zero mean Gaussian process with covariance function aR(t,s).

PROOF. Let C(ℝ^{[a,b]}) ⊂ B(ℝ^{[a,b]}) be the class of all cylinder sets, which forms a field, and let C(X) = X⁻¹C(ℝ^{[a,b]}) ⊂ B(X). For D ∈ C(X), define P_a(D) = μ_a(E) if D = X⁻¹(E). We claim that P_a is well-defined. It is not hard to see (by looking at their characteristic functions) that, when restricted to a finite dimensional space, μ_a is absolutely continuous w.r.t. μ. Thus if D = X⁻¹(E) = X⁻¹(E₁), E, E₁ ∈ C(ℝ^{[a,b]}), then μ(E) = P(D) = μ(E₁), and hence μ_a(E) = μ_a(E₁). Consequently P_a is a probability measure on the field C(X), and since σ(C(X)) = B(X), it can be extended uniquely to a probability measure on B(X). Now it follows that μ_a = P_a ∘ X⁻¹.  Q.E.D.
Suppose ξ = (ξ_t, t∈[a,b]) is a second order process. Define

d(t,s) = [E(ξ_t − ξ_s)²]^{1/2},

which is a pseudo-metric on [a,b]. Let N(ε) be the smallest number of (closed) ε-balls (w.r.t. the pseudo-metric d) which cover [a,b]. The metric entropy H of ξ is defined by H(ε) = log N(ε). The following result is well-known: If ξ is a separable measurable mean square continuous Gaussian process and if

(10)  ∫₀^B H^{1/2}(u) du < ∞  for some B > 0,

then with probability 1 the sample paths of ξ are continuous.

Assume that X is also measurable and mean square continuous. We now show that (10) is a sufficient condition for the sample continuity of X. By Corollary 12 it suffices to show that (10) implies

(11)  μ_a(L*) = 1  for some a > 0.

Thus assume (10). We first note that under almost all P_a, X is a separable Gaussian process, with the same separating set S as before and P_a(Ω₀) = 0, and for almost all a, X is measurable. It can be shown that X is mean square continuous under each P_a. Thus under almost all P_a, X is a separable, measurable, mean square continuous Gaussian process. Let us compare the metric entropies of X under P and under P_a. Note that

d_a²(t,s) = E_a(X_t − X_s)² = (a/α₁) d²(t,s),

so that N_a(ε) = N((α₁/a)^{1/2} ε); hence the entropy of X under P_a satisfies (10) as well. Thus with P_a-probability 1 the paths of X are continuous, i.e.

μ_a(L*) = P_a(M*) = P_a(M) = 1,

as was to be proved.

A similar result is obtained by Besson [1974] with a different approach, using the fact that a SIP is the product of a Gaussian process and an independent nonnegative r.v.
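The metric entropy defined above is computable in closed form for stationary metrics with a monotone modulus. The sketch below takes the Ornstein-Uhlenbeck-type metric d(t,s) = √(2(1 − e^{−|t−s|})) on [0,1] (an illustrative assumption) and evaluates H(ε) = log N(ε) together with the discretized entropy sum Σ q^{−n} H^{1/2}(q^{−n}) appearing in the stationary continuity condition:

```python
import numpy as np

# Assumption (illustrative): canonical metric d(t,s) = sqrt(2(1 - exp(-|t-s|)))
# on T = [0,1].  A closed eps-ball around its center covers an interval of length
# 2*delta with d(delta) = eps, so N(eps) = ceil(|T|/(2 delta)) and H(eps) = log N(eps).
def entropy(eps, length=1.0):
    delta = -np.log1p(-eps**2 / 2.0)       # solves sqrt(2(1 - e^{-delta})) = eps
    return np.log(np.ceil(length / (2.0 * delta)))

eps = np.logspace(-6, -0.5, 200)
H = np.array([entropy(e) for e in eps])    # non-increasing in eps

# discretized entropy sum with q = 2; its finiteness reflects sample continuity
tail = sum(2.0**-k * np.sqrt(entropy(2.0**-k)) for k in range(1, 60))
print(H[0], H[-1], tail)
```

Here H(ε) grows only like 2 log(1/ε), so the entropy sum converges, in agreement with the known continuity of the Ornstein-Uhlenbeck process.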
Via the tensor product space structure of the nonlinear space, we are able to solve the general nonlinear estimation problem for SIP's (in particular for Gaussian processes), in the sense that we can reduce the nonlinear problem to a standard linear estimation problem, the theory of which has been well-developed. This is done in Section 1. In Section 2 we introduce the concept of super prediction for a class of prediction problems and derive a lower bound for the mean square error of the nonlinear prediction. The notation introduced in Chapter III is used here.
1. Nonlinear Estimation

Let X = (X_t, t∈T) be a second order process with 0 mean. We consider the following estimation problem: We observe X_t for t∈S, a subset of T, and we want to estimate an L₂-functional θ of X based on the observations. We are interested in finding the best estimate θ̂, an L₂-functional of (X_t, t∈S), which minimizes the mean square error of estimation E(θ−θ̂)². It is well known that θ̂ can be obtained as the conditional expectation of θ given (X_t, t∈S). In general, θ̂ is extremely difficult to determine. However, if X is a SIP we have a complete solution.
In formulating the main result we need the following notation. Let X = (X_t, t∈T) be a nondegenerate SIP(0,R;F) satisfying (M) and (C₀). Let L₂(X;S) = L₂(X_t, t∈S) and H(X;S) = H(X_t, t∈S). Let {ξ_γ, γ∈Γ} be a CONS in H(X) with Γ a linearly ordered set, and let {e_n, n=1,2,…,N} be a CONS in L₂(dF) (where N may be ∞).

THEOREM 1. Let θ ∈ L₂(X) have the following orthogonal development:

θ = Σ_{n=1}^{N} Σ_{p≥0} Σ_{p₁+⋯+p_k=p} Σ_{γ₁<⋯<γ_k} a^{n;p₁⋯p_k}_{γ₁⋯γ_k} e_n(A) H_{p₁}(ξ_{γ₁}) ⋯ H_{p_k}(ξ_{γ_k}).

Suppose (X_t, t∈S) is nondegenerate. Then

θ̂ = Σ_{n=1}^{N} Σ_{p≥0} Σ_{p₁+⋯+p_k=p} Σ_{γ₁<⋯<γ_k} a^{n;p₁⋯p_k}_{γ₁⋯γ_k} e_n(A) H_{p₁,‖ξ̂_{γ₁}‖²}(ξ̂_{γ₁}) ⋯ H_{p_k,‖ξ̂_{γ_k}‖²}(ξ̂_{γ_k}),

where ξ̂_γ = Proj_{H(X;S)} ξ_γ.

PROOF. Recall the definition of the r.v. A and observe that it is independent of the choice of the defining sequence {ξᵢ}. It then follows that A ∈ L₂(X;S), and every ρ ∈ L₂(X;S) has the orthogonal development

(i)  ρ = Σ_{m=1}^{N} Σ_{q≥0} Σ_{q₁+⋯+q_l=q} Σ_{β₁<⋯<β_l} b^{m;q₁⋯q_l}_{β₁⋯β_l} e_m(A) H_{q₁}(η_{β₁}) ⋯ H_{q_l}(η_{β_l}),

where {η_β, β∈B} is a CONS in H(X;S). We have

θ̂ = E(θ | X_t, t∈S) = Proj_{L₂(X;S)} θ.

Thus to show the theorem it suffices to show that

(ii)  E(θ* ρ) = E(θ ρ)  for every ρ ∈ L₂(X;S),

where θ* denotes the right hand side of the asserted development, which clearly belongs to L₂(X;S). If ξ₁,…,ξ_p ∈ H(X) and η₁,…,η_p ∈ H(X;S), then it follows from ⟨ξᵢ,ηⱼ⟩_{H(X)} = ⟨ξ̂ᵢ,ηⱼ⟩_{H(X)} that

(iii)  ⟨ξ₁⊗⋯⊗ξ_p, η₁⊗⋯⊗η_p⟩_{⊗^p H(X)} = ⟨ξ̂₁⊗⋯⊗ξ̂_p, η₁⊗⋯⊗η_p⟩_{⊗^p H(X)}.

Now (ii) follows from (i), (iii) and the development of θ. Since ρ ∈ L₂(X;S) is arbitrary, the theorem follows.  Q.E.D.

If θ = ξ ∈ H(X) then it follows that

E(ξ | X_t, t∈S) = Proj_{H(X;S)} ξ,

which is also a characterizing property for SIP's.
COROLLARY 2. Let X be a zero mean Gaussian process and let θ ∈ L₂(X) have the following orthogonal development:

θ = Σ_{p≥0} Σ_{p₁+⋯+p_k=p} Σ_{γ₁<⋯<γ_k} a^{p₁⋯p_k}_{γ₁⋯γ_k} H_{p₁}(ξ_{γ₁}) ⋯ H_{p_k}(ξ_{γ_k}).

Then

θ̂ = Σ_{p≥0} Σ_{p₁+⋯+p_k=p} Σ_{γ₁<⋯<γ_k} a^{p₁⋯p_k}_{γ₁⋯γ_k} H_{p₁,‖ξ̂_{γ₁}‖²}(ξ̂_{γ₁}) ⋯ H_{p_k,‖ξ̂_{γ_k}‖²}(ξ̂_{γ_k}).

COROLLARY 3. If X is a zero mean Gaussian process then

Proj_{L₂(X;S)} ∘ Proj_{Ĥ_p(X)} = Proj_{Ĥ_p(X;S)} ∘ Proj_{L₂(X;S)}.

EXAMPLE. Suppose X is a SIP(0,R;F) satisfying (M) and (C₀). Consider the nonlinear estimation of θ = f(A) H_{p,(A/α₁)‖ξ‖²}(ξ), where ξ ∈ H(X), ‖·‖ is the norm of H(X), and f(A) ∈ L₂(A). Write

θ = f(A) H_{p,(A/α₁)‖ξ‖²}(ξ) = (p!)^{−1/2} f(A) Θ(ξ^{⊗p})  (Theorem 13, Chapter III).

By Theorem 1,

θ̂ = f(A) H_{p,(A/α₁)‖ξ̂‖²}(ξ̂).
Next consider the nonlinear estimation of an exponential functional

θ = e^{ξ − (A/2α₁)‖ξ‖²}.

Using the generating relation e^{x − σ²/2} = Σ_{p≥0} (1/p!) H_{p,σ²}(x) with σ² = (A/α₁)‖ξ‖², we have

θ = Σ_{p≥0} (1/p!) H_{p,(A/α₁)‖ξ‖²}(ξ)

and hence, by Theorem 1 and the preceding example,

θ̂ = E(θ | X_t, t∈S) = Σ_{p≥0} (1/p!) H_{p,(A/α₁)‖ξ̂‖²}(ξ̂) = e^{ξ̂ − (A/2α₁)‖ξ̂‖²}.

In the most important case where X is Gaussian we have

E(H_{p,‖ξ‖²}(ξ) | X_t, t∈S) = H_{p,‖ξ̂‖²}(ξ̂)  and  E(e^{ξ − ½‖ξ‖²} | X_t, t∈S) = e^{ξ̂ − ½‖ξ̂‖²}.

COROLLARY 4. Let X be a zero mean Gaussian martingale. Then

(Y_t = H_{p,‖X_t‖²}(X_t), t∈T)  and  (Z_t = e^{X_t − ½‖X_t‖²}, t∈T)

are martingales.

PROOF. This follows readily from the above example.  Q.E.D.

REMARK. For X a Wiener process it is well known that (X_t² − t, t≥0) and (e^{X_t − t/2}, t≥0) are martingales.
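For the Wiener process the remark can be verified by simulation: the martingale increments of X_t² − t and e^{X_t − t/2} over [s,t] are uncorrelated with any square-integrable functional of the past, here tested against X_s² (the time points and sample size are illustrative):

```python
import numpy as np

# Assumptions (illustrative): X a standard Wiener process, s = 0.5, t = 1.0.
rng = np.random.default_rng(3)
n, s, t = 1_000_000, 0.5, 1.0
Xs = rng.normal(0.0, np.sqrt(s), n)                # X_s
Xt = Xs + rng.normal(0.0, np.sqrt(t - s), n)       # X_t

inc_quad = (Xt**2 - t) - (Xs**2 - s)               # increment of the H_2 martingale
inc_exp = np.exp(Xt - t / 2) - np.exp(Xs - s / 2)  # increment of the exponential one

# both sample correlations with the past functional X_s^2 should vanish
c1 = np.mean(inc_quad * Xs**2)
c2 = np.mean(inc_exp * Xs**2)
print(c1, c2)
```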
In the following we shall use MWI's to present another solution of the nonlinear estimation problem. For simplicity as well as for practical purposes we will consider the Gaussian case only.

Let X = (X_t, t∈T) be a zero mean Gaussian process satisfying (I) or (J). We will then employ the orthogonal development in terms of the multiple Wiener integrals I_p in the former case and J_p in the latter case. We will consider the Λ₂-spaces and Λ̂₂-spaces corresponding to (X_t, t∈T) and (X_t, t∈S), and we denote them by Λ₂(R), Λ₂(R;S) and Λ̂₂(R), Λ̂₂(R;S) respectively.

For f a function on S^p, S a subinterval of T, let f̃ denote the obvious extension of f to T^p (i.e. f̃ = f on S^p and f̃ = 0 on T^p∖S^p). Since

⟨f,g⟩_{⊗^p Λ₂(R;S)} = ∫_{S^p} ∫_{S^p} f(t) g(s) d²R(t₁,s₁) ⋯ d²R(t_p,s_p) = ⟨f̃,g̃⟩_{⊗^p Λ₂(R)}

for all f, g step functions on S^p, which form a dense subspace of ⊗^p Λ₂(R;S), we can embed ⊗^p Λ₂(R;S) as a closed subspace of ⊗^p Λ₂(R) by identifying each step function f on S^p with f̃ on T^p. Moreover, we have

∫_S ⋯ ∫_S f(t₁,…,t_p) dX_{t₁} ⋯ dX_{t_p} = ∫_T ⋯ ∫_T f̃(t₁,…,t_p) dX_{t₁} ⋯ dX_{t_p},

since this is true for f step functions. Similarly we can conclude that ⊗^p Λ̂₂(R;S) is a closed subspace of ⊗^p Λ̂₂(R) and

∫_S ⋯ ∫_S f(t₁,…,t_p) X_{t₁} ⋯ X_{t_p} dt₁ ⋯ dt_p = ∫_T ⋯ ∫_T f̃(t₁,…,t_p) X_{t₁} ⋯ X_{t_p} dt₁ ⋯ dt_p.
THEOREM 5. Let X = (X_t, t∈T) be a Gaussian process satisfying (I) and let θ ∈ L₂(X) have the orthogonal development

θ = Σ_{p≥0} I_p(f_p),  f_p ∈ ⊗^p Λ₂(R).

Then

E(θ | X_t, t∈S) = Σ_{p≥0} I_p(f_p*),

where S is a subinterval of T and f_p* = Proj_{⊗^p Λ₂(R;S)} f_p.

PROOF. f_p* is well-defined since ⊗^p Λ₂(R;S) is a closed subspace of ⊗^p Λ₂(R); also I_p(f_p*) ∈ L₂(X;S). Consider ρ ∈ L₂(X;S) with the orthogonal development

ρ = Σ_{p≥0} I_p(g_p),  g_p ∈ ⊗^p Λ₂(R;S).

Then

E θρ = Σ_{p≥0} p! ⟨f_p, g_p⟩_{⊗^p Λ₂(R)} = Σ_{p≥0} p! ⟨f_p*, g_p⟩_{⊗^p Λ₂(R)} = E[(Σ_{p≥0} I_p(f_p*)) ρ],

which implies that E(θ | X_t, t∈S) = Σ_{p≥0} I_p(f_p*).  Q.E.D.

REMARK. Pick a CONS {e_γ, γ∈Γ} in Λ₂(R) and expand each f_p into its orthogonal development

f_p = Σ a_{γ₁⋯γ_p} e_{γ₁} ⊗ ⋯ ⊗ e_{γ_p}.

Then it can be shown as in Theorem 1 that

f_p* = Σ a_{γ₁⋯γ_p} e*_{γ₁} ⊗ ⋯ ⊗ e*_{γ_p},  where e*_γ = Proj_{Λ₂(R;S)} e_γ.

In general, the explicit form of f_p* is very difficult to determine. However, when X is a Wiener process, f_p* can be obtained as follows: Λ₂(R;S) is simply the restriction of Λ₂(R) to S, so that f_p* is the restriction of f_p to S^p.

COROLLARY 6. Let X = (X_t, t∈T) be a Gaussian process satisfying (I) and let B_t = B(X_s, s≤t, s∈T). Then (M_t, B_t, t∈T) is a square-integrable martingale if and only if

M_t = Σ_{p≥0} I_p(Proj_{⊗^p Λ₂^t(R)} f_p),  f_p ∈ ⊗^p Λ₂(R),

where Λ₂^t(R) = Λ₂(R; {s∈T: s≤t}).

PROOF. (M_t, B_t) is a square-integrable martingale if and only if M_t = E(θ | B_t) for some θ ∈ L₂(X) [Meyer 1966; p. 163]. So the assertion follows from Theorem 5.  Q.E.D.
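The orthogonality relation E I_p(f_p)I_q(g_q) = δ_{pq} p!⟨f_p,g_p⟩ that drives the proof of Theorem 5 is easy to check by simulation for the Wiener process. Below f = 1_{[0,1/2)} and g = 1_{[0,1)} (illustrative step kernels), for which I₂(f⊗f) = W(1/2)² − 1/2, I₂(g⊗g) = W(1)² − 1 and ⟨f,g⟩ = 1/2:

```python
import numpy as np

# Assumptions (illustrative): X the Wiener process on [0,1], step kernels
# f = 1_[0,1/2), g = 1_[0,1), so <f,g> = 1/2, <f,f> = 1/2 and <g,g> = 1.
rng = np.random.default_rng(4)
n = 1_000_000
F = rng.normal(0.0, np.sqrt(0.5), n)          # W(1/2) = I_1(f)
G = F + rng.normal(0.0, np.sqrt(0.5), n)      # W(1)   = I_1(g)

I2f = F**2 - 0.5          # I_2(f (x) f)
I2g = G**2 - 1.0          # I_2(g (x) g)

cross = np.mean(I2f * I2g)    # should be 2! <f,g>^2 = 0.5
mixed = np.mean(F * I2g)      # different orders: should vanish
print(cross, mixed)
```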
THEOREM 5'. Let X = (X_t, t∈T) be a Gaussian process satisfying (J) and let θ ∈ L₂(X) have the orthogonal development

θ = Σ_{p≥0} J_p(f_p),  f_p ∈ ⊗^p Λ̂₂(R).

Then

E(θ | X_t, t∈S) = Σ_{p≥0} J_p(f_p*),

where S is a subinterval of T and f_p* = Proj_{⊗^p Λ̂₂(R;S)} f_p.

COROLLARY 6'. Let X = (X_t, t∈T) be a Gaussian process satisfying (J) and let B_t = B(X_s, s≤t, s∈T). Then (M_t, B_t, t∈T) is a square-integrable martingale if and only if

M_t = Σ_{p≥0} J_p(Proj_{⊗^p Λ̂₂^t(R)} f_p),

where Λ̂₂^t(R) = Λ̂₂(R; {s∈T: s≤t}).
2. Nonlinear Prediction

Consider the following prediction problem for a certain class of processes: Let X = (X_t, t∈T), with T an interval, be a second order process and let Y_t = θ_t(X_t), with θ_t a real function such that E Y_t² < ∞. Suppose that on the basis of the past values of Y = (Y_t, t∈T) up to time t we want to find the best prediction of the future value Y_{t+τ}, τ > 0. Two predictions are of special interest: the optimal linear prediction Ŷ_t^L(τ) and the optimal nonlinear prediction Ŷ_t^{nl}(τ). The optimality is in the sense of minimizing the mean square error within the class of all linear and nonlinear predictions respectively. It is well known that

Ŷ_t^{nl}(τ) = E(Y_{t+τ} | Y_s, s≤t).

We denote the corresponding mean square prediction errors by

σ_L² = E(Ŷ_t^L(τ) − Y_{t+τ})²,  σ_{nl}² = E(Ŷ_t^{nl}(τ) − Y_{t+τ})².

Now introduce a super prediction Ŷ_t^S(τ), defined to be the nonlinear prediction of Y_{t+τ} based on (X_s, s≤t), i.e.

Ŷ_t^S(τ) = E(Y_{t+τ} | X_s, s≤t),

whose mean square prediction error is denoted by σ_S²; thus

σ_S² ≤ σ_{nl}² ≤ σ_L².

If X is a SIP then Ŷ_t^S(τ) can be obtained by solving an estimation problem (cf. Section 1). It is clear that σ_S² is a lower bound for the mean square errors of linear and nonlinear predictions. If, in addition, each θ_t happens to be a one-to-one function, then the σ-fields generated by X_t and Y_t coincide; in this case σ_S² and σ_{nl}² coincide, and the nonlinear prediction can be obtained by solving an estimation problem again.

In the important particular case where X = (X_t, t∈ℝ) is a zero mean stationary Gaussian process with covariance function R(t,s) = R(t−s) and θ is a polynomial, we can calculate the lower bound σ_S² as follows.
Write

Y_t = Σ_{p=1}^{n} a_p H_{p,σ²}(X_t)

(where H_{p,σ²}, 1≤p≤n, are the Hermite polynomials and σ² = E X_t²). Note that

E H_{p,σ²}(X_t) H_{p,σ²}(X_s) = p! ⟨X_t^{⊗p}, X_s^{⊗p}⟩_{⊗^p H(X)} = p! ⟨X_t,X_s⟩^p_{H(X)} = p! R^p(t,s)

and

E H_{p,σ²}(X_t) H_{q,σ²}(X_s) = 0  if p ≠ q.

Thus the covariance function of Y is

E Y_t Y_s = Σ_{p,q=1}^{n} a_p a_q E(H_{p,σ²}(X_t) H_{q,σ²}(X_s)) = Σ_{p=1}^{n} p! a_p² R^p(t−s).

Let X̂_t(τ) = E(X_{t+τ} | X_s, s≤t) be the optimal nonlinear prediction of X_{t+τ} (which is also the optimal linear prediction since X is Gaussian), and let σ₀² be its mean square prediction error.
Then (see Example in Section 1)

Ŷ_t^S(τ) = E(Y_{t+τ} | X_s, s≤t) = Σ_{p=1}^{n} a_p H_{p,‖X̂_t(τ)‖²}(X̂_t(τ)),

where ‖X̂_t(τ)‖² = σ² − σ₀², and

σ_S² = E(Y_{t+τ} − Ŷ_t^S(τ))² = E Y_{t+τ}² − E(Ŷ_t^S(τ))²
  = Σ_{p=1}^{n} p! a_p² σ^{2p} − Σ_{p=1}^{n} p! a_p² (σ² − σ₀²)^p
  = Σ_{p=1}^{n} p! a_p² [σ^{2p} − (σ² − σ₀²)^p].
It is well known from the general theory of stationary processes [Doob 1953; Gikhman and Skorokhod 1969] that σ₀² can be obtained analytically (if not explicitly) through the Wiener-Paley Factorization Theorem if X is regular, i.e.

∩_t H(X_s; s≤t) = {0}.

Y is clearly stationary. When X is regular we shall show that so is Y, and hence σ_S² can be obtained analytically. To prove that Y is regular we invoke the one-sided moving average representation for X, i.e.,

X_t = ∫_{−∞}^{t} C(t−u) dW_u,

where C is a square integrable function vanishing on the negative real line and W is a Wiener process. Let B_t = B(W_u, u≤t) and B_{−∞} = ∩_t B_t; then B_{−∞} is trivial [Doob 1953]. Suppose ξ ∈ ∩_t H(Y_s, s≤t). Now X_t is B_t-measurable and hence Y_t is B_t-measurable; thus ξ is B_t-measurable for all t, and therefore ξ is B_{−∞}-measurable, which implies that ξ is a constant. But Eξ = 0, so ξ = 0. Thus Y is regular. (The regularity of Y also follows from a general result which states that for a Gaussian process the regularity condition is equivalent to the triviality of the remote past.)
Summing up these results, we now state the following

THEOREM 7. Let X = (X_t, t∈ℝ) be a zero mean stationary Gaussian process with covariance function R(t−s) and σ² = R(0), and let

Y_t = Σ_{p=1}^{n} a_p H_{p,σ²}(X_t),  t∈ℝ.

Then

(i) Y = (Y_t, t∈ℝ) is a zero mean stationary process with covariance function Σ_{p=1}^{n} p! a_p² R^p(t−s); moreover, Y is regular if X is regular;

(ii) σ_{nl}² ≥ σ_S² = Σ_{p=1}^{n} p! a_p² [σ^{2p} − (σ² − σ₀²)^p].

REMARK. The corresponding theorem for a stationary Gaussian sequence also holds.
Jaglom [1970] has considered the problem of comparing the performance of optimal linear and nonlinear predictions for polynomial functionals of certain stationary Markov processes. Donelson III and Maltz [1972] studied this problem in detail for polynomial functionals of the Ornstein-Uhlenbeck process. The inequality (ii) in Theorem 7 plays a central role in such studies.

As an example, consider (X_t, t∈ℝ) the Ornstein-Uhlenbeck process, i.e. a zero mean Gaussian process with covariance function R(t,s) = e^{−|t−s|}. By the Markov property we have

E(X_{t+τ} | X_s, s≤t) = E(X_{t+τ} | X_t) = e^{−τ} X_t.

Thus

E(H_p(X_{t+τ}) | X_s, s≤t) = H_{p,e^{−2τ}}(e^{−τ} X_t) = e^{−pτ} H_p(X_t),

and hence

σ_S² = Σ_{p=1}^{n} p! a_p² (1 − e^{−2pτ}),

which provides a lower bound (sometimes the greatest lower bound, e.g., for Y_t = H_p(X_t)) to the mean square prediction error of Y_{t+τ}. This result has been obtained by Donelson III and Maltz using a different approach; they also compared σ_S² to σ_L² and found that they are frequently very close to each other.
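The Ornstein-Uhlenbeck prediction formula E(H_p(X_{t+τ}) | X_s, s≤t) = e^{−pτ} H_p(X_t) can be checked by Monte Carlo for p = 2 (τ and the sample size below are illustrative); by the Markov property, X_{t+τ} = e^{−τ}X_t + (1 − e^{−2τ})^{1/2} Z with Z an independent standard normal:

```python
import numpy as np

# Assumptions (illustrative): stationary OU process with R(t,s) = exp(-|t-s|),
# so X_t ~ N(0,1) and corr(X_t, X_{t+tau}) = exp(-tau); we test p = 2.
rng = np.random.default_rng(5)
n, tau = 1_000_000, 0.3
rho = np.exp(-tau)
X = rng.normal(size=n)                                    # X_t
Xf = rho * X + np.sqrt(1 - rho**2) * rng.normal(size=n)   # X_{t+tau}

H2 = lambda x: x**2 - 1.0                                 # Hermite polynomial, p = 2
lhs = np.mean(H2(Xf) * H2(X))          # E H_2(X_{t+tau}) H_2(X_t)
rhs = rho**2 * np.mean(H2(X) ** 2)     # e^{-2 tau} E H_2(X_t)^2 = 2 e^{-2 tau}
print(lhs, rhs)
```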
VI. NONLINEAR NOISE

A (strictly) stationary process Y = (Y_t, −∞<t<∞) with E Y_t² < ∞ is called noise. Wiener [1953] liked to think of such a noise as the output of a "black box" θ: we put in a white noise W' = (W'_t, −∞<t<∞) (formally the derivative of a Wiener process) and we get out the noise (Y_t, −∞<t<∞), where Y_t is produced by shifting the incoming white noise by the flow of the white noise: W(D) → W(D+t). In order for Y to be a noise we require that Eθ = 0 and Eθ² < ∞. Since θ has the orthogonal development (cf. III.2.)

(1)  θ = Σ_{p≥1} ∫⋯∫ f_p(t₁,…,t_p) dW_{t₁} ⋯ dW_{t_p},

where f_p ∈ L₂(ℝ^p), the noise Y obtained by shifting the incoming white noise through the nonlinear device θ can be expressed as

(2)  Y_t = Σ_{p≥1} ∫⋯∫ f_p(t₁−t,…,t_p−t) dW_{t₁} ⋯ dW_{t_p},

and the covariance function of Y is readily seen to be

E Y_t Y_s = E Y₀ Y_τ = Σ_{p≥1} p! ∫⋯∫ f_p(t₁,…,t_p) f_p(t₁−τ,…,t_p−τ) dt₁ ⋯ dt_p  (τ = t−s).

Wiener's theory of nonlinear noise starts from this idea. He also proved a profound theorem, which was clarified by Nisio [1960] and which states that every ergodic noise can be approximated in law by noises of the form (2). Note that not every ergodic noise has the representation (2); a necessary condition is strong mixing [McKean, 1973]. (For a discussion of the ergodicity of stationary processes see Doob [1953].)
More generally, instead of sending white noise through a nonlinear device θ, we may send an arbitrary mean square continuous Gaussian noise X = (X_t, −∞<t<∞) with covariance R. Then the noise Y obtained by shifting the incoming Gaussian noise can be expressed as

(3)  Y_t = Σ_{p≥1} ∫⋯∫ f_p(t₁−t,…,t_p−t) dX_{t₁} ⋯ dX_{t_p},

where f_p ∈ Λ₂(⊗^p R), and the covariance function of Y is again readily seen to be

E Y_t Y_s = Σ_{p≥1} p! ∫⋯∫ f_p(t₁,…,t_p) f_p(s₁−τ,…,s_p−τ) R(t₁,s₁) ⋯ R(t_p,s_p) dt₁ ds₁ ⋯ dt_p ds_p
  = Σ_{p≥1} p! ⟨f_p(·,…,·), f_p(·−τ,…,·−τ)⟩_{Λ₂(⊗^p R)}  (τ = t−s; cf. II.3.).

Although (2) and (3) are intuitively clear, they require proof. The proof of (3) follows from the following property (and (2) is shown similarly).
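A discrete sketch of the covariance formula above for a first-order (p = 1) presentable noise: Y_t = Σᵢ f(i)X_{t+i} with X a stationary Gaussian sequence and R(k) = ρ^{|k|} (an AR(1) choice, purely illustrative); then E Y₀Y_τ = Σᵢⱼ f(i)f(j)R(τ+j−i):

```python
import numpy as np

# Assumptions (illustrative): X a stationary Gaussian AR(1) sequence with
# R(k) = rho^{|k|}, filter f of length 3, lag tau = 3; discrete analogue of (3)
# with p = 1: Y_t = sum_i f(i) X_{t+i}, E Y_0 Y_tau = sum_{i,j} f(i)f(j)R(tau+j-i).
rng = np.random.default_rng(6)
rho, T, tau = 0.6, 200_000, 3
f = np.array([1.0, -0.5, 0.25])

X = np.empty(T)
X[0] = rng.normal()
innov = np.sqrt(1 - rho**2) * rng.normal(size=T)
for k in range(1, T):                      # AR(1) with stationary variance 1
    X[k] = rho * X[k - 1] + innov[k]

Y = np.convolve(X, f[::-1], mode="valid")  # Y_t = sum_i f(i) X_{t+i}
emp = np.mean(Y[:-tau] * Y[tau:])          # empirical E Y_0 Y_tau

R = lambda k: rho ** abs(k)
theory = sum(f[i] * f[j] * R(tau + j - i) for i in range(3) for j in range(3))
print(emp, theory)
```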
LEMMA 1. If X is a Gaussian noise with covariance R, then, for f_p ∈ Λ₂(⊗^p R),

∫⋯∫ f_p(t₁,…,t_p) X_{t₁+t} ⋯ X_{t_p+t} dt₁ ⋯ dt_p = ∫⋯∫ f_p(t₁−t,…,t_p−t) X_{t₁} ⋯ X_{t_p} dt₁ ⋯ dt_p.

PROOF. Both integrals are well defined. Pick a CONS {φᵢ} in Λ̂₂(R); it suffices to prove the assertion for

f_p = φ₁^{⊗p₁} ⊗ ⋯ ⊗ φ_k^{⊗p_k},  p₁+⋯+p_k = p, k ≥ 1.

But for such f_p the assertion becomes

H_{p₁,‖φ₁‖²}(∫φ₁(t₁)X_{t₁+t}dt₁) ⋯ H_{p_k,‖φ_k‖²}(∫φ_k(t_k)X_{t_k+t}dt_k)
  = H_{p₁,‖φ₁‖²}(∫φ₁(t₁−t)X_{t₁}dt₁) ⋯ H_{p_k,‖φ_k‖²}(∫φ_k(t_k−t)X_{t_k}dt_k),

and thus we only need to show that

∫ φ(s) X_{s+t} ds = ∫ φ(s−t) X_s ds.

This is true for φ a step function and hence for φ ∈ Λ̂₂(R). The proof is now complete.  Q.E.D.
When Y has representation (3), we say that Y is X-presentable. Note that X is always X-presentable, since X_t = ∫ δ₀(u−t) X_u du. If X is not strongly mixing then X is not white noise-presentable; it is, however, X-presentable.

In the following we shall prove the analogue of the Wiener-Nisio theorem, using Nisio's approach as simplified (for convergence in law) by McKean [1973]. Let X be a sample continuous ergodic Gaussian noise, and let x = (x(t), −∞<t<∞) be the corresponding coordinate process. We assume that X satisfies the following condition (S):

(S)  Pr(X_t > 0, 0≤t≤n) > 0  ∀ n≥1.

THEOREM 2. Let Y be a measurable ergodic noise (defined on any probability space). Then Y can be approximated in law by a sequence of X-presentable noises.

The proof of Theorem 2, requiring several auxiliary results, will be given after the following theorem, which shows that not every ergodic noise is X-presentable.
THEOREM 3. If X is a Gaussian noise with absolutely continuous spectral distribution, then every X-presentable noise Y is strongly mixing.

PROOF. Having introduced the Fourier transform on Λ̂₂(R) in II.3., the proof is similar to McKean's proof for X white noise. We need to show that for A, B ∈ B(ℝ^ℝ),

lim_{τ→∞} Pr(Y∈A, Y^τ∈B) = Pr(Y∈A) Pr(Y∈B),

where Y^τ denotes the shift of Y by τ, i.e. Y^τ_t = Y_{t+τ}. In fact we will show that for θ, ρ ∈ L₂(Y),

lim_{τ→∞} E θρ^τ = Eθ Eρ,

where ρ^τ denotes the functional of shifted paths, i.e. ρ^τ(Y) = ρ(Y^τ). Then the strong mixing property is self-evident. By the presentability of Y, we can expand θ and ρ^τ as follows:

θ = Eθ + Σ_{p≥1} ∫⋯∫ f_p(t₁,…,t_p) dX_{t₁} ⋯ dX_{t_p},
ρ^τ = Eρ + Σ_{p≥1} ∫⋯∫ g_p(t₁−τ,…,t_p−τ) dX_{t₁} ⋯ dX_{t_p},

where f_p, g_p ∈ Λ₂(⊗^p R), and we have

E θρ^τ = Eθ Eρ + Σ_{p≥1} p! ⟨f_p, g_p(·−τ,…,·−τ)⟩_{Λ₂(⊗^p R)}.

By Theorem 10 in Chapter II,

⟨f_p, g_p(·−τ,…,·−τ)⟩_{Λ₂(⊗^p R)} = ∫⋯∫ e^{iτ(λ₁+⋯+λ_p)} f̂_p(λ₁,…,λ_p) conj ĝ_p(λ₁,…,λ_p) dF(λ₁) ⋯ dF(λ_p)

(where F is the spectral distribution of X), which approaches 0 as τ → ∞ by the Riemann-Lebesgue theorem. Also

Σ_{p≥1} p! |⟨f_p, g_p(·−τ,…,·−τ)⟩| ≤ ½ (Var θ + Var ρ) < ∞.

It then follows that lim_{τ→∞} E θρ^τ = Eθ Eρ.  Q.E.D.
For the proof of Theorem 2 we shall first introduce a sequence of r.v.'s {a_n(x)} which will play a fundamental role in the proof. Let

    S(x) = {t: x(t) > 0} ,   x ∈ ℝ^ℝ.

Because of the continuity of the paths of X, S(x) is an open set with probability 1 and can therefore be expressed as a denumerable disjoint sum of open intervals I_i(x), i ≥ 1. Put
    S_n(x) = ∪_{i≥1} {I_i(x): |I_i(x)| > n, I_i(x) ⊂ (−n,∞)} .

Denote by x_s^+ the shifted path of x by s, i.e. x_s^+(t) = x(t+s).

LEMMA 4. S_n(x) is nonempty for every n ≥ 1 and for almost all x.

PROOF. Let f(x) be an integrable function. Then by the ergodicity of X we have

    lim_{T→∞} (1/T) ∫_a^T f(x_t^+) dt = Ef   a.e.

Now put a = −n and

    f(x) = 1 if x(t) > 0 for 0 ≤ t < n ;   f(x) = 0 otherwise.

Then it follows that

    Ef = Pr(X_t > 0, 0 ≤ t < n) > 0

by the assumption (S). But S_n(x) is nonempty if Ef > 0. Thus the proof is complete. Q.E.D.
Now we shall define a_n(x) by

    a_n(x) = n + inf S_n(x) .

By Lemma 4, a_n(x) is finite with probability 1. Next we shall determine the probability law of a_n(x).

LEMMA 5. The probability distribution of a_n(x) is absolutely continuous and its density function, say p_n, is flat on [0,n] and decreasing on (n,∞).

PROOF. Note that "a_n(x) = t" is equivalent to the condition

    (t−n, t] ⊂ S(x), t−n ∉ S(x)   if 0 ≤ t ≤ n ;
    (t−n, t) ⊂ S(x), t−n ∉ S(x), (−n, t−n) ∩ S_n(x) = ∅   if t ≥ n ;

we remark that if 0 ≤ t ≤ n, then t−n ∉ S(x) implies (−n, t−n) ∩ S_n(x) = ∅.
For 0 < t < t′, s > 0, t+s < t′+s ≤ n we have

    Pr(a_n(x) ∈ (t+s, t′+s)) = Pr(∃ h ∈ (t+s, t′+s); h−n ∉ S(x), (h−n, h) ⊂ S(x))
        = Pr(∃ h ∈ (t, t′); h−n ∉ S(x_s^+), (h−n, h) ⊂ S(x_s^+))
        = Pr(∃ h ∈ (t, t′); h−n ∉ S(x), (h−n, h) ⊂ S(x))
          (by the stationarity of X),

while for t′+s ≥ n, t > 0, s > 0 we get

    Pr(a_n(x) ∈ (t+s, t′+s)) = Pr(∃ h ∈ (t+s, t′+s); h−n ∉ S(x), (h−n, h) ⊂ S(x), (−n, h−n) ∩ S_n(x) = ∅)
        = Pr(∃ h ∈ (t, t′); h−n ∉ S(x_s^+), (h−n, h) ⊂ S(x_s^+), (−n−s, h−n) ∩ S_n(x_s^+) = ∅)
        ≤ Pr(∃ h ∈ (t, t′); h−n ∉ S(x_s^+), (h−n, h) ⊂ S(x_s^+), (−n, h−n) ∩ S_n(x_s^+) = ∅).

Now the assertion follows easily from these two equations. Q.E.D.
PROOF OF THEOREM 2. Since p_n(z) is nonnegative, decreasing and integrable, we have lim_{z→∞} p_n(z) = 0 and lim_{z→∞} z p_n(z) = 0. Therefore

    p_n(t) = ∫_t^∞ (−dp_n(z)) = ∫_0^∞ C_z(t) z (−dp_n(z)) ,

where

    C_z(t) = 1/z for 0 ≤ t ≤ z ,   C_z(t) = 0 for t > z ,

and

    ∫_0^∞ z (−dp_n(z)) = −z p_n(z) |_0^∞ + ∫_0^∞ p_n(z) dz = 1 ,
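As a concrete sanity check, the decomposition p(t) = ∫_0^∞ C_z(t) z(−dp(z)) can be verified numerically for a specific nonnegative, decreasing, integrable density. The sketch below is not from the text: the choice p(z) = e^{−z} (a stand-in for p_n) and the midpoint quadrature are illustrative assumptions.

```python
from math import exp

def p(z):
    # stand-in for the density p_n: nonnegative, decreasing, integrable
    return exp(-z)

def C(z, t):
    # C_z(t) = 1/z on [0, z] and 0 beyond, as in the proof
    return 1.0 / z if 0 <= t <= z else 0.0

def mixture(t, zmax=40.0, m=200000):
    # int_0^zmax C_z(t) * z * (-dp(z)), with -dp(z) = e^{-z} dz,
    # approximated by a simple midpoint rule
    h = zmax / m
    total = 0.0
    for k in range(m):
        z = (k + 0.5) * h
        total += C(z, t) * z * exp(-z) * h
    return total
```

For this density the mixture reproduces p(t) itself, which is exactly the convex-combination claim of the proof.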
which implies that p_n(t) is a convex combination of the C_z(t) with the weight dσ_n(z) = −z dp_n(z). Pick a "typical" sample path y of Y, to be specified further below. Then

    E y²(−a_n(x)) = ∫_0^∞ y²(−t) p_n(t) dt = ∫_0^∞ [ (1/z) ∫_0^z y²(−t) dt ] dσ_n(z) < ∞

since lim_{z→∞} (1/z) ∫_0^z y²(−t) dt = E Y_0² < ∞ by the ergodicity of Y. Thus y(−a_n(x)) ∈ L_2(X). Now we can define a sequence of X-presentable noises by

    X_t^(n) = y(−a_n(x_t^+)) ,

which will approximate Y in law.
Note that a_n(x_s^+) = a_n(x) − s if a_n(x) ≥ s. So the probability of the cylinder set

    X^(n) ∈ B = {x ∈ ℝ^ℝ: a_i ≤ x(t_i) < b_i, 1 ≤ i ≤ m}

is the same as

    Pr(∩_{i=1}^m (a_i ≤ y(−a_n(X_{t_i}^+)) < b_i))
        = Pr(∩_{i=1}^m (a_i ≤ y(−a_n(X) + t_i) < b_i); a_n(X) ≥ max_i t_i)
        + Pr(∩_{i=1}^m (a_i ≤ y(−a_n(X_{t_i}^+)) < b_i); a_n(X) < max_i t_i),

and as n → ∞ the second term goes to 0 since Pr(a_n(X) < max_i t_i) = o(1); the first term becomes

    lim_n ∫_0^∞ 1_{∩_i(a_i ≤ y(t_i−t) < b_i)}(t) p_n(t) dt
        = lim_n ∫_0^∞ [ (1/z) ∫_0^z 1_{∩_i(a_i ≤ y(t_i−t) < b_i)}(t) dt ] dσ_n(z)
        = Pr(Y ∈ B)

since (1/z) ∫_0^z 1_{∩_i(a_i ≤ y(t_i−t) < b_i)}(t) dt → Pr(Y ∈ B) as z → ∞ by the ergodicity of Y. Thus we have shown that X^(n) converges to Y in law as n → ∞. Q.E.D.
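The random times a_n(x) used in the proof can be computed directly from a discretised path. The following sketch is illustrative only: the grid, the piecewise path, and the helper name `a_n` are assumptions, not from the text. It finds the first positive excursion of length greater than n, as in the definition a_n(x) = n + inf S_n(x).

```python
def a_n(ts, xs, n):
    # collect maximal runs where the path is positive (the intervals I_i(x))
    runs, start = [], None
    for t, v in zip(ts, xs):
        if v > 0 and start is None:
            start = t
        elif v <= 0 and start is not None:
            runs.append((start, t))
            start = None
    if start is not None:
        runs.append((start, ts[-1]))
    # intervals of S_n(x): length > n and contained in (-n, infinity)
    for a, b in runs:
        if b - a > n and a > -n:
            return n + a          # n + inf S_n(x)
    return None                   # S_n(x) empty on this finite window

dt = 0.01
ts = [round(-2 + k * dt, 10) for k in range(1201)]      # grid on [-2, 10]
xs = [1.0 if 1.0 <= t <= 4.5 else -1.0 for t in ts]     # positive exactly on [1, 4.5]
```

Here the only excursion of length > 2 starts at t = 1, so a_2 = 2 + 1 = 3, while no excursion exceeds length 5.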
We will conclude this chapter with a discussion of the assumption (S). As before we assume that X is a sample continuous ergodic Gaussian noise with covariance function R(t,s) = R(t−s). We believe that (S) always holds, yet we are not able to prove it. Instead, we find two sufficient conditions for (S) which indicate that (S) is a mild assumption (if it is a restriction at all). These two sufficient conditions are the following:

    (S_1)   for every n ≥ 1 there exists f_n ∈ R(R) with f_n > 0 on [0,n] ;
    (S_2)   R(τ) ≥ 0   ∀ τ ∈ ℝ.
Now we prove that (S_1) implies (S). (I owe this proof to L. Pitt.) First note that X is mean square continuous since it is a sample continuous Gaussian process. Thus R(t,s) is continuous and every f ∈ R(R) is continuous. Assume (S_1). Then cf_n ∈ R(R) ∀ c ∈ ℝ and thus P^(cf_n) ≈ P (cf. IV.1, and recall that P^(cf_n) is the measure P translated by cf_n). By the sample continuity of X and the continuity of f_n > 0, there exists c > 0 such that

    P^(cf_n)(X_t > 0, 0 ≤ t ≤ n) = P(X_t + c f_n(t) > 0, 0 ≤ t ≤ n) > 0

(since {ω ∈ Ω: X_t(ω) + c f_n(t) > 0, 0 ≤ t ≤ n} ↑ Ω as c ↑ ∞). (S) now follows from the equivalence of P and P^(cf_n).
Next we prove that (S_2) implies (S). The key to the proof is a theorem of Slepian [1962, p. 469], which implies that if R(τ) ≥ 0 then

    Pr(X_t > 0, 0 ≤ t ≤ n) ≥ [Pr(X_t > 0, 0 ≤ t ≤ δ)]^{n/δ} .

If (S) does not hold, then Pr(X_t > 0, 0 ≤ t ≤ δ) = 0 ∀ δ ≤ n, which implies Pr(X_0 > 0) = 0 since, by the sample continuity of X,

    {ω ∈ Ω: X_t(ω) > 0, 0 ≤ t ≤ δ} ↑ {ω ∈ Ω: X_0(ω) > 0}   as δ ↓ 0.

This is obviously a contradiction since X_0 is a nontrivial Gaussian r.v. Thus (S) holds.
(S_1) is satisfied by all stationary processes with rational spectral densities. For it is well known that for a stationary process with rational spectral density, R(R) = W_2^m (with 2m = degree of denominator − degree of numerator), the set of all functions possessing on every finite interval absolutely continuous derivatives up to order m−1 and a square integrable m-th derivative. (S_2) is satisfied by the processes with triangular covariance functions.
VII. APPENDIX

1. Tensor Product of Hilbert Spaces

Let H_1 and H_2 be two Hilbert spaces with inner products <·,·>_1 and <·,·>_2. For each h_1 ∈ H_1, h_2 ∈ H_2 define the map h_1⊗h_2 : H_1×H_2 → ℝ by

    (h_1⊗h_2)(g_1,g_2) = <h_1,g_1>_1 <h_2,g_2>_2 .

We then have the obvious relation: h_1⊗h_2 is bilinear in (h_1,h_2). Let H be the linear span of {h_1⊗h_2 : h_1 ∈ H_1, h_2 ∈ H_2}.

LEMMA 1. H is an inner product space under the inner product

    <A, B> = Σ_{n=1}^N Σ_{m=1}^M <h_1^(n), g_1^(m)>_1 <h_2^(n), g_2^(m)>_2 ,

where A = Σ_{n=1}^N h_1^(n) ⊗ h_2^(n) and B = Σ_{m=1}^M g_1^(m) ⊗ g_2^(m).

The tensor product H_1⊗H_2 of H_1 and H_2 is the completion of H w.r.t. its inner product. The following properties are useful to us.
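In finite dimensions the inner product of Lemma 1 can be checked directly: realising h_1⊗h_2 as the rank-one matrix (h_1)_i (h_2)_j, the identity <h_1⊗h_2, g_1⊗g_2> = <h_1,g_1>_1 <h_2,g_2>_2 becomes an elementary computation. A minimal sketch, with ℝ³ and ℝ² standing in for the abstract spaces (the vectors and helper names are illustrative assumptions):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def outer(h1, h2):
    # h1 (x) h2 realised as the rank-one matrix with entries h1[i] * h2[j]
    return [[a * b for b in h2] for a in h1]

def tp_inner(A, B):
    # entrywise sum; for A = h1 (x) h2 and B = g1 (x) g2 this factors
    # into <h1,g1>_1 <h2,g2>_2
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

h1, g1 = [1.0, 2.0, -1.0], [0.5, 1.0, 3.0]
h2, g2 = [2.0, -1.0], [1.0, 4.0]

lhs = tp_inner(outer(h1, h2), outer(g1, g2))
rhs = dot(h1, g1) * dot(h2, g2)
```

Here <h1,g1>_1 = −0.5 and <h2,g2>_2 = −2, so both sides equal 1.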
THEOREM 2. If {f_α, α ∈ A} is complete in H_1 and {g_β, β ∈ B} is complete in H_2, then {f_α ⊗ g_β : α ∈ A, β ∈ B} is complete in H_1⊗H_2.

EXAMPLE. Let (X_1,S_1,μ_1) and (X_2,S_2,μ_2) be two measure spaces. Then L_2(μ_1) ⊗ L_2(μ_2) = L_2(μ_1×μ_2); the identification is such that h_1⊗h_2 corresponds to the function h_1(x_1)h_2(x_2).

Similarly we can define the tensor product H_1⊗···⊗H_p of Hilbert spaces H_1,…,H_p in an obvious manner, and all properties carry over from the case p=2 to the case of general p. If H_1 = ··· = H_p = H, we write ⊗^p H for H⊗···⊗H and ⊗^p f or f^{⊗p} for f⊗···⊗f.

We will define the symmetric tensor product ⊙^p H as consisting of the symmetric elements (tensors) of ⊗^p H. Let Π_p be the set of all permutations of (1,2,…,p).

LEMMA 3. For each π ∈ Π_p there is a unique unitary operator U_π on ⊗^p H such that for all f_1,…,f_p ∈ H,

    U_π(f_1⊗···⊗f_p) = f_{π1} ⊗ ··· ⊗ f_{πp} .
An element f in ⊗^p H is called symmetric if U_π f = f ∀ π ∈ Π_p. The symmetric tensor product space ⊙^p H is the set of all symmetric tensors in ⊗^p H.

THEOREM 4. ⊙^p H is a closed subspace of ⊗^p H, and the operator

    Ũ f = (1/p!) Σ_{π∈Π_p} U_π f

is the projection operator onto ⊙^p H. Hence

    ⊙^p H = sp{ (1/p!) Σ_{π∈Π_p} f_{π1} ⊗ ··· ⊗ f_{πp} : f_1,…,f_p ∈ H } .

EXAMPLE. Let (X,S,μ) be an arbitrary measure space. A function f on X^p is called symmetric if f(t_{π1},…,t_{πp}) = f(t_1,…,t_p) ∀ π ∈ Π_p. Denote by L̃_2(μ^p) the set of all symmetric functions in L_2(μ^p). Then ⊙^p L_2(μ) = L̃_2(μ^p).
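The projection of Theorem 4 can be illustrated in coordinates: for H = ℝ^d a p-tensor is an array indexed by (i_1,…,i_p), U_π permutes the indices, and averaging over all π ∈ Π_p yields a symmetric tensor; applying the average twice changes nothing, as a projection should. A small sketch for p = 3, d = 2 (the dictionary storage and names are assumptions, not from the text):

```python
from itertools import permutations, product

p, dim = 3, 2
perms = list(permutations(range(p)))

def sym(T):
    # (sym T)[i_1,...,i_p] = (1/p!) * sum over pi of T[i_pi(1),...,i_pi(p)]
    return {idx: sum(T[tuple(idx[pi[k]] for k in range(p))]
                     for pi in perms) / len(perms)
            for idx in product(range(dim), repeat=p)}

# a deliberately non-symmetric tensor to project
T = {idx: float(i) for i, idx in enumerate(product(range(dim), repeat=p))}

S = sym(T)    # symmetric part of T
S2 = sym(S)   # projecting again changes nothing
```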
THEOREM 5. If {e_γ, γ ∈ Γ} is a complete set in H, then {e_{γ_1} ⊗ ··· ⊗ e_{γ_p} : γ_1,…,γ_p ∈ Γ} and {e_{γ_1} ⊙ ··· ⊙ e_{γ_p} : γ_1,…,γ_p ∈ Γ} are complete sets in ⊗^p H and ⊙^p H respectively.

THEOREM 6.
Let H be a Hilbert space. To each element U ∈ H⊗H there corresponds a bounded linear operator on H, say Ũ. This operator is such that

    Σ_γ ‖Ũ h_γ‖² < ∞

for any CONS {h_γ, γ ∈ Γ} of H. Conversely, every bounded linear operator V on H with Σ_γ ‖V h_γ‖² < ∞ for a CONS {h_γ} of H is the operator Ũ associated with a unique element U in H⊗H. Moreover, U ∈ H⊙H if and only if Ũ is self-adjoint.

The operators Ũ on H associated with an element U ∈ H⊗H as above are called Hilbert–Schmidt. In virtue of Theorem 6, we shall make no distinction between U and Ũ.
THEOREM 7. Every U ∈ H⊙H can be written in the form

    U = Σ_{i≥1} λ_i f_i ⊗ f_i ,

where {f_i, i ≥ 1} is an orthonormal sequence and Σ_{i≥1} |λ_i|² < ∞. Also we have Ũ f_i = λ_i f_i.
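In finite dimensions Theorem 7 reduces to the spectral theorem for symmetric matrices: a self-adjoint element of H⊗H is a symmetric Hilbert–Schmidt matrix U, and U = Σ_i λ_i f_i⊗f_i with orthonormal eigenvectors f_i. A hand-computed 2×2 sketch (the particular matrix and its eigenpairs are illustrative assumptions):

```python
from math import sqrt

U = [[2.0, 1.0],
     [1.0, 2.0]]

# eigenpairs of U, computed by hand: U(1,1)^T = 3(1,1)^T, U(1,-1)^T = 1(1,-1)^T
lam = [3.0, 1.0]
f = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

def outer(u, v):
    # u (x) v as the rank-one matrix u_i v_j
    return [[a * b for b in v] for a in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

# U = lam_1 f_1 (x) f_1 + lam_2 f_2 (x) f_2
recon = mat_add(mat_scale(lam[0], outer(f[0], f[0])),
                mat_scale(lam[1], outer(f[1], f[1])))
```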
2. Hermite Polynomials

Let X be a Gaussian variable with zero mean and variance t. Consider the sequence of random variables in L_2(X):

    1, X, X², X³, … .

Applying the Gram–Schmidt procedure to orthogonalize this sequence, we obtain the orthogonal sequence H_{p,t}(X), p ≥ 0, and H_{p,t} (p = 0,1,2,…) is called the Hermite polynomial of degree p (with parameter t). H_{p,t}(X) is a polynomial in both variables X and t. The first few Hermite polynomials are

    H_{0,t}(X) = 1
    H_{1,t}(X) = X
    H_{2,t}(X) = X² − t
    H_{3,t}(X) = X³ − 3tX .

The Hermite polynomials satisfy the following:

    E H_{p,t}(X) H_{q,t}(X) = p! t^p δ_{pq} ,
    H_{p,t}(X) = X H_{p−1,t}(X) − (p−1) t H_{p−2,t}(X) ,   p ≥ 2 ,
    e^{uX − u²t/2} = Σ_{p≥0} H_{p,t}(X) u^p / p!   ∀ u ∈ ℝ ,
    H_{p,t}(σX) = σ^p H_{p,t/σ²}(X)   ∀ σ > 0 .
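The listed properties can be checked in exact arithmetic from the recurrence alone: building the coefficients of H_{p,t} from H_{p,t}(x) = x H_{p−1,t}(x) − (p−1) t H_{p−2,t}(x) and using the Gaussian moments E X^{2k} = (2k−1)!! t^k reproduces the orthogonality relation E H_{p,t}(X) H_{q,t}(X) = p! t^p δ_{pq}. A sketch for the specific choice t = 2 (the function names are assumptions):

```python
from math import factorial

t = 2  # variance of X, chosen for the check

def hermite(p):
    # coefficient list c[j] of x^j for H_{p,t}, built from the recurrence
    if p == 0:
        return [1]
    if p == 1:
        return [0, 1]
    a, b = hermite(p - 1), hermite(p - 2)
    c = [0] * (p + 1)
    for j, v in enumerate(a):
        c[j + 1] += v               # x * H_{p-1,t}(x)
    for j, v in enumerate(b):
        c[j] -= (p - 1) * t * v     # - (p-1) t H_{p-2,t}(x)
    return c

def moment(k):
    # E X^k for X ~ N(0, t): (k-1)!! t^{k/2} for even k, 0 for odd k
    if k % 2:
        return 0
    dfact = 1
    for i in range(1, k, 2):
        dfact *= i
    return dfact * t ** (k // 2)

def expect_product(p, q):
    # E H_{p,t}(X) H_{q,t}(X) by expanding the product in moments
    cp, cq = hermite(p), hermite(q)
    return sum(cp[i] * cq[j] * moment(i + j)
               for i in range(len(cp)) for j in range(len(cq)))
```

With t = 2 this reproduces H_{2,t}(x) = x² − 2, H_{3,t}(x) = x³ − 6x, and E H_{p,t} H_{q,t} = p! 2^p δ_{pq} for small p, q.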
BIBLIOGRAPHY

Besson, J. L. [1974] Continuité des fonctions aléatoires sphériquement invariantes, Publication du Département de Mathématiques, Lyon, pp. 59-69.

Cambanis, S. [1974] Stochastic Processes (Lecture Notes), University of North Carolina at Chapel Hill.

Cambanis, S. and Rajput, B. S. [1973] Some zero-one laws for Gaussian processes, Annals of Probability, 1: pp. 304-312.

Cameron, R. H. and Martin, W. T. [1947] The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals, Annals of Mathematics, 48: pp. 385-392.

Cramér, H. [1951] A contribution to the theory of stochastic processes, Proc. Berkeley Symposium on Math. Statistics and Probability, pp. 329-339, Berkeley: University of California Press.

Doob, J. L. [1953] Stochastic Processes, New York: Wiley.

Donelson, J. III and Maltz, F. [1972] A comparison of linear versus nonlinear prediction for polynomial functions of the Ornstein-Uhlenbeck process, Journal of Applied Probability, 9: pp. 725-744.

Fernique, X. [1974] Régularité des trajectoires des fonctions aléatoires Gaussiennes, École d'Été de Calcul des Probabilités, St. Flour.

Fortet, R. [1973] Espaces à noyau reproduisant et lois de probabilités des fonctions aléatoires, Annales de l'Institut Henri Poincaré, Section B, IX: pp. 41-58.

Gikhman, I. I. and Skorokhod, A. V. [1964] Introduction to the Theory of Random Processes, Philadelphia: Saunders.

Gualtierotti, A. F. [1974] Some remarks on spherically invariant distributions, Journal of Multivariate Analysis, 4: pp. 347-349.

Ito, K. [1951] Multiple Wiener integrals, Journal of the Mathematical Society of Japan, 3: pp. 157-169.

Ito, K. [1956] Spectral type of the shift transformation of differential processes with stationary increments, Transactions of the American Mathematical Society, 81: pp. 253-263.

Jaglom, A. M. [1970] Examples of optimal nonlinear extrapolation of stationary processes, Selected Translations in Mathematical Statistics and Probability, 9: American Mathematical Society, pp. 273-298.

Kallianpur, G. [1970] The role of reproducing kernel Hilbert spaces in the study of Gaussian processes. In Ney, P. (Editor), Advances in Probability and Related Topics, New York: Marcel Dekker.

Le Page, R. [1973] Subgroups of paths and reproducing kernels, Annals of Probability, 1: pp. 345-347.

Loève, M. [1955] Probability Theory, New York: Van Nostrand.

McKean, H. P. [1973] Wiener's theory of nonlinear noise. In Stochastic Differential Equations, SIAM-AMS Proceedings, Vol. VI, pp. 191-209.

Meyer, P. A. [1966] Probability and Potentials, Waltham, Mass.: Blaisdell.

Nagornyi, V. N. [1974] Random processes for which optimal prediction is linear, Theory of Probability and Mathematical Statistics, 1: pp. 147-154.

Neveu, J. [1968] Processus Aléatoires Gaussiens, Montréal: Les Presses de l'Université de Montréal.

Nisio, M. [1960] On polynomial approximation for strictly stationary processes, Journal of the Mathematical Society of Japan, 12: pp. 207-226.

Rajput, B. S. [1972] Gaussian measures on L_p spaces, 1 ≤ p < ∞, Journal of Multivariate Analysis, 2: pp. 382-403.

Sytaya, G. N. [1969] On admissible displacements of suspended Gaussian measures, Theory of Probability and its Applications, 14: pp. 506-509.

Shepp, L. A. [1966] Radon-Nikodym derivatives of Gaussian measures, Annals of Mathematical Statistics, 37: pp. 321-354.

Slepian, D. [1962] The one-sided barrier problem for Gaussian noise, Bell System Technical Journal, 41: pp. 463-501.

Vershik, A. M. [1964] Some characteristic properties of Gaussian stochastic processes, Theory of Probability and its Applications, 9: pp. 353-356.

Wiener, N. [1938] The homogeneous chaos, American Journal of Mathematics, 60: pp. 897-936.

Wiener, N. [1958] Nonlinear Problems in Random Theory, New York: Wiley.