Cambanis, S. (1971). "Representation of stochastic processes of second order and linear operations."

REPRESENTATION OF STOCHASTIC PROCESSES
OF SECOND ORDER AND LINEAR OPERATIONS

by

Stamatis Cambanis*

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 735
February 1971

* Department of Statistics, University of North Carolina, Chapel Hill, North Carolina 27514. This research was supported by the National Science Foundation under Grant GU-2059 and by the Air Force Office of Scientific Research under Grant AFOSR-68-1415.
ABSTRACT
Series representations are obtained for the entire class of measurable, second order stochastic processes defined on any interval of the real line. They include as particular cases all earlier representations; they suggest a notion of "smoothness" that generalizes well known continuity notions; and they decompose the stochastic process into two orthogonal parts, the smooth part and a strongly discontinuous part. Also, linear operations on measurable, second order processes are studied; it is shown that all "smooth" linear operations on a process, and in particular all linear operations on a "smooth" process, can be approximated arbitrarily closely by linear operations on the sample paths of the process.
1. INTRODUCTION
Series and integral representations of stochastic processes provide a powerful tool in the study of structural properties of the processes as well as in the analysis of a number of problems in statistical communication theory and stochastic systems. The well known Karhunen-Loève representation [6, p.478] applies to mean square continuous processes defined on compact intervals of the real line. Representations valid over the entire real line have been obtained for subclasses of mean square continuous processes, the wide sense stationary processes [7] and the harmonizable processes [2]. In an attempt to relax both kinds of restrictive assumptions, series representations valid on any interval are obtained in [3] for the class of weakly continuous processes.
In Section 2, two series representations are given in Theorems 1 and 3 for the entire class of measurable, second order stochastic processes, valid on any interval of the real line. Even though these representations are not unique, they have some interesting invariant properties. For instance, it is shown in Theorem 2 that they provide a unique decomposition of every second order process into two orthogonal parts, one of which can be considered as being the "smooth" part of the process. This smooth part, as expressed in the representation of Theorem 1, can be obtained explicitly in a straightforward way, while in the representation of Theorem 3 it is expressed in terms of eigenvalues and eigenfunctions that may be difficult to find explicitly. As a corollary to these representations, a general representation is given in Theorem 5 for symmetric, nonnegative definite kernels on any square of the plane, which generalizes the well known Mercer's theorem.
In Section 3, two kinds of linear operations on a second order stochastic process are considered, the first defined in the stochastic mean, and the second defined on the sample paths of the process. It is shown in Theorem 9 that all "smooth" linear operations of the first kind can be approximated arbitrarily closely in the stochastic mean by linear operations of the second kind.

These basic results presented in Sections 2 and 3 can be used in obtaining a general, explicit solution to linear, mean square "signal" or "system" estimation problems (including additive and/or multiplicative noise problems, stochastic system identification problems, etc.) and also in providing some new insight into the problem of discrimination between two stochastic processes, both in the Gaussian and the non-Gaussian case. These applications will be given in a subsequent paper.

The results presented in this paper are stated and discussed in Sections 2 and 3 and their proofs are given in Section 4.
2. REPRESENTATION OF STOCHASTIC PROCESSES OF SECOND ORDER
The following notation and assumptions will be used throughout this paper.

(I). {x(t,ω), t∈T} is a measurable, second order stochastic process defined on the probability space (Ω,F,P), with T any interval of the real line, open or closed, bounded or unbounded. r_x(t,s) is the autocorrelation function of x(t,ω) and H(x) denotes the subspace of L_2(Ω,F,P) spanned by the random variables {x(t,ω), t∈T}.

(II). μ is any measure on (T,B(T)) (B(T) is the σ-algebra of Lebesgue measurable subsets of T) which is equivalent to the Lebesgue measure (i.e., mutually absolutely continuous) and satisfies

(2.1)    ∫_T r_x(t,t) dμ(t) < +∞.

That such measures μ on (T,B(T)) exist follows from the following particular choice [3]: define μ_0 by

    [dμ_0/dLeb](t) = f(t) g(t),

where g(t) > 0 a.e. [Leb] on T, g∈L_1(T,B(T),Leb), f(t) = 1 for r_x(t,t) ≤ 1 and f(t) = r_x^{-1}(t,t) for 1 < r_x(t,t). It is clear that μ_0 satisfies (2.1), μ_0 is equivalent to Leb, and μ_0 is finite.

{f_k(t)}_{k=1}^∞ is any complete set in L_2(μ) = L_2(T,B(T),μ). A way to find explicitly such complete sets is presented in [1], from which it is clear that the f_k's can be chosen to be continuous functions on T.
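As a numerical illustration of the construction above, the following sketch uses an assumed example of our own (not from the paper): T = [0,∞) with a variance function r_x(t,t) = t², for which Lebesgue measure itself violates (2.1), and g(t) = e^{-t}. It checks that the resulting measure μ_0 does satisfy (2.1):

```python
import math

def r_diag(t):
    # assumed variance function r_x(t,t); it grows, so Leb alone fails (2.1)
    return t * t

def g(t):
    # g > 0 a.e. [Leb] on T and g is in L_1(T, B(T), Leb)
    return math.exp(-t)

def f(t):
    # f = 1 where r_x(t,t) <= 1, and f = 1/r_x(t,t) where r_x(t,t) > 1
    rt = r_diag(t)
    return 1.0 if rt <= 1.0 else 1.0 / rt

# integral of r_x(t,t) dmu_0(t) = integral of r_x(t,t) f(t) g(t) dt;
# the integrand equals min(r_x(t,t), 1) * g(t) <= g(t), so it is finite
dt = 1e-3
total = sum(r_diag(i * dt) * f(i * dt) * g(i * dt) * dt for i in range(50000))
print(total)  # bounded by the total mass of g, which is 1 here
```

The same truncation idea works for any measurable variance function: wherever the variance is large, f scales it back to 1, and g supplies the integrability.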
(III). The random variables {η_k(ω)}_{k=1}^∞ are defined by

(2.2)    η_k(ω) = ∫_T x(t,ω) f_k*(t) dμ(t)    a.s.

and are of second order, as is easily seen from (2.1). H(x,{f_k}_k,μ) denotes the subspace of L_2(Ω,F,P) spanned by the random variables {η_k(ω)}_{k=1}^∞. {ξ_k(ω)}_{k=1}^∞ are the random variables derived from {η_k(ω)}_{k=1}^∞ by the Gram-Schmidt orthonormalization procedure: ξ_k(ω) = Σ_{j=1}^k b_{kj} η_j(ω). We then have

(2.3)    ξ_k(ω) = ∫_T x(t,ω) g_k*(t) dμ(t)    a.s.,

where g_k(t) = Σ_{j=1}^k b_{kj}* f_j(t).

We always have H(x,{f_k}_k,μ) ⊆ H(x) [3, Theorem 3], and if x(t,ω) is weakly continuous equality holds [3, Theorem 4]. The following two theorems establish the properties of H(x,{f_k}_k,μ) as a subspace of H(x) and its consequences in representing x(t,ω).
THEOREM 1. Every measurable, second order stochastic process {x(t,ω), t∈T} admits the representation

(2.4)    x(t,ω) = Σ_{k=1}^∞ a_k(t) ξ_k(ω) + w(t,ω)

for all t∈T, where the convergence of the series is in the stochastic mean. The orthonormal random variables {ξ_k(ω)}_{k=1}^∞ are given by (2.3) and the time functions {a_k(t)}_{k=1}^∞ by

(2.5)    a_k(t) = E[x(t)ξ_k*] = ∫_T r_x(t,s) g_k(s) dμ(s)

for all t∈T. r_x(t,s) admits the representation

(2.6)    r_x(t,s) = Σ_{k=1}^∞ a_k(t) a_k*(s) + r_w(t,s)

for all t,s∈T, where the convergence of the series is absolute in t and s on T×T, and r_w(t,s) is the autocorrelation function of w(t,ω). The stochastic process {w(t,ω), t∈T} has the following properties:

(i) If H(w) is the subspace of L_2(Ω,F,P) spanned by the random variables {w(t,ω), t∈T}, then

(2.7)    H(x) = H(x,{f_k}_k,μ) ⊕ H(w).

(ii) r_w(t,t) = 0 a.e. [Leb] on T.

THEOREM 2. H(x,{f_k}_k,μ) is independent of the measure μ satisfying (II) and of the complete set {f_k}_k in L_2(μ). Hence, denote H(x,{f_k}_k,μ) by H(x, smooth).
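To see that both summands of (2.7) can be present, consider the following example of ours (it is not taken from the paper), written out under the assumptions that b(t,ω) is a mean square continuous second order process on T = [0,1] and z(ω) is a second order random variable orthogonal to H(b):

```latex
% x is the sum of a smooth process and a single-point disturbance:
\[
  x(t,\omega) \;=\; b(t,\omega) \;+\; z(\omega)\,\mathbf{1}_{\{t_0\}}(t),
  \qquad t \in T = [0,1],\quad t_0 \in T .
\]
% Since \{t_0\} has Lebesgue measure zero, every
% \eta_k = \int_T x(t,\omega) f_k^*(t)\,d\mu(t) coincides with
% \int_T b(t,\omega) f_k^*(t)\,d\mu(t), so that
% H(x,\{f_k\}_k,\mu) = H(b,\{f_k\}_k,\mu) = H(b)
% (b is weakly continuous), and the remainder in (2.4) is
% w(t,\omega) = z(\omega)\,\mathbf{1}_{\{t_0\}}(t), with autocorrelation
\[
  r_w(t,s) \;=\; E\!\left[\,|z|^2\,\right]
  \mathbf{1}_{\{t_0\}}(t)\,\mathbf{1}_{\{t_0\}}(s),
\]
% which vanishes off the Lebesgue-null set \{t_0\}, in agreement with
% (ii) of Theorem 1, while H(w) = \mathrm{sp}\{z\} \neq \{0\}.
```

Thus the smooth part recovers b entirely, while the single-point disturbance is invisible to every linear operation of the form (2.2) and is carried by H(w).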
It is clear from Theorem 2 that the decomposition of x(t,ω) into two orthogonal terms given in (2.4) is unique. However, the representation of the first term in this decomposition by the series Σ_{k=1}^∞ a_k(t) ξ_k(ω) depends clearly on the choice of μ and {f_k}_k. For a particular choice of a complete set {f_k}_k in L_2(μ) we obtain the following theorem.

THEOREM 3. Let {φ_k(t)}_{k=1}^∞ and {λ_k}_{k=1}^∞ be the corresponding eigenfunctions and nonzero eigenvalues of the integral type operator on L_2(μ) with kernel r_x(t,s). Then

(2.8)    H(x, smooth) = sp{ξ_k(ω), k = 1,2,...},

i.e., the random variables {ξ_k(ω)}_{k=1}^∞ defined by

(2.9)    ξ_k(ω) = λ_k^{-1/2} ∫_T x(t,ω) φ_k*(t) dμ(t)    a.s.

are complete in H(x, smooth). If {φ_k(t)}_{k=1}^∞ are the versions of the eigenfunctions which are defined for all t∈T by

(2.10)    φ_k(t) = λ_k^{-1} ∫_T r_x(t,s) φ_k(s) dμ(s),

then x(t,ω) admits the representation

(2.11)    x(t,ω) = Σ_{k=1}^∞ λ_k^{1/2} φ_k(t) ξ_k(ω) + w(t,ω)

for all t∈T, where the convergence of the series is in the stochastic mean, and w(t,ω) is as in Theorem 1. Also

(2.12)    r_x(t,s) = Σ_{k=1}^∞ λ_k φ_k(t) φ_k*(s) + r_w(t,s)

for all t,s∈T, where the convergence of the series is absolute in t and s on T×T.
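The eigenfunctions and eigenvalues in Theorem 3 are available in closed form for some standard kernels. The following sketch uses an assumed example of ours (not from the paper): T = [0,1], μ = Leb and r_x(t,s) = min(t,s), the Wiener process, whose eigenpairs are φ_k(t) = √2 sin((k − 1/2)πt) and λ_k = ((k − 1/2)π)^{-2}. It verifies the eigenfunction identity (2.10) by quadrature:

```python
import math

n = 2000
d = 1.0 / n
ts = [(i + 0.5) * d for i in range(n)]

def phi(k, t):
    # k-th eigenfunction of the kernel min(t,s) on [0,1]
    return math.sqrt(2.0) * math.sin((k - 0.5) * math.pi * t)

def lam(k):
    # k-th eigenvalue of the same kernel
    return ((k - 0.5) * math.pi) ** (-2)

# (2.10): phi_k(t) = lambda_k^{-1} * int_T r_x(t,s) phi_k(s) dmu(s)
for k in (1, 2, 3):
    for t in (0.25, 0.7):
        quad = sum(min(t, s) * phi(k, s) for s in ts) * d
        assert abs(quad / lam(k) - phi(k, t)) < 1e-3
print("(2.10) holds for the Wiener kernel eigenpairs")
```

For kernels without closed-form eigenpairs, the same identity can be used as a numerical check on eigenpairs obtained by discretizing the integral operator.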
Because of Theorem 2, (2.7) is written as

(2.13)    H(x) = H(x, smooth) ⊕ H(w).

It is clear from (ii) of Theorem 1 that the process x(t,ω) depends on the random variables of H(w) on a zero Lebesgue measure subset of T. On the other hand, H(x, smooth) represents the "smooth" part of the process x(t,ω), in a sense that is made precise in the next section. In general, both subspaces H(x, smooth) and H(w) appear in (2.13). It is, therefore, interesting to characterize the stochastic processes for which we have H(x) = H(x, smooth) or H(x) = H(w). This is done in the following theorem.

Let us denote by RKHS(r_x) the reproducing kernel Hilbert space of r_x(t,s). Every function in RKHS(r_x) is of the form f(t) = E[ξx*(t)] for all t∈T and some ξ∈H(x), and the correspondence ξ ↔ f is an isomorphism between H(x) and RKHS(r_x) [8, Theorem 5D].
THEOREM 4. (1) The following are equivalent:

(1.1)  H(x) = H(x, smooth).

(1.2)  If f∈RKHS(r_x) and f(t) = 0 a.e. [Leb] on T, then f(t) = 0 for all t∈T.

(1.3)  r_x(t,s) = Σ_{k=1}^∞ λ_k φ_k(t) φ_k*(s) for all t,s∈T, where the φ_k's are given by (2.10).

(1.4)  r_x(t,s) = Σ_{k=1}^∞ a_k(t) a_k*(s) for all t,s∈T, where the a_k's are given by (2.5).

(2) The following are equivalent:

(2.1)  H(x) = H(w).

(2.2)  There is a set T_0∈B(T) with Leb(T_0) = 0 such that if f∈RKHS(r_x) then f(t) = 0 for all t∈T − T_0.

(2.3)  There is a set T_0∈B(T) with Leb(T_0) = 0 such that r_x(t,t) = 0 for all t∈T − T_0.

The process x(t,ω) will be called "smooth" if it satisfies any of the conditions in Theorem 4.(1).
Note that the weak continuity of x(t,ω) is equivalent to the continuity of all functions in RKHS(r_x) [8, Theorem 5E]. Since condition (1.2) requires only those functions in RKHS(r_x) which are equal to zero almost everywhere to be continuous, it is clear that the smoothness of x(t,ω) in the sense of Theorem 4.(1) is a weaker condition than the weak, and therefore the mean square, continuity of x(t,ω). It is also clear by Theorem 4.(2) that H(w) contains a strongly discontinuous part of x(t,ω).

Let us note that if x(t,ω) is smooth in the sense of Theorem 4.(1) then, as is seen from Theorem 2, (2.3), (2.5) and from (2.8), (2.9), (2.10), the sets {a_k*(t)}_{k=1}^∞ and {λ_k^{1/2} φ_k*(t)}_{k=1}^∞ determined by (2.5) and (2.10) are orthonormal and complete in RKHS(r_x).
A useful criterion for x(t,ω) to be Gaussian is obtained as a straightforward consequence of Theorem 2 and the closure property of Gaussian families of random variables with respect to limits in the stochastic mean.

COROLLARY 1. If x(t,ω) is smooth in the sense that it satisfies any of the conditions in Theorem 4.(1), then it is Gaussian if and only if for some complete set {f_k(t)}_{k=1}^∞ in L_2(μ) the family of random variables {η_k(ω)}_{k=1}^∞ defined by (2.2) is Gaussian.
We lastly remark that the stochastic process representations obtained in Theorems 1 and 3 imply the following general result about kernel representation.

THEOREM 5. Every symmetric, nonnegative definite kernel r(t,s) on T×T admits the representations (2.6) and (2.12). If r(t,s) is weakly continuous, or continuous, on T×T, then the representations (1.3) and (1.4) of Theorem 4 hold for all t,s∈T. Necessary and sufficient conditions for r(t,s) to be weakly continuous are given in [8, Theorem 2E]. The uniformity of the convergence of (1.3) and (1.4) of Theorem 4 on compact subsets of T is shown easily by using the continuity of the φ_k's as defined by (2.10) for all t∈T, which in turn follows from the weak continuity of r(t,s).

The last statement in Theorem 5 includes as a particular case Mercer's theorem, for which it provides an alternate proof. In this sense, Theorem 5 can be considered as a generalization of Mercer's theorem to arbitrary symmetric nonnegative definite kernels on arbitrary intervals. It should be emphasized that the representations (2.6) and (2.12) are not trivial. As is obvious from r(t,s)∈L_2(μ×μ), (1.3) and (1.4) of Theorem 4 hold a.e. [Leb×Leb] on T×T; the significance of (2.6) and (2.12) is in that they imply that (1.3) and (1.4) of Theorem 4 hold on (T−T_0) × (T−T_0), with Leb(T_0) = 0.
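For a continuous kernel on a compact square, the last statement of Theorem 5 reduces to Mercer's theorem, which is easy to check numerically. The sketch below uses the same assumed example as before (ours, not from the paper): r(t,s) = min(t,s) on [0,1]×[0,1] with its closed-form eigenpairs. It evaluates the maximal error of the partial sums of the series in (1.3) over a grid and watches it decrease:

```python
import math

def phi(k, t):
    return math.sqrt(2.0) * math.sin((k - 0.5) * math.pi * t)

def lam(k):
    return ((k - 0.5) * math.pi) ** (-2)

def partial(N, t, s):
    # N-term partial sum of the Mercer series for r(t,s) = min(t,s)
    return sum(lam(k) * phi(k, t) * phi(k, s) for k in range(1, N + 1))

grid = [i / 20.0 for i in range(21)]
errs = []
for N in (10, 100, 1000):
    err = max(abs(partial(N, t, s) - min(t, s)) for t in grid for s in grid)
    errs.append(err)
    print(N, err)  # maximal error over the grid shrinks as N grows
```

The worst error occurs on the diagonal t = s, where the series has all nonnegative terms, which is exactly where the monotone convergence underlying Mercer's theorem operates.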
3. LINEAR OPERATIONS ON STOCHASTIC PROCESSES OF SECOND ORDER

A linear operation on {x(t,ω), t∈T} is a transformation which maps {x(t,ω), t∈T} into {y(s,ω), s∈S}, where y(s,ω)∈H(x) for all s∈S, and S is an arbitrary index set. This definition generalizes in a natural way the one usually given for wide sense stationary processes and defines a linear operation in the stochastic mean sense.
On the other hand, it follows by (2.1) and the measurability of the stochastic process x(t,ω) that x(·,ω)∈L_2(μ) a.s. Hence bounded linear operations on the realizations of x(t,ω) can be considered, and they are of the form

(3.1)    ξ(ω) = ∫_T x(t,ω) f*(t) dμ(t),

defined (1) in the stochastic mean sense, and (2) a.s. on the sample paths of the process. Linear operations of the second kind can be realized in a straightforward manner by means of the weighting or filter functions f(t). On the other hand, linear operations of the first kind are very important for linear mean square estimation problems, where the estimate of a second order random variable (or stochastic process) based on {x(t,ω), t∈T} is its projection onto H(x). It is therefore important to study the relationship between the two classes of linear operations. It is shown in Theorem 6 that linear operations of the second kind form a subset of linear operations of the first kind.
THEOREM 6. If ξ(ω) is defined by (3.1) a.s. with f∈L_2(μ), then ξ∈H(x, smooth).
The inclusion relationship between linear operations of the two kinds is proper, as is seen from the fact that the random variables x(t,ω) cannot be obtained by a linear operation of the second kind (3.1) with f∈L_2(μ). The question which naturally arises at this point is whether it is possible to approximate linear operations of the first kind by linear operations of the second kind. It is shown in Theorem 9 that the answer is yes for the subset of linear operations of the first kind that result in random variables in H(x, smooth); and hence, if x is smooth in the sense of Theorem 4.(1), the answer is an unqualified yes. Two interesting properties leading to Theorem 9 are given in Theorems 7 and 8.
From now on, we assume for simplicity that μ is a finite measure, μ(T) < ∞. That finite measures μ satisfying (II) exist is shown by the construction of a particular finite measure in (II). (Note that, as is clear from (3.3), it suffices to choose μ so that μ is equivalent to Leb and ∫_T [r_x(t,t) + r_x^{1/2}(t,t)] dμ(t) < +∞. In this case X is defined by (3.2) for all B∈B(T) with μ(B) < ∞ and, because of |r_X|(T×T) < ∞, it can be extended to B(T).)

For every B∈B(T) define

(3.2)    X(B,ω) = ∫_B x(t,ω) dμ(t)    a.s.

By applying Theorem 6 to f(t) = I_B(t), the indicator function of the set B, we have X(B,ω)∈H(x, smooth) for all B∈B(T). It is also clear from (3.2) that X is a random measure on (T,B(T)), i.e., a countably additive function defined on B(T) with values in H(x, smooth) ⊆ L_2(Ω,F,P). Its corresponding complex measure r_X clearly satisfies

    r_X(A,B) = E[X(A)X*(B)] = ∫_A ∫_B r_x(t,s) dμ(t) dμ(s)

and is also of bounded variation since

(3.3)    |r_X|(T×T) ≤ ∫_T ∫_T |r_x(t,s)| dμ(t) dμ(s) ≤ [∫_T r_x^{1/2}(t,t) dμ(t)]^2 < +∞.
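The bounded variation estimate (3.3) rests on the Schwarz inequality |r_x(t,s)| ≤ r_x^{1/2}(t,t) r_x^{1/2}(s,s). A quick numerical check of the resulting bound, for an assumed kernel of our own choosing (r_x(t,s) = min(t,s) on T = [0,1] with μ = Leb, not from the paper), is:

```python
import math

n = 500
d = 1.0 / n
ts = [(i + 0.5) * d for i in range(n)]

def r(t, s):
    # assumed autocorrelation function (our example)
    return min(t, s)

# total variation of r_X versus the Schwarz bound appearing in (3.3)
var_rX = sum(abs(r(t, s)) for t in ts for s in ts) * d * d
bound = (sum(math.sqrt(r(t, t)) for t in ts) * d) ** 2
assert var_rX <= bound
print(var_rX, bound)
```

Here the double integral of |min(t,s)| is 1/3 while the bound is (2/3)² = 4/9, so the estimate is not tight but comfortably finite, which is all (3.3) requires.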
Let H(X) be the subspace of L_2(Ω,F,P) spanned by the random variables {X(B,ω), B∈B(T)} and let Λ_2(r_X) be the set of all complex valued, B(T)-measurable functions f on T such that ∫_{T×T} f(t) f*(s) r_X(dt,ds) is finite. Then, upon considering two functions f and g in Λ_2(r_X) as identical if and only if

    ∫_{T×T} [f(t)−g(t)] [f*(s)−g*(s)] r_X(dt,ds) = 0,

Λ_2(r_X) becomes a Hilbert space with inner product

    <f,g>_{Λ_2(r_X)} = ∫_{T×T} f(t) g*(s) r_X(dt,ds)

and Λ_2(r_X) = sp{I_B(t), B∈B(T)} [4]. The Hilbert spaces H(X) and Λ_2(r_X) are isomorphic with corresponding elements X(B,ω) and I_B(t), B∈B(T), and integration of functions in Λ_2(r_X) with respect to X is defined by ξ(ω) = ∫_T f(t) X(dt,ω), where ξ(ω) and f are corresponding elements under the isomorphism. As is shown in [2, p.196], for every f∈Λ_2(r_X) we have

(3.4)    ∫_T f(t) X(dt,ω) = ∫_T f(t) x(t,ω) dμ(t)

in the stochastic mean.

THEOREM 7. H(x, smooth) = H(X).
It follows from Theorem 7 that every random variable in H(x, smooth) has the representation (3.4) for some f∈Λ_2(r_X); and hence so does x(t,ω) if it is smooth in the sense of Theorem 4.(1).

It is shown in the proof of Theorem 7 that L_2(μ) is a subset of Λ_2(r_X). It should be clear that Λ_2(r_X) is a much bigger function space than L_2(μ). In fact, Λ_2(r_X) contains functions with properties similar to those of delta functions. This is seen as follows. Assume that x(t,ω) satisfies any of the conditions in Theorem 4.(1), so that H(x) = H(x, smooth) (we may assume that x(t,ω) is mean square continuous). Then H(x) = H(x, smooth) = H(X) and, for all t∈T, there exists f(t,·)∈Λ_2(r_X) such that

    x(t,ω) = ∫_T f(t,u) X(du,ω) = ∫_T f(t,u) x(u,ω) dμ(u)

in the stochastic mean. It follows that for all t,s∈T

    r_x(t,s) = ∫_T ∫_T f(t,u) f*(s,v) r_x(u,v) dμ(u) dμ(v),

which is a delta function type property for f(t,u). The following theorem describes the properties of L_2(μ) as a subset of Λ_2(r_X). Specifically:

THEOREM 8. (i) If {f_k(t)}_{k=1}^∞ is a complete set in L_2(μ), then the set {f_k*(t)}_{k=1}^∞ is complete in Λ_2(r_X); i.e., L_2(μ) is a dense subset of Λ_2(r_X).

(ii) If {φ_k(t)}_{k=1}^∞ are the eigenfunctions of r_x(t,s) corresponding to nonzero eigenvalues, then the set {λ_k^{-1/2} φ_k*(t)}_{k=1}^∞ is orthonormal and complete in Λ_2(r_X).
The question raised after Theorem 6 can be answered now.

THEOREM 9. Given any random variable ξ(ω) in H(x, smooth) and any ε > 0, there exists f_ε∈L_2(μ) such that E[|ξ−ξ_ε|^2] < ε^2, where

(3.5)    ξ_ε(ω) = ∫_T x(t,ω) f_ε*(t) dμ(t)    a.s.

Clearly, if x(t,ω) is smooth in the sense of Theorem 4.(1), then every linear operation on x of the first kind can be approximated by a linear operation on x of the second kind. The proof of Theorem 9 given in Section 4 makes use of Theorems 7 and 8, which are interesting in their own right. By using only Theorem 2, we can obtain another proof of Theorem 9 and also an explicit expression for f_ε(t), as follows. If ξ∈H(x, smooth) and ρ(t) = E[x(t)ξ*] then, as in the proof of Theorem 2, ρ∈L_2(μ) and ξ(ω) = Σ_{n=1}^∞ <g_n,ρ>_{L_2(μ)} ξ_n(ω) in the stochastic mean. Hence, given any ε > 0, there exists N(ε) such that E[|ξ−ξ_ε|^2] < ε^2, where ξ_ε is given by (3.5) and

    f_ε(t) = Σ_{n=1}^{N(ε)} <ρ,g_n>_{L_2(μ)} g_n(t)    a.e. [Leb] on T.
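For the Wiener kernel used in the earlier sketches (our assumed example, not from the paper), the explicit f_ε above can be written in closed form. Taking the eigenfunction choice g_n = λ_n^{-1/2} φ_n and ξ = x(t_0), the evaluation of the smooth process at a point (a linear operation of the first kind), we get ρ(t) = r_x(t,t_0), <ρ,g_n> = λ_n^{1/2} φ_n(t_0), hence f_ε(t) = Σ_{n≤N} φ_n(t_0) φ_n(t), and the approximation error E|ξ−ξ_ε|² is the Mercer tail computed below:

```python
import math

def phi(k, t):
    return math.sqrt(2.0) * math.sin((k - 0.5) * math.pi * t)

def lam(k):
    return ((k - 0.5) * math.pi) ** (-2)

t0 = 0.6  # assumed observation time (our example)
errors = []
for N in (5, 50, 500):
    # E|x(t0) - xi_eps|^2 = r_x(t0,t0) - sum_{n<=N} lambda_n phi_n(t0)^2
    mse = t0 - sum(lam(n) * phi(n, t0) ** 2 for n in range(1, N + 1))
    errors.append(mse)
    print(N, mse)

assert errors[0] > errors[1] > errors[2] > 0.0
```

The error tends to zero as N grows, but is never exactly zero for finite N: as noted above, x(t_0) itself is not of the form (3.1) with f∈L_2(μ), so the approximation of Theorem 9 is genuinely needed.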
It is reasonable to assume that values of realizations of x(t,ω) at fixed t∈T cannot be observed; this would require systems with zero inertia. Thus observations ξ(ω) of x(t,ω) obtained by means of linear systems with nonzero inertia are of the form ξ(ω) = ∫_T x(t,ω) h(t) dt a.s. If ξ has finite second moment, it is shown at the end of Section 4 (proof of a claim) that ξ∈H(x, smooth). As a consequence, among the observations or measurements of the realizations of the process x(t,ω) by means of nonzero inertia linear systems, those that have finite second moments are in H(x, smooth). It follows that the linear mean square estimate of a second order random variable (or stochastic process) based on observations of the realizations of x(t,ω) is the projection of the random variable onto H(x, smooth).
4. PROOFS
PROOF OF THEOREM 1. For every fixed t∈T, denote by y(t,ω) the projection of x(t,ω) onto H(x,{f_k}_k,μ), and set w(t,ω) = x(t,ω) − y(t,ω). Since the set {ξ_k(ω)}_{k=1}^∞ is orthonormal and complete in H(x,{f_k}_k,μ), we have for every t∈T

    y(t,ω) = Σ_{k=1}^∞ a_k(t) ξ_k(ω),

where the convergence is in L_2(Ω,F,P), and where for every k and every t∈T

    a_k(t) = E[y(t)ξ_k*] = E[x(t)ξ_k*] = ∫_T r_x(t,s) g_k(s) dμ(s).

Thus (2.4) and (2.5) are shown. (2.6) follows from (2.4) and the definition of w(t,ω), which implies that w(t,ω) ⊥ H(x,{f_k}_k,μ) for all t∈T. The absolute convergence of the series Σ_k a_k(t) a_k*(s) on T×T follows from Σ_k |a_k(t)|^2 = E[|y(t)|^2] < +∞ for all t∈T.

(i) is shown as follows. If we denote by H' the orthogonal complement of H(x,{f_k}_k,μ) in H(x), it suffices to show that H(w) = H'. By the definition of w(t,ω) we have that w(t,ω)∈H' for every t∈T, and thus H(w) ⊆ H'. Hence it suffices to prove that ξ∈H' and ξ ⊥ H(w) imply ξ = 0. Assume that ξ∈H' ⊆ H(x) and ξ ⊥ H(w). ξ∈H' and ξ ⊥ H(w) imply ξ ⊥ ξ_k for k = 1,2,..., and ξ ⊥ w(t) for all t∈T. Hence, by (2.4), ξ ⊥ x(t) for all t∈T or, equivalently, ξ ⊥ H(x). ξ∈H(x) and ξ ⊥ H(x) imply ξ = 0.
The simplest way of proving (ii) is by making use of Theorem 3. Note that Theorems 2 and 3 are proven without employing (ii), and hence they can be used to provide a proof of (ii). We first note that the measurability of r_w(t,s) follows from (2.6) and the measurability of Σ_k a_k(t) a_k*(s). It follows from (2.12) and the monotone convergence theorem that

(4.1)    ∫_T r_x(t,t) dμ(t) = Σ_{k=1}^∞ λ_k + ∫_T r_w(t,t) dμ(t).

Also, by using Tonelli's theorem, Parseval's relationship and the monotone convergence theorem, we have

(4.2)    ∫_T r_x(t,t) dμ(t) = Σ_{k=1}^∞ <r e_k, e_k>_{L_2(μ)} = tr{r} = Σ_{k=1}^∞ λ_k,

where {e_k(t)}_{k=1}^∞ is any complete orthonormal set in L_2(μ), r is the integral type operator on L_2(μ) with kernel r_x(t,s) and tr{r} is its trace. It follows from (4.1) and (4.2) that

    ∫_T r_w(t,t) dμ(t) = 0

and hence r_w(t,t) = 0 a.e. [Leb] on T.
PROOF OF THEOREM 2. Let μ and μ' be any measures on (T,B(T)) satisfying (II) and let {f_k(t)}_{k=1}^∞ and {f'_k(t)}_{k=1}^∞ be any complete sets in L_2(μ) and L_2(μ'), respectively. We first show that H(x,{f_k}_k,μ) ⊆ H(x,{f'_k}_k,μ'). It suffices to show that ξ∈H(x,{f_k}_k,μ) and ξ ⊥ H(x,{f'_k}_k,μ') imply ξ = 0. Assume that ξ∈H(x,{f_k}_k,μ) and ξ ⊥ H(x,{f'_k}_k,μ'). It follows that for all k

(4.3)    0 = E[ξ η'_k*] = ∫_T E[ξx*(t)] f'_k(t) dμ'(t).

The function f(t) = E[ξx*(t)] belongs to L_2(μ') since |f(t)|^2 ≤ E[|ξ|^2] r_x(t,t) and r_x(t,t)∈L_1(μ') by (2.1). Since by (4.3) f is orthogonal to a complete set in L_2(μ'), we have f = 0 in L_2(μ') and hence, since μ' is equivalent to Leb,

(4.4)    f(t) = E[ξx*(t)] = 0    a.e. [Leb] on T.

Since ξ∈H(x,{f_k}_k,μ),

(4.5)    ξ(ω) = Σ_{k=1}^∞ a_k ξ_k(ω)    in L_2(Ω,F,P),

where for every k we obtain from (4.4) and (4.5) that

    a_k = E[ξξ_k*] = ∫_T E[ξx*(t)] g_k(t) dμ(t) = 0.

It follows that ξ = 0 in L_2(Ω,F,P). In the same way it is shown that H(x,{f'_k}_k,μ') ⊆ H(x,{f_k}_k,μ) (these two statements are symmetric), and thus the theorem is proven.
PROOF OF THEOREM 3. By (2.1), r_x(t,s)∈L_2(μ×μ) and thus the integral type operator r on L_2(μ) with kernel r_x(t,s) is Hilbert-Schmidt. Let {φ_k(t)}_{k=1}^∞ and {λ_k}_{k=1}^∞ be its corresponding eigenfunctions and nonzero, hence positive, eigenvalues. Let {φ'_j(t)}_{j=1}^∞ be any complete set in the orthogonal complement of the subspace of L_2(μ) spanned by {φ_k(t)}_{k=1}^∞. Then Theorem 2 implies that

(4.6)    {ξ_k(ω)}_{k=1}^∞ ∪ {η_j(ω)}_{j=1}^∞ are complete in H(x, smooth),

i.e., the random variables {ξ_k(ω)}_{k=1}^∞ defined by (2.9) together with the random variables defined by

    η_j(ω) = ∫_T x(t,ω) φ'_j*(t) dμ(t)    a.s.

But we have for all j that η_j = 0 a.s., since E[|η_j|^2] = <r φ'_j, φ'_j>_{L_2(μ)} = 0 because r φ'_j = 0. Thus η_j = 0 for j = 1,2,..., and (2.8) follows from (4.6).

It follows now from (2.8) that for every t∈T

(4.7)    x(t,ω) = Σ_{k=1}^∞ a_k(t) ξ_k(ω) + w(t,ω)

in the stochastic mean, where w(t,ω) is as in Theorem 1. Since by (2.9) we have E[η_k η_j*] = <r φ_k, φ_j>_{L_2(μ)} = λ_k δ_kj, it follows by (4.7) that for all t∈T and k

    a_k(t) = E[x(t)ξ_k*] = λ_k^{-1/2} ∫_T r_x(t,s) φ_k(s) dμ(s),

and thus, if we consider the versions of the eigenfunctions defined pointwise everywhere on T by (2.10), we obtain a_k(t) = λ_k^{1/2} φ_k(t) for all t∈T and k. Now (2.11) follows from (4.7), and (2.12) follows from (2.11).
PROOF OF THEOREM 4. (1). We first show that (1.1) implies (1.2). Let f∈RKHS(r_x) with f(t) = 0 a.e. [Leb] on T. Then f(t) = E[ξx*(t)] for all t∈T and some ξ∈H(x). Since H(x) = H(x, smooth) = H(x,{f_k}_k,μ), we have ξ(ω) = Σ_{k=1}^∞ a_k ξ_k(ω) in the stochastic mean, where for all k,

    a_k = E[ξξ_k*] = ∫_T f(t) g_k(t) dμ(t) = 0.

Hence ξ = 0 in L_2(Ω,F,P) and f(t) = E[ξx*(t)] = 0 for all t∈T.

We now show that (1.2) implies (1.1). It suffices to show that ξ∈H(x) and ξ ⊥ H(x, smooth) imply ξ = 0. As in the proof of Theorem 2, ξ ⊥ H(x, smooth) implies that f(t) = E[ξx*(t)] = 0 a.e. [Leb] on T. Since f∈RKHS(r_x), (1.2) implies f(t) = E[ξx*(t)] = 0 for all t∈T. Hence ξ ⊥ H(x) and, since ξ∈H(x), ξ = 0.

The equivalence of (1.1) with (1.3) and with (1.4) follows from (2.12) and (2.6) respectively and the fact that H(x) = H(x, smooth) if and only if w(t,ω) = 0 a.s. for all t∈T, i.e., if and only if r_w(t,s) = 0 for all t,s∈T.

(2). It is shown in Theorem 1.(ii) that r_w(t,t) = 0 a.e. [Leb] on T. Let T_0 = {t∈T: r_w(t,t) ≠ 0}. Then T_0∈B(T) and Leb(T_0) = 0.

We first show that (2.1) implies (2.2). Let f∈RKHS(r_x). Then f(t) = E[ξx*(t)] for all t∈T and some ξ∈H(x). Since H(x) = H(w), x(t,ω) = w(t,ω) a.s. for all t∈T and, by (ii) of Theorem 1, we have f(t) = E[ξw*(t)] = 0 for all t∈T − T_0.

We now show that, conversely, (2.2) implies (2.1). It suffices to show that ξ∈H(x, smooth) implies ξ = 0. Let ξ∈H(x, smooth) = H(x,{f_k}_k,μ). Then ξ(ω) = Σ_{k=1}^∞ a_k ξ_k(ω) in the stochastic mean, where for all k,

    a_k = E[ξξ_k*] = ∫_T E[ξx*(t)] g_k(t) dμ(t).

But f(t) = E[ξx*(t)] ∈ RKHS(r_x) and, by (2.2), f(t) = 0 a.e. [Leb] on T. Hence a_k = 0 for all k, and ξ = 0.

For the equivalence between (2.1) and (2.3), we first show that (2.1) implies (2.3). It follows by (2.1) that H(x, smooth) = {0}, hence x(t,ω) = w(t,ω) a.s. for all t∈T, and r_x(t,t) = r_w(t,t) for all t∈T. Now (2.3) follows by Theorem 1.(ii).

Conversely, if r_x(t,t) = 0 a.e. [Leb] on T, then r_x(t,s) = 0 a.e. [Leb×Leb] on T×T, since |r_x(t,s)|^2 ≤ r_x(t,t) r_x(s,s) for all t,s∈T. Then r = 0 and E[|η_k|^2] = <r f_k, f_k>_{L_2(μ)} = 0 for all k. Hence η_k = 0 for all k, H(x, smooth) = H(x,{f_k}_k,μ) = {0} and H(x) = H(w).
PROOF OF THEOREM 6. Let y(t,ω) be the projection of x(t,ω) on H(x, smooth) for every t∈T, as in the proof of Theorem 1. Then x(t,ω) = y(t,ω) + w(t,ω), r_x(t,s) = r_y(t,s) + r_w(t,s) and, by Theorem 1.(ii), r_x(t,s) = r_y(t,s) a.e. [Leb×Leb] on T×T. The integral

(4.8)    η(ω) = ∫_T y(t,ω) f*(t) dμ(t)

exists in the stochastic mean sense as an element in H(y) if and only if

(4.9)    ∫_T ∫_T r_y(t,s) f*(t) f(s) dμ(t) dμ(s) < +∞

[5, p.33]. But (4.9) is satisfied since f∈L_2(μ), r_x(t,s) = r_y(t,s) a.e. [Leb×Leb] on T×T and r_x∈L_2(μ×μ), hence r_y∈L_2(μ×μ). Hence the integral η(ω) exists and η∈H(x, smooth), since the definition of y(t,ω) implies H(y) ⊆ H(x, smooth) (in fact it can be easily shown that H(y) = H(x, smooth)). It follows from (3.1), (4.9) and r_x = r_y a.e. on T×T that

(4.10)    E[|ξ|^2] = ∫_T ∫_T r_x(t,s) f*(t) f(s) dμ(t) dμ(s) = E[|η|^2].

By (4.8) and a property of the stochastic mean integral [5, p.30] we have

(4.11)    E[|η|^2] = ∫_T ∫_T r_y(t,s) f*(t) f(s) dμ(t) dμ(s).

Also, by a property of the stochastic mean integral, we get

(4.12)    E[ξη*] = ∫_T ∫_T r_y(t,s) f*(t) f(s) dμ(t) dμ(s) = E[|η|^2].

It follows from (4.10), (4.11) and (4.12) that E[|ξ−η|^2] = 0. Hence ξ = η ∈ H(x, smooth).
PROOF OF THEOREM 7. It follows from (3.2) and Theorem 6 that X(B,ω)∈H(x, smooth) for all B∈B(T). Hence H(X) ⊆ H(x, smooth). We now show that H(x, smooth) ⊆ H(X). Since by Theorem 2, H(x, smooth) = H(x,{f_k}_k,μ), where the η_k's are defined by (2.2), it suffices to prove that η_k∈H(X) for all k = 1,2,....

It can be shown, as in the proof of Theorem 6, that the integral in (2.2) defined almost surely or in the stochastic mean gives the same random variable. Moreover, for every f∈L_2(μ),

    ||f||^2_{Λ_2(r_X)} ≤ ∫_{T×T} |f(t)| |f*(s)| |r_X|(dt,ds) = ∫_T ∫_T |f(t)| |f*(s)| |r_x(t,s)| dμ(t) dμ(s) < +∞

by |r_x(t,s)| ≤ r_x^{1/2}(t,t) r_x^{1/2}(s,s), the Schwarz inequality and (2.1), so that L_2(μ) ⊆ Λ_2(r_X). Hence f_k*∈Λ_2(r_X) and, by (3.4), η_k(ω) = ∫_T f_k*(t) X(dt,ω) ∈ H(X) for all k.
PROOF OF THEOREM 8. (i) Let {f_k(t)}_{k=1}^∞ be any complete set in L_2(μ). Then the random variables {η_k(ω)}_{k=1}^∞ defined by (2.2) are complete in H(x, smooth) = H(X). Also, it is shown in the proof of Theorem 7 that η_k(ω) = ∫_T f_k*(t) X(dt,ω). Thus, it follows from the isomorphism between H(X) and Λ_2(r_X) that the set {f_k*(t)}_{k=1}^∞ is complete in Λ_2(r_X). Since {f_k*(t)}_{k=1}^∞ is also complete in L_2(μ), L_2(μ) is dense in Λ_2(r_X).

(ii) Since the random variables ξ_k(ω) = λ_k^{-1/2} ∫_T x(t,ω) φ_k*(t) dμ(t) are shown in Theorem 3 to be complete in H(x, smooth), it follows as in (i) that the set {λ_k^{-1/2} φ_k*(t)}_{k=1}^∞ is complete in Λ_2(r_X). The set is orthonormal since

    <λ_k^{-1/2} φ_k*, λ_j^{-1/2} φ_j*>_{Λ_2(r_X)} = (λ_k λ_j)^{-1/2} ∫_{T×T} φ_k*(t) φ_j(s) r_X(dt,ds) = (λ_k λ_j)^{-1/2} <r φ_k, φ_j>_{L_2(μ)} = δ_kj.

PROOF OF THEOREM 9. Given any ξ∈H(x, smooth) = H(X), there exists f*∈Λ_2(r_X) such that ξ(ω) = ∫_T f*(t) X(dt,ω). Since L_2(μ) is dense in Λ_2(r_X) by Theorem 8, given any ε > 0 there exists f_ε*∈L_2(μ) such that ||f* − f_ε*||_{Λ_2(r_X)} < ε. Hence, if ξ_ε(ω) = ∫_T f_ε*(t) X(dt,ω), we have E[|ξ−ξ_ε|^2] < ε^2. It follows by (3.4) that ξ_ε(ω) is given by (3.5) with the integral defined in the stochastic mean, and this completes the proof because, as is shown in the proof of Theorem 6, the integral in (3.5) defined almost surely and in the stochastic mean gives the same random variable in L_2(Ω,F,P).
PROOF OF A CLAIM (made in the last paragraph of Section 3). If ξ(ω) = ∫_T x(t,ω) h(t) dt a.s. has finite second moment, then

(4.13)    E[|ξ|^2] = ∫_T ∫_T r_x(t,s) h(t) h*(s) dt ds < +∞.

Let μ be any measure as in (II) of Section 2 and F(t) = [dμ/dLeb](t). Then F(t) > 0 a.e. [Leb] on T and, by (4.13), f(t) = h(t)/F(t) ∈ Λ_2(r_X). It now follows from ξ(ω) = ∫_T x(t,ω) f(t) dμ(t) a.s., the argument used in the proof of Theorem 6, and (3.3) that ξ(ω) = ∫_T f(t) X(dt,ω). Hence ξ∈H(X) = H(x, smooth), by Theorem 7.
REFERENCES

[1] S. Cambanis, Bases in L_2 spaces with applications to stochastic processes with orthogonal increments, Proc. Amer. Math. Soc., to appear.

[2] S. Cambanis and B. Liu, On harmonizable stochastic processes, Information and Control 17 (1970), pp. 183-202.

[3] S. Cambanis and E. Masry, On the representation of weakly continuous processes, Information Sci., to appear. Also, in a shortened version, Proc. Eighth Annual Allerton Conference on Circuit and System Theory, October 7-9, 1970, pp. 610-618.

[4] H. Cramér, A contribution to the theory of stochastic processes, Proc. Second Berkeley Symp. Math. Stat. Probability, Univ. of California Press, Berkeley, 1951, pp. 329-339.

[5] K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Sci. Fenn. Ser. A I, No. 37 (1947), pp. 1-79.

[6] M. Loève, Probability Theory, Van Nostrand, Princeton, New Jersey, 1963.

[7] E. Masry, B. Liu and K. Steiglitz, Series expansion of wide-sense stationary random processes, IEEE Trans. Information Theory IT-14 (1968), pp. 792-796.

[8] E. Parzen, Statistical inference on time series by Hilbert space methods, in Time Series Analysis Papers, Holden-Day, San Francisco, California, 1967, pp. 251-382.