Cambanis, S. and Masry, E. (1970). On the representation of weakly continuous stochastic processes.

Institute of Statistics Mimeo Series No. 674
March, 1970
ON THE REPRESENTATION OF WEAKLY CONTINUOUS STOCHASTIC PROCESSES

by

Stamatis Cambanis*
Department of Statistics
University of North Carolina
Chapel Hill, North Carolina

Elias Masry**
Department of Applied Physics and Information Science
University of California at San Diego
ABSTRACT
A novel approach to obtaining series representations in the stochastic mean for weakly, and therefore mean square, continuous stochastic processes is presented. Two distinct orthogonal series representations are derived for the entire class of weakly continuous stochastic processes on any Lebesgue set of the real line, and a constructive procedure to obtain them explicitly is given. They include as particular cases all earlier representations. Also they are shown to converge almost surely in the norm of an L₂ space. Two general results, on which the development of this paper is based, are also presented.
* The major part of this work was done while the author was at the Electrical Engineering Department, Princeton University. His work was supported in part by the National Science Foundation under Grant GK-1439 with Princeton University and by the National Science Foundation under Grant GU-2059 with the University of North Carolina at Chapel Hill.

** This author's work was supported by the National Science Foundation under Grant GK-13192.
1. INTRODUCTION
Series representations of stochastic processes are of considerable use in several problems of statistical communication theory. If {x(t,ω), t∈T} is a stochastic process of second order defined on the Lebesgue measurable set T of the real line, a general series representation is given by

   x(t,ω) = Σ_k η_k(ω) ψ_k(t),                                                (1)

where the convergence is usually in the stochastic mean. The significance of such a representation is in the fact that it decomposes the stochastic process into the random, time-independent part {η_k(ω)}_k and the time-dependent deterministic part {ψ_k(t)}_k, thus providing some insight into the structure of the stochastic process as well as a tool for the study of many particular problems.
Various series representations have been obtained under certain assumptions. Before mentioning them, let us introduce some notation. Let R¹ be the real line, B¹ the σ-algebra of Lebesgue measurable sets on R¹, B¹_T the restriction of B¹ to T∈B¹, R² the plane, B² the σ-algebra of Lebesgue sets on R², and m the Lebesgue measure on the real line.
A second order, mean square continuous stochastic process {x(t,ω), t∈I} on a compact interval I of the real line has the series representation [6,8]

   x(t,ω) = Σ_k ξ_k(ω) f_k(t),                                                (2)

where the convergence is in the stochastic mean uniformly on I. {f_k(t)}_k and {λ_k}_k are the corresponding eigenfunctions and nonzero eigenvalues of the operator on L₂(I, B¹_I, m) with kernel R(t,s), where R(t,s) is the autocorrelation function of x(t,ω). {ξ_k(ω)}_k is a set of random variables defined by

   ξ_k(ω) = ∫_I x(t,ω) f_k*(t) m(dt)                                          (3)

in the stochastic mean and satisfying E[ξ_k ξ_l*] = λ_k δ_kl.
Representation (2) is known as the Karhunen-Loève expansion and its great advantage is that both the time functions and the random variables are orthogonal.

The Karhunen-Loève representation makes the minimum of assumptions on the process, but in general it is not valid over the entire real line or over non-compact intervals of the real line. A series representation similar to (2) and valid over a possibly infinite interval I = (a,b), -∞ ≤ a < b ≤ +∞, of the real line is given in [6] for the subclass of second order, mean square continuous stochastic processes {x(t,ω), t∈I} whose autocorrelation functions satisfy

   ∫_I R(t,t) m(dt) < ∞.                                                      (4)

Equations (2) and (3) are valid in this case, the only difference being that the convergence in (2) is not in general uniform on I. If I is a compact interval, then (4) is satisfied and the convergence in (2) is uniform.
However, for any Lebesgue measurable set T of the real line, it is known that a second order stochastic process {x(t,ω), t∈T} has a series representation of the form (1) if and only if the Hilbert space H(x,T) spanned in the mean square sense by the random variables {x(t,ω), t∈T} is separable [7, p.27]. On the other hand, if {x(t,ω), t∈T} is weakly continuous on T, then H(x,T) is separable [10, Thm. 2D]. Hence every weakly continuous stochastic process, and a fortiori every mean square continuous stochastic process, has a series representation of the form (1) on any Lebesgue set T.
What remains to be established, then, is an explicit way of determining the time functions {ψ_k(t)}_k and the random variables {η_k(ω)}_k. Let us note that if the set {η_k(ω)}_k is orthonormal, then the time functions are given by

   ψ_k(t) = E[x(t) η_k*].                                                     (5)
The time functions and the random variables in the representation (1),
valid over the entire real line, have been obtained in [9] for the class
of mean square continuous, wide sense stationary stochastic processes, and
in [2] for the class of harmonizable stochastic processes.
A general way to obtain complete sets of random variables in the span H(x,T) of any weakly continuous stochastic process {x(t,ω), t∈T}, and therefore series representations of the form (1), is presented in Theorem 4. The random variables are defined by linear operations on the stochastic process. The significance of this novel approach in obtaining series representations for weakly continuous stochastic processes is illustrated by the representations derived in subsequent theorems as immediate consequences of Theorem 4. Theorems 5 and 6 provide two series representations for any weakly continuous stochastic process; that is, they provide two distinct ways of determining the time functions and the random variables in (1). The time functions in the representation of Theorem 5 can be obtained explicitly in a straightforward way, while in order to obtain the time functions in the representation of Theorem 6 the computation of the eigenfunctions of an integral equation is required. The novelty of the representations presented in Theorems 5 and 6 is in the fact that they hold for all weakly, and therefore mean square, continuous stochastic processes over any Lebesgue set of the real line, finite or infinite, compact or non-compact. In contrast, the representations of mean square continuous processes given in [6,8] hold either only on compact subsets of the real line or on any Lebesgue set of the real line with the additional condition (4), and they are included in Theorem 6 as particular cases. Both representations given in Theorems 5 and 6 can be written in the form of the representations given in [9] for mean square continuous, wide sense stationary stochastic processes, and in [2] for harmonizable stochastic processes. They also provide two absolutely convergent series representations for R(t,s).
For all measurable second order stochastic processes, a series representation of the form (1) is obtained in Section 4, where the convergence is not in the stochastic mean anymore but in the norm of an appropriate L₂ space almost surely. It is also shown that both representations of Theorems 5 and 6 converge also almost surely in an L₂ space sense.
All the results presented in Sections 3 and 4 rely on two general results, presented as Theorems 1 and 2 in Section 2. They essentially say that for every measurable second order stochastic process defined on any Lebesgue measurable set T of the real line, there exists an appropriate L₂ space over T with the properties that (i) almost all sample functions of the process belong to it, and (ii) the autocorrelation function of the process, considered as kernel, defines a completely continuous operator on it of the trace class. It is believed that the significance of these two results is beyond their implications presented in this paper.
2. SOME GENERAL RESULTS

This section presents two general results on measurable, second order stochastic processes defined on any Lebesgue set T of the real line, that are used in subsequent sections.
THEOREM 1. For every measurable, second order stochastic process {x(t,ω), t∈T} there exists a finite, non-negative measure μ on (T, B¹_T) such that

   ∫_T R(t,t) μ(dt) < ∞,                                                      (6)

where R(t,s) is the autocorrelation function of x(t,ω).

Proof: Define a function f on T by

   f(t) = 1 for R(t,t) ≤ 1,      f(t) = 1/R(t,t) for R(t,t) > 1.              (7)

Then f is measurable and 0 ≤ R(t,t) f(t) ≤ 1 and 0 ≤ f(t) ≤ 1 on T. If ν is any finite, non-negative measure on (T, B¹_T) and if the measure μ on (T, B¹_T) is defined by μ(dt) = f(t) ν(dt), then μ is non-negative and finite, since 0 ≤ f(t) ≤ 1 on T, and (6) is satisfied. Q.E.D.
Let us point out that if R(t,s) is uniformly bounded on T×T, then μ can be chosen to be any finite, non-negative measure on (T, B¹_T); this includes the cases where x is mean square continuous, wide sense stationary or harmonizable. If R(t,t) is integrable over T, then μ can be chosen to be the Lebesgue measure m on T.
If the measure ν is chosen to be absolutely continuous with respect to the Lebesgue measure m, with Radon-Nikodym derivative (dν/dm)(t), then μ will have Radon-Nikodym derivative with respect to m:

   F(t) = (dμ/dm)(t) = f(t) (dν/dm)(t).                                       (8)

Since x(t,ω) is of second order, it may be assumed that the set T₀ = {t∈T: E[|x(t,ω)|²] = R(t,t) = 0} has Lebesgue measure zero; otherwise one may consider x(t,ω) defined on T − T₀. Hence R(t,t) is nonzero a.e. [m], and f(t) may be chosen positive a.e. [m], as for instance in (7). If (dν/dm)(t) is also chosen nonzero a.e. [m], then the function F defined on T by (8) will be non-negative, nonzero a.e. [m], and Lebesgue integrable over T. Hence we have the following.

COROLLARY. The measure μ of Theorem 1 can always be chosen to be absolutely continuous with respect to the Lebesgue measure m, such that F(t) = (dμ/dm)(t) ≠ 0 a.e. [m] on T.
It should be pointed out that the measure μ of Theorem 1 is not uniquely determined. In the construction of μ given in the proof of Theorem 1, the choice of f is clearly not unique and ν is an arbitrary finite measure; neither is this construction the only way to obtain a measure μ with the properties stated in Theorem 1. However, it should be emphasized that the considerable freedom in the choice of the measure μ with the properties stated in Theorem 1, far from being a drawback, represents an advantage to be appropriately exploited in every particular case.
This point is best illustrated by an example. Let T = [0, +∞) and R(t,s) = min(t,s). As it will become clear in subsequent sections, the construction of a complete set in L₂(T, B¹_T, μ) will frequently be needed. For this purpose, an appropriate choice for f and ν gives F(t) = e^{-t} and enables one to choose as complete set in L₂(T, B¹_T, μ) the Laguerre polynomials or just the powers of t: {t^k, k = 0,1,2,...}.
THEOREM 2. Let {x(t,ω), t∈T} be a measurable, second order stochastic process and let μ be the measure introduced in Theorem 1. Then x(t,ω) ∈ L₂(T, B¹_T, μ) = L₂(μ) almost surely.

Proof: The measurability of x and Fubini's theorem imply

   ∫_T R(t,t) μ(dt) = ∫_T E[|x(t,ω)|²] μ(dt) = E[∫_T |x(t,ω)|² μ(dt)].        (9)

Hence, by (6) and (9), x(t,ω) ∈ L₂(μ) a.s. Q.E.D.
It should be pointed out that the applicability of the property proven in Theorem 2, i.e., of the fact that almost all sample functions of any measurable, second order process belong to some L₂ space, is far beyond its use in Section 4. For instance, this property can be used in generalizing a number of results established in the literature in the particular case of stochastic processes whose sample functions are almost surely square integrable with respect to the Lebesgue measure.
3. DERIVATION OF SERIES REPRESENTATIONS

Throughout this and the next section, the following notation is used:

(i) {x(t,ω), t∈T} is a measurable, second order stochastic process defined on any Lebesgue measurable set T of the real line, and

(ii) μ is a non-negative measure on (T, B¹_T) such that (6) is satisfied and (dμ/dm)(t) ≠ 0 a.e. [m] on T. As shown in Section 2 (Theorem 1 and its corollary), such a measure always exists and can be found explicitly if R(t,t) is known.

Notice that, in order to include in the subsequent study the case considered in [6], i.e., μ = m, the assumption is not made that μ is finite.
Our goal, as made clear in the introduction, is to find complete sets of random variables in the span H(x,T) of a second order stochastic process. Such a set of random variables can be used in obtaining series representations of the process x(t,ω) and of linear operations on x(t,ω). A convenient set of random variables {η_k(ω)}_k is defined by

   η_k(ω) = ∫_T x(t,ω) φ_k*(t) μ(dt)                                          (10)

almost surely, where {φ_k(t)}_k is an arbitrary complete set of functions in L₂(μ).
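Let us note that the integral in (10) is well defined for almost every ω: by Theorem 2 the sample function x(·,ω) belongs to L₂(μ) almost surely, and φ_k ∈ L₂(μ), so that by the Cauchy-Schwarz inequality

   |η_k(ω)| ≤ ‖x(·,ω)‖_{L₂(μ)} ‖φ_k‖_{L₂(μ)} < ∞   almost surely.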
THEOREM 3. The random variables {η_k(ω)}_k defined by (10) are of second order. If H(η) is the Hilbert space spanned in the mean square sense by the random variables {η_k(ω)}_k, then

   H(η) ⊂ H(x,T).                                                            (11)

Proof: Eq. (6) implies R(t,s) ∈ L₂(T×T, B²_{T×T}, μ×μ) = L₂(μ×μ). Hence ∫_T ∫_T R(t,s) φ_k*(t) φ_l(s) μ(dt) μ(ds) is finite and the random variables {η_k(ω)}_k are of second order. To show (11), it suffices to show that if a random variable ζ is orthogonal to x(t) for all t∈T, then ζ is orthogonal to η_k for all k. Assume that ζ is orthogonal to x(t) for all t∈T. Then

   E[x(t) ζ*] = 0    for all t∈T,                                            (12)

and

   E[η_k ζ*] = ∫_T E[x(t) ζ*] φ_k*(t) μ(dt) = 0.                             (13)

Q.E.D.
A natural question to ask is under what conditions equality holds in (11). Before pursuing this question further, let us make the following remark. If T'⊂T is such that m(T−T') = 0, then H(η) ⊂ H(x,T'). This is shown as in Theorem 3; the only difference is that in this case E[x(t)ζ*] = 0 only for t∈T', i.e., a.e. [m] on T, but this suffices to imply E[η_k ζ*] = 0. This remark suggests that H(η) may not contain the random variables x(t) corresponding to points of discontinuity of x of some kind, for instance discontinuity in the mean square sense; it also suggests that H(η) contains only the "smooth" part of x, and therefore, in order to have H(η) = H(x,T), the process x has to be "smooth" in some sense. A continuity ("smoothness") condition on x which implies H(x,T) = H(η) is given in the next theorem.
Let us note that a stochastic process {x(t,ω), t∈T} is called weakly continuous on T if for every t∈T and every ζ∈H(x,T),

   lim_{τ→t} E[x(τ) ζ*] = E[x(t) ζ*].

THEOREM 4. If {x(t,ω), t∈T} is weakly continuous on T, then

   H(η) = H(x,T).                                                            (14)

Proof: In view of (11), it suffices to show H(x,T) ⊂ H(η), i.e., equivalently, that if ζ∈H(x,T) is orthogonal to η_k for all k, then E[|ζ|²] = 0.

Assume that ζ∈H(x,T) is orthogonal to η_k for all k. Then

   0 = E[ζ* η_k] = ∫_T E[ζ* x(t)] φ_k*(t) μ(dt) = ∫_T f(t) φ_k*(t) μ(dt)     (15)

for all k, where f(t) = E[x(t) ζ*]. The function f(t) is a continuous function on T, since x is weakly continuous, and it belongs to L₂(μ); by Eq. (15) it is orthogonal to the complete set {φ_k(t)}_k in L₂(μ). Consequently, f(t) = 0 a.e. [μ], or by (8) |f(t)|² F(t) = 0 a.e. [m], and since F(t) ≠ 0 a.e. [m], it follows that f(t) = 0 a.e. [m] on T. Since f is continuous on T, f(t) = 0 for all t∈T, i.e., ζ is orthogonal to x(t) for all t∈T, so that, since ζ∈H(x,T), E[|ζ|²] = 0. Q.E.D.
It should be pointed out that Theorem 4, and all subsequent results of this section, are true a fortiori for mean square continuous processes, since mean square continuity implies weak continuity.

It would be interesting to find a necessary and sufficient condition for H(x,T) = H(η). The weak continuity of x is shown here to be only a sufficient condition.

Theorem 4 implies that every weakly continuous process x admits the representation

   x(t,ω) = Σ_k η_k(ω) ψ_k(t)                                                (17)

in the stochastic mean on T. However, the time functions {ψ_k(t)}_k in (17) cannot be explicitly obtained, because the set {η_k(ω)}_k is not orthogonal. An explicit expression for the time functions and the random variables in the representation (1) is given in Theorem 5, which follows directly from Theorem 4.
THEOREM 5. Every stochastic process {x(t,ω), t∈T} weakly continuous on T admits the representation

   x(t,ω) = Σ_k ξ_k(ω) b_k(t),                                               (18)

where the convergence is in the stochastic mean on T. The random variables {ξ_k(ω)}_k are derived from {η_k(ω)}_k by the Gram-Schmidt orthonormalization procedure. They are given by

   ξ_k(ω) = Σ_{l=1}^{k} α_{kl} η_l(ω) = ∫_T x(t,ω) g_k*(t) μ(dt)             (19)

almost surely, where g_k(t) = Σ_{l=1}^{k} α_{kl} φ_l(t). They satisfy E[ξ_k ξ_l*] = δ_kl and constitute a complete set in H(x,T). The time functions {b_k(t)}_k are given by

   b_k(t) = E[x(t) ξ_k*] = ∫_T R(t,s) g_k(s) μ(ds) = (R g_k)(t).             (20)

Also R(t,s) admits the representation

   R(t,s) = Σ_k b_k(t) b_k*(s),                                              (21)

where the convergence is absolute in t and s on T×T.

Let us note that (20) implies b_k(t) ∈ L₂(μ) for all k, and by the orthonormality of the set {ξ_k(ω)}_k, (19) and (20), we have

   ∫_T b_k(t) g_l*(t) μ(dt) = δ_kl,                                          (22)

i.e., the set {b_k(t)}_k is biorthogonal to the set {g_k(t)}_k.

In the case where μ is a finite measure, as a complete set of functions {φ_k(t)}_k in L₂(T, B¹_T, μ) one can use the general orthonormal and complete set given in [9, Thm. 2] by

   φ_k(t) = [μ(T)]^{-1/2} exp{(i 2πk/μ(T)) μ((-∞,t)∩T)},   k = 0, ±1, ±2, ....   (23)

If μ is the Lebesgue measure and T is the entire real line or the half line, then as a complete set {φ_k(t)}_k one can choose the Tchebysheff-Hermite or the Tchebysheff-Laguerre functions respectively. On the other hand, if T is a finite interval, then the Legendre polynomials or the trigonometric functions are applicable.
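As an illustrative numerical sketch of (23), take the finite measure μ(dt) = e^{-t} m(dt) on T = [0,+∞) of the example of Section 2, for which μ(T) = 1 and μ((-∞,t)∩T) = 1 − e^{-t}. The orthonormality of the resulting functions in L₂(μ) can then be checked by quadrature; the function names and the use of scipy in the following sketch are choices made only for this illustration.

    # illustration only; names and numerical choices are ours, not from the text
    import numpy as np
    from scipy.integrate import quad

    def mu_cdf(t):
        # mu((-inf,t) ∩ T) for mu(dt) = exp(-t) dt on T = [0, +inf)
        return 1.0 - np.exp(-t)

    MU_T = 1.0  # total mass mu(T)

    def phi(k, t):
        # the k-th function of (23)
        return MU_T ** (-0.5) * np.exp(1j * 2.0 * np.pi * k * mu_cdf(t) / MU_T)

    def inner(k, l):
        # (phi_k, phi_l) in L2(mu): integral of phi_k(t) conj(phi_l(t)) exp(-t) dt
        re, _ = quad(lambda t: (phi(k, t) * np.conj(phi(l, t))).real * np.exp(-t), 0.0, np.inf)
        im, _ = quad(lambda t: (phi(k, t) * np.conj(phi(l, t))).imag * np.exp(-t), 0.0, np.inf)
        return re + 1j * im

    for k in range(-2, 3):
        for l in range(-2, 3):
            expected = 1.0 if k == l else 0.0
            assert abs(inner(k, l) - expected) < 1e-6

    print("the functions (23) form an orthonormal set in L2(mu) (checked numerically)")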
Let us illustrate how the representation (18) can be explicitly obtained, i.e., how the functions {b_k(t)}_k and {g_k(t)}_k can be explicitly found, in the following example.

Example 1: Let x(t,ω) be a real stochastic process defined on T = [0, +∞) with R(t,s) = min(t,s). Choose f as in (7): f(t) = 1 for 0 ≤ t ≤ 1, and f(t) = 1/t for 1 < t < +∞. Choose (dν/dm)(t) in (8) by: (dν/dm)(t) = e^{-t} for 0 ≤ t ≤ 1, and (dν/dm)(t) = t e^{-t} for 1 < t < +∞. Then by (8), the measure μ can be chosen so that

   F(t) = e^{-t}.                                                            (24)

As a complete set {φ_k(t)}_k one can choose

   φ_k(t) = t^k;   k = 0,1,2,....                                            (25)

Then the functions {g_k(t)}_k and {b_k(t)}_k in (18) and (19) are given by

   g_k(t) = Σ_{l=0}^{k} α_{kl} t^l;   k = 0,1,2,...,                         (26)

   b_k(t) = Σ_{l=0}^{k} α_{kl} [γ(l+2,t) + t Γ(l+1,t)];   k = 0,1,2,...,     (27)

where γ and Γ are the incomplete gamma functions [4], and the coefficients in the Gram-Schmidt orthonormalization procedure are easily expressed in terms of the values of the integrals

   I_{n,m} = E[η_n η_m] = ∫_0^{+∞} ∫_0^{+∞} min(t,s) t^n s^m e^{-t} e^{-s} dt ds.   (28)
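The quantities entering the Gram-Schmidt procedure can also be evaluated numerically. The following sketch computes the Gram matrix I_{n,m} of (28) by quadrature and obtains the coefficients α_{kl} of (19) and (26) as the rows of the inverse of its lower triangular Cholesky factor, which is one standard way to carry out the orthonormalization; the array names, the truncation of the range of integration and the use of scipy are choices made only for this illustration.

    # illustration only; names, N, and the truncation at 40 are ours, not from the text
    import numpy as np
    from scipy.integrate import dblquad

    N = 4  # use the monomials t**0, ..., t**(N-1) of (25)

    def gram_entry(n, m):
        # I_{n,m} of (28); [0,40] replaces [0,inf) since exp(-t-s) is negligible beyond it
        integrand = lambda s, t: min(t, s) * t**n * s**m * np.exp(-t - s)
        val, _ = dblquad(integrand, 0.0, 40.0, 0.0, 40.0)
        return val

    G = np.array([[gram_entry(n, m) for m in range(N)] for n in range(N)])  # Gram matrix of eta_0,...,eta_3

    L = np.linalg.cholesky(G)   # G = L L^T
    A = np.linalg.inv(L)        # lower triangular; row k holds alpha_{k0}, ..., alpha_{kk}

    # with xi_k = sum_l alpha_{kl} eta_l, the covariance A G A^T is numerically the identity,
    # i.e. E[xi_k xi_l] = delta_{kl} as required in Theorem 5
    print(np.round(A @ G @ A.T, 6))
    print("alpha_{kl} =")
    print(np.round(A, 6))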
Upon making a particular choice for the complete set of functions {φ_k(t)}_k used in (10), a representation very similar to the usual Karhunen-Loève representation is obtained in Theorem 6 and is valid over any Lebesgue set of the real line. The results are similar to those in [6]. However, the representation derived in Theorem 6 applies to all weakly continuous stochastic processes, all mean square continuous processes included, while the representation of [6] applies to the class of mean square continuous stochastic processes satisfying (4), which clearly does not contain the entire class of mean square continuous stochastic processes.
THEOREM 6. Every stochastic process {x(t,ω), t∈T} weakly continuous on T admits the representation

   x(t,ω) = Σ_k ξ_k(ω) f_k(t),                                               (29)

where the convergence is in the stochastic mean on T. {f_k(t)}_k and {λ_k}_k are the corresponding eigenfunctions and nonzero eigenvalues of the operator on L₂(μ) with kernel R(t,s). The random variables {ξ_k(ω)}_k are defined by

   ξ_k(ω) = ∫_T x(t,ω) f_k*(t) μ(dt)                                         (30)

almost surely, satisfy E[ξ_k ξ_l*] = λ_k δ_kl and constitute a complete set in H(x,T). Also R(t,s) admits the representation

   R(t,s) = Σ_k λ_k f_k(t) f_k*(s),                                          (31)

where the convergence is absolute in t and s on T×T.
Proof: It follows from R(t,s) ∈ L₂(μ×μ) that R(t,s) is the kernel of a Hilbert-Schmidt operator R from L₂(μ) to L₂(μ). Hence R is a self-adjoint, non-negative definite, completely continuous operator [1, pp. 54-56, 58-59]; it is also of the trace class. Its nonzero eigenvalues {λ_k}_k and the corresponding eigenfunctions {f_k(t)}_k satisfy the integral equation R f_k = λ_k f_k [1, pp. 124-129]. The set {f_k(t)}_k is orthonormal in L₂(μ) and complete in the range of R. Let {h_l(t)}_l be an orthonormal basis in the orthogonal complement of the range of R in L₂(μ). By applying Theorem 4, we obtain

   x(t,ω) = Σ_k ξ_k(ω) f_k(t) + Σ_l θ_l(ω) h_l(t)                            (32)

in the stochastic mean, where the ξ_k's are given by (30) and

   θ_l(ω) = ∫_T x(t,ω) h_l*(t) μ(dt)

almost surely. Since R h_l = 0 in L₂(μ), it follows that E[|θ_l|²] = (R h_l, h_l)_{L₂(μ)} = 0 for all l. Also λ_k f_k(t) = E[x(t) ξ_k*] = (R f_k)(t) for all k and t∈T. Hence (29) follows from (32). Q.E.D.
An example of a representation (29) valid over an infinite interval of the real line is now given.

Example 2: Consider the same stochastic process as in Example 1. In order to obtain the representation (29) explicitly, it suffices to find the f_k's, i.e., the eigenfunctions of the integral equation

   λ f(t) = ∫_0^{+∞} min(t,s) f(s) μ(ds);   t ∈ [0,+∞).                      (33)

For reasons that will become clear in the sequel we choose, in a way similar to that of Example 1, μ such that

   F(t) = e^{-2t}.                                                           (34)

Then (33) can be reduced to the differential equation

   λ f''(t) + e^{-2t} f(t) = 0;   t ∈ [0,+∞)                                 (35)

with boundary conditions f(0) = 0 and lim_{t→+∞} f(t) finite. It follows from the solution of the differential equation (35) [5, Section 5.12] that the nonzero eigenvalues {λ_k}_k and the corresponding eigenfunctions {f_k(t)}_k of the integral equation (33) are given by

   λ_k = 1/μ_k²,      f_k(t) = c_k J_0(μ_k e^{-t});   t ≥ 0,   k = 1,2,...,   (36)

where {μ_k}_k are the positive zeros of J_0, J_p is the Bessel function of the first kind of order p, and c_k is a normalizing constant.
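The quantities in (36) are easily evaluated numerically. The following sketch computes the first few eigenvalues from the positive zeros of J_0 and checks, by quadrature, the boundary condition f_k(0) = 0, the orthogonality of the f_k in L₂(μ), and the relation (R f_k)(t) = λ_k f_k(t) at a few points; the variable names and the use of scipy are choices made only for this illustration.

    # illustration only; names and numerical tolerances are ours, not from the text
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import j0, jn_zeros

    mu_zeros = jn_zeros(0, 4)        # first positive zeros of J_0
    lam = 1.0 / mu_zeros**2          # eigenvalues lambda_k of (36)

    def f(k, t):
        # unnormalized eigenfunction J_0(mu_k e^{-t}), t >= 0
        return j0(mu_zeros[k] * np.exp(-t))

    def inner(k, l):
        # (f_k, f_l) in L2(mu) with mu(dt) = exp(-2t) dt
        val, _ = quad(lambda t: f(k, t) * f(l, t) * np.exp(-2.0 * t), 0.0, np.inf)
        return val

    def apply_R(k, t):
        # (R f_k)(t) = integral of min(t,s) f_k(s) exp(-2s) ds, split at the kink s = t
        v1, _ = quad(lambda s: s * f(k, s) * np.exp(-2.0 * s), 0.0, t)
        v2, _ = quad(lambda s: f(k, s) * np.exp(-2.0 * s), t, np.inf)
        return v1 + t * v2

    # boundary condition f_k(0) = J_0(mu_k) = 0, orthogonality in L2(mu),
    # and the eigenvalue relation (R f_k)(t) = lambda_k f_k(t) at a few points
    assert max(abs(f(k, 0.0)) for k in range(4)) < 1e-10
    assert abs(inner(0, 1)) < 1e-8 and abs(inner(1, 2)) < 1e-8
    for k in range(3):
        for t in (0.3, 1.0, 2.5):
            assert abs(apply_R(k, t) - lam[k] * f(k, t)) < 1e-6

    print("lambda_k =", lam)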
In connection with the representations in the stochastic mean sense given in Theorems 5 and 6 for any weakly continuous stochastic process, and a fortiori for any mean square continuous stochastic process, the following remarks should be made.

Remark 1: In comparing the two representations given in Theorems 5 and 6, the following should be noticed:

(i) In both representations, the random variables are orthogonal.

(ii) The time functions in the representation of Theorem 6 are orthogonal, while in the representation of Theorem 5 they are not.

(iii) The representation of Theorem 6 requires the computation of the eigenfunctions of an integral equation, while the representation of Theorem 5 can be obtained explicitly in a straightforward way.
Remark 2: For series of independent random variables, it is known [8, p.251] that convergence in the mean square sense implies almost sure convergence. It follows that, if the weakly continuous stochastic process x is Gaussian, then both representations (18) and (29) of Theorems 5 and 6 converge almost surely for every t∈T.
Remark 3: It follows from the representations of R(t,s) given in Theorems 5 and 6, (21) and (31), and the bounded convergence theorem that

   ∫_T R(t,t) μ(dt) = Σ_k ∫_T |b_k(t)|² μ(dt) = Σ_k λ_k < ∞.                 (37)

The measurability of x and (37) imply that if x_n(t,ω) is the sum of the first n terms in the representation (18) of Theorem 5 or in the representation (29) of Theorem 6, then

   lim_{n→∞} ∫_T E[|x(t,ω) − x_n(t,ω)|²] μ(dt) = 0.                          (38)
Remark 4: By using the properties of the autocorrelation function of a weakly continuous stochastic process [10, Thm. 2E], it can be shown that every function f(t) of the form f = Rg, where g ∈ L₂(μ), is continuous in t on T. In particular, the functions b_k(t) in the representation (18) of Theorem 5, given by (20), as well as the eigenfunctions f_k(t) of R(t,s) used in the representation (29) of Theorem 6, are continuous functions of t on T.
Remark 5: If x is mean square continuous on T, it can be shown, by using Dini's theorem [11, p.136], that the convergence in both representations of x given in Theorems 5 and 6 is uniform on compact subsets of T. The same is true for the representations of R(t,s), (21) and (31). If in particular T is compact, then an alternative proof of the Karhunen-Loève representation is thus obtained, a proof which does not rely upon Mercer's theorem. Moreover, an alternative proof of Mercer's theorem is obtained.
Remark 6: As a brief comment on the relationship of the main earlier representations [6,8,9,2] to those derived in Theorems 5 and 6, the following should be noted. Karhunen's representation [6] of mean square continuous stochastic processes satisfying (4) is clearly a particular case of Theorem 6. As noted in Remark 5, the so-called Karhunen-Loève representation is also a particular case of Theorem 6. Also, it is easily seen that both representations of Theorems 5 and 6 can be written in the form of the representations derived in [9] and in [2] over the entire real line for the class of mean square continuous, wide sense stationary stochastic processes and for the class of harmonizable stochastic processes respectively.
Remark 7: It should be noted that for the results of this section the measurability of the process x(t,ω) is not essential, provided R(t,s) is measurable and "smooth" in the sense specified in the sequel. If the measurability of R(t,s) is assumed, but not that of x(t,ω), then the random variables {η_k(ω)}_k cannot be defined by (10) almost surely anymore. Let us note that the functions {φ_k(t)}_k can always be chosen continuous on T, as for instance in (23). Then, if R(t,s) is such that the integrals ∫_T ∫_T R(t,s) φ_k*(t) φ_l(s) μ(dt) μ(ds), which always exist and are finite in the Lebesgue sense, exist also in the Riemann sense, the integrals (10) can be defined in the usual stochastic mean sense [8]. This is clearly possible if x is mean square continuous on T. It should be noted in conclusion that whenever both integrals exist, their values are almost surely equal, and that the almost sure integrals are more convenient from a practical point of view, since they can be calculated from realizations of the process.
4. SAMPLE FUNCTION REPRESENTATION

In this section, the process x and the measure μ are defined as in the beginning of Section 3. The fact that almost all sample functions of the process x belong to L₂(μ), shown in Theorem 2 of Section 2, implies in a straightforward way the following.
THEOREM 7. If {φ_k(t)}_k is an arbitrary complete set of orthonormal functions in L₂(μ), then x admits the representation

   x(t,ω) = Σ_k η_k(ω) φ_k(t),                                               (39)

where the convergence is in L₂(μ) almost surely, and the random variables {η_k(ω)}_k are given almost surely by

   η_k(ω) = ∫_T x(t,ω) φ_k*(t) μ(dt).                                        (40)
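Indeed, for almost every ω the sample function x(·,ω) belongs to L₂(μ) by Theorem 2, and (39) and (40) are then simply the Fourier expansion of x(·,ω) with respect to the orthonormal basis {φ_k(t)}_k of L₂(μ); for each such ω the convergence of the series in (39) is that of the norm of L₂(μ).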
The remark which follows Theorem 5 on how the functions {φ_k(t)}_k can be found in all cases under consideration applies here also.

It follows from (39), Parseval's relationship and the monotone convergence theorem that

   E[∫_T |x(t,ω)|² μ(dt)] = Σ_k ρ_kk < ∞,                                    (41)

where ρ_kk = E[|η_k|²] = (R φ_k, φ_k)_{L₂(μ)}. If x_n(t,ω) is the sum of the first n terms in the representation (39) of Theorem 7, then (41) implies
   lim_{n→∞} E[∫_T |x(t,ω) − x_n(t,ω)|² μ(dt)] = 0.                          (42)

The representation of Theorem 7 clearly applies to weakly continuous processes x. For this class of processes, it is now shown that both representations given in Theorems 5 and 6 are also sample function representations.

THEOREM 8. For every weakly continuous stochastic process x, the representations (18) and (29) given in Theorems 5 and 6 converge also in L₂(μ) almost surely.
Proof: (i) For the representation (18) of Theorem 5. Consider the functions {b_k(t)}_k given by (20): b_k = R g_k, and let {d_l(t)}_l be an orthonormal basis in the orthogonal complement of the span of the set {b_k(t)}_k in L₂(μ). Eq. (22) implies that

   x(t,ω) = Σ_k ξ_k(ω) b_k(t) + Σ_l θ_l(ω) d_l(t)                            (43)

in L₂(μ) almost surely (a.s.), where the ξ_k's are given by (19) and

   θ_l(ω) = ∫_T x(t,ω) d_l*(t) μ(dt)

a.s. It follows that

   E[θ_l ξ_k*] = (R g_k, d_l)_{L₂(μ)} = (b_k, d_l)_{L₂(μ)} = 0

for all k and l, and hence θ_l is orthogonal to ξ_k for all k. Theorem 3 implies θ_l ∈ H(x,T) for all l, and since {ξ_k(ω)}_k is a complete set in H(x,T), we have E[|θ_l|²] = 0 for all l. It follows now from (43) that (18) converges in L₂(μ) a.s.

(ii) For the representation (29) of Theorem 6. Apply Theorem 7 for the complete orthonormal set {f_k(t)}_k ∪ {h_l(t)}_l in L₂(μ) introduced in the proof of Theorem 6 and proceed as in (i). Q.E.D.
It should be noted in conclusion that implicit in Theorem 8 is the fact that almost all sample functions of the weakly continuous process x belong to the span of the eigenfunctions {f_k(t)}_k, which is the orthogonal complement in L₂(μ) of the null space of the operator R.
REFERENCES

1. N.I. Akhiezer and I.M. Glazman, Theory of Linear Operators in Hilbert Space, Vol. 1, Ungar Publishing Co., New York, 1961.

2. S. Cambanis and B. Liu, On Harmonizable Stochastic Processes, submitted to Information and Control.

3. J.L. Doob, Stochastic Processes, Wiley, New York, 1953.

4. A. Erdelyi, Higher Transcendental Functions, Vol. II, McGraw-Hill, New York, 1953.

5. F.B. Hildebrand, Advanced Calculus for Applications, Prentice-Hall, Englewood Cliffs, New Jersey, 1962.

6. K. Karhunen, Zur Spektraltheorie stochastischer Prozesse, Ann. Acad. Scient. Fennicae, Ser. AI, No. 34 (1946), pp. 1-7.

7. K. Karhunen, Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Ann. Acad. Scient. Fennicae, Ser. AI, No. 37 (1947), pp. 1-79.

8. M. Loève, Probability Theory, Van Nostrand, Princeton, New Jersey, 1963.

9. E. Masry, B. Liu and K. Steiglitz, Series Expansion of Wide-Sense Stationary Random Processes, IEEE Trans. Information Theory, Vol. IT-14 (1968), pp. 792-796.

10. E. Parzen, Statistical Inference on Time Series by Hilbert Space Methods, in Time Series Analysis Papers, E. Parzen, Holden-Day, San Francisco, 1967, pp. 251-382.

11. W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, New York, 1964.