Miller, G. W. (1977). "Some Results on Symmetric Stable Distributions and Processes."

SOME RESULTS ON SYMMETRIC STABLE
DISTRIBUTIONS AND PROCESSES*

Grady W. Miller
Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1121
May, 1977

*This research was supported in part by the Air Force Office of Scientific
Research under Grant AFOSR-75-2796.
MILLER, GRADY. Some Results on Symmetric Stable Distributions and
Processes. (Under the direction of STAMATIS CAMBANIS.)
This work investigates properties of symmetric stable distributions and
stochastic processes. A necessary and sufficient condition is presented
for a regression involving symmetric stable random variables to be linear.
We introduce the notion of n-fold dependence for symmetric stable random
variables and, under this condition, characterize all monomials in such
random variables for which moments exist. A function space approach to
symmetric stable stochastic processes is developed and applied to the
problem of system identification. Necessary and sufficient conditions are
given for the existence of measurable modifications of such processes and
for the almost sure integrability and absolute continuity of sample paths.
TABLE OF CONTENTS

Introduction and Summary------------------------------------------ 1
I.   Independence, Regression, and Moments
     1. Fundamental definitions and characterizations------------- 4
     2. Independence---------------------------------------------- 9
     3. Regression------------------------------------------------ 15
     4. Moments--------------------------------------------------- 30
II.  A Function Space Approach to SαS Processes
     1. The linear space of a process----------------------------- 42
     2. The integral ∫f(t)dξ_t------------------------------------ 53
     3. The integral ∫f(t)dξ_t when ξ is SαS with independent
        increments------------------------------------------------ 58
     4. Spectral representation of a SαS process------------------ 64
     5. The integral ∫f(t)ξ_t ν(dt)------------------------------- 67
     6. The integral ∫f(t)ξ_t ν(dt) when ξ is SαS with independent
        increments------------------------------------------------ 74
     7. System identification------------------------------------- 78
III. Path Properties
     1. Measurability of p-th order processes--------------------- 91
     2. Integrability of sample paths of SαS processes------------ 98
     3. Absolute continuity of sample paths----------------------- 103
Bibliography------------------------------------------------------ 111
INTRODUCTION AND SUMMARY
It is well known that the stable laws arise naturally as the limiting
distributions of normed sums of independent identically distributed
random variables (or vectors), and this result has been extended to
Banach space valued random variables ([Kumar and Mandrekar 1972]) as
well as to random variables with values in certain topological vector
spaces ([Rajput 1975]). The limiting distributions that have infinite
variance can be typed by a parameter α, 0 < α < 2, and only absolute
moments of order strictly less than α are finite, whereas in the finite
variance case (α = 2) the limiting distribution is always normal and
all moments exist. Even though stable laws on the real line are
absolutely continuous, closed form expressions for their density functions
are known in only a few cases. In contrast, the characteristic functions
of stable measures on finite or infinite-dimensional spaces are quite
simple ([Kuelbs 1973]) and therefore constitute a primary tool in our
research.
Many easily formulated problems involving stable distributions on
Euclidean n-space remain unsolved, and the study of multivariate stable
distributions is continually being renewed (as in [Hosoya 1976]). Since
the normal distributions have been extensively investigated, our efforts
are directed mainly toward stable distributions that have 0 < α < 2 and
(for simplicity) that are symmetric. If the symmetric stable random
variables are defined on a probability space (Ω,F,P), then they belong
to the complete metric space L_p(Ω,F,P), where 0 < p < α < 2, in the
infinite variance case and to the Hilbert space L_2(Ω,F,P) in the normal
case. Our basic approach is to extend results known for normal
distributions to symmetric stable distributions, and many of the
difficulties which arise are due to the more complicated structure of
L_p spaces and (for p < 1) the lack of a simple representation for the
dual elements such as exists for an inner product space. Consequently
our development often originates with p-th order random variables and
then is narrowed to include only symmetric stable random variables.
One of the most notable advantages of specializing to the stable case
is that here the notion of independence provides a satisfactory and
useful analogy to the concept of orthogonality for random variables
with finite second moments.
The first chapter begins with basic definitions and characterizations
on infinite-dimensional spaces, but deals mostly with problems in an
n-dimensional Euclidean space setting. The principal results give
necessary and sufficient conditions for independence of random vectors,
linear regression, and finite absolute moments of monomials.
In the second chapter we study the structure of the linear space of
a symmetric stable process (α > 1) and use this structure to represent
elements in the linear space (under certain conditions) as stochastic
integrals of elements of a function space Λ_α. Conditions for
independence, best linear approximations, and expressions for dual
elements are obtained in terms of these representations. These results
are applied to the problem of linear system identification when the
input is a symmetric stable process.
The final chapter contains necessary and sufficient conditions for
the measurability of p-th order or symmetric stable processes and for
the integrability or absolute continuity of sample paths of symmetric
stable processes, as well as sufficient conditions for absolute
continuity of sample paths of p-th order processes.
I. INDEPENDENCE, REGRESSION, AND MOMENTS
Multivariate stable distributions and their characteristic functions
have been known and studied for many years, but the subject continues
to attract the attention of researchers (e.g., [Press 1972, Paulauskas
1976]). Still unanswered are some quite natural questions surrounding
such topics as the properties of conditional distributions or the effects
of nonlinear transformations on stable distributions. In the first two
sections of this chapter we discuss definitions and results that will be
of use to us later, and in the latter two we present some developments
on regression analysis and moments for jointly symmetric stable random
variables.
1. Fundamental definitions and characterizations.

In this section we define a stable measure on a Banach space and
state some characterizations of the characteristic function (c.f.) of
a symmetric stable measure on a Hilbert space. This material is well
known for stable measures on n-dimensional Euclidean space R^n, but has
only recently been extended to infinite-dimensional spaces. Even though
we shall rarely consider stable measures on spaces other than R^n, the
additional generality provided here will occasionally be needed.
Let E be a real separable Banach space, and for every a∈R define
the continuous map T_a: E → E by T_a(x) = ax. A probability measure μ
on the Borel subsets of E is said to be stable if for any a > 0 and
b > 0 there exist c > 0 and x∈E such that

    T_a μ * T_b μ = T_c μ * δ_x ,

where δ_x is the Borel probability measure satisfying δ_x({x}) = 1 and
* denotes the convolution operation.
Let E* be the dual space of E and C be the space of complex numbers.
The c.f. of a Borel probability measure μ on E is a map φ: E* → C
defined by

    φ(w) = ∫_E e^{iw(x)} μ(dx)

for all w∈E*. It has been shown ([Ito and Nisio 1968]) that a Borel
probability measure on a real separable Banach space is uniquely
determined by its c.f. The following characterization of a stable
measure is given by [Kumar and Mandrekar 1972] and [Rajput 1975].
1.1.1 A Borel probability measure μ on a real separable Banach space
E is stable if and only if for every integer n ≥ 1 there exist x_n∈E
and α, 0 < α ≤ 2, such that

    [φ(w)]^n = φ(n^{1/α} w) e^{iw(x_n)}

for every w∈E*, where α is uniquely determined by μ.

It is customary to say that the measure μ is α-stable whenever the
condition in 1.1.1 holds. A measure μ on E is said to be symmetric if
μ(B) = μ(−B) for every Borel set B. For a symmetric α-stable (SαS)
measure μ we have x_n = 0 in 1.1.1 for all n. It is straightforward
to check that μ is a SαS measure on E if and only if μw^{−1} is a SαS
measure on R for every w∈E*.
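In one dimension 1.1.1 reduces to the familiar scaling property of the SαS c.f. φ(t) = exp(−c|t|^α), namely [φ(t)]^n = φ(n^{1/α} t) with x_n = 0. A quick numerical sanity check (a sketch; the values of α and c below are arbitrary choices, not from the text):

```python
import math

def sas_cf(t, alpha, c=1.0):
    # One-dimensional SaS characteristic function: phi(t) = exp(-c * |t|^alpha)
    return math.exp(-c * abs(t) ** alpha)

alpha, c = 1.5, 0.7
for n in (2, 3, 10):
    for t in (-2.0, 0.3, 1.1):
        lhs = sas_cf(t, alpha, c) ** n                 # [phi(t)]^n
        rhs = sas_cf(n ** (1 / alpha) * t, alpha, c)   # phi(n^{1/alpha} t); x_n = 0 in the symmetric case
        assert abs(lhs - rhs) < 1e-12
print("1.1.1 scaling property holds in one dimension")
```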
Let us restrict attention to SαS measures on a real separable Hilbert
space H with inner product <·,·> and unit sphere S = {x∈H: <x,x> = 1}.
Then the SαS c.f. is characterized in [Kuelbs 1973, Corollary 2.1].

1.1.2 A map φ: H → R is the c.f. of an SαS measure on H if and only
if it can be written in the form

    φ(y) = exp{−∫_S |<y,x>|^α Γ(dx)}

for every y∈H, where Γ is a finite symmetric Borel measure on S.
If H = R^n and 0 < α < 2, then the symmetric measure Γ on S is uniquely
determined by the SαS measure ([Kanter 1973, Lemma 1]), and we shall call
Γ the spectral measure of the SαS distribution (or c.f.), as is done in
[Paulauskas 1976, p. 357]. If α = 2, then φ is the c.f. of a Gaussian
measure (or distribution). Whenever the distribution of a random vector
(ξ_1,...,ξ_n) is an SαS measure on R^n, we shall refer to ξ_1,...,ξ_n as
jointly SαS random variables.
We now present another characterization (also due to Kuelbs) of the
SαS c.f. on H after introducing some additional terminology. Let τ be
the topology induced on H by the seminorms of the form <Ty,y>^{1/2}, where
T is a symmetric, positive, trace class operator on H. An even, real-
valued function f on H satisfying f(0) = 0 is said to be of negative type
if

    Σ_{i,j=1}^n f(y_i − y_j) c_i c_j ≤ 0

for all n, all y_1,...,y_n∈H, and all real numbers c_1,...,c_n such that
Σ_{j=1}^n c_j = 0. If f is of negative type and if f(λy) = |λ|^α f(y) for all
real λ and y∈H, then f is called a homogeneous negative-definite function
of order α.
1.1.3 [Kuelbs 1973, Theorem 3.1] A map φ: H → R is the c.f. of a SαS
measure on H if and only if it has the form

    φ(y) = exp{−f(y)} ,

where f is a homogeneous negative-definite function of order α which is
τ-continuous on H.
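As a concrete finite-dimensional instance of this terminology, f(y) = (y_1² + y_2²)^{α/2} = ‖y‖^α on R² is homogeneous of order α and of negative type. The sketch below (an illustration on arbitrary sample points, not a proof) checks both defining properties numerically:

```python
import random

def f(y, alpha):
    # f(y) = (y1^2 + y2^2)^(alpha/2) = ||y||^alpha on R^2
    return (y[0] ** 2 + y[1] ** 2) ** (alpha / 2)

alpha = 1.3
# Homogeneity of order alpha: f(lam * y) = |lam|^alpha * f(y)
y = (0.4, -1.7)
for lam in (-2.0, 0.5):
    assert abs(f((lam * y[0], lam * y[1]), alpha) - abs(lam) ** alpha * f(y, alpha)) < 1e-12

# Negative type: sum_{i,j} f(y_i - y_j) c_i c_j <= 0 whenever sum_j c_j = 0
random.seed(0)
for _ in range(200):
    ys = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    cs = [random.uniform(-1, 1) for _ in range(4)]
    cs[-1] = -sum(cs[:-1])  # enforce sum of the c_j equal to zero
    q = sum(f((ys[i][0] - ys[j][0], ys[i][1] - ys[j][1]), alpha) * cs[i] * cs[j]
            for i in range(4) for j in range(4))
    assert q <= 1e-10
print("homogeneity and negative type verified on samples")
```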
A stochastic process ξ = {ξ_t, t∈T} is called SαS if its finite-
dimensional distributions are SαS. When α = 2, ξ is a zero mean Gaussian
process and its statistical properties can be expressed in terms of a
single function, the covariance function. However, when 0 < α < 2, there
is in general no simple parametric description of the finite-dimensional
distributions of the process.

A special class of SαS stochastic processes which are closely related
to Gaussian processes and which have an equally simple parametric
description are the so-called sub-Gaussian processes. Nevertheless the
sub-Gaussian processes have quite different properties from the α = 2
case, some of which are mentioned in [Bretagnolle, et al. 1966, p. 251].
To introduce the sub-Gaussian process we begin with a zero mean
Gaussian process {ξ_t, t∈T} with finite-dimensional c.f.'s of the form
given in result 1.1.3: for every n and every (t_1,...,t_n)∈T^n,

    φ_{ξ_{t_1},...,ξ_{t_n}}(r_1,...,r_n) = exp{−f_{t_1,...,t_n}(r_1,...,r_n)} ,

where f_{t_1,...,t_n} is a homogeneous negative-definite function of order 2.
It is well known that if a function f is of negative type, then f^p
is of negative type for all p such that 0 < p ≤ 1. (See [Parthasarathy
and Schmidt 1972] for a general discussion.) Thus it follows from
1.1.3 that

    exp{−f^{α/2}_{t_1,...,t_n}(r_1,...,r_n)}

is an SαS c.f. on R^n for any α such that 0 < α < 2. The family of all
such SαS c.f.'s, for n = 1,2,... and (t_1,...,t_n)∈T^n, clearly specifies a
consistent family of finite-dimensional distributions and hence a
stochastic process. We shall use the term α-sub-Gaussian to refer to
finite-dimensional distributions having c.f.'s of this form as well as
to such SαS stochastic processes.
Note that the distribution of an α-sub-Gaussian vector is determined
by α and a positive-definite matrix Σ, and that the distribution of an
α-sub-Gaussian process is determined by α and a positive-definite function
R(s,t). Hence sub-Gaussian distributions have a particularly simple
parametric description, unlike the general SαS distribution. However,
it is not known how the spectral measure of a sub-Gaussian vector is
expressed in terms of α and Σ.

While stable measures on a separable Banach space suffice for our
purposes, we may mention that [Dudley and Kanter 1974] and [DeAcosta
1975] treat stable measures on more general "measurable vector spaces,"
and [Rajput 1975] defines certain stable measures on topological vector
spaces.
2. Independence.

The question of how to characterize the independence of jointly SαS
random variables is a natural one and has been answered in [Schilder
1970, Theorem 5.1] and in [Paulauskas 1976, Proposition 4]. Although
these results by Schilder and Paulauskas are stated correctly, the proofs
as they appear are not convincing and a more detailed treatment seems
justified. The implications of independence are important for us in
the following chapter when we consider SαS processes having independent
increments; so in this section we prove a characterization of independence
for jointly SαS random variables or vectors in terms of the support of
their spectral measure.
1.2.1 THEOREM. Let ξ_1,...,ξ_n be jointly SαS random variables with
0 < α < 2 and spectral measure Γ. For fixed k and m satisfying
1 ≤ k < m ≤ n, ξ_k and ξ_m are independent if and only if
Γ({(x_1,...,x_n)∈S: x_k x_m ≠ 0}) = 0.

This result is essentially due to Schilder, but its proof here is
based on Lemma 1.2.2, which we state and prove first. The technique used
in the lemma was motivated by the proof of Lemma 1 in [Kanter 1973].
Define the σ-finite measure ρ on (0,∞) by ρ(ds) = ds/s^{1+α}, define
θ: (0,∞) × R^n → R^n by θ(s,x) = sx, and define T: R^n → R^n by
T(x_1,...,x_n) = (y_1,...,y_n), where y_k = x_k, y_m = x_m, and y_i = 0 if
i ≠ k and i ≠ m. Let ν = ΓT^{−1} and G = (ρ × ν)θ^{−1}. Choose four real
numbers h_k, h'_k, h_m, h'_m such that

    f(v) = Σ_{j=k,m} (2 − cos h_j v_j − cos h'_j v_j) > 0

whenever v_k ≠ 0 or v_m ≠ 0.
1.2.2 LEMMA. The function ψ: R^n → R defined by

    ψ(r) = ∫_S |<r,Tx>|^α Γ(dx)

uniquely determines the measure f(v)G(dv) on R^n.

Proof: Notice that

    ψ(r) = ∫_{R^n} |<r,x>|^α ν(dx)

for all r∈R^n and that for all z∈R

    |z|^α ∫_0^∞ (1 − cos s) ρ(ds) = ∫_0^∞ (1 − cos zs) ρ(ds) .

Thus

    ψ(r) ∫_0^∞ (1 − cos s) ρ(ds) = ∫_{R^n} ∫_0^∞ |<r,x>|^α (1 − cos s) ρ(ds) ν(dx)
                                 = ∫_{R^n} ∫_0^∞ (1 − cos <r,sx>) ρ(ds) ν(dx)
                                 = ∫_{R^n} (1 − cos <r,v>) G(dv)

for every r∈R^n. Let δ_k = (...0...,h_k,...0...) and δ_m = (...0...,h_m,...0...),
where the coordinates h_k and h_m are in the k-th and m-th positions,
respectively. Then for every r∈R^n the function ψ determines

    −(1/2) Σ_{j=k,m} ∫_{R^n} [2(1 − cos<r,v>) − (1 − cos<r+δ_j,v>) − (1 − cos<r−δ_j,v>)] G(dv)
        = ∫_{R^n} cos<r,v> Σ_{j=k,m} (1 − cos h_j v_j) G(dv) ,

and similarly with h'_k, h'_m in place of h_k, h_m. Adding the two
expressions and using the fact that G is a symmetric measure, we see
that ψ determines the value of

    ∫_{R^n} cos<r,v> f(v) G(dv) = ∫_{R^n} e^{i<r,v>} f(v) G(dv)

for every r∈R^n. Since f(v)G(dv) is a finite measure on R^n, the result
follows from the uniqueness of the Fourier transform. □
Proof of Theorem 1.2.1: If ξ_k and ξ_m are independent, then their joint
c.f. factors. Thus for every real r_k and r_m

    ∫_S |r_k x_k + r_m x_m|^α Γ(dx) = |r_k|^α ∫_S |x_k|^α Γ(dx) + |r_m|^α ∫_S |x_m|^α Γ(dx) .

Consider the measure Γ_0 on S placing mass (1/2) ∫_S |x_k|^α Γ(dx) on
(...0...,1,...0...) and on (...0...,−1,...0...), where the 1 and −1 are the k-th
coordinates; placing mass (1/2) ∫_S |x_m|^α Γ(dx) on (...0...,1,...0...) and on
(...0...,−1,...0...), where the 1 and −1 are the m-th coordinates; and
placing mass zero on the remainder of S. Then clearly

(*)    ∫_S |r_k x_k + r_m x_m|^α Γ(dx) = ∫_S |r_k x_k + r_m x_m|^α Γ_0(dx)

for all r_k, r_m. Let ν_0 = Γ_0 T^{−1} and G_0 = (ρ × ν_0)θ^{−1}. Define
B = {v∈R^n: v_k v_m ≠ 0} and observe that

    ∫_{R^n} χ_B(v) f(v) G_0(dv) = ∫_0^∞ ∫_{R^n} χ_{{x_k x_m ≠ 0}}(x) f(sx) ν_0(dx) ρ(ds) = 0 ,

since ν_0({x∈R^n: x_k x_m ≠ 0}) = Γ_0({x∈S: x_k x_m ≠ 0}) = 0. By (*) and
Lemma 1.2.2, f(v)G(dv) and f(v)G_0(dv) must agree on B. Hence

    0 = ∫_{R^n} χ_B(v) f(v) G(dv) = ∫_0^∞ ∫_{R^n} χ_{{x_k x_m ≠ 0}}(x) f(sx) ν(dx) ρ(ds) ,

so that ∫_{R^n} χ_{{x_k x_m ≠ 0}}(x) f(s_0 x) ν(dx) = 0 for some s_0 > 0. Because
f(s_0 x) > 0 on {x_k x_m ≠ 0}, we get that

    Γ({x∈S: x_k x_m ≠ 0}) = ν({x∈R^n: x_k x_m ≠ 0}) = 0 .

Conversely, Γ({x∈S: x_k x_m ≠ 0}) = 0 implies that

    ∫_S |r_k x_k + r_m x_m|^α Γ(dx) = |r_k|^α ∫_S |x_k|^α Γ(dx) + |r_m|^α ∫_S |x_m|^α Γ(dx) .

Thus ξ_k and ξ_m are independent since their joint c.f. factors. □
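Theorem 1.2.1 can be illustrated numerically with a discrete spectral measure: when Γ puts no mass where x_k x_m ≠ 0, the exponent of the joint c.f. splits, so the c.f. factors. (A sketch; the atoms and weights below are arbitrary choices, not from the text.)

```python
def exponent(atoms, weights, rk, rm, alpha):
    # I(rk, rm) = sum_i w_i * |rk*xk_i + rm*xm_i|^alpha for a discrete spectral
    # measure whose atoms (xk, xm) lie on the unit circle (other coordinates zero)
    return sum(w * abs(rk * xk + rm * xm) ** alpha
               for (xk, xm), w in zip(atoms, weights))

alpha = 1.4
# Gamma concentrated on {x_k * x_m = 0}: every atom has xk = 0 or xm = 0
atoms = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
weights = [0.3, 0.3, 0.5, 0.5]
for rk, rm in [(1.0, 2.0), (-0.7, 0.4), (3.0, -1.5)]:
    joint = exponent(atoms, weights, rk, rm, alpha)
    split = (abs(rk) ** alpha * exponent(atoms, weights, 1.0, 0.0, alpha)
             + abs(rm) ** alpha * exponent(atoms, weights, 0.0, 1.0, alpha))
    assert abs(joint - split) < 1e-12
print("exponent splits, so the joint c.f. factors")
```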
1.2.3 COROLLARY. A subset {ξ_{k_1},...,ξ_{k_i}} of {ξ_1,...,ξ_n} is independent
if and only if the random variables are pairwise independent.

Proof: Necessity is clear. For the sufficiency, Γ is concentrated on
the set {x_{k_p} x_{k_q} = 0, p ≠ q in 1,2,...,i}, and therefore we have

    ∫_S |r_{k_1} x_{k_1} +...+ r_{k_i} x_{k_i}|^α Γ(dx)
        = [∫_{{x_{k_2}=...=x_{k_i}=0}} +...+ ∫_{{x_{k_1}=...=x_{k_{i−1}}=0}}] |r_{k_1} x_{k_1} +...+ r_{k_i} x_{k_i}|^α Γ(dx)
        = Σ_{j=1}^i |r_{k_j}|^α ∫_S |x_{k_j}|^α Γ(dx) ,

so that the joint c.f. factors. □
1.2.4 COROLLARY. Let {k_1,...,k_i} and {m_1,...,m_j} be disjoint subsets
of {1,...,n}. Then the random vectors (ξ_{k_1},...,ξ_{k_i}) and (ξ_{m_1},...,ξ_{m_j})
are independent if and only if any two random variables, one selected
from each vector, are independent.

Proof: Necessity is clear. For the sufficiency, observe that by
Theorem 1.2.1 Γ is concentrated on the set where
(x_{k_1}² +...+ x_{k_i}²)(x_{m_1}² +...+ x_{m_j}²) = 0, so that

    ∫_S |r_{k_1} x_{k_1} +...+ r_{k_i} x_{k_i} + r_{m_1} x_{m_1} +...+ r_{m_j} x_{m_j}|^α Γ(dx)
        = [∫_{{x_{k_1}²+...+x_{k_i}² ≠ 0, x_{m_1}²+...+x_{m_j}² = 0}} + ∫_{{x_{k_1}²+...+x_{k_i}² = 0, x_{m_1}²+...+x_{m_j}² ≠ 0}}]
              |r_{k_1} x_{k_1} +...+ r_{k_i} x_{k_i} + r_{m_1} x_{m_1} +...+ r_{m_j} x_{m_j}|^α Γ(dx)
        = ∫_S |r_{k_1} x_{k_1} +...+ r_{k_i} x_{k_i}|^α Γ(dx) + ∫_S |r_{m_1} x_{m_1} +...+ r_{m_j} x_{m_j}|^α Γ(dx) ,

and the joint c.f. of the two vectors factors. □
1.2.5 Example. If ξ_1, ξ_2, and ξ_3 are jointly SαS random variables with
0 < α < 2 and spectral measure Γ, then (ξ_1,ξ_2) and ξ_3 are independent
if and only if Γ is concentrated on

    {x_1 x_3 = 0} ∩ {x_2 x_3 = 0} = {x_3 = 0} ∪ {x_1 = 0 and x_2 = 0}
                                  = {x_1² + x_2² = 1} ∪ {x_3 = ±1} .

[Figure: the unit sphere in R³, with the equator {x_1² + x_2² = 1} and the
poles {x_3 = ±1} indicated.]
1.2.6 Example. It is easy to check from their c.f. that two non-
degenerate jointly sub-Gaussian random variables (0 < α < 2) cannot
be independent. Indeed, consider the bivariate normal c.f.

    exp{−(1/2)(σ_1² r_1² + 2σ_12 r_1 r_2 + σ_2² r_2²)} ,

where σ_1² > 0 and σ_2² > 0 are the marginal variances and σ_12 is the
covariance. Then

    exp{−(1/2)^{α/2} (σ_1² r_1² + 2σ_12 r_1 r_2 + σ_2² r_2²)^{α/2}}

is the joint c.f. of two nondegenerate sub-Gaussian random variables
that are independent if and only if

(*)    (σ_1² r_1² + 2σ_12 r_1 r_2 + σ_2² r_2²)^{α/2} = σ_1^α |r_1|^α + σ_2^α |r_2|^α

for all r_1, r_2. The left-hand side of (*) is never zero when r_2 ≠ 0;
so we hold r_2 ≠ 0 fixed and differentiate both sides with respect to r_1
to get

    α (σ_1² r_1 + σ_12 r_2)(σ_1² r_1² + 2σ_12 r_1 r_2 + σ_2² r_2²)^{(α/2)−1} = α σ_1^α (r_1)^{α−1}

whenever r_2 ≠ 0, which becomes

    σ_1² r_1 + σ_12 r_2 = σ_1^α (r_1)^{α−1} (σ_1^α |r_1|^α + σ_2^α |r_2|^α)^{(2−α)/α}

after substituting (*) and simplifying. Taking r_1 = 0 and r_2 = 1,
we see that σ_12 = 0; so we now have

    σ_1² r_1 = σ_1^α (r_1)^{α−1} (σ_1^α |r_1|^α + σ_2^α |r_2|^α)^{(2−α)/α}

whenever r_2 ≠ 0, which implies that σ_1 σ_2 = 0, a contradiction. (Note:
when raising a number u to a power p we shall use the convention
(u)^p = |u|^p sign(u).)
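The conclusion of Example 1.2.6 can be confirmed numerically: with the sub-Gaussian c.f. in the form used above, even σ_12 = 0 leaves the joint c.f. strictly larger than the product of its marginals, so the pair is dependent. (A sketch; the unit variances and α are arbitrary choices.)

```python
import math

def sub_gaussian_cf(r1, r2, alpha, s1=1.0, s2=1.0, s12=0.0):
    # Joint sub-Gaussian c.f.: exp{ -(1/2)^(alpha/2) * (quadratic form)^(alpha/2) }
    q = s1 ** 2 * r1 ** 2 + 2 * s12 * r1 * r2 + s2 ** 2 * r2 ** 2
    return math.exp(-(0.5 ** (alpha / 2)) * q ** (alpha / 2))

alpha = 1.2
r1, r2 = 1.0, 1.0
joint = sub_gaussian_cf(r1, r2, alpha)
product = sub_gaussian_cf(r1, 0.0, alpha) * sub_gaussian_cf(0.0, r2, alpha)
# (r1^2 + r2^2)^(alpha/2) < |r1|^alpha + |r2|^alpha for 0 < alpha < 2, so the
# joint c.f. strictly exceeds the product of the marginal c.f.'s: dependence
assert joint > product
print("sub-Gaussian pair is dependent even when s12 = 0")
```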
3. Regression.

In this section we obtain a necessary and sufficient condition,
expressed in terms of their spectral measure, for a regression involving
SαS random variables to be linear (Theorem 1.3.7). This is a consequence
of a result relating the form of the linear regression function to
partial derivatives of the joint c.f. (Theorem 1.3.1). We also obtain
a sufficient condition for linear regression (Proposition 1.3.8) which
is simpler than the necessary and sufficient condition in Theorem 1.3.7,
but nevertheless has some interesting applications.

Let ξ_0,ξ_1,...,ξ_n be jointly SαS random variables with 1 < α < 2.
For an SαS distribution on R it is well known that the moments of order
p < α exist, and it is therefore meaningful to consider E(ξ_0|ξ_1,...,ξ_n)
and to study the form of f for which

    E(ξ_0|ξ_1,...,ξ_n) = f(ξ_1,...,ξ_n)  a.s.

Kanter has obtained several results which show that f is a linear
function in certain cases. The regression E(ξ_0|ξ_1) is always linear
(Corollary 1.3.4), as is the regression E(ξ_0|ξ_1,...,ξ_n) provided
ξ_1,...,ξ_n are independent (Corollary 1.3.6) ([Kanter 1972a]). In case
E(ξ_0|ξ_1,...,ξ_n) and ξ_1,...,ξ_n are jointly SαS (a condition for which
criteria are not known), then once again the regression is linear
([Kanter 1972b]).

For our investigations we shall use a general result giving a
necessary and sufficient condition for linear regression in terms of the
joint c.f. of the random variables (not necessarily stable). The method
of proof comes from a related result found in [Lukacs and Laha 1964,
Theorem 6.1.1].
1.3.1 THEOREM. Let ξ_0,ξ_1,...,ξ_n be random variables having first
moments and with joint c.f. φ. Then

(*)    E(ξ_0|ξ_1,...,ξ_n) = a_1 ξ_1 +...+ a_n ξ_n  a.s.

if and only if

    [∂φ(r_0,r_1,...,r_n)/∂r_0]_{r_0=0} = a_1 [∂φ/∂r_1](0,r_1,...,r_n) +...+ a_n [∂φ/∂r_n](0,r_1,...,r_n) .

Proof: Observe first that the condition may be written as

(**)   E[ξ_0 e^{i(r_1ξ_1+...+r_nξ_n)}] = a_1 E[ξ_1 e^{i(r_1ξ_1+...+r_nξ_n)}] +...+ a_n E[ξ_n e^{i(r_1ξ_1+...+r_nξ_n)}] .

Necessity. (*) implies

    E[ξ_0 e^{i(r_1ξ_1+...+r_nξ_n)} | ξ_1,...,ξ_n] = (a_1 ξ_1 +...+ a_n ξ_n) e^{i(r_1ξ_1+...+r_nξ_n)} ,

and (**) follows by taking expectations.

Sufficiency. Let

    E(ξ_0|ξ_1,...,ξ_n) − a_1 ξ_1 −...− a_n ξ_n = f(ξ_1,...,ξ_n) ,

where f is a Borel-measurable function on R^n. Then for all r_1,...,r_n
we have that

    E[f(ξ_1,...,ξ_n) e^{i(r_1ξ_1+...+r_nξ_n)}] = E[(ξ_0 − a_1 ξ_1 −...− a_n ξ_n) e^{i(r_1ξ_1+...+r_nξ_n)}] = 0

by (**). Now

    ν(B) = ∫_B f(x_1,...,x_n) dP(ξ_1,...,ξ_n)^{−1}(x_1,...,x_n)

defines a finite signed measure (f.s.m.) on R^n which is therefore
uniquely determined by its Fourier transform. Thus ν(B) = 0 for all
Borel subsets B of R^n, and since f(ξ_1,...,ξ_n) is a σ(ξ_1,...,ξ_n)-
measurable random variable, we get that f(ξ_1,...,ξ_n) = 0 a.s. [P] and
(*) follows. □

If the regression is linear, then it is clear that the coefficients
a_1,...,a_n are uniquely determined by φ if and only if ξ_1,...,ξ_n are
linearly independent elements of L_1(Ω,F,P). For each choice of r∈R^n
the condition of Theorem 1.3.1 provides a linear equation involving the
a_j's, but it is not clear in general what n choices of r∈R^n will provide
n linearly independent equations which can be solved for the a_j's.
1.3.2 Example. If ξ_0,ξ_1,...,ξ_n are jointly α-sub-Gaussian random
variables, then the regression is linear and the coefficients are the
same as in the Gaussian case. For, let

    φ(r_0,...,r_n) = exp{−(1/2)^{α/2} (Σ_{i,j=0}^n a_{ij} r_i r_j)^{α/2}} ,

where Σ = (a_{ij}) is a covariance matrix. Then for r_1,...,r_n not all
zero,

    [∂φ(r_0,r_1,...,r_n)/∂r_0]_{r_0=0} = −α 2^{−α/2} φ(0,r_1,...,r_n) (Σ_{j=1}^n a_{0j} r_j) (Σ_{i,j=1}^n a_{ij} r_i r_j)^{−(2−α)/2} ,

and for 1 ≤ k ≤ n,

    [∂φ(0,r_1,...,r_n)/∂r_k] = −α 2^{−α/2} φ(0,r_1,...,r_n) (Σ_{j=1}^n a_{kj} r_j) (Σ_{i,j=1}^n a_{ij} r_i r_j)^{−(2−α)/2} .

Therefore the condition of Theorem 1.3.1 is written as

    Σ_{j=1}^n a_{0j} r_j = Σ_{k=1}^n a_k (Σ_{j=1}^n a_{kj} r_j) ,

or

    Σ_{j=1}^n [a_{0j} − Σ_{k=1}^n a_{jk} a_k] r_j = 0

for all r_1,...,r_n, and thus it is satisfied by the a_k's which are the
solutions of the system of equations

    Σ_{k=1}^n a_{jk} a_k = a_{0j} ,   j = 1,...,n.

Hence the regression is linear and the regression coefficients satisfy
the same equations Σ_n a = σ_0, with Σ_n = (a_{jk})_{j,k=1}^n, a^T = (a_1,...,a_n),
and σ_0^T = (a_{01},...,a_{0n}), as when ξ_0,ξ_1,...,ξ_n are jointly Gaussian
with mean zero and covariance matrix Σ.
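By Example 1.3.2 the α-sub-Gaussian regression coefficients solve the same normal equations Σ_n a = σ_0 as in the Gaussian case. A minimal sketch for n = 2 with an arbitrarily chosen covariance matrix:

```python
# Solve sum_k a_{jk} a_k = a_{0j}, j = 1,...,n (Gaussian normal equations),
# which by Example 1.3.2 also gives the alpha-sub-Gaussian regression coefficients.

def solve2(m, b):
    # Cramer's rule for a 2x2 linear system m @ a = b
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((b[0] * m[1][1] - b[1] * m[0][1]) / det,
            (m[0][0] * b[1] - m[1][0] * b[0]) / det)

# covariance matrix of (xi_0, xi_1, xi_2): an arbitrary positive-definite example
a = [[2.0, 0.8, 0.3],
     [0.8, 1.0, 0.2],
     [0.3, 0.2, 1.0]]
sigma_n = [[a[1][1], a[1][2]], [a[2][1], a[2][2]]]   # (a_{jk}), j,k = 1,2
sigma_0 = (a[0][1], a[0][2])                          # (a_{01}, a_{02})
a1, a2 = solve2(sigma_n, sigma_0)
# verify the normal equations hold
assert abs(a[1][1] * a1 + a[1][2] * a2 - a[0][1]) < 1e-12
assert abs(a[2][1] * a1 + a[2][2] * a2 - a[0][2]) < 1e-12
print("regression coefficients:", round(a1, 4), round(a2, 4))
```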
1.3.3 COROLLARY. If ξ_0,ξ_1,...,ξ_n are jointly SαS with spectral measure
Γ on the unit sphere S in R^{n+1}, then

    E(ξ_0|ξ_1,...,ξ_n) = a_1 ξ_1 +...+ a_n ξ_n  a.s.

if and only if

    ∫_S x_0 (r_1 x_1 +...+ r_n x_n)^{α−1} Γ(dx) = Σ_{k=1}^n a_k ∫_S x_k (r_1 x_1 +...+ r_n x_n)^{α−1} Γ(dx)

for all r_1,...,r_n.
Before illustrating the use of Corollary 1.3.3, we define the
covariation C_ηξ of η with ξ as

    C_ηξ = ∫_S x_1 (x_2)^{α−1} Γ_{η,ξ}(dx) ,

where η and ξ are jointly SαS with spectral measure Γ_{η,ξ}. (Note the
lack of symmetry in η and ξ here.) The next result provides an
interesting interpretation of C_ηξ.

1.3.4 COROLLARY. [Kanter 1972a, Theorem 1.4]. If η and ξ are jointly
SαS random variables, then

    E(η|ξ) = (C_ηξ / C_ξξ) ξ  a.s.

Proof: By Corollary 1.3.3, E(η|ξ) = aξ a.s. if and only if

    ∫_S x_1 (r x_2)^{α−1} Γ(dx) = a ∫_S x_2 (r x_2)^{α−1} Γ(dx)

for all r∈R. Solving for a yields

    a = ∫_S x_1 (x_2)^{α−1} Γ(dx) / ∫_S |x_2|^α Γ(dx) = C_ηξ / C_ξξ . □
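For a discrete spectral measure the covariation is a finite sum, and the coefficient a = C_ηξ/C_ξξ satisfies the identity ∫_S x_1 (r x_2)^{α−1} Γ(dx) = a ∫_S x_2 (r x_2)^{α−1} Γ(dx) for every r, since (r x_2)^{α−1} = (r)^{α−1}(x_2)^{α−1} under the signed-power convention. A sketch with arbitrarily chosen atoms and weights:

```python
def spow(u, p):
    # signed power convention: (u)^p = |u|^p * sign(u)
    return abs(u) ** p * (1 if u > 0 else -1 if u < 0 else 0)

alpha = 1.6
# discrete spectral measure of (eta, xi): atoms (x1, x2) on the unit circle
atoms = [(0.6, 0.8), (-0.6, -0.8), (1.0, 0.0), (0.28, -0.96)]
weights = [0.5, 0.5, 0.4, 0.7]

c_ex = sum(w * x1 * spow(x2, alpha - 1) for (x1, x2), w in zip(atoms, weights))  # C_{eta xi}
c_xx = sum(w * abs(x2) ** alpha for (_, x2), w in zip(atoms, weights))           # C_{xi xi}
a = c_ex / c_xx
for r in (0.5, -1.0, 3.0):
    lhs = sum(w * x1 * spow(r * x2, alpha - 1) for (x1, x2), w in zip(atoms, weights))
    rhs = a * sum(w * x2 * spow(r * x2, alpha - 1) for (x1, x2), w in zip(atoms, weights))
    assert abs(lhs - rhs) < 1e-12
print("E(eta | xi) = (C_ex / C_xx) * xi with coefficient a =", round(a, 4))
```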
For jointly Gaussian random variables η and ξ with zero mean (the
case α = 2) it is well known that a result analogous to Corollary 1.3.4
holds with C_ηξ replaced by the covariance of η and ξ.

By appropriate choice of Γ it is easy to see from Corollary 1.3.3
that the regression can be nonlinear. For example, take n = 2 and
suppose that

    Γ({(3^{−1/2}, 3^{−1/2}, 3^{−1/2})}) = Γ({(0,1,0)}) = Γ({(0,0,1)}) = 1

and that Γ places zero mass on the remainder of S. (Note that Γ need
not be symmetric unless we are concerned about uniqueness.) Then
E(ξ_0|ξ_1,ξ_2) is not a linear function of ξ_1 and ξ_2; however, even in
this simple case we do not know the form of the regression.
1.3.5 COROLLARY. If ξ_0,ξ_1,ξ_2 are jointly SαS and if

    E(ξ_0|ξ_1,ξ_2) = a_1 ξ_1 + a_2 ξ_2  a.s. ,

then a_1 and a_2 satisfy

(*)    C_01 = a_1 C_11 + a_2 C_21   and   C_02 = a_1 C_12 + a_2 C_22 ,

where C_ij is the covariation of ξ_i with ξ_j. Moreover, equations (*)
uniquely determine a_1 and a_2 if and only if ξ_1 and ξ_2 are linearly
independent elements of L_1(Ω).

Proof: If the regression is linear, then equations (*) follow immediately
from the condition of Corollary 1.3.3 by taking r_1 = 1, r_2 = 0 and
r_1 = 0, r_2 = 1. These equations have a unique solution unless
C_11 C_22 = C_12 C_21, i.e.,

    ∫_S |x_1|^α Γ(dx) ∫_S |x_2|^α Γ(dx) = ∫_S x_1 (x_2)^{α−1} Γ(dx) ∫_S x_2 (x_1)^{α−1} Γ(dx) ,

which implies that x_1 = λ x_2 a.e. [Γ] for some λ∈R by Hölder's
inequality, hence ξ_1 = λ ξ_2 a.s. □

If n > 2 and the regression is linear, then the regression
coefficients a_j again satisfy linear equations given by the condition in
Corollary 1.3.3. Unfortunately, just as in the non-stable case (Theorem
1.3.1), we do not know in general how to choose n linearly independent
equations that can be solved for the a_j's.
The following corollary shows that the regression is always linear
and the regression coefficients are easily obtained whenever ξ_1,...,ξ_n
are independent.

1.3.6 COROLLARY. [Kanter 1972a, Theorem 3.4]. If ξ_0,ξ_1,...,ξ_n are
jointly SαS random variables and if ξ_1,...,ξ_n are independent and
nondegenerate, then

    E(ξ_0|ξ_1,...,ξ_n) = a_1 ξ_1 +...+ a_n ξ_n  a.s. ,

and the coefficients a_k are given by

    a_k = C_0k / C_kk ,

where C_0k is the covariation of ξ_0 with ξ_k and C_kk is the covariation
of ξ_k with itself. The proof follows easily from Corollaries 1.2.3
and 1.3.3.
We now obtain a condition for linear regression by applying to
Corollary 1.3.3 the methods used in Lemma 1.2.2. Although the resulting
condition appears surprisingly involved, it is not clear that further
simplification is possible.

Define T: R^{n+1} → R^{n+1} by T(y_0,y_1,...,y_n) = (0,y_1,...,y_n), define
θ: (0,∞) × S → R^{n+1} by θ(s,x) = sx, and define the σ-finite measure ρ
on (0,∞) by ρ(ds) = ds/s^α. Let f(x) = x_0 − a_1 x_1 −...− a_n x_n, and
define a f.s.m. ν on S by ν(dx) = f(x) Γ(dx). Let G be the measure on
R^{n+1} defined by G = (ρ × ν)θ^{−1}, and let 𝒢 be the σ-field of subsets

    {R × B: B is a Borel subset of R^n − (0,...,0)}

of R × [R^n − (0,...,0)].

1.3.7 THEOREM. Let ξ_0,ξ_1,...,ξ_n be jointly SαS random variables,
1 < α < 2, with corresponding measure Γ on S. Then E(ξ_0|ξ_1,...,ξ_n)
= a_1 ξ_1 +...+ a_n ξ_n a.s. if and only if G is the zero measure on 𝒢.
Proof: Assume that E(ξ_0|ξ_1,...,ξ_n) = a_1 ξ_1 +...+ a_n ξ_n a.s. Then by
Corollary 1.3.3,

    ∫_S f(x) (<r,Tx>)^{α−1} Γ(dx) = 0

for all r∈R^{n+1}. Note that for any z∈R,

    (z)^{α−1} ∫_0^∞ sin s ρ(ds) = ∫_0^∞ sin(zs) ρ(ds) .

Thus

    0 = ∫_S f(x) (<r,Tx>)^{α−1} Γ(dx) ∫_0^∞ sin s ρ(ds)
      = ∫_S ∫_0^∞ sin<r,T(sx)> ν(dx) ρ(ds)
      = ∫_{R^{n+1}} sin<r,Tv> G(dv)
(*)   = ∫_{R^{n+1}} sin<r,v> GT^{−1}(dv)

for all r∈R^{n+1}.

Let h_1,...,h_n be any real numbers, and let δ_1 = (0,h_1,0,...,0),
δ_2 = (0,0,h_2,0,...,0), ..., δ_n = (0,...,0,h_n). Then for every r∈R^{n+1}

    0 = (1/2) Σ_{j=1}^n ∫_{R^{n+1}} [2 sin<r,v> − sin<r+δ_j,v> − sin<r−δ_j,v>] GT^{−1}(dv)
      = ∫_{R^{n+1}} sin<r,v> Σ_{j=1}^n (1 − cos h_j v_j) GT^{−1}(dv) .

We can choose h_1,...,h_n and h'_1,...,h'_n such that either
Σ_{j=1}^n (1 − cos h_j v_j) or Σ_{j=1}^n (1 − cos h'_j v_j) is nonzero for every
v∈{(v_0,v_1,...,v_n)∈R^{n+1}: v_1² +...+ v_n² > 0}. Let
g(v) = Σ_{j=1}^n (2 − cos h_j v_j − cos h'_j v_j). Then for every r∈R^{n+1},

    ∫_{R^{n+1}} sin<r,v> g(v) GT^{−1}(dv) = 0 .

Now it is easy to see that g(v)GT^{−1}(dv) defines a f.s.m. on R^{n+1}
which is antisymmetric in the sense that

    ∫_{−B} g(v) GT^{−1}(dv) = − ∫_B g(v) GT^{−1}(dv)

for every Borel subset B of R^{n+1}. Thus for every r∈R^{n+1},
∫_{R^{n+1}} e^{i<r,v>} g(v) GT^{−1}(dv) = 0. It follows by the uniqueness of
the Fourier transform that g(v)GT^{−1}(dv) is the zero measure on R^{n+1}.
Hence GT^{−1} is the zero measure on {(v_0,v_1,...,v_n)∈R^{n+1}:
v_1² +...+ v_n² > 0}. Thus for every Borel set B ⊂ R^n − (0,...,0),

    G(R×B) = GT^{−1}(R×B) = 0 .

Therefore G is the zero measure on 𝒢.

For the converse, note that sin<r,Tv> is a 𝒢-measurable function on
R × [R^n − (0,...,0)] for every r∈R^{n+1}. Thus, if G is the zero measure
on 𝒢, then

    ∫_{R^{n+1}} sin<r,Tv> G(dv) = 0

for every r∈R^{n+1}. The linearity of the regression now follows from
equation (*) and Corollary 1.3.3. □
The condition in Theorem 1.3.7 is too complicated for easy verifica-
tion; so we shall present in the following proposition a simpler sufficient
condition that provides nontrivial examples in which the regression is
linear. To simplify the presentation and to make geometric visualization
possible, we shall treat the case n = 2, i.e., the regression E(ξ_0|ξ_1,ξ_2).

Therefore, consider jointly SαS random variables ξ_0,ξ_1,ξ_2 with
1 < α < 2 and spectral measure Γ on the unit sphere S = {(x_0,x_1,x_2)∈R³:
x_0² + x_1² + x_2² = 1}. Geometrically, we regard S as being separated into
two hemispheres S⁺ and S⁻ by the plane {x_0 = 0} and obtain from the
spectral measure Γ two new measures Γ_1 and Γ_2 on the plane {x_0 = 0} by
"collapsing" these two hemispheres. Specifically, let
S⁺ = S ∩ {x_0 ≥ 0, x_0 ≠ 1}, S⁻ = S ∩ {x_0 < 0, x_0 ≠ −1}, and
U = {x∈R²: x_1² + x_2² ≤ 1}, and define T: S → U by T(x_0,x_1,x_2) = (x_1,x_2).
Then, for all Borel subsets B of U,

    Γ_1(B) = Γ(T^{−1}(B) ∩ S⁺)   and   Γ_2(B) = Γ(T^{−1}(B) ∩ S⁻)

define two measures on U which we notice place zero mass on the point
(0,0). Finally, we introduce two functions f_1 and f_2 on U representing
the function x_0 − a_1 x_1 − a_2 x_2 on S⁺ and S⁻, respectively:

    f_1(x_1,x_2) = (1 − x_1² − x_2²)^{1/2} − a_1 x_1 − a_2 x_2 ,
    f_2(x_1,x_2) = −(1 − x_1² − x_2²)^{1/2} − a_1 x_1 − a_2 x_2 ,

and we define a f.s.m. on U by

    ν(dx) = f_1(x) Γ_1(dx) + f_2(x) Γ_2(dx) .

1.3.8 PROPOSITION. If ν is the zero measure on U, then

    E(ξ_0|ξ_1,ξ_2) = a_1 ξ_1 + a_2 ξ_2  a.s.

Proof: The result follows from Corollary 1.3.3 and the following equality
(the excluded poles (±1,0,0) contribute nothing, since r_1 x_1 + r_2 x_2 = 0
there):

    ∫_S (x_0 − a_1 x_1 − a_2 x_2)(r_1 x_1 + r_2 x_2)^{α−1} Γ(dx)
        = ∫_{S⁺} (r_1 x_1 + r_2 x_2)^{α−1} f_1(Tx) Γ(dx) + ∫_{S⁻} (r_1 x_1 + r_2 x_2)^{α−1} f_2(Tx) Γ(dx)
        = ∫_U (r_1 x_1 + r_2 x_2)^{α−1} ν(dx) . □
If Γ_1 and Γ_2 are absolutely continuous with respect to a measure μ
(e.g., μ = Γ_1 + Γ_2) with Radon-Nikodym derivatives g_1 and g_2,
respectively, then the condition of Proposition 1.3.8 can be expressed as

    0 = ν(B) = ∫_B [f_1(x) g_1(x) + f_2(x) g_2(x)] μ(dx)

for all Borel subsets B of U, or equivalently,

(*)    f_1(x) g_1(x) + f_2(x) g_2(x) = 0  a.e. [μ] .
1.3.9 Example. Given real numbers a_1 and a_2, define a Borel subset B
of U by

    B = {x∈U: (1 − x_1² − x_2²)^{1/2} − |a_1 x_1 + a_2 x_2| > 0}

and two functions g_1 and g_2 on U by

    g_1(x) = 1 / [(1 − x_1² − x_2²)^{1/2} − a_1 x_1 − a_2 x_2]   if x∈B ,   0 otherwise ,

    g_2(x) = 1 / [(1 − x_1² − x_2²)^{1/2} + a_1 x_1 + a_2 x_2]   if x∈B ,   0 otherwise ,

and let μ be any σ-finite measure on U with respect to which g_1 and g_2
are integrable. Then (*) is satisfied, and consequently

    E(ξ_0|ξ_1,ξ_2) = a_1 ξ_1 + a_2 ξ_2  a.s.
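The functions of Example 1.3.9 are built so that f_1 g_1 + f_2 g_2 vanishes identically on U: on B the two terms are +1 and −1, and off B both densities are zero. A numerical check (assuming the hemisphere parametrization x_0 = ±(1 − x_1² − x_2²)^{1/2}; the values of a_1, a_2 are arbitrary):

```python
import math
import random

a1, a2 = 0.3, -0.5

def f1(x1, x2):
    # x0 - a1*x1 - a2*x2 on the upper hemisphere, x0 = +sqrt(1 - x1^2 - x2^2)
    return math.sqrt(1 - x1 ** 2 - x2 ** 2) - a1 * x1 - a2 * x2

def f2(x1, x2):
    # lower hemisphere, x0 = -sqrt(1 - x1^2 - x2^2)
    return -math.sqrt(1 - x1 ** 2 - x2 ** 2) - a1 * x1 - a2 * x2

def in_B(x1, x2):
    # B = {x in U: sqrt(1 - x1^2 - x2^2) > |a1*x1 + a2*x2|}
    return math.sqrt(1 - x1 ** 2 - x2 ** 2) > abs(a1 * x1 + a2 * x2)

def g1(x1, x2):
    return 1 / f1(x1, x2) if in_B(x1, x2) else 0.0

def g2(x1, x2):
    # f2 < 0 on B, so g2 = -1/f2 = 1/(sqrt(...) + a1*x1 + a2*x2) is nonnegative
    return -1 / f2(x1, x2) if in_B(x1, x2) else 0.0

random.seed(1)
checked = 0
while checked < 100:
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    if x1 ** 2 + x2 ** 2 >= 1:
        continue  # sample points inside the open unit disk U
    assert abs(f1(x1, x2) * g1(x1, x2) + f2(x1, x2) * g2(x1, x2)) < 1e-12
    checked += 1
print("f1*g1 + f2*g2 = 0 throughout U")
```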
There appears to be little more that can be said about the form of
the regression function using the techniques of this section, primarily
because higher moments of SαS variables do not exist (see the next
section). One quite restricted kind of problem which can be solved
simply is to obtain necessary and sufficient conditions for
E(ξ_1 ξ_2|ξ_3,ξ_4) = ξ_3 ξ_4 a.s. when the appropriate moments exist.

1.3.10 PROPOSITION. Let ξ_1,ξ_2,ξ_3,ξ_4 be jointly SαS random variables
with 1 < α < 2 and spectral measure Γ such that ξ_1 and ξ_2 are independent
and ξ_3 and ξ_4 are independent. Assume that ξ_3 ≠ 0 and ξ_4 ≠ 0, and for
each i,j∈{1,2,3,4} let C_ij be the covariation of ξ_i with ξ_j. Then
E(ξ_1 ξ_2|ξ_3,ξ_4) = ξ_3 ξ_4 a.s. if and only if either C_13 C_24 = C_33 C_44
and C_14 = C_23 = 0, or else C_14 C_23 = C_33 C_44 and C_13 = C_24 = 0.

We remark that our analysis shows it impossible to have a first-
order term such as c ξ_3 in the above regression.

Proof: In a manner similar to the argument in Theorem 1.3.1, it follows
that E(ξ_1 ξ_2|ξ_3,ξ_4) = ξ_3 ξ_4 a.s. if and only if

    [∂²φ(r_1,r_2,r_3,r_4)/∂r_1 ∂r_2]_{r_1=r_2=0} = [∂²φ(r_1,r_2,r_3,r_4)/∂r_3 ∂r_4]_{r_1=r_2=0}

for all (r_3,r_4). Since ξ_1 and ξ_2 are independent and ξ_3 and ξ_4 are
independent, Γ places no mass where x_1 x_2 ≠ 0 or where x_3 x_4 ≠ 0, and
upon computing the partial derivatives this condition becomes

    [(r_3)^{α−1} C_13 + (r_4)^{α−1} C_14][(r_3)^{α−1} C_23 + (r_4)^{α−1} C_24] = (r_3)^{α−1}(r_4)^{α−1} C_33 C_44

for all (r_3,r_4). Taking alternately (r_3,r_4) = (1,0) and (r_3,r_4) = (0,1)
we get C_13 C_23 = 0 = C_14 C_24, and our condition then becomes

    C_33 C_44 = C_13 C_24 + C_14 C_23 .

But one of the terms on the right-hand side must be zero since
C_13 C_23 = 0 = C_14 C_24, and both terms cannot be zero since ξ_3 ≠ 0 and
ξ_4 ≠ 0 imply C_33 ≠ 0 and C_44 ≠ 0. □
1.3.11 Example. To illustrate Proposition 1.3.10, consider a measure Γ
which concentrates its mass as follows: mass 2^{(α/2)−1} is placed on each
of the four points

    (0, 2^{−1/2}, 2^{−1/2}, 0),  (0, 2^{−1/2}, −2^{−1/2}, 0),
    (2^{−1/2}, 0, 0, 2^{−1/2}),  (2^{−1/2}, 0, 0, −2^{−1/2}) ;

and mass 5^{α/2} is placed on each of the two points

    ((4/5)^{1/2}, 0, (1/5)^{1/2}, 0),  (0, (4/5)^{1/2}, 0, (1/5)^{1/2}) .

Then C_13 = C_24 = C_33 = C_44 = 2 and C_14 = C_23 = 0, and therefore
E(ξ_1 ξ_2|ξ_3,ξ_4) = ξ_3 ξ_4 a.s. (Notice that neither ξ_1 and ξ_4 nor
ξ_2 and ξ_3 are independent.)
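The covariations in Example 1.3.11 can be recomputed directly from the atoms and their masses (a sketch; α is an arbitrary choice in (1,2), and the atoms are as read here):

```python
def spow(u, p):
    # signed power: (u)^p = |u|^p * sign(u)
    return abs(u) ** p * (1 if u > 0 else -1 if u < 0 else 0)

alpha = 1.5
t = 2 ** -0.5
# atoms of Gamma on the unit sphere of R^4, paired with their masses
atoms = [((0, t, t, 0), 2 ** (alpha / 2 - 1)),
         ((0, t, -t, 0), 2 ** (alpha / 2 - 1)),
         ((t, 0, 0, t), 2 ** (alpha / 2 - 1)),
         ((t, 0, 0, -t), 2 ** (alpha / 2 - 1)),
         (((4 / 5) ** 0.5, 0, (1 / 5) ** 0.5, 0), 5 ** (alpha / 2)),
         ((0, (4 / 5) ** 0.5, 0, (1 / 5) ** 0.5), 5 ** (alpha / 2))]

def cov(i, j):
    # covariation C_ij = sum over atoms of mass * x_i * (x_j)^(alpha - 1)
    return sum(m * x[i] * spow(x[j], alpha - 1) for x, m in atoms)

assert abs(cov(0, 2) - 2) < 1e-12   # C_13
assert abs(cov(1, 3) - 2) < 1e-12   # C_24
assert abs(cov(2, 2) - 2) < 1e-12   # C_33
assert abs(cov(3, 3) - 2) < 1e-12   # C_44
assert abs(cov(0, 3)) < 1e-12       # C_14 = 0
assert abs(cov(1, 2)) < 1e-12       # C_23 = 0
print("C_13 = C_24 = C_33 = C_44 = 2 and C_14 = C_23 = 0")
```

The two symmetric pairs of atoms in the k-th coordinates cancel in C_14 and C_23 because the signed power is odd, which is why those covariations vanish even though the corresponding variables are dependent.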
We conclude this section with a discussion of regression in the
infinite-dimensional case. The reading of these remarks might be
deferred, since some of the concepts which arise here are dealt with
extensively in Chapter II.

Suppose that {η, ξ_t, t∈T} is a SαS family of random variables with
1 < α < 2, and consider the regression of η on {ξ_t, t∈T}. There exists
a countable subset T_∞ of T such that

    E(η|ξ_t, t∈T) = E(η|ξ_t, t∈T_∞)  a.s.

If we order the points in T_∞ and let T_n be the set containing the first
n points, then

    E(η|ξ_t, t∈T_∞) = lim_{n→∞} E(η|ξ_t, t∈T_n)

by [Doob 1953, p. 319], where the convergence is in L_p(Ω), 1 < p < α.
If E(η|ξ_t, t∈T_n) ∈ L_n for every n, where L_n is the linear space of
{ξ_t, t∈T_n}, then E(η|ξ_t, t∈T_∞) will belong to the linear space of
{ξ_t, t∈T_∞}, and consequently the regression E(η|ξ_t, t∈T) will be linear.

We have seen two cases where the regressions E(η|ξ_t, t∈T_n) are
always linear: when the process is (1) α-sub-Gaussian (Example 1.3.2)
and (2) independent (Corollary 1.3.6). Case (2) is interesting when
{ξ_t, t∈T} is a SαS process with independent increments, T = [0,∞), and
ξ_0 = 0. For then

    σ(ξ_{t_1},...,ξ_{t_n}) = σ(ξ_{t_1}, ξ_{t_2}−ξ_{t_1},...,ξ_{t_n}−ξ_{t_{n−1}}) ,   t_1 ≤ ... ≤ t_n ,

and therefore the regression on ξ_{t_1},...,ξ_{t_n} is linear in the
independent increments and hence in ξ_{t_1},...,ξ_{t_n} themselves. In both
cases (1) and (2) we have expressions for the regression coefficients a_j
and can therefore write E(η|ξ_t, t∈T) as the limit of a sequence of finite
linear combinations of elements in {ξ_t, t∈T}. We shall further
investigate case (2) in Section 3 of the following chapter.
4. Moments.
It is known that if ξ is a SαS random variable and p > 0, then E|ξ|^p < ∞ if and only if 0 < p < α. We consider here the question of determining the values of positive constants p₁,...,p_n such that

(*)  E(|ξ₁|^{p₁} ··· |ξ_n|^{p_n}) < ∞

when ξ₁,...,ξ_n are jointly SαS random variables. Clearly if they are independent the necessary and sufficient condition is 0 < p_i < α, i = 1,...,n. If n = 2 and ξ₁ and ξ₂ are dependent, we show that (*) is equivalent to 0 < p₁+p₂ < α (Corollary 1.4.5). For general n the necessary and sufficient condition on the p_i's is the same, provided the SαS distribution in R^n satisfies a condition which we shall call n-fold dependence (Theorem 1.4.4).
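As a quick illustration of the single-variable fact that a SαS variable has finite absolute moments exactly of orders p < α, one can sample from a standard SαS law and inspect empirical quantities. The sketch below uses the Chambers–Mallows–Stuck construction (an assumption of this illustration, not part of the text) and checks the defining characteristic function E cos(tξ) = e^{−|t|^α} at t = 1:

```python
import math, random

def sas(alpha, rng=random):
    """Chambers-Mallows-Stuck draw of a standard SaS variable
    (characteristic function exp(-|t|^alpha)), alpha != 1."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

random.seed(0)
alpha, n = 1.5, 200_000
xs = [sas(alpha) for _ in range(n)]

# empirical characteristic function at t = 1 should be close to exp(-1)
ecf = sum(math.cos(x) for x in xs) / n
print(ecf, math.exp(-1.0))

# sample mean of |X|^0.5 (a moment of order 0.5 < alpha, hence finite)
m_half = sum(abs(x) ** 0.5 for x in xs) / n
print(m_half)
```

Moments of order p ≥ α show up in simulation as sample means that fail to stabilize as n grows.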
Jointly SαS random variables ξ₁,...,ξ_n with spectral measure Γ are called n-fold dependent if

Γ{x∈S: x₁x₂···x_n ≠ 0} > 0.

This condition will often be satisfied, and in fact it fails to hold only when Γ is supported by a rather particular region of S having (n−1)-dimensional Lebesgue measure zero. It is clear from Theorem 1.2.1 that 2-fold dependence is equivalent to dependence, but that for n ≥ 3, n-fold dependence is stronger than dependence (i.e., non-independence). In Lemma 1.4.3 we prove an interesting characterization of n-fold dependence.
We shall begin with a result that gives a condition for the existence of moments in terms of the c.f., which is assumed only to be real-valued and not necessarily SαS. This theorem extends a special case of Theorem 2 of [Wolfe 1973] to a multivariate distribution and is proved by likewise extending the method of Wolfe's proof.
1.4.1 THEOREM. Let ξ₁,...,ξ_n be random variables with real-valued joint c.f. φ and suppose that 0 < p_k < 2 for k = 1,...,n. Then E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) < ∞ if and only if

(*)  ∫₀^ε···∫₀^ε {2^{n−1}[1 − Σ_{k=1}^n φ(...0...,2z_k,...0...)]
       + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} φ(...0...,2z_j,...0...,(−1)^{ε₁}2z_k,...0...)
       − ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} φ(2z₁,(−1)^{ε₁}2z₂,...,(−1)^{ε_{n−1}}2z_n)}
       dz₁···dz_n / (z₁^{1+p₁}···z_n^{1+p_n}) < ∞

for some ε > 0.

Proof:
We shall use the following elementary trigonometric identity:

2^{2n−1} sin²z₁ sin²z₂ ··· sin²z_n
  = 2^{n−1}[1 − Σ_{k=1}^n cos 2z_k]
    + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} cos(2z_j + (−1)^{ε₁}2z_k)
    − 2^{n−3} Σ_{i<j<k} Σ_{ε₁,ε₂∈{0,1}} cos(2z_i + (−1)^{ε₁}2z_j + (−1)^{ε₂}2z_k)
    + ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} cos(2z₁ + (−1)^{ε₁}2z₂ + ··· + (−1)^{ε_{n−1}}2z_n).
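The identity, as reconstructed here from the garbled original, can be verified numerically for small n; the following sketch compares both sides at random points:

```python
import math, random
from itertools import combinations, product

def lhs(z):
    """Left side: 2^{2n-1} * prod sin^2 z_k."""
    n = len(z)
    return 2 ** (2 * n - 1) * math.prod(math.sin(zk) ** 2 for zk in z)

def rhs(z):
    """Right side: alternating sum of signed cosines with coefficients (-1)^m 2^{n-m}."""
    n = len(z)
    total = 2 ** (n - 1)
    for m in range(1, n + 1):
        coef = (-1) ** m * 2 ** (n - m)
        for A in combinations(range(n), m):
            for eps in product([1, -1], repeat=m - 1):
                arg = 2 * z[A[0]] + sum(e * 2 * z[k] for e, k in zip(eps, A[1:]))
                total += coef * math.cos(arg)
    return total

random.seed(0)
z = [random.uniform(0, 3) for _ in range(3)]
print(abs(lhs(z) - rhs(z)))  # ~0
```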
Thus if μ is the measure induced on R^n by (ξ₁,...,ξ_n), then the integral (*) can be written as

2^{2n−1} ∫₀^ε···∫₀^ε ∫_{R^n} sin²(r₁z₁) sin²(r₂z₂) ··· sin²(r_nz_n) μ(dr₁,...,dr_n) dz₁dz₂···dz_n / (z₁^{1+p₁} z₂^{1+p₂} ··· z_n^{1+p_n}).
Sufficiency. If the condition of the theorem holds, then

∫···∫_{|r₁|≥1,...,|r_n|≥1} |r₁|^{p₁}···|r_n|^{p_n} μ(dr₁,...,dr_n) ∫₀^ε···∫₀^ε (sin²y₁···sin²y_n / (y₁^{1+p₁}···y_n^{1+p_n})) dy₁···dy_n
  ≤ ∫···∫_{|r₁|≥1,...,|r_n|≥1} |r₁|^{p₁}···|r_n|^{p_n} [∫₀^{|r₁|ε}···∫₀^{|r_n|ε} (sin²y₁···sin²y_n / (y₁^{1+p₁}···y_n^{1+p_n})) dy₁···dy_n] μ(dr₁,...,dr_n) < ∞,

so that

∫···∫_{|r₁|≥1,...,|r_n|≥1} |r₁|^{p₁}···|r_n|^{p_n} μ(dr₁,...,dr_n) < ∞

and hence E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) < ∞.
Necessity. The integral (*) is less than or equal to

2^{2n−1} E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) ∫₀^∞ (sin²y₁ / y₁^{1+p₁}) dy₁ ··· ∫₀^∞ (sin²y_n / y_n^{1+p_n}) dy_n < ∞.  □

If the condition of the theorem holds and if we let ε increase to infinity, then the integral (*) converges to

2^{2n−1} E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) ∫₀^∞ (sin²y₁ / y₁^{1+p₁}) dy₁ ··· ∫₀^∞ (sin²y_n / y_n^{1+p_n}) dy_n,

and we therefore get an expression for E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) in terms of the c.f. φ.
For the analysis that follows we shall transform the rectangular
coordinate system used in (*) to another coordinate system in Rn that is
the familiar spherical coordinate system if n = 3.
The details of this
transformation are indicated in the following lemma.
1.4.2 LEMMA. Condition (*) of Theorem 1.4.1 can be expressed as

∫₀^{π/2}···∫₀^{π/2} ∫₀^ε {2^{n−1}[1 − Σ_{k=1}^n φ(...0...,2r r_k(θ),...0...)]
  + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} φ(...0...,2r r_j(θ),...0...,(−1)^{ε₁}2r r_k(θ),...0...)
  − ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} φ(2r r₁(θ),(−1)^{ε₁}2r r₂(θ),...,(−1)^{ε_{n−1}}2r r_n(θ))}
  × dr / r^{1+p₁+···+p_n} · dθ₁dθ₂···dθ_{n−1} / ∏_{k=1}^{n−1}[(sin θ_k)^{1+Σ_{j=1}^k p_j} (cos θ_k)^{1+p_{k+1}}] < ∞,

where

r₁(θ) = ∏_{k=1}^{n−1} sin θ_k,  r₂(θ) = cos θ₁ ∏_{k=2}^{n−1} sin θ_k,  r₃(θ) = cos θ₂ ∏_{k=3}^{n−1} sin θ_k, ..., r_n(θ) = cos θ_{n−1}.

Proof: Transform the region of integration of integral (*) as follows: z₁ = r r₁(θ), z₂ = r r₂(θ), ..., z_n = r r_n(θ). The Jacobian of this transformation is r^{n−1} ∏_{k=2}^{n−1} sin^{k−1}θ_k, and the lemma follows by straightforward substitution. □
The next lemma characterizes n-fold dependence and has an interesting interpretation for bivariate SαS distributions.

1.4.3 LEMMA. Let Γ be a finite symmetric measure on the Borel subsets of the unit sphere S in R^n and suppose that 0 < α < 2. Then Γ{x∈S: x₁x₂···x_n ≠ 0} > 0 if and only if, for every (r₁,...,r_n) with r₁r₂···r_n ≠ 0,

(**)  2^{n−1} Σ_{k=1}^n ∫_S |r_k x_k|^α Γ(dx) − 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} ∫_S |r_j x_j + (−1)^{ε₁} r_k x_k|^α Γ(dx)
      + ··· + (−1)^{n−1} Σ_{ε₁,...,ε_{n−1}∈{0,1}} ∫_S |r₁x₁ + (−1)^{ε₁} r₂x₂ + ··· + (−1)^{ε_{n−1}} r_n x_n|^α Γ(dx) > 0.

Proof: Similarly to what has been done before, define the σ-finite measure ρ on (0,∞) by ρ(ds) = ds/s^{1+α}, define θ: (0,∞) × S → R^n by θ(s,x) = sx, and let G = (ρ × Γ)θ^{−1}. Multiplying the left-hand side
of inequality (**) by ∫₀^∞ (1 − cos s) ρ(ds), we obtain

∫_{R^n} {2^{n−1} Σ_k [1 − cos r_k v_k] − 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} [1 − cos(r_j v_j + (−1)^{ε₁} r_k v_k)]
  + 2^{n−3} Σ_{i<j<k} Σ_{ε₁,ε₂∈{0,1}} [1 − cos(r_i v_i + (−1)^{ε₁} r_j v_j + (−1)^{ε₂} r_k v_k)]
  − ··· + (−1)^{n−1} Σ_{ε₁,...,ε_{n−1}∈{0,1}} [1 − cos(r₁v₁ + (−1)^{ε₁} r₂v₂ + ··· + (−1)^{ε_{n−1}} r_n v_n)]} G(dv)
  = 2^{n−1} ∫_{R^n} ∏_{k=1}^n (1 − cos r_k v_k) G(dv)
  = 2^{n−1} ∫_S ∫₀^∞ (1 − cos r₁sx₁) ··· (1 − cos r_n s x_n) ρ(ds) Γ(dx)
  = 2^{n−1} ∫_{{x∈S: x₁···x_n ≠ 0}} ∫₀^∞ (1 − cos r₁sx₁) ··· (1 − cos r_n s x_n) ρ(ds) Γ(dx).

It is then clear that (**) implies that Γ{x∈S: x₁···x_n ≠ 0} > 0. Conversely, for every x∈S such that x₁···x_n ≠ 0 and every r∈R^n such that r₁···r_n ≠ 0,

∫₀^∞ (1 − cos r₁sx₁) ··· (1 − cos r_n s x_n) ρ(ds) > 0,

so that Γ{x∈S: x₁···x_n ≠ 0} > 0 clearly implies (**). □
It should be pointed out that the condition of Lemma 1.4.3 is not changed if we require inequality (**) to hold for only one choice of r₁,...,r_n such that r₁···r_n ≠ 0, and it is also clear that the direction of inequality (**) is never reversed.

If we take n = 2 and r₁ = r₂ = 1, then Lemma 1.4.3 yields the following: Γ{x∈S: x₁x₂ ≠ 0} = 0 if and only if

∫_S |x₁+x₂|^α Γ(dx) + ∫_S |x₁−x₂|^α Γ(dx) = 2 ∫_S |x₁|^α Γ(dx) + 2 ∫_S |x₂|^α Γ(dx).

If ξ₁ and ξ₂ are jointly SαS random variables with 1 < α < 2 and spectral measure Γ, this latter condition may be written in terms of a norm introduced in the next chapter:

||ξ₁+ξ₂||^α + ||ξ₁−ξ₂||^α = 2||ξ₁||^α + 2||ξ₂||^α,

which is analogous to the parallelogram law for an inner product space. Indeed, if α = 2 the equation is precisely the parallelogram law stated for two zero-mean bivariate normal random variables, but if 1 < α < 2 we get that the "parallelogram law" holds for two bivariate SαS random variables if and only if they are independent (Theorem 1.2.1).
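For n = 2 the content of Lemma 1.4.3 can be made concrete with discrete spectral measures. In the sketch below, gap(Γ) is the left-hand side of (**) with r₁ = r₂ = 1; it vanishes when Γ sits on the coordinate axes (independence) and is strictly positive when Γ carries off-axis mass (the atoms and α are illustrative choices):

```python
alpha = 1.5

def norm_a(b1, b2, atoms):
    """||b1*xi1 + b2*xi2||^alpha = integral |b1 x1 + b2 x2|^alpha dGamma."""
    return sum(m * abs(b1 * x1 + b2 * x2) ** alpha for m, (x1, x2) in atoms)

def gap(atoms):
    """Left-hand side of (**) for n = 2, r1 = r2 = 1."""
    return (2 * norm_a(1, 0, atoms) + 2 * norm_a(0, 1, atoms)
            - norm_a(1, 1, atoms) - norm_a(1, -1, atoms))

s = 2 ** -0.5
independent = [(1.0, (1.0, 0.0)), (1.0, (0.0, 1.0))]  # mass on the axes only
dependent   = [(1.0, (s, s)), (1.0, (1.0, 0.0))]      # off-axis mass

print(gap(independent))  # 0: the "parallelogram law" holds
print(gap(dependent))    # strictly positive
```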
We now apply Theorem 1.4.1 to SαS random variables.

1.4.4 THEOREM. Let ξ₁,...,ξ_n be n-fold dependent jointly SαS random variables with spectral measure Γ and 0 < α < 2. Then for positive numbers p₁,...,p_n we have E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) < ∞ if and only if p₁ + ··· + p_n < α.
Proof: Combining Theorem 1.4.1 and Lemma 1.4.2 and using the particular form of the c.f. φ, we have that E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) < ∞ if and only if

∫₀^{π/2}···∫₀^{π/2} ∫₀^ε {2^{n−1} − 2^{n−1} Σ_{k=1}^n exp[−2^α r^α ∫_S |r_k(θ)x_k|^α Γ(dx)]
  + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} exp[−2^α r^α ∫_S |r_j(θ)x_j + (−1)^{ε₁} r_k(θ)x_k|^α Γ(dx)]
  − ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} exp[−2^α r^α ∫_S |r₁(θ)x₁ + (−1)^{ε₁} r₂(θ)x₂ + ··· + (−1)^{ε_{n−1}} r_n(θ)x_n|^α Γ(dx)]}
  × dr / r^{1+p₁+···+p_n} · dθ₁dθ₂···dθ_{n−1} / ∏_{k=1}^{n−1}[(sin θ_k)^{1+Σ_{j=1}^k p_j} (cos θ_k)^{1+p_{k+1}}] < ∞

for some ε > 0.
It is apparent from inspection of the development of this integral in Theorem 1.4.1 that the only region where convergence to a finite limit is in question is for points where r is small. At r = 0 the factor of the integrand in braces reduces to

2^{n−1} − 2^{n−1}C(n,1) + 2^{n−2}·2·C(n,2) − 2^{n−3}·2²·C(n,3) + ··· + (−1)^n 2^{n−1}
  = 2^{n−1}[1 − C(n,1) + C(n,2) − C(n,3) + ··· + (−1)^n]
  = 2^{n−1}(1−1)^n = 0,

where C(n,k) denotes the binomial coefficient.
We now differentiate this factor in braces to assess the rate of convergence to zero as r → 0. The resulting partial derivative with respect to r is

α 2^α r^{α−1} [−2^{n−1} Σ_{k=1}^n ∫_S |r_k(θ)x_k|^α Γ(dx) · φ(...0...,2r r_k(θ),...0...)
  + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} ∫_S |r_j(θ)x_j + (−1)^{ε₁} r_k(θ)x_k|^α Γ(dx) · φ(...0...,2r r_j(θ),...0...,(−1)^{ε₁}2r r_k(θ),...0...)
  − ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} ∫_S |r₁(θ)x₁ + ··· + (−1)^{ε_{n−1}} r_n(θ)x_n|^α Γ(dx) · φ(2r r₁(θ),...,(−1)^{ε_{n−1}}2r r_n(θ))].

From the common factor r^{α−1} in this derivative it is evident that the factor of the integrand in braces is of order O(r^α) as r → 0. But it is possible that the convergence to zero with r is even faster if the other factor in the derivative converges to zero as r → 0. However, at r = 0 this other factor in brackets is

−2^{n−1} Σ_{k=1}^n ∫_S |r_k(θ)x_k|^α Γ(dx) + 2^{n−2} Σ_{j<k} Σ_{ε₁∈{0,1}} ∫_S |r_j(θ)x_j + (−1)^{ε₁} r_k(θ)x_k|^α Γ(dx)
  − ··· + (−1)^n Σ_{ε₁,...,ε_{n−1}∈{0,1}} ∫_S |r₁(θ)x₁ + ··· + (−1)^{ε_{n−1}} r_n(θ)x_n|^α Γ(dx),

which is nonzero for all θ = (θ₁,...,θ_{n−1}) ∈ (0,π/2)^{n−1} by Lemma 1.4.3, since the functional values r_k(θ) are nonzero for such θ (Lemma 1.4.2).

It is clear from Fubini's theorem that if the integral is finite then the factor of the integrand in braces is integrable over (0,ε) with respect to the measure dr/r^{1+p₁+···+p_n}. Since this factor of the integrand is of order O(r^α) and of no smaller order as r → 0, if the integral is finite then p₁+···+p_n < α.

The sufficiency of the condition p₁+···+p_n < α is clear, since

E(|ξ₁|^{p₁}···|ξ_n|^{p_n}) ≤ ∏_{k=1}^n [E|ξ_k|^{p₁+···+p_n}]^{p_k/(p₁+···+p_n)} < ∞

by Hölder's inequality. □
The following corollary was conjectured by Holger Rootzén and is an immediate consequence of Theorem 1.4.4 for the case n = 2 and Theorem 1.2.1.

1.4.5 COROLLARY. Let ξ₁ and ξ₂ be dependent jointly SαS random variables, 0 < α < 2, and let p₁ > 0 and p₂ > 0 be given. Then

E(|ξ₁|^{p₁} |ξ₂|^{p₂}) < ∞ if and only if p₁ + p₂ < α.
In Theorem 1.4.1 and the succeeding remark we have seen how to compute the absolute moments of monomials in ξ₁,...,ξ_n when their joint c.f. is real-valued. We shall conclude this section with an expression for E[(ξ₁)^{p₁}(ξ₂)^{p₂}] in terms of φ when the corresponding absolute moment is finite. Such moments will be of interest in the next chapter for random variables ξ₁ and ξ₂ having finite absolute moments of order p, 1 < p < 2, with p₁ = 1 and p₂ = p−1.
1.4.6 THEOREM. Let ξ₁ and ξ₂ be random variables with real-valued joint c.f. φ such that E(|ξ₁|^{p₁}|ξ₂|^{p₂}) < ∞, where 0 < p₁ < 2 and 0 < p₂ < 2, and when 0 < p < 2 let c(p) = 2∫₀^∞ sin³y dy / y^{1+p}. Then

E[(ξ₁)^{p₁}(ξ₂)^{p₂}] = [1/(8 c(p₁)c(p₂))] ∫₀^∞∫₀^∞ [9φ(z₁,−z₂) − 9φ(z₁,z₂) + φ(3z₁,−3z₂) − φ(3z₁,3z₂)
  − 3φ(z₁,−3z₂) + 3φ(z₁,3z₂) − 3φ(3z₁,−z₂) + 3φ(3z₁,z₂)] dz₁dz₂ / (z₁^{1+p₁} z₂^{1+p₂}).

Proof: As in Theorem 1.4.1 the proof is based on an elementary trigonometric identity:

32 sin³z₁ sin³z₂ = 9 cos(z₁−z₂) − 9 cos(z₁+z₂) + cos(3z₁−3z₂) − cos(3z₁+3z₂)
  − 3 cos(z₁−3z₂) + 3 cos(z₁+3z₂) − 3 cos(3z₁−z₂) + 3 cos(3z₁+z₂).

If μ is the measure induced on R² by (ξ₁,ξ₂), then

∫₀^ε∫₀^ε [9φ(z₁,−z₂) − 9φ(z₁,z₂) + φ(3z₁,−3z₂) − φ(3z₁,3z₂) − 3φ(z₁,−3z₂) + 3φ(z₁,3z₂) − 3φ(3z₁,−z₂) + 3φ(3z₁,z₂)] dz₁dz₂ / (z₁^{1+p₁} z₂^{1+p₂})
  = ∫₀^ε∫₀^ε ∫_{R²} 32 sin³(r₁z₁) sin³(r₂z₂) μ(dr₁,dr₂) dz₁dz₂ / (z₁^{1+p₁} z₂^{1+p₂}).

Letting ε → ∞, we get the stated expression for E[(ξ₁)^{p₁}(ξ₂)^{p₂}]. □
II. A FUNCTION SPACE APPROACH TO SαS PROCESSES

It is a customary method in the study of second order stochastic processes to establish an isomorphism between (a subspace of) the linear space of a process and a Hilbert space of functions, and to translate problems formulated in terms of the process into problems in the more familiar function space. In this chapter we investigate an analogous approach to the study of p-th order processes by seeking an appropriate Banach space of functions that is isometric to (a subspace of) the linear space of a p-th order process (Section 2). When applied to a SαS process with independent increments (Section 3), our results yield the stochastic integral defined in [Schilder 1970], and when applied to an "absolutely continuous" process (Section 5), they yield a stochastic integral of more specific form which can also be regarded as a sample path integral. We conclude the chapter by exploring the possibility of using the statistical properties of the output process to identify a linear system with a known SαS input (Section 7).
1. The linear space of a process.

After defining the linear space of a p-th order process and the linear space of a SαS process, we show that the duals of both spaces have a kind of Riesz representation (Theorem 2.1.5) and that the two spaces coincide in the case of a SαS process (Proposition 2.1.2). These results are essential to our later development of stochastic integrals with respect to SαS processes.
43
Let
~t ={~t,tET}
space (n,F,p).
If
be a stochastic process with underlying probability
~tELp(n)
a p-th order process.
for all tET where 0
<
p
<
00,
then we call
~
Unless otherwise specified, throughout this
chapter we make the restrictions 1 < P <
00
for p-th order processes and
1 < a < 2 for SaS processes.
Let
of
t(~)
be the space of all finite linear combinations of elements
{~t,tET}.
~
If
is a p-th order process, then define a norm on
t(~)
by
for all
sEt(~).
~
is a Sas process, then for every
SEt(~)
it follows
from result 1.1.2 that there e~ists some b ~ 0 such that E(e irs ) =
-b
e s
Irl a
If
for all rER.
defines a norm on
Then
([Schilder 1970, Corollary 2.1]).
t(~)
sequence of the continuity theorem
fOT
It is a con-
characteristic functions that
convergence with respect to this norm is equivalent to convergence in
probability.
For a p-th order or a SαS process ξ we let L(ξ) denote the completion of ℓ(ξ) with respect to its norm. If ξ is SαS, we shall show that L(ξ) is a SαS family by using characterization 1.1.3 of the SαS c.f. due to Kuelbs.
2.1.1 PROPOSITION. If ξ is a SαS process, then L(ξ) is a family of jointly SαS random variables.

Proof: For each j = 1,...,k let {ζ_n^{(j)}} be a Cauchy sequence in ℓ(ξ). Then {ζ_n^{(j)}} is Cauchy in probability and hence converges in probability to some real-valued random variable ζ^{(j)}. It follows that the sequence of random vectors {(ζ_n^{(1)},...,ζ_n^{(k)})} converges in probability to (ζ^{(1)},...,ζ^{(k)}). For each n let A_n be a homogeneous negative-definite function of order α on R^k such that exp{−A_n^α(y)} is the joint c.f. of (ζ_n^{(1)},...,ζ_n^{(k)}). By the continuity theorem for c.f.'s,

A_n^α(y) → A^α(y)

for all y∈R^k, where exp{−A^α(y)} is the joint c.f. of (ζ^{(1)},...,ζ^{(k)}). It is easy to verify that A is a homogeneous negative-definite function of order α which is continuous on R^k, and the result follows from 1.1.3. □
Clearly we may regard a SαS process ξ as a p-th order process, where 1 ≤ p < α, and in fact an application of Theorem 2 of [Wolfe 1973] shows that the two norms on L(ξ) are equivalent.

2.1.2 PROPOSITION. If ζ is a SαS random variable with c.f. E(e^{irζ}) = e^{−||ζ||^α|r|^α} and 1 ≤ p < α, then [E|ζ|^p]^{1/p} = c(p,α)||ζ||, where

c(p,α) = [2^{p−1} α^{−1} ∫₀^∞ s^{−p/α−1}(1−e^{−s}) ds / ∫₀^∞ v^{−p−1} sin²v dv]^{1/p}.

This proposition shows that for a SαS process ξ, L(ξ) is the completion of ℓ(ξ) with respect to either norm.
Proof: By Theorem 2 of [Wolfe 1973],

(*)  E|ζ|^p = ∫₀^∞ r^{−p−1}[1−φ(2r)] dr / (2 ∫₀^∞ v^{−p−1} sin²v dv).

Now

∫₀^∞ r^{−p−1}[1−φ(2r)] dr = ∫₀^∞ r^{−p−1}(1 − e^{−||ζ||^α|2r|^α}) dr = (2^p ||ζ||^p / α) ∫₀^∞ s^{−p/α−1}(1−e^{−s}) ds.

Substituting back into (*), we get the stated result. □
If M is a Banach space of p-th order random variables, then for each ζ∈M we define a continuous linear functional λ_ζ on M by

λ_ζ(η) = E[η (ζ)^{p−1}]

for all η∈M, where (t)^{q} denotes the signed power sgn(t)|t|^q. If M is a Banach space of SαS random variables, then for each ζ∈M we define a functional λ_ζ: M → R by

λ_ζ(η) = C_{η,ζ}

for all η∈M. The linearity of λ_ζ in this case follows from Corollary 1.3.4 and the linearity of conditional expectations. For the continuity of λ_ζ note that

|λ_ζ(η)| ≤ ||η|| ||ζ||^{α−1}

by Hölder's inequality. Thus λ_ζ is a continuous linear functional on L(ξ) with ||λ_ζ||_{L(ξ)*} = ||ζ||^{α−1}.
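The continuity bound |λ_ζ(η)| ≤ ||η|| ||ζ||^{α−1} can be sanity-checked for jointly SαS variables given by a discrete spectral measure, using ||a·ξ|| = (∫|a·x|^α dΓ)^{1/α} and C_{η,ζ} = ∫(a·x)(b·x)^{α−1} dΓ. The measure, dimension, and α below are arbitrary illustrative choices:

```python
import random

alpha = 1.5
rng = random.Random(7)

# a random discrete symmetric spectral measure on the sphere S in R^3
atoms = []
for _ in range(20):
    v = [rng.gauss(0, 1) for _ in range(3)]
    r = sum(c * c for c in v) ** 0.5
    x = tuple(c / r for c in v)
    mass = rng.uniform(0.1, 1.0) / 2
    atoms.append((mass, x))
    atoms.append((mass, tuple(-c for c in x)))  # antipodal atom (symmetry)

def spow(t, q):
    """Signed power t^<q> = sign(t)|t|^q."""
    return (t > 0) * abs(t) ** q - (t < 0) * abs(t) ** q

def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def norm(a):
    """||a . xi|| = (integral |a . x|^alpha dGamma)^{1/alpha}."""
    return sum(m * abs(dot(a, x)) ** alpha for m, x in atoms) ** (1 / alpha)

def covariation(a, b):
    """C_{eta,zeta} for eta = a . xi, zeta = b . xi."""
    return sum(m * dot(a, x) * spow(dot(b, x), alpha - 1) for m, x in atoms)

for _ in range(100):
    a = [rng.gauss(0, 1) for _ in range(3)]
    b = [rng.gauss(0, 1) for _ in range(3)]
    assert abs(covariation(a, b)) <= norm(a) * norm(b) ** (alpha - 1) + 1e-9
print("bound |C_{eta,zeta}| <= ||eta|| ||zeta||^{alpha-1} holds on all trials")
```

The bound is exactly Hölder's inequality applied under the spectral integral, with exponents α and α/(α−1).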
We now show that the continuous linear functionals thus defined on L(ξ) represent the dual space L(ξ)*. Although no reference for this result is known to us, its proof is analogous to the argument used for the Riesz representation for continuous linear functionals on a Hilbert space.

2.1.4 LEMMA. [Singer 1970, Corollary 3.5 and Theorem 1.11]. If M is a closed linear subspace of L(ξ) and η₁ ∈ L(ξ) − M, then there exists a unique η₂∈M such that

||η₁ − η₂|| = inf_{η∈M} ||η₁ − η||.

Moreover, λ_{η₁−η₂}(η) = 0 for every η∈M.
2.1.5 THEOREM. Let ξ be either an α-th order process or a SαS process. If λ is a continuous linear functional on L(ξ), then there exists a unique ζ∈L(ξ) such that λ = λ_ζ (and hence ||λ||_{L(ξ)*} = ||ζ||^{α−1}).

Proof: Consider M = {η∈L(ξ): λ(η) = 0}, a subspace of L(ξ). If M = L(ξ), take ζ = 0. Otherwise, choose η₁∈L(ξ) − M, let η₂ be the best approximation to η₁ in M, define

η₃ = (η₁ − η₂) / ||η₁ − η₂||,

and take ζ = (λ(η₃))^{1/(α−1)} η₃. Note that for every η∈L(ξ) we have η − [λ(η)/λ(η₃)]η₃ ∈ M, and λ_ζ(η') = 0 for every η'∈M by Lemma 2.1.4. Thus

λ_ζ(η) = [λ(η)/λ(η₃)] λ_ζ(η₃) = [λ(η)/|λ(η₃)|^{α/(α−1)}] λ_ζ(ζ) = [λ(η)/|λ(η₃)|^{α/(α−1)}] ||ζ||^α = λ(η).

The uniqueness of ζ follows from Hölder's inequality, since λ_ζ = λ_{ζ₀} implies that ζ = ζ₀. □
We now obtain some auxiliary results which will be useful in the development of the stochastic integral.

2.1.6 PROPOSITION. Let {ζ_n}_{n=1}^∞ be a sequence in L(ξ) such that {λ_η(ζ_n)}_{n=1}^∞ converges for every η∈L(ξ). Then {ζ_n} converges weakly to some ζ∈L(ξ).

Proof: For every n let Q_n∈L(ξ)** be defined by

Q_n(ξ*) = ξ*(ζ_n)

for all ξ*∈L(ξ)*. By [Rudin 1973, Theorem 2.8], Theorem 2.1.5, and our hypothesis, we can define another element Q of L(ξ)** by

Q(ξ*) = lim_{n→∞} Q_n(ξ*)

for all ξ*∈L(ξ)*. Since L(ξ) is reflexive, it follows that {ζ_n} converges weakly to ζ, where ζ corresponds to Q under the isomorphism between L(ξ) and L(ξ)**. □
In the development which follows we apply a result found in [Cudia 1964] to show that the map ζ ↦ λ_ζ from L(ξ) onto L(ξ)* is continuous with respect to the norm topologies (Proposition 2.1.7 and Theorem 2.1.10). This result is used to obtain sufficient conditions for a SαS process to have weak right limits (Proposition 2.1.11).
2.1.7 PROPOSITION. For every f∈L_p(Y,T,ν) = L_p(ν), 1 < p < ∞, let

Λ_f(g) = ∫ g (f)^{p−1} dν

for all g∈L_p(ν) define a continuous linear functional on L_p(ν). Then the map L_p(ν) → L_p(ν)* defined by f ↦ Λ_f is continuous with respect to the norm topologies of L_p(ν) and L_p(ν)*.

Proof: We can easily check that the function

[(x)^{p−1} − (y)^{p−1}] / |x−y|^{p−1}

is bounded on R² − {x=y} by c = 2^{2−p}; so

|(x)^{p−1} − (y)^{p−1}| ≤ c|x−y|^{p−1}

for all (x,y)∈R². The result then follows from the following inequality:

||Λ_f − Λ_g||_{L_p(ν)*} = [∫ |(f)^{p−1} − (g)^{p−1}|^{p/(p−1)} dν]^{(p−1)/p}
  ≤ [∫ c^{p/(p−1)} |f−g|^p dν]^{(p−1)/p} = c ||f−g||_{L_p(ν)}^{p−1}. □
49
Of course this result applies to the map
s
1->
A from L (n,F,p)
s
p
to Lp(n,F,p)* where
but we shall also need the norm continuity of the map
sEL(~),
~
for all
nEL(~).
l:; 1-> As
when
is a SuS process, and
In [Cudia 1964] the continuity of such maps is related
to properties of the norm, and by applying the development there together
with Proposition 2.1.2 we shall obtain the continuity of
belonging to a SUS family such as
I Ixi I = I},
1->
As for l:;
Ll~).
Let X be a Banach space, let U = {x∈X: ||x|| ≤ 1}, let C = {x∈X: ||x|| = 1}, and let C* = {f∈X*: ||f|| = 1}. For any x∈C let E_x be the set of elements f∈C* such that {y: f(y) = r} is a hyperplane of support of U at x for some r > 0. Then the set E_x is called the spherical image of x. We say that the norm on X is Fréchet differentiable if for all x∈C the limit

lim_{r→0} (||x + ry|| − ||x||) / r

exists and the convergence is uniform as y varies in C.

2.1.8 LEMMA. [Cudia 1964, Corollary 4.12]. The norm on X is Fréchet differentiable if and only if the spherical image map E defined on C is single-valued and continuous from the norm topology on C into the norm topology on C*.
Notice that the set-valued function E in the lemma is taken to be a mapping into C* when the image is a singleton set. We shall apply the result when X = M, a space of jointly SαS random variables, and begin by examining the spherical image map in this setting.

2.1.9 PROPOSITION. If M is a linear space of α-th order random variables or jointly SαS random variables, then E_ζ = λ_ζ for all ζ∈M with ||ζ|| = 1.

Proof: Let U = {η∈M: ||η|| ≤ 1} and recall that ||λ_ζ||_{M*} = ||ζ||^{α−1} = 1. Given any η∈U,

λ_ζ(η) ≤ ||η|| ||ζ||^{α−1} ≤ 1,

with equality implying that ζ = η by Hölder's inequality. Therefore λ_ζ(η) < 1 for η∈U, η ≠ ζ, and hence {η: λ_ζ(η) = 1} is a hyperplane of support of U at ζ. □
2.1.10 THEOREM. If M is a linear space of jointly SαS random variables, then the map ζ ↦ λ_ζ from M into M* is continuous with respect to the norm topologies.

Proof: If we regard M as a subspace of L_p(Ω,F,P) for some p such that 1 < p < α, then Propositions 2.1.7 and 2.1.9 together with Lemma 2.1.8 imply that the L_p(Ω) norm on M is Fréchet differentiable. It is therefore clear from Proposition 2.1.2 that the norm ||·|| on M satisfying ||η|| = {−log E(e^{iη})}^{1/α} is Fréchet differentiable, and thus the map ζ ↦ λ_ζ from C into C* is norm continuous, again by Lemma 2.1.8 and Proposition 2.1.9.

We complete the proof by showing that the map ζ ↦ λ_ζ from M into M* is also norm continuous. Indeed, if ζ_n → ζ ≠ 0 in M, then ζ_n/||ζ_n|| → ζ/||ζ|| in C, and since λ_{cζ} = c^{α−1}λ_ζ for c > 0,

||λ_{ζ_n} − λ_ζ||_{M*} = || ||ζ_n||^{α−1} λ_{ζ_n/||ζ_n||} − ||ζ||^{α−1} λ_{ζ/||ζ||} ||_{M*} → 0 as n → ∞;

and if ζ_n → 0 in M, then

||λ_{ζ_n}||_{M*} = ||ζ_n||^{α−1} → 0 as n → ∞. □

Under the conditions of Theorem 2.1.10 it can in fact be shown that the map ζ ↦ λ_ζ is uniformly continuous, but we shall have no use for this stronger result.
We conclude this section by obtaining sufficient conditions for a SαS process to have weak right limits (assumption (a1) of Section 2). This proposition will be used in Section 3 to define ∫f dξ when ξ is SαS with independent increments.
2.1.11 PROPOSITION. Suppose that ξ = {ξ_t, a ≤ t ≤ b} is a SαS process with 1 < α < 2. Let ξ_{t'} = 0 for some t', in order that we may direct our attention toward the elements ξ_t rather than the increments of the process. Then the following two conditions imply that for each ζ∈L(ξ), the right limit F_ζ(t+0) exists at every t∈[a,b), where F_ζ(t) = λ_ζ(ξ_t).

(a1) For every s∈[a,b) there exists an ε_s > 0 and an M_s < ∞ such that ||ξ_t|| ≤ M_s whenever s < t ≤ s + ε_s.

(a2) For every ζ∈ℓ(ξ), F_ζ is of bounded variation on T.
Notice that condition (a1) is necessary for the existence of the right limits F_ζ(t+0), and that conditions (a1) and (a2) together are apparently weaker than assuming that ξ is of weak bounded variation.

Proof: Fix ζ∈L(ξ) and let {ζ_n} ⊂ ℓ(ξ) be such that ζ_n → ζ. By (a2), D_n(t) = λ_{ζ_n}(ξ_t) is a function of bounded variation on [a,b] for each n; so lim_{ε_m↓0} λ_{ζ_n}(ξ_{t+ε_m}) exists for all t∈[a,b). For fixed s∈[a,b), apply (a1) to get ε_s > 0 and M_s < ∞ such that ||ξ_t|| ≤ M_s whenever s < t ≤ s + ε_s. We shall use Theorem 2.1.10 to show that {λ_{ζ_n}} is uniformly convergent on {ξ_t: s < t ≤ s + ε_s}. Indeed, let ε > 0 be given and choose N such that n ≥ N implies ||λ_{ζ_n} − λ_ζ||_{L(ξ)*} < ε/M_s. Then for every t∈(s, s+ε_s] and n ≥ N,

|λ_{ζ_n}(ξ_t) − λ_ζ(ξ_t)| ≤ ||λ_{ζ_n} − λ_ζ||_{L(ξ)*} ||ξ_t|| < ε,

and hence the desired uniform convergence. Now by a standard result for a uniformly convergent sequence of functions, lim_{n→∞} D_n(s+0) exists, and thus the right limit F_ζ(s+0) exists. □
2. The integral ∫f(t)dξ_t.

We begin this section by defining the stochastic integral ∫_T f(t)dξ_t for an appropriate class of (deterministic) "functions" f and obtain, under certain (smoothness) conditions, an integral representation for the elements of L(ξ). Our approach is motivated by [Huang 1975], where the case α = 2 is treated.

Let ξ = {ξ_t, t∈T} be either an α-th order or a SαS process with T = [a,b]. For each ζ∈L(ξ) define the real-valued function F_ζ on T by F_ζ(t) = λ_ζ(ξ_t) for all t∈T. We shall begin with two assumptions, (a1) and (a2), on ξ.
(a1) For every ζ∈L(ξ), assume that the right limit F_ζ(t+0) exists for all t∈[a,b).

In other words, lim_{ε_n↓0} λ_ζ(ξ_{t+ε_n}) exists for every ζ∈L(ξ), so that the sequence {ξ_{t+ε_n}} converges weakly in L(ξ) by Proposition 2.1.6, and we denote its limit by ξ_{t+0}. Hence F_ζ(t+0) = λ_ζ(ξ_{t+0}). Let ξ_{b+0} be equal to ξ_b, and let

I = {Σ_{k=1}^n a_k(ξ_{t_k+0} − ξ_{t_{k−1}+0}): n ≥ 1, a_k∈R, and a = t₀ < t₁ < ··· < t_n = b}.

(a2) For every ζ∈I, assume that F_ζ(t) is of bounded variation on T.

Let S be the linear space of all step functions on T of the form f(t) = Σ_{k=1}^n f_k χ_{(t_{k−1},t_k]}(t). For each such f define ∫f dξ to be Σ_{k=1}^n f_k(ξ_{t_k+0} − ξ_{t_{k−1}+0}). Define a norm on S in terms of a Lebesgue–Stieltjes integral by
||f||_ξ^α = ∫_T f(t) dF_{∫f dξ}(t) = Σ_{k=1}^n f_k ∫_T χ_{(t_{k−1},t_k]}(t) dF_{∫f dξ}(t).

Let A_α be the completion of S with respect to this norm. Every element f of A_α can be represented as f = {f_n}, a Cauchy sequence in S. It follows that {∫f_n dξ} is a Cauchy sequence in L(ξ), and we will denote its limit by ∫f dξ. Then the map A_α → L(ξ) defined by f ↦ ∫f dξ is an isometry from A_α onto a closed subspace of L(ξ). If we assume that ξ_{t'} = 0 for some t'∈T and that the process is weakly continuous from the right, then this isometry will be onto L(ξ).
Suppose now that the process ξ is of weak bounded variation (see [Shachtman 1970]); i.e., we assume that F_ζ(t) is of bounded variation on T for all ζ∈L(ξ). This condition is clearly stronger than (a1) and (a2), since functions of bounded variation have left and right limits at all points. Let S' be the space of all bounded measurable functions f: T → R for which there exists some η∈L(ξ) such that the Lebesgue–Stieltjes integral ∫_T f(t)dF_ζ(t) equals λ_ζ(η) for every ζ∈L(ξ). Since the dual separates points on L(ξ), we see by Theorem 2.1.5 that f uniquely determines η, and we denote the latter by ∫f dξ. It is immediate that S' is a linear space containing S and that the definition of ∫f dξ on S' extends our previous definition of ∫f dξ on S. As before, define a norm on S' by ||f||_ξ^α = ∫_T f(t) dF_{∫f dξ}(t), and let A'_α be the completion of S' with respect to this norm. Then A_α ⊂ A'_α, and as done above we can define an isometry from A'_α onto a closed subspace of L(ξ). Our next result shows that S is strictly contained in S'.
2.2.1 PROPOSITION. Let ξ = {ξ_t, a ≤ t ≤ b} be an α-th order or a SαS process of weak bounded variation. Then all continuous functions on [a,b] belong to S'.

Proof: If f is a function on T and π_m is a partition of [a,b] defined by a = t₀ < t₁ < ··· < t_m = b, let f_{π_m}(t) = Σ_{k=1}^m f(t*_k) χ_{(t_{k−1},t_k]}(t), where t*_k is an arbitrary point in [t_{k−1},t_k], and let Δ(π_m) = max_{1≤k≤m}(t_k − t_{k−1}). If f is a continuous function and {π_n} is a sequence of partitions of [a,b] with Δ(π_n) → 0, then f_{π_n} ∈ S' for every n and

∫_T f_{π_n}(t) dF_ζ(t) → ∫_T f(t) dF_ζ(t)

for all ζ∈L(ξ). Letting η be the weak limit of {∫f_{π_n} dξ} in L(ξ) (Proposition 2.1.6), we see that

∫_T f(t) dF_ζ(t) = λ_ζ(η)

for all ζ∈L(ξ), and hence f∈S'. □
Although we have failed to resolve whether it is possible that A_α ≠ A'_α when ξ is of weak bounded variation, we now show that A_α = A'_α when ξ is of strong bounded variation, which is defined in the usual way: For every t∈[a,b] define

V_ξ(t) = sup Σ_{k=1}^m ||ξ_{s_k} − ξ_{s_{k−1}}||,

where the supremum is taken over all finite partitions a = s₀ < ··· < s_m = t of [a,t]. If V_ξ(b) < ∞, then the stochastic process ξ is said to be of strong bounded variation (see [Brezis 1973, p. 141]). It is easily seen that strong bounded variation implies weak bounded variation.
2.2.2 PROPOSITION. If ξ is of strong bounded variation, then A_α = A'_α.

Proof: For every ζ∈L(ξ) define the total variation function |F_ζ| of F_ζ on [a,t], and note that for t₁ < t₂,

|F_ζ|(t₂) − |F_ζ|(t₁) ≤ ||ζ||^{α−1} [V_ξ(t₂) − V_ξ(t₁)].

Given any f∈S', let {f_n}_{n=1}^∞ be a sequence of step functions converging to f in L₁(T, B_T, dV_ξ). Then

||f − f_n||_ξ^α = ∫_T [f(t) − f_n(t)] dF_{∫(f−f_n)dξ}(t)
  ≤ ∫_T |f(t) − f_n(t)| d|F_{∫(f−f_n)dξ}|(t)
  ≤ ||∫(f−f_n)dξ||^{α−1} ∫_T |f(t) − f_n(t)| dV_ξ(t)
  = ||f − f_n||_ξ^{α−1} ∫_T |f(t) − f_n(t)| dV_ξ(t).

Therefore

||f − f_n||_ξ ≤ ∫_T |f(t) − f_n(t)| dV_ξ(t),

and the right-hand side converges to zero as n → ∞. It follows that the closure of S contains S', so that A_α = A'_α. □
If additional conditions are placed on the process, it can be shown that other classes of functions belong to the function space A_α. For instance, if the process ξ is weakly continuous, i.e., if F_ζ(t) is a continuous function on [a,b] for all ζ∈L(ξ), then S' contains all bounded functions that are continuous when restricted to a subset containing all but countably many points of [a,b].
3. The integral ∫_T f(t)dξ_t when ξ is SαS with independent increments.

In this section we show that when ξ is a SαS process with independent increments, the function space A_α is an L_α space ([Schilder 1970, Theorem 3.1]) and, if M is the subspace of L(ξ) isometric to A_α, we relate the dual elements of M to dual elements of the L_α space (Proposition 2.3.1). As a corollary, we identify a separating family of continuous linear functionals on M (Corollary 2.3.2).
Suppose that ξ = {ξ_t, a ≤ t ≤ b} is a SαS process, 1 < α < 2, with independent increments and such that ξ_a = 0. By Lemma 3.2 of [Schilder 1970],

||ξ_{t₂} − ξ_{t₁}||^α = ||ξ_{t₂}||^α − ||ξ_{t₁}||^α

if a ≤ t₁ ≤ t₂ ≤ b, and therefore ||ξ_t||^α is an increasing function on [a,b]. Notice that condition (a1) of Proposition 2.1.11 is satisfied, since ||ξ_t|| ≤ ||ξ_b|| for all t∈[a,b].
To check that condition (a2) holds, consider ζ∈ℓ(ξ) and a = t₀ < t₁ < ··· < t_n = b. We can clearly choose a = s₀ < s₁ < ··· < s_m = b such that {t_k}_{k=0}^n ⊂ {s_j}_{j=0}^m and ζ = Σ_{j=1}^m a_j(ξ_{s_j} − ξ_{s_{j−1}}). Write χ_j for ξ_{s_j} − ξ_{s_{j−1}}, 1 ≤ j ≤ m, and let

M_ζ = ||ξ_b||^α max_{1≤j≤m} |a_j|^{α−1}.

Applying Corollary 1.2.3, we get that

λ_ζ(χ_{j₀}) = C_{χ_{j₀}, Σ_{j=1}^m a_jχ_j} = (a_{j₀})^{α−1} ||χ_{j₀}||^α,

where 1 ≤ j₀ ≤ m. Thus

Σ_{k=1}^n |F_ζ(t_k) − F_ζ(t_{k−1})| ≤ Σ_{j=1}^m |λ_ζ(χ_j)| = Σ_{j=1}^m |a_j|^{α−1} ||χ_j||^α ≤ M_ζ,

and the bounded variation of F_ζ(t) follows.
Because these conditions are satisfied, we know that the SαS process {ξ_{t+0}, a ≤ t ≤ b} is well-defined by Proposition 2.1.11, and we see, after a moment's reflection, that it has independent increments. It is clear then that conditions (a1) and (a2) are satisfied; so the integral ∫f dξ is defined as in the previous section.
Let f and g be two step functions in S:

f(t) = Σ_{j=1}^n f_j χ_{(t_{j−1},t_j]}(t)  and  g(t) = Σ_{k=1}^n g_k χ_{(t_{k−1},t_k]}(t),

where f and g are defined over the same intervals with no loss of generality. Let G(t) be the increasing function ||ξ_{t+0}||^α on T. Writing χ_j for ξ_{t_j+0} − ξ_{t_{j−1}+0}, 1 ≤ j ≤ n, and recalling the notation of the previous section, we find

λ_{∫g dξ}(∫f dξ) = Σ_{j=1}^n f_j λ_{∫g dξ}(χ_j) = Σ_{j=1}^n f_j C_{χ_j, Σ_{k=1}^n g_kχ_k}
  = Σ_{j=1}^n f_j (g_j)^{α−1} ||χ_j||^α
  = Σ_{j=1}^n f_j (g_j)^{α−1} ∫_T χ_{(t_{j−1},t_j]}(t) dG(t)
  = ∫_T f(t)(g(t))^{α−1} dG(t).
In particular,

||f||_ξ^α = ∫_T f(t) dF_{∫f dξ}(t) = ∫_T |f(t)|^α dG(t).

We conclude that A_α = L_α(dG), since the step functions are dense in both spaces.

In summary, if ξ is a SαS process with independent increments and 1 < α < 2, then we have shown that the integral ∫f dξ is defined and yields an isometry between a closed subspace of L(ξ) and an L_α space. We obtain therefore the result found in [Schilder 1970, Theorem 3.1] without any continuity assumptions on the process. Schilder also obtains an analogous result for the case 0 < α ≤ 1.
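A simulation sketch of the isometry for step functions: increments of an α-stable Lévy motion with G(t) = t have scale (Δt)^{1/α}, so Y = ∫f dξ should satisfy E cos(Y) = exp(−∫|f|^α dG). The sampler, step function, and sample size are illustrative assumptions:

```python
import math, random

def sas(alpha, rng):
    """Chambers-Mallows-Stuck draw of a standard SaS variable, alpha != 1."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

alpha = 1.5
rng = random.Random(1)

# step function f on [0, 1]: f = f_k on ((k-1)/m, k/m]
f = [2.0, -1.0, 0.5, 1.5]
m = len(f)
dt = 1.0 / m

# ||integral f dxi||^alpha = integral |f|^alpha dG, with G(t) = t here
scale_alpha = sum(abs(fk) ** alpha * dt for fk in f)

n = 200_000
acc = 0.0
for _ in range(n):
    # increments of an alpha-stable Levy motion have scale dt^{1/alpha}
    y = sum(fk * dt ** (1 / alpha) * sas(alpha, rng) for fk in f)
    acc += math.cos(y)

ecf = acc / n
print(ecf, math.exp(-scale_alpha))  # E cos(Y) vs exp(-||f||^alpha)
```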
Our definition of ∫f dξ in this section easily extends to the case where the stochastic process ξ is indexed by an infinite interval T. If we let S be the set of all step functions that are zero outside a compact subinterval, then we can define the norm on S in the same way as before, and the completion of S with respect to this norm will be L_α(dG).
It will be useful to have an expression for certain continuous linear functionals evaluated at points of the form ∫f dξ.

2.3.1 PROPOSITION. Let η = ∫f dξ and ζ = ∫g dξ, where f and g belong to L_α(dG). Then

λ_ζ(η) = ∫_T f (g)^{α−1} dG.

Proof: Let {f_n}_{n=1}^∞ and {g_n}_{n=1}^∞ be sequences of step functions in L_α(dG) such that f_n → f and g_n → g. Then ∫f_n dξ → η and ∫g_n dξ → ζ, and therefore

λ_ζ(η) = lim_{m→∞} λ_ζ(∫f_m dξ) = lim_{m→∞} lim_{n→∞} λ_{∫g_n dξ}(∫f_m dξ)
  = lim_{m→∞} lim_{n→∞} ∫_T f_m (g_n)^{α−1} dG = lim_{m→∞} ∫_T f_m (g)^{α−1} dG = ∫_T f (g)^{α−1} dG

by Theorem 2.1.10. □
Let ξ = {ξ_t, t∈T} be as above with T = [a,∞) and ξ_{a+0} = 0. Then if M is the closed subspace of L(ξ) which is isometric to L_α(dG), we get from Proposition 2.3.1 that the set of continuous linear functionals {λ_{ξ_{t+0}}: t∈T} separates points on M.

2.3.2 COROLLARY. If ζ∈M and λ_{ξ_{t+0}}(ζ) = 0 for all t∈T, then ζ = 0 in M.

Proof: Let f∈L_α(dG) be such that ζ = ∫_T f(t) dξ_t. Then

λ_{ξ_{t+0}}(ζ) = 0 for all t∈T  ⟹  ∫_{(a,t]} f(s) dG(s) = 0 for all t∈T  ⟹  f = 0 a.e. [dG]  ⟹  ζ = 0 in M. □

If ξ is weakly continuous from the right, then L_α(T, B_T, dG) is isometric to L(ξ), and Corollary 2.3.2 implies that the set of continuous linear functionals {λ_{ξ_t}: t∈T} separates points on L(ξ).
As a further application of the developments in this section,
we reexamine a regression problem introduced in Chapter I.
2.3.3 PROPOSITION. Let {η, ξ_t, a<t≤b} be a family of SαS random variables, 1 < α < 2, such that the process {ξ_t, a<t≤b} has independent increments and ξ_{a+0} = 0. For every Borel subset B of (a,b] define

    X(B) = ∫ χ_B(t)dξ_t ,
    μ_ηX(B) = C_{η X(B)} ,
    μ_XX(B) = C_{X(B)X(B)} .

Then μ_ηX is a f.s.m. which is absolutely continuous with respect to the measure μ_XX. Moreover, the Radon–Nikodym derivative dμ_ηX/dμ_XX belongs to L_α(dF), where F(t) = ‖ξ_t‖^α, and

    E(η|ξ_t, a<t≤b) = ∫ (dμ_ηX/dμ_XX)(t) dξ_t   a.s.
Proof: To see that μ_ηX is countably additive, let B = ∪_{i=1}^∞ B_i, where the B_i's are disjoint measurable subsets of (a,b]. Then using Theorem 2.1.10 and the techniques of this section, we have that

    μ_ηX(B) = lim_{n→∞} C_{η Σ_{i=1}^n X(B_i)} = lim_{n→∞} Σ_{i=1}^n C_{η X(B_i)} = Σ_{i=1}^∞ μ_ηX(B_i).

It is clear that μ_XX = dF and μ_ηX << μ_XX. Let T_∞ be a countable subset of (a,b] such that E(η|ξ_t, a<t≤b) = E(η|ξ_t, t∈T_∞). Without loss of generality, we may assume that the points in T_∞ are dense in (a,b]; order them, and let T_n be the set containing the first n points. We know from the discussion in Chapter I and Proposition 2.1.2 that

    E(η|ξ_t, t∈T_n) → E(η|ξ_t, t∈T_∞)

in L(ξ). If T_n = {t_1,…,t_n} where t_1 < t_2 < … < t_n, then

    σ(ξ_t, t∈T_n) = σ(ξ_{t_1}, ξ_{t_2}−ξ_{t_1}, …, ξ_{t_n}−ξ_{t_{n−1}}),

and (letting t_0 = a)

    E(η|ξ_t, t∈T_n) = Σ_{k=1}^n [μ_ηX((t_{k−1},t_k]) / μ_XX((t_{k−1},t_k])] (ξ_{t_k} − ξ_{t_{k−1}})

by Corollary 1.3.6. Therefore by the isometry between L_α(dF) and the subspace M of L(ξ), the latter "integrands" form a Cauchy sequence in L_α(dF) which converges to dμ_ηX/dμ_XX in L_α(dF) and a.e. [dF] ([Hewitt and Stromberg 1965]). Hence

    E(η|ξ_t, a<t≤b) = ∫ (dμ_ηX/dμ_XX)(t) dξ_t   a.s.   □
4. Spectral representation of a SαS process.

In this section we state an interesting result due to Kuelbs (2.4.1) and derive Schilder's characterization of independence (Theorem 2.4.2) from the characterization in Theorem 1.2.1. A simple result on estimation of SαS random variables is also included (Proposition 2.4.3).

In [Schilder 1970] the integral ∫f dξ, where ξ is a SαS process with independent increments, was used to obtain a "spectral representation" for a finite set of jointly SαS variables. Kuelbs has extended this result to processes indexed by an infinite set. Two stochastic processes {ξ_t, t∈T} and {η_t, t∈T} having the same index set are called indistinguishable if their finite dimensional distributions are the same.
2.4.1 [Kuelbs 1973]. Let {ξ_t, t∈T} be a SαS process that is continuous in probability with T an interval and 1 < α < 2. Then there exist a SαS process {ζ_λ, −1/2 ≤ λ ≤ 1/2} with independent increments and a family of functions {f_t, t∈T} in L_α([−1/2, 1/2], dF), where F(λ) = ‖ζ_λ‖^α, such that {∫ f_t(λ)dζ_λ, t∈T} and {ξ_t, t∈T} are indistinguishable.
Let us consider two jointly SαS random variables ξ_1 and ξ_2, 1 < α < 2. By 2.4.1 these variables have the same joint distribution as two random variables ∫f_1(λ)dζ(λ) and ∫f_2(λ)dζ(λ), where {ζ_λ, −1/2 ≤ λ ≤ 1/2} is a SαS process with independent increments, F(λ) = ‖ζ_λ‖^α, and f_i∈L_α([−1/2, 1/2], dF) for i = 1,2. The following condition for independence of ξ_1 and ξ_2 is expressed in terms of f_1 and f_2.
2.4.2 THEOREM. [Schilder 1970] The random variables ∫f_1(λ)dζ_λ and ∫f_2(λ)dζ_λ are independent if and only if f_1f_2 = 0 a.e. [dF].

Proof: The random vector (∫f_1(λ)dζ_λ, ∫f_2(λ)dζ_λ) has c.f.

    E exp{i[r_1∫f_1(λ)dζ_λ + r_2∫f_2(λ)dζ_λ]} = exp{−∫ |r_1f_1(λ) + r_2f_2(λ)|^α dF(λ)}.

Let f(λ) = [f_1²(λ) + f_2²(λ)]^{1/2}, and define

    g_1(λ) = f_1(λ)/f(λ) if f(λ) > 0,   g_1(λ) = 1 if f(λ) = 0;
    g_2(λ) = f_2(λ)/f(λ) if f(λ) > 0,   g_2(λ) = 0 if f(λ) = 0.

Let S = {(x_1,x_2): x_1² + x_2² = 1}, define T: [−1/2,1/2] → S by T(λ) = (g_1(λ), g_2(λ)), and define a finite measure ν on [−1/2,1/2] by ν(dλ) = f^α(λ)dF(λ). Then

    E exp{i[r_1∫f_1dζ_λ + r_2∫f_2dζ_λ]} = exp{−∫_S |r_1x_1 + r_2x_2|^α (νT^{−1})(dx)}.

By Theorem 1.2.1 we have independence if and only if νT^{−1}{(x_1,x_2)∈S: x_1x_2 ≠ 0} = 0. Since f(λ) > 0 whenever f_1(λ)f_2(λ) ≠ 0, the result follows. □

Notice in this theorem that the index set [−1/2, 1/2] plays no essential role and could be any interval.
In Theorem 5.2 of [Schilder 1970] a necessary and sufficient condition is given for the best estimate of a SαS random variable by a linear combination from a finite set of SαS random variables. In the following result we solve an estimation problem in an infinite-dimensional space of random variables and provide an explicit expression for the estimator.
2.4.3 PROPOSITION. Let {ξ_t, t∈T} be a SαS process with independent increments, 1 < α < 2, ξ_{t+0} = ξ_t for all t∈T, and F(t) = ‖ξ_t‖^α. Let I be a subinterval of T and denote by L(I) the linear space of the increments of the process formed from all subintervals of I. If η = ∫_T f(t)dξ_t, where f∈L_α(dF), then the best approximation to η in L(I) is given by

    η̂ = ∫_I f(t)dξ_t .
Proof: If I′ = T − I, then for any fixed ζ∈L(I) we can write ζ = ∫ g(t)dξ_t where g∈L_α(dF) and g(t) = 0 if t∈I′. Observe that

    η − η̂ = ∫_{I′} f(t)dξ_t ,

and therefore η − η̂ and ζ are independent by Theorem 2.4.2. Thus

    Λ_{η−η̂}(ζ) = β ∫ x_1 (x_2)^{α−1} F_{ζ,η−η̂}(dx) = 0,

so that Λ_{η−η̂} annihilates L(I). Using a standard argument,

    ‖η − η̂‖^α = Λ_{η−η̂}(η − η̂) = Λ_{η−η̂}(η − ζ) ≤ ‖η − ζ‖ ‖η − η̂‖^{α−1},

whence ‖η − η̂‖ ≤ ‖η − ζ‖ for all ζ∈L(I). □
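In the boundary case α = 2 the process is Gaussian, covariation reduces to covariance, and the best approximation above becomes an ordinary L² projection; this can be checked by a seeded simulation. Everything concrete below (grid, the choice of f, the competing candidate g) is an illustrative assumption and not part of the text.

```python
import numpy as np

# alpha = 2: eta = int_T f d xi with Brownian-type increments on T = [0, 1],
# eta_hat = int_I f d xi with I = [0, 1/2]; eta_hat should beat any other
# candidate from L(I) in mean square.
rng = np.random.default_rng(2)
n, paths = 200, 50000
dxi = rng.normal(0.0, np.sqrt(1.0 / n), (paths, n))   # F(t) = t
f = np.linspace(1.0, 2.0, n)
eta = dxi @ f                          # eta = int_T f d xi
half = n // 2                          # I = [0, 1/2]
best = dxi[:, :half] @ f[:half]        # eta_hat = int_I f d xi

g = np.ones(half)                      # some other element of L(I)
other = dxi[:, :half] @ g
err_best = np.mean((eta - best) ** 2)
err_other = np.mean((eta - other) ** 2)
print(err_best, err_other)
```

The gap between the two mean-square errors is exactly the energy of f − g on I, which is why truncation wins.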
5. The integral ∫_T f(t)ξ_tν(dt).

The integral ∫_T f(t)dξ_t discussed in Section 2 requires two assumptions on the process ξ and consequently is somewhat restricted in its applicability. Even when these assumptions hold (such as when ξ is SαS with independent increments), there is no direct relation between the sample paths of ξ and the realizations of the integral ∫_T f(t)dξ_t. In contrast, the integral ∫_T f(t)ξ_tν(dt) which is defined and discussed in this section requires less stringent assumptions on ξ and for a large class of functions f can be interpreted as a sample path integral (Theorem 2.5.5).
Throughout this section we shall consider ξ to be a general p-th order process, p > 1, and q to be such that 1/p + 1/q = 1. A stochastic process {ξ_t, t∈T} on a probability space (Ω,F,P) is called measurable if (t,ω) ↦ ξ(t,ω) is a product measurable map from T×Ω into R. We shall investigate conditions for the existence of measurable p-th order processes in Chapter III. The following argument is taken from [Cambanis and Masry 1971] where it is applied to a second order process.
2.5.1 PROPOSITION. Let ξ = {ξ_t, t∈T} be a measurable, p-th order process with index set T an arbitrary interval of the real line, and let a > p be given. Then there exists a finite measure ν on (T,B_T) such that ν is equivalent to Lebesgue measure on T and ∫_T ‖ξ_t‖^a ν(dt) < ∞.

Proof: Choose g_1∈L_1(T,B_T,Leb) such that g_1 > 0 a.e. [Leb] on T, and define g_2 on T by

    g_2(t) = 1 if ‖ξ_t‖ ≤ 1,   g_2(t) = ‖ξ_t‖^{−a} if ‖ξ_t‖ > 1.

Then the measure ν on (T,B_T) defined by ν(dt) = g_1(t)g_2(t)dt is clearly equivalent to Lebesgue measure on T and ∫_T ‖ξ_t‖^a ν(dt) ≤ ∫_T g_1(t)dt < ∞. □
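The construction in Proposition 2.5.1 is easy to carry out numerically. In the sketch below the p-th order norm ‖ξ_t‖ is not given by the text, so t^0.7 is a purely hypothetical stand-in, as are the choice a = 2 and the truncation of T to [0, 50].

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, 50.0, 100001)
dt = t[1] - t[0]
norm_xi = t ** 0.7                          # hypothetical ||xi_t||

g1 = np.exp(-t)                             # g1 > 0 a.e. [Leb], integrable
g2 = 1.0 / np.maximum(1.0, norm_xi) ** a    # 1 where ||xi_t|| <= 1, else ||xi_t||^(-a)
density = g1 * g2                           # d nu / d Leb -- strictly positive

nu_total = np.sum(density) * dt             # nu(T): finite (<= int g1 = 1)
moment = np.sum(norm_xi ** a * density) * dt  # int ||xi_t||^a nu(dt): also <= int g1
print(nu_total, moment)
```

The key point mirrored here is that the damping factor g_2 makes the a-th moment integrable no matter how fast ‖ξ_t‖ grows, while the strictly positive density keeps ν equivalent to Lebesgue measure.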
Under the conditions of Proposition 2.5.1,

    E ∫_T |ξ_t(ω)|^p ν(dt) = ∫_T ‖ξ_t‖^p ν(dt) < ∞,

since p < a and ν is a finite measure; so the sample paths ξ_t(ω) belong to L_p(T,B_T,ν) with probability one, by the measurability of ξ and Fubini's theorem. We can therefore define a stochastic process η = {η_t, t∈T} by

    η_t(ω) = ∫_{(−∞,t)∩T} ξ_s(ω)ν(ds)   a.s.

for each t∈T and observe that η_t∈L_p(Ω), since

    E|η_t|^p = E|∫_{(−∞,t)∩T} ξ_s(ω)ν(ds)|^p ≤ {ν[(−∞,t)∩T]}^{p/q} E∫_T |ξ_s(ω)|^p ν(ds) < ∞,

and, for t_1 ≤ t_2 in T,

    ‖η_{t_2} − η_{t_1}‖ ≤ ∫_{t_1}^{t_2} ‖ξ_s‖ ν(ds).
2.5.3 PROPOSITION. The stochastic process η is of strong bounded variation.

Proof: For every partition t_0 < t_1 < … < t_n of T and every ζ∈L(ξ),

    Σ_{k=1}^n ‖η_{t_k} − η_{t_{k−1}}‖ ≤ Σ_{k=1}^n ∫_{t_{k−1}}^{t_k} ‖ξ_s‖ν(ds) ≤ ∫_T ‖ξ_s‖ν(ds) < ∞.   □
It follows that the stochastic integral ∫_T f(t)dη_t is defined for f∈Λ_p(η) as in Section 2 and that Λ_p(η) = Λ̄_p(η) (Proposition 2.2.2). We now define Λ_p(ξ) ≜ Λ_p(η) and, for every f∈Λ_p(ξ),

    ∫_T f(t)ξ_tν(dt) ≜ ∫_T f(t)dη_t .

We shall see that the "stochastic integral" ∫_T f(t)ξ_tν(dt) can be expressed as a sample path integral for a large class of functions in Λ_p(ξ) (Lemma 2.5.4) and that sample path integrals of the form ∫ f(t)ξ_t(ω)ν(dt) belong to L(η) for all f∈L_q(T,B_T,ν) (Theorem 2.5.5). In addition, these sample path integrals are dense in L(ξ) when ξ is a weakly continuous process (Theorem 2.5.6).
We begin our investigation of Λ_p(ξ) by recalling the space of functions S′ which generates Λ_p(η): S′ is the space of all bounded measurable functions f: T → R for which there exists some η_0∈L(η) such that the Lebesgue–Stieltjes integral ∫_T f(t)dF_ζ(t) equals Λ_ζ(η_0) for all ζ∈L(η). Since the stochastic process η is suppressed in our notation F_ζ(t) here, we emphasize that F_ζ(t) = Λ_ζ(η_t).

2.5.4 LEMMA. For every f∈S′ the sample path integral ∫_T f(t)ξ_t(ω)ν(dt) equals ∫_T f(t)ξ_tν(dt) with probability one.
Proof: It is clear that the sample path integral exists, since f is a bounded function and ξ_t(ω)∈L_p(ν) a.s. For any given ζ∈L_p(Ω) there exists some η_1∈L(η) such that Λ_ζ = Λ_{η_1} on L(η) by Theorem 2.1.5. Note that for t_1 < t_2,

    Λ_ζ(η_{t_2} − η_{t_1}) = ∫_{t_1}^{t_2} Λ_ζ(ξ_t)ν(dt) = ∫_{t_1}^{t_2} dF_{η_1}(t).

Thus

    Λ_ζ(∫_T f(t)ξ_tν(dt) − ∫_T f(t)ξ_t(ω)ν(dt))
      = ∫_T f(t)dF_{η_1}(t) − ∫_T f(t)Λ_ζ(ξ_t)ν(dt)
      = ∫_T f(t)dF_{η_1}(t) − ∫_T f(t)dF_{η_1}(t) = 0,

and since ζ∈L_p(Ω) was arbitrary, the two integrals are equal with probability one. □

We have seen that ξ_t(ω)∈L_p(T,B_T,ν) with probability one, so that for all f∈L_q(T,B_T,ν) the sample path integral ∫_T f(t)ξ_t(ω)ν(dt) is defined a.s. and is easily seen to belong to L_p(Ω). In fact, it belongs to L(η), and the function space Λ_p(ξ) contains L_q(ν) in a sense which we now make precise.

2.5.5 THEOREM. Every function f∈L_q(ν) determines uniquely an element f̃∈Λ_p(ξ) such that

    ∫_T f(t)ξ_t(ω)ν(dt) = ∫_T f̃(t)ξ_tν(dt)   a.s.,

where the left-hand side is a sample path integral and the right-hand side a stochastic integral.
Proof: For a step function g, let ζ = ∫_T g(t)ξ_t(ω)ν(dt), and observe that

    ‖ζ‖^p = Λ_ζ(ζ) = ∫_T g(t)Λ_ζ(ξ_t)ν(dt)
          ≤ ‖g‖_{L_q(ν)} (∫_T |Λ_ζ(ξ_t)|^p ν(dt))^{1/p}
          ≤ ‖g‖_{L_q(ν)} ‖ζ‖^{p−1} (∫_T ‖ξ_t‖^p ν(dt))^{1/p}.

Thus we have the relationship

    ‖∫_T g(t)ξ_t(ω)ν(dt)‖ ≤ ‖g‖_{L_q(ν)} (∫_T ‖ξ_t‖^p ν(dt))^{1/p}

for all g∈L_q(ν).

Given any f∈L_q(ν), let {f_n}_{n=1}^∞ be a sequence of step functions converging to f in L_q(ν) and recall that f_n∈S′ for each n. Thus the sample path integral ∫_T f_n(t)ξ_t(ω)ν(dt) belongs to L(η) by Lemma 2.5.4, and

    ‖∫_T f(t)ξ_t(ω)ν(dt) − ∫_T f_n(t)ξ_t(ω)ν(dt)‖ ≤ ‖f − f_n‖_{L_q(ν)} (∫_T ‖ξ_t‖^p ν(dt))^{1/p},

which converges to zero as n → ∞, so that the sample path integral ∫_T f(t)ξ_t(ω)ν(dt) belongs to L(η). Note that {f_n}_{n=1}^∞ is a Cauchy sequence in S′, since

    ‖∫_T [f_m(t) − f_n(t)]ξ_t(ω)ν(dt)‖ ≤ ‖f_m − f_n‖_{L_q(ν)} (∫_T ‖ξ_t‖^p ν(dt))^{1/p} → 0

as m,n → ∞, and denote its limit in Λ_p(ξ) by f̃. Then

    ‖∫_T f(t)ξ_t(ω)ν(dt) − ∫_T f̃(t)ξ_tν(dt)‖ = lim_{m,n→∞} ‖∫_T [f_m(t) − f_n(t)]ξ_t(ω)ν(dt)‖ = 0,

so that the two integrals are equal a.s. It is immediate that f determines f̃ uniquely, since if g_1, g_2 in Λ_p(ξ) satisfy

    ∫_T g_i(t)ξ_tν(dt) = ∫_T f(t)ξ_t(ω)ν(dt)   a.s.

for i = 1,2, then ‖g_1 − g_2‖_{Λ_p(ξ)} = 0. □
Identifying f with f̃ we can then consider L_q(ν) as a subset of Λ_p(ξ). It is straightforward to check that the process η is continuous in p-th mean and consequently that Λ_p(ξ) is isometric to all of L(η). Since S′ ⊂ L_q(ν), it is clear that L_q(ν) is always dense in Λ_p(ξ) and hence that L_q(ν) is isometric to a dense subset of L(η). The final result in this section shows that L_q(ν) is isometric to a dense subset of L(ξ) when ξ is weakly continuous.
2.5.6 THEOREM. Suppose that ξ is weakly continuous from the right and that T = [a,∞). Then the closure of {∫_T f(t)ξ_t(ω)ν(dt): f is a step function} in L_p(Ω) is L(ξ).
Proof: Fix t∈T, and for every integer n ≥ 1 define

    g_n(s) = ν^{−1}{(t, t+n^{−1}]} χ_{(t, t+n^{−1}]}(s),

so that ∫_T g_n(s)ν(ds) = 1. Given any ε > 0 and any ζ∈L_p(Ω), use weak right continuity to choose an N such that n ≥ N implies that |Λ_ζ(ξ_t − ξ_s)| ≤ ε for t < s ≤ t + 1/n. Then for n ≥ N,

    |Λ_ζ(ξ_t − ∫_T g_n(s)ξ_s(ω)ν(ds))| ≤ ∫_T g_n(s)|Λ_ζ(ξ_t − ξ_s)|ν(ds) ≤ ε ∫_T g_n(s)ν(ds) = ε.

Hence ∫_T g_n(s)ξ_s(ω)ν(ds) converges weakly to ξ_t by Proposition 2.1.6, and therefore ξ_t belongs to the closure of {∫_T f(t)ξ_t(ω)ν(dt): f is a step function} by [Rudin 1973, Theorem 3.1.2]. □

Thus when ξ is weakly continuous, every element of L(ξ) can be expressed as a limit in L_p (and hence also a.s.) of sample path integrals. Specifically, if ζ∈L(ξ), then there exists a sequence {f_n}_{n=1}^∞ ⊂ L_q(ν) such that

    ζ(ω) = lim_{n→∞} ∫_T f_n(t)ξ_t(ω)ν(dt)   a.s.
6. The integral ∫_T f(t)ξ_tν(dt) when ξ is SαS with independent increments.

When ξ is SαS with independent increments and f∈L_q, we find the relationship between the two integrals of Sections 2 and 5 (Theorem 2.6.1), and we characterize the independence of ∫f_1(t)ξ_tν(dt) and ∫f_2(t)ξ_tν(dt) (Theorem 2.6.3).

Let ξ = {ξ_t, t∈T} be a SαS process with independent increments, index set T = [a,∞), ξ_a = 0, weak continuity from the right, and F(t) = ‖ξ_t‖^α. The process ξ is therefore a p-th order process for any p such that 1 < p < α, and the integral ∫_T f(t)ξ_tν(dt) is defined as in the previous section. Under the conditions we have placed on ξ we know in fact by Theorem 2.5.5 that integrals of the form ∫_T f(t)ξ_tν(dt), where f∈L_q(T,B_T,ν), 1/p + 1/q = 1, are dense in L(ξ). The following Fubini-type result relates this integral to the one in Section 3.
2.6.1 THEOREM. If f∈L_q(T,B_T,ν), then

    ∫_T f(s)ξ_sν(ds) = ∫_T (∫_u^∞ f(s)ν(ds)) dξ_u .

Proof:
We begin by showing that the right-hand integral exists, i.e., that ∫_u^∞ f(s)ν(ds), as a function of u, belongs to L_α(dF). Since ν is a finite measure (Proposition 2.5.1), we can choose M so large that ν[M,∞) < 1. Thus

    ∫_T |∫_u^∞ f(s)ν(ds)|^α dF(u) ≤ ∫_T (∫_T |f(s)|^q ν(ds))^{α/q} (∫_u^∞ ν(ds))^{α/p} dF(u)
      = ‖f‖^α_{L_q(ν)} ∫_T [ν[u,∞)]^{α/p} dF(u)
      ≤ ‖f‖^α_{L_q(ν)} [∫_a^M [ν[u,∞)]^{α/p} dF(u) + ∫_M^∞ ν[u,∞) dF(u)],

which is finite since

    ∫_a^M [ν[u,∞)]^{α/p} dF(u) ≤ ν(T)^{α/p} F(M) < ∞

and

    ∫_M^∞ ν[u,∞)dF(u) ≤ ∫_T ∫_T χ_{[u,∞)}(s)ν(ds)dF(u) = ∫_T ∫_T χ_{(a,s]}(u)dF(u)ν(ds) ≤ ∫_T ‖ξ_s‖^α ν(ds) < ∞,

by Proposition 2.5.1.
We complete the proof by showing that

    Λ_{ξ_t}(∫_T f(s)ξ_sν(ds)) = Λ_{ξ_t}(∫_T ∫_u^∞ f(s)ν(ds) dξ_u)

for all t∈T and applying the remark following Corollary 2.3.2. Indeed,

    Λ_{ξ_t}(∫_T ∫_u^∞ f(s)ν(ds) dξ_u) = ∫_T χ_{(a,t]}(u) ∫_u^∞ f(s)ν(ds) dF(u)

by Proposition 2.3.1. Also,

    Λ_{ξ_t}(∫_T f(s)ξ_sν(ds)) = ∫_T f(s)Λ_{ξ_t}(ξ_s)ν(ds)
      = ∫_T f(s) ∫_T χ_{(a,t]}(u)χ_{(a,s]}(u)dF(u) ν(ds)
      = ∫_T f(s) ∫_T χ_{(a,t]}(u)χ_{[u,∞)}(s)dF(u) ν(ds)
      = ∫_T χ_{(a,t]}(u) ∫_u^∞ f(s)ν(ds) dF(u).   □

The following corollary is immediate in view of Proposition 2.3.1.
2.6.2 COROLLARY. If f and g belong to L_q(T,B_T,ν), then

    Λ_{∫_T f(s)ξ_sν(ds)}(∫_T g(t)ξ_tν(dt)) = ∫_T (∫_u^∞ f(s)ν(ds))^{α−1} (∫_u^∞ g(t)ν(dt)) dF(u).

Also,

    ‖∫_T f(s)ξ_sν(ds)‖^α = ∫_T |∫_u^∞ f(s)ν(ds)|^α dF(u).
As another application of Theorem 2.6.1 we obtain necessary and sufficient conditions for independence of such integrals.
2.6.3 THEOREM. Let F(t) be strictly increasing, and for i = 1,2 consider

    B_i = {u∈T: ∫_u^∞ f_i(t)ν(dt) = 0}

and B_i′ = T − B_i. Then ∫f_1(t)ξ_tν(dt) and ∫f_2(t)ξ_tν(dt) are independent if and only if one of the following two (equivalent) conditions holds:

(i) f_1 = 0 a.e. [ν] on B_2′ and for each u∈B_2′

    ∫_u^∞ χ_{B_2}(t)f_1(t)ν(dt) = 0;

(ii) f_2 = 0 a.e. [ν] on B_1′ and for each u∈B_1′

    ∫_u^∞ χ_{B_1}(t)f_2(t)ν(dt) = 0.

Proof: If ∫_T f_1(t)ξ_tν(dt) and ∫_T f_2(t)ξ_tν(dt) are independent, then

    (*)   ∫_u^∞ f_1(t)ν(dt) ∫_u^∞ f_2(t)ν(dt) = 0

a.e. [dF] by Theorem 2.4.2 (with a slight modification of the index set of the process). Thus (*) holds for all u∈T, since the integrals are continuous functions of u and F is strictly increasing. Also by continuity of the integrals, B_2 is a closed set and hence (a,∞) − B_2 is an open set on which ∫_u^∞ f_1(t)ν(dt) is zero. Therefore f_1 = 0 a.e. [ν] on B_2′ and consequently

    ∫_u^∞ χ_{B_2}(t)f_1(t)ν(dt) = 0

for all u∈B_2′. The necessity of (ii) follows in a similar manner, and the converse can be seen by reversing the argument. □
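Condition (i) can be illustrated on a grid: take f_2 supported early, put f_1 on the set where the tail integral of f_2 vanishes, and give f_1 zero total ν-integral there. All concrete choices below (the interval, the weights, the particular f_1 and f_2) are illustrative assumptions.

```python
import numpy as np

# The product of the tail integrals int_u^inf f_i nu(dt) should vanish
# identically -- the independence criterion inherited from Theorem 2.4.2.
n = 1000
u = np.linspace(0.0, 10.0, n, endpoint=False)
nu = np.full(n, 10.0 / n)

f2 = np.where(u < 5.0, 1.0 + np.cos(u), 0.0)      # supported on [0, 5)
f1 = np.where(u >= 5.0,                           # supported on [5, 10),
              np.sin(2.0 * np.pi * (u - 5.0) / 5.0), 0.0)  # zero total integral

def tail(f):
    # tail(f)[j] = sum_{k >= j} f[k] * nu[k], a discrete int_u^inf f dnu
    return np.cumsum((f * nu)[::-1])[::-1]

prod = tail(f1) * tail(f2)
print(np.max(np.abs(prod)))
```

For u < 5 the tail of f_1 is its (zero) total integral, and for u ≥ 5 the tail of f_2 vanishes, so the product is identically zero up to rounding.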
It is possible to obtain similar conditions for independence for slightly more general F, allowing say F to be constant on a closed subset of (a,∞) but strictly increasing elsewhere.
7. System identification.

In earlier sections it has been convenient to use Λ_ζ to denote a continuous linear functional defined by Λ_ζ(η) = E[η(ζ)^{p−1}] on a p-th order family and by Λ_ζ(η) = C_{ηζ} on a SαS family. We shall now refer to Λ_ζ(η) in both cases as the covariation of η with ζ and likewise extend the use of the symbol C_{ηζ} to the p-th order case. The covariation function C_{ξξ} of a stochastic process ξ = {ξ_t, t∈T} is defined by

    C_{ξξ}(s,t) = C_{ξ_sξ_t},

and the cross covariation function C_{Xξ} of a stochastic process X = {X_v, v∈V} with ξ is defined by

    C_{Xξ}(v,t) = C_{X_vξ_t}.

Let the p-th order process ξ be the input to a linear system and let its output X be given by

    (1)   X_v = ∫_T f_v(t)dξ_t ,   where f = {f_v(·), v∈V} ⊂ S′,

or by

    (2)   X_v = ∫_T f_v(t)ξ_tν(dt) ,   where f = {f_v(·), v∈V} ⊂ L_q(ν).

In both cases f is called the (time varying) impulse response of the system. Our purpose is to investigate what can be determined about the system from knowledge of the statistical relationship between ξ and X, specifically from knowledge of the covariation function of ξ and either the covariation function of X or the cross covariation function of X with ξ.
When ξ is SαS with independent increments we find that the impulse response of the system, in both cases considered above, can be identified from the cross covariation of the output with the input process (Propositions 2.7.3 and 2.7.5). With additional restrictions it is also possible to determine the impulse response of system (2) from the covariation function of the output process (Proposition 2.7.6) and from the cross covariation of the output with a known stationary sub-Gaussian input process (Example 2.7.1).
For a p-th order process ξ of weak bounded variation, the cross covariation function for system (1) is given by

    C_{Xξ}(v,t) = ∫_T f_v(s) d_s C_{ξξ}(s,t),

and it is clear that the function C_{Xξ}(v,t), t∈T, determines f_v if and only if the signed measures determined by C_{ξξ}(·,t), t∈T, separate points on S′. Similarly, for a measurable p-th order process ξ, the cross covariation function for system (2) is given by

    C_{Xξ}(v,t) = ∫_T f_v(s) C_{ξξ}(s,t)ν(ds),

and C_{Xξ}(v,t), t∈T, determines f_v if and only if the functions C_{ξξ}(·,t), t∈T, separate points on L_q(ν).

One situation in which system identification is possible is when the covariation function C_{ξξ}(s,t) of the input process is a stationary covariance function, an interesting circumstance which arises for certain stationary sub-Gaussian processes.
To see the form of C_{ξξ}(s,t) for a general α-sub-Gaussian process, let

    A_{s,t}(r_1,r_2) = 2^{−α/2} [r_1²R(s,s) + 2r_1r_2R(s,t) + r_2²R(t,t)]^{α/2}

for every s,t∈T, where R(s,t) is a covariance function and 1 < α < 2. Then observe that

    C_{ξ_sξ_t} = (1/α) ∂A_{s,t}(r_1,r_2)/∂r_1 |_{r_1=0, r_2=1}
      = 2^{−α/2} [r_1R(s,s) + r_2R(s,t)] [r_1²R(s,s) + 2r_1r_2R(s,t) + r_2²R(t,t)]^{−(2−α)/2} |_{r_1=0, r_2=1}
      = 2^{−α/2} R(s,t) R(t,t)^{−(2−α)/2}.

2.7.1 EXAMPLE. Suppose that ξ is a p-th order process of weak bounded variation having covariation function

    C_{ξξ}(s,t) = cR(s,t),

where c > 0 and R is a covariance function of the form

    R(s,t) = ∫_{−∞}^∞ e^{ir(t−s)} g(r)dr

with g(r) > 0, g∈L_1(Leb).
Assuming that ∫_{−∞}^∞ |r|g(r)dr < ∞, we obtain for system (1) the cross covariation

    C_{Xξ}(v,t) = ∫_{−∞}^∞ f_v(s) d_s C_{ξξ}(s,t)
      = ∫_{−∞}^∞ f_v(s) (c ∫_{−∞}^∞ (−ir) e^{ir(t−s)} g(r)dr) ds
      = −ic ∫_{−∞}^∞ r e^{irt} g(r) (∫_{−∞}^∞ e^{−irs} f_v(s)ds) dr.

Thus knowledge of C_{Xξ}(v,t) for all t and of g determines ∫_{−∞}^∞ e^{−irs} f_v(s)ds for all r and hence f_v(s) a.e. [Leb].
Assuming moreover that C_{Xξ}(v,t)∈L_1 (or L_2) as a function of t and that (rg(r))^{−1} ∫_{−∞}^∞ e^{−irt}C_{Xξ}(v,t)dt ∈ L_1 (or L_2) as a function of r, we can express the impulse response as

    f_v(s) = [i/(c(2π)²)] ∫_{−∞}^∞ (e^{irs}/(r g(r))) (∫_{−∞}^∞ e^{−irt} C_{Xξ}(v,t)dt) dr   a.e. [Leb].
If we suppose instead that ξ is a measurable p-th order process with the same covariation function as above, then we obtain for system (2) the cross covariation

    C_{Xξ}(v,t) = ∫_{−∞}^∞ f_v(s) C_{ξξ}(s,t)ν(ds)
      = c ∫_{−∞}^∞ f_v(s) (∫_{−∞}^∞ e^{ir(t−s)} g(r)dr) ν(ds)
      = c ∫_{−∞}^∞ g(r) e^{irt} (∫_{−∞}^∞ f_v(s) e^{−irs} ν(ds)) dr.

Thus knowledge of C_{Xξ}(v,t) for all t and of g determines ∫_{−∞}^∞ e^{−irs}f_v(s)ν(ds) for all r and hence f_v(s) a.e. [Leb]. Assuming as in the previous case that C_{Xξ}(v,t)∈L_1 (or L_2) as a function of t and that g(r)^{−1} ∫_{−∞}^∞ e^{−irt}C_{Xξ}(v,t)dt ∈ L_1 (or L_2) as a function of r, we can express the impulse response as

    f_v(s) = [(2π)² c (dν/dLeb)(s)]^{−1} ∫_{−∞}^∞ (e^{irs}/g(r)) (∫_{−∞}^∞ e^{−irt} C_{Xξ}(v,t)dt) dr   a.e. [Leb].
Throughout the remainder of this section we assume that ξ is a SαS process and use the special properties of SαS processes to obtain more concrete results. We begin with a simple proposition relating covariations in the output space with the finite-dimensional distributions of X.

2.7.2 PROPOSITION. Knowing Λ_Y(X_v) for all Y∈L(X) and all v∈V is equivalent to knowing the finite-dimensional distributions of X.
Proof: Suppose that the finite-dimensional distributions of X are known and fix Y∈L(X). Then Y = lim_{n→∞} Y_n where each Y_n is a finite linear combination of {X_v, v∈V}. For every v∈V we know the joint distribution of X_v and Y_n, and therefore we know Λ_{Y_n}(X_v), and thus also Λ_Y(X_v) = lim_{n→∞} Λ_{Y_n}(X_v) by Theorem 2.1.10.

Conversely, if Λ_Y(X_v) is known for all Y∈L(X), then for any integer n ≥ 1 and any v_1,…,v_n∈V we know

    Λ_{r_1X_{v_1}+…+r_nX_{v_n}}(r_1X_{v_1}+…+r_nX_{v_n}) = ∫_S |r_1x_1+…+r_nx_n|^α Γ_{X_{v_1},…,X_{v_n}}(dx)

for all r_1,…,r_n and hence the joint c.f. of X_{v_1},…,X_{v_n}. □
Now the dual of L(X) separates points on L(X); so each X_v is determined in L(X) by knowledge of Λ_Y(X_v) for all Y∈L(X) (Theorem 2.1.5), hence by the finite-dimensional distributions of X (Proposition 2.7.2). However, L(X) in general may be strictly contained in M, the subspace of L(ξ) isometric to Λ_α, and the set of linear functionals {Λ_Y : Y∈L(X)} may not separate points on M. In such cases the elements of the output process need not be determined in M by the finite-dimensional distributions of X. (If {Λ_Y : Y∈L(X)} does not separate points on M, then there exists ψ∈M, ψ ≠ 0, such that Λ_Y(ψ) = 0 for all Y∈L(X). Thus for any X_v,

    Λ_Y(X_v + ψ) = Λ_Y(X_v)

for all Y∈L(X), and therefore knowledge of the finite-dimensional distributions of X does not distinguish between X_v and X_v + ψ in M.)
Since M and Λ_α are isometric, knowing the output variable X_v in M is equivalent to knowing the impulse response function f_v in Λ_α. Hence the finite-dimensional distributions of the output process X determine f = {f_v, v∈V} if and only if the set {Λ_Y, Y∈L(X)} separates points on M.

An interesting example for which the system is determined only to within an equivalence class is treated in [Kanter 1973]. In that paper Kanter considers a SαS process {ξ_t, t∈R} with independent increments and ‖ξ_t‖^α = t and a time-invariant system of type (1) with impulse response f∈L_α(R) and output

    X_v = ∫_R f(v+t)dξ_t .

Then the finite-dimensional distributions of X determine f up to translation and multiplication by ±1.
To see how system (1) can be determined from covariation functions in the SαS case, we assume that ξ has independent increments, is right continuous, T = [a,∞), ξ_a = 0, and ‖ξ_t‖^α = F(t). Then

    C_{ξξ}(s,t) = ∫ χ_{(a,min(s,t)]}(r)dF(r) = F(min(s,t)).

Therefore by Proposition 2.3.1,

    C_{Xξ}(v,t) = Λ_{ξ_t}(∫_T f_v(s)dξ_s) = ∫_{(a,t]} f_v(s)dF(s) = ∫_T f_v(s) d_s C_{ξξ}(s,t)

for every f_v∈L_α(dF), with C_{Xξ}(v,t), t∈T, determining f_v if and only if the signed measures determined by C_{ξξ}(·,t), t∈T, separate points on L_α(dF).
In addition,

    C_{XX}(u,v) = Λ_{X_v}(∫_T f_u(s)dξ_s) = ∫_T f_u(s)(f_v(s))^{α−1} dF(s),

and consequently f_u is determined by the covariation function C_{XX}(u,v), v∈V, if and only if the functions f_v^{α−1}, v∈V, separate points on L_α(dF).

2.7.3 PROPOSITION. Suppose that ξ has independent increments, T = [a,∞), and ξ_{a+0} = 0. Then each impulse response function f_v is determined by the cross covariation function C_{Xξ}(v, t+0), t∈T. Moreover, assuming for simplicity of expression that ξ is weakly continuous from the right,

    f_v(t) = lim_{n→∞} [C_{Xξ}(v, t^{(n)}_{k_n(t)+1}) − C_{Xξ}(v, t^{(n)}_{k_n(t)})] / [F(t^{(n)}_{k_n(t)+1}) − F(t^{(n)}_{k_n(t)})]   a.e. [dF],

where F(t) = ‖ξ_t‖^α, {I_k^{(n)}}_{k=1}^∞ is a partition of (a,∞) into semiclosed intervals I_k^{(n)} = (t_k^{(n)}, t_{k+1}^{(n)}] such that the partitions become finer as n increases, with mesh tending to zero as n → ∞, and k_n(t) is the unique k such that t∈I_k^{(n)}.

Proof: The first part of the proposition is immediate from Corollary 2.3.2, and the second part follows by an exercise similar to (20.61)(b) of [Hewitt and Stromberg 1965]. □
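On a fixed grid the difference-quotient recovery in Proposition 2.7.3 is exact up to rounding, since C_{Xξ}(v,t) = ∫_{(a,t]} f_v dF with independent increments; the grid, the choice of F, and the impulse response below are illustrative assumptions.

```python
import numpy as np

# Discrete sketch: build C(t_k) = int_(a,t_k] f_v dF, then recover f_v on the
# cells from the difference quotient (C(t_{k+1}) - C(t_k)) / (F(t_{k+1}) - F(t_k)).
t = np.linspace(0.0, 1.0, 1001)
F = t ** 2                                   # F(t) = ||xi_t||^alpha, increasing
f_v = np.cos(3 * t[:-1])                     # impulse response on the cells
dF = np.diff(F)

C = np.concatenate([[0.0], np.cumsum(f_v * dF)])   # C_Xxi(v, t_k)
recovered = np.diff(C) / dF
print(np.max(np.abs(recovered - f_v)))
```

In practice C_{Xξ} would have to be estimated from data rather than computed exactly, which is where the [Kanter and Steiger 1974] reference cited below comes in.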
For a discussion of the estimation of C_{Xξ}(v,t), see [Kanter and Steiger 1974].

2.7.4 EXAMPLE.
Suppose that ξ satisfies the conditions in Proposition 2.7.3 with dF a finite measure such that dF ~ Leb and

    ∫_a^∞ t² dF(t) = c² < ∞.

Let the impulse response of system (1) be given by

    f_v(t) = (cos vbt)^{1/(α−1)}

for all v ≥ 0, where the "frequency" b > 0 is unknown. Then for each u ≥ 0, the covariation function

    C_{XX}(u,v) = ∫_a^∞ f_u(t) cos(vbt) dF(t),   v ≥ 0,

determines f_u a.e. [Leb] and hence b. To obtain an explicit expression for b, observe that

    ∂²C_{XX}(0,v)/∂v² = −b² ∫_a^∞ t² cos(vbt) dF(t)

and therefore

    b² = −c^{−2} [∂²C_{XX}(0,v)/∂v²]|_{v=0} .
We next consider how system (2) can be determined from covariation functions and therefore assume that ξ is a measurable SαS process. Then C_{Xξ}(v,t), t∈T, determines f_v if and only if the functions C_{ξξ}(·,t), t∈T, separate points on L_q(ν). Assuming that ξ has independent increments, is weakly continuous from the right, T = [a,∞), ξ_a = 0,

    C_{XX}(u,v) = ∫_T (∫_r^∞ f_u(s)ν(ds)) (∫_r^∞ f_v(t)ν(dt))^{α−1} dF(r)

by Corollary 2.6.2, and we shall see in Example 2.7.7 that the covariation function C_{XX}(u,v), v∈V, sometimes determines f_u.
2.7.5 PROPOSITION. Let ξ be SαS with independent increments, weakly continuous from the right, T = [a,∞), ξ_a = 0, and F(t) = ‖ξ_t‖^α strictly increasing. Then for each fixed v∈V the cross covariation function C_{Xξ}(v,t), t∈T, determines f_v in L_q(T,B_T,ν) by the relationship

    C_{Xξ}(v,t) = ∫_a^t ∫_r^∞ f_v(s)ν(ds) dF(r).

Proof: Observe that

    C_{Xξ}(v,t) = Λ_{ξ_t}(∫_T ∫_r^∞ f_v(s)ν(ds) dξ_r) = ∫_a^t ∫_r^∞ f_v(s)ν(ds) dF(r)

by Theorem 2.6.1 and Proposition 2.3.1. Thus the covariations C_{Xξ}(v,t) for all t∈T determine ∫_r^∞ f_v(s)ν(ds) for all r, since F is strictly increasing, and likewise f_v is determined a.e. [Leb] since ν is equivalent to Lebesgue measure. □
Applying Proposition 2.7.2 to system (2) we see that knowledge of the finite-dimensional distributions of the output process X is equivalent to knowing Λ_Y(X_v) for all Y∈L(X) and all v∈V. In this case,

    Λ_Y(X_v) = Λ_Y(∫ f_v(t)ξ_tν(dt)) = ∫ f_v(t)Λ_Y(ξ_t)ν(dt),

where for each Y∈L(X), Λ_Y(ξ_t)∈L_p(T,B_T,ν) as we saw in the proof of Theorem 2.5.5. Thus the finite-dimensional distributions of X determine f_v∈L_q(T,B_T,ν) if and only if the family {Λ_Y(ξ_t): Y∈L(X)}, a subset of L_p(ν), separates points on L_q(ν).

As an example we show that for certain systems the finite-dimensional distributions of X are more than is necessary, and in fact each impulse response function f_u is determined in L_q(ν) by the covariation function C_{XX}(u,v), v∈V.
2.7.6 PROPOSITION. Let ξ be a SαS process with independent increments, weakly continuous from the right, T = [0,∞), F(t) = ‖ξ_t‖^α = t for all t∈T, and ν(dt) = h(t)dt where h is chosen as in Proposition 2.5.1. Let {G_v, v∈V} be a set of twice differentiable functions in L_p(ν) that separate points on L_q(ν) and satisfy G_v(0) = 0. Let {g_v, v∈V} be a set of differentiable functions related to the functions G_v by

    g_v(r) = (dG_v(r)/dr)^{1/(α−1)}

and such that lim_{s→∞} g_v(s) = 0 and h^{−1} dg_v/ds ∈ L_q(ν) for all v∈V. If the system is defined by

    f_v(s) = −h(s)^{−1} dg_v(s)/ds

for all v∈V, then each f_u is determined by the covariation function C_{XX}(u,v), v∈V.

Proof: Given u∈V, then

    C_{XX}(u,v) = Λ_{X_v}(∫_T f_u(t)ξ_tν(dt)) = ∫_T f_u(t)Λ_{X_v}(ξ_t)ν(dt)
      = ∫_T f_u(t) ∫_0^t (∫_r^∞ f_v(s)ν(ds))^{α−1} dr ν(dt)
      = −∫_T f_u(t) ∫_0^t (∫_r^∞ (dg_v(s)/ds) ds)^{α−1} dr ν(dt)
      = ∫_T f_u(t) ∫_0^t (g_v(r))^{α−1} dr ν(dt)
      = ∫_T f_u(t) ∫_0^t (dG_v(r)/dr) dr ν(dt)
      = ∫_T f_u(t) G_v(t) ν(dt)

for each v∈V. Since {G_v, v∈V} separates points on L_q(ν), it follows that f_u is determined by C_{XX}(u,v), v∈V. □
2.7.7 EXAMPLE. Let G_v(t) = e^{−t} sin vt for v∈[0,∞) and choose h(t) = min(1, t^{−3}) (Proposition 2.5.1). Then

    g_v(r) = e^{−r/(α−1)} (v cos vr − sin vr)^{1/(α−1)}

and

    f_v(s) = [e^{−s/(α−1)} / ((α−1) min(1, s^{−3}))] [ −(sin vs − v cos vs)^{1/(α−1)}
             + (v² sin vs + v cos vs)(sin vs − v cos vs)^{(2−α)/(α−1)} ],

which clearly belongs to L_q(ν) and converges to zero as s → ∞.
An alternative approach to the system identification problem, when the statistics of the output are known, is to investigate which systems will produce the same output (in distribution) to the same input. For each system f, write

    X_v^{(f)} = ∫_T f_v(t)dξ_t   (or   X_v^{(f)} = ∫_T f_v(t)ξ_tν(dt)),

where ξ is a SαS process, and let L(f) be the space of finite linear combinations of functions in {f_v, v∈V}. Then two output processes X^{(f)} and X^{(g)} have the same finite dimensional distributions if and only if the map from L(f) onto L(g) defined by

    r_1f_{v_1} + … + r_nf_{v_n} ↦ r_1g_{v_1} + … + r_ng_{v_n}

for all integers n ≥ 1, r_1,…,r_n∈R, and v_1,…,v_n∈V, is an isometry. We shall content ourselves here with discussing the simplified problem when V is a singleton:

    (1)   X_f = ∫_T f(t)dξ_t ,
    (2)   X_f = ∫_T f(t)ξ_tν(dt) .

In system (1) with ξ a SαS process, we have that X_f and X_g have the same distribution (written X_f d= X_g) if and only if ‖f‖_{S′} = ‖g‖_{S′}. Also, if f and g are bounded functions vanishing outside a bounded set, then X_f d= X_g for all SαS ξ with independent increments if and only if |f(t)| = |g(t)| for all t. For system (2) with ξ as in Theorem 2.6.1, we have that X_f d= X_g if and only if

    ‖∫_r^∞ f dν‖_{L_α(dF)} = ‖∫_r^∞ g dν‖_{L_α(dF)} .

Moreover, for bounded functions f and g, X_f d= X_g for all such inputs ξ if and only if

    |∫_r^∞ f dν| = |∫_r^∞ g dν|

for all r.
2.7.8 PROPOSITION. Consider system (2),

    ξ_t = ∫ cos(tλ)dζ_λ ,   t ≥ 0,

where ζ = {ζ_λ, λ ≥ 0} is a SαS process with independent increments, weakly continuous from the right, such that F(λ) = ‖ζ_λ‖^α is a bounded function and F(0) = 0. Then

    X_f = ∫ f(t)ξ_tν(dt) = ∫ ψ_f(λ)dζ_λ ,

where

    ψ_f(λ) = ∫ f(t)cos(tλ)ν(dt) .

Consequently, X_f d= X_g for a given ζ if and only if

    ∫ |ψ_f(λ)|^α dF(λ) = ∫ |ψ_g(λ)|^α dF(λ),

and X_f d= X_g for all such ζ if and only if |ψ_f(λ)| = |ψ_g(λ)| for all λ ≥ 0.

The proof follows by a familiar line of argument.
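The interchange X_f = ∫ψ_f(λ)dζ_λ behind Proposition 2.7.8 has an exact discrete counterpart; the grids, the weights, and the Cauchy stand-in for the SαS increments below are illustrative assumptions only.

```python
import numpy as np

# With xi_t = sum_k cos(t * lam_k) dzeta_k, the weighted output
# sum_t f(t) xi_t nu(t) equals sum_k psi_f(lam_k) dzeta_k, where
# psi_f(lam) = sum_t f(t) cos(t * lam) nu(t).
rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 300)
lam = np.linspace(0.0, 10.0, 200)
dzeta = rng.standard_cauchy(lam.size)       # stand-in SaS increments
nu = np.full(t.size, 5.0 / t.size)
f = np.exp(-t)

xi = np.cos(np.outer(t, lam)) @ dzeta       # xi_t on the grid
lhs = np.sum(f * xi * nu)                   # int f(t) xi_t nu(dt)
psi_f = (f * nu) @ np.cos(np.outer(t, lam)) # psi_f at each lam_k
rhs = psi_f @ dzeta                         # int psi_f(lam) d zeta_lam
print(abs(lhs - rhs))
```

As with Theorem 2.6.1, the identity is path-by-path and algebraic; the distributional statements of the proposition then follow from the scale of ∫ψ_f dζ.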
III. PATH PROPERTIES

Some known path properties of Gaussian processes can be easily extended to SαS processes. For instance, the zero-one laws for Gaussian processes in [Cambanis and Rajput 1973] hold as well for SαS processes with no change in proof in view of the zero-one law for stable measures in [Dudley and Kanter 1974] (see also [Fernique 1973]). This chapter extends to p-th order or SαS processes certain results known for 2nd order or Gaussian processes. Specifically we give necessary and sufficient conditions for the measurability of a p-th order or a SαS process (Section 1), necessary and sufficient conditions for the integrability of almost all paths of a SαS process (Section 2), and sufficient and necessary and sufficient conditions for almost sure path absolute continuity for p-th order and for SαS processes, respectively (Section 3). These results are obtained by appropriate modification of the proofs of similar results for second order or Gaussian processes.
1. Measurability of p-th order processes.

Let ξ = {ξ_t: t∈T} and η = {η_t: t∈T} be stochastic processes on the probability space (Ω,F,P), where T is a Borel subset of a complete separable metric space and B(T) denotes the Borel subsets of T. The process η is a modification of ξ if P{ξ_t = η_t} = 1 for all t∈T; η is called measurable if (t,ω) ↦ η_t(ω) is a product measurable map from T×Ω into R. The existence of a measurable modification is frequently of interest in the study of path properties.
The following condition for the existence of a measurable modification is contained in [Cohn 1972]. We shall use ξ̃ to denote the map t ↦ ξ_t and shall specify the range space when required for clarity. Let M be the space of all real-valued random variables on (Ω,F,P), and define a metric ρ on M by

    ρ(ζ_1, ζ_2) = E( |ζ_1 − ζ_2| / (1 + |ζ_1 − ζ_2|) )

for all ζ_1,ζ_2∈M. Then ρ metrizes the topology of convergence in probability. The process ξ has a measurable modification if and only if the map ξ̃ from T to M is Borel measurable.
For p-th order processes we now obtain further equivalent conditions for the existence of a measurable modification. Our line of proof follows that in [Cambanis 1975a] where the case p = 2 is treated.

3.1.1 THEOREM. Let {ξ_t, t∈T} be a p-th order process with p > 1 or a SαS process with 1 < α < 2, and let L(ξ) be the linear space of the process. Then the following are equivalent:

(i) The process ξ has a measurable modification.
(ii) The map ξ̃: T → L(ξ) has separable range and is such that, for every t_0∈T, ‖ξ_t − ξ_{t_0}‖ is B(T)-measurable.
(iii) The map ξ̃: T → L(ξ) is Borel measurable.
(iv) L(ξ) is separable and for every ζ∈L(ξ) the function F_ζ(t) is Borel measurable.

It suffices by Proposition 2.1.2 to prove the result for p-th order processes.
Proof: (i) implies (ii). Consider {ξ̂_t, t∈T}, a measurable modification of ξ. Then for every t_0∈T, ‖ξ_t − ξ_{t_0}‖ = (E|ξ̂_t − ξ̂_{t_0}|^p)^{1/p} is B(T)-measurable by Fubini's theorem. We next show that ξ̃(T) is separable with respect to the norm topology, i.e., in L_p(Ω,F,P). Assume for the moment that ‖ξ_t‖ is uniformly bounded on T. By (i) it follows that ξ̃(T) is a separable subset of M with respect to the topology of convergence in probability. Thus there is a countable subset N of ξ̃(T) such that, for each t∈T, there exists a sequence {ξ_{t_n}}_{n=1}^∞ ⊂ N converging in probability to ξ_t. By [Hewitt and Stromberg 1965, p. 207] the sequence {ξ_{t_n}}_{n=1}^∞ converges weakly to ξ_t in L_p(Ω,F,P), and therefore the linear manifold generated by N is dense in L(ξ) with respect to the norm topology ([Rudin 1973, p. 65]). It follows that both L(ξ) and ξ̃(T) are separable.

Relaxing the above assumption, we let T_N = {t∈T: ‖ξ_t‖ ≤ N}. Then T_N∈B(T) and ξ̃(T) = ∪_{N=1}^∞ ξ̃(T_N) is clearly separable with respect to the norm topology.
(ii) implies (iii) ([Hoffmann-Jørgensen 1973, p. 206]). If V is an open subset of ξ(T) in the norm topology, choose {t_N}_{N=1}^∞ ⊂ T and ε_N > 0 such that

    V = ∪_{N=1}^∞ {ξ_t: ‖ξ_t − ξ_{t_N}‖ < ε_N}.

Then

    ξ^{−1}(V) = ∪_{N=1}^∞ {t∈T: ‖ξ_t − ξ_{t_N}‖ < ε_N} ∈ B(T).

(iii) implies (iv) is clear.

(iv) implies (iii) follows from Theorem 2.1.1 and a theorem due to B.J. Pettis ([Hille 1972, Theorem 7.5.10]).

(iii) implies (i) is clear since convergence in L_p(Ω,F,P) implies convergence in probability. □

It should be noted that this result does not reflect the fact that the existence of a measurable modification is a property of the two-dimensional distributions of the process, as shown in [Hoffmann-Jørgensen 1973].
3.1.2 COROLLARY. A stochastic process ξ as in Theorem 3.1.1 has a measurable modification under each of the following three conditions:

(i) ξ is a weakly continuous process.

(ii) T is an arbitrary interval and the strong left (right) limit of ξ exists at all but countably many t∈T.

(iii) T is an arbitrary interval and ξ is an SαS process with independent increments.
Proof: (i) If ξ is a weakly continuous process, then F_ζ(t) is a continuous (hence measurable) function of t for every ζ∈L(ξ). To see the separability of L(ξ), let T* be a countable dense subset of T and let N be the space of all rational linear combinations of elements in {ξ_t, t∈T*}. Then N is a countable dense subset of L(ξ) by [Rudin 1973, Theorem 3.12], and the existence of a measurable modification follows from (iv) of Theorem 3.1.1.

(ii) Parts (i) and (ii)(a) of the proof in [Bulatović and Ašić 1976] for second order processes hold with no alteration for the process ξ and show that the set T_1 of all points of discontinuity of ξ is countable. Let T_2 ⊂ T−T_1 be a countable dense subset of T. Then the space of all rational linear combinations of elements in {ξ_t, t∈T_1∪T_2} is a countable dense subset of L(ξ). It is clear that F_ζ(t) is Borel measurable for every ζ∈L(ξ) since it is a continuous function on T−T_1. Thus ξ has a measurable modification again by (iv) of Theorem 3.1.1.

(iii) If ξ is SαS with independent increments, then F(t) = ‖ξ_t‖^α is an increasing function which therefore has at most countably many points of discontinuity. Let T_1 be the set of all points of discontinuity of F, and let T_2 ⊂ T−T_1 be a countable dense subset of T. From the relationship

    ‖ξ_s − ξ_t‖^α = |F(s) − F(t)|    for all s,t∈T,

it is easy to see that the space of all rational linear combinations of elements in {ξ_t, t∈T_1∪T_2} is a countable dense subset of L(ξ). For the measurability of F_ζ(t), ζ∈L(ξ), recall from Section 3 of Chapter II that the right limit F_ζ(t+0) exists for all t. □
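Part (iii) of the corollary rests on the relation ‖ξ_s − ξ_t‖^α = |F(s) − F(t)| for an independent-increments SαS process. The sketch below builds such a process on a grid by scaling independent standard SαS draws; the Chambers–Mallows–Stuck generator is a standard simulation method, not part of the thesis, and the grid and values of F are illustrative assumptions only.

```python
import math
import random

def sas_sample(alpha, rng):
    # One standard S-alpha-S draw via the Chambers-Mallows-Stuck method
    # (standard in the simulation literature, not from the thesis).
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # Cauchy case
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def indep_increment_path(alpha, F_vals, rng):
    # Independent-increments S-alpha-S path on a grid: the increment over the
    # k-th cell has scale (F_{k+1} - F_k)^(1/alpha), so that
    # ||xi_s - xi_t||^alpha = |F(s) - F(t)|, as used in Corollary 3.1.2(iii).
    path, x = [0.0], 0.0
    for dF in (b - a for a, b in zip(F_vals, F_vals[1:])):
        x += dF ** (1.0 / alpha) * sas_sample(alpha, rng)
        path.append(x)
    return path

rng = random.Random(7)
print(indep_increment_path(1.5, [0.0, 0.4, 1.0, 1.3], rng))
```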
The next corollary provides a result for sub-Gaussian processes analogous to the corresponding result for Gaussian processes ([Cambanis 1975a]).

3.1.3 COROLLARY. If ξ is a sub-Gaussian process, then it has a measurable modification if and only if L(ξ) is separable and C_{ξξ}(s,t) is product measurable.

Proof: The two-dimensional c.f.'s of ξ are given by

    Φ_{s,t}(r_1, r_2) = exp{−‖r_1 ξ_s + r_2 ξ_t‖^α}
                      = exp{−2^{−α/2} [R(s,s)r_1² + 2R(s,t)r_1 r_2 + R(t,t)r_2²]^{α/2}},

where R(s,t) is a covariance function, and clearly

    ‖ξ_t − ξ_{t_0}‖ = 2^{−1/2} [R(t,t) − 2R(t,t_0) + R(t_0,t_0)]^{1/2}.

Using the relationship preceding Example 2.7.1, it is straightforward to show that

    R(s,t) = 2 C_{ξξ}(t,t)^{(2−α)/α} C_{ξξ}(s,t).

Thus if C_{ξξ}(s,t) is product measurable, then so is R(s,t), and hence ‖ξ_t − ξ_{t_0}‖ is measurable in t for each fixed t_0∈T. Assuming L(ξ) separable, the existence of a measurable modification therefore follows from (ii) of Theorem 3.1.1.
Conversely, if ξ has a measurable modification η, then we can see from Proposition 2.1.2 and Fubini's theorem that ‖ξ_s + ξ_t‖ = ‖η_s + η_t‖ is a product measurable function on T×T and that ‖ξ_s‖ = ‖η_s‖ is a measurable function on T. From

    2‖r_1 ξ_s + r_2 ξ_t‖² = r_1² R(s,s) + 2 r_1 r_2 R(s,t) + r_2² R(t,t)

for all r_1, r_2, one has

    2‖ξ_s‖² = R(s,s),    2‖ξ_s + ξ_t‖² = R(s,s) + R(t,t) + 2R(s,t),

and thus

    R(s,t) = ‖ξ_s + ξ_t‖² − ‖ξ_s‖² − ‖ξ_t‖².

Consequently, R(s,t) is product measurable and the product measurability of C_{ξξ}(s,t) now follows from

    C_{ξξ}(s,t) = 2^{−α/2} R(s,t) R(t,t)^{−(2−α)/2}.    □
For a general SαS process ξ it appears that C_{ξξ}(s,t) product measurable and L(ξ) separable are not sufficient for the existence of a measurable modification, though we do not have a counterexample. Since ‖ξ_t − ξ_{t_0}‖ cannot be expressed in terms of C_{ξξ}(t,t_0) for 1 < α < 2, we cannot express the condition for a measurable modification in terms of C_{ξξ}(t,t_0) except in the sub-Gaussian case (Gaussian when α = 2), where

    ‖ξ_t − ξ_{t_0}‖² = C_{ξξ}(t,t)^{2/α} − 2 C_{ξξ}(t_0,t_0)^{(2−α)/α} C_{ξξ}(t,t_0) + C_{ξξ}(t_0,t_0)^{2/α}.

If we set a(s,t) = ‖ξ_s − ξ_t‖, then of course product measurability of a(s,t) implies measurability of a(t,t_0) in t, for each fixed t_0. Conversely, if L(ξ) is separable and a(·,t_0) is measurable for each t_0∈T, then (ii) of Theorem 3.1.1 implies the existence of a measurable modification η, and we can apply Fubini's theorem to show that a(s,t) = ‖η_s − η_t‖ is product measurable. Hence condition (ii) in Theorem 3.1.1 may be written in the more symmetric form:

(ii)' The map ξ: T → L(ξ) has separable range and the function a(s,t) = ‖ξ_s − ξ_t‖ is B(T)×B(T)-measurable.
Finally, we note that if ξ is SαS and L(ξ) is separable, then ξ has an integral representation of the type

    ξ_t = ∫ f_t(u) dζ_u,

where {ζ_u} is an SαS process with independent increments, ‖ζ_u‖^α = F(u), and f_t(·) ∈ L_α(dF) for all t ([Kuelbs 1973, Theorem 4.2]). And conversely, if ξ has such a spectral representation, then L(ξ) is separable since L(ζ) is separable (Corollary 3.1.2 (iii)). In particular, every measurable SαS process has such a spectral representation.
2. Integrability of sample paths of SαS processes.

In this section we apply a result due to DeAcosta to obtain a necessary and sufficient condition for almost all sample paths of an SαS process to belong to L_p(T,A,ν), 1 < p < α (Theorem 3.2.4). We also show that a sample path integral of an SαS process is an SαS random variable (Lemma 3.2.3).

3.2.1 [DeAcosta 1975]. If μ is an SαS Borel probability measure on L_p(T,A,ν) with measurable seminorm w, then for every r < α,

    ∫ w(x)^r μ(dx) < ∞.
We begin by proving a lemma involving a p-th order process.

3.2.2 LEMMA. Let (T,A,ν) be a finite measure space, and let ξ = {ξ_t, t∈T} be a measurable p-th order process with 1 < p < ∞ and with ‖ξ_t‖ ≤ M < ∞ for all t∈T. For any element f ∈ L_q(T,A,ν), where 1/p + 1/q = 1, the sample path integral ∫_T f(t) ξ_t(ω) ν(dt) belongs to L(ξ).

Proof: From the measurability of ξ and Fubini's theorem, we get that ξ_·(ω) ∈ L_p(T,A,ν) a.s., since

    E ∫_T |ξ_t(ω)|^p ν(dt) = ∫_T ‖ξ_t‖^p ν(dt) ≤ M^p ν(T) < ∞.

Therefore ∫ f(t) ξ_t(ω) ν(dt) ∈ L_p(Ω), since

    E |∫ f(t) ξ_t(ω) ν(dt)|^p ≤ [∫ |f(t)|^q ν(dt)]^{p/q} E ∫ |ξ_t(ω)|^p ν(dt).
In particular, the random variable ∫ f(t) ξ_t(ω) ν(dt) defines a continuous linear functional on L(ξ)*, and it follows from Theorem 2.1.5 and the reflexivity of L(ξ) that there exists a unique η ∈ L(ξ) satisfying

    A_ζ(η) = ∫_T f(t) A_ζ(ξ_t) ν(dt)    for all ζ ∈ L(ξ).

Given any ζ' ∈ L_p(Ω), we apply Theorem 2.1.5 to obtain ζ ∈ L(ξ) such that A_ζ = A_{ζ'} on L(ξ). Then

    A_{ζ'}(η − ∫_T f(t) ξ_t(ω) ν(dt))
        = A_{ζ'}(η) − E[(ζ')^{p−1} ∫ f(t) ξ_t ν(dt)]
        = A_{ζ'}(η) − ∫ f(t) E[(ζ')^{p−1} ξ_t] ν(dt)
        = A_ζ(η) − ∫_T f(t) A_ζ(ξ_t) ν(dt)
        = 0.

Thus η = ∫_T f(t) ξ_t(ω) ν(dt) a.s., whence ∫_T f(t) ξ_t(ω) ν(dt) ∈ L(ξ). □
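The estimate driving Lemma 3.2.2 is Hölder's inequality. A minimal numerical check of its discrete (counting-measure) analogue, with illustrative numbers only:

```python
def holder_gap(f, x, p):
    # Discrete analogue of the bound in Lemma 3.2.2 with nu = counting measure:
    # |sum f_t x_t|^p <= (sum |f_t|^q)^(p/q) * sum |x_t|^p,  1/p + 1/q = 1.
    q = p / (p - 1.0)
    lhs = abs(sum(a * b for a, b in zip(f, x))) ** p
    rhs = sum(abs(a) ** q for a in f) ** (p / q) * sum(abs(b) ** p for b in x)
    return lhs, rhs

lhs, rhs = holder_gap([1.0, -2.0, 0.5], [0.3, 1.1, -0.7], 1.5)
print(lhs <= rhs)   # the inequality holds
```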
We next prove a stronger version of Lemma 3.2.2 for an SαS process by using a truncation suggested by L.A. Shepp and used in [Rajput 1972, Proposition 3.2], where a similar result is proved for Gaussian processes.

3.2.3 LEMMA. Let (T,A,ν) be a σ-finite measure space, and let ξ = {ξ_t, t∈T} be a measurable SαS process with ξ(·,ω) ∈ L_p(T,A,ν) a.s., where 1 < p < α. Then for every element f ∈ L_q(T,A,ν), where 1/p + 1/q = 1, the sample path integral ∫_T f(t) ξ_t(ω) ν(dt) belongs to L(ξ) and is thus an SαS random variable.

Proof: For each integer n ≥ 1 define a truncated process ξ^{(n)}, and assume for the moment that ν(T) < ∞. Then the sample path integral ∫_T f(t) ξ_t^{(n)}(ω) ν(dt) belongs to L(ξ) by Lemma 3.2.2 and, by the dominated convergence theorem,

    ∫_T f(t) ξ_t^{(n)}(ω) ν(dt) → ∫_T f(t) ξ_t(ω) ν(dt)    a.s.

as n → ∞. Because almost sure convergence implies convergence in L(ξ) for an SαS process ξ, we therefore have that ∫_T f(t) ξ_t(ω) ν(dt) belongs to L(ξ).

If ν is a σ-finite measure, let {T_m}_{m=1}^∞ be a monotone increasing sequence of elements of A such that ν(T_m) < ∞ for every m and ∪_{m=1}^∞ T_m = T. Then ∫_{T_m} f(t) ξ_t(ω) ν(dt), m = 1, 2, ..., is a sequence in L(ξ) that converges to ∫_T f(t) ξ_t(ω) ν(dt) by the dominated convergence theorem. □
3.2.4 THEOREM. Let (T,A,ν) be a σ-finite measure space, let ξ = {ξ_t, t∈T} be a measurable SαS process, and suppose that L_p(T,A,ν) is separable, where 1 < p < α. Then ∫_T |ξ_t|^p ν(dt) < ∞ a.s. if and only if ∫_T ‖ξ_t‖^p ν(dt) < ∞.

This result for α = 2 is contained in [Rajput 1972, Proposition 3.4]. Note that Proposition 2.1.2 provides an expression for E(|ξ_t|^p) in terms of the covariation of ξ on the diagonal.

Proof: Sufficiency is clear. For the necessity, define the map Φ: Ω → L_p(T,A,ν) by

    Φ(ω) = ξ(·,ω) if ξ(·,ω) ∈ L_p(T,A,ν),  and Φ(ω) = 0 otherwise.

Then Φ is a measurable map from (Ω,F) to (L_p, B(L_p)), since the process is measurable and L_p(T,A,ν) is separable. Thus Φ induces a measure μ on L_p(T,A,ν) by μ(B) = PΦ^{−1}(B) for all B ∈ B(L_p). If f ∈ L_q(T,A,ν), 1/p + 1/q = 1, and if f* denotes the element of L_p(T,A,ν)* corresponding to f ∈ L_q(T,A,ν), then f*(ξ(·,ω)) = ∫_T f(t) ξ_t(ω) ν(dt) defines an element of L(ξ) by Lemma 3.2.3. Therefore μ(f*)^{−1} is an SαS distribution on R, since for every Borel subset B of R,

    μ(f*)^{−1}(B) = PΦ^{−1}{x ∈ L_p(T,A,ν): f*(x) ∈ B} = P{ω∈Ω: f*(ξ(·,ω)) ∈ B}.

Thus μ is an SαS measure on L_p(T,A,ν), so that by result 3.2.1 of DeAcosta

    ∫_{L_p(T)} ‖x‖_p^p μ(dx) < ∞,

that is, ∫_T ‖ξ_t‖^p ν(dt) = E ∫_T |ξ_t|^p ν(dt) < ∞. □

If (T,A,ν) = ([a,b], B, Leb), then Theorem 3.2.4 holds for p = 1. The only alteration required to the proof is to take q = ∞ and use a result in [Doob 1953, p. 64] instead of Lemma 3.2.3.
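The condition ∫ ‖ξ_t‖^p ν(dt) < ∞ is checkable once E|ξ_t|^p is known. The sketch below uses a standard closed form for the absolute moments of an SαS variable (a known formula from the stable-law literature, not the thesis's Proposition 2.1.2), under the parametrization E exp(irX) = exp(−σ^α |r|^α):

```python
import math

def abs_moment(p, alpha, sigma=1.0):
    # E|X|^p for X S-alpha-S with scale sigma and 0 < p < alpha
    # (standard closed form; at alpha = 2 this is the N(0, 2 sigma^2) moment):
    #   E|X|^p = sigma^p 2^p Gamma((1+p)/2) Gamma(1 - p/alpha)
    #            / (Gamma(1 - p/2) sqrt(pi))
    return (sigma ** p * 2.0 ** p * math.gamma((1.0 + p) / 2.0)
            * math.gamma(1.0 - p / alpha)
            / (math.gamma(1.0 - p / 2.0) * math.sqrt(math.pi)))

print(abs_moment(1.0, 2.0))   # Gaussian check: equals 2/sqrt(pi)
print(abs_moment(0.5, 1.0))   # Cauchy check: equals sqrt(2)
```

Note how the factor Γ(1 − p/α) blows up as p ↑ α, matching the failure of α-th moments for stable laws.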
3. Absolute continuity of sample paths.

In this section we obtain sufficient conditions for the sample paths of a p-th order process to be absolutely continuous (Theorem 3.3.1) and show these conditions to be also necessary when the process is SαS (Theorem 3.3.3).

If B is a Banach space with norm ‖·‖ and T = [a,b] is an interval of R, then we write L_1[T,B] for the space of Borel measurable functions f: T → B such that ‖f(t)‖ ∈ L_1(T,Leb). We call f absolutely continuous if for every ε > 0 there exists a δ > 0 such that for every disjoint family {(s_k,t_k)}_{k=1}^n of subintervals of T, Σ_{k=1}^n (t_k − s_k) ≤ δ implies that Σ_{k=1}^n ‖f(t_k) − f(s_k)‖ ≤ ε. The following characterization of absolute continuity is given in [Brezis 1973, Appendice]: the function f: T → B is absolutely continuous if and only if for every t∈T it can be expressed in terms of a Bochner integral

    f(t) = f(a) + ∫_a^t ḟ(s) ds,

where ḟ ∈ L_1[T,B].
We now present sufficient conditions for a p-th order process to have absolutely continuous paths. The argument used is from [Cambanis 1975b] where second order processes are considered.

3.3.1 THEOREM. Let ξ = {ξ_t, t∈T} be a separable p-th order process on (Ω,F,P), where p > 1 and T = [a,b]. Then each of the following two equivalent conditions is sufficient for the sample paths of ξ to be absolutely continuous with probability one (and have a measurable p-th order process as derivative):

(i) The map T → L(ξ) defined by t ↦ ξ_t is absolutely continuous.

(ii) The function C_{ζξ}(t) = A_ζ(ξ_t) is absolutely continuous for all ζ∈L(ξ), and for all t∈T−T_0 with Leb(T_0) = 0 the derivative C'_{ζξ}(t) exists for all ζ∈L(ξ), where for each t∈T−T_0, ξ̇_t is the unique element of L(ξ) satisfying A_ζ(ξ̇_t) = C'_{ζξ}(t) for all ζ∈L(ξ).
Proof: The equivalence of (i) and (ii) is contained in [Brezis 1973, p. 145]. If t ↦ ξ_t is absolutely continuous, we have

    ξ_t = ξ_a + ∫_a^t ξ̇_s ds

for all t∈T, where ξ̇ ∈ L_1[T,L(ξ)]. By Theorem 3.1.1, ξ̇ has a measurable modification, say η. Observe that

    E ∫_a^b |η(t,ω)| dt ≤ ∫_a^b ‖η_t‖_{L_p(Ω)} dt = ∫_a^b ‖ξ̇_t‖_{L_p(Ω)} dt < ∞,

so η(·,ω) ∈ L_1(T,Leb) a.s., i.e., for every ω∈Ω−Ω_0 with P(Ω_0) = 0. Define X by

    X(t,ω) = ξ(a,ω) + ∫_a^t η(s,ω) ds  for t∈T, ω∈Ω−Ω_0;    X(t,ω) = 0  for t∈T, ω∈Ω_0,

and note that the sample paths of X are absolutely continuous.

Let ξ̇_n be a sequence of simple functions T → L(ξ) such that

    ∫_a^t ‖ξ̇_n(s) − ξ̇(s)‖_{L_p(Ω)} ds → 0

for all t∈T. Then

    E|ξ_t − X_t| = E|∫_a^t ξ̇_s ds − ∫_a^t η(s) ds|
                 = lim_n E|∫_a^t ξ̇_n(s) ds − ∫_a^t η(s) ds|
                 ≤ lim_n ∫_a^t E|ξ̇_n(s) − η(s)| ds
                 ≤ lim_n ∫_a^t ‖ξ̇_n(s) − η(s)‖_{L_p(Ω)} ds = 0.

Thus P{ξ_t = X_t} = 1 for all t∈T. Let S be a separating set for ξ with N∈F, P(N) = 0, such that if ω∈Ω−N and t∈T−S, then

    ξ(t,ω) = lim_{s→t, s∈S} ξ(s,ω).

Now P{ξ_t = X_t, ∀ t∈S} = 1. Thus there exists Ω_1∈F with P(Ω_1) = 0 such that ξ(t,ω) = X(t,ω) for all t∈S and ω∈Ω−Ω_1. Given ω∈Ω−(N∪Ω_1) and t∈T−S,

    ξ(t,ω) = lim_{s→t, s∈S} ξ(s,ω) = lim_{s→t, s∈S} X(s,ω) = X(t,ω),

since the paths of X are continuous. Thus, if ω∈Ω−(N∪Ω_1), then ξ(t,ω) = X(t,ω) for all t∈T, and hence the sample paths of ξ are absolutely continuous with probability one. □
In [Cambanis 1973] it is shown for a separable Gaussian process ξ that at every fixed t∈T the paths of ξ are continuous, or differentiable, with probability zero or one. Also, if ξ is measurable, then with probability one its paths have essentially the same points of differentiability and continuity. These same results follow for SαS processes (with no change in argument) by applying a zero-one law for stable measures from [Dudley and Kanter 1974]. We now state in detail one of these results which will be used later.
3.3.2 THEOREM. [Cambanis] Let ξ = {ξ_t: t∈T} be a separable SαS process, where T is any interval.

(i) At every fixed point t∈T the paths of ξ are differentiable with probability zero or one.

(ii) Let T_d be the set of points t in T where the paths of ξ are differentiable with probability one, and T_d(ω) be the set of points t in T where the path ξ(·,ω) is differentiable. If ξ is measurable, then with probability one Leb[T_d(ω) Δ T_d] = 0.
The next result shows that the conditions in Theorem 3.3.1 are necessary for an SαS process ξ. Once again, we use the proof of a similar result in [Cambanis 1975b] for Gaussian processes, certain passages being lifted in their entirety.

3.3.3 THEOREM. Let ξ = {ξ_t, t∈T} be a separable SαS process with T = [a,b] and α > 1. Then the following are equivalent:

(i) The paths of ξ are absolutely continuous with probability one.

(ii) The map T → L(ξ) defined by t ↦ ξ_t is absolutely continuous.

(iii) The function C_{ζξ}(t) = A_ζ(ξ_t) is absolutely continuous for all ζ∈L(ξ), and for all t∈T−T_0 with Leb(T_0) = 0 the derivative C'_{ζξ}(t) exists, where for each t∈T−T_0, ξ̇_t is the unique element of L(ξ) satisfying A_ζ(ξ̇_t) = C'_{ζξ}(t) for all ζ∈L(ξ).
Proof: (i) ⇒ (ii). Absolute continuity of a path implies differentiability a.e. [Leb], and therefore Leb[T − T_d(ω)] = 0 a.s. Since ξ has continuous paths with probability one, it is measurable. Thus by Theorem 3.3.2, Leb[T_d(ω) Δ T_d] = 0 a.s., and hence Leb(T − T_d) = 0. Take T_0 = T − T_d, and let Ω_0∈F with P(Ω_0) = 0 be such that ξ(·,ω) is absolutely continuous for all ω∈Ω−Ω_0. Define ζ by

    ζ(t,ω) = limsup_{n→∞} n[ξ(t + 1/n, ω) − ξ(t,ω)]  for t∈[a,b), ω∈Ω−Ω_0;
    ζ(t,ω) = 0  for t∈[a,b), ω∈Ω_0, and for t = b, ω∈Ω.

Then ζ is measurable and for all ω∈Ω−Ω_0 we have ζ(t,ω) = ξ'(t,ω) for t∈T_d(ω) − {b}, where ξ'(·,ω) denotes the path derivative of ξ(·,ω). Also, for all t∈T_d − {b}, ζ(t,ω) = ξ'(t,ω) a.s. Now define η by η(t,ω) = ζ(t,ω) for t∈T_d − {b} and η(t,ω) = 0 otherwise, and note that η is measurable and SαS. Also, for all ω∈Ω−Ω_0, η(·,ω) = ζ(·,ω) = ξ'(·,ω) a.e. [Leb] on T, and ξ'(·,ω) ∈ L_1(T,Leb) since ξ(·,ω) is absolutely continuous. It follows that η(·,ω) ∈ L_1(T,Leb), and hence ∫ ‖η_s‖_{L_1(Ω)} ds < ∞ by the remark following Theorem 3.2.4. In addition, for ω∈Ω−Ω_0, we have

    ξ(t,ω) = ξ(a,ω) + ∫_a^t η(s,ω) ds

for all t∈T. Let {(s_k,t_k)}_{k=1}^n be a family of disjoint subintervals of T. Then

    Σ_{k=1}^n ‖ξ_{t_k} − ξ_{s_k}‖_{L_1(Ω)} = Σ_{k=1}^n E|∫_{s_k}^{t_k} η(s,ω) ds| ≤ Σ_{k=1}^n ∫_{s_k}^{t_k} ‖η_s‖_{L_1(Ω)} ds.

Therefore the map t ↦ ξ_t is absolutely continuous by the absolute continuity of the indefinite integral, since ∫_T ‖η_s‖_{L_1(Ω)} ds < ∞.

(ii) ⇒ (i) and (ii) ⇔ (iii) follow from Theorem 3.3.1. □
Theorems 3.3.1 and 3.3.3 with appropriate modifications give conditions for paths to be absolutely continuous with derivatives in L_p(T,Leb). They can also be extended to give conditions for paths to have (n−1) continuous derivatives with the (n−1)-th derivative absolutely continuous with derivative in L_p(T,Leb). The following corollary generalizes a well known result for stationary Gaussian processes to the (nonstationary) SαS case.
3.3.4 COROLLARY. Let {ξ_λ, −∞ < λ < ∞} be an SαS process with independent increments and F(λ) = ‖ξ_λ‖^α a bounded function. Then a separable stochastic process {ζ_t, a ≤ t ≤ b} defined by

    ζ_t = ∫_{−∞}^∞ cos(tλ) dξ_λ

has absolutely continuous sample paths with probability one if and only if

    ∫_{−∞}^∞ |λ|^α dF(λ) < ∞.
Proof: If ζ has absolutely continuous sample paths, then for every g∈L_α(dF) and η = ∫_{−∞}^∞ g(λ) dξ_λ,

    C_{ηζ}(t) = ∫_{−∞}^∞ cos(tλ) (g(λ))^{α−1} dF(λ)

is absolutely continuous on [a,b] by Theorem 3.3.3. Thus for any positive function g_0 ∈ L_{α/(α−1)}(dF),

    ∫_{−∞}^∞ cos(tλ) g_0(λ) dF(λ)

is absolutely continuous on [a,b]. It is known that a function f(t) = ∫_{−∞}^∞ cos(tλ) dμ(λ), μ finite, is absolutely continuous if and only if ∫_{−∞}^∞ |λ| dμ(λ) < ∞. Therefore

    ∫_{−∞}^∞ |λ| g_0(λ) dF(λ) < ∞

for every positive function g_0 ∈ L_{α/(α−1)}(dF), and it follows that ∫_{−∞}^∞ |λ|^α dF(λ) < ∞.

For the converse, note that every η∈L(ζ) can be expressed as η = ∫_{−∞}^∞ g(λ) dξ_λ, where g∈L_α(dF). If λ ∈ L_α(dF), then

    ∫_{−∞}^∞ |λ| |g(λ)|^{α−1} dF(λ) < ∞,

and therefore

    C_{ηζ}(t) = ∫_{−∞}^∞ cos(tλ) (g(λ))^{α−1} dF(λ)

is absolutely continuous. It is easily seen that C'_{ηζ}(t) exists at every t∈(a,b) and that

    ζ̇_t = − ∫_{−∞}^∞ λ sin(tλ) dξ_λ

satisfies C'_{ηζ}(t) = A_η(ζ̇_t) for all η∈L(ζ). Thus

    ∫_a^b ‖ζ̇_t‖ dt = ∫_a^b ‖λ sin(tλ)‖_{L_α(dF)} dt < ∞,

and therefore the paths of ζ are absolutely continuous with probability one by Theorem 3.3.3. □
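The quantities in the corollary are easy to evaluate for a discrete spectral measure. In the sketch below, dF puts mass f_j at frequency λ_j (a hypothetical measure, not from the text); the sum Σ|λ_j|^α f_j is the corollary's condition, and ‖λ sin(tλ)‖_{L_α(dF)} is the scale of the derivative ζ̇_t appearing at the end of the proof.

```python
import math

def deriv_scale(t, alpha, lams, masses):
    # || lam sin(t lam) ||_{L_alpha(dF)} for dF = sum_j masses[j] delta_{lams[j]}.
    s = sum(abs(l * math.sin(t * l)) ** alpha * f for l, f in zip(lams, masses))
    return s ** (1.0 / alpha)

lams, masses, alpha = [1.0, 2.0, 4.0], [0.5, 0.3, 0.2], 1.5

# Corollary 3.3.4 condition: sum |lam_j|^alpha * f_j < infinity (trivially
# finite for a finite discrete measure, so in this toy case the paths are
# absolutely continuous with probability one).
moment = sum(abs(l) ** alpha * f for l, f in zip(lams, masses))
print(moment, deriv_scale(0.7, alpha, lams, masses))
```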
It should be remarked that an SαS process ξ = {ξ_t, a ≤ t ≤ b} with independent increments cannot have absolutely continuous sample paths (except in the trivial case where F(t) = ‖ξ_t‖^α is a constant function). For, the claim is obvious if F is not absolutely continuous with respect to Lebesgue measure. In the case of absolutely continuous F, given any ε > 0 and δ > 0 it is possible to choose a finite family of disjoint subintervals {(s_k,t_k)}_{k=1}^n such that Σ_{k=1}^n (t_k − s_k) ≤ δ, but

    Σ_{k=1}^n ‖ξ_{t_k} − ξ_{s_k}‖ = Σ_{k=1}^n |F(t_k) − F(s_k)|^{1/α} > ε.

To see this, let (a_1,b_1) be a subinterval of [a,b] such that b_1 − a_1 < δ and F(b_1) − F(a_1) > 0, and define

    h = [(F(b_1) − F(a_1)) / (2ε)]^{α/(α−1)}.

By the uniform continuity of F we can choose n so large that |t − s| ≤ (b_1 − a_1)/n implies |F(t) − F(s)| < h, for all s,t∈[a,b]. Let {(s_k,t_k)}_{k=1}^n be a partition of (a_1,b_1) into n subintervals of equal length. Then Σ_{k=1}^n (t_k − s_k) = b_1 − a_1 < δ, but

    Σ_{k=1}^n |F(t_k) − F(s_k)|^{1/α} ≥ Σ_{k=1}^n [F(t_k) − F(s_k)] / h^{(α−1)/α} = [F(b_1) − F(a_1)] / h^{(α−1)/α} = 2ε > ε.
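The closing computation can be checked numerically. Taking F(t) = t (an absolutely continuous, hypothetical choice) and partitioning a fixed interval of length b_1 − a_1 into n equal pieces keeps Σ(t_k − s_k) constant while Σ|F(t_k) − F(s_k)|^{1/α} = n((b_1 − a_1)/n)^{1/α} grows like n^{1−1/α}, exactly as the remark requires.

```python
def variation_sum(alpha, length, n):
    # sum_{k=1}^n |F(t_k) - F(s_k)|^(1/alpha) for F(t) = t and n equal pieces
    # of an interval of the given length: n * (length/n)^(1/alpha).
    return n * (length / n) ** (1.0 / alpha)

alpha, length = 1.5, 0.01
for n in (10, 100, 1000):
    print(n, variation_sum(alpha, length, n))  # unbounded as n grows
```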
BIBLIOGRAPHY

Bretagnolle, J., Dacunha-Castelle, D., and Krivine, J.L. (1966). Lois stables et espaces L^p. Ann. Inst. Henri Poincaré, Sér. B, 2, 231-259.

Brezis, H. (1973). Opérateurs Maximaux Monotones. North-Holland, Amsterdam.

Bulatović, J. and Ašić, M. (1976). The separability of the Hilbert space generated by a stochastic process. (To appear in the Journal of Multivariate Analysis.)

Cambanis, S. (1973). On some continuity and differentiability properties of paths of Gaussian processes. Journal of Multivariate Analysis, 3, 420-434.

Cambanis, S. (1975a). The measurability of a stochastic process of second order and its linear space. Proceedings of the American Mathematical Society, 47, 467-475.

Cambanis, S. (1975b). On the path absolute continuity of second order processes. Annals of Probability, 3, 1050-1054.

Cambanis, S. and Masry, E. (1971). On the representation of weakly continuous stochastic processes. Information Sciences, 3, 277-290.

Cambanis, S. and Rajput, B.S. (1973). Some zero-one laws for Gaussian processes. Annals of Probability, 1, 304-312.

Cohn, D.L. (1972). Measurable choice of limit points and the existence of separable and measurable processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 22, 161-165.

Cudia, D.F. (1964). The geometry of Banach spaces. Smoothness. Transactions of the American Mathematical Society, 110, 284-314.

DeAcosta, A. (1975). Stable measures and seminorms. Annals of Probability, 3, 865-875.

Doob, J.L. (1953). Stochastic Processes. Wiley, New York.

Dudley, R.M. and Kanter, M. (1974). Zero-one laws for stable measures. Proceedings of the American Mathematical Society, 45, 245-252.

Fernique, X. (1973). Une démonstration simple du théorème de R.M. Dudley et Marek Kanter sur les lois zéro-un pour les mesures stables. Lecture Notes in Mathematics, Vol. 381, 78-79. Springer-Verlag, Berlin.

Hewitt, E. and Stromberg, K. (1965). Real and Abstract Analysis. Springer-Verlag, New York.

Hille, E. (1972). Methods in Classical and Functional Analysis. Addison-Wesley, Reading, Massachusetts.

Hoffmann-Jørgensen, J. (1973). Existence of measurable modifications of stochastic processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 25, 205-207.

Hosoya, Y. (1976). Stable processes and their properties. Technical Report, Department of Economics, Tohoku University. (Abstract in the IMS Bulletin, July 1976.)

Huang, S.T. (1975). Nonlinear Analysis of Spherically Invariant Processes and its Ramifications. Institute of Statistics Mimeo Series No. 1028, University of North Carolina at Chapel Hill.

Itô, K. and Nisio, M. (1968). On the convergence of sums of independent Banach space valued random variables. Osaka Journal of Mathematics, 5, 35-48.

Kanter, M. (1972a). Linear sample spaces and stable processes. Journal of Functional Analysis, 9, 441-456.

Kanter, M. (1972b). On the spectral representation for symmetric stable random variables. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 23, 1-6.

Kanter, M. (1973). The L^p norm of sums of translates of a function. Transactions of the American Mathematical Society, 179, 35-47.

Kanter, M. and Steiger, W.L. (1974). Regression and autoregression with infinite variance. Advances in Applied Probability, 6, 768-783.

Kuelbs, J. (1973). A representation theorem for symmetric stable processes and stable measures on H. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 26, 259-271.

Kumar, A. and Mandrekar, V. (1972). Stable probability measures on Banach spaces. Studia Math., 42, 133-144.

Lukacs, E. and Laha, R.G. (1964). Applications of Characteristic Functions. Hafner, New York.

Parthasarathy, K.R. and Schmidt, K. (1972). Positive definite kernels, continuous tensor products, and central limit theorems of probability theory. Lecture Notes in Mathematics. Springer-Verlag, Berlin.

Paulauskas, V.J. (1976). Some remarks on multivariate stable distributions. Journal of Multivariate Analysis, 6, 356-368.

Press, S.J. (1972). Multivariate stable distributions. Journal of Multivariate Analysis, 2, 444-462.

Rajput, B.S. (1972). Gaussian measures on L_p spaces, 1 ≤ p < ∞. Journal of Multivariate Analysis, 2, 382-403.

Rajput, B.S. (1975). Stable probability measures and stable nonnegative definite functions on TVS. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 36, 1-7.

Rudin, W. (1973). Functional Analysis. McGraw-Hill, New York.

Schilder, M. (1970). Some structure theorems for the symmetric stable laws. Annals of Mathematical Statistics, 41, 412-421.

Shachtman, R.H. (1970). The Pettis-Stieltjes (stochastic) integral. Revue Roumaine de Math. Pures et Appliquées, 15, 1039-1060.

Singer, I. (1970). Best Approximation in Normed Linear Spaces by Elements of Linear Subspaces. Springer-Verlag, New York.

Tortrat, A. (1975). Pseudo-martingales et lois stables. C.R. Académie des Sciences, Paris, Série A, 281, 463-465.

Wolfe, S.J. (1973). On the local behavior of characteristic functions. Annals of Probability, 1, 862-866.