Cambanis, S. and Miller, G., "Linear Problems in P-th Order and Stable Processes."

LINEAR PROBLEMS IN P-TH ORDER AND STABLE PROCESSES*

Stamatis Cambanis† and Grady Miller‡
ABSTRACT

This work extends to processes with finite moments of order p, 1 < p < 2, and to symmetric α-stable processes, 1 < α < 2, some of the basic linear theory known for processes with finite second moments (p = 2) and for Gaussian processes (α = 2). Here the "covariation" plays a role analogous to the covariance. Specifically, stochastic integrals of two types are introduced and studied for p-th order processes and in particular for symmetric stable processes. Regression estimates and linear estimates on certain symmetric stable processes are evaluated, including regression and linear filtering of signal in noise. Also, for certain symmetric stable inputs, the identification of a linear system from the input covariation and the input-output cross-covariation is considered, and the way the distribution of the output depends on the linear system is studied.
*This research was supported by the Air Force Office of Scientific Research under grant AFOSR-75-2796.
†Department of Statistics, University of North Carolina, Chapel Hill, North Carolina 27514.
‡U. S. Army Materiel Systems Analysis Activity, Aberdeen Proving Ground, Maryland 21005.
1. INTRODUCTION
The linear theory of Gaussian processes, indeed of random processes with finite second moments (second order processes), has been fully developed. This includes linear estimation, and in particular prediction and filtering, of second order processes, and the analysis and identification of linear systems with Gaussian inputs (or second order random inputs).

It is very desirable to have analogs of these results for larger classes of random processes which include the Gaussian processes or the second order processes as special elements. The availability of such results would make it possible to handle certain random processes which are nearly, but not exactly, Gaussian or second order. In this paper an attempt is made to develop a linear theory for the class of stable processes, to which Gaussian processes belong, and for the class of p-th order processes, i.e., processes with finite p-th order moments.
The class of stable processes is very important as stable distributions form a natural generalization of the normal distribution: they satisfy the important stability property (linear combinations of jointly stable variables are stable), they have heavier tails than the normal distribution and thus are more appropriate for models with outliers, and they arise as limit laws of normed sums of independent identically distributed random variables. For simplicity only stable processes with symmetric distributions will be considered, and their characteristic exponent will be denoted by α, 0 < α ≤ 2. Gaussian processes are stable processes with α = 2.
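The stability property and the standard SαS law are easy to illustrate numerically. The sketch below is not from this paper: it uses the classical Chambers-Mallows-Stuck representation to draw standard SαS variates, whose ch.f. is exp{-|r|^α}, and checks the empirical ch.f. of a single variate and of a sum of two independent copies (names and tolerances are illustrative):

```python
import numpy as np

def sas_sample(alpha, size, rng):
    """Standard symmetric alpha-stable variates with ch.f.
    E exp(irX) = exp(-|r|^alpha), via the classical
    Chambers-Mallows-Stuck representation (skewness beta = 0)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    W = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
alpha = 1.5
X = sas_sample(alpha, 200_000, rng)

# Empirical ch.f. at r = 1 should be close to exp(-1):
print(np.mean(np.cos(X)))

# Stability: X1 + X2 with X1, X2 i.i.d. is again SaS with scale 2^{1/alpha},
# so its ch.f. at r = 1 is close to exp(-2):
Y = X[:100_000] + X[100_000:]
print(np.mean(np.cos(Y)))
```

The heavy tails show up immediately in such samples: for α < 2 the empirical variance does not stabilize as the sample grows, which is exactly why the covariance must be replaced by another quantity in what follows.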
Even though stable processes can be thought of as a one step generalization of Gaussian processes, they constitute too rich a class of processes. For instance, while all finite dimensional distributions of a (general) symmetric Gaussian process are fully described by means of one parameter, its covariance function, in order to describe all finite dimensional distributions of a (general) symmetric stable process with 0 < α < 2 one needs an infinite number of parameters (the spectral measures of all orders). It would therefore be very difficult to study nonlinear problems for general stable processes. Linear problems should be easier, but in fact they also turn out to be quite intractable in general. The best way to obtain explicit results seems to be to restrict attention to special classes of stable processes with sufficiently simple parametric descriptions, such as sub-Gaussian processes, processes with independent stable increments, and moving averages or Fourier transforms of processes with independent stable increments.
A basic difficulty in developing the linear theory of stable processes is due to the fact that while the linear space of a Gaussian process is a Hilbert space, the linear space of a stable process is a Banach space when 1 ≤ α < 2 and only a metric space when 0 < α < 1. Here we focus our attention on the case 1 < α < 2 (finite mean, infinite variance) where it is shown that in many cases the "covariation" plays a role analogous to the role played by the covariance in the Gaussian case α = 2. While the covariance of two random variables is linear in both arguments, the covariation of two random variables is linear only in its first argument, and this hinders substantially its usefulness. However, the covariation has a certain linearity property in its second argument for sums of independent stable variables and this turns out to be very useful.

Most of the general results developed are valid for p-th order processes with 1 < p < 2 (if p ≥ 2 the second order process theory is applicable) as well as for symmetric stable processes with 1 < α < 2; but more specific results are given only for stable processes (the crucial difference being the unavailability of a linearity-like property of the covariation in its second argument in the general case).
The basic structure is developed in Sections 3 and 4. An appropriate Banach space of functions is shown to be isometric, via a stochastic integral, to (a subspace of) the linear space of a p-th order or a symmetric α-stable process, p > 1, α > 1 (Section 3). When applied to a symmetric stable process with independent increments, our results yield the stochastic integral defined in [13], and when applied to an "absolutely continuous" process (Section 4) they yield a stochastic integral of a more particular form which can be regarded also as a sample path integral. A Fubini-type result which allows the interchange of stochastic and usual integration is also established (Section 4). These results are used to evaluate regression estimates and linear estimates for certain classes of stable processes, including regression and linear filtering of signal in noise (Section 5), and also to identify and analyse linear systems with certain specific stable inputs (Section 6).
2. DEFINITIONS AND PRELIMINARY RESULTS

Let ξ = {ξ_t, t ∈ T} be a stochastic process with underlying probability space (Ω, F, P) such that ξ_t ∈ L_p(Ω) for all t ∈ T, where 1 < p < ∞, and let Q(ξ) be the space of all finite linear combinations of elements of {ξ_t, t ∈ T}. Then we call ξ a p-th order process and define a norm on Q(ξ) by

	||η||_p = (E|η|^p)^{1/p} .

The linear space L(ξ) of the process ξ is the completion of Q(ξ) with respect to this norm, i.e., in L_p(Ω). Throughout this paper when raising a number u to a power b > 0, we shall use the convention (u)^b = |u|^b sign(u).
If M is a closed subspace of L_p(Ω) (such as L(ξ)), then for each fixed η ∈ M the expression

	<ζ,η>_p = E[ζ (η)^{p-1}] ,  ζ ∈ M ,

defines a continuous linear functional on M, which by Hölder's inequality has norm ||η||_p^{p-1}. When p = 2, i.e., for second order processes, <ζ,η>_2 equals the usual inner product E[ζη].
An important subclass of p-th order processes is the family of symmetric α-stable (SαS) stochastic processes with 1 < α ≤ 2. When α = 2 these are the familiar zero mean Gaussian processes. For 1 < α < 2, the SαS processes are defined by consistent finite dimensional distributions with characteristic functions (ch.f.'s) of the form

	φ(y) = exp{-∫_S |<x,y>|^α dΓ(x)} ,  y ∈ R^n,

where Γ is a uniquely determined [7, p. 36] finite symmetric measure on the Borel subsets of the unit sphere S = {x ∈ R^n: <x,x> = 1} [9, p. 264]. Following [12, p. 357], Γ is called the spectral measure of the distribution. In particular, for each SαS random variable ξ there exists a number |ξ|_α ≥ 0 such that E(e^{irξ}) = exp{-|ξ|_α^α |r|^α} for all r ∈ R.
It is well known that a SαS process is a p-th order process for any p satisfying 1 < p < α. For a linear space of SαS random variables, the function ξ → |ξ|_α defines a norm [13, p. 413]. An application of Theorem 2 of [15, p. 862] shows that this norm is related to the usual L_p(Ω) norm by

	||ξ||_p = C(p,α) |ξ|_α ,

where C(p,α) is a constant depending only on α and p, 1 ≤ p < α ≤ 2.
So the linear space L(ξ) of a SαS process ξ is the completion of Q(ξ) with respect to either norm, and it can be seen from the form of the multivariate ch.f. that L(ξ) is a family of jointly SαS random variables.

In the sequel when a Banach space of random variables with finite p-th moments is considered, its norm will be the usual ||·||_p norm, and when a Banach space of SαS random variables is considered, it will be normed either by ||·||_p, 1 < p < α, or by |·|_α. Because |ξ|_α appears in the expression for the ch.f. of a SαS random variable ξ, it will often be the more natural choice for a norm in the SαS case.
From the form of the ch.f. of two jointly SαS random variables ξ and η it follows that

	E exp{i(r_1 ξ + r_2 η)} = exp{-∫_S |r_1 x_1 + r_2 x_2|^α dΓ(x)} ,

where Γ is the spectral measure on S = {x ∈ R^2: x_1^2 + x_2^2 = 1}. For each such pair of variables, we define

	[ξ,η]_α = ∫_S x_1 (x_2)^{α-1} dΓ(x) .

Notice that [ξ,η]_α is the derivative with respect to r of (1/α) ∫_S |r x_1 + x_2|^α dΓ(x) evaluated at r = 0. If ξ and η are jointly SαS random variables, then [6, Theorem 1.4] shows that

	E(ξ|η) = ([ξ,η]_α / |η|_α^α) η .

Linearity of the map ξ → [ξ,η]_α therefore follows from the linearity of conditional expectation, and Hölder's inequality shows the map to be continuous with norm |η|_α^{α-1}. When α = 2, i.e., for jointly Gaussian random variables ξ and η with zero mean, it follows that [ξ,η]_2 equals E(ξη), the covariance of ξ and η.
The covariation of ξ with η will be defined by <ξ,η>_p in the p-th order case and by [ξ,η]_α in the SαS case. The following property of the covariation is analogous to the Riesz representation for continuous linear functionals on a Hilbert space.
PROPOSITION 2.1: Let M be a Banach space of SαS random variables (of random variables with finite p-th moments). If Λ is a continuous linear functional on M, then there exists a unique η ∈ M such that Λ = [·,η]_α (Λ = <·,η>_p).

PROOF: A proof is sketched for the SαS case; the p-th order case is identical. Consider N = {ζ ∈ M: Λ(ζ) = 0}, a subspace of M. If N = M, take η = 0. Otherwise, choose ξ_1 ∈ M - N and let ξ_2 be the best approximation to ξ_1 in N (see Theorem 1.11 and Corollary 3.5 of [14]). Define ξ_3 = (ξ_1 - ξ_2)/|ξ_1 - ξ_2|_α, η = (Λ(ξ_3))^{1/(α-1)} ξ_3, and ξ_0 = η/|Λ(ξ_3)|^{α/(α-1)} = ξ_3/Λ(ξ_3). For every ζ ∈ M write ζ = (ζ - Λ(ζ)ξ_0) + Λ(ζ)ξ_0. Note that ζ - Λ(ζ)ξ_0 ∈ N and consequently that [ζ - Λ(ζ)ξ_0, η]_α = 0, so that [ζ,η]_α = Λ(ζ)[ξ_0,η]_α = Λ(ζ).
To see uniqueness, suppose that η* also satisfies the required conditions, and let Γ be the spectral measure for (η,η*). Then [η,η*]_α = Λ(η) = |η|_α^α and [η*,η]_α = Λ(η*) = |η*|_α^α, and Hölder's inequality applied to these identities gives |η|_α = |η*|_α. Hence

	∫_S x_1 (x_2)^{α-1} dΓ(x) = [η,η*]_α = |η|_α |η*|_α^{α-1}
	                          = [∫_S |x_1|^α dΓ(x)]^{1/α} [∫_S |x_2|^α dΓ(x)]^{(α-1)/α} ,

i.e., equality holds in Hölder's inequality. It follows that x_1 = c x_2 a.e. [Γ] for some c > 0, and thus η = c η*. Hence |η|_α = |η*|_α gives c = 1 and thus η = η*. □
In the SαS case the covariation [ξ,η]_α possesses a certain linearity property with respect to its second argument when η is a linear combination of independent SαS random variables. Specifically, if ξ_1 and ξ_2 are independent and if η = a_1 ξ_1 + a_2 ξ_2, where ξ, ξ_1 and ξ_2 are jointly SαS, then

	[ξ, a_1 ξ_1 + a_2 ξ_2]_α = (a_1)^{α-1} [ξ,ξ_1]_α + (a_2)^{α-1} [ξ,ξ_2]_α .   (2.1)

This property is an immediate result of the definition of covariation and [10, Theorem 1.2.1] (see also [11, Theorem 2.1]) which states that, if Γ is the spectral measure for (ξ,ξ_1,ξ_2), then Γ{x ∈ R^3: x_2 x_3 ≠ 0} = 0. It is likewise immediate that under the same conditions

	[ξ_1, ξ_2]_α = 0 ,   (2.2)

while the converse is not true in general, i.e., if two jointly SαS r.v.'s have covariation zero they are not necessarily independent.
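For variables built from independent standard SαS components, (2.1) and (2.2) make the covariation explicitly computable: if ξ = Σ_k a_k Z_k and η = Σ_k b_k Z_k with Z_1, Z_2, ... independent and |Z_k|_α = 1, then [ξ,η]_α = Σ_k a_k (b_k)^{α-1} and |η|_α^α = Σ_k |b_k|^α. A minimal sketch of this computation (helper names are illustrative, not the paper's):

```python
import numpy as np

def spow(u, b):
    """Signed power (u)^b = |u|^b sign(u), the paper's convention."""
    return np.abs(u) ** b * np.sign(u)

def covariation(a, b, alpha):
    """[xi, eta]_alpha for xi = sum a_k Z_k, eta = sum b_k Z_k, with
    Z_k independent standard SaS; follows from (2.1) and (2.2)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(a * spow(b, alpha - 1))

def stable_norm(b, alpha):
    """|eta|_alpha^alpha = sum |b_k|^alpha for eta = sum b_k Z_k."""
    return np.sum(np.abs(np.asarray(b, float)) ** alpha)

alpha = 1.5
a, b = [1.0, -2.0], [3.0, 1.0]
print(covariation(a, b, alpha))   # = sqrt(3) - 2, linear in a
print(covariation(b, a, alpha))   # = 3 - sqrt(2): NOT symmetric
print(covariation(b, b, alpha), stable_norm(b, alpha))  # equal
```

The asymmetry of the two calls makes concrete the remark above that the covariation is linear only in its first argument.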
An interesting subclass of SαS processes is the family of sub-Gaussian processes [1, p. 251] which have an especially simple parametric description. Specifically, if R: T×T → R^1 is of nonnegative definite type, 1 < α ≤ 2, and t_1,...,t_N ∈ T, then

	φ(r_1,...,r_N) = exp{-2^{-α/2} ( Σ_{m,n=1}^N r_m r_n R(t_m,t_n) )^{α/2}}

is the ch.f. of a multivariate stable distribution [12, p. 359]. The family of such ch.f.'s generated by varying N and t_1,...,t_N is clearly consistent, and a stochastic process {ξ_t, t ∈ T} with finite-dimensional distributions so defined is called α-sub-Gaussian with parameter R or, more briefly, α-SG(R). The zero mean Gaussian case 2-SG(R) will be denoted simply G(R). Following are some properties of α-sub-Gaussian processes for later use.
PROPOSITION 2.2. If ξ = {ξ_t, t ∈ T} is α-SG(R), then L(ξ) is an α-sub-Gaussian family of random variables, i.e., for all N and X_1,...,X_N ∈ L(ξ) the vector (X_1,...,X_N) is α-SG.

PROOF. Given any n, any t_1,...,t_n in T, and real numbers r_1,...,r_n, we have

	E exp{i Σ_{k=1}^n r_k ξ_{t_k}} = exp{-2^{-α/2} [E( Σ_{k=1}^n r_k η_{t_k} )^2]^{α/2}} ,

where η = {η_t, t ∈ T} is a G(R) process. From the continuity of norms it follows that L(ξ) is α-SG. □
COROLLARY 2.3. In the notation of Proposition 2.2, let X_1 and X_2 be linear operations on ξ, say X_i = Q_i(ξ), and let η = {η_t, t ∈ T} be G(R). If Y_i = Q_i(η), i = 1,2, then

	[X_1,X_2]_α = 2^{-α/2} E(Y_1 Y_2) [E(Y_2)^2]^{(α-2)/2}

and, by symmetry,

	[X_2,X_1]_α = 2^{-α/2} E(Y_1 Y_2) [E(Y_1)^2]^{(α-2)/2} .

In particular, [X_1,X_2]_α = 0 if and only if [X_2,X_1]_α = 0. Moreover, if [X_1,X_1]_α = [X_2,X_2]_α, then [X_1,X_2]_α = [X_2,X_1]_α.

PROOF: The first statement follows by evaluating at r = 0 the derivative with respect to r of

	(1/α) |r X_1 + X_2|_α^α = (1/α) 2^{-α/2} [E(r Y_1 + Y_2)^2]^{α/2} .

The rest is clear from

	[X_1,X_1]_α = |X_1|_α^α = 2^{-α/2} [E(Y_1)^2]^{α/2} . □
In fact, sub-Gaussian distributions are (variance) mixtures of Gaussian distributions. Specifically, if ξ = {ξ_t, t ∈ T} is α-SG(R) and η = {η_t, t ∈ T} is G(R), then ξ has the same distribution as A^{1/2} η = {A^{1/2} η_t, t ∈ T}, where the random variable A is independent of η and has Laplace transform Ψ(λ) = exp{-λ^{α/2}} (i.e., A is a positive stable random variable of index α/2), as is seen from

	E exp{i Σ_{n=1}^N r_n A^{1/2} η_{t_n}} = Ψ[2^{-1} Σ_{m,n=1}^N r_m r_n R(t_m,t_n)]
	                                     = exp{-2^{-α/2} [ Σ_{m,n=1}^N r_m r_n R(t_m,t_n) ]^{α/2}} .
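The mixture representation can be checked by simulation. The following sketch is not from the paper: it draws the mixing variable A of index α/2 using the standard Chambers-Mallows-Stuck recipe for totally skewed stable laws, under the assumption (verified numerically below) that this normalization gives E exp(-λA) = exp(-λ^{α/2}), and then checks the sub-Gaussian ch.f. exp{-2^{-α/2}|r|^α} of ξ = A^{1/2}η with R(t,t) = 1:

```python
import numpy as np

def positive_stable(gamma, size, rng):
    """A > 0 with Laplace transform E exp(-lam*A) = exp(-lam**gamma),
    0 < gamma < 1 (Chambers-Mallows-Stuck, total skewness beta = 1,
    scaled so that the Laplace exponent is exactly lam**gamma)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    b = np.pi / 2  # = arctan(tan(pi*gamma/2))/gamma when beta = 1
    return (np.sin(gamma * (V + b)) / np.cos(V) ** (1 / gamma)
            * (np.cos(V - gamma * (V + b)) / W) ** ((1 - gamma) / gamma))

rng = np.random.default_rng(1)
alpha = 1.5
A = positive_stable(alpha / 2, 200_000, rng)

# Laplace transform at lam = 1: E exp(-A) should be close to exp(-1)
print(np.mean(np.exp(-A)))

# xi = sqrt(A)*eta, eta ~ N(0,1), is SaS with ch.f. exp{-2^{-a/2}|r|^a}:
eta = rng.standard_normal(A.size)
xi = np.sqrt(A) * eta
print(np.mean(np.cos(xi)))   # compare with exp(-2**(-alpha/2))
```

This is also the standard practical route for simulating sub-Gaussian SαS processes: one Gaussian path plus a single positive stable multiplier.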
3. THE INTEGRAL ∫_T f(t) dξ_t

A stochastic integral ∫_T f(t) dξ_t is defined for appropriate (nonrandom) "functions" f ∈ Λ_α(ξ), and the function space Λ_α(ξ) is examined when ξ satisfies certain "smoothness" conditions (Proposition 3.1), when ξ is α-sub-Gaussian (Proposition 3.2), and when ξ is SαS with independent increments. In the latter case a separating family of continuous linear functionals on L(ξ) is also obtained (Corollary 3.4).

Let ξ = {ξ_t, t ∈ T} be a p-th order or SαS process, where T is taken for simplicity to be an interval. The stochastic integral is defined in a similar fashion for both the p-th order and the SαS cases, but we shall use the notation for the SαS case in our development.
The following assumptions are used to define the integral ∫_T f dξ; they are quite natural and they reduce to the standard assumptions used to define a Lebesgue-Stieltjes integral when ξ is nonrandom. Assume that ξ possesses weak right limits, i.e., that lim_{s↓t} [ξ_s, ζ]_α exists for every ζ ∈ L(ξ) and every t ∈ T, and denote by ξ_{t+0} the weak right limit of ξ at t, which exists in L(ξ) by Proposition 2.1. Assume in addition that [ξ_t, η]_α is of bounded variation on T whenever η is of the form η = Σ_{k=1}^n a_k(ξ_{t_k+0} - ξ_{t_{k-1}+0}), where n ≥ 1, a_k ∈ R, and t_0 < t_1 < ... < t_n all in T. This latter assumption will be used to define a norm on the function space in terms of a Lebesgue-Stieltjes integral. The use of weak right limits ξ_{t+0} will be compatible with our definition of a measure from a function of bounded variation.
Let S be the linear space of all step functions of the form f(t) = Σ_{k=1}^n a_k χ_{(t_{k-1},t_k]}(t), and for each such f define ∫_T f dξ to be Σ_{k=1}^n a_k(ξ_{t_k+0} - ξ_{t_{k-1}+0}). We introduce a norm on S by

	||f||_{Λ_α(ξ)} = | ∫_T f dξ |_α .

Let Λ_α(ξ) be the completion of S with respect to this norm. Every element f ∈ Λ_α(ξ) can be represented as f = {f_n}, a Cauchy sequence in S. It follows that {∫_T f_n dξ} is a Cauchy sequence in L(ξ), and we shall denote its limit by ∫_T f dξ, which is easily seen to depend only on f and not on the specific Cauchy sequence {f_n}. Then the map Λ_α(ξ) → L(ξ) defined by f → ∫_T f dξ is an isometry from Λ_α(ξ) onto a closed subspace of L(ξ). If we assume that ξ_{t_1} = 0 for some t_1 ∈ T and that the process ξ is weakly continuous from the right, then this isometry will be onto L(ξ).
If the process ξ is of weak bounded variation, i.e., if [ξ_t, ζ]_α is of bounded variation on T for every ζ ∈ L(ξ), then it is clear that ξ has weak right limits and consequently that the integral ∫_T f dξ is defined. Under this stronger (smoothness) condition on ξ, continuous functions can be regarded as members of Λ_α(ξ) in a natural way.

PROPOSITION 3.1: If ξ = {ξ_t, a ≤ t ≤ b} is a SαS (or p-th order) process of weak bounded variation, then all continuous functions on [a,b] belong to Λ_α(ξ).
PROOF: If f is a function on T and π is a partition of [a,b] defined by a = t_0 < t_1 < ... < t_m = b, let f_π(t) = Σ_{k=1}^m f(t'_k) χ_{(t_{k-1},t_k]}(t), where t'_k is an arbitrary point in (t_{k-1},t_k], and δ(π) = max_{1≤k≤m} (t_k - t_{k-1}). If f is a continuous function and {π_n} is a sequence of partitions of [a,b] with δ(π_n) → 0, then f_{π_n} ∈ S for every n and the sequence {∫_T f_{π_n} dξ}_{n=1}^∞ converges weakly to some ζ ∈ L(ξ). Define ∫_T f(t) dξ_t = ζ, and note that this definition does not depend on the particular partitions of [a,b] chosen. Thus we may regard the continuous function f as an element of Λ_α(ξ). □

If additional conditions are placed on ξ, it can be shown that other classes of functions belong to Λ_α(ξ). For instance, if ξ is also assumed to be weakly continuous (i.e., if [ξ_t, ζ]_α is a continuous function on [a,b] for all ζ ∈ L(ξ)), then Λ_α(ξ) contains all functions of bounded variation on [a,b].
For an α-SG(R) process ξ, it is clear from Corollary 2.3 that the function [ξ_t, ζ]_α, ζ ∈ L(ξ), has right limits or is of bounded variation if and only if the same properties hold for E(η_t η'), where η is a G(R) process and η' ∈ L(η) corresponds to ζ ∈ L(ξ). Thus the integral ∫_T f(t) dξ_t is defined if and only if all functions in the reproducing kernel Hilbert space of R (RKHS(R)) have right limits and all functions of the form R(·, u+0) - R(·, v+0), u,v ∈ T, have bounded variation.

PROPOSITION 3.2: Let ξ = {ξ_t, t ∈ T} be α-SG(R), where R is such that the stochastic integral ∫_T f(t) dξ_t is defined. Then Λ_α(ξ) = Λ_2(η), where η = {η_t, t ∈ T} is a G(R) process, and for every f ∈ Λ_α(ξ),

	||f||_{Λ_α(ξ)} = 2^{-1/2} ||f||_{Λ_2(η)} .

PROOF: The result is immediate from Corollary 2.3, since the step functions f are dense in both function spaces and

	| ∫_T f dξ |_α^α = 2^{-α/2} [E( ∫_T f dη )^2]^{α/2} . □

If the α-SG(R) process ξ is weakly continuous from the right (or equivalently, if all functions in RKHS(R) are right continuous), then Proposition 3.2 shows that Λ_α(ξ) coincides with the space Λ_2(R) defined in [5]. When R is the covariance of a (second order) process with orthogonal increments, then the conditions under which the stochastic integral and the space Λ_α(ξ) have been defined are satisfied and Λ_α(ξ) = L_2(dF), where F is a nondecreasing function such that for all t ≤ s,

	F(s) - F(t) = R(s,s) - 2R(t,s) + R(t,t) .
For the remainder of this section we will consider ξ to be a SαS process such that, given points t_1 < t_2 < ... < t_n in T, the random variables ξ_{t_1}, ξ_{t_2}-ξ_{t_1}, ..., ξ_{t_n}-ξ_{t_{n-1}} are independent. Under this assumption of independent increments, it can be shown that when T is a finite interval the conditions for defining ∫_T f(t) dξ_t are satisfied [10, pp. 58-59], and we obtain the same stochastic integral as in [13, p. 146]. To make clear this latter point, note that the process {ξ_{t+0}, t ∈ T} also has independent increments, and let F(t) = |ξ_{t+0}|_α^α. By [13, p. 414] the nondecreasing function F satisfies

	F(t_2) - F(t_1) = |ξ_{t_2+0} - ξ_{t_1+0}|_α^α
for t_1 < t_2. Let f and g be two step functions on T:

	f(t) = Σ_{j=1}^n f_j χ_{(t_{j-1},t_j]}(t) ,  g(t) = Σ_{j=1}^n g_j χ_{(t_{j-1},t_j]}(t) .

Write X_j for ξ_{t_j+0} - ξ_{t_{j-1}+0}, 1 ≤ j ≤ n, and recall properties (2.1) and (2.2). Then

	[ ∫_T f dξ, ∫_T g dξ ]_α = Σ_{j=1}^n f_j (g_j)^{α-1} ∫_{(t_{j-1},t_j]} dF = ∫_T f (g)^{α-1} dF .

In particular, ||f||_{Λ_α(ξ)}^α = ∫_T |f|^α dF, and therefore Λ_α(ξ) = L_α(dF) since the step functions are dense in both spaces. This definition of the integral ∫_T f dξ easily extends to the case where T is an infinite interval. The norm on the set S of all step functions that are zero outside a compact subset of T is defined as before, and the completion Λ_α(ξ) of S with respect to this norm is L_α(T, dF), where for T = (-∞,∞), F is defined by F(t) = sgn(t) |ξ_{t+0}|_α^α.
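For step functions the covariation formula just derived is explicitly computable. A minimal numeric sketch, assuming the normalization dF = dt on [0,1] (an α-stable motion); the helper names are illustrative, not the paper's:

```python
import numpy as np

def spow(u, b):
    """Signed power (u)^b = |u|^b sign(u)."""
    return np.abs(u) ** b * np.sign(u)

def covariation_integrals(breaks, f, g, alpha):
    """[int f dxi, int g dxi]_alpha = int f (g)^{alpha-1} dF for step
    functions f, g constant on (breaks[j-1], breaks[j]], with dF = dt
    (independent-increment SaS process, stable-motion normalization)."""
    dF = np.diff(breaks)
    f, g = np.asarray(f, float), np.asarray(g, float)
    return np.sum(f * spow(g, alpha - 1) * dF)

alpha = 1.5
breaks = np.array([0.0, 0.5, 1.0])
f = [1.0, 1.0]        # f = 1 on (0,1]
g = [2.0, -1.0]       # g = 2 on (0,1/2], -1 on (1/2,1]
print(covariation_integrals(breaks, f, g, alpha))  # 0.5*sqrt(2) - 0.5
# Norm identity: [I(g), I(g)]_alpha = int |g|^alpha dF
print(covariation_integrals(breaks, g, g, alpha))  # 0.5*2^{1.5} + 0.5
```

Note that the signed power makes the second argument enter nonlinearly, which is why this formula is confined to independent-increment processes.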
PROPOSITION 3.3: Suppose that ξ is a SαS process with independent increments, and let ζ = ∫ f dξ and η = ∫ g dξ, where f and g belong to L_α(dF). Then

	[ζ,η]_α = ∫_T f (g)^{α-1} dF .

PROOF: If Γ is the spectral measure for (ζ,η), then [ζ,η]_α is the derivative with respect to r of (1/α) ∫_T |rf + g|^α dF evaluated at r = 0. (Note that the function |t|^α is differentiable when α > 1.) □

COROLLARY 3.4: Let T = (-∞,∞), ξ_0 = 0, and M be the closed subspace of L(ξ) which is the image of L_α(dF) under the isometry f → ∫_T f(t) dξ_t. Then the set of continuous linear functionals {[·,ξ_{t+0}]_α, t ∈ T} separates points on M.

PROOF: Given ζ ∈ M, choose g ∈ L_α(dF) such that ζ = ∫_T g(s) dξ_s. If [ζ,ξ_{t+0}]_α = 0 for all t ∈ T, then by Proposition 3.3,

	∫_{(0,t]} g(s) dF(s) = 0 = ∫_{(t,0]} g(s) dF(s)

for all t ∈ T, which implies that g = 0 a.e. [dF] and hence ζ = 0 in M. □

If the process ξ is weakly continuous from the right, then L_α(dF) is isometric to L(ξ) and Corollary 3.4 implies that the set of continuous linear functionals {[·,ξ_t]_α, t ∈ T} separates points on L(ξ).

4. THE INTEGRAL ∫_T f(t) ξ_t dν(t)
In spite of its general form, the stochastic integral of the previous section is defined under rather mild assumptions which are usually satisfied in real-world applications. The integral discussed in the present section has a more particular form, but is defined under even less stringent conditions and can often be interpreted as a sample path integral.
We shall consider ξ = {ξ_t, t ∈ T} on a probability space (Ω, F, P) to be a p-th order process and q to be such that p^{-1} + q^{-1} = 1. A stochastic process {ξ_t, t ∈ T} is called measurable if (t,ω) → ξ_t(ω) is a product measurable map from T × Ω into R. The following result can be established by an argument analogous to the discussion in [2, pp. 280-281].

LEMMA 4.1. Let ξ = {ξ_t, t ∈ T} be a measurable p-th order process with index set T an arbitrary interval of the real line, and let a > p be given. Then there exists a finite measure ν on (T, B_T) such that ν is equivalent to Lebesgue measure on T and

	∫_T ||ξ_t||_p^a dν(t) < ∞ .

Under the conditions of Lemma 4.1,

	E ∫_T |ξ_t|^p dν(t) ≤ { ∫_T ||ξ_t||_p^a dν(t) }^{p/a} {ν(T)}^{1-p/a} < ∞ ,

since p < a and ν is a finite measure; so the sample paths ξ_t(ω) belong to L_p(T,B_T,ν) with probability one, by the measurability of ξ and Fubini's theorem. We can therefore define a stochastic process D = {D_t, t ∈ T} by

	D_t = ∫_{t_0}^t ξ_s dν(s) ,  t_0 ∈ T fixed,

and observe that D_t ∈ L_p(Ω), since ∫_T ||ξ_s||_p dν(s) < ∞.

PROPOSITION 4.2: The stochastic process D is of strong bounded variation.

PROOF:
For every partition t_0 < t_1 < ... < t_n of T,

	Σ_{k=1}^n ||D_{t_k} - D_{t_{k-1}}||_p ≤ Σ_{k=1}^n ∫_{t_{k-1}}^{t_k} ||ξ_s||_p dν(s) ≤ ∫_T ||ξ_s||_p dν(s) < ∞ ,

since if t_1 < t_2 and ζ(ω) = ∫_{t_1}^{t_2} ξ_s(ω) dν(s), we have

	||ζ||_p^p = < ∫_{t_1}^{t_2} ξ_s dν(s), ζ >_p = ∫_{t_1}^{t_2} <ξ_s,ζ>_p dν(s) ≤ ||ζ||_p^{p-1} ∫_{t_1}^{t_2} ||ξ_s||_p dν(s) ,

and thus ||ζ||_p ≤ ∫_{t_1}^{t_2} ||ξ_s||_p dν(s). □
From Proposition 4.2 it easily follows that for every η ∈ L(ξ), <D_t,η>_p is of bounded variation on T. Therefore the stochastic integral ∫ f(t) dD_t is defined for f ∈ Λ_p(D), and we further define Λ_p(ξ) = Λ_p(D). We shall see that the stochastic integral ∫ f(t) dD_t can be expressed as a sample path integral ∫ f(t) ξ_t(ω) dν(t) for all step functions f ∈ S (Lemma 4.3) and all f ∈ L_q(ν) (Theorem 4.4), and that the sample path integrals of the form ∫ f(t) ξ_t(ω) dν(t) belong to L(ξ) for all f ∈ L_q(T,B_T,ν) (Theorem 4.4). In addition, these sample path integrals are dense in L(ξ) when ξ is a weakly continuous process (Theorem 4.5).
LEMMA 4.3. For every f ∈ S the sample path integral ∫ f(t) ξ_t(ω) dν(t) equals ∫ f(t) dD_t with probability one.

PROOF: It is clear that the sample path integral exists, since f is a bounded function and ξ_t(ω) ∈ L_p(ν) a.s. Any given η ∈ L_p(Ω) determines the continuous linear functional <·,η>_p on L_p(Ω), which can be restricted to L(ξ). Therefore Proposition 2.1 yields a unique η_1 ∈ L(ξ) such that <·,η_1>_p = <·,η>_p on L(ξ). Note that for t_1 ≤ t_2,

	∫_{t_1}^{t_2} d_t <D_t,η_1>_p = <D_{t_2} - D_{t_1}, η_1>_p = < ∫_{t_1}^{t_2} ξ_t dν(t), η_1 >_p = ∫_{t_1}^{t_2} <ξ_t,η_1>_p dν(t) .

Thus for every η ∈ L_p(Ω),

	< ∫_T f(t) dD_t - ∫_T f(t) ξ_t(ω) dν(t), η >_p
	= < ∫_T f(t) dD_t, η_1 >_p - ∫_T f(t) E[(η)^{p-1} ξ_t] dν(t)
	= ∫_T f(t) <ξ_t,η_1>_p dν(t) - ∫_T f(t) <ξ_t,η>_p dν(t) = 0 ,

and the conclusion follows. □
We have seen that ξ_t(ω) ∈ L_p(T,B_T,ν) with probability one, so that for all f ∈ L_q(T,B_T,ν) the sample path integral ∫_T f(t) ξ_t(ω) dν(t) is defined a.s. and is easily seen to belong to L_p(Ω). In fact, it belongs to L(ξ), and the function space Λ_p(ξ) contains L_q(ν) in a sense which we now make precise.

THEOREM 4.4: Every function f ∈ L_q(ν) determines (uniquely) an element f* ∈ Λ_p(ξ) such that

	∫_T f(t) ξ_t(ω) dν(t) = ∫_T f*(t) ξ_t dν(t) a.s.,

where the left-hand side is a sample path integral and the right-hand side a stochastic integral.
PROOF: Given any g ∈ L_q(ν), let ζ = ∫_T g(t) ξ_t(ω) dν(t), and observe that

	||ζ||_p^p = < ∫_T g(t) ξ_t(ω) dν(t), ζ >_p = ∫_T g(t) <ξ_t,ζ>_p dν(t)
	          ≤ ||g||_{L_q(ν)} { ∫_T |<ξ_t,ζ>_p|^p dν(t) }^{1/p}
	          ≤ ||g||_{L_q(ν)} ||ζ||_p^{p-1} { ∫_T ||ξ_t||_p^p dν(t) }^{1/p} .

Thus for all g ∈ L_q(ν) we have

	|| ∫_T g(t) ξ_t(ω) dν(t) ||_p ≤ ||g||_{L_q(ν)} { ∫_T ||ξ_t||_p^p dν(t) }^{1/p} .

Given any f ∈ L_q(ν), let {f_n}_{n=1}^∞ be a sequence of functions in S converging to f in L_q(ν). For each n the sample path integral ∫_T f_n(t) ξ_t(ω) dν(t) belongs to L(ξ) by Lemma 4.3, and

	|| ∫_T [f(t) - f_n(t)] ξ_t(ω) dν(t) ||_p ≤ ||f - f_n||_{L_q(ν)} { ∫_T ||ξ_t||_p^p dν(t) }^{1/p} → 0

as n → ∞, so that the sample path integral ∫_T f(t) ξ_t(ω) dν(t) belongs to L(ξ). Note that {f_n} is a Cauchy sequence in Λ_p(ξ) as m,n → ∞, and denote its limit in Λ_p(ξ) by f*. Then

	|| ∫_T f(t) ξ_t(ω) dν(t) - ∫_T f*(t) ξ_t dν(t) ||_p = lim_{m,n→∞} || ∫_T [f_m(t) - f_n(t)] ξ_t(ω) dν(t) ||_p = 0 ,

so that ∫_T f(t) ξ_t(ω) dν(t) = ∫_T f*(t) ξ_t dν(t) a.s. □
It is straightforward to check that the process D is continuous in p-th mean. Identifying f with f*, we can consider L_q(ν) as a subset of Λ_p(ξ). Since S ⊂ L_q(ν), it is clear that L_q(ν) is isometric to a dense subset of Λ_p(ξ), and consequently that Λ_p(ξ) is isometric to a closed subspace of L(ξ). The following result shows that Λ_p(ξ) is isometric to all of L(ξ) when ξ is weakly continuous.

THEOREM 4.5: Let T = (-∞,∞). Suppose that ξ is weakly continuous from the right and that ξ_{t_1} = 0 for some t_1 ∈ T. Then the closure of {∫_T f(t) ξ_t(ω) dν(t), f ∈ S} in L_p(Ω) is L(ξ).
PROOF: Fix t ∈ T, and for every integer n ≥ 1 define g_n = χ_{(t,t+1/n]} / ν((t,t+1/n]), so that ∫_T g_n dν = 1. Given any ε > 0 and any ζ ∈ L_p(Ω), use weak right continuity to choose an N such that n ≥ N implies that |<ξ_t - ξ_s, ζ>_p| < ε for t < s ≤ t + 1/n. Then for n ≥ N,

	| < ∫_T g_n(s) ξ_s(ω) dν(s) - ξ_t, ζ >_p | = | ∫_T g_n(s) <ξ_s - ξ_t, ζ>_p dν(s) | < ε .

Hence ∫_T g_n(s) ξ_s(ω) dν(s) converges weakly to ξ_t, and therefore ξ_t belongs to the closure of {∫_T f(t) ξ_t(ω) dν(t): f ∈ S}. □

Thus, when ξ is weakly continuous, every element of L(ξ) can be expressed as a limit in L_p (and hence also a.s.) of sample path integrals. Specifically, if ζ ∈ L(ξ), then there exists a sequence {f_n}_{n=1}^∞ ⊂ L_q(ν) such that

	ζ(ω) = lim_{n→∞} ∫_T f_n(t) ξ_t(ω) dν(t) a.s.
It is frequently desirable (especially in applications, such as those considered in Section 6) to write the stochastic integral ∫ g(t) ξ_t dν(t) in the form ∫ f(t) ξ_t dt. Since ν and Lebesgue measure are equivalent, f and g are related by f(x) = h(x) g(x) where h(x) = dν(x)/dx. A condition such as g ∈ L_q(ν) is therefore equivalent to ∫ |f(x)|^q [h(x)]^{1-q} dx < ∞, which we will write as f ∈ L_q(h^{1-q}) in the following (sometimes with no further reference to the definition of h through ν and Lemma 4.1).
The following theorem establishes a Fubini-type result which allows the interchange of stochastic and usual integration, and which is used in Section 6.

THEOREM 4.6: Let ξ = {ξ_t, -∞ < t < ∞} be a weakly continuous SαS process with independent increments, ξ_0 = 0, and F(t) = sgn(t) |ξ_t|_α^α. Fix p and q such that 1 < p < α and p^{-1} + q^{-1} = 1, and define η_s = ∫_{-∞}^∞ a(s,t) dξ_t, where a ∈ L_α(dν × dF) and ν corresponds to η as in Lemma 4.1. If f ∈ L_q(h^{1-q}), h(t) = dν(t)/dt, then

	∫_{-∞}^∞ f(s) η_s ds = ∫_{-∞}^∞ { ∫_{-∞}^∞ f(s) a(s,t) ds } dξ_t .

PROOF:
The right hand integral is well defined since ∫_{-∞}^∞ f(s) a(s,t) ds can be shown to belong to L_α(dF(t)). For any bounded Borel set B, write ξ(B) = ∫_B dξ_t and observe that

	[ ∫_{-∞}^∞ f(s) η_s ds, ξ(B) ]_α = ∫_{-∞}^∞ f(s) [η_s, ξ(B)]_α ds
	= ∫_{-∞}^∞ f(s) ∫_B a(s,t) dF(t) ds = ∫_B { ∫_{-∞}^∞ f(s) a(s,t) ds } dF(t)
	= [ ∫_{-∞}^∞ { ∫_{-∞}^∞ f(s) a(s,t) ds } dξ_t, ξ(B) ]_α .

The conclusion now follows from Corollary 3.4. □
If the function a(s,t) in Theorem 4.6 is defined to be 1 when 0 < t ≤ s and 0 otherwise, then η_s = ξ_s and the conclusion may be written

	∫_0^∞ f(s) ξ_s ds = ∫_0^∞ { ∫_t^∞ f(s) ds } dξ_t .

If a(s,t) is defined to be 1 when 0 < t ≤ s, -1 when s < t ≤ 0, and 0 otherwise, then η_s = ξ_s and the conclusion of Theorem 4.6 may be written

	∫_{-∞}^∞ f(s) ξ_s ds = ∫_{-∞}^0 { -∫_{-∞}^t f(s) ds } dξ_t + ∫_0^∞ { ∫_t^∞ f(s) ds } dξ_t ,

and when ∫_{-∞}^∞ f(s) ds = 0,

	∫_{-∞}^∞ f(s) ξ_s ds = ∫_{-∞}^∞ { ∫_t^∞ f(s) ds } dξ_t .

All these are integration by parts formulae, and the general integration by parts formula over a finite interval is given in the following theorem.
THEOREM 4.7: Let ξ = {ξ_t, -∞ < t < ∞} be a weakly continuous SαS process with independent increments. If -∞ < a < b < ∞ and all integrals are well defined, then

	∫_a^b f(t) ξ_t dt = ξ_a ∫_a^b f(s) ds + ∫_a^b { ∫_t^b f(s) ds } dξ_t
	                  = ξ_b ∫_a^b f(s) ds - ∫_a^b { ∫_a^t f(s) ds } dξ_t .

PROOF: For all a ≤ s ≤ b, using usual integration by parts, we have

	[ ∫_a^b f(t) ξ_t dt, ξ_s ]_α = ∫_a^b f(t) [ξ_t,ξ_s]_α dt
	= [ξ_a,ξ_s]_α ∫_a^b f(t) dt + ∫_a^b ( ∫_t^b f ) d_t [ξ_t,ξ_s]_α
	= [ ξ_a ∫_a^b f(s) ds + ∫_a^b { ∫_t^b f(s) ds } dξ_t, ξ_s ]_α .

The first expression then follows by Corollary 3.4, and the second is established similarly. □
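On a discrete grid Theorem 4.7 reduces to Abel summation (summation by parts), which holds pathwise for an arbitrary sequence of increments. A sketch (grid, integrand and increment law are all illustrative; heavy-tailed increments stand in for a stable motion):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
t = np.linspace(0.0, 1.0, n + 1)              # grid on [a,b] = [0,1]
dxi = rng.standard_cauchy(n) / n              # heavy-tailed increments
xi = np.concatenate([[0.0], np.cumsum(dxi)])  # path, xi_a = 0

dt = np.diff(t)
f = np.cos(3 * np.pi * t[:-1])                # integrand on left endpoints

# Left side: Riemann sum for int_a^b f(t) xi_t dt (right-endpoint values
# of xi, for which the discrete identity below is exact)
lhs = np.sum(f * xi[1:] * dt)

cum = np.concatenate([[0.0], np.cumsum(f * dt)])
F_total = cum[-1]          # int_a^b f(s) ds
head = cum[:-1]            # int_a^t f(s) ds
tail = F_total - head      # int_t^b f(s) ds

rhs1 = xi[0] * F_total + np.sum(tail * dxi)   # first form of Theorem 4.7
rhs2 = xi[-1] * F_total - np.sum(head * dxi)  # second form
print(lhs, rhs1, rhs2)    # all three agree up to rounding
```

Because the identity is algebraic, no distributional assumption on the increments is needed at this discrete level; the stable structure only enters when passing to the limit.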
5. LINEAR ESTIMATION AND REGRESSION

In this section we consider the evaluation of regression estimates and of linear estimates in SαS processes. Let {η, ξ_t, t ∈ T} be SαS, 1 < α < 2, and L(ξ) be the linear space of the SαS process ξ = {ξ_t, t ∈ T}. It is well known that the regression estimate of η based on ξ, E(η|ξ), is not in general linear, i.e., it does not necessarily belong to L(ξ) (in sharp contrast with the Gaussian case α = 2); see for instance [11]. When T consists of one point, or when T is a finite set and the random variables ξ_t, t ∈ T, are independent, then E(η|ξ) is linear [6]. Further cases where E(η|ξ) is linear are shown below (Theorem 5.1 and Corollaries 5.2 to 5.5).
The linear estimate of η based on ξ is defined as the best approximation to η in L(ξ), i.e., as the random variable in L(ξ) with minimum distance from η, and is denoted by Q(η|ξ). Thus Q(η|ξ) is uniquely determined by either of the following:

	[ζ, η - Q(η|ξ)]_α = 0  for all ζ ∈ L(ξ) ,
	[ξ_t, η - Q(η|ξ)]_α = 0  for all t ∈ T .   (5.1)

When α = 2 the regression estimate is linear and equal to the linear estimate. When 1 < α < 2, even when the regression estimate is linear it may not be equal to the linear estimate. In this section we give examples of regression estimates that are linear and coincide with the linear estimates (Theorem 5.6 and Corollary 5.7) and examples of regression estimates that are linear but differ from the linear estimates (Theorem 5.8 and Corollary 5.9).

We first evaluate regression estimates in certain cases where they are linear. Nonlinear regression estimates seem very hard to evaluate (see [11]).
THEOREM 5.1.
Let
teT}, T a possibly infinite interval, be SaS
{~,~t'
such that the process
~={~t'
continuous from the right,
teT}
~t
F(t) = sgn (t-t) I~t-~tl~, teT,
has independent increments, is weakly
= 0 for some teT,
is bounded.
and the function
For every Sorel set B of T
define
XeS)
= IT
xs(t)
d~t,
~(S)
= [~,
X(B)]a .
-25-
Then
~
is a finite signed measure which is absolutely continuous with
respect to the measure induced by F, the Radon-Nikodym derivative
d~/dF
belongs to LaCdF),
and
EC~I~) = IT cgr)(t) d~t a.s.
PROOF: To see that μ is countably additive, let B = ∪_{i=1}^∞ B_i, where the B_i's are disjoint measurable subsets of [a,b], and observe that with C_n = ∪_{i=n}^∞ B_i,

    |[η, X(C_n)]_α| ≤ ‖η‖_α ‖X(C_n)‖_α^{α−1} = ‖η‖_α (∫_{C_n} dF)^{(α−1)/α} → 0   as n → ∞,

by Hölder's inequality and dominated convergence. For every n ≥ 1, X(B_1), ..., X(B_n) are independent random variables by [13, p. 418], and property (2.1) yields μ(∪_{i=1}^n B_i) = Σ_{i=1}^n μ(B_i), which as n increases to infinity becomes μ(B) = Σ_{i=1}^∞ μ(B_i). For absolute continuity, ∫_T χ_B(t) dF(t) = 0 implies that X(B) = 0 a.s., whence μ(B) = 0.

By [3, p. 604] we can choose a countable dense subset T_∞ of T such that E(η | ξ_t, t ∈ T) = E(η | ξ_t, t ∈ T_∞). Enumerate the points in T_∞, and let T_n = {t_0, t_1, ..., t_n : t_0 < t_1 < ... < t_n} be the set containing the first n points of T_∞ and τ (the point with ξ_τ = 0). Then

    E(η | ξ_t, t ∈ T_n) = Σ_{k=1}^n ([η, ξ_{t_k} − ξ_{t_{k−1}}]_α / (F(t_k) − F(t_{k−1}))) (ξ_{t_k} − ξ_{t_{k−1}})        (5.2)

by Corollary 3.4 of [11].
Now E(η | ξ_t, t ∈ T_n) converges to E(η | ξ_t, t ∈ T_∞) in L(ξ) by [3, p. 319], and therefore the "integrands" in the final step of (5.2) form a Cauchy sequence in L_α(dF) which converges to dμ/dF in L_α(dF) and a.e. [dF] by standard martingale convergence theorems [4, p. 369]. Thus taking limits in (5.2) completes the proof.  □

A more suggestive notation for (dμ/dF)(t) is d_t[η, ξ_t]_α/dF(t), so that

    E(η|ξ) = ∫_T (d_t[η, ξ_t]_α / dF(t)) dξ_t.
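The approximation step in the proof above is easy to visualize numerically. In the sketch below (an illustration, not part of the paper's argument) dF is taken to be Lebesgue measure on [0,1] and dμ/dF = g for an arbitrary smooth g; the partition integrands of (5.2) are then the cell averages of g, and their L_α(dF) distance to g shrinks as the partition is refined, which is exactly the Cauchy-sequence property used in the proof.

```python
# Deterministic sketch of the approximation step in the proof of Theorem 5.1:
# with dF taken as Lebesgue measure on [0,1] and dmu/dF = g, the partition
# integrands in (5.2) are the cell averages of g, and they converge to g in
# L_alpha(dF) as the partition is refined.  The g below is an arbitrary
# illustrative choice, not one from the paper.
import math

alpha = 1.5
g = lambda t: math.sin(6.0 * t) + 0.5 * t          # illustrative dmu/dF

def l_alpha_error(n, m=8):
    """L_alpha(dF) distance between g and its n-cell average,
    with m quadrature points per cell."""
    total = 0.0
    for k in range(n):
        a, b = k / n, (k + 1) / n
        pts = [a + (b - a) * (j + 0.5) / m for j in range(m)]
        avg = sum(g(t) for t in pts) / m           # cell average = mu(cell)/F(cell)
        total += sum(abs(g(t) - avg) ** alpha for t in pts) * (b - a) / m
    return total ** (1.0 / alpha)

errors = [l_alpha_error(n) for n in (4, 16, 64)]
print(errors)
```

The errors decrease monotonically with the mesh, mirroring the a.e. [dF] and L_α(dF) convergence to dμ/dF.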
The following elementary properties may be considered as corollaries
of Theorem 5.1 or appropriate modifications of it, but also follow immediately
from elementary properties of conditional expectation and are stated here
only for the purpose of comparison with the linear estimates.
COROLLARY 5.2. If ξ is as in Theorem 5.1, a < b < c, a < τ ≤ b, and f ∈ L_α([a,c], dF), then

    E(∫_a^c f(t) dξ_t | ξ_s, a < s ≤ b) = ∫_a^b f(t) dξ_t

and

    ‖∫_a^c f(t) dξ_t − E(∫_a^c f(t) dξ_t | ξ_s, a < s ≤ b)‖_α^α = ∫_b^c |f(t)|^α dF(t).

COROLLARY 5.3.
Let

    X_t = ∫_{−∞}^t f(t,s) dξ_s,   −∞ < t < ∞,

where ξ is SαS, weakly continuous from the right, has independent increments, F(s) − F(s′) = ‖ξ_s − ξ_{s′}‖_α^α for all s > s′, f(t,·) ∈ L_α((−∞,t], dF), and L{X, (−∞,t]} = L{dξ, (−∞,t]}, where dξ denotes the increments of ξ. Then for all τ > 0 and t,

    ‖X_{t+τ} − E(X_{t+τ} | X_u, u ≤ t)‖_α^α = ∫_t^{t+τ} |f(t+τ, s)|^α dF(s).

When ξ is an α-stable motion, i.e., when dF(t) = dt (Brownian motion when α = 2), then X_t = ∫_{−∞}^t f(t−s) dξ_s is a stationary SαS process, and under the conditions of Corollary 5.3

    ‖X_{t+τ} − E(X_{t+τ} | X_u, u ≤ t)‖_α^α = ∫_0^τ |f(s)|^α ds.
It should be pointed out that for 1 < α < 2 it is not known whether all purely nondeterministic (i.e., ∩_t L{X, (−∞,t]} = {0}) stationary SαS processes are moving averages of a stable motion (this is, of course, well known for α = 2).

In the following corollary we evaluate the regression estimate of a functional of a signal based on signal plus noise, when the signal and noise are SαS processes of special form.
COROLLARY 5.4. Let

    s_t = ∫_{−∞}^∞ f(t,λ) dS_λ,   n_t = ∫_{−∞}^∞ f(t,λ) dN_λ,   −∞ < t < ∞,

where S, N are independent, weakly right continuous, SαS processes with independent increments, S_0 = 0 = N_0, and {f(t,·), −∞ < t < ∞} is complete in L_α[d(F_s + F_n)], where the functions F_s(λ) = sgn(λ)‖S_λ‖_α^α, F_n(λ) = sgn(λ)‖N_λ‖_α^α, −∞ < λ < ∞, are bounded. If g ∈ L_α(dF_s), then the regression estimate of ∫_{−∞}^∞ g(λ) dS_λ based on s_t + n_t, −∞ < t < ∞, and the regression error are given by

    E(∫_{−∞}^∞ g(λ) dS_λ | s_t + n_t, −∞ < t < ∞) = ∫_{−∞}^∞ g(λ) φ_s(λ) d(S_λ + N_λ),

    e_r^α = ‖∫_{−∞}^∞ g(λ) dS_λ − E(∫_{−∞}^∞ g(λ) dS_λ | s_t + n_t, −∞ < t < ∞)‖_α^α
          = ∫_{−∞}^∞ |g|^α (φ_s^{α−1} + φ_n^{α−1}) φ_s φ_n d(F_s + F_n),

where φ_s = dF_s/d(F_s + F_n) and φ_n = dF_n/d(F_s + F_n) = 1 − φ_s.
PROOF: Put η = ∫_{−∞}^∞ g dS, ξ_t = s_t + n_t, ζ_λ = S_λ + N_λ. Then ξ_t = ∫_{−∞}^∞ f(t,λ) dζ_λ, and F(λ) = sgn(λ)‖ζ_λ‖_α^α can be evaluated by using the independence of S and N, properties (2.1) and (2.2), and the linearity of [·,·]_α in its first argument, as follows:

    [ζ_λ, ζ_λ]_α = [S_λ, S_λ]_α + [N_λ, S_λ]_α + [S_λ, N_λ]_α + [N_λ, N_λ]_α
                 = ‖S_λ‖_α^α + ‖N_λ‖_α^α = sgn(λ) {F_s(λ) + F_n(λ)}.

Thus F(λ) = F_s(λ) + F_n(λ). Since {f(t,·), −∞ < t < ∞} is complete in L_α(dF), {ξ_t, −∞ < t < ∞} is complete in L(ζ). Hence the (completed) σ-fields generated by {ξ_t, −∞ < t < ∞} and {ζ_λ, −∞ < λ < ∞} are equal and

    E(η | ξ_t, −∞ < t < ∞) = E(η | ζ_λ, −∞ < λ < ∞)   a.s.

It then follows by Theorem 5.1 that

    E(η | ξ) = ∫_{−∞}^∞ (dμ/dF)(λ) dζ_λ.
Using again the independence of S and N, and properties (2.1) and (2.2), we evaluate μ. If we put ψ(λ,u) = 1 when 0 < u ≤ λ, = −1 when λ < u ≤ 0, and = 0 otherwise, then S_λ = ∫_{−∞}^∞ ψ(λ,u) dS_u and, by Proposition 3.3,

    μ(λ) = ∫_{(0,λ]} g dF_s,  λ > 0;   = 0,  λ = 0;   = −∫_{(λ,0]} g dF_s,  λ < 0.

The expression for the regression estimate follows from dμ/dF = g dF_s/d(F_s + F_n) = g φ_s. For the regression error we have, similarly,

    e_r^α = ∫_{−∞}^∞ |g|^α φ_n^α dF_s + ∫_{−∞}^∞ |g|^α φ_s^α dF_n
          = ∫_{−∞}^∞ |g|^α (φ_s^{α−1} + φ_n^{α−1}) φ_s φ_n d(F_s + F_n).  □
When the spectra of signal and noise do not overlap, i.e., when the spectral measures dF_s and dF_n are singular, then φ_s φ_n = 0 a.e. [d(F_s + F_n)] and the error of the regression estimate is zero. When F_s and F_n are absolutely continuous with spectral densities f_s and f_n, then φ_s = f_s(f_s + f_n)^{−1}, φ_n = f_n(f_s + f_n)^{−1}, and the regression error can be expressed as

    e_r^α = ∫_{−∞}^∞ |g|^α (f_s^{α−1} + f_n^{α−1}) f_s f_n (f_s + f_n)^{−α} dλ.

By putting g(λ) = f(t,λ) in Corollary 5.4 we obtain expressions for the regression estimate of s_t based on signal plus noise, and of the resulting error.
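As a numerical illustration of these expressions (not from the paper; the densities and g below are arbitrary choices), the following sketch evaluates the regression-error integral e_r^α = ∫ |g|^α (φ_s^{α−1} + φ_n^{α−1}) φ_s φ_n (f_s + f_n) dλ by quadrature, and checks that the error vanishes when the spectral densities have disjoint supports:

```python
# Numerical sketch of the regression error of Corollary 5.4 when F_s, F_n have
# spectral densities f_s, f_n, so that phi_s = f_s/(f_s+f_n).  The densities
# and g are illustrative choices only; the error is zero where the spectra of
# signal and noise do not overlap, as noted in the text.
import math

alpha = 1.5
g = lambda l: math.exp(-abs(l))                     # illustrative g

def error_alpha(f_s, f_n, lo=-10.0, hi=10.0, n=4000):
    """Midpoint quadrature for the regression error e^alpha."""
    h, total = (hi - lo) / n, 0.0
    for i in range(n):
        l = lo + (i + 0.5) * h
        fs, fn = f_s(l), f_n(l)
        if fs + fn == 0.0:
            continue
        ps, pn = fs / (fs + fn), fn / (fs + fn)
        total += (abs(g(l)) ** alpha * (ps ** (alpha - 1) + pn ** (alpha - 1))
                  * ps * pn * (fs + fn)) * h
    return total

# Overlapping spectra: strictly positive error.
overlap = error_alpha(lambda l: math.exp(-l * l),
                      lambda l: math.exp(-(l - 1.0) ** 2))
# Disjoint spectra (singular dF_s, dF_n): zero error.
disjoint = error_alpha(lambda l: 1.0 if l < 0 else 0.0,
                       lambda l: 1.0 if l > 0 else 0.0)
print(overlap, disjoint)
```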
In the special case where both signal and noise are SαS processes on the positive real line with independent increments, these expressions simplify to

    E(S_t | S_u + N_u, 0 ≤ u ≤ t′) = ∫_0^{min(t,t′)} φ_s(u) d(S+N)_u,

    ‖S_t − E(S_t | S_u + N_u, 0 ≤ u ≤ t′)‖_α^α = ∫_{(0, min(t,t′)]} {φ_s^{α−1} + φ_n^{α−1}} φ_s φ_n d(F_s + F_n).
Another special case of interest is when the signal and the noise are harmonizable SαS processes with representations analogous to those of stationary Gaussian processes:

    ξ_t = ∫_0^∞ cos(tλ) dH′_λ + ∫_0^∞ sin(tλ) dH″_λ,   −∞ < t < ∞,        (5.3)

where H′, H″ are independent, weakly right continuous, SαS processes with independent increments, H′_0 = 0 = H″_0, and ‖H′_λ‖_α^α = F_ξ(λ) = ‖H″_λ‖_α^α, λ > 0, with F_ξ bounded. Unlike the Gaussian (α = 2) case, however, such stable processes with 1 < α < 2 are not stationary.
COROLLARY 5.5. Let both the signal s and the noise n have spectral representations of the type (5.3) and assume they are independent. Let

    X′_λ = S′_λ + N′_λ,   X″_λ = S″_λ + N″_λ.

Then the regression estimate of s_t based on x_u = s_u + n_u, −∞ < u < ∞, is

    E(s_t | x_u, −∞ < u < ∞) = ∫_0^∞ cos(tλ) φ_s(λ) dX′_λ + ∫_0^∞ sin(tλ) φ_s(λ) dX″_λ,

and the error of the regression estimate is given by

    ‖s_t − E(s_t | x_u, −∞ < u < ∞)‖_α^α = ∫_0^∞ {|cos(tλ)|^α + |sin(tλ)|^α} (φ_s^{α−1} + φ_n^{α−1}) φ_s φ_n d(F_s + F_n),

where φ_s = dF_s/d(F_s + F_n) and φ_n = dF_n/d(F_s + F_n) = 1 − φ_s.

PROOF: Inversion formulae of (5.3), expressing H′_λ, H″_λ, λ > 0, in terms of ξ_t, −∞ < t < ∞, are identical to those valid when α = 2, the only difference being that the convergence is now with respect to the ‖·‖_α norm rather than in quadratic mean. These inversion formulae imply that L(x) = L(X′, X″) and, since S′, S″ are independent and N′, N″ are independent, that X′ and X″ are also independent. A straightforward extension of Theorem 5.1 then gives

    E(η | x) = ∫_0^∞ (d_λ[η, X′_λ]_α / dF_x(λ)) dX′_λ + ∫_0^∞ (d_λ[η, X″_λ]_α / dF_x(λ)) dX″_λ,

from which the results follow by putting η = ∫_0^∞ f dS′ + ∫_0^∞ g dS″ and noting that F_x = F_s + F_n.  □
We now turn our attention to the evaluation of linear estimates in certain cases. It should be pointed out that linear estimates are harder to evaluate than regression estimates. For instance, the regression estimate of η based on a single random variable ξ is given by E(η|ξ) = aξ, where a = [η,ξ]_α/[ξ,ξ]_α, while the linear estimate of η based on ξ is of the form Q(η|ξ) = bξ, where b cannot be found in general; if the representation η = ∫ g dζ, ξ = ∫ f dζ is used, where ζ is a SαS process with independent increments, F(t) = ‖ζ_t − ζ_0‖_α^α, and f, g ∈ L_α(dF) [13], then b satisfies

    ∫ f (g − bf)^{α−1} dF = 0

(here (x)^{β} denotes the signed power sgn(x)|x|^β), which in general, when 1 < α < 2, cannot be solved for b (when α = 2 the solution is of course straightforward). It may therefore seem somewhat surprising that in certain specific cases the linear estimate of η based on ξ = {ξ_t, t ∈ T}, with T an interval, can be evaluated; this is feasible only because in these cases η is "appropriately" related with ξ. While Q(η|ξ) cannot be evaluated under the general assumptions of Theorem 5.1, it can be evaluated under the more special assumptions of Corollaries 5.2 to 5.5.
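Although b above cannot be found in closed form, the regression coefficient a = [η,ξ]_α/[ξ,ξ]_α is quite concrete. The sketch below (an illustration under stated assumptions, not the paper's computation) takes η = a₁Z₁ + a₂Z₂ and ξ = c₁Z₁ + c₂Z₂ with Z₁, Z₂ independent standard SαS, for which [η,ξ]_α = a₁c₁^{α−1} + a₂c₂^{α−1} (signed powers) and [ξ,ξ]_α = |c₁|^α + |c₂|^α, and estimates the ratio from simulated samples via the moment identity E(η sgn ξ)/E|ξ| = [η,ξ]_α/[ξ,ξ]_α, a standard property of the covariation which is assumed here rather than derived above.

```python
# Monte Carlo sketch of the regression coefficient a = [eta,xi]_a/[xi,xi]_a
# for jointly SaS variables built from independent standard SaS Z1, Z2.
# The moment identity E(eta*sgn(xi))/E|xi| = [eta,xi]_a/[xi,xi]_a is an
# assumed (standard) covariation property, not derived in the text above.
import math, random

random.seed(7)
alpha = 1.5

def sas(n):
    """Chambers-Mallows-Stuck simulation of standard SaS variables."""
    out = []
    for _ in range(n):
        u = math.pi * (random.random() - 0.5)
        w = -math.log(1.0 - random.random())
        out.append(math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
                   * (math.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))
    return out

a1, a2, c1, c2 = 1.0, -0.5, 1.0, 1.0
spow = lambda x, p: math.copysign(abs(x) ** p, x)           # signed power
theory = (a1 * spow(c1, alpha - 1) + a2 * spow(c2, alpha - 1)) \
         / (abs(c1) ** alpha + abs(c2) ** alpha)            # = 0.25 here

n = 200_000
z1, z2 = sas(n), sas(n)
num = den = 0.0
for x, y in zip(z1, z2):
    eta, xi = a1 * x + a2 * y, c1 * x + c2 * y
    num += eta * math.copysign(1.0, xi)
    den += abs(xi)
print(theory, num / den)
```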
THEOREM 5.6. Under the assumptions of Corollary 5.2,

    Q(∫_a^c f(t) dξ_t | ξ_s, a < s ≤ b) = ∫_a^b f(t) dξ_t.

PROOF: Let η = ∫_a^c f dξ. Since ξ is weakly right continuous and ξ_τ = 0 for some a < τ ≤ b, the map L_α((a,b], dF) → L(ξ, (a,b]) defined by g → ∫_a^b g dξ is onto. Thus Λ = Q(η | ξ_s, a < s ≤ b) ∈ L(ξ, (a,b]) is of the form Λ = ∫_a^b g dξ for some g ∈ L_α((a,b], dF). Similarly, every Λ′ ∈ L(ξ, (a,b]) is of the form Λ′ = ∫_a^c h χ_{(a,b]} dξ, h ∈ L_α((a,b], dF), and thus condition (5.1) is equivalent to

    0 = [Λ′, η − Λ]_α = [∫_a^c h χ_{(a,b]} dξ, ∫_a^c (f − g χ_{(a,b]}) dξ]_α = ∫_a^b h (f − g)^{α−1} dF

for all h ∈ L_α((a,b], dF), which implies f = g a.e. [dF] on (a,b].  □

COROLLARY 5.7. Under the assumptions of Corollary 5.3,

    Q(X_{t+τ} | X_u, u ≤ t) = ∫_{−∞}^t f(t+τ, s) dξ_s.

PROOF: We have Y = X_{t+τ} − ∫_{−∞}^t f(t+τ, s) dξ_s = ∫_t^{t+τ} f(t+τ, s) dξ_s ∈ L(dξ, (t, t+τ]). Since ξ has independent increments, every Λ in L(dξ, (−∞, t]) is independent of Y, and by (2.2), [Λ, Y]_α = 0, so that (5.1) is satisfied. The result then follows from ∫_{−∞}^t f(t+τ, s) dξ_s ∈ L(dξ, (−∞, t]).  □
Thus under the assumptions of Corollaries 5.2 and 5.3, the regression estimates are linear and equal to the linear estimates. We now show that under the assumptions of Corollaries 5.4 and 5.5 the regression estimates, which are linear, differ from the linear estimates.

In the remaining examples ξ = {ξ_t = s_t + n_t, t ∈ T}, where the signal s and the noise n are independent SαS processes, and η ∈ L(s). The linear estimate Λ = Q(η|ξ) of η is the limit (with respect to the norm ‖·‖_α) of a sequence of finite linear combinations of random variables from {ξ_t, t ∈ T}, and we denote this linear map by A: Λ = A(ξ). Since s and n are independent, the corresponding sequences of finite linear combinations converge (with respect to ‖·‖_α) to A(s) = Λ_s and A(n) = Λ_n respectively (i.e., the same linear operation applied to s and to n), so that Λ = A(ξ) = A(s) + A(n) = Λ_s + Λ_n. The characterizing equation (5.1) for Λ is then equivalent to

    0 = [ξ_t, η − Λ]_α = [ξ_t, (η − Λ_s) − Λ_n]_α = [ξ_t, η − Λ_s]_α − [ξ_t, Λ_n]_α
      = [s_t, η − Λ_s]_α − [n_t, Λ_n]_α   for all t ∈ T.        (5.4)

A similar calculation gives the linear estimation error

    ‖η − Λ‖_α^α = ‖η − Λ_s‖_α^α + ‖Λ_n‖_α^α.        (5.5)
THEOREM 5.8. Under the assumptions of Corollary 5.4,

    Q(∫_{−∞}^∞ g dS | s_t + n_t, −∞ < t < ∞) = ∫_{−∞}^∞ h d(S + N),   h = g [1 + (φ_n/φ_s)^{1/(α−1)}]^{−1}   a.e. d(F_s + F_n),

and the linear estimation error is

    ‖η − Λ‖_α^α = ∫_{−∞}^∞ |g|^α (φ_s φ_n^{α/(α−1)} + φ_n φ_s^{α/(α−1)}) [φ_s^{1/(α−1)} + φ_n^{1/(α−1)}]^{−α} d(F_s + F_n).

PROOF: Put η = ∫ g dS. Since Λ = Q(η | ξ) ∈ L(ξ) = L(S + N), we have Λ = ∫ h d(S + N) for some h ∈ L_α[d(F_s + F_n)], so that Λ_s = ∫ h dS and Λ_n = ∫ h dN. By (5.4) we have, for all −∞ < t < ∞,

    0 = [∫_{−∞}^∞ f(t,λ) dS_λ, ∫_{−∞}^∞ (g − h) dS]_α − [∫_{−∞}^∞ f(t,λ) dN_λ, ∫_{−∞}^∞ h dN]_α
      = ∫_{−∞}^∞ f(t,λ) (g(λ) − h(λ))^{α−1} dF_s(λ) − ∫_{−∞}^∞ f(t,λ) (h(λ))^{α−1} dF_n(λ)
      = ∫_{−∞}^∞ f(t,λ) {(g(λ) − h(λ))^{α−1} φ_s(λ) − (h(λ))^{α−1} φ_n(λ)} d(F_s + F_n)(λ).

Since {f(t,·), −∞ < t < ∞} is complete in L_α[d(F_s + F_n)], this is equivalent to

    (g − h)^{α−1} φ_s − (h)^{α−1} φ_n = 0   a.e. d(F_s + F_n),

and thus to

    h = g [1 + (φ_n/φ_s)^{1/(α−1)}]^{−1}   a.e. d(F_s + F_n).

For the linear estimation error we obtain from (5.5)

    ‖η − Λ‖_α^α = ∫_{−∞}^∞ |g − h|^α dF_s + ∫_{−∞}^∞ |h|^α dF_n,

and the final expression follows by substituting h.  □
By putting g(λ) = f(t,λ) in Theorem 5.8 we obtain expressions for the linear estimate of s_t based on signal plus noise, and of the resulting error. In the special case where both signal and noise are SαS processes on the positive real line with independent increments, these expressions simplify to (under the assumptions of Corollary 5.4)

    Q(S_t | S_u + N_u, 0 ≤ u ≤ t′) = ∫_0^{min(t,t′)} (φ_s^{1/(α−1)} / (φ_s^{1/(α−1)} + φ_n^{1/(α−1)})) d(S + N).

COROLLARY 5.9. Under the assumptions of Corollary 5.5,

    Q(η | x) = ∫_0^∞ f̃ dX′ + ∫_0^∞ g̃ dX″,   with  f̃ = f [1 + (φ_n/φ_s)^{1/(α−1)}]^{−1},  g̃ = g [1 + (φ_n/φ_s)^{1/(α−1)}]^{−1}   a.e. d(F_s + F_n).

PROOF: Putting η = ∫_0^∞ f dS′ + ∫_0^∞ g dS″ and Λ = Q(η | x) = ∫_0^∞ f̃ dX′ + ∫_0^∞ g̃ dX″, we have Λ_s = ∫_0^∞ f̃ dS′ + ∫_0^∞ g̃ dS″, Λ_n = ∫_0^∞ f̃ dN′ + ∫_0^∞ g̃ dN″, and by (5.4), for all −∞ < t < ∞,

    0 = [s_t, η − Λ_s]_α − [n_t, Λ_n]_α
      = ∫_0^∞ cos(tλ) (f − f̃)^{α−1} dF_s + ∫_0^∞ sin(tλ) (g − g̃)^{α−1} dF_s
        − ∫_0^∞ cos(tλ) (f̃)^{α−1} dF_n − ∫_0^∞ sin(tλ) (g̃)^{α−1} dF_n
      = ∫_0^∞ cos(tλ) {(f − f̃)^{α−1} φ_s − (f̃)^{α−1} φ_n} d(F_s + F_n)
        + ∫_0^∞ sin(tλ) {(g − g̃)^{α−1} φ_s − (g̃)^{α−1} φ_n} d(F_s + F_n).

It follows that, a.e. d(F_s + F_n),

    (f − f̃)^{α−1} φ_s = (f̃)^{α−1} φ_n,   (g − g̃)^{α−1} φ_s = (g̃)^{α−1} φ_n,

and the expressions are derived as in Theorem 5.8.  □

Corollary 5.9 solves the nonrealizable (if we think of t as time) linear filtering problem for harmonizable SαS signal and noise of the type (5.3). The solution of the realizable linear filtering problem will be considered elsewhere.
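The contrast between the linear estimate of Theorem 5.8 and the regression estimate of Corollary 5.4 is easy to see numerically: at a fixed frequency with spectral weight φ_s, the regression estimate weights the observation by g·φ_s, while the linear estimate uses h = g[1 + (φ_n/φ_s)^{1/(α−1)}]^{−1}. The sketch below (illustrative values, not from the paper) shows the two gains coinciding at α = 2 and differing for 1 < α < 2:

```python
# Sketch of the linear-filter gain of Theorem 5.8 versus the regression gain
# of Corollary 5.4 at a single frequency.  The value phi_s = 0.3 and g = 1 are
# illustrative choices; when alpha = 2 the two gains coincide (Gaussian case),
# and for 1 < alpha < 2 they differ.
def gains(alpha, phi_s, g=1.0):
    phi_n = 1.0 - phi_s
    h = g / (1.0 + (phi_n / phi_s) ** (1.0 / (alpha - 1.0)))   # linear estimate
    reg = g * phi_s                                            # regression estimate
    return h, reg

h2, r2 = gains(2.0, 0.3)      # alpha = 2: identical gains
h15, r15 = gains(1.5, 0.3)    # 1 < alpha < 2: gains differ
print(h2, r2, h15, r15)
```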
6.
LINEAR SYSTEM ANALYSIS AND IDENTIFICATION
We consider a linear system with input the SαS process ξ, output X, and input-output relationship described by one of the following:

    (I)  X_s = ∫ f(s,t) dξ_t,

    (II) X_s = ∫ f(s,t) ξ_t dt,

where f, ξ,
and the index sets are such that the indicated integrals are well
defined (specific conditions will be stated in each case to be considered).
The impulse response of the system is f and we will frequently focus attention
on time invariant systems:
f(s,t)
= f(t-s).
We will be concerned here with
analyzing and identifying linear systems of type I or II.
The system identification problem is the determination of the impulse
response f of the system from the joint distribution of the input and output.
It turns out that in many cases of interest, knowledge of the joint distribution
of the input and the output determines uniquely the impulse response.
We then
concentrate on the more significant (from an application viewpoint) problem of
actually expressing the impulse response function f explicitly in terms of the
input covariation function and the input-output cross covariation function.
The
advantage of this approach is that the system is identified by using not the full
joint distribution of the input and output SaS processes, but only a portion of
it which can be estimated in special cases.
The covariation function C_ξξ(s,t) of ξ is defined as the covariation of ξ_s with ξ_t, and the cross covariation function C_Xξ(s,t) of X with ξ is defined as the covariation of X_s with ξ_t. For systems I and II respectively we have
    (I′)  C_Xξ(s,t) = ∫ f(s,u) d_u C_ξξ(u,t),

    (II′) C_Xξ(s,t) = ∫ f(s,u) C_ξξ(u,t) du.
For a discussion of the estimation of C_Xξ and C_ξξ in special situations, see [8]. Of course when α = 2, covariations and cross covariations become the usual covariances and cross covariances, and hence the approach taken here is the analogue for stable processes of the approach taken for Gaussian or second order processes. (The present set up, including (I′) and (II′), is applicable to p-th order processes as well, but we are concentrating on SαS processes because they arise naturally in applications.) This problem is considered in detail for the following classes of SαS inputs ξ: stable processes with independent increments (Propositions 6.1 and 6.2), sub-Gaussian processes (Case 6.3), and moving averages (Cases 6.3 and 6.4) and Fourier transforms (Case 6.5) of processes with independent increments.
The system analysis problem is the study of the statistical properties of the output when the statistics of the input and the system are known. In our case the output X is SαS and we study, for certain specific SαS inputs, the dependence of the distribution of the output on the linear system.
Specifically, we consider time invariant linear systems, and for two such systems with impulse responses f and g we find necessary and sufficient conditions on f, g so that with a specific SαS input ξ the outputs of the two systems have the same distribution. Kanter [7] has solved this problem for system (I) and a stable motion input, and we consider here both systems (I) and (II) and inputs that are stable motions (Proposition 6.6), moving averages of stable motions (Case 6.7), and Fourier transforms of SαS processes with independent increments (Case 6.8).
This can also be considered as a kind of system identification problem: for certain specific SαS inputs, what part of the impulse response can be determined from the distribution of the output?
Whenever type (II) systems are considered we shall assume without further notice that f(s,·) ∈ L_q(h^{1−q}) (see discussion preceding Theorem 4.6).

System Identification

The question here is to investigate what can be determined about the system function f from knowledge of C_Xξ and C_ξξ by using (I′) or (II′) as appropriate. We first consider SαS inputs ξ with independent increments. The main results are in Propositions 6.1 and 6.2.
PROPOSITION 6.1. If ξ = {ξ_t, −∞ < t < ∞} is a weakly continuous SαS process with independent increments, ξ_0 = 0, and F(t) = sgn(t)‖ξ_t‖_α^α, then each impulse response function f(s,·) ∈ L_α(dF) for system (I) is determined in L_α(dF) by the cross covariation function C_Xξ(s,t), all real t. Explicitly,

    f(s,t) = [dC_Xξ(s,·)/dF](t)
           = lim_{n→∞} (C_Xξ(s, t^{(n)}_{k(n,t)+1}) − C_Xξ(s, t^{(n)}_{k(n,t)})) / (F(t^{(n)}_{k(n,t)+1}) − F(t^{(n)}_{k(n,t)}))   a.e. [dF],

where {(t^{(n)}_k, t^{(n)}_{k+1}]}_{k=−∞}^∞ is a partition of (−∞,∞) which becomes finer as n increases, sup_k (t^{(n)}_{k+1} − t^{(n)}_k) → 0, and k(n,t) is the unique k such that t ∈ (t^{(n)}_k, t^{(n)}_{k+1}].

PROOF: The first part is immediate from Corollary 3.4. Since C_Xξ(s,t) = ∫_{−∞}^t f(s,u) dF(u) by Proposition 3.3, we have f(s,t) = d_t C_Xξ(s,t)/dF(t) a.e. [dF], and the second expression follows by an exercise similar to (20.61)(b) of [4].  □

If dF(t) = dt in Proposition 6.1, then ξ is called α-stable motion (Brownian motion when α = 2). For such an input ξ and a time invariant system of type (I),
C_Xξ(s,t) = C_Xξ(t−s), so that f(τ) = C′_Xξ(τ) a.e.

PROPOSITION 6.2. If ξ = {ξ_t, −∞ < t < ∞} is a weakly continuous SαS process with independent increments, ξ_0 = 0, and F(t) = sgn(t)‖ξ_t‖_α^α, then for each fixed s the cross covariation function C_Xξ(s,·) determines f(s,·) for system (II) by

    f(s,t) = (d/dt)[dC_Xξ(s,·)/dF](t)   a.e. [Leb].
PROOF: If

    ψ(t,u) = 1 for 0 < u ≤ t;   = −1 for t < u ≤ 0;   = 0 otherwise,

then ξ_t = ∫_{−∞}^∞ ψ(t,u) dξ_u for all t, and by Theorem 4.6

    X_s = ∫_{−∞}^∞ f(s,t) ξ_t dt = ∫_{−∞}^∞ a(s,u) dξ_u,

where

    a(s,u) = ∫_{−∞}^∞ f(s,t) ψ(t,u) dt = ∫_u^∞ f(s,v) dv  for u > 0;   = −∫_{−∞}^u f(s,v) dv  for u < 0.

Hence

    C_Xξ(s,t) = [X_s, ξ_t]_α = ∫_{−∞}^∞ a(s,u) (ψ(t,u))^{α−1} dF(u)
              = ∫_0^t a(s,u) dF(u)  for t > 0;   = −∫_t^0 a(s,u) dF(u)  for t < 0,

from which the result follows.  □
When ξ is α-stable motion and f is time invariant in Proposition 6.2, then we have, for any fixed s,

    f(t − s) = ∂²C_Xξ(s,t)/∂t²   a.e. [Leb].
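The identification formulas of Propositions 6.1 and 6.2 are constructive, and the difference-quotient recovery can be sketched numerically. Below (an illustration with an arbitrary impulse response, not from the paper), the input is α-stable motion (dF(t) = dt) and the system is (I) with f(s,t) = f(t−s), so that C_Xξ(s,t) = ∫_{−∞}^t f(u−s) du and the quotient (C_Xξ(s,t+δ) − C_Xξ(s,t))/δ recovers f(t−s):

```python
# Deterministic sketch of Proposition 6.1 for an alpha-stable-motion input
# (dF(t) = dt) and a time invariant system (I) with f(s,t) = f(t-s): the
# difference quotient of C_Xxi(s,t) in t recovers the impulse response.  The
# f below is an arbitrary illustrative choice.
import math

f = lambda v: math.exp(-v * v)           # illustrative impulse response
s = 0.4                                  # fixed first argument

def C(t, lo=-8.0, n=8000):
    """C_Xxi(s, t) = int_{lo}^{t} f(u - s) du by the midpoint rule."""
    h = (t - lo) / n
    return sum(f(lo + (i + 0.5) * h - s) for i in range(n)) * h

dt = 1e-3
worst = max(abs((C(t + dt) - C(t)) / dt - f(t - s))
            for t in (-1.0, 0.0, 0.7, 2.0))
print(worst)
```

The recovery error is of the order of the mesh δ of the difference quotient, mirroring the a.e. [dF] limit in the proposition.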
We now consider sub-Gaussian inputs and inputs that are moving averages of stable motions. The cases we shall examine are special cases of the following.

CASE 6.3. Suppose that ξ is a SαS process with covariation function of the form

    C_ξξ(s,t) = ∫_{−∞}^∞ e^{i(t−s)λ} φ(λ) dλ,

where φ ∈ L_1(R^1, Leb) and φ ≠ 0 a.e. [Leb] on R^1. Assuming integrability of f(s,·) and λφ(λ), we obtain for system (I)

    C_Xξ(s,t) = ∫_{−∞}^∞ f(s,u) (∫_{−∞}^∞ (−iλ) e^{i(t−u)λ} φ(λ) dλ) du = i ∫_{−∞}^∞ e^{−itλ} λ φ(−λ) f̂(s,λ) dλ.

Thus knowledge of φ and C_Xξ(s,t), all t, determines f̂(s,λ), and hence f(s,t) a.e. If moreover C_Xξ(s,·) ∈ L_1, we have

    Ĉ_Xξ(s,λ) = 2πi λ φ(−λ) f̂(s,λ),

from which f(s,t) can be expressed in terms of Ĉ_Xξ(s,λ) and φ via inverse Fourier transform, provided f̂(s,λ) is integrable in λ. A similar calculation for system (II) yields

    Ĉ_Xξ(s,λ) = 2π φ(−λ) f̂(s,λ)

when f(s,·) is integrable, and thus similar conclusions, when C_Xξ(s,·) is integrable. For time invariant systems f(s,t) = f(t−s), we have C_Xξ(s,t) = C_Xξ(t−s) for both systems (I) and (II), and the resulting expressions are

    (I): Ĉ_Xξ(λ) = 2πi λ φ(−λ) f̂(λ),   (II): Ĉ_Xξ(λ) = 2π φ(−λ) f̂(λ).

If ξ is α-SG(R) with R a stationary covariance function, then by Corollary 2.3, C_ξξ(s,t) = R(t−s), and this C_ξξ is a stationary covariance function as well. Assuming that the spectral distribution of R is absolutely continuous, C_ξξ is as in Case 6.3, and the results of Case 6.3 apply to obtain expressions for the system f.

If ξ is a moving average of an α-stable motion ξ̃:

    ξ_t = ∫_{−∞}^∞ a(t−u) dξ̃_u,   a ∈ L_α,

then

    C_ξξ(s,t) = ∫_{−∞}^∞ a(s−t+v) (a(v))^{α−1} dv,

and if in addition a, (a)^{α−1} ∈ L_1, then C_ξξ is of the form of Case 6.3 with φ(λ) = (2π)^{−1} â(−λ) ((a)^{α−1})^∧(λ), and the results of Case 6.3 apply.
One can also handle certain cases where ξ is a moving average of a SαS process with orthogonal increments which is not a stable motion.

CASE 6.4. Let ξ̃ = {ξ̃_t, −∞ < t < ∞} be a SαS process with independent increments and dF ≪ dLeb with φ = dF/dLeb ∈ L_1, and let ξ_t = ∫_{−∞}^∞ a(t−u) dξ̃_u, a(t−·) ∈ L_α(dF), −∞ < t < ∞. Then

    C_ξξ(s,t) = ∫_{−∞}^∞ a(s−u) (a(t−u))^{α−1} φ(u) du.

If the system (II) is time invariant with f ∈ L_1 and if a ∈ L_{α−1}, then a calculation similar to Case 6.3 gives

    Ĉ_Xξ(−λ, μ) = â(−λ) ((a)^{α−1})^∧(μ) φ̂(μ−λ) f̂(λ),

from which f is uniquely determined provided â, ((a)^{α−1})^∧, φ̂ ≠ 0 a.e. If the system (I) is time invariant with f ∈ L_1, and if a is absolutely continuous, then for each t, C_ξξ(s,t) is absolutely continuous in s with derivative ∫_{−∞}^∞ a′(s−u) (a(t−u))^{α−1} φ(u) du, and if a′ ∈ L_1, a ∈ L_{α−1}, then the analogous expression holds with a′ in place of a in the first factor.
We can diagram this case as follows:

    ξ̃ (SαS, independent increments) ──► [moving average A] ──► ξ ──► [linear system f] ──► X

If ξ̃ were available along with X, one could use C_Xξ̃ to determine A∘f, and then attempt to untangle f from it. Equivalently, if A: ξ̃ → ξ is invertible and if one can find A^{−1}, one would apply A^{−1} to the input ξ to generate ξ̃, and then use C_Xξ̃ to find A∘f.

We finally consider inputs that are Fourier transforms of SαS processes with independent increments and time invariant systems (I) and (II).
CASE 6.5. We assume that ξ_t = ∫_{−∞}^∞ cos(tλ) dζ_λ, where ζ has independent SαS increments with F(λ) = sgn(λ)‖ζ_λ‖_α^α bounded, and for system (I) also continuous at zero. Then

    C_ξξ(u,0) = ∫_{−∞}^∞ cos(uλ) dF(λ).
For system (I) we assume that ∫_{−∞}^∞ |λ| dF(λ) < ∞ and f ∈ L_1(R^1, Leb). Then we have from (I′),

    C_Xξ(s,0) = ∫_{−∞}^∞ f(s−u) d_u C_ξξ(u,0) = −∫_{−∞}^∞ ∫_{−∞}^∞ f(s−u) λ sin(uλ) dF(λ) du
              = −∫_{−∞}^∞ λ {∫_{−∞}^∞ f(v) sin[(s−v)λ] dv} dF(λ)
              = −∫_{−∞}^∞ λ {sin(sλ) ∫_{−∞}^∞ f(v) cos(vλ) dv − cos(sλ) ∫_{−∞}^∞ f(v) sin(vλ) dv} dF(λ).

Denoting by f_e, f_o the even and odd parts of f (f = f_e + f_o, f_e(t) = (1/2)[f(t) + f(−t)], f_o(t) = (1/2)[f(t) − f(−t)]) and by f̂_e, f̂_o their (real) Fourier transforms, we have

    C_Xξ(s,0) = −∫_{−∞}^∞ λ f̂_e(λ) sin(sλ) dF(λ) + ∫_{−∞}^∞ λ f̂_o(λ) cos(sλ) dF(λ),

and hence

    C_Xξ,e(s,0) = ∫_{−∞}^∞ λ f̂_o(λ) cos(sλ) dF(λ),
    C_Xξ,o(s,0) = −∫_{−∞}^∞ λ f̂_e(λ) sin(sλ) dF(λ).

Since λ f̂_o(λ) is even and λ f̂_e(λ) is odd, these two integrals determine f̂_e and f̂_o uniquely a.e. [dF]. If in addition dF is absolutely continuous with respect to Lebesgue measure, then f̂_e(λ), f̂_o(λ) are determined for all λ, and thus f_e(t), f_o(t), and hence f(t) also, are determined a.e. System (II) is treated similarly via

    C_Xξ(s,0) = ∫_{−∞}^∞ f̂_e(λ) cos(sλ) dF(λ) + ∫_{−∞}^∞ f̂_o(λ) sin(sλ) dF(λ).
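The even/odd machinery above is elementary to check numerically. The sketch below (an illustration; the f is an arbitrary integrable function, not one from the paper) verifies the decomposition f = f_e + f_o and the parity facts — the cosine transform ∫ f(v)cos(vλ)dv is even in λ and the sine transform ∫ f(v)sin(vλ)dv is odd — that underlie the unique determination of f̂_e and f̂_o:

```python
# Sketch of the even/odd decomposition of Case 6.5: f = f_e + f_o, with the
# real cosine transform even in l and the real sine transform odd in l.
# The f below is an arbitrary illustrative choice, neither even nor odd.
import math

f = lambda t: math.exp(-abs(t)) * (1.0 + 0.3 * t)
f_e = lambda t: 0.5 * (f(t) + f(-t))
f_o = lambda t: 0.5 * (f(t) - f(-t))

def transform(kernel, l, lo=-20.0, hi=20.0, n=4000):
    """Midpoint-rule transform int f(v) kernel(v*l) dv."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) * kernel((lo + (i + 0.5) * h) * l)
               for i in range(n)) * h

t0, l0 = 0.8, 1.3
dec_gap = f(t0) - (f_e(t0) + f_o(t0))                           # decomposition exact
even_gap = transform(math.cos, l0) - transform(math.cos, -l0)   # cosine transform even
odd_gap = transform(math.sin, l0) + transform(math.sin, -l0)    # sine transform odd
print(dec_gap, even_gap, odd_gap)
```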
System Analysis

For certain known SαS inputs, we wish to specify the part of the system function f which is uniquely determined from the distribution of the output. Equivalently, if linear systems with impulse response functions f and g produce outputs X^f and X^g, respectively, to the same SαS input ξ, we find necessary and sufficient conditions on f and g for the outputs of the two systems to have the same distribution, i.e., X^f =_d X^g.

Kanter [7] showed that if the input ξ is a stable motion, 0 < α < 2, and the system (I) is time invariant:

    X^f_t = ∫_{−∞}^∞ f(t−s) dξ_s,   −∞ < t < ∞,

with f ∈ L_α(R^1, Leb), then the distribution of the output X^f determines f up to translation and a global sign. Equivalently, X^f =_d X^g if and only if f(t) = g(t−a) a.e. or f(t) = −g(t−a) a.e. for some real a.
For time invariant system (II) we have the following result.

PROPOSITION 6.6. Let the input ξ be a stable motion with 1 < α < 2, ξ_0 = 0, and let the system (II) be time invariant:

    X^f_t = ∫_{−∞}^∞ f(t−s) ξ_s ds,   −∞ < t < ∞.

(a) If ∫_{−∞}^∞ f(u) du = 0, then the distribution of the output X^f determines f a.e. up to translation and a global sign.

(b) If ∫_{−∞}^∞ f(u) du ≠ 0, then the univariate distributions of the output X^f determine f a.e. up to a global sign.

PROOF: From the proof of Proposition 6.2 we have X^f_t = ∫_{−∞}^∞ a(t,u) dξ_u with

    a(t,u) = ∫_{−∞}^{t−u} f   for u ≥ 0;   = −∫_{t−u}^∞ f   for u < 0.

(a) If ∫_{−∞}^∞ f = 0, then a(t,u) = ∫_{−∞}^{t−u} f for all u, and

    Σ_{n=1}^N a_n X^f_{t_n} = ∫_{−∞}^∞ [Σ_{n=1}^N a_n ∫_{−∞}^{t_n−u} f] dξ_u.

The condition X^f =_d X^g holds if and only if, for all choices of N, a_n and t_n,

    ‖Σ_{n=1}^N a_n X^f_{t_n}‖_α = ‖Σ_{n=1}^N a_n X^g_{t_n}‖_α,

or equivalently

    ∫_{−∞}^∞ |Σ_{n=1}^N a_n ∫_{−∞}^{t_n−u} f|^α du = ∫_{−∞}^∞ |Σ_{n=1}^N a_n ∫_{−∞}^{t_n−u} g|^α du.

By Kanter's result this latter condition holds if and only if, for some real a,

    ∫_{−∞}^t f = ± ∫_{−∞}^{t−a} g   for all t,

or equivalently f(t) = ± g(t−a) a.e. [Leb].
(b) Assume now ∫_{−∞}^∞ f ≠ 0. Then the univariate distributions of X^f determine, for all t,

    ∫_{−∞}^∞ |a(t,u)|^α du = ∫_{−∞}^0 |∫_{t−u}^∞ f|^α du + ∫_0^∞ |∫_{−∞}^{t−u} f|^α du
                           = ∫_t^∞ |∫_v^∞ f|^α dv + ∫_{−∞}^t |∫_{−∞}^v f|^α dv,

and by differentiation in t,

    A(t) = |∫_{−∞}^t f|^α − |∫_t^∞ f|^α.

It is clear that X^f and X^{−f} have identical univariate distributions, so that the univariate distributions of X^f would, at best, determine f a.e. up to a global sign. Putting

    a(t) = ∫_{−∞}^t f,   B_c(x) = |x|^α − |x−c|^α,

we have A(t) = B_{a(∞)}[a(t)]. Also, using d|x|^α/dx = α(x)^{α−1}, we obtain

    B′_c(x) = α[(x)^{α−1} − (x−c)^{α−1}],   −∞ < x < ∞   (α > 1),

and it is easily checked that for all x, B′_c(x) > 0 when c > 0 and B′_c(x) < 0 when c < 0. Thus B_c is strictly monotonic when c ≠ 0. The modulus of a(∞) is determined by |a(∞)| = [A(∞)]^{1/α} = c > 0, say. If we take a(∞) = c, then a(t) is uniquely determined from A(t) = B_c[a(t)] for each t, and thus f is determined uniquely a.e.; let us denote it by f_1. Similarly, if we take a(∞) = −c, f is determined uniquely a.e.; let us denote it by f_2. Since B_{−c}(x) = B_c(−x), it follows that f_2 = −f_1 a.e., and thus f is determined uniquely a.e. up to a global sign.  □
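The monotonicity argument in part (b) can be sketched numerically (an illustration with an arbitrary f of nonzero integral, not the paper's computation): with a(t) = ∫_{−∞}^t f and c = a(∞) > 0, the map B_c(x) = |x|^α − |x−c|^α is strictly increasing, so a(t) — and hence f — is recovered from A(t) = B_c[a(t)] by bisection:

```python
# Sketch of the inversion step in part (b) of Proposition 6.6: B_c is strictly
# increasing for c > 0, so a(t) is recovered from A(t) = B_c[a(t)] by
# bisection.  The f implicit below (a unit-mass triangular bump on [0,2],
# giving the primitive a(t)) is an illustrative choice.
alpha, c = 1.5, 1.0
B = lambda x: abs(x) ** alpha - abs(x - c) ** alpha

def invert(y, lo=-50.0, hi=50.0):
    """Bisection for the strictly increasing B_c (c > 0)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if B(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def a(t):
    """a(t) = int_{-inf}^t f for the triangular f on [0,2]; a(inf) = c = 1."""
    if t <= 0.0: return 0.0
    if t <= 1.0: return 0.5 * t * t
    if t <= 2.0: return 1.0 - 0.5 * (2.0 - t) ** 2
    return 1.0

worst = max(abs(invert(B(a(t))) - a(t)) for t in (-1.0, 0.3, 1.0, 1.7, 3.0))
print(worst)
```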
Kanter's result can also be applied when the input ξ is a moving average of stable motion.

CASE 6.7. Let ξ̃ be an α-stable motion and ξ_t = ∫_{−∞}^∞ a(t−u) dξ̃_u, where a ∈ L_α(R^1, Leb). Further conditions needed for system (I) are the absolute continuity of a and f, g, a′ ∈ L_1(R^1, Leb), and f∗a′ ∈ L_α(R^1, Leb). Then for any t′ < t,

    ξ_t − ξ_{t′} = ∫_{−∞}^∞ {a(t−u) − a(t′−u)} dξ̃_u,

so that for system (I),

    X^f_t = ∫_{−∞}^∞ f(t−s) dξ_s = ∫_{−∞}^∞ {∫_{−∞}^∞ f(t−w) a′(w−u) dw} dξ̃_u = ∫_{−∞}^∞ (f∗a′)(t−u) dξ̃_u.

By Kanter's result, X^f =_d X^g if and only if (f∗a′)(t) = ± (g∗a′)(t−τ) a.e. for some τ, or equivalently f̂(λ)(a′)^∧(λ) = ± ĝ(λ)(a′)^∧(λ) e^{−iτλ} a.e. If we assume now that (a′)^∧ ≠ 0 a.e., the latter condition is f(t) = ± g(t−τ) a.e. The case of system (II) is similar since, by Theorem 4.7,

    X^f_t = ∫_{−∞}^∞ (f∗a)(t−u) dξ̃_u,

and a now plays the role of a′.
Finally, we consider inputs that are Fourier transforms of SαS processes with independent increments and time invariant systems (I) and (II).

CASE 6.8. The assumptions and notation are the same as in Case 6.5. For system (I) we have

    X^f_t = ∫_{−∞}^∞ f(t−s) dξ_s = ∫_{−∞}^∞ {−∫_{−∞}^∞ f(t−s) λ sin(sλ) ds} dζ_λ
          = ∫_{−∞}^∞ λ {f̂_o(λ) cos(tλ) − f̂_e(λ) sin(tλ)} dζ_λ.

It follows that

    C_XX(t,0) = ∫_{−∞}^∞ λ {f̂_o(λ) cos(tλ) − f̂_e(λ) sin(tλ)} [λ f̂_o(λ)]^{α−1} dF(λ)
              = ∫_{−∞}^∞ |λ f̂_o(λ)|^α cos(tλ) dF(λ) − ∫_{−∞}^∞ |λ|^α f̂_e(λ) (f̂_o(λ))^{α−1} sin(tλ) dF(λ),

the first term being its even part and the second its odd part. Hence C_XX(t,0), all t, determines uniquely a.e. [dF] |f̂_o| and f̂_e(f̂_o)^{α−1}, and certainly |f̂_o| and |f̂_e| (and thus also |f̂|). When f is even (f = f_e, f_o = 0), C_XX determines f̂ uniquely, and the distribution of X^f depends on f only through f̂; this follows immediately from the representation of X^f above. We have the same results of course when f is odd. For system (II) a similar calculation yields

    X^f_t = ∫_{−∞}^∞ {f̂_e(λ) cos(tλ) + f̂_o(λ) sin(tλ)} dζ_λ,

from which we obtain results identical to those for system (I).
REFERENCES

[1] J. Bretagnolle, D. Dacunha-Castelle, and J. L. Krivine. Lois stables et espaces L^p. Ann. Inst. Henri Poincaré, Sér. B, 2 (1966), pp. 231-259.

[2] S. Cambanis and E. Masry. On the representation of weakly continuous stochastic processes. Information Sciences, 3 (1971), pp. 277-290.

[3] J. L. Doob. Stochastic Processes. Wiley, New York, 1953.

[4] E. Hewitt and K. Stromberg. Real and Abstract Analysis. Springer-Verlag, New York, 1965.

[5] S. T. Huang and S. Cambanis. Stochastic and multiple Wiener integrals for Gaussian processes. Ann. Probability, 6 (1978), pp. 585-614.

[6] M. Kanter. Linear sample spaces and stable processes. J. Functional Anal., 9 (1972), pp. 441-456.

[7] M. Kanter. The L^p norm of sums of translates of a function. Trans. Amer. Math. Soc., 179 (1973), pp. 35-47.

[8] M. Kanter and W. L. Steiger. Regression and autoregression with infinite variance. Adv. Appl. Prob., 6 (1974), pp. 768-783.

[9] J. Kuelbs. A representation theorem for symmetric stable processes and stable measures on H. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 26 (1973), pp. 259-271.

[10] G. Miller. Some results on symmetric stable distributions and processes. Institute of Statistics Mimeo Series No. 1121 (1977), University of North Carolina at Chapel Hill.

[11] G. Miller. Properties of certain symmetric stable distributions. J. Multivariate Anal., 8 (1978), pp. 346-360.

[12] V. J. Paulauskas. Some remarks on multivariate stable distributions. J. Multivariate Anal., 6 (1976), pp. 356-368.

[13] M. Schilder. Some structure theorems for the symmetric stable laws. Ann. Math. Statist., 41 (1970), pp. 412-421.

[14] I. Singer. Best Approximation in Normed Linear Spaces by Elements of Linear Subspaces. Springer-Verlag, New York, 1970.

[15] S. J. Wolfe. On the local behavior of characteristic functions. Ann. Probability, 1 (1973), pp. 862-866.
REPORT DOCUMENTATION (DD Form 1473, condensed)

Title: Linear Problems in P-th Order and Stable Processes. Authors: Stamatis Cambanis and Grady Miller. Type of report: Technical; Mimeo Series No. 1272. Grant: AFOSR-75-2796. Controlling office: Air Force Office of Scientific Research, Bolling Air Force Base, Washington, DC 20332. Report date: April 1980. Number of pages: 48. Security classification: UNCLASSIFIED. Distribution statement: Approved for public release; distribution unlimited.

Key words: stable processes, p-th order processes, stochastic integrals, covariation, regression estimates, linear estimates, filtering of signals in noise, linear system analysis and identification.