ASYMPTOTIC 0-1 BEHAVIOR OF GAUSSIAN PROCESSES

by

Clifford Qualls¹ and Hisao Watanabe²

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 725
December, 1970
An asymptotic 0-1 behavior of Gaussian processes

By

Clifford Qualls¹, University of North Carolina, Chapel Hill, and University of New Mexico

and Hisao Watanabe², University of North Carolina, Chapel Hill, and Kyushu University (Japan)
0. Introduction. Let {X(t), −∞ < t < ∞} be a real separable Gaussian process defined on a probability space (Ω, A, P). We assume EX(t) ≡ 0 and v²(t) = EX²(t) > 0, and denote the correlation function by

    ρ(s,t) = EX(s)X(t)/(v(s)v(t)).

This paper is concerned with the probability of the event

    E_φ = [X(t) ≤ v(t)φ(t) for all sufficiently large t].
One of the authors in [3] gives conditions on the correlation function so that P(E_φ) = 1 or 0 according as I_φ < ∞ or = ∞, where the quantity

    I_φ = ∫_a^∞ φ(t)^{2/α−1} exp{−φ²(t)/2} dt.

He considers the problem as a type of the so-called law of the iterated logarithm, which originated in the study of sums of independent random variables.
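To see what this criterion says in a concrete case, here is a worked example (ours, not in the original) for the boundary family φ_c(t)² = 2 log t + c log log t:

```latex
% Worked example (not in the original): the critical boundary for I_\varphi.
\varphi_c(t)^2 = 2\log t + c\,\log\log t
\;\Longrightarrow\;
e^{-\varphi_c^2(t)/2} = t^{-1}(\log t)^{-c/2},
\qquad
\varphi_c(t)^{2/\alpha-1} \sim (2\log t)^{1/\alpha-1/2},
\]
\[
I_{\varphi_c} \asymp \int_a^{\infty} t^{-1}(\log t)^{\,1/\alpha-1/2-c/2}\,dt
\;\begin{cases} <\infty, & c > 2/\alpha+1,\\[2pt] =\infty, & c \le 2/\alpha+1,\end{cases}
```

since ∫ t^{−1}(log t)^p dt converges exactly when p < −1. Thus the 0-1 law puts the critical coefficient at c = 2/α + 1; for α = 1 (e.g. the Ornstein–Uhlenbeck case) the critical coefficient is 3, in line with the Kolmogorov–Petrovski test for Brownian motion.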
In this paper, we treat the problem from a different point of view, as a type of crossing problem. The resulting simpler proof shows the above 0-1 behavior holds for a larger class of processes, and also makes the intuitive content of the result clearer. The Gaussian processes (or rather the corresponding correlation functions) now included satisfy:

i) There are positive constants δ, C₁, C₂, T, and α with 0 < α ≤ 2 such that

    1 − C₂h^α ≤ ρ(t,t+h) ≤ 1 − C₁h^α  for 0 ≤ h < δ and all t ≥ T;
¹ Research supported in part by the Office of Naval Research under Contract N00014-67-A-0321-0002.

² Research supported in part by the National Science Foundation under Grant GU-2059.
ii) ρ(t,t+s) = O(s^{−γ}) as s → ∞ uniformly in t, for some γ > 0.

Condition ii) replaces the condition that ρ(t,t+s) = o(1/s) in [3].
There is also a slight improvement in condition i).
Section 1 gives the proof for stationary processes making use of a
recent result due to Pickands [1].
A well known theorem of Slepian [2] is used in Section 2 to extend the
results of Section 1 to nonstationary processes described by conditions i)
and ii) above.
It has been pointed out in [3] that the result of Section 2
can be made to yield (by a time transformation) the analogous 0-1 behavior for Brownian motion and its indefinite integral.
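As an illustration of the time transformation just mentioned (our sketch; the substitution is the standard Ornstein–Uhlenbeck change of time, not spelled out in the original): if W is standard Brownian motion, then

```latex
% Sketch (ours): the Ornstein--Uhlenbeck change of time for Brownian motion.
X(t) = e^{-t}\,W(e^{2t})
\;\Longrightarrow\;
\operatorname{E}X(s)X(t) = e^{-s-t}\min(e^{2s},e^{2t}) = e^{-|t-s|},
```

so X is a stationary Gaussian process with r(h) = e^{−|h|} = 1 − |h| + o(|h|); condition 1) holds with α = 1 and C = 1, and condition 2) holds for every γ > 0. Theorem 1.1 applied to X then translates, via τ = e^{2t}, into a 0-1 law for the event that W(τ) ≤ τ^{1/2} φ(½ log τ) for all large τ.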
1. Stationary Case.
Theorem 1.1. Let {X(t), −∞ < t < ∞} be a real separable stationary Gaussian process with EX(t) = 0 and covariance function r(t) satisfying

1) r(t) = 1 − C|t|^α + o(|t|^α) as t → 0 for some C > 0 and 0 < α ≤ 2;

2) r(t) = O(t^{−γ}) as t → ∞ for some γ > 0.

Then for all functions φ(t) that are positive and nondecreasing on some interval [a,∞), it follows that P(E_φ) = 1 or 0 according as the integral I_φ is finite or infinite.
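For numerical intuition (our sketch, not part of the paper): the Ornstein–Uhlenbeck covariance r(h) = e^{−|h|} satisfies conditions 1) (with α = 1, C = 1) and 2). Its exact discretization is the AR(1) recursion below, and the running maximum can be compared with the critical envelope u(t) = (2 log t)^{1/2}; the step size, horizon, and seed are illustrative choices of ours.

```python
import math
import random

def simulate_ou(n_steps=200_000, dt=0.05, seed=1):
    """Exact discretization of a stationary OU process with r(h) = exp(-|h|)."""
    rng = random.Random(seed)
    a = math.exp(-dt)            # one-step correlation over time dt
    b = math.sqrt(1.0 - a * a)   # innovation scale keeps variance equal to 1
    x = rng.gauss(0.0, 1.0)      # start in the stationary law
    path = [x]
    for _ in range(n_steps - 1):
        x = a * x + b * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou()
n = len(path)
t_final = n * 0.05
# Critical envelope u(t) = sqrt(2 log t); the 0-1 law (alpha = 1) says the
# process eventually stays below any boundary with a convergent integral I_phi
# and exceeds any boundary with a divergent one, infinitely often.
u = math.sqrt(2 * math.log(t_final))
running_max = max(path)
print(f"max over [0, {t_final:.0f}] = {running_max:.3f}, u(t_final) = {u:.3f}")
```

A finite simulation cannot, of course, verify an almost-sure statement, but the running maximum typically lands near u(t), the level that separates the two regimes of Theorem 1.1.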
Note that condition 1) implies the process X(t) has continuous sample functions. The following lemma will be needed for the first half of the proof of this theorem.
Lemma 1.2. If condition 1) of Theorem 1.1 holds and

    A(t) = inf{(1 − r(s))/|s|^α : 0 ≤ s ≤ t} > 0,

then

    lim_{x→∞} P[max_{0≤s≤t} X(s) > x] / (C^{1/α} t x^{2/α} Ψ(x)) = H_α < ∞,

where Ψ(x) ~ (2π)^{−1/2} x^{−1} e^{−x²/2} as x → ∞ and

    H_α = lim_{T→∞} T^{−1} ∫₀^∞ e^s P[sup_{0≤t≤T} Y(t) > s] ds < ∞,

Y(t) being a nonstationary Gaussian process with mean EY(t) = −|t|^α and covariance cov(Y(s),Y(t)) = |s|^α + |t|^α − |t−s|^α.
Proof. This is Lemma 2.9 of Pickands [1]. In addition to condition 1), Pickands required that

    A₁(t) = inf{(1 − r²(s))/|s|^α : 0 ≤ s ≤ t} > 0.

However, checking Pickands' proofs, we note that this requirement can be replaced by only the requirement that A(t) > 0. In other words, these proofs permit r(s) = −1 but not r(s) = 1 in the interval 0 < s ≤ t.

Remark. If it should happen that A(t) = 0, then there is a smallest s₀ > 0 such that r(s₀) = 1, and then both r(s) and X(s) are periodic with period s₀. Since r(s₀) = 1 implies max_{(0,t)} X(s) = max_{(0,T)} X(s) where T = min(t,s₀), the requirement that A(t) > 0 can be eliminated from Lemma 1.2 by replacing the denominator t x^{2/α} Ψ(x) by T x^{2/α} Ψ(x).
Proof of Theorem 1.1 when I_φ < ∞. Only condition 1) of Theorem 1.1 will be required for this half of the proof. Considering the sequence of intervals [n, n+1] with integer end points, we obtain

    ∞ > I_φ ≥ Σ_{n=N}^∞ φ(n+1)^{2/α} Ψ(φ(n+1)),

where N is chosen large enough that the integrand of I_φ is a decreasing function in the argument φ(t). Define

    F_n = [max_{n≤s≤n+1} X(s) ≤ φ(n)].

Since φ(n) → ∞ as n → ∞, Lemma 1.2 gives a positive constant K such that P(F_n^c) ≤ K φ(n)^{2/α} Ψ(φ(n)), and we obtain Σ_{n=N}^∞ P(F_n^c) < ∞. The Borel-Cantelli lemma completes this half of the proof.
Before proceeding to the second half of the proof, we will need the
following lemmas.
Lemma 1.3. If condition 1) of Theorem 1.1 holds and A(t) > 0, then

    P[max_{0≤k≤m} X(k a x^{−2/α}) > x] ~ C^{1/α} t (H_α(a)/a) x^{2/α} Ψ(x) as x → ∞,

where a > 0, m = [t a^{−1} x^{2/α}], [ ] denotes the greatest integer function, and H_α(a) is a certain positive constant.

Proof. This is Lemma 2.5 of Pickands [1]. All the remarks given for Lemma 1.2 also apply here.
Lemma 1.4. If Theorem 1.1 for the case I_φ = ∞ is true under the additional restriction that for large t, 2 log t ≤ φ²(t) ≤ 3 log t, then it is true without this restriction.
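The lower restriction in Lemma 1.4 is tied to the function u(t) defined by u²(t) = 2 log t, whose integral always diverges; a one-line check (ours, not in the original):

```latex
% With u^2(t) = 2\log t,
u^2(t) = 2\log t \;\Rightarrow\; e^{-u^2(t)/2} = t^{-1},
\qquad
I_u = \int_a^{\infty} (2\log t)^{\,1/\alpha-1/2}\, t^{-1}\, dt = \infty,
% since 1/\alpha - 1/2 \ge 0 for 0 < \alpha \le 2.
```

This is what makes u(t) the natural comparison level in the proof below.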
Proof. A similar statement could have been made for the case I_φ < ∞, but it was not needed in the proof. The restriction that φ²(t) ≤ 3 log t is treated in Lemma 4.1 of [3], and the restriction that 2 log t ≤ φ²(t) requires only a slight modification of Lemma 4.1 of [3].

Let u(t) be defined by u²(t) = 2 log t, and set ψ = max(φ, u). Since X(t) > ψ(t) i.o. (as t → ∞) implies X(t) > φ(t) i.o., and since 2 log t ≤ ψ²(t), we need only show that I_φ = ∞ implies I_ψ = ∞.

Letting A = {t > a : φ(t) ≤ u(t)} and B = {t > a : φ(t) > u(t)}, we write I_φ = A_φ + B_φ, where for example B_φ means ∫_B φ(t)^{2/α−1} exp{−φ²(t)/2} dt. Similarly, I_u = A_u + B_u and I_ψ = A_u + B_φ. Suppose I_φ = A_φ + B_φ = ∞. We may suppose B_φ < ∞, for otherwise I_ψ ≥ B_φ = ∞ and the lemma follows immediately. Note that I_u = ∞, since exp{−u²(t)/2} = 1/t; consequently, if we show that B_φ < ∞ implies B_u < ∞, then A_u = I_u − B_u = ∞ and I_ψ = A_u + B_φ = ∞, as required.

If t₀ ∈ B, then there is a (largest) nonempty interval in B containing t₀, so B is a union of such (disjoint) intervals, whose lengths and left end points will be denoted by Δ_n and t_n. (Unfortunately, it may not be possible to index these intervals according to their order on the line.) Since B_φ < ∞, the Δ_n's are finite numbers (an infinite interval would make A bounded and hence B_φ = I_φ − A_φ = ∞), and we may assume the sequence {t_n} is infinite, for otherwise B is bounded and B_u < ∞ trivially. Since u is continuous and the jumps of the nondecreasing function φ are never downward, φ(t_n+Δ_n) = u(t_n+Δ_n). Now on the nth interval φ(t) > u(t) ≥ u(t_n) and φ(t) ≤ φ(t_n+Δ_n), so that

    B_φ ≥ Σ_n Δ_n u(t_n)^{2/α−1} exp{−φ²(t_n+Δ_n)/2} = Σ_n Δ_n (2 log t_n)^{1/α−1/2} / (t_n+Δ_n),

while

    B_u ≤ Σ_n Δ_n u(t_n+Δ_n)^{2/α−1} exp{−u²(t_n)/2} = Σ_n Δ_n (2 log(t_n+Δ_n))^{1/α−1/2} / t_n.

From B_φ < ∞ we get Δ_n/(t_n+Δ_n) → 0, and hence

    Σ_n Δ_n/t_n = Σ_n [Δ_n/(t_n+Δ_n)] {1 − Δ_n/(t_n+Δ_n)}^{−1} < ∞

and log(t_n+Δ_n) ~ log t_n. Thus the terms of the series bounding B_u above are dominated by a constant multiple of the corresponding terms of the series bounding B_φ below, and B_φ < ∞ implies B_u < ∞, which completes the proof.
Lemma 1.5. Let X(t) be a Gaussian process with zero mean function and covariance function r(s,t) with r(t,t) = 1. Let E_n = [X(t_{n,v}) ≤ x_{n,v} : v = 0, …, m_n] with all t_{n,v} distinct. Then

    |P(∩_{i=1}^n E_i) − ∏_{i=1}^n P(E_i)| ≤ Σ_{1≤i<j≤n} Σ_{v,μ} |r(t_{i,v}, t_{j,μ})| ∫₀¹ φ(x_{i,v}, x_{j,μ}; λr) dλ,

where φ(x,y; λr) is the standard bivariate normal density with correlation coefficient λr = λr(t_{i,v}, t_{j,μ}).
Proof. This type of lemma now appears in many proofs of asymptotic independence for crossing problems. We include the "standard" proof with the necessary differences.

The event ∩_{i=1}^n E_i involves the random variables X(t_{n,v}), and the corresponding covariance matrix will be denoted by Σ = (r_{kl}), where the doubly indexed random variables X(t_{n,v}) have been renumbered by a single index k. Partition Σ = (Σ_{ij}) so that each submatrix Σ_{ij} is the covariances of the random variables of E_i with those of E_j. Now the events E_k would be independent if and only if the corresponding covariance matrix were Σ⁰ = (Σ⁰_{ij}) with Σ⁰_{ii} = Σ_{ii} but Σ⁰_{ij} = 0 for i ≠ j. Let Σ_λ = λΣ + (1−λ)Σ⁰ be the covariance matrix for the standardized multivariate normal density φ_λ(y), and let

    F(λ) = ∫_{−∞}^{x_1} ⋯ ∫_{−∞}^{x_m} φ_λ(y) dy,

where m is the total number of variables and the x_k are the renumbered levels x_{n,v}; note that F(1) = P(∩_{i=1}^n E_i) and F(0) = ∏_{i=1}^n P(E_i). Since ∂φ_λ/∂r_{kl} = ∂²φ_λ/∂y_k∂y_l (k ≠ l), and since dr_{λ,kl}/dλ = 0 or r_{kl} according to whether (k,l) refers to a diagonal submatrix Σ_{kk} or not, we obtain by the chain rule

    |F′(λ)| = |Σ* r_{kl} ∫_{−∞}^{x_1} ⋯ ∫_{−∞}^{x_m} (∂²φ_λ/∂y_k∂y_l) dy| ≤ Σ* |r_{kl}| φ(x_k, x_l; λr_{kl}),

where the double sum Σ* extends over all (k,l) that refer to covariances of Σ_{ij}, i < j. Integrating this inequality with respect to λ finishes the proof.
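In the simplest instance of Lemma 1.5 (two events, one variable each) the bound reads |P(X ≤ x, Y ≤ y) − Φ(x)Φ(y)| ≤ |r| ∫₀¹ φ(x, y; λr) dλ. The following sketch (ours; plain quadrature, and the levels x, y, r are illustrative choices) checks this numerically:

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def Phi(u):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def biv_cdf(x, y, r, n=4000):
    """P(X<=x, Y<=y) for a standard bivariate normal with correlation r,
    via P = integral of Phi((y - r u)/sqrt(1-r^2)) * phi(u) du (midpoint rule)."""
    lo = -8.0                       # effectively -infinity for this accuracy
    h = (x - lo) / n
    s = math.sqrt(1.0 - r * r)
    total = 0.0
    for k in range(n):
        u = lo + (k + 0.5) * h
        total += Phi((y - r * u) / s) * phi(u) * h
    return total

def biv_density(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    s2 = 1.0 - rho * rho
    q = (x * x - 2 * rho * x * y + y * y) / (2 * s2)
    return math.exp(-q) / (2 * math.pi * math.sqrt(s2))

x, y, r = 1.5, 2.0, 0.4
lhs = abs(biv_cdf(x, y, r) - Phi(x) * Phi(y))
# Right side of Lemma 1.5: |r| * integral over [0,1] of phi(x, y; lambda*r).
m = 2000
rhs = abs(r) * sum(biv_density(x, y, ((k + 0.5) / m) * r) for k in range(m)) / m
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")
```

In this two-variable case the bound is in fact an identity for r > 0, by the classical formula ∂P/∂ρ = φ(x,y;ρ); the content of Lemma 1.5 is that the same pairwise quantities control arbitrarily many events.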
Proof of Theorem 1.1 when I_φ = ∞. Note that condition 2) of Theorem 1.1 eliminates the periodic case discussed following Lemma 1.2. Define a sequence of intervals by I_k = [kΔ, kΔ+β], where 0 < β < Δ shall be chosen later. Let

    G_k = {t_{k,v} = kΔ + v/n_k : v = 0, …, [βn_k]}, where n_k = φ(kΔ+β)^{2/α},

be points in I_k, and let E_k = [max_{s∈G_k} X(s) ≤ φ(kΔ+β)]. Now I_φ = ∞ implies that

    Σ_k βφ(kΔ+β)^{2/α} Ψ(φ(kΔ+β)) = ∞,

and Lemma 1.3 implies there is a positive constant K such that

    P(E_k^c) ~ Kβφ(kΔ+β)^{2/α} Ψ(φ(kΔ+β)) as φ(kΔ+β) → ∞.

So Σ_k P(E_k^c) = ∞. As in the Borel lemma, the main step is

    lim_{m→∞} P(∩_{k≥m} E_k) = lim_{m→∞} ∏_{k≥m} P(E_k) + lim_{m→∞} {P(∩_{k≥m} E_k) − ∏_{k≥m} P(E_k)} = 0.

The first limit is zero because Σ P(E_k^c) = ∞, and the second limit will be zero because of the asymptotic independence of the events E_k. Note the separation between the I_k's is Δ − β. By Lemma 1.5, we have

    A_{m,n} = |P(∩_{k=m}^n E_k) − ∏_{k=m}^n P(E_k)| ≤ Σ_{m≤i<j≤n} Σ_{μ=0}^{[βn_i]} Σ_{v=0}^{[βn_j]} |r| ∫₀¹ φ(φ(iΔ+β), φ(jΔ+β); λr) dλ,

where r = r(t_{j,v} − t_{i,μ}). Because of condition 2) of Theorem 1.1, and because t_{j,v} − t_{i,μ} ≥ jΔ − iΔ − β ≥ Δ − β, we have |r(t_{j,v} − t_{i,μ})| ≤ M₀(jΔ−iΔ−β)^{−γ} for all j > i ≥ m and some positive constant M₀; in fact, for M = M₀(1 − β/Δ)^{−γ}, we have |r| ≤ M(jΔ−iΔ)^{−γ}, and m can be chosen large enough that |r| ≤ δ ≡ min(½, γ/5). By Lemma 1.4, we can also take m large enough that φ²(kΔ+β) ≥ u²(kΔ+β) ≡ 2 log(kΔ+β) for k ≥ m. Since φ(x,y; λr) is a decreasing function in each of the variables x and y, we obtain

    φ(φ(iΔ+β), φ(jΔ+β); λr)
      ≤ (2π)^{−1}(1−λ²r²)^{−1/2} exp{−[u²(iΔ+β) − 2λr u(iΔ+β)u(jΔ+β) + u²(jΔ+β)] / (2(1−λ²r²))}
      ≤ (2π)^{−1}(1−δ²)^{−1/2} [1/((iΔ+β)(jΔ+β))]^{1−2|r|}.

By Lemma 1.4 again, n_i = φ(iΔ+β)^{2/α} ≤ (3 log(iΔ+β))^{1/α}, and since |r| ≤ γ/5, we have

    A_{m,∞} ≤ K Σ_{m≤i<j<∞} (log(iΔ+β))^{1/α} (log(jΔ+β))^{1/α} (j−i)^{−γ} [(iΔ+β)(jΔ+β)]^{−(1−2γ/5)}.

This double series can be seen to be convergent by letting k = j − i and obtaining

    A_{m,∞} ≤ K′ Σ_{i=m}^∞ (iΔ+β)^{−(1−2γ/5)} Σ_{k=1}^∞ (log(kΔ+iΔ+β))^{2/α} k^{−γ} (kΔ+iΔ+β)^{−(1−2γ/5)};

carrying out the sum on k shows that A_{m,∞} is dominated by a convergent series whose terms are of the order i^{−(1+γ/5)} (log i)^{2/α}. Since this series is convergent, lim_{m→∞} A_{m,∞} = 0, where A_{m,∞} = lim_{n→∞} A_{m,n}, and the proof is complete.

2. Nonstationary case.
We now extend the result for the stationary case to the nonstationary
case treated in [3].
Here, let {X(t), −∞ < t < ∞} be a real separable Gaussian process with zero mean function. We assume v²(t) = EX²(t) > 0 and denote the covariance function of X(t)/v(t) by ρ(s,t) = EX(s)X(t)/(v(s)v(t)). We shall assume also that ρ(s,t) is continuous in Theorem 2.3. This assumption is not needed in Theorem 2.1 (nor the introduction), since condition 3) below implies ρ(s,t) is continuous.
Theorem 2.1. Suppose the above process X(t) satisfies

3) there are positive constants δ₁, C₁, T₁, and α with 0 < α ≤ 2 such that ρ(t,t+h) ≥ 1 − C₁h^α for 0 < h < δ₁ and all t > T₁.

Then for all functions φ(t) that are positive and nondecreasing on some interval [a,∞) such that I_φ < ∞, we have P(E_φ) = 1.
This theorem is Theorem 1 of [3], but we include the following different proof.
Proof. Let Y(t) be a separable stationary Gaussian process having a covariance function q(h) satisfying q(h) ≤ ρ(t,t+h) and q(h) = 1 − C₁*|h|^α + o(|h|^α) as h → 0, for some C₁* > C₁. The first requirement then holds for 0 ≤ h ≤ δ₁* with a suitable δ₁* > 0, by condition 3). By a well-known result due to Slepian (see Theorem 1 in [2]), the fact that q(h) ≤ ρ(t,t+h) for all t implies

    P[sup_{(nΔ,nΔ+Δ)} Y(s) ≥ u] ≥ P[sup_{(nΔ,nΔ+Δ)} X(s) ≥ u]

for Δ < δ₁* and nΔ > T₁. Now following the "proof of Theorem 1.1 when I_φ < ∞", we have Σ P(G_n^c) ≤ Σ P(F_n^c) < ∞, where F_n = [sup_{(nΔ,nΔ+Δ)} Y(s) ≤ φ(nΔ)] and G_n = [sup_{(nΔ,nΔ+Δ)} X(s) ≤ φ(nΔ)]. The Borel-Cantelli lemma applied to the G_n completes the proof.
We shall need the following lemma for the nonstationary result when I_φ = ∞.
Lemma 2.2. Let ρ(s,t) be a continuous covariance function with ρ(t,t) = 1. Suppose ρ(t,t+s) → 0 uniformly in t as s → ∞. Then for every Δ > 0, there is a constant δ < 1 such that |ρ(t,t+s)| ≤ δ < 1 for all s ≥ Δ and all t ≥ 0.
Proof. Let R(s) = sup_{t≥0} |ρ(t,t+s)| = sup_{τ≥0} |ρ(τs, τs+s)|. Now R(0) = 1 and R(s) → 0 as s → ∞, and R(s) is continuous since ρ is uniformly continuous. The conclusion of Lemma 2.2 fails only if R(s₀) = 1 for some s₀ > 0; we now suppose that R(s₀) = 1. Then there is a sequence {τ_n} such that r_n(s₀) = ρ²(τ_n s₀, τ_n s₀ + s₀) → 1, and (passing to a subsequence) the corresponding correlations converge to a limit r₀ with r₀(0) = r₀(s₀) = 1, i.e. to a periodic covariance function with period s₀. In terms of the spectral distribution functions, which we denote by F_n(λ), this means

    lim_n ∫_{−∞}^∞ (1 − cos s₀λ) dF_n(λ) = 0.

Since 1 − cos 2s₀λ ≤ 4(1 − cos s₀λ), this implies R(2s₀) = 1, and consequently the contradictory statement that R(2^k s₀) = 1 for all integers k ≥ 0; this contradicts the hypothesis that ρ(t,t+s) → 0 as s → ∞ uniformly in t.
Theorem 2.3. Let the above process X(t) have a continuous correlation function satisfying:

3′) there are positive constants δ₂, C₂, T₂ and α′ with 0 < α′ ≤ 2 such that ρ(t,t+h) ≤ 1 − C₂h^{α′} for 0 < h < δ₂ and all t > T₂; and

4) ρ(t,t+s) = O(s^{−γ}) uniformly in t as s → ∞ for some γ > 0.

Then for all functions φ(t) as in Theorem 2.1 with I_φ = ∞, we have

    P[X(t) > v(t)φ(t) i.o. in t] = 1.
Proof. Let Y(t) be a separable stationary Gaussian process having a covariance function q(h) satisfying ρ(t,t+h) ≤ q(h) for 0 < h < δ₂* and q(h) = 1 − C₂*|h|^{α′} + o(|h|^{α′}) as h → 0 (C₂* < C₂). Applying Slepian's result (see Theorem 1 in [2]) and the "proof of Theorem 1.1 when I_φ = ∞", we obtain

    ∞ = Σ P(E_n^c) ≤ Σ P(H_n^c),

where E_k = [max_{s∈G_k} Y(s) ≤ φ(kΔ+β)] and H_k = [max_{s∈G_k} X(s) ≤ φ(kΔ+β)], for Δ < δ₂*. Consequently, it only remains to show that the events H_n are asymptotically independent (i.e. in the nonstationary case). Lemma 1.5 yields, as in the "proof of Theorem 1.1 when I_φ = ∞",

    A_{m,n} = |P(∩_{k=m}^n H_k) − ∏_{k=m}^n P(H_k)| ≤ Σ_{m≤i<j≤n} Σ_{v,μ} |ρ| ∫₀¹ φ(φ(iΔ+β), φ(jΔ+β); λρ) dλ,

where ρ = ρ(t_{i,v}, t_{j,μ}). Since t_{j,μ} − t_{i,v} ≥ jΔ − iΔ − β ≥ Δ − β, we have |ρ| ≤ M₀(t_{j,μ} − t_{i,v})^{−γ} ≤ M(j−i)^{−γ} for all i < j and some positive constant M; also |ρ| ≤ δ < 1 for all t_{i,v}, t_{j,μ} and all Δ, by Lemma 2.2. Letting k = j − i and using Lemma 1.4, we have that for all k > some k₀, |ρ| ≤ Mk^{−γ} < γ/5, and the corresponding portion of the double series dominating A_{m,∞} is convergent as in the proof for the stationary case. Since here we cannot choose m large enough that |ρ| is small for the remaining pairs, for 1 ≤ k ≤ k₀ we estimate

    ∫₀¹ φ(u(iΔ+β), u(jΔ+β); λρ) dλ ≤ (2π)^{−1}(1−δ²)^{−1/2} [1/((iΔ+β)(jΔ+β))]^{(1+δ)^{−1}}.

But 2(1+δ)^{−1} > 1 implies the remaining part of the series dominating A_{m,∞} is also convergent. Q.E.D.

Remark.
If conditions 3) and 3') hold simultaneously, then
α ≤ α′. The nonstationary case theorem exactly analogous to Theorem 1.1 holds if conditions 3) and 3′) with α = α′ and condition 4) hold.
Of course Theorem 6 of [3], which is used by Watanabe as a proof of
the asymptotic 0-1 behavior of Brownian motion and its indefinite integral, can be improved by using a hypothesis analogous to condition 4) of
Theorem 2.3.
References.

[1] J. Pickands (1969). Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc. 145, 51-73.

[2] D. Slepian (1962). The one-sided barrier problem for Gaussian noise. Bell Sys. Tech. J. 41, 463-501.

[3] H. Watanabe (1970). An asymptotic property of Gaussian processes, I. Trans. Amer. Math. Soc. 148, 233-248.