Qualls, C. and Watanabe, H. (1971). "Asymptotic properties of Gaussian processes."

ASYMPTOTIC PROPERTIES OF GAUSSIAN PROCESSES

by

Clifford Qualls^1 and Hisao Watanabe^2
Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 736
February, 1971

1. Research supported in part by the Office of Naval Research under Contract
No. N00014-67-A-0321-0002.  Visiting from University of New Mexico.
2. Research supported in part by the National Science Foundation under Grant
No. GU-2059.  Visiting from Kyushu University (Japan).

ASYMPTOTIC PROPERTIES OF GAUSSIAN PROCESSES

By

Clifford Qualls^1, University of North Carolina at Chapel Hill
and University of New Mexico

and Hisao Watanabe^2, University of North Carolina at Chapel Hill
and Kyushu University (Japan)
0. Introduction.  Let $\{X(t),\ -\infty < t < \infty\}$ be a real separable Gaussian process defined on a probability space $(\Omega, A, P)$.  We assume $EX(t) = 0$, $v^2(t) = EX^2(t) > 0$, and that the covariance function $r(t,s) = E(X(t)X(s))$ is continuous with respect to $t$ and $s$.  And we set $\rho(t,s) = r(t,s)/(v(t)v(s))$.  In this paper, we treat two problems concerned with Gaussian processes whose correlation functions satisfy

(0.1)    $\rho(t,s) = 1 - |t-s|^\alpha H(|t-s|) + o(|t-s|^\alpha H(|t-s|))$    as $|t-s| \to 0$,

where $0 \le \alpha \le 2$ and $H$ varies slowly at zero.
In §1, we relate the magnitude of the tails of the spectral function to the order of continuity of the correlation function.  As a consequence, we can show directly the equivalence (in very useful cases) of the Kahane-Nisio condition and the Fernique-Marcus and Shepp condition, which are necessary and sufficient conditions for $X(t)$ to have continuous sample functions in terms of the spectral function and the correlation function, respectively.
The second problem (§2) is to extend a result of Pickands [11], which gives the asymptotic distribution of the maximum $Z(t) = \max_{0 \le s \le t} X(s)$, to condition (0.1) with $0 < \alpha \le 2$.  Pickands treated stationary Gaussian processes whose covariance function $\rho(t,s) = \rho(|t-s|)$ satisfies condition (0.1) with $0 < \alpha \le 2$ for $H(|t-s|) =$ a constant.  Such studies have been done for Hölder continuity of sample functions by Marcus [8], Kôno [6], and Sirao and Watanabe [13].  Our efforts using Pickands' methods to give the asymptotic distribution of $Z(t)$ for the case $\alpha = 0$ were not successful.  (This is not too surprising.)

In §3, we use the result of §2 to obtain the extension of the 0-1 behavior treated in Watanabe [15] and Qualls and Watanabe [12] to our present case.  The method in [12] turns out to be powerful, while the method in [15] doesn't seem to be successful in our case.  Section 4 treats the non-stationary case; in §2 and §3, we assume stationarity.
1.  Existence of stationary Gaussian processes.  We list some definitions and properties of regularly varying functions that will be required in this and following sections.  Some general references on regular variation are Karamata [5], Adamovic [1], and Feller [2].

Definition 1.1.  A positive function $H(x)$ defined for $x > 0$ varies slowly at infinity (at zero) if for all $t > 0$,

(1.1)    $\lim_{x \to \infty\ (x \to 0)} H(tx)/H(x) = 1$.

Definition 1.2.  A positive function $Q(x)$ defined for $x > 0$ varies regularly at infinity (at zero) with exponent $\alpha \ge 0$ if for all $t > 0$,

(1.2)    $\lim_{x \to \infty\ (x \to 0)} Q(tx)/Q(x) = t^\alpha$.

A function $Q(x)$ satisfies (1.2) if and only if $Q(x) = x^\alpha H(x)$, where $H(x)$ varies slowly.  Also, $H(x)$ varies slowly at infinity if and only if $H(1/x)$ varies slowly at zero.

Let $Q(x)$ vary regularly with exponent $\alpha \ge 0$ and $H(x)$ vary slowly at infinity.  Then, the following properties hold.
(1.3)  The limits (1.1) and (1.2) converge uniformly in $t$ on any compact subset of the half line $(0,\infty)$.

(1.4)  For any $\varepsilon > 0$,

    $\lim_{x \to \infty} x^{\varepsilon} H(x) = \infty$    and    $\lim_{x \to \infty} x^{-\varepsilon} H(x) = 0$.

(1.5)  The function $H(x)$ varies slowly if and only if

    $H(x) = a(x) \exp\{\int_1^x (\varepsilon(t)/t)\,dt\}$,

where $\varepsilon(x) \to 0$ and $a(x) \to A$ $(0 < A < \infty)$ as $x \to \infty$.

Definition 1.3.  The slowly varying function $H(x)$ is said to be "normalized" if $a(x) \equiv A$ in property (1.5) above.

(1.6)  If $H(x)$ is a "normalized" slowly varying function at infinity, then for any $\varepsilon > 0$, there exists $K$ such that

    $t^{\varepsilon} \le H(tx)/H(x) \le t^{-\varepsilon}$

for all $x > 0$ and all positive $t < 1$ such that $tx \ge K$.

(1.7)  If $H(x)$ is a "normalized" slowly varying function at zero, then for any $\varepsilon > 0$, there exists $\delta > 0$ such that

    $t^{-\varepsilon} \le H(tx)/H(x) \le t^{\varepsilon}$

for all $x > 0$ and all $t > 1$ such that $tx \le \delta$.

(1.8)  If $H(x)$ is a "normalized" slowly varying function at infinity, then for any $\delta > 0$ the function $x^{\delta} H(x)$ is eventually monotone increasing and $x^{-\delta} H(x)$ is eventually monotone decreasing.

(1.9)  If $H(x)$ varies slowly at infinity, then

    $\int_0^x y^p H(y)\,dy \Big/ \big(x^{p+1} H(x)\big) \to \frac{1}{p+1}$    for $p+1 > 0$,

and

    $\int_x^\infty y^p H(y)\,dy \Big/ \big(x^{p+1} H(x)\big) \to \frac{-1}{p+1}$    for $p+1 < 0$.

Moreover, if $\int_x^\infty (H(y)/y)\,dy$ exists, then it varies slowly at infinity and

    $H(x) \Big/ \int_x^\infty (H(y)/y)\,dy \to 0$.
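Property (1.9) — Karamata's theorem — is easy to probe numerically.  The Python sketch below (an illustration of ours, not part of the original text) integrates $y^p H(y)$ for the slowly varying $H(y) = \log y$ and compares the result with the asymptotic $x^{p+1}H(x)/(p+1)$; the ratio drifts toward 1 as $x$ grows.

```python
import math

def karamata_ratio(x, p=1.0, H=math.log, n=200_000):
    """Check property (1.9): for H slowly varying at infinity,
    int_1^x y^p H(y) dy  ~  x^(p+1) H(x) / (p+1)  as x -> infinity.
    Returns the ratio of the integral (composite Simpson's rule)
    to its Karamata asymptotic; the ratio should approach 1."""
    a, b = 1.0, float(x)
    h = (b - a) / n                      # n must be even
    f = lambda y: y ** p * H(y)
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    integral = s * h / 3.0
    return integral / (x ** (p + 1) * H(x) / (p + 1))

# For H = log the relative error is O(1/log x), so convergence is slow:
for x in (1e4, 1e6):
    print(x, karamata_ratio(x))
```

For $H = \log$ the exact ratio is $1 - 1/(2\log x) + O(x^{-2})$, so the approach to 1 is logarithmically slow — the same slowness that makes the critical cases of the later sections delicate.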
Theorem 1.1.  Let $H(x)$ vary slowly at infinity.  Let $f$ be a positive even function with $\int_{-\infty}^{\infty} f(x)\,dx = 1$ satisfying the following condition.  Assume that there exist positive constants $C_1$, $C_2$, and $K$ such that

    $\frac{C_1 H(x)}{x^{\alpha+1}} \le f(x) \le \frac{C_2 H(x)}{x^{\alpha+1}}$    for $x \ge K$,

where $0 < \alpha < 2$.  Then, if we set $\rho(t) = \int_{-\infty}^{\infty} e^{itx} f(x)\,dx$ and $\sigma^2(h) = 2(1 - \rho(h))$, we have

(1.10)    $C_1 D_\alpha \le \liminf_{h \to 0} \frac{\sigma^2(h)}{|h|^{\alpha} H(1/|h|)} \le \limsup_{h \to 0} \frac{\sigma^2(h)}{|h|^{\alpha} H(1/|h|)} \le C_2 D_\alpha$,

where $D_\alpha = \int_0^\infty (8\sin^2(x/2)/x^{\alpha+1})\,dx$.

Remark 1.1.  The $\rho(t)$ in Theorem 1.1 is a covariance function of a stationary Gaussian process.
Proof.  Without loss of generality, we take $H(x)$ to be "normalized".  First, we note that

    $\sigma^2(h) = 2(1 - \rho(h)) = 8\int_0^\infty \sin^2(hx/2) f(x)\,dx$
    $= 8\int_0^{K_1} \sin^2(hx/2) f(x)\,dx + 8\int_{K_1}^\infty \sin^2(hx/2) f(x)\,dx = I_1(h) + I_2(h)$

for arbitrary $K_1 \ge K$.  We may ignore $I_1(h)$ in Theorem 1.1, since $\lim_{h \to 0} I_1(h)/h^2 = 2\int_0^{K_1} x^2 f(x)\,dx < \infty$.  Now we estimate $I_2(h)$ by $C_1 J(h) \le I_2(h) \le C_2 J(h)$, where $J(h) = 8\int_{K_1}^\infty \sin^2(hx/2)\, x^{-(\alpha+1)} H(x)\,dx$.  By a change of variables,

(1.11)    $J(h) = h^{\alpha}\int_{hK_1}^\infty \frac{8\sin^2(x/2)}{x^{\alpha+1}}\, H(x/h)\,dx = h^{\alpha}\Big[\int_{hK_1}^1 + \int_1^{\delta} + \int_{\delta}^\infty\Big]\frac{8\sin^2(x/2)}{x^{\alpha+1}}\, H(x/h)\,dx$    (say).

For the first integral of (1.11), we use property (1.6) to obtain $H(x/h)/H(1/h) \le x^{-\varepsilon_1}$ for $hK_1 \le x < 1$, where we choose $\varepsilon_1 > 0$ with $\alpha + \varepsilon_1 < 2$ (which is possible since $0 < \alpha < 2$); the corresponding integrand is then dominated by $2x^{1-\alpha-\varepsilon_1}$, which is integrable near zero.  For the third term of the right hand side in (1.11), we use property (1.8) to obtain $H(x/h)/H(1/h) \le x^{\varepsilon_2}$ for $x > \delta$, where we choose $\varepsilon_2 > 0$ with $\alpha - \varepsilon_2 > 0$; the integrand is then dominated by $8x^{\varepsilon_2-\alpha-1}$, whose integral over $(\delta,\infty)$ is small for $\delta$ sufficiently large.  Since $H(\delta/h)/H(1/h) \to 1$ as $h \to 0$ and $\lim_{h \to 0} H(x/h)/H(1/h) = 1$ uniformly with respect to $x$ in each finite interval $(0,\delta)$, we have, as $h \to 0$,

    $\frac{1}{H(1/h)}\int_{hK_1}^{\delta}\frac{8\sin^2(x/2)}{x^{\alpha+1}}\, H(x/h)\,dx - \int_0^{\delta}\frac{8\sin^2(x/2)}{x^{\alpha+1}}\,dx \to 0$.

Since $\int_0^\infty (8\sin^2(x/2)/x^{\alpha+1})\,dx < \infty$ and $\delta$ is arbitrary, we have the desired result.  $\square$
Corollary 1.1.  Under the conditions of Theorem 1.1, we have

    $0 < \frac{\alpha C_1}{C_2}\int_0^\infty \frac{8\sin^2(x/2)}{x^{\alpha+1}}\,dx \le \liminf_{h \to 0+}\frac{\sigma^2(h)}{1 - F(1/h)} \le \limsup_{h \to 0+}\frac{\sigma^2(h)}{1 - F(1/h)} \le \frac{\alpha C_2}{C_1}\int_0^\infty \frac{8\sin^2(x/2)}{x^{\alpha+1}}\,dx$,

where $F(x) = \int_{-\infty}^x f(u)\,du$.

Proof.  First, for $h$ sufficiently small,

    $C_1\int_{1/h}^\infty \frac{H(u)}{u^{\alpha+1}}\,du \le 1 - F(1/h) \le C_2\int_{1/h}^\infty \frac{H(u)}{u^{\alpha+1}}\,du$.

From this and using property (1.9) and Theorem 1.1, we obtain the corollary.  $\square$

Remark 1.2.  We can find discussions similar to Theorem 1.1 and Corollary 1.1 in Garsia and Lamperti [3] and Ibragimov and Linnik [4].
Next we consider the case $\alpha = 0$.

Theorem 1.2.†  Let $L(x)$ vary slowly at infinity and assume $L(x)$ is monotone for large $x$.  Let $f(x)$ be a symmetric probability density function such that

    $\frac{C_1 L(x)}{x} \le f(x) \le \frac{C_2 L(x)}{x}$    for large $x$.

Then, if we define

(1.12)    $H(x) = \int_x^\infty \frac{L(y)}{y}\,dy$,    $\rho(h) = \int_{-\infty}^\infty e^{ihx} f(x)\,dx$,    $\sigma^2(h) = 2(1 - \rho(h))$,

we have

    $4C_1 \le \liminf_{h \to 0}\frac{\sigma^2(h)}{H(1/h)} \le \limsup_{h \to 0}\frac{\sigma^2(h)}{H(1/h)} \le 4C_2$,

and $H(1/h)$ varies slowly at zero.

† The proof of Theorem 1.2 was communicated to us by Professor Walter L. Smith.

Proof.  Without loss of generality, we take $L(x)$ to be "normalized".  Let the hypotheses of Theorem 1.2 hold for all $x \ge K$, where $K > 1$ say.  Without loss of generality we redefine $f(x)$ and $L(x)$ on $x < K$ so that $L(x)$ is monotone (decreasing) for all $x > 0$ and $L(x) \to L(0) < \infty$ as $x \to 0$.  (Of course, now the bounds on $f(x)$ given above can only hold for $x$ bounded away from the origin.)

Now as in the proof of Theorem 1.1, we may ignore the integral $I_1(h)$ in

    $\sigma^2(h) = 4\int_0^K (1 - \cos hx) f(x)\,dx + 4\int_K^\infty (1 - \cos hx) f(x)\,dx = I_1(h) + I_2(h)$

as $h \to 0$.  By assumption, we have $4C_1 J(h) \le I_2(h) \le 4C_2 J(h)$, where $J(h) = \int_K^\infty (1 - \cos hx)\frac{L(x)}{x}\,dx$.  We replace the study of $J(h)$ by the study of $J_0(h) = \int_0^\infty (1 - \cos hx)\frac{L(x)}{x}\,dx$, since the integral $\int_0^K (1 - \cos hx)\frac{L(x)}{x}\,dx$ again may be ignored as $h \to 0$.  Now

    $J_0(h) = \int_0^\infty (1 - \cos x)\frac{L(x/h)}{x}\,dx$

is an increasing function of $h$, since $L(x/h)$ is increasing in $h$.  So we may study the Laplace transform

    $J_0^*(s) = s\int_0^\infty e^{-sh} J_0(h)\,dh = \int_0^\infty \frac{x L(x)}{s^2 + x^2}\,dx$.

We write $J_0^*(s) = \int_0^{s\delta} + \int_{s\delta}^\infty$, where $\delta$ is large.  Since

    $\int_0^{s\delta}\frac{x L(x)}{s^2 + x^2}\,dx \le \frac{1}{s^2}\int_0^{s\delta} x L(x)\,dx \sim \frac{\delta^2 L(\delta s)}{2}$    as $s \to \infty$

by property (1.9), we have

(1.13)    $J_0^*(s) = O(L(\delta s)) + \int_{s\delta}^\infty \frac{x L(x)}{s^2 + x^2}\,dx$,

and also

    $\int_{s\delta}^\infty \frac{x L(x)}{s^2 + x^2}\,dx \sim \int_{s\delta}^\infty \frac{L(x)}{x}\,dx = H(s\delta)$

as $\delta \to \infty$, uniformly in $s$.  Now, property (1.9) states that $L(s)/H(s) \to 0$ as $s \to \infty$ (which we use here), and it follows that $H$ varies slowly at infinity, so that $H(1/h)$ varies slowly at zero.  This together with (1.13) yields

(1.14)    $\lim_{s \to \infty} J_0^*(s)/H(s) = 1$.

We apply a fundamental Tauberian theorem to $J_0^*(s)$ to obtain

(1.15)    $\lim_{h \to 0} J_0(h)/H(1/h) = 1$,

as desired.  $\square$

Remark 1.3.  Given a slowly varying function $H$ at infinity, we can find $L(\cdot)$ by solving the functional equation $H(x) = \int_x^\infty (L(u)/u)\,du$.  If $H$ is differentiable, then we need only verify that $L(x) = -xH'(x)$ varies slowly.  For example, if $f(x) \sim \theta/(x(\log x)^{\theta+1})$ as $x \to \infty$, $\theta > 0$, then $\sigma^2(h) \sim 4/|\log h|^{\theta}$ as $h \to 0$.
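The example in Remark 1.3 can be checked numerically.  The sketch below (an illustration; the function names are ours) evaluates $H(x) = \int_x^\infty (L(y)/y)\,dy$ for $L(y) = \theta/(\log y)^{\theta+1}$ after the substitution $u = \log y$, confirming $H(x) = (\log x)^{-\theta}$ and hence $\sigma^2(h) \sim 4H(1/h) = 4/|\log h|^{\theta}$.

```python
import math

def H_from_L(x, theta=2.0, u_max=1000.0, n=100_000):
    """Remark 1.3 example: with L(y) = theta/(log y)^(theta+1),
    H(x) = int_x^inf L(y)/y dy should equal (log x)^(-theta).
    Substituting u = log y gives int_{log x}^inf theta*u^(-theta-1) du,
    evaluated here by the trapezoid rule up to u_max (the neglected
    tail is u_max^(-theta))."""
    a, b = math.log(x), u_max
    h = (b - a) / n
    g = lambda u: theta * u ** (-theta - 1.0)
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

x = math.exp(10.0)
print(H_from_L(x), math.log(x) ** -2.0)   # both approximately 0.01
```

The substitution in $u$ is what makes the quadrature tractable: in the original variable the integrand has a logarithmic tail that decays far too slowly to truncate directly.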
Corollary 1.2.  Under the same conditions as in Theorem 1.2, we have

    $\frac{4C_1}{C_2} \le \liminf_{h \to 0}\frac{\sigma^2(h)}{1 - F(1/h)} \le \limsup_{h \to 0}\frac{\sigma^2(h)}{1 - F(1/h)} \le \frac{4C_2}{C_1}$,

where $F(x) = \int_{-\infty}^x f(y)\,dy$.
Several authors give necessary and sufficient conditions for the sample functions of stationary Gaussian processes to be continuous.  Marcus and Shepp [9] give a condition in terms of $\sigma^2(h)$; i.e., if $\sigma$ is monotonic increasing, then

    $\int_0^{\varepsilon}\frac{\sigma(h)}{h(\log(1/h))^{1/2}}\,dh < \infty$

is a necessary and sufficient condition for continuous sample functions.  Also Nisio [10] has obtained a condition in terms of the spectral distribution function $F$.  Based on her result, Landau and Shepp [7] have given that if $s_n = F(2^{n+1}) - F(2^n)$ is eventually decreasing, then $\sum_{n=1}^\infty s_n^{1/2} < \infty$ is a necessary and sufficient condition for continuity of sample functions.  By using the results of this section, we can make clear the relation between the two criteria.

Now, suppose that the spectral function satisfies the conditions of either Corollary 1.1 or Corollary 1.2.  Then $\sigma^2(h) \asymp 1 - F(1/h)$ as $h \to 0$, and we may suppose without harm that $\sigma(\cdot)$ is monotone increasing in some interval $(0,\varepsilon)$, so that $\sigma^2(2^{-n}) \asymp \sum_{k=n}^\infty s_k$ as $n \to \infty$.  If $\sigma(\cdot)$ is monotone, then

    $\int_0 \frac{\sigma(h)}{h(\log(1/h))^{1/2}}\,dh < \infty$    if and only if    $\sum_{n=1}^\infty \sigma(2^{-n})\,n^{-1/2} < \infty$.

Now, there exist positive constants $C_3$ and $C_4 < \infty$ such that

(1.16)    $C_3\sum_{k=n}^\infty s_k \le \sigma^2(2^{-n}) \le C_4\sum_{k=n}^\infty s_k$.

Here, we use the following inequality of Boas (see Marcus and Shepp [9]).

Lemma 1.1.  Let $g_n = \sum_{j=n}^\infty a_j$, where the $a_j$ are positive and decreasing.  Then

    $\frac12\sum_{n=1}^\infty a_n^{1/2} \le \sum_{n=1}^\infty n^{-1/2} g_n^{1/2} \le 2\sum_{n=1}^\infty a_n^{1/2}$.

So

    $\frac{C_3^{1/2}}{2}\sum_{n=1}^\infty s_n^{1/2} \le \sum_{n=1}^\infty \sigma(2^{-n})\,n^{-1/2} \le 2C_4^{1/2}\sum_{n=1}^\infty s_n^{1/2}$.

Then
Theorem 1.3.  If we suppose that the spectral function $F$ satisfies the conditions of either Corollary 1.1 or Corollary 1.2 and that $s_n$ is eventually decreasing, then $\sum_{n=1}^\infty s_n^{1/2}$ and $\int_0 \sigma(h)\,h^{-1}(\log(1/h))^{-1/2}\,dh$ are simultaneously finite or infinite.

Remark 1.4.  Of course, if we take the "normalized" version of the regularly varying $\sigma(h)$ in the case $0 < \alpha < 2$, then it is eventually monotone as $h \to 0$ by property (1.8).
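The Boas inequality of Lemma 1.1 is easy to probe numerically.  The Python sketch below (illustrative only) compares $\sum a_n^{1/2}$ with $\sum n^{-1/2} g_n^{1/2}$ for decreasing sequences; both a polynomial and a geometric example stay within the stated factor-of-two bounds.

```python
import math

def boas_sums(a):
    """Lemma 1.1: for positive decreasing a_n with tails g_n = sum_{j>=n} a_j,
    (1/2) S1 <= S2 <= 2 S1, where S1 = sum_n a_n^(1/2) and
    S2 = sum_n n^(-1/2) g_n^(1/2).  `a` is a finite decreasing list
    standing in for the infinite sequence."""
    g = [0.0] * (len(a) + 1)
    for i in range(len(a) - 1, -1, -1):          # tail sums, computed backwards
        g[i] = g[i + 1] + a[i]
    s1 = sum(math.sqrt(x) for x in a)
    s2 = sum(math.sqrt(g[i] / (i + 1)) for i in range(len(a)))
    return s1, s2

# a_n = n^(-3): both sides converge and stay within the factor-2 bounds.
s1, s2 = boas_sums([k ** -3.0 for k in range(1, 20001)])
print(s1, s2)
```

This is exactly the mechanism of Theorem 1.3: with $a_n = s_n$ the left sum is the Landau-Shepp spectral criterion and the middle sum is, via (1.16), the dyadic form of the Marcus-Shepp integral.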
2.  The asymptotic distribution of the maximum.  For the study of the asymptotic distribution of $Z(t) = \sup_{0 \le s \le t} X(s)$ in this section, we assume that the process $X(t)$ is stationary in addition to its covariance function satisfying condition (0.1) with $0 < \alpha \le 2$.  We are assuming each $X(t)$ has mean $0$ and variance $1$.  The non-stationary case is discussed in §4.  Without loss of generality, we also assume the slowly varying function $H(s)$ in condition (0.1) is "normalized".  See §1 for definitions.

Let $\sigma^2(s) = E\{X(t+s) - X(t)\}^2 = 2(1 - \rho(s))$, where $\rho(s) = \rho(t,t+s)$; set $\tilde{\sigma}^2(s) = 2s^{\alpha}H(s)$, and define

    $\Lambda_1(t) = \inf_{0 < s \le t} \sigma(s)/\tilde{\sigma}(s)$    and    $\Lambda_2(t) = \sup_{0 < s \le t} \sigma(s)/\tilde{\sigma}(s)$.

The theorem of this section is an extension of Pickands' result [11].  Since Pickands' methods apply in our case, we will only sketch the proof, emphasizing the points of difference.
Theorem 2.1.  If condition (0.1) with $0 < \alpha \le 2$ holds, $\sigma^2(s) > 0$ for $s \ne 0$, and $\tilde{\sigma}(\cdot)$ is defined as above, then

(2.1)    $\lim_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} = H_{\alpha}$,    $0 < H_{\alpha} < \infty$,

where

    $H_{\alpha} = \lim_{T \to \infty} T^{-1}\int_0^\infty e^s\, P\big[\sup_{0 < t < T} Y(t) > s\big]\,ds$,

$\{Y(t),\ 0 \le t < \infty\}$ is a non-stationary Gaussian process with $Y(0) = 0$ a.s.,

    $E\{Y(t)\} = -|t|^{\alpha}/2$,    $\mathrm{Cov}\{Y(t_1),Y(t_2)\} = (|t_1|^{\alpha} + |t_2|^{\alpha} - |t_1 - t_2|^{\alpha})/2$,

and $\psi(x) = (2\pi)^{-1/2} x^{-1}\exp(-x^2/2)$.
Remark 2.1.  By property (1.8) in §1, the $\tilde{\sigma}(\cdot)$ defined above (or any "normalized" regularly varying function of positive exponent) is monotone on some small interval $(0,\delta)$.  This useful fact does not seem to be well known in this context.  Of course, any function $\hat{\sigma}(\cdot)$ with $\hat{\sigma}(s) \sim \tilde{\sigma}(s)$ as $s \to 0$ (or $\sigma(\cdot)$ itself) that is monotone near the origin can be used in Theorem 2.1.  In fact, any $g(x) \sim \tilde{\sigma}^{-1}(1/x)$ as $x \to \infty$ could be used in place of $\tilde{\sigma}^{-1}(1/x)$, whether $g$ is monotone or not.

Remark 2.2.  The condition that $\sigma^2(s) > 0$ for $s \ne 0$ excludes the periodic case.  However, if $\rho(s)$ is periodic with period $s_0$, then Theorem 2.1 holds with $t$ in the denominator of (2.1) replaced by $T = \min(t, s_0)$.  See the remarks in [12].
The proof of Theorem 2.1 is accomplished by a series of lemmas; in particular, Lemma 2.3 is a useful discrete version of Theorem 2.1.  The connection between the constants of Theorem 2.1 and Lemma 2.3 is that $H_{\alpha} = \lim_{a \to 0} H_{\alpha}(a)/a$.  The proof of the first lemma below illustrates the central idea behind the discrete version of Theorem 2.1.  For this discrete version, we need a partition of the time interval $(0,t)$ with mesh size $\Delta(x)$ approaching $0$ at the proper rate as $x \to \infty$.  Let $\Delta(x) = \tilde{\sigma}^{-1}(1/x)$ for all $x \ge 1/\tilde{\sigma}(\delta)$.
Lemma 2.1.  If the conditions of Theorem 2.1 hold, then for $a > 0$,

    $\liminf_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \ge a^{-1}\Big(1 - 2\sum_{k=1}^\infty \{1 - \Phi(2^{-1}(ka)^{\alpha/2})\}\Big)$,

where $Z_x(t) = \max_{0 \le k \le m} X(ka\Delta(x))$, $m = [t/(a\Delta(x))]$, $[\cdot]$ denotes the greatest integer function, and $\Phi(\cdot)$ is the standard Gaussian distribution function.

Proof.  This lemma corresponds to Pickands' Lemma 2.4 [11].  Defining the events $B_k = [X(ka\Delta(x)) > x]$ and using stationarity, we have

    $P[Z_x(t) > x] \ge \sum_{k=0}^m PB_k - \mathop{\textstyle\sum\sum}_{0 \le j < k \le m} P(B_j \cap B_k) \ge (m+1)\Big(PB_0 - \sum_{k=1}^m P(B_0 \cap B_k)\Big)$.

Now from [11, Lemma 2.3], we record that

(2.2)    $P(B_0 \cap B_k) \le PB_0 \cdot 2\{1 - \Phi(x(1-\rho)^{1/2}(1+\rho)^{-1/2})\}$,    where $\rho = \rho(ka\Delta(x))$,

and obtain

(2.3)    $\liminf_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \ge a^{-1}\Big(1 - \limsup_{x \to \infty} 2\sum_{k=1}^m\{1 - \Phi(x(1-\rho)^{1/2}(1+\rho)^{-1/2})\}\Big)$.

To study $\sum_{k=1}^m\{1 - \Phi(x(1-\rho)^{1/2}(1+\rho)^{-1/2})\}$ as $x \to \infty$, partition the sum into three parts according to i) $ka \le 1$, ii) $ka > 1$ and $ka\Delta(x) <$ some $\delta$, and iii) $ka > 1$ and $ka\Delta(x) \ge \delta$.

We may ignore the third sum $\sum_{(iii)}$: since $\sigma^2(s) > 0$ for $s \ne 0$, the correlation $\rho(ka\Delta(x))$ is bounded away from $1$ there, while the number of terms grows only polynomially in $x$.  For the finitely many terms of the first sum, $\lim_{x \to \infty}\sum_{(i)} 2(1-\Phi) = \sum_{(i)}\lim_{x \to \infty} 2(1-\Phi)$.  For the second sum, choose $\delta_1$ sufficiently small that $\Lambda_1(\delta_1) > 0$.  By property (1.7) in §1 and for $ka > 1$, there is a positive $\delta_{\alpha}$ such that $H(ka\Delta(x))/H(\Delta(x)) \ge (ka)^{-\alpha/2}$ provided $ka\Delta(x) < \delta_{\alpha}$.  Take $\delta = \min(\delta_{\alpha}, \delta_1, t)$.  Consequently, for $ka > 1$, $ka\Delta(x) < \delta$, and $x \ge T$ with $T$ large,

    $x(1-\rho)^{1/2}(1+\rho)^{-1/2} \ge 2^{-1} x\sigma(ka\Delta(x)) \ge 2^{-1}\Lambda_1\, x\tilde{\sigma}(ka\Delta(x)) \ge 2^{-1}\Lambda_1 (ka)^{\alpha/4}$.

Finally, defining $\hat{a}_k(x) = 2\{1 - \Phi(x(1-\rho)^{1/2}(1+\rho)^{-1/2})\}$, we have

    $\sum_{k > a^{-1}}\ \sup_{T \le x < \infty} \hat{a}_k(x) < \infty$.

It follows that

(2.4)    $\lim_{x \to \infty}\sum_{k=1}^m \hat{a}_k(x) = \sum_{k=1}^\infty 2(1 - \Phi(2^{-1}(ka)^{\alpha/2})) < \infty$,

since $x(1-\rho)^{1/2}(1+\rho)^{-1/2} \sim 2^{-1} x\sigma(ka\Delta(x)) \to 2^{-1}(ka)^{\alpha/2}$ as $x \to \infty$ for each fixed $k$.  Applying (2.4) in (2.3) completes the proof.  $\square$
The partition corresponding to $Z_x(t)$ above was made to depend on $x$, and the lower estimate of the distribution of $Z_x(t)$ (as well as the upper estimate) depends on $a$.  Since we wish to take $a \to 0$, it is seen that Lemma 2.1 is not sharp enough to obtain the desired result.  Hence Lemma 2.2 will be needed.
Lemma 2.2.  If the conditions of Theorem 2.1 hold, then for $a > 0$,

    $\lim_{x \to \infty}\frac{P[Z_x(na\Delta(x)) > x]}{\psi(x)} = H_{\alpha}(n,a) = 1 + \int_0^\infty e^s\, P\big[\max_{1 \le k \le n} Y(ka) > s\big]\,ds$.

Proof.  Simplifying Pickands' proof [11, Lemma 2.2], we have

    $P[Z_x(na\Delta(x)) > x] = P[X(0) > x] + P[X(0) \le x,\ \max_{1 \le k \le n} X(ka\Delta(x)) > x]$.

The second term equals

    $\int_{-\infty}^x P\big[\max_{1 \le k \le n} X(ka\Delta(x)) > x \mid X(0) = u\big]\,\phi(u)\,du$,

where $\phi(\cdot)$ is the standard Gaussian density function.  Substituting $u = x - s/x$ and defining $Y_1(t) = x(X(t\Delta(x)) - x) + s$, we obtain

    $= \psi(x)\int_0^\infty e^s\, P\big[\max_{1 \le k \le n} X(ka\Delta(x)) > x \mid X(0) = x - s/x\big]\exp(-s^2/(2x^2))\,ds$
    $= \psi(x)\int_0^\infty e^s\, P\big[\max_{1 \le k \le n} Y_1(ka) > s \mid X(0) = x - s/x\big]\exp(-s^2/(2x^2))\,ds$.

Note that

    $E(Y_1(t) \mid X(0) = x - s/x) = x(\rho(t\Delta(x))(x - s/x) - x) + s = -x^2(1 - \rho(t\Delta(x))) + s(1 - \rho(t\Delta(x)))$
    $= -x^2\sigma^2(t\Delta(x))/2 + o(1) \to -|t|^{\alpha}/2$    as $x \to \infty$;

and that

    $\mathrm{Cov}(Y_1(t_1), Y_1(t_2) \mid X(0) = x - s/x) = x^2\big[\rho((t_2-t_1)\Delta(x)) - \rho(t_1\Delta(x))\rho(t_2\Delta(x))\big]$
    $= \tfrac{x^2}{2}\big[-\sigma^2((t_2-t_1)\Delta(x)) + \sigma^2(t_1\Delta(x)) + \sigma^2(t_2\Delta(x))\big] - \tfrac{x^2}{4}\sigma^2(t_1\Delta(x))\sigma^2(t_2\Delta(x))$
    $= \tfrac12\big[-|t_2-t_1|^{\alpha} + |t_1|^{\alpha} + |t_2|^{\alpha}\big] + o(1)$    as $x \to \infty$.

Consequently $P[\max_{1 \le k \le n} Y_1(ka) > s \mid X(0) = x - s/x] \to P[\max_{1 \le k \le n} Y(ka) > s]$ as $x \to \infty$, and an application of Boole's inequality and the Lebesgue dominated convergence theorem completes the proof.  $\square$
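For $\alpha = 2$ the limit process of Lemma 2.2 is explicit: the mean $-t^2/2$ and covariance $t_1 t_2$ force $Y(t) = t\xi - t^2/2$ for a single standard normal $\xi$, so $H_2(n,a) = 1 + \int_0^\infty e^s P[\max_k Y(ka) > s]\,ds = E\exp(\max(0,\max_{1\le k\le n} Y(ka)))$ reduces to a one-dimensional integral.  The Python sketch below (our illustration, not in the paper) evaluates it by quadrature; $H_2(n,a)/(na)$ settles toward a finite limit as the window $na$ grows, in line with Lemma 2.3.

```python
import math

def H2(n, a, xi_lim=12.0, steps=40_000):
    """H_2(n,a) = E exp(max(0, max_{1<=k<=n} (k*a*xi - (k*a)^2/2))), xi ~ N(0,1),
    which is Lemma 2.2's constant for alpha = 2 (mean -t^2/2, covariance t1*t2).
    Trapezoidal quadrature over xi; the integrand is negligible outside
    [-xi_lim, n*a + xi_lim]."""
    def grid_max(xi):
        # k*a*xi - (k*a)^2/2 is concave in k with peak near k = xi/a,
        # so only the clamped neighbors of xi/a (and the endpoints) matter
        k0 = max(1, min(n, int(xi / a))) if xi > 0 else 1
        ks = {1, n, k0, min(n, k0 + 1)}
        return max(k * a * xi - (k * a) ** 2 / 2.0 for k in ks)

    lo, hi = -xi_lim, n * a + xi_lim
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        xi = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        m = max(0.0, grid_max(xi))
        total += w * math.exp(m - xi * xi / 2.0)   # exponent is always <= 0
    return total * h / math.sqrt(2.0 * math.pi)

# H_2(n,a)/(na) decreases toward its limit as the window na grows:
for n in (10, 40, 160):
    print(n, H2(n, 0.2) / (n * 0.2))
```

The $1/(na)$-sized excess visible for small windows is the "$1+$" in $H_\alpha(n,a)$; it is exactly what the normalization $\lim_n H_\alpha(n,a)/n$ of Lemma 2.3 washes out.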
Lemma 2.3.  If the conditions of Theorem 2.1 hold, then for $a > 0$,

(2.5)    $\lim_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} = \frac{H_{\alpha}(a)}{a}$,    where    $0 < H_{\alpha}(a) := \lim_{n \to \infty} H_{\alpha}(n,a)/n < \infty$,

$Z_x(t) = \max_{0 \le k \le m} X(ka\Delta(x))$, and $m = [t/(a\Delta(x))]$.

Proof.  This lemma corresponds to [11, Lemma 2.5].  For each non-negative integer $k$, let $B_k = [X(ka\Delta(x)) > x]$, and for an arbitrary positive integer $n$, let $A_k = \bigcup_{j=(k-1)n}^{kn-1} B_j$.  Then

(2.6)    $P\big[\bigcup_{k=1}^{m'} A_k\big] \le P[Z_x(t) > x] \le P\big[\bigcup_{k=1}^{m'+1} A_k\big]$,

where $m' = [(m+1)/n]$.  By stationarity, $P(A_k) = P(A_1)$ for all $k \ge 1$.  Consequently $P[Z_x(t) > x] \le (m'+1)PA_1$.  Now using Lemma 2.2, we obtain

(2.7)    $\limsup_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \le \lim_{x \to \infty}\frac{(m'+1)\Delta(x)}{t}\cdot\frac{PA_1}{\psi(x)} = \frac{H_{\alpha}(n-1,a)}{na}$.

On the other hand, (2.6) and stationarity imply

(2.8)    $P[Z_x(t) > x] \ge \sum_{k=1}^{m'} P(A_k) - \mathop{\textstyle\sum\sum}_{1 \le k < j \le m'} P(A_k \cap A_j) \ge m'P(A_1) - m'\sum_{j=2}^{m'} P(A_1 \cap A_j) \ge m'\Big\{PA_1 - \sum_{k=0}^{n-1}\sum_{\ell=n}^{\infty} P(B_k \cap B_\ell)\Big\}$.

As in the proof of Lemma 2.1, inequality (2.2) applied to $P(B_k \cap B_\ell) = P(B_0 \cap B_{\ell-k})$ and inequality (2.8) yield

(2.9)    $\liminf_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \ge \Big\{H_{\alpha}(n-1,a) - \sum_{k=0}^{n-1}\sum_{\ell=n}^{\infty} d_{\ell-k}\Big\}\Big/(na)$,

where $d_j = 2(1 - \Phi(2^{-1}(ja)^{\alpha/2}))$.  We have $\sum_{j=1}^\infty j\,d_j < \infty$, and therefore $\lim_{n \to \infty}\sum_{k=0}^{n-1}\sum_{\ell=n}^{\infty} d_{\ell-k}/(na) = 0$, by Kronecker's lemma.

Combining (2.7) and (2.9) and letting $n \to \infty$, we see that $\lim_{n \to \infty} H_{\alpha}(n-1,a)/(na)$ exists and equals the limit in (2.5); this is the conclusion of Lemma 2.3 apart from the finiteness and positivity of $H_{\alpha}(a)$.

Now (2.7) implies $H_{\alpha}(a) < \infty$.  By Lemma 2.1, $H_{\alpha}(a) > 0$ for $a$ sufficiently large, say for all $a >$ some $a_0$.  For any $a > 0$, there exists an integer $m_0$ such that $m_0 a > a_0$.  Now $H_{\alpha}(n, am_0) \le H_{\alpha}(nm_0, a)$ implies $H_{\alpha}(am_0) \le m_0 H_{\alpha}(a)$ and hence $H_{\alpha}(a) > 0$.  $\square$
Lemma 2.4.  Under the same conditions as Theorem 2.1, it follows for $a > 0$ and $2^{-\alpha/4} < b < 1$ that

    $\limsup_{x \to \infty}\frac{P[X(0) \le x,\ Z(a\Delta(x)) > x]}{\psi(x)} \le M(a)$    and    $\lim_{a \to 0}\frac{M(a)}{a} = 0$,

where $M(a)$ is defined in (2.10) below and $R(x) = \int_x^\infty (1 - \Phi(s))\,ds$.

Proof.  We combine Pickands' Lemmas 2.7 and 2.8 [11].  Note that

    $[X(0) \le x,\ Z(a\Delta(x)) > x] \subset \bigcup_{k=0}^\infty D_k$,

where

    $D_k = \big[\max_{0 \le j \le 2^k-1} X(ja\Delta(x)/2^k) \le x,\ \max_{0 \le j \le 2^{k+1}-1} X(ja\Delta(x)/2^{k+1}) > x + b^k/x\big]$

and $D_k \subset \bigcup_{j=0}^{2^k-1} E_{j,k}$, with

    $E_{j,k} = \big[X(ja\Delta(x)/2^k) \le x,\ X((2j+1)a\Delta(x)/2^{k+1}) > x + b^k/x\big]$.

By using [11, Lemma 2.6], we obtain $P(E_{j,k}) \le \psi(x)\, x\rho^{-1}(1-\rho^2)^{1/2} R(y)$, where $\rho = \rho(a\Delta(x)/2^{k+1})$ and $y = y(x) = b^k x^{-1}\rho(1-\rho^2)^{-1/2} - x(1+\rho)^{-1}(1-\rho^2)^{1/2}$.  Consequently

(2.10)    $\limsup_{x \to \infty}\frac{P[X(0) \le x,\ Z(a\Delta(x)) > x]}{\psi(x)} \le \limsup_{x \to \infty}\sum_{k=0}^\infty\sum_{j=0}^{2^k-1}\frac{P(E_{j,k})}{\psi(x)}$.

In order to apply the technique used in Lemma 2.1, we need to show

    $\sum_{k=0}^\infty 2^k \sup_{T \le x < \infty} x\rho^{-1}(1-\rho^2)^{1/2} R(y) < \infty$    for some $T > 0$.

That this sum is finite follows from the estimates:

    $x\rho^{-1}(1-\rho^2)^{1/2} \le x\beta^{-1}\sigma(a\Delta(x)/2^{k+1}) \le x\beta^{-1}\Lambda_2\,\tilde{\sigma}(a\Delta(x)/2^{k+1})$
    $\le \beta^{-1}\Lambda_2 (a2^{-k-1})^{\alpha/2}\big(H(a\Delta(x)/2^{k+1})/H(\Delta(x))\big)^{1/2} \le \beta^{-1}\Lambda_2 (a/2^{k+1})^{\alpha/2}(a/2^{k+1})^{-\alpha/4}$    by (1.7) in §1
    $= \beta^{-1}\Lambda_2 (a/2)^{\alpha/4}\, 2^{-\alpha k/4}$,

and

    $y(x) \ge b^k\beta\,\tilde{\sigma}(\Delta(x))\big/\big(\Lambda_2\tilde{\sigma}(a\Delta(x)/2^{k+1})\big) - 2^{-1}\Lambda_2\,\tilde{\sigma}(a\Delta(x)/2^{k+1})/\tilde{\sigma}(\Delta(x))$
    $\ge b^k\beta\Lambda_2^{-1}(2^k\cdot 2/a)^{\alpha/4} - 2^{-1}\Lambda_2(2^k\cdot 2/a)^{-\alpha/4}$    by (1.7).

Here $\beta = \min_{0 \le s \le a\Delta(x)}\rho(s) \ge \tfrac12$ and $\Lambda_2 = \Lambda_2(a\Delta(x)) \le \tfrac32$ for $x >$ some $T$; since $b\,2^{\alpha/4} > 1$, the terms are summable.  Therefore, (2.10) yields

    $\limsup_{x \to \infty}\frac{P[X(0) \le x,\ Z(a\Delta(x)) > x]}{\psi(x)} \le \sum_{k=0}^\infty \limsup_{x \to \infty} 2^k x\rho^{-1}(1-\rho^2)^{1/2} R(y) = M(a)$.

In order to see that $M(a)/a \to 0$ as $a \to 0$, use the estimates $R(x) \le \phi(x)/x \le \exp(-x^2/2)$ for $x \ge (2\pi)^{-1/2}$; note that one may disregard any finite number of the leading terms of the infinite sum $M(a)$, and factor $\exp(-ca^{-\alpha/2})$ out of the infinite sum $M(a)$.  Here $c > 0$ is a properly chosen constant.  $\square$
Proof of Theorem 2.1.  See [11, Lemma 2.9].  There it is shown that

    $H_{\alpha} = \lim_{a \to 0}\frac{H_{\alpha}(a)}{a} = \lim_{T \to \infty} T^{-1}\Big(1 + \int_0^\infty e^s\, P\big[\sup_{0 < t < T} Y(t) > s\big]\,ds\Big)$,

with $0 < H_{\alpha} < \infty$.  $\square$

Remark 2.3.  Pickands' Theorem 2.1 [11] concerning the expected number of $\varepsilon$-upcrossings is hereby generalized also.
3.  An asymptotic 0-1 behavior.  In this section, we use the results of §2 to obtain an extension of the results in Qualls and Watanabe [12].  We again postpone discussion of the non-stationary case to §4.  Using the notation of §2, we have

Theorem 3.1.  If $\rho(t)$ satisfies (0.1) with $0 < \alpha \le 2$, $\tilde{\sigma}(\cdot)$ is defined as in §2, and

(3.1)    $\rho(t) = O(t^{-\gamma})$    as $t \to \infty$, for some $\gamma > 0$;

then, for any positive non-decreasing function $\phi(t)$ on some interval $[a,\infty)$,

    $P[X(t) > \phi(t)\ \text{i.o. in}\ t] = 0\ \text{or}\ 1$

according as the integral

    $I(\phi) = \int_a^\infty \big(\phi(t)\,\tilde{\sigma}^{-1}(1/\phi(t))\big)^{-1}\exp(-\phi^2(t)/2)\,dt$

converges or diverges.
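To see the integral test of Theorem 3.1 in action, consider the illustrative (assumed) case $\alpha = 1$, $H \equiv 1$, i.e. $\tilde{\sigma}(s) = (2s)^{1/2}$ and $\tilde{\sigma}^{-1}(u) = u^2/2$, with the boundary family $\phi(t) = (2c\log t)^{1/2}$.  The integrand becomes $2\phi(t)\,t^{-c}$, so $I(\phi)$ diverges for $c = 1$ and converges for $c > 1$.  The Python sketch below (ours, not from the paper) evaluates partial integrals after substituting $u = \log t$:

```python
import math

def I_phi(T, c, a=math.e, n=100_000):
    """Partial criterion integral of Theorem 3.1 for the illustrative case
    sigma~(s) = sqrt(2s) (alpha = 1, H = 1) and phi(t) = sqrt(2 c log t).
    Then (phi * sigma~^{-1}(1/phi))^{-1} exp(-phi^2/2) = 2 phi(t) t^{-c};
    substituting u = log t gives the trapezoid sum below over [log a, log T]."""
    lo, hi = math.log(a), math.log(T)
    h = (hi - lo) / n
    f = lambda u: 2.0 * math.sqrt(2.0 * c * u) * math.exp((1.0 - c) * u)
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

# c = 1.0: I(phi) diverges  -> P[X(t) > phi(t) i.o.] = 1
# c = 1.5: I(phi) converges -> P[X(t) > phi(t) i.o.] = 0
print(I_phi(1e8, 1.0) / I_phi(1e4, 1.0))    # keeps growing
print(I_phi(1e8, 1.5) - I_phi(1e4, 1.5))    # increments die out
```

The knife-edge sits at $c = 1$ with a $\log\log$ correction, which is the content of Remark 3.2 below.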
Remark 3.1.  Monotone $\phi(\cdot)$ other than the one defined in §2 can be used; see Remark 2.1.  Note that condition (3.1) implies $\rho(t)$ is not periodic; consequently Theorem 2.1 is applicable.

Proof.  For every $\varepsilon > 0$, assumption (0.1) implies that $s^{\alpha+\varepsilon} \le \tilde{\sigma}^2(s) \le s^{\alpha-\varepsilon}$ for all positive $s \le$ some $\delta$, and hence that $x^{-2/(\alpha-\varepsilon)} \le \tilde{\sigma}^{-1}(1/x) \le x^{-2/(\alpha+\varepsilon)}$ for all large $x$.  In particular, the integrand of $I(\phi)$ is eventually a decreasing function of $\phi$.

(1)  The case when $I(\phi) < \infty$.  Let $t_n = n\delta$, where $\delta > 0$ is fixed and $n = 0,1,2,\dots$.  By Theorem 2.1 and for $n_0$ sufficiently large, we have

    $\sum_{n=n_0}^\infty P\Big\{\sup_{t_n \le t \le t_{n+1}} X(t) > \phi(t_n)\Big\} \le C_1\sum_{n=n_0}^\infty (t_{n+1}-t_n)\big(\phi(t_n)\,\tilde{\sigma}^{-1}(1/\phi(t_n))\big)^{-1}\exp(-\phi^2(t_n)/2)$
    $\le C_1'\int_{(n_0-1)\delta}^\infty \big(\phi(t)\,\tilde{\sigma}^{-1}(1/\phi(t))\big)^{-1}\exp(-\phi^2(t)/2)\,dt < \infty$,

where $C_1, C_1' > 0$ are certain constants and the second inequality uses the monotonicity of the integrand in $\phi$.  So, the Borel-Cantelli lemma yields

    $P\Big\{\sup_{t_n \le t \le t_{n+1}} X(t) \le \phi(t_n)\ \text{for all}\ n \ge \text{some}\ n_1\Big\} = 1$;

and consequently, since $\phi$ is non-decreasing, $P[X(t) > \phi(t)\ \text{i.o. in}\ t] = 0$.

(2)  The case when $I(\phi) = \infty$.
For this part of the proof, we need the following lemma.

Lemma 3.1.  If Theorem 3.1 in the case $I(\phi) = \infty$ holds under the additional assumption

    $2\log t \le \phi^2(t) \le 3\log t$    for all large $t$,

then it holds without this additional assumption.

Proof.  From the bounds on $\tilde{\sigma}^{-1}(\cdot)$ given above, there are positive constants $C_2$ and $C_3$ such that

(3.2)    $C_2\int_a^\infty \phi(t)^{\frac{2}{\alpha+\varepsilon}-1}\exp(-\phi^2(t)/2)\,dt \le I(\phi) \le C_3\int_a^\infty \phi(t)^{\frac{2}{\alpha-\varepsilon}-1}\exp(-\phi^2(t)/2)\,dt$.

When $0 < \alpha < 2$, choose $\varepsilon > 0$ such that $\alpha + \varepsilon < 2$ and $\alpha - \varepsilon > 0$.  When $\alpha = 2$, it is well known that the $H(s)$ in $\tilde{\sigma}^2(s)$ cannot tend to zero; consequently, we may choose $\varepsilon = 0$ in the left hand side of (3.2).  We obtain for $0 < \alpha \le 2$ that

(3.3)  for functions $\phi$ with $2\log t \le \phi^2(t) \le 3\log t$, the integrals $I(\phi)$ and $J(\phi) := \int_a^\infty \phi(t)^{\frac{2}{\alpha}-1}\exp(-\phi^2(t)/2)\,dt$ converge or diverge together.

Let $\phi(t)$ be an arbitrary positive non-decreasing function such that $I(\phi) = \infty$, and let $\hat{\phi}(t) = \min(\max(\phi(t), (2\log t)^{1/2}), (3\log t)^{1/2})$.  To show $I(\hat{\phi}) = \infty$, we may assume $\phi(t)$ crosses $u(t) = (2\log t)^{1/2}$ infinitely often as $t \to \infty$.  Otherwise, for some large $a$, either $\phi \le u$ and $I(\hat{\phi}) = I(u) = \infty$, or $\phi > u$ and $I(\hat{\phi}) \ge I(\phi) = \infty$.  The proof of Lemma 1.4 in [12] now shows that $J(\hat{\phi}) = \infty$; and by (3.3), that $I(\hat{\phi}) = \infty$.  That $P[X(t) > \hat{\phi}(t)\ \text{i.o.}] = 1$ implies $P[X(t) > \phi(t)\ \text{i.o.}] = 1$ follows from "Theorem 3.1 in the case $I(\nu) < \infty$" with $\nu(t) = (3\log t)^{1/2}$; details are given in Lemma 4.1 in [15].  $\square$
The proof of the second part of Theorem 3.1 now proceeds in the same way as in Qualls and Watanabe [12].  We will use the same notation as in [12].  Define a sequence of intervals by $I_n = [n\lambda, n\lambda+\beta]$ for $\lambda > 0$ and $0 < \beta < \lambda$.  Let $G_k = \{t_{k,\nu} = k\lambda + \nu/n_k : \nu = 0,1,\dots,[\beta n_k]\}$ be points in $I_k$, where $n_k = [(\tilde{\sigma}^{-1}(1/\phi(k\lambda+\beta)))^{-1}]$.  Let $E_k = [\max_{s \in G_k} X(s) > \phi(k\lambda+\beta)]$.  Now using Lemma 2.3 in the same way as in [12], we see that $I(\phi) = \infty$ implies $\sum P(E_k) = \infty$.  So, we only need to prove the asymptotic independence of the $E_k$'s, that is,

(3.4)    $\lim_{m \to \infty,\ n \to \infty}\Big|P\big(\bigcap_{k=m}^n E_k\big) - \prod_{k=m}^n P(E_k)\Big| = 0$.

Now by the use of Lemma 3.1 and the bounds on $\tilde{\sigma}^{-1}(\cdot)$, we have

    $2\log(k\lambda+\beta) \le \phi^2(k\lambda+\beta) \le 3\log(k\lambda+\beta)$

and

    $1 \le n_k \le \phi(k\lambda+\beta)^{\frac{2}{\alpha-\varepsilon}} \le (3\log(k\lambda+\beta))^{\frac{1}{\alpha-\varepsilon}}$,

where $\varepsilon > 0$ and $\alpha - \varepsilon > 0$.  Now the proof of (3.4) given in [12] applies without change.  $\square$
Remark 3.2.  By the use of inequalities (3.2) and Theorem 3.1, we can easily show that for every $\varepsilon > 0$,

    $P\Big[\frac12 + \frac{1}{\alpha+\varepsilon} \le \limsup_{t \to \infty}\frac{(2\log t)^{1/2}\big(Z(t) - (2\log t)^{1/2}\big)}{\log\log t} \le \frac12 + \frac{1}{\alpha-\varepsilon}\Big] = 1$;

and consequently $\lim_{t \to \infty} Z(t)/(2\log t)^{1/2} = 1$ a.s.  It is interesting that this is true whatever $H$ may be (as long as it satisfies the assumptions of Theorem 3.1).

Remark 3.3.  Generally speaking, it seems to be difficult to compute $I(\phi)$ in the criterion of Theorem 3.1 in concrete examples.  Of course, inequalities (3.2) may be used except in the critical cases.
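The consequence $Z(t)/(2\log t)^{1/2} \to 1$ shows up readily in simulation.  The Python sketch below is illustrative only: the Ornstein-Uhlenbeck choice $\rho(s) = e^{-|s|}$ (so $\alpha = 1$) is our assumed example, simulated through its exact AR(1) skeleton.

```python
import math
import random

def ou_max_ratio(t_max=20000.0, dt=0.05, seed=7):
    """Simulate a stationary Ornstein-Uhlenbeck process (rho(s) = exp(-|s|))
    on a grid of spacing dt via its exact AR(1) recursion
        X_{k+1} = r X_k + sqrt(1 - r^2) * N(0,1),   r = exp(-dt),
    and return Z(t_max) / sqrt(2 log t_max), which should be near 1."""
    rng = random.Random(seed)
    r = math.exp(-dt)
    q = math.sqrt(1.0 - r * r)
    x = rng.gauss(0.0, 1.0)          # start in the stationary law
    z = x
    for _ in range(int(t_max / dt)):
        x = r * x + q * rng.gauss(0.0, 1.0)
        if x > z:
            z = x
    return z / math.sqrt(2.0 * math.log(t_max))

print(ou_max_ratio())
```

A single path of this length typically lands within about ten percent of 1, sitting slightly above it for moderate $t$ — the $\log\log t$ correction of Remark 3.2.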
4.  The non-stationary case.  In this section we show that Slepian's result [14] can be used to generalize §2.  Let $X(t)$ be a separable Gaussian process with zero mean function and correlation function $\rho(t,s)$.  We adopt the notation of §2 with modifications to the non-stationary case.

At first, we only assume that

(4.1)    $1 - C_1 h^{\alpha}H(h) \le \rho(\tau,\tau+h) \le 1 - C_2 h^{\alpha}H(h)$    for $0 < h < \delta_{12}$ and $0 \le \tau,\ \tau+h \le t$,

where $C_2 \le C_1$, $0 < \alpha \le 2$, and $H$ is slowly varying at zero.  Without loss of generality, we take $H$ to be "normalized".  By §1 there exist separable zero mean stationary processes $Y_1(t)$ and $Y_2(t)$ with covariance functions $q_1$ and $q_2$ satisfying

    $q_1(h) \sim 1 - C_3 h^{\alpha}H(h)$    and    $q_2(h) \sim 1 - C_4 h^{\alpha}H(h)$    as $h \to 0$,

respectively.  For $C_4 < C_2 \le C_1 < C_3$, we then have

(4.2)    $q_1(h) \le \rho(\tau,\tau+h) \le q_2(h)$    for $0 < h < \delta_{34}$ and $0 \le \tau,\ \tau+h \le t$.

We shall use the subscripts $1$ and $2$ throughout to correspond to the stationary processes $Y_1(\cdot)$ and $Y_2(\cdot)$, respectively.

Remark 4.1.  Section 1 does not cover the case $\alpha = 2$.  For a covariance, say $q_1(h)$, with $\alpha = 2$, we know that if $H(h)$ has a limit, then a zero limit must be excluded, and that a finite limit is easily handled.  The interesting case is $H(s) \to \infty$.  So when $\alpha = 2$ we assume (4.2) instead of (4.1).

In order to exclude any type of periodic case, we assume

(4.3)    $\kappa = \sup\{\rho(\tau,\tau+h) : \delta_{34} \le h,\ 0 \le \tau,\ \tau+h \le t\} < 1$.

Theorem 4.1.  If $X(\cdot)$ satisfies (4.1) and $\kappa < 1$, then for $a > 0$,

    $C_2^{1/\alpha}\frac{H_{\alpha}(a)}{a} - 2(C_1^{1/\alpha} - C_2^{1/\alpha})H_{\alpha} \le \liminf_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} \le \limsup_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} \le C_1^{1/\alpha}\frac{H_{\alpha}(a)}{a}$,

and

(4.4)    $C_2^{1/\alpha}H_{\alpha} - 2(C_1^{1/\alpha} - C_2^{1/\alpha})H_{\alpha} \le \liminf_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} \le \limsup_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} \le C_1^{1/\alpha}H_{\alpha}$.

Moreover, if $X(\cdot)$ satisfies (0.1) with $0 < \alpha \le 2$ and $\kappa < 1$, then for $a > 0$,

(4.5)    $\lim_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} = \frac{H_{\alpha}(a)}{a}$

and

(4.6)    $\lim_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\tilde{\sigma}^{-1}(1/x)} = H_{\alpha}$.
Proof.  The proof consists of showing that the results only depend on the "local condition" on $\rho(\tau,\tau+h)$ instead of on the total time interval $(0,t)$.  Let the integer $M$ be large enough that $\delta_0 = t/M$ is less than $\delta_{34}/2$.  Define

    $A_j = \big[\max\{X(ka\Delta(x)) : (j-1)\delta_0 \le ka\Delta(x) < j\delta_0\} > x\big]$.

That $PA_j \le P_1 A_j$ is Slepian's result in the non-stationary case together with the fact that $P_1$ is a stationary measure.  Since $\Delta(x) \sim C_3^{1/\alpha}\Delta_1(x)$ as $x \to \infty$, Theorem 2.1 (or rather Lemma 2.3) yields

(4.7)    $\limsup_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \le C_3^{1/\alpha}\frac{H_{\alpha}(a)}{a}$.

Similarly

(4.8)    $\limsup_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\Delta(x)} \le \limsup_{x \to \infty}\frac{P_1[Z(\delta_0) > x]}{\delta_0\psi(x)/\Delta(x)} = C_3^{1/\alpha}H_{\alpha}$.
For lower bounds, we consider

(4.9)    $P[Z_x(t) > x] \ge \sum_{j=1}^M P(A_j) - \mathop{\textstyle\sum\sum}_{1 \le i < j \le M} P(A_i \cap A_j)$.

For $j - i \ge 2$ in the double sum, we use a well-known device (see, e.g., Lemma 1.5 in [12]) to obtain

    $\mathop{\textstyle\sum\sum}_{j-i \ge 2} P(A_i \cap A_j) \le K_1 m^2 \exp(-x^2/(1+\kappa))$,

where $m = [t/(a\Delta(x))]$.  Dividing by $t\psi(x)/\Delta(x)$, we see that this error term approaches zero as $x \to \infty$, since $\kappa < 1$; and therefore we may ignore this part of the double sum.  For $j - i = 1$, writing $P(A_j \cap A_{j+1}) = P(A_j) + P(A_{j+1}) - P(A_j \cup A_{j+1})$ and applying Slepian's result to each term, we obtain

(4.10)    $\limsup_{x \to \infty}\frac{P(A_j \cap A_{j+1})}{\delta_0\psi(x)/\Delta(x)} \le 2(C_3^{1/\alpha} - C_4^{1/\alpha})H_{\alpha}$.

Using (4.10) in (4.9), we have

(4.11)    $\liminf_{x \to \infty}\frac{P[Z_x(t) > x]}{t\psi(x)/\Delta(x)} \ge C_4^{1/\alpha}\frac{H_{\alpha}(a)}{a} - 2(C_3^{1/\alpha} - C_4^{1/\alpha})H_{\alpha}$,

and letting $a \to 0$,

    $\liminf_{x \to \infty}\frac{P[Z(t) > x]}{t\psi(x)/\Delta(x)} \ge C_4^{1/\alpha}H_{\alpha} - 2(C_3^{1/\alpha} - C_4^{1/\alpha})H_{\alpha}$.

Choosing $C_3 = C_1$ and $C_4 = C_2$ in (4.7), (4.8) and (4.11), we obtain the first display of Theorem 4.1 and (4.4).  Choosing $C_1 = C_2 = 1$ in these, we obtain (4.5) and (4.6).  $\square$

Of course, Theorem 3.1 of §3 can be generalized easily to a result analogous to Theorems 2.1 and 2.3 of [12].  We write the following theorem without proof.
Theorem 4.2.  If $X(\cdot)$ satisfies (4.1) with $0 < \alpha \le 2$ for $0 < h < \delta$ and all $\tau >$ some $T_0$, and

(4.12)    $\rho(\tau,\tau+s) = O(s^{-\gamma})$ uniformly in $\tau$ as $s \to \infty$, for some $\gamma > 0$,

then, for any positive non-decreasing function $\phi(t)$ on some $[a,\infty)$,

    $P[X(t) > v(t)\phi(t)\ \text{i.o. in}\ t] = 0\ \text{or}\ 1$

according as the integral $I(\phi) < \infty$ or $= \infty$.
References

(1)  Adamovic, D.  Sur quelques propriétés des fonctions à croissance lente de Karamata.  I, II.  Mat. Vesnik 3 (18) (1966), 123-136, 161-172.

(2)  Feller, W.  An Introduction to Probability Theory and Its Applications, Vol. II.  Wiley, New York, 1966.

(3)  Garsia, A. and Lamperti, J.  A discrete renewal theorem with infinite mean.  Comment. Math. Helv. 37 (1963), 221-234.

(4)  Ibragimov, I.A. and Linnik, Yu. V.  Independent and Stationary Connected Variables.  Nauka, Moscow, 1965.  (In Russian.)

(5)  Karamata, J.  Sur un mode de croissance régulière des fonctions.  Mathematica (Cluj) 4 (1930), 38-53.

(6)  Kôno, N.  On the modulus of continuity of sample functions of Gaussian processes.  Jour. Math. Kyoto Univ. 10 (1970), 493-536.

(7)  Landau, H.J. and Shepp, L.A.  On the supremum of a Gaussian process.  (To appear.)

(8)  Marcus, M.B.  Hölder conditions for Gaussian processes with stationary increments.  Trans. AMS 134 (1968), 29-52.

(9)  Marcus, M.B. and Shepp, L.A.  Continuity of Gaussian processes.  Trans. AMS 151 (1970), 377-392.

(10)  Nisio, M.  On the continuity of stationary Gaussian processes.  Nagoya Math. Jour. 34 (1969), 89-104.

(11)  Pickands, J., III.  Upcrossing probabilities for stationary Gaussian processes.  Trans. AMS 145 (1969), 51-73.

(12)  Qualls, C. and Watanabe, H.  An asymptotic 0-1 behavior of Gaussian processes.  Univ. of N. Car., Chapel Hill, Inst. of Stat. Mimeo No. 725 (1970).

(13)  Sirao, T. and Watanabe, H.  On the upper and lower class for stationary Gaussian processes.  Trans. AMS 147 (1970), 301-331.

(14)  Slepian, D.  The one-sided barrier problem for Gaussian noise.  Bell Sys. Tech. J. 41 (1962), 463-501.

(15)  Watanabe, H.  An asymptotic property of Gaussian processes.  Trans. AMS 148 (1970), 233-248.
Footnotes

AMS Subject Classifications:  Primary 60G15, 42A68, 60G10;  Secondary 60F20, 60G17.

Key Phrases:  1. Stationary processes (existence), 2. Fourier transforms, 3. Tauberian theorem for Fourier transforms, 4. Regular variation, 5. Slow variation, 6. Supremum of a Gaussian process, 7. Extreme values of normal processes, 8. Upcrossing probabilities, 9. Law of the iterated logarithm, 10. Zero-one laws, 11. Continuity of sample functions.