BIOMATHEMATICS TRAINING PROGRAM

Extreme Value Distribution for Normalized Sums from Stationary Gaussian Sequences¹

Short Title: Maxima of Normalized Sums

by

Yashaswini Mittal², Stanford University

¹AMS subject classification: Primary 60G15, 60G10; Secondary 60J65.

²Research supported in part by ONR Contract No. N00014-75C-0809 and a grant to SIMS from ERDA and the Rockefeller Foundation.

Key Words and Phrases: Normalized sums, extreme value distribution, stationary Gaussian sequences, Brownian motion.
Summary:

Let {X_i, i ≥ 0} be a sequence of independent identically distributed random variables with finite absolute third moment. Then Darling and Erdős have shown that

    lim_{n→∞} P( μ_n ≤ β_n + t/χ_n ) = exp(-e^{-t})    for -∞ < t < ∞,

where μ_n = max_{0≤k≤n} S_k/k^{1/2}, S_k = Σ_{i=0}^k X_i, χ_n = (2 ℓn ℓn n)^{1/2} and β_n = χ_n + (ℓn ℓn ℓn n - ℓn 4π)/(2χ_n). The result is extended to dependent sequences, assuming that {X_i} is a standard stationary Gaussian sequence with covariance function {r_i}, with μ_n now the maximum of the sums normalized by their own standard deviations. When {X_i} is moderately dependent (e.g. when V(Σ_{i=1}^n X_i) ~ n^α, 0 < α < 2) we get

    lim_{n→∞} P( μ_n ≤ χ_n + χ_n^{-1} ℓn{ H_α (ℓn ℓn n)^{1/α - 1/2}/(4π)^{1/2} } + t/χ_n ) = exp(-e^{-t}),

where H_α is a constant. In the strongly dependent case (e.g. when V(Σ_{i=1}^n X_i) ~ n² r(n)) we get

    lim_{n→∞} P( μ_n ≤ χ_{1/r_n} + (ℓn ℓn ℓn 1/r_n - ℓn 4π)/(2 χ_{1/r_n}) + t/χ_{1/r_n} ) = exp(-e^{-t})

for -∞ < t < ∞.
1. INTRODUCTION:

Let {X_i, i ≥ 0} be a sequence of independent identically distributed random variables with mean zero and variance one. Many classical problems have come out of the study of the partial sums S_k = Σ_{i=1}^k X_i. The celebrated law of the iterated logarithm (LIL) gives the first order terms in the growth of S_k. It states

(1.1)    lim sup_{k→∞} S_k/(2k ℓn ℓn k)^{1/2} = 1    a.s.
Sometimes the LIL is stated in the following Feller form, giving information about the second order terms. For φ(n) positive and nondecreasing,

    P( S_k > k^{1/2} φ(k) i.o. ) = 0 or 1

according as

(1.2)    Σ_n (φ(n)/n) exp(-φ²(n)/2) < ∞ or = ∞.

The quantity S_k/k^{1/2} in both (1.1) and (1.2) can very easily be replaced by max_{1≤i≤k} S_i/i^{1/2}. In this paper we will be interested in the behavior of this maximum. Besides the fixed growth rate given so precisely by (1.2), the randomness of max_{1≤i≤k} S_i/i^{1/2} is also of interest.
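As an illustration of how the criterion (1.2) is applied (our worked example, not the paper's), take the classical boundary φ²(n) = 2 ℓn ℓn n + a ℓn ℓn ℓn n:

```latex
% Worked example for (1.2) with \varphi^2(n) = 2\ln\ln n + a\,\ln\ln\ln n:
\frac{\varphi(n)}{n}\exp\!\Big(-\tfrac12\varphi^2(n)\Big)
   = \frac{(2\ln\ln n + a\ln\ln\ln n)^{1/2}}{n\,(\ln n)\,(\ln\ln n)^{a/2}}
   \asymp \frac{\sqrt{2}}{n\,(\ln n)\,(\ln\ln n)^{(a-1)/2}},
\qquad
\sum_n \frac{1}{n(\ln n)(\ln\ln n)^{p}} < \infty \iff p > 1 .
```

So the sum in (1.2) converges exactly when (a-1)/2 > 1, i.e. a > 3, recovering the familiar Kolmogorov-Erdős upper-class boundary φ(n) = (2 ℓn ℓn n + 3 ℓn ℓn ℓn n)^{1/2}.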
Darling and Erdős [3] prove that if {X_i} has finite absolute third moment then

(1.3)    lim_{n→∞} P( max_{1≤k≤n} S_k/k^{1/2} ≤ β_n + t/χ_n ) = exp(-e^{-t})

for all -∞ < t < ∞, where χ_n = (2 ℓn_2 n)^{1/2} and β_n = χ_n + (ℓn_3 n - ℓn 4π)/(2χ_n). As usual, ℓn_q n denotes the qth iterated logarithm of n.
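A Monte Carlo illustration of (1.3) can be set up as below (our sketch; all names and parameters are arbitrary, and since the convergence is at an iterated-logarithm rate the agreement is necessarily rough):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10**5, 400
chi = np.sqrt(2 * np.log(np.log(n)))
beta = chi + (np.log(np.log(np.log(n))) - np.log(4 * np.pi)) / (2 * chi)
k = np.arange(1, n + 1)

# chi_n * (max_k S_k/sqrt(k) - beta_n) should be approximately Gumbel
stats = np.empty(reps)
for i in range(reps):
    s = np.cumsum(rng.standard_normal(n))
    stats[i] = chi * ((s / np.sqrt(k)).max() - beta)

for t in (-1.0, 0.0, 1.0, 2.0):
    print(t, (stats <= t).mean(), np.exp(-np.exp(-t)))
```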
The invariance principle of Erdős and Kac offers an elegant tool to prove these results. The required results are first proved in the special case when the X_i are normally distributed. The Central Limit Theorem lets us approximate the partial sums S_k/k^{1/2} by normal variables. The results then follow by essentially concluding that this approximation is, in fact, very good. Thus the study of the partial sums from stationary Gaussian sequences is more significant than any other particular case in this area.

From now on, let {X_i} be assumed to be Gaussian. It is interesting to see how (1.2) and (1.3) are affected when the assumption of independence among the X_i is relaxed.
Theorem 4 of Lai and Stout [5] states the following.
Theorem (Lai and Stout). Suppose there exist 0 < α ≤ 2, γ > α/2, M ≥ 1, δ > 0, θ > 0 and a positive sequence {L(n)} satisfying the following conditions:

(i)   n^α L(n) ≪ v(n) ≪ n^α L(n) as n → ∞, where v(n) = variance(Σ_{i=1}^n X_i);

(ii)  |v(n+m)/v(n) - 1| ≤ M (m/n)^γ if δn ≥ m ≥ M;

(iii)
(1.4)    lim sup_{n→∞} { max_{n(ℓn ℓn n)^{-θ} ≤ j ≤ n} L(j)/L(n) } < ∞   and   lim inf_{n→∞} { min_{n(ℓn ℓn n)^{-θ} ≤ j ≤ n} L(j)/L(n) } > 0;

(iv)  for every p > 0 there exist m_p and 1 > A_p > 0 such that if A_p n ≥ m ≥ m_p then (m/n)^p ≤ L(n)/L(m) ≤ (n/m)^p.

Let φ(t) be a positive nondecreasing function on [1,∞). Then

    P( S_n > (v(n))^{1/2} φ(n) i.o. ) = 0 or 1

according as

(1.5)    ∫_1^∞ (φ(t)/t) exp(-φ²(t)/2) dt < ∞ or = ∞.
In Sections 2 and 3 we extend (1.3) to the cases of moderately and strongly dependent stationary Gaussian sequences. We prove that if μ_n = max_{0≤i≤n} S_i/(v(i))^{1/2} and

(1.6)    v(n) ~ n^α L(n) for some 0 < α < 2,

with some additional conditions on the slowly varying function L(n) to make it well behaved, then

(1.7)    lim_{n→∞} P( μ_n ≤ B_n + x/χ_n ) = exp(-e^{-x})

for all -∞ < x < ∞, where χ_n = (2 ℓn_2 n)^{1/2} and B_n = χ_n + χ_n^{-1} ℓn{ H_α (ℓn_2 n)^{1/α - 1/2}/(4π)^{1/2} }.
Also if

(1.8)    v(n) ~ (1 + O(1/(ℓn n)^θ)) n² r(n)

for some θ > 0, again with some additional conditions on r(n), then

(1.9)    lim_{n→∞} P( μ_n ≤ (2 ℓn_2 1/r_n)^{1/2} + (ℓn_3 1/r_n - ℓn 4π)/(2(2 ℓn_2 1/r_n)^{1/2}) + x/(2 ℓn_2 1/r_n)^{1/2} ) = exp(-e^{-x})

for all -∞ < x < ∞.
The proofs, as in [3], compare the maximum μ_n to that of a suitably chosen stationary Gaussian process on an appropriate set. The sketch of the main idea is as follows. We know that even though {S_k/(v(k))^{1/2}} forms a standard Gaussian sequence, it is no longer stationary. However, we could view this as coming from a stationary process being sampled with increasing frequency. If v(k) ~ k^α L(k), 0 < α < 2, then sample from an α-process (i.e., one whose correlation function near the origin is ~ 1 - (1/2)|t|^α) at the points 1, 1 + 1/2, 1 + 1/2 + 1/3, ..., Σ_{i=1}^n 1/i, ... . Thus we have n observations of this α-process in an interval of about [0, ℓn n]. The maximum of these n observations closely approximates μ_n. In this, what we call the "moderately dependent case", the frequency of sampling does not depend on the covariance function of {X_i}.
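The approximation behind this picture is easy to see numerically. A small sketch (ours), with the idealized choice v(k) = k^α and L ≡ 1, compares the exact correlation of Y_k = S_k/(v(k))^{1/2} with a stationary covariance in the logarithmic time s_k = Σ_{i≤k} 1/i ~ ℓn k:

```python
import numpy as np

alpha = 1.5
v = lambda k: k ** alpha                  # idealized variance, L(k) = 1

def corr(k, l):
    # E S_k S_l = (v(k) + v(l) - v(l-k)) / 2 for partial sums
    return (v(k) + v(l) - v(l - k)) / (2 * np.sqrt(v(k) * v(l)))

for gap in (0.20, 0.10, 0.05):
    k = 10**4
    l = int((1 + gap) * k)
    s_gap = np.log(l / k)                 # ~ s_l - s_k
    print(gap, corr(k, l), 1 - 0.5 * s_gap**alpha)
```

The two columns agree up to exactly the kind of ε′ slack that Lemma 2.1 below allows, and the agreement improves as the gap shrinks.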
It is interesting to note that the above procedure fails when α = 2. Oddly enough, we observe that in this case the normalized sums process can be approximated by a Brownian motion with transformed time axis. If we are to carry the above analogy to this case as well, then we have to sample an α = 1 process at a frequency that depends on the covariance function of the X-process. At any point k, instead of placing the next observation at a distance 1/k from the kth observation, we place it at a distance of approximately |r'(k)|/r(k). We call this the "strongly dependent" case. Thus, in this case, the maximum of an α = 1 process on the set [0, ℓn 1/r_n] approximates μ_n. This allows arbitrarily slow growth rates of μ_n by choice of r_n. Though not surprising, it is interesting to note that in all cases only the double exponential is obtained as the extreme value distribution.
There may seem to be a discrepancy between (1.5) and (1.9). According to (1.5), if v(k) = k² L(k) with L satisfying (1.4), then the first order term for μ_n is (2 ℓn_2 n)^{1/2}, whereas (1.9) places it at the much smaller number (2 ℓn_2 1/r_n)^{1/2}. None of the covariances looked at in (1.8) satisfy (1.4). In fact, we could not find any examples of covariance functions satisfying (1.4) for α = 2.

The next two sections give exact statements and proofs of the moderately and the strongly dependent cases respectively. Examples of covariance functions satisfying the given conditions, and some comments, are in Section 4.
2. MODERATELY DEPENDENT SEQUENCES:

Throughout the remainder, {X_i, i ≥ 0} will be a stationary Gaussian sequence with mean zero, variance one and r_i = E X_0 X_i. Let v(k) = variance of Σ_{i=0}^k X_i, Y_k = (v(k))^{-1/2} Σ_{i=0}^k X_i for k = 0,1,2,..., and μ_n = max_{0≤k≤n} Y_k.
Theorem 2.1: Suppose that for sufficiently large k,

(2.1)    r_k ~ k^{δ-1} L(k),

where -1 < δ < 1 and L(k) is a slowly varying function satisfying

(2.2)  (i)  for sufficiently large t, L(t)/L(t-k) is bounded if k ≤ t^θ, θ < 1;
       (ii) if n_k → ∞ is such that n_k/k^{1-γ} → ∞ for all γ > 0 but n_k/k → c, 0 ≤ c < ∞, then (k/n_k)(L(k) - L(k - n_k))/L(k - n_k) → 0 as k → ∞.

Then for α = δ + 1,

(2.3)    lim_{n→∞} P( μ_n ≤ B_n + x/χ_n ) = exp(-e^{-x})

for all -∞ < x < ∞. We write

    χ_n = (2 ℓn ℓn n)^{1/2},    B_n = χ_n + χ_n^{-1} ℓn{ H_α (ℓn ℓn n)^{1/α - 1/2}/(4π)^{1/2} }

and

    H_α = lim_{T→∞} (1/T) ∫_0^∞ e^u P( max_{0≤t≤T} C(t) > u ) du,

where C(t) is a separable nonstationary Gaussian process with E C(t) = -|t|^α and Cov(C(s), C(t)) = |s|^α + |t|^α - |s-t|^α.
Before we proceed to prove (2.3), we will state and prove two lemmas about E Y_k Y_ℓ which will simplify the proof of the Theorem.

Lemma 2.1: For [exp(εm)] ≤ k < ℓ ≤ [exp(ε(m+1))], ε > 0 and m sufficiently large,

(2.4)    1 - (1/2 + ε') v(ℓ-k)/v(k) ≤ E Y_k Y_ℓ ≤ 1 - (1/2 - ε') v(ℓ-k)/v(k).

By ε' we denote a small positive number depending on ε, not necessarily the same at all places. Also [·] denotes integral part.
Proof of Lemma 2.1: Since v(k) = Σ_{j=0}^{k-1} Σ_{i=-j}^{k-1-j} r_i, condition (2.1) and Theorem 1 in ([4], pg. 273) give that

(2.5)    v(k) is regularly varying with exponent α, and v(k-1)/v(k) → 1 as k → ∞.

Define Δ_{kℓ} = E S_k (S_ℓ - S_k) = Σ_{j=1}^{k} Σ_{i=k+1-j}^{ℓ-j} r_i. Then

(2.6)    E Y_k Y_ℓ = (v(k) + Δ_{kℓ})/(v(k)v(ℓ))^{1/2} = 1 - [ (v(k)v(ℓ))^{1/2} - (v(k) + Δ_{kℓ}) ] / (v(k)v(ℓ))^{1/2}.

Notice that v(ℓ) = v(k) + v(ℓ-k) + 2Δ_{kℓ}, and define E_{kℓ} = (v(ℓ-k) + 2Δ_{kℓ})/v(k), so that v(ℓ) = v(k)(1 + E_{kℓ}). Then the R.H.S. above is equal to

(2.7)    1 - [v(ℓ-k)/v(k)] · { 1 + (Δ_{kℓ}/v(ℓ-k))(1 - (1 + E_{kℓ})^{1/2}) } / { (1 + E_{kℓ})^{1/2} (1 + (1 + E_{kℓ})^{1/2}) }.

We need to show that the factor multiplying v(ℓ-k)/v(k) in (2.7) can be bounded above and below by 1/2 ± ε' as k → ∞. At (a), (b) and (c) below we find bounds for the various quantities involved.

First we look at v(ℓ-k)/v(k). The argument used here is repeated several times during the rest of the text. We have 0 < ℓ-k < ε'k. If (ℓ-k) ≤ k^θ for some 0 < θ < 1, then we can write v(ℓ-k) ~ (ℓ-k)^α L(ℓ-k) ≤ k^{αθ} L(ℓ-k) for large k, and

    v(ℓ-k)/v(k) ≤ L(ℓ-k)/(L(k) k^{α(1-θ)}).

Since L(t) is a slowly varying function, we know that L(t) t^{-γ} → 0 and L(t) t^{γ} → ∞ as t → ∞ for all γ > 0, so the R.H.S. tends to zero. If k^θ < ℓ-k < ε'k, then by (2.2(i)) the ratio L(ℓ-k)/L(k) is bounded. Thus

(a)    v(ℓ-k)/v(k) < ε'    for all k sufficiently large.

Next, we will look at Δ_{kℓ} = Σ_{j=1}^{k} Σ_{i=k+1-j}^{ℓ-j} r_i. For large k, by (2.1), we have |Δ_{kℓ}| ≤ (const.)(ℓ-k) ℓ^δ L(ℓ). Hence

    Δ²_{kℓ}/(v(k)v(ℓ-k)) ≤ (const.) ((ℓ-k)/k)^{1-δ} L²(ℓ)/(L(k)L(ℓ-k)).

The fact that we need (ℓ-k) sufficiently large to make the approximation v(ℓ-k) ~ (ℓ-k)^{1+δ} L(ℓ-k) is inconsequential here, since, if (ℓ-k) is too small, then the L.H.S. above is obviously small for k large. If (ℓ-k) ≤ k^θ, then the R.H.S. above tends to zero as k → ∞ by the same argument as at (a), and if (ℓ-k) > k^θ, then L(ℓ)/L(ℓ-k) is bounded by (2.2(i)) and L(ℓ)/L(k) is always bounded for the range of ℓ and k under consideration. Thus

(b)    Δ²_{kℓ}/(v(k)v(ℓ-k)) < ε'    for all k sufficiently large,

and obviously 0 ≤ |Δ_{kℓ}|/(v(k)v(ℓ))^{1/2} < ε' as well. Finally,

(c)    E_{kℓ} < ε'    for all k sufficiently large.

Substituting in (2.7) we get the result.
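The identity (2.6)-(2.7) can be checked mechanically. The following sketch (ours) builds a v and a Δ_{kℓ} consistent with v(ℓ) = v(k) + v(ℓ-k) + 2Δ_{kℓ} and compares the two sides:

```python
import numpy as np

k, l, alpha = 1000, 1040, 1.5
v = lambda m: float(m) ** alpha
delta = (v(l) - v(k) - v(l - k)) / 2          # = Cov(S_k, S_l - S_k)
E_kl = (v(l - k) + 2 * delta) / v(k)          # so v(l) = v(k) * (1 + E_kl)

lhs = (v(k) + delta) / np.sqrt(v(k) * v(l))   # E Y_k Y_l, eq. (2.6)
brace = (1 + (delta / v(l - k)) * (1 - np.sqrt(1 + E_kl))) / (
    np.sqrt(1 + E_kl) * (1 + np.sqrt(1 + E_kl)))
rhs = 1 - (v(l - k) / v(k)) * brace           # eq. (2.7)
print(lhs, rhs)   # agree to machine precision; brace is close to 1/2
```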
Lemma 2.2: If ℓ > k → ∞ in such a way that ℓ - k > ℓ^θ for some θ > 0, then

(2.8)    E S_k S_ℓ / (v(k)v(ℓ))^{1/2} = O((k/ℓ)^γ)

for some γ > 0.

Proof of Lemma 2.2: We can write

    E S_k S_ℓ = Σ_{j=1}^{k} Σ_{i=1-j}^{ℓ-j} r_i ≤ (const.) Σ_{j=1}^{k} (ℓ-j)^δ L(ℓ-j)
             ~ (1/α)( ℓ^α L(ℓ) - (ℓ-k)^α L(ℓ-k) )
             ~ k ℓ^δ L(ℓ-k) { 1 + (ℓ/(αk)) (L(ℓ) - L(ℓ-k))/L(ℓ-k) }.

If ℓ/k is bounded, then we can bound the terms in the bracket above by a constant, since (L(ℓ) - L(ℓ-k))/L(ℓ-k) is always bounded for ℓ-k > ℓ^θ by (2.2(i)). If ℓ/k → ∞, however, then (ℓ/k)(L(ℓ) - L(ℓ-k))/L(ℓ-k) → 0 by (2.2(ii)). Thus

    E S_k S_ℓ / (v(k)v(ℓ))^{1/2} ≤ (const.) k ℓ^δ L(ℓ-k) / ( (kℓ)^{α/2} (L(k)L(ℓ))^{1/2} )
                                 = (const.) (k/ℓ)^{(1-δ)/2} L(ℓ-k)/(L(k)L(ℓ))^{1/2}.

Choose 0 < γ < (1-δ)/2. Then the R.H.S. above is O((k/ℓ)^γ) as k → ∞, because here both L(ℓ-k)/L(k) and L(ℓ-k)/L(ℓ) are bounded due to (2.2(i)). (Notice that the case δ = 1 is excluded.)
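The decay rate in (2.8) is visible already in the idealized case v(k) = k^α, L ≡ 1 (our sketch; here γ may be taken up to (1-δ)/2):

```python
import numpy as np

alpha = 1.5
delta = alpha - 1
v = lambda m: m ** alpha
for ratio in (10.0, 100.0, 1000.0):
    k, l = 1000.0, 1000.0 * ratio
    e = (v(k) + v(l) - v(l - k)) / (2 * np.sqrt(v(k) * v(l)))   # E Y_k Y_l
    print(ratio, e, (k / l) ** ((1 - delta) / 2))               # same order of decay
```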
We now turn to the proof of the Theorem.

Proof of Theorem 2.1: The proof is split into two parts, the first establishing

(2.9)    lim inf_{n→∞} P( μ_n ≤ B_n + x/χ_n ) ≥ exp(-e^{-x})

for -∞ < x < ∞. We recall that μ_n = max_{0≤k≤n} Y_k, where Y_k = (v(k))^{-1/2} Σ_{i=0}^k X_i. The constants B_n and χ_n are defined after (2.3).

We will split {Y_i, 1 ≤ i ≤ n} into blocks t_m ≤ i < t_{m+1}, m = 0,1,...,m_0, where t_m = [exp(εm)] for some ε > 0 and m_0 is such that t_{m_0} ~ n. We will find a lower bound for the probability on the L.H.S. of (2.9) by treating the Y_i in different blocks as independent, since E Y_k Y_ℓ will be shown to be nonnegative. Now, suppose E Y_k Y_ℓ were in fact equal to 1 - (1/2) v(ℓ-k)/v(k). Let ξ(t) be a standard stationary Gaussian process with covariance function 1 - (1/2 + ε)|t|^α for 0 ≤ t ≤ ε. We could sample {ξ(t), 0 ≤ t ≤ ε} at points s_j, where s_1 = 1/t_m and s_j = s_{j-1} + 1/(t_m + j - 1); it would be easy to see that max_j ξ(s_j) provides a stochastic upper bound for max_{t_m ≤ k < t_{m+1}} Y_k. However, by using Lemma 2.1 we can get explicit bounds for E Y_k Y_ℓ only when (ℓ-k) is large. Thus it is necessary to delete all but a subsequence {Y_{t_{m,h}}} of variables from the mth block. The proof will follow this general outline. To start off, we also need to exclude Y_1, ..., Y_{(ℓn n)^{1/2}} from consideration, since all the bounds gotten in the Lemmas work only for large k.

For the first part of the proof, define u_n = B_n + x/χ_n and μ_n' = max_{(ℓn n)^{1/2} ≤ k ≤ n} Y_k. Then

    0 ≤ P(μ_n' ≤ u_n) - P(μ_n ≤ u_n) ≤ P( max_{0≤k<(ℓn n)^{1/2}} Y_k > u_n ) ≤ (ℓn n)^{1/2} φ(u_n)/u_n,

where φ(u) = (2π)^{-1/2} exp(-u²/2). The R.H.S. above tends to zero, so we can replace μ_n by μ_n' in (2.3); we will prove (2.9) with μ_n replaced by μ_n'. For k and ℓ large, (2.1) gives v(k) + 2Δ_{kℓ} > 0, so Δ_{kℓ} > -v(k)/2 and v(k) + Δ_{kℓ} > v(k)/2 > 0. This, in view of (2.6), shows that E Y_k Y_ℓ ≥ 0 for k and ℓ large. By Slepian's Lemma [11],

(2.10)    P(μ_n' ≤ u_n) ≥ Π_{m=m_1}^{m_0} P( max_{t_m ≤ ℓ < t_{m+1}} Y_ℓ ≤ u_n ),

where m_1 is such that t_{m_1} ~ (ℓn n)^{1/2}.
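The role Slepian's Lemma plays in (2.10) is that nonnegative correlations can only raise P(max ≤ u) above its value under blockwise independence. A toy simulation (ours) with an equicorrelated Gaussian vector makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(3)
d, reps, u = 20, 200000, 2.0
Z = rng.standard_normal((reps, d))
indep = (Z.max(axis=1) <= u).mean()                       # independent case
W = rng.standard_normal((reps, 1))                        # common factor
corr = ((np.sqrt(0.7) * Z + np.sqrt(0.3) * W).max(axis=1) <= u).mean()
print(indep, corr)   # corr >= indep, as Slepian's Lemma [11] predicts
```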
The next step is to select a proper subsequence t_{m,h} from the mth block. Define t_{m,h} = [exp(ε(m² + h)^{1/2})] for h = 0,1,...,2m+1, i.e., t_{m,0} = t_m and t_{m,2m+1} = t_{m+1}. Define Z_ℓ = Y_ℓ - Y_{t_{m,h}} for t_{m,h} ≤ ℓ < t_{m,h+1}, h = 0,1,...,2m. Then

(2.11)    P( max_{t_m ≤ ℓ < t_{m+1}} Y_ℓ ≤ u_n ) ≥ P( max_{0≤h≤2m} Y_{t_{m,h}} ≤ u_n - ℓn m/m^{α'/2} ) - P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ > ℓn m/m^{α'/2} ),

where α' < α.
We will get an upper bound for the second term on the R.H.S. above by using the following result.

Theorem (Marcus and Shepp [6]): Let {Y_i, i ≥ 1} be a sequence of jointly Gaussian random variables with P( sup_{i≥1} |Y_i| < ∞ ) = 1. Then, letting σ² = sup_{i≥1} var(Y_i), for ρ > 0 and all sufficiently large t we have

(2.12)    P( sup_{i≥1} |Y_i| > t ) ≤ exp{ -(1 - ρ) t²/(2σ²) }.

In order to apply this to the Z_ℓ, we need to verify that P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ < ∞ ) = 1. It is sufficient to verify that max_{t_m ≤ ℓ < t_{m+1}} Y_ℓ is finite with probability one.
By (2.4), for t_m ≤ k < ℓ < t_{m+1} and m large,

(2.13)    E Y_k Y_ℓ ≥ 1 - (1/2 + ε') ((ℓ-k)/k)^α L(ℓ-k)/L(k) ≥ 1 - ε' ((ℓ-k)/k)^{α'}.

We write the first inequality above for the sake of convenience: for (ℓ-k) too small we may not be able to write v(ℓ-k) ~ (ℓ-k)^α L(ℓ-k). However, the second inequality is correct for large k, since we have chosen α' < α; the argument for other values of (ℓ-k) is similar to the one used before. Now, for the values of k and ℓ under consideration, (ℓ-k)/k < ε. By Slepian's Lemma [11], max_{t_m ≤ ℓ < t_{m+1}} Y_ℓ is stochastically smaller than the maximum of an α'-process (i.e., one whose correlation near zero is 1 - (const.)|t|^{α'}) on the set [0, ε]. The latter is obviously finite with probability one. Thus from (2.12) we have

(2.14)    P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ > ℓn m/m^{α'/2} ) ≤ exp{ -(1 - ρ)(ℓn m/m^{α'/2})²/(2σ_m²) },

where σ_m² = max_{t_m ≤ ℓ < t_{m+1}} Var(Z_ℓ). But for t_{m,h} ≤ ℓ < t_{m,h+1},

    Var(Z_ℓ) = 2(1 - E Y_ℓ Y_{t_{m,h}}) ≤ ε' ((ℓ - t_{m,h})/t_{m,h})^{α'}    (using (2.13))
             ≤ ε' ((t_{m,h+1} - t_{m,h})/t_{m,h})^{α'} ≤ ε' m^{-α'}    for all h = 0,1,...,2m.

Hence the R.H.S. of (2.14) is at most exp{ -(const.)(ℓn m)² }, and the R.H.S. of (2.10) is at least

(2.15)    Π_{m=m_1}^{m_0} { P( max_{0≤h≤2m} Y_{t_{m,h}} ≤ u_n - ℓn m/m^{α'/2} ) - exp(-(const.)(ℓn m)²) }
        = E_{m_1} Π_{m=m_1}^{m_0} P( max_{0≤h≤2m} Y_{t_{m,h}} ≤ u_n - ℓn m/m^{α'/2} ),
where

    E_{m_1} = Π_{m=m_1}^{m_0} { 1 - exp(-(const.)(ℓn m)²) / P( max_{0≤h≤2m} Y_{t_{m,h}} ≤ u_n - ℓn m/m^{α'/2} ) }.

We have established before that max_{0≤h≤2m} Y_{t_{m,h}} is finite with probability 1 for all m. Hence P( max_{0≤h≤2m} Y_{t_{m,h}} ≤ u_n - ℓn m/m^{α'/2} ) → 1 as n → ∞, and

    E_{m_1} ≥ exp{ -(const.) Σ_{m=m_1}^{m_0} e^{-(ℓn m)^{3/2}} }.

The R.H.S. above tends to one as n → ∞. It remains to show that the product on the R.H.S. of (2.15) is bounded below by the required limit.
Now, since t_{m,h} and t_{m,j} are sufficiently far apart for h ≠ j, we can simplify the bounds of Lemma 2.1 for values of k and ℓ in the set L_m = {t_{m,h}, h = 0,1,...,2m}. For such values of k and ℓ, (ℓ-k)/k^{1-γ} → ∞ for all γ > 0, i.e., (ℓ-k)/k cannot tend to zero very rapidly. Hence, applying (2.2(ii)), we have L(ℓ-k)/L(ℓ) → 1. Notice that for ℓ ≤ (1+ε)k, L(ℓ)/L(k) → 1 as well. By choosing an appropriate value of ε', we can therefore write for large k that

    E Y_k Y_ℓ ≥ 1 - (1/2 + ε') ((ℓ-k)/k)^α    for k, ℓ ∈ L_m.

Let {ξ(t), t ≥ 0} be a standard stationary Gaussian process with covariance function 1 - (1/2 + ε')|t|^α for 0 ≤ t ≤ ε. We see that the covariance matrix of {Y_{t_{m,h}}, 0 ≤ h ≤ 2m} is bounded below by that of {ξ(ε(m² + h)^{1/2} - εm), 0 ≤ h ≤ 2m}. Thus max_{0≤h≤2m} Y_{t_{m,h}} is stochastically bounded above by max_{0≤h≤2m} ξ(ε(m² + h)^{1/2} - εm), which is at most max_{0≤t≤ε} ξ(t).
Thus the product on the R.H.S. of (2.15) is asymptotically at least

(2.16)    exp{ - Σ_{m=m_1}^{m_0} P( max_{0≤t≤ε} ξ(t) > u_n - ℓn m/m^{α'/2} ) }
        ~ exp{ - Σ_{m=m_1}^{m_0} ε (1/2 + ε')^{1/α} H_α (u_n - ℓn m/m^{α'/2})^{2/α - 1} φ(u_n - ℓn m/m^{α'/2}) }

by Lemma (3.8) of Pickands [9]. H_α is defined in the statement of Theorem 2.1. Substituting the value of u_n, we have the R.H.S. of (2.16) asymptotically equal to

(2.17)    exp{ -(1 + 2ε')^{1/α} exp(-x + o(1)) m_0^{-1} Σ_{m=m_1}^{m_0} exp( u_n ℓn m/m^{α'/2} ) }.

Splitting the sum into two parts, say m_1 ≤ m ≤ (m_0)^{1/2} and (m_0)^{1/2} < m ≤ m_0, we see that for large n

    Σ_{m=m_1}^{m_0} exp( u_n ℓn m/m^{α'/2} ) ≤ (m_0)^{1/2} exp(u_n) + m_0 exp( u_n ℓn m_0/(m_0)^{α'/4} ) = m_0 (1 + o(1)).

Substituting in (2.17), we get that

    lim inf_{n→∞} P( μ_n ≤ B_n + x/χ_n ) ≥ exp( -(1 + 2ε')^{1/α} e^{-x} )

for all ε' > 0. This concludes the proof of (2.9).
It is much easier to show the other side, viz.

(2.18)    lim sup_{n→∞} P( μ_n ≤ B_n + x/χ_n ) ≤ exp(-e^{-x}).

We just exclude the appropriate number of the Y_k and then compare the rest with variables selected from an appropriate α-process by Berman's Lemma [1]. Let u_n, m_0 and m_1 be the same as defined before. We will define new subsequences t_{m,h} and new sets T_m; to avoid clumsy notation, we will use the same symbols. Thus it should be noted that t_{m,h} and T_m are defined in two different ways in this section and in two different ways in Section 3. All efforts to define a subsequence that will work for both parts of the proof in each section have failed so far. Care should be taken in noting the appropriate definition of t_{m,h} and T_m when reading different parts of the proof. Let

    t_{m,h} = [exp{ ε(m + h/(ℓn m)^{2/α}) }]

and

    T_m = { t_{m,h} ; 0 ≤ h ≤ [(1 - ε)(ℓn m)^{2/α}] = h_m, say }

(i.e., we have clipped a small portion from the right hand side of each of the sets T_m). Now

(2.19)    P(μ_n' ≤ u_n) ≤ P( max_{m_1 ≤ m ≤ m_0} max_{t_{m,h} ∈ T_m} Y_{t_{m,h}} ≤ u_n )
        ≤ Π_{m=m_1}^{m_0} P( max_{t_{m,h} ∈ T_m} Y_{t_{m,h}} ≤ u_n ) + C Σ ρ_{pℓqh} exp( -u_n²/(1 + ρ_{pℓqh}) ).

The last inequality follows from Berman's Lemma. C is some positive constant and

    ρ_{pℓqh} = E( Y_{t_{p,ℓ}} Y_{t_{q,h}} ) if p ≠ q,    = 0 if p = q.
Notice also that the largest value of (1 - ρ²_{pℓqh})^{-1/2} depends only on ε and is absorbed in C. By Lemma 2.1,

    E Y_{t_{p,ℓ}} Y_{t_{q,h}} ≤ 1 - (1/2 - ε') v(t_{p,ℓ} - t_{q,h})/v(t_{q,h})
                             ~ 1 - (1/2 - ε') ( L(t_{p,ℓ} - t_{q,h})/L(t_{q,h}) ) ( (t_{p,ℓ} - t_{q,h})/t_{q,h} )^α.

Because of the above definitions, when p ≠ q we have t_{p,ℓ} - t_{q,h} > t_{q,h}(e^{ε²} - 1) ≥ ε² t_{q,h}. By the same reasoning as before and condition (2.2), we have

(2.20)    ρ_{pℓqh} ≤ 1 - ε'    for all p ≠ q and q large.

For all m, h_m ≤ (ℓn m_0)^{2/α}. Substituting, an upper bound for the sum on the R.H.S. of (2.19) is

(2.21)    C Σ_{pℓqh} ρ_{pℓqh} exp( -u_n²/(1 + ρ_{pℓqh}) ).

We choose 0 < ρ̄ ≤ ε'/(2 - ε') for the value of ε' that works in (2.20) and split the sum in (2.21) according as ρ_{pℓqh} > ρ̄ or not. By (2.20), each term of the first part is at most exp(-u_n²/(2 - ε')), and since the number of such pairs is small this makes that part of the sum o(1). For the second part of the sum, we know by Lemma 2.2 that ρ_{pℓqh} = O( (t_{q,h}/t_{p,ℓ})^γ ) for some γ > 0, while v(t_{q,h})/v(t_{p,ℓ}) ~ (t_{q,h}/t_{p,ℓ})^α for large q. Thus for some γ' > 0 we have ρ_{pℓqh} ≤ exp(-γ'ε(p - q)) for p - q sufficiently large. Thus the sum in (2.21) is o(1).
O
in the R.H.S. of (2.19), we define a standard stationary Gaussian process
for all p
{~(t),
0
~
t
~ £}
(different from the one in the first part of the proof) with
covariance function 1- (!z- £') It Ia. near origin.
We bound
/
P(
max
t
m, h£Tm
By Lemma (3.8) of [9], the R.H.S. above is asymptotically equal to
£(1-£)(1-£')
2/a.- l
H u
~(u ).
ann
l/a.
Substituting in the product we get the desired result (2.18).
The details of
the last argument are very much similar to those in the first part of the proof
and hence are not repeated.
This finishes the proof of Theorem 2.1.

3. STRONGLY DEPENDENT CASE:

In the last section we considered the cases when v(n) ~ n^α L(n) for 0 < α < 2. In this section we will have α = 2. The sequence {X_i, i ≥ 0} and μ_n are as described in Section 2.

Theorem 3.1:
Let f be a probability density function on the real line and set

    A_k = { (x,y) : -∞ < x < ∞, 0 ≤ y < ∞ and f(x+k) > y }.

Assume that the correlation function r_k = E X_0 X_k satisfies

(3.1)    r_k ≤ ∫_{-∞}^{∞} f(x) ∧ f(x+k) dx    (the area of A_0 ∩ A_k).

Let r_k ↓ 0 be a slowly varying function such that for large k

(3.2)    v(k) = (1 + ε_k) k² r_k,

with ε_k = O(1/(ℓn k)^θ) nonincreasing and θ > 0. Then

(3.3)    lim_{n→∞} P( μ_n ≤ β_n + x/χ_n ) = exp(-e^{-x})

for all -∞ < x < ∞. Here

    χ_n = (2 ℓn ℓn 1/r_n)^{1/2}    and    β_n = χ_n + (ℓn ℓn ℓn 1/r_n - ℓn 4π)/(2χ_n).
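To get a feel for how slowly these constants grow, here are our own numerics (anticipating the Section 4 example r_n = (ℓn n)^{-2}); the last column is the iid rate of (1.3) for comparison:

```python
import numpy as np

for n in (10**4, 10**8, 10**16, 10**32):
    lr = 2 * np.log(np.log(n))           # ln(1/r_n) for r_n = (ln n)**(-2)
    chi = np.sqrt(2 * np.log(lr))        # (2 ln ln 1/r_n)**(1/2)
    beta = chi + (np.log(np.log(lr)) - np.log(4 * np.pi)) / (2 * chi)
    print(n, chi, beta, np.sqrt(2 * np.log(np.log(n))))
```

The first-order term is of order (ℓn ℓn ℓn n)^{1/2} here, illustrating the "arbitrarily slow growth rates" remark of Section 1.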
Condition (3.1) is used and discussed in Mittal and Ylvisaker [7]. From the discussion there, we have the representation

    X_i = W_{ik} + I_k,    1 ≤ i ≤ k,

where the W_{ik} and I_k are independent normal variables with mean zero, var(W_{ik}) = 1 - r_k and 0 ≤ E I_k I_ℓ = r_ℓ for all ℓ ≥ k. Thus

    Σ_{i=1}^k X_i/(v(k))^{1/2} = Σ_{i=1}^k W_{ik}/(v(k))^{1/2} + k I_k/(v(k))^{1/2}.

The general idea of the proof is to show that the variables Σ_{i=1}^k W_{ik}/(v(k))^{1/2} are too small, and hence that we can replace the process {Y_k} by {k I_k/(v(k))^{1/2}}. (In fact, we replace it by η_k = I_k/(r_k)^{1/2}.) We deal with the new process in very much the same way as in the last section. The following Lemma does the first part of the proof.

Lemma 3.1: Under the conditions of Theorem 3.1,

(3.4)    lim_{k→∞} (ℓn k)^δ Δ_k = 0    a.s.    for all δ > 0,

where Δ_k = Y_k - η_k.
Proof of Lemma 3.1: Let us define t_k = [exp(k^γ)] for 0 < γ < 1 and, for ε > 0 sufficiently small,

    Λ_k = Λ_k(ε) = { max_{t_k ≤ j < t_{k+1}} (ℓn j)^δ Δ_j > ε },    k = 0,1,... .

The result follows, by symmetry and the use of the Borel-Cantelli Lemma, if we show that Σ_{k=1}^∞ P(Λ_k) < ∞. To compute P(Λ_k), we find a lower bound on the correlations of the Δ_j for t_k ≤ j < t_{k+1}. We first note that, as a result of the assumed conditions, r_i ≥ r_k for all i ≤ k and, for sufficiently large k, r_j ~ r_ℓ for t_k ≤ j < ℓ < t_{k+1}. Now for j < ℓ,

(3.5)    E η_j Y_ℓ = ( j r_j + Σ_{i=j+1}^{ℓ} r_i ) / ( r_j v(ℓ) )^{1/2},    E η_j η_ℓ = (r_ℓ/r_j)^{1/2},

while, using (3.2),

    E Y_j η_j = j r_j^{1/2}/(v(j))^{1/2} ~ (1 + ε_j)^{-1/2} > 0    for large j.

We know that Var(Δ_j) = 2( 1 - j r_j^{1/2}/(v(j))^{1/2} ) ~ ε_j, and ε_j is nonincreasing. For t_k ≤ j < ℓ < t_{k+1} we have (ℓ-j)/j ≤ (const.) k^{-(1-γ)}, and thus for large j, since r_j ~ r_ℓ,

    E ζ_j ζ_ℓ ≥ 1 - c/k^{1-γ},

where ζ_j = Δ_j/(Var Δ_j)^{1/2} and c > 0 is a constant. Hence

(3.6)    P(Λ_k) ≤ P( (c/k^{1-γ})^{1/2} M*_{t_{k+1}} + (1 - c/k^{1-γ})^{1/2} U > (ε/2)(ε_{t_k})^{-1/2} k^{-γδ} ),

where M*_n denotes the maximum of n independent standard normal variables and U is also standard normal, independent of M*_n. An upper bound for the R.H.S. of (3.6) is

    P( M*_{t_{k+1}} > (ε/4) k^{(1-γ)/2 - γδ} (c ε_{t_k})^{-1/2} ) + P( U > (ε/4) k^{-γδ} (ε_{t_k})^{-1/2} )
    ≤ t_{k+1} Φ̄( (ε/4) k^{(1-γ)/2 - γδ} (c ε_{t_k})^{-1/2} ) + Φ̄( (ε/4) k^{-γδ} (ε_{t_k})^{-1/2} ),

where Φ̄(x) = ∫_x^∞ φ(u) du. Noticing that t_{k+1} - t_k ~ exp(k^γ) and ε_{t_k} = O(k^{-γθ}), we see that the above is summable over k for an appropriate choice of γ. This completes the proof of the Lemma.
Next, we can replace μ_n in (3.3) by ν_n = max_{0≤k≤n} I_k/r_k^{1/2} = max_{0≤k≤n} η_k, because

    |μ_n - ν_n| ≤ max_{0≤k≤n} |Δ_k|    and    χ_n/(ℓn n)^δ → 0 as n → ∞ for all δ > 0.

For the proof of (3.3) with ν_n in place of μ_n, we follow the same procedure as in Section 2. We compare {η_k} with a sequence sampled from an appropriately chosen (α = 1) process. Unlike Section 2, the frequency of sampling here depends on the covariance function {r_k}. Some of the details of Section 2 become easier here in view of the observation that I_k = B(r_k), where B denotes a standardized Brownian motion. A sketch of the proof is given below, avoiding repetition of arguments in Section 2.
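The observation I_k = B(r_k) makes {η_k} an Ornstein-Uhlenbeck-type (α = 1) process in the time scale s_k = ℓn(1/r_k): E η_j η_ℓ = (r_ℓ/r_j)^{1/2} = exp(-(s_ℓ - s_j)/2) for s_j ≤ s_ℓ. A simulation sketch of ours confirming this:

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0.0, 6.0, 200)          # OU time axis
r = np.exp(-s)                          # r_k = exp(-s_k), decreasing
u = r[::-1]                             # the same times, increasing, for simulation
reps = 20000
inc = rng.standard_normal((reps, len(u))) * np.sqrt(np.diff(u, prepend=0.0))
B = np.cumsum(inc, axis=1)              # Brownian motion sampled at the times u
eta = (B / np.sqrt(u))[:, ::-1]         # back to the original ordering in k

j, k = 50, 120                          # sample correlation vs the OU formula
print(np.mean(eta[:, j] * eta[:, k]), np.exp(-(s[k] - s[j]) / 2))
```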
Define (differently from all previous definitions)

    t_{m,k} = [ r^{-1}( exp(-ε(m² + k)^{1/2}) ) ]    and    T_m = { t_{m,k} ; 0 ≤ k ≤ 2m }.

Here r^{-1}(y) = min{ t : r(t) ≤ y }, so that r(r^{-1}(y)) ~ y. Since r is monotone, the t_{m,k} are nondecreasing. By arguments similar to those that lead to (2.10), we have

(3.7)    P( ν_n' ≤ u_n ) ≥ Π_{m=m_1}^{m_0} P( max_{t_{m,k} ∈ T_m} η_{t_{m,k}} ≤ u_n ),

where u_n = β_n + x/χ_n; m_0 and m_1 are such that t_{m_1} ~ (ℓn n)² and m_0 = min{ m : t_m ≥ n }; and ν_n' = max_{(ℓn n)² ≤ k ≤ n} η_k. Define Z_ℓ = η_ℓ - η_{t_{m,k}} for t_{m,k} ≤ ℓ < t_{m,k+1}. Then

    P( max_{t_m ≤ ℓ < t_{m+1}} η_ℓ ≤ u_n ) ≥ P( max_{0≤k≤2m} η_{t_{m,k}} ≤ u_n - ℓn m/m^{1/2} ) - P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ > ℓn m/m^{1/2} ).
But

(3.8)    P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ > ℓn m/m^{1/2} ) ≤ (2m + 1) max_{0≤k≤2m} P( max_{t_{m,k} ≤ ℓ < t_{m,k+1}} Z_ℓ > ℓn m/m^{1/2} )
        ≤ (2m + 1) { P( max_{t_{m,k} ≤ ℓ < t_{m,k+1}} (I_ℓ - I_{t_{m,k}}) r_{t_{m,k+1}}^{-1/2} > ℓn m/(2m^{1/2}) )
                   + P( |I_{t_{m,k}}| ( r_{t_{m,k+1}}^{-1/2} - r_{t_{m,k}}^{-1/2} ) > ℓn m/(2m^{1/2}) ) }.

Since I_ℓ = B(r_ℓ), we can write I_ℓ - I_{t_{m,k}} as a Brownian increment over a time interval of length r_{t_{m,k}} - r_ℓ ≤ r_{t_{m,k}} - r_{t_{m,k+1}}. Using Proposition 12.20 of Breiman [2], we get the first probability on the R.H.S. of (3.8) to be at most
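The step being used here is just the reflection principle for Brownian motion, P( max_{[0,T]} B(t) > a ) = 2Φ̄(a/T^{1/2}); a quick Monte Carlo confirmation (ours):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
T, a, steps, reps = 1.0, 1.5, 2000, 20000
inc = rng.standard_normal((reps, steps)) * np.sqrt(T / steps)
hit = (np.cumsum(inc, axis=1).max(axis=1) > a).mean()
exact = 2 * (1 - 0.5 * (1 + erf(a / (sqrt(T) * sqrt(2)))))   # 2 * (1 - Phi(a/sqrt(T)))
print(hit, exact)   # the discrete grid slightly undershoots the exact value
```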
(3.9)    2 Φ̄( (ℓn m/(2m^{1/2})) r_{t_{m,k+1}}^{1/2} (r_{t_{m,k}} - r_{t_{m,k+1}})^{-1/2} ).

But r_{t_{m,k}} ~ exp(-ε(m² + k)^{1/2}). Thus

    var( I_{t_{m,k}} - I_{t_{m,k+1}} ) = r_{t_{m,k}} - r_{t_{m,k+1}} ~ exp(-ε(m² + k + 1)^{1/2}) [ e^{ε/2m} - 1 ] ≤ (ε/m) r_{t_{m,k+1}},

and (3.9) is at most 2 Φ̄( ℓn m/(2ε^{1/2}) ). Substituting the values of the r_{t_{m,k}}, we get approximately the same bound for the second probability on the R.H.S. of (3.8). Hence

(3.10)    P( max_{t_m ≤ ℓ < t_{m+1}} Z_ℓ > ℓn m/m^{1/2} ) ≤ 4(2m + 1) Φ̄( ℓn m/(2ε^{1/2}) ).

The R.H.S. of (3.10) is summable over m.
Following the same line as in Section 2, it only remains to show that the product on the R.H.S. of (3.7) converges to exp(-e^{-x}). (Notice that the constant H_α for α = 1 is explicitly given in Theorem 4.4 of [8].) Let {ξ(t), t ≥ 0} be a standard stationary Gaussian process with covariance function 1 - (1/2 + ε')|t| near the origin. Then for 0 ≤ k < ℓ ≤ 2m,

    E η_{t_{m,k}} η_{t_{m,ℓ}} = ( r_{t_{m,ℓ}}/r_{t_{m,k}} )^{1/2} ~ exp( -(ε/2)((m² + ℓ)^{1/2} - (m² + k)^{1/2}) )
        ≥ 1 - (1 + ε) ε (ℓ - k)/(4(m² + k)^{1/2}) ≥ 1 - (1/2 + ε')( ε(ℓ - k)/(2m) ),

which is the covariance of ξ at the points εk/2m and εℓ/2m. It follows in a manner similar to Section 2 that

(3.11)    lim inf_{n→∞} P( ν_n ≤ β_n + x/χ_n ) ≥ exp(-e^{-x}).

(It should be noted that r_{t_{m_0}} ~ exp(-εm_0) ~ e^{-ε} r_n; strictly speaking, the normalizing constants for the η process should be written with r_n replaced by r_{t_{m_0}}, but it is easy to see that this does not make any difference in the limit.)

The procedure for the rest of the proof is now apparent. We define a new

    t_{m,k} = [ r^{-1}( exp(-ε(m + k/(ℓn m)²)) ) ].

The fact that ρ_{pℓqh} = E η_{t_{p,ℓ}} η_{t_{q,h}} decreases rapidly to zero for p ≠ q is quite obvious. The rest of the arguments are very similar and hence are not repeated.
4. EXAMPLES AND DISCUSSION:

Examples of covariance functions {r_k} satisfying (2.1) and (2.2) are given by convex {r_k} with r_k = k^{-γ}(ℓn k)^ℓ for k ≥ k_0, 0 < γ < 1 and ℓ ∈ ℝ (defined appropriately on [0, k_0] to make it convex). For {r_k} satisfying (3.1) and (3.2), we give again convex {r_k}, with r_k = (ℓn k)^{-ℓ} for k ≥ k_0 and ℓ > 0.

The proof of the result

    lim_{n→∞} P( ν_n ≤ β_n + x/χ_n ) = exp(-e^{-x})

can be greatly reduced by the observation I_k/r_k^{1/2} = B(r_k)/r_k^{1/2} and the result of Darling and Erdős [3] as given in (3.1) of Robbins and Siegmund [10]. Darling and Erdős do not state their result in the form (3.1) of [10], and their proof of Theorem 1 in [3] is quite complicated. Theorem 1 of [3] is a particular case of Theorem 2.1, and the last part of Theorem 3.1 offers an alternative proof for (3.1) of [10].
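These examples are easy to probe numerically. A sketch of ours (the constants are arbitrary) checks eventual convexity of r_k = k^{-γ}(ℓn k)^ℓ via second differences and the growth v(k) ~ k^{2-γ} (ℓn k)^ℓ up to a constant:

```python
import numpy as np

gamma, l, K = 0.5, 1.0, 10**5
j = np.arange(1, K)
r = j**(-gamma) * np.log(j)**l            # r_j for j >= 1 (r_1 = 0 here)
print("eventually convex:", (np.diff(r, 2)[100:] >= 0).all())

# v(m) = m + 2 * sum_{j<m} (m - j) r_j for a stationary sequence
for m in (10**3, 10**4, 10**5 - 1):
    v = m + 2 * np.sum((m - j[: m - 1]) * r[: m - 1])
    print(m, v / (m**(2 - gamma) * np.log(m)**l))   # slowly approaches 8/3
```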
References

[1]  Berman, S.M. (1964). Limit theorems for the maximum term in stationary sequences, Ann. Math. Stat., 35, 502-516.

[2]  Breiman, L. (1968). Probability, Addison-Wesley, Reading, Massachusetts.

[3]  Darling, D.A. and Erdős, P. (1956). A limit theorem for the maximum of normalized sums of independent random variables, Duke Math. J., 23, 143-155.

[4]  Feller, W. (1966). An introduction to probability theory and its applications, vol. 2, John Wiley and Sons, Inc., New York.

[5]  Lai, T.L. and Stout, W. (1977). The law of the iterated logarithm and upper-lower class tests for partial sums of stationary Gaussian sequences, Manuscript.

[6]  Marcus, M. and Shepp, L. (1971). Sample behavior of Gaussian processes, Proc. of Sixth Berkeley Symp., Univ. of Calif. Press, Berkeley, 2, 423-439.

[7]  Mittal, Y. and Ylvisaker, D. (1976). Strong laws for the maxima of stationary Gaussian processes, Ann. Prob., 4, no. 3, 357-371.

[8]  Pickands, James III (1967). Maxima of stationary Gaussian processes, Z. Wahr. und Verw. Geb., 7, 190-223.

[9]  Pickands, James III (1969). Asymptotic properties of the maximum in a stationary Gaussian process, Trans. Amer. Math. Soc., 145, 75-86.

[10] Robbins, H. and Siegmund, D. (1971). On the law of the iterated logarithm for maxima and minima, Proc. of Sixth Berkeley Symp., Univ. of Calif. Press, Berkeley, 2, 51-70.

[11] Slepian, D. (1962). The one-sided barrier problem for Gaussian noise, Bell Syst. Tech. J., 41, 463-501.