Leadbetter, M.R. (1973). "On extreme values in stationary sequences."

This research was sponsored by the Office of Naval Research under Contract N00014-67-A-0321-0002.

ON EXTREME VALUES IN STATIONARY SEQUENCES

M.R. Leadbetter
Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 886
August, 1973
Unclassified

DOCUMENT CONTROL DATA - R&D

Originating activity: Department of Statistics, University of North Carolina, Chapel Hill, NC 27514
Report title: On Extreme Values in Stationary Sequences
Descriptive notes: Technical Report
Author: M.R. Leadbetter
Report date: August, 1973
Contract or grant no.: ONR N00014-67-A-0321-0002
Total no. of pages: 35; no. of refs: 10
Originator's report number: Mimeo Series #886
Distribution statement: Approved for public release; distribution unlimited.
Sponsoring activity: Office of Naval Research

ABSTRACT

Extreme value theory is considered for stationary sequences {ξ_n} satisfying dependence restrictions significantly weaker than strong mixing. In particular the basic theorem of Gnedenko (developed later by Loynes for mixing sequences) is proved under the weak restrictions. The conditions for the general results are shown to apply to stationary normal sequences under the very weak covariance assumptions used previously (e.g. by S.M. Berman). Distributional limit theorems for other order statistics are also obtained.

KEY WORDS: Extreme value theory; stochastic processes; dependent extremal theory.

Unclassified
On Extreme Values in Stationary Sequences

by

M.R. Leadbetter*†

University of North Carolina

Summary

In this paper, extreme value theory is considered for stationary sequences {ξ_n} satisfying dependence restrictions significantly weaker than strong mixing. The aims of the paper are:

(i) To prove the basic theorem of Gnedenko concerning the existence of three possible non-degenerate asymptotic forms for the distribution of the maximum M_n = max(ξ_1,…,ξ_n), for such sequences.

(ii) To obtain limiting laws of the form

    lim_{n→∞} Pr{M_n^(r) ≤ u_n} = e^{-τ} Σ_{s=0}^{r-1} τ^s/s!

where M_n^(r) is the r-th largest of ξ_1,…,ξ_n, and Pr{ξ_1 > u_n} ≈ τ/n. Poisson properties (akin to those known for the upcrossings of a high level by a stationary normal process) are developed and used to obtain these results.

(iii) As a consequence of (ii), to show that the asymptotic distribution of the (normalized) M_n^(r) is the same as if the {ξ_n} were i.i.d.

(iv) To show that the assumptions used are satisfied, in particular by stationary normal sequences, under mild covariance conditions.

† Currently visiting Cambridge University.
* Research supported by Office of Naval Research, under Contract N00014-67-A-0321-0002.
1. Introduction and basic framework.

There is a considerable body of theory concerning extreme values of independent and identically distributed (i.i.d.) random variables (e.g. [5],[7]). Important sections of this theory have been extended to apply to certain types of stationary sequences. For example Watson ([9]) considered "m-dependent" sequences, and Loynes ([8]) (and subsequently others, e.g. [10],[4]) generalized a number of results to apply under the condition of "strong (uniform) mixing". In particular Loynes obtained a generalization of the following theorem of Gnedenko ([5]), which is central to the asymptotic theory in the i.i.d. case. (We write, here and throughout, M_n for the maximum of the first n random variables ξ_1,…,ξ_n of whatever sequence is being considered.)
Theorem 1.1 (Gnedenko): If ξ_1, ξ_2,… are i.i.d. random variables, and if for some sequences a_n > 0, b_n of constants, the normalized maximum a_n(M_n - b_n) has a non-degenerate limiting distribution function (d.f.) G(x), then G(x) has one of the three following forms:

Type I:   G(x) = exp(-e^{-x}),   -∞ < x < ∞

Type II:  G(x) = 0,              x ≤ 0
          G(x) = exp(-x^{-α}),   x > 0   (for some α > 0)

Type III: G(x) = exp(-(-x)^α),   x ≤ 0   (for some α > 0)
          G(x) = 1,              x > 0

(It is understood that each "type" of d.f. includes all d.f.'s obtainable by replacing x by ax + b for any a > 0, b.)
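Each of the three limit types is max-stable: for every k there are constants α_k > 0, β_k with G(x)^k = G(α_k x + β_k). The following sketch, not part of the original report, checks this numerically; the shape parameter a = 2 and the grid of x-values are arbitrary illustrative choices.

```python
import math

# Numerical check (illustrative, not from the paper) that each extreme value
# type is max-stable: G(x)^k = G(alpha_k * x + beta_k) for suitable constants.

def gumbel(x):             # Type I
    return math.exp(-math.exp(-x))

def frechet(x, a=2.0):     # Type II
    return 0.0 if x <= 0 else math.exp(-x ** (-a))

def weibull(x, a=2.0):     # Type III
    return 1.0 if x >= 0 else math.exp(-((-x) ** a))

def max_stable_error(G, k, alpha_k, beta_k, xs):
    # largest deviation of G(x)^k from G(alpha_k*x + beta_k) over the grid
    return max(abs(G(x) ** k - G(alpha_k * x + beta_k)) for x in xs)

k, a = 5, 2.0
xs = [x / 10.0 for x in range(-50, 51)]
err_I = max_stable_error(gumbel, k, 1.0, -math.log(k), xs)    # G^k(x) = G(x - log k)
err_II = max_stable_error(lambda x: frechet(x, a), k, k ** (-1.0 / a), 0.0, xs)
err_III = max_stable_error(lambda x: weibull(x, a), k, k ** (1.0 / a), 0.0, xs)
print(err_I, err_II, err_III)
```

The three errors are at the level of floating-point rounding, reflecting the exact identities G^k(x) = G(x - log k) (Type I), G^k(x) = G(k^{-1/a}x) (Type II) and G^k(x) = G(k^{1/a}x) (Type III).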
The strong mixing condition used by Loynes requires that

(1.1)    |P(A∩B) - P(A)P(B)| ≤ g(k)

whenever A, B are events belonging respectively to the σ-fields generated by ξ_1,…,ξ_m and by ξ_{m+k}, ξ_{m+k+1},…, for some m, and where g(k) → 0 as k → ∞. This is a "decay of dependence" condition of a very strong type, since it must hold (uniformly) for all such events A, B. Its verification may also be difficult since all pairs A, B must be "tested".

It is apparent that not all events A, B will be of interest in extremal theory. Indeed the event {M_n ≤ u} is precisely {ξ_1 ≤ u, ξ_2 ≤ u,…, ξ_n ≤ u}, whose probability is just a value of a finite-dimensional d.f. of the process. Hence it is natural to look for dependence restrictions involving only these d.f.'s. An obvious such restriction (of similar but weakened form to (1.1)) would be the following, which we will refer to as "Condition D".
If F_{i_1…i_r}(x_1,…,x_r) denotes the joint d.f. of ξ_{i_1},…,ξ_{i_r}, we shall write F_{i_1…i_r}(u) to denote F_{i_1…i_r}(u,u,…,u). Then the sequence {ξ_n} will be said to satisfy Condition D if for any i_1 < i_2 <…< i_n < j_1 < j_2 <…< j_m with j_1 - i_n ≥ k, and any real u,

(1.2)    |F_{i_1…i_n j_1…j_m}(u) - F_{i_1…i_n}(u) F_{j_1…j_m}(u)| ≤ g(k)

where g(k) → 0 as k → ∞.
While Gnedenko's Theorem will hold (as we shall see) under Condition D, we can, for this and later purposes, weaken it significantly by considering just certain sequences of u-values in (1.2). Specifically if {u_n} is a sequence of real numbers, we shall say that the sequence {ξ_n} satisfies the condition D(u_n) if for any i_1 < i_2 <…< i_p < j_1 < j_2 <…< j_q ≤ n with j_1 - i_p ≥ ℓ we have

(1.3)    |F_{i_1…i_p j_1…j_q}(u_n) - F_{i_1…i_p}(u_n) F_{j_1…j_q}(u_n)| ≤ α_{n,ℓ}

where

    lim_{ℓ→∞} lim sup_{n→∞} α_{n,ℓ} = 0.

It is apparent that D implies D(u_n) for any sequence {u_n}. However for some processes (e.g. normal), D(u_n) may be readily verified for sequences {u_n} of interest, when it is not clear whether D itself holds. (In this, it turns out that D(u_n) can be further "encouraged to hold" by an appropriate decrease in tail probabilities of the finite-dimensional d.f.'s, over the pure "dependence decay" required for D.)
In Section 2 we shall obtain Gnedenko's Theorem if the assumption that the ξ_i are i.i.d. is replaced by stationarity together with Condition D(u_n) for all u_n of the form x/a_n + b_n (-∞ < x < ∞), a_n, b_n being the particular sequences given in the theorem. (This, of course, implies that the result holds for stationary sequences satisfying D.) The proof will follow the pattern of that used by Loynes in [8], with modifications to suit later applications.

Another useful (even though trivially proved) result for i.i.d. random variables is that
(1.4)    Pr{M_n ≤ u_n} → e^{-τ} as n → ∞

if u_n = u_n(τ) is chosen so that

(1.5)    1 - F_1(u_n) = τ/n + o(1/n),

τ being any fixed constant (τ ≥ 0). It is easily seen (by writing the left hand side of (1.4) as [1-(1-F_1(u_n))]^n and taking logarithms) that (1.5) is certainly necessary for (1.4) to hold. If F_1 is discontinuous, u_n is often chosen so that F_1(u_n-) ≤ 1 - τ/n ≤ F_1(u_n). However, it should be pointed out that this does not itself guarantee that (1.5) holds. (This may be seen by taking F_1 to increase only by jumps, at 1,2,3,…, with Pr{ξ_1 ≤ j} = 1 - τ/(2^j - 1).) Hence, in such a case, (1.5) must be separately verified before (1.4) can be asserted.
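In the i.i.d. case the sufficiency of (1.5) for (1.4) is easy to check numerically. The sketch below (not part of the original report) takes F_1 to be the standard exponential d.f., for which u_n can be written down explicitly.

```python
import math

# Illustration of (1.4)-(1.5) for i.i.d. variables with the standard
# exponential d.f. F_1(x) = 1 - exp(-x).  Choosing u_n with
# 1 - F_1(u_n) = tau/n exactly gives Pr{M_n <= u_n} = (1 - tau/n)^n -> e^{-tau}.

tau = 1.5
for n in (10, 1000, 100000):
    u_n = math.log(n / tau)              # solves 1 - F_1(u_n) = tau/n
    p_n = (1.0 - math.exp(-u_n)) ** n    # Pr{M_n <= u_n} = F_1(u_n)^n
    print(n, p_n, math.exp(-tau))
```

As n grows, p_n approaches e^{-τ} at rate O(1/n), which is the content of (1.4) in this continuous case.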
Watson ([9]) gave conditions under which (1.4) holds for m-dependent stationary sequences, and Loynes ([8]) showed that here also, m-dependence may be replaced by strong mixing. We show in Section 3 that strong mixing may be replaced by D(u_n) for the particular sequence u_n = u_n(τ) involved. The limit in (1.4) is the same as would apply if the ξ_n were i.i.d. (with the same marginal d.f. F_1 as when dependent). As a corollary it will follow that, under appropriate conditions, a_n(M_n - b_n) has the same asymptotic distribution as it would if the ξ_n were i.i.d. with d.f. F_1. This is also a result shown in [8] under strong mixing.
It is well known that the Type I extreme value d.f. is the one which applies to i.i.d. normal sequences. Berman ([1]) has shown that this remains true for stationary normal sequences, under very weak conditions indeed on the covariance function (which, of course, is the most convenient type of assumption in practice, for a normal process). Specifically Berman showed that, provided the covariance sequence {r_n} of the (zero mean, unit variance) stationary normal sequence {ξ_n} satisfies either

(1.6)    (i) r_n log n → 0 as n → ∞    or    (ii) Σ r_n^2 < ∞,

then

(1.7)    Pr{a_n(M_n - b_n) ≤ x} → exp(-e^{-x}) as n → ∞,

where

(1.8)    a_n = (2 log n)^{1/2},
         b_n = (2 log n)^{1/2} - (1/2)(2 log n)^{-1/2}[log log n + log 4π].

It follows from Berman's proof that (1.4) holds (and (1.7) may then be obtained by simple transformation of (1.4)). We shall show in Section 4 that either of the conditions (1.6) (for the normal sequence) implies the general sufficient conditions for (1.4) (including D(u_n)) given in Theorem 3.1. Thus for normal sequences (1.4) follows alternatively from the general result Theorem 3.1. This does not reduce the amount of calculation, but it does demonstrate the connections between arguments previously used for normal processes under covariance conditions, and those using assumptions of "mixing type" for other processes. It also indicates the satisfactory nature of Theorem 3.1, since the particular normal result is known to be a very sharp one.
It is known that for continuous parameter, stationary normal processes, the upcrossings of a high level are asymptotically Poisson in character, under quite weak conditions ([2]). Similar results are (in fact more) easily proved for normal sequences. In Section 5 we obtain an asymptotic Poisson result of this kind for (not necessarily normal) stationary sequences under assumptions which include D(u_n). This result will enable us to obtain the asymptotic distribution of the r-th largest value M_n^(r), say, among ξ_1,…,ξ_n, as well as the maximum M_n = M_n^(1), as n → ∞. It is clearly also possible to consider joint distributions of the M_n^(r) along these lines (cf. [10]) but we do not pursue this matter here.

Finally we note that the present paper is concerned entirely with sequences. We hope, in a future paper, to consider corresponding properties in the (more complicated) continuous parameter case.
2. Basic lemmas, and Gnedenko's Theorem under D(u_n).

The proof of Theorem 1.1 given in [5] may be displayed as the two following results.

Lemma 2.1: Suppose that M_n are random variables (which in this lemma may be maxima or not) and a_n > 0, b_n constants such that

(2.1)    Pr{a_n(M_n - b_n) ≤ x} → G(x)    (convergence in distribution)

as n → ∞, where G is a non-degenerate d.f., and further, for each k = 1,2,…,

(2.2)    Pr{a_{nk}(M_n - b_{nk}) ≤ x} → G^{1/k}(x).

Then, corresponding to each k = 1,2,…, there are constants α_k > 0, β_k such that

(2.3)    G(α_k x + β_k) = G^{1/k}(x).

This lemma follows simply from a result of Khintchine, which may be conveniently found in [6, Section 10, Theorem 1].
10.
Let
Lenuna 2.2
G
be a non-degenerate d.f. such that
(2.3) holds for each
k
1,2 •••
=.
(for some constants
> 0, Sk).
Then G is one of the three extreme value
k
d.f.'s, being of Type I, II or III according as a = 1
k
a
for some (and hence all) k, a
a
k
<!
k
> 1
for some (all) k,
or
for some (all) k.
The derivation of this latter result constitutes
the major part of Gnedenko's proof.
It is easily seen
M = max(;l ••• ;n) where the ;j are i.i.d.,
n
then if (2.1) holds, so does (2.2) and hence (2.3).
Thus
that if
the conclusion of Theorem 1.1 follows for the i.i.d. case.
Our task here will be to show that (2.1) implies (2.2) if
{;n}
un
is a stationary sequence satisfying
= x/an
+ bn·
,
(for all real x).
D(U )
n
for
This will be done by
means of several lenunas, and will enable us to obtain the
desired generalization of Theorem 1.1.
First, write M(E) = max{ξ_j : j ∈ E} for any set E of integers. It will be convenient to talk of an "interval" to mean any finite set E of consecutive integers (j_1, j_1+1,…, j_2), say. We shall then say that E has "length" j_2 - j_1 + 1. If F = (k_1, k_1+1,…, k_2) with k_1 > j_2, we shall say that E and F are separated by k_1 - j_2. The following result will have several applications:
Lemma 2.3: Suppose that D(u_n) holds for some sequence {u_n}. Let N, r, k be fixed integers and E_1, E_2,…, E_r subintervals of (1,2,…,N) such that E_i and E_j are separated by at least k when i ≠ j. Then

    |P{∩_{j=1}^{r} (M(E_j) ≤ u_N)} - Π_{j=1}^{r} Pr{M(E_j) ≤ u_N}| ≤ (r-1) α_{N,k}.

Proof: Write A_j = {M(E_j) ≤ u_N}, where (by renumbering if necessary) E_j = (k_j, k_j+1,…, ℓ_j) with ℓ_1 < k_2 ≤ ℓ_2 < k_3 ≤ …. For brevity write intersections such as A_1 ∩ A_2 as A_1 A_2. Then

    |P(A_1 A_2) - P(A_1)P(A_2)| ≤ α_{N,k}.

Similarly

    |P(A_1 A_2 A_3) - P(A_1)P(A_2)P(A_3)|
        ≤ |P(A_1 A_2 A_3) - P(A_1 A_2)P(A_3)| + |P(A_1 A_2) - P(A_1)P(A_2)| P(A_3)
        ≤ 2 α_{N,k}.

Proceeding in this way, we obtain the result.
Now let k be a fixed positive integer, and write N = nk, n = 1,2,…. In the following we shall approximate Pr{M_N ≤ u_N} by [Pr{M_n ≤ u_N}]^k (which might be expected intuitively), when D(u_n) holds. This will enable us to obtain various results (including the extended Gnedenko Theorem) quite simply. The method used follows that of Loynes ([8]) in many of its essential features. Specifically, we divide the first N = nk integers into 2k consecutive intervals I_1, I_1*, I_2, I_2*,…, I_k, I_k* as follows: each I_j has length (n-m) and each I_j* has length m (fixed). The main steps of the approximation are displayed in the following lemma.
Lemma 2.4: With the above notation, and assuming D(u_n) holds,

(i)   0 ≤ P{∩_{j=1}^{k} (M(I_j) ≤ u_N)} - Pr{M_N ≤ u_N} ≤ k Pr{M(I_1) ≤ u_N < M(I_1*)}

(ii)  |P{∩_{j=1}^{k} (M(I_j) ≤ u_N)} - P^k{M(I_1) ≤ u_N}| ≤ k α_{N,m}

(iii) |P^k{M(I_1) ≤ u_N} - P^k{M_n ≤ u_N}| ≤ K Pr{M(I_1) ≤ u_N < M(I_1*)}

for some constant K. Hence, by combining (i), (ii), (iii),

(2.4)    |Pr{M_N ≤ u_N} - P^k{M_n ≤ u_N}| ≤ (k+K) Pr{M(I_1) ≤ u_N < M(I_1*)} + k α_{N,m}.

Proof: (i) follows at once since ∩_{j=1}^{k} (M(I_j) ≤ u_N) ⊃ {M_N ≤ u_N}, and the difference of these events implies M(I_j) ≤ u_N < M(I_j*) for some j, the probabilities of these latter events being independent of j by stationarity. (ii) follows from Lemma 2.3 with E_j = I_j, noting that Pr{M(I_j) ≤ u_N} is independent of j.

To obtain (iii) we note that, writing x = Pr{M_n ≤ u_N} and y = Pr{M(I_1) ≤ u_N},

    0 ≤ y - x ≤ Pr{M(I_1) ≤ u_N < M(I_1*)}.

The result then follows from the obvious inequalities 0 ≤ y^k - x^k ≤ K(y-x) for some K > 0, 0 < x ≤ y ≤ 1.
We now dominate the right hand side of (2.4), to obtain the desired approximation.

Lemma 2.5: If D(u_n) holds, r ≥ 1 is any integer, and n is sufficiently large, then

(2.5)    Pr{M(I_1) ≤ u_N < M(I_1*)} ≤ 1/(r+1) + 2r α_{N,m}.

It then follows from Lemma 2.4 that

(2.6)    Pr{M_N ≤ u_N} - P^k{M_n ≤ u_N} → 0 as n → ∞.

Proof: If n is sufficiently large, we may choose r intervals E_1,…, E_r, each of length m, from 1,2,…, n-m, so that they are separated from each other and from I_1* by at least m. Then

    Pr{M(I_1) ≤ u_N < M(I_1*)} ≤ P{∩_{s=1}^{r} (M(E_s) ≤ u_N)} - P{∩_{s=1}^{r} (M(E_s) ≤ u_N), M(I_1*) ≤ u_N}.

By stationarity, Pr{M(E_s) ≤ u_N} = Pr{M(I_1*) ≤ u_N} = p, say, and by Lemma 2.3, the two terms on the right differ from p^r, p^{r+1} (in absolute magnitude) by no more than (r-1) α_{N,m}, r α_{N,m} respectively. Hence

    Pr{M(I_1) ≤ u_N < M(I_1*)} ≤ p^r - p^{r+1} + (2r-1) α_{N,m},

from which (2.5) follows since p^r - p^{r+1} ≤ 1/(r+1). Finally by (2.4) and (2.5),

    lim sup_{n→∞} |Pr{M_N ≤ u_N} - P^k{M_n ≤ u_N}| ≤ (k+K)/(r+1) + [k + 2r(k+K)] lim sup_{n→∞} α_{N,m},

from which it follows (by letting m → ∞ and then r → ∞ on the right) that the left hand side is zero. Thus (2.6) is proved.
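The approximation (2.6) can be seen in closed form on a simple 1-dependent example (not taken from the paper): let η_1, η_2,… be i.i.d. uniform on (0,1) and ξ_i = max(η_i, η_{i+1}). Index sets separated by at least 2 involve disjoint η's, so D(u_n) holds with α_{N,m} = 0 for m ≥ 2, and Pr{M_n ≤ u} = u^{n+1} exactly. The sketch compares the two sides of (2.6) as n grows; the level u_N is chosen so that N Pr{ξ_1 > u_N} → τ.

```python
# Closed-form check of the blocking approximation (2.6) on the 1-dependent
# moving-maximum sequence xi_i = max(eta_i, eta_{i+1}), eta_i i.i.d. U(0,1).
# Here Pr{M_n <= u} = Pr{max(eta_1,...,eta_{n+1}) <= u} = u^(n+1).

def pr_max_le(n, u):
    return u ** (n + 1)

tau, k = 1.0, 10
for n in (100, 10000, 1000000):
    N = n * k
    u_N = 1.0 - tau / (2.0 * N)       # makes N * Pr{xi_1 > u_N} -> tau
    lhs = pr_max_le(N, u_N)           # Pr{M_N <= u_N}
    rhs = pr_max_le(n, u_N) ** k      # Pr{M_n <= u_N}^k
    gap = abs(lhs - rhs)
    print(n, lhs, rhs, gap)
```

The gap u_N^{N+1}(1 - u_N^{k-1}) is of order k/N, in line with (2.6). (Both sides approach e^{-τ/2} rather than e^{-τ}; this sequence fails the extra condition D'(u_n) introduced in Section 3, which is what forces the i.i.d. limit.)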
The Gnedenko Theorem now follows easily under general conditions.

Theorem 2.1: Let {ξ_n} be a stationary sequence and a_n > 0, b_n given constants such that Pr{a_n(M_n - b_n) ≤ x} converges in distribution to a non-degenerate d.f. G(x). Suppose that D(u_n) is satisfied for u_n = x/a_n + b_n, for each real x. Then G(x) has one of the three extreme value forms listed in Theorem 1.1.

Proof: By Lemma 2.5, with u_N = x/a_N + b_N (N = nk), we have

    P^k{M_n ≤ u_N} = Pr{M_N ≤ u_N} + o(1)

by (2.6), and this may be rewritten as Pr{a_N(M_N - b_N) ≤ x} + o(1), which converges to G(x) by assumption. Thus Pr{a_N(M_n - b_N) ≤ x} → G^{1/k}(x), i.e. (2.2) holds (as well as (2.1)), and the conclusion follows as in the i.i.d. case from Lemmas 2.1 and 2.2.
Corollary: The result remains true if the condition that D(u_n) be satisfied for each u_n = x/a_n + b_n is replaced by the requirement that the condition D holds. (For then D(u_n) is satisfied by any sequence at all, and in particular by u_n = x/a_n + b_n for each x.)

Finally we remark that the condition D(u_n) may be changed in various ways and still serve the present purposes. Our choice of D(u_n) is just one of a number of possible ones, but is a choice which seemed convenient and natural for our purposes.
3. Convergence of Pr{M_n ≤ u_n} and its consequences.

In this section we first consider conditions under which (1.4) holds (i.e. Pr{M_n ≤ u_n} → e^{-τ}) for a stationary sequence {ξ_n}, where the {u_n} are assumed to satisfy

(3.1)    1 - F_1(u_n) = Pr{ξ_1 > u_n} = τ/n + o(1/n) as n → ∞,

where τ > 0. We can then show as a corollary that a_n(M_n - b_n) has the same limiting distribution as it would if the ξ_n were i.i.d.

As may be seen from the derivation below, if (3.1) holds, Condition D(u_n) is then sufficient to guarantee that lim inf Pr{M_n ≤ u_n} ≥ e^{-τ}. However we need a further assumption to obtain the opposite inequality for the upper limit. Various forms of such an assumption may be used. Here we content ourselves with the following simple variant of conditions used in [9],[8]. (We refer to this as D'(u_n).)

Condition: D'(u_n) will be said to hold for the sequence {u_n} if, with N = nk,

(3.2)    lim sup_{n→∞} [n Σ_{j=2}^{n} Pr{ξ_1 > u_N, ξ_j > u_N}] = o(1/k) as k → ∞.
Note that if (3.1) holds, then a sufficient condition for (3.2) to hold is

(3.3)    n Σ_{j=2}^{n} |Pr{ξ_1 > u_n, ξ_j > u_n} - Pr^2{ξ_1 > u_n}| → 0 as n → ∞.

For if (3.3) holds it is readily checked that

    n Σ_{j=2}^{n} Pr{ξ_1 > u_N, ξ_j > u_N} ≤ n^2 Pr^2{ξ_1 > u_N} + n Σ_{j=2}^{n} |Pr{ξ_1 > u_N, ξ_j > u_N} - Pr^2{ξ_1 > u_N}|

(with N = nk). The second term on the right tends to zero as n → ∞ (being at most the corresponding sum with n replaced by N), and the first to τ^2/k^2 = O(1/k^2), so that (3.2) holds.
Theorem 3.1: Suppose that D(u_n), D'(u_n) hold (i.e. (1.3),(3.2)) for the stationary sequence {ξ_n}, where the u_n satisfy (3.1). Then Pr{M_n ≤ u_n} → e^{-τ} as n → ∞.

Proof: Fix a positive integer k, and write N = nk, n = 1,2,…. We use (2.6), and consider first the convergence of Pr{M_n ≤ u_N}. For this we note that

    Pr{M_n ≤ u_N} = 1 - P{∪_{j=1}^{n} (ξ_j > u_N)} ≥ 1 - n Pr{ξ_1 > u_N}

and hence by (3.1),

(3.4)    lim inf_{n→∞} Pr{M_n ≤ u_N} ≥ 1 - τ/k.

Correspondingly we also have

(3.5)    Pr{M_n ≤ u_N} ≤ 1 - n Pr{ξ_1 > u_N} + Σ_{1≤i<j≤n} Pr{ξ_i > u_N, ξ_j > u_N}
                       ≤ 1 - n Pr{ξ_1 > u_N} + n Σ_{j=2}^{n} Pr{ξ_1 > u_N, ξ_j > u_N}

by stationarity. It follows from (3.2) (and the fact that n Pr{ξ_1 > u_N} → τ/k) that

    lim sup_{n→∞} Pr{M_n ≤ u_N} ≤ 1 - τ/k + o(1/k),

and hence by (3.4) and (2.6) that

(3.6)    (1 - τ/k)^k ≤ lim inf_{n→∞} Pr{M_N ≤ u_N} ≤ lim sup_{n→∞} Pr{M_N ≤ u_N} ≤ [1 - τ/k + o(1/k)]^k.

To complete the proof we show that N may be replaced by n in (3.6), and the result then clearly follows by letting k → ∞. Choose r = r(n) so that rk ≤ n < (r+1)k. Now (since M_n ≤ M_{(r+1)k})

    Pr{M_n ≤ u_n} ≥ Pr{M_{(r+1)k} ≤ u_{(r+1)k}} - Pr{u_n < M_{(r+1)k} ≤ u_{(r+1)k}}

(which is zero if u_n > u_{(r+1)k}). It is easily checked that Pr{u_n < M_{(r+1)k} ≤ u_{(r+1)k}} is dominated by (r+1)k Pr{u_n < ξ_1 ≤ u_{(r+1)k}}, which is easily seen to tend to zero as n → ∞, by the choice of r. Thus

    lim inf_{n→∞} Pr{M_n ≤ u_n} ≥ (1 - τ/k)^k

by the first inequality of (3.6) (with (r+1)k for N). Similarly (using M_n ≥ M_{rk}),

    lim sup_{n→∞} Pr{M_n ≤ u_n} ≤ [1 - τ/k + o(1/k)]^k,

as desired.
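The need for a condition beyond D(u_n) in Theorem 3.1 can be illustrated by an example not taken from the paper: the 1-dependent moving maximum ξ_i = max(η_i, η_{i+1}) with η_i i.i.d. uniform on (0,1). Here D(u_n) holds trivially, but exceedances of u_n occur in pairs, the quantity in (3.2) does not vanish, and Pr{M_n ≤ u_n} tends to e^{-τ/2} rather than e^{-τ}.

```python
import math

# Moving maximum xi_i = max(eta_i, eta_{i+1}): Pr{xi_1 <= u} = u^2, so
# u_n = sqrt(1 - tau/n) satisfies (3.1) exactly, while
# Pr{M_n <= u_n} = u_n^(n+1) -> exp(-tau/2), not exp(-tau).

tau = 2.0
for n in (100, 10000, 1000000):
    u = math.sqrt(1.0 - tau / n)      # 1 - F_1(u) = tau/n exactly
    p_max = u ** (n + 1)              # Pr{M_n <= u}
    # xi_1 > u and xi_2 > u  iff  eta_2 > u, or (eta_1 > u and eta_3 > u):
    p_pair = (1.0 - u) + u * (1.0 - u) ** 2
    print(n, p_max, math.exp(-tau / 2.0), n * p_pair)
```

The printed quantity n·Pr{ξ_1 > u_n, ξ_2 > u_n} stays near τ/2 instead of tending to zero, which is exactly the failure of the "no clustering" requirement embodied in D'(u_n).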
We now write M̂_n for the maximum of n i.i.d. random variables with the same marginal d.f. F_1 as each ξ_n (following [8] we may call these the "independent sequence associated with {ξ_n}"). The following result is then a corollary of the theorem.

Corollary: Suppose that for some sequence {u_n},

    Pr{M̂_n ≤ u_n} → θ, for some θ > 0.

If D(u_n) and D'(u_n) are satisfied, then also Pr{M_n ≤ u_n} → θ.

Proof: The condition Pr{M̂_n ≤ u_n} → θ may be rewritten as F_1^n(u_n) → θ, or n log[1 - (1 - F_1(u_n))] → log θ, which gives n(1 - F_1(u_n)) → -log θ. Thus (3.1) holds with τ = -log θ, and hence the result follows.

We may also deduce at once that the limiting distribution of a_n(M_n - b_n) is the same as that which would apply if the ξ_i were i.i.d., i.e. of a_n(M̂_n - b_n), under conditions D(u_n) and D'(u_n). This result was proved in [8] under conditions including strong mixing.
Theorem 3.2: Suppose that, with the above notation,

    Pr{a_n(M̂_n - b_n) ≤ x} → G(x)

for some constants a_n > 0, b_n and some (Type I, II or III) d.f. G. Suppose also that D(u_n), D'(u_n) are satisfied when u_n = x/a_n + b_n, for each x. Then

    Pr{a_n(M_n - b_n) ≤ x} → G(x).

Proof: Since Pr{a_n(M_n - b_n) ≤ x} = Pr{M_n ≤ u_n}, the result follows from the previous corollary if G(x) > 0. If G(x) = 0 we have, for any y > x,

    Pr{a_n(M_n - b_n) ≤ x} ≤ Pr{a_n(M_n - b_n) ≤ y} → G(y).

Since G is continuous, we may take y > x with G(y) > 0, G(y) arbitrarily small, and the result easily follows.
4. Stationary normal sequences.

Suppose now that {ξ_n} is a stationary normal sequence, with zero means, unit variances, and covariances r_p = E(ξ_n ξ_{n+p}), where r_p → 0 as p → ∞. As noted in Section 1, Berman ([1]) has shown that if either r_n log n → 0 or Σ r_n^2 < ∞, then the Type I asymptotic law (1.7) holds, with the constants given by (1.8). We propose to show how this result, and the technique for obtaining it, fit in with the theory of the previous section. We shall show in particular, that either of the above covariance conditions implies our conditions D(u_n), D'(u_n) for appropriately chosen {u_n} (satisfying (3.1)). It has, incidentally, been pointed out in [4] that r_n log n → 0 for any strongly mixing stationary normal sequence. Thus strong mixing implies r_n log n → 0, which in turn implies our assumptions in this normal case. Define, now, u_n by 1 - Φ(u_n) = τ/n, where Φ denotes the standard normal d.f. (with density φ(x)).
If we can show that (1.4) holds, it will (as previously noted) follow that so does (1.7). The steps involved in this are:

(i) Write τ = e^{-x}, and use the fact that 1 - Φ(u) ~ φ(u)/u as u → ∞ to deduce that

(4.1)    u_n^2/2 = x + log n - (1/2) log 2π - log u_n + o(1).

(ii) Deduce that u_n^2/(2 log n) → 1 and hence, by taking logarithms and combining with (4.1), that

    u_n = (2 log n)^{1/2}[1 + (x - (1/2) log 4π - (1/2) log log n)/(2 log n) + o(1/log n)].

Insertion of this into (1.4) and rearrangement yields (1.7).
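The expansion in step (ii) can be compared with an exact solution of 1 - Φ(u_n) = e^{-x}/n. The sketch below is not part of the paper; the bisection helper is an arbitrary implementation choice, and Φ is computed via the complementary error function.

```python
import math

# Compare the step (ii) expansion for u_n with the exact root of
# 1 - Phi(u_n) = e^{-x}/n, where Phi is the standard normal d.f.

def phi_bar(u):
    # 1 - Phi(u), via the complementary error function
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def u_exact(p):
    # bisection for the root of phi_bar(u) = p on [0, 50]
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_bar(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x, n = 0.0, 10 ** 6
tau = math.exp(-x)
L = math.log(n)
u_asym = math.sqrt(2.0 * L) * (
    1.0 + (x - 0.5 * math.log(4.0 * math.pi) - 0.5 * math.log(L)) / (2.0 * L))
print(u_exact(tau / n), u_asym)
```

At n = 10^6 the two values agree to about one part in a hundred, consistent with the o(1/log n) error term.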
Thus we confine attention to (1.4). First we state a useful technical lemma, proved in [1].

Lemma 4.1: Suppose that either r_n log n → 0 or Σ_{n=1}^{∞} r_n^2 < ∞. Then with u_n as defined above, we have

(4.2)    n Σ_{j=1}^{n} |r_j| exp{-u_n^2/(1+|r_j|)} → 0 as n → ∞.

(It is perhaps worth noting that the condition Σ r_n^2 < ∞ may be replaced by Σ |r_n|^p < ∞ for some p > 0, in this lemma.)
In order to obtain (1.4) it is simplest to show (as in [1]) that Pr{M_n ≤ u_n} can be approximated by [Φ(u_n)]^n. Practically the same calculations are involved in verifying D(u_n) and D'(u_n), and hence we do this in order to show the relationship between normal sequences, and those of the previous sections. These will follow from the lemma below (which is virtually the same as Lemma 3.1 of [1], from which the details of proof may be obtained by slight changes). The technique of proof develops an argument originally due to Slepian, and which has been extended in many ways by a number of authors (especially S.M. Berman), to the extent that it may now be regarded as part of the basic machinery of the subject.
Lemma 4.2: Let ξ_1, ξ_2,… be the stationary normal sequence as defined at the start of this section, and let 1 ≤ ℓ_1 < ℓ_2 <…< ℓ_s. Then, for any u,

(4.3)    |Pr{ξ_{ℓ_j} ≤ u for j = 1,2,…,s} - Φ^s(u)| ≤ K Σ_{1≤i<j≤s} |ρ_{ij}| e^{-u^2/(1+|ρ_{ij}|)}

where ρ_{ij} is the correlation between ξ_{ℓ_i} and ξ_{ℓ_j}, and K is constant.
The conditions D(u_n), D'(u_n) now follow easily:

Lemma 4.3: Suppose that the covariances r_n of the (standard) stationary normal sequence {ξ_n} satisfy either r_n log n → 0 or Σ r_n^2 < ∞. Then D(u_n), D'(u_n) are satisfied for u_n defined by 1 - Φ(u_n) = τ/n.

Proof: From (4.3), with s = 2, ℓ_1 = 1, ℓ_2 = j, N = nk, we have

    |Pr{ξ_1 ≤ u_N, ξ_j ≤ u_N} - Φ^2(u_N)| ≤ K |r_{j-1}| e^{-u_N^2/(1+|r_{j-1}|)}

and thus, by simple manipulation,

    Pr{ξ_1 > u_N, ξ_j > u_N} ≤ [1 - Φ(u_N)]^2 + K |r_{j-1}| e^{-u_N^2/(1+|r_{j-1}|)}.

Hence

    n Σ_{j=2}^{n} Pr{ξ_1 > u_N, ξ_j > u_N} ≤ τ^2/k^2 + K N Σ_{j=1}^{N} |r_j| e^{-u_N^2/(1+|r_j|)},

from which D'(u_n) follows, by Lemma 4.1. It follows also from (4.3) that if 1 ≤ ℓ_1 < ℓ_2 <…< ℓ_s ≤ n, then

    |Pr{ξ_{ℓ_j} ≤ u_n, j = 1,…,s} - Φ^s(u_n)| ≤ K n Σ_{j=1}^{n} |r_j| e^{-u_n^2/(1+|r_j|)}.

Suppose now that 1 ≤ i_1 < i_2 <…< i_p < j_1 < j_2 <…< j_q ≤ n. Identifying (ℓ_1,…,ℓ_s) in turn with (i_1,…,i_p, j_1,…,j_q), (i_1,…,i_p), (j_1,…,j_q), we thus have

    |F_{i_1…i_p j_1…j_q}(u_n) - F_{i_1…i_p}(u_n) F_{j_1…j_q}(u_n)| ≤ 3K n Σ_{j=1}^{n} |r_j| e^{-u_n^2/(1+|r_j|)},

which tends to zero by Lemma 4.1. Thus D(u_n) is satisfied (and indeed lim_{n→∞} α_{n,ℓ} = 0 for each ℓ).

5. Poisson results, and asymptotic distribution of r-th largest values.
As noted in Section 1, for stationary normal processes in continuous time, the upcrossings of a high level tend to behave asymptotically like a Poisson process ([3],[2]). An analogous result can be shown to hold in discrete time (an upcrossing of the level u being said to occur between t = r and t = r+1 if ξ_r < u ≤ ξ_{r+1}). For example if C_n denotes the number of the points 1,2,…,n which are upcrossings of the level u_n (u_n = u_n(τ), 1 - Φ(u_n) = τ/n), then under (1.6) (i) or (ii), C_n can be shown to be asymptotically Poisson. We propose here to generalize this result to (not necessarily normal) stationary sequences satisfying conditions D(u_n), D'(u_n), and thereby obtain, as a corollary, asymptotic distributions for the r-th largest among ξ_1,…,ξ_n.
In doing so, it seems slightly more natural in the discrete case to consider not the number C_n of upcrossings, but rather the number (L_n, say) of points r of 1,2,…,n for which ξ_r > u_n. That is, L_n = Σ_{i=1}^{n} χ_i, where χ_i = 1 if ξ_i > u_n, and χ_i = 0 otherwise. (L_n and C_n are essentially the same asymptotically; we may regard L_n as the "total time out of n which {ξ_n} spends above u_n".) The connection between L_n and the r-th largest value (M_n^(r), say) of ξ_1,…,ξ_n is clear, viz.,

(5.1)    Pr{M_n^(r) ≤ u_n} = Pr{L_n < r}.
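The identity (5.1) is just the observation that the r-th largest value lies at or below u exactly when fewer than r of the ξ's exceed u. A direct check on toy data (values chosen arbitrarily, for illustration only):

```python
# Direct check of the identity (5.1): the r-th largest of a sample is <= u
# exactly when fewer than r sample values exceed u.

def rth_largest(xs, r):
    return sorted(xs, reverse=True)[r - 1]

def exceedances(xs, u):
    return sum(1 for v in xs if v > u)

xs = [0.3, 2.1, -0.7, 1.4, 0.9, 2.6, -1.2, 1.4]
ok = all((rth_largest(xs, r) <= u) == (exceedances(xs, u) < r)
         for u in (-1.0, 0.0, 1.0, 1.5, 3.0)
         for r in (1, 2, 3))
print(ok)
```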
In the following, then, we shall assume that {ξ_n} form a stationary sequence satisfying D(u_n) and D'(u_n), where {u_n} satisfies (3.1), for a fixed τ > 0. We shall obtain a limiting Poisson distribution for L_n, by the lemmas below. The same notation will be used as in Section 2 (stated prior to Lemma 2.4) concerning the intervals I_j, I_j*. It will also be convenient to write L_n(E) for the number of integers r ∈ E such that ξ_r > u_n, if E is any set of integers.
Lemma 5.1: For any r = 1,2,…, let A_r be the event that L_N(I_j) ≥ 1 for at least r values of j = 1,…,k (N = nk). Then

(i)   Pr{L_N ≥ r} ≤ P(A_r) + km Pr{ξ_1 > u_N} + k Pr{L_N(I_1) ≥ 2}

(ii)  P(A_r) ≤ Pr{L_N ≥ r}

and hence

(iii) |Pr{L_N ≥ r} - P(A_r)| ≤ km Pr{ξ_1 > u_N} + k Pr{L_N(I_1) ≥ 2}.

Further,

(iv)  lim sup_{n→∞} Pr{L_N(I_1) ≥ 2} = o(1/k) as k → ∞ (and uniformly in m).

Proof: (i) follows since if L_N ≥ r but A_r fails, then either L_N(I_j) ≥ 2 for some j, or an exceedance of u_N occurs at one of the km points of the I_j* intervals. (ii) is equally obvious, and (iii) follows by combining (i) and (ii). Also

    Pr{L_N(I_1) ≥ 2} ≤ n Σ_{j=2}^{n} Pr{ξ_1 > u_N, ξ_j > u_N},

and hence (iv) follows at once by D'(u_n) (Eqn. (3.2)).
Lemma 5.2: Let B_r be the event that L_N(I_j) ≥ 1 for exactly r of the intervals I_j. Then

    lim sup_{n→∞} |Pr{L_N = r} - P(B_r)| → 0 as k → ∞

(uniformly in m).

Proof: By subtracting the inequalities (iii) of Lemma 5.1 with (r+1) replacing r, from those with r itself, we obtain

    |Pr{L_N = r} - P(B_r)| ≤ 2km Pr{ξ_1 > u_N} + 2k Pr{L_N(I_1) ≥ 2}.

The first term on the right tends to zero as n → ∞ by (3.1), and the result follows from (iv) of Lemma 5.1.

Lemma 5.3: Write p(= p_n) = Pr{M(I_1) ≤ u_N}. Then

    |P(B_r) - C(k,r) p^{k-r}(1-p)^r| ≤ k C(k,r) 2^r α_{N,m},

where C(k,r) denotes the binomial coefficient.
Proof: Write e_j for the event M(I_j) ≤ u_N (so that e_j occurs iff L_N(I_j) = 0). Then

    B_r = ∪ e'_{j_1} e'_{j_2} … e'_{j_r} e_{j_{r+1}} … e_{j_k},

where the primes denote complements, the intersection signs are omitted, and the union is over all sets of distinct indices j_1,…,j_k. Now by Lemma 2.3, if 1 ≤ j_1 < j_2 <…< j_s ≤ k,

    |P(e_{j_1} e_{j_2} … e_{j_s}) - p^s| ≤ k α_{N,m},

and by induction on t, if the i's and j's are distinct,

    |P(e'_{i_1} … e'_{i_t} e_{j_1} … e_{j_s}) - (1-p)^t p^s| ≤ k 2^t α_{N,m}

(the left hand side does not exceed

    |P(e'_{i_1} … e'_{i_{t-1}} e_{j_1} … e_{j_s}) - (1-p)^{t-1} p^s| + |P(e'_{i_1} … e'_{i_{t-1}} e_{i_t} e_{j_1} … e_{j_s}) - (1-p)^{t-1} p^{s+1}|,

which does not exceed 2k 2^{t-1} α_{N,m} if the inductive hypothesis holds for t-1; it holds for t = 0 by the above). Thus

    |P(e'_{i_1} … e'_{i_r} e_{i_{r+1}} … e_{i_k}) - (1-p)^r p^{k-r}| ≤ k 2^r α_{N,m},

from which the desired result follows on summing over the C(k,r) choices of the complemented indices.
Lemma 5.4: Let p(= p_n) = Pr{M(I_1) ≤ u_N} as above. Then p_n^k → e^{-τ}, and hence p_n → e^{-τ/k}, as n → ∞.

Proof: Write p'_n = Pr{M_n ≤ u_N}. Then

    0 ≤ p_n - p'_n ≤ m Pr{ξ_1 > u_N} → 0 as n → ∞,

and hence, since by (2.6) p'_n^k - Pr{M_N ≤ u_N} → 0, also p_n^k - Pr{M_N ≤ u_N} → 0. But by Theorem 3.1, Pr{M_N ≤ u_N} → e^{-τ} as n → ∞, from which the desired result follows.
By Lemmas 5.2, 5.3, 5.4 we at once obtain

    lim sup_{n→∞} |Pr{L_N = r} - C(k,r)(1 - e^{-τ/k})^r e^{-(τ/k)(k-r)}| ≤ k C(k,r) 2^r lim sup_{n→∞} α_{N,m} + o(1)

and hence, letting m → ∞ (since the o(1) is uniform in m),

(5.2)    lim sup_{n→∞} |Pr{L_N = r} - C(k,r)(1 - e^{-τ/k})^r e^{-(τ/k)(k-r)}| → 0 as k → ∞.

Our final lemma shows that N may be replaced by n in (5.2).
Lemma 5.5:

(5.3)    lim sup_{n→∞} |Pr{L_n = r} - C(k,r)(1 - e^{-τ/k})^r e^{-(τ/k)(k-r)}| → 0 as k → ∞.

Proof: It follows easily from (5.2) that

(5.4)    lim sup_{n→∞} |Pr{L_N ≤ r} - Σ_{ℓ=0}^{r} C(k,ℓ)(1 - e^{-τ/k})^ℓ e^{-(τ/k)(k-ℓ)}| → 0 as k → ∞.

We will show that

(5.5)    lim sup_{n→∞} |Pr{L_n ≤ r} - Σ_{ℓ=0}^{r} C(k,ℓ)(1 - e^{-τ/k})^ℓ e^{-(τ/k)(k-ℓ)}| → 0 as k → ∞,

from which (5.3) follows easily by expressing Pr{L_n = r} as Pr{L_n ≤ r} - Pr{L_n ≤ r-1}. As in the proof of Theorem 3.1 we choose s = s(n) so that sk ≤ n < (s+1)k. Then it follows without difficulty that Pr{L_n ≤ r} differs from Pr{L_{(s+1)k} ≤ r} and from Pr{L_{sk} ≤ r} by quantities dominated by terms such as n Pr{u_n < ξ_1 ≤ u_{(s+1)k}}, which tend to zero as in the proof of Theorem 3.1. (5.4) may now be applied with N = (s+1)k and N = sk, and (5.5) then follows simply.
We may now obtain the main result.

Theorem 5.1: Suppose that D(u_n), D'(u_n) are satisfied for the stationary sequence {ξ_n}, where u_n satisfies (3.1). Then

(5.6)    Pr{L_n = r} → e^{-τ} τ^r/r! as n → ∞.

Proof: If θ_k = 1 - e^{-τ/k}, then kθ_k → τ as k → ∞, and hence

    C(k,r) θ_k^r e^{-(τ/k)(k-r)} → e^{-τ} τ^r/r! as k → ∞.

The result then follows from (5.3).
Two theorems now follow as corollaries.

Theorem 5.2: Under the conditions of Theorem 5.1, if M_n^(r) denotes the r-th largest of ξ_1,…,ξ_n, then

    Pr{M_n^(r) ≤ u_n} → e^{-τ} Σ_{s=0}^{r-1} τ^s/s!.

Proof: Pr{M_n^(r) ≤ u_n} = Pr{L_n < r}, and the result follows from (5.6).
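In the i.i.d. case L_n is binomial(n, p_n) with p_n ≈ τ/n, so (5.6) and Theorem 5.2 reduce to the classical Poisson approximation. The sketch below (illustrative only) compares the binomial quantities with their Poisson limits.

```python
import math

# In the i.i.d. case L_n ~ Binomial(n, tau/n); (5.6) is then the classical
# Poisson limit, and summing over s < r gives the limit in Theorem 5.2.

def binom_pmf(n, p, r):
    return math.comb(n, r) * p ** r * (1.0 - p) ** (n - r)

def poisson_pmf(t, r):
    return math.exp(-t) * t ** r / math.factorial(r)

tau, n, r = 2.0, 100000, 3
for s in range(r + 1):
    print(s, binom_pmf(n, tau / n, s), poisson_pmf(tau, s))

# Theorem 5.2: Pr{M_n^(r) <= u_n} = Pr{L_n < r} -> e^{-tau} * sum_{s<r} tau^s/s!
cdf = sum(binom_pmf(n, tau / n, s) for s in range(r))
limit = math.exp(-tau) * sum(tau ** s / math.factorial(s) for s in range(r))
print(cdf, limit)
```

The point of Section 5 is that the same Poisson limit persists for dependent sequences once D(u_n) and D'(u_n) hold.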
Theorem 5.3: Let {ξ_n} be a stationary sequence. Suppose that M̂_n (the maximum of the "associated independent sequence"; cf. Theorem 3.2) satisfies

    Pr{a_n(M̂_n - b_n) ≤ x} → G(x),

where G is non-degenerate (and hence Type I, II or III). Let D(u_n), D'(u_n) be satisfied by {ξ_n}, where u_n = x/a_n + b_n, for each x. Then the limiting distribution of M_n^(r), the r-th largest of ξ_1,…,ξ_n, is given by

    Pr{a_n(M_n^(r) - b_n) ≤ x} → G(x) Σ_{s=0}^{r-1} (-log G(x))^s/s!

(zero if G(x) = 0).

Proof: The condition Pr{a_n(M̂_n - b_n) ≤ x} → G(x) may be written as F_1^n(u_n) → G(x), from which we have (if G(x) > 0) 1 - F_1(u_n) = -log G(x)/n + o(1/n). Theorem 5.2 now applies with τ = -log G(x).
References

[1] Berman, S.M. "Limit theorems for the maximum term in stationary sequences", Ann. Math. Statist. 35 (1964), p.502-516.

[2] Berman, S.M. "Asymptotic independence of the numbers of high and low level crossings of stationary Gaussian processes", Ann. Math. Statist. 42 (1971), p.927-945.

[3] Cramér, H. "On the intersections between the trajectories of a normal stationary stochastic process and a high level", Arkiv Mat. 6 (1966), p.337-349.

[4] Deo, C.M. "A note on strong mixing Gaussian sequences", Ann. Prob. 1 (1973), p.186-187.

[5] Gnedenko, B.V. "Sur la distribution limite du terme maximum d'une série aléatoire", Ann. Math. 44 (1943), p.423-453.

[6] Gnedenko, B.V. and Kolmogorov, A.N. "Limit distributions for sums of independent random variables", Addison-Wesley, N.Y. (1954).

[7] Gumbel, E.J. "Statistics of extremes", Columbia Univ. Press, N.Y. (1958).

[8] Loynes, R.M. "Extreme values in uniformly mixing stationary stochastic processes", Ann. Math. Statist. 36 (1965), p.993-999.

[9] Watson, G.S. "Extreme values in samples from m-dependent stationary stochastic processes", Ann. Math. Statist. 25 (1954), p.798-800.

[10] Welsch, R.E. "A weak convergence theorem for order statistics from strong-mixing processes", Ann. Math. Statist. 42 (1971), p.1637-1646.