Leadbetter, M.R., G. Lindgren, and H. Rootzén (1979). "Extremal and Related Properties of Stationary Processes."

EXTREMAL AND RELATED PROPERTIES
OF STATIONARY PROCESSES

Part I: Extremes of Stationary Sequences

M.R. Leadbetter, G. Lindgren, and H. Rootzén

Institute of Statistics Mimeo Series #1227
July 1979

DEPARTMENT OF STATISTICS
Chapel Hill, North Carolina
EXTREMAL AND RELATED PROPERTIES
OF STATIONARY PROCESSES

Part I: Extremes of stationary sequences

by

M.R. Leadbetter, G. Lindgren, and H. Rootzén

American Mathematical Society 1970 subject classification: 60G10.

Research supported by the Office of Naval Research, Contract No. N00014-75-C-0809, and the Swedish Natural Research Council, Contract No. F3741-004.
Contents

General Introduction

Introduction to Part I: Extremes of stationary sequences

Chapters:
1. Classical extreme value theory
2. Maxima of stationary sequences
3. The normal case
4. Convergence of the point process of exceedances, and the distribution of r-th largest maxima
5. Normal sequences under strong dependence

Appendix: Some basic concepts of point process theory

References
General Introduction
Classical Extreme Value Theory - the asymptotic distributional theory for maxima of independent, identically distributed random variables - may be regarded as half a century old, even though its roots reach much further back into mathematical antiquity. During this period of time it has found significant application - exemplified best perhaps by the book "Statistics of Extremes" by E.J. Gumbel - as well as a rather complete theoretical development.
More recently, beginning with work of G.S. Watson, S.M. Berman, R.M. Loynes, and H. Cramér, there has been a developing interest in the extension of the theory to include first dependent sequences, and then continuous parameter stationary processes. The early activity proceeded in two directions - the extension of the general theory to certain dependent sequences (e.g. Watson, Loynes) and the beginning of a detailed theory for stationary sequences (Berman) and continuous parameter processes (Cramér) in the normal case.
In recent years both lines of development have been actively pursued, but especially that relating to normal sequences and processes. In the sequence case, it has proved possible to unify the two directions and to give a rather complete and satisfying general theory along the classical lines, including the known results for stationary normal sequences as special cases. Our principal aim in Part I of this work is to present this theory for dependent sequences in as complete and up to date a form as possible, alongside a brief description of the classical case. The treatment is thus unified both as regards the classical and dependent cases, and also in respect of consideration of normal and more general stationary sequences.
Some general theory has recently been obtained for extremes of continuous parameter stationary processes, and illuminating comparisons with the sequence case emerge. However, the most detailed results known in the continuous case are for stationary normal processes. In Part II we therefore concentrate substantially on the normal theory, and consider the general situation more briefly.
Closely related to the properties of extremes are those of exceedances and upcrossings of high levels, by sequences and continuous parameter processes. By regarding such exceedances and upcrossings as point processes, one may obtain some quite general results demonstrating convergence to Poisson and related point processes. A number of interesting results follow concerning the asymptotic behavior of the magnitude and location of such quantities as the k-th largest maxima (or local maxima, in the continuous setting). These and a number of other related topics have been taken up in both Parts I and II, but especially in Part II.

Many of the results given here have appeared in print in various forms, but a number are hitherto unpublished. For the previously known results we have given the smoothest proof (sometimes also new) of which we are aware, and our aim throughout has been to stress the underlying main principles which pervade (and connect) both the classical and the present context.
PART I
EXTREMES OF STATIONARY SEQUENCES
Classical extreme value theory is concerned substantially with distributional properties of the maximum of n independent and identically distributed random variables, as n becomes large. Our primary object in Part I of this work is to extend the classical theory to apply to certain dependent sequences. The sequences considered form a large subclass of the stationary sequences - namely those exhibiting a dependence structure which is "not too strong", in a sense to be made precise. We shall find, in fact, that the classical theory may be extended in a rather complete and satisfying manner, to apply to this more general situation.
Our first task - in Chapter 1 - is to give an account of the relevant parts of the classical theory, emphasizing those results with which we shall be concerned in later chapters. Two results are of basic importance here. The first is the fundamental result which exhibits the possible limiting forms for the distribution of M_n under linear normalizations. More specifically, if for some sequences of normalizing constants a_n > 0, b_n, a_n(M_n - b_n) has a limiting distribution function G(x), then G must have one of just three possible "forms". This basic, classical result will be proved in Chapter 1. We shall refer to this result as Gnedenko's Theorem, since Gnedenko's paper (1943) gave the first thorough treatment, even though the three extreme value types were previously discovered by Fréchet and Fisher & Tippett.
The second basic result given in Chapter 1 is almost trivial in the independent context, and gives a simple necessary and sufficient condition under which P{M_n ≤ u_n} converges for a given sequence of constants {u_n}. This result will play an important role in the dependent case (where it is not as trivial, but still true under appropriate conditions).
In Chapter 1 we shall also discuss similar questions for the k-th largest, M_n^{(k)}, of ξ_1, ..., ξ_n, where k is fixed, or increasing with n in certain ways. (The case where k increases is described for completeness only and will not be discussed in the later dependent context.)
The task of Chapter 2 is to generalize basic results concerning the maximum M_n, to apply to stationary sequences. As will be seen, the generalization follows in a rather complete way under certain natural restrictions limiting the dependence structure of the sequence. In particular it is shown that under these restrictions the limit laws in such dependent cases are precisely the same as the classical ones, and indeed, in a given dependent case, that the same limiting law applies as would if the sequence were independent, with the same marginal distribution.
In Chapter 3 this theory is applied to the case of stationary normal
sequences. It is found there that the dependence conditions required are
satisfied under very weak restrictions on the covariances associated with
the sequence.
Chapter 4 is concerned with the other topic of Chapter 1 - namely the properties of the k-th largest, M_n^{(k)}, of the ξ_i. The discussion is approached through a consideration of the "point process of exceedances of a level u_n" by the sequence ξ_1, ξ_2, .... This provides what we consider to be a helpful and illuminating viewpoint. In particular a simple convergence theorem shows the Poisson nature of the exceedances of high levels, leading to the desired generalizations of the classical results for M_n^{(k)}.
As noted, the discussion in Chapters 2-4 generalizes the classical case described in Chapter 1, showing that the classical results still apply if the dependence is not "too strong". It is, of course, interesting to investigate the situation under very strong dependence assumptions. There is no complete theory for this case, but such questions are explored for normal sequences in Chapter 5, where a variety of different limiting results are found.
CHAPTER 1
CLASSICAL EXTREME VALUE THEORY
Let ξ_1, ξ_2, ... be a sequence of independent and identically distributed (i.i.d.) random variables (r.v.'s), and write M_n for the maximum of the first n, i.e.

(1.1)  M_n = max(ξ_1, ..., ξ_n).

Then much of "classical" extreme value theory deals with the distribution of M_n, and especially with its properties as n → ∞. All results obtained for maxima of course lead to analogous results for minima through the obvious relation

m_n = min(ξ_1, ..., ξ_n) = -max(-ξ_1, ..., -ξ_n).

With a few exceptions (e.g. Chapter 11) we shall therefore not discuss minima explicitly in this work.

There is, of course, no difficulty in writing down the distribution function (d.f.) of M_n exactly in this situation; it is

(1.2)  P{M_n ≤ x} = P{ξ_1 ≤ x, ξ_2 ≤ x, ..., ξ_n ≤ x} = F^n(x),

where F denotes the common d.f. of the ξ_i. Much of the "Statistics of Extremes" (as is the title of Gumbel's book (1958)) deals with the distribution of M_n in a variety of useful cases, and with a multitude of related questions (for example concerning other order statistics, range of values and so on).
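Relation (1.2) is easy to check numerically. The sketch below is ours, not part of the text: it compares the exact value F^n(x) with a Monte Carlo estimate, using the exponential d.f. F(x) = 1 - e^{-x} as an illustrative choice.

```python
import math
import random

def F(x):
    # Exponential d.f. F(x) = 1 - exp(-x) for x >= 0
    return 1.0 - math.exp(-x) if x > 0 else 0.0

def max_df_exact(x, n):
    # By (1.2), P{M_n <= x} = F^n(x) for i.i.d. variables
    return F(x) ** n

def max_df_simulated(x, n, reps=20000, seed=1):
    # Empirical frequency of {M_n <= x} over many samples
    rng = random.Random(seed)
    hits = sum(
        max(rng.expovariate(1.0) for _ in range(n)) <= x
        for _ in range(reps)
    )
    return hits / reps
```

At x = log 10 and n = 10, for instance, F(x) = 0.9 and the exact value is 0.9^10 ≈ 0.349; the simulated frequency agrees to within Monte Carlo error.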
In view of such a satisfactory edifice of theory in finite terms, one may question the desirability of probing for asymptotic results. One reason for such a study appears to us to especially justify it. In simple central limit theory, one obtains an asymptotic normal distribution for the sum of many i.i.d. random variables whatever their common original d.f. Indeed one does not have to know the d.f. too precisely to "apply" the asymptotic theory. A similar situation holds in extreme value theory, and, in fact, a non-degenerate asymptotic distribution of M_n (normalized) can take one of just three possible general forms, regardless of the original d.f. F. Further, it is not necessary to know the detailed nature of F to know which limiting form (if any) it gives rise to, i.e. to which "domain of attraction" it belongs. In fact this is determined just by the behaviour of the tail of F(x) for large x, and so a good deal may be said about the asymptotic properties of the maximum, based on rather limited knowledge of the properties of F.
The central result - here referred to as "Gnedenko's Theorem" - was discovered first by Fisher and Tippett (1928) and later proved in complete generality by Gnedenko (1943). We shall prove this result in the i.i.d. context (Theorem 1.7), using a recent, simple approach due to de Haan (1976), and later extend it to dependent situations (e.g. Theorem 2.4).
We shall be concerned with conditions under which, for suitable normalizing constants a_n > 0, b_n,

(1.3)  P{a_n(M_n - b_n) ≤ x} →^w G(x)

(by which we mean that convergence occurs at continuity points of G, though we shall see later that the G's of interest are all continuous). In particular we shall be interested in determining which d.f.'s G may appear as such a limit. By (1.2), (1.3) may be written as

(1.4)  F^n(x/a_n + b_n) →^w G(x),

where again the notation →^w denotes convergence at continuity points of the limiting function.

If (1.4) holds for some sequences a_n > 0, b_n, we shall say that F belongs to the (i.i.d.) domain of attraction (for maxima) of G and write F ∈ D(G). Equivalently, of course, we may write a_n^{-1} instead of a_n in (1.4) without changing the definition of D(G), and we shall do so without comment where typographically convenient.
A central result to be used in the proof of Gnedenko's Theorem is a general theorem of Khintchine concerning convergence of distribution functions which - for completeness - we shall prove here (Lemma 1.3). It will be convenient to first define inverses of d.f.'s and other monotone functions and to give a few of their simplest properties, which we shall use in Lemma 1.3 and subsequently.
DEFINITION  If ψ(x) is a non-decreasing right continuous function, we define an inverse function ψ^{-1} (as in de Haan (1976)) by

ψ^{-1}(y) = inf{x: ψ(x) ≥ y}.
LEMMA 1.1  For ψ as above, and constants a > 0, b, c:

(i) If H(x) = ψ(ax + b) - c, then H^{-1}(y) = a^{-1}(ψ^{-1}(y + c) - b).

(ii) If ψ^{-1} is continuous, then ψ^{-1}(ψ(x)) = x.

(iii) If G is a non-degenerate d.f., there exist y_1 < y_2 such that -∞ < G^{-1}(y_1) < G^{-1}(y_2) < ∞.

PROOF  (i) We have

H^{-1}(y) = inf{x: ψ(ax + b) - c ≥ y}
          = a^{-1}(inf{(ax + b): ψ(ax + b) ≥ y + c} - b)
          = a^{-1}(ψ^{-1}(y + c) - b),

as required.

(ii) From the definition of ψ^{-1} it is clear that ψ^{-1} is non-decreasing. For y = ψ(x) we have ψ^{-1}(ψ(x)) ≤ x. If strict inequality holds for some x, the definition of ψ^{-1} shows the existence of z < x with ψ(z) ≥ ψ(x), and hence ψ(z) = ψ(x) since ψ is non-decreasing. But then for y > ψ(z) = ψ(x) we have ψ^{-1}(y) ≥ x, whereas ψ^{-1}(ψ(x)) ≤ z < x, contradicting the continuity of ψ^{-1}. Hence ψ^{-1}(ψ(x)) = x, as asserted.

(iii) Since G is non-degenerate there exists x'_1 with 0 < G(x'_1) < 1. Put y_1 = G(x'_1) and choose y_2 with y_1 < y_2 < 1. Clearly x_1 = G^{-1}(y_1) and x_2 = G^{-1}(y_2) are both finite. Also x_1 ≤ x'_1 ≤ x_2, and equality x_2 = x'_1 would require G(x'_1) = G(x'_1 + 0) ≥ y_2, contradicting G(x'_1) = y_1 < y_2. Thus -∞ < G^{-1}(y_1) < G^{-1}(y_2) < ∞, as required.  □

COROLLARY 1.2  If G is a non-degenerate d.f. and a > 0, α > 0, b, β are constants such that G(ax + b) = G(αx + β) for all x, then a = α and b = β.
PROOF  Choose y_1 < y_2 so that -∞ < x_1 < x_2 < ∞ by (iii) of the lemma, where x_1 = G^{-1}(y_1), x_2 = G^{-1}(y_2). Taking inverses of G(ax + b) and G(αx + β) by (i) of the lemma, we have

a^{-1}(G^{-1}(y) - b) = α^{-1}(G^{-1}(y) - β)

for all y. Applying this to y_1 and y_2 in turn we obtain

a^{-1}(x_1 - b) = α^{-1}(x_1 - β),  a^{-1}(x_2 - b) = α^{-1}(x_2 - β),

from which it follows simply that a = α and b = β.  □
We now obtain the promised general result of Khintchine.

LEMMA 1.3 (Khintchine)  Let {F_n} be a sequence of d.f.'s, and G a non-degenerate d.f. Let a_n > 0, b_n be constants such that

(1.5)  F_n(a_n x + b_n) →^w G(x).

Then, for some non-degenerate d.f. G_* and constants α_n > 0, β_n,

(1.6)  F_n(α_n x + β_n) →^w G_*(x)

if and only if

(1.7)  α_n/a_n → a,  (β_n - b_n)/a_n → b

for some a > 0, b, and then

(1.8)  G_*(x) = G(ax + b).

PROOF  By writing F'_n(x) = F_n(a_n x + b_n), α'_n = α_n/a_n, β'_n = (β_n - b_n)/a_n, we may rewrite (1.5), (1.6), (1.7) as

(1.5)'  F'_n(x) →^w G(x),

(1.6)'  F'_n(α'_n x + β'_n) →^w G_*(x),

(1.7)'  α'_n → a, β'_n → b for some a > 0, b.

If (1.5)' and (1.7)' hold, then obviously so does (1.6)', with G_*(x) = G(ax + b). Thus (1.5) and (1.7) imply (1.6) and (1.8). The proof of the lemma will be complete if we show that (1.5)' and (1.6)' imply (1.7)', for then (1.8) will also hold, as above.

Since G_* is assumed non-degenerate, there are two distinct points x', x'' (which may be taken to be continuity points of G_*) such that 0 < G_*(x') < 1, 0 < G_*(x'') < 1. The sequence {α'_n x' + β'_n} must be bounded. For if not, a sequence {n_k} could be chosen so that α'_{n_k} x' + β'_{n_k} → +∞ or -∞, which by (1.5)' (since G is a d.f.) would clearly imply that the limit of F'_{n_k}(α'_{n_k} x' + β'_{n_k}) is one or zero - contradicting (1.6)' for x = x'. Hence {α'_n x' + β'_n} is bounded, and similarly so is {α'_n x'' + β'_n}, which together show that the sequences {α'_n}, {β'_n} are each bounded.

Thus there are constants a, b and a sequence {n_k} of integers such that

(1.9)  α'_{n_k} → a,  β'_{n_k} → b,

and it follows as above that

F'_{n_k}(α'_{n_k} x + β'_{n_k}) →^w G(ax + b),

whence, since by (1.6)' G(ax + b) = G_*(x), a non-degenerate d.f., we must have a > 0. On the other hand, if another sequence {m_k} of integers gave α'_{m_k} → a', β'_{m_k} → b', we would have G_*(x) = G(a'x + b') and hence a' = a, b' = b by Corollary 1.2. Thus α'_n → a and β'_n → b, as required to complete the proof.  □
We shall say that two d.f.'s G_1, G_2 are of the same type if

(1.10)  G_2(x) = G_1(ax + b)

for some constants a > 0, b. Then the above lemma shows that if {F_n} is a sequence of d.f.'s with F_n(a_n x + b_n) →^w G_1 and F_n(α_n x + β_n) →^w G_2 (a_n > 0, α_n > 0), then G_1 and G_2 are of the same type, provided they are non-degenerate. As pointed out in de Haan (1976), the d.f.'s may clearly be divided into equivalence classes (which we call types) by saying that G_1 and G_2 are equivalent if G_2(x) = G_1(ax + b) for some a > 0, b.

If G_1 and G_2 are d.f.'s of the same type (G_2(x) = G_1(ax + b)) and F ∈ D(G_1), i.e. F^n(a_n x + b_n) →^w G_1 for some a_n > 0, b_n, then (1.7) is satisfied with α_n = a_n a, β_n = b_n + a_n b, so that F^n(α_n x + β_n) →^w G_2(x) by Lemma 1.3, and hence F ∈ D(G_2). Thus if G_1 and G_2 are of the same type, D(G_1) = D(G_2). Similarly we may see from the lemma that if F belongs to both D(G_1) and D(G_2), then G_1 and G_2 are of the same type. Hence D(G_1) and D(G_2) are identical if G_1 and G_2 are of the same type, and disjoint otherwise. That is, the domain of attraction of a d.f. G depends only on the type of G.
The following result gives a characterization of the d.f.'s G for which D(G) is not empty - i.e. of the G's which are possible limiting laws for maxima of i.i.d. sequences. We shall say that a non-degenerate d.f. G is max-stable if for each n = 2, 3, ... there are constants a_n > 0, b_n such that G^n(a_n x + b_n) = G(x). (Again, we can use a_n^{-1} instead of a_n in this definition, and do so as convenient.)

LEMMA 1.4  (i) A non-degenerate d.f. G is max-stable if and only if there is a sequence {F_n} of d.f.'s, and constants a_n > 0, b_n, such that

(1.11)  F_n(a_{nk}^{-1} x + b_{nk}) →^w G^{1/k}(x)  as n → ∞, for each k = 1, 2, ...

(ii) In particular, if G is non-degenerate, D(G) is non-empty if and only if G is max-stable. Then also G ∈ D(G).

PROOF  (i) If G is max-stable, then for each n we have G^n(a_n^{-1} x + b_n) = G(x) for some a_n > 0, b_n. Taking F_n = G^n,

F_n(a_{nk}^{-1} x + b_{nk}) = [G^{nk}(a_{nk}^{-1} x + b_{nk})]^{1/k} = (G(x))^{1/k},

so that (1.11) follows trivially. Conversely, suppose (1.11) holds for each k. Since G is non-degenerate, so is G^{1/k}, and comparing (1.11) for k = 1 with (1.11) for general k, Lemma 1.3 (with a_n replaced by a_{nk}^{-1}) implies that G^{1/k}(x) = G(α_k x + β_k) for some α_k > 0, β_k, i.e. G^k(α_k x + β_k) = G(x), so that G is max-stable.

(ii) If G is max-stable, G^n(a_n^{-1} x + b_n) = G(x) for some a_n > 0, b_n, so (letting n → ∞) we see that G ∈ D(G). Conversely, if D(G) is non-empty, F ∈ D(G) say, with F^n(a_n^{-1} x + b_n) →^w G(x). Hence F^{nk}(a_{nk}^{-1} x + b_{nk}) →^w G(x), or F^n(a_{nk}^{-1} x + b_{nk}) →^w G^{1/k}(x). Thus (1.11) holds with F_n = F^n, and hence by (i) G is max-stable.  □
COROLLARY 1.5  If G is max-stable, there exist real functions a(s) > 0 and b(s), defined for s > 0, such that

(1.12)  G^s(a(s)x + b(s)) = G(x),  all real x, s > 0.

PROOF  Since G is max-stable there exists F ∈ D(G) by Lemma 1.4. Thus F^n(a_n x + b_n) →^w G(x) as n → ∞ for some a_n > 0, b_n, and hence (letting [ ] denote integer part)

F^{[ns]}(a_{[ns]} x + b_{[ns]}) →^w G(x),

which implies

F^{[ns]/s}(a_{[ns]} x + b_{[ns]}) →^w G^{1/s}(x).

But this is easily seen (e.g. by taking logarithms) to give

F^n(a_{[ns]} x + b_{[ns]}) →^w G^{1/s}(x).

Since G^{1/s} is non-degenerate, Lemma 1.3 applies with α_n = a_{[ns]}, β_n = b_{[ns]} to show that

G(a(s)x + b(s)) = G^{1/s}(x)

for some a(s) > 0, b(s); raising this to the s-th power gives (1.12), as required.  □
We now show that the max-stable d.f.'s consist of precisely three different kinds of types - from which the classical Theorem of Gnedenko will follow simply. We refer to these as Types I, II, and III, even though Types II and III are really families of types, indexed by a parameter α > 0.

THEOREM 1.6  Every max-stable distribution is one of the following types:

Type I:   G(x) = exp(-e^{-x}),  -∞ < x < ∞;

Type II:  G(x) = 0 for x ≤ 0,  G(x) = exp(-x^{-α}) for x > 0  (for some α > 0);

Type III: G(x) = exp(-(-x)^α) for x < 0,  G(x) = 1 for x ≥ 0  (for some α > 0).
PROOF  We follow essentially the proof of de Haan (1976). First, it is trivially checked that each of the above types is max-stable.

Conversely, if G is max-stable then (1.12) holds for all s > 0 and all x. If 0 < G(x) < 1, (1.12) gives

-s log G(a(s)x + b(s)) = -log G(x),

so that

-log(-log G(a(s)x + b(s))) - log s = -log(-log G(x)).

If U(x) is an inverse function (as defined previously) of ψ(x) = -log(-log G(x)), then, since

ψ(a(s)x + b(s)) - log s = ψ(x),

Lemma 1.1(i) gives

(U(x + log s) - b(s))/a(s) = U(x).

Subtracting this equation for x = 0 from the general case, we have

(U(x + log s) - U(log s))/a(s) = U(x) - U(0),

and by writing y = log s, a_1(y) = a(e^y), Ũ(x) = U(x) - U(0), this becomes

(1.13)  Ũ(x + y) - Ũ(y) = Ũ(x) a_1(y)  for all real x, y.

Interchanging x and y and subtracting, we obtain

(1.14)  Ũ(x)(1 - a_1(y)) = Ũ(y)(1 - a_1(x)).

Two cases are possible, (a) and (b), as follows.

(a)  a_1(y) = 1 for all y, when (1.13) gives

Ũ(x + y) = Ũ(x) + Ũ(y).

The only monotone increasing solution of this is well known to be simply Ũ(y) = ρy for some ρ > 0, so that U(y) - U(0) = ρy, or

ψ^{-1}(y) = U(y) = ρy + ν  (ν = U(0)).

Since this is continuous, Lemma 1.1(ii) gives

x = ψ^{-1}(ψ(x)) = ρψ(x) + ν,

or ψ(x) = (x - ν)/ρ, so that

G(x) = exp(-e^{-(x-ν)/ρ})  when 0 < G(x) < 1.

It is easy to see from the max-stable property with n = 2 that G can have no jump at any finite (upper or lower) end point, and hence has the above form for all x, thus being of Type I.

(b)  a_1(y) ≠ 1 for some y, when (1.14) gives

(1.15)  Ũ(x) = Ũ(y)(1 - a_1(x))/(1 - a_1(y)) = c_1(1 - a_1(x)), say,

for all x, where c_1 = Ũ(y)/(1 - a_1(y)) ≠ 0 (since Ũ(y) = 0 would imply, by (1.14), Ũ(x) = 0 for all x, making U constant). From (1.13) we thus obtain

a_1(x + y) = a_1(x) a_1(y).

But a_1 is monotone (from (1.15)), and the only monotone non-constant solutions of this functional equation have the form a_1(x) = e^{ρx}, ρ ≠ 0. Hence (1.15) yields

ψ^{-1}(x) = U(x) = ν + c_1(1 - e^{ρx})

(where ν = U(0)). Since ψ is increasing, so is U, so that we must have c_1 < 0 if ρ > 0, and c_1 > 0 if ρ < 0. By Lemma 1.1(ii),

x = ν + c_1(1 - e^{ρψ(x)}),

giving, where 0 < G(x) < 1,

G(x) = exp{-(1 - (x - ν)/c_1)^{-1/ρ}}.

Again, from continuity of G at any finite end points, we see that G is of Type II or Type III with α = +1/ρ or -1/ρ according as ρ > 0 (c_1 < 0) or ρ < 0 (c_1 > 0).  □
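Max-stability of the three types can also be checked numerically. The sketch below is our illustration, not part of the text; it uses the standard norming constants a_n = 1, b_n = log n for Type I, a_n = n^{1/α}, b_n = 0 for Type II, and a_n = n^{-1/α}, b_n = 0 for Type III, under which G^n(a_n x + b_n) = G(x).

```python
import math

def gumbel(x):                 # Type I:  G(x) = exp(-e^{-x})
    return math.exp(-math.exp(-x))

def frechet(x, a):             # Type II: G(x) = exp(-x^{-alpha}) for x > 0
    return math.exp(-x ** (-a)) if x > 0 else 0.0

def weibull(x, a):             # Type III: G(x) = exp(-(-x)^alpha) for x < 0
    return math.exp(-((-x) ** a)) if x < 0 else 1.0

def max_stability_errors(n, x, a=2.0):
    # |G^n(a_n x + b_n) - G(x)| for each of the three types
    return (
        abs(gumbel(x + math.log(n)) ** n - gumbel(x)),
        abs(frechet(n ** (1.0 / a) * x, a) ** n - frechet(x, a)),
        abs(weibull(n ** (-1.0 / a) * x, a) ** n - weibull(x, a)),
    )
```

All three errors vanish up to rounding, in line with the identity G^n(a_n x + b_n) = G(x) defining max-stability.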
Gnedenko's Theorem now follows immediately for i.i.d. random variables.

THEOREM 1.7 (Gnedenko)  Let M_n = max(ξ_1, ξ_2, ..., ξ_n), where the ξ_i are i.i.d. random variables. If for some constants a_n > 0, b_n we have

(1.16)  P{a_n(M_n - b_n) ≤ x} →^w G(x)

for some non-degenerate G, then G is one of the three extreme value types listed above.
PROOF  (1.16) is just (1.4), where F is the d.f. of each ξ_i, which by definition gives F ∈ D(G). Hence by Lemma 1.4(ii) G is max-stable, and therefore by Theorem 1.6 G is one of the three listed types.  □

Looking ahead, if ξ_1, ξ_2, ... are not necessarily independent, but M_n = max(ξ_1, ..., ξ_n) has an asymptotic distribution G in the sense of (1.16), then (1.11) holds with k = 1, where F_n is the d.f. of M_n. If one can show that whenever (1.11) holds for k = 1 it holds for all k, it will follow that G is max-stable by Lemma 1.4(i), and hence that G is an extreme value type. Thus our approach when we consider dependent cases will be simply to show that, under appropriate assumptions, the truth of (1.11) for k = 1 implies its truth for all k, from which Gnedenko's Theorem will again follow.
Returning now to the i.i.d. case, we note that Gnedenko's Theorem assumes that a_n(M_n - b_n) has a non-degenerate limiting d.f. G, and then proves that G must have one of the three stated forms. It is easy to construct i.i.d. sequences {ξ_n} for which no such G exists. For example this is so if

P{ξ_n ≤ λ} = 1,  F(λ - 0) < 1

for some finite λ. For suppose P{a_n(M_n - b_n) ≤ x} → G(x), and write u_n = x/a_n + b_n, so that P{M_n ≤ u_n} → G(x). If G(x) > 0, then u_n < λ for at most finitely many values of n, since otherwise

P{M_n ≤ u_n} ≤ F^n(λ - 0) → 0

along those n. Thus u_n ≥ λ, and hence F(u_n) = 1, P{M_n ≤ u_n} = 1, for all sufficiently large n, so that G(x) = 1. Hence G takes no values other than 0 and 1, and G is degenerate.
Other conditions under which no non-degenerate limit exists may be found in Gnedenko (1943) and de Haan (1970). For example, the conditions given there include the case where F(x) is a Poisson d.f.

On the positive side, various necessary and sufficient conditions are known concerning the domains of attraction. We shall state these without proof. However, first we give some very simple and useful sufficient conditions which apply when the d.f. F has a density function. These are due to von Mises, and elegant, simple proofs are given in de Haan (1976). Here we reproduce just one of the three proofs as a sample, and refer the reader to the original paper, de Haan (1976), for the details of the others.
THEOREM 1.8  Suppose that the d.f. F of the r.v.'s of the i.i.d. sequence {ξ_n} is absolutely continuous with density f. Then sufficient conditions for F to belong to each of the three possible domains of attraction are:

Type I:   f has a negative derivative f' for all x in some interval (x_1, x_0) (x_0 ≤ ∞), f(x) = 0 for x ≥ x_0, and

  lim_{x↑x_0} f'(x)(1 - F(x))/f²(x) = -1;

Type II:  f(x) > 0 for all x ≥ x_1, x_1 finite, and

  lim_{x→∞} x f(x)/(1 - F(x)) = α > 0;

Type III: f(x) > 0 for all x in some finite interval (x_1, x_0), f(x) = 0 for x > x_0, and

  lim_{x↑x_0} (x_0 - x) f(x)/(1 - F(x)) = α > 0.
PROOF FOR THE TYPE II CASE  As noted, complete proofs may be found in de Haan (1976), and we give the Type II case here as a sample. Assume, then, that

lim_{x→∞} x f(x)/(1 - F(x)) = α > 0,  where f(x) > 0 for x ≥ x_1.

Writing α(x) = x f(x)/(1 - F(x)), it is immediate that for x_2 ≥ x_1,

∫_{x_1}^{x_2} (α(t)/t) dt = -log(1 - F(x_2)) + log(1 - F(x_1)),

so that

1 - F(x_2) = (1 - F(x_1)) exp(-∫_{x_1}^{x_2} (α(t)/t) dt).

Clearly there exists a_n such that 1 - F(a_n) = n^{-1}, and (by writing x_1 = a_n, x_2 = a_n x, or vice versa according as x > 1 or x < 1) we obtain

n(1 - F(a_n x)) = exp(-∫_{a_n}^{a_n x} (α(t)/t) dt) = exp(-∫_1^x (α(a_n s)/s) ds),

which, since a_n → ∞, converges to e^{-α log x} = x^{-α} for x > 0. Hence

F^n(a_n x) = (1 - n^{-1}[n(1 - F(a_n x))])^n,

and it follows simply that lim F^n(a_n x) = exp(-x^{-α}) for x > 0. (When x < 0, F^n(a_n x) → 0.) Thus the Type II limit follows.  □

If F is standard normal, then f'(x) = -x f(x) and 1 - F(x) ~ f(x)/x as x → ∞, so that f'(x)(1 - F(x))/f²(x) → -1, and a Type I limit applies. If F(x) = 1 - exp(-x^α) for x > 0 (some α > 0), then

f'(x) = ((α - 1)x^{-1} - αx^{α-1}) f(x),  1 - F(x) = f(x)/(αx^{α-1}),

so that f'(x)(1 - F(x))/f²(x) → -1, again giving a Type I limit. These examples illustrate the simplicity of the criteria. It is similarly easily checked that the Cauchy distribution gives a Type II limit whereas a uniform distribution yields Type III, and each Type I, II, and III limit belongs to its own domain of attraction, as previously observed. Other examples are given in de Haan (1976).
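The von Mises ratios are easy to evaluate numerically. A small sketch of ours, not from the text: for the exponential d.f. F(x) = 1 - e^{-x} the Type I ratio f'(x)(1 - F(x))/f²(x) is identically -1, while for the standard Cauchy d.f. the Type II ratio x f(x)/(1 - F(x)) approaches α = 1 as x grows.

```python
import math

def type1_ratio_exponential(x):
    # f'(x)(1 - F(x)) / f(x)^2 for F(x) = 1 - exp(-x)
    f = math.exp(-x)          # density
    fprime = -math.exp(-x)    # its derivative
    tail = math.exp(-x)       # 1 - F(x)
    return fprime * tail / f ** 2

def type2_ratio_cauchy(x):
    # x f(x) / (1 - F(x)) for the standard Cauchy distribution
    f = 1.0 / (math.pi * (1.0 + x * x))
    tail = 0.5 - math.atan(x) / math.pi
    return x * f / tail
```

The Cauchy ratio is already within about 1% of its limit at x = 10, reflecting the x^{-2} speed of convergence.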
As noted above, various necessary and sufficient conditions are also known. These concern the tail behaviour of the d.f. F and do not assume the existence of a density. We state these (as given in Gnedenko (1943)) without proof, in order of increasing complexity.
THEOREM 1.9  Necessary and sufficient conditions for the d.f. F of the r.v.'s of the i.i.d. sequence {ξ_n} to belong to each of the three types are:

Type II:  lim_{x→∞} (1 - F(x))/(1 - F(kx)) = k^α  (α > 0), for each k > 0;

Type III: there exists x_0 < ∞ such that F(x_0) = 1, F(x) < 1 for all x < x_0, and such that

  lim_{h↓0} (1 - F(x_0 - kh))/(1 - F(x_0 - h)) = k^α  (α > 0), for each k > 0;

Type I:   there exists a continuous function A(x) with lim_{x↑x_0} A(x) = 0, where x_0 (≤ ∞) is such that F(x_0) = 1, F(x) < 1 for all x < x_0, and such that for all h,

  lim_{x↑x_0} (1 - F(x(1 + hA(x))))/(1 - F(x)) = e^{-h}.
Note that all these criteria involve the behaviour of F near the "right hand end point" x_0 (finite or infinite) where F(x_0) = 1. Note also that if F(x_0) = 1 and F(x) < 1 for all x < x_0 (x_0 finite), then it is easily shown that M_n → x_0 with probability one. This does not, in itself, prevent a_n(M_n - b_n) from having a non-degenerate limiting distribution (unless, of course, F(x_0 - 0) < 1). The point is that even though M_n itself will usually tend to a degenerate r.v., a suitable normalization can often give a non-degenerate limit (as is familiar in other contexts, such as central limit theory). In fact, if F(x_0) = 1 for some finite x_0, there may be a limiting d.f. of either Type I or Type III.
The Type II condition may be interpreted as stating that the tail ψ(x) = 1 - F(x) is regularly varying as x → ∞, in the sense that ψ(x) may be expressed as the product of x^{-α} and a function of slow growth. The Type III condition involves similar considerations in the neighbourhood of the finite point x_0 at which F(x) becomes unity.

Convergence of P{M_n ≤ u_n} and its consequences
Before obtaining a detailed result for normal sequences, we give a general theorem of a form which will be used in later considerations for dependent sequences. It is related to Theorem 1.7 in that the limiting value of P{M_n ≤ u_n} is investigated as n → ∞. (A generalization of this result, applying to ordered values other than maxima, appears later in this chapter - Theorem 1.13.) Here we focus on just one sequence {u_n}, which may or may not be one of a family of the form u_n = x/a_n + b_n as x varies.

THEOREM 1.10  Let {ξ_n} be an i.i.d. sequence. Let τ > 0 and suppose {u_n} is a sequence of real numbers such that

(1.17)  1 - F(u_n) = τ/n + o(1/n)  as n → ∞.

Then

(1.18)  P{M_n ≤ u_n} → e^{-τ}  as n → ∞.

Conversely, if (1.18) holds for some τ ≥ 0, so does (1.17).

PROOF  From

P{M_n ≤ u_n} = F^n(u_n) = {1 - (1 - F(u_n))}^n

it is obvious that (1.18) follows from (1.17). Conversely, if (1.18) holds then clearly F(u_n) → 1, and by taking logarithms n(1 - F(u_n)) → τ, from which (1.17) follows.  □
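The sufficiency half of Theorem 1.10 is easy to illustrate numerically (our sketch, assuming the exponential d.f. F(x) = 1 - e^{-x}): taking u_n = log(n/τ) makes 1 - F(u_n) = τ/n exactly, so (1.17) holds, and F^n(u_n) approaches e^{-τ}.

```python
import math

def prob_max_below_level(n, tau):
    # P{M_n <= u_n} = F^n(u_n) for F(x) = 1 - exp(-x) and u_n = log(n/tau),
    # which makes 1 - F(u_n) = tau/n exactly, so (1.17) holds
    u_n = math.log(n / tau)
    return (1.0 - math.exp(-u_n)) ** n
```

For τ = 1, prob_max_below_level(n, 1.0) = (1 - 1/n)^n, which tends to e^{-1} ≈ 0.368, the limit asserted in (1.18).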
One word of caution is perhaps in order here. If F is a continuous d.f., we may always choose u_n so that 1 - F(u_n) = τ/n (at least for τ > 0), and then (1.17) trivially holds. If F is not continuous, it is sensible to choose u_n so that F(u_n - 0) ≤ 1 - τ/n ≤ F(u_n), e.g. u_n = inf{x: F(x) ≥ 1 - τ/n}. However, in such a case it does not necessarily follow that (1.17) holds. This may be seen without too much difficulty by taking F to increase only by jumps at 1, 2, 3, ..., with F(j) = 1 - 2^{-j}.
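The failure is easy to exhibit (a sketch of ours, not in the text). For the d.f. just described, jumping at 1, 2, 3, ... with F(j) = 1 - 2^{-j}, the choice u_n = inf{x: F(x) ≥ 1 - τ/n} forces u_n to be an integer, and n(1 - F(u_n)) oscillates (for τ = 1, roughly between 1/2 and 1) instead of converging to τ.

```python
import math

def F(x):
    # D.f. increasing only by jumps at 1, 2, 3, ... with F(j) = 1 - 2^{-j}
    j = math.floor(x)
    return 1.0 - 2.0 ** (-j) if j >= 1 else 0.0

def u(n, tau):
    # u_n = inf{x: F(x) >= 1 - tau/n}: smallest integer j with 2^{-j} <= tau/n
    return math.ceil(math.log2(n / tau))

def scaled_tail(n, tau=1.0):
    # n(1 - F(u_n)); condition (1.17) would require this to tend to tau
    return n * (1.0 - F(u(n, tau)))

values = [scaled_tail(n) for n in range(50, 100)]
```

As n runs from 50 to 99 the values swing between about 0.5 (just past a power of 2) and 1.0 (at a power of 2), so no limit exists.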
Theorem 1.10 may be used to obtain the asymptotic form of the distribution of M_n when the ξ_i are i.i.d. standard normal random variables.

THEOREM 1.11  If {ξ_n} is an i.i.d. (standard) normal sequence of r.v.'s, then the asymptotic distribution of M_n = max(ξ_1, ..., ξ_n) is of Type I. Specifically,

(1.19)  P{a_n(M_n - b_n) ≤ x} → exp(-e^{-x}),

where

a_n = (2 log n)^{1/2},
b_n = (2 log n)^{1/2} - (1/2)(2 log n)^{-1/2}(log log n + log 4π).

PROOF  Write τ = e^{-x} in (1.17). Then we may take u_n to satisfy

1 - Φ(u_n) = τ/n = e^{-x}/n,

where Φ denotes the standard normal d.f. Since 1 - Φ(u) ~ φ(u)/u as u → ∞ (φ being the standard normal density), we have

e^{-x}/n ~ φ(u_n)/u_n,

and hence

(1.20)  -log n - x + log u_n + (1/2) log 2π + u_n²/2 → 0.

It follows at once that u_n²/(2 log n) → 1, and hence

2 log u_n - log 2 - log log n → 0,

or

log u_n = (1/2)(log 2 + log log n) + o(1).

Putting this in (1.20) we obtain

u_n²/2 = log n + x - (1/2) log 4π - (1/2) log log n + o(1),

or

u_n² = 2 log n {1 + (x - (1/2) log 4π - (1/2) log log n)/log n + o(1/log n)},

and hence

u_n = (2 log n)^{1/2} {1 + (x - (1/2) log 4π - (1/2) log log n)/(2 log n) + o(1/log n)},

so that u_n = x/a_n + b_n + o(a_n^{-1}). Hence, since by (1.18) we have P{M_n ≤ u_n} → exp(-e^{-x}) where τ = e^{-x},

P{M_n ≤ x/a_n + b_n + o(a_n^{-1})} → exp(-e^{-x}),

or

P{a_n(M_n - b_n) + o(1) ≤ x} → exp(-e^{-x}),

from which (1.19) follows, as required.  □
While there are some computational details in this derivation, there is no difficulty of any kind, and (1.19) thus follows in a very simple way from the (itself simple) result (1.18). The same arguments can, and will later, be adapted to a continuous time context.
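A simulation sketch of (1.19) (ours, not from the text; the sample sizes are arbitrary choices): draw maxima of n standard normal variables, normalize with the a_n, b_n of Theorem 1.11, and compare the empirical d.f. with exp(-e^{-x}). Since the error in (1.19) decays only like 1/log n, only rough agreement should be expected at moderate n.

```python
import math
import random

def norming_constants(n):
    # a_n and b_n of Theorem 1.11
    a = math.sqrt(2.0 * math.log(n))
    b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)
    return a, b

def empirical_df(x, n=1000, reps=3000, seed=7):
    # Empirical P{a_n(M_n - b_n) <= x} for maxima of n standard normals
    rng = random.Random(seed)
    a, b = norming_constants(n)
    hits = sum(
        a * (max(rng.gauss(0.0, 1.0) for _ in range(n)) - b) <= x
        for _ in range(reps)
    )
    return hits / reps

def gumbel_df(x):
    # The Type I limit in (1.19)
    return math.exp(-math.exp(-x))
```

At x = 0 and n = 1000 the empirical value typically lies a few percent above the limit e^{-1} ≈ 0.368, consistent with the slow 1/log n rate.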
Poisson nature of exceedances, and implications for k-th maxima
We can look at the choice of u_n which makes (1.17) hold in a slightly different light. Let us regard u_n as a "level" (typically becoming higher with n) and say that an exceedance of the level u_n by the sequence ξ_1, ..., ξ_n occurs at "time" i if ξ_i > u_n. The probability of such an exceedance is clearly 1 - F(u_n), and hence the mean number of exceedances by ξ_1, ..., ξ_n is n(1 - F(u_n)) → τ. That is, the choice of u_n is made so that the mean number of exceedances by ξ_1, ..., ξ_n is approximately constant. We shall pursue this theme further now in developing Poisson properties of the exceedances. In the following, S_n will denote the number of exceedances of a level u_n by ξ_1, ..., ξ_n.
THEOREM 1.12  If {ξ_n} is an i.i.d. sequence, and if {u_n} satisfies (1.17), then S_n is asymptotically Poisson, i.e. for k = 0, 1, 2, ...,

(1.21)    P{S_n ≤ k} → e^{-τ} Σ_{j=0}^{k} τ^j/j!

Conversely if (1.21) holds for any one fixed k, then (1.17) holds (and (1.21) thus holds for all k).

PROOF  We shall show that if S_n is any binomial r.v. with parameters n, p_n and 0 < τ < ∞, then (1.21) holds if and only if n p_n → τ. The result will then follow in this particular case with p_n = 1 - F(u_n).

If n p_n → τ, then (1.21) follows at once from the standard Poisson limit for the binomial distribution. Conversely, if (1.21) holds for some k but n p_n does not tend to τ, there exists τ′ ≠ τ, 0 ≤ τ′ ≤ ∞, and a subsequence {n_ℓ} such that n_ℓ p_{n_ℓ} → τ′. If τ′ < ∞, the Poisson limit for the binomial distribution shows that

    P{S_{n_ℓ} ≤ k} → e^{-τ′} Σ_{j=0}^{k} τ′^j/j!  as ℓ → ∞,

which contradicts (1.21) since the function e^{-x} Σ_{j=0}^{k} x^j/j! is strictly decreasing in x ≥ 0 and hence 1-1. On the other hand if τ′ = ∞,

    P{S_n ≤ k} = Σ_{r=0}^{k} (n choose r) p_n^r (1 - p_n)^{n-r} ≤ Σ_{r=0}^{k} (n p_n)^r (1 - p_n)^{n-r}.

But 1 - x < e^{-x} for 0 < x < 1, so that (n p_n)^r (1 - p_n)^{n-r} ≤ (n p_n)^r e^{-(n-r)p_n}, which, for each fixed r, tends to zero as n → ∞ through the sequence {n_ℓ}, since n_ℓ p_{n_ℓ} → ∞. Thus P{S_{n_ℓ} ≤ k} → 0 as n_ℓ → ∞, contradicting the non-zero limit (1.21). Hence we must have n p_n → τ, as asserted.  □

We note incidentally that the random variable S_n used above is asymptotically the same as the number of upcrossings of the level u_n between 1 and n. Thus we may obtain a Poisson limit for the number of such upcrossings. Such Poisson properties of upcrossings will play an important role when we consider continuous time processes.
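The binomial-to-Poisson approximation used in the proof is easy to see numerically. The following sketch (illustrative, not from the text) compares the Binomial(n, τ/n) d.f. with the Poisson(τ) d.f. appearing in (1.21), computing the binomial terms in log form for stability.

```python
# Binomial(n, tau/n) d.f. versus the Poisson(tau) limit of (1.21) (sketch).
import math

def binom_cdf(n, p, k):
    # P{S_n <= k} for S_n ~ Binomial(n, p), summed via log terms
    total = 0.0
    for r in range(k + 1):
        logterm = (math.lgamma(n + 1) - math.lgamma(r + 1) - math.lgamma(n - r + 1)
                   + r * math.log(p) + (n - r) * math.log1p(-p))
        total += math.exp(logterm)
    return total

def poisson_cdf(tau, k):
    # e^{-tau} * sum_{j<=k} tau^j / j!
    return math.exp(-tau) * sum(tau**j / math.factorial(j) for j in range(k + 1))

tau, k = 2.0, 3
gap = abs(binom_cdf(10**5, tau / 10**5, k) - poisson_cdf(tau, k))
```

With n = 10^5 the two d.f.'s already agree to several decimal places, as the proof's Poisson limit suggests.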
Asymptotic distribution of k-th largest values

If M_n^{(k)} denotes the k-th largest among ξ_1, ξ_2, ..., ξ_n, then clearly the event {M_n^{(k)} ≤ u_n} is the same as the event {S_n < k}. By using this we may at once obtain the following restatement of Theorem 1.12.

THEOREM 1.13  Let {ξ_n} be an i.i.d. sequence. If {u_n} satisfies (1.17) then

(1.22)    P{M_n^{(k)} ≤ u_n} → e^{-τ} Σ_{s=0}^{k-1} τ^s/s!,

k = 1, 2, .... Conversely if (1.22) holds for some fixed k then (1.17) holds, and so does (1.22) for all k = 1, 2, ....  □
We may further restate this result to give the asymptotic distribution of M_n^{(k)} in terms of that for M_n (= M_n^{(1)}).

THEOREM 1.14  Suppose that

(1.23)    P{a_n(M_n - b_n) ≤ x} →^w G(x)

for some non-degenerate (and hence Type I, II, or III) d.f. G. Then, for each k = 1, 2, ...,

(1.24)    P{a_n(M_n^{(k)} - b_n) ≤ x} → G(x) Σ_{s=0}^{k-1} (-log G(x))^s/s!

where G(x) > 0 (and to zero where G(x) = 0).

Conversely, if for some fixed k,

(1.25)    P{a_n(M_n^{(k)} - b_n) ≤ x} →^w H(x)

for some non-degenerate d.f. H, then H(x) must be of the form on the right hand side of (1.24), where (1.23) holds with the same G, a_n, b_n. (Hence (1.24) holds for all k.)

PROOF  If (1.23) holds and x is such that G(x) > 0, then with u_n = x/a_n + b_n, (1.17) holds with τ = -log G(x), so that by Theorem 1.13, (1.22) holds with this τ and (1.24) follows (the case G(x) = 0 being clear).

Conversely, if (1.25) holds for some fixed k and x is such that H(x) > 0, we may clearly find τ such that

    H(x) = e^{-τ} Σ_{s=0}^{k-1} τ^s/s!,

since this function decreases continuously from 1 to 0 as τ increases. Thus (1.22) holds for this k, and hence, by Theorem 1.13, (1.22) holds for all k including k = 1, which gives (1.23) with G(x) = e^{-τ}, i.e. τ = -log G(x); (1.24) then follows, using continuity (non-degeneracy of G being clear).  □

We see that for i.i.d. random variables any limiting distribution of the k-th largest, M_n^{(k)}, has the form (1.24) based on the same d.f. G as applies to the maximum, and moreover that the normalizing constants are the same for all k, including k = 1 (the maximum itself). Thus we have a complete description of the possible non-degenerate limiting laws.
Joint asymptotic distribution of the largest maxima

The asymptotic distribution of the k-th largest maximum was obtained above by considering the number of exceedances of a level u_n by ξ_1, ..., ξ_n. Similar arguments can, and will presently, be adapted to prove convergence of the joint distribution of several large maxima.

Let the levels u_n^{(1)} ≥ ... ≥ u_n^{(r)} satisfy

(1.26)    1 - F(u_n^{(1)}) = τ_1/n + o(1/n), ..., 1 - F(u_n^{(r)}) = τ_r/n + o(1/n),

and define S_n^{(k)} to be the number of exceedances of u_n^{(k)} by ξ_1, ..., ξ_n.

THEOREM 1.15  Suppose that {ξ_n} is an i.i.d. sequence and that {u_n^{(k)}}, k = 1, ..., r, satisfy (1.26). Then, for k_1 ≥ 0, ..., k_r ≥ 0,

(1.27)    P{S_n^{(1)} = k_1, S_n^{(2)} = k_1 + k_2, ..., S_n^{(r)} = k_1 + ... + k_r}
              → (τ_1^{k_1}/k_1!)((τ_2 - τ_1)^{k_2}/k_2!) ··· ((τ_r - τ_{r-1})^{k_r}/k_r!) e^{-τ_r}

as n → ∞.

PROOF  Writing p_{n,k} = 1 - F(u_n^{(k)}) for the probability that ξ_1 exceeds u_n^{(k)}, it is easy to see that the left-hand side of (1.27) equals

(1.28)    [n!/(k_1! ··· k_r!(n - k_1 - ··· - k_r)!)] p_{n,1}^{k_1}(p_{n,2} - p_{n,1})^{k_2} ··· (p_{n,r} - p_{n,r-1})^{k_r}(1 - p_{n,r})^{n - k_1 - ··· - k_r}.

From (1.26) it follows in turn that

    n(n-1) ··· (n - k_1 + 1) p_{n,1}^{k_1}/k_1! → τ_1^{k_1}/k_1!,

    (n - k_1 - ··· - k_{ℓ-1}) ··· (n - k_1 - ··· - k_ℓ + 1)(p_{n,ℓ} - p_{n,ℓ-1})^{k_ℓ}/k_ℓ! → (τ_ℓ - τ_{ℓ-1})^{k_ℓ}/k_ℓ!

for 2 ≤ ℓ ≤ r, and that

    (1 - p_{n,r})^{n - k_1 - ··· - k_r} → e^{-τ_r},

and thus (1.27) is an immediate consequence of (1.26) and (1.28).  □

Clearly

(1.29)    P{M_n^{(1)} ≤ u_n^{(1)}, ..., M_n^{(r)} ≤ u_n^{(r)}} = P{S_n^{(1)} = 0, S_n^{(2)} ≤ 1, ..., S_n^{(r)} ≤ r-1},

and thus the joint asymptotic distribution of the r largest maxima can be obtained directly from Theorem 1.15. In particular, if the distribution of a_n(M_n^{(1)} - b_n) converges, then it follows not only that a_n(M_n^{(k)} - b_n) converges in distribution for k = 2, 3, ..., as was seen above, but also that the joint distribution of a_n(M_n^{(1)} - b_n), ..., a_n(M_n^{(r)} - b_n) converges. This is of course completely straightforward, but since the form of the limiting distribution becomes somewhat complicated if more than two maxima are considered, we state the result only for the two largest maxima.
THEOREM 1.16  Suppose that

(1.30)    P{a_n(M_n^{(1)} - b_n) ≤ x} →^w G(x)

for some non-degenerate (and hence Type I, II, or III) d.f. G. Then, for x_1 ≥ x_2,

(1.31)    P{a_n(M_n^{(1)} - b_n) ≤ x_1, a_n(M_n^{(2)} - b_n) ≤ x_2} → G(x_2)(1 + log G(x_1) - log G(x_2))

when G(x_2) > 0 (and to zero when G(x_2) = 0).

PROOF  We have to prove that P{M_n^{(1)} ≤ u_n^{(1)}, M_n^{(2)} ≤ u_n^{(2)}} converges if u_n^{(1)} = x_1/a_n + b_n and u_n^{(2)} = x_2/a_n + b_n. If (1.30) holds then

    1 - F(u_n^{(1)}) ~ τ_1/n  and  1 - F(u_n^{(2)}) ~ τ_2/n,

where τ_1 = -log G(x_1), τ_2 = -log G(x_2). Hence, by Theorem 1.15,

    P{S_n^{(1)} = 0, S_n^{(2)} ≤ 1} = P{S_n^{(1)} = 0, S_n^{(2)} = 0} + P{S_n^{(1)} = 0, S_n^{(2)} = 1}
        → e^{-τ_2} + (τ_2 - τ_1) e^{-τ_2},

which by (1.29) proves (1.31).  □
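The limit in (1.31) can be checked numerically in a case where the exact joint probability is elementary. The following sketch (illustrative, not from the text) uses i.i.d. standard exponential variables, for which a_n = 1, b_n = log n and G(x) = exp(-e^{-x}); for u_1 ≥ u_2 the exact probability is F(u_2)^n + n(F(u_1) - F(u_2))F(u_2)^{n-1} (all n observations below u_2, or exactly one in (u_2, u_1]).

```python
# Exact joint law of the two largest of n i.i.d. exponentials vs. (1.31).
import math

def joint_two_largest(n, x1, x2):
    # P{M^(1) <= x1 + log n, M^(2) <= x2 + log n}, exact, for x1 >= x2
    u1, u2 = x1 + math.log(n), x2 + math.log(n)
    F = lambda u: 1.0 - math.exp(-u)
    return F(u2)**n + n * (F(u1) - F(u2)) * F(u2)**(n - 1)

def limit_two_largest(x1, x2):
    # right hand side of (1.31) with G(x) = exp(-e^{-x})
    G = lambda x: math.exp(-math.exp(-x))
    return G(x2) * (1.0 + math.log(G(x1)) - math.log(G(x2)))

gap = abs(joint_two_largest(10**6, 1.0, 0.0) - limit_two_largest(1.0, 0.0))
```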
Increasing ranks

The results above apply to the k-th largest when k is fixed. We refer to this as the case of fixed ranks (or extreme order statistics). It is also of interest to consider cases where k = k_n → ∞ as n → ∞, and we shall refer to this as the case of increasing ranks. Two particular rates of convergence are of special interest:

(i)   k_n/n → θ (0 < θ < 1), which we shall call the case of central ranks,

(ii)  k_n → ∞ but k_n/n → 0, which will be called the intermediate rank case.

For the consideration of fixed ranks it was useful to define levels {u_n} satisfying (1.17), i.e. n(1 - F(u_n)) → τ. In the case where k_n → ∞ we shall find that the appropriate restrictions are that n F(u_n)(1 - F(u_n)) → ∞ and, writing p_n = 1 - F(u_n),

(1.32)    (k_n - n p_n)/(n p_n(1 - p_n))^{1/2} → τ

for a fixed constant τ, or equivalently (as we shall see),

(1.33)    (k_n - n p_n)/(k_n(1 - k_n/n))^{1/2} → τ.
Theorem 1.11 now has the following counterpart (in which Φ will denote the standard normal d.f., and S_n is again the number of exceedances of a level u_n by ξ_1, ξ_2, ..., ξ_n).

THEOREM 1.17  With the above notation, let k_n → ∞, write p_n = 1 - F(u_n), and suppose n p_n(1 - p_n) → ∞. If {u_n} satisfies (1.32), then

(1.34)    P{S_n < k_n} → Φ(τ).

Conversely if (1.34) holds so does (1.32). In the above statements (1.32) can be replaced by the equivalent condition (1.33).

PROOF  We may write S_n = Σ_{i=1}^{n} χ_i, where χ_i = 1 or 0 according as ξ_i > u_n or ξ_i ≤ u_n. The χ_i are thus i.i.d. with P{χ_i = 1} = p_n = 1 - P{χ_i = 0}. It follows from the Berry-Esseen bound that

    |P{S_n < k_n} - Φ((k_n - n p_n)/(n p_n(1 - p_n))^{1/2})| ≤ C/(n p_n(1 - p_n))^{1/2},

which tends to zero since n p_n(1 - p_n) → ∞. The main result follows since Φ((k_n - n p_n)/(n p_n(1 - p_n))^{1/2}) → Φ(τ) if and only if (k_n - n p_n)/(n p_n(1 - p_n))^{1/2} → τ (Φ and its inverse function both being continuous).

Finally, that (1.32) implies (1.33) follows by writing k_n = n p_n + τ(n p_n(1 - p_n))^{1/2}(1 + o(1)), and noting that this implies k_n ~ n p_n and n - k_n ~ n(1 - p_n). Similarly (1.33) implies (1.32).  □
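The normal limit (1.34) can be illustrated by direct computation. The following sketch (not from the text; parameter values are arbitrary) takes k_n as in (1.32), computes the exact binomial probability P{S_n < k_n} by summation in log form, and compares it with Φ(τ); the residual gap is within the Berry-Esseen order C/(n p_n(1 - p_n))^{1/2}.

```python
# Exact P{S_n < k_n} for binomial exceedance counts vs. the limit Phi(tau).
import math

def Phi(u):
    # standard normal d.f.
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def exceedance_cdf(n, p, k):
    # P{S_n < k} = sum_{r<k} C(n,r) p^r (1-p)^{n-r}, summed via log terms
    total = 0.0
    for r in range(k):
        logterm = (math.lgamma(n + 1) - math.lgamma(r + 1) - math.lgamma(n - r + 1)
                   + r * math.log(p) + (n - r) * math.log1p(-p))
        total += math.exp(logterm)
    return total

n, p, tau = 20000, 0.01, 1.0
k = round(n * p + tau * math.sqrt(n * p * (1 - p)))   # k_n chosen as in (1.32)
gap = abs(exceedance_cdf(n, p, k) - Phi(tau))
```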
Corresponding to Theorems 1.13 and 1.14, we thus have the following results.
THEOREM 1.18  With the above notation (p_n = 1 - F(u_n)), suppose that k_n → ∞, n p_n(1 - p_n) → ∞. If (1.32) or (1.33) holds, then

(1.35)    P{M_n^{(k_n)} ≤ u_n} → Φ(τ).

Conversely if (1.35) holds so do (1.32) and (1.33).  □

THEOREM 1.19  Again with the above notation, suppose that (1.32) or (1.33) holds with τ = τ(x) when u_n = x/a_n + b_n, for some sequences {a_n > 0}, {b_n}. Then

(1.36)    P{a_n(M_n^{(k_n)} - b_n) ≤ x} →^w H(x),

where H(x) = Φ(τ(x)). Conversely if (1.36) holds for some non-degenerate d.f. H, then H(x) = Φ(τ(x)), where (1.32) and (1.33) hold with τ = τ(x) when u_n = x/a_n + b_n.  □
Central ranks

The case of central ranks, where k_n/n → θ (0 < θ < 1), has been studied in Smirnov (1952). While we shall have little to say about this in later chapters, for the sake of completeness a few basic facts for the i.i.d. sequence will be discussed here. First we note that it is possible for two sequences {k_n}, {k_n′} with lim k_n/n = lim k_n′/n to lead to different non-degenerate limiting d.f.'s for M_n^{(k_n)}, M_n^{(k_n′)}. Specifically, as shown in Smirnov (1952), we may have

(1.37)    P{a_n(M_n^{(k_n)} - b_n) ≤ x} →^w H(x),

(1.38)    P{a_n′(M_n^{(k_n′)} - b_n′) ≤ x} →^w H′(x),

where k_n/n → θ, k_n′/n → θ, a_n > 0, b_n, a_n′ > 0, b_n′ are constants, and H(x), H′(x) are non-degenerate d.f.'s of different "type". However this is not possible if

(1.39)    n^{-1/2}(k_n - nθ) → 0,

as the following result shows.
LEMMA 1.20  Suppose that (1.37) and (1.38) hold, where H, H′ are non-degenerate and k_n, k_n′ both satisfy (1.39). Then H and H′ are of the same "type", i.e. H′(x) = H(ax + b) for some a > 0, b.

PROOF  If the terms of the i.i.d. sequence ξ_1, ξ_2, ... have d.f. F, then by Theorem 1.19

(1.40)    (k_n - n(1 - F(x/a_n + b_n)))/(k_n(1 - k_n/n))^{1/2} → τ(x),

where H(x) = Φ(τ(x)). By (1.39) both (k_n(1 - k_n/n))^{1/2} and (k_n′(1 - k_n′/n))^{1/2} are asymptotic to n^{1/2}(θ(1 - θ))^{1/2}, and (k_n - k_n′)/n^{1/2} → 0, so that (1.40) holds with k_n′ replacing k_n, and hence, by Theorem 1.19,

    P{a_n(M_n^{(k_n′)} - b_n) ≤ x} → Φ(τ(x)) = H(x).

But if H_n is the d.f. of M_n^{(k_n′)}, this says that H_n(x/a_n + b_n) →^w H(x), whereas also H_n(x/a_n′ + b_n′) →^w H′(x) by (1.38). Thus, by Lemma 1.3, H and H′ are of the same type, as required.  □

It turns out that for sequences {k_n} satisfying (1.39) just four types of limiting distributions H satisfying (1.37) are possible for M_n^{(k_n)}. For completeness we state this here as a theorem, and refer to Smirnov (1952) for proof.
THEOREM 1.21  If the central rank sequence {k_n} satisfies (1.39), the only possible non-degenerate d.f.'s H for which (1.37) holds are

1.  H(x) = 0 for x < 0, H(x) = Φ(c x^α) for x ≥ 0  (c > 0, α > 0),

2.  H(x) = Φ(-c|x|^α) for x < 0, H(x) = 1 for x ≥ 0  (c > 0, α > 0),

3.  H(x) = Φ(-c_1|x|^α) for x < 0, H(x) = Φ(c_2 x^α) for x ≥ 0  (c_1 > 0, c_2 > 0, α > 0),

4.  H(x) = 0 for x < -1, H(x) = 1/2 for -1 ≤ x < 1, H(x) = 1 for x ≥ 1.  □
Intermediate ranks

By an intermediate rank sequence we mean a sequence {k_n} such that k_n → ∞ but k_n = o(n). The general theory for increasing ranks applies, with some slight simplification. For example we may rephrase (1.33) as

    p_n = k_n/n - τ k_n^{1/2}/n + o(k_n^{1/2}/n).

The following result of Wu (1966) gives the possible normalized limiting d.f.'s of M_n^{(k_n)} when k_n is non-decreasing.

THEOREM 1.22  If ξ_1, ξ_2, ... are i.i.d. and {k_n} is a non-decreasing intermediate rank sequence, and if there are constants a_n > 0, b_n such that

    P{a_n(M_n^{(k_n)} - b_n) ≤ x} →^w H(x)

for a non-degenerate d.f. H, then H has one of the three forms

    H_1(x) = 0 for x ≤ 0, H_1(x) = Φ(a log x) for x > 0  (a > 0),

    H_2(x) = Φ(-a log|x|) for x < 0, H_2(x) = 1 for x ≥ 0  (a > 0),

    H_3(x) = Φ(x), -∞ < x < ∞.  □
This theorem is rather satisfying, though it does not specify the domains of attraction of the three limiting forms. Some results in this direction have been obtained in Chibisov (1964), Smirnov (1967), and Wu (1966). However these are highly dependent on the rank sequence {k_n}. For example a class of rank sequences {k_n} with k_n ~ ℓ² n^θ (0 < θ < 1) is studied in Chibisov (1964). If F is any d.f. it is known that there is at most one pair (ℓ, θ) such that F belongs to the domain of attraction of H_1, and the same statement holds for H_2. In addition, there are rank sequences such that only the normal law H_3 is a possible limit and, moreover, there are distributions attracted to it for every intermediate rank sequence {k_n}.

The above brief survey of selected parts of classical extreme value theory is by no means comprehensive or complete. However the topics have been chosen to illustrate the theory, and to provide a starting point for the discussion of dependent cases. In the next chapters we shall take up some of these topics directly, but will also consider a number of areas which are related to extreme value theory and have importance in their own right.
CHAPTER 2

MAXIMA OF STATIONARY SEQUENCES

There are various ways in which the notion of an i.i.d. sequence may be generalized - by "introducing dependence", by allowing the ξ_n to have different distributions, or both. For example an obvious generalization is to consider a sequence which is Markov of some order. Though the consideration of Markov sequences can lead to fruitful results for extremes, it is not the direction we shall pursue here.

We shall keep the assumption that the ξ_n have a common distribution; in fact it will be natural to consider stationary sequences, i.e. sequences such that the distributions of (ξ_{j_1}, ..., ξ_{j_n}) and (ξ_{j_1+m}, ..., ξ_{j_n+m}) are identical for any choice of n, j_1, ..., j_n, and m. Then we shall assume that the dependence between ξ_i and ξ_j falls off in some specified way as |i - j| increases. This is different from the Markov property, where in essence the past, {ξ_i; i < n}, and the future, {ξ_j; j > n}, are independent, given the present, ξ_n.

The simplest example of the type of restriction we consider is that of m-dependence, which requires that ξ_i and ξ_j be independent if |i - j| > m.

A more commonly used dependence restriction of this type for stationary sequences is that of strong mixing (introduced first in Rosenblatt (1956)). Specifically, the sequence {ξ_n} is said to satisfy the strong mixing assumption if there is a function g(k), the "mixing function", tending to zero as k → ∞, and such that

    |P(A ∩ B) - P(A)P(B)| ≤ g(k)

when A ∈ F(ξ_1, ..., ξ_p) and B ∈ F(ξ_{p+k+1}, ξ_{p+k+2}, ...) for any p, where F(·) denotes the σ-field generated by the indicated random variables. Thus when a sequence is strong mixing, any event A "based on the past up to time p" is "nearly independent" of any event B "based on the future from time p + k + 1 onwards", when k is large. Note that this mixing condition is uniform in the sense that g(k) does not depend on the particular A and B involved.

The correlation between ξ_i and ξ_j is also a (partial) measure of their dependence. Hence another dependence restriction of the same type is

    |Corr(ξ_i, ξ_j)| ≤ g(|i - j|),

where g(k) → 0 as k → ∞. Obviously such a restriction will be most useful if the ξ_n form a normal sequence.

Various results from extreme value theory have been extended to apply under each of the more general restrictions mentioned above. For example Watson (1954) generalized (1.18) to apply to m-dependent stationary sequences. Loynes (1965) considered quite a number of results (including (1.18) and Gnedenko's Theorem) under the strong mixing assumption, for stationary sequences. Berman (1964) used some simple correlation restrictions to obtain (1.19) for stationary normal sequences.

It is obvious that the results of Loynes (1965) and Berman (1964) are related - similar methods being useful in each - but the precise connections are not immediately apparent, due to the different dependence restrictions used. Berman's correlation restrictions are very weak assumptions, leading to sharp results for normal sequences. The mixing condition used by Loynes, while being useful in some contexts, is obviously rather restrictive. In this chapter we shall propose a much weaker condition of "mixing type", which first appeared in Leadbetter (1974), and under which, for example, the results of Loynes (1965) will still be true. Further, this condition will be satisfied for stationary normal sequences under Berman's correlation conditions (as we shall see in the next chapter). Hence the relationships between the various results are clarified.
Mixing conditions for Gnedenko's theorem

In weakening the mixing condition, one notes that the events of interest in extreme value theory are typically those of the form {ξ_i ≤ u} or their intersections. For example the event {M_n ≤ u} is just {ξ_1 ≤ u, ξ_2 ≤ u, ..., ξ_n ≤ u}. Hence one may be led to propose a condition like mixing but only required to hold for events of this type. One such natural condition would be the following, which we shall call Condition D. For brevity we will write F_{i_1···i_n}(x_1, ..., x_n) for the joint d.f. of ξ_{i_1}, ..., ξ_{i_n}, and F_{i_1···i_n}(u) for F_{i_1···i_n}(u, ..., u).

CONDITION D will be said to hold if for any integers i_1 < ··· < i_p and j_1 < ··· < j_q for which j_1 - i_p ≥ ℓ, and any real u,

(2.1)    |F_{i_1···i_p j_1···j_q}(u) - F_{i_1···i_p}(u) F_{j_1···j_q}(u)| ≤ g(ℓ),

where g(ℓ) → 0 as ℓ → ∞.

We shall see that Gnedenko's Theorem - and a number of other results - hold under D. However, while D is a significant reduction of the requirements imposed by mixing, we can do better yet. We shall consider a condition, to be called D(u_n), which will involve a requirement like (2.1) but applying only to a certain sequence of values {u_n}, and not necessarily to all u-values. More precisely, if {u_n} is a given real sequence, we define the condition D(u_n) as follows.

CONDITION D(u_n) will be said to hold if for any integers 1 ≤ i_1 < ··· < i_p < j_1 < ··· < j_q ≤ n for which j_1 - i_p ≥ ℓ, we have

(2.2)    |F_{i_1···i_p j_1···j_q}(u_n) - F_{i_1···i_p}(u_n) F_{j_1···j_q}(u_n)| ≤ α_{n,ℓ},

where α_{n,ℓ_n} → 0 as n → ∞ for some sequence ℓ_n = o(n).

Note that for each n, ℓ we may replace α_{n,ℓ} by the maximum of the left hand side of (2.2) over all allowed choices of i's and j's, to obtain a possibly smaller α_{n,ℓ} which is non-increasing in ℓ for each fixed n, and which still satisfies α_{n,ℓ_n} → 0 as n → ∞. Note also that for such α_{n,ℓ}, taken non-increasing in ℓ for each fixed n, the condition α_{n,ℓ_n} → 0 as n → ∞ for some ℓ_n = o(n) may be rewritten as

(2.3)    α_{n,[nλ]} → 0 as n → ∞, for each λ > 0.
It is trivially seen that if α_{n,ℓ_n} → 0 for some ℓ_n = o(n), then (2.3) holds. The converse may be shown by noting that (2.3) implies the existence of an increasing sequence of constants {m_k} such that α_{n,n/k} < k^{-1} for n ≥ m_k. If k_n is defined by k_n = r for m_r ≤ n < m_{r+1}, r ≥ 1, then k_n → ∞, so that α_{n,n/k_n} < k_n^{-1} → 0, and the sequence {ℓ_n} may be taken to be {n/k_n}.

Strong mixing implies D, which in turn implies D(u_n) for any sequence {u_n}. D(u_n) is satisfied, for appropriately chosen {u_n}, by stationary normal sequences under weak conditions, whereas mixing need not be (and it seems likely that D need not be either, though we have not investigated this).
First we give a lemma which demonstrates how Condition D(u_n) gives the "degree of independence" appropriate for our purposes. If E is any set of integers, M(E) will denote max{ξ_j; j ∈ E} (and of course M_n = M(E) if E = {1, ..., n}). It will be convenient to let an "interval" mean any finite set of consecutive integers {j_1, ..., j_2}, say; its length will be taken to be j_2 - j_1 + 1. If F = {k_1, ..., k_2} is another interval with k_1 > j_2, we shall say that E and F are separated by k_1 - j_2. Throughout, {ξ_n} will be a stationary sequence.

LEMMA 2.1  Suppose D(u_n) holds for some sequence {u_n}. Let n, r, k be fixed integers and E_1, ..., E_r subintervals of {1, ..., n} such that E_i and E_j are separated by at least k when i ≠ j. Then

    |P{∩_{j=1}^{r} (M(E_j) ≤ u_n)} - ∏_{j=1}^{r} P{M(E_j) ≤ u_n}| ≤ (r - 1) α_{n,k}.

PROOF  This is easily shown inductively. For brevity write A_j = {M(E_j) ≤ u_n}, and let E_1 = {k_1, ..., ℓ_1}, E_2 = {k_2, ..., ℓ_2}, where (by renumbering if necessary) k_1 ≤ ℓ_1 < k_2 ≤ ℓ_2. Then D(u_n) gives

    |P(A_1 ∩ A_2) - P(A_1)P(A_2)| = |F_{k_1···ℓ_1 k_2···ℓ_2}(u_n) - F_{k_1···ℓ_1}(u_n) F_{k_2···ℓ_2}(u_n)| ≤ α_{n,k},

since k_2 - ℓ_1 ≥ k. Similarly

    |P(A_1 ∩ A_2 ∩ A_3) - P(A_1)P(A_2)P(A_3)|
        ≤ |P(A_1 ∩ A_2 ∩ A_3) - P(A_1 ∩ A_2)P(A_3)| + |P(A_1 ∩ A_2) - P(A_1)P(A_2)| P(A_3)
        ≤ 2 α_{n,k},

since E_1 ∪ E_2 ⊆ {k_1, ..., ℓ_2} and k_3 - ℓ_2 ≥ k. Proceeding in this way we obtain the result.  □
This lemma shows a degree of independence for maxima on separated intervals. To obtain Gnedenko's Theorem from Lemma 1.4 we need to show that if (1.11) holds for k = 1, it holds for all k, i.e. that if P{a_n(M_n - b_n) ≤ x} → G(x), then P{a_{nk}(M_n - b_{nk}) ≤ x} → G^{1/k}(x) for each k = 2, 3, .... This will be so if, for each k = 2, 3, ...,

(2.4)    P{M_{nk} ≤ x/a_{nk} + b_{nk}} - P^k{M_n ≤ x/a_{nk} + b_{nk}} → 0

as n → ∞. Hence we wish to prove (2.4) to obtain Gnedenko's Theorem.

The method used is to divide the interval {1, ..., n} into k intervals of length [n/k], and to use "approximate independence" of the maxima on each via Lemma 2.1 to give a result which implies (2.4). To apply Lemma 2.1, we must shorten each of these intervals to separate them. This leads to the following construction, used for example in Loynes (1965), and given here in a slightly more general form for later use also.

Specifically we suppose that k is a fixed integer and, for any positive integer n, write n′ = [n/k] (the integer part of n/k). Thus we have n′k ≤ n < (n′ + 1)k. Divide the first n′k integers into 2k consecutive intervals, as follows. For large n, let m be an integer, k < m < n′, and write I_1 = {1, 2, ..., n′ - m}, I_1* = {n′ - m + 1, ..., n′}, with I_2, I_2*, ..., I_k, I_k* being defined similarly, alternately having lengths n′ - m and m. Finally write I_{k+1} = {(k - 1)n′ + m + 1, ..., kn′}, I_{k+1}* = {kn′ + 1, ..., kn′ + m}. (Note that I_{k+1}, I_{k+1}* are defined differently from I_j, I_j* for j ≤ k.)
The main steps of the approximation are contained in the following lemma. These are, broadly, to show first that the "small" intervals can be essentially disregarded, and then to apply Lemma 2.1 to the (now separated) intervals I_1, ..., I_k. In the following, {u_n} is any given sequence (not necessarily of the form x/a_n + b_n).

LEMMA 2.2  With the above notation, and assuming D(u_n) holds,

(i)    |P{M_n ≤ u_n} - P{∩_{j=1}^{k} (M(I_j) ≤ u_n)}| ≤ (k + 1) P{M(I_1) ≤ u_n < M(I_1*)},

(ii)   |P{∩_{j=1}^{k} (M(I_j) ≤ u_n)} - P^k{M(I_1) ≤ u_n}| ≤ k α_{n,m},

(iii)  |P^k{M(I_1) ≤ u_n} - P^k{M_{n′} ≤ u_n}| ≤ k P{M(I_1) ≤ u_n < M(I_1*)}.

Hence, by combining (i), (ii), (iii),

(2.5)    |P{M_n ≤ u_n} - P^k{M_{n′} ≤ u_n}| ≤ (2k + 1) P{M(I_1) ≤ u_n < M(I_1*)} + k α_{n,m}.

PROOF  The result (i) follows at once since {M_n ≤ u_n} ⊆ ∩_{j=1}^{k} {M(I_j) ≤ u_n}, and the difference of these events implies that M(I_j) ≤ u_n for 1 ≤ j ≤ k but ξ_s > u_n for some s in some I_j* with j ≤ k, or (if ξ_s ≤ u_n for all s ≤ kn′) for some s = kn′ + 1, ..., k(n′ + 1); this in turn implies M(I_j) ≤ u_n < M(I_j*) for some j ≤ k + 1 (since m > k and hence k(n′ + 1) ≤ kn′ + m, so that {kn′ + 1, ..., n} ⊆ I_{k+1}*, while I_{k+1} ⊆ {1, ..., kn′}). Since the probabilities of the events {M(I_j) ≤ u_n < M(I_j*)} are independent of j by stationarity, (i) follows.

The inequality (ii) follows from Lemma 2.1 with E_j = I_j (the I_j being separated by the intervals I_j*, of length m), noting that P{M(I_j) ≤ u_n} is independent of j by stationarity.

To obtain (iii) we note that

    0 ≤ P{M(I_1) ≤ u_n} - P{M_{n′} ≤ u_n} = P{M(I_1) ≤ u_n < M(I_1*)},

since I_1 ∪ I_1* = {1, ..., n′}. The result then follows, writing x = P{M_{n′} ≤ u_n}, y = P{M(I_1) ≤ u_n}, from the obvious inequalities

    0 ≤ y^k - x^k ≤ k(y - x)  when 0 ≤ x ≤ y ≤ 1.  □
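The blocking construction underlying Lemma 2.2 can be written out concretely. The following sketch (illustrative, not from the text) builds the long intervals I_1, ..., I_k of length n′ - m and the short intervals I_1*, ..., I_k* of length m, and checks that consecutive long intervals are separated by exactly m, as required for applying Lemma 2.1.

```python
# The interval construction I_1, I_1*, ..., I_k, I_k* used in Lemma 2.2 (sketch).
def blocks(n, k, m):
    np = n // k                      # n' = [n/k]
    assert k < m < np                # the constraint k < m < n' from the text
    I, Istar = [], []
    for j in range(k):
        lo = j * np
        I.append(range(lo + 1, lo + np - m + 1))            # I_j, length n' - m
        Istar.append(range(lo + np - m + 1, lo + np + 1))   # I_j*, length m
    return I, Istar

I, Istar = blocks(1000, 4, 20)
lengths_ok = all(len(b) == 250 - 20 for b in I) and all(len(s) == 20 for s in Istar)
# consecutive long blocks are separated by the short block between them
separations = [I[j + 1][0] - I[j][-1] - 1 for j in range(len(I) - 1)]
```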
We now dominate the right hand side of (2.5) to obtain the desired approximation.
LEMMA 2.3  If D(u_n) holds, r ≥ 1 is any fixed integer, and n > (2mr + 1)k, then, with the same notation as in Lemma 2.2,

(2.6)    P{M(I_1) ≤ u_n < M(I_1*)} ≤ 1/(r + 1) + 2r α_{n,m}.

It then follows from Lemma 2.2 that

(2.7)    P{M_n ≤ u_n} - P^k{M_{n′} ≤ u_n} → 0  as n → ∞.

PROOF  Since n′ > 2mr + 1, we may choose intervals E_1, ..., E_r, each of length m, from I_1 = {1, 2, ..., n′ - m}, so that they are separated from each other and from I_1* by at least m (n′ = [n/k], k < m < n′ again). Then

    P{M(I_1) ≤ u_n < M(I_1*)} ≤ P{∩_{s=1}^{r} (M(E_s) ≤ u_n)} - P{∩_{s=1}^{r} (M(E_s) ≤ u_n), M(I_1*) ≤ u_n}.

By stationarity, P{M(E_s) ≤ u_n} = P{M(I_1*) ≤ u_n} = p, say, and by Lemma 2.1 the two terms on the right differ from p^r and p^{r+1} (in absolute magnitude) by no more than (r - 1)α_{n,m} and r α_{n,m}, respectively. Hence

    P{M(I_1) ≤ u_n < M(I_1*)} ≤ p^r - p^{r+1} + 2r α_{n,m},

from which (2.6) follows since p^r - p^{r+1} ≤ 1/(r + 1) for 0 ≤ p ≤ 1.

Finally, by (2.5) and (2.6), taking m = ℓ_n according to (2.2) (ℓ_n = o(n)),

    lim sup_{n→∞} |P{M_n ≤ u_n} - P^k{M_{n′} ≤ u_n}| ≤ (2k + 1)/(r + 1) + (k + 2r(2k + 1)) lim sup_{n→∞} α_{n,ℓ_n} = (2k + 1)/(r + 1),

from which it follows (by letting r → ∞ on the right) that the left hand side is zero. Thus (2.7) is proved.  □
Gnedenko's Theorem now follows easily under general conditions.

THEOREM 2.4  Let {ξ_n} be a stationary sequence, and a_n > 0, b_n given constants such that P{a_n(M_n - b_n) ≤ x} converges to some non-degenerate d.f. G(x). Suppose that D(u_n) is satisfied for u_n = x/a_n + b_n, for each real x. Then G(x) has one of the three extreme value forms listed in Theorem 1.6.

PROOF  Writing u_n = x/a_n + b_n and using (2.7) (with nk in place of n) we obtain (2.4). Hence, as remarked in connection with (2.4), (1.11) holds for all k, since it holds by assumption for k = 1. Hence by Lemma 1.4, G is max-stable, and thus of extreme value type by Theorem 1.6.  □

COROLLARY 2.5  The result remains true if the condition that D(u_n) be satisfied for each u_n = x/a_n + b_n is replaced by the requirement that Condition D holds. (For then D(u_n) is satisfied by any sequence, in particular by u_n = x/a_n + b_n, for each x.)  □

Convergence of P{M_n ≤ u_n} under dependence
The results so far have been concerned with the possible forms of limiting extreme value distributions. We now turn to the existence of such a limit, in that we formulate conditions under which (1.17) and (1.18) are equivalent for stationary sequences, i.e. conditions under which

(2.8)    P{M_n ≤ u_n} → e^{-τ}

is equivalent to

(2.9)    1 - F(u_n) ~ τ/n.

We will show as a corollary that, under conditions of this type, a_n(M_n - b_n) has the same limiting distribution as it would if the ξ_n were i.i.d.
is sufficient to guarantee that
< u } > e- T
n -
n
-
However, we need a further assumption to obtain the opposite inequality
for the upper limit. Various forms of such an assumption may be used.
Here we content ourselves with the following simple variant of conditions used in Watson (l954) and Loynes (l965); we refer to this as
CONDITION
0' (un)
and sequence
will be said to hold for the stationary sequence
{un}
{Sj}
of constants if
[n/k]
(2.l0)
lim sup n
n+ oo
(where [
l:
P{Sl > u ,
n
j=2
~,
J
> u }
n
0
+
k
as
+
00,
] denotes the integer part).
Note that under (2.9), the level
are on the average approximately
r, l' .. ., ['n' and thus
" /k
u
T
amonq
n
in (2.10) is such that there
exceedances of
un
among
[']' ... , r'[n/kj. The condition
~l'
bounds the probability of more than one exceedance among
•.. ,
0' (u )
n
~[n/k].
This will eventually ensure that there are no multiple points in the
point process of exceedances, which of course is necpssary in obtaininq
(as we later shall) a simple Poisson limit for this point process.
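For orientation, the quantity in (2.10) is easy to evaluate in the trivial i.i.d. case: independence gives P{ξ_1 > u_n, ξ_j > u_n} = (1 - F(u_n))², so with 1 - F(u_n) ~ τ/n the sum is about n · (n/k) · (τ/n)² = τ²/k, which tends to zero as k → ∞. The following sketch (illustrative, not from the text) computes this directly.

```python
# The D'(u_n) quantity (2.10) for an i.i.d. sequence with 1 - F(u_n) = tau/n.
def dprime_quantity(n, k, tau):
    p = tau / n                          # 1 - F(u_n)
    # n * sum_{j=2}^{[n/k]} P{xi_1 > u_n, xi_j > u_n}, with independent terms p^2
    return n * sum(p * p for _ in range(2, n // k + 1))

tau, n = 1.5, 10**6
vals = [dprime_quantity(n, k, tau) for k in (10, 100, 1000)]
```

The values come out close to τ²/k and decrease to zero in k, so D′(u_n) holds trivially here; the condition becomes a genuine restriction only for dependent sequences, where joint exceedance probabilities may be much larger than (1 - F(u_n))².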
Our main result generalizes Theorem 1.10 to apply to stationary sequences under D(u_n), D′(u_n). A form of the "only if" part of this theorem was first proved by R. Davis (1978).
THEOREM 2.6  If D(u_n), D′(u_n) hold for the stationary sequence {ξ_n}, then (2.8) and (2.9) are equivalent, i.e. P{M_n ≤ u_n} → e^{-τ} if and only if 1 - F(u_n) ~ τ/n.

PROOF  Fix k and for each n write n′ (= n′_{n,k}) = [n/k]. Since

    {M_{n′} > u_n} = ∪_{j=1}^{n′} {ξ_j > u_n},

we have

    Σ_{j=1}^{n′} P{ξ_j > u_n} - Σ_{1≤i<j≤n′} P{ξ_i > u_n, ξ_j > u_n} ≤ P{M_{n′} > u_n} ≤ Σ_{j=1}^{n′} P{ξ_j > u_n}.

Using stationarity it follows simply that

(2.11)    1 - n′(1 - F(u_n)) ≤ P{M_{n′} ≤ u_n} ≤ 1 - n′(1 - F(u_n)) + S_n,

where S_n = S_{n,k} = n′ Σ_{j=2}^{n′} P{ξ_1 > u_n, ξ_j > u_n}. Since n′ = [n/k], Condition D′(u_n) gives lim sup_{n→∞} S_n = k^{-1} o(1) = o(k^{-1}) as k → ∞.

Suppose now that (2.9) holds. Then 1 - F(u_n) ~ τ/n ~ τ/(n′k), so that letting n → ∞ in (2.11) gives

    1 - τ/k ≤ lim inf P{M_{n′} ≤ u_n} ≤ lim sup P{M_{n′} ≤ u_n} ≤ 1 - τ/k + o(1/k).

By taking the k-th power of each term and using (2.7) we have

    (1 - τ/k)^k ≤ lim inf P{M_n ≤ u_n} ≤ lim sup P{M_n ≤ u_n} ≤ (1 - τ/k + o(1/k))^k.

Letting k → ∞ we see that lim P{M_n ≤ u_n} exists and equals e^{-τ}, as required to show (2.8).

Conversely, if (2.8) holds, i.e. P{M_n ≤ u_n} → e^{-τ} as n → ∞, we have from (2.11) (again with n′ = [n/k])

(2.12)    1 - P{M_{n′} ≤ u_n} ≤ n′(1 - F(u_n)) ≤ 1 - P{M_{n′} ≤ u_n} + S_n.

But since P{M_n ≤ u_n} → e^{-τ}, (2.7) shows that P{M_{n′} ≤ u_n} → e^{-τ/k}, so that letting n → ∞ in (2.12) we obtain, since n′ ~ n/k,

    1 - e^{-τ/k} ≤ (1/k) lim inf n(1 - F(u_n)) ≤ (1/k) lim sup n(1 - F(u_n)) ≤ 1 - e^{-τ/k} + o(1/k),

from which (multiplying by k and letting k → ∞, so that k(1 - e^{-τ/k}) → τ) we see that n(1 - F(u_n)) → τ, i.e. (2.9) holds.  □
We remark here that if D(u_n), D′(u_n) hold and it is assumed that (2.9) holds just for some subsequence {n_j} of integers, i.e. 1 - F(u_{n_j}) ~ τ/n_j, then the same proof shows that (2.8) holds for that subsequence, i.e. P{M_{n_j} ≤ u_{n_j}} → e^{-τ}. This observation may be used to give an alternative proof of the statement that (2.8) implies (2.9) in this theorem, by assuming the existence of some subsequence {n_j} for which n_j(1 - F(u_{n_j})) → τ′ ≠ τ, 0 ≤ τ′ ≤ ∞.
As a corollary of Theorem 2.6 we may obtain a limiting result for the maximum of the ξ's in any "interval" whose length is asymptotically proportional to n.

COROLLARY 2.7  Let {ξ_n} be a stationary sequence, and {u_n(τ)}, {u_n(θτ)} real sequences satisfying (2.9), and (2.9) with θτ in place of τ, respectively, i.e.

(2.13)    1 - F(u_n(τ)) ~ τ/n,  1 - F(u_n(θτ)) ~ θτ/n  as n → ∞,

where θ > 0, τ > 0. Suppose that D(u_n(θτ)), D′(u_n(θτ)) hold. Then if {I_n} is a sequence of intervals of integers with ν_n members, where ν_n ~ θn, we have

(2.14)    P{M(I_n) ≤ u_n(τ)} → e^{-θτ}  as n → ∞.

PROOF  By stationarity it is sufficient to show that P{M_{ν_n} ≤ u_n(τ)} → e^{-θτ}. Now it follows at once from Theorem 2.6 that P{M_{ν_n} ≤ u_{ν_n}(θτ)} → e^{-θτ}, and hence it is only necessary to show that

(2.15)    P{M_{ν_n} ≤ u_n(τ)} - P{M_{ν_n} ≤ u_{ν_n}(θτ)} → 0.

If u_n(τ) ≥ u_{ν_n}(θτ), the left hand side of (2.15) is

    P{u_{ν_n}(θτ) < M_{ν_n} ≤ u_n(τ)} ≤ ν_n {F(u_n(τ)) - F(u_{ν_n}(θτ))}.

If u_n(τ) < u_{ν_n}(θτ) the same result holds with reversed signs, so that the left hand side of (2.15) is always dominated in modulus by

    ν_n |F(u_n(τ)) - F(u_{ν_n}(θτ))| = ν_n |(1 - F(u_{ν_n}(θτ))) - (1 - F(u_n(τ)))|
        = ν_n |(θτ/ν_n)(1 + o(1)) - (τ/n)(1 + o(1))| = o(1)  as n → ∞,

since ν_n ~ θn, as required.  □
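In the simplest i.i.d. case the content of (2.14) can be verified by direct computation. The following sketch (illustrative, not from the text) takes standard exponential variables, for which u_n(τ) = log(n/τ) satisfies (2.9) exactly, and evaluates P{M(I_n) ≤ u_n(τ)} = F(u_n(τ))^{ν_n} for an interval with ν_n ~ θn members.

```python
# Corollary 2.7 in the i.i.d. exponential case: F(u_n(tau))^{nu_n} -> e^{-theta*tau}.
import math

def interval_max_prob(n, theta, tau):
    u = math.log(n / tau)             # level with 1 - F(u) = tau/n exactly
    nu = int(theta * n)               # interval length nu_n ~ theta * n
    F_u = 1.0 - math.exp(-u)          # F(u_n(tau)) = 1 - tau/n
    return F_u ** nu

n, theta, tau = 10**6, 0.5, 2.0
gap = abs(interval_max_prob(n, theta, tau) - math.exp(-theta * tau))
```

Already at n = 10^6 the probability agrees with e^{-θτ} to several decimal places; for dependent sequences the corollary asserts the same limit under D(u_n(θτ)), D′(u_n(θτ)).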
Note that if u_n(τ) satisfying (2.9) is given, we may obtain u_n(θτ) satisfying (2.13) by taking u_n(θτ) = u_{[n/θ]}(τ), since, by (2.9),

    1 - F(u_{[n/θ]}(τ)) ~ τ/[n/θ] ~ θτ/n.

(Of course the corollary applies with any other u_n(θτ) satisfying (2.13) as well as this one.) Putting this slightly differently: given u_n(τ) satisfying (2.9) for some τ, we may define u_n(τ′) satisfying (2.9) for all τ′ > 0 by taking u_n(τ′) = u_{[n/θ]}(τ) with θ = τ′/τ.

It is also useful to know that when u_n(θτ) is thus defined, if D(u_n(τ)) holds, then so does D(u_n(θτ)), and similarly for D′(u_n(τ′)), τ′ < τ. We show this in the following lemma.
LEMMA 2.8  Let u_n(τ) be defined, satisfying (2.9) for a fixed τ > 0, and for 0 < θ ≤ 1 define

(2.16)    u_n(θτ) = u_{[n/θ]}(τ).

Then

(i)    u_n(θτ) satisfies (2.13),

(ii)   if D(u_n(τ)) holds, so does D(u_n(θτ)),

(iii)  the same is true for D′(u_n) instead of D(u_n).

PROOF  (i)  By (2.9),

    1 - F(u_n(θτ)) = 1 - F(u_{[n/θ]}(τ)) ~ τ/[n/θ] ~ θτ/n.

(ii)  If D(u_n(τ)) holds and 1 ≤ i_1 < ··· < i_p < j_1 < ··· < j_q ≤ n with j_1 - i_p ≥ ℓ, we have, writing i = (i_1, ..., i_p), j = (j_1, ..., j_q) for brevity,

    |F_{i,j}(u_{[n/θ]}(τ)) - F_i(u_{[n/θ]}(τ)) F_j(u_{[n/θ]}(τ))| ≤ α_{[n/θ],ℓ} = α′_{n,ℓ}, say

(since n ≤ [n/θ] when θ ≤ 1). If α_{n,ℓ_n} → 0 with ℓ_n = o(n), then α′_{n,ℓ′_n} → 0 with ℓ′_n = ℓ_{[n/θ]} = o(n), so that (ii) follows.

(iii) also follows simply, since

    n Σ_{j=2}^{[n/k]} P{ξ_1 > u_n(θτ), ξ_j > u_n(θτ)} = n Σ_{j=2}^{[n/k]} P{ξ_1 > u_{[n/θ]}(τ), ξ_j > u_{[n/θ]}(τ)}
        ≤ [n/θ] Σ_{j=2}^{[[n/θ]/k]} P{ξ_1 > u_{[n/θ]}(τ), ξ_j > u_{[n/θ]}(τ)}.

By D′(u_n(τ)), the upper limit of this expression over n (or [n/θ]) tends to zero as k → ∞, so that (iii) follows.  □
From these results we may see simply that, under
that of
n/ B] (,) }
has
vn
~
8n
M(I )
n
D
and
D'
condi-
is of the same type as
members.
be a stationary sequence, let
an > 0, b
n
be
constants, and suppose that
for a non-degenerate d.L
of the form
x/an + b n , and let
integers for some
.e
G. Suppose
8,0 < 8
~
I
n
D(U ), 0' (un)
n
hold for all
be an interval containing
1. Then
un
8n
-40P{a (M(I ) - b ) ~ x}
n
n
n
~
Consider a point
x
+
a
G (x)
with
as
n
G (x) > 0
+
<Xl.
,= - log G (x) ,
and write
= u n (,) = x/a n + b n • By Theorem 2.6 we see that (2.9) holds. Then by
n
hold with un(a,)
Lemma 2.8 it follows that O(un(a,», 0' (un(a,»
u
defined by (2.16), and (2.13) also holds. Hence, by Corollary 2.7,
P{M(I ) < un}
n
x/an+b ,
n
,=
+
. i ng
e -a, , which gives the desired result on wr1t
-logG(x).
Note that since G is an extreme value d.f., it is max-stable, and Corollary 1.5 shows that G^θ is of the same type as G, i.e. G^θ(x) = G(ax + b) for some a > 0, b. Also, we may clearly modify the statement of the theorem slightly to include θ > 1, or to compare M(I_n) with M(I'_n) for some other interval I'_n containing ν'_n ~ θ'n integers.

Associated sequence of independent variables

We now write M̂_n for the maximum of n i.i.d. random variables with the same marginal d.f. F as each ξ_n; following Loynes (1965) we may call these the "independent sequence associated with {ξ_n}". The following result is then a corollary of Theorem 2.6.

THEOREM 2.10  Let D(u_n), D'(u_n) be satisfied for the stationary sequence {ξ_n}. Then P{M̂_n ≤ u_n} → α for some α > 0 if and only if P{M_n ≤ u_n} → α.

PROOF  The condition P{M̂_n ≤ u_n} → α may be rewritten as P{M̂_n ≤ u_n} → e^{−τ} with τ = −log α, and, by Theorem 1.10, holds if and only if 1 − F(u_n) ~ τ/n. By Theorem 2.6 the same is true for P{M_n ≤ u_n} → e^{−τ}, so that the result follows.  □

We may also deduce at once that the limiting distribution of a_n(M_n − b_n) is the same as that which would apply if the ξ_n were i.i.d., i.e. it is the same as that of a_n(M̂_n − b_n), under the conditions D(u_n) and D'(u_n). Part of this result was proved in Loynes (1965) under conditions which include strong mixing.

THEOREM 2.11  Suppose that D(u_n), D'(u_n) are satisfied for the stationary sequence {ξ_n} when u_n = x/a_n + b_n, for each x ({a_n > 0}, {b_n} being given sequences of constants). Then P{a_n(M_n − b_n) ≤ x} → G(x) for some non-degenerate G if and only if P{a_n(M̂_n − b_n) ≤ x} → G(x).

PROOF  Under either assumption G is an extreme value distribution by Gnedenko's Theorem, and hence continuous. If G(x) > 0, the equivalence follows from Theorem 2.10 with α = G(x). On the other hand, if e.g. G(x_0) = 0, we have for any y with G(y) > 0,

P{a_n(M_n − b_n) ≤ x_0} ≤ P{a_n(M_n − b_n) ≤ y} → G(y).

By letting y decrease we see that lim sup_{n→∞} P{a_n(M_n − b_n) ≤ x_0} is zero, so that P{a_n(M_n − b_n) ≤ x_0} → 0. This, and the corresponding result obtained by interchanging M_n and M̂_n, complete the proof.  □
CHAPTER 3

THE NORMAL CASE

In this chapter we consider a stationary normal sequence {ξ_n} with zero means, unit variances, and covariances r_n = E(ξ_j ξ_{j+n}). As noted in Chapter 2, Berman (1964) has given simple conditions on r_n to ensure that (1.19) holds, i.e. that M_n = max(ξ_1, ..., ξ_n) has a limiting distribution of the double exponential type,

P{a_n(M_n − b_n) ≤ x} → exp(−e^{−x}),  n → ∞,

with

a_n = (2 log n)^{1/2},
b_n = (2 log n)^{1/2} − (2 a_n)^{−1} {log log n + log 4π}.

One of Berman's results is that it suffices that

(3.1)  r_n log n → 0  as n → ∞,

and we will devote the first part of this chapter to a proof of this. However, (3.1) can be replaced by an appropriate Cesàro convergence condition, involving a parameter γ > 2, and this and some other conditions will be discussed at the end of the chapter.

It has been shown by Mittal and Ylvisaker (1975) that (3.1), and therefore also the Cesàro convergence, is rather close to being necessary, in that if r_n log n → γ > 0 a different limit applies. In fact, this strong dependence destroys the asymptotic independence of extremes in disjoint intervals (cf. Lemmas 2.1-2.3). We return to these matters in more detail in Chapter 5.

In this section we use the general results from Chapter 2 to derive the double exponential law for normal sequences under some different covariance assumptions, starting with the transparent condition (3.1) and proving that this implies D(u_n) and D'(u_n). (This approach does not reduce the amount of work required, but does bring the normal case within the general framework.)
Throughout this and all subsequent chapters, Φ and φ will denote the standard normal distribution function and density respectively. We shall repeatedly have occasion to use the well known relation for the tail of Φ, stated here for easy reference:

(3.2)  1 − Φ(u) ~ φ(u)/u  as u → ∞.

Double exponential limit if r_n log n → 0
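The tail relation (3.2) is easy to check numerically. The following minimal Python sketch (illustrative only; the levels u are arbitrary choices) computes 1 − Φ(u) via the standard library's erfc and compares it with φ(u)/u; the ratio tends to 1, with relative error of order 1/u².

```python
import math

def phi(u):
    # standard normal density
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def tail(u):
    # 1 - Phi(u), computed stably via the complementary error function
    return 0.5 * math.erfc(u / math.sqrt(2))

for u in (2.0, 5.0, 10.0):
    approx = phi(u) / u          # the right hand side of (3.2)
    print(u, approx / tail(u))   # ratio tends to 1 as u grows
```

The ratio overshoots 1 by roughly 1/u² (about 4% at u = 5, about 1% at u = 10), consistent with the next term of the Mills ratio expansion.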
Our aim is to show that if (3.1) holds, then D(u_n) and D'(u_n) are satisfied when

(3.3)  1 − Φ(u_n) ~ τ/n.

We can then conclude that (2.8) holds and obtain (1.19) for this sequence. First, it will be convenient to prove a "technical" lemma (given in Berman (1964)), which is of a type that is very useful in both discrete and continuous cases. The proof is elementary, though some slightly messy calculation is involved.
Before stating the lemma, we note the easily proved fact that if r_n → 0 as n → ∞ then |r_n| cannot equal 1 for any n ≥ 1. (For if |r_n| = 1 for some n, it follows (by normality) that ξ_1 and ξ_{n+1} are linearly related, as are also ξ_{n+1} and ξ_{2n+1}, and hence so are ξ_1, ξ_{2n+1}, so that |r_{2n}| = 1. In this way it follows that |r_{kn}| = 1 for all k, contradicting the requirement that r_n → 0.) Hence it is easy to see that |r_n| is actually bounded away from 1, i.e.

sup_{n≥1} |r_n| = δ < 1.

LEMMA 3.1  Suppose {r_n} satisfies (3.1). Then

(3.4)  n Σ_{j=1}^{n} |r_j| e^{−u_n²/(1+|r_j|)} → 0  as n → ∞,

if 1 − Φ(u_n) ~ τ/n for some constant τ.

PROOF  By using (3.2) and (3.3) we see that

(3.5)  (i) e^{−u_n²/2} ≤ K u_n/n,  (ii) u_n ~ (2 log n)^{1/2},

using K as a constant whose value may change from line to line. As above, let sup_{n≥1} |r_n| = δ < 1, and let α be a constant such that

0 < α < (1 − δ)/(1 + δ).

Split the sum in (3.4) into two parts, the first for 1 ≤ j ≤ [n^α] and the second for [n^α] < j ≤ n. The first sum is dominated by

n^{1+α} e^{−u_n²/(1+δ)} = n^{1+α} (e^{−u_n²/2})^{2/(1+δ)} ≤ K n^{1+α} (u_n/n)^{2/(1+δ)} ≤ K n^{1+α−2/(1+δ)} (log n)^{1/(1+δ)}

(where (3.5), (i) and (ii), have been used). This tends to zero since 1 + α − 2/(1+δ) < 0 from the choice of α.

To deal with the second part we define δ_p = sup_{m>p} |r_m| and note that

δ_n log n ≤ sup_{m>n} |r_m| log m → 0  as n → ∞.

Now, writing p = [n^α], we have for the second part of (3.4),

n Σ_{j=p+1}^{n} |r_j| e^{−u_n²/(1+|r_j|)} ≤ n δ_p e^{−u_n²} Σ_{j=p+1}^{n} e^{u_n² |r_j|/(1+|r_j|)} ≤ n² δ_p e^{−u_n²} e^{u_n² δ_p} ≤ K δ_p u_n² e^{u_n² δ_p}

by (3.5), (i). But by (3.5), (ii),

u_n² δ_p ≤ K δ_p log n ≤ K α^{−1} δ_p log p → 0,

since δ_p log p → 0. Thus the exponential term above tends to one and the remaining product δ_p u_n² tends to zero, so that the desired result follows.  □
We now attack the main task, using a very instructive method developed, in various ways, by Slepian (1962), Berman (1964, 1971a), and Cramér (see Cramér and Leadbetter (1967)). The conditions D(u_n) and D'(u_n) will both follow from the following lemma. It is stated in some generality, even though needed here only in a simple special form, the complete formulation being used in later chapters.

LEMMA 3.2  Suppose ξ_1, ..., ξ_n are standard normal variables with covariance matrix Λ¹ = (Λ¹_{ij}), and η_1, ..., η_n similarly with covariance matrix Λ⁰ = (Λ⁰_{ij}), and let ρ_{ij} = max(|Λ¹_{ij}|, |Λ⁰_{ij}|). Further, let u_1, ..., u_n be real numbers and write u = min(|u_1|, ..., |u_n|). Then

(3.6)  P{ξ_j ≤ u_j for j = 1, ..., n} − P{η_j ≤ u_j for j = 1, ..., n}
     ≤ (2π)^{−1} Σ_{1≤i<j≤n} (Λ¹_{ij} − Λ⁰_{ij})⁺ (1 − ρ_{ij}²)^{−1/2} e^{−u²/(1+ρ_{ij})},

where x⁺ = max(0, x). In particular, suppose {ξ_n} is stationary with correlations r_j = Cov(ξ_i, ξ_{i+j}) satisfying sup_{j≥1} |r_j| = δ < 1, and that η_1, ..., η_n are independent standard normal variables. Then for any real u and integers 1 ≤ i_1 < ... < i_s ≤ n,

(3.7)  |P{ξ_{i_j} ≤ u for j = 1, ..., s} − Φ^s(u)| ≤ K Σ_{1≤j<k≤s} |r_{i_k − i_j}| e^{−u²/(1+|r_{i_k − i_j}|)},

where K is a constant depending only on δ.

PROOF  We shall suppose that Λ¹ and Λ⁰ are positive definite (as opposed to semi-definite) and hence that (ξ_1, ..., ξ_n) and (η_1, ..., η_n) have joint densities f_1 and f_0, respectively. (The semi-definite case is easily dealt with by considering η_i + ε_i, where the ε_i are independent normal variables with mean zero, and then letting Var(ε_i) → 0, using continuity.) Clearly

P{ξ_j ≤ u_j for j = 1, ..., n} = ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} f_1(y_1, ..., y_n) dy,
P{η_j ≤ u_j for j = 1, ..., n} = ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} f_0(y_1, ..., y_n) dy,

where f_1, f_0 are the normal density functions based on the covariance matrices Λ¹, Λ⁰, the integration ranges being {y; y_j ≤ u_j, j = 1, ..., n}. If we write

Λ^h = h Λ¹ + (1 − h) Λ⁰  (0 ≤ h ≤ 1),

the matrix Λ^h is positive definite with units down the main diagonal and elements λ^h_{ij} = h Λ¹_{ij} + (1 − h) Λ⁰_{ij} for i ≠ j. Let f_h be the n-dimensional normal density based on Λ^h, and

F(h) = ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} f_h(y_1, ..., y_n) dy.

The left hand side of (3.6) is then easily recognized as F(1) − F(0). Now

F(1) − F(0) = ∫_0^1 F'(h) dh,

where F'(h) = ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} (∂f_h/∂h) dy. The density f_h depends on h only through the elements λ^h_{ij} of Λ^h (regarding λ^h_{ij} for i < j, say, as a function of h), and ∂λ^h_{ij}/∂h = Λ¹_{ij} − Λ⁰_{ij}. Thus

F'(h) = Σ_{i<j} (Λ¹_{ij} − Λ⁰_{ij}) ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} (∂f_h/∂λ_{ij}) dy.

Now a useful property of the multidimensional normal density is that its derivative with respect to a covariance λ_{ij} is the same as the second mixed derivative with respect to the corresponding variables y_i, y_j (cf. Cramér and Leadbetter (1967), p. 26), i.e.

∂f_h/∂λ_{ij} = ∂²f_h/∂y_i ∂y_j.

Thus

F'(h) = Σ_{i<j} (Λ¹_{ij} − Λ⁰_{ij}) ∫_{−∞}^{u_1} ··· ∫_{−∞}^{u_n} (∂²f_h/∂y_i ∂y_j) dy,

and the y_i and y_j integrations may be done at once to give

(3.8)  F'(h) = Σ_{i<j} (Λ¹_{ij} − Λ⁰_{ij}) ∫ ··· ∫ f_h(y_i = u_i, y_j = u_j) dy,

where f_h(y_i = u_i, y_j = u_j) denotes the function of n − 2 variables formed by putting y_i = u_i, y_j = u_j, the integration being over the remaining variables.

Further, we can dominate the last integral by letting the variables run from −∞ to +∞. But

∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f_h(y_i = u_i, y_j = u_j) dy

is just the bivariate density, evaluated at (u_i, u_j), of two standard normal random variables with correlation λ^h_{ij}, and may therefore be written

(2π)^{−1} (1 − (λ^h_{ij})²)^{−1/2} exp{−(u_i² − 2 λ^h_{ij} u_i u_j + u_j²) / (2(1 − (λ^h_{ij})²))}.

Now, since |λ^h_{ij}| = |h Λ¹_{ij} + (1 − h) Λ⁰_{ij}| ≤ max(|Λ¹_{ij}|, |Λ⁰_{ij}|) = ρ_{ij}, and u = min(|u_1|, ..., |u_n|) ≤ min(|u_i|, |u_j|), it may be easily shown that the above expression is not greater than

(2π)^{−1} (1 − ρ_{ij}²)^{−1/2} exp{−u²/(1 + ρ_{ij})}.

(Note that Q_ρ(x, x) = 2x²/(1 + ρ), where Q_ρ(x, y) = (x² − 2ρxy + y²)/(1 − ρ²), and in general Q_ρ(x, y) ≥ Q_{|ρ|}(|x|, |y|).) Eliminating the possible negative terms in (3.8), we thus obtain

F'(h) ≤ (2π)^{−1} Σ_{i<j} (Λ¹_{ij} − Λ⁰_{ij})⁺ (1 − ρ_{ij}²)^{−1/2} e^{−u²/(1+ρ_{ij})},

and since F(1) − F(0) = ∫_0^1 F'(h) dh, this proves the lemma.  □

The conditions D(u_n) and D'(u_n) now follow simply.
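For two variables, the content of Lemma 3.2 can be checked numerically: for standard normal X, Y with correlation r > 0, the difference P{X ≤ u, Y ≤ u} − Φ²(u) should be dominated by (2π)^{−1} r (1 − r²)^{−1/2} e^{−u²/(1+r)}, the single term of (3.6) with Λ⁰ the identity. The sketch below (plain Python; u, r and the integration grid are illustrative choices) evaluates the bivariate probability via the one-dimensional formula P{X ≤ u, Y ≤ u} = ∫_{−∞}^{u} Φ((u − rz)/(1 − r²)^{1/2}) φ(z) dz and compares it with the bound.

```python
import math

def Phi(x):
    # standard normal distribution function
    return 0.5 * math.erfc(-x / math.sqrt(2))

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def biv_prob(u, r, lo=-8.0, steps=4000):
    # Simpson's rule for P{X <= u, Y <= u} when corr(X, Y) = r,
    # using the conditional-distribution representation of the integrand
    s = math.sqrt(1 - r * r)
    h = (u - lo) / steps
    f = lambda z: Phi((u - r * z) / s) * phi(z)
    total = f(lo) + f(u)
    for k in range(1, steps):
        total += (4 if k % 2 else 2) * f(lo + k * h)
    return total * h / 3

u, r = 1.0, 0.3
diff = biv_prob(u, r) - Phi(u) ** 2
bound = r * math.exp(-u * u / (1 + r)) / (2 * math.pi * math.sqrt(1 - r * r))
print(diff, bound)
```

For u = 1, r = 0.3 the actual difference (about 0.020) sits just below the bound (about 0.023), so the comparison bound is quite sharp here; its positivity reflects Slepian's inequality.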
LEMMA 3.3  Suppose that sup_{n≥1} |r_n| = δ < 1 (which will, in particular, be satisfied if r_n → 0), and that (3.4) holds with u_n such that 1 − Φ(u_n) ~ τ/n. Then both D(u_n) and D'(u_n) hold.

PROOF  From (3.7) with s = 2, i_1 = 1, i_2 = j, we have (ξ_1 and ξ_j being each standard normal)

|P{ξ_1 ≤ u_n, ξ_j ≤ u_n} − Φ²(u_n)| ≤ K |r_{j−1}| e^{−u_n²/(1+|r_{j−1}|)},

whence, by simple manipulation,

|P{ξ_1 > u_n, ξ_j > u_n} − (1 − Φ(u_n))²| ≤ K |r_{j−1}| e^{−u_n²/(1+|r_{j−1}|)}.

Thus

n Σ_{j=2}^{[n/k]} P{ξ_1 > u_n, ξ_j > u_n} ≤ (τ²/k)(1 + o(1)) + K n Σ_{j=1}^{n} |r_j| e^{−u_n²/(1+|r_j|)} + o(1),

from which D'(u_n) follows by (3.4).

It follows also from (3.7) that if 1 ≤ ℓ_1 < ... < ℓ_s ≤ n, then

|F_{ℓ_1 ... ℓ_s}(u_n) − Φ^s(u_n)| ≤ K n Σ_{j=1}^{n} |r_j| e^{−u_n²/(1+|r_j|)}.

Suppose now that 1 ≤ i_1 < ... < i_p < j_1 < ... < j_q ≤ n. Identifying {ℓ_1, ..., ℓ_s} in turn with {i_1, ..., i_p, j_1, ..., j_q}, {i_1, ..., i_p} and {j_1, ..., j_q}, we thus have

|F_{i_1...i_p j_1...j_q}(u_n) − F_{i_1...i_p}(u_n) F_{j_1...j_q}(u_n)| ≤ 3 K n Σ_{j=1}^{n} |r_j| e^{−u_n²/(1+|r_j|)},

which tends to zero by (3.4). Thus D(u_n) is satisfied. (In fact lim_{n→∞} α_{n,ℓ} = 0 for each ℓ.)  □

Our main results now follow from Theorem 2.6 and Lemma 3.3.

THEOREM 3.4  If {ξ_n} is a stationary normal sequence satisfying (3.1), then P{M_n ≤ u_n} → e^{−τ} if and only if 1 − Φ(u_n) ~ τ/n.  □

THEOREM 3.5  If {ξ_n} is a stationary normal sequence satisfying (3.1), then (1.19) holds, i.e.

P{a_n(M_n − b_n) ≤ x} → exp(−e^{−x})  as n → ∞,

where

a_n = (2 log n)^{1/2},
b_n = (2 log n)^{1/2} − ½ (2 log n)^{−1/2} {log log n + log 4π}.

PROOF  Since by Theorem 1.11 the result holds for the associated independent sequence, if we write u_n = x/a_n + b_n, we have by Theorem 1.10 that 1 − Φ(u_n) ~ τ/n where τ = e^{−x}. Thus by Lemma 3.3 both D(u_n) and D'(u_n) hold, and it follows from Theorem 2.6 that P{M_n ≤ u_n} → e^{−τ}. Rephrased, this is precisely the desired conclusion of the theorem.  □
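In the i.i.d. case the left hand side in Theorem 3.5 is exactly Φ(x/a_n + b_n)^n, so the quality of the double exponential approximation can be examined directly. The sketch below (plain Python; n = 10^6 is an illustrative choice) shows agreement with exp(−e^{−x}) to within a few percent, reflecting the slow, logarithmic rate of convergence.

```python
import math

n = 10**6
an = math.sqrt(2 * math.log(n))
bn = an - (math.log(math.log(n)) + math.log(4 * math.pi)) / (2 * an)

def cdf_max(x):
    # exact d.f. of a_n(M_n - b_n) for the maximum of n i.i.d. N(0,1) variables
    u = x / an + bn
    q = 0.5 * math.erfc(u / math.sqrt(2))   # 1 - Phi(u)
    return math.exp(n * math.log1p(-q))     # Phi(u)^n, computed stably

for x in (-1.0, 0.0, 1.0, 2.0):
    print(x, cdf_max(x), math.exp(-math.exp(-x)))
```

At n = 10^6 the largest discrepancy over these x values is below 0.03, illustrating why penultimate corrections are of interest for normal maxima.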
Weaker dependence assumptions for double exponential limits

The condition (3.1), that r_n log n → 0, is rather close to what is necessary for the maximum of a stationary normal sequence to behave like that of the associated independent sequence.

As we have seen, it is the convergence (3.4) which makes it possible to prove D(u_n) and D'(u_n), and one could therefore be tempted to use (3.4) as an indeed very weak condition. Since it is not very transparent, depending as it does on the level u_n, other conditions, which also restrict the size of r_n for large n, have been proposed occasionally. In Berman (1964) it is shown that (3.1) can be replaced by

(3.9)  Σ_{n=0}^{∞} r_n² < ∞,

which is a special case of

(3.10)  Σ_{n=0}^{∞} |r_n|^p < ∞,  for some p > 0.

There is no implication between (3.1) and condition (3.10), but they both imply the following weak condition,

(3.11)  n^{−1} Σ_{k=1}^{n} |r_k| log k · e^{γ|r_k| log k} → 0  as n → ∞,  for some γ > 2,

as is proved in Leadbetter et al. (1979) and below, after Theorem 3.7. We now show that (3.11) implies the relevant D(u_n), D'(u_n) conditions.
LEMMA 3.6  Suppose that r_n → 0 as n → ∞ and that {r_n} satisfies (3.11). If 1 − Φ(u_n) ~ τ/n, then (3.4) holds, and consequently (by Lemma 3.3) also D(u_n) and D'(u_n) hold.

PROOF  Using the notation in the proof of Lemma 3.1, let δ = sup_{n≥1} |r_n| < 1, take β = 2/γ and let α be a constant such that 0 < α < min(β, (1 − δ)/(1 + δ)). Split the sum in (3.4) into three parts, the first for 1 ≤ j ≤ [n^α], the second for [n^α] < j ≤ [n^β], and the third for [n^β] < j ≤ n. The first sum tends to zero as in Lemma 3.1.

Writing δ_p = sup_{m>p} |r_m|, p = [n^α], and q = [n^β], and using (3.5), i.e.

e^{−u_n²/2} ≤ K u_n/n ≤ K (2 log n)^{1/2}/n,

we have for the second part of (3.4),

n Σ_{k=p+1}^{q} |r_k| e^{−u_n²/(1+|r_k|)} ≤ δ_p n^{1+β} e^{−u_n²} e^{u_n² δ_p} ≤ K n^{β−1} u_n² e^{u_n² δ_p} ≤ K n^{β−1} u_n² n^{2δ_p(1+o(1))},

which obviously tends to zero, since β < 1 and δ_p → 0.

Finally, for the last part of (3.4) we have, again using (3.5),

n Σ_{k=q+1}^{n} |r_k| e^{−u_n²/(1+|r_k|)} ≤ K n Σ_{k=q+1}^{n} |r_k| (u_n/n)^{2/(1+|r_k|)} ≤ K n^{−1} log n Σ_{k=q+1}^{n} |r_k| e^{2|r_k| log n}.

For k > q we have log k > β log n, and hence this expression is not larger than

K n^{−1} Σ_{k=q+1}^{n} |r_k| log k · e^{2β^{−1}|r_k| log k} ≤ K n^{−1} Σ_{k=1}^{n} |r_k| log k · e^{γ|r_k| log k}

(since 2β^{−1} = γ). By (3.11) this tends to zero as n → ∞, which concludes the proof of (3.4).  □
The next result (extending Theorem 3.5) now follows by exactly the same proof as Theorem 3.5.

THEOREM 3.7  If {ξ_n} is a stationary normal sequence with r_n → 0 satisfying (3.11), then the normalized maxima have a limiting distribution of Type I, with a_n and b_n as in Theorem 3.5.  □

A few remarks may be in order here to elucidate the content of condition (3.11). Define for each positive x the set B_n(x) = {k; 1 ≤ k ≤ n, |r_k| log k > x}, and let μ_n(x) be the number of elements in B_n(x). Consider the following condition (which we shall see is slightly stronger than (3.11)),

(3.12)'  n^{−1} Σ_{k=1}^{n} |r_k| log k → 0 as n → ∞, and μ_n(K) = O(n^η) for some K > 0, η < 1,

and the equivalent condition

(3.12)''  μ_n(ε) = o(n) for all ε > 0, and μ_n(K) = O(n^η) for some K > 0, η < 1.

Obviously (3.1) implies (3.12)'. Further, if Σ_{k=1}^{∞} |r_k|^p < ∞ for some p > 0 then, since

Σ_{k=1}^{∞} |r_k|^p ≥ Σ_{k∈B_n(x)} |r_k|^p ≥ μ_n(x) (x/log n)^p,

it follows that μ_n(x) = O((log n)^p). In particular, taking p = 2 we see that also (3.9) and (3.10) imply (3.12)'', so that both (3.1) and (3.10) are stronger than (3.12)' and (3.12)''. The following lemma states that (3.12)' and (3.12)'' imply (3.11), and consequently that both (3.1) and (3.10) imply (3.11).
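The Cesàro condition (3.11) can be made concrete by computing the average for a specific covariance sequence. The sketch below (plain Python) uses the illustrative choice r_k = k^{−1/2}, which satisfies (3.10) and hence (3.11), together with the illustrative parameter γ = 3 (> 2); the average visibly decreases toward zero as n grows.

```python
import math

def cesaro(n, gamma=3.0, r=lambda k: k ** -0.5):
    # n^{-1} * sum_{k=1}^{n} |r_k| log k * exp(gamma * |r_k| log k), as in (3.11)
    total = 0.0
    for k in range(1, n + 1):
        t = abs(r(k)) * math.log(k)
        total += t * math.exp(gamma * t)
    return total / n

for n in (10**3, 10**4, 10**5):
    print(n, cesaro(n))
```

The decay is slow (roughly log n / n^{1/2} for this choice of r_k), which is in keeping with the logarithmic character of the whole normal theory.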
LEMMA 3.8  If r_n → 0 as n → ∞, then (3.12)' and (3.12)'' both imply (3.11).

PROOF  It is easily seen that (3.12)' and (3.12)'' are equivalent, so we need only show that (3.12)' implies (3.11). We have

(3.13)  n^{−1} Σ_{k=1}^{n} |r_k| log k · e^{γ|r_k| log k}
  = n^{−1} Σ_{k∉B_n(K)} |r_k| log k · e^{γ|r_k| log k} + n^{−1} Σ_{k∈B_n(K)} |r_k| log k · e^{γ|r_k| log k},

and we proceed to estimate the sums on the right separately, assuming that (3.12)' holds. Now

n^{−1} Σ_{k∉B_n(K)} |r_k| log k · e^{γ|r_k| log k} ≤ e^{γK} n^{−1} Σ_{k=1}^{n} |r_k| log k → 0,  n → ∞,

by the first part of (3.12)'. Since we assume that r_n → 0, there is an integer N such that γ|r_k| < (1 − η)/2 for k > N. Hence

n^{−1} Σ_{k∈B_n(K), k>N} |r_k| log k · e^{γ|r_k| log k} ≤ n^{−1} μ_n(K) (log n) n^{(1−η)/2},

which tends to zero as n → ∞, by the second part of (3.12)'. Since N is fixed, n^{−1} Σ_{k=1}^{N} |r_k| log k · exp(γ|r_k| log k) → 0, and it follows that also the second term on the right hand side of (3.13) tends to zero, and thus that (3.11) is satisfied.  □
CHAPTER 4

CONVERGENCE OF THE POINT PROCESS OF EXCEEDANCES, AND THE DISTRIBUTION OF r-th LARGEST MAXIMA

In this chapter we return to the general situation and notation of Chapter 2. If u is a given "level", we say that the (stationary) sequence {ξ_n} has an exceedance of u at j if ξ_j > u. We may regard such j as "instants of time", and the exceedances therefore as events occurring randomly in time, i.e. as a point process. (The formal properties of point processes which we shall need are contained in the appendix.)

Point process of exceedances

We shall be concerned with such exceedances for (typically) increasing levels, and will define such a point process, N_n, say, for each of a sequence {u_n} of levels. Since the u_n will typically be high for large n, the exceedances will tend to be rarer, and we shall find it convenient to normalize the "time" axis to keep the expected number of exceedances approximately constant. For our purposes the simple scale change by the factor n will suffice. Specifically, we define for each n a process η_n at the points t = j/n, j = 1, 2, ..., by η_n(j/n) = ξ_j. Then η_n has an exceedance of u_n at j/n whenever {ξ_n} has an exceedance at j. Hence, while exceedances of u_n may be lost as n increases, this will be balanced by the fact that the points j/n become more dense. Indeed the expected number of exceedances by η_n in the interval [0, 1] is clearly n P{ξ_1 > u_n}, which tends to a finite value τ if u_n is chosen by (2.9).
-54-
Our first task will be to show that (under D(U ), D' (un) conditions)
n
the exceedances of
creases,
un
by
nn
become Poisson in character as
n
in-
(actually in the full sense of distributional convergence for
point processes described in the appendix). In particular this will mean
that the number, Nn(B), say, of exceedances of
set
un
by
nn
in the (Borel)
B, will have an asymptotic Poisson distribution. From this we may
f
simply obtain the asymptotic distribution of the r - th largest among
••. ,
~n'
l
,
and thus generalize Theorems 1.13 and 1.14.
It will also be of interest to generalize Theorems 1.15 and 1.16, giving joint distributions of r - th largest values. This will reqUire an extension of our convergence result to involve exceedances of several 1eve1s simultaneously, as will be seen.
In the following theorem we shall first consider exceedances of
by
nn
on the unit interval
(0, 1]
un
rather than the whole positive axis,
since we can then mw ] eRR restrictive assumptions,
st i 11 obtel i n the
.1 nIl
corollaries concerning the distributions of r - th maxima.
THEOREM 4.1  (i) Let D(u_n), D'(u_n) hold for the stationary sequence {ξ_n}, with u_n = u_n(τ) satisfying (2.9). Let N_n, n = 1, 2, ..., be the point process on the unit interval (0, 1] consisting of the exceedances of u_n by η_n in that interval (i.e. the points j/n, 1 ≤ j ≤ n, for which ξ_j > u_n). Then N_n converges in distribution to a Poisson process N on (0, 1] with parameter τ, as n → ∞.

(ii) Suppose that, for each τ > 0, there exists a sequence {u_n(τ)} satisfying (2.9), and that D(u_n(τ)), D'(u_n(τ)) hold for all τ > 0. Then for any fixed τ, the result of (i) holds for the entire positive axis in place of the unit interval, i.e. the point process N_n of exceedances of u_n = u_n(τ) converges in distribution to a Poisson process N on (0, ∞) with parameter τ.

PROOF  By Theorem A.1 it is sufficient to show that

(a)  E(N_n((a, b])) → E(N((a, b])) = τ(b − a) as n → ∞, for all 0 < a ≤ b ≤ 1, and

(b)  P{N_n(B) = 0} → P{N(B) = 0} = e^{−τm(B)} (m being Lebesgue measure) for all B of the form ∪_{i=1}^{r}(a_i, b_i], 0 < a_1 < b_1 ≤ a_2 < ... ≤ a_r < b_r ≤ 1.

Here (a) is immediate since

E(N_n((a, b])) = ([nb] − [na])(1 − F(u_n)) ~ n(b − a) τ/n = τ(b − a).

To show (b) we note, using stationarity, that for 0 < a ≤ b ≤ 1,

P{N_n((a, b]) = 0} = P{M(I_n) ≤ u_n},

where I_n = {[na] + 1, ..., [nb]} contains ν_n = [nb] − [na] integers. Now [nb] − [na] ~ (b − a)n as n → ∞. Further, by Lemma 2.8, since D(u_n(τ)), D'(u_n(τ)) hold, they also hold with u_n(θτ) replacing u_n(τ), for 0 < θ < 1, and this u_n(θτ), defined by (2.16), satisfies (2.13). Thus Corollary 2.7 (with θ = b − a) gives

(4.1)  P{N_n((a, b]) = 0} → e^{−τ(b−a)}  as n → ∞.

Now let B = ∪_{j=1}^{r}(a_j, b_j], where 0 < a_1 < b_1 ≤ a_2 < ... ≤ a_r < b_r ≤ 1. Then, writing E_j for the set of integers ([na_j] + 1, [na_j] + 2, ..., [nb_j]), it is readily checked that

P{N_n(B) = 0} = P(∩_{j=1}^{r} {M(E_j) ≤ u_n}).

By (4.1), the product

Π_{j=1}^{r} P{N_n((a_j, b_j]) = 0} = Π_{j=1}^{r} P{M(E_j) ≤ u_n}

converges, as n → ∞, to

Π_{j=1}^{r} e^{−τ(b_j − a_j)} = e^{−τm(B)}

(where m denotes Lebesgue measure). On the other hand, by Lemma 2.1, it is readily seen that the modulus of the remaining difference of terms,

P(∩_{j=1}^{r} {M(E_j) ≤ u_n}) − Π_{j=1}^{r} P{M(E_j) ≤ u_n},

does not exceed (r − 1) α_{n,[nλ]} (cf. (2.3)), where λ = min_{1≤j≤r−1}(a_{j+1} − b_j). But α_{n,[nλ]} → 0 as n → ∞ by D(u_n), so that (b) follows. Hence (i) of the theorem holds.

The conclusion (ii) follows by exactly the same proof, except that we require D(u_n(θτ)), D'(u_n(θτ)) to hold for all θ > 1 as well as θ < 1.  □
Note that since (ii) and (iii) of Lemma 2.8 require θ < 1, the assumption has to be made in (ii) of the present theorem that D(u_n(τ)), D'(u_n(τ)) hold for all τ > 0.

Note also that in (ii) of the above theorem, we could have assumed the existence of u_n(τ) satisfying (2.9) for just one fixed τ, and then defined u_n(τ') for all other τ' = θτ by u_n(θτ) = u_{[n/θ]}(τ) as in Lemma 2.8. Of course it is possible that this may be less convenient in verifying the D and D' conditions than some other prescription of u_n(τ) satisfying (2.9) for each τ.

It is of interest to note that the conclusion of (i) applies to any interval of unit length, so that the exceedances in any such interval become Poisson in character. But if the assumption of (ii) is not made, it may not be true that the exceedances become Poisson on the entire axis (or on an interval of length greater than one).

COROLLARY 4.2  Under the conditions of (i) of the theorem, if B ⊂ (0, 1] is any Borel set whose boundary has Lebesgue measure zero (m(∂B) = 0), then

P{N_n(B) = r} → e^{−τm(B)} (τm(B))^r / r!,  r = 0, 1, 2, ...

The joint distribution of any finite number of variables N_n(B_1), ..., N_n(B_k) corresponding to disjoint B_i (with m(∂B_i) = 0 for each i) converges to the product of the corresponding Poisson probabilities.

PROOF  This follows at once since N_n converges in distribution to N.  □

It should be noted that the above results obviously apply very simply to stationary normal sequences satisfying appropriate covariance conditions (e.g. (3.1)).
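In the independent case the Poisson character of the exceedance counts can be seen directly: with 1 − F(u_n) = τ/n as in (2.9), the count N_n((0, 1]) is Binomial(n, τ/n), whose probabilities converge to those of a Poisson variable with mean τ. The following Python sketch (τ = 2 and n = 10^4 are illustrative choices) compares the two probability functions.

```python
import math

def binom_pmf(n, p, r):
    # P{Binomial(n, p) = r}
    return math.comb(n, r) * p ** r * (1 - p) ** (n - r)

def poisson_pmf(lam, r):
    # P{Poisson(lam) = r}
    return math.exp(-lam) * lam ** r / math.factorial(r)

tau, n = 2.0, 10**4
p = tau / n                       # level chosen as in (2.9)
err = max(abs(binom_pmf(n, p, r) - poisson_pmf(tau, r)) for r in range(20))
print(err)                        # small: of order tau^2 / n
```

The discrepancy is of order τ²/n (Le Cam's bound), so the approximation is already very accurate for moderate n.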
Asymptotic distribution of k-th largest values

The following results may now be obtained from Corollary 4.2, generalizing the conclusions of Theorems 1.13 and 1.14.

THEOREM 4.3  Let M_n^{(k)} denote the k-th largest of ξ_1, ..., ξ_n (M_n^{(1)} = M_n), where k is a fixed integer. Let {u_n} be a real sequence and suppose that D(u_n), D'(u_n) hold. If (2.9) holds for some fixed τ > 0, then

(4.2)  P{M_n^{(k)} ≤ u_n} → e^{−τ} Σ_{s=0}^{k−1} τ^s/s!  as n → ∞.

Conversely, if (4.2) holds for some integer k, so does (2.9), and hence (4.2) holds for all k.

PROOF  As previously, we identify the event {M_n^{(k)} ≤ u_n} with the event that no more than (k − 1) of ξ_1, ..., ξ_n exceed u_n, i.e. with N_n((0, 1]) ≤ k − 1, so that

(4.3)  P{M_n^{(k)} ≤ u_n} = Σ_{s=0}^{k−1} P{N_n((0, 1]) = s}.

If (2.9) holds, the limit on the right of (4.2) follows at once by Corollary 4.2.

Conversely, if (4.2) holds but (2.9) does not, there is some τ' ≠ τ, 0 ≤ τ' ≤ ∞, and a sequence {n_j} such that n_j(1 − F(u_{n_j})) → τ'. Now a brief examination of the proof of Theorem 4.1 (cf. also the remark after Theorem 2.6) shows that if (2.9) is not assumed for all n, but just for a sequence {n_j}, then N_{n_j} has a Poisson limit. If τ' < ∞, replacing n by n_j and τ by τ' in the argument above, we thus have

P{M_{n_j}^{(k)} ≤ u_{n_j}} → e^{−τ'} Σ_{s=0}^{k−1} τ'^s/s!.

But this contradicts (4.2), since the function e^{−x} Σ_{s=0}^{k−1} x^s/s! is strictly decreasing in x > 0, and hence τ' < ∞ with τ' ≠ τ is not possible. But τ' = ∞ cannot hold either, since as in (2.11) we would have

P{M_{[n/k]} ≤ u_n} ≤ 1 − [n/k](1 − F(u_n)) + S_{n,k},

which would be negative for large n by the finiteness of lim sup_{n→∞} S_{n,k} implied by D'(u_n), at least for some appropriately chosen large k. Hence (2.9) holds as asserted.  □
The case k = 1 of this theorem is just Theorem 2.6 again. Of course, Theorem 2.6 is used in the proof of Theorem 4.1 and hence of Theorem 4.3. The following obvious corollary also holds.

COROLLARY 4.4  The theorem holds if the assumption (or conclusion) that {u_n} satisfy (2.9) is replaced by either of the assumptions (conclusions) P{M_n ≤ u_n} → e^{−τ} or P{M̂_n ≤ u_n} → e^{−τ} (where M̂_n as usual denotes the sequence of maxima for the associated independent sequence).

PROOF  This follows at once from Theorem 2.6.  □

THEOREM 4.5  Let a_n > 0, b_n be constants for n = 1, 2, ..., and G a non-degenerate d.f., and suppose that D(u_n), D'(u_n) hold for all u_n = x/a_n + b_n, −∞ < x < ∞. If

(4.4)  P{a_n(M_n − b_n) ≤ x} → G(x),

then for each k = 1, 2, ...,

(4.5)  P{a_n(M_n^{(k)} − b_n) ≤ x} → G(x) Σ_{s=0}^{k−1} (−log G(x))^s / s!

(zero where G(x) = 0). Conversely, if (4.5) holds for some k, so does (4.4), and hence (4.5) holds for all k. Further, the result remains true if M̂_n replaces M_n in (4.4).

PROOF  For G(x) > 0 the result follows from the previous corollary. The case G(x) = 0 follows by continuity of G (which must be of extreme value type), as usual.  □
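The limit (4.2) can likewise be checked in the i.i.d. case, where P{M_n^{(k)} ≤ u_n} is exactly the binomial tail Σ_{s=0}^{k−1} C(n, s) p^s (1 − p)^{n−s} with p = 1 − F(u_n) = τ/n. The Python sketch below (τ = 1.5 and n = 10^5 are illustrative choices) compares this with e^{−τ} Σ_{s=0}^{k−1} τ^s/s!.

```python
import math

def kth_largest_cdf(n, tau, k):
    # P{M_n^{(k)} <= u_n} for i.i.d. variables with n(1 - F(u_n)) = tau:
    # at most k - 1 of n independent trials with success prob tau/n succeed
    p = tau / n
    return sum(math.comb(n, s) * p ** s * (1 - p) ** (n - s) for s in range(k))

def limit(tau, k):
    # the right hand side of (4.2)
    return math.exp(-tau) * sum(tau ** s / math.factorial(s) for s in range(k))

for k in (1, 2, 5):
    print(k, kth_largest_cdf(10**5, 1.5, k), limit(1.5, k))
```

The agreement is to within O(τ²/n), and the values increase with k, as they must, since M_n^{(k)} decreases in k.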
Independence of maxima in disjoint intervals

As noted already in Chapter 2, it would be natural to extend the theorems there to deal with the joint behaviour of maxima in disjoint intervals. We shall do this here, demonstrating asymptotic independence under an appropriate generalization of the D(u_n) condition, and then use this to obtain a Poisson result for exceedances of several levels considered jointly. This, in turn, will lead to the asymptotic joint distributions of various quantities of interest, such as two or more M_n^{(k)} values and their locations, as n → ∞.

As in (1.26) we shall consider r levels u_n^{(1)} ≥ u_n^{(2)} ≥ ... ≥ u_n^{(r)} satisfying

(4.6)  1 − F(u_n^{(k)}) = τ_k/n + o(1/n),

where, as in Chapter 1, consequently τ_1 ≤ τ_2 ≤ ... ≤ τ_r.

It is intuitively clear that we shall need to extend the D(u_n) conditions to involve the r values u_n^{(k)}, and we do so as follows.

CONDITION D_r(u_n)  The condition D_r(u_n) will be said to hold for the stationary sequence {ξ_j} if, for each choice of i = (i_1, ..., i_p), j = (j_1, ..., j_q), with 1 ≤ i_1 < i_2 < ... < i_p < j_1 < j_2 < ... < j_q ≤ n and j_1 − i_p ≥ ℓ, we have (using obvious notation)

(4.7)  |F_{i,j}(v, w) − F_i(v) F_j(w)| ≤ α_{n,ℓ},

where v = (v_1, ..., v_p), w = (w_1, ..., w_q), the v_i and w_j each being any choice from the r values u_n^{(1)}, ..., u_n^{(r)}, and where α_{n,ℓ_n} → 0 for some sequence ℓ_n = o(n).

Condition D_r(u_n) extends D(u_n) in an obvious way and will be convenient even though its full strength will not quite be needed for our purposes. It will not be necessary to define an extended D'(u_n) condition, since we shall simply need to assume that D'(u_n^{(k)}) holds separately for each k = 1, 2, ..., r.
The next result extends Lemma 2.1 (with slight changes of notation).
LEMMA 4.6  With the above notation, let n, s and ℓ be fixed integers, and let E_1, ..., E_s be subintervals of {1, ..., n} such that E_i and E_j are separated by at least ℓ when i ≠ j. If D_r(u_n) holds, then

|P(∩_{j=1}^{s} {M(E_j) ≤ u_{n,j}}) − Π_{j=1}^{s} P{M(E_j) ≤ u_{n,j}}| ≤ (s − 1) α_{n,ℓ},

where for each j, u_{n,j} is any one of u_n^{(1)}, ..., u_n^{(r)}.

PROOF  This is proved in exactly the same manner as Lemma 2.1, and the details will therefore not be repeated here.  □

In the following discussion we shall consider a fixed number s of disjoint subintervals J_1, J_2, ..., J_s of {1, ..., n} such that J_k (= J_{k,n}) has ν_{n,k} ~ θ_k n elements, where the θ_k are fixed positive constants with Σ_{k=1}^{s} θ_k ≤ 1. (By slightly strengthening the assumptions, we may instead let J_1, ..., J_s be arbitrary finite disjoint intervals of positive integers.) Note that the intervals J_k do increase in size with n, but remain disjoint, and their total number s is fixed.

The following results then hold. In the proofs, details will be omitted where they duplicate arguments given in Chapter 2.
THEOREM 4.7  (i) Let J_1, J_2, ..., J_s be disjoint subintervals of {1, 2, ..., n} as defined above, J_k having ν_{n,k} ~ θ_k n members, for fixed positive θ_1, θ_2, ..., θ_s (Σ_{k=1}^{s} θ_k ≤ 1). Suppose that the stationary sequence satisfies D_r(u_n), where the levels u_n^{(1)} ≥ u_n^{(2)} ≥ ... ≥ u_n^{(r)} satisfy (4.6). Then

(4.8)  P{M(J_k) ≤ u_{n,k}, k = 1, ..., s} − Π_{k=1}^{s} P{M(J_k) ≤ u_{n,k}} → 0

for any choice of the u_{n,k} from u_n^{(1)}, ..., u_n^{(r)}.

(ii) Suppose, for fixed τ_1, ..., τ_r, that the u_n^{(k)} = u_n(τ_k) satisfy (4.6), and that D_r(u_n) holds for u_n = (u_n(θτ_1), u_n(θτ_2), ..., u_n(θτ_r)) for each θ > 0. Then (4.8) holds for arbitrary finite disjoint intervals J_1, ..., J_s of positive integers with ν_{n,k} ~ θ_k n members, where θ_1, θ_2, ..., θ_s are arbitrary but fixed positive constants.

PROOF  Let I_k denote the first ν_{n,k} − ℓ_n elements of J_k, and I_k* the remaining ℓ_n, where ℓ_n is chosen as in D_r(u_n). (These ℓ_n are different from those in Chapter 2, but are so named since they play a similar role.) By familiar calculation we have

(4.9)  0 ≤ P(∩_{k=1}^{s} {M(I_k) ≤ u_{n,k}}) − P(∩_{k=1}^{s} {M(J_k) ≤ u_{n,k}}) ≤ Σ_{k=1}^{s} P{M(I_k*) > u_{n,k}} ≤ s ρ_n,

where ρ_n = max_{1≤k≤s} P{M(I_k*) > u_{n,k}}. Further,

(4.10)  |P(∩_{k=1}^{s} {M(I_k) ≤ u_{n,k}}) − Π_{k=1}^{s} P{M(I_k) ≤ u_{n,k}}| ≤ (s − 1) α_{n,ℓ_n}

by Lemma 4.6, and

(4.11)  0 ≤ Π_{k=1}^{s} P{M(I_k) ≤ u_{n,k}} − Π_{k=1}^{s} P{M(J_k) ≤ u_{n,k}}
     ≤ Π_{k=1}^{s} (P{M(J_k) ≤ u_{n,k}} + ρ_n) − Π_{k=1}^{s} P{M(J_k) ≤ u_{n,k}} ≤ 2^s ρ_n,

since the product Π(y_k + ρ_n) is increasing in each y_k when ρ_n > 0, and ρ_n ≤ 1. Now ρ_n ≤ ℓ_n(1 − F(u_n^{(r)})) → 0, since ℓ_n = o(n). Hence, by (4.9), (4.10) and (4.11), the left hand side of (4.8) is dominated in absolute value by

(s + 2^s) ρ_n + (s − 1) α_{n,ℓ_n},

which tends to zero, completing the proof of part (i) of the theorem.

Part (ii) follows by exactly the same arguments under the extended conditions.  □
-62-
Note that the proof of this theorem is somewhat simpler than that, e.g., in Lemmas 2.2 and 2.3. This occurs because we assume (4.6), whereas the corresponding assumption was not made there. We could dispense with (4.6) here also, with a corresponding increase in complexity, but since we assume (4.6) in the sequel, we use it here also.
COROLLARY 4.8 (i) If, in addition to the assumptions of part (i) of the theorem, we suppose that D'(u_n^(k)) holds for each k = 1, 2, ..., r, then

P{M(J_k) ≤ u_{n,k}, k = 1, 2, ..., s} → e^{−Σ_{k=1}^s θ_k τ_k'},

where τ_k' is that one of τ_1, ..., τ_r corresponding to u_{n,k}, i.e. such that n(1 − F(u_{n,k})) → τ_k'.

(ii) If, in addition to the assumptions of part (ii) of the theorem, the conditions D'(u_n^(j)) are satisfied with u_n(θ_j τ_j) in place of u_n^(j), j = 1, ..., s, then the conclusion holds for all constants θ_k > 0.

PROOF This follows by Corollary 2.7 and Lemma 2.8, which show that P{M(J_k) ≤ u_{n,k}} → e^{−θ_k τ_k'}. □

It is easy to check that D_r(u_n) holds for normal sequences under the standard covariance conditions, and hence Corollary 4.8 may be applied.
THEOREM 4.9 Let {ξ_n} be a stationary normal sequence with zero means, unit variances and covariance sequence {r_n}. Let u_n^(1) > u_n^(2) > ... > u_n^(r) be levels such that 1 − Φ(u_n^(k)) ~ τ_k/n, for constants τ_k > 0. Suppose that

(4.12)  n Σ_{j=1}^n |r_j| e^{−u_n^{(r)2}/(1+|r_j|)} → 0  as n → ∞,

which will hold in particular, by Lemma 3.1, if r_n log n → 0. Then D_r(u_n) holds, as do D'(u_n^(k)) for 1 ≤ k ≤ r, and hence, from Corollary 4.8(ii),

(4.13)  P{M(J_k) ≤ u_{n,k}, k = 1, 2, ..., s} → e^{−Σ_{k=1}^s θ_k τ_k'},

where the J_k are as in Theorem 4.7(ii) and the τ_k' correspond to the u_{n,k} as in Corollary 4.8.

PROOF With the notation of (4.7), we may identify the variables ξ_{i_1}, ..., ξ_{i_p}, ξ_{j_1}, ..., ξ_{j_q} here with those of Lemma 3.2. Let ξ'_{i_1}, ..., ξ'_{i_p}, ξ'_{j_1}, ..., ξ'_{j_q} be normal variables such that ξ'_{i_1}, ..., ξ'_{i_p} have the same joint distribution as ξ_{i_1}, ..., ξ_{i_p}, and ξ'_{j_1}, ..., ξ'_{j_q} the same joint distribution as ξ_{j_1}, ..., ξ_{j_q}, but with the two groups independent of each other. Then the lemma gives a bound for

|P{ξ_{i_1} ≤ v_1, ..., ξ_{i_p} ≤ v_p, ξ_{j_1} ≤ w_1, ..., ξ_{j_q} ≤ w_q} − P{ξ'_{i_1} ≤ v_1, ..., ξ'_{j_q} ≤ w_q}|

in terms of sums of |r_{j_t − i_s}| e^{−u_n²/(1+|r_{j_t − i_s}|)}, where u_n = min(v_1, ..., v_p, w_1, ..., w_q) and v_1, ..., v_p, w_1, ..., w_q are chosen among u_n^(1), ..., u_n^(r). Replacing v_1, ..., v_p, w_1, ..., w_q by u_n^(r) (≤ u_n), and using the fact that for each j there are at most n terms containing r_j, we obtain the bound

K n Σ_{j=1}^n |r_j| e^{−u_n^{(r)2}/(1+|r_j|)},

which tends to zero by (4.12), so that D_r(u_n) holds as claimed. Finally, Lemma 3.3 shows that D'(u_n^(k)) holds for each k, and hence the final conclusion follows from Corollary 4.8. □
We may combine Theorem 2.9 with Corollary 4.8 in the following obvious way.

THEOREM 4.10 Let {ξ_n} be a stationary sequence, a_n > 0, b_n constants, and suppose that

P{a_n(M_n − b_n) ≤ x} →_w G(x)  as n → ∞

for some non-degenerate d.f. G. Suppose that D_r(u_n), D'(u_n^(k)) hold for all sequences of the form u_n^(k) = x_k/a_n + b_n, and let J_1, ..., J_s be disjoint subintervals of {1, ..., n}, J_k containing ν_{n,k} integers, where ν_{n,k}/n → θ_k > 0 (Σ θ_k ≤ 1). Then the maxima M(J_k) are asymptotically independent. Indeed,

P{M(J_k) ≤ x_k/a_n + b_n, k = 1, 2, ..., s} → ∏_{k=1}^s G^{θ_k}(x_k).

PROOF This follows at once from Corollary 4.8(i) and Theorem 2.9, by identifying τ_k' with −log G(x_k) and u_{n,k} = u_n^(k) = x_k/a_n + b_n. □
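For an i.i.d. sequence the limiting product form in Theorem 4.10 can be checked by direct computation: with standard exponential variables, a_n = 1 and b_n = log n give G(x) = exp(−e^{−x}), and the probability that the maximum over a block of θn indices lies below x + log n is (1 − e^{−x}/n)^{[θn]} → G^θ(x). A minimal numerical sketch (the exponential example is an illustration, not taken from the text):

```python
import math

def block_max_prob(n, theta, x):
    # Exact P{M(J) <= x + log n} for a block of [theta*n] i.i.d. Exp(1)
    # variables: each lies below the level with probability 1 - e^{-x}/n.
    return (1.0 - math.exp(-x) / n) ** int(theta * n)

def limit_prob(theta, x):
    # Limiting value G(x)^theta with G(x) = exp(-e^{-x}).
    return math.exp(-theta * math.exp(-x))

# Two disjoint half-blocks: the joint probability factorizes exactly for
# i.i.d. variables, and each factor approaches G^{1/2}(x_k).
n = 10**6
joint = block_max_prob(n, 0.5, 0.3) * block_max_prob(n, 0.5, 1.1)
limit = limit_prob(0.5, 0.3) * limit_prob(0.5, 1.1)
```

For dependent sequences the factorization is of course not exact; the content of the theorem is that it still holds in the limit under D_r and D'.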
Exceedances of multiple levels

It is natural to consider the exceedances of the levels u_n^(1), ..., u_n^(r) by η_n (η_n(j/n) = ξ_j as before) as a vector of point processes. While this may be achieved abstractly, we shall here, as an obvious aid to intuition, represent them as occurring along fixed horizontal lines L_1, ..., L_r in the plane, exceedances of u_n^(k) being represented as points on L_k. This will show the structure imposed by the fact that an exceedance of u_n^(k) is automatically an exceedance of u_n^(k+1), ..., u_n^(r), as illustrated in the following diagrams.

[Diagrams: (i) the levels u_n^(1), ..., u_n^(r) and the values of η_n(t); (ii) the representation in the plane, with the exceedances of each level marked on the fixed lines L_1, ..., L_r.]

Of the diagrams, (i) shows the levels and the values of η_n, from which one can see the exceeded levels, while (ii) marks the points of exceedance of each level along the fixed lines L_1, ..., L_r.
To pursue this a little further, the diagram (ii) represents the exceedances of all the levels as points in the plane. That is, we may regard them as a point process in the plane if we wish. To be sure, all the points lie only on certain horizontal lines, and a point on any L_k has points directly below it on all lower lines, but nevertheless the positions are stochastic and do define a two-dimensional point process, which we denote by N_n.

We may apply convergence theory to this sequence {N_n} of point processes and obtain the joint distributional results as a consequence. The positions of the lines L_1, ..., L_r do not matter as long as they are fixed and in the indicated order. From our previous theory, each one-dimensional point process, on a given line, converges to a Poisson process under appropriate conditions. The two-dimensional process indicated will not become Poisson in the plane, as is intuitively clear in view of the structure described. However, the exceedances N^(k) on L_k form successively more "severely thinned" versions of the exceedances N^(k+1) on L_{k+1} as k decreases. Of course, these are not thinnings formed by independent removal of events, except in the limit, where the Poisson process on L_k may be obtained from that on L_{k+1} by independent thinning, as will become apparent.
More specifically, we define the point process N in the plane, which will turn out to be the appropriate limiting point process, as follows. Let {σ_{1j}; j = 1, 2, ...} be the points of a Poisson process with parameter τ_r on L_r. Let β_j, j = 1, 2, ..., be i.i.d. random variables, independent also of the Poisson process on L_r, taking the values 1, 2, ..., r with probabilities

P{β_j = s} = (τ_{r−s+1} − τ_{r−s})/τ_r,  s = 1, 2, ..., r − 1,
P{β_j = r} = τ_1/τ_r,

i.e. P{β_j ≥ s} = τ_{r−s+1}/τ_r for s = 1, 2, ..., r. For each j, place points σ_{2j}, σ_{3j}, ..., σ_{β_j j} on the lines L_{r−1}, L_{r−2}, ..., L_{r−β_j+1}, vertically above σ_{1j}, to complete the point process N. Clearly, the probability that a point appears on L_{r−1} above σ_{1j} is just P{β_j ≥ 2} = τ_{r−1}/τ_r, and the deletions are independent, so that N^(r−1) is obtained as an independent thinning of the Poisson process N^(r). Hence N^(r−1) is a Poisson process (cf. Appendix) with intensity τ_r(τ_{r−1}/τ_r) = τ_{r−1}, as expected. Similarly, N^(k) is obtained as an independent thinning of N^(k+1), with deletion probability 1 − τ_k/τ_{k+1}, all the N^(k) being Poisson.
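The construction just given can be checked arithmetically: a point of the rate-τ_r Poisson process on L_r survives to line L_k exactly when its mark β ≥ r − k + 1, so the intensity on L_k must be τ_r P{β ≥ r − k + 1} = τ_k. A small sketch of this bookkeeping (the numerical values of the τ_k are illustrative only):

```python
def beta_pmf(tau):
    # tau = [tau_1, ..., tau_r] with tau_1 < ... < tau_r (0-based list).
    # P{beta = s} = (tau_{r-s+1} - tau_{r-s})/tau_r for s = 1, ..., r-1,
    # and P{beta = r} = tau_1/tau_r.
    r = len(tau)
    pmf = {s: (tau[r - s] - tau[r - s - 1]) / tau[-1] for s in range(1, r)}
    pmf[r] = tau[0] / tau[-1]
    return pmf

def line_intensity(tau, k):
    # Independent thinning: a column of points reaches line L_k exactly
    # when beta >= r - k + 1, so the surviving intensity on L_k is
    # tau_r * P{beta >= r - k + 1}, which should equal tau_k.
    r = len(tau)
    return tau[-1] * sum(p for s, p in beta_pmf(tau).items() if s >= r - k + 1)

tau = [1.0, 2.0, 5.0]  # tau_1 < tau_2 < tau_3
```

The checks below confirm that the mark distribution is a proper probability distribution and that each line carries the expected Poisson intensity.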
We may now give the main result.

THEOREM 4.11 (i) Suppose that D_r(u_n) holds, and that D'(u_n^(k)) holds for 1 ≤ k ≤ r, where the levels u_n^(1), ..., u_n^(r) satisfy (4.6). Then the point process N_n of exceedances of the levels u_n^(1), ..., u_n^(r) on the lines L_1, ..., L_r (represented as above) converges to the limiting point process N, as point processes on (0, 1] × R.

(ii) If the conditions of Corollary 4.8(ii) are satisfied, then N_n converges to N, as point processes on the entire right half plane, i.e. on (0, ∞) × R.

PROOF Again, by Theorem A.1 it is sufficient to show that

(a) E(N_n(B)) → E(N(B)) for all sets B of the form (a, b] × (α, β], 0 < a < b, with b ≤ 1 or b < ∞, respectively, and

(b) P{N_n(B) = 0} → P{N(B) = 0} for all sets B which are finite unions of disjoint sets of this form.

Again, (a) follows readily. If B = (a, b] × (α, β] intersects any of the lines, let these be L_s, L_{s+1}, ..., L_t (1 ≤ s ≤ t ≤ r). Then N(B) = Σ_{k=s}^t N^(k)((a, b]), and

E(N_n(B)) = Σ_{k=s}^t E(N_n^(k)((a, b])) = n(b − a) Σ_{k=s}^t (τ_k/n + o(1/n)) → (b − a) Σ_{k=s}^t τ_k,

which is clearly just E(N(B)).
To show (b) we must prove that P{N_n(B) = 0} → P{N(B) = 0} for sets B of the form B = ∪_{k=1}^m C_k with C_k = (a_k, b_k] × (α_k, β_k]. By considering intersections and differences of the intervals (a_k, b_k], we may change these so that, for k ≠ ℓ, (a_k, b_k] and (a_ℓ, b_ℓ] are either identical or disjoint. If these intervals are identical, the corresponding (α_k, β_k] and (α_ℓ, β_ℓ] must be disjoint, so that α_ℓ ≥ β_k or α_k ≥ β_ℓ. If, say, the first of these holds, N_n(C_k) = 0 implies N_n(C_ℓ) = 0 (by the "thinning property" of N_n), so that {N_n(C_k) = 0} ∩ {N_n(C_ℓ) = 0} = {N_n(C_k) = 0}, and hence we may write

(4.14)  {N_n(B) = 0} = ∩_{k=1}^s {N_n(C_k) = 0},

where C_k = (a_k, b_k] × (α_k, β_k] and the intervals (a_k, b_k] are disjoint. But if the lowest line L_j intersecting C_k is L_{m_k}, say, clearly

(4.15)  {N_n(C_k) = 0} = {N_n^{(m_k)}((a_k, b_k]) = 0}.

But this is just the event {M([a_k n], [b_k n]) ≤ u_n^{(m_k)}}, so that Corollary 4.8, parts (i) and (ii) respectively, gives

P{N_n(B) = 0} → e^{−Σ_{k=1}^s (b_k − a_k) τ_{m_k}} = P{N(B) = 0},

since (4.14) and (4.15) clearly also hold with N instead of N_n, and the result follows. □
COROLLARY 4.12 Let {ξ_n} satisfy the conditions of Theorem 4.11(i) or (ii), and let B_1, ..., B_s be Borel subsets of the unit interval, or of the positive real line, respectively, whose boundaries have zero Lebesgue measure. Then, for integers m_j^(k),

(4.16)  P{N_n^(k)(B_j) = m_j^(k), j = 1, ..., s, k = 1, ..., r} → P{N^(k)(B_j) = m_j^(k), j = 1, ..., s, k = 1, ..., r}.

PROOF Let B_{jk} be a rectangle in the plane with base B_j and such that L_k intersects its interior, but is disjoint from all other lines. Then the left hand side of (4.16) may be written as P{N_n(B_{jk}) = m_j^(k), j = 1, ..., s, k = 1, ..., r}, which by the appendix converges to the same quantity with N instead of N_n, i.e. to the right hand side of (4.16). □
n
Joint asymptotic distribution of the largest maxima
We may apply the above results to obtain asymptotic joint distributions
M~k), together with their
for a finite number of the k - th largest maxima
locations if we wish. Such results may be obtained by considering appropriate continuous functionals of the sequence
N , but here we take a
n
more elementary approach, in giving examples of typical results. First
we generalize Theorems 1.15 and 1.16.
THEOREM 4.13 Let the levels u_n^(k), 1 ≤ k ≤ r, satisfy (4.6) with u_n^(1) > u_n^(2) > ... > u_n^(r), and suppose that the stationary sequence {ξ_n} satisfies D_r(u_n) and D'(u_n^(k)) for 1 ≤ k ≤ r. Let S_n^(k) denote the number of exceedances of u_n^(k) by ξ_1, ..., ξ_n. Then, for k_1 ≥ 0, ..., k_r ≥ 0,

(4.17)  P{S_n^(1) = k_1, S_n^(2) = k_1 + k_2, ..., S_n^(r) = k_1 + ... + k_r}
         → (τ_1^{k_1}/k_1!)((τ_2 − τ_1)^{k_2}/k_2!) ... ((τ_r − τ_{r−1})^{k_r}/k_r!) e^{−τ_r}

as n → ∞.

PROOF With the previous notation, S_n^(j) = N_n^(j)((0, 1]), and so by Corollary 4.12 the left hand side of (4.17) converges to

(4.18)  P{S^(1) = k_1, S^(2) = k_1 + k_2, ..., S^(r) = k_1 + ... + k_r},

where S^(j) = N^(j)((0, 1]). But this is the probability that precisely k_1 + k_2 + ... + k_r events occur in the unit interval for the Poisson process on the line L_r, and that k_1 of the corresponding β's take the value r, k_2 of the β's take the value r − 1, and so on. But the independence properties show that, conditional on a given total number k_1 + k_2 + ... + k_r, the numbers taking the values r, r − 1, ..., 1 have a multinomial distribution based on the respective probabilities τ_1/τ_r, (τ_2 − τ_1)/τ_r, ..., (τ_r − τ_{r−1})/τ_r. Hence (4.18) is

((k_1 + k_2 + ... + k_r)!/(k_1! k_2! ... k_r!)) (τ_1/τ_r)^{k_1} ((τ_2 − τ_1)/τ_r)^{k_2} ... ((τ_r − τ_{r−1})/τ_r)^{k_r} × P{N^(r)((0, 1]) = k_1 + k_2 + ... + k_r},

which gives (4.17), since

P{N^(r)((0, 1]) = k_1 + k_2 + ... + k_r} = e^{−τ_r} τ_r^{k_1+k_2+...+k_r}/(k_1 + k_2 + ... + k_r)!.  □

Of course this agrees with the result of Theorem 1.15.
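Formula (4.17) states that the increments S_n^(1), S_n^(2) − S_n^(1), ..., S_n^(r) − S_n^(r−1) are asymptotically independent Poisson variables with means τ_1, τ_2 − τ_1, ..., τ_r − τ_{r−1}; summing (4.17) over k_1 + ... + k_r = m must therefore recover the Poisson(τ_r) distribution of S_n^(r), by the multinomial theorem. A quick numerical check of this identity (the values of the τ_k are arbitrary):

```python
import math
from itertools import product

def limit_pmf(ks, tau):
    # Right-hand side of (4.17): independent Poisson increments with means
    # tau_1, tau_2 - tau_1, ..., tau_r - tau_{r-1}.
    incr = [tau[0]] + [tau[i] - tau[i - 1] for i in range(1, len(tau))]
    p = math.exp(-tau[-1])
    for k, lam in zip(ks, incr):
        p *= lam ** k / math.factorial(k)
    return p

tau = [1.0, 3.0, 4.0]
m = 5
# Marginal of S^(r): sum over all (k_1, k_2, k_3) with k_1 + k_2 + k_3 = m.
marginal = sum(limit_pmf(ks, tau)
               for ks in product(range(m + 1), repeat=3) if sum(ks) == m)
poisson = math.exp(-tau[-1]) * tau[-1] ** m / math.factorial(m)
```

By the multinomial theorem the two quantities agree exactly, which is what the test below verifies.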
The next result (which generalizes Theorem 1.16) again is given to exemplify the applicability of the Poisson theory.

THEOREM 4.14 Suppose that

(4.19)  P{a_n(M_n^(1) − b_n) ≤ x} → G(x)

for some non-degenerate d.f. G, and that D_2(u_n), D'(u_n^(k)) hold whenever u_n^(k) = x_k/a_n + b_n, k = 1, 2. Then the conclusion of Theorem 1.16 holds, viz. for x_1 ≥ x_2,

P{a_n(M_n^(1) − b_n) ≤ x_1, a_n(M_n^(2) − b_n) ≤ x_2} → G(x_2)(1 + log G(x_1) − log G(x_2))

if G(x_2) > 0 (zero when G(x_2) = 0).

PROOF If u_n^(k) = x_k/a_n + b_n, by (4.19) and the assumptions of the theorem it follows from Theorem 2.6 that

(4.20)  1 − F(u_n^(k)) ~ τ_k/n,  where τ_k = −log G(x_k),

when G(x_k) > 0. But clearly

P{M_n^(1) ≤ u_n^(1), M_n^(2) ≤ u_n^(2)} = P{S_n^(1) = 0, S_n^(2) = 0} + P{S_n^(1) = 0, S_n^(2) = 1},

where S_n^(k) is the number of exceedances of u_n^(k) by ξ_1, ..., ξ_n. By Theorem 4.13 we see that the limit of the above probabilities is e^{−τ_2} + (τ_2 − τ_1)e^{−τ_2}, which is the desired result. □
As a final example, we obtain the limiting joint distribution of the second maximum and its location (taking the leftmost if two values are equal).

THEOREM 4.15 Suppose that (4.19) holds and that D_4(u_n), D'(u_n^(k)) hold for all u_n^(k) = x_k/a_n + b_n, k = 1, 2, 3, 4. Then if L_n^(2), M_n^(2) are the location and height of the second largest of ξ_1, ..., ξ_n respectively,

(4.21)  P{(1/n)L_n^(2) ≤ t, a_n(M_n^(2) − b_n) ≤ x} → t G(x)(1 − log G(x)),

x real, 0 < t < 1. That is, the location and height are asymptotically independent, the location being asymptotically uniform.
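At t = 1 the limit in (4.21) must reduce to the marginal limiting d.f. of the second maximum: in the Poisson limit, M_n^(2) ≤ u_n means at most one exceedance, which has probability e^{−τ}(1 + τ) with τ = −log G(x), and this equals G(x)(1 − log G(x)). A one-line consistency check for the Gumbel case G(x) = exp(−e^{−x}):

```python
import math

def poisson_at_most_one(x):
    # P{at most one exceedance} = e^{-tau} (1 + tau), with
    # tau = -log G(x) = e^{-x} for the Gumbel d.f. G(x) = exp(-e^{-x}).
    tau = math.exp(-x)
    return math.exp(-tau) * (1.0 + tau)

def rhs_421(t, x):
    # Right-hand side of (4.21): t * G(x) * (1 - log G(x)).
    g = math.exp(-math.exp(-x))
    return t * g * (1.0 - math.log(g))
```

At t = 1 the two expressions coincide for every x, and for t < 1 the limit is simply scaled by the uniform location probability.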
PROOF As in the previous theorem, we see that (4.6) holds with τ_k = −log G(x_k). Write I, J for the intervals {1, 2, ..., [nt]}, {[nt] + 1, ..., n}, respectively, and M_n^(1)(I), M_n^(2)(I), M_n^(1)(J), M_n^(2)(J) for the maxima and second largest ξ_j in the intervals I, J, and let H_n(x_1, x_2, x_3, x_4) be the joint d.f. of the normalized r.v.'s

X_n^(1) = a_n(M_n^(1)(I) − b_n),  X_n^(2) = a_n(M_n^(2)(I) − b_n),
Y_n^(1) = a_n(M_n^(1)(J) − b_n),  Y_n^(2) = a_n(M_n^(2)(J) − b_n).

That is, with x_1 > x_2 and x_3 > x_4,

H_n(x_1, x_2, x_3, x_4) = P{M_n^(1)(I) ≤ u_n^(1), M_n^(2)(I) ≤ u_n^(2), M_n^(1)(J) ≤ u_n^(3), M_n^(2)(J) ≤ u_n^(4)},

where u_n^(k) = x_k/a_n + b_n as above. Alternatively, we see that this equals

P{N_n^(1)(I') = 0, N_n^(2)(I') ≤ 1, N_n^(3)(J') = 0, N_n^(4)(J') ≤ 1},

where I' = (0, t] and J' = (t, 1], so that an obvious application of Corollary 4.12 with B_1 = I', B_2 = J' gives

(4.22)  lim_{n→∞} H_n(x_1, x_2, x_3, x_4) = P{N^(1)(I') = 0, N^(2)(I') ≤ 1} × P{N^(3)(J') = 0, N^(4)(J') ≤ 1} = H(x_1, x_2, x_3, x_4),

say, for x_1 > x_2 and x_3 > x_4. Now clearly

(4.23)  P{(1/n)L_n^(2) ≤ t, a_n(M_n^(2) − b_n) ≤ x_2}
         = P{M_n^(2)(I) ≤ u_n^(2), M_n^(2)(I) ≥ M_n^(1)(J)} + P{M_n^(1)(I) ≤ u_n^(2), M_n^(1)(J) > M_n^(1)(I) ≥ M_n^(2)(J)}
         = P{X_n^(2) ≤ x_2, X_n^(2) ≥ Y_n^(1)} + P{X_n^(1) ≤ x_2, Y_n^(1) > X_n^(1) ≥ Y_n^(2)}.

But, by the above calculation, (X_n^(1), X_n^(2), Y_n^(1), Y_n^(2)) converges in distribution to (X_1, X_2, Y_1, Y_2), whose joint d.f. is H, which is clearly absolutely continuous since G is (being an extreme value d.f.). Hence, since the boundaries of sets in R^4 such as {(w_1, w_2, w_3, w_4); w_2 ≤ x_2, w_2 ≥ w_3} clearly have zero Lebesgue measure, it follows that the sum of probabilities in (4.23) converges to

(4.24)  P{X_2 ≤ x_2, X_2 ≥ Y_1} + P{X_1 ≤ x_2, Y_1 > X_1 ≥ Y_2},

and therefore the left hand side of (4.21) converges to (4.24). This may be evaluated using the joint distribution H of X_1, X_2, Y_1, Y_2 given by (4.22). However, it is simpler to note that we would obtain the same result (4.24) if the sequence were i.i.d. But (4.21) is simply evaluated for an i.i.d. sequence by noting the independence of L_n^(2) and M_n^(2) and the fact that L_n^(2) is then uniform, giving the limit stated in (4.21), as is readily shown. □
Complete Poisson convergence

In the previous point process convergence results we obtained a limiting point process in the plane, formed from the exceedances of a fixed number r of increasingly high levels. The limiting process was not Poisson in the plane, though composed of r successively more severely thinned Poisson processes on r lines.

On the other hand, we may regard the sample sequence {ξ_n} itself, after suitable transformations of both coordinates, as a point process in the plane and, by somewhat strengthening the assumptions, show that this converges to a Poisson process in the plane. This procedure has been used for independent r.v.'s by e.g. Pickands (1971) and Resnick (1975), and more recently for stationary sequences by R.J. Adler (1978), who used the linear normalization of process values provided by the constants a_n, b_n appearing in the asymptotic distribution of M_n. Here we shall consider a slightly more general case.

Specifically, with the standard notation, suppose that u_n(τ) is defined for each n = 1, 2, ... and τ > 0, is strictly decreasing in τ for each n, and satisfies (1.17), viz.

(4.25)  1 − F(u_n(τ)) = τ/n + o(1/n).

Here we will use N_n to denote the point process in the plane consisting of the points (j/n, u_n^{−1}(ξ_j)), j = 1, 2, ..., where u_n^{−1} denotes the inverse function of u_n(τ) (defined on the range of the u_n(τ)).
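For an i.i.d. sequence with continuous d.f. F, (4.25) lets one take u_n^{−1}(ξ_j) ≈ n(1 − F(ξ_j)), which is n times an independent uniform variable; the expected number of transformed points in (a, b] × [α, β) is then ([nb] − [na])(β − α)/n, converging to the Lebesgue measure (b − a)(β − α) of the set, as the Poisson limit requires. A sketch of this mean computation (the uniform reduction is an illustration for the i.i.d. case only):

```python
import math

def mean_count(n, a, b, alpha, beta):
    # E N_n((a,b] x [alpha,beta)) for i.i.d. xi_j with continuous d.f. F:
    # u_n^{-1}(xi_j) = n(1 - F(xi_j)) is uniform on (0, n), so each of the
    # [nb] - [na] indices lands in [alpha, beta) with probability
    # (beta - alpha)/n.
    return (math.floor(n * b) - math.floor(n * a)) * (beta - alpha) / n

limit = (0.7 - 0.2) * (3.0 - 1.0)           # (b - a)(beta - alpha)
approx = mean_count(10**6, 0.2, 0.7, 1.0, 3.0)
```

Under dependence the mean calculation is unchanged; it is the higher-order count probabilities that require the D_r and D' conditions of the theorem below.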
THEOREM 4.16 Suppose that the u_n(τ) are defined as above, satisfying (4.25), that D'(u_n) holds for each u_n = u_n(τ), and that D_r(u_n) holds for each r = 1, 2, ... and u_n = (u_n(τ_1), ..., u_n(τ_r)), for each choice of τ_1, ..., τ_r. Then the point processes N_n, consisting of the points (j/n, u_n^{−1}(ξ_j)), converge to a Poisson process N on (0, ∞) × (0, ∞) having Lebesgue measure as its intensity.

PROOF This follows relatively simply by using the previous r-level exceedance theory. Here it will be convenient to use rectangles whose vertical intervals are closed at the bottom rather than the top, so that
we need to show

(a) E(N_n(B)) → E(N(B)) for all sets B of the form (a, b] × [α, β), 0 < a < b, 0 < α < β, and

(b) P{N_n(B) = 0} → P{N(B) = 0} for sets B which are finite unions of sets of this form.

Here, (a) follows simply, since if B = (a, b] × [α, β),

E(N_n(B)) = ([nb] − [na]) P{α ≤ u_n^{−1}(ξ_1) < β}
          ~ n(b − a) P{u_n(β) < ξ_1 ≤ u_n(α)}
          = n(b − a){F(u_n(α)) − F(u_n(β))}
          = n(b − a)((β − α)/n + o(1/n)),

while E(N(B)) = (b − a)(β − α).

To show (b) we note that any finite disjoint union of such rectangles may be written in the form ∪_j (E_j × F_j), where the E_j = (a_j, b_j] are disjoint and each F_j is a finite disjoint union ∪_k [α_{j,k}, β_{j,k}) (cf. the proof of Theorem 4.11). Suppose first that there is just one set, B = E × F say, where we write F = ∪_{k=1}^m [τ_{2k−1}, τ_{2k}), and where we may clearly take τ_1 < τ_2 < ... < τ_{2m} (r = 2m). Now N_n(B) = 0 means that, for each k, there is no j with j/n ∈ E and u_n^{−1}(ξ_j) ∈ [τ_{2k−1}, τ_{2k}), i.e. such that ξ_j exceeds u_n(τ_{2k}) but not u_n(τ_{2k−1}). But this is equivalent to the statement that, for j/n ∈ E, ξ_j exceeds u_n(τ_{2k−1}) as many times as it exceeds u_n(τ_{2k}). That is, writing N_n^(k)(E) for the number of exceedances of u_n(τ_k) by the ξ_j with j/n ∈ E,

(4.26)  {N_n(B) = 0} = ∩_{k=1}^{r/2} {N_n^{(2k−1)}(E) = N_n^{(2k)}(E)}.

But N_n^(k)(E) is precisely the same as in Theorem 4.11 and its corollary, and their conditions are clearly satisfied, so that by Corollary 4.12(ii) we have

(4.27)  (N_n^(1)(E), N_n^(2)(E), ..., N_n^(r)(E)) →_d (N^(1)(E), N^(2)(E), ..., N^(r)(E)),

where N^(1), ..., N^(r) are the r successively thinned Poisson processes on r fixed lines as defined prior to Theorem 4.11. But since all the r.v.'s N_n^(k)(E), N^(k)(E) are integer valued, it is an obvious exercise in distributional convergence in R^r to show from (4.27) that the probability of the pairwise equalities in (4.26) converges to the same probability with N^(k) replacing N_n^(k). Thus

P{N_n(B) = 0} → P(∩_{k=1}^{r/2} {N^{(2k−1)}(E) = N^{(2k)}(E)}).

From the discussion prior to Theorem 4.11 we see that the events in braces on the right occur if and only if the β_j corresponding to each Poisson event on the line L_r is even, i.e. takes one of the values 2, 4, 6, ..., r. Since P{β_j = s} = (τ_{r−s+1} − τ_{r−s})/τ_r for s = 1, 2, ..., r − 1, and P{β_j = r} = τ_1/τ_r, we have, writing

γ = (τ_{r−1} − τ_{r−2}) + (τ_{r−3} − τ_{r−4}) + ... + (τ_3 − τ_2) + τ_1,

P{N_n(B) = 0} → Σ_{j=0}^∞ e^{−τ_r(b−a)} ((τ_r(b − a))^j/j!) (γ/τ_r)^j = e^{(b−a)(γ−τ_r)} = e^{−m(B)},

where m denotes Lebesgue measure, since

(b − a)(γ − τ_r) = −Σ_{k=1}^{r/2} (b − a)(τ_{2k} − τ_{2k−1}) = −m(B).

Hence (b) follows when B = E × F. When B = ∪_j (E_j × F_j) the same proof applies, using the full statement of Corollary 4.12, with slightly more notational complexity since more τ_k's may be needed, corresponding to the additional E_j's. □
In the theorem above it is required, of course, that the conditions hold for every r and every choice of the τ_k. If there is a d.f. G(x) such that τ(x) may be chosen as a function with P{M_n ≤ u_n(τ(x))} → G(x), then we would have τ(x) = −log G(x) and P{M_n ≤ v_n(x)} → G(x), with v_n(x) = u_n(τ(x)). In such a case it would be natural to consider the point process formed from the points (j/n, v_n^{−1}(ξ_j)) instead of (j/n, u_n^{−1}(ξ_j)). In particular, when a linear normalization leads to an asymptotic distribution, i.e. P{a_n(M_n − b_n) ≤ x} → G(x), we have v_n(x) = x/a_n + b_n, and it is natural to consider the point process N_n' consisting of the points (j/n, a_n(ξ_j − b_n)). This is the case considered in Adler (1978), where it is shown that a (non-homogeneous) Poisson limit holds. Here we obtain this result as a corollary of Theorem 4.16.
THEOREM 4.17 Suppose that (4.4) holds, i.e.

P{a_n(M_n − b_n) ≤ x} → G(x)

for some non-degenerate d.f. G. Suppose that D'(u_n) holds for all sequences u_n = x/a_n + b_n, and that D_r(u_n) holds for all r = 1, 2, ... and all sequences u_n^(k) = x_k/a_n + b_n, 1 ≤ k ≤ r, for arbitrary choices of the x_k. Then, if N_n' denotes the point process in the plane with points (j/n, a_n(ξ_j − b_n)), we have N_n' →_d N', where N' is a Poisson process whose intensity measure is the product of Lebesgue measure and the measure defined by the increasing function log G(y).

PROOF By Theorem 2.6 the conditions of Theorem 4.16 hold, and hence N_n →_d N, with the notation of Theorem 4.16. But if N_n has an atom at (s, t), then N_n' has an atom at (s, τ^{−1}(t)), where τ(x) = −log G(x). Hence N' is obtained from the Poisson process N by replacing atoms at points (s, t) by atoms at points (s, τ^{−1}(t)), and by Theorem A.2 this is also a Poisson process, with intensity measure λ' defined by

λ'((a, b] × (α, β]) = (b − a)(τ(α) − τ(β)) = (b − a)(log G(β) − log G(α)),

from which the result follows. (Note that since G is an extreme value distribution, τ is continuous and strictly decreasing where G is non-zero.) □
Finally, we note that all the results which follow from the multi-level result (Theorem 4.11) may be obtained from the last two theorems; however, the D-assumptions made there are correspondingly more stringent.
CHAPTER 5

NORMAL SEQUENCES UNDER STRONG DEPENDENCE*

In Chapters 3 and 4 it was seen that if the dependence in a stationary normal sequence is not too strong, then the extreme values of the dependent sequence have the same asymptotic distribution as the extreme values of independent normal variables, and it was shown that the exceedances of the level u_n = x/a_n + b_n converge in distribution to a Poisson process with intensity e^{−x}. Thus, in particular, the distribution of a_n(M_n − b_n) tends to a double exponential distribution, and the numbers of exceedances in disjoint intervals are asymptotically independent. The crucial conditions needed for these results concern the behaviour of r_n log n; they hold if r_n log n → 0 or, in somewhat more general circumstances, if r_n log n is not too large too often. Recent results by Mittal and Ylvisaker (1975), which will be given below, show that these conditions are almost the best possible ones. Their results are that if r_n log n → γ > 0 then a_n(M_n − b_n) does not tend in distribution to exp(−e^{−x}) but to a convolution of exp(−e^{−x}) and a normal distribution function, and further that if r_n log n → ∞ (but r_n → 0 in a sufficiently smooth manner still) then a different normalization is needed and the limiting distribution is normal.

In this chapter we will use ideas from the paper by Mittal and Ylvisaker to show that if r_n log n → γ > 0 then the point process of exceedances of the level u_n converges weakly to a Cox process (i.e. a mixture of Poisson processes with different intensities). The slow decay of the correlations not only changes the limiting distribution of extremes, but also destroys the asymptotic independence between extreme values in disjoint intervals. The reason for this is explained in an instructive way in the proof of Theorem 5.2 below, where the limiting distribution of the exceedances of u_n is obtained as the limiting distribution of the exceedances, by an independent normal sequence, of a random level (x + γ − √(2γ)ζ)/a_n + b_n, where ζ is a standard normal variable representing "the common part" of the first n dependent variables.

* This chapter contains more special material and can be omitted at first reading.
The main tool for the proof will, as in Chapter 3, be Lemma 3.2, which relates the distributions of the maxima of two normal sequences with different correlations. Now it is of course no longer sufficient to compare with an independent sequence. Instead, it will be convenient to compare the distribution of M_n with that of the maximum M_n(ρ) of n standard normal variables which have constant covariance ρ ≥ 0 between any two variables. The usefulness of this comparison stems from the fact that if ζ, ζ_1, ζ_2, ... are independent standard normal variables, then (1 − ρ)^{1/2}ζ_1 + ρ^{1/2}ζ, ..., (1 − ρ)^{1/2}ζ_n + ρ^{1/2}ζ have constant covariance ρ between any two, and thus M_n(ρ) has the same distribution as (1 − ρ)^{1/2}M_n(0) + ρ^{1/2}ζ. That the comparison is possible follows from the first lemma.
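The identity M_n(ρ) =_d (1 − ρ)^{1/2}M_n(0) + ρ^{1/2}ζ is in fact pathwise for the constructed variables: the common component ρ^{1/2}ζ shifts every coordinate by the same amount, so it commutes with taking the maximum. A short sketch of this bookkeeping (no distributional claim is tested, only the algebraic decomposition):

```python
import random

random.seed(1)

def constant_corr_sample(n, rho):
    # X_i = sqrt(1-rho)*zeta_i + sqrt(rho)*zeta for independent standard
    # normals has Cov(X_i, X_j) = rho for i != j and Var(X_i) = 1.
    zeta = random.gauss(0.0, 1.0)
    zetas = [random.gauss(0.0, 1.0) for _ in range(n)]
    xs = [(1.0 - rho) ** 0.5 * z + rho ** 0.5 * zeta for z in zetas]
    return xs, zetas, zeta

n, rho = 1000, 0.25
xs, zetas, zeta = constant_corr_sample(n, rho)
# The common term shifts all variables equally, so the maximum decomposes:
lhs = max(xs)
rhs = (1.0 - rho) ** 0.5 * max(zetas) + rho ** 0.5 * zeta
```

This pathwise decomposition is what reduces questions about M_n(ρ) to questions about the independent maximum M_n(0) and a single normal variable.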
LEMMA 5.1 Let b > 0 and γ > 0 be constants, put ρ_n = γ/log n, and suppose that

(5.1)  r_n log n → γ  as n → ∞.

Then

(5.2)  n Σ_{k=1}^{[nb]} |r_k − ρ_n| e^{−u_n²/(1+w_k)} → 0  as n → ∞,

where w_k = max(ρ_n, |r_k|).

PROOF Put δ(k) = sup_{k≤m≤[bn]} w_m and let p = [n^α], where 0 < α < (1 − δ(0))/(1 + δ(0)). As in the proof of Lemma 3.1, the contribution from the sum up to p tends to zero, so we only have to prove that the remaining part of the sum also tends to zero. Now

(5.3)  n e^{−u_n²/(1+δ(p))} Σ_{k=p+1}^{[nb]} |r_k − ρ_n| = (n²/log n) e^{−u_n²/(1+δ(p))} · (log n/n) Σ_{k=p+1}^{[nb]} |r_k − ρ_n|.

Since r_n log n → γ, there is a constant C such that r_n log n ≤ C, n ≥ 1. Hence also δ(p) log p ≤ C, so by (3.4) we have (again letting K be a constant whose value may change from line to line)

(5.4)  (n²/log n) e^{−u_n²/(1+δ(p))} ≤ (n²/log n) e^{−u_n²/(1+C/(α log n))} ≤ K n^{2C/(α log n)} = K e^{2C/α},

which is bounded as n → ∞. Moreover, adding and subtracting γ/log k, and using the fact that log n/log k is bounded by 1/α up to terms that are negligible for k > p, gives

(5.5)  (log n/n) Σ_{k=p+1}^{[nb]} |r_k − ρ_n| ≤ (1/(αn)) Σ_{k=p+1}^{[nb]} |r_k log k − γ| + (γ/n) Σ_{k=p+1}^{[nb]} |1 − log n/log k|.

Here the first term on the right tends to zero by (5.1). Furthermore, estimating the second sum by an integral, we obtain

(1/n) Σ_{k=p+1}^{[nb]} |1 − log n/log k| ≤ (1/(α log n)) · (1/n) Σ_{k=p+1}^{[nb]} |log(k/n)| = O((1/(α log n)) ∫_0^b |log x| dx),

and hence the left hand side of (5.5) tends to zero. Since, by (5.4), the first factor on the right of (5.3) is bounded, this concludes the proof of (5.2). □
It is possible to weaken the hypothesis of Lemma 5.1, and thus of Theorem 5.2 below, in the same way as (3.1) is weakened to (3.11). However, this is quite straightforward and is left to the reader.

Before stating the theorem, we recall from Chapter 4 the notation N_n for the point process of exceedances of the level u_n by η_n, where η_n(j/n) = ξ_j is defined from the stationary sequence {ξ_j}, j = 1, 2, ...; n = 1, 2, .... Further, let N be a Cox process with (stochastic) intensity exp{−x − γ + √(2γ)ζ}, where ζ is a standard normal random variable, i.e. let N have the distribution determined by

(5.6)  P{N(B_i) = k_i, i = 1, ..., r} = ∫_{−∞}^∞ ∏_{i=1}^r {(|B_i| e^{−x−γ+√(2γ)z})^{k_i}/k_i!} exp{−|B_i| e^{−x−γ+√(2γ)z}} φ(z) dz

for B_1, ..., B_r positive disjoint Borel sets (|B_i| denoting Lebesgue measure).

THEOREM 5.2 Suppose that {ξ_n} is a stationary normal sequence with covariances {r_n}, and that the levels are u_n = x/a_n + b_n, with a_n = (2 log n)^{1/2} and b_n = a_n − (2a_n)^{−1}(log log n + log 4π). If (5.1) holds, then the point process N_n of time-normalized exceedances of the level u_n converges in distribution to N, where N is the Cox process defined by (5.6).
PROOF Again we have to verify (a) and (b) of Theorem A.1. As in the proof of Theorem 4.1, E(N_n((a, b])) → (b − a)e^{−x}, and since E(N((a, b])) = (b − a)E(e^{−x−γ+√(2γ)ζ}) = (b − a)e^{−x}, the first condition follows immediately.

We use the notation M_n(a, b) = max{ξ_k; [an] < k ≤ [bn]}, and write M_n(a, b; ρ) for the maximum of the variables with index k, [an] < k ≤ [bn], in a normal sequence with constant covariance ρ between any two variables. Letting a = a_1 < b_1 ≤ a_2 < b_2 ≤ ... ≤ a_k < b_k = b, it then follows as above that M_n(a_1, b_1; ρ), ..., M_n(a_k, b_k; ρ) have the same distribution as

(1 − ρ)^{1/2} M_n(a_1, b_1; 0) + ρ^{1/2}ζ, ..., (1 − ρ)^{1/2} M_n(a_k, b_k; 0) + ρ^{1/2}ζ,

where {M_n(a_i, b_i; 0)}_{i=1}^k and ζ all are independent and ζ is standard normal. Now

P(∩_{i=1}^k {N_n((a_i, b_i]) = 0}) = P(∩_{i=1}^k {M_n(a_i, b_i) ≤ u_n}),

and with ρ_n = γ/log n it follows from Lemmas 3.2 and 5.1 that

P(∩_{i=1}^k {M_n(a_i, b_i) ≤ u_n}) − P(∩_{i=1}^k {M_n(a_i, b_i; ρ_n) ≤ u_n}) → 0

as n → ∞. Thus, to prove (b) of Theorem A.1 it is enough to check that

(5.7)  P(∩_{i=1}^k {M_n(a_i, b_i; ρ_n) ≤ u_n}) → P(∩_{i=1}^k {N((a_i, b_i]) = 0}).

However,

P(∩_{i=1}^k {M_n(a_i, b_i; ρ_n) ≤ u_n}) = ∫_{−∞}^∞ P(∩_{i=1}^k {M_n(a_i, b_i; 0) ≤ (1 − ρ_n)^{−1/2}(u_n − ρ_n^{1/2}z)}) φ(z) dz,

and using that a_n = (2 log n)^{1/2}, b_n = a_n + O(a_n^{−1} log log n), and ρ_n = γ/log n, we get

(1 − ρ_n)^{−1/2}(u_n − ρ_n^{1/2}z) = (1 + ρ_n/2 + O(ρ_n²))(x/a_n + b_n − ρ_n^{1/2}z)
= x/a_n + b_n − (γ/log n)^{1/2}z + (γ/log n)(2 log n)^{1/2}/2 + o(a_n^{−1})
= (x + γ − √(2γ)z)/a_n + b_n + o(a_n^{−1}).

Hence it follows from Theorem 4.1(ii) that, for fixed z,

P(∩_{i=1}^k {M_n(a_i, b_i; 0) ≤ (1 − ρ_n)^{−1/2}(u_n − ρ_n^{1/2}z)}) → ∏_{i=1}^k exp{−(b_i − a_i) e^{−x−γ+√(2γ)z}},

and by dominated convergence this proves that

∫_{−∞}^∞ P(∩_{i=1}^k {M_n(a_i, b_i; 0) ≤ (1 − ρ_n)^{−1/2}(u_n − ρ_n^{1/2}z)}) φ(z) dz
→ ∫_{−∞}^∞ ∏_{i=1}^k exp{−(b_i − a_i) e^{−x−γ+√(2γ)z}} φ(z) dz = P(∩_{i=1}^k {N((a_i, b_i]) = 0}),

i.e. that (5.7) holds. □
COROLLARY 5.3 Suppose that the conditions of Theorem 5.2 are satisfied and that B_1, ..., B_k are disjoint positive Borel sets whose boundaries have Lebesgue measure zero. Then P(∩_{i=1}^k {N_n(B_i) = k_i}) tends to the expression on the right hand side of (5.6). In particular,

P{a_n(M_n − b_n) ≤ x} = P{N_n((0, 1]) = 0} → ∫_{−∞}^∞ exp(−e^{−x−γ+√(2γ)z}) φ(z) dz

as n → ∞. □
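The limit d.f. in Corollary 5.3 can be evaluated by numerical integration. Since the mixing variable e^{−γ+√(2γ)ζ} has mean one and exp(−c·w) is convex in w, Jensen's inequality shows the Cox limit always lies above the pure double exponential exp(−e^{−x}), to which it collapses as γ → 0. A sketch using a midpoint Riemann sum (the grid parameters are arbitrary choices):

```python
import math

def cox_limit_df(x, gamma, grid=4000, zmax=8.0):
    # Midpoint-rule approximation of
    #   integral exp(-e^{-x - gamma + sqrt(2*gamma)*z}) phi(z) dz
    # over z in [-zmax, zmax], with phi the standard normal density.
    h = 2.0 * zmax / grid
    s = math.sqrt(2.0 * gamma)
    total = 0.0
    for i in range(grid):
        z = -zmax + (i + 0.5) * h
        phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        total += math.exp(-math.exp(-x - gamma + s * z)) * phi * h
    return total

gumbel = math.exp(-math.exp(-1.0))  # exp(-e^{-x}) at x = 1
```

For very small γ the two values agree closely, while for γ of order one the Cox limit is visibly larger, illustrating how strong dependence thickens the distribution of the maximum.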
It may be noted that it is quite straightforward to extend the above result to deal with crossings of two or more adjacent levels. However, to avoid repetition we will omit the details of this.

Our next concern is the case when r_n log n → ∞. Again the problem of exceedances of a fixed level by the dependent sequence can be reduced to considering the exceedances of a random level by an independent sequence, but in this case the random part of the level is "too large"; in the limit the independent sequence will have either no or infinitely many exceedances of the random level. Thus it is not possible to find a normalization that makes the point process of exceedances converge weakly to a non-trivial limit, and accordingly we will only treat the one-dimensional distribution of the maximum. Since the derivation of the general result is complicated, we shall consider a rather special case, which brings out the main idea of Mittal and Ylvisaker's proof while avoiding some of the technicalities.
First we note that for γ > 0, 0 < q < 1, there is a convex sequence {r_n}_{n=0}^∞ with r_0 = 1 and r_n = γ/(log n)^q for n ≥ n_0, for some n_0 ≥ 2. (This is easy to see, since γ/(log n)^q is convex for n ≥ 2 and decreasing to zero.) By Polya's criterion {r_n} is a covariance sequence, and we shall now consider a stationary zero mean normal sequence {ξ_n} with this particular type of covariance. Further, since {r_n}_{n=0}^∞ is convex, also

(r_0 − r_n)/(1 − r_n), (r_1 − r_n)/(1 − r_n), ..., (r_{n−1} − r_n)/(1 − r_n), 0, 0, ...

is convex, so again according to Polya's criterion there is a zero mean normal sequence {ζ_n} with these covariances. Clearly ξ_1, ..., ξ_n have the same distribution as

(1 − r_n)^{1/2}ζ_1 + r_n^{1/2}ζ, ..., (1 − r_n)^{1/2}ζ_n + r_n^{1/2}ζ,

where ζ is standard normal and independent of {ζ_n}.
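Polya's criterion requires a convex, decreasing sequence with r_0 = 1 and r_n → 0; the choice r_n = γ/(log n)^q meets this for n ≥ 2, as nonnegative second differences confirm. A direct numerical check (the join down to r_0 = 1 near the origin is glossed over here, as in the text):

```python
import math

def r(n, gamma=0.5, q=0.5):
    # Covariance tail r_n = gamma / (log n)^q, defined for n >= 2.
    return gamma / math.log(n) ** q

# Polya's criterion hypotheses: convexity, i.e. nonnegative second
# differences r_{n-1} - 2 r_n + r_{n+1} >= 0, and monotone decrease to zero.
second_diffs = [r(n - 1) - 2.0 * r(n) + r(n + 1) for n in range(3, 2000)]
decreasing = all(r(n) > r(n + 1) for n in range(2, 2000))
```

The same check applies to the derived sequence (r_k − r_n)/(1 − r_n), since convexity is preserved by this affine rescaling.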
Putting M_n' = max_{1≤k≤n} ζ_k, the distribution of M_n = max_{1≤k≤n} ξ_k therefore is the same as that of (1 − r_n)^{1/2} M_n' + r_n^{1/2}ζ. This representation is the key to the proof of Theorem 5.6 below, but before proceeding to use it we shall prove two lemmas. The first one is a "technical" lemma of a type already used twice before.
LEMMA 5.4  Let $n' = [n e^{-\sqrt{\log n}}]$ and, suppressing the dependence on $n$, let $\rho_k = (r_k - r_n)/(1 - r_n)$, $k = 1, \dots, n$. Then, for each $\varepsilon > 0$,

(5.8)  $n' \sum_{k=1}^{n'} |\rho_k - \rho_{n'}|\, e^{-(b_n - \varepsilon r_n^{1/2})^2/(1+\rho_k)} \to 0$  as $n \to \infty$.
PROOF  Set $p = [n^a]$, where $0 < a < (1 - r_1)/(1 + r_1)$. Since $r_n$ is decreasing we have, as in the proof of Lemma 3.1, that the sum up to $p$ tends to zero, and it only remains to prove that

(5.9)  $n' \sum_{k=p+1}^{n'} |\rho_k - \rho_{n'}|\, e^{-(b_n - \varepsilon r_n^{1/2})^2/(1+\rho_k)} \to 0$.

Since $|\rho_k - \rho_{n'}| \le 1$, we have, using (3.4) (with $u_n = b_n$), $n' \le n e^{-\sqrt{\log n}}$, and $b_n^2 \le 2 \log n$,

(5.10)
  $n' \sum_{k=p+1}^{n'} \exp\Big\{-\frac{(b_n - \varepsilon r_n^{1/2})^2}{1+\rho_k}\Big\}
   = n' \sum_{k=p+1}^{n'} \exp\Big\{-\frac{b_n^2}{2} \cdot \frac{2(1 - \varepsilon r_n^{1/2}/b_n)^2}{1+\rho_k}\Big\}$
  $\le K \log n\; e^{-\sqrt{\log n}} \sum_{k=p+1}^{n'} \frac{1}{n} \exp\Big\{2\Big(1 - \frac{(1 - \varepsilon r_n^{1/2}/b_n)^2}{1+\rho_k}\Big) \log n\Big\}.$

Here

  $1 - \frac{(1 - \varepsilon r_n^{1/2}/b_n)^2}{1+\rho_k} \le 1 + \rho_k - 1 + 2\varepsilon r_n^{1/2}/b_n = \rho_k + 2\varepsilon r_n^{1/2}/b_n,$

and since $r_n^{1/2} b_n^{-1} \log n \le K'(\log n)^{(1-q)/2}$, we obtain the bound

(5.11)  $K \log n\; e^{-\sqrt{\log n} + K'(\log n)^{(1-q)/2}} \sum_{k=p+1}^{n'} \frac{1}{n}\, e^{2\rho_k \log n}.$

Furthermore, for $k \ge p+1$ we have $\log k \ge a \log n$, and then

  $\rho_k \log n = \frac{\gamma/(\log k)^q - \gamma/(\log n)^q}{1 - \gamma/(\log n)^q}\, \log n
   \le K\, \frac{\log n}{(a \log n)^q}\, \Big\{1 - \Big(\frac{\log k}{\log n}\Big)^q\Big\}
   = K (\log n)^{1-q} \Big\{1 - \Big(\frac{\log(k/n)}{\log n} + 1\Big)^q\Big\}
   \le K (\log n)^{-q} \{-\log(k/n)\},$

so that

  $\sum_{k=p+1}^{n'} \frac{1}{n}\, e^{2\rho_k \log n}
   \le \sum_{k=p+1}^{n'} \frac{1}{n}\, \big(e^{-\log(k/n)}\big)^{K(\log n)^{-q}}
   \le \int_0^1 x^{-K(\log n)^{-q}}\, dx
   = \frac{1}{1 - K(\log n)^{-q}},$

which tends to one as $n \to \infty$, and therefore is bounded. Together with (5.10) and (5.11) this implies that (5.9) is bounded by

  $K \log n\; e^{-\sqrt{\log n} + K'(\log n)^{(1-q)/2}} \to 0$

as $n \to \infty$, which proves (5.8).  □
LEMMA 5.5
For all
P(!M' -b
n
n
I
c
c > 0
> cr
M' = max ~k' with
n l~~n
for
.... 0
l/2
)
n
~l'
~
0
.•• ,
as
n .... 00,
~n
standard normal and with covariances
{p }.
k
PROOF  As above, write $M_n(\rho)$ for the maximum of $n$ standard normal variables with constant correlation $\rho$ between any two. By definition $\rho_k \ge 0$, and hence, by (3.6) of Lemma 3.2,

  $P\{M_n' > b_n + \varepsilon r_n^{1/2}\} \le P\{M_n(0) > b_n + \varepsilon r_n^{1/2}\}.$

Further, by the definitions, $a_n r_n^{1/2} \to \infty$ as $n \to \infty$, and by Theorem 1.11 it follows that

  $P\{M_n' > b_n + \varepsilon r_n^{1/2}\} \to 0.$

To show that

(5.12)  $P\{M_n' \le b_n - \varepsilon r_n^{1/2}\} \to 0$,

we estimate the difference

  $P\{M_n' \le b_n - \varepsilon r_n^{1/2}\} - P\{M_n(\rho_{n'}) \le b_n - \varepsilon r_n^{1/2}\}$

by means of (3.6) in Lemma 3.2. Since $\{\rho_k\}$ is convex and thus decreasing, $\rho_k \le \rho_{n'}$ for $k \ge n'$, so only the terms with $k \le n'$ contribute, and we obtain the bound

  $K n' \sum_{k=1}^{n'} |\rho_k - \rho_{n'}|\, e^{-(b_n - \varepsilon r_n^{1/2})^2/(1+\rho_k)},$

which tends to zero by Lemma 5.4. Moreover, $M_n(\rho_{n'})$ has the same distribution as $(1 - \rho_{n'})^{1/2} M_n(0) + \rho_{n'}^{1/2}\zeta$, where $\zeta$ is standard normal and independent of $M_n(0)$, so

(5.13)  $P\{M_n(\rho_{n'}) \le b_n - \varepsilon r_n^{1/2}\} = P\{(1 - \rho_{n'})^{1/2} M_n(0) + \rho_{n'}^{1/2}\zeta \le b_n - \varepsilon r_n^{1/2}\}.$

Dividing through by $r_n^{1/2}$, the event in (5.13) may be written

  $\{(1 - \rho_{n'})^{1/2} r_n^{-1/2} a_n^{-1} \cdot a_n(M_n(0) - b_n) + b_n r_n^{-1/2}\big((1 - \rho_{n'})^{1/2} - 1\big) + \rho_{n'}^{1/2} r_n^{-1/2}\zeta \le -\varepsilon\}.$

For $n$ large,

  $\rho_{n'} = \frac{\gamma/(\log n')^q - \gamma/(\log n)^q}{1 - \gamma/(\log n)^q}
   \sim \frac{\gamma}{(\log n)^q}\Big\{\Big(\frac{1}{1 - (\log n)^{-1/2}}\Big)^q - 1\Big\}
   \sim \gamma q\, (\log n)^{-1/2 - q}.$

Thus

  $b_n r_n^{-1/2}\,\big|(1 - \rho_{n'})^{1/2} - 1\big| \le \rho_{n'} b_n r_n^{-1/2} \sim q (2\gamma)^{1/2} (\log n)^{-q/2} \to 0$

as $n \to \infty$, and also

  $\rho_{n'}^{1/2} r_n^{-1/2} \sim q^{1/2} (\log n)^{-1/4} \to 0.$

Moreover,

  $(1 - \rho_{n'})^{1/2} r_n^{-1/2} a_n^{-1} \sim (2\gamma)^{-1/2} (\log n)^{-(1-q)/2} \to 0,$

and since $a_n(M_n(0) - b_n)$ converges in distribution by Theorem 1.11, it follows that the expression in (5.13) tends to zero, and thus that (5.12) holds.  □
THEOREM 5.6  Suppose that the stationary, standard normal sequence $\{\xi_n\}$ has covariances $\{r_n\}$ with $r_n = \gamma/(\log n)^q$, $\gamma > 0$, $0 < q < 1$, $n \ge n_0$, for some $n_0$, and with $\{r_n\}_{n=0}^\infty$ convex. Then

  $P\{r_n^{-1/2}(M_n - (1 - r_n)^{1/2} b_n) \le x\} \to \Phi(x)$  as $n \to \infty$.

PROOF  As was noted just before Lemma 5.4, $M_n$ has the same distribution as $(1 - r_n)^{1/2} M_n' + r_n^{1/2}\zeta$, where $\zeta$ is standard normal. It now follows at once from Lemma 5.5 that

  $P\{r_n^{-1/2}(M_n - (1 - r_n)^{1/2} b_n) \le x\} = P\{(1 - r_n)^{1/2} r_n^{-1/2}(M_n' - b_n) + \zeta \le x\} \to \Phi(x)$

as $n \to \infty$.  □
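The conclusion can be illustrated by a small Monte Carlo experiment (ours, not the authors'). For a sequence with constant correlation $r$ between any two terms the representation is exact, $M_n = (1-r)^{1/2} M_n(0) + r^{1/2}\zeta$, so the normalized maximum should be approximately standard normal when $r \log n$ is large; convergence is logarithmically slow, so agreement is only rough at moderate $n$.

```python
import math
import random
from statistics import NormalDist

# Monte Carlo sketch of the normal limit in the simplest (constant
# correlation) case: M_n = (1-r)^{1/2} M_n(0) + r^{1/2} zeta exactly,
# where M_n(0) is the maximum of n i.i.d. standard normals.  The choice
# r = 1/sqrt(log n) mimics r_n = gamma/(log n)^q with gamma = 1, q = 1/2.
nd = NormalDist()
n = 10**6
r = 1.0 / math.sqrt(math.log(n))
a = math.sqrt(2.0 * math.log(n))                                      # a_n
b = a - (math.log(math.log(n)) + math.log(4.0 * math.pi)) / (2.0 * a)  # b_n

random.seed(0)
samples = []
for _ in range(4000):
    m0 = nd.inv_cdf(random.random() ** (1.0 / n))  # max of n i.i.d. N(0,1)
    zeta = random.gauss(0.0, 1.0)                  # common factor
    m = math.sqrt(1.0 - r) * m0 + math.sqrt(r) * zeta
    samples.append((m - math.sqrt(1.0 - r) * b) / math.sqrt(r))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should be roughly 0 and variance roughly 1 (the Phi(x) limit);
# the Gumbel-type term (1-r)^{1/2} r^{-1/2}(M_n(0) - b_n) still
# contributes visibly at n = 10^6, reflecting the slow convergence.
```

The trick `inv_cdf(U ** (1/n))` samples $M_n(0)$ without generating $n$ variates, since the maximum of $n$ i.i.d. uniforms has distribution function $u^n$.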
Of course the hypothesis of Theorem 5.6 is very restrictive. The following more general result was proved by McCormick and Mittal (1976). Their proof follows lines similar to those of the proof of Theorem 5.6 above, but the arguments are much more complicated.
THEOREM 5.7  Suppose that the stationary standard normal sequence $\{\xi_n\}$ has covariances $\{r_n\}$ such that $r_n \to 0$ monotonically and $r_n \log n \to \infty$ monotonically for large $n$. Then

  $P\{r_n^{-1/2}(M_n - (1 - r_n)^{1/2} b_n) \le x\} \to \Phi(x)$  as $n \to \infty$.  □
In the paper by Mittal and Ylvisaker (1975), where the above results were first proved under the extra assumption that $\{r_n\}_{n=0}^\infty$ is convex, it is also shown that the limit distributions in Theorems 5.2 and 5.7 are by no means the only possible ones; they exhibit a further class of limit distributions which occur when the covariance decreases irregularly.
APPENDIX
SOME BASIC CONCEPTS OF POINT PROCESS THEORY
Intuitively, by a point process we generally mean a series of events occurring in time (or space, or both) according to some statistical law. For example the events may be radioactive disintegrations or telephone calls, occurring in time, or the positions of a certain variety of plant in a field (two-dimensional space). The cases of particular interest to us are when the events are the instants of occurrence of exceedances of a level $u$ by a stochastic sequence $\{\xi_n\}$ (Chapter 4), or of the upcrossings of a level $u$ by a continuous parameter process $\{\xi(t)\}$ (Chapter 8).
[Figures: the point process of exceedances of the level $u$ by the sequence $\{\xi_n\}$, and the point process of upcrossings of the level $u$ by the process $\{\xi(t)\}$.]

These point processes occur in one dimension (which we may regard as "time" if we wish). We may simultaneously consider exceedances or upcrossings of more than one level, and obtain a point process in the plane (cf. Chapters 4 and 9).
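As a toy illustration (our own, with an arbitrary level and sequence), the point process of exceedances can be built directly from a simulated sequence: its atoms are the random indices at which the level is exceeded, and $N((a, b])$ counts the atoms in an interval.

```python
import random

# The point process of exceedances of a level u by a sequence {xi_k}:
# its atoms are the (random) indices k with xi_k > u, and N((a, b])
# counts the exceedances with index in (a, b].
random.seed(3)
xi = [random.gauss(0.0, 1.0) for _ in range(1000)]  # illustrative sequence
u = 2.0                                             # illustrative level
atoms = [k for k, x in enumerate(xi) if x > u]

def N(a, b):
    # number of exceedances at indices k with a < k <= b
    return sum(1 for k in atoms if a < k <= b)

assert N(-1, 999) == len(atoms)                  # all atoms counted
assert N(-1, 499) + N(499, 999) == N(-1, 999)    # additivity on disjoint sets
assert all(xi[k] > u for k in atoms)
```

The additivity check previews the formal definition below: $N(\cdot)$ behaves as an integer-valued measure.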
Point process theory may be discussed in a quite abstract setting, leading to a very satisfying general theory, and we refer the interested reader to the books by Kallenberg (1976) and Matthes, Kerstan & Mecke (1978) for this. Here we shall just indicate some of the main concepts regarding point processes on the real line, and in the plane.
If $I$ is any finite interval on the real line, the number of events, $N(I)$ say, of a point process occurring in $I$ must be a random variable.
More generally, for any bounded Borel set $B$, $N(B)$ should be a r.v. Further, the number of events in the union of finitely or countably many disjoint sets is the sum of the numbers in each set, i.e. $N(B) = \sum_1^\infty N(B_i)$ if $B_1, B_2, \dots$ are disjoint (Borel) sets whose union is $B$. That is, $N(\cdot)$ is a measure on the Borel sets. Again, the value of $N(B)$ must be an integer as long as it is finite. Hence the following formal definition naturally suggests itself.
A point process in $R^n$ is a family of non-negative integer (or $+\infty$)-valued r.v.'s $N_\omega(B)$, defined for each Borel set $B$, and such that for each $\omega$, $N_\omega(\cdot)$ is a measure on the Borel sets, being finite valued on bounded sets.

Note that the same definition may be used for more abstract topological spaces by using the Borel sets of that space in lieu of those in $R^n$. Here, as noted, we shall consider mainly just the line and the plane.
If $T$ is a random variable we may consider the trivial point process consisting of just one event, occurring at the value which $T$ takes, i.e. we may define a point process $\delta_T$ by writing $\delta_T(B) = 1$ if $T \in B$ and $\delta_T(B) = 0$ otherwise. Thus $\delta_T$ represents unit mass at the randomly chosen point $T$. More generally, if $T_1, T_2, T_3, \dots$ are random variables taken distinct, we may define a point process $N = \sum_j \delta_{T_j}$, i.e. $N(B) = \sum_j \delta_{T_j}(B)$, provided $N(B)$ is finite a.s. for bounded sets $B$. For this point process, the events occur at the random points $T_1, T_2, \dots$. Going a little further still, we may conveniently include possible multiplicities by writing $N = \sum_j \beta_j \delta_{T_j}$, where the $\beta_j$ are non-negative integer valued r.v.'s and the $T_j$ are taken distinct.

In fact, for a space with sufficient structure (such as the real line or plane) it may be shown that any point process $N$ may be represented in terms of its atoms in this way, i.e. $N = \sum_j \beta_j \delta_{T_j}$, where the $T_j$ are distinct random elements of the space, and the $\beta_j$ are non-negative, integer valued random variables. In the case where the $\beta_j$ are each unity a.s., we say that the point process has no multiple events, or is simple.
If $B_1, \dots, B_k$ are bounded Borel sets, $N(B_1), \dots, N(B_k)$ are random variables and have a joint distribution - termed a finite dimensional distribution of the point process. In fact, the probabilistic properties of interest concerning the point process are specified uniquely by the collection of all such finite dimensional distributions, i.e. for all choices of $k$ and the sets $B_1, \dots, B_k$. Of course, to define a point process starting from finite dimensional distributions, we must choose these distributions in an appropriately consistent manner, so that the r.v.'s $N(B)$ will not only be well defined but will be non-negative, countably additive in $B$, etc. We refer the interested reader to Kallenberg (1976) for details on this.
If $N$ is a point process, the measure $\lambda$ defined on the Borel sets (of the space involved) by

  $\lambda(B) = E(N(B))$

is termed the intensity measure of the point process. Note that, unlike $N(B)$ itself, $\lambda(B)$ may be infinite even when $B$ is bounded (since while a r.v. is finite valued, its mean need not be).
Although we shall not need them here, it is of interest to note that the probabilistic properties of a point process $N$ may also be summarized by various generating functionals. In our judgement the most natural and useful of these is the Laplace transform $L_N(f)$, defined for non-negative measurable functions $f$ by

  $L_N(f) = E\big(\exp\{-\textstyle\sum_j \beta_j f(T_j)\}\big)$

when $N$ is represented as $\sum_j \beta_j \delta_{T_j}$. Such generating functionals have properties and uses analogous to those of characteristic functions, moment generating functions, and Laplace transforms of random variables. In particular, if $f(x) = t\,\chi_B(x)$ (where $\chi_B(x) = 1$ or $0$ according as $x \in B$ or $x \notin B$) we have $L_N(f) = E(e^{-tN(B)})$. This is simply the Laplace transform (or moment generating function evaluated at $-t$) of the r.v. $N(B)$, and it uniquely specifies the distribution of $N(B)$.
Similarly, joint Laplace transforms for the r.v.'s $N(B_1), \dots, N(B_k)$ may be specified by taking $f = \sum_{i=1}^k t_i\, \chi_{B_i}$.
Probably the most useful point process - both in its own right, and also as a "building block" in other types - is the Poisson process. This may be specified by its intensity measure $\lambda(B)$, which may be taken to be any measure which is finite on bounded sets. The point process $N$ is said to be Poisson with this intensity if for each (bounded) $B$, $N(B)$ is a Poisson r.v. with mean $\lambda(B)$, and $N(B_1), \dots, N(B_k)$ are independent for any choice of $k$ and disjoint $B_1, \dots, B_k$. The existence of such a process $N$ is easily shown under very general circumstances, though we do not do so here, and $N$ has the Laplace transform

  $L_N(f) = \exp\{-\int (1 - e^{-f(u)})\, d\lambda(u)\}.$
It is readily checked that the choice $f(x) = t\,\chi_B(x)$ yields the Laplace transform of a Poisson r.v. with mean $\lambda(B)$. Incidentally, this process may be called the "general Poisson process". The usual (stationary) Poisson process on the real line arises when $\lambda(B)$ is a constant multiple of Lebesgue measure $m(B)$, i.e. $\lambda(B) = \tau m(B)$, in which case we say that the Poisson process has intensity $\tau$.
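The formula $E(e^{-tN(B)}) = \exp\{-\tau m(B)(1 - e^{-t})\}$ for the stationary case can be checked numerically; the sketch below (ours, with arbitrary parameter values) compares a Monte Carlo estimate against the closed form.

```python
import math
import random

# Numerical check that for a stationary Poisson process of intensity tau,
# the Laplace transform of N(B) is
#   E(e^{-t N(B)}) = exp{-tau * m(B) * (1 - e^{-t})},
# m denoting Lebesgue measure.  tau, m(B) and t are illustrative values.
random.seed(2)
tau, mB, t, reps = 1.5, 2.0, 0.7, 100000
mu = tau * mB                        # mean of N(B)

def poisson(mu):
    # sample a Poisson variate by inversion (adequate for small mu)
    x, term, u = 0, math.exp(-mu), random.random()
    cum = term
    while u > cum:
        x += 1
        term *= mu / x
        cum += term
    return x

mc = sum(math.exp(-t * poisson(mu)) for _ in range(reps)) / reps
exact = math.exp(-mu * (1.0 - math.exp(-t)))
# mc and exact should agree to within Monte Carlo error (a few 1e-3)
```

The same comparison with $f = \sum_i t_i \chi_{B_i}$ and independent counts would illustrate the joint transform of the preceding paragraph.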
As noted above, the Poisson process may be used as a building block for the construction of other point processes. In particular, a most useful case arises if the intensity measure $\lambda$ is itself allowed to be stochastic. Such a point process is no longer Poisson, but may be profitably thought of as "Poisson with a (stochastically) varying mean rate". We refer to such a process as a doubly stochastic Poisson or, more commonly, a Cox process. Specific use is made of such processes in Chapter 5, where the distribution is explicitly given.
A notion which will be useful to us is that of thinning of a point process - and, in particular, of a Poisson process. Thinning refers to the removal of some of the events of the point process by a (usually) probabilistic mechanism which can be quite complicated. In its simplest form - with which we shall be concerned here - each event is removed or retained independently, with probabilities $1-p$, $p$ say. For example, if $N$ is a Poisson process with intensity measure $\lambda$, and $N^*$ a point process obtained from $N$ by such independent thinning, we have, for a Borel set $B$,

  $P\{N^*(B) = r\} = \sum_{s=r}^\infty P\{N(B) = s\}\, P\{N^*(B) = r \mid N(B) = s\}
   = \sum_{s=r}^\infty e^{-\lambda(B)}\, \frac{\lambda(B)^s}{s!}\, \binom{s}{r} p^r (1-p)^{s-r},$

since, given that $N(B) = s$, $N^*(B)$ is binomial with parameters $(s, p)$. This expression reduces simply to yield

  $P\{N^*(B) = r\} = e^{-p\lambda(B)}\, (p\lambda(B))^r / r!\,,$

so that $N^*(B)$ is a Poisson r.v. with mean $p\lambda(B)$. Similarly it may be seen that $N^*(B_1), \dots, N^*(B_k)$ are independent whenever $B_1, \dots, B_k$ are disjoint, so that $N^*$ is clearly a Poisson process with intensity measure $p\lambda$ - an intuitively appealing result of which use is made e.g. in Chapter 4.
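The thinning result is easy to see in simulation; the sketch below (ours, with illustrative parameter values) retains each point of a stationary Poisson process on $[0, T]$ with probability $p$ and checks that the retained count has Poisson mean and variance $p\tau T$.

```python
import math
import random

# Simulation sketch of independent thinning of a stationary Poisson
# process on [0, T]: each point is retained with probability p, so the
# retained count should again be Poisson, with mean p * tau * T.
random.seed(1)
tau, T, p, reps = 2.0, 10.0, 0.3, 20000

def poisson(mu):
    # sample a Poisson variate by inversion (adequate for moderate mu)
    x, term, u = 0, math.exp(-mu), random.random()
    cum = term
    while u > cum:
        x += 1
        term *= mu / x
        cum += term
    return x

counts = []
for _ in range(reps):
    n_points = poisson(tau * T)      # N([0, T]) ~ Poisson(tau * T)
    kept = sum(1 for _ in range(n_points) if random.random() < p)
    counts.append(kept)

mean = sum(counts) / reps                            # close to p*tau*T = 6
var = sum((c - mean) ** 2 for c in counts) / reps    # Poisson: var close to mean
```

Equality of the empirical mean and variance is the quick fingerprint of a Poisson count; a full check would compare the whole empirical distribution with $e^{-6} 6^r/r!$.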
There are numerous structural properties in point process theory - such as existence, uniqueness, simplicity, infinite divisibility and so on - which we do not go into here. It will, however, be of interest to mention convergence of a sequence of point processes, and to state a useful theorem in this connection.

Suppose that $\{N_n\}$ is a sequence of point processes and that $N$ is a point process. Then we may say that $N_n$ converges in distribution to $N$ (written $N_n \stackrel{d}{\to} N$) if the sequence of vector r.v.'s $(N_n(B_1), \dots, N_n(B_k))$ converges in distribution to $(N(B_1), \dots, N(B_k))$ for each choice of $k$, and all bounded Borel sets $B_i$ such that $N(\partial B_i) = 0$ a.s., $i = 1, \dots, k$ (writing $\partial B$ for the boundary of the set $B$).
A point process may be viewed as a random element of a certain metric space (whose points are measures), and convergence in distribution of $N_n$ to $N$ becomes weak convergence of the distributions of $N_n$ to that of $N$. However, we do not need this general viewpoint here, since the above definition is equivalent to it.
The main result which we shall need is the following simple sufficient condition for convergence in distribution. This is a special case of a theorem of Kallenberg (1976) and we state it here without proof.

THEOREM A.1  (i) Let $N_n$, $n = 1, 2, \dots$, and $N$ be point processes on the real line, $N$ being simple. Suppose that

(a)  $E(N_n((a, b])) \to E(N((a, b]))$  for all $-\infty < a < b < \infty$, and

(b)  $P\{N_n(B) = 0\} \to P\{N(B) = 0\}$  for all $B$ of the form $B = \bigcup_{i=1}^k (a_i, b_i]$.

Then $N_n \stackrel{d}{\to} N$.

(ii) The same is true for point processes in the plane, if the semiclosed intervals $(a, b]$, $(a_i, b_i]$ are replaced by "semiclosed rectangles" $(a, b] \times (\alpha, \beta]$, $(a_i, b_i] \times (\alpha_i, \beta_i]$.  □
The remarkable feature of this result is that convergence of the probability of occurrence of no events in certain given sets is essentially sufficient to guarantee convergence of quantities like $P\{N_n(B) = r\}$, and corresponding joint probabilities. The simple conditions (a) and (b) are often readily verified.
Our next result concerns convergence of a sequence of point processes in the plane to a Poisson process in the plane, and shows how this property is preserved under suitable transformations of the points of each member of the sequence, and of the limit. Obviously this result could be stated in much greater generality, but the form given here is sufficient for our applications.
THEOREM A.2  Let $N_n$, $n = 1, 2, \dots$, and $N$ be point processes in the plane, and let $\tau(x)$ be a strictly decreasing continuous real function. Define new point processes $\{N_n'\}$, $N'$ such that if $N_n$ ($N$) has an atom at $(s, t)$, then $N_n'$ ($N'$) has an atom at $(s, \tau^{-1}(t))$, where $\tau^{-1}$ is the inverse function of $\tau$.

(i) If $N_n \stackrel{d}{\to} N$, then $N_n' \stackrel{d}{\to} N'$.

(ii) If $N$ is Poisson, with intensity measure $\lambda$, then $N'$ is Poisson with intensity $\lambda T^{-1}$, where $T$ denotes the transformation of the plane given by $T(s, t) = (s, \tau^{-1}(t))$. If the intensity $\lambda$ is Lebesgue measure on the plane, $\lambda T^{-1}$ is the product of linear Lebesgue measure and the measure defined by the monotone function $\tau$.
PROOF  (i) It is readily checked that for any rectangle $B = (a, b] \times (\alpha, \beta]$,

  $N'(B) = N((a, b] \times [\tau(\beta), \tau(\alpha))) = N(T^{-1}(B)),$

and hence (by uniqueness of extensions of measures) $N'(B) = N(T^{-1}(B))$ for all Borel sets $B$. This holds also with $N_n'$, $N_n$ replacing $N'$, $N$. Suppose now that $B$ is a Borel set such that $N'(\partial B) = 0$ a.s. (again $\partial B$ denotes the boundary of $B$). Now it may be seen (using the continuity of $\tau$) that $\partial T^{-1}B \subset T^{-1}\partial B$, so that

  $N(\partial T^{-1}B) \le N(T^{-1}\partial B) = N'(\partial B) = 0$  a.s.

Since $N_n \stackrel{d}{\to} N$ we thus have $N_n(T^{-1}B) \stackrel{d}{\to} N(T^{-1}B)$, or $N_n'(B) \stackrel{d}{\to} N'(B)$. This result extends simply to show that $(N_n'(B_1), \dots, N_n'(B_k)) \stackrel{d}{\to} (N'(B_1), \dots, N'(B_k))$ whenever $N'(\partial B_i) = 0$ a.s. for each $i = 1, \dots, k$, and hence $N_n' \stackrel{d}{\to} N'$ as required.

(ii) If $N$ is Poisson with intensity $\lambda$, and $B$ is any Borel set in the plane,

  $P\{N'(B) = r\} = P\{N(T^{-1}B) = r\} = e^{-\lambda(T^{-1}B)}\, \frac{(\lambda(T^{-1}B))^r}{r!}$

for each $r = 0, 1, 2, \dots$, so that $N'(B)$ is Poisson with mean $\lambda T^{-1}(B)$. Independence of $N'(B_1), \dots, N'(B_k)$ for disjoint $B_1, \dots, B_k$ follows from the fact that $T^{-1}B_1, \dots, T^{-1}B_k$ are also disjoint, and hence $N(T^{-1}B_1), \dots, N(T^{-1}B_k)$ are independent.

The last statement of (ii) follows simply since, if $\lambda$ is Lebesgue measure,

  $\lambda T^{-1}((a, b] \times (\alpha, \beta]) = \lambda((a, b] \times [\tau(\beta), \tau(\alpha))) = (b - a)(\tau(\alpha) - \tau(\beta)),$

noting also that $\tau$ is continuous.  □
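Part (ii) is easy to probe numerically. The sketch below (ours, with the hypothetical choice $\tau(x) = e^{-x}$) maps the atoms $(s, t)$ of a unit-intensity Poisson process on the unit square to $(s, -\log t)$ and checks that the expected number of transformed atoms with second coordinate in $(\alpha, \beta]$ is $\tau(\alpha) - \tau(\beta)$, as the last display requires.

```python
import math
import random

# Sketch of Theorem A.2(ii) for tau(x) = exp(-x), strictly decreasing:
# atoms (s, t) of a unit-intensity Poisson process on (0,1] x (0,1] are
# mapped to (s, tau^{-1}(t)) = (s, -log t).  The expected number of
# transformed atoms in (0,1] x (alpha, beta] is tau(alpha) - tau(beta).
random.seed(4)
alpha, beta, reps = 0.5, 1.5, 20000

def poisson(mu):
    # sample a Poisson variate by inversion (adequate for small mu)
    x, term, u = 0, math.exp(-mu), random.random()
    cum = term
    while u > cum:
        x += 1
        term *= mu / x
        cum += term
    return x

total = 0
for _ in range(reps):
    for _ in range(poisson(1.0)):        # atoms in the unit square
        t = 1.0 - random.random()        # t-coordinate, uniform on (0, 1]
        if alpha < -math.log(t) <= beta:  # transformed atom in the band
            total += 1

observed = total / reps
expected = math.exp(-alpha) - math.exp(-beta)  # tau(alpha) - tau(beta)
# observed should be close to expected (about 0.38 here)
```

Since only the $t$-coordinate matters for the band count, the $s$-coordinates are not generated; a fuller experiment would also confirm the Poisson distribution and independence over disjoint rectangles.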
As a final note, it is apparent that the concept of a point process may be generalized to include measures $N$ for which $N(B)$ is not necessarily integer valued. This generalization leads to a natural setting for point processes within the framework of the theory of Random Measures - a viewpoint developed in detail by Kallenberg (1976).
REFERENCES

Adler, R.J. (1978). Weak convergence results for extremal processes generated by dependent random variables. To appear.

Berman, S.M. (1964). Limit theorems for the maximum term in stationary sequences. Ann. Math. Statist. 35, 502-516.

Berman, S.M. (1971a). Asymptotic independence of the numbers of high and low level crossings of stationary Gaussian processes. Ann. Math. Statist. 42, 927-947.

Chibisov, D.M. (1964). On limit distributions of order statistics. Theor. Probability Appl. 9, 142-148.

Cramér, H. (1965). A limit theorem for the maximum values of certain stochastic processes. Teor. Ver. i Prim. 10, 137-139.

Cramér, H. & Leadbetter, M.R. (1967). Stationary and Related Stochastic Processes. New York: John Wiley and Sons.

Davis, R.A. (1978). Maxima and minima of stationary sequences. To appear.

Fisher, R.A. & Tippett, L.H.C. (1928). Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proc. Cambridge Phil. Soc. 24, 180-190.

Gnedenko, B.V. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. Ann. Math. 44, 423-453.

Gumbel, E.J. (1958). Statistics of Extremes. New York: Columbia Univ. Press.

Haan, L. de (1970). On regular variation and its application to the weak convergence of sample extremes. Amsterdam Math. Centre Tract 32.

Haan, L. de (1976). Sample extremes: an elementary introduction. Statistica Neerlandica 30, 161-172.

Kallenberg, O. (1976). Random Measures. Berlin: Akademie-Verlag.

Leadbetter, M.R. (1974). On extreme values in stationary sequences. Z. Wahr. verw. Geb. 28, 289-303.

Leadbetter, M.R., Lindgren, G. & Rootzén, H. (1979). Conditions for the convergence in distribution of maxima of stationary normal processes. Stochastic Processes Appl. 8.

Loynes, R.M. (1965). Extreme values in uniformly mixing stationary stochastic processes. Ann. Math. Statist. 36, 993-999.

McCormick & Mittal (1976). On weak convergence of the maximum. Techn. Rep. No. 81, Stanford University.

Mittal, Y. & Ylvisaker, D. (1975). Limit distributions for maxima of stationary Gaussian processes. Stochastic Processes Appl. 3, 1-18.

Pickands, J. (1971). The two-dimensional Poisson process and extremal processes. J. Appl. Prob. 8, 745-756.

Resnick, S. (1975). Weak convergence of extremal processes. Ann. Prob. 3, 951-960.

Rosenblatt, M. (1956). A central limit theorem and a strong mixing condition. Proc. Nat. Acad. Sci. USA 42, 43-47.

Slepian, D. (1962). The one-sided barrier problem for Gaussian noise. Bell Syst. Tech. J. 41, 463-501.

Smirnov, N.V. (1952). The limit distributions for the terms of a variational series. Amer. Math. Soc. Translation 67.

Smirnov, N.V. (1967). Some remarks on limit laws for order statistics. Theor. Probability Appl. 12, 336-337.

Watson, G.S. (1954). Extreme values in samples from m-dependent stationary stochastic processes. Ann. Math. Statist. 25, 798-800.

Wu, Chuan-Yi (1966). The types of limit distributions for some terms of variational series. Scientia Sinica 15, 749-762.