Leadbetter, M.R., "Extreme Value Theory for Continuous Parameter Stationary Processes."

EXTREME VALUE THEORY
FOR CONTINUOUS PARAMETER STATIONARY PROCESSES
by
M.R. Leadbetter
University of North Carolina
Abstract
In this paper the central distributional results of classical extreme
value theory are obtained, under appropriate dependence restrictions, for
maxima of continuous parameter stochastic processes.
In particular we prove
the basic result (here called Gnedenko's Theorem) concerning the existence
of just three types of non-degenerate limiting distributions in such cases,
and give necessary and sufficient conditions for each to apply.
The
development relies, in part, on the corresponding known theory for
stationary sequences.
The general theory given does not require finiteness of the number of
upcrossings of any level x.  However when the number per unit time is a.s.
finite and has a finite mean μ(x), it is found that the classical criteria
for domains of attraction apply when μ(x) is used in lieu of the tail of the
marginal distribution function.  The theory is specialized to this case and
applied to give the general known results for stationary normal processes
(for which μ(x) may or may not be finite).
A general Poisson convergence theorem is given for high level
upcrossings, together with its implications for the asymptotic distributions
of the r-th largest local maxima.
Key Words and Phrases:
extreme value theory, extreme values, stationary
processes, stochastic extremes
This work was supported by the Office of Naval Research under Contract
N00014-75-C-0809.
1.  Introduction.
In this paper we shall be concerned primarily with asymptotic
distributional properties of the maximum

    M(T) = sup{ξ(t): 0 ≤ t ≤ T}

of a continuous parameter stationary process {ξ(t): t ≥ 0}.  A great deal is
known about such properties in the important special case when the process
is normal (cf. [2], [16]).
Our purpose here is to delineate the types of
limiting behavior which are possible when the process is not necessarily
normal, obtaining, in particular, versions of the central results of
classical extreme value theory which apply in this context.
The classical theory is concerned with properties of the maximum
M_n = max(ξ_1, ξ_2, ..., ξ_n) of n i.i.d. random variables as n becomes large.
Central to the theory is the result which asserts that if M_n has a
non-degenerate limiting distribution (under linear normalizations), i.e. if
P{a_n(M_n − b_n) ≤ x} → G(x) for sequences {a_n > 0}, {b_n}, then G must be one of
only three general types:

    Type I:    G(x) = exp(−e^{−x}),    −∞ < x < ∞;

    Type II:   G(x) = exp(−x^{−α}),    x > 0, some α > 0;

    Type III:  G(x) = exp(−(−x)^α),    x < 0, some α > 0

(linear transformations of the variable x being permitted).
This result,
which arose from work of Frechet [5] and Fisher and Tippett [4], was later
given a complete form by Gnedenko [6] and is here referred to as "Gnedenko's
Theorem."
Gnedenko also obtained necessary and sufficient conditions for the
domains of attraction for each of the three limiting types.
These and other
versions obtained subsequently (cf. [7]) concern the rate of decay of the
tail 1 − F(x) of the distribution F of each ξ_n as x increases.
A further result--trivially proved in the classical case--is that for
any sequence {u_n} and τ > 0, P{M_n ≤ u_n} → e^{−τ} if and only if
1 − F(u_n) ~ τ/n.  This is sometimes useful in calculation of the constants
a_n, b_n in Gnedenko's Theorem (when u_n = x/a_n + b_n).
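This equivalence is easy to check exactly in the i.i.d. case: if 1 − F(u_n) = τ/n exactly, then P{M_n ≤ u_n} = F(u_n)^n = (1 − τ/n)^n → e^{−τ}.  A minimal numerical sketch (our own, not from the paper):

```python
import math

# If 1 - F(u_n) = tau/n exactly, then for i.i.d. variables
# P{M_n <= u_n} = F(u_n)^n = (1 - tau/n)^n -> e^{-tau}.
def prob_max_below(n, tau):
    return (1.0 - tau / n) ** n

tau = 2.0
for n in (10, 100, 10_000):
    print(n, prob_max_below(n, tau), math.exp(-tau))
```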
In more recent years there has been considerable interest in extending
these and other results of the classical theory to apply to stationary
sequences which exhibit a "decay of dependence" which is not too slow.
In particular, the early work of Watson [17] concerning convergence of
P{M_n ≤ u_n} applied under m-dependence, Loynes [14] proved Gnedenko's Theorem
under strong mixing assumptions, and Berman [1] obtained detailed results for
normal sequences under a mild condition involving correlation decay.  More
recently we have obtained a theory (cf. [9]) involving weak "distributional
mixing" conditions, which unifies these results and provides a rather
satisfying extension of the classical distributional theory to include
stationary sequences.
It is not too surprising that such an extension is possible for
stationary sequences, at least under suitable dependence restrictions.
What
may seem surprising at first sight is that a corresponding theory is
possible for continuous parameter stationary processes.
However this
becomes intuitively clear by recognizing that the maximum up to time n, say,
is just the maximum of n random variables--the "submaxima" in the fixed
intervals (i−1, i), 1 ≤ i ≤ n.  Our procedure will be, in fact, to use the
Our procedure will be, in fact, to use the
existing theory for stationary sequences by means of (a slightly modified
version of) this precise approach.
The sequence results which will be
needed are stated in Section 2.
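The submaxima device is elementary to state in code: splitting a discretized path into fixed blocks and taking the maximum of the block maxima recovers the overall maximum.  A small sketch (ours, with an arbitrary sampled path):

```python
import random

random.seed(0)

# A discretized path sampled on n blocks of h points each.
h, n = 50, 20
path = [random.gauss(0.0, 1.0) for _ in range(n * h)]

# "Submaxima" over the fixed blocks, as in (3.1) below; the overall maximum
# is then the maximum of the block maxima, as in (3.2).
submax = [max(path[(i - 1) * h : i * h]) for i in range(1, n + 1)]

assert max(submax) == max(path)
print(max(path))
```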
In Section 3 we will obtain Gnedenko's Theorem for continuous parameter
stationary processes, showing under appropriate conditions that if

    P{a_T(M(T) − b_T) ≤ x} → G(x) as T → ∞

for some constants a_T > 0, b_T, then G must be one of the extreme value forms.

In Section 4 we obtain a related result--again extending a classical
theorem--to give necessary and sufficient conditions for the convergence of
P{M(T) ≤ u_T} for families not necessarily of the form u_T = x/a_T + b_T
implicit in Gnedenko's Theorem.
As a corollary of this result we obtain necessary and sufficient
criteria for the domains of attraction occurring in Gnedenko's Theorem.
In
the classical i.i.d. sequence case, the criteria for domains of attraction
involve the rate of decay of the marginal distribution tail 1 − F(x) as x
increases.  For the present case the very same criteria apply, provided
1 − F(x) is replaced by another function ψ(x).  For processes whose mean
number μ(x) of upcrossings of any level x is finite, the function ψ(x) is
precisely μ(x), a readily calculated quantity.
The general theory will not require that the mean number of upcrossings
of a level per unit time be finite, and accordingly will include the class
of stationary Gaussian processes with covariances of the form
r(τ) = 1 − C|τ|^α + o(|τ|^α) as τ → 0 for 0 < α < 2.  In Section 5 we consider
In Section 5 we consider
such processes, as well as (possibly non-Gaussian) cases for which the mean
number of upcrossings per unit time is finite.
Finally in Section 6 we note
6
the general Poisson limit for the point processes of upcrossings of
increasingly high levels and its implications regarding limit theorems for
the distribution of the r
2.
th
largest local maximum of t;(t) in
a
:$
t
:$
T.
Two results for stationary sequences.
As noted, our development of extremal theory for stationary processes
will rely in part on the existing sequence theory.
Specifically we shall
require the following definitions and results (which may be found e.g. in
[10]).
Let {ξ_n} be a stationary sequence and write F_{i_1...i_n}(x_1, ..., x_n) for the
joint distribution function of ξ_{i_1}, ..., ξ_{i_n}.  For brevity write also
F_{i_1...i_n}(u) to denote F_{i_1...i_n}(u, u, ..., u) = P{ξ_{i_1} ≤ u, ..., ξ_{i_n} ≤ u}.

If {u_n} is a sequence of real constants, we say that the sequence {ξ_n}
satisfies the (dependence) condition D(u_n) if for any integers
1 ≤ i_1 < ... < i_p < j_1 < ... < j_{p'} ≤ n with j_1 − i_p ≥ ℓ,

(2.1)    |F_{i_1...i_p j_1...j_{p'}}(u_n) − F_{i_1...i_p}(u_n) F_{j_1...j_{p'}}(u_n)| ≤ α_{n,ℓ},

where

(2.2)    α_{n,ℓ_n} → 0 as n → ∞ for some sequence ℓ_n = o(n).

Note that α_{n,ℓ} can (and will) be taken to be decreasing in ℓ for each n
by simply replacing it by the smallest value it can take to make (2.1) hold
(i.e. the maximum value of the left-hand side of (2.1) over all allowable
sets of integers i_1, ..., i_p, j_1, ..., j_{p'}).  Note also that (2.2) may then be shown
equivalent to the condition (cf. [12] for proof)

(2.3)    α_{n,[λn]} → 0 as n → ∞ for each λ > 0.
The condition D(u_n) indicates a degree of "approximate independence" of
members of the sequence separated by increasing distances.
However this
condition, which we refer to as "distributional mixing," is clearly
potentially far less restrictive than, for example, "strong mixing."
In the
case of normal sequences, it is in fact satisfied when the covariance
sequence {r_n} tends to zero even just fast enough so that r_n log n → 0.
The following result is basic to the sequence theory and will be
required in later sections.
Lemma 2.1.  Let {ξ_n} be a stationary sequence satisfying D(u_n) for a given
sequence {u_n} of constants, and write M_n = max(ξ_1, ξ_2, ..., ξ_n).  Then for any
integer k ≥ 1 (writing [ ] to denote integer part),

    P{M_n ≤ u_n} − P^k{M_{[n/k]} ≤ u_n} → 0 as n → ∞.
This lemma indicates a degree of independence between the [n/k] maxima
when the first n integers are divided into k groups.
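In the i.i.d. special case this approximation can be checked exactly: P{M_n ≤ u} = F(u)^n, while the k-th power of the submaximum probability is F(u)^{k[n/k]}, and the two differ only through the at most k−1 omitted factors.  A sketch (our illustration, using the standard normal d.f.):

```python
import math

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def iid_gap(n, k, u):
    # |P{M_n <= u} - P^k{M_[n/k] <= u}| for i.i.d. N(0,1) variables,
    # computed exactly: F(u)^n versus (F(u)^(n//k))^k.
    F = std_normal_cdf(u)
    return abs(F ** n - (F ** (n // k)) ** k)

u = 3.5
for n in (101, 1_001, 10_001):
    print(n, iid_gap(n, 5, u))
```

The gap is of order (k−1)(1−F(u)) and so tends to zero for the level sequences of interest.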
We shall also need the
sequence form of Gnedenko's Theorem, which is given (e.g. in [10]) as follows:
Theorem 2.2.  Let {ξ_n} be a stationary sequence such that M_n = max(ξ_1, ξ_2, ..., ξ_n)
satisfies P{a_n(M_n − b_n) ≤ x} → G(x) as n → ∞ for some non-degenerate d.f. G
and constants {a_n > 0}, {b_n}.  Suppose that D(u_n) holds for all u_n of the form
x/a_n + b_n, −∞ < x < ∞.  Then G is one of the three extreme value
distributional types.
The other classical result quoted--concerning convergence of P{M_n ≤ u_n}
for arbitrary sequences {u_n}--is also important and holds under appropriate
conditions for stationary sequences {ξ_n}.
This will not be discussed here
since the corresponding continuous parameter result will be independently
derived.
3.  Gnedenko's Theorem for stationary processes.
As indicated above, it will be convenient to relate the maximum M(T) of
the continuous parameter stationary process ξ(t) to the maximum of n terms
of a sequence of "submaxima."  Specifically, if h > 0 we write

(3.1)    ζ_i = sup{ξ(t): (i−1)h ≤ t ≤ ih},    i = 1, 2, 3, ...,

so that for n = 1, 2, 3, ...,

(3.2)    M(nh) = max(ζ_1, ζ_2, ..., ζ_n).
The following preliminary form of Gnedenko's Theorem (involving
conditions on the ζ-sequence) is immediate.

Theorem 3.1.  Suppose that for some families of constants {a_T > 0}, {b_T} we
have

(3.3)    P{a_T(M(T) − b_T) ≤ x} → G(x) as T → ∞

for some non-degenerate G, and that the {ζ_i} sequence defined by (3.1)
satisfies D(u_n) whenever u_n = x/a_{nh} + b_{nh} for some fixed h > 0 and all real
x.  Then G is one of the three extreme value types.
Proof.  Since (3.3) holds in particular as T → ∞ through values nh and the
ζ_n-sequence is clearly stationary, the result follows by replacing ξ_n by ζ_n
in Theorem 2.2 and using (3.2).  □
Corollary 3.2.  The result holds in particular if the D(u_n) conditions are
replaced by the assumption that {ξ(t)} is strongly mixing.  For then the
sequence {ζ_n} is strongly mixing and satisfies D(u_n).  □
We now introduce the continuous analog of the condition D(u_n), stated
in terms of the finite dimensional distribution functions F_{t_1...t_n} of ξ(t)
(again writing F_{t_1...t_n}(u) for F_{t_1...t_n}(u, ..., u)).

The condition D_c(u_T) will be said to hold for the process ξ(t) and the
family of constants {u_T: T > 0}, with respect to a family {q_T → 0}, if for
any points s_1 < s_2 < ... < s_p < t_1 < ... < t_{p'} belonging to {kq_T: 0 ≤ kq_T ≤ T} and
satisfying t_1 − s_p ≥ γ, we have

(3.4)    |F_{s_1...s_p t_1...t_{p'}}(u_T) − F_{s_1...s_p}(u_T) F_{t_1...t_{p'}}(u_T)| ≤ α_{T,γ},

where α_{T,γ_T} → 0 for some sequence γ_T = o(T) or, equivalently, where

(3.5)    α_{T,λT} → 0 as T → ∞ for each λ > 0.
The D(u_n) condition for {ζ_n} required in Theorem 3.1 will now be
related to D_c(u_T) by approximating crossings and extremes of the continuous
parameter process by corresponding quantities for a sampled version.  To
achieve the approximation we require two conditions involving the maximum of
ξ(t) in fixed and in very small time intervals.
These conditions are given
here in a form which applies very generally--readily verifiable sufficient
conditions for important cases are given in Section 5.
Specifically we suppose that there is a function ψ(u) such that, for
h > 0,

(3.6)    lim sup_{u→∞} P{M(h) > u}/(h ψ(u)) ≤ 1,

and that for each a > 0 there is a family of constants q = q_a(u) → 0 as
u → ∞ such that

(3.7)    lim sup_{u→∞} P{ξ(0) < u, ξ(q) < u, M(q) > u}/(q ψ(u)) → 0 as a → 0.
Note that Equation (3.6) specifies an asymptotic upper bound for the
tail distribution of the maximum in a fixed interval, whereas (3.7) limits
the probability that the maximum in a short interval exceeds u, but the
process itself is less than u at both endpoints.
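Conditions (3.6) and (3.7) are exactly what is needed to make a grid approximation of the maximum work.  For a smooth path the effect is easy to see numerically; here is a sketch (our own illustration, using a hypothetical random cosine path, not a process from the paper) comparing the maximum on a very fine grid with the maximum on a coarse grid of spacing q:

```python
import math
import random

random.seed(1)

# Hypothetical smooth stationary Gaussian path: xi(t) = Z1 cos t + Z2 sin t.
z1, z2 = random.gauss(0, 1), random.gauss(0, 1)

def xi(t):
    return z1 * math.cos(t) + z2 * math.sin(t)

fine = max(xi(i * 1e-4) for i in range(10_001))  # stand-in for M(h) on [0, 1]
for q, m in ((0.5, 2), (0.1, 10), (0.01, 100)):
    coarse = max(xi(i * q) for i in range(m + 1))
    # Sampling never overshoots the maximum, and improves as q -> 0.
    assert coarse <= fine + 1e-12
    print(q, fine - coarse)
```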
The following result now
enables us to approximate the maximum in an interval of length h by the
maximum at discrete points in that interval.
Lemma 3.3.  (i) If (3.6) holds, then P{M(q) > u} = o(ψ(u)) as u → ∞ for any
q = q(u) → 0; also P{ξ(0) > u} = o(ψ(u)).

(ii) If (3.6) and (3.7) both hold, then there are constants λ_a such that

(3.8)    lim sup_{u→∞} [P{ξ(jq) ≤ u, jq ∈ I} − P{M(I) ≤ u}]/ψ(u) ≤ λ_a,

where q = q_a(u) is as in (3.7), I is an interval of length h, and λ_a → 0 as
a → 0, the convergence being uniform in all intervals I of this fixed
length h.
Proof.  If (3.6) holds and q → 0 as u → ∞, then for any fixed h > 0, q is
eventually smaller than h and P{M(q) > u} ≤ P{M(h) > u}, so that

    lim sup_{u→∞} P{M(q) > u}/ψ(u) ≤ lim sup_{u→∞} P{M(h) > u}/ψ(u) ≤ h by (3.6),

from which it follows that P{M(q) > u}/ψ(u) → 0, as stated.  The remaining
statement of (i) also follows since P{ξ(0) > u} ≤ P{M(q) > u}.
Suppose now that (3.6) and (3.7) both hold and that I is an interval of
fixed length h.  The interval I consists of no more than h/q subintervals of
the form ((j−1)q, jq), together with (possibly) a shorter interval at each
end.  The difference in probabilities in (3.8) is clearly non-negative and
(using stationarity) dominated by

    Λ_{a,u} = (h/q) P{ξ(0) < u, ξ(q) < u, M(q) > u} + 2 P{M(q) > u}.

The desired result (ii) now follows from (3.7) and (i) by writing
λ_a = lim sup_{u→∞} Λ_{a,u}/ψ(u).  □
It is now relatively straightforward to relate D(u_n) for the sequence
{ζ_n} to the condition D_c(u_T) for the process ξ(t), as the following lemma
shows.  (In this we use the (potentially ambiguous) notation D(u_{nh}) to mean
D(v_n) with v_n = u_{nh}.)
Lemma 3.4.  Suppose that (3.6) holds with some function ψ(u), and for each
a > 0 let {q_a(u)} be a family of constants with q_a(u) > 0, q_a(u) → 0 as
u → ∞, such that (3.7) holds.  If D_c(u_T) is satisfied with respect to the
family q_T = q_a(u_T) for each a > 0, and Tψ(u_T) is bounded, then the sequence
{ζ_n} defined by (3.1) satisfies D(u_{nh}) for h > 0.
Proof.  For brevity let q denote one of the families {q_a(·)}.  For a given
n, let 1 ≤ i_1 < i_2 < ... < i_p < j_1 < ... < j_{p'} ≤ n with j_1 − i_p ≥ ℓ, and write
I_r = [(i_r − 1)h, i_r h], J_s = [(j_s − 1)h, j_s h].  Write

    A_q = ∩_{r=1}^{p} {ξ(jq) ≤ u_{nh}, jq ∈ I_r},     A = ∩_{r=1}^{p} {ζ_{i_r} ≤ u_{nh}},

    B_q = ∩_{s=1}^{p'} {ξ(jq) ≤ u_{nh}, jq ∈ J_s},    B = ∩_{s=1}^{p'} {ζ_{j_s} ≤ u_{nh}}.
It follows in an obvious way from Lemma 3.3 that

    0 ≤ lim sup_{n→∞} {P(A_q ∩ B_q) − P(A ∩ B)} ≤ lim sup_{n→∞} (p + p') ψ(u_{nh}) λ_a ≤ K λ_a

for some constant K (since nh ψ(u_{nh}) is bounded) and where λ_a → 0 as a → 0.
Similarly

    lim sup_{n→∞} |P(A_q) − P(A)| ≤ K λ_a,    lim sup_{n→∞} |P(B_q) − P(B)| ≤ K λ_a.
Now

    |P(A∩B) − P(A)P(B)| ≤ |P(A∩B) − P(A_q∩B_q)| + |P(A_q∩B_q) − P(A_q)P(B_q)|
                              + P(A_q)|P(B_q) − P(B)| + P(B)|P(A_q) − P(A)|,

so that

(3.9)    |P(A∩B) − P(A)P(B)| ≤ R_{n,a} + |P(A_q∩B_q) − P(A_q)P(B_q)|

where lim sup_{n→∞} R_{n,a} ≤ 3Kλ_a.  Since the largest jq in any I_r is at most
i_p h, and the smallest in any J_s is at least (j_1 − 1)h, their difference is at
least (ℓ−1)h.  Also the largest jq in J_{p'} does not exceed j_{p'} h ≤ nh, so that
from (3.4) and (3.9)

(3.10)   |P(A∩B) − P(A)P(B)| ≤ R_{n,a} + α^{(a)}_{nh,(ℓ−1)h}

(in which the dependence of α_{T,γ} on a is explicitly indicated).  Write now

    α*_{n,ℓ} = inf_{a>0} {R_{n,a} + α^{(a)}_{nh,(ℓ−1)h}}.

Since the left-hand side of (3.9) does not depend on a we have

    |P(A∩B) − P(A)P(B)| ≤ α*_{n,ℓ}
which is precisely the desired conclusion of the lemma, provided we can show
that lim_{n→∞} α*_{n,[λn]} = 0 for any λ > 0 (cf. (2.3)).  But for any a > 0

    α*_{n,[λn]} ≤ R_{n,a} + α^{(a)}_{nh,([λn]−1)h} ≤ R_{n,a} + α^{(a)}_{nh,½λnh}

when n is sufficiently large (since α^{(a)}_{T,γ} decreases in γ), and hence by (3.5)

    lim sup_{n→∞} α*_{n,[λn]} ≤ 3Kλ_a,

and since a is arbitrary and λ_a → 0 as a → 0, it follows that α*_{n,[λn]} → 0 as
desired.  □
The general continuous version of Gnedenko's Theorem is now readily
restated in terms of conditions on ξ(t) itself.

Theorem 3.5.  With the above notation for the stationary process ξ(t)
satisfying (3.6) for some function ψ, suppose that, for some families of
constants {a_T > 0}, {b_T},

    P{a_T(M(T) − b_T) ≤ x} → G(x) as T → ∞

for a non-degenerate G.  Suppose that Tψ(u_T) is bounded and D_c(u_T) holds for
u_T = x/a_T + b_T for each real x, with respect to families of constants
{q_a(u)} satisfying (3.7).  Then G is one of the three extreme value
distributional types.

Proof.  This follows at once from Theorem 3.1 and Lemma 3.4.  □
As noted the conditions of this theorem are of a general kind, and more
specific sufficient conditions will be given in the applications in
Section 5.
4.  Convergence of P{M(T) ≤ u_T}.
Gnedenko's Theorem involved consideration of P{a_T(M(T) − b_T) ≤ x}, which
may be rewritten as P{M(T) ≤ u_T} with u_T = a_T^{−1} x + b_T.  We turn now to the
question of convergence of P{M(T) ≤ u_T} as T → ∞ for families u_T which are
not necessarily linear functions of a parameter x.  (This is analogous to
the convergence of P{M_n ≤ u_n} for sequences, of course.)  These results are
of interest in their own right, but also since they make it possible to
simply modify the classical criteria for domains of attraction to the three
limiting distributions, to apply in this continuous parameter context.
The discussion will be carried out in terms of so-called "ε-upcrossings"
of a level by the stationary process--a concept originally introduced by
Pickands [15] to deal with extremes of processes whose sample functions were
so irregular that the "ordinary" upcrossings could be infinite in number in
a finite interval.
(Here we make essential use of this concept whether the
process is irregular or not.)
Briefly, if ε > 0, ξ(t) is said to have an ε-upcrossing of u at a point
t_0 if ξ(t) ≤ u for all t in the interval (t_0 − ε, t_0), but ξ(t) > u for some
point t ∈ (t_0, t_0 + η) for each η > 0.  Since the interval (t_0 − ε, t_0) contains
no upcrossings, the number of ε-upcrossings in a unit interval does not
exceed 1/ε.
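Since consecutive ε-upcrossings must be at least ε apart, the 1/ε bound is immediate.  A discretized counting sketch (ours, not from the paper — on a grid, an ε-upcrossing is taken to be a grid upcrossing preceded by a full window of length ε with the path at or below the level):

```python
import math
import random

random.seed(2)

def eps_upcrossings(x, u, dt, eps):
    # Grid version of Pickands' definition: an upcrossing of u at step i that
    # was preceded by a window of length eps with the path at or below u.
    m = int(round(eps / dt))
    count = 0
    for i in range(1, len(x)):
        if x[i - 1] <= u < x[i] and i >= m and all(v <= u for v in x[i - m:i]):
            count += 1
    return count

dt, T, eps, u = 0.01, 10.0, 0.5, 0.0
z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
n_pts = int(round(T / dt)) + 1
x = [z1 * math.cos(5 * i * dt) + z2 * math.sin(5 * i * dt) for i in range(n_pts)]
n_eps = eps_upcrossings(x, u, dt, eps)
assert n_eps <= T / eps + 1  # at most 1/eps epsilon-upcrossings per unit time
print(n_eps)
```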
We write NE:,U (t), NE:,U (I) for the number of E:-upcrossings in
the intervals (O,t),I respectively and
EN E:,u (t)
= t~
E:,U .
~E:
,u
=
EN E:, U (1) so that
The following small result indicates some connections
between E:-upcrossings and maxima.
Lemma 4.1.  (i) For h > 0, h μ_{h,u} ≤ P{M(h) > u}.

(ii) h μ_{h,u} ≥ P{M(2h) > u} − P{M(h) > u}.

(iii) If (3.6) holds (i.e. lim sup_{u→∞} P{M(h) > u}/(h ψ(u)) ≤ 1 for some ψ(u),
h > 0) then lim sup_{u→∞} μ_{h,u}/ψ(u) ≤ 1.

(iv) If

(4.1)    P{M(h) > u} ~ h ψ(u) as u → ∞ for 0 < h ≤ h_0,

then μ_{ε,u} ~ ψ(u) for all (sufficiently small) ε > 0.

Proof.
Since clearly N_{h,u}(h) is either zero or one, we have

    h μ_{h,u} = P{N_{h,u}(h) = 1} ≤ P{M(h) > u},

so that (i) follows at once.  To prove (ii) we note that

    P{M(2h) > u} ≤ P{M(h) > u} + P{N_{h,u}(h, 2h) ≥ 1} = P{M(h) > u} + h μ_{h,u}.

If (3.6) holds, (iii) follows at once from (i).

Finally, if (4.1) holds, the conclusion of (iv) follows from (iii) and
the inequality (ii), which gives

    lim inf_{u→∞} μ_{ε,u}/ψ(u) ≥ lim_{u→∞} [P{M(2ε) > u} − P{M(ε) > u}]/(ε ψ(u)) = 1.  □
Our main purpose is to demonstrate the equivalence of the relations
P{M(h) > u_T} ~ hτ/T and P{M(T) ≤ u_T} → e^{−τ} under appropriate conditions.  The
following condition will be referred to as D'_c(u_T), and is analogous to D'
conditions required for similar purposes for sequences (cf. [10]).

If {u_T} is a given family of constants, the condition D'_c(u_T) will be
said to hold (for the process {ξ(t)} satisfying (4.1)) if

(4.2)    lim sup_{T→∞} T|μ_{εT,u_T} − ψ(u_T)| → 0 as ε → 0.
We now state and prove the first part of the desired equivalence.

Theorem 4.2.  Suppose that (4.1) holds for some function ψ, and let {u_T} be a
family of constants such that D_c(u_T) holds (with respect to a family {q}
satisfying (3.7)), and that D'_c(u_T) holds.  Then

(4.3)    Tψ(u_T) → τ

implies

(4.4)    P{M(T) ≤ u_T} → e^{−τ}.
Proof.  Let 0 < h < h_0 (cf. (4.1)), and let n, k be integers, writing
n' = [n/k].  By Lemma 3.4, the sequence of "submaxima" {ζ_n} defined by
(3.1) satisfies D(u_{nh}) and hence, from Lemma 2.1,

(4.5)    P{M(nh) ≤ u_{nh}} − P^k{M(n'h) ≤ u_{nh}} → 0 as n → ∞.

Now writing u_{nh} = u, Lemma 4.1(i) gives

    n'h μ_{n'h,u} ≤ P{M(n'h) > u} ≤ n' P{M(h) > u} ~ n'h ψ(u)

so that
(4.6)    1 − n'h ψ(u)(1 + o(1)) ≤ P{M(n'h) ≤ u} ≤ 1 − n'h μ_{n'h,u}.

But n'h ψ(u) = nh ψ(u_{nh})/k → τ/k by (4.3).  Further, D'_c(u_T) shows that
n'h μ_{n'h,u} = τ/k + o(1/k), so that letting n → ∞ in (4.6) we have

    1 − τ/k ≤ lim inf_{n→∞} P{M(n'h) ≤ u} ≤ lim sup_{n→∞} P{M(n'h) ≤ u} ≤ 1 − τ/k + o(1/k).

By taking k-th powers and using (4.5) we see that

    (1 − τ/k)^k ≤ lim inf_{n→∞} P{M(nh) ≤ u} ≤ lim sup_{n→∞} P{M(nh) ≤ u} ≤ (1 − τ/k + o(1/k))^k

and hence, letting k → ∞, that

(4.7)    P{M(nh) ≤ u_{nh}} → e^{−τ}.

Now if n is chosen so that nh ≤ T < (n+1)h, and if u_{nh} ≤ u_T,

    P{M(nh) ≤ u_{nh}} ≤ P{M(nh) ≤ u_T} ≤ P{M(nh) ≤ u_{nh}} + n P{u_{nh} ≤ M(h) < u_T},

where the last term does not exceed

    n P{u_{nh} ≤ M(h) < u_T} = n[P{M(h) > u_{nh}} − P{M(h) > u_T}]
                             = n[(τ/n)(1 + o(1)) − (hτ/T)(1 + o(1))]

by (4.1) and (4.3).  Since nh ~ T this clearly tends to zero.  A
corresponding calculation where u_{nh} > u_T thus shows from (4.7) that

    P{M(nh) ≤ u_T} → e^{−τ} with n = [T/h].

Finally,

    0 ≤ P{M(nh) ≤ u_T} − P{M(T) ≤ u_T} = P{M(nh) ≤ u_T < M(T)}.

But the event {M(nh) ≤ u_T < M(T)} implies that the maximum in the interval
[nh, (n+1)h] exceeds u_T, which has probability P{M(h) > u_T}, giving

    0 ≤ P{M(nh) ≤ u_T} − P{M(T) ≤ u_T} ≤ P{M(h) > u_T},

from which (4.4) follows since P{M(h) > u_T} ~ hτ/T → 0.  □
In our treatment of the converse result it will be convenient to use
the innocuous further assumption

(4.8)    ψ(u_{(n+1)h})/ψ(u_{nh}) → 1 as n → ∞

(for some given h > 0).  This assumption is possibly dispensable but
certainly commonly holds (e.g. when u_T ~ (2 log T)^{1/2} for stationary
normal processes) and, of course, always holds if the function u_T is
replaced by the step function u_{[T/h]h}, constant between consecutive points nh.
Lemma 4.3.  Suppose that (4.1) holds for some ψ and let {u_T} be a family of
constants (satisfying (4.8) for some h > 0) such that D_c(u_T) holds with
respect to a family {q} satisfying (3.7) and such that Tψ(u_T) is bounded.
Let

(4.9)    P{M(T) ≤ u_T} → e^{−τ}

for some τ > 0.  Then for k = 1, 2, ...

(4.10)   P{M(T/k) ≤ u_T} → e^{−τ/k}.

Proof.  As in the previous result the assumptions surrounding D_c(u_T) imply
that

(4.11)   P{M(nh) ≤ u} − P^k{M(n'h) ≤ u} → 0 as n → ∞

where n' = [n/k] and u = u_{nh}.  Now if n = [T/h] it is readily checked that
[n/k]h ≤ T/k < ([n/k]+1)h, so that

    P{M(T/k) ≤ u} ≤ P{M(n'h) ≤ u} → e^{−τ/k}

by (4.11) (which holds here by the same argument as in Theorem 4.2) and
(4.9) with T = nh.  But also

    P{M(T/k) ≤ u} ≥ P{M((n'+1)h) ≤ u} = P{M(n'h) ≤ u} − P{M(n'h) ≤ u < M((n'+1)h)}
                  ≥ P{M(n'h) ≤ u} − P{M(h) > u}.

The first term on the right tends to e^{−τ/k}, and the second is asymptotically
equivalent to h ψ(u_{nh}) = n^{−1}[nh ψ(u_{nh})] → 0 since Tψ(u_T) is bounded.  Hence
we have
(4.12)   P{M(T/k) ≤ u_{nh}} → e^{−τ/k}.

But it follows simply that u_{nh} may be replaced by u_T in (4.12) to give
the desired result, since if, for example, u_{nh} ≥ u_T we have

(4.13)   0 ≤ P{M(T/k) ≤ u_{nh}} − P{M(T/k) ≤ u_T}
            ≤ n' P{u_T < M(h) ≤ u_{nh}} + P{u_T < M((n'h, T/k)) ≤ u_{nh}},

since if the maximum in (0, T/k) lies between u_T and u_{nh} this must also
occur in one of the first n' intervals ((i−1)h, ih) or in (n'h, T/k).  The
first term of (4.13) is readily seen to be

    (nh/k)[ψ(u_T)(1 + o(1)) − ψ(u_{nh})(1 + o(1))],

which is easily seen to tend to zero by (4.8) since nh ψ(u_{nh}) is bounded.
Boundedness of nh ψ(u_{nh}) also implies that the second term of (4.13) tends
to zero.  A corresponding calculation applies for u_{nh} ≤ u_T, so that
P{M(T/k) ≤ u_T} − P{M(T/k) ≤ u_{nh}} → 0, giving the desired result.  □
Lemma 4.4.  Under the same assumptions as in Lemma 4.3 we have

    ε^{−1} lim sup_{T→∞} |P{M(εT) ≤ u_T} − e^{−ετ}| → 0 as ε → 0.

Proof.  Choose the integer k depending on ε > 0 so that 1/(k+1) ≤ ε < 1/k.
Then

    P{M(T/(k+1)) ≤ u_T} ≤ P{M(εT) ≤ u_T} ≤ P{M(T/k) ≤ u_T}.

By subtracting e^{−ετ} from each term, and using the facts from Lemma 4.3 that
P{M(T/k) ≤ u_T} → e^{−τ/k}, P{M(T/(k+1)) ≤ u_T} → e^{−τ/(k+1)}, we see that

    lim sup_{T→∞} |P{M(εT) ≤ u_T} − e^{−ετ}| ≤ max[|e^{−τ/(k+1)} − e^{−ετ}|, |e^{−τ/k} − e^{−ετ}|].

But

    ε^{−1}|e^{−τ/(k+1)} − e^{−ετ}| ≤ ε^{−1}|1 − e^{−τ(ε − 1/(k+1))}|,

which tends to zero as ε → 0.  Similarly ε^{−1}|e^{−τ/k} − e^{−ετ}| → 0, so that the
desired result follows.  □
The next lemma gives a conclusion which is interesting in itself and
from which the main result will follow immediately.

Lemma 4.5.  Again suppose that the conditions of Lemma 4.3 hold.  Then

(4.14)   lim sup_{T→∞} |T μ_{εT,u_T} − τ| → 0 as ε → 0.

Proof.  By Lemma 4.1(ii),

    εT μ_{εT,u_T} ≥ P{M(2εT) > u_T} − P{M(εT) > u_T} = P{M(εT) ≤ u_T} − P{M(2εT) ≤ u_T},

so that

    T μ_{εT,u_T} − τ ≥ ε^{−1}[e^{−ετ} − e^{−2ετ} − ετ] − ε^{−1}|P{M(εT) ≤ u_T} − e^{−ετ}|
                       − ε^{−1}|P{M(2εT) ≤ u_T} − e^{−2ετ}|,

giving

    lim inf_{T→∞} [T μ_{εT,u_T} − τ] ≥ a_ε,

where a_ε → 0 as ε → 0, since the latter two terms tend to zero by Lemma 4.4
and ε^{−1}[e^{−ετ} − e^{−2ετ} − ετ] → 0 as ε → 0.  Similarly, from the inequality
εT μ_{εT,u_T} ≤ P{M(εT) > u_T} (Lemma 4.1(i)) it follows that

    lim sup_{T→∞} [T μ_{εT,u_T} − τ] ≤ b_ε,

where b_ε → 0 as ε → 0.  Since if lim inf β_n ≥ a and lim sup β_n ≤ b it is
easily shown that lim sup |β_n| ≤ max(|a|, |b|), we have

    lim sup_{T→∞} |T μ_{εT,u_T} − τ| ≤ max(|a_ε|, |b_ε|),

which tends to zero as ε → 0, giving the conclusion of the lemma.  □
It will be noted that (4.14) is very similar to the condition D'_c(u_T),
and follows as a conclusion from the assumption P{M(T) ≤ u_T} → e^{−τ} under
appropriate conditions.  For this we do not require that D'_c(u_T) hold.  If
we now do assume that D'_c(u_T) holds, we immediately obtain the main converse
result.
Theorem 4.6.  Suppose that (4.1) holds for some ψ and let {u_T} be a family
of constants (satisfying (4.8) for some h > 0) such that D_c(u_T) holds with
respect to a family {q} satisfying (3.7).  Suppose also that D'_c(u_T) holds.
Then (4.4) implies (4.3).

Proof.  Since T μ_{εT,u_T} ≤ T/(εT) = 1/ε, it is implicit in the assumption
D'_c(u_T) that Tψ(u_T) is bounded.  Thus the conditions of the previous lemma
are satisfied if (4.4) holds, and hence

    lim sup_{T→∞} |T μ_{εT,u_T} − τ| → 0 as ε → 0.

But D'_c(u_T) requires that

    lim sup_{T→∞} |T μ_{εT,u_T} − Tψ(u_T)| → 0 as ε → 0,

from which it follows simply that Tψ(u_T) → τ, as required.  □
Theorems 4.2 and 4.6 may be related to the corresponding results for
i.i.d. sequences in the following way.

Theorem 4.7.  Let {u_T} be a family of constants such that the conditions
of Theorem 4.6 hold, let 0 < ρ < 1, and let h be chosen as in (4.1)
and (4.8).  Then

(4.15)   P{M(T) ≤ u_T} → ρ as T → ∞

if and only if there is a sequence {ξ̂_n} of i.i.d. random variables with
common d.f. F satisfying 1 − F(u) ~ h ψ(u) as u → ∞ and such that
M̂_n = max(ξ̂_1, ξ̂_2, ..., ξ̂_n) satisfies

(4.16)   P{M̂_n ≤ u_{nh}} → ρ.

Proof.
If there is an i.i.d. sequence {ξ̂_n} with common d.f. F such that
(4.16) holds, then (as noted in the introduction) we have 1 − F(u_{nh}) ~ τ/n,
where ρ = e^{−τ}.  Since 1 − F(u) ~ h ψ(u) we have ψ(u_{nh}) ~ τ/(nh) and hence
(by (4.8)) ψ(u_T) ~ τ/T.  Hence Theorem 4.2 gives P{M(T) ≤ u_T} → e^{−τ}, so that
(4.15) holds.

Conversely, if (4.15) holds it follows from Theorem 4.6 that
Tψ(u_T) → τ and hence nh ψ(u_{nh}) → τ.  Let {ξ̂_n} be i.i.d. random variables
with the same d.f. F, say, as M(h), so that by (4.1)

    1 − F(u) = P{M(h) > u} ~ h ψ(u),

from which it follows that M̂_n = max(ξ̂_1, ξ̂_2, ..., ξ̂_n) satisfies
P{M̂_n ≤ u_{nh}} → e^{−τ} = ρ, as required.  □
These results show how the function ψ may be used in the classical
criteria for domains of attraction to determine the asymptotic distribution
of M(T).
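In practice, once ψ is known, one chooses u_T by solving Tψ(u_T) = τ, exactly as n(1 − F(u_n)) = τ is solved in the i.i.d. case.  A sketch with a hypothetical decreasing ψ(u) = e^{−u²/2} (chosen only for illustration, not a formula from the paper), using bisection:

```python
import math

def solve_level(T, tau, psi, lo=0.0, hi=50.0):
    # Bisection for u_T with T * psi(u_T) = tau (psi decreasing on [lo, hi]).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if T * psi(mid) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

psi = lambda u: math.exp(-0.5 * u * u)  # hypothetical intensity function
u_T = solve_level(1e6, 1.0, psi)
print(u_T, math.sqrt(2 * math.log(1e6)))  # closed form agrees for this psi
```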
We write D(G) for the domain of attraction to the (extreme value)
d.f. G, i.e. the set of all d.f.'s F such that F^n(x/a_n + b_n) → G(x) for
some sequences {a_n > 0}, {b_n}.
Theorem 4.8.  Suppose that the conditions of Theorem 4.6 hold for all
families u_T = x/a_T + b_T, −∞ < x < ∞, where {a_T > 0}, {b_T} are given
constants, and

(4.17)   P{a_T(M(T) − b_T) ≤ x} → G(x).

Then

(4.18)   ψ(u) ~ 1 − F(u) as u → ∞ for some F ∈ D(G).

Conversely if (4.1) holds and ψ(u) satisfies (4.18), there are families of
constants {a_T > 0}, {b_T} such that (4.17) holds, provided that the conditions
of Theorem 4.6 are satisfied for each u_T = x/a_T + b_T, −∞ < x < ∞.

Proof.
If (4.17) holds, together with the conditions stated, Theorem 4.7
shows that

    P{a_{nh}(M̂_n − b_{nh}) ≤ x} → G(x),

where M̂_n is the maximum of n i.i.d. random variables with a common d.f. F_0,
say, and where h ψ(u) ~ 1 − F_0(u) as u → ∞, and F_0 ∈ D(G).  We may choose
a d.f. F such that 1 − F(u) = (1/h)(1 − F_0(u)) when u is large, and the
classical domain of attraction criteria show that F ∈ D(G).  But
ψ(u) ~ 1 − F(u) as desired, showing (4.18).

Conversely if (4.18) holds and h > 0 we may choose F_0 ∈ D(G) such that
h ψ(u) ~ 1 − F_0(u), and hence define an i.i.d. sequence {ξ̂_n} with common
d.f. F_0, M̂_n = max(ξ̂_1, ξ̂_2, ..., ξ̂_n), such that

    P{a'_n(M̂_n − b'_n) ≤ x} → G(x)

for some constants a'_n > 0, b'_n.  Define a_T = a'_n, b_T = b'_n for nh ≤ T < (n+1)h,
n = 0, 1, 2, ....  Then (4.16) holds with ρ = G(x).  If the conditions of
Theorem 4.6 hold for each u_T = x/a_T + b_T, then (4.15) holds, which yields
(4.17).  □
5.  Particular classes of processes.

In this section we first show how the conditions required for the
previous theory may be simplified when the mean number μ(u) of upcrossings
of each level u by ξ(t) per unit time is finite, and then briefly indicate
applications to stationary normal processes (whether or not μ(u) < ∞).
Throughout, N_u(I) (N_u(t)) will denote the number of upcrossings of the level
u in the interval I (or in (0,t) respectively).
First we write for q > 0

(5.1)    I_q(u) = P{ξ(0) < u < ξ(q)}/q.

Clearly I_q(u) ≤ P{N_u(q) ≥ 1}/q ≤ E N_u(q)/q = μ.  Further, it is readily
shown (by a standard dissection of the unit interval into subintervals of
length q) that

(5.2)    μ(u) = lim_{q→0} I_q(u),

which, for now, we assume finite for each u.  It is apparent from (5.2) that
μ(u) may, at least in principle, be readily calculated from the bivariate
distributions of the process.
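As a concrete check (our illustration, not the paper's), take the stationary Gaussian "cosine process" ξ(t) = Z1 cos ωt + Z2 sin ωt with r(τ) = cos ωτ, whose upcrossing intensity is μ(u) = (ω/2π)e^{−u²/2} by Rice's formula.  The pair (ξ(0), ξ(q)) is bivariate normal and directly samplable, so I_q(u) in (5.1) can be estimated and compared with μ(u):

```python
import math
import random

random.seed(3)

# Stationary Gaussian "cosine process" xi(t) = Z1 cos(wt) + Z2 sin(wt),
# r(tau) = cos(w tau); Rice's formula gives mu(u) = (w / 2 pi) exp(-u^2 / 2).
omega, u, q, n = 2 * math.pi, 1.0, 0.01, 200_000

hits = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xi0 = z1                                                   # xi(0)
    xiq = z1 * math.cos(omega * q) + z2 * math.sin(omega * q)  # xi(q)
    if xi0 < u < xiq:
        hits += 1

I_q = hits / (n * q)                                 # estimate of (5.1)
rice = omega / (2 * math.pi) * math.exp(-u * u / 2)  # mu(u) from Rice's formula
print(I_q, rice)
```

With these (arbitrary) parameter choices the Monte Carlo estimate of I_q(u) falls within a few percent of μ(u).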
It may also happen (as for many normal processes) that I_q(u) ~ μ(u) as
u → ∞ when q depends on u, q = q(u) → 0.  For greater flexibility we shall
use the following variant of such a property.  Specifically we shall assume,
when needed, that for each a > 0 there is a family {q_a(u) → 0 as u → ∞}
such that (with q_a = q_a(u), μ = μ(u))

(5.3)    lim inf_{u→∞} I_{q_a}(u)/μ ≥ ν_a,

where ν_a → 1 as a → 0.  As indicated below, for many normal processes we
may take q_a(u) = a/u, and more generally as a P{ξ(0) > u}/μ(u).
We shall assume as needed that

(5.4)    P{ξ(0) > u} = o(μ(u)) as u → ∞,

which holds under general conditions.  For example, it is readily verified
if for some q = q(u) → 0 as u → ∞,

(5.5)    lim sup_{u→∞} P{ξ(0) > u, ξ(q) > u}/P{ξ(0) > u} < 1,

since (5.5) implies that lim inf_{u→∞} q I_q(u)/P{ξ(0) > u} > 0, from which it
follows that P{ξ(0) > u}/I_q(u) → 0, and hence (5.4) holds since
I_q(u) ≤ μ(u).
We may now recast the conditions (3.6) and (3.7) in terms of the
function μ(u).

Lemma 5.1.  (i) Suppose μ(u) < ∞ for each u and that (5.4) (or the sufficient
condition (5.5)) holds.  Then (3.6) holds with ψ(u) = μ(u).

(ii) If (5.3) holds (for some family {q_a(u)}) then (3.7) holds with
ψ(u) = μ(u).
Proof.  Since clearly

    P{M(h) > u} ≤ P{N_u(h) ≥ 1} + P{ξ(0) > u} ≤ μh + P{ξ(0) > u},

(3.6) follows at once from (5.4), which proves (i).

Now if (5.3) holds, then with q = q_a(u), μ = μ(u),

    P{ξ(0) < u, ξ(q) < u, M(q) > u} = P{ξ(0) < u, M(q) > u} − P{ξ(0) < u < ξ(q)}
                                    ≤ P{N_u(q) ≥ 1} − q I_q(u)
                                    ≤ μq − μq ν_a (1 + o(1)),

so that

    lim sup_{u→∞} P{ξ(0) < u, ξ(q) < u, M(q) > u}/(qμ) ≤ 1 − ν_a,

which tends to zero as a → 0, giving (3.7).  □
In view of this lemma, Gnedenko's Theorem now applies to processes of
this kind using the more readily verifiable conditions (5.3) and (5.4), as
follows.
Theorem 5.2.  Theorem 3.5 holds for a stationary process ξ(t) with
ψ(u) = μ(u) < ∞ for each u if the conditions (3.6) and (3.7) are replaced
by (5.4) and (5.3).  □
Finally, the condition D'_c(u_T) may, in certain circumstances, be
replaced by a sufficient condition involving the second moment of N_u(1) when
this is finite.  This condition is not necessarily simpler to verify, but
the second moment involved may usually be obtained in terms of (integrals
containing) the joint densities of the process and its derivative at two
general points t_1, t_2.
Lemma 5.3.  Suppose that for the stationary process ξ(t), E N_u^2(1) < ∞, and
that for a given family {u} = {u_T},

(5.6)    ε^{−1} lim sup_{T→∞} E N_u(εT)(N_u(εT) − 1) → 0 as ε → 0.

Then ξ(t) satisfies D'_c(u_T), with ψ(u) = μ(u) = E N_u(1).
Proof.  Clearly, writing μ = μ(u_T),

    T|μ_{εT,u_T} − μ| = ε^{−1} E(N_u(εT) − N_{εT,u}(εT)).

Now if N_u(εT) > 1, N_u(εT) − N_{εT,u}(εT) ≤ N_u(εT)(N_u(εT) − 1).  Also
N_u(εT) − N_{εT,u}(εT) is zero if N_u(εT) = 0 and is zero or 1 if N_u(εT) = 1,
the latter case requiring that N_u(−εT, 0) ≥ 1 also.  Hence we have

    0 ≤ E{N_u(εT) − N_{εT,u}(εT)} ≤ E N_u(εT)(N_u(εT) − 1) + P{N_u(2εT) > 1}
                                  ≤ E N_u(εT)(N_u(εT) − 1) + E N_u(2εT)(N_u(2εT) − 1),

so that D'_c(u_T) follows by applying (5.6) twice (once with 2ε replacing ε).  □
For stationary normal processes, finiteness of EN_u²(1) requires a little more than existence of the second spectral moment used to ensure finiteness of μ (cf. [3]). We turn now to the consideration of stationary normal processes, but will not restrict attention to those for which even μ = EN_u(1) is finite. Specifically we assume that ξ(t) is a (zero mean) stationary normal process with covariance function

(5.7)    r(t) = 1 − C|t|^α + o(|t|^α)  as t → 0 ,

for some α, 0 < α ≤ 2. (The case α = 2 gives μ < ∞.) There is a considerable literature dealing with extremal properties of such processes, and of slightly more general cases (which could be included here) in which the term |t|^α is multiplied by a slowly varying function as t → 0 (cf. [2], [16]). Of course a number of the same arguments (which in some cases are rather intricate) used in these papers are required to verify our general conditions here. We will not attempt to reproduce these arguments but rather simply indicate the basic considerations used and where they may be found. However it will be convenient to summarize these results as a theorem even though formal proofs are not given.
However it will be convenient to summarize these results as a
theorem even though formal proofs are not given.
Theorem 5.4. Let ξ(t) be a zero mean stationary normal process with covariance function r(t) satisfying (5.7). Then

(i) (3.6), and in fact (4.1), hold with ψ(u) = C^{1/α} H_α u^{2/α} φ(u)/u, in which φ is the standard normal density, C is as in (5.7), and H_α is a constant depending only on α.

(ii) (3.7) holds with q_a(u) = a u^{−2/α}.

(iii) D_c(u_T) holds with respect to a family {q} if Tψ(u_T) is bounded and

(5.8)    Σ_{λT ≤ kq ≤ T} |r(kq)| exp(−u²/(1 + |r(kq)|)) → 0  as T → ∞

for each λ > 0. This holds, in particular, if Tψ(u_T) is bounded (with ψ defined as in (i)) and r(t) log t → 0 as t → ∞.

(iv) If r(t) log t → 0 and Tψ(u_T) → τ > 0, then D′_c(u_T) holds and P{M(T) ≤ u_T} → e^{−τ}.

(v) If r(t) log t → 0, M(T) has the limiting distribution given by

P{a_T(M(T) − b_T) ≤ x} → e^{−e^{−x}}

where

a_T = (2 log T)^{1/2} ,
b_T = (2 log T)^{1/2} + (2 log T)^{−1/2} {(1/α − 1/2) log log T + log(2^{1/α−1} π^{−1/2} C^{1/α} H_α)} .
Indications and sources of proof.

(i) A derivation of (4.1) (from which (3.6) follows) appears in several developments of the normal theory (e.g. Theorem 2.1 of [16]). In the case α = 2, (3.6) is incidentally simply obtained from "Rice's formula" μ = [(−r″(0))^{1/2}/2π] e^{−u²/2}.

(ii) This may be shown, for example, along the lines of Lemma 2.4 of [16], although a more direct derivation is obtainable from the normal theory given by Lindgren and Rootzén in [13].
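Rice's formula quoted in (i) can be checked by simulation. The sketch below is my illustration, not part of the paper: it builds a smooth stationary Gaussian process from a three-term random Fourier sum (the frequencies and component variances are arbitrary choices) and compares a Monte Carlo estimate of the mean upcrossing count on [0,1] with the Rice value μ(u) = [(−r″(0))^{1/2}/2π] e^{−u²/2}.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete-spectrum stationary Gaussian process:
#   xi(t) = sum_j a_j cos(l_j t) + b_j sin(l_j t),  a_j, b_j ~ N(0, s_j),
# which has r(t) = sum_j s_j cos(l_j t), so r(0) = 1 and -r''(0) = sum_j s_j l_j**2.
lam = np.array([0.5, 1.0, 1.7])   # illustrative frequencies l_j
s = np.array([0.5, 0.3, 0.2])     # component variances s_j (summing to 1)
u = 1.0

lam2 = np.sum(s * lam ** 2)                                # -r''(0)
rice = np.sqrt(lam2) / (2 * np.pi) * np.exp(-u ** 2 / 2)   # Rice's formula for mu(u)

# Monte Carlo estimate of mu(u) = E N_u(1): mean upcrossing count on [0,1].
reps = 4000
t = np.linspace(0.0, 1.0, 201)
a = rng.normal(size=(reps, 3)) * np.sqrt(s)
b = rng.normal(size=(reps, 3)) * np.sqrt(s)
xi = a @ np.cos(np.outer(lam, t)) + b @ np.sin(np.outer(lam, t))
# An upcrossing of u occurs where the path passes from below u to at least u.
estimate = np.sum((xi[:, :-1] < u) & (xi[:, 1:] >= u)) / reps
```

With these parameters the two values agree to within Monte Carlo error (a few per cent).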
(iii) The proof of this involves a standard calculation using "Slepian's Lemma" (cf. Lemma 3.5 of [15]), from which it follows that for two sets of standard normal random variables ξ_1 … ξ_n, η_1 … η_n with covariance matrices [Λ_ij], [V_ij] satisfying |Λ_ij| ≥ |V_ij|,

|P{∩_{j=1}^n (ξ_j ≤ u)} − P{∩_{j=1}^n (η_j ≤ u)}| ≤ K Σ_{i<j} |Λ_ij − V_ij| (1 − Λ_ij²)^{−1/2} exp(−u²/(1 + |Λ_ij|)) .

In this application (using the notation of (3.4)), the ξ_i are identified with the r.v.'s ξ(s_1) … ξ(s_p), ξ(t_1) … ξ(t_{p′}) and the η_i with p+p′ standard normal r.v.'s having the same correlations except that cov(ξ(s_i), ξ(t_j)) is replaced by zero for 1 ≤ i ≤ p, 1 ≤ j ≤ p′. The fact that boundedness of Tψ(u_T) together with r(t) log t → 0 implies (5.8) follows by standard calculations (cf. [1] or Lemma 3.1 of [13]).
(iv) If r(t) log t → 0 and Tψ(u_T) → τ > 0, then D′_c(u_T) may be obtained from arguments leading to Theorem 3.1 of [15], though it seems likely that a shorter route via our Lemma 3.3 may be possible. It then follows from Theorem 4.2 that P{M(T) ≤ u_T} → e^{−τ}. Of course any proof (of which there are several) that r(t) log t → 0 and Tψ(u_T) → τ implies P{M(T) ≤ u_T} → e^{−τ} must also imply that D′_c(u_T) holds, by virtue of our Lemma 4.5. That is, D′_c(u_T) may be regarded as a necessary condition for (4.3) to imply (4.4).

(v) This follows at once from the (relatively) straightforward verification of the fact that Tψ(u_T) → τ = e^{−x} when u_T = x/a_T + b_T, using the above results. □
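The verification mentioned in (v) is easy to see numerically. The sketch below is an illustration of mine, not part of the original text: it takes α = 1, for which the Pickands constant is H_1 = 1, and C = 1 as an arbitrary choice, and checks that Tψ(x/a_T + b_T) approaches e^{−x} as T grows, with a_T, b_T as in Theorem 5.4(v).

```python
import math

alpha, C, H = 1.0, 1.0, 1.0   # H_1 = 1 is the known Pickands constant; C = 1 is arbitrary
x = 0.5                       # an arbitrary fixed argument

def psi(u):
    # psi(u) = C**(1/alpha) * H_alpha * u**(2/alpha) * phi(u) / u, phi the N(0,1) density
    phi = math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    return C ** (1 / alpha) * H * u ** (2 / alpha) * phi / u

def u_T(T):
    # Normalizing constants of Theorem 5.4(v), evaluated at level x
    aT = math.sqrt(2 * math.log(T))
    bT = aT + ((1 / alpha - 0.5) * math.log(math.log(T))
               + math.log(2 ** (1 / alpha - 1) * math.pi ** -0.5
                          * C ** (1 / alpha) * H)) / aT
    return x / aT + bT

vals = [T * psi(u_T(T)) for T in (1e4, 1e6, 1e8)]
target = math.exp(-x)
```

The successive values of Tψ(u_T) approach e^{−x} (slowly, as is typical of extreme value limits for normal processes).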
6. Poisson and related properties.

In this section we shall just briefly indicate the Poisson properties associated with high level upcrossings. We confine the discussion to the case where the number N_u(I) of upcrossings in a bounded interval I has a finite mean, writing again μ = μ(u) = EN_u(1). Cases where this is not so are similarly dealt with in terms of ε-upcrossings.

Our objective is to show, under D_c and D′_c conditions, that the point process of upcrossings of a high level takes on a Poisson character, as is well known in the case when the stationary process ξ(t) is normal. Since the upcrossings of increasingly high levels will tend to become rare, a normalization is required. To that end we consider a time period T and a level u_T, both increasing in such a way that Tμ → τ (μ = μ(u_T)), and define a normalized point process of upcrossings by

N_T(I) = N_{u_T}(TI) ,    (N_T(t) = N_{u_T}(tT)) ,

for each interval (or more general Borel set) I, so that, in particular,

(6.1)    E N_T{(0,1]} = Tμ → τ .

This shows that the "intensity" (i.e. mean number of events per unit time) of the (normalized) upcrossing point process converges to τ.
Our task is to show that the upcrossing point process actually converges (weakly) to a Poisson process with mean τ. The derivation of this result is based on the following two extensions of Theorem 4.2, which are proved by arguments similar to those used in obtaining Theorem 4.2.

Theorem 6.1. Under the conditions of Theorem 4.2, if 0 < θ ≤ 1 and μT → τ, then

(6.2)    P{M(θT) ≤ u_T} → e^{−θτ}  as T → ∞ . □
Theorem 6.2. If I_1, I_2, …, I_k are disjoint subintervals of [0,1] and I*_j = TI_j = {t: t/T ∈ I_j}, then under the conditions of Theorem 4.2, if μT → τ,

(6.3)    P{∩_{j=1}^k (M(I*_j) ≤ u_T)} − ∏_{j=1}^k P{M(I*_j) ≤ u_T} → 0 ,

so that by Theorem 6.1

(6.4)    P{∩_{j=1}^k (M(I*_j) ≤ u_T)} → e^{−τ Σ_1^k θ_j} ,

where θ_j is the length of I_j, 1 ≤ j ≤ k.
It is now a relatively straightforward matter to show that the point processes N_T converge (in the full sense of weak convergence) to a Poisson process N with intensity τ.

Theorem 6.3. Under the conditions of Theorem 4.2, if Tμ → τ where μ = μ(u_T), then the family N_T of (normalized) point processes of upcrossings of u_T on the unit interval converges in distribution to a Poisson process N with intensity τ on the unit interval as T → ∞.

Proof. By Theorem 4.7 of [8] it is sufficient to prove that

(i) EN_T{(a,b]} → EN{(a,b]} = τ(b−a) as T → ∞ for all a, b, 0 ≤ a ≤ b ≤ 1;

(ii) P{N_T(B)=0} → P{N(B)=0} as T → ∞ for all sets B of the form ∪_1^n B_i, where n is any integer and the B_i are disjoint intervals (a_i, b_i] ⊂ (0,1].

Now (i) follows trivially since EN_T{(a,b]} = μT(b−a) → τ(b−a).
To obtain (ii) we note that

0 ≤ P{N_T(B)=0} − P{M(TB) ≤ u_T} = P{N_{u_T}(TB)=0, M(TB)>u_T} ≤ Σ_{i=1}^n P{ξ(Ta_i) > u_T} ,

since if the maximum in TB = ∪_{i=1}^n (Ta_i, Tb_i] exceeds u_T but there are no upcrossings of u_T in these intervals, then ξ must exceed u_T at the initial point of at least one such interval. But the last expression is just nP{ξ(0) > u_T}, which tends to zero, so that P{N_T(B)=0} − P{M(TB) ≤ u_T} → 0. But P{M(TB) ≤ u_T} = P{∩_{i=1}^n (M(TB_i) ≤ u_T)} → e^{−τ Σ (b_i − a_i)} by Theorem 6.2, so that (ii) follows since P{N(B)=0} = e^{−τ Σ (b_i − a_i)}. □
Corollary. If the B_i are disjoint (Borel) subsets of the unit interval and if the boundary of each B_i has zero Lebesgue measure, then

P{N_T(B_i) = r_i, 1 ≤ i ≤ n} → ∏_{i=1}^n e^{−τ m(B_i)} [τ m(B_i)]^{r_i} / r_i! ,

where m(B_i) denotes the Lebesgue measure of B_i.

Proof. This is an immediate consequence of the full weak convergence proved (cf. Lemma 4.4 of [8]). □
The above results concern convergence of the point processes of upcrossings of u_T in the unit interval to a Poisson process in the unit interval. A slight modification (requiring D_c and D′_c to hold for all families u_{θT} in place of u_T, for all θ > 0) enables a corresponding result to be shown for the upcrossings on the whole positive real line, but we do not pursue this here. Instead we show how Theorem 6.3 yields the asymptotic distribution of the r-th largest local maximum in (0,T).

Suppose, then, that ξ(t) has a continuous derivative a.s. and define N′_u(T) to be the number of local maxima in the interval (0,T) for which the process value exceeds u, i.e. the number of downcrossing points t of zero by ξ′ in (0,T) such that ξ(t) > u. Clearly N′_u(T) ≥ N_u(T) − 1, since at least one local maximum occurs between two upcrossings. It is also reasonable to expect that if the sample function behavior is not too irregular there will tend to be just one local maximum between most successive upcrossings of u when u is large, so that N′_u(T) and N_u(T) will tend to be approximately equal. The following result makes this precise.
Theorem 6.4. With the above notation let {u_T} be constants such that Tμ (= Tμ(u_T)) → τ > 0. Suppose that EN′_u(1) is finite for each u and that EN′_u(1) ~ μ(u) as u → ∞. Then, writing u_T = u, E|N′_u(T) − N_u(T)| → 0. If also the conditions of Theorem 6.3 hold (so that P{N_u(T)=r} → e^{−τ} τ^r/r!), it follows that P{N′_u(T)=r} → e^{−τ} τ^r/r! .
Proof. As noted above, N′_u(T) ≥ N_u(T) − 1, and it is clear, moreover, that if N′_u(T) = N_u(T) − 1 then ξ(T) > u. Hence

E|N′_u(T) − N_u(T)| ≤ E{N′_u(T) − N_u(T)} + 2P{N′_u(T) = N_u(T) − 1}
                   ≤ T EN′_u(1) − μT + 2P{ξ(T) > u} ,

which tends to zero as T → ∞ since P{ξ(T) > u_T} = P{ξ(0) > u_T} → 0 and T EN′_u(1) − μT = μT[(1+o(1)) − 1] → 0, so that the first part of the theorem follows. The second part now follows immediately since the integer-valued r.v. N′_u(T) − N_u(T) tends to zero in probability, giving P{N′_u(T) ≠ N_u(T)} → 0 and hence P{N′_u(T)=r} − P{N_u(T)=r} → 0 for each r. □
Now write M^(r)(T) for the r-th largest local maximum in the interval (0,T). Since the events {M^(r)(T) ≤ u} and {N′_u(T) < r} are identical we obtain the following corollary:

Corollary 1. Under the conditions of the theorem,

P{M^(r)(T) ≤ u_T} → e^{−τ} Σ_{s=0}^{r−1} τ^s/s! . □
As a further corollary we obtain the limiting distribution of M^(r)(T) in terms of that for M(T).

Corollary 2. Suppose that P{a_T(M(T) − b_T) ≤ x} → G(x) for each real x, and that the conditions of Theorem 4.6 hold with u_T = x/a_T + b_T (and ψ = μ). Suppose also that EN′_u(1) ~ EN_u(1) as u → ∞. Then

(6.5)    P{a_T(M^(r)(T) − b_T) ≤ x} → G(x) Σ_{s=0}^{r−1} [−log G(x)]^s / s!

where G(x) > 0 (and zero if G(x) = 0).

Proof. This follows from Corollary 1 by writing G(x) = e^{−τ}, since Theorem 4.6 implies that Tμ → τ. □
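The limit in (6.5) is simple to evaluate in concrete cases. The sketch below is an illustration of mine, not part of the original text: it computes (6.5) for the Gumbel law G(x) = exp(−e^{−x}) of Theorem 5.4(v). For r = 1 the expression reduces to G(x) itself, and it increases toward one as r grows.

```python
from math import exp, factorial, log

def G(x):
    """Gumbel limit law for M(T), as in Theorem 5.4(v)."""
    return exp(-exp(-x))

def rth_limit(x, r):
    """Limiting value of P{a_T(M^(r)(T) - b_T) <= x} from (6.5):
    G(x) * sum_{s=0}^{r-1} [-log G(x)]**s / s!  (zero when G(x) = 0)."""
    g = G(x)
    if g == 0.0:
        return 0.0
    tau = -log(g)          # tau = e^{-x} for the Gumbel law
    return g * sum(tau ** s / factorial(s) for s in range(r))

probs = [rth_limit(0.0, r) for r in (1, 2, 3, 10)]
```

At x = 0 one has τ = 1, so the r = 1 value is e^{−1}, and the values increase with r, approaching one: the r-th largest maximum is stochastically smaller the larger r is.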
Note that for a stationary normal process with finite second and fourth spectral moments λ_2, λ_4 it may be shown (Section 11.6 of [3]) that EN′_u(1) has an explicit expression in terms of λ_2, λ_4, Δ = λ_4 − λ_2² and the standard normal d.f. Φ, so that clearly EN′_u(1) ~ μ as u → ∞.
The relation (6.5) gives the asymptotic distribution of the r-th largest local maximum M^(r)(T) as a corollary of the Poisson result, Theorem 6.4. This Poisson result may itself be generalized to apply to joint convergence of upcrossings of several levels to a point process in the plane composed of successive "thinnings" of a Poisson process. From a result of this kind it is possible to obtain the joint asymptotic distribution of any number of the M^(r_i)(T), and also of their time locations.
Acknowledgements
I am very grateful to Drs. Georg Lindgren and Holger Rootzen for
uncountably many helpful conversations regarding this and related topics and
for access to their exposition of the normal case to appear in [13].
References

[1] Berman, S.M. (1971). Maxima and high level excursions of stationary Gaussian processes. Trans. Amer. Math. Soc. 160, 65-85.

[2] Berman, S.M. (1964). Limit theorems for the maximum term in stationary sequences. Ann. Math. Statist. 35, 502-516.

[3] Cramér, H. and Leadbetter, M.R. (1967). Stationary and Related Stochastic Processes. New York: John Wiley and Sons.

[4] Fisher, R.A. and Tippett, L.H.C. (1928). Limiting forms of the frequency distribution of the largest or smallest member of a sample. Proc. Camb. Phil. Soc. 24, 180-190.

[5] Fréchet, M. (1927). Sur la loi de probabilité de l'écart maximum. Ann. de la Soc. Polonaise de Math. (Cracow), p. 93.

[6] Gnedenko, B.V. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. Ann. Math. 44, 423-453.

[7] Haan, L. de (1970). On regular variation and its application to the weak convergence of sample extremes. Amsterdam Math. Centre Tract 32.

[8] Kallenberg, O. (1975/1976). Random Measures. Akademie-Verlag: Berlin, and Academic Press: London and New York.

[9] Leadbetter, M.R. (1974). On extreme values in stationary sequences. Z. Wahrsch. verw. Geb. 28, 289-303.

[10] Leadbetter, M.R. (1978). Extreme value theory under weak mixing conditions. M.A.A. Studies in Mathematics 18: Studies in Probability Theory, ed. M. Rosenblatt, 46-110.

[11] Leadbetter, M.R. (1978). On extremes of stationary processes. Institute of Statistics Mimeo Series #1194, University of North Carolina at Chapel Hill.

[12] Leadbetter, M.R., Lindgren, G. and Rootzén, H. (1979). Extremal and related properties of stationary processes I: Extremes of stationary sequences. Institute of Statistics Mimeo Series #1227, University of North Carolina at Chapel Hill.

[13] Leadbetter, M.R., Lindgren, G. and Rootzén, H. Extremal and related properties of stationary processes II: Extremes of continuous parameter stationary processes. In preparation for Institute of Statistics Mimeo Series, University of North Carolina at Chapel Hill.

[14] Loynes, R.M. (1965). Extreme values in uniformly mixing stationary stochastic processes. Ann. Math. Statist. 36, 993-999.

[15] Pickands, J. (1969). Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc. 145, 51-73.

[16] Qualls, C. and Watanabe, H. (1972). Asymptotic properties of Gaussian processes. Ann. Math. Statist. 43, 580-596.

[17] Watson, G.S. (1954). Extreme values in samples from m-dependent stationary stochastic processes. Ann. Math. Statist. 25, 798-800.
Report Documentation Page (DD Form 1473)

Title: Extreme Value Theory for Continuous Parameter Stationary Processes. Author: M.R. Leadbetter. Type of report: Technical. Performing org. report number: Mimeo Series No. 1276. Contract: N00014-75-C-0809. Controlling office: Office of Naval Research, Statistics and Probability Program, Arlington, VA 22217. Report date: March 1980. Number of pages: 39. Security classification: Unclassified. Distribution statement: Approved for public release; distribution unlimited.