Leadbetter, M.R., Lindgren, G., and Rootzén, H., "Extremal and Related Properties of Stationary Processes. Part II: Extreme Values in Continuous Time."

EXTREMAL AND RELATED PROPERTIES
OF STATIONARY PROCESSES

Part II:
Extreme Values in Continuous Time

M. R. Leadbetter
University of North Carolina

G. Lindgren
University of Umeå

H. Rootzén
University of Copenhagen
Research supported by the Office of Naval Research, Contract
N00014-75-C-0809, and the Swedish Natural Science Research Council,
Contract F3741-004.
Table of contents

Introduction to Part II: Extreme values in continuous time
6. Basic properties of extremes and level crossings
7. Extremal theory of mean square differentiable normal processes
8. Point processes of upcrossings
9. Location of maxima
10. Sample path properties at upcrossings
11. Maxima and minima and extremal theory for dependent processes
12. Maxima and crossings of non-differentiable normal processes
13. Extremes of continuous parameter stationary processes
References
PART II
EXTREME VALUES IN CONTINUOUS TIME
In this part of the work we shall explore extremal and related theory
for continuous parameter stationary processes. As we shall see (in
Chapter 13) it is possible to obtain a satisfying general theory extending that for the sequence case, described in Chapter 2 of Part I,
and based on dependence conditions closely related to those used there
for sequences. In particular, a general form of Gnedenko's Theorem
will be obtained for the maximum

    M(T) = sup{ξ(t); 0 ≤ t ≤ T}

where ξ(t) is a stationary stochastic process satisfying appropriate
regularity and dependence conditions.
Before presenting this general theory, however, we shall give a detailed development for the case of stationary normal processes, for
which very many explicit extremal and related results are known. For
mean-square differentiable normal processes, it is illuminating and
profitable to approach extremal theory through a consideration of the
properties of upcrossings of a high level (which are analogous to the
exceedances used in the discrete case). The basic framework and resulting extremal results are described in Chapters 6 and 7 respectively.
As a result of this limit theory it is possible to show that the
point process of upcrossings of a level takes on an increasingly Poisson
character as the level becomes higher. This and related properties are
discussed in Chapter 8, and are analogous to the corresponding results
for exceedances by stationary normal sequences, given in Chapter 4.
The location of the maximum (primarily under normality) is considered in Chapter 9, along with a derivation of asymptotic Poisson
properties for the point process in the plane given by the locations
and heights of all local maxima. The latter results provide asymptotic
joint distributions for the locations and heights of any given number
of the largest local maxima.
The local behavior of a stationary normal process near a high
level upcrossing is discussed in Chapter 10, using, in particular,
a simple process (the "Slepian model process") to describe the sample
paths at such an upcrossing. As an interesting corollary it is possible
to obtain the limiting distribution for the lengths of excursions by
stationary normal processes above a high level, under appropriate conditions.
In Chapter 11 we consider the joint asymptotic behavior of the
maximum and minimum of a stationary normal process, and of maxima of
two or more dependent processes. In particular it is shown that, short
of perfect correlation between the processes, such maxima are
asymptotically independent.
While the mean square differentiable stationary normal processes
form a substantial class, there are important stationary normal processes (such as the Ornstein-Uhlenbeck process) which do not possess
this property. Many of these have covariance functions of the form
r(τ) = 1 − C|τ|^α + o(|τ|^α) as τ → 0 for some α, 0 < α < 2 (the case
α = 2 corresponds to the mean-square differentiable processes). The
extremal theory for these processes is developed in Chapter 12, using
more sophisticated methods than those of Chapter 7, for which simple
considerations involving upcrossings sufficed.
Finally, Chapter 13 contains the promised general extremal theory
(including Gnedenko's Theorem) for stationary continuous-time processes which are not necessarily normal. This theory essentially relies on the discrete parameter results of Part I, by means of the
simple device of expressing the maximum of a continuous parameter process in, say, time T = n, an integer, as the maximum of n "submaxima"
over fixed intervals, viz.

    M(n) = max(ζ₁, ζ₂, ..., ζₙ),

where ζᵢ = sup{ξ(t); i−1 ≤ t ≤ i}. It should be noted (as shown in
Chapter 13) that the results for stationary normal processes given in
Chapters 7 and 12 can be obtained from those in Chapter 13 by specialization. However, since most of the effort required in Chapters 7 and
12 is still needed to verify the general conditions of Chapter 13, and
the normal case is particularly important, we have felt it desirable
and helpful to first treat normal cases separately.
CHAPTER 6

BASIC PROPERTIES OF EXTREMES AND LEVEL CROSSINGS
We turn our attention now to continuous parameter stationary processes.
We shall be especially concerned with stationary normal processes in
this and most of the subsequent chapters but begin with a discussion of
some basic properties which are relevant whether or not the process is
normal, and which will be useful in the discussion of extremal behaviour
in later chapters.
We shall consider a stationary process {ξ(t); t ≥ 0} having a continuous ("time") parameter t ≥ 0. Stationarity is to be taken in the
strict sense, i.e. to mean that any group ξ(t₁), ..., ξ(tₙ) has the
same distribution as ξ(t₁+τ), ..., ξ(tₙ+τ) for all τ. Equivalently
this means that the finite dimensional distributions
F_{t₁,...,tₙ}(x₁, ..., xₙ) = P{ξ(t₁) ≤ x₁, ..., ξ(tₙ) ≤ xₙ} are such
that F_{t₁+τ,...,tₙ+τ} = F_{t₁,...,tₙ} for all choices of τ, n, and
t₁, t₂, ..., tₙ.
It will be assumed throughout without comment that for each t, the
d.f. F_t(x) of ξ(t) is continuous. It will further be assumed that,
with probability one, ξ(t) has continuous sample functions, that is,
the functions ξ(t) are a.s. continuous as functions of t ≥ 0.
Finally we shall assume that the basic underlying probability measure
space has been completed, if not already complete. This means in particular that probability-one limits of r.v.'s will themselves be r.v.'s,
a fact which will be useful below.
A principal aim in later chapters will be to discuss the behaviour
of the maximum

    M(T) = sup{ξ(t); 0 ≤ t ≤ T}

(which is well defined and attained, since ξ is continuous) especially
when T becomes large. It is often convenient to approximate the process
ξ(t) by a sequence {ξₙ(t)} of processes taking the value ξ(t) at
all points of the form jqₙ, j = 0, 1, 2, ..., and being linear between
such points, where qₙ → 0 as n → ∞. In particular this is useful in
showing that M(T) is a r.v., as the following small result demonstrates.
LEMMA 6.1  With the above notation, suppose that qₙ → 0 and write
Mₙ(T) = max{ξ(jqₙ); 0 ≤ jqₙ ≤ T}. Then Mₙ(T) → M(T) a.s. as n → ∞,
and M(T) is a r.v.

PROOF  Mₙ(T) is the maximum of a finite number of r.v.'s and hence is
a r.v. for each n. It is clear from a.s. continuity of ξ(t) that
Mₙ(T) → M(T) a.s. and hence by completeness M(T) is a r.v.  □

We shall also use the notation M(I) to denote the supremum of ξ(t)
in any given interval I; of course it may be similarly shown that M(I)
is a r.v.
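Lemma 6.1 is easy to illustrate numerically. The following sketch (the helper name and the particular path are our own illustrative choices, not part of the text) evaluates the grid maxima Mₙ(T) of a smooth path on successively finer nested grids and checks that they increase towards the supremum:

```python
import numpy as np

def grid_max(path, T, q):
    """M_n(T): maximum of the path over the grid points j*q in [0, T]."""
    t = np.arange(0.0, T + 1e-12, q)
    return path(t).max()

# A smooth "sample path": a finite cosine sum with fixed coefficients,
# standing in for one realization of a stationary process.
rng = np.random.default_rng(0)
freqs = rng.uniform(0.5, 3.0, size=8)
amps = rng.normal(size=8) / np.sqrt(8)
phases = rng.uniform(0, 2 * np.pi, size=8)

def path(t):
    t = np.asarray(t, dtype=float)
    return np.sum(amps[:, None] * np.cos(np.outer(freqs, t) - phases[:, None]), axis=0)

T = 10.0
approx = [grid_max(path, T, q) for q in (1.0, 0.1, 0.01, 0.001)]
# The grids are nested, so the grid maxima are nondecreasing,
# and for a continuous path they converge to M(T) as q -> 0.
assert all(a <= b + 1e-9 for a, b in zip(approx, approx[1:]))
```

Because the grids are nested, the successive approximations can only increase, and continuity of the path is exactly what guarantees convergence to M(T), as in the lemma.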
Level crossings and their basic properties
In the discussion of maxima of sequences in Part I, exceedances of a
level played an important role. In the continuous case a corresponding
role is played by the upcrossings of a level, for which analogous results
(such as Poisson limits) may be obtained. To discuss upcrossings, it
will be convenient to introduce, for any real u, a class G_u of all
functions f which are continuous on the positive real line, and not
identically equal to u in any subinterval. It is easy to see that the
sample paths of our stationary process ξ(t) are, with probability one,
members of G_u. In fact, every interval contains at least one rational
point, and hence the probability that ξ(t) ≡ u in some subinterval does
not exceed Σ_{j=1}^∞ P{ξ(t_j) = u}, where {t_j} is an enumeration of
the rational points. Since ξ(t_j) has a continuous distribution by
assumption, P{ξ(t_j) = u} is zero for every j.
We shall say that the function f ∈ G_u has a strict upcrossing of u
at the point t₀ > 0 if for some ε > 0, f(t) ≤ u at all points t of
(t₀ − ε, t₀] and f(t) ≥ u at all points of the interval (t₀, t₀ + ε].
The continuity of f requires, of course, that f(t₀) = u, and the
definition of G_u that f(t) < u at some points t ∈ (t₀ − η, t₀) and
f(t) > u at some points t ∈ (t₀, t₀ + η) for each η > 0.
It will be convenient to enlarge this notion slightly to include
also some points as upcrossings where the behaviour of f is less regular. As we shall see, these further points will not appear in practice
for the processes considered in the next two chapters, but are useful in
our calculations and will often actually occur for the less regular
processes of Chapter 12. Specifically, we shall say that the function
f ∈ G_u has an upcrossing at t₀ > 0 if for some ε > 0, f(t) ≤ u
for all t in (t₀ − ε, t₀) and, for all η > 0, f(t) > u for some t
in (t₀, t₀ + η). An example of a non-strict upcrossing of zero at t₀
is provided by the function f(t) = (t − t₀)sin((t − t₀)⁻¹) for t > t₀
(and hence f(t) = 0 for infinitely many t ↓ t₀), f(t) = t − t₀ for
t ≤ t₀.
The following result contains basic simple facts which we shall need
in counting upcrossings.

LEMMA 6.2  Let f ∈ G_u for some fixed u. Then,
(i) if for fixed t₁, t₂, 0 < t₁ < t₂, we have f(t₁) < u < f(t₂), then
f has an upcrossing (not necessarily strict) of u somewhere in (t₁, t₂);
(ii) if f has an upcrossing of u at t₀ which is not strict, it has
infinitely many upcrossings of u in (t₀, t₀ + ε), for any ε > 0.

PROOF  (i) Let t₀ = inf{t ∈ (t₁, t₂); f(t) > u}. Clearly t₁ < t₀ < t₂,
f(t) ≤ u for t ∈ (t₁, t₀), and, by the definition of the infimum,
f(t) > u at some points of (t₀, t₀ + η) for every η > 0, so that t₀ is
an upcrossing point of u by f.
(ii) If t₀ is an upcrossing point of u by f, then for any ε > 0 there
is certainly a point t₂ in (t₀, t₀ + ε) with f(t₂) > u. If t₀ is not
a strict upcrossing there must be a point t₁ in (t₀, t₂] such that
f(t₁) < u. By (i) there is an upcrossing between t₁ and t₂, so that
(ii) follows, since ε > 0 is arbitrary.  □
Downcrossings (strict or otherwise) may be defined by making the obvious changes, and crossings as points which are either up- or downcrossings. Clearly at any crossing t₀ of u we have f(t₀) = u. On the
other hand there may be "u-values" (i.e. points t₀ where f(t₀) = u)
which are not crossings, such as points where f is tangential to u,
or points t₀ such that f(t) − u is both positive and negative in
every right and left neighbourhood of t₀, as for the function
u + (t − t₀)sin((t − t₀)⁻¹).
The above discussion applies to each sample function of our process
ξ(t) satisfying the general conditions stated since, as noted, the sample functions belong to G_u with probability one. Write, now, N_u(I)
to denote the number of upcrossings of the level u by ξ(t) in a
bounded interval I, and N_u(t) for N_u((0,t]). We shall also sometimes
write N(t) when no confusion can arise.
In a similar way to that used for maxima, it is convenient to use the
"piecewise linear" approximating processes {ξₙ(t)} to show that
N_u(I) is a r.v. and, indeed, in subsequent calculations, as for example in obtaining E(N_u(I)). This will be seen in the following lemma,
where it will be convenient to introduce the notation

(6.1)    J_q(u) = P{ξ(0) < u < ξ(q)}/q,    q > 0.

LEMMA 6.3  Let I be a fixed, bounded interval. With the above general
assumptions concerning the stationary process ξ, let {qₙ} be any sequence such that qₙ → 0 and let Nₙ denote the number of points jqₙ,
j = 1, 2, ..., such that both (j−1)qₙ and jqₙ belong to I, and
ξ((j−1)qₙ) < u < ξ(jqₙ). Then
(i) Nₙ ≤ N_u(I),
(ii) Nₙ → N_u(I) a.s. as n → ∞, and hence N_u(I) is a (possibly
infinite valued) r.v., and
(iii) E(N_u(I)) = lim_{q↓0} J_q(u).

PROOF  (i) If ξ((j−1)qₙ) < u < ξ(jqₙ) for some j, it follows from
Lemma 6.2 (i) that ξ has an upcrossing between (j−1)qₙ and jqₙ,
so that (i) follows at once.
(ii) Since the distribution of ξ(t) is continuous and the set
{kqₙ; k = 0, 1, 2, ...; n = 1, 2, ...} is countable, we see that
P{ξ(kqₙ) = u for some k = 0, 1, 2, ...; n = 1, 2, ...} = 0, and hence
we may assume that ξ(kqₙ) ≠ u for any k and n. We may likewise assume
that ξ does not take the value u at either endpoint of I, and hence
that no upcrossings occur at the endpoints. Now, if for an integer m
we have N_u(I) ≥ m, we may choose m distinct upcrossings t₁, ..., t_m
of u by ξ(t) in the interior of I, which may, by choice of ε > 0, be
surrounded by disjoint subintervals (tᵢ − ε, tᵢ + ε), i = 1, 2, ..., m,
of I. Since ξ(·) ∈ G_u, each interval (tᵢ − ε, tᵢ) contains a point in
which ξ(t) < u, and each (tᵢ, tᵢ + ε) contains such a point in which
ξ(t) > u. By continuity, for all sufficiently large n there are then
points ℓqₙ ∈ (tᵢ − ε, tᵢ) and kqₙ ∈ (tᵢ, tᵢ + ε) such that
ξ(ℓqₙ) < u < ξ(kqₙ). For some j with ℓ < j ≤ k we must thus have
ξ((j−1)qₙ) < u < ξ(jqₙ). Since eventually each interval
((j−1)qₙ, jqₙ] is contained in an interval (tᵢ − ε, tᵢ + ε), which may
be taken as a subinterval of I, we conclude that Nₙ ≥ m when n is
sufficiently large, from which it follows at once that
lim inf_{n→∞} Nₙ ≥ N_u(I) (finite or not). Since by (i),
lim sup_{n→∞} Nₙ ≤ N_u(I), we see that lim_{n→∞} Nₙ = N_u(I) as
required. Finally it is easily seen that Nₙ is a r.v. for each n
(Nₙ is a finite sum of r.v.'s Χⱼ, where Χⱼ = 1 if
ξ((j−1)qₙ) < u < ξ(jqₙ) and zero otherwise) so that, by completeness,
its a.s. limit N_u(I) is also a r.v., though possibly taking infinite
values.
(iii) Since Nₙ → N_u(I) a.s., Fatou's Lemma shows that
lim inf_{n→∞} E(Nₙ) ≥ E(N_u(I)). If E(N_u(I)) = ∞ this shows at once
that E(Nₙ) → E(N_u(I)). If E(N_u(I)) < ∞ the same result holds, by
dominated convergence, since Nₙ ≤ N_u(I). Finally, if I = (0,1), then
I contains νₙ points of the form jqₙ, where νₙqₙ → 1, so that, using
stationarity,

    E(Nₙ) = (νₙ − 1) P{ξ(0) < u < ξ(qₙ)} ~ J_{qₙ}(u).

Hence J_{qₙ}(u) → E(N_u(I)), from which the final conclusion of (iii)
follows since the sequence {qₙ} is arbitrary.  □

COROLLARY 6.4  If E(N_u(I)) < ∞, or equivalently if
lim inf_{n→∞} J_{qₙ}(u) < ∞ for some sequence qₙ ↓ 0, then the
upcrossings of u are a.s. strict.

PROOF  If E(N_u(I)) < ∞ then N_u(I) < ∞ a.s. and the assertion follows
from (ii) of Lemma 6.2.  □
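The counting scheme of Lemma 6.3 can be sketched as follows. The helper below is our own, applied to a deterministic path chosen so that the number of upcrossings is known exactly; nothing here is from the text itself:

```python
import numpy as np

def grid_upcrossings(x, u):
    """N_n of Lemma 6.3: number of j with x[j-1] < u < x[j]."""
    x = np.asarray(x)
    return int(np.sum((x[:-1] < u) & (x[1:] > u)))

# sin(3t) on (0, 6) upcrosses the level 0 exactly twice,
# at t = 2*pi/3 and t = 4*pi/3 (the zero at t = 0 is a boundary point).
t = np.linspace(0.0, 6.0, 3001)
x = np.sin(3 * t)
assert grid_upcrossings(x, 0.0) == 2
# Downcrossings of 0 by x are upcrossings of 0 by -x: at pi/3, pi, 5*pi/3.
assert grid_upcrossings(-x, 0.0) == 3
```

By Lemma 6.2 (i) each counted pair brackets at least one upcrossing, so the count never exceeds N_u(I), and refining the grid eventually detects every upcrossing, which is the content of parts (i) and (ii) of the lemma.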
In passing, we shall derive two small results concerning the maximum
M(T) and the nature of solutions to the equation ξ(t) = u, which rely
only on the assumption that E(N_u(1)) is a continuous function of u.

THEOREM 6.5  Suppose that E(N_u(1)) is continuous at the point u and,
as usual, that P{ξ(t) = u} = 0, so that ξ(·) ∈ G_u with probability
one. Then
(i) if t ∈ (0,1) and ξ(t) = u, then with probability one t is either
an upcrossing or a downcrossing point,
(ii) the distribution of M(1) is continuous at u, i.e. P{M(1) = u} = 0.

PROOF  (i) If ξ(t₀) = u, but t₀ is neither an upcrossing nor a downcrossing point, it is either a tangency from below or above, i.e. for
some ε > 0, ξ(t) ≤ u (≥ u) for all t ∈ (t₀ − ε, t₀ + ε), or else there
are infinitely many upcrossings in (t₀ − ε, t₀ + ε) (and this is precluded by the finiteness of E(N_u(1))). Further, for each fixed u the
probability of tangencies of u from below is zero. To see this, let
B_u be the number of such tangencies of the level u in (0,1), and
suppose N_u + B_u ≥ m, so that there are at least m points
t₁, ..., t_m which are either u-upcrossings or tangencies from below.
Since ξ(·) ∈ G_u with probability one, there is at least one upcrossing
of the level u − 1/n just to the left of any tⱼ, for all sufficiently
large n. This implies that N_{u−1/n} ≥ m for all sufficiently large n,
and applying Fatou's Lemma,

    E(N_u(1)) + E(B_u) ≤ lim inf_{n→∞} E(N_{u−1/n}(1)) = E(N_u(1))

if E(N_u(1)) is continuous. Since B_u ≥ 0, we conclude that B_u = 0
(with probability one). A similar argument excludes tangencies from
above, and we have proved that all u-values are either up- or downcrossings.
(ii) Since

    P{M(1) = u} ≤ P{ξ(0) = u} + P{ξ(1) = u} + P{B_u ≥ 1},

the result follows as in the proof of part (i) from P{B_u = 0} = 1.  □
Crossings by normal processes

Up to this point we have been considering a quite general stationary
process {ξ(t); t ≥ 0}. We specialize now to the case of a (stationary)
normal or Gaussian process, by which we mean that the joint distribution
of ξ(t₁), ..., ξ(tₙ) is multivariate normal for each choice of
n = 1, 2, ..., and t₁, t₂, ..., tₙ. It will be assumed without comment
that ξ(t) has been standardized to have zero mean and unit variance.
The covariance function r(τ) will then be equal to E(ξ(t)ξ(t+τ)).
Obviously r is an even function of τ, with r(0) = E(ξ²(t)) = 1.
Thus if r is differentiable at τ = 0, its derivative must be zero
there. It is of particular interest to us whether r has two derivatives
at τ = 0. If r″(0) does exist (finite), it must be negative and we
write λ₂ = −r″(0). The quantity λ₂ is the second spectral moment, so
called since we have λ₂ = ∫_{−∞}^{∞} λ² dF(λ), where F(λ) is the
spectral d.f., i.e.

    r(τ) = ∫_{−∞}^{∞} e^{iλτ} dF(λ).

If r is not twice differentiable at zero then ∫_{−∞}^{∞} λ² dF(λ) = ∞.
When λ₂ < ∞ we have the expansion

(6.2)    r(τ) = 1 − λ₂τ²/2 + o(τ²)    as τ → 0.

Furthermore, it may be shown that λ₂ = −r″(0) < ∞ if and only if ξ(t)
is differentiable in quadratic mean, i.e. if and only if there is a
process {ξ′(t)} such that h⁻¹(ξ(t+h) − ξ(t)) → ξ′(t) in quadratic mean
as h → 0, and that then E(ξ′(t)) = 0, Var(ξ′(t)) = −r″(0), with
ξ(t), ξ′(t) being jointly normal and independent for each t. Furthermore

    Cov(ξ′(t), ξ′(t+τ)) = −r″(τ).

For future use we introduce also

    r(0) = 1 = ∫_{−∞}^{∞} dF(λ)    and    λ₄ = ∫_{−∞}^{∞} λ⁴ dF(λ),

where also λ₄ = r⁽⁴⁾(0), when finite. An account of these and related
properties may be found in Cramér and Leadbetter (1967), Chapter 9.
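As a concrete instance of these definitions (our own example, not one from the text): if r(τ) = e^{−τ²/2}, the spectral d.f. F is standard normal, so λ₂ = ∫λ² dF(λ) = 1 and λ₄ = ∫λ⁴ dF(λ) = 3, which must agree with −r″(0) and r⁽⁴⁾(0). A finite-difference check:

```python
import math

def r(tau):
    # covariance r(tau) = exp(-tau^2/2); its spectral d.f. is N(0,1)
    return math.exp(-tau * tau / 2.0)

h = 0.01
# second central difference: r''(0) ~ (r(h) - 2 r(0) + r(-h)) / h^2
lam2 = -(r(h) - 2 * r(0.0) + r(-h)) / h**2
# fourth central difference: r''''(0) ~ (r(2h) - 4r(h) + 6r(0) - 4r(-h) + r(-2h)) / h^4
lam4 = (r(2 * h) - 4 * r(h) + 6 * r(0.0) - 4 * r(-h) + r(-2 * h)) / h**4
assert abs(lam2 - 1.0) < 1e-3   # lambda_2 = 1
assert abs(lam4 - 3.0) < 5e-2   # lambda_4 = 3
```

The step h cannot be taken too small here: the fourth difference divides a quantity of order h⁴ by h⁴, so rounding error dominates for very small h.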
To apply the general results concerning upcrossings to the normal
case we require that ξ(t) should have a.s. continuous sample paths.
It is known (cf. Cramér and Leadbetter (1967)) that if

(6.3)    1 − r(τ) ≤ C/|log|τ||^a    for some C > 0, a > 1, for |τ| < 1,

it is possible to define the process ξ(t) as a continuous process.
This is a very weak condition which will always hold under assumptions
to be used here and subsequently; for example it is certainly guaranteed
if r is differentiable at the origin, or even if 1 − r(τ) ≤ C|τ|^a for
some a > 0, C > 0.
In the remainder of this and in the next chapters we shall consider a
stationary normal process ξ(t), standardized as above, and such that
λ₂ < ∞. To evaluate the mean number of upcrossings of u per unit time
we need to evaluate the limit of J_q(u), defined by (6.1), as q → 0.
This is obtained in the following lemma, which is a more general result
than we need at present, but which will be useful later also.
Let φ and Φ denote the standard normal density and distribution
functions.

LEMMA 6.6  Let {ξ(t)} be a (standardized) stationary normal process
with λ₂ < ∞, and write

    μ (= μ(u)) = (√λ₂/2π) e^{−u²/2}.

Let u either be fixed or tend to infinity as q → 0 in such a way
that uq → 0. Then

    J_q(u) = q⁻¹ P{ξ(0) < u < ξ(q)} → μ    as q → 0.

PROOF  By rewriting the event {ξ(0) < u < ξ(q)} as
{|ξ(0) + ξ(q) − 2u| < ξ(q) − ξ(0)}, i.e. as {|ζ₁ − u| < (q/2)ζ₂},
where ζ₁ = (ξ(0) + ξ(q))/2, ζ₂ = (ξ(q) − ξ(0))/q are uncorrelated,
and hence independent, with respective variances σ₁² = (1 + r(q))/2,
σ₂² = 2(1 − r(q))/q², we obtain

(6.4)    J_q(u) = q⁻¹ P{|ζ₁ − u| < (q/2)ζ₂}
             = q⁻¹ ∫_{y=0}^{∞} σ₂⁻¹ φ(y/σ₂) P{|ζ₁ − u| < qy/2} dy
             = (qσ₁σ₂)⁻¹ ∫_{y=0}^{∞} ∫_{x=u−qy/2}^{u+qy/2} φ(y/σ₂) φ(x/σ₁) dx dy
             = (2σ₁σ₂)⁻¹ ∫_{y=0}^{∞} y φ(y/σ₂) {∫_{x=−1}^{1} φ((u + qxy/2)/σ₁) dx} dy.

Now, by simple calculation, the second factor in the integrand,
∫_{x=−1}^{1} φ((u + qxy/2)/σ₁) dx, tends to 2φ(u) by bounded
convergence (σ₁ → 1, 1 − σ₁² = λ₂q²/4 + o(q²), σ₂ → √λ₂, and uq → 0).
It is also immediate that the integrand of (6.4) is dominated by the
integrable function Aye^{−cy²} (for some constants A, c > 0), so that
an application of dominated convergence gives

    μ⁻¹ J_q(u) → ∫_{y=0}^{∞} (y/λ₂) e^{−y²/(2λ₂)} dy = 1.  □
The following result, due in its original form to S.O. Rice (1945),
is now an immediate corollary of this lemma.

THEOREM 6.7 (Rice's Formula)  If {ξ(t)} is a (standardized) stationary
normal process with finite second spectral moment λ₂ (= −r″(0)), then
the mean number of upcrossings of any fixed level u per unit time is
finite and given by

(6.5)    E(N_u(1)) = (√λ₂/2π) e^{−u²/2}.

(Hence also all upcrossings are strict.)

PROOF  This follows from the case u fixed in the above lemma, together
with (iii) of Lemma 6.3.  □

The above discussion has been in terms of upcrossings. Clearly, similar results hold for downcrossings. In particular, the mean number of
downcrossings is also given by (6.5).
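Rice's formula lends itself to a Monte Carlo check. The sketch below is our own construction: a random-phase cosine sum, which is only approximately normal, with spectral moments computed exactly from its fixed frequencies. Averaging grid upcrossing counts over many replications should approximately reproduce (6.5):

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.5, 2.0, 60)       # fixed spectral frequencies
lam2 = np.mean(freqs**2)                # second spectral moment of r(tau) = mean(cos(freqs*tau))
u, T, dt, reps = 1.0, 10.0, 0.02, 1000
t = np.arange(0.0, T, dt)

counts = []
for _ in range(reps):
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    # approximately normal, mean 0, variance 1, covariance mean(cos(freqs*tau))
    x = np.sqrt(2.0 / freqs.size) * np.cos(np.outer(t, freqs) + phases).sum(axis=1)
    counts.append(np.sum((x[:-1] < u) & (x[1:] > u)))

rice = T * np.sqrt(lam2) / (2 * np.pi) * np.exp(-u**2 / 2)   # (6.5), over (0, T)
assert abs(np.mean(counts) - rice) < 0.25 * rice
```

With 60 frequencies the process is close enough to normal that the empirical mean typically falls within a few percent of the Rice value; the discrepancy shrinks as the number of frequencies and replications grows.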
In discussing the maximum of a stationary normal process ξ(t) we
shall find it useful to compare ξ(t) with a very simple normal process
ξ*(t) whose maximum is easily calculated using properties of its upcrossings. Specifically let η, ζ be independent standard normal r.v.'s
and define

(6.6)    ξ*(t) = η cos ωt + ζ sin ωt,

where ω is a fixed positive constant. It is clear that ξ*(t) is
normal and that ξ*(t₁), ..., ξ*(tₙ) are jointly normal for any choice
of tᵢ. (This follows most simply from the observation that Σᵢ cᵢξ*(tᵢ)
is normal for any choice of tᵢ and cᵢ.) Thus ξ*(t) is a normal
process and E(ξ*(t)) = 0. Its covariance function is calculated at once
to be

(6.7)    r(τ) = E{(η cos ωt + ζ sin ωt)(η cos ω(t+τ) + ζ sin ω(t+τ))}
            = cos ωt cos ω(t+τ) + sin ωt sin ω(t+τ)
            = cos ωτ.

Thus ξ*(t) is weakly stationary and hence strictly so, being normal.
Write now η = A cos φ and ζ = A sin φ, with A ≥ 0, 0 ≤ φ < 2π. Then

(6.8)    ξ*(t) = A cos(ωt − φ).

The Jacobian of the transformation (η, ζ) → (A, φ) is A, and it
follows simply that A, φ have joint density

    (1/2π) x e^{−x²/2},    x ≥ 0, 0 ≤ y < 2π,

showing that A, φ are independent, A having the Rayleigh distribution
with density x e^{−x²/2} (x ≥ 0) and φ being uniform over [0, 2π).
The sample paths of ξ* are thus cosine functions with angular frequency
ω, and having independent random amplitude A and phase φ.

The distribution of the maximum M*(T) for this process can be obtained geometrically. However, it is more instructive (and simpler) to
use properties of upcrossings. It is clear that λ₂ = ω² for this process and, writing N = N*_u(T) for the number of upcrossings of u in
(0,T), we have

(6.9)    E(N) = (ωT/2π) e^{−u²/2}

and

    P{M*(T) > u} = P{ξ*(0) > u} + P{ξ*(0) ≤ u, N ≥ 1}.

Now take ωT < π. Then if ξ*(0) > u > 0, the first upcrossing of u
occurs after t = π/ω (see diagram), and hence {N ≥ 1, ξ*(0) > u} is
empty, so that P{ξ*(0) ≤ u, N ≥ 1} = P{N ≥ 1}. Thus, since N = 0 or 1,

(6.10)    P{M*(T) > u} = 1 − Φ(u) + P{N ≥ 1} = 1 − Φ(u) + E(N)
                    = 1 − Φ(u) + (ωT/2π) e^{−u²/2},

or equivalently,

(6.11)    P{M*(T) ≤ u} = Φ(u) − (ωT/2π) e^{−u²/2}.

As a matter of interest and for later use, it follows that for fixed
h, 0 < ωh < π,

(6.12)    P{M*(h) > u} ~ h (λ₂^{1/2}/2π) e^{−u²/2}    as u → ∞

(since 1 − Φ(u) ~ φ(u)/u and λ₂ = ω²). This limit in fact holds under
much more general conditions, as we shall see.
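Formula (6.11) can be checked directly by simulating η and ζ; the small sketch below is ours, with ωT < π as the argument requires:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
omega, T, u = 1.0, 3.0, 1.0        # omega*T = 3 < pi, as required by (6.10)
reps = 20000
t = np.linspace(0.0, T, 301)

eta = rng.standard_normal(reps)
zeta = rng.standard_normal(reps)
# xi*(t) = eta cos(omega t) + zeta sin(omega t), evaluated on a fine grid
paths = eta[:, None] * np.cos(omega * t) + zeta[:, None] * np.sin(omega * t)
p_hat = np.mean(paths.max(axis=1) <= u)

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
p_exact = Phi(u) - omega * T / (2 * math.pi) * math.exp(-u**2 / 2)   # (6.11)
assert abs(p_hat - p_exact) < 0.03
```

For ωT ≥ π the check would fail in general, since N can then exceed 1 and the step from P{N ≥ 1} to E(N) in (6.10) is no longer exact.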
As noted above we will want in the next chapter to compare a general
stationary normal process with this special process. This comparison
will be made by an application of the following easy consequence of
Lemma 3.3.
LEMMA 6.8 (Slepian)  Let {ξ₁(t)} and {ξ₂(t)} be normal processes (possessing continuous sample functions but not necessarily being stationary). Suppose that these are standardized so that
E(ξ₁(t)) = E(ξ₂(t)) = 0, E(ξ₁²(t)) = E(ξ₂²(t)) = 1, and write ρ₁(t, s)
and ρ₂(t, s) for their covariance functions. Suppose that for some
δ > 0 we have ρ₁(t, s) ≥ ρ₂(t, s) when 0 ≤ t, s ≤ δ. Then the
respective maxima M₁(T) and M₂(T) satisfy

    P{M₁(T) ≤ u} ≥ P{M₂(T) ≤ u}    when 0 ≤ T ≤ δ.

PROOF  Define Mₙ⁽¹⁾ and Mₙ⁽²⁾ relative to ξ₁(t), ξ₂(t) as in Lemma
6.1, where qₙ = 2⁻ⁿ. Then, with probability one,
{Mₙ⁽¹⁾ ≤ u} ↓ {M₁(T) ≤ u}, and hence P{Mₙ⁽¹⁾ ≤ u} → P{M₁(T) ≤ u} as
n → ∞. Similarly P{Mₙ⁽²⁾ ≤ u} → P{M₂(T) ≤ u}. But it is clear from
(3.6) of Lemma 3.2 that P{Mₙ⁽²⁾ ≤ u} ≤ P{Mₙ⁽¹⁾ ≤ u}, so that the
desired result follows.  □
Marked crossings

The material in the remainder of this chapter will not be used until
Chapter 8 and in subsequent chapters. The reader who wishes to do so may
proceed directly to Chapter 7, and return to this section when needed.

We shall consider situations where we not only register the occurrence of an upcrossing, but also the value of some other random variable
connected with the upcrossing. We may, e.g. be interested in the derivative ξ′(tᵢ) at upcrossing points tᵢ of u by ξ(·), or the value
ξ(sᵢ) at downcrossing points sᵢ of zero by ξ′(·), i.e. at points
where ξ(t) has a local maximum. We shall refer to these as marked
crossings, and for example regard ξ′(tᵢ) and ξ(sᵢ) as marks attached
to the crossings at tᵢ and sᵢ. We shall here develop some methods for
dealing with such marks, along similar lines to those leading to Rice's
formula (although with some increase in complexity).
Let {ξ(t); t ≥ 0} and {η(t); t ≥ 0} be jointly stationary processes
with continuous sample paths. Denote by tᵢ the upcrossings of u by
ξ(t), and let, for any interval A, N_u(I; A) be the number of tᵢ in
I such that η(tᵢ) ∈ A, and write N_u(t; A) = N_u((0,t]; A). N_u(t)
will have the same meaning as before, e.g. N_u(I) = N_u(I; (−∞, ∞)).
Further define

    J_q(u; A) = q⁻¹ P{ξ(0) < u < ξ(q), η(0) ∈ A}.

LEMMA 6.9  Suppose E(N_u((0,1))) < ∞, let I be a bounded interval,
let qₙ ↓ 0, and let Nₙ(A) be the number of points jqₙ ∈ I such that
(j−1)qₙ ∈ I and

    ξ((j−1)qₙ) < u < ξ(jqₙ),    η((j−1)qₙ) ∈ A.

Then
(i) if A is an open interval,

    lim inf_{n→∞} Nₙ(A) ≥ N_u(I; A),    a.s.;

(ii) if, for every v,

(6.13)    P{ξ(t) = u, η(t) = v for some t ∈ I} = 0,

then, for any interval A,

    lim sup_{n→∞} Nₙ(A) ≤ N_u(I; A),    a.s.,

and

    N_u(I; A) = lim_{n→∞} Nₙ(A),    a.s.;

(iii) if A is an open interval,

    E(N_u(I; A)) ≤ lim inf_{n→∞} E(Nₙ(A)),

and, if (6.13) holds,

    E(Nₙ(A)) → E(N_u(I; A))    and    E(N_u(I; A)) = lim_{q↓0} J_q(u; A).

PROOF  (i) Suppose that N_u(I; A) ≥ m, so that ξ(t) has upcrossings
of u at points t₁, ..., t_m in the interior of I, with η(tᵢ) ∈ A,
i = 1, ..., m. (By the continuity of the distribution of ξ(t), no
upcrossings occur at the endpoints of I.) Since η(t) is continuous we
can surround the tᵢ's by disjoint subintervals (tᵢ − ε, tᵢ + ε) of I
in which η(t) ∈ A. It then follows as in the proof of Lemma 6.3 (ii)
that lim inf_{n→∞} Nₙ(A) ≥ m.
(ii) First assume N_u(I; A) = m < ∞, and let t₁, ..., t_m be as in
(i). If (a,b) is the interior of A, (6.13) precludes that η(tᵢ) = a
or b, so that η(tᵢ) ∈ (a,b), and we may therefore surround the tᵢ by
disjoint intervals (tᵢ − ε, tᵢ + ε) such that tᵢ is the only
upcrossing of u by ξ(t) in (tᵢ − ε, tᵢ + ε), and η(t) ∈ (a,b) for
t ∈ (tᵢ − ε, tᵢ + ε). Write Jₙ for the set of j's such that (j−1)qₙ
and jqₙ belong to (tᵢ − ε, tᵢ + ε) for some i, and Jₙ* for the set of
j's such that (j−1)qₙ and jqₙ both belong to I but j ∉ Jₙ. By Lemma
6.2 (i) each interval ((j−1)qₙ, jqₙ] with ξ((j−1)qₙ) < u < ξ(jqₙ)
contains an upcrossing, and tᵢ is the only upcrossing in
(tᵢ − ε, tᵢ + ε), so that

(6.14)    lim sup_{n→∞} Σ_{j∈Jₙ} Χⱼ ≤ m,

where Χⱼ = 1 if ξ((j−1)qₙ) < u < ξ(jqₙ) and η((j−1)qₙ) ∈ A, and
Χⱼ = 0 otherwise. Furthermore, if lim sup_{n→∞} Σ_{j∈Jₙ*} Χⱼ > 0, then
for n arbitrarily large there are jₙ ∈ Jₙ* with Χ_{jₙ} = 1, and hence
a sequence of integers {nₖ} such that jₙₖqₙₖ → τ, with
τ ∉ (tᵢ − ε, tᵢ + ε), i = 1, ..., m. Further, for ε′ > 0, which can be
taken small enough to make tᵢ ∉ (τ − ε′, τ + ε′), i = 1, ..., m, both
(jₙₖ − 1)qₙₖ and jₙₖqₙₖ eventually belong to (τ − ε′, τ + ε′); thus,
by Lemma 6.2 (i), ξ(t) has a u-upcrossing in (τ − ε′, τ + ε′). From
the continuity of η(t) it follows that η(τ) ∈ [a, b], and, since
(6.13) precludes η(τ) = a or b, we must have η(τ) ∈ (a,b) ⊂ A. Thus
every interval (τ − ε′, τ + ε′) contains a u-upcrossing; since
N_u(I) < ∞ a.s., τ must itself be a u-upcrossing, and since η(τ) ∈ A
this contradicts N_u(I; A) = m. This shows that

    lim sup_{n→∞} Σ_{j∈Jₙ*} Χⱼ = 0,

which together with (6.14) proves that

    lim sup_{n→∞} Nₙ(A) ≤ N_u(I; A),    a.s.

Since furthermore (6.13) implies that N_u(I; A) = N_u(I; (a,b)), part
(i) gives that

    lim inf_{n→∞} Nₙ(A) ≥ lim inf_{n→∞} Nₙ((a,b)) ≥ N_u(I; (a,b)) = N_u(I; A).

Hence Nₙ(A) → N_u(I; A) = m < ∞ a.s. as asserted. If N_u(I; A) = ∞,
the conclusion follows from part (i) with (a,b) replacing A, since
Nₙ(A) ≥ Nₙ((a,b)).
(iii) The first conclusion follows at once from Fatou's Lemma and part
(i), while it follows from part (ii) that

    E(Nₙ(A)) → E(N_u(I; A)),

by dominated convergence, since Nₙ(A) ≤ N_u(I) and E(N_u(I)) < ∞ by
assumption. Further, if I = (0,1), there are approximately qₙ⁻¹
points jqₙ ∈ I, so that E(Nₙ(A)) ~ J_{qₙ}(u; A). Hence the last
assertion of (iii) follows since the sequence {qₙ} is arbitrary.  □
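The counting scheme behind Lemma 6.9 is the same grid count as in Lemma 6.3, with the extra requirement that the mark at the left grid point lie in A. A small deterministic sketch (the paths and names here are our own stand-ins, chosen only to exercise the count; they are not stationary processes):

```python
import numpy as np

def marked_grid_upcrossings(x, marks, u, a, b):
    """N_n(A): grid pairs with x[j-1] < u < x[j] and mark at j-1 in (a, b)."""
    x, marks = np.asarray(x), np.asarray(marks)
    up = (x[:-1] < u) & (x[1:] > u)
    return int(np.sum(up & (marks[:-1] > a) & (marks[:-1] < b)))

t = np.linspace(0.0, 6.0, 3001)
x = np.sin(3 * t)        # upcrossings of 0 at t = 2*pi/3 and t = 4*pi/3
marks = np.sin(t)        # mark: positive at the first upcrossing, negative at the second
assert marked_grid_upcrossings(x, marks, 0.0, 0.0, np.inf) == 1
assert marked_grid_upcrossings(x, marks, 0.0, -np.inf, np.inf) == 2
```

Taking A = (−∞, ∞) recovers the unmarked count of Lemma 6.3, as in the relation N_u(I) = N_u(I; (−∞, ∞)) above.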
We shall now evaluate the limit of J_q(u; A) for the case when ξ(t)
and η(t) are jointly normal processes. For convenience we assume that
ξ(t) is standardized, i.e. it has mean zero and variance one. As was
noted earlier, if ξ(t) is quadratic mean differentiable then
E(ξ′(t)) = 0, λ₂ = Var(ξ′(t)) < ∞, and ξ(t), ξ′(t) are independent
for each t and normal, and hence they have the joint density function

    p(u, z) = φ(u) λ₂^{−1/2} φ(z λ₂^{−1/2}).

Further it can be shown that the three processes {ξ(t)}, {ξ′(t)}, and
{η(t)} are jointly normal, and that the cross-covariances and covariances can be obtained as limits, e.g.

    Cov(ξ′(t), η(t+τ)) = lim_{h→0} E(h⁻¹(ξ(t+h) − ξ(t)) η(t+τ)).

Conditional distributions can also be defined, using ratios of density
functions when they exist, e.g. for a measurable set A we define

    P{η(0) ∈ A | ξ(0) = u, ξ′(0) = z} = p(u, z)⁻¹ ∫_A p_{ξ(0),ξ′(0),η(0)}(u, z, w) dw,

where p_{ξ(0),ξ′(0),η(0)} is the joint density function of
ξ(0), ξ′(0), η(0). In the sequel, conditional probabilities will always
be understood as defined in this way.
LEMMA 6.10  Let {ξ(t)} be a zero mean normal process, jointly normal
with the process {η(t)}, and such that ξ(0), ξ′(0), η(0) have a nonsingular distribution. Assume further that {η(t)} has continuous
sample paths and that ξ(t) is differentiable in quadratic mean with
λ₂ = Var(ξ′(t)) < ∞. Then, for any measurable set A and any u,

    lim_{q→0} J_q(u; A) = ∫_{z=0}^{∞} z p(u, z) P{η(0) ∈ A | ξ(0) = u, ξ′(0) = z} dz.

PROOF  Write η = η(0), and as in the proof of Lemma 6.6 introduce the
independent normal r.v.'s ζ₁ = (ξ(0) + ξ(q))/2, ζ₂ = (ξ(q) − ξ(0))/q
with variances σ₁² = (r(0) + r(q))/2, σ₂² = 2(r(0) − r(q))/q², and
note that

(6.15)    J_q(u; A) = q⁻¹ P{|ζ₁ − u| < (q/2)ζ₂, η ∈ A}
    = (qσ₁σ₂)⁻¹ ∫_{z=0}^{∞} ∫_{x=u−qz/2}^{u+qz/2} φ(x/σ₁) φ(z/σ₂) P{η ∈ A | ζ₁ = x, ζ₂ = z} dx dz
    = (2σ₁σ₂)⁻¹ ∫_{z=0}^{∞} z φ(z/σ₂) ∫_{x=−1}^{1} φ((u + xqz/2)/σ₁) P{η ∈ A | ζ₁ = u + xqz/2, ζ₂ = z} dx dz.

To obtain the limit of the conditional normal probability
P{η ∈ A | ζ₁ = v, ζ₂ = z} as q → 0 (and v → u) we note that since
{η(t)} and {ξ(t)} are jointly normal processes, the conditional distribution of η = η(0) given ζ₁ = (ξ(0) + ξ(q))/2 = v,
ζ₂ = (ξ(q) − ξ(0))/q = z is also normal, with a mean m_q(v, z) which
is linear in v and z, and a variance V_q which does not depend on v
and z. Since ζ₁ → ξ(0), ζ₂ → ξ′(0) in quadratic mean as q → 0, it
follows that Cov(η, ζ₁) → Cov(η(0), ξ(0)), Cov(η, ζ₂) → Cov(η(0), ξ′(0))
as q → 0. Since furthermore σ₁² → r(0) = Var(ξ(0)),
σ₂² → λ₂ = Var(ξ′(0)), and ξ(0), ξ′(0), η(0) are non-singular by
assumption, we have V₀ = lim_{q→0} V_q > 0. Thus, with
m₀ = lim_{q→0} m_q(u + xqz/2, z), dominated convergence gives that

    P{η ∈ A | ζ₁ = u + xqz/2, ζ₂ = z} → P{η(0) ∈ A | ξ(0) = u, ξ′(0) = z}

for all x and z, as q → 0. Again by dominated convergence it follows
that

    J_q(u; A) → ∫_{z=0}^{∞} (z/√λ₂) φ(z/√λ₂) (1/√r(0)) φ(u/√r(0)) P{η(0) ∈ A | ξ(0) = u, ξ′(0) = z} dz,

which is the conclusion of the lemma.  □
Local maxima

As an application of the marked crossings theory we end this chapter with some comments concerning local maxima. To avoid technicalities we assume that the stationary normal process {ξ(t)} has sample functions which are, with probability one, everywhere continuously differentiable. Sufficient conditions for this can be found in Cramér and Leadbetter (1967), and they require slightly more than finiteness of the second spectral moment λ₂; cf. the condition (6.3) for sample function continuity.

Clearly then ξ(t) has a local maximum at t₀ if and only if ξ'(t) has a downcrossing of zero at t₀, and a number of results for local maxima can therefore trivially be obtained from corresponding results for downcrossings.
In particular, to ensure that ξ(t) has only finitely many local maxima in a finite time, we need the assumption

λ₄ = r⁽⁴⁾(0) < ∞,

where λ₄ is the fourth spectral moment. If λ₄ < ∞, then ξ(t) also has a second derivative ξ''(t) in quadratic mean, and ξ(t), ξ'(t), ξ''(t) are jointly normal with mean zero and the covariance matrix

( λ₀    0   −λ₂ )
(  0   λ₂    0  )
( −λ₂   0   λ₄ ),

where we usually assume λ₀ = r(0) = 1. Further, ξ(t), ξ'(t), ξ''(t) have a non-singular distribution provided ξ(t) is not of the form ξ(t) = A cos(ωt − φ). (In fact, the determinant of the covariance matrix is λ₂(∫λ⁴ dF(λ) · ∫dF(λ) − (∫λ² dF(λ))²), which is zero only if F is concentrated at two symmetric points.) If λ₄ < ∞ we also have the analogue of (6.2),

Cov(ξ'(t), ξ'(t+τ)) = −r''(τ) = λ₂ − λ₄τ²/2 + o(τ²) as τ → 0,

and, normalizing to variance one, we obtain

(6.16)  −r''(τ)/λ₂ = 1 − (λ₄/λ₂)τ²/2 + o(τ²) as τ → 0.
We will temporarily use the notation N'(T) for the number of local maxima of ξ(t), 0 < t < T. From Rice's formula (6.5) and (6.16) we obtain that the expected number of local maxima in (0,T) is

E(N'(T)) = (T/2π)(λ₄/λ₂)^{1/2}.

In Chapter 9 we shall study heights and locations of high local maxima. Write N'_u(T) for the number of local maxima of ξ(t), 0 < t < T, whose height exceeds u; i.e. with the previous notation, if ξ(t) has local maxima at the time points {s_i}, then N'_u(T) is the number of s_i ∈ (0,T) such that ξ(s_i) > u.
LEMMA 6.11  If {ξ(t)} is stationary normal, with continuously differentiable sample paths, and with a quadratic mean second derivative ξ''(t) with Var(ξ''(t)) = λ₄ < ∞, such that ξ(t), ξ'(t), ξ''(t) have a non-singular distribution, then

E(N'_u(T)) = T ∫_{x=u}^∞ ∫_{z=−∞}^0 |z| p(x,0,z) dz dx,

where p(x,y,z) is the joint density of ξ(t), ξ'(t), ξ''(t).
PROOF  We shall use Lemmas 6.9 and 6.10, identifying ζ(t) = −ξ'(t) and η(t) = ξ(t). By assumption, {ζ(t)} and {η(t)} satisfy the hypotheses of Lemma 6.10, with Var(ζ'(t)) = λ₄, so that for any open interval A,

(6.17)  lim_{q↓0} J_q(0; A) = ∫_{z=0}^∞ z f_{ζ(0),ζ'(0)}(0,z) P{η(0) ∈ A | ζ(0) = 0, ζ'(0) = z} dz

        = ∫_{z=−∞}^0 |z| p(0,z) P{ξ(0) ∈ A | ξ'(0) = 0, ξ''(0) = z} dz,

where p(0,z) is the density of ξ'(0), ξ''(0). By Theorems 6.5 and 6.7, all t such that ζ(t) = −ξ'(t) = 0 are either upcrossing or downcrossing points. Lemma 6.9(iii) implies that, writing N₀(T; V_ε) for the number of maxima in (0,T) with height in V_ε = (v − ε, v + ε),

P{ξ'(t) = 0, ξ(t) = v for some t ∈ (0,T)} ≤ E(N₀(T; V_ε)) ≤ T lim inf_{q→0} J_q(0; V_ε).

By (6.17) the right hand side can be made arbitrarily small by choosing ε small. Thus ζ(t), η(t) satisfy condition (6.13), and by Lemma 6.9(iii) and stationarity

E(N₀(T; (u,∞))) = T E(N₀(1; (u,∞))) = T lim_{q→0} J_q(0; (u,∞)).

Inserting

P{ξ(0) ∈ (u,∞) | ξ'(0) = 0, ξ''(0) = z} = ∫_u^∞ p(x,0,z) dx / p(0,z)

into (6.17), the lemma follows.  □
By inserting the normal density

(6.18)  p(x,0,z) = (2π)^{−3/2} (λ₂D)^{−1/2} exp(−(λ₄x² + 2λ₂xz + z²)/2D),

where D = λ₄ − λ₂², we obtain after some calculation

E(N'_u(T)) = (T/2π) {(λ₄/λ₂)^{1/2} (1 − Φ(u(λ₄/D)^{1/2})) + (2πλ₂)^{1/2} φ(u) Φ(uλ₂/D^{1/2})}.
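As a hedged numerical check of the closed form above (not part of the original derivation): for high levels almost every u-upcrossing is followed by exactly one local maximum above u, so E(N'_u(T)) should agree asymptotically with the mean number of upcrossings, (T/2π)λ₂^{1/2}e^{−u²/2}. The spectral moment values below are arbitrary illustrations.

```python
import math

def std_normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def mean_maxima_above(u, lam2, lam4, T=1.0):
    """E N'_u(T) from the closed form obtained via (6.18); D = lam4 - lam2^2."""
    D = lam4 - lam2 ** 2
    return T / (2 * math.pi) * (
        math.sqrt(lam4 / lam2) * (1 - std_normal_cdf(u * math.sqrt(lam4 / D)))
        + math.sqrt(2 * math.pi * lam2) * std_normal_pdf(u)
          * std_normal_cdf(u * lam2 / math.sqrt(D)))

def mean_upcrossings(u, lam2, T=1.0):
    """E N_u(T) = T (1/2 pi) lam2^(1/2) exp(-u^2/2), by Rice's formula."""
    return T * math.sqrt(lam2) / (2 * math.pi) * math.exp(-u * u / 2)
```

For u as small as 4 the two quantities already agree to a fraction of a percent, while for low levels there are, as expected, more maxima above the level than upcrossings of it.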
CHAPTER 7

EXTREMAL THEORY OF MEAN SQUARE DIFFERENTIABLE NORMAL PROCESSES
In this chapter, the extremal theory of stationary normal processes will be developed, giving results analogous to those of Chapter 3. We shall assume throughout this chapter that {ξ(t); t ≥ 0} is a stationary normal process with

E(ξ(t)) = 0,  E(ξ²(t)) = 1,  E(ξ(t)ξ(t+τ)) = r(τ),

where the spectral moment λ₂ = −r''(0) exists finite. Equivalently this requires that the mean number of upcrossings of any level per time unit is finite (Theorem 6.7), and also equivalently that the covariance function has the following representation,

(7.1)  r(τ) = 1 − λ₂τ²/2 + o(τ²) as τ → 0.

Less regular cases where λ₂ = ∞ will be considered in Chapter 12.
As for normal sequences, the double exponential limit for M(T) = sup{ξ(t); 0 ≤ t ≤ T} (as in Chapter 6) will be derived under the weak condition

(7.2)  r(t) log t → 0 as t → ∞.

This is the continuous time analogue of (3.1), and it will be used to derive a version of Lemma 3.1, before starting the main development. Still weaker conditions corresponding to (3.12) will be obtained at the end of this chapter. In the following lemma we shall consider a level u which increases with the time period T in such a way that E(N_u(T)) = Tμ remains constant, where

μ = E(N_u(1)) = (1/2π) λ₂^{1/2} e^{−u²/2}.
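The rate μ given by Rice's formula can be checked by simulation on the special "cosine process" ξ*(t) = η cos ωt + ζ sin ωt of Chapter 6, for which r(τ) = cos ωτ and λ₂ = ω². This sketch is our illustration, not part of the text; the sample size and seed are arbitrary.

```python
import math
import random

def rice_rate(u, lam2):
    """Mean number of u-upcrossings per time unit: (1/2 pi) lam2^(1/2) exp(-u^2/2)."""
    return math.sqrt(lam2) / (2 * math.pi) * math.exp(-u * u / 2)

def cosine_process_rate(u, omega, n_paths=200_000, seed=1):
    """Monte Carlo upcrossing rate for xi*(t) = eta cos(omega t) + zeta sin(omega t).
    Writing xi*(t) = A cos(omega t - phi) with A^2 = eta^2 + zeta^2, a path
    upcrosses u at rate omega/(2 pi) when A > u, and never when A <= u."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_paths)
               if math.hypot(rng.gauss(0, 1), rng.gauss(0, 1)) > u)
    return omega / (2 * math.pi) * hits / n_paths
```

Since A is Rayleigh, P{A > u} = e^{−u²/2}, so the simulated rate should match (1/2π)λ₂^{1/2}e^{−u²/2} with λ₂ = ω².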
We shall also consider points {kq; k = 1,2,...}, where the spacing q = q(u) > 0 depends on u (or equivalently on T) and q → 0 as u (or T) → ∞. The statement that "a property holds provided q ↓ 0 sufficiently slowly" is to be taken to have the obvious meaning that there exists some ψ₀(u) ↓ 0 for which the property holds, and it holds for any ψ(u) such that ψ(u) → 0 but ψ(u) ≥ ψ₀(u) as u → ∞. The following is the promised continuous analogue of Lemma 3.1.
LEMMA 7.1  (i) Let ε > 0 be given. If (7.2) holds, then

δ = sup{|r(t)|; |t| ≥ ε} < 1.

(ii) Suppose that (7.1) and (7.2) both hold, and let T, u → ∞ in such a way that Tμ → τ for some fixed τ > 0, where μ = E(N_u(1)) = (1/2π)λ₂^{1/2}e^{−u²/2}; then u² ~ 2 log T (as is easily checked). If q = q(u) is such that qu → 0 sufficiently slowly as T → ∞, then

(T/q) Σ_{ε≤kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)} → 0 as T → ∞.

PROOF  (i) As in the discrete case (cf. remarks preceding Lemma 3.1), if |r(t₀)| = 1 for some t₀ > 0, then |r(t)| = 1 for arbitrarily large values of t, which contradicts (7.2). Hence |r(t)| < 1 for t > 0, and since r(t) is continuous and tends to zero as t → ∞, |r(t)| is bounded away from 1 in |t| ≥ ε, and (i) follows.
(ii) As in the discrete case, choose a constant β such that 0 < β < (1 − δ)/(1 + δ). Letting K be a generic constant,

(T/q) Σ_{ε≤kq≤T^β} |r(kq)| e^{−u²/(1+|r(kq)|)} ≤ (T/q)(T^β/q) e^{−u²/(1+δ)} ≤ K (T^{1+β}/q²) T^{−2/(1+δ)} ≤ 2K (log T) T^{1+β−2/(1+δ)} / (qu)²,

since u² ~ 2 log T, as noted. If γ is chosen so that 0 < γ < (1−δ)/(1+δ) − β, the last expression is dominated by K(qu)^{−2} T^{−γ}, which tends to zero provided qu → 0 more slowly than T^{−γ/2} (= K e^{−γu²/4}). Hence this sum tends to zero.
By writing δ(t) = sup_{s≥t} |r(s)| log s, we see that the remaining sum does not exceed

(T/q) e^{−u²} Σ_{T^β<kq≤T} |r(kq)| e^{u²|r(kq)|}.

Again as in the discrete case, if (7.2) holds then δ(t) → 0 as t → ∞, and |r(s)| ≤ δ(t)/log s for s ≥ t > 1. Thus for kq > T^β,

u² |r(kq)| ≤ 2 log T · δ(T^β)/log(T^β) = (2/β) δ(T^β),

which tends to zero, uniformly in k. Hence the exponential term e^{u²|r(kq)|} is certainly bounded in (k,u). It is thus sufficient to show that

(T/q) e^{−u²} Σ_{T^β<kq≤T} |r(kq)| → 0 as T → ∞.

But this does not exceed

(T²/q²) e^{−u²} δ(T^β)/(β log T) ≤ K δ(T^β)/(qu)²,

which again tends to zero provided qu → 0 sufficiently slowly (i.e. slower than δ(T^β)^{1/2}).  □
Having proved this technical lemma, we now proceed to the main derivation of the extremal results under the assumption that r''(0) exists (i.e. λ₂ < ∞) and (7.2) holds. The condition λ₂ < ∞ guarantees that the point process of upcrossings of a level u will have a finite intensity. The case λ₂ = ∞ is also of interest, and, as noted, will be treated in Chapter 12, but requires the use of more complex methods (such as an extended definition of upcrossings).

Our basic technique here is to divide the interval (0,T) (as T becomes large) into n pieces of fixed length h (n = [T/h]). Then M(T) will clearly be close to M(nh), which is the maximum of the r.v.'s ζ_j = M((j−1)h, jh), j = 1,2,...,n (the {ζ_j} forming a stationary sequence). Thus we might expect that the methods used for sequences would apply here, and this is the case (although we shall organize our arguments slightly differently to better suit the present purposes).
It is therefore not surprising that the tail of the distribution of the ζ_j, i.e. P{M(h) > u} (for fixed h), plays a central role. In fact the same asymptotic form (6.12) holds for this tail probability here as it did for the special process ξ*(t) = η cos ωt + ζ sin ωt. In the present chapter it will be sufficient to obtain the following somewhat weaker result. In this we shall use Slepian's Lemma (Lemma 6.8) to compare maxima of ξ(t) and ξ*(t), along the lines of a procedure originally used by S.M. Berman (1971 b).
LEMMA 7.2  Suppose that the (standardized) stationary normal process {ξ(t)} satisfies (7.1). Then, with the above notation,

(i) for all h > 0, P{M(h) > u} ≤ 1 − Φ(u) + μh, so that

lim sup_{u→∞} P{M(h) > u}/(μh) ≤ 1;

(ii) given θ < 1 there exists h₀ = h₀(θ) > 0 such that for 0 < h ≤ h₀

(7.3)  P{M(h) > u} ≥ 1 − Φ(u) + θμh,

so that

lim inf_{u→∞} P{M(h) > u}/(μh) ≥ θ for 0 < h ≤ h₀ = h₀(θ).
PROOF  (i) follows since

P{M(h) > u} ≤ P{ξ(0) > u} + P{N_u(h) ≥ 1} ≤ 1 − Φ(u) + E(N_u(h)) = 1 − Φ(u) + μh.

The second result (ii) follows simply from Slepian's Lemma (Lemma 6.8) by comparison with the simple process ξ*(t) given by (6.6). For if ω = θλ₂^{1/2} we have, by (7.1), r(t) ≤ cos ωt for 0 ≤ t ≤ h₀ < π/ω (h₀ = h₀(θ) > 0). But this shows that the covariance function of ξ(t) is dominated by that of ξ*(t) in [0,h₀], which then gives

P{M(h) > u} ≥ P{M*(h) > u} for h ≤ h₀

(with M* as in (6.10)), and hence (ii).  □
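For the cosine process the tail of M*(h) can in fact be written down exactly: by a simple geometric argument (our sketch, under the assumption ωh ≤ π), the path ξ*(t) = A cos(ωt − φ) exceeds u somewhere in [0,h] precisely when A > u and the uniform phase φ falls in an arc of length 2 arccos(u/A) + ωh, which gives P{M*(h) > u} = 1 − Φ(u) + (ωh/2π)e^{−u²/2}, the form referred to above. A small simulation of this identity (parameter values arbitrary):

```python
import math
import random

def exact_tail(u, omega, h):
    """P{M*(h) > u} = 1 - Phi(u) + (omega h / 2 pi) exp(-u^2/2), for omega*h <= pi."""
    Phi_u = 0.5 * (1 + math.erf(u / math.sqrt(2)))
    return 1 - Phi_u + omega * h / (2 * math.pi) * math.exp(-u * u / 2)

def simulated_tail(u, omega, h, n=200_000, seed=7):
    """xi*(t) = A cos(omega t - phi), A Rayleigh, phi uniform on [0, 2 pi):
    M*(h) > u  iff  A > u and phi lies in (-theta, omega h + theta) mod 2 pi,
    where theta = arccos(u/A)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        a = math.hypot(rng.gauss(0, 1), rng.gauss(0, 1))
        if a > u:
            theta = math.acos(u / a)
            phi = rng.uniform(0, 2 * math.pi)
            if phi < omega * h + theta or phi > 2 * math.pi - theta:
                hits += 1
    return hits / n
```

Note how the exact tail splits into the two terms of Lemma 7.2: the stationary exceedance probability 1 − Φ(u) plus the mean number of upcrossings μh.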
Our remaining task is to approximate the maximum M(T) (for increasing T) by the maxima over suitable, separated, fixed length subintervals, and to show asymptotic independence of the maxima over these intervals. First we give a simple but useful lemma. In this, for q > 0, N_u and N_u^{(q)} will denote the number of upcrossings of u in a fixed interval I of length h, by the process {ξ(t)} and the sequence {ξ(kq)}, respectively. More precisely, N_u^{(q)} is the number of k such that kq ∈ I, (k−1)q ∈ I, and ξ((k−1)q) < u < ξ(kq) (cf. Lemma 6.3).
LEMMA 7.3  If (7.1) holds, then with the above notation, as u → ∞, qu → 0,

(i)  E(N_u − N_u^{(q)}) = o(μ),

(ii)  P{M(I) ≤ u} = P{ξ(kq) ≤ u, kq ∈ I} + o(μ),

where each o(μ)-term is uniform in all such intervals I of length h ≤ h₀, for any fixed h₀ > 0.

PROOF  The number of points kq ∈ I with (k−1)q ∈ I is clearly h/q + B, where 0 ≤ B < 2. Hence, with J_q(u) defined by (6.1), Lemma 6.6 implies that

E(N_u^{(q)}) = (h/q + B) P{ξ(0) < u < ξ(q)} = (h + Bq) J_q(u) = μh(1 + o(1)) + O(μq) = μh + o(μ),

where the o- and O-terms are uniform in 0 < h ≤ h₀, so that (i) clearly holds. To prove (ii), note that if a is the left-hand endpoint of I,

0 ≤ P{ξ(kq) ≤ u, kq ∈ I} − P{M(I) ≤ u} ≤ P{ξ(a) > u} + P{ξ(a) < u, N_u ≥ 1, N_u^{(q)} = 0}
  ≤ 1 − Φ(u) + P{N_u − N_u^{(q)} ≥ 1}.

The first term is 1 − Φ(u) ~ φ(u)/u = o(μ), independent of h. Since N_u − N_u^{(q)} is a non-negative integer-valued random variable (cf. Lemma 6.3(i)), the second term does not exceed E(N_u − N_u^{(q)}), which by (i) is o(μ), uniformly in (0,h₀]. Hence (ii) follows.  □
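The content of Lemma 7.3(ii), that a grid of spacing q with qu small captures the continuous maximum up to a small error, can be illustrated on the cosine process, where the exceedance event is computable in closed form per path and the grid maximum can be evaluated directly. This is our own illustration under the assumption ωh ≤ π; tolerances and parameters are arbitrary.

```python
import math
import random

def tail_probs(u, omega, h, q, n=50_000, seed=3):
    """Return (P{M(I) > u}, P{max over grid points kq in I > u}) estimated from
    the same sample paths of xi*(t) = A cos(omega t - phi) on I = [0, h].
    The grid event implies the continuous event, path by path."""
    rng = random.Random(seed)
    m = round(h / q)                      # grid points 0, q, ..., m*q
    cont = grid = 0
    for _ in range(n):
        a = math.hypot(rng.gauss(0, 1), rng.gauss(0, 1))
        phi = rng.uniform(0, 2 * math.pi)
        if a > u:
            theta = math.acos(u / a)
            # continuous exceedance: phase arc of length 2*theta + omega*h
            if phi < omega * h + theta or phi > 2 * math.pi - theta:
                cont += 1
                if any(a * math.cos(omega * k * q - phi) > u for k in range(m + 1)):
                    grid += 1
    return cont / n, grid / n
```

With qu well below 1, the two probabilities differ only by excursions shorter than the grid spacing, which are rare at moderate levels.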
Now let u, T → ∞ in such a way that Tμ → τ > 0. Fix h > 0 and write n = [T/h]. Divide the interval [0,nh] into n pieces each of length h. Fix ε, 0 < ε < h, and divide each piece into two, of length h − ε and ε, respectively. By doing so we obtain pairs of intervals I₁, I₁*, ..., I_n, I_n*, alternately of length h − ε and ε, making up the whole interval [0,T] apart from one further piece of length less than h, which is contained in the next pair I_{n+1}, I*_{n+1}.

LEMMA 7.4  As u → ∞, qu → 0, and Tμ → τ > 0,

(i)  lim sup_{T→∞} |P{M(∪₁ⁿ I_j) ≤ u} − P{M(nh) ≤ u}| ≤ τε/h,

(ii)  P{ξ(kq) ≤ u, kq ∈ ∪₁ⁿ I_j} − P{M(∪₁ⁿ I_j) ≤ u} → 0.

PROOF
For (i) note that

0 ≤ P{M(∪₁ⁿ I_j) ≤ u} − P{M(nh) ≤ u} ≤ n P{M(I₁*) > u} ~ (τε/h) · P{M(I₁*) > u}/(με),

since n = [T/h] ~ T/h and Tμ → τ. Since I₁* has length ε, (i) follows from Lemma 7.2(i).

To prove (ii) we note that the expression on the left is non-negative and dominated by

Σ_{j=1}ⁿ (P{ξ(kq) ≤ u, kq ∈ I_j} − P{M(I_j) ≤ u}),

which by Lemma 7.3(ii) does not exceed n o(μ) = [T/h] o(μ) = o(1) (the o(μ)-term being uniform in the I_j's), as required.  □
The next lemma, implying the asymptotic independence of maxima, is formulated in terms of the condition (7.4), also appearing in Lemma 7.1.

LEMMA 7.5  Suppose that r(t) → 0 as t → ∞, and that, as T → ∞, qu → 0,

(7.4)  (T/q) Σ_{ε≤kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)} → 0

for each ε > 0. Then as T → ∞, qu → 0, Tμ → τ,

(i)  P{ξ(kq) ≤ u, kq ∈ ∪₁ⁿ I_j} − ∏_{j=1}ⁿ P{ξ(kq) ≤ u, kq ∈ I_j} → 0,

(ii)  lim sup_{T→∞} |∏_{j=1}ⁿ P{ξ(kq) ≤ u, kq ∈ I_j} − Pⁿ{M(h) ≤ u}| ≤ 2τε/h

for each ε, 0 < ε < h.
PROOF  To show (i) we use Lemma 3.2, and compare the maximum of ξ(kq), kq ∈ ∪₁ⁿ I_j, under the full covariance structure, with the maximum of ξ(kq), assuming variables arising from different I_j-intervals to be independent. To formalize this, let Λ¹ = (Λ¹_{ij}) be the covariance matrix of ξ(kq), kq ∈ ∪ I_j, and let Λ⁰ = (Λ⁰_{ij}) be the modification obtained by writing zeros in the off-diagonal blocks (which would occur if the groups were independent of each other); e.g. with n = 3,

      ( Λ₁₁   0    0  )
Λ⁰ =  (  0   Λ₂₂   0  )
      (  0    0   Λ₃₃ ).

From Lemma 3.2 we obtain

(7.5)  |P{ξ(kq) ≤ u, kq ∈ ∪₁ⁿ I_j} − ∏_{j=1}ⁿ P{ξ(kq) ≤ u, kq ∈ I_j}|
       ≤ (1/2π) Σ_{1≤i<j≤L} |Λ¹_{ij} − Λ⁰_{ij}| (1 − ρ_{ij}²)^{−1/2} exp(−u²/(1 + ρ_{ij})),

where L is the total number of kq-points in ∪₁ⁿ I_j, and ρ_{ij} = |Λ¹_{ij}|. Since all terms with i,j in the same diagonal block vanish, while otherwise sup_{i,j} ρ_{ij} = δ < 1 by Lemma 7.1(i), we see that the double sum does not exceed

K Σ* |Λ¹_{ij}| exp(−u²/(1 + |Λ¹_{ij}|)),

where Σ* indicates that the summation is carried out over (i,j) in the off-diagonal blocks only. But each Λ¹_{ij} is of the form r(kq), where kq ≥ ε since the minimum separation of points in different blocks is at least ε, and there are not more than T/q terms with the same k-value. Thus, with i < j, we obtain the bound

|P{ξ(kq) ≤ u, kq ∈ ∪₁ⁿ I_j} − ∏_{j=1}ⁿ P{ξ(kq) ≤ u, kq ∈ I_j}| ≤ K (T/q) Σ_{ε≤kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)},

which tends to zero by assumption (7.4), so that (i) follows.
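The normal comparison inequality of Lemma 3.2 used in (7.5) can be checked numerically in its simplest case, two standard normal variables with correlation ρ, where it reads |P{X ≤ u, Y ≤ u} − Φ(u)²| ≤ (1/2π)|ρ|(1 − ρ²)^{−1/2} e^{−u²/(1+ρ)}. The Monte Carlo sketch below is our illustration; sample size and parameters are arbitrary.

```python
import math
import random

def normal_comparison_check(u=1.0, rho=0.5, n=300_000, seed=11):
    """Estimate P{X <= u, Y <= u} for bivariate standard normals with
    correlation rho, and return (deviation from independence, comparison bound)."""
    rng = random.Random(seed)
    s = math.sqrt(1 - rho * rho)
    joint = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        x, y = z1, rho * z1 + s * z2          # Corr(x, y) = rho
        if x <= u and y <= u:
            joint += 1
    Phi_u = 0.5 * (1 + math.erf(u / math.sqrt(2)))
    deviation = joint / n - Phi_u ** 2
    bound = abs(rho) / (2 * math.pi * s) * math.exp(-u * u / (1 + rho))
    return deviation, bound
```

For positive ρ the deviation is positive (larger covariance makes the joint event more likely) and sits strictly below the bound.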
To prove (ii), note that by Lemma 7.3(ii),

0 ≤ P{ξ(kq) ≤ u, kq ∈ I_j} − P{M(I_j) ≤ u} = o(μ)

(uniformly in j), and

0 ≤ P{M(I_j) ≤ u} − P{M(h) ≤ u} ≤ P{M(I_j*) > u},

so that by Lemma 7.2(i), for sufficiently large u,

0 ≤ P_j' − P ≤ 2με  (uniformly in j),

where P_j' = P{ξ(kq) ≤ u, kq ∈ I_j}, P = P{M(h) ≤ u}. Hence

0 ≤ ∏_{j=1}ⁿ P_j' − Pⁿ ≤ (max_j P_j')ⁿ − Pⁿ ≤ 2nμε

(using the fact that yⁿ − xⁿ ≤ n(y − x) for 0 ≤ x ≤ y ≤ 1). Part (ii) now follows since nμ ~ Tμ/h → τ/h.  □
The basic extremal theorem now follows readily.

THEOREM 7.6  Let u, T → ∞ in such a way that

Tμ = (T/2π) λ₂^{1/2} e^{−u²/2} → τ > 0.

Suppose that r(t) satisfies (7.1) and either (7.2) or the weaker condition (7.4) (cf. Lemma 7.1). Then

(7.6)  P{M(T) ≤ u} → e^{−τ}.

PROOF  By Lemma 7.1 the assumption (7.4) of Lemma 7.5 holds. From Lemmas 7.4 and 7.5 we obtain

lim sup_{T→∞} |P{M(nh) ≤ u} − Pⁿ{M(h) ≤ u}| ≤ Kε

for some K independent of ε, and since ε > 0 is arbitrary it follows that

P{M(nh) ≤ u} − Pⁿ{M(h) ≤ u} → 0.
Further, since nh ≤ T < (n+1)h, it follows along now familiar lines that

0 ≤ P{M(nh) ≤ u} − P{M(T) ≤ u} ≤ P{N_u(h) ≥ 1} ≤ μh,

which tends to zero, so that

P{M(T) ≤ u} = Pⁿ{M(h) ≤ u} + o(1).

This holds for any fixed h > 0. Suppose now that 0 < θ < 1, and that h is fixed, 0 < h < h₀, where h₀ = h₀(θ) is as chosen in Lemma 7.2(ii), from whence it follows that

P{M(h) > u} ≥ θμh (1 + o(1)) = (θτ/n)(1 + o(1)),

and hence

P{M(T) ≤ u} = (1 − P{M(h) > u})ⁿ + o(1) ≤ (1 − θτ/n + o(1/n))ⁿ + o(1),

so that

lim sup_{T→∞} P{M(T) ≤ u} ≤ e^{−θτ}.

By letting θ ↑ 1 we see that lim sup_{T→∞} P{M(T) ≤ u} ≤ e^{−τ}. That the opposite inequality holds for the lim inf is seen in a similar way, but even more simply, from Lemma 7.2(i) (no θ being involved), so that the entire result follows.  □
COROLLARY 7.7  Suppose the conditions of the theorem hold, and let E = E_T be any interval of length γT, for a constant γ > 0. Then

P{M(E) ≤ u} → e^{−γτ} as T → ∞.

PROOF  By stationarity we may take E to be an interval with left endpoint at zero, so that P{M(E) ≤ u} = P{M(γT) ≤ u}. It is simply checked that the process η(t) = ξ(γt) satisfies the conditions of the theorem, and has mean number of upcrossings per unit time given by μ_η = γμ, so that Tμ_η → γτ. Writing M_η for the maximum of η, the result follows at once, since P{M(γT) ≤ u} = P{M_η(T) ≤ u} → e^{−γτ}.  □
It is now a simple matter to obtain the double exponential limiting law for M(T) under a linear normalization. This is similar to the result of Theorem 3.5 for normal sequences.

THEOREM 7.8  Suppose that the (standardized) stationary normal process {ξ(t)} satisfies (7.1) and (7.2) (or (7.4)). Then

(7.7)  P{a_T(M(T) − b_T) ≤ x} → exp(−e^{−x}),

where

(7.8)  a_T = (2 log T)^{1/2},
       b_T = (2 log T)^{1/2} + log(λ₂^{1/2}/2π)/(2 log T)^{1/2}.

PROOF  Write τ = e^{−x} and define u = u_T by

(7.9)  Tμ = (T/2π) λ₂^{1/2} e^{−u²/2} = τ = e^{−x}.

Hence (7.6) holds. But it follows from (7.9) that

u = (2 log T)^{1/2} [1 + (x + log(λ₂^{1/2}/2π))/(2 log T)] + o(a_T^{−1}) = x/a_T + b_T + o(a_T^{−1}),

so that (7.6) gives

P{M(T) ≤ x/a_T + b_T + o(a_T^{−1})} → e^{−τ} = exp(−e^{−x}),

from which (7.7) follows at once.  □

It is of interest to note in passing that this calculation is somewhat simpler, due to the absence of a log u term in (7.9), than the corresponding calculation in the discrete case (cf. Theorem 1.11).
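The expansion in the proof can be verified numerically: solving (7.9) exactly for u and applying the normalization (7.8) should recover x, with an error of order 1/log T. A small sketch (the values of T, λ₂ and x are arbitrary):

```python
import math

def norming_constants(T, lam2):
    """a_T and b_T from (7.8)."""
    aT = math.sqrt(2 * math.log(T))
    bT = aT + math.log(math.sqrt(lam2) / (2 * math.pi)) / aT
    return aT, bT

def level_from_tau(T, lam2, x):
    """Exact solution u of (7.9): (T/2 pi) lam2^(1/2) exp(-u^2/2) = exp(-x)."""
    return math.sqrt(2 * (math.log(T * math.sqrt(lam2) / (2 * math.pi)) + x))

def normalized(T, lam2, x):
    """a_T (u_T - b_T); should be close to x for large T."""
    aT, bT = norming_constants(T, lam2)
    return aT * (level_from_tau(T, lam2, x) - bT)
```

The residual error comes from the second-order term of the square-root expansion and shrinks as log T grows.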
In the discrete case we obtained Poisson limiting behaviour for the exceedances of a high level. Corresponding results hold for the point processes of high level upcrossings under the conditions of this chapter. These are readily obtained from the present extremal theory by means of our familiar point process convergence theorem, as in the discrete case, resulting in a number of interesting consequences concerning local maxima, lengths of excursions, etc. We will defer such a discussion to Chapters 8 and 9. However, it is worth noting here that historically the asymptotic Poisson distribution of the number of high level upcrossings was proved first (under more restrictive conditions) by Volkonskii and Rozanov (1961). Cramér (1965) noted the connection with the maximum, given e.g. by

{N_u(T) = 0} = {M(T) ≤ u} ∪ {N_u(T) = 0, ξ(0) > u},

which led to the determination of the asymptotic distribution of M(T), and subsequent extremal development.
Extremal results under weaker conditions at infinity

As already noted, the above extremal results may be generalized by a weakening of either (7.1) or (7.2). The weakening of the "local condition" (7.1) by allowing λ₂ = ∞ is somewhat more complicated and will be described in Chapter 12. For a weakening of (7.2), describing the behaviour of the correlation at distant points, we may proceed by means similar to those used in the discrete case, and we devote the remainder of the present chapter to this, following Leadbetter, Lindgren and Rootzén (1979) and Mittal (1979). Of course we cannot expect a substantial weakening of (7.2), since it is clearly close to being a necessary condition.

Let h(t) be any function and define

(7.10)  ε_T(h) = {t ∈ (0,T]; |r(t)| log t > h(t)},

and write ℓ_T(h) for the Lebesgue measure of ε_T(h). By analogy with the conditions for discrete time we will place restrictions on the amount of time that |r(t)| log t is large, by requiring that there is some non-increasing function h with h(t) → 0 as t → ∞ such that

(7.11)  ℓ_T(h) = O(T/(log T)^γ), for some γ > 1/2,

and some constant K > 0 such that

(7.12)  ℓ_T(K) = O(T^η), for some η < 1.

Obviously the condition r(t) log t → 0 as t → ∞ implies that ε_T(h) is empty if e.g. h(t) = sup_{s≥t} |r(s)| log s, so that (7.11) is actually weaker than (7.2). In fact, (7.11) is also weaker than some other conditions which have been used on occasion. For example, since h is decreasing and |r(t)| > h(t)/log t ≥ h(T)/log T on ε_T(h),

∫ᵀ r²(t) dt ≥ ℓ_T(h) (h(T)/log T)²,

so that ∫₀^∞ r²(t) dt < ∞ implies ℓ_T(h) = O((log T/h(T))²) for all h, and (7.11) is indeed weaker than the condition ∫₀^∞ r²(t) dt < ∞, sometimes used in the literature.
THEOREM 7.9  Suppose that r(t) → 0 as t → ∞ and furthermore satisfies (7.1), (7.11), and (7.12), and let u, T → ∞ so that Tμ → τ > 0. Then q → 0 may be chosen so that qu → 0 and (7.4) holds, i.e.

(7.13)  (T/q) Σ_{ε≤kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)} → 0

for each ε > 0, so that P{M(T) ≤ u} → e^{−τ}.

PROOF  By Theorem 7.6 we have only to show that q may be chosen so that (7.13) holds. Let δ(t) = sup_{s≥t} |r(s)|, let β satisfy 0 < β < (1 − δ(ε))/(1 + δ(ε)), and split the sum in (7.13) into two parts at kq ≈ T^β. Let Σ' be the sum over ε ≤ kq ≤ T^β and Σ'' the sum over T^β < kq ≤ T. We can estimate Σ' simply from the number of terms, as follows,

(T/q) Σ' = (T/q) Σ_{ε≤kq≤T^β} |r(kq)| e^{−u²/(1+|r(kq)|)} ≤ (T/q)(T^β/q) e^{−u²/(1+δ(ε))}
         ≤ (K/q²) T^{1+β−2/(1+δ(ε))} ≤ (K/(qu)²) T^{1+β−2/(1+δ(ε))} log T → 0,

if qu → 0 slowly enough.
For the remaining sum Σ'' we need a bound on the number of terms for which |r(kq)| log kq is not bounded by a small function. Define, in analogy with ε_T(h) in (7.10),

n_T(h) = #{k; T^β < kq ≤ T, |r(kq)| log kq > h(kq)}

(where # denotes cardinality), for a function h. Since λ₂ < ∞, r has a bounded derivative, and therefore

|r(t+s) − r(t)| ≤ C|s|

for some constant C, and this can be used to give a bound for n_T(h) in terms of ℓ_T(h/2). In fact, we will see that

(7.14)  n_T(h) ≤ C' (log T/h(T)) ℓ_T(h/2)

if T is large enough. Since, for t ≥ kq,

|r(t)| log t ≥ (|r(kq)| − C|t − kq|) log kq,

we have that if kq is such that |r(kq)| log kq > h(kq), and kq ≤ t ≤ kq + h(T)/(2C log T), then |r(t)| log t > h(t)/2. Since u² ~ 2 log T,

h(T)/(2C log T) ~ (h(T)/(C qu)) · (q/u) < q

if qu → 0 slowly enough. This implies that for T large enough, the points kq which contribute to n_T(h) contribute disjoint intervals of length at least h(T)/(2C log T) to ε_T(h/2), and we get (7.14) with C' = 2C.
We can now proceed by splitting the sum Σ'' according to whether kq ∈ ε_T(2K) or not. Recalling the notation δ(t) = sup_{s≥t} |r(s)|, we have

(7.15)  (T/q) Σ'' = (T/q) Σ_{T^β<kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)}
        ≤ (T/q) Σ_{T^β<kq≤T, kq∈ε_T(2K)} δ(T^β) e^{−u²/(1+δ(T^β))}
        + (T/q) Σ_{T^β<kq≤T, kq∈ε_T(2K)*} |r(kq)| e^{−u²(1−2K/(β log T))}

(where the * denotes complementation). The first term in (7.15) is bounded by

(T/q) n_T(2K) δ(T^β) e^{−u²/(1+δ(T^β))} ≤ C' (T/q)(log T/2K) ℓ_T(K) δ(T^β) O(T^{−2/(1+δ(T^β))})
        ≤ K (log T)^{3/2} T^{1+η−2/(1+δ(T^β))} (qu)^{−1} o(1),

by (7.12) and (7.14). Since η < 1 and δ(T^β) → 0, this bound tends to zero if qu → 0 slowly enough.
The second term in (7.15) is bounded by

(7.16)  (T/q)² e^{−u²(1−2K/(β log T))} (β log T)^{−1} · (q/T) Σ |r(kq)| log kq = F₁ · F₂,

say, where the sum is extended over all kq such that T^β < kq ≤ T and kq ∈ ε_T(2K)*, and

F₁ = (T/q)² e^{−u²(1−2K/(β log T))} / (β log T),   F₂ = (q/T) Σ |r(kq)| log kq.

We will see that F₁ may tend slowly to infinity, but F₂ → 0 as T → ∞, so that F₁ F₂ → 0. We start with F₂, introducing the function h that appears in (7.11), and split the sum according to whether kq ∈ ε_T(2h) or not, giving

F₂ ≤ (q/T) · (T/q) · 2h(T^β) + (q/T) · 2K n_T(2h)
   ≤ 2h(T^β) + 2KC' (q/T)(log T/h(T)) ℓ_T(h)
   ≤ 2h(T^β) + h(T)^{−1} (log T)^{1−1/2−γ} (qu) O(1)
   = 2h(T^β) + k(T)(qu) O(1),

say, by condition (7.11). Since 1/2 − γ < 0, we have k(T) = h(T)^{−1}(log T)^{1/2−γ} → 0 as T → ∞, provided h(t) decreases sufficiently slowly. Note that if (7.11) is satisfied for some function h, then it is satisfied for all functions which decrease more slowly. We therefore assume that k(T) → 0 as T → ∞.
The remaining factor F₁ in (7.16) is given by

F₁ = (T/q)² e^{−u²(1−2K/(β log T))} / (β log T).

Using the fact that u² = 2 log T + O(1) we obtain

F₁ = O(1)(qu)^{−2}.

Thus, for some C > 0, the second term in (7.15) is at most

C ((qu)^{−2} h(T^β) + (qu)^{−1} k(T)).

Since k(T) does not depend on the choice of q, we may choose qu → 0 sufficiently slowly so that both terms tend to zero, which completes the proof of the theorem.  □
REMARK 7.10  As in discrete time, one would be inclined to consider a condition like

(7.17)  (q/T) Σ_{T^β<kq≤T} |r(kq)| log kq · e^{γ|r(kq)| log kq} → 0

as T → ∞, for some β < 1, γ > 2, which in fact can replace (7.11). However, (7.17) contains the somewhat arbitrary spacing q, and a more natural condition for a continuous time process would restrict the size of

∫₁ᵀ |r(t)| log t · e^{γ|r(t)| log t} dt.

However, it is not clear how this might be done, in relation to (7.17).  □
CHAPTER 8

POINT PROCESSES OF UPCROSSINGS

The extremal theory of normal processes, as developed in Chapter 7, is based mainly on the asymptotic independence of maxima over several separate intervals of constant length, and on the form of the tail distribution of the maximum over one such interval. In the proofs in Chapter 7 we made use of upcrossings, and of the obvious fact that the maximum exceeds u if there is at least one upcrossing of the level u. However, as was seen already in Chapter 4, upcrossings have an interest in their own right, and as we shall see here, in this continuous time setting, they contain considerable information about the local structure of the process.

This chapter is devoted to the asymptotic Poisson character of the point process formed by the upcrossings of increasingly high levels, and indeed this requires only little more than is needed to obtain the much weaker results of Chapter 7. In our derivation we shall make substantial use of the regularity condition λ₂ < ∞, which implies that the upcrossings do not "appear in clusters" and remain separated as the level increases. In fact, Theorems 6.7 and 6.5 imply that there are only a finite number of u-points in finite intervals. Similar results will be established in Chapter 12, for the case λ₂ = ∞, with regular upcrossings replaced by so called ε-upcrossings.

We shall first prove asymptotic independence of the maxima over disjoint intervals, and then use this result to prove that the point processes of upcrossings of several simultaneously increasing levels tend in distribution to a sequence of successively thinned Poisson processes. In the case of a finite fourth spectral moment this will eventually, in Chapter 9, give the joint distribution of heights and locations of the highest local maxima.
Poisson convergence of upcrossings

Corresponding to each level u we have defined

μ = μ(u) = (1/2π) λ₂^{1/2} e^{−u²/2}

to be the mean number of u-upcrossings per time unit, and, as in Chapter 7, we consider T = T(u) such that Tμ → τ as u → ∞, where τ > 0 is a fixed number. Let N*_T be the time-normalized point process of u-upcrossings, defined by

N*_T(B) = N_u(TB) = #{u-upcrossings by ξ(t); t/T ∈ B}

for any real Borel set B, i.e. N*_T has a point at t if ξ has a u-upcrossing at tT. Note that we define N*_T as a point process on the entire real line, and that the only significance of the time T is that of an appropriate scaling factor. This is a slight shift in emphasis from Chapter 7, where we considered u_T = x/a_T + b_T as a height normalization for the maximum over the increasing time interval (0,T).
Let N be a Poisson process on the real line with intensity τ. To prove point process convergence under suitable conditions, we need to prove different forms of asymptotic independence of maxima over disjoint intervals. For the one-level result, that N*_T converges in distribution to N, we need only the following partial independence.

LEMMA 8.1  Let a = a₁ < b₁ ≤ a₂ < ... ≤ a_r < b_r = b be fixed numbers, E_i = (Ta_i, Tb_i], and M(E_i) = sup{ξ(t); Ta_i < t ≤ Tb_i}. Then, under the conditions of Theorem 7.6,

P(∩_{i=1}^r {M(E_i) ≤ u}) − ∏_{i=1}^r P{M(E_i) ≤ u} → 0

as u → ∞, Tμ → τ > 0.
PROOF  The proof is similar to that of Lemmas 7.4 and 7.5. Recall the construction in Lemma 7.4, and divide the real line into intervals I_k, I_k* of lengths h − ε and ε, alternately. We can then approximate M(E_i) by the maximum on the parts of the separated intervals I_k which are contained in E_i. Write n for the number of I_k's which have non-empty intersection with ∪_{i=1}^r E_i. We at once obtain

0 ≤ P(∩_{i=1}^r {M(∪_k I_k ∩ E_i) ≤ u}) − P(∩_{i=1}^r {M(E_i) ≤ u}) ≤ n P{M(I₁*) > u},

where, writing |E| for the length of an interval E,

n ≤ h^{−1} Σ_{i=1}^r |E_i| + r = (T/h) Σ_{i=1}^r (b_i − a_i) + r ≤ T(b − a)/h + r.

Since Lemma 7.2(i) implies that lim sup_{u→∞} μ^{−1} P{M(I₁*) > u} ≤ ε, we therefore have

(8.1)  lim sup_{u→∞} |P(∩_{i=1}^r {M(∪_k I_k ∩ E_i) ≤ u}) − P(∩_{i=1}^r {M(E_i) ≤ u})| ≤ τε(b − a)/h.
Now, let q → 0 as u → ∞ so that qu → 0. The discrete approximation of maxima in terms of ξ(jq), jq ∈ ∪_k I_k ∩ E_i, is then obtained as in Lemma 7.4(ii). In fact, since there are at most n intervals I_k which intersect ∪ E_i, we have

(8.2)  0 ≤ P(∩_{i=1}^r {ξ(jq) ≤ u, jq ∈ ∪_k I_k ∩ E_i}) − P(∩_{i=1}^r {M(∪_k I_k ∩ E_i) ≤ u})
       ≤ Σ_k (P{ξ(jq) ≤ u, jq ∈ I_k ∩ ∪_i E_i} − P{M(I_k ∩ ∪_i E_i) ≤ u}) ≤ n o(μ) = o(1)

as u → ∞, by Lemma 7.3(ii). Furthermore,

(8.3)  P(∩_{i=1}^r {ξ(jq) ≤ u, jq ∈ ∪_k I_k ∩ E_i}) − ∏_{i=1}^r y_i → 0,

where y_i = P{ξ(jq) ≤ u, jq ∈ ∪_k I_k ∩ E_i}, the proof this time being a rephrasing of the proof of Lemma 7.5(i).
By combining (8.1), (8.2), and (8.3) we obtain

lim sup_{u→∞} |P(∩_{i=1}^r {M(E_i) ≤ u}) − ∏_{i=1}^r y_i| ≤ τε(b − a)/h,

and in particular, for i = 1, ..., r,

lim sup_{u→∞} |P{M(E_i) ≤ u} − y_i| ≤ τε(b − a)/h.

Hence, writing x_i = P{M(E_i) ≤ u}, we have

lim sup_{u→∞} |P(∩_{i=1}^r {M(E_i) ≤ u}) − ∏_{i=1}^r x_i| ≤ τε(b − a)/h + lim sup_{u→∞} |∏_{i=1}^r y_i − ∏_{i=1}^r x_i|.

But, with z = max_i |y_i − x_i| (so that lim sup z ≤ τε(b − a)/h),

∏_{i=1}^r y_i − ∏_{i=1}^r x_i ≤ ∏_{i=1}^r (x_i + z) − ∏_{i=1}^r x_i ≤ rz,

with a similar relation holding with y_i and x_i interchanged, and hence

lim sup_{u→∞} |∏_{i=1}^r y_i − ∏_{i=1}^r x_i| ≤ r τε(b − a)/h.

Since ε is arbitrary, we have proved the conclusion of the lemma.  □
THEOREM 8.2  Let u → ∞ and T ~ τ/μ, where μ = (1/2π)λ₂^{1/2}e^{−u²/2}, and suppose that r(t) satisfies (7.1) and (7.2) (or the weaker condition (7.4)). Then the time-normalized point process N*_T of u-upcrossings converges in distribution to a Poisson process N with intensity τ.

PROOF  By the basic convergence theorem for simple point processes, Theorem A.1 (see Appendix to Part I), it is sufficient to show that, as u → ∞,

(a)  E(N*_T((a,b])) → E(N((a,b])) = τ(b − a) for all a < b, and

(b)  P{N*_T(B) = 0} → P{N(B) = 0} for all sets B of the form ∪_{i=1}^r (a_i, b_i].

Here, part (a) is trivially satisfied, since

E(N*_T((a,b])) = E(N_u((Ta,Tb])) = Tμ(b − a) → τ(b − a).
For part (b), we have for the u-upcrossings

P{N*_T(B) = 0} = P(∩_{i=1}^r {N_u(E_i) = 0}),

where E_i = (Ta_i, Tb_i]. Now it is easy to see that we can work with maxima instead of crossings, since

0 ≤ P(∩_{i=1}^r {N_u(E_i) = 0}) − P(∩_{i=1}^r {M(E_i) ≤ u})
  = P(∩_{i=1}^r {N_u(E_i) = 0} ∩ ∪_{i=1}^r {M(E_i) > u}) ≤ Σ_{i=1}^r P{ξ(Ta_i) > u} → 0

as u → ∞, and since furthermore Corollary 7.7 and Lemma 8.1 imply that

lim_{u→∞} P(∩_{i=1}^r {M(E_i) ≤ u}) = lim_{u→∞} ∏_{i=1}^r P{M(E_i) ≤ u} = ∏_{i=1}^r e^{−τ(b_i−a_i)},

we have proved part (b).  □
One immediate consequence of the distributional convergence of N*_T is the asymptotic Poisson distribution of the number of u-upcrossings in increasing Borel sets T·B. Since this is an important result we formulate it as a corollary.

COROLLARY 8.3  Under the conditions of Theorem 8.2, if B is any Borel set whose boundary has Lebesgue measure zero, then

(8.4)  P{N*_T(B) = k} → e^{−τ|B|} (τ|B|)^k / k!,  k = 0,1,2,...,

as u → ∞, where |B| is the Lebesgue measure of B. The joint distribution of N*_T(B₁), ..., N*_T(B_n) corresponding to disjoint B_j (with boundaries which have Lebesgue measure zero) converges to the product of the corresponding Poisson probabilities.  □
Full independence of maxima in disjoint intervals

A topic of some interest, which we have not touched upon yet, is the relationship between the intensity \tau and the height u of a level u_\tau for which T\mu(u_\tau) \to \tau. If

    T = \tau/\mu = \tau \cdot 2\pi \lambda_2^{-1/2} e^{u^2/2},

we have

    u^2 = 2 \log T \left(1 - \frac{\log \tau + \log(2\pi/\lambda_2^{1/2})}{\log T}\right),

or

(8.5)    u = (2 \log T)^{1/2} - \frac{\log \tau + \log(2\pi/\lambda_2^{1/2})}{(2 \log T)^{1/2}} + o((\log T)^{-1/2}).

Any level of this form may be used equally well in Theorem 8.2, and it is often convenient to use the level obtained by deleting the o-term in (8.5) entirely. (The reader should check that also for this choice the relation T\mu(u_\tau) \to \tau holds.) If we write

(8.6)    u_\tau = (2 \log T)^{1/2} - \frac{\log \tau + \log(2\pi/\lambda_2^{1/2})}{(2 \log T)^{1/2}},

we have, for \tau > \tau^* > 0,

(8.7)    u_{\tau^*} - u_\tau = \frac{\log \tau - \log \tau^*}{(2 \log T)^{1/2}} = \frac{\log(\tau/\tau^*)}{(2 \log T)^{1/2}} > 0,

so that levels corresponding to different intensities \tau, \tau^* (under the same time-normalization T) become increasingly close to each other, the difference being of the order 1/u_\tau. Note that (8.7) holds for any u_\tau, u_{\tau^*} which satisfy T\mu(u_\tau) \to \tau, T\mu(u_{\tau^*}) \to \tau^*, and not only for the particular choice (8.6).
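As a numerical illustration (ours, not the book's): with \mu(u) = (2\pi)^{-1}\lambda_2^{1/2} e^{-u^2/2}, the level (8.6) satisfies T\mu(u_\tau) \to \tau, and the difference of two such levels is exactly \log(\tau/\tau^*)/(2\log T)^{1/2}, as in (8.7).

```python
import math

def mu(u, lam2):
    # Rice's formula: mean number of u-upcrossings per unit time
    return math.sqrt(lam2) / (2 * math.pi) * math.exp(-u * u / 2)

def level(tau, T, lam2):
    # the level (8.6)
    c = math.sqrt(2 * math.log(T))
    return c - (math.log(tau) + math.log(2 * math.pi / math.sqrt(lam2))) / c

lam2, tau = 1.0, 2.0
for T in (1e3, 1e6, 1e12):
    print(T, T * mu(level(tau, T, lam2), lam2))   # slowly approaches tau = 2
```

The convergence is logarithmically slow, which is typical of the normal extreme value theory.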
Now, let u^{(1)} \ge u^{(2)} \ge \ldots \ge u^{(r)} be r levels such that the point processes of upcrossings are asymptotically Poisson with intensities 0 < \tau_1 \le \tau_2 \le \ldots \le \tau_r under one and the same time-normalization, i.e.

    T\mu(u^{(i)}) \to \tau_i,  i = 1, \ldots, r,

as T \to \infty. We shall prove the full asymptotic independence of maxima in disjoint increasing intervals under the conditions (7.1) and (7.2) (or condition (7.4) with u replaced by u^{(r)}), i.e.

(8.8)    \frac{T}{q} \sum_{\epsilon \le kq \le T} |r(kq)| \exp\left(-\frac{(u^{(r)})^2}{1 + |r(kq)|}\right) \to 0

for each \epsilon > 0, and some q \to 0 such that u^{(r)} q \to 0.
THEOREM 8.4  Let u^{(1)} \ge u^{(2)} \ge \ldots \ge u^{(r)} \to \infty, T \to \infty, and suppose T\mu(u^{(i)}) \to \tau_i, 0 < \tau_1 \le \ldots \le \tau_r. Suppose that r(t) satisfies (7.1), and either (7.2) or the weaker condition (8.8). Then, for any 0 \le a_1 < b_1 \le a_2 < \ldots \le a_s < b_s,

(8.9)    P\left(\bigcap_{i=1}^s \{M((Ta_i, Tb_i]) \le u_{T,i}\}\right) \to \exp\left(-\sum_{i=1}^s \tau_{(i)}(b_i - a_i)\right),

where each u_{T,i} is one of u^{(1)}, \ldots, u^{(r)}, and \tau_{(i)} is the corresponding intensity, T\mu(u_{T,i}) \to \tau_{(i)}.

PROOF  For the proof it is enough to check that, with E_i = (Ta_i, Tb_i],

    P\left(\bigcap_{i=1}^s \{M(E_i) \le u_{T,i}\}\right) - \prod_{i=1}^s P\{M(E_i) \le u_{T,i}\} \to 0,

and this goes step by step as the proof of Lemma 8.1, with u replaced by the appropriate u_{T,i}. We only have to make sure that one can use the same grid in the discrete approximation for each level u^{(i)}, and this is easy, since (8.7) implies that u^{(i)}q - u^{(j)}q \to 0 under the stated conditions, so that u^{(i)}q \to 0 if u^{(r)}q \to 0 (cf. the proof of Theorem 4.9).  □
Upcrossings of several adjacent levels

The Poisson convergence theorem, Theorem 8.2, implies that any of the time-normalized point processes of upcrossings of levels u^{(1)} > \ldots > u^{(r)} is asymptotically Poisson if T\mu(u^{(i)}) \to \tau_i > 0 as T, u^{(i)} \to \infty. We shall now investigate the dependence between these point processes, following similar lines to those in Chapter 4.

To describe this dependence we shall represent the upcrossings as points in the plane, rather than on the line, letting the upcrossings of the level u^{(i)} define a point process on a fixed line L_i, as was done in Chapter 4. However, for the normal process treated in this chapter the lines L_1, \ldots, L_r can be chosen to have a very simple relation to the process itself by utilizing the process

    \xi_T(t) = a_T(\xi(tT) - b_T),

where time has been normalized by a factor T and height by

    a_T = (2 \log T)^{1/2},
    b_T = (2 \log T)^{1/2} + \log(\lambda_2^{1/2}/2\pi)/(2 \log T)^{1/2},

as usual. Now, \xi_T(t) = x if and only if \xi(tT) = x/a_T + b_T, and clearly the mean number of upcrossings of the level x by \xi_T(t) in an interval of length h equals

    \frac{hT}{2\pi} \lambda_2^{1/2} \exp(-(x/a_T + b_T)^2/2),

which by (8.6) is equal to h\tau_x(1 + o(1)) as T \to \infty, with \tau_x = e^{-x}. Therefore, let x_1 \ge x_2 \ge \ldots \ge x_r be a set of fixed numbers, defining horizontal lines L_1, L_2, \ldots, L_r, and consider the point process in the plane formed by the upcrossings of any of these lines by the process \xi_T(t). Here the dependence between points on different lines is not quite as simple as it was in Chapter 4, since, unlike an exceedance, an upcrossing of a high level is not an upcrossing of a lower level, and there may in fact even be more upcrossings of the higher than of the lower level; see the following diagram, which shows the relation between the upcrossings of levels u^{(i)} = x_i/a_T + b_T by \xi(t), and of levels x_i by \xi_T(t). As is seen, local irregularities in the process \xi(t) can cause the appearance of extra upcrossings of a high level, not present in the lower ones.
[Diagram: upcrossings of the levels u^{(i)} = x_i/a_T + b_T by \xi(t) on (0,T), and of the fixed levels x_i by \xi_T(t); local irregularities produce extra upcrossings of the higher level.]
Let N_T^* denote the point process in the plane formed by the upcrossings of the fixed levels x_1 \ge x_2 \ge \ldots \ge x_r by the process \xi_T(t) = a_T(\xi(tT) - b_T), and let its components on the separate lines be N_T^{(1)}, \ldots, N_T^{(r)}, so that

    N_T^*(B) = \#\{upcrossings in B of L_1, \ldots, L_r by \xi_T(t)\} = \sum_{i=1}^r N_T^{(i)}(B \cap L_i),

for arbitrary Borel sets B \subseteq R^2. We shall now prove that N_T^* converges in distribution to a point process N in the plane, which is of a type already encountered in connection with exceedances in Chapter 4. The points of N are concentrated on the lines L_1, \ldots, L_r, and its distribution is determined by the joint distributions of its components N^{(1)}, \ldots, N^{(r)} on the separate lines L_1, \ldots, L_r.

Let, as in Chapter 4, \{\sigma_{1j}; j = 1, 2, \ldots\} be the points of a Poisson process N^{(r)} with parameter \tau_r = e^{-x_r} on L_r. Let \beta_j, j = 1, 2, \ldots, be i.i.d. random variables, independent of N^{(r)}, with distribution defined by

    P\{\beta_j = s\} = (\tau_{r-s+1} - \tau_{r-s})/\tau_r,  s = 1, \ldots, r-1,
    P\{\beta_j = r\} = \tau_1/\tau_r,

so that P\{\beta_j \ge s\} = \tau_{r-s+1}/\tau_r for s = 1, 2, \ldots, r. Construct the processes N^{(r-1)}, \ldots, N^{(1)} by placing points \sigma_{2j}, \sigma_{3j}, \ldots, \sigma_{\beta_j j} on the lines L_{r-1}, \ldots, L_{r-\beta_j+1}, vertically above \sigma_{1j}, j = 1, 2, \ldots, and finally define N to be the sum of the r processes N^{(1)}, \ldots, N^{(r)}.

As before, each N^{(k)} is Poisson on L_k, since it is obtained from the Poisson process N^{(r)} by independent deletion of points, with deletion probability 1 - P\{\beta_j \ge r - k + 1\} = 1 - \tau_k/\tau_r, and it has intensity \tau_r(\tau_k/\tau_r) = \tau_k. Furthermore, N^{(k)} is obtained from N^{(k+1)} by a binomial thinning, with deletion probability 1 - \tau_k/\tau_{k+1}. Of course, N itself is not Poisson in the plane.
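The construction of N can be sketched in code (a simulation sketch of ours, with hypothetical function names): a Poisson process on the lowest line, each point lifted \beta_j - 1 lines up, so that the component on each line L_k has empirical intensity close to \tau_k.

```python
import random

random.seed(2)

def simulate_N(taus, length):
    """Sketch of the limiting process N: a Poisson process with intensity
    tau_r on the lowest line L_r; each of its points is copied onto the
    beta - 1 lines above it, where P{beta >= s} = tau_{r-s+1}/tau_r."""
    r = len(taus)                    # taus[0..r-1] = tau_1 <= ... <= tau_r
    counts = [0] * r                 # counts[k] = number of points on line L_{k+1}
    t = 0.0
    while True:
        t += random.expovariate(taus[-1])
        if t > length:
            break
        v = random.random()
        beta = max(s for s in range(1, r + 1) if v < taus[r - s] / taus[-1])
        for k in range(r - beta, r):     # lines L_{r-beta+1}, ..., L_r
            counts[k] += 1
    return counts

taus, length = [0.5, 1.0, 2.0], 20000.0
counts = simulate_N(taus, length)
print([c / length for c in counts])   # each component has intensity tau_k
```

The binomial thinning structure is implicit here: a point survives from line L_{k+1} to L_k exactly when \beta reaches one step further up.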
When proving the main result, that N_T^* tends in distribution to N, we need to show that asymptotically there are not more upcrossings of a higher than of a lower level. With a convenient abuse of notation, write N_T^{(i)}(I) for the number of points of N_T^{(i)} with time-coordinate in I.

LEMMA 8.5  Suppose x_i > x_j, and consider the point processes N_T^{(i)} and N_T^{(j)} of upcrossings by \xi_T(t) of the levels x_i and x_j, respectively. Under the conditions of Theorem 8.2,

    P\{N_T^{(i)}(I) > N_T^{(j)}(I)\} \to 0

as T \to \infty, for any bounded interval I.

PROOF  By stationarity it is sufficient to prove the lemma for I = (0,1]. Let I_k = ((k-1)/n, k/n], k = 1, \ldots, n, for fixed n, and recall the notation (8.6): u_{T,j} = x_j/a_T + b_T, \tau_j = e^{-x_j}. Since, by Theorem 6.7, all crossings are strict, the event \{N_T^{(i)}(I) > N_T^{(j)}(I)\} implies that one of the events

    \bigcup_{k=0}^n \{\xi(kT/n) > u_{T,j}\}  or  \bigcup_{k=1}^n \{N_T^{(i)}(I_k) \ge 2\}

occurs, so that Boole's inequality and stationarity give

    P\{N_T^{(i)}(I) > N_T^{(j)}(I)\} \le \sum_{k=0}^n P\{\xi(kT/n) > u_{T,j}\} + \sum_{k=1}^n P\{N_T^{(i)}(I_k) \ge 2\}
      = (n+1) P\{\xi(0) > u_{T,j}\} + n P\{N_T^{(i)}(I_1) \ge 2\}.

Obviously, (n+1)P\{\xi(0) > u_{T,j}\} \to 0, while by Corollary 8.3,

    P\{N_T^{(i)}(I_1) \ge 2\} \to 1 - e^{-\tau_i/n} - \frac{\tau_i}{n} e^{-\tau_i/n},

which implies

    \limsup_{T \to \infty} P\{N_T^{(i)}(I) > N_T^{(j)}(I)\} \le n\left(1 - e^{-\tau_i/n} - \frac{\tau_i}{n} e^{-\tau_i/n}\right).

Since n is arbitrary and n(1 - e^{-\tau/n} - (\tau/n)e^{-\tau/n}) \to 0 as n \to \infty (\tau > 0), this proves the lemma.  □
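The final bound in the lemma behaves like \tau^2/2n for large n; a two-line numerical check (ours):

```python
import math

def bound(n, tau):
    # n * (1 - e^{-tau/n} - (tau/n) e^{-tau/n}), the limsup bound in Lemma 8.5
    return n * (1 - math.exp(-tau / n) - (tau / n) * math.exp(-tau / n))

for n in (10, 100, 1000):
    print(n, bound(n, 2.0))   # decreases roughly like tau^2 / (2n)
```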
THEOREM 8.6  Suppose that r(t) satisfies (7.1) and (7.2) (or, more generally, (8.8)), let 0 < \tau_1 < \tau_2 < \ldots < \tau_r be real positive numbers, and let N_T^* be the point process of upcrossings of the levels x_1 > x_2 > \ldots > x_r (\tau_i = e^{-x_i}) by the normalized process \xi_T(t) = a_T(\xi(tT) - b_T), represented on the lines L_1, \ldots, L_r. Then, as T \to \infty, N_T^* tends in distribution to the point process N in the plane, described above, with points on the horizontal lines L_i, i = 1, \ldots, r, generated by a Poisson process N^{(r)} on L_r with intensity \tau_r, and a sequence of successive binomial thinnings N^{(k)} with deletion probabilities 1 - \tau_k/\tau_{k+1}, k = 1, \ldots, r-1.

PROOF  This follows similar lines as the proof of Theorem 4.11, in that one has to show that

(a)  E(N_T^*(B)) \to E(N(B)) for all sets B of the form (a,b] \times (\alpha,\beta], 0 \le a < b, \alpha < \beta, and

(b)  P\{N_T^*(B) = 0\} \to P\{N(B) = 0\} for all sets B which are finite unions of disjoint sets of this form.

Here, if B = (a,b] \times (\alpha,\beta], and (\alpha,\beta] contains exactly the lines L_s, \ldots, L_m, then

    E(N_T^*(B)) = E\left(\sum_{k=s}^m N_T^{(k)}((a,b])\right) = \sum_{k=s}^m T(b-a)\mu(u^{(k)}) \to (b-a)\sum_{k=s}^m \tau_k = E(N(B)),

so that (a) is satisfied.
To prove (b), as in the proof of Theorem 4.11, write B in the form

    B = \bigcup_{k=1}^m C_k = \bigcup_{k=1}^m \left((a_k, b_k] \times \bigcup_{j=1}^{j_k} (\alpha_{kj}, \beta_{kj}]\right),

and let m_k be the index of the lowest line L_j that intersects C_k, i.e. L_{m_k} \cap C_k \ne \emptyset, L_j \cap C_k = \emptyset for j > m_k. Then clearly, if N_T^{(m_k)}((a_k, b_k]) = 0, then either N_T^*(C_k) = 0 or there is an index i < m_k such that N_T^{(i)}((a_k, b_k]) > 0, i.e. in (a_k, b_k] there are more upcrossings of a higher than of a lower level. Since obviously N_T^*(C_k) = 0 implies N_T^{(m_k)}((a_k, b_k]) = 0,

    P\left(\bigcap_{k=1}^m \{N_T^*(C_k) = 0\}\right) - P\left(\bigcap_{k=1}^m \{N_T^{(m_k)}((a_k, b_k]) = 0\}\right) \to 0,

since the difference is bounded by the probability that some higher level has more upcrossings than a lower one, which tends to zero by Lemma 8.5. But

    \{N_T^{(m_k)}((a_k, b_k]) = 0\} \cap \{\xi_T(a_k) \le x_{m_k}\} = \{M((Ta_k, Tb_k]) \le u_{T,k}\},

where u_{T,k} = x_{m_k}/a_T + b_T, and since P\{\xi_T(a_k) > x_{m_k}\} \to 0, Theorem 8.4 implies that

    \lim_{T \to \infty} P\left(\bigcap_{k=1}^m \{N_T^{(m_k)}((a_k, b_k]) = 0\}\right) = \lim_{T \to \infty} P\left(\bigcap_{k=1}^m \{M((Ta_k, Tb_k]) \le u_{T,k}\}\right) = \prod_{k=1}^m e^{-\tau_{m_k}(b_k - a_k)},

where \tau_{m_k} = e^{-x_{m_k}}. Clearly this is just P\{N(B) = 0\}, and thus the proof of part (b) is complete.  □
COROLLARY 8.7  Let \{\xi(t)\} satisfy the conditions of Theorem 8.6, and let B_1, \ldots, B_s be real Borel sets whose boundaries have Lebesgue measure zero. Then, for integers m_j^{(k)},

    P\{N_T^{(k)}(B_j) = m_j^{(k)}, j = 1, \ldots, s, k = 1, \ldots, r\} \to P\{N^{(k)}(B_j) = m_j^{(k)}, j = 1, \ldots, s, k = 1, \ldots, r\}.

For example, for disjoint B_1 and B_2, the limiting joint distribution factors into the product of the distributions over B_1 and over B_2.
CHAPTER 9

LOCATION OF MAXIMA

So far, we have examined the extremal properties of a continuous process \xi(t) by sections at certain (increasing) levels. Even if this gives perfect information about the height of the global maximum of the process, it does not directly tell us where this maximum occurs or how it is related to possible lower local maxima.

The maximum of \xi(t), 0 \le t \le T, is certainly attained at some point in [0,T]. However, the maximum level may be reached many times, or even infinitely often. But there will - by continuity of \xi(t) - be a first occasion on which \xi(t) attains its maximum in [0,T], and we denote this by L(T).

We state the first result concerning L(T) as a lemma, though it is rather obvious.

LEMMA 9.1  L(T) is a r.v. For 0 < t < T,

    P\{L(T) \le t\} = P\{M(0,t) \ge M(t,T)\}.

PROOF  Both statements follow from the equivalence of the events \{L(T) \le t\} and \{M(0,t) \ge M(t,T)\}, the latter being measurable since M(0,t) and M(t,T) are r.v.'s.  □
The distribution of L(T) can have a jump at 0 and at T, as simple examples (such as the process \xi(t) = A \cos(t - \phi) with T < 2\pi) show. However, a simple condition precludes the possibility of any other jumps in the distribution of L(T), and this is generally satisfied except in "degenerate" or "deterministic" cases. Specifically, we will say that \xi(t) has a derivative in probability at t_0 if there exists a r.v. \eta such that

    \frac{\xi(t_0 + h) - \xi(t_0)}{h} \to \eta  in probability as  h \to 0.

Clearly if \xi has a q.m. or probability-one derivative, it has a derivative in probability (with the same value).
THEOREM 9.2  Suppose that \xi(t) has a derivative in probability at t (where 0 < t < T), and that the distribution of this derivative is continuous at zero. Then P\{L(T) = t\} = 0.

PROOF  Let \eta denote the derivative in probability at t. Clearly

    \{L(T) = t\} \subseteq \left\{\frac{\xi(t) - \xi(t-h)}{h} \ge 0\right\} \cap \left\{\frac{\xi(t) - \xi(t+h)}{h} \ge 0\right\}

for all h > 0 such that 0 < t - h and t + h < T. Now (\xi(t+h) - \xi(t))/h \to \eta in probability as h \to 0, and there exists a sequence \{h_n\} such that (\xi(t + h_n) - \xi(t))/h_n \to \eta with probability one. By considering a subsequence of \{h_n\} we may also arrange that (\xi(t - h_n) - \xi(t))/(-h_n) \to \eta with probability one, i.e. on a set B with P(B) = 1. We see that \{L(T) = t\} \cap B \subseteq \{\eta = 0\}, i.e. \eta = 0 on \{L(T) = t\} \cap B, and hence

    P\{L(T) = t\} \le P(\{L(T) = t\} \cap B) + P(B^c) \le P\{\eta = 0\} + P(B^c) = 0,

when \eta has a distribution which is continuous at zero.  □
Turning to stationary processes, one may be tempted to conjecture that if \xi(t) is stationary, then L(T) is uniformly distributed over (0,T). For example this is so if \xi(t) = A \cos(t - \phi), with \phi uniformly distributed over (0, 2\pi] and T = 2\pi. (If A is Rayleigh distributed and independent of \phi, \xi(t) of course is normal.) If T < 2\pi, there is a positive probability of L being 0 or T, and L(T) is not strictly uniform. However, its distribution is still uniform between 0 and T, as a simple calculation shows.
In general, however, L need not be uniform in the open interval (0,T), not even if \xi(t) is normal and stationary. As an example of this, let \phi_1, \phi_2, A_1, and A_2 be independent, with \phi_1 and \phi_2 uniform over (0, 2\pi], and with A_1 and A_2 Rayleigh distributed, and put

    \xi(t) = A_1 \cos(t - \phi_1) + A_2 \cos(100t - \phi_2).

Then \xi(t) is a stationary normal process, and (e.g. by drawing a picture) it can be seen that if A_1 < A_2, \phi_1 \in (3\pi/2, 2\pi], and \phi_2 \in (\pi/4, 3\pi/4], then the maximum of \xi(t) over [0, \pi/2] is attained in the interval (0, \pi/100]. Hence

    P\{L(\pi/2) \le \pi/100\} \ge P\{A_1 < A_2, 3\pi/2 < \phi_1 \le 2\pi, \pi/4 < \phi_2 \le 3\pi/4\} = \frac{1}{2} \cdot \frac{1}{4} \cdot \frac{1}{4} = \frac{1}{32} > \frac{1}{50} = \frac{\pi/100}{\pi/2},

and L(\pi/2) can not be uniform over (0, \pi/2).
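A Monte Carlo sketch of this example (our illustration, using a grid search over a discretized path), estimating P\{L(\pi/2) \le \pi/100\} and comparing it with the uniform value 1/50:

```python
import math, random

random.seed(3)

def location_of_max(a1, phi1, a2, phi2, n=1000):
    # grid search for the location of the maximum of
    # xi(t) = A1 cos(t - phi1) + A2 cos(100 t - phi2) on [0, pi/2]
    best_t, best_v = 0.0, -float("inf")
    for k in range(n + 1):
        t = (math.pi / 2) * k / n
        v = a1 * math.cos(t - phi1) + a2 * math.cos(100 * t - phi2)
        if v > best_v:
            best_t, best_v = t, v
    return best_t

reps, hits = 2000, 0
for _ in range(reps):
    a1 = math.sqrt(-2 * math.log(1 - random.random()))  # Rayleigh amplitude
    a2 = math.sqrt(-2 * math.log(1 - random.random()))
    phi1 = random.uniform(0, 2 * math.pi)
    phi2 = random.uniform(0, 2 * math.pi)
    if location_of_max(a1, phi1, a2, phi2) <= math.pi / 100:
        hits += 1
print(hits / reps, 1 / 32, 1 / 50)
```

The estimated probability exceeds 1/50, confirming the non-uniformity.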
However, for a stationary normal process the distribution of L(T) is always symmetric in the entire interval [0,T], and possible jumps at 0 and T are equal in magnitude. This follows from the reversibility of a stationary normal process, in the sense that \{\xi(-t)\} has the same distribution as \{\xi(t)\}.

One method of removing boundaries, like 0 and T, is to let them disappear to infinity, and one may ask whether L = L(T) might be asymptotically uniform as T \to \infty. For normal processes, this is a simple consequence of the asymptotic independence of maxima over disjoint intervals, as was previously mentioned. We state these results here, as simple consequences of Theorem 8.4.
THEOREM 9.3  Let \{\xi(t)\} be a stationary normal process (standardized as usual) with \lambda_2 < \infty, and suppose that r(t) \log t \to 0 as t \to \infty. Then, as T \to \infty,

    P\{L(T) \le \ell T\} \to \ell  (0 \le \ell \le 1).

PROOF  With the usual notation, if 0 < \ell < 1, \ell^* = 1 - \ell, and the a's and b's are given by (7.8), then

    P\{L(T) \le \ell T\} = P\{M(0, \ell T) \ge M(\ell T, T)\} = P\left\{X_T \ge \frac{a_{\ell T}}{a_{\ell^* T}} Y_T + a_{\ell T}(b_{\ell^* T} - b_{\ell T})\right\},

where X_T = a_{\ell T}(M(0, \ell T) - b_{\ell T}), Y_T = a_{\ell^* T}(M(\ell T, T) - b_{\ell^* T}), a_{\ell^* T}/a_{\ell T} \to 1, and a_{\ell T}(b_{\ell^* T} - b_{\ell T}) \to \log(\ell^*/\ell). As T \to \infty the above probability tends to

    P\{X - Y \ge \log(\ell^*/\ell)\},

where X and Y are independent r.v.'s with common d.f. \exp(-e^{-x}), and an evaluation of this probability yields the desired value \ell.  □
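In the last step, X - Y has a logistic distribution, so P\{X - Y \ge c\} = 1/(1 + e^c), which equals \ell for c = \log((1-\ell)/\ell). A numerical confirmation (ours), integrating P\{Y \le x - c\} against the Gumbel density of X:

```python
import math

def p_diff_ge(c, n=200000, lo=-12.0, hi=12.0):
    # P{X - Y >= c} for independent Gumbel X, Y (midpoint rule)
    h, total = (hi - lo) / n, 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        fx = math.exp(-math.exp(-x)) * math.exp(-x)   # Gumbel density of X
        total += math.exp(-math.exp(-(x - c))) * fx * h
    return total

for ell in (0.2, 0.5, 0.7):
    c = math.log((1 - ell) / ell)
    print(ell, p_diff_ge(c), 1 / (1 + math.exp(c)))
```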
Height and location of local maxima

One consequence of Theorem 9.3 is that asymptotically the global maximum is attained in the interior (0,T) and thus also is a local maximum. For sufficiently regular processes one might consider also smaller, but still high, local maxima, which are separated from L(T).

We first turn our attention to normal processes which are twice differentiable in quadratic mean and have continuously differentiable sample paths.

In analogy to the development in Chapter 4, we shall consider the point process in the plane which is formed by the suitably transformed local maxima of \xi(t). (Note that since the process \xi(t) is continuous, the path of a_T(\xi(t) - b_T) is also continuous, and although its visits to any bounded rectangle B \subseteq R^2 are approximately Poisson in number, they are certainly not points.)

Suppose \xi(t), 0 \le t \le T, has local maxima at the points s_i, with heights \xi(s_i). Let a_T and b_T be the normalizing constants defined by (7.8), and define a point process in the plane by putting points at (T^{-1} s_i, a_T(\xi(s_i) - b_T)). We recall from Chapter 8 that asymptotically the upcrossings of the fixed level x by a_T(\xi(t) - b_T) form a Poisson process with intensity \tau = e^{-x} when time is normalized to t/T, and that an upcrossing of a level x is accompanied by an upcrossing of the higher level y with a probability e^{-y}/e^{-x} = e^{-(y-x)}.
When investigating the Poisson character of local maxima, a question of some interest is to what extent high level upcrossings and high local maxima can replace each other. Obviously there must be at least one local maximum between an upcrossing of a certain level u and the next downcrossing of the same level, so that, loosely speaking, there are at least as many high maxima as there are high upcrossings. As will now be seen there are, with high probability, no more. In fact we shall see that this is true even when u, T \to \infty in such a way that

    T\mu = T \frac{1}{2\pi} \lambda_2^{1/2} e^{-u^2/2}

converges. First recall the notation from Chapter 6, and let N_u'(T) denote the number of local maxima in (0,T] with height exceeding u, and N_u(T) the number of u-upcrossings in (0,T].

LEMMA 9.4  If u, T \to \infty in such a way that T\mu converges, then

(i)  E(N_u'(T)) - E(N_u(T)) \to 0,  and

(ii)  P\{|N_u'(T) - N_u(T)| \ge 1\} \to 0

as u \to \infty.

PROOF  First note that at least one of the events \{N_u'(T) \ge N_u(T)\} or \{\xi(T) > u\} occurs, and that in the latter case N_u'(T) \ge N_u(T) - 1. Therefore

    P\{|N_u'(T) - N_u(T)| \ge 1\} \le E(|N_u'(T) - N_u(T)|) \le E(N_u'(T) - N_u(T)) + 2P\{\xi(T) > u\},

and since P\{\xi(T) > u\} \to 0, (ii) is a direct consequence of (i). But E(N_u(T)) = T\mu, and E(N_u'(T)) is given by (6.18), where we can use 1 - \Phi(x) \sim \phi(x)/x as x \to \infty; it follows that

    E(N_u'(T)) = T \frac{1}{2\pi} \lambda_2^{1/2} e^{-u^2/2} (1 + o(1))

as u, T \to \infty, which proves (i).  □
THEOREM 9.5  Suppose the standardized stationary normal process \{\xi(t)\} has continuously differentiable sample paths and is twice quadratic mean differentiable (i.e. \lambda_4 < \infty), and suppose that r(t) \log t \to 0 as t \to \infty. Then the point process N_T of normalized local maxima (s_i/T, a_T(\xi(s_i) - b_T)) converges in distribution to a Poisson process N' in the plane with intensity measure equal to the product of Lebesgue measure and that defined by the increasing function -e^{-x}.

PROOF  By Theorem A.1 it is enough to show that

(a)  E(N_T(B)) \to E(N'(B)) = (b-a)(e^{-\alpha} - e^{-\beta}) for any set B of the form (a,b] \times (\alpha,\beta], 0 \le a < b, \alpha < \beta, and

(b)  P\{N_T(B) = 0\} \to P\{N'(B) = 0\} for sets B which are finite unions of sets of this form.

To prove (a), we use Lemma 9.4(i). Then, with u^{(1)} = \beta/a_T + b_T, u^{(2)} = \alpha/a_T + b_T,

    E(N_T(B)) = E(N'_{u^{(2)}}((Ta, Tb])) - E(N'_{u^{(1)}}((Ta, Tb])) \to (b-a)e^{-\alpha} - (b-a)e^{-\beta} = (b-a)(e^{-\alpha} - e^{-\beta}),

since T\mu(u^{(1)}) \to e^{-\beta} and T\mu(u^{(2)}) \to e^{-\alpha}.

Part (b) is a consequence of Lemma 9.4(ii) and the multilevel upcrossing theorem, Theorem 8.6. Let N_u(I), as before, denote the number of u-upcrossings by \xi(t), t \in I, and write the set B in the form \cup_j E_j \times F_j, where the E_j = (a_j, b_j] are disjoint and each F_j is a finite union of disjoint intervals. Suppose first that there is only one set E \times F, with E = (a,b] and F = \cup_k G_k, the G_k = (\alpha_k, \beta_k] disjoint, and put u^{(2k-1)} = \beta_k/a_T + b_T, u^{(2k)} = \alpha_k/a_T + b_T. By Lemma 9.4(ii), asymptotically every upcrossing of the high level u^{(2k-1)} is accompanied by one (and no more) local maximum above that level, and hence

    P\{N_T(E \times G_k) = 0\} = P\{N'_{u^{(2k)}}((Ta, Tb]) = N'_{u^{(2k-1)}}((Ta, Tb])\}
      = P\{N_{u^{(2k)}}((Ta, Tb]) = N_{u^{(2k-1)}}((Ta, Tb])\} + o(1).

By Theorem 8.6, with \tau_{2k} = e^{-\alpha_k}, \tau_{2k-1} = e^{-\beta_k},

    P\{N_{u^{(2k)}}((Ta, Tb]) = N_{u^{(2k-1)}}((Ta, Tb])\} \to \sum_{j=0}^{\infty} e^{-\tau_{2k}(b-a)} \frac{(\tau_{2k}(b-a))^j}{j!} \left(\frac{\tau_{2k-1}}{\tau_{2k}}\right)^j = e^{-(b-a)(e^{-\alpha_k} - e^{-\beta_k})}.

By slightly extending the argument we obtain

    P\{N_T(E \times \bigcup_k G_k) = 0\} = P\left(\bigcap_k \{N_{u^{(2k)}}((Ta, Tb]) = N_{u^{(2k-1)}}((Ta, Tb])\}\right) + o(1)
      \to e^{-(b-a)\sum_k (e^{-\alpha_k} - e^{-\beta_k})} = P\{N'(E \times \bigcup_k G_k) = 0\},

and we have proved part (b) for sets B of the simple form E \times \cup_k G_k. The general proof of part (b) is only notationally more complex.  □
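The near-coincidence of high-level upcrossings and high local maxima used above can be illustrated by simulation (our sketch; the two-frequency process below has r(t) = (\cos t + \cos 3t)/2, hence \lambda_2 = 5 and \lambda_4 = (1 + 81)/2 = 41 < \infty):

```python
import math, random

random.seed(4)

def sample_path(grid, w=3.0):
    # stationary normal process with r(t) = (cos t + cos wt)/2
    a1, b1, a2, b2 = (random.gauss(0, 1) for _ in range(4))
    s = math.sqrt(2)
    return [(a1 * math.cos(t) + b1 * math.sin(t)
             + a2 * math.cos(w * t) + b2 * math.sin(w * t)) / s for t in grid]

def upcrossings(p, u):
    return sum(1 for x, y in zip(p, p[1:]) if x <= u < y)

def maxima_above(p, u):
    # strict local maxima of the sampled path above level u
    return sum(1 for x, y, z in zip(p, p[1:], p[2:]) if y > u and y >= x and y > z)

dt = 0.005
grid = [k * dt for k in range(int(25 / dt))]
u, reps, agree = 2.0, 200, 0
for _ in range(reps):
    p = sample_path(grid)
    if upcrossings(p, u) == maxima_above(p, u):
        agree += 1
print(agree / reps)   # close to 1: high maxima and upcrossings nearly coincide
```

The few disagreements come from paths that start or end above the level u, exactly the boundary terms handled in Lemma 9.4.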
Location of the largest maxima

The limiting Poisson process in Theorem 9.5 has exactly the same distributions as that in Theorem 4.17 for \xi(t) normal, since \log G(s) = -e^{-s} in this case. This means that all consequences that can be drawn from that theorem about asymptotic properties of the normalized point process a_n(\xi_i - b_n) also carry over to the normalized point process of local maxima a_T(\xi(s_i) - b_T).

As an example we shall use Theorem 9.5 to give the simultaneous distribution of location and height of the two largest local maxima of \xi(t), t \in (0,T]. Let M_1(T) and M_2(T) be the highest and the second highest local maximum, and L_1(T), L_2(T) their locations.

THEOREM 9.6  Suppose \{\xi(t)\} satisfies the hypotheses of Theorem 9.5. Then, for x_1 \ge x_2 and 0 < \ell_1, \ell_2 < 1,

(9.1)    P\{L_1(T) \le \ell_1 T, L_2(T) \le \ell_2 T, a_T(M_1(T) - b_T) \le x_1, a_T(M_2(T) - b_T) \le x_2\}
           \to \ell_1 \ell_2 e^{-e^{-x_2}} (1 + e^{-x_2} - e^{-x_1}).
PROOF  The asymptotic distribution of the heights of the two highest local maxima,

    P\{a_T(M_1(T) - b_T) \le x_1, a_T(M_2(T) - b_T) \le x_2\} \to e^{-e^{-x_2}}(1 + e^{-x_2} - e^{-x_1}),  x_1 \ge x_2,

is a direct consequence of Theorem 4.14, formula (4.20), and the observation above that the limiting point process of normalized local maxima is the same as for the point process (i/n, a_n(\xi_i - b_n)), i = 1, \ldots, n, generated by a sequence of independent normal r.v.'s \xi_i.

But also the location of the local maxima can be obtained in this way. Suppose e.g. \ell_1 < \ell_2, and write I, J, K for the intervals (0, \ell_1 T), (\ell_1 T, \ell_2 T), (\ell_2 T, T), respectively. With u^{(1)} = x_1/a_T + b_T, u^{(2)} = x_2/a_T + b_T, the event in (9.1) can be expressed in terms of the highest and second highest local maxima over I, J, K as

    \{M_1(I) \le u^{(1)}, M_2(I) \le u^{(2)}, M_1(J) \le u^{(2)}, M_1(J) \le M_1(I), M_1(K) \le M_2(I \cup J)\},

and the limit of the probability of this event, when expressed in terms of the point process N_T of local maxima, is again the same as it would be for the point process of normalized independent r.v.'s. For such a process obviously L_1(T)/T and L_2(T)/T are independent and uniformly distributed over (0,1) and independent of the heights of the maxima, which proves the theorem.  □
Maxima under more general conditions

We have investigated the local maxima under the rather restrictive assumption that \xi(t) is twice differentiable (in quadratic mean), i.e. \lambda_4 < \infty. If \lambda_4 = \infty the mean number of zeros of \xi'(t) is infinite, by Rice's formula, and in fact infinitely close to every local maximum there are infinitely many more, which precludes the possibility of a Poisson type limit theorem for the locations of local maxima.

One way of getting around this difficulty is to exclude from further consideration a small interval around each high maximum, starting with the highest. To be more precise, let

    M_1(T) = \sup\{\xi(t); t \in (0,T)\}

be the global maximum, and

    L_1(T) = \inf\{t > 0; \xi(t) = M_1(T)\}

its location. For \epsilon > 0 an arbitrary but fixed constant, let I_1 = (0, L_1(T) - \epsilon) \cup (L_1(T) + \epsilon, T), and define

    M_{2,\epsilon}(T) = \sup\{\xi(t); t \in I_1\}.

Proceeding recursively, with M_{1,\epsilon}(T) = M_1(T), L_{1,\epsilon}(T) = L_1(T), we get a sequence M_{k,\epsilon}(T), L_{k,\epsilon}(T) of heights and locations of \epsilon-maxima, and there are certainly only a finite number of those in any finite interval. In fact, it is not difficult to relate these variables to the point processes of upcrossings (in the same way as regular local maxima can be replaced by upcrossings of high levels if \lambda_4 < \infty) and thereby obtain the following Poisson limit theorem, the proof of which is omitted.

THEOREM 9.7  Suppose \{\xi(t)\} is a standardized normal process with \lambda_2 < \infty and with r(t) \log t \to 0 as t \to \infty. Then the point process of normalized \epsilon-maxima (L_{i,\epsilon}(T)/T, a_T(M_{i,\epsilon}(T) - b_T)) converges in distribution to the same Poisson process N' in the plane as in Theorem 9.5.  □

Note that the limiting properties are independent of the \epsilon chosen. We shall return to processes with even more irregularity in Chapter 12.
CHAPTER 10

SAMPLE PATH PROPERTIES AT UPCROSSINGS

Our main concern in previous chapters has been the distribution of the number and location of upcrossings of one or several adjacent levels and of high local maxima. For instance we know from Theorem 8.6 and relation (8.7) that for a standard normal process each upcrossing of the high level u = u_\tau is accompanied by an upcrossing also of the level u_{\tau^*} = u_\tau + \log(\tau/\tau^*)/(2 \log T)^{1/2} with a probability p = \tau^*/\tau, asymptotically independently of all other upcrossings of u_\tau and u_{\tau^*}.

In order to throw further light on the conclusions and proofs in previous chapters we will now introduce some new concepts which give more precise information about the structure of the sample paths of \{\xi(t)\} near upcrossings of a level u. Perhaps a word of warning is appropriate here: we will need some slightly more difficult arguments than have been encountered so far.

We assume, as we did in Chapters 7-9, that \{\xi(t)\} is a stationary normal process with E(\xi(t)) = 0, E(\xi^2(t)) = 1, and covariance function r(\tau) satisfying

    r(\tau) = 1 - \lambda_2 \tau^2/2 + o(\tau^2)  as  \tau \to 0.

With a slightly more restrictive assumption on the o(\tau^2)-term (a logarithmic refinement with exponent a > 1), we can assume that \{\xi(t)\} has continuously differentiable sample paths - see Cramer and Leadbetter (1967), Section 9.5 - and we will do so, since it serves our purposes of illustration. We also assume throughout this chapter that for each choice of distinct non-zero points s_1, \ldots, s_n, the distribution of \xi(0), \xi'(0), \xi(s_1), \ldots, \xi(s_n) is non-singular. (A sufficient condition for this is that the spectral distribution F(\lambda) has a continuous component; see Cramer and Leadbetter (1967), Section 10.6.)
Since \lambda_2 = -r''(0) < \infty, the number of upcrossings of the level u in any bounded interval has a finite expectation and so will be finite with probability one. Let \{t_k\} be the locations of the upcrossings of u by \{\xi(t)\}, with t_0 \le 0 < t_1, and note that |t_k| \to \infty as |k| \to \infty. As before, we denote by N_u the point process of upcrossings with events at the \{t_k\}.

In order to retain information about the behaviour of \{\xi(t)\} near its upcrossings, we now attach to each t_k a mark \{\eta_k(t)\}, which is a function defined as

    \eta_k(t) = \xi(t_k + t).

Thus, the mark \eta_k is the entire sample function of \{\xi(t)\} translated back by the distance t_k. In particular \eta_k(0) = u, while for small t-values \eta_k(t) describes the behaviour of the \xi-process in the immediate vicinity of its k-th u-upcrossing t_k.

[Diagram: a sample path of \xi(t) with successive u-upcrossings t_{k-1}, t_k, t_{k+1}, and the corresponding marks \eta_k(t), each being the path translated so that the upcrossing occurs at the origin.]

Of course, any of the marks, say \{\eta_0(t)\}, contains perfect information about all the upcrossings and all other marks, so different \{\eta_k(t)\} are totally dependent.

The marks are furthermore constrained by the requirement that t_1 is the first upcrossing of u after zero, t_2 the second and so on, which suggests that the different marks \{\eta_1(t)\}, \{\eta_2(t)\}, \ldots are not identically distributed.
One would feel inclined to look for the distribution of the mark at an arbitrary upcrossing without respect to its location. Intuitively, the distribution of \eta_k(t) is the conditional distribution of the process at time t_k + t, given that there is an upcrossing of the level u at some time t_k. Two difficulties are involved here. One is, as was previously noted, that the marks are not identically distributed, which is somewhat incidental and due to the choice of time origin. The other difficulty is that the event that there is an upcrossing of u at t has probability zero if t is a fixed point. It is therefore not at all clear how one should define the conditional probabilities.

In point process theory, the abovementioned difficulties are resolved by the use of Palm distributions (or Palm measures), which formalize the notion of conditional distributions given that the point process has a point at a specified time \tau. We shall use a similar approach here to obtain a Palm distribution of the mark at an arbitrary upcrossing, and then examine this distribution in some detail.
Palm distributions of the marks at upcrossings

The point process N_u on the real line formed by the upcrossings of the level u by \{\xi(t)\} is stationary and without multiple events, i.e. the joint distribution of N_u(t + I_j), j = 1, \ldots, n, does not depend on t, and N_u(\{t\}) is either 0 or 1; (here t + I_j is the set I_j translated by an amount t, i.e. t + I_j = \{t + s; s \in I_j\}). Further, N_u is jointly stationary with \xi(t), i.e. the joint distribution of N_u(t + I_j), \xi(t + s_j), j = 1, \ldots, n, does not depend on t.
We shall now define the Palm distribution of the marks in accordance with the intuitive meaning of "the behaviour of \xi(t_k + t) regardless of the value of t_k". Let s_j, j = 1, \ldots, n, and y_j, j = 1, \ldots, n, be fixed numbers. For each upcrossing t_k it is then possible to check whether \xi(t_k + s_j) \le y_j, j = 1, \ldots, n, or not. Now, starting from the point process N_u, form a new process \tilde N_u by deleting all points t_k that do not satisfy \xi(t_k + s_j) \le y_j, j = 1, \ldots, n. The relative number of points in this new point process tells us how likely it is (for the particular sample function) that a point t_k in N_u is accompanied by a mark \eta_k(t) satisfying \eta_k(s_j) \le y_j, j = 1, \ldots, n.

DEFINITION 10.1  The finite-dimensional Palm distributions P_0^u of the mark \{\eta_0(t)\} are given by

(10.3)    P_0^u\{\eta_0(s_j) \le y_j, j = 1, \ldots, n\} = \frac{E(\tilde N_u((0,1]))}{E(N_u((0,1]))} = \frac{E(\#\{t_k \in (0,1]; \xi(t_k + s_j) \le y_j, j = 1, \ldots, n\})}{E(\#\{t_k \in (0,1]\})}.

The Palm distributions of the mark \{\eta_\nu(t)\}, for any integer \nu, are given by

    P_0^u\{\eta_\nu(s_j) \le y_j, j = 1, \ldots, n\} = \frac{E(\#\{t_k \in (0,1]; \xi(t_{k+\nu} + s_j) \le y_j, j = 1, \ldots, n\})}{E(\#\{t_k \in (0,1]\})},

and the joint Palm distribution of \{\eta_0(t)\} and \{\eta_\nu(t)\} is defined similarly.  □

Relation (10.3) defines a consistent family of finite-dimensional distributions for a certain stochastic process, which will be studied in more detail later in this chapter. But first we give some more connections between Palm distributions and the marks at upcrossings.
Palm probabilities can also be obtained as limits of ordinary conditional probabilities given a point, i.e. an upcrossing, not exactly at 0, but somewhere nearby. Let t_0 be the last u-upcrossing for \xi(t) prior to 0. Then

(10.4)    P_0^u\{\eta_0(s_j) \le y_j, j = 1, \ldots, n\} = \lim_{h \downarrow 0} P\{\xi(t_0 + s_j) \le y_j, j = 1, \ldots, n \mid -h < t_0 < 0\},

where it may be shown that the limit (10.4) exists, and equals the ratio (10.3). In fact, (10.4) can be taken as a definition of the Palm distributions, an approach which was taken by Kac and Slepian (1959), who also termed it the horizontal window conditioning of crossings, indicating that the sample path \xi(t) has to pass through a horizontal window \{(t,u); -h \le t \le 0\}. This is in contrast to vertical window conditioning, which requires that u \le \xi(0) \le u + h, \xi'(0) > 0, so that the process has to pass through a vertical window \{(0,x); u \le x \le u + h\} with positive slope.
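Horizontal window (Rice) conditioning weights sample paths by the slope at the crossing, the factor z\,p(u,z) appearing in Theorem 10.4 below. This can be observed empirically (a simulation sketch of ours): collecting the slopes \xi'(t_k) at the u-upcrossings over many paths of the process with r(t) = (\cos t + \cos 3t)/2 (so \lambda_2 = 5), their Palm distribution is Rayleigh, with mean (\pi\lambda_2/2)^{1/2}.

```python
import math, random

random.seed(5)

W, LAM2 = 3.0, 5.0   # r(t) = (cos t + cos Wt)/2, lambda_2 = (1 + W^2)/2

def slopes_at_upcrossings(u, length=40.0, dt=0.005):
    # slopes of one sample path at its u-upcrossings (grid approximation)
    a1, b1, a2, b2 = (random.gauss(0, 1) for _ in range(4))
    s = math.sqrt(2)
    out, prev, k = [], (a1 + a2) / s, 1     # prev = xi(0)
    while k * dt <= length:
        t = k * dt
        cur = (a1 * math.cos(t) + b1 * math.sin(t)
               + a2 * math.cos(W * t) + b2 * math.sin(W * t)) / s
        if prev <= u < cur:
            out.append((-a1 * math.sin(t) + b1 * math.cos(t)
                        - W * a2 * math.sin(W * t) + W * b2 * math.cos(W * t)) / s)
        prev = cur
        k += 1
    return out

slopes = []
while len(slopes) < 1000:
    slopes += slopes_at_upcrossings(1.0)
mean = sum(slopes) / len(slopes)
print(mean, math.sqrt(math.pi * LAM2 / 2))   # empirical vs Rayleigh mean
```

Averaging over many independent paths estimates the expectation ratio in (10.3) directly.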
The following important proposition relates the Palm distribution to the empirical distribution of the values of \xi(t_k + t) when t_k runs through the set of all upcrossings of the level u. It states that the Palm distributions in an ergodic process can actually be observed by considering the marks over an increasing interval.

PROPOSITION 10.2  If the process \{\xi(t)\} is ergodic, then with probability one

(10.5)    P_0^u\{\eta_\nu(s_j) \le y_j, j = 1, \ldots, n\} = \lim_{T \to \infty} \frac{\#\{t_k \in (0,T); \xi(t_{k+\nu} + s_j) \le y_j, j = 1, \ldots, n\}}{\#\{t_k \in (0,T)\}}.  □

The convergence of the empirical finite-dimensional distributions in (10.5) to the Palm distributions can be extended to empirical distributions of functionals such as the excursion time, i.e. the time from the upcrossing to the next downcrossing of the same level, or the maximum in intervals of fixed length following the upcrossing, i.e. \sup_{t \in I} \xi(t_k + t). For a proof of Proposition 10.2 as well as generalizations and examples, see Lindgren (1977).
In the introduction we suggested the interpretation that the Palm probability P_0^u\{\eta_\nu(t) \le y\} gives the distribution of \{\xi(t)\} at time t later than the \nu-th upcrossing after an arbitrary upcrossing. For this to make intuitive sense, the P_0^u-distribution of \{\eta_\nu(t)\} should not depend on \nu. The following theorem makes this precise.

THEOREM 10.3  The sequence of marks \{\eta_0(t)\}, \{\eta_1(t)\}, \ldots is stationary under the Palm distribution P_0^u, in the sense that the joint finite-dimensional P_0^u-distribution of

    \eta_{k+k_1}(s_j), \ldots, \eta_{k+k_r}(s_j),  j = 1, \ldots, n,

is independent of k. In particular, for a fixed t, all the \eta_j(t), j = 0, 1, \ldots, form a stationary real sequence, and hence have the same P_0^u-distribution, whereas they are non-identically distributed under P.
PROOF  We only show that, under P_0^u, the distribution of \eta_j(t) is the same as that of \eta_{j-1}(t). A full proof is only notationally more complicated. Put \mu = E(N_u((0,1])) = E(\#\{t_k \in (0,1]\}). Then

(10.6)    P_0^u\{\eta_j(t) \le y\} = \mu^{-1} E(\#\{t_k \in (0,1]; \xi(t_{k+j} + t) \le y\}) = (\mu T)^{-1} E(\#\{t_k \in (0,T]; \xi(t_{k+j} + t) \le y\}).

Similarly,

(10.7)    P_0^u\{\eta_{j-1}(t) \le y\} = (\mu T)^{-1} E(\#\{t_k \in (0,T]; \xi(t_{k+j-1} + t) \le y\}).

Now take a pair of adjacent points t_k and t_{k+1}. We see that t_k is counted in (10.6) if and only if t_k \in (0,T] and \xi(t_{k+j} + t) \le y, while t_{k+1} contributes to (10.7) if and only if t_{k+1} \in (0,T] and \xi(t_{k+1+j-1} + t) = \xi(t_{k+j} + t) \le y. Hence the numbers counted in (10.6) and (10.7) differ at most by 1, so that

    \left|P_0^u\{\eta_j(t) \le y\} - P_0^u\{\eta_{j-1}(t) \le y\}\right| \le \frac{1}{\mu T} \to 0  as  T \to \infty.  □
The Slepian model process

We will devote the rest of this chapter to a more detailed study of the
properties of any one of the marks under the Palm distribution, in par-
ticular as the level  u  gets high.  In view of Theorem 10.3 all
{η_k(t)}  have the same  P_0^u-distribution, and we pick  {η_0(t)}  as a
typical representative.

Our tool will be an explicit representation of the  P_0^u-distribution
of  η_0(t)  in terms of a simple process, originally introduced by D.
Slepian (1962) and therefore in this work termed a Slepian model process.
The following theorem uses the definition of Palm distributions and
forms the basis for the Slepian representation.
THEOREM 10.4  Let  μ = E(N_u(1)) = (2π)^{-1} λ₂^{1/2} e^{-u²/2}.  Then
for  t ≠ 0,

(10.8)  P_0^u{η_0(t) ≤ y} = μ^{-1} ∫_{x=-∞}^{y} { ∫_{z=0}^{∞} z p(u,z) p(x|u,z) dz } dx,

where  p(u,z)  is the joint density of  ξ(0)  and its derivative  ξ'(0),
and  p(x|u,z)  is the conditional density of  ξ(t)  given  ξ(0) = u,
ξ'(0) = z.  Thus the  P_0^u-distribution of  η_0(t)  is absolutely con-
tinuous, with density

    μ^{-1} ∫_{z=0}^{∞} z p(u,z) p(x|u,z) dz.

The n-dimensional  P_0^u-distribution of  η_0(s_1), ..., η_0(s_n)  is ob-
tained by replacing  p(x|u,z)  by  p(x_1, ..., x_n|u,z),  the conditional
density of  ξ(s_1), ..., ξ(s_n)  given  ξ(0) = u, ξ'(0) = z.

PROOF  The one-dimensional form (10.8) is a direct consequence of Lemmas
6.9(iii) and 6.10, since we have assumed that  ξ(0)  and  ξ(t)  have a
non-singular distribution.  We can take  ζ(s) = ξ(s),  ζ'(s) = ξ'(s),
and  η(s) = ξ(s + t),  and, in the same way as in the proof of Lemma
6.11, check that

    P{ζ(t) = u, η(t) = v  for some  t ∈ (0,1)} = 0,

so that

    P_0^u{η_0(t) ≤ y} = μ^{-1} ∫_{z=0}^{∞} z p(u,z) P{ξ(t) ≤ y | ξ(0) = u, ξ'(0) = z} dz
                      = μ^{-1} ∫_{x=-∞}^{y} ∫_{z=0}^{∞} z p(u,z) p(x|u,z) dz dx.

The multivariate version is proved in an analogous way.  □
Theorem 10.4 states that the joint density of  η_0(s_1), ..., η_0(s_n)
under  P_0^u  is given by

(10.9)  μ^{-1} ∫_{z=0}^{∞} z p(u,z) p(x_1, ..., x_n | u,z) dz,

where  p(x_1, ..., x_n | u,z)  is the conditional density of
ξ(s_1), ..., ξ(s_n)  given  ξ(0) = u, ξ'(0) = z.  We shall now evaluate
(10.9) in order to obtain the Slepian model process.

With  μ = (2π)^{-1} λ₂^{1/2} e^{-u²/2},  and using the fact that  ξ(0)
and  ξ'(0)  are independent and normal with  E(ξ'(0)) = 0,
E(ξ'(0)²) = λ₂,  we have

    μ^{-1} z p(u,z) = (z/λ₂) e^{-z²/(2λ₂)},

and we can write (10.9) in the form

(10.10)  ∫_{z=0}^{∞} (z/λ₂) e^{-z²/(2λ₂)} p(x_1, ..., x_n | u,z) dz.
The covariance matrix of  ξ(0), ξ'(0), ξ(s_1), ..., ξ(s_n)  is

    (  1          0           r(s_1)       ...   r(s_n)        )
    (  0          λ₂         -r'(s_1)      ...  -r'(s_n)       )
    (  r(s_1)    -r'(s_1)     1            ...   r(s_1 - s_n)  )
    (  ...                                                     )
    (  r(s_n)    -r'(s_n)     r(s_n - s_1) ...   1             )

From standard properties of conditional normal densities - see Rao
(1973), p. 522 - it follows that  p(x_1, ..., x_n | u,z)  is an n-variate
normal density, and that

(10.11)  E(ξ(s_i) | ξ(0) = u, ξ'(0) = z) = u r(s_i) - z r'(s_i)/λ₂,

and

(10.12)  Cov(ξ(s_i), ξ(s_j) | ξ(0) = u, ξ'(0) = z)
         = r(s_i - s_j) - r(s_i) r(s_j) - r'(s_i) r'(s_j)/λ₂.

The density (10.10) is therefore a mixture of n-variate normal densities,
all with the same covariances (10.12), but with different means (10.11),
and mixed in proportion to the Rayleigh density

(10.13)  (z/λ₂) e^{-z²/(2λ₂)}   (z > 0).
Now we are ready to introduce the Slepian model process.  Let  ζ  be a
Rayleigh distributed random variable, with density (10.13), and let
{κ(t), t ∈ R}  be a non-stationary normal process, independent of  ζ,
with zero mean, and with the covariance function

    r_κ(s,t) = Cov(κ(s), κ(t)) = r(s-t) - r(s) r(t) - r'(s) r'(t)/λ₂.

That this actually is a covariance function follows from (10.12).

DEFINITION 10.5  The process

(10.14)  ξ_u(t) = u r(t) - ζ r'(t)/λ₂ + κ(t)

is called a Slepian model process.

Obviously, conditional on  ζ = z,  the process (10.14) is normal with
mean and covariances given by the right hand sides of (10.11) and (10.12)
respectively, and so its finite-dimensional distributions are given by
the densities (10.10).
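Definition 10.5 is easy to simulate. The sketch below (an illustration under our own assumptions, not part of the text) takes r(t) = exp(-t²/2), so that r'(t) = -t r(t) and λ₂ = 1, builds the covariance r_κ of (10.12) on a coarse grid, draws one sample of κ through a jitter-stabilized Cholesky factor, draws ζ from the Rayleigh density (10.13), and assembles one path of ξ_u(t) = u r(t) - ζ r'(t)/λ₂ + κ(t).

```python
import math
import random

random.seed(0)

# Illustrative covariance (our choice): r(t) = exp(-t^2/2), so
# r'(t) = -t r(t) and lambda_2 = -r''(0) = 1.
r    = lambda t: math.exp(-t * t / 2)
rp   = lambda t: -t * math.exp(-t * t / 2)
lam2 = 1.0

def r_kappa(s, t):
    # Covariance (10.12) of the residual process kappa:
    return r(s - t) - r(s) * r(t) - rp(s) * rp(t) / lam2

grid = [0.5 * i for i in range(9)]            # t in [0, 4]
n = len(grid)
C = [[r_kappa(grid[i], grid[j]) + (1e-8 if i == j else 0.0)
      for j in range(n)] for i in range(n)]

# Jitter-stabilized Cholesky factor, used to draw one sample of kappa:
L = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1):
        v = C[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = (math.sqrt(max(v, 0.0)) if i == j
                   else (v / L[j][j] if L[j][j] > 0 else 0.0))

# One path of the Slepian model (10.14), zeta Rayleigh per (10.13):
u = 3.0
zeta = math.sqrt(-2 * lam2 * math.log(1 - random.random()))
z = [random.gauss(0, 1) for _ in range(n)]
kappa = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
xi_u = [u * r(t) - zeta * rp(t) / lam2 + kappa[i] for i, t in enumerate(grid)]
print(xi_u[0])
```

Note that r_κ(0,0) = 0 and r'(0) = 0, so every sampled path starts at ξ_u(0) = u, as it should at an upcrossing of the level u.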
THEOREM 10.6  The finite-dimensional Palm distributions of the mark
{η_0(t)},  and thus, by Theorem 10.3, of all marks  {η_k(t)},  are equal
to the finite-dimensional distributions of the Slepian model process,
i.e. of

    ξ_u(t) = u r(t) - ζ r'(t)/λ₂ + κ(t).

One should note that the height of the level  u  enters in  ξ_u(t)
only via the function  u r(t),  while the distributions of  ζ  and  κ(t)
are the same for all  u.  This makes it possible to obtain the Palm dis-
tributions for the marks at crossings of any level  u  by introducing
just one random variable  ζ  and one stochastic process  {κ(t)}.  In the
sequel we will use the fact that  u  enters only through the term  u r(t)
to derive convergence theorems for  ξ_u(t)  as  u → ∞.  These are then
translated into distributional convergence under the Palm distribution
P_0^u,  by Theorem 10.6, and thus of the limiting empirical distribu-
tions by Proposition 10.2.
As noted after Proposition 10.2, the same reasoning applies to the
limiting empirical distributions of certain other functionals.  In par-
ticular, this applies to maxima, and therefore it is of interest to
examine the asymptotic properties of maxima in the Slepian model process.

Some simple facts about the model process  {ξ_u(t)}  should be men-
tioned here.  From the form of the covariance function  r_κ  one may
show that  {κ(t)}  is continuously differentiable with

    E(κ(t)) = E(κ'(t)) = 0,   E(κ(0)²) = E(κ'(0)²) = 0,

so that

    P{κ(0) = κ'(0) = 0} = 1.

Since  λ₂ = -r''(0),  one has

    u r'(0) - ζ r''(0)/λ₂ + κ'(0) = ζ,

which is simply the derivative of  ξ_u(t)  at zero.  From Theorem 10.6
this immediately translates into a distributional result for the deri-
vative at upcrossings.
COROLLARY 10.7  The Palm distribution of the mean square derivative
η_k'(0)  of a mark at a u-upcrossing does not depend on  u,  and it has
the Rayleigh density (10.13).

PROOF  Equality of finite dimensional distributions of course implies
equality of the distributions of the difference ratios

    (ξ_u(h) - ξ_u(0))/h   and   (η_k(h) - η_k(0))/h,

and hence of their mean square limits.  □
determines the slope of
values the dominant term in
+
(nk (h) - n
tu(t)
will be
t
u
(t)
at
K(t), if
o.
For large
ret)
and
r' (t)
(A sufficient condition for this is that the process
has a spectral density, in which case it is also ergodic.) Then
r(T+s, T+t)
+
r(s-t)
as
T
+
CD
so that. tu(t)
for large
t
hets
asymptotically the same covariance structure as the unconditioned process
1t-
t(t), simply reflecting the fact that the influence of the up-
crossing vanishes.
Excursions above a high level

We now turn to the asymptotic form of the marks at high level crossings.
The length and height of an excursion over the high level  u  will turn
out to be of the order  u^{-1},  so we normalize the model process
ξ_u(t)  by expanding the time scale by a factor  u.  Before proceeding
to details we give a heuristic argument motivating the precise results
to be obtained, by introducing the expansion

(10.15)  r(t/u) = 1 - (λ₂ t²/2u²)(1 + o(1)),
         r'(t/u) = -(λ₂ t/u)(1 + o(1)),

as  t/u → 0,  which follows from (10.1), and by noting that
κ(t/u) = o(t/u)  as  t/u → 0.  Inserting this into  ξ_u(t/u)  and omit-
ting all o-terms we obtain

(10.16)  u(ξ_u(t/u) - u) ≈ ζt - λ₂ t²/2

as  u → ∞  and  t  is fixed.  The polynomial  ζt - λ₂ t²/2  in (10.16)
has its maximum for  t = ζ/λ₂,  with a maximum value of  ζ²/2λ₂,  and
therefore we might expect that  ξ_u(t)  has a maximum of the order
u + ζ²/(2λ₂ u).  Hence the probability that the maximum exceeds
u + v/u  should be approximately

    P{ζ²/2λ₂ > v} = e^{-v}.
The following theorem justifies the approximations made above.
THEOREM 10.8  Suppose  r  satisfies (10.2) and  r(t) → 0  as  t → ∞.
Then for each  L > 0  and  v > 0,

    P{ sup_{0≤t≤L} ξ_u(t) > u + v/u } → e^{-v}   as   u → ∞,

i.e. the normalized height of the excursion of  ξ_u(t)  over  u  is
asymptotically exponential.
PROOF  We first prove that the maximum of  ξ_u(t)  occurs near zero.
Choose a function  δ(u)  such that  δ(u)/u → 0  and  δ(u)²/u → ∞  as
u → ∞.  Then

(10.17)  P{ sup_{δ(u)/u ≤ t ≤ L} ξ_u(t) > u } → 0,

since the probability is at most

    P{ sup_{δ(u)/u ≤ t ≤ L} (ξ_u(t) - u r(t)) + sup_{δ(u)/u ≤ t ≤ L} u r(t) > u }
    ≤ P{ sup_{0≤t≤L} (-ζ r'(t)/λ₂ + κ(t)) > u(1 - sup_{δ(u)/u ≤ t ≤ L} r(t)) }.

Here  sup_{0≤t≤L} (-ζ r'(t)/λ₂ + κ(t))  is a proper (i.e. finite valued)
random variable, and (since  1 - r(t) = λ₂ t²/2 + o(t²)  as  t → 0,  and
the joint distribution of  ξ(0)  and  ξ(t)  is nonsingular for all
0 < t ≤ L,  so that  r(t) < 1  for  0 < t ≤ L)

    1 - r(t) ≥ K t²   for   0 ≤ t ≤ L,

some  K > 0  depending on  L,  so that

    u(1 - sup_{δ(u)/u ≤ t ≤ L} r(t)) ≥ K δ(u)²/u → ∞,

which implies (10.17).

In view of (10.17) we now need only show that

(10.18)  P{ sup_{0≤t≤δ(u)/u} u(ξ_u(t) - u) > v }
         = P{ sup_{0≤t≤δ(u)} u(ξ_u(t/u) - u) > v } → e^{-v}

for  v > 0.  By (10.15),

    u(ξ_u(t/u) - u) = -u²(1 - r(t/u)) - ζ u r'(t/u)/λ₂ + u κ(t/u)
                    = -λ₂ t²/2 · (1 + o(1)) + ζt(1 + o(1)) + u κ(t/u),

uniformly for  0 ≤ t ≤ δ(u)  as  u → ∞.  Since  {κ(t)}  has a.s. con-
tinuously differentiable sample paths with  κ(0) = κ'(0) = 0,  and
δ(u)/u → 0,

    sup_{0≤t≤δ(u)} u κ(t/u) → 0   (a.s.)   as   u → ∞.

This implies that the maximum of  u(ξ_u(t/u) - u)  is asymptotically
determined by the maximum of  -λ₂ t²/2 + ζt,  and that

    lim_{u→∞} P{ sup_{0≤t≤L} ξ_u(t) > u + v/u }
    = P{ sup_t (-λ₂ t²/2 + ζt) > v } = P{ζ²/2λ₂ > v} = e^{-v},

as was to be shown.  □
As mentioned above, distributional results and limits for the model
process  {ξ_u(t)}  carry over to similar results and limits for the
marks  η_k(t) = ξ(t_k + t),  i.e. for the ergodic behaviour of the ori-
ginal process  {ξ(t)}  after  t_k.  In particular, Theorem 10.8 has the
corollary that the limiting empirical distribution of the normalized
maxima after upcrossings of a level  u  is approximately exponential for
large values of  u,  i.e.

        #{t_k ∈ (0,T); sup_{0≤t≤L} ξ(t_k + t) > u + v/u}
    lim ------------------------------------------------  →  e^{-v}
    T→∞                #{t_k ∈ (0,T)}

as  u → ∞,  a.s.  This clarifies the observation in the beginning of
this section, that an excursion over the high level  u  also exceeds
the level  u - (log p)/u  with probability approximately  e^{log p} = p.

It should be noted here, even if not formally proved, that the excur-
sions emerging from different upcrossings are asymptotically indepen-
dent.  This explains the asymptotic independence of crossings of in-
creasingly high levels.
The following theorem follows from (10.18).

THEOREM 10.9  Suppose  r  satisfies (10.2) and  r(t) → 0  as  t → ∞.
Then the normalized model process

    u(ξ_u(t/u) - u)

tends uniformly for  |t| ≤ T  to a parabola, in the sense that, with
probability one,

    sup_{|t|≤T} |u(ξ_u(t/u) - u) - (ζt - λ₂ t²/2)| → 0   as   u → ∞.  □

This theorem throws some light upon the discrete approximation used
in the proof of the maximum and Poisson theorems in previous chapters.
The choice of spacing in the discrete grid,  q,  appeared there to be
chosen for purely technical reasons.  Theorem 10.9 shows why it works.
By the theorem the natural time scale for excursions over the high
level  u  is  u^{-1},  so the spacing  q = o(u^{-1})  of the q-grid
catches high maxima with an increasing number of grid points.
CHAPTER 11

MAXIMA AND MINIMA AND EXTREMAL THEORY FOR DEPENDENT PROCESSES

The main thread in the previous chapters has been the extension of ex-
tremal results for sequences of independent random variables to depen-
dent variables and to continuous time processes.  Trivially, extremes of
mutually independent processes are also independent, and we shall see
that this holds asymptotically for normal processes even when they are
highly correlated.

However, we shall first consider the joint distribution of maxima and
minima in one process.  (Since minima for  ξ(t)  are maxima for  -ξ(t),
this can in fact be regarded as a special case of dependence, namely
between  {ξ(t)}  and  {-ξ(t)}.)
Maximum and minimum

Suppose  ξ₁, ξ₂, ...  is a stationary sequence of independent random
variables with

    P{ max_{1≤i≤n} ξ_i ≤ u_n } → e^{-τ},   P{ min_{1≤i≤n} ξ_i > -v_n } → e^{-σ},

as  n → ∞.  Then clearly

    P{ -v_n < min_{1≤i≤n} ξ_i ≤ max_{1≤i≤n} ξ_i ≤ u_n }
    = P{ -v_n < ξ₁ ≤ u_n }ⁿ → e^{-τ} e^{-σ},

i.e. maximum and minimum are asymptotically independent.
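The product step can be seen numerically: with P{ξ₁ > u_n} ≈ τ/n and P{ξ₁ ≤ -v_n} ≈ σ/n, the joint probability is (1 - τ/n - σ/n)ⁿ, which factorizes in the limit. The sketch below (illustrative values τ, σ of our choosing) shows the convergence to e^{-τ} e^{-σ}.

```python
import math

# P{-v_n < min <= max <= u_n} = (1 - tau/n - sigma/n)^n for an i.i.d.
# sequence with P{xi_1 > u_n} ~ tau/n and P{xi_1 <= -v_n} ~ sigma/n:
tau, sigma = 1.3, 0.7
approx = [(1 - tau / n - sigma / n) ** n for n in (10, 100, 10000, 1000000)]
limit = math.exp(-tau) * math.exp(-sigma)
print(approx[-1], limit)
```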
For a standardized stationary normal process,  {ξ(t)}  and  {-ξ(t)}
have the same distribution.  Writing

    m(T) = inf{ξ(s); 0 ≤ s ≤ T},

clearly  m(T) = -sup{-ξ(s); 0 ≤ s ≤ T},  and hence  m(T)  has the same
asymptotic behaviour as  -M(T).  If  {ξ(t)}  satisfies the hypotheses of
Theorem 7.6,

    P{m(T) > -v} → e^{-σ}   as   T, v → ∞   with
    Tν = (T/2π) λ₂^{1/2} e^{-v²/2} → σ > 0.

It follows, under the hypotheses of Theorem 7.6, that  -m(T)  has a
double exponential limit with the same normalizations as for maxima,
i.e.  a_T = (2 log T)^{1/2}.

It was shown by Berman (1971a) that, under no further assumptions,
m(T)  and  M(T)  are asymptotically independent, as in the case of in-
dependent sequences studied above.  We shall now prove this result.
With the same notation and technique as in Chapter 7, we let  N_u^{(q)}
be the number of u-upcrossings by the sequence  {ξ(jq); 0 ≤ jq ≤ h}
and  N_u  the number by the process  {ξ(t); 0 ≤ t ≤ h},  and define
similarly  D_{-v}^{(q)}  and  D_{-v}  to be the numbers of downcrossings
of the level  -v.  We first observe that if, for  u, v → ∞,

(11.1)  Tμ → τ > 0   and   Tν → σ > 0,

then  u/v → 1  and  u(u - v) → log(σ/τ),  cf. (5.7).  In particular,
this implies that if  uq → 0  then also  vq → 0.
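The relations following (11.1) can be checked by solving Tμ = τ and Tν = σ explicitly for the levels. The sketch below (our own numerical illustration, with arbitrary λ₂, τ, σ) computes u and v for increasing T and displays u/v and u(u - v) against log(σ/τ).

```python
import math

lam2, tau, sigma = 1.0, 2.0, 0.5
c = math.sqrt(lam2) / (2 * math.pi)

def levels(T):
    # Solve T*mu = tau and T*nu = sigma for the levels u and v,
    # where mu = c * exp(-u^2/2) and nu = c * exp(-v^2/2):
    u = math.sqrt(2 * math.log(T * c / tau))
    v = math.sqrt(2 * math.log(T * c / sigma))
    return u, v

for T in (1e4, 1e8, 1e16):
    u, v = levels(T)
    print(T, u / v, u * (u - v), math.log(sigma / tau))
```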
The following lemma contains the discrete approximation and separation
of maxima.  As in Chapter 7, split the increasing interval  [0,T]  into
n = [T/h]  pieces, each divided into two,  I_k*  and  I_k,  of lengths
ε  and  h - ε,  respectively.  Write  m(I)  for the minimum of  ξ(t)
over the interval  I.
LEMMA 11.1  (i)  If (7.1) holds then, as  u, v → ∞  and  uq, vq → 0,

    P{m(I) < -v, M(I) > u}
    = P{ min_{jq∈I} ξ(jq) < -v, max_{jq∈I} ξ(jq) > u } + o(ν + μ),

where  o(ν + μ)  is uniform in all intervals  I  of fixed length;

(ii)  lim sup_{T→∞} |P{-v ≤ m(∪_{k=1}^n I_k) ≤ M(∪_{k=1}^n I_k) ≤ u}
      - P{-v ≤ m(nh) ≤ M(nh) ≤ u}| ≤ (σ + τ) ε/h;

(iii)  P{-v ≤ ξ(jq) ≤ u, jq ∈ ∪_{k=1}^n I_k}
       - P{-v ≤ m(∪_{k=1}^n I_k) ≤ M(∪_{k=1}^n I_k) ≤ u} → 0.

PROOF  (i)  By Lemma 7.3 (i) applied to  {ξ(t)}  and  {-ξ(t)},

    0 ≤ P{m(I) < -v, M(I) > u}
        - P{ min_{jq∈I} ξ(jq) < -v, max_{jq∈I} ξ(jq) > u }
      ≤ E(D_{-v} - D_{-v}^{(q)}) + E(N_u - N_u^{(q)})
        + P{ξ(0) < -v} + P{ξ(0) > u}
      = o(ν + μ).

Parts (ii) and (iii) follow as in Lemma 7.4, and the details are not
repeated here.  □
The following two lemmas give the asymptotic independence of both maxima
and minima over the separate  I_k-intervals.

LEMMA 11.2  Suppose  ξ₁, ..., ξ_n  are standard normal variables with
covariance matrix  Λ¹ = (Λ¹_ij),  and  η₁, ..., η_n  similarly with co-
variance matrix  Λ⁰ = (Λ⁰_ij),  and let  ρ_ij = max(|Λ¹_ij|, |Λ⁰_ij|).
Further, let  u = (u₁, ..., u_n)  and  v = (v₁, ..., v_n)  be vectors of
real numbers, and write  w = min(|u₁|, ..., |u_n|, |v₁|, ..., |v_n|).
Then

(11.2)  |P{-v_j < ξ_j ≤ u_j  for  j = 1, ..., n}
         - P{-v_j < η_j ≤ u_j  for  j = 1, ..., n}|
        ≤ (2/π) Σ_{i<j} |Λ¹_ij - Λ⁰_ij| (1 - ρ_ij²)^{-1/2} e^{-w²/(1+ρ_ij)}.

PROOF  This requires only a minor variation of the proof of Lemma 3.2.
With the notation of that proof, it follows as on p. 46-47 that the
left hand side of (11.2) equals

    | ∫₀¹ Σ_{i<j} (Λ¹_ij - Λ⁰_ij) ∫_{-v}^{u} (∂²f_h/∂y_i ∂y_j) dy dh |.

By performing the integrations on  y_i  and  y_j  we obtain four terms,
containing the integrands  f_h(y_i = u_i, y_j = u_j),
-f_h(y_i = u_i, y_j = -v_j),  -f_h(y_i = -v_i, y_j = u_j),  and
f_h(y_i = -v_i, y_j = -v_j),  respectively.  Each of these terms can be
estimated as on p. 47, and the lemma follows.  □
LEMMA 11.3  Suppose  r(t) → 0  as  t → ∞,  and that, with
w = min(u, v) > 0,

(11.3)  (T/q) Σ_{ε≤kq≤T} |r(kq)| e^{-w²/(1+|r(kq)|)} → 0

as  T → ∞  for every  ε > 0  (which in particular holds if
r(t) log t → 0  as  t → ∞).  Suppose further that  Tμ → τ,  Tν → σ  as
T → ∞,  and that  qu, qv → 0  sufficiently slowly.  Then

(i)  P{-v ≤ ξ(kq) ≤ u, kq ∈ ∪_{j=1}^n I_j}
     - Π_{j=1}^n P{-v ≤ ξ(kq) ≤ u, kq ∈ I_j} → 0,

(ii)  lim sup_{T→∞} |Π_{j=1}^n P{-v ≤ ξ(kq) ≤ u, kq ∈ I_j}
      - Pⁿ{-v ≤ m(h) ≤ M(h) ≤ u}| ≤ 2(σ + τ) ε/h.
PROOF  Part (i) follows from Lemma 11.2 in the same way as Lemma 7.5 (i)
follows from Lemma 3.2.  As for part (ii), note that

    0 ≤ P{-v ≤ m(I_j) ≤ M(I_j) ≤ u} - P{-v ≤ m(h) ≤ M(h) ≤ u}
      ≤ P{m(I_j*) < -v} + P{M(I_j*) > u} = o(ν + μ) ε

by Lemma 7.2 (i) and Lemma 7.2 (i) applied to  {-ξ(t)}.  It now follows
from Lemma 11.1 (i) that, for  T  sufficiently large,

    0 ≤ P_j - P ≤ 2(ν + μ)ε,

where  P_j = P{-v ≤ ξ(kq) ≤ u, kq ∈ I_j},
P = P{-v ≤ m(h) ≤ M(h) ≤ u}.  Hence

    0 ≤ Π_{j=1}^n P_j - Pⁿ ≤ 2n(ν + μ)ε → 2(σ + τ) ε/h

as in Lemma 7.5 (ii).  □
The final essential step in the proof of asymptotic independence of
m(T)  and  M(T)  is to show that an interval of fixed length  h  does
not contain both large positive and large negative values of  ξ(t).

LEMMA 11.4  If (7.1) holds, there is an  h₀ > 0  such that, as
u, v → ∞,

    P{M(h) > u, m(h) < -v} = o(ν + μ)

for all  h < h₀.

PROOF
By Lemma 11.1 (i) and stationarity it is enough to prove that

    P{ max_{0≤jq≤h} ξ(jq) > u, min_{0≤jq≤h} ξ(jq) < -v } = o(μ + ν)

for some  q  satisfying  uq, vq → 0.  By stationarity, this probability
is bounded by

(11.4)  (h/q) P{ξ(0) > u, min_{-h≤jq≤h} ξ(jq) < -v}
        ≤ (h/q) Σ_{-h≤jq≤h} P{ξ(0) > u, ξ(jq) < -v}.

Here, for  jq ≠ 0,

(11.5)  P{ξ(0) > u, ξ(jq) < -v} = ∫_u^∞ φ(x) P{ξ(jq) < -v | ξ(0) = x} dx
        = ∫_u^∞ φ(x) Φ((-v - x r(jq)) / (1 - r²(jq))^{1/2}) dx,

since, conditional on  ξ(0) = x,  ξ(jq)  is normal with mean  x r(jq)
and variance  1 - r²(jq).  Now choose  h₀ > 0  such that  0 < r(t) < 1
for  0 < |t| ≤ h₀,  which is possible by (7.1).  If  0 < |jq| ≤ h < h₀,
then

    (-v - x r(jq)) / (1 - r²(jq))^{1/2} < -v

for  u, v > 0  and  x > u,  so that (11.5) is bounded by

    ∫_u^∞ φ(x) Φ(-v) dx = (1 - Φ(u))(1 - Φ(v)) ≤ φ(u) φ(v) / (uv).

Together with (11.4) this shows that

    P{ max_{0≤jq≤h} ξ(jq) > u, min_{0≤jq≤h} ξ(jq) < -v }
    ≤ h² φ(u) φ(v) / (q² uv) = (2π h²/λ₂) μν / ((qu)(qv)) = o(μ + ν),

since  μν/((μ + ν)(qu)(qv)) ≤ min(μ,ν)/((qu)(qv)) → 0  if  qu, qv → 0
sufficiently slowly, and  q  can be chosen to make  quqv → 0  arbi-
trarily slowly.  □
THEOREM 11.5  Let  u = u_T → ∞  and  v = v_T → ∞  as  T → ∞,  in
such a way that

    Tμ = (T/2π) λ₂^{1/2} e^{-u²/2} → τ > 0,
    Tν = (T/2π) λ₂^{1/2} e^{-v²/2} → σ > 0.

Suppose  r(t)  satisfies (7.1) and either (7.2) or the weaker (11.3).
Then

    P{-v < m(T) ≤ M(T) ≤ u} → e^{-σ-τ}   as   T → ∞,

and hence

    P{a_T(m(T) + b_T) ≤ x, a_T(M(T) - b_T) ≤ y} → (1 - exp(-eˣ)) exp(-e^{-y}),

with

    a_T = (2 log T)^{1/2},
    b_T = (2 log T)^{1/2} + log(λ₂^{1/2}/2π) / (2 log T)^{1/2}.
PROOF  By Lemma 11.1 (ii) and (iii), and Lemma 11.3 (i) and (ii), we
have

    lim sup_{T→∞} |P{-v < m(nh) ≤ M(nh) ≤ u} - Pⁿ{-v < m(h) ≤ M(h) ≤ u}|
    ≤ 3(σ + τ) ε/h

for arbitrary  ε > 0,  and hence

(11.6)  P{-v < m(nh) ≤ M(nh) ≤ u} - Pⁿ{-v < m(h) ≤ M(h) ≤ u} → 0

as  n → ∞.  Furthermore, by Lemma 11.4,

    P{M(h) > u, m(h) < -v} = o(ν + μ).

Arguing as in the proof of Theorem 7.6, the result now follows from
Lemma 7.3, using

    n(ν + μ) = (T/h)(ν + μ)(1 + o(1)) → (σ + τ)/h.  □
This theorem of course has similar ramifications as the maximum theo-
rem, Theorem 7.6, but before stating them we give a simple corollary
about the absolute maximum of  ξ(t).

COROLLARY 11.6  If  u → ∞  and  Tμ → τ,  then

    P{ sup_{0≤t≤T} |ξ(t)| ≤ u } → e^{-2τ},

and furthermore

    P{ a_T( sup_{0≤t≤T} |ξ(t)| - b_T ) ≤ x + log 2 } → exp(-e^{-x}).  □
As for the maximum alone, it is now easy to prove asymptotic indepen-
dence of maxima and minima over several disjoint intervals with lengths
proportional to  T.  As a consequence one has a Poisson convergence
theorem for the two point processes of upcrossings of  u  and down-
crossings of  -v,  the limiting Poisson processes being independent.
Furthermore, the point process of downcrossings of several low levels
converges to a point process with Poisson components obtained by suc-
cessive binomial thinning, as in Theorem 8.6, and these downcrossings
processes are asymptotically independent of the upcrossings processes.
Of course, the entire point process of local maxima, considered in
Chapter 9, is also asymptotically independent of the point process of
normalized local minima.
Extreme values and crossings for dependent processes

One remarkable feature of dependent normal processes is that, regardless
of how high the correlation - short of perfect correlation - the numbers
of high level crossings in the different processes are asymptotically
independent, as shown in Lindgren (1974).  This shall now be proved,
again by means of the important Lemma 3.2.

Let  {ξ₁(t)}, ..., {ξ_p(t)}  be jointly normal processes with zero
means, variances one, and covariance functions

    r_k(τ) = Cov(ξ_k(t), ξ_k(t + τ)).
We shall assume that they are jointly stationary, i.e.

    Cov(ξ_k(t), ξ_l(t + τ))

does not depend on  t,  and we write  r_{kl}(τ)  for the cross-covari-
ance function.  Suppose further that each  r_k  satisfies (7.1), pos-
sibly with different  λ₂'s,  i.e.

(11.7)  r_k(t) = 1 - λ_{2k} t²/2 + o(t²),   t → 0,

and that

(11.8)  r_k(t) log t → 0,   r_{kl}(t) log t → 0,   as   t → ∞,

for  1 ≤ k, l ≤ p.  To exclude the possibility that
ξ_k(t) ≡ ±ξ_l(t + t₀)  for some  k ≠ l  and some choice of  t₀,  we
assume that

(11.9)  max_{k≠l} sup_t |r_{kl}(t)| < 1.

However, we note here that if  inf_t r_{kl}(t) = -1  for some  k ≠ l,
there is a  t₀  such that  r_{kl}(t₀) = -1,  which means that
ξ_k(t) ≡ -ξ_l(t + t₀).  A maximum in  ξ_l(t)  is therefore a minimum in
ξ_k(t),  and, as was shown in the first section of this chapter, maxima
and minima are asymptotically independent.  In fact, with some increase
in the complexity of proof, condition (11.9) can be relaxed to

    max_{k≠l} sup_t r_{kl}(t) < 1.
Define  μ_k = (2π)^{-1} λ_{2k}^{1/2} e^{-u_k²/2},  and let  u_k = u_k(T)
be levels such that

    Tμ_k = (T/2π) λ_{2k}^{1/2} e^{-u_k²/2} → τ_k > 0

as  T → ∞,  and write  μ = min{μ₁, ..., μ_p}.  To prove asymptotic in-
dependence of the maxima  M_k(T)  we approximate by the maxima over
separated intervals  I_j,  j = 1, ..., n,  with  n = [T/h]  for  h
fixed, and then replace the continuous maxima by the maxima of the
sampled processes to obtain asymptotic independence of maxima over
different intervals.  We will only briefly point out the changes which
have to be made in previous arguments.  The main new argument to be used
here concerns the maxima of  ξ_k(t),  k = 1, ..., p,  over one fixed
interval,  I,  say.

We first state the asymptotic independence of maxima over disjoint
intervals.
LEMMA 11.7  If  r_k  and  r_{kl}  satisfy (11.7)-(11.9) for
1 ≤ k, l ≤ p,  and if  Tμ_k → τ_k > 0,  then for  h > 0  and
n = [T/h],

    P{M_k(nh) ≤ u_k, k = 1, ..., p} - Pⁿ{M_k(h) ≤ u_k, k = 1, ..., p} → 0.
PROOF  This corresponds to (11.6) in the proof of Theorem 11.5, and is
proved by similar means.  It is only the relation

(11.10)  P{ξ_k(jq) ≤ u_k, jq ∈ ∪_{r=1}^n I_r, k = 1, ..., p}
         - Π_{r=1}^n P{ξ_k(jq) ≤ u_k, jq ∈ I_r, k = 1, ..., p} → 0,

corresponding to Lemma 11.3 (i), that has to be given a different proof.

Identifying  ξ₁, ..., ξ_n  in Lemma 3.2 with  ξ₁(jq), ..., ξ_p(jq),
jq ∈ ∪_{r=1}^n I_r,  and  η₁, ..., η_n  analogously, but with variables
from different  I_r-intervals independent, (3.6) gives, since
sup|r_{kl}(t)| < 1,

(11.11)  |P{ξ_k(jq) ≤ u_k, jq ∈ ∪_{r=1}^n I_r, k = 1, ..., p}
          - Π_{r=1}^n P{ξ_k(jq) ≤ u_k, jq ∈ I_r, k = 1, ..., p}|

         ≤ K Σ_{k=1}^p Σ*_{i<j} |r_k((i-j)q)| e^{-u²/(1+|r_k((i-j)q)|)}

         + K Σ_{1≤k<l≤p} Σ*_{i,j} |r_{kl}((i-j)q)| e^{-u²/(1+|r_{kl}((i-j)q)|)},

where  u = min(u₁, ..., u_p)  and  Σ*  as before indicates that the sum
is taken over  i, j  such that  iq  and  jq  belong to different
I_r-intervals.  Since both  r_k(t) log t → 0  and  r_{kl}(t) log t → 0,
and  sup|r_{kl}(t)| < 1,  we can, as in Lemma 7.1 (ii), conclude that
both sums in (11.11) tend to zero.  □
LEMMA 11.8  If  r_k  and  r_{kl}  satisfy (11.7) and (11.9) for
1 ≤ k, l ≤ p,  then

(i)  P{M_k(h) > u_k, M_l(h) > u_l} = o(μ_k + μ_l)   for   k ≠ l,

and

(ii)  P{M_k(h) ≤ u_k, k = 1, ..., p} - Π_{k=1}^p P{M_k(h) ≤ u_k}
      = o(μ₁ + ... + μ_p).

PROOF  (i)  As in the proof of Lemma 11.4 it is enough to prove that,
if  u_k q → 0  and  u_l q → 0  sufficiently slowly,

(11.12)  P{ max_{0≤jq≤h} ξ_k(jq) > u_k, max_{0≤jq≤h} ξ_l(jq) > u_l }
         - P{ max_{0≤jq≤h} ξ_k(jq) > u_k } P{ max_{0≤jq≤h} ξ_l(jq) > u_l }
         = o(μ_k + μ_l),

since, for  r = k, l,

    P{ max_{0≤jq≤h} ξ_r(jq) > u_r } = O(μ_r).
To estimate the difference we again use Lemma 3.2, with  Λ¹  defined by
r_k,  r_l,  and  r_{kl},  and  Λ⁰  obtained by taking  r_{kl}  identi-
cally zero.  Elementary calculations show that the difference in (11.12)
equals

    P{ max_{0≤jq≤h} ξ_k(jq) ≤ u_k, max_{0≤jq≤h} ξ_l(jq) ≤ u_l }
    - P{ max_{0≤jq≤h} ξ_k(jq) ≤ u_k } P{ max_{0≤jq≤h} ξ_l(jq) ≤ u_l },

which by Lemma 3.2 is bounded in modulus by

(11.13)  K Σ_{0≤iq,jq≤h} |r_{kl}((i-j)q)| (1 - r²_{kl}((i-j)q))^{-1/2}
           e^{-u²/(1+|r_{kl}((i-j)q)|)},

with  u = min(u_k, u_l).  Now, by (11.9),  sup|r_{kl}(t)| ≤ 1 - δ  for
some  δ > 0,  and using this, we can bound (11.13) by

    K h² q^{-2} exp(-u²/(2 - δ))
    = K h² (uq)^{-2} u² exp(-u²δ/(2(2 - δ))) √(2π) φ(u)
    = o(μ_k + μ_l)

if  uq → 0  sufficiently slowly, since  φ(u)/(μ_k + μ_l)  is bounded.

(ii)  This follows immediately from part (i) and the inequality

    P(∪_{k=1}^p {M_k(h) > u_k}) ≤ Σ_{k=1}^p P{M_k(h) > u_k}.  □

Reasoning as in the proof of Theorem 7.6, and using Lemma 11.7 and
Lemma 11.8 (ii), we get the following result.
THEOREM 11.9  Let  u_k = u_k(T) → ∞  as  T → ∞,  so that

    Tμ_k = (T/2π) λ_{2k}^{1/2} exp(-u_k²/2) → τ_k > 0,   1 ≤ k ≤ p,

and suppose that  r_k(t)  and  r_{kl}(t)  satisfy (11.7)-(11.9).  Then

    P{M_k(T) ≤ u_k, k = 1, ..., p} → exp(-Σ_{k=1}^p τ_k)   as   T → ∞.  □

Under the same conditions as in Theorem 11.9, the time-normalized
point processes of u_k-upcrossings tend jointly in distribution to
independent Poisson processes with intensities  τ_k.  The precise for-
mulation of the theorem, and its proof, is left to the reader.
We end this chapter with an example which illustrates the extraordinary
character of extremes in normal processes.  Let  {ζ(t)}  and  {η(t)}
be independent standardized normal processes whose covariance functions
r_ζ  and  r_η  satisfy (7.1) and (7.2), let  c_k,  k = 1, ..., p,  be
constants satisfying  |c_k| < 1  and  c_k ≠ c_l,  k ≠ l,  and define

    ξ_k(t) = c_k ζ(t) + (1 - c_k²)^{1/2} η(t).

Then the processes  ξ_k(t),  k = 1, ..., p,  are jointly normal, and
their covariance functions  r_k(t)  and cross-covariance functions

    r_{kl}(t) = c_k c_l r_ζ(t) + (1 - c_k²)^{1/2} (1 - c_l²)^{1/2} r_η(t)

satisfy (11.7)-(11.9).  Thus, even though  ξ₁(t), ..., ξ_p(t)  are
linearly dependent, their maxima are asymptotically independent.

We can illustrate this geometrically by representing  (ζ(t), η(t))
by a point moving randomly in the plane.  The upcrossings of a level
u_k  by  ξ_k(t)  then correspond to outcrossings of the line

    c_k x + (1 - c_k²)^{1/2} y = u_k

by  (ζ(t), η(t)),  as illustrated in the following diagram.

[Diagram: the lines  c_k x + (1 - c_k²)^{1/2} y = u_k  in the
(x,y)-plane, with the path of  (ζ(t), η(t))  crossing them.]

The times of these outcrossings form asymptotically independent Poisson
processes, when suitably normalized, if  Tμ_k → τ_k > 0.
CHAPTER 12

MAXIMA AND CROSSINGS OF NON-DIFFERENTIABLE NORMAL PROCESSES

The basic assumption of the previous chapters has been that the covari-
ance function  r(τ)  of the stationary normal process  ξ(t)  has an
expansion

    r(τ) = 1 - λ₂ τ²/2 + o(τ²)   as   τ → 0.

In this chapter we shall consider the more general class of covariance
functions which have the expansion

(12.1)  r(τ) = 1 - C|τ|^α + o(|τ|^α)   as   τ → 0,

where  α  is a constant,  0 < α ≤ 2,  and  C  is a positive constant.
This includes covariances of the form  exp(-|τ|^α),  the case  α = 1
being that of the Ornstein-Uhlenbeck process.  Note that we may take
α = 2  in (12.1), so that all results of this chapter are indeed true
also for the regular case previously studied; cf. Theorem 12.9.

If  α < 2  we cannot expect a Poisson result to hold for the upcross-
ings of a high level, since  r  is then not twice differentiable, im-
plying that  λ₂ = ∞,  and hence the mean number of crossings of any
level in any interval is infinite by Rice's formula.  Furthermore, it
can be shown that at every point  t  where a crossing occurs, there are
an uncountable number of crossings in every neighbourhood of  t.  How-
ever, it is certainly possible for the maximum  M(T)  to have a limit-
ing distribution.  The point is that while upcrossings may be infinite
in number, (12.1) is, as noted in Chapter 6, p. 8, sufficient to guaran-
tee continuity of the sample paths of the process, and thus ensure that
M(T)  is well defined and finite.

We shall in fact show that  a_T(M(T) - b_T)  has a limiting double
exponential distribution if the normalizing constants are
(12.2)  a_T = (2 log T)^{1/2},
        b_T = (2 log T)^{1/2}
              + { ((2-α)/2α) log log T
                  + log(C^{1/α} H_α (2π)^{-1/2} 2^{(2-α)/2α}) } / (2 log T)^{1/2},

where  H_α  is a certain strictly positive constant.

This remarkable result was first obtained by Pickands (1969), al-
though his proofs were not quite complete.  Complements and extensions
have been given by Berman (1971b), Qualls and Watanabe (1972), and
Lindgren, de Maré, and Rootzén (1975).  The method of Pickands has some
particularly interesting features, in that it uses a generalized notion
of upcrossings which makes it possible to obtain a Poisson type result
also for  α < 2.
Specifically, Pickands considers what he terms ε-upcrossings.  Given
ε > 0,  the function  f(t)  is said to have an ε-upcrossing of the
level  u  at  t₀  if  f(t) ≤ u  for all  t ∈ (t₀ - ε, t₀)  and, for
all  η > 0,  f(t) > u  for some  t ∈ (t₀, t₀ + η).  Clearly, this is
equivalent to requiring that it has a (non-strict or strict) upcrossing
there, and furthermore  f(t) ≤ u  for all  t  in  (t₀ - ε, t₀).  An
ε-upcrossing is always an upcrossing, while obviously an upcrossing
need not be an ε-upcrossing.

Clearly, the number of ε-upcrossings in, say, a unit interval, is
bounded (by  1/ε)  and hence certainly has a finite mean.  Even if this
mean cannot be calculated as easily as the mean number of ordinary up-
crossings, its limiting form for large  u  has a simple relation to the
extremal results for  M(T).  In particular, as we shall see, it does
not depend on the  ε  chosen.

The main complication in the derivation of this result, as compared
to the case  α = 2,  concerns the tail distribution of  M(h)  for  h
fixed, which cannot be approximated with the tail distribution of the
simple cosine process if  α < 2.
The proof involves quite a number of steps, and it might be useful
to get a "birdseye view" from the following summary of the main steps.

1. Find the asymptotic distribution of  max{ξ(jq); j = 0, ..., n-1}
for a fixed  n:

    P{ max_{0≤j<n} ξ(jq) > u } ~ H_α(n,a) φ(u)/u

as  u → ∞,  q → 0  with  q C^{1/α} u^{2/α} → a > 0  and  n  fixed,
where  H_α(n,a)  is a constant (Lemma 12.3).

2. Find the asymptotic distribution of  max{ξ(jq); 0 ≤ jq ≤ h}  for a
fixed  h > 0:

    P{ max_{0≤jq≤h} ξ(jq) > u } ~ h C^{1/α} (H_α(a)/a) u^{2/α} φ(u)/u,

where  H_α(a) = lim_{n→∞} H_α(n,a)/n  (Lemma 12.4).

3. Approximate  sup_{0≤t≤h} ξ(t)  by  max_{0≤jq≤h} ξ(jq)  for fixed  h:

    lim sup_{u→∞} (u^{2/α} φ(u)/u)^{-1}
        P{ max_{0≤jq≤h} ξ(jq) ≤ u - β/u, sup_{0≤t≤h} ξ(t) > u } → 0
    as  a → 0

(Lemma 12.5), and

    lim sup_{u→∞} (u^{2/α} φ(u)/u)^{-1}
        P{ u - β/u < max_{0≤jq≤h} ξ(jq) ≤ u } → 0   as   β → 0

(Lemma 12.6).

4. Find the tail distribution of  sup_{0≤t≤h} ξ(t)  for a fixed  h:

    P{ sup_{0≤t≤h} ξ(t) > u } ~ h C^{1/α} H_α u^{2/α} φ(u)/u,

where  H_α = lim_{a→0} H_α(a)/a > 0  (Lemmas 12.7 and 12.8, and
Theorem 12.9).

5. Once the tail distribution and its discrete approximation are ob-
tained, continue as in Chapter 7 to prove asymptotic independence of
maxima in disjoint intervals under suitable covariance conditions, e.g.
r(t) log t → 0.
We start with a general result, of some interest in its own right,
giving an estimate of the probability of a large deviation.  It is a
special case of a result announced by Fernique (1964), a proof having
been given by Marcus (1970).

LEMMA 12.1  If  {ξ(t); 0 ≤ t ≤ 1}  is a normal process with mean zero
and  Var(ξ(0)) = σ²,  which satisfies

(12.3)  E(ξ(s) - ξ(t))² ≤ c|s - t|^α

for some  α,  0 < α ≤ 2,  then there exists a constant  c_α > 0,  only
depending on  α,  such that for all  x,

    P{ sup_{0≤t≤1} ξ(t) > x } ≤ 4 e^{-c_α x²/c} + (1/2) e^{-x²/8σ²}.

If  σ = 0,  the last term is zero.
PROOF  The idea of the proof is to express all  t ∈ [0,1]  in dyadic
form,  t = a₁2⁻¹ + a₂2⁻² + ...,  with  a_k = 0,1,  and then write

(12.4)  ξ(t) = ξ(0) + (ξ(a₁2⁻¹) - ξ(0))
               + (ξ(a₁2⁻¹ + a₂2⁻²) - ξ(a₁2⁻¹)) + ...

and construct bounds for each of the terms in this telescoping sum.
Define, for  p = 0, 1, ...;  k = 0, 1, ..., 2^p - 1,

    ζ(k,p) = ξ((k+1)2^{-p}) - ξ(k 2^{-p}),

and note that, since  ξ(t)  is normal with mean zero,

(12.5)  P{ |ζ(k,p)| > x E(ζ(k,p)²)^{1/2} } ≤ 2 ∫_x^∞ φ(y) dy ≤ e^{-x²/2}

for  x > 0.

Now, let  β > 0  and introduce the event

    A = ∪_{p=0}^∞ { max_{0≤k≤2^p-1} |ζ(k,p)| / E(ζ(k,p)²)^{1/2}
                    > (2β(p+1))^{1/2} }.

If  β > log 2,  Boole's inequality, together with (12.5), implies

    P(A) ≤ Σ_{p=0}^∞ 2^p e^{-β(p+1)} = e^{-β} / (1 - e^{-(β - log 2)}),

so that if  β ≥ 2 log 2,  then

(12.6)  P(A) ≤ 4 e^{-β}.

But for  β < 2 log 2  this holds trivially (since then  4e^{-β} > 1),
and we can therefore use (12.6) for all values of  β ≥ 0.

Next, note that on the complementary event  A*,  all the inequalities

    |ζ(k,p)| ≤ (2β(p+1))^{1/2} E(ζ(k,p)²)^{1/2}

hold for  p = 0, 1, ...;  k = 0, 1, ..., 2^p - 1,  and that (12.3)
implies

    E(ζ(k,p)²) ≤ c 2^{-pα}.

Thus, by (12.4), we conclude that on  A*,

    |ξ(t) - ξ(0)| ≤ Σ_{p=1}^∞ (2β(p+1))^{1/2} (c 2^{-pα})^{1/2}
                  = K_α (βc)^{1/2},

say, and that consequently, by (12.6),

    P{ sup_{0≤t≤1} |ξ(t) - ξ(0)| > K_α (βc)^{1/2} } ≤ 4 e^{-β}.

The conclusion of the lemma now follows by choosing  β = c_α x²/c  with
c_α = 1/(4K_α²),  since then  K_α(βc)^{1/2} = x/2  and

    P{ sup_{0≤t≤1} ξ(t) > x }
    ≤ P{ sup_{0≤t≤1} |ξ(t) - ξ(0)| > x/2 } + P{ξ(0) > x/2}
    ≤ 4 e^{-c_α x²/c} + (1/2) e^{-x²/8σ²}.  □
With this result out of the way we return to the process {ξ(t)} with covariance function r(t). When considering the distribution of continuous or discrete maxima like

   sup_{0≤t≤h} ξ(t)   and   max_{0≤jq≤h} ξ(jq)

for small values of h, it is natural to condition on the value of ξ(0). In fact, the local behaviour (12.1) of r(t) is reflected in the local variation of ξ(t) around ξ(0). For normal processes this involves no difficulty of definition if one considers ξ(t) only at a finite number of points, say t = t_j, j = 1, …, n, since conditional probabilities are then defined in terms of ratios of density functions (cf. Chapter 6, p. 16). Thus we can write, with t₀ = 0,

   P{ max_{j=0,…,n} ξ(t_j) ≤ u } = ∫_{−∞}^u φ(x) P{ max_{j=1,…,n} ξ(t_j) ≤ u | ξ(0) = x } dx,

where the conditional probability can be expressed by means of a (conditional) normal density function (cf. Chapter 10, p. 66). In particular the conditional probability is determined by conditional means and covariances.

For maxima over a real interval we have, e.g. with t_j = jh2⁻ⁿ, j = 0, …, 2ⁿ,

   P{ sup_{0≤t≤h} ξ(t) ≤ u } = lim_{n→∞} P{ max_{t_j} ξ(t_j) ≤ u },

which by dominated convergence equals

   ∫_{−∞}^u φ(x) lim_{n→∞} P{ max_{t_j} ξ(t_j) ≤ u | ξ(0) = x } dx.

Now, if the conditional means and covariances of ξ(t) given ξ(0) are such that the normal process they define is continuous, we define

   P{ sup_{0≤t≤h} ξ(t) ≤ u | ξ(0) = x }

to be the probability that, in a continuous normal process with mean E(ξ(t) | ξ(0) = x) and covariance function Cov(ξ(s), ξ(t) | ξ(0) = x), the maximum does not exceed u. Then clearly

   lim_{n→∞} P{ max_{t_j} ξ(t_j) ≤ u | ξ(0) = x } = P{ sup_{0≤t≤h} ξ(t) ≤ u | ξ(0) = x }.

In this case we have, by dominated convergence,

   P{ sup_{0≤t≤h} ξ(t) ≤ u } = ∫_{−∞}^u φ(x) P{ sup_{0≤t≤h} ξ(t) ≤ u | ξ(0) = x } dx.

In the applications below, the conditional distributions define a continuous process, and we shall without further comment use relations like this.
To obtain non-trivial limits as u → ∞ we introduce the rescaled process

   ξ_u(t) = u(ξ(tq) − u),

where we shall let q tend to zero as u → ∞. Here we have to be a little more specific about this convergence than in Chapter 7, and shall assume that u^{2/α} q → a > 0, and let a tend to zero at a later stage.
LEMMA 12.2  Suppose u → ∞, q → 0 so that u^{2/α} q → a > 0. Then

(i) the conditional distributions of ξ_u(t), given that ξ_u(0) = x, are normal with

   E(ξ_u(t) | ξ_u(0) = x) = x − Ca^α |t|^α (1 + o(1)),
   Cov(ξ_u(s), ξ_u(t) | ξ_u(0) = x) = Ca^α (|s|^α + |t|^α − |t − s|^α) + o(1),

where for fixed x the o(1) are uniform for max(|s|, |t|) ≤ t₀, for all t₀ > 0;

(ii) for all t₀ > 0 there is a constant K, not depending on a or x, such that, for |s|, |t| ≤ t₀,

   Var(ξ_u(s) − ξ_u(t) | ξ_u(0) = x) ≤ Ka^α |t − s|^α.
PROOF  (i) Since the process ξ_u(t) is normal with mean −u² and covariance function u²r((t − s)q), we obtain (see e.g. Rao (1973), p. 522) that the conditional distributions are normal with

   E(ξ_u(t) | ξ_u(0) = x) = −u² + r(tq)(x + u²)
      = −u² + (1 − Cq^α|t|^α + o(q^α|t|^α))(x + u²)
      = x − (x + u²)(Cq^α|t|^α + o(q^α|t|^α))
      = x − Ca^α|t|^α(1 + o(1)),

since u²q^α → a^α > 0 and x is fixed. Furthermore,

   Cov(ξ_u(s), ξ_u(t) | ξ_u(0) = x) = u²(r((t − s)q) − r(sq)r(tq))
      = u²(1 − Cq^α|t − s|^α − (1 − Cq^α|s|^α)(1 − Cq^α|t|^α) + o(q^α))
      = Ca^α(|s|^α + |t|^α − |t − s|^α) + o(1),

uniformly for max(|s|, |t|) ≤ t₀.

(ii) Since ξ_u(s) − ξ_u(t) and ξ_u(0) are normal with variances 2u²(1 − r((t − s)q)) and u², respectively, and covariance u²(r(tq) − r(sq)), we have, for some constant K,

   Var(ξ_u(s) − ξ_u(t) | ξ_u(0) = x) = 2u²(1 − r((t − s)q)) − u²(r(tq) − r(sq))²
      ≤ 2Cu²q^α|t − s|^α + o(u²q^α|t − s|^α) ≤ Ka^α|t − s|^α.   □
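The regression formulas in this proof are easy to sanity-check numerically. The following sketch (function names and parameter values are ours, and r(t) = 1 − C|t|^α is used as an exact stand-in for the local form (12.1)) evaluates the conditional mean and covariance of the rescaled process and compares them with the limits stated in Lemma 12.2:

```python
def cond_mean(x, u, q, t, C=1.0, alpha=1.0):
    """E(xi_u(t) | xi_u(0) = x) = -u^2 + r(tq)(x + u^2), for the rescaled
    process xi_u(t) = u(xi(tq) - u), with r(v) = 1 - C|v|^alpha."""
    r = 1.0 - C * abs(t * q) ** alpha
    return -u * u + r * (x + u * u)

def cond_cov(u, q, s, t, C=1.0, alpha=1.0):
    """Cov(xi_u(s), xi_u(t) | xi_u(0)) = u^2 (r((t-s)q) - r(sq) r(tq))."""
    r = lambda v: 1.0 - C * abs(v) ** alpha
    return u * u * (r((t - s) * q) - r(s * q) * r(t * q))

# With u^{2/alpha} q = a held fixed, these approach the limits of Lemma 12.2:
# x - C a^alpha |t|^alpha   and   C a^alpha (|s|^a + |t|^a - |t-s|^a).
u, alpha, a = 1000.0, 1.0, 1.0
q = a / u ** (2.0 / alpha)
print(cond_mean(0.5, u, q, 2.0))   # close to 0.5 - 2 = -1.5
print(cond_cov(u, q, 1.0, 2.0))    # close to 1 + 2 - 1 = 2
```

At u = 1000 the computed values already agree with the limits to about five decimal places, which is the content of part (i).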
The first step in the derivation of the tail distribution of M(h) = sup{ξ(t); 0 ≤ t ≤ h} is to consider the maximum taken over a fixed number of points, 0, q, …, (n−1)q.
LEMMA 12.3  For each n and a there is a constant H_α(n,a) < ∞ such that, if u → ∞, q → 0, u^{2/α}q → a > 0, then

   (1/(φ(u)/u)) P{ max_{0≤j<n} ξ(jq) > u } → C^{1/α} H_α(n,a).
PROOF  We have

   P{ max_{0≤j<n} ξ(jq) > u } = P{ξ_u(0) > 0} + P{ ξ_u(0) ≤ 0, max_{0<j<n} ξ_u(j) > 0 },

where ξ_u(0) = u(ξ(0) − u) is normal with mean −u² and variance u². Since furthermore P{ξ_u(0) > 0} = P{ξ(0) > u} = 1 − Φ(u) ∼ φ(u)/u, we have, substituting ξ(0) = u + x/u,

(12.7)   P{ ξ_u(0) ≤ 0, max_{0<j<n} ξ_u(j) > 0 }
      = ∫_{−∞}^0 (1/u) φ(u + x/u) P{ max_{0<j<n} ξ_u(j) > 0 | ξ_u(0) = x } dx
      = (φ(u)/u) ∫_{−∞}^0 e^{−x − x²/2u²} P{ max_{0<j<n} (ξ_u(j) − x) > −x | ξ_u(0) = x } dx.

By Lemma 12.2 (i), for any fixed x,

   E(ξ_u(j) − x | ξ_u(0) = x) → −Ca^α|j|^α,
   Cov(ξ_u(i) − x, ξ_u(j) − x | ξ_u(0) = x) → Ca^α(|i|^α + |j|^α − |j − i|^α)

as q → 0. Since limits of covariances are covariances, one can define a sequence of normal r.v.'s, Y_a(j), j = 1, 2, …, with a = lim qu^{2/α}, with mean and covariances

   E(Y_a(j)) = −Ca^α|j|^α,   Cov(Y_a(i), Y_a(j)) = Ca^α(|i|^α + |j|^α − |j − i|^α).

Now convergence of moments implies convergence in distribution for jointly normal r.v.'s (as can easily be seen, e.g. using characteristic functions). The boundary of the set {max_{0<j<n} Y_a(j) > −x} is contained in the set

   ⋃_{j=1}^{n−1} { Y_a(j) = −x },

and since the one-dimensional distributions of Y_a(1), …, Y_a(n−1) are all non-degenerate, {max_{0<j<n} Y_a(j) > −x} is a.s. a continuity set,
and it follows that

   P{ max_{0<j<n} (ξ_u(j) − x) > −x | ξ_u(0) = x } → P{ max_{0<j<n} Y_a(j) > −x }.

To be able to use the dominated convergence theorem in (12.7) we note that, by Lemma 12.2 (i),

   P{ max_{0<j<n} (ξ_u(j) − x) > −x | ξ_u(0) = x } ≤ Σ_{j=1}^{n−1} P{ ξ_u(j) − x > −x | ξ_u(0) = x }
      ≤ n(1 − Φ(c′ − c″x)) ≤ n φ(c′ − c″x)/(c′ − c″x)

for some constants c′, c″ > 0. This shows that the convergence in (12.7) is dominated, and we obtain

   (1/(φ(u)/u)) P{ max_{0≤j<n} ξ(jq) > u } → 1 + ∫_{−∞}^0 e^{−x} P{ max_{0<j<n} Y_a(j) > −x } dx < ∞,

which proves the existence and finiteness of the constant H_α(n,a).   □

For future use we note the following expression for the constant H_α(n,a):

(12.8)   H_α(n,a) = C^{−1/α} (1 + ∫_{−∞}^0 e^{−x} P{ max_{0<j<n} Y_a(j) > −x } dx).
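Substituting y = −x in (12.8) gives the probabilistic form C^{1/α}H_α(n,a) = E[max(1, exp(max_{0<j<n} Y_a(j)))], which is convenient for simulation. For α = 1 and C = 1 the Y_a(j) form a Gaussian random walk with drift (mean −aj, covariance 2a·min(i,j)), so the constant can be estimated by Monte Carlo. A rough sketch (function name, sample sizes and seed are our choices; the estimator is quite noisy since exp(max Y) is heavy-tailed):

```python
import math
import random

def estimate_H1(n, a, reps=20000, seed=1):
    """Monte Carlo estimate of C^{1/alpha} H_alpha(n,a) for alpha = 1, C = 1,
    using  H_1(n,a) = E[max(1, exp(max_{0<j<n} Y_a(j)))],  where Y_a(j) is
    the Gaussian random walk sqrt(2a)(Z_1 + ... + Z_j) - a*j."""
    rng = random.Random(seed)
    step = math.sqrt(2.0 * a)
    total = 0.0
    for _ in range(reps):
        walk, peak = 0.0, -float("inf")
        for j in range(1, n):
            walk += step * rng.gauss(0.0, 1.0)
            peak = max(peak, walk - a * j)
        total += max(1.0, math.exp(peak))
    return total / reps

# By Lemma 12.4 below, estimate_H1(n, a)/(n*a) should approach H_alpha(a),
# and (Remark 12.10) H_1 = 1; convergence is very slow in n and a.
```

The estimate is always at least 1, reflecting the leading term 1 in (12.8).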
LEMMA 12.4  Suppose u → ∞, q → 0, u^{2/α}q → a > 0, and take h such that sup_{ε≤t≤h} r(t) < 1 for all ε > 0. Then, for each a,

(i) there is a constant H_α(a) < ∞ such that

   H_α(n,a)/(na) → H_α(a)   as n → ∞,

and

   (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) > u } → h C^{1/α} H_α(a);

(ii) there is an a₀ > 0 such that H_α(a₀) > 0.
PROOF  (i) Let n be a fixed integer, write m = [h/nq], and

   B_r = { max_{rn≤j<(r+1)n} ξ(jq) > u }.

Then

   P(⋃_{r=0}^{m−1} B_r) ≤ P{ max_{0≤jq≤h} ξ(jq) > u } ≤ P(⋃_{r=0}^{m} B_r),

where, by Lemma 12.3,

   P(⋃_{r=0}^{m} B_r) ≤ (m+1) P(B₀) ∼ (m+1) (φ(u)/u) C^{1/α} H_α(n,a) ∼ (h/(na)) u^{2/α} (φ(u)/u) C^{1/α} H_α(n,a),

since

(12.9)   1/q ∼ u^{2/α}/a

by assumption. Hence

   lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) > u } ≤ h C^{1/α} H_α(n,a)/(na) < ∞.

Furthermore

   P(⋃_{r=0}^{m−1} B_r) ≥ Σ_{r=0}^{m−1} P(B_r) − Σ_{r≠s} P(B_r ∩ B_s) ≥ m P(B₀) − m Σ_{r=1}^{m−1} P(B₀ ∩ B_r),

so that

(12.10)   lim inf_{u→∞} (1/(u^{2/α}φ(u)/u)) P(⋃_{r=0}^{m−1} B_r)
      ≥ h C^{1/α} H_α(n,a)/(na) − lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) m Σ_{r=1}^{m−1} P(B₀ ∩ B_r)
      = h C^{1/α} H_α(n,a)/(na) − ρ_n,   say.

We shall now show that

(12.11)   ρ_n = lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) m Σ_{r=1}^{m−1} P(B₀ ∩ B_r) → 0   as n → ∞.
By Boole's inequality and stationarity

   m Σ_{r=1}^{m−1} P(B₀ ∩ B_r) = m Σ_{r=1}^{m−1} P( ⋃_{0≤i<n} ⋃_{rn≤j<(r+1)n} { ξ(iq) > u, ξ(jq) > u } )

(12.12)   ≤ m Σ_{i=0}^{n−1} Σ_{j=n}^{mn} P{ ξ(iq) > u, ξ(jq) > u }
      ≤ m Σ_{j=1}^{n} j P{ ξ(0) > u, ξ(jq) > u } + mn Σ_{j=n+1}^{mn} P{ ξ(0) > u, ξ(jq) > u }.

To estimate these sums we have to use different techniques for small and large values of jq. Let ε > 0 be such that 1 − r(t) ≥ (C/2)|t|^α for |t| < ε. Assume jq ≤ ε, and write r = r(jq). Since the conditional distribution of ξ(jq), given ξ(0) = x, is normal with mean rx and variance 1 − r²,

   P{ ξ(0) > u, ξ(jq) > u } = ∫_u^∞ φ(x) P{ ξ(jq) > u | ξ(0) = x } dx
      = ∫_u^∞ φ(x) (1 − Φ((u − rx)/√(1 − r²))) dx

(12.13)   = [ −(1 − Φ(x))(1 − Φ((u − rx)/√(1 − r²))) ]_{x=u}^∞ + ∫_u^∞ (1 − Φ(x)) (r/√(1 − r²)) φ((u − rx)/√(1 − r²)) dx
      = (1 − Φ(u))(1 − Φ(u√((1 − r)/(1 + r)))) + ∫_u^∞ (1 − Φ(x)) (r/√(1 − r²)) φ((u − rx)/√(1 − r²)) dx.

Here,

(12.14)   (1 − r)/(1 + r) = (1 − r(jq))/(1 + r(jq)) ≥ (C/4)(jq)^α,

so that, for jq ≤ ε and some constant K > 0,

   u² (1 − r(jq))/(1 + r(jq)) ≥ K j^α a^α,
and both terms in (12.13) are bounded by (φ(u)/u)(1 − Φ(√(K j^α a^α))). Hence

(12.15)   m Σ_{j=1}^{n} j P{ ξ(0) > u, ξ(jq) > u } ≤ 2m (φ(u)/u) Σ_{j=1}^{n} j (1 − Φ(√(K j^α a^α)))
      ∼ (2h/(nq)) (φ(u)/u) Σ_{j=1}^{n} j (1 − Φ(√(K j^α a^α)))
      ≤ K′ h u^{2/α} (φ(u)/u) (1/n) Σ_{j=1}^{n} j (1 − Φ(√(K j^α a^α)))

for some constant K′, since 1/q ∼ u^{2/α}/a. Since 1 − Φ(x) ≤ φ(x)/x, the sum

   Σ_{j=1}^{∞} j (1 − Φ(√(K j^α a^α)))

is convergent, and hence

(12.16)   lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) m Σ_{j=1}^{n} j P{ ξ(0) > u, ξ(jq) > u } = O(1/n)   as n → ∞.
For the second sum in (12.12) we get, again using (12.13),

(12.17)   mn Σ_{j=n+1}^{[ε/q]} P{ ξ(0) > u, ξ(jq) > u } ≤ K″ h u^{2/α} (φ(u)/u) Σ_{j=n+1}^{∞} (1 − Φ(√(K j^α a^α))),

where the sum is convergent. For terms with jq > ε we use the estimate from Lemma 3.2,

   P{ ξ(0) > u, ξ(jq) > u } ≤ (1 − Φ(u))² + K |r(jq)| e^{−u²/(1 + |r(jq)|)},

which implies that

   mn Σ_{j=[ε/q]+1}^{mn} P{ ξ(0) > u, ξ(jq) > u } ≤ (h/q)² (1 − Φ(u))² + K (h/q) Σ_{ε<jq≤h} |r(jq)| e^{−u²/(1 + |r(jq)|)}.

Since δ = sup_{ε≤t≤h} |r(t)| < 1, this is bounded by

(12.18)   (h/q)² ((1 − Φ(u))² + K e^{−u²/(1+δ)}) = u^{2/α} (φ(u)/u) · o(1)

as u → ∞, since (1/q)² ∼ u^{4/α}/a² and

   u^{4/α} e^{−u²/(1+δ)} / (u^{2/α} φ(u)/u) = O(u^{1+2/α} e^{−u²(1−δ)/(2(1+δ))}) → 0,

because δ < 1. Together, (12.17) and (12.18) imply that

   lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) mn Σ_{j=n+1}^{mn} P{ ξ(0) > u, ξ(jq) > u } ≤ K′ Σ_{j=n+1}^{∞} (1 − Φ(√(K j^α a^α))) → 0   as n → ∞,

and combining this with (12.16) and (12.12) we obtain (12.11).
Thus we have shown that

   h C^{1/α} H_α(n,a)/(na) − ρ_n ≤ lim inf_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) > u }
      ≤ lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) > u } ≤ h C^{1/α} H_α(n,a)/(na),

where ρ_n → 0 as n → ∞. Since H_α(n,a) < ∞ for all n, and the lim inf and lim sup do not depend on n, this implies the existence of

   lim_{n→∞} H_α(n,a)/(na) = H_α(a),

which is then the joint value of lim inf and lim sup. Furthermore this proves that H_α(a) < ∞.

(ii) Take ε > 0 small enough to make 1 ≥ 1 − r(t) ≥ 2K|t|^α, for some K > 0 and |t| ≤ ε. Applying (12.13)–(12.15), we obtain for h < ε,

   P{ max_{0≤jq≤h} ξ(jq) > u } ≥ Σ_{0≤jq≤h} P{ ξ(jq) > u } − Σ_{0≤iq<jq≤h} P{ ξ(iq) > u, ξ(jq) > u }
      ≥ ([h/q] + 1) (φ(u)/u) (1 − 2 Σ_{j=1}^{[h/q]+1} (1 − Φ(√(K j^α a^α)))).

Since [h/q] + 1 ∼ h a^{−1} u^{2/α}, part (ii) follows from

   Σ_{j=1}^{[h/q]+1} (1 − Φ(√(K j^α a^α))) ≤ Σ_{j=1}^{∞} φ(√K j^{α/2} a^{α/2}) / (√K j^{α/2} a^{α/2}) → 0   as a → ∞,

as there is then certainly one a₀ that makes the last sum less than 1/4, and consequently

   h C^{1/α} H_α(a₀) ≥ h/(2a₀) > 0.   □
The following three lemmas relate the continuous maximum sup_{0≤t≤h} ξ(t) to the discrete one max_{0≤jq≤h} ξ(jq). We first prove that we can neglect the probability that the discrete maximum is less than u − y/u while the continuous one is greater than u, as a → 0.

LEMMA 12.5  Let u → ∞, q → 0, u^{2/α}q → a > 0, and let y = a^β for some positive constant β < α/2. Then

   lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) ≤ u − y/u, sup_{0≤t≤h} ξ(t) > u } → 0   as a → 0.

PROOF  By Boole's inequality and stationarity

   P{ max_{0≤jq≤h} ξ(jq) ≤ u − y/u, sup_{0≤t≤h} ξ(t) > u } ≤ (h/q + 1) P{ ξ(0) ≤ u − y/u, sup_{0≤t≤q} ξ(t) > u },

and with ξ_u(s) = u(ξ(sq) − u), we can write
   P{ ξ(0) ≤ u − y/u, sup_{0≤t≤q} ξ(t) > u }
      = ∫_{x=−∞}^{u−y/u} φ(x) P{ sup_{0≤t≤q} ξ(t) > u | ξ(0) = x } dx
      = (1/u) ∫_{z=−∞}^{−y} φ(u + z/u) P{ sup_{0≤s≤1} ξ_u(s) > 0 | ξ_u(0) = z } dz.

By Lemma 12.2 (i), the conditional distributions of ξ_u(s) given ξ_u(0) = z are normal with mean

   μ(s) = z − Ca^α|s|^α(1 + o(1)),

with the o(1) uniform in |s| ≤ 1. Here μ(s) ≤ z for small q, and

   P{ sup_{0≤s≤1} ξ_u(s) > 0 | ξ_u(0) = z } ≤ P{ sup_{0≤s≤1} (ξ_u(s) − μ(s)) > −z | ξ_u(0) = z },

where, conditional on ξ_u(0), ξ_u(s) − μ(s) is a non-stationary normal process with mean zero and, by Lemma 12.2 (ii), incremental variance

   Var(ξ_u(s) − ξ_u(t) | ξ_u(0) = z) ≤ Ka^α|t − s|^α

for some constant K which is independent of z. Fernique's Lemma (Lemma 12.1) implies that, with c = c_α/K,

   P{ sup_{0≤s≤1} (ξ_u(s) − μ(s)) > −z | ξ_u(0) = z } ≤ 4 exp(−c a^{−α} z²),

and thus, using K and c as generic constants,

   (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) ≤ u − y/u, sup_{0≤t≤h} ξ(t) > u }
      ≤ (K/(q u^{2/α})) ∫_{−∞}^{−y} e^{−z − c a^{−α} z²} dz
      ≤ (K/a) ∫_{−∞}^{−y} e^{−z − c a^{−α} z²} dz.

Clearly, this tends to zero as a → 0, since β < α/2, which proves the lemma.   □
LEMMA 12.6  If u → ∞, q → 0, u^{2/α}q → a > 0, then, for fixed y > 0 and with H_α(a) as in Lemma 12.4,

   (1/(u^{2/α}φ(u)/u)) P{ u − y/u < max_{0≤jq≤h} ξ(jq) ≤ u } → h(e^y − 1) C^{1/α} H_α(a).

PROOF  Since u^{2/α}q → a > 0 implies (u − y/u)^{2/α} q → a, and since furthermore

   (u − y/u)^{2/α} φ(u − y/u) / (u − y/u) ∼ e^y u^{2/α} φ(u)/u

as u → ∞, it follows from Lemma 12.4 (i) that

   (1/(u^{2/α}φ(u)/u)) ( P{ max_{0≤jq≤h} ξ(jq) > u − y/u } − P{ max_{0≤jq≤h} ξ(jq) > u } )
      → h e^y C^{1/α} H_α(a) − h C^{1/α} H_α(a).   □
LEMMA 12.7  Under the conditions of Lemma 12.4,

(i) for y = a^β, β < α/2,

(12.19)   h C^{1/α} H_α(a) ≤ lim inf_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤h} ξ(t) > u }
      ≤ lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤h} ξ(t) > u }
      ≤ ν_a + h e^y C^{1/α} H_α(a),

where ν_a → 0 as a → 0;

(ii) lim_{a→0} H_α(a) = H_α, say, exists finite, and

(12.20)   lim_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤h} ξ(t) > u } = h C^{1/α} H_α;

(iii) H_α is independent of C.

PROOF  Since

   P{ max_{0≤jq≤h} ξ(jq) > u } ≤ P{ sup_{0≤t≤h} ξ(t) > u }
      ≤ P{ max_{0≤jq≤h} ξ(jq) ≤ u − y/u, sup_{0≤t≤h} ξ(t) > u }
         + P{ u − y/u < max_{0≤jq≤h} ξ(jq) ≤ u } + P{ max_{0≤jq≤h} ξ(jq) > u },

part (i) follows directly from Lemmas 12.4, 12.5, and 12.6.
Further, the middle limits in (12.19) are independent of a, and since ν_a → 0 and h(e^y − 1)C^{1/α}H_α(a) → 0 as a → 0, it follows that lim sup_{a→0} H_α(a) < ∞. Therefore it follows as in the proof of Lemma 12.4 (i) that lim_{a→0} H_α(a) exists, finite, and that (12.20) holds.

For part (iii), note that if ξ(t) satisfies (12.1) then the covariance function r̃(t) of ξ̃(t) = ξ(t/C^{1/α}) satisfies r̃(t) = 1 − |t|^α + o(|t|^α) as t → 0. Furthermore

   (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤h} ξ(t) > u } = (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤hC^{1/α}} ξ̃(t) > u },

which by (ii) shows that H_α does not depend on C.   □
One immediate consequence of (12.20) and Lemma 12.4 (i) is that

(12.21)   lim sup_{u→∞} (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) ≤ u, sup_{0≤t≤h} ξ(t) > u }
      = lim sup_{u→∞} ( (1/(u^{2/α}φ(u)/u)) P{ sup_{0≤t≤h} ξ(t) > u } − (1/(u^{2/α}φ(u)/u)) P{ max_{0≤jq≤h} ξ(jq) > u } )
      = h C^{1/α} (H_α − H_α(a)) → 0   as a → 0.

Of course, (12.20) has its main interest if H_α > 0, but to prove this requires some further work.
LEMMA 12.8   H_α > 0.

PROOF  We have from Lemma 12.4 (i) and (ii) that there is an a₀ > 0 such that H_α(a₀) > 0. Let the sequence Y_a(j) be as in the proof of Lemma 12.3, i.e. normal with mean −Ca^α|j|^α and covariances Ca^α(|i|^α + |j|^α − |j − i|^α). Then we have from (12.8),

   C^{1/α} H_α(n,a)  = 1 + ∫_{−∞}^0 e^{−x} P{ max_{0<j<n} Y_a(j) > −x } dx,
   C^{1/α} H_α(nk,a) = 1 + ∫_{−∞}^0 e^{−x} P{ max_{0<j<nk} Y_a(j) > −x } dx,
   C^{1/α} H_α(n,ak) = 1 + ∫_{−∞}^0 e^{−x} P{ max_{0<j<n} Y_{ak}(j) > −x } dx.

Here Y_a(jk), j = 1, …, n, have the same distributions as Y_{ak}(j), j = 1, …, n, which implies

   H_α(n,ak) ≤ H_α(nk,a)

for k = 1, 2, …, the r.v.'s appearing in H_α(n,ak) forming a subset of those appearing in H_α(nk,a). Thus, dividing by nak = (nk)a and letting n → ∞,

   H_α(ak) ≤ H_α(a),

and since therefore H_α(a₀) ≤ H_α(a₀/k), while H_α(a₀/k) → H_α as k → ∞, the lemma follows.   □
By combining Lemmas 12.7 and 12.8 we obtain the tail of the distribution of the maximum M(h) = sup{ξ(t); 0 ≤ t ≤ h} over a fixed interval.

THEOREM 12.9  If r(t) satisfies (12.1), and sup_{ε≤t≤h} r(t) < 1 for all ε > 0, then for each fixed h > 0

   lim_{u→∞} (1/u^{2/α}) P{M(h) > u} / (φ(u)/u) = h C^{1/α} H_α,

where H_α > 0 is a finite constant depending only on α.
REMARK 12.10  In the proof of Theorem 12.9 we obtained the existence of the constant H_α by rather tricky estimates, starting with

   H_α(n,a) = C^{−1/α} (1 + ∫_{−∞}^0 e^{−x} P{ max_{0≤j≤n} Y_a(j) > −x } dx).

By pursuing these estimates further one can obtain a related expression,

   H_α = lim_{T→∞} T^{−1} ∫_{−∞}^0 e^{−x} P{ sup_{0≤t≤T} Y₀(t) > −x } dx,

where {Y₀(t)} is a non-stationary normal process with mean −|t|^α and covariances |s|^α + |t|^α − |t − s|^α. However, this does not seem to be very instructive, nor of much help in computing H_α.

It should be noted, though, that the proper time-normalization of the maximum distribution only depends on the covariance function through the time-scale C^{1/α} and on the constant H_α. Therefore, if one can find the limiting form of the tail of the distribution of M(h) (for some h) for one single process satisfying (12.1), one also knows the value of H_α for that particular α. For α = 2 this is easily done, by considering the simple cosine-process (6.6). By comparing (6.12) and Theorem 12.9, we find

   H₂ = 1/√π.

The only other value of α for which the tail of the distribution of M(h) has been found is α = 1. In fact, explicit expressions for the entire distribution of M(h) are known for the normal process with triangular covariance function r(t) = 1 − |t|, |t| ≤ 1, see Slepian (1961), and as a result one has H₁ = 1. In particular, this shows that for the Ornstein-Uhlenbeck process, with r(t) = exp(−|t|),

   P{M(h) > u} ∼ h u φ(u).   □
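For the Ornstein-Uhlenbeck case this tail formula can be checked by direct simulation, since the process restricted to a grid is an exact AR(1) sequence. A hedged sketch (grid step, level and sample size are our choices, and the discrete grid slightly underestimates the continuous maximum, so only rough agreement with h·u·φ(u) should be expected):

```python
import math
import random

def ou_max_tail(h=5.0, u=2.5, dt=0.01, reps=4000, seed=2):
    """Estimate P{ sup_{0<=t<=h} xi(t) > u } for the stationary OU process
    (r(t) = exp(-|t|)) by exact AR(1) simulation on a grid of step dt.
    Returns (Monte Carlo estimate, asymptotic value h*u*phi(u))."""
    rng = random.Random(seed)
    rho = math.exp(-dt)               # one-step correlation
    sd = math.sqrt(1.0 - rho * rho)   # innovation standard deviation
    n = int(round(h / dt))
    hits = 0
    for _ in range(reps):
        x = rng.gauss(0.0, 1.0)       # start in the stationary law
        exceeded = x > u
        step = 0
        while not exceeded and step < n:
            x = rho * x + sd * rng.gauss(0.0, 1.0)
            exceeded = x > u
            step += 1
        hits += 1 if exceeded else 0
    phi = math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)
    return hits / reps, h * u * phi
```

At u = 2.5 the asymptotic value h·u·φ(u) ≈ 0.22 and the simulated probability come out in the same range; agreement improves as u grows (and worsens again for fixed sample size, since hits become rare).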
Before proceeding to the maxima over increasing intervals we formulate the following lemma for later reference.

LEMMA 12.11  Suppose {ξ(t)} satisfies (12.1), let h > 0 be fixed such that sup_{ε≤t≤h} r(t) < 1 for all ε > 0, and let u → ∞, q → 0, u^{2/α}q → a > 0. Then for every interval I of length h,

   0 ≤ P{ ξ(jq) ≤ u, jq ∈ I } − P{ M(I) ≤ u } ≤ μhρ_a + o(μ),

where

   μ = C^{1/α} H_α u^{2/α} φ(u)/u,   ρ_a = 1 − H_α(a)/H_α → 0   as a → 0,

and the o(μ)-term is the same for all intervals of length h.

PROOF  By stationarity

   0 ≤ P{ ξ(jq) ≤ u, jq ∈ I } − P{ M(I) ≤ u }
      ≤ P{ ξ(0) > u } + P{ ξ(jq) ≤ u, jq ∈ [0,h] } − P{ M(h) ≤ u },

where P{ξ(0) > u} ≤ φ(u)/u = o(μ). Therefore the result is immediate from Lemma 12.4 (i),

   μ^{−1} P{ max_{0≤jq≤h} ξ(jq) > u } = h H_α(a)/H_α + o(1),

and (12.20),

   μ^{−1} P{ M(h) > u } = h + o(1).   □
Maxima over increasing intervals

The covariance condition (7.2), i.e. r(t) log t → 0 as t → ∞, is also sufficient to establish the double exponential limit for the maximum M(T) = sup{ξ(t); 0 ≤ t ≤ T} in this general case. We then let u → ∞ so that

   Tμ = T C^{1/α} H_α u^{2/α} φ(u)/u → τ > 0,

i.e. (T/h) P{M(h) > u} → τ. Taking logarithms we get

   log T + log(C^{1/α} H_α (2π)^{−1/2}) + (2/α − 1) log u − u²/2 → log τ,

implying u² ∼ 2 log T, or log u = ½ log 2 + ½ log log T + o(1), which gives
(12.22)   u² = 2 log T + ((2 − α)/α) log log T − 2 log τ + O(1).

LEMMA 12.12  Let ε > 0 be given, and suppose (12.1) and (7.2) both hold. Let T → ∞ and u ∼ (2 log T)^{1/2} in such a way that Tμ → τ > 0 for μ = C^{1/α} H_α u^{2/α} φ(u)/u, and let q → 0 with u^{2/α}q → a > 0. Then

(12.23)   (T/q) Σ_{ε≤kq≤T} |r(kq)| e^{−u²/(1+|r(kq)|)} → 0   as T → ∞.
PROOF  This lemma corresponds to Lemma 7.1. First, we split the sum in (12.23) at T^β, where β is a constant such that 0 < β < (1 − δ)/(1 + δ), and δ = sup{|r(t)|; |t| ≥ ε} < 1. Then, with the constant K changing from line to line,

   (T/q) Σ_{ε≤kq≤T^β} |r(kq)| e^{−u²/(1+|r(kq)|)} ≤ (T^{1+β}/q²) e^{−u²/(1+δ)}
      ≤ K (log T)^{2/α} T^{1+β} T^{−2/(1+δ)} → 0

as T → ∞, since 1/q ∼ u^{2/α}/a, u² ∼ 2 log T, and e^{−u²/2} ≤ K/T, so that e^{−u²/(1+δ)} ≤ (K/T)^{2/(1+δ)}.

With δ(t) = sup{|r(s)| log s; s ≥ t}, we have |r(kq)| ≤ δ(T^β)/log T^β for kq > T^β, and hence

   e^{−u²/(1+|r(kq)|)} ≤ e^{−u²(1 − δ(T^β)/log T^β)},

so that the remaining sum is bounded by

(12.24)   (T/q) Σ_{T^β≤kq≤T} |r(kq)| e^{−u²(1 − δ(T^β)/log T^β)}
      ≤ (T/q)² e^{−u²(1 − δ(T^β)/log T^β)} (1/log T^β) · (q/T) Σ_{T^β≤kq≤T} |r(kq)| log kq.

Since r(t) log t → 0, we also have

   (q/T) Σ_{T^β≤kq≤T} |r(kq)| log kq → 0

as T → ∞, while for the remaining factor in (12.24) we have to use the more precise estimate from (12.22),

   u² = 2 log T + ((2 − α)/α) log log T + O(1).

Since δ(t) → 0 as t → ∞, we see that for some constant K > 0,

   e^{−u²(1 − δ(T^β)/log T^β)} ≤ K T^{−2} (log T)^{−(2−α)/α},

and thus, since (1/q)² ∼ u^{4/α}/a² and u² ∼ 2 log T,

   (T/q)² e^{−u²(1 − δ(T^β)/log T^β)} (1/log T^β) ≤ K (log T)^{2/α} (log T)^{−(2−α)/α} (1/(β log T)) = K/β,

which is bounded, and this concludes the proof of the lemma.   □
We can now proceed along similar lines as in the proof of Theorem 7.6. First, take a fixed h > 0, write n = [T/h], and divide [0,nh] into n intervals of length h, and then split each interval into subintervals I_k and I*_k of length h − ε and ε, respectively. We then show asymptotic independence of maxima, first giving the following lemmas, corresponding to Lemmas 7.4, 7.5, respectively.
LEMMA 12.13  Suppose u → ∞, q → 0, u^{2/α}q → a > 0, (12.1) holds, and Tμ → τ > 0. Then

(i)   lim sup_{u→∞} | P{ M(⋃₁ⁿ I_k) ≤ u } − P{ M(nh) ≤ u } | ≤ τε/h,

(ii)  lim sup_{u→∞} | P{ ξ(jq) ≤ u, jq ∈ ⋃₁ⁿ I_k } − P{ M(⋃₁ⁿ I_k) ≤ u } | ≤ τρ_a,

where ρ_a → 0 as a → 0.
PROOF  Part (i) follows at once from Boole's inequality and Theorem 12.9, since

   0 ≤ P{ M(⋃₁ⁿ I_k) ≤ u } − P{ M(nh) ≤ u } ≤ n P{ M(I*₁) > u } ∼ nμε,

and nμ ∼ Tμ/h → τ/h.

Part (ii) follows similarly from Lemma 12.11, which implies

   0 ≤ P{ ξ(jq) ≤ u, jq ∈ ⋃₁ⁿ I_k } − P{ M(⋃₁ⁿ I_k) ≤ u }
      ≤ n max_k ( P{ ξ(jq) ≤ u, jq ∈ I_k } − P{ M(I_k) ≤ u } )
      ≤ nμ(h − ε)ρ_a + n·o(μ) ≤ Tμρ_a + o(Tμ) → τρ_a.   □
LEMMA 12.14  Suppose r(t) → 0 as t → ∞, and that (12.23) holds for each ε > 0. Then, as T → ∞, u → ∞, q → 0, u^{2/α}q → a > 0,

(i)   P{ ξ(jq) ≤ u, jq ∈ ⋃₁ⁿ I_k } − ∏_{k=1}^n P{ ξ(jq) ≤ u, jq ∈ I_k } → 0,

(ii)  lim sup_{u→∞} | ∏_{k=1}^n P{ ξ(jq) ≤ u, jq ∈ I_k } − Pⁿ{ M(h) ≤ u } | ≤ τ(ρ_a + ε/h),

where ρ_a → 0 as a → 0.

PROOF  The proof of part (i) is identical to that of Lemma 7.5 (i). As for part (ii), we have, by Lemma 12.11,

   0 ≤ ∏_{k=1}^n P{ ξ(jq) ≤ u, jq ∈ I_k } − ∏_{k=1}^n P{ M(I_k) ≤ u }
      ≤ n max_k ( P{ ξ(jq) ≤ u, jq ∈ I_k } − P{ M(I_k) ≤ u } )
      ≤ nμ(h − ε)ρ_a + n·o(μ) ≤ Tμρ_a + o(Tμ),

with nμ ∼ Tμ/h → τ/h. Furthermore, by stationarity,
   0 ≤ ∏_{k=1}^n P{ M(I_k) ≤ u } − Pⁿ{ M(h) ≤ u } = Pⁿ{ M(I₁) ≤ u } − Pⁿ{ M(h) ≤ u }
      ≤ n ( P{ M(I₁) ≤ u } − P{ M(h) ≤ u } ) ≤ n P{ M(I*₁) > u } ∼ nμε → τε/h.   □

THEOREM 12.15  Let {ξ(t)} be a stationary normal process with zero mean and suppose r(t) satisfies (12.1) and (7.2), i.e.

   r(t) → 0   and   r(t) log t → 0   as t → ∞.

If u → ∞ so that Tμ → τ, then

   P{M(T) ≤ u} → e^{−τ}   as T → ∞.

PROOF  By Lemma 12.12, condition (12.23) of Lemma 12.14 is satisfied, and by Lemmas 12.13 and 12.14 we then have
   lim sup_{u→∞} | P{ M(nh) ≤ u } − Pⁿ{ M(h) ≤ u } | ≤ 2τ(ρ_a + ε/h),

where ρ_a → 0 as a → 0. Letting a → 0 and ε → 0, this shows that

   lim_{n→∞} ( P{ M(nh) ≤ u } − Pⁿ{ M(h) ≤ u } ) = 0.

By Theorem 12.9, P{ M(h) ≤ u } = 1 − μh + o(μ), and hence, as μ ∼ τ/T, n ∼ T/h,

   Pⁿ{ M(h) ≤ u } = (1 − μh + o(μ))ⁿ → e^{−τ}.

Since furthermore M(nh) ≤ M(T) ≤ M((n + 1)h), this proves the theorem.   □
As is easily checked, the choice u = x/a_T + b_T, with a_T and b_T given by (12.2), satisfies Tμ → τ = e^{−x}, cf. (12.22), and we immediately have the following theorem.

THEOREM 12.16  Suppose {ξ(t)} satisfies the conditions of Theorem 12.15, and that

   a_T = (2 log T)^{1/2},
   b_T = (2 log T)^{1/2} + (2 log T)^{−1/2} { ((2 − α)/(2α)) log log T + log(C^{1/α} H_α (2π)^{−1/2} 2^{(2−α)/(2α)}) }.

Then

   P{ a_T(M(T) − b_T) ≤ x } → exp(−e^{−x})   as T → ∞.   □
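The normalizing constants can be checked for internal consistency: with u = x/a_T + b_T one should have Tμ → e^{−x}, by (12.22). A small numerical sketch (function names are ours; convergence is logarithmically slow, so T must be taken very large):

```python
import math

def norming_constants(T, alpha=1.0, C=1.0, H=1.0):
    """a_T and b_T as in Theorem 12.16 (defaults: the Ornstein-Uhlenbeck
    case alpha = 1, C = 1, H_1 = 1)."""
    aT = math.sqrt(2.0 * math.log(T))
    K0 = C ** (1.0 / alpha) * H / math.sqrt(2.0 * math.pi) \
         * 2.0 ** ((2.0 - alpha) / (2.0 * alpha))
    bT = aT + ((2.0 - alpha) / (2.0 * alpha) * math.log(math.log(T))
               + math.log(K0)) / aT
    return aT, bT

def T_mu(T, x, alpha=1.0, C=1.0, H=1.0):
    """T * mu at the level u = x/a_T + b_T; this should approach exp(-x)."""
    aT, bT = norming_constants(T, alpha, C, H)
    u = x / aT + bT
    phi = math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)
    return T * C ** (1.0 / alpha) * H * u ** (2.0 / alpha) * phi / u

print(T_mu(1e12, 0.0))   # close to 1
print(T_mu(1e12, 1.0))   # close to exp(-1)
```

Already at T = 10¹² the values agree with e^{−x} to within about one percent, for α = 1 as well as for α = 2 with H₂ = 1/√π.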
Asymptotic properties of ε-upcrossings

As mentioned on p. 88, the asymptotic Poisson character of upcrossings applies also to non-differentiable normal processes, if one considers ε-upcrossings instead of ordinary upcrossings. To prove this, we need to evaluate the expectation of N_{ε,u}(T), the number of ε-upcrossings of u by ξ(t), 0 ≤ t ≤ T.

LEMMA 12.17  Suppose r(t) satisfies (12.1). Then, with H_α as in Theorem 12.9,

   lim_{u→∞} E(N_{ε,u}(h)) / (h u^{2/α} φ(u)/u) = lim_{u→∞} E(N_{ε,u}(1)) / (u^{2/α} φ(u)/u) = C^{1/α} H_α.

PROOF  Write

   A = { ξ(t) > u for some t ∈ (−ε, 0] },
   B = { ξ(t) > u for some t ∈ (0, ε) }.

From Theorem 12.9 we have, as u → ∞,

   P(A ∪ B) ∼ 2ε C^{1/α} H_α u^{2/α} φ(u)/u,
   P(A) ∼ ε C^{1/α} H_α u^{2/α} φ(u)/u,

and hence

   P(B ∩ A^c) = P(A ∪ B) − P(A) ∼ ε C^{1/α} H_α u^{2/α} φ(u)/u.

Furthermore

   P(B ∩ A^c) ≤ P{ N_{ε,u}(ε) = 1 } = E(N_{ε,u}(ε)) ≤ P(B),

since there is at most one ε-upcrossing in [0, ε). Hence

   E(N_{ε,u}(ε)) ∼ ε C^{1/α} H_α u^{2/α} φ(u)/u,

and thus, by stationarity,

   E(N_{ε,u}(1)) = (1/ε) E(N_{ε,u}(ε)) ∼ C^{1/α} H_α u^{2/α} φ(u)/u,

as required.   □
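The lemma can be illustrated for the Ornstein-Uhlenbeck case (α = 1, C = 1, H₁ = 1), where the limiting rate of ε-upcrossings per unit time is μ = uφ(u), whatever ε is. A hedged sketch (grid step, ε, level, length and seed are our choices, and the discrete grid both misses some excursions and merges close ones, so only rough agreement should be expected):

```python
import math
import random

def eps_upcrossing_rate(T=2000.0, u=2.5, eps=0.5, dt=0.01, seed=3):
    """Count eps-upcrossings of level u by a simulated OU path on [0, T]
    (grid points where the path moves above u after staying at or below u
    for at least the preceding eps time units) and return the rate per
    unit time together with the limit value mu = u*phi(u)."""
    rng = random.Random(seed)
    rho = math.exp(-dt)
    sd = math.sqrt(1.0 - rho * rho)
    n = int(round(T / dt))
    w = int(round(eps / dt))         # grid points per eps-window
    x = rng.gauss(0.0, 1.0)
    below_run = w if x <= u else 0   # pretend the pre-history was below u
    count = 0
    for _ in range(n):
        x = rho * x + sd * rng.gauss(0.0, 1.0)
        if x > u:
            if below_run >= w:       # below u throughout the last eps
                count += 1
            below_run = 0
        else:
            below_run += 1
    phi = math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)
    return count / T, u * phi
```

Re-running with a different ε changes the count only marginally at this level, which is the content of the remark following the lemma.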
In particular the lemma implies that asymptotically the mean number of ε-upcrossings of a suitably increasing level is independent of the choice of ε > 0, and this leads us directly to the Poisson result for the time-normalized number of ε-upcrossings. Let N_T be the point process of ε-upcrossings on (0, ∞) defined by

   N_T(B) = N_{ε,u}(T·B),

where the level u is chosen so that Tμ = T C^{1/α} H_α u^{2/α} φ(u)/u → τ > 0, and let N be a Poisson process with intensity τ.

THEOREM 12.18  Suppose that the assumptions of Theorem 12.15 are satisfied. Then the time-normalized point process N_T of ε-upcrossings converges in distribution to N as u → ∞, where N is a Poisson process with intensity τ.

PROOF  As in the proof of Theorem 8.2 we only have to check that, for c < d,

(a)  lim_{T→∞} E(N_T((c,d])) = E(N((c,d])) = τ(d − c),

and

(b)  P{N_T(U) = 0} → P{N(U) = 0}  for all sets U of the form ⋃_{i=1}^m (c_i, d_i].

By Lemma 12.17, E(N_T((c,d])) = E(N_{ε,u}((Tc,Td])) = T(d − c) E(N_{ε,u}(1)) ∼ Tμ(d − c) → τ(d − c), which proves (a). For part (b) go through the same steps as in the proof of Theorem 8.2, with only obvious changes.   □

In previous chapters we have encountered a variety of results related to the Poisson convergence of upcrossings of an increasing level. There are no further difficulties in extending these results to cover ε-upcrossings. However, we do not want to lengthen an already long journey over an ocean of lemmas. We just finish by mentioning that a little further generality may be obtained throughout by including a function of slow growth (or perhaps slow decrease) instead of C in (12.1). This has been considered by Qualls and Watanabe (1972), and also by Berman (1971 b).
CHAPTER 13

EXTREMES OF CONTINUOUS PARAMETER STATIONARY PROCESSES
Our primary task in this chapter will be to discuss continuous parameter analogues of the sequence results of Chapter 2, and in particular to obtain a corresponding version of Gnedenko's Theorem which applies in the continuous parameter case.

Specifically, throughout this chapter we consider a (strictly) stationary process {ξ(t); t ≥ 0} satisfying the general assumptions stated at the start of Chapter 6. In particular it will be assumed that ξ(t) has a.s. continuous sample functions, continuous one-dimensional distributions, and that the underlying probability space is complete. As shown in Lemma 6.1, it then follows that M(I) = sup{ξ(t); t ∈ I} is a r.v. for any interval I and, in particular, so is M(T) = M([0,T]).

Our main interest here concerns distributional properties of M(T) as T → ∞, a subject considered for the special (important) case of normal processes in Chapters 7 and 12. We shall obtain asymptotic results for the general stationary processes considered here, along the same lines as those for stationary sequences, obtained in Chapter 2. In particular, continuous parameter versions of Gnedenko's Theorem will be proved under appropriate dependence restrictions on ξ(t), analogous to the Condition D(u_n). We shall also obtain results of the type P{M(T) ≤ u_T} → e^{−τ} (cf. Theorem 2.6), where u_T → ∞ in an appropriate manner as T → ∞, using a continuous parameter analogue of the Condition D′(u_n).

The general theory will not require that the mean number of upcrossings of a level u be finite, and therefore will include normal processes of the type considered in Chapter 12. As we shall indicate, the Chapter 12 results can be obtained from the general theory of this chapter, though of course the same ultimate amount of work is involved. We shall also consider the special case in which the mean number of upcrossings of any level u per unit time is finite, and obtain Poisson limit theorems generalizing those of Chapter 8 to include non-normal processes.
.:~'
As indicated in Chapter 7, it is convenient to relate the maximum
M(T)
of the continuous process to the maximum of
n
terms of a se-
~~ence
of "submaxima". Specifically if for some conveniently chosen
h > 0
we write
(13.1)
then for any
(13.2)
(i-l)h~ t~ ih}
1;i = sup{l;(t)i
n
= I,
2,
we have
M(nh)
It is apparent that the properties of
M(T)
=
[T/h]
from those of
by
M(nh)
by writing
n
as
T
+
~
may be obtained
and thus approximating
T
nh.
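On a sampled path, (13.1) and (13.2) amount to taking block maxima: the overall maximum is the maximum of the submaxima. A trivial illustration on synthetic data (names are ours):

```python
import random

def submaxima(path, block):
    """Return the block maxima zeta_1, zeta_2, ... of (13.1) for a path
    sampled at equal spacing, using `block` samples per length-h block."""
    return [max(path[i:i + block]) for i in range(0, len(path), block)]

rng = random.Random(4)
path = [rng.gauss(0.0, 1.0) for _ in range(1000)]
zetas = submaxima(path, 100)          # n = 10 submaxima
assert max(zetas) == max(path)        # (13.2): M(nh) = max(zeta_1,...,zeta_n)
```

The point of the reduction is that {ζ_i} is a stationary sequence, so the sequence theory of Chapter 2 applies to it directly.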
As noted above, we shall consider continuous parameter analogues (to be called C(u_T), C′(u_T)) of the Conditions D(u_n), D′(u_n) used for sequences. The Condition C(u_T) will be used in ensuring that the stationary sequence {ζ_n} defined by (13.1) satisfies D(u_n). However, before introducing this condition we note a preliminary form of Gnedenko's Theorem which simply assumes that the sequence {ζ_n} satisfies D(u_n). This result follows immediately from the sequence case and clearly illustrates the central ideas required in the continuous parameter context. The more complete version (Theorem 13.5), to be given later, of course simply requires finding appropriate conditions on ξ(t), of which the main one will be C(u_T), to guarantee that {ζ_n} will satisfy D(u_n).
THEOREM 13.1  Suppose that for some families of constants {a_T > 0}, {b_T} we have

(13.3)   P{ a_T(M(T) − b_T) ≤ x } → G(x)   as T → ∞

for some non-degenerate G, and that the {ζ_i} sequence defined by (13.1) satisfies D(u_n) whenever u_n = x/a_{nh} + b_{nh} for some fixed h > 0 and all real x. Then G is one of the three extreme value types.

PROOF  Since (13.3) holds in particular as T → ∞ through the values nh, and the ζ_n-sequence is clearly stationary, the result follows by replacing ξ_n by ζ_n in Theorem 2.4 and using (13.2).   □
Although we shall not make further use of the fact, it is interesting to note that this at once implies that Gnedenko's Theorem holds under "strong mixing" assumptions, as the following corollary shows.

COROLLARY 13.2  Theorem 13.1 holds in particular if the D(u_n) conditions are replaced by the assumption that {ξ(t)} is strongly mixing. For then the sequence {ζ_n} is strongly mixing and hence satisfies D(u_n).   □
We now introduce the continuous analogue of the Condition D(u_n), stated in terms of the finite-dimensional distribution functions F_{t₁…t_n} of ξ(t), again writing F_{t₁…t_n}(u) for F_{t₁…t_n}(u, …, u). The points t_i will be members of a discrete set {jq_T; j = 1, 2, 3, …}, where {q_T} is a family of constants tending to zero as T → ∞ at a rate to be specified later.

The Condition C(u_T) will be said to hold for the process ξ(t) and the family of constants {u_T; T > 0}, with respect to the constants q_T → 0, if for any points s₁ < s₂ < ⋯ < s_p < t₁ < ⋯ < t_{p′} belonging to {kq_T; 0 ≤ kq_T ≤ T} and satisfying t₁ − s_p ≥ γ, we have

(13.4)   | F_{s₁…s_p t₁…t_{p′}}(u_T) − F_{s₁…s_p}(u_T) F_{t₁…t_{p′}}(u_T) | ≤ α_{T,γ},

where α_{T,γ_T} → 0 as T → ∞ for some family γ_T = o(T). As in the discrete case we may (and do) take α_{T,γ} to be non-increasing as γ increases, and also note that the condition α_{T,γ_T} → 0 may be replaced by

(13.5)   α_{T,λT} → 0   as T → ∞

for each fixed λ > 0.
The D(u_n) condition for {ζ_n} required in Theorem 13.1 will now be related to C(u_T) by approximating crossings and extremes of the continuous parameter process by corresponding quantities for a "sampled version". To achieve the approximation we require two conditions involving the maximum of ξ(t) in fixed and in very small time intervals. These conditions are given here in a form which applies very generally; readily verifiable sufficient conditions for important cases are given later in this chapter.

It will be convenient to introduce a function ψ(u) which will generally describe the form of the tail of the distribution of the maximum M(h) in a fixed interval (0,h) as u becomes large. Specifically we shall, as needed, make one or more of the following successively stronger assumptions:

(13.6)   P{ξ(0) > u} = o(ψ(u))   as u → ∞,

(13.7)   P{M(q) > u} = o(ψ(u))   as u → ∞, for any q = q(u) → 0,

(13.8)   there exists h₀ > 0 such that lim sup_{u→∞} P{M(h) > u}/(hψ(u)) ≤ 1, for 0 < h ≤ h₀,

(13.9)   P{M(h) > u}/(hψ(u)) → 1   as u → ∞, for 0 < h ≤ h₀.

Note that Equation (13.9) commonly holds and specifies that the tail of the distribution of M(h) is asymptotically proportional to ψ(u), whereas (13.8) is a weaker assumption which is sometimes convenient as a sufficient condition for the yet weaker (13.7) and (13.6). As we shall see later, ψ(u) can also be identified with the mean number of upcrossings of the level u per unit time, μ(u), in important cases when this is finite. In any case it is of course possible to define ψ(u) to be P{M(h₀) > u}/h₀ for some fixed h₀ > 0, or some asymptotically equivalent function, and then attempt to verify any of the above conditions which may be needed.
-119-
We shall also require an assumption relating "continuous and discrete" maxima in fixed intervals. Specifically we assume, as required, that for each a > 0 there is a family of constants {q} = {q_a(u)} tending to zero as u → ∞, such that for any fixed h > 0,

(13.10)  lim sup_{u→∞} P{M(h) > u, ξ(jq) ≤ u, 0 ≤ jq ≤ h}/ψ(u) → 0 as a → 0,

where q = q_a(u). Finally, a condition which is sometimes helpful in verifying (13.10) is

(13.11)  lim sup_{u→∞} P{ξ(0) ≤ u, ξ(q) ≤ u, M(q) > u}/(qψ(u)) → 0 as a → 0.

Here the constant a specifies the rate of convergence to zero of q_a(u): as a decreases, the grid of points {jq_a(u)} tends to become (asymptotically) finer, and for small a the maximum of ξ(t) on the discrete grid approximates the continuous maximum well, as will be seen below.

(Simpler versions of (13.10) and (13.11) would be to assume the existence of one family q = q(u) of constants such that the upper limits in (13.10) and (13.11) are zero for this family. It can be seen that one can do this without loss of generality in the theorems below, but it seems that, as was the case in Chapter 12, the conditions involving the parameter a often may be easier to check.)
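The grid approximation behind (13.10) can be made concrete. In the sketch below (our own illustration, with the hypothetical cosine process ξ(t) = A cos t + B sin t, A, B standard normal) the maximum of ξ over an equally spaced grid of spacing q on a full period is R cos U, with R the Rayleigh amplitude and U the (approximately uniform on (−q/2, q/2)) angular distance to the nearest grid point; the discrepancy P{ξ(jq) ≤ u, all j} − P{M ≤ u} can then be computed by a one-dimensional numerical integration and is seen to shrink like q² as the grid is refined.

```python
import math

def grid_discrepancy(u, q, steps=2000):
    """P{max over a grid of spacing q <= u} - P{M <= u} for the cosine
    process xi(t) = A cos t + B sin t on one full period (0, 2*pi).

    The grid maximum is R*cos(U), with R Rayleigh and U uniform on
    (-q/2, q/2), so the discrepancy is the average over U of
        exp(-u**2/2) - exp(-u**2 / (2*cos(U)**2)),
    evaluated here by the trapezoidal rule on [0, q/2] (symmetry in U).
    """
    h = (q / 2) / steps
    total = 0.0
    for k in range(steps + 1):
        x = k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * (math.exp(-u * u / 2)
                      - math.exp(-u * u / (2 * math.cos(x) ** 2)))
    return total * h / (q / 2)     # average over U

d1 = grid_discrepancy(1.0, 0.10)
d2 = grid_discrepancy(1.0, 0.05)   # half the spacing
print(d1, d2, d2 / d1)             # ratio near 1/4: O(q**2) refinement
```

The quadratic rate here is special to this smooth example; (13.10) only demands that the discrepancy be o(ψ(u)) as the grid parameter a decreases.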
The following lemma contains some simple but useful relationships.

LEMMA 13.3 (i) If (13.8) holds so does (13.7), which in turn implies (13.6). Hence (13.9) clearly implies (13.8), (13.7), and (13.6).

(ii) If I is any interval of length h and (13.6), (13.10) both hold, then there are constants λ_a such that

(13.12)  0 ≤ lim sup_{u→∞} [P{ξ(jq) ≤ u, jq ∈ I} − P{M(I) ≤ u}]/ψ(u) ≤ λ_a → 0

as a → 0, where q = q_a(u) is as in (13.10), the convergence being uniform in all intervals of this fixed length h.

(iii) If (13.7) and (13.11) hold so does (13.10) and hence, by (ii), so does (13.12).
(iv) If (13.9) holds and I_1 = (0,h), I_2 = (h,2h) with 0 < h ≤ h_0/2, then

P{M(I_1) > u, M(I_2) > u} = o(ψ(u)) as u → ∞.

PROOF (i) If (13.8) holds and q → 0 as u → ∞, then for any fixed h > 0, q is eventually smaller than h and P{M(q) > u} ≤ P{M(h) > u}, so that

lim sup_{u→∞} P{M(q) > u}/ψ(u) ≤ lim sup_{u→∞} P{M(h) > u}/ψ(u) ≤ h

by (13.8). Since h is arbitrary it follows that P{M(q) > u}/ψ(u) → 0, giving (13.7). It is clear that (13.7) implies (13.6) since P{ξ(0) > u} ≤ P{M(q) > u}, which proves (i).
To prove (ii) we assume that (13.6) and (13.10) hold and let I be an interval of fixed length h. Since the numbers of points jq in I and in [0,h] differ by at most 2, it is readily seen from stationarity that

P{ξ(jq) ≤ u, jq ∈ I} ≤ P{ξ(jq) ≤ u, 0 ≤ jq ≤ h} + P{ξ(0) > u} + P{ξ(h) > u},

so that

0 ≤ P{ξ(jq) ≤ u, jq ∈ I} − P{M(I) ≤ u}
  ≤ P{ξ(jq) ≤ u, 0 ≤ jq ≤ h} − P{M(h) ≤ u} + 2P{ξ(0) > u}
  = P{M(h) > u, ξ(jq) ≤ u, 0 ≤ jq ≤ h} + 2P{ξ(0) > u},

from which (13.12) follows at once by (13.6) and (13.10), so that (ii) follows.
To prove (iii) we note that there are at most [h/q] complete intervals [(j−1)q, jq) in (0,h], with perhaps a smaller interval remaining, so that

P{M(h) > u, ξ(jq) ≤ u, 0 ≤ jq ≤ h} ≤ (h/q) P{ξ(0) ≤ u, ξ(q) ≤ u, M(q) > u} + P{M(q) > u},

so that (13.10) easily follows from (13.11) and (13.7).
Finally, if (13.9) holds and I_1 = (0,h), I_2 = (h,2h) with 0 < h ≤ h_0/2, then

P{M(I_1) > u} = P{M(I_2) > u} = hψ(u)(1 + o(1))

and

P({M(I_1) > u} ∪ {M(I_2) > u}) = P{M(2h) > u} = 2hψ(u)(1 + o(1)),

so that

P{M(I_1) > u, M(I_2) > u} = P{M(I_1) > u} + P{M(I_2) > u} − P({M(I_1) > u} ∪ {M(I_2) > u}) = o(ψ(u)),

as required. □
For h > 0, let {T_n} be a sequence of time points such that T_n ∈ [nh, (n+1)h), and write v_n = u_{T_n}. It is then relatively easy to relate the condition D(v_n) for the sequence {ζ_n} to the condition C(u_T) for the process ξ(t), as the following lemma shows.
LEMMA 13.4 Suppose that (13.6) holds for some function ψ(u), and let {q_a(u)} be a family of constants for each a > 0, with q_a(u) → 0 as u → ∞ for each a > 0, and such that (13.10) holds. If C(u_T) is satisfied with respect to the families {q_a(·)}, and Tψ(u_T) is bounded, then the sequence {ζ_n} defined by (13.1) satisfies D(v_n), where {v_n} is as above.

PROOF For a given n, let 1 ≤ i_1 < i_2 < ... < i_p < j_1 < ... < j_{p'} ≤ n, with j_1 − i_p ≥ ℓ. Write I_r = [(i_r − 1)h, i_r h] and J_s = [(j_s − 1)h, j_s h]. For brevity write q = q_a(v_n) for the elements in one of the families {q_a(·)}, and let

A_q = {ξ(jq) ≤ v_n, jq ∈ ∪_{r=1}^p I_r},    B_q = {ξ(jq) ≤ v_n, jq ∈ ∪_{s=1}^{p'} J_s},

A = ∩_{r=1}^p {ζ_{i_r} ≤ v_n},    B = ∩_{s=1}^{p'} {ζ_{j_s} ≤ v_n}.
It follows in an obvious way from Lemma 13.3 (ii) that

0 ≤ lim sup_{n→∞} {P(A_q B_q) − P(AB)} ≤ lim sup_{n→∞} (p + p') ψ(v_n) λ_a ≤ lim sup_{n→∞} n ψ(v_n) λ_a ≤ K λ_a

for some constant K (since nh ≤ T_n and T_n ψ(v_n) is bounded), where λ_a → 0 as a → 0. Similarly

lim sup_{n→∞} |P(A_q) − P(A)| ≤ K λ_a,    lim sup_{n→∞} |P(B_q) − P(B)| ≤ K λ_a.
Now

|P(A ∩ B) − P(A)P(B)| ≤ |P(A ∩ B) − P(A_q ∩ B_q)| + |P(A_q ∩ B_q) − P(A_q)P(B_q)|
    + P(A_q)|P(B_q) − P(B)| + P(B)|P(A_q) − P(A)|,

so that

(13.13)  |P(A ∩ B) − P(A)P(B)| ≤ |P(A_q ∩ B_q) − P(A_q)P(B_q)| + R_{n,a},

where lim sup_{n→∞} R_{n,a} ≤ 3Kλ_a.

Since the largest jq in any I_r is at most i_p h, and the smallest jq in any J_s is at least (j_1 − 1)h, their difference is at least (ℓ−1)h. Also the largest jq in J_{p'} does not exceed j_{p'} h ≤ nh ≤ T_n, so that from (13.4) and (13.13),

(13.14)  |P(A ∩ B) − P(A)P(B)| ≤ R_{n,a} + α^{(a)}_{T_n,(ℓ−1)h},

in which the dependence of α_{T,ℓ} on a is explicitly indicated. Write now

α*_{n,ℓ} = inf_{a>0} {R_{n,a} + α^{(a)}_{T_n,(ℓ−1)h}}.

Since the left-hand side of (13.14) does not depend on a, we have

|P(A ∩ B) − P(A)P(B)| ≤ α*_{n,ℓ},

which is precisely the desired conclusion of the lemma, provided we can show that lim_{n→∞} α*_{n,λn} = 0 for any λ > 0
(cf. (2.3)). But, for any a > 0,

α*_{n,λn} ≤ R_{n,a} + α^{(a)}_{T_n,(λn−1)h} ≤ R_{n,a} + α^{(a)}_{T_n,λT_n/2}

when n is sufficiently large (since α^{(a)}_{T,ℓ} decreases in ℓ), and hence by (13.5),

lim sup_{n→∞} α*_{n,λn} ≤ 3Kλ_a.

Since a is arbitrary and λ_a → 0 as a → 0, it follows that α*_{n,λn} → 0, as desired. □
-'"
-123The general conti,nuous version 'of Gnedenko'S Theorem is now readily
=e
•
<
i
restated in terms of conditions on
THEOREM 13.5
With the above notation for the stationary process
satisfying (13.6) fair some function
~
> 0, b
T
1jJ
{~(t)}
suppose that there are constants
such that
for a non-degenerate
holds for
itself.
~(t)
uT
of constants
G. Suppose that
T1jJ(uT )
C(u )
T
x, wi th respect to famili1es
= x/aT
is bounded and
+ b T for each real
{qa(u») satisfying (13.10). Then
G is one of the
~rrree
extreme value distributional types.,
PROOF
ing
This follows a,t once from Theorem 13.1 and Lemma 13.4, by choosTn
= nh ..
As noted 1:he conditions of this theorem are of at general kind, and
more specific: sufficient conditions will be given in the applications
later in this chapter.
Convergence of P{M(T) ≤ u_T}

Gnedenko's Theorem involved consideration of P{a_T(M(T) − b_T) ≤ x}, which may be rewritten as P{M(T) ≤ u_T} with u_T = x/a_T + b_T. We turn now to the question of convergence of P{M(T) ≤ u_T} as T → ∞ for families {u_T} which are not necessarily linear functions of a parameter x. (This is analogous to the convergence of P{M_n ≤ u_n} for sequences, of course.) These results are of interest in their own right, but also since they make it possible to simply modify the classical criteria for domains of attraction to the three limiting distributions, to apply in this continuous parameter context.

Our main purpose is to demonstrate the equivalence of the relations

Tψ(u_T) → τ  and  P{M(T) ≤ u_T} → e^{−τ}
under appropriate conditions.

The following condition will be referred to as C'(u_T), and is analogous to the condition D'(u_n) defined in Chapter 2 for sequences.

The Condition C'(u_T): C'(u_T) will be said to hold for the process {ξ(t); t ≥ 0} with respect to the constants {u_T} and the families {q_a(u)} if, with q = q_a(u_T) for each a > 0,

lim sup_{T→∞} (T/q) Σ_{h<jq≤εT} P{ξ(0) > u_T, ξ(jq) > u_T} → 0 as ε → 0, for some h > 0.
The following lemma will be useful in obtaining the desired equivalence.

LEMMA 13.6 Suppose that (13.9) holds for some function ψ, and let {u_T} be a family of levels such that C'(u_T) holds with respect to families {q_a(u)} satisfying (13.10), for each a > 0, with h in C'(u_T) not exceeding h_0/2, h_0 as in (13.9). Then Tψ(u_T) is bounded and, writing n' = [n/k] for n and k integers,

(13.15)  0 ≤ lim sup_{n→∞} [n' P{M(h) > v_n} − P{M(n'h) > v_n}] = o(k^{−1}) as k → ∞,

with v_n = u_{T_n}, for any sequence {T_n} with T_n ∈ [nh, (n+1)h).
PROOF We shall use the extra assumption

(13.16)  lim inf_{T→∞} Tψ(u_T) > 0

in proving Tψ(u_T) bounded and (13.15). It is then easily checked (e.g. by replacing Tψ(u_T) by max(1, Tψ(u_T)) in the proof) that the result holds also without the extra assumption.

Now, write I_j = [(j−1)h, jh], j = 1,2,..., and M_q(I) = max{ξ(jq); jq ∈ I} for any interval I. We shall first show that (assuming (13.16) holds)

(13.17)  0 ≤ lim sup_{n→∞} (T_n ψ(v_n))^{−1} [n' P{M(h) > v_n} − P{M(n'h) > v_n}] = o(k^{−1}) as k → ∞.
The expression in (13.17) is clearly non-negative and, by stationarity and the fact that M ≥ M_q, does not exceed

lim sup_{n→∞} (T_n ψ(v_n))^{−1} Σ_{j=1}^{n'} [P{M(I_j) > v_n} − P{M_q(I_j) > v_n}]
  + lim sup_{n→∞} (T_n ψ(v_n))^{−1} [Σ_{j=1}^{n'} P{M_q(I_j) > v_n} − P{M_q(n'h) > v_n}].

By Lemma 13.3 (ii), the first of the upper limits does not exceed λ_a lim sup_{n→∞} n'/T_n = λ_a/(hk), where λ_a → 0 as a → 0. The expression in the second upper limit does not exceed

(T_n ψ(v_n))^{−1} Σ_{j=1}^{n'} P{M_q(I_j) > v_n, M_q(I_{j+1}) > v_n}
  + (T_n ψ(v_n))^{−1} Σ_{j=1}^{n'} P{M_q(I_j) > v_n, M_q(∪_{ℓ=j+2}^{n'} I_ℓ) > v_n},

whose first term tends to zero (for each fixed k) by Lemma 13.3 (iv) and some obvious estimation using stationarity. By C'(u_T), using (13.16), the upper limit (over n) of the last term is readily seen to be o(k^{−1}) for each a > 0, and (13.17) now follows by gathering these facts.
Further, by (13.17) and (13.9), since P{M(n'h) > v_n} ≤ 1,

1/k = lim inf_{n→∞} n'h/T_n ≤ lim inf_{n→∞} (T_n ψ(v_n))^{−1} n' P{M(h) > v_n}
    ≤ lim inf_{n→∞} (T_n ψ(v_n))^{−1} + o(k^{−1}),

and hence lim inf_{n→∞} (T_n ψ(v_n))^{−1} > 0 for a fixed, sufficiently large k. Thus T_n ψ(v_n) is bounded for any sequence {T_n} satisfying nh ≤ T_n < (n+1)h, which readily implies that Tψ(u_T) is bounded. Finally, (13.15) then follows at once from (13.17). □
COROLLARY 13.7 Under the conditions of the lemma, if Λ_{n,k} = |n'hψ(v_n) − P{M(n'h) > v_n}|, then lim sup_{n→∞} Λ_{n,k} = o(k^{−1}) as k → ∞.

PROOF Noting that n'ψ(v_n) is bounded, this follows at once from the lemma, by (13.9). □
Our main result now follows readily.

THEOREM 13.8 Suppose that (13.9) holds for some function ψ, and let {u_T} be a family of constants such that, for each a > 0, C(u_T) and C'(u_T) hold with respect to the family {q_a(u)} of constants satisfying (13.10), with h in C'(u_T) not exceeding h_0/2, h_0 as in (13.9). Then, for τ > 0,

(13.18)  Tψ(u_T) → τ

if and only if

(13.19)  P{M(T) ≤ u_T} → e^{−τ}.

PROOF If (13.9), (13.10), and C'(u_T) hold as stated, then Tψ(u_T) is bounded according to Lemma 13.6, and by Lemma 13.4 the sequence of "submaxima" {ζ_n} defined by (13.1) satisfies D(v_n), with v_n = u_{T_n} for any sequence {T_n} with T_n ∈ [nh, (n+1)h). Hence from Lemma 2.3, writing n' = [n/k],
(13.20)  P{M(nh) ≤ v_n} − P^k{M(n'h) ≤ v_n} → 0 as n → ∞.

Clearly it is enough to prove that

(13.21)  T_n ψ(v_n) → τ

if and only if

(13.22)  P{M(T_n) ≤ v_n} → e^{−τ}

for any sequence {T_n} with T_n ∈ [nh, (n+1)h). Further, Tψ(u_T) bounded implies that ψ(u_T) → 0 as T → ∞, so that

0 ≤ P{M(nh) ≤ v_n} − P{M(T_n) ≤ v_n} ≤ P{M(h) > v_n} ~ hψ(v_n) → 0,

and thus (13.22) holds if and only if

(13.23)  P{M(nh) ≤ v_n} → e^{−τ}.

Hence it is sufficient to prove that (13.21) and (13.23) are equivalent under the hypothesis of the theorem.
Suppose now that (13.21) holds, so that in particular

(13.24)  n'hψ(v_n) → τ/k as n → ∞.

With the notation of Corollary 13.7 we have

(13.25)  1 − n'hψ(v_n) − Λ_{n,k} ≤ P{M(n'h) ≤ v_n} ≤ 1 − n'hψ(v_n) + Λ_{n,k},

so that, letting n → ∞,

1 − τ/k − o(k^{−1}) ≤ lim inf_{n→∞} P{M(n'h) ≤ v_n}
  ≤ lim sup_{n→∞} P{M(n'h) ≤ v_n} ≤ 1 − τ/k + o(k^{−1}).

By taking k-th powers throughout and using (13.20) we obtain

(1 − τ/k − o(k^{−1}))^k ≤ lim inf_{n→∞} P{M(nh) ≤ v_n}
  ≤ lim sup_{n→∞} P{M(nh) ≤ v_n} ≤ (1 − τ/k + o(k^{−1}))^k,

and letting k tend to infinity proves (13.23).

Hence (13.21) implies (13.23) under the stated conditions. We shall now show that conversely (13.23) implies (13.21). The first part of the above proof still applies, so that (13.20) and the conclusion of Corollary 13.7, and hence (13.25), hold. A rearrangement of (13.25) gives

1 − P{M(n'h) ≤ v_n} − Λ_{n,k} ≤ n'hψ(v_n) ≤ 1 − P{M(n'h) ≤ v_n} + Λ_{n,k}.

But it follows from (13.20) and (13.23) that P{M(n'h) ≤ v_n} → e^{−τ/k}, and hence, using Corollary 13.7, that

1 − e^{−τ/k} − o(k^{−1}) ≤ lim inf_{n→∞} n'hψ(v_n) ≤ lim sup_{n→∞} n'hψ(v_n) ≤ 1 − e^{−τ/k} + o(k^{−1}).

Multiplying through by k and letting k → ∞ shows that T_n ψ(v_n) ~ nhψ(v_n) → τ, which concludes the proof that (13.23) implies (13.21). □
Associated sequence of independent variables

With a slight change of emphasis from Chapter 2, we say that any i.i.d. sequence ξ̂_1, ξ̂_2, ... whose marginal d.f. F satisfies

1 − F(u) ~ P{M(h) > u} as u → ∞,

for some h > 0, is an independent sequence associated with {ξ(t)}. If (13.9) holds this is clearly equivalent to the requirement

(13.26)  1 − F(u) ~ hψ(u) as u → ∞.

Theorem 13.8 may then be related to the corresponding result for i.i.d. sequences in the following way.
THEOREM 13.9 Let {u_T} be a family of constants such that the conditions of Theorem 13.8 hold, and let ξ̂_1, ξ̂_2, ... be an associated independent sequence, with M̂_n = max(ξ̂_1, ..., ξ̂_n). Let 0 < ρ < 1. If

(13.27)  P{M(T) ≤ u_T} → ρ as T → ∞,

then

(13.28)  P{M̂_n ≤ v_n} → ρ as n → ∞,

with v_n = u_{nh}. Conversely, if (13.28) holds for some sequence {v_n}, then (13.27) holds for any {u_T} such that ψ(u_T) ~ ψ(v_{[T/h]}), provided the conditions of Theorem 13.8 hold.
PROOF If (13.27) holds with ρ = e^{−τ}, Theorem 13.8 and (13.26) give

1 − F(u_{nh}) ~ hψ(u_{nh}) ~ τ/n,

so that P{M̂_n ≤ u_{nh}} → e^{−τ} = ρ, giving (13.28). Conversely, (13.28) and (13.26) imply that

hψ(v_n) ~ 1 − F(v_n) ~ τ/n,

and hence

Tψ(v_{[T/h]}) ~ Tτ/(h[T/h]) → τ,

so that (13.27) holds by Theorem 13.8. □
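The mechanism behind Theorem 13.9 is the elementary Poisson approximation: P{M̂_n ≤ v_n} = F^n(v_n) = (1 − (1 − F(v_n)))^n → e^{−τ} whenever n(1 − F(v_n)) → τ. A minimal numerical sketch (our own illustration, with the tail probability set to τ/n exactly):

```python
import math

# If the associated i.i.d. sequence has 1 - F(v_n) = tau/n exactly, then
# P{max of n variables <= v_n} = F(v_n)**n = (1 - tau/n)**n -> exp(-tau).
tau, n = 1.0, 10**6
p_max = (1.0 - tau / n) ** n
print(p_max, math.exp(-tau))   # both close to 0.3679
```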
These results show how the function ψ may be used in the classical criteria for domains of attraction to determine the asymptotic distribution of M(T). We write V(G) for the domain of attraction to the (extreme value) d.f. G, i.e. the set of all d.f.'s F such that F^n(x/a_n + b_n) → G(x) for some sequences {a_n > 0}, {b_n}.

THEOREM 13.10 Suppose that the conditions of Theorem 13.8 hold for all families {u_T} of the form u_T = x/a_T + b_T, −∞ < x < ∞, where a_T > 0, b_T are given constants, and that

(13.29)  P{a_T(M(T) − b_T) ≤ x} → G(x) as T → ∞.

Then (13.26) holds for some F ∈ V(G). Conversely, suppose (13.9) holds and that (13.26) is satisfied for some F ∈ V(G); let a'_n > 0, b'_n be constants such that F^n(x/a'_n + b'_n) → G(x), and define a_T = a'_{[T/h]}, b_T = b'_{[T/h]}. Then (13.29) holds, provided the conditions of Theorem 13.8 are satisfied for each u_T = x/a_T + b_T, −∞ < x < ∞.

PROOF
If (13.29) holds, together with the conditions stated, Theorem 13.9 applies, so that in particular

P{a_{nh}(M̂_n − b_{nh}) ≤ x} → G(x),

where M̂_n is the maximum of the independent sequence of associated variables ξ̂_1, ..., ξ̂_n. It follows at once that their marginal d.f. F belongs to V(G), and (13.26) is immediate by definition.

Conversely, suppose (13.26) holds for some d.f. F ∈ V(G), and let ξ̂_1, ξ̂_2, ... be an i.i.d. sequence with marginal d.f. F, so that for v_n = x/a'_n + b'_n,

P{M̂_n ≤ v_n} → G(x) as n → ∞,

and Theorem 13.9 applies, giving (13.29). □
Stationary normal processes

Although we have obtained the asymptotic distributional properties of the maximum of stationary normal processes directly, it is of interest to see how these may be obtained as applications of the general theory of this chapter. This does not lessen the work involved, of course, since the same calculations as in the "direct route" are used to verify the conditions in the general theory. However, the use of the general theory does also give insight and perspective regarding the principles involved. We deal here with the more general normal processes considered in Chapter 12. This will include the normal processes with finite second spectral moments considered in Chapter 7, of course. The latter processes may also be treated as particular cases of general processes with finite upcrossing intensities - a class dealt with later in this chapter.
Suppose then that ξ(t) is a stationary normal process with zero mean and covariance function (12.1), viz.,

(13.30)  r(τ) = 1 − C|τ|^α + o(|τ|^α) as τ → 0,

where 0 < α ≤ 2. The major result to be obtained is Theorem 12.15, restated here.
THEOREM 12.15 Let {ξ(t)} be a zero-mean stationary normal process with covariance function r(t) satisfying (13.30) and

(13.31)  r(t) log t → 0 as t → ∞.

If u = u_T → ∞ and

μ(u) = C^{1/α} H_α u^{2/α} φ(u)/u

(with H_α defined as in Theorem 12.9), and if Tμ(u_T) → τ, then P{M(T) ≤ u_T} → e^{−τ} as T → ∞.
PROOF FROM THE GENERAL THEORY Write ψ(u) = μ(u), so that Theorem 12.9 shows at once that (13.9) holds (for all h > 0). Define q_a(u) = au^{−2/α}, and note that (13.10) holds, by (12.21). Hence the result will follow at once if C(u_T), C'(u_T) are both shown to hold with respect to {q_a(u)} for each a > 0.
for each
It is easily seen in a familiar way that
3.2 the left hand side of (13.4)
(,Yi th
C (u )
T
qa (u)
for
holds. For by L,emma
q (u}, U = u )
T
does
not exceed
which is dominated by
K.!
r
Ir(kq)
q ~(~kq~T
I
e-
u2
and this tends to zero for each
/(1 + Ir(kq)
{q}
If we identify this expression with
trivially since
e'
(u )
T
aT, :~T ~ CXT,y
= {qa}' a
aT,y
fixed, by Lemma 12.12.
then (13.5) holds almost
for any fi>rod
y
AT > y.
when
C'(u_T) follows equally simply by Lemma 3.2, which gives

|P{ξ(0) > u, ξ(jq) > u} − (1 − Φ(u))²| ≤ K |r(jq)| exp(−u²/(1 + |r(jq)|)),

so that

(T/q) Σ_{h<jq≤εT} P{ξ(0) > u, ξ(jq) > u} ≤ (T/q) Σ_{h<jq≤εT} (1 − Φ(u))²
  + K (T/q) Σ_{h<jq≤εT} |r(jq)| exp(−u²/(1 + |r(jq)|)).
The second term tends to zero as T → ∞, again by Lemma 12.12. The first term is asymptotically equivalent to a constant multiple of ετ²/a², by the definitions of q and ψ(u) and the fact that Tψ(u_T) → τ; since ετ²/a² → 0 for each fixed a as ε → 0, C'(u_T) follows. □
Finally, we note that the "double exponential limiting distribution" for the maximum M(T) (Theorem 12.16) follows exactly as before from Theorem 12.15.
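The normalization behind the double exponential limit can be checked numerically from the equivalence (13.18)-(13.19): solving Tψ(u_T) = e^{−x} for u_T at each x yields levels that are asymptotically linear in x with slope 1/a_T ≈ 1/√(2 log T). The sketch below is our own illustration, taking α = 2 and the constant in ψ equal to 1 (so ψ(u) = φ(u), a purely hypothetical normalization), and solves the equation by bisection.

```python
import math

def psi(u):
    # Hypothetical tail-intensity function: psi(u) = phi(u), the standard
    # normal density (the alpha = 2 case with the constant set to 1).
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

def level(T, x):
    """Solve T * psi(u) = exp(-x) for u by bisection (psi is decreasing)."""
    target = math.exp(-x)
    lo, hi = 0.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if T * psi(mid) > target:
            lo = mid          # solution lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

T = 1e8
u0, u1 = level(T, 0.0), level(T, 1.0)
slope = u1 - u0                       # approximately 1/a_T
print(slope, 1 / math.sqrt(2 * math.log(T)))
```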
Processes with finite upcrossing intensities

We show now how some of the conditions required for the general theory may be simplified when the mean number μ(u) of upcrossings of each level u per unit time is finite. This includes the particular normal cases with finite second spectral moments already covered in Chapter 7 and in the preceding section, but of course not the "non-differentiable" processes with α < 2.

We use the notation of Chapter 6 in addition to that of the present chapter and assume that μ = μ(u) = E(N_u(1)) < ∞ for each value of u. Writing as in (6.1), for q > 0,

(13.32)  J_q(u) = P{ξ(0) < u < ξ(q)}/q,

it is clear that

(13.33)  J_q(u) ≤ P{N_u(q) ≥ 1}/q ≤ E(N_u(q))/q = μ,

and it follows from Lemma 6.3 that

(13.34)  J_q(u) → μ as q → 0

for each fixed u.
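The convergence J_q(u) → μ(u) in (13.34) is easy to check numerically for a smooth stationary normal process, where μ(u) is given by Rice's formula μ(u) = (1/2π)√(λ_2/λ_0) e^{−u²/(2λ_0)}. The sketch below is our own illustration: it uses the hypothetical cosine process ξ(t) = A cos t + B sin t (A, B independent standard normal, so λ_0 = λ_2 = 1) and estimates J_q(u) = P{ξ(0) < u < ξ(q)}/q by Monte Carlo, integrating out B exactly to reduce the variance.

```python
import math
import random

random.seed(2)

def std_normal_tail(x):
    """P{N(0,1) > x} via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def estimate_J_q(u, q, n=1000000):
    """Monte Carlo estimate of J_q(u) = P{xi(0) < u < xi(q)}/q for the
    cosine process xi(t) = A cos t + B sin t (A, B independent N(0,1)).

    Since xi(0) = A and xi(q) = A cos q + B sin q, conditionally on
    A = a < u the event {xi(q) > u} has exact probability
    P{B > (u - a cos q)/sin q}, which keeps the variance small.
    """
    c, s = math.cos(q), math.sin(q)
    total = 0.0
    for _ in range(n):
        a = random.gauss(0, 1)
        if a < u:
            total += std_normal_tail((u - a * c) / s)
    return total / (n * q)

u = 1.0
mu = math.exp(-u * u / 2) / (2 * math.pi)   # Rice's formula with l0 = l2 = 1
jq_est = estimate_J_q(u, 0.01)
print(jq_est, mu)
```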
In the normal case we saw (Lemma 6.6) that J_q(u) ~ μ(u) as q → 0, u → ∞ in such a way that uq → 0. Here we shall use a variant of this property, assuming as needed that, for each a > 0, there are constants q_a = q_a(u), q_a(u) → 0 as u → ∞, such that, with μ = μ(u),

(13.35)  lim inf_{u→∞} J_{q_a}(u)/μ ≥ ν_a,

where ν_a → 1 as a → 0. (As indicated below this is readily verified when ξ(t) is normal, when we may take q_a(u) = a/u.)
We shall assume as needed that

(13.36)  P{ξ(0) > u} = o(μ(u)) as u → ∞,

which clearly holds in the normal case but more generally is readily verified if, for example, for some q = q(u) → 0 as u → ∞,

(13.37)  lim sup_{u→∞} P{ξ(0) > u, ξ(q) > u}/P{ξ(0) > u} < 1,

since (13.37) implies that lim inf_{u→∞} qJ_q(u)/P{ξ(0) > u} > 0, from which it follows that P{ξ(0) > u}/J_q(u) → 0, and hence (13.36) holds since J_q(u) ≤ μ(u).
We may now recast the conditions (13.8) and (13.9) in terms of the function μ(u), identifying this function with ψ(u).

LEMMA 13.11 (i) Suppose μ(u) < ∞ for each u and that (13.36) (or the sufficient condition (13.37)) holds. Then (13.8) holds with ψ(u) = μ(u).

(ii) If (13.35) holds for some family {q_a(u)}, then (13.11) holds with ψ(u) = μ(u).

PROOF Since clearly

P{M(h) > u} ≤ P{N_u(h) ≥ 1} + P{ξ(0) > u} ≤ μh + P{ξ(0) > u},

(13.8) follows at once from (13.36), which proves (i).

Now, if (13.35) holds, then with q = q_a(u), μ = μ(u),
P{ξ(0) ≤ u, ξ(q) ≤ u, M(q) > u} = P{ξ(0) ≤ u, M(q) > u} − P{ξ(0) ≤ u < ξ(q)}
  ≤ P{N_u(q) ≥ 1} − qJ_q(u)
  ≤ μq − μq ν_a (1 + o(1)),

so that

lim sup_{u→∞} P{ξ(0) ≤ u, ξ(q) ≤ u, M(q) > u}/(qμ) ≤ 1 − ν_a,

which tends to zero as a → 0, giving (13.11). □
In view of this lemma, Gnedenko's Theorem now applies to processes of this kind using the more readily verifiable conditions (13.35) and (13.36), as follows.

THEOREM 13.12 Theorem 13.5 holds for a stationary process {ξ(t)} with ψ(u) = μ(u) < ∞ for each u if the conditions (13.6) and (13.10) are replaced by (13.35) and (13.36) (or by (13.35) and (13.37)).

PROOF By (i) of the previous lemma the condition (13.36) (or its sufficient condition (13.37)) implies (13.8) and hence both (13.6) and (13.7). On the other hand, (ii) of the lemma shows that (13.35) implies (13.11), which together with (13.7) implies (13.10) by Lemma 13.3 (iii). □

The condition (13.10) also occurs in Theorem 13.8 and may of course be replaced by (13.35) there, since (13.7) is implied by (13.9), which is assumed in that theorem.
Finally, we note that while (13.36) and (13.37) are especially convenient in giving (13.8) (Lemma 13.11 (i)), the verification of (13.9) still requires obtaining

lim inf_{u→∞} P{M(h) > u}/(hμ(u)) ≥ 1 for 0 < h ≤ h_0.

This of course follows for all the normal processes considered, by Theorem 12.9 with α = 2. There are a number of independent simpler derivations of this when α = 2, one of these being along the lines of the "cosine-process" comparison in Chapter 7. The actual comparison used there gave a slightly weaker result, which was, however, sufficient to yield the desired limit theory by the particular methods employed.
Poisson convergence of upcrossings

It was shown in Chapter 8 that the upcrossings of one or more high levels by a normal process ξ(t) take on a specific Poisson character under appropriate conditions. It was assumed in particular that the covariance function r(t) of ξ(t) satisfied (7.1), so that the expected number of upcrossings per unit time, μ = E(N_u(1)), is finite.

Corresponding results are obtainable for ε-upcrossings by normal processes when r(t) satisfies (12.1) with α < 2, and indeed the proof is indicated in Chapter 12 for the single level result (Theorem 12.18).

For general stationary processes the same results may be proved under conditions used in the present chapter, including C, C'. Again, when μ = E(N_u(1)) < ∞ the results apply to actual upcrossings, while if μ = ∞ they apply to ε-upcrossings. We shall state and briefly indicate the proof of the specific theorem for a single level in the case when μ < ∞.
As in previous discussions, we consider a time period T and a level u_T, both increasing in such a way that Tμ → τ > 0 (μ = μ(u_T)), and define a normalized point process of upcrossings by

N*_T(B) = N_{u_T}(TB)

for each interval (or more general Borel set) B, so that, in particular,

E(N*_T((0,1])) = E(N_{u_T}((0,T])) = μT → τ.

This shows that the "intensity" (i.e. mean number of events per unit time) of the normalized upcrossing point process converges to τ. Our task is to show that the upcrossing point process actually converges in distribution to a Poisson process with mean τ.
The derivation of this result is based on the following two extensions of Theorem 13.8, which are proved by arguments similar to those used in obtaining Theorem 13.8 and in Chapter 8.

THEOREM 13.13 Under the conditions of Theorem 13.8, if 0 < θ < 1 and μT → τ, then

(13.38)  P{M(θT) ≤ u_T} → e^{−θτ} as T → ∞.

THEOREM 13.14 Let I_1, I_2, ..., I_r be disjoint subintervals of [0,1], and write I*_j = TI_j = {t; t/T ∈ I_j}. If μT → τ, then under the conditions of Theorem 13.8,

(13.39)  P(∩_{j=1}^r {M(I*_j) ≤ u_T}) − ∏_{j=1}^r P{M(I*_j) ≤ u_T} → 0,

so that by Theorem 13.13

(13.40)  P(∩_{j=1}^r {M(I*_j) ≤ u_T}) → e^{−τ Σ_{j=1}^r θ_j},

where θ_j is the length of I_j, 1 ≤ j ≤ r.
It is now a relatively straightforward matter to show that the point processes N*_T converge (in the full sense of weak convergence) to a Poisson process N with intensity τ.

THEOREM 13.15 Under the conditions of Theorem 13.8, if Tμ → τ where μ = μ(u_T), then the family N*_T of normalized point processes of upcrossings of u_T on the unit interval converges in distribution to a Poisson process N with intensity τ on that interval, as T → ∞.

PROOF Again by Theorem A.1 it is sufficient to prove that

(i) E(N*_T{(a,b]}) → E(N{(a,b]}) = τ(b−a) as T → ∞ for all a, b, 0 ≤ a < b ≤ 1,
(ii) P{N*_T(B) = 0} → P{N(B) = 0} as T → ∞ for all sets B of the form ∪_1^r B_j, where r is any integer and the B_j are disjoint intervals (a_j, b_j] ⊂ (0,1].

Now (i) follows trivially since E(N*_T{(a,b]}) = μT(b−a) → τ(b−a).
To obtain (ii) we note that

0 ≤ P{N*_T(B) = 0} − P{M(TB) ≤ u_T} = P{N_{u_T}(TB) = 0, M(TB) > u_T}
  ≤ Σ_{j=1}^r P{ξ(Ta_j) > u_T},

since if the maximum in TB = ∪_{j=1}^r (Ta_j, Tb_j] exceeds u_T but there are no upcrossings of u_T in these intervals, then ξ(t) must exceed u_T at the initial point of at least one such interval. But the last expression is just r P{ξ(0) > u_T} → 0 as T → ∞. Hence

P{N*_T(B) = 0} − P{M(TB) ≤ u_T} → 0.

But by Theorem 13.14,

P{M(TB) ≤ u_T} = P(∩_{j=1}^r {M(TB_j) ≤ u_T}) → e^{−τ Σ(b_j − a_j)},

so that (ii) follows since P{N(B) = 0} = e^{−τ Σ(b_j − a_j)}. □
COROLLARY 13.16 If the B_j are disjoint (Borel) subsets of the unit interval and the boundary of each B_j has zero Lebesgue measure, then

P{N*_T(B_j) = r_j, 1 ≤ j ≤ n} → ∏_{j=1}^n e^{−τm(B_j)} [τm(B_j)]^{r_j}/r_j!,

where m(B_j) denotes the Lebesgue measure of B_j.

PROOF This is an immediate consequence of the full distributional convergence proved above (cf. Appendix). □
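The Poisson character asserted by Theorem 13.15 and Corollary 13.16 is visible already in the simplest discrete analogue: among n independent observations, exceedances of a level u_n with n(1 − F(u_n)) = τ form a Binomial(n, τ/n) count, whose distribution converges to Poisson(τ). The sketch below (our own illustration) compares the two distributions exactly, without simulation.

```python
import math

def binom_pmf(n, p, r):
    """P{Binomial(n, p) = r}."""
    return math.comb(n, r) * p**r * (1 - p) ** (n - r)

def poisson_pmf(lam, r):
    """P{Poisson(lam) = r}."""
    return math.exp(-lam) * lam**r / math.factorial(r)

tau, n = 2.0, 100000
p = tau / n   # exceedance probability chosen so that n*(1 - F(u_n)) = tau
max_diff = max(abs(binom_pmf(n, p, r) - poisson_pmf(tau, r))
               for r in range(10))
print(max_diff)   # small, of order tau**2/n
```

The continuous-time results replace independence by the mixing-type conditions C, C'; the computation above only illustrates the limiting count distribution.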
The above results concern convergence of the point processes of upcrossings of u_T in the unit interval to a Poisson process in the unit interval. A slight modification, requiring C and C' to hold for all families u_{θT} in place of u_T, for all θ > 0, enables a corresponding result to be shown for the upcrossings on the whole positive real line, but we do not pursue this here. Instead we show how Theorem 13.15 yields the asymptotic distribution of the r-th largest local maximum in (0,T).

Suppose, then, that ξ(t) has a continuous derivative a.s. and (cf. Chapters 6 and 9) define N'_u(T) to be the number of local maxima in the interval (0,T) for which the process value exceeds u, i.e. the number of downcrossing points t of zero by ξ' in (0,T) such that ξ(t) > u. Clearly N'_u(T) ≥ N_u(T) − 1, since at least one local maximum occurs between two upcrossings. It is also reasonable to expect that if the sample function behaviour is not too irregular, there will tend to be just one local maximum above u between most successive upcrossings of u when u is large, so that N'_u(T) and N_u(T) will tend to be approximately equal. The following result makes this precise.
THEOREM 13.17 With the above notation, let {u_T} be constants such that P{ξ(0) > u_T} → 0 as T → ∞, that E(N'_u(1)) is finite for each u with E(N'_u(1)) ~ μ(u) as u → ∞, and that Tμ (= Tμ(u_T)) → τ > 0. Then, writing u_T = u,

E(|N'_u(T) − N_u(T)|) → 0.

If also the conditions of Theorem 13.15 hold (so that P{N_u(T) = r} → e^{−τ}τ^r/r!), it follows that

P{N'_u(T) = r} → e^{−τ}τ^r/r!.

PROOF As noted above, N'_u(T) ≥ N_u(T) − 1, and it is clear, moreover, that if N'_u(T) = N_u(T) − 1, then ξ(T) > u. Hence

E(|N'_u(T) − N_u(T)|) ≤ E(N'_u(T) − N_u(T)) + 2P{N'_u(T) = N_u(T) − 1}
  ≤ TE(N'_u(1)) − μT + 2P{ξ(T) > u},

which tends to zero as T → ∞, since P{ξ(T) > u} = P{ξ(0) > u_T} → 0 and TE(N'_u(1)) − μT = μT[E(N'_u(1))/μ − 1] → 0, so that the first part of the theorem follows. The second part now follows immediately, since the integer-valued r.v. N'_u(T) − N_u(T) tends to zero in probability, giving P{N'_u(T) = r} − P{N_u(T) = r} → 0 for each r. □
Now write M^(r)(T) for the r-th largest local maximum in the interval (0,T). Since the events {M^(r)(T) ≤ u} and {N'_u(T) < r} are identical, we obtain the following corollary.

COROLLARY 13.18 Under the conditions of the theorem,

P{M^(r)(T) ≤ u_T} → e^{−τ} Σ_{s=0}^{r−1} τ^s/s!.  □
As a further corollary we obtain the limiting distribution of M^(r)(T) in terms of that for M(T).

COROLLARY 13.19 Suppose that P{a_T(M(T) − b_T) ≤ x} → G(x) and that the conditions of Theorem 13.8 hold with u_T = x/a_T + b_T for each real x (and ψ = μ). Suppose also that E(N'_u(1)) ~ E(N_u(1)) as u → ∞. Then

(13.41)  P{a_T(M^(r)(T) − b_T) ≤ x} → G(x) Σ_{s=0}^{r−1} [−log G(x)]^s/s!

where G(x) > 0 (and the limit is zero if G(x) = 0).

PROOF This follows from Corollary 13.18 by writing G(x) = e^{−τ}, since Theorem 13.8 implies that Tψ(u_T) → τ. □

Note that, by Lemma 9.4, for a stationary normal process with finite second and fourth spectral moments, E(N'_u(1)) ~ μ(u), so that Theorem 13.17 and its corollaries apply.
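As a quick numerical sanity check on (13.41) (our own illustration), the limit for the r-th largest local maximum is just the Poisson distribution function P{N < r} evaluated at τ = −log G(x): for r = 1 it reduces to G(x) itself, and it increases towards 1 as r grows.

```python
import math

def rth_max_limit(G_x, r):
    """Limit in (13.41): G(x) * sum_{s=0}^{r-1} (-log G(x))**s / s!."""
    tau = -math.log(G_x)
    return G_x * sum(tau**s / math.factorial(s) for s in range(r))

G_x = math.exp(-math.exp(-0.5))   # Gumbel (Type I) d.f. at x = 0.5
vals = [rth_max_limit(G_x, r) for r in range(1, 6)]
print(vals)
```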
The relation (13.41) gives the asymptotic distribution of the r-th largest local maximum M^(r)(T) as a corollary of Theorem 13.17. Further, it is clearly possible to generalize Theorem 13.17 to give "full Poisson convergence" for the point process of local maxima of height above u, and indeed to generalize Theorem 9.5 and obtain joint distributions of heights and positions of local maxima in this general situation.
Interpretation of the function ψ(u)

The function ψ(u) used throughout this chapter describes the tail of the distribution of the maximum M(h) in a fixed interval, in the sense of (13.9), viz.,

P{M(h) > u} ~ hψ(u) for 0 < h ≤ h_0.

We have seen how ψ(u) may be calculated for particular cases - as ψ(u) = μ(u) for processes with a finite upcrossing intensity, and as ψ(u) = Kφ(u)u^{(2/α)−1} for normal processes satisfying (12.1).
Berman (1979) has recently considered another general method for obtaining ψ (or at least many of its properties), based on the asymptotic distribution of the amount of time spent above a high level. Specifically, Berman considers the time L_t(u) which a stationary process spends above the level u in the interval (0,t) and proves the basic result

lim_{u→∞} P{vL_t(u) > x}/E(vL_t(u)) = −f'(x), x > 0,

at all continuity points x of f' (under given conditions). Here v = v(u) is a certain function of u, f(x) is an absolutely continuous non-increasing function with Radon-Nikodym derivative f', and t is fixed.
While this result does not initially apply at x = 0, it is extended to so apply, giving, since the events {M(t) > u} and {L_t(u) > 0} are equivalent,

P{M(t) > u} ~ −f'(0) E(vL_t(u)) = −f'(0) vt(1 − F(u)),

where F is the marginal d.f. of the process, since it is very easily shown that E(L_t(u)) = t(1 − F(u)). Hence we may - under the stated conditions - obtain

ψ(u) = −f'(0) v(u)(1 − F(u)).

It is required in this approach that F have such a form that it belongs to the domain of attraction of the Type I extreme value distribution, and it follows (though not immediately) that M(h) has a Type I limit, so that (e.g. from the theory of this chapter) a limiting distribution for M(T) as T → ∞ must (under appropriate conditions) also be of Type I. However, a number of important cases are covered by this approach, including stationary normal processes, certain Markov processes, and so-called χ²-processes. Further, the approach gives considerable insight into the central ideas governing extremal properties.
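The identity E(L_t(u)) = t(1 − F(u)) used above is just Fubini's theorem applied to L_t(u) = ∫₀ᵗ 1{ξ(s) > u} ds, and is easy to confirm by simulation. The sketch below is our own illustration, again with the hypothetical cosine process ξ(t) = A cos t + B sin t, whose marginal d.f. is standard normal; it estimates the mean sojourn fraction above u on (0, 2π).

```python
import math
import random

random.seed(3)

def sojourn_fraction(u, grid=200):
    """Fraction of (0, 2*pi) that one realisation of the cosine process
    xi(t) = A cos t + B sin t spends above level u (grid approximation;
    each grid point has P{xi(t) > u} = 1 - F(u), so the estimate of the
    mean fraction is unbiased)."""
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    above = sum(a * math.cos(t) + b * math.sin(t) > u
                for t in (2 * math.pi * k / grid for k in range(grid)))
    return above / grid

u, n = 1.0, 4000
est = sum(sojourn_fraction(u) for _ in range(n)) / n      # E L_t(u) / t
exact = 0.5 * math.erfc(u / math.sqrt(2))                 # 1 - F(u)
print(est, exact)
```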
REFERENCES

Berman, S.M. (1971a). Asymptotic independence of the numbers of high and low level crossings of stationary Gaussian processes. Ann. Math. Statist. 42, 927-945.

Berman, S.M. (1971b). Maxima and high level excursions of stationary Gaussian processes. Trans. Amer. Math. Soc. 160, 65-85.

Berman, S.M. (1979). Sojourns and extremes of stationary processes. Courant Institute of Math. Sci., New York Univ.

Cramér, H. (1965). A limit theorem for the maximum values of certain stochastic processes. Theory Prob. Appl. 10, 126-128.

Cramér, H. & Leadbetter, M.R. (1967). Stationary and related stochastic processes. New York: John Wiley and Sons.

Fernique, X. (1964). Continuité des processus Gaussiens. Comptes Rendus 258, 6058-6060.

Kac, M. & Slepian, D. (1959). Large excursions of Gaussian processes. Ann. Math. Statist. 30, 1215-1228.

Leadbetter, M.R., Lindgren, G. & Rootzén, H. (1978). Conditions for the convergence in distribution of maxima of stationary normal processes. Stochastic Processes Appl. 8, 131-139.

Lindgren, G. (1974). A note on the asymptotic independence of high level crossings for dependent Gaussian processes. Ann. Prob. 2, 535-539.

Lindgren, G. (1977). Functional limits of empirical distributions in crossing theory. Stochastic Processes Appl. 5, 143-149.

Lindgren, G., Maré, J. de & Rootzén, H. (1975). Weak convergence of high level crossings and maxima for one or more Gaussian processes. Ann. Prob. 3, 961-978.

Marcus, M. (1970). A bound for the distribution of the maximum of continuous Gaussian processes. Ann. Math. Statist. 41, 305-309.

Mittal, Y. (1979). A new mixing condition for stationary Gaussian processes. Ann. Prob. 7, 724-730.

Pickands, J. (1969a). Upcrossing probabilities for stationary Gaussian processes. Trans. Amer. Math. Soc. 145, 51-73.

Qualls, C. & Watanabe, H. (1972). Asymptotic properties of Gaussian processes. Ann. Math. Statist. 43, 580-596.

Rao, C.R. (1973). Linear Statistical Inference and its Applications. 2nd ed. New York: John Wiley and Sons.

Rice, S.O. (1944, 1945). Mathematical analysis of random noise. Bell Syst. Tech. J. 23, 282-332, and 24, 46-156.

Slepian, D. (1961). First passage time for a particular Gaussian process. Ann. Math. Statist. 32, 610-612.

Slepian, D. (1962). On the zeros of Gaussian noise. Time Series Analysis, Ed: M. Rosenblatt, 104-115. New York: John Wiley and Sons.

Volkonskii, V.A. & Rozanov, Yu.A. (1961). Some limit theorems for random functions II. Theory Prob. Appl. 6, 186-198.
UNCLASSIFIED

REPORT DOCUMENTATION PAGE

Title: Extremal and Related Properties of Stationary Processes. Part II: Extreme Values in Continuous Time
Type of report: Technical
Authors: M.R. Leadbetter, G. Lindgren, H. Rootzén
Contract or grant numbers: ONR Contract N00014-75-C-0809; SNRC Contract F3741-004
Performing org. report number: Mimeo Series No. 1307
Controlling office: Office of Naval Research, Statistics and Probability Program (Code 436), Arlington, VA 22217
Report date: October 1980
Number of pages: 148
Security classification: UNCLASSIFIED
Distribution statement: Approved for Public Release -- Distribution Unlimited
Key words: extreme values, stochastic processes, maxima, local maxima

Abstract: In this work we explore extremal and related theory for continuous parameter stationary processes. A general theory extending that for the sequence case, described in Chapter 2 of Part I, is obtained, based on dependence conditions closely related to those used there for sequences. In particular, a general form of Gnedenko's Theorem is proved for the maximum

    M(T) = sup{ξ(t); 0 ≤ t ≤ T},

where ξ(t) is a stationary stochastic process satisfying appropriate regularity and dependence conditions. Cases where the process {ξ(t)} is normal are discussed in detail. Related topics include point process properties of upcrossings of levels, sample path properties at such upcrossings, location of extremes, maxima and minima for two or more dependent processes, and properties of local extremes.

UNCLASSIFIED