Lindgren, G. (1971): "Wave-length and amplitude for a stationary process after a high maximum."

WAVE-LENGTH AND AMPLITUDE FOR A STATIONARY PROCESS
AFTER A HIGH MAXIMUM*

by

Georg Lindgren
University of North Carolina at Chapel Hill
and University of Lund, Sweden

Department of Statistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 742
February 1971

* This research was supported in part by the Office of Naval Research under Contract N00014-67-A-0321-0002.
1. INTRODUCTION.

Let {ξ(t), t ∈ R} be a stationary, zero-mean Gaussian process with covariance function r, and assume

    r(0) = 1,  -r''(0) = λ_2.

The object of this paper is to study the distribution of the two wave-characteristics wave-length and amplitude, i.e. the horizontal and vertical distances between "a randomly chosen" local maximum and the following minimum, especially when the maximum is very high.

The main tool is a random process

    ξ_u(t) = ur(t) - η_u(λ_2 r(t) + r''(t)) + Δ(t)

(where η_u is a certain random variable and Δ is a certain non-stationary Gaussian process). The sample paths of the process {ξ_u(t), t ∈ R} have (almost surely) a local maximum of height u at t = 0, and they can be used to describe the behaviour of the original process ξ in the neighborhood of "a randomly chosen" local maximum of height u. (For "horizontal window", h.w., conditional processes, see Kac and Slepian [4], Geman [1] and, for this special topic, Lindgren [6].)
Let the wave-length τ_u > 0 be the time for the first local minimum of ξ_u, and let δ_u = u - ξ_u(τ_u) be the corresponding amplitude. The asymptotic behaviour of (τ_u, δ_u) as u → -∞ has been treated by Lindgren [7], who also gives moment approximations to the distribution for moderate u-values, [8].

Here we will concentrate upon the case u → +∞ for the cases i) and ii) defined below. Then the dominant term in the definition of ξ_u(t) will be ur(t), and it is seen that the behaviour of the process after a very high maximum is well determined by the behaviour of its covariance function; the process "follows its covariance function". We have then mainly the following three cases, where we say that r has a stationary point at t if r'(t) = 0:

i) r has a first local minimum at t_0 > 0 and has no stationary points in (0, t_0);

ii) r has a first local minimum at t_0 > 0 and has at least one stationary point in (0, t_0);

iii) r has no stationary points in (0, ∞).

In case i) we will prove that, after suitable normalizations, both τ_u - t_0 and δ_u - u(1-r(t_0)) have asymptotic normal distributions, while in case ii) there will be a positive probability that τ_u falls near some of the stationary points less than t_0. Then the asymptotic normal distributions have to be modified. This is done in Sections 3 and 4 respectively. In case iii), for every t > 0 the probability will tend to one that ξ_u is strictly decreasing in (0, t), and hence τ_u → ∞ in probability as u → ∞.

The results and methods of proofs in case iii) are quite different from those in cases i) and ii), and therefore case iii) will be dealt with in a separate paper.
2. SOME DEFINITIONS AND GENERAL RESULTS.

Suppose the covariance function r is four times continuously differentiable with

    r(0) = 1,  -r''(0) = λ_2,  r^IV(0) = λ_4,

that r(t) → 0 as t → ∞, and that

    r^IV(t) = λ_4 + O(|log|t||^(-a)) as t → 0, for some a > 1,

and put β = (λ_4 - λ_2^2)/λ_2. Adopt the following functions from [6]:

    C(s,t) = r(s-t) - [λ_2(λ_4-λ_2^2)]^(-1) {λ_2λ_4 r(s)r(t) + λ_2^2 r(s)r''(t)
             + (λ_4-λ_2^2) r'(s)r'(t) + λ_2^2 r''(s)r(t) + λ_2 r''(s)r''(t)},

    c(s,t) = ∂^2 C(s,t)/∂s∂t
           = -r''(s-t) - [λ_2(λ_4-λ_2^2)]^(-1) {λ_2λ_4 r'(s)r'(t) + λ_2^2 r'(s)r'''(t)
             + (λ_4-λ_2^2) r''(s)r''(t) + λ_2^2 r'''(s)r'(t) + λ_2 r'''(s)r'''(t)}.
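As a concrete numerical check of these definitions (an illustration, not part of the paper: the covariance r(t) = exp(-t^2/2) cos t and its hand-derived derivatives are assumed here), the functions C and c can be coded directly; both must vanish at the origin, since Δ(0) = δ(0) = 0 a.s., and C must be symmetric.

```python
import math

# Assumed example covariance (illustration, not from the paper):
# r(t) = exp(-t^2/2) cos t, with hand-derived derivatives r', r'', r''', r''''.
def r(t):  return math.exp(-t*t/2) * math.cos(t)
def r1(t): return -math.exp(-t*t/2) * (t*math.cos(t) + math.sin(t))
def r2(t): return math.exp(-t*t/2) * ((t*t - 2)*math.cos(t) + 2*t*math.sin(t))
def r3(t): return math.exp(-t*t/2) * ((6*t - t**3)*math.cos(t) + (4 - 3*t*t)*math.sin(t))
def r4(t): return math.exp(-t*t/2) * ((t**4 - 12*t*t + 10)*math.cos(t) + (4*t**3 - 16*t)*math.sin(t))

lam2 = -r2(0.0)                   # lambda_2 = -r''(0)
lam4 = r4(0.0)                    # lambda_4 = r''''(0)
beta = (lam4 - lam2**2)/lam2
A = 1.0/(lam2*(lam4 - lam2**2))

def C(s, t):
    """Covariance of the non-stationary Gaussian process Delta."""
    return r(s-t) - A*(lam2*lam4*r(s)*r(t) + lam2**2*r(s)*r2(t)
                       + (lam4 - lam2**2)*r1(s)*r1(t)
                       + lam2**2*r2(s)*r(t) + lam2*r2(s)*r2(t))

def c(s, t):
    """Covariance of delta = Delta'; the mixed second derivative of C."""
    return -r2(s-t) - A*(lam2*lam4*r1(s)*r1(t) + lam2**2*r1(s)*r3(t)
                         + (lam4 - lam2**2)*r2(s)*r2(t)
                         + lam2**2*r3(s)*r1(t) + lam2*r3(s)*r3(t))

# Delta(0) = delta(0) = 0 a.s., so both variances must vanish at the origin.
print(lam2, lam4, beta, C(0.0, 0.0), c(0.0, 0.0))
```

For this particular r one finds λ_2 = 2, λ_4 = 10 and β = 3.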
Further, let

    Ψ(x) = φ(x) + xΦ(x),

where φ and Φ are the standardized normal density and cumulative distribution functions. Also let η_u be a random variable (r.v.) with the density

(2.1)    q_u*(y) = 0                                                  for y < -u/β,
         q_u*(y) = λ_2β(u/β + y) exp(-λ_2βy^2/2) / (√(2π) Ψ(u√(λ_2/β)))   for y ≥ -u/β,

and let {Δ(t), t ∈ R} be a non-stationary, zero-mean, Gaussian process, independent of η_u and with the covariance function C(s,t). (That C is a non-negative definite function is proved in Lemma 2 of [6].) The process Δ can be chosen to have, with probability one, twice continuously differentiable sample functions, the derivatives of which

    δ(t) = Δ'(t)

constitute a non-stationary, zero-mean, Gaussian process with the covariance function c(s,t). This can be proved in a similar way as Lemma 1.1 of [7], if one makes use of a weaker condition for the existence of sample derivatives given by Leadbetter and Weissner [5]. Note that the distributions of Δ(t) and δ(t) do not depend on u.

Define the processes

(2.2)    ξ_u(t) = ur(t) - η_u(λ_2 r(t) + r''(t)) + Δ(t),

(2.3)    ξ_u'(t) = ur'(t) - η_u(λ_2 r'(t) + r'''(t)) + δ(t),

and the wave-length and amplitude

(2.4)    τ_u = first local minimum of ξ_u(t) = first upcrossing zero of ξ_u'(t),
         δ_u = u - ξ_u(τ_u).
We sum up some properties of ξ_u and ξ_u'.

PROPOSITION 2.1.
a) ξ_u has a local maximum of height u at t = 0 (a.s.).
b) ξ_u'(t) < 0 for all sufficiently small positive t (a.s.).
c) Given that ξ has a local maximum with height u at t_0 (in h.w. sense), ξ(t_0+t) and ξ'(t_0+t) have the same distribution as ξ_u(t) and ξ_u'(t).
d) The wave-length and amplitude of ξ after a local maximum with height u (in h.w. sense) have the same distribution as τ_u and δ_u.

PROOF. Rewriting ξ_u(t) and ξ_u'(t) in terms of the r.v.

    ζ = λ_2β(u/β + η_u),

which has the density q_u(z) = q_u*((z - λ_2u)/(λ_2β))/(λ_2β) (z > 0), we recognize (2.2) and (2.3) as the processes given by (1.1) and (1.2) in [7]. Then part a) and part b) follow from the proof of Lemma 1.1 in [8]. Part c) and part d) are essentially the ergodic theorems 1.1 and 1.2 in [8]. We only have to ascertain that {ξ(t), t ∈ R} is an ergodic process. But since r(t) → 0 as t → ∞, the spectral distribution of ξ(t) can have no discrete part. From a theorem of Maruyama [9] and Grenander [2], it follows that ξ(t) is ergodic.

LEMMA 2.1. The r.v. η_u√(λ_2β) has an asymptotic standard normal distribution as u → ∞, and its density tends to φ with dominated convergence.

PROOF. The function x/Ψ(x) increases to one as x → ∞. Therefore the density of η_u√(λ_2β), which by a change of variables in (2.1) equals (v+x)φ(x)/Ψ(v) for x ≥ -v, with v = u√(λ_2/β), tends to φ(x) with dominated convergence.
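Lemma 2.1 lends itself to a quick numerical sketch (the values u = 10, λ_2 = 2, β = 3 are assumed, illustrative only): a change of variables in (2.1) gives the density f_u(x) = (v+x)φ(x)/Ψ(v), x ≥ -v, v = u√(λ_2/β), for the normalized variable η_u√(λ_2β), and already for moderately large u this is close to the standard normal density φ.

```python
import math

def phi(x):  return math.exp(-x*x/2)/math.sqrt(2*math.pi)   # standard normal density
def Phi(x):  return 0.5*(1 + math.erf(x/math.sqrt(2)))      # standard normal cdf
def Psi(x):  return phi(x) + x*Phi(x)                       # Psi as defined in Section 2

# Assumed illustrative values (not from the paper): u = 10, lambda_2 = 2, beta = 3.
u, lam2, beta = 10.0, 2.0, 3.0
v = u*math.sqrt(lam2/beta)

def f(x):
    """Density of eta_u * sqrt(lam2*beta), by a change of variables in (2.1)."""
    return (v + x)*phi(x)/Psi(v) if x >= -v else 0.0

# The density integrates to one over its support ...
n, lo, hi = 40000, -v, 12.0
h = (hi - lo)/n
total = 0.5*h*(f(lo) + f(hi)) + h*sum(f(lo + i*h) for i in range(1, n))

# ... and is already uniformly close to phi on [-4, 4] for this u.
gap = max(abs(f(k/100.0) - phi(k/100.0)) for k in range(-400, 401))
print(total, gap)
```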
The lemma implies that the term η_u(λ_2 r'(t) + r'''(t)) is of moderate order as u → ∞, and so is the δ(t)-term in any bounded interval (since δ has continuous sample functions). Hence we expect that, for large u, ξ_u'(t) can be zero only if r'(t) is very close to zero. To express it more precisely, we have the following lemma.
LEMMA 2.2. If I is a bounded, measurable set of non-negative times, and if

    I_ε = I ∩ {t; t ≥ ε},

then

    inf_{t ∈ I_ε} |r'(t)| > 0 for all ε > 0  implies  P(τ_u ∈ I) → 0 as u → ∞.

PROOF. Since

    P(τ_u ∈ I) ≤ P(τ_u ≤ ε) + P(τ_u ∈ I_ε),

it is sufficient to prove that a) P(τ_u ≤ ε) → 0 and b) P(τ_u ∈ I_ε) → 0 as u → ∞.

a) Since r''(0) = -λ_2 < 0 and r'(0) = r'''(0) = 0, and since r' and r''' are continuously differentiable, there is an ε > 0 such that r'(t) < 0 for 0 < t ≤ ε, so that

    inf_{0<t≤ε} t^(-1)|r'(t)| > 0  and  sup_{0<t≤ε} t^(-1)|λ_2 r'(t) + r'''(t)| < ∞.

Since δ(0) = 0 (a.s.) and δ has continuously differentiable sample functions, sup_{0<t≤ε} t^(-1)|δ(t)| is a well-defined, finite r.v. Thus, for 0 < t ≤ ε,

    ξ_u'(t) ≤ t{-u inf_{0<s≤ε} s^(-1)|r'(s)| + |η_u| sup_{0<s≤ε} s^(-1)|λ_2 r'(s)+r'''(s)| + sup_{0<s≤ε} s^(-1)|δ(s)|},

which is strictly negative if

    u inf_{0<s≤ε} s^(-1)|r'(s)| > |η_u| sup_{0<s≤ε} s^(-1)|λ_2 r'(s)+r'''(s)| + sup_{0<s≤ε} s^(-1)|δ(s)|.

This occurs with high probability if u is large, and therefore

    P(τ_u ≤ ε) ≤ 1 - P(ξ_u'(t) < 0 for 0 < t ≤ ε)

is arbitrarily close to zero for large u.

b) It remains to prove that P(τ_u ∈ I_ε) → 0 as u → ∞. If we write m = inf_{I_ε} |r'(t)| > 0, we can estimate |ξ_u'(t)| in terms of |η_u| and sup_{[ε,T]} |δ(t)|. Take T so that I_ε ⊂ [ε,T]. With M = sup_{[ε,T]} |λ_2 r'(t) + r'''(t)| we get

    inf_{I_ε} |ξ_u'(t)| ≥ um - |η_u| M - sup_{[ε,T]} |δ(t)|.

Thus, for |η_u| < um/2M and sup_{[ε,T]} |δ(t)| < um/2 we have inf_{I_ε} |ξ_u'(t)| > 0, and we conclude that

    P(τ_u ∈ I_ε) ≤ 1 - P(inf_{I_ε} |ξ_u'(t)| > 0)
               ≤ 1 - P(sup_{[ε,T]} |δ(t)| < um/2) + P(|η_u| ≥ um/2M).

The last probability in the right hand side tends to zero as u → ∞. Since the sample functions of δ are continuous, they are bounded over the compact interval [ε,T], and thus the first probability tends to one. This proves that lim_{u→∞} P(τ_u ∈ I_ε) = 0.
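The heuristic behind this lemma, that ξ_u' can vanish only where r' nearly does, can be illustrated by a rough simulation (assumptions: the example covariance r(t) = exp(-t^2/2) cos t, the Δ-term omitted, and η_u replaced by its limiting N(0, 1/(λ_2β)) law from Lemma 2.1, so this is only an approximation of the conditional process, not the exact one): for large u the first local minimum of ur(t) - η_u(λ_2 r(t) + r''(t)) falls close to the first local minimum t_0 of r.

```python
import math, random

# Assumed example covariance (illustration only): r(t) = exp(-t^2/2) cos t,
# for which lambda_2 = -r''(0) = 2 and beta = (lambda_4 - lambda_2^2)/lambda_2 = 3.
def r(t):  return math.exp(-t*t/2) * math.cos(t)
def r1(t): return -math.exp(-t*t/2) * (t*math.cos(t) + math.sin(t))
def r2(t): return math.exp(-t*t/2) * ((t*t - 2)*math.cos(t) + 2*t*math.sin(t))

lam2, beta = 2.0, 3.0
u = 50.0
random.seed(1)

# t0 = first local minimum of r; r' changes sign in (pi/2, pi), so bisect there.
a, b = math.pi/2 + 1e-6, math.pi
for _ in range(100):
    m = 0.5*(a + b)
    if r1(a)*r1(m) <= 0:
        b = m
    else:
        a = m
t0 = 0.5*(a + b)

def first_local_min(g, h=2e-3, tmax=4.0):
    """First grid point that is a discrete local minimum of g on (0, tmax)."""
    t = h
    while t < tmax:
        if g(t) < g(t - h) and g(t) < g(t + h):
            return t
        t += h
    return None

hits, trials = 0, 200
for _ in range(trials):
    # eta_u replaced by its limiting N(0, 1/(lam2*beta)) law; Delta-term of (2.2) omitted.
    eta = random.gauss(0.0, 1.0/math.sqrt(lam2*beta))
    g = lambda t, e=eta: u*r(t) - e*(lam2*r(t) + r2(t))
    tau = first_local_min(g)
    if tau is not None and abs(tau - t0) < 0.25:
        hits += 1
print(t0, hits/trials)
```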
3. ASYMPTOTIC NORMALITY IN CASE i).

We specify the conditions for case i).

C1: There is a time t_0 > 0 and a positive integer k_0 such that the covariance function r is 2k_0 times continuously differentiable near t_0 and

    r'(t) < 0                for 0 < t < t_0,
    r^(j)(t_0) = 0           for j = 1, 2, ..., 2k_0 - 1,
    r^(2k_0)(t_0) > 0.

The asymptotic distribution of (τ_u, δ_u) will now be stated in terms of two independent normal r.v. χ_0 and ψ_0, defined as follows. Let η be normal with mean zero and variance 1/(λ_2β), independent of (δ(t_0), Δ(t_0)), and let

(3.1)    χ_0 = η r'''(t_0) - δ(t_0),    ψ_0 = η(λ_2 r(t_0) + r''(t_0)) - Δ(t_0).

Then (χ_0, ψ_0) has a bivariate normal distribution with mean zero and the covariances

    V(χ_0) = r'''(t_0)^2 V(η) + V(δ(t_0)) = r'''(t_0)^2/(λ_2β) + c(t_0,t_0)
           = λ_2 - r''(t_0)^2/λ_2,

    Cov(χ_0,ψ_0) = r'''(t_0)(λ_2 r(t_0)+r''(t_0)) V(η) + Cov(δ(t_0),Δ(t_0))
                 = r'''(t_0)(λ_2 r(t_0)+r''(t_0))/(λ_2β) + ∂C(s,t)/∂s |_{s=t=t_0} = 0,

    V(ψ_0) = (λ_2 r(t_0)+r''(t_0))^2 V(η) + V(Δ(t_0)) = (λ_2 r(t_0)+r''(t_0))^2/(λ_2β) + C(t_0,t_0)
           = 1 - r(t_0)^2.

Here we have made repeated use of the fact that r'(t_0) = 0. Note that since both r and -r'' are covariance functions, we have |r(t_0)| < 1 and |r''(t_0)| < λ_2, so that the variances λ_2 - r''(t_0)^2/λ_2 and 1 - r(t_0)^2 are strictly positive.

THEOREM 3.1. If condition C1 is fulfilled with t_0 and k_0, then, as u → ∞,

a)  τ_u →p t_0,
b)  (u(τ_u - t_0)^(2k_0-1) r^(2k_0)(t_0)/(2k_0-1)!, δ_u - u(1-r(t_0))) →L (χ_0, ψ_0).

Here →p and →L mean convergence in probability and law respectively. The theorem can be restated as follows.

THEOREM 3.2. If condition C1 is fulfilled with t_0 and k_0, then, as u → ∞,

a')  τ_u →p t_0,
b')  u^(1/(2k_0-1))(τ_u - t_0) →L ((2k_0-1)! χ_0 / r^(2k_0)(t_0))^(1/(2k_0-1)),
c')  δ_u - u(1-r(t_0)) is AsN(0, √(1 - r(t_0)^2)),
d')  τ_u and δ_u are asymptotically independent.
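The closed forms for V(χ_0), V(ψ_0) and the vanishing covariance can be verified numerically (same assumed example covariance r(t) = exp(-t^2/2) cos t as above, which satisfies C1 with k_0 = 1; ∂C/∂s is taken by a finite difference):

```python
import math

# Assumed example covariance (illustration only): r(t) = exp(-t^2/2) cos t.
def r(t):  return math.exp(-t*t/2) * math.cos(t)
def r1(t): return -math.exp(-t*t/2) * (t*math.cos(t) + math.sin(t))
def r2(t): return math.exp(-t*t/2) * ((t*t - 2)*math.cos(t) + 2*t*math.sin(t))
def r3(t): return math.exp(-t*t/2) * ((6*t - t**3)*math.cos(t) + (4 - 3*t*t)*math.sin(t))

lam2, lam4 = 2.0, 10.0            # -r''(0) and r''''(0) for this r
beta = (lam4 - lam2**2)/lam2
A = 1.0/(lam2*(lam4 - lam2**2))

def C(s, t):
    return r(s-t) - A*(lam2*lam4*r(s)*r(t) + lam2**2*r(s)*r2(t)
                       + (lam4 - lam2**2)*r1(s)*r1(t)
                       + lam2**2*r2(s)*r(t) + lam2*r2(s)*r2(t))

def c(s, t):
    return -r2(s-t) - A*(lam2*lam4*r1(s)*r1(t) + lam2**2*r1(s)*r3(t)
                         + (lam4 - lam2**2)*r2(s)*r2(t)
                         + lam2**2*r3(s)*r1(t) + lam2*r3(s)*r3(t))

# t0 = first local minimum of r (first zero of r', bracketed in (pi/2, pi)).
a, b = math.pi/2 + 1e-6, math.pi
for _ in range(100):
    m = 0.5*(a + b)
    if r1(a)*r1(m) <= 0:
        b = m
    else:
        a = m
t0 = 0.5*(a + b)

V_chi = r3(t0)**2/(lam2*beta) + c(t0, t0)                 # V(chi_0)
V_psi = (lam2*r(t0) + r2(t0))**2/(lam2*beta) + C(t0, t0)  # V(psi_0)

# Cross-covariance: dC/ds at s = t = t0 by central finite difference.
dCds = (C(t0 + 1e-5, t0) - C(t0 - 1e-5, t0))/2e-5
cov = r3(t0)*(lam2*r(t0) + r2(t0))/(lam2*beta) + dCds

print(V_chi, lam2 - r2(t0)**2/lam2, V_psi, 1 - r(t0)**2, cov)
```

Both variances agree with their closed forms, and the covariance is numerically zero, as the display above asserts.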
PROOF OF THEOREM 3.1.

a) First we notice that, for all sufficiently small ε > 0, the closed interval [0, t_0-ε] fulfills the requirements of Lemma 2.2. Therefore,

    P(τ_u ≤ t_0-ε) → 0 as u → ∞.

Furthermore, the covariance derivative r'(t_0+ε) is strictly positive for small positive ε, and since the sample derivative

    ξ_u'(t_0+ε) = ur'(t_0+ε) - η_u(λ_2 r'(t_0+ε) + r'''(t_0+ε)) + δ(t_0+ε)

is positive if

    η_u(λ_2 r'(t_0+ε) + r'''(t_0+ε)) - δ(t_0+ε) < ur'(t_0+ε),

we conclude that P(ξ_u'(t_0+ε) > 0) → 1 as u → ∞. Since ξ_u' has continuous sample functions (a.s.), ξ_u'(t_0+ε) > 0 implies τ_u ≤ t_0+ε, except for a set of probability zero. Thus

    P(t_0-ε ≤ τ_u ≤ t_0+ε) → 1 as u → ∞,

and, as asserted, τ_u →p t_0.

b) From part a), we know that ĝ := τ_u - t_0 tends to zero in probability as u → ∞. Therefore, we can expand the functions r, r', r'', and r''', as well as the random functions δ and Δ, in Taylor series in the neighborhood of t_0. If we employ the symbol o_p(1) for any r.v. that tends to zero in probability as u → ∞, we have

    r(τ_u)    = r(t_0) + ĝ^(2k_0)(r^(2k_0)(t_0) + o_p(1))/(2k_0)!,
    r'(τ_u)   = ĝ^(2k_0-1)(r^(2k_0)(t_0) + o_p(1))/(2k_0-1)!,
    r''(τ_u)  = r''(t_0) + o_p(1),
    r'''(τ_u) = r'''(t_0) + o_p(1),
    δ(τ_u)    = δ(t_0) + o_p(1),
    Δ(τ_u)    = Δ(t_0) + o_p(1).

It will soon be evident that uĝ^(2k_0) = o_p(1), so that

    ur(τ_u)  = ur(t_0) + uĝ^(2k_0)(r^(2k_0)(t_0) + o_p(1))/(2k_0)! = ur(t_0) + o_p(1),
    ur'(τ_u) = uĝ^(2k_0-1)(r^(2k_0)(t_0) + o_p(1))/(2k_0-1)!,
    η_u(λ_2 r(τ_u) + r''(τ_u))  = η_u(λ_2 r(t_0) + r''(t_0)) + o_p(1),
    η_u(λ_2 r'(τ_u) + r'''(τ_u)) = η_u r'''(t_0) + o_p(1).

Here we used that r'(t_0) = 0 and that η_u · o_p(1) = o_p(1). Combining the expansions and writing

    χ_0^u = η_u r'''(t_0) - δ(t_0),    ψ_0^u = η_u(λ_2 r(t_0) + r''(t_0)) - Δ(t_0),

we obtain

    ξ_u'(τ_u) = uĝ^(2k_0-1)(r^(2k_0)(t_0) + o_p(1))/(2k_0-1)! - χ_0^u + o_p(1).

Up to now we have not used that ξ_u'(τ_u) = 0. Doing so now, and noticing that

    δ_u - u(1-r(t_0)) = ur(t_0) - ξ_u(τ_u),

we get

(3.2)    uĝ^(2k_0-1) = (2k_0-1)! {χ_0^u + o_p(1)} / (r^(2k_0)(t_0) + o_p(1)),
         δ_u - u(1-r(t_0)) = ψ_0^u + o_p(1).

Lemma 2.1 directly gives that (χ_0^u, ψ_0^u) →L (χ_0, ψ_0), and this now solves our problems. Firstly, it justifies that uĝ^(2k_0) = o_p(1). Secondly, we can use the bivariate version of the Cramér-Slutsky theorem:

    (ξ_n, η_n) →L (ξ, η) and a_n^i →p a^i (constant), i = 1,2,3,4, imply
    (a_n^1 ξ_n + a_n^2, a_n^3 η_n + a_n^4) →L (a^1 ξ + a^2, a^3 η + a^4),

which will give us part b) of the theorem if applied to the variables in (3.2).
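For an assumed example covariance satisfying C1 with k_0 = 1, such as r(t) = exp(-t^2/2) cos t, the limit laws of Theorem 3.2 have explicit parameters; the sketch below computes t_0 and the two asymptotic standard deviations (illustrative numbers, not from the paper).

```python
import math

# Assumed example covariance (illustration only): r(t) = exp(-t^2/2) cos t,
# which satisfies C1 with k0 = 1 at its first minimum; lambda_2 = -r''(0) = 2.
def r(t):  return math.exp(-t*t/2) * math.cos(t)
def r1(t): return -math.exp(-t*t/2) * (t*math.cos(t) + math.sin(t))
def r2(t): return math.exp(-t*t/2) * ((t*t - 2)*math.cos(t) + 2*t*math.sin(t))

lam2 = 2.0

# t0 = first local minimum of r, by bisection on r' over (pi/2, pi).
a, b = math.pi/2 + 1e-6, math.pi
for _ in range(100):
    m = 0.5*(a + b)
    if r1(a)*r1(m) <= 0:
        b = m
    else:
        a = m
t0 = 0.5*(a + b)
assert r2(t0) > 0      # non-degenerate minimum: case i) with k0 = 1

# Theorem 3.2 with k0 = 1:
#   u(tau_u - t0) is asymptotically normal with s.d. sqrt(V(chi_0))/r''(t0),
#     where V(chi_0) = lam2 - r''(t0)^2/lam2;
#   delta_u - u(1 - r(t0)) is AsN(0, sqrt(1 - r(t0)^2)).
sd_tau_times_u = math.sqrt(lam2 - r2(t0)**2/lam2)/r2(t0)
sd_amp = math.sqrt(1 - r(t0)**2)
print(t0, sd_tau_times_u, sd_amp)
```

So for this r the wave-length concentrates at t_0 at rate 1/u, while the amplitude fluctuation around u(1 - r(t_0)) keeps a standard deviation just below one.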
4. MODIFIED ASYMPTOTIC NORMALITY IN CASE ii).

In this case, the covariance function has a (series of) "terrace" point(s) before its first local minimum, and at these terrace points the covariance derivative r' has a tangency of zero. Even if the sample derivative ξ_u'(t), given by (2.3), closely follows the function ur'(t), the question whether it will cross the zero level or not near a terrace point t_i depends on the sign of -η_u r'''(t_i) + δ(t_i). The probabilities of the respective outcomes are nontrivial.

Example: r(t) = … has a terrace point at t = … and its first minimum at a point t > ….
We specify the conditions for case ii).

C2: There is a finite number of times 0 < t_1 < t_2 < … < t_n < t_0 and positive integers k_1, k_2, …, k_n, k_0 such that r has continuous derivatives up to order 2k_i + 1 near t_i, i = 1,2,…,n, and up to order 2k_0 near t_0. Furthermore,

    r'(t) < 0       for 0 < t < t_0, t ≠ t_1, …, t_n,
    r^(j)(t_0) = 0  for j = 1, …, 2k_0-1,  and  r^(2k_0)(t_0) > 0,
    r^(j)(t_i) = 0  for j = 1, …, 2k_i,    and  r^(2k_i+1)(t_i) < 0,  for i = 1, …, n.
With (3.1) we introduced two independent normal r.v. χ_0 = η r'''(t_0) - δ(t_0) and ψ_0 = η(λ_2 r(t_0) + r''(t_0)) - Δ(t_0), where η is normal with mean zero and variance 1/(λ_2β), independent of all the δ(t_j)'s and Δ(t_j)'s. In this section we put

(4.1)    χ_i = η r'''(t_i) - δ(t_i),    ψ_i = η(λ_2 r(t_i) + r''(t_i)) - Δ(t_i),    i = 0, 1, …, n.

Thus (χ, ψ) = (χ_0, χ_1, …, χ_n, ψ_0, ψ_1, …, ψ_n) is (2n+2)-variate normal with mean zero and with the covariances

    Cov(χ_i,χ_j) = r'''(t_i)r'''(t_j)/(λ_2β) + c(t_i,t_j)
                 = -r''(t_i-t_j) - r''(t_i)r''(t_j)/λ_2,

    Cov(χ_i,ψ_j) = r'''(t_i)(λ_2 r(t_j)+r''(t_j))/(λ_2β) + ∂C(s,t)/∂s |_{s=t_i, t=t_j}
                 = r'(t_i-t_j),

    Cov(ψ_i,ψ_j) = (λ_2 r(t_i)+r''(t_i))(λ_2 r(t_j)+r''(t_j))/(λ_2β) + C(t_i,t_j)
                 = r(t_i-t_j) - r(t_i)r(t_j).

It should be observed that χ_i and ψ_i are independent and have the variances λ_2 - r''(t_i)^2/λ_2 and 1 - r(t_i)^2, as before.

If we recall the proof of Theorem 3.1 and try to use the χ_i, ψ_i in a limit theorem, we have to modify the procedure. Since ξ_u'(t) is zero near one of the stationary points t_j only if η_u r'''(t_j) - δ(t_j) is negative, we have actually not normal but conditional normal r.v. We devise the following method to pick up the right time and the right r.v. (χ_j, ψ_j).

Cover the times t_0, t_1, …, t_n by disjoint ε-intervals

    I_j^ε = [t_j-ε, t_j+ε],    j = 0, 1, …, n.
Usually ε is held fixed, and then we suppress it in the notation. Let the indicator variable K* be defined by

    K* = j  if τ_u ∈ I_j^ε, j = 0, 1, …, n,
    K* = 0  if τ_u ∉ ∪_{j=0}^n I_j^ε.

A corresponding indicator variable K for the contemplated limit distribution is defined by

    K = j  if χ_j < 0 and χ_i ≥ 0 for i = 1, 2, …, j-1 (j = 1, …, n),
    K = 0  if χ_i ≥ 0 for i = 1, 2, …, n.

The r.v. K has the distribution

    P(K=j) = p_j = P(χ_j < 0, χ_i ≥ 0 for i = 1, 2, …, j-1),    j = 1, …, n,
    P(K=0) = p_0 = 1 - Σ_{j=1}^n p_j,

where the p_j can be expressed in terms of normal probability integrals. For a reference to normal probability integrals, see Gupta [3].

If we write

    ε_j = 0 if j = 0,    ε_j = 1 if j = 1, 2, …, n,

then 2k_K + ε_K and 2k_{K*} + ε_{K*} are the orders of the first non-vanishing derivatives of r at the randomly selected times t_K and t_{K*} respectively. It is now clear how to observe the r.v.

    u(τ_u - t_{K*})^(2k_{K*}+ε_{K*}-1)    and    δ_u - u(1-r(t_{K*})) = ur(t_{K*}) - ξ_u(τ_u),

since once we have the value of τ_u, we can pick up the t_{K*}-value and raise the difference τ_u - t_{K*} to the appropriate power. Similarly we can observe the r.v. χ_K, by taking the first negative χ_j in the sequence χ_1, …, χ_n or, if they are all positive, taking χ_0, and the corresponding ψ_K.
THEOREM 4.1. If condition C2 is fulfilled with t_0, t_1, …, t_n and k_0, k_1, …, k_n, then, as u → ∞,

a)  P(K* = j) → P(K = j) = p_j,    j = 0, 1, …, n,
b)  (u(τ_u - t_{K*})^(2k_{K*}+ε_{K*}-1), δ_u - u(1-r(t_{K*})))
        →L ((2k_K+ε_K-1)! χ_K / r^(2k_K+ε_K)(t_K), ψ_K).

REMARK. Part a) of the theorem gives the probabilities with which τ_u is near the different t_j's, while part b) says something about the distance between τ_u and the t_j it happens to be near.

PROOF. From Lemma 2.2, it follows that for every ε > 0,

    P(τ_u ∉ ∪_{j=0}^n I_j^ε) → 0 as u → ∞.

To get a comprehensible and short notation, write (cf. (4.1))

    χ_j^u = η_u r'''(t_j) - δ(t_j),    ψ_j^u = η_u(λ_2 r(t_j) + r''(t_j)) - Δ(t_j),

so that

(4.2)    (χ_0^u, …, χ_n^u, ψ_0^u, …, ψ_n^u) →L (χ_0, …, χ_n, ψ_0, …, ψ_n).

Also, for j = 0, 1, …, n, define the events

    A_j:       χ_j < 0, χ_i ≥ 0 for i = 1, 2, …, j-1,
    A_j(x,y):  (2k_j+ε_j-1)! χ_j / r^(2k_j+ε_j)(t_j) < x,  ψ_j < y,
    B_j:       τ_u ∈ I_j^ε,
    B_j(x,y):  u(τ_u - t_j)^(2k_j+ε_j-1) < x,  δ_u - u(1-r(t_j)) < y.

The point in the proof is that we can express the conditions for the events B_j and B_j(x,y) in terms of certain relations for the variables χ_i^u, ψ_i^u (i = 0, 1, …, j), which are very similar to the relations which define the events A_j and A_j(x,y). In what follows, we concentrate upon the case j = 1, …, n. The case j = 0 is quite analogous.

To start with, we derive the following bounds for the random functions

    ur(t_j) - ξ_u(t_j+h) = ur(t_j) - ur(t_j+h) + η_u(λ_2 r(t_j+h) + r''(t_j+h)) - Δ(t_j+h),
    ξ_u'(t_j+h) = ur'(t_j+h) - η_u(λ_2 r'(t_j+h) + r'''(t_j+h)) + δ(t_j+h).

Starting with the non-random terms, we notice that, for any e > 0, there is an ε > 0 such that, for |h| ≤ ε and j = 1, 2, …, n,

(4.3)    M_j^-(h) ≤ ur(t_j) - ur(t_j+h) ≤ M_j^+(h),
         -m_j^+(h) ≤ ur'(t_j+h) ≤ -m_j^-(h),

where

    M_j^±(h) = -(1 ± e sign h) u h^(2k_j+1) r^(2k_j+1)(t_j)/(2k_j+1)!,
    m_j^±(h) = -(1 ± e) u h^(2k_j) r^(2k_j+1)(t_j)/(2k_j)!.
In order to obtain bounds for the random terms, fix a T > t_0+ε and let e' > 0 be arbitrary. Then there is an M such that the event

    N:  |η_u| ≤ M,  sup_{[0,T]} |δ(t)| ≤ M,  sup_{[0,T]} |δ'(t)| ≤ M

has a probability P(N) ≥ 1 - e'. Considering only outcomes in N, we get

(4.4)    η_u(λ_2 r(t_j+h) + r''(t_j+h)) - Δ(t_j+h) = ψ_j^u + h G_j(h),
         η_u(λ_2 r'(t_j+h) + r'''(t_j+h)) - δ(t_j+h) = χ_j^u + h H_j(h),

where, for some intermediate points h', h'' between 0 and h,

    |G_j(h)| = |η_u(λ_2 r'(t_j+h') + r'''(t_j+h')) - δ(t_j+h')|
             ≤ M(sup|λ_2 r'| + sup|r'''| + 1) ≤ M(λ_2^(3/2) + (λ_2λ_4)^(1/2) + 1) ≤ K,
    |H_j(h)| = |η_u(λ_2 r''(t_j+h'') + r^IV(t_j+h'')) - δ'(t_j+h'')|
             ≤ M(sup|λ_2 r''| + sup|r^IV| + 1) ≤ K,

with some K depending on M. Adding (4.3) and (4.4), we obtain, for all outcomes in N, the following estimates, valid for |h| ≤ ε, j = 1, …, n:

(4.5a)   ψ_j^u + M_j^-(h) - |h|K ≤ ur(t_j) - ξ_u(t_j+h) ≤ ψ_j^u + M_j^+(h) + |h|K,
(4.5b)   χ_j^u + m_j^-(h) - |h|K ≤ -ξ_u'(t_j+h) ≤ χ_j^u + m_j^+(h) + |h|K.

As is easily proved by differentiation, the lower bound functions in (4.5) are uniformly bounded from below (remember that r^(2k_j+1)(t_j) < 0), so that there is a constant K' > 0 such that, for all sufficiently large u, all |h| ≤ ε and j = 1, …, n,

(4.6)    m_j^-(h) - |h|K ≥ -K' u^(-1/(2k_j-1)).
Now we can proceed to the announced equivalences. For all outcomes in N, the only interesting case is τ_u ∈ ∪_{i=0}^n I_i^ε, since the probability of the complement tends to zero. If j > 0, the event B_j implies ξ_u'(t_i) ≤ 0, i.e. χ_i^u ≥ 0, for i = 1, 2, …, j-1, and it implies ξ_u'(t) = 0 for some t = t_j+h ∈ I_j^ε, which in turn, together with (4.5b) and (4.6), gives

(4.7)    χ_j^u ≤ |h|K - m_j^-(h) ≤ K' u^(-1/(2k_j-1)).

If x > 0 and the event B_j ∧ B_j(x,y) occurs, then |h| < h_x := (x/u)^(1/2k_j), and the lower bound in (4.5b), evaluated at the zero of ξ_u', gives χ_j^u ≥ -m_j^+(h) - |h|K, that is, after division by r^(2k_j+1)(t_j) < 0,

(4.8)    (2k_j)! χ_j^u / r^(2k_j+1)(t_j) < (1+e)x + (x/u)^(1/2k_j) K''.

The event B_j ∧ B_j(x,y) also implies that ur(t_j) - ξ_u(τ_u) = δ_u - u(1-r(t_j)) < y, and this, together with the lower bound in (4.5a) and a bound of the type (4.6) for M_j^-(h) - |h|K, valid for |h| ≤ h_x, gives

(4.9)    ψ_j^u ≤ y + (x/u)^(1/2k_j) K'''.

We sum up the inequalities (4.7)-(4.9) and obtain

    P(B_j ∧ B_j(x,y)) ≤ P{χ_i^u ≥ 0 for i = 1, …, j-1,
                          (2k_j)! χ_j^u / r^(2k_j+1)(t_j) < (1+e)x + (x/u)^(1/2k_j) K'',
                          ψ_j^u ≤ y + (x/u)^(1/2k_j) K'''} + P(N*),

where N* denotes the complement of N, and this,
letting u → ∞, gives from (4.2)

(4.10)   limsup_{u→∞} P(B_j ∧ B_j(x,y)) ≤ P(A_j ∧ A_j((1+e)x, y)) + e'.

A reverse inequality can be derived in a similar way, again considering only outcomes in N. The relations (4.5b) and (4.6) give that if χ_i^u > K' u^(-1/(2k_i-1)) for i = 1, …, j-1, then ξ_u'(t) < 0 for all t ∈ I_i^ε, i = 1, …, j-1, so that either τ_u ∈ ∪_{i=j}^n I_i^ε ∪ I_0^ε or τ_u ∉ ∪_{i=0}^n I_i^ε. If, furthermore, the lower bound in (4.5b) is positive for h = -h_x = -(x/u)^(1/2k_j), i.e. if

    (2k_j)! χ_j^u / r^(2k_j+1)(t_j) < (1-e)x - (x/u)^(1/2k_j) K''',

then the derivative ξ_u'(t_j+h) is negative at h = -h_x, and if moreover χ_j^u < 0, it is positive at h = 0; so the first upcrossing zero of ξ_u' near t_j must fall in the interval (-h_x, 0), i.e.

    u(τ_u - t_j)^(2k_j) < x.

A bound similar to (4.6) can be obtained for M_j^+(h) + |h|K, to the effect that

    M_j^+(h) + |h|K ≤ h_x K''''  if  |h| ≤ h_x.

Therefore the upper bound in (4.5a) gives that if ψ_j^u < y - h_x K'''', then ur(t_j) - ξ_u(τ_u) < y. Summing the implications, we obtain

    P(B_j ∧ B_j(x,y)) ≥ P{χ_i^u > K' u^(-1/(2k_i-1)) for i = 1, …, j-1, χ_j^u < 0,
                          (2k_j)! χ_j^u / r^(2k_j+1)(t_j) < (1-e)x - (x/u)^(1/2k_j) K''',
                          ψ_j^u < y - (x/u)^(1/2k_j) K''''} - P(N*) - P(τ_u ∉ ∪_{i=0}^n I_i^ε)

and, as u → ∞,

(4.11)   liminf_{u→∞} P(B_j ∧ B_j(x,y)) ≥ P(A_j ∧ A_j((1-e)x, y)) - e'.

Now a little reflection shows that the left hand limits in (4.10) and (4.11) do not depend on ε. The right hand bounds can be made arbitrarily close to each other by first taking small e and e', and then a sufficiently large M and a small ε, for the arguments to go through. Thus, for j = 1, …, n,

    lim_{u→∞} P(B_j ∧ B_j(x,y)) = P(A_j ∧ A_j(x,y)).

Since the same relation can be shown to hold for j = 0, we have proved part b) of the theorem. Part a) follows in an obvious way from the proof of part b).
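The probabilities p_j are orthant-type normal probabilities in the χ_i and can be estimated by Monte Carlo; the sketch below uses a hypothetical two-point configuration (n = 2, with an assumed correlation ρ = 0.3 between χ_1 and χ_2, not derived from any particular covariance) and compares with the classical bivariate normal orthant formula.

```python
import math, random

# Hypothetical two-terrace configuration (n = 2). The correlation rho between
# chi_1 and chi_2 is an assumed value, not derived from any particular covariance.
random.seed(7)
rho = 0.3
n_samples = 200_000

c1 = c2 = 0
for _ in range(n_samples):
    z1, z2 = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    chi1 = z1
    chi2 = rho*z1 + math.sqrt(1 - rho*rho)*z2     # Corr(chi1, chi2) = rho
    if chi1 < 0:
        c1 += 1            # K = 1: chi_1 < 0
    elif chi2 < 0:
        c2 += 1            # K = 2: chi_1 >= 0, chi_2 < 0
p1, p2 = c1/n_samples, c2/n_samples
p0 = 1 - p1 - p2           # K = 0: all chi_i >= 0

# Closed forms via the bivariate normal orthant probability:
#   P(K=1) = 1/2,  P(K=2) = 1/4 - arcsin(rho)/(2*pi).
print(p0, p1, p2)
```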
REFERENCES

[1] Geman, D.J.: Horizontal-window conditioning and the zeros of stationary processes. Doctoral dissertation, Northwestern University, August 1970.

[2] Grenander, U.: Stochastic processes and statistical inference. Arkiv för Matematik 1, 195-277 (1950).

[3] Gupta, S.S.: Probability integrals of multivariate normal and multivariate t. Ann. Math. Statistics 34, 792-828 (1963).

[4] Kac, M., Slepian, D.: Large excursions of Gaussian processes. Ann. Math. Statistics 30, 1215-1228 (1959).

[5] Leadbetter, M.R., Weissner, E.W.: On continuity and other analytic properties of stochastic process sample functions. Proc. Amer. Math. Soc. 22, 291-294 (1969).

[6] Lindgren, G.: Some properties of a normal process near a local maximum. Ann. Math. Statistics 41, 1870-1883 (1970).

[7] Lindgren, G.: Extreme values of stationary normal processes. Z. Wahrscheinlichkeitstheorie verw. Geb. 17, 39-47 (1971).

[8] Lindgren, G.: Wave-length and amplitude in Gaussian noise. Technical Report 1970:2, Dept. of Math. Stat., Univ. of Lund, Sweden. To appear in Advances in Applied Probability.

[9] Maruyama, G.: The harmonic analysis of stationary stochastic processes. Mem. Fac. Sci. Kyusyu Univ. A4, 45-106 (1949).