Lacey, Michael T. (1988). "Locating the Maximum of Some Stationary Gaussian Sequences."

LOCATING THE MAXIMUM OF SOME STATIONARY GAUSSIAN SEQUENCES

by

Michael T. Lacey
University of North Carolina, Chapel Hill

ABSTRACT
Let $X_j$, $j \in \mathbb{N}$, be a stationary Gaussian sequence with covariance $r_j$ satisfying $r_0 = 1$ and $r_j \to 0$ as $j \to +\infty$. Let $j_n^*$ be the location of the maximum of $X_j$ on $\{1,2,\ldots,n\}$. We consider the problem of locating $j_n^*$ by random sequential observation of $X_j$. If $j^a r_j \to 0$ as $j \to +\infty$, then, as $n \to \infty$, the number of observations that is both necessary and sufficient is essentially determined. This generalizes a result of B. Hajek.

1. Introduction
Let $X_j$, $j \in \mathbb{N}$, be a stationary Gaussian sequence, with covariance function $r_j = E X_i X_{i+j}$, standardized so that $r_0 = 1$. We consider the problem of locating the maximum of $X_j$ on $\{1,2,\ldots,n\}$ by a minimal number of random observations. More precisely, let

(1.1)   $X_n^* = \max_{1 \le j \le n} X_j$,

and let $j_n^*$ be the location of the maximum on $\{1,2,\ldots,n\}$. A sequential search of length $v$ in $\{1,2,\ldots,n\}$ is a sequence of integers $1 \le k_1, k_2, \ldots, k_v \le n$, so that for all $i$, $k_{i+1}$ is a function of $X_{k_1}, \ldots, X_{k_i}$. Notice that $k_2, \ldots, k_v$ are random. Let

(1.2)   $s(n,v) = \max P(j_n^* \in \{k_1, \ldots, k_v\})$,

the maximum being taken over all sequential searches of length $v$ in $\{1,2,\ldots,n\}$. Thus $s(n,v)$ is the largest success probability one can have for locating $j_n^*$ with $v$ observations.
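To make (1.2) concrete, the following Monte Carlo sketch scores one fixed, non-optimal two-phase rule (a coarse grid, then a block around the best grid site); $s(n,v)$ is the supremum of this success probability over all adaptive rules. The covariance $r_j = 0.5^j$ and all tuning choices here are assumptions made only for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

def estimate_success(n=200, v=40, trials=2000, seed=0):
    """Monte Carlo estimate of P(j*_n is among the ~v observed sites)
    for one simple sequential search rule (a sketch, not the optimum)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(toeplitz(0.5 ** np.arange(n)))  # r_j = 0.5**j
    hits = 0
    for _ in range(trials):
        x = L @ rng.standard_normal(n)        # one stationary Gaussian path
        jstar = int(np.argmax(x))             # location of the true maximum
        # Phase 1: a coarse grid of about v/2 sites.
        grid = list(range(0, n, max(1, n // (v // 2))))[: v // 2]
        # Phase 2 (adaptive): spend the rest of the budget on a block
        # around the best grid site -- so k_{i+1} depends on past values.
        best = max(grid, key=lambda j: x[j])
        half = (v - len(grid)) // 2
        block = range(max(0, best - half), min(n, best + half + 1))
        observed = set(grid) | set(block)
        hits += jstar in observed
    return hits / trials

print(estimate_success())   # a lower bound on s(200, ~40) for this covariance
```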
One of our main results is as follows. For $a > 1$, $\sigma > 0$, and $|\delta| < \tfrac12$, let

$t(n,\delta) = \big( \sigma\, (2 \log n)^{(1+\delta)/2} \big)^{1/a}$.

(1.3) Corollary. Assume that for some $a > 1$ and $\sigma > 0$, $n^a r_n \to \sigma$ as $n \to +\infty$. Then for all $0 < p < 1$ and $0 < \delta < \tfrac12$ the following holds. For

$v(\delta) = v(n,p,\delta) = \big[ (1+\delta)\, p\, n\, /\, t(n,-\delta) \big]$,

(1.4)   $\liminf_{n \to \infty} s(n, v(\delta)) \ge p$,

while

(1.5)   $\limsup_{n \to \infty} s(n, v(-\delta)) \le p$.

That is, to locate $j_n^*$ with success probability $p$, essentially $p n / t(n,0)$ observations are (almost) necessary and sufficient. We learned of this problem from [H], who motivated his paper with the practical problem of maximizing an objective function about which little is known and which is costly to evaluate. Such functions arise in geological exploration, for example. He proved the theorem above when $r_j = a^j$, $0 < a < 1$, in which case the corresponding factor is $t(n,0) \approx -\log \log n / (2 \log a)$. Also observe that for $r_j = j^{-1}$ we have $t(n,0) \sim (2 \log n)^{1/2}$.

Note that if the $X$'s are independent, $[pn]$ observations are both necessary and sufficient to locate $j_n^*$ with success probability $p$. Thus the fraction $t(n,\cdot)^{-1}$ in the corollary represents the percentage of observations required, i.e., the savings one realizes due to the dependence structure of $X_j$. The two examples above show that this percentage goes to zero as $n \to +\infty$, but at a rate too slow to be of any practical use.
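For a sense of scale, the factor is easy to evaluate numerically. The sketch below (taking $a = \tfrac12$ in the first example) counts the lags $j$ with $r_j$ above the level $(2 \log n)^{-1/2}$, the order of $M_n^{-1}$, which matches the displayed factors up to constants.

```python
import math

for n in (10**3, 10**6, 10**12):
    level = (2 * math.log(n)) ** -0.5          # the scale of M_n**-1
    t_exp = math.log(level) / math.log(0.5)    # r_j = 0.5**j: j < log(level)/log(a)
    t_poly = 1.0 / level                       # r_j = 1/j:    j < 1/level
    print(f"n = 1e{round(math.log10(n))}: t ~ {t_exp:.1f} (r_j = 0.5^j), "
          f"t ~ {t_poly:.1f} (r_j = 1/j)")
```

Even at $n = 10^{12}$ the savings factor is below ten in both examples.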
In view of this difficulty, one might wonder if the problem posed above is too hard. That is, is it significantly easier to locate a large, but not the largest, value of $X$? In this regard we make the following conjecture, stated, for brevity's sake, in less than precise language. For $0 < q < 1$ and $0 < p < 1$, $O(n^q)$ observations are sufficient in order to locate $k^* \in \{1,2,\ldots,n\}$ so that $X_{k^*} \ge \sqrt{q}\, X_n^*$, with probability tending to $p$ as $n \to +\infty$. In particular, to locate a point which exceeds 70% of the true maximum (take $\sqrt{q} = 0.7$, so $q = 0.49$), $O(n^{0.49})$ observations suffice. A plausible argument for this would proceed this way. In a very strong sense ([LLR]),

(1.6)   $X_n^* / (2 \log n)^{1/2} \to 1$,  $n \to +\infty$;

hence $X_{n^q}^* \sim (2 q \log n)^{1/2} \sim \sqrt{q}\, X_n^*$ as $n \to +\infty$. Thus to locate $k^*$ as above, it suffices to locate $j^*_{n^q}$. One then applies Corollary 1.3; but note the reduction of this "easier" problem to the original one.
We have in fact proved (1.3) for covariances which need not be monotone; this generality requires the notions of covering and packing numbers. Their appearance is not surprising: they are implicit in [H], and are well known, in some sense, to control stationary Gaussian processes. See e.g. [JM]. The positive, or sufficient, half is taken up in Section 2. The proof therein holds for covariances with even a logarithmic decay to zero; see the end of Section 2. The proof of the negative, or necessary, half is in Section 3. The strategy of the proofs below is due to Hajek. Our innovations are these: the aforementioned covering and packing numbers; carrying out the necessary half in a non-Markovian setting (compare the proof of Lemma 2 of [H] to Lemma (3.10) below); and the use of Borell's inequality ([B]; (2.7) below), which should be of use in studying versions of this problem for which (1.6) does not hold.
2. Sufficiency

We define the notion of covering numbers. Let

(2.1)   $B(i,u) = \{ j \in \mathbb{Z} : r_{i-j} > u \}$.

Observe that the sets $B(\cdot,u)$ are translation invariant, and are balls of radius $(2(1-u))^{1/2}$ in the metric

$d(i,j) = \big( E (X_i - X_j)^2 \big)^{1/2}$.

Let

$N_n(u)$ = least number of translates of $B(\cdot,u)$ needed to cover $\{1,2,\ldots,n\}$,

and let $M_n$ be the median of $X_n^*$: $P(X_n^* > M_n) = \tfrac12$.

Theorem. Fix $0 < p, \delta < 1$. Assume that for all $a > 0$,

(2.2)   $(\log n)^a r_n \to 0$,  $n \to +\infty$,

and let

$v = v(n,p,\delta) = \big[ (1+\delta)\, N_{[pn]}(M_n^{-1+\delta}) \big]$.

Then

(2.3)   $\liminf_{n \to \infty} s(n,v) \ge p$.
Note that (2.2) is satisfied if $r_n \sim n^{-a}$ for any $a > 0$, or, more generally, if $r_n \sim \exp(-(\log n)^a)$ for any $a > 0$. In fact the proof given below handles covariances which decay more slowly than those just given; see the remarks at the end of this section.
Further note that (2.2) implies that $r_n = o((\log n)^{-1})$. Thus we have the following (Theorem 4.3.3, [LLR]): with

(2.4)   $a_n = (2 \log n)^{1/2}$ and $b_n = (2 \log n)^{1/2} - \tfrac12 (2 \log n)^{-1/2} (\log \log n + \log 4\pi)$,

we have, under (2.2),

(2.5)   $n\, \bar\Phi(M_n) \to \log 2$ and $M_n = b_n + O(a_n^{-1})$,  $n \to +\infty$,

where

(2.6)   $\bar\Phi(u) = \frac{1}{\sqrt{2\pi}} \int_u^{+\infty} e^{-x^2/2}\, dx$.

Using the fact that $M_n \sim (2 \log n)^{1/2}$, (1.4) follows from the Theorem by an easy calculation.
For the proof, we recall Borell's inequality, [B], in a weak form convenient for our needs.

(2.7) Theorem. Let $Z_t$, $t \in T$, be a mean-zero Gaussian process with median $M$ in sup norm: $P(\sup_{t \in T} Z_t > M) = \tfrac12$. Then for all $u > 0$,

$P\big( |\sup_{t \in T} Z_t - M| > u \big) \le 2\, \bar\Phi(u/\sigma)$,  where $\sigma^2 = \sup_{t \in T} E Z_t^2$.
Proof of the Theorem. We prove this result by exhibiting a strategy with the desired properties. First, however, we reduce the result to the case $p = 1$. Assume the Theorem for $p = 1$; that is,

(2.8)   $s(n, v(n,1,\delta)) \to 1$,  $n \to +\infty$, for all $0 < \delta < \tfrac12$.

Now consider $p < 1$. As the law of $n^{-1} j_n^*$ is asymptotically uniform on $[0,1]$ (Chapter 5, [LLR]), it suffices to locate $j^*_{[pn]}$ by a sequential strategy of length $v(n,p,\delta)$, with probability tending to 1 as $n \to +\infty$. But note that, for large $n$,

$v([pn], 1, \delta) = \big[ (1+\delta) N_{[pn]}(M_{[pn]}^{-1+\delta}) \big] \le \big[ (1+\delta) N_{[pn]}(M_n^{-1+2\delta}) \big] = v(n, p, 2\delta)$.

Therefore, by (2.8), the Theorem for all $p$ follows.
For the proof below, it is convenient to extend $X_j$ to a stationary sequence indexed by $\mathbb{Z}$. Then let

$B_n = B(0, M_n^{-1+\delta})$;   $h_n = \# B_n$;

$I_n$ = set of centers of a minimal covering of $\{1,2,\ldots,n\}$ by translates of $B_n$;

and

$i_n = \# I_n = N_n(M_n^{-1+\delta})$.

Our strategy is then:

1) Observe $X$ at each $j \in I_n$. Order these observations by height; that is, $I_n = \{j_1, j_2, \ldots\}$ where $X_{j_1} > X_{j_2} > X_{j_3} > \cdots$.

2) Completely search $j_1 + B_n$, $j_2 + B_n, \ldots$ until a total of $[(1+\delta) i_n]$ observations have been made. Take as the estimate of $j_n^*$ the location of the largest value observed.
S
n
= {j
s
= #S
n
I
€
n
n
~
: j + B is searched};
n
6 i /b •
n
n
and
A
n
= median
of b
n standard independent Gaussian r.v.
Observe that by (2.5).
(2.9)
That
bn "'(A)
-+ log 2.
n
n -+ +co.
Referring to the strategy above, we have

$P(j_n^* \text{ not located}) = P(j_n^* \in j + B_n \text{ for some } j \in I_n \setminus S_n)$
$\quad \le P(X_n^* < M_n - \lambda_n)$
$\quad\quad + P(X_j > 2\lambda_n \text{ for some } j \in I_n \setminus S_n)$
$\quad\quad + P(X_j < 2\lambda_n \text{ and } X_k > M_n - \lambda_n \text{ for some } j \in I_n \setminus S_n \text{ and } k \in j + B_n)$
$\quad = w_1 + w_2 + w_3$, say.

It is enough to show that each $w_i \to 0$ as $n \to +\infty$. We do this for each term in order.

As $n \to +\infty$, $h_n \to +\infty$, hence $\lambda_n \to +\infty$. Thus by Borell's inequality (2.7), $w_1 \to 0$.

For the second term, use (2.9) and Chebyshev's inequality: since the balls are searched in decreasing order of the center values, if some unsearched $j \in I_n$ satisfies $X_j > 2\lambda_n$, then at least $s_n$ centers exceed $2\lambda_n$, so

$w_2 \le P\big( \# \{ j \in I_n : X_j > 2\lambda_n \} \ge s_n \big) \le s_n^{-1}\, E \sum_{j \in I_n} 1\{X_j > 2\lambda_n\} \le s_n^{-1}\, i_n\, \bar\Phi(2\lambda_n) \le \big( \delta^{-1} \log 2 + o(1) \big)\, \bar\Phi(2\lambda_n) / \bar\Phi(\lambda_n) \to 0$.

For the third term,
(2.10)   $w_3 \le i_n\, P\big( X_0 < 2\lambda_n \text{ and } \sup_{j \in B_n} X_j > M_n - \lambda_n \big) \le i_n \sup_{x < 2\lambda_n} P\big( \sup_{j \in B_n} X_j > M_n - \lambda_n \mid X_0 = x \big)$.

Given $X_0 = x$ with $x < 2\lambda_n$, and $j \in B_n$, $X_j$ is Gaussian with mean $m_j = x\, r_j < 2\lambda_n$ and variance $1 - r_j^2 \le 1 - M_n^{-1+\delta}$. Moreover, under this conditioning the median of $\sup_{j \in B_n} X_j$ is dominated by $4\lambda_n$ for large $n$. Under (2.2) we have $h_n = O(\exp((\log n)^{\delta/2}))$, so that $\lambda_n = O((\log n)^{\delta/4})$. Therefore, again using Borell's inequality (2.7), and continuing from (2.10),

(2.11)   $w_3 \le i_n\, \bar\Phi\Big( \frac{M_n - 5\lambda_n}{(1 - M_n^{-1+\delta})^{1/2}} \Big) \le i_n\, \bar\Phi(M_n) \exp\Big( - \frac{M_n^{1+\delta}}{2} + 10\, M_n \lambda_n \Big) \to 0$,  $n \to +\infty$,

since $i_n \bar\Phi(M_n) \le n \bar\Phi(M_n)$ is bounded by (2.5), and $M_n \lambda_n = O((\log n)^{1/2+\delta/4}) = o(M_n^{1+\delta})$. This finishes the proof. ∎
We remark that the theorem above holds in the case where

$(\log n)^a r_n \to \sigma$,  $n \to +\infty$,  for some $a > 1$ and $\sigma > 0$.

For then (2.5) continues to hold, and the proof just given only needs to be modified in (2.11), by using the better estimate

$i_n \le n \exp\big( - ( \sigma (2 \log n)^{(1-\delta)/2} )^{1/a} \big)$.

This proof fails for $0 < a < 1$, for then (2.5) fails. See Chapter 6, [LLR].
3. Necessity

Recall the definition (2.1), and define packing numbers by

$P(n,u)$ = greatest number of disjoint balls $B(\cdot,u)$ that can be placed in $\{1,2,\ldots,n\}$.

We prove:

(3.1) Theorem. Assume that for some $a > 1$, $n^a r_n \to 0$ as $n \to +\infty$. Fix $0 < \delta, p < 1$ and let

(3.2)   $v = v(n,p,\delta) = P\big( [(1-\delta)pn],\ (2 \log n)^{-1/2-\delta} \big) + P\big( [pn],\ (2 \log n)^{-1-\delta} \big)$.

Then $\limsup_{n \to \infty} s(n,v) \le p$.

(1.5) follows from this by a routine calculation. Concerning the relationship between packing and covering numbers, we have $P(n,u) \le N_n(u)$. But for the range of $u$ we are considering, $P(n,u)$ and $N_n(u)$ need not even be of the same order; their values could also be quite difficult to compute.

Throughout the proof below, $C$ denotes a positive numerical constant that may change from line to line.
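Since the packing numbers are awkward to compute exactly, a greedy scan at least produces a maximal packing, and hence a lower bound on $P(n,u)$, for an arbitrary (possibly non-monotone) covariance. The sketch below takes the covariance as a callable; the example covariance is an assumption made only for illustration.

```python
def packing_lower_bound(n, u, r):
    """Greedy maximal packing of {1,...,n} by disjoint balls B(i,u),
    where B(i,u) = {j in 1..n : r(|i-j|) > u}. Returns a lower bound
    on P(n,u); it is exact when the balls are intervals."""
    chosen = []
    for i in range(1, n + 1):
        # B(i,u) and B(c,u) intersect iff some j lies in both balls.
        disjoint = all(
            not any(r(abs(j - i)) > u and r(abs(j - c)) > u
                    for j in range(1, n + 1))
            for c in chosen)
        if disjoint:
            chosen.append(i)
    return len(chosen)

# Example: r_j = 1/j at level u = 0.2 on {1,...,200}.
print(packing_lower_bound(200, 0.2, lambda j: 1.0 if j == 0 else 1.0 / j))
```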
Proof. Let

$\alpha_n = (2 \log n)^{\delta/2}$,  $\rho_n = (2 \log n)^{1/2+\delta}$,  $\beta_n = (2 \log n)^{1+\delta}$,  and  $\gamma_n = \alpha_n \rho_n = (2 \log n)^{1/2+3\delta/2}$.

Let $O = O_n$ be the (random) set of sites observed by a strategy of length (3.2), and let

(3.3)   $M = \{ i \in O_n : |X_i| > \alpha_n \}$,  $N = O_n \setminus M$,

(3.4)   $B_i = B(i, \rho_n^{-1})$ for $i \in M$;  $B_i = B(i, \beta_n^{-1})$ for $i \in N$;  and  $U = U_n = \bigcup_{i \in O_n} B_i$.

We shall prove that

(3.5)   $\limsup_n P(j_n^* \in U_n) \le p$.
This allows us to assume, without loss of generality, that for distinct $i, j \in O_n$,

(3.6)   $r_{i-j} \le \rho_n^{-1}$;

for if $r_{i-j} > \rho_n^{-1}$, then $j \in B_i$ already, and the observation at $j$ can be moved without decreasing $P(j_n^* \in U_n)$. Moreover, writing $O_n = \{i_1 < i_2 < \cdots\}$, (3.6) and the hypothesis $n^a r_n \to 0$ give

(3.7)   $r(i_u - i_v) \le C\, \rho_n^{-1} |u - v|^{-a}$.

Let $\theta$ denote all information obtained from $O_n$; that is, $\theta = \{ O_n;\ X_i,\ i \in O_n \}$.
Let

$A = \big\{ \theta : \max_{1 \le i \le n} |X_i| \le (2 \log n)^{1/2} \ \text{and}\ \# M \le \bar p_n \big\}$,  where $\bar p_n = P([p \delta n], \beta_n^{-1})$.

We prove that

(3.8)   $P(A) \to 1$,  $n \to +\infty$,

and, for $G = G_n = \{1,2,\ldots,n\} \setminus U_n$,

(3.9)   $\theta \in A \implies \# G \ge [(1-p)n]$.

This last claim is immediate from the definitions of $A$ and $U$, and the following inequality:

$P(n,u) \le n\, (\# B(\cdot,u))^{-1}$.

(Note that $O_n$, and hence $G_n$, are random sets of integers.)

(3.8) is seen in two steps. First,

$P\big( \max_{1 \le i \le n} |X_i| > (2 \log n)^{1/2} \big) \le 2 n\, \bar\Phi\big( (2 \log n)^{1/2} \big) \to 0$,  $n \to +\infty$;

$\bar\Phi(\cdot)$ is defined in (2.6) above. Second,

$P\Big( \sum_{i=1}^{n} 1\{ |X_i| > \alpha_n \} \ge \bar p_n \Big) \le \bar p_n^{-1}\, 2 n\, \bar\Phi(\alpha_n) \le C\, (2 \log n)^{(1+\delta)/a}\, \bar\Phi(\alpha_n) \to 0$,  $n \to +\infty$.

The last inequality holds for large $n$, and follows from the observation that

$B(0, \beta_n^{-1}) \subset \{ j : |j| < (2 \log n)^{(1+\delta)/a} \}$

for large $n$, so that $\bar p_n \ge C^{-1} n (2 \log n)^{-(1+\delta)/a}$. These last two calculations finish the proof of (3.8).
A given outcome $\theta$ induces a Gaussian conditional probability $P_\theta$ on $(X_u,\ u \notin O)$. Denoting expectation with respect to $P_\theta$ by $E_\theta$, let $m_u = E_\theta X_u$, and, for $u, v \notin O$, let

$c_{uv} = E_\theta (X_u - m_u)(X_v - m_v)$;

these are the conditional mean and covariance, respectively.

(3.10) Lemma. For $\theta \in A$, $u \in G$, and $v \notin O$,

(3.11)   $|m_u| \le C (\log n)^{-\delta/2}$,

(3.12)   $c_{uu} \ge 1 - C \gamma_n^{-1}$,

and

(3.13)   $|c_{uv} - r_{u-v}| \le C\, \big( \rho_n^{-1} \wedge |u-v|^{-a} \big)$.
We observe here a corollary to (the proof of) this lemma. The only facts about $O$ used in the proof are (3.6) and (3.7). Hence, for $i \in O$ and $u \in B_i$ (recall (3.4)), let

$\tilde m_u = r_{i-u}\, X_i$  and  $\tilde c_{uu} = 1 - r_{i-u}^2$,

which are, respectively, the mean and variance of $X_u$ given $X_i$ alone. Then by (3.11) and (3.12),

(3.14)   $|m_u - \tilde m_u|,\ |c_{uu} - \tilde c_{uu}| \le C (\log n)^{-\delta/2}$.
Proof. Fix $u$ and $v$ as in the lemma. Let $o = \# O$, and write the covariance matrix of the Gaussian vector $\big( (X_u, X_v), (X_i,\ i \in O) \big)$ as

$\begin{pmatrix} A & \Lambda \\ \Lambda^t & \Gamma \end{pmatrix}$,  where  $A = \begin{pmatrix} 1 & r_{u-v} \\ r_{u-v} & 1 \end{pmatrix}$;

$\Lambda = (\Lambda_{wi})$, $w \in \{u,v\}$, $i \in O$, with $\Lambda_{wi} = r_{w-i}$, is a $2 \times o$ matrix; and $\Gamma = (r_{i-j})_{i,j \in O}$ is an $o \times o$ matrix. Given $(X_i,\ i \in O)$, $(X_u, X_v)$ is a bivariate Gaussian vector with mean vector

(3.15)   $\Lambda\, \Gamma^{-1} y$,  where $y = (X_i : i \in O)$,

and covariance matrix

(3.16)   $A - \Lambda\, \Gamma^{-1} \Lambda^t$.

(We shall see in a moment that $\Gamma^{-1}$ exists.) Notice that $\Gamma_{ii} = 1$, and, by (3.7), $|\Gamma_{ij}| \le C \rho_n^{-1} |i-j|^{-a}$. That is, $\Gamma$ is nearly the identity, but standard techniques for estimating the entries of $\Gamma^{-1}$ directly fail here. The technique we employ is to define various norms on $\mathbb{R}^o$ and to estimate the norm of $\Gamma^{-1}$ as an operator; this gives upper bounds on the quantities in (3.15) and (3.16).

Recalling (3.3), for $y \in \mathbb{R}^o$ let

$\|y\| = \big( \max_{i \in N} |y_i| \big) \vee \big( \alpha_n \max_{i \in M} |y_i| \big)$,

and let $\ell = (\mathbb{R}^o, \|\cdot\|)$. Observe that $\ell^*$, the dual of $\ell$, has norm $\|z\|_* = \sum_{i \in N} |z_i| + \alpha_n^{-1} \sum_{i \in M} |z_i|$.

Consider the $i$th coordinate of $z = \Gamma x$. Since $|x_j| \le \|x\|$ for $j \in N$ and $|x_j| \le \alpha_n^{-1} \|x\|$ for $j \in M$, (3.7) gives

$|z_i| \ge |x_i| - \sum_{j \in M} |x_j|\, |r_{i-j}| - \sum_{j \in N} |x_j|\, |r_{i-j}| \ge |x_i| - C \rho_n^{-1} \|x\| \sum_{j \ge 1} j^{-a} \ge |x_i| - C \rho_n^{-1} \|x\|$,

and the same bound holds for $\alpha_n |z_i|$, $i \in M$, with $\rho_n^{-1}$ replaced by $\alpha_n \rho_n^{-1}$. Thus $\|\Gamma x\| \ge (1 - C \alpha_n \rho_n^{-1}) \|x\|$, and, for large $n$, the image of the unit ball of $\ell$ under $\Gamma$ is an open set, so $\Gamma^{-1}$ exists; moreover $\|\Gamma x\| \ge \tfrac12 \|x\|$. This, and the fact that $\Gamma$ is self-adjoint, imply that

(3.17)   $\|\Gamma^{-1}\|_{\ell \to \ell} \le 2$  and  $\|\Gamma^{-1}\|_{\ell^* \to \ell^*} \le 2$.

Let $\Lambda_u = (\Lambda_{ui},\ i \in O) = (r_{u-i},\ i \in O)$. As $u \in G$,

(3.18)   $\|\Lambda_u\|_* \le C \gamma_n^{-1}$.

The proof of (3.11) is then easy. As $\theta \in A$, $y = (X_i : i \in O)$ satisfies $\|y\| \le (2 \log n)^{1/2+\delta/2}$. Thus, by (3.15), (3.17) and (3.18),

$|m_u| = |(\Lambda_u, \Gamma^{-1} y)| \le C \gamma_n^{-1} (2 \log n)^{1/2+\delta/2} \le C (2 \log n)^{-\delta/2}$.
The proof of (3.12) is similar. By (3.16), (3.17) and (3.18),

$|1 - c_{uu}| \le |(\Lambda_u, \Gamma^{-1} \Lambda_u)| \le C \gamma_n^{-1}$.

Likewise the proof of the first half of (3.13): observe that $\|\Lambda_v\| \le \alpha_n$, hence

$|c_{uv} - r_{u-v}| = |(\Lambda_u, \Gamma^{-1} \Lambda_v)| \le C \gamma_n^{-1} \alpha_n = C \rho_n^{-1}$.
Proving the second half of (3.13) requires a new norm. For $v \notin O$ and $y \in \mathbb{R}^o$, set

$\|y\|_v = \sup_{j \in O} |j - v|^a\, |y_j|$.

Then, for $z = \Gamma y$, consider the $i$th coordinate of $z$, where without loss of generality we assume that $i < v$. Then

(3.19)   $|z_i| \ge |y_i| - \Big[ \sum_{\substack{j \in O \\ j < i}} + \sum_{\substack{j \in O \\ i < j < v}} + \sum_{\substack{j \in O \\ v < j}} \Big] |r_{i-j}|\, |j - v|^{-a}\, \|y\|_v = |y_i| - I - II - III$.

As $n^a r_n \to 0$ and (3.6) holds,

(3.20)   $I,\ III \le C\, \|y\|_v\, \rho_n^{-1} |i - v|^{-a}$.

Concerning II, a similar argument shows that

$\sum_{\substack{j \in O \\ i < j < (i+v)/2}} |r_{i-j}|\, |j - v|^{-a} \le C\, \|y\|_v\, \rho_n^{-1} |i - v|^{-a}$,

and likewise for the sum over $(i+v)/2 \le j < v$. Thus (3.20) holds for II as well. Denoting $(\mathbb{R}^o, \|\cdot\|_v)$ by $\ell_v$, (3.19) and (3.20) show that for large $n$,

(3.21)   $\|\Gamma^{-1}\|_{\ell_v \to \ell_v} \le 2$.

To see (3.13), note that $\|\Lambda_v\|_v \le C$. (For the definition of $\Lambda_v$, see (3.18).) Thus, by (3.16) and (3.21), it is enough to estimate

$\sum_{i \in O} |r_{u-i}|\, |i - v|^{-a}$.

But this last sum can be dominated in a manner very close to (3.19) and (3.20). This concludes the proof of the lemma. ∎
Recall the definitions (2.4), and, given $x \in \mathbb{R}$, set $x_n = b_n + a_n^{-1} x$. Then (Theorem 4.3.3, [LLR])

(3.22)   $P(X_n^* \le x_n) \to e^{-e^{-x}}$,  $x \in \mathbb{R}$.

For an arbitrary $F \subset \{1,2,\ldots,n\}$, let $F^* = \max_{i \in F} X_i$.

(3.23) Lemma. The following limit holds uniformly over $\theta \in A$ and $y$ in compact subsets of $\mathbb{R}$:

$\limsup_n P_\theta(G_n^* \le y_n) \le \exp\big( -(1-p)\, e^{-y} \big)$.

Proof. By (3.9), $g(n) = \# G_n \ge [(1-p)n]$. A routine calculation then shows that the lemma follows from this result: uniformly in $y \in \mathbb{R}$ and $\theta \in A$,

$P_\theta\big( G_n^* \le y_{g(n)} \big) \to e^{-e^{-y}}$,  $n \to +\infty$.

This last fact can be seen as an immediate corollary to (3.10) and (the obvious triangular array version of) Theorem 6.2.1, [LLR]. (It can also be seen by a direct computation involving the normal comparison lemma; see below.) ∎
(3.8), (3.22) and the last lemma can be used to prove that $\limsup_n P(j_n^* \in O_n) \le p$ (see [H]), but not (3.5), which we need to complete the proof. The next lemma provides the last ingredient.
(3.24) Lemma. For all $x \in \mathbb{R}$, uniformly in $\theta \in A$,

(3.25)   $\sup_{u,v \ge x} \big| P_\theta(U_n^* \le u_n,\ G_n^* \le v_n) - P_\theta(U_n^* \le u_n)\, P_\theta(G_n^* \le v_n) \big| \to 0$,  $n \to +\infty$.

Furthermore, uniformly in $\theta \in A$ and $x$ in compact subsets of $\mathbb{R}$,

(3.26)   $\limsup_n P_\theta(U_n^* > x_n) \le 1 - e^{-p e^{-x}}$.

To complete the proof of the Theorem, fix $\epsilon > 0$, and choose $x$ so large that for large $n$,

$P(A^c),\ P(U_n^* > x_n),\ P\big( G_n^* < (-x)_n \big) < \epsilon$.

This is possible by (3.8), (3.23) and (3.26). Then, by (3.24) and (3.25), for large $n$,

$P(j_n^* \in U_n) = P(U_n^* > G_n^*) \le 3\epsilon + \int_{-x}^{+\infty} P_\theta(U_n^* > y_n)\, P\big( G_n^* \in (dy)_n,\ \theta \in A \big) \le 4\epsilon + \int_{-x}^{+\infty} \big( 1 - e^{-p e^{-y}} \big)\, P\big( G_n^* \in (dy)_n,\ \theta \in A \big)$.

Now, integrating by parts and using (3.23), for large $n$ the last integral is less than

$\epsilon + \int_{-\infty}^{+\infty} e^{-(1-p) e^{-y}}\, d\big( e^{-p e^{-y}} \big) = \epsilon + p$.

As $\epsilon > 0$ was arbitrary, this proves (3.5).
Proof of (3.24). (3.26) follows from (3.22), (3.23) and (3.25): for arbitrary $\epsilon > 0$ and large $n$,

$e^{-e^{-x}} \le \epsilon + P(X_n^* \le x_n) \le 2\epsilon + P(X_n^* \le x_n,\ \theta \in A) \le 2\epsilon + P(U_n^* \le x_n,\ G_n^* \le x_n \mid \theta \in A) \le 3\epsilon + P(U_n^* \le x_n \mid \theta \in A)\, P(G_n^* \le x_n \mid \theta \in A)$,

using (3.22), (3.8), and (3.25) in turn. (3.23) then implies (3.26).
It remains to prove (3.25). To do this, given $\theta \in A$, define Gaussian random variables $Y_1^\theta, \ldots, Y_n^\theta$ so that

$E\, Y_i^\theta = m_i$,  $1 \le i \le n$,  and

$E\, (Y_i^\theta - m_i)(Y_j^\theta - m_j) = c_{ij}$ for $(i,j) \in (U \times U) \cup (G \times G)$;  $= 0$ for $(i,j) \in (U \times G) \cup (G \times U)$.

Here $m_i$ and $c_{ij}$ are defined just before Lemma (3.10). Observe that

$P_\theta(U_n^* \le u_n)\, P_\theta(G_n^* \le v_n) = P\big( \max_{i \in U} Y_i^\theta \le u_n;\ \max_{i \in G} Y_i^\theta \le v_n \big)$;

hence (3.25) can be shown by an application of the normal comparison lemma. Let

$\rho_{ij} = c_{ij} / (c_{ii}\, c_{jj})^{1/2}$,

which is the normalized covariance of $Y_i^\theta$ and $Y_j^\theta$. Further, let $u_{nj} = c_{jj}^{-1/2} (u_n - m_j)$, and likewise for $v_{nk}$. Then, by Theorem 4.2.1, [LLR], it is enough to show that for all $x \in \mathbb{R}$,

(3.27)   $\sup_{u,v \ge x}\ \sum_{j \in U} \sum_{\substack{k \in G \\ j < k}} |\rho_{jk}| \exp\Big( - \frac{u_{nj}^2 + v_{nk}^2}{2(1 + |\rho_{jk}|)} \Big) \to 0$,  $n \to +\infty$.
Let $\epsilon_1 > 0$ be a number to be fixed later. Then, for $i \in O$, let

$E_i = \{ j : r_{i-j} > \epsilon_1 \}$.

We first show that

(3.28)   $\sup_{u,v \ge x}\ \sum_{i \in O} \sum_{j \in E_i} \sum_{\substack{k \in G \\ j < k}} |\rho_{jk}| \exp\Big( - \frac{u_{nj}^2 + v_{nk}^2}{2(1 + |\rho_{jk}|)} \Big) \to 0$,  $n \to +\infty$.

We have that

(3.29)   $\sup_{j \ge 1} r_j \le \eta < 1$.

This and (3.14) show that for large $n$, $j \in E_i$, $i \in O$,

$1 - \eta^2 - o(1) \le c_{jj} \le 1 - \epsilon_1^2 + o(1)$.

Here we take $0 < \epsilon_1 < \eta$, so that the inequalities above are meaningful. Also, by the definition of $E_i$ and of $G$, and (3.13), $c_{jk} \le C \rho_n^{-1}$ for $(j,k)$ as in (3.28). These facts and (3.12) imply that

(3.30)   $|\rho_{jk}| \le C \rho_n^{-1}$,  for $(j,k)$ as in (3.28).

(3.29) and (3.14) show that for large $n$, $j \in E_i$, $i \in O$, and $\theta \in A$, we have

$m_j \le C (\log n)^{-\delta/2} + \eta\, a_n$.

This and (3.30) show that

(3.31)   $u_{nj} \ge (1 - \eta)\, a_n - C =: d_n$,

where $u_{nj}$ is as in (3.28). Last of all, fix $y < x$ arbitrary. By (3.11) and (3.30), for large $n$ we have

(3.32)   $v_{nk} \ge y_n$,  for $v \ge x$ and $(j,k)$ as in (3.28).

We are now ready to estimate (3.28). By (3.30)-(3.32), and summing $|\rho_{jk}|$ over $k \in G$ by means of (3.13), (3.28) is dominated by a constant times

$\sum_{i \in O} \# E_i\, \exp\big( -\tfrac12 (d_n^2 + y_n^2) \big) \le C\, \# O_n\, \exp\big( -\tfrac12 (d_n^2 + y_n^2) \big) \to 0$,  $n \to +\infty$.

Here we have used (3.13) and $\sum_{j \ge 1} j^{-a} \le C$ to bound $\sum_{k \in G} |\rho_{jk}|$ by a constant; $\# E_i \le C$, by stationarity of $X$; and $\# O_n \le n$. This proves (3.28).
It remains to prove that

(3.33)   $\sup_{u,v \ge x}\ \sum_{i \in O} \sum_{j \in B_i \setminus E_i} \sum_{\substack{k \in G \\ j < k}} |\rho_{jk}| \exp\Big( - \frac{u_{nj}^2 + v_{nk}^2}{2(1 + |\rho_{jk}|)} \Big) \to 0$,  $n \to +\infty$.

Here $B_i$ is defined in (3.4). In fact we will show that, for $\epsilon_1 > 0$ sufficiently small and $\delta' > 0$ depending only on $\eta$ in (3.29),

(3.34)   $\sup_{u,v \ge x}\ \sup_{(j,k)} \exp\Big( - \frac{u_{nj}^2 + v_{nk}^2}{2(1 + |\rho_{jk}|)} \Big) \le n^{-1-\delta'}$,

the inner supremum being over pairs $(j,k)$ as in (3.33). This and (3.13) easily imply (3.33), which finishes the proof of (3.27), concluding the proof of the lemma.
For any $\epsilon_1 > 0$ fixed, and all $i \in O$, $j \in B_i \setminus E_i$, $k \in G$, the following holds for sufficiently large $n$. By the definition of $E_i$, (3.13), (3.14), and (3.29),

$|\rho_{jk}| \le \frac{\eta + C \rho_n^{-1}}{(1 - \epsilon_1^2)^{1/2}} + C (\log n)^{-\delta/2}$.

This last expression can be made strictly less than $\xi$, $\eta < \xi < 1$, for $\epsilon_1 > 0$ sufficiently small. Also, by (3.14), for $i \in O$, $j \in B_i \setminus E_i$, and $n$ sufficiently large,

$u_n - m_j \ge b_n + a_n^{-1} u - 2 \epsilon_1 a_n \ge (1 - 3 \epsilon_1)\, a_n + a_n^{-1} y =: e_n$,

where $y < x$ is arbitrary. Similarly, we have $v_{nk} \ge y_n$. Fix $\delta' > 0$ so that $\tfrac12 + \delta' < 1/(1+\xi)$. Then, for $\epsilon_1 > 0$ sufficiently small, the observations above show that

$\frac{u_{nj}^2 + v_{nk}^2}{2(1 + |\rho_{jk}|)} \ge \frac{e_n^2 + y_n^2}{2(1 + \xi)} \ge \frac{(2 - C \epsilon_1)\, a_n^2}{2(1 + \xi)} > (1 + \delta') \log n$.

This implies (3.34). ∎
REFERENCES

[B] Borell, C. (1975). The Brunn-Minkowski inequality in Gauss space. Invent. Math. 30, 207-216.

[H] Hajek, B. (1987). Locating the maximum of a simple random sequence by sequential search. IEEE Trans. Inform. Theory IT-33, 877-881.

[JM] Jain, N. and Marcus, M. (1978). Continuity of sub-Gaussian processes. In Probability in Banach Spaces, Advances in Probability 4, 81-96. Marcel Dekker.

[LLR] Leadbetter, M.R., Lindgren, G. and Rootzén, H. (1983). Extremes and Related Properties of Random Sequences and Processes. Springer-Verlag.