INTERMEDIATE- AND EXTREME-SUM PROCESSES

Sándor CSÖRGŐ 1
Department of Statistics, University of North Carolina, Chapel Hill, NC 27599-3260, USA

David M. MASON 2
Department of Mathematical Sciences, University of Delaware, Newark, DE 19716, USA
Let $X_{1,n} \le \cdots \le X_{n,n}$ be the order statistics of $n$ independent random variables with a common distribution function $F$ and let $k_n$ be positive numbers such that $k_n \to \infty$ and $k_n/n \to 0$ as $n \to \infty$. With suitable centering and norming, we investigate the weak convergence of the intermediate-sum process

$$\sum_{i=\lceil ak_n\rceil+1}^{\lceil tk_n\rceil} X_{n+1-i,n}, \qquad a \le t \le b, \quad\text{where } 0 < a < b < \infty,$$

and the weak convergence of the extreme-sum process

$$\sum_{i=1}^{\lceil tk_n\rceil} X_{n+1-i,n}, \qquad 0 \le t \le b.$$

Convergence is with respect to the supremum norm and can take place along a subsequence of the positive integers $\{n\}$.
order statistics * intermediate-sum processes * extreme-sum processes * weak convergence * extreme-value domain of attraction
1 On leave from the University of Szeged, Hungary. Partially supported by the Hungarian National Foundation for Scientific Research, Grants 1808/86 and 457/88.

2 Partially supported by NSF Grant DMS-8803209 and a Fulbright Grant.
1. Introduction and Statement of Results
Let $X, X_1, X_2, \ldots$ be a sequence of independent non-degenerate random variables with a common distribution function $F(x) = P\{X \le x\}$, $x \in \mathbb{R}$, and for each integer $n \ge 1$ let $X_{1,n} \le \cdots \le X_{n,n}$ denote the order statistics based on the sample $X_1, \ldots, X_n$. Let $\{k_n\}$ be a sequence of positive numbers such that

$$k_n \to \infty \quad\text{and}\quad k_n/n \to 0 \quad\text{as } n \to \infty, \eqno(1.1)$$

and consider the sum process

$$I_n(a,t) = I_n(a,t;k_n) = \sum_{i=\lceil ak_n\rceil+1}^{\lceil tk_n\rceil} X_{n+1-i,n}, \qquad a \le t \le b, \eqno(1.2)$$

of intermediate order statistics, where $0 < a < b$, and the sum process

$$E_n(t) = E_n(t;k_n) = \begin{cases} \displaystyle\sum_{i=1}^{\lceil tk_n\rceil} X_{n+1-i,n}, & 1/k_n \le t \le b,\\[6pt] 0, & 0 \le t < 1/k_n, \end{cases} \eqno(1.3)$$

of extreme order statistics, where $\lceil x\rceil$ is the smallest integer not smaller than $x$ and an empty sum is understood as zero.
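To make the index conventions in (1.2) and (1.3) concrete, here is a small illustrative sketch (the helper names are ours, not the paper's); `x` is a finite sample and `kn` plays the role of $k_n$:

```python
import math

def intermediate_sum(x, kn, a, t):
    """I_n(a,t): sum of X_{n+1-i,n} over i = ceil(a*kn)+1, ..., ceil(t*kn).

    An empty sum is zero, matching the convention in the text.
    """
    xs = sorted(x)                 # X_{1,n} <= ... <= X_{n,n}
    n = len(xs)
    lo = math.ceil(a * kn) + 1     # first summation index i
    hi = math.ceil(t * kn)         # last summation index i
    # X_{n+1-i,n} counts down from the largest order statistic
    return sum(xs[n - i] for i in range(lo, hi + 1))

def extreme_sum(x, kn, t):
    """E_n(t): sum of the ceil(t*kn) largest order statistics for
    t >= 1/kn, and 0 for 0 <= t < 1/kn."""
    if t < 1.0 / kn:
        return 0
    xs = sorted(x)
    m = math.ceil(t * kn)
    return sum(xs[len(xs) - m:])
```

For instance, with the sample $\{1,\ldots,5\}$ and $k_n = 2$, $E_n(1)$ adds the two largest observations, and $E_n(t) - E_n(a) = I_n(a,t)$ whenever $a \ge 1/k_n$, the identity used for the versions in Corollary 1 below.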
The asymptotic distribution of the intermediate sum $I_n(a,b)$ for fixed $0 < a < b$ has been thoroughly investigated in [3]. We found necessary and sufficient conditions for the existence of constants $A_n > 0$ and $C_n \in \mathbb{R}$ such that $A_n^{-1}(I_n(a,b) - C_n)$ converges in distribution along subsequences of the positive integers $\{n\}$ to non-degenerate limits, and completely described the possible subsequential limiting distributions. Exactly the same programme has been carried out previously in [2] for the extreme sums $E_n(1)$. The aim of the present paper is to investigate the weak convergence of the suitably centered and normalized processes $I_n(a,\cdot)$ and $E_n(\cdot)$ in the supremum norm on $[a,b]$ and $[0,b]$, respectively.

Consider the inverse or quantile function of $F$ defined as

$$Q(s) = \inf\{x : F(x) \ge s\}, \qquad 0 < s \le 1,$$

and introduce the associated left-continuous non-decreasing function

$$H(s) = -Q((1-s)-), \qquad 0 \le s < 1. \eqno(1.4)$$
Consider the centering functions

$$\mu_n(a,t) = \mu_n(a,t;k_n) = -n\int_{\lceil ak_n\rceil/n}^{\lceil tk_n\rceil/n} H(s)\,ds, \qquad 0 \le a < t. \eqno(1.5)$$

We say that a sequence $\{f_n(t) : a \le t \le b\}_{n=1}^\infty$ of stochastic processes has a distributionally equivalent version $\{\tilde f_n(t) : a \le t \le b\}_{n=1}^\infty$ if the distributional equality $\{f_n(t) : a \le t \le b\} =_{\mathcal D} \{\tilde f_n(t) : a \le t \le b\}$ holds for each $n \ge 1$, that is, if all finite-dimensional distributions of $f_n(\cdot)$ and $\tilde f_n(\cdot)$ are the same on $[a,b]$ for each $n \ge 1$.
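Since $H(s) = -Q((1-s)-)$ by (1.4), the centering (1.5) can equivalently be written in terms of the quantile function:

```latex
\mu_n(a,t) \;=\; -\,n\int_{\lceil ak_n\rceil/n}^{\lceil tk_n\rceil/n} H(s)\,ds
          \;=\; n\int_{\lceil ak_n\rceil/n}^{\lceil tk_n\rceil/n} Q\big((1-s)-\big)\,ds,
```

which is the form in which the centerings appear in Corollary 1 below.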
Theorem 1. Let $\{k_n\}$ be a sequence as in (1.1) and fix $0 < a_0 < b_0 < \infty$. Suppose that there exist a subsequence $\{n'\}$ of the positive integers and positive numbers $B_{n'}$ along it such that for a function $\varphi$ continuous on $[a_0,b_0]$, necessarily non-negative, non-decreasing and satisfying $\varphi(a_0) = 0$, we have

$$\varphi_{n'}(a_0;x) := \int_{a_0}^{x} dH\Big(\frac{\lceil sk_{n'}\rceil}{n'}\Big)\Big/B_{n'} \to \varphi(x) \eqno(1.6)$$

at each $x \in [a_0,b_0]$ as $n' \to \infty$. Then on a suitable probability space, for any choice of $a_0 < a < b < b_0$, there exist a sequence $\{\tilde I_n(a,t) : a \le t \le b\}_{n=1}^\infty$ of distributionally equivalent versions of the sequence $\{I_n(a,t) : a \le t \le b\}_{n=1}^\infty$ and a standard Wiener process $W(t)$, $t \ge 0$, such that

$$\sup_{a\le t\le b}\bigg|\frac{1}{\sqrt{k_{n'}}\,B_{n'}}\big\{\tilde I_{n'}(a,t) - \mu_{n'}(a,t)\big\} - \int_a^t W(s)\,d\varphi(s)\bigg| \to 0 \quad\text{a.s.} \eqno(1.7)$$

as $n' \to \infty$.
We note that by Theorem 1 in [3], for convergence in distribution of the process

$$Y_n(a,t) := \{I_n(a,t) - \mu_n(a,t)\}/\{\sqrt{k_n}\,B_n\}, \qquad a \le t \le b, \eqno(1.8)$$

at a fixed point $a_0 < a < b < b_0$ along a subsequence of $\{n\}$ we can always choose

$$B_n = A_n(a_0,b_0) := \max\big(H(b_0k_n/n) - H(a_0k_n/n),\,1\big) > 0.$$

Then, with this choice of $B_n$, for the non-decreasing, left-continuous functions $\varphi_n(a_0;x)$ we have $0 \le \varphi_n(a_0;x) \le 1$ on $[a_0,b_0]$. Hence by a Helly selection one can always find a subsequence $\{n'\} \subset \{n\}$ and a non-negative, non-decreasing, left-continuous function $\varphi$ on $(a_0,b_0)$ such that $\varphi_{n'}(a_0;x)$ converges to $\varphi(x)$ as $n' \to \infty$ at any continuity point $x$ of $\varphi$. Theorem 1 in [3] shows that one can hope for weak convergence of $I_{n'}(\cdot)$ in the supremum norm on $[a,b]$ only in the case when $\varphi$ is continuous on some interval containing $[a,b]$. This is the underlying reason for condition (1.6).
Now we turn to the weak-convergence problem of the extreme-sum process $E_n(t) = I_n(0,t)$, $0 \le t \le b$. Even though the problem is now the behavior in the vicinity of zero, we still need a reference point $a_0 > 0$ as in (1.6), which can in principle be chosen to be $b_0$.
Theorem 2. Let $\{k_n\}$ be a sequence as in (1.1) and fix $0 < a_0 \le b_0 < \infty$. Suppose there exist a subsequence $\{n'\}$ of the positive integers and positive numbers $B_{n'}$ such that for a function $\varphi$ continuous on $(0,b_0]$, necessarily non-decreasing and satisfying $\varphi(a_0) = 0$, we have

$$\varphi_{n'}(a_0;x) = \int_{a_0}^{x} dH\Big(\frac{\lceil sk_{n'}\rceil}{n'}\Big)\Big/B_{n'} \to \varphi(x) \quad\text{at each } x \in (0,b_0] \eqno(1.9)$$

as $n' \to \infty$, and

$$\lim_{a\downarrow 0}\limsup_{n'\to\infty}\int_0^a \sqrt s\,d\varphi_{n'}(a_0;s) = 0 \quad\text{and}\quad \lim_{a\downarrow 0}\int_0^a \sqrt s\,d\varphi(s) = 0. \eqno(1.10)$$

Then on a suitable probability space, for any choice of $0 < b < b_0$, there exist a sequence $\{\tilde E_n(t) : 0 \le t \le b\}_{n=1}^\infty$ of distributionally equivalent versions of the sequence $\{E_n(t) : 0 \le t \le b\}_{n=1}^\infty$ and a standard Wiener process $W(t)$, $t \ge 0$, such that

$$\sup_{0\le t\le b}\bigg|\frac{1}{\sqrt{k_{n'}}\,B_{n'}}\big\{\tilde E_{n'}(t) - \mu_{n'}(0,t)\big\} - \int_0^t W(s)\,d\varphi(s)\bigg| \to_P 0$$

as $n' \to \infty$.
We note that it is easy to see using integration by parts that if (1.9) holds and there exists a constant $\rho > -1/2$ such that $|\varphi_{n'}(a_0;x)| \le x^{\rho}$ for all $n'$ large enough and all $x > 0$ small enough, then condition (1.10) is also satisfied.
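Under the reading that the bound in the preceding remark is $|\varphi_{n'}(a_0;x)| \le x^{\rho}$ for some $\rho > -1/2$ (our notation), the integration by parts it alludes to is:

```latex
\int_0^a \sqrt{s}\,d\varphi_{n'}(a_0;s)
  = \sqrt{a}\,\varphi_{n'}(a_0;a) - \lim_{s\downarrow 0}\sqrt{s}\,\varphi_{n'}(a_0;s)
    - \tfrac12\int_0^a s^{-1/2}\varphi_{n'}(a_0;s)\,ds,
```

and under the bound each term is $O(a^{\rho+1/2})$, which tends to zero as $a \downarrow 0$ because $\rho + 1/2 > 0$; the same estimate applied to the limit function $\varphi$ gives the second half of (1.10).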
Now we formulate a corollary to Theorems 1 and 2 under the classical condition of extreme value theory. We say that $F$ is in the domain of attraction of an extreme value distribution if $(X_{n,n} - c_n)/a_n$ converges in distribution to a non-degenerate random variable $Y$, where $a_n > 0$ and $c_n \in \mathbb{R}$ are some constants. As pointed out in [2], with earlier references, this happens if and only if there exists a constant $\gamma \in \mathbb{R}$ such that

$$\lim_{s\downarrow 0}\frac{H(sx) - H(sy)}{H(su) - H(sv)} = \begin{cases} (x^{-\gamma} - y^{-\gamma})/(u^{-\gamma} - v^{-\gamma}), & \text{if } \gamma \ne 0,\\ \log(x/y)/\log(u/v), & \text{if } \gamma = 0, \end{cases} \eqno(1.11)$$

for all distinct $0 < x, y, u, v < \infty$. In this case we write $F \in \Lambda_\gamma$, where, with suitable choices of $a_n$ and $c_n$,

$$\Lambda_\gamma(y) = P\{Y \le y\} = \begin{cases} \exp(-y^{-1/\gamma}), & y > 0, & \text{if } \gamma > 0,\\ \exp(-\exp(-y)), & y \in \mathbb{R}, & \text{if } \gamma = 0,\\ \exp(-(-y)^{-1/\gamma}), & y < 0, & \text{if } \gamma < 0. \end{cases}$$
For any $\gamma \in \mathbb{R}$ set

$$u_\gamma = \begin{cases} 1, & \text{if } \gamma > 0,\\ e, & \text{if } \gamma = 0,\\ (1-\gamma)^{-1/\gamma}, & \text{if } \gamma < 0, \end{cases} \qquad\text{and}\qquad v_\gamma = \begin{cases} (1+\gamma)^{-1/\gamma}, & \text{if } \gamma > 0,\\ 1, & \text{if } \gamma = 0,\\ 1, & \text{if } \gamma < 0, \end{cases}$$

so that $u_\gamma^{-\gamma} - v_\gamma^{-\gamma} = -\gamma$ if $\gamma \ne 0$ and $\log(u_0/v_0) = 1$.
For a sequence $\{k_n\}$ satisfying (1.1) we define $A_n(\gamma) = H(u_\gamma k_n/n) - H(v_\gamma k_n/n)$. Choosing the reference point of Theorem 2 as $a_0 = 1$, introduce now

$$\varphi_n(x) = \{H(xk_n/n) - H(k_n/n)\}/A_n(\gamma), \eqno(1.12)$$

which is well-defined for $0 < x < n/k_n$. Then, if $F \in \Lambda_\gamma$ for some $\gamma \in \mathbb{R}$, we obtain from (1.11) that

$$\varphi_n(x) \to \varphi_\gamma(x) := \begin{cases} (1-x^{-\gamma})/\gamma, & \text{if } \gamma \ne 0,\\ \log x, & \text{if } \gamma = 0, \end{cases} \eqno(1.13)$$

as $n \to \infty$, for any $x > 0$; that is, we have (1.9) with the continuous function $\varphi = \varphi_\gamma$ along the whole $\{n\}$ and for any $b_0 > 0$, or, what is the same, (1.6) for any $0 < a_0 < b_0$.
Hence the first statement of Corollary 1 below follows from Theorem 1, and the second statement will follow from Theorem 2 after proving (1.10) for the present $\varphi_n$ and $\varphi = \varphi_\gamma$.
Corollary 1. If $F \in \Lambda_\gamma$ for some $\gamma \in \mathbb{R}$ and $\{k_n\}$ satisfies (1.1), then on a suitable probability space, for any choice of $b > 0$, there exist a sequence $\{\tilde E_n(t) : 0 \le t \le b\}_{n=1}^\infty$ of distributionally equivalent versions of the sequence $\{E_n(t) : 0 \le t \le b\}_{n=1}^\infty$ and a standard Wiener process $W(t)$, $t \ge 0$, such that for the sequence $\{\tilde I_n(a,t) = \tilde E_n(t) - \tilde E_n(a) : a \le t \le b\}_{n=1}^\infty$, being a distributionally equivalent version of the sequence $\{I_n(a,t) : a \le t \le b\}_{n=1}^\infty$, and for any $0 < a < b$,

$$\sup_{a\le t\le b}\bigg|\frac{1}{\sqrt{k_n}\,A_n(\gamma)}\Big\{\tilde I_n(a,t) - n\int_{\lceil ak_n\rceil/n}^{\lceil tk_n\rceil/n} Q(1-s)\,ds\Big\} - \int_a^t W(s)\,s^{-1-\gamma}\,ds\bigg| \to 0$$

almost surely as $n \to \infty$. Furthermore, if $\gamma < 1/2$, then for any $b > 0$,

$$\sup_{0\le t\le b}\bigg|\frac{1}{\sqrt{k_n}\,A_n(\gamma)}\Big\{\tilde E_n(t) - n\int_0^{\lceil tk_n\rceil/n} Q(1-s)\,ds\Big\} - \int_0^t W(s)\,s^{-1-\gamma}\,ds\bigg| \to_P 0$$

as $n \to \infty$.
Notice that if $F \in \Lambda_\gamma$ and $\gamma \ge 1/2$, then we have (1.13) but condition (1.10) is not satisfied by the limiting function $\varphi = \varphi_\gamma$. In this case, according to Corollary 2 in [2], the appropriately centered extreme sums $E_n(t)$ require a norming sequence $A_n$ heavier than the one needed by the centered intermediate sums in order to converge in distribution (denoted by $\to_{\mathcal D}$) to a non-degenerate random variable $V$. Namely, it follows from Corollary 2 in [2] that if $F \in \Lambda_\gamma$ for some $\gamma \ge 1/2$, then there is a sequence $A_n = A_n(k_n) > 0$, completely specified in [2], such that

$$\{I_n(a,b) - \mu_n(a,b)\}/A_n \to_P 0 \quad\text{for all } 0 < a < b$$

and

$$\{E_n(t) - \mu_n(0+,t)\}/A_n \to_{\mathcal D} V \quad\text{for all } t > 0,$$

where if $\gamma = 1/2$ then $V$ is the same standard normal random variable for all $t > 0$, and if $\gamma > 1/2$ then $V$ is the same stable random variable with index $1/\gamma$ for all $t > 0$.
Our next corollary discloses a curious Darling-Erdős type behavior for the extreme-sum process. Whenever $F \in \Lambda_\gamma$ for some $\gamma < 1/2$ and $\{k_n\}$ satisfies (1.1), write

$$e_n(t) := \frac{a_\gamma\,t^{\gamma-1/2}}{\sqrt{k_n}\,A_n(\gamma)}\Big\{E_n(t) - n\int_0^{\lceil tk_n\rceil/n} Q(1-s)\,ds\Big\},$$

where $a_\gamma = \{(1-\gamma)(1-2\gamma)/2\}^{1/2}$, and for $T > 0$ and $\gamma < 1/2$ set

$$A(T) = (2\log\max(T,e))^{1/2} \eqno(1.14)$$

and

$$B_\gamma(T) = A(T) + \log\big(\lambda_\gamma^{1/2}/2\pi\big)\big/A(T), \eqno(1.15)$$

where

$$\lambda_\gamma = (1-2\gamma)/4. \eqno(1.16)$$
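The constants (1.14)-(1.16) have the familiar Darling-Erdős shape. Heuristically (this is a standard extreme-value computation, not carried out in the paper): for a stationary Gaussian process with covariance $1 - \lambda h^2/2 + o(h^2)$, Rice's formula gives the mean number of $u$-upcrossings per unit time as $\mu(u) = (\sqrt{\lambda}/2\pi)\,e^{-u^2/2}$, and solving $T\,\mu(u_T) = e^{-x}$ yields

```latex
u_T = A(T) + \frac{x + \log\!\big(\lambda^{1/2}/2\pi\big)}{A(T)} + o\big(A(T)^{-1}\big),
\qquad A(T) = (2\log T)^{1/2},
```

which is exactly $B_\gamma(T) + x/A(T)$ with $\lambda = \lambda_\gamma$, matching the double limit in Corollary 2 below.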
This behavior will be a consequence of the weak convergence of a time-changed variant of $e_n(\cdot)$ to the stochastic process

$$V_\gamma(x) := a_\gamma\,e^{(1/2-\gamma)x}\int_0^{e^{-x}} W(u)\,u^{-1-\gamma}\,du, \qquad x \ge 0.$$

It is readily verified that $V_\gamma(\cdot)$ is a sample-continuous mean zero stationary Gaussian process with covariance function given, for $x \ge 0$ and $h \ge 0$, by

$$r_\gamma(h) = E\,V_\gamma(x+h)V_\gamma(x) = \begin{cases} \dfrac{1}{2\gamma}\Big\{e^{(\gamma-1/2)h} - (1-2\gamma)\,e^{-h/2}\Big\}, & \text{if } \gamma \ne 0,\\[8pt] \tfrac12\,(2+h)\,e^{-h/2}, & \text{if } \gamma = 0. \end{cases}$$
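Two consistency checks on this covariance formula (with $a_\gamma = \{(1-\gamma)(1-2\gamma)/2\}^{1/2}$, the constant that makes the variance one):

```latex
r_\gamma(0) = \frac{1 - (1-2\gamma)}{2\gamma} = 1,
\qquad
\lim_{\gamma\to 0} r_\gamma(h)
  = \lim_{\gamma\to 0}\frac{e^{(\gamma-1/2)h} - (1-2\gamma)\,e^{-h/2}}{2\gamma}
  = \tfrac12\,(2+h)\,e^{-h/2},
```

the last step by l'Hôpital's rule in $\gamma$, so $V_\gamma$ indeed has unit variance and the two cases of the formula fit together continuously at $\gamma = 0$.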
Corollary 2. Assume that $F \in \Lambda_\gamma$ with $\gamma < 1/2$ and let $\{k_n\}$ be a sequence satisfying (1.1). Then for any fixed $0 < c < 1$ the sequence $\{e_n(e^{-x}) : 0 \le x \le \log(1/c)\}$ of processes converges weakly in the Skorohod space $D[0,\log(1/c)]$ to the process $\{V_\gamma(x) : 0 \le x \le \log(1/c)\}$. Furthermore,

$$\lim_{c\downarrow 0}\lim_{n\to\infty} P\Big\{A(\log(1/c))\Big\{\sup_{c\le t\le 1} e_n(t) - B_\gamma(\log(1/c))\Big\} \le x\Big\} = \exp(-e^{-x})$$

for all $x \in \mathbb{R}$.
Finally we would like to connect Theorem 1 to the classical theory of domains of partial attraction for the whole sums $X_1 + \cdots + X_n$ and thereby show that Theorem 1 is not empty for any choice of $0 < a < b < \infty$ and any non-negative, non-decreasing continuous function $\varphi$ on $[a,b]$. In particular, we claim the following: Let $0 < a < b < \infty$ be arbitrary and let $\varphi$ be any non-negative, non-decreasing, continuous function on $[a,b]$. Then there exist a distribution function $F$, a subsequence $\{n'\} = \{n_j'\}_{j=1}^\infty$ of the positive integers and a sequence $k_j'$ satisfying $k_j' \to \infty$ and $k_j'/n_j' \to 0$ as $j \to \infty$ such that for the versions $\tilde I_{n'}$ of the intermediate sums $I_{n'}$ pertaining to $F$ in Theorem 1 we have (1.7) as $n' \to \infty$. In fact, there is a universal $F$ that does the job for all $0 < a < b$ and all functions $\varphi$ on $[a,b]$ with the described properties.

Indeed, let $0 < a < b$ be arbitrary and $\varphi$ be any continuous, non-negative, non-decreasing function on $[a,b]$. Choose $0 < a_0 < a < b < b_0 < \infty$ and extend the definition of $\varphi$ so that the extended $\varphi$ is continuous and non-decreasing on $[a_0,b_0]$ and $\varphi(a_0) = 0$. Now define

$$\psi(s) = \begin{cases} \varphi(a_0) - \varphi(b_0), & 0 < s \le a_0,\\ \varphi(s) - \varphi(b_0), & a_0 < s < b_0,\\ 0, & s \ge b_0, \end{cases}$$

and introduce $R(x) = \inf\{s > 0 : \psi(s) \ge -x\}$, $x > 0$. Consider the spectrally right-sided infinitely divisible distribution without a normal component, the right Lévy measure of which is $R(x)$, $x > 0$. Then by the classical theorem of Khinchin ([4], p. 184) there is an $F$ in the domain of partial attraction of this infinitely divisible law. Using Theorems 3 and 5 in [1], this means, in particular, that there exists a subsequence $\{n_j\}_{j=1}^\infty$ of the positive integers such that if $Q$ denotes the quantile function belonging to $F$ then we have

$$-Q\Big(\Big(1-\frac{s}{n_j}\Big)-\Big)\Big/B_j \to \psi(s), \qquad s > 0, \eqno(1.17)$$

as $j \to \infty$, where $B_j > 0$ are some constants. Now define $n_j' = \lceil n_j^{3/2}\rceil$ and $k_j' = \lceil n_j^{3/2}\rceil/n_j$, $j = 1,2,\ldots$. Then $k_j' \to \infty$ and $k_j'/n_j' \to 0$ as $j \to \infty$, and it follows from (1.17) that

$$\Big\{H\Big(\frac{\lceil sk_j'\rceil}{n_j'}\Big) - H\Big(\frac{\lceil a_0k_j'\rceil}{n_j'}\Big)\Big\}\Big/B_j \to \psi(s) - \psi(a_0) = \varphi(s)$$

for each $a_0 \le s \le b_0$. This means that condition (1.6) is satisfied and hence by Theorem 1 we obtain (1.7) along $\{n'\} = \{n_j'\}_{j=1}^\infty$, with $B_{n_j'}$ replaced by the present $B_j$. A universal $F$ is obtained by using the distribution function of any of the universal laws of Doeblin ([4], p. 189).
In order to give a flavor of the content of Theorems 1 and 2, we close this section with an illustrative example. Set

$$Q(1-s) = \{\beta + \sin(\log s)\}\,s^{-\gamma}, \qquad 0 < s \le 1,$$

where $\gamma > 0$ and $\beta > (1+\gamma)/\gamma$. Differentiation shows that $Q$ is an actual quantile function. For $j = 1,2,\ldots$, set

$$n_j = \lceil\exp(2\pi j)\rceil \quad\text{and}\quad k_j = k_{n_j} = n_j\exp(-2\pi j),$$

so that $k_j/n_j = \exp(-2\pi j)$, $j = 1,2,\ldots$. Also let

$$B_j = B_{n_j} = \exp(2\pi\gamma j), \qquad j = 1,2,\ldots,$$

and choose $a_0 > 0$ arbitrarily. Then for any $a_0 \le x < \infty$ and all $j$ large enough,

$$\varphi_{n_j}(a_0;x) \to \{\beta + \sin(\log a_0)\}\,a_0^{-\gamma} - \{\beta + \sin(\log x)\}\,x^{-\gamma} =: \eta(x).$$

We see that Theorem 1 applies along $\{n'\} = \{n_j\}_{j=1}^\infty$ for all $\gamma > 0$ and all $0 < a_0 < a < b < b_0 < \infty$ and, moreover, it is easily checked that Theorem 2 is also applicable when $0 < \gamma < 1/2$. Notice that by (1.11) the distribution function corresponding to such a $Q$ is not in the domain of attraction of $\Lambda_\gamma$ for any $\gamma$.
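The restriction $\beta > (1+\gamma)/\gamma$ in the example (with $\beta$ the constant in $Q(1-s) = \{\beta+\sin(\log s)\}s^{-\gamma}$) is exactly what the differentiation uses:

```latex
\frac{d}{ds}\,\big[\{\beta + \sin(\log s)\}\,s^{-\gamma}\big]
  = s^{-\gamma-1}\big\{\cos(\log s) - \gamma\sin(\log s) - \gamma\beta\big\}
  \le s^{-\gamma-1}\,(1 + \gamma - \gamma\beta) < 0,
```

so $Q(1-s)$ is strictly decreasing in $s$ on $(0,1]$, that is, $Q$ is indeed a (non-decreasing) quantile function.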
2. Proofs

Let $U_1, U_2, \ldots$ be independent random variables uniformly distributed on $(0,1)$ with corresponding order statistics $U_{1,n} \le \cdots \le U_{n,n}$. Consider the uniform empirical and quantile processes $\alpha_n(t) = \sqrt n\,(G_n(t)-t)$ and $\beta_n(t) = \sqrt n\,(t-U_n(t))$, $0 \le t \le 1$, where $G_n(t) = n^{-1}\#\{1 \le k \le n : U_k \le t\}$, $0 \le t \le 1$, and $U_n(t) = \inf\{0 \le s \le 1 : G_n(s) \ge t\}$, $0 < t \le 1$, $U_n(0) = U_{1,n}$, so that $U_n(t) = U_{k,n}$ if $(k-1)/n < t \le k/n$, $k = 1,\ldots,n$. The tail empirical and quantile processes pertaining to the given sequence $\{k_n\}$ satisfying (1.1) are defined as

$$w_n(s) = (n/k_n)^{1/2}\,\alpha_n(sk_n/n) \quad\text{and}\quad v_n(s) = (n/k_n)^{1/2}\,\beta_n(sk_n/n), \qquad 0 \le s \le n/k_n.$$

As pointed out in [6], $w_n(\cdot)$ converges weakly in the Skorohod space $D[0,T]$, for any $T > 0$, to a standard Wiener process. Then by a Skorohod construction and the left-continuous version of Lemma 1 of Vervaat [8] (both also in [7]) we see that the sequences $\{w_n(\cdot)\}_{n=1}^\infty$ and $\{v_n(\cdot)\}_{n=1}^\infty$ have distributionally equivalent versions $\{\tilde w_n(\cdot)\}_{n=1}^\infty$ and $\{\tilde v_n(\cdot)\}_{n=1}^\infty$ on a rich enough probability space that carries a standard Wiener process $W$ such that

$$\sup_{0\le s\le T}|\tilde w_n(s) - W(s)| \to 0 \quad\text{and}\quad \sup_{0\le s\le T}|\tilde v_n(s) - W(s)| \to 0 \quad\text{a.s.} \eqno(2.1)$$

as $n \to \infty$.
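For a finite sample the objects just defined can be computed directly; the following sketch (our own helper names, pure illustration) evaluates $G_n$, $U_n$ and the tail processes $w_n$, $v_n$ at a point:

```python
import math

def G_n(u, t):
    """Empirical distribution function of the sample u at t."""
    return sum(1 for ui in u if ui <= t) / len(u)

def U_n(u, t):
    """Empirical quantile function: U_n(t) = U_{k,n} for (k-1)/n < t <= k/n,
    with U_n(0) = U_{1,n}, as in the text."""
    us = sorted(u)
    if t <= 0:
        return us[0]
    k = math.ceil(len(us) * t)
    return us[k - 1]

def tail_processes(u, kn, s):
    """w_n(s) = (n/kn)^{1/2} alpha_n(s*kn/n) and
    v_n(s) = (n/kn)^{1/2} beta_n(s*kn/n), for one point 0 <= s <= n/kn."""
    n = len(u)
    t = s * kn / n
    scale = math.sqrt(n / kn) * math.sqrt(n)   # (n/kn)^{1/2} * sqrt(n)
    return scale * (G_n(u, t) - t), scale * (t - U_n(u, t))
```

For instance, for the sample $\{0.1, 0.3, 0.5\}$ one can check the stated relation $U_n(t) = U_{k,n}$ for $(k-1)/n < t \le k/n$ directly.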
In order to obtain the distributionally equivalent copies $\tilde I_n$ of Theorem 1, we first note that from (1.4),

$$(X_{1,n},\ldots,X_{n,n}) =_{\mathcal D} (-H(U_{n,n}),\ldots,-H(U_{1,n}))$$

for each $n \ge 1$. Define $B_n > 0$ arbitrarily for an $n$ which is not a member of $\{n'\}$ in (1.6). Then, using the notations in (1.2) and (1.5), starting out from the equality

$$\{I_n(a,t) - \mu_n(a,t) : a \le t \le b\} =_{\mathcal D} \Big\{-n\int_{U_n(ak_n/n)}^{U_n(tk_n/n)} H(u)\,dG_n(u) - \mu_n(a,t) : a \le t \le b\Big\}$$

and then integrating by parts, for the processes $Y_n(a,t)$ in (1.8) and for

$$M_n^*(a,t) = \frac{\sqrt n}{\sqrt{k_n}\,B_n}\int_{\lceil ak_n\rceil/n}^{\lceil tk_n\rceil/n}(G_n(u)-u)\,dH(u)$$

and

$$R_n^*(t) = \frac{\sqrt n}{\sqrt{k_n}\,B_n}\int_{\lceil tk_n\rceil/n}^{U_n(tk_n/n)}(G_n(u)-u)\,dH(u),$$

we obtain

$$\{Y_n(a,t) : a \le t \le b\} =_{\mathcal D} \{M_n^*(a,t) - R_n^*(a) + R_n^*(t) : a \le t \le b\}. \eqno(2.2)$$

Substituting now $u = sk_n/n$ and transferring to the probability space of the versions $\tilde w_n$ and $\tilde v_n$ in (2.1), we get

$$\{Y_n(a,t) : a\le t\le b\} =_{\mathcal D} \{\tilde Y_n(a,t) := \tilde M_n(a,t) - \tilde R_n(a) + \tilde R_n(t) : a\le t\le b\}, \qquad n \ge 1, \eqno(2.3)$$

where, with the obvious extension of the definition of $\varphi_n$, for an arbitrary $a_0$,

$$\tilde M_n(a,t) = \int_{\lceil ak_n\rceil/k_n}^{\lceil tk_n\rceil/k_n} \tilde w_n(s)\,d\varphi_n(a_0;s)$$

and

$$\tilde R_n(t) = \int_0^{-\tilde v_n(\lceil tk_n\rceil/k_n)}\Big\{H\Big(\frac{\lceil tk_n\rceil}{n} + \frac{x\sqrt{k_n}}{n}\Big) - H\Big(\frac{\lceil tk_n\rceil}{n}\Big)\Big\}\Big/B_n\,dx.$$
Proof of Theorem 1. Relations (1.8) and (2.3) show the existence of versions $\{\tilde I_n(a,t) : a \le t \le b\}$ of $\{I_n(a,t) : a \le t \le b\}$ as claimed in the statement of the theorem once we prove that

$$\sup_{a\le t\le b}|\tilde R_{n'}(t)| \to 0 \quad\text{a.s. as } n' \to \infty \eqno(2.4)$$

and

$$\sup_{a\le t\le b}\Big|\tilde M_{n'}(a,t) - \int_a^t W(s)\,d\varphi(s)\Big| \to 0 \quad\text{a.s. as } n' \to \infty. \eqno(2.5)$$

By (2.1) and the fact that $|W(\cdot)|$ is bounded on any finite interval with probability 1, there exists an almost surely finite random variable $K > 0$ such that for all $n'$ large enough

$$\sup_{a\le t\le b}|\tilde R_{n'}(t)| \le K\sup_{a\le t\le b}\Big\{H\Big(\frac{\lceil tk_{n'}\rceil}{n'} + K\frac{\sqrt{k_{n'}}}{n'}\Big) - H\Big(\frac{\lceil tk_{n'}\rceil}{n'} - K\frac{\sqrt{k_{n'}}}{n'}\Big)\Big\}\Big/B_{n'}.$$

Since $\varphi_{n'}(a_0;\cdot)$ is non-decreasing, by condition (1.6) and the continuity of $\varphi$ this bound goes to zero almost surely as $n' \to \infty$, and hence we have (2.4).

To prove (2.5), first notice that (2.1) and (1.6) easily imply that

$$\sup_{a\le t\le b}\Big|\tilde M_{n'}(a,t) - \int_{\lceil ak_{n'}\rceil/k_{n'}}^{\lceil tk_{n'}\rceil/k_{n'}} W(s)\,d\varphi_{n'}(a_0;s)\Big| \to 0 \quad\text{a.s. as } n' \to \infty.$$

Next, for all $n'$ large enough,

$$\sup_{a\le t\le b}\Big|\int_{\lceil ak_{n'}\rceil/k_{n'}}^{\lceil tk_{n'}\rceil/k_{n'}} W(s)\,d\varphi_{n'}(a_0;s) - \int_a^t W(s)\,d\varphi_{n'}(a_0;s)\Big| \le 2\sup_{0\le s\le b+1}|W(s)|\,\sup_{a\le t\le b}\Big\{\varphi_{n'}\Big(a_0;\frac{\lceil tk_{n'}\rceil}{k_{n'}}\Big) - \varphi_{n'}(a_0;t)\Big\},$$

and this bound goes to zero again by (1.6) and the continuity of $\varphi$ as $n' \to \infty$. Finally, (2.5) and hence the theorem will follow from these relations if we show that

$$\sup_{a\le t\le b}\Big|\int_a^t W(s)\,d\varphi_{n'}(a_0;s) - \int_a^t W(s)\,d\varphi(s)\Big| \to 0 \quad\text{a.s.} \eqno(2.6)$$

as $n' \to \infty$. To verify this, notice that with probability 1 for any given $\varepsilon > 0$ there exists a (random) partition $a = t_0 < t_1 < \cdots < t_m < t_{m+1} = b$ such that

$$\int_{t_i}^{t_{i+1}}|W(s)|\,d\varphi_{n'}(a_0;s) + \int_{t_i}^{t_{i+1}}|W(s)|\,d\varphi(s) < \frac{\varepsilon}{2}$$

for all $i = 0,\ldots,m$ and $n'$ large enough, where $m$ does not depend on $n'$. Thus for any $t_i \le t \le t_{i+1}$ and $i = 0,\ldots,m$, the difference of the two integrals in (2.6) at $t$ differs from the corresponding difference at $t_i$ by less than $\varepsilon/2$, while the latter goes to zero by (1.6); hence the supremum in (2.6) is less than $\varepsilon$ almost surely for all $n'$ large enough, proving (2.6). $\square$
Proof of Theorem 2. First we note that it is easily checked that (1.10) implies the finiteness of $\mu_n(0,b)$ for all $n$ large enough, so that the representation (2.2) holds true for $a = 0$. Hence, again defining $B_n > 0$ arbitrarily for an $n \notin \{n'\}$,

$$\Big\{\frac{E_n(t) - \mu_n(0,t)}{\sqrt{k_n}\,B_n} : 0 \le t \le b\Big\} =_{\mathcal D} \{M_n^*(0,a) + M_n^*(a,t) - R_n^*(0) + R_n^*(t) : 0 \le a \le t \le b\}$$

for each $n \ge 1$. Furthermore, it follows from the derivation of (2.2) and (2.3) that for each $n \ge 1$,

$$\{(M_n^*(0,a),\,M_n^*(a,t),\,R_n^*(0),\,R_n^*(t)) : 0 \le a \le t \le b\} =_{\mathcal D} \{(\tilde M_n(0,a),\,\tilde M_n(a,t),\,\tilde R_n(0),\,\tilde R_n(t)) : 0 \le a \le t \le b\}.$$

Hence, in view of the fact that now we have (2.4) and (2.5) for any fixed $0 < a < b < b_0$, it suffices to prove

$$\lim_{a\downarrow 0}\limsup_{n'\to\infty} P\Big\{\sup_{0\le t\le a}|M_{n'}^*(0,t)| > \varepsilon\Big\} = 0, \eqno(2.7)$$

$$\lim_{a\downarrow 0} P\Big\{\sup_{0\le t\le a}\Big|\int_0^t W(s)\,d\varphi(s)\Big| > \varepsilon\Big\} = 0, \eqno(2.8)$$

and

$$\lim_{a\downarrow 0}\limsup_{n'\to\infty} P\Big\{\sup_{0\le t\le a}|R_{n'}^*(t)| > \varepsilon\Big\} = 0, \eqno(2.9)$$

where $\varepsilon > 0$ is arbitrary. We have

$$E\Big(\sup_{0\le t\le a}|M_{n'}^*(0,t)|\Big) \le \frac{1}{\sqrt{k_{n'}}\,B_{n'}}\int_0^{\lceil ak_{n'}\rceil/n'} E\big(\sqrt{n'}\,|G_{n'}(u)-u|\big)\,dH(u) \le \frac{\sqrt{n'}}{\sqrt{k_{n'}}\,B_{n'}}\int_0^{\lceil ak_{n'}\rceil/n'}\sqrt u\,dH(u) \le \int_0^{\lceil ak_{n'}\rceil/k_{n'}}\sqrt s\,d\varphi_{n'}(a_0;s),$$

where the last inequality holds for all $n'$ large enough. Hence, using condition (1.10), by the Markov inequality we obtain (2.7). Also,

$$E\Big(\sup_{0\le t\le a}\Big|\int_0^t W(s)\,d\varphi(s)\Big|\Big) \le \int_0^a E|W(s)|\,d\varphi(s) \le \int_0^a\sqrt s\,d\varphi(s),$$

and hence condition (1.10) and the Markov inequality again imply (2.8). Finally, using the fact that for any $a > 0$,

$$\frac{n}{k_n}\,U_n\Big(\frac{ak_n}{n}\Big) \to_P a \quad\text{as } n \to \infty,$$

which follows for example from (2.1), we obtain that for each $a > 0$,

$$\sup_{0\le t\le a}|R_{n'}^*(t)| \le \frac{\sqrt{n'}}{\sqrt{k_{n'}}\,B_{n'}}\int_0^{U_{n'}(ak_{n'}/n')\vee(\lceil ak_{n'}\rceil/n')}|G_{n'}(u)-u|\,dH(u) \le \int_0^{2a}|w_{n'}(s)|\,d\varphi_{n'}(a_0;s) + o_P(1)$$

as $n' \to \infty$. But the expectation of the first term here is not greater than $\int_0^{2a}\sqrt s\,d\varphi_{n'}(a_0;s)$, and hence we obtain (2.9) as above. $\square$
Proof of Corollary 1. Since $\varphi_\gamma$ satisfies the second condition in (1.10) whenever $\gamma < 1/2$, we only have to prove that

$$\lim_{a\downarrow 0}\limsup_{n\to\infty}\int_0^a\sqrt s\,d\varphi_n(s) = 0 \quad\text{if } \gamma < 1/2, \eqno(2.10)$$

where $\varphi_n$ is given in (1.12). Fix $0 < a < 1$. Then we have

$$\int_0^a\sqrt s\,d\varphi_n(s) = \sum_{i=0}^\infty\int_{a/2^{i+1}}^{a/2^i}\sqrt s\,d\varphi_n(s) \le \sqrt a\,\sum_{i=0}^\infty r_i(n,a),$$

where, using (1.11),

$$r_i(n,a) = 2^{-i/2}\big\{\varphi_n(a/2^i) - \varphi_n(a/2^{i+1})\big\} = 2^{-i/2}\,\frac{H(2^{-i}ak_n/n) - H(2^{-(i+1)}ak_n/n)}{H(u_\gamma k_n/n) - H(v_\gamma k_n/n)}$$

$$= 2^{-i/2}\,\frac{H(ak_n/n) - H(2^{-1}ak_n/n)}{H(u_\gamma k_n/n) - H(v_\gamma k_n/n)}\,\prod_{m=1}^{i}\frac{H(2^{-m}ak_n/n) - H(2^{-(m+1)}ak_n/n)}{H(2^{-(m-1)}ak_n/n) - H(2^{-m}ak_n/n)} \le \Big\{\frac{a^{-\gamma}(2^\gamma-1)}{\gamma} + \varepsilon\Big\}\big\{2^{\gamma-1/2} + \varepsilon\big\}^i$$

for any $\varepsilon > 0$ small enough that $2^{\gamma-1/2} + \varepsilon < 1$ and all $n$ large enough, where we use the convention that $(2^\gamma-1)/\gamma = \log 2$ if $\gamma = 0$. Hence for all $n$ large enough and all $a > 0$ small enough,

$$\int_0^a\sqrt s\,d\varphi_n(s) \le \sqrt a\,\Big\{\frac{a^{-\gamma}(2^\gamma-1)}{\gamma} + \varepsilon\Big\}\big\{1 - 2^{\gamma-1/2} - \varepsilon\big\}^{-1}.$$

Since $\gamma < 1/2$, this bound is of order $a^{1/2-\gamma}$ and goes to zero as $a \downarrow 0$, so (2.10) follows. $\square$
Proof of Corollary 2. The first statement follows directly from Corollary 1. Calculation shows that for the covariance function $r_\gamma(\cdot)$ of the process $V_\gamma(\cdot)$ we have

$$r_\gamma(h) = 1 - \frac{\lambda_\gamma h^2}{2} + o(h^2) \quad\text{as } h \downarrow 0,$$

where $\lambda_\gamma$ is as in (1.16). Applying now Theorem 8.2.6 in Leadbetter, Lindgren and Rootzén [5], we get that for all $x \in \mathbb{R}$,

$$\lim_{T\to\infty} P\Big\{A(T)\Big\{\sup_{0\le t\le T} V_\gamma(t) - B_\gamma(T)\Big\} \le x\Big\} = \exp(-e^{-x}),$$

where $A(T)$ and $B_\gamma(T)$ are given in (1.14) and (1.15). This and the first statement now easily imply the second statement after a time change. $\square$
References

[1] S. Csörgő, E. Haeusler, and D.M. Mason, A probabilistic approach to the asymptotic distribution of sums of independent, identically distributed random variables, Adv. in Appl. Math. 9 (1988) 259-333.

[2] S. Csörgő, E. Haeusler, and D.M. Mason, The asymptotic distribution of extreme sums, Ann. Probab. (to appear).

[3] S. Csörgő and D.M. Mason, Intermediate sums and stochastic compactness of maxima, Submitted.

[4] B.V. Gnedenko and A.N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables (Addison-Wesley, Reading, MA, 1954).

[5] M.R. Leadbetter, G. Lindgren, and H. Rootzén, Extremes and Related Properties of Random Sequences and Processes (Springer, New York, 1983).

[6] D.M. Mason, A strong invariance theorem for the tail empirical process, Ann. Inst. H. Poincaré Sect. B (N.S.) 24 (1988) 491-506.

[7] G.R. Shorack and J.A. Wellner, Empirical Processes with Applications to Statistics (Wiley, New York, 1986).

[8] W. Vervaat, Functional limit theorems for processes with positive drift and their inverses, Z. Wahrsch. Verw. Gebiete 23 (1972) 245-253.