No. 1793
8/1989
THE ASYMPTOTIC DISTRIBUTION OF EXTREME SUMS
By SÁNDOR CSÖRGŐ¹, ERICH HAEUSLER, and DAVID M. MASON²
University of Szeged, University of Munich, and University of Delaware
Let $X_{1,n} \le \cdots \le X_{n,n}$ be the order statistics of $n$ independent random variables with
a common distribution function $F$ and let $k_n$ be positive integers such that $k_n \to \infty$ and
$k_n/n \to \alpha$ as $n \to \infty$, where $0 \le \alpha < 1$. We find necessary and sufficient conditions for the
existence of normalizing and centering constants $A_n > 0$ and $C_n$ such that the sequence
$$E_n = \frac{1}{A_n}\Big\{\sum_{i=1}^{k_n} X_{n+1-i,n} - C_n\Big\}$$
converges in distribution along subsequences of the integers $\{n\}$ to non-degenerate limits
and completely describe the possible subsequential limiting distributions. We also give
a necessary and sufficient condition for the existence of $A_n$ and $C_n$ such that $E_n$ be
asymptotically normal along a given subsequence, and with suitable $A_n$ and $C_n$ determine
the limiting distributions of $E_n$ along the whole sequence $\{n\}$ when $F$ is in the domain of
attraction of an extreme value distribution.
1. Introduction and Statements of Results. Let $X, X_1, X_2, \ldots$ be a sequence
of independent non-degenerate random variables with a common distribution function
$F(x) = P\{X \le x\}$, $x \in \mathbb{R}$, and for each integer $n \ge 1$ let $X_{1,n} \le \cdots \le X_{n,n}$ denote
the order statistics based on the sample $X_1, \ldots, X_n$. Throughout the paper $k_n$ will be a
sequence of integers such that
(1.1) $$1 \le k_n \le n,\ k_n \to \infty,\ \text{and}\ k_n/n \to 0\ \text{as}\ n \to \infty; \quad\text{or}\quad k_n = [n\alpha]\ \text{with}\ 0 < \alpha < 1,$$
where $[\cdot]$ denotes integer part. (We shall refer to the first case as the case $\alpha = 0$.) The
study of the asymptotic distribution of the (properly normalized and centered) sums of
extreme values
(1.2) $$\sum_{i=1}^{k_n} X_{n+1-i,n} = \sum_{i=n-k_n+1}^{n} X_{i,n}$$
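The two index conventions in (1.2) describe the same sum of the $k_n$ largest order statistics. As a quick numerical sanity check (ours, not part of the original development), the following sketch sums the $k$ largest values both ways:

```python
# Sanity check: sum_{i=1}^{k} X_{n+1-i,n} equals sum_{i=n-k+1}^{n} X_{i,n};
# both expressions sum the k largest order statistics of the sample.
def extreme_sum_top_down(x_sorted, k):
    n = len(x_sorted)
    # X_{n+1-i,n} for i = 1..k (1-based) is x_sorted[n-i] in 0-based Python
    return sum(x_sorted[n - i] for i in range(1, k + 1))

def extreme_sum_bottom_up(x_sorted, k):
    n = len(x_sorted)
    # X_{i,n} for i = n-k+1..n (1-based) is x_sorted[n-k..n-1] in 0-based Python
    return sum(x_sorted[i] for i in range(n - k, n))

sample = sorted([3.2, -1.0, 7.5, 0.4, 2.2, 9.1])
# top three order statistics are 3.2, 7.5, 9.1
assert abs(extreme_sum_top_down(sample, 3) - extreme_sum_bottom_up(sample, 3)) < 1e-12
assert abs(extreme_sum_top_down(sample, 3) - 19.8) < 1e-6
```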
¹ Partially supported by the Hungarian National Foundation for Scientific Research,
Grants 1808/86 and 457/88.
² Partially supported by the Alexander von Humboldt Foundation, NSF Grant DMS-8803209 and a Fulbright Grant.
AMS 1980 subject classification. Primary 60F05
Key words and phrases. Sums of extreme values, asymptotic distribution.
was initiated in [6] for the case when $\alpha = 0$ in (1.1) and under the restrictive assumption
that $F$ belongs to the domain of attraction of a non-normal stable law. Later the problem
was solved in [7] assuming that $F$ has a regularly varying upper tail, and by Lo [13] for all
$F$ which are in the domain of attraction of a Gumbel distribution in the sense of extreme
value theory. When put together, these results say that whenever $F$ is in the domain
of attraction of any one of the three possible limiting extreme value distributions for the
maximum $X_{n,n}$ and $\alpha = 0$ in (1.1), then the sums in (1.2), with suitable centering and
normalization, have a limiting distribution which is either non-normal stable or normal.
(See Corollary 2 below.) The first aim of the present paper is to give an exhaustive study
of the problem of the asymptotic distribution of the sums in (1.2) for an arbitrary $F$.
All three papers [6], [7], and [13] mentioned above employ a direct probabilistic approach based upon the asymptotic behavior of the uniform empirical distribution function
in conjunction with the behavior of the inverse or quantile function of $F$ defined as
$$Q(s) = \inf\{x : F(x) \ge s\}, \quad 0 < s \le 1, \qquad Q(0) = Q(0+).$$
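The generalized inverse above can be implemented directly; the following sketch (our illustration, with a hypothetical three-point distribution) computes $Q(s) = \inf\{x : F(x) \ge s\}$:

```python
import bisect

def quantile(atoms, probs, s):
    """Q(s) = inf{x : F(x) >= s} for a discrete distribution with the given
    sorted atoms and probabilities; the left-continuous generalized inverse."""
    cdf = []
    total = 0.0
    for p in probs:
        total += p
        cdf.append(total)
    # smallest atom whose cumulative probability reaches s
    idx = bisect.bisect_left(cdf, s)
    return atoms[idx]

# hypothetical distribution used only for this illustration
atoms, probs = [0.0, 1.0, 4.0], [0.2, 0.5, 0.3]
assert quantile(atoms, probs, 0.1) == 0.0   # F(0) = 0.2 >= 0.1
assert quantile(atoms, probs, 0.2) == 0.0   # infimum attained at the atom 0
assert quantile(atoms, probs, 0.21) == 1.0  # need F(x) >= 0.21
assert quantile(atoms, probs, 0.9) == 4.0
```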
This quantile-transform method for handling the whole sums $\sum_{i=1}^{n} X_i$ was first used in
[2] and [3] to obtain probabilistic proofs of the sufficiency parts of the normal and stable
convergence criteria, respectively. In [3] the effect of trimming off a finite number of
the smallest and largest summands is also considered. A refined version of this method
in [5] produces a complete asymptotic distribution theory of the finitely trimmed sums
$T_n(m,k) = \sum_{i=m+1}^{n-k} X_{i,n}$, where $m \ge 0$ and $k \ge 0$ are arbitrarily fixed integers. Included
in that paper is a new description in terms of the quantile function of the classical theory
concerning domains of attraction, domains of partial attraction and stochastic compactness
of the whole sum $T_n(0,0)$.
The paper [6] also initiated the study of trimmed sums of the form $T_n(m_n, k_n)$, where
$k_n$ is as in (1.1) with $\alpha = 0$ and $m_n$ satisfies the same condition as $k_n$. This was done in
[6] only in the domain of attraction case. A different refinement of the quantile-transform
method in [4] has established the complete asymptotic distributional theory for the sums
$T_n(m_n, k_n)$.
The second aim of this paper is to completely round off our study of sums of order
statistics by means of the quantile-transform method, so that the papers [4], [5], and the present
one together constitute a complete and unified general theory of the asymptotic distribution
of sums of order statistics of independent, identically distributed random variables.
We emphasize very strongly that the quantile-transform method itself is by no means
new. It has been in wide use in nonparametric statistics for many decades, and scattered
applications of it can be found in probability as well. A good source for its earlier use
is the book [19]. It is the approximation result for the uniform empirical and quantile
processes in weighted supremum metrics in [2], in combination with Poisson approximation
techniques for extremes, that has made this old method especially feasible for the treatment
of problems of the asymptotic distribution of various ordered portions of the sums of
independent, identically distributed random variables. The method was augmented in [4]
by a general pattern of necessity proofs which has already been applied in [5] and [15].
There are, of course, several other methods for studying sums of order statistics.
Besides classical approaches, various new methods have been invented recently to deal with
a number of different types of trimmed sums and the influence of extremes on the whole
sum. For descriptions of these methodologies along with discussions of their advantages
and disadvantages, including ours, see the monograph [11] edited by Hahn and Weiner,
where extensive lists of references can also be found.
Now we introduce some notation. The basic function for the present paper is the
left-continuous function
$$H(s) = -Q((1-s)-), \qquad 0 \le s < 1.$$
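For a concrete feel (our illustration, not in the original text), take $F$ standard exponential, so that $Q(s) = -\log(1-s)$ is continuous and $H(s) = -Q((1-s)-) = \log s$:

```python
import math

def Q_exp(s):
    # quantile function of the standard exponential distribution
    return -math.log(1.0 - s)

def H_exp(s):
    # H(s) = -Q((1-s)-); Q is continuous here, so the left limit is Q(1-s)
    return -Q_exp(1.0 - s)

for s in (0.1, 0.5, 0.9):
    assert abs(H_exp(s) - math.log(s)) < 1e-12
```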
Notice that if $U_{1,n} \le \cdots \le U_{n,n}$ are the order statistics of a sample of size $n$ from the
uniform distribution on $(0,1)$ then for each $n \ge 1$ we have the distributional equality
(1.3) $$\big(X_{n+1-i,n}\big)_{i=1}^{n} \stackrel{D}{=} \big(-H(U_{i,n})\big)_{i=1}^{n}.$$
For $0 \le s \le t \le 1$, consider the truncated variance function
$$\sigma^2(s,t) = \int_s^t\int_s^t (u \wedge v - uv)\,dH(u)\,dH(v),$$
where $u \wedge v = \min(u,v)$, and for a given sequence $k_n$ satisfying (1.1) set $b_n = \sigma(1/n, k_n/n)$
and
$$a_n = \begin{cases} b_n & \text{if } b_n > 0,\\ 1 & \text{otherwise.}\end{cases}$$
Note that $\sigma^2(s,t)$ is the variance of $\int_s^t B(u)\,dH(u)$, where $B(\cdot)$ is a Brownian bridge. Such
random variables will be seen to enter the picture quite naturally.
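As a numerical illustration (ours, under the simplifying assumption $H(u) = u$, so that $dH(u) = du$), the double integral can be checked against the known value $\mathrm{Var}\big(\int_0^1 B(u)\,du\big) = 1/12$:

```python
# Numerical check of sigma^2(s,t) = int int (u ^ v - uv) dH(u) dH(v)
# in the illustrative case H(u) = u, where sigma^2(0,1) should be 1/12,
# the variance of the integrated Brownian bridge.
def sigma2(s, t, n=400):
    h = (t - s) / n
    total = 0.0
    for i in range(n):
        u = s + (i + 0.5) * h  # midpoint rule in u
        for j in range(n):
            v = s + (j + 0.5) * h  # midpoint rule in v
            total += (min(u, v) - u * v) * h * h
    return total

val = sigma2(0.0, 1.0)
assert abs(val - 1.0 / 12.0) < 1e-3
```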
Choose and fix any sequence of positive constants $\beta_n$ such that $n\beta_n \to 0$
as $n \to \infty$. Then we have $P\{\beta_n \le U_{1,n} \le U_{n,n} \le 1-\beta_n\} \to 1$, as $n \to \infty$. The following
two sequences of functions will govern the asymptotic behavior:
$$\psi_n(x) = \begin{cases} k_n^{1/2}\,\dfrac{H(\beta_n) - H(k_n/n)}{n^{1/2}a_n} & \text{if } -\infty < x \le \dfrac{n\beta_n - k_n}{k_n^{1/2}},\\[6pt] k_n^{1/2}\,\dfrac{H(k_n/n + xk_n^{1/2}/n) - H(k_n/n)}{n^{1/2}a_n} & \text{if } \dfrac{n\beta_n - k_n}{k_n^{1/2}} < x \le \dfrac{n - n\beta_n - k_n}{k_n^{1/2}},\\[6pt] k_n^{1/2}\,\dfrac{H(1-\beta_n) - H(k_n/n)}{n^{1/2}a_n} & \text{if } \dfrac{n - n\beta_n - k_n}{k_n^{1/2}} < x < \infty,\end{cases}$$
and
$$\varphi_n(y) = \begin{cases} \dfrac{H(y/n) - H(1/n)}{n^{1/2}a_n} & \text{if } 0 < y \le n - n\beta_n,\\[6pt] \dfrac{H(1-\beta_n) - H(1/n)}{n^{1/2}a_n} & \text{if } n - n\beta_n < y < \infty.\end{cases}$$
The crucial (necessary and sufficient) conditions these functions will have to satisfy are
the following:

Conditions. Assume that there exist a subsequence $\{n_1\}$ of the positive integers
$\{n\}$ and a sequence of positive constants $A_{n_1}$ such that, for the sequence $\{k_{n_1}\}$ given in
(1.1),

(I) there exists a non-decreasing, left-continuous function $\psi$ defined on $(-\infty,\infty)$, with
$\psi(0) \le 0$ and $\psi(0+) \ge 0$ necessarily holding, such that
$$\psi_{n_1}^{*}(x) := \frac{n_1^{1/2}a_{n_1}}{A_{n_1}}\,\psi_{n_1}(x) \to \psi(x), \quad\text{as } n_1 \to \infty,$$
at every continuity point $x$ of $\psi$;

(II) there exists a non-decreasing, left-continuous function $\varphi$ defined on $(0,\infty)$, with
$\varphi(1) \le 0$ and $\varphi(1+) \ge 0$ necessarily holding, such that
$$\varphi_{n_1}^{*}(y) := \frac{n_1^{1/2}a_{n_1}}{A_{n_1}}\,\varphi_{n_1}(y) \to \varphi(y), \quad\text{as } n_1 \to \infty,$$
at every continuity point $y$ of $\varphi$;

(III) there exists a constant $0 \le a < \infty$ such that
$$\frac{n_1^{1/2}b_{n_1}}{A_{n_1}} \to a, \quad\text{as } n_1 \to \infty.$$
Conditions (I) and (II) are not directly related to each other. Condition (II) controls
the largest extremes while condition (I) governs the behavior of the smallest terms in
the sum (1.2). Condition (III) will say that there are two qualitatively different ways to
normalize this sum according to the two cases when $a > 0$ or $a = 0$. The exact probabilistic
meaning of the conditions along with equivalent forms expressed through $F$ are discussed
in Section 3, where illustrative examples are also constructed.
It will be shown in Lemma 2.5 in the next section that if conditions (II) and (III) both
hold, then $\varphi(y) \le a$ for all $y \in (0,\infty)$. Therefore the finite limit $\varphi(\infty) := \lim_{y\to\infty}\varphi(y) \le a$
exists and, as Lemma 2.5 will also show, for the non-decreasing, left-continuous, non-positive function $\varphi(\cdot) - \varphi(\infty)$ defined on $(0,\infty)$ we have
(1.4) $$\int_{\varepsilon}^{\infty}\big(\varphi(y) - \varphi(\infty)\big)^2\,dy < \infty \quad\text{for all}\quad \varepsilon > 0.$$
Consider now a standard (intensity one) right-continuous Poisson process $N(t)$, $0 \le t < \infty$, and two independent standard normal random variables $Z_1$ and $Z_2$ such that
$(Z_1, Z_2)$ is also independent of $N(\cdot)$. Given a function $\psi$ as in condition (I), a function $\varphi$
as in condition (II) satisfying (1.4), and constants $0 \le b < \infty$ and $0 \le \tau \le (1-\alpha)^{1/2}$,
where $\alpha$ is the limit in (1.1), consider the random variable
$$V(\varphi,\psi,b,\tau,\alpha) := \int_1^{\infty}(N(t)-t)\,d\varphi(t) + \int_0^1 N(t)\,d\varphi(t) - \varphi(1) + bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx,$$
with $Z(\tau,\alpha) = -\tau Z_1 + (1-\alpha-\tau^2)^{1/2}Z_2$, where the first integral exists almost surely as
an improper Riemann integral by (1.4). Finally, a natural centering sequence will be seen
to be
$$\mu_n := -n\int_{1/n}^{k_n/n} H(u)\,du - H\Big(\frac{1}{n}\Big).$$
Our principal results are contained in the following two theorems, where $\to_D$ denotes
convergence in distribution.
THEOREM 1. If the conditions (I), (II) and (III) are satisfied, then there exist a
subsequence $\{n_2\} \subset \{n_1\}$ and a sequence of positive numbers $l_{n_2}$ satisfying $l_{n_2} \to \infty$ and
$l_{n_2}/k_{n_2} \to 0$, as $n_2 \to \infty$, such that either $\sigma(l_{n_2}/n_2, k_{n_2}/n_2) > 0$ for all $n_2$, in which case
for some $0 \le b \le a$ with $n_2^{1/2}\sigma(l_{n_2}/n_2, k_{n_2}/n_2)/A_{n_2} \to b$ and some $0 \le \tau \le (1-\alpha)^{1/2}$,
(1.5) $$\tau_{n_2} := \Big(\frac{n_2}{k_{n_2}}\Big)^{1/2}\Big(1 - \frac{k_{n_2}}{n_2}\Big)\frac{1}{\sigma(l_{n_2}/n_2, k_{n_2}/n_2)}\int_{l_{n_2}/n_2}^{k_{n_2}/n_2} s\,dH(s) \to \tau,$$
or $\sigma(l_{n_2}/n_2, k_{n_2}/n_2) = 0$ for all $n_2$, in which case we put $b = \tau = 0$. In either case
(1.6) $$\frac{1}{A_{n_2}}\Big\{\sum_{i=1}^{k_{n_2}} X_{n_2+1-i,n_2} - \mu_{n_2}\Big\} \to_D V(\varphi,\psi,b,\tau,\alpha)$$
as $n_2 \to \infty$, where $\alpha$ is zero or positive according to the two cases in (1.1), $\varphi$ necessarily
satisfies (1.4) and $\psi$ necessarily satisfies
(1.7) $$\psi(x) \ge -a\tau/(1-\alpha), \qquad -\infty < x < \infty.$$
Moreover, if $a > 0$ and $\varphi \equiv 0$ then $b = a$ in (1.5), while if $a = 0$ then $\varphi(y) = 0$ for all
$y > 1$.

THEOREM 2. If there exist a subsequence $\{n_1\} \subset \{n\}$ and two sequences $A_{n_1} > 0$
and $C_{n_1}$ along it such that
(1.8) $$\frac{1}{A_{n_1}}\Big\{\sum_{i=1}^{k_{n_1}} X_{n_1+1-i,n_1} - C_{n_1}\Big\} \to_D V,$$
where $V$ is a non-degenerate random variable, then there exists a subsequence $\{n_2\} \subset \{n_1\}$
such that conditions (I), (II) and (III) hold along the subsequence $\{n_2\}$ for $A_{n_2}$ in (1.8) and
for appropriate functions $\psi$ and $\varphi$ and some constant $0 \le a < \infty$, with $\varphi$ satisfying (1.4)
and $\psi$ satisfying (1.7). The random variable $V$ in (1.8) is of the form $V(\varphi,\psi,b,\tau,\alpha) + c$ for
appropriate constants $0 \le b \le a$, $0 \le \tau \le (1-\alpha)^{1/2}$, and $-\infty < c < \infty$. Moreover, either
$\varphi \not\equiv 0$ or $\psi \not\equiv 0$ or $b > 0$.
Just as in the case of full sums in [5], the method of proof makes it possible to see
the effect on the limiting distribution of deleting a finite number of the largest summands
from the extreme sums $T_n(n-k_n, 0) = \sum_{i=1}^{k_n} X_{n+1-i,n}$ at each stage $n$, both in the
sufficiency and the necessity directions. Let $k \ge 0$ be any fixed integer. Then, replacing
$\sum_{i=1}^{k_n} X_{n+1-i,n}$ by
$$T_n(n-k_n, k) = \sum_{i=k+1}^{k_n} X_{n+1-i,n} = \sum_{i=n-k_n+1}^{n-k} X_{i,n},$$
$\mu_n$ by
$$\mu_n(k) := -n\int_{(k+1)/n}^{k_n/n} H(u)\,du - H\Big(\frac{k+1}{n}\Big),$$
and $V(\varphi,\psi,b,\tau,\alpha)$ by
$$V_k(\varphi,\psi,b,\tau,\alpha) := \int_{S_{k+1}}^{\infty}(N(t)-t)\,d\varphi(t) - \int_1^{S_{k+1}} t\,d\varphi(t) + k\varphi(S_{k+1}) - \varphi(1) + bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx,$$
where $S_{k+1}$ is the $(k+1)$st jump-point of the Poisson process $N(\cdot)$, both Theorem 1 and
Theorem 2 remain true word for word. This can be seen by simple adjustments of the
proofs presented in Section 2 for the case $k = 0$. (Cf. [8] for details.) That the case $k = 0$
formally agrees with the above theorems, i.e. that $V_0(\varphi,\psi,b,\tau,\alpha) = V(\varphi,\psi,b,\tau,\alpha)$, is
easily seen by noting that
(1.9) $$\int_{S_1}^{\infty}(N(t)-t)\,d\varphi(t) - \int_1^{S_1} t\,d\varphi(t) = \int_1^{\infty}(N(t)-t)\,d\varphi(t) + \int_0^1 N(t)\,d\varphi(t).$$
It is also straightforward to formulate Theorems 1 and 2 (or their generalized versions
just described) for the sum of the lower extremes $\sum_{i=1}^{m_n} X_{i,n}$, where $m_n \to \infty$ and $m_n/n \to 0$ as $n \to \infty$, or $m_n = [\beta n]$ with $0 < \beta < 1$. The limiting random variable is of the form
$-V(\varphi,\psi,b,\tau,\beta)$ with appropriate ingredients. In fact, if at least one of $\alpha$ and $\beta$ is zero
then the two convergence statements hold jointly with the limiting random variables being
independent.
Now we come back to the principal results and formulate some consequences of them.

COROLLARY 1. Let $\{n_1\}$ be any subsequence of the positive integers. There exist
sequences of constants $A_{n_1}^{*} > 0$ and $C_{n_1}$ such that
(1.10) $$\frac{1}{A_{n_1}^{*}}\Big\{\sum_{i=1}^{k_{n_1}} X_{n_1+1-i,n_1} - C_{n_1}\Big\} \to_D Z$$
holds as $n_1 \to \infty$ for a non-degenerate normal random variable $Z$ if and only if (I) and
(II) are satisfied with $A_{n_1} = n_1^{1/2}a_{n_1}$, $\psi \equiv 0$ and $\varphi \equiv 0$, in which case (1.10) is true with
the choice $A_{n_1}^{*} = n_1^{1/2}a_{n_1}$ and $C_{n_1} = \mu_{n_1}$ with $Z$ being standard normal.
On heuristic grounds one expects that the existence of normalizing and centering
constants $d_n > 0$ and $c_n$ such that
(1.11) $$\frac{X_{n,n} - c_n}{d_n} \to_D Y \quad\text{with } Y \text{ non-degenerate}$$
implies that the suitably centered and normalized extreme sums $T_n(n-k_n, 0)$ or $T_n(n-k_n, k)$ also converge in distribution to a non-degenerate variable along the whole sequence
$\{n\}$. This is the content of our second corollary below.

As was pointed out in [8], results from de Haan [10] imply that (1.11) holds if and
only if
(1.12) $$\lim_{s\downarrow 0}\frac{H(xs) - H(ys)}{H(vs) - H(ws)} = \frac{x^{-c} - y^{-c}}{v^{-c} - w^{-c}} \quad\text{for some } -\infty < c < \infty$$
for all distinct $0 < x, y, v, w < \infty$, where for $c = 0$ the limit is understood as $(\log x - \log y)/(\log v - \log w)$. (The case $c = 0$, going back to Mejzler [16], is explicitly stated as
Theorem 2.4.1 in [10].) If condition (1.12) holds, the constants $d_n$ and $c_n$ can be chosen
so that when $c > 0$, then $P\{Y \le y\} = \exp(-y^{-1/c})$, $y > 0$; when $c = 0$, then $P\{Y \le y\} = \exp(-\exp(-y))$, $-\infty < y < \infty$; and when $c < 0$, then $P\{Y \le y\} = \exp(-(-y)^{-1/c})$, $y < 0$,
and by Gnedenko's classical theorem these are the only possible limiting types. Whenever
(1.12) holds, we write $F \in \mathfrak{D}(c)$.
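As an illustration of (1.12) (ours, using the exact Pareto-type case $H(s) = -s^{-c}$ with $c > 0$ rather than a general $F \in \mathfrak{D}(c)$), the ratio in (1.12) is constant in $s$ and equals the stated limit:

```python
# For H(s) = -s**(-c) with c > 0 (exact Pareto / Frechet-domain case),
# (H(x s) - H(y s)) / (H(v s) - H(w s)) = (x**-c - y**-c) / (v**-c - w**-c)
# for every s > 0, so condition (1.12) holds with limit independent of s.
def H(s, c):
    return -s ** (-c)

def ratio(x, y, v, w, s, c):
    return (H(x * s, c) - H(y * s, c)) / (H(v * s, c) - H(w * s, c))

c = 0.7
x, y, v, w = 2.0, 5.0, 1.5, 3.0
limit = (x ** -c - y ** -c) / (v ** -c - w ** -c)
for s in (1e-1, 1e-3, 1e-6):  # the ratio is already the limit at every s
    assert abs(ratio(x, y, v, w, s, c) - limit) < 1e-9
```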
For $c > 1/2$ set
$$D(c) := \Big(\frac{c(2c-1)}{2}\Big)^{1/2}\Big\{\int_1^{\infty}(N(t)-t)\,t^{-c-1}\,dt + \int_0^1 N(t)\,t^{-c-1}\,dt\Big\}.$$
According to Corollary 3 in [5], the random variable $D(c)$ is stable with index $1/c$. This
fact can also be derived after some calculations from the representations given by Ferguson
and Klass [9] and LePage, Woodroofe, and Zinn [12]. For the case when $\alpha > 0$ in (1.1),
consider
(1.13) $$\psi_\alpha(x) := \begin{cases} 0 & \text{if } x \le 0,\\ \alpha^{1/2}\big(H(\alpha+) - H(\alpha)\big)\big/\sigma(0,\alpha) & \text{if } x > 0,\end{cases}$$
and note that when (1.12) holds then
(1.14) $$\sigma(0,\alpha)\ \begin{cases} < \infty & \text{if } c < 1/2,\\ = \infty & \text{if } c > 1/2.\end{cases}$$
If $c = 1/2$, then $\sigma(0,\alpha)$ can be finite or infinite. Define $\sigma(0,0) = \sigma(0,0+)$. Set
$$M(\alpha) := \begin{cases} 0 & \text{if } \alpha = 0,\\ \displaystyle\int_{-Z(\alpha)}^{0}\psi_\alpha(x)\,dx & \text{if } \alpha > 0,\end{cases}$$
where $Z(\alpha) = -\tau_\alpha Z_1 + (1-\alpha-\tau_\alpha^2)^{1/2}Z_2$ with
$$\tau_\alpha = \frac{1-\alpha}{\alpha^{1/2}\,\sigma(0,\alpha)}\int_0^{\alpha} s\,dH(s) \le (1-\alpha)^{1/2},$$
and, finally, put
$$W(\alpha) := Z_1 + M(\alpha).$$
Note that for $\alpha > 0$,
$$M(\alpha) = \alpha^{1/2}\big(H(\alpha+) - H(\alpha)\big)\min(0, Z(\alpha))\big/\sigma(0,\alpha).$$
Versions of the three subcases $c > 1/2$; $-\infty < c \le 1/2$, $c \ne 0$; and $c = 0$ of the case $\alpha = 0$
in the following corollary were proven in [6], [7] and [13], respectively. The full form of the
corollary follows presently from Theorem 1.

COROLLARY 2. Let $\{k_n\}$ satisfy (1.1). Whenever $F \in \mathfrak{D}(c)$ for some $-\infty < c < \infty$,
$$\frac{1}{n^{1/2}a_n}\Big\{\sum_{i=1}^{k_n} X_{n+1-i,n} - \mu_n\Big\} \to_D \begin{cases} D(c) & \text{if } c > 1/2,\\ Z_1 & \text{if } c \le 1/2 \text{ and } \sigma(0,\alpha) = \infty,\\ W(\alpha) & \text{if } c = 1/2 \text{ and } \sigma(0,\alpha) < \infty,\\ & \text{or } c < 1/2.\end{cases}$$

If we replace the sum $T_n(n-k_n, 0)$ by $T_n(n-k_n, k)$ and $\mu_n$ by $\mu_n(k)$ in Corollary
2, where $k \ge 0$ is a fixed integer, then the limits remain exactly the same in the cases of
$c \le 1/2$, while if $c > 1/2$, then $D(c)$ should be replaced by the variable $D_k(c)$ which arises
from $D(c)$ in the same way as $V_k(\varphi,\psi,b,\tau,\alpha)$ arises from $V(\varphi,\psi,b,\tau,\alpha)$.
Again, $D_0(c) = D(c)$.
Of course, if the quantile function $Q$ is continuous at $1 - \alpha$, then $M(\alpha) = 0$ also in
the case when $\alpha > 0$. It was Stigler [20] who discovered that such terms generally enter in
the limiting distribution of the classical trimmed mean. So the third subcase of the case
$\alpha > 0$ in Corollary 2 may be looked upon as a Stigler phenomenon for sums of extreme
values. In this regard we also note that in the case when $\alpha > 0$ in condition (1.1) we could
have worked with a sequence $k_n$ more general than $k_n = [\alpha n]$. Namely, if $k_n$ is a sequence
of integers such that $1 \le k_n \le n$ and $n^{1/2}(k_n/n - \alpha) \to 0$ as $n \to \infty$ with some $0 < \alpha < 1$,
then all the results so far stated remain valid without change provided that we modify the
definition of $\psi_n$ by replacing $k_n/n$ by $\alpha$ in the arguments of $H$ in its numerator. However,
then the proofs would have to be separated for the cases $\alpha = 0$ and $\alpha > 0$. (For the slight
modifications needed for the case $\alpha > 0$ we refer to [15].) In order to keep a unified proof
for the two cases we have chosen to work with the special sequence $k_n = [\alpha n]$ here. A
similar remark applies to our version of Stigler's theorem, formulated as Theorem 5 in [4].
Consider now linear combinations of $k_n$ extreme values of the form
$$L_n = \sum_{i=1}^{k_n} c_{i,n}X_{n+1-i,n},$$
where $c_{i,n}$, $i = 1,\ldots,n$, $n \ge 1$, is a triangular array of weights. Under the same regularity
conditions as in [15] one can easily formulate and prove the analogues for $L_n$ of the results
in [15] on the asymptotic distribution of linear combinations of the middle order statistics.
The proofs consist of a straightforward technical extension of those in the present paper
combined with details from [15]. To keep, however, the main ideas easily accessible to a
wider audience we decided to only consider sums of extremes in the present paper.
We take this opportunity to correct some minor oversights and misprints in the paper
[4]. First, Lemma 2.6 is not correct as stated. The specification that $f(0) = 0$ must
be changed to $f(0) \le 0$ and $f(0+) \ge 0$. For this reason, whenever in the statements of
results or in the proofs a function is specified to be zero at zero, this must be changed
to the requirement that it be non-positive on $(-\infty, 0]$ and non-negative on $(0,\infty)$. In
particular, in Theorem 1 the requirement that $\psi_1(0) = \psi_2(0) = 0$ should be weakened
to read $\psi_i(0) \le 0$, $\psi_i(0+) \ge 0$, $i = 1,2$, with analogous changes needed for Theorem 2.
Also, all integrals of the form $\int_{-z}^{0}(z+x)\,dg(x)$ should be read as
$$\int_{-z}^{0} x\,dg(x) - zg(-z) = -\int_{-z}^{0} g(x)\,dx,$$
whenever $(z,g) = (Z_i, \psi_i), (Z_i^{*}, \psi_i), (Z_i, \varphi_i)$, $i = 1,2$. With these corrections the proofs
proceed as before. Finally, the $X_{1,n}$ appearing in Theorems 2, 3, and 5 should be $X_{i,n}$,
the subscript 1 in formulae (1.12), (1.13), and (1.15) should be $i$, and the expressions for
$T_1$ and $T_2$ on page 678 should be divided by $\sigma(\alpha,\beta)$.
2. Proofs. Introducing the empirical distribution function
(2.1) $$G_n(u) = \frac{1}{n}\sum_{i=1}^{n} 1(U_{i,n} \le u), \qquad 0 \le u \le 1,$$
where $1(\cdot)$ is the indicator function, using (1.3), and integrating by parts,
$$\sum_{i=1}^{k_n} X_{n+1-i,n} - \mu_n \stackrel{D}{=} -\sum_{i=1}^{k_n} H(U_{i,n}) - \mu_n$$
$$= -\int_0^{U_{k_n,n}} nH(u)\,dG_n(u) + \int_{1/n}^{k_n/n} nH(u)\,du + H(1/n)$$
$$= -nG_n(U_{k_n,n})H(U_{k_n,n}) + \int_0^{U_{k_n,n}} nG_n(u)\,dH(u) + k_nH(k_n/n) - \int_{1/n}^{k_n/n} nu\,dH(u)$$
$$= \int_{U_{1,n}}^{1/n} nG_n(u)\,dH(u) + \int_{1/n}^{k_n/n} n(G_n(u) - u)\,dH(u) + \int_{k_n/n}^{U_{k_n,n}} n\Big(G_n(u) - \frac{k_n}{n}\Big)\,dH(u).$$
Here the sum of the first two terms can be written as
$$\int_{U_{1,n}}^{1/n}(nG_n(u)-1)\,dH(u) + H(1/n) - H(U_{1,n}) + \int_{1/n}^{k_n/n} n(G_n(u)-u)\,dH(u)$$
$$= \int_{U_{1,n}}^{1/n}(nu-1)\,dH(u) + H(1/n) - H(U_{1,n}) + \int_{U_{1,n}}^{m_n/n} n(G_n(u)-u)\,dH(u)$$
$$\qquad + \int_{m_n/n}^{l_n/n} n(G_n(u)-u)\,dH(u) + \int_{l_n/n}^{k_n/n} n(G_n(u)-u)\,dH(u),$$
where, for the time being, $m_n$ and $l_n$ are any real numbers such that $1 \le m_n \le l_n \le k_n$.
Hence
(2.2) $$\frac{1}{A_n}\Big\{\sum_{i=1}^{k_n} X_{n+1-i,n} - \mu_n\Big\} \stackrel{D}{=} \Delta_n^{(1)}(m_n) + \Delta_n^{(2)}(m_n, l_n) + \Delta_n^{(3)}(l_n, k_n),$$
where
$$\Delta_n^{(1)}(m_n) = \frac{1}{A_n}\int_{nU_{1,n}}^{m_n}\big(nG_n(u/n) - u\big)\,dH(u/n) + \frac{1}{A_n}\int_{nU_{1,n}}^{1}(u-1)\,dH(u/n) - \frac{H(U_{1,n}) - H(1/n)}{A_n},$$
and
$$\Delta_n^{(2)}(m_n, l_n) = \frac{1}{A_n}\int_{m_n}^{l_n}\big(nG_n(u/n) - u\big)\,dH(u/n),$$
$$\Delta_n^{(3)}(l_n, k_n) = \frac{1}{A_n}\int_{l_n}^{k_n}\big(nG_n(u/n) - u\big)\,dH(u/n) + \frac{1}{A_n}\int_{k_n}^{nU_{k_n,n}}\big(nG_n(u/n) - k_n\big)\,dH(u/n).$$
This distributional equality is of course true without regard to the underlying probability
space where the order statistics $U_{1,n},\ldots,U_{n,n}$ are defined. In the proof of Theorem 1
we shall be working on a specially constructed space $(\Omega, \mathcal{A}, P)$ described in [2,3,5]. It
carries two independent sequences $\{Y_n^{(j)}, n \ge 1\}$, $j = 1,2$, of independent, exponentially
distributed random variables with mean one and a sequence $\{B_n(t), 0 \le t \le 1;\, n \ge 1\}$ of
Brownian bridges with the following property: for the $G_n$ in (2.1) and the uniform quantile
function $U_n(s) = U_{i,n}$ for $(i-1)/n < s \le i/n$, $i = 1,\ldots,n$, $U_n(0) = U_{1,n}$, determined by the
order statistics $U_{k,n} = S_k(n)/S_{n+1}(n)$, $k = 1,\ldots,n$, given by $S_k(n) = Y_1(n) + \cdots + Y_k(n)$,
where $Y_j(n) = Y_j^{(1)}$ for $j = 1,\ldots,[n/2]$ and $Y_j(n) = Y_{n+2-j}^{(2)}$ for $j = [n/2]+1,\ldots,n+1$,
we have
(2.3) $$\Delta_n(\nu) := \sup_{n^{-1} \le s \le 1-n^{-1}} \big|n^{1/2}\{G_n(s) - s\} - B_n(s)\big|\big/(s(1-s))^{1/2-\nu} = O_P(n^{-\nu})$$
and
(2.4) $$\sup_{n^{-1} \le s \le 1-n^{-1}} \big|n^{1/2}\{s - U_n(s)\} - B_n(s)\big|\big/(s(1-s))^{1/2-\nu} = O_P(n^{-\nu})$$
for any fixed $0 \le \nu < 1/4$. Note that it is justified to call the above $U_{k,n}$ "order statistics"
since it is well known that $(S_1(n)/S_{n+1}(n),\ldots,S_n(n)/S_{n+1}(n))$ equals in distribution
the vector of order statistics of $n$ independent random variables uniformly distributed
on $(0,1)$. We will also need the Poisson process $N(\cdot)$ with jump-points $S_n = S_n^{(1)} = Y_1^{(1)} + \cdots + Y_n^{(1)}$, $n \ge 1$, defined to be right-continuous by setting
(2.5) $$N(t) = \sum_{k=1}^{\infty} 1(S_k \le t), \qquad 0 \le t < \infty.$$
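The right-continuous counting definition (2.5) can be sketched directly; in this illustration (ours) the jump-points are fixed numbers rather than sums of exponential variables:

```python
import bisect

def make_counting_process(jump_points):
    """Return N(t) = #{k : S_k <= t}, right-continuous by construction,
    for an increasing sequence of jump-points S_1 < S_2 < ..."""
    S = sorted(jump_points)
    def N(t):
        return bisect.bisect_right(S, t)  # counts jumps at or before t
    return N

N = make_counting_process([0.5, 1.2, 3.0])
assert N(0.4) == 0
assert N(0.5) == 1   # right-continuity: the jump at 0.5 is already counted
assert N(2.0) == 2
assert N(3.0) == 3
```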
The behavior of the term $\Delta_n^{(3)}(l_n, k_n)$ is described in Lemma 2.4 below, which requires
three preparatory lemmas, the first of which is crucial at many other places too.
LEMMA 2.1. If $0 < s < t \le 1 - \varepsilon$, where $0 < \varepsilon < 1$, then
$$\varepsilon^{1/2} \le \frac{\sigma(s,t)}{s^{1/2}\big(H(t) - H(s)\big)},$$
where $0/0 := 1$.

Proof. If $s \le u \le v < t$ or $s \le v \le u < t$, then $u \wedge v - uv \ge \varepsilon s$. Hence
$$\sigma^2(s,t) \ge \varepsilon s\big(H(t) - H(s)\big)^2,$$
from which the result follows upon noting also that $\sigma(s,t) = 0$ if and only if $H(s) = H(t)$,
which justifies the definition $0/0 := 1$. □

LEMMA 2.2. For all $\beta \ne 1/2$ and $0 < s < t \le 1 - \varepsilon$, where $0 < \varepsilon < 1$,
$$\int_s^t u^{\beta}\,dH(u) \le \frac{\sigma(s,t)}{\varepsilon^{1/2}}\,\frac{1}{1-2\beta}\,\big\{s^{\beta-1/2} - 2\beta\, t^{\beta-1/2}\big\}.$$

Proof. Integrating by parts we see that
$$\int_s^t u^{\beta}\,dH(u) = s^{\beta}\big(H(t) - H(s)\big) + \beta\int_s^t u^{\beta-1}\big(H(t) - H(u)\big)\,du.$$
Thus the lemma follows from Lemma 2.1. □
In the following two lemmas $k_n$ is as in (1.1) and $l_n$ is any sequence of positive numbers
such that
(2.6) $$l_n \le k_n, \quad l_n \to \infty, \quad\text{and}\quad l_n/k_n \to 0 \quad\text{as } n \to \infty.$$
LEMMA 2.3. Suppose there exists a subsequence $\{n_2\} \subset \{n\}$ such that
$\sigma^2(l_{n_2}/n_2, k_{n_2}/n_2) > 0$ for all $n_2 \in \{n_2\}$ and consider the two-dimensional random vectors
$$\big(Z_{n_2}^{(1)}, Z_{n_2}^{(2)}\big) := \Big(\frac{-\int_{l_{n_2}/n_2}^{k_{n_2}/n_2} B_{n_2}(u)\,dH(u)}{\sigma(l_{n_2}/n_2, k_{n_2}/n_2)},\; \frac{-B_{n_2}(k_{n_2}/n_2)}{(k_{n_2}/n_2)^{1/2}}\Big).$$
Then there exist a subsequence $\{n_3\} \subset \{n_2\}$ and a number $0 \le \tau \le (1-\alpha)^{1/2}$ such that,
as $n_3 \to \infty$,
$$\big(Z_{n_3}^{(1)}, Z_{n_3}^{(2)}\big) \to_D \big(Z_1, -Z(\tau,\alpha)\big).$$

Proof. First notice that $(Z_{n_2}^{(1)}, Z_{n_2}^{(2)})$ is, for each $n_2$, a bivariate normal random vector
with mean vector zero and covariance matrix
$$\begin{pmatrix} 1 & \tau_{n_2}\\ \tau_{n_2} & 1 - k_{n_2}/n_2\end{pmatrix},$$
where
$$0 \le \tau_{n_2} = \Big(\frac{n_2}{k_{n_2}}\Big)^{1/2}\Big(1 - \frac{k_{n_2}}{n_2}\Big)\frac{1}{\sigma(l_{n_2}/n_2, k_{n_2}/n_2)}\int_{l_{n_2}/n_2}^{k_{n_2}/n_2} s\,dH(s) \le \Big(1 - \frac{k_{n_2}}{n_2}\Big)^{1/2}.$$
Thus by (1.1) there exist a subsequence $\{n_3\} \subset \{n_2\}$ and a number $0 \le \tau \le (1-\alpha)^{1/2}$
such that $\tau_{n_3} \to \tau$ as $n_3 \to \infty$, which implies the lemma. □
LEMMA 2.4. Suppose that conditions (I) and (III) hold along some $\{n_1\}$, and let $k_n$
and $l_n$ be as in (1.1) and (2.6). Then there exist a subsequence $\{n_3\} \subset \{n_1\}$ and numbers
$0 \le b \le a$ and $0 \le \tau \le (1-\alpha)^{1/2}$ such that
$$\Delta_{n_3}^{(3)}(l_{n_3}, k_{n_3}) \to_D bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx.$$

Proof. There are two cases. First we deal with

Case 1: There exists a subsequence $\{n_2\} \subset \{n_1\}$ such that $\sigma(l_{n_2}/n_2, k_{n_2}/n_2) > 0$ for
all $n_2 \in \{n_2\}$. Now let $\{n_3\} \subset \{n_2\}$ be the subsequence given by the proof of Lemma 2.3.

To see the behavior of the first term in $\Delta_{n_3}^{(3)}(l_{n_3}, k_{n_3})$ in (2.2), fix any $0 < \nu < 1/4$.
Then for the random variable $Z_{n_3}$ defined by the equation
$$Z_{n_3} := \frac{-\int_{l_{n_3}/n_3}^{k_{n_3}/n_3} n_3^{1/2}\big(G_{n_3}(u) - u\big)\,dH(u)}{\sigma(l_{n_3}/n_3, k_{n_3}/n_3)},$$
by Lemma 2.2 and (2.3) we have
$$|Z_{n_3} - Z_{n_3}^{(1)}| \le \Delta_{n_3}(\nu)\int_{l_{n_3}/n_3}^{k_{n_3}/n_3} s^{1/2-\nu}\,dH(s)\Big/\sigma(l_{n_3}/n_3, k_{n_3}/n_3) = O_P(n_3^{-\nu})\,O\big((l_{n_3}/n_3)^{-\nu}\big) = O_P(l_{n_3}^{-\nu}) = o_P(1)$$
by (2.6) as $n_3 \to \infty$. By condition (III) we can assume without loss of generality that $\{n_3\}$
is chosen in such a way that for some $0 \le b \le a$ we also have
$$\frac{n_3^{1/2}\sigma(l_{n_3}/n_3, k_{n_3}/n_3)}{A_{n_3}} \to b,$$
and hence we have
(2.7) $$\frac{n_3^{1/2}\sigma(l_{n_3}/n_3, k_{n_3}/n_3)}{A_{n_3}}\,Z_{n_3} \to_D bZ_1.$$
Note that this step corresponds to Lemma 2.2 in [4].

Let $0 < M < \infty$ be fixed. Adapting the proof of (2.6) in Lemma 2.3 in [4] to the
present situation (i.e. replacing the function $\psi_{1,n}$ there by the function $\psi_n^{*}$ of assumption
(I) but otherwise proceeding line by line in exactly the same way), by (2.3) and (2.4) we
obtain
(2.8) $$\frac{1}{A_{n_3}}\int_{k_{n_3}}^{n_3U_{k_{n_3},n_3}}\big(n_3G_{n_3}(u/n_3) - k_{n_3}\big)\,dH(u/n_3) - \int_{-Z_{n_3}^{(2)}}^{0}\big(Z_{n_3}^{(2)} + x\big)\,d\psi_{n_3}^{*}(x) = o_P(1)$$
as $n_3 \to \infty$. Using now Lemma 2.3, we obtain exactly as in the proof of Lemma 2.4 in [4] that
(2.9) $$bZ_{n_3}^{(1)} + \int_{-Z_{n_3}^{(2)}}^{0}\big(Z_{n_3}^{(2)} + x\big)\,d\psi_{n_3}^{*}(x)\,1\big(|Z_{n_3}^{(2)}| < M\big) \to_D bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx\;1\big(|Z(\tau,\alpha)| < M\big).$$
Now (2.7), (2.8), (2.9), the simple little argument finishing the proof of the first part of
Theorem 1 in [4], and Theorem 4.2 in Billingsley [1] give the lemma in the first case.

Case 2: For all $n_1 \in \{n_1\}$ sufficiently large, $\sigma(l_{n_1}/n_1, k_{n_1}/n_1) = 0$. Then the first term
of $\Delta_{n_1}^{(3)}(l_{n_1}, k_{n_1})$ is almost surely zero for all these $n_1$. Thus, with $b = \tau = 0$, a
simplified version of the argument in Case 1 gives the lemma. □
Now we turn to the behavior of the terms $\Delta_n^{(1)}$ and $\Delta_n^{(2)}$ in (2.2). First we need the
following.

LEMMA 2.5. Suppose that conditions (II) and (III) hold along some subsequence
$\{n_1\}$. Then $\varphi(y) \le a$ for all $0 < y < \infty$ and (1.4) holds true.

Proof. Since $a \ge 0$ and $\varphi(y) \le 0$ for $0 < y \le 1$, it is sufficient to consider $y > 1$. Then
for all sufficiently large $n_1$, whenever $\sigma(1/n_1, y/n_1) > 0$,
$$\varphi_{n_1}^{*}(y) = \frac{n_1^{1/2}b_{n_1}}{A_{n_1}}\,\frac{\sigma(1/n_1, y/n_1)}{\sigma(1/n_1, k_{n_1}/n_1)}\,\frac{H(y/n_1) - H(1/n_1)}{n_1^{1/2}\sigma(1/n_1, y/n_1)} \le (1+\varepsilon)\,\frac{n_1^{1/2}b_{n_1}}{A_{n_1}}$$
by an application of Lemma 2.1, where $\varepsilon > 0$ is any preassigned number. (Of course, when
$\sigma(1/n_1, y/n_1) = 0$ then $\varphi_{n_1}^{*}(y) = 0$.) The first statement is therefore clear.

Since $\varphi$ is non-decreasing, the limit $\varphi(\infty) := \lim_{y\to\infty}\varphi(y) \le a$ exists. Thus $\bar\varphi(y) := \varphi(y) - \varphi(\infty)$ is a non-positive, left-continuous, non-decreasing function on $(0,\infty)$, and the
fact that conditions (II) and (III) imply
$$\int_{\varepsilon}^{\infty}\int_{\varepsilon}^{\infty}(u \wedge v)\,d\bar\varphi(u)\,d\bar\varphi(v) < \infty \quad\text{for all } \varepsilon > 0$$
can be shown exactly as in the proof of Lemma 2.5 in [5]. Since $\bar\varphi(y) \to 0$ as $y \to \infty$, this
is sufficient to conclude that (1.4) indeed follows. □
LEMMA 2.6. Suppose that conditions (II) and (III) hold along some subsequence $\{n_1\}$
and $k_n$ is as in (1.1). Then there exist sequences of positive constants $m_{n_1}$ and $l_{n_1}$ such
that $1 \le m_{n_1} \le l_{n_1} \le k_{n_1}$ and $m_{n_1} \to \infty$, $l_{n_1}/m_{n_1} \to \infty$, $k_{n_1}/l_{n_1} \to \infty$,
$$\Delta_{n_1}^{(1)}(m_{n_1}) \to_D V_1(\varphi) := \int_1^{\infty}(N(t)-t)\,d\varphi(t) + \int_0^1 N(t)\,d\varphi(t) - \varphi(1)$$
with $\varphi$ satisfying (1.4), and $\Delta_{n_1}^{(2)}(m_{n_1}, l_{n_1}) \to_P 0$ as $n_1 \to \infty$. Moreover, if $\varphi \equiv 0$ then
$\{l_{n_1}\}$ can be chosen so that
(2.10) $$\frac{n_1^{1/2}\sigma(l_{n_1}/n_1, k_{n_1}/n_1)}{A_{n_1}} \to a.$$

Proof. Let $1 < m < l$ be arbitrarily fixed continuity points of $\varphi$. Then, noting that
in [5] the proofs of Lemmas 2.1, 2.2 and 2.3 require only the assumption of the weak
convergence of functions corresponding to the present sequence $\{\varphi_{n_1}^{*}\}$, condition (II) and
these lemmas together with (2.10) in the proof of Lemma 2.2, all in [5], directly imply that
$$\Delta_{n_1}^{(1)}(m) \to_D \int_1^{m}(N(t)-t)\,d\varphi(t) + \int_0^1 N(t)\,d\varphi(t) - \varphi(1)$$
and
$$\Delta_{n_1}^{(2)}(m,l) \to_D V_{m,l} := \int_m^{l}(N(t)-t)\,d\varphi(t).$$
Since we have (1.4) by Lemma 2.5,
$$\limsup_{l,m\to\infty,\,l>m} EV_{m,l}^2 \le \lim_{m\to\infty}\int_m^{\infty}\int_m^{\infty}(u \wedge v)\,d\varphi(u)\,d\varphi(v) = 0.$$
Thus $V_{m,l} \to_P 0$ as $m,l \to \infty$, $l \ge m$. Using these findings, the construction of the sequences $\{m_{n_1}\}$ and $\{l_{n_1}\}$ is accomplished by a routine diagonal selection procedure.

Finally, choose any number $d \ge 1$ and let $c_1$ and $c_2$ be continuity points of $\varphi$ such
that $c_1 \le d \le c_2$. Then, using (II) and a weak convergence argument, we obtain
$$\limsup_{n_1\to\infty}\, n_1\sigma^2(1/n_1, d/n_1)\big/A_{n_1}^2 \le \int_{0}^{c_2}\int_{0}^{c_2}(u \wedge v)\,d\varphi(u)\,d\varphi(v) < \infty,$$
so that if $\varphi \equiv 0$, then for each number $d \ge 1$,
$$n_1^{1/2}\sigma(1/n_1, d/n_1)\big/A_{n_1} \to 0.$$
From this the last statement of the lemma also follows. □
Proof of Theorem 1. Choose $\{m_{n_1}\}$ and $\{l_{n_1}\}$ according to Lemma 2.6. Since
$\sigma(l_{n_1}/n_1, k_{n_1}/n_1) \le b_{n_1}$, by condition (III) and Lemmas 2.4 and 2.6 there exists a subsequence $\{n_2\} \subset \{n_1\}$ such that we have (1.5) and
$$\Delta_{n_2}^{(3)}(l_{n_2}, k_{n_2}) \to_D bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx$$
as $n_2 \to \infty$, where $\varphi$ satisfies (1.4). Moreover, $\{n_2\}$ can clearly be chosen so that in
addition either $\sigma(l_{n_2}/n_2, k_{n_2}/n_2) > 0$ for all $n_2$, or $\sigma(l_{n_2}/n_2, k_{n_2}/n_2) = 0$ for all $n_2$.

Since $l_{n_1}/m_{n_1} \to \infty$, an elementary argument similar to the one used in the proof
of Theorem 2 of Mason [14] based on Satz 4 of Rossberg [18] shows that $\Delta_{n_2}^{(1)}(m_{n_2})$ and
$\Delta_{n_2}^{(3)}(l_{n_2}, k_{n_2})$ are asymptotically independent. Therefore, on account of (2.2) and the
above convergence relations, (1.6) follows since by (1.9),
(2.11) $$V(\varphi,\psi,b,\tau,\alpha) = V_1(\varphi) + bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx.$$

If $\varphi \equiv 0$ and $0 < a < \infty$ in (III), then by Lemma 2.6 the sequence $\{l_{n_1}\}$ can be
chosen so that (2.10) holds. It is easy to see that this implies (1.5) with $b = a$. Also, since
$\varphi(1+) \ge 0$, if $a = 0$ then $\varphi(y) = 0$ for all $y > 1$ by Lemma 2.5.

It remains to establish the lower bound in (1.7). Since $\psi(0+) \ge 0$, it is enough to deal
with $x < 0$. If $n_2 = n$ is large enough,
$$\psi_n^{*}(x) = -\frac{k_n^{1/2}}{A_n}\int_{k_n/n + xk_n^{1/2}/n}^{k_n/n} dH(u) \ge -\frac{n^{1/2}a_n}{A_n}\,\frac{1}{1 + x/k_n^{1/2}}\,\Big(\frac{n}{k_n}\Big)^{1/2}\frac{1}{a_n}\int_{k_n/n + xk_n^{1/2}/n}^{k_n/n} u\,dH(u)$$
$$\ge -\frac{n^{1/2}a_n}{A_n}\,\frac{1}{1 + x/k_n^{1/2}}\,\Big(\frac{n}{k_n}\Big)^{1/2}\frac{1}{\sigma(l_n/n, k_n/n)}\int_{l_n/n}^{k_n/n} u\,dH(u) = -\frac{n^{1/2}a_n}{A_n}\,\frac{1}{1 + x/k_n^{1/2}}\,\frac{1}{1 - k_n/n}\,\tau_n,$$
where $\tau_n$ is as in the proof of Lemma 2.3 and the formulation of the theorem. Hence (1.7)
follows by (1.1), condition (III) and the proof of Lemma 2.3. □
Proof of Theorem 2. Before starting the proof we note that by Theorem 3(i) in [5]
the random variable $V_1(\varphi)$ is degenerate if and only if $\varphi \equiv 0$. Also, taking into account
the bound (1.7), an application of the second part of Proposition 2 in [4] shows that
$V_3(\psi,b,\tau,\alpha) := bZ_1 + \int_{-Z(\tau,\alpha)}^{0}\psi(x)\,dx$ is degenerate if and only if $b = 0$ and $\psi \equiv 0$. Since $V_1$ and $V_3$ are independent,
by (2.11) we see that the limiting random variable $V(\varphi,\psi,b,\tau,\alpha)$ is degenerate if and only
if $\varphi \equiv 0$, $\psi \equiv 0$, and $b = 0$. Hence if we prove the first two statements of the theorem, the
third will be automatically true.

We distinguish three cases.

Case 1: For all $n_1$ large enough $b_{n_1} > 0$,
$$\limsup_{n_1\to\infty}|\psi_{n_1}^{*}(x)| < \infty \quad\text{for all}\quad -\infty < x < \infty,$$
and
$$\limsup_{n_1\to\infty}|\varphi_{n_1}^{*}(y)| < \infty \quad\text{for all}\quad y > 0.$$
Then by the Helly–Bray theorem we can select a subsequence $\{n_2\} \subset \{n_1\}$ such that
$\psi_{n_2}^{*} \to \psi$ weakly and $\varphi_{n_2}^{*} \to \varphi$ weakly as $n_2 \to \infty$, where $\psi$ and $\varphi$ have all the usual
inherent properties. By Theorem 1, along a further subsequence $\{n_3\} \subset \{n_2\}$,
(2.12) $$T_{n_3} := \frac{1}{n_3^{1/2}b_{n_3}}\Big\{\sum_{i=1}^{k_{n_3}} X_{n_3+1-i,n_3} - \mu_{n_3}\Big\}$$
converges in distribution to $V := V(\varphi,\psi,b,\tau,\alpha)$, $0 \le b \le 1$, $0 \le \tau \le (1-\alpha)^{1/2}$. If $\varphi \equiv 0$
then $b = 1$, so that $V$ is non-degenerate. Hence by the convergence of types theorem we
have (III) with some $a > 0$, and thus (I) and (II) with $\psi$ and $\varphi$ replaced by $a\psi$ and $a\varphi$, and $V$ is of
the form stated with $c$ being the limit of $(\mu_{n_3} - C_{n_3})/A_{n_3}$ as $n_3 \to \infty$.
Case 2: There exists a subsequence $\{n_2\} \subset \{n_1\}$ such that $b_{n_2} > 0$ for all $n_2$ and
(2.13) $$\lim_{n_2\to\infty}|\psi_{n_2}^{*}(x)| = \infty \quad\text{for some}\quad -\infty < x < \infty,$$
or
(2.14) $$\lim_{n_2\to\infty}|\varphi_{n_2}^{*}(y)| = \infty \quad\text{for some}\quad 0 < y < \infty.$$
First we note that by the argument at the end of the proof of Theorem 1,
(2.15) $$\limsup_{n\to\infty}|\psi_n(x)| \le (1-\alpha)^{-1/2} \quad\text{for all}\quad x \le 0,$$
and by the first part of the proof of Lemma 2.5,
(2.16) $$\limsup_{n\to\infty}\varphi_n(y) \le 1 \quad\text{for all}\quad 1 \le y < \infty.$$
At the beginning of the present section we saw that for $T_n$ in (2.12),
$$T_n = R_n^{(1)} + W_n + R_n^{(2)},$$
where
$$R_n^{(1)} := \frac{1}{n^{1/2}a_n}\int_{U_{1,n}}^{1/n}\big(nG_n(u) - 1\big)\,dH(u) - \frac{H(U_{1,n}) - H(1/n)}{n^{1/2}a_n},$$
$$W_n := \frac{n^{1/2}}{a_n}\int_{1/n}^{k_n/n}\big(G_n(u) - u\big)\,dH(u), \qquad R_n^{(2)} := \frac{n^{1/2}}{a_n}\int_{k_n/n}^{U_{k_n,n}}\Big(G_n(u) - \frac{k_n}{n}\Big)\,dH(u).$$
We have
(2.17) $$\lim_{M\to\infty}\liminf_{n\to\infty} P\{|R_n^{(i)}| < M\} > 0 \quad\text{for}\quad i = 1,2.$$
The case $i = 1$ is trivial because $R_n^{(1)} = 0$ if $U_{1,n} > n^{-1}$, and hence
$$\liminf_{n\to\infty} P\{|R_n^{(1)}| < M\} \ge \lim_{n\to\infty} P\{U_{1,n} > n^{-1}\} = e^{-1} > 0$$
for all $M > 0$. The case $i = 2$ follows exactly as in the proof of Lemma 2.8 in [4], using
(2.15) and noting only that in the present generality of having $\alpha \ge 0$ in (1.1) one has to
replace $c$ in the limit in relation (2.24) of [4] by $(1-\alpha)^{1/2}c$.

The next step is to derive from Satz 4 of Rossberg [18] again that
(2.18) $R_n^{(1)}$ and $R_n^{(2)}$ are asymptotically independent.

Since $b_{n_2} > 0$, we have $\mathrm{Var}(W_{n_2}) = 1$ for each $n_2$, so that $W_{n_2}$ is obviously stochastically bounded. Combining this fact, (2.17), (2.18) and the general Lemma 2.10 in [4], and
proceeding exactly as in the proof of Lemma 2.11 in [4], we see that if the left-hand side
of (1.8) is stochastically bounded then
(2.19) $$\lim_{M\to\infty}\limsup_{n_2\to\infty} P\big\{|R_{n_2}^{(i)}| \vee |W_{n_2}| > M\big\} = 0, \quad i = 1,2,$$
where $x \vee y = \max(x,y)$. Thus assumption (1.8) implies (2.19).
Now we claim the following three properties:

(2.20)    lim_{n_2→∞} n_2^{1/2} a_{n_2}/A_{n_2} = 0,

(2.21)    limsup_{n_2→∞} |ψ_{n_2}^{A}(x)| < ∞    for all −∞ < x < ∞,

(2.22)    limsup_{n_2→∞} |φ_{n_2}^{A}(y)| < ∞    for all 0 < y < ∞.
Note first that by (2.15) and (2.16) the relations (2.13) and (2.14) can only occur for some x > 0 and 0 < y < 1, respectively. If (2.13) holds, we first work on the set Ω_{n_2}(x, t), where t ≥ x. On this set, similarly as in the proof of Lemma 2.11 in [4], one has an inequality (2.23) bounding the left-hand side of (1.8) from below in terms of ψ_{n_2}(t). Since ψ_{n_2}(t) → ∞ as n_2 → ∞ by (2.13), and since, with N(0,1) standing for a standard normal variable,

    lim_{n_2→∞} P{Ω_{n_2}(x, t)} = P{N(0,1) ≥ (1 − α)^{1/2}(1 + x)} > 0,

we see that (2.19) and (2.23) imply (2.20) and (2.21). To get (2.22), note that on the set Ω_{n_2}(y) = {n_2 U_{1,n_2} < y} we have a representation (2.24) whose right-hand side is −φ_{n_2}^{A}(y) for all n_2 large enough. Since P{Ω_{n_2}(y)} → 1 − e^{−y} > 0 as n_2 → ∞, we see that (2.19) and (2.24) imply (2.22).
If (2.14) holds then we work first on Ω_{n_2}(y) to get (2.20) and (2.22), and afterwards on Ω_{n_2}(x, t) to get (2.21).

Clearly, (2.20), (2.21) and (2.22) imply conditions (I), (II) and (III) with a = 0 along a further subsequence of {n_2}.
Case 3: There exists a subsequence {n_2} ⊂ {n_1} such that b_{n_2} = 0 for all n_2. In this case the left-hand side of (1.8) is equal in distribution to

    (n_2^{1/2}/A_{n_2}) ∫_0^{1/n_2} G_{n_2}(u) dH(u) + (n_2^{1/2}/A_{n_2}) ∫_{k_{n_2}/n_2}^{U_{k_{n_2},n_2}} (G_{n_2}(u) − k_{n_2}/n_2) dH(u) + (μ_{n_2} − C_{n_2})/A_{n_2}

for all such n_2. If we now denote the first two terms here by R_{n_2}^{(1)} and R_{n_2}^{(2)}, respectively, then of course we still have (2.17) for i = 1. Since H has no mass on (1/n_2, k_{n_2}/n_2], we have R_{n_2}^{(2)} = 0 if 1/n_2 < U_{k_{n_2},n_2} < k_{n_2}/n_2, and we see that

    P{1/n_2 < U_{k_{n_2},n_2} < k_{n_2}/n_2} → 1/2

as n_2 → ∞, so that (2.17) is also true for i = 2 along {n_2}. Since (2.18) is still obviously true along {n_2} in the present case, we obtain as in Case 2 that R_{n_2}^{(i)} = O_P(1) as n_2 → ∞, i = 1, 2. A simplified form of the corresponding argument above now yields (2.21) and (2.22). Thus, again, we obtain conditions (I), (II) and (III) with a = 0 along a subsequence of {n_2}. The theorem is completely proved. □
In the proof of Corollary 1 we shall require the following.
LEMMA 2.7. Let the functions φ and ψ be as in conditions (I) and (II) and consider the constants 0 ≤ b < ∞, 0 ≤ α < 1 and 0 ≤ r ≤ (1 − α)^{1/2}. Assume that φ satisfies (1.4) and ψ satisfies (1.7). Then the random variable V(φ, ψ, b, r, α) is non-degenerate normal if and only if φ ≡ 0, ψ ≡ 0 and b > 0, in which case V(φ, ψ, b, r, α) is N(0, b²).
Proof. The sufficiency part is trivial. Suppose that V(φ, ψ, b, r, α) = V_1(φ) + V_3(ψ, b, r, α), where we use (2.11), is non-degenerate normal. Since V_1(φ) and V_3(ψ, b, r, α) are independent, the Cramér characterization forces both to be normal. Theorem 3 (i) in [5] says that V_1(φ) has an infinitely divisible distribution without a normal component. Hence the only way it can be normal is when it is degenerate, which again by Theorem 3 in [5] implies φ ≡ 0. Since then in fact V_1(φ) = 0, V_3(ψ, b, r, α) must be non-degenerate normal. Using now condition (1.7), Proposition 1 from [4] implies that this can happen only if ψ ≡ 0 and b > 0. □
Proof of Corollary 1. First we prove the 'if' part. Let {n_1} be an arbitrary subsequence of {n}. Since conditions (I), (II) and (III) hold along {n_1} with A_{n_1} ≡ n_1^{1/2} a_{n_1}, φ ≡ 0, ψ ≡ 0 and a = 1, Theorem 1 and Lemma 2.7 imply the existence of a further subsequence {n_2} ⊂ {n_1} such that

    { Σ_{i=1}^{k_{n_2}} X_{n_2+1−i,n_2} − μ_{n_2} } / (n_2^{1/2} a_{n_2}) →_D N(0, 1).

This of course implies that the same is true along the original {n}.

Now suppose that (1.10) holds. Let again {n_2} ⊂ {n_1} be arbitrary. By Theorem 2 there exists a further subsequence {n_3} ⊂ {n_2} such that (I), (II) and (III) hold along {n_3} with appropriate functions φ and ψ satisfying conditions (1.4) and (1.7), respectively, and a constant 0 ≤ a < ∞, and the distribution of Z is necessarily that of V(φ, ψ, b, r, α) + c with some constants 0 ≤ b ≤ a, 0 ≤ r ≤ (1 − α)^{1/2} and −∞ < c < ∞. Thus by Lemma 2.7, φ ≡ 0, ψ ≡ 0 and b > 0. Hence a > 0, yielding that (I) and (II) hold along {n_3} with A_{n_3} ≡ n_3^{1/2} a_{n_3}, φ ≡ 0 and ψ ≡ 0. Since {n_2} was arbitrary, the same must be true along the original sequence {n_1}. □
The proof of Corollary 2 requires some preparations concerning the asymptotic behavior of the functions ψ_n and φ_n. First of all we note that, inverting the results of Section 2.3 in de Haan [10],

(2.25)    F ∈ Δ(c) with 0 < c < ∞ if and only if −H(s) = s^{−c} L(s), 0 < s < 1,

for some function L slowly varying at zero; and

(2.26)    F ∈ Δ(c) with −∞ < c < 0 if and only if −H(s) = A − s^{−c} L(s), 0 < s < 1,

for some function L slowly varying at zero and some finite constant A.
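As a concrete instance of (2.25) (an illustration we add here, using the identity Q(1 − s) = −H(s) that appears in the proof of Proposition 1 below), a Pareto-type quantile function makes the slowly varying factor constant:

```latex
% Pareto-type tail: take Q(u) = (1-u)^{-c} with c > 0.
% Since Q(1-s) = -H(s),
-H(s) \;=\; Q(1-s) \;=\; s^{-c} \;=\; s^{-c} L(s), \qquad 0 < s < 1,
% with L \equiv 1, trivially slowly varying at zero,
% so F \in \Delta(c) by (2.25).
```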
The following lemma is a slight extension of Lemma 2 in [6].

LEMMA 2.8. Let L be any function defined on (0,1), bounded on compact subintervals and slowly varying at zero. Let {k_n} be a not-necessarily integer-valued sequence satisfying (1.1) and {l_n} be any sequence of positive numbers such that k_n/l_n → ∞ as n → ∞. Then for any 0 < v < ∞ we have

    (l_n/k_n)^v L(l_n/n)/L(k_n/n) → 0    as n → ∞.

Proof. The two cases α > 0 and α = 0 follow from properties 1 and 2 of Corollary 1.2.1 in [10], respectively. □
LEMMA 2.9. If F ∈ Δ(c) for some c ≠ 0 and (1.1) holds, then with the respective L functions from (2.25) and (2.26) we have the asymptotic equalities, as n → ∞,

    σ(1/n, k_n/n) ∼ K_c (1/n)^{1/2−c} L(1/n),    if c > 1/2 and α ≥ 0;

    σ(1/n, k_n/n) ∼ ( ∫_{1/n}^{k_n/n} u^{−1} L²(u) du )^{1/2},    if c = 1/2 and α = 0; and

    σ(1/n, k_n/n) ∼ K_c (k_n/n)^{1/2−c} L(k_n/n),    if c < 1/2, c ≠ 0, and α = 0,

where

    K_c = (2c²/((1 − c)(1 − 2c)))^{1/2},    if c < 1/2, c ≠ 0,
    K_c = (2c/(2c − 1))^{1/2},    if c > 1/2.

Proof. The case c > 1/2 can be inferred from Lemma 1 in [6] and Lemma 2.8. The cases c = 1/2 and 0 < c < 1/2 are proven in Lemma 6 in [7], and the same proof given there for 0 < c < 1/2 also works for c < 0. □
LEMMA 2.10. Whenever F ∈ Δ(0) and (1.1) holds with α = 0,

(2.27)    lim_{s↓0} s^{1/2} {H(λs) − H(s)}/σ(0, s) = 2^{−1/2} log λ    for all 0 < λ < ∞,

and

(2.28)    σ(0, s) = s^{1/2} ℓ(s)

for some function ℓ slowly varying at zero.

Proof. Assertion (2.27) follows directly from Lemmas 4 and 6, while (2.28) follows from Lemmas 2 and 6 of Lo [13]. □
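The unit exponential distribution, which belongs to Δ(0), gives a quick check of (2.27) and (2.28). This is our own illustration; it assumes the truncated-moment form σ²(0,s) = ∫_0^s ∫_0^s (u ∧ v − uv) dH(u) dH(v) for the σ defined below (1.3):

```latex
% Unit exponential: Q(u) = -\log(1-u), so H(s) = -Q(1-s) = \log s
% and dH(u) = du/u.  Then H(\lambda s) - H(s) = \log\lambda, while
\sigma^2(0,s) \;=\; \int_0^s\!\!\int_0^s \frac{u\wedge v - uv}{uv}\,du\,dv
            \;=\; 2\int_0^s (1-v)\,dv \;=\; 2s - s^2 ,
% so that (2.28) holds with \ell(s) = (2-s)^{1/2} \to 2^{1/2}, and
\frac{s^{1/2}\{H(\lambda s)-H(s)\}}{\sigma(0,s)}
   \;=\; \frac{\log\lambda}{(2-s)^{1/2}}
   \;\longrightarrow\; 2^{-1/2}\log\lambda \qquad (s \downarrow 0),
% which is exactly the limit in (2.27).
```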
LEMMA 2.11. Assume F ∈ Δ(c) for some c and that (1.1) holds. If α > 0, then with the function ψ_α defined in (1.13), for all x ∈ R,

    lim_{n→∞} ψ_n(x) = ψ_α(x),    if c > 1/2, or c = 1/2 and σ(0, α) = ∞,
    lim_{n→∞} ψ_n(x) = 0,    if c < 1/2, or c = 1/2 and σ(0, α) < ∞,

while if α = 0 then ψ_n(x) → 0 for all x ∈ R as n → ∞, for any c.

Proof. The case α > 0 follows directly from (1.14) and the definition of ψ_n.

Consider now α = 0. In this case we claim that for all 0 < λ < ∞ certain limiting relations hold, separately in the cases c < 1/2 with c ≠ 0, c = 0, and c ≥ 1/2, which clearly imply the second statement. Here the cases c < 0 and 0 < c < 1/2 follow from (2.26) and (2.25) and the third statement of Lemma 2.9, the case c = 1/2 follows by (2.25), the second statement of Lemma 2.9 and by an application of Lemma 4 in [7], and the case c > 1/2 follows from (2.25) and the first statement of Lemma 2.9 via an application of Lemma 2.8. Finally, the case c = 0 will follow directly from (2.27) of Lemma 2.10 once we can show that

(2.29)    σ(1/n, k_n/n) ∼ σ(0, k_n/n)    as n → ∞, when c = 0.

But this is immediate from the inequalities σ²(0, k_n/n) ≥ σ²(1/n, k_n/n) ≥ σ²(0, k_n/n) − σ²(0, 1/n) and (2.28) of Lemma 2.10 via another application of Lemma 2.8. □
LEMMA 2.12. If F ∈ Δ(c) for some c and (1.1) holds, then

    lim_{n→∞} φ_n(y) = K_c^{−1}(1 − y^{−c})    for all y > 0, if c > 1/2,
    lim_{n→∞} φ_n(y) = 0    for all y > 0, if c ≤ 1/2.

Proof. The case c > 1/2 follows directly from (2.25) and the first statement of Lemma 2.9 for any α < 1.

In the case of c ≤ 1/2, we first consider the subcase α = 0. Then for c = 1/2 the assertion of the lemma follows from (2.25), the second statement of Lemma 2.9 and Lemma 4 in [7]. For c < 1/2, c ≠ 0, the assertion follows by (2.25), (2.26), the third statement of Lemma 2.9 and an application of Lemma 2.8. When c = 0, the assertion follows from (2.29) and the two statements of Lemma 2.10, via one more application of Lemma 2.8.

The subcase α > 0 of the case c ≤ 1/2 follows from the former subcase when α = 0 if we only notice that for a_n = σ(1/n, k_n/n) in the denominator of φ_n we have a_n ≥ σ(1/n, m_n/n) for all large enough n for any sequence {m_n} such that m_n → ∞ and m_n/n → 0 as n → ∞. □
Proof of Corollary 2. The boundary case c = 1/2 and σ(0, α) = ∞ follows directly from Lemmas 2.11 and 2.12 combined with Corollary 1.

Next, when c < 1/2, or c = 1/2 and σ(0, α) < ∞, one notes that for T_n in the proof of Lemma 2.3 or in the formulation of Theorem 1 and for T_α in the definition of M(α) we always have T_n → T_α as n → ∞ along the whole sequence {n} of the positive integers. Thus by Lemmas 2.11 and 2.12 and Theorem 1, applied with A_n ≡ n^{1/2} a_n, ψ ≡ 0, φ ≡ 0, and a = 1, for any subsequence {n_1} ⊂ {n} there exists a further subsequence {n_2} ⊂ {n_1} such that the statement of Corollary 2 holds along {n_2}. Hence it holds along the whole sequence {n}.

Finally, when c > 1/2, it can be shown using Lemma 1 in [6] and Lemma 2.8 that for any sequence {l_n} of positive numbers such that l_n → ∞ and k_n/l_n → ∞ as n → ∞ we have σ(l_n/n, k_n/n) ∼ K_c (l_n/n)^{1/2−c} L(l_n/n) as an extension of the first statement of Lemma 2.9, which by (2.25) and a final application of Lemma 2.8 yields

    σ(l_n/n, k_n/n) / σ(1/n, k_n/n) → 0    as n → ∞.

Thus by Lemmas 2.11 and 2.12 and Theorem 1, applied with A_n ≡ n^{1/2} a_n, ψ ≡ 0, φ(y) = K_c^{−1}(1 − y^{−c}) and b = 0, we obtain that for every subsequence {n_1} ⊂ {n} there exists a further subsequence {n_2} ⊂ {n_1} such that the statement of Corollary 2 holds along {n_2}. This of course gives the same statement along the whole sequence {n}. □
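The c = 0, α = 0 regime can be seen numerically with the unit exponential distribution, which lies in Δ(0). The following Monte Carlo sketch is our own illustration (the sizes n, k, the number of replications, the seed, and the purely empirical standardization are arbitrary choices; it does not use the normalization n^{1/2} a_n of the corollary): the sum of the k largest order statistics, standardized, behaves approximately like N(0, 1).

```python
import numpy as np

# Monte Carlo sketch: extreme sums of exponential samples look normal.
# n, k, reps and the seed are arbitrary illustration choices.
rng = np.random.default_rng(0)
n, k, reps = 20000, 200, 400
sums = np.empty(reps)
for r in range(reps):
    x = rng.exponential(size=n)
    # sum of the k largest order statistics of the sample
    sums[r] = np.sort(x)[-k:].sum()

# standardize empirically and compare with the standard normal
z = (sums - sums.mean()) / sums.std()
skew = np.mean(z ** 3)                  # near 0 for a normal shape
coverage = np.mean(np.abs(z) < 1.96)    # near 0.95 under N(0, 1)
print(round(skew, 3), round(coverage, 3))
```

The small empirical skewness and the roughly 95% coverage of the interval (−1.96, 1.96) are consistent with the asymptotic normality asserted by the corollary in this case.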
3. Discussion of the Conditions and Examples. The two propositions below
provide equivalent forms for conditions (I) and (II), respectively. The first of these forms is
a probabilistic interpretation of the respective condition. The second one, following from
the first, is a reformulation of the condition in terms of the underlying distribution function
F. That the latter reformulations might be possible was suggested to us by a referee.
For ease of notation we set

    ψ_n^{A} := (n^{1/2} a_n/A_n) ψ_n    and    φ_n^{A} := (n^{1/2} a_n/A_n) φ_n,

and for a non-decreasing function h defined on a subset S of the real line R we define its left-continuous inverse by

    h^{−1}(x) = inf{s ∈ S : h(s) ≥ x},    x ∈ R,

where we agree that the infimum of the empty set is +∞. Let Z denote a standard normal random variable and Y denote an exponential random variable with mean 1, and let {k_n} be a sequence of integers satisfying (1.1) with α = 0.
PROPOSITION 1. Let ψ be a non-decreasing, left-continuous function on (−∞, ∞) such that ψ(0) ≤ 0 and ψ(0+) ≥ 0. Condition (I) holds along {n_1} with this ψ if and only if

(3.1)    A_{n_1}^{−1} { X_{n_1+1−k_{n_1},n_1} + H(k_{n_1}/n_1) } →_D −ψ(Z)    as n_1 → ∞,

which happens if and only if the equivalent condition (3.2) on the underlying distribution function F holds for every continuity point x of the distribution function of −ψ(Z).

PROPOSITION 2. Let φ be a non-decreasing, left-continuous function on (0, ∞) such that φ(1) ≤ 0 and φ(1+) ≥ 0. Condition (II) holds along {n_1} with this φ if and only if

(3.3)    A_{n_1}^{−1} { X_{n_1,n_1} − c_{n_1} } →_D −φ(Y)    as n_1 → ∞,

which happens if and only if the equivalent condition (3.4) on the underlying distribution function F holds for every continuity point x of the distribution function of −φ(Y).
Proof. First we consider Proposition 1. For the uniform (0, 1) order statistics U_{1,n} ≤ ... ≤ U_{n,n} as in (1.3) we introduce

    W_{k_n,n} := (n/k_n^{1/2}) { U_{k_n,n} − k_n/n },    n ≥ 1.

Note that by (1.3)

    X_{n+1−k_n,n} =_D Q((1 − U_{k_n,n})−) = −H(U_{k_n,n}),

so that, as n → ∞, the left-hand side of (3.1) can be studied through −H(U_{k_n,n}). On the other hand, the error committed in replacing W_{k_n,n} by Z is at most

    sup_B |P{W_{k_n,n} ∈ B} − P{Z ∈ B}|,

where the supremum is taken over all Borel sets B on the real line, and this upper bound goes to zero as n → ∞ by Proposition 2.10 of Reiss [17], where earlier references concerning this result can also be found. Hence (3.1) holds if and only if

    ψ_{n_1}^{A}(Z) = (n_1^{1/2} a_{n_1}/A_{n_1}) ψ_{n_1}(Z) → ψ(Z)

almost surely as n_1 → ∞, and it is easily checked that this happens if and only if condition (I) holds along {n_1}.

The equivalence of (3.1) and (3.2) follows by a standard argument as given on p. 654 of Watts, Rootzén, and Leadbetter [21].

The proof of the first statement of Proposition 2 is the same as above upon noting that

    A_n^{−1} { X_{n,n} − c_n } =_D (n^{1/2} a_n/A_n) φ_n(n U_{1,n}) + o_P(1)

and, with the supremum taken again over all Borel sets B on the line,

    sup_B |P{n U_{1,n} ∈ B} − P{Y ∈ B}| → 0    as n → ∞.

The latter follows from Theorem 2.6 of Reiss [17].

The equivalence of (3.3) and (3.4) is an easy exercise well known in extreme value theory. □
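The normal approximation to U_{k_n,n} that drives the proof above can be checked by simulation. The sketch below is our own illustration (the sizes n, k, the number of replications and the seed are arbitrary choices): the standardized k-th uniform order statistic W_{k,n} = (n/k^{1/2})(U_{k,n} − k/n) has mean near 0 and standard deviation near 1.

```python
import numpy as np

# Normal approximation for the k-th uniform order statistic:
# W_{k,n} = (n / k^{1/2}) (U_{k,n} - k/n) is close to N(0, 1).
# n, k, reps and the seed are arbitrary illustration choices.
rng = np.random.default_rng(1)
n, k, reps = 50000, 500, 300
w = np.empty(reps)
for r in range(reps):
    u = rng.random(n)
    u_k = np.partition(u, k - 1)[k - 1]   # k-th smallest observation
    w[r] = (n / np.sqrt(k)) * (u_k - k / n)

print(round(w.mean(), 3), round(w.std(), 3))
```

Using np.partition instead of a full sort extracts the single order statistic needed in O(n) time per replication.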
We note that since ψ and φ can only be constants if they are zero, the limits in (3.1) and (3.3) are non-degenerate if and only if ψ ≢ 0 and φ ≢ 0, respectively. If ψ ≡ 0, then we have (3.2) with

    ψ^{−1}(x) = −∞ if x ≤ 0,    and    ψ^{−1}(x) = ∞ if x > 0,

and if φ ≡ 0, then we have (3.4) with

    φ^{−1}(x) = 0 if x ≤ 0,    and    φ^{−1}(x) = ∞ if x > 0.
Our first example shows that limiting distributions in Theorem 1 can arise along subsequences of {n} with φ ≢ 0, ψ ≢ 0 and b > 0.

EXAMPLE 1. Let 1 ≤ k_n ≤ n be integers such that k_n → ∞ and k_n/n → 0 as n → ∞, let {n_1, n_2, ...} be a subsequence of {n} and {d_{n_j}, j ≥ 1} be a sequence of arbitrary positive numbers. For j large enough to make k_{n_j}/n_j < 1/2, consider a left-continuous step function H = H(s), defined piecewise on three ranges of s, whose jump sizes are determined by the numbers d_{n_j}. Clearly, a quantile function Q with this corresponding H function exists as long as

    2 k_{n_{j+1}}/n_{j+1} < 1/(n_j k_{n_j}),

that is, for all large enough j, which is the case whenever the n_j grow sufficiently fast. Noting the resulting asymptotic form of σ(1/n_j, k_{n_j}/n_j), it is easy to show that

    lim_{j→∞} ψ_{n_j}(x) = ψ*(x) := −1 if −∞ < x ≤ −1/4,    and    ψ*(x) := 0 if −1/4 < x < ∞,

    lim_{j→∞} φ_{n_j}(y) = φ*(y) := 0 if 0 < y ≤ 1/2,    and    φ*(y) := −1 if 1/2 < y < ∞,

and that

    lim_{j→∞} σ(l_{n_j}/n_j, k_{n_j}/n_j)/σ(1/n_j, k_{n_j}/n_j) = 1

for any sequence {l_{n_j}} such that l_{n_j} → ∞ and l_{n_j}/k_{n_j} → 0 as j → ∞. Thus we have

    T_{n_j} →_D V(φ*, ψ*, 1, 1, 0)    as j → ∞,

where V(φ*, ψ*, 1, 1, 0) = N(1/2) + max(Z_1, 1/4) and, by simple computation, μ_{n_j} = −k_{n_j} H(k_{n_j}/n_j) + d_{n_j}(k_{n_j}^{1/2} − k_{n_j}/4).

We emphasize that the numbers d_{n_j} > 0 determining the jump sizes of the quantile function Q are arbitrary in the above example. Therefore, they can be chosen so that the underlying distribution has moments of arbitrarily high order.
The second example relates the convergence in distribution of extreme sums along subsequences to that of the whole sum.

EXAMPLE 2. Suppose that F(0) = 0 and there exist a subsequence {n_1, n_2, ...} ⊂ {n} and constants A_{n_j} > 0 and B_{n_j} such that

(3.5)    A_{n_j}^{−1} { Σ_{i=1}^{n_j} X_i − B_{n_j} } →_D W    as j → ∞,

where W is an infinitely divisible random variable with a non-degenerate non-normal component. It is shown in [5] that we can assume that

    B_{n_j} := n_j ∫_0^{1−1/n_j} Q(s) ds.

Note that necessarily Var(X) = ∞, so that we must have

(3.6)    n_j^{1/2}/A_{n_j} → 0    as j → ∞.

By Theorem 5 in [4], for each 0 < β < 1,

    Σ_{i=1}^{[βn]} X_{i,n} − n ∫_0^{[βn]/n} Q(s) ds = O_P(n^{1/2})    as n → ∞.

Thus by (3.6), for each 0 < β < 1,

    A_{n_j}^{−1} { Σ_{i=[βn_j]+1}^{n_j} X_{i,n_j} − B_{n_j} + n_j ∫_0^{[βn_j]/n_j} Q(s) ds } →_D W    as j → ∞.

Hence by a simple diagonal selection procedure we can find a sequence {k_{n_j}} such that k_{n_j} → ∞ and k_{n_j}/n_j → 0 and, with suitable centering constants C_{n_j},

    A_{n_j}^{−1} { Σ_{i=1}^{k_{n_j}} X_{n_j+1−i,n_j} − C_{n_j} } →_D W    as j → ∞.
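The phenomenon behind Example 2 — when Var(X) = ∞, a vanishing fraction of the largest order statistics can carry the distributional behavior of the whole sum — is visible numerically. The following sketch is our own illustration (the quantile function Q(u) = (1 − u)^{−2}, the sizes and the seed are arbitrary choices); it measures how much of the total sum the top k order statistics contribute for a very heavy-tailed F:

```python
import numpy as np

# How much of the whole sum do the top k order statistics carry when
# Var(X) = infinity?  We sample from Q(u) = (1-u)^{-2}, a Pareto-type
# law; n, k, reps and the seed are arbitrary illustration choices.
rng = np.random.default_rng(2)
n, k, reps = 10000, 10, 200
ratios = np.empty(reps)
for r in range(reps):
    x = (1.0 - rng.random(n)) ** -2.0   # inverse-transform sample from Q
    s = np.sort(x)
    ratios[r] = s[-k:].sum() / s.sum()  # share carried by the k largest

print(round(ratios.mean(), 3))
```

On average the ten largest of ten thousand observations carry the overwhelming share of the sum, which is why extreme sums can inherit the non-normal limit W of the whole sum.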
Our last example connects conditions (I), (II), and (III) with the notion of stochastic compactness for sums and maxima and provides a relatively general situation when the scaling factor A_n is the same for extreme sums, whole sums, and maxima. We say that F is stochastically compact, written F ∈ SC, if there are sequences A_n > 0 and B_n such that for every subsequence {m_j} ⊂ {n} there exists a further subsequence {n_j} ⊂ {m_j} such that (3.5) holds with a non-degenerate W. We call {X_{n,n}} stochastically compact if there exists a sequence C_n > 0 such that for every subsequence {n'} ⊂ {n} there exists a further subsequence {n''} ⊂ {n'} such that X_{n'',n''}/C_{n''} converges in distribution to a non-degenerate random variable as n'' → ∞.

EXAMPLE 3. Assume that F(0) = 0 and F is not in the domain of partial attraction of a normal law. Corollary 12 in [5] says that F ∈ SC if and only if X_{n,n} is stochastically compact. In this case one can choose, according to the same corollary, C_n = A_n = a(n) = n^{1/2} σ(1/n, 1 − 1/n), where σ(·, ·) is defined below (1.3). Also, for such an F it is readily inferred from Corollary 10 in [5] that F ∈ SC if and only if

    limsup_{s↓0} s^{1/2} Q(1 − λs)/σ(s, 1 − s) < ∞    for all 0 < λ < ∞.

Combining these two facts with the simple observation that a(n) ≥ n^{1/2} a_n, n ≥ 1, for any sequence {k_n} satisfying (1.1) with α = 0 or α ≠ 0, we easily see that whenever such an F is in SC, each subsequence of {n} contains a further subsequence along which conditions (I), (II), and (III) are simultaneously satisfied with A_n = a(n) for some ψ, φ, and 0 ≤ a < ∞. In fact the stochastic compactness of the maxima forces φ ≢ 0.
Acknowledgements. The authors thank the Bolyai Institute, University of Szeged, the Laboratoire de Statistique Théorique et Appliquée, Université de Paris VI, the Mathematisches Forschungsinstitut Oberwolfach, and the Department of Mathematical Sciences, University of Delaware, for bringing them together for collaborative work. They also appreciate the insightful comments of the referee.
REFERENCES

[1] BILLINGSLEY, P. (1968). Convergence of Probability Measures. Wiley, New York.
[2] CSÖRGŐ, M., CSÖRGŐ, S., HORVÁTH, L. and MASON, D.M. (1986). Weighted empirical and quantile processes. Ann. Probab. 14 31-85.
[3] CSÖRGŐ, M., CSÖRGŐ, S., HORVÁTH, L. and MASON, D.M. (1986). Normal and stable convergence of integral functions of the empirical distribution function. Ann. Probab. 14 86-118.
[4] CSÖRGŐ, S., HAEUSLER, E. and MASON, D.M. (1988). The asymptotic distribution of trimmed sums. Ann. Probab. 16 672-699.
[5] CSÖRGŐ, S., HAEUSLER, E. and MASON, D.M. (1988). A probabilistic approach to the asymptotic distribution of sums of independent, identically distributed random variables. Adv. in Appl. Math. 9 259-333.
[6] CSÖRGŐ, S., HORVÁTH, L. and MASON, D.M. (1986). What portion of the sample makes a partial sum asymptotically stable or normal? Probab. Th. Rel. Fields 72 1-16.
[7] CSÖRGŐ, S. and MASON, D.M. (1986). The asymptotic distribution of sums of extreme values from a regularly varying distribution. Ann. Probab. 14 974-983.
[8] CSÖRGŐ, S. and MASON, D.M. (1987). Approximations of weighted empirical processes with applications to extreme, trimmed and self-normalized sums. In: Proc. 1st World Congress of the Bernoulli Society, Tashkent, USSR, 1986, Vol. 2, pp. 811-819. VNU Science Press, Utrecht.
[9] FERGUSON, T.S. and KLASS, M.J. (1972). A representation of independent increment processes without Gaussian components. Ann. Math. Statist. 43 1634-1643.
[10] HAAN, L. DE (1975). On Regular Variation and its Application to the Weak Convergence of Sample Extremes. Tract No. 32, Mathematical Centre, Amsterdam.
[11] HAHN, M.G. and WEINER, D.C., Eds. (1990). Sums, Trimmed Sums, and Extremes. Birkhäuser, Basel. To appear.
[12] LEPAGE, R., WOODROOFE, M. and ZINN, J. (1981). Convergence to a stable distribution via order statistics. Ann. Probab. 9 624-632.
[13] LO, G.S. (1989). A note on the asymptotic normality of sums of extreme values. J. Statist. Plann. Inference 22 127-136.
[14] MASON, D.M. (1985). The asymptotic distribution of generalized Rényi statistics. Acta Sci. Math. (Szeged) 48 315-323.
[15] MASON, D.M. and SHORACK, G.R. (1987). Necessary and sufficient conditions for asymptotic normality of L-statistics. Preprint.
[16] MEJZLER, D.G. (1949). On a theorem of B.V. Gnedenko. Sb. Trudov Inst. Mat. Akad. Nauk Ukrain. SSR No. 12 31-35. (In Russian.)
[17] REISS, R.-D. (1981). Uniform approximation to distributions of extreme order statistics. Adv. in Appl. Probab. 13 533-547.
[18] ROSSBERG, H.-J. (1967). Über das asymptotische Verhalten der Rand- und Zentralglieder einer Variationsreihe (II). Publ. Math. Debrecen 14 83-90.
[19] SHORACK, G.R. and WELLNER, J.A. (1986). Empirical Processes with Applications to Statistics. Wiley, New York.
[20] STIGLER, S.M. (1973). The asymptotic distribution of the trimmed mean. Ann. Statist. 1 472-477.
[21] WATTS, V., ROOTZÉN, H. and LEADBETTER, M.R. (1982). On limiting distributions of intermediate order statistics from stationary sequences. Ann. Probab. 10 653-662.
BOLYAI INSTITUTE
UNIVERSITY OF SZEGED
ARADI VERTANUK TERE 1
H-6720 SZEGED
HUNGARY
MATHEMATICAL INSTITUTE
UNIVERSITY OF MUNICH
THERESIENSTRASSE 39
D-8000 MUNICH 2
WEST GERMANY
DEPARTMENT OF MATHEMATICAL SCIENCES
UNIVERSITY OF DELAWARE
NEWARK, DELAWARE 19716
U.S.A.