Sen, P. Kumar (1975). An invariance principle for linear combinations of order statistics.

*Work partially supported by the Air Force Office of Scientific Research, A.F.S.C., U.S.A.F., Grant No. AFOSR 74-2736.
AN INVARIANCE PRINCIPLE FOR LINEAR COMBINATIONS
OF ORDER STATISTICS*
by
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 1047
December
1975
ABSTRACT
Based on an (almost sure) reverse-martingale representation for linear
combinations of order statistics (with smooth weight functions), a backward
invariance principle (relating to the tail sequence) is established and the
underlying regularity conditions are critically examined.
AMS 1970 Classification No. 60F05.
Key Words and Phrases: Almost sure convergence, Bahadur-representation, linear
functions of order statistics, invariance principles, reverse martingale, smooth
weight function, weak convergence and Wiener process.
1. Introduction. Let $\{X_i, i \ge 1\}$ be a sequence of independent and identically distributed random variables (i.i.d.r.v.) with a distribution function (df) $F$, defined on $(-\infty, \infty)$. For every $n\,(\ge 1)$, the order statistics corresponding to $X_1, \dots, X_n$ are denoted by $X_{n,1} \le \cdots \le X_{n,n}$, and let

(1.1)    $T_n = \sum_{i=1}^{n} c_{n,i} X_{n,i}$,   $n \ge 1$,

where the $c_{n,i}$, $1 \le i \le n$, $n \ge 1$, are known constants. Asymptotic normality of (the standardized form of) $T_n$ has been studied under diverse regularity conditions and by diverse techniques by a host of research workers; we may refer to Stigler (1969) and Shorack (1969), where other references are cited. The present investigation concerns the tail sequence $\{T_k, k \ge n\}$ and provides suitable invariance principles under appropriate regularity conditions. Instead of generalizing the Pyke-Shorack (1968) approach to the tail sequence of empirical processes and incorporating the same to derive the invariance principles, we have formulated our results through some reverse-martingale characterizations of $\{T_n\}$, viz., Theorems 1 and 2. Even if $\{T_n\}$ may not exactly be a reverse martingale, it may be approximated by a suitable reverse-martingale sequence, so that under suitable regularity conditions pertaining to this order of approximation, the invariance principle continues to hold (see Theorem 3). The current approach also yields a different proof of the asymptotic normality of $T_n$ under somewhat different regularity conditions.

Along with the preliminary notions, the main theorems are introduced in Section 2. The proofs of the theorems are sketched in Section 3. Section 4 deals with the almost sure (a.s.) convergence of certain functions of order statistics having relevance to the main theorems. The last section is devoted to some concluding remarks as well as to applications of the main theorems.
2. The main theorems. Let $\phi_n = \{\phi_n(t): 0 < t \le 1\}$ be defined by

(2.1)    $c_{n,i} = n^{-1}\phi_n(t)$ for $(i-1)/n < t \le i/n$, $i = 1, \dots, n$,

and, conventionally, we let

(2.2)    $c_{n,0} = c_{n,n+1} = 0$ for $n \ge 1$.

Also, let $u(t) = 1$ or $0$ according as $t \ge 0$ or $< 0$, and let

(2.3)    $F_n(x) = n^{-1}\sum_{i=1}^{n} u(x - X_i)$,   $-\infty < x < \infty$,

be the empirical d.f. Then, $T_n$ may be rewritten as

(2.4)    $T_n = \int_{-\infty}^{\infty} \phi_n(F_n(x))\, x\, dF_n(x)$,   $n \ge 1$.

We are interested in the case of smooth weights where

(2.5)    $\lim_{n\to\infty} \phi_n(u) = \phi(u)$ exists for every $0 < u < 1$,

and we define

(2.6)    $\mu = \int_{-\infty}^{\infty} \phi(F(x))\, x\, dF(x)$,

(2.7)    $\sigma^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} [F(x \wedge y) - F(x)F(y)]\,\phi(F(x))\phi(F(y))\, dx\, dy$,

where $a \wedge b = \min(a,b)$, and we assume that both $\mu$ and $\sigma^2$ are finite.
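The functional $\mu$ and the statistic $T_n$ are easy to compute directly. The following sketch (not from the paper; the uniform df and the score $\phi(u) = u$ are illustrative assumptions) evaluates $T_n$ with the smooth scores $c_{n,i} = n^{-1}\phi(i/(n+1))$ and shows it settling near $\mu$ of (2.6):

```python
import numpy as np

def l_stat(x, phi):
    """T_n = sum_i c_{n,i} X_{n,i} with smooth scores c_{n,i} = n^{-1} phi(i/(n+1)),
    cf. (1.1) and (2.1)."""
    n = len(x)
    order_stats = np.sort(x)                 # X_{n,1} <= ... <= X_{n,n}
    i = np.arange(1, n + 1)
    c = phi(i / (n + 1)) / n                 # the weights c_{n,i}
    return float(np.dot(c, order_stats))

# Illustration with phi(u) = u and X uniform on (0,1): by (2.6),
# mu = int phi(F(x)) x dF(x) = int_0^1 x * x dx = 1/3.
rng = np.random.default_rng(0)
t_n = l_stat(rng.uniform(size=200_000), lambda u: u)
print(t_n)  # settles near 1/3 as n grows
```

With $\phi \equiv 1$ the statistic reduces to the sample mean, which is a quick sanity check on the weight construction.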
For every $n\,(\ge 1)$, consider a stochastic process $W_n = \{W_n(t), t \in I\}$ ($I = [0,1]$), by introducing a sequence of integer-valued, non-increasing and left-continuous functions $\{k_n(t), t \in I\}$, where $k_n(t) = \min\{k: n/k \le t\}$, and then letting

(2.8)    $W_n(t) = n^{1/2}(T_{k_n(t)} - \mu)/\sigma$,   $t \in I$.

In order that $W_n$ is properly defined at its lower boundary, we assume that

(2.9)    $T_n \to \mu$ a.s., as $n \to \infty$   (i.e., $W_n(0) = 0$ with probability 1).

Then $W_n$ belongs to the $D[0,1]$ space, and we associate with it the Skorokhod $J_1$-topology. Also, let $W = \{W(t), t \in I\}$ be a standard Wiener process on $I$. Our primary concern is to show that under suitable regularity conditions, as $n \to \infty$,

(2.10)    $W_n \to W$ in distribution, in the $J_1$-topology on $D[0,1]$.

For this purpose, we define for every $n\,(\ge 1)$,

(2.11)    $c^*_{n+1,i} = \frac{i-1}{n+1}\,c_{n,i-1} + \frac{n-i+1}{n+1}\,c_{n,i}$ and $d_{n+1,i} = c_{n+1,i} - c^*_{n+1,i}$, for $1 \le i \le n+1$,

and

(2.12)    $U_n = n(T_n - T_{n+1})$ and $T^*_{n+1} = \sum_{i=1}^{n+1} d_{n+1,i} X_{n+1,i}$,   $n \ge 1$.

Theorem 1. If $d_{n,i} = 0$ for $1 \le i \le n$, $n \ge n_0\,(\ge 1)$, and if

(2.13)    $0 < \lim_{n\to\infty} n\,\mathrm{Var}(T_n) = \sigma^2 < \infty$,

then the convergence of the finite-dimensional distributions (f.d.d.) of $\{W_n\}$ (to those of $W$) insures (2.10).
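For intuition about the time scale in (2.8) (assuming the reconstruction $k_n(t) = \min\{k: n/k \le t\}$): since $n/k \le t$ iff $k \ge n/t$, the minimizer is simply $\lceil n/t\rceil$, so $W_n(1)$ is the normalized $T_n$ itself, and $W_n(t)$ looks ever deeper into the tail sequence as $t \downarrow 0$. A tiny illustrative sketch:

```python
import math

def k_n(n, t):
    """Time-change of the tail sequence: k_n(t) = min{k : n/k <= t} = ceil(n/t)."""
    if not 0 < t <= 1:
        raise ValueError("k_n(t) is defined for 0 < t <= 1")
    return math.ceil(n / t)

# W_n(t) examines T_k at k = k_n(t): k = n at t = 1, and k grows as t decreases.
print([k_n(100, t) for t in (1.0, 0.5, 0.25, 0.1)])  # [100, 200, 400, 1000]
```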
We shall see in Section 3 that (2.9) also holds under the hypothesis of the theorem. Stigler (1969) has shown that (2.13) holds under suitable regularity conditions and, further, $T_n$ can then be written as $S_n + R_n$, where $S_n$ is an average of independent random variables and $R_n = o_p(n^{-1/2})$. As such, for finitely many $T_{n_j}$'s, this decomposition and an application of the central limit theorem (for a double sequence of independent r.v.'s) on the $S_{n_j}$'s yield the convergence of the f.d.d.'s of $\{W_n\}$ to those of $W$ under the regularity conditions of Section 4 of Stigler (1969). Hence, under the additional assumption that $d_{n,i} = 0$, $\forall\, 1 \le i \le n$, $n \ge n_0$, his asymptotic normality result extends to the invariance principle in (2.10).

Let $I(A)$ be the indicator function of a set $A$, and also let

(2.14)    $\sigma_n^2 = n^2 \sum_{i=1}^{n}\sum_{j=1}^{n} c_{n,i}\,c_{n,j}\,(n+1)^{-2}[(i \wedge j)(n+1) - ij]\,(X_{n+1,i+1} - X_{n+1,i})(X_{n+1,j+1} - X_{n+1,j})$.

Then, the following theorem provides a set of different regularity conditions pertaining to (2.10), where $\{\mathcal{F}_n, n \ge 1\}$ is defined in the beginning of Section 3.

Theorem 2. If $d_{n,i} = 0$ for $1 \le i \le n$, $n \ge n_0\,(\ge 1)$, and if, as $n \to \infty$,

(2.15)    $\sigma_n^2 \to \sigma^2$ a.s.,

(2.16)    $E[U_n^2\, I(|U_n| > \epsilon n^{1/2}) \mid \mathcal{F}_{n+1}] \to 0$ a.s., $\forall\, \epsilon > 0$,

then (2.10) holds.
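The conditional second-moment structure behind (2.14) can be checked mechanically: given $\mathcal{F}_{n+1}$, the size-$n$ sample is one of the $n+1$ equally likely deletions of an order statistic, so the conditional moments of $T_n$ can be enumerated exactly. A sketch (illustrative data and weights; the double-sum form follows the reconstruction of (2.14) and of (3.7)-(3.8) below):

```python
import numpy as np

def cond_moments_by_enumeration(x_sorted, c):
    """Enumerate the n+1 equally likely deletions of one order statistic and
    return the conditional mean and variance of T_n = sum_i c_i X_{n,i}."""
    vals = np.array([np.dot(c, np.delete(x_sorted, k)) for k in range(len(x_sorted))])
    return vals.mean(), vals.var()

def cond_var_formula(x_sorted, c):
    """Double-sum form: sum_{i,j} c_i c_j [(i ^ j)(n+1) - ij]/(n+1)^2 D_i D_j,
    with D_i = X_{n+1,i+1} - X_{n+1,i} (i.e., (2.14) without the n^2 factor)."""
    m = len(x_sorted)                    # m = n + 1
    n = m - 1
    d = np.diff(x_sorted)                # D_i, i = 1, ..., n
    i = np.arange(1, n + 1)
    cov = (np.minimum.outer(i, i) * m - np.outer(i, i)) / m**2
    return float(c @ (cov * np.outer(d, d)) @ c)

rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=8))          # n + 1 = 8 order statistics
c = rng.uniform(size=7)                  # arbitrary weights c_{7,1}, ..., c_{7,7}
_, var_enum = cond_moments_by_enumeration(x, c)
print(abs(var_enum - cond_var_formula(x, c)))  # ~ 0: the identity is exact
```

When the $d_{n,i}$ vanish, the enumerated conditional mean is $T_{n+1}$ and $n^2$ times the enumerated conditional variance is $\sigma_n^2$ of (2.14).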
We may note that for both Theorems 1 and 2, the assumption that the $d_{n,i}$ are all equal to $0$ is very crucial. As we shall see later on, in many cases the $d_{n,i}$ may not be exactly equal to $0$, though they are very closely so. Towards this, we present the following.

Theorem 3. If $N^{3/2}\, T^*_N \to 0$ a.s., as $N \to \infty$, then (2.10) holds under the rest of the assumptions of Theorem 1 or Theorem 2.

The proofs of the theorems are outlined in Section 3. Section 4 is devoted to a.s. convergence and uniform integrability of linear functions of order statistics having direct impact on (2.15), (2.16) and the first condition of Theorem 3.
3. Proofs of the main theorems. Let $\mathcal{F}_n = \mathcal{F}(X_{n,1}, \dots, X_{n,n},\, X_{n+j},\, j \ge 1)$ be the $\sigma$-field generated by $(X_{n,1}, \dots, X_{n,n})$ and $X_{n+j}$, $j \ge 1$, for $n \ge 1$. Then $\mathcal{F}_n$ is non-increasing in $n$.

Lemma 3.1. If for some $n_0$, $d_{n,i} = 0$ for every $1 \le i \le n$, $n \ge n_0$, then $\{T_n, \mathcal{F}_n;\, n \ge n_0\}$ is a reverse martingale.

Proof. Given $\mathcal{F}_{n+1}$, $X_{n+1}$ can assume the values $X_{n+1,1}, \dots, X_{n+1,n+1}$ with equal conditional probability $(n+1)^{-1}$. Thus, the possible realizations of $(X_{n,1}, \dots, X_{n,n})$ are $(X_{n+1,j},\, j \ne k)$, $k = 1, \dots, n+1$, and these are conditionally equally likely; here, for $k = 1$ (or $n+1$), the vector starts (ends) with $X_{n+1,2}$ ($X_{n+1,n}$). Hence, for $n \ge 1$,

(3.1)    $E(T_n \mid \mathcal{F}_{n+1}) = \sum_{i=1}^{n} c_{n,i}\, E[X_{n,i} \mid \mathcal{F}_{n+1}]$
         $= \sum_{i=1}^{n} c_{n,i}\Big\{\frac{n-i+1}{n+1} X_{n+1,i} + \frac{i}{n+1} X_{n+1,i+1}\Big\}$
         $= \sum_{i=1}^{n+1}\Big\{\frac{i-1}{n+1}\, c_{n,i-1} + \frac{n-i+1}{n+1}\, c_{n,i}\Big\} X_{n+1,i}$   $(c_{n,0} = c_{n,n+1} = 0)$
         $= \sum_{i=1}^{n+1} c^*_{n+1,i} X_{n+1,i} = T_{n+1} - \sum_{i=1}^{n+1} d_{n+1,i} X_{n+1,i}$.

Since the $d_{n,i}$ are all equal to $0$, the lemma follows from (3.1).   Q.E.D.
As an illustration, we consider the following.

Example 3.1. Let $k$ be a non-negative integer, $n_0 = 2k+1$, and

(3.2)    $c_{n,i} = \binom{n}{2k+1}^{-1}\binom{i-1}{k}\binom{n-i}{k}$,   $1 \le i \le n$;

then

    $\frac{i-1}{n+1}\, c_{n,i-1} + \frac{n-i+1}{n+1}\, c_{n,i} = \binom{n+1}{2k+1}^{-1}\binom{i-1}{k}\binom{n+1-i}{k} = c_{n+1,i}$,   $1 \le i \le n+1$.

Thus, Lemma 3.1 applies here. For $k = 0$, $T_n$ is the sample mean, while for $k \ge 1$, it is a robust competitor of the sample mean; see Sen (1964).
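The identity underlying Example 3.1 can be verified in exact rational arithmetic. The sketch below (not part of the paper) checks $\frac{i-1}{n+1}c_{n,i-1} + \frac{n-i+1}{n+1}c_{n,i} = c_{n+1,i}$, i.e., $d_{n+1,i} = 0$, for the rank-weighted-mean scores:

```python
from fractions import Fraction
from math import comb

def c(n, i, k):
    """Rank-weighted-mean scores of Example 3.1:
    c_{n,i} = C(i-1, k) C(n-i, k) / C(n, 2k+1) for 1 <= i <= n, and 0 otherwise."""
    if i < 1 or i > n:
        return Fraction(0)
    return Fraction(comb(i - 1, k) * comb(n - i, k), comb(n, 2 * k + 1))

# d_{n+1,i} = c_{n+1,i} - (i-1)/(n+1) c_{n,i-1} - (n-i+1)/(n+1) c_{n,i} = 0:
for k in (0, 1, 2):
    for n in range(2 * k + 1, 12):
        for i in range(1, n + 2):
            lhs = Fraction(i - 1, n + 1) * c(n, i - 1, k) + Fraction(n - i + 1, n + 1) * c(n, i, k)
            assert lhs == c(n + 1, i, k)
print("d_{n+1,i} = 0 in all cases checked")
```

For $k = 0$ the scores reduce to $c_{n,i} = 1/n$, the sample-mean case noted above.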
Let us now consider the proof of Theorem 1. Here, (2.9) is insured by Lemma 3.1 and (2.13). Also, note that for every $0 \le s < s + \delta \le 1$,

(3.3)    $\sup\{|W_n(t) - W_n(s)|: s \le t \le s+\delta\} \le 2\sigma^{-1} n^{1/2} \max_{k_1 \le q \le k_2} |T_q - T_{k_2}|$,

where $k_1 = k_n(s+\delta)$, $k_2 = k_n(s)$, and

(3.4)    $k_2^{-1} n \le s < (k_2 - 1)^{-1} n$.

Since $\{|T_q - T_{k_2}|, \mathcal{F}_q;\, q \le k_2\}$ has the reverse sub-martingale property, by reversing the order of the index set and using Lemma 4 of Brown (1971), we obtain that for every $\epsilon > 0$,

(3.5)    $P\{\max_{k_1 \le q \le k_2} |T_q - T_{k_2}| > \sigma\epsilon n^{-1/2}\} \le (\sigma\epsilon)^{-2}\, n\, E\{[T_{k_1} - T_{k_2}]^2\}$,

where, by (2.13) and (3.4), as $n \to \infty$, $E\{n[T_{k_1} - T_{k_2}]^2\} \to \delta\sigma^2$, and also the convergence of the f.d.d.'s of $\{W_n\}$ to those of $W$ implies that, as $n \to \infty$,

(3.6)    $W_n(s+\delta) - W_n(s) \to \mathcal{N}(0, \delta)$ in distribution.

Finally, if $\Phi(x)$ is the standard normal d.f., then for every $x \ge 1$, $2\{1 - \Phi(x)\} \le (2/\pi)^{1/2}\, x^{-1}\exp(-\tfrac12 x^2)$; hence, for every $\epsilon > 0$, we can choose $\delta\,(>0)$ so small that the right-hand side of (3.5) is less than $\eta\delta$, for arbitrary $\eta > 0$. Hence, $\{W_n\}$ is tight.   Q.E.D.
Consider next the proof of Theorem 2. Here also, by virtue of Lemma 3.1, we are in a position to apply the invariance principles for reverse martingales for our purpose. Towards this, note that

(3.7)    $\mathrm{Var}(X_{n,i} \mid \mathcal{F}_{n+1}) = \frac{n-i+1}{n+1} X^2_{n+1,i} + \frac{i}{n+1} X^2_{n+1,i+1} - \Big\{\frac{n-i+1}{n+1} X_{n+1,i} + \frac{i}{n+1} X_{n+1,i+1}\Big\}^2 = \frac{i(n+1-i)}{(n+1)^2}\,[X_{n+1,i+1} - X_{n+1,i}]^2$,

and, similarly, for $1 \le i < j \le n$,

(3.8)    $\mathrm{Cov}(X_{n,i}, X_{n,j} \mid \mathcal{F}_{n+1}) = \frac{i(n+1-j)}{(n+1)^2}\,[X_{n+1,i+1} - X_{n+1,i}][X_{n+1,j+1} - X_{n+1,j}]$.

Thus, by (1.1), (3.7) and (3.8),

(3.9)    $E\{(T_n - T_{n+1})^2 \mid \mathcal{F}_{n+1}\} = n^{-2}\sigma_n^2$, with $\sigma_n^2$ defined by (2.14).

Hence, by (2.15) and (3.9),

(3.10)    $n^2\, E\{(T_n - T_{n+1})^2 \mid \mathcal{F}_{n+1}\} \to \sigma^2$ a.s., as $n \to \infty$.

Also, for every $\epsilon > 0$, as $n \to \infty$,

(3.11)    $E[U_n^2\, I(|U_n| > \epsilon n^{1/2}) \mid \mathcal{F}_{n+1}] \to 0$ a.s., by (2.16).

Hence, by an updated version of a basic theorem of Loynes (1970), we conclude from (3.10) and (3.11) that (2.10) holds.   Q.E.D.
Finally, consider the proof of Theorem 3. Let

(3.12)    $U_N = T_N - E(T_N \mid \mathcal{F}_{N+1}) = T_N - T_{N+1} + T^*_{N+1}$,   $N \ge n_0$,

by (3.1). Therefore, for every $q \ge n\,(\ge n_0)$, by the first condition of Theorem 3,

(3.13)    $\sup_{q \ge n} n^{1/2}\Big|T_q - \mu - \sum_{N \ge q} U_N\Big| = \sup_{q \ge n} n^{1/2}\Big|\sum_{N \ge q+1} T^*_N\Big| \to 0$ a.s., as $n \to \infty$.

Further, $\{U_N, \mathcal{F}_N;\, N \ge n_0\}$ is a reverse martingale-difference sequence, and

(3.14)    $E\{U_k^2\} = E\{E[U_k^2 \mid \mathcal{F}_{k+1}]\} < \infty$,   $k \ge n_0$.

Further, (3.13) and the convergence of the f.d.d.'s of $\{W_n\}$ to those of $W$ insure the same for the process where $T_N$ is replaced by $\sum_{k \ge N} U_k$, $N \ge n$. Hence, the proof of Theorem 1 readily extends. For Theorem 2, we note that (3.14) and the first condition of Theorem 3 imply that

(3.15)    $n^2\, E\{U_n^2 \mid \mathcal{F}_{n+1}\} - \sigma_n^2 \to 0$ a.s., as $n \to \infty$.

Thus, by (2.15) and (3.15), here also (3.10) holds with $T_N - T_{N+1}$ being replaced by $U_N$. Also, for every $N \ge n$, (3.11) holds with $T_N - T_{N+1}$ being replaced by $U_N$, by (2.15) and the fact that $N^{3/2}\, T^*_N \to 0$ a.s., as $N \to \infty$. Therefore, the proof of Theorem 2 readily extends to that of Theorem 3.   Q.E.D.
4. A.s. convergence and uniform integrability of functions of order statistics. In this section, we shall elaborate (2.15), (2.16) and the first condition of Theorem 3, and formulate suitable regularity conditions under which these hold. In this context, we consider first the a.s. convergence of an arbitrary linear compound of order statistics. Let

(4.1)    $\nu_n = n^{-1}\sum_{i=1}^{n} h_{n,i} X_{n,i} = \int_{-\infty}^{\infty} \psi_n(F_n(x))\, x\, dF_n(x)$,

where $\psi_n(t) = h_{n,i}$ for $(i-1)/n < t \le i/n$, $1 \le i \le n$, and let

(4.2)    $\nu = \int_{-\infty}^{\infty} \psi(F(x))\, x\, dF(x)$,

where we assume that $\psi_n(u) \to \psi(u)$ as $n \to \infty$ for every $0 < u < 1$ and the integral in (4.2) exists. Our concern is to study suitable regularity conditions under which

(4.3)    $\nu_n \to \nu$ a.s., as $n \to \infty$.

Note that whenever $\{\nu_n, \mathcal{F}_n;\, n \ge n_0\}$ is a reverse martingale, we may use the reverse martingale convergence theorem to prove (4.3). Hence, in the sequel, we only consider the case where the above reverse-martingale property may not hold. We assume that

(4.4)    $\psi(t) = \psi_1(t) - \psi_2(t)$, $0 < t < 1$, where $\psi_j(t)$ is nondecreasing in $t$ and is continuous inside $I$, $j = 1, 2$.

Then, the following theorems relate to (4.3) under different sets of regularity conditions.
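To make (4.1)-(4.3) concrete, the following sketch (illustrative only; the trimmed-mean score and the uniform df are assumptions, not the paper's example) computes $\nu_n$ and compares it with the limit $\nu$, which here equals $1/2$:

```python
import numpy as np

def nu_n(x, psi):
    """nu_n = n^{-1} sum_i h_{n,i} X_{n,i}, with h_{n,i} = psi_n(i/n) and
    psi_n = psi for simplicity, cf. (4.1)."""
    n = len(x)
    order_stats = np.sort(x)
    return float(np.mean(psi(np.arange(1, n + 1) / n) * order_stats))

alpha = 0.1
psi = lambda u: ((u > alpha) & (u <= 1 - alpha)) / (1 - 2 * alpha)  # trimming score

# For F uniform on (0,1): nu = (1-2a)^{-1} int_a^{1-a} x dx = 1/2, and (4.3)
# asserts nu_n -> nu a.s.; one long sample illustrates the convergence:
rng = np.random.default_rng(2)
print(nu_n(rng.uniform(size=200_000), psi))  # near 0.5
```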
Theorem 4.1. Let $r$ and $s$ be two positive integers satisfying $r^{-1} + s^{-1} = 1$ and such that $E|X|^r < \infty$ and

(4.5)    $\limsup_{n}\, n^{-1}\sum_{i=1}^{n} |\psi_n(i/n)|^s < \infty$.

Then, under (4.4), (4.3) holds.

Proof. We may rewrite $\nu_n - \nu$ as

(4.6)    $\nu_n - \nu = \int_{-\infty}^{\infty} [\psi_n(F_n(x)) - \psi(F(x))]\, x\, dF_n(x) + \int_{-\infty}^{\infty} x\,\psi(F(x))\, d[F_n(x) - F(x)]$.

By the Khintchine strong law of large numbers, the second term on the r.h.s. of (4.6) a.s. converges to $0$ as $n \to \infty$. We write the first term as

(4.7)    $\Big(\int_{-C}^{C} + \int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)[\psi_n(F_n(x)) - \psi(F(x))]\, x\, dF_n(x) = I_{n1} + I_{n2} + I_{n3}$,

where $C\,(0 < C < \infty)$ is so chosen that, for given $\epsilon > 0$ and $\eta > 0$,

(4.8)    $\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)|x\,\psi(F(x))|\, dF(x) < \tfrac14\epsilon$ and $\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)|x|^r\, dF(x) < \eta$.

For $I_{n1}$, $|x| \le C$ is bounded, so that by (4.4) and the Glivenko-Cantelli theorem, $\sup\{|x|\,|\psi_n(F_n(x)) - \psi(F(x))|: |x| \le C\} \to 0$ a.s., as $n \to \infty$, and hence $I_{n1} \to 0$ a.s. For $I_{n2} + I_{n3}$, note that $\{(\int_{-\infty}^{-C} + \int_{C}^{\infty})\, x\,\psi(F(x))\, dF_n(x), \mathcal{F}_n;\, n \ge 1\}$ is a reverse martingale (uniformly integrable), so that by (4.8), $|(\int_{-\infty}^{-C} + \int_{C}^{\infty})\, x\,\psi(F(x))\, dF_n(x)| < \tfrac12\epsilon$ a.s., as $n \to \infty$. Further, by the Hölder inequality,

(4.9)    $\Big|\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)\psi_n(F_n(x))\, x\, dF_n(x)\Big| \le \Big(\int_{-\infty}^{\infty}|\psi_n(F_n(x))|^s\, dF_n(x)\Big)^{1/s}\Big(\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)|x|^r\, dF_n(x)\Big)^{1/r}$,

where $\{(\int_{-\infty}^{-C} + \int_{C}^{\infty})|x|^r\, dF_n(x), \mathcal{F}_n;\, n \ge 1\}$ is a reverse martingale (uniformly integrable), so that by (4.8), $(\int_{-\infty}^{-C} + \int_{C}^{\infty})|x|^r\, dF_n(x) < \eta$ a.s., as $n \to \infty$. Finally, by (4.5),

(4.10)    $\int_{-\infty}^{\infty}|\psi_n(F_n(x))|^s\, dF_n(x) = n^{-1}\sum_{i=1}^{n}|\psi_n(i/n)|^s < \infty$.

Thus, by proper choice of $\eta\,(>0)$, the r.h.s. of (4.9) can be made $\le \tfrac12\epsilon$ a.s., as $n \to \infty$. Thus, $I_{n1} + I_{n2} + I_{n3} \to 0$ a.s., as $n \to \infty$.   Q.E.D.

In the above theorem, we need that $E|X|^r < \infty$ for some $r > 1$. This may be relaxed (at the cost of additional restrictions on $\psi_n$) as follows.
Theorem 4.2. Suppose that (i) $\psi_n(i/n) = \psi(i/(n+1))$, $1 \le i \le n$, and $\psi_n(u) \to \psi(u)$ as $n \to \infty$; (ii) $\psi_n$ (and hence $\psi$) satisfies the Lipschitz condition $|\psi_n(u) - \psi_n(v)| \le K|u - v|$ for all $u, v \in I$; and (iii) $E|X|^r < \infty$ for some $r > \tfrac23$. Then (4.3) holds.

Proof. By (4.6)-(4.7), here also, we need only to show that, as $n \to \infty$,

(4.11)    $\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)\{\psi_n(F_n(x)) - \psi_n(F(x))\}\, x\, dF_n(x) \to 0$ a.s. and $n^{-1}\sum_{i=1}^{n}\psi_n(F(X_i))\, X_i \to \nu$ a.s.;

the second convergence follows as in Theorem 4.1, and the crucial step is the first one. In this context, we use the following result, due to Ghosh (1972): for every $\eta > 0$, there exists a $K_1\,(= K_{1\eta} < \infty)$, such that as $n \to \infty$,

(4.12)    $\sup_x\, |F_n(x) - F(x)|\,/\,\{F(x)[1 - F(x)]\}^{\frac12 - \eta} \le K_1\, n^{-1/2}(\log n)$ a.s.

Hence, by (4.12) and the Lipschitz condition,

(4.13)    $|x\{\psi_n(F_n(x)) - \psi_n(F(x))\}| \le K|x|\,|F_n(x) - F(x)| \le K K_1\, n^{-1/2}(\log n)\,|x|\{F(x)[1 - F(x)]\}^{\frac12 - \eta}$ a.s.
          $= K K_1\, n^{-1/2}(\log n)\,|x|^{1 - \frac32 r + r\eta}\{|x|^r F(x)[1 - F(x)]\}^{\frac12 - \eta}\,|x|^r$.

Now, $E|X|^r < \infty \Rightarrow |x|^r F(x)[1 - F(x)]$ is bounded for all $x$, and it converges to $0$ as $x \to \pm\infty$. Also, $\max_{1 \le i \le n} |X_{n,i}|^r = O(n)$ a.s., as $n \to \infty$, and $dF_n(x) = 0$ unless $x = X_{n,i}$, $1 \le i \le n$. Hence, by (4.13), as $n \to \infty$,

(4.14)    $\Big|\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)\, x\,[\psi_n(F_n(x)) - \psi_n(F(x))]\, dF_n(x)\Big| = o(1)$ a.s., as $r > \tfrac23$.   Q.E.D.
Remark. In the context of the asymptotic normality of $n^{1/2}(T_n - \mu)$, Stigler (1969) and Shorack (1969) both, for technical reasons, required that $c_{n,i} = 0$ for all $i \le k_n$ and $i \ge n - k_n + 1$, where $k_n \uparrow \infty$ but $n^{-1}k_n \to 0$ as $n \to \infty$, and they assume that, for some $r > 0$, $E|X|^r < \infty$. Note that $E|X|^r < \infty \Rightarrow \max_{k_n \le i \le n-k_n+1} |X_{n,i}|^r = o(n/k_n)$ a.s., and hence, in (4.14), we need that $k_n = n^a$ for some $a > 2(1-2r)/(2-3r)$, $0 < r \le \tfrac12$. This is also possible in our case.
Let us now examine the regularity conditions pertaining to (2.15). For simplicity, we let $c_{n,i} = n^{-1}\phi(i/(n+1))$, $1 \le i \le n$, and assume that $\phi$ satisfies (4.4). Then, we may rewrite $\sigma_n^2$ in (2.14) as

(4.15)    $\sigma_n^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} [F_{n+1}(x \wedge y) - F_{n+1}(x)F_{n+1}(y)]\,\phi(F_{n+1}(x))\phi(F_{n+1}(y))\, dx\, dy$.

We assume that

(4.16)    $L = \int_{-\infty}^{\infty} |\phi(F(x))|\,\{F(x)[1 - F(x)]\}^{1/2}\, dx < \infty$,

which insures that $\sigma^2 < \infty$. Also, let $E = \{(x,y): -\infty < x, y < \infty\}$ and, for $C\,(< \infty)$, let $E_C = \{(x,y): |x| < C,\, |y| < C\}$ and $E^*_C = E \setminus E_C$. Further, we choose $C$ such that, for some given $\eta > 0$,

(4.17)    $\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)|\phi(F(x))|\,\{F(x)[1 - F(x)]\}^{1/2}\, dx < \eta/4L$.

Then, from (2.7) and (4.17), we have

(4.18)    $\Big|\sigma^2 - \int\!\!\int_{E_C} [F(x \wedge y) - F(x)F(y)]\,\phi(F(x))\phi(F(y))\, dx\, dy\Big| < \eta/2$.

By (4.4) (for $\phi = \psi$) and the Glivenko-Cantelli theorem, as $n \to \infty$,

(4.19)    $\int\!\!\int_{E_C} [F_{n+1}(x \wedge y) - F_{n+1}(x)F_{n+1}(y)]\,\phi(F_{n+1}(x))\phi(F_{n+1}(y))\, dx\, dy \to \int\!\!\int_{E_C} [F(x \wedge y) - F(x)F(y)]\,\phi(F(x))\phi(F(y))\, dx\, dy$ a.s.

Finally,

(4.20)    $\Big|\int\!\!\int_{E^*_C} [F_{n+1}(x \wedge y) - F_{n+1}(x)F_{n+1}(y)]\,\phi(F_{n+1}(x))\phi(F_{n+1}(y))\, dx\, dy\Big|$
          $\le 2\Big(\int_{-\infty}^{\infty}\{F_{n+1}(x)[1 - F_{n+1}(x)]\}^{1/2}\,|\phi(F_{n+1}(x))|\, dx\Big)\Big(\Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)\{F_{n+1}(y)[1 - F_{n+1}(y)]\}^{1/2}\,|\phi(F_{n+1}(y))|\, dy\Big)$.

As such, if we let

(4.21)    $\psi^*_n\big(\tfrac{i}{n+1}\big) = \{[i(n+1-i)]^{1/2}/(n+1)\}\,\big|\phi\big(\tfrac{i}{n+1}\big)\big|$, $1 \le i \le n$, with $\psi^*_n(0) = \psi^*_n(1) = 0$,

and note that $\psi^*_n(\tfrac{i}{n+1})$ is bounded for every $1 \le i \le n$ and tends to $0$ as $i/(n+1) \to 0$ or $1$, then, on defining

(4.22)    $\psi_n(t) = (n+1)\big[\psi^*_n\big(\tfrac{i-1}{n+1}\big) - \psi^*_n\big(\tfrac{i}{n+1}\big)\big]$ for $(i-1)/(n+1) < t \le i/(n+1)$, $1 \le i \le n+1$,

and integrating by parts, we obtain

(4.23)    $\int_{-\infty}^{\infty}\{F_{n+1}(x)[1 - F_{n+1}(x)]\}^{1/2}\,|\phi(F_{n+1}(x))|\, dx = \int_{-\infty}^{\infty}\psi^*_n(F_{n+1}(x))\, dx = -\int_{-\infty}^{\infty} x\, d\psi^*_n(F_{n+1}(x)) = \int_{-\infty}^{\infty}\psi_n(F_{n+1}(x))\, x\, dF_{n+1}(x)$.

Thus, if $\psi_n$, defined by (4.22), satisfies the hypothesis of Theorem 4.1 or 4.2 (or the remark thereafter), the r.h.s. of (4.23) converges a.s. to $L$, and similarly, the second factor on the r.h.s. of (4.20) can be made (a.s.) $\le \eta/4L$, as $n \to \infty$. This leads us to the following.

Theorem 4.3. If (4.16) holds, $c_{n,i} = n^{-1}\phi(i/(n+1))$, $1 \le i \le n$, $\phi$ satisfies (4.4), and $\psi_n$, defined by (4.22), satisfies the hypothesis of Theorem 4.1 or 4.2, then (2.15) holds.
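The integration-by-parts step in (4.23) is an Abel summation and can be checked numerically on any sample; in the sketch below (illustrative score and data; the form of $\psi^*_n$ follows the reconstruction in (4.21)-(4.22)) the two forms agree to rounding error:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.normal(size=50))      # order statistics of a sample of size m = n + 1
m = len(x)
phi = lambda t: 2.0 * t - 1.0         # an arbitrary smooth score

u = np.arange(0, m + 1) / m           # grid 0, 1/m, ..., 1
g = np.sqrt(u * (1 - u)) * np.abs(phi(u))   # psi*_n at the grid; g[0] = g[m] = 0

# Direct form of (4.23): {F(x)[1-F(x)]}^{1/2} |phi(F(x))| is constant (= g[i])
# between consecutive order statistics, and vanishes outside the sample range.
direct = float(np.sum(g[1:m] * np.diff(x)))
# Parts form: - int x d psi*_n(F(x)) = - sum_i x_i (g[i] - g[i-1]).
parts = -float(np.sum(x * np.diff(g)))
print(direct, parts)                  # equal up to floating-point roundoff
```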
We proceed on to (2.16). Note that if $X_{n+1} = X_{n+1,k}$ $(1 \le k \le n+1)$, then, under the condition that $d_{n,i} = 0$, $\forall\, 1 \le i \le n$, $n \ge n_0$, by (2.12),

(4.24)    $U_n = n\Big\{\sum_{i=1}^{k-1} c_{n,i} X_{n+1,i} + \sum_{i=k+1}^{n+1} c_{n,i-1} X_{n+1,i} - \sum_{i=1}^{n+1} c_{n+1,i} X_{n+1,i}\Big\}$
          $= n\Big\{\sum_{i=1}^{k-1}\Big[\frac{i}{n+1}\, c_{n,i} - \frac{i-1}{n+1}\, c_{n,i-1}\Big] X_{n+1,i} - c_{n+1,k} X_{n+1,k} - \sum_{i=k+1}^{n+1}\Big[\frac{n-i+1}{n+1}\, c_{n,i} - \frac{n-i+2}{n+1}\, c_{n,i-1}\Big] X_{n+1,i}\Big\}$
          $= \int_{-\infty}^{\infty} [I(x \ge X_{n+1,k}) - F_{n+1}(x)]\,\phi(F_{n+1}(x))\, dx = U_{n+1,k}$, say, for $k = 1, \dots, n+1$.

Hence, for every $\lambda > 0$,

(4.25)    $E[U_n^2\, I(|U_n| > \lambda) \mid \mathcal{F}_{n+1}] = (n+1)^{-1}\sum_{k=1}^{n+1} U^2_{n+1,k}\, I(|U_{n+1,k}| > \lambda)$.

Thus, it suffices to show that, as $n \to \infty$,

(4.26)    $n^{-1/2}\Big|\int_{-\infty}^{\infty} [I(x \ge X_{n+1}) - F_{n+1}(x)]\,\phi(F_{n+1}(x))\, dx\Big| \to 0$ a.s.

For this purpose, we define

(4.27)    $Z_n = \int_{-\infty}^{\infty} [I(x \ge X_n) - F(x)]\,\phi(F(x))\, dx$,   $n \ge 1$,

(4.28)    $Z^*_n = \int_{-\infty}^{\infty} [I(x \ge X_n) - F_n(x)]\,\phi(F_n(x))\, dx$,   $n \ge 1$,

and, decomposing the range $(-\infty, \infty)$ as $(-\infty, -C)$, $[-C, C]$ and $(C, \infty)$, $C > 0$, we denote the corresponding components of $Z_n$ (or $Z^*_n$) by $Z^{(1)}_n(C)$, $Z^{(2)}_n(C)$ and $Z^{(3)}_n(C)$ (or $Z^{*(1)}_n(C)$, $Z^{*(2)}_n(C)$ and $Z^{*(3)}_n(C)$), respectively. Since the $\{Z_n, n \ge 1\}$ are i.i.d.r.v. with $EZ_n = 0$ and $EZ_n^2 = \sigma^2$, defined by (2.7), it follows that

(4.29)    $n^{-1/2}\max_{1 \le k \le n}|Z_k| \to 0$ and $n^{-1/2}\max_{1 \le k \le n}|Z^{(1)}_k(C)| \to 0$ a.s., as $n \to \infty$.

Also, if $\phi$ satisfies (4.4), then for every $C > 0$,

(4.30)    $Z^{*(2)}_n(C) - Z^{(2)}_n(C) = \int_{-C}^{C} [F(x) - F_n(x)]\,\phi(F_n(x))\, dx + \int_{-C}^{C} [I(x \ge X_n) - F(x)][\phi(F_n(x)) - \phi(F(x))]\, dx \to 0$ a.s., as $n \to \infty$.

Further, for every $C > 0$,

(4.31)    $|Z^{*(1)}_n(C)| + |Z^{*(3)}_n(C)| \le \Big(\int_{-\infty}^{-C} + \int_{C}^{\infty}\Big)\{F_n(x)[1 - F_n(x)]\}^{1/2}\,|\phi(F_n(x))|\, dx + |Z^{(1)}_n(C)| + |Z^{(3)}_n(C)|$.

As such, as in (4.20), by choosing $C$ adequately large, the r.h.s. of (4.31) can be made $\le \eta$ a.s., as $n \to \infty$. Hence, we obtain from (4.29)-(4.31) the following.

Theorem 4.4. Under the assumptions of Theorem 4.3, (2.16) holds.
Finally, let us examine the first condition of Theorem 3. Here also, we take

(4.32)    $c_{n,i} = n^{-1}\phi(i/(n+1))$, $1 \le i \le n$, $c_{n,0} = c_{n,n+1} = 0$.

Then, by (2.11) and (4.32), we have

(4.33)    $d_{n+1,i} = (n+1)^{-1}\Big\{\phi\big(\tfrac{i}{n+2}\big) - \tfrac{i-1}{n}\,\phi\big(\tfrac{i-1}{n+1}\big) - \tfrac{n-i+1}{n}\,\phi\big(\tfrac{i}{n+1}\big)\Big\}$,   $1 \le i \le n+1$,

where, conventionally, $(k/n)\phi(k/n) = 0$ if $k = 0$ and $((n-k)/n)\phi(k/n) = 0$ if $k = n$. Thus, on defining

(4.34)    $\psi_n\big(\tfrac{i}{n+1}\big) = (n+1)^{3/2}\, d_{n+1,i} = (n+1)^{1/2}\Big[\phi\big(\tfrac{i}{n+2}\big) - \tfrac{i-1}{n}\,\phi\big(\tfrac{i-1}{n+1}\big) - \tfrac{n-i+1}{n}\,\phi\big(\tfrac{i}{n+1}\big)\Big]$,   $1 \le i \le n+1$,

it remains to verify the conditions of Theorem 4.1 or 4.2 and to show that $\nu$, defined by (4.2), is equal to $0$. Let $\phi'$ and $\phi''$ be the first and second derivatives of $\phi$. First, consider the simplest case, where $\phi''$ is bounded inside $I$. Then, by (4.34) and simple expansion, we get

(4.35)    $\big|\psi_n\big(\tfrac{i}{n+1}\big)\big| = O((n+1)^{-1/2})\Big\{\Big|\tfrac{n-2i+2}{n+1}\Big|\,\big|\phi'\big(\tfrac{i}{n+2}\big)\big| + \tfrac{i(n+1-i)}{(n+1)^2}\,\big|\phi''\big(\tfrac{i}{n+2}\big)\big|\Big\}$,

so that, by the same technique as in Theorem 4.2, it follows that

(4.36)    $[\,E|X|^r < \infty$ for some $r > \tfrac23\,] \Rightarrow N^{3/2}\, T^*_N \to 0$ a.s., as $N \to \infty$.
Consider next the case where, for some $\delta > 0$ and $0 < K < \infty$,

(4.37)    $|\phi'(u)| \le K[u(1-u)]^{-\frac32 + \delta}$ and $|\phi''(u)| \le K[u(1-u)]^{-\frac52 + \delta}$,   $0 < u < 1$.

Note then that $\psi_n(1/(n+1)) = -[(n+1)^{1/2}/\{(n+1)(n+2)\}]\,\phi'(\theta/(n+1) + (1-\theta)/(n+2))$, $0 < \theta < 1$, and $\psi_n(1) = [(n+1)^{1/2}/\{(n+1)(n+2)\}]\,\phi'(1 - \theta/(n+1) - (1-\theta)/(n+2))$, $0 < \theta < 1$, while for $2 \le i \le n$, on noting that $(n+1)^{-1/2} \le \{i(n+1-i)/(n+1)\}^{-1/2}$,

(4.38)    $\big|\psi_n\big(\tfrac{i}{n+1}\big)\big| \le K_1\Big\{\tfrac{|n-2i+2|\{i(n+1-i)\}^{1/2}}{(n+1)^2}\,\big|\phi'\big(\tfrac{i}{n+2}\big)\big| + \tfrac{\{i(n+1-i)\}^{3/2}}{(n+1)^3}\,\big|\phi''\big(\tfrac{i}{n+2}\big)\big|\Big\} \le K^*\{i(n+1-i)/(n+1)^2\}^{-1+\delta}$,   $2 \le i \le n$.

Thus, if for some $\eta$: $0 < \eta < \delta < 1$,

(4.39)    $r = (\delta - \eta)^{-1}$ and $s = (1 - \delta + \eta)^{-1}$,

then (4.38) insures that (4.5) holds, and hence, by Theorem 4.1, $E|X|^r < \infty$ for $r = (\delta - \eta)^{-1}$ insures that

(4.40)    $N^{3/2}\, T^*_N \to 0$ a.s., as $N \to \infty$;

here we require $r > 1$, while the technique of Theorem 4.2 requires only $r > \tfrac23$, along with the convergence of $\int_{-\infty}^{\infty} |x|\,\{F(x)[1 - F(x)]\}^{\delta - 1}\, dF(x)$, which holds when $E|X|^r < \infty$ for some $r \ge 1/\delta$.

In this set-up, for unbounded $\phi'$ (or $\phi''$), we need $E|X|^r < \infty$ for some $r > \tfrac23$. However, if we let $c_{n,i} = 0$ for $i \le k_n$ or $i \ge n - k_n$, where $k_n = n^{\alpha}$, $0 < \alpha < 1$ (the truncated case), then, on noting that, for every $\gamma > 0$, $[u(1-u)]^{-\gamma} \le K^*[n^{-1}k_n]^{-\gamma} = O(n^{\gamma(1-\alpha)})$, $\forall\, n^{-1}k_n < u < 1 - n^{-1}k_n$, and proceeding as in (4.38)-(4.39), we obtain that, under (4.37), for

(4.41)    $N^{3/2}\, T^*_N \to 0$ a.s.,

all we need is that $\gamma(1-\alpha) = \tfrac12$ and that $\delta$ be replaced by $\delta + \gamma - \tfrac12$ in (4.39). Thus, for every $\delta > 0$, $\gamma$ can be made adequately large by choosing $\alpha$ accordingly, and hence, in the truncated case, $E|X|^r < \infty$ for some $r > 0$ is sufficient.
5. Some general remarks. In (1.1) and elsewhere, we have considered a linear combination of the $X_{n,i}$. More generally, one may also consider

(5.1)    $T_n = \sum_{i=1}^{n} c_{n,i}\, g(X_{n,i})$,

where $g = \{g(t), -\infty < t < \infty\}$, and, under suitable conditions on $g$ (such as absolute continuity etc.), obtain parallel results. Note that Lemma 3.1 holds for $g$ ($X$ being replaced by $g(X)$), and so also do (3.7) and (3.8). In Section 4, we need to replace $E|X|^r$ by $E|g(X)|^r$. Secondly, one can also consider

(5.2)    $T_n^0 = \sum_{i=1}^{n} c_{n,i} X_{n,i} + \sum_{i=1}^{s} c^0_{n,i}\, X_{n,[np_i]+1}$,

where the constants $c^0_{n,i}$ satisfy $|c^0_{n,i} - c^0_i| = o(n^{-1/2})$, $0 < |c^0_i| < \infty$, $i = 1, \dots, s$, $0 < p_1 < \cdots < p_s < 1$, and $[s]$ denotes the largest integer contained in $s$. Let $\xi_{p_i}$ be defined (uniquely) by $F(\xi_{p_i}) = p_i$, $1 \le i \le s$, and

(5.3)    $\mu^0 = \sum_{i=1}^{s} c^0_i\, \xi_{p_i}$.
From the results of Bahadur (1966), it follows that if $f(\xi_{p_i}) > 0$ and, in some neighborhood of $\xi_{p_i}$, $f'(x)$ is bounded, then, as $n \to \infty$,

(5.4)    $X_{n,[np_i]+1} - \xi_{p_i} + (F_n(\xi_{p_i}) - p_i)/f(\xi_{p_i}) = O(n^{-3/4}\log n)$ a.s., for $i = 1, \dots, s$.

Further, it is well known that

(5.5)    $\{(F_n(\xi_{p_i}) - p_i,\, 1 \le i \le s),\, \mathcal{F}_n;\, n \ge 1\}$ is a reverse martingale.

Thus, $n^{1/2}(T_n^0 - \mu - \mu^0)$ is a.s. (as $n \to \infty$) equivalent to

(5.6)    $n^{1/2}(T_n - \mu) - n^{1/2}\sum_{i=1}^{s} c^0_i\,(F_n(\xi_{p_i}) - p_i)/f(\xi_{p_i})$.

As such, Theorems 1, 2 and 3 readily extend to $\{T^0_N, N \ge n\}$ as well; only $\sigma^2$ has to be adjusted.
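The Bahadur representation (5.4) is easy to visualize by simulation. The sketch below is illustrative (uniform df, so $\xi_p = p$ and $f(\xi_p) = 1$ — an assumption, not the paper's setting) and compares the sample quantile with its first-order approximation:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100_000, 0.3
x = rng.uniform(size=n)

sample_q = np.sort(x)[round(n * p)]   # X_{n,[np]+1} (0-based index [np])
fn_at_xi = np.mean(x <= p)            # F_n(xi_p) with xi_p = p, f(xi_p) = 1
bahadur = p + (p - fn_at_xi)          # xi_p - (F_n(xi_p) - p)/f(xi_p), cf. (5.4)

# (5.4): the gap is O(n^{-3/4} log n) a.s. -- far below the O(n^{-1/2})
# fluctuation of the quantile itself.
print(abs(sample_q - bahadur), abs(sample_q - p))
```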
Ghosh (1972) considered an almost sure representation of $T_n$ in terms of i.i.d. r.v.'s and required the conditions that (i) $E|X|^r < \infty$ for some $r > 2$, and (ii) $\phi''(u)$ is bounded inside $I$. In so far as the theorems of Section 2 are concerned, we do not need the boundedness of $\phi''$ or that $E|X|^r < \infty$ for some $r > 2$. Also, Ghosh and Sen (1976) have proved (2.15) (in the context of sequential tests based on linear functions of order statistics) under the assumption that $E|X|^2 < \infty$ and $\phi''$ is bounded. Our Theorem 4.3 improves this result under weaker conditions. This result and the basic theorems of Section 2 enable us to study the convergence of the OC functions of sequential tests based on $\{T_n\}$ under weaker regularity conditions.
REFERENCES

[1] Bahadur, R.R. (1966). A note on quantiles in large samples. Ann. Math. Statist. 37, 577-580.

[2] Brown, B.M. (1971). Martingale central limit theorems. Ann. Math. Statist. 42, 59-66.

[3] Ghosh, M. (1972). On the representation of linear functions of order statistics. Sankhya, Ser. A 34, 349-356.

[4] Ghosh, M. and Sen, P.K. (1976). Asymptotic theory of sequential tests based on linear functions of order statistics. Essays in Statistics and Probability (Birthday volume in honor of Prof. J. Ogawa), Ed. S. Ikeda (in press).

[5] Loynes, R.M. (1970). An invariance principle for reversed martingales. Proc. Amer. Math. Soc. 25, 56-64.

[6] Pyke, R. and Shorack, G.R. (1968). Weak convergence of a two-sample empirical process and a new approach to Chernoff-Savage theorems. Ann. Math. Statist. 39, 755-771.

[7] Sen, P.K. (1964). On some properties of the rank weighted means. Jour. Indian Soc. Agri. Stat. 16, 51-61.

[8] Shorack, G.R. (1969). Asymptotic normality of linear combinations of order statistics. Ann. Math. Statist. 40, 2041-2050.

[9] Stigler, S.M. (1969). Linear functions of order statistics. Ann. Math. Statist. 40, 770-788.