
SOME INVARIANCE PRINCIPLES FOR MIXED
RANK STATISTICS AND INDUCED ORDER STATISTICS*
by
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 1223
April 1979
SUMMARY

Along with the affinity of mixed rank statistics and linear combinations of induced order statistics, some weak as well as strong invariance principles for these statistics are established. A variety of models (depending on the nature of dependence of the two variates) is considered, and the regularity conditions are tailored for these diverse situations. Some applications to problems in statistical inference are also stressed.

AMS Subject Classification Nos.: 62E20, 60F99.

Key Words and Phrases: Contiguity, induced order statistics, invariance principles, martingales, mixed rank statistics, sequential tests, weak convergence, Wiener process.
*Work partially supported by the National Institutes of Health, Contract No. NIH-NHLBI-71-2243 from the National Heart, Lung and Blood Institute.
1. INTRODUCTION

Let {(X_i, Y_i), i ≥ 1} be a sequence of independent and identically distributed random vectors (i.i.d.r.v.) with a (bivariate) distribution function (d.f.) F, defined on the Euclidean plane E². Let G(x) = F(x, ∞) and H(y) = F(∞, y) be the two marginal d.f.'s; we assume that both G and H are nondegenerate and, in addition, H is continuous almost everywhere (a.e.). In the context of nonparametric tests for regression with stochastic predictors, Ghosh and Sen (1971) have considered some mixed rank statistics, which may be defined as follows. For a sample of size n (≥ 1), let R_{ni} = Σ_{j=1}^n u(Y_i − Y_j) be the rank of Y_i among Y_1, ..., Y_n (1 ≤ i ≤ n), where u(t) = 1 or 0 according as t is ≥ or < 0; ties are neglected, in probability, because of the assumed continuity of H. Then, a mixed rank statistic M_n is defined by

    M_n = Σ_{i=1}^n b(X_i) a_n(R_{ni}),                                  (1.1)

where {a_n(1), ..., a_n(n); n ≥ 1} is a triangular array of scores (to be defined formally later on) and b(·) is some given function.
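For orientation, (1.1) is easily evaluated from the sample. The following minimal sketch (an illustration, not part of the original text) computes M_n in Python under the illustrative, assumed choices of Wilcoxon-type scores a_n(i) = i/(n + 1) and a user-supplied b(·):

```python
import numpy as np

def mixed_rank_statistic(x, y, b=lambda t: t):
    """Mixed rank statistic M_n = sum_i b(X_i) a_n(R_ni) of (1.1).

    Illustrative assumptions: scores a_n(i) = i/(n+1) (Wilcoxon-type);
    b defaults to the identity.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(y)
    # R_ni = rank of Y_i among Y_1,...,Y_n (no ties, as H is continuous)
    ranks = np.argsort(np.argsort(y)) + 1
    return np.sum(b(x) * ranks / (n + 1.0))

# Example: n = 100 i.i.d. pairs with independent components (H_0 holds)
rng = np.random.default_rng(1979)
x, y = rng.standard_normal(100), rng.standard_normal(100)
print(mixed_rank_statistic(x, y))
```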
Asymptotic normality of such a mixed rank statistic [under H₀: F = GH as well as under local (contiguous) alternatives] has been studied by Ghosh and Sen (1971, Sections 3 and 4) under appropriate regularity conditions. There is a renewed interest in M_n because of its affinity to induced order statistics (or concomitants of order statistics) [cf. Bhattacharya (1974), David and Galambos (1974)], though this affinity has apparently not been noticed. Let Y_{n,1} < ⋯ < Y_{n,n} be the ordered r.v.'s corresponding to Y_1, ..., Y_n, and let X_{n:i} = X_k if Y_{n,i} = Y_k, for k = 1, ..., n. Then, the X_{n:i}, i = 1, ..., n, are termed the induced order statistics. Consider now a linear combination of a function of induced order statistics defined by

    S_n = Σ_{i=1}^n c_{ni} b(X_{n:i}),                                   (1.2)

where the c_{ni} are (non-stochastic) constants. Note that, by definition (prior to (1.1)), X_{n:R_{ni}} = X_i, for 1 ≤ i ≤ n. Thus, we may rewrite (1.2) as

    S_n = Σ_{i=1}^n c_{nR_{ni}} b(X_i),                                  (1.3)

and hence, its affinity to M_n is readily identified by letting c_{ni} = a_n(i), for i = 1, ..., n. Let

    m(y) = E{b(X_i) | Y_i = y},  y ∈ E,

and let

    S*_n = Σ_{i=1}^n c_{ni}{b(X_{n:i}) − m(Y_{n,i})}.                    (1.4)
Bhattacharya (1974, 1976) and Sen (1976) have considered some invariance principles for {S*_n} under suitable regularity conditions. Their results are based on the basic fact that, given the order statistics, the induced order statistics are conditionally independent (but not necessarily identically distributed). If we let

    L_n = Σ_{i=1}^n c_{ni} m(Y_{n,i}),                                   (1.5)

then L_n is a linear combination of a function of order statistics, and by (1.2), (1.4) and (1.5),

    S_n = L_n + S*_n.                                                    (1.6)

Thus, intuitively, the existing invariance principles for {S*_n} and {L_n} [viz., Sen (1978)] suggest that parallel results should hold for {S_n} (or {M_n}). Note that, whereas S*_n and L_n are uncorrelated, they need not be independent, and this may cause certain difficulties in adapting the above approach. A representation of M_n (or S_n) as a functional of the empirical distributions, along with existing invariance principles for these empirical processes, gives us a convenient way of deriving the desired results.

Along with the preliminary notions, the basic theorems are presented in Section 2, while their proofs are deferred to Sections 3 and 4. The concluding section deals with some general remarks and some applications of the main theorems in certain problems of statistical inference.
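The decomposition (1.6) may be checked numerically. The sketch below (an illustration, not part of the original text) assumes the hypothetical model X_i = Y_i + Z_i with independent noise Z_i and b(x) = x, so that m(y) = E{b(X) | Y = y} = y:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
y = rng.standard_normal(n)
x = y + rng.standard_normal(n)           # assumed model: m(y) = y for b(x) = x

order = np.argsort(y)                    # Y_{n,1} < ... < Y_{n,n}
x_induced = x[order]                     # induced order statistics X_{n:i}
y_ordered = y[order]
c = np.arange(1, n + 1) / (n + 1.0)      # c_{ni} = a_n(i), Wilcoxon-type

S_n    = np.sum(c * x_induced)                    # (1.2)
L_n    = np.sum(c * y_ordered)                    # (1.5), with m(y) = y
S_star = np.sum(c * (x_induced - y_ordered))      # (1.4)
assert np.isclose(S_n, L_n + S_star)              # (1.6)
```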
2. INVARIANCE PRINCIPLES FOR {S_n}

We shall consider here both weak and strong invariance principles, and in this context, the regularity conditions vary with the depth of the results as well as with the extent of stochastic dependence of X and Y. In the conventional way, we define the scores by letting

    a_n(i) = Eφ(U_{ni}),  1 ≤ i ≤ n (n ≥ 1),                             (2.1)

where U_{n1} < ⋯ < U_{nn} are the ordered r.v.'s of a sample of size n from the uniform (0, 1) distribution and

    φ(u) = φ₁(u) − φ₂(u),  0 < u < 1,                                    (2.2)

where both φ₁ and φ₂ are nondecreasing and square integrable inside (0, 1). Let then

    ā_n = n⁻¹ Σ_{i=1}^n a_n(i),   φ̄ = ∫₀¹ φ(u) du,                       (2.3)

    A_n² = (n − 1)⁻¹ Σ_{i=1}^n [a_n(i) − ā_n]²   and   A² = ∫₀¹ φ²(u) du − φ̄².   (2.4)

Note that, by definition,

    ā_n = φ̄,  A_n² ≤ [n/(n − 1)]A², ∀ n,  and  A_n² → A²  as n → ∞.      (2.5)
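For example, for the Wilcoxon score function φ(u) = u (a simple illustrative special case), (2.1) gives a_n(i) = EU_{ni} = i/(n + 1), so that ā_n = ½ = φ̄ and A² = 1/12, while

    A_n² = (n − 1)⁻¹ Σ_{i=1}^n [i/(n + 1) − ½]² = n/[12(n + 1)];

thus A_n² ≤ [n/(n − 1)](1/12) and A_n² → 1/12 = A², in conformity with (2.5).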
Also, let

    ξ = Eb(X) = ∫_E b(x) dG(x)   and   ζ² = ∫_E b²(x) dG(x) − ξ².        (2.6)

We assume that 0 < ζ < ∞. For every n (≥ 1), we define a stochastic process W_n^{(1)} = {W_n^{(1)}(t), t ∈ [0, 1]} by letting

    W_n^{(1)}(t) = (n^{1/2} A ζ)⁻¹ {M_{[nt]} − [nt] ξφ̄},  t ∈ I = [0, 1],   (2.7)

where [s] denotes the largest integer ≤ s. Also, let W = {W(t), t ∈ I} be a standard Wiener process on I. Then, we have the following

Theorem 2.1. Under H₀: F = GH and the regularity conditions stated above,

    W_n^{(1)} →_D W,  in the Skorokhod J₁-topology on D[0, 1].           (2.8)
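As a numerical illustration of Theorem 2.1 (at the point t = 1), the following sketch (not part of the original text) simulates W_n^{(1)}(1) under H₀, with the assumed choices φ(u) = u − ½ (centered Wilcoxon, so that φ̄ = 0 and A² = 1/12) and b(x) = x with G standard normal (so that ξ = 0 and ζ = 1); the sample mean and variance should be close to 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_rep = 200, 2000
A2, zeta2 = 1.0 / 12.0, 1.0      # A^2 for phi(u) = u - 1/2; zeta^2 for b(x) = x, G = N(0,1)

vals = np.empty(n_rep)
for r in range(n_rep):
    x = rng.standard_normal(n)   # b(X_i) = X_i, xi = 0
    y = rng.standard_normal(n)   # independent of X, so H_0: F = GH holds
    ranks = np.argsort(np.argsort(y)) + 1
    a = ranks / (n + 1.0) - 0.5  # a_n(R_ni) for phi(u) = u - 1/2
    M_n = np.sum(x * a)
    vals[r] = M_n / np.sqrt(n * A2 * zeta2)   # W_n^{(1)}(1), cf. (2.7)

print(vals.mean(), vals.var())   # approximately 0 and 1, as (2.8) suggests
```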
Let us now express F as

    F(x, y) = G(x)H(y)[1 + λ(G(x), H(y))],                               (2.9)

where [cf. Sibuya (1959)] λ may be regarded as a dependence function; λ = 0 a.e. if H₀ holds. Consider now a sequence {K_n} of alternative hypotheses, where

    K_n: (2.9) holds for λ = λ^{(n)},                                    (2.10)

and {λ^{(n)}} is a sequence of functions on E² such that lim_{n→∞} λ^{(n)} = 0. Let P_n and Q_n be the joint distributions of {(X_i, Y_i), 1 ≤ i ≤ n} under H₀ and K_n, respectively. We assume, for the next theorem, that

    {Q_n} is contiguous to {P_n},                                        (2.11)

where, for an elaborate discussion of contiguity, we refer to Hájek and Šidák (1967, Chapter VI).
Theorem 2.2. Under (2.10), (2.11) and the same regularity conditions as in Theorem 2.1, there exists a sequence {w_n} of D[0, 1]-valued (non-stochastic) functions (where w_n = {w_n(t), t ∈ I}), such that

    W_n^{(1)} − w_n →_D W,  in the J₁-topology on D[0, 1].               (2.12)
For the weak invariance principle considered above, it appears that, for the null hypothesis as well as for local (contiguous) alternatives, the regularity conditions on b(·) and φ(·) are minimal. In this context, a martingale property of {M_n} plays an important role, and this will be considered in Section 3. This martingale property, along with the strong invariance principle for martingales [cf. Strassen (1967)], enables us to formulate the following

Theorem 2.3. If, in addition to (2.1), (2.2) and (2.6), for some r > 2, ∫₀¹|φ(u)|^r du < ∞ or ∫_E |b|^r dG < ∞, then, under H₀: F = GH (in the Skorokhod–Strassen sense),

    A⁻¹ζ⁻¹(M_n − nφ̄ξ) = W(n) + O(n^{1/2−η})  a.s., as n → ∞,            (2.13)

where η > 0 and {W(t), t ≥ 0} is a standard Wiener process on [0, ∞). Hence, with probability 1,

    lim sup_{n→∞} (M_n − nξφ̄)/(2A²ζ² n log log n)^{1/2} = +1,            (2.14)

    lim inf_{n→∞} (M_n − nξφ̄)/(2A²ζ² n log log n)^{1/2} = −1.            (2.15)

The proofs of these theorems are deferred to Section 3.
Next, we consider the case where (2.10)-(2.11) may not hold. Some extra regularity conditions are needed in this context. Let us define

    μ = μ(b, φ) = ∫∫_{E²} b(x) φ(H(y)) dF(x, y)                          (2.16)

(under H₀: F = GH, μ = φ̄ξ). Also, let

    V_i = b(X_i)φ(H(Y_i)) + ∫∫_{E²} b(x) φ^{(1)}(H(y))[u(y − Y_i) − H(y)] dF(x, y),  i ≥ 1,   (2.17)

where we assume that φ^{(1)} exists (a.e.) and both φ^{(1)} and F are continuous. Let then

    φ^{(r)}(u) = (d^r/du^r) φ(u),  r = 0, 1, 2.                          (2.18)

We need either of the two following assumptions (depending on the nature of the invariance principle).

(A) b(x) is continuous in x a.e., and, for some s > 2,

    ν_s = ∫_E |b(x)|^s dG(x) = E|b(X)|^s < ∞.                            (2.19)

Also, there exist positive constants K and δ (> s⁻¹) such that

    |φ^{(r)}(u)| ≤ K[u(1 − u)]^{−1/2+δ−r},  for r = 0, 1;  u ∈ (0, 1).   (2.20)

(B) The d.f. F admits of a continuous and bounded (a.e.) density function f, (2.20) holds for r = 0, 1, 2, and

    |(d^r/dt^r) b(G⁻¹(t))| ≤ K[t(1 − t)]^{δ−r},  for r = 0, 1;  t ∈ (0, 1),   (2.21)

where δ (> s⁻¹) and K are defined as in (2.20).
Let us now introduce a sequence {W_n^{(2)}} of stochastic processes W_n^{(2)} = {W_n^{(2)}(t), t ∈ I} by letting

    W_n^{(2)}(t) = {M_{[nt]} − [nt]μ}/(σ n^{1/2}),  t ∈ I,               (2.22)

where σ² = Var(V₁) (cf. Section 4). Then, we have the following

Theorem 2.4. Under assumption (A) and for σ > 0,

    W_n^{(2)} →_D W,  in the J₁-topology on D[0, 1].                     (2.23)

Theorem 2.5. Under the hypothesis of Theorem 2.4, (2.14) and (2.15) hold with φ̄ξ being replaced by μ (and Aζ by σ).

In the final theorem, we consider an almost sure (a.s.) representation for {M_n} which extends Theorem 2.3 to the general case.

Theorem 2.6. Under (B), there exists an η > 0 such that

    M_n = Σ_{i=1}^n V_i + ξ*_n;   |ξ*_n| = O(n^{1/2−η})  a.s., as n → ∞;   (2.24)

    σ⁻¹(Σ_{i=1}^n V_i − nμ) = W(n) + O(n^{1/2−η})  a.s., as n → ∞,       (2.25)

where W = {W(t), t ∈ [0, ∞)} is a standard Wiener process on E⁺ = [0, ∞).

Proofs of the last three theorems are presented in Section 4.
So far, we have assumed that the scores are defined by (2.1). It is possible to replace the scores a_n(i) by some arbitrary scores a*_n(i), provided the a_n(i) and a*_n(i) are sufficiently close to each other. Suppose that

    Σ_{i=1}^n [a_n(i) − a*_n(i)]²  is  (a) o(1)  or  (b) O(n^{−η'}),  as n → ∞,   (2.26)

where η' > 0. Also, let M*_n = Σ_{i=1}^n b(X_i) a*_n(R_{ni}). Then,

    n⁻¹(M_n − M*_n)² ≤ (n⁻¹ Σ_{i=1}^n b²(X_i))(Σ_{i=1}^n [a_n(i) − a*_n(i)]²),   (2.27)

where, by the Khintchine law of large numbers, as n → ∞,

    n⁻¹ Σ_{i=1}^n b²(X_i) → Eb²(X₁) < ∞,  a.s.,                          (2.28)

by (2.19). Moreover, by (2.19),

    max_{1≤k≤n} |Σ_{i=1}^k b²(X_i)|/n = O_p(1),  as n → ∞.               (2.29)

Hence, by (2.26)-(2.29), under (a) in (2.26),

    n^{−1/2} max_{1≤k≤n} |M*_k − M_k| →_p 0,  as n → ∞,                  (2.30)

    sup_{k≥n} k^{−1/2} |M*_k − M_k| → 0  a.s., as n → ∞,                 (2.31)

while, under (b) in (2.26),

    |M_n − M*_n| = O(n^{(1−η')/2})  a.s., as n → ∞.                      (2.32)

Hence, Theorems 2.1 through 2.6 also hold for {M_n} being replaced by {M*_n}. Now, (2.26) holds under fairly general conditions [see, for example, the Appendix of Puri and Sen (1971)]. In particular, if a_n(i) = Eφ(U_{ni}) and a*_n(i) = φ(i/(n + 1)), then (2.26) holds under (2.20). Hence, in the sequel, we may use a*_n(i) = φ(i/(n + 1)) instead of a_n(i), whenever this leads to some simplification in the proofs to follow.
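For instance, for the van der Waerden (normal) scores, a_n(i) = EΦ⁻¹(U_{ni}) is the expected i-th standard normal order statistic and a*_n(i) = Φ⁻¹(i/(n + 1)). The sketch below (an illustration, with the quadrature-based evaluation of a_n(i) an assumption of convenience) estimates the left-hand side of (2.26) for a few values of n; in line with (2.26)(a), it tends to 0:

```python
import numpy as np
from scipy import integrate, stats

def exact_normal_score(i, n):
    """a_n(i) = E Phi^{-1}(U_{ni}), U_{ni} ~ Beta(i, n - i + 1);
    quadrature after the substitution u = Phi(x)."""
    f = lambda x: x * stats.beta.pdf(stats.norm.cdf(x), i, n - i + 1) * stats.norm.pdf(x)
    val, _ = integrate.quad(f, -10.0, 10.0, limit=200)
    return val

for n in (10, 20, 40):
    a = np.array([exact_normal_score(i, n) for i in range(1, n + 1)])
    a_star = stats.norm.ppf(np.arange(1, n + 1) / (n + 1.0))
    print(n, np.sum((a - a_star) ** 2))    # small, tending to 0, cf. (2.26)(a)
```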
3. PROOFS OF THEOREMS 2.1, 2.2 AND 2.3

Let F_n^{(1)} be the σ-field generated by (X₁, ..., X_n), F_n^{(2)} be the σ-field generated by (R_{n1}, ..., R_{nn}), and F_n⁰ be the σ-field generated by (X₁, ..., X_n; R_{n1}, ..., R_{nn}), n ≥ 1. Then, F_n⁰ is nondecreasing in n.

Lemma 3.1. Under H₀: F = GH and (2.1), {M_n − nφ̄ξ, F_n⁰; n ≥ 1} is a martingale, whenever b(·) ∈ L₁ and φ ∈ L₁.

Note that, by (1.1), for every n ≥ 1,

    M_{n+1} = M_n + Σ_{i=1}^n b(X_i)[a_{n+1}(R_{n+1,i}) − a_n(R_{ni})] + b(X_{n+1}) a_{n+1}(R_{n+1,n+1}).   (3.1)

Now, under H₀, b(X_{n+1}) and R_{n+1,n+1} are (conditionally) independent, where R_{n+1,n+1} can assume the values 1, ..., n+1 with the common probability (n + 1)⁻¹, and E{b(X_{n+1}) | F_n⁰} = E{b(X_{n+1}) | F_n^{(1)}} = Eb(X_{n+1}) = ξ. Thus,

    E{b(X_{n+1}) a_{n+1}(R_{n+1,n+1}) | F_n⁰} = E{b(X_{n+1}) | F_n^{(1)}} E{a_{n+1}(R_{n+1,n+1}) | F_n^{(2)}}
        = ξ(n + 1)⁻¹ Σ_{i=1}^{n+1} a_{n+1}(i) = ξ ā_{n+1} = ξφ̄,  by (2.5).   (3.2)

On the other hand, given F_n^{(2)}, the b(X_i), 1 ≤ i ≤ n, are fixed, while, as in Sen and Ghosh (1974), for every 1 ≤ i ≤ n,

    E{a_{n+1}(R_{n+1,i}) | F_n^{(2)}} = [(n + 1 − R_{ni}) a_{n+1}(R_{ni}) + R_{ni} a_{n+1}(R_{ni} + 1)]/(n + 1) = a_n(R_{ni}),   (3.3)

where the last step follows from the recursive relation of expected order statistics, granted by the definition (2.1). Hence,

    E{Σ_{i=1}^n b(X_i)[a_{n+1}(R_{n+1,i}) − a_n(R_{ni})] | F_n⁰} = 0.    (3.4)

From (3.1), (3.2) and (3.4), we have

    E{M_{n+1} | F_n⁰} = M_n + φ̄ξ,  ∀ n ≥ 1,                              (3.5)

and (3.5) insures the lemma.
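The recursive relation used in (3.3) holds for any exact scores (2.1); the sketch below (illustrative, not part of the original text) verifies it for the Savage (exponential) scores, for which φ(u) = −log(1 − u) and a_n(i) = Σ_{j=n−i+1}^n j⁻¹ in closed form:

```python
import numpy as np

def savage_scores(n):
    # a_n(i) = E phi(U_ni) for phi(u) = -log(1 - u): sum_{j=n-i+1}^{n} 1/j
    inv = 1.0 / np.arange(1, n + 1)
    return np.cumsum(inv[::-1])

n = 25
a_n, a_np1 = savage_scores(n), savage_scores(n + 1)
i = np.arange(1, n + 1)
# Recursion behind (3.3): a_n(i) = ((n+1-i) a_{n+1}(i) + i a_{n+1}(i+1))/(n+1)
rhs = ((n + 1 - i) * a_np1[i - 1] + i * a_np1[i]) / (n + 1.0)
assert np.allclose(a_n, rhs)
```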
Let then

    M_n⁰ = Σ_{i=1}^n b(X_i) φ(H(Y_i)),  n ≥ 1.                           (3.6)

Then, we note that E{M_n⁰ | F_n⁰} = M_n, n ≥ 1, and

    n⁻¹ E₀(M_n⁰ − M_n)² = n⁻¹{V(M_n⁰) − V(M_n)} → 0,  as n → ∞,          (3.7)

where E₀ stands for the expectation under H₀: F = GH. Then, (3.7) insures that, for every (fixed) m (≥ 1) and t₁, ..., t_m (∈ I),

    max_{1≤j≤m} n^{−1/2} |M_{[nt_j]} − M⁰_{[nt_j]}| →_p 0,  as n → ∞ (under H₀).   (3.8)

On the other hand, {M_k⁰, k ≥ 1} involves a sum of i.i.d.r.v. with mean φ̄ξ and variance A²ζ², and the Donsker theorem applies to {M_k⁰ − kφ̄ξ}. Hence, (3.8) insures the convergence of the finite-dimensional distributions (f.d.d.) of {W_n^{(1)}} to those of W when H₀ holds. By our Lemma 3.1 and Lemma 4 of Brown (1971), the tightness of {W_n^{(1)}} is insured by the convergence of f.d.d.'s, and hence, the proof of Theorem 2.1 is complete.
Note that the contiguity in (2.11), along with (3.8), implies that

    max_{1≤j≤m} n^{−1/2} |M_{[nt_j]} − M⁰_{[nt_j]}| →_p 0,  under {K_n} as well,   (3.9)

while, by an appeal to the central limit theorem for a triangular array of independent r.v.'s, the asymptotic multinormality of {n^{−1/2}(M⁰_{[nt_j]} − φ̄ξ[nt_j]), 1 ≤ j ≤ m} follows directly (where w_n(t) = n^{−1/2}{EM⁰_{[nt]} − [nt]φ̄ξ}, t ∈ I), and this insures the convergence of the f.d.d.'s of {W_n^{(1)} − w_n} to those of W. Further, as in the proof of Theorem 3.1 [viz., (3.8) through (3.12)] of Sen (1977), tightness under H₀ and contiguity insure the tightness under {K_n} as well. Hence, the proof of Theorem 2.2 is complete.
Let Z_{n+1} = M_{n+1} − M_n − φ̄ξ, ∀ n ≥ 0. Then, by Lemma 3.1, E(Z_{n+1} | F_n⁰) = 0, and, by (3.1), we have

    E₀{Z²_{n+1} | F_n⁰} = E₀{[b(X_{n+1}) a_{n+1}(R_{n+1,n+1}) − φ̄ξ]² | F_n⁰}
        + E₀{(Σ_{i=1}^n b(X_i)[a_{n+1}(R_{n+1,i}) − a_n(R_{ni})])² | F_n⁰}
        + 2E₀{[b(X_{n+1}) a_{n+1}(R_{n+1,n+1}) − φ̄ξ] Σ_{i=1}^n b(X_i)[a_{n+1}(R_{n+1,i}) − a_n(R_{ni})] | F_n⁰}
        = ζ²A²_{n+1} + Q*_n,  say,                                       (3.10)

where, proceeding as in Lemma 3.2 of Sen and Ghosh (1974), it follows that Q*_n → 0 a.s. as n → ∞. Hence, by (2.5) and (3.10),

    E₀{Z²_{n+1} | F_n⁰} → A²ζ²  a.s., as n → ∞,                          (3.11)

so that

    Σ_{k=0}^{n−1} E₀{Z²_{k+1} | F_k⁰} → ∞  a.s., as n → ∞.               (3.12)
Note that, under (2.1), (2.2) and for ∫₀¹|φ(u)|^r du < ∞, for some r ≥ 2,

    max_{1≤i≤n} |a_n(i)| = O(n^{1/r}),                                   (3.13)

and, similarly, under ∫_E |b|^r dG < ∞, r ≥ 2,

    max_{1≤i≤n} |b(X_i)| = o(n^{1/r})  a.s., as n → ∞.                   (3.14)

Note that, by (3.1), for every n ≥ 1,

    |Z_{n+1}| ≤ Σ_{i=1}^n |b(X_i)| |a_{n+1}(R_{n+1,i}) − a_n(R_{ni})|
        + |b(X_{n+1}) − ξ| |a_{n+1}(R_{n+1,n+1})| + |ξ| |a_{n+1}(R_{n+1,n+1}) − φ̄|,   (3.15)

where, by (2.1), (2.2) and (3.13), we have

    Σ_{i=1}^n |a_{n+1}(R_{n+1,i}) − a_n(R_{ni})| = O(n^{1/r}).           (3.16)

[Actually, take first φ nondecreasing, note that a_{n+1}(R_{ni}) ≤ a_n(R_{ni}) ≤ a_{n+1}(R_{ni} + 1), ∀ 1 ≤ i ≤ n, and then use (3.13); for (2.2), use the same technique for each component of φ.] Thus, by (3.13) through (3.16), we conclude that

    |Z_{n+1}| = o(n^{2/r})  a.s., as n → ∞,                              (3.17)

when either ∫₀¹|φ|^r du < ∞ or ∫_E |b|^r dG < ∞ for some r > 2.
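The order assessment in (3.16) rests on a simple telescoping argument, spelled out here for convenience: for nondecreasing φ, R_{n+1,i} ∈ {R_{ni}, R_{ni} + 1} and, by (3.3), a_n(R_{ni}) lies between a_{n+1}(R_{ni}) and a_{n+1}(R_{ni} + 1); hence, the R_{n1}, ..., R_{nn} being distinct,

    Σ_{i=1}^n |a_{n+1}(R_{n+1,i}) − a_n(R_{ni})| ≤ Σ_{k=1}^n [a_{n+1}(k + 1) − a_{n+1}(k)] = a_{n+1}(n + 1) − a_{n+1}(1) = O(n^{1/r}),

the last step following from (3.13).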
Thus, if ψ(n) is any nondecreasing function of n for which

    ψ(n)/n^{2/r} → ∞,  as n → ∞,                                         (3.18)

then, by virtue of (3.12), (3.17) and (3.18), on denoting by s_n² = Σ_{i=0}^{n−1} E₀{Z²_{i+1} | F_i⁰}, we have

    s_n² → ∞  and  |Z_n| = o(ψ(s_n²))  a.s., as n → ∞.                   (3.19)

By virtue of (3.12), (3.18), (3.19) and Lemma 3.1, (2.13) follows from Theorem 4.4 of Strassen (1967); (2.14) and (2.15) follow directly by using the law of iterated logarithm for the Wiener process W. This completes the proof of Theorem 2.3.
4. PROOFS OF THEOREMS 2.4, 2.5 AND 2.6

Let

    G_n(x) = n⁻¹ Σ_{i=1}^n u(x − X_i),  x ∈ E,   H_n(y) = n⁻¹ Σ_{i=1}^n u(y − Y_i),  y ∈ E,

and

    F_n(x, y) = n⁻¹ Σ_{i=1}^n u(x − X_i) u(y − Y_i),  (x, y) ∈ E²,

be, respectively, the empirical d.f.'s corresponding to G, H and F. Also, since (2.26) holds under (2.20) in the case of the scores defined by φ(i/(n + 1)), 1 ≤ i ≤ n, we shall work with these scores in the sequel. Then, we may write

    n⁻¹ M_n = ∫∫_{E²} b(x) φ((n/(n + 1)) H_n(y)) dF_n(x, y).             (4.1)

By expanding φ((n/(n + 1))H_n) around φ(H), and F_n around F, we obtain from (4.1) that

    n⁻¹ M_n = ∫∫_{E²} {b(x) φ(H(y)) dF_n(x, y) + b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) dF(x, y)} + n⁻¹ ξ*_n  (say)
            = n⁻¹ Σ_{i=1}^n V_i + n⁻¹ ξ*_n,                              (4.2)

where the V_i are defined by (2.17) and ξ*_n consists of the remainder terms.
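To make (4.2) concrete, note that, in the independent case F = GH with φ(u) = u, the integral in (2.17) reduces to ξ(½ − H(Y_i)), so that V_i = b(X_i)H(Y_i) + ξ(½ − H(Y_i)). The sketch below (an illustration under these assumed choices, with b(x) = x and standard normal marginals, so that ξ = 0) computes M_n, Σ V_i and the remainder ξ*_n, whose order o(n^{1/2}) may be inspected empirically:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
for n in (100, 1000, 10000):
    x = rng.standard_normal(n)          # b(x) = x, G = N(0,1), so xi = E b(X) = 0
    y = rng.standard_normal(n)          # independent of X: F = GH, H = N(0,1)
    ranks = np.argsort(np.argsort(y)) + 1
    M_n = np.sum(x * ranks / (n + 1.0))            # (4.1) with phi(u) = u
    U = stats.norm.cdf(y)                          # H(Y_i)
    xi = 0.0
    V = x * U + xi * (0.5 - U)                     # V_i of (2.17), independent case
    xi_star = M_n - np.sum(V)                      # remainder in (4.2)
    print(n, xi_star / np.sqrt(n))                 # small, cf. (4.3)
```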
Now, under (2.19) and (2.20), EV_i = μ and Var(V_i) = σ² < ∞. Thus, the Donsker theorem applies to the sequence {V_i} of i.i.d.r.v., and hence, to prove Theorem 2.4, all we need to show is that

    n^{−1/2} max_{1≤k≤n} |ξ*_k| →_p 0,                                   (4.3)

while, to prove Theorem 2.5, by virtue of the law of iterated logarithm for Σ_{i=1}^n (V_i − μ), it suffices to show that

    |ξ*_n| (2n log log n)^{−1/2} → 0,  a.s., as n → ∞.                   (4.4)

Finally, to prove Theorem 2.6, we need to verify (2.24). For this reason, in the remainder of this section, we consider explicitly the remainder terms {ξ*_n} and verify (4.3), (4.4) and (2.24) under appropriate regularity conditions.
Note that, by (4.2),

    n⁻¹ ξ*_n = ∫∫_{E²} b(x)[φ((n/(n + 1))H_n(y)) − φ(H(y))] dF_n(x, y)
               − ∫∫_{E²} b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) dF(x, y)
             = ∫∫_{E²} b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) d[F_n(x, y) − F(x, y)]
               + ∫∫_{E²} b(x)[φ((n/(n + 1))H_n(y)) − φ(H(y)) − {H_n(y) − H(y)} φ^{(1)}(H(y))] dF_n(x, y)
             = C_{n1} + C_{n2},  say.                                    (4.5)
In this context, the following result due to Csáki (1977) will be repeatedly used. Let {U_i, i ≥ 1} be a sequence of i.i.d.r.v. having the uniform (0, 1) d.f., and let U_n(t) = n⁻¹ Σ_{i=1}^n u(t − U_i), t ∈ [0, 1], be the corresponding empirical d.f. Then, for every ε > 0,

    sup_{0<t<1} |U_n(t) − t| / {t(1 − t)}^{1/2−ε} = O(n^{−1/2}(log log n)^{1/2})  a.s., as n → ∞.   (4.6)

Further, Csörgő and Révész (1975) have the following result:

    P{ lim_{n→∞} sup_{ε_n ≤ t ≤ 1−ε_n} n^{1/2} |U_n(t) − t| / [2t(1 − t) log log n]^{1/2} = 1 } = 1,   (4.7)

where

    ε_n = (log n)⁴/n.                                                    (4.8)

Also, by virtue of Theorem 3.1 of Braun (1976), for every ε > 0,

    max_{1≤k≤n} sup_{t∈I} k|U_k(t) − t| / {n^{1/2}[t(1 − t)]^{1/2−ε}} = O_p(1).   (4.9)
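The weighted supremum in (4.6) is easy to examine by simulation; since U_n(·) is a step function, its extreme discrepancies occur at the order statistics. The sketch below (illustrative only, with the assumed choice ε = 0.1) tracks the normalized supremum over increasing n:

```python
import numpy as np

def weighted_sup(u, eps=0.1):
    # sup_t |U_n(t) - t| / [t(1-t)]^(1/2 - eps), evaluated at the order
    # statistics, where the step function U_n attains its extreme gaps
    u = np.sort(u)
    n = len(u)
    w = (u * (1.0 - u)) ** (0.5 - eps)
    upper = np.abs(np.arange(1, n + 1) / n - u) / w   # right limits of U_n
    lower = np.abs(np.arange(0, n) / n - u) / w       # left limits of U_n
    return max(upper.max(), lower.max())

rng = np.random.default_rng(3)
for n in (10**3, 10**4, 10**5):
    s = weighted_sup(rng.uniform(size=n))
    print(n, np.sqrt(n) * s / np.sqrt(np.log(np.log(n))))   # stays bounded, cf. (4.6)
```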
Now, by (2.20) and (4.6), for every ε > 0, as n → ∞,

    sup_{y∈E} |[H_n(y) − H(y)] φ^{(1)}(H(y)) {H(y)[1 − H(y)]}^{1−δ+ε}| = O(n^{−1/2}(log log n)^{1/2})  a.s.,   (4.10)

while, by (2.20) and (4.7), as n → ∞,

    sup_{H⁻¹(ε_n) ≤ y ≤ H⁻¹(1−ε_n)} |[H_n(y) − H(y)] φ^{(1)}(H(y)) {H(y)[1 − H(y)]}^{1−δ}| = O(n^{−1/2}(log log n)^{1/2})  a.s.,   (4.11)

and, by (2.20) and (4.9),

    max_{k≤n} sup_{y∈E} (k/n^{1/2}) |{H_k(y) − H(y)} φ^{(1)}(H(y)) {H(y)[1 − H(y)]}^{1−δ+ε}| = O_p(1),   (4.12)

where, in (4.10), (4.11) and (4.12), we have set ε = ½(δ − s⁻¹) (> 0, by (2.19)-(2.20)), so that

    (1 − δ + ε) s/(s − 1) = 1 − γ,  where γ > 0.                          (4.13)

By (2.19), for every η > 0, there exists a K_η (< ∞) such that

    ∫_{|x|>K_η} |b(x)|^s dG(x) < η.                                      (4.14)
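For later use, the constant γ in (4.13) may be written out explicitly (a one-line verification): since ε = ½(δ − s⁻¹),

    1 − (1 − δ + ε)s/(s − 1) = [sδ − sε − 1]/(s − 1) = (sδ − 1)/[2(s − 1)] = γ,

which is positive precisely because δ > s⁻¹ in (2.20).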
Let

    E_n^{(1)} = {x: |x| ≤ K_η},  E_n^{(2)} = {y: η ≤ H(y) ≤ 1 − η},  E_n = E_n^{(1)} × E_n^{(2)},  E_n^c = E² \ E_n.   (4.15)

Then, by (4.5) and (4.15), we have

    |C_{n1}| ≤ C_{n11} + C_{n12}^{(1)} + C_{n12}^{(2)},                  (4.16)

where

    C_{n11} = |∫∫_{E_n} b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) d[F_n(x, y) − F(x, y)]|,   (4.17)

    C_{n12}^{(1)} = |∫∫_{E_n^c} b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) dF_n(x, y)|,   (4.18)

    C_{n12}^{(2)} = |∫∫_{E_n^c} b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) dF(x, y)|.   (4.19)

Now, by the Hölder inequality,

    C_{n12}^{(1)} ≤ {∫_E |b(x)|^s dG_n(x)}^{1/s} {∫_{E\E_n^{(2)}} |[H_n(y) − H(y)] φ^{(1)}(H(y))|^{s/(s−1)} dH_n(y)}^{(s−1)/s}
        + {∫_{|x|>K_η} |b(x)|^s dG_n(x)}^{1/s} {∫_E |[H_n(y) − H(y)] φ^{(1)}(H(y))|^{s/(s−1)} dH_n(y)}^{(s−1)/s}.   (4.20)

Now, ∫_E |b(x)|^s dG_n(x) (being a reverse martingale) converges a.s. to ∫_E |b(x)|^s dG(x) [< ∞, by (2.19)], and, similarly, ∫_{|x|>K_η} |b(x)|^s dG_n(x) converges a.s. to ∫_{|x|>K_η} |b(x)|^s dG(x) (< η). Also, by (4.10) and (4.13),

    ∫_E {|H_n − H| |φ^{(1)}(H)|}^{s/(s−1)} dH_n = O((n⁻¹ log log n)^{s/2(s−1)}) ∫_E {H(y)[1 − H(y)]}^{−1+γ} dH_n(y)  a.s.,

and the last integral, being also a reverse martingale, a.s. converges to ∫₀¹ [u(1 − u)]^{−1+γ} du < ∞. Similarly,

    ∫_{E\E_n^{(2)}} {|H_n − H| |φ^{(1)}(H)|}^{s/(s−1)} dH_n = o((n⁻¹ log log n)^{s/2(s−1)})  a.s., as n → ∞.

Hence, C_{n12}^{(1)} = o((n⁻¹ log log n)^{1/2}) a.s., as n → ∞. A similar treatment holds for C_{n12}^{(2)}. Thus, as n → ∞,

    C_{n12}^{(1)} + C_{n12}^{(2)} = o((n⁻¹ log log n)^{1/2})  a.s.       (4.21)

Moreover, by using (4.12) instead of (4.10) in the above manipulations, we have, on parallel lines,

    max_{k≤n} n^{−1/2} k (C_{k12}^{(1)} + C_{k12}^{(2)}) = o_p(1),  as n → ∞.   (4.22)
Now, E_n^{(1)} is a compact subspace of E, b(x) is continuous inside E_n^{(1)}, φ^{(1)}(H(y)) is bounded and continuous inside E_n^{(2)}, and F is assumed to be continuous everywhere. Hence, for every ε' > 0, one can choose a set {A_r = [a_{r1}, a_{r2}] × [c_{r1}, c_{r2}], r = 1, ..., m} (where m = m(ε')) of non-overlapping and exhaustive blocks, such that E_n = ∪_r A_r,

    sup_{a_{r1}≤x≤a_{r2}} |b(x) − b(a_{r1})| < ε'/3   and   H(c_{r2}) − H(c_{r1}) < ε'.

Then,

    sup_{(x,y)∈A_r} |b(x)[H_n(y) − H(y)] φ^{(1)}(H(y)) − b(a_{r1})[H_n(c_{r1}) − H(c_{r1})] φ^{(1)}(H(c_{r1}))|
        ≤ sup_{a_{r1}≤x≤a_{r2}} |b(x) − b(a_{r1})| sup_{c_{r1}≤y≤c_{r2}} |[H_n(y) − H(y)] φ^{(1)}(H(y))|
          + |b(a_{r1})| sup_{c_{r1}≤y≤c_{r2}} |[H_n(y) − H(y)][φ^{(1)}(H(y)) − φ^{(1)}(H(c_{r1}))]|
          + |b(a_{r1})| |φ^{(1)}(H(c_{r1}))| sup_{c_{r1}≤y≤c_{r2}} |[H_n(y) − H(y)] − [H_n(c_{r1}) − H(c_{r1})]|.   (4.23)

Also, F being continuous everywhere,

    max_{1≤r≤m} |∫∫_{A_r} d[F_n(x, y) − F(x, y)]| → 0  a.s., as n → ∞.   (4.24)

Further, using a theorem of Csörgő and Révész (1975), along with the compactness of the Kiefer process, we have, for small H(c_{r2}) − H(c_{r1}),

    n^{1/2} sup_{c_{r1}≤y≤c_{r2}} |[H_n(y) − H(y)] − [H_n(c_{r1}) − H(c_{r1})]| = O((log log n)^{1/2})  a.s., as n → ∞.   (4.25)

Then, by using (4.23), (4.24) and (4.25), we obtain from (4.17) that

    C_{n11} = o((n⁻¹ log log n)^{1/2})  a.s., as n → ∞.                  (4.26)

Also, we note that, by the tightness part of Theorem 3.1 of Braun (1976), for small H(c_{r2}) − H(c_{r1}),

    max_{k≤n} n^{−1/2} k sup_{c_{r1}≤y≤c_{r2}} |[H_k(y) − H(y)] − [H_k(c_{r1}) − H(c_{r1})]|

can be made arbitrarily small in probability, (4.27), and hence, by (4.9), (4.12), (4.17), (4.23), (4.24) and (4.27),

    max_{k≤n} n^{−1/2} {k C_{k11}} →_p 0,  as n → ∞.                     (4.28)

Thus, by (4.16), (4.21), (4.22), (4.26) and (4.28), as n → ∞,

    C_{n1} = o((n⁻¹ log log n)^{1/2})  a.s.  and  max_{k≤n} n^{−1/2} k C_{k1} →_p 0.   (4.29)
For the treatment of C_{n2} in (4.5), we have, by the Hölder inequality,

    |C_{n2}| ≤ {∫_E |b(x)|^s dG_n(x)}^{1/s} {∫_E |φ((n/(n+1))H_n) − φ(H) − [H_n − H] φ^{(1)}(H)|^{s/(s−1)} dH_n}^{(s−1)/s},   (4.30)

where, as shown following (4.20), ∫_E |b(x)|^s dG_n(x) → E|b(X)|^s < ∞ a.s., by (2.19). Hence, it suffices to show that, as n → ∞,

    ∫_E |φ((n/(n+1))H_n) − φ(H) − [H_n − H] φ^{(1)}(H)|^{s/(s−1)} dH_n = o((n⁻¹ log log n)^{s/2(s−1)})  a.s.,   (4.31)

where s > 2 and (2.19)-(2.20) hold. Here also, we define E_n^{(2)} as in (4.15). Then, by the first order Taylor expansion of φ((n/(n+1))H_n) around φ(H), along with the continuity of φ^{(1)} (inside E_n^{(2)}), and by using (4.11), we obtain that

    ∫_{E_n^{(2)}} |φ((n/(n+1))H_n) − φ(H) − [H_n − H] φ^{(1)}(H)|^{s/(s−1)} dH_n = o((n⁻¹ log log n)^{s/2(s−1)})  a.s., as n → ∞.   (4.32)

For the complementary part, we have

    ∫_{E\E_n^{(2)}} |φ((n/(n+1))H_n) − φ(H) − [H_n − H] φ^{(1)}(H)|^{s/(s−1)} dH_n
        ≤ C*{∫_{E\E_n^{(2)}} |φ((n/(n+1))H_n) − φ(H)|^{s/(s−1)} dH_n + ∫_{E\E_n^{(2)}} |[H_n − H] φ^{(1)}(H)|^{s/(s−1)} dH_n}
        = C_{n21}^{(2)} + C_{n22}^{(2)},  say,                           (4.33)

where C* (< ∞) depends only on s.
By using (4.10) and proceeding as following (4.20), it readily follows that C_{n22}^{(2)} = o((n⁻¹ log log n)^{s/2(s−1)}) a.s., as n → ∞. Similarly, by using (4.12), we have max_{k≤n} n^{−1/2} k C_{k22}^{(2)} = o_p(1). The treatment of C_{n21}^{(2)} deserves extra care.

Note that, by (2.20) (r = 0),

    max_{1≤i≤n} |φ(i/(n + 1))| = O(n^{1/2−δ}).                           (4.34)

Also, the H(Y_i) are i.i.d.r.v. having the uniform (0, 1) d.f., so that, by some standard arguments, we have, under (2.20) (r = 0),

    max_{1≤i≤n} |φ(H(Y_i))| = O(n^{1/2−δ}(log n)^{1/2})  a.s., as n → ∞.   (4.35)

Let now a_n be defined by H(a_n) = n^{−1+ε}, where ε = ½(δ − s⁻¹) > 0. Then, using the fact that, by (4.7),

    H_n(a_n) = n^{−1+ε}{1 + O(n^{−ε/2}(log log n)^{1/2})}  a.s., as n → ∞,

we obtain that, with probability 1,

    ∫_{−∞}^{a_n} |φ((n/(n+1))H_n) − φ(H)|^{s/(s−1)} dH_n ≤ [O(n^{1/2−δ}(log n)^{1/2})]^{s/(s−1)} · H_n(a_n)
        = [O(n^{1/2−δ}(log n)^{1/2})]^{s/(s−1)} · [O(n^{−1+ε})]
        = [O(n^{−s/2(s−1)})][O(n^{−ε(s+1)/(s−1)} (log n)^{s/2(s−1)})]
        = o(n^{−s/2(s−1)}),  as n → ∞.                                   (4.36)
On the other hand, by (4.7),

    sup_{a_n ≤ y ≤ H⁻¹(η)} |H_n(y)/H(y) − 1| → 0  a.s., as n → ∞,

and hence, on writing φ((n/(n+1))H_n) − φ(H) = ((n/(n+1))H_n − H) φ^{(1)}(H*_n), where |H*_n/H − 1| is bounded by an arbitrary number less than 1 (0 < θ < 1), we have, for large n,

    ∫_{a_n}^{H⁻¹(η)} |φ((n/(n+1))H_n) − φ(H)|^{s/(s−1)} dH_n
        ≤ K* ∫_{a_n}^{H⁻¹(η)} |(n/(n+1))H_n − H|^{s/(s−1)} {[H(1 − H)]^{−3/2+δ}}^{s/(s−1)} dH_n   (by (2.20))
        ≤ [O((n⁻¹ log log n)^{1/2})]^{s/(s−1)} ∫_{−∞}^{H⁻¹(η)} [H(1 − H)]^{−1+γ} dH_n   (by (4.7))
        = O((n⁻¹ log log n)^{s/2(s−1)})  a.s.,                           (4.37)

where, by arguments following (4.20), ∫_{−∞}^{H⁻¹(η)} {H(1 − H)}^{−1+γ} dH_n a.s. converges to ∫₀^η [u(1 − u)]^{−1+γ} du, which can be made arbitrarily small (as η (> 0) is arbitrary). A similar treatment applies to the upper tail {H⁻¹(1 − η) ≤ y < ∞}. When, instead of using (4.6), (4.7), (4.10) and (4.11), we use (4.9) and (4.12), then, by very similar arguments, we obtain that

    max_{k≤n} n^{−1/2} k C_{k2} →_p 0,  as n → ∞.

Thus, we conclude that (4.3) and (4.4) hold under assumption (A) in (2.19)-(2.20), and the proofs of Theorems 2.4 and 2.5 are complete.
For the proof of (2.24) (under the stronger assumption (B)), for C_{n2}, we expand φ((n/(n+1))H_n) around φ(H) by a second order Taylor expansion, permissible under (2.20) (for r = 2). This will make (4.32) as well as (4.37) O(n^{−1/2−η}) a.s. for some η > 0, while (4.36) does not need any adjustment. For the treatment of C_{n1} in (4.5), we make use of the Bahadur (1966) result (extended to the bivariate case) that, for every (x₀, y₀) ∈ E²,

    sup{ n^{1/2} |[F_n(x, y) − F(x, y)] − [F_n(x₀, y₀) − F(x₀, y₀)]| :
         |F(x, ∞) − F(x₀, ∞)| + |F(∞, y) − F(∞, y₀)| ≤ n^{−1/2} log n }
        = O(n^{−1/4}(log n)^{1/2})  a.s., as n → ∞.                      (4.38)

This is a stronger result than (4.24)-(4.25). On the other hand, for