THE INVARIANCE PRINCIPLE FOR ONE-SAMPLE
RANK-ORDER STATISTICS
By
Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 751
June 1971
THE INVARIANCE PRINCIPLE FOR ONE-SAMPLE RANK-ORDER STATISTICS
By PRANAB KUMAR SEN
University of North Carolina, Chapel Hill
Analogous to the Donsker theorem on partial cumulative sums of independent random variables, weak convergence to Brownian motion processes is studied here for a broad class of one-sample rank-order statistics. A simple proof of the asymptotic normality of these statistics for random sample sizes is also presented.

Work supported by the Army Research Office, Durham.
1. Introduction and the main theorem. Let $\{X_1, X_2, \ldots\}$ be a sequence of independent and identically distributed random variables (iidrv) having a continuous distribution function $F(x)$, $x \in R$, the real line $(-\infty, \infty)$. Let $c(u)$ be equal to $1$ or $0$ according as $u$ is $>$ or $< 0$, and for every $n \ge 1$, let

(1.1) $R_{ni} = \sum_{j=1}^{n} c(|X_i| - |X_j|)$, $1 \le i \le n$.

Consider then the usual one-sample rank-order statistic

(1.2) $T_n = \sum_{i=1}^{n} c(X_i)\,J_n(R_{ni}/(n+1))$,

where the rank-scores $J_n(i/(n+1))$ are defined in either of the following two ways:

(1.3) (a) $J_n(i/(n+1)) = EJ(U_{ni})$ or (b) $J_n(i/(n+1)) = J(EU_{ni})$, $1 \le i \le n$,
where $U_{n1} \le \cdots \le U_{nn}$ are the ordered random variables of a sample of size $n$ from the rectangular $(0,1)$ distribution, and the score-function $J(u)$ is specified by

(1.4) $J(u) = J^{(1)}(u) - J^{(2)}(u)$, where $J^{(i)}(u)$ is $\uparrow$ in $u$: $0 < u < 1$, $i = 1, 2$,

$J(u)$ is absolutely continuous inside $(0,1)$, and

(1.5) $\int_0^1 |J(u)|\,\{u(1-u)\}^{-1/2}\,du < \infty.$

The condition (1.5), due to Hoeffding (1968), is slightly more restrictive than the square integrability condition of Hajek (1968), and it implies that

(1.6) $A_i^2 = \int_0^1 \{J^{(i)}(u)\}^2\,du < \infty$, $i = 1, 2$.

Two well-known cases are (i) the Wilcoxon signed rank statistic, for which $J(u) = u$: $0 \le u \le 1$, and (ii) the normal scores statistic, for which $J(u)$ is the inverse of the chi distribution with one degree of freedom.
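As an illustration of (1.2)-(1.3) and of these two scores (a sketch of ours, not part of the paper), the Python fragment below computes $T_n$ under both score conventions. For the Wilcoxon score the two conventions coincide, since $EU_{ni} = i/(n+1)$; for the normal-scores function they differ, and the exact scores (a) are evaluated by quadrature against the Beta$(i, n-i+1)$ density of $U_{ni}$. All function names are ours.

```python
import numpy as np
from scipy import stats

def scores_b(J, n):
    """Scores (b) of (1.3): J(E U_{ni}), with E U_{ni} = i/(n+1)."""
    i = np.arange(1, n + 1)
    return J(i / (n + 1.0))

def scores_a(J, n, m=4000):
    """Scores (a) of (1.3): E J(U_{ni}); U_{ni} ~ Beta(i, n-i+1),
    integrated by a midpoint rule on (0,1)."""
    u = (np.arange(m) + 0.5) / m
    return np.array([np.mean(J(u) * stats.beta.pdf(u, i, n - i + 1))
                     for i in range(1, n + 1)])

def T_n(x, scores):
    """The one-sample rank-order statistic (1.2)."""
    ranks = stats.rankdata(np.abs(x)).astype(int)   # R_{ni}: ranks of |X_i|
    c = (x > 0)                                     # c(X_i) = 1{X_i > 0}
    return np.sum(c * scores[ranks - 1])

J_wilcoxon = lambda u: u                               # Wilcoxon signed-rank
J_normal = lambda u: stats.norm.ppf((1.0 + u) / 2.0)   # inverse chi_1 scores

rng = np.random.default_rng(0)
x = rng.normal(size=50)
for J in (J_wilcoxon, J_normal):
    print(T_n(x, scores_a(J, 50)), T_n(x, scores_b(J, 50)))
```

For the Wilcoxon score the two printed values agree (up to quadrature error); for the normal scores they differ by $O(n^{-1})$ per score, in line with (3.44) below.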
Let us define $H(x) = F(x) - F(-x)$, $x \ge 0$, and let

(1.7) $\mu_J(F) = \int_0^\infty J(H(x))\,dF(x)$, $\bar{J} = \int_0^1 J(u)\,du$, and $A^2(J) = \int_0^1 J^2(u)\,du$.

Note that $|\mu_J(F)| \le \int_0^1 |J(u)|\,du \le A(J) < \infty$. Asymptotic normality of $n^{-1/2}(T_n - n\mu_J(F))$ was studied by Govindarajulu (1960), Pyke and Shorack (1968), Puri and Sen (1969), Sen (1970), and Huskova (1970), among others. In the present paper, we are primarily interested in the classical invariance principle, or weak convergence to Brownian motion processes, of $\{T_n\}$. For every $n \ge 1$, let
(1.8) $Y_n(0) = 0$, $Y_n(k/n) = (T_k - k\mu_J(F))/(\sigma n^{1/2})$, $k = 1, \ldots, n$,

where $\sigma^2$ is defined by (3.10), and it is assumed that $0 < \sigma^2 < \infty$. Consider then a stochastic process $Y_n = \{Y_n(t): t \in I\}$, $I = \{t: 0 \le t \le 1\}$, where, for $t \in [k/n, (k+1)/n]$, we let

(1.9) $Y_n(t) = Y_n(k/n) + (nt - k)[Y_n((k+1)/n) - Y_n(k/n)]$, $k = 0, \ldots, n-1$.

Then $Y_n$ belongs to the space $C[0,1]$ of all continuous functions on $I$, with which we associate the uniform topology

(1.10) $\rho(Y_n, Y^*_n) = \sup_{t \in I}|Y_n(t) - Y^*_n(t)|$, $Y_n, Y^*_n \in C$.

Finally, let $W = \{W_t: t \in I\}$ be a standard Brownian motion, so that

(1.11) $EW_t = 0$ and $E(W_s W_t) = \min(s, t)$; $s, t \in I$.

Then, the main theorem of the paper is the following.

Theorem 1. If $\sigma^2$, defined by (3.10), is positive and finite, and (1.3)-(1.5) hold, then $Y_n$ converges (as $n \to \infty$) in distribution, in the uniform topology on $C[0,1]$, to a standard Brownian motion $W$.
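Theorem 1 lends itself to a quick numerical illustration (ours, not part of the paper). For the Wilcoxon score $J(u) = u$ and a symmetric $F$, one has $\mu_J(F) = \frac{1}{2}\int_0^1 u\,du = 1/4$ and, by a direct evaluation of (3.10) carried out in section 3's illustration, $\sigma^2 = 1/12$. The sketch below builds $Y_n$ at the grid points $k/n$ and compares the distribution of $\max_t Y_n(t)$ with that of $\max_t W_t$, for which $P\{\max_t W_t \le b\} = 2\Phi(b) - 1$, $b \ge 0$.

```python
import numpy as np
from scipy import stats

def Y_grid(x, mu_J=0.25, sigma2=1/12):
    """Y_n of (1.8)-(1.9) at the points k/n, Wilcoxon score J(u) = u."""
    n = len(x)
    Y = np.zeros(n + 1)
    for k in range(1, n + 1):
        r = stats.rankdata(np.abs(x[:k]))            # ranks within first k
        T_k = np.sum((x[:k] > 0) * r) / (k + 1.0)    # (1.2) with J(u) = u
        Y[k] = (T_k - k * mu_J) / np.sqrt(sigma2 * n)
    return Y

rng = np.random.default_rng(1)
n, reps, b = 200, 500, 1.0
maxima = np.array([Y_grid(rng.normal(size=n)).max() for _ in range(reps)])
print(np.mean(maxima <= b), 2 * stats.norm.cdf(b) - 1)   # should be close
```

The agreement is only approximate at finite $n$ (and the maximum is taken over grid points only), but it improves as $n$ grows, as the theorem asserts.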
We may remark that, in particular, the Wilcoxon signed rank statistic can be expressed as a von Mises (1947) functional, and hence the result follows from Miller and Sen (1971), who considered a similar theorem for Hoeffding's (1948) U-statistics and von Mises functionals. But, in general, this characterization is not possible for $T_n$, and hence a different proof is needed. Our method of approach is based on a powerful polynomial approximation of $J(u)$ due to Hajek (1968), a subsequent follow-up by Hoeffding (1968), a martingale theorem on $T_n$, and a
recent functional central limit theorem for martingales by Brown (1971). The martingale theorem is considered in Section 2. The theorem for $J(u)$ having a bounded second derivative is proved in Section 3, while the general case is treated in Section 4. The concluding section is devoted to a few applications having importance in the developing area of rank-based sequential inference procedures.
2. A martingale theorem. For every $n \ge 1$, define the vectors

(2.1) $\mathbf{c}_n = (c(X_1), \ldots, c(X_n))'$ and $\mathbf{R}_n = (R_{n1}, \ldots, R_{nn})'$

of signs and ranks defined by (1.1). Note that, for every $F$, the distribution of $T_n$ is solely determined by the joint distribution of $(\mathbf{c}_n, \mathbf{R}_n)$. Let $\mathcal{B}_n$ be the $\sigma$-field generated by $(\mathbf{c}_n, \mathbf{R}_n)$, $n \ge 1$, so that $\mathcal{B}_n$ is $\uparrow$ in $n$. Also, let

(2.2) $h_{n,r} = E\{F(X^*_{n-1,r}) - F(X^*_{n-1,r-1})\}$, $r = 1, \ldots, n$,

where $X^*_{n-1,0} = 0$, $X^*_{n-1,n} = \infty$, and $X^*_{n-1,1} \le \cdots \le X^*_{n-1,n-1}$ are the ordered values of $|X_1|, \ldots, |X_{n-1}|$. Since $P\{X_n \in [X^*_{n-1,r-1}, X^*_{n-1,r}]\} = E\{F(X^*_{n-1,r}) - F(X^*_{n-1,r-1})\}$ can be written as

(2.3) $h_{n,r} = \binom{n-1}{r-1}\int_0^\infty [H(x)]^{r-1}[1 - H(x)]^{n-r}\,dF(x)$, $r = 1, \ldots, n$,

we obtain that

(2.4) $a_n = E\{c(X_n)J_n(R_{nn}/(n+1))\} = \sum_{r=1}^{n} J_n\big(\tfrac{r}{n+1}\big)\,h_{n,r}$, $n \ge 1$.
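The quantities (2.2)-(2.4) can be checked numerically (our sketch, not part of the paper). Under a symmetric $F$, evaluating (2.3) in closed form gives $h_{n,r} = 1/(2n)$ for every $r$ (see the Remark below), while for an asymmetric $F$ the $h_{n,r}$ vary with $r$ and sum to $P\{X > 0\}$.

```python
import numpy as np
from scipy import stats, integrate
from scipy.special import comb

def h(n, r, pdf, H):
    """(2.3): h_{n,r} = C(n-1,r-1) * int_0^inf H^{r-1} (1-H)^{n-r} dF."""
    f = lambda x: comb(n - 1, r - 1) * H(x) ** (r - 1) \
                  * (1.0 - H(x)) ** (n - r) * pdf(x)
    return integrate.quad(f, 0, np.inf)[0]

n = 8
# Symmetric case: standard normal, H(x) = 2*Phi(x) - 1 for x >= 0.
H1 = lambda x: 2 * stats.norm.cdf(x) - 1
print([round(h(n, r, stats.norm.pdf, H1), 4) for r in range(1, n + 1)])
# each entry ~ 1/(2n) = 0.0625

# Asymmetric case: normal with mean 0.5.
pdf2 = lambda x: stats.norm.pdf(x, loc=0.5)
H2 = lambda x: stats.norm.cdf(x, 0.5) - stats.norm.cdf(-x, 0.5)
print(sum(h(n, r, pdf2, H2) for r in range(1, n + 1)),
      1 - stats.norm.cdf(-0.5))        # sum_r h_{n,r} = P{X > 0}
```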
For later use, we note that

(2.5) $h_{n,r} = n^{-1}E\{dF(X^*_{n,r})/dH(X^*_{n,r})\}$; $0 \le h_{n,r} \le n^{-1}$ for all $r = 1, \ldots, n$.

Finally, we define

(2.6) $T^*_n = T_n - a^*_n$, where $a^*_n = \sum_{k=1}^{n} a_k$, $n \ge 1$ (and $T^*_0 = a^*_0 = 0$).

Then, we have the following theorem, which extends theorem 4.5 of Sen and Ghosh (1971) to underlying df's $\{F\}$ not necessarily symmetric about zero.
Theorem 2.1. If $\int_0^1 |J(u)|\,du < \infty$ and the scores are defined by (a) in (1.3), then $\{T^*_n, \mathcal{B}_n; n \ge 1\}$ is a martingale.

Proof. By (1.2) and (2.6), for every $n \ge 2$,

(2.7) $E(T^*_n \mid \mathcal{B}_{n-1}) = \sum_{i=1}^{n-1} c(X_i)\,E\{J_n(R_{ni}/(n+1)) \mid \mathcal{B}_{n-1}\} - a^*_{n-1} + E\{c(X_n)J_n(R_{nn}/(n+1)) \mid \mathcal{B}_{n-1}\} - a_n$,

as $\mathbf{c}_{n-1}$ is held fixed under $\mathcal{B}_{n-1}$. Also, given $\mathbf{R}_{n-1}$, $R_{ni}$ can assume the two values $R_{n-1,i}$ and $R_{n-1,i} + 1$ with respective conditional probabilities $(1 - n^{-1}R_{n-1,i})$ and $n^{-1}R_{n-1,i}$, $1 \le i \le n-1$. Hence,

(2.8) $E\{J_n((n+1)^{-1}R_{ni}) \mid \mathcal{B}_{n-1}\} = (n^{-1}R_{n-1,i})\,J_n\Big(\dfrac{R_{n-1,i}+1}{n+1}\Big) + (1 - n^{-1}R_{n-1,i})\,J_n\Big(\dfrac{R_{n-1,i}}{n+1}\Big) = J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)$,

where the last equation follows from a well-known recursion relation among the expected order statistics (cf. [15, p. 198]). Thus, by (2.7) and (2.8),

(2.9) $E(T^*_n \mid \mathcal{B}_{n-1}) = T^*_{n-1} + \{E[c(X_n)J_n((n+1)^{-1}R_{nn}) \mid \mathcal{B}_{n-1}] - a_n\}$.
Now $c(X_n)J_n((n+1)^{-1}R_{nn})$ is either $0$ (when $X_n < 0$), or equal to $J_n(r/(n+1))$ when $X_n \in [X^*_{n-1,r-1}, X^*_{n-1,r}]$, $r = 1, \ldots, n$. Thus, by (2.2) and (2.4),

(2.10) $E\{c(X_n)J_n((n+1)^{-1}R_{nn}) \mid \mathcal{B}_{n-1}\} = \sum_{r=1}^{n} J_n\Big(\dfrac{r}{n+1}\Big)h_{n,r} = a_n$.

Hence, from (2.9) and (2.10),

(2.11) $E(T^*_n \mid \mathcal{B}_{n-1}) = T^*_{n-1}$, $n \ge 2$.

Also, $E(T^*_1) = 0$. Hence the theorem follows.
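The recursion invoked in (2.8), namely $\frac{i}{n}EJ(U_{n,i+1}) + (1 - \frac{i}{n})EJ(U_{n,i}) = EJ(U_{n-1,i})$, is easy to confirm numerically, since $U_{n,i}$ has the Beta$(i, n-i+1)$ distribution; the following check (ours) uses the normal-scores function.

```python
import numpy as np
from scipy import stats, integrate

def EJ(J, n, i):
    """E J(U_{n,i}), with U_{n,i} ~ Beta(i, n-i+1)."""
    f = lambda u: J(u) * stats.beta.pdf(u, i, n - i + 1)
    return integrate.quad(f, 0, 1)[0]

J = lambda u: stats.norm.ppf((1 + u) / 2.0)   # normal (chi_1) scores
n, i = 10, 4
lhs = (i / n) * EJ(J, n, i + 1) + (1 - i / n) * EJ(J, n, i)
print(lhs, EJ(J, n - 1, i))                   # agree up to quadrature error
```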
Corollary 2.1. $E(T_n) = a^*_n$ for all $n \ge 1$.

Proof. The result follows directly by noting that $E(T^*_n) = 0$, $n \ge 1$, and that the $a_n$, $n \ge 1$, are all non-stochastic constants.

Remark. If, in particular, $F(x) + F(-x) = 1$, $\forall x \ge 0$, then $dF(x) = \frac{1}{2}dH(x)$, $x \ge 0$, and hence $h_{n,r} = \frac{1}{2n}$ for all $r$, so that $a_n = \{\sum_{i=1}^{n} J_n(i/(n+1))\}/(2n) = \frac{1}{2}\int_0^1 J(u)\,du = \frac{1}{2}\bar{J}$. Thus, $\{T_n - \frac{n}{2}\bar{J}, \mathcal{B}_n; n \ge 1\}$ forms a martingale; this was already observed by Sen and Ghosh (1971).
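Corollary 2.1 can also be verified by simulation for an asymmetric $F$ (a sketch of ours; the Wilcoxon score is used, so that $J_k(r/(k+1)) = r/(k+1)$ under either convention in (1.3)):

```python
import numpy as np
from scipy import stats, integrate
from scipy.special import comb

pdf = lambda x: stats.norm.pdf(x, loc=0.3)                  # asymmetric F
H = lambda x: stats.norm.cdf(x, 0.3) - stats.norm.cdf(-x, 0.3)

def h(k, r):  # (2.3)
    f = lambda x: comb(k - 1, r - 1) * H(x) ** (r - 1) \
                  * (1.0 - H(x)) ** (k - r) * pdf(x)
    return integrate.quad(f, 0, np.inf)[0]

def a(k):     # (2.4) with Wilcoxon scores
    return sum(r / (k + 1.0) * h(k, r) for r in range(1, k + 1))

n = 6
a_star = sum(a(k) for k in range(1, n + 1))                 # a_n^*
rng = np.random.default_rng(5)
sims = [np.sum((x > 0) * stats.rankdata(np.abs(x))) / (n + 1.0)
        for x in rng.normal(0.3, 1.0, size=(20000, n))]
print(a_star, np.mean(sims))   # corollary 2.1: E(T_n) = a_n^*
```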
3. The proof of the theorem for $J$ with a bounded second derivative. We define $\{T^*_n\}$ as in (2.6), with the scores defined by (a) in (1.3), and let $T^*_0 = 0$. Let us then define two processes $\xi_n = \{\xi_n(t): t \in I\}$ and $\xi^*_n = \{\xi^*_n(t): t \in I\}$ by

(3.1) $\xi_n(t) = T^*_k/(\sigma n^{1/2})$ for $V_k^2 \le t V_n^2 < V_{k+1}^2$, $k = 0, 1, \ldots, n-1$, where

(3.2) $V_0^2 = 0$,

and $V_k^2$, $k \ge 1$, is defined in (3.4) below;
(3.3) $\xi^*_n(t) = \{T^*_k + (nt - k)(T^*_{k+1} - T^*_k)\}/(\sigma n^{1/2})$ for $t \in [\tfrac{k}{n}, \tfrac{k+1}{n}]$,

$k = 0, 1, \ldots, n-1$, where $\sigma^2$ is defined by (3.10). We shall approximate $Y_n$ by $\xi^*_n$, and subsequently $\xi^*_n$ by $\xi_n$, and the theorem will be proved first for $\xi_n$. For this purpose, we let $v_k = T^*_k - T^*_{k-1}$, $k \ge 1$, and define

(3.4) $V_n^2 = \sum_{k=1}^{n} E(v_k^2 \mid \mathcal{B}_{k-1})$, $n \ge 1$;

(3.5) $s_n^2 = E(V_n^2) = E(T^{*2}_n)$, $n \ge 1$.

Since $\{T^*_k, \mathcal{B}_k; k \ge 1\}$ has been shown in theorem 2.1 to be a martingale, by theorem 3 of Brown (1971), we obtain that

(3.6) $\xi_n \xrightarrow{\mathcal{D}} W$ as $n \to \infty$,

provided that

(3.7) $V_n^2/s_n^2 \xrightarrow{P} 1$ as $n \to \infty$,

and, for every $\epsilon > 0$,

(3.8) $s_n^{-2}\sum_{k=1}^{n} E\{v_k^2\,I(|v_k| > \epsilon s_n)\} \to 0$ as $n \to \infty$,

where $I(A)$ stands for the indicator function of the set $A$, and $\xrightarrow{\mathcal{D}}$ ($\xrightarrow{P}$) stands for convergence in distribution (in probability). Now, by (2.6) and Corollary 2.1, for every $n \ge 1$,

(3.9) $E(T^{*2}_n) = \operatorname{Var}(T_n)$.

Let us also define
(3.10) $\sigma^2 = \int_0^\infty J^2(H(x))\,dF(x) - \Big(\int_0^\infty J(H(x))\,dF(x)\Big)^2 + 2\Big[\displaystyle\iint_{0<x<y<\infty} H(x)[1-H(y)]J'(H(x))J'(H(y))\,dF(x)\,dF(y) - \iint_{0<x<y<\infty} H(x)J'(H(x))J(H(y))\,dF(x)\,dF(y) + \iint_{0<x<y<\infty} J(H(x))[1-H(y)]J'(H(y))\,dF(x)\,dF(y)\Big].$

Then, by lemma 2 of Huskova (1970), it follows that

(3.11) $[0 < \sigma < \infty] \Longrightarrow \operatorname{Var}(T_n)/(n\sigma^2) \to 1$ as $n \to \infty$,

and hence, from (3.9) through (3.11), we obtain that

(3.12) $[0 < \sigma < \infty] \Longrightarrow s_n^2/(n\sigma^2) \to 1$ as $n \to \infty$.
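As a concrete check on (3.10)-(3.11) (ours, not in the paper): for the Wilcoxon score $J(u) = u$ and a symmetric $F$, a direct evaluation of the five terms of (3.10) gives $\sigma^2 = \frac{1}{6} - \frac{1}{16} + \frac{1}{48} + \frac{1}{48} - \frac{1}{16} = \frac{1}{12}$, which can be compared with a Monte Carlo estimate of $\operatorname{Var}(T_n)/n$:

```python
import numpy as np
from scipy import stats

def T_wilcoxon(x):
    """(1.2) with J(u) = u: T_n = sum c(X_i) R_{ni} / (n+1)."""
    return np.sum((x > 0) * stats.rankdata(np.abs(x))) / (len(x) + 1.0)

rng = np.random.default_rng(2)
n, reps = 400, 4000
T = np.array([T_wilcoxon(rng.normal(size=n)) for _ in range(reps)])
print(T.var() / n, 1 / 12)   # Var(T_n)/n vs. sigma^2 of (3.10)
```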
Thus, in the proof of (3.8), we may replace $s_n^2$ by $n\sigma^2$. Now, by (2.6),

(3.13) $|v_n| \le \sum_{i=1}^{n-1}\Big|J_n\Big(\dfrac{R_{ni}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big| + \Big|J_n\Big(\dfrac{R_{nn}}{n+1}\Big)\Big| + |a_n|$,

where, by (2.4) and (2.5), $|a_n| \le A(J)$ for all $n \ge 1$.
Also, by (1.4),

(3.14) $J_n(i/(n+1)) = J_{(1)n}(i/(n+1)) - J_{(2)n}(i/(n+1))$, where $J_{(s)n}(i/(n+1)) = EJ^{(s)}(U_{ni})$, $1 \le i \le n$, $s = 1, 2$.

Further, noting that, for $1 \le i \le n-1$, $R_{n-1,i} \le R_{ni} \le R_{n-1,i} + 1$ and, by (1.4), the $J^{(s)}$ are monotonic, we obtain by using (2.8) that
(3.15) $\sum_{i=1}^{n-1}\Big|J_n\Big(\dfrac{R_{ni}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big| \le \sum_{s=1}^{2}\sum_{i=1}^{n-1}\Big|J_{(s)n}\Big(\dfrac{R_{n-1,i}+1}{n+1}\Big) - J_{(s)n}\Big(\dfrac{R_{n-1,i}}{n+1}\Big)\Big| \le \Big|J_{(1)n}\Big(\dfrac{n}{n+1}\Big) - J_{(1)n}\Big(\dfrac{1}{n+1}\Big)\Big| + \Big|J_{(2)n}\Big(\dfrac{n}{n+1}\Big) - J_{(2)n}\Big(\dfrac{1}{n+1}\Big)\Big|.$
Now, (1.5) insures that $[J^{(s)}(u)(1-u)^{1/2}] \to 0$ as $u \to 1$, and hence,

(3.16) $J^{(s)}\big(\tfrac{n}{n+1}\big) = o(n^{1/2})$, $s = 1, 2$.

Also, noting that $J''$ is bounded inside $I$, we have

(3.17) $\Big|J_{(s)n}\Big(\dfrac{n}{n+1}\Big)\Big| = |EJ^{(s)}(U_{nn})| = \Big|J^{(s)}(EU_{nn}) + E(U_{nn} - EU_{nn})J^{(s)\prime}(EU_{nn}) + \tfrac{1}{2}E(U_{nn} - EU_{nn})^2 K\Big| = \Big|J^{(s)}\Big(\dfrac{n}{n+1}\Big)\Big| + O(n^{-2})$, $s = 1, 2$

(where $|K| < \infty$), as $E(U_{nn} - EU_{nn})^2 = n/[(n+1)^2(n+2)] = O(n^{-2})$. Thus, by (3.16) and (3.17),

(3.18) $\Big|J_{(s)n}\Big(\dfrac{n}{n+1}\Big)\Big| = o(n^{1/2})$, $s = 1, 2$,

and a similar case holds for $|J_{(s)n}(\tfrac{1}{n+1})|$, $s = 1, 2$. Hence, from (3.13) through (3.18), we conclude that

(3.19) $|v_n| = o(n^{1/2})$;

that is, for every $\epsilon > 0$ and $0 < \sigma < \infty$, there exists an $n_0(\epsilon, \sigma)$, such that

(3.20) $|v_n| \le \epsilon\sigma n^{1/2}$ for all $n \ge n_0(\epsilon, \sigma)$.
On the other hand, for every $k \ge 1$,

(3.21) $E(v_k^2) < \infty$.

Hence, (3.8) readily follows from (3.12), (3.20) and (3.21).
We now proceed to the proof of (3.7). By (2.6), we have

(3.22) $E(v_n^2 \mid \mathcal{B}_{n-1}) = \sum_{i=1}^{n-1} c(X_i)\,E\Big\{\Big[J_n\Big(\dfrac{R_{ni}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big]^2 \,\Big|\, \mathcal{B}_{n-1}\Big\} + \mathop{\sum\sum}_{i \ne j = 1}^{n-1} c(X_i)c(X_j)\,E\Big\{\Big[J_n\Big(\dfrac{R_{ni}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big]\Big[J_n\Big(\dfrac{R_{nj}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,j}}{n}\Big)\Big] \,\Big|\, \mathcal{B}_{n-1}\Big\} + \Big[E\Big\{\Big[c(X_n)J_n\Big(\dfrac{R_{nn}}{n+1}\Big)\Big]^2 \,\Big|\, \mathcal{B}_{n-1}\Big\} - a_n^2\Big] + 2\sum_{i=1}^{n-1} c(X_i)\,E\Big\{\Big[c(X_n)J_n\Big(\dfrac{R_{nn}}{n+1}\Big)\Big]\Big[J_n\Big(\dfrac{R_{ni}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big] \,\Big|\, \mathcal{B}_{n-1}\Big\}.$
Proceeding as in the proof of theorem 2.1, the first term on the right hand side (rhs) of (3.22) can be written as

(3.23) $\sum_{i=1}^{n-1} c(X_i)\{R_{n-1,i}(n - R_{n-1,i})/n^2\}\Big\{J_n\Big(\dfrac{R_{n-1,i}+1}{n+1}\Big) - J_n\Big(\dfrac{R_{n-1,i}}{n+1}\Big)\Big\}^2.$

Let us now denote by
(3.24) $K_1 = \sup_{0<u<1}|J'(u)|$ and $K_2 = \sup_{0<u<1}|J''(u)|$.

By our assumption, both $K_1$ and $K_2$ are finite, positive constants (depending only on $J$). Then, for every $i$: $1 \le i \le n-1$,

(3.25) $\Big|J_n\Big(\dfrac{i+1}{n+1}\Big) - J_n\Big(\dfrac{i}{n+1}\Big)\Big| = \Big|J\Big(\dfrac{i+1}{n+1}\Big) - J\Big(\dfrac{i}{n+1}\Big)\Big| + O(n^{-1}) \quad (\le K_1(n+1)^{-1} + O(n^{-1})).$

Consequently, by (3.24) and (3.25), the squared difference factor in (3.23) is $O(n^{-2})$ uniformly in $i$, while $\sum_{i=1}^{n-1} R_{n-1,i}(n - R_{n-1,i})/n^2 \le n/4$; hence (3.23) is $O(n^{-1})$, and converges to zero as $n \to \infty$, with probability one.
Let us now define, for every $n \ge 1$,

(3.26) $F^*_n(x) = \dfrac{1}{n+1}\sum_{i=1}^{n} c(x - X_i)$, $-\infty < x < \infty$; $H^*_n(x) = F^*_n(x) - F^*_n(-x-)$, $x \ge 0$.

As $n/(n+1) \to 1$ with $n \to \infty$, by the Glivenko-Cantelli theorem, as $n \to \infty$,

(3.27) $\sup_{-\infty<x<\infty}|F^*_n(x) - F(x)| \to 0$ and $\sup_{x \ge 0}|H^*_n(x) - H(x)| \to 0$,

almost surely (a.s.).
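A short numerical check (ours) of the Glivenko-Cantelli statement (3.27) for $H^*_n$, with $F$ standard normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
H = lambda x: stats.norm.cdf(x) - stats.norm.cdf(-x)    # H(x) = F(x) - F(-x)
for n in (100, 1000, 10000):
    x = np.sort(np.abs(rng.normal(size=n)))
    Hn = np.arange(1, n + 1) / (n + 1.0)   # H_n^* of (3.26) at the order stats
    print(n, np.max(np.abs(Hn - H(x))))    # sup norm shrinks as n grows
```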
For the second term on the rhs of (3.22), we note that, given $\mathcal{B}_{n-1}$, $(R_{ni}, R_{nj})$ can assume the values $(R_{n-1,i}, R_{n-1,j})$, $(R_{n-1,i}, R_{n-1,j}+1)$ and $(R_{n-1,i}+1, R_{n-1,j}+1)$ with respective conditional probabilities $(n - R_{n-1,j})/n$, $(R_{n-1,j} - R_{n-1,i})/n$ and $R_{n-1,i}/n$, for $R_{n-1,i} < R_{n-1,j}$. A similar case follows for $R_{n-1,i} > R_{n-1,j}$. Hence, by some simple steps, the second term on the rhs of (3.22) can be expressed in the integral form as
(3.28) $2\displaystyle\iint_{0<x<y<\infty} G_n(x, y)\,dF^*_{n-1}(x)\,dF^*_{n-1}(y)$,

where, say, $G_n(x, y)$ denotes the finite-difference integrand resulting from these transition probabilities. Now, by (3.24) and (3.25), the integrand in (3.28) is bounded (in absolute value), for all $0 < x < y < \infty$, by $\frac{1}{2}(K_1 + \frac{1}{2}K_2)^2$, and it converges a.s. [by (3.27)] to $H(x)[1-H(y)]J'(H(x))J'(H(y))$ as $n \to \infty$. Consequently, we may write (3.28) as

(3.29) $2\displaystyle\iint_{0<x<y<\infty} H(x)[1-H(y)]J'(H(x))J'(H(y))\,dF^*_{n-1}(x)\,dF^*_{n-1}(y) + o(1)$, a.s.,

as $n \to \infty$. Since, by (3.24), the integrand in (3.29) is bounded (in absolute value) by $\frac{1}{2}K_1^2$ for all $0 < x < y < \infty$, and (3.27) holds, (3.29) converges a.s., as $n \to \infty$, to

(3.30) $2\displaystyle\iint_{0<x<y<\infty} H(x)[1-H(y)]J'(H(x))J'(H(y))\,dF(x)\,dF(y).$
Proceeding as in the proof of theorem 2.1, the third term on the rhs of (3.22) can be shown to be equal to

(3.31) $\sum_{r=1}^{n} J_n^2\Big(\dfrac{r}{n+1}\Big)h_{n,r} - a_n^2.$

Now, by the same method of proof as in lemma 2 of Huskova (1970), it follows that, under (3.24), as $n \to \infty$,

(3.32) $|a_n - \mu_J(F)| \to 0$ and $\Big|\sum_{r=1}^{n} J_n^2\Big(\dfrac{r}{n+1}\Big)h_{n,r} - \int_0^\infty J^2(H(x))\,dF(x)\Big| \to 0.$

Consequently, (3.31) converges (as $n \to \infty$) to
(3.33) $\int_0^\infty J^2(H(x))\,dF(x) - \Big[\int_0^\infty J(H(x))\,dF(x)\Big]^2.$
Finally, noting that, under $\mathcal{B}_{n-1}$, $(R_{nn}, R_{ni})$ $(i \le n-1)$ can be either $(r, R_{n-1,i}+1)$, $1 \le r \le R_{n-1,i}$, or $(r, R_{n-1,i})$, $R_{n-1,i} < r \le n$, with respective conditional probability $h_{n,r}$, $1 \le r \le n$, the last term on the rhs of (3.22) can be shown to be equal to

(3.34) $2\sum_{i=1}^{n-1} c(X_i)\Big\{\sum_{r=1}^{R_{n-1,i}} J_n\Big(\dfrac{r}{n+1}\Big)h_{n,r}\Big\}\Big\{J_n\Big(\dfrac{R_{n-1,i}+1}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big\} + 2\sum_{i=1}^{n-1} c(X_i)\Big\{\sum_{r=R_{n-1,i}+1}^{n} J_n\Big(\dfrac{r}{n+1}\Big)h_{n,r}\Big\}\Big\{J_n\Big(\dfrac{R_{n-1,i}}{n+1}\Big) - J_{n-1}\Big(\dfrac{R_{n-1,i}}{n}\Big)\Big\},$

by using (2.8). Again writing the above in the integral form [on using (3.24)-(3.26)], it follows, by using (3.27), that, as $n \to \infty$, it converges a.s. to

(3.35) $2\displaystyle\iint_{0<x<y<\infty} J(H(x))[1-H(y)]J'(H(y))\,dF(x)\,dF(y) - 2\iint_{0<x<y<\infty} H(x)J'(H(x))J(H(y))\,dF(x)\,dF(y).$
Thus, from (3.10), (3.22), (3.23), (3.30), (3.33) and (3.35), it follows that

(3.36) $E(v_n^2 \mid \mathcal{B}_{n-1}) \to \sigma^2$ a.s., as $n \to \infty$.

Consequently, by (3.5), (3.12) and (3.36),
(3.37) $V_n^2/(n\sigma^2) \to 1$ a.s., as $n \to \infty$,

which proves (3.7), and hence, (3.6) holds.
Now, by the tightness property of $\xi_n$ [cf. Billingsley (1968, p. 56)], for every $\epsilon > 0$ and $\eta > 0$ there exist a $\delta > 0$ and an $n_0 = n_0(\epsilon, \eta)$, such that

(3.38) $P\Big\{\sup_{|s-t|<\delta}|\xi_n(s) - \xi_n(t)| > \epsilon\Big\} < \eta$ for $n \ge n_0$.

Hence, by (3.1), (3.3), (3.12) and (3.38), we obtain that

(3.39) $\rho(\xi_n, \xi^*_n) \xrightarrow{P} 0$ as $n \to \infty$,

so that, by (3.6) and (3.39),

(3.40) $\xi^*_n \xrightarrow{\mathcal{D}} W$ as $n \to \infty$.
Let us now define another process in $C[0,1]$ by

(3.41) $\eta^*_n = \{\eta^*_n(t): t \in I\}$, $\eta^*_n(t) = \{s_n^2/(n\sigma^2)\}^{1/2}\,\xi^*_n(t)$, $t \in I$.

Then, by (3.12), (3.40) and a well-known theorem in Cramer (1946, p. 254), we obtain that

(3.42) $\eta^*_n \xrightarrow{\mathcal{D}} W$ as $n \to \infty$.

Finally, by (1.9), (3.3), (3.41) and corollary 2.1,
(3.43) $\sup_{t \in I}|Y_n(t) - \eta^*_n(t)| \le M n^{-1/2}$,

where $M\,(<\infty)$ depends only on $J$, and the last inequality follows from lemma 2 of Huskova (1970). Hence the proof is completed for scores defined by (a) in (1.3).
If the scores are defined by (b) in (1.3), we note that, by a Taylor expansion (with the second derivative evaluated at an intermediate point $EU_{ni} + \theta(U_{ni} - EU_{ni})$, $0 < \theta < 1$),

(3.44) $|EJ(U_{ni}) - J(EU_{ni})| \le \tfrac{1}{2}K_2\,E(U_{ni} - EU_{ni})^2 = O(n^{-1})$, uniformly in $i$: $1 \le i \le n$.

Hence, the metric $\rho(Y_n, Y^*_n)$, defined by (1.10), for the two processes with the $T_k$ defined by (a) and (b) in (1.3), is $O(n^{-1/2})$, and thereby tends to zero as $n \to \infty$. The proof of the theorem for bounded $J''$ is thus complete.
4. The proof of the theorem: the general case. We now use the Hajek (1968) polynomial approximation of $J(u)$, as further studied by Hoeffding (1968). By lemma 4 of Hoeffding (1968), under (1.4) and (1.5), for every $a > 0$ there exists a decomposition
(4.1) $J(u) = \phi(u) + \phi^{(1)}(u) - \phi^{(2)}(u)$, $0 < u < 1$,

such that $\phi$ is a polynomial, $\phi^{(1)}$ and $\phi^{(2)}$ are non-decreasing, and

(4.2) $\int_0^1 \{|\phi^{(1)}(u)| + |\phi^{(2)}(u)|\}\{u(1-u)\}^{-1/2}\,du < a;$

the last inequality, in turn, implies that

(4.3) $\int_0^1 \{\phi^{(i)}(u)\}^2\,du < a^2$, $i = 1, 2$.

In (1.2), on replacing $J$ by $\phi$, $\phi^{(1)}$ and $\phi^{(2)}$, we define $T_n(\phi)$, $T_n^{(1)}$ and $T_n^{(2)}$, respectively, so that

(4.4) $T_n = T_n(\phi) + T_n^{(1)} - T_n^{(2)}$, $n \ge 1$;

the corresponding processes, defined by (1.8) and (1.9), are denoted by $Y_n(\phi)$, $Y_n^{(1)}$ and $Y_n^{(2)}$, respectively, so that

(4.5) $Y_n = Y_n(\phi) + Y_n^{(1)} - Y_n^{(2)}$.

Note that in (4.5), all the processes have $(\sigma n^{1/2})$ in the denominator, as defined in (1.8).
Now, from Hajek (1968) and Huskova (1970), along with (3.12), it follows that for every $\epsilon > 0$ there exists an $a > 0$ such that (4.2) holds, and

(4.6) $\Big|1 - \{\operatorname{Var}(T_n(\phi))\}^{1/2}/(\sigma n^{1/2})\Big| < \epsilon$ for $n \ge n_0(\epsilon)$.
Since $\phi$ is a polynomial and (4.6) holds, by the results of section 3,

(4.7) $Y_n(\phi) \xrightarrow{\mathcal{D}} W$, as $n \to \infty$.

Consequently, it suffices to show that, for every $\epsilon > 0$ and $\eta > 0$, there exists a choice of $a > 0$ in (4.2), such that, with probability $> 1 - \eta$,

(4.8) $\sup_{t \in I}|Y_n^{(i)}(t)| < \epsilon$, $i = 1, 2$, for all $n$ sufficiently large.

For each $i (= 1, 2)$, we define $a_{n(i)}$ as in (2.4) (for $J = \phi^{(i)}$), and let

(4.9) $T_n^{(i)*} = T_n^{(i)} - a^*_{n(i)}$, $a^*_{n(i)} = \sum_{k=1}^{n} a_{k(i)}$, $n \ge 1$.

Then

(4.10) $\sup_{t \in I}|Y_n^{(i)}(t)| \le (\sigma n^{1/2})^{-1}\Big\{\max_{1 \le j \le n}|T_j^{(i)*}| + \max_{1 \le j \le n}|a^*_{j(i)} - j\mu_J^{(i)}(F)|\Big\},$

where

(4.11) $\mu_J^{(i)}(F) = \int_0^\infty \phi^{(i)}(H(x))\,dF(x)$, $i = 1, 2$.
Let us first consider the case where the scores are defined by (a) in (1.3). Then, by theorem 2.1, $\{T_k^{(i)*}, \mathcal{B}_k; k \ge 1\}$ is a martingale, and hence, by the Kolmogorov inequality for martingales [viz., Feller (1965, p. 235)], for every $\eta > 0$,

(4.12) $P\Big\{\max_{1 \le j \le n}|T_j^{(i)*}| > \eta\sigma n^{1/2}\Big\} \le E(T_n^{(i)*})^2/(\eta^2\sigma^2 n).$
Also, by theorem 4 of Huskova (1970), for each $i (= 1, 2)$,

(4.13) $E(T_n^{(i)*})^2 \le 10\,n\int_0^1 \{\phi^{(i)}(u)\}^2\,du.$

Consequently, by (4.3) and (4.13), (4.12) can be made arbitrarily small by proper choice of $a (> 0)$.
Now, under (1.5), we obtain, on using our corollary 2.1 and proceeding as in proposition 2 of Hoeffding (1968), that

(4.14) $\max_{1 \le k \le n}|a^*_{k(i)} - k\mu_J^{(i)}(F)| \le C n^{1/2} a$, $i = 1, 2$,

where $C$ does not depend on $a$ or the $\phi^{(i)}$. Consequently,

(4.15) $\max_{1 \le k \le n}|a^*_{k(i)} - k\mu_J^{(i)}(F)|/(\sigma n^{1/2}) \le (C/\sigma)a$, $i = 1, 2$,

and (4.15) can be made adequately small by proper choice of $a (> 0)$ in (4.2). Thus (4.8) holds, and the proof is completed for scores defined by (a) in (1.3).
We complete the proof of the theorem by considering the scores defined by (b) in (1.3). Since, in (4.1), $\phi$ is the polynomial component, on which the results of section 3 apply, all we need to show is that, on defining

(4.16) $\bar{T}_n^{(i)} = \sum_{i=1}^{n} c(X_i)\,E\phi^{(i)}(U_{nR_{ni}})$, $n \ge 1$ $(i = 1, 2)$,

for every $\epsilon > 0$ and $\eta > 0$ there exists an $a (> 0)$ in (4.2), such that

(4.17) $P\Big\{\max_{1 \le k \le n}|T_k^{(i)} - \bar{T}_k^{(i)}| > \epsilon\sigma n^{1/2}\Big\} < \eta$, $i = 1, 2$,

for all $n \ge n_0(\epsilon, \eta)$. Now, by our (4.16) and proposition 1 of Hoeffding (1968),

(4.18) $E\Big\{\max_{1 \le k \le n}|T_k^{(i)} - \bar{T}_k^{(i)}|\Big\} \le C_1 n^{1/2} a,$

where $C_1$ does not depend on the $\phi^{(i)}$. Hence, (4.17) readily follows from (4.18) and (4.2) by proper choice of $a > 0$, and the proof is terminated.
Remark. If $F$ is symmetric about $0$, i.e., $F(x) + F(-x) = 1$, $\forall x \ge 0$, then $a^*_{k(i)} = k\mu_J^{(i)}(F) = \frac{k}{2}\int_0^1 \phi^{(i)}(u)\,du$, for $i = 1, 2$, $k \ge 1$. Thus, we do not require (4.14) and (4.15), and hence, for scores defined by (a) in (1.3), the square integrability condition (1.6) and (1.4) suffice for our purpose. However, in general, for arbitrary $F$ or for scores defined by (b) in (1.3), we are not in a position to replace (1.5) by (1.6).
5. Some applications. For every $t > 0$, consider now a positive integer-valued random variable $N_t$, and, for $N_t = n (\ge 1)$, define $T_n$ and $Y_n$ as in (1.2), (1.8) and (1.9). If $N_t$ satisfies the condition

(5.1) $N_t/t \xrightarrow{P} \theta$ as $t \to \infty$: $0 < \theta < \infty$,
then, analogous to theorem 1, we have, under (1.3)-(1.5),

(5.2) $Y_{N_t} \xrightarrow{\mathcal{D}} W$ as $t \to \infty$.

The proof is quite similar to that of theorem 17.1 of Billingsley (1968, p. 146), and follows as a corollary to our main theorem in section 1. For brevity, the details are omitted. In particular, we have from (5.2) that

(5.3) $(T_{N_t} - N_t\,\mu_J(F))/(\sigma N_t^{1/2}) \xrightarrow{\mathcal{D}} \mathcal{N}(0, 1)$ as $t \to \infty$;

for a different proof of (5.3), under slightly different regularity conditions, we may refer to Pyke and Shorack (1968).
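A quick simulation (ours) of (5.3), taking $N_t$ Poisson with mean $\theta t$ (which satisfies (5.1)), with the Wilcoxon score under a symmetric $F$, so that $\mu_J(F) = 1/4$ and $\sigma^2 = 1/12$ as computed in the illustration of section 3:

```python
import numpy as np
from scipy import stats

def T_wilcoxon(x):
    return np.sum((x > 0) * stats.rankdata(np.abs(x))) / (len(x) + 1.0)

rng = np.random.default_rng(3)
t, theta, reps = 300.0, 1.0, 2000
Z = []
for _ in range(reps):
    N = max(1, rng.poisson(theta * t))        # random sample size, cf. (5.1)
    x = rng.normal(size=N)
    Z.append((T_wilcoxon(x) - N * 0.25) / np.sqrt(N / 12.0))
print(stats.kstest(Z, "norm"))   # should be consistent with N(0,1)
```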
We conclude this section with the following problem. As in section 2, define $T^*_n = T_n - a^*_n$, $n \ge 1$. For every $T > 0$, define a positive number $K_T$, such that

(5.4) $\lim_{T \to \infty} T^{-1/2}K_T = K^*$: $0 < K^* < \infty$.

Let then

(5.5) $\tau_T^+ = \inf\{n: T^*_n \ge K_T\}$, $T > 0$,

i.e., $\tau_T^+$ is the first time ($n$) at which $T^*_n$ exceeds or reaches $K_T$. We want to find an expression for

(5.6) $P\{\tau_T^+ \le t_T\}$ for $t_T > 0$.

Note that, on denoting by $[s]$ the greatest integer contained in $s$,
(5.7) $P\{\tau_T^+ \le t_T\} = P\{T^*_n \ge K_T \text{ for at least one } n: 1 \le n \le [t_T]\},$

where $\sigma^2$ is defined by (3.10). Thus, if

(5.8) $t_T/T \to c^2$: $0 < c < \infty$, as $T \to \infty$,

then, from (5.4), (5.7) and (5.8), we obtain, on using theorem 1, that

(5.9) $\lim_{T \to \infty} P\{\tau_T^+ \le t_T\} = P\Big\{\sup_{u \in I} W_u \ge K^*/(c\sigma)\Big\},$

where $W = \{W_u: u \in I\}$ is a standard Brownian motion on $I$. Hence, by a well-known result on $W$, we have from (5.9)

(5.10) $\lim_{T \to \infty} P\{\tau_T^+ \le t_T\} = 2\Big\{\dfrac{1}{\sqrt{2\pi}}\int_{B}^{\infty} e^{-u^2/2}\,du\Big\}$, $B = K^*/(c\sigma)$.

Similarly, if

(5.11) $\tau_T^* = \inf\{n: |T^*_n| \ge K_T\}$, $T > 0$,

it can be shown that

(5.12) $\lim_{T \to \infty} P\{\tau_T^* \le t_T\} = 1 - \sum_{k=-\infty}^{\infty} (-1)^k \int_{(2k-1)B}^{(2k+1)B} \dfrac{1}{\sqrt{2\pi}}\,e^{-u^2/2}\,du$, $B = K^*/(c\sigma)$.

These results are of importance in sequential tests based on $\{T_n\}$.
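The limits (5.10) and (5.12) are easy to tabulate; the sketch below (ours) evaluates them, using $P\{\sup_u W_u \ge B\} = 2(1 - \Phi(B))$, and checks both against simulated Brownian paths.

```python
import numpy as np
from scipy import stats

def p_one_sided(B):
    """(5.10): lim P{tau_T^+ <= t_T} = 2(1 - Phi(B))."""
    return 2 * (1 - stats.norm.cdf(B))

def p_two_sided(B, kmax=50):
    """(5.12): 1 - sum_k (-1)^k [Phi((2k+1)B) - Phi((2k-1)B)]."""
    k = np.arange(-kmax, kmax + 1)
    s = np.sum((-1.0) ** k * (stats.norm.cdf((2 * k + 1) * B)
                              - stats.norm.cdf((2 * k - 1) * B)))
    return 1 - s

rng = np.random.default_rng(4)
m, reps, B = 1000, 2000, 1.0
W = np.cumsum(rng.normal(scale=np.sqrt(1.0 / m), size=(reps, m)), axis=1)
print(p_one_sided(B), np.mean(W.max(axis=1) >= B))
print(p_two_sided(B), np.mean(np.abs(W).max(axis=1) >= B))
```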
REFERENCES

[1] BILLINGSLEY, P. (1968). Convergence of Probability Measures. John Wiley, New York.

[2] BROWN, B. M. (1971). Martingale central limit theorems. Ann. Math. Statist. 42, 59-66.

[3] CRAMER, H. (1946). Mathematical Methods of Statistics. Princeton Univ. Press, Princeton, New Jersey.

[4] FELLER, W. (1965). An Introduction to Probability Theory and its Applications, Vol. 2. John Wiley, New York.

[5] GOVINDARAJULU, Z. (1960). Central limit theorems and asymptotic efficiency for one-sample nonparametric procedures. Technical Report No. 11, Univ. of Minnesota (Dept. of Statistics).

[6] HAJEK, J. (1968). Asymptotic normality of simple linear rank statistics under alternatives. Ann. Math. Statist. 39, 325-346.

[7] HOEFFDING, W. (1948). A class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19, 293-325.

[8] HOEFFDING, W. (1968). On the centering of a simple linear rank statistic. Inst. Statist., Univ. North Carolina, Mimeo Report No. 585.

[9] HUSKOVA, M. (1970). Asymptotic distribution of simple linear rank statistics for testing symmetry. Zeit. Wahrsch. verw. Geb. 14, 308-322.

[10] MILLER, R. G., JR., AND SEN, P. K. (1971). Weak convergence of U-statistics and von Mises' differentiable statistical functions. Ann. Math. Statist. (submitted).

[11] MISES, R. VON (1947). On the asymptotic distribution of differentiable statistical functions. Ann. Math. Statist. 18, 309-348.

[12] PURI, M. L., AND SEN, P. K. (1969). On the asymptotic normality of one-sample rank-order test statistics. Teor. Verojatnost. i Primenen. 14, 167-172.

[13] PYKE, R., AND SHORACK, G. (1968). Weak convergence and a Chernoff-Savage theorem for random sample sizes. Ann. Math. Statist. 39, 1675-1685.

[14] SEN, P. K. (1970). On the distribution of the one-sample rank-order statistics. Nonparametric Techniques in Statistical Inference (edited by M. L. Puri), Cambridge Univ. Press, New York, 53-72.

[15] SEN, P. K., AND GHOSH, M. (1971). On bounded length sequential confidence intervals based on one-sample rank order statistics. Ann. Math. Statist. 42, 189-203.