Ghosh, M. and Sen, P.K. (1974). "Asymptotic theory of sequential tests based on linear functions of order statistics."

ASYMPTOTIC THEORY OF SEQUENTIAL TESTS BASED ON
LINEAR FUNCTIONS OF ORDER STATISTICS
By
Malay Ghosh and Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 959
OCTOBER 1974
ASYMPTOTIC THEORY OF SEQUENTIAL TESTS BASED ON
LINEAR FUNCTIONS OF ORDER STATISTICS 1)

By MALAY GHOSH 2) and PRANAB KUMAR SEN 3)
Abstract
Based on an almost sure Wiener process approximation for linear functions of order statistics along with the strong consistency of their variance estimates, a class of sequential tests is considered. These tests terminate with probability 1. Asymptotic OC and ASN functions are also studied.
AMS 1970 Classification No. 62 L 10, 62 G 10.
Key Words and Phrases: Almost sure convergence of variance estimates, asymptotic sequential tests, ASN, linear functions of order statistics, OC function, termination with probability 1, Wiener process approximations.
1) Work supported partially by the Air Force Office of Scientific Research, A.F.S.C., U.S.A.F., Grant No. AFOSR 74-2736 and partially by a grant of the Deutsche Forschungsgemeinschaft.
2) Currently at the Iowa State University, Ames, Iowa, U.S.A., on leave of absence from the Indian Statistical Institute, Calcutta, India.
3) Currently at the Albert-Ludwigs-Universität, Freiburg i.Br., Germany, on a sabbatical leave of absence from the University of North Carolina, Chapel Hill, N.C., U.S.A.
1. Introduction. Let {X_i, i ≥ 1} be a sequence of independent and identically distributed random variables (i.i.d.r.v.) with each X_i having a continuous distribution function (d.f.) F(x), x ∈ R, the real line (−∞, ∞). For every n ≥ 1, the order statistics corresponding to X_1, ..., X_n are denoted by X_{n,1}, ..., X_{n,n}; by virtue of the assumed continuity of F, X_{n,1} < ... < X_{n,n}, with probability 1. Extensive research has been done on the (optimal) use of linear functions of order statistics for drawing statistical inference on the parameters associated with F. We may refer to Sarhan and Greenberg (1962) and David (1970), which provide a thorough survey of the basic theory (for finite sample sizes). Consider the sequence
(1.1)    T_n = n⁻¹ Σ_{i=1}^{n} J(i/(n+1)) X_{n,i},  n ≥ 1,

where J = {J(u): u ∈ I} (I = [0, 1]) is a smooth weight function.
context of asymptotically best estimates of location and scale parameters,
it follows from the works of Lloyd, Ogawa, Jung and Blom, among others
[viz., Chapter 4 of Sarhan and Greenberg (1962)J that under suitable regu.
larity conditions on F and J, T is an asymptotically most efficient estin
mator of the parameter
~
(1.2)
=j..l(F,J) = J x
J(F(x» dF(x).
_ <»
In general, for suitable choice of J, μ is a functional of the d.f. F and is usually a measure of some characteristic of F. Within this framework, we are primarily interested here in sequential tests for μ based on {T_n, n ≥ 1}. Specifically, we consider
(1.3)    H₀: μ = μ₀  vs.  H₁: μ = μ₁ = μ₀ + Δ,  Δ > 0,

where μ₀ and Δ (small) are specified, and we aim to have a test having some prescribed strength (α, β), where 0 < α, β < 1. In our formulation, we assume that under (1.3), the d.f. F is not completely specified. If it were so, the Wald (1947) sequential probability ratio test (SPRT) would have been applicable.
Following the Bartlett-Cox motivation of sequential likelihood ratio tests (SLRT) [as further elaborated and rigorously formulated for nonparametric statistics by Sen (1973 a, b) and Sen and Ghosh (1974)], the proposed sequential tests are introduced in Section 2. It is shown in Section 3 that under fairly general conditions (on F and J), these tests terminate with probability 1. For studying the OC and ASN of the proposed procedures, we confine ourselves to the asymptotic situation where we let Δ → 0. In practice, the derived asymptotic expressions should provide good approximations whenever, in (1.3), Δ is taken small. We present these asymptotic results in Sections 4 and 5. The results are based on certain almost sure (a.s.) convergence results on some functionals of the empirical d.f.'s, the proofs of which are mostly sketched in the appendix.
2. Sequential tests based on linear estimates (STBLE). Our procedure is based on an a.s. invariance principle for {T_n − μ; n ≥ 1}. It is well known (viz. [7] and the references cited therein) that under fairly general conditions on F and J, as n → ∞,

(2.1)    n^{1/2}(T_n − μ)/σ converges in law to a standard normal r.v.,
where

(2.2)    σ² = 2 ∫∫_{−∞<x<y<∞} F(x)[1 − F(y)] J(F(x)) J(F(y)) dx dy,

and we assume that 0 < σ² < ∞. By making use of a recent result of Ghosh (1972 a), we strengthen (2.1) to the following a.s. invariance principle: as n → ∞,
(2.3)    n(T_n − μ) = σW(n) + O([n log log n]^{1/4} [log n]^{1/2}) a.s.,

where W = {W(t): 0 ≤ t < ∞} is a standard Brownian motion on R⁺ = [0, ∞); see Theorem A.1 in the appendix.
Also, in general, σ², defined by (2.2), is not known [even under (1.3)]. To estimate it, we define, for every n ≥ 1, the empirical d.f. F_n by

(2.4)    F_n(x) = n⁻¹ Σ_{i=1}^{n} u(x − X_i),  x ∈ R,

where u(t) is 1 or 0 according as t ≥ 0 or t < 0. Let then
(2.5)    σ_n² = 2 ∫∫_{−∞<x<y<∞} J(F_n(x)) J(F_n(y)) F_n(x)[1 − F_n(y)] dx dy.

We prove in Theorem A.2 that

(2.6)    σ_n²/σ² → 1 a.s., as n → ∞.
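For concreteness, the statistics in (1.1) and (2.5) are easy to evaluate numerically. The sketch below is our illustration, not part of the paper: it assumes the common L-statistic form T_n = n⁻¹ Σ_{i=1}^n J(i/(n+1)) X_{n,i} and approximates the double integral in (2.5) on a finite grid. With J ≡ 1, T_n is the sample mean and σ² of (2.2) reduces to Var(X₁) by Hoeffding's identity, so both outputs should be near 0 and 1 for standard normal data.

```python
import numpy as np

def t_n(x, J):
    # L-statistic (1.1): T_n = n^{-1} sum_i J(i/(n+1)) X_{n,i}
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return np.mean(J(np.arange(1, n + 1) / (n + 1)) * x)

def sigma2_n(x, J):
    # Grid approximation to (2.5):
    #   2 * iint_{x < y} J(F_n(x)) J(F_n(y)) F_n(x) [1 - F_n(y)] dx dy
    x = np.sort(np.asarray(x, dtype=float))
    grid = np.linspace(x[0], x[-1], 400)
    dx = grid[1] - grid[0]
    Fn = np.searchsorted(x, grid, side="right") / len(x)  # empirical d.f. (2.4)
    g = J(Fn) * Fn               # x-part of the integrand
    h = J(Fn) * (1.0 - Fn)       # y-part of the integrand
    inner = np.cumsum(g) * dx    # integral of g over (-inf, y)
    return 2.0 * np.sum(h * np.concatenate(([0.0], inner[:-1]))) * dx

rng = np.random.default_rng(1)
sample = rng.normal(size=2000)
J = lambda u: np.ones_like(u)    # J = 1: T_n is the mean, sigma^2 = Var(X_1)
print(t_n(sample, J), sigma2_n(sample, J))
```

With J ≡ 1 the two routines reproduce the sample mean and (up to discretization error) the sample variance, a convenient check of the identity behind (2.5).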
By looking at the Bartlett-Cox [2] SLRT, one observes that their theory is based on a Wiener-process approximation for the sequence of likelihood ratios along with the a.s. convergence of the maximum likelihood estimators; analogues of these for linear functions of order statistics are provided in (2.3) and (2.6). As such, by the same motivation as in [2, 9, 10, 11], we may proceed as follows.
Let (α, β) be the desired strength of the test. We take 0 < α, β < ½, and consider two positive numbers B (≥ β/(1 − α)) and A (≤ (1 − β)/α), so that 0 < B < 1 < A < ∞. Let then b = log B and a = log A, so that

(2.7)    −∞ < b < 0 < a < ∞.

Also, for Δ (> 0), defined by (1.3), consider an initial sample of size n₀ = n₀(Δ), where we assume that for small Δ, n₀(Δ) is moderately large.
A more precise statement will be introduced in Section 4. Then, starting with the initial sample of size n₀(Δ), we continue drawing observations, one by one, so long as

(2.8)    b σ_n² < nΔ[T_n − ½(μ₀ + μ₁)] < a σ_n²,  n ≥ n₀(Δ).

If N (= N(Δ)) is the smallest positive integer (≥ n₀(Δ)) for which (2.8) is vitiated, we accept H₀ or H₁ according as NΔ[T_N − ½(μ₀ + μ₁)] is ≤ b σ_N² or ≥ a σ_N². N(Δ) is therefore the stopping variable; if the process does not terminate, we let N(Δ) = ∞.
Note that, in structure, our procedure is similar to the one in Sen (1973 a) where, instead of (T_n, σ_n²), sequences of U-statistics and their estimated variances were used. However, in mathematical manipulations, we need here an altogether different approach.
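The stopping rule (2.8) can be sketched in a few lines. The code below is a hypothetical illustration, not from the paper: it takes J ≡ 1 (so that T_n is the sample mean and σ_n² of (2.5) reduces to the sample variance), chooses B and A as in (4.10), and draws observations one at a time.

```python
import numpy as np

def stble(draw, mu0, delta, alpha=0.05, beta=0.05, n0=50, n_max=100000):
    # Boundaries (2.7), with B = beta/(1-alpha) and A = (1-beta)/alpha as in (4.10)
    b = np.log(beta / (1 - alpha))
    a = np.log((1 - beta) / alpha)
    mid = mu0 + delta / 2.0                  # (mu0 + mu1)/2
    xs = [draw() for _ in range(n0)]         # initial sample of size n0
    for n in range(n0, n_max):
        x = np.array(xs)
        stat = n * delta * (x.mean() - mid)  # n*Delta*[T_n - (mu0+mu1)/2] for J = 1
        s2 = x.var()                         # sigma_n^2 is the sample variance here
        if stat <= b * s2:
            return n, "accept H0"
        if stat >= a * s2:
            return n, "accept H1"
        xs.append(draw())                    # (2.8) still holds: draw one more
    return n_max, "no decision"

rng = np.random.default_rng(7)
print(stble(lambda: rng.normal(loc=0.0), mu0=0.0, delta=0.3))  # data from H0
print(stble(lambda: rng.normal(loc=0.3), mu0=0.0, delta=0.3))  # data from H1
```

The returned sample size illustrates the ASN behaviour studied in Section 5, and the decision frequencies over repeated runs illustrate the OC function of Section 4.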
3. Termination of the STBLE. We assume that

(3.1)    J(u) and J′(u) exist and are continuous in u ∈ I,

(3.2)    J′(u) is of bounded variation in u ∈ I,

(3.3)    ν_r = ∫_{−∞}^{∞} |x|^r dF(x) < ∞,

for some r (≥ 2), to be specified later on.
Theorem 3.1. Under (3.1), (3.2) and (3.3) with r = 2, the STBLE terminates with probability one, i.e., for every given μ₀, Δ and μ, P{N(Δ) < ∞ | μ} = 1.

Proof. For every n ≥ n₀(Δ), by (2.8),
(3.4)    P{N(Δ) > n | μ} ≤ P{n⁻¹ b σ_n² < Δ[T_n − ½(μ₀ + μ₁)] < n⁻¹ a σ_n² | μ}.

If μ ≠ ½(μ₀ + μ₁), we note that by Theorem 1 of Ghosh (1972 b), under conditions less restrictive than (3.1)-(3.3), T_n → μ a.s., as n → ∞, and by Theorem A.2, under (3.1)-(3.3), σ_n² → σ² a.s., as n → ∞, so that the right hand side (rhs) of (3.4) converges to 0 as n → ∞. If μ = ½(μ₀ + μ₁), we rewrite the rhs of (3.4) as
(3.5)    P{n^{−1/2} b σ_n²/(Δσ) < n^{1/2}[T_n − μ]/σ < n^{−1/2} a σ_n²/(Δσ) | μ},

where, by the results of Moore (1968), n^{1/2}[T_n − μ]/σ is asymptotically normal with zero mean and unit variance, and, by Theorem A.2, σ_n/σ → 1 a.s., as n → ∞. Consequently, (3.5) converges to 0 as n → ∞. Q.E.D.

4. OC function of the STBLE. We shall provide here an asymptotic expression for the OC function, where we let Δ → 0. Note that for fixed μ (≠ μ₀), the OC function will converge to 1 or 0 [according as μ < μ₀ or μ > μ₀] when Δ → 0. Hence, to avoid this limiting degeneracy, we let
(4.1)    μ = μ(Δ) = μ₀ + φΔ,  |φ| ≤ K,

where K (< ∞) is some positive number (> 1). Also, we assume that

(4.2)    lim_{Δ→0} n₀(Δ) = ∞  but  lim_{Δ→0} Δ² n₀(Δ) = 0.
For motivation of (4.1) and (4.2), we again refer to Sen (1973 a, b) and Sen and Ghosh (1974). Finally, let L_{J,F}(φ, Δ) be the OC (probability of acceptance of H₀ when J, F and φ are given, Δ is specified by (1.3) and μ, by (4.1)) of the STBLE.
Theorem 4.1. If, in addition to (3.1) and (3.2), J″(u) is bounded for u ∈ I and (3.3) holds for some r > 2, then under (4.1) and (4.2),

(4.3)    lim_{Δ→0} L_{J,F}(φ, Δ) = p(φ) = (A^{1−2φ} − 1)/(A^{1−2φ} − B^{1−2φ}),  φ ≠ ½;  p(½) = a/(a − b).

Thus, asymptotically (as Δ → 0), the OC function does not depend on F and J, and the STBLE is asymptotically consistent.
Proof. Let E₁ = E₁(Δ) = {mΔ(T_m − ½[μ₀ + μ₁]) ≤ b σ_m² for some m ≥ n₀(Δ) before being ≥ a σ_m² for a smaller m}, E₂ = E₂(Δ, η) = {|σ_m²/σ² − 1| < η, ∀ m ≥ n₀(Δ)}, η > 0, and let E_i^c, i = 1, 2, denote the complementary events. Then, by definition, for every η > 0,

(4.4)    L_{J,F}(φ, Δ) = P_φ{E₁(Δ) ∩ E₂(Δ, η)} + P_φ{E₁(Δ) ∩ E₂^c(Δ, η)},

where, by Theorem A.2 and (4.2), P_φ{E₁(Δ) ∩ E₂^c(Δ, η)} ≤ P_φ{E₂^c(Δ, η)} → 0 as Δ → 0. We also denote by N^{(ij)}(Δ) the stopping variable, defined similarly as N(Δ), but with b σ_n² and a σ_n² being replaced by b σ²(1 + (−1)^i η) and a σ²(1 + (−1)^j η), respectively, and let L^{(ij)}_{J,F}(φ, Δ) be the corresponding OC function, for i, j = 1, 2. Then, from the above, we conclude that for every η > 0 and ε > 0, there exists a Δ₀ (> 0), such that for 0 < Δ ≤ Δ₀,

(4.5)    min_{i,j} L^{(ij)}_{J,F}(φ, Δ) − ε ≤ L_{J,F}(φ, Δ) ≤ max_{i,j} L^{(ij)}_{J,F}(φ, Δ) + ε.

Since ε > 0 and η > 0 are arbitrary, to prove the theorem, it suffices to show that
(4.6)    lim_{η→0} lim_{Δ→0} L^{(ij)}_{J,F}(φ, Δ) = p(φ),  for i, j = 1, 2.
For every real d and c₁ < 0 < c₂, let us denote by

(4.7)    P(c₁, c₂, d) = (e^{2dc₂} − 1)/(e^{2dc₂} − e^{2dc₁}),  d ≠ 0;  P(c₁, c₂, 0) = c₂/(c₂ − c₁).

Then, from Theorem 4.3 of Anderson (1960), we have for every γ > 0,

(4.8)    lim_{T→∞} P{W(t), 0 ≤ t ≤ T, first crosses the line γ⁻¹c₁ + γdt before crossing the line γ⁻¹c₂ + γdt} = P(c₁, c₂, d).
If we now let n*(Δ) = max{k: Δ²k ≤ −log Δ}, then as Δ → 0, n*(Δ) → ∞ and Δ²n*(Δ) → ∞, but Δ([n*(Δ)]^{1/4}[log n*(Δ)]^{3/4}) → 0, so that, by our Theorem A.1, we have for μ = μ₀ + φΔ,

(4.9)    max_{n₀(Δ) ≤ n ≤ n*(Δ)} Δ|n(T_n − μ) − σW(n)| → 0, with probability 1, as Δ → 0,

and, by virtue of Lemma 4.2 of Strassen (1967), we obtain that as Δ → 0, {ΔW(n): n₀(Δ) ≤ n ≤ n*(Δ)} behaves like a continuous Brownian motion on (0, ∞). Hence, (4.6) follows from (4.7)-(4.9) along with the fact that c₁ = Δ⁻¹b(1 + (−1)^i η), c₂ = Δ⁻¹a(1 + (−1)^j η), i, j = 1, 2, d = (½ − φ)Δ, and η (> 0) is arbitrary. Q.E.D.
Note that, if we let

(4.10)    B = β/(1 − α)  and  A = (1 − β)/α,

then from (4.3), (2.7) and (4.10), we obtain that

(4.11)    lim_{Δ→0} L_{J,F}(0, Δ) = 1 − α,  lim_{Δ→0} L_{J,F}(1, Δ) = β,

so that asymptotically, the STBLE has the prescribed strength (α, β).
5. ASN function of the STBLE. We require here more stringent conditions, as we face here the moment convergence problem, which is stronger than the convergence in distribution problem of Section 3.

Theorem 5.1. Suppose that (3.1), (3.2), (4.1), (4.2) hold, (3.3) holds for some r > 4 and J″(u) is continuous in u ∈ I. Then, for every φ,

(5.1)    lim_{Δ→0} Δ² E_φ[N(Δ)] = ν(φ, σ²),

where E_φ stands for the expectation under (4.1), and

(5.2)    ν(φ, σ²) = [a p(φ) + b{1 − p(φ)}] σ² (½ − φ)⁻¹,  φ ≠ ½;  ν(½, σ²) = −(a − b) p′(½) σ²,

with p(φ) defined by (4.3).

Proof. We consider first the case of φ ≠ ½. Define
(5.3)    n₁ = n₁(Δ) = [εΔ⁻²],  n₂ = n₂(Δ) = [KΔ⁻²(−log Δ)],

where K (< ∞) is a positive constant and ε (> 0) is arbitrary. Then,

(5.4)    Δ² E_φ[N(Δ)] = Δ² [Σ_{n≤n₁} + Σ_{n₁<n≤n₂} + Σ_{n>n₂}] n P_φ{N(Δ) = n},
where the first term on the rhs of (5.4) is ≤ ε. At this stage, we make use of the following results, proved in Theorem A.3 in the appendix. For every η > 0, under the hypothesis of Theorem 5.1, there exist positive numbers K₁, K₂ and an integer n₀, such that for n ≥ n₀, for some δ > 0,

(5.5)    P{|σ_n²/σ² − 1| > η} ≤ K₂ n^{−1−δ},

(5.6)    P{|T_n − μ − Z̄_n| > K₁ n⁻¹(log n)²} ≤ K₂ n^{−1−δ},

where Z̄_n = n⁻¹ Σ_{i=1}^{n} Z_i, n ≥ 1, and

(5.7)    Z_i = −∫_{−∞}^{∞} [u(x − X_i) − F(x)] J(F(x)) dx,  i ≥ 1.
Suppose, in (2.8), we replace b σ_n², a σ_n² by b σ²(1 + (−1)^i η), a σ²(1 + (−1)^i η) and [T_n − ½(μ₀ + μ₁)] by [Z̄_n + (φ − ½)Δ], respectively, and denote the corresponding stopping variable by Ñ^{(ii)}(Δ), for i = 1, 2. By making use of the fact that

(5.8)    Σ_{n₁<n≤n₂} n P_φ{N(Δ) = n} = [n₁(Δ) + 1] P_φ{N(Δ) > n₁(Δ)} + Σ_{n₁<n<n₂} P_φ{N(Δ) > n} − [n₂(Δ)] P_φ{N(Δ) > n₂(Δ)},

and similarly for Ñ^{(ii)}(Δ), i = 1, 2, we obtain, on using (3.5), (5.3), (5.5), (5.6) and a few standard steps, that as Δ → 0,

(5.9)    Δ² Σ_{n₁<n≤n₂} n |P_φ{N(Δ) = n} − P_φ{Ñ^{(ii)}(Δ) = n}| → 0,  i = 1, 2,

where, by (5.3) and (5.5),

(5.10)    Δ²[n₁(Δ) + 1] P_φ{N(Δ) > n₁(Δ)} ≤ ε + o(1), as Δ → 0,

for every (fixed) ε > 0. Further, for n > n₂(Δ), we note that by (5.5) and (5.6),

(5.11)    P_φ{N(Δ) > n} ≤ P_φ{b σ_n² < nΔ[T_n − ½(μ₀ + μ₁)] < a σ_n²},

(5.12)    P{|R_n| > K₁ n⁻¹(log n)²} ≤ K₂ n^{−1−δ},  for n ≥ n₀.

[In the sequel, we choose Δ₀ (> 0) so small that for 0 < Δ ≤ Δ₀, n₁(Δ) ≥ n₀.] Since φ ≠ ½, rewriting the rhs of (5.11) as
(5.13)    P_φ{b σ(1 + η)/(Δ√n) − √n R_n/σ − √nΔ(φ − ½)/σ < √n Z̄_n/σ < a σ(1 + η)/(Δ√n) + √n|R_n|/σ − √nΔ(φ − ½)/σ},

we observe that (i) b σ(1 + η)/(Δ√n) → 0 and a σ(1 + η)/(Δ√n) → 0 as Δ → 0, since, for n > n₂(Δ), Δ√n ≥ Δ[n₂(Δ)]^{1/2} → ∞; (ii) by (5.12), √n R_n = O(n^{−1/2}(log n)²) = o(1) (as Δ → 0) with a probability ≥ 1 − K₂ n^{−1−δ}; (iii) √nΔ|φ − ½| ≥ [n/n₂(Δ)]^{1/2}[K(−log Δ)]^{1/2}|φ − ½| → ∞; and (iv) Z̄_n is an average over n i.i.d.r.v., where, by the hypothesis of the theorem, E|Z₁|^r < ∞ for some r > 4. Hence, by using the Markov inequality along with (i)-(iii), we claim that for Δ (> 0) small, for all n > n₂(Δ), P_φ{N(Δ) > n} = O(n^{−1−δ}), δ > 0. Consequently,

(5.14)    Δ² Σ_{n>n₂} P_φ{N(Δ) > n} → 0 as Δ → 0, i.e., Δ² Σ_{n>n₂} n P_φ{N(Δ) = n} → 0 as Δ → 0.

A similar treatment holds for both the stopping variables Ñ^{(ii)}(Δ), i = 1, 2. Hence, it suffices to show that

(5.15)    lim_{Δ→0} Δ² E_φ[Ñ^{(ii)}(Δ)] = ν(φ, σ²),  i = 1, 2, as η → 0.

Now, proceeding as in the proof of Theorem 4.1, it follows that for every η (> 0), arbitrarily small, the OC functions for both Ñ^{(ii)}(Δ), i = 1, 2, can be made close to p(φ), defined by (4.3), when Δ → 0. Also, by definition, Z̄_n involves an average over independent random variables for which E Z₁ = 0 and E Z₁² = σ², 0 < σ² < ∞. Consequently, (5.15) follows from the classical result of Wald (1947).

The above proof fails when φ = ½ (as then, in (5.13), both the bounds converge to 0 as Δ → 0). However, if we let {φ_r = ½ + (−1)^r ε/r, r ≥ 1}, where ε (> 0) is arbitrary, then first working with a given φ_r, using the above proof, and then applying the l'Hôpital rule, we arrive at our desired result. Q.E.D.
In passing, we may note that if μ is some identifiable parameter of the d.f. F (such as the location or scale parameter) and {T_n} correspond to the asymptotically best estimator of μ, by the results of Jung and Blom [viz., Chapter 4 of Sarhan and Greenberg (1962)], we note that σ² equals the Cramér-Rao bound, so that by Theorem 5.1 and the asymptotic equivalence of the SLRT and SPRT, we conclude that, in such a case, the proposed STBLE is asymptotically (as Δ → 0) as efficient as the SLRT considered by Bartlett-Cox. In general, for different {J(u), u ∈ I}, by Theorem 5.1, the ARE (asymptotic relative efficiency) of the different STBLE, as judged by their ASN, will be inversely proportional to the respective σ²; this is in agreement with the Pitman-ARE for the non-sequential case.
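The limit ν(φ, σ²) in (5.1)-(5.2) can be evaluated numerically from the OC limit p(φ) of (4.3), giving E_φ[N(Δ)] ≈ ν(φ, σ²)/Δ² for small Δ. The sketch below is our illustration, not part of the paper, and it replaces p′(½) in the φ = ½ case of (5.2) by a central difference quotient.

```python
import math

def p(phi, a, b):
    # OC limit (4.3), written in terms of a = log A, b = log B
    if abs(phi - 0.5) < 1e-12:
        return a / (a - b)
    num = math.exp((1 - 2 * phi) * a) - 1
    return num / (math.exp((1 - 2 * phi) * a) - math.exp((1 - 2 * phi) * b))

def asn(phi, sigma2, alpha=0.05, beta=0.05):
    # nu(phi, sigma^2) of (5.2); the phi = 1/2 case uses a numerical p'(1/2)
    a = math.log((1 - beta) / alpha)
    b = math.log(beta / (1 - alpha))
    if abs(phi - 0.5) < 1e-12:
        h = 1e-5
        dp = (p(0.5 + h, a, b) - p(0.5 - h, a, b)) / (2 * h)   # p'(1/2)
        return -(a - b) * dp * sigma2
    pa = p(phi, a, b)
    return (a * pa + b * (1 - pa)) * sigma2 / (0.5 - phi)

print(asn(0.0, 1.0), asn(0.5, 1.0), asn(1.0, 1.0))
```

The ASN is largest near φ = ½ (the hardest alternative) and is proportional to σ², in line with the efficiency remark above.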
6. Appendix. First, we consider the following.

Theorem A.1. Under the assumptions of Theorem 4.1, (2.3) holds.

Proof. Defining the Z_i, i ≥ 1, by (5.7) and letting Z̄_n = n⁻¹ Σ_{i=1}^{n} Z_i, n ≥ 1,
it follows from the main theorem of Ghosh (1972 a) that

(6.1)    T_n − μ = Z̄_n + R_n,

where, under the assumptions of Theorem 4.1,

(6.2)    R_n = O(n⁻¹(log n)²) a.s., as n → ∞.

Further, the Z_i, i ≥ 1, are i.i.d.r.v. with mean 0, variance σ² and a finite 4th moment, so that by Theorem 1.5 of Strassen (1967), we claim that as n → ∞,

(6.3)    n Z̄_n = σW(n) + O((n log log n)^{1/4}(log n)^{1/2}) a.s.

From (6.1)-(6.3), we obtain that

(6.4)    n(T_n − μ) = n Z̄_n + n R_n = σW(n) + O((n log log n)^{1/4}(log n)^{1/2}) + O((log n)²) = σW(n) + O([n log log n]^{1/4}(log n)^{1/2}) a.s. Q.E.D.

Theorem A.2. If (3.1) and (3.3) (with r = 2) hold, (2.6) also holds.

Proof. Let E* = {(x, y): −∞ < x < y < ∞}. Then, we observe that
V(X₁) = 2 ∫∫_{E*} F(x)[1 − F(y)] dx dy and s_n² = n⁻¹ Σ_{i=1}^{n} (X_i − X̄_n)² = 2 ∫∫_{E*} F_n(x)[1 − F_n(y)] dx dy, where F_n is defined by (2.4). Since s_n² → V(X₁) a.s., as n → ∞, from the above, we conclude that

(6.5)    ∫∫_{E*} F_n(x)[1 − F_n(y)] dx dy → ∫∫_{E*} F(x)[1 − F(y)] dx dy a.s., as n → ∞.
Now, by (2.2) and (2.5), we have

(6.6)    ½(σ_n² − σ²) = I_{n1} + I_{n2} + I_{n3},

where

(6.7)    I_{n1} = ∫∫_{E*} [J(F_n(x)) − J(F(x))] J(F_n(y)) F_n(x)[1 − F_n(y)] dx dy,

(6.8)    I_{n2} = ∫∫_{E*} J(F(x)) [J(F_n(y)) − J(F(y))] F_n(x)[1 − F_n(y)] dx dy,

(6.9)    I_{n3} = ∫∫_{E*} J(F(x)) J(F(y)) {F_n(x)[1 − F_n(y)] − F(x)[1 − F(y)]} dx dy.

Now, by the Glivenko-Cantelli Theorem, sup_x |F_n(x) − F(x)| → 0 a.s., as n → ∞, while, by (3.1), J(u) is uniformly continuous (and bounded) for u ∈ I, so that sup_x |J(F_n(x)) − J(F(x))| → 0 a.s., as n → ∞, and sup_x |J(F_n(x))|, sup_x |J(F(x))| ≤ K < ∞, ∀ n ≥ 1. Consequently, by (6.5), both I_{n1} and I_{n2} converge to 0 a.s., as n → ∞.
Next, for c (> 0), let X_i* = X_i or 0 according as |X_i| ≤ c or |X_i| > c, i ≥ 1, and let X̄_n* = n⁻¹ Σ_{i=1}^{n} X_i*, s_n*² = n⁻¹ Σ_{i=1}^{n} (X_i* − X̄_n*)². Now,

(6.10)    s_n*² → V(X₁*) a.s., as n → ∞,

and, for every ε > 0, c can be so chosen that |V(X₁) − V(X₁*)| < ε, so that |s_n² − s_n*²| < 2ε a.s., as n → ∞. As a result, writing E* = E_c* + (E* − E_c*), where E_c* = {(x, y) ∈ E*: |x| ≤ c, |y| ≤ c}, we have for I_{n3},

(6.11)    |∫∫_{E_c*} J(F(x)) J(F(y)) {F_n(x)[1 − F_n(y)] − F(x)[1 − F(y)]} dx dy| ≤ {sup_x |J(F(x))|}² (2c)² sup_{(x,y)∈E_c*} |F_n(x)[1 − F_n(y)] − F(x)[1 − F(y)]| → 0 a.s., as n → ∞,

and

(6.12)    |∫∫_{E*−E_c*} J(F(x)) J(F(y)) {F_n(x)[1 − F_n(y)] − F(x)[1 − F(y)]} dx dy| ≤ {sup_x |J(F(x))|}² [∫∫_{E*−E_c*} F_n(x)[1 − F_n(y)] dx dy + ∫∫_{E*−E_c*} F(x)[1 − F(y)] dx dy] ≤ {sup_x |J(F(x))|}² · 2ε a.s., as n → ∞.

Since J is bounded in I and ε is arbitrary, the rhs of (6.12) can be made arbitrarily small, as n → ∞. Hence, I_{n3} → 0 a.s., as n → ∞. Q.E.D.
Theorem A.3. Under the hypotheses of Theorem 5.1, (5.5) and (5.6) hold; for (5.6), one may even take r > 2 in (3.3).

Proof. We prove only (5.5); the treatment of (5.6) follows similarly by noting that, by (1.1) and (1.2),

(6.13)    T_n − μ = ∫_{−∞}^{∞} x [J(F_n(x)) − J(F(x))] dF_n(x) + ∫_{−∞}^{∞} x J(F(x)) d[F_n(x) − F(x)].

We need to point out only the modifications in the proof of Theorem A.2. It follows from Kiefer (1961) that for every λ > 0,

(6.14)    P{sup_x |F_n(x) − F(x)| ≥ λ(n⁻¹ log n)^{1/2}} ≤ 2 exp{−2λ² log n}.

Also, by (3.1) and (3.2), J′(u) is uniformly continuous and bounded for u ∈ I, so that by (6.14), we have for some K* (< ∞),

(6.15)    sup_x |J(F_n(x)) − J(F(x))| ≤ [sup_{0≤u≤1} |J′(u)|][sup_x |F_n(x) − F(x)|] ≤ K* n^{−1/2}(log n)^{1/2}, with probability ≥ 1 − 2n^{−2λ²}.

Further, E|X₁|^r < ∞ for some r > 4, so that for some δ > 0, for every ε > 0, there exists a K_ε (< ∞), such that

(6.16)    P{|s_n² − V(X₁)| > ε} ≤ K_ε n^{−1−δ}.

Thus, if in (6.15), we let 2λ² = 1 + δ, we obtain that both I_{n1} and I_{n2} can be made smaller than η/3, η > 0, with a probability ≥ 1 − O(n^{−1−δ}). For I_{n3}, we note that E|X₁|^r < ∞ ⇒ E|X₁*|^r < ∞, r > 4, so that (6.16) holds for s_n² and V(X₁) being replaced by s_n*² and V(X₁*), respectively. Thus, in (6.12), we attach a probability ≥ 1 − O(n^{−1−δ}), while in (6.11), (6.15) applies. Hence, the result follows. Q.E.D.
REFERENCES

[1] ANDERSON, T.W. (1960). A modification of the sequential probability ratio test to reduce the sample size. Ann. Math. Statist. 31, 165-197.

[2] COX, D.R. (1963). Large sample sequential tests of composite hypotheses. Sankhyā, Ser. A 25, 5-12.

[3] DAVID, H.A. (1970). Order Statistics. John Wiley, New York.

[4] GHOSH, M. (1972 a). On the representation of linear functions of order statistics. Sankhyā, Ser. A 34, 349-356.

[5] GHOSH, M. (1972 b). Asymptotic properties of linear functions of order statistics for m-dependent random variables. Calcutta Statist. Assoc. Bull. 21, 181-192.

[6] KIEFER, J. (1961). On large deviations of the empiric D.F. of vector chance variables and a law of the iterated logarithm. Pacific J. Math. 11, 649-660.

[7] MOORE, D.S. (1968). An elementary proof of asymptotic normality of linear functions of order statistics. Ann. Math. Statist. 39, 263-265.

[8] SARHAN, A.E., and GREENBERG, B.G. (1962). Contributions to Order Statistics. John Wiley, New York.

[9] SEN, P.K. (1973 a). Asymptotic sequential tests for regular functionals of distribution functions. Theor. Probab. Appl. 18, 235-249.

[10] SEN, P.K. (1973 b). An asymptotically efficient test for the bundle strength of filaments. J. Appl. Probab. 10, 586-596.

[11] SEN, P.K., and GHOSH, M. (1974). Sequential rank tests for location. Ann. Statist. 2, 540-552.

[12] STRASSEN, V. (1967). Almost sure behavior of sums of independent random variables and martingales. Proc. 5th Berkeley Symp. Math. Statist. Probab. 2, 315-343.

[13] WALD, A. (1947). Sequential Analysis. John Wiley, New York.