Jurečková, J. and Sen, P.K. (1979). "Invariance Principles for Some Stochastic Processes Relating to M-Estimators and Their Role in Sequential Statistical Inference."

INVARIANCE PRINCIPLES FOR SOME
STOCHASTIC PROCESSES RELATING TO M-ESTIMATORS
AND THEIR ROLE IN SEQUENTIAL STATISTICAL INFERENCE

by

Jana Jurečková
Charles University, Prague, Czechoslovakia

and

Pranab Kumar Sen¹
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1213
February 1979
ABSTRACT

Weak convergence of certain two-dimensional time-parameter stochastic processes related to M-estimators is studied here. These results are then incorporated in the study of the asymptotic properties of bounded length (sequential) confidence intervals as well as sequential tests (for regression) based on M-estimators.

AMS Subject Classification Nos: 60F05, 62G99, 62L99

Key Words & Phrases: Asymptotic linearity, bounded length (sequential) confidence interval, coverage probability, D[0,1] space, M-estimator, operating characteristic, sequential tests, stopping number, weak convergence, Wiener process.
¹Work of this author was partially supported by the National Heart, Lung and Blood Institute, Contract No. NIH-NHLBI-71-224S from the (U.S.) National Institutes of Health.
1. INTRODUCTION

Let $X_1, \ldots, X_n$ be independent random variables (rv) with continuous distribution functions (df) $F_1, \ldots, F_n$, respectively, all defined on the real line $R$. It is assumed that $F_i(x) = F(x - \Delta^0 c_i)$, $x \in R$, $i \ge 1$, where $F$ is unknown, $c_1, \ldots, c_n$ are given constants and $\Delta^0$ is an unknown parameter. An M-estimator $\hat\Delta_n$ of $\Delta^0$ [c.f. Huber (1973)] is a solution (for $\Delta$) of the equation

$\sum_{i=1}^{n} c_i \psi(X_i - \Delta c_i) = 0,$  (1.1)

where $\psi$ is some specified score function. Various asymptotic properties of $\hat\Delta_n$ have been studied by various workers; we may refer to Huber (1973) and Jurečková (1977) where other references are also cited.
In this context, one encounters a stochastic process $M_n = \{M_n(\Delta) = \sum_{i=1}^{n} c_i[\psi(X_i - \Delta d_i) - \psi(X_i)],\ \Delta \in R\}$ (where $d_1, \ldots, d_n$ are suitable constants), and the asymptotic linearity of $M_n(\Delta)$ in $\Delta$ plays a vital role in the asymptotic theory of $\hat\Delta_n$ [see Jurečková (1977)]. In the context of sequential confidence intervals for $\Delta^0$ as well as sequential tests for $\Delta^0$ based on M-estimators, we need to strengthen the asymptotic linearity results to random sample sizes, and, in this context, some invariance principles for certain two-dimensional time-parameter stochastic processes related to $\{M_n\}$ are found to be very useful. With these in mind, in Section 2, we formulate these invariance principles for $\{M_n\}$ and present their proofs in Section 3. Section 4 deals with the problem of bounded-length (sequential) confidence intervals for $\Delta^0$ based on M-estimators, and some asymptotic properties of these procedures are studied with the aid of the invariance principles in Section 2. The last section is devoted to the study of the asymptotic properties of some sequential tests for $\Delta^0$ based on M-estimators.
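The defining equation (1.1) can be solved numerically once a score function is chosen. The sketch below is an illustration only (the Huber score, the constants and the simulated data are assumptions, not part of the paper): since $\psi$ is nondecreasing and the $c_i$ chosen here are positive, $a \mapsto S_n(a) = \sum_i c_i\psi(X_i - ac_i)$ is nonincreasing, and bisection locates its sign change.

```python
import numpy as np

def huber_psi(x, k=1.345):
    # A bounded, nondecreasing, skew-symmetric score function (Huber's psi).
    return np.clip(x, -k, k)

def S(a, x, c, psi=huber_psi):
    # S_n(a) = sum_i c_i * psi(X_i - a * c_i); nonincreasing in a here.
    return np.sum(c * psi(x - a * c))

def m_estimate(x, c, lo=-100.0, hi=100.0, tol=1e-10):
    # Bisection for the sign change of a -> S_n(a): the M-estimator of (1.1).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid, x, c) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
n, delta0 = 2000, 1.5
c = rng.uniform(0.5, 1.5, size=n)               # known regression constants
x = delta0 * c + rng.standard_t(df=3, size=n)   # heavy-tailed errors
print(m_estimate(x, c))                         # close to the true slope 1.5
```

With a monotone $\psi$ the estimate is insensitive to the bracketing interval, provided the interval contains the crossing.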
2. SOME INVARIANCE PRINCIPLES RELATED TO M-ESTIMATORS

For technical reasons, in this section, we replace the $X_i$ by a triangular array $\{(X_{n1}, \ldots, X_{nn});\ n \ge 1\}$ of rv's and assume that the following conditions hold.

(A) For every $n(\ge 1)$, the $X_{ni}$, $i \ge 1$, are independent and identically distributed (i.i.d.) rv with a continuous df $F$, defined on $R$.

(B) $\psi: R \to R$ is nonconstant and absolutely continuous on any bounded interval in $R$. Let $\psi^{(1)}$ be the derivative of $\psi$ and assume that

(i) $\gamma_1(\psi, F) = \int_{-\infty}^{\infty} \psi^{(1)}(x)\,dF(x) \neq 0$,  (2.1)

(ii) $\int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x) < \infty$,  (2.2)

(iii) $0 < \sigma_1^2 = \int_{-\infty}^{\infty} [\psi^{(1)}(x)]^2\,dF(x) - \gamma_1^2(\psi, F) < \infty$,  (2.3)

(iv) $\lim_{t \to 0} \int_{-\infty}^{\infty} [\psi^{(1)}(x+t) - \psi^{(1)}(x)]^2\,dF(x) = 0$.  (2.4)

(C) Let $\{c_{ni},\ i \ge 0;\ n \ge 0\}$ and $\{d_{ni},\ i \ge 0;\ n \ge 0\}$ be two triangular arrays of real constants, such that if $\{k_n\}$ is any sequence of positive integers for which $k_n \uparrow \infty$ as $n \to \infty$, then

$\lim_{n\to\infty}\{\max_{i \le k_n} c_{ni}^2 / \sum_{j \le k_n} c_{nj}^2\} = 0$,  (2.5)

$\lim_{n\to\infty}\{\max_{i \le k_n} d_{ni}^2 / \sum_{j \le k_n} d_{nj}^2\} = 0$,  (2.6)

$\lim_{n\to\infty}\{\max_{i \le k_n} c_{ni}^2 d_{ni}^2 / \sum_{j \le k_n} c_{nj}^2 d_{nj}^2\} = 0$,  (2.7)

$A_n^2 = \sum_{j \le k_n} c_{nj}^2 d_{nj}^2$ and $\sum_{j \le k_n} d_{nj}^2 \le D^{*2} < \infty,\ \forall\, n \ge 1$.  (2.8)
Conventionally, we let $c_{n0} = d_{n0} = 0$, $\forall\, n \ge 0$, and let

$M_{nk}(\Delta) = \sum_{i=0}^{k} c_{ni}\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni})\},\quad k \ge 0,\ \Delta \in R.$  (2.9)

Let $K^* = \{(t, \Delta):\ 0 \le t \le k_1^*,\ 0 \le |\Delta| \le k_2^*\}\ (= [0, k_1^*] \times [-k_2^*, k_2^*])$ (where $0 < k_1^*, k_2^* < \infty$) be any compact set in $R^2$. We define $W_n = \{W_n(t, \Delta),\ (t, \Delta) \in K^*\}$ by letting

$W_n(t, \Delta) = \{M_{n\,n(t)}(\Delta) - EM_{n\,n(t)}(\Delta)\}/(\sigma_1 A_n),\quad (t, \Delta) \in K^*,$  (2.10)

where

$n(t) = \max\{k:\ A_{nk}^2 \le tA_n^2\},\ t \in K_1^* = [0, k_1^*];\qquad k_n = n(k_1^*).$  (2.12)

Then $W_n$ belongs to the space $D[K^*]$, for every $n \ge 1$. Finally, let $W = \{W(t, \Delta),\ (t, \Delta) \in K^*\}$ be defined by

$W(t, \Delta) = \Delta\,\xi(t),\quad (t, \Delta) \in K^*,$  (2.11)

where $\xi = \{\xi(t),\ t \in K_1^*\}$ is a standard Wiener process. Then, we have the following.

Theorem 2.1. Under (A), (B) and (C), $\{W_n\}$ converges weakly to $W$.

We are also interested in replacing, in (2.10), $EM_{n\,n(t)}(\Delta)$ by a more explicit function of $(t, \Delta)$. For this, we assume that the following holds.
(B′) In addition to (2.1) and (2.2), $\psi^{(1)}$ is absolutely continuous (a.e.); denote its derivative by $\psi^{(2)}$ and assume that

$\lim_{t \to 0} \int_{-\infty}^{\infty} |\psi^{(2)}(x+t) - \psi^{(2)}(x)|\,dF(x) = 0.$  (2.13)

(D) $F$ admits an absolutely continuous probability density function (pdf) $f$ with a finite Fisher information $I(f) = \int (f'/f)^2\,dF$, where $f'(x) = (d/dx)f(x)$. Let then

$\gamma_2(\psi, F) = \int \psi^{(2)}(x)\,dF(x) = \int \psi^{(1)}(x)\{-f'(x)/f(x)\}\,dF(x),$  (2.14)

so that, by (2.2), (D) and the Schwarz inequality, $\gamma_2^2(\psi, F) \le I(f)\int[\psi^{(1)}]^2\,dF < \infty$. Also, let

$h_n(t) = \sum_{i=0}^{n(t)} c_{ni} d_{ni}^2 / A_n,\quad t \in K_1^*.$  (2.15)

Note that by (2.8), (2.12) and the Schwarz inequality,

$h_n^2(t) \le A_n^{-2}\bigl(\sum_{i=0}^{n(t)} c_{ni}^2 d_{ni}^2\bigr)\bigl(\sum_{i=0}^{n(t)} d_{ni}^2\bigr) \le tD^{*2} < \infty,\quad \forall\, t \in K_1^*.$

Let us now define

$W_n^0 = \{W_n^0(t, \Delta) = \{M_{n\,n(t)}(\Delta) + \Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{ni}d_{ni}\}/(\sigma_1 A_n),\ (t, \Delta) \in K^*\}$

and

$\tilde W_n = \{\tilde W_n(t, \Delta) = W_n^0(t, \Delta) - \tfrac12\Delta^2\gamma_2(\psi, F)h_n(t)/\sigma_1,\ (t, \Delta) \in K^*\}.$  (2.16)

Then, we have the following.

Theorem 2.2. Under (A), (B′), (C) and (D), $\{\tilde W_n\}$ converges weakly to $W$, and if, in addition, $\lim_{n} h_n(t) = 0$, $\forall\, t \in K_1^*$, then $\{W_n^0\}$ converges weakly to $W$.
In some applications, we may encounter a process $W_n^* = \{W_n^*(t, \Delta),\ (t, \Delta) \in K^*\}$, where, defining $n(t)$ as in (2.12), for every $(t, \Delta) \in K^*$,

$W_n^*(t, \Delta) = (\sigma_1 A_n)^{-1}\{M_{n(t)\,n(t)}(\Delta) - EM_{n(t)\,n(t)}(\Delta)\}.$  (2.17)

The weak convergence of $\{W_n^*\}$ to some appropriate Gaussian function depends on the interrelationship between the elements of different rows of the triangular arrays $\{c_{ni}\}$ and $\{d_{ni}\}$. Suppose that

$\lim_{n\to\infty} A_n^{-2}\sum_{i=0}^{n(t \wedge s)} c_{n(t)i}d_{n(t)i}c_{n(s)i}d_{n(s)i} = g(s, t)$ exists for every $0 \le s, t \le k_1^*$ and is a continuous function of $s, t$.  (2.18)

Further, we assume that there exists a positive number $M$, such that

$\sum_{i=1}^{q}(c_{ki}d_{ki} - c_{qi}d_{qi})^2 + \sum_{i=q+1}^{k} c_{ki}^2 d_{ki}^2 \le M(A_k^2 - A_q^2),$  (2.19)

uniformly in $0 \le q < k \le k_n$, $n \ge 1$, where $A_k^2 = \sum_{i \le k} c_{ki}^2 d_{ki}^2$. It is also possible to replace (2.18)–(2.19) by alternative conditions. Then we have the following.

Theorem 2.3. Under (A), (B), (C) and (2.18)–(2.19), $\{W_n^*\}$ converges weakly to a Gaussian function $W^* = \{W^*(t, \Delta),\ (t, \Delta) \in K^*\}$, where $W^*$ has mean $0$ and

$EW^*(s, \Delta)W^*(t, \Delta') = \Delta\Delta'\,g(s, t),\quad \forall\,(s, \Delta), (t, \Delta') \in K^*.$  (2.20)

Under (A), (B′), (C), (D) and (2.18)–(2.19), in (2.17), we may also replace $EM_{n(t)\,n(t)}(\Delta)$ by

$-\Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{n(t)i}d_{n(t)i} + \tfrac12\Delta^2\gamma_2(\psi, F)\sum_{i=0}^{n(t)} c_{n(t)i}d_{n(t)i}^2.$

Finally, if $g(s, t) = s \wedge t$, $\forall\, s, t \in K_1^*$, then $W^* = W$.

In Sections 4 and 5, we shall encounter a single sequence $\{c_i,\ i \ge 1\}$ and will have $c_{ki} = c_i$, $d_{ki} = c_i/C_k$, where $C_k^2 = \sum_{i=1}^{k} c_i^2$, for $k \ge 1$. In such a case, (2.18) and (2.19) are not difficult to verify. In fact, here, $g(s, t) = s \wedge t$, $\forall\, s, t \in K_1^*$.
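The role of the linear term $\Delta\gamma_1(\psi, F)\sum_i c_{ni}d_{ni}$ in (2.16) can be seen in a small simulation. The sketch below is illustrative only, not from the paper: it assumes the Huber score, standard normal errors (so $\gamma_1(\psi, F) = P(|X| < k)$), and the row $c_{ni} = d_{ni} = n^{-1/2}$, for which $\sum_i c_{ni}d_{ni} = 1$.

```python
import numpy as np
from math import erf, sqrt

k = 1.345

def psi(x):
    return np.clip(x, -k, k)   # Huber score; psi'(x) = 1 for |x| < k, else 0

rng = np.random.default_rng(1)
n = 20000
c = d = np.ones(n) / sqrt(n)   # illustrative triangular-array row
x = rng.normal(size=n)         # F = standard normal

gamma1 = erf(k / sqrt(2.0))    # gamma_1(psi, F) = E psi'(X) = P(|X| < k)

def M(delta):
    # M_n(Delta) = sum_i c_i [psi(X_i - delta * d_i) - psi(X_i)], as in (2.9)
    return float(np.sum(c * (psi(x - delta * d) - psi(x))))

for delta in (0.5, 1.0, 2.0):
    print(delta, M(delta), -delta * gamma1 * np.sum(c * d))
```

For each $\Delta$ the two printed values nearly coincide; the stochastic remainder is exactly the quantity that the processes $W_n^0$ and $\tilde W_n$ control uniformly over $(t, \Delta)$.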
3. PROOFS OF THEOREMS 2.1, 2.2 AND 2.3

First, consider Theorem 2.1. Let us define $W_n^0 = \{W_n^0(t, \Delta),\ (t, \Delta) \in K^*\}$ by letting

$W_n^0(t, \Delta) = -\Delta\bigl(\sum_{i=0}^{n(t)} c_{ni}d_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\bigr)/(\sigma_1 A_n),\quad (t, \Delta) \in K^*,$  (3.1)

where $n(t)$ is defined by (2.12). We like to approximate $W_n$ by $W_n^0$. Note that for every $(t, \Delta) \in K^*$,

$E\{W_n^0(t, \Delta) - W_n(t, \Delta)\}^2 = \sum_{i=0}^{n(t)} c_{ni}^2\,\mathrm{Var}\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni}) + \Delta d_{ni}\psi^{(1)}(X_{ni})\}/(\sigma_1^2 A_n^2)$
$\le \sum^{*} c_{ni}^2 d_{ni}^2\Delta^2\, E\{[(\Delta d_{ni})^{-1}\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni})\} + \psi^{(1)}(X_{ni})]^2\}/(\sigma_1^2 A_n^2),$  (3.2)

where $\sum^{*}$ extends over all $i:\ i \le n(t)$ and $d_{ni} \neq 0$. Note that by (2.2) and (2.4), for every $h > 0$,

$h^{-2}\int_{-\infty}^{\infty}\{\psi(x+h) - \psi(x)\}^2\,dF(x) = \int_{-\infty}^{\infty}\Bigl[h^{-1}\int_0^h \psi^{(1)}(x+t)\,dt\Bigr]^2 dF(x)$
$\le h^{-1}\int_0^h\Bigl(\int_{-\infty}^{\infty}[\psi^{(1)}(x+t)]^2\,dF(x)\Bigr)dt \to \int_{-\infty}^{\infty}[\psi^{(1)}(x)]^2\,dF(x)\ (<\infty)$ as $h \downarrow 0$.

A similar case holds for $h < 0$. Hence, by Fatou's lemma,

$\lim_{h \to 0} h^{-2}\int_{-\infty}^{\infty}[\psi(x+h) - \psi(x)]^2\,dF(x) = \int_{-\infty}^{\infty}[\psi^{(1)}(x)]^2\,dF(x).$  (3.3)

Similarly,

$\lim_{h \to 0}\int_{-\infty}^{\infty} h^{-1}[\psi(x+h) - \psi(x)]\psi^{(1)}(x)\,dF(x) = \int_{-\infty}^{\infty}[\psi^{(1)}(x)]^2\,dF(x).$  (3.4)

From (2.6), (2.8), (3.2), (3.3) and (3.4), we obtain that

$\lim_{n\to\infty}\bigl[E\{W_n(t, \Delta) - W_n^0(t, \Delta)\}^2/t\Delta^2\bigr] = 0,\quad \forall\,(t, \Delta) \in K^*.$  (3.5)

By (3.5) and the Chebyshev inequality, for every (fixed) $m(\ge 1)$ and $(t_j, \Delta_j) \in K^*$, $1 \le j \le m$,

$\max_{1 \le j \le m}|W_n(t_j, \Delta_j) - W_n^0(t_j, \Delta_j)| \overset{P}{\to} 0$, as $n \to \infty$.  (3.6)

Also, by (3.1), for any $m(\ge 1)$, $t_1, \ldots, t_m$, $\Delta_1, \ldots, \Delta_m$ and $b = (b_1, \ldots, b_m)' \neq 0$,

$\sum_{j=1}^{m} b_j W_n^0(t_j, \Delta_j) = \sum_{i=0}^{k_n} e_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}/\sigma_1,$

where the $e_{ni}$ depend on $b$, $\Delta_1, \ldots, \Delta_m$, $\{c_{ni}\}$ and $\{d_{ni}\}$, and, by (2.5)–(2.8),

$\max_{0 \le i \le k_n} e_{ni}^2 \big/ \sum_{j=0}^{k_n} e_{nj}^2 \to 0$ as $n \to \infty$,  (3.7)

while the $\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}/\sigma_1$ are i.i.d. rv with $0$ mean and unit variance. Hence, by a central limit theorem in Hájek and Šidák (1967, p. 153), we conclude that $\sum_{j=1}^{m} b_j W_n^0(t_j, \Delta_j)$ is asymptotically normal. Further, by (2.7) and (3.1),

$EW_n^0(t_j, \Delta_j)W_n^0(t_\ell, \Delta_\ell) \to \Delta_j\Delta_\ell(t_j \wedge t_\ell) = EW(t_j, \Delta_j)W(t_\ell, \Delta_\ell),\quad j, \ell = 1, \ldots, m.$  (3.8)

Hence, for every $m(\ge 1)$ and $(t_j, \Delta_j) \in K^*$, $1 \le j \le m$, the joint distribution of $\{W_n^0(t_j, \Delta_j),\ 1 \le j \le m\}$ converges to the corresponding multinormal distribution of $\{W(t_j, \Delta_j),\ 1 \le j \le m\}$.  (3.9)
From (3.6) and (3.9), we conclude that the finite dimensional distributions (f.d.d.) of $\{W_n\}$ converge to those of $W$. Hence, to prove Theorem 2.1, it remains to show that $\{W_n\}$ is tight. For this, for a process $x$, we define the increment over a block $B = \{\underline t:\ \underline t_0 \le \underline t \le \underline t_1\}$ by

$x(B) = x(\underline t_1) - x(t_{11}, t_{02}) - x(t_{01}, t_{12}) + x(\underline t_0),$  (3.10)

where $\underline t_j = (t_{j1}, t_{j2})$, $j = 0, 1$. Also, let $B_\delta(\underline t_0)$ be the block $\{\underline t:\ \underline t_0 \le \underline t \le \underline t_0 + \delta\underline 1\}$ $(\delta > 0)$. Then, proceeding as in (3.2)–(3.5), we obtain that

$\lim_{n\to\infty} E\{W_n(B) - W_n^0(B)\}^2/\lambda(B) = 0,$  (3.11)

uniformly in $\delta(>0)$ and $(t, \Delta) \in K^*$,  (3.12)

where $\lambda(B)$ stands for the Lebesgue measure of the block $B$. Thus, from (3.11) and the results of Bickel and Wichura (1971), we conclude that $\{W_n - W_n^0\}$ is tight. Hence, it suffices to show that $\{W_n^0\}$ is tight. Toward this, we note that

$W_n^0(B_\delta(t, \Delta)) = \delta[\hat W_n(t+\delta) - \hat W_n(t)],$

where

$\hat W_n(t) = \bigl(\sum_{i=0}^{n(t)} c_{ni}d_{ni}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\bigr)/(\sigma_1 A_n),\quad t \in K_1^*.$  (3.13)

Hence, $E[\hat W_n(t) - \hat W_n(s)]^2 \le (t-s)$, so that

$E[W_n^0(B_\delta(t, \Delta))]^2 \le \delta^3 = [\lambda(B_\delta(t, \Delta))]^{3/2}$, for every $\delta > 0$ and $(t, \Delta) \in K^*$,

and hence, the tightness of $\{W_n^0\}$ follows from the results of Bickel and Wichura (1971). Q.E.D.
To prove Theorem 2.2, we note that for every $(t, \Delta) \in K^*$,

$A_n^{-1}\bigl|EM_{n\,n(t)}(\Delta) + \Delta\gamma_1(\psi, F)\sum_{i=0}^{n(t)} c_{ni}d_{ni} - \tfrac12\Delta^2\gamma_2(\psi, F)\sum_{i=0}^{n(t)} c_{ni}d_{ni}^2\bigr|$
$= A_n^{-1}\bigl|\sum_{i=0}^{n(t)} c_{ni}E\{\psi(X_{ni} - \Delta d_{ni}) - \psi(X_{ni}) + \Delta d_{ni}\psi^{(1)}(X_{ni}) - \tfrac12\Delta^2 d_{ni}^2\psi^{(2)}(X_{ni})\}\bigr|$
$\le \tfrac12 k_2^{*2} D^{*} t^{1/2} \sup_{h: |h| \le k_2^* d_n^*}\int_{-\infty}^{\infty}|\psi^{(2)}(x+h) - \psi^{(2)}(x)|\,dF(x),$  (3.14)

where $d_n^* = \max_{1 \le i \le k_n}|d_{ni}| \to 0$ as $n \to \infty$. Hence, by (2.6), (2.13) and (3.14), $\sup\{|\tilde W_n(t, \Delta) - W_n(t, \Delta)|:\ (t, \Delta) \in K^*\} \to 0$ as $n \to \infty$, so that Theorem 2.2 follows from Theorem 2.1.

For Theorem 2.3, in (3.1), we take, for every $(t, \Delta) \in K^*$,

$W_n^{*0}(t, \Delta) = -\Delta\bigl(\sum_{i=0}^{n(t)} c_{n(t)i}d_{n(t)i}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\bigr)/(\sigma_1 A_n).$  (3.15)

Then, as in (3.2)–(3.5), $\lim_{n\to\infty} E\{[W_n^*(t, \Delta) - W_n^{*0}(t, \Delta)]^2\} = 0$, $\forall\,(t, \Delta) \in K^*$, and hence, as in (3.6),

$\max_{1 \le j \le m}|W_n^*(t_j, \Delta_j) - W_n^{*0}(t_j, \Delta_j)| \overset{P}{\to} 0$ as $n \to \infty$.  (3.16)

The convergence of f.d.d.'s of $\{W_n^{*0}\}$ to those of $W^*$ follows [under (2.18)] as in (3.7)–(3.9). Further, in (3.11), we may replace $W_n$ and $W_n^0$ by $W_n^*$ and $W_n^{*0}$, respectively; (2.19) insures the same inequality. Finally, by (3.10) and (3.15), for every $(t, \Delta) \in K^*$,

$W_n^{*0}(B_\delta(t, \Delta)) = \delta[\hat W_n^*(t+\delta) - \hat W_n^*(t)],$

where

$\hat W_n^*(t) = \bigl(\sum_{i=0}^{n(t)} c_{n(t)i}d_{n(t)i}\{\psi^{(1)}(X_{ni}) - \gamma_1(\psi, F)\}\bigr)/(\sigma_1 A_n).$  (3.17)

Note that

$E[\hat W_n^*(t) - \hat W_n^*(s)]^2 = A_n^{-2}\bigl\{\sum_{i=0}^{n(t)} c_{n(t)i}^2 d_{n(t)i}^2 + \sum_{i=0}^{n(s)} c_{n(s)i}^2 d_{n(s)i}^2 - 2\sum_{i=0}^{n(t\wedge s)} c_{n(t)i}d_{n(t)i}c_{n(s)i}d_{n(s)i}\bigr\},$

which, by (2.19), is bounded from above by $M(t-s)$, $\forall\, t \ge s$. Thus, for every $(t, \Delta) \in K^*$ and $\delta > 0$, $E[W_n^{*0}(B_\delta(t, \Delta))]^2 \le M\delta^3 = M[\lambda(B_\delta(t, \Delta))]^{3/2}$, and this insures the tightness of $\{W_n^{*0}\}$. Q.E.D.
"
SEQUENTIAL
CONFIDENCE INTERVALS FOR 60 BASED ON M~ESTIMATORS
,
Let us consider the simple regression model:
where the
X~
l.
Xi
=6oc.1
0
+ X. ,
1
i
~
1,
are i.d,rv .with a continuous and symmetric df F,
defined on Rl , the
c.
are known regression constants and we want to
1
provide a bounded-length confidence interval for the unknown parameter
0
6 ,
Our procedure rests on the M-estimators [viz, (1.1)].
Since
F
is not specified, no fixed-sample size procedure exists and, therefore,
we take recourse to sequential procedures.
Let us define C~ = r~=lc~
and assume that
lim n-lC2 = C2 exists
n-+co
n
lim{ ~x
n-+co ls1sn
n
(4.2)
.,
as
= l.(~(l) + 6(2)) .
~(l)
n
2:n
n
= sup{a:
S (a)
n
We assume that
1
R
(4,1)
c~/C~} = 0
Define an M-estimator of 60
t.
(0 < C < 00) ,
(=> Sn (a)
'
=i~l ci1Ji(x - ac.)
1
=
>
o},
6(2)
n
=inf{a:
S (a) < O},
n
(4.3)
1Ji is non-constant, nondecreasing and skew-symmetric on
l
is "'" in a ER , and hence, 6n exists), and, further,
we assume that
0<0
2
0=
oo
2
1Ji (x)dF(x) <00 •
f
_00
v
'" 0977)], as
Then [c,f. JureckQva
(4.4)
n -+00,
(4.5)
11
where, defining $\gamma_1(\psi, F)$ by (2.1),

$\nu(\psi, F) = \sigma_0/\gamma_1(\psi, F).$  (4.6)

Thus, if $\nu(\psi, F)$ is specified, then for a given $d(>0)$, one can define a desired sample size $n_d$ by letting

$n_d = \min\{n:\ dC_n \ge \tau_{\alpha/2}\,\nu(\psi, F)\}$  (4.7)

(where $\tau_{\alpha/2}$ is the upper $50\alpha\%$ point of the standard normal df) and take

$I_{n_d}^* = [\hat\Delta_{n_d} - d,\ \hat\Delta_{n_d} + d],$  (4.8)

so that, by (4.5), (4.7) and (4.8), we have

$\lim_{d \downarrow 0} P\{\Delta^0 \in I_{n_d}^*\} = 1 - \alpha\quad (0 < \alpha < 1).$  (4.9)

Thus, for small $d$, $I_{n_d}^*$ provides a bounded length confidence interval for $\Delta^0$ with asymptotic coverage probability $1-\alpha$. In our case, neither $F$ nor $\nu(\psi, F)$ is specified, and hence, the procedure in (4.7)–(4.9) is not usable. For this reason, we take recourse to the Chow-Robbins (1965) type sequential procedure. [See Gleser (1965) for the sequential least squares procedure and Ghosh and Sen (1972) for the sequential rank procedure.] Let us define

$s_n^2 = n^{-1}\sum_{i=1}^{n}\psi^2(X_i - \hat\Delta_n c_i) - \{n^{-1}\sum_{i=1}^{n}\psi(X_i - \hat\Delta_n c_i)\}^2$  (4.10)

and

$\hat\Delta_{L,n} = \sup\{a:\ S_n(a) > \tau_{\alpha/2}C_n s_n\},\qquad \hat\Delta_{U,n} = \inf\{a:\ S_n(a) < -\tau_{\alpha/2}C_n s_n\}.$  (4.11)

Also, we assume in addition that

$\lim_{h \to 0} E\{\sup_{t: |t| \le h}[\psi(X_i^0 - t) - \psi(X_i^0)]^2\} = 0.$  (4.12)
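The endpoints in (4.11) are again crossing points of the monotone map $a \mapsto S_n(a)$, now at the levels $\pm\tau_{\alpha/2}C_n s_n$, so they can be computed by the same bisection as the point estimate. The sketch below is an assumption-laden illustration (Huber score, Laplace errors, $\tau_{\alpha/2} = 1.96$), not code from the paper.

```python
import numpy as np

def psi(x, k=1.345):
    return np.clip(x, -k, k)

def S(a, x, c):
    return np.sum(c * psi(x - a * c))

def crossing(x, c, level, lo=-100.0, hi=100.0, tol=1e-10):
    # Point where the nonincreasing map a -> S_n(a) crosses `level`.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid, x, c) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def confidence_endpoints(x, c, tau):
    # Delta_L solves S_n(a) = +tau*C_n*s_n, Delta_U solves S_n(a) = -tau*C_n*s_n,
    # with s_n^2 the residual-score variance of (4.10).
    Cn = np.sqrt(np.sum(c ** 2))
    r = psi(x - crossing(x, c, 0.0) * c)
    s = np.sqrt(np.mean(r ** 2) - np.mean(r) ** 2)
    return crossing(x, c, tau * Cn * s), crossing(x, c, -tau * Cn * s)

rng = np.random.default_rng(2)
n, delta0, tau = 500, 0.8, 1.96     # tau: upper alpha/2 standard normal point
c = rng.uniform(0.5, 1.5, size=n)
x = delta0 * c + rng.laplace(size=n)
lo, hi = confidence_endpoints(x, c, tau)
print(lo, hi)
```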
Then [c.f. Jurečková (1977)], it follows that, as $n \to \infty$,

$C_n(\hat\Delta_{U,n} - \hat\Delta_{L,n}) \overset{P}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F).$  (4.13)

[Actually, by (4.4) and the Khintchine law of large numbers, as $n \to \infty$,

$n^{-1}\sum_{i=1}^{n}\psi^r(X_i^0) \to \int_{-\infty}^{\infty}\psi^r(x)\,dF(x)$ a.s., for $r = 1, 2.$  (4.14)

Let $w_n^* = \max\{|(\hat\Delta_n - \Delta^0)c_i|:\ 1 \le i \le n\}$ $(\le C_n|\hat\Delta_n - \Delta^0|\max_{1 \le i \le n}|c_{ni}|$, where $c_{ni} = c_i/C_n$, $1 \le i \le n)$. Then, by (4.2) and (4.5), $w_n^* \overset{P}{\to} 0$ as $n \to \infty$. On the other hand, $[w_n^* < \varepsilon]$ $(\varepsilon > 0)$ insures that

$\bigl|n^{-1}\sum_{i=1}^{n}\psi^r(X_i - \hat\Delta_n c_i) - n^{-1}\sum_{i=1}^{n}\psi^r(X_i^0)\bigr| \le n^{-1}\sum_{i=1}^{n} w_{ri}(\varepsilon),\quad r = 1, 2,$  (4.15)

where, for $\varepsilon > 0$, the

$w_{ri}(\varepsilon) = \sup_{t: |t| \le \varepsilon}|\psi^r(X_i^0 - t) - \psi^r(X_i^0)|,\quad 1 \le i \le n,\ (r = 1, 2)$  (4.16)

are i.i.d. rv. By (4.12) and (4.16), along with the Markov inequality, for $\varepsilon > 0$ arbitrarily small, the right hand side of (4.15) can be made arbitrarily small, in probability. Hence, by (4.14) and (4.15),

$s_n^2 \overset{P}{\to} \sigma_0^2$, as $n \to \infty$,  (4.17)

while the rest of the proof of (4.13) follows as in Jurečková (1977).]

With (4.7)–(4.9) and (4.13) in mind, we consider a sequential procedure where a stopping variable $N_d$ is defined by

$N_d = \min\{n \ge n_0:\ \hat\Delta_{U,n} - \hat\Delta_{L,n} \le 2d\},$  (4.18)

where $n_0(\ge 2)$ is an initial sample size, and we propose

$I_{N_d} = [\hat\Delta_{L,N_d},\ \hat\Delta_{U,N_d}]$  (4.19)

as the desired confidence interval for $\Delta^0$. Note that, by (4.18), $I_{N_d}$ has width $\le 2d$, while it remains to show that $P\{\Delta^0 \in I_{N_d}\} \to 1 - \alpha$ as $d \to 0$. Our interest centers around the asymptotic properties of $N_d$ and $I_{N_d}$.
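The stopping rule (4.18) only asks, at each $n$, whether the current interval is short enough. A minimal sketch of one-at-a-time sampling follows (same illustrative Huber-score helpers as before, repeated so the block is self-contained; the data-generating choices are assumptions).

```python
import numpy as np

def psi(x, k=1.345):
    return np.clip(x, -k, k)

def S(a, x, c):
    return np.sum(c * psi(x - a * c))

def crossing(x, c, level, lo=-50.0, hi=50.0, tol=1e-8):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid, x, c) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sequential_interval(sample, coefs, d, tau=1.96, n0=20):
    # Chow-Robbins-type rule (4.18): take observations one at a time and
    # stop at the first n >= n0 with Delta_U,n - Delta_L,n <= 2d.
    for n in range(n0, len(sample) + 1):
        x, c = sample[:n], coefs[:n]
        Cn = np.sqrt(np.sum(c ** 2))
        r = psi(x - crossing(x, c, 0.0) * c)
        s = np.sqrt(np.mean(r ** 2) - np.mean(r) ** 2)
        lower = crossing(x, c, tau * Cn * s)
        upper = crossing(x, c, -tau * Cn * s)
        if upper - lower <= 2 * d:
            return n, (lower, upper)    # N_d and I_{N_d} of (4.19)
    return len(sample), (lower, upper)  # budget exhausted (not part of the rule)

rng = np.random.default_rng(3)
delta0, d = 0.5, 0.25
coefs = rng.uniform(0.5, 1.5, size=4000)
sample = delta0 * coefs + rng.normal(size=4000)
Nd, interval = sequential_interval(sample, coefs, d)
print(Nd, interval)
```

Since the rule stops the first time $\hat\Delta_{U,n} - \hat\Delta_{L,n} \le 2d$, the reported interval automatically has width at most $2d$.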
Theorem 4.1. Under (4.1), (4.2), (4.4) and (4.12), as $d \to 0$,

$N_d/n_d \to 1$, in probability,  (4.20)

$d^2 N_d \overset{P}{\to} [\tau_{\alpha/2}\,\nu(\psi, F)/C]^2 = \lim_{d\to 0}(d^2 n_d)\quad (<\infty),$  (4.21)

$\lim_{d\to 0} P\{\Delta^0 \in I_{N_d}\} = 1 - \alpha.$  (4.22)

Remark: In the Chow-Robbins (1965) procedure, (4.20)–(4.21) have been established up to an a.s. convergence as well as convergence in the first mean; parallel results for rank estimates are due to Ghosh and Sen (1972). But, the latter authors needed considerably stringent regularity conditions on the score functions for such stronger results. Here also, under stronger regularity conditions on the $c_i$ and $\psi$, such a.s. results can be established. However, we do not intend to pursue these results.

Proof of Theorem 4.1. For every $\varepsilon > 0$ and $d > 0$, let $n_d^\varepsilon = \min\{n:\ dC_n \ge (1+\varepsilon)^{1/2}\tau_{\alpha/2}\nu(\psi, F)\}$. Then, by (4.18),

$P\{N_d/n_d > 1+\varepsilon\} = P\{N_d > n_d^\varepsilon\} = P\{\hat\Delta_{U,n} - \hat\Delta_{L,n} > 2d,\ \forall\, n_0 \le n \le n_d^\varepsilon\} \le P\{\hat\Delta_{U,n_d^\varepsilon} - \hat\Delta_{L,n_d^\varepsilon} > 2d\}.$  (4.23)

By (4.1), (4.7) and the definition of $n_d^\varepsilon$, $2dC_{n_d^\varepsilon} \to 2(1+\varepsilon)^{1/2}\tau_{\alpha/2}\nu(\psi, F)$ as $d \to 0$, so that, by (4.13) and (4.23), $P\{N_d/n_d > 1+\varepsilon\} \to 0$ as $d \to 0$. Similarly, for every $\varepsilon > 0$, $P\{N_d/n_d < 1-\varepsilon\} \to 0$ as $d \to 0$, and hence, (4.20) holds. Then, (4.21) follows from (4.20), (4.1) and (4.7).

To prove (4.22), we define

$Z_n = \{Z_n(t) = S_{n(t)}(\Delta^0)/(\sigma_0 C_n),\ t \in K_1^*\},$

where $n(t) = \max\{k:\ C_k^2 \le tC_n^2\}$, $t \in K_1^*$.
Since the $X_i^0 = X_i - \Delta^0 c_i$, $i \ge 1$, are i.i.d. rv, by the same technique as in the proof of Theorem 2.3, it follows by some routine steps that, under (4.1), (4.2), (4.4) and (4.12), $\{Z_n\}$ converges weakly to a standard Wiener process $\xi = \{\xi(t),\ t \in K_1^*\}$. This insures that

$\sup_{t \in K_1^*}|Z_n(t)| = O_p(1)$ and $\lim_{\delta \downarrow 0}\ \sup_{0 < s \le t \le s+\delta \le 1}|Z_n(t) - Z_n(s)| = 0$, in prob.  (4.24)

Also, we now appeal to Theorem 2.3, where we take $c_{ki} = c_i$ and $d_{ki} = c_i/C_k$, for $1 \le i \le k$, $k \le k_n$. Then

$A_n^2 = C_n^{-2}\sum_{i=1}^{n} c_i^4,$  (4.25)

so that, by (4.1)–(4.2), the conditions of Section 2 hold, while (2.18) reduces to

$g(s, t) = s \wedge t,\quad \forall\, s, t \in K_1^*.$  (4.26)

Hence, from Theorem 2.3, (4.3) and (4.24), we conclude that, for every $t_0 > 0$ $(0 < t_0 < k_1^*)$,

$\sup_{t_0 \le t \le k_1^*} C_n|\hat\Delta_{n(t)} - \Delta^0| = O_p(1),$  (4.27)

$\sup_{t_0 \le t \le k_1^*}\bigl|C_n^{-1}C_{n(t)}^2(\hat\Delta_{n(t)} - \Delta^0) - Z_n(t)\,\nu(\psi, F)\bigr| \overset{P}{\to} 0,$  (4.28)

$\sup_{t_0 \le s < t \le s+\delta \le k_1^*}\bigl|C_n^{-1}\{C_{n(t)}^2(\hat\Delta_{n(t)} - \Delta^0) - C_{n(s)}^2(\hat\Delta_{n(s)} - \Delta^0)\}\bigr| \overset{P}{\to} 0$, as $\delta \downarrow 0$.  (4.29)

Similarly, by Theorem 2.3, (4.11), (4.12), (4.24), (4.26), (4.27) and (4.28),

$\sup_{t_0 \le t \le k_1^*}\bigl|C_n^{-1}C_{n(t)}\{C_{n(t)}(\hat\Delta_{L,n(t)} - \hat\Delta_{n(t)}) + \tau_{\alpha/2}\,\nu(\psi, F)\}\bigr| \overset{P}{\to} 0,$  (4.30)

$\sup_{t_0 \le t \le k_1^*}\bigl|C_n^{-1}C_{n(t)}\{C_{n(t)}(\hat\Delta_{U,n(t)} - \hat\Delta_{n(t)}) - \tau_{\alpha/2}\,\nu(\psi, F)\}\bigr| \overset{P}{\to} 0,$  (4.31)

$\sup_{t_0 \le s < t \le s+\delta \le k_1^*}\bigl|C_n^{-1}\{C_{n(t)}^2(\hat\Delta_{L,n(t)} - \Delta^0) - C_{n(s)}^2(\hat\Delta_{L,n(s)} - \Delta^0)\}\bigr| \overset{P}{\to} 0$ (as $\delta \downarrow 0$),  (4.32)
and a similar result holds for the $\hat\Delta_{U,n(t)}$.  (4.33)

By virtue of (4.20), (4.32) and (4.33), as $d \to 0$,

$C_{n_d}^{-1}\bigl|C_{N_d}^2(\hat\Delta_{U,N_d} - \hat\Delta_{L,N_d}) - C_{n_d}^2(\hat\Delta_{U,n_d} - \hat\Delta_{L,n_d})\bigr| \overset{P}{\to} 0,$  (4.34)

where, by (4.30)–(4.31), as $d \to 0$,

$C_{n_d}(\hat\Delta_{U,n_d} - \hat\Delta_{L,n_d}) \overset{P}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F).$  (4.35)

Also, from (4.20), (4.28) and (4.29), as $d \to 0$,

$C_{N_d}(\hat\Delta_{N_d} - \Delta^0)/\nu(\psi, F) - Z_{n_d}(1) \overset{P}{\to} 0,$  (4.36)

where $Z_{n_d}(1)$ is asymptotically (as $d \to 0$) $N(0, 1)$. Finally, by (4.32)–(4.33) and (4.20), as $d \to 0$,

$C_{n_d}^{-1}\bigl|C_{N_d}^2(\hat\Delta_{U,N_d} - \hat\Delta_{N_d}) - C_{n_d}^2(\hat\Delta_{U,n_d} - \hat\Delta_{n_d})\bigr| \overset{P}{\to} 0.$  (4.37)

Thus, $\hat V_{N_d} = C_{N_d}(\hat\Delta_{U,N_d} - \hat\Delta_{L,N_d})/(2\tau_{\alpha/2}) \overset{P}{\to} \nu(\psi, F)$ as $d \to 0$, and hence, by (4.36) and (4.37),

$\lim_{d\to 0} P\{\Delta^0 \in I_{N_d}\} = 1 - \alpha.$  (4.38)

This completes the proof of the theorem.

Note that, letting $\hat V_n(t) = C_{n(t)}(\hat\Delta_{U,n(t)} - \hat\Delta_{L,n(t)})/(2\tau_{\alpha/2})$, by (4.13) and (4.30)–(4.31), for every $t_0 > 0$, as $n \to \infty$,

$\sup_{t_0 \le t \le k_1^*}\bigl|C_n^{-1}C_{n(t)}\{\hat V_n(t) - \nu(\psi, F)\}\bigr| \overset{P}{\to} 0.$  (4.39)

We now appeal to Theorem 2.3 to present an invariance principle relating to estimators of $\gamma_1(\psi, F)$. Note that, by (4.6), (4.13) and (4.17), we have

$B_n = s_n/\hat V_n = 2\tau_{\alpha/2}\,s_n/\{C_n(\hat\Delta_{U,n} - \hat\Delta_{L,n})\} \overset{P}{\to} \gamma_1(\psi, F)$, as $n \to \infty$.  (4.40)

We intend to study the asymptotic behavior of $\{B_{n(t)} - \gamma_1(\psi, F),\ t \in K_1^*\}$.
For this, we note that, by (4.14), (4.15), (4.16) and (4.27), we may improve (4.17) to the following: for every $t_0 > 0$, as $n \to \infty$,

$\sup_{t_0 \le t \le k_1^*}\bigl|s_{n(t)}^2 - \sigma_0^2\bigr| \overset{P}{\to} 0,$  (4.41)

where $n(t)$ is defined by (2.12). [Actually, in (4.15)–(4.16), we may replace $w_n^*$ by $\sup\{w_{n(t)}^*:\ t_0 \le t \le k_1^*\}$ as $n \to \infty$.] From (4.39), (4.40) and (4.41), we obtain that, as $n \to \infty$,

$\sup_{t_0 \le t \le k_1^*}\bigl|C_n^{-1}C_{n(t)}\{B_{n(t)} - \gamma_1(\psi, F)\}\bigr| \overset{P}{\to} 0,\quad \forall\, t_0 > 0.$  (4.42)

This, in turn, insures [by (4.20)] that

$B_{N_d} - \gamma_1(\psi, F) \overset{P}{\to} 0$, as $d \to 0$.  (4.43)

To obtain results deeper than (4.41) and (4.43), we note that, by virtue of Theorems 2.1 and 2.3 (where, as in (4.24)–(4.37), we take $c_{ki} = c_i$ and $d_{ki} = c_i/C_k$, $1 \le i \le k$, $k \ge 1$) and (4.30)–(4.31), for every $t_0 > 0$,

$\sup_{t_0 \le t \le k_1^*}\bigl|W_n^*(t, b_n(t)) - W_n^*(t, a_n(t) + 2c)\bigr| \overset{P}{\to} 0,$  (4.44)

where

$a_n(t) = C_{n(t)}(\hat\Delta_{L,n(t)} - \Delta^0),\quad b_n(t) = C_{n(t)}(\hat\Delta_{U,n(t)} - \Delta^0)$ and $c = \tau_{\alpha/2}\,\nu(\psi, F)$.

Also, by Theorem 2.3, as $n \to \infty$,

$\{W_n^*(t, a_n(t)) - W_n^*(t, a_n(t) + 2c),\ t_0 \le t \le k_1^*\} \overset{\mathcal D}{\to} 2c\{\xi(t),\ t_0 \le t \le k_1^*\}.$  (4.45)

From (4.44) and (4.45), we have [by (4.11)]

$(\sigma_1 A_n)^{-1}\{[2\tau_{\alpha/2}C_{n(t)}s_{n(t)} - \gamma_1(\psi, F)C_{n(t)}^2(\hat\Delta_{U,n(t)} - \hat\Delta_{L,n(t)})],\ t_0 \le t \le k_1^*\}$
$\overset{\mathcal D}{\to} 2\tau_{\alpha/2}\,\nu(\psi, F)\{\xi(t),\ t_0 \le t \le k_1^*\},$  (4.46)

where we assume the regularity conditions of Theorem 2.2 and, further, that, defining $h_n$ by (2.15),

$\lim_{n\to\infty} h_n(t) = 0,\quad \forall\, t_0 \le t \le k_1^*.$  (4.47)
By using (4.39), (4.40) and (4.46), we have

$\{[C_{n(t)}\hat V_n(t)/(\nu(\psi, F)\sigma_1 A_n)]\{B_{n(t)} - \gamma_1(\psi, F)\},\ t_0 \le t \le k_1^*\} \overset{\mathcal D}{\to} \{\xi(t),\ t_0 \le t \le k_1^*\}.$  (4.48)

Note that, for this choice of the $c_{ki}$ and $d_{ki}$,

$A_n^2 = C_n^{-2}\sum_{i=1}^{n} c_i^4,$  (4.49)

so that, from (4.46) and (4.48), we obtain that, as $n \to \infty$,

$\bigl\{C_n\bigl[\sum_{i=1}^{n} c_i^4\bigr]^{-1/2} C_{n(t)}\{B_{n(t)} - \gamma_1(\psi, F)\},\ t_0 \le t \le k_1^*\bigr\} \overset{\mathcal D}{\to} \{\sigma_1\xi(t),\ t_0 \le t \le k_1^*\},$  (4.50)

for every $0 < t_0 < k_1^* < \infty$. From (4.21) and (4.50), we conclude that, as $d \to 0$,

$C_{n_d}^2\bigl[\sum_{i=1}^{n_d} c_i^4\bigr]^{-1/2}\{B_{N_d} - \gamma_1(\psi, F)\}/\sigma_1 \overset{\mathcal D}{\to} N(0, 1).$  (4.51)

Let us now define

$s_n^{02} = n^{-1}\sum_{i=1}^{n}\psi^2(X_i - \Delta^0 c_i) - \bigl(n^{-1}\sum_{i=1}^{n}\psi(X_i - \Delta^0 c_i)\bigr)^2,\quad n \ge 1.$  (4.52)

By (4.1) and assuming that $\int_{-\infty}^{\infty}\psi^4(x)\,dF(x) < \infty$, we may repeat the proof of Theorem 2.1 and show that, for every $t_0 > 0$,

$\sup_{t_0 \le t \le k_1^*} C_{n(t)}\bigl|s_{n(t)}^2 - s_{n(t)}^{02}\bigr| \overset{P}{\to} 0.$  (4.53)

Also, from (4.46), as $n \to \infty$,

$\{(\gamma_1(\psi, F)/(\sigma_1 A_n))\{C_{n(t)}[s_{n(t)}/\sigma_0 - \hat V_n(t)/\nu(\psi, F)]\},\ t_0 \le t \le k_1^*\} \overset{\mathcal D}{\to} \{\xi(t),\ t_0 \le t \le k_1^*\}.$  (4.54)

By (4.1), (4.53) and (4.54),

$\{(\gamma_1(\psi, F)/(\sigma_1 A_n))\{C_{n(t)}[s_{n(t)}^{0}/\sigma_0 - \hat V_n(t)/\nu(\psi, F)]\},\ t_0 \le t \le k_1^*\} \overset{\mathcal D}{\to} \{\xi(t),\ t_0 \le t \le k_1^*\}.$  (4.55)

Now, $\{n(n-1)^{-1}s_n^{02},\ n \ge 2\}$ is a sequence of U-statistics of degree 2, and the weak convergence of partial sequences to Wiener processes follows from Miller and Sen (1972). If we let

$\sigma_2^2 = \int_{-\infty}^{\infty}\psi^4(x)\,dF(x) - \bigl(\int_{-\infty}^{\infty}\psi^2(x)\,dF(x)\bigr)^2,$  (4.56)

then, using the decomposition (2.18) and (3.1)–(3.5) of Miller and Sen (1972) along with the proof of our Theorem 2.1, it follows that (when (4.1) holds)

$\bigl\{\bigl((\sigma_0^2/(\sigma_2 C C_n))\,C_{n(t)}^2[s_{n(t)}^{02}/\sigma_0^2 - 1],\ Z_n(t)\bigr),\ t \in K_1^*\bigr\}$

converges weakly to $\{(\xi_1(t), \xi_2(t)),\ t \in K_1^*\}$, where $Z_n$ is defined after (4.23), and $\xi_1$ and $\xi_2$ are independent copies of a standard Wiener process. Thus, if we assume that

$\lim_{n\to\infty} n\sum_{i=1}^{n} c_i^4/C_n^4 = C_0^2\quad (<\infty),$  (4.57)

then, from (4.55) and the above discussion, it follows that

$\bigl\{(\gamma_1(\psi, F)/(\sigma_1 C_0))\,n^{1/2}C_n^{-1}C_{n(t)}\{\hat V_n(t)/\nu(\psi, F) - 1\},\ t_0 \le t \le k_1^*\bigr\}$
$\overset{\mathcal D}{\to} \bigl\{\xi_1(t) + (\gamma_1(\psi, F)\sigma_2/(2\sigma_0^2\sigma_1 C_0))\,t^{-1/2}\xi_2(t),\ t_0 \le t \le k_1^*\bigr\},$  (4.58)

for every $0 < t_0 < k_1^* < \infty$. By (4.20) and (4.58),

$n_d^{1/2}\,[\hat V_{N_d}/\nu(\psi, F) - 1] \overset{\mathcal D}{\to} N(0, a^{*2}),$  (4.59)

where

$a^{*2} = (\sigma_1^2 C_0^2/\gamma_1^2(\psi, F))\bigl[1 + \gamma_1^2(\psi, F)\sigma_2^2/(4\sigma_0^4\sigma_1^2 C_0^2)\bigr] = \sigma_1^2 C_0^2/\gamma_1^2(\psi, F) + \sigma_2^2/(4\sigma_0^4).$  (4.60)

Note that, by (4.7) and (4.18), as $d \to 0$, for every $n \ge n_0$,

$P\{N_d > n\} = P\{\hat V_m/\nu(\psi, F) > C_m/C_{n_d},\ \forall\, n_0 \le m \le n\},$  (4.61)

and, by (4.21), (4.61) converges to 1 or 0 according as $n/n_d$ converges to a limit less than or greater than 1. If we let $n = n_d(1 + y\,n_d^{-1/2})$, $y \in R$, the right hand side will have a limit different from 0 and 1. In fact, using (4.32), (4.33), (4.56), (4.59) and (4.60), it follows from the above that, as $d \to 0$,

$P\{n_d^{-1/2}(N_d - n_d) > y\} \to 1 - \Phi(y/(2a^*)),$  (4.62)

where $\Phi$ is the standard normal df and $C_0$ and $a^*$ are defined by (4.57) and (4.60). This leads us to the following.

Theorem 4.2. Under (4.1), (4.2), (4.4), (4.12), (4.56) and (4.57),

$\lim_{d \downarrow 0} P\{n_d^{-1/2}(N_d - n_d) \le y\} = \Phi(y/(2a^*)),\quad \forall\, y \in R,$

where $n_d$, $N_d$ are defined by (4.7) and (4.18), and $C_0$, $a^*$ by (4.57) and (4.60).
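The limit (4.21) can be probed by simulation. The sketch below is a rough empirical check under assumed conditions (Huber score, $N(0,1)$ errors, $c_i$ uniform on $(0.5, 1.5)$), for which $\sigma_0^2$, $\gamma_1(\psi, F)$ and hence $\nu(\psi, F)$ are available in closed form; it compares $d^2$ times the average stopping time with $[\tau_{\alpha/2}\nu(\psi, F)]^2/C^2$.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def psi(x, k=1.345):
    return np.clip(x, -k, k)

def S(a, x, c):
    return np.sum(c * psi(x - a * c))

def crossing(x, c, level, lo=-50.0, hi=50.0, tol=1e-7):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid, x, c) > level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def stopping_time(x, c, d, tau, n0=20):
    # N_d of (4.18) for one realized sample path.
    for n in range(n0, len(x) + 1):
        xn, cn = x[:n], c[:n]
        Cn = sqrt(np.sum(cn ** 2))
        r = psi(xn - crossing(xn, cn, 0.0) * cn)
        s = sqrt(np.mean(r ** 2) - np.mean(r) ** 2)
        if crossing(xn, cn, -tau * Cn * s) - crossing(xn, cn, tau * Cn * s) <= 2 * d:
            return n
    return len(x)

# nu(psi, F) = sigma_0 / gamma_1 for the Huber score under N(0,1) errors.
k, tau, d = 1.345, 1.96, 0.3
phi_k = exp(-k * k / 2) / sqrt(2 * pi)
gamma1 = erf(k / sqrt(2))
sigma0_sq = gamma1 - 2 * k * phi_k + k * k * (1 - gamma1)
nu = sqrt(sigma0_sq) / gamma1

rng = np.random.default_rng(4)
reps, Ns = 60, []
for _ in range(reps):
    c = rng.uniform(0.5, 1.5, size=2000)
    x = 0.5 * c + rng.normal(size=2000)
    Ns.append(stopping_time(x, c, d, tau))

C_sq = (0.5 ** 2 + 0.5 * 1.5 + 1.5 ** 2) / 3    # E c_i^2 for U(0.5, 1.5)
print(d * d * np.mean(Ns), (tau * nu) ** 2 / C_sq)  # the two should be close
```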
5. SEQUENTIAL TESTS FOR $\Delta^0$ BASED ON M-ESTIMATORS

As in Section 4, we consider the regression model:

$X_i = \Delta^0 c_i + X_i^0,\quad i \ge 1,$  (5.1)

and we desire to test $H_0:\ \Delta^0 = 0$ against $H_1:\ \Delta^0 = \Delta\ (>0)$, where $\Delta$ is specified. [In (5.1), we may let $H_0:\ \Delta^0 = \Delta_0$ for any specified $\Delta_0$. But, then, working with $X_i - \Delta_0 c_i$, $i \ge 1$, we reduce it to (5.1).] Some sequential tests for (5.1) based on linear rank statistics and derived estimators have been considered by Ghosh and Sen (1977). Because of the close relationship of M- and R-estimators, led by their motivation, we may consider the following sequential M-test.

As in (4.3), let $S_n(a) = \sum_{i=1}^{n} c_i\psi(X_i - ac_i)$, $a \in R$, $n \ge 1$, and define $s_n^2$ and $B_n$ as in (4.10) and (4.40). Let $(0 <)\ \alpha_1, \alpha_2\ (< 1)$ be the desired type I and type II error probabilities. Consider the two numbers $B = \alpha_2/(1 - \alpha_1)$ and $A = (1 - \alpha_2)/\alpha_1$ (so that $0 < B < 1 < A < \infty$), and let $a = \log A$, $b = \log B$ $(-\infty < b < 0 < a < \infty)$. Then, we start with an initial sample of size $n_0(= n_0(\Delta))$ and continue sampling as long as

$b\,s_n^2 < \Delta B_n S_n(\tfrac12\Delta) < a\,s_n^2.$  (5.2)
Define the stopping variable $N(= N(\Delta))$ by

$N(\Delta) = \min\{n \ge n_0(\Delta):\ \Delta B_n S_n(\tfrac12\Delta)/s_n^2 \notin (b, a)\}$  (5.3)

(we allow $N(\Delta) = +\infty$ if (5.2) holds for all $n \ge n_0(\Delta)$). Then, we stop sampling after having $N(\Delta)$ observations and accept $H_0:\ \Delta^0 = 0$ or $H_1:\ \Delta^0 = \Delta\ (>0)$, according as $\Delta B_{N(\Delta)}S_{N(\Delta)}(\tfrac12\Delta)$ is $\le b\,s_{N(\Delta)}^2$ or $\ge a\,s_{N(\Delta)}^2$.

Note that, for every fixed $\Delta^0$ and $\Delta\ (>0)$,

$P_{\Delta^0}\{N(\Delta) > n\} = P_{\Delta^0}\{b\,s_m^2 < \Delta B_m S_m(\tfrac12\Delta) < a\,s_m^2,\ \forall\, n_0(\Delta) \le m \le n\}$
$\le P_{\Delta^0}\{b\,s_n^2 < \Delta B_n S_n(\tfrac12\Delta) < a\,s_n^2\}.$  (5.4)

When $\Delta^0 = \tfrac12\Delta$, $(\sigma_0 C_n)^{-1}S_n(\tfrac12\Delta)$ is asymptotically $N(0, 1)$ [see Section 4], where $C_n \to \infty$ as $n \to \infty$, while $B_n \overset{P}{\to} \gamma_1(\psi, F)$ (finite) and $s_n^2 \overset{P}{\to} \sigma_0^2$. Hence, (5.4) converges to 0 as $n \to \infty$. If $\delta = \Delta^0 - \tfrac12\Delta > 0$, then [as $S_n(a)$ is $\searrow$ in $a$] for every $K\ (0 < K < \infty)$, there exists an $n^*$, such that

$\Delta^0 - \tfrac12\Delta \ge K/C_n,\quad \forall\, n \ge n^*,$  (5.5)

which insures that $S_n(\tfrac12\Delta) \ge S_n(\Delta^0 - K/C_n)$, $\forall\, n \ge n^*$. By Theorems 2.1–2.2,

$C_n^{-1}[S_n(\Delta^0 - K/C_n) - S_n(\Delta^0)] \overset{P}{\to} K\gamma_1(\psi, F),$

while $(\sigma_0 C_n)^{-1}S_n(\Delta^0)$ is asymptotically normal and $s_n^2 \overset{P}{\to} \sigma_0^2$. Since $K(>0)$ is arbitrary, the right hand side of (5.4) again converges to 0. A similar case holds for $\delta < 0$. Thus,

$\lim_{n\to\infty} P_{\Delta^0}\{N(\Delta) > n\} = 0,$  (5.6)

so that the procedure terminates with probability 1.
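Operationally, (5.2)–(5.3) is an SPRT-like loop. The sketch below is a simplified illustration, not the paper's procedure verbatim: it replaces $B_n$ by a fixed plug-in estimate of $\gamma_1(\psi, F)$ (an assumption made purely to keep the sketch short; the paper uses the data-based $B_n$ of (4.40)) and uses the residual scores at $\Delta/2$ for the variance estimate.

```python
import numpy as np
from math import log

def psi(x, k=1.345):
    return np.clip(x, -k, k)

def sequential_m_test(x, c, delta, gamma1_hat, alpha1=0.05, alpha2=0.05, n0=30):
    # SPRT-style M-test of H0: Delta^0 = 0 vs H1: Delta^0 = delta (> 0):
    # continue while b*s_n^2 < delta*gamma1_hat*S_n(delta/2) < a*s_n^2.
    a, b = log((1 - alpha2) / alpha1), log(alpha2 / (1 - alpha1))
    for n in range(n0, len(x) + 1):
        r = psi(x[:n] - 0.5 * delta * c[:n])       # scores at a = delta/2
        Sn = float(np.sum(c[:n] * r))
        s_sq = float(np.mean(r ** 2) - np.mean(r) ** 2)
        lam = delta * gamma1_hat * Sn / s_sq
        if lam >= a:
            return n, "accept H1"
        if lam <= b:
            return n, "accept H0"
    return len(x), "no decision"

rng = np.random.default_rng(5)
delta = 0.4
c = rng.uniform(0.5, 1.5, size=5000)
x0 = rng.normal(size=5000)               # data under H0
x1 = delta * c + rng.normal(size=5000)   # data under H1
r0 = sequential_m_test(x0, c, delta, gamma1_hat=0.82)
r1 = sequential_m_test(x1, c, delta, gamma1_hat=0.82)
print(r0)
print(r1)
```

Under $H_0$ the statistic drifts toward the lower barrier $b$, under $H_1$ toward $a$, so sampling stops quickly for moderate $\Delta$.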
We like to study the OC function of the proposed procedure. As in Ghosh and Sen (1977), we take recourse to the asymptotic case where we let $\Delta \to 0$. We assume that

$\Delta^0 = \phi\Delta$, where $\phi \in \{u:\ |u| \le K\}$ for some $K > 1$.  (5.7)

[Note that for fixed $\Delta^0\ (\neq 0)$, the OC will approach 1 or 0 as $\Delta \to 0$, according as $\Delta^0 <$ or $> 0$.] Further, we assume that $n_0(\Delta) \to \infty$ in such a way that

$\lim_{\Delta\to 0} n_0(\Delta) = \infty$ but $\lim_{\Delta\to 0} \Delta^2 n_0(\Delta) = 0.$  (5.8)

Finally, all the regularity conditions of Theorem 4.2 are assumed to hold here. Let $L_F(\phi, \Delta)$ be the OC function of the sequential test in (5.2)–(5.3) when $F$ is the underlying df and $\Delta^0 = \phi\Delta$. Then, we have the following.

Theorem 5.1. Under the above regularity conditions,

$\lim_{\Delta\to 0} L_F(\phi, \Delta) = \bigl(A^{1-2\phi} - 1\bigr)/\bigl(A^{1-2\phi} - B^{1-2\phi}\bigr)$ for $\phi \neq \tfrac12$, and $= a/(a-b)$ for $\phi = \tfrac12$.  (5.9)

Thus, the asymptotic strength of the test is $(L(0) = 1 - \alpha_1,\ L(1) = \alpha_2)$, for all $F$.
Proof. For an arbitrary $\varepsilon(>0)$, we define stopping variables $N_{ij}(\Delta)$, $i, j = 1, 2$, as the smallest positive integer $n\ (\ge n_0(\Delta))$ for which

$b\,\sigma_0^2(1 + (-1)^i\varepsilon) < \Delta\gamma_1(\psi, F)S_n(\tfrac12\Delta) < a\,\sigma_0^2(1 + (-1)^j\varepsilon)$  (5.10)

is not true, and denote the associated OC functions by $L_{ij,F}(\phi, \Delta)$, $i, j = 1, 2$. Then, by virtue of (4.41) and (4.42), for every $\varepsilon > 0$, there exists a $\Delta_0\ (>0)$, such that the OC function of (5.2)–(5.3) is bracketed by the $L_{ij,F}(\phi, \Delta)$ for all $0 < \Delta \le \Delta_0$. Thus, it suffices to show that (5.9) holds for every $i, j = 1, 2$ if, in (5.10), we let $\varepsilon \downarrow 0$, and denote the limiting stopping variable and OC function by $N^0(\Delta)$ and $L_F^0(\phi, \Delta)$, respectively. Towards this, we define $\{Z_n\}$ as after (4.23), let

$n_\Delta = \min\{n \ge n_0(\Delta):\ \Delta^2 C_n^2 \ge 1\},$  (5.11)

and, by steps similar to those in (4.24)–(4.29), we conclude that, for every $0 < \varepsilon < k_1^* < \infty$, when $\Delta^0 = \phi\Delta$,

$\sup_{\varepsilon \le t \le k_1^*}\bigl|\Delta S_{n_\Delta(t)}(\tfrac12\Delta) - \Delta S_{n_\Delta(t)}(\phi\Delta) - (\phi - \tfrac12)\Delta^2\gamma_1(\psi, F)C_{n_\Delta(t)}^2\bigr| \overset{P}{\to} 0$  (5.12)

and

$Z_{n_\Delta} = \{Z_{n_\Delta}(t) = S_{n_\Delta(t)}(\phi\Delta)/(\sigma_0 C_{n_\Delta}),\ \varepsilon \le t \le k_1^*\} \overset{\mathcal D}{\to} \{\xi(t),\ \varepsilon \le t \le k_1^*\},$  (5.13)

where

$n_\Delta(t) = \max\{k:\ C_k^2 \le tC_{n_\Delta}^2\}.$  (5.14)

Thus, when $\Delta^0 = \phi\Delta$,

$\{\nu(\psi, F)\,\Delta B_{n_\Delta(t)}S_{n_\Delta(t)}(\tfrac12\Delta)/s_{n_\Delta(t)}^2,\ \varepsilon \le t \le k_1^*\} \overset{\mathcal D}{\to} \{\xi(t) + (\phi - \tfrac12)t/\nu(\psi, F),\ \varepsilon \le t \le k_1^*\}.$  (5.15)

By (5.13), (5.15) and the results of Dvoretzky, Kiefer and Wolfowitz (1953), the desired result follows. Q.E.D.
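The limiting OC (5.9) can be evaluated directly; the small check below confirms the asymptotic strength quoted above, i.e. $L(0) = 1 - \alpha_1$ and $L(1) = \alpha_2$, for one (illustrative) choice of $(\alpha_1, \alpha_2)$.

```python
from math import log

def limiting_oc(phi, alpha1=0.05, alpha2=0.10):
    # Limiting OC of the sequential M-test, as in (5.9) (classical SPRT form):
    # L(phi) = (A**(1-2*phi) - 1) / (A**(1-2*phi) - B**(1-2*phi)), phi != 1/2,
    # and L(1/2) = a / (a - b), with a = log A, b = log B.
    A = (1 - alpha2) / alpha1
    B = alpha2 / (1 - alpha1)
    h = 1 - 2 * phi
    if h == 0:
        a, b = log(A), log(B)
        return a / (a - b)
    return (A ** h - 1) / (A ** h - B ** h)

print(limiting_oc(0.0))  # 1 - alpha1 = 0.95
print(limiting_oc(1.0))  # alpha2 = 0.10
```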
Under more restrictive regularity conditions on the score function, Ghosh and Sen (1977) have studied the limit of $\Delta^2 EN(\Delta)$ (as $\Delta \to 0$) for sequential rank tests. A similar limit in our case also demands more restrictive conditions on $\psi$ and the $c_i$. However, under (4.1), the limiting distribution of $\Delta^2 N(\Delta)$ is the same as that of the first exit time of $\{\xi(t) + (\phi - \tfrac12)t/\nu(\psi, F),\ t \ge \varepsilon\}$ when we have two absorbing barriers $b\,\nu(\psi, F)$ and $a\,\nu(\psi, F)$.
We conclude with a remark that very recently Carroll (1977) has studied
the asymptotic normality of stopping times based on M-estimators, where he
needs the assumption that the score function is twice boundedly continuously
differentiable except at a finite number of points and that F is Lipschitz
of order one in neighbourhoods of these points. Our Theorem 4.2 remains valid
under weaker conditions and also the conclusions are valid for a more
general model.
REFERENCES

BICKEL, P.J. and WICHURA, M.J. (1971). Convergence criteria for multiparameter stochastic processes and some applications. Ann. Math. Statist. 42, 1656-1670.

CARROLL, R.J. (1977). On the asymptotic normality of stopping times based on robust estimators. Sankhyā, Ser. A 39, 355-377.

CHOW, Y.S. and ROBBINS, H. (1965). On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Ann. Math. Statist. 36, 457-462.

DVORETZKY, A., KIEFER, J. and WOLFOWITZ, J. (1953). Sequential decision problems for processes with continuous time parameter. Testing hypotheses. Ann. Math. Statist. 24, 254-264.

GHOSH, M. and SEN, P.K. (1972). On bounded length confidence interval for the regression coefficient based on a class of rank statistics. Sankhyā, Ser. A 34, 33-52.

GHOSH, M. and SEN, P.K. (1977). Sequential rank tests for regression. Sankhyā, Ser. A 39, 45-62.

GLESER, L.J. (1965). On the asymptotic theory of fixed size sequential confidence bounds for linear regression parameters. Ann. Math. Statist. 36, 463-467.

HÁJEK, J. and ŠIDÁK, Z. (1967). Theory of Rank Tests. Academic Press, New York.

HUBER, P.J. (1973). Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Statist. 1, 799-821.

JUREČKOVÁ, J. (1977). Asymptotic relations of M-estimates and R-estimates in linear regression model. Ann. Statist. 5, 464-472.

MILLER, R.G. and SEN, P.K. (1972). Weak convergence of U-statistics and von Mises' differentiable statistical functions. Ann. Math. Statist. 43, 31-41.