SEQUENTIAL PROCEDURES BASED ON M-ESTIMATORS
WITH DISCONTINUOUS SCORE FUNCTIONS

by

JANA JUREČKOVÁ 1)
Department of Probability & Statistics
Charles University
Prague 8, Czechoslovakia

and

PRANAB KUMAR SEN 2)
Department of Biostatistics
University of North Carolina
Chapel Hill, NC 27514, U.S.A.

Institute of Statistics Mimeo Series No. 1279
APRIL 1980
ABSTRACT

Bounded-length sequential confidence intervals and sequential tests for the regression parameter based on M-estimators are extended to the case where the score-functions generating the M-estimators have jump-discontinuities. In the context of the asymptotic normality of the stopping variable for the confidence interval problem, it is observed that the jump-discontinuities induce a slower rate of convergence. The proofs of the main theorems rest on the weak convergence of some related processes, which is also studied.

AMS Subject Classification: 62L10, 62L12, 60F17.

Key Words & Phrases: Bounded-length confidence intervals, jump-discontinuity, M-estimator, regression (location) parameter, score functions, stopping variables, sequential test, weak convergence.
1) This work was done while this author was visiting the University of North Carolina, Chapel Hill, NC, U.S.A.

2) Work of this author was partially supported by the National Heart, Lung and Blood Institute, Contract NIH-NHLBI-71-2243 from the National Institutes of Health.
1. INTRODUCTION

Let {X_i, i ≥ 1} be a sequence of independent random variables (r.v.) with continuous distribution functions (d.f.)

    F_i(x) = F(x - Δc_i),  i ≥ 1,  x ∈ E = (-∞, ∞),                (1.1)

where the c_i are known (regression) constants (not all equal to 0), Δ is an unknown parameter and the unknown d.f. F belongs to some class 𝓕 of d.f.'s. If all the c_i are equal to 1, (1.1) reduces to the classical location model, while, for the c_i not all equal, it relates to the so-called regression model.
We intend to construct (i) a confidence interval for Δ having coverage probability 1 - α and width ≤ 2d, and (ii) a test for

    H_0: Δ = Δ_0 (specified)  vs.  H_1: Δ = Δ_1 (specified) > Δ_0,                (1.2)

having type I and type II errors bounded by α_1 and α_2, respectively, where d (> 0), α (0 < α < 1) and α_1, α_2 (0 < α_1 + α_2 < 1) are all preassigned numbers.
For unspecified F, no fixed-sample size procedure exists for either of these problems; but under an asymptotic (as d → 0 or as Δ_1 - Δ_0 → 0) set-up, a possible solution is to draw observations sequentially with respect to some appropriate stopping rules. For the confidence interval problem, such procedures based on the least squares and rank order estimators are due to Chow and Robbins (1965), Gleser (1965), Geertsema (1968), Sen and Ghosh (1971) and Ghosh and Sen (1972), among others. For the sequential testing problem, rank based procedures are due to Sen and Ghosh (1974) and Ghosh and Sen (1976, 1977), among others.
For the location model, a sequential confidence interval based on M-estimators is due to Carroll (1977), while Jurečková and Sen (1980) have studied the general regression model, both the confidence interval and the testing problem, under less restrictive regularity conditions. In both these papers, the score-function (ψ) [generating the M-estimators] is assumed to be absolutely continuous, and some extra regularity conditions are needed for the asymptotic normality of the stopping numbers.
M-estimators corresponding to score-functions having jump-discontinuities constitute an important class of estimators (the sample median belongs to the same). The object of the current investigation is to study the asymptotic theory of sequential confidence intervals and tests for Δ based on M-estimators when the generating score functions have jump-discontinuities. For the location model, Jurečková (1980) has observed some qualitative difference in the rate of convergence of M-estimators corresponding to absolutely continuous ψ and to ψ having jump-discontinuities. As will be seen later on, the same feature is generally true for the regression model, and an analogous qualitative difference appears in the rates of convergence of the stopping times.
Along with the preliminary notions, the sequential confidence interval procedure is presented in Section 2. Section 3 deals with some invariance principles relating to M-estimators corresponding to ψ having jump-discontinuities and, in Section 4, these are incorporated in the proofs of the main theorems of Section 2. The last section is devoted to the study of the asymptotic theory of sequential tests for Δ based on M-estimators, where, also, the results of Section 3 are utilized in the derivation of the main results.
2. THE SEQUENTIAL CONFIDENCE INTERVAL FOR Δ

Based on X_1, ..., X_n, an M-estimator Δ̂_n of Δ [in (1.1)] [cf. Huber (1973), Jurečková (1977), and others] is a solution of

    S_n(t) = Σ_{i=1}^n c_i ψ(X_i - t c_i) (= 0),                (2.1)

where ψ is some score-function.
We assume that

    ψ = ψ_1 + ψ_2,                (2.2)

where ψ_1 is absolutely continuous on any bounded interval in E and possesses first and second order derivatives (ψ_1^{(1)} and ψ_1^{(2)}, respectively) almost everywhere (a.e.), and ψ_2 is a step-function, defined as follows. For some positive integer p, assume that there exist open intervals E_j = (a_j, a_{j+1}), j = 0, ..., p, with a_0 = -∞ < a_1 < ... < a_p < a_{p+1} = +∞, such that

    ψ_2(x) = β_j for x ∈ E_j, 0 ≤ j ≤ p,                (2.3)

where the β_j are real numbers (not all equal). Conventionally, we let

    ψ_2(a_j) = (1/2)(β_{j-1} + β_j), for j = 1, ..., p.                (2.4)

Also, we assume that both ψ_1, ψ_2 (and hence ψ) are nondecreasing and skew-symmetric, so that

    ψ_j(-x) = -ψ_j(x), ∀ x ∈ E, j = 1, 2.                (2.5)

The d.f. F in (1.1) is assumed to be symmetric, so that

    F(x) + F(-x) = 1, ∀ x ∈ E.                (2.6)
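As a concrete numerical illustration of the decomposition (2.2)-(2.5) (a sketch only; the Huber-type ψ_1, the jump points ±1 and the levels β = (-1, 0, 1) are illustrative choices, not assumptions of the text):

```python
def psi1(x, k=1.345):
    # Huber-type score: absolutely continuous, nondecreasing, skew-symmetric
    return max(-k, min(k, x))

def psi2(x, a=1.0, beta=(-1.0, 0.0, 1.0)):
    # step score with p = 2 jump points a_1 = -a, a_2 = a and levels beta_j;
    # at a jump point, the half-jump convention (2.4) is used
    if x == -a:
        return 0.5 * (beta[0] + beta[1])
    if x == a:
        return 0.5 * (beta[1] + beta[2])
    if x < -a:
        return beta[0]
    return beta[1] if x < a else beta[2]

def psi(x):
    # psi = psi1 + psi2 as in (2.2)
    return psi1(x) + psi2(x)

# skew-symmetry (2.5): psi_j(-x) = -psi_j(x) for all x
for x in (-2.0, -1.0, -0.3, 0.0, 0.3, 1.0, 2.0):
    assert abs(psi1(-x) + psi1(x)) < 1e-12
    assert abs(psi2(-x) + psi2(x)) < 1e-12
```

In particular, ψ_2 alone (with ψ_1 ≡ 0) gives the sign score underlying the sample median mentioned in the Introduction.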
Note that by (2.5) and (2.6),

    ∫_{-∞}^{∞} ψ_j(x) dF(x) = 0, j = 1, 2,                (2.7)

so that by (2.1) and (2.7),

    E_Δ S_n(Δ) = 0, ∀ n ≥ 1,                (2.8)

where E_Δ stands for the expectation when Δ holds. Further, by the monotonicity of ψ,

    S_n(t) is nonincreasing in t, for every n (≥ 1).                (2.9)
Moreover, we assume that F possesses an absolutely continuous probability density function (p.d.f.) f having a finite Fisher information

    I(f) = ∫_{-∞}^{∞} {f'(x)/f(x)}² dF(x) (< ∞),                (2.10)

where f'(x) = (d/dx) f(x).
Also, we assume that (i)

    γ_{1r} = ∫_{-∞}^{∞} ψ_1^{(r)}(x) dF(x) exists, for r = 1, 2,                (2.11)

(ii)

    ∫_{-∞}^{∞} (ψ_1^{(1)}(x))² dF(x) < ∞,                (2.12)

and (iii)

    lim_{t→0} ∫_{-∞}^{∞} {ψ_1^{(1)}(x + t) - ψ_1^{(1)}(x)}² dF(x) = 0.                (2.13)
Note that (2.10) insures that ∫ |f'(x)| dx < ∞. However, we need the slightly more stringent condition that, for each j (= 1, ..., p), f' is bounded in some neighborhood of a_j and

    γ_{2r} = Σ_{j=1}^p (β_j - β_{j-1})^r f(a_j) > 0, for r = 1, 2.                (2.14)

In what follows, we write γ = γ_{11} + γ_{21} (> 0). Finally, we assume that

    0 < σ_0² = ∫_{-∞}^{∞} ψ²(x) dF(x) < ∞                (2.15)

and that, as n → ∞,

    (max_{1≤k≤n} c_k²)/C_n² → 0, where C_n² = Σ_{i=1}^n c_i².                (2.16)

Remark. If ψ = ψ_2 (i.e., ψ_1 ≡ 0), then the assumptions (2.10)-(2.13) are not needed; in this case (2.14) and the boundedness of f' in some neighborhoods of the a_j suffice. Moreover, if ψ_1 is constant outside a bounded interval [a, b] ⊂ E (most of the functions suggested for M-estimators are of this type), then, for (2.11)-(2.13), it suffices to assume that ψ_1^{(2)} is bounded inside (a, b).
Regarding (2.8) and (2.9), Δ̂_n (in (2.1)) may be formally written as

    Δ̂_n = (Δ̂_n* + Δ̂_n**)/2, where Δ̂_n* = sup{t: S_n(t) > 0} and Δ̂_n** = inf{t: S_n(t) < 0}.                (2.17)
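Since S_n(t) is nonincreasing in t, the two roots in (2.17) can be located by bisection. A minimal sketch (the data, the regression constants and the use of the sign score are illustrative, not taken from the text):

```python
def S_n(t, x, c, psi):
    # S_n(t) = sum_i c_i * psi(X_i - t*c_i) of (2.1); nonincreasing in t
    return sum(ci * psi(xi - t * ci) for xi, ci in zip(x, c))

def m_estimate(x, c, psi, lo=-50.0, hi=50.0, tol=1e-9):
    # Delta_hat_n = (Delta*_n + Delta**_n)/2 as in (2.17)
    a, b = lo, hi                      # Delta*_n = sup{t: S_n(t) > 0}
    while b - a > tol:
        m = 0.5 * (a + b)
        if S_n(m, x, c, psi) > 0:
            a = m
        else:
            b = m
    d_star = 0.5 * (a + b)
    a, b = lo, hi                      # Delta**_n = inf{t: S_n(t) < 0}
    while b - a > tol:
        m = 0.5 * (a + b)
        if S_n(m, x, c, psi) < 0:
            b = m
        else:
            a = m
    return 0.5 * (d_star + 0.5 * (a + b))

def sign(u):
    # the sign score (a pure step function, psi = psi_2)
    return (u > 0) - (u < 0)

# with c_i = 1 and the sign score, Delta_hat_n is the sample median
assert abs(m_estimate([1.0, 2.0, 10.0], [1.0, 1.0, 1.0], sign) - 2.0) < 1e-6
```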
Moreover, we note that by (2.8), (2.15) and (2.16),

    L(C_n^{-1} S_n(Δ)) → N(0, σ_0²) as n → ∞.                (2.18)
σ_0², given by (2.15), depends on the unknown distribution function F. We shall estimate it by the sequence

    s_n² = ∫_{-∞}^{∞} ψ²(x) dF̂_n(x), n ≥ 1,                (2.19)

with

    F̂_n(x) = n^{-1} Σ_{i=1}^n u(x - X_i + Δ̂_n c_i), x ∈ E; n = 1, 2, ...,                (2.20)

where u(t) = 1 or 0 according as t ≥ 0 or t < 0.
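By (2.19)-(2.20), s_n² is simply an average of ψ² over the residuals X_i - Δ̂_n c_i; a short sketch (the data and the scores are illustrative):

```python
def s2_n(x, c, delta_hat, psi):
    # s_n^2 of (2.19): integral of psi^2 with respect to the empirical d.f.
    # of the residuals X_i - Delta_hat*c_i, i.e. a plug-in estimate of (2.15)
    n = len(x)
    return sum(psi(xi - delta_hat * ci) ** 2 for xi, ci in zip(x, c)) / n

# with the sign score, s_n^2 = 1 whenever no residual is exactly zero
sgn = lambda u: (u > 0) - (u < 0)
assert abs(s2_n([1.0, 2.0, 4.0], [1.0, 1.0, 1.0], 1.5, sgn) - 1.0) < 1e-12
```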
Finally, denote

    Δ̂_n^L = sup{t: S_n(t) > τ_{α/2} s_n C_n},                (2.21)

    Δ̂_n^U = inf{t: S_n(t) < -τ_{α/2} s_n C_n},                (2.22)

    L_n = Δ̂_n^U - Δ̂_n^L,                (2.23)

where Φ is the standard normal d.f. and τ_{α/2} is defined by Φ(τ_{α/2}) = 1 - α/2, 0 < α < 1. Notice that L_n ≥ 0, ∀ n ≥ 1 [by (2.9) and (2.21)-(2.22)], and write I_n = [Δ̂_n^L, Δ̂_n^U].
The proposed sequential confidence interval for Δ is then I_{N_d}, corresponding to the stopping variable

    N_d = min{n ≥ n_0: L_n ≤ 2d},                (2.24)

where n_0 is an initial sample size and d > 0. It follows from (2.24) that L_{N_d} ≤ 2d. The following theorems give the asymptotic properties of N_d and of I_{N_d} as d ↓ 0. Define

    ñ_d = max{n: dC_n ≤ τ_{α/2} σ_0/γ}.                (2.25)
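The stopping rule (2.24) amounts to the following sampling loop (a schematic sketch; `interval_length(n)` is a hypothetical stand-in for the width L_n computed from the first n observations):

```python
import math

def stopping_time(interval_length, d, n0=5, n_max=10**6):
    # N_d of (2.24): the first n >= n0 with L_n <= 2d
    n = n0
    while interval_length(n) > 2 * d and n < n_max:
        n += 1
    return n

# illustration: if L_n shrinks like 10/sqrt(n) (roughly the C_n^{-1} rate
# suggested by (2.18) in the location model), d = 0.5 stops at n = 100
assert stopping_time(lambda n: 10.0 / math.sqrt(n), 0.5) == 100
```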
Theorem 2.1. Under the assumptions made above, for every (fixed) d > 0, P_Δ(N_d < ∞) = 1, lim_{d↓0} N_d = +∞ a.s., and, as d ↓ 0,

    C_{ñ_d}^{-1} C_{N_d} → 1 in probability,                (2.26)

    lim_{d↓0} P_Δ(Δ ∈ I_{N_d}) ≥ 1 - α.                (2.27)
Corollary 2.1. If, in addition to the hypothesis of Theorem 2.1,

    lim_{n→∞} C_{[nt]}²/C_n² = ρ(t) exists and is ↑ in t,                (2.28)

for some a < 1 < b and every t ∈ [a, b], then (2.26) may also be replaced by: as d ↓ 0,

    ñ_d^{-1} N_d → 1 in probability.                (2.29)

Note that for the location and the equispaced regression models, (2.28) holds for ρ(t) = t and t³, respectively.
To prove the asymptotic distribution of the stopping variable N_d, we need the following assumption on the sequence {c_n}_{n=1}^∞:

    lim_{n→∞} K_n = K (∈ [1, ∞)) and lim_{n→∞} max_{1≤k≤n} c_k⁴/C_n** = 0,                (2.30)

where

    C_n** = Σ_{i=1}^n c_i⁴ and K_n = n C_n**/C_n⁴.                (2.31)

Notice that K_n ≥ 1, ∀ n ≥ 1. Finally, we assume that

    ∫_{-∞}^{∞} ψ⁴(x) dF(x) < ∞.                (2.32)
Theorem 2.2. Under (2.30), (2.32) and the assumptions of Theorem 2.1,

    lim_{d↓0} P_Δ{ñ_d^{1/4}(dC_{N_d} - θ)/θ ≤ y} = Φ((2θ/γ_{22})^{1/2} γ y/K)                (2.33)

holds for all y ∈ E, where θ = τ_{α/2} σ_0/γ and γ_{22} is defined by (2.14).
Corollary 2.2. If, in addition to the hypothesis of Theorem 2.2,

    lim_{n→∞} ε_n = 0 and (C_m/C_n - 1)(m/n - 1)^{-1} → θ* as n → ∞                (2.34)

holds for some θ*, 0 < θ* < ∞, whenever |(m/n) - 1| < ε_n for any {ε_n}, then, for every y ∈ E,

    lim_{d↓0} P_Δ{ñ_d^{1/4}(ñ_d^{-1} N_d - 1) ≤ y} = Φ((2θ/γ_{22})^{1/2} γ θ* y/K).                (2.35)

The proofs of the theorems and corollaries are presented in Section 4. It may be noted that (2.34) holds for the simple location and the equispaced regression models with θ* = 1/2 and 3/2, respectively; (2.30) also holds in these cases.
The convergence properties in (2.26)-(2.29) hold without change even if ψ_2 ≡ 0. However, Theorem 2.2 does not extend to this case: when ψ_2 ≡ 0, ñ_d^{1/2}[N_d/ñ_d - 1] has a nondegenerate asymptotic distribution as d ↓ 0 (see Jurečková and Sen (1980)), while in the current case, where ψ_2 ≢ 0, it is ñ_d^{1/4}[N_d/ñ_d - 1] that is asymptotically normal as d ↓ 0. Hence, jump-discontinuities in the score function lead to slower rates of convergence of the stopping numbers.
3. SOME INVARIANCE PRINCIPLES

The results of this section will provide the main tool for the proofs of Theorems 2.1 and 2.2. Let {c_{ni}, i ≥ 0, n ≥ 0} and {d_{ni}, i ≥ 0, n ≥ 0} (with c_{n0} = d_{n0} = 0 ∀ n ≥ 0) be two triangular arrays of real numbers and let {k_n} be any sequence of positive integers such that

    k_n → ∞ as n → ∞                (3.1)

and

    lim_{n→∞} max_{1≤k≤k_n} {c_{nk}² |d_{nk}|^r / Σ_{j≤k_n} c_{nj}² |d_{nj}|^r} = 0, for every r = 0, 1, 2.                (3.2)

For k ≥ 0, s = 0, 1, 2, denote

    A_{nk}² = Σ_{j≤k} c_{nj}² d_{nj}², B_{nk}² = Σ_{j≤k} c_{nj}² |d_{nj}|, C_{nk}² = Σ_{j≤k} c_{nj}²,                (3.3)

and write

    A_n² = A_{n k_n}², B_n² = B_{n k_n}², C_n² = C_{n k_n}².                (3.4)

It follows from (3.2) that

    A_n/B_n → 0 and B_n/C_n → 0 as n → ∞.                (3.5)

Let X_1, X_2, ... be i.i.d. random variables distributed according to the d.f. F and let F, ψ satisfy the regularity conditions of Section 2.
Consider the process

    W_n(s, t) = {[Q_{n,n(s)}(t) - E Q_{n,n(s)}(t)]/(B_n √γ_{22}): (s, t) ∈ K*},                (3.6)

where

    K* = {(s, t): 0 ≤ s ≤ 1, |t| ≤ k*} (= [0, 1] × [-k*, k*])                (3.7)

is a compact set in E²,

    n(s) = max{k: B_{nk}² ≤ s B_n²}, 0 ≤ s ≤ 1,                (3.8)

and

    Q_{nk}(t) = Σ_{j≤k} c_{nj}[ψ(X_j - t d_{nj}) - ψ(X_j)], k ≥ 0, t ∈ E;                (3.9)

here γ_{22} (> 0) is defined in (2.14), and W_n(s, t) belongs to D[K*] for every n. The following theorem states the weak convergence of W_n to some two-parameter Gaussian process.
Theorem 3.1. Let W_n = {W_n(s, t): (s, t) ∈ K*} be the process defined in (3.6)-(3.9). Then, under (3.1), (3.2) and for ψ and F satisfying the regularity conditions of Section 2, W_n converges weakly to the Gaussian process W = {W(s, t): (s, t) ∈ K*} such that E W(s, t) = 0 and

    E W(s, t) W(s', t') = (s ∧ s')·g(t, t'), ∀ (s, t), (s', t') ∈ K*,                (3.10)

where

    g(t, t') = |t| ∧ |t'| if t·t' > 0, and g(t, t') = 0 otherwise.                (3.11)

Proof: According to (3.9), write

    Q_{nk}(t) = Q_{nk}^{(1)}(t) + Q_{nk}^{(2)}(t),

where Q_{nk}^{(i)}(t) is generated by ψ_i, i = 1, 2. Denote
    W_n^{(1)}(s, t) = {[Q_{n,n(s)}^{(1)}(t) - E Q_{n,n(s)}^{(1)}(t)]/(σ_1 A_n): (s, t) ∈ K*},                (3.12)

where

    σ_1² = ∫_{-∞}^{∞} (ψ_1^{(1)}(x))² dF(x) - γ_{11}² (< ∞, by (2.12)).                (3.13)

Then

    W_n(s, t) = (σ_1/√γ_{22})(A_n/B_n) W_n^{(1)}(s, t) + W_n^{(2)}(s, t),                (3.14)

where W_n^{(2)} is defined by (3.6) and (3.9) with ψ = ψ_2.
It follows from Theorem 2.1 of Jurečková and Sen (1980) that

    sup_{(s,t)∈K*} |W_n^{(1)}(s, t)| = O_p(1),                (3.15)

so that by (3.5), (3.14) and (3.15),

    sup_{(s,t)∈K*} |W_n(s, t) - W_n^{(2)}(s, t)| →_p 0 as n → ∞.                (3.16)

Thus, the asymptotic behavior of W_n is solely determined by W_n^{(2)}, generated by the discontinuous component ψ_2. Hence, to prove the theorem, it suffices to show that (a) the finite-dimensional distributions of W_n^{(2)} converge to those of W and (b) {W_n^{(2)}} is tight.

It follows from the definition of ψ_2 that

    W_n^{(2)}(s, t) = (B_n √γ_{22})^{-1} Σ_{i≤n(s)} c_{ni} Σ_{j=1}^p (β_j - β_{j-1})[I(X_i < a_j < X_i - t d_{ni}) - I(X_i - t d_{ni} < a_j < X_i)],                (3.17)

where I(A) stands for the indicator function of the set A.
Note that

    E[I(X_i < a_j < X_i + |t d_{ni}|)]^k = E[I(X_i - |t d_{ni}| < a_j < X_i)]^k = f(a_j)|t d_{ni}| + o(|t d_{ni}|), i ≥ 1, k ≥ 1,                (3.18)

    E[I(X_i < a_j < X_i + |t d_{ni}|)·I(X_i < a_j < X_i + |t' d_{ni}|)] = f(a_j)(|t| ∧ |t'|)|d_{ni}| + o(|d_{ni}|),                (3.19)

and

    E[I(X_i - |t d_{ni}| < a_j < X_i)·I(X_i < a_j < X_i + |t' d_{ni}|)] = 0, i ≥ 1.                (3.20)

(3.18)-(3.20) imply

    lim_{n→∞} E W_n^{(2)}(s, t) W_n^{(2)}(s', t') = (s ∧ s') g(t, t'),                (3.21)
where g(t, t') is defined by (3.11); thus the limiting covariances of W_n^{(2)} coincide with those of W. Moreover, any linear combination of {W_n^{(2)}(s_r, t_r): 1 ≤ r ≤ m} can be expressed through (3.17) as a sum of independent random variables, and the convergence of the finite dimensional distributions of W_n^{(2)} then follows from the classical central limit theorem.
To prove the tightness of {W_n^{(2)}}, consider, for any pair of blocks B = {(s_1, t_1), (s_2, t_2)} with s_1 < s_2 and t_1 < t_2, the increments

    W_n^{(2)}(B) = W_n^{(2)}(s_2, t_2) - W_n^{(2)}(s_2, t_1) - W_n^{(2)}(s_1, t_2) + W_n^{(2)}(s_1, t_1),                (3.22)

with W_n^{(2)}(B') defined analogously. It follows from (3.18)-(3.20) that

    lim_{n→∞} E{[W_n^{(2)}(B)]² [W_n^{(2)}(B')]²} ≤ K [λ(B ∪ B')]²,                (3.23)

where λ(A) denotes the Lebesgue measure of A and K is a finite positive constant, independent of n. Analogous inequalities hold for increments in (s_1, s_2, s_3, t_1, t_2) and for B = {(s_1, t_1), (s_2, t_2)}, B' = {(s_1, t_2), (s_2, t_3)}, t_1 < t_2 < t_3. The tightness of {W_n^{(2)}} then follows from the results of Bickel and Wichura (1971). Theorem 3.1 is proved.
Corollary 3.1. Let

    W_n*(s, t) = [Q_{n,n(s)}(t) + γ t Σ_{i≤n(s)} c_{ni} d_{ni}]/(B_n √γ_{22}),                (3.24)

where γ is given by (2.11) and (2.14), i.e.,

    γ = ∫_{-∞}^{∞} ψ_1^{(1)}(x) dF(x) + Σ_{j=1}^p (β_j - β_{j-1}) f(a_j).                (3.25)

Then, under the assumptions of Theorem 3.1, the process {W_n*(s, t): (s, t) ∈ K*} converges weakly to the process W = {W(s, t): (s, t) ∈ K*}.

Proof. (3.24) follows from (3.1)-(3.3) and from (3.5).

Corollary 3.2. Let Δ̂_n be the M-estimator defined by (2.17), with the sequence {c_n}_{n=1}^∞ satisfying (2.30). Then, for ψ_2 ≢ 0, it holds that

    C_n(Δ̂_n - Δ) - (γ C_n)^{-1} Σ_{j=1}^n c_j ψ(X_j - Δc_j) = O_p(n^{-1/4}).                (3.26)

Remark 1. If ψ_2 ≡ 0, the order on the right-hand side of (3.26) is O_p(n^{-1/2}), as it follows from Jurečková and Sen (1980).
Theorem 3.2. Let X_1, X_2, ... be i.i.d. random variables distributed according to the d.f. F. Suppose that F and ψ satisfy the regularity conditions of Section 2. Then, under (2.1)-(2.5) and (2.10)-(2.16),

    sup{|M_{nk}(t) - E M_{nk}(t)|: -k* ≤ t ≤ k*, k ≤ n} →_p 0 as n → ∞,                (3.27)

where

    M_{nk}(t) = {Σ_{i≤k} c_i [ψ(X_i - t C_n^{-1} c_i) - ψ(X_i)]}/(σ_0 C_n)                (3.28)

and c_i, C_n, σ_0 are all defined as in Section 2.

Proof. Let B_k be the σ-field generated by {X_i, i ≤ k}, k ≥ 1, and let B_0 be the trivial σ-field. Then, for every n ≥ 1, {(M_{nk}(t), -k* ≤ t ≤ k*), B_k; k ≤ n} is a martingale (process), so that
    { sup_{-k*≤t≤k*} |M_{nk}(t) - E M_{nk}(t)|, B_k; k ≤ n }                (3.29)

is a nonnegative submartingale. Therefore, by the Kolmogorov inequality (for submartingales), for every ε > 0,

    P{ sup_{k≤n} sup_{-k*≤t≤k*} |M_{nk}(t) - E M_{nk}(t)| > ε } ≤ ε^{-1} E{ sup_{-k*≤t≤k*} |M_{nn}(t) - E M_{nn}(t)| }.                (3.30)

Regarding the monotonicity of ψ, the supremum on the right-hand side of (3.30) can be approximated by a supremum over a finite set of points, and it follows from the assumptions that lim_{n→∞} E|M_{nn}(t) - E M_{nn}(t)| = 0, uniformly in t ∈ [-k*, k*]. Finally, the expectation term E M_{nk}(t) may be replaced by -tγ C_k²/(σ_0 C_n²), similarly as in Corollary 3.1. Q.E.D.
To prove Theorem 2.2 of Section 2, we shall still need the following result concerning the error of the approximation of s_n²:

Lemma 3.1. Let s_n² be defined by (2.19), with the sequence {c_n} satisfying (2.30); for any T, 0 < T < ∞, let

    n_T = max{k: C_k² ≤ T C_n²}.                (3.31)

Then

    max_{n_ε ≤ k ≤ n_T} n^{1/2} |(s_k² - σ_0²) - (s̃_k² - σ_0²)| →_p 0 as n → ∞                (3.32)

holds for any ε > 0, where

    s̃_n² = n^{-1} Σ_{i=1}^n ψ²(X_i - Δc_i).                (3.33)

Remark 2. Note that by the Khintchine law of large numbers,

    s̃_n² → σ_0² a.s. as n → ∞.                (3.34)
Proof of Lemma 3.1. Theorem 3.2 implies

    max_{n_ε ≤ k ≤ n_T} |C_n^{-1} C_k² (Δ̂_k - Δ) - (γ C_n)^{-1} Σ_{i≤k} c_i ψ(X_i - Δc_i)| →_p 0                (3.35)

as n → ∞. The sequence {(γ C_n)^{-1} Σ_{i≤k} c_i ψ(X_i - Δc_i): k ≤ n_T} is tight by the Donsker theorem, and thus it follows from (3.35) that for every ε > 0 and η > 0 there exist an n_0 and a δ (> 0) such that

    P{ max_{n(1-δ) ≤ k ≤ n(1+δ)} |C_n(Δ̂_k - Δ̂_n)| > ε } < η for n ≥ n_0.                (3.36)

Moreover, by the same technique as in the proof of (3.27), we obtain that

    P{ sup_{k≤n} sup_{|t|≤k*} |M_{nk}*(t) - E M_{nk}*(t)| > ε } → 0 as n → ∞,                (3.37)

where

    M_{nk}*(t) = n^{-1/2} Σ_{i≤k} [ψ²(X_i - Δc_i) - ψ²(X_i - (Δ + C_n^{-1} t) c_i)].                (3.38)

The lemma then follows from (3.36) and (3.37).
Remark 3. It follows from Theorem 3.1 that, if s > 0 and t_2 - t_1 ≠ 0, then

    [W_n(s, t_2) - W_n(s, t_1)]/√(s|t_2 - t_1|)                (3.39)

has asymptotically the standard normal distribution. Under some assumptions, (3.39) extends to the case when t_1, t_2, s are replaced by sequences of random variables. More precisely, if s_n, t_{n1}, t_{n2} are random variables such that, for some constants s (∈ (0, 1)) and τ (≠ 0),

    s_n →_p s and t_{n2} - t_{n1} →_p τ,                (3.40)

then

    [W_n(s_n, t_{n2}) - W_n(s_n, t_{n1})]/√(s_n|t_{n2} - t_{n1}|) →_d N(0, 1).                (3.41)

We shall refer to this result in the next section.
4. PROOFS OF THEOREMS 2.1 AND 2.2

Proof of Theorem 2.1. For any fixed d > 0 and any Δ, by (2.24),

    P_Δ(N_d > n) ≤ P_Δ(L_n > 2d) = P_Δ(C_n L_n > 2dC_n).                (4.1)

Now, it follows from (2.21)-(2.23), Theorem 3.2 and Lemma 3.1 that

    C_n L_n →_p 2τ_{α/2} σ_0/γ = 2θ as n → ∞,                (4.2)

while, by (2.16), 2dC_n → ∞ as n → ∞; thus lim_{n→∞} P_Δ(N_d > n) = 0, i.e., P_Δ(N_d < ∞) = 1. Moreover, it follows from the definition (2.24) of N_d that N_d is nonincreasing in d (> 0); this, together with the fact that L_n > 0 for all n, implies that N_d → +∞ a.s. as d ↓ 0.

To prove (2.26), define, for i = 1, 2,

    n_{dε}^{(i)} = max{n: dC_n ≤ (1 + (-1)^i ε)θ},                (4.3)

where θ is as in (4.2) and ñ_d is defined by (2.25). Then

    P(C_{ñ_d}^{-1} C_{N_d} > 1 + ε) = P(N_d > n_{dε}^{(2)}) ≤ P(L_{n_{dε}^{(2)}} > 2d) = P{C_{n_{dε}^{(2)}} L_{n_{dε}^{(2)}} > 2dC_{n_{dε}^{(2)}}},                (4.4)

where, by (4.2), C_{n_{dε}^{(2)}} L_{n_{dε}^{(2)}} →_p 2θ as d ↓ 0, while, by (2.25) and (4.3), 2dC_{n_{dε}^{(2)}} → 2θ(1 + ε) > 2θ. Hence, (4.4) converges to 0 as d ↓ 0. Similarly, P(C_{ñ_d}^{-1} C_{N_d} ≤ 1 - ε) → 0 as d ↓ 0. This proves (2.26).

By virtue of Theorem 3.2, Lemma 3.1, (2.21) and (2.26), it follows that

    L(S_{N_d}(Δ)/(s_{N_d} C_{N_d})) → N(0, 1) as d ↓ 0                (4.5)
and

    C_{N_d} L_{N_d} →_p 2θ as d ↓ 0.                (4.6)

Thus, (2.27) follows from (4.5) and (4.6).

To prove Corollary 2.1, we may note that (2.26) and (2.28) imply (2.29). This completes the proof of Theorem 2.1 and Corollary 2.1.
Proof of Theorem 2.2. It follows from the definition of N_d that

    P_Δ{ñ_d^{1/4}(dC_{N_d} - θ)/θ ≤ y} ≤ P_Δ{ñ_d^{1/4}(L_{N_d} C_{N_d} - 2θ)/θ ≤ 2y}                (4.7)

and

    P_Δ{ñ_d^{1/4}(dC_{N_d-1} - θ)/θ ≥ y} ≤ P_Δ{ñ_d^{1/4}(L_{N_d-1} C_{N_d-1} - 2θ)/θ ≥ 2y}.                (4.8)

Hence, to prove (2.33), it suffices to show that, as d ↓ 0,

    P_Δ{ñ_d^{1/4}(L_{ñ_d} C_{ñ_d} - 2θ)/(2θ) ≤ y} → Φ((2θ/γ_{22})^{1/2} γ y/K)                (4.9)

and

    max{n^{1/4} |L_n C_n - L_m C_m|: |C_m C_n^{-1} - 1| < δ} = o_p(1).                (4.10)

Put c_i⁰ = c_i/C_n, 1 ≤ i ≤ n. Then, regarding (3.2) and (3.3), the arrays c_{ni} = c_i and d_{ni} = c_i⁰ satisfy the assumptions of Section 3,                (4.11)

with

    B_n² = Σ_{i=1}^n |c_i|³/C_n.                (4.12)

It follows from Corollary 3.1 and from Remark 3 that

    L{(B_{ñ_d} √γ_{22})^{-1}(Σ_{i≤ñ_d} c_i [ψ(X_i - Δ - θ c_i⁰) - ψ(X_i - Δ + θ c_i⁰)] + 2γθ C_{ñ_d})} → N(0, 2θ).                (4.13)

By virtue of Lemma 3.1, (2.21)-(2.22), (2.25), (4.11) and (4.12), we see that (4.13) implies (4.9). Moreover, (4.10) follows from Lemma 3.1 and from the tightness property of the process W_n*(s, t) in (3.24). Finally, (2.34) along with (2.33) ensures (2.35). Hence the proof of Theorem 2.2 and Corollary 2.2 is complete.
5. SEQUENTIAL TEST FOR Δ BASED ON M-ESTIMATORS

Consider again the simple regression model (1.1) and the testing problem of (1.2). Without loss of generality, we may take

    H_0: Δ = 0 vs. H_1: Δ = δ (> 0), δ specified.                (5.1)

Along the same lines as in Jurečková and Sen (1980), we consider the following sequential test. Define s_n² as in (2.19) and L_n as in (2.23), and put

    ν_n = C_n L_n/(2τ_{α/2}) (→_p ν = σ_0/γ, by (4.2)).                (5.2)

Let α_1 and α_2 be the prescribed probabilities of the errors of type I and type II, respectively, 0 < α_1 + α_2 < 1. Let A and B be two numbers such that

    0 < α_2/(1 - α_1) ≤ B < 1 < A ≤ (1 - α_2)/α_1 < ∞,

and put a = log A, b = log B (so that b < 0 < a). We start with an initial sample of size n_0 = n_0(δ) and continue sampling as long as

    b ν_n² < δ S_n(δ/2) < a ν_n².                (5.3)

Thus, the stopping number N (= N(δ)) is defined by

    N(δ) = min{n ≥ n_0: δ S_n(δ/2) ≤ b ν_n² or δ S_n(δ/2) ≥ a ν_n²},                (5.4)

and we stop sampling when N(δ) observations are made, with the acceptance or rejection of H_0 according as δ S_{N(δ)}(δ/2) is ≤ b ν_{N(δ)}² or ≥ a ν_{N(δ)}².                (5.5)
Jurečková and Sen (1980) studied the properties of the test in the case of an absolutely continuous score function, i.e., when ψ = ψ_1. They proved that the testing procedure terminates with probability 1 and derived the limiting OC function as δ → 0. Unlike in the case of the sequential confidence interval, it can be shown that the presence of jump-discontinuities of the score function does not change the limiting OC function of the sequential test (excepting the contribution of γ_{21} in γ). The proofs of (i) the finiteness of the sequential procedure and (ii) Theorem 5.1 of Jurečková and Sen (1980) readily extend to the case of ψ = ψ_1 + ψ_2 when the theorems of Section 3 of the current paper are incorporated. Hence, we omit the details.
REFERENCES

[1] BICKEL, P.J. and WICHURA, M.J. (1971). Convergence criteria for multiparameter stochastic processes and some applications. Ann. Math. Statist. 42, 1656-1670.

[2] CARROLL, R.J. (1977). On the asymptotic normality of stopping times based on robust estimators. Sankhyā, Ser. A 39, 355-377.

[3] CHOW, Y.S. and ROBBINS, H. (1965). On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Ann. Math. Statist. 36, 457-462.

[4] GEERTSEMA, J.C. (1968). Sequential confidence intervals based on rank tests. Ann. Math. Statist. 41, 1016-1026.

[5] GHOSH, M. and SEN, P.K. (1972). On bounded-length confidence interval for the regression coefficient based on a class of rank statistics. Sankhyā, Ser. A 34, 33-52.

[6] GHOSH, M. and SEN, P.K. (1976). Asymptotic theory of sequential tests based on linear functions of order statistics. In Essays in Prob. & Statist. (Ogawa Vol.), Ed. S. Ikeda et al., 480-498.

[7] GHOSH, M. and SEN, P.K. (1977). Sequential rank tests for regression. Sankhyā, Ser. A 39, 45-62.

[8] GLESER, L.J. (1965). On the asymptotic theory of fixed size sequential confidence bounds for linear regression parameters. Ann. Math. Statist. 36, 463-467.

[9] HUBER, P.J. (1973). Robust regression: Asymptotics, conjectures and Monte Carlo. Ann. Statist. 1, 799-821.

[10] JUREČKOVÁ, J. (1969). Asymptotic linearity of a rank statistic in regression parameter. Ann. Math. Statist. 40, 1889-1900.

[11] JUREČKOVÁ, J. (1973). Central limit theorem for Wilcoxon rank statistics process. Ann. Statist. 1, 1046-1060.

[12] JUREČKOVÁ, J. (1977). Asymptotic relations of M-estimates and R-estimates in linear regression model. Ann. Statist. 5, 464-472.

[13] JUREČKOVÁ, J. (1979). Bounded-length sequential confidence intervals for regression and location parameters. Proc. 2nd Prague Conf. on Asymptotic Statistics, 239-250.

[14] JUREČKOVÁ, J. (1980). Asymptotic representation of M-estimators of location. Operationsforschung und Statistik, Ser. Statistics.

[15] JUREČKOVÁ, J. and SEN, P.K. (1980). Invariance principles for some stochastic processes relating to M-estimators and their role in sequential statistical inference. Sankhyā, Ser. A.

[16] SEN, P.K. and GHOSH, M. (1971). On bounded length sequential confidence intervals based on one-sample rank order statistics. Ann. Math. Statist. 42, 189-203.

[17] SEN, P.K. and GHOSH, M. (1974). Sequential rank tests for location. Ann. Statist. 2, 540-552.