Jurečková, Jana and Sen, Pranab Kumar (1988). "Uniform Second Order Asymptotic Linearity of M-Statistics in Linear Models."

UNIFORM SECOND ORDER ASYMPTOTIC LINEARITY
OF M-STATISTICS IN LINEAR MODELS

by

Jana Jurečková
Charles University, Prague

and

Pranab Kumar Sen
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1856

August 1988
UNIFORM SECOND ORDER ASYMPTOTIC LINEARITY OF M-STATISTICS IN LINEAR MODELS

Jana Jurečková and Pranab Kumar Sen

Abstract. For conventional linear models, uniform second order asymptotic linearity (in the regression parameters) of M-statistics is studied under a variety of regularity conditions on the score functions. Parallel results are also derived for studentized versions of these M-statistics.
1. Introduction
Let X_1, ..., X_n be independent random variables (r.v.) with distribution functions (d.f.) F_1, ..., F_n, respectively, all defined on the real line R^1, where

(1.1)    F_i(x) = F(x - β'c_i),    i = 1, ..., n,

β = (β_1, ..., β_p)' is an unknown vector of p (≥ 1) parameters, and the c_i = (c_{i1}, ..., c_{ip})' are known vectors of regression constants. Consider a score function ψ: R^1 → R^1, and define

(1.2)    M_n(b) = Σ_{i=1}^n c_i ψ(X_i - b'c_i),    b ∈ R^p.

Then the usual M-estimator (β̂_n) of β is obtained by equating M_n(b) to 0 in a suitable norm. For the study of general properties of such M-estimators, the following stochastic processes play a vital role. Let Y_1, ..., Y_n be independent and identically distributed (i.i.d.) r.v. (with a d.f. F), and let

(1.3)    S_n(t) = Σ_{i=1}^n c_i [ ψ(Y_i - n^{-1/2} t'c_i) - ψ(Y_i) ],    t ∈ R^p.
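As an illustrative aside (ours, not part of the original paper), the objects in (1.2) and (1.3) are straightforward to compute; the sketch below uses a Huber-type score purely as an example, since the score ψ is left abstract in the text.

```python
import numpy as np

def psi_huber(x, k=1.345):
    # an example score function: linear on [-k, k], constant outside
    return np.clip(x, -k, k)

def M_n(b, X, C, psi=psi_huber):
    # (1.2): M_n(b) = sum_i c_i psi(X_i - b'c_i); C is the n x p matrix with rows c_i'
    return C.T @ psi(X - C @ b)

def S_n(t, Y, C, psi=psi_huber):
    # (1.3): S_n(t) = sum_i c_i [psi(Y_i - n^{-1/2} t'c_i) - psi(Y_i)]
    n = len(Y)
    return C.T @ (psi(Y - C @ t / np.sqrt(n)) - psi(Y))

rng = np.random.default_rng(0)
n, p = 200, 2
C = rng.normal(size=(n, p))
Y = rng.normal(size=n)
print(S_n(np.zeros(p), Y, C))   # S_n(0) = 0 by construction
```

Solving M_n(b) = 0 in b then yields the M-estimator under (1.1), and S_n(t) records the local fluctuation of the defining statistic around the true parameter.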
Under suitable regularity conditions [viz., Jurečková (1983)], it is known that

(1.4)    sup{ n^{-1/2} ||S_n(t) + n^{-1/2} γ(ψ,F) C_n t|| : ||t|| ≤ M } → 0  in probability, as n → ∞,

where M is an arbitrary finite positive number, γ(ψ,F) is a suitable constant (depending on ψ and F) and C_n = Σ_{i=1}^n c_i c_i'; this is known as the uniform first order asymptotic linearity of M-statistics in the regression parameter. Results stronger than (1.4) have also been established under diverse regularity conditions [viz., Jurečková and Sen (1981a,b), Jurečková (1985), Jurečková and Sen (1987) and others]. Following the line of attack of the last two papers (dealing with the simple location model), we intend to study uniform second order asymptotic linearity in the general case of linear models. Precise formulation of these second order results depends very much on the nature of the score function ψ and the density function f(·) corresponding to the d.f. F. A more general formulation of the problem is needed to handle the case of studentized M-statistics (as will be introduced in the sequel). From the applications point of view, we may remark that the M-estimators are not scale-equivariant, and hence, when the scale parameter for the density f(·) is not known, there is a profound need for studentizing the score function (viz., taking ψ((X_i - b'c_i)/s_n) for a suitable estimator s_n of the unknown scale factor); we intend to treat this studentized case also in detail.
Concerning the score function, in practice we may usually have either of the following three types:
(i) ψ is a step-function having finitely many jumps of finite magnitudes;
(ii) ψ is absolutely continuous with a derivative ψ' which is a step-function;
(iii) ψ is absolutely continuous with an absolutely continuous derivative ψ'.
Since somewhat different techniques are needed to handle these three cases, we shall treat them separately. However, for each type of ψ, both the classical and studentized versions of the M-statistics are considered. Section 2 deals with the second order (uniform) asymptotic linearity results for case (i), Section 3 with case (ii) and Section 4 with case (iii). Some general remarks are appended in the last section. It will be seen that the rates of convergence as well as the needed regularity conditions are possibly different in the different cases.
2. Second Order Results When ψ is a Step-Function.

We assume that

(2.1)    ψ(x) = a_j,    for x ∈ (r_j, r_{j+1}],  j = 0, ..., k,

where a_0, ..., a_k are real (distinct) numbers and -∞ = r_0 < r_1 < ... < r_k < r_{k+1} = +∞, k being a positive integer. Further, we assume that there exists a positive definite (p.d.) matrix Q such that

(2.2)    n^{-1} C_n = n^{-1} Σ_{i=1}^n c_i c_i' → Q,  as n → ∞,

and

(2.3)    n^{-1} Σ_{i=1}^n ||c_i||^3 = O(1),  as n → ∞,

where ||·|| stands for the Euclidean norm. Concerning the d.f. F, we assume that in a neighbourhood of r_j, F has bounded derivatives f and f', for each j = 1, ..., k. Then, we have the following.
THEOREM 2.1. Under the assumptions made above, for any M > 0,

(2.4)    sup{ ||S_n(t) + n^{-1/2} γ C_n t|| : ||t|| ≤ M } = O_p(n^{1/4}),

where

(2.5)    γ = Σ_{j=1}^k (a_j - a_{j-1}) f(r_j).
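As a numerical check of (2.5) (our illustration; the step score and F = N(0,1) are arbitrary choices): for the single-step score ψ(x) = I(x > 0), (2.5) gives γ = f(0), and the local drift of E ψ(Y - δ) is indeed -γδ.

```python
import math

def Phi(x):
    # standard normal d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# single jump at r_1 = 0 with a_0 = 0, a_1 = 1, so (2.5): gamma = f(0)
gamma = 1.0 / math.sqrt(2.0 * math.pi)

# E psi(Y - delta) - E psi(Y) = (1 - Phi(delta)) - 1/2, which is ~ -gamma*delta
for delta in (0.1, 0.01):
    drift = (1.0 - Phi(delta)) - 0.5
    print(delta, drift, -gamma * delta)
```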
Side by side, we consider the case of studentized M-statistics, so that the proofs for both the cases can be formulated in a common vein. Corresponding to (1.2), we consider a studentized M-statistic

(2.6)    M*_n(b; s_n) = Σ_{i=1}^n c_i ψ( (X_i - b'c_i)/s_n ),    b ∈ R^p,

where s_n is a suitable estimator of the scale factor σ, and we assume that

(2.7)    n^{1/2} |s_n - σ| = O_p(1).

In this case, we extend (1.3) to

(2.8)    S_n(t,u) = Σ_{i=1}^n c_i [ ψ( (Y_i - n^{-1/2} t'c_i)/(σ e^{n^{-1/2} u}) ) - ψ(Y_i/σ) ],    t ∈ R^p, u ∈ R^1.

Then, we have the following.
THEOREM 2.2. As n → ∞,

(2.9)    sup{ ||S_n(t,u) + n^{-1/2} γ_1 C_n t + n^{-1/2} γ_2 (Σ_{i=1}^n c_i) u|| : ||t|| ≤ M, |u| ≤ M } = O_p(n^{1/4}),

where

(2.10)    γ_1 = Σ_{j=1}^k (a_j - a_{j-1}) f(σ r_j)   and   γ_2 = σ Σ_{j=1}^k (a_j - a_{j-1}) r_j f(σ r_j).
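The scale estimator s_n in (2.6)-(2.7) is left abstract in the paper. One routine concrete candidate (our example; the constant 0.6745 = Φ^{-1}(3/4) makes it consistent at the normal model, and root-n consistency as in (2.7) holds under regularity conditions not checked here) is the normalized median absolute deviation:

```python
import numpy as np

def mad_scale(x):
    # normalized MAD: a common candidate for s_n in (2.6)-(2.7)
    return np.median(np.abs(x - np.median(x))) / 0.6745

rng = np.random.default_rng(1)
for n in (100, 10000):
    s_n = mad_scale(rng.normal(0.0, 2.0, size=n))
    print(n, s_n)   # approaches sigma = 2 as n grows
```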
Proofs of the theorems. It suffices only to consider the proof of Theorem 2.2. For this purpose, we may assume without any loss of generality that ψ has a single step, i.e., ψ(x) is equal to 0 or 1 according as x is < or > r, where r may even be taken as equal to 0. Also, we may take σ = 1. Identifying the coordinatewise structure in (2.9), it also suffices to consider only the case of S_{n1}(t,u). Let us denote by

(2.11)    S^0_{n1}(t,u) = S_{n1}(t,u) - E S_{n1}(t,u)
                        = -Σ_{i=1}^n c_{i1} { I(Y_i ≤ rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - I(Y_i ≤ rσ) - F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) + F(rσ) }.

Let W = {W(s), s ∈ R^+} be a standard Wiener process, and define

(2.12)    L_i(t,u) = time for W(s) to exit the interval (-a_i, b_i),    i ≥ 1,

where

(2.13)    (a_i, b_i) = ( |c_{i1}| |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ)|,  |c_{i1}| [1 - |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ)|] ),
                          if c_{i1} [ rσ(e^{n^{-1/2}u} - 1) + n^{-1/2}c_i't ] > 0,
          (a_i, b_i) = ( |c_{i1}| [1 - |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ)|],  |c_{i1}| |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ)| ),
                          if c_{i1} [ rσ(e^{n^{-1/2}u} - 1) + n^{-1/2}c_i't ] < 0,

for i ≥ 1. Then, by an appeal to the Skorokhod embedding of the Wiener process, we have

(2.14)    n^{-1/4} S^0_{n1}(t,u) =_D n^{-1/4} W( Σ_{i=1}^n L_i(t,u) ) =_D W( n^{-1/2} Σ_{i=1}^n L_i(t,u) ),

where =_D stands for the equality in law or distribution. For n ≥ n_0, ||t|| ≤ M and |u| ≤ M,

(2.15)    |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ)|
              ≤ |F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ e^{n^{-1/2}u})| + |F(rσ e^{n^{-1/2}u}) - F(rσ)|
              ≤ M K n^{-1/2} (1 + ||c_i||),

where K is a constant. Hence,

(2.16)    L_i(t,u) ≤ V_i(M) = V_i^+(M) + V_i^-(M),    i ≥ 1,  ||t|| ≤ M, |u| ≤ M,

where

(2.17)    V_i^+(M) = time for W(s) to exit the interval ( -M K n^{-1/2} |c_{i1}| (1 + ||c_i||), |c_{i1}| ),

(2.18)    V_i^-(M) = time for W(s) to exit the interval ( -|c_{i1}|, M K n^{-1/2} |c_{i1}| (1 + ||c_i||) ).

Hence,

(2.19)    sup{ n^{-1/4} |S^0_{n1}(t,u)| : ||t||, |u| ≤ M } =_D sup{ |W( n^{-1/2} Σ_{i=1}^n L_i(t,u) )| : ||t||, |u| ≤ M }
              ≤ sup{ |W(s)| : 0 ≤ s ≤ n^{-1/2} Σ_{i=1}^n V_i(M) }.

By (2.17) and (2.18),

(2.20)    E[ n^{-1/2} Σ_{i=1}^n V_i(M) ] ≤ K_1 M,

where K_1 is a finite, positive constant. Hence, given ε > 0, there exists a C > 0, such that

(2.21)    P{ n^{-1/2} Σ_{i=1}^n V_i(M) > C } < ε/2,    ∀ n ≥ n_0.

Moreover, given ε > 0 and C > 0, there exists a K* > 0, such that

(2.22)    P{ sup{ |W(s)| : 0 ≤ s ≤ C } > K* } < ε/2.

Combining (2.19), (2.21) and (2.22), we obtain that for n ≥ n_0,

(2.23)    P{ sup{ n^{-1/4} |S^0_{n1}(t,u)| : ||t||, |u| ≤ M } > K* } ≤ ε/2 + ε/2 = ε,

so that

(2.24)    sup{ n^{-1/4} |S_{n1}(t,u) - E S_{n1}(t,u)| : ||t|| ≤ M, |u| ≤ M } = O_p(1).

It remains to show that, uniformly in ||t||, |u| ≤ M, as n → ∞,

(2.25)    A_n = n^{-1/4} | E S_{n1}(t,u) + n^{-1/2} γ_1 Σ_{i=1}^n c_{i1} c_i't + n^{-1/2} γ_2 Σ_{i=1}^n c_{i1} u | = o(1).

Note that by (2.8),

(2.26)    A_n = n^{-1/4} | Σ_{i=1}^n c_{i1} [ F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ) ] - n^{-1/2} γ_1 Σ_{i=1}^n c_{i1} c_i't - n^{-1/2} γ_2 Σ_{i=1}^n c_{i1} u |
              ≤ n^{-1/4} | Σ_{i=1}^n c_{i1} [ F(rσ e^{n^{-1/2}u} + n^{-1/2}c_i't) - F(rσ e^{n^{-1/2}u}) - n^{-1/2} c_i't f(rσ e^{n^{-1/2}u}) ] |
              + n^{-1/4} | Σ_{i=1}^n c_{i1} [ F(rσ e^{n^{-1/2}u}) - F(rσ) - rσ(e^{n^{-1/2}u} - 1) f(rσ) ] |
              + n^{-1/4} | Σ_{i=1}^n c_{i1} [ n^{-1/2} c_i't { f(rσ e^{n^{-1/2}u}) - f(rσ) } + rσ( e^{n^{-1/2}u} - 1 - n^{-1/2}u ) f(rσ) ] |
              = A_{n1} + A_{n2} + A_{n3}  (say),

where, for the single-step ψ, γ_1 = f(rσ) and γ_2 = rσ f(rσ). Since F(a_n + n^{-1/2}b) - F(a_n) - n^{-1/2} b f(a_n) = ∫_0^{n^{-1/2}b} [ f(a_n + v) - f(a_n) ] dv, using (2.2), (2.3) and the boundedness of f and f' (in a neighbourhood of rσ), we obtain by some simple steps that each of A_{n1}, A_{n2} and A_{n3} is O(n^{-1/4}), uniformly in ||t|| ≤ M and |u| ≤ M. This shows that (2.25) holds, and the proof of the theorem is complete.
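The embedding step (2.12)-(2.21) rests on the classical fact that the expected exit time of a standard Wiener process from (-a, b) equals ab. A small simulation (ours, not from the paper; the random-walk discretization slightly inflates the time through boundary overshoot) illustrates it:

```python
import numpy as np

def mean_exit_time(a, b, n_paths=2000, dt=1e-3, seed=2):
    # approximate W(s) by a Gaussian random walk with N(0, dt) steps;
    # record the first time the walk leaves (-a, b)
    rng = np.random.default_rng(seed)
    sd, total = np.sqrt(dt), 0.0
    for _ in range(n_paths):
        s, w = 0.0, 0.0
        while True:
            path = w + np.cumsum(rng.normal(0.0, sd, size=256))
            hit = np.flatnonzero((path <= -a) | (path >= b))
            if hit.size:
                total += s + (hit[0] + 1) * dt
                break
            w, s = path[-1], s + 256 * dt
    return total / n_paths

print(mean_exit_time(0.5, 1.0))   # close to a*b = 0.5
```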
3. Second Order Asymptotic Linearity (SOAL) When ψ is Absolutely Continuous but ψ' is a Step-Function.
As in Section 1, let Y_1, ..., Y_n be i.i.d. r.v.'s with a d.f. F, and define the stochastic processes S_n(t), t ∈ R^p, and S_n(t,u), (t,u) ∈ R^{p+1}, as in (1.3) and (2.8), respectively. In either case, we assume now that ψ is an absolutely continuous function with a derivative ψ' which is a step-function, namely,

(3.1)    ψ'(x) = a_ν,    for r_ν < x ≤ r_{ν+1},  ν = 0, 1, ..., k,

where a_0, ..., a_k are real numbers (a_0 = a_k = 0) and -∞ = r_0 < r_1 < ... < r_k < r_{k+1} = +∞. Thus, ψ is a continuous, piecewise linear function and it is a constant for x ≤ r_1 or x ≥ r_k (as is the case with the Huber or the Hampel score functions). Regarding the c_i, we assume that (2.2) holds, and we strengthen (2.3) to

(3.2)    n^{-1} Σ_{i=1}^n ||c_i||^4 = O(1),  as n → ∞.

Further, we replace (2.5) by

(3.3)    γ = ∫ ψ'(x) dF(x).

THEOREM 3.1. Suppose that F has a bounded derivative f in a neighbourhood of r_1, ..., r_k, and the c_i satisfy the conditions mentioned above. Then

(3.4)    sup{ ||S_n(t) + n^{-1/2} γ C_n t|| : ||t|| ≤ M } = O_p(1),  as n → ∞.
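For the piecewise linear scores of this section, (3.3) is explicit: if ψ' = 1 on [r_1, r_2] and 0 outside (the Huber case), then γ = F(r_2) - F(r_1). A quick check (our illustration; F = N(0,1) and r_2 = -r_1 = 1.345 are arbitrary choices):

```python
import math

def Phi(x):
    # standard normal d.f.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

r1, r2 = -1.345, 1.345
gamma_closed = Phi(r2) - Phi(r1)   # (3.3) for psi' = 1 on [r1, r2]

# crude Riemann sum for integral psi'(x) dF(x) against the closed form
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
h, lo = 1e-3, -8.0
gamma_num = sum(phi(lo + h * j) * h
                for j in range(16000) if r1 <= lo + h * j <= r2)
print(gamma_closed, gamma_num)
```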
We shall find it convenient to consider the studentized case side by side. We replace (2.10) by

(3.5)    γ_1 = σ^{-1} ∫ ψ'(x/σ) dF(x)   and   γ_2 = σ^{-1} ∫ x ψ'(x/σ) dF(x).

THEOREM 3.2. Under the assumptions made above, as n → ∞,

(3.6)    sup{ ||S_n(t,u) + n^{-1/2} γ_1 C_n t + n^{-1/2} γ_2 (Σ_{i=1}^n c_i) u|| : ||t|| ≤ M, |u| ≤ M } = O_p(1).
Proof of the theorems. As (3.4) is a particular case of (3.6), it suffices to show that (3.6) holds. Further, as ψ has been assumed to be flat at the two tails, and ψ' has finitely many steps, it suffices to consider the particular case where

(3.7)    ψ(x) = r_1, for x < r_1;   = x, for r_1 ≤ x ≤ r_2;   = r_2, for x > r_2,

with -∞ < r_1 < r_2 < +∞. Also, without any loss of generality, we may put σ = 1 and consider only the case of S_{n1}(t,u). Further, by virtue of (2.2) and (3.2), for n adequately large,

(3.8)    | Σ_{i=1}^n c_{i1} [ ψ( e^{-n^{-1/2}u}(Y_i - n^{-1/2}c_i't) ) - ψ( Y_i - n^{-1/2}(uY_i + c_i't) ) ] | = O(1),

uniformly in |u| ≤ M and ||t|| ≤ M, for every finite M. [Note that ψ in (3.7) is first order Lipschitz.] As such, we shall replace ψ( (Y_i - n^{-1/2}c_i't)/e^{n^{-1/2}u} ) by ψ( Y_i - n^{-1/2}(uY_i + c_i't) ), for every i (= 1, ..., n), in (2.8). With these adjustments, we may note that for any pair (t_1, u_1) and (t_2, u_2) of distinct points,

(3.9)    Var( S_{n1}(t_1,u_1) - S_{n1}(t_2,u_2) ) ≤ Σ_{i=1}^n c_{i1}^2 E{ ψ(Y_i - n^{-1/2}(u_1 Y_i + c_i't_1)) - ψ(Y_i - n^{-1/2}(u_2 Y_i + c_i't_2)) }^2
              ≤ K* { (u_1 - u_2)^2 + ||t_1 - t_2||^2 },

uniformly in |u_j| ≤ M and ||t_j|| ≤ M, for j = 1, 2, where K* < ∞; the last step in (3.9) is again a consequence of the Lipschitz character of ψ.
In the same vein, note that

(3.10)    | E[ S_{n1}(t_1,u_1) - S_{n1}(t_2,u_2) ] + n^{-1/2} Σ_{i=1}^n c_{i1} [ γ_1 (t_1 - t_2)'c_i + γ_2 (u_1 - u_2) ] | ≤ K** { ||t_1 - t_2|| + |u_1 - u_2| },

uniformly in |u_j| ≤ M and ||t_j|| ≤ M, for j = 1, 2, where K** < ∞. By (3.9) and (3.10), we have

(3.11)    E[ S_{n1}(t_1,u_1) - S_{n1}(t_2,u_2) + n^{-1/2} Σ_{i=1}^n c_{i1} { γ_1 (t_1 - t_2)'c_i + γ_2 (u_1 - u_2) } ]^2
              ≤ K_0 { (u_1 - u_2)^2 + ||t_1 - t_2||^2 },

uniformly in |u_j| ≤ M and ||t_j|| ≤ M, j = 1, 2, where K_0 < ∞.
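The bounds (3.8)-(3.11) rest on the ψ of (3.7) being first order Lipschitz with constant 1. This is immediate analytically, and easy to spot-check numerically (our sketch; the truncation points are arbitrary):

```python
import random

r1, r2 = -1.0, 2.0
def psi(x):
    # psi of (3.7): r1 for x < r1, x on [r1, r2], r2 for x > r2
    return min(max(x, r1), r2)

random.seed(3)
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
assert all(abs(psi(x) - psi(y)) <= abs(x - y) + 1e-12 for x, y in pairs)
print("|psi(x) - psi(y)| <= |x - y| on 1000 random pairs")
```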
The process in (3.6) may not vanish along the lower boundary (where one or more of the coordinates of (t,u) is null), although at (t,u) = 0 it vanishes. In order to make use of some existing results on multi-parameter stochastic processes, we first introduce

(3.12)    S*_n(t,u) = Σ_{ε_0, ε_1, ..., ε_p ∈ {0,1}} (-1)^{ε_0 + ε_1 + ... + ε_p} S_{n1}( t_1 - ε_1 t_1, ..., t_p - ε_p t_p, u - ε_0 u ).

Note that for S_{n1}(t,u) in (3.6), the centring part is linear, and hence, for S*_n(t,u) the corresponding centring part is null. As such, we may rewrite S_{n1}(t,u) - E S_{n1}(t,u) as a linear combination of 2^{p+1} processes of the type (3.12) of dimension ≤ p+1, and S_{n1}(0,0) appears in the last term in this set (but S_{n1}(0,0) = 0). Each of these processes vanishes along its lower boundary. Hence, it suffices to show that each of the processes of the type (3.12) is uniformly bounded in probability (over the domain T = [-M,M]^{p+1}). For this purpose, we denote by E = Diag(ε_1, ..., ε_p) and let

(3.13)    ψ*(Y_i; t, u) = Σ_{ε_0, ε_1, ..., ε_p ∈ {0,1}} (-1)^{ε_0 + ε_1 + ... + ε_p} ψ( (Y_i - n^{-1/2} c_i'(I - E)t) / e^{n^{-1/2} u(1 - ε_0)} ),

for i = 1, ..., n.
Note that, by (3.2),

(3.14)    | (Y_i - n^{-1/2} c_i'(I - E)t) e^{-n^{-1/2} u(1 - ε_0)} - Y_i | ≤ 2M n^{-1/2} ( |Y_i| + ||c_i|| ),

uniformly in i (= 1, ..., n), |u| ≤ M and ||t|| ≤ M, whatever E may be. Over the interval (-∞, r_1), ψ is constant, while over [r_1, r_2] and over (r_2, ∞), ψ is linear, so that the corresponding alternating sum ψ* is equal to 0 whenever all the arguments in (3.13) fall in a common interval of linearity. Thus, ψ* can only be different from 0 in a small (i.e., O(n^{-1/2}(1 + ||c_i||))) neighbourhood of r_1 and r_2. If we denote by r = max{ |r_1|, |r_2| }, and make use of the Lipschitz character of ψ, then we can bound ψ* by C M n^{-1/2} ( r + ||c_i|| ) I_i, where C (< ∞) is a finite constant and I_i is 1 or 0 according as Y_i is in this neighbourhood of r_1 (or r_2) or not. Note that the indicator variables I_i are all independent. Using (3.12), (3.13) and the above bound, we obtain that
(3.15)    sup{ |S*_n(t,u)| : ||t|| ≤ M, |u| ≤ M } ≤ Σ_{i=1}^n |c_{i1}| C M n^{-1/2} ( r + ||c_i|| ) I_i,

where the I_i are (nonnegative) indicator variables with

(3.16)    E I_i ≤ C* ( r + ||c_i|| ) n^{-1/2},    i = 1, ..., n,

where C* (< ∞) depends on M and the pdf f(·) at r_1 and r_2. Taking expectation on both sides of (3.15) and using (3.16) along with (3.2), we conclude that E[ sup{ |S*_n(t,u)| : ||t|| ≤ M, |u| ≤ M } ] = O(1), while using (3.2) and the binomial character of the I_i, it follows that the right hand side of (3.15) has a variance of the order n^{-1/2} (which converges to 0 as n → ∞), so that it is bounded in probability. Hence, the left hand side of (3.15) is bounded in probability, and this completes the proof of the theorem.
In passing, we may remark that in (3.7), we have taken r_1 and r_2 finite. It is possible to choose r_1 = -∞ (or r_2 = +∞), provided we assume that, for the density f(y) of Y_1, y f(y) converges to 0 as y → -∞ (or +∞); this is insured by the finiteness of γ in (3.3) (when ψ' is different from 0 for all finite x), which in turn is guaranteed by the existence of the first moment of Y.
4. SOAL for Absolutely Continuous ψ and Absolutely Continuous ψ'.

We shall extend the results of Section 3 to the case where ψ' is itself absolutely continuous, and we denote the derivative of ψ' by ψ''. Naturally, in this context, we may need some extra conditions on ψ' and ψ''. Let us denote by

(4.1)    ψ''_δ(y) = sup{ |ψ''(y + u)| : |u| ≤ δ },    δ > 0.

THEOREM 4.1. Suppose that the c_i satisfy (2.2) and (3.2), γ, defined by (3.3), is finite, and that for some ν > 1 and δ_0 > 0, E[ { ψ''_δ(Y) }^ν ] < ∞, for every δ : 0 < δ < δ_0. Then (3.4) holds.
Let us then define, for δ > 0,

(4.2)    ψ̄'_δ(y) = sup{ |ψ'( e^{-v}(y + u) )| : |u| ≤ δ, |v| ≤ δ },

and ψ̄''_δ(·) is defined analogously.

THEOREM 4.2. Suppose that the c_i satisfy (2.2) and (3.2), γ_1 and γ_2, defined by (3.5), are finite, and for some ν > 1 and δ_0 > 0,

(4.3)    E[ { (1 + |Y|) ψ̄'_δ(Y) }^ν ] < ∞   and   E[ { (1 + Y^2) ψ̄''_δ(Y) }^ν ] < ∞,    for all δ : 0 < δ < δ_0.

Then (3.6) holds.
Proof of the theorems. Note that for the studentized case, we need (4.3), which is more stringent than the parallel condition in Theorem 4.1. If ψ' can be expressed as a difference of two monotone functions, then in (4.3), ψ̄''_δ(·) may also be replaced by ψ̄'_δ(·). Further, in most of the cases, ψ' is decreasing in the two tails, and hence, the second condition in (4.3) may be less restrictive than the usual moment condition on the Y needed in the studentized case. In this proof also, we consider specifically the case of S_{n1}(t,u) and take σ = 1. For simplicity of the proof, we treat the case of Theorem 4.1 in detail, and only mention the modifications needed for the other theorem.
As a first step, we adopt the representations in (3.12) and (3.13) (without the index u). Then, note that, for each i (= 1, ..., n),

(4.4)    ψ( Y_i - n^{-1/2} c_i'(I - E)t ) = ψ(Y_i) - n^{-1/2} c_i'(I - E)t ψ'(Y_i) + (2n)^{-1} { c_i'(I - E)t }^2 ψ''( Y_i - θ_i n^{-1/2} c_i'(I - E)t ),

for some 0 < θ_i < 1. Further, note that, letting E be as in before (3.13) (with ε_0 deleted),

(4.5)    Σ_{{ε_j = 0,1, 1 ≤ j ≤ p}} (-1)^{Tr E} = 0   and   Σ_{{ε_j = 0,1, 1 ≤ j ≤ p}} (-1)^{Tr E} c_i'(I - E)t = 0,

so that

(4.6)    ψ*(Y_i; t) = (2n)^{-1} Σ_{{ε_j = 0,1, 1 ≤ j ≤ p}} (-1)^{Tr E + 1} { c_i'(I - E)t }^2 ψ''( Y_i - θ_i n^{-1/2} c_i'(I - E)t ).

Since by (3.2), n^{-1/2} |c_i't| is O(n^{-1/4}) uniformly in i (= 1, ..., n) and ||t|| ≤ M, it follows from (4.6) that

(4.7)    sup{ |ψ*(Y_i; t)| : ||t|| ≤ M } ≤ (2n)^{-1} M^2 2^p ||c_i||^2 ψ''_{δ_n}(Y_i),    with δ_n = O(n^{-1/4}), as n → ∞.
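The cancellation mechanism in (4.4)-(4.6) is that an alternating sum over shifted arguments kills the constant and linear Taylor terms, leaving a second-order (ψ'') term of the order of the squared shift. A numerical sketch for p = 2 (ours, not the paper's; tanh is an arbitrary smooth stand-in for ψ):

```python
import math

def alt_sum(psi, y, h, a, b):
    # 2^2-fold alternating sum over shifts h*a, h*b, as in (3.13)/(4.6)
    return psi(y - h*(a + b)) - psi(y - h*a) - psi(y - h*b) + psi(y)

psi = math.tanh
y, a, b = 0.3, 1.0, -0.7
for h in (1e-1, 1e-2, 1e-3):
    d = alt_sum(psi, y, h, a, b)
    print(h, d, d / (h * h * a * b))   # ratio approaches psi''(y)
```

The ratio stabilizes near ψ''(y) = -2 tanh(y)(1 - tanh^2(y)) ≈ -0.533, confirming the O(h^2) size of the surviving term in (4.6).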
Thus, parallel to (3.15), here we have

(4.8)    sup{ |S*_n(t)| : ||t|| ≤ M } ≤ M^2 2^p (2n)^{-1} Σ_{i=1}^n |c_{i1}| ||c_i||^2 ψ''_{δ_n}(Y_i),    ∀ n ≥ n_0 (so that δ_n < δ_0).

Under the assumed condition that E[ { ψ''_δ(Y) }^ν ] < ∞ for some ν > 1, and (3.2), we may use the Markov law of large numbers and verify that the right hand side of (4.8) converges in probability to a finite positive constant. This completes the proof of Theorem 4.1.
To prove Theorem 4.2, we first show that (3.8) holds uniformly in ||t|| ≤ M and |u| ≤ M, in probability, when n is large. Towards this, note that

(4.9)    e^{-a}(y - b) = (y - ay - b) + [ y( e^{-a} - 1 + a ) + b( 1 - e^{-a} ) ].

As such, we may write

    ψ( (Y_i - n^{-1/2} c_i't)/e^{n^{-1/2}u} ) = ψ( Y_i - n^{-1/2}( uY_i + c_i't ) ) + [ Y_i( e^{-n^{-1/2}u} - 1 + n^{-1/2}u ) + n^{-1/2} c_i't ( 1 - e^{-n^{-1/2}u} ) ] ψ'( Y*_{ni} ),    i = 1, ..., n,

where Y*_{ni} lies between (Y_i - n^{-1/2}c_i't)/e^{n^{-1/2}u} and Y_i - n^{-1/2}( c_i't + uY_i ). Thus, proceeding as in before (and making use of (3.2)), we obtain that

(4.10)    sup{ | Σ_{i=1}^n c_{i1} [ ψ( (Y_i - n^{-1/2}c_i't)/e^{n^{-1/2}u} ) - ψ( Y_i - n^{-1/2}( c_i't + uY_i ) ) ] | : ||t|| ≤ M, |u| ≤ M }
               ≤ C { n^{-1} Σ_{i=1}^n |c_{i1}| { |Y_i| + ||c_i|| } ψ̄'_{δ_n}(Y_i) },    ∀ n ≥ n_0 (so that δ_n < δ_0),

where C is a finite positive constant depending on M. As such, using (3.2), the first condition in (4.3) and the Markov law of large numbers, it follows that the right hand side of (4.10) stochastically converges to a finite positive quantity, and hence, (3.8) holds, in probability. Having shown (4.10), we may replace, in S_{n1}(t,u) and S*_n(t,u), the ψ( (Y_i - n^{-1/2}c_i't)/e^{n^{-1/2}u} ) by ψ( Y_i - n^{-1/2}( c_i't + uY_i ) ). With this replacement, we may proceed as in (4.7) through (4.8), with the only change that in the right hand side of (4.8), we need to replace ψ''_{δ_n}(Y_i) by (Y_i^2 + 1) ψ̄''_{δ_n}(Y_i). Once this has been done, we appeal to the second condition in (4.3), and again, by reference to (3.2) and the Markov law of large numbers, we conclude that the right hand side of (4.8), as amended here, is O_p(1). This completes the proof of the theorem.
5. Some General Remarks.

It may be remarked that in Theorems 2.1 through 4.2, we have not attempted to establish the weak convergence of the appropriate stochastic processes related to the S_n(t) or S_n(t,u) to some suitable (drifted) Gaussian (or related) processes. If we were to do so, then in addition to the established uniform boundedness, we would have to show that (i) the finite-dimensional distributions (f.d.d.) converge, and (ii) the stochastic processes under consideration are compact or tight. The first aspect is relatively simple, and can be done along the lines of Jurečková and Sen (1981a,b, 1984) and others. The second aspect is, however, relatively more involved. Recall that a process may be uniformly bounded (in probability) without being tight (although tightness implies the uniform boundedness in probability). For Theorems 2.1 and 2.2, we need to multiply both sides of (2.4) or (2.9) by n^{-1/4}, and then the tightness can be proved along the lines of Jurečková and Sen (1987). For the processes related to Theorems 3.1 and 3.2, note that ψ'' exists excepting at the jump points of ψ' (which are finite in number), and is equal to 0, and hence, the proof of tightness is not that involved. In the case of Theorems 4.1 and 4.2, the situation is slightly different. Here ψ'' may not be bounded, and even so, it may not be equicontinuous. As such, we may need an additional condition [related to (4.3)] under which tightness can be established in a relatively simpler manner. Let us introduce a compactness condition on ψ'' by defining

(5.1)    ψ''*_δ(y) = sup{ |ψ''(y + u) - ψ''(y)| : |u| ≤ δ },    δ > 0,

and replace the first condition in (4.3) by

(5.2)    E[ |Y ψ''*_δ(Y)|^ν ] < ∞,    for all δ : 0 < δ < δ_0;

a very similar modification can be posed for the second condition in (4.3) and for (4.2). In addition to (5.2), we may also need E[ |Y ψ''(Y)|^ν ] < ∞ (or E[ |Y^2 ψ''(Y)|^ν ] < ∞ for Theorem 4.2) to apply the Markov law of large numbers on the leading term; then, proceeding very much along the same line as in Section 4, we can establish the tightness property.
Our main interest in this study is to show that, as far as the uniform boundedness (in probability) result is concerned, such additional regularity conditions may not be needed, and detailed results can be obtained under appropriate regularity conditions pertaining to the nature of the score function ψ. The case of the studentized M-estimators is of especial interest: we have shown precisely the effect of studentization on the uniform SOAL results for M-statistics, and this is very useful in the study of the asymptotic properties of one-step M-estimators [viz., Jurečková and Sen (1984)].
AMS 1980 Subject Classifications: 62G05, 62J05, 60F99
Key words and phrases: M-estimator; second order uniform asymptotic linearity;
tightness.
References

[1] Bickel, P.J. and Wichura, M.J. Convergence criteria for multiparameter stochastic processes and some applications. Ann. Math. Statist. 42 (1971), 1656-1670.

[2] Billingsley, P. Convergence of Probability Measures. J. Wiley, New York, 1968.

[3] Breiman, L. Probability. Addison-Wesley, Reading, Massachusetts, 1968.

[4] Jurečková, J. and Sen, P.K. Invariance principles for some stochastic processes relating to M-estimators and their role in sequential statistical inference. Sankhya A 43 (1981), 190-210.

[5] Jurečková, J. and Sen, P.K. Sequential procedures based on M-estimators with discontinuous score functions. J. Statist. Planning Inference 5 (1981), 253-266.

[6] Jurečková, J. Robust estimators of location and regression parameters and their second order asymptotic relations. Trans. 9th Prague Conf. on Inform. Th., Statist. Dec. Functions and Random Processes, pp. 19-32. Reidel, Dordrecht, 1983.

[7] Jurečková, J. and Sen, P.K. On adaptive scale-equivariant M-estimators in linear models. Statist. & Decisions 2 (1984), Suppl. Issue No. 1.

[8] Jurečková, J. Asymptotic representation of L-estimators and their relations to M-estimators. Sequential Analysis 5 (1986), 317-338.

[9] Jurečková, J. and Sen, P.K. An extension of Billingsley's uniform boundedness theorem to higher dimensional M-processes. Kybernetika 23 (1987), 382-387.

[10] Portnoy, S. Tightness of the sequence of empiric c.d.f. processes defined from regression fractiles. Robust and Nonlinear Time Series Analysis (J. Franke, W. Härdle, D. Martin, eds.), pp. 231-246. Springer-Verlag, New York, 1983.
J. Jurečková
Charles University
Department of Statistics
Sokolovská 83, 186 00 Prague 8
Czechoslovakia

P.K. Sen
University of North Carolina
Department of Biostatistics
Chapel Hill, N.C. 27514
U.S.A.