Sen, Pranab Kumar (1983). "On Sequential Nonparametric Estimation of Multivariate Location."

ON SEQUENTIAL NONPARAMETRIC ESTIMATION OF MULTIVARIATE LOCATION

by

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1452
October 1983

ON SEQUENTIAL NONPARAMETRIC ESTIMATION OF MULTIVARIATE LOCATION

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina
Chapel Hill, NC 27514 USA
For the multivariate one-sample location model (relating to a diagonally symmetric distribution), sequential nonparametric (point as well as interval) estimators based on appropriate rank statistics are considered and their asymptotic properties are studied. In this context, the asymptotic risk-efficiency of the proposed estimators and the asymptotic normality of the associated stopping times are established. These results rest on the moment-convergence of the permutation dispersion matrix to its population counterpart; this is studied as well. A brief review of some other robust procedures is also included.
INTRODUCTION

Let {X_i; i ≥ 1} be a sequence of independent and identically distributed (i.i.d.) random vectors (r.v.) with a continuous distribution function (d.f.)

    F(x; θ) = F(x - θ),   x ∈ E^p,                                          (1)

defined on E^p, for some p ≥ 1, where θ ∈ Θ ⊆ E^p. It is assumed that the p marginal d.f.'s of F, denoted by F_[1], ..., F_[p], respectively, are all symmetric about 0; in fact, F is assumed to be diagonally symmetric about the origin. Thus, θ = (θ_1, ..., θ_p)' is the vector of location parameters, and we are interested in the estimation of θ.

Based on a sample (X_1, ..., X_n) of size n, let T_n = (T_{n1}, ..., T_{np})' be an estimator of θ. The loss incurred due to estimating θ by T_n is denoted by

    L(n, c; T_n, θ) = ρ(T_n, θ) + cn,   (c > 0)                             (2)

where ρ: E^p × E^p → [0, ∞) is a suitable metric and c is the cost of sampling per unit observation. Some commonly adopted forms of ρ(·) are the following:

    ρ(a, b) = max{ |a_j - b_j| : 1 ≤ j ≤ p }            (MAX-norm)
    ρ(a, b) = p^{-1} Σ_{j=1}^p |a_j - b_j|              (MAD-norm)
    ρ(a, b) = (a - b)' Q (a - b)                        (QUAD-norm)         (3)

where Q is some given positive semi-definite (p.s.d.) matrix.
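As a concrete illustration of (3), the following minimal Python sketch (ours, not part of the original paper; the function names are chosen for this note) evaluates the three metrics for a pair of p-vectors.

    import numpy as np

    def max_norm(a, b):
        # MAX-norm of (3): largest absolute coordinatewise difference
        return float(np.max(np.abs(a - b)))

    def mad_norm(a, b):
        # MAD-norm of (3): average absolute coordinatewise difference
        return float(np.mean(np.abs(a - b)))

    def quad_norm(a, b, Q):
        # QUAD-norm of (3): quadratic form with a given p.s.d. matrix Q
        d = a - b
        return float(d @ Q @ d)

    # example: p = 2, Q = identity
    a, b = np.array([1.0, 2.0]), np.array([0.5, 2.5])
    print(max_norm(a, b), mad_norm(a, b), quad_norm(a, b, np.eye(2)))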
Corresponding to (2), the risk in estimating θ by T_n is given by

    λ_F(n, c; θ) = E L(n, c; T_n, θ) = E ρ(T_n, θ) + cn.                    (4)

Note that the risk in (4) depends on F through the distribution of T_n and on the metric ρ(·). If T_n is translation-invariant (as will be the case in our subsequent analysis), then ν_n(F) = E ρ(T_n, θ) and λ_F(n, c) = λ_F(n, c; θ) do not depend on θ. Suppose now that there exists a positive integer n_0, such that ν_n(F) exists for every n ≥ n_0 and ν_n(F) → 0 as n → ∞. For every c > 0, we may then define

    n_c^0 = min{ n ≥ n_0 : λ_F(n, c) ≤ λ_F(m, c)  for all m ≥ n_0 }.        (5)

Then T_{n_c^0} is the minimum risk estimator (MRE) of θ. Note that ν_n(F) generally depends on F (through some other functionals of F), so that when F is not completely specified (as is the case in practice), no single n_c^0 may lead to the MRE simultaneously for all F ∈ 𝓕, a class of d.f.'s. However, suitable sequential procedures may be incorporated to achieve this goal in an asymptotic setup where c is made to converge to 0.
Towards this, we assume that for some q (> 0), as n increases,

    n^q ν_n(F) → ν(F),   0 < ν(F) < ∞,                                      (6)

and, further, that there exists a sequence {d_n} of consistent estimators of ν(F). [For the first two cases in (3), q is typically equal to ½, while for the last one it is equal to 1.]
Note that for c → 0, (4) may be rewritten as

    λ_F(n, c) = n^{-q}{ ν(F) + o(1) } + cn,                                  (7)

so that asymptotically, as c → 0, we have

    n_c^0 ≈ { c^{-1} q ν(F) }^{1/(1+q)},                                     (8)

    λ_F(n_c^0, c) ≈ (1 + q^{-1}) c n_c^0.                                    (9)
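The relations (8)-(9) follow from elementary calculus applied to the approximation (7); the short derivation below is our own restatement of that step.

    % Minimizing the approximate risk lambda(n) = n^{-q} nu(F) + c n over n:
    \frac{d}{dn}\left\{ n^{-q}\nu(F) + cn \right\} = -q\,n^{-q-1}\nu(F) + c = 0
    \quad\Longrightarrow\quad
    n_c^0 \approx \left\{ c^{-1} q\,\nu(F) \right\}^{1/(1+q)} .
    % Substituting back, the two terms of the risk are c n_c^0 / q and c n_c^0:
    \lambda_F(n_c^0, c) \approx (n_c^0)^{-q}\nu(F) + c\,n_c^0
      = q^{-1} c\, n_c^0 + c\, n_c^0 = (1 + q^{-1})\, c\, n_c^0 .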
Motivated by (8) and the stochastic convergence of {d_n} (to ν(F)), for every c (> 0), we may define a stopping variable

    N_c = inf{ n ≥ n_0 : n^{1+q} ≥ c^{-1} q (d_n + n^{-h}) },                (10)

where h (> 0) is a suitable constant, to be chosen later on. Based on the stopping rule N_c, defined for every c > 0, we consider the (sequential) point estimator T_{N_c} of θ, and we denote the corresponding risk by

    λ_ρ(c) = E ρ(T_{N_c}, θ) + c E N_c,   c > 0.                             (11)
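For concreteness, here is a minimal Python sketch (ours) of the stopping rule (10); the argument nu_hat stands for a user-supplied routine returning the consistent estimator d_n of ν(F), and sample_stream is any iterator of observations — both names are ours, not the paper's.

    def stopping_time(sample_stream, nu_hat, c, q, h, n0):
        """Sequential stopping rule N_c of (10): sample one observation at a
        time and stop at the first n >= n0 with n**(1+q) >= (q/c)*(d_n + n**(-h)).

        sample_stream : iterator yielding observations X_1, X_2, ...
        nu_hat        : function mapping the current sample (a list) to d_n
        """
        data = []
        n = 0
        for x in sample_stream:
            data.append(x)
            n += 1
            if n < n0:
                continue
            d_n = nu_hat(data)
            if n ** (1 + q) >= (q / c) * (d_n + n ** (-h)):
                return n, data   # N_c and the accumulated sample
        raise RuntimeError("stream exhausted before the stopping rule was met")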
We say that T_{N_c} is an asymptotically MRE of θ if

    λ_ρ(c) / λ_F(n_c^0, c) → 1   as c → 0.                                   (12)

In the literature, this is referred to as the (first order) asymptotic risk-efficiency (A.R.E.) property. Besides this A.R.E., we shall also study the following results: as c → 0,

    N_c / n_c^0 → 1,   in the first mean,                                    (13)

    (n_c^0)^{1/2} (T_{N_c} - θ) → N(0, Γ),                                   (14)

where

    Γ = lim_{n→∞} E{ n (T_n - θ)(T_n - θ)' }.                                (15)

[Γ is also the dispersion matrix of the asymptotic multinormal d.f. of n^{1/2}(T_n - θ).] Under some additional regularity conditions, we will also have the asymptotic normality of the stopping time, i.e.,

    (n_c^0)^{-1/2} (N_c - n_c^0) → N(0, γ²),   as c → 0,                     (16)

where γ (0 < γ < ∞) is a suitable constant (depending on q and F).

The second problem is to provide a confidence set I_n (⊂ E^p), based on a sample of size n, such that

    P( θ ∈ I_n | θ ) ≥ 1 - α   (0 < α < 1),                                  (17)

    maximum diameter of I_n ≤ 2d   (d > 0),                                  (18)

where α and d are preassigned. For this problem too, fixed sample size procedures may not work out (for all F ∈ 𝓕), and hence one may take recourse to a sequential scheme [viz., Chapter 10 of Sen (1981)], where (17)-(18) hold in an asymptotic setup: d → 0. Asymptotic properties of the associated stopping times are studied here as well.
In this paper, we specifically consider the case of R-estimators of location; some general remarks pertaining to some other robust (viz., M- and L-) estimators of location are made in the concluding section. Asymptotic properties of these multivariate R-estimators are studied in Section 2. Section 3 contains some asymptotic results on the permutation dispersion matrix which are needed in Section 4 in the derivation of the asymptotic results in (12), (13), (14) and (16). Section 5 deals with the confidence set problem. Throughout the paper, we consider the genuinely multivariate case (i.e., p ≥ 2); the univariate case has already been studied in detail in Chapter 10 of Sen (1981) and in Jurečková and Sen (1982), among other places. The last section deals with some other robust estimators of θ (e.g., M- and L-estimators), and only the general setup is discussed there.
MULTIVARIATE R-ESTIMATORS: ASYMPTOTIC PROPERTIES

For each j (= 1, ..., p), let φ_j* = {φ_j*(u), 0 < u < 1} be a non-decreasing, non-constant, skew-symmetric and square-integrable score function, and let

    φ_j = { φ_j(u) = φ_j*((1 + u)/2),  0 < u < 1 },   j = 1, ..., p.          (19)
For a sample X_i = (X_{i1}, ..., X_{ip})', 1 ≤ i ≤ n, of size n, we then introduce a set of scores a_{nj}(i) by letting

    a_{nj}(i) = E φ_j(U_{ni})   or   φ_j(i/(n+1)),   1 ≤ i ≤ n;  j = 1, ..., p,   (20)

where U_{n1} < ... < U_{nn} are the ordered r.v.'s of a sample of size n from the uniform (0,1) d.f. Further, for every real b, let R⁺_{nij}(b) be the rank of |X_{ij} - b| among |X_{1j} - b|, ..., |X_{nj} - b|, for i = 1, ..., n and j = 1, ..., p, and let S_n(b) = (S_{n1}(b_1), ..., S_{np}(b_p))' be defined by

    S_{nj}(b_j) = Σ_{i=1}^n sgn(X_{ij} - b_j) a_{nj}(R⁺_{nij}(b_j)),   j = 1, ..., p.   (21)

Note that for each j (= 1, ..., p), S_{nj}(b) is non-increasing in b, and S_{nj}(θ_j) has a distribution symmetric about 0. Let then

    θ̂_{nj} = ½ ( sup{b: S_{nj}(b) > 0} + inf{b: S_{nj}(b) < 0} ),   for j = 1, ..., p,   (22)

and let

    θ̂_n = (θ̂_{n1}, ..., θ̂_{np})'.                                            (23)

Then θ̂_n is a translation-invariant, median-unbiased, robust and consistent estimator of θ [viz., Chapter 6 of Puri and Sen (1971)].
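The statistics (21) and the estimator (22)-(23) are easy to compute coordinatewise; the sketch below (ours) uses the Wilcoxon signed-rank scores a_{nj}(i) = i/(n+1) as one particular instance of (20) and approximates the sign-change point in (22) by bisection on the non-increasing step function S_{nj}(·).

    import numpy as np

    def wilcoxon_scores(n):
        # scores a_{nj}(i) = phi_j(i/(n+1)) with phi_j(u) = u (Wilcoxon signed-rank case)
        return np.arange(1, n + 1) / (n + 1)

    def S_nj(x, b, scores):
        # aligned signed-rank statistic of (21) for one coordinate
        d = x - b
        ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |x_i - b|
        return float(np.sum(np.sign(d) * scores[ranks - 1]))

    def r_estimate(x, tol=1e-8):
        # coordinatewise R-estimator of (22): midpoint of the sign-change interval
        # of the non-increasing step function b -> S_nj(b), located by bisection
        scores = wilcoxon_scores(len(x))
        lo, hi = np.min(x), np.max(x)          # S >= 0 at lo, S <= 0 at hi
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            s = S_nj(x, mid, scores)
            if s > 0:
                lo = mid
            elif s < 0:
                hi = mid
            else:
                return mid
        return 0.5 * (lo + hi)

    def multivariate_r_estimate(X):
        # theta_hat_n of (23): apply the univariate estimator to each coordinate
        return np.array([r_estimate(X[:, j]) for j in range(X.shape[1])])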
Let B = Diag(B_1, ..., B_p) be defined by letting

    B_j = ∫_{-∞}^{∞} (d/dx) φ_j*(F_[j](x)) dF_[j](x),   j = 1, ..., p.        (24)

Further, let V = ((ν_{jj'})) be defined by letting

    ν_{jj'} = ∫∫ φ_j*(F_[j](x)) φ_{j'}*(F_[j'](y)) dF_[jj'](x, y),             (25)

for j, j' = 1, ..., p, where F_[jj'] is the bivariate d.f. of (X_{ij} - θ_j, X_{ij'} - θ_{j'}), j ≠ j' = 1, ..., p. Note that the ν_{jj} do not depend on F, but the ν_{jj'}, j ≠ j', do depend on F through the joint d.f.'s F_[jj'], j ≠ j'. Finally, let

    Γ = ((γ_{jj'})) = B^{-1} V B^{-1} = (( ν_{jj'} / (B_j B_{j'}) )).          (26)

Then, under quite general conditions [cf. Puri and Sen (1971, Ch. 6)], we have, for n → ∞,

    n^{1/2} (θ̂_n - θ) → N_p(0, Γ).                                            (27)
Note that the asymptotic normality in (27) does not necessarily ensure (15). If, however, for some a > 0 (not necessarily ≥ 1), E||X_i||^a < ∞, then it follows from Theorem 2.1 of Sen (1980b) that there exists a positive integer n_a (depending on a), such that

    E || θ̂_n - θ ||² < ∞,   for all n ≥ n_a.
For our purpose, we assume that for every j (= 1, ..., p), for some δ (0 ≤ δ < ½) and K (0 < K < ∞),

    | (d^r/du^r) φ_j(u) | ≤ K (1 - u)^{-δ-r},   0 < u < 1;   r = 0, 1, 2,      (28)

and that F_[j] has an absolutely continuous density function f_[j], with a first derivative f'_[j] bounded almost everywhere,                                     (29)

for j = 1, ..., p. We write δ = (4 + 2τ)^{-1}, τ > 0. Then, by a direct multivariate extension of Theorem 2.2 of Sen (1980b), we may conclude that for every k < 2(1 + τ), lim_n E{ n^{k/2} || θ̂_n - θ ||^k } < ∞. Note that throughout the paper ||·|| stands for the max-norm. Then (15) holds under (28)-(29). Actually, the following representation [a coordinatewise extension of (2.49) of Sen (1980b)] is the key to the above results:
    n (θ̂_n - θ) = B^{-1} S_n(θ) + R_n,                                        (30)

where, under (28)-(29), as n → ∞,

    n^{-1/2} || R_n || → 0   a.s., as well as in the k-th mean,                (31)

for every k < 2(1 + τ). Further, {S_n(θ), n ≥ 1} is a martingale sequence [see Sen and Ghosh (1971)], so that {||S_m(θ) - S_n(θ)||: m ≥ n} is a sub-martingale. Hence, using the Kolmogorov maximal inequality along with the fact that for every m ≥ n, E(S_m(θ) - S_n(θ))(S_m(θ) - S_n(θ))' = (m - n) V_n, where V_n → V as n → ∞, we obtain by some standard steps that for every ε > 0 and η > 0, there exist an n* and an ε* (> 0), such that for every n ≥ n*,

    P{ max_{m: |m-n| ≤ ε* n} n^{-1/2} || S_m(θ) - S_n(θ) || > ε } < η.         (32)

Consequently, by using (30), (31) and (32), we obtain that for every n ≥ n**,

    P{ max_{m: |m-n| ≤ ε* n} n^{1/2} || θ̂_m - θ̂_n || > ε } < η.               (33)

Clearly, if {N_n} is any sequence of positive integer valued r.v.'s such that n^{-1} N_n → 1, in probability, as n → ∞, then by (27) and (33), as n → ∞,

    n^{1/2} (θ̂_{N_n} - θ) → N_p(0, Γ).                                        (34)
Defining B as in (24), we let

    W_n = sup{ n^{-1/2} || S_n(θ + b) - S_n(θ) + n B b || : || b || ≤ n^{-1/2} log n }.   (35)

Then, as a direct multivariate generalization of (2.37) of Sen (1980b), we obtain that for δ ≤ (4 + 2τ)^{-1}, τ > 0, there exist two positive constants c_1, c_2 and an integer n_0*, such that

    P{ W_n > n^{-c_1} log n } ≤ c_2 n^{-1-τ},   for all n ≥ n_0*.              (36)
This last result yields suitable estimates of B along with their rates of convergence. Note that under θ_j = 0, S_{nj}(0) has a completely specified distribution, symmetric about 0, and hence, for every α (0 < α < 1) and n, there exists a C_{n,α}^{(j)}, such that

    P{ |S_{nj}(0)| ≥ C_{n,α}^{(j)} | θ_j = 0 } ≥ α ≥ P{ |S_{nj}(0)| > C_{n,α}^{(j)} | θ_j = 0 },   j = 1, ..., p,

where n^{-1/2} C_{n,α}^{(j)} → τ_{α/2} ν_{jj}^{1/2}, 1 ≤ j ≤ p, τ_{α/2} being the upper 50α% point of the standard normal distribution. Let then

    θ̂_{L,n}^{(j)} = sup{ b : S_{nj}(b) > C_{n,α}^{(j)} },   θ̂_{U,n}^{(j)} = inf{ b : S_{nj}(b) < -C_{n,α}^{(j)} },

and let

    B_{nj} = 2 C_{n,α}^{(j)} / { n ( θ̂_{U,n}^{(j)} - θ̂_{L,n}^{(j)} ) },   j = 1, ..., p.   (37)

Then B̂_n = Diag(B_{n1}, ..., B_{np}) is a translation-invariant, robust and consistent estimator of B. By (36), (37) and some standard manipulations, we obtain that for every ε > 0 and δ ≤ (4 + 2τ)^{-1}, τ > 0, there exist a constant C (0 < C < ∞) and an integer n_0, such that for every n ≥ n_0,

    P{ || B̂_n - B || > ε } ≤ C n^{-1-τ}.                                        (38)

Some other properties of B̂_n are studied in the next section.
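A minimal sketch (ours) of the width-based estimator B_{nj} in (37), again for Wilcoxon scores; the exact critical value C_{n,α}^{(j)} is replaced here by its large-sample approximation n^{1/2} τ_{α/2} ν_{jj}^{1/2}, with ν_{jj} = 1/3 for this score choice — an approximation we introduce only for the sketch.

    import numpy as np
    from scipy.stats import norm

    def B_nj_estimate(x, alpha=0.05):
        """Width-based estimator B_nj of (37) for one coordinate, using
        Wilcoxon signed-rank scores; C_{n,alpha} is approximated by
        n^{1/2} * tau_{alpha/2} * nu_jj^{1/2} with nu_jj = 1/3."""
        n = len(x)
        scores = np.arange(1, n + 1) / (n + 1)
        C = np.sqrt(n / 3.0) * norm.ppf(1 - alpha / 2)

        def S(b):
            d = x - b
            ranks = np.argsort(np.argsort(np.abs(d))) + 1
            return float(np.sum(np.sign(d) * scores[ranks - 1]))

        # locate sup{b: S(b) > C} and inf{b: S(b) < -C} by bisection on the
        # non-increasing step function S(.)
        def crossing(level):
            lo, hi = np.min(x) - 1.0, np.max(x) + 1.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if S(mid) > level:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        theta_L = crossing(C)      # sup{b: S(b) >  C}
        theta_U = crossing(-C)     # inf{b: S(b) < -C}
        return 2 * C / (n * (theta_U - theta_L))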
RANK BASED ESTIMATORS OF Γ: ASYMPTOTIC PROPERTIES

In (20), we replace the φ_j by the φ_j* and denote the resulting scores by a*_{nj}(i), 1 ≤ i ≤ n, 1 ≤ j ≤ p. Let R_{nij} be the rank of X_{ij} among X_{1j}, ..., X_{nj}, for i = 1, ..., n and j = 1, ..., p. Define V̂_n = ((ν̂_{njj'})) by letting

    ν̂_{njj'} = n^{-1} Σ_{i=1}^n a*_{nj}(R_{nij}) a*_{nj'}(R_{nij'}),   j, j' = 1, ..., p.   (39)
Then V̂_n is a translation-invariant, robust and consistent estimator of V, defined by (25) [see Puri and Sen (1971, Ch. 5)]. Defining the B_{nj} by (37), we then let

    Γ̂_n = ((γ̂_{njj'})) = B̂_n^{-1} V̂_n B̂_n^{-1} = (( ν̂_{njj'} / (B_{nj} B_{nj'}) )).   (40)
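For the Wilcoxon choice φ_j*(u) = 2u − 1 in every coordinate, V̂_n of (39) and Γ̂_n of (40) can be sketched as follows (ours; B_nj_estimate refers to the helper sketched after (38)).

    import numpy as np

    def V_hat(X):
        """Rank-based estimator V_hat_n of (39), with phi_j*(u) = 2u - 1
        (Wilcoxon choice) for every coordinate."""
        n, p = X.shape
        # coordinatewise ranks R_{nij} of X_{ij} among X_{1j}, ..., X_{nj}
        ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
        A = 2.0 * ranks / (n + 1) - 1.0          # scores a*_{nj}(R_{nij})
        return A.T @ A / n                        # ((nu_hat_{njj'}))

    def Gamma_hat(X, alpha=0.05):
        """Gamma_hat_n of (40): B_hat_n^{-1} V_hat_n B_hat_n^{-1}, with the
        diagonal B_hat_n built from the coordinatewise estimates of (37)."""
        p = X.shape[1]
        B_inv = np.diag([1.0 / B_nj_estimate(X[:, j], alpha) for j in range(p)])
        return B_inv @ V_hat(X) @ B_inv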
To study the asymptotic properties of Γ̂_n, consider first an asymptotic representation for V̂_n. By (28)-(29) and Theorem 7.5.1 of Sen (1981), we obtain that for δ < (4 + 2τ)^{-1}, τ > 0,

    V̂_n = n^{-1} Σ_{i=1}^n W_i + R_n*,                                        (41)

where the W_i are i.i.d. random matrices with E W_i = V and E||W_i||^k < ∞ for every k ∈ [0, 2 + τ), and, further, ||R_n*|| = O(n^{-1/2-η}) a.s., as n → ∞, for some η > 0. Actually, by the same proof it follows that for every ε > 0, there exist a finite c_1 and an integer n_0, such that for every n ≥ n_0,

    P{ || R_n* || > ε } ≤ c_1 n^{-1-τ/2}.                                      (42)

Also, for the i.i.d. random matrices {W_i}, by the Markov inequality,

    P{ || n^{-1} Σ_{i=1}^n W_i - V || > ε } ≤ ε^{-k} E || n^{-1} Σ_{i=1}^n W_i - V ||^k ≤ c_2(ε) n^{-1-τ/2},   c_2(ε) < ∞.   (43)

As such, from (38), (40), (41), (42) and (43), we obtain that for n adequately large, for every ε > 0,

    P{ || Γ̂_n - Γ || > ε } ≤ c_3(ε) n^{-1-τ/2}.                                (44)

Let us now write U_n = n^{1/2}(V̂_n - V) and Y_n = n^{1/2}(B̂_n^{-1} - B^{-1}). Then, by (41) and the classical central limit theorem (on the W_i),

    U_n → N(0, Ξ_1),   as n → ∞,                                               (45)

where Ξ_1 is the dispersion matrix of the W_i. The asymptotic normality results on Y_n demand some extra regularity conditions. We assume that in (28), |(d²/du²) φ_j(u)| ≤ K(1 - u)^{-2}, 0 < u < 1 (where K < ∞), and, in addition to (29), we assume that for every η > 0, there exists an ε > 0, such that

    sup_{|d| ≤ ε} ∫_{-∞}^{∞} [ F_[j](x) {1 - F_[j](x)} ]^{-1+η} dF_[j](x - d) < ∞,   (46)

for every j = 1, ..., p.
Then, we may use the second order linearity results of Hušková (1982), as adapted to the one-sample case, and conclude that for B̂_n^{-1} a representation similar to that in (41) holds, where the remainder term is o(n^{-1/2}) a.s., as n → ∞. This representation ensures both the asymptotic multinormality of n^{1/2}(B̂_n^{-1} - B^{-1}) and the "uniform continuity in probability" of the n^{1/2}(B̂_n^{-1} - B^{-1}). Hence, under these additional regularity conditions,

    n^{1/2} (B̂_n^{-1} - B^{-1}) → N(0, Ξ_2),                                   (47)

for some p.s.d. Ξ_2. Writing B̂_n^{-1} = B^{-1} + n^{-1/2} Y_n and V̂_n = V + n^{-1/2} U_n, we obtain, on using (40), (45) and (47), that under the regularity conditions mentioned above,

    n^{1/2} (Γ̂_n - Γ) = 2 Y_n V B^{-1} + B^{-1} U_n B^{-1} + O_p(n^{-1/2}) → N(0, Ξ*),   Ξ* p.s.d.   (48)

Also, the asymptotic "uniform continuity in probability" result

    P{ max_{m: |m-n| ≤ ε* n} n^{1/2} || Γ̂_m - Γ̂_n || > ε } < η                 (49)

follows from the above results, so that (48) extends as well to a sequence {N_n} of integer valued random variables with n^{-1} N_n → 1, in probability, as n → ∞.
ASYMPTOTIC PROPERTIES OF N_c AND θ̂_{N_c}

The asymptotic normality of the R-estimator has already been considered in (27). Also, following (29), the moment-convergence result in (15) has been studied. In the sequential case, the study of (11), (12), (13), (14) and (16) depends on the form of the metric ρ(·) in (2); we assume that the following hold:

(a) For ν_n = E ρ(θ̂_n, θ), (6) holds with ν(F) depending on F only through Γ (which is a functional of F), i.e., ν(F) = ν(Γ), and, further, ν(Γ) satisfies the (Lipschitz-type) condition that

    | ν(Γ_1) - ν(Γ_2) | ≤ K_0 || Γ_1 - Γ_2 ||,   for all Γ_1, Γ_2,             (50)

where K_0 is independent of the Γ_j and ||·|| is the max-norm.

(b) With q defined as in (6), there exists a finite, positive K_1, such that, whenever the expectations exist,

    E{ ρ(θ̂_n, θ) }^r ≤ K_1 E || θ̂_n - θ ||^{2rq},   (r ≥ 1).                  (51)

Note that (50)-(51) hold for each of the three metrics in (3). Also, note that in the stopping rule (10) we take d_n = ν(Γ̂_n). The Lipschitz-type condition (50), along with (44), ensures that for n adequately large, for every ε > 0,

    P{ | d_n - ν(Γ) | > ε } ≤ c_4(ε) n^{-1-τ/2}.                                (52)
Now, by virtue of (44) and (52), we may virtually repeat the proof of the first part of Theorem 3.1 of Sen (1980b) and conclude that (13) holds. Further, (13) and (34) ensure (14). To establish the A.R.E. property in (12), we need to establish some uniform integrability properties, and, for these, we need to put some restraint on h in (10) and δ in (28). We let

    h > 0,   τ > 1 + 2h,   δ < (4 + 2τ)^{-1},   so that δ < 1/6.                (53)

Note that by (13), lim_{c→0} E N_c / n_c^0 = 1. Hence, by virtue of (4) and (11), it suffices to show that

    lim_{c→0} E ρ(θ̂_{N_c}, θ) / E ρ(θ̂_{n_c^0}, θ) = 1,                         (54)

or equivalently,

    lim_{c→0} (n_c^0)^q E ρ(θ̂_{N_c}, θ) = ν(F).                                (55)

By virtue of Theorem 2.2 of Sen (1980b), whenever, for some a > 0, E||X_1||^a < ∞, under (28), (29) and (53),

    lim_n E{ ( n^{1/2} || θ̂_n - θ || )^k } < ∞,   for every k < 2(1 + τ).       (56)

Therefore, by (51) and (56),

    lim_n { n^{kq} E( ρ(θ̂_n, θ) )^k } < ∞,   for every k < 1 + τ.               (57)

With this, we may again follow the line of attack of the proof of Theorem 3.2 of Sen (1980b), employing the Hölder inequality for the two tails and the uniform integrability of ||S_n(θ)||^k (k < 2(1 + τ)) for the central part. For intended brevity, the details are therefore omitted.
Finally, to prove (16), we note that by (10), N_c ≥ n_c* with probability 1, where

    n_c* = [ (c^{-1} q)^{1/(1+q+h)} ] → ∞   as c → 0.                            (58)

Also, by (10), whenever N_c > n_c*,

    c^{-1} q ν(Γ̂_{N_c}) ≤ N_c^{1+q},   (N_c - 1)^{1+q} < c^{-1} q { ν(Γ̂_{N_c - 1}) + (N_c - 1)^{-h} },   (59)

whereby, using (8) and (59) and taking (n_c^0)^{q+1} = [ c^{-1} q ν(F) ] + 1,

    (N_c / n_c^0)^{q+1} - 1 ≥ { ν(Γ̂_{N_c}) - ν(Γ) } / ν(Γ),                      (60)

    ( (N_c - 1) / n_c^0 )^{q+1} - 1 < { ν(Γ̂_{N_c - 1}) - ν(Γ) } / ν(Γ) + c^{-1} q (N_c - 1)^{-h} (n_c^0)^{-q-1}.   (61)

Note that for h > ½, by (8) and (13), as c → 0,

    c^{-1} q (N_c - 1)^{-h} (n_c^0)^{-q-1} = o_p( (n_c^0)^{-1/2} ),               (62)

so that from (8), (60), (61) and (62), as c → 0,

    (n_c^0)^{1/2} { (N_c / n_c^0)^{q+1} - 1 } = (n_c^0)^{1/2} { ν(Γ̂_{N_c}) - ν(Γ) } / ν(Γ) + o_p(1).   (63)

Further, by virtue of (49) and (50), under (13), as c → 0,

    (n_c^0)^{1/2} { ν(Γ̂_{N_c}) - ν(Γ) } - (n_c^0)^{1/2} { ν(Γ̂_{n_c^0}) - ν(Γ) } → 0,   in probability.   (64)

Hence, whenever h > ½ [in (10)] and n^{1/2}( ν(Γ̂_n) - ν(Γ) ) is asymptotically normal, by (63) and (64), (n_c^0)^{1/2}{ (N_c / n_c^0)^{q+1} - 1 } is also asymptotically normal. Finally, by the Slutsky theorem, (n_c^0)^{1/2}{ (N_c / n_c^0)^{q+1} - 1 } being asymptotically normal implies that (n_c^0)^{1/2}{ N_c / n_c^0 - 1 } is also asymptotically normal (with a different asymptotic variance), and this proves (16). Note that for (64) one needs the extra regularity conditions due to Hušková (1982); for the other results these are not needed. In any case, her regularity conditions are not very restrictive and hold for the common types of score functions; we shall comment more on these in the last section.
SEQUENTIAL CONFIDENCE SETS

Note that, defining V as in (25) and S_n(θ) as in (21), under θ,

    n^{-1} (S_n(θ))' V^{-1} (S_n(θ)) → χ_p²,   in distribution,                  (65)

while

    n^{-1} (S_n(θ))' V̂_n^{-1} (S_n(θ)) - n^{-1} (S_n(θ))' V^{-1} (S_n(θ)) → 0,   in probability.   (66)

Hence, if χ²_{p,α} stands for the upper 100α% point of the chi-square distribution with p degrees of freedom, we have from (65) and (66),

    P{ n^{-1} (S_n(θ))' V̂_n^{-1} (S_n(θ)) ≤ χ²_{p,α} | θ } ≈ 1 - α,              (67)

when n is large. Following the steps after (36), we then define

    θ*_{L,n}^{(j)} = sup{ b : S_{nj}(b) > n^{1/2} ν̂_{njj}^{1/2} χ_{p,α} },        (68)

    θ*_{U,n}^{(j)} = inf{ b : S_{nj}(b) < -n^{1/2} ν̂_{njj}^{1/2} χ_{p,α} },       (69)

for j = 1, ..., p, where χ_{p,α} = (χ²_{p,α})^{1/2}. For every d (> 0), consider then the stopping time

    N_d* = inf{ n ≥ n_0 : max_{1≤j≤p} ( θ*_{U,n}^{(j)} - θ*_{L,n}^{(j)} ) ≤ 2d },   (70)

and the proposed (sequential) confidence set is

    I_{N_d*} = { b ∈ E^p : θ*_{L,N_d*}^{(j)} ≤ b_j ≤ θ*_{U,N_d*}^{(j)},  j = 1, ..., p }.   (71)
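A computational sketch (ours) of the bounded-width rule (68)-(70); coordinate_crossing(x, level) denotes a hypothetical helper that locates the point where S_{nj}(·) crosses the given level (e.g., by the bisection used in the earlier sketches), and V_hat is the estimator sketched after (40).

    import numpy as np
    from scipy.stats import chi2

    def bounded_width_stopping(sample_stream, d, alpha=0.05, n0=10):
        """Sequential rule (68)-(70): keep sampling until every coordinatewise
        interval [theta*_{L,n}^{(j)}, theta*_{U,n}^{(j)}] has width <= 2d.
        Reuses the Wilcoxon-score S_nj and V_hat sketched earlier;
        coordinate_crossing is a hypothetical level-crossing helper."""
        data = []
        for x in sample_stream:
            data.append(np.asarray(x))
            n = len(data)
            if n < n0:
                continue
            X = np.vstack(data)
            p = X.shape[1]
            chi_pa = np.sqrt(chi2.ppf(1 - alpha, df=p))
            nu = np.diag(V_hat(X))                    # nu_hat_{njj}, j = 1, ..., p
            intervals, widths = [], []
            for j in range(p):
                level = np.sqrt(n * nu[j]) * chi_pa   # n^{1/2} nu_hat^{1/2} chi_{p,alpha}
                lo = coordinate_crossing(X[:, j], level)    # sup{b: S_nj(b) >  level}
                hi = coordinate_crossing(X[:, j], -level)   # inf{b: S_nj(b) < -level}
                intervals.append((lo, hi))
                widths.append(hi - lo)
            if max(widths) <= 2 * d:
                return n, intervals                   # N_d* and the rectangle (71)
        raise RuntimeError("stream exhausted before the width criterion was met")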
If we define, for every d > 0,

    n_d* = min{ n ≥ n_0 : n ≥ d^{-2} χ²_{p,α} max_{1≤j≤p} γ_{jj} },               (72)

then our basic concern is to verify (17) for I_{N_d*} in (71) (when d → 0), and to show that N_d*/n_d* → 1, in probability, as d → 0. These results follow directly by using (35)-(36) [where we let b = n^{1/2}(θ*_{L,n} - θ) and n^{1/2}(θ*_{U,n} - θ)], along the same lines as in the univariate case [see Sen and Ghosh (1971)], and hence the details are omitted.
We may also note that

    sup{ n^{1/2} | l'(θ̂_n - θ) | (l'l)^{-1/2} : l ≠ 0 }
        ≤ ch_1^{1/2}(Γ̂_n) sup{ n^{1/2} | l' Γ̂_n^{-1/2} (θ̂_n - θ) | (l'l)^{-1/2} : l ≠ 0 }
        = ch_1^{1/2}(Γ̂_n) { n (θ̂_n - θ)' Γ̂_n^{-1} (θ̂_n - θ) }^{1/2},

where ch_1(·) stands for the largest characteristic root. Now, by (30)-(31) and (44),

    n (θ̂_n - θ)' Γ̂_n^{-1} (θ̂_n - θ) = n^{-1} (S_n(θ))' V̂_n^{-1} (S_n(θ)) + o(1)   a.s., as n → ∞.   (73)

Hence, by (65)-(66) and (73), for large n,

    P{ sup_{l ≠ 0} (l'l)^{-1/2} | l'(θ̂_n - θ) | ≤ n^{-1/2} ch_1^{1/2}(Γ̂_n) χ_{p,α} | θ } ≈ 1 - α.   (74)

Motivated by (74) and (44), we may define a stopping time N_d^0 (for every d > 0) by letting

    N_d^0 = inf{ n ≥ n_0 : n ≥ ch_1(Γ̂_n) χ²_{p,α} d^{-2} }.                       (75)

In this case, if we define, for every d > 0,

    n_d^0 = min{ n ≥ n_0 : n ≥ ch_1(Γ) χ²_{p,α} d^{-2} },                          (76)

then by (44),

    N_d^0 / n_d^0 → 1,   a.s., as d → 0,                                           (77)

while (17) follows from (74), (76) and (33). The same method of proof of the asymptotic normality of the stopping time as in (58)-(64) works out here; we need (48)-(49) in this context, and, in turn, these require the more stringent regularity conditions due to Hušková (1982).
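A parallel sketch (ours) of the spectral rule (75); Gamma_hat and multivariate_r_estimate refer to the estimators sketched earlier in this note, and the returned estimate is simply the centre of the resulting ball of radius d.

    import numpy as np
    from scipy.stats import chi2

    def spectral_stopping(sample_stream, d, alpha=0.05, n0=10):
        """Stopping time N_d^0 of (75): stop at the first n >= n0 with
        n >= ch_1(Gamma_hat_n) * chi^2_{p,alpha} / d^2, where ch_1 is the
        largest characteristic root and Gamma_hat is the estimator sketched
        after (40)."""
        data = []
        for x in sample_stream:
            data.append(np.asarray(x))
            n = len(data)
            if n < n0:
                continue
            X = np.vstack(data)
            p = X.shape[1]
            ch1 = float(np.max(np.linalg.eigvalsh(Gamma_hat(X))))
            if n >= ch1 * chi2.ppf(1 - alpha, df=p) / d ** 2:
                return n, multivariate_r_estimate(X)   # N_d^0 and the ball centre
        raise RuntimeError("stream exhausted before the stopping rule was met")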
SOME GENERAL REMARKS

In this paper, only the case of sequential R-estimators has been treated in detail. In the univariate case, Jurečková and Sen (1982) have considered sequential M- and L-estimators. Multivariate generalizations of these estimators follow on parallel lines. The coordinatewise uniform integrability of these estimators, established in Jurečková and Sen (1982), carries over to the vector case as well. For the M-estimators, based on the vector ψ of score functions, the sample (aligned) mean (score-)product matrix can be used to estimate the true mean product matrix, and, under the assumption that E||ψ||^r < ∞ for some r > 4, results parallel to (41)-(42) hold for this covariance matrix too. The estimator of the diagonal matrix with elements ∫ ψ'_j(x) dF_[j](x), 1 ≤ j ≤ p, is the (diagonal) matrix of the marginal estimators considered in Jurečková and Sen (1982), and hence the convergence properties [parallel to (38) and (47)] hold under the same regularity conditions. In view of the fact that the score functions ψ_j are taken to be essentially bounded, the extra regularity conditions of Hušková (1982) are not needed here. Similarly, for the L-estimators, the (univariate) decomposition of Gardiner and Sen (1979) goes through in the multivariate case, and this yields the asymptotic normality as well as the rates of convergence of the estimated covariance matrices of such L-estimators. In this context, the jackknife procedure for obtaining this estimated covariance matrix also works out well; we may refer to Sen (1982), where the close connections among these estimators are discussed in detail.

We have considered only the first order A.R.E. property. A much more elaborate analysis will be needed to pursue higher order A.R.E. results. These are posed as open problems.
We conclude this section with a closer examination of the Hušková condition (46). It suffices to consider the right hand tail

    G(a, d) = ∫_a^∞ {1 - F(x)}^{-1+η} dF(x - d),   d > 0, η > 0, a > 0;            (78)

the lower tail will follow by symmetry. Note that

    G(a, -d) ≤ G(a, 0) = η^{-1} {1 - F(a)}^η < ∞,   for all η > 0, d > 0.           (79)

We write 1 - F(x) = exp{-H(x)} and h(x) = (d/dx) H(x), so that h(x) is the failure rate (FR) and H(x) is increasing in x with H(∞) = ∞. Then, we have

    G(a, d) = ∫_{a-d}^∞ exp{ (1 - η) H(x + d) } d( 1 - exp{-H(x)} ).                (80)

Hence, to verify (46), it suffices to show that for every η > 0, there exists a d_η (> 0), such that

    lim sup_{x→∞} { H(x + d) / H(x) } < (1 - η)^{-1},   for all 0 < d ≤ d_η.         (81)

Towards this, we note that if

    lim sup_{x→∞} h(x) / H(x) ≤ c < ∞,                                               (82)

then, for all x sufficiently large, H(x + d)/H(x) ≤ exp(cd), for every d > 0. Consequently, choosing d_η (> 0) such that exp(-c d_η) > 1 - η, (81) follows from (82). Hence, we may verify (82) instead. Note that H(x) → ∞ as x → ∞, so that for the entire class of d.f.'s with bounded or non-increasing FR, (82) holds trivially (with c = 0). For distributions with increasing failure rates (IFR), note that to verify (82), it suffices to show (by integration) that

    lim sup_{x→∞} { x^{-1} log( -log{1 - F(x)} ) } ≤ c < ∞,                           (83)

and this general condition holds for almost all IFR d.f.'s (including some of the extreme value d.f.'s). Thus, (46) [via (82)-(83)] may not be regarded as very restrictive at all. Finally, note that the condition that |(d²/du²) φ_j(u)| ≤ K(1 - u)^{-2}, for all 0 < u < 1, holds for all the commonly adopted score functions (including, in particular, the normal scores and the Wilcoxon scores).
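As a small numerical illustration (ours) of the sufficiency condition (83), the quantity x^{-1} log(−log{1 − F(x)}) can be tabulated for a normal and an exponential d.f.; in both cases it tends to 0, so (82)-(83) hold with c = 0.

    import numpy as np
    from scipy.stats import norm, expon

    def ifr_ratio(sf, x):
        # the quantity x^{-1} log(-log(1 - F(x))) = x^{-1} log(H(x)) appearing in (83)
        return np.log(-np.log(sf(x))) / x

    xs = np.array([2.0, 5.0, 10.0, 20.0])
    print("normal     :", np.round(ifr_ratio(norm.sf, xs), 4))
    print("exponential:", np.round(ifr_ratio(expon.sf, xs), 4))
    # For the exponential d.f., H(x) = x, so the ratio is log(x)/x -> 0; for the
    # normal d.f., H(x) ~ x^2/2, so the ratio behaves like 2*log(x)/x -> 0, and
    # (83) holds with c = 0 in both cases.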
REFERENCES

[1] Gardiner, J.C. and Sen, P.K., Asymptotic normality of a variance estimator of a linear combination of a function of order statistics, Zeit. Wahrsch. Verw. Geb. 50 (1979), 205-221.

[2] Hušková, M., On bounded length sequential confidence interval for parameter in regression model based on ranks, Coll. Math. Soc. Janos Bolyai, 32: Nonparamet. Statist. Inf. (1982), 435-463.

[3] Hušková, M. and Jurečková, J., Second order asymptotic relations of M-estimators and R-estimators in two-sample location model, Jour. Statist. Plan. Infer. 5 (1981), 309-328.

[4] Jurečková, J. and Sen, P.K., M-estimators and L-estimators of location: Uniform integrability and asymptotically risk-efficient sequential versions, Sequen. Anal. 1 (1982), 27-56.

[5] Puri, M.L. and Sen, P.K., Nonparametric Methods in Multivariate Analysis (John Wiley, New York, 1971).

[6] Sen, P.K., An invariance principle for linear combinations of order statistics, Zeit. Wahrsch. Verw. Geb. 42 (1978), 327-340.

[7] Sen, P.K., On almost sure linearity theorems for signed rank order statistics, Ann. Statist. 8 (1980), 313-321.

[8] Sen, P.K., On nonparametric sequential point estimation of location based on general rank order statistics, Sankhya, Ser. A 42 (1980), 201-219.

[9] Sen, P.K., Sequential Nonparametrics: Invariance Principles and Statistical Inference (John Wiley, New York, 1981).

[10] Sen, P.K., Jackknife L-estimators: Affine structure and asymptotics, Institute of Statistics, University of North Carolina, Mimeo Report No. 1415.

[11] Sen, P.K. and Ghosh, M., On bounded length sequential confidence intervals based on one-sample rank order statistics, Ann. Math. Statist. 42 (1971), 189-203.