Sen, P.K. (1980). "Functional Jackknifing: Rationality and General Asymptotics."

Functional Jackknifing: Rationality and General Asymptotics

by

P.K. Sen
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1800
March 1980

FUNCTIONAL JACKKNIFING: RATIONALITY AND GENERAL ASYMPTOTICS

BY

PRANAB KUMAR SEN*

University of North Carolina, Chapel Hill
Though jackknifing serves the dual purpose of bias reduction and variance estimation, the pseudo variables it generates may not generally preserve robustness for general statistical functionals. These pseudo variables are incorporated in a differentiable functional detour of jackknifing, and, along with its rationality, the related asymptotic theory is studied systematically. Second order asymptotic distributional representations for the classical jackknifed estimators are also considered.
1. Introduction.
The jackknife technique, originally conceived for possible reduction of (smaller order) bias of an estimator, generates pseudo variables which also provide a (strongly) consistent estimator of the asymptotic variance of the jackknifed estimator; for some detailed studies of these properties of jackknifing, we may refer to Miller (1974) and Sen (1977), among others. To preserve robustness under jackknifing, one should choose a robust initial estimator; otherwise, the pseudo variables generated by jackknifing may lead to a less robust jackknifed version. For example, let $X_1, \ldots, X_n$ be $n$ independent and identically distributed random variables (i.i.d.r.v.) with a distribution function (d.f.) $F$ having finite (but unknown) mean $\mu$ and variance $\sigma^2$. For normal $F$, the classical estimator of $\mu$ is the sample mean $\bar{X}_n = n^{-1}\sum_{i=1}^n X_i$; the corresponding jackknifed version is also $\bar{X}_n$, and the jackknifed variance estimator agrees with the sample variance $s_n^2 = (n-1)^{-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2$. However, neither $\bar{X}_n$ nor $s_n^2$ is robust against outliers or gross errors.
AMS Subject Classifications: 60F05, 62E20, 62F12

Key words and phrases: Asymptotic normality; asymptotic variance; bias reduction; Hadamard derivative; jackknifed variance; perturbations; pseudo variables; robustness; second order representations; statistical functional; two-step jackknifing.

* Work supported by the Office of Naval Research, Contract N00014-83-K-0387.

Running head: Asymptotics of Functional Jackknifing.
In such a case (as well as for a general estimable parameter $\theta(F)$), one may choose a more robust initial estimator $T_n = T(X_1, \ldots, X_n)$ of $\theta$ [viz., the trimmed mean, median, R- and M-estimators, etc.], and construct the corresponding jackknifed version $T_n^*$ (as well as the jackknifed variance estimator $V_n^*$) from the pseudo variables $T_{n,i}$ given by

(1.1)  $T_{n,i} = nT_n - (n-1)T(X_1, \ldots, X_{i-1}, X_{i+1}, \ldots, X_n)$, $i = 1, \ldots, n$;

(1.2)  $T_n^* = n^{-1}\sum_{i=1}^n T_{n,i}$, and $V_n^* = (n-1)^{-1}\sum_{i=1}^n (T_{n,i} - T_n^*)^2$.

In this setup, we may take $T_n = T(F_n)$, a functional of the empirical d.f. $F_n$. Note that if $G_n^*$ stands for the empirical d.f. of the $T_{n,i}$ in (1.1), then

(1.3)  $T_n^* = \int x\, dG_n^*(x)$ and $V_n^* = n(n-1)^{-1}\{\int x^2\, dG_n^*(x) - (\int x\, dG_n^*(x))^2\}$.

For the particular case of $\bar{X}_n$, $G_n^* = F_n$, so that $T_n^* = \bar{X}_n$ and $V_n^* = s_n^2$. However, for a general statistical functional $T(F)$ $(= \theta)$, $F_n$ and $G_n^*$ are generally not equivalent, and, moreover, the $T_{n,i}$ in (1.1) are not independent, so that $G_n^*$ may not possess all the properties of $F_n$. In such a case, the appropriateness of the linear functional in (1.3) may be justified on the basis of the inherent reverse martingale structure of the resampling scheme in jackknifing [viz., Sen (1977)], although, on the ground of robustness, other functionals may appear to be more appropriate. Indeed, Hinkley and Wang (1980) advocated the use of the trimmed jackknifing by prescribing the usual trimmed mean on the pseudo variables in (1.1); a small computational sketch of (1.1)-(1.2) and of such a trimmed version is given below.
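As an illustration of (1.1)-(1.2) (and of the trimmed variant just mentioned), the following sketch computes the pseudo variables, the classical jackknifed estimator and variance estimator, and a trimmed mean of those pseudo variables. It is only a numerical illustration of the definitions; the initial functional (here the sample median) and the trimming proportion are arbitrary choices made for the example.

import numpy as np

def jackknife_pseudo_values(x, T):
    """Pseudo variables T_{n,i} = n*T_n - (n-1)*T(sample with X_i deleted), as in (1.1)."""
    x = np.asarray(x)
    n = len(x)
    Tn = T(x)
    return np.array([n * Tn - (n - 1) * T(np.delete(x, i)) for i in range(n)])

def classical_jackknife(x, T):
    """Jackknifed estimator T_n^* and variance estimator V_n^* of (1.2)."""
    pv = jackknife_pseudo_values(x, T)
    return pv.mean(), pv.var(ddof=1)

def trimmed_mean(v, alpha=0.1):
    """Usual alpha-trimmed mean, used here as a location functional on the pseudo values."""
    v = np.sort(np.asarray(v))
    k = int(np.floor(alpha * len(v)))
    return v[k:len(v) - k].mean()

rng = np.random.default_rng(0)
x = rng.standard_normal(40)
T = np.median                                   # a robust initial estimator T_n = T(F_n)
T_star, V_star = classical_jackknife(x, T)      # classical jackknife, (1.2)
pv = jackknife_pseudo_values(x, T)
T_trimmed = trimmed_mean(pv, alpha=0.1)         # trimmed jackknife in the spirit of Hinkley and Wang (1980)
print(T_star, V_star, T_trimmed)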
Thus, one may raise the issue in favor of a general functional

(1.4)  $T_n^{0*} = T^0(G_n^*)$, for suitable $T^0(\cdot)$ on $D[0,1]$;

we term $T_n^{0*}$ a functional jackknifed estimator (FJE).

It can be shown that under fairly general regularity conditions [on $T(\cdot)$ and $F$], there exists a d.f. $G^*$ (which may depend on $F$), such that as $n \to \infty$,

(1.5)  $\|G_n^* - G^*\| = \sup_x |G_n^*(x) - G^*(x)| \to 0$, almost surely (a.s.).

Thus, a minimal requirement for the rationality of the FJE $T_n^{0*}$ in (1.4) is that

(1.6)  $T^0(G^*) = T(F)$, for all $F$ belonging to a class $\mathcal{F}$;

the class of all $T^0(\cdot)$ for which (1.6) holds (for the given $T(\cdot)$) we denote by $\mathcal{T}$. Then, the consistency of $T_n^{0*}$ (as an estimator of $\theta = T(F)$), in a weaker or stronger sense, can be established under (1.5) and (1.6), whenever $T^0(\cdot)$ is continuous in the metric in (1.5) (i.e., the uniform topology).
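Continuing the sketch above (and reusing its helpers jackknife_pseudo_values and trimmed_mean, which are not part of the original development), an FJE in the sense of (1.4) simply applies a chosen location functional $T^0$ to the pseudo variables; the particular choices of $T^0$ shown are arbitrary illustrations.

def functional_jackknife(x, T, T0):
    """FJE T_n^{0*} = T^0(G_n^*) of (1.4): apply the functional T^0 to the
    empirical d.f. of the pseudo variables (here, directly to the pseudo values)."""
    return T0(jackknife_pseudo_values(x, T))

# Illustrative choices of T^0: the mean would recover the classical T_n^*,
# while a trimmed mean or the median gives a more robust FJE.
fje_trimmed = functional_jackknife(x, np.median, lambda v: trimmed_mean(v, 0.1))
fje_median = functional_jackknife(x, np.median, np.median)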
For the asymptotic normality (or weak invariance principles) for the FJE, it is clear that (1.5)-(1.6) may not suffice; we may find it convenient to incorporate the weak convergence of

(1.7)  $n^{1/2}(G_n^* - G^*)$, to an appropriate Gaussian function $W^*$,

along with the usual Hadamard differentiability of $T^0(\cdot)$, in a conventional proof. However, the pseudo variables in (1.1) are generally not independent, and, moreover, we have a triangular scheme, for which (1.7) may hold (under additional regularity conditions) but $W^*$ may not have a covariance function reducible to that of a Brownian bridge. Thus, even if (1.7) holds, the asymptotic distribution (or the variance) of the FJE may not agree with that of $T_n$ or the classical jackknifed estimator $T_n^*$. Generally, under adequate regularity conditions on $T(\cdot)$, for each $i$ $(= 1, \ldots, n)$, $T_{n,i}$ can be expressed as a smooth function of $X_i$, perturbed by another stochastic element which is generally $O_p(n^{-1/2})$. Thus, the usual weak convergence results on the empirical d.f. processes, subject to perturbations [viz., Rao and Sethuraman (1975)], may not be adaptable here. For the trimmed jackknifing, Hinkley and Wang (1980) considered an alternative approach, which may not work out in full generality for a general $T^0(\cdot)$; indeed, (1.7) plays an important role in this context, and it needs to be exploited more systematically.
The main purpose of this study is to focus on the basic regularity conditions on $T(\cdot)$ and $T^0(\cdot)$ and on their fruitful incorporation in the study of the asymptotic theory of the FJE. This enables us to study the general robustness of $T_n^{0*}$ and also the second order asymptotic distributional representations (SOADR) for the classical jackknifed estimators. For (functional) jackknifing, one usually needs the second order Hadamard differentiability of $T(\cdot)$ [and $T^0(\cdot)$], and in Section 2, along with the preliminary notions, SOADR results for $T_n^*$ are considered. The principal asymptotic results on the FJE are presented in Section 3. In the light of these, convergence properties of $V_n^*$ and some allied variance estimators are studied in Section 4. The concluding section is devoted to some general discussions and statistical implications of the results obtained in earlier sections.
2. Preliminary notions. Let $L(A,B)$ be the set of continuous linear transformations from a topological vector space $A$ to another $B$, and let $C$ be a class of compact subsets of $A$, such that every subset consisting of a single point belongs to $C$. Also, let $A_0$ be an open subset of $A$. A function $T: A_0 \to B$ is said to be Hadamard (or compact) differentiable at $F \in A_0$ if there exists a $T_F' \in L(A,B)$, such that for any $K \in C$,

(2.1)  $\lim_{t \to 0}\{\, t^{-1}[\, T(F + tH) - T(F) - T_F'(tH)\,]\,\} = 0$, uniformly for $H \in K$;

$T_F'$ is called the compact derivative of $T$ at $F$. We may refer to Fernholz (1983) for a detailed account of such differentiability conditions. In the context of jackknifing, usually, we need the second order compact-differentiability of $T$ (at $F$); that is, we assume that for any $K \in C$,

(2.2)  $T(F + (G - F)) = T(F) + \int T_1(F;x)\, d[G(x) - F(x)] + \tfrac{1}{2}\iint T_2(F;x,y)\, d[G(x) - F(x)]\, d[G(y) - F(y)] + R(F;\, G - F)$,

where

(2.3)  $|R(F;\, G - F)| = o(\|G - F\|^2)$, uniformly in $G \in K$.

The functions $T_1(\cdot)$ and $T_2(\cdot)$ are called the first- and second-order compact derivatives of $T(\cdot)$, and we can always normalize them in such a way that

(2.4)  $\int T_1(F;x)\, dF(x) = 0$,

(2.5)  $\int T_2(F;x,y)\, dF(y) = 0 = \int T_2(F;x,y)\, dF(x)$, $\quad T_2(F;x,y) = T_2(F;y,x)$.
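For orientation, it may help to record these derivatives for one familiar functional; the following is an illustrative example added here, not part of the original development. For the variance functional

$T(F) = \int (x - \mu(F))^2\, dF(x)$, with $\mu(F) = \int x\, dF(x)$,

one finds

$T_1(F;x) = (x - \mu(F))^2 - T(F)$, $\qquad T_2(F;x,y) = -2\,(x - \mu(F))(y - \mu(F))$,

so that $\int T_1(F;x)\, dF(x) = 0$ and $\int T_2(F;x,y)\, dF(y) = 0$, in agreement with the normalizations (2.4)-(2.5).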
We assume that the d.f. $F$ is defined on $R = (-\infty, \infty)$, and denote by

(2.6)  $F_n(x) = n^{-1}\sum_{i=1}^n I(X_i \le x)$, $x \in R$;

(2.7)  $F_{n-1}^{(i)}(x) = (n-1)^{-1}\sum_{j(\ne i)=1}^n I(X_j \le x)$, $x \in R$, $i = 1, \ldots, n$.

Then, from (1.1) and (2.6)-(2.7), we obtain that

(2.8)  $T_{n,i} = T(F_n) + (n-1)\{\, T(F_n) - T(F_{n-1}^{(i)})\,\}$, for $i = 1, \ldots, n$.

Note that $F_n(x) = n^{-1}\sum_{i=1}^n F_{n-1}^{(i)}(x)$, $x \in R$, and further,

(2.9)  $\max_{1 \le i \le n}\{\sup_x |F_{n-1}^{(i)}(x) - F_n(x)|\} = n^{-1}$, for every $n > 1$.
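The identity (2.9) is easy to check numerically; the short sketch below is an illustration only (the sample size and the random draws are arbitrary). It evaluates both empirical d.f.'s on the pooled sample points, where the supremum of the step-function difference is attained, and confirms that the maximal deviation equals $n^{-1}$.

import numpy as np

def edf(sample, grid):
    """Empirical d.f. of `sample` evaluated at the points in `grid`, as in (2.6)-(2.7)."""
    sample = np.asarray(sample)
    return np.array([(sample <= t).mean() for t in grid])

rng = np.random.default_rng(1)
x = rng.standard_normal(25)
n = len(x)
grid = np.sort(x)                      # the e.d.f.'s can only jump at sample points
Fn = edf(x, grid)
dev = max(np.abs(edf(np.delete(x, i), grid) - Fn).max() for i in range(n))
print(dev, 1.0 / n)                    # the two numbers agree, confirming (2.9)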
Therefore, by (2.2), (2.3), (2.4), (2.5), (2.8) and (2.9), we have

(2.10)  $T_{n,i} = T_n + (n-1)\{\, \int T_1(F_n;x)\, d[F_n(x) - F_{n-1}^{(i)}(x)] + \tfrac{1}{2}\iint T_2(F_n;x,y)\, d[F_n(x) - F_{n-1}^{(i)}(x)]\, d[F_n(y) - F_{n-1}^{(i)}(y)]\,\} + o(n^{-1})$
       $= T_n + \int T_1(F_n;x)\, d[I(X_i \le x) - F_n(x)] - (2(n-1))^{-1}\iint T_2(F_n;x,y)\, d[I(X_i \le x) - F_n(x)]\, d[I(X_i \le y) - F_n(y)] + o(n^{-1})$
       $= T_n + T_1(F_n;X_i) - (2(n-1))^{-1} T_2(F_n;X_i,X_i) + o(n^{-1})$, $i = 1, \ldots, n$.

As a result, by (1.2), (2.4) and (2.10), we obtain that

(2.11)  $T_n^* = T_n - (2(n-1))^{-1}\int T_2(F_n;x,x)\, dF_n(x) + o(n^{-1})$.

Thus, we have the second order representation of the classical jackknifed estimator:

(2.12)  $R_n^* = n(T_n^* - T_n) = -n(2(n-1))^{-1}\int T_2(F_n;x,x)\, dF_n(x) + o(1)$
       $= -(2(n-1))^{-1}\sum_{i=1}^n T_2(F_n;X_i,X_i) + o(1)$.

Thus, the key role is played by the second order von Mises functional

(2.13)  $\bar{T}_{2n}^* = n^{-1}\sum_{i=1}^n T_2(F_n;X_i,X_i) = \int T_2(F_n;x,x)\, dF_n(x)$,

and we have

(2.14)  $(n-1)(T_n^* - T_n) + \tfrac{1}{2}\bar{T}_{2n}^* \to 0$, as $n \to \infty$.
In fact, by (2.2) through (2.5) and the fact that $\|F_n - F\| \to 0$ a.s., as $n \to \infty$,

(2.15)  $T_n = T(F) + \int T_1(F;x)\, d[F_n(x) - F(x)] + \tfrac{1}{2}\iint T_2(F;x,y)\, d[F_n(x) - F(x)]\, d[F_n(y) - F(y)] + o_p(n^{-1})$
       $= T(F) + \bar{T}_{1n} + (2n)^{-1}\, n^{-1}\sum_{i=1}^n T_2(F;X_i,X_i) + n^{-2}\sum_{1 \le i < j \le n} T_2(F;X_i,X_j) + o_p(n^{-1})$
       $= T(F) + \bar{T}_{1n} + (2n)^{-1}\bar{T}_{2n} + ((n-1)/2n)\, U_n^{(2)} + o_p(n^{-1})$,

where

(2.16)  $\bar{T}_{1n} = n^{-1}\sum_{i=1}^n T_1(F;X_i)$ and $\bar{T}_{2n} = n^{-1}\sum_{i=1}^n T_2(F;X_i,X_i)$

are averages over i.i.d.r.v.'s, while

(2.17)  $U_n^{(2)} = \binom{n}{2}^{-1}\sum_{1 \le i < j \le n} T_2(F;X_i,X_j)$

is a Hoeffding (1948) U-statistic with mean zero [by (2.5)] and is stationary of order 1 (i.e., whenever $E T_2^2 < \infty$, $E(U_n^{(2)})^2 = O(n^{-2})$). Thus, from (2.14) through (2.17), we obtain that as $n \to \infty$,
(2.18)  $(n-1)\{\, T_n^* - T(F) - \bar{T}_{1n}\,\} + \tfrac{1}{2}(\bar{T}_{2n}^* - \bar{T}_{2n}) - ((n-1)^2/2n)\, U_n^{(2)} \to 0$.

Though $E U_n^{(2)} = 0$, $\nu = E\, T_2(F;X_1,X_1)$ may not be equal to 0. However, note that

(2.19)  $\bar{T}_{2n}^* - \bar{T}_{2n} = \int [\, T_2(F_n;x,x) - T_2(F;x,x)\,]\, dF_n(x)$,

so that if the functional in (2.13) is continuous in the metric $\|F_n - F\|$ and $\nu$ is finite, by using the fact that $\|F_n - F\| \to 0$ a.s., we claim that

(2.20)  $\bar{T}_{2n}^* - \bar{T}_{2n} \to 0$ a.s. and $\bar{T}_{2n} \to \nu$ a.s., as $n \to \infty$.

Consequently, on letting

(2.21)  $R_n^{**} = (n-1)(\, T_n^* - T(F) - \bar{T}_{1n}\,)$,

we have

(2.22)  $(n-1)\{\, T_n^* - T_n\,\} + \nu/2 \to 0$ a.s., and $R_n^{**} - \tfrac{1}{2}(n\, U_n^{(2)}) \to 0$ a.s., as $n \to \infty$.

At this stage, we note that whenever $0 < \sigma_1^2 = E\, T_1^2(F;X_1) < \infty$,

(2.23)  $n^{1/2}\bar{T}_{1n} \to_{\mathcal{D}} \mathcal{N}(0, \sigma_1^2)$, as $n \to \infty$;

while, under $E\, T_2^2(F;X_1,X_2) < \infty$, it follows from Gregory (1977) that there exist $\{\lambda_k;\ k \ge 0\}$, a set of (finite or infinite collection of) eigenvalues of $T_2(\cdot)$ corresponding to orthonormal eigenfunctions $\tau_k(\cdot)$, $k \ge 0$, such that

(2.24)  $\int T_2(F;x,y)\, \tau_k(x)\, dF(x) = \lambda_k \tau_k(y)$ a.e. $(F)$, $\forall\, k \ge 0$,

(2.25)  $\int \tau_k(x)\, \tau_q(x)\, dF(x) = \delta_{kq}$, $\forall\, k, q \ge 0$,

where $\delta_{kq}$ is the usual Kronecker delta. Note that the $\tau_k(\cdot)$ and $\lambda_k$ may well depend on $F$. Then, as $n \to \infty$,

(2.26)  $P\{\, n\, U_n^{(2)} \le x\,\} \to P\{\, \sum_{k>0} \lambda_k (Z_k^2 - 1) \le x\,\}$, $x \in R$; $\lambda_0 = 0$,

where the $Z_k$ are i.i.d.r.v.'s having the standard normal distribution. [An extension of this result to the joint asymptotic d.f. of $n^{1/2}\bar{T}_{1n}$ and $n\, U_n^{(2)}$ is due to Hall (1979).] Thus, we arrive at the following:
THEOREM 2.1. Under the assumed regularity conditions, as $n \to \infty$,

(2.27)  $R_n^* + \nu/2 \to 0$, a.s.;

(2.28)  $2R_n^{**} \to_{\mathcal{D}} \sum_{k>0} \lambda_k (Z_k^2 - 1)$,

where the $\lambda_k$ and $Z_k$ are defined as in (2.24)-(2.26).
Now, (2.27) is an improvement over (3.20) of Sen (1977). It not only provides a sharper result on $R_n^*$, but also yields the even stronger result that for $\nu = 0$, $R_n^* \to 0$ a.s., as $n \to \infty$. Also, (2.28) provides the SOADR result for the classical jackknifed estimator under second order Hadamard differentiability conditions. This result is different from the parallel SOADR results for M-estimators, studied recently by Jurečková (1985) and Jurečková and Sen (1986), among others. (2.27) also suggests that jackknifing (in the classical sense) essentially amounts to a second order bias adjustment (i.e., by $-\nu/(2n)$) without making any other functional change in $T_n$. The positive effect of this feature is that if $T_n$ is any robust estimator, $T_n^*$ also remains so, while the negative effect is that the classical jackknifing, while reducing the bias (to a smaller order), shares the same lack of robustness with the initial estimator $T_n$ when the latter is not so robust. This negative feature generally calls for the choice of more robust initial estimators and/or the use of functional jackknifing to induce robustness.
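To fix ideas, here is an illustrative check added at this point (not part of the original argument), again using the variance functional and its derivatives recorded after (2.5):

$T_2(F;x,x) = -2(x - \mu(F))^2$, so $\nu = E\, T_2(F;X_1,X_1) = -2\sigma^2$, and $T_n^* \approx T_n - \nu/(2n) = T_n + \sigma^2/n$.

Since the plug-in estimator $T_n = T(F_n) = n^{-1}\sum_{i=1}^n (X_i - \bar{X}_n)^2$ has expectation $\sigma^2 - \sigma^2/n$, this adjustment of $+\sigma^2/n$ removes the $O(n^{-1})$ bias, in agreement with the familiar fact that jackknifing the plug-in variance yields the unbiased sample variance $s_n^2$.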
Looking at (1.1) and (2.10), we gather that the FJE $T_n^{0*}$ in (1.4) can generally be interpreted as a measure of location of the $T_{n,i}$. In this respect, for any d.f. $G$, defined on $R$, we define $G(x;a) = G(x - a)$, for $x, a \in R$. Then, a statistical functional $T(G)$ is termed translation-equivariant if for every $a \in R$ and $G \in \mathcal{F}$,

(2.29)  $T(G(\cdot;a)) = a + T(G(\cdot;0))$.

It is clear that the functional for $T_n^*$ in (1.3) is translation-equivariant. Similarly, for the trimmed jackknifing, considered by Hinkley and Wang (1980), the corresponding functional is also translation-equivariant. We assume that $T^0(\cdot)$ satisfies (2.29). For notational simplicity, we let

(2.30)  $T_{n,i}^* = T_{n,i} - T_n$, for $i = 1, \ldots, n$;

(2.31)  $G_n^{**}(x) = n^{-1}\sum_{i=1}^n I(T_{n,i}^* \le x)$, $x \in R$.

Then, by (1.4) and (2.31), for translation-equivariant $T^0(\cdot)$, we have

(2.32)  $T_n^{0*} = T^0(G_n^*) = T_n + T^0(G_n^{**})$.

Looking at (2.10), we also define

(2.33)  $G_n^{0*}(x) = n^{-1}\sum_{i=1}^n I(T_1(F;X_i) \le x)$, $x \in R$;

(2.34)  $G_F^{0*}(x) = P_F\{\, T_1(F;X_1) \le x\,\}$, $x \in R$.

Note that by (2.4), $E\, T_1(F;X_1) = 0$. Thus, identifying $G^*$ as the d.f. of $T(F) + T_1(F;X_1)$, we may replace (1.6) by the following:

(2.35)  $T^0(G_F^{0*}) = 0$, with $T^0(\cdot)$ translation-equivariant.

For the trimmed jackknifing, Hinkley and Wang (1980) assumed that $T_1(F;X_i)$ has a d.f. symmetric about 0, and this ensures (2.35), while, for the classical jackknifing, $E\, T_1(F;X_i) = 0$ ensures (2.35). Other conditions on $T(\cdot)$ and $T^0(\cdot)$ will be introduced as and when they are needed.
3. FJE: General asymptotics. Note that by virtue of (2.4)-(2.5), the Hadamard derivative of $T_1(F;x)$ is given by

(3.1)  $T_{1,1}(F;x,y) = T_2(F;x,y) - T_1(F;y)$.

Thus, for every $i$ $(= 1, \ldots, n)$, we have

(3.2)  $T_1(F_n;X_i) = T_1(F;X_i) + \int T_{1,1}(F;X_i,y)\, d[F_n(y) - F(y)] + o(\|F_n - F\|)$
       $= T_1(F;X_i) + n^{-1}\sum_{j=1}^n T_2(F;X_i,X_j) - n^{-1}\sum_{j=1}^n T_1(F;X_j) + o_p(n^{-1/2})$
       $= T_1(F;X_i) - \bar{T}_{1n} + \tilde{T}_{2n}(X_i) + o_p(n^{-1/2})$,

where $\tilde{T}_{2n}(X_i) = n^{-1}\sum_{j=1}^n T_2(F;X_i,X_j)$, and $\bar{T}_{1n}$, defined by (2.16), is $O_p(n^{-1/2})$, while under $E_F T_2^2 < \infty$, by (2.4)-(2.5), $E\, \tilde{T}_{2n}^2(X_i) = O(n^{-1})$, so that $\tilde{T}_{2n}(X_i) = O_p(n^{-1/2})$. This shows that $T_1(F_n;X_i)$ can be expressed as $T_1(F;X_i)$ perturbed by stochastic factors [$O_p(n^{-1/2})$] which may play a dominant role in the weak convergence of the related empirical processes. As such, we study first these weak convergence results and incorporate them subsequently in the study of the main results on the FJE. By (2.10), (2.15) and (3.2), we have

(3.3)  $T_{n,i} = T(F) + T_1(F;X_i) + \tilde{T}_{2n}(X_i) + o_p(n^{-1/2})$, for every $i = 1, \ldots, n$.
/"0*
Thus, if we define the enpirical d.f. G by letting
(3.4)
'::*
G (x)
n
n
-1 n
v
= n r 1''-1 I ( T (FiX.) + T (X.) < x) ,
2n 1 l
1
X £
R ,
then, noting that G* is the errpirical d.f. of the T . , we obtain by using (3.3)
n
n,l
and serre standard steps that
(3.5)
n
k
2
II Gn* (x)
/"0*
P
II
- G (x - T (F»
n
-+
0,
as n
-+
00
•
As such, first, we consider the weak convergence of the errpirical process related
/"0*
toG
n
(3.6)
• Ne define then a process
wn(t) = n
-~
w = { w (t), t
n
n
-1
n
rj=l T2 (Fi F
Then, by (2.4)-(2.5),
~wn(t)
(t), Xj ) ,
= 0, V t
£
t
£
£
[0,1] }, by letting
[0,1] •
[0,1], and under
~T;
<
00
,
for every
•
-9-
o~ s
~ t ~ 1,
(3.7)  $E\{\, w_n(t) - w_n(s)\,\}^2 = E\{\, T_2(F;F^{-1}(t),X_1) - T_2(F;F^{-1}(s),X_1)\,\}^2$.

We assume that there exist some finite and positive numbers $K$, $\gamma$ and $\beta$, such that

(3.8)  $E|\, w_n(t) - w_n(s)\,|^{\gamma} \le K\, |t - s|^{1+\beta}$, $\quad 0 \le s < t \le 1$.

Note that whenever $T_2(\cdot)$ satisfies a local Lipschitz condition, (3.7) ensures (3.8) with $\gamma = 2$ and $\beta = 1$. Also, we may note that (3.8) ensures the tightness of $w_n(\cdot)$, while the convergence of the finite dimensional distributions (f.d.d.) of $w_n(\cdot)$ to those of a Gaussian function $w = \{ w(t),\ t \in [0,1] \}$ follows directly by a routine use of the standard central limit theorem for i.i.d.r.v.'s. Thus, we conclude that

(3.9)  $\sup\{\, |w_n(t)| :\ t \in [0,1]\,\} = O_p(1)$,

and, for every $\epsilon > 0$ and $\eta > 0$, there exist a positive $\delta_0$ $(< 1)$ and an integer $n_0$, such that for every $n \ge n_0$ and $\delta:\ 0 < \delta \le \delta_0$,

(3.10)  $P\{\, \sup\{\, |w_n(t) - w_n(s)| :\ 0 \le s < t \le s + \delta \le 1\,\} > \epsilon\,\} < \eta$.
Then, defining $G_n^{0*}$ as in (2.33), we have the following.

LEMMA 3.1. Whenever $w_n$ converges weakly to $w$, i.e., (3.9) and (3.10) hold, as $n \to \infty$,

(3.11)  $n^{1/2}\sup_x |\, \hat{G}_n^*(x) - G_n^{0*}(x - n^{-1/2} w_n(F^{-1}(t_x)))\,| \to_P 0$,

where $t_x:\ x = T_1(F;F^{-1}(t_x))$, $x \in R$.

Proof. By (3.9), for every $\epsilon > 0$, there exists a positive $K$ $(= K_\epsilon)$, such that

(3.12)  $P\{\, \sup_{0 \le t \le 1} |w_n(t)| > K\,\} < \epsilon$, $\forall\, n \ge n_0$.

Note that by definition [in (3.2)], $\tilde{T}_{2n}(X_i) = n^{-1/2} w_n(F(X_i))$, so that by (3.12), $\max\{\, |w_n(F(X_i))| :\ 1 \le i \le n\,\} \le K$, with a probability $\ge 1 - \epsilon$, $\forall\, n \ge n_0$. Consider a partition of $R$ into $(-\infty,\, x - 2n^{-1/2}K)$, $[x - 2n^{-1/2}K,\, x + 2n^{-1/2}K]$, $(x + 2n^{-1/2}K,\, \infty)$, for a given $x$. For all $i:\ T_1(F;X_i) < x - 2n^{-1/2}K$, by (3.12), $T_1(F;X_i) + \tilde{T}_{2n}(X_i) = T_1(F;X_i) + n^{-1/2} w_n(F(X_i)) \le x - n^{-1/2}K \le x - n^{-1/2} w_n(F^{-1}(t_x))$, with a probability $\ge 1 - \epsilon$; similarly, for all $i:\ T_1(F;X_i) > x + 2n^{-1/2}K$, $T_1(F;X_i) + \tilde{T}_{2n}(X_i) > x + n^{-1/2} w_n(F^{-1}(t_x))$, with a probability $\ge 1 - \epsilon$. Finally, by (3.10), for all $i:\ x - 2n^{-1/2}K \le T_1(F;X_i) \le x + 2n^{-1/2}K$, we have

(3.13)  $T_1(F;X_i) + \tilde{T}_{2n}(X_i) = T_1(F;X_i) + n^{-1/2} w_n(F^{-1}(t_x)) + o_p(n^{-1/2})$, $\forall\, n \ge n_0$.

The rest of the proof follows by some standard arguments, and hence, is omitted.
Next, define $G_F^{0*}$ as in (2.34) and assume that it has a continuous density function $g_F^{0*}$ a.e. Then, on using (3.9), we obtain that as $n \to \infty$,

(3.14)  $\sup_x |\, n^{1/2}\{\, G_F^{0*}(x) - G_F^{0*}(x - n^{-1/2} w_n(F^{-1}(t_x))) - n^{-1/2} g_F^{0*}(x)\, w_n(F^{-1}(t_x))\,\}\,| \to_P 0$.

Thus, by (3.5), (3.9), (3.11), (3.14) and the fact that, by the classical Kolmogorov-Smirnov result,

(3.15)  $\| G_n^{0*} - G_F^{0*} \| = O_p(n^{-1/2})$,

we obtain that

$n^{1/2}\, \| G_n^*(x) - G_F^{0*}(x - T(F)) \| = O_p(1)$.

Similarly, defining $G_n^{**}$ as in (2.31), we have

(3.16)  $n^{1/2}\, \| G_n^{**}(x) - G_F^{0*}(x) \| = O_p(1)$.
Thus, by (2.4), (2.32), (2.35), (3.2) and (3.16), we have

(3.17)  $n^{1/2}(T_n^{0*} - T_n) = n^{1/2}\, T^0(G_n^{**})$
       $= n^{1/2}\int T_1^0(G_F^{0*};x)\, d[\, G_n^{**}(x) - G_F^{0*}(x)\,] + o_p(1)$
       $= n^{1/2}\int T_1^0(G_F^{0*};x)\, dG_n^{**}(x) + o_p(1)$
       $= n^{-1/2}\sum_{i=1}^n T_1^0(G_F^{0*};\, T_{n,i}^*) + o_p(1)$.

Thus, if we define the first order remainder term $R_n^{0*}$ by the left hand side of (3.17), we obtain that

(3.18)  $R_n^{0*} = o_p(1)$ when $n^{-1/2}\sum_{i=1}^n T_1^0(G_F^{0*};\, T_{n,i}^*) = o_p(1)$.

Note that (3.18) holds for the particular functional $T^0(G) = \int x\, dG$ (i.e., for the classical jackknifing), but not, in general, for the FJE (as may easily be verified with the case of the trimmed jackknifing, treated in Hinkley and Wang (1980)). Hence, for the FJE, $T_n^{0*}$ and $T_n$ (or $T_n^*$) are not asymptotically equivalent (up to the order $n^{-1/2}$), so that stronger results (such as in Theorem 2.1) may not generally be true for the FJE. Thus, for the FJE, there remains the issue of a representation for the first order remainder term $R_n^{0*}$ (in terms of independent summands) as well as for the normalized form $n^{1/2}(T_n^{0*} - T(F))$. These will be studied here.

Note that if $T_2(\cdot)$ is regular in the sense that

(3.19)  $E\{\, T_2(F_n;X_1,X_1) - T_2(F;X_1,X_1)\,\}^2 \to 0$, as $n \to \infty$,

we have
(3.20)  $\max_{1 \le i \le n}\, n^{-1}\{\, |T_2(F_n;X_i,X_i) - T_2(F;X_i,X_i)|^2\,\} \le n^{-1}\sum_{i=1}^n [\, T_2(F_n;X_i,X_i) - T_2(F;X_i,X_i)\,]^2 = \int [\, T_2(F_n;x,x) - T_2(F;x,x)\,]^2\, dF_n(x) \to_P 0$, as $n \to \infty$.

Hence, from (2.10), (2.15)-(2.18), (2.30) and (3.2), we have

(3.21)  $\max_{1 \le i \le n} |\, T_{n,i}^* - T_1(F;X_i) + \bar{T}_{1n} - \tilde{T}_{2n}(X_i)\,| = o_p(n^{-1/2})$.

Thus, by the translation-equivariance of $T^0(\cdot)$ and (3.21), we obtain that

(3.22)  $T_n^{0*} = T_n - \bar{T}_{1n} + T^0(\hat{G}_n^*) + o_p(n^{-1/2})$.

Hence, parallel to (3.17), we have

(3.23)  $R_n^{0*} = n^{1/2}(T_n^{0*} - T_n) = n^{1/2}[\, T^0(\hat{G}_n^*) - \bar{T}_{1n}\,] + o_p(1)$
       $= n^{-1/2}\sum_{i=1}^n \{\, T_1^0(G_F^{0*};\, T_1(F;X_i) + \tilde{T}_{2n}(X_i)) - T_1(F;X_i)\,\} + o_p(1)$.
Hence, if we assume that $T_1^0(\cdot)$ admits the following expansion:

(3.24)  $T_1^0(G_F^{0*};\, T_1(F;X_i) + n^{-1/2}t) = T_1^0(G_F^{0*};\, T_1(F;X_i)) + n^{-1/2}\, t\, T_{11}^0(G_F^{0*};\, T_1(F;X_i)) + o_p(n^{-1/2})$, $\forall\, |t| \le \tau < \infty$,

and denote by

(3.25)  $T_1^0(G_F^{0*};\, T_1(F;x)) - T_1(F;x) = \psi_F(x)$, $x \in R$,

then, from (3.23) through (3.25), we obtain that

(3.26)  $R_n^{0*} = n^{-1/2}\sum_{i=1}^n \psi_F(X_i) + n^{-1/2}\sum_{i=1}^n \tilde{T}_{2n}(X_i)\, T_{11}^0(G_F^{0*};\, T_1(F;X_i)) + o_p(1)$.
Next, we note that

(3.27)  $n^{-1/2}\sum_{i=1}^n \tilde{T}_{2n}(X_i)\, T_{11}^0(G_F^{0*};\, T_1(F;X_i))$
       $= n^{-3/2}\sum_{i=1}^n \sum_{j=1}^n T_2(F;X_i,X_j)\, T_{11}^0(G_F^{0*};\, T_1(F;X_i))$
       $= n^{-3/2}\sum_{i=1}^n T_2(F;X_i,X_i)\, T_{11}^0(G_F^{0*};\, T_1(F;X_i)) + n^{-3/2}\sum_{1 \le i \ne j \le n} T_2(F;X_i,X_j)\, T_{11}^0(G_F^{0*};\, T_1(F;X_i))$
       $= Q_{n1} + Q_{n2}$, say.

Note that if we assume that

(3.28)  $\mu(F) = E\, |\, T_2(F;X_1,X_1)\, T_{11}^0(G_F^{0*};\, T_1(F;X_1))\,|$ exists and is finite,

then, by the Chebyshev inequality, we obtain that

(3.29)  $Q_{n1} = O_p(n^{-1/2})$.

Similarly, we may write

(3.30)  $Q_{n2} = n^{1/2}(1 - n^{-1})\,\{\, \binom{n}{2}^{-1}\sum_{1 \le i < j \le n} \phi_F(X_i,X_j)\,\}$,

where $\phi_F(x,y) = [\, T_2(F;x,y)\, T_{11}^0(G_F^{0*};\, T_1(F;x)) + T_2(F;x,y)\, T_{11}^0(G_F^{0*};\, T_1(F;y))\,]/2$ is a symmetric kernel of degree 2. Note that by (2.4)-(2.5), $E\, \phi_F(X_i,X_j) = 0$, but $E\, \phi_F(x,X_i)$ need not be equal to 0 (a.e.), so that $Q_{n2}$ will generally have an asymptotically normal distribution. Thus, if we assume that

(3.31)  $E\, \phi_F^2(X_1,X_2) < \infty$,

and denote by

(3.32)  $\phi_{F,1}(x) = E\, \phi_F(x,X_1)$, $x \in R$, and $\zeta_1(F) = E\, \phi_{F,1}^2(X_1)$,

then, by the classical results of Hoeffding (1948) [on U-statistics], we have

(3.33)  $n^{1/2}\{\, \binom{n}{2}^{-1}\sum_{1 \le i < j \le n} \phi_F(X_i,X_j)\,\} - 2n^{-1/2}\sum_{i=1}^n \phi_{F,1}(X_i) \to_P 0$,

(3.34)  $Q_{n2} - 2n^{-1/2}\sum_{i=1}^n \phi_{F,1}(X_i) \to_P 0$.

We may recall, at this stage, that by (2.5),

(3.35)  $2\phi_{F,1}(x) = \int T_2(F;x,y)\, T_{11}^0(G_F^{0*};\, T_1(F;y))\, dF(y)$, a.e.,

so that from (3.26), (3.27), (3.29), (3.30), (3.33), (3.34) and (3.35), we arrive at the following.
THEOREM 3.1. Under the regularity conditions assumed above,

(3.36)  $R_n^{0*} = n^{-1/2}\sum_{i=1}^n \{\, \psi_F(X_i) + 2\phi_{F,1}(X_i)\,\} + o_p(1)$,

where $\psi_F$ and $\phi_{F,1}$ are defined by (3.25) and (3.35), respectively. Thus,

(3.37)  $R_n^{0*} \to_{\mathcal{D}} \mathcal{N}(0, \gamma^{0*2})$, $\quad \gamma^{0*2} = E[\, \psi_F(X_1) + 2\phi_{F,1}(X_1)\,]^2$.

Therefore, the FJE $T_n^{0*}$ and $T_n$ are asymptotically (first order) equivalent when

(3.38)  $T_1^0(G_F^{0*};\, T_1(F;x)) - T_1(F;x) + \int T_2(F;x,y)\, T_{11}^0(G_F^{0*};\, T_1(F;y))\, dF(y) = 0$ (a.e. $F$).

It is easy to verify that (3.38) may not generally hold for the FJE, although for the classical jackknifed estimator it holds trivially; see the verification below.
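Indeed (an illustrative verification added here), for the classical jackknifing one has $T^0(G) = \int x\, dG(x)$, so that $T_1^0(G;x) = x - \int y\, dG(y)$ and $T_{11}^0(G;x) = 1$. Since $\int y\, dG_F^{0*}(y) = E\, T_1(F;X_1) = 0$ by (2.4), the first two terms of (3.38) cancel, and the remaining integral $\int T_2(F;x,y)\, dF(y)$ vanishes by (2.5), so (3.38) holds.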
Also, by (2.15) and (3.22), we have

(3.39)  $n^{1/2}(T_n^{0*} - T(F)) = n^{1/2}\, T^0(\hat{G}_n^*) + o_p(1)$
       $= n^{-1/2}\sum_{i=1}^n \{\, T_1^0(G_F^{0*};\, T_1(F;X_i)) + 2\phi_{F,1}(X_i)\,\} + o_p(1)$
       $= n^{-1/2}\sum_{i=1}^n \{\, T_1^0(G_F^{0*};\, T_1(F;X_i)) + \int T_2(F;X_i,y)\, T_{11}^0(G_F^{0*};\, T_1(F;y))\, dF(y)\,\} + o_p(1)$
       $= n^{-1/2}\sum_{i=1}^n T_1^{0*}(T,F;X_i) + o_p(1)$, say.
-13-
In this setup,
T~* (T,F;x) may be identified as
the influence function of the
functional TO ( .) at the point F ; it depends also on the functional T ( .) through
the T
. •
n,~
•
(3.40)
we
0
THEOREM 3.2.
(3.41)
denote by
2
0
=
2
0*
= ~fT10* (T,F:X1 )} 2 •
IfT (T,F;x)} dF(x)
1
Under the assumed regularity conditions, for the FJE,
- T(F))
= n -1/2
n
~.
0*
l=
~ X(
1 T (T,F;X.) + 0 (1)
1
l
P
0,
0
2
o
).
It may be noted that for the classical jackknifed estimator T* ,the influence
function reduces to T (F;x), so that in (3.41),
1
0;
n
reduces to
o~
, defined i;n (2.23).
0*
However, for a general FJE, the two influence functions, T (T,F:x) and T (F;x) ,are
1
1
not the same, and this makes it possible to choose TO (.) skillfully, so that the
corresponding influence function reflects robustness in a generally interpretable
marmer [ viz., Huber (1981)]. This is perhaps the main point in favor of using a
FJE instead of the classical jackknifing. However, we may note that whereas the
..
asynptotic variance of the original and the classical jackknifed estimators are
the sane , for a general FJE,
0;
in (3.40) may not be equal to
of in
fact, if Tn is an asynptotica11y optiroa1 estiroator of T (F), then
0;
(2.23). In
is
2:.
of ' so
that while advocating rrore robustness, the FJE may lead to sorre loss of efficiency.
<il the other hand, if T
n
is not an asynptotical1y efficient estimator of T(F), then
it may be possible to choose an appropriate FJE (Le, TO C.)), such that we may have
simultaneously rrore robust and rrore efficient estimator of T (F). However, for T (F) ,
under fairly general regularity conditions, Tn
= T(Fn )
is an asynptotical1y optimal
(nonparanetric) estiroator, and hence, the last point generally does not rrerit serious
considerations, although such a consideration may arise in a paranetric setup. Since
robustness in a local sense is rrost1y confined to paranetric rrode1s, FJE may find
a better case in the parametrics than in the nonparanetrics.
cx::mrents on it in the concluding section.
vJe
shall make rrore
-14-
4. FJE: Estimation of asymptotic variance. We may note that by (1.3), (3.3), (3.9) and the usual law of large numbers for the i.i.d. $T_1(F;X_i)$,

(4.1)  $V_n^* \to \sigma_1^2$, in probability, as $n \to \infty$.

In fact, if we assume that for each $r$ $(= 0, 1, 2)$,

(4.2)  $\int T_1^r(G;x)\, T_2^{2-r}(G;x,x)\, dG(x)$ is a continuous functional of $G$, in some neighbourhood of $F$ (with respect to the norm $\|G - F\|$), in the Hadamard sense,

then, by using (2.10) along with the a.s. convergence of $\|F_n - F\|$ to 0 and the Khintchine strong law of large numbers on the $T_1(F;X_i)$ (and $T_2(F;X_i,X_i)$), we obtain that under (4.2),

(4.3)  $V_n^* \to \sigma_1^2$, a.s., as $n \to \infty$.

Thus, in the classical jackknifing, $V_n^*$, based on the pseudo variables in (1.1), serves a useful role in the estimation of the asymptotic variance $\sigma_1^2$.

In view of Theorems 3.1 and 3.2, we may note that for the FJE, in general, $V_n^*$ may not consistently estimate $\sigma_0^2$, the asymptotic variance of $n^{1/2}(T_n^{0*} - T(F))$. In some specific cases, such as the trimmed jackknifing treated in Hinkley and Wang (1980), it may be possible to introduce some sample statistics $h_{ni} = h_n(F_n;X_i)$, $i = 1, \ldots, n$, such that

(4.4)  $\max_{1 \le i \le n} |\, h_{ni} - T_1^{0*}(T,F;X_i)\,| \to 0$, as $n \to \infty$,
(in probability or a.s.), where the $T_1^{0*}(T,F;X_i)$ are defined in (3.39), so that, defining $\bar{h}_n = n^{-1}\sum_{i=1}^n h_{ni}$ and

(4.5)  $V_n^{0*} = (n-1)^{-1}\sum_{i=1}^n (h_{ni} - \bar{h}_n)^2$,

we have $V_n^{0*} \to \sigma_0^2$, as $n \to \infty$ (in probability or a.s.). For a general FJE, a natural way to choose these $h_{ni}$ is to employ the so-called two-step jackknifing. Toward this, we notice that $T_n^{0*} = T^0(G_n^*)$, where $G_n^*$ is the empirical d.f. of the $T_{n,i}$, defined in (1.1). Let $T_{n-1}^{(i)}$ (and $T_{n-2}^{(i,j)}$) be the statistic $T$ computed from a sample of size $n-1$ (and $n-2$) obtained by deleting $X_i$ (and $X_i, X_j$) from the given sample of size $n$, for $i \ne j = 1, \ldots, n$. For each $i$ $(= 1, \ldots, n)$, let us then define

(4.6)  $T_{n,i:j} = (n-1)T_{n-1}^{(i)} - (n-2)T_{n-2}^{(i,j)}$, for $j = 1, \ldots, n$ $(j \ne i)$.
If we denote the empirical d.f.'s for the sample observations in the samples of sizes $n-1$ and $n-2$ (resulting from the deletion of $X_i$ and $(X_i, X_j)$ from the complete sample of size $n$) by $F_{n-1}^{(i)}$ and $F_{n-2}^{(i,j)}$, respectively, then, using the same expansions as in earlier sections, we have the following:

(4.7)  $T_{n,i:j} = (n-1)T(F_{n-1}^{(i)}) - (n-2)T(F_{n-2}^{(i,j)})$
       $= T(F_{n-1}^{(i)}) - (n-2)[\, T(F_{n-2}^{(i,j)}) - T(F_{n-1}^{(i)})\,]$
       $= T(F_n) + [\, T(F_{n-1}^{(i)}) - T(F_n)\,] - (n-2)[\, T(F_{n-2}^{(i,j)}) - T(F_{n-1}^{(i)})\,]$
       $= T(F_n) + T_1(F_n;X_j) - (2(n-2))^{-1}[\, 2T_2(F_n;X_i,X_j) + T_2(F_n;X_j,X_j)\,] + r_{n,i:j}$,

where

(4.8)  $\max_{1 \le i \ne j \le n}\{\, n\, |r_{n,i:j}|\,\} \to 0$ a.s., as $n \to \infty$.

As such, using (2.10) along with (4.7) and (4.8), we have

(4.9)  $\max_{1 \le i \ne j \le n} |\, T_{n,i:j} - T_{n,j} + n^{-1} T_2(F_n;X_i,X_j)\,| = o(n^{-1})$ a.s., as $n \to \infty$.

For the $T_{n,i:j}$, $j = 1, \ldots, n$ $(j \ne i)$, we denote the empirical d.f. by $G_{n-1}^{*(i)}$, for $i = 1, \ldots, n$, while, as in Section 1, $G_n^*$ stands for the empirical d.f. of the $T_{n,i}$, $i = 1, \ldots, n$. Then, using (4.9) and a technique very similar to that in Lemma 3.1, it follows that

(4.10)  $\max_{1 \le i \le n}\sup_x \{\, n^{1/2}\, |\, G_{n-1}^{*(i)}(x) - G_n^*(x)\,|\,\} \to_P 0$, as $n \to \infty$.
At the second step of jackknifing, we identify that the FJE based on the $T_{n,i:j}$ (for $j = 1, \ldots, n$; $j \ne i$) is nothing but $T^0(G_{n-1}^{*(i)})$, for $i = 1, \ldots, n$. Thus, the pseudo variables generated by these FJE are given by

(4.11)  $\theta_{n,i} = n\, T^0(G_n^*) - (n-1)\, T^0(G_{n-1}^{*(i)})$, say, for $i = 1, \ldots, n$.

Using (4.9), (4.10) and (4.11), we obtain that for $T^0(\cdot)$ satisfying (2.2),

(4.12)  $\theta_{n,i} = T^0(G_n^*) - (n-1)[\, T^0(G_{n-1}^{*(i)}) - T^0(G_n^*)\,]$
       $= T^0(G_n^*) - (n-1)\int T_1^0(G_n^*;x)\, d[\, G_{n-1}^{*(i)}(x) - G_n^*(x)\,] + O(n\, \|G_{n-1}^{*(i)} - G_n^*\|^2)$
       $= T^0(G_n^*) - (n-1)\int T_1^0(G_n^*;x)\, dG_{n-1}^{*(i)}(x) + o_p(1)$
       $= T^0(G_n^*) - \sum_{j(\ne i)=1}^n T_1^0(G_n^*;\, T_{n,i:j}) + o_p(1)$
       $= T_n^{0*} - \sum_{j(\ne i)=1}^n \{\, T_1^0(G_n^*;\, T_{n,j}) - n^{-1} T_2(F_n;X_i,X_j)\, T_{11}^0(G_n^*;\, T_{n,j})\,\} + o_p(1)$
       $= T_n^{0*} - n\int T_1^0(G_n^*;x)\, dG_n^*(x) + T_1^0(G_n^*;\, T_{n,i}) + n^{-1}\sum_{j=1}^n T_2(F_n;X_i,X_j)\, T_{11}^0(G_n^*;\, T_{n,j}) - n^{-1} T_2(F_n;X_i,X_i)\, T_{11}^0(G_n^*;\, T_{n,i}) + o_p(1)$
       $= T_n^{0*} + T_1^0(G_n^*;\, T_{n,i}) + n^{-1}\sum_{j=1}^n T_2(F_n;X_i,X_j)\, T_{11}^0(G_n^*;\, T_{n,j}) + o_p(1)$,

where in the penultimate step we have made use of (3.20) and similar inequalities on the $T_{11}^0(G_n^*;\, T_{n,i})$. Thus, making use of (2.4)-(2.5) [on $T_1^0(\cdot)$ and $T_2(\cdot)$], we obtain from (4.12) that
(4.13)  $\bar{\theta}_n = n^{-1}\sum_{i=1}^n \theta_{n,i} = T_n^{0*} + 0 + 0 + o_p(1) = T_n^{0*} + o_p(1)$,

so that as $n \to \infty$,

(4.14)  $\{\, \theta_{n,i} - \bar{\theta}_n\,\} - \{\, T_1^0(G_n^*;\, T_{n,i}) + n^{-1}\sum_{j=1}^n T_2(F_n;X_i,X_j)\, T_{11}^0(G_n^*;\, T_{n,j})\,\} \to_P 0$.

Note that by definition,

(4.15)  $n^{-1}\sum_{i=1}^n \{ T_1^0(G_n^*;\, T_{n,i}) \}^2 = \int \{ T_1^0(G_n^*;x) \}^2\, dG_n^*(x)$,

where, in Section 3, we have shown that $\| G_n^*(x) - G_F^{0*}(x - T(F)) \| \to_P 0$, as $n \to \infty$, $G_F^{0*}(x - T(F))$ being the true d.f. of $T(F) + T_1(F;X_1)$. Thus, if we assume that $T_1^0(\cdot)$ is square integrable and the functional on the right hand side of (4.15) is continuous in the Hadamard sense, then the left hand side of (4.15) converges in probability to

(4.16)  $\int \{ T_1^0(G_F^{0*};x) \}^2\, dG_F^{0*}(x) = \int \{ T_1^0(G_F^{0*};\, T_1(F;x)) \}^2\, dF(x)$.

A very similar treatment applies to the other two terms in the expansion of $(n-1)^{-1}\sum_{i=1}^n (\theta_{n,i} - \bar{\theta}_n)^2$, using only the leading terms in (4.14). Thus, if we define

(4.17)  $V_n^{**} = (n-1)^{-1}\sum_{i=1}^n (\theta_{n,i} - \bar{\theta}_n)^2$,

we arrive at the following.

THEOREM 4.1. If the functionals $T$ and $T^0$ are both second order Hadamard differentiable and their Hadamard derivatives satisfy the continuity and integrability conditions discussed before, then, defining $\sigma_0^2$ as in (3.40), we have

(4.18)  $V_n^{**} \to \sigma_0^2$, in probability, as $n \to \infty$.

In passing, we may remark that (4.10) may be improved to an a.s. convergence (under no extra regularity conditions), and hence, in (4.13), (4.14) and (4.15), we may also replace $o_p(1)$ by $o(1)$ a.s., as $n \to \infty$, so that (4.18) holds a.s., as $n \to \infty$.

Note that the construction of $V_n^{**}$ is based on the FJE at the first step and the classical jackknifing at the second stage. Thus, $V_n^{**}$ may be regarded as a two-step jackknifed variance estimator; a computational sketch is given below. This two-step jackknifing avoids the arbitrariness in the choice of the $h_{ni}$ in (4.4) and provides natural estimates of the $T_1^{0*}(T,F;X_i)$.
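For completeness, the two-step construction (4.6), (4.11) and (4.17) can be coded directly. The sketch below is an illustration only: the initial functional $T$ and the functional $T^0$ are arbitrary choices, and no attempt is made at computational efficiency.

import numpy as np

def pseudo_values(sample, T):
    """First-step pseudo variables (1.1) for the statistic T on `sample`."""
    n = len(sample)
    Tn = T(sample)
    return np.array([n * Tn - (n - 1) * T(np.delete(sample, i)) for i in range(n)])

def two_step_variance(x, T, T0):
    """Two-step jackknifed variance estimator V_n^{**} of (4.17)."""
    x = np.asarray(x)
    n = len(x)
    fje_full = T0(pseudo_values(x, T))                 # T^0(G_n^*), the FJE on the full sample
    theta = np.empty(n)
    for i in range(n):
        x_i = np.delete(x, i)                          # sample of size n-1 with X_i deleted
        fje_del = T0(pseudo_values(x_i, T))            # T^0(G_{n-1}^{*(i)}), FJE based on the T_{n,i:j}
        theta[i] = n * fje_full - (n - 1) * fje_del    # second-step pseudo variable, (4.11)
    return theta.var(ddof=1)                           # (n-1)^{-1} sum (theta_i - theta_bar)^2, (4.17)

rng = np.random.default_rng(2)
x = rng.standard_normal(30)
V2 = two_step_variance(x, np.median, lambda v: np.mean(np.sort(v)[2:-2]))  # a crude trimmed mean as T^0
print(V2)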
5. Some general discussions. In Section 3, we have mainly stressed the asymptotic normality of the FJE and the representation of the first order remainder term. It is possible to extend the asymptotic normality result to a weak invariance principle for the partial sequence $\{\, n^{-1/2} k (T_k^{0*} - T(F));\ k \le n\,\}$, as $n \to \infty$. A key to this weak invariance principle is provided by the following well known result on the empirical d.f. $F_n$:

(5.1)  $\max_{k \le n}\sup_x \{\, n^{-1/2}\, k\, | F_k(x) - F(x) |\,\} = O_p(1)$.

As such, in (3.5), Lemma 3.1 and elsewhere, we may replace the $G_n^*$ and $\hat{G}_n^*$ by $G_k^*$ and $\hat{G}_k^*$, and $n^{1/2}$ by $n^{-1/2}k$, take a maximum of the resulting quantities over $k \le n$, and obtain the same convergence results. Thus, (3.39) may be extended to

(5.2)  $\max_{k \le n} \{\, n^{-1/2}\, |\, k(T_k^{0*} - T(F)) - \sum_{i=1}^k T_1^{0*}(T,F;X_i)\,|\,\} = o_p(1)$, as $n \to \infty$.

Since the weak invariance principle holds for the partial sums $\{\, n^{-1/2}\sum_{i=1}^k T_1^{0*}(T,F;X_i),\ k \le n\,\}$ (under the usual square integrability condition), by (5.2), the same result holds for $\{\, n^{-1/2} k (T_k^{0*} - T(F)),\ k \le n\,\}$. This extends Theorems 3.3.1 and 3.3.2 of Sen (1981) to the FJE.

We may recall that in Section 3 we have stressed the utility of the FJE from the basic consideration of robustness. However, we may note that whereas in the classical jackknifing, under the usual second order Hadamard differentiability of $T(\cdot)$, we are able to reduce the bias of $T_n^*$ to $o(n^{-1})$, such a stronger result on $T_n^{0*}$ may need extra regularity conditions on the $T(\cdot)$ and $T^0(\cdot)$. However, in the asymptotic case, as we have shown that the bias of the FJE is $o(n^{-1/2})$, this refinement may not really be of much importance, although the FJE may thus not serve the primary function of bias reduction to the extent the classical jackknifing does. Further, for the FJE, we may need extra manipulations (as in Section 4) to serve the dual role of variance estimation. Thus, in advocating the use of a general FJE, we should take into account the increased robustness aspects of the FJE, which need to be sufficient to counterbalance the deficiency in the other two aspects.

We have confined ourselves, so far, to univariate d.f.'s. There is no problem in considering multivariate d.f.'s (for the $X_i$); the concept of compact-differentiability remains intact for this general case too, and the manipulations can be made on parallel lines. Further, we can also consider a vector of statistical functionals and employ the FJE in the same manner as in Sections 1 and 2. Because of the coordinatewise representation, the parallel results for the vector case follow on similar lines. Use of multi-sample statistical functionals for the FJE also poses no problem. For example, for a two-sample statistic $T_{n_1,n_2}$, the pseudo variables in (1.1) should be taken as $n_1 n_2 T_{n_1,n_2} - (n_1 - 1) n_2 T_{n_1-1,n_2} - n_1 (n_2 - 1) T_{n_1,n_2-1} + (n_1 - 1)(n_2 - 1) T_{n_1-1,n_2-1}$, and the results in Sections 2, 3 and 4 can be extended in a natural fashion; a small sketch of these two-sample pseudo variables follows.
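As an illustration of the two-sample pseudo variables just displayed (a sketch only, with an arbitrary two-sample statistic and no attempt at efficiency), one value is produced for each pair of deleted observations:

import numpy as np

def two_sample_pseudo_values(x, y, T):
    """Two-sample pseudo variables: one per deleted pair (X_i, Y_j)."""
    x, y = np.asarray(x), np.asarray(y)
    n1, n2 = len(x), len(y)
    T_full = T(x, y)
    pv = np.empty((n1, n2))
    for i in range(n1):
        x_i = np.delete(x, i)
        T_x = T(x_i, y)                      # X_i deleted from the first sample
        for j in range(n2):
            y_j = np.delete(y, j)
            T_y = T(x, y_j)                  # Y_j deleted from the second sample
            T_xy = T(x_i, y_j)               # both deleted
            pv[i, j] = (n1 * n2 * T_full - (n1 - 1) * n2 * T_x
                        - n1 * (n2 - 1) * T_y + (n1 - 1) * (n2 - 1) * T_xy)
    return pv

# e.g. the difference of sample medians as an (arbitrary) two-sample statistic
rng = np.random.default_rng(3)
pv = two_sample_pseudo_values(rng.standard_normal(12), rng.standard_normal(15),
                              lambda a, b: np.median(a) - np.median(b))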
Finally, we may comment on the choice of the functional $T^0(\cdot)$ on which to base the FJE. For the classical jackknifing, the reverse martingale structure of the conditional expectations of the $T_n$, given the non-increasing tail sigma-fields $\mathcal{B}_n$, $n \ge 1$ [viz., Sen (1977)], provides the desired tools for the asymptotic study. For a general FJE, such a strong justification has yet to be found. Since the robustness consideration dominates the choice of a FJE, it has to be made so judiciously that the possible increase in the asymptotic variance $\sigma_0^2$ (over $\sigma_1^2$) and the other negative features with respect to the bias reduction and variance estimation can be justified rationally. Note that (2.35) is not needed for the classical jackknifing, but, sans (2.35), the FJE may be undesirable, so that in the robustness picture, the effect of possible departures from (2.35) needs to be studied too. For the trimmed jackknifing, Hinkley and Wang (1980) assumed the symmetry of the d.f. of the $T_1(F;X_i)$, which ensures (2.35). It appears that any departure from this symmetry may affect the trimmed jackknifing, while the classical jackknifing is not affected by this deviation. In parametric models, for local robustness aspects, of course, suitable $T^0(\cdot)$ can be constructed to meet these requirements. However, judged from the nonparametric (global) robustness aspects, such a $T^0(\cdot)$ may not work out well. Recently, Fernholz (1983) has worked out neatly the von Mises calculus for statistical functionals, covering the parametric as well as nonparametric cases. This treatment allows us to have a broader choice of $T^0$ satisfying the regularity conditions assumed in the earlier sections. In particular, the choice of M-functionals or L-functionals may preserve the global robustness in a more general framework, and may be advocated on general considerations.
"
--
[lJ
FERNHOLZ, L.T. (1983). Von Mises Calculus for Statistical Functionals. Lecture
Notes in statist. 19, Springer-Verlag, New York.
[2]
GREGORY, G.G. (1977). Large sample theory for U-statistics and test of fit.
Ann. Statist. 5 ,110-123.
[3]
HALL, P. (1979). On the invariance principle for U-statistics. Stoch. Proc.
Appl. 9, 163-174.
[4]
HINKLEY, D. and WANG, H.-L. (1980). A trimned jackknife. J. Roy. Statist. Soc.
B 42, 347-356.
[5]
HOEFFDING, W. (1948). A class of statistics with asyrrptotically nonnal distribution. Ann. Math. Statist. 19, 293-325.
[6J
HUBER, P.J. (1981). Robust Statistics. Wiley, New York.
17J
JURECKOVA, J. (1985). Representation of I+estimators with the second order
asymptotic distribution. Statist. Dec. 3, 263-276.
[8J
'V'
"
JURECKOVA,
J. and SEN, P.K. (1986). A second order asymptotic distributional
representation of M-estimators with discontinuous score functions. Ann.
Probability (under consideration of publication).
v
"
19] MILLER, R.G. Jr. (1974). An unbalanced jackknife. Ann. Statist. 2, 880-891.
[10] PARR, w.c. (1983). Jackknifing differentiable statistical functionals.
Tech. Report No. 194, Univ. Florida, Gainesville.
[llJ RAe, J .S. and SETHURAMAN, J. (1975). Neak convergence of etll'irical distribution
functions of randan variables subject to perturbations and scale factors.
Ann. Statist. 3, 299-313.
I12J SEN, P.K. (1977).Sore invariance principles relating to jackknifing and their role
in sequential analysis. Ann. Statist. 5, 316-329.
[13J.S~J,
P.K. (1981). Sequential Nonparametrics: Invariance Principles and Statistical
Inference. Wiley, New York.