HADAMARD DIFFERENTIABILITY OF STATISTICAL FUNCTIONALS

JIAN-JIAN REN AND PRANAB KUMAR SEN

University of North Carolina at Chapel Hill
Robust (M-) estimation in linear models generally involves statistical functional processes. For drawing statistical conclusions (in large samples), some (uniform) linear functional approximations are usually needed for such functionals. In this context, the role of Hadamard differentiability is critically examined.
AMS (1980) Subject Classifications: 62F35, 62G99.
Key words and phrases: asymptotic linearity, Fréchet differentiability, Gâteaux differentiability, Hadamard differentiability, influence function, robust (M-) estimation, statistical functional, statistical functional process.

1. INTRODUCTION

In nonparametric models, a parameter $\theta$ $(= T(F))$ is regarded as a functional $T(\cdot)$ on a space of distribution functions (d.f.) $F$. Thus, the same functional of the sample d.f. $F_n$ (i.e., $T(F_n)$) is regarded as a natural estimator of $\theta$. Using a form of the Taylor expansion involving the derivatives of the functional, von Mises [7] expressed $T(F_n)$ as

$$ T(F_n) = T(F) + T_F'(F_n - F) + \mathrm{Rem}(F_n - F;\ T(\cdot)), \qquad (1.1) $$
where $T_F'$ is the derivative of the functional at $F$ and $\mathrm{Rem}(F_n - F;\ T(\cdot))$ is the remainder term in this first order expansion. Note that $F_n(x)$ $(= n^{-1}\sum_{i=1}^n I(X_i \le x))$ is based on $n$ independent and identically distributed random variables (i.i.d.r.v.) $X_1, \ldots, X_n$, each having the d.f. $F$, and $T_F'(F_n - F)$ is a linear functional. Hence, $T_F'(F_n - F)$ is an average of $n$ i.i.d.r.v.'s. For drawing statistical conclusions (in large samples), $T_F'$ plays the basic role, and in this context, it remains to show that $\mathrm{Rem}(F_n - F;\ T(\cdot))$ is negligible to the desired extent. Appropriate differentiability conditions are usually incorporated towards this verification.
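As a minimal numerical sketch of this decomposition (added here for illustration, not part of the original development), one can take the variance functional, whose influence function at $F$ is $(x-\mu)^2 - \sigma^2$; the remainder then equals $-(\bar X_n - \mu)^2$, and the sketch below shows that $\sqrt{n}\,|\mathrm{Rem}|$ shrinks as $n$ grows. The standard normal $F$ and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def T(sample):
    """Variance functional evaluated at the empirical d.f. F_n of `sample`."""
    return np.mean((sample - sample.mean()) ** 2)

mu, sigma2 = 0.0, 1.0                      # F = N(0, 1), so theta = T(F) = 1
for n in (10**2, 10**4, 10**6):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    ic = (x - mu) ** 2 - sigma2            # influence function values IC(X_i; F, T)
    linear_term = ic.mean()                # T'_F(F_n - F): an average of n i.i.d. terms
    remainder = T(x) - sigma2 - linear_term
    print(n, np.sqrt(n) * abs(remainder))  # decreases: the remainder is negligible
```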
We also observe that a statistical functional induces a functional on the space $D[0,1]$ (of right continuous functions having left hand limits) in the following way:

$$ \tilde T(G) = T(G \circ F), \qquad G \in D[0,1]. \qquad (1.2) $$

Thus, (1.1) can be written equivalently as

$$ \tilde T(U_n) = \tilde T(U) + \tilde T_U'(U_n - U) + \mathrm{Rem}(U_n - U;\ \tilde T(\cdot)), \qquad (1.3) $$

where $U_n$ is the empirical d.f. of the $Y_i$ $(= F(X_i))$, $1 \le i \le n$, and $U$ is the classical uniform d.f. on $[0,1]$ (i.e., $U(t) = t$, $0 \le t \le 1$). Since the expansion in (1.1), rewritten in (1.3), is based on some kind of differentiation, it is quite natural to inquire about the right form of such a differentiation to suit the desired purpose. The current literature is based on an extensive use of the Fréchet derivatives, which are generally too stringent. Less restrictive concepts involve the Gâteaux and Hadamard (or compact) derivatives (viz., Kallianpur [4], Reeds [6] and Fernholz [2], among others). Using the Hadamard differentiability (along with some other regularity conditions), Reeds [6] has shown that
$$ \sqrt{n}\,\{\mathrm{Rem}(U_n - U;\ \tilde T(\cdot))\} \xrightarrow{P} 0, \quad \text{as } n \to \infty, \qquad (1.4) $$

so that, noting that $\tilde T_U'(U_n - U) = n^{-1}\sum_{i=1}^n \mathrm{IC}(X_i; F, T)$, where $\mathrm{IC}(x; F, T)$ is the influence function of $T$ at $F$, and assuming that $\sigma^2 = \mathrm{Var}_F\, \mathrm{IC}(X_i; F, T) < \infty$, one obtains that

$$ \sqrt{n}\,(T(F_n) - \theta) - \sqrt{n}\,\tilde T_U'(U_n - U) \xrightarrow{P} 0 \quad \text{and hence} \quad \sqrt{n}\,(T(F_n) - \theta) \xrightarrow{\mathcal D} N(0, \sigma^2). \qquad (1.5) $$
In the context of the law of iterated logarithm or some almost sure (a.s.) representation for $T(F_n)$, one may require a stronger mode of convergence in (1.4), and this, in turn, may require a more stringent differentiability condition. However, in a majority of statistical applications, Hadamard differentiability suffices, and we shall explore this concept in the context of functional processes arising in robust (M-) estimation in simple linear models.
Consider the simple linear model:

$$ X_i = \beta' c_i + e_i, \qquad i \ge 1, \qquad (1.6) $$

where the $c_i$ are known $p$-vectors of regression constants, $\beta = (\beta_1, \ldots, \beta_p)'$ is the vector of unknown (regression) parameters, $p \ge 1$, and the $e_i$ are i.i.d.r.v. with d.f. $F(\varepsilon)$. Based on a suitable score function $\psi : R \to R$, an M-estimator $\hat\beta_n$ of $\beta$ is defined as a solution (with respect to $\theta$) of the following equations:

$$ \sum_{i=1}^n c_i\, \psi(X_i - \theta' c_i) \doteq 0, \qquad (1.7) $$

where "$\doteq 0$" accommodates the possibility of the left hand side being closest to $0$ when equality in (1.7) is unattainable (such a case may arise when $\psi$ is not continuous everywhere). Setting $Z_i = X_i - \beta' c_i$ (i.i.d. with d.f. $F$), we shall see that the following empirical function

$$ S_n^*(t,u) = \sum_{i=1}^n c_{ni}\, I\bigl(Z_i \le F^{-1}(t) + u' c_{ni}\bigr) \qquad (1.8) $$

arises typically in the study of the asymptotic properties of $\hat\beta_n$, where the $c_{ni}$ are suitably normalized versions of the $c_i$.
For example, we may set, for every $n \ge p$,

$$ C_n = \sum_{i=1}^n c_i c_i', \qquad (1.9) $$

$$ C_n^0 = \mathrm{Diag}\bigl(c_{n11}^{1/2}, \ldots, c_{npp}^{1/2}\bigr), \qquad (1.10) $$

where $c_{n11}, \ldots, c_{npp}$ are the diagonal elements of $C_n$, and

$$ c_{ni} = (C_n^0)^{-1} c_i, \qquad i = 1, \ldots, n. \qquad (1.11) $$

Thus, letting

$$ u = C_n^0(\theta - \beta) \qquad (1.12) $$

and

$$ M_n(u) = \sum_{i=1}^n c_{ni}\, \psi(Z_i - u' c_{ni}), \qquad (1.13) $$

we see that (1.7) is equivalent to

$$ M_n(u) \doteq 0 \qquad (\text{with respect to } u). \qquad (1.14) $$
The solution of this implicit (set of) equations is greatly facilitated by the following type of (Jurečková-) linearity for M-processes: for every finite $K$: $0 < K < \infty$,

$$ \sup\bigl\{\|M_n(u) - M_n(0) + \gamma\, Q_n u\| :\ \|u\| \le K\bigr\} \xrightarrow{P} 0, \quad \text{as } n \to \infty, \qquad (1.15) $$

where $\|\cdot\|$ stands for the Euclidean norm, $\gamma = \int \psi'\, dF$ and

$$ Q_n = \sum_{i=1}^n c_{ni} c_{ni}' = (C_n^0)^{-1} C_n (C_n^0)^{-1}. $$

Under various conditions on the $c_{ni}$, the score function $\psi$ and the d.f. $F$, (1.15) has been established (Jurečková [3]), and this provides an easy access to the study of the asymptotic properties of the M-estimator $\hat\beta_n$.
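The sketch below, added as an illustration and not part of the original text, simulates the model (1.6) with $p = 1$, uses the Huber score as one admissible choice of $\psi$, and evaluates $\sup_{|u|\le K}|M_n(u) - M_n(0) + \gamma Q_n u|$ on a grid; for normal errors, $\gamma = \int\psi'\,dF$ equals $P(|e_1| \le k)$ for the Huber bound $k$. All concrete choices (the $c_i$, $k$, the sample size) are hypothetical.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def psi(x, k=1.345):
    """Huber score function (one admissible choice of psi)."""
    return np.clip(x, -k, k)

n, K = 5000, 3.0
c = rng.uniform(0.5, 1.5, size=n)           # regression constants c_i  (p = 1)
c_n = c / np.sqrt(np.sum(c ** 2))           # normalized c_{ni}:  sum_i c_{ni}^2 = 1
Z = rng.normal(size=n)                      # errors e_i with d.f. F = N(0, 1)

def M_n(u):
    """M-process (1.13) for p = 1: sum_i c_{ni} psi(Z_i - u c_{ni})."""
    return np.sum(c_n * psi(Z - u * c_n))

gamma = erf(1.345 / sqrt(2.0))              # integral of psi' dF = P(|e_1| <= 1.345)
Q_n = np.sum(c_n ** 2)                      # equals 1 for these c_{ni}
dev = max(abs(M_n(u) - M_n(0.0) + gamma * Q_n * u) for u in np.linspace(-K, K, 61))
print(dev)                                  # small for large n, in line with (1.15)
```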
We consider a different approach here. For simplicity of presentation, we consider explicitly the case of $p = 1$, i.e., the $c_{ni}$ are real numbers. We consider a statistical functional process

$$ \Bigl\{\, \tilde T\Bigl(S_n^*(\cdot,u)\Big/\textstyle\sum_{j=1}^n c_{nj}\Bigr) :\ u \in R \,\Bigr\}, $$

where $\tilde T$ is given by (1.2), and for this statistical functional process, the results corresponding to (1.4) are shown in this paper. Since, for each $u \in R$, $M_n(u)$ is a linear functional of $S_n^*(t,u) = \sum_{i=1}^n c_{ni} I(Z_i \le F^{-1}(t) + u\,c_{ni})$, viz.,

$$ M_n(u) = \int \psi(F^{-1}(t))\, d S_n^*(t,u), \qquad (1.16) $$

$M_n(u)$ could be the Hadamard derivative of a certain functional $\tilde T$. Thus, using the results of this paper, for a proper functional $\tilde T$, we have, for any $K > 0$,

$$ \sup_{|u|\le K}\Bigl|\, \sum_{i=1}^n c_{ni}\Bigl[\tilde T\Bigl(S_n^*(\cdot,u)\Big/\textstyle\sum_{j} c_{nj}\Bigr) - \tilde T\Bigl(S_n^*(\cdot,0)\Big/\textstyle\sum_{j} c_{nj}\Bigr)\Bigr] - [M_n(u) - M_n(0)] \,\Bigr| \xrightarrow{P} 0. \qquad (1.17) $$

Therefore, (1.15) follows from showing

$$ \sup_{|u|\le K}\Bigl|\, \sum_{i=1}^n c_{ni}\Bigl[\tilde T\Bigl(S_n^*(\cdot,u)\Big/\textstyle\sum_{j} c_{nj}\Bigr) - \tilde T\Bigl(S_n^*(\cdot,0)\Big/\textstyle\sum_{j} c_{nj}\Bigr)\Bigr] + \gamma\, u \,\Bigr| \xrightarrow{P} 0. \qquad (1.18) $$
Some notation along with the basic assumptions are presented in Section 2. In the same section, the notion of statistical functionals and the concept of Hadamard differentiability are also introduced. The main results along with part of their derivations are considered in Section 3. The proof of Theorem 3.1 is given separately in Section 4.
2. PRELIMINARY NOTIONS

Consider the $D[0,1]$ space (of right continuous real valued functions with left limits) endowed with the uniform topology.† The space $C[0,1]$ of real valued continuous functions, endowed with the uniform topology, is a subspace of $D[0,1]$.

† Although $D[0,1]$ is endowed with the Skorohod $J_1$ topology, in view of our uniformity result in (1.15), we shall use the uniform topology.

For every $u \in R$, denote by

$$ S_n^*(t,u) = \sum_{i=1}^n c_{ni}\, I\bigl(Z_i \le F^{-1}(t) + u\, c_{ni}\bigr), \qquad t \in [0,1], \qquad (2.1) $$

where the $c_{ni}$ are all given real numbers, and the $Z_i$ are i.i.d.r.v. with the d.f. $F$. It is easy to see that, for every $u$ $(\in R)$, $S_n^*(\cdot,u)$ is an element of $D[0,1]$.
The population counterparts of the $I(Z_i \le F^{-1}(t) + u\,c_{ni})$ are the $F(F^{-1}(t) + u\,c_{ni})$, and this leads us to consider the following:

$$ S_n(t,u) = \sum_{i=1}^n c_{ni}\, F\bigl(F^{-1}(t) + u\, c_{ni}\bigr), \qquad t \in [0,1],\ u \in R. \qquad (2.2) $$

We also write, for every $i \ge 1$,

$$ c_{ni}^+ = \max(c_{ni}, 0), \qquad c_{ni}^- = \max(-c_{ni}, 0), \qquad (2.3) $$

$$ S_n^{*+}(t,u) = \sum_{i=1}^n c_{ni}^+\, I\bigl(Z_i \le F^{-1}(t) + u\, c_{ni}\bigr), \qquad (2.4) $$

$$ S_n^{*-}(t,u) = \sum_{i=1}^n c_{ni}^-\, I\bigl(Z_i \le F^{-1}(t) + u\, c_{ni}\bigr), \qquad (2.5) $$

so that

$$ S_n^*(t,u) = S_n^{*+}(t,u) - S_n^{*-}(t,u), \qquad t \in [0,1],\ u \in R. \qquad (2.6) $$
Let then

$$ W_n(t,u) = S_n^*\bigl(t,(2u-1)K\bigr) - S_n\bigl(t,(2u-1)K\bigr), \qquad (t,u) \in [0,1]^2, \qquad (2.7) $$

where $K$ is a positive real number, and let

$$ W_n^0(t,u) = S_n\bigl(t,(2u-1)K\bigr) - t\sum_{i=1}^n c_{ni}, \qquad (t,u) \in [0,1]^2. \qquad (2.8) $$

We also denote by

$$ (d_n^+)^2 = \sum_{i=1}^n (c_{ni}^+)^2 \qquad \text{and} \qquad (d_n^-)^2 = \sum_{i=1}^n (c_{ni}^-)^2, \qquad (2.9) $$
and, for any function $f$ on $[0,1]^2$,

$$ w_f(\delta) = \sup\bigl\{|f(t,u) - f(s,v)| :\ |t-s| \le \delta,\ |u-v| \le \delta\bigr\}. \qquad (2.10) $$

Some assumptions, which may be required for our main results, are given below:

(A1) $\sum_{i=1}^n c_{ni}^2 = 1$;

(A2) $\overline{\lim}_{n\to\infty}\ n \max_{1\le i\le n} c_{ni}^2 < \infty$;

(B) $F$ is absolutely continuous with density function $F'$ which is positive and continuous with limits at $\pm\infty$.
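As a concrete (and purely illustrative) rendering of these definitions and assumptions, the sketch below builds normalized constants $c_{ni} = c_i/(\sum_j c_j^2)^{1/2}$ from bounded positive $c_i$, checks (A1) and (A2) numerically, and evaluates $S_n^*$, $S_n$ and $W_n^0$ at one point; the choice $F = N(0,1)$ and the particular $c_i$ are assumptions made only for the example.

```python
import numpy as np
from math import erf, sqrt, pi

rng = np.random.default_rng(3)
n, K = 4000, 2.0

c = rng.uniform(0.5, 1.5, size=n)               # regression constants c_i (all positive here)
c_n = c / np.sqrt(np.sum(c ** 2))               # normalized c_{ni}
Z = rng.normal(size=n)                          # Z_i i.i.d. with d.f. F = N(0, 1)

# (A1) and (A2): the sum of squares is exactly one, and n * max_i c_{ni}^2 stays bounded in n.
print(np.sum(c_n ** 2), n * np.max(c_n ** 2))

F = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # d.f. of N(0, 1); F'(0) = 1/sqrt(2 pi)
t, v = 0.5, 1.0                                 # F^{-1}(t) = 0; v plays the role of (2u-1)K

s_star = np.sum(c_n * (Z <= 0.0 + v * c_n))                      # S*_n(t, v) of (2.1)
s_pop = np.sum(c_n * np.array([F(0.0 + v * ci) for ci in c_n]))  # S_n(t, v) of (2.2)
print(s_star - s_pop)                                            # the fluctuation W_n of (2.7)
print(s_pop - t * np.sum(c_n), v / sqrt(2.0 * pi))               # W0_n of (2.8) vs v * F'(F^{-1}(t))
```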
In order to prove our main results in Section 3, some basic concepts about statistical functionals and Hadamard differentiability are needed.

DEFINITION. Let $X_1, \ldots, X_n$ be a sample from a population with d.f. $F$ and let $T_n = T_n(X_1, \ldots, X_n)$ be a statistic. If $T_n$ can be written as a functional $T$ of the empirical d.f. $F_n$ corresponding to the sample $X_1, \ldots, X_n$, i.e., $T_n = T(F_n)$, where $T$ does not depend on $n$, then $T_n$ will be called a statistical functional.

We actually consider an extended statistical functional, or a statistical functional process:

$$ T_{n,u} = T\Bigl(S_n^*(F(\cdot),u)\Big/\textstyle\sum_{i=1}^n c_{ni}\Bigr), \qquad u \in R. $$

The domain of definition of $T$ is assumed to contain $S_n^*(F(\cdot),u)/\sum_{i=1}^n c_{ni}$ for all $n \ge 1$ and $u \in R$, as well as the population d.f. $F$. Usually, the range of $T$ will be the set of real numbers.

Any statistical functional $T$ induces a functional $\tilde T$ on $D[0,1]$ by the relation in (1.2). In Section 3 and Section 4, we will always assume that the functional $\tilde T$ is induced by a statistical functional $T$.
DEFINITION. The influence function (IF) or influence curve (IC) of a statistic $T$ at a d.f. $F$ is usually defined by

$$ \mathrm{IC}(x; F, T) = \frac{d}{dt}\, T\bigl(F + t(\delta_x - F)\bigr)\Big|_{t=0}, $$

where $\delta_x$ is the d.f. of the point mass one at $x$.
Let $V$ and $W$ be topological vector spaces and $L(V,W)$ be the set of continuous linear transformations from $V$ to $W$. Let $\mathcal A$ be an open subset of $V$.

DEFINITION. A functional $T : \mathcal A \to W$ is Hadamard-differentiable (or compact differentiable) at $F \in \mathcal A$ if there exists $T_F' \in L(V,W)$ such that, for any compact set $\Gamma$ of $V$,

$$ \lim_{t\to 0}\ \frac{T(F + tH) - T(F) - T_F'(tH)}{t} = 0 \qquad (2.11) $$

uniformly for any $H \in \Gamma$. The linear function $T_F'$ is called the Hadamard derivative of $T$ at $F$.
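As a toy verification of this definition (added here for illustration; the functional below is not one used in the paper), take $V = D[0,1]$ with the uniform norm and $T(G) = \bigl(\int_0^1 G(t)\,dt\bigr)^2$, with candidate derivative $T_F'(H) = 2\bigl(\int_0^1 F\bigr)\bigl(\int_0^1 H\bigr)$:

```latex
\[
  T(F+tH) - T(F) - T_F'(tH)
  = \Bigl(\int_0^1 F + t\int_0^1 H\Bigr)^2 - \Bigl(\int_0^1 F\Bigr)^2
    - 2t\Bigl(\int_0^1 F\Bigr)\Bigl(\int_0^1 H\Bigr)
  = t^2\Bigl(\int_0^1 H\Bigr)^2 ,
\]
\[
  \text{so}\qquad
  \frac{T(F+tH) - T(F) - T_F'(tH)}{t} = t\Bigl(\int_0^1 H\Bigr)^2
  \le t\,\|H\|^2 \;\longrightarrow\; 0 \qquad (t \to 0),
\]
```

uniformly over any compact (hence bounded) set $\Gamma \subset V$, so this $T$ is Hadamard (indeed Fréchet) differentiable at every $F$.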
For convenience sake, in (2.11) we usually denote

$$ \mathrm{Rem}(tH) = T(F + tH) - T(F) - T_F'(tH); \qquad (2.12) $$

then, correspondingly, in Section 3 and Section 4 we always use

$$ \mathrm{Rem}(tH) = \tilde T(U + tH) - \tilde T(U) - \tilde T_U'(tH). \qquad (2.13) $$

By the two previous definitions, we can easily see that the existence of the Hadamard derivative implies the existence of the influence curve, and we also have

$$ \mathrm{IC}(x; F, T) = \tilde T_U'\bigl((\delta_x - F)\circ F^{-1}\bigr). \qquad (2.14) $$
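The definition of the influence curve can also be checked numerically; the sketch below (an illustration added here, using the variance functional and an empirical d.f. in place of $F$) compares the finite-difference Gâteaux quotient with the closed-form $\mathrm{IC}(x; F, T) = (x-\mu)^2 - \sigma^2$. The sample, the contamination point and the step size are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(4)

def T_var(weights, atoms):
    """Variance functional of the discrete d.f. with the given atoms and weights."""
    m = np.sum(weights * atoms)
    return np.sum(weights * (atoms - m) ** 2)

x0, t = 2.0, 1e-4
sample = rng.normal(size=10000)
atoms = np.append(sample, x0)
w_emp = np.append(np.full(sample.size, 1.0 / sample.size), 0.0)    # F_n
w_mix = np.append(np.full(sample.size, (1 - t) / sample.size), t)  # (1-t) F_n + t delta_{x0}
ic_numeric = (T_var(w_mix, atoms) - T_var(w_emp, atoms)) / t       # Gateaux quotient at F_n
ic_formula = (x0 - sample.mean()) ** 2 - T_var(w_emp, atoms)       # (x - mu)^2 - sigma^2 at F_n
print(ic_numeric, ic_formula)                                      # approximately equal
```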
3. MAIN RESULTS

THEOREM 3.1. Suppose $\tilde T : D[0,1] \to R$ is a functional and is Hadamard differentiable at $U$. Assume (A1), (A2) and (B). Then, for any $K > 0$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \xrightarrow{P} 0, \quad \text{as } n \to \infty. \qquad (3.1) $$

Therefore, we have, as $n \to \infty$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\Bigl[\tilde T\Bigl(\frac{S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}}\Bigr) - \tilde T(U(\cdot))\Bigr] - \tilde T_U'\Bigl(S_n^*(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}\Bigr)\ \Bigr| \xrightarrow{P} 0. \qquad (3.2) $$
The proof of Theorem 3.1 will be given in Section 4. When (A1) is not satisfied, we have the following theorem.

THEOREM 3.2. Suppose $\tilde T : D[0,1] \to R$ is a functional and is Hadamard differentiable at $U$. Assume (A2) and (B), and assume

$$ \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n}\ (c_{ni}^+)^2/(d_n^+)^2 < \infty \qquad \text{and} \qquad \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n}\ (c_{ni}^-)^2/(d_n^-)^2 < \infty. $$

Then, for any $K > 0$, as $n \to \infty$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}^+\, \tilde T\Bigl(\frac{S_n^{*+}(\cdot,u)}{\sum_{i=1}^n c_{ni}^+}\Bigr) - \sum_{i=1}^n c_{ni}^-\, \tilde T\Bigl(\frac{S_n^{*-}(\cdot,u)}{\sum_{i=1}^n c_{ni}^-}\Bigr) - \tilde T(U)\sum_{i=1}^n c_{ni} - \tilde T_U'\Bigl(S_n^*(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}\Bigr)\ \Bigr| \xrightarrow{P} 0. \qquad (3.3) $$

Proof. Denote $\bar c_{ni}^+ = c_{ni}^+/d_n^+$ and $\bar c_{ni}^- = c_{ni}^-/d_n^-$. By the assumptions on the $c_{ni}$,

$$ \sum_{i=1}^n (\bar c_{ni}^+)^2 = 1, \qquad \sum_{i=1}^n (\bar c_{ni}^-)^2 = 1, \qquad \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n} (\bar c_{ni}^+)^2 < \infty \qquad \text{and} \qquad \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n} (\bar c_{ni}^-)^2 < \infty; $$
hence, for the $\bar c_{ni}^+$ and $\bar c_{ni}^-$, (A1) and (A2) are satisfied. We also observe, for $|u| \le K$,

$$ \frac{S_n^{*+}(t,u)}{\sum_{i=1}^n c_{ni}^+} = \frac{\sum_{i=1}^n c_{ni}^+\, I(Z_i \le F^{-1}(t) + c_{ni}^+ u)}{\sum_{i=1}^n c_{ni}^+} = \frac{\sum_{i=1}^n \bar c_{ni}^+\, I(Z_i \le F^{-1}(t) + \bar c_{ni}^+ u_1)}{\sum_{i=1}^n \bar c_{ni}^+}, $$

where $u_1 = d_n^+ u$. Therefore, by Theorem 3.1, we have, as $n \to \infty$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n \bar c_{ni}^+\Bigl[\tilde T\Bigl(\frac{S_n^{*+}(\cdot,u)}{\sum_{i=1}^n c_{ni}^+}\Bigr) - \tilde T(U(\cdot))\Bigr] - \tilde T_U'\Bigl(\frac{1}{d_n^+}\Bigl(S_n^{*+}(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}^+\Bigr)\Bigr)\ \Bigr| \xrightarrow{P} 0. $$

Since $0 < d_n^+ \le 1$, we have, as $n \to \infty$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}^+\Bigl[\tilde T\Bigl(\frac{S_n^{*+}(\cdot,u)}{\sum_{i=1}^n c_{ni}^+}\Bigr) - \tilde T(U(\cdot))\Bigr] - \tilde T_U'\Bigl(S_n^{*+}(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}^+\Bigr)\ \Bigr| \xrightarrow{P} 0. \qquad (3.4) $$

Similarly, as $n \to \infty$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}^-\Bigl[\tilde T\Bigl(\frac{S_n^{*-}(\cdot,u)}{\sum_{i=1}^n c_{ni}^-}\Bigr) - \tilde T(U(\cdot))\Bigr] - \tilde T_U'\Bigl(S_n^{*-}(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}^-\Bigr)\ \Bigr| \xrightarrow{P} 0. \qquad (3.5) $$

Therefore, (3.3) follows from (3.4) and (3.5). □
COROLLARY 3.3. Suppose $\tilde T : D[0,1] \to R$ is Hadamard differentiable at $U$. Assume (A2) and (B), and assume

$$ \underline{\lim_{n\to\infty}}\ d_n^+ > 0 \qquad \text{and} \qquad \underline{\lim_{n\to\infty}}\ d_n^- > 0. $$

Then, we have (3.3).

Proof. By the assumptions on $d_n^+$ and $d_n^-$, there exists a real number $c > 0$ such that $d_n^+ \ge c$ and $d_n^- \ge c$, for all $n$. By (A2), we have

$$ \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n}\ (c_{ni}^+)^2/(d_n^+)^2 < \infty \qquad \text{and} \qquad \overline{\lim_{n\to\infty}}\ n \max_{1\le i\le n}\ (c_{ni}^-)^2/(d_n^-)^2 < \infty. $$

Therefore, (3.3) follows from the proof of Theorem 3.2. □
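To make (3.2) concrete, the following sketch (an added illustration, not part of the paper) takes the median functional as $T$ — the classical example of a compactly differentiable functional discussed by Fernholz [2], treated here informally — so that $\tilde T(S_n^*(\cdot,u)/\sum_i c_{ni})$ is the weighted median of the points $Z_i - u\,c_{ni}$ with weights $c_{ni}$, and $\tilde T_U'(S_n^*(\cdot,u) - U(\cdot)\sum_i c_{ni}) = \sum_i c_{ni}\,\mathrm{sign}(Z_i - u\,c_{ni})/(2F'(0))$ for standard normal errors. All numerical choices are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

def weighted_median(points, weights):
    """Smallest point at which the weighted empirical d.f. reaches 1/2."""
    order = np.argsort(points)
    cum = np.cumsum(weights[order]) / weights.sum()
    return points[order][np.searchsorted(cum, 0.5)]

n, K = 20000, 2.0
c = rng.uniform(0.5, 1.5, size=n)
c_n = c / np.sqrt(np.sum(c ** 2))       # (A1) and (A2) hold; all c_{ni} > 0 here
Z = rng.normal(size=n)                  # F = N(0, 1): median 0, density F'(0) = 1/sqrt(2 pi)
f0 = 1.0 / np.sqrt(2.0 * np.pi)

def lhs(u):
    # sum_i c_{ni} [ T(S*_n(., u) / sum_i c_{ni}) - T(U) ],  with T(U) = median of F = 0
    return np.sum(c_n) * weighted_median(Z - u * c_n, c_n)

def deriv(u):
    # T'_U(S*_n(., u) - U(.) sum_i c_{ni}) = sum_i c_{ni} IC(Z_i - u c_{ni}; F, T)
    return np.sum(c_n * np.sign(Z - u * c_n)) / (2.0 * f0)

print(max(abs(lhs(u) - deriv(u)) for u in np.linspace(-K, K, 41)))   # small for large n
```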
4. PROOF OF THEOREM 3.1

In this section, (A1), (A2) and (B) are assumed. First, we notice that (A1) and (A2) imply three facts: there exists $M > 0$ such that, for $n$ large enough,

$$ \Bigl(\max_{1\le i\le n} c_{ni}\Bigr)\ \sum_{i=1}^n c_{ni} \le M; \qquad (4.1) $$

$$ \max_{1\le i\le n} c_{ni} \to 0, \quad \text{as } n \to \infty; \qquad (4.2) $$

and, further,

$$ \sum_{i=1}^n c_{ni} \to \infty, \quad \text{as } n \to \infty. \qquad (4.3) $$
The result (3.1) will first be proved on $C[0,1]$, and then be extended to $D[0,1]$.

Consider $S_n^*(t,u)$, $|u| \le K$. For each $i$, $I(Z_i \le F^{-1}(t) + c_{ni}u) = 1$ if $t \ge F(Z_i - c_{ni}u)$, and $= 0$ otherwise. The curve $\ell_{ni} :\ t = F(Z_i - c_{ni}u)$ is nonincreasing in $u$. Hence, for each $n$ and $Z_1, \ldots, Z_n$, $[0,1]\times[-K,K]$ is divided into finitely many pieces by the curves $\ell_{n1}, \ldots, \ell_{nn}$, shown as the following (for $n = 3$):

[Figure: the strip $[0,1]\times[-K,K]$ partitioned into regions by the three nonincreasing curves $\ell_{n1}, \ell_{n2}, \ell_{n3}$.]

and in region $(0,0,0)$: $S_n^*(t,u) = 0$; region $(0,1,0)$: $S_n^*(t,u) = c_{n2}$; region $(0,1,1)$: $S_n^*(t,u) = c_{n2} + c_{n3}$; region $(1,0,0)$: $S_n^*(t,u) = c_{n1}$; region $(1,1,0)$: $S_n^*(t,u) = c_{n1} + c_{n2}$; region $(1,1,1)$: $S_n^*(t,u) = c_{n1} + c_{n2} + c_{n3}$.

Let $\tilde S_n^*(t,u)$ be a bivariate continuous version of $S_n^*(t,u)$. Generally, $\tilde S_n^*(t,u)$ can be obtained easily by smoothing $S_n^*(t,u)$ through the above regions. Since $\tilde S_n^*(t,u)$ is bivariate continuous, $\tilde S_n^*(t,(2u-1)K)$ is an element of $C[0,1]^2$.

If $\ell_{n1}$, $\ell_{n2}$, $\ell_{n3}$ intersect at one point, shown as the following,

[Figure: the three curves meeting at a single point, with region $(0,0,0)$ on one side and region $(1,1,1)$ on the other.]

the biggest jump of $S_n^*$ is $c_{n1} + c_{n2} + c_{n3}$. But we will show in Lemma 4.1 that, with probability one, no more than two curves will intersect at one point in $[0,1]\times[-K,K]$.

LEMMA 4.1. For each $n$, no more than two $\ell$'s intersect at one point in $[0,1]\times[-K,K]$ with probability one.
Proof. Suppose $\ell_{n1}$, $\ell_{n2}$ and $\ell_{n3}$ intersect at one point $(t_0, u_0)$. Since $F$ is increasing, we have

$$ Z_1 - c_{n1}u_0 = Z_2 - c_{n2}u_0 = Z_3 - c_{n3}u_0\ \bigl(= F^{-1}(t_0)\bigr), \qquad (4.4) $$

and $c_{ni} \ne c_{nj}$ for $i \ne j$ in (4.4) because, with probability one, $Z_i \ne Z_j$ for $i \ne j$. Therefore, $\ell_{n1}$, $\ell_{n2}$ and $\ell_{n3}$ intersect at one point iff

$$ \frac{Z_1 - Z_2}{c_{n1} - c_{n2}} = \frac{Z_1 - Z_3}{c_{n1} - c_{n3}}. $$

Since $F$ is continuous, this event has probability zero. For each $n$, the probability that more than two $\ell$'s intersect at one point only depends on $c_{n1}, \ldots, c_{nn}$. Hence, with probability one, no more than two $\ell$'s intersect at one point. □

Lemma 4.1 implies that, with probability one, no more than two $\ell$'s intersect at one point among $\{\ell_{n1}, \ldots, \ell_{nn},\ n \ge 1\}$. Therefore, with probability one, the biggest jump of $S_n^*(t,u)$ is no larger than $2\max_{1\le i\le n} c_{ni}$. Since $\tilde S_n^*(t,u)$ is a bivariate continuous smoothing of $S_n^*(t,u)$, we have

$$ \|\tilde S_n^*(\cdot,\cdot) - S_n^*(\cdot,\cdot)\| \le 2\max_{1\le i\le n} c_{ni}, \qquad \text{a.s.} \qquad (4.5) $$

LEMMA 4.2. Let $G(t,u) = (2u-1)K\, F'(F^{-1}(t))$. Then, as $n \to \infty$,

$$ \sup_{(t,u)\in[0,1]^2} |W_n^0(t,u) - G(t,u)| \to 0. $$

Proof. By (2.8), we notice

$$ \sup_{(t,u)\in[0,1]^2} |W_n^0(t,u) - G(t,u)| = \sup\Bigl\{\Bigl|\sum_{i=1}^n c_{ni}\bigl[F(F^{-1}(t) + c_{ni}u) - t\bigr] - u\,F'(F^{-1}(t))\Bigr| :\ t \in [0,1],\ |u| \le K\Bigr\}. $$

Since $\sum_{i=1}^n c_{ni}^2 = 1$,

$$ \Bigl|\sum_{i=1}^n c_{ni}\bigl[F(F^{-1}(t) + c_{ni}u) - t\bigr] - u\,F'(F^{-1}(t))\Bigr| = \Bigl|u\sum_{i=1}^n c_{ni}^2\bigl[F'(\xi_{ni}) - F'(F^{-1}(t))\bigr]\Bigr| \le K\sum_{i=1}^n c_{ni}^2\,\bigl|F'(\xi_{ni}) - F'(F^{-1}(t))\bigr|, $$

where $\xi_{ni}$ is between $F^{-1}(t)$ and $F^{-1}(t) + c_{ni}u$. The proof follows from (4.2) and the uniform continuity of $F'$. □
LEMMA 4.3 (Neuhaus [5]). Let $\delta > 0$, $m(\delta) = \max\{r \mid \delta 2^r \le 1\}$, and, for $r = 0, 1, \ldots$, let $L_k(2^r)$ denote the grid of points of $E_k = [0,1]^k$ whose coordinates are multiples of $2^{-r}$. Let $f : E_k \to R$ be given. Then, for points $\xi_1, \xi_2 \in L_k(2^m)$ with $|\xi_1 - \xi_2| \le 2^{-m(\delta)}$, the following inequality holds:

$$ |f(\xi_1) - f(\xi_2)| \le 2\sum_{r=m(\delta)}^{m}\ \sup\ \bigl|f(\xi + 2^{-r}e_i) - f(\xi)\bigr|, \qquad (4.6) $$

where the "sup" is to be taken over all $\xi \in L_k(2^r)$ with $\xi + 2^{-r}e_i \in E_k$ and $i = 1, \ldots, k$, and $e_i$ denotes the $i$-th unit vector of $R^k$.
PROPOSITION 4.4. For any $\epsilon > 0$,

$$ \lim_{\delta\to 0}\ \overline{\lim_{n\to\infty}}\ P\bigl(w_{W_n}(\delta) \ge \epsilon\bigr) = 0. $$

Proof. By (2.7), we can see that $W_n(t,u)$ is a function from $E_2$ to $R$. For any $(t,u), (s,v) \in [0,1]^2$ with $|t-s| \le \delta$, $|u-v| \le \delta$, there exist $t', t'', u', u'', s', s'', v', v'' \in L_1(2^m)$, where $m$ is an arbitrary positive integer, such that $t' \le t \le t''$, $u' \le u \le u''$, $s' \le s \le s''$, $v' \le v \le v''$ and

$$ \max\{(t-t'), (t''-t), (u-u'), (u''-u), (s-s'), (s''-s), (v-v'), (v''-v)\} \le 2^{-m}. $$

We can write

$$ W_n(t,u) - W_n(s,v) = [W_n(t,u) - W_n(t',u')] + [W_n(t',u') - W_n(s'',v'')] + [W_n(s'',v'') - W_n(s,v)]. $$

First, we notice that

$$ |s''-t'| \le |s''-s| + |s-t| + |t-t'| \le \delta + 2^{-m+1} \qquad \text{and} \qquad |v''-u'| \le |v''-v| + |v-u| + |u-u'| \le \delta + 2^{-m+1}. $$

We consider $[W_n(t,u) - W_n(t',u')]$. Since

$$ W_n(t,u) + S_n(t,(2u-1)K) = \sum_{i=1}^n c_{ni}\, I\bigl(Z_i \le F^{-1}(t) + c_{ni}(2u-1)K\bigr) $$

is a nondecreasing function in each component of $(t,u)$, we have

$$ W_n(t,u) - W_n(t',u') = [W_n(t,u) - W_n(t',u)] + [W_n(t',u) - W_n(t',u')] \ge -[S_n(t,(2u-1)K) - S_n(t',(2u-1)K)] - [S_n(t',(2u-1)K) - S_n(t',(2u'-1)K)] $$
$$ = -\Bigl[W_n^0(t,u) - W_n^0(t',u) + (t-t')\sum_{i=1}^n c_{ni}\Bigr] - \bigl[W_n^0(t',u) - W_n^0(t',u')\bigr], $$

and, in the same way (using $t \le t''$ and $u \le u''$), $W_n(t,u) - W_n(t',u')$ can be bounded from above by increments of $W_n$ between points of $L_2(2^m)$ whose coordinates differ by at most $\delta + 2^{-m+1}$, together with increments of $W_n^0$ over distances at most $2^{-m+1}$ and the term $(t''-t')\sum_{i=1}^n c_{ni} \le 2^{-m+1}\sum_{i=1}^n c_{ni}$. The term $[W_n(s'',v'') - W_n(s,v)]$ can be treated in the similar way. Therefore, we have

$$ w_{W_n}(\delta) \le 5\sup\bigl\{|W_n(t,u) - W_n(s,v)| :\ (t,u),(s,v) \in L_2(2^m),\ |t-s| \le \delta + 2^{-m+1},\ |u-v| \le \delta + 2^{-m+1}\bigr\} + 4\,w_{W_n^0}(2^{-m+1}) + 2^{-m+1}\sum_{i=1}^n c_{ni}. \qquad (4.7) $$
Since, by Lemma 4.2 and the uniform continuity of $G$ in $[0,1]^2$, we have

$$ w_{W_n^0}(2^{-m_n+1}) \to 0, \quad \text{as } n \to \infty, \qquad (4.8) $$

where

$$ m_n = \max\bigl\{r \mid \max_{1\le i\le n} c_{ni} \le 2^{-3r/4}\bigr\}, \qquad (4.9) $$

so that $m_n \to \infty$, as $n \to \infty$, and, by (4.1),

$$ 2^{-m_n}\sum_{i=1}^n c_{ni} \le M\, 2^{-m_n}\Bigl(\max_{1\le i\le n} c_{ni}\Bigr)^{-1} \le M\, 2^{3(m_n+1)/4}\, 2^{-m_n} \to 0, \quad \text{as } n \to \infty, $$

it suffices, by virtue of (4.7), to show that

$$ \lim_{\delta\to 0}\ \overline{\lim_{n\to\infty}}\ P\Bigl(\sup\bigl\{|W_n(t,u) - W_n(s,v)| :\ (t,u),(s,v) \in L_2(2^{m_n}),\ |t-s| \le \delta,\ |u-v| \le \delta\bigr\} \ge \epsilon\Bigr) = 0. $$

Let $m(\delta) = \max\{r \mid \delta 2^r \le 1\}$; then $\delta\,2^{m(\delta)} \le 1$ and $m(\delta) \to \infty$, as $\delta \to 0$. By Lemma 4.3, we have

$$ \sup\bigl\{|W_n(t,u) - W_n(s,v)| :\ (t,u),(s,v) \in L_2(2^{m_n}),\ |t-s| \le \delta,\ |u-v| \le \delta\bigr\} \le 2\sum_{r=m(\delta)}^{m_n}\ \max\Bigl\{\Bigl|W_n\Bigl(\frac{k+1}{2^r},\frac{l}{2^r}\Bigr) - W_n\Bigl(\frac{k}{2^r},\frac{l}{2^r}\Bigr)\Bigr|,\ \Bigl|W_n\Bigl(\frac{k}{2^r},\frac{l+1}{2^r}\Bigr) - W_n\Bigl(\frac{k}{2^r},\frac{l}{2^r}\Bigr)\Bigr|\Bigr\}, $$

where, for each $r$, the maximum is taken over all $k, l$ for which the arguments lie in $[0,1]^2$.
Choose $a \in (0,1)$ such that $a^6\, 2^{1/4} > 1$. Since

$$ \sum_{r=m(\delta)}^{m_n} (1-a)\,a^{r-m(\delta)}\,\frac{\epsilon}{16} = \frac{\epsilon}{16}\bigl(1 - a^{m_n-m(\delta)+1}\bigr) < \frac{\epsilon}{16}, $$

we have

$$ P\Bigl(\sup\bigl\{|W_n(t,u) - W_n(s,v)| :\ (t,u),(s,v) \in L_2(2^{m_n}),\ |t-s| \le \delta,\ |u-v| \le \delta\bigr\} \ge \epsilon\Bigr) \le \sum_{r=m(\delta)}^{m_n} (\mathrm{I}_r + \mathrm{II}_r), \qquad (4.10) $$

where

$$ \mathrm{I}_r = P\Bigl(\sup\Bigl\{\Bigl|W_n\Bigl(\frac{k+1}{2^r},\frac{l}{2^r}\Bigr) - W_n\Bigl(\frac{k}{2^r},\frac{l}{2^r}\Bigr)\Bigr| :\ 0 \le k \le 2^r - 1,\ 0 \le l \le 2^r\Bigr\} > (1-a)a^{r-m(\delta)}\frac{\epsilon}{16}\Bigr), $$

$$ \mathrm{II}_r = P\Bigl(\sup\Bigl\{\Bigl|W_n\Bigl(\frac{k}{2^r},\frac{l+1}{2^r}\Bigr) - W_n\Bigl(\frac{k}{2^r},\frac{l}{2^r}\Bigr)\Bigr| :\ 0 \le k \le 2^r,\ 0 \le l \le 2^r - 1\Bigr\} > (1-a)a^{r-m(\delta)}\frac{\epsilon}{16}\Bigr). $$
We notice that if a r.v. $\xi$ satisfies $E\xi = 0$ and $|\xi| \le 1$, then $E|\xi|^j \le E\xi^2$ for $j \ge 2$. Let

$$ A_1 = E\bigl[W_n(t,u) - W_n(t + 2^{-r}, u)\bigr]^6 = E\Bigl[\sum_{i=1}^n c_{ni}(\xi_{ni} - E\xi_{ni})\Bigr]^6, $$

where $\xi_{ni} = I\bigl(F^{-1}(t) + c_{ni}(2u-1)K < Z_i \le F^{-1}(t + 2^{-r}) + c_{ni}(2u-1)K\bigr)$. By the choice of $m_n$, we have $\max_{1\le i\le n} c_{ni} \le 2^{-3r/4}$, for $r \le m_n$. So,

$$ 0 \le E\xi_{ni} = F\bigl(F^{-1}(t+2^{-r}) + c_{ni}(2u-1)K\bigr) - F\bigl(F^{-1}(t) + c_{ni}(2u-1)K\bigr) \le 2^{-r} + M_1 c_{ni} \le M_2\,2^{-3r/4}, $$

where $M_1$ and $M_2$ are constants. Hence, expanding the sixth power, using the independence of the $Z_i$, the fact that any term of the expansion in which some index appears exactly once has zero expectation, the bounds $E|\xi_{ni} - E\xi_{ni}|^j \le E\xi_{ni} \le M_2 2^{-3r/4}$ $(j \ge 2)$, $\sum_{i=1}^n c_{ni}^2 = 1$ and $\max_{1\le i\le n} c_{ni} \le 2^{-3r/4}$, we obtain

$$ A_1 \le M_3\,2^{-9r/4}, $$

where $M_3$ is a constant. Therefore,

$$ \mathrm{I}_r \le \sum_{l=0}^{2^r}\ \sum_{k=0}^{2^r-1}\ \Bigl[\frac{16}{\epsilon(1-a)a^{r-m(\delta)}}\Bigr]^6\ E\Bigl[W_n\Bigl(\frac{k+1}{2^r},\frac{l}{2^r}\Bigr) - W_n\Bigl(\frac{k}{2^r},\frac{l}{2^r}\Bigr)\Bigr]^6 \le 2^{2r+1}\,M_3\,2^{-9r/4}\,\Bigl[\frac{16}{\epsilon(1-a)a^{r-m(\delta)}}\Bigr]^6 \le 2M_3\,\Bigl[\frac{16}{\epsilon(1-a)a^{r-m(\delta)}}\Bigr]^6\,2^{-r/4}. \qquad (4.11) $$
Similarly, let

$$ A_2 = E\bigl[W_n(t,u) - W_n(t, u + 2^{-r})\bigr]^6 = E\Bigl[\sum_{i=1}^n c_{ni}(\zeta_{ni} - E\zeta_{ni})\Bigr]^6, $$

where $\zeta_{ni} = I\bigl(F^{-1}(t) + c_{ni}(2u-1)K < Z_i \le F^{-1}(t) + c_{ni}(2u + 2^{-r+1} - 1)K\bigr)$. Then

$$ 0 \le E\zeta_{ni} = F'(\xi_{ni})\, K\, c_{ni}\, 2^{-r+1} \le M_4\,2^{-3r/4}, $$

where $\xi_{ni}$ lies between the two arguments and $M_4$ is a constant. Therefore,

$$ \mathrm{II}_r \le 2M_5\,\Bigl[\frac{16}{\epsilon(1-a)a^{r-m(\delta)}}\Bigr]^6\,2^{-r/4}, \qquad (4.12) $$

where $M_5$ is a constant. From (4.10), (4.11) and (4.12), we have

$$ P\Bigl(\sup\bigl\{|W_n(t,u) - W_n(s,v)| :\ (t,u),(s,v) \in L_2(2^{m_n}),\ |t-s| \le \delta,\ |u-v| \le \delta\bigr\} \ge \epsilon\Bigr) \le 2(M_3 + M_5)\sum_{r=m(\delta)}^{m_n}\Bigl[\frac{16}{\epsilon(1-a)a^{r-m(\delta)}}\Bigr]^6\,2^{-r/4} \le 2(M_3 + M_5)\Bigl[\frac{16}{\epsilon(1-a)}\Bigr]^6\,2^{-m(\delta)/4}\,\sum_{k=0}^{\infty}\rho^k \to 0, \quad \text{as } \delta \to 0, $$
because $\rho = (a^6\,2^{1/4})^{-1} < 1$ and $m(\delta) \to \infty$, as $\delta \to 0$. □

PROPOSITION 4.5. Let $T_n(t,u) = \tilde S_n^*(t,(2u-1)K) - t\sum_{i=1}^n c_{ni}$, and let $\{P_n : n \ge 1\}$ be the sequence of probability measures corresponding to $T_n$. Then, $\{P_n\}$ is relatively compact.

Proof. Note that $T_n(t,u) \in C[0,1]^2$, $n \ge 1$. Hence, it suffices to show that, for any $\epsilon > 0$,

$$ \lim_{\delta\to 0}\ \overline{\lim_{n\to\infty}}\ P\bigl(w_{T_n}(\delta) \ge \epsilon\bigr) = 0. \qquad (4.13) $$

Since

$$ |T_n(t,u) - T_n(s,v)| = \Bigl|\bigl[\tilde S_n^*(t,(2u-1)K) - \tilde S_n^*(s,(2v-1)K)\bigr] - (t-s)\sum_{i=1}^n c_{ni}\Bigr| $$
$$ \le \Bigl|\bigl[\tilde S_n^*(t,(2u-1)K) - \tilde S_n^*(s,(2v-1)K)\bigr] - \bigl[S_n^*(t,(2u-1)K) - S_n^*(s,(2v-1)K)\bigr]\Bigr| + |W_n(t,u) - W_n(s,v)| + |W_n^0(t,u) - W_n^0(s,v)|, $$

by (4.5), we have

$$ w_{T_n}(\delta) \le 4\max_{1\le i\le n} c_{ni} + w_{W_n}(\delta) + w_{W_n^0}(\delta), \qquad \text{a.s.} $$

Therefore, (4.13) follows from (4.2), (4.8) and Proposition 4.4. Since $S_n^*(0,-K) = 0$, by (4.5), we have

$$ |T_n(0,0)| = |\tilde S_n^*(0,-K)| \le 2\max_{1\le i\le n} c_{ni}, \qquad \text{a.s.} $$

Hence,

$$ P_n^0 = P \circ [T_n(0,0)]^{-1} \quad \text{converges in distribution.} \qquad (4.14) $$

By virtue of Neuhaus [5] (discussion on pp. 1290-1291), (4.13) and (4.14) imply that $\{P_n\}$ is relatively compact. □
LEMMA 4.6. Suppose $\Gamma = \{T_\lambda : \lambda \in \Lambda\}$ is a compact set in $C[0,1]^2$; let $\Gamma_1 = \{T_\lambda(\cdot,u) : \lambda \in \Lambda,\ u \in [0,1]\}$; then $\Gamma_1$ is a compact set in $C[0,1]$.

Proof. It suffices to show that any infinite sequence in $\Gamma_1$ contains a convergent subsequence in $\Gamma_1$. Suppose $\{T_{\lambda_n}(\cdot,u_n)\}$ is a sequence of $\Gamma_1$; then $\{T_{\lambda_n}(\cdot,\cdot)\}$ is a sequence of $\Gamma$. Since $\Gamma$ is compact, $\{T_{\lambda_n}(\cdot,\cdot)\}$ contains a convergent subsequence, also denoted by $\{T_{\lambda_n}(\cdot,\cdot)\}$, such that

$$ \|T_{\lambda_n}(\cdot,\cdot) - T_{\lambda_0}(\cdot,\cdot)\| \to 0, \quad \text{as } n \to \infty, \qquad (4.15) $$

where $\lambda_0 \in \Lambda$. Since $u_n \in [0,1]$, there exists a convergent subsequence, also denoted by $u_n$, such that

$$ u_n \to u_0 \in [0,1], \quad \text{as } n \to \infty. \qquad (4.16) $$

For any $t \in [0,1]$, we have

$$ |T_{\lambda_n}(t,u_n) - T_{\lambda_0}(t,u_0)| \le |T_{\lambda_n}(t,u_n) - T_{\lambda_0}(t,u_n)| + |T_{\lambda_0}(t,u_n) - T_{\lambda_0}(t,u_0)|. $$

Therefore, by (4.15), (4.16) and the uniform continuity of $T_{\lambda_0}$ in $[0,1]^2$, we have

$$ \|T_{\lambda_n}(\cdot,u_n) - T_{\lambda_0}(\cdot,u_0)\| \to 0, \quad \text{as } n \to \infty, $$

where $T_{\lambda_0}(\cdot,u_0) \in \Gamma_1$. □
PROPOSITION 4.7. If, for any compact set $\Gamma_0$ in $C[0,1]$,

$$ \lim_{t\to 0}\ \frac{\mathrm{Rem}(tH)}{t} = 0 \quad \text{uniformly for } H \in \Gamma_0, \qquad (4.17) $$

then, for $K > 0$,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{\tilde S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \xrightarrow{P} 0, \quad \text{as } n \to \infty. \qquad (4.18) $$

Proof. From Proposition 4.5, we know that $\{P_n\}$ is relatively compact in $C[0,1]^2$, where $P_n(A) = P(T_n \in A)$. Since $C[0,1]^2$ is complete and separable, by Prohorov's theorem (Billingsley [1], Theorem 6.2), $\{P_n\}$ is tight, i.e., for any $\epsilon > 0$, there exists a compact set $\Gamma$ in $C[0,1]^2$ such that

$$ P(T_n \in \Gamma) > 1 - \epsilon, \quad \text{for } n \ge 1. \qquad (4.19) $$

Use the same definition of $\Gamma_1$ in Lemma 4.6 with respect to $\Gamma$; we know that $\Gamma_1$ is a compact set of $C[0,1]$. By (4.3) and (4.17), there exists $N$ such that

$$ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{H}{\sum_{i=1}^n c_{ni}}\Bigr]\ \Bigr| \le \epsilon, \quad \text{for } n \ge N,\ H \in \Gamma_1. \qquad (4.20) $$

If $T_n \in \Gamma$, then $T_n(\cdot,u) \in \Gamma_1$ for any $u \in [0,1]$, and, by (4.20),

$$ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{T_n(\cdot,u)}{\sum_{i=1}^n c_{ni}}\Bigr]\ \Bigr| \le \epsilon, \quad \text{for } n \ge N,\ u \in [0,1]. \qquad (4.21) $$

(4.21) implies

$$ \sup_{0\le u\le 1}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{\tilde S_n^*(\cdot,(2u-1)K)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \le \epsilon, \quad \text{for } n \ge N. \qquad (4.22) $$

Equivalently, (4.22) is written as

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{\tilde S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \le \epsilon, \quad \text{for } n \ge N. \qquad (4.23) $$

Therefore, from (4.19) and (4.23), we have, for $n \ge N$,

$$ 1 - \epsilon < P(T_n \in \Gamma) \le P\Bigl(\ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{\tilde S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \le \epsilon\ \Bigr). \qquad \Box $$
Let $\Gamma$ be a set in $D[0,1]$ and $H \in D[0,1]$; define

$$ \mathrm{dist}(H,\Gamma) = \inf_{G\in\Gamma}\ \|H - G\|. \qquad (4.24) $$

LEMMA 4.8. Let $Q : D[0,1]\times R \to R$ and suppose that, for any compact set $\Gamma$ in $D[0,1]$,

$$ \lim_{t\to 0}\ Q(H,t) = 0 \qquad (4.25) $$

uniformly for $H \in \Gamma$. Let $\alpha_n$ and $\beta_n$ be sequences of real numbers such that $\alpha_n \to 0$ and $\beta_n \to 0$, as $n \to \infty$. Then, for any compact set $\Gamma$ in $D[0,1]$ and $\eta > 0$, there exists $N$ such that, for $n \ge N$, if $\mathrm{dist}(H,\Gamma) \le \alpha_n$, then $|Q(H,\beta_n)| < \eta$.

Proof. Suppose not. Then, for some $\eta > 0$, there exist a compact set $\Gamma$ in $D[0,1]$, a sequence $\{H_k\} \subset D[0,1]$ and a subsequence $\{n_k\}$ with $\mathrm{dist}(H_k,\Gamma) \le \alpha_{n_k}$ such that

$$ |Q(H_k, \beta_{n_k})| \ge \eta. \qquad (4.26) $$

Since $\mathrm{dist}(H_k,\Gamma) \le \alpha_{n_k}$, we can choose $H_k^* \in \Gamma$ such that

$$ \|H_k - H_k^*\| \le 2\alpha_{n_k}. $$

Since $\{H_k^*\} \subset \Gamma$ and $\Gamma$ is a compact set, $\{H_k^*\}$ has an accumulation point $H^* \in \Gamma$. Therefore, we can choose a subsequence of $\{H_k^*\}$, also denoted by $\{H_k^*\}$, such that

$$ \|H_k^* - H^*\| \to 0, \quad \text{as } k \to \infty. $$

Since $\alpha_{n_k} \to 0$, we also have

$$ \|H_k - H^*\| \to 0, \quad \text{as } k \to \infty, $$

and the set

$$ \tilde\Gamma = \{H^*\} \cup \{H_k : k \ge 1\} $$

is compact. By (4.25), we have $Q(\tilde H, t) \to 0$, as $t \to 0$, uniformly for $\tilde H \in \tilde\Gamma$. This contradicts (4.26). □
From the discussion by Fernholz [2], we know that the $[S_n^*(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}]$ are not random elements of $D[0,1]$ with the uniform topology. Next, we will use the inner probability measure $P_*$ corresponding to $P$ to deal with this problem.

LEMMA 4.9. Under the assumption of Proposition 4.7, for $\epsilon > 0$, there exists a compact set $\Gamma_1$ in $D[0,1]$ such that, for all $n \ge 1$,

$$ P_*\Bigl(\mathrm{dist}\Bigl(\bigl[S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr],\ \Gamma_1\Bigr) \le 2\max_{1\le i\le n} c_{ni},\ \forall\, u \in [0,1]\Bigr) > 1 - \epsilon. $$

Proof. As in the proof of Proposition 4.7, there exists a compact set $\Gamma$ in $C[0,1]^2$ such that, for all $n \ge 1$, $P(T_n \in \Gamma) > 1 - \epsilon$. By (4.5), we have, for $n \ge 1$,

$$ P\Bigl(T_n \in \Gamma,\ \|\tilde S_n^*(\cdot,\cdot) - S_n^*(\cdot,\cdot)\| \le 2\max_{1\le i\le n} c_{ni}\Bigr) \ge 1 - \epsilon. \qquad (4.27) $$

By Lemma 4.6, we know that $\Gamma$ induces a compact set $\Gamma_1$ in $C[0,1]$, which is also a compact set of $D[0,1]$ because $C[0,1]$ is a subspace of $D[0,1]$. We also know that if $T_n \in \Gamma$, then $T_n(\cdot,u) \in \Gamma_1$ for any $u \in [0,1]$, i.e.,

$$ \bigl[\tilde S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr] \in \Gamma_1, \quad \text{for any } u \in [0,1]. \qquad (4.28) $$

Also, $\|\tilde S_n^*(\cdot,\cdot) - S_n^*(\cdot,\cdot)\| \le 2\max_{1\le i\le n} c_{ni}$ implies

$$ \|\tilde S_n^*(\cdot,(2u-1)K) - S_n^*(\cdot,(2u-1)K)\| \le 2\max_{1\le i\le n} c_{ni}, \quad \forall\, u \in [0,1]. \qquad (4.29) $$

Since (4.28) and (4.29) imply

$$ \mathrm{dist}\Bigl(\bigl[S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr],\ \Gamma_1\Bigr) \le 2\max_{1\le i\le n} c_{ni}, \quad \forall\, u \in [0,1], $$

by (4.27), we have, for $n \ge 1$,

$$ P_*\Bigl(\mathrm{dist}\Bigl(\bigl[S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr],\ \Gamma_1\Bigr) \le 2\max_{1\le i\le n} c_{ni},\ \forall\, u \in [0,1]\Bigr) > 1 - \epsilon. \qquad \Box $$
Proof of Theorem 3.1. Since $\tilde T : D[0,1] \to R$ is Hadamard differentiable at $U$, (4.17) holds. By Lemma 4.9, for $\epsilon > 0$, there exists a compact set $\Gamma_1$ in $D[0,1]$ such that, for $n \ge 1$,

$$ P_*\Bigl(\mathrm{dist}\Bigl(\bigl[S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr],\ \Gamma_1\Bigr) \le 2\max_{1\le i\le n} c_{ni},\ \forall\, u \in [0,1]\Bigr) > 1 - \epsilon/2. $$

Therefore, we can find measurable sets $E_n$ for all $n$ such that

$$ E_n \subset \Bigl\{\mathrm{dist}\Bigl(\bigl[S_n^*(\cdot,(2u-1)K) - U(\cdot)\textstyle\sum_{i=1}^n c_{ni}\bigr],\ \Gamma_1\Bigr) \le 2\max_{1\le i\le n} c_{ni},\ \forall\, u \in [0,1]\Bigr\} $$

and

$$ P(E_n) > 1 - \epsilon. \qquad (4.30) $$

By Lemma 4.8, consider $Q(H,t) = \mathrm{Rem}(tH)/t$. For the compact set $\Gamma_1$, there exists $N$ such that, for $n \ge N$, if $\mathrm{dist}(H,\Gamma_1) \le 2\max_{1\le i\le n} c_{ni}$, then

$$ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{H}{\sum_{i=1}^n c_{ni}}\Bigr]\ \Bigr| < \epsilon. $$

Therefore, for $n \ge N$, taking $H = S_n^*(\cdot,(2u-1)K) - U(\cdot)\sum_{i=1}^n c_{ni}$ for any $u \in [0,1]$, the event $\{\mathrm{dist}([S_n^*(\cdot,(2u-1)K) - U(\cdot)\sum_{i=1}^n c_{ni}],\ \Gamma_1) \le 2\max_{1\le i\le n} c_{ni},\ \forall\, u \in [0,1]\}$ implies

$$ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{S_n^*(\cdot,(2u-1)K)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| < \epsilon, \quad \text{for } u \in [0,1]. \qquad (4.31) $$

Since (4.31) implies

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \le \epsilon, $$

by (4.30), we have, for $n \ge N$,

$$ 1 - \epsilon < P(E_n) \le P\Bigl(\ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \le \epsilon\ \Bigr). \qquad \Box $$
Remark (1). In the proof of Theorem 3.1, we assumed that $\mathrm{Rem}\bigl[S_n^*(\cdot,u)/\sum_{i=1}^n c_{ni} - U(\cdot)\bigr]$ is measurable, even though $S_n^*(\cdot,u)$ is not a random element of $D[0,1]$. This is because $\tilde T\bigl(S_n^*(\cdot,u)/\sum_{i=1}^n c_{ni}\bigr)$ is measurable, and, by Lemma 4.4.1 of Fernholz [2], we know that $\tilde T_U'\bigl(S_n^*(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}\bigr)$ is also measurable; hence $\mathrm{Rem}\bigl(S_n^*(\cdot,u)/\sum_{i=1}^n c_{ni} - U(\cdot)\bigr)$ is measurable.

Remark (2). By Lemma 4.4.1 of Fernholz [2], we have

$$ \tilde T_U'\Bigl(S_n^*(\cdot,u) - U(\cdot)\sum_{i=1}^n c_{ni}\Bigr) = \sum_{i=1}^n c_{ni}\, \tilde T_U'\bigl((\delta_{(Z_i - c_{ni}u)} - F)\circ F^{-1}\bigr) = \sum_{i=1}^n c_{ni}\, \mathrm{IC}(Z_i - c_{ni}u;\ F,\ T). $$

Note that this is true without any assumptions on $\{c_{ni}\}$ and $F$, as long as $\tilde T$ is Hadamard differentiable at $U$.

Remark (3). If, in Theorem 3.1 and Proposition 4.7,

$$ \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| \qquad \text{or} \qquad \sup_{|u|\le K}\ \Bigl|\ \sum_{i=1}^n c_{ni}\, \mathrm{Rem}\Bigl[\frac{\tilde S_n^*(\cdot,u)}{\sum_{i=1}^n c_{ni}} - U(\cdot)\Bigr]\ \Bigr| $$

is not measurable, we replace $|u| \le K$ by $u \in Q_K$, where $Q_K = \{$all rational numbers in $[-K,K]\}$; then they both are measurable. The results will be slightly different, but still good enough for the study of (1.15).
References

[1] Billingsley, P. (1968). Convergence of Probability Measures. John Wiley & Sons, New York.

[2] Fernholz, L.T. (1983). Von Mises Calculus for Statistical Functionals. Lecture Notes in Statistics 19. Springer, New York.

[3] Jurečková, J. (1984). M-, L- and R-estimators. In Handbook of Statistics, Volume 4: Nonparametric Methods (eds.: P.R. Krishnaiah and P.K. Sen). North Holland, Amsterdam, pp. 463-485.

[4] Kallianpur, G. (1963). Von Mises functionals and maximum likelihood estimation. Sankhyā Ser. A 25, 149-158.

[5] Neuhaus, G. (1971). On weak convergence of stochastic processes with multidimensional time parameter. Ann. Math. Statist. 42, 1285-1295.

[6] Reeds, J.A. (1976). On the Definition of Von Mises Functionals. Ph.D. dissertation, Harvard Univ., Cambridge, MA.

[7] von Mises, R. (1947). On the asymptotic distribution of differentiable statistical functions. Ann. Math. Statist. 18, 309-348.