SEQUENTIAL NONPARAMETRIC DENSITY ESTIMATION

by

H.I. Davies¹ and Edward J. Wegman

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 884

August, 1973
1. Introduction:

In this paper, we shall discuss a sequential approach to probability density estimation. For the most part we shall confine our attention to estimators of the form

(1.1)    $\hat f_n(x) = \frac{1}{n h_n} \sum_{j=1}^{n} K\left(\frac{x - X_j}{h_n}\right)$

first introduced by Rosenblatt (1956) and discussed in greater detail by Parzen (1962). Here, of course, $X_1, X_2, \ldots, X_n$ are i.i.d. random variables chosen according to some density, f. In this paper, the function K, the so-called kernel, is assumed to be a bounded density on the real line satisfying

(1.2)    $\lim_{u \to \pm\infty} |u| K(u) = 0 .$

Moreover, the sequence, $h_n$, is assumed to be a sequence of positive real numbers satisfying

(1.3)    $\lim_{n \to \infty} h_n = 0, \quad \lim_{n \to \infty} n h_n = \infty \quad \text{and} \quad \lim_{n \to \infty} \frac{h_{n+1}}{h_n} = 1 .$

¹The work of this author was supported by a C.S.I.R.O. postgraduate studentship.
We shall principally focus our attention on a naive stopping rule defined by the following procedure: Choose successive random samples of size M and form the differences

(1.4)    $V_n(x) = \hat f_{nM}(x) - \hat f_{(n-1)M}(x)$

where $\hat f_{nM}(x)$ and $\hat f_{(n-1)M}(x)$ are the density estimators based on sample sizes nM and (n-1)M respectively. The stopping rule is, for fixed $\epsilon > 0$,

(1.5)    $N(\epsilon, M) = \begin{cases} \text{first } n \text{ such that } |V_n(x)| < \epsilon \\ \infty \text{ if no such } n \text{ exists.} \end{cases}$

In section 2, we investigate the asymptotic structure of $V_n(x)$. In section 3, we investigate properties of the stopping variable, $N(\epsilon, M)$. Finally, section 4 is a concluding section.
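To make the procedure concrete, the following is a minimal simulation sketch of the rule (1.4)-(1.5). It is not from the paper: the Gaussian kernel, the bandwidth $h_n = n^{-1/3}$ (which satisfies (1.3)), the helper names, and all parameter values are illustrative assumptions, and a cap n_max guards the loop since in practice one cannot wait out $N = \infty$.

```python
import numpy as np

def kernel(u):
    # Gaussian kernel: a bounded density satisfying (1.2).
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def f_hat(x, sample, h):
    # Kernel estimate (1.1) at the point x with bandwidth h.
    return kernel((x - sample) / h).mean() / h

def sequential_estimate(draw, x, eps, M, h=lambda n: n ** (-1.0 / 3.0), n_max=10_000):
    """Stopping rule (1.5): sample in blocks of M until |V_n(x)| < eps.

    draw(k) must return k i.i.d. observations from the unknown density f.
    Returns the estimate at the stopping time and the number of blocks n.
    """
    sample = draw(M)
    prev = f_hat(x, sample, h(len(sample)))
    for n in range(2, n_max + 1):
        sample = np.concatenate([sample, draw(M)])   # now nM observations
        curr = f_hat(x, sample, h(len(sample)))
        if abs(curr - prev) < eps:                   # |V_n(x)| < eps
            return curr, n
        prev = curr
    return prev, n_max                               # cap reached: no such n yet

rng = np.random.default_rng(0)
est, n = sequential_estimate(lambda k: rng.normal(size=k), x=0.0, eps=0.005, M=25)
print(f"stopped after n = {n} blocks; estimate at 0 is {est:.4f} (true value 0.3989)")
```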
2. Asymptotic Structure of $V_n(x)$:

Theorem 2.1:

i. If K and $h_n$ satisfy (1.2) and (1.3) respectively, then $|V_n(x)| \to 0$ in probability for every $x \in C(f)$, the continuity points of f, and

ii. $\sup_x |V_n(x)| \to 0$ in probability if f is uniformly continuous.

If, in addition, for some $\alpha > 0$,

iii. $\sup_{|u| \ge a} |u| \{K(cu) - K(u)\}^2$ is locally Lipschitz of order $\alpha$ at c = 1 for some a > 0,

iv. $\int_{-\infty}^{\infty} \{K(cu) - K(u)\}^2\, du$ is locally Lipschitz of order $\alpha$ at c = 1,

and finally

v. $\sum_{n=1}^{\infty} \frac{1}{n h_n^{1-\beta}} \left|\frac{h_{n+1}}{h_n} - 1\right|^{\beta} < \infty$, where $\beta = \min\{\alpha, 1\}$,

then

vi. $|V_n(x)| \to 0$ with probability one for every $x \in C(f)$.

Proof: i). Under the stated conditions, $\hat f_n(x) \to f(x)$ in probability, Parzen (1962). Hence $\{\hat f_n(x)\}$ is a Cauchy sequence in probability, so that in probability $|V_n(x)| = |\hat f_{nM}(x) - \hat f_{(n-1)M}(x)| \to 0$. Results ii and vi follow in a similar way. Conditions iii, iv and v are those of Van Ryzin (1969) for strong consistency. Alternate sufficient conditions were given by Nadaraya (1965), which may be used as replacements for iii, iv, and v. □
The next sequence of results concerns the asymptotic variance structure.

Lemma 2.2: Let K(y) be a piecewise continuous Borel function satisfying (1.2) and let g be a real function in $L_1$. Then if $\{h_n\}$ satisfies (1.3),

$g_n(x) = \int_{-\infty}^{\infty} \frac{1}{h_n} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) g(x-y)\, dy$

converges to $g(x) \int_{-\infty}^{\infty} K^2(y)\, dy$ for every $x \in C(g)$.
Proof: The proof is in two stages. First we show

(A)    $\lim_{n \to \infty} g_n(x) = \lim_{n \to \infty} \frac{g(x)}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy$

and then

(B)    $\lim_{n \to \infty} \frac{1}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy = \int_{-\infty}^{\infty} K^2(y)\, dy .$

Proof of (A): Consider $\Delta_n$ defined by

$\Delta_n = \left| g_n(x) - \frac{g(x)}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy \right| = \left| \int_{-\infty}^{\infty} \frac{1}{h_n} [g(x-y) - g(x)] K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy \right| .$

Letting $\delta > 0$,

$\Delta_n \le \max_{|y| \le \delta} |g(x-y) - g(x)| \int_{|z| \le \delta/h_n} K(z) K\left(\frac{h_n}{h_{n-1}} z\right) dz + \frac{1}{h_n} \int_{|y| > \delta} |g(x-y)| K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy + |g(x)| \int_{|z| > \delta/h_n} K(z) K\left(\frac{h_n}{h_{n-1}} z\right) dz .$

Since K is bounded, the first term can be made arbitrarily small by choosing $\delta$ arbitrarily small. The second term is bounded by

$\frac{1}{\delta} \sup_{|z| \ge \delta/h_n} \left| z K(z) K\left(\frac{h_n}{h_{n-1}} z\right) \right| \int_{-\infty}^{\infty} |g(y)|\, dy ,$

which, along with the third term, can be made arbitrarily small for choice of n sufficiently large.

Proof of (B): Letting $z = y/h_n$,

$\left| \frac{1}{h_n} \int_{-\infty}^{\infty} \left[ K\left(\frac{y}{h_{n-1}}\right) - K\left(\frac{y}{h_n}\right) \right] K\left(\frac{y}{h_n}\right) dy \right| = \left| \int_{-\infty}^{\infty} \left[ K\left(\frac{h_n}{h_{n-1}} z\right) - K(z) \right] K(z)\, dz \right| \le \int_{-\infty}^{\infty} \left| K\left(\frac{h_n}{h_{n-1}} z\right) - K(z) \right| K(z)\, dz .$

Clearly this last integrand is bounded by $[2 \sup_y K(y)] K(z)$ and hence appealing to the Lebesgue Dominated Convergence theorem completes the result. □
Theorem 2.3: Let K and $h_n$ satisfy (1.2) and (1.3) respectively, and also let

$\lim_{n \to \infty} n \left\{ \frac{h_{nM}}{h_{(n-1)M}} - 1 \right\} = 1 - v, \quad v < \infty ;$

then

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}(V_n(x)) = v f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$
Proof: By definition,

$V_n(x) = \frac{1}{nM} \sum_{j=1}^{nM} \frac{1}{h_{nM}} K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{1}{(n-1)M} \sum_{j=1}^{(n-1)M} \frac{1}{h_{(n-1)M}} K\left(\frac{x - X_j}{h_{(n-1)M}}\right) .$

Since $X_1, \ldots, X_{nM}$ are i.i.d.,

(2.1)    $\mathrm{var}(V_n(x)) = \frac{1}{nM h_{nM}^2} \mathrm{var}\left[K\left(\frac{x - X_1}{h_{nM}}\right)\right] + \frac{1}{(n-1)M h_{(n-1)M}^2} \mathrm{var}\left[K\left(\frac{x - X_1}{h_{(n-1)M}}\right)\right] - \frac{2}{nM h_{nM} h_{(n-1)M}} \mathrm{cov}\left[K\left(\frac{x - X_1}{h_{nM}}\right), K\left(\frac{x - X_1}{h_{(n-1)M}}\right)\right] .$

Parzen (1962) shows

$\lim_{n \to \infty} \frac{1}{h_{nM}} \mathrm{var}\left[K\left(\frac{x - X_1}{h_{nM}}\right)\right] = f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$

In a similar manner, using Lemma 2.2, one may also show

$\lim_{n \to \infty} \frac{1}{h_{nM}} \mathrm{cov}\left[K\left(\frac{x - X_1}{h_{nM}}\right), K\left(\frac{x - X_1}{h_{(n-1)M}}\right)\right] = f(x) \int_{-\infty}^{\infty} K^2(u)\, du ,$

so that

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}(V_n(x)) = \lim_{n \to \infty} \left\{ \frac{n}{h_{nM}} \mathrm{var}\left[K\left(\frac{x - X_1}{h_{nM}}\right)\right] + \frac{n^2}{n-1} \frac{h_{nM}}{h_{(n-1)M}} \cdot \frac{1}{h_{(n-1)M}} \mathrm{var}\left[K\left(\frac{x - X_1}{h_{(n-1)M}}\right)\right] - 2n \frac{h_{nM}}{h_{(n-1)M}} \cdot \frac{1}{h_{nM}} \mathrm{cov}\left[K\left(\frac{x - X_1}{h_{nM}}\right), K\left(\frac{x - X_1}{h_{(n-1)M}}\right)\right] \right\}$

$= \lim_{n \to \infty} \left\{ n + \frac{n^2}{n-1} \frac{h_{nM}}{h_{(n-1)M}} - 2n \frac{h_{nM}}{h_{(n-1)M}} \right\} f(x) \int_{-\infty}^{\infty} K^2(u)\, du$

$= \lim_{n \to \infty} \left\{ n\left[1 - \frac{h_{nM}}{h_{(n-1)M}}\right] + \frac{n}{n-1} \frac{h_{nM}}{h_{(n-1)M}} \right\} f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$

After suitable simplification,

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}(V_n(x)) = \{1 - (1 - v)\} f(x) \int_{-\infty}^{\infty} K^2(u)\, du = v f(x) \int_{-\infty}^{\infty} K^2(u)\, du . \quad \square$
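As a quick sanity check, the following Monte Carlo sketch (an illustration under assumed parameters, not a computation from the paper) compares $n^2 M h_{nM}\,\mathrm{var}(V_n(x))$, estimated over repeated samples from a standard normal f, against the limit $v f(x)\int K^2(u)\,du$. It assumes the Gaussian kernel and $h_k = k^{-1/2}$ (so $v = 1 + \alpha = 1.5$ by the remark following Theorem 2.4); with moderate n the two numbers should agree only roughly.

```python
import numpy as np

# Monte Carlo sketch of Theorem 2.3 (illustration only). With h_k = k**(-alpha),
# the remark after Theorem 2.4 gives v = 1 + alpha; for the Gaussian kernel
# int K^2(u) du = 1/(2*sqrt(pi)), and f = N(0,1) so f(0) = K(0).
rng = np.random.default_rng(3)
alpha, M, n, reps, x = 0.5, 10, 50, 4000, 0.0
h = lambda k: k ** (-alpha)
K = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
f_hat = lambda X, hh: (K((x - X) / hh) / hh).mean()

V = np.empty(reps)
for i in range(reps):
    X = rng.normal(size=n * M)
    V[i] = f_hat(X, h(n * M)) - f_hat(X[: (n - 1) * M], h((n - 1) * M))
limit = (1 + alpha) * K(0.0) * (1.0 / (2.0 * np.sqrt(np.pi)))
print("simulated n^2*M*h_{nM}*var(V_n):", n**2 * M * h(n * M) * V.var())
print("asymptotic limit v*f(x)*int K^2:", limit)
```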
The results of Theorem 2.3 may be refined by the decomposition of Theorem 2.4.

Theorem 2.4: $V_n(x)$ can be decomposed into the sum of two independent random variables, $A_n(x)$ and $B_n(x)$, such that under the conditions of Theorem 2.3,

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}(A_n(x)) = (v - 1) f(x) \int_{-\infty}^{\infty} K^2(u)\, du$

and

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}(B_n(x)) = f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$
Proof: Write

$V_n(x) = \sum_{j=1}^{(n-1)M} \left[\frac{1}{nM h_{nM}} K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{1}{(n-1)M h_{(n-1)M}} K\left(\frac{x - X_j}{h_{(n-1)M}}\right)\right] + \frac{1}{nM h_{nM}} \sum_{j=(n-1)M+1}^{nM} K\left(\frac{x - X_j}{h_{nM}}\right) = A_n(x) + B_n(x) .$

Since $A_n(x)$ depends only on $X_1, \ldots, X_{(n-1)M}$ and $B_n(x)$ only on $X_{(n-1)M+1}, \ldots, X_{nM}$, $A_n(x)$ and $B_n(x)$ are independent. Now

$\mathrm{var}\, B_n(x) = \frac{M}{(nM h_{nM})^2}\, \mathrm{var}\, K\left(\frac{x - X_1}{h_{nM}}\right) ,$

so that

$\lim_{n \to \infty} n^2 M h_{nM}\, \mathrm{var}\, B_n(x) = \lim_{n \to \infty} \frac{1}{h_{nM}}\, \mathrm{var}\, K\left(\frac{x - X_1}{h_{nM}}\right) = f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$

The result for $A_n(x)$ follows from the fact that $\mathrm{var}\, A_n(x) = \mathrm{var}\, V_n(x) - \mathrm{var}\, B_n(x)$. □

Notice that $B_n(x)$ is a finite sum of M terms, each identically distributed.
If the density were known, then for suitable conditions on K the exact distribution could be found. It would be an M-fold convolution of densities of random variables of the form

$Z_k = \frac{1}{nM h_{nM}} K\left(\frac{x - X_k}{h_{nM}}\right) .$

Also, it is not difficult to show that $B_n(x)$ is bounded by $\frac{1}{n h_{nM}} \sup_u K(u)$.

We also note here that $h_n = B n^{-\alpha}$, with $0 < \alpha < 1$ and B a non-negative constant, satisfies the hypotheses of Theorem 2.3 with $v = 1 + \alpha$.
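The limit in the hypothesis of Theorem 2.3 is easy to check numerically for this family of bandwidths; the following few lines (an illustration, not from the paper) print $n\{h_{nM}/h_{(n-1)M} - 1\}$ for increasing n and show it approaching $1 - v = -\alpha$.

```python
# Quick numeric check (illustrative) that h_k = B*k**(-alpha) gives
# lim n*(h_{nM}/h_{(n-1)M} - 1) = -alpha = 1 - v, i.e. v = 1 + alpha.
alpha, B, M = 0.5, 1.0, 10
h = lambda k: B * k ** (-alpha)
for n in [10, 100, 1000, 10000]:
    print(n, n * (h(n * M) / h((n - 1) * M) - 1))  # -> -0.5
```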
We close this section by demonstrating the asymptotic normality of $A_n(x)$. Lemma 2.5 below follows in a manner similar to Theorem 2.3, so the proof is omitted.

Lemma 2.5: If K and $h_n$ satisfy (1.2) and (1.3) respectively, and if

$\lim_{n \to \infty} n \left\{\frac{h_{nM}}{h_{(n-1)M}} - 1\right\} = 1 - v, \quad v \text{ a constant},$

then $A_{n1}(x)$ defined by

$A_{n1}(x) = \frac{n-1}{n h_{nM}} K\left(\frac{x - X_1}{h_{nM}}\right) - \frac{1}{h_{(n-1)M}} K\left(\frac{x - X_1}{h_{(n-1)M}}\right)$

satisfies

$\lim_{n \to \infty} n h_{nM}\, \mathrm{var}(A_{n1}(x)) = (v - 1) f(x) \int_{-\infty}^{\infty} K^2(u)\, du .$
Lemma 2.6: If

i) K and $h_n$ satisfy (1.2) and (1.3) respectively and

$\lim_{n \to \infty} n \left\{\frac{h_{nM}}{h_{(n-1)M}} - 1\right\} = 1 - v, \quad v \text{ a constant},$

and finally

ii) if for every sequence of real numbers $c_n \to 1$, $K(c_n u) \to K(u)$ uniformly in u,

then

$\lim_{n \to \infty} n^2 h_{nM}^2\, E|A_{n1}(x)|^3 = 0 .$
Proof:

$E|A_{n1}(x)|^3 = \int_{-\infty}^{\infty} \left| \frac{n-1}{n h_{nM}} K\left(\frac{x-y}{h_{nM}}\right) - \frac{1}{h_{(n-1)M}} K\left(\frac{x-y}{h_{(n-1)M}}\right) \right|^3 f(y)\, dy = \frac{1}{n^3 h_{nM}^3} \int_{-\infty}^{\infty} \left| (n-1) K\left(\frac{x-y}{h_{nM}}\right) - \frac{n h_{nM}}{h_{(n-1)M}} K\left(\frac{x-y}{h_{(n-1)M}}\right) \right|^3 f(y)\, dy .$

Multiplying both sides by $n^2 h_{nM}^2$ and making the transformation $u = (x-y)/h_{nM}$,

$n^2 h_{nM}^2\, E|A_{n1}(x)|^3 = \frac{1}{n} \int_{-\infty}^{\infty} \left| (n-1) K(u) - \frac{n h_{nM}}{h_{(n-1)M}} K\left(\frac{h_{nM}}{h_{(n-1)M}} u\right) \right|^3 f(x - h_{nM} u)\, du .$

But by ii, uniformly in u,

$\lim_{n \to \infty} \left| (n-1) K(u) - \frac{n h_{nM}}{h_{(n-1)M}} K\left(\frac{h_{nM}}{h_{(n-1)M}} u\right) \right| = \lim_{n \to \infty} \left| (n-1) - \frac{n h_{nM}}{h_{(n-1)M}} \right| K(u) .$

Now

$\lim_{n \to \infty} \left| (n-1) - \frac{n h_{nM}}{h_{(n-1)M}} \right| = \lim_{n \to \infty} \left| -1 + n\left[1 - \frac{h_{nM}}{h_{(n-1)M}}\right] \right| = |-1 + v - 1| = |v - 2| .$

Hence, given $\delta > 0$, there is $n_0$ such that for $n > n_0$,

$0 \le n^2 h_{nM}^2\, E|A_{n1}(x)|^3 \le \frac{|v-2|^3}{n} \int_{-\infty}^{\infty} K^3(u) f(x - h_{nM} u)\, du + \frac{c}{n} \int_{-\infty}^{\infty} f(x - h_{nM} u)\, du .$

Since $\frac{1}{n} \int_{-\infty}^{\infty} f(x - h_{nM} u)\, du = \frac{1}{n h_{nM}} \to 0$ as $n \to \infty$, we have

$n^2 h_{nM}^2\, E|A_{n1}(x)|^3 \to 0 . \quad \square$
We may now apply the Normal Convergence Criterion (N.C.C.) found in Loève (1963, p. 316) to complete this section.

Theorem 2.7: If K and $h_n$ satisfy the hypotheses of Lemma 2.6, then

(2.2)    $\lim_{n \to \infty} P\left[\frac{A_n(x) - E A_n(x)}{(\mathrm{var}\, A_n(x))^{1/2}} \le c\right] = \int_{-\infty}^{c} \frac{1}{\sqrt{2\pi}} e^{-u^2/2}\, du = \Phi(c) .$
Proof: By the N.C.C., a necessary and sufficient condition for (2.2) to hold is that for $\epsilon > 0$,

(2.3)    $(n-1)M\, P\left[\left|\frac{A_{n1}(x) - E A_{n1}(x)}{[(n-1)M]^{1/2}\, \sigma[A_{n1}(x)]}\right| \ge \epsilon\right] \to 0 .$

A sufficient condition (Liapounov's condition) for (2.3) is that for some $\delta > 0$,

$\frac{E|A_{n1}(x) - E[A_{n1}(x)]|^{2+\delta}}{(nM)^{\delta/2}\, \sigma^{2+\delta}[A_{n1}(x)]} \to 0 ,$

where $\sigma^2[A_{n1}(x)] = \mathrm{var}[A_{n1}(x)]$. We let $\delta = 1$. Then using the inequality $(a+b)^3 \le 4(a^3 + b^3)$, we obtain

$E|A_{n1}(x) - E[A_{n1}(x)]|^3 \le 8 E|A_{n1}(x)|^3 ,$

so that it suffices to show

$\frac{8 E|A_{n1}(x)|^3}{(nM)^{1/2}\, \sigma^3[A_{n1}(x)]} \to 0 .$

By Lemmas 2.5 and 2.6,

$\sigma^2[A_{n1}(x)] = O\left(\frac{1}{n h_{nM}}\right) \quad \text{and} \quad E|A_{n1}(x)|^3 = o\left(\frac{1}{n^2 h_{nM}^2}\right) ,$

so that

$\frac{8 E|A_{n1}(x)|^3}{(nM)^{1/2}\, \sigma^3[A_{n1}(x)]} = o\left(\frac{1}{n^{1/2} (n h_{nM})^{1/2}}\right) \to 0 ,$

which completes the result. □
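The following Monte Carlo sketch (an illustration under assumed parameters, not the paper's computation) standardizes replicated draws of $A_n(x)$ by their sample mean and standard deviation and checks two quantiles against the standard normal limit asserted by (2.2).

```python
import numpy as np

# Monte Carlo sketch of Theorem 2.7 (illustration only): standardized
# replicates of A_n(x) should be approximately N(0,1).
rng = np.random.default_rng(2)
x, M, n, reps = 0.0, 10, 40, 4000
h = lambda k: k ** (-1.0 / 3.0)
K = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

vals = np.empty(reps)
for i in range(reps):
    X = rng.normal(size=(n - 1) * M)          # A_n(x) involves X_1,...,X_{(n-1)M}
    vals[i] = (K((x - X) / h(n * M)) / (n * M * h(n * M))
               - K((x - X) / h((n - 1) * M)) / ((n - 1) * M * h((n - 1) * M))).sum()
z = (vals - vals.mean()) / vals.std()
print("P[Z <= 0]     ≈", np.mean(z <= 0.0))    # about 0.50 under the normal limit
print("P[Z <= 1.645] ≈", np.mean(z <= 1.645))  # about 0.95 under the normal limit
```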
3. The Stopping Variable N(ε,M):

In this section, we shall generally suppress in the notation the explicit dependence of N(ε,M) on ε and M. Hence we write N(ε,M) simply as N. Noting that

$[N = nM] \subseteq [\,|V_n(x)| < \epsilon\,] ,$

it is clear that the probabilistic structure of N is closely related to that of $V_n(x)$. Inasmuch as the structure of $B_n(x)$ depends on f(x), we will be, in general, unable to give the exact asymptotic structure of N. In this section, we demonstrate the finiteness of the moments of N, the closure of N, and the divergence of N as $\epsilon \to 0$.
Lemma 3.1: For arbitrary t > 0 and given ε > 0,

(3.1)    $P[V_n(x) > \epsilon] \le e^{-nM h_{nM} \epsilon t}\, E e^{S_n(x) t}$

and

(3.2)    $P[V_n(x) < -\epsilon] \le e^{-nM h_{nM} \epsilon t}\, E e^{-S_n(x) t} ,$

where

$S_n(x) = \sum_{j=1}^{nM} K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{n}{n-1} \frac{h_{nM}}{h_{(n-1)M}} \sum_{j=1}^{(n-1)M} K\left(\frac{x - X_j}{h_{(n-1)M}}\right) ,$

so that $S_n(x) = nM h_{nM} V_n(x)$.

Proof: Define T(x) to be the indicator of $[S_n(x) > nM h_{nM} \epsilon]$. Then for arbitrary t > 0, $T(x) \le e^{t(S_n(x) - nM h_{nM} \epsilon)}$; taking expectations and noting that $[V_n(x) > \epsilon] = [S_n(x) > nM h_{nM} \epsilon]$ completes the proof of (3.1). Equation (3.2) follows by similar arguments. □
Now,

$P[N > nM] = P\left[\bigcap_{k=2}^{n} \{|V_k(x)| > \epsilon\}\right] \le P[\,|V_n(x)| > \epsilon\,] .$

By Lemma 3.1, for arbitrary t > 0,

$P[N > nM] \le e^{-nM h_{nM} \epsilon t} \left[ E e^{S_n(x) t} + E e^{-S_n(x) t} \right] .$

We next examine $E e^{S_n(x) t}$ and $E e^{-S_n(x) t}$. Let us decompose $S_n(x)$ into the sum of

$A^*_{nk}(x) = K\left(\frac{x - X_k}{h_{nM}}\right) - \frac{n}{n-1} \frac{h_{nM}}{h_{(n-1)M}} K\left(\frac{x - X_k}{h_{(n-1)M}}\right), \quad k = 1, 2, \ldots, (n-1)M ,$

and

$B^*_{nk}(x) = K\left(\frac{x - X_k}{h_{nM}}\right), \quad k = (n-1)M + 1, \ldots, nM .$

Notice that $A^*_{n1}(x), \ldots, A^*_{n,(n-1)M}(x)$ are i.i.d., that $B^*_{n,(n-1)M+1}(x), \ldots, B^*_{n,nM}(x)$ are i.i.d., and that all of these random variables are mutually independent. Thus

$E e^{S_n(x)} = \left[E e^{A^*_{n1}(x)}\right]^{(n-1)M} \left[E e^{B^*_{n,nM}(x)}\right]^{M} .$

But if $L = \sup_x K(x)$, then since $B^*_{n,nM}(x) \le L$, $E e^{B^*_{n,nM}(x)} \le e^L$. Let

$a_n = E\left[\,|A^*_{n1}(x)|\, e^{|A^*_{n1}(x)|}\,\right] .$
Lemma 3.2: If $a_n$ satisfies

$\limsup_{n \to \infty} \frac{n-1}{\log n}\, a_n \le \gamma \quad \text{for some } \gamma \ge 0 ,$

then for n sufficiently large,

$E\left[e^{S_n(x)}\right] \le e^{ML} n^{M\gamma} .$

Proof: For n sufficiently large, since $a_n \to 0$, $a_n \ge \log(1 + a_n)$. Combining this with the inequality on $a_n$, we have for n sufficiently large,

$(n-1)M \log(1 + a_n) \le M\gamma \log n .$

Exponentiating both sides,

$(1 + a_n)^{(n-1)M} \le n^{M\gamma} .$

Now

$E e^{S_n(x)} \le \left[E e^{A^*_{n1}(x)}\right]^{(n-1)M} e^{ML} .$

This, together with the observation that $e^{A^*_{n1}(x)} \le 1 + |A^*_{n1}(x)| e^{|A^*_{n1}(x)|}$, so that $E e^{A^*_{n1}(x)} \le 1 + a_n$, completes the proof. □
Under the hypotheses of Lemmas 3.1 and 3.2 we have, for arbitrary t > 0,

(3.3)    $P[N > nM] \le 2 e^{ML} n^{M\gamma} e^{-nM h_{nM} \epsilon t} .$

Notice that in general $n h_{nM} \to \infty$, so that $e^{-nM h_{nM} \epsilon t} \to 0$. Since $n^{M\gamma} \to \infty$, however, we will usually want to choose $h_n$ in such a way that for any $\delta > 0$ and for n sufficiently large,

(3.4)    $e^{-nM h_{nM} \epsilon} \le n^{-\delta} .$

We note here that the usual choice $h_n = B n^{-\alpha}$, $0 < \alpha < 1$, is sufficient to guarantee (3.4).
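A few lines of arithmetic (illustrative assumptions only) show why $h_n = B n^{-\alpha}$ guarantees (3.4): the logarithm of $e^{-nM h_{nM}\epsilon}$ grows in magnitude like $n^{1-\alpha}$, so its ratio to $\log n$ diverges and the inequality eventually holds for every $\delta$.

```python
import numpy as np

# With h_n = B*n**(-alpha), the exponent -n*M*h_{nM}*eps ~ -c*n**(1-alpha)
# beats any -delta*log(n). Assumed values: B = 1, alpha = 0.5, M = 10, eps = 0.01.
M, eps = 10, 0.01
h = lambda k: k ** -0.5
for n in [10**3, 10**5, 10**7, 10**9]:
    expo = -n * M * h(n * M) * eps   # log of exp(-n*M*h_{nM}*eps)
    print(n, expo / np.log(n))       # -> -infinity, so (3.4) holds eventually
```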
Theorem 3.3: Under the hypotheses of Lemmas 3.1 and 3.2, and assuming (3.4), we have $E N^r < \infty$ for every $r \ge 0$.

Proof:

$E N^r = \sum_{n=0}^{\infty} (nM)^r P[N = nM] \le M^r \sum_{n=1}^{\infty} (n+1)^r P[N \ge nM] .$

Since $P[N \ge nM] = P[N > (n-1)M]$, using (3.3) with t = 1 and reindexing (adjusting the constant),

$E N^r \le 2 (3M)^r e^{ML} \sum_{n=1}^{\infty} n^{r + M\gamma} e^{-nM h_{nM} \epsilon} .$

Now for $\delta = r + M\gamma + 2$, there is $n_0$ such that for $n \ge n_0$, (3.4) holds. Hence

$E N^r \le 2 (3M)^r e^{ML} \left[ \sum_{n=1}^{n_0 - 1} n^{r + M\gamma} e^{-nM h_{nM} \epsilon} + \sum_{n=n_0}^{\infty} n^{-2} \right] < \infty ,$

which completes the proof. □
The definition of $a_n$ in Lemma 3.2 involves the density, f, and so is unsatisfactory from a statistician's point of view. Let us suppose

$\limsup_{n \to \infty} \frac{n}{\log n} \sup_x |A^*_{n1}(x)| = c < \infty .$

Then $\sup_x |A^*_{n1}(x)| \to 0$, so for n sufficiently large $e^{|A^*_{n1}(x)|} < 2$ for every x, and it is clear that the condition $\limsup_{n \to \infty} \frac{n-1}{\log n} a_n \le \gamma$ holds (with $\gamma = 2c$). The normal and double exponential kernels satisfy this latter sufficient condition. The uniform kernel does not, but it does satisfy the condition on $a_n$ for every density, f.
Theorem 3.4: Under the hypotheses of Lemmas 3.1 and 3.2, and assuming (3.4), $P[N < \infty] = 1$; hence N is a closed stopping variable.

Proof: $P[N = \infty] = \lim_{n \to \infty} P[N > nM] \le \lim_{n \to \infty} 2 e^{ML} n^{M\gamma} e^{-nM h_{nM} \epsilon} = 0 . \quad \square$
Now let us consider the behavior of N as a function of $\epsilon$. Let

$\eta_n^{\epsilon} = [\,|V_j(x)| \le \epsilon \text{ for some } j \le n\,] \quad \text{and} \quad \eta_n = [\,V_j(x) = 0 \text{ for some } j \le n\,] .$

Since $\eta_n^{\epsilon} \downarrow \eta_n$ as $\epsilon \to 0$, it follows that $P[N \le nM] = P(\eta_n^{\epsilon})$ converges to $P(\eta_n)$. If $P(\eta_n) = 0$, then it follows that $N \to \infty$ in probability as $\epsilon \to 0$. In general, $P(\eta_n)$ may not be zero. Consider

$K(u) = \begin{cases} \frac{1}{2}, & |u| < 1 \\ 0, & |u| \ge 1 . \end{cases}$

Then $V_n(x) = 0$ whenever no observation lies within $h_{(n-1)M}$ of x, since then every kernel term in both estimates vanishes. Now

$P\left[K\left(\frac{x - X_k}{h_{nM}}\right) = 0\right] = P\left[\left|\frac{x - X_k}{h_{nM}}\right| \ge 1\right] = 1 - \int_{x - h_{nM}}^{x + h_{nM}} f(u)\, du .$

This last quantity will in general be strictly positive, so that (assuming, as is usual, that $h_n$ is non-increasing)

$P(\eta_n) \ge P(V_n(x) = 0) \ge \left[1 - \int_{x - h_{(n-1)M}}^{x + h_{(n-1)M}} f(u)\, du\right]^{nM} > 0 .$

The significant point of this example is that the uniform kernel may miss all the observations, and hence both $\hat f_{nM}(x)$ and $\hat f_{(n-1)M}(x)$ could be 0. Clearly, were we to consider a normal or double exponential kernel, this could not happen. Let $\mathcal{K}$ be the class of kernels satisfying (1.2) and for which $P[V_n(x) = 0] = 0$ for any n.
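A small simulation (an illustration; the point x, the bandwidth, and the sample sizes are assumptions) estimates a lower bound for $P[V_n(x) = 0]$ with the uniform kernel by counting replications in which every observation misses the wider window, which forces both estimates, and hence $V_n(x)$, to vanish.

```python
import numpy as np

# Monte Carlo illustration: with the uniform kernel, all observations can
# miss the window around x, forcing f_hat_{nM}(x) = f_hat_{(n-1)M}(x) = 0
# and hence V_n(x) = 0 with positive probability.
rng = np.random.default_rng(1)
n, M, x = 3, 5, 3.0                 # x sits in the tail of f = N(0,1)
h = lambda k: k ** (-1.0 / 3.0)
reps, misses = 20_000, 0
for _ in range(reps):
    X = rng.normal(size=n * M)
    # Sufficient condition for V_n(x) = 0: no observation within the wider
    # bandwidth h_{(n-1)M} of x (h here is decreasing, so h_{(n-1)M} > h_{nM}).
    if np.all(np.abs(x - X) >= h((n - 1) * M)):
        misses += 1
print("lower-bound estimate of P[V_n(x) = 0]:", misses / reps)
```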
Lemma 3.5: If K(u) is a kernel satisfying (1.2) such that

(i) K(u) is differentiable at all but possibly a finite number of values of u, and

(ii) K'(u) is continuous and non-zero at all but a finite number of values of u,

then $V_n(x)$ has an absolutely continuous distribution, so that $K \in \mathcal{K}$.

Proof: The random variables

$Y_k = \frac{1}{nM h_{nM}} K\left(\frac{x - X_k}{h_{nM}}\right), \quad k = 1, 2, \ldots, nM ,$

have a common absolutely continuous distribution (Parzen, 1960, p. 313). But then $\hat f_{nM}(x) = Y_1 + \cdots + Y_{nM}$ has a density given by the nM-fold convolution of the density of $Y_1$. It follows that $V_n(x) = \hat f_{nM}(x) - \hat f_{(n-1)M}(x)$ has a density (Parzen, 1960, p. 318). □

We note the normal, double exponential and Cauchy kernels all satisfy i and ii, so that $\mathcal{K}$ is not empty.
Theorem 3.6: Let $K \in \mathcal{K}$ and let $h_n$ satisfy (1.3). Then $N \to \infty$ as $\epsilon \to 0$, in probability and with probability one.

Proof: The divergence in probability follows from the previous remarks, since $P[V_n(x) = 0 \text{ for some } n] = 0$. Now let

$E = [\,|V_n(x)| \to 0 \text{ as } n \to \infty,\ |V_n(x)| > 0 \text{ for every } n\,] ;$

since $K \in \mathcal{K}$, $P[E] = 1$. Let $\omega \in E$, let $n_0$ be any finite number, and assume $N < n_0$ for all $\epsilon > 0$; that is, for every $\epsilon > 0$, $|V_j(x)| < \epsilon$ for at least one $j \le n_0$. But let $\epsilon^* = \frac{1}{2} \min_{1 \le j \le n_0} |V_j(x)|$; since $|V_j(x)| > 0$ for all $j \le n_0$, $\epsilon^* > 0$, and for $\epsilon < \epsilon^*$ we have $|V_j(x)| > \epsilon$ for all $j \le n_0$. This is a contradiction to $N < n_0$ for all $\epsilon > 0$. Hence $N \ge n_0$ for $\epsilon$ sufficiently small. That is to say, N is greater than any finite number for $\epsilon$ sufficiently small and for $\omega \in E$. Thus $P[N \to \infty \text{ as } \epsilon \to 0] = 1 . \quad \square$
We are now able to state a convergence theorem based on Theorem 3.6.

Theorem 3.7: Suppose $N \to \infty$ as $\epsilon \to 0$ with probability one and $\hat f_n(x) \to f(x)$ as $n \to \infty$ with probability one. Then $\hat f_N(x) \to f(x)$ as $\epsilon \to 0$ with probability one.

Proof: Let A be the set of probability one on which $N \to \infty$, and let B be the set of probability one on which $\hat f_n(x) \to f(x)$. Clearly on $A \cap B$, $\hat f_N(x) \to f(x)$, and $P(A \cap B) = 1 . \quad \square$

Sufficient conditions for $N \to \infty$ appear in Theorem 3.6. Sufficient conditions for $\hat f_n(x) \to f(x)$ appear in Theorem 2.1.
We close this section by noting that a slightly revised stopping rule N', given by

$N'(\epsilon, M) = \begin{cases} \text{first } n \text{ such that } 0 < |V_n(x)| < \epsilon \\ \infty \text{ if no such } n \text{ exists,} \end{cases}$

obviates the need to consider the class $\mathcal{K}$. For $K \in \mathcal{K}$, N = N' a.s. The results of this section hold for N' as well as for N, except that the reference to $K \in \mathcal{K}$ may be removed from Theorem 3.6. Modifications needed in the proofs are obvious and left to the reader.
4. Concluding Remarks:

The problems associated with the choice of K and $h_n$ are well-known and appreciated by users and theoreticians alike. We shall not comment except to say these problems remain in the sequential case. To these we have added those associated with the choice of ε and M. Some clue to the choice of ε is given by the following easily-proved observation:

(4.1)    $E|V_n(x)| \le \left\{E[\hat f_{nM}(x) - f(x)]^2\right\}^{1/2} + \left\{E[\hat f_{(n-1)M}(x) - f(x)]^2\right\}^{1/2} .$

In general, we would like to choose ε so that the mean square error meets some prespecified error level, say $\delta$, $0 < \delta$. For heuristic purposes, let us suppose M is sufficiently small and n sufficiently large so that

$E[\hat f_{nM}(x) - f(x)]^2 \approx \delta \approx E[\hat f_{(n-1)M}(x) - f(x)]^2 .$

Then according to (4.1), $E|V_n(x)| \le 2\delta^{1/2}$, suggesting that $\epsilon = 2\delta^{1/2}$ is a suitable choice of ε to meet error level $\delta$. Actually, this appears to be quite a conservative way of choosing ε.
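As a worked instance of this heuristic (the target level is assumed for illustration):

```python
# Target mean square error delta -> stopping threshold eps = 2*sqrt(delta).
delta = 2.5e-5          # prespecified MSE level (assumed)
eps = 2 * delta ** 0.5
print(eps)              # 0.01
```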
If there is no penalty for sampling items one-at-a-time rather than in blocks of M, it is clear that M = 1 is the best choice. If M is too large, $|V_N(x)|$ will be substantially less than ε, and hence too many items will be sampled. The optimal M must be determined by weighing the costs of sampling one-at-a-time against the cost of taking an unnecessarily large sample. A really satisfying theory for the choice of ε and M is yet to be devised. In the meantime, M = 1 and $\epsilon = 2\delta^{1/2}$ appear to be adequate.
References:

1. Loève, M. (1963), Probability Theory, Van Nostrand, New York.

2. Nadaraya, E.A. (1965), "On nonparametric estimates of density functions and regression curves," Theory Prob. Appl., 10, 186-190.

3. Parzen, E. (1960), Modern Probability Theory and Its Applications, John Wiley and Sons, New York.

4. Parzen, E. (1962), "On the estimation of a probability density function and the mode," Ann. Math. Statist., 33, 1065-1076.

5. Rosenblatt, M. (1956), "Remarks on some nonparametric estimates of a density function," Ann. Math. Statist., 27, 832-837.

6. Van Ryzin, J. (1969), "On strong consistency of density estimates," Ann. Math. Statist., 40, 1765-1772.