SEQUENTIAL NONPARAMETRIC DENSITY ESTIMATION

H. I. Davies and Edward J. Wegman

Department of Statistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 884

August, 1973
SEQUENTIAL NONPARAMETRIC DENSITY ESTIMATION

by

H. I. Davies¹ and Edward J. Wegman
1. Introduction: In this paper, we shall discuss a sequential approach to probability density estimation. For the most part we shall confine our attention to estimators of the form

(1.1)  $\hat{f}_n(x) = \frac{1}{nh_n} \sum_{j=1}^{n} K\left(\frac{x - X_j}{h_n}\right),$

first introduced by Rosenblatt (1956) and discussed in greater detail by Parzen (1962). Here, of course, $X_1, X_2, \ldots, X_n$ are i.i.d. random variables chosen according to some density, $f$.
In this paper, the function $K$, the so-called kernel, is assumed to be a bounded density on the real line satisfying

(1.2)  $\lim_{u \to \pm\infty} |u|\,K(u) = 0.$
Moreover, the sequence $h_n$ is assumed to be a sequence of positive real numbers satisfying

(1.3)  $\lim_{n \to \infty} h_n = 0, \qquad \lim_{n \to \infty} n h_n = \infty \qquad \text{and} \qquad \lim_{n \to \infty} \frac{h_{n+1}}{h_n} = 1.$
¹The work of this author was supported by a C.S.I.R.O. postgraduate studentship.
We shall principally focus our attention on a naive stopping rule defined by the following procedure: Choose successive random samples of size $M$ and form the differences

(1.4)  $V_n(x) = \hat{f}_{nM}(x) - \hat{f}_{(n-1)M}(x),$

where $\hat{f}_{nM}(x)$ and $\hat{f}_{(n-1)M}(x)$ are the density estimators based on sample sizes $nM$ and $(n-1)M$ respectively.
The stopping rule is

(1.5)  $N(\varepsilon, M) = \begin{cases} \text{first } n \text{ such that } |V_n(x)| < \varepsilon & \text{for fixed } \varepsilon > 0, \\ \infty & \text{if no such } n \text{ exists.} \end{cases}$
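The procedure is easy to state computationally. The following minimal sketch (ours, for illustration only) implements (1.1), (1.4) and (1.5) in Python, assuming a Gaussian kernel, the bandwidth sequence $h_n = n^{-1/5}$, and a cap on the number of blocks; these particular choices, and the function names, are hypothetical rather than prescribed by the paper.

    import numpy as np

    def kernel_density(x, data, h):
        # Rosenblatt-Parzen estimate (1.1) at the point x, with a
        # Gaussian kernel (a bounded density satisfying (1.2)).
        u = (x - data) / h
        return np.sum(np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)) / (len(data) * h)

    def sequential_estimate(sample_block, x, eps, M, h=lambda m: m ** -0.2, max_blocks=10000):
        # Naive stopping rule (1.5): draw blocks of size M until
        # |f_hat_{nM}(x) - f_hat_{(n-1)M}(x)| < eps.
        data = np.asarray(sample_block(M))              # first block, n = 1
        f_prev = kernel_density(x, data, h(len(data)))
        for n in range(2, max_blocks + 1):
            data = np.concatenate([data, sample_block(M)])
            f_curr = kernel_density(x, data, h(len(data)))
            if abs(f_curr - f_prev) < eps:              # |V_n(x)| < eps: stop
                return f_curr, n                        # estimate and stopping index N
            f_prev = f_curr
        return f_prev, max_blocks                       # no stopping within the cap

For instance, with rng = np.random.default_rng(0), the call sequential_estimate(lambda m: rng.standard_normal(m), x=0.0, eps=0.01, M=10) sequentially estimates a standard normal density at 0.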
In section 2, we investigate the asymptotic structure of $V_n(x)$. In section 3, we investigate properties of the stopping variable, $N(\varepsilon, M)$. Finally, section 4 is a concluding section.
2. Asymptotic Structure of $V_n(x)$:

Theorem 2.1:
i. If $K$ and $h_n$ satisfy (1.2) and (1.3) respectively, then $|V_n(x)| \to 0$ in probability for every $x \in C(f)$, the continuity points of $f$, and
ii. $\sup_x |V_n(x)| \to 0$ in probability if $f$ is uniformly continuous.
If, in addition, for some $a > 0$,
iii. $\sup_{|u| \geq a} |u|\,\{K(cu) - K(u)\}^2$ is locally Lipschitz of order $\alpha$ at $c = 1$ for some $\alpha > 0$,
iv. $\int_{-\infty}^{\infty} \{K(cu) - K(u)\}^2\,du$ is locally Lipschitz of order $\alpha$ at $c = 1$,
and finally
v. $\sum_{n=1}^{\infty} \frac{1}{h_n^{1-\theta}} \left|\frac{h_{n+1}}{h_n} - 1\right|^{\theta} < \infty$, where $\theta = \min\{\alpha, 1\}$,
then
vi. $|V_n(x)| \to 0$ with probability one for every $x \in C(f)$.
Proof: i). Under the stated conditions, $\hat{f}_n(x) \to f(x)$ in probability, Parzen (1962). Hence $\{\hat{f}_n(x)\}$ is a Cauchy sequence in probability, so that in probability $|V_n(x)| = |\hat{f}_{nM}(x) - \hat{f}_{(n-1)M}(x)| \to 0$. Results ii and vi follow in a similar way. □

Conditions iii, iv and v are those of Van Ryzin (1969) for strong consistency. Alternate sufficient conditions were given by Nadaraya (1965), which may be used as replacements for iii, iv, and v.
The next sequence of results concerns the asymptotic variance structure.

Lemma 2.2: Let $K(y)$ be a piecewise continuous Borel function satisfying (1.2) and let $g$ be a real function in $L_1$. Then if $\{h_n\}$ satisfies (1.3),

$g_n(x) = \int_{-\infty}^{\infty} \frac{1}{h_n}\, K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) g(x - y)\,dy$

converges to $g(x) \int_{-\infty}^{\infty} K^2(y)\,dy$ for every $x \in C(g)$.

Proof: The proof is in two stages. First we show

(A)  $\lim_{n\to\infty} g_n(x) = \lim_{n\to\infty} \frac{g(x)}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy$
and then

(B)  $\lim_{n\to\infty} \frac{1}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy = \int_{-\infty}^{\infty} K^2(y)\,dy.$

Proof of (A): Consider $A_n$ defined by

$A_n = \left| g_n(x) - \frac{g(x)}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy \right| = \left| \int_{-\infty}^{\infty} \frac{1}{h_n} \left[ g(x-y) - g(x) \right] K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy \right|.$
Letting $\delta > 0$,

$A_n \leq \max_{|y| \leq \delta} |g(x-y) - g(x)|\, \frac{1}{h_n} \int_{|y| \leq \delta} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy + \frac{1}{h_n} \int_{|y| > \delta} |g(x-y)|\, K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy + \frac{|g(x)|}{h_n} \int_{|y| > \delta} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy.$
In the first and third terms of the R.H.S., we make the transformation $z = y/h_n$, so that

$A_n \leq \max_{|y| \leq \delta} |g(x-y) - g(x)| \int_{|z| \leq \delta/h_n} K(z)\, K\left(\frac{h_n z}{h_{n-1}}\right) dz + \frac{1}{h_n} \int_{|y| > \delta} |g(x-y)|\, K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy + |g(x)| \int_{|z| > \delta/h_n} K(z)\, K\left(\frac{h_n z}{h_{n-1}}\right) dz.$
Since $K$ is bounded, the first term can be made arbitrarily small by choosing $\delta$ arbitrarily small. The second term is bounded by

$\frac{1}{\delta} \sup_{|z| \geq \delta/h_n} \left| z\,K(z)\, K\left(\frac{h_n z}{h_{n-1}}\right) \right| \int_{-\infty}^{\infty} |g(y)|\,dy,$

which, along with the third term, can be made arbitrarily small for choice of $n$ sufficiently large.
Proof of (B): Letting $z = y/h_n$,

$\left| \frac{1}{h_n} \int_{-\infty}^{\infty} K\left(\frac{y}{h_n}\right) K\left(\frac{y}{h_{n-1}}\right) dy - \int_{-\infty}^{\infty} K^2(z)\,dz \right| = \left| \int_{-\infty}^{\infty} \left\{ K\left(\frac{h_n z}{h_{n-1}}\right) - K(z) \right\} K(z)\,dz \right| \leq \int_{-\infty}^{\infty} \left| K\left(\frac{h_n z}{h_{n-1}}\right) - K(z) \right| K(z)\,dz.$

Clearly this last integrand is bounded by $[2 \sup_y K(y)] \cdot K(z)$, and hence appealing to the Lebesgue Dominated Convergence theorem completes the result. □
Theorem 2.3: Let $K$ and $h_n$ satisfy (1.2) and (1.3) respectively and also let

$\lim_{n\to\infty} n\left\{ \frac{h_{nM}}{h_{(n-1)M}} - 1 \right\} = 1 - v, \qquad v < \infty;$

then

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(V_n(x)) = v\, f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$

Proof: By definition,

$V_n(x) = \frac{1}{nM} \sum_{j=1}^{nM} \frac{1}{h_{nM}}\, K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{1}{(n-1)M} \sum_{j=1}^{(n-1)M} \frac{1}{h_{(n-1)M}}\, K\left(\frac{x - X_j}{h_{(n-1)M}}\right).$
Since $X_1, \ldots, X_{nM}$ are i.i.d., Parzen (1962) shows

(2.1)  $\lim_{n\to\infty} \frac{1}{h_{nM}}\, \operatorname{var}\left( K\left(\frac{x - X_1}{h_{nM}}\right) \right) = f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$

In a similar manner, using Lemma 2.2, one may also show

$\lim_{n\to\infty} \frac{1}{h_{nM}}\, \operatorname{cov}\left( K\left(\frac{x - X_1}{h_{nM}}\right),\, K\left(\frac{x - X_1}{h_{(n-1)M}}\right) \right) = f(x) \int_{-\infty}^{\infty} K^2(u)\,du,$
so that

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(V_n(x)) = \lim_{n\to\infty} \left\{ \frac{n}{h_{nM}}\, \operatorname{var}\left( K\left(\frac{x - X_1}{h_{nM}}\right) \right) + \frac{n^2}{n-1} \cdot \frac{h_{nM}}{h_{(n-1)M}^2}\, \operatorname{var}\left( K\left(\frac{x - X_1}{h_{(n-1)M}}\right) \right) - \frac{2n}{h_{(n-1)M}}\, \operatorname{cov}\left( K\left(\frac{x - X_1}{h_{nM}}\right),\, K\left(\frac{x - X_1}{h_{(n-1)M}}\right) \right) \right\}$

$= \lim_{n\to\infty} \left\{ n + \frac{n^2}{n-1} \cdot \frac{h_{nM}}{h_{(n-1)M}} - 2n \cdot \frac{h_{nM}}{h_{(n-1)M}} \right\} f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$

After suitable simplification, since $n\left\{1 - \frac{h_{nM}}{h_{(n-1)M}}\right\} \to v - 1$ and $\frac{n}{n-1} \cdot \frac{h_{nM}}{h_{(n-1)M}} \to 1$,

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(V_n(x)) = \{1 - (1 - v)\}\, f(x) \int_{-\infty}^{\infty} K^2(u)\,du = v\, f(x) \int_{-\infty}^{\infty} K^2(u)\,du. \qquad \Box$
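The limit in Theorem 2.3 can be checked empirically. Below is a rough Monte Carlo sketch (our own illustration, not part of the paper) assuming a standard normal $f$, a Gaussian kernel, for which $\int K^2(u)\,du = 1/(2\sqrt{\pi})$, and $h_m = m^{-1/5}$, for which $v = 1.2$ by the remark following Theorem 2.4; the sample sizes and replication count are arbitrary, and the agreement is only approximate at moderate $n$.

    import numpy as np

    rng = np.random.default_rng(0)

    def fhat(x, data, h):
        u = (x - data) / h
        return np.mean(np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)) / h

    M, n, x, reps = 10, 200, 0.0, 4000
    h = lambda m: m ** -0.2                    # h_m = m^(-1/5), so v = 1 + 1/5 = 1.2
    V = np.empty(reps)
    for r in range(reps):
        data = rng.standard_normal(n * M)
        V[r] = fhat(x, data, h(n * M)) - fhat(x, data[:(n - 1) * M], h((n - 1) * M))

    scaled = n ** 2 * M * h(n * M) * V.var()   # n^2 M h_nM var(V_n(x))
    limit = 1.2 * (1.0 / np.sqrt(2.0 * np.pi)) * (1.0 / (2.0 * np.sqrt(np.pi)))
    print(scaled, limit)                       # the two should be roughly comparable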
The results of Theorem 2.3 may be refined by the decomposition of Theorem 2.4.
Theorem 2.4: $V_n(x)$ can be decomposed into the sum of two independent random variables, $A_n(x)$ and $B_n(x)$, such that under the conditions of Theorem 2.3

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(A_n(x)) = (v - 1)\, f(x) \int_{-\infty}^{\infty} K^2(u)\,du$

and

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(B_n(x)) = f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$
Proof: Notice that

$V_n(x) = \sum_{j=1}^{(n-1)M} \left[ \frac{1}{nMh_{nM}}\, K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{1}{(n-1)Mh_{(n-1)M}}\, K\left(\frac{x - X_j}{h_{(n-1)M}}\right) \right] + \frac{1}{nMh_{nM}} \sum_{j=(n-1)M+1}^{nM} K\left(\frac{x - X_j}{h_{nM}}\right) = A_n(x) + B_n(x).$

Since $A_n(x)$ depends only on $X_1, \ldots, X_{(n-1)M}$ and $B_n(x)$ only on $X_{(n-1)M+1}, \ldots, X_{nM}$, $A_n(x)$ and $B_n(x)$ are independent. Now

$\operatorname{var}(B_n(x)) = \frac{1}{n^2 M h_{nM}^2}\, \operatorname{var}\left( K\left(\frac{x - X_1}{h_{nM}}\right) \right),$

so that

$\lim_{n\to\infty} n^2 M h_{nM}\, \operatorname{var}(B_n(x)) = \lim_{n\to\infty} \frac{1}{h_{nM}}\, \operatorname{var}\left( K\left(\frac{x - X_1}{h_{nM}}\right) \right) = f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$

The result for $A_n(x)$ follows from the fact that $\operatorname{var}(A_n(x)) = \operatorname{var}(V_n(x)) - \operatorname{var}(B_n(x))$. □

Notice that $B_n(x)$ is a finite sum of $M$ identically distributed terms.
If the density were known, then for suitable conditions on $K$ the exact distribution could be found. It would be an $M$-fold convolution of densities of random variables of the form

$Z_k = \frac{1}{nMh_{nM}}\, K\left(\frac{x - X_k}{h_{nM}}\right).$

Also, it is not difficult to show that $B_n(x)$ is bounded by $\frac{1}{nh_{nM}} \sup_u K(u)$. We also note here that $h_n = \beta n^{-\alpha}$, with $0 < \alpha < 1$ and $\beta$ a positive constant, satisfies the hypotheses of Theorem 2.3 with $v = 1 + \alpha$.
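This remark is easy to verify numerically; a short check (ours) with $\beta = 1$, $\alpha = 1/5$ and $M = 10$:

    h = lambda m: m ** -0.2        # h_m = beta * m^(-alpha) with beta = 1, alpha = 1/5
    M = 10
    for n in (10, 100, 1000, 10000):
        print(n, n * (h(n * M) / h((n - 1) * M) - 1.0))

The printed values approach $-\alpha = -0.2$, i.e. $1 - v = -\alpha$, so $v = 1 + \alpha = 1.2$.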
We close this section by demonstrating the asymptotic normality of $A_n(x)$. Lemma 2.5 below follows in a manner similar to Theorem 2.3, so the proof is omitted.

Lemma 2.5: If $K$ and $h_n$ satisfy (1.2) and (1.3) respectively and if

$\lim_{n\to\infty} n\left\{ \frac{h_{nM}}{h_{(n-1)M}} - 1 \right\} = 1 - v, \qquad v \text{ a constant},$

then $A_{n1}(x)$ defined by

$A_{n1}(x) = \frac{n-1}{nh_{nM}}\, K\left(\frac{x - X_1}{h_{nM}}\right) - \frac{1}{h_{(n-1)M}}\, K\left(\frac{x - X_1}{h_{(n-1)M}}\right)$

satisfies

$\lim_{n\to\infty} n h_{nM}\, \operatorname{var}(A_{n1}(x)) = (v - 1)\, f(x) \int_{-\infty}^{\infty} K^2(u)\,du.$
Lemma 2.6: If $K$ and $h_n$ satisfy (1.2) and (1.3) respectively, if

i) $\lim_{n\to\infty} n\left\{ \frac{h_{nM}}{h_{(n-1)M}} - 1 \right\} = 1 - v$, $v$ a constant,

and finally if

ii) for every sequence of real numbers $c_n \to 1$, $K(c_n u) \to K(u)$ uniformly in $u$,

then

$\lim_{n\to\infty} n^2 h_{nM}^2\, E|A_{n1}(x)|^3 = 0.$
Proof:

$E|A_{n1}(x)|^3 = \int_{-\infty}^{\infty} \left| \frac{n-1}{nh_{nM}}\, K\left(\frac{x-y}{h_{nM}}\right) - \frac{1}{h_{(n-1)M}}\, K\left(\frac{x-y}{h_{(n-1)M}}\right) \right|^3 f(y)\,dy = \frac{1}{n^3 h_{nM}^3} \int_{-\infty}^{\infty} \left| (n-1)\,K\left(\frac{x-y}{h_{nM}}\right) - \frac{nh_{nM}}{h_{(n-1)M}}\, K\left(\frac{x-y}{h_{(n-1)M}}\right) \right|^3 f(y)\,dy.$

Multiplying both sides by $n^2 h_{nM}^2$ and making the transformation $u = \frac{x-y}{h_{nM}}$,

$n^2 h_{nM}^2\, E|A_{n1}(x)|^3 = \frac{1}{n} \int_{-\infty}^{\infty} \left| (n-1)\,K(u) - \frac{nh_{nM}}{h_{(n-1)M}}\, K\left(\frac{h_{nM}\,u}{h_{(n-1)M}}\right) \right|^3 f(x - h_{nM}u)\,du.$
But by ii, uniformly in $u$,

$\lim_{n\to\infty} \left| (n-1)\,K(u) - \frac{nh_{nM}}{h_{(n-1)M}}\, K\left(\frac{h_{nM}\,u}{h_{(n-1)M}}\right) \right| = \lim_{n\to\infty} \left| (n-1) - \frac{nh_{nM}}{h_{(n-1)M}} \right| K(u).$

Now

$\lim_{n\to\infty} \left[ (n-1) - \frac{nh_{nM}}{h_{(n-1)M}} \right] = \lim_{n\to\infty} \left\{ -1 + n\left[1 - \frac{h_{nM}}{h_{(n-1)M}}\right] \right\} = -1 + (v - 1) = v - 2.$

Hence given $\delta > 0$, there is $n_0$ such that for $n > n_0$,

$n^2 h_{nM}^2\, E|A_{n1}(x)|^3 \leq \frac{(|v-2| + \delta)^3}{n} \int_{-\infty}^{\infty} K^3(u)\, f(x - h_{nM}u)\,du \leq \frac{(|v-2| + \delta)^3 \sup_u K^3(u)}{n} \int_{-\infty}^{\infty} f(x - h_{nM}u)\,du.$

Since

$\frac{1}{n} \int_{-\infty}^{\infty} f(x - h_{nM}u)\,du = \frac{1}{nh_{nM}} \to 0 \quad \text{as } n \to \infty,$

we have $n^2 h_{nM}^2\, E|A_{n1}(x)|^3 \to 0$. □
We may now apply the Normal Convergence Criterion (N.C.C.) found in Loève (1963, p. 316) to complete this section.

Theorem 2.7: If $K$ and $h_n$ satisfy the hypotheses of Lemma 2.6, then

(2.2)  $\frac{A_n(x) - E\,A_n(x)}{\left[\operatorname{var}(A_n(x))\right]^{1/2}} \xrightarrow{\ \mathcal{L}\ } N(0, 1).$
Proof: By the N.C.C., a necessary and sufficient condition for (2.2) to hold is that for $\varepsilon > 0$,

(2.3)  $(n-1)M\; P\left[\, |A_{n1}(x) - E\,A_{n1}(x)| \geq \varepsilon\,[(n-1)M]^{1/2}\,[\operatorname{var}(A_{n1}(x))]^{1/2} \,\right] \to 0.$

A sufficient condition (Liapounov's condition) for (2.3) is that for some $\delta > 0$,

$\frac{E|A_{n1}(x) - E[A_{n1}(x)]|^{2+\delta}}{(nM)^{\delta/2}\; \sigma^{2+\delta}[A_{n1}(x)]} \to 0 \quad \text{as } n \to \infty,$

where $\sigma^2[A_{n1}(x)] = \operatorname{var}[A_{n1}(x)]$. We let $\delta = 1$. Then using the inequality $(a+b)^3 \leq 4(a^3 + b^3)$, we obtain
$E|A_{n1}(x) - E[A_{n1}(x)]|^3 \leq 8\,E|A_{n1}(x)|^3.$

By Lemmas 2.5 and 2.6, $E|A_{n1}(x)|^3 = O\left(\frac{1}{n^2 h_{nM}^2}\right)$ while $\sigma^2[A_{n1}(x)]$ is of exact order $\frac{1}{nh_{nM}}$, so that

$\frac{E|A_{n1}(x) - E[A_{n1}(x)]|^3}{(nM)^{1/2}\,\sigma^3[A_{n1}(x)]} \leq \frac{8\,E|A_{n1}(x)|^3}{(nM)^{1/2}\,\sigma^3[A_{n1}(x)]} = O\left(\left(\frac{1}{M n^2 h_{nM}}\right)^{1/2}\right) \to 0,$

which completes the result. □
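Theorem 2.7 can be illustrated by simulation. The sketch below (ours, with the same illustrative choices as before: standard normal $f$, Gaussian kernel, $h_m = m^{-1/5}$) simulates $A_n(x)$ over independent replications, standardizes empirically, and checks an upper tail frequency against the standard normal value of roughly 0.05.

    import numpy as np

    rng = np.random.default_rng(1)
    M, n, x, reps = 10, 100, 0.0, 5000
    h = lambda m: m ** -0.2

    def K(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)   # Gaussian kernel

    A = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((n - 1) * M)   # A_n(x) depends only on X_1,...,X_{(n-1)M}
        A[r] = (K((x - X) / h(n * M)).sum() / (n * M * h(n * M))
                - K((x - X) / h((n - 1) * M)).sum() / ((n - 1) * M * h((n - 1) * M)))

    Z = (A - A.mean()) / A.std()               # empirical standardization of A_n(x)
    print(np.mean(Z > 1.645))                  # should be near 1 - Phi(1.645) = 0.05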
3. The Stopping Variable $N(\varepsilon, M)$: In this section, we shall generally suppress in the notation the explicit dependence of $N(\varepsilon, M)$ on $\varepsilon$ and $M$. Hence we write $N(\varepsilon, M)$ simply as $N$. Noting that $[N \leq n] = [\,|V_k(x)| < \varepsilon \text{ for some } k \leq n\,]$, it is clear that the probabilistic structure of $N$ is closely related to that of $V_n(x)$. Inasmuch as the structure of $B_n(x)$ depends on $f(x)$, we will be, in general, unable to give the exact asymptotic structure of $N$. In this section, we demonstrate the finiteness of the moments of $N$, the closure of $N$, and the divergence of $N$ as $\varepsilon \to 0$.
Lemma 3.1: For arbitrary $t > 0$ and given $\varepsilon > 0$,

(3.1)  $P[V_n(x) > \varepsilon] \leq e^{-nMh_{nM}\varepsilon t}\; E\,e^{S_n(x)\,t}$

and

(3.2)  $P[V_n(x) < -\varepsilon] \leq e^{-nMh_{nM}\varepsilon t}\; E\,e^{-S_n(x)\,t},$

where

$S_n(x) = \sum_{j=1}^{nM} K\left(\frac{x - X_j}{h_{nM}}\right) - \frac{n}{n-1} \cdot \frac{h_{nM}}{h_{(n-1)M}} \sum_{j=1}^{(n-1)M} K\left(\frac{x - X_j}{h_{(n-1)M}}\right),$

so that $S_n(x) = nMh_{nM}\,V_n(x)$.

Proof: Define $T(x)$ to be the indicator of $[S_n(x) > nMh_{nM}\varepsilon]$. Then for arbitrary $t > 0$, $T(x) \leq e^{t(S_n(x) - nMh_{nM}\varepsilon)}$, so that $P[S_n(x) > nMh_{nM}\varepsilon] = E\,T(x) \leq e^{-nMh_{nM}\varepsilon t}\,E\,e^{S_n(x)t}$. Noting that $[V_n(x) > \varepsilon] = [S_n(x) > nMh_{nM}\varepsilon]$ completes the proof of (3.1). Equation (3.2) follows by similar arguments. □
Now,

$P[N > nM] = P\left[\bigcap_{k=2}^{n} \{|V_k(x)| > \varepsilon\}\right] \leq P[\,|V_n(x)| > \varepsilon\,].$

By Lemma 3.1, for arbitrary $t > 0$,

$P[N > nM] \leq e^{-nMh_{nM}\varepsilon t}\left[ E\,e^{S_n(x)t} + E\,e^{-S_n(x)t} \right].$

We next examine $E\,e^{S_n(x)t}$ and $E\,e^{-S_n(x)t}$.
Let us decompose

$S_n(x) = \sum_{k=1}^{(n-1)M} A^*_{nk}(x) + \sum_{k=(n-1)M+1}^{nM} B^*_{nk}(x),$

where

$A^*_{nk}(x) = K\left(\frac{x - X_k}{h_{nM}}\right) - \frac{n}{n-1} \cdot \frac{h_{nM}}{h_{(n-1)M}}\, K\left(\frac{x - X_k}{h_{(n-1)M}}\right), \qquad k = 1, 2, \ldots, (n-1)M,$

and

$B^*_{nk}(x) = K\left(\frac{x - X_k}{h_{nM}}\right), \qquad k = (n-1)M + 1, \ldots, nM.$

Notice that $A^*_{n1}(x), \ldots, A^*_{n,(n-1)M}(x)$ are i.i.d., that $B^*_{n,(n-1)M+1}(x), \ldots, B^*_{n,nM}(x)$ are i.i.d. and that all of these random variables are mutually independent. Thus

$E\,e^{S_n(x)} = \left[E\,e^{A^*_{n1}(x)}\right]^{(n-1)M} \left[E\,e^{B^*_{n,nM}(x)}\right]^{M}.$

But if $L = \sup_x K(x)$,

$E\left[e^{B^*_{n,nM}(x)}\right] = \int_{-\infty}^{\infty} e^{K\left(\frac{x-u}{h_{nM}}\right)}\, f(u)\,du \leq e^{L}.$
Lemma 3.2: Let $a_n = E\left[\,|A^*_{n1}(x)|\, e^{|A^*_{n1}(x)|}\,\right]$. If $a_n$ satisfies

$\lim_{n\to\infty} \frac{(n-1)\,a_n}{\log n} \leq \gamma \quad \text{for some } \gamma \geq 0,$

then

$E\left[e^{S_n(x)}\right] \leq n^{M\gamma}\, e^{ML}.$

Proof: For $n$ sufficiently large, since $a_n \geq 0$, $a_n \geq \log(1 + a_n)$. Combining this with the inequality on $a_n$, we have for $n$ sufficiently large,

$(n-1)M \log(1 + a_n) \leq M\gamma \log n.$

Exponentiating both sides,

$(1 + a_n)^{(n-1)M} \leq n^{M\gamma}.$

Now

$E\,e^{S_n(x)} \leq \left[E\,e^{A^*_{n1}(x)}\right]^{(n-1)M} e^{ML}.$

This, together with the observation that

$e^{A^*_{n1}(x)} \leq e^{|A^*_{n1}(x)|} \leq 1 + |A^*_{n1}(x)|\,e^{|A^*_{n1}(x)|},$

so that $E\,e^{A^*_{n1}(x)} \leq 1 + a_n$, completes the proof. □
Under the hypotheses of Lemmas 3.1 and 3.2 we have, taking $t = 1$ in Lemma 3.1 (the argument of Lemma 3.2 applies equally to $E\,e^{-S_n(x)}$),

(3.3)  $P[N > nM] \leq 2\,e^{ML}\, n^{M\gamma}\, e^{-nMh_{nM}\varepsilon}.$

Notice that in general $nh_{nM} \to \infty$, so that $e^{-nMh_{nM}\varepsilon} \to 0$. Since $n^{M\gamma} \to \infty$, however, we will usually want to choose $h_n$ in such a way that for any $\delta > 0$ and for $n$ sufficiently large,

(3.4)  $e^{-nMh_{nM}\varepsilon} \leq n^{-\delta}.$

We note here that the usual choice $h_n = \beta n^{-\alpha}$, $0 < \alpha < 1$, is sufficient to guarantee (3.4), since then $nMh_{nM}\varepsilon = \varepsilon\beta (nM)^{1-\alpha}$ grows faster than any multiple of $\log n$.
Theorem 3.3: Under the hypotheses of Lemmas 3.1 and 3.2 and assuming (3.4), we have $EN^r < \infty$ for every $r \geq 0$.

Proof:

$EN^r = \sum_{n=0}^{\infty} (nM)^r\, P[N = nM] \leq M^r \sum_{n=1}^{\infty} (n+1)^r\, P[N \geq nM].$

Using (3.3),

$EN^r \leq 2M^r e^{ML} \sum_{n=1}^{\infty} (n+1)^{r + M\gamma}\, e^{-(n+1)Mh_{(n+1)M}\varepsilon}.$

Reindexing,

$EN^r \leq 2M^r e^{ML} \sum_{n=2}^{\infty} n^{r + M\gamma}\, e^{-nMh_{nM}\varepsilon}.$

Now for $\delta = r + M\gamma + 2$, there is $n_0$ such that for $n \geq n_0$, (3.4) holds. Hence

$EN^r \leq 2M^r e^{ML} \left[ \sum_{n=2}^{n_0 - 1} n^{r + M\gamma}\, e^{-nMh_{nM}\varepsilon} + \sum_{n=n_0}^{\infty} n^{-2} \right] < \infty,$

which completes the proof. □
The definition of $a_n$ in Lemma 3.2 involves the density, $f$, so is unsatisfactory from a statistician's point of view. Let us suppose

$\limsup_{n\to\infty} \frac{n}{\log n}\, \sup_x |A^*_{n1}(x)| = c < \infty.$

It is clear that $\lim_{n\to\infty} \sup_x |A^*_{n1}(x)| = 0$; hence for $n$ sufficiently large, $e^{|A^*_{n1}(x)|} < 2$ for every $x$. Clearly then the condition $\lim_{n\to\infty} \frac{(n-1)a_n}{\log n} \leq \gamma$ holds (with $\gamma = 2c$). The normal and double exponential kernels satisfy this latter sufficient condition. The uniform kernel does not, but it does satisfy the condition on $a_n$ for every density, $f$.
Theorem 3.4: Under the hypotheses of Lemmas 3.1 and 3.2 and assuming (3.4), $P[N < \infty] = 1$; hence $N$ is a closed stopping variable.

Proof:

$P[N = \infty] = \lim_{n\to\infty} P[N \geq nM] \leq \lim_{n\to\infty} 2\,e^{ML}\, n^{M\gamma}\, e^{-nMh_{nM}\varepsilon} = 0. \qquad \Box$

Now let us consider the behavior of $N$ as a function of $\varepsilon$. Let $\Omega_\varepsilon = [\,|V_j(x)| \leq \varepsilon \text{ for some } j \leq n\,]$ and let $\Omega_0 = [\,V_j(x) = 0 \text{ for some } j \leq n\,]$. Since $\Omega_\varepsilon \downarrow \Omega_0$ as $\varepsilon \downarrow 0$, it follows that $P[N \leq nM] = P(\Omega_\varepsilon)$ converges to $P(\Omega_0)$. If $P(\Omega_0) = 0$, then it follows that $N \to \infty$ in probability as $\varepsilon \to 0$. In general, $P(\Omega_0)$ may not be zero.
Consider the uniform kernel

$K(u) = \begin{cases} \frac{1}{2}, & |u| < 1 \\ 0, & |u| \geq 1. \end{cases}$

Then $V_n(x) = 0$ if $K\left(\frac{x - X_k}{h_{nM}}\right) = K\left(\frac{x - X_k}{h_{(n-1)M}}\right) = 0$ for $k = 1, 2, \ldots, nM$. Now

$P\left[ K\left(\frac{x - X_k}{h_{nM}}\right) = 0 \right] = P\left[ \left|\frac{x - X_k}{h_{nM}}\right| \geq 1 \right] = 1 - \int_{x - h_{nM}}^{x + h_{nM}} f(u)\,du.$

This last quantity will in general be strictly positive, so that (since $h_{(n-1)M} \geq h_{nM}$ for the usual decreasing bandwidths)

$P(V_n(x) = 0) \geq \left[ 1 - \int_{x - h_{(n-1)M}}^{x + h_{(n-1)M}} f(u)\,du \right]^{nM} > 0.$
The significant point of this example is that the uniform kernel may miss all the observations, and hence both $\hat{f}_{nM}(x)$ and $\hat{f}_{(n-1)M}(x)$ could be 0. Clearly, were we to consider a normal or double exponential kernel, this could not happen.
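A small simulation (ours) makes the point concrete: with the uniform kernel, a point $x$ in the tail of a standard normal density, and the illustrative bandwidth $h_m = m^{-1/5}$, the event $V_n(x) = 0$ occurs with substantial frequency.

    import numpy as np

    rng = np.random.default_rng(2)
    M, n, x, reps = 5, 2, 3.0, 2000
    h = lambda m: m ** -0.2

    hits = 0
    for _ in range(reps):
        X = rng.standard_normal(n * M)
        # Uniform kernel on (-1, 1): the estimate is a scaled count of the
        # observations within h of x; if both counts are zero, V_n(x) = 0 exactly.
        f_curr = np.sum(np.abs(x - X) < h(n * M)) / (2.0 * n * M * h(n * M))
        f_prev = np.sum(np.abs(x - X[:(n - 1) * M]) < h((n - 1) * M)) / (2.0 * (n - 1) * M * h((n - 1) * M))
        hits += (f_curr == 0.0 and f_prev == 0.0)
    print(hits / reps)                         # strictly positive frequency of V_n(x) = 0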
Let $\mathcal{K}$ be the class of kernels satisfying (1.2) and for which $P[V_n(x) = 0] = 0$ for any $n$.
Lemma 3.5: If $K(u)$ is a kernel satisfying (1.2) such that

(i) $K(u)$ is differentiable at all but possibly a finite number of values of $u$, and

(ii) $K'(u)$ is continuous and non-zero at all but a finite number of values of $u$,

then $V_n(x)$ has an absolutely continuous distribution, so that $K \in \mathcal{K}$.

Proof: The random variables

$Y_k = \frac{1}{nMh_{nM}}\, K\left(\frac{x - X_k}{h_{nM}}\right), \qquad k = 1, 2, \ldots, nM,$

have a common absolutely continuous distribution (Parzen, 1960, p. 313). But then $\hat{f}_{nM}(x) = Y_1 + \cdots + Y_{nM}$ has a density given by the $nM$-fold convolution of the density of $Y_1$. It follows that $V_n(x) = \hat{f}_{nM}(x) - \hat{f}_{(n-1)M}(x)$ has a density (Parzen, 1960, p. 318). □

We note the normal, double exponential and Cauchy kernels all satisfy i and ii, so that $\mathcal{K}$ is not empty.
Theorem 3.6: Let $K \in \mathcal{K}$ and let $h_n$ satisfy (1.3). Then $N \to \infty$ in probability and with probability one as $\varepsilon \to 0$.
Proof: The divergence in probability follows from previous remarks. Since $P[V_n(x) = 0$ for some $n] = 0$, we have $P[\,|V_n(x)| > 0$ for every $n\,] = 1$. Let $\omega \in [\,|V_n(x)| \to 0$ as $n \to \infty$ and $|V_n(x)| > 0$ for every $n\,]$. Let $n_0$ be any finite number and assume $N < n_0$ for all $\varepsilon$. Then for every $\varepsilon > 0$, $|V_j(x)| < \varepsilon$ for at least one $j \leq n_0$. But $|V_j(x)| > 0$ for all $j \leq n_0$; let $\varepsilon^* = \frac{1}{2} \min_{1 \leq j \leq n_0} |V_j(x)|$, so that for $\varepsilon^*$, $|V_j(x)| > \varepsilon^*$ for all $j \leq n_0$. This is a contradiction to $N < n_0$ for all $\varepsilon$. Hence $N \geq n_0$ for $\varepsilon$ sufficiently small. That is to say, $N$ is greater than any finite number for $\varepsilon$ sufficiently small and for $\omega \in [\,|V_n(x)| \to 0$ as $n \to \infty$ and $|V_n(x)| > 0$ for every $n\,]$. Thus $P[N \to \infty$ as $\varepsilon \to 0] = 1$. □
We are now able to state a convergence theorem based on Theorem 3.6.

Theorem 3.7: Suppose $N \to \infty$ as $\varepsilon \to 0$ with probability one and $\hat{f}_n(x) \to f(x)$ as $n \to \infty$ with probability one; then $\hat{f}_N(x) \to f(x)$ as $\varepsilon \to 0$ with probability one.

Proof: Let $A$ be the set of probability one for which $N \to \infty$. Let $B$ be the set of probability one for which $\hat{f}_n(x) \to f(x)$. Clearly on $A \cap B$, $\hat{f}_N(x) \to f(x)$, and $P(A \cap B) = 1$. Sufficient conditions for $N \to \infty$ appear in Theorem 3.6. Sufficient conditions for $\hat{f}_n(x) \to f(x)$ appear in Theorem 2.1. □
We close this section by noting that a slightly revised stopping rule $N'$, given by

$N' = \begin{cases} \text{first } n \text{ such that } |V_n(x)| < \varepsilon \text{ but } |V_n(x)| > 0, \\ \infty \quad \text{if no such } n \text{ exists,} \end{cases}$

obviates the need to consider the class $\mathcal{K}$. For $K \in \mathcal{K}$, $N = N'$ a.s. The results of this section hold for $N'$ as well as $N$, except that the reference to $K \in \mathcal{K}$ may be removed from Theorem 3.6. Modifications needed in the proofs are obvious and left to the reader.
4. Concluding Remarks: The problems associated with the choice of $K$ and $h_n$ are well-known and appreciated by users and theoreticians alike. We shall not comment except to say these problems remain in the sequential case. To these we have added those associated with the choice of $\varepsilon$ and $M$. Some clue to the choice of $\varepsilon$ is given by the following easily-proved observation:
•
18
(4.1)
In general, we would like to choose
so that the mean square error meets
€
some prespecified error level, say 6.
For heuristic purposes, let us suppose
that M is sufficiently small and n sufficiently large so that
Then according to (4.1),
choice of
€
Elvn(x)
to meet error 6.
way of choosing
€
for
I ~ 26~
, suggesting that
€ <
,."
26~ is a suitable
Actually, this appears to be quite a conservative
6 < 1.
If there is no penalty for sampling items one-at-a-time rather than in blocks of $M$, it is clear that $M = 1$ is the best choice. If $M$ is too large, $|V_N(x)|$ will be substantially less than $\varepsilon$, and hence too many items will be sampled. The optimal $M$ must be determined by weighing the costs of sampling one-at-a-time against the cost of taking an unnecessarily large sample. A really satisfying theory for choice of $\varepsilon$ and $M$ is yet to be devised. In the meantime, $M = 1$ and $\varepsilon = 2\delta^{1/2}$ appear to be adequate.
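Tying the pieces together, the heuristic might be used as follows (our sketch, reusing the hypothetical sequential_estimate routine from the Section 1 illustration, with an arbitrary target mean square error $\delta = 10^{-4}$):

    import numpy as np

    delta = 1e-4                       # prespecified mean square error level
    eps = 2.0 * delta ** 0.5           # eps = 2*sqrt(delta) = 0.02, per the heuristic above
    rng = np.random.default_rng(3)
    fhat_N, N = sequential_estimate(lambda m: rng.standard_normal(m), x=0.0, eps=eps, M=1)
    print(fhat_N, N)                   # terminal estimate at x = 0 and stopping index N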
References:

1. Loève, M. (1963), Probability Theory, Van Nostrand, New York.

2. Nadaraya, E. A. (1965), "On nonparametric estimates of density functions and regression curves," Theory Prob. Appl., 10, 186-190.

3. Parzen, E. (1960), Modern Probability Theory and Its Applications, John Wiley and Sons, New York.

4. Parzen, E. (1962), "On the estimation of a probability density function and the mode," Ann. Math. Statist., 33, 1065-1076.

5. Rosenblatt, M. (1956), "Remarks on some nonparametric estimates of a density function," Ann. Math. Statist., 27, 832-837.

6. Van Ryzin, J. (1969), "On strong consistency of density estimates," Ann. Math. Statist., 40, 1765-1772.