ON ESTIMATORS OF BUNDLE-STRENGTH IN LENGTH-BIASED SAMPLING

by

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina at Chapel Hill

Institute of Statistics Mimeo Series No. 1426

January 1983
ON ESTIMATORS OF BUNDLE-STRENGTH IN LENGTH-BIASED SAMPLING*

BY PRANAB KUMAR SEN

University of North Carolina, Chapel Hill
Under a length-biased sampling scheme, an estimator of the strength of a bundle of parallel filaments and its jackknife version are considered. Various properties of these estimators are studied. A jackknife estimator of the variance function is also considered and its convergence property is studied.
AMS Subject Classification: 62E20, 60K99.
Key Words and Phrases: Almost sure convergence, asymptotic normality, bias, bundle-strength of filaments, jackknife estimator, law of iterated logarithm, length-biased sampling, moment convergence.

* Work supported by the Office of Naval Research, Contract No. NR 681-001.
1. Introduction. Daniels (1945), in the context of the distribution theory of the strength of a bundle of parallel filaments, considered the statistic

(1.1)  $D_n = \max\{\, (n-i+1)X_{n:i} : 1 \le i \le n \,\}$,

where $X_{n:1} \le \cdots \le X_{n:n}$ stand for the order statistics corresponding to $n$ independent and identically distributed (i.i.d.) nonnegative random variables (r.v.) $X_1,\ldots,X_n$ from a continuous distribution function (d.f.) $F$ defined on $R^+ = (0,\infty)$. It may be noted [viz., Suh et al. (1970) and Sen (1973)] that $Z_n = n^{-1}D_n$ actually estimates the parameter

(1.2)  $\theta = \theta(F) = \sup\{\, x[1 - F(x)] : x \in R^+ \,\}$.
In the context of some sampling problems in technology, Cox (1969) has stressed the role of length-biased (l.b.) sampling, with some emphasis on estimation problems for the bundle-strength of fibres. A d.f. $G$ ($= G_F$) is called a length-biased distribution corresponding to a d.f. $F$ if

(1.3)  $G_F(x) = \mu^{-1}\int_0^x y\,dF(y), \quad x \in R^+$,

where

(1.4)  $\mu = \int_0^\infty x\,dF(x)$ is assumed to be finite and positive.

For some other applications of l.b. sampling, we may also refer to Patil and Rao (1977, 1978) and Coleman (1979), among others.
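As a concrete illustration of (1.3) (this simulation sketch is ours and not part of the original; the function name and the choice of $F$ are assumptions): if $F$ is the unit-mean exponential d.f., then the l.b. density is $\mu^{-1}xf(x) = xe^{-x}$, i.e., $G_F$ is the Gamma(2,1) d.f., so a length-biased sample can be drawn directly.

```python
import numpy as np

rng = np.random.default_rng(1983)

def length_biased_sample(n, rng):
    """Draw n observations from G_F of (1.3) when F is the unit-mean
    exponential d.f.; the l.b. density is x*exp(-x), i.e. Gamma(2, 1)."""
    return rng.gamma(shape=2.0, scale=1.0, size=n)

Y = length_biased_sample(200, rng)   # plays the role of Y_1, ..., Y_n below
```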
We conceive of a set $\{Y_1,\ldots,Y_n\}$ of i.i.d. nonnegative r.v.'s from the l.b. distribution $G$, and our problem is to provide a suitable estimator of $\theta(F)$ in (1.2) (based on the $Y_i$) and to study its various properties. Along with the preliminary notions, the proposed estimator is introduced in Section 2. Asymptotic properties of the estimators are studied in Section 3. In this context, weak convergence results on the empirical l.b. distribution play a vital role, and some of these are also presented. Some strong convergence results are presented side by side. A jackknife variance estimator is presented in the last section.
2. The estimator. Note that by (1.3), $dG(x) = \mu^{-1}x\,dF(x)$, $x \in R^+$, so that $1 - F(x) = \mu\int_x^\infty y^{-1}\,dG(y) = \mu K(x)$, say. As such, by (1.2) and (1.3),

(2.1)  $\theta = \sup\{\, \mu xK(x) : x \in R^+ \,\} = \bigl(\sup\{\, xK(x) : x \in R^+ \,\}\bigr)\big/K(0)$.

Note that $xK(x) \le 1$ for all $x \in R^+$ and $K(0) = \mu^{-1}$, so that $\theta \le \mu$. Let now $G_n(y) = n^{-1}\sum_{i=1}^n I(Y_i \le y)$, $y \in R^+$, be the empirical d.f. (based on the $Y_i$), and let

(2.2)  $K_n(x) = \int_x^\infty y^{-1}\,dG_n(y) = n^{-1}\sum_{i=1}^n Y_i^{-1}I(Y_i > x), \quad x \in R^+$.

Then, by (2.1) and (2.2), we consider the following estimator of $\theta$:

(2.3)  $\hat\theta_n = \bigl(\sup\{\, xK_n(x) : x \in R^+ \,\}\bigr)\big/K_n(0)$.

If $Y_{n:1} \le \cdots \le Y_{n:n}$ be the ordered r.v.'s corresponding to $Y_1,\ldots,Y_n$, then we may write (2.3) explicitly as

(2.4)  $\hat\theta_n = \max_{1\le i\le n}\Bigl\{\, Y_{n:i}\sum_{j=i}^n Y_{n:j}^{-1} \,\Bigr\} \Big/ \sum_{j=1}^n Y_{n:j}^{-1}$.
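A computational aside, not in the original: after sorting, (2.4) can be evaluated in a single pass over the order statistics. The following sketch (hypothetical names; it may be fed the simulated sample from Section 1) is one way to do it.

```python
import numpy as np

def bundle_strength_estimate(Y):
    """Estimator (2.4): max over i of Y_(i) * sum_{j>=i} 1/Y_(j),
    divided by sum_j 1/Y_(j), using the order statistics Y_(1)<=...<=Y_(n)."""
    Y = np.sort(np.asarray(Y, dtype=float))
    inv = 1.0 / Y
    tail_sums = np.cumsum(inv[::-1])[::-1]   # sum_{j >= i} 1/Y_(j)
    return np.max(Y * tail_sums) / inv.sum()

# With F the unit-mean exponential d.f., theta = sup_x x e^{-x} = 1/e ~ 0.368,
# and bundle_strength_estimate(Y) should be close for large n.
```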
It may be noted that $\{\, K_n(x), x \in R^+;\ n \ge 1 \,\}$ is a reverse martingale (process), and hence $\{\, \sup\{xK_n(x) : x \in R^+\};\ n \ge 1 \,\}$ is a reverse sub-martingale, where we note that $\sup\{\, xK_n(x) : x \in R^+ \,\} \le 1$, with probability one, for all $n \ge 1$. Also, $\{K_n(0);\ n \ge 1\}$ is a reverse martingale with $EK_n(0) = K(0) = \mu^{-1} < \infty$, by (1.4). Hence, by the reverse (sub-)martingale convergence theorem, as $n \to \infty$,

(2.5)  $\sup\{\, xK_n(x) : x \in R^+ \,\} \to \sup\{\, xK(x) : x \in R^+ \,\}$, almost surely (a.s.),

and

(2.6)  $K_n(0) \to K(0)$ a.s.

From (2.1), (2.5) and (2.6), we conclude that, whenever $E_G[Y^{-1}]$ ($= \mu^{-1}$) is finite,

(2.7)  $\hat\theta_n$ converges a.s. to $\theta$ as $n \to \infty$.
Note that for the strong consistency of the estimator we do not need the other regularity conditions that will be introduced subsequently for more refined results. Towards this, we define the Kolmogorov-norm

(2.8)  $U_n = n^{1/2}\sup\{\, |G_n(x) - G(x)| : x \in R^+ \,\}$,

and note that [viz., Dvoretzky, Kiefer and Wolfowitz (1956)], for every $n \ge 1$,

(2.9)  $P\{\, U_n \ge u \,\} \le 2\exp\{-2u^2\}, \quad \forall\, u \in R^+$,
so that $U_n$ is uniformly (in $n$) exponentially integrable. Further, by partial integration, we obtain that, for every $n \ge 1$ and $x \in R^+$,

$n^{1/2}x\,|K_n(x) - K(x)| = n^{1/2}x\,\Bigl| \int_x^\infty y^{-1}\,d(G_n(y) - G(y)) \Bigr| \le n^{1/2}|G_n(x) - G(x)| + n^{1/2}x\int_x^\infty |G_n(y) - G(y)|\,y^{-2}\,dy \le U_n + U_nx\int_x^\infty y^{-2}\,dy = 2U_n$,

so that

(2.10)  $\sup\{\, n^{1/2}x\,|K_n(x) - K(x)| : x \in R^+ \,\} \le 2U_n$, with probability 1, for every $n$.
Note that the law of iterated logarithm holds for $\{U_n\}$. If we assume that

(2.11)  $\nu_G = \int_0^\infty y^{-2}\,dG(y) < \infty$,

then, noting that $K_n(0)$ is an average of i.i.d. r.v.'s with finite second moment, we conclude that, under (2.11), as $n \to \infty$,

(2.12)  (i) $n^{1/2}|K_n(0) - K(0)| = O_p(1)$, and (ii) $\{n/(\log\log n)\}^{1/2}|K_n(0) - K(0)| = O(1)$ a.s.,

so that

(2.13)  $n^{1/2}\{K_n(0) - K(0)\} \xrightarrow{\mathcal{D}} \mathcal{N}(0,\ \nu_G - \mu^{-2})$.
By (2.1) and (2.3), we then have

(2.14)  $|\hat\theta_n - \theta| \le \bigl\{\, \sup\{\, x|K_n(x) - K(x)| : x \in R^+ \,\} + |K_n(0) - K(0)|\,\theta \,\bigr\}\big/K_n(0)$,

so that, by (2.10), (2.12), (2.13) and (2.14), we conclude that, under (2.11),

(2.15)  $n^{1/2}|\hat\theta_n - \theta| = O_p(1)$ and

(2.16)  $\{n/(\log\log n)\}^{1/2}|\hat\theta_n - \theta| = O(1)$ a.s.,

as $n \to \infty$.
For the study of other properties of the estimator, we need some extra regularity conditions, which are introduced below. We assume that $\{xK(x),\ x \in R^+\}$ has a unique (global) maximum ($= \theta/\mu$) at a point $x_0$: $0 < x_0 < \infty$, so that, for every $\delta$ ($> 0$) sufficiently small, there exists an $\eta > 0$ such that $\sup\{\, xK(x) : |x - x_0| > \eta \,\} < \theta/\mu - \delta$. Also, we assume that, in some neighbourhood of $x_0$, $h(x) = xK(x)$ ($= \mu^{-1}x[1 - F(x)]$) has a continuous first order derivative $h'(x)$ ($= \mu^{-1}\{1 - F(x) - xf(x)\}$). Note that $f$ is the density function of the d.f. $F$ and, by definition, $h'(x_0) = 0$ and

(2.17)  $\xi(x; x_0) = -\operatorname{sign}(x - x_0)\,h'(x) \ge 0$ for all $x$: $|x - x_0| \le \eta$.
Finally, for the moment convergence properties of the estimator, we may also need to assume that, for some $r$ ($\ge 2$),

(2.18)  $\nu_{r+1} = \int_0^\infty x^{r+1}\,dF(x) < \infty$, so that $E_GY^r = \mu^{-1}\nu_{r+1} < \infty$.
We conclude this section with the remark that, by definition,

(2.19)  $\hat\theta_n \ge x_0K_n(x_0)/K_n(0) = \theta_n^*$, say,

and we shall show that, under the regularity conditions mentioned above, for the asymptotic theory $\hat\theta_n$ may be replaced by $\theta_n^*$ [for which standard theory holds under general conditions].
3. Asymptotic properties of $\hat\theta_n$. Note that by (2.10), (2.12) and (2.19),

(3.1)  $\hat\theta_n \ge \theta_n^* \ge \theta - Cn^{-1/2}$, in probability,

where $C$ ($0 < C < \infty$) is a suitable constant. If we let

(3.2)  $I_n^* = \bigl\{\, x : xK(x)/K(0) \le \theta - 2Cn^{-1/2} \,\bigr\}$,

then, proceeding as in (2.14), we conclude that

(3.3)  $\bigl|\, \sup\{\, xK_n(x)/K_n(0) : x \in I_n^* \,\} - \sup\{\, xK(x)/K(0) : x \in I_n^* \,\} \,\bigr| = o_p(n^{-1/2})$,

and hence, by (3.1), (3.2) and (3.3), we obtain that, as $n \to \infty$,

(3.4)  $\sup\{\, xK_n(x)/K_n(0) : x \in I_n^* \,\} < \theta_n^*$, in probability.
For $n$ adequately large, $I_n = R^+ \setminus I_n^* = \{\, x : xK(x)/K(0) > \theta - 2Cn^{-1/2} \,\}$ reduces to a shrinking neighbourhood of $x_0$, so that, by the assumptions made after (2.16), for $x \in I_n$, we have

(3.5)  $xK_n(x) - x_0K_n(x_0) = x[K_n(x) - K(x)] - x_0[K_n(x_0) - K(x_0)] + xK(x) - x_0K(x_0) = x[K_n(x) - K(x) - K_n(x_0) + K(x_0)] + (x - x_0)[K_n(x_0) - K(x_0)] - \xi(x'; x_0)\,|x - x_0|$,

where, by (2.17), $\xi(x'; x_0)$ is nonnegative, and $x' = \alpha x + (1-\alpha)x_0$, $0 < \alpha < 1$. Further, as in (2.10), we have
(3.6)  $n^{1/2}x\,\bigl| K_n(x) - K(x) - K_n(x_0) + K(x_0) \bigr| \le 2U_n|x - x_0|/x_0, \quad \forall\, x \in I_n$,

so that, by (3.5), (3.6) and (2.9), we conclude that, for every $x \in I_n$,

(3.7)  $xK_n(x)/K_n(0) \le x_0K_n(x_0)/K_n(0) - \xi(x'; x_0)\,|x - x_0| + o_p(n^{-1/2})$.

Therefore, by (3.7), $\sup\{\, xK_n(x)/K_n(0) : x \in I_n \,\} = x_0K_n(x_0)/K_n(0) + o_p(n^{-1/2})$,
and hence, using (3.4), we conclude that

(3.8)  $n^{1/2}|\hat\theta_n - \theta_n^*| \to 0$, in probability, as $n \to \infty$.

[In fact, in (3.1) and (3.2), we may replace $Cn^{-1/2}$ by $C(n/\{\log\log n\})^{-1/2}$ and conclude that (3.4) holds a.s. with $C$ replaced by $C(\log\log n)^{1/2}$. As such, (3.8) can be strengthened to $o((\log\log n)^{1/2})$ a.s., as $n \to \infty$. If, further, we assume that $h(x)$ has a second order derivative in some neighbourhood of $x_0$ (where $h''(x_0) < 0$, by definition), then the diameter of $I_n$ is $O(n^{-1/4}(\log\log n)^{1/4})$ a.s., so that, by the same steps, it follows that

(3.9)  $n^{1/2}|\hat\theta_n - \theta_n^*| = O\bigl(n^{-1/4}(\log\log n)^{3/4}\bigr)$ a.s., as $n \to \infty$.

This stronger result (under more stringent conditions) is, however, not needed in the sequel.]
Let us denote by

(3.10)  $W_n(x) = n^{1/2}\{\, K_n(x) - K(x) \,\}, \quad x \in R^+$.

Then, by definition, we have

(3.11)  $n^{1/2}(\theta_n^* - \theta) = n^{1/2}x_0\{\, K_n(x_0)/K_n(0) - K(x_0)/K(0) \,\} = n^{1/2}x_0\bigl\{\, \bigl(K(x_0) + n^{-1/2}W_n(x_0)\bigr)\big/\bigl(K(0) + n^{-1/2}W_n(0)\bigr) - K(x_0)/K(0) \,\bigr\} = \theta\bigl\{\, W_n(x_0)/K(x_0) - W_n(0)/K(0) + O_p(n^{-1/2}) \,\bigr\}$.

Now, parallel to (2.11), we let

(3.12)  $V_G(x) = \int_x^\infty y^{-2}\,dG(y)$, $x \in R^+$, so that $\nu_G = V_G(0)$.

The joint asymptotic normality of $(W_n(0), W_n(x_0))$ follows by a direct appeal to the central limit theorem [using the Cramér-Wold characterization], and hence, using (3.11) and (3.12), we obtain by some routine steps that, as $n \to \infty$,

(3.13)  $n^{1/2}(\theta_n^* - \theta) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \sigma^{*2})$,

where
(3.14)  $\sigma^{*2} = \theta^2\bigl\{\, V_G(x_0)/K^2(x_0) + V_G(0)/K^2(0) - 2V_G(x_0)/[K(0)K(x_0)] \,\bigr\}$.
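[The routine steps may be sketched as follows; this derivation is our reconstruction, consistent with (3.11), (3.12) and (4.25) below. Since $K_n(x)$ is an average of the i.i.d. summands $Y_i^{-1}I(Y_i > x)$, we have $\operatorname{Var} W_n(x) = V_G(x) - K^2(x)$ and $\operatorname{Cov}(W_n(0), W_n(x_0)) = V_G(x_0) - K(0)K(x_0)$. Hence, from the leading term in (3.11),

$\sigma^{*2} = \theta^2\Bigl\{\, \dfrac{V_G(x_0) - K^2(x_0)}{K^2(x_0)} + \dfrac{V_G(0) - K^2(0)}{K^2(0)} - 2\,\dfrac{V_G(x_0) - K(0)K(x_0)}{K(0)K(x_0)} \,\Bigr\}$,

and the constant terms ($-1 - 1 + 2$) cancel, which yields (3.14).]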
Combining (3.8), (3.13) and (3.14), we arrive at the following:

Theorem 1. Under (1.4), (2.11), (2.17) and the uniqueness of $x_0$, as $n \to \infty$,

(3.15)  $n^{1/2}(\hat\theta_n - \theta) \xrightarrow{\mathcal{D}} \mathcal{N}(0, \sigma^{*2})$.
It may be noted that, for both the sequences $\{K_n(x_0)\}$ and $\{K_n(0)\}$, the law of iterated logarithm holds [under (2.11)], so that in (3.11) we have actually

(3.16)  $n^{1/2}(\theta_n^* - \theta) = \theta\{\, W_n(x_0)/K(x_0) - W_n(0)/K(0) \,\} + O(n^{-1/2}\log\log n)$ a.s., as $n \to \infty$.

The first term on the right hand side of (3.16) is expressible as

(3.17)  $n^{-1/2}\sum_{i=1}^n Z_i$, where $Z_i = \theta\bigl\{\, I(Y_i > x_0)/[Y_iK(x_0)] - \mu/Y_i \,\bigr\}$, $i \ge 1$.

The $Z_i$ are i.i.d. r.v.'s with mean 0 [note that $E_G\{Y^{-1}I(Y > x_0)\} = K(x_0)$ and $\mu E_G\{Y^{-1}\} = 1$] and variance $\sigma^{*2}$, given by (3.14). Hence, the law of iterated logarithm applies to $\{n^{1/2}(\theta_n^* - \theta)\}$ as well. On the other hand, by the remarks made after (3.8), under (2.11) and (2.17), $n^{1/2}|\hat\theta_n - \theta_n^*| = o((\log\log n)^{1/2})$ a.s., as $n \to \infty$, so that the law of iterated logarithm holds for $\{n^{1/2}(\hat\theta_n - \theta)\}$ as well. We may also note that, in view of the backward invariance principles for $\{\, n^{1/2}[K_m(x) - K(x)],\ x \in R^+;\ m \ge n \,\}$ [along the lines of Theorem 2.6.3 of Sen (1981)], (3.8) may be strengthened to

$\sup\{\, n^{1/2}|\hat\theta_m - \theta_m^*| : m \ge n \,\} \to 0$, in probability, as $n \to \infty$,

while, for $\theta_n^*$, being the ratio of two means, a similar backward invariance principle may be worked out as in Theorem 3.3.4 of Sen (1981). Thus, for the tail sequence $\{\, n^{1/2}(\hat\theta_m - \theta) : m \ge n \,\}$, a backward invariance principle (relating to the weak convergence to a Wiener process) holds under the same regularity conditions as in Theorem 1. This result is particularly useful in studying the asymptotic normality of $\hat\theta_n$ for random sample sizes.
In the rest of this section, we study the bias and mean square results for the estimator $\hat\theta_n$. Note that by (2.10) and (2.14),

(3.18)  $n^{1/2}|\hat\theta_n - \theta| \le 2U_n/K_n(0) + \theta|W_n(0)|/K_n(0)$,

where, by the elementary inequality between the harmonic and arithmetic means,

(3.19)  $\{K_n(0)\}^{-1} = \bigl(\, n^{-1}\sum_{i=1}^n Y_i^{-1} \,\bigr)^{-1} \le n^{-1}\sum_{i=1}^n Y_i = \bar{Y}_n$,

so that we have

(3.20)  $n^{1/2}|\hat\theta_n - \theta| \le \bar{Y}_n\{\, 2U_n + \theta|W_n(0)| \,\} \le \tfrac{1}{2}\bar{Y}_n^2 + \tfrac{1}{2}\{2U_n + \theta|W_n(0)|\}^2 \le \tfrac{1}{2}\bar{Y}_n^2 + 4U_n^2 + \theta^2W_n^2(0)$.

Now, under (2.18) for $r = 2$, $\bar{Y}_n^2$ is uniformly (in $n$) integrable, while, under (1.4) and (2.11), $U_n^2$ and $W_n^2(0)$ are uniformly integrable. Further, by (3.8), (3.16) and (3.18), we have
(3.21)  $n^{1/2}(\hat\theta_n - \theta) = \bar{Z}_n^* + o_p(1)$, where $\bar{Z}_n^* = n^{-1/2}\sum_{i=1}^n Z_i$,

the $Z_i$ have 0 mean and finite variance $\sigma^{*2}$, and $\bar{Z}_n^{*2}$ is uniformly integrable. Hence, by (3.20), (3.21) and a version of the Dominated Convergence Theorem, we conclude that
(3.22)  $E\{\, n^{1/2}(\hat\theta_n - \theta) \,\} \to 0$, as $n \to \infty$,

so that

(3.23)  $E\hat\theta_n = \theta + o(n^{-1/2})$,

which shows that the relative bias converges to 0 as $n \to \infty$.
Further, by (3.18) and (3.19),

$n(\hat\theta_n - \theta)^2 \le 8U_n^2\bar{Y}_n^2 + 2\theta^2\bar{Y}_n^2W_n^2(0)$, for every $n \ge 1$,

where, by (2.9), for every finite $k$ ($> 0$), $U_n^k$ is uniformly (in $n$) integrable. Hence, using the Hölder inequality, it can be shown that $U_n^2\bar{Y}_n^2$ is uniformly integrable whenever (2.18) holds for some $r > 2$. Further, note that

(3.24)  $\bar{Y}_n^2W_n^2(0) = n^{-3}\bigl\{\, \sum_{i=1}^n (1 - \mu^{-1}Y_i) + \sum_{1\le i\ne j\le n} Y_j(Y_i^{-1} - \mu^{-1}) \,\bigr\}^2$,

and this is uniformly integrable whenever (2.11) and (2.18) hold. Hence, again using the Dominated Convergence Theorem along with (3.21) and (3.23), we have

(3.25)  $E\{\, n(\hat\theta_n - \theta)^2 \,\} \to \sigma^{*2}$, as $n \to \infty$.
Thus, in (3.15), one may also use the natural parameters and conclude that

(3.26)  $(\hat\theta_n - \theta)\big/\{\operatorname{Var}(\hat\theta_n)\}^{1/2} \xrightarrow{\mathcal{D}} \mathcal{N}(0, 1)$, as $n \to \infty$.

It may be noted that (3.18), (3.19) and some simple inequalities may be used to establish the convergence of higher order moments of $n^{1/2}(\hat\theta_n - \theta)$. However, this will require higher order moment conditions on $Y^{-1}$ as well as $Y$.
4. Jackknife estimators. If we write $\hat\lambda_n = \hat\theta_nK_n(0)$ and $\lambda = \theta/\mu$, then we have

(4.1)  $E\hat\theta_n = E\{\, \hat\lambda_n/K_n(0) \,\} = E\hat\lambda_n \cdot E\{K_n(0)\}^{-1} + \operatorname{Cov}\bigl( \hat\lambda_n,\ \{K_n(0)\}^{-1} \bigr)$.

We have noticed earlier that $\{\hat\lambda_n\}$ is a reverse sub-martingale sequence converging a.s. to $\lambda$ as $n \to \infty$, and hence it can be shown easily that $E\hat\lambda_n \ge \lambda$, for every finite $n$. Similarly, $K_n(0)$ unbiasedly estimates $\mu^{-1}$ and is nonnegative. Hence, by the Jensen inequality, $E\{K_n(0)\}^{-1} \ge \mu$, for every finite $n$, though $K_n(0)$ converges a.s. to $\mu^{-1}$ as $n \to \infty$. The last term on the right hand side of (4.1) represents a covariance term and is of lower order of magnitude. Hence, from (4.1), we conclude that, up to the first order, $\hat\theta_n$ may not be unbiased and has a nonnegative bias, for every finite $n$.
For this reason, it may be worthwhile to consider the jackknife estimator, to reduce this bias and also to provide an estimator of the unknown variance $\sigma^{*2}$. Towards this goal, we define $\hat\theta_n = \hat\theta(\mathbf{Y}_n)$, $\mathbf{Y}_n = (Y_1,\ldots,Y_n)$, and $\hat\theta^{(i)}_{n-1} = \hat\theta(\mathbf{Y}^{(i)}_{n-1})$, where $\mathbf{Y}^{(i)}_{n-1}$ is the subvector of $\mathbf{Y}_n$ deleting $Y_i$, for $i = 1,\ldots,n$. Let then

(4.2)  $\hat\theta_{n,i} = n\hat\theta_n - (n-1)\hat\theta^{(i)}_{n-1}, \quad i = 1,\ldots,n$.

Thus,

(4.3)  $\tilde\theta_n = n^{-1}\sum_{i=1}^n \hat\theta_{n,i}$ and $s_n^2 = (n-1)^{-1}\sum_{i=1}^n \bigl( \hat\theta_{n,i} - \tilde\theta_n \bigr)^2$,

where $\tilde\theta_n$ is the usual jackknife estimator of $\theta$ and $s_n^2$ is the usual jackknife estimator of the variance $\sigma^{*2}$. Our basic goal is to study the properties of these estimators. For this, we define $G^{(i)}_{n-1}$, $K^{(i)}_{n-1}$, etc., as in Section 2, where $\mathbf{Y}_n$ is replaced by $\mathbf{Y}^{(i)}_{n-1}$, for $i = 1,\ldots,n$. Also, we write

(4.4)  $\hat\lambda_n = x_nK_n(x_n)$ and $\hat\lambda^{(i)}_{n-1} = x^{(i)}_{n-1}K^{(i)}_{n-1}(x^{(i)}_{n-1})$, $i = 1,\ldots,n$,

where $x_n$ (and the $x^{(i)}_{n-1}$) relate to the particular order statistics ($Y_{n:r}$) at which the maximum is attained. Note that, under the assumed uniqueness of $x_0$,

(4.5)  $x_n \to x_0$ a.s., as $n \to \infty$.
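As a computational companion to (4.2)-(4.3) (our illustration, not part of the original; it reuses the hypothetical bundle_strength_estimate of Section 2), the pseudo-values, the jackknife point estimate and the jackknife variance estimator may be computed as follows:

```python
import numpy as np

def jackknife_bundle_strength(Y):
    """Pseudo-values (4.2) and the jackknife estimators (4.3)."""
    Y = np.asarray(Y, dtype=float)
    n = len(Y)
    theta_hat = bundle_strength_estimate(Y)
    # Leave-one-out estimates theta_hat^{(i)}_{n-1}, i = 1, ..., n.
    loo = np.array([bundle_strength_estimate(np.delete(Y, i)) for i in range(n)])
    pseudo = n * theta_hat - (n - 1) * loo   # (4.2)
    theta_tilde = pseudo.mean()              # (4.3): jackknife estimate of theta
    s2 = pseudo.var(ddof=1)                  # (4.3): estimates sigma*^2
    return theta_tilde, s2
```

By (4.20) and (4.26) below, theta_tilde tracks $\hat\theta_n$ and s2 converges a.s. to $\sigma^{*2}$, so $n^{1/2}(\tilde\theta_n - \theta)/s_n$ is asymptotically standard normal.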
Now, proceeding as in (2.10), we obtain that

(4.6)  $\sup_{x\in R^+} \bigl| x\{K^{(i)}_{n-1}(x) - K_n(x)\} \bigr| \le 2\sup_{x\in R^+} \bigl| G^{(i)}_{n-1}(x) - G_n(x) \bigr| = 2\sup_{x\in R^+} \bigl\{ (n-1)^{-1}\bigl| I(Y_i \le x) - G_n(x) \bigr| \bigr\} \le 2/(n-1), \quad \forall\, i = 1,\ldots,n$;

(4.7)  $\max_{1\le i\le n} \bigl| \hat\lambda^{(i)}_{n-1} - \hat\lambda_n \bigr| = \max_{1\le i\le n} \bigl| \sup_x xK^{(i)}_{n-1}(x) - \sup_x xK_n(x) \bigr| \le \max_{1\le i\le n} \bigl\{ \sup_x \bigl| x\{K^{(i)}_{n-1}(x) - K_n(x)\} \bigr| \bigr\} \le 2/(n-1)$;

and

(4.8)  $\max_{1\le i\le n} \bigl| K^{(i)}_{n-1}(0) - K_n(0) \bigr| \le (n-1)^{-1}\max_{1\le i\le n} \bigl| Y_i^{-1} - K_n(0) \bigr| = o(n^{-1/2})$ a.s., as $n \to \infty$,

where the last step is based on the well known result that, for a distribution having a finite second moment, the sample range is $o(n^{1/2})$ a.s., as $n \to \infty$ [and (2.11) insures this for the $Y_i^{-1}$]. Let us rewrite
(4.9)  $\hat\lambda^{(i)}_{n-1} - \hat\lambda_n = x^{(i)}_{n-1}K^{(i)}_{n-1}(x^{(i)}_{n-1}) - x_nK_n(x_n) = x^{(i)}_{n-1}\bigl[ K^{(i)}_{n-1}(x^{(i)}_{n-1}) - K_n(x^{(i)}_{n-1}) \bigr] + \bigl\{ x^{(i)}_{n-1}K_n(x^{(i)}_{n-1}) - x_nK_n(x_n) \bigr\}$,

so that, by (4.6) and (4.7),

(4.10)  $\max_{1\le i\le n} \bigl| x^{(i)}_{n-1}K_n(x^{(i)}_{n-1}) - x_nK_n(x_n) \bigr| \le 4/(n-1)$.

But
(4.11)  $x^{(i)}_{n-1}K_n(x^{(i)}_{n-1}) - x_nK_n(x_n) = x^{(i)}_{n-1}\bigl[ K_n(x^{(i)}_{n-1}) - K(x^{(i)}_{n-1}) \bigr] - x_n\bigl[ K_n(x_n) - K(x_n) \bigr] + \bigl\{ x^{(i)}_{n-1}K(x^{(i)}_{n-1}) - x_nK(x_n) \bigr\}$,

where, by (2.10), the first two terms on the right hand side are each $O(n^{-1/2}(\log\log n)^{1/2})$ a.s., as $n \to \infty$, so that, by (4.10), the last term is also $O(n^{-1/2}(\log\log n)^{1/2})$ a.s., as $n \to \infty$. Combining this with (4.5), we obtain that

(4.12)  $\max_{1\le i\le n} \bigl| x^{(i)}_{n-1} - x_n \bigr| \to 0$ a.s., as $n \to \infty$.

Also, we note that, for each $i$ ($= 1,\ldots,n$),
(4.13)  $\hat\theta_{n,i} = \{K_n(0)\}^{-1}\bigl\{ \bigl( n\hat\lambda_n - (n-1)\hat\lambda^{(i)}_{n-1} \bigr) + (n-1)\hat\theta^{(i)}_{n-1}\bigl[ K^{(i)}_{n-1}(0) - K_n(0) \bigr] \bigr\}$,

so that, using (4.7), (4.8) and (4.13), we conclude that, as $n \to \infty$,

(4.14)  $\max_{1\le i\le n} \Bigl| \hat\theta_{n,i} - \{K_n(0)\}^{-1}\bigl\{ \bigl( n\hat\lambda_n - (n-1)\hat\lambda^{(i)}_{n-1} \bigr) + \theta n\bigl[ K^{(i)}_{n-1}(0) - K_n(0) \bigr] \bigr\} \Bigr| \to 0$ a.s.

Further,

(4.15)  $K^{(i)}_{n-1}(0) - K_n(0) = (n-1)^{-1}\bigl[ K_n(0) - Y_i^{-1} \bigr], \quad i = 1,\ldots,n$,
and, by definition,
(4.16)  $n\hat\lambda_n - (n-1)\hat\lambda^{(i)}_{n-1} \le nx_nK_n(x_n) - (n-1)x_nK^{(i)}_{n-1}(x_n) = x_n\bigl[ Y_i^{-1}I(Y_i > x_n) \bigr]$,

and, similarly,

(4.17)  $n\hat\lambda_n - (n-1)\hat\lambda^{(i)}_{n-1} \ge nx^{(i)}_{n-1}K_n(x^{(i)}_{n-1}) - (n-1)x^{(i)}_{n-1}K^{(i)}_{n-1}(x^{(i)}_{n-1}) = x^{(i)}_{n-1}\bigl[ Y_i^{-1}I(Y_i > x^{(i)}_{n-1}) \bigr]$,

so that, by (4.12), (4.16) and (4.17), we obtain that

(4.18)  $n\hat\lambda_n - (n-1)\hat\lambda^{(i)}_{n-1} = x_nY_i^{-1}I(Y_i > x_n) + u_{ni}, \quad i = 1,\ldots,n$,

where

(4.19)  $\max_{1\le i\le n} |u_{ni}| \le 1$ and $n^{-1}\sum_{i=1}^n |u_{ni}| \to 0$ a.s., as $n \to \infty$.
Therefore, by (4.14), (4.18) and (4.19), we conclude that

(4.20)  $\tilde\theta_n = \{K_n(0)\}^{-1}x_n\bigl\{\, n^{-1}\sum_{i=1}^n Y_i^{-1}I(Y_i > x_n) \,\bigr\} + o(1)$ a.s. $= x_nK_n(x_n)/K_n(0) + o(1)$ a.s. $= \hat\theta_n + o(1)$ a.s., as $n \to \infty$.
Similarly, by (4.3), (4.14), (4.18), (4.19) and (4.20), we have

(4.21)  $s_n^2 = (n-1)^{-1}\sum_{i=1}^n (\hat\theta_{n,i} - \tilde\theta_n)^2$
$= \{K_n(0)\}^{-2}\bigl\{\, x_n^2(n-1)^{-1}\sum_{i=1}^n \bigl[ Y_i^{-1}I(Y_i \ge x_n) - K_n(x_n) \bigr]^2 + \hat\theta_n^2\,n(n-1)^{-2}\sum_{i=1}^n \bigl( Y_i^{-1} - K_n(0) \bigr)^2 - 2\hat\theta_nx_n\,n(n-1)^{-2}\sum_{i=1}^n \bigl[ Y_i^{-1} - K_n(0) \bigr]\bigl[ Y_i^{-1}I(Y_i > x_n) - K_n(x_n) \bigr] \,\bigr\} + o(1)$ a.s.
$= \hat\theta_n^2\bigl\{\, (n-1)^{-1}\sum_{i=1}^n Y_i^{-2}I(Y_i \ge x_n)\big/K_n^2(x_n) + (n-1)^{-1}\sum_{i=1}^n Y_i^{-2}\big/K_n^2(0) - 2(n-1)^{-1}\sum_{i=1}^n Y_i^{-2}I(Y_i > x_n)\big/[K_n(0)K_n(x_n)] \,\bigr\} + o(1)$ a.s., as $n \to \infty$.
If, parallel to (3.12), we define the sample counterpart

(4.22)  $V_n(x) = \int_x^\infty y^{-2}\,dG_n(y) = n^{-1}\sum_{i=1}^n Y_i^{-2}I(Y_i > x), \quad x \in R^+$,

then, under (2.11), it is easy to show that

(4.23)  $\sup\{\, |V_n(x) - V_G(x)| : x \in R^+ \,\} \to 0$ a.s., as $n \to \infty$.
Also, by the continuity and boundedness of $V_G(x)$, for every $\varepsilon$ ($> 0$) there exists a $\delta$ ($> 0$) such that

(4.24)  $|V_G(x) - V_G(x_0)| < \varepsilon$ for every $x$: $|x - x_0| < \delta$.
Now, by (4.21) and (4.22),

(4.25)  $s_n^2 = [n/(n-1)]\,\hat\theta_n^2\bigl\{\, V_n(x_n)/K_n^2(x_n) + V_n(0)/K_n^2(0) - 2V_n(x_n)/[K_n(0)K_n(x_n)] \,\bigr\} + o(1)$ a.s., as $n \to \infty$.

Since $\sup\{\, |K_n(x) - K(x)| : x \in R^+ \,\}$ converges a.s. to 0 as $n \to \infty$, and $K(x)$ is a continuous and nonnegative function of $x$, by using (4.5), (4.23), (4.24) and (4.25) we obtain that

(4.26)  $s_n^2 = \sigma^{*2} + o(1)$ a.s., as $n \to \infty$,

where $\sigma^{*2}$ is defined by (3.14). This establishes the almost sure convergence of the jackknife variance estimator $s_n^2$.
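In practice, (4.25) also suggests a direct plug-in evaluation of $s_n^2$ that avoids the $n$ leave-one-out recomputations in (4.2). The sketch below is ours (names are hypothetical, and the tie-breaking convention at $x_n$ is immaterial asymptotically); it evaluates (4.25) in a single pass over the order statistics:

```python
import numpy as np

def plugin_variance_estimate(Y):
    """Direct evaluation of (4.25):
    s_n^2 = [n/(n-1)] * theta_hat^2 * ( V_n(x_n)/K_n(x_n)^2
            + V_n(0)/K_n(0)^2 - 2 V_n(x_n)/(K_n(0) K_n(x_n)) ),
    with x_n the order statistic maximizing x K_n(x)."""
    Y = np.sort(np.asarray(Y, dtype=float))
    n = len(Y)
    inv = 1.0 / Y
    K_tail = np.cumsum(inv[::-1])[::-1] / n   # K_n just below each Y_(i)
    i_max = int(np.argmax(Y * K_tail))
    x_n, Kx, K0 = Y[i_max], K_tail[i_max], inv.mean()
    theta_hat = x_n * Kx / K0                 # same value as (2.4)
    V0 = np.mean(inv ** 2)                    # V_n(0)
    Vx = np.sum(inv[i_max:] ** 2) / n         # V_n just below x_n
    return (n / (n - 1)) * theta_hat ** 2 * (
        Vx / Kx ** 2 + V0 / K0 ** 2 - 2.0 * Vx / (K0 * Kx))
```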
REFERENCES

COLEMAN, R. (1979). An Introduction to Mathematical Stereology. Memoirs No. 3, Dept. of Theoretical Statistics, Univ. of Aarhus, Denmark.

COX, D.R. (1969). Some sampling problems in technology. In New Developments in Survey Sampling (eds: N.L. Johnson and H. Smith), Wiley, New York.

DANIELS, H.E. (1945). The statistical theory of the strength of bundles of threads. Proc. Roy. Soc. Ser. A 183, 405-435.

DVORETZKY, A., KIEFER, J. and WOLFOWITZ, J. (1956). Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Statist. 27, 642-669.

PATIL, G.P. and RAO, C.R. (1977). The weighted distributions: A survey of their applications. In Applications of Statistics (ed: P.R. Krishnaiah). North Holland, Amsterdam, pp. 383-405.

PATIL, G.P. and RAO, C.R. (1978). Weighted distributions and size-biased sampling with applications to wildlife populations and human families. Biometrics 34, 179-189.

SEN, P.K. (1973). On fixed-size confidence bands for the bundle strength of filaments. Ann. Statist. 1, 526-537.

SEN, P.K. (1981). Sequential Nonparametrics: Invariance Principles and Statistical Inference. Wiley, New York.

SEN, P.K., BHATTACHARYYA, B.B. and SUH, M.W. (1973). Limiting behavior of the extrema of certain sample functions. Ann. Statist. 1, 297-311.

SUH, M.W., BHATTACHARYYA, B.B. and GRANDAGE, A.H.E. (1970). On the distribution and moments of the strength of a bundle of filaments. J. Appl. Probability 7, 712-720.