SEQUENTIAL POINT ESTIMATION OF ESTIMABLE PARAMETERS
BASED ON U-STATISTICS

by

Pranab Kumar Sen
Department of Biostatistics
University of North Carolina, Chapel Hill

and

Malay Ghosh
Iowa State University

Institute of Statistics Mimeo Series No. 1236
June 1979
SEQUENTIAL POINT ESTIMATION OF ESTIMABLE PARAMETERS
BASED ON U-STATISTICS

by

PRANAB KUMAR SEN¹
University of North Carolina

MALAY GHOSH²
Iowa State University
ABSTRACT

Asymptotically risk-efficient sequential point estimation of regular functionals of distribution functions based on U-statistics is considered under appropriate regularity conditions. Some auxiliary results on U-statistics are also considered in this context.
AMS Subject Classification: 62L12, 62L15, 62G99.

Key Words & Phrases: Asymptotic normality, asymptotic risk-efficiency, estimable parameter, risk function, sequential estimation, stopping times, U-statistics.
¹Work supported partially by the National Institutes of Health, Contract No. NIH-NHLBI-71-2243 from the National Heart, Lung and Blood Institute.

²Work supported by the Army Research Office, Durham, Grant Number DAAG29-76-G-0057.
1. INTRODUCTION

Robbins (1959) initiated the study of the sequential point estimation of the mean of a normal distribution, and this was later extended by Starr (1966) and Starr and Woodroofe (1969).
Sequential point estimation of the scale parameter of a gamma distribution was considered by Starr and Woodroofe (1972), while the case of the multinormal mean vector was treated in Ghosh, Sinha and Mukhopadhyay (1976). The relevant properties of the normal and gamma distributions were fully exploited in the above papers.
Recently, Ghosh and Mukhopadhyay (1979) have proposed a sequential procedure for the point estimation of the mean of an unspecified distribution (admitting a finite eighth moment) and established its asymptotic risk-efficiency (to be defined in Section 2). Their nonparametric approach provides clues for further generalizations embracing a broader class of statistics and requiring less stringent conditions.
The object of the present investigation is to study nonparametric sequential point estimation of an estimable parameter based on U-statistics. In this context, the moment condition of Ghosh and Mukhopadhyay (1979) is relaxed considerably and their results are extended to a broad class of U-statistics. Along with the preliminary notions, the main theorems are presented in Section 2. Two relevant theorems on U-statistics are also considered in this section. Section 3 is devoted to the proofs of the main theorems. Section 4 deals with some generalizations of these theorems along with some general remarks. The Appendix deals with the proofs of the theorems on U-statistics.
2. THE MAIN RESULTS

Let $\{X_i, i \ge 1\}$ be a sequence of independent and identically distributed random variables (i.i.d.r.v.) with a distribution function (d.f.) $F$ defined on the real $q$-plane $R^q$, for some $q \ge 1$. Let $\phi(X_1,\ldots,X_m)$, symmetric in its $m$ $(\ge 1)$ arguments, be a Borel measurable kernel of degree $m$, and consider the estimable parameter (a functional of the d.f. $F$)

$$\theta(F) = E\phi(X_1,\ldots,X_m) = \int\cdots\int_{R^{qm}}\phi(x_1,\ldots,x_m)\,dF(x_1)\cdots dF(x_m), \quad F \in \mathcal{F}, \tag{2.1}$$

where $\mathcal{F} = \{F : |\theta(F)| < \infty\}$. Then, for $n \ge m$, the U-statistic $U_n$ corresponding to $\theta(F)$ is defined by [cf. Hoeffding (1948)]

$$U_n = \binom{n}{m}^{-1}\sum_{C_{n,m}}\phi(X_{i_1},\ldots,X_{i_m}); \quad C_{n,m} = \{i_1,\ldots,i_m : 1 \le i_1 < \cdots < i_m \le n\}. \tag{2.2}$$
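To fix ideas, here is a small computational illustration, ours and not part of the original paper: a direct (brute-force) evaluation of (2.2) in Python, using the degree-2 kernel $\phi(x_1, x_2) = \frac{1}{2}(x_1 - x_2)^2$, for which $\theta(F)$ is the population variance. All function names here are hypothetical.

```python
from itertools import combinations
from math import comb

def u_statistic(x, kernel, m):
    """Evaluate the U-statistic (2.2) by enumerating all m-subsets of the sample."""
    n = len(x)
    total = sum(kernel(*(x[j] for j in idx)) for idx in combinations(range(n), m))
    return total / comb(n, m)

# Kernel of degree m = 2 with theta(F) = Var(X): phi(x1, x2) = (x1 - x2)^2 / 2.
phi = lambda x1, x2: 0.5 * (x1 - x2) ** 2

sample = [2.1, -0.3, 1.7, 0.4, -1.2, 0.9]
print(u_statistic(sample, phi, m=2))  # equals the sample variance with divisor n - 1
```

For this kernel, $U_n$ coincides with the usual unbiased sample variance, which gives a quick correctness check.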
Note that $U_n$ is symmetric in $X_1,\ldots,X_n$ and is unbiased for $\theta(F)$. Let then

$$\phi_d(x_1,\ldots,x_d) = E\phi(x_1,\ldots,x_d, X_{d+1},\ldots,X_m), \quad 0 \le d \le m \ (\phi_m = \phi), \tag{2.3}$$

and let

$$\zeta_d = E\phi_d^2(X_1,\ldots,X_d) - \theta^2(F), \quad 0 \le d \le m \ (\zeta_0 = 0). \tag{2.4}$$

Then, whenever $n \ge m$ and $E\phi^2 < \infty$, $\sigma_n^2 = \operatorname{Var}(U_n)$ exists. Note that, by the reverse martingale property of $\{U_n, n \ge m\}$, $\sigma_n^2 - \sigma_{n+1}^2 = \operatorname{Var}(U_n - U_{n+1}) \ge 0$, so that $\sigma_n^2$ is nonincreasing in $n$ $(\ge m)$.
To motivate the sequential procedure, suppose that the loss incurred in estimating $\theta(F)$ by $U_n$ is

$$L_n = a[U_n - \theta(F)]^2 + cn; \quad a > 0, \ c > 0, \tag{2.5}$$

where $a$ and $c$ (cost per unit sample) are specified constants. The object is to minimize the risk (for given $a, c$)
$$R_c(n; a, F) = EL_n = a\sigma_n^2 + cn, \tag{2.6}$$

by a proper choice of $n$. Towards this, we have the following

Lemma 2.1. For every $a > 0$, $c > 0$ and $m \ge 1$, $R_c(n; a, F)$ is a convex function of $n$, whenever $E\phi^2 < \infty$.

The proof of the lemma is given in the Appendix.
Note that by Lemma 2.1, there exists an $n_c^*$ $(= n^*(a, c; F))$ such that

$$R_c(n_c^*; a, F) = \min_{n} R_c(n; a, F) = a\sigma_{n_c^*}^2 + cn_c^*, \tag{2.7}$$

where in (2.7), the minimization is restricted to integers $n \ge m$, and thereby, $n_c^*$ is also an integer $(\ge m)$, though it need not be unique (there may be two consecutive values of $n_c^*$ for which (2.7) holds). From (2.4), (2.6) and (2.7), it follows that $n_c^*$ depends on $a$, $c$, $m$ as well as $\zeta_d$, $1 \le d \le m$, where the latter parameters are all (unknown) functionals of the (unspecified) d.f. $F$. Thus, in the absence of knowledge of these $\zeta_d$, $1 \le d \le m$, no fixed sample size minimizes the risk simultaneously for all $F$ (i.e., for all $\zeta_d$, $1 \le d \le m$), and hence, a sequential procedure may be desirable to achieve this goal.
We assume that $\theta(F)$ is stationary of order 0 [viz. Hoeffding (1948)], so that

$$0 < \zeta_1 \le m^{-1}\zeta_m < \infty. \tag{2.8}$$

Note that by (2.4), (2.8) and Theorem 5.2 of Hoeffding (1948),

$$\sigma_n^2 = m^2 n^{-1}\zeta_1 + \xi(n), \quad \text{where } \xi(n) = O(n^{-2}), \tag{2.9}$$

so that $n\sigma_n^2 \to m^2\zeta_1$ as $n \uparrow \infty$.
Suppose that in (2.6), we neglect (for small $c$) the contribution of $\xi(n)$, and then, in (2.7), we denote the resulting solution by $n_c^0$, so that we have, for small $c$,

$$n_c^0 = m(a\zeta_1/c)^{1/2} \quad \text{and} \quad R_c(n_c^0; a, F) \sim 2cn_c^0, \tag{2.10}$$

where $g(c) \sim h(c)$ means that $g(c)/h(c) \to 1$ as $c \downarrow 0$.
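As a quick check of the form of (2.10) (ours, not in the original): neglecting $\xi(n)$ in (2.6) and treating $n$ as continuous,

$$\frac{d}{dn}\Big\{\frac{am^2\zeta_1}{n} + cn\Big\} = -\frac{am^2\zeta_1}{n^2} + c = 0 \iff n = m(a\zeta_1/c)^{1/2} = n_c^0,$$

at which the approximate risk equals $am^2\zeta_1/n_c^0 + cn_c^0 = 2cn_c^0$, in agreement with (2.10).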
Using (2.4) and (2.7), it is possible to write

$$R_c(n_c^*; a, F) = \sum_{i=1}^{\infty} d_i(n_c^*)^{-i} + cn_c^*,$$

where $d_1 = am^2\zeta_1$ and $\sum_{i=1}^{\infty} i\,d_i(n_c^*)^{-i-1} \sim c$, so that $c(n_c^*)^2 \sim am^2\zeta_1$. Hence,

$$\lim_{c\downarrow 0} n_c^*/n_c^0 = 1 \quad \text{and} \quad \lim_{c\downarrow 0} R_c(n_c^*; a, F)/R_c(n_c^0; a, F) = 1, \tag{2.11}$$

whenever (2.8) holds. Hence, in the sequel, we shall occasionally interchange $n_c^*$ and $n_c^0$ for the convenience of our manipulations.
For the proposed sequential procedure, we proceed to estimate $\zeta_1$ first. As in Sen (1960, 1977), we let $U_{n-1}^{(i)}$ be the U-statistic based on $(X_1,\ldots,X_{i-1},X_{i+1},\ldots,X_n)$, for $i = 1,\ldots,n$ $(n \ge m+1)$. Let then

$$s_n^2 = (n-1)\sum_{i=1}^{n}\big[U_{n-1}^{(i)} - U_n\big]^2, \quad n \ge m+1. \tag{2.12}$$

Whenever $E\phi^2 < \infty$, $s_n^2$ is a (strongly) consistent estimator of $m^2\zeta_1$ $(= \lim_{n\to\infty} n\sigma_n^2)$.
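The estimator (2.12) is easy to compute by brute force for small $n$; the sketch below (ours, reusing the hypothetical `u_statistic` helper from the earlier illustration) carries out exactly the leave-one-out computation.

```python
def s_squared(x, kernel, m):
    """Jackknife estimator (2.12): s_n^2 = (n - 1) * sum_i (U_{n-1}^{(i)} - U_n)^2,
    a strongly consistent estimator of m^2 * zeta_1 when E phi^2 is finite."""
    n = len(x)
    u_n = u_statistic(x, kernel, m)
    # U_{n-1}^{(i)}: the U-statistic with the i-th observation deleted.
    leave_one_out = [u_statistic(x[:i] + x[i + 1:], kernel, m) for i in range(n)]
    return (n - 1) * sum((u - u_n) ** 2 for u in leave_one_out)

print(s_squared(sample, phi, m=2))
```

For $m = 1$ with $\phi(x) = x$ (so that $U_n = \bar{X}_n$), a short calculation shows that $s_n^2$ reduces to the usual sample variance $(n-1)^{-1}\sum_{i}(X_i - \bar{X}_n)^2$, consistent with $m^2\zeta_1 = \operatorname{Var}(X_1)$.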
Motivated by (2.7), (2.5), (2.10) and (2.11), we propose the following sequential procedure: Let $n_0$ $(\ge m+1)$ be an initial sample size, and define the stopping number $N_c$ $(= N_c(a))$ by

$$N_c = \min\{n \ge n_0 :\ n \ge (a/c)^{1/2}(s_n + n^{-\gamma})\}, \tag{2.13}$$

where $\gamma$ $(> 0)$ is a suitable constant, to be defined later on.
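The rule (2.13) can be simulated directly; the following minimal sketch (ours, with hypothetical names, reusing `u_statistic` and `s_squared` from the earlier illustrations) draws one observation at a time and stops the first time $n \ge (a/c)^{1/2}(s_n + n^{-\gamma})$.

```python
import random

def stopping_time(a, c, gamma, kernel, m, draw, n0, rng):
    """Run the sequential rule (2.13); return (N_c, U_{N_c})."""
    x = [draw(rng) for _ in range(n0)]
    while True:
        n = len(x)
        s_n = s_squared(x, kernel, m) ** 0.5
        if n >= (a / c) ** 0.5 * (s_n + n ** (-gamma)):
            return n, u_statistic(x, kernel, m)
        x.append(draw(rng))

# Estimating theta(F) = Var(X) for F = N(0, 1), so theta = 1 and m^2 * zeta_1 = 2.
# gamma = 0.6 satisfies 0 < gamma < (2 + delta)/4, and also 1/2 < gamma as in Theorem 3.
rng = random.Random(1979)
N, estimate = stopping_time(a=1.0, c=0.01, gamma=0.6, kernel=phi, m=2,
                            draw=lambda r: r.gauss(0.0, 1.0), n0=5, rng=rng)
print(N, estimate)
```

For the variance kernel under $N(0,1)$, $m^2\zeta_1 = 2$, so (2.10) gives $n_c^0 = (2/0.01)^{1/2} \approx 14$ here (a back-of-envelope value, ours); realized values of $N_c$ should concentrate near $n_c^0$ as $c$ decreases.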
Our proposed (sequential point) estimator of $\theta(F)$ is $U_{N_c}$, and the risk for the proposed procedure is

$$R_c^*(a) = EL_{N_c} = aE\{U_{N_c} - \theta(F)\}^2 + cEN_c. \tag{2.14}$$

The main theorem of the paper is the following:
Theorem 1. If (i) $\theta(F)$ is stationary of order zero, (ii) $E|\phi|^{4+\delta} < \infty$ for some $\delta > 0$, and (iii) in (2.13), $\gamma \in (0, (2+\delta)/4)$, then

$$\lim_{c\downarrow 0} R_c^*(a)/R_c(n_c^*; a, F) = 1. \tag{2.15}$$
The proof of the theorem is deferred to Section 3. It may be remarked that (2.15) [in the sense of Starr (1966)] asserts that the risk involved in the sequential procedure is asymptotically (as $c \downarrow 0$) equivalent to the risk involved in the corresponding "optimal" fixed-sample size procedure, and hence, the sequential procedure is asymptotically risk-efficient, for all $F$ satisfying (i) and (ii) of Theorem 1. Also, it may be mentioned that Ghosh and Mukhopadhyay (1979) have considered the case of the population mean (which corresponds to $m = 1$) and obtained (2.15) under $E|\phi|^8 < \infty$ and assuming that in (2.13), $\gamma \in (0, \frac{1}{2})$. In our present setup, $m$ $(\ge 1)$ is arbitrary, $\gamma \in (0, (2+\delta)/4)$ (note that $(2+\delta)/4 > \frac{1}{2}$), and we need only that $E|\phi|^{4+\delta} < \infty$ for some $\delta > 0$. The relaxation of the regularity conditions is achieved here by using some reverse martingale properties of $\{U_n\}$ and the components of $s_n^2$. Further, results weaker than (2.15) can be obtained even without assuming that $E|\phi|^{4+\delta} < \infty$ for some $\delta > 0$. In fact, we have the following:
Theorem 2. Under (2.8), and for $E|\phi|^{2+\delta} < \infty$ for some $\delta > 0$,

$$\lim_{c\downarrow 0} E(N_c/n_c^0)^k = 1, \quad \forall\ k \in [0,1], \tag{2.16}$$

$$[U_{N_c} - \theta(F)]/\sigma_{n_c^*} \xrightarrow{\mathcal{D}} N(0,1), \quad \text{as } c \downarrow 0, \tag{2.17}$$

and in (2.17), $\sigma_{n_c^*}$ may also be replaced by $(s_{N_c}^2/N_c)^{1/2}$.
It is of natural interest to study the asymptotic distribution of $N_c$ (if it exists). We shall see later on that under $E\phi^4 < \infty$,

$$V(s_n^2) = \nu^2/n + O(n^{-2}), \quad \forall\ n \ge 2m, \tag{2.18}$$

where $\nu^2$ $(< \infty)$ depends on $F$. Then, we have the following

Theorem 3. If (i) $E|\phi|^{4+\delta} < \infty$ for some $\delta > 0$ and (ii) in (2.13), $\frac{1}{2} < \gamma < (2+\delta)/4$, then as $c \downarrow 0$,

$$2m^2\zeta_1(N_c - n_c^0)/\{\nu(n_c^0)^{1/2}\} \xrightarrow{\mathcal{D}} N(0,1). \tag{2.19}$$
The proofs of these theorems are presented in Section 3. For integer powers $k$ $(\ge 1)$, moment inequalities for U-statistics have been considered by Funk (1970) and Grams and Serfling (1973), while Sen (1974) studied the $L_p$-convergence of U-statistics for $p \ge 1$. In the following lemma, we derive a moment inequality for U-statistics valid for any power bigger than 1.
Lemma 2.2. Assume that $E|\phi|^r < \infty$ for some $r > 1$. Then, there exists a positive constant $K_r$ $(< \infty)$ such that

$$E|U_n - \theta(F)|^r \le K_r n^{-s}, \tag{2.20}$$

where

$$s = \begin{cases} r-1, & \text{if } 1 < r \le 2; \\ \min(r-1,\, k), & \text{if } 2(k-1) < r \le 2k,\ k \ge 2, \end{cases} \tag{2.21}$$

and $K_r$ does not depend on $n$.
The proof of the lemma is considered in the Appendix. In the remainder of this section, we consider the representation of $s_n^2$ in (2.12) in terms of a set of U-statistics, due to Sproule (1969). For each $d$ $(= 0, 1, \ldots, m)$, let
$$\phi^{(d)}(x_1,\ldots,x_{2m-d}) = \frac{d!\,\{(m-d)!\}^2}{(2m-d)!}\sum{}^{(d)}\,\phi(x_{\alpha_1},\ldots,x_{\alpha_m})\,\phi(x_{\beta_1},\ldots,x_{\beta_m}), \tag{2.22}$$

where the summation $\sum^{(d)}$ extends over all combinations of (distinct) $\alpha_1,\ldots,\alpha_m$ and $\beta_1,\ldots,\beta_m$ from $(1,\ldots,2m-d)$ with exactly $d$ of the $\alpha_j$ being common with the $\beta_j$. Let then (for $n \ge 2m$)

$$U_n^{(d)} = \binom{n}{2m-d}^{-1}\sum_{C_{n,2m-d}}\phi^{(d)}(X_{i_1},\ldots,X_{i_{2m-d}}), \quad 0 \le d \le m. \tag{2.23}$$

Then, by (2.12), (2.22) and (2.23), we have (by some routine steps)

$$s_n^2 = m^2(U_n^{(1)} - U_n^{(0)}) + \sum_{d=0}^{m} c_{nd}\,U_n^{(d)}, \tag{2.24}$$

where, for some positive constants $K_1$ and $K_2$ (independent of $n$),

$$\max_{0\le d\le m}|c_{nd}| \le K_1 n^{-1}, \tag{2.25}$$

$$\sum_{d=0}^{m}|c_{nd}| \le K_2 n^{-1}. \tag{2.26}$$

This representation plays a vital role in the proofs of the main theorems.
3. PROOFS OF THE MAIN THEOREMS

First, we consider the following lemma, which is crucial in the proofs to follow.

Lemma 3.1. If $E|\phi|^{2r} < \infty$ for some $r \ge 1$ and (2.8) holds, then, for every $\varepsilon \in (0,1)$,

$$P\{N_c \le n_c^*(1-\varepsilon)\} = O(c^{s/2(1+\gamma)}), \quad \text{as } c \downarrow 0, \tag{3.1}$$

where $s$ is defined by (2.21).

PROOF:
Note that by (2.24)-(2.26),

$$s_n^2 - m^2\zeta_1 = m^2(U_n^{(1)} - U_n^{(0)} - \zeta_1) + \sum_{d=0}^{m} c_{nd}\,U_n^{(d)}, \tag{3.2}$$
and, by (2.13), $N_c \ge b^{1/(1+\gamma)}$ with probability 1, where $b = (a/c)^{1/2}$. Let then $n_{1c} = [b^{1/(1+\gamma)}]$ and $n_{2c} = n_c^*(1-\varepsilon)$; choose $c$ so small that $n_{1c} \le n_{2c}$ (otherwise, there is no need to prove (3.1)). Then, by (2.13),

$$P\{N_c \le n_c^*(1-\varepsilon)\} = P\{N_c \le n_{2c}\} \le P\{s_n < b^{-1}n, \text{ for some } n_{1c} \le n \le n_{2c}\}$$
$$\le P\{s_n^2 < b^{-2}n_{2c}^2, \text{ for some } n_{1c} \le n \le n_{2c}\}$$
$$\le P\{s_n^2 - m^2\zeta_1 \le m^2\zeta_1[(1-\varepsilon)^2 - 1], \text{ for some } n_{1c} \le n \le n_{2c}\}$$
$$\le P\{|s_n^2 - m^2\zeta_1|/m^2\zeta_1 \ge \varepsilon(2-\varepsilon), \text{ for some } n_{1c} \le n \le n_{2c}\}. \tag{3.3}$$
By (2.25), (3.2) and (3.3), we have

$$P\{N_c \le n_c^*(1-\varepsilon)\} \le P\Big\{\max_{n_{1c}\le n\le n_{2c}}\big|U_n^{(1)} - U_n^{(0)} - \zeta_1\big| \ge \varepsilon\zeta_1\Big\} + P\Big\{\max_{n_{1c}\le n\le n_{2c}}\sum_{d=0}^{m}\big|U_n^{(d)}\big| \ge Kn_{1c}\Big\}, \tag{3.4}$$

where $K$ $(> 0)$ does not depend on $c$ (but depends on $F$ and $\varepsilon$). Let $\mathcal{F}_n$ be the $\sigma$-field generated by the ordered collection of $X_1,\ldots,X_n$ and by $X_{n+j}$, $j \ge 1$ (so that $\mathcal{F}_n$ is nonincreasing in $n$). Then, $\{U_n, \mathcal{F}_n; n \ge m\}$ and $\{U_n^{(d)}, \mathcal{F}_n; n \ge 2m-d\}$, for every $d = 0, 1, \ldots, m$, are reverse martingales, and hence $\{U_n^{(1)} - U_n^{(0)} - \zeta_1, \mathcal{F}_n; n \ge 2m\}$ is also a reverse martingale, so that for $n_{1c} \ge 2m$, by the Kolmogorov-Hájek-Rényi-Chow inequality for reverse martingales and our Lemma 2.2,
$$P\Big\{\max_{n_{1c}\le n\le n_{2c}}\big|U_n^{(1)} - U_n^{(0)} - \zeta_1\big| \ge \varepsilon\zeta_1\Big\} \le K_r(\varepsilon\zeta_1)^{-r}\,n_{1c}^{-s}, \tag{3.5}$$

where $s$ is defined by (2.21), and

$$P\Big\{\max_{n_{1c}\le n\le n_{2c}}\sum_{d=0}^{m}\big|U_n^{(d)}\big| \ge Kn_{1c}\Big\} \le (m+1)K_r\,n_{1c}^{-r}\Big\{\max_{0\le d\le m}E\big|U_{n_{1c}}^{(d)}\big|^r\Big\}, \tag{3.6}$$

where $E|\phi|^{2r} < \infty \Rightarrow E|U_n^{(d)}|^r < \infty$, $\forall\ 0 \le d \le m$, $n \ge 2m$; (3.1) then follows from (3.4), (3.5) and (3.6) after noting that, by (2.21), $s < r$ and $n_{1c} = [b^{1/(1+\gamma)}]$, so that $n_{1c}^{-s} = O(c^{s/2(1+\gamma)})$ as $c \downarrow 0$. Q.E.D.
Now, by virtue of (2.24)-(2.26) and Lemma 2.2, we have for some $r \ge 1$,

$$E|s_n^2 - m^2\zeta_1|^r \le K_r n^{-s}, \tag{3.7}$$

where $s$ is defined in (2.21). Using (3.7), one can follow the lines of proof in part (d) of the lemma of Ghosh and Mukhopadhyay (1979) to conclude that

$$E(N_c/n_c^*)^k \to 1 \quad \text{as } c \downarrow 0, \quad \forall\ 0 \le k \le 1, \tag{3.8}$$

whenever (3.7) holds for some $s > 0$. In particular, if in (3.7) we let $r = 1 + \delta/2$, where $\delta > 0$, then $s = \delta/2 > 0$, so that (3.8) holds for every $0 \le k \le 1$. This proves (2.16). Moreover, by (2.13),

$$bs_{N_c} \le N_c \le n_0 + b(s_{N_c-1} + (N_c-1)^{-\gamma}), \tag{3.9}$$

where $b = (a/c)^{1/2}$. Since, by the convergence theorem for reverse martingales, $U_n^{(d)}$, $0 \le d \le m$, all converge a.s. to their expectations as $n \to \infty$, by (3.2), we claim that $E\phi^2 < \infty \Rightarrow s_n^2 \to m^2\zeta_1$ a.s., as $n \to \infty$.
Hence, dividing all sides of (3.9) by $n_c^*$ and letting $c \downarrow 0$ (i.e., $n_c^* \to \infty$), we obtain that

$$N_c/n_c^* \to 1 \quad \text{a.s., as } c \downarrow 0; \tag{3.10}$$

(2.17) follows then by using (3.10) and the results in Section 5 of Miller and Sen (1972). In fact, for (2.17), $E\phi^2 < \infty$ suffices. This completes the proof of Theorem 2.
We proceed now to prove Theorem 1. First, note that by (2.16), $(EN_c)/n_c^* \to 1$ as $c \downarrow 0$. Hence, by virtue of (2.6) and (2.14), to prove (2.15), it suffices to show that

$$\lim_{c\downarrow 0}\ aE\{U_{N_c} - \theta(F)\}^2/(cn_c^*) = 1. \tag{3.11}$$
Let us write

$$E\{U_{N_c} - \theta(F)\}^2 = E\{U_{n_c^*} - \theta(F)\}^2 + E\{U_{N_c} - U_{n_c^*}\}^2 + 2E(U_{n_c^*} - \theta(F))(U_{N_c} - U_{n_c^*}),$$

and note that, by (2.9) and (2.11),

$$E\{U_{n_c^*} - \theta(F)\}^2 = m^2\zeta_1/n_c^* + O(n_c^{*-2}),$$

so that

$$\lim_{c\downarrow 0}\ aE\{U_{n_c^*} - \theta(F)\}^2/(cn_c^*) = 1. \tag{3.12}$$
Hence, to prove (3.11), it suffices to show that

$$\lim_{c\downarrow 0}\ n_c^*E\{U_{N_c} - U_{n_c^*}\}^2 = 0. \tag{3.13}$$

Using the definition of $\phi_1$ prior to (2.3), we may write

$$U_n - \theta(F) = mU_n^{(1)} + U_n^*, \quad \forall\ n \ge m, \quad \text{where } U_n^{(1)} = n^{-1}\sum_{i=1}^{n}[\phi_1(X_i) - \theta(F)], \tag{3.14}$$

$$EU_n^* = 0, \quad \forall\ n \ge m, \tag{3.15}$$

and

$$EU_n^{*2} \le C_1 n^{-2} \quad \text{and} \quad E\{U_n^{(1)}\}^2 \le C_2 n^{-1}, \quad \forall\ n \ge m, \tag{3.16}$$

where $C_1$ and $C_2$ are positive and finite constants, independent of $n$. Also, we have by (3.14),

$$n_c^*E\{U_{N_c} - U_{n_c^*}\}^2 \le 2m^2 n_c^*E\{U_{N_c}^{(1)} - U_{n_c^*}^{(1)}\}^2 + 4n_c^*EU_{N_c}^{*2} + 4n_c^*EU_{n_c^*}^{*2}, \tag{3.17}$$
where, by (3.16),

$$n_c^*EU_{n_c^*}^{*2} \le C_1 n_c^{*-1} \to 0 \quad \text{as } c \downarrow 0. \tag{3.18}$$
Further,

$$n_c^*EU_{N_c}^{*2} = n_c^*E\Big\{\sum_{n_{1c}\le k\le n_{2c}} I(N_c = k)\,U_k^{*2}\Big\} + n_c^*E\Big\{\sum_{k > n_{2c}} I(N_c = k)\,U_k^{*2}\Big\}$$
$$\le n_c^*E\Big\{\sum_{n_{1c}\le k\le n_{2c}} I(N_c = k)\,U_k^{*2}\Big\} + n_c^*E\Big\{\big(\sup_{n > n_{2c}} U_n^{*2}\big)\,I(N_c > n_{2c})\Big\}, \tag{3.19}$$

where $n_{1c}$ and $n_{2c}$ are defined after (3.2).
Now

$$n_c^*E\Big\{\sum_{n_{1c}\le k\le n_{2c}} I(N_c = k)\,U_k^{*2}\Big\} \le n_c^*E\Big\{\big(\sup_{n \ge n_{1c}} U_n^{*2}\big)\,I(N_c \le n_{2c})\Big\} \to 0 \quad \text{as } c \downarrow 0, \tag{3.20}$$

by (3.1), (3.19) and the definition of $n_{1c}$.
Also, by the Doob maximal inequality for (reverse) submartingales,

$$n_c^*E\Big\{\big(\sup_{n > n_{2c}} U_n^{*2}\big)\,I(N_c > n_{2c})\Big\} \le n_c^*E\Big\{\sup_{n > n_{2c}} U_n^{*2}\Big\} \le 4n_c^*E(U_{n_{2c}+1}^{*2}) \to 0 \quad \text{as } c \downarrow 0, \tag{3.21}$$

where the last step follows from (3.16) and the definition of $n_{2c}$.
Hence, it suffices to show that

$$\lim_{c\downarrow 0}\ n_c^*E\{U_{N_c}^{(1)} - U_{n_c^*}^{(1)}\}^2 = 0. \tag{3.22}$$

Now $U_n^{(1)}$ is a sample mean for all $n \ge m$. It follows from Anscombe's (1952) result and (3.10) that

$$(n_c^*)^{1/2}\big(U_{N_c}^{(1)} - U_{n_c^*}^{(1)}\big) \xrightarrow{P} 0 \quad \text{as } c \downarrow 0. \tag{3.23}$$
We then follow the line of proof of Ghosh and Mukhopadhyay (1979) [in view of our Lemma 2.2, which is stronger than the moment inequality of Grams and Serfling (1973) (restricted to integer powers), their eighth moment condition is not needed here] and obtain that

$$\{n_c^*(U_{N_c}^{(1)} - U_{n_c^*}^{(1)})^2\} \text{ is uniformly integrable in } c \le c_0, \text{ for some } c_0 > 0. \tag{3.24}$$

From (3.23) and (3.24), we conclude that (3.22) holds, and the proof of Theorem 1 is now complete.
To prove Theorem 3, first note that from Sproule's (1974) theorem, one gets

$$N_c^{1/2}\big(s_{N_c}^2 - m^2\zeta_1\big)/\nu \xrightarrow{\mathcal{D}} N(0,1) \quad \text{as } c \downarrow 0, \tag{3.25}$$

where $s_{N_c}^2$ can also be replaced by $s_{N_c-1}^2$ in (3.25). Hence, using the Mann-Wald theorem, we obtain that

$$2m\zeta_1^{1/2}\,N_c^{1/2}\big(s_{N_c} - m\zeta_1^{1/2}\big)/\nu \xrightarrow{\mathcal{D}} N(0,1) \quad \text{as } c \downarrow 0, \tag{3.26}$$

where in (3.26), also, $s_{N_c}$ may be replaced by $s_{N_c-1}$. From (3.26) and the definition of the stopping time in (2.13), one finds that the sufficient conditions in Theorem 3 of Ghosh and Mukhopadhyay (1979) hold here as well. A direct appeal to this theorem now yields (2.19). Q.E.D.
4. SOME ADDITIONAL REMARKS

Ghosh, Sinha and Mukhopadhyay (1976) have considered sequential point estimation of the multinormal mean vector with unknown covariance matrix. If $\mathbf{X}_i$, $i \ge 1$, are i.i.d. random $p$ $(\ge 1)$-vectors with $E\mathbf{X}_1 = \boldsymbol{\mu}$ and $V(\mathbf{X}_1) = \boldsymbol{\Sigma}$, positive definite (p.d.), then, assuming the loss function, based on $\mathbf{X}_1,\ldots,\mathbf{X}_n$, to be of the form
$$L_n = (\bar{\mathbf{X}}_n - \boldsymbol{\mu})'\mathbf{A}(\bar{\mathbf{X}}_n - \boldsymbol{\mu}) + cn, \tag{4.1}$$

where $\mathbf{A}$ is a known p.d. weight matrix, $c$ is the cost per unit sample and $\bar{\mathbf{X}}_n = n^{-1}\sum_{i=1}^{n}\mathbf{X}_i$, the risk function is given by

$$R_n = n^{-1}\operatorname{Tr}(\mathbf{A}\boldsymbol{\Sigma}) + cn, \tag{4.2}$$

so that, for known $\boldsymbol{\Sigma}$, the risk is minimized at

$$n^0 = \{c^{-1}\operatorname{Tr}(\mathbf{A}\boldsymbol{\Sigma})\}^{1/2}. \tag{4.3}$$

For unknown $\boldsymbol{\Sigma}$,
by analogy to (2.13), we define the stopping time

$$N = \text{smallest positive integer } n\ (\ge 2) \text{ for which } n \ge \big[c^{-1}\{\operatorname{Tr}(\mathbf{A}\,\mathbf{S}_n) + n^{-\gamma}\}\big]^{1/2}, \tag{4.4}$$

where $\mathbf{S}_n = (n-1)^{-1}\sum_{i=1}^{n}(\mathbf{X}_i - \bar{\mathbf{X}}_n)(\mathbf{X}_i - \bar{\mathbf{X}}_n)'$, $n \ge 2$. Now, $\operatorname{Tr}(\mathbf{A}\,\mathbf{S}_n)$ is a U-statistic, and the results of the previous sections apply to yield an "asymptotically risk efficient" sequential procedure. In this context, the multinormality of the $\mathbf{X}_i$ is no longer needed.
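As with (2.13), the rule (4.4) is straightforward to simulate. The sketch below is our illustration, not the authors' procedure verbatim; `numpy` is assumed available and is used only for the matrix algebra, and all names are hypothetical.

```python
import numpy as np

def multivariate_stopping_time(c, gamma, A, draw, rng, n_min=2):
    """Sequential rule (4.4): stop at the smallest n >= n_min with
    n >= sqrt((Tr(A S_n) + n^{-gamma}) / c); estimate mu by the sample mean."""
    xs = [draw(rng) for _ in range(n_min)]
    while True:
        n = len(xs)
        S = np.cov(np.vstack(xs), rowvar=False, ddof=1)  # S_n of (4.4)
        if n >= ((np.trace(A @ S) + n ** (-gamma)) / c) ** 0.5:
            return n, np.vstack(xs).mean(axis=0)
        xs.append(draw(rng))

rng = np.random.default_rng(1979)
A = np.eye(2)                                   # weight matrix of (4.1)
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])      # true covariance; Tr(A Sigma) = 3
N, mu_hat = multivariate_stopping_time(
    c=1e-3, gamma=0.6, A=A,
    draw=lambda r: r.multivariate_normal([0.0, 0.0], Sigma), rng=rng)
print(N, mu_hat)  # (4.3) gives n^0 = (3 / 0.001)^{1/2}, about 55
```

Here the multivariate normal is used only to generate data; as remarked above, nothing in the procedure itself requires multinormality.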
In the context of jackknifing, Sen (1977) has considered a class of smooth functions of U-statistics. Under his assumed boundedness conditions on the first and second order (partial) derivatives and the conditions of Cramér (1946, p. 353), the results of Sections 2 and 3 can also be extended to such functions of U-statistics.
Finally, Robbins (1959), while considering the case of the normal mean, proposed a slightly different loss function, namely,

$$L_n = a|U_n - \theta(F)| + cn. \tag{4.5}$$

Our asymptotically risk efficient procedure also holds for such a loss function, provided in (2.6) through (2.10) we make the necessary modifications. Note that as $n \to \infty$,

$$E|U_n - \theta(F)| \sim (2/\pi)^{1/2}\,m(\zeta_1/n)^{1/2}, \tag{4.6}$$

so that the optimal value $n_c^*$, in this case, is given by

$$n_c^* \sim \big(a^2m^2\zeta_1/2\pi c^2\big)^{1/3} \quad \text{as } c \downarrow 0, \tag{4.7}$$

and, analogous to (2.13), we define the stopping number by

$$N_c = \min\big\{n \ge n_0:\ n \ge \{a^2(s_n + n^{-\gamma})^2/(2\pi c^2)\}^{1/3}\big\}, \tag{4.8}$$

where $\gamma$ and $n_0$ are defined as in Section 2. With these modifications, the theorems in Section 2 extend directly to the case of the loss function defined by (4.5).
A similar case holds for $L_n = a|U_n - \theta(F)|^b + cn$ for some $b > 0$, where, for $b \ge 2$, we need to replace condition (ii) of Theorem 1 by $E|\phi|^{(2+\delta)b} < \infty$ for some $\delta > 0$.

5. APPENDIX
Proof of Lemma 2.1. Note that by (2.6),

$$R_c(n+2; a, F) - 2R_c(n+1; a, F) + R_c(n; a, F) = a\big(\sigma_{n+2}^2 - 2\sigma_{n+1}^2 + \sigma_n^2\big), \tag{5.2}$$

where $\sigma_{n+1}^2 \le \sigma_n^2$, $\forall\ n \ge m$. Hence, it suffices to show that for all $n \ge m$, $\Delta^2 R_c(n; a, F) \ge 0$. For this, define, as in Hoeffding (1948),

$$\delta_d = \sum_{i=0}^{d}(-1)^i\binom{d}{i}\zeta_{d-i}, \quad \text{so that } \delta_0 = \zeta_0 = 0. \tag{5.3}$$

Then

$$\zeta_d = \sum_{i=0}^{d}\binom{d}{i}\delta_{d-i} = \sum_{i=0}^{d}\binom{d}{d-i}\delta_i. \tag{5.4}$$

From (2.4), (5.3) and (5.4), we have, by some standard steps,

$$\sigma_n^2 = \binom{n}{m}^{-1}\sum_{i=1}^{m}\binom{m}{i}\binom{n-i}{m-i}\delta_i, \tag{5.5}$$

so that, by (5.2) and (5.5), we have

$$\Delta^2 R_c(n; a, F) = a\sum_{i=1}^{m}\binom{m}{i}\delta_i\Big[\binom{n+2-i}{m-i}\binom{n+2}{m}^{-1} - 2\binom{n+1-i}{m-i}\binom{n+1}{m}^{-1} + \binom{n-i}{m-i}\binom{n}{m}^{-1}\Big] \ge 0, \tag{5.6}$$

as

$$(n-i+1)(n-i+2) - 2(n+2)(n-i+1) + (n+1)(n+2) = i^2 + i > 0, \quad \forall\ i \ge 1,$$

and, by Lemma 5.1 of Hoeffding (1948), $\delta_k \ge 0$, $\forall\ k = 0, 1, \ldots, m$. Q.E.D.
Proof of Lemma 2.2. Let $n_0 = [n/m]$, $n \ge m$, and let

$$T_n = n_0^{-1}\sum_{r=1}^{n_0}\phi(X_{(r-1)m+1},\ldots,X_{rm}). \tag{5.7}$$

Then, $ET_n = \theta(F)$, $\forall\ n \ge m$, and, defining $\mathcal{F}_n$ as after (3.4),

$$E(T_n \mid \mathcal{F}_n) = U_n, \quad \forall\ n \ge m, \tag{5.8}$$

so that, by the Jensen inequality for conditional expectations,

$$E|U_n - \theta(F)|^r \le E|T_n - \theta(F)|^r, \quad \text{for any } r \ge 1. \tag{5.9}$$

On the other hand, $T_n$ is an average of $n_0$ i.i.d.r.v.'s, and hence, Theorem 3 of Sen (1970) applies to the right hand side of (5.9), and this yields (2.20) and (2.21). Q.E.D.

As noted already, the above generalizes and strengthens the results of Funk (1970) and Grams and Serfling (1973), where they needed $r$ to be a positive integer. Also, our method of proof is elementary and quite different from the earlier ones.
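The blocking device in (5.7)-(5.9) can also be checked numerically; the sketch below is ours, reusing the hypothetical `phi`, `sample`, and `u_statistic` from the Section 2 illustration.

```python
def t_statistic(x, kernel, m):
    """T_n of (5.7): the kernel averaged over n0 = [n/m] disjoint blocks of length m.
    Since E(T_n | F_n) = U_n by (5.8), T_n is an unbiased but noisier companion of
    U_n, and Jensen's inequality yields the moment bound (5.9)."""
    n0 = len(x) // m
    return sum(kernel(*x[r * m:(r + 1) * m]) for r in range(n0)) / n0

# T_n and U_n estimate the same theta(F); U_n averages over far more subsets.
print(t_statistic(sample, phi, m=2), u_statistic(sample, phi, m=2))
```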
REFERENCES

[1] ANSCOMBE, F.J. (1952). Large sample theory of sequential estimation. Proc. Camb. Phil. Soc., 48, 600-607.

[2] CRAMÉR, H. (1946). Mathematical Methods of Statistics. Princeton Univ. Press, New Jersey.

[3] FUNK, G.M. (1970). The probabilities of moderate deviations of U-statistics and excessive deviations of Kolmogorov-Smirnov and Kuiper statistics. Ph.D. dissertation, Michigan State University.

[4] GHOSH, M., SINHA, B.K. and MUKHOPADHYAY, N. (1976). Multivariate sequential point estimation. J. Mult. Anal., 6, 281-294.

[5] GHOSH, M. and MUKHOPADHYAY, N. (1979). Sequential point estimation of the mean when the distribution is unspecified. Comm. Statist., 8, to appear in the July issue.

[6] GRAMS, W.F. and SERFLING, R.J. (1973). Convergence rates for U-statistics and related statistics. Ann. Statist., 1, 153-160.

[7] HOEFFDING, W. (1948). A class of statistics with asymptotically normal distribution. Ann. Math. Statist., 19, 293-325.

[8] MILLER, R.G., JR. and SEN, P.K. (1972). Weak convergence of U-statistics and von Mises' differentiable statistical functions. Ann. Math. Statist., 43, 31-41.

[9] ROBBINS, H. (1959). Sequential estimation of the mean of a normal population. Probability and Statistics (H. Cramér Volume), Almquist and Wiksell, Uppsala, 235-245.

[10] SEN, P.K. (1960). On some convergence properties of U-statistics. Cal. Statist. Assoc. Bull., 10, 1-18.

[11] SEN, P.K. (1970). On some convergence properties of one-sample rank order statistics. Ann. Math. Statist., 41, 2140-2143.

[12] SEN, P.K. (1974). On L_p-convergence of U-statistics. Ann. Inst. Statist. Math., 26, 55-60.

[13] SEN, P.K. (1977). Some invariance principles relating to jackknifing and their role in sequential analysis. Ann. Statist., 5, 316-329.

[14] SPROULE, R.N. (1969). A sequential fixed width confidence interval for the mean of a U-statistic. Ph.D. dissertation, Univ. of North Carolina, Chapel Hill.

[15] SPROULE, R.N. (1974). Asymptotic properties of U-statistics. Trans. Amer. Math. Soc., 199, 55-64.

[16] STARR, N. (1966). On the asymptotic efficiency of a sequential procedure for estimating the mean. Ann. Math. Statist., 37, 1173-1185.

[17] STARR, N. and WOODROOFE, M. (1969). Remarks on sequential point estimation. Proc. Nat. Acad. Sci., U.S.A., 63, 285-288.

[18] STARR, N. and WOODROOFE, M. (1972). Further remarks on sequential point estimation: the exponential case. Ann. Math. Statist., 43, 1147-1154.