ON TIME-SEQUENTIAL POINT ESTIMATION OF THE MEAN OF AN
EXPONENTIAL DISTRIBUTION
by
PRANAB KUMAR SEN
Department of Biostatistics
University of North Carolina at Chapel Hill
Institute of Statistics Mimeo Series No. 1207
January 1979
ON TIME-SEQUENTIAL POINT ESTIMATION OF THE MEAN OF AN
EXPONENTIAL DISTRIBUTION*
Pranab Kumar Sen
University of North Carolina, Chapel Hill, NC
ABSTRACT
In the context of life testing, an asymptotically risk-efficient
time-sequential procedure for estimating the mean of an exponential
distribution is considered and its various properties are studied.
AMS Classification No. 62L12

Key Words & Phrases: Asymptotic normality, asymptotic risk-efficiency, loss function, risk function, stopping number, stopping time, time-sequential procedure.
* Work supported by the National Institutes of Health, Contract No.
NIH-NHLBI-71-2243 from the National Heart, Lung and Blood Institute.
1. INTRODUCTION

Let $\{X_i,\ i \ge 1\}$ be a sequence of independent and identically distributed (i.i.d.) non-negative random variables (r.v.) with the distribution function (d.f.) $F_\theta(x) = 1 - e^{-x/\theta}$, $x \in [0, \infty)$, where $\theta\,(>0)$ is an unknown parameter. For $n\,(\ge 1)$ items under life testing, let $X_{n,1} \le \cdots \le X_{n,n}$ be the order statistics corresponding to $X_1, \ldots, X_n$. From cost and time considerations, one may curtail experimentation at the $k$-th failure $X_{n,k}$ and estimate $\theta$ by $\hat{\theta}_{nk} = k^{-1} V_{nk}$, where

(1.1)    $V_{nk} = \sum_{i=1}^{k} X_{n,i} + (n - k) X_{n,k}$,    for $k = 1, \ldots, n$,

is the total life under test up to the $k$-th failure. Note that $E(V_{nk}) = k\theta$ and $E(V_{nk} - k\theta)^2 = k\theta^2$, for $1 \le k \le n$. Thus, if $a_1\,(>0)$ and $a_2\,(>0)$ are respectively the cost of recruitment (per individual) and of follow-up (per unit of test-life), then one may conceive of the loss incurred in estimating $\theta$ by $\hat{\theta}_{nk}$ as

(1.2)    $L_{nk}(\mathbf{a}, \theta) = a_0(\hat{\theta}_{nk} - \theta)^2 + a_1 n + a_2 V_{nk}$,

where the weights $a_0\,(>0)$, $a_1$ and $a_2$ are all known, and $\mathbf{a} = (a_0, a_1, a_2)$. Thus, the risk in estimating $\theta$ by $\hat{\theta}_{nk}$ is

(1.3)    $R_{nk}(\mathbf{a}, \theta) = E\,L_{nk}(\mathbf{a}, \theta) = a_0 \theta^2/k + a_1 n + a_2 k\theta$,

and, naturally, one would seek to minimize (1.3) by a proper choice of $k$. However, as $\theta$ is unknown, no single value of $k$ minimizes $R_{nk}(\mathbf{a}, \theta)$ for all $\theta\,(>0)$, and hence, a time-sequential procedure for choosing such a value of $k$ is desirable.
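By way of numerical illustration, the quantities in (1.1)-(1.3) are easy to compute; the following minimal Python sketch (the values of $\theta$, $n$, $k$ and the cost weights below are arbitrary choices made for the example) compares the simulated mean loss (1.2) against the closed-form risk (1.3):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, k = 2.0, 50, 10            # assumed values, for illustration only
a0, a1, a2 = 1.0, 1e-4, 1e-3         # a0: estimation weight; a1: recruitment; a2: follow-up

losses = []
for _ in range(20000):
    xs = np.sort(rng.exponential(theta, size=n))
    v_nk = xs[:k].sum() + (n - k) * xs[k - 1]       # total life on test V_nk, (1.1)
    theta_hat = v_nk / k                            # estimator of theta after k failures
    losses.append(a0 * (theta_hat - theta) ** 2 + a1 * n + a2 * v_nk)  # loss (1.2)

print(np.mean(losses))                              # simulated risk
print(a0 * theta**2 / k + a1 * n + a2 * k * theta)  # exact risk (1.3)
```

The two printed values should agree closely, reflecting the unbiasedness of $\hat{\theta}_{nk}$ and $E(V_{nk}) = k\theta$.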
Motivated by the works of Robbins (1959), Starr and Woodroofe (1972) and Ghosh and Mukhopadhyay (1979) [all dealing with the classical sequential point estimation case], in Section 2 we formulate a time-sequential procedure for our problem and, under an asymptotic setup similar to theirs, study its various properties. The derivations of the main results are deferred to the concluding section.
2. TIME-SEQUENTIAL POINT ESTIMATION

Note that by (1.3),

(2.1)    $R_{nk}(\mathbf{a}, \theta) \ge \text{ or } < R_{n,k+1}(\mathbf{a}, \theta)$    according as $k(k+1) \le \text{ or } > \theta a_0/a_2$.

Thus, if $n(n-1) < \theta a_0/a_2$, then $R_{nk}(\mathbf{a}, \theta)$ is decreasing in $k\,(1 \le k \le n)$, and $k = n$ is the optimal choice. On the other hand, if $n(n-1) \ge \theta a_0/a_2$, then there exists an optimal $k_n\,(= k_n(\mathbf{a}, \theta)) < n$, and $R_{nk}(\mathbf{a}, \theta)$ is minimized for $k = k_n$. Since $\hat{\theta}_{nk} = k^{-1}V_{nk}$ is an unbiased estimator of $\theta$, replacing $\theta$ in the criterion $k(k+1) \ge \theta a_0/a_2$ by $\hat{\theta}_{nk}$ leads us, motivated by the above, to consider the following stopping number:

(2.2)    $N_n = N_n(\mathbf{a}) = \text{smallest } k\,(1 \le k \le n-1) \text{ for which } V_{nk} \le k^2(k+1)a_2/a_0$; and $N_n = n$ if $V_{nk} > k^2(k+1)a_2/a_0$ for every $1 \le k \le n-1$.

The corresponding stopping time is $X_{n,N_n}$, and the point estimator of $\theta$ is $\hat{\theta}_{nN_n} = N_n^{-1} V_{nN_n}$. Then, the risk corresponding to $\hat{\theta}_{nN_n}$ is

(2.3)    $R_n^*(\mathbf{a}, \theta) = E\,L_{nN_n}(\mathbf{a}, \theta)$.

We may recall that by definition,

(2.4)    $k_n = k_n(\mathbf{a}, \theta) = \text{smallest } k\,(1 \le k \le n-1) \text{ for which } k(k+1) \ge \theta a_0/a_2$; and $k_n = n$ if $n(n-1) < \theta a_0/a_2$.

Let then

(2.5)    $R_n^0(\mathbf{a}, \theta) = R_{nk_n}(\mathbf{a}, \theta)$.

Our primary interest centers around the behavior of (a) $N_n/k_n$ and (b) $R_n^*(\mathbf{a}, \theta)/R_n^0(\mathbf{a}, \theta)$ when we impose some asymptotic considerations on $\mathbf{a}$ and $n$.
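The rule (2.2) is a first-crossing rule over the ordered failures, and a minimal Python sketch of it (illustrative only; the function names are hypothetical) also yields the benchmark $k_n$ of (2.4):

```python
import numpy as np

def stopping_number(x, a0, a2):
    """N_n of (2.2): smallest k (1 <= k <= n-1) with V_nk <= k^2 (k+1) a2/a0, else n."""
    n = x.size
    xs = np.sort(x)
    for k in range(1, n):
        v_nk = xs[:k].sum() + (n - k) * xs[k - 1]   # total life on test V_nk, (1.1)
        if v_nk <= k**2 * (k + 1) * a2 / a0:
            return k
    return n

def oracle_k(n, theta, a0, a2):
    """k_n of (2.4): smallest k (1 <= k <= n-1) with k(k+1) >= theta*a0/a2, else n."""
    for k in range(1, n):
        if k * (k + 1) >= theta * a0 / a2:
            return k
    return n
```

The design point worth noting is that (2.2) is simply (2.4) with the unknown $\theta$ replaced by its current unbiased estimate $V_{nk}/k$; at the $k$-th failure only $V_{nk}$ is needed, so the rule can be applied in real time as the life test proceeds.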
In the classical sequential point estimation theory [cf. Robbins (1959) and others], $a_2 = 0$, $L_n = a_0(\hat{\theta}_n - \theta)^2 + a_1 n$, and the problem is to choose $n$ in such a way that the corresponding risk is minimized. In this context, one lets $a_1 \to 0$ and, in this asymptotic sense, obtains some optimal results. In our case, however, for a given $n$, the stopping number $N_n$ depends on $a_0$ and $a_2$, but not on $a_1$, and we let $a_2/a_0 \to 0$ or, simply, $a_2 \to 0$, keeping $a_0$ fixed. Note that our main interest lies in the case where $k_n$ in (2.4) is $< n$, and in this case, $a_2 n^2 > a_2 n(n-1) \ge \theta a_0 > 0$. We assume that the sample size $n = n(a_2)$ depends on $a_2$ in such a way that

(2.6)    $\lim_{a_2 \to 0} a_2 [n(a_2)]^2 = a^*$,    $0 < a^* < \infty$.
We may note that by (1.3), $R_{n'k}(\mathbf{a}, \theta) = R_{nk}(\mathbf{a}, \theta) + a_1(n' - n)$, $\forall\, n' \ge n$, and hence there is no point in increasing $n$ indefinitely even when we allow $a_1 \to 0$; so the restriction that $\{n(a_2)\}$ in (2.6) satisfies $0 < a^* < \infty$ entails no loss of generality. Secondly, we note that for $\{n\}$ satisfying (2.6), by (2.4),

(2.7)    $\lim_{a_2 \to 0} k_n/n = \gamma = (\theta a_0/a^*)^{1/2}$,    and we assume that $0 < \gamma < 1$.

In terms of (2.6), (2.7) demands that $a^* > \theta a_0$. Finally, as in the classical sequential point estimation case, we let $a_1 \to 0$ as well. More explicitly, we let

(2.8)    $a_1 = \rho a_2$,    where $\rho > 0$,    and allow $a_2 \to 0$.

Then, we have the following.
Theorem 1. Under (2.6) and (2.7),

(2.9)    $N_n/k_n \to 1$    almost surely (a.s.), as $a_2 \to 0$.

Moreover, for every real $x\,(-\infty < x < \infty)$, under (2.6) and (2.7),

(2.10)    $\lim_{a_2 \to 0} P\{2(N_n - k_n)/(\gamma n)^{1/2} \le x\} = \Phi(x)$,

where $\Phi$ is the standard normal d.f.; since $\gamma n \sim k_n$, (2.10) asserts that $N_n$ is asymptotically normal with mean $k_n$ and variance $k_n/4$.

Theorem 2. Under (2.6), (2.7) and (2.8),

(2.11)    $\lim_{a_2 \to 0} R_n^*(\mathbf{a}, \theta)/R_n^0(\mathbf{a}, \theta) = 1$.
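Both limits can be probed numerically; the following hedged Monte Carlo sketch (reusing stopping_number and oracle_k from the sketch in Section 2; the values of $\theta$, $a_0$, $\rho$ and $a^*$ are arbitrary choices satisfying $a^* > \theta a_0$) tracks $N_n/k_n$ and the risk ratio of (2.11) as $a_2 \to 0$:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, a0, rho, a_star = 1.0, 1.0, 1.0, 4.0   # assumed; (2.7) requires a* > theta*a0

for a2 in [1e-2, 1e-3, 1e-4]:
    n = int(round((a_star / a2) ** 0.5))      # sample size tied to a2 as in (2.6)
    a1 = rho * a2                             # (2.8)
    kn = oracle_k(n, theta, a0, a2)
    risk_opt = a0 * theta**2 / kn + a1 * n + a2 * kn * theta   # R_n^0 of (2.5)
    ratio_sum, loss_sum, reps = 0.0, 0.0, 2000
    for _ in range(reps):
        x = rng.exponential(theta, size=n)
        N = stopping_number(x, a0, a2)
        xs = np.sort(x)
        v = xs[:N].sum() + (n - N) * xs[N - 1]                 # V_{n,N_n}
        ratio_sum += N / kn
        loss_sum += a0 * (v / N - theta) ** 2 + a1 * n + a2 * v  # loss (1.2) at N_n
    print(a2, ratio_sum / reps, (loss_sum / reps) / risk_opt)  # both columns -> 1
```

As $a_2$ decreases, both the mean ratio $N_n/k_n$ and the estimated $R_n^*/R_n^0$ should drift toward 1, in agreement with (2.9) and (2.11).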
We may remark that by (2.2), for any given $n$, $N_n \le n$ with probability 1, so that (2.7) and (2.9) insure that $E(N_n/k_n) \to 1$ as $n \to \infty$. Also, $N_n = N_n(\mathbf{a})$ is nonincreasing in $a_2$, and, for any given $n\,(\ge 1)$, with probability 1 there exists an $a_2(n)\,(>0)$ such that $N_n = n$ for every $0 < a_2 \le a_2(n)$. Further, (2.11) holds even when in (2.8), $\rho = 0$. If $a_1/a_2 \to \infty$ as $a_2 \to 0$, then $R_n^*$ and $R_n^0$ are both dominated by the term $a_1 n$, and hence, (2.11) holds trivially.

3. PROOFS OF THEOREMS 1 AND 2

Let $V_{n0} = 0$ and $X_{n,0} = 0$, and set

(3.1)    $Z_{nk} = V_{nk} - V_{n,k-1} = (n - k + 1)(X_{n,k} - X_{n,k-1})$,    $1 \le k \le n$.

Then $Z_{n1}, \ldots, Z_{nn}$ are i.i.d.r.v., each having the d.f. $F_\theta(x) = 1 - e^{-x/\theta}$. Also, note that for every $n\,(\ge 1)$,

(3.2)    $V_{nk}$ is nondecreasing in $k$,    $0 \le k \le n$.

Further, note that for every $\eta > 0$,

(3.3)    $P\{X_{m,1} \le m^{-1-\eta} \text{ for some } m \ge n\} \le \sum_{k \ge 0} P\{X_{n2^k + j,\,1} \le (n2^k)^{-1-\eta} \text{ for some } 0 \le j \le n2^k\} \le \sum_{k \ge 0} \{1 - \exp(-2\theta^{-1}(n2^k)^{-\eta})\} \le 2\theta^{-1} \sum_{k \ge 0} (n2^k)^{-\eta} = 2\theta^{-1} n^{-\eta}(1 - 2^{-\eta})^{-1} \to 0$    as $n \to \infty$.
Since $V_{m1} = m X_{m,1}$, it follows from (3.1) and (3.3) that, for every $\eta > 0$,

(3.4)    $V_{m1} > m^{-\eta}$    a.s., as $m \to \infty$.

Let us now choose a positive number $\lambda$ such that

(3.5)    $\tfrac{1}{2} < \lambda < \tfrac{2}{3}$,    so that $2 - 3\lambda > 0$.

Then, under (2.6) and (2.7), by using (2.4) and (3.2),

(3.6)    $P\{N_m \le m^\lambda \text{ for some } m \ge n\} \le P\{\bigcup_{m \ge n} [V_{m1} \le (a_2 m^2/a_0)\{m^{2\lambda}(m^\lambda + 1)/m^2\}]\}$,

where $m^{2\lambda}(m^\lambda + 1)/m^2 \le 2 m^{3\lambda - 2}$ and, by (2.6), $a_2 m^2/a_0$ remains bounded, so that by (3.4) (with $\eta < 2 - 3\lambda$), the right-hand side (rhs) of (3.6) converges to 0 as $n \to \infty$.
Let us now denote by

(3.7)    $k_{n\varepsilon} = \max\{k:\ k(k+1) \le (1 - \varepsilon) k_n(k_n + 1)\}$,    $0 < \varepsilon < 1$,

where $k_n$ is defined by (2.4). Also, we choose $n$ so large that $n^\lambda \le k_{n\varepsilon}$. Then

(3.8)    $P\{N_m \le k_{m\varepsilon} \text{ for some } m \ge n\} \le P\{N_m \le m^\lambda \text{ for some } m \ge n\} + P\{\bigcup_{m \ge n} [V_{mk} \le k^2(k+1)a_2/a_0 \text{ for some } k:\ m^\lambda \le k \le k_{m\varepsilon}]\}$.

By (3.6), the first term on the rhs of (3.8) converges to 0 as $n \to \infty$, while by (3.7), the second term is bounded by

(3.9)    $\sum_{m \ge n} P\{(V_{mk} - k\theta)/k\theta < -\eta \text{ for some } k:\ m^\lambda \le k \le k_{m\varepsilon}\}$,

where $\eta\,(>0)$ depends on the $\varepsilon\,(>0)$ in (3.7). By (3.1), for every $n\,(\ge 1)$, $\{(V_{nk} - k\theta),\ 0 \le k \le n\}$ is a martingale, so that $\{(V_{nk} - k\theta)^4,\ 0 \le k \le n\}$ is a sub-martingale, and hence, by the Chow (1960) extension of the Hájek-Rényi inequality,

(3.10)    $P\{(V_{mk} - k\theta)/k\theta < -\eta \text{ for some } k:\ m^\lambda \le k \le k_{m\varepsilon}\} \le \eta^{-4} \sum_{k=[m^\lambda]}^{k_{m\varepsilon}} \{E(V_{mk} - k\theta)^4/\theta^4\}\{k^{-4} - (k+1)^{-4}\} \le \eta^{-4} \sum_{k \ge [m^\lambda]} O(k^{-3}) = \eta^{-4}\, O(m^{-2\lambda})$,

so that by (3.5) (whereby $2\lambda > 1$) and (3.10), the second term on the rhs of (3.8) converges to 0 as $n \to \infty$. Thus, for every $\varepsilon > 0$,

(3.11)    $N_n/k_n > 1 - \varepsilon$    a.s., as $n \to \infty$.

In a similar way, it follows that for every $\varepsilon > 0$,

(3.12)    $N_n/k_n < 1 + \varepsilon$    a.s., as $n \to \infty$,

and (2.9) follows from (3.11) and (3.12).
To prove (2.10), we note that for every (fixed) $u \in (-\infty, \infty)$,

(3.13)    $P\{N_n \ge k_n + u n^{1/2}\} = P\{V_{nk} > k^2(k+1)a_2/a_0,\ \forall\, k < k_n + u n^{1/2}\}$,

and we choose $n$ so large that $k_n + u n^{1/2} > k_{n\varepsilon}$, where $k_{n\varepsilon}$ is defined by (3.7) and $k_n$ by (2.4). Then, by using (3.11), the rhs of (3.13) can be written as

(3.14)    $P\{(V_{nk} - k\theta)/(\theta n^{1/2}) > (k/n^{1/2})[k(k+1)/(k_n(k_n+1)) - 1],\ \forall\, k:\ k_{n\varepsilon} \le k < k_n + u n^{1/2}\} + o(1)$.

Let us now consider a sequence $\{W_n,\ n \ge 1\}$ of stochastic processes $W_n = \{W_n(t),\ t \in [0, 1]\}$, where we let

$W_n(k/n) = (V_{nk} - k\theta)/(\theta n^{1/2})$, $k = 0, 1, \ldots, n$,  and  $W_n(t) = W_n(k/n)$ for $k/n \le t < (k+1)/n$, $k = 0, 1, \ldots, n-1$.

Then, by virtue of (3.1), the classical Donsker theorem applies, and we have

(3.15)    $W_n \Rightarrow W = \{W(t),\ t \in [0, 1]\}$,

where $W$ is a standard Wiener process on $[0, 1]$. As a corollary to (3.15), for every $\varepsilon' > 0$ and $\eta' > 0$, there exist a $\delta\ (0 < \delta < 1)$ and an $n_0$ such that

(3.16)    $P\{\sup\{|W_n(t) - W_n(s)|:\ 0 \le s < t \le s + \delta \le 1\} > \varepsilon'\} < \eta'$,    $\forall\, n \ge n_0$.

To make use of (3.15) and (3.16) in (3.14), we note that for $k = k_n + [u n^{1/2}]$, $(k/n^{1/2})[k(k+1)/(k_n(k_n+1)) - 1] \to 2u$ as $n \to \infty$, while for $k_{n\varepsilon} \le k \le k_n$ this boundary is $\le 0$ and the $k$ with $|k - k_n| \le u n^{1/2}$ lie within a $t$-interval of length $O(n^{-1/2})$. Thus, for every $\varepsilon' > 0$, the rhs of (3.14) can be expressed as

(3.17)    $P\{W_n(k_n/n) > 2u - \varepsilon'\} + o(1)$,

where, by (3.16), the error committed in replacing the polygonal boundary by the constant level $2u - \varepsilon'$ converges to 0 as $n \to \infty$ (or $a_2 \to 0$). Moreover, by (3.15) and (2.7),

(3.18)    $P\{W_n(k_n/n) > 2u - \varepsilon'\} \to P\{W(\gamma) > 2u - \varepsilon'\} = P\{W(1) > (2u - \varepsilon')/\gamma^{1/2}\} = (2\pi)^{-1/2} \int_{(2u - \varepsilon')/\gamma^{1/2}}^{\infty} \exp(-\tfrac{1}{2} t^2)\, dt$.

Thus, (2.10) follows from (3.13), (3.14), (3.17) and (3.18) on letting $u = \tfrac{1}{2}\gamma^{1/2} x$ and $\varepsilon' \to 0$. This completes the proof of Theorem 1.
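As a numerical sanity check on (2.10), one can standardize simulated stopping numbers and compare their first two moments with those of a standard normal variate; the sketch below reuses stopping_number and oracle_k from the sketch in Section 2, with arbitrarily chosen $\theta$, $a_2$ and $a^*$:

```python
import numpy as np

rng = np.random.default_rng(11)
theta, a0, a2, a_star = 2.0, 1.0, 1e-4, 4.0   # assumed; a* > theta*a0 as in (2.7)
n = int(round((a_star / a2) ** 0.5))          # (2.6)
kn = oracle_k(n, theta, a0, a2)
gamma = kn / n
z = []
for _ in range(4000):
    x = rng.exponential(theta, size=n)
    z.append(2 * (stopping_number(x, a0, a2) - kn) / (gamma * n) ** 0.5)
print(np.mean(z), np.std(z))                  # should be near 0 and 1 by (2.10)
```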
To prove Theorem 2, we first note that under (2.6), (2.7) and (2.8),

(3.19)    $(a^*/a_2)^{1/2} R_n^0(\mathbf{a}, \theta) \to a_0\theta^2/\gamma + \rho a^* + a^*\gamma\theta$    as $a_2 \to 0$.

Also, recalling that $n^{-1} N_n \le 1$ with probability 1, we have by (2.9),

(3.20)    $\lim_{n \to \infty} E(n^{-1} N_n)^m = \gamma^m$,    $\forall\, m = 1, 2, \ldots$

Further, by (2.2), (3.1), and the fact that $Z_{nk} \ge 0$, $\forall\, k \ge 1$, we have

(3.21)    $N_n(N_n - 1)^2 a_2/a_0 < V_{n,N_n - 1} \le V_{nN_n} \le N_n^2(N_n + 1)(a_2/a_0) I(N_n < n) + V_{nn} I(N_n = n)$.

Note that by (2.6) and (2.7),

(3.22)    $P\{N_n = n\} = P\{V_{nk} > k^2(k+1)a_2/a_0,\ \forall\, 1 \le k \le n-1\} \le P\{V_{n,n-1} - (n-1)\theta > (n-1)[n(n-1)a_2/a_0 - \theta]\} \le \theta^2 / \{(n-1)[n(n-1)a_2/a_0 - \theta]^2\} = O(n^{-1})$,

by (2.6) and the fact that $n(n-1)a_2/a_0 \to a^*/a_0 > \theta$. Also, $E(V_{nn}^2) = n(n+1)\theta^2$, so that by the Schwarz inequality and (3.22),

(3.23)    $E\{V_{nn} I(N_n = n)\} \le \{E V_{nn}^2\}^{1/2} \{P(N_n = n)\}^{1/2} = O(n^{1/2})$.

Further, by (3.21),

(3.24)    $a_2 |V_{nN_n} - a_0^{-1} a_2 N_n^3| \le 2 a_2^2 N_n^2 a_0^{-1} I(N_n < n) + a_2 |V_{nn} - a_0^{-1} a_2 n^3| I(N_n = n)$,

where, by (2.6), $a_2^2 N_n^2 I(N_n < n) \le a_2^2 n^2 = O(a_2 a^*)$, while, by (3.22) and (3.23), $a_2 E\{(V_{nn} + a_0^{-1} a_2 n^3) I(N_n = n)\} = O(a_2 n^{1/2}) + O(a_2^2 n^2)$. Hence, by (3.22), (3.23) and (3.24), we have

(3.25)    $(a^*/a_2)^{1/2} E\{a_2 |V_{nN_n} - a_0^{-1} a_2 N_n^3|\} = O(a_2^{1/4}) \to 0$    as $a_2 \to 0$.
On the other hand, by (3.20) and (2.4)-(2.7),

(3.26)    $(a^*/a_2)^{1/2} a_0^{-1} a_2^2 E(N_n^3) \to \gamma\theta a^*$    as $a_2 \to 0$,

so that, combining (3.25) and (3.26), $(a^*/a_2)^{1/2} a_2 E(V_{nN_n}) \to \gamma\theta a^*$. Also, by (2.8),

(3.27)    $(a^*/a_2)^{1/2} a_1 n \to \rho a^*$    as $a_2 \to 0$.

Hence, by (2.3), (3.19), (3.26) and (3.27), for (2.11), it suffices to show that

(3.28)    $(a^*/a_2)^{1/2} E(\hat{\theta}_{nN_n} - \theta)^2 \to \theta^2/\gamma$    as $a_2 \to 0$.

Note that

(3.29)    $E(\hat{\theta}_{nN_n} - \theta)^2 = k_n^{-2} E(V_{nN_n} - \theta N_n)^2 + k_n^{-2} E\{(V_{nN_n} - \theta N_n)^2 [(k_n/N_n)^2 - 1]\}$.

Now, for every $n\,(\ge 1)$, $\{V_{nk} - k\theta = \sum_{j=1}^{k}(Z_{nj} - \theta),\ 1 \le k \le n\}$ is a martingale, $E(Z_{nj} - \theta)^2 = \theta^2$ and $E N_n < \infty$. Hence, by the Wald second lemma [viz. Theorem 2 of Chow, Robbins and Teicher (1965)], we have $E(V_{nN_n} - \theta N_n)^2 = \theta^2 E N_n$, so that by (2.6), (2.7) and (3.20), the first term on the rhs of (3.29) satisfies

(3.30)    $(a^*/a_2)^{1/2} k_n^{-2} \theta^2 E(N_n) \to \theta^2/\gamma$    as $a_2 \to 0$.

Thus, we need to show that the second term on the rhs of (3.29), multiplied by $(a^*/a_2)^{1/2}$, converges to 0 as $a_2 \to 0$. Now, by the same technique as in (3.24)-(3.25), it follows that

(3.31)    $(a^*/a_2)^{1/2} k_n^{-2} E\{[V_{nN_n} - a_0^{-1} a_2 N_n^3]^2 |(k_n/N_n)^2 - 1|\} = O(n^{-1}) = O(a_2^{1/2}) \to 0$    as $a_2 \to 0$.
On the other hand, by (2.4), (2.6) and (2.7),

(3.32)    $(a^*/a_2)^{1/2} k_n^{-2} E\{[a_0^{-1} a_2 N_n^3 - N_n\theta]^2 |(k_n/N_n)^2 - 1|\}$
    $= (a^*)^{1/2} a_2^{3/2} a_0^{-2} E\{(N_n^2 - \theta a_0/a_2)^2 |1 - (N_n/k_n)^2|\} + O(a_2^{1/2})$
    $= (a^*)^{1/2} a_2^{3/2} a_0^{-2} k_n^{-2} E|N_n^2 - k_n^2|^3 + O(a_2^{1/2})$
    $\le 8 (a^*)^{1/2} a_2^{3/2} a_0^{-2} n^3 k_n^{-2} E|N_n - k_n|^3 + O(a_2^{1/2})$.

Thus, since $a_2^{3/2} n^3 \to (a^*)^{3/2}$ by (2.6) and $k_n/n \to \gamma$ by (2.7), it suffices to show that

(3.33)    $\lim_{n \to \infty} k_n^{-2} E|N_n - k_n|^3 = 0$    (as $a_2^{1/2} n \to (a^*)^{1/2}$).

Define $\lambda$ as in (3.5). Then

(3.34)    $E|N_n - k_n|^3 \le n^{3\lambda} + n^3 P\{|N_n - k_n| > n^\lambda\}$,

where, by (2.7) and (3.5) ($3\lambda < 2$), the first term on the rhs of (3.34), divided by $k_n^2$, converges to 0 as $n \to \infty$. On the other hand, the second term, divided by $k_n^2$, is bounded by

(3.35)    $n^3 k_n^{-2} P\{|N_n - k_n| > n^\lambda\} = O(n)\, P\{|N_n - k_n| > n^\lambda\}$.
Let $b_1 = 1/3 - \varepsilon_1$, $\varepsilon_1 > 0$. Then

(3.36)    $P\{N_n \le n^{b_1}\} = P\{V_{nk} \le k^2(k+1)a_2/a_0 \text{ for some } k \le n^{b_1}\} \le P\{V_{n1} \le (a_2/a_0) n^{2b_1}(n^{b_1} + 1)\} = P\{V_{n1} \le O(n^{-2 + 3b_1})\} = O(n^{-2 + 3b_1}) = O(n^{-1 - 3\varepsilon_1})$.

Also, let $k_{n\varepsilon}$ be defined as in (3.7). Then, proceeding as in (3.10) but using the 8th order moment of $(V_{nk} - k\theta)$, we obtain that

(3.37)    $P\{n^{b_1} \le N_n < k_{n\varepsilon}\} = O([n^{b_1}]^{-4}) = O(n^{-4/3 + 4\varepsilon_1})$.

Finally, let $k_n^* = k_n - n^\lambda$, and choose $n$ so large that $k_n^* > k_{n\varepsilon}$. Then

(3.38)    $P\{k_{n\varepsilon} < N_n \le k_n^*\} \le \sum_{k = k_{n\varepsilon}}^{k_n^*} P\{k^{-1}(V_{nk} - k\theta) < \theta [k(k+1)/(k_n(k_n+1)) - 1]\}$.

Now, for every $r = 2, 3, 4, \ldots$,

(3.39)    $E(V_{nk} - k\theta)^{2r} = O(k^r)$,

and, for $k_{n\varepsilon} \le k \le k_n^*$, $|k(k+1)/(k_n(k_n+1)) - 1| = O((k_n - k)/k_n)$, so that, by the Markov inequality, each term in (3.38) is bounded by

(3.40)    $k^{-2r} E(V_{nk} - k\theta)^{2r} \big/ \{\theta^{2r} [k(k+1)/(k_n(k_n+1)) - 1]^{2r}\}$,

and hence the rhs of (3.38) is

(3.41)    $O\big(\sum_{k = k_{n\varepsilon}}^{k_n^*} k^{-r} k_n^{2r} (k_n - k)^{-2r}\big)$.

Since (3.38)-(3.41) hold for every positive integer $r$ and $\lambda > \tfrac{1}{2}$, we may choose $r$ so large that $(2\lambda - 1)r - \lambda > 1$, and this bounds the rhs of (3.41) by $O(n^{-1 - \eta_1})$, for some $\eta_1 > 0$. A similar treatment holds for the case of $N_n \ge k_n + n^\lambda$. Thus, $P\{|N_n - k_n| > n^\lambda\} = O(n^{-1 - \eta_1}) + O(n^{-4/3 + 4\varepsilon_1}) + O(n^{-1 - 3\varepsilon_1})$, and, choosing $\varepsilon_1 < 1/12$, this proves that (3.35) converges to 0 as $n \to \infty$. Q.E.D.
REFERENCES

[1] CHOW, Y.S. (1960). A martingale inequality and the law of large numbers. Proc. Amer. Math. Soc. 11, 107-111.

[2] CHOW, Y.S., ROBBINS, H. and TEICHER, H. (1965). Moments of randomly stopped sums. Ann. Math. Statist. 36, 789-799.

[3] GHOSH, M. and MUKHOPADHYAY, N. (1979). Sequential point estimation of the mean when the distribution is unspecified. Comm. Statist. A8, in press.

[4] ROBBINS, H. (1959). Sequential estimation of the mean of a normal population. Probability and Statistics (H. Cramér volume), Almqvist & Wiksell, Uppsala, 235-245.

[5] STARR, N. and WOODROOFE, M. (1972). Further remarks on sequential estimation: the exponential case. Ann. Math. Statist. 43, 1147-1154.