ESTIMATION OF OPTIMAL BLOCK REPLACEMENT POLICIES
By John I. Crowell and Pranab K. Sen
University of North Carolina, Chapel Hill
SUMMARY
Stochastic approximation methodology is applied to the estimation of optimal block
replacement policies.
Here optimal is taken to mean the block replacement policy which minimizes
long-run expected cost. The basic procedure suggested is refined to provide a faster rate of convergence
for the estimator.
NOTATION

In the following the symbol ∘ is used to denote the composition of functions. A superscript of the form (p), where p is a nonnegative integer, will indicate the p-th derivative of a function. For example, if H and g are differentiable functions, then H∘g^(1)(t) = (d/dt) H(g(t)). Wherever used, K_1, K_2, ... will represent appropriate constants. E and P will be used for expectation and probability, respectively. Convergence in distribution will be denoted by →_D. Finally, I[ ] will be the indicator function of the event within the brackets.
1. INTRODUCTION
An important topic in reliability theory is maintenance policies for systems ( mechanical, electrical,
etc. ) whose components have random lifetimes. Consider a system consisting of a single component or
part.
It is assumed that this part can be replaced by a like part at any time and is not repaired
following failure.
Suppose that all parts used have independent and identically distributed lifetimes
with distribution F. Further assume that the cost C_1 of having to unexpectedly replace a failed part is greater than the cost C_2 of making a replacement at a planned time. If F has increasing failure rate,
then it may be less expensive to substitute new parts for aging parts near failure than to only make
replacements following failure. Two maintenance plans motivated by this consideration are the block
replacement policy (BRP) and the age replacement policy (ARP). Both of these policies are treated in
detail by Barlow and Proschan (1965, 1975), where some historical development is also given.
Suppose a new part is installed at time 0. In the simplest BRP, the part in use is replaced by a new part at times T_1, T_2, T_3, ..., and replaced following failure at any other time. Such a plan is termed periodic if T_1 = T, T_2 = 2T, T_3 = 3T, ..., where T is some positive real number. For the periodic BRP, the expected long-run cost per unit time as a function of T is

    B(T) = [ C_1 H(T) + C_2 ] / T .

Here H is the renewal function associated with the lifetime distribution F. Thus H(T) is the expected number of failures in each of the intervals (0, T], (T, 2T], etc. (It is assumed for both the BRP and the ARP that all replacements are made instantaneously.)
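Since H is rarely available in closed form, B(T) can be approximated by simulating renewal counts. The sketch below is an illustration only: the Weibull lifetime distribution and the cost values C1 = 5, C2 = 1 are assumptions made for the example, not values taken from the report. It estimates H(T) by Monte Carlo and evaluates B(T) = (C1 H(T) + C2)/T on a small grid.

```python
import random

def renewal_count(T, sample_lifetime):
    """Number of failures (renewals) in (0, T] along one sample path."""
    t, n = 0.0, 0
    while True:
        t += sample_lifetime()
        if t > T:
            return n
        n += 1

def estimate_B(T, sample_lifetime, C1, C2, n_paths=2000):
    """Monte Carlo estimate of B(T) = (C1*H(T) + C2)/T, H the renewal function."""
    H_hat = sum(renewal_count(T, sample_lifetime) for _ in range(n_paths)) / n_paths
    return (C1 * H_hat + C2) / T

rng = random.Random(1)
weibull = lambda: rng.weibullvariate(1.0, 2.0)   # IFR lifetimes (Weibull, shape 2)
costs = {T: estimate_B(T, weibull, C1=5.0, C2=1.0) for T in (0.25, 0.5, 1.0, 2.0)}
```

With C_1 substantially larger than C_2 the estimated cost curve dips and then rises in T, and its minimizer is the optimal periodic BRP that the report sets out to estimate.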
Under an ARP, if a part has not failed and been replaced prior to reaching some fixed age T, it is preventively replaced at age T. The expected long-run cost per unit time can be shown to be

    A(T) = [ C_1 F(T) + C_2 S(T) ] / ∫_0^T S(t) dt ,

where S(t) = 1 − F(t).
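For a concrete F the ARP cost can be evaluated directly by numerical integration. The sketch below is an illustration under assumed inputs: a Weibull distribution with shape 2 and costs C1 = 5, C2 = 1 (both assumptions for the example), with a trapezoidal rule for the integral of S.

```python
import math

def A(T, F, C1, C2, n_grid=2000):
    """A(T) = (C1*F(T) + C2*S(T)) / integral_0^T S(t) dt, with S = 1 - F."""
    S = lambda t: 1.0 - F(t)
    h = T / n_grid
    integral = h * (0.5 * S(0.0) + sum(S(i * h) for i in range(1, n_grid)) + 0.5 * S(T))
    return (C1 * F(T) + C2 * S(T)) / integral

F = lambda t: 1.0 - math.exp(-t * t)   # Weibull(shape 2): increasing failure rate
values = {T: A(T, F, C1=5.0, C2=1.0) for T in (0.3, 0.6, 1.0, 3.0)}
```

Scanning values shows the interior minimum whose existence Barlow and Proschan's sufficient conditions guarantee for IFR lifetimes.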
For both the age and block replacement policies an important practical question is what value of T, if any, minimizes expected cost per unit time. Barlow and Proschan (1965) give sufficient conditions for the existence of an optimal ARP T_0, that is, a value T_0 which minimizes A(T). For a given distribution F, finding such a T_0 is a deterministic problem. For instance, Glasser (1967) has computed T_0 for selected gamma, Weibull, and truncated normal distributions.
Frees (1983) summarizes some approaches that have been taken to the problem of estimating T_0 for the ARP. Consistent estimators were defined by Arunkumar (1972) and Bergman (1979) using an i.i.d. sample from the distribution F. Bather (1977) proposed a sequential scheme that uses estimates of T_0 to censor observations (make planned replacements), essentially a "random" ARP. Bather's procedure has the desirable feature that asymptotically the incurred cost per unit time is minimized, as though the optimal ARP were used all along.
Frees (1983) successfully applies stochastic approximation methodology to the problem of
finding the optimal ARP when the distribution F is unknown.
Now suppose T_0 minimizes the BRP cost B(T) in T. Again, if F is known the problem of finding T_0 is not statistical. At this time we have not been able to locate any procedure in the literature for estimating T_0 if no parametric form for F is assumed.
This report investigates using stochastic approximation to estimate the optimal BRP in a
nonparametric setting.
To motivate the application of stochastic approximation to this problem, consider the following ad hoc procedure. Suppose a periodic BRP is being carried out as described above with scheduled replacements at times T, 2T, 3T, ... for some T > 0. If T is less than the optimal value T_0, then an unnecessary number of planned replacements will be made. This would suggest that to decrease costs one should increase the value of T. If T is greater than T_0, then there are apt to be too many part failures, which could be remedied by adjusting the value of T downwards. The stochastic approximation algorithm introduced in Section 2 incorporates this idea of making successive adjustments to T to arrive at the value which minimizes cost.
In Section 2 sufficient conditions are given for the a.s. convergence and asymptotic normality of the basic estimator of T_0. A refinement to speed the rate of convergence of the procedure is explored in Section 3. The work presented here follows fairly closely that of Frees (1983) on estimation of optimal age replacement policies using stochastic approximation.
2. THE BASIC ALGORITHM
Let { a_n } be a sequence of positive constants decreasing to zero. Suppose M is a differentiable regression function on the real line which has a minimum at Z*. Take Z_1 to be an initial estimate of Z*. The usual stochastic approximation algorithm defines estimates of Z* recursively by

(2.1)    Z_{n+1} = Z_n − a_n M_n^(1)(Z_n) ,

where M_n^(1)(Z_n) is an estimate of M^(1)(Z_n). Such a procedure is now considered for finding an optimal BRP.
Recall that with previously defined notation the cost function for the BRP is

    B(t) = [ C_1 H(t) + C_2 ] / t ,

where H is the renewal function associated with F and C_1 > C_2. The objective is to minimize B in t. Assume that F is absolutely continuous so that the renewal density h = H^(1) exists. Define

    D(t) = C_1 [ t h(t) − H(t) ] − C_2 ,

so that B^(1)(t) = D(t)/t^2. Suppose that B(t) has a unique minimum at t = φ* with D(φ*) = 0.
Given an initial estimate φ*_1 of φ*, (2.1) suggests defining successive estimates of φ* by the formula

    φ*_{n+1} = φ*_n − a_n D_n(φ*_n) ,

where D_n(φ*_n) is an estimate of D(φ*_n). This procedure would have the disadvantage of possibly generating negative estimates when necessarily φ* is greater than zero. If it is given that φ* is greater than some known ε > 0, then one might use the algorithm

    φ*_{n+1} = max{ ε , φ*_n − a_n D_n(φ*_n) } .

Rather than constraining the estimates in this way, the following approach is taken.

Assume there exist known nonnegative constants T_1 and T_2 such that T_1 < φ* < T_2. We allow the possibility that T_1 = 0 and T_2 = ∞. Let g: R → (T_1, T_2) be a known strictly increasing function having s+1 bounded derivatives. Suppose φ minimizes B∘g(t) in t, so that φ* = g(φ). Define

    Dg(t) = D(g(t)) g^(1)(t) .

Since B∘g^(1)(t) = Dg(t)/[g(t)]^2 and Dg(φ) = 0, we now consider using a stochastic approximation procedure to estimate the zero of Dg. Given an initial estimate φ_1, define successive estimates of φ by

(2.2)    φ_{n+1} = φ_n − a_n Dg_n(φ_n) ,

where Dg_n(φ_n) is an estimate of Dg(φ_n). Given φ_n, take g(φ_n) as an estimate of φ*.
The next step is to define an estimator Dg_n(t) of Dg(t). Let { X_ij }_{i,j=1}^∞ be i.i.d. random variables with distribution function F defined on a probability space (Ω, F, P). Take N_n = { N_n(t), t ≥ 0 } to be the renewal process associated with the sequence { X_nj }_{j≥1}, with renewals occurring at X_n1, X_n1 + X_n2, X_n1 + X_n2 + X_n3, ... for n = 1, 2, .... Let { c_n } be a sequence of positive constants decreasing to zero. Define

    hg_n(t) = [ N_n(g(t + c_n)) − N_n(g(t − c_n)) ] / (2 c_n) ,

an estimate of H∘g^(1)(t). Finally, let

(2.3)    Dg_n(t) = C_1 [ g(t) hg_n(t) − g^(1)(t) N_n(g(t)) ] − C_2 g^(1)(t) ,

with estimates of φ defined recursively by (2.2).
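The recursion (2.2) can be simulated end to end. In the sketch below every concrete choice is an assumption made for illustration: lifetimes are Weibull with shape 2, g is a scaled logistic mapping R onto (T1, T2), the gains are a_n = A/n and c_n = C n^(-gamma), and the estimate of Dg is built by plugging the finite-difference estimate of (H∘g)' and an observed renewal count into Dg(t) = D(g(t)) g^(1)(t) with D(t) = C1[t h(t) − H(t)] − C2. Each iteration draws a fresh, independent renewal path, mirroring the row-wise construction of N_n.

```python
import math, random

rng = random.Random(0)
lifetime = lambda: rng.weibullvariate(1.0, 2.0)        # i.i.d. IFR lifetimes with df F

def counts_at(times):
    """N(t) at each t in `times`, computed along ONE fresh renewal path."""
    tmax, renewals, s = max(times), [], 0.0
    while s <= tmax:
        s += lifetime()
        renewals.append(s)
    return [sum(1 for r in renewals if r <= t) for t in times]

T1, T2 = 0.05, 3.0                                     # assumed bracket T1 < phi* < T2
g  = lambda x: T1 + (T2 - T1) / (1.0 + math.exp(-x))   # strictly increasing, onto (T1, T2)
g1 = lambda x: (T2 - T1) * math.exp(-x) / (1.0 + math.exp(-x)) ** 2

C1, C2 = 5.0, 1.0                                      # failure cost > planned cost
A, C, gamma = 0.5, 0.5, 0.2                            # untuned gain constants

phi = 0.0                                              # initial estimate phi_1
for n in range(1, 4001):
    a_n, c_n = A / n, C * n ** -gamma
    N_lo, N_mid, N_hi = counts_at([g(phi - c_n), g(phi), g(phi + c_n)])
    hg = (N_hi - N_lo) / (2.0 * c_n)                   # estimate of (H o g)'(phi)
    Dg = C1 * (g(phi) * hg - g1(phi) * N_mid) - C2 * g1(phi)
    phi -= a_n * Dg                                    # recursion (2.2)

estimate = g(phi)                                      # estimate of the optimal phi*
```

The transform g keeps every iterate inside (T1, T2) without an explicit projection step, which is exactly the motivation the report gives for introducing g.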
We will make use of the following assumptions.

A1: The distribution function F has increasing failure rate. F is absolutely continuous with bounded density f, F(0) = 0, and μ_2 = ∫_0^∞ t^2 dF(t) < ∞.

A2: There exist known nonnegative constants T_1 and T_2 such that T_1 < φ* < T_2 ≤ ∞. For t ∈ (T_1, T_2), D(t)(t − φ*) > 0 for t ≠ φ*.

A2': A2 holds with T_2 < ∞.

A3: g: R → (T_1, T_2) is a known, continuous, strictly increasing function with s+1 bounded derivatives such that g^(2) is continuous.

A4: { a_n } and { c_n } are sequences of positive constants such that lim_{n→∞} a_n = lim_{n→∞} c_n = 0, Σ_{n=1}^∞ a_n = ∞, Σ_{n=1}^∞ a_n c_n < ∞, and Σ_{n=1}^∞ a_n^2 / c_n^2 < ∞.

A5: (a) H∘g^(2)(t) exists for each t ∈ R and is bounded.
    (b) A5(a) holds and H∘g^(2) is continuous at φ.
    (c) H∘g^(2)(t) exists for each t ∈ R and for some constant K_1 > 0 satisfies |H∘g^(2)(s) − H∘g^(2)(t)| ≤ K_1 |s − t| for all s, t.

A5': (a) H∘g^(3)(t) exists for each t ∈ R and is bounded.
     (b) A5'(a) holds and H∘g^(3) is continuous at φ.

A6: For constants A, C > 0, let Γ = A (d/dt) Dg(t)|_{t=φ}, a_n = A n^{−1}, and c_n = C n^{−γ}, where γ ∈ (0, 1) is such that 1 − γ < 2Γ.
Before proceeding with the main results of this section, some simple but important consequences of the preceding assumptions are noted. For a fixed x > 0, let Y_n = N_n(g(x + c_n)) − N_n(g(x − c_n)). By a Taylor expansion and A1,

    E Y_n = H(g(x + c_n)) − H(g(x − c_n)) = H∘g^(1)(x) 2 c_n + o(c_n) .

For k ≥ 2,

    P{ Y_n = k } ≤ Σ_{j=1}^∞ [ F^(j)(g(x + c_n)) − F^(j)(g(x − c_n)) ] [ F( g(x + c_n) − g(x − c_n) ) ]^{k−1}
                = [ H(g(x + c_n)) − H(g(x − c_n)) ] [ F( g(x + c_n) − g(x − c_n) ) ]^{k−1}
                ≤ K_1 c_n [ F( g(x + c_n) − g(x − c_n) ) ]^{k−1} ,

where F^(j) here denotes the j-fold convolution of F with itself. Since c_n ↓ 0, ultimately F( g(x + c_n) − g(x − c_n) ) < 1, and thus for any integer p > 0,

(2.4)    E{ Y_n^p I[ Y_n ≥ 2 ] } ≤ K_1 c_n Σ_{k=2}^∞ k^p [ F( g(x + c_n) − g(x − c_n) ) ]^{k−1} = o(c_n) .

The above implies that

    P{ Y_n = 1 } = H∘g^(1)(x) 2 c_n + o(c_n) ,

and that for any real number p > 0,

(2.5)    E Y_n^p = H∘g^(1)(x) 2 c_n + o(c_n) .
Theorems 2.1 and 2.2 give sufficient conditions for a.s. convergence and asymptotic normality of the estimates generated by the procedure.

Theorem 2.1: Assume A1-A4 hold. Further suppose that any one of the assumptions A5(a), A5(c), or A5'(a) holds. Then φ_n → φ a.s.

Theorem 2.2: Assume A1, A2', A3, and A6 hold. Let Σ = C_1^2 [g(φ)]^2 H∘g^(1)(φ)/2. With Γ defined by A6:

(a) If A5(b) or A5(c) holds with 1/3 ≤ γ < 1, or (b) if A5'(a) holds with 1/5 < γ < 1, then

    n^{(1−γ)/2} (φ_n − φ) →_D N( 0 , A^2 C^{−1} Σ / (2Γ − (1−γ)) ) .

(c) If A5'(b) holds with γ = 1/5, then

    n^{2/5} (φ_n − φ) →_D N( T/(Γ − 2/5) , A^2 C^{−1} Σ / (2Γ − 4/5) ) ,

where T is the asymptotic bias constant identified in the proof.
The remainder of this section will be devoted to proofs of Theorems 2.1 and 2.2. The following theorem due to Robbins and Siegmund (1971) will be used in the proof of Theorem 2.1.
Theorem (Robbins-Siegmund): Let F_n be a nondecreasing sequence of sub sigma-fields of F. Suppose that X_n, ρ_n, η_n, and γ_n are nonnegative F_n-measurable random variables such that

    E_{F_n} X_{n+1} ≤ X_n (1 + ρ_n) + η_n − γ_n    for n = 1, 2, ....

Then lim_{n→∞} X_n exists and Σ_{n=1}^∞ γ_n < ∞ on the set such that Σ_{n=1}^∞ ρ_n < ∞ and Σ_{n=1}^∞ η_n < ∞.

Define the σ-fields F_n = σ( X_ij, i = 1, 2, ..., n−1, j = 1, 2, ... ). Use E_{F_n} to denote expectation with respect to the σ-field F_n. The process N_n is constructed from the n-th row { X_nj } and so is independent of F_n, while by (2.2) and (2.3) φ_n is F_n-measurable. Define

    Δ_n = E_{F_n} [ Dg_n(φ_n) ] − Dg(φ_n)

and

    V_n = c_n^{1/2} [ Dg_n(φ_n) − Dg(φ_n) − Δ_n ] .

In Lemmas 2.1 and 2.2 bounds are derived for Δ_n and E_{F_n} [ Dg_n(φ_n) ]^2, respectively, to be used in the application of the Robbins-Siegmund result. V_n plays an important role in the proof of Theorem 2.2. The asymptotic variance of V_n will be Σ = C_1^2 [g(φ)]^2 H∘g^(1)(φ)/2.
Several properties of increasing failure rate distributions are used in the proofs of Theorems 2.1 and 2.2. These properties are quoted here and referred to as necessary. See Barlow and Proschan (1965) for proofs of these results. Let F be an increasing failure rate distribution with mean μ and associated renewal function H. Suppose N = { N(t), t ≥ 0 } is a renewal process with underlying distribution F. Then

(2.6)    Var[ N(t) ] ≤ H(t) = E N(t) ≤ t/μ , for all t ≥ 0 ,

and

(2.7)    E [ N(t) ]^k ≤ e^{−t/μ} Σ_{j≥0} j^k (t/μ)^j / j! .
Lemma 2.1: Assume A1, A2, A3 and A4 hold.

Proof: By definition we have that, for some η such that |η − φ| ≤ |φ_n − φ|, the expansion together with the preceding implies (a), since g^(1) is bounded.

If A5(c) holds, then the corresponding bound follows by the Taylor expansion for g. If A5'(a) holds, then, with |η_i − φ_n| ≤ c_n, i = 1, 2, the same argument applies; in this case the stated rate follows, completing the proof of part (b) of the lemma.

Proof of Lemma 2.2: Using the definition of Dg_n and the fact that (a+b)^2 ≤ 4(a^2 + b^2), we have that
By (2.5) and A3 we have a bound for the first term. Since F has increasing failure rate, it follows from (2.6) and A3 that

    E_{F_n} [g^(1)(φ_n)]^2 [ C_1 N_n(g(φ_n)) + C_2 ]^2 ≤ K_8 + K_9 E_{F_n} N_n(g(φ_n)) + K_10 E_{F_n} [ N_n(g(φ_n)) ]^2
                                                      ≤ K_8 + K_9 [ g(φ_n)/μ ] + 2 K_10 [ g(φ_n)/μ ]^2 .

Noting that c_n = o(1), the lemma follows from the inequality for Dg_n^2 at the beginning of the proof.
Proof of Theorem 2.1: Letting U_n = φ_n − φ, we have by (2.2) an expression for U_{n+1}^2. Since φ_n is F_n-measurable, noting that

    E_{F_n} Dg_n(φ_n) = Dg(φ_n) + Δ_n ,

it follows that E_{F_n} U_{n+1}^2 satisfies an inequality of the Robbins-Siegmund form. If A5(a) holds, then Δ_n is controlled by Lemma 2.1; similarly under A5(c) or A5'(a), since c_n = o(1). Thus by Lemmas 2.1 and 2.2, and by A2 and A4, the assumptions of the Robbins-Siegmund result are met. Thus for some finite random variable ξ we have that φ_n → ξ a.s. and

    Σ_{n=1}^∞ a_n (φ_n − φ) Dg(φ_n) < ∞ .

Since (φ_n − φ) Dg(φ_n) > 0 for φ_n ≠ φ by A2, and Σ_{n=1}^∞ a_n = ∞ by A4, φ_n → φ a.s.
Before beginning the proof of Theorem 2.2, we quote a theorem of Fabian (1968) on asymptotic normality which is applicable to many stochastic approximation procedures. The univariate version of this theorem given here is taken from Frees (1983).

Theorem (Fabian): Suppose F_n is a nondecreasing sequence of sub σ-fields of F. Suppose U_n, V_n, T_n, Γ_n, and Φ_n are random variables such that Γ_n, Φ_{n−1}, and V_{n−1} are F_n-measurable. Let α, β, T, Σ, Γ, and Φ be real constants with Γ > 0 such that

    Γ_n → Γ , Φ_n → Φ , T_n → T or E| T_n − T | → 0 , E_{F_n} V_n = 0 ,

and there exists a constant C such that C > | E_{F_n} V_n^2 − Σ | → 0. Suppose, with σ_{j,r}^2 = E I[ V_j^2 ≥ r j^α ] V_j^2, that

    lim_{j→∞} σ_{j,r}^2 = 0 for all r , or α = 1 and lim_{n→∞} n^{−1} Σ_{j=1}^n σ_{j,r}^2 = 0 for all r .

Let β_+ = β I[ α = 1 ], where 0 < α ≤ 1, 0 ≤ β, β_+ < 2Γ, and

    U_{n+1} = U_n [ 1 − n^{−α} Γ_n ] + n^{−(α+β)/2} Φ_n V_n + n^{−α−β/2} T_n .

Then

    n^{β/2} U_n →_D N( T / (Γ − β_+/2) , Φ^2 Σ / (2Γ − β_+) ) .
Lemma 2.3: Assume A1, A2', A3, and A4 hold.
(a) If A5(b) holds, then d_n = o(c_n).
(b) If A5(c) or A5'(a) holds, then d_n = O(c_n^2).
(c) If A5'(b) holds, then c_n^{−2} d_n converges as n → ∞.

Proof: If A5(b) holds, then from the proof of Lemma 2.1 and since A2' holds, the bias can be written in terms of H∘g^(2)(η_i) with |η_i − φ_n| ≤ c_n, i = 1, 2. Since |φ_n − φ| → 0 a.s. by Theorem 2.1 and H∘g^(2) is continuous in a neighborhood of φ, part (a) holds.

It was seen while proving Lemma 2.1 that if A5(c) holds, then the bias is O(c_n^2), since g(φ_n) is bounded by A2'. Also from the proof of Lemma 2.1, when A5'(a) holds, the same rate obtains with g(φ_n) and H∘g^(3) bounded, and part (b) follows. Since φ_n → φ a.s., when H∘g^(3) is continuous in a neighborhood of φ, we also have (c).
Lemma 2.4: Assume A1, A2', and A3. Then for t > 0, E_{F_n} | c_n Dg_n(φ_n) |^t is bounded uniformly in n.

Proof: By the definition of Dg_n, since for any real numbers a, b, c, and d one can write | a+b+c+d |^t ≤ 4^t [ |a|^t + |b|^t + |c|^t + |d|^t ], the required moments reduce to moments of renewal counts. Since F has increasing failure rate, by (2.7) these are bounded by the corresponding Poisson moments of order [t+1], where [t+1] is the greatest integer less than or equal to t + 1. Thus, using A3 and the assumption that T_2 < ∞, the result is established.
Lemma 2.5: Assume A1, A2', A3, and A4 hold. Further suppose that one of the assumptions A5(b), A5(c), A5'(a), or A5'(b) holds. Then

(a) E_{F_n} V_n = 0 , and
(b) lim_{n→∞} E_{F_n} V_n^2 = Σ = C_1^2 [g(φ)]^2 H∘g^(1)(φ) / 2 .

Proof: For part (a) we note that E_{F_n} V_n = 0 by definition.

Next, let

    X_n = g(φ_n) hg_n(φ_n) ,
    Y_n = g(φ_n) [ H∘g(φ_n + c_n) − H∘g(φ_n − c_n) ] / (2 c_n) , and
    Z_n = g^(1)(φ_n) [ N_n(g(φ_n)) − H∘g(φ_n) ] ,

so that V_n^2 = c_n C_1^2 [ X_n − Y_n − Z_n ]^2. To show (b), we consider separately the conditional expectations E_{F_n} X_n^2, E_{F_n} X_n Y_n, etc.

(i) By (2.5), Theorem 2.1, and the continuity of H∘g^(1) in a neighborhood of φ,

    c_n E_{F_n} X_n^2 = c_n [g(φ_n)]^2 (2 c_n)^{−2} [ H∘g^(1)(φ_n) 2 c_n + o(c_n) ]
                      = [g(φ_n)]^2 (4 c_n)^{−1} [ H∘g^(1)(φ_n) 2 c_n + o(c_n) ]
                      → [g(φ)]^2 H∘g^(1)(φ) / 2    as n → ∞ .

(ii) Using (2.5) and the fact that T_2 < ∞,

    c_n E_{F_n} Y_n^2 ≤ K_1 [ 2 H∘g^(1)(φ_n) c_n + o(c_n) ]^2 / (2 c_n) → 0 .

(iii) By the conditional version of the Cauchy-Schwarz inequality, (2.5), (2.6), and the fact that T_2 < ∞,

    c_n E_{F_n} | X_n Z_n | ≤ K_2 o(1) → 0    as n → ∞ .

(iv) Since T_2 < ∞, c_n E_{F_n} X_n Y_n = c_n Y_n E_{F_n} X_n → 0 .

(v) For all n, E_{F_n} Y_n Z_n = 0 , since Y_n is determined by φ_n and E N_n(t) = H(t) .

(vi) It follows from the fact that T_2 < ∞ and (2.6) that

    c_n E_{F_n} Z_n^2 ≤ K_1 [ g(φ_n)/μ ] c_n ,

and thus lim_{n→∞} c_n E_{F_n} Z_n^2 = 0 .

The lemma follows immediately from (i)-(vi) since by definition V_n^2 = c_n C_1^2 [ X_n − Y_n − Z_n ]^2 .
Lemma 2.6: Assume A1, A2', and A3 hold. Then

(a) E_{F_n} V_n^2 is bounded for each n , and
(b) for any t > 0, lim_{n→∞} E_{F_n} ( c_n^{1/2} V_n )^t = 0 .

Proof: Since the conditions of Lemma 2.4 hold, the conditional moments of c_n Dg_n(φ_n) are bounded. By A1 and (2.5), c_n E_{F_n} [ hg_n(φ_n) ]^2 is bounded. Again by A1 and (2.5),

    c_n | E_{F_n} [ hg_n(φ_n) ] |^2 = (4 c_n)^{−1} [ 2 H∘g^(1)(φ_n) c_n + o(c_n) ]^2 ≤ K_2 .

This establishes (a).

By the definition of hg_n and (2.5) we have that

    c_n^t E_{F_n} | hg_n(φ_n) |^t ≤ 2^{−t} E_{F_n} [ N_n(g(φ_n + c_n)) − N_n(g(φ_n − c_n)) ]^{[t+1]}
                                  = 2^{−t} [ 2 H∘g^(1)(φ_n) c_n + o(c_n) ] .

Also by (2.5), the remaining centering terms obey the same order bound. Thus, using Lemma 2.4, E_{F_n} ( c_n^{1/2} V_n )^t → 0, which establishes (b).
Lemma 2.7: Let σ_{n,r}^2 = E{ V_n^2 I[ V_n^2 ≥ r n ] } for r = 1, 2, ... and n = 1, 2, .... Assume Lemma 2.6 and A6 hold. Then for each r, lim_{n→∞} σ_{n,r}^2 = 0 .

Proof: For the γ ∈ (0, 1) of assumption A6, take q > 1 and then p such that 1/p + 1/q = 1. The proof is then identical to that of Lemma 2.7 of Frees (1983), using the fact that by Lemma 2.6, E ( c_n^{1/2} V_n )^p = o(1) .
Proof of Theorem 2.2: We apply the result due to Fabian (1968) in the form quoted earlier. By the definition of V_n,

    Dg_n(φ_n) = Dg(φ_n) + Δ_n + c_n^{−1/2} V_n .

Expanding Dg(φ_n) about Dg(φ) = 0 we have that

    Dg(φ_n) = (φ_n − φ) (d/dt) Dg(t)|_{t=η_n} ,

where |η_n − φ| ≤ |φ_n − φ|. As before let U_n = φ_n − φ. Then

    U_{n+1} = U_n − a_n [ c_n^{−1/2} V_n + Dg(φ_n) + Δ_n ]
            = U_n [ 1 − a_n (d/dt) Dg(t)|_{t=η_n} ] − A C^{−1/2} n^{−1+γ/2} V_n − a_n Δ_n .

If we set Γ_n = A (d/dt) Dg(t)|_{t=η_n}, Φ_n = Φ = − A C^{−1/2}, T_n = − A n^{(1−γ)/2} Δ_n, α = 1, and β = 1 − γ, then

    U_{n+1} = U_n [ 1 − n^{−1} Γ_n ] + n^{−(α+β)/2} Φ_n V_n + n^{−1} n^{−(1−γ)/2} T_n .

Let Γ = A (d/dt) Dg(t)|_{t=φ}. Since (d/dt) Dg(t) is continuous at φ and φ_n → φ a.s. by Theorem 2.1, we have that Γ_n → Γ as n → ∞. With Σ = C_1^2 [g(φ)]^2 H∘g^(1)(φ)/2, by Lemmas 2.5 and 2.6 there exists a constant K such that | E_{F_n} V_n^2 − Σ | < K for all n and E_{F_n} V_n^2 → Σ as n → ∞. Using Lemma 2.7 we also have that lim_{j→∞} σ_{j,r}^2 = lim_{j→∞} E{ V_j^2 I[ V_j^2 ≥ r j^α ] } = 0 for all r. By assuming A6 we have required that 1 − γ < 2Γ.

It remains to establish the asymptotic behaviour of T_n. If A5(b) holds and γ ≥ 1/3, then by Lemma 2.3

    T_n = − A n^{(1−γ)/2} o(c_n) → 0 .

Under A5(c) or A5'(a), with γ > 1/5,

    T_n = − A n^{(1−γ)/2} O(c_n^2) → 0 .

If A5'(b) holds and γ = 1/5, then

    T_n = − A n^{(1−γ)/2} c_n^2 ( c_n^{−2} Δ_n ) = − A C^2 ( c_n^{−2} Δ_n ) ,

which converges by Lemma 2.3. The limiting normal distributions indicated then follow from the Fabian result.
3. IMPROVING THE RATE OF CONVERGENCE

This section is principally concerned with achieving a faster rate of convergence for the procedure (2.2) by using a kernel estimator for H∘g^(1). Kernel estimation of H∘g^(p+1) for integer p greater than zero is also considered here for future applications. The basic approach is that taken by Singh (1977) for kernel estimation of derivatives of a distribution function. Frees (1983) used estimators like those suggested by Singh to speed the rate of convergence of a stochastic approximation procedure. The development undertaken here is like that of Frees except that interest now lies in estimating derivatives of H∘g.
Let 0 ≤ p < r where p and r are integers. We will consider the problem of estimating H∘g^(p+1)(x) assuming that H∘g^(r+1)(x) exists and is bounded. Let hg_n continue to denote an estimator of H∘g^(1). For p ≥ 0, hg_n^(p) will denote an estimator of H∘g^(p+1). It is desirable to have hg_n^(p)(x) satisfy the following conditions:

A7: (a) sup_x | E hg_n^(p)(x) − H∘g^(p+1)(x) | = O( n^{−(r−p)/(2r+1)} ) .
    (b) For t > −1, sup_x E | hg_n(x) |^{t+1} = O( n^{t/(2r+1)} ) .

A8: For any integer t ≥ 0 and sequence { x_n } such that lim_{n→∞} x_n = x_0, there exist constants Φ_1 and Φ_{2,t} such that

    (a) lim_{n→∞} n^{(r−p)/(2r+1)} [ E hg_n^(p)(x_n) − H∘g^(p+1)(x_n) ] = Φ_1 , and
    (b) lim_{n→∞} n^{−t/(2r+1)} E [ hg_n(x_n) ]^{t+1} = Φ_{2,t} .
It will be seen that if an estimator hg_n(x) of H∘g^(1)(x) such that A7 and A8 hold is used in the SA algorithm considered in Section 2, then φ_n will converge to φ at a faster rate. As a first step a kernel estimator of H∘g^(p+1)(x) is defined which satisfies A7 and A8.

Denote by B the class of all Borel-measurable bounded functions K that vanish outside the interval (−1, 1). Take

    K_p = { K ∈ B : ∫_{−1}^{+1} u^j K(u) du = p! if j = p ; 0 if j ≠ p , j = 0, 1, ..., r−1 } .

As in Section 2, let N_n, n = 1, 2, ..., be independent renewal processes, each having underlying distribution F with renewals at times S_ni = X_n1 + X_n2 + ... + X_ni, i = 1, 2, .... For K ∈ K_p define

(3.1)    hg_n^(p)(x) = c_n^{−(p+1)} ∫_0^∞ K( ( g^{−1}(u) − x ) / c_n ) dN_n(u) ,

where { c_n } is a sequence satisfying A4 or A6.
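For p = 0 the estimator (3.1) is just a kernel-weighted count of renewal epochs. The sketch below is an illustration under assumptions: lifetimes are Weibull with shape 2 and g is taken to be the identity (so the target H∘g^(1) is the renewal density h itself). The Epanechnikov kernel lies in K_0 for r = 2, since it integrates to 1 and is symmetric.

```python
import math, random

rng = random.Random(3)

def renewal_epochs(tmax):
    """Renewal times S_1 < S_2 < ... <= tmax for Weibull(shape 2) lifetimes."""
    times, s = [], 0.0
    while True:
        s += rng.weibullvariate(1.0, 2.0)
        if s > tmax:
            return times
        times.append(s)

K = lambda u: 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0   # Epanechnikov, in K_0 (r = 2)

def hg_kernel(x, c_n, g_inv, tmax=10.0):
    """Kernel estimate (3.1), p = 0, of (H o g)'(x) from one renewal path."""
    return sum(K((g_inv(s) - x) / c_n) for s in renewal_epochs(tmax)) / c_n

# g = identity (g_inv = identity): the target is the renewal density h(x).
c_n, reps = 0.4, 4000
est = sum(hg_kernel(1.0, c_n, lambda t: t) for _ in range(reps)) / reps
```

Averaging over independent paths stands in for the expectation; the estimate lands near the renewal density h(1), which for these lifetimes is not far from its long-run level 1/μ.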
The next several results are analogous to those of Singh (1977) and Frees (1983) concerning estimation of the derivatives of a distribution function. The lemmas will provide sufficient conditions for A7 and A8 to hold for the estimator hg_n^(p)(x). Part (a) of Lemma 3.1 corresponds to Theorem 3.1 of Singh, while part (b) is similar in spirit to Lemma 3.1(b) of Frees.

Lemma 3.1: Let K ∈ K_p and hg_n^(p)(x) be defined by (3.1). Suppose H∘g^(r+1) is bounded. Then

(a) sup_x | E hg_n^(p)(x) − H∘g^(p+1)(x) | = O( c_n^{r−p} ) .

(b) Also suppose that { x_n } is a sequence such that lim_{n→∞} x_n = x_0, where H∘g^(r+1)(x) is continuous in a neighborhood of x_0. Then

    lim_{n→∞} c_n^{−(r−p)} [ E hg_n^(p)(x_n) − H∘g^(p+1)(x_n) ] = H∘g^(r+1)(x_0) (1/r!) ∫_{−1}^{+1} u^r K(u) du .
Proof: By the substitution t = ( g^{−1}(u) − x ) / c_n and an application of bounded convergence we have that

    E hg_n^(p)(x) = c_n^{−(p+1)} E { ∫_0^∞ K( ( g^{−1}(u) − x ) / c_n ) dN_n(u) }
                  = c_n^{−(p+1)} c_n ∫_{−1}^{+1} K(t) h∘g(x + c_n t) g^(1)(x + c_n t) dt
                  = c_n^{−p} ∫_{−1}^{+1} K(t) H∘g^(1)(x + c_n t) dt .

Expanding H∘g^(1)(x + c_n t) in a Taylor series about x,

    H∘g^(1)(x + c_n t) = Σ_{j=0}^{r−1} [ H∘g^(j+1)(x) / j! ] (c_n t)^j + R_n(t) ,

where R_n(t) = [ H∘g^(r+1)(η(t)) / r! ] (c_n t)^r with |η(t) − x| ≤ |c_n t|. Thus

    E hg_n^(p)(x) = Σ_{j=0}^{r−1} [ H∘g^(j+1)(x) / j! ] c_n^{j−p} ∫_{−1}^{+1} K(t) t^j dt + c_n^{−p} ∫_{−1}^{+1} K(t) R_n(t) dt
                  = H∘g^(p+1)(x) + O( c_n^{r−p} ) ,

since K ∈ K_p and H∘g^(r+1) is bounded. This establishes (a).

By part (a) we have that

    lim_{n→∞} c_n^{−(r−p)} [ E hg_n^(p)(x_n) − H∘g^(p+1)(x_n) ] = lim_{n→∞} (1/r!) ∫_{−1}^{+1} K(t) H∘g^(r+1)(η(t)) t^r dt
                                                               = H∘g^(r+1)(x_0) (1/r!) ∫_{−1}^{+1} K(t) t^r dt ,

by the bounded convergence theorem and the continuity of H∘g^(r+1) at x_0, completing the proof of the lemma.
We note that under the assumptions of Lemma 3.1, if c_n = C n^{−γ}, where γ ≥ 1/(2r+1), then A7(a) holds. If γ = 1/(2r+1), then by Lemma 3.1(b) we have that

    lim_{n→∞} n^{(r−p)/(2r+1)} ( E[ hg_n^(p)(x_n) ] − H∘g^(p+1)(x_n) ) = C^{r−p} H∘g^(r+1)(x_0) (1/r!) ∫_{−1}^{+1} K(t) t^r dt ,

so that A8(a) holds with Φ_1 = C^{r−p} H∘g^(r+1)(x_0) (1/r!) ∫_{−1}^{+1} K(t) t^r dt .

The following lemma establishes results for hg_n^(p)(x) similar to those contained in Lemma 3.2 of Frees (1983).
Lemma 3.2: Let K ∈ K_0 and hg_n(x) be defined by (3.1) with p = 0. Assume H∘g^(1) is a bounded function that is continuous at x_0 and { x_n } is a sequence such that lim_{n→∞} x_n = x_0. Further assume that c_n = o(1).

(a) If t > −1, then sup_x E | hg_n(x) |^{t+1} = O( c_n^{−t} ) .

(b) If t ≥ 0 is an integer, then

    lim_{n→∞} c_n^t E [ hg_n(x_n) ]^{t+1} = H∘g^(1)(x_0) ∫_{−1}^{+1} [ K(u) ]^{t+1} du .
Proof: By the definition of hg_n and the boundedness of K,

    c_n^{t+1} E | hg_n(x) |^{t+1} = E | ∫_0^∞ K( ( g^{−1}(u) − x ) / c_n ) dN_n(u) |^{t+1}
                                  ≤ || K ||_∞^{t+1} E [ N_n(g(x + c_n)) − N_n(g(x − c_n)) ]^{t+1} = O( c_n )

by (2.5) and the boundedness of h. This establishes (a).

To determine a limit for c_n^t E[ hg_n(x_n) ]^{t+1} we condition on the number of renewals occurring in the interval [ g(x_n − c_n), g(x_n + c_n) ]. Let Y_n = N_n(g(x_n + c_n)) − N_n(g(x_n − c_n)). By (2.4) and the boundedness of K,

    Σ_{j=2}^∞ E{ c_n^t [ hg_n(x_n) ]^{t+1} | Y_n = j } P{ Y_n = j } ≤ c_n^{−1} || K ||_∞^{t+1} Σ_{j=2}^∞ j^{t+1} P{ Y_n = j } → 0    as n → ∞ .
The following result will be used to find the conditional expectation when Y_n = 1. Let X_1, X_2, ... be i.i.d. random variables with distribution function F which define a renewal process with renewals occurring at S_n = X_1 + ... + X_n, n = 1, 2, .... Define the excess random variable at time t to be

    γ(t) = S_{N(t)+1} − t ,

so that γ(t) is the remaining life of the part in use at time t. This random variable has distribution function

    P{ γ(t) ≤ x } = ∫_0^t [ F(t−u+x) − F(t−u) ] dH(u) + [ F(t+x) − F(t) ] .

(See Barlow and Proschan (1965).) Thus the distribution function and the density of the first renewal to occur after time g(x_n − c_n) are given by

    P{ S_{N(g(x_n−c_n))+1} ≤ v } = ∫_0^{g(x_n−c_n)} [ F(v−u) − F(g(x_n−c_n)−u) ] dH(u) + [ F(v) − F(g(x_n−c_n)) ]

and

    a(v) = h(v) − ∫_{g(x_n−c_n)}^{v} f(v−u) dH(u) ,

respectively, for v ≥ g(x_n−c_n). Thus we have that

    P{ Y_n = 1 } = ∫_{g(x_n−c_n)}^{g(x_n+c_n)} a(v) [ 1 − F( g(x_n+c_n) − v ) ] dv ,

and given that Y_n = 1, the conditional density of the lone renewal in [ g(x_n−c_n), g(x_n+c_n) ] is

    b(v) = a(v) [ 1 − F( g(x_n+c_n) − v ) ] / P{ Y_n = 1 } .

Hence, by a change of variables,
    E{ c_n^t [ hg_n(x_n) ]^{t+1} | Y_n = 1 } P{ Y_n = 1 }
        = c_n^{−1} ∫_{g(x_n−c_n)}^{g(x_n+c_n)} [ K( ( g^{−1}(u) − x_n ) / c_n ) ]^{t+1} a(u) [ 1 − F( g(x_n+c_n) − u ) ] du
        = ∫_{−1}^{+1} [ K(v) ]^{t+1} a( g(x_n + c_n v) ) [ 1 − F( g(x_n+c_n) − g(x_n + c_n v) ) ] g^(1)(x_n + c_n v) dv ,

where

    a( g(x_n + c_n v) ) = h∘g(x_n + c_n v) − ∫_{g(x_n−c_n)}^{g(x_n+c_n v)} f( g(x_n + c_n v) − u ) dH(u) .

Since K and f are bounded,

    ∫_{−1}^{+1} [ K(v) ]^{t+1} ∫_{g(x_n−c_n)}^{g(x_n+c_n v)} f( g(x_n + c_n v) − u ) dH(u) g^(1)(x_n + c_n v) dv
        ≤ 2 || K ||_∞^{t+1} || f ||_∞ || g^(1) ||_∞ ∫_{g(x_n−c_n)}^{g(x_n+c_n)} dH(u) → 0 .

Similarly, by the continuity of F,

    lim_{n→∞} ∫_{−1}^{+1} [ K(v) ]^{t+1} h∘g(x_n + c_n v) F( g(x_n+c_n) − g(x_n + c_n v) ) g^(1)(x_n + c_n v) dv = 0 .

Thus,

    lim_{n→∞} c_n^t E [ hg_n(x_n) ]^{t+1} = lim_{n→∞} ∫_{−1}^{+1} [ K(v) ]^{t+1} h∘g(x_n + c_n v) g^(1)(x_n + c_n v) dv
                                          = H∘g^(1)(x_0) ∫_{−1}^{+1} [ K(v) ]^{t+1} dv ,

which completes the proof of the lemma.
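The excess-life distribution quoted in the proof above can be sanity-checked in the one case where it collapses to a closed form: for exponential lifetimes (the boundary IFR case, used here purely as an illustration) H(u) = u and the displayed formula reduces to P{γ(t) ≤ x} = F(x) = 1 − e^{−x}, the memoryless property.

```python
import math, random

rng = random.Random(11)

def excess(t):
    """gamma(t) = S_{N(t)+1} - t: remaining life at time t, exponential lifetimes."""
    s = 0.0
    while s <= t:
        s += rng.expovariate(1.0)
    return s - t

t, x, reps = 3.0, 1.0, 20000
empirical = sum(excess(t) <= x for _ in range(reps)) / reps
exact = 1.0 - math.exp(-x)      # value of the excess-life formula in this case
```

The empirical frequency matches the formula to Monte Carlo accuracy, which is a useful check when implementing the conditional-density calculations of this section.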
Suppose c_n = C n^{−γ}, where γ = 1/(2r+1). Then the conditions of Lemma 3.2(a) imply that A7(b) holds for the kernel estimator hg_n. Similarly, hg_n satisfies A8(b) by Lemma 3.2(b).

Let the sequence { φ_n } of estimates of φ be defined recursively by (2.2), where in the definition (2.3) the estimator hg_n(x) is redefined to be any estimator of H∘g^(1)(x). The following theorem establishes some convergence properties for φ_n assuming that hg_n(x) satisfies A7 and A8. The subsequent corollary addresses the special case when hg_n(x) is of the form (3.1) with p = 0.
Theorem 3.1: Let γ = 1/(2r+1) and Γ = A (d/dt) Dg(t)|_{t=φ} .

(a) Assume A1-A4 and A7 hold. Then φ_n → φ a.s.

(b) Assume A1, A2', A3, A6, A7, and A8 hold. Then

    n^{(1−γ)/2} ( φ_n − φ ) →_D N( μ_0 , σ_0^2 ) ,

where

    μ_0 = − A C_1 g(φ) Φ_1 / ( Γ − (1−γ)/2 )

and

    σ_0^2 = C_1^2 A^2 [g(φ)]^2 Φ_{2,1} / ( 2Γ − (1−γ) ) .
Proof of Theorem 3.1(a): Again take U_n = φ_n − φ. By A7(a) with p = 0,

    sup_x | E[ hg_n(x) ] − H∘g^(1)(x) | = O( n^{−r/(2r+1)} ) .

This implies that

    | Δ_n | ≤ K_19 g(φ_n) n^{−r/(2r+1)} .

By A7(b) with t = 1, E_{F_n} [ hg_n(φ_n) ]^2 = O( n^{1/(2r+1)} ), so that by the proof of Lemma 2.2, E_{F_n} [ Dg_n(φ_n) ]^2 is suitably bounded. Thus, from the proof of Theorem 2.1, the Robbins-Siegmund inequality holds with summable coefficient sequences. Since the required summations are finite, φ_n → φ a.s. by an application of the Robbins-Siegmund result.
The proof of Theorem 3.1(b) will require the following lemma, similar to Lemma 2.5(b). As in Section 2, let V_n = c_n^{1/2} [ Dg_n(φ_n) − Dg(φ_n) − Δ_n ], where c_n = C n^{−1/(2r+1)} by A6.

Lemma 3.3: Assume A1, A2', A3, A6, A7 and A8 hold. Then

    lim_{n→∞} E_{F_n} V_n^2 = C_1^2 C [g(φ)]^2 Φ_{2,1} .
Proof: By definition V_n^2 = c_n C_1^2 [ X_n − Y_n − Z_n ]^2, where

    X_n = g(φ_n) hg_n(φ_n) ,
    Y_n = g(φ_n) E_{F_n} [ hg_n(φ_n) ] , and
    Z_n = g^(1)(φ_n) [ N_n(g(φ_n)) − H∘g(φ_n) ] .

As in Lemma 2.5, we consider separately the conditional expectations E_{F_n} X_n^2, E_{F_n} X_n Y_n, etc. Since the conditions of Theorem 3.1(a) hold, φ_n → φ a.s.

(i) Since φ_n → φ a.s., by A8(b) with t = 1,

    c_n E_{F_n} X_n^2 → C [g(φ)]^2 Φ_{2,1} .

(ii) By A2' and A7(b) with t = 0,

    c_n E_{F_n} Y_n^2 ≤ K_1 c_n O(1) → 0 .

(iii) By (2.6), A2', A7(b) with t = 1, and the conditional version of the Cauchy-Schwarz inequality,

    c_n E_{F_n} | X_n Z_n | ≤ c_n g(φ_n) g^(1)(φ_n) E_{F_n} [ hg_n(φ_n) | N_n(g(φ_n)) − H∘g(φ_n) | ]
                            ≤ c_n K_1 { E_{F_n} [ hg_n(φ_n) ]^2 }^{1/2} { E_{F_n} [ N_n(g(φ_n)) − H∘g(φ_n) ]^2 }^{1/2}
                            ≤ c_n K_2 { E_{F_n} [ hg_n(φ_n) ]^2 }^{1/2}
                            ≤ c_n K_2 { O( n^{1/(2r+1)} ) }^{1/2} → 0 .

(iv) Also by A2' and A7(b) with t = 0, c_n E_{F_n} X_n Y_n → 0 .

(v) For all n, E_{F_n} Y_n Z_n = 0 .

(vi) By Lemma 2.5(vi), lim_{n→∞} c_n E_{F_n} Z_n^2 = 0 .

The result follows from (i)-(vi).
Proof of Theorem 3.1(b): Again the theorem on asymptotic normality due to Fabian is used. From the proof of Theorem 2.2 we have that

    U_{n+1} = U_n [ 1 − n^{−1} Γ_n ] + n^{−(α+β)/2} Φ_n V_n + n^{−1} n^{−(1−γ)/2} T_n ,

where as before Φ_n = Φ = − A C^{−1/2} and T_n = − A n^{(1−γ)/2} Δ_n. Since φ_n → φ a.s., using A8(a) with p = 0 gives

    T_n → − A C_1 g(φ) Φ_1 .

By definition E_{F_n} V_n = 0. From Lemma 2.4 we have that E_{F_n} V_n^2 is controlled by the moments of hg_n(φ_n), which are bounded by A7(b). By Lemma 3.3,

    lim_{n→∞} E_{F_n} V_n^2 = C_1^2 C [g(φ)]^2 Φ_{2,1} .

We need to show that σ_{n,r}^2 = E{ V_n^2 I[ V_n^2 ≥ r n ] } → 0 as n → ∞ for r = 1, 2, .... To accomplish this using Lemma 2.7 it is first shown that for any p > 2, E_{F_n} | c_n^{1/2} V_n |^p → 0. By Lemma 2.4, this reduces to the corresponding moments of hg_n. By A7(b),

    c_n^p E_{F_n} | hg_n(φ_n) |^p = c_n^p O( n^{(p−1)/(2r+1)} ) = O( n^{−1/(2r+1)} ) → 0

if p > 2. Now taking t = 0 in A7(b), the remaining terms are handled in the same way. For Lemma 2.7, take p, q > 0 such that 1/p + 1/q = 1; thus, taking p > 2, Lemma 2.7 holds. Taking α = 1 and β = β_+ = 1 − γ, part (b) now follows from Fabian's theorem.
Now let K ∈ K_0 and define hg_n by (3.1) with p = 0. Lemmas 3.1 and 3.2 apply to hg_n if the following assumption is met.

A9: For some integer r ≥ 1, H∘g^(1) and H∘g^(r+1) exist and are bounded on the entire real line, and both functions are continuous in a neighborhood of φ.

Corollary 3.1: Assume A1-A3, A6, and A9 hold. Let γ = 1/(2r+1) and Γ = A (d/dt) Dg(t)|_{t=φ}. Then

(a) φ_n → φ a.s.

(b) If A2' also holds, then n^{(1−γ)/2} ( φ_n − φ ) →_D N( μ_1 , σ_1^2 ), where

    μ_1 = − C_1 A g(φ) C^r H∘g^(r+1)(φ) β_1 / ( Γ − (1−γ)/2 )

and

    σ_1^2 = C_1^2 A^2 [g(φ)]^2 H∘g^(1)(φ) β_2 / ( C ( 2Γ − (1−γ) ) ) ,

with β_1 = (1/r!) ∫_{−1}^{+1} u^r K(u) du and β_2 = ∫_{−1}^{+1} [ K(u) ]^2 du .

Proof: By the discussions following Lemmas 3.1 and 3.2, A9 implies A7 and A8 hold for hg_n(x). The corollary is then an immediate consequence of Theorem 3.1 with

    Φ_1 = lim_{n→∞} n^{r/(2r+1)} [ E_{F_n} hg_n(φ_n) − H∘g^(1)(φ_n) ] = C^r H∘g^(r+1)(φ) (1/r!) ∫_{−1}^{+1} u^r K(u) du

and

    Φ_{2,1} = C^{−1} H∘g^(1)(φ) ∫_{−1}^{+1} [ K(u) ]^2 du .
REFERENCES

Arunkumar, S. (1972). Nonparametric age replacement policy. Sankhya A 34, 251-256.

Bather, J. A. (1977). On the sequential construction of an optimal age replacement policy. Bull. of the Intern. Statist. Inst. 47, 253-266.

Barlow, R. E. and Proschan, F. (1965). Mathematical Theory of Reliability. Wiley, New York.

Barlow, R. E. and Proschan, F. (1975). Statistical Theory of Reliability and Life Testing. Holt, Rinehart and Winston, Inc., New York.

Bergman, B. (1979). On age replacement and the total time on test concept. Scand. J. of Statistics, 161-168.

Fabian, V. (1968). On asymptotic normality in stochastic approximation. Ann. Math. Statist. 39, 1327-1332.

Frees, E. W. (1983). On construction of sequential age replacement policies via stochastic approximation. Institute of Statistics Mimeo Series #1525, The Consolidated University of North Carolina.

Glasser, G. J. (1967). The age replacement problem. Technometrics 9, 83-91.

Robbins, H. and Siegmund, D. (1971). A convergence theorem for nonnegative almost supermartingales and some applications. Optimizing Methods in Statistics (J. S. Rustagi, Ed.), 233-257. Academic Press, New York.