Sen, P.K. and Bhattacharjee, M.C. (1984). "Nonparametric Estimators of Availability Under Provisions of Spare and Repair."

NONPARAMETRIC ESTIMATORS OF AVAILABILITY UNDER
PROVISIONS OF SPARE AND REPAIR
by
P.K. Sen
Department of Biostatistics
University of North Carolina at Chapel Hill
and
M.C. Bhattacharjee
Indian Institute of Management
Calcutta, India
Institute of Statistics Mimeo Series No. 1461
May 1984
NONPARAMETRIC ESTIMATORS OF AVAILABILITY UNDER
PROVISIONS OF SPARE AND REPAIR
M. C. Bhattacharjee
Indian Institute of Management
Calcutta, India
and
P. K. Sen
University of North Carolina,
Chapel Hill, NC, USA
Nonparametric (as well as jackknifed) estimators are developed for the 'availability' of an equipment supported by a single spare and a repair facility, where down-time occurs whenever no spare/repaired unit is available at the point of failure of an operating unit. Asymptotic properties of the natural and the jackknifed estimators of availability are studied (without assuming the stochastic independence of life and repair times). Fixed sample size as well as sequential procedures are considered, and progressively censored schemes are also introduced in this context.

Subject Classifications: 62G05, 62N05, 62L10.

Key Words and Phrases: Availability; down time; jackknifing; progressive censoring schemes; renewal theorems; sequential confidence intervals; sequential tests; up time; weak convergence.
1. INTRODUCTION
To introduce the basic model, we consider a single-unit system supported by a repair facility and a single spare. When the operating unit fails, the spare takes over (instantaneously) as the new operating unit and the failed unit is sent for repair at the same instant, which is a regeneration point. The system fails when the unit currently operating fails before the repair of the currently failed unit is completed. Otherwise, a failed unit, on completion of repair, assumes the role of a spare in cold stand-by. Usually, it is assumed that the repair of a failed unit
restores it to its new condition, and also that the life distributions of the original spare and the initial operating unit are the same, say $F$, defined on $R_+ = (0,\infty)$. If $G$ is the distribution of the repair time of the failed unit (also defined on $R_+$), and if $\mu_F$ and $\mu_G$ denote respectively the means of the life time $X$ and the repair time $Y$ having the distributions $F$ and $G$, then the limiting average availability (i.e., the limiting expected proportion of system up-time) is defined as
$$ A_{FG} = EO/\{ EO + ED \}, \qquad (1.1) $$
where $EO$ is the mean time until system failure, measured from the regeneration point, and $ED$ is the mean system down time. If we denote
$$ \alpha_{FG} = P\{ X \ge Y \}, \qquad (1.2) $$
then we have
$$ EO = \{1 - \alpha_{FG}\}^{-1}\mu_F \quad \mbox{and} \quad ED = \{1 - \alpha_{FG}\}^{-1} E\{ (Y - X) I(Y > X) \}, \qquad (1.3) $$
where $I(A)$ stands for the indicator function of the set $A$. The renewal-theorem based results in (1.3) assume that for different $i$, the vectors $(X_i, Y_i)$ are stochastically independent, though for each $i$, $X_i$ and $Y_i$ may not be independent. Assuming the operating periods $(X_i)$ and the repair periods $(Y_i)$ to be mutually independent, a simplified version of (1.1) is given in Barlow and Proschan (1975, Ch. 7), where other parametric models have also been briefly discussed. When both $F$ and $G$ are exponential d.f.'s, this reduces to
$$ A_{FG} = (1 + \rho)/(1 + \rho + \rho^2), \qquad (1.4) $$
where $\rho = \mu_G/\mu_F$. But, in general, $A_{FG}$ in (1.1) depends on $F$ and $G$ in a more involved manner (and not just on their means). In practice, however, $F$ and $G$ are generally not known, and hence, for a desirable maintenance of the system, one may like to estimate $A_{FG}$ in a nonparametric way (and also to provide some sharp bounds on $A_{FG}$). Recently, Bhattacharjee and Kandar (1983) have considered some useful bounds for $A_{FG}$ which are computable with only a limited knowledge of a few parameters of the unknown life and repair distributions.
One of the problems with such parametric procedures is their lack of robustness against departures from the assumed model (e.g., $F$ actually Weibull against the assumed exponential form). Even for a small departure, there may be a considerable loss of efficiency of the parametric procedure, while for major departures, it may even turn out to be inefficient or inconsistent. The main objective of the current study is to focus on some nonparametric developments in both fixed sample size and sequential sampling schemes. Progressively censored schemes (PCS) based on the (regeneration) cycle times $\{T_i\}$ are also considered in this context.
Looking at (1.1), (1.2) and (1.3), we may gather that $A_{FG}$ is a function of $\mu_F$, $\alpha_{FG}$ and $ED$, each of which is a regular functional of the d.f. $F$ (and $G$). Thus, these estimable parameters (i.e., $\mu_F$, $\alpha_{FG}$ and $ED$) can be estimated in a nonparametric fashion (under quite general regularity conditions), wherein the validity and reliability of the estimates will be ensured for a broader class of $F$ and $G$. In other words, these nonparametric procedures will be robust and have a broader scope. However, in view of the fact that $A_{FG}$ is not a linear function of these parameters, the resulting estimates may not be unbiased, and hence some resampling scheme may be incorporated to reduce the effective bias; jackknifing techniques are therefore adopted to achieve this goal.
In Section 2, without assuming the independence of the failure and repair times, a reformulation of $A_{FG}$ in terms of some other functionals is provided, and this enables us to consider a very natural estimator. Fixed sample size estimation procedures are then introduced in Section 3. Section 4 is devoted to the development of sequential procedures pertaining to the confidence interval problem as well as the sequential testing problem. The concluding section deals briefly with PCS and some useful applications in the current context.
2. A REFORMULATION OF $A_{FG}$
Note that the $i$th cycle involves the life time $X_i$ and the repair time $Y_i$, and we do not necessarily assume that $X_i$ and $Y_i$ are mutually independent, although it is assumed that $\{(X_i, Y_i),\ i \ge 1\}$ form a sequence of independent and identically distributed (i.i.d.) random vectors (r.v.) with a (bivariate) distribution function (d.f.) $H(x,y)$, defined on $[0,\infty)^2$. Thus, $F(x) = H(x,\infty)$ and $G(y) = H(\infty,y)$ are the two marginal d.f.'s. We also denote $Z_i = Y_i - X_i$ and denote its d.f. by $P(z)$, $-\infty < z < \infty$. Then, note that parallel to (1.2),
$$ \alpha_{FG} = P\{ X \ge Y \} = P\{ Z \le 0 \} = P(0) = \alpha_P, \qquad (2.1) $$
which need not be a functional of the marginal d.f.'s alone, but is simply expressible in terms of $P(0)$. Thus, starting from the regeneration point, $EO$, the mean time up to the system failure, is given by
$$ EO = \mu_F/(1 - \alpha_P) = (1 - \alpha_P)^{-1} \int_0^\infty x\, dF(x). \qquad (2.2) $$
Note that if the system fails (i.e., encounters down time) in the $m$th cycle for the first time, for some $m \ge 1$, then $X_i \ge Y_i$ for every $i \le m-1$ and $Y_m > X_m$. Therefore, for the stopping number $N$, we have
$$ P\{ N = m \} = [P(0)]^{m-1}[1 - P(0)], \quad \mbox{for } m = 1, 2, \ldots . \qquad (2.3) $$
Also, note that $D$, the system down time, is given by $Y_N - X_N$, where the stopping number $N$ is defined above. Hence, we obtain that
$$ ED = \sum_{m \ge 1} E\{ (Y_m - X_m) I( Y_i \le X_i,\ i \le m-1,\ Y_m > X_m ) \} $$
$$ \quad = \sum_{m \ge 1} [P(0)]^{m-1}[1 - P(0)]\, E\{ (Y_m - X_m) \mid Y_m > X_m \} $$
$$ \quad = [1 - P(0)] \sum_{m \ge 1} [P(0)]^{m-1} \int_0^\infty z\, dP(z)/\{1 - P(0)\} $$
$$ \quad = \Big\{ \int_0^\infty z\, dP(z) \Big\} \sum_{m \ge 1} [P(0)]^{m-1} = \{1 - P(0)\}^{-1} \Big\{ \int_0^\infty z\, dP(z) \Big\} $$
$$ \quad = \{1 - P(0)\}^{-1} E\{ (Y - X) I(Y > X) \}. \qquad (2.4) $$
Hence, from (1.1), (2.2) and (2.4), we obtain that
$$ A_{FG} = A_H = (EX)/\big( EX + E\{(Y - X)I(Y > X)\} \big) $$
$$ \qquad = (EX)/\big( E\{X\, I(X \ge Y)\} + E\{Y\, I(Y > X)\} \big) $$
$$ \qquad = (EX)/E\{ X \vee Y \}, \qquad (2.5) $$
where $a \vee b = \max(a,b)$. This gives a clear picture of the availability in terms of the life and repair times. In the special case where $X$ and $Y$ are independent with d.f.'s $F$ and $G$, respectively, (2.5) reduces to
$$ A_{FG} = \Big\{ \int_0^\infty (1 - F(x))\, dx \Big\} \Big/ \Big\{ \int_0^\infty (1 - F(x)G(x))\, dx \Big\}. \qquad (2.6) $$
In the sequel, we shall find these expressions very useful.
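Remark (illustrative only). The following Python sketch, whose distributional choices, function names and sample sizes are ours and purely hypothetical, checks (2.5) numerically: it compares $E(X)/E(X \vee Y)$ with the up-time fraction obtained by running the regeneration cycles described in Section 1 directly, with dependent life and repair times.

    import numpy as np

    rng = np.random.default_rng(1)

    def draw_pair():
        # One (life, repair) pair; a shared shock makes X and Y dependent.
        # The distributional choices are ours, purely for illustration.
        shock = rng.gamma(2.0, 0.5)
        x = rng.exponential(3.0) + shock        # life time X
        y = rng.exponential(1.0) + 0.5 * shock  # repair time Y
        return x, y

    # (a) Availability via the reformulation (2.5): A_H = E(X)/E(X v Y).
    pairs = np.array([draw_pair() for _ in range(400_000)])
    x, y = pairs[:, 0], pairs[:, 1]
    a_reform = x.mean() / np.maximum(x, y).mean()

    # (b) Availability via the regenerative argument (2.2)-(2.4): within a cycle,
    #     fresh pairs are drawn until the operating unit fails before its mate is
    #     repaired (X < Y); the cycle's up time is O = X_1 + ... + X_N and its
    #     length is the sum of (X_i v Y_i).  Then A_H = E(O)/E(T).
    up = total = 0.0
    for _ in range(100_000):
        while True:
            xi, yi = draw_pair()
            up += xi
            total += max(xi, yi)
            if xi < yi:        # system failure; regeneration follows the down time
                break

    print(f"E(X)/E(X v Y)        : {a_reform:.4f}")
    print(f"regenerative estimate: {up / total:.4f}")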
3. ESTIMATION OF $A_H$: NON-SEQUENTIAL CASE
Note that for a single copy of the system, we have the set of observations $X_i, Y_i$, $i = 1, \ldots, N$, where the stopping number $N$ is defined by $N = \min\{ k \ge 1 : X_k < Y_k \}$. There is a down time $(Y_N - X_N)$, following which we have a regeneration point at $X_1 + \cdots + X_{N-1} + Y_N$, where the repaired unit resumes the role of the operating one and the unit failing at time point $X_1 + \cdots + X_N$ (and remaining unattended for the down time period) is put into the repair facility. Thus, from one regeneration point to the next one, we have a cycle of time length
$$ T = X_1 + \cdots + X_{N-1} + Y_N = \textstyle\sum_{i=1}^{N} ( X_i \vee Y_i ). $$
Now, for the $i$th cycle, we denote the associated r.v.'s by $X_{ij}, Y_{ij}$, $j = 1, \ldots, N_i$, $N_i$ and $T_i$, so that $N_i = \min\{ k \ge 1 : X_{ik} < Y_{ik} \}$ and $T_i = \sum_{j=1}^{N_i} ( X_{ij} \vee Y_{ij} )$, for $i = 1, 2, \ldots$. In a non-sequential setup, we are given these observable r.v.'s for $m (\ge 1)$ cycles, and based on these, we desire to provide a nonparametric estimate of $A_H$ in (2.5). For later use, we also define $O_i = \sum_{j=1}^{N_i} X_{ij}$ (the $i$th cycle on-time), for $i = 1, \ldots, m$. Since the $N_i$ are stopping times, we have
$$ E(O_i) = E X_{i1} \cdot E N_i = \mu_F \Big\{ \sum_{k \ge 1} k [P(0)]^{k-1}[1 - P(0)] \Big\} = \mu_F/[1 - P(0)]. \qquad (3.1) $$
Similarly, by (2.4) and (3.1),
$$ E(T_i) = E(O_i) + E( Y_{i N_i} - X_{i N_i} ) = [1 - P(0)]^{-1}\{ \mu_F + E( (Y - X) I(Y > X) ) \}. \qquad (3.2) $$
Thus, we may be tempted to estimate $A_H$ by
$$ \hat A_m = m^{-1} \sum_{i=1}^{m} ( O_i/T_i ). \qquad (3.3) $$
However, this estimator may have a basic undesirable property. Note that the $O_i/T_i$ are i.i.d. r.v.'s (in fact, bounded between 0 and 1), so that $\hat A_m$ converges almost surely (a.s.) to $E(O_1/T_1)$. But $E(O_1/T_1)$ need not be equal to $A_H$; the amount of bias, i.e., $E(O_1/T_1) - A_H$, depends on the unknown d.f. $H$. Even when $H = FG$, $O_1/T_1$ may not unbiasedly estimate $A_H$, so that even if $m$ is large, $\hat A_m$ may not converge to $A_H$ (but to $E(O_1/T_1)$, which may differ from $A_H$). For this reason, we consider the alternative estimator
$$ A_m^* = \bar O_m/\bar T_m, \quad \mbox{where } \bar O_m = m^{-1} \sum_{i=1}^{m} O_i \ \mbox{ and } \ \bar T_m = m^{-1} \sum_{i=1}^{m} T_i. \qquad (3.4) $$
Note that by the Khintchine strong law of large numbers, as $m \to \infty$,
$$ \bar O_m \to E(O_1) = \mu_F/(1 - P(0)) \ \mbox{a.s.}, \qquad \bar T_m \to E(T_1) = (1 - P(0))^{-1}\{ \mu_F + E( (Y - X) I(Y > X) ) \} \ \mbox{a.s.}, \qquad (3.5) $$
so that
$$ A_m^* \to A_H \ \mbox{a.s., as } m \to \infty. \qquad (3.6) $$
Moreover, $A_m^*$ is a bounded random variable ($0 \le A_m^* \le 1$), so that (3.6) also ensures that $E(A_m^*)$ converges to $A_H$ as $m \to \infty$. Thus, $A_m^*$ is asymptotically unbiased for $A_H$.
We denote the dispersion matrix of $(O_1, T_1)$ by
$$ \Sigma = \left( \begin{array}{cc} \sigma_{OO} & \sigma_{OT} \\ \sigma_{OT} & \sigma_{TT} \end{array} \right). \qquad (3.7) $$
Then, by the classical central limit theorem (on the $(O_i, T_i)$), as $m \to \infty$,
$$ m^{1/2} ( \bar O_m - EO_1,\ \bar T_m - ET_1 ) \to \mathcal{N}( {\bf 0}, \Sigma ). \qquad (3.8) $$
Therefore, it follows by some standard steps that as $m \to \infty$,
$$ m^{1/2} ( A_m^* - A_H ) \to \mathcal{N}( 0, \sigma_A^2 ), \quad \mbox{where } \sigma_A^2 = \{E(T_1)\}^{-2}\{ \sigma_{OO} - 2 A_H \sigma_{OT} + A_H^2 \sigma_{TT} \}. \qquad (3.9) $$
Natural estimates of the parameters appearing on the right hand side of (3.9) may be used to test a suitable hypothesis on $A_H$ or to attach a confidence interval to $A_H$. In the absence of knowledge of these parameters, the sample size $m$ may not be fixed in such a manner that the test has a specified power against a specified alternative or that the width of the confidence interval is bounded by some prefixed positive number ($2d$). For either of these problems, we may need a sequential procedure, and we shall consider the same in the next section.
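As an illustration of the fixed sample size procedure (a sketch under our own simulation assumptions; the exponential life and repair laws, names and constants below are hypothetical and not from the paper), the following Python fragment generates $m$ cycles, contrasts the per-cycle-ratio estimator (3.3) with the ratio-of-means estimator of (3.4), and attaches a normal-approximation interval based on a plug-in version of the asymptotic variance in (3.9).

    import numpy as np

    rng = np.random.default_rng(2)

    def one_cycle():
        # One regeneration cycle: returns (O_i, T_i), the cycle on-time and length.
        # Independent exponential life (mean 3) and repair (mean 1) times are an
        # illustrative choice only.
        o = t = 0.0
        while True:
            x, y = rng.exponential(3.0), rng.exponential(1.0)
            o += x
            t += max(x, y)
            if x < y:                      # failure before repair: cycle ends
                return o, t

    m = 2000
    O, T = np.array([one_cycle() for _ in range(m)]).T

    A_naive = np.mean(O / T)               # estimator (3.3): average of per-cycle ratios
    A_star = O.mean() / T.mean()           # ratio-of-means estimator A*_m of (3.4)

    # Exact value for this independent-exponential example, via (2.5):
    # E(X v Y) = mu_F + mu_G - mu_F*mu_G/(mu_F + mu_G) for independent exponentials.
    mu_F, mu_G = 3.0, 1.0
    A_true = mu_F / (mu_F + mu_G - mu_F * mu_G / (mu_F + mu_G))

    # Plug-in version of the asymptotic variance in (3.9), from sample moments of (O_i, T_i).
    S = np.cov(O, T)                       # 2x2 sample dispersion matrix
    var_A = (S[0, 0] - 2 * A_star * S[0, 1] + A_star**2 * S[1, 1]) / (T.mean()**2 * m)
    half = 1.96 * np.sqrt(var_A)

    print(f"A_H (exact here)       : {A_true:.4f}")
    print(f"per-cycle ratio average: {A_naive:.4f}   (biased for A_H)")
    print(f"ratio-of-means A*_m    : {A_star:.4f}")
    print(f"approx. 95% interval   : ({A_star - half:.4f}, {A_star + half:.4f})")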
4. SEQUENTIAL METHODS
We have already noticed in Section 3 that $A_m^*$ is generally not unbiased for $A_H$. Moreover, we need to estimate the parameters in the asymptotic distribution in (3.9). For this purpose, we employ jackknifing, by which we are able to reduce the bias and to estimate the asymptotic variance as well.
For every $m (\ge 2)$, we define $\bar O_m$, $\bar T_m$ and $A_m^*$ as in (3.4). Also, let $\bar O_{m-1}^{(i)}$ and $\bar T_{m-1}^{(i)}$ be defined as in (3.4), but based on the sample of size $m-1$ obtained from the original sample (of size $m$) by omitting $(O_i, T_i)$, for $i = 1, \ldots, m$. For every $i (= 1, \ldots, m)$, we define
$$ A_{m-1}^{*(i)} = \bar O_{m-1}^{(i)}/\bar T_{m-1}^{(i)}; \qquad A_{m,i}^* = m A_m^* - (m-1) A_{m-1}^{*(i)}. \qquad (4.1) $$
Then, the jackknifed estimator of $A_H$ is defined by
$$ A_m^{**} = m^{-1} \sum_{i=1}^{m} A_{m,i}^* = A_m^* + [(m-1)/m] \sum_{i=1}^{m} ( A_m^* - A_{m-1}^{*(i)} ). \qquad (4.2) $$
Side by side, we introduce the jackknifed variance estimator
$$ s_m^2 = (m-1)^{-1} \sum_{i=1}^{m} ( A_{m,i}^* - A_m^{**} )^2 = (m-1) \sum_{i=1}^{m} \Big\{ A_{m-1}^{*(i)} - m^{-1} \sum_{j=1}^{m} A_{m-1}^{*(j)} \Big\}^2. \qquad (4.3) $$
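A minimal Python transcription of (4.1)-(4.3) is sketched below; the function name and the array-based interface are ours, and the routine assumes only the per-cycle data $(O_i, T_i)$ of Section 3.

    import numpy as np

    def jackknife_availability(O, T):
        # Jackknifed estimator A**_m of (4.2) and variance estimator s^2_m of (4.3),
        # computed from the cycle on-times O_i and cycle lengths T_i.
        O = np.asarray(O, dtype=float)
        T = np.asarray(T, dtype=float)
        m = len(O)
        A_star = O.mean() / T.mean()                   # A*_m of (3.4)
        loo = (O.sum() - O) / (T.sum() - T)            # leave-one-out ratios A*(i)_{m-1}
        pseudo = m * A_star - (m - 1) * loo            # pseudo-values A*_{m,i} of (4.1)
        A_jack = pseudo.mean()                         # A**_m of (4.2)
        s2 = np.sum((pseudo - A_jack) ** 2) / (m - 1)  # s^2_m of (4.3)
        return A_jack, s2

    # With (O, T) as in the previous sketch, an approximate 100(1-alpha)% interval
    # is A_jack +/- tau_{alpha/2} * sqrt(s2 / m), by (3.9), (4.14) and (4.20).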
Note that in the current situation we have a function
$$ g(a,b) = a/b, \quad \mbox{where } 0 < a < b < \infty, \qquad (4.4) $$
and moreover, in (3.4) or (4.1), the estimators $\bar O_m$ and $\bar T_m$ (or $\bar O_{m-1}^{(i)}$ and $\bar T_{m-1}^{(i)}$) are sample means (and hence, U-statistics of degree 1). Note further that
$$ \partial g/\partial a = 1/b, \quad \partial g/\partial b = -a/b^2, \quad \partial^2 g/\partial a^2 = 0, \quad \partial^2 g/\partial a\,\partial b = -b^{-2} \quad \mbox{and} \quad \partial^2 g/\partial b^2 = 2a/b^3. \qquad (4.5) $$
Therefore, we may formally write
$$ g(a,b) = g(\alpha,\beta) + (a - \alpha)/\beta - \alpha (b - \beta)/\beta^2 - (a - \alpha)(b - \beta)/(b^0)^2 + (b - \beta)^2 a^0/(b^0)^3, \qquad (4.6) $$
where $a^0$ lies between $a$ and $\alpha$, and $b^0$ lies between $b$ and $\beta$. The second and third terms on the right hand side of (4.6) are the so-called linear terms, while the last two terms are the quadratic ones.
By virtue of (4.6), for every $i: 1 \le i \le m$ and $m > 1$, we have
$$ A_{m-1}^{*(i)} = A_m^* + ( \bar O_{m-1}^{(i)} - \bar O_m )/\bar T_m - \bar O_m ( \bar T_{m-1}^{(i)} - \bar T_m )/\bar T_m^2 - ( \bar O_{m-1}^{(i)} - \bar O_m )( \bar T_{m-1}^{(i)} - \bar T_m )/( T_{m,i}^0 )^2 + ( \bar T_{m-1}^{(i)} - \bar T_m )^2 O_{m,i}^0/( T_{m,i}^0 )^3, \qquad (4.7) $$
where $O_{m,i}^0$ lies between $\bar O_{m-1}^{(i)}$ and $\bar O_m$, and $T_{m,i}^0$ lies between $\bar T_{m-1}^{(i)}$ and $\bar T_m$. Therefore, we have
$$ A_{m,i}^* = A_m^* + ( O_i - \bar O_m )/\bar T_m - \bar O_m ( T_i - \bar T_m )/\bar T_m^2 + (m-1)( \bar O_{m-1}^{(i)} - \bar O_m )( \bar T_{m-1}^{(i)} - \bar T_m )/( T_{m,i}^0 )^2 - (m-1)( \bar T_{m-1}^{(i)} - \bar T_m )^2 O_{m,i}^0/( T_{m,i}^0 )^3, \quad i = 1, \ldots, m. \qquad (4.8) $$
By (4.2) and (4.8), we obtain that
$$ A_m^{**} = A_m^* + m^{-1}(m-1) \sum_{i=1}^{m} ( \bar O_{m-1}^{(i)} - \bar O_m )( \bar T_{m-1}^{(i)} - \bar T_m )/( T_{m,i}^0 )^2 - m^{-1}(m-1) \sum_{i=1}^{m} ( \bar T_{m-1}^{(i)} - \bar T_m )^2 O_{m,i}^0/( T_{m,i}^0 )^3. \qquad (4.9) $$
Since $\bar O_m$ and $\bar T_m$ are both U-statistics (of degree 1), we may now appeal to the proof of Theorem 3.1 of Sen (1977) and conclude that, as $m \to \infty$,
$$ \max\{ ( \bar O_{m-1}^{(i)} - \bar O_m )^2 : 1 \le i \le m \} = O(m^{-1}) \ \mbox{a.s.}, \quad \max\{ ( \bar T_{m-1}^{(i)} - \bar T_m )^2 : 1 \le i \le m \} = O(m^{-1}) \ \mbox{a.s.}, \qquad (4.10) $$
$$ (m-1) \sum_{i=1}^{m} ( \bar O_{m-1}^{(i)} - \bar O_m )^2 \to \sigma_{OO} \ \mbox{a.s.}, \qquad (4.11) $$
$$ (m-1) \sum_{i=1}^{m} ( \bar T_{m-1}^{(i)} - \bar T_m )^2 \to \sigma_{TT} \ \mbox{a.s.}, \qquad (4.12) $$
and (3.5) holds. As such,
$$ \max\{ ( T_{m,i}^0 )^{-2} : 1 \le i \le m \} \quad \mbox{and} \quad \max\{ O_{m,i}^0/( T_{m,i}^0 )^3 : 1 \le i \le m \} \qquad (4.13) $$
can both be bounded a.s. by some positive, finite numbers, as $m \to \infty$. Therefore, from (4.9) through (4.13), we conclude that
$$ | A_m^{**} - A_m^* | = O(m^{-1}) \ \mbox{a.s., as } m \to \infty. \qquad (4.14) $$
Moreover, by (4.8), (4.14) and the a.s. orders in (4.10) through (4.13), we conclude that as $m \to \infty$,
$$ \max\{ | ( A_{m,i}^* - A_m^{**} ) - ( O_i - \bar O_m )/\bar T_m + \bar O_m ( T_i - \bar T_m )/\bar T_m^2 | : 1 \le i \le m \} = O(m^{-1/2}) \ \mbox{a.s.}, \qquad (4.15) $$
so that by (4.3) and (4.15), we conclude that
$$ | s_m^2 - s_m^{02} | = O(m^{-1}) \ \mbox{a.s., as } m \to \infty, \qquad (4.16) $$
where
$$ s_m^{02} = \bar T_m^{-2} (m-1)^{-1} \sum_{i=1}^{m} ( O_i - \bar O_m )^2 - 2 \bar O_m \bar T_m^{-3} (m-1)^{-1} \sum_{i=1}^{m} ( O_i - \bar O_m )( T_i - \bar T_m ) + \bar O_m^2 \bar T_m^{-4} (m-1)^{-1} \sum_{i=1}^{m} ( T_i - \bar T_m )^2. \qquad (4.17) $$
By the strong convergence of U-statistics, we conclude that as $m \to \infty$,
$$ (m-1)^{-1} \sum_{i=1}^{m} ( O_i - \bar O_m )^2 \to \sigma_{OO} \ \mbox{a.s.}, \quad (m-1)^{-1} \sum_{i=1}^{m} ( T_i - \bar T_m )^2 \to \sigma_{TT} \ \mbox{a.s.}, \quad (m-1)^{-1} \sum_{i=1}^{m} ( O_i - \bar O_m )( T_i - \bar T_m ) \to \sigma_{OT} \ \mbox{a.s.}, \qquad (4.18) $$
and for these, the existence of the second order moments suffices. Thus, by (3.5), (4.17) and (4.18), we conclude that whenever the d.f. $H$ has finite second order moments, as $m \to \infty$,
$$ s_m^{02} \to \sigma_A^2 \ \Big( = \{E(T_1)\}^{-2}\{ \sigma_{OO} - 2 A_H \sigma_{OT} + A_H^2 \sigma_{TT} \} \Big) \ \mbox{a.s.} \qquad (4.19) $$
Therefore, from (4.16) and (4.19), we obtain that
$$ s_m^2 \to \sigma_A^2 \ \mbox{a.s., as } m \to \infty. \qquad (4.20) $$
Having obtained the a.s. convergence results in (3.6), (4.14) and (4.20), we are in a position to present the sequential estimation and testing procedures.
4.1. Bounded-width confidence interval for $A_H$. The underlying d.f. $H$, and hence $\sigma_A^2$, being unknown, we intend to provide a confidence interval for $A_H$ such that the confidence coefficient is $1 - \alpha$, for some preassigned $\alpha$ ($0 < \alpha < 1$), and the width of this interval is bounded from above by $2d$, for some preassigned $d (> 0)$. For every $m \ge 1$ and $d > 0$, let
$$ I_m(d) = \{ t : ( A_m^{**} - d ) \vee 0 \le t \le ( A_m^{**} + d ) \wedge 1 \}. \qquad (4.21) $$
Also, let $\tau_\alpha$ be the upper $100\alpha\%$ point of the standard normal d.f., and let $m_0 (= m_0(d))$ be an initial sample size, usually greater than 2. Then, we may consider a stopping number $M (= M(d))$, defined by
$$ M(d) = \min\{ m \ge m_0 : s_m^2 \le m d^2/\tau_{\alpha/2}^2 \}; \qquad (4.22) $$
if no such $m$ exists, we let $M(d) = +\infty$. Whenever $M < \infty$, the confidence interval for $A_H$ is taken as $I_M(d)$, defined as in (4.21), for $m = M$. Since $M(d)$ is a stopping number, we can verify that for every (fixed) $d > 0$, $M(d) < \infty$ with probability 1.
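The stopping rule (4.22) translates directly into a small simulation sketch; the code below reuses jackknife_availability from the sketch following (4.3), and the interface (a user-supplied draw_cycle routine producing one $(O_i, T_i)$ pair at a time) is our own hypothetical construction, not part of the paper.

    import numpy as np
    from scipy.stats import norm

    def bounded_width_ci(draw_cycle, d, alpha=0.05, m0=10):
        # Sequential bounded-width interval for A_H, following (4.21)-(4.22):
        # keep sampling cycles until s^2_m <= m d^2 / tau_{alpha/2}^2.
        tau = norm.ppf(1 - alpha / 2)                  # upper 100(alpha/2)% normal point
        O, T = [], []
        for _ in range(m0):                            # initial sample of size m0
            o, t = draw_cycle()
            O.append(o); T.append(t)
        while True:
            m = len(O)
            A_jack, s2 = jackknife_availability(O, T)  # A**_m and s^2_m of (4.2)-(4.3)
            if s2 <= m * d**2 / tau**2:                # stopping rule (4.22)
                return m, (max(A_jack - d, 0.0), min(A_jack + d, 1.0))  # interval (4.21)
            o, t = draw_cycle()                        # otherwise observe one more cycle
            O.append(o); T.append(t)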
Now, by definition (4.21), $I_M(d)$ has width $\le 2d$, so that we need to verify that $I_M(d)$ has coverage probability $1 - \alpha$. Towards this, we proceed as in Chow and Robbins (1965) and work out the properties of this procedure in the asymptotic case where $d$ is made to converge to 0.
We may note that, by virtue of the law of the iterated logarithm,
$$ \limsup_{m \to \infty} \{ ( m/\log\log m )^{1/2} | \bar O_m - EO_1 | \} = \{ 2 \sigma_{OO} \}^{1/2} \ \mbox{a.s.}, \qquad (4.23) $$
$$ \limsup_{m \to \infty} \{ ( m/\log\log m )^{1/2} | \bar T_m - ET_1 | \} = \{ 2 \sigma_{TT} \}^{1/2} \ \mbox{a.s.}, \qquad (4.24) $$
so that by (3.4), (4.6), (4.14) and (4.23)-(4.24), we conclude that as $m \to \infty$,
$$ A_m^{**} - A_H = ( \bar O_m - EO_1 )/ET_1 - ( EO_1 )( \bar T_m - ET_1 )/(ET_1)^2 + O( m^{-1} \log\log m ) \ \mbox{a.s.} $$
$$ \qquad = m^{-1} \sum_{i=1}^{m} (ET_1)^{-2}\{ ( O_i - EO_1 ) ET_1 - ( T_i - ET_1 ) EO_1 \} + O( m^{-1} \log\log m ) \ \mbox{a.s.} $$
$$ \qquad = m^{-1} \sum_{i=1}^{m} W_i + O( m^{-1} \log\log m ) \ \mbox{a.s.}, \qquad (4.25) $$
where the $W_i$ are i.i.d. r.v.'s with mean 0 and a finite positive variance $\sigma_A^2$, defined by (4.19).
This basic representation provides the basic tools for our subsequent analysis. For every $m$, we introduce a stochastic process $Q_m = \{ Q_m(t), t \in [0,1] \}$ by letting $Q_m(t) = m^{-1/2} [mt] ( A_{[mt]}^{**} - A_H )/\sigma_A$, $t \in [0,1]$. Also, let $Q = \{ Q(t), t \in [0,1] \}$ be a standard Wiener process on the unit interval $[0,1]$. Then, by an appeal to weak invariance principles for sums of i.i.d. r.v.'s [see, for example, Sen (1981, Ch. 2)], we conclude that under the assumed regularity conditions,
$$ Q_m \Rightarrow Q, \ \mbox{in the $J_1$-topology on $D[0,1]$, as } m \to \infty, \qquad (4.26) $$
and further, the compactness of $\{ Q_m \}$ remains intact with respect to the uniform topology as well.
A direct consequence of this weak invariance principle is that for every $\varepsilon > 0$ and $\eta > 0$, there exist a $\delta$ ($0 < \delta < 1$) and an $m_0$, such that
$$ P\big\{ \sup\{ | Q_m(s) - Q_m(t) | : | s - t | \le \delta,\ s, t \in [0,1] \} > \varepsilon \big\} < \eta, \quad \mbox{for every } m \ge m_0, \qquad (4.27) $$
so that if $\{ \tau_m \}$ is any sequence of positive r.v.'s such that, as $m \to \infty$,
$$ \tau_m \to t \ \mbox{in probability, for some fixed } t \in [0,1], \qquad (4.28) $$
then, by (4.26) and (4.27),
$$ Q_m( \tau_m ) \ \mbox{is asymptotically normal with mean 0 and variance } t. \qquad (4.29) $$
Now, for every $d (> 0)$, we define
$$ m(d) = \min\{ m \ge m_0 : \sigma_A^2 \le m d^2/\tau_{\alpha/2}^2 \}. \qquad (4.30) $$
Then, by virtue of (3.9) and (4.14), along with the definition of $\sigma_A^2$ in (4.19),
$$ \lim_{d \downarrow 0} P\{ A_H \in I_{m(d)}(d) \} = 1 - \alpha. $$
On the other hand, by (4.20), (4.22) and (4.30), as $d \downarrow 0$,
$$ M(d)/m(d) \to 1 \ \mbox{a.s.} \qquad (4.31) $$
Note that if we define $\tau(d) = M(d)/m(d)$, $d > 0$, then, by (4.31), $\tau(d) \to 1$ a.s. as $d \downarrow 0$, so that by (4.28) and (4.29) (with $t = 1$), we obtain that as $d \downarrow 0$,
$$ \{ M(d) \}^{1/2} ( A_{M(d)}^{**} - A_H )/\sigma_A \to \mathcal{N}(0,1), \qquad (4.32) $$
and combining (4.32) with (4.20), we have by the Slutsky theorem, as $d \downarrow 0$,
$$ \{ M(d) \}^{1/2} ( A_{M(d)}^{**} - A_H )/s_{M(d)} \to \mathcal{N}(0,1). \qquad (4.33) $$
By (4.22) and (4.33), we conclude that
$$ \lim_{d \downarrow 0} P\{ A_H \in I_{M(d)}(d) \} = 1 - \alpha. \qquad (4.34) $$
In the literature, (4.34) is termed the asymptotic consistency of the sequential procedure. In a sense, (4.31) reflects the asymptotic efficiency of the sequential procedure; however, one usually defines this in terms of the following:
$$ \lim_{d \downarrow 0} E M(d)/m(d) = 1, \qquad (4.35) $$
which would fit the Chow-Robbins (1965) definition. Verification of (4.35) naturally requires convergence results on the moments of $M(d)$, and this, in turn, requires the study of the moment convergence properties of $s_m^2$. Note that, unlike the case in Chow and Robbins (1965), we do not have here a reversed (sub-)martingale property of $\{ s_m^2, m \ge m_0 \}$, so that the computation of the first order moment of $\sup_m s_m^2$, needed for the purpose, requires a more elaborate analysis and more stringent regularity conditions. We note in this context that, by (4.1), $0 \le A_{m-1}^{*(i)} \le 1$ for every $i: 1 \le i \le m$, $m > 1$, so that by (4.3),
$$ s_m^2 \le (m-1) \sum_{i=1}^{m} ( A_{m-1}^{*(i)} )^2 \le m(m-1), \ \mbox{with probability 1.} \qquad (4.36) $$
Moreover, using the inequality that
$$ \sum_{i=1}^{m} ( O_i - \bar O_m )^2 \le \sum_{i=1}^{m} O_i^2 \le \Big( \sum_{i=1}^{m} O_i \Big)^2, $$
along with the same for the $T_i$, we obtain from (4.17) that
$$ s_m^{02} \le 4 m^2/(m-1), \ \mbox{with probability 1, for every } m \ge 2. \qquad (4.37) $$
Therefore, from (4.36) and (4.37), we obtain that
$$ | s_m^2 - s_m^{02} | \le 5 m^2, \ \mbox{with probability 1, for every } m \ge 2. \qquad (4.38) $$
Now, we assume that for some $r > 8$,
$$ E| O_1 |^r < \infty \quad \mbox{and} \quad E| T_1 |^r < \infty. $$
Then, in (4.10)-(4.11), we may replace 'a.s.' on the right hand side by 'with a probability greater than $1 - O(m^{-r/2+1})$', while in (4.12)-(4.13) we have the same with a probability greater than $1 - O(m^{-r/2})$. As such, using (4.8)-(4.9), (4.38), and the modified steps in (4.10)-(4.13), we obtain that
$$ E| s_m^2 - s_m^{02} | = O( m^{-r/2+3} ), \ \mbox{for every } m \ge 2, \qquad (4.39) $$
and this ensures that
$$ \sum_{m \ge m_0} E| s_m^2 - s_m^{02} | < \infty, \quad \mbox{for } r > 8. \qquad (4.40) $$
Moreover, we may rewrite
$$ E\{ \sup_m | s_m^2 - \sigma_A^2 | \} \le \sum_{m \ge m_0} E| s_m^2 - s_m^{02} | + E\{ \sup_m | s_m^{02} - \sigma_A^2 | \}, \qquad (4.41) $$
so that, for our purpose, it suffices to show that, for some $q \ge 1$,
$$ \sum_{m \ge m_0} E| s_m^{02} - \sigma_A^2 |^q \ \mbox{converges.} \qquad (4.42) $$
By virtue of (4.37), we may now virtually repeat the proof of a theorem due to Cramér (1946, pp. 353-356) and conclude that, for $q = r/2$, $r > 8$,
$$ E| s_m^{02} - \sigma_A^2 |^q = O( m^{-q/2} ) + O( m^{-q/2 - 1/2} ), \qquad (4.43) $$
so that (4.42) holds by noting that $q/2 = r/4 > 2$.
Note that, by (4.22), for every $d > 0$,
$$ s_{M(d)}^2 \le ( d^2/\tau_{\alpha/2}^2 ) M(d) \quad \mbox{and} \quad s_{M(d)-1}^2 > ( d^2/\tau_{\alpha/2}^2 )( M(d) - 1 ). \qquad (4.44) $$
Combining (4.44) with (4.20), (4.30) and (4.39)-(4.42), we conclude that (4.35) holds.
However, this result is not really needed from the point of view of practical applications, and (4.31), requiring only the finiteness of the second moments, would suffice. We may notice that, in view of (4.16), (4.17), (4.18), (4.44) and the (joint) asymptotic normality of Hoeffding's (1948) U-statistics, weak invariance principles are applicable to $\{ m(d) \}^{1/2} [ s_{M(d)}^2 - \sigma_A^2 ]$, as $d \downarrow 0$, so that we have, under the finiteness of the fourth order moments,
$$ \{ m(d) \}^{-1/2} ( M(d) - m(d) ) \to \mathcal{N}( 0, \nu^2 ), \qquad (4.45) $$
for some finite $\nu^2$, and this provides the asymptotic normality of the stopping time. The asymptotic normality result in (4.45) requires less stringent regularity conditions than the asymptotic efficiency result in (4.35). (4.45) also casts light on the variability of $M(d)$ around $m(d)$, the optimal sample size if $\sigma_A$ were known.
4.2. Sequential tests for availability. Consider the hypothesis testing problem:
$$ H_0 : A_H = \theta_0 \quad \mbox{vs.} \quad H_1 : A_H = \theta_1 = \theta_0 + \Delta, \qquad (4.46) $$
where $\theta_0$ and $\Delta$ are specified, and we like the test to have the prescribed strength $(\alpha, \beta)$, where we restrict ourselves to positive $\alpha, \beta$ for which $\alpha < \frac12$ and $\beta < \frac12$. Since the underlying d.f. $H$ is not known, no fixed sample size test may meet the desired goal, and hence we take recourse to a sequential procedure. In this context, (4.20) and (4.25) play the vital role.
We define two positive numbers $(A, B)$, such that $\beta/(1-\alpha) \le B < 1 < A \le (1-\beta)/\alpha$, and let $a = \log A$ and $b = \log B$, so that $b < 0 < a$. Typically, for small values of $\Delta$, the lower (and upper) bounds for $B$ (and $A$) are taken to be equal. Further, for every $\Delta > 0$, we define a positive integer $m_0(\Delta)$, such that as $\Delta \downarrow 0$, $m_0(\Delta)$ goes to $\infty$ but $\Delta^2 m_0(\Delta)$ converges to 0. Further, for every $m \ge 1$, we define the jackknifed estimator $A_m^{**}$ as in (4.2) and the variance estimator $s_m^2$ as in (4.3). Then, starting with the initial sample size $m_0(\Delta)$, we continue drawing observations one by one so long as
$$ b\, s_m^2 < m \Delta [ A_m^{**} - ( \theta_0 + \theta_1 )/2 ] < a\, s_m^2; \qquad (4.47) $$
if, for the first time, (4.47) is violated for $m = M (= M(\Delta))$, then we stop at that time and accept $H_0$ or $H_1$ according as $m \Delta [ A_m^{**} - ( \theta_0 + \theta_1 )/2 ]$ is $\le b\, s_m^2$ or $\ge a\, s_m^2$. Thus, $M(\Delta)$ is the stopping variable. Note that, in the current context, at the stopping number $M$ we have the data relating to $M$ completed cycles of total time length $T_1 + \cdots + T_M$.
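A sketch of this stopping rule in Python is given below, again reusing jackknife_availability from the sketch after (4.3); the Wald-type boundary choice $A = (1-\beta)/\alpha$, $B = \beta/(1-\alpha)$ follows the text, while the interface and names are ours.

    import numpy as np

    def sequential_test(draw_cycle, theta0, delta, alpha=0.05, beta=0.05, m0=20):
        # Sequential test of H0: A_H = theta0 vs H1: A_H = theta0 + delta, per (4.47).
        a = np.log((1 - beta) / alpha)                # a = log A
        b = np.log(beta / (1 - alpha))                # b = log B
        O, T = [], []
        for _ in range(m0):                           # initial sample of size m0(Delta)
            o, t = draw_cycle()
            O.append(o); T.append(t)
        while True:
            m = len(O)
            A_jack, s2 = jackknife_availability(O, T)
            stat = m * delta * (A_jack - (theta0 + delta / 2))  # m Delta [A**_m - (theta0+theta1)/2]
            if stat >= a * s2:
                return "accept H1", m                 # upper boundary crossed
            if stat <= b * s2:
                return "accept H0", m                 # lower boundary crossed
            o, t = draw_cycle()                       # continue while (4.47) holds
            O.append(o); T.append(t)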
Note that, by virtue of (3.9), (4.14) and (4.20), for every fixed $\theta_0$ and $\Delta$,
$$ P\{ M(\Delta) > m \} \to 0 \ \mbox{as } m \to \infty, \qquad (4.48) $$
so that the proposed test terminates with probability 1; the proof of (4.48) runs parallel to that in Section 6 of Sen (1977), and hence is omitted.
To study the OC and ASN functions of the proposed test, as in Sen (1981, Ch. 9), we consider an asymptotic setup wherein we allow $\Delta$ to converge to 0 (comparable to $d \downarrow 0$ in the earlier problem in Section 4.1), and we set
$$ A_H = \theta_0 + \phi \Delta, \quad \mbox{where } \phi \in \Phi = \{ \phi : | \phi | \le K < \infty \}. \qquad (4.49) $$
Also, we denote by $L_H(\phi, \Delta)$ the OC (i.e., the probability of accepting $H_0$ when actually $A_H = \theta_0 + \phi \Delta$) of the test based on (4.47). Note that as $\Delta \downarrow 0$, $L_H(\phi, \Delta)$, for every fixed $\phi \in \Phi$, converges to a limit $P(\phi)$, and this is given by
$$ P(\phi) = ( A^{1-2\phi} - 1 )/( A^{1-2\phi} - B^{1-2\phi} ), \ \phi \ne \tfrac12; \qquad P(\tfrac12) = a/(a - b). \qquad (4.50) $$
Note that $P(0) = 1 - \alpha$ and $P(1) = \beta$, so that the limiting strength of the test is $(\alpha, \beta)$. By virtue of (4.20), (4.25) and the Skorokhod-Strassen embedding of Wiener processes for sums of i.i.d. r.v.'s, the proof of (4.50) follows precisely along the same lines as in the case of Theorem 6.1 of Sen (1977), and hence is omitted.
Under the additional regularity conditions needed to ensure the existence of the moments of $s_m^2$ in (4.41)-(4.43), we may again adapt the proof of Theorem 6.2 of Sen (1977) and arrive at the following: under the asymptotic setup in (4.49), for every (fixed) $\phi \in \Phi$,
$$ \lim_{\Delta \downarrow 0} \Delta^2 E\{ M(\Delta) \} = \psi( \phi, \sigma_A ), \qquad (4.51) $$
where
$$ \psi( \phi, \sigma_A ) = \{ b P(\phi) + a [ 1 - P(\phi) ] \}\, \sigma_A^2/( \phi - \tfrac12 ) \ \mbox{ for } \phi \ne \tfrac12, \quad \mbox{and} \quad \psi( \tfrac12, \sigma_A ) = - \sigma_A^2\, a b, \qquad (4.52) $$
and $\sigma_A^2$ and $P(\phi)$ are defined by (4.19) and (4.50), respectively. The asymptotic results in (4.50)-(4.52) provide good approximations in practice when $\Delta$ is small.
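For quick numerical use of these approximations, the limiting OC and ASN expressions (4.50) and (4.52) can be coded directly; the short Python functions below (names ours) do so for the boundary choice $A = (1-\beta)/\alpha$, $B = \beta/(1-\alpha)$.

    import numpy as np

    def oc_limit(phi, alpha=0.05, beta=0.05):
        # Limiting OC function P(phi) of (4.50).
        A = (1 - beta) / alpha
        B = beta / (1 - alpha)
        if np.isclose(phi, 0.5):
            return np.log(A) / (np.log(A) - np.log(B))
        e = 1 - 2 * phi
        return (A**e - 1) / (A**e - B**e)

    def asn_limit(phi, sigma_A, alpha=0.05, beta=0.05):
        # Limiting normalized ASN psi(phi, sigma_A) of (4.51)-(4.52).
        a = np.log((1 - beta) / alpha)
        b = np.log(beta / (1 - alpha))
        if np.isclose(phi, 0.5):
            return -sigma_A**2 * a * b
        P = oc_limit(phi, alpha, beta)
        return (b * P + a * (1 - P)) * sigma_A**2 / (phi - 0.5)

    # Sanity checks: oc_limit(0.0) is close to 1 - alpha and oc_limit(1.0) to beta.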
5. PROGRESSIVE CENSORING SCHEMES
We may note that, for the statistical inference procedures described in Section 4, one needs the set of observations $(O_i, T_i)$, $i \ge 1$. In an operating scheme, each observation (vector) $(O_i, T_i)$ requires running the system for the total cycle time $T_i$, and these are nonnegative random variables. Thus, given a predetermined duration of the study, one would have a random number of cycles completed, and based on this random number of observations, one would be required to draw inference on $A_H$. Alternatively, one would allow the system to be operative until a prespecified number ($m$) of cycles is completed, and then base the statistical analysis on $(O_i, T_i)$, $i = 1, \ldots, m$; in such a case, the duration of the study is equal to $T_1 + \cdots + T_m$ and is therefore stochastic in nature. In the usual fashion, we may term the two schemes Truncated (Type I Censored) and Censored (Type II Censored) schemes. Either way, when the joint d.f. of the $(O_i, T_i)$ is not (at least roughly) known, we may face some drawbacks. Either we may have very few observations on which to base a test or formulate an estimate with reliable precision, or we may have to wait too long to obtain the desired number of observations to initiate the statistical inference procedures. It is not unnatural to carry out interim analyses, so that if at any early stage there is a strong indication of sub-standard functioning of the system, one may curtail the study and make corrective adjustments before putting it back into operation. In this context, continuous monitoring of the system is often advocated, and under such a monitoring scheme, progressively censored schemes can naturally be adopted with some advantage.
To be more general, we consider $n (\ge 1)$ independent copies of the system, and, for the $i$th copy, we denote the random variables by $(O_{ij}, T_{ij})$, $j \ge 1$, for $i = 1, \ldots, n$. Also, for every $t (> 0)$, we define the non-negative integer-valued random variables $\{ m_i(t), i = 1, \ldots, n \}$ by letting
$$ m_i(t) = \max\{ k \ge 0 : T_{i0} + \cdots + T_{ik} \le t \}, \quad T_{i0} = 0, \ i \ge 1, \ t > 0. \qquad (5.1) $$
Then, if all the $n$ copies of the system start operating at time 0, we have at time $t$, $N(t) = m_1(t) + \cdots + m_n(t)$ observations (vectors) $(O_{ij}, T_{ij})$, $j = 1, \ldots, m_i(t)$, $i = 1, \ldots, n$, and we desire to draw inference on $A_H$ (as in Sections 3 and 4) in a repeated scheme, where we allow $t$ to vary over an interval $[0, T]$, for some positive and finite $T$. This situation is quite comparable to the sequential schemes in Section 4, excepting that the $N(t)$ are non-negative integer valued random variables, and hence some additional adjustments are necessary to make the theory applicable.
In Section 4.1, we have studied sequential confidence intervals for $A_H$, and, in this context, the stopping number $M(d)$, $d > 0$, plays the vital role. To adapt this sequential procedure to the setup of progressive censoring, we define, for every $d > 0$,
$$ w_d = \min\{ t > 0 : s_{N(t)}^2 \le N(t)\, d^2/\tau_{\alpha/2}^2 \}. \qquad (5.2) $$
Thus, the stopping time is given by $w_d$, and, based on the number $N(w_d)$ of observations, we construct the confidence interval for $A_H$ as in (4.21), with $m = N(w_d)$. At this stage, we may make use of the elementary renewal theorem and conclude that
$$ N(t)/(n t) \to \{ E T_1 \}^{-1} \ \mbox{a.s., as $t$ increases,} \qquad (5.3) $$
so that by (4.20), (4.30), (5.2) and (5.3), we obtain that, as $d \downarrow 0$,
$$ n\, w_d/\{ m(d) E(T_1) \} \to 1 \ \mbox{a.s.}, \qquad (5.4) $$
and
$$ w_d \sim n^{-1} d^{-2} \tau_{\alpha/2}^2\, \sigma_A^2\, ( E T_1 ) \ \mbox{a.s.} \qquad (5.5) $$
Note that in the above derivation we have tacitly assumed that either $n$ is fixed or $n$ is allowed to be large but $n d^2$ is small, so that (5.3) remains adaptable. The stopping time $w_d$ thus depends on $n$ as well.
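A minimal sketch of the progressively censored version of the bounded-width procedure follows. It monitors $N(t)$ across $n$ simulated copies on a discrete time grid and applies the stopping rule (5.2); the time step dt, the names and the draw_cycle interface are our own assumptions, and jackknife_availability is reused from the sketch after (4.3).

    import numpy as np
    from scipy.stats import norm

    def pcs_confidence_interval(draw_cycle, n_copies, d, alpha=0.05, dt=0.5, t0=1.0):
        # Progressively censored bounded-width interval, following (5.2): stop at
        # the first monitoring time t with s^2_{N(t)} <= N(t) d^2 / tau_{alpha/2}^2.
        tau = norm.ppf(1 - alpha / 2)
        pending = [draw_cycle() for _ in range(n_copies)]   # cycle in progress per copy
        clocks = np.array([t_i for (_, t_i) in pending])    # its completion time
        O, T = [], []
        t = t0
        while True:
            while clocks.min() <= t:                        # register cycles done by time t
                i = int(clocks.argmin())
                o_i, t_i = pending[i]
                O.append(o_i); T.append(t_i)
                pending[i] = draw_cycle()                   # copy i starts a new cycle
                clocks[i] += pending[i][1]
            N_t = len(O)                                    # this realizes N(t) of (5.1)
            if N_t >= 2:
                A_jack, s2 = jackknife_availability(O, T)
                if s2 <= N_t * d**2 / tau**2:               # stopping rule (5.2)
                    return t, N_t, (max(A_jack - d, 0.0), min(A_jack + d, 1.0))
            t += dt                                         # advance the monitoring clock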
Having obtained these results, we are in a position to verify (4.31) and (4.34) without any difficulty. To verify (4.35), with $M(d)$ replaced by $N(w_d)$, we make use of the related elementary renewal theorem [viz., Ross (1970, pp. 40-41)] and conclude that
$$ \lim_{t \to \infty} \{ E N(t)/(n t) \} = \{ E(T_1) \}^{-1}. \qquad (5.6) $$
As such, the results in Section 4.1 can readily be adapted to show that (4.35) holds in the current context too. Further, given (5.3), (5.5) and the asymptotic normality result in (4.45), we have the parallel result for the stopping time $w_d$; the inversion theorem via (5.2) yields the desired result.
For the sequential testing problem in Section 4.2, we may replace $m$ in (4.47) by $N(t)$, $t \ge t_0$, for some positive $t_0$. The stopping number $M(\Delta)$, defined after (4.47), has to be replaced by the stopping time $w^*(\Delta)$, defined by
$$ w^*(\Delta) = \min\{ t \ge t_0 : N(t) \Delta [ A_{N(t)}^{**} - ( \theta_0 + \theta_1 )/2 ]/s_{N(t)}^2 \notin (b, a) \}, \qquad (5.7) $$
where $a$ and $b$ are defined as in (4.47). With this minor change, we may prove (4.48) by replacing $M(\Delta)$ and $m$ by $w^*(\Delta)$ and $t$, respectively. Under (4.49), the limiting OC function in (4.50) remains valid, while, for the limiting ASN function in (4.51), if we replace $M(\Delta)$ by $w^*(\Delta)$, then in (4.52) we need to replace $\sigma_A^2$ by $n^{-1}\{ E(T_1) \} \sigma_A^2$; this will then be the average stopping time in the limiting case. Control chart type sampling inspection plans (for $A_H$) under progressive censoring may also be adopted under the usual setup.
6. ACKNOWLEDGEMENTS
Work of the second author was partially supported by the Office of Naval Research, Contract N00014-83-K-0387.
REFERENCES
Barlow, R.E. (1962). Repairman problems. In Studies in Applied Probability and Management Science (Ch. 2), eds. Arrow, Karlin and Scarf. Stanford Univ. Press, Stanford.
Barlow, R.E. and Proschan, F. (1975). Statistical Theory of Reliability and Life Testing: Probability Models. Holt, Rinehart and Winston, New York.
Bhattacharjee, M.C. and Kandar, R. (1983). Simple bounds on availability in a model with unknown life and repair distributions. J. Statist. Plan. Infer. 8, 129-142.
Chow, Y.S. and Robbins, H. (1965). On the asymptotic theory of fixed-width sequential confidence intervals for the mean. Ann. Math. Statist. 36, 457-462.
Cramér, H. (1946). Mathematical Methods of Statistics. Princeton Univ. Press, Princeton, N.J.
Gnedenko, B.V., Belyayev, Yu.K. and Solovyev, A.D. (1969). Mathematical Methods of Reliability Theory. Academic Press, New York.
Hoeffding, W. (1948). On a class of statistics with asymptotically normal distribution. Ann. Math. Statist. 19, 293-325.
Ross, S.M. (1970). Applied Probability Models with Optimization Applications. Holden-Day, San Francisco.
Sen, P.K. (1977). Some invariance principles relating to jackknifing and their role in sequential analysis. Ann. Statist. 5, 316-329.
Sen, P.K. (1981). Sequential Nonparametrics: Invariance Principles and Statistical Inference. John Wiley, New York.