ESTIMATION OF THE SCALE PARAMETER
IN THE WEIBUIL DISTRIBUTION
USING SAMPLES CENSORED
BY TIME AND BY NUMBER OF FAILURES
by
EUGENE H LEHMAN, JR.
and
R. L. ANDERSON
INSTITUTE OF STATISTICS
MIMEOGRAPH SERIES NO. 276
MARCH, 1961
iv
TABLE OF CONTENTS
•
Pflge
• • v
1.0 LIST OF TABLES •
•
•
•
•
•
•
•
•
•
•
•
•
2.0 LIST OF FIGURES.
•
•
•
•
•
•
•
•
•
•
•
• • • • .vi
3.0 INTRODUCTION.
• • • •
•
•
• • • • • • • • •
•
• • •
•
'J+.O REVIEW OF LITERATURE •
•
•
• 1
5.0 PROPERTIES OF THE WEIBtJU. DISTRIBUTION.
• • • • • • • 3
• • • • • • • • 8
6.0 NOTATION.
• • • • • • • .13
•
•
•
•
•
•
•
•
•
•
7.0 MAXIMUM LIKELIHOOD ESTIMATOR OF a
•
0
o
•
•
•
• .17
Test Procedure • • • • • • • • • • • • •
Derivation of the Maximum Likelihood Estimator. • •
Mean and Variance of the Maximum Likelihood Estimator
• .17
of
a.
•
•
•
•
•
o
•
•
•
•
0......
••
Nonmonotonic Behavior of V, A, D as P, N Increase.
Asymptotic Properties of the Estimator • • •
0
•
•
.18
•
.20
•
.32
•
.41
• .27
8.0 COST AND PRICE OF OBTAINING THE MAXIMUM LIKELIHOOD ESTIMATE
OF
a,.
0
.'
•
•
0
•
•
•
•
•
•
•
•
•
•
•
• • • • • • • • • • .42
• • • • • • • • • • .46
COMPUTATIONS. • • • • • • • o • • • o •
• .49
Computer Programs • • • • • • • • • • • • • .49
Demonstration of the Program. • •
• • • • • .53'
Determination of E(d).
Discussion of Results •
9.0
•
•
•
.' .
.' .
10.0 SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FUTURE RESEARCH
LIST OF REFERENCES.
•
•
•
• • • • •
• • • • •
.59
v
1.0 LIST OF TABLES
Page
1.
..
Mean (fJ.) and standard deviati on (0-) of life span (t) for
various IX and M. • • • • • • • • ,.\- • • •
•
• 65
2.
Bias (B), variance (V), and mean square error (D) of
estimator (a) for selected N, R or ~, and P •
• • • 66
3.
Cost (C) and price (U) of estimator (a) for selected
M, J, N, R, and P. • • • • • • • • • • •
•
4.
• • 69
Minimum standardized test duration (S) for various P and M •
78
5.
Information
•
•
• 79
6.
Ordinates of the standardized Weibull density function,
_sM
s
e
w( s; M) ... MM-l
• • • • • • • • • • • •
•
0
0
1
(I = -)
D
obtained from estimator (a) •
•
80
vi
2.0 LIST OF FIGURES
Page
The standardized Weibull density function,
w( s; M) "" Ms
...
2.
M-l _sM
1 (1)
1
e
for M "" 2' '2 2 2' and 0 ~
6
:£ 2 •
•
Variance, V, of the standardized estimator, a, as a function
of N for various combinations of small R and small P •
Mean square error, D, of standardized estimator, a, as
function of P for various (R, N) combinations
•
• 81
•
82
• • • 83
Asymptotic mean square error, A, of standardized estimator,
a, as a function of P for various (R, N) combinations •
5.
Minimum standardized test duration, 5, as a function of
for various M • • • • • • • • • • • • •
6.
Expected life span, J.1, as a function of the scale parameter,
a, of the Weibull density function for various values of
the shape parameter, M • • • • • • • • • • • •
• 84
P
• • • 85
• 86
.3.0 INTRODUCTION
Researchers in many areas are interested in estimating the life
.
span of individuals, be they human beings, animals, automobiles, or
picture tubes.
Each of these individuals is characterized by having a
specific moment of birth, a specific moment of death, and a finite
measurable life span.
For instance, an actuary wishes to estimate the
longevity of a person, an animal trainer desires to know the expected
life of a dog, an electronic technician must have some idea of the
usability period of a transistor, or a retailer needs to know how long
f
his goods may remain on his shelves.
Further, it is desirable to learn
the distribution of the life span of these individuals.
These distributions vary from type to type, but past researchers
have noticed empirically that the probability of death by time t
(at least as great as
~)
often can be adequately approximated by the
Weibull [1951] distribution:
1 _ exp[- (t-p)MJ,
a
F(t) •
o
(.3.0.1)
, t < p
The rtshape" parameter (M), always positive and observed by previous
authors to appear usually in the range (0.5, 2.5), controls the general
appearance of the corresponding density function, f(t) • F'(t).
The nscale" parameter (a), also always positive, indicates the spread
of the individual life spars on an absolute scale;
value of
a
is also the expected
(T_~)M.
The "location" parameter (\3), or minimum guaranteed life duration, is
the starting time of the test, assumed in this dissertation to be zero.
It is proposed to investigate the bias, variance, and mean square
2
error of the maximum likelihood estimator (MLE) ~
of a, based on
various testing procedures for selected values of M.
fixed number
•
(N)
Starting with a
of items on test, two methods of conducting the
experiment have been studied extensively in the past:
1.
Terminate the test after a fixed number (R)
of units has
failed.
2.
Terminate the test after a fixed time
(T)
has elapsed.
This paper will study plans in which both of these conditions are
fulfilled, i.e., stop at time
T if R units have failed, otherwise
continue the test until R have failed.
Finally we will study the
11
cost per unit information" for each
combination of N, R, and T, where "cost" is defined as a linear function
of N and the expected duration of the test, and "information" is defined
as the reciprocal of the mean square error.
3
4.0 REVIEW OF LITERATURE
Of all the diverse literature on life testing, papers germane to this
study are those
•
:r.e rtaining to esti;rL9.tion with a censored sample. A cen-
sored sample is one in which all values outside of a given range are
ignored but their number is known.
It differs from a truncated sample
wherein the number of ignored observations is not known.
There are two
methods by which censoring is performed:
1.
Stop the test after a predetermined number of units (R) have
failed (0 < R < N).
2.
Stop the test after a predetermined time
(T)
has elapsed
(T > 0).
In the following, the random variable of interest is time to failure or
death.
Hald [1949] considered the maximum likelihood estimators of the
mean and variance of a normal distribution for both truncated and censored
samples
0
However, because of the noticeable asymmetry of the distribution
of time to failure, his estimators are of less practical value.
Cohen [1950] obtained maximum likelihood estimators for simply and
doubly truncated normal populations under sampling censored at a fixed
failure time, the maximum likelihood equations being arranged for use in
connection with ordinary normal tables.
Cohen [1951J derived estimators of the parameters in truncated
Pearson type distributions by substituting sample moments for the population moments in the Pearson equations.
He considered these estimators
as first approximations to maximum likelihood estimators.
Gupta [1952] studied sampling from a normal population with censoring occurring after a fixed number of units out of a total sample
of given size
N had failed, rather than after a fixed time as by
4
Cohen [1950].
He considered maximum likelihood estimators and their
asymptotic variances and covariances, but did not attempt, as in this
dissertation, to derive their small sample variances and biases.
Sarhan and Greenberg [1956] continued Gupta's work by finding a best
linear unbiased estimator of the mean and a best estimator of the standard
deviation of a normal distribution under double censoring, that is, the
smallest k
and the largest k values were counted but not measured.
2
l
Epstein and Sobel [1953] derived tests based on time of failure of
the first
R out of N items from an exponential distribution, the re-
maining N-R failure times again being unknown.
Epstein [1954] continued his work by giving one test procedure in
which censoring occurs after R failures with d, the duration of the
test, a random variable, and a second test procedure in which the experiment terminates after T hours, so that
failures, is a random variable.
r, the number of recorded
Epstein considered only the Weibull dis-
tribution with parameter M=l, that is the ordinary exponential.
The
estimator in the first case is:
1\
~
where t
r
1.
.JL
[(N-Rh R + ~ t ],
= -R
r=l
r
~ 0 is the life span of the r th successive failure. The quan-
'"
2
tity ~ is a X variate with 2R degrees of freedom.
ex
mator ~R has expectation ex
and variance
2
ex /R;
ance and is unbiased.
In the second case the estimator is:
AIL.
ex = - [(N-r)T + > t ], r
r
r
N i
f
O.
Thus the esti-
it has minimum vari-
5
There is no MLE if r :: O.
2~
minimum. variance, but
2r degrees of freedom.
The estimator ~
r
is neither unbiased nor
2
r
is asymptotically distributed as X with
cx
Since it is a maximum likelihood estimator, it
is asymptotically minimum. variance and is consistent.
Epstein and Sobel [1954] investigated the properties of maximum.
likelihood estimators of
cx
in the exponential distribution with fixed
R, but added nothing to Epstein's earlier [1954] article.
Jaech [1955] in an unpublished manuscript at Hanford, Washington,
set forth a test to determine if 'two different kinds of tubes, each distributed as Weibull with the same M, have a common cx.
stopping the test when Rj
of population j(j=1,2)
He proposed
have failed.
A
used the estimator ~O of this thesis as his
A
cxR .' showed that
J
follows the X
2
He
densi ty function with 2R.
J
2R j CX
cx
degrees of freedom and de-
rived the power of his test.
Deemer and Votaw [1955] obtained a maximum likelihood estimator for
the parameter
c in the exponential distribution
ce-ex,
x
nonnegative,
for time censored sampling.
Herd [1956] considered multi censored sampling, that,
ki(~ 0)
units removed from the test at the time of the i th ordered failure;, the
test terminates after R failures.
Tilden [1957J in a Master's thesis at Rutgers considered two very
simple statistics, the median ~ and the half-range ~ - xp where Xi
is the life span of the i th successive failure, and ~ is unconventionally defined as
XN
for even N.
2
Kao [1956] considered censored sampling and stated that in his experience life spans follow the Weibull distribution very frequently indeed, with the exponent
M having a value in the neighborhood of 1.7.
6
Mendenhall [1957] considered the case of a population made up of two
exponential distributions (that is, M=l) with different scale parameters
a. (i=1,2).
l.
He derived MLE's and showed they had large biases and vari-
ances for small sample size or short test duration.
He discussed an
"adjusted estimation procedure" suitable in such cases when at least the
order of magnitude of the ratio of the
a
is known. Mendenhall and
i
Hader [1958] covered substantially this same material under the same
title.
Mendenhall [1958] compiled a thorough bibliography on life testing
research.
Zelen [1959] analyzed factorial experiments in life testing.
Let
there be at least two factors affecting life span - temperature and voltage, say - and let life test be conducted using several levels of each
factor.
Procedures are given to estimate main effects and interactions
as in ordinary factorial analysis, confidence limits are presented, and
robustness of the tests discussed when the shape parameter in the
Weibull distribution (M in this paper) is poorly guessed.
are EQ1 robust.)
In each case the parameter
M
a· E(t)
(The tests
is being esti-
mated, and the effect of the factors and interactions upon
a
studied.
Zelen [1960] gives likelihood ratio tests for analyzing the results
of a two way classification with respect to the scale factor of the
exponential distribution.
and effect
meter (a
where
When effect
B occurs at level
j (j
A occurs at level i (i • 1, ••• , a)
= 1, ••• , b) then the scale para-
in this dissertation) is called Qijo
In m is the analog of the mean effect,
Zelen defines
In a i
and In b j
are
analogs of the main effects and
ln
~ij
7
is the analog of the interac-
tion in the ordinary fixed effects model of analysis of variance.
Zelen and Dannemiller [1961J ' emphasize the non-robustness of
tests on the scale parameter when the exponential distribution is erroneously assumed to be correct under four testing plans:
fixed sample
size, fixed sample size with censoring, truncated nonreplacement, and
sequential.
Kao [1956J presented a graphical method of estimating the parameters
when the population consists of a mixture of two different Weibull-distributed variates.
Epstein [1960J warns against assuming all life spans are distributed
exponentially.
He gives a set of tests to determine if the exponential
distribution is appropriate to the data in hand, and suggests use of the
Weibull distribution with shape parameter M other than 1.
Mendenhall and Lehman [196OJ consider the usual MLE for
Weibull distribution with location parameter
censoring.
E(r-k)(k
~
a
in the
= 0 and fixed time of
The first two negative moments of the binomial distribution
= 1,2), were needed to compute the mean and variance of ~.
A Beta-function approximation to the binomial was used for these computations.
8
5.0 PROPERrIES OF THE WEIBULL DISTRIBUTION
This dissertation is concerned with the estimation of the scale
parameter a
in the Weibull cumulative distribution function with no
~
"guarantee" period, i.e.,
=
F(t)
of equation (3.0.1) is zero:
1 - exp(-t
M
fa), t non-negative
,
o
t
negative.
The properties of this function vary considerably as
o
to
M proceeds from
It is easier to study these effects if we examine the char-
00.
acteristics of the accompanying density function
f(t)
= F' (t) for var-
ious positive values of the shape parameter M.
MtM-1
=
f(t)
----- exp(-t
a
M
fa), t non-negative
,
o
t
negative
whence by integration we obtain
00
E(t)
=
St
f(t)dt
= al/ M (~
L)
o
and
2
00
=
V(t)
~
[t _ E(t)]2 f(t)dt
= aM [(~
!) _ (~ !)2],
o
00
where we employ the notation ! \ .
M •
5
x K/M e-x dx because of its
o
brevity over the more conventional symbol \(~
Let
s
+
1) 0
= t/al/ M be a standardized time variate. The density function
9
for
is
8
M8
w(s)
M-l exp (M)
-8 ,
8
non-negative
,
8
negative;
Table 6
c
o
and
lists ordinates
Figure 1 presents graphs of w(s) for M:: 0.5(0.5)2.5, and 0
For M< 1, w(O)
s, w'(s)
exists and has the value
is always negative and w"(s)
O.
For
positive, indicating
an ever,ywhere convex monotonically decreasing curve with no mode.
compare W(8)
s ~ 2.
is undefined, although the corresponding cumula-
tive distribution function W(O)
positive
~
If we
for two values of M both less than 1, we note the
curve of lesser M exceeds that of greater M for very small values of
s, then they cross; thereafter the curve of greater M is above that
of lesser M.
If M:: 1, w(s)
is the ordinary exponential density function, in
general appearance similar to the above except it is defined at
s
=
0
and has a mode there of value 1.
For 1 < M< 2, w(O)
= 0,
and w(s)
increases as one moves to the
11M
right to a genuine mode at the point
w( s)
The second derivative,
=
s
max
=
(M-l)
M
w
•
max
w"( s)
is negative from
s:: 0 through and a
little beyond
curve.
< 1 where
s
, indicating concavity at the left portion of the
max
There exists, however, the point
3(M-l)
s = Sip • [
+
v'(M-l)(5M-l) ]1/
2M
M
'
10
w (s) = 0, an inflection point, to the right of which the curve
where
l1
becomes convex as it asymptotically approaches the horizontal axis.
This inflection point is to the left of, at, or to the right of
according to whether M <
As
M
approaches
";'2,
M
= ./'2, or
2, Smax moves slowly to the right, although
the curve resembles the case 1 < M < 2.
As
1,
./"2.
M>
always well to the left of 1, while Wmax increases.
If M" 2, there is an inflection point at the origin.
the X 2
S"
otherwise
y .. 2s2 follo'Ws
Note that
distribution with 2 degrees of freedom.
M
continues to increase from 2 to
00,
approaches 1,
Smax
W
approaches ~e -- that is, increases without limit in proportion
max
to M -- and the two roots of w",
s
ip
(5.0.8)
..
are positive; the smaller is less than 1, the larger greater than 1.
Both roots approach 1 as
M
-->
00.
Thus the peak becomes very tall and
narrow about 1 until in the limit the entire distribution is concentrated
there.
A particularly intriguing feature of the Weibull density function
is the surprising fact that the probability PEs > 1]
s = 1 (or after t = a1/ M) is:
of a failure after
00
1 - W(l) ..
S
M
s
~l
exp(-sM) ds .. exp(-sM)
1
for all
1
M.
1
e
== -
=
00
If M is small, the 36.8% of the distribution that exceeds
is greatly skewed to the right.
For instance, if
bility of a very late failure, say beyond
S"
32, is:
1
M = 5' the proba-
11
1 - W(32)
= exp(- (;32) = e-2 "" .135,
indicating 13.5% of all failures occur even later than that extreme
value.
This is, however, consistent with the variance as given in (5.0.4);
for if
M ""
*
V(s)
If
s
= 1,
(an unusually low value),
= 10L
- (5l)2 "" 3,628,800 - 14,400
M is large, there is et.ill 36.8%
= 3,614,400.
of the area to the right of
but the distribution is much less skewed.
For instance if M "" 10
(a very large value),
1 - W(l)
= .368
(5.0.12)
as always, but
1 - W(l.l) = e
indicating that 29.)%
_1.110
2 59
= e-·
=
.075,
(5.0.13)
of cases fail in that narrow range (1, 1.1).
However, this is consistent with the very small variance given by (5.0.4),
for if
M "" 10:
Thus if a manufacturer has his choice of producing one of several similar
items but with different
M, he may wish to select that item for which
M is greatest, for it will have the most uniform life span.
It is now instructive to study the notion of
II
hazard. 11
The hazard
is the instantaneous tendency to fail - that is - it is the probability
of failing in a given time interval after having survived up to the beginning of that interval.
Let the hazard [z(t)]
MtM-l
---1ill = z(t) .. r::F(tJ
a
,
t
be defined thus:
non-negative.
Then the conditional probability of failure in the interval
(t, t
given the individual has survived until time t, is proportional to
If M is less than 1,
decreasing hazard.
t
dt)
+
z(t).
has a negative exponent, indicating a
For instance, in the case of newborn humans,
M is
less than 1, because the probability of death in the first few moments
is high, and decreases rapidly with each hour of life.
If M = 1, the hazard is constant.
If
M exceeds 1, the hazard increases, as in the case of aged
humans wherein the probability of death during the following year increases
with age.
The hazard increases at a decreasing rate, a constant rate, or
an increasing rate, according to whether M < 2, M = 2, or M> 2.
If M approaches either zero or infinity, so does the hazard.
13
6.0 NGrATION
a
• Wa [see Section 7.3]
A = Cramtr-Rao lower bound on mean square error [see equations (7.5.1)
and (7.5.27)]
= scale parameter in Weibull distribution function; also E(t M) [see
a
equation (5.0.1)]
~
= maximum likelihood estimator of a [see equations (7.2.3) and
• fractional bias of ~
B(a)
~
= starting time of test;
C = cost of a test
d
= E(a) - 1 [see equation (7.3.15)]
~
= 0 in this dissertation
• N + JE(d)/a!/M [see equation (8.0.1)]
= duration of test [see Section 7.1]
Rth failure occurs on or before T
th
• t R if the R
failure occurs after T
D(a) = mean square error = (Bias)2 + Variance = [B(a)]2 + V(a)
=
T if the
[see
equation (7.3.28)
EO
= incomplete expectation operator for the Case RO [see equations
E
= incomplete expectation operator for the Cases r [see equation (7.3.2)]
r
N
Ea[x(r)]
=>
x(r) her), the incomplete expectation of x(r), where
x(r)
r=R
is any function of
r.
f(t)
= Weibull density function [see equation (5.0.2)]
F(t)
= Wei bull cumulative distribution function [see equation (5.0.1)]
g*(t )
R
= marginal density function of t R, the Rth failure time
= R(~) [F(tR)]R-l [1 - F(tR)]N-R f(t R) [used in deriving equation
(8.0.5)]
14
)
g (qR
h(r)
N)
R( R
=
R-l N-R
~
qR
= binomial probability of r failures (out of N on test) up to
time
T
N
H(R)
= >
h(r), the upper-tail binomial cumulative probability, some-
r=R
times abbreviated to simply H
I
•
1/D, the definition of information used here [see equation (8.0.2)]
J
=
constant in the cost function, a ratio:
the cost per unit time of
15
=
p
F(T)
M
=1 -
la) = probability that a given life span does not
exp(-T
exceed '1'
=
q
1 - P
~ =
= exp(_dM/a)
= exp(-t~a)
1 - Pi
Q = 1 - P = exp(-TMYa)
=
r
actual number of failures in the test (up to time
d) [see Section 7.1J
• R, R + 1, ••• , N
=
R
minimum number of failures required before test terminates [see
Section 7.1J
RO •
case in which
s
=
S
=
= R,
r
d> T
t/al/ M, a standardized time variate [defined after equation (5.0.4)J
M
'1'/ al/ , minimum duration of test expressed in units of the standardized time variate [see equation (9.1.2)J
S
Q
N)
(R
=
Smn
pR-n qN-R+n-l
1 m dq [
~see
)J
equation (
5.11
o
t
t
=
continuous time variate
""
time to failure of the
r
T
=
r th individual
minimum duration of the test [see Section 7.1J
U = C/I· CD
= cost
per unit information or price of the test [see
equation (8.0.2)]
v(d)
v(q)
=
density function of test duration [used in equation (8.0.5)J
•
H~
=
density function of survival probability q
...
g( q)
• H,
if d "" T
if q
if
q
=
=Q
qR
<Q
16
v =
variance
w(s)
•
density function for standardized time variate"
s
[see equa-
tion (5.0.5)J
W(s)
··cumulative distribution function for
y(q)
= pR-1
S [see equation (5.0.9)]
. t egrand use d'~n comput ~g
.
qN-R ( -~IJ) 1/M • ~n
E(d)
[see
equation (800.6)J
z(t)
M
Mt - 1
= 1 _f(t)
F(t) = a
"the hazard function [see equation (5.0.15)J
~
E(t)
•
= al/ M (~1) [see equation (5.0.3)J
M
52. V(t). a2/ [~
l-
(~
!)2 J [see equation (5.0.4)J
17
7.0 MAXIMUM LIKELIHOOD ESTIMATOR OF
a
7.1 Test Procedure
This dissertation will consider experimental and analytical procedures
to estimate the scale parameter (a) in the Weibull distribution
(5.0.1),
where the shape parameter (M) is assumed known in advance of testing.
maximum likelihood estimators
of
al/
M
=
Given
a
E(t)/(~
(~)
is the MLE
!).
N items put on test at
t
0, the maximum information on
=
would result from an experiment in which all items were allowed to
fail; hence, the time to failure
(i
~l/M
will be considered;
Only
=
1, 2, ••• , N).
and T
=
O.
would be available for all items
R= N
In our terminology, this would mean that
The mean of the
estimator of
(t )
i
a, since
E(tm)
i
t m would be the minimum variance unbiased
i
=a
and
..1L
[~
Var
m,
t1N)
= a 2/ N.
The
i=l
Cram{r-Rao lower bound for an unbiased estimator of
a
is
2
a /N.
However, since the cost of conducting the experiment is a function
n~
of the length of time (d) to termination of testing as well as the
ber of items (N) on test, it is necessar,y to examine a variety of test
procedures and choose one for which the cost per unit information is
minimized.
E(d)
and
In this dissertation, it is assumed that cost is linear in
N.
either at time
Previous test procedures have considered termination
T or after
R failures.
In this dissertation, the
test will be continued until both of the following events have occurred:
(1)
At least
R units have failed.
(2)
At least
T time-units of testing have elapsed.
(In the event all
N units fail before
T we consider
d
=
T as
the duration of the test even though there are no units remaining to
fail between t
N
and
T.
In a practical situation,
T would be set
18
small enough to make the probability of this event negligible.)
7.2 Derivation of the Maximum Likelihood Estimator
Two different cases cover all situations; these will be called Case
R and Case r. The former case is for the situation in which less than
O
R failures have occurred before T; hence, there will be exactly R
failures at the termination of testing.
than T; i.e.,
d
occur up to time
> T. The latter case occurs if R or more failures
TJ the test duration will be
failures (r) will be at least
N- R + 1
The test duration will be greater
R, i.e.,
d
r - R, R
=T
+
and the number of
1, ••• , N.
These
situations will be designated as Case r.
In Case RO(r
th
d > T), the likelihood for the case of the R
= R,
ordered failure occurring at time t R > T is
R
M-l
1 ..1L M
M
(t i )
exp [- - (~ t i + (N-R)tR»).
i=l
a i=l
.
~
[In this and all other
~ and
cases where symbols
are used we under-
E
stand that if the upper limit is less than or equal to the lower limit then
a-l
~
i-a
a-I
x(t i ) = 1,
?-- x(t i )
~=a
a
- 0, and
~
i-a
a
x(t i )
=~
~=a
x(t i )
= x(t
)
a
where x is any function of t .] Hence the logarithm of the likelihood
i
differs by a constant independent of a
The MLE of
a
for this case is
from
19
e
It is convenient to set
q.1
Li
in
..
t~/O:I
qi ::: -
i.e., dqi = f(ti)dt i
and
= 1 - F( t.).
In terms of this ..L -notation l
1
1\
0: 0
-!L
1
a 0 •0-: • - -R [>
N
0
/J
~.]. + (N-R)~'R].
The probability of Case R occurring is simply
O
R-l
~
-r-O
(N) [F(T)]r [1 _ F(T)]N-r .. 1 - H.
r
In Case r (r .. RI R + 11 ••• , or N; d = T), the likelihood for the
case of
N
r
failures within the time interval (0, T) is
N
r
N
Mr
r
() n f(t.) [1 - F(T)] -r .. ( ) (-)
r i-I
].
r
0:
M-l
n t
i=l i
TM
1
The logarithm of the likelihood differs by a constant independent of
from
£.r .. _r in
Hence the MLE of
1\
0:
0:
!
0: _
0:
[>r t M + (N _ r).f!].
r:r
i
for this case is
1 -!- M
M
[> t i + (N - r)T];
.. r
r
r:r
r" R, R
+
1, ••• , N.
In terms of the ~-notation
A
0:
1
a
r
r
a = --r • - - [>
r
/J
~. +
1=1
i
M
r
exp[- (N-r) -- - - ~ t ].
0:
0: W
i
(N-r)L],
0:
20
where
L
=-
M
T
fa.
The probability of any single Case r occurring is
N
>
h(r)" H.
r=R
7.3 Mean and Variance of the Maximum Likelihood Estimator of
"
We note that
E(~)" a E(a)
V(~)" a 2 V(a), where a" ~/a.
and
It is more convenient to study the random variable
k
k
E(a ) .. EO (a O)
$,.. ,
+
N
k
~ E (a )
-
r-R r
r
a.
We note
(7.3.1)
k
where EO(a ), the first term in (7.3.2), is the incomplete expectation
k
of ak for Case RO' and Er(a ), the second term in (7.3.2), is the incomplete expectation of ak for each Case r. That is:
where
..•... TM
Q .. e
--a •
In (7.3.2) we consider t l , ••• , t R_l unordered but all less than t RI
that is, - .il , ... , -LR-l unordered but all less than (-.f ).
R
20
where
L" - TM/a.
The probability of any single Case r occurring is
N
>
r-R
7.3
h(r)" H.
Mean and Variance of the Maximum Likelihood Estimator of
--
We note that
E(~) .. a E(a)
and
V(~) .. a 2 V(a), where a" ~/a.
It is more convenient to study the random variable
k
k
E(a ) .. EO (aO)
•
((~
N
+ ~
-
r-R
a.
We note
k
r
E (a )
r
(7.3.1)
where EO(ak), the first term in (7.3.2), is the incomplete expectation
k
of ak for Case RO' and Er(a ), the second term in (7.3.2), is the incomplete expectation of ak for each Case r. That is:
[
1
1
qR
qR
R-l
(J .•• S [- Li=l ~i -
where
.!,. TM
Q .. e
--a •
In (7.3.2) we consider t , ••• , t _
that is,
-
1
l , ... , -LR_l
.i
R l unordered but all less than t R,
unordered but all less than
).
R
(-.t
21
Let us first evaluate E(a).
We make use of the following definite
integrals, if R > 1.
1
S·..
q
1
)
.£j
q
R-1
'It
dqi = - (1 _ q)R-
2
i=l
1
)
~j dqj = pR-2(p+q1), j=l, ••• , R~l,
q
and
1
1
S·.. 5
r
J1
..t. j
Q
Q
Hence for
k
= 1,
r-l (p + QL), j=l, ••• , r •
. 'It dqi = P
~=1
the part inside the curly brackets of the first ter.m of
(7.3.2) is simply
R-2
0
t7 R-1
(R - l)p
(p + q~) - (N - R + l)~p
,
where the subscripts
ience.
R
on
p,q and ~ have been deleted for conven-
If R· 1, the part inside the curly brackets is merely
(- N.i).
Similarly, that part inside the curly brackets of the second term of
(7.3.2) is
r pr-1 (p + QL) _ (N. _ r)LPr
=r
pr-1 (p + L) - NLF'.
Hence
In order to perfor.m the integration in the first ter.m of (7.3.8), a
22
procedure will be introduced which again will be needed in evaluating
2
E(a ). Let
Q
S
p
R-n q N-R+n-l
0
~
m dq; m = 0,1,4; n
= 1,2,3.
°
Hence
E(a)
=
(R-l) (SOl + S12) - (N-R+l)Sll + H(l +
~) - NLEH (l/r),
where
A well-known relationship between the beta and binomial cumulative
distribution functions is
By use of integration by parts on Sll' it can be shown that
Applying (7.3.13) and then (7.3.12) to (7.3.10), we have
E(a) = RSOl -
=1
+
~ h(R)
+
H(l
+
~)
-
NLEH(l/r)
~ [H - Qh(R)] - NLEH(l/r).
23
Hence the fractional bias of ~
as an estimator of
In order to obtain the variance of
by setting k:: 2
for
"l
and
in (7.3.2).
qt td
N
-
is
2
a, we first determine E(a )
Hence, after deleting the subscript
R
q, and bearing in mind the bracketed statement after (7.2.1),
1
1
+>
a
1 (N) QN-r
-::;
r
r-R r G
...
R-l
S
1t
i-I
q
dq. ] qN-R dq
~
t {l ...
where
Zl ::
.L.
2
= 2(N-r)L
LL j ;
i-I
.L.
~..1, j; Z4 :: (N-r)
2
2
L.
j=l
In this case the following definite integrals are appended to (7.3.4)
1
1
S·.. S
L
q
q
2 R-l
j
.1t
~::l
R 2
dqi'" p -
(2p + 2ql
24
r
dq. = P'~-1 ( 2 + 2QL - QL2) ;
n
~
i=l
and
1
1
S~"i 5JjJ
~ d~
k
Q
=
pr-2 (p + QL)2, j
f
kj j, k = 1, ••• , r.
i=l
Q
(7.3.21)
If R = 1, the first curly brackets in (7.3.2) becomes
(_N~)2.
Apply-
ing (7.3.4), (703.5) and (7.3.18) through (7.3.21), we have
E(a 2) =
*
(~)
Q
S
qN-R [(R-1) pR-2 (2p
+ 2q
i _ qf., 2) + (R4:1.)(R_2)pR-3(p+ql)2
q=O
_ 2(N-R+1) (R-1) pR-2 (p +
This result is true for both R
oj) L
=1
+ (N_R+1)2 pR-1 .L 2]dq
and R > 1.
After expanding in powers of p, q, and
1..,
and using the S-notation
(703.9), we have
2
E(a )
1
=R
[R(R-1)SOl
- 2(R-1) (N-R+1)Sll + 2(R-1)
2
S12
25
Again by use of integration by parts, it can be shown that
2
(N-R+l)5 21 = (R-l)522 - 2511 + ~ h(R);
and
Applying (7.3.24), (7.3.25) and (7.3.13) to the first bracketed part of
(7.J.23)we obtain
*
2
[2R(R-l)S12 - 2R(N-R+l)Sll + R(R-l)SOl + (N-R+l)
~2
~ h(R) - (R-L) ~ h(R)]
P
Finally using 7.3.12 and collecting terms, we have
2
E(a 2) = 1 + 1 + H[- 1 + 2L + L ] + QLh(R) [L(NP - R+l) _ 2J
R
R P
p2
P
RP
2 2
+ N L
2
~(J./r) + [1 - 2NL -
2N0 ~
-p
- 2 ]~(l/r).
p
26
The mean square error of a, that is;-the sum of the variance and the
squared bias of
a
is
222
· !
+
R
The variance of a
RL - P H + Q&: (NP _ R + 1) h (R)
~
RP2
is simply
It is noted in passing that, when
R· N (all units are required
to fail),
Substituting these in the above results, we have
B[a(N)] •
1
D[a(N)] • V[a(N)] = -N
where
a· a(N)
+
~ - NLPN/N • 0; E[a(N)] • 1;
_N
L2
1
p2
N
Y" (-- - -
in this case.
~(N) is the minimum variance,
m. 2
m 2
m. 2
1
')L2
m 2
+ ~ - ~ + ~ + - - --- - ~ +
P
p2
NP2
N
P
~
Hence this checks that the variance of
2
a. /N.
2
1
L ) = N- ,
27
If R
fined for
=:
0,
r
=:
we have in all cases the estimator
O.
If T
=:
0, we have the first case described by Epstein
"O;
[1954], that is the estimator is always
and its variance is
a
2/R.
a
the estimator is unbiased
If T is allowed to increase without limit,
we have the same situation as for
R
=:
N.
Mendenhall and Lehman [1960] tacitly assumed
T.
Hence in their case
H = 1 - QN and h ( R)
given by tables in their paper.
B[a(l)]
D[a(l)]
=
( "a ) whi ch is under
R
= 1 with a fixed
= NPQN-l ; ~ (-k)
r
is
Hence
L (1 - QN - NPQN) - NLE (1I r)
=p
H
2
2
p 2NL2p
QL2
2 2
2
L ( l-QN) + QN + 2.-2 N
- N-L Q +
2 ~(l./r) + N L EH(l./r ).
p2
P
7.4 Nonmonotonic Behavior of V, A. and D as P and N Increase
In section 9.1, the computational methods by which values of V, A,
and D were obtained from electronic calculators will be presented.
In
this section the results of this computation are discussed.
Empirically, we observe several strange events for which explanations
have been attempted.
Conjectures are stated which are drawn solely from
an observation of the results, but for which a rigorous proof has not yet
been obtained.
The bias of ~
II
as given in (7.3.15) is a function of L, P, and other
constants ll which are themselves dependent upon a.
"a
cannot be used as an unbiased estimator.,
values of
P =1
P and
or R = r.
Hence a function of
The bias is large for small
Rand appears to decrease, becoming 0 when either
2S
The variance is observed to increase with increasing N if
P and
R are small until it reaches a peak; with further increases in N it
decreases again, approaching the asymptotic value.
of the peak varies with
P and
R.
Another perplexing feature of
with increasing
As
P increases,
P for constant
V is its nonmonotonic performance
R and
N.
If
P is
V drops to a minimum at or near P •
increases despite the increasing test duration.
upon ~, generally in the neighborhood of
mum.
The location and size
0,
N;R
V
.
1
R:0
J..s
V then
At a point depending
P = 1 - :N' V attains a maxi-
Thereafter it decreases, monotonically it is conjectured, with in-
creasing
1
N
P to a value of
at
P
= 10
These oddities are depicted
graphically in Figure 2, and numerically in Table
The mean square error
D and the asymptotic variance
have, peculiarly as functions of
Table 2.
~.
A also be-
P, as shown in Figures 3 and 4 and
1 if P
V, both have the value R
Note that, like
off to a minimum at or near
P
= ~;
P
=1
R
- 2N' and finally decrease
again (monotonically, it appears) to the value of
i
at
P = I.
The results are illustrated by the following examples.
= 1 then for
N~5,
V(~)
= .963, .886, .776
and .859.
p·.3 and R = 3, then for
N·
we have
If
P
= .05
10, 20, and 40 (respectively) we have
Observe the down-then-up movement.
Even if
and fall
after this, they increase to a maximum
which is more or less in the vicinity of
and R
=0
5, 10,
20 and 40
(7.4.1)
29
v(I(?) = .322, .269, .280 and .138.
Note that the movement is still wavy.
But if
P
= .7,
then for
R
= 1,
the corresponding variances are
v(~) .... 904, .260, .0905 and .0398,
which is monotonically decreasing, at least for these
jectured) for all
(7.4.3)
N and (it is con-
N.
The following numerical example points up how for even moderately
large
S (in this case,
is nonmonotonic in
If we set
P = ~ or S '" ~ .69315 ) the mean square error
N for small
R" 1, and
R.
P '" ~, then
'" - .• 69315,
.... 48046,
her)
h(l)
1 - H
... (N) 2-N,
r
.. N 2-N ... heR),
... h(O) ... 2-N•
From the definitions and discussions in Section 7.2 and 703 it is
evident that
30
E(a )
I
I~
nr dq
r
~l
V(a )
nr
r
i=l
z::
her)
= 1+(-N - 2) .69315 = l+B(a )
r
dq. her) -[E(a)J 2
r
.039094
r
= -1 (1 r
(from 7.3.4)
r
whence the mean square errore: (D) are
2
N2 L
D(ao)
•
D(a )
r
= [B(ar )J2
1
+
+ V(a ) •
r
Now
E(a)
= (l-H) E(ao)
N
+
>
r=l
her) E(a r } = 1 + B(a),
N
+
>
r=l
her) [V(a ) + (E(a ) - E(a»2 J
r
r
and
N
D(a) = (l-H) D(a ) + ;-- her) D(a ).
O
r=l
r
Using these last two sets of formulas we perform the following calculations to show the contribution each partial estimator,
makes toward the total
last row for each N.
B, V, and D of
a.
a O or a ,
r
The latter is shown in the
~
2)
P
31
B(a )
r
V(a )
r
D(a )
r
her)
1
-1
0
1
.039094
1
1.48046
.51955
1
.5
2
0
-1
.25
1
.039094
.• 019547
.84497
0
1
2
3
3
1
-1/2
-1
.4375
0
1
2
3
4
0
1
2
3
N
r
1
0
1
.69315
all
2
0
1
2
.25
1
.039094
0019547
.013031
.88245
5.3241
.51955
.13967
.49349
097441
0125
.375
.375
.125
4
2
0
-2/3
-1
.52083
1
.039094
.019547
.013031
00097735
.99747
8.6874
1.9609
0019547
.22657
.49023
1.12780
.0625
.25
.375
.25
.0625
5
3
1/2
- 1/3
- 3/4
- 1
.52865
1
.039094
.019547
.013031
.0097735
.0078188
1.C17752
13.012
4.3632
.13966
0066415
.28003
.48828
1.21179
.03125
.15625
.3125
.3125
.15625
.03125
1
6
4
1
0
- 1/2
- 4/5
1
.49531
1
.039094
.019547
.013031
.0097735
.0078188
.0065158
1.07638
18.297
7.7264
.50000
.013031
.12989
031538
.48698
1.19425
.015625
.09375
023430
.3125
.23438
.09375
.015625
0
1
2
3
4
5
6
7
1
.039094
.019547
.013031
.0097735
.0078188
.0065158
all
7
5
3/2
1/3
- 1/4
- 3/5
- 5/6
1
.44520
24.542
120031
1.1006
.066415
.039802
018079
034017
048605
1.11135
.0078125
.054688
.16406
.27344
.27344
016406
.054688
.0078125
1
0
1
10
8
49.046
300788
.00097656
.0097656
all
4
all
5
4
5
all
6
0
1
2
3
4
5
6
all
7
10
1
2.9218
.039094
.50000
087500
all
3
·5
-
-
.00~5850
1001 12
1
.039094
.25
~5
1
1
1
1
32
2
3
4
5
6
7
8
9
10
all
.019547
.013031
.009'7735
.0078188
.0065158
.0055850
.0048868
.0043437
• 0039 4
.6758
3
4/3
1/2
0
- 1/3
- 4/7
- 3/4
- 8/9
1
.30058
-
W
These results show that both
increases from 2 to
4.3437
.88581
.12989
.0078188
.059000
.16246
.27515
.38396
.48437
.71927
.043945
.11719
.20508
.24609
.20508
.11719
.043945
.0097656
·00097656
1
B and V (and hence D) increase as
N
5, then decrease, undoubtedly monotonically. This
can be explained as follows.
B(a
o)
and B(a )
r
are linear with positive slope in N.
Hence,
B
1
In the cases of a O and a r where r < 2 N, B(aO)
hence rB(aO)r and IB(a r )! increase with N.
increases with N.
and B(a » 0;
r
V(a ) and V(a ) are independent of N. Hence
D(aO) and
r
O
D(a ) increase monotonically with N if r < ~ N.
r
Now the probability of using these early a's decreases ,with N, but not
entirely monotonically for small positive
r.
So at first, the decrease
if any is insufficient to eliminate the influence of these early a's
with their large
B, V, and D.
But as
probability of using these early,.
a's
N continues to increase, the
diminishes to insignificance
and thus a point is reached where the natural effect of increased sample
size predominates. For R ~ 1,
7.5
P
= ~,
this point is at
N = 5.
ASymptotic Properties of the Estimator
Asymptotic properties of MLE have usually been studied mainly in the
case of independent observations.
Wald [1948J showed that under certain
restrictions on the joint probability distributions of the observations,
the ML equation has at least one root which is a consistent estimator of
the parameter to be estimated.
33
Furthermore, any root of the ML equation
which is a consistent estimator of
efficient.
~
is shown to be asymptotically
Wald shows that, if four conditions hold, the maximum likeli-
hood estimator is both consistent and asymptotically efficient "in the
By "in the wide sense" is meant the estimator need not be
wide sense."
asymptotically normal.
Wald's article is concerned with unbiased estima-
tors for which the Cr~r-Rao lower bound on the variance, the asymptotic
variance, is the reciprocal of
where ~ is the logarithm of the likelihood.
/
Since our estimator is biased, the Cramer-Rao lower bound is on the
mean square error.
This lower bound is
Hence we will show that Wald's conditions hold for
A.
c;l>
Condition 1:
i~
Q i (i ... 1,2,3)
O~·
finite. where
K is some finite proper interval on the positive real axis.
exist, and further E(l.U·Kb.
~e
IQ
i
is
a~
We note by successive differentiation that the likelihood functions
for both d > T and
d
~
T have finite derivatives well beyond the third.
For K, choose (1,2), say, for which E(~/a)
is always finite and is
continuous; hence, the expected l.u.b. also exists finitely.
Therefore,
Condition 1 is fulfilled.
Condition 2:
As
N increases without limit (for fixed
goes to zero.
(i)
OE(~) "" l+B +
Oa
a B' , where
B'
@
oBloa.
Rand
P),
A
34We wish to ascertain first the limit of the bias in Condition 2 (i)
which from (7.3.15) is
limlBl. lim L
1
N--> 00
N->oo P [H - Q h(R)] - NLEH (r)·
Three factors in (7.5.2) require study, namely, H, h(R)
(7.5.2)
1
and ~ (;).
Note that
We must find the limit of
N
lim N(N-l) ••• (N-r+l)Q
N--> 00
1. 2 ••• r
by r
(~) QN.
< lim
- N->oo
Nr . lim
1
..N-->oo Q-Nr.I
N->oo Q-N..-.r
--L
Nr~N ... lim
r•
applications of l / Hepital's rule and the limit is
o < Q < 1.
Hence
h(r)
may be referred to as
large, where the expression 0(1)
0(1)
0
when N is very
indicates any function tending to
zero with increase in N.
R-l
Then 1 - H ... ~ h(r)
r-o
becomes with increasing N
R-l
1 - H ... >
0(1)· (R-l) 0(1)
r==O
or
H ... 1 - 0(1) •
Finally,
NK-(!r) ... N >N h(r)! < N >N h(r)! _ NE(l)
rr
r
r-R
r-l
--Ii
since
35
where E(l)
means the expectation of 1
for the nonzero binomial in
r
r
the Grab and Savage [1954] sense. In that paper they show that
1
NP
+
1
1
0(1) ~ E(;) ~ NP +
where the expression
~
_2
N-P
2
1
+ O(~)
N
means a function whose value at most is of
order N-i • Then
Now applying (7.5.8) and (7.5.5) to (7.3.15) we have, for
N suffi-
cient1y large
IBlsl~ [(1-0(1»
== -
- Q 0(1)] -
~-
31 - L
NP2
O(~)I
N
~P (l+Q) 0(1) - 312 - L 0(1-)
)
NP
N2
all these terms go to zero with increasing N.
Hence we may say B-->O and
a
Now turning our attention to aB'
is consistent.
of Condition 2 (i),
l'
_T
LP ' + aB' .. a[=-] (H - Qh) + 2 (H' - Qh' - hQ')
p2
== -
P
1 2 ()
1
[~ + p]
P
P
(H - Qh) +
L
~(r)
P [p
.. LNH -
m.~h
T
+
QLNh + QLhJ
36
where
h'" heR).
Using the same methods as above, we see that
EH(r) ... E(r) + 0(1) ... NP + 0(1).
I
1 + B + cxB
II ... 1
-
2
1. H
~
Then
2
2' ~(r)
1
(Q + 2NP) +
P
2 2
+ N 1
p
~(l/r)
+ 1
2
§h (l..;R+NP)
P
2
S
1 +
~
[-(1 - o(l»(Q + 2 NP) + NP + 0(1) + Q 0(1) (l-R+NP)]
For N large enough to ignore
11
+ B + aB'
k
1 +
0(1)
$
we have
[-Q-NP) +
$
[NP + 3 +
O(~»)
2
1
- 1 - - 2 [Q - 3 + o( -N)] •
p
1
Thus the limiting value of
(ii)
11
+ B + cxB'1
(7.5.12)
is at most
In evaluating the limit of the denominator of A, (7.2.2) and
(7.2 7) are differentiated twice, l and then the expectation is determined.
0
The two second derivatives are, respectively,
The first part of (7.5.14) occurs with probability 1 - H, and each of
the second parts with probability
her), where
N
>
r-R
The expectations for the
a's
her) - H.
can be found from (7.3.14):
37
where EO
is the incomplete expectation for case
- a~[
()2.r.
da
which for large
2
]
= R(l - H)
Hence
2R9Lh(R)
+ (1 + ~) E (r)
2NLH ,
P
P
H
-
N,
= (1
(iii)
-
(Ro ).
+
~) NP - 2 NL
+
0(1)
Since (7.5.1) can now be rewritten
= NP
~~f
+
0(1).
N is large) as
2
1 -
L
"2
(Q - 3)
A - a 2 _--.;;;P
+ O(l/N)
,
NP + 0(1)
Lim
. . . A = O.
N- -......
00
Therefore, Condition 2 is fulfilled.
Condition 3:
For any a
in
'PI
~a2
0~2~~u
K, the standard deviation of
divided by its expectation converges to zero with increasing N.
(i)
In order to compute the standard deviation, we first require
the limiting value of the sum of
less the square of the limiting expectation,
~ p2.
2
EO[(R - 2Ra o )2] • R [(1 - H) - 4EO(ao) + 4Eo(a~)]. We have
shown above that the limiting value of EO(a o) is zero. Using (7.3.26)
(ii)
and our previous limit operations, the same holds for
2
EO(a o ).
38
Hence,
Lim
2
N- -> 00 EO[(R - 2Ra 0 ) ]
= o.
(7.5.18)
22222
EH[(r - 2rar ) ] = EH(r ) - ~(r a r ) + 4EH(r a r }.
second part of (7.3.14) and of (7.3.23),
(iii)
From the
Hence
EH[(r - 2rar }2]
= [1
+
+
4[1 -
~L]2 EH(r2)
2
~
2NL
2 - NL - p-] EH(r}
p
which for large N,
2_?
- 0(1» [1
=:
(NPQ + N r
+
2 2
4N L [1 - 0(1)]
=
N(PQ
+
4LQ
212
p]
- 4N2 LP(l
2
+
4 NP[l - ~]
p
~ p2
+
0(1).
4L Q + 4P)
+
0(1)0
4P}
+
+
+
Hence the variance is
N(PQ
+
Therefore the ratio in Condition 3 has the limiting value
Lim
N--> 00
jPQ
+
4LQ
Np2
+
4P • 0
'
+
2L
p)
<,,[,
39
as required, and Condition-3 therefore holds.
Condition 4:
For some small positive
than distance
6, and for all
at
closer to a
6, the expression
is a bounded function of N.
The third derivatives of (7.2.2) and (7.2.7) are, respectively,
and
2r( - 1 + 3ar )/ a' 3•
[4R(1 - H) - 6RQLh(R)/P],
and
[(4
Then by the continuity of
exists a positive
+
6L
p)
EH(r) - 6NLH] •
I a31n~
3 f,
oa
& such that if
(7.5.26)
for every small positive
la' - al <
e, there
&, then the l.u.b. of the
numerator of (7.5.23) has for its expectation a value that differs by less
than e
from the sum of (7.5.25) and (7.5.26).
of (7.5.23) is)by proper choice of
6, closer than any distance
the sum of the two expressions (7.5.25) and
for all
Therefore the numerator
(7.5.26)~
e to
which is finite
N, but becomes infinite with increAsing N. ,Recalling the limits
of H» h(R)
and EH(r),the numerator (if N large) is
4NP/a3 + 0(1).
have already shown in (7.5.16) that the denominator under this
is (7.5.17).
Hence the fraction, finite for all
We
condition
N, approaches a finite
40
value and Condition 4 is thus fulfilled.
Our estimator therefore is consistent and asymptotically efficient
in the wide sense.
In recapitulation, the lower bound on the mean square error for A
a is
2
= ~[l
where 1 + B(a) + aB/(a)
2
I
B(a) + aB (a)J:2
'
_ a 2 E[ a ~J
oa 2
+
is given by the first equality of (7.5.11) and
the denominator by the first equality of (7.5.16).
of Grab and Savage [1954J namely,
A
A(a)
E(~) ~
2
= ~P
Using the approximation
iF, we have
2
+
O(l/N ).
For the special cases, we have
(i)
R
= N. A(~) =-
(ii)
R
= O.
2
a /N.
The denominator of A(~)
infinite, since it involves E(l/r)
for
r
is NP, but the numerator is
=0
as well as for
The asymptotic result (7.5.11) does not hold since E(l/r)
r
= 1,
••• , N.
thus defined
is infiniteo
(iii)
T· O(R> 0).
Using the first equalities of (7.5.11) and
2
(7.5.16), with L = P = H = EH(r) = EH(1/r) = 0 and Q = 1, A(a) = a /R;
2 2
in this case, L /p = 1. Note that the asymptotic result (7.5.11) does
not hold, since
(iv)
EH(r)
= N,
T
=
H is zero and not unity when T = 0.
00.
EH(l/r)
Q = 0, P
= 1, heN) = 1, her < N) = 0, H = 1,
= 1/N, L
= 00
but
QL
= QL2 = O. A(~)'"
2
a /N.
41
8.0 COST AND PR]CE OF OBTAINING THE MAXIMUM LIKELIHOOD ESTIMATE OF a
In Section 7.0 the mean square error, D(a), of the estimator was derived.
of
We now define the information obtained, I(a), as the reciprocal
D(a).
The experimenter desires to increase
I
as much as possible.
However, the cost of obtaining the estimator~(a) must be considered.
If
N units are used in this testing program one of the costs of obtaining
information is the cost of
N units.
During the test,
r
items will
be destroyed and all others will be at least partially damaged.
In
either case they are unmarketable; hence, the number of failures. (r)
appears to be unimportant in determining the cost of estimation.
However, the duration of the test certainly is a factor in the cost,
for there will be an expense involved in labor or testing equipment
more or less linear with time.
Since the test duration, (d) is unknown
when the test is planned, it will be replaced in our considerations by
its average value, E(d).
In light of the above, the cost of obtaining the estimate is defined
as
where
J
is a factor selected by the experimenter to represent the
ratio of cost per time unit of testing to cost per item subject to test.
Having obtained
C, the next step is to determine the price, U(a),
defined as the cost per unit information.
U
Thus
= CII = CD •
If information is available on the value of the shape parameter
(M)
and the ratio
(J), the experimenter might wish to select a
42
combination of N, R, and P (or 5), to minimize
U, either uncondition-
ally or perhaps subject to some limitation such as
exceed a given value, or I
C or
N not to
to be at least some minimal value.
In this study, the following values of M and
J
were chosen:
In (8.0.1)
E(d)/al/ II • [HT +
~aJ t R g*(~)dtRl/al/1I
T
Q
S
y{q)dq,
o
where
Y(q)
= pR-l
N-R ( . 1)1/M
q
-~
•
(8.0.6)
8.1 Determination of E{d)
Now if 1/M is an integer - which holds only for the first two
values of M studied in this dissertation - we can integrate directly
as follows, using the tables in Dwight [1957] based on:
where we have expanded
{l_q)R-l by the binomial theorem.
Then
43
Q
S
M
= ~,
Q
S
o
If
Md
N-l-r (IJ
)1/q
-..,(..n q
.
q
S
o
(R-l) (_I)R-r-l
Y(q)dq • :;r
r=O
o
If
Q
R-l
R-l R 1
1
N-r I 2
N-r IJ
N-r Q
y(q)dq • :;- ( - ) (_I)R-r- [9
~ _ 2g
~ + 29
].
r-O
r
(N-r) 2
N-r
M - 1,
Q
S
o
y(q)dq
=
R-l
>
(R-l)
r:o
(_I)R-r-l [_ 9
r
N-r
0
N-r Q
.-(.. + 9
].
N-r
(N_r)2 0
Now (8.1.3) and (8.1.4) can be simplified as follows.
Note that
Q I-HK_l(R)
S
o
Thus
(N_r)3 0
Q
dQ.
Let
44
5
Q
pR-1 qN-R dQ
o
- 1 - H(R)
by (7.3.12).
Similarly, using (8.1.6) and (8.1.7) we have
(8.1.8)
1 - H (R) 2
Employing (8.1.7) again but replacing R - 1 by r
1 _ H (R)
2
I:
R-1
>
-
r-O
(N)
r
1
N
[1 _ H(r+1)]
I:
we have
R-1
(
)
>
1-H r+1
N-r
r-O
(r+1) (r+1)
•
Again using (8.1.9), (8.1.6) and (8.1.7),
&:1 1
->
-r=O -N-r
Applying (8.1.7), (8.1.9), and (8.1.10) to (8.1.3) and (8.1.4), we
1
M 2
have, if
I:
5
:
Q
N
R(R)
o
2
R-1 1-H(r+1)
y(q)dq - (l-H)L - 2L >
r=O
N-r
+ 2
&! .L- 1-H~~+1)
> >
(N-r N-j)
r"O j-O
(8.1.11)
;
45
and if
M == 1:
Q
So
y(q)dq
R-1
== _
()
(l-H)L + >
1-~:~+1.
r-O
Finally, inserting (8.1.11) and (8.1.12) into (8.0.5) we see that, if
M ==
1
'2:
and if
M == 1:
~
== _
a
L + ~-1 1-H(r+1) •
r=O
N-r
These values are not difficult to program on an electronic calculator.
But if ~ is not an integer, the integration can not be expressed
in closed form.
It is necessar,y therefore to use some form of quadrature
such as Simpson's Rule:
Q
S
o
where
q1 - q2 - q1 - q3 - q2
= ... -
Q - ~-1;
n must be even and
selected large enough to give the desired accuracy of three significant
digitso
This computation was performed on the Datatron at Purdue and is discussed in Section 9.0.
46
•
-7,..---
Having obtained E(d), it was easy to compute
C and U.
The
machine program was found satisfactory for all five values of M, including
the two cases
used.
M ...
'21 and M = 1, where direct integration could have been
However, programming time was saved by using the same program
throughout.
8.2
We note
combinations
Discussion of Results
Table 3 presents values of
M ... 0.5, 1.5, 2.5
and
C and U for the
J ... 1, 10VlO, and
1000.
The
columns are headed p .... 05, .1(.1).5,.7
given the corresponding value of
in each part
and under each value of P is
S'" T/a l / M• There are four subtables
for the four values
four subtables represent values of
N ... 5, 10, 20, 40.
The rows of the
R, namely:
In subtable
N ...
In subtable
N ... 10,
R ... 1, 2, 3, 4, 7, 9
In subtable
N ... 20,
R ... 1, 2, 3, 4, 10, 14, 19
In subtable
N • 40,
R ... 1, 2, 3, 4, 10, 20, 30, 39.
In each case
The case
R=
5, R = 1,- 2'rJ, 4
° is omitted because if
r ... 0, there is no MLE.
R ... N is also omitted, because the Simpson Rule program on the
Datatron implies the existence of y(q)
on the closed interval
[0,
QJ;
Q
if R'" N,
y(O)
exists when R
is not defined.
Hence even though
5
o
y(q)dq
= N, an entirely new program would be required for that
last case.
Table 5
records I for those values of
and U have been computed.
N, R, and
P for which
C
Thus cost, information and price may be ob-
served for all the designs studied.
It is of interest to note that when Q < e-l .... 632, an increase in
47
M means a decrease in expected duration of test and hence cost; the
Q > e-l •
reverse holds for
independent of
holds for
I
M.
Since
and
D;
V and
thus,
We note that if
Q. e-l , E(d)
On the other hand if
B are independent of
U depends on
M,
is
the same
M only because of
C.
is large, it is advantageous to put a large num-
J
ber of items on test and terminate the test after a short time; likewise
if
is small,
J
N should be smaller and these tests continued until
more items fail.
M =.5
For instance, if
and
J
= 1000
(that is, the hazard decreases with time)
(meaning items are cheap but a long test is expensive), we
note that the price
(U)
for the combination
is the smallest of those computed; U
M = 05
Likewise, if again
but
= 11.5
J
N
= 40,
P
= .1,
R· 4
and
56.9.
with a total cost of
=1
(implying items are costly
but once sacrificed we may continue the test a long time without great
expense), the combination (N • 40, R • 39), gives the best price
for all
P, with a cost
combination
U = 2.59,
(N
= 40,
C
P
= 51.4.
= .5,
R
J, the
For the case of intermediate
= 20)
gives the smallest price; namely,
c = 58.0.
Turning to the last three parts
for
M = 2 0 5, indicating an in-
creasing hazard, the prices are much higher.
= 40,
the combination
(N
smallest price,
U· 37060
J
(U· 1.32)
R
= 30,
In the table for
P ~ .7)
and any value of
Naturally the cost is high
J
= 1000,
gives the
= 1180).
(C
If
= 1, it is probably best to let all items in a sample of size 40 fail.
The minimum price (for N
= 40,
R
=
39, and any p)
Note that we have not computed the results for
R
is
U
=
1.07; C = 4106.
= No
If the experiment is to be limited by a maximum N or
possible to find optimal combinations within our budget.
C, it is still
For instance if
48
M = 1.5, J
= 103/ 2, and we
may spend no more than 40 times the cost of
a single item, the combination
(N
= 10,
P •
.3, R • 4) would be used,
,
giving U = 6.68, the lowest price for any C under 40.
case is 30.1.
C in this
49
9.0
COMPUT~rIONS
9.1 Computer Programs
All arithmetic work beyond the range of a desk calculator was perThe IBM 650 at North Carolina State
formed on electronic calculators.
College was employed for programs (1) through (6), while program (7) was
accomplished by the Burroughs Datatron 204 at Purdue.
As representative
values of the five parameters, we chose:
N
==
5, 10, 20, 40
P
==
.5, .1(.1).5, .7
R
==
1, ••• , N-l
J
= 1. 3.162, 10, 31.62, 100, 316.2, 1000 •
Following is a description of the programs:
(1)
The individual terms
rkh(r), k
== -
2(1)2,
of the binomial
expectation were computed for all 525 combinations of N, P, R.
Using the output of program (1), E(r), V(r),
(2)
and E ("\)
H
were computed where
r
is distributed as
JV'(r), EH(~)
h(r)
==
(~) yQN-r
r
(3)
Since the standardized time variable,
S
T
d
~M
== ~ == [- ~n(l-P)]
== (-
L) VM,
a
is uniquely determined from P for each M, a conversion table was computed.
These results are presented in Table 4
by means of which
and also as a graph in Figure 5
S may be determined from P for each M.
(In Figure
5,
50
the curve for
M = .1 is shown ana-that for
of interest to note that if
where
Q
M = 2 is omitted.) It is
S = 1, then Q = e-l independently of M,
= 1 - P.
(4) Using the results of (1) and (2) and equations (7.3.15), (7.3.28),
and (7.3.29) the bias, variance and standard deviation of a (= ~/a)
were
computed.
(5)
A conversion table was desirable giving us
and
For this purpose the following computations were read into the IBM
650 from a regular gamma function table:
If
M= 1/2, 1, 1 1/2, 2, 2 1/2,
then
(~ i)
= 2,
1, .90275, .88623, .88726
and
M. - (1M0)I 2
( 2 I)
= 20, 1, .37568, .21460, .14415.
The mean and variance of the life span of an item may thus be estimated by merely multiplying
~1/M
(9.106) or (9.1.7)0 The results for
in Figure 60
Both ~
and 6 =
~2/M
or
~
Jvttr
by the appropriate factor in
= E(t) are presented graphically
are tabulated in Table 1.
(6) The mean square error, D(a), was computed by (7.3.28) and A(a),
51
the lower bound on D( a), was computed by (7.5.27).
(7)
The calculation of
C proved to be the most difficult task
because of the integral involved in (8.0.5)
Q
S
o
where
y(q)
y(q)dq
is given by (8.0.6):
The general appearance of y(q)
is as follows:
i
\Q
y(q)
0
y(q)dq
1
q
For small values of
BVN, the peak usually occurs to the right of Q,
and there is no difficulty.
the left, narrows and rises.
peak approaches
------:>~
For moderate value of
As
R
N
approaches
R
N'
the peak moves to
1, the location of the
0 and its height increases without limit.
Thus, an
extremely large proportion of the area is concentrated in a narrow band
between two small values of
q.
Ordinary Simpson Rule quadrature as described in (8.1.15), therefore,
does not give uniform accuracy throughout the domain of y.
To maintain
good results to three significant figures one must vary the interval
qi+l - qi
inversely to the change in slope of the curve - that is,
52
The Datatron successfully accomplished this by integrating separately:
S
)
y(q)dq,
Q-.05
S
y(q)dq, ... ,
2 - that is,
n, the number of Simpson intervals, was first
q2 - ql - .025.
vals halved, and the two results,
Then n was doubled, the inter12 - II
12 and II' compared. If
II
was less than 10-4 , the second result was accepted.
or exceeded 10-4 ,
compared.
- I
j
I _
j l
j-l < 10-4
The last result was then accepted.
in the case where
where it is planned for
f
1
3
This process was continued until two successive integral
27
If this ratio equaled
n was doubled again and the last two results,
tions yielded a ratio
n was
y(q)dq.
o
Q-.l
In each integration,
chosen as
.05
Q-.05
Q
N
'
where
j
= 10g2n •
The greatest value attained by
= 40, R = 39. In subsequent computations
N to be 80, 160, and 320, the peak for large
will become so sharp that it will be necessary to integrate separately
over domains of length 00005, because of the danger of the width of the
peak being even less than .005 if R is close to
N.
Integrating over
005 intervals may thus miss the peak entirely.
Q
Having obtained the values of the separate integrals,
)
y(q)dq
o
was derived by addition; then
EffJ,
C,
and
U followed immediately,
a
while
I
evolved as the reciprocal of D obtained from program (6).
53
9.2
Demonstration of the Program
To demonstrate the performance of the programs it is instructive to
examine ~
and its properties for some small, easily computed, if imprac-
tical samples.
N
=2
R
=1
For instance, let
a .. 1
M=2
S .. T
= .32459475 (corresponding to
p .. 1 - e
_ 2 1a
T
...1)
(T 2 .. _ L ... 10536169; T4 .. L2 ...011101994; Q ... 9)
J .. 10.
That is, we have two randomly selected units from a population where
life span follows the density function
2
f(t) .. 2t e-t , t ~ 0,
and we burn them until at least time
R = 1 unit has failed.
2
with probability 1 - .9 ...19.
at least
If no failures occur before
T" .32459475 has elapsed, and until
The latter event will occur before T
In this case ~ .. a.
T we wait for one and then use
If but one failure occurs at or before T we use
a
!\
.. a
= t 2_2
+ y-;
III
and if both fail before T we use
54
The life span estimates are
1\
E(t)
='
Jd. 2'1 !
='
.88623
1&,
and
A?
E(t ,)
1\
='
a 1
='
Ia tor "V(t}
,
Actually, of course,
E(t)
= a(l - .88623 2 ) = .21460
1\
a.
= .88623 and V(t) = .21460, as
indicated in (9.1.3) and (9.1.4).
Now to compute the mean and variance of
J\
we go back to the original
a
definition of expectation and integrate:
E(ak ) = 2
_t 2
_t 2
(2t~)k e 1 d(e 1 )
(
0
1
+
2
S
_t 2
2
(t~ + T )k .9 d(e 1)
.9
1
1
S S
.9
.9
+
2
t2 + t2 k
_t2
-t 2
( 1 2 2) d(e 1) d(e
),
(9.2.8)
which, using again the "q,.l" notation, becomes
1
S
.9
1
+
5
Letting k = 2 we obtain:
(-,i'l - L)k .9 dql
55
1
.9
E(a 2) = 8
S
2
S
.9
~1 q1 dq1 + 2(.9)
0
1
+
1.
4
(t~
2.£1L
+ L2)
dq1
1
1
S 5 t~
[
dql
+
SL
2 .-£2
.9
.9
+
1
1
dql
+
L~
.9
which upon integrating by meansof the Dwight [1957] formulas,
S£ dq ... q( L
5
qi dq ..
t
- 1),
q2(2L - 1),
Si 2 dq .. q(.£ 2 - 2.f.
+
2),
and
gives:
E(~2) = 8 (.24966660)
+
1.8 (.00255906)
+
.25 (.00011264)
.. 2.0019740.
Now using in (7.3.27) the values:
h(O) .. Q2
.
,
h(l) .. h(R)
h(2)
= p2
81
2 PQ ... 18
01
H = H(l) + h(2) = .19
1 - H = .81
E (1.) '"
Hr
hill
+ hi&...
1
2
•
18 + 1. ( 01) ....185
2·
dql]dq2
S
.9
~(12 )
=<
h~l) + hi2)
56
.18-+ ~ (.01) • .1825
=<
r
~(r)
• Ih(l)
+
2h(2)
=
.18
+
2(.01) • .2
L = _T 2 = -.10536169
2
L = T4 • .011101099
..;:r. =
T = .32459475
we obtain
E(~2)
=
1 +1
+
+
.19 [-1-201072338
4 (.011101099) .1825
+
+
1.1101099]
+
9(.18)(-.10536169) 2.21072338
[1 + .42144676 - 130 (.011101099)] .185
• 2.0019759
in agreement with direct integration (9.2.12).
Similarly, letting k of (9.2.9) be 1:
E(~) = -4
.9
Sl
1q1 dq - 108
o
1
- .5
1
SJ. S
2
.9
.9
• 1.0094826 .
...
And substituting in (703.15) we find
B(~)
= 10(-.10536169)(019 - .162) -2(-.10536169) .185
• .009482552,
consistent with (9.2015).
dql
d~
57
Thus
V(~)
'" 2.0019759 - 1.0094826
D(~) ... 9829199
+
.0094826
2
... 9829199,
2
'" .9830098,
and
..;vrar ... 9914232.
These last three values, as would be expected from such a small sample)
are somewhat large.
The mean, variance, and standard deviation of the number of failures
can be computed from her):
E(r2) ... 81
+
(.18
+
.04) '" 1.03
E(r)
'" .81
+
(.18
+
.02) '" 1.01
VCr)
'" 1.03 - 1.0201 '" .0099
.JV(r) '" .0995.
The expected test duration can be computed from (8.0.5):
.9
E(d) '" .19(.32459475)
+
2
S
q
vi:):
dq.
o
Let us obtain a rough approximation of E(d)
for Simpson's rule.
If y( q) '" q ..; -.J"
yeo) '"
using only 4 intervals
then
0
4y (.225) '" 4(.225) \/1.49166 '" 1.0992
2y (.45)
.. 2(.45) .j.79851
'" .8042
4y (.675) '" 4(.675) J.39304
.. 1.6926
y (.9)
Total
::: .2921
.. 3.8871.
"'.9";.10537
58
Hence,
E(d)
= .0618
+
2(1/3) ~ (3.8871) = .6447.
The cost of the test is
C • 2 + .6447J
= 8.447.
The mean square error and the information are, respectively
D • .9830098
I • 1/.9830098 • 1.01728.
I
The Cramer-Rao lower bound of the mean square error may be obtained from
2
1 + ~ [-H(Q+2NP) + EH(r) + Qh(R)(l-R+NP)] + N~2 Ea(~)
A•
R(l-H) _
2R~h(R)
+ (1 + ~) Ea(r) _ 2NLH
• 1 + 1.1101099 [-.247 + .2 + .0;243 + .00821481 •• 29 2007 21 • 98218536
.81 + .34137188 - .22144676 + .08007488
1.0100000·
not much lower than D ••9830098
as in (9.2.25).
U = CD = 8.477 x .9830098 = 8.303.
The price
59
10.0 SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS FOR FURTHER RESEARCH
This dissertation presents the results of an investigation to determine the optimum sample size (N) minimum required failures
minimum test duration
(T)
(R)
and
in order to estimate most effectively the
scale parameter, a, in the Weibull distribution:
In this paper we use
dependent upon
be guessed until
to be
S, a standardized time variate,
a, which can be set at will, but from which T may only
a
is known.
0; the shape parameter
The location parameter
(M)
(~)
is considered
is assumed known to the experimenter.
In determining the optimum combination of these test constants, the maximum
likelihood estimate of
a
has been used.
The properties of the Weibull distribution and the shape of the
acco~
panying density function for various values of M were presented verbally
in Section 5.0 and graphically in Figure 1, and numerically in Table 6.
The mean and standard deviation of the life span, t, of an item were
shown in Section 5.0 to be proportional to
a1/ M:
..
Figure 6 displays
both
IJ.
u as a function of
and a as functions of
a.
a
and Table 1 presents
The "hazard" function
60
was defined in Section 5.0 as the instantaneous tendency to failure; it
is proportional to the probability of failure in the next instant of
time given that the item has survived till time
t.
This probability
increases, remains constant, or decreases with time as
M exceeds,
equals, or is less than 1.
A new method of censoring by both Rand T was given in Section
7.1 - that is:
place
N items on test and stop the test only after
both R items have failed
~
likelihood estimator (~) of
[a
The maximum
was derived in Section 7.2.
R
M
M
t + (N-R)tR],
o "'!R [>
i=l i
1\
A
a
T time units has elapsed.
if
d>T
if
d = T,
a =
1\
a
r
.. !
r
r
[>
tM+
I=I i
M
(N-r)T ],
where:
t
i
is the life span of the i th successive item to fail,
r
is the actual number of failures, at least as great as
d
is the actual test duration, at least as great as
R, and
T.
The bias (B) and mean square error (D) of the standardized estimator
1\
(a .. !) were shown in Section 7.3 to be
Q:
B(a)
,
•
=
L
p [H - Q h(R)]
1
- NLEH (r)
and
D(a) '" E(a) 2 - 2 B(a) - 1
61
•1
R
+
222
RL -2 P H + Qh:2 (NP - R + 1) h(R)
RP
RP
where the expectation [E(a)]
are defined in Section 6.0.
is the bias plus unity, and the other symbols
The varianee
V is then
2
V(a) • D(a) - [B(a)] •
It was proved in Section 7.4 that the estimator satisfies the Wald
[1948] conditions for consistency and asymptotic efficiency; hence, the
lower bound
[A(a)]
on D(a)
is
A(a) •
R(1 - H) _ 2RgLh(R)
p
where
X.
(1
+
~)
P
is the log of the likelihood of the sample.
The behavior of
B, V, D, and A,
discussed in Section 7.4.
R as might be expected.
demonstrate, if
No
+
as functions of
R, P, and V are
They all decrease montonically with increasing
But as functions of
P or
N, they actually
R is small, a period of increase with increasing P or
Even if we consider B1N as an argument, still
V, D, and A
display
62
a hump, although
abIes
B does not.
Considered as functions of
V, D, and A all have the value
mum near the point
P-R/N,
1/R for
P, the vari-
P=O, decrease to a mini-
rise again to a maximum in the vicinity of
P '" 1 - R/2N, and then falloff to the value
1/N at
P-l.
A suggestion
as to why these oddities occur is given by some calculations in Section
7.4. It is shown there, that at least for the case P '" ~ and R=l,
the bias, variance, and mean square error all increase with increasing
N to a maximum at
N=5, and thereafter decrease.
for the early values of
a
for each
It is evident that
N, that is,
bias and variance are both large, as is to be expected.
the probability of employing these early a's
For small
N,
as estimators is high, and
does not in all cases decrease with increasing N at first.
Thus the
early a's with their large biases and variances occur often with small
N and therefore contribute heavily to the total
finally grows to the point that the early a's
later a's with decreasing
Figures 2,
B, D, and V.
are used rarely, then the
B, D, and V, predominate.
3, and 4 present these oddities graphically in the cases
of V, D, and A, while Table 1 shows the numerical values of
for various
When N
B, V, and D
N, R, and T.
A brief table, Table
S and the probability
49 is given showing the relationship between
(p)
of an item failing before
S for various
S '" (- L)1/M '" [-,i'n (I-P) ]1I M
Figure 5 graphs
S as a function of P for several
The information
[I(a)]
M.
derived from the estimator is defined in
Section 8.0 as the reciprocal of D, and is tabulated in Table 5.
The cost [C(a)]
of the estimator was defined in Section 8.0 as
M.
63
C .. N + JE(d)/al!M
where
J
(8.0.1)
is the ratio "cost per unit time of continuing the test" to
"cost per item placed on test".
The formula for obtaining E(d)
E(d) • HT +
was derived in Section 8.1.
~
t R g*(t R) dtR
T
where
g*(t R)
Rth
is the density function of the
The price
U(a)
failure time.
of the estimator is the cost per unit information,
U .. C/I .. CD.
Values of
C and
presented in Table
we would select
of 11.5
U for various combinations of M, N, P, R, and J
For instance if M".5
3.
N =
56.9.
and a cost of
J ..
1000,
If
M.. 1.5,
P .. .3 (or
J ..
103/2 ,
and
C is limited
5 ... 502) and R .. 4, giving a
6.68 and a cost of 30.1.
By means of the IBM
of
and
are
40, P .. .1 (or 5 ... 0111) and R .. 4, giving a price
to 40, we choose N = 10,
price of
(8.0.2)
B, V, A, D and
computed.
I
650 at N. C. State College, the numerical values
used in the above cited tables and graphs were
With the aid of the Burroughs Datatron 204 at Purdue, the
values of E(d), C, and
U were obtained.
With the aid of the tables, an experimenter who has knowledge of
the
M and
J
which apply to his product, may design a life test ex-
per1ment selecting N, R, and 5 which will be optimum for him - either
giving a minimum price, a maximum information, or cost not exceeding some
value, or a best test for a given size sample.
In determining T from
5, he must have at least an idea of the order of magnitUde of a
since
T is a function of S and
a.
Future topics of research, some of which the author already has
planned or in progress include:
1.
A deeper investigation of the peculiar nonmontonic behavior of
the variance, mean square error and asymptotic mean square error, of the
estimator as a function of sample size and testing time, with a view
toward obtaining a rigorous explanation of their strange performance.
2.
Expansion of the results of this thesis to include samples of
size 80, 160, 320, and 640.
3.
Simulation of a real life test situation by means of several
hundred random samples of each of the various sizes, drawn from several
thousand Weibull distributed random numbers for each value of M.
such samples for each
(N, M)
Ten
combination have been drawn so far and
many more are planned.
4. A study of an estimator in the event the test stops when
either R failures occur
5.
~
time
T elapses instead of
~
••• and.
A study similar to this but wherein ~ rather than R is con-
sidered one of the constants at the disposal of the experimenter.
6.
Rand
A study of the large bias which is characteristic of ~
for small
P and an investigation of a method of reducing this bias.
7. Some entirely different procedure not based on maximum likelihood, and free from some of the
timator.
~steries
and disadvantages of this es-
65
Table 1.
Mean (IJ.) and standard deviation «f') of life span (t)
for various 0: and M
l;.~;:>;:'~'
1
2
M
.01
.141
.01
.0284
.0463
.0602
.02
.0665
.125
.186
.02
.0452
.0655
.0794
32 • 10-4
71.6 • 10-4
.04
.106
.177
.245
.04
.0717
.0927
.105
2 • 10-2
2
4.47 • 10-
.1
.194
.280
.353
01
0132
.146
.151
.2
.309
.396
.466
a
2
8 • 1017.9 0 10-2
02
.210
.207
0199
IJ.
32 • 10-
.4
.490
.561
.615
a
2
7106 • 10-
.4
.333
.293
0263
1
0903
.886
.887
1
.613
.463
.380
IJ.
fJ.
IJ.
a
.2
.4
1
2
4
21
2
.0886
a
.1
2
1>.0419
a
.04
4
2 • 104.47 • 10-4
112
.01
IJ.
a
.02
l
IJ.
4
8 • 1017.9 • 10-4
2
IJ.
2
a
4.47
I.l
8
2
a
17.9
2
IJ.
32
4
2.27
a
71.6
4
1054
,"::\
1 \
IJ.=M
I.l
o
1.43
0973
M/-
. /2 I
1 I 2
vo:; a=VMo-(M.)
and a expressed in time units
[See Section 9 0 1, Program (5)J
1025
.655
1077
.926
1017
.501
1054
0661
66
~
Table 2.
p
.0
.05
Bias (B), variance (V), and mean square error (D) of estimator (a) for selected N, R or ~, and P
.1
.3
.2
.4
.5
.7
P
1
.0
.05
.1
.3
.4
.5
.7
.008
.322
.322
.084
.269
.276
.153
.280
.303
.082
.138
.145
.022
.308
.12"6'
.275
.290
.118
.231
.245
.054
.086
.089
.042
.295
.296
.137
0290
.309
.083
.163
.170
.038
.062
0064
.084
.297
.304
.090
0232
0240
.041
.090
.092
.019
.040
0040
0
.200
0200
0
.100
.100
0
.050
.050
0
.025
0025
.002
.,247
.247
.062
.207
.211
.,109
.198
.. 210
.054
.086
,,089
.005
.243
.243
,,090
.210
,,218
0081
0157
.164
.038
.062
.064
.022
.233
.233
.084
,,207
.214
.041
.090
.092
.019
.040
0040
0
.200
.200
0
.100
.100
0
0050
.050
0
,,025
.025
1
R=3
R=l
N
N
B
5 V
D
B
10 V
D
B
20 V
D
B
40 V
D
0
1
1
0
1
1
0
1
1
0
1
1
.021
.963
.964
.078
.886
.892
.218
.776
.823
.401
.859
1.019
.073
.208 .320 .375
.366 .232 0
.904.200
.893
.791 .825
.958 1. 077
.898 .835 u.927 _~&98 1~?lg.928 .200
.215
.393 .390 .301
.208 .093 0
.959
~674
.260 .100
.780 .895 10066
.717
.269 .100
.826 1.049 1,,21a __l~949
.399 .349 .206 .125
.083
.041 0
0090 .050
.870 1.012 .595
.292
.169
0176 .092 .050
1.029 1.134 .617 .3~{
.3b9 .152 .082 .054 .038 0019 0
1.026 .361 .141 .086 .062 .040 .025
1.162 .384 0148 .089
.064 .040.025
B
5 V
D
B
10 V
D
B
20 V
D
B
40 V
D
00
.333
.,333
0
.333
.333
o
0
.333 .333
.333 .333
o 0005
.333 .326
.333 0326
o .040
.333 .289
.333 .291
0
.333
.333
.004
.327
.327
0038
.291
.293
.139
.259
.278
R=:2
e
.2
B
5 V
D
B
10 V
D
B
20 V
D
B
40 V
D
0
.500
.500
0
.500
.500
0
.500
.500
0
.500
.500
.004
.025
.465
.493
.466
.493
.031 .123
.457
.395
.410
"lt58
.128 .227
0389
.435
,,405 . .486
.234 ,,146
.427
.294
.4~~~2_ 0316
.001
.499
.499
0005
.491
0491
.034
0454
0455
.130
0386
.002
0330
_,,310
.033
.296
.221
,,135
.263
.281
.137
0244
.263
.,302
R=4
.063
.430
.43~
.19
,,412
.450
.183
,,394
.427
.. 082
.140
0147
.110
0410
.422
.209
.450
.494
.122
.260
.275
.054
.086
.089
.150
.420
.442
.178
,,427
.459
.083
.166
.173
.038
.062
0064
0163
.475
•.2.02
.092
.249
.257
0041
.090
.092
.019
.040
.040
0
.. 200
,,200
0
.100
.100
0
.050
.020
0
.025
.022_
B expressed in nondimensiona1 time units} V and D :: B2
[See Section
9.1~
B
5 V
D
B
10 V
0
0
.250 .250
.250 .250
0
0
0250 .. 250
~250
B
20 V
D
B
40 V
D
+
0
.. 001
.. 250 .. 249
.. 250 ,,21±2
0
.011
.250 .236
_.250 .236
0
,,250
.250
0
.. 249
.249
.010
,,238
.238
.074
,,197
.203
0
.001
,,249
.250
•.250 __ .249
ooryI .029
•
,,241
.221
.222
d41
.070
~li7
.200
.205
.205
.219
,,081
.123
.200
.135
.215
.141
V expressed in (nondimensional time units)2
Programs (4) and (6)]
--
67
Table 2 (continued)
p
.0
.05
.1
.2
.3
.4
.5
.7
p
1
.0
.05
.1
.3
.4
.5
.7
0
.200
.200
0
.100
,100
0
.200
.200
0
.100
.1CO
0
.200
.200
0
.100
.100
0
.050
.• 050
0
.025
.025
0
.200
.200
0
.1CO
.10C
0
.050
.05c
0
.025
.025
1
R=N
R=5
N
N
B
5 ,..,V
J.J
3
10 V
D
B
20 V
D
B
40 V
D
-
.2
0
.200
.200
0
.200
.200
0
.200
.200
0
.200
.200
0
.200
.200
0
.200
.200
0
.200
.200
.003
.196
.196
0
.200
.200
0
.200
.200
.002
.197
.197
.035
.170
.171
0
.200
.200
• COl
.198
.198
.031
.172
.173
.104
.162
.173
0
.200
.200
.008
.191
.191
.079
.161
.167
.080
.129
.136
0
0
0
.200
.200
.200
.200
.200
.200
.024
.070
.047
.170
.176
.179
.181
.172
.179
.041
.094
.079
.166
.148
.090
.092
.154
.175
.054
.038
.019
.086
.062
.040
• Oi3~_~6l;.___.040
0
.200
.200
0
.100
.100
0
.050
.050
0
.025
.025
B
5 V
D
B
10 V
D
B
20 V
D
B
40 v
D
0
.200
.200
0
.100
.100
0
.050
.050
0
.025
.025
0
.200
.200
0
.100
.1QQ.,
0
.050
.050
0
.025
.025
0
.200
.200
0
.100
.100
0
.050
.050
0
.025
.025
0
.200
.200
0
.100
..1.00
0
.050
.050
0
.025
.025
0
a
.050
.050
.05-0
.e50
0
.025
.025
.025
.025
0
0
.200
.200
0
.100
.100
0
.050
.050
C
.025
.025
e
68
Table 2 (continued)
p
.0
.05
.1
.2
.3
.4
.5
.7
p
1
.0
.05
.1
.2
B
5 V
D
B
10 V
D
B
20 V
D
B
40 V
D
0
1
1
0
.500
.500
0
.250
.250
0
.125
.125
.021 .073
.963 .893
.964 .898
.005 .031
.491 .457
.491 .458
.001 .010
.249 .238
.249 .238
0
.002
.125 .123
.125 .123
.208
.791
.835
.123
.395
.410
.070
.200
.205
.039
.102
.104
.320
.825
.927
.196
.412
.450
.117
.205
.219
.067
.101
.106
.366 .232
.375
· .958 1.al? .904
1.098 1.212 .958
.209
.178 .092
.450
.427 .249
.459 .257
.494
.081 .041
.109
.198
.157 .090
.210
.164 .092
.038 .019
.053
.062 .040
.083
· .086
.063 .040
0
.200
.200
0
.100
.100
0
.050
.050
0
.025
.025
B
0
5 V .333
D .333
B
0
10 V .167
D .167
B
0
20 V .083
D .083
B
0
40 V .042
D .042
0
.333
.333
0
.167
.167
0
.083
.083
0
.042
.042
0.333
.333
B
5 V
D
B
v
D
B
v
D
B
40 V
D
0
.500
.500
0
.250
.250
0
.125
.125
0
.062
.062
.C01 .004
.499 .493
.499 .493
0
0
.250 .249
.250 .249
0
0
.125 .125
.125 .125
0
0
.062 .062
.062 .062
.025
.465
.466
.0Cf7
.241
.241
.001
.124
.124
0
.062
.062
.063
.430
.434
.029
.221
.222
.011
.115
.115
.003
.060
.060
.002
.330
.330
0
0
.167
.167
0
.083
.083
0
.042
.042
.166
.166
0
.083
.083
0
.042
.042
~ = .4
20
.5
.7
1
N
N
10
.4
R
N
= .6
R
N
= .2
e
.3
.008
.322
.322
.001
.165
.165
0
.083
.083
0
.042
.042
.022
.308
.309
.007
.159
.159
.001
.082
.082
0
.042
.042
.042 .084
.295 .297
.296 .304
.019 .050
.151 .147
.151 .150
.007 .029
.078 .074
.078 .075
.002 .017
.040 .037
.040 .037
0
.2CO
.200
0
.1CD
.100
0
.050
.050
0
.025
.025
.002
.247
.247
0
.125
.125
0
.062
.062
0
.031
.031
.005 .022
.243 .233
.243 .233
•COl .009
.124 .118
-.124 .118
0
.003
.062 .060
.062 .060
0
.001
.031 .031
.031 .031
0
.200
.2CO
0
.100
.1CO
0
.050
.050
0
.025
.025
R
N
= .8
.110
.410
· .422
.062
· .207
.211
.034
.106
· .1Cf7
.019
.054
· .055
.150 .163
.420 .475
.442 .502
.090 .084
.210 .207
.218 .214
.053 .040
.106 .089
.108 .091
.031 .019
.053 .040
.054 .040
a
.200
.200
0
.100
.100
0
.050
.050
0
.025
.025
B
0
5 V .250
D .250
B
0
10 V .125
D .125
B
0
20 v .062
D .062
B
0
40 V .031
D .031
0
.250
.250
0
.125
.125
0
.062
.062
0
.031
.031
0
.250
.250
0
.125
.125
0
.062
.062
0
.031
.031
0
.250
.250
0
.125
.125
0
.062
.062
0
.031
.031
.001
.249
.249
0
.125
.125
0
.062
.062
0
.031
.031
-
69
Table 3.
Cost (C) and price (U) of estimator (a) for selected M, J, N, R, and P
M= .5
P
~
'-'
.05
.00263
.1
.0111
.2
.0498
.3
.127
.4
.261
.7
1.45
.5
.480
J =1
P
S
.05
.00263
.1
.0111
1 C
u
2 C
u,...
3 u
u
4 C
u
5.08
4.90
5.31
2.65
5.83
1.94
7.11
1.78
5.08
4.56
5.31
2.62
5.83
1.94
7.11
1.78
5.11
4.26
5.31
2.48
5.83
1.92
7.11
1.78
5.16
4.79
5.34
2.32
5.84
1.88
7.11
1.77
5.28
5.80
5.42
2.29
5.87
1.81
7.12
1.76
6.45
6.18
6.47
5.49
6.65
5.58
2.47
5.96
1.77
7.15
1.74
~.25
.65
2.02
7.51
1.75
N=10
c 10.0
U
e
.5
.7
.480 1.45
20.1
12.8
20.1
8.60
20.1
6.10
20.1
4.41
20.5
2.02
21.4
1.53
27.3
1.44
20.3
6.23
20.3
5.58
20.3
4.96
20.3
4.26
20.5
1.90
21.4
1.53
27.3
1.44
20.5
3.60
20.5
3.55
20.5
3.48
20.5
3.35
20.6
1.80
21.4
1.51
27.3
40.3
3.58
40.2
3.58
40.3
3.58
40.3
3.58
40.3
3.24
40.5
1.95
41.9
1.40
51.4
1.32
40.5
2.57
40.5
2.57
40.5
2.57
40.5
2.57
40.5
2.56
40.6
1.81
41.9
R
R
2
.4
.261
.0498
N=20
N=5
1
.3
.127
.2
c
u
3 C
u
4 c
u
7 c
u
9 c
u
8.94
10.1
4.94
10.2
3.38
10.3
2.57
11.4
1.63
14.3
1.59
10.0
8.29
10.1
4.62
10.2
3.32
10.3
2.57
11.4
1. 63
14.3
1.59
10.1
10.5
10.1
4.14
10.2
3.02
10.3
2.48
1l.4
1.63
14.3
1.59
10.1
12.3
10.1
4.57
10.2
2.82
10.3
2.29
11.4
1.62
14.3
1.59
10.3
10.8
10.3
5.07
10.3
2.99
10.4
2.19
11.4
1.61
14.3
1.59
10.5
7.52
10.5
4.81
10.5
3.24
10.5
2.29
1l.4
1.57
14.3
1.58
11.4
3.08
1l.4
2.95
1l.5
2.75
11.5
2.45
li.8
1.53
14.3
1.57
1
2
3
4
T = [- 1. n (l-P)] 1/M expressed in nondimensionaltime
s = ---ry-;
units [see Section 8.0J
c
U
C
U
10 C
u
14 C
u
19 C
U
20.0
16.5
20.0
9.11
20.0
6.53
20.1
4.99
20.5
2.05
21.4
1.53
27.3
1.44
:20.0
20.6
20.0
8.12
20.0
5.36
20.1
4.77
20.5
2.05
21.4
1.53
27.3
1.44
20.1
22,.7
ZQ.1
9.75
20.1
5.64
20.1
4.ll
20.5
2.05
21.4
1.53
27.3
10M
21.4
1.98
21.4
1.98
21.4
1.98
21.4
1.98
21.5
1.83
21.7
1 ........I,?27.3
1.4L 1.41
N=40
1
C
2
U
C
u
3
c
U
4
C and U expressed in units "cost per item placed on test"
C
U
C
U
C
U
10 C
U
20
C
U
30 C
U
39 C
u
40.0
40.8
40.0
16.1
40.0
11.6
40.0
9.46
40.1
4.01
40.5
2.02·
41.9
1.40
51.4
1.32
40.0
46.5
40.0
19.3
40.0
11.1
40.0
8.11
40.1
4.00
40.5
2.02
41.9
1.40
51.4
1.32
40.0
15.4
40.0
12.6
40.0
10.5
40.1
8.61
40.1
3.61
40.5
2.02
41.9
1.40
51.4
1.32
40.1
5.94
40.1
5.88
40.1
5.80
40.1
5.66
40.1
3.39
40.5
2.02
41.9
1.40
51.4
1.32
41.4
1.67
41.4
1.67
1.;.1.4
1.67
41.4
1.67
41.4
1.67
1...1.4
1.65
42.0
1.40 1.34
. 51.4 51.4
1.32 1.32
e
70
Table 3 (continued)
M = .5 J
p
s
.1
.05
.00263 .0111
.2
.0498
.3
.127
.4
.261
,.7
.5
.480 1.45
=
10v'10
p
S
.2
.0498
.3
.127
.4
.261
.7
.5
.480 1.45
R
1 C
U
2 C
U
3 c
U
4 c
U
8.33
6.95
14.9
6.93
31.2
10.3
71.7
17.9u_.17.2.~_17. 9
7.54
7.27
14.6
7.31
31.2
10.4
71.7
7.63
6.85
14.7
7.23
31.2
10.4
71.7
10.2
9.46
15.8
6.86
31.5
10.1
71.8
17.9
14.0
15.3
18.3
7.71
32.5
10.0
72.0
17.8
20.5
24.9
23.4
10.4
35.4
10.5
73.0
17.8
50.9
48.7
51.6
25.9
57.1
17.3
84.5
19.7
1 c
u
2 C
U
3
1 C
U
C
u
3 c
U
4 c
U
7 c
U
9 c
U
10.8
11.8
25.2
10.7
14.1
18.3
18.1
19.2
17.2
9.50
8.93 12.4
12.1
12.2
12.8
14.6
18.5
25.3
6.58
9.12 11.6
5.58
5.23
5.95
14.8
16.2
14.8
15.0
25.6
19.4
4.92
4.84
4.48
5.63
7.91
4.47
19.1
19.1
19.2
19.8
21.9
27.0
4.62
4.61
5.88
4.76
4.77
4.39
54.8
53.9
53.9
54.1
53.9
53.9
7.70
7.70
7.70
7.69
7.64
7.54
145
145
145
145
145
145
16.1
16.1
16.1
16.1
16.1
16.1
C
u
4 C
U
10 C
N=10
e
.1
.Olil
N=20
N=5
R
2
.05
.00263
U
14C
55.8
15.0
55.8
14.4
55.9
13.4
56.0
12.0
68.2
8.81
147
16.1
U
19 C
u
20.2
21.6
20.4
24.0
28.3
35.2
16.6
21.0
6.18
24.5
15.3
8.69
20.6
21.6
20.5
24.0
28.3
35.2
6.10
10.3
8.37 10.5
7.78
9.34
21.1
21.1
21.9
24.1
28.3
35.2
6.18
6.86
6.15
6.91
5.98
7.30
22.4
21.9
21.9
24.3
28.3
35.2
5.20
4.58
5.31
5.76
5.44
5.94
36.2
39.2
35.6
35.6
35.6
35.7
3.56
3.51
3.36
3.56
3.56
3.42
65.0
65.0
65.0
65.0
65.2
65.0
4.64
4.64
4.64
4.64
4.63
4.59
252
252
252
252
252
252
13.3
13.3
13.3
13.3
13.3
13.3
65.8
6.W
65.8
6.07
65.8
6.07
65.8
6.07
66.0
5.63
75.3
4.91
252
13.3
N=40
1 C
U
2
C
U
3 c
U
4 C
U
10 C
U
20 C
U
30 C
U
39 C
U
40.1
41.6
40.4
44.0
48.3
55.2
16.0
40.9
46.9
6.52
4.29
3.51
41.6
40.2
40.4
55.2
44.0
48.3
16.2
13.1
6.46
19.4
4.24
3.51
41.6
40.3
44.0
40.4
48.3
55.2
li.7
11.2
10.9
3.51
6.37
4.29
41.6
48.3
40.4
40.5
44.0
55.2
8.21
6.21
4.28
9.56
3.51
8.94
42.8
42.8
42.9
48.3
55.2
44.4
4.28
3.89
4.27
3.87
3.74
3.49
58.0
55.6
55.4
55.4
55.4
55.4
2.68
2.77
2.77
2.77
2.77
2.59
99.8
99.8
99.8
99.8
99.8 . 99.8
3.33
3.33
3.33
3.33
3.33
3.33
400
400
400
400
400
400
10.2
10.2
10.2
10.2
10.2
10.2
85.8
3.45
85.8
3.45
85.8
3.45
85.8
3.45
85.8
3.45
85.9
3.43
103
3.28
400
10.2
e
71
Table 3 (continued)
M=.5 J = 1000
S
P
.05
.00263
.1
.0111
.2
.0498
.3
.127
.4·
.261
.5
.480
.7
1.45
S
P
.05
.• 00263
.1
.Olll
.2
.0498
N=5
.5
.480
.7
1.45
281
86.4
281
77.4
281
68.8
282
59.2
534
49.5
1440
103
7360
388
500
87.9
5C0
86.7
500
85.0
501
81.9
627
54.8
1450
102
7360
388
1470
135
1470
135
1470
135
1470
135
1480
126
1770
115
7370
388
R
1 C
U
2 C
U
3 C
U
4 C
U
85.4
82.3
310
155
832
277
2120
529
88.2
79.2
310
153
832
277
2120
529
110
92.0
317
148
834
275
2120
529
170
157
347
150
842
271
2120
528
288
316
424
179
875
270
2120
525
497
602
588
260
966
286
2150
524
.1460
1390
1480
742
1650
501
2520
587
1
2
3
4
10
N=10
1 C
U
2 C
U
3 C",
a
.4
.261
N=20
R
•
.3
.127
tr
4 C
U
7 C
U
9 C
U
30.7
35.4
66.7
140
272
27.4
29.3
70.0
170
285
77.0
78. 6
97.0
156
278
37.8
36.0
39.8
70.2
137
161 .... . 161
169
207
306
'"' 53:6'" 52.8 50.2
57.2
89.0
298'
.3~8,... 300
319
385
71..4' 74.3
72.3
70.8
81.3
1400
1400'1400 ..... 1400
.1400
200
200
200
200
198
4280
4280
4280""42804280.
476
476
476
476
476
491
352
493
226
504
156
547
119
1430
196
4280
475
. ---, , . _.
1460
392
1460
375
1460
351
1460
313
1850
239
4350
477
"
14
19
C
U
C
U
C
U
C
U
C
U
C
U
C
U
26.3
21.6
36.2
16.5
53.5
17.4
78.9
19.7
514
51.4
1440
103
7360
388
33.0
33.9
40.1
16.3
55.2
16.2
79.5
'18.9
514
51.4
1440
103
7360
388
70.1
79.5
72.2
35.1
79.0
22.2
94.5
19.4
514
51.3
1440
103
7360
388
147
93.9
148
63.0
149
45.2
155
33.9
515
50.7
1440
103
7360
388
N=40
1
2
3
4
10
20
30
39
C
43.1
51.2
89.8
167
301
520
11.90
U
44.0
59.5
34.5
24.8
26.8
33.1
59.9
C
44.9
51.8
89.8
167
301
520
11.90
U
18.1
24.9
28.3
24.5
26.7
33.1
59.9
C
48.3
53.4
89.9
167
301
520
1490
U
14.0
14.8
23.7
24.2
26.7
33.1
59.9
c.
53.7
56.9
90.3
167
301
520
1490
U
12.7
11.5
19.4
23.6
26.7
33.1
59.9
C
128
129
133
178
302
520
1490
U
12.8
12.8
12.0
15.0
24.3
32.9
59.9
C
528
528
528
528
535
610
1490
U
26.3
26.3
26.3
26.3
25.7
27.2
59.4
C 1930
1930
1930
1930
1930
1930
2030
U
64.4
64.4
64.4
64.4
64.4
64.4
61..7
C 11,400 11,400 11,400
11,400 11,400 11,400 11,400
U
293
293
293
293
293
293
293
e
72
Table 3 (continued)
M= 1.5,
s
P
.05
.138
.1
.223
.2
.368
.3
.503
.4
.639
.5
.783
.7
1.13
J =1
p
s
.05
.138
.1
.223
R
U
2 c
5.32
5.13
5.56
U
2~77
3 c
U
4 c
U
5.82
1.94
6.15
1.54
5.35
4.80
5.56
2.74
5.82
1.94
6.15
1.54
5.43
4.53
5.59
2.60
5.82
1.92
6.15
1.54
5.53
5.13
5.63
2.45
5.84
1.87
6.15
1.53
5.65
6.21
5.71
2.41
5.87
1.81
6.16
1.52
5.79
7.01
5.82
2.57
5.93
1.76
6.18
1.50
6.13
5.87
6.14
~.08
.17
1.87
6.30
1.47
1
c 20.2
U
c
u
3 c
u
4 c
u
10 c
2
U
N=10
14 C
U
e
1 c 10.2
u 9.il
2 c 10.3
5.08
U
3 c 10.5
U
3.48
4 c 10.6
U
2.65
7 c 11.0
U 1.58
9 c 11.5
U
1.28
10.3
8.48
10.4
4.75
10.5
3.43
10.6
2.64
li.O
1.58
li.5
1.28
10.4
10.9
10.4
4.27
10.5
3.12
10.6
2.55
il.O
1.58
11.5
1.28
.3
.503
.4
.639
.5
.783
.7
1.13
20.6
6.35
20.6
5.69
20.6
5.05
20.6
4.33
20.8
1.93
21.1
1.50
21.9
1.15
20.8
3.65
20.8
3.60
20.8
3.53
20.8
].40
20.8
1.82
21.1
1.49
21.9
1.15
21.1
1.95
21.1
1.9
21.1
1.9
21.1
1.95
21.1
1.80
21.2
1.38
21.9
1.15
40.6
3.61
40.6
3.61
40.6
3.61
40.6
3.61
40.6
3.27
40.8
1.96
41.2
1.37
42.2
1.08
40.8
2.59
40.8
2.59
40.8
2.59
40.8
2.59
40.8
2.58
40.8
1.82
41.2
1.37
42.2
1.08
41.1
1.65
41.1
1.65
41.1
1.65
41.1
1.65
41.1
1.65
41.1
1.64
41.2
1.31
42.2
1.08.
N=20
N=5
R
1 C
.2
.368
10.5
12.8
10.5
4.74
10.6
2.92
10.6
2.36
11.0
1.57
11.5
1.28
10.6
11.2
10.6
5.25
10.7
3.10
10.7
2.26
il.O
1.56
li.5
1.28
10.8
7.73
10.8
4.94
10.8
3.33
10.8
2.35
11.1
1.52
li.5
1.28
li.l
2.99
11.1
2.86
li.l
2.68
11.1
2.38
li.2
1.45
11.5
1.27
19 C
U
16.6
20.2
9.20
20.3
6.61
20.4
5.07
20.8
2.08
21.1
1.51
21.9
1.15
20.2
20.8
20.3
8.21
20.3
5.94
20.4
4.84
20.8
2.08
21.1
1.51
21.9
1.15
20.4
23.1
20.4
9.90
20.4
5.74
20.4
4.18
20.8
2.07
21.1
1.51
21.9
1.15
20.5
13.1
20.5
8.76
20.5
6.21
20.5
4.49
20.8
2.04
21.1
1.51
21.9
1.15
N=40
1
2
3
C
40.1
C
40.2
11.7
40.2
9.51
40.4
4.04
40.8
2.03
41.2
1.37
42.2
1.08
u 40.9
C 40.2
u 16.2
U
4
c
U
10 c
U
20 c
U
30 c
U
39 C
U
40.2
46.7
40.2
19.4
40.2
11.2
40.2
8.15
40.4
4.04
40.8
2.03
41.2
1.37
42.2
1.08
40.4
15.5
40.4
12.7
40.4
10.6
40.4
8.67
40.4
3.65
40.8
2.03
41.2
1.37
42.2
1.08
40.5
6.00
40.5
5.94
40.5
5.86
40.5
5.72
40.5
3.42
40.8
2.03
41.2
1.37
42.2
1.08
e
73
Table 3 (continued)
H = 1.5 J = 10 v'IO
P
S
.05
.138
.1
.223
.2
.368
.3
.503
.4
.639
.5
.783
.7
1.13
p
s
.05
.138
.1
.223
N=5
C
15.2
U
1Lo6
22.6
11.3
,..
3 v 30.9
U 10.3
4 C 41.2
U 10.3
2
C
U
16.0
14.4
22.7
11.2
30.9
10.3
41.2
10.3
18.5
15.5
23.5
il.O
31.0
10.2
41.2
10.3
21.8
20.2
25.1
10.9
31.5
10.1
41.3
10.3
C
U
e
2 C
u,..
3 v
u
4 C
U
7 c
U
9 c
U
16.9
15.1
20.7
10.2
24.7
8.24
28.8
7.21
43.0
6.15
58.2
6.47
18.4
15.2
21.2
9.71
24.8
8.13
28.8
7.19
43.0
6.15
58.2
6.47
22.0
23.0
23.2
9.53
25.7
7.63
29.1
7.01
43.0
6.15
58.2
6.47
26.0
31.6
26.4
il.9
27.7
7.65
30.1
6.68
43.1
6.14
58.2
6.47
.4
.639
.5
.783
.7
1.13
40.2
12.4
40.2
11.1
40.2
9.84
40.2
44.8
7.87
44.8
7.76
44.8
7.60
44.8
7.33
46.4
t..05
54.5
3.8t.
79.2
4.17
55.8
5.14
55.8
5.lL.
55.8
5.14
55.8
5.14
55.8
4.76
57.7
3.77
79.2
4.17
R
25.6
28.1
27.5
11.6
32.5
10.0
41.6
10.3
29.9
36.2
30.9
13.7
34.4
10.2
42.2
10.3
40.8
39.1
40.9
20.6
41.9
12.7
46.1
10.7
1
C
U
2 c
U
3 c
U
4 C
u
10 C
N=10
1
.3
.503
N=20
R
1
.2
.368
U
30.2
31.7
30.3
15.0
30.9
8.96
32.2
6.78
43.2
6.10
58.2
6.47
34.8
24.9
34.8
16.0
35.0
10.8
35.5
7.73
43.7
6.01
58.3
6.47
45.8
12.3
45.8
11.8
45.8
il.O
45.8
9.81
48.1
6.22
58.9
6.46
14C
U
19 c
U
25.2
20.7
26.9
12.2
29.0
9.45
31.1
7.75
43.9
4.39
54.4
3.88
79.2
4.17
27.3
28.1
28.1
11.4
29.5
8.63
31.3
7.44
43.9
4.39
54.4
3.88
79.2
4.17
31.7
35.9
31.8
15.4
32.1
9.04
32.9
6.74
43.9
4.39
54.4
3.88
79.2
4.17
35.9
22.9
35.9
15.3
36.0
10.9
36.1
7.91
44.0
4.33
54.4
3.88
79.2
1..17
8.1.5
44.5
4.13
54.4
3.88
79.2
4.17
N=40
1 C
U
2
C
U
3 C
u
4 C
U
10 C
U
20 C
U
30 C
U
39 c
u
60.2
47.1
51.6
64.8
44.5
75.8
55.9
19.8
8.28
45.4
54.7
4.11
5.35
3.e5
45.0
47.2
51.6
60.2
6Lo8
55.9
75.8
18.2
22.7
Le.1l
16.3
8.20
5.35
3.05
45.9
60.2
47.4
51.6
64.8
55.9
75.8
13.2
13.3
13.6
8.08
4.11
5.35
3.05
46.9
47.9
51.7
60.2
6Lo8
55.9
75.8
11.1
9.69 11.1
7.89
4.11
5.35
3.05
53.5
53.5
53.9
56.3
60.2
64.8
75.8
5.35
5.34
4.86
4.75
4.85
4.C)
3.05
64.3
64.3
64.3
66.0
64.3
64.5
75.8
3.21
3.21
3.21
3.21
3.10
2.9Le
3.C2
78.5
78.5
78.5
78.5
78.5
78.5
79.4
2.61
2.61
2.61
2.61
2.61
2.61
2.53
109
109
109
109
109
109
109
2.80
2.80
2.80
2.80
2.80
2.80
.2.20
e
71J.
Table 3 (continued)
M= 1.5 J = 1000
P
S
.05
.138
.1
.2
.223
.368
.3
.503
.4
.5
.639
.783
.,7
1.13
S
P .05
.138
R
':\
-'
c
u
4 c
U
327
315
562
280
824
275
il50
288
354
318
566
279
824
274
1150
288
536
497
640
278
842
271
1150
287
433
362
590
275
828
273
1150
288
656
720
717
303
875
270
il60
287
793
960
825
365
934
277
1180
287
il40
1090
1140
573
1170
356
1300
304
649
682
654
323
670
195
711
150
1060
150
1540
171
793
569
794
364
799
247
816
178
1070
148
1540
171
il40
307
1140
294
1140
274
1140
245
1220
157
1560
171
N=10
1
e
C
u
2 c
u
3 c
U
4 c
u
7 "v
u
9 c
u
229
204
349
171
476
159
605
151
1050
151
1540
171
274
226
364
167
479
157
606
151
1050
151
1540
171
388
408
428
176
506
150
614
148
1050
151
1540
171
.2
.223
.368
.3
.503
.4
.639
.5
.783
.7
1.13
N=20
N=5
!)
1 c
u
2 c
u
.1
515
628
530
238
570
157
645
143
1060
150
1540
171
1
2
3
4
c
u
c
u
c
u
c
u
10 c
u
14 c
U
19 C
u
185
388
803
523
659
~51
258
203
141
152
441
333
276
523
803
239
392
659
il2
182
191
223
109
139
320
803
304
404
525
659
161
99.1
136
159
93.7 il4
660
372
428
803
531
377
92.6
89.6
87.6 116
131
139
776
776
856
776
796
779
77.6
77.6
77.6
76.7
73.9
74.7
1110
1110
1110
ill0
1110
lliO
79.0
79.0
79.0
79.0
79.0
78.3
1890
1890
1890
1890
1890
1890
99.6
99.6
99.6
99.6
99.6
99.6
1150
106
1150
106
1150
106
1150
106
1-150
98.3
1210
79.2
1890
99.6
N=40
1
2
3
4
10
20
30
39
C 183
264
408
543
679
823
U 187
306
157
80.4
60.4
52.3
266
408
543
679
823
C 199
U
80.3 128
124
79.6
60.4
52.3
C 226
274
408
543
679
823
u
65.6
76.1 107
78.5
60.3
52.3
C 259
288
409
543
679
823
U
61.2
58.4
87.9
76.6
60.3
52.3
C 467
467
480
554
680
823
u
46.7
46.6
43.3
46.8
54.7
52.0
C 810
810
810
810
816
864
U
o.
40.4
40.4
40.4
39.3
38.5
12 0
12 0
12 0
12 0
12 0
C 12 0
U
41.9
41.9
41.9
41.9
41.9
41.9
C 2230
2230
2230
2230
2230
2230
U
57.3
57.3
57.3
57.3
57.3
57.3
1170
47.1
1170
47.1
1170
L.7.1
1170
47.1
1170
47.1
1170
46.8
1280
41.0
2230
57.~
e
75
Table 3 (continued)
Me
p
s
.05
.305
.1
.407
.2
.549
.3
.662
.4
.764
.5
.864
.7
1.08
::z:
2.~
J =1
p
S'
.05
.305
.1
.407
1
.::
3
c
u
C
u
C
U
4 C
U
5.49
5.29
5. 68
2.84
5.87
1.96
6.07
1.52
5.52
4.96
5.69
2.81
5.87
1.96
6.07
1.52
5.60
1..67
5.71
2.66
5. eJ1
1.94
6.07
1.52
5.68
5.27
5.75
2.50
5.89
1.89
6.07
1.51
R
5.77
6.34
5.81
2.46
5.91
1.82
6.08
1.50
5.m
7.11
5.89
2.60
5.95
1.76
6.09
1.48
6.08
5.82
6.08
3.05
6.10
1.85
6.16
1.44
10.8
11.3
10.8
5.32
10.8
3.13
10.8
2.28
11.0
1.56
11.3
1.25
10.9
7.79
10.9
4.98
10.9
3.35
10.9
2.37
11.0
1.52
11.3
1.25
H.l
2.98
11.1
2.85
11.1
2.66
11.1
2.37
11.1
1.44
11.3
1.24
N=10
e
1 C
U
2 c
U
3 c
U
4 c
10.4
9.27
10.5
. 5.16
10.6
3.53
10.7
U 2.68
7 c 11.0
U
1.57
9 c 11.3
U 1.25
10.4
8.63
10.5
4.83
10.6
3.48
10.7
2.67
11.0
1.57
11.3
1.25
10.6
11.1
10.6
4.34
10.7
3.16
10.7
2.58
11.0
1.57
11.3
1.25
.3
.662
.4
.764
.5
.864
.7
1.08
N=20
N=5
R
.2
.549
10.7
13.0
10.7
1:..81
10.7
2.96
10.8
2.39
11.0
1.57
11.3
1.25
1
C
U
2 C
u
3 C
U
4 C
U
10 C
U
14 C
20.3
16.7
20.4
9.28
20.5
6.67
20.5
5.11
20.8
2.08
21.0
U
1.50
19 C 21.4
U 1.13
20.4
21.0
20.4
8.29
20.5
5.99
20.5
4.88
20.8
2.08
21.0
1.50
21.4
1.13
20.5
23.3
20.6
9.99
20.6
5.79
20.6
4.22
20.8
2.08
21.0
1.50
21.4
1.13
20.7
13.2
20.7
8.82
20.7
6.26
20.7
4.52
20.8
2.05
21.0
1.50
21.4
1. '3
20.8 20.9 21.1
6.38 3.67 1.94
20.8 20.9 21.1
5.72 3.62 1.91.
20.8 20.9 21.1
5.08 3.54 1.94
20.8· 20.9 21.1
4.36 3.41 1.9120.9 20.9 21.1
1.82 1.80
1.9~
21.0 21.0 21.1
1.50 1.48 1.38
21.4 21.4 21.5
1. 13 1.13_ 1.13
N=40
1
2
3
4
10
20
30
39
C 40.3
U 41.1
C 40.3
U 16.3
c 40.4
U 11.7
C 40.4
u 9.55
c 40.6
u 4.06
c 40.9
U 2.04
c 41.1
U 1.37
C 41.6
U 1.07
40.4
46.9
40.4
19.5
40.4
11.2
40.4
8.19
40.6
4.05
40.9
2.04
41.1
1.37
41.6
1.07
40.5
15.6
40.5
12.8
40.5
10.7
40.5
8.71
40.6
3.66
40.9
2.04
41.1
1.37
41.6
1.07
40.7
6.02
40.7
5.96
40.7
5.88
40.7
5.74
40.7
3.43
40.9
2.04
41.1
1.37
41.6
1.07
40.8
3.62
40.8
3.62
40.8
3.62
40.8
3.62
40.8
3.28
40.9
1.97
41.1
1.37
41.6
1.07
40.9
2.60
40.9
2.60
40.9
2.60
40.9
2.60
40.9
2.58
40.9
1.82
41.1
1.37
41.6
1.07
1.1.1
1.65
41.1
1.65
41.1
1.65
41.1
1.65
41.1
1.65
41.1
1.64
41.1
1.31
41.6
i.O?
e
e
e
Table 4.' }A"..i.nimum. standardized test duration (S) for various
P and r.1
l-e -1 =
.63213
.7
.8
0
.05
.1
.2
.3
.4
.5
.6
0
.00263
.Oill
.0498
.127
.261
.480
.840
1
1.45
2.59
8.97
co
0
.0513
.105
.223
.357
.511
.693
.916
1
1.20
1.61
3.00
co
1 12
2
0
.138
.223
.368
.502
.639
.783
.943
1
1.13
1.37
2.08
co
0
.226
.325
.472
.597
.715
.833
.957
1
1.10
1.27
1.73
co
21
2
0
.305
.407
.549
.662
.764
.864
.966
1
1.08
1.21
1.55
co
P
.95
1
M
-
1
2
1
S expressed in nondimensional time units
S
= [- J n
(l-P)] l/M
= (-L) 11M
[See Section 9.1, Program (3)]
--J
en
e
79
Table 5.
p
.05
.1
.2
.3
.4
.5
Information (I =
.7
t)
obtained from estimator (a)
p
1
.05
.1
N=5
e
.4
.5
.7
1
10.9
10.9
10.9
10.9
11.7
15.3
19
20
20
20
20
20
20
20
20
20
24.9
24.9
24.9
24.9
24.9
25.1
31.4
39
40
40
40
40
40
40
40
40
40
40
R
1.04
2
3
4
5
1.11
2.03
3
4
5
1.20
2.15
3.03
4
5
1.08 .911 .825
2.26
2.30 2.37
3.11 3.24 3.37
4.01 4.04 4.11
5
5
5
1.04
1.99
3.29
4.29
5
5
5
5
5
5
3.72
3.89
4.16
4.67
7.73
9.12
10
10
10
10
10
10
10
10
N=10
1
2
3
4
7
9
10
.3
N=20-
R
1
2
3
4
5
.2
1.12
2.04
3.01
4
7
9
10
1.21
2.18
3.06
4.01
7
9
10
.821 .953
.953
2.44 2.22 2.03
3.37 3.62 3.44
4.15 4.50 4.74
7.01 7.08
7
9
9
9
10
10
10
1.39
2.18
3.24
4.59
7.27
9.01
10
1
2
3
4
10
14
19
20
1.22
2.20
3.07
4.02
10
14
19
20
.882 1.57
.972
3.25
5.69
2.06 2.34 3.63 5.77
2.47
3.42 3.55 3.30 4.09 5.89
4.21 4.88 4.57
4.76 6.11
10
10 10.2 10.8 11.5
14
14
14
14 14.2
19
19
19
19
19
20
20
20
20
20
N=40
1
2
3
4
10
20
30
39
40
.861 2.60 6.75
.981
2.48 2.08 3.17 6.82
3.44 3.60 3.80 6.92
4.23 4.94 4.65
7.09
10
10 11.1 il.8
20
20
20
20
30
30
30
30
39
39
39
39
40
40
40
40
I expressed in (nondimensional time units)-2
[See Section 8.0]
11.2
11.3
11.3
11.3
12.4
20.8
30
39
40
15.7
15.7
15.7
15.7
15.8
22.4
30
39
40
80
Table 6.
M
5
0
.05
.1
.2
.3
.34&
.4
.481
:~
e
Ordinates of the s~andardized Weibull density function,
.
w( s;M) = MsM-l e-5
1
2
-J;'
1
2J;'
w= - - e
wee-s
1.783
1.153
.714
.527
1
.951
.905
.819
.741
.420
.670
.3&9
.297
.697
.549
.497
co
.7
.707
.229
.8
.815
.9
1
.184
1.066
1.225
1.245
1.35
.120
1.5
1.731
,1.75
.086
2
*maximum
°inf1ection point
112
1
.449
0
.368
2. J-s e-8 J;' w=2 se _s2
W=2
0
.332
.~59
• 14
.697
.736
.746*
.7M
.730
0
.010
.198
.384
.548
.682
.779
.838
.664
.857*
.845
.611
.552
.515°
.736
.5&6°
.223
.292
.437
.316
.135
.125
.164
.073
[See Section 5.0 and Figure 1]
~2
2
.
I
w= -5 se-s
2
2
J8
0
.028
.079
.220
.390
.470°
.572
.7&1
.880
.972
1.006
1.0101f.980
.920
.611°
.291
.110
.024
e
e
e
1.4
1.2
t
--
0.8
en
~ 0.6
0.4
(See table 6 and section 5.0)
o
0.2
0.4
0.6
0.8
1.0
1.2
1.4
1.6
1.8
2.0
0)
,....
..
e
t
e
e
\.0
1.0
0.8
0.8
0.6
t
R=2,P=0.05
0.6
> 0.4
>0.4
0.2
0.2
o
10
20
30
-
40
0
'==------- ----~
R=2,P=O.2
R=3,P=0.2
10
>
1.0
1.0
0.8
0.8
0.4
t
________
R=2,P=O.l_
012
o
20
30
0.6
>0.4
0.2
R :;:3,P=O.l
10
30
40
N~
N~
t 0.6
20
40
0
--
R=2'P=O.3~
;;
R =4, P=0.3
10
N~
20
30
40
N~
(See section 7.3)
Figure 2.
Variance (V) of the standardized estimator (a) as a function of N
for various combinations of small R and small P
R5
.
e
t
e
e
1.2
1.2
1.0
1.0
0.8
0.8
t
0.6
r---
o
o
0.4
0.2.
0.6
0.4
0.2
R=5
N=5
,
0
0.2
0.4
0.6
0.8
R='I
N=40
1.0
o
R=40 , N=40
0.2 0.4
p
P --..
0.6
•
0.8
1.0
(See table 2, section 7.4 and section 9.1)
Figure 3. Mean square error (0) of standardized estimator (a)
as a function of P
for various ( R, N) com binations
~
e
e
e
1.0
0.8
t
<{
0.8
t
0.6
0.4
<{
R=5.N=5
0.6
0.4
kt
R
=2
N=40
R=3,N=5
0.21
0.
0.2
0.4
P
..
:""J.
0.2
R=40.N=40
0.6
0.8
1.0
o
0.2
0.4 0.6
P
,
0.8
1.0
(See section 7.5)
Figure 4.
Asymptotic mean square error (A) of standardized estimator (a )
as a function of P
for various (R, N) combinations
co
.;:-
85
M=O.l
1.8
1.6
M = 1.5
1.4
1.2
t
1.0
en 0.8
0.6
0.4
o
0.2
0.6
0.4
p
1.0
0.8
\>-
(See table 4 and
section 9.1 program(3»
Figure 5. Minimum standardized test duration (S)
as a function of P
for various M
e
e
e
2.5
2.0
r·
M=2
5
-I
M=2
2
=t..
1.0
0.5
·0
0.5
1.0
2.0
1.5
oc.
2.5
~
3.0
3.5
(See table I
and section 3.0)
Figure 6. Expected life span (}l) as a function of the scale parameter (Cl')
of the Wei bull density function
for various values of the shape parameter, M
co
0'-
LIST OF REFERENCES
Chapman, D. G., 1959. Advanced Theory of Statistics. R. C. Taeuber,
recorder, North Carolina State College. Mimeo Series No. 214.
•
Cohen, A. Co, Jr., 1950. Estimating the Mean and Variance of Normal
Populations from Singly Truncated and Doubly Truncated Samples.
Annals Math. Stat. 21, 557-569.
Cohen, A. C., Jro, 1951. Estimation of Parameters in Truncated Pearson
Frequency Distributions. Annals Math. Stat. 22, 256-265.
Deemer, W. L., Jr., and Votaw, D. F., Jr., 1955. Estimation of Parameters
of Truncated or Censored Exponential Distributions. Annals Math.
Stat. 26, 498-504.
Dwight, H. B., 1957.
Co.
Tables of Integrals.
3d Ed. New York.
Epstein, B., 1953. Statistical Problems in Life Testing.
Annual Convention ASQC, 385-398.
Epstein, B., 1954. Life Test Estimation Procedures.
No.2, Dept. of Matho, Wayne State Univ.
Epstein, B., and Sobel, M., 1953.
486-502.
Life Testing.
The Macmillan
Proceedings 7th
Unpub. Tech. Rep.
J. Am. Stat. Assn.
~,
Epstein, B., and Sobel, Mo, 1954. Some Theorems Relevant to Life Testing
from an Exponential Distribution. Annals Math. Stat. 22, 373-381.
Epstein, B.~ 1960. Tests for the Validity of the Assumption that the
Underlying Distribution of Life is Exponential. Parts I and II,
Technometrics ~, 83-101, 167-183.
x
.
Grab, Eo L., and Savage, I. Ro, 1954. Tables of the Expected Value of 1
for Positive Bernoulli and Poisson Variables. J. Am. Stat. Assn.
lt2" 169-177 •
Gupta, S. S., 19520 Estimation of the Mean and the Standard Deviation of
a Norlllal Population from a Censored Sample. Biometrika.2.2., 260-273.
Hald, Ao, 1949. Maximum Likelihood Estimation of the Parameters of a
Normal Distribution which is Truncated at a Known Point. Skand.
Aktuarietidskrift ~, 119-1340
Herd, Go R., 1956. Estimation of the Parameters of a Population from a
MUlti Censored Sample. Unpub. Ph.Do dissertation, Iowa State College.
IBM Reference Manual, 1959.
York.
Random Number Generation and Testing.
New
88
Jaech, J. L., 1955. New Techniques for Life Testing.
Atomic Products Co.
Unpub. Hanford
Kao, J. H. K., 1956. A New Life Quality Measure for Electron Tubes. IRE
Trans. on Reliability and Quality Control. PGRQC-7, April, 1956. 1-11.
Mendenhall, W., 1957. Estimation of Parameters of Mixed Exponentially Distributed Failure Time Distributions from Censored Life Test Data.
Unpub. Ph.D. dissertation, North Carolina State College •
•
Mendenhall, W., 1958. A Bibliography on Life Testing and Related Topics.
Biometrika ~, .521,-543.
Mendenhall, W. p and Hadar, Ro Ho, 1958. Estimation of Parameters of Mixed
Exponentially Distributed Failure Time Distributions from Censored
Life Test Data. North Carolina State College Reprint Series No. 130.
Mendenhall, Wop and Lehman, Eo H., Jr., 1960. An Approximation to the
Negative Moments of the Positive Binomial Useful in Life Testing.
Technometrics ~ 227-242.
Sarhan, Ao Eo, and Greenberg, B. G., 1956. Estimation of Location and
Scale Parameters by Order Statistics from Singly and Doubly Censored
Samples 0 Annals Math. Stat. gz, 427=451.
Stephan, F. F• .9 1945. The Expected Value and Variance of the Reciprocal
and other Negative Powers of a Positive Bernoullian Variate. Annals
Math. Stat. ~, 50-61.
Tilden, Do Ao, 1957. Life Testing Using the Halfrange.
Day Conference QC, Rutgers, Sept. 7, 1957.
Proceedings All
Wald, Ao, 1948. Asymptotic Properties of the Maximum Likelihood Estimate
of an Unknown Parameter of a Discrete Stochastic Processo Annals
Matho Stat. 12" 40-46.
Weibull, Wo, 1951. A Statistical Distribution Function of Wide Applicability.
J. Applo Mech. 18, 293-297.
Zelen ll M., 19590
269-2880
•
Factorial Experiments in Life Testing.
Technometrics
1,
Zelen, Mo, 1960. Analysis of Two-Factor Classifications with Respect to
Life Test. Contributions to Probability and Statistics. Stanford
University Press, Stanford, Calif., 508-517.
Zelen, Mo, and Dannemiller» Mo C., 1961.
The Robustness of Life Testing
Procedures Derived from the Exponential Distribution. Technometrics
J., 29-49.
ABSTRACT
LEHMAN, EUGENE H, JR.
Estimation of the Scale Parameter in the Weibull
Distribution Using Samples Censored by Time and by Failures.
(Under the
direction of RICHARD LOREE ANDERSON).
This dissertation presents the results of an investigation to deter-
•
mine the optimum sample size" (N)
and the minimum test duration
(T)
the miniJnum required failures
(R)
in order to estimate most effectively
the scale parameter ,(a) in the Weibull distribution.
To determine the
optimum combination of these test constants, the maximum likelihood
estimator
(~)
of a
was used.
The properties of the Weibull distribution and the shape of the
accompanying density function for various values of the shape parameter
The mean and variance of the life span
an item were shown to be proportional to al/ M:
(M)
were presented.
(t)
of
~ = E(t) = al/ M ~ t
(J2 • Vet) = a2/ M
[~'.
_
(~
1)2].
The "hazard" function
z •
was defined as the instantaneous tendency to failure.
It is proportional
to the probability of failure in the next instant of time given that an
•
item has survived to time
t.
This probability increases, remains con-
stant, or decreases with time as
M exceeds, equals, or is less than I.
A new method of censoring was presented where
N items are placed
on test and the test is stopped only after both R items have failed
T time units have elapsed;
~ for this method was derived.
~
The bias
(B») the variance
(V), and the mean square error
(D)
of the standardized
1\
(a =~)
estimator
were investigated.
0:
are not monotonic as functions of
It was noted that
P and
of any item failing before time T.
N, where
B, V, and D
P is the probability
B, V, and D are, however, mono-
tonically decreasing as functions of R.
"
It was proved that
0:
satisfies the four conditions set up by Wald
in 1948 for consistency and asymptotic efficiency.
The information
of
D.
The cost
(C)
(I)
derived from ~ was defined as the reciprocal
of the estimator was defined as
C
where
J
=N +
J E(d)/o:l/M
is the ratio "cost per unit time of continuing the test" to
"cost per item placed on test", and E(d)
the test.
The price
(U)
is the expected duration of
of the estimator was defined as the cost per
unit information,
U
Values of
U and
= C/I
=
CD.
C for various combinations of
M, N, P, R, and J
were calculated and tabulated.
With the aid of the tables, an experimenter who has knowledge of
the
M and
J
which apply to his product, may design a life test experi-
ment selecting N, R, and S (where
•
S
S is the standardized time variate,
= T/o:l/ M) which will be optimum for him - either giving a minimum
price, a maximum information, or cost not exceeding some value, or a best
test for a given sample sizeo
Included also are graphs and tables presenting the Weibull density
function of the standardized time variate
W( sJ
M)
= MsM-l
exp (M)
- s ;
D and the asymptotic mean square error (A) as
a function of N;
~
functions of
P;
V as
as a function a; and P as a function of S.
© Copyright 2026 Paperzz