TABLE OF CONTENTS

1.  INTRODUCTION
2.  REVIEW OF LITERATURE
3.  INVARIANT SELECTION PROCEDURES
    3.1  Introduction
    3.2  Methods of Selection
    3.3  Some Remarks and Properties
4.  SELECTING WEIBULL, LOGNORMAL AND GAMMA MODELS
    4.1  Introduction
    4.2  Selection Statistics for the Weibull Family
    4.3  Selection Statistics for the Lognormal Family
    4.4  Selection Statistics for the Gamma Family
5.  DISCUSSION OF PROGRAMMING
    5.1  Complete Samples
    5.2  Censored Samples
6.  ON THE SELECTION OF WEIBULL, LOGNORMAL AND GAMMA DISTRIBUTIONS FOR COMPLETE AND CENSORED SAMPLES (with C. P. Quesenberry)
    Introduction
    Densities and Selection Functions
    Simulation Results for Complete Samples
    Selection with Censoring
    Simulation Results for Censored Samples
    A User Program and Examples for Censored Samples
    References
7.  SUMMARY AND CONCLUSIONS
8.  LIST OF REFERENCES
9.  APPENDICES
    9.1  User Program for the Selection Procedure
    9.2  Graphs of the Nine Densities of Table 3
1.  INTRODUCTION
Statistical analyses are usually performed under the assumption that the observations come from some specified family of distributions.  Let $P_i$, $i=1,\ldots,k$, be $k$ parametric families of probability distributions defined on the same measurable space $(X, \mathcal{A}_x)$, and let $X_1,\ldots,X_n$ be a random sample of size $n$.  The problem to be considered here is to find a selection procedure, based upon the data, for selecting one of the $k$ families that in some sense best fits the data.  The problem can be approached through the hypothesis testing framework, i.e., for $i \neq j$, by considering the testing problem

$$H_0:\ X \sim F_1 \in P_i \qquad \text{versus} \qquad H_A:\ X \sim F_2 \in P_j,$$

where the family of distributions labelled as the null hypothesis is chosen with a view to controlling the Type I error.  Generally, one of the two families is preferred, and $H_0$ should not be rejected unless the evidence in support of $H_A$ is strong.  In the approach to be considered here, it is desired to treat all families symmetrically.
For many statistical problems, it is natural to confine attention
to procedures that are invariant with respect to a certain group of
transformations.
The two-parameter (scale and shape) Weibull, lognormal and gamma
families of distributions are commonly used in reliability and life
testing models.
In practice, experiments are often quite time consuming and costly; therefore, the experiment is carried out for a specified length of time (Type I censoring).
In a recent paper, Kent and Quesenberry (1980) considered the
problem of selecting the member of a collection of families of
probability distributions for complete samples of reliability or life
data.
The statistic proposed is based upon the value of the density
function of a scale maximal invariant.
Some other authors have proposed
maximum likelihood ratio (M.L.R.) procedures.
Theoretically, it is
still possible to improve the procedure for some families of distributions to obtain an optimal one, in the sense of minimizing
misclassification errors.
The optimality is important since it gives
a basis for power comparisons to other procedures.
In this work, the general selection procedure based on invariance
under a group of transformations is considered.
For some families of
distributions this leads to optimal selection procedures.
It allocates a number to each family of distributions according to its likelihood; the greater the number, the more likely the family.
Further, some properties of the selection
procedures and the study of misclassification errors and correct
classification through Monte Carlo simulation are given.
The procedure
is applied to Weibull, lognormal and gamma families of distributions
for complete and censored samples.
For complete samples, the selection procedure invariant under $G = \{g:\ g(x) = \lambda x^{\delta};\ \lambda, \delta > 0\}$, i.e., invariance under scale-shape changes, is used.  For censored samples, the maximum likelihood procedure is studied in addition to the scale invariant procedure.  Finally, some applications to data sets are given.
2.  REVIEW OF LITERATURE
The general theory of invariance is given in Lehmann (1959),
including uniformly most powerful invariant (U.M.P.I.) tests obtained by
using the likelihood ratio of densities of the maximal invariant
statistic.
In an earlier paper, Lehmann (1950) attributes the notion of
invariance to R. A. Fisher, Hotelling, Pitman and others; and much of
its development to Hunt and Stein in unpublished work.
Let $X_1,\ldots,X_n$ be a random sample having distribution function $F$.  Let $P_1$ and $P_2$ be families of location parameter distributions with corresponding families of densities $\{f_1\}$ and $\{f_2\}$, respectively.  Then, the U.M.P.I. test for

$$H_0:\ X \sim F \in P_1 \qquad \text{versus} \qquad H_A:\ X \sim F \in P_2$$

is given by rejecting $H_0$ when

$$\frac{\int_{-\infty}^{\infty} f_1(x_1-\mu,\ldots,x_n-\mu)\,d\mu}{\int_{-\infty}^{\infty} f_2(x_1-\mu,\ldots,x_n-\mu)\,d\mu} > c, \tag{2.1}$$

where $c$ is chosen so that the desired significance level is obtained.
For location-scale parameter distributions (2.1) becomes

$$\frac{\int_0^{\infty}\int_{-\infty}^{\infty} f_1(\lambda x_1-\mu,\ldots,\lambda x_n-\mu)\,\lambda^{n-2}\,d\mu\,d\lambda}{\int_0^{\infty}\int_{-\infty}^{\infty} f_2(\lambda x_1-\mu,\ldots,\lambda x_n-\mu)\,\lambda^{n-2}\,d\mu\,d\lambda} > c. \tag{2.2}$$

A derivation of (2.2) can be found in Hajek and Sidak (1967), as can the following modification for scale parameter distributions:

$$\frac{\int_0^{\infty} f_1(\lambda x_1,\ldots,\lambda x_n)\,\lambda^{n-1}\,d\lambda}{\int_0^{\infty} f_2(\lambda x_1,\ldots,\lambda x_n)\,\lambda^{n-1}\,d\lambda} > c. \tag{2.3}$$
A number of writers have used and investigated the invariance
property, mainly for location-scale changes for hypothesis testing or
selection procedures for full samples.
Uthoff (1970, 1973) derived
U.M.P.I. tests for testing normal versus two parameter uniform, normal
versus two parameter exponential, two parameter uniform versus two
parameter exponential, and normal versus double exponential.
In testing a composite null hypothesis

$$H_0:\ X \sim F \in P_1 \qquad \text{versus} \qquad H_A:\ X \sim F \in P_2,$$

Cox (1961, 1962) considered tests of separate families of hypotheses based on the logarithm of the Neyman-Pearson maximum likelihood ratio (M.L.R.), and derived some general results and specific examples.  The test statistic proposed was

$$\ln \mathrm{MLR} - E_{P_1}(\ln \mathrm{MLR})\Big|_{\alpha=\hat\alpha},$$

where

$$\mathrm{MLR} = \sup_{\alpha} f(x_1,\ldots,x_n;\alpha)\Big/\sup_{\delta} g(x_1,\ldots,x_n;\delta),$$

and $f$ and $g$ denote the probability density functions from $P_1$ and $P_2$, respectively.
Cox showed that the separate family tests are asymptotically normal
with zero means.
The test is carried out based on this fact.
Jackson (1968) investigated the adequacy of Cox's results for $H_0$: lognormal versus $H_A$: exponential, and derived the test for $H_0$: lognormal versus $H_A$: gamma.  In 1969 the results were applied to fitting gamma or lognormal distributions to fibre-diameter measurements on wool tops.  Using Cox's idea, Atkinson (1970) developed a test for a mixed model including a set of hypothesized families of distributions in order to determine which family adequately describes the data.
Dyer (1973, 1974) investigated a number of tests from the discrimination point of view, including the U.M.P.I. and M.L.R. tests.  Let $P_i$, $i = 1,2,\ldots,k$, be location-scale parameter distributions with corresponding density families $\{f_i\}$, $i = 1,2,\ldots,k$.  Hogg et al. (1972) gave a Bayesian selection rule, which for a special case will yield a decision to select that member of the $k$ families of distributions that corresponds to $\max\{S(f_i,x),\ i = 1,2,\ldots,k\}$, where

$$S(f,x) = \int_0^{\infty}\int_{-\infty}^{\infty} f(\lambda x_1-\mu,\ldots,\lambda x_n-\mu)\,\lambda^{n-2}\,d\mu\,d\lambda,$$

or the maximum likelihood function.
Dumonceaux, Antle and Haas (1973) examined M.L.R. tests for
discriminating between two models with unknown location and scale
parameters and compared empirically the power of the M.L.R. test with
the power of the U.M.P.I. test in testing between normal and Cauchy
distributions.
Although the M.L.R. test is not as powerful as the
U.M.P.I. test, their recommendation for using the M.L.R., whenever they
differ, is based on ease of computation and relatively good power.
Dumonceaux and Antle (1973) followed with an M.L.R. procedure for discriminating between lognormal and Weibull distributions, using the fact that the log-transformed observations follow normal and Type I extreme value distributions, respectively, both of which are location-scale parameter distributions.
Kent and Quesenberry (1980) considered selecting the uniform,
exponential, Weibull, gamma and lognormal families of distributions by
using invariance under scale changes.
In cases when shape parameters
are unknown these parameters are replaced by M.L. estimates.
Bain and Englehardt (1980) proposed the use of the M.L.R. statistic
in choosing between Weibull and gamma families of distributions and
gave some empirical error rates of correct selection obtained from a
Monte Carlo simulation.
Most distributions that are important in reliability and life
testing models are distributions with regions of support bounded below
by location parameters.
Quesenberry (1975) derived a general transformation to "eliminate" the location parameter(s) involved in truncated parameter distributions.  One particular case is for $X_1,\ldots,X_n$ a random sample of size $n$ with density function of the form

$$f(x) = h(x)\,I_{(\mu,b)}(x),$$

where $I$ is an indicator function, i.e.,

$$I_{(\mu,b)}(x) = \begin{cases} 1, & x \in (\mu,b), \\ 0, & \text{otherwise.} \end{cases}$$

Let $Z_1,\ldots,Z_{n-1}$ denote the $n-1$ sample members larger than $X_{(1)}$, the smallest order statistic, where the $Z$'s are subscripted in the same order as the original $X$'s.  Then for fixed $X_{(1)}$, the random variables $Z_1,\ldots,Z_{n-1}$ are conditionally independent, identically distributed with common density function

$$h(z)\Bigl[\int_{x_{(1)}}^{b} h(t)\,dt\Bigr]^{-1}, \qquad x_{(1)} < z < b.$$

This transformation simplifies the problem since invariance under location-scale changes becomes invariance under scale changes only.
Goodness of fit problems in the presence of censoring have been
discussed by some writers.
Among graphical methods, Nelson (1972)
described methods for plotting the empirical cumulative failure rate
against time, while Barlow and Campo (1975) have proposed "total time
on test" plots.
Most goodness of fit problems in the hypothesis testing
framework have only considered the classic simple hypothesis problem,
i.e., for a completely specified null distribution.  Barr and Davidson (1973), Koziol and Byar (1975) and Dufour and Maag (1978) considered modified Kolmogorov-Smirnov tests for censored samples and Pettitt and Stephens (1976) considered this problem for Cramer-von Mises,
Anderson-Darling and Watson statistics.
Other writers, such as Koziol
and Green (1976) and Hollander and Proschan (1979) considered the
problem for randomly censored observations based on the product limit
estimate proposed by Kaplan and Meier (1958).
3.  INVARIANT SELECTION PROCEDURES

3.1.  Introduction
Let $G$ be a group of one-to-one transformations $g\colon X$ onto $X$, i.e.,

i)  the group operation is the composition of functions in $G$: for $g_1, g_2 \in G$, $g_2 \circ g_1 \in G$ and $g_2 \circ g_1(x) = g_2(g_1(x))$,

ii)  the group operation is associative: for $g_1, g_2, g_3 \in G$, $g_3 \circ (g_2 \circ g_1) = (g_3 \circ g_2) \circ g_1$,

iii)  the identity transformation $e \in G$, $e(x) = x$,

iv)  if $g \in G$, there exists a unique inverse $g^{-1} \in G$ such that $g g^{-1} = g^{-1} g = e$.

Two points $x_1, x_2$ are equivalent under $G$ if there exists a transformation $g \in G$ for which $x_2 = g(x_1)$.  This is an equivalence relation, i.e., it is reflexive, symmetric and transitive.  Indeed, an equivalence relation defines a partition as follows.  The orbit of $x$ relative to a group $G$ of transformations $g$ is defined as $Gx = \{g(x)\colon g \in G\}$.  A transformation group on a space $X$ is said to be transitive if for every $x \in X$, $Gx = X$ (Lehmann, 1959).

Let $(X, \mathcal{A}_x, P)$ be a family of probability spaces where $P = \{P_\theta;\ \theta \in \Theta\}$ and $\Theta$ is a parameter space.  Let $G$ be a group of one-to-one measurable transformations $g\colon X$ onto $X$.  If each $g$ induces a mapping $\bar g\colon \Theta$ onto $\Theta$ such that for every $A \in \mathcal{A}_x$,

$$P_\theta(g^{-1}A) = P_{\bar g\theta}(A),$$

then the collection of all maps $\bar g\colon \Theta$ onto $\Theta$ induced by $g \in G$, say $\bar G$, is itself a group.
A family of probability spaces $(X, \mathcal{A}_x, P)$ is said to be invariant under a group $G$ of one-to-one measurable transformations $g\colon X$ onto $X$ if

$$g(X) = X, \qquad g(\mathcal{A}_x) = \mathcal{A}_x, \qquad \bar g(\Theta) = \Theta, \qquad \forall\, g \in G,\ \bar g \in \bar G.$$

An event $B \in \mathcal{A}_x$ is called invariant under $G$ if

$$gB \equiv \{g(x)\colon x \in B\} = B, \qquad \forall\, g \in G.$$

The class $\mathcal{B}$ of all events possessing this property is a sub $\sigma$-field of $\mathcal{A}_x$ and is called the invariant sub $\sigma$-field under $G$.  A function $t$ on $X$ is invariant under $G$ if it is constant on each orbit or is a $\mathcal{B}$-measurable function.  A maximal invariant is a function that is constant on orbits and distinguishes orbits, i.e., a function $U(x)$ such that $U(x_1) = U(x_2)$ implies $x_1 = g(x_2)$ for some $g \in G$.  An invariant test function $\phi(x)$ is a test function which is $\mathcal{B}$-measurable, i.e., $\phi(g(x)) = \phi(x)$, $\forall\, x \in X$, $\forall\, g \in G$.
Two families of probability spaces $(X, \mathcal{A}_x, P_i)$, $i = 1,2$, are said to be conformable if there exists a group $G$ of one-to-one measurable transformations $g\colon X$ onto $X$ and corresponding groups $\bar G_1$ and $\bar G_2$ on the parameter spaces that are transitive (Quesenberry and Starbuck, 1976).  Actually, it is also meant that there exists no sub group $G_0$ of $G$ such that $\bar G_0$ is transitive for one family of probability spaces under consideration.
The following fact is useful for finding $\bar G$ given $G$, or vice versa.  Let $P_\theta$ be absolutely continuous with respect to Lebesgue measure with density $f_\theta(x)$, and $P_{\bar g\theta}$ with density $f_{\bar g\theta}(x)$; then

$$f_\theta(x) = f_{\bar g\theta}(g(x))\cdot|g'(x)|.$$

For a random sample of size $n$,

$$\prod_{i=1}^{n} f_\theta(x_i) = \prod_{i=1}^{n} f_{\bar g\theta}(g(x_i))\,|g'(x_i)|.$$

3.2.  Methods of Selection
Let $P_i$, $i = 1,2,\ldots,k$, be families of probability distributions defined on the same measurable space $(X, \mathcal{A}_x)$, and let

i)  $G_i$ be the smallest group of one-to-one measurable transformations $g\colon X$ onto $X$ corresponding to $P_i$ such that the induced group $\bar G_i$ is transitive,

ii)  $\mathcal{B}_i$ be the invariant sub $\sigma$-field under $G_i$,

iii)  $\mathcal{B} = \bigcap_{i=1}^{k} \mathcal{B}_i$, and let $G$ be its corresponding group,

iv)  $\nu$ be a $\sigma$-finite measure on $(X, \mathcal{B})$,

v)  $\tilde f_i$ be a $\mathcal{B}$-measurable density corresponding to $P_i$ with respect to $\nu$.

Then the selection procedure is given by the rule: select the $j$-th family of distributions if

$$\tilde f_j = \max\{\tilde f_i,\ i = 1,2,\ldots,k\}.$$
In the case of $k$ conformable families, i.e., if $G_i = G$ for all $i=1,2,\ldots,k$, the selection procedure is called optimal since $\tilde f_i$ is actually the density of a maximal invariant corresponding to $P_i$ with respect to $\nu$.  Let the sample space $X$ be a Borel subset of $n$-dimensional Euclidean space, and let $\mathcal{A}_x$ be the Borel sets on $X$.  If $G = \{g\colon g(x_1,\ldots,x_n) = (g(x_1),\ldots,g(x_n))\}$ and $\nu$ is a probability distribution $P_0$ whose density is given by $f_0$, then

$$\tilde f_i = S(f_i,x)/S(f_0,x),$$

where for a number of $g(x)$'s the $S(f,x)$'s are given in Table 3.2.1.  The function $S(f_i,x)$ will then be called the selection statistic for the $i$-th family of distributions.
Table 3.2.1.  Selection statistics for some transformations.

  g(x)                                                      S(f,x)
  $x-\mu;\ -\infty<\mu<\infty$                              $\int_{-\infty}^{\infty} f(x_1-\mu,\ldots,x_n-\mu)\,d\mu$
  $\lambda x;\ \lambda>0$                                   $\int_0^{\infty} f(\lambda x_1,\ldots,\lambda x_n)\,\lambda^{n-1}\,d\lambda$
  $\lambda x-\mu;\ -\infty<\mu<\infty,\ \lambda>0$          $\int_0^{\infty}\int_{-\infty}^{\infty} f(\lambda x_1-\mu,\ldots,\lambda x_n-\mu)\,\lambda^{n-2}\,d\mu\,d\lambda$
  $\lambda x^{\delta};\ \lambda,\delta>0$                   $\int_0^{\infty}\int_0^{\infty} f(\lambda x_1^{\delta},\ldots,\lambda x_n^{\delta})\,\lambda^{n-1}\delta^{n-2}\bigl(\prod_{i=1}^{n} x_i\bigr)^{\delta-1}\,d\delta\,d\lambda$
The last expression in Table 3.2.1 follows from the transformations $g(z) = \lambda z - \mu$; $\lambda > 0$, $-\infty < \mu < \infty$, a location-scale invariant in $z$-space, which are equivalent to the transformations $g(x) = \lambda x^{\delta}$; $\lambda, \delta > 0$, in the $x$-space, where $z = \ln x$.  The latter transformations assure that the selection procedure (based on the original observations) is not affected by multiplying each observation by a real positive number or raising it to a real positive power.  It will be recalled that such a procedure is invariant under scale and shape changes.  Therefore, the four transformations above will be called invariant under location, scale, location-scale and scale-shape changes, respectively.

Hence, the selection procedure is equivalent to the rule: select the $j$-th family of distributions if

$$S(f_j,x) = \max\{S(f_i,x),\ i=1,2,\ldots,k\}.$$
3.3.  Some Remarks and Properties
For $f_i$ a density with respect to a $\sigma$-finite measure $\nu$ corresponding to $P_i$,

$$P(A) = \int_A f_i\,d\nu, \qquad A \in \mathcal{A}_x,$$

depends on the parameter(s) chosen, but

$$P(B) = \int_B f_i\,d\nu = \int_B \tilde f_i\,d\nu, \qquad \forall\, B \in \mathcal{B}, \tag{3.3.1}$$

does not depend on the parameter(s).  Indeed, $\tilde f_i = E_\nu(f_i \mid \mathcal{B})$, the conditional expectation of $f_i$ given $\mathcal{B}$.  The integral in (3.3.1) is well defined for $\nu$ any $\sigma$-finite measure on $\mathcal{B}$.  For example, $\nu$ can be taken to be the probability distribution $P_0$ whose density is given by $f_0$.
If all $k$ families are conformable with the corresponding group $G = \{g\colon g(x) = \lambda x - \mu;\ -\infty<\mu<\infty,\ \lambda>0\}$, this selection procedure is the same as that given by Hogg et al. (1972).
Consider next the case of telescoping families, i.e., where $P_1 \subset P_2$ or $P_2 \subset P_1$.  For $f_1$ choose any member of the class of densities corresponding to $P_1$; then for the case $P_1 \subset P_2$, $f_1$ is also a density member corresponding to $P_2$, and therefore

$$\tilde f_1 = E(f_1 \mid \mathcal{B}) = \tilde f_2.$$

From this it is clear that the procedure cannot select from telescoping families, since it will refer only to the largest family under consideration.
Let $P_1, P_2$ be conformable families defined on $(X, \mathcal{A}_x)$, where $X$ is an $n$-dimensional Euclidean space.  Let $\mathcal{B}$ be the Borel sets of $X$, and let $x$ be the observed values.  In the composite hypothesis testing framework, i.e., for the testing problem

$$H_0:\ X \sim P \in P_1 \qquad \text{versus} \qquad H_A:\ X \sim P \in P_2,$$

then

1)  it follows immediately from the Neyman-Pearson lemma that the U.M.P.I. test is given by rejecting $H_0$ for

$$S(f_2,x)/S(f_1,x) > c,$$

where $c$ is chosen to give the desired significance level.  If $\phi = I[S(f_2,x)/S(f_1,x) > c]$, clearly it is an invariant test function.

2)  The selection procedure will minimize $\alpha_1 + \alpha_2$ in the class of all invariant procedures, where

$$\alpha_1 = \text{Type I error} = \int_{CR} \tilde f_1\,d\nu, \qquad \alpha_2 = \text{Type II error} = \int_{X-CR} \tilde f_2\,d\nu,$$

and $CR$ is the critical region.  Equivalently, if it is assumed that each family of distributions has a prior probability of $\tfrac{1}{2}$, with no loss for a correct decision and equal losses for incorrect decisions, then the procedure will minimize $(\alpha_1 + \alpha_2)/2$, the Bayes risk or total probability of misclassification.
The selection procedure is transitive in the sense that when it selects $P_1$ over $P_2$, and $P_2$ over $P_3$, then it automatically selects $P_1$ over $P_3$.  This property generalizes naturally to multiway selection rules.
4.  SELECTING WEIBULL, LOGNORMAL AND GAMMA MODELS

4.1.  Introduction
The selection rule used here to decide which of the families of distributions best fits the data is as given in Chapter 3.  That is, select that one of the three families considered (Weibull, lognormal and gamma) that corresponds to

$$S(f_j,x) = \max\{S(f_i,x);\ i = 1,2,3\},$$

where $S(f_i,x)$ is called the selection statistic for the $i$-th family of distributions.
From this point on, the indices 1, 2 and 3 will denote
the Weibull, lognormal, and gamma distributions, respectively.
In
this section the selection statistics to be considered for both
complete and censored samples will be derived.
From Chapter 3, it can be observed that the Weibull and lognormal families are conformable, where the corresponding group is $G = \{g\colon g(x) = \lambda x^{\delta};\ \lambda, \delta > 0\}$ in the sample space, and

$$\bar G_1 = \{\bar g\colon \bar g(\theta,\beta) = (\lambda\theta^{\delta},\ \beta/\delta);\ \lambda, \delta > 0\} \quad \text{for Weibull families}$$

and

$$\bar G_2 = \{\bar g\colon \bar g(\theta,\sigma) = (\lambda\theta^{\delta},\ \delta\sigma);\ \lambda, \delta > 0\} \quad \text{for lognormal families}$$

in their parameter spaces.  For the gamma family, the group $G$ above does not correspond to any group defined on its parameter space that is transitive, so the gamma family of distributions is not conformable to either the Weibull or lognormal families of distributions.  If the shape parameter of the gamma distribution is known, the group $G = \{g\colon g(x) = \lambda x;\ \lambda > 0\}$ corresponds to $\bar G_3 = \{\bar g\colon \bar g(\theta) = \lambda\theta;\ \lambda > 0\}$, which is transitive on the parameter space.
For complete samples, let $X_1,\ldots,X_n$ be a random sample of size $n$ having probability density function $f(x)$.  For the gamma family, it is assumed that the shape parameter is known.  In applications, the shape parameter will be replaced by its M.L.E., which is invariant under scale changes.  Let $f(x_1,\ldots,x_n)$ be the joint density function of $X_1 = x_1,\ldots,X_n = x_n$; then by the procedure given in Section 3.2, the selection statistic is given by

$$S(f,x) = \int_0^{\infty}\!\!\int_0^{\infty} f(\lambda x_1^{\delta},\ldots,\lambda x_n^{\delta})\,\lambda^{n-1}\delta^{n-2}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\delta-1} d\delta\,d\lambda. \tag{4.1.1}$$

There are no closed form expressions for this integral for the Weibull and gamma families of distributions, so it will be evaluated by numerical integration.
For censored samples, let $X_1,\ldots,X_n$ be a random sample having probability density function $f_\theta(x)$ and distribution function $F_\theta(x)$, where $r$ values are observed, i.e., have values less than or equal to the truncation point $T$.  Note that the number $r$ itself is a random variable, distributed as a binomial random variable with probability function $b(n, F(T))$.  Let $x_1,\ldots,x_r$ be the observed values indexed in the same order as the original sample, and $x_{(1)},\ldots,x_{(r)}$ be the corresponding order statistics.  The likelihood function corresponding to $x_{(1)},\ldots,x_{(r)}$ and $r$ is

$$L_\theta(x_{(1)},\ldots,x_{(r)},r;T) = \frac{n!}{(n-r)!}\{1 - F_\theta(T)\}^{n-r}\prod_{i=1}^{r} f_\theta(x_{(i)}). \tag{4.1.2}$$

Using (4.1.2) in (4.1.1) gives a selection statistic that is very difficult to evaluate and requires complex numerical integration techniques.
For this reason, and because it performs well, as will be
seen later, the scale invariant approach will be used here.
The scale
invariant selection statistic is defined by
$$S = \int_0^{\infty} L(\lambda x_{(1)},\ldots,\lambda x_{(r)},r;\lambda T)\,\lambda^{r-1}\,d\lambda, \tag{4.1.3}$$
where the shape parameter in S will be replaced by the maximum likelihood estimate.
There are no closed form solutions for (4.1.3) for
lognormal and gamma families of distributions, so, again, numerical
integration techniques will be used.
A maximum likelihood approach will also be considered, where the
selection statistic is defined as
$$S = \sup_{\theta}\{L_\theta(x_{(1)},\ldots,x_{(r)},r;T)\}. \tag{4.1.4}$$
The derivations of the selection statistics for these approaches
are given in Sections 4.2 through 4.4.
4.2.  Selection Statistics for the Weibull Family
A random variable $X$ has a Weibull distribution with scale parameter $\theta$ and shape parameter $\beta$, denoted by $W(\theta,\beta)$, if it has a probability density function of the form

$$f_X(x) = \frac{\beta}{\theta}\Bigl(\frac{x}{\theta}\Bigr)^{\beta-1}\exp\Bigl\{-\Bigl(\frac{x}{\theta}\Bigr)^{\beta}\Bigr\}\cdot I_{(0,\infty)}(x); \qquad \theta,\beta > 0. \tag{4.2.1}$$

The corresponding Weibull distribution function is

$$F_X(x) = \Bigl[1 - \exp\Bigl\{-\Bigl(\frac{x}{\theta}\Bigr)^{\beta}\Bigr\}\Bigr]\cdot I_{(0,\infty)}(x). \tag{4.2.2}$$
4.2.1.  Complete Samples

The special case of the Weibull density function for $\theta = \beta = 1$ is denoted by

$$f_1(x) = \exp(-x)\cdot I_{(0,\infty)}(x);$$

hence the joint density function for a random sample of size $n$ is

$$f_1(x_1,\ldots,x_n) = \exp\Bigl(-\sum_{i=1}^{n} x_i\Bigr)\cdot I_{(0,\infty)}(x_{(1)}), \tag{4.2.1.1}$$

where $x_{(1)}$ is the smallest order statistic.  Therefore, by applying (4.2.1.1) to (4.1.1), the scale-shape invariant selection statistic for the Weibull distribution is

$$S(f_1,x) = \int_0^{\infty}\!\!\int_0^{\infty}\exp\Bigl(-\lambda\sum_{i=1}^{n} x_i^{\delta}\Bigr)\lambda^{n-1}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\delta-1}\delta^{n-2}\,d\delta\,d\lambda = \Gamma(n)\int_0^{\infty}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\delta-1}\Bigl(\sum_{i=1}^{n} x_i^{\delta}\Bigr)^{-n}\delta^{n-2}\,d\delta.$$

The value of $S(f_1,x)$ will be computed using Simpson's rule after transforming the integral onto the (0,1) interval.
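A minimal sketch of this computation follows (the thesis does not specify the exact change of variables; the substitution $\delta = t/(1-t)$, which maps $(0,\infty)$ onto $(0,1)$, and the Python form are assumptions, and the data are taken to be pre-scaled as in Section 5.1):

```python
import math

def log_weibull_scale_shape_stat(x, m=101):
    """log S(f1,x) = log Gamma(n) + log of the delta-integral, evaluated by
    Simpson's rule with m points (m odd) on (0,1) via delta = t/(1-t)."""
    n = len(x)
    sum_log_x = sum(math.log(v) for v in x)

    def integrand(t):
        if t <= 0.0 or t >= 1.0:
            return 0.0                         # integrand vanishes at both endpoints
        d = t / (1.0 - t)
        logs = [d * math.log(v) for v in x]
        mx = max(logs)                         # log-sum-exp for sum(x_i^delta)
        log_sum_xd = mx + math.log(sum(math.exp(s - mx) for s in logs))
        log_g = (d - 1.0) * sum_log_x - n * log_sum_xd + (n - 2) * math.log(d)
        return math.exp(log_g) / (1.0 - t) ** 2   # Jacobian d(delta)/dt

    h = 1.0 / (m - 1)
    total = sum((1.0 if i in (0, m - 1) else 4.0 if i % 2 else 2.0) * integrand(i * h)
                for i in range(m))
    return math.lgamma(n) + math.log(total * h / 3.0)
```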
4.2.2.  Censored Samples

Case 1.  Maximum likelihood procedure.
When the density function (4.2.1) and distribution function (4.2.2) are substituted into (4.1.2), the likelihood function for the censored sample is given by

$$L_{\theta,\beta}(x_{(1)},\ldots,x_{(r)},r;T) = \frac{n!}{(n-r)!}\,\beta^{r}\theta^{-r\beta}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{\beta-1}\exp\Bigl\{-\theta^{-\beta}\Bigl(\sum_{i=1}^{r} x_i^{\beta} + (n-r)T^{\beta}\Bigr)\Bigr\}.$$

Maximum likelihood estimates of $\theta$ and $\beta$ are given by the solution to the maximum likelihood equations (Cohen, 1965),

$$\frac{\sum_{i=1}^{r} x_i^{\beta}\ln x_i + (n-r)T^{\beta}\ln T}{\sum_{i=1}^{r} x_i^{\beta} + (n-r)T^{\beta}} - \frac{1}{\beta} = \frac{1}{r}\sum_{i=1}^{r}\ln x_i \tag{4.2.2.1}$$

and

$$\hat\theta = \Bigl(\frac{\sum_{i=1}^{r} x_i^{\hat\beta} + (n-r)T^{\hat\beta}}{r}\Bigr)^{1/\hat\beta}.$$

The solution $\hat\beta$ is found for this nonlinear equation using the Newton-Raphson method of finding roots of equations.  Also, it is readily seen that $\hat\beta$ is invariant under scale changes.  Therefore, the maximum likelihood selection statistic is given by

$$S_1 = \frac{n!}{(n-r)!}\,\hat\beta^{r}\hat\theta^{-r\hat\beta}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{\hat\beta-1}\exp\Bigl\{-\hat\theta^{-\hat\beta}\Bigl(\sum_{i=1}^{r} x_i^{\hat\beta} + (n-r)T^{\hat\beta}\Bigr)\Bigr\}.$$
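A minimal sketch of this estimation step is given below (the starting value, tolerance, and the finite-difference derivative used in place of an analytic one are assumptions not specified in the text):

```python
import math

def weibull_censored_mle(x, n, T, beta0=1.0, tol=1e-4, max_iter=50):
    """Solve Cohen's equation (4.2.2.1) for the Weibull shape beta under
    type I censoring by Newton-Raphson, then compute the scale estimate."""
    r = len(x)
    mean_log = sum(math.log(v) for v in x) / r

    def g(beta):
        s1 = sum(v ** beta * math.log(v) for v in x) + (n - r) * T ** beta * math.log(T)
        s0 = sum(v ** beta for v in x) + (n - r) * T ** beta
        return s1 / s0 - 1.0 / beta - mean_log

    beta = beta0
    for _ in range(max_iter):
        h = 1e-6 * max(beta, 1.0)                       # numerical derivative step
        step = g(beta) / ((g(beta + h) - g(beta - h)) / (2.0 * h))
        beta -= step
        if abs(step) < tol:
            break
    theta = ((sum(v ** beta for v in x) + (n - r) * T ** beta) / r) ** (1.0 / beta)
    return theta, beta
```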
Case 2.  Scale invariant procedure.

The special case of the Weibull density for scale parameter $\theta = 1$ is denoted by

$$f_1(x) = \beta x^{\beta-1}\exp(-x^{\beta})\cdot I_{(0,\infty)}(x).$$

Therefore, the scale invariant selection statistic for the Weibull family is given by

$$S(f_1,x) = \frac{n!}{(n-r)!}\int_0^{\infty}\{1-F_1(\lambda T)\}^{n-r}\prod_{i=1}^{r} f_1(\lambda x_i)\,I_{(0,T)}(x_i)\,\lambda^{r-1}\,d\lambda$$
$$= \frac{n!}{(n-r)!}\int_0^{\infty}\bigl\{\exp(-(\lambda T)^{\hat\beta})\bigr\}^{n-r}\hat\beta^{r}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{\hat\beta-1}\lambda^{r\hat\beta-1}\exp\Bigl(-\lambda^{\hat\beta}\sum_{i=1}^{r} x_i^{\hat\beta}\Bigr)\,d\lambda$$
$$= \frac{n!}{(n-r)!}\,\Gamma(r)\,\hat\beta^{r-1}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{\hat\beta-1}\Bigl[\sum_{i=1}^{r} x_i^{\hat\beta} + (n-r)T^{\hat\beta}\Bigr]^{-r},$$

where $\hat\beta$ is given by (4.2.2.1).
4.3.  Selection Statistics for the Lognormal Family
A random variable $X$ has a lognormal distribution with scale parameter $\theta$ and shape parameter $\sigma$, denoted by $LN(\theta,\sigma)$, if it has a probability density function of the form

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\exp\Bigl[-\frac{\{\ln(x/\theta)\}^{2}}{2\sigma^{2}}\Bigr]\cdot I_{(0,\infty)}(x); \qquad \theta,\sigma > 0. \tag{4.3.1}$$

There is no closed form solution for the corresponding distribution function, i.e.,

$$F_X(x) = \Phi\bigl(\ln(x/\theta)/\sigma\bigr)\cdot I_{(0,\infty)}(x),$$

where the function $\Phi(x)$ is the probability distribution function of a standard normal (Gaussian) random variable.

4.3.1.  Complete Samples

The special case of the lognormal density function for $\theta = \sigma = 1$ is denoted by

$$f_2(x) = \frac{1}{\sqrt{2\pi}\,x}\exp\{-\tfrac{1}{2}(\ln x)^{2}\}\cdot I_{(0,\infty)}(x),$$

and the joint density function for a random sample of size $n$ is

$$f_2(x_1,\ldots,x_n) = (2\pi)^{-n/2}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{-1}\exp\Bigl\{-\tfrac{1}{2}\sum_{i=1}^{n}(\ln x_i)^{2}\Bigr\}\cdot I_{(0,\infty)}(x_{(1)}). \tag{4.3.1.1}$$
Therefore, by applying (4.3.1.1) to (4.1.1), the scale-shape invariant selection statistic for the lognormal distribution is

$$S(f_2,x) = \tfrac{1}{2}\,\pi^{-(n-1)/2}\,n^{-1/2}\,\Gamma\Bigl(\frac{n-1}{2}\Bigr)\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{-1}\Bigl[\sum_{i=1}^{n}(\ln x_i)^{2}-\Bigl(\sum_{i=1}^{n}\ln x_i\Bigr)^{2}\Big/n\Bigr]^{-(n-1)/2}.$$

This result corresponds to the one given by Hajek and Sidak (1967) for the location-scale invariant function for the normal distribution.
4.3.2.  Censored Samples

Case 1.  Maximum likelihood procedure.
When the density function (4.3.1) and its distribution function are substituted into (4.1.2), the likelihood function of the lognormal distribution for a censored sample is given by

$$L_{\theta,\sigma}(x_{(1)},\ldots,x_{(r)},r;T) = \frac{n!}{(n-r)!}\Bigl\{1-\Phi\Bigl(\frac{\ln(T/\theta)}{\sigma}\Bigr)\Bigr\}^{n-r}(\sigma\sqrt{2\pi})^{-r}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{-1}\exp\Bigl\{-\frac{1}{2\sigma^{2}}\sum_{i=1}^{r}\Bigl(\ln\frac{x_i}{\theta}\Bigr)^{2}\Bigr\}.$$

The estimators for $\theta$ and $\sigma$ are given by the solutions to the maximum likelihood equations,

$$\frac{1}{\sigma}\sum_{i=1}^{r}\ln\frac{x_i}{\theta} + (n-r)\,\frac{\psi(\xi)}{1-\Phi(\xi)} = 0$$

and

$$-r + \frac{1}{\sigma^{2}}\sum_{i=1}^{r}\Bigl(\ln\frac{x_i}{\theta}\Bigr)^{2} + (n-r)\,\frac{\xi\,\psi(\xi)}{1-\Phi(\xi)} = 0, \tag{4.3.2.1}$$

where $\xi = \ln(T/\theta)/\sigma$ and

$$\psi(x) = \frac{1}{\sqrt{2\pi}}\exp(-\tfrac{1}{2}x^{2}).$$
It is readily seen that $\hat\sigma$ is invariant under scale changes.  The maximum likelihood estimators for $\theta$ and $\sigma$ and the iterative procedure are based on results of Harter and Moore (1966), adjusted to type I censoring.  It is to be noted that Harter and Moore used the notation $\mu$ for $\ln\theta$.  Another procedure based on type I censoring is given in Aitchison and Brown (1963).  Therefore, the maximum likelihood selection statistic is given by

$$S_2 = \frac{n!}{(n-r)!}\Bigl\{1-\Phi\Bigl(\frac{\ln(T/\hat\theta)}{\hat\sigma}\Bigr)\Bigr\}^{n-r}(\hat\sigma\sqrt{2\pi})^{-r}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{-1}\exp\Bigl\{-\frac{1}{2\hat\sigma^{2}}\sum_{i=1}^{r}\Bigl(\ln\frac{x_i}{\hat\theta}\Bigr)^{2}\Bigr\}.$$

Case 2.  Scale invariant procedure.

The special case of the lognormal density for scale parameter $\theta = 1$ is denoted by

$$f_2(x) = \frac{1}{\sqrt{2\pi}\,\sigma x}\exp\Bigl\{-\frac{(\ln x)^{2}}{2\sigma^{2}}\Bigr\}\cdot I_{(0,\infty)}(x).$$
Therefore, the scale invariant selection statistic for the lognormal family of distributions is given by

$$S(f_2,x) = \frac{n!}{(n-r)!}\int_0^{\infty}\Bigl\{1-\Phi\Bigl(\frac{\ln\lambda T}{\hat\sigma}\Bigr)\Bigr\}^{n-r}(\sqrt{2\pi}\,\hat\sigma)^{-r}\prod_{i=1}^{r}(\lambda x_i)^{-1}\exp\Bigl\{-\frac{1}{2\hat\sigma^{2}}\sum_{i=1}^{r}(\ln\lambda x_i)^{2}\Bigr\}\lambda^{r-1}\,d\lambda.$$

Note that, with $\bar\ell = \frac{1}{r}\sum_{i=1}^{r}\ln x_i$ and $u = \ln\lambda$,

$$\sum_{i=1}^{r}(u+\ln x_i)^{2} = r(u+\bar\ell)^{2} + \sum_{i=1}^{r}(\ln x_i-\bar\ell)^{2};$$

hence,

$$S(f_2,x) = \frac{n!}{(n-r)!}\,(\sqrt{2\pi}\,\hat\sigma)^{1-r}\,r^{-1/2}\Bigl(\prod_{i=1}^{r} x_i\Bigr)^{-1}\exp\Bigl\{-\frac{1}{2\hat\sigma^{2}}\sum_{i=1}^{r}(\ln x_i-\bar\ell)^{2}\Bigr\}\;E\Bigl[\Bigl\{1-\Phi\Bigl(\frac{U+\ln T}{\hat\sigma}\Bigr)\Bigr\}^{n-r}\Bigr],$$

where $U$ is a $N(-\bar\ell,\ \hat\sigma^{2}/r)$ random variable and $\hat\sigma$ is given by (4.3.2.1).  The value of $S(f_2,x)$ will be computed via Monte Carlo and importance sampling from the normal distribution.
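The following sketch illustrates the Monte Carlo step (a simple average over draws from the normal sampling law; the IMSL routine GGNML used in the original program is replaced here by Python's random module, and the function name is hypothetical):

```python
import math
import random

def lognormal_censoring_expectation(logx, n, T, sigma, M=1000, seed=1):
    """Monte Carlo estimate of E[{1 - Phi((U + ln T)/sigma)}^(n-r)],
    where U ~ N(-lbar, sigma^2/r) and lbar is the mean of ln x over the
    r uncensored observations."""
    r = len(logx)
    lbar = sum(logx) / r

    def std_normal_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    rng = random.Random(seed)
    total = 0.0
    for _ in range(M):
        u = rng.gauss(-lbar, sigma / math.sqrt(r))     # draw from the sampling law
        total += (1.0 - std_normal_cdf((u + math.log(T)) / sigma)) ** (n - r)
    return total / M
```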
4.4.  Selection Statistics for the Gamma Family
A random variable $X$ has a gamma distribution with scale parameter $\theta$ and shape parameter $\alpha$, denoted by $G(\theta,\alpha)$, if it has a probability density function of the form

$$f_X(x) = \frac{x^{\alpha-1}}{\theta^{\alpha}\Gamma(\alpha)}\exp\Bigl(-\frac{x}{\theta}\Bigr)\cdot I_{(0,\infty)}(x); \qquad \theta,\alpha > 0. \tag{4.4.1}$$

There is no closed form solution for the distribution function for any arbitrary $\alpha$.  However, when $\alpha$ is a positive integer the distribution is also known as the Erlang distribution, and then the distribution function is given by

$$F_X(x) = \Bigl[1-\exp\Bigl(-\frac{x}{\theta}\Bigr)\sum_{j=0}^{\alpha-1}\frac{(x/\theta)^{j}}{j!}\Bigr]\cdot I_{(0,\infty)}(x).$$
4.4.1.  Complete Samples
It has already been observed that this family is not conformable under the transformations $G = \{g\colon g(x) = \lambda x^{\delta};\ \lambda, \delta > 0\}$.  Assuming that $\alpha$ is known, the special case of the gamma density for $\theta = 1$ is denoted by

$$f_3(x) = \frac{x^{\alpha-1}}{\Gamma(\alpha)}\exp(-x)\cdot I_{(0,\infty)}(x);$$

hence, the joint density function for a random sample of size $n$ is

$$f_3(x_1,\ldots,x_n) = \bigl(\Gamma(\alpha)\bigr)^{-n}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\alpha-1}\exp\Bigl(-\sum_{i=1}^{n} x_i\Bigr)\cdot I_{(0,\infty)}(x_{(1)}). \tag{4.4.1.1}$$

Therefore, the scale-shape invariant selection function for the gamma family is

$$S(f_3,x) = \int_0^{\infty}\!\!\int_0^{\infty}\Gamma^{-n}(\hat\alpha)\Bigl(\prod_{i=1}^{n}\lambda x_i^{\delta}\Bigr)^{\hat\alpha-1}\exp\Bigl(-\lambda\sum_{i=1}^{n} x_i^{\delta}\Bigr)\lambda^{n-1}\delta^{n-2}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\delta-1} d\delta\,d\lambda$$
$$= \Gamma^{-n}(\hat\alpha)\,\Gamma(n\hat\alpha)\int_0^{\infty}\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{\delta\hat\alpha-1}\Bigl(\sum_{i=1}^{n} x_i^{\delta}\Bigr)^{-n\hat\alpha}\delta^{n-2}\,d\delta,$$
where $\hat\alpha$ is the maximum likelihood estimator for $\alpha$.  The evaluation of $S(f_3,x)$ is performed in the same way as in Section 4.2.1.
The maximum likelihood estimator $\hat\alpha$ suggested by Greenwood and Durand (1960) is used, i.e.,

$$\hat\alpha = \begin{cases} y^{-1}\,(0.5000876 + 0.1648852\,y - 0.0544274\,y^{2}), & 0 < y \le 0.5772, \\[4pt] y^{-1}(17.79728 + 11.968477\,y + y^{2})^{-1}(8.898919 + 9.059950\,y + 0.9775373\,y^{2}), & 0.5772 < y \le 17, \end{cases}$$

where

$$y = \ln\Bigl(\frac{\text{arithmetic mean}}{\text{geometric mean}}\Bigr) = \ln\Bigl[\Bigl(\sum_{i=1}^{n} x_i/n\Bigr)\Big/\Bigl(\prod_{i=1}^{n} x_i\Bigr)^{1/n}\Bigr].$$
Bowman and Shenton (1968) found that the error of these approximate solutions does not exceed 0.0088% and 0.0054% when $0 < y \le 0.5772$ and $0.5772 < y \le 17$, respectively.
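A direct transcription of this approximation (a sketch; constants as reconstructed above, which should be checked against the original reference before use) is:

```python
import math

def greenwood_durand_alpha(x):
    """Greenwood-Durand (1960) approximation to the gamma shape MLE,
    with y = ln(arithmetic mean / geometric mean)."""
    n = len(x)
    y = math.log(sum(x) / n) - sum(math.log(v) for v in x) / n
    if 0.0 < y <= 0.5772:
        return (0.5000876 + 0.1648852 * y - 0.0544274 * y * y) / y
    if 0.5772 < y <= 17.0:
        return ((8.898919 + 9.059950 * y + 0.9775373 * y * y)
                / (y * (17.79728 + 11.968477 * y + y * y)))
    raise ValueError("y outside the range covered by the approximation")
```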
4.4.2.  Censored Samples

Case 1.  Maximum likelihood procedure.
are substituted into (4.1.2), the following expression results,
28
n!
(n-r)!
I -
1
(l)ro.
( r11 x ) exp (
- r Xi)
(f(o.))r e
i=l i
i=l e
n!
T
n-r
(f(o.)) -n{f(o.) -f(-e'o.)}
. e -ro.( rTI Xo{
(n-r) !
i=l ...
-:--~-:-
=
)0.-1 exp (~Xi)
-e
'
L
i=l
where
a
f(a,b)
=
f s b-l exp(-s)ds
o
The maximum likelihood estimators for
e
and
and the
0.
iterative procedure are obtained based upon the results of Harter and
Moore (1965), adjusted to type I censoring.
0.
e
The estimators
and
are given by the solution to the maximum likelihood equations,
-f-(o.-~)-=n::"'~---/=-(~-,-a-) (tl" exp (e
n
~
1
o.r + -:::-
e
r
o,
LX.
i=l ~
(4.4.2.1)
and
n- r
[fl(a) _ fl(:,a)]
f(a) - f(~,a)
e
e
~
n
~
where the primes indicate differentiation with respect to
readily seen that
r
'\
- rine - ----f'(o.) + L inx.=O ,
fC a )
i=l
~
& is invariant under scale changes.
maximum likelihood selection statistic is given by
&.
It is
Therefore, the
29
~
IT
i=l
Case 2.
~ (Ei-I x .))
a- l
e~-ra(
x.1 )
exp (-
e
,
1
Scale invariant procedure.
e =1
The special case of the gamma density for scale parameter
is denoted by
f
xa-I
Cx) = 3
fCa)
Therefore, the scale invariant selection function for the gamma family
'of distributions is given by
r
~
co
r
AT
1
Ll- rCa) J
o
J
o
exp (-
=
n!
Cn-r)!
.
r-l:(a)
s
~-l expC-s) 'dS]
E
n-r"
(~
1
AX.
i=l
1.
rr C&)
dA,
i=l
(r
)~-l( L
r
)-r&
J x.
x
i=1 1
i=l i
n-r
exp(-s)ds]
-
1
rea)
.-",.,-
1
1-
1
&-1
S
exp(-u)'u ra.-l du
1
- fCa)
=
n-r
exp(-s)ds )
r
TuIL·_lx.
,
Tut''':
r [,1= IX.1
J
o
s
a-I
3Q
where
of
u
is a
S(f .x)
3
G(l.rei)
r.y. and
"
CI.
is given by (4.4.2.1).
The value
will be computed via Monte Carlo and importance sampling
from the gamma distribution.
I
5.  DISCUSSION OF PROGRAMMING

5.1.  Complete Samples
The selection procedure used is invariant under scale-shape changes, so without affecting the conclusion, after estimating the shape parameter of the gamma family, the observations are transformed by $g(x) = ax^{b}$, where $a$ and $b$ are determined such that $(x_{(1)}, x_{(n)})$ becomes $(0.1, 10)$.  The logarithms of the selection statistics will be computed instead of the selection statistics themselves, since this is more convenient computationally.
Having no closed forms for the selection statistics for the Weibull and gamma families of distributions raised considerable difficulties in their evaluation.  It is to be noted that the selection statistic for the Weibull family is a special case of the selection statistic for the gamma family, i.e., $\alpha = 1$.  One possible approach to evaluate the selection statistics is integration by sampling (Davis and Rabinowitz, 1975).  For

$$S = \int_0^{\infty} g(\lambda)\,d\lambda, \tag{5.1.1}$$

the region of integration is divided into $(0,A)$ and $[A,\infty)$, where $A$ is determined to be 1 for $\hat\alpha \le \tfrac{1}{2}$; otherwise $A$ is obtained as the smallest positive integer less than 10 such that $g(A+1) < g(A)$.  Then $S$ is evaluated by taking the sampling distribution to be the uniform distribution on the interval $(0,A)$, and the exponential distribution with location parameter $A$ on the interval $[A,\infty)$.
It can be noted that for any given $\varepsilon > 0$ there exists a positive number $V$ such that $\int_V^{\infty} g(\lambda)\,d\lambda < \varepsilon$.  Some samples were generated to observe the functional shape of $g(\lambda)$.  Simpson's rule (Davis and Rabinowitz, 1975) was also used to evaluate the selection statistics numerically.  In this approach, the region of integration is first transformed onto the (0,1) interval.  Considering the "stability" and time needed for the simulation, Simpson's rule is preferred.
For example, for samples generated from W(1,2), let M be the number of evaluations used for numerical integration.  Note that the samples generated here are different than in the simulation study.  The estimate of probability of correct classification (based on 1000 samples) and CPU-time for different M's are given in Table 5.1.1.

Table 5.1.1.  Correct classification and CPU-time needed on samples generated from W(1,2) with sample size 10.

  Method     M     Two-way vs LN   Two-way vs G   Three-way   CPU-time (min.)
  Sampling   20    .627            .563           .482        1:49
  Sampling   40    .625            .567           .512        3:22
  Sampling   80    .631            .560           .521        6:18
  Simpson    41    .628            .560           .560        0:46
  Simpson    101   .628            .560           .560        1:48
Another reasonable way to perform the numerical integration of (5.1.1) is to integrate from 0 to a number, say 10.  The integration then is done over successive intervals of fixed length until the relative increase in value is less than a prescribed small percentage.  This approach appears preferable to the previous ones, especially for small values of $\hat\alpha$, say less than .2.
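A minimal sketch of this stopping rule is shown below (the interval length, tolerance, and use of Simpson's rule on each piece are assumptions; the text does not fix these values):

```python
def integrate_by_intervals(g, step=0.5, upper=10.0, rel_tol=0.001, points=21):
    """Integrate g over successive intervals of fixed length, stopping when
    the relative increase of the running total falls below rel_tol."""
    def simpson(a, b):
        h = (b - a) / (points - 1)               # points must be odd
        total = 0.0
        for i in range(points):
            w = 1.0 if i in (0, points - 1) else (4.0 if i % 2 else 2.0)
            total += w * g(a + i * h)
        return total * h / 3.0

    total, a = 0.0, 0.0
    while a < upper:
        piece = simpson(a, a + step)
        total += piece
        if total > 0.0 and piece / total < rel_tol:
            break
        a += step
    return total
```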
In order to estimate the power of the procedure, and to obtain results that can be compared with those of KQ, samples from nine parent distributions, W(1,½), W(1,2), W(1,4), LN(1,0.4), LN(1,1), LN(1,2.5), G(1,½), G(1,2), and G(1,5), were generated for sample sizes n = 10, 20, and 30.  The random numbers for the Weibull, lognormal and gamma families were generated by the IMSL (1979) subroutines GGWIB, GGNLG, and GGAMR, respectively.  One thousand samples were generated from each of these distributions.  The results and discussion are given in detail in Chapter 6.
5.2.  Censored Samples
Having initial estimates close to the final estimates can appreciably reduce the number of iterations required in the estimation procedures.  In finding the MLE's of the scale and shape parameters of the lognormal and gamma families, moment estimates were used as the first approximations.  Moment estimates generally are easy to calculate, but they are generally defined for complete samples.  Therefore, a pseudo complete sample is reconstructed from a type I censored sample by defining

$$x_{(r+i)} = T + (i-1)\,d, \qquad \text{for all } i = 1,\ldots,(n-r),$$

where $d = (x_{(r)} - x_{(1)})/(r-1)$.  Hence, the initial estimates for the scale and shape parameters of the lognormal family are obtained from the sample mean and variance of the logarithms of the pseudo complete sample, and those of the gamma family are

$$\hat\theta_0 = v/\bar x \qquad \text{and} \qquad \hat\alpha_0 = \bar x^{2}/v,$$

where

$$v = \frac{1}{n}\sum_{i=1}^{n}(x_{(i)}-\bar x)^{2} \qquad \text{and} \qquad \bar x = \sum_{i=1}^{n} x_{(i)}/n.$$
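A sketch of this initialization step follows (the lognormal starting values taken from the log-moments of the pseudo sample are an assumption, since the original formulas are not fully legible; at least two uncensored observations are assumed):

```python
import math

def initial_estimates(x_obs, n, T):
    """Build a pseudo complete sample from a type I censored sample and
    return moment-based starting values: gamma (theta0 = v/xbar,
    alpha0 = xbar^2/v) and lognormal (assumed: log-moment estimates)."""
    r = len(x_obs)
    d = (max(x_obs) - min(x_obs)) / (r - 1)
    pseudo = sorted(x_obs) + [T + (i - 1) * d for i in range(1, n - r + 1)]

    xbar = sum(pseudo) / n
    v = sum((xi - xbar) ** 2 for xi in pseudo) / n
    theta0_gamma, alpha0_gamma = v / xbar, xbar * xbar / v

    logs = [math.log(xi) for xi in pseudo]
    mu = sum(logs) / n
    sigma0 = math.sqrt(sum((s - mu) ** 2 for s in logs) / n)   # assumed form
    return (theta0_gamma, alpha0_gamma), (math.exp(mu), sigma0)
```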
Scale invariant selection statistics for the lognormal and gamma families have no closed forms.  The expected values in the selection statistics are then estimated by sampling, i.e., for $n \neq r$,

$$E\{\cdot\} \approx \frac{1}{M}\sum_{i=1}^{M} h_i,$$

where, for the lognormal family,

$$h_i = \Bigl\{1-\Phi\Bigl(\frac{u_i+\ln T}{\hat\sigma}\Bigr)\Bigr\}^{n-r},$$

and $u_i$ is a $N\bigl(-\tfrac{1}{r}\sum_{j=1}^{r}\ln x_j,\ \hat\sigma^{2}/r\bigr)$ r.v. (if $w_i$ is obtained from the IMSL subroutine GGNML, the standard normal distribution, then $u_i = -\tfrac{1}{r}\sum_{j=1}^{r}\ln x_j + \hat\sigma w_i/\sqrt{r}$); and, for the gamma family,

$$h_i = \Bigl[1-\frac{1}{\Gamma(\hat\alpha)}\int_0^{Tu_i/\sum_{j=1}^{r} x_j} s^{\hat\alpha-1}\exp(-s)\,ds\Bigr]^{n-r},$$

where $u_i$ is a $G(1,r\hat\alpha)$ r.v. obtained from the IMSL subroutine GGAMR.

Standard deviations for the selection statistics are given in the user program; for example, for

$$S(f,x) = \frac{1}{M}\sum_{i=1}^{M} h_i,$$

the standard deviation is given by

$$\Bigl[\frac{\sum_{i=1}^{M} h_i^{2}-\bigl(\sum_{i=1}^{M} h_i\bigr)^{2}/M}{M(M-1)}\Bigr]^{1/2}.$$
Since the procedures considered here are invariant under scale changes, for the analyses the observations are multiplied by a factor of $10/T$, where $T$ is the truncation point.  The logarithms of the selection statistics and standard deviations are given in the user program, since they are computationally more convenient.

In the Monte Carlo simulations, the particular distributions which were studied were W(1,½), W(1,4), LN(1,0.4), LN(1,2.5), G(1,½), and G(1,2), for samples of size 30 with truncation point $T$ such that $F(T) = 0.9$, i.e., for a mean rate of 10% censoring.  The results and discussion are given in detail in Chapter 6.
The convergence criteria used in the simulations are as follows.  For the Weibull family, the convergence criterion for the change in $\hat\beta$ on the next iteration is $10^{-2}$, or at most 50 iterations.  For the lognormal family, the convergence criterion for the change in $\hat\theta$ and $\hat\sigma$ on the next iteration is $10^{-4}$, or at most 550 iterations.  For the gamma family, the convergence criterion for the change in $\hat\theta$ and $\hat\alpha$ on the next two consecutive iterations is $10^{-2}$, or at most 1100 iterations.  The number of evaluations for the expected values was 100.

In the user program, the convergence criteria were changed to $10^{-4}$ for the Weibull and gamma families, and the number of evaluations for the expected values was changed to 1000.
The same samples generated from W(1,½) were rerun with the new criteria as in the user program to get some information on the effects of these changes.  The results, the estimate of probability of correct classification (based on 36 samples), and CPU-time are given in Table 5.2.1.  There is no change in the results obtained for the maximum likelihood procedure.
Table 5.2.1.  The scale invariant correct classification and CPU-time needed on 36 samples generated from W(1,½).

  Type of program   Two-way vs LN   Two-way vs G   Three-way   CPU-time (min.)
  Simulation        .50             .86            .36         0.62
  User              .53             .92            .44         11.17
Empirically, if the observations are "heavily" censored then misleading results in the evaluation of the scale invariant statistics can occur.  In the user program, if the coefficient of variation, i.e., (standard deviation)/(selection statistic), is larger than 35%, then no output for the selection procedure based on scale invariance will be printed.
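A sketch of this guard, working on the logarithmic scale kept by the program (the function name is hypothetical):

```python
import math

def report_scale_invariant(log_stat, log_sd, cv_limit=0.35):
    """Suppress the scale invariant output when the coefficient of
    variation (standard deviation / selection statistic) exceeds 35%."""
    cv = math.exp(log_sd - log_stat)       # sd/statistic on the log scale
    if cv > cv_limit:
        return None                        # heavily censored: no output printed
    return log_stat
```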
6.  ON THE SELECTION OF WEIBULL, LOGNORMAL AND GAMMA DISTRIBUTIONS FOR COMPLETE AND CENSORED SAMPLES
Siswadi and C. P. Quesenberry
North Carolina State University
Raleigh, NC 27650
ABSTRACT
In a recent paper, Kent and Quesenberry [19] considered using a
statistic based on the value of the density function of a scale
maximal invariant to select the best fitting member of a collection
of parametric families of probability distributions for complete
samples of reliability or life data.
In the present work extensions
of this approach in two directions are given.
First, selection for
complete samples based on scale and shape invariant statistics is
considered.
Next, the selection problem for type I censored samples
is considered, and both scale invariant and maximum likelihood selection
procedures are studied.
The two-parameter (scale and shape) Weibu11,
lognormal, and gamma distributions are considered and applications
to real data examples are given, as well as results from a (small)
simulation study.
KEY WORDS
Selection procedures
Invariance
Truncated distributions
Type I censoring
Weibull distributions
Lognormal distributions
Gamma distributions
1.  INTRODUCTION
The two-parameter (scale and shape) Weibull, lognormal and gamma
distributions are all commonly used in reliability and life testing
problems.
The problem of selecting one of these three distributions
for a particular sample, either complete or censored, is a difficult
one, and it appears desirable to use a selection procedure that is as
efficient as possible.
Kent and Quesenberry [19], KQ, proposed a selection procedure
for this problem based upon scale transformation maximal invariants.
This paper will be referred to at a number of points in the development
below.
Other relevant literature includes a paper by Dumonceaux, Antle
and Haas [11] who examined maximum likelihood ratio (MLR) tests for
discriminating between two models with unknown location and scale
parameters, and compared empirically the power of MLR tests with that
of uniformly most powerful invariant (UMPI) tests for discriminating
between normal and Cauchy distributions.
They actually recommend the
MLR test over the UMPI test based on relatively good power and ease of
computation.
Dumonceaux and Antle [10]
gave an MLR procedure for
discriminating between Weibull and lognormal distributions that is
based on the fact that the logarithms of both Weibull and lognormal
random variables have location-scale parameter distributions.
In a
recent paper, Bain and Engelhardt [2] considered a likelihood ratio
selection statistic for selecting between gamma and Weibull distributions.
Some graphical procedures for the selection problem have been
given by Nelson [21], and by Barlow and Campo [3].
Other papers that
are related to the present work include Hogg et al. [18], who discuss a number of selection procedures; Dyer ([12], [13]), who considers a number of selection procedures and tests for discriminating between pairs of classes of location-scale distributions; and Uthoff ([24], [25]), who considers some particular invariant statistics.  As general references for invariant tests see Hajek and Sidak [15] and Lehmann [20], and for MLR tests see Cox [8].  Volodin [26] considers a generalized three-parameter gamma distribution and discriminates between two-parameter gamma and Weibull distributions by making scale invariant tests on the other parameters.
As mentioned above, KQ considered selecting the gamma, lognormal and Weibull families for the complete sample problem, and based selection upon the optimal scale invariant statistic with the shape parameter replaced by a ML estimator.  Such procedures were called suboptimal, and the selection statistics for the three families were set out in simple closed form formulas in that paper.
In the present work we will consider two major changes in the
approach and problem considered in KQ.
First, for the lognormal and Weibull families we will use optimal scale and shape invariant selection statistics.
Also, we consider here this selection problem for
type I censored samples as well as for the complete samples case.
For these cases the selection statistics are generally expressed as
definite integrals whose evaluation involves numerical integration
techniques.
Thus part of this work has necessarily been concerned with
the development of computer algorithms
to evaluate these integrals.
2.  DENSITIES AND SELECTION FUNCTIONS
In many applied problems it is reasonable to assume that the
location parameters of life distributions are known, and we will here
consider distributions with only scale and shape parameters unknown.
The densities of the gamma, lognormal and Weibull distributions for
both the complete and censored samples cases that we consider are
given in Table 1.
We consider type I censored samples, which are obtained when a
number of items are put on life test and observed for a previously
specified fixed time T.
Thus the parent density for the observed
lives is a truncated version of the complete samples density, as given
in Table 1.
The symbol $I_{(a,b)}(x)$ in Table 1, and elsewhere, is the indicator function of the interval $(a,b)$, i.e., $I_{(a,b)}(x) = 1$ if $a < x < b$, and is zero elsewhere.  The function $\Phi(x)$ is the probability distribution function of a standard normal (Gaussian) random variable.
Table 1.  Densities of Weibull, Lognormal and Gamma Distributions

  Name        Symbol               Density
  Weibull     $W(\theta,\beta)$    $\frac{\beta}{\theta}\bigl(\frac{x}{\theta}\bigr)^{\beta-1}\exp[-(x/\theta)^{\beta}]\cdot I_{(0,\infty)}(x);\ \ \theta,\beta > 0$
  Lognormal   $LN(\theta,\sigma)$  $\frac{1}{\sqrt{2\pi}\,\sigma x}\exp\{-[\ln(x/\theta)]^{2}/2\sigma^{2}\}\cdot I_{(0,\infty)}(x);\ \ \theta,\sigma > 0$
  Gamma       $G(\theta,\alpha)$   $\theta^{-\alpha}[\Gamma(\alpha)]^{-1}x^{\alpha-1}\exp(-x/\theta)\cdot I_{(0,\infty)}(x);\ \ \theta,\alpha > 0$
We denote the densities by $f_1$, $f_2$, and $f_3$ for the Weibull, lognormal and gamma classes, respectively.
The approach used here is of the same general form as that in KQ.
That is, a selection statistic, S, will be defined for each of the
three parametric classes, and that class with the largest valued
selection statistic will be chosen as the best fitting family for a
given sample.
Before defining selection statistics for these three classes of
distributions, we consider some transformation properties of these distributions.
If $X$ is a random variable with either a $W(\theta,\beta)$ or $LN(\theta,\sigma)$ distribution, then consider the transformation

$$Y = aX^{b}, \qquad a > 0,\ b > 0. \tag{2.1}$$

If $X$ is a $W(\theta,\beta)$ random variable, then $Y$ is a $W(a\theta^{b},\ \beta/b)$ random variable; and if $X$ is a $LN(\theta,\sigma)$ random variable, then $Y$ is a $LN(a\theta^{b},\ b\sigma)$ random variable.  Thus Weibull random variables are
Thus Weibull random variables are
transformed to Weibull random variables by (2.1), and lognormal random
variables are transformed to lognormal random variables by this transformation.
Unfortunat~ly,
for present purposes at least, the gamma
distributions do not share this property since a gamma random variable
is not always transformed to another gamma random variable by (2.1).
That is to say, the
G(6,a) class is not a scale-shape class that is
conformable with the lognormal and Weibull classes as defined by
Quesenberry and Starbuck [22J.
We will, nevertheless, use here a
selection statistic for the complete samples problem which is essentially
the value of the density function of a maximal invariant when each of
the three parents are assumed.
Thus for an observed sample we define the selection statistic for a density function $f_i$ $(i=1,2,3)$ by

$$S_i = \int_0^{\infty}\!\!\int_0^{\infty} f_i(\lambda x_1^{\delta},\ldots,\lambda x_n^{\delta})\,\lambda^{n-1}\delta^{n-2}\Bigl(\prod_{j=1}^{n} x_j\Bigr)^{\delta-1} d\delta\,d\lambda. \tag{2.2}$$

(See Lehmann [20] or Hajek and Sidak [15] for general introductory accounts of test invariance.)
Due to the property of the $G(\theta,\alpha)$ distribution discussed above, the selection statistic defined by (2.2) is still a function of the parameter $\alpha$.  We obtain the selection statistic by replacing $\alpha$ by its maximum likelihood estimator, $\hat\alpha$, in this function.  Thus the selection statistics which we use in this work are given in Table 2.
Table 2.  Scale-shape Invariant Selection Statistics for Complete Samples.

  Family               i    $S_i$
  $W(\theta,\beta)$    1    $\Gamma(n)\int_0^{\infty}\delta^{n-2}\bigl(\prod x_i\bigr)^{\delta-1}\bigl(\sum x_i^{\delta}\bigr)^{-n}\,d\delta$
  $LN(\theta,\sigma)$  2    $\tfrac{1}{2}\,\pi^{-(n-1)/2}\,n^{-1/2}\bigl(\prod x_i\bigr)^{-1}\,\Gamma[\tfrac{1}{2}(n-1)]\bigl[\sum\ln^{2}x_i-\bigl(\sum\ln x_i\bigr)^{2}/n\bigr]^{-(n-1)/2}$
  $G(\theta,\alpha)$   3    $\Gamma(n\hat\alpha)\,\Gamma^{-n}(\hat\alpha)\int_0^{\infty}\delta^{n-2}\bigl(\prod x_i\bigr)^{\delta\hat\alpha-1}\bigl(\sum x_i^{\delta}\bigr)^{-n\hat\alpha}\,d\delta$
These selection statistics were obtained directly from (2.2) and
the densities of Table 1.
The evaluation of these functions requires
numerical integration, except for the lognormal selection function.
It is often computationally easier to compute and compare the
logarithms of the selection statistics than the statistics themselves.
To estimate the parameter $\alpha$ of the gamma distribution, we use the ML estimator $\hat\alpha$ of Greenwood and Durand [14], which has been studied further by Bowman and Shenton [6], and was recently used and given in detail by KQ.
We have developed a FORTRAN program for the selection procedure
described above and used this program to study the procedure using
simulated samples.
Some of these results are given in the next section.
The selection procedures proposed above are closely related to separate families hypotheses tests, for pairs of the three classes, that are uniformly most powerful among the class of tests invariant (UMPI) with respect to the transformations of (2.1).  For the particular case of selecting either lognormal or Weibull distributions, using the selection statistics of Table 2 is equivalent to using the UMPI test statistic for classifying a sample into one of these two distribution families.
If $\alpha_1$ and $\alpha_2$ denote the probabilities that a sample from a lognormal parent will be classified as a Weibull and, conversely, that a Weibull sample will be classified as lognormal, respectively, then the above selection procedure will minimize $\alpha_1 + \alpha_2$ among all procedures invariant with respect to the transformations of (2.1).  Or, if equal prior probabilities of $\tfrac{1}{2}$ each are appropriate, then this procedure minimizes the total probability of misclassification, i.e., $(\alpha_1 + \alpha_2)/2$ (see KQ, section 3).
3.  SIMULATION RESULTS FOR COMPLETE SAMPLES
In this section we report results of a simulation study of the
selection rules proposed above.
In order to obtain results that can be compared with those of KQ we have generated samples from the nine parent distributions: W(1,½), W(1,2), W(1,4), LN(1,0.4), LN(1,1), LN(1,2.5), G(1,½), G(1,2) and G(1,5), for n = 10, 20, 30.  One thousand samples were generated from each of these distributions.  The pairwise selection error rates are given in Table 3 and the observed rates of correct classification in the 3-way procedure are given in Table 5.
This procedure has total error probabilities for the case of a
lognormal vs a Weibull that are the smallest possible for an invariant
procedure.
Comparison of the results for this case given in Table 3
and those in Table 4 of KQ shows that the present optimal procedure
has very little advantage over the suboptimal procedure of KQ.
For the
other two-way selection problems, gamma vs lognormal and gamma vs
Weibull, the comparisons of the procedures of this paper with those of
KQ are not clearcut.
This is because the gamma distribution does not admit a UMPI statistic with respect to the transformations of (2.1).
In view of these observations we recommend the selection procedures
set out in KQ on the grounds that (i) the selection statistics in
Table 2 of KQ have convenient formulas that are readily evaluated,
and (ii) the error rates achieved by those procedures appear to be
about as favorable as for those achieved by the much more computationally difficult scale-shape invariant procedures.
Also, Bain and Engelhardt [2] give in their Table 2 some probabilities of correct selection between gamma and Weibull
Table 3.
Selection Error Rates for Pairwise Procedures*
n
X'VW
X 'V LN
Total
10
20
30
.355
.213
.145
.284
.232
.179
.320
.223
.162
n
X 'V G(~)
10
20
30
.400
.371
.343
X 'V
10
20
30
10
20
30
G(~)
10
20
30
W(~)
.424
.378
.345
Total
.412
.375
.344
X 'V W(4)
.400
.371
.343
.399
.291
.228
X 'V G(2)
X 'V W(2)
.438
.418
.395
.435
.362
.331
X 'V G(5)
10
20
30
X 'V
X 'V
.424
.378
.345
X 'V G(5)
X 'V W(4)
.387
.314
.295
.399
.291
.228
G(~)
.400
.371
.343
X 'V G(2)
.400
.331
.286
.437
.390
.363
W(~)
.387
.314
.295
X 'V
.406
.346
.320
.393
.303
.262
X 'V W(2)
Total
.435
.362
.331
.418
.367
.337
X 'V
W(~)
.438
.418
.395
.424
.378
.345
X 'V G(2)
X 'V W(4)
.438
.418
.395
.399
.291
.228
X 'V G(5)
X 'V W(2)
.387
.314
.295
.435
.362
.331
.431
.398
.370
.419
.355
.312
.411
.338
.313
48
Table 3 (continued).
n
X tV
G(~)
.251
.153
.088
10
20
30
X tV
10
20
30
G(~)
.412
.382
.348
.228
.138
.078
X tV G(2)
X tVLN (1)
.369
.305
.238
.329
.270
.206
X tV G(5)
X tVLN(0.4)
.436
.362
.325
.412
.382
.348
X tV G(5)
X tVLN( 2. 5)
.436
.362
.325
.228
.138
.078
10
20
30
10
20
30
*Graphs
.332
.268
.218
.251
.153
.088
X tVG(2")
X tVLN(2. 5)
.251
.153
.088
10
20
30
X tVG (~)
X tV LN (0.4)
.240
.146
.083
.349
.288
.222
.329
.270
.206
.412
.382
.348
X tV G(2)
X tVLN(2.5)
.369
.305
.238
.228
.138
.078
.436
.362
.325
.290
.212
.147
X tVLN(0.4)
.369
.305
.238
X tV G(5)
.424
.372
.337
X.tVLN(l)
.391
.344
.293
.299
.222
.158
X tVLN(l)
.329
.270
.206
.383
.316
.266
.332
.250
.202
of the nine densities of Table 3 are given on page 87.
distributions using a likelihood ratio test statistic.
Their results
can be used to construct total error rates comparable to those of
Table 3, for a few selected values of the gamma and Weibull shape
parameters.
We have computed these values and give them in Table 4.

Table 4.  Total Error Rates for Gamma vs Weibull for the Likelihood Ratio Procedure of Bain and Engelhardt.

  Gamma shape   n    Weibull shape: .5    2       4
  .5            10   .435                 .405    .385
  .5            20   .375                 .345    .320
  2             10   .470                 .440    .420
  2             20   .415                 .385    .360
Comparison of the total error rates of Tables 3 and 4 shows no
trend in favor of either procedure.
Table 5 gives the selection rates in our simulation study for the
three-way selection procedure.
The results in Table 5 can be compared
with the results in Table 5 of KQ.
The comparisons do not show that
either of these procedures has a clear advantage; however, the
selection procedure of KQ may have a slight edge.
Thus, as for the
two-way selection procedures above, we favor the computationally
simpler procedures given in KQ.
Table 5.
Selection Rates for Three-way Procedure
X
n
'V
G
10
20
30
W
.569
.624
.657
.206
.258
.291
X
'V
G
10
20
30
-G
10
20
30.
.171
.229
.245
.231
.409
.510
'V
X
LN
G
.225
.118
.052
.195
.277
.367
W(~)
W
.417
.378
.345
X
G(~)
.436
.418
.395
X
LN
G
.352
.213
.145
.150
.239
.259
LN(0.4)
W
LN
-G
.241
.153
.103
.588
.618
.652
.141
.184
.162
'V
W(2)
W
.565
.638
.669
X
4.
G(2)
W
'V
'V
LN(l)
W
.188
.088
.045
X
LN
G
.369
.305
.238
.177
.324
.380
-
.285
.123
.072
.158
.193
.178
G(5)
W
G
'V
LN
.601
.709
.772
.241
.098
.050
LN
.716
.768
.821
LN
G
.671
.728
.793
.138
.044
.016
.146
.188
.163
'V
.436
.362
.325
1.;1'(4)
W
LN(2.5)
W
X
LN
.387
.314
.295
X
LN
'V
4.  SELECTION WITH CENSORING
Suppose that from a random sample of size n on a parent random variable with density and distribution functions f and F, respectively, only the values less than a prespecified time T are observed.  If r is the number of values less than T, then r is a binomial random variable with probability function b(r; F(T), n).  Let $x_1,\ldots,x_r$ be the observed values, indexed in the same order as the original sample, and $x_{(1)},\ldots,x_{(r)}$ be the corresponding order statistics.  We require selection procedures based on the values $x_1,\ldots,x_r$ and r.  We have studied procedures based on scale-shape invariance, as considered above for complete samples, scale invariance as in KQ, and maximum likelihood ratio procedures.  Of these procedures, only the scale invariant and maximum likelihood procedures will be described now, since these procedures will be recommended for reasons given below.
When f and F are functions of parameters $\theta$, say $f_\theta$ and $F_\theta$, the likelihood function corresponding to $x_1,\ldots,x_r$ and r is

$$L_\theta(x_{(1)},\ldots,x_{(r)},r;T) = \frac{n!}{(n-r)!}\{1-F_\theta(T)\}^{n-r}\prod_{i=1}^{r} f_\theta(x_{(i)}). \tag{4.1}$$
The scale invariant selection statistic is defined by

$$S = \int_0^{\infty} L(\lambda x_{(1)},\ldots,\lambda x_{(r)},r;\lambda T)\,\lambda^{r-1}\,d\lambda, \tag{4.2}$$

where the scale parameter in L has been set equal to one, and S depends upon a shape parameter.  The shape parameter in S for each of the three families considered here will be replaced by its maximum likelihood estimator, obtained by maximizing the likelihood in (4.1).  We then select the family with the largest selection function.  The details for the particular families are given below.
We will also consider selection for the censored case using, essentially, a likelihood ratio procedure.  In this approach we use the maximum value of the likelihood function in (4.1) as the selection statistic.  Formally, the selection statistic is

$$S = L_{\hat\theta}(x_{(1)},\ldots,x_{(r)},r;T), \tag{4.3}$$

for $\hat\theta$ the ML estimator(s) of $\theta$.
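A small sketch of how (4.3) can be evaluated is given below (the fitted Weibull parameter values and all function names are hypothetical; the same routine would be called with each family's fitted density and distribution function, and the family with the largest value selected):

```python
import math

def log_censored_likelihood(log_pdf, cdf, x_obs, n, T):
    """log of (4.1): log{ n!/(n-r)! [1-F(T)]^(n-r) prod f(x_i) } for a
    fitted parent, with f and F supplied as callables."""
    r = len(x_obs)
    log_binom = math.lgamma(n + 1) - math.lgamma(n - r + 1)
    return (log_binom
            + (n - r) * math.log(1.0 - cdf(T))
            + sum(log_pdf(xi) for xi in x_obs))

# Example with a hypothetical fitted Weibull parent:
theta_hat, beta_hat = 2.0, 1.5
log_pdf = lambda x: (math.log(beta_hat / theta_hat)
                     + (beta_hat - 1) * math.log(x / theta_hat)
                     - (x / theta_hat) ** beta_hat)
cdf = lambda x: 1.0 - math.exp(-(x / theta_hat) ** beta_hat)
print(log_censored_likelihood(log_pdf, cdf, [0.4, 1.1, 1.8], n=5, T=2.0))
```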
The selection functions for these two methods for the three
families are given in Table 6.
Table 6.  Selection Statistics for Censored Samples.

Scale Invariant:

  $W(\theta,\beta)$, i = 1:
  $$\frac{n!}{(n-r)!}\,\Gamma(r)\,\hat\beta^{r-1}\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{\hat\beta-1}\Bigl[\sum_{i=1}^{r} x_{(i)}^{\hat\beta}+(n-r)T^{\hat\beta}\Bigr]^{-r}$$

  $LN(\theta,\sigma)$, i = 2:
  $$\frac{n!}{(n-r)!}\,(\sqrt{2\pi}\,\hat\sigma)^{1-r}\,r^{-1/2}\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{-1}\exp\Bigl\{-\frac{1}{2\hat\sigma^{2}}\sum_{i=1}^{r}\Bigl(\ln x_{(i)}-\frac{1}{r}\sum_{j=1}^{r}\ln x_{(j)}\Bigr)^{2}\Bigr\}\,E\Bigl[\Bigl\{1-\Phi\Bigl(\frac{u+\ln T}{\hat\sigma}\Bigr)\Bigr\}^{n-r}\Bigr],$$
  where $u$ is a $N\bigl(-\frac{1}{r}\sum\ln x_{(i)},\ \hat\sigma^{2}/r\bigr)$ r.v.

  $G(\theta,\alpha)$, i = 3:
  $$\frac{n!}{(n-r)!}\,\Gamma(r\hat\alpha)\,\Gamma^{-r}(\hat\alpha)\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{\hat\alpha-1}\Bigl(\sum_{i=1}^{r} x_{(i)}\Bigr)^{-r\hat\alpha}E\Bigl\{\Bigl[1-\frac{\Gamma\bigl(uT/\sum x_{(i)},\,\hat\alpha\bigr)}{\Gamma(\hat\alpha)}\Bigr]^{n-r}\Bigr\},$$
  where $u$ is a $G(1,r\hat\alpha)$ r.v.

Maximum Likelihood:

  $W(\theta,\beta)$, i = 1:
  $$\frac{n!}{(n-r)!}\,\hat\beta^{r}\hat\theta^{-r\hat\beta}\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{\hat\beta-1}\exp\Bigl\{-\hat\theta^{-\hat\beta}\Bigl(\sum_{i=1}^{r} x_{(i)}^{\hat\beta}+(n-r)T^{\hat\beta}\Bigr)\Bigr\}$$

  $LN(\theta,\sigma)$, i = 2:
  $$\frac{n!}{(n-r)!}\Bigl\{1-\Phi\Bigl(\frac{\ln(T/\hat\theta)}{\hat\sigma}\Bigr)\Bigr\}^{n-r}(\hat\sigma\sqrt{2\pi})^{-r}\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{-1}\exp\Bigl\{-\frac{1}{2\hat\sigma^{2}}\sum_{i=1}^{r}\Bigl(\ln\frac{x_{(i)}}{\hat\theta}\Bigr)^{2}\Bigr\}$$

  $G(\theta,\alpha)$, i = 3:
  $$\frac{n!}{(n-r)!}\,\Gamma^{-n}(\hat\alpha)\Bigl\{\Gamma(\hat\alpha)-\Gamma\Bigl(\frac{T}{\hat\theta},\hat\alpha\Bigr)\Bigr\}^{n-r}\hat\theta^{-r\hat\alpha}\Bigl(\prod_{i=1}^{r} x_{(i)}\Bigr)^{\hat\alpha-1}\exp\Bigl(-\frac{1}{\hat\theta}\sum_{i=1}^{r} x_{(i)}\Bigr)$$

Note:  $\Gamma(a,b) = \int_0^{a} s^{b-1}\exp(-s)\,ds$.
We have written programs to evaluate the selection statistics
of Table 6.
A brief description of this work follows in the remainder
of this section.
For more detail see Siswadi [23].
Maximum likelihood estimates for the scale and shape parameters
are required for the ML selection functions, and for the shape
parameter for the scale invariant selection function.
For the Weibull
class, these estimates are obtained as solutions of the ML equations
as in Cohen [7].
For the lognormal class solutions for the ML
equations were obtained using results of Harter and Moore [17],
adjusted for type I censoring.
Another procedure
for lognormal type I
censored samples is given by Aitchison and Brown [1].
Solutions of the
ML equations for the gamma class were obtained from the results of
Harter and Moore [16], adjusted for type I censoring.
Once the ML
estimates of the scale and shape parameters are obtained, the
evaluations of the ML selection functions of Table 6 are straightforward.
Once the ML estimate $\hat\beta$ of the shape parameter of the Weibull
distribution is obtained, the scale invariant selection function is
readily evaluated.
However, the selection functions for both the
lognormal and gamma scale invariant procedures are difficult to evaluate
and we have used Monte Carlo and importance sampling from the normal
distribution and gamma distributions, respectively, (see Davis and
Rabinowitz [9]) to evaluate them.
5.  SIMULATION RESULTS FOR CENSORED SAMPLES
We have conducted a small Monte Carlo simulation study of the
two selection methods discussed above for censored samples to provide
some information on the error rates for these procedures.
These empirical error rates allow comparisons of the two procedures with each other as well as with the complete sample rates given in section 3 and in KQ.  Comparison with complete sample rates gives a measure of the loss of information due to censoring.
The families of distributions considered were W(½), W(4), G(½), G(2), LN(0.4), and
LN(2.5); and the sample size was n = 30 in all cases.  The truncation point T was chosen
so that the df F(T) = 0.90, i.e., for a mean rate of 10% censoring.  There were 100
samples generated for each of the above distributions except W(4), for which 16 samples
were generated.  The reason that more samples were not run was that the computer running
time for some cases is very long.  For example, the CPU time required to obtain the ML
estimates for the lognormal and gamma shape parameters for a W(4) sample was typically
approximately one minute per sample.
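The truncation points follow directly from the inverse distribution functions of the censored families.  A minimal sketch for the Weibull and lognormal cases is given below; the unit scale is an assumption made only for the sketch, and the gamma case is omitted because it requires an inverse incomplete gamma routine.

  program trunc_point
    ! Sketch: truncation point T with F(T) = 0.90 for unit-scale families.
    implicit none
    integer, parameter  :: dp = kind(1.0d0)
    real(dp), parameter :: p   = 0.90_dp
    real(dp), parameter :: z90 = 1.2815515655_dp   ! standard normal 90th percentile
    real(dp) :: beta, sigma, t_weib, t_logn

    beta  = 4.0_dp        ! Weibull shape, as in the W(4) case above
    sigma = 0.4_dp        ! lognormal shape, as in the LN(0.4) case above

    t_weib = (-log(1.0_dp - p))**(1.0_dp/beta)     ! from F(t) = 1 - exp(-t**beta)
    t_logn = exp(sigma*z90)                        ! from F(t) = Phi(log(t)/sigma)
    print *, 'T for W(4):    ', t_weib
    print *, 'T for LN(0.4): ', t_logn
  end program trunc_point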
The misclassification rates for pairwise selection are given in
Table 7.
The total error rate is the average of the rates for the
two families.
Note that only one set of error rates is given for W vs LN.  This is because both the ML
and KQ selection statistics for both the W and LN classes are invariant under the
transformations (2.1).
However, the gamma selection statistic does not possess
this invariance property, and therefore when a gamma distribution is
compared with any other the error rates are generally different.
Table 7.  Misclassification Rates for Pairwise Selection Procedures, Censored Samples (n = 30).
Procedure
ML
KQ
Procedure
ML
KQ
ML
KQ
ML
KQ
ML
KQ
Table 8.
KQ
KQ
Total
.40
.45
.18
.12
.29
.29
X'VW(~)
Total
.38
.52
.32
.16
.35
.34
X'\G(2)
X'\W(~)
.36
.39
.32
.16
X'\G(~)
X'VLN(. 4)
.14
.20
.32
.31
X'VG(2)
X'VLN (.4)
.35
.42
.32
.31
X'VG(~)
X 'V W(4)
.38
.52
X'\G(2)
.34
.28
X'VW(4)
.44
.50
.36
.39
.14
.20
.08
;09
X'VG(2)
.34
.37
.40
.45
X'VLN (2.5)
X'\.G(~)
.23
.26
X'VG(~)
G
W
LN
G
.11
.15
X'VLN(2.5)
.08
.09
.35
.42
.60
.47
.29
.35
.11
.18
.29
.28
X'VW(4)
W LN
G
.13
.25
.56
.50
.31
.25
.23
.23
X'VG(2)
W LN
.36
.32
.35
.40
X'VLN (.4)
W LN
.09
.08
.68
.69
Total
.41
.50
.44
.50
.22
.21
Classification Rates for Three-way Procedures, Censored Samples (n = 30)
G
ML
X'VLN
X'VG(~)
Procedure
ML
X'VW
X'VW(~)
G
W
LN
.27
.16
.35
.40
.38
.44
G
.01
.01
X'VLN(2.5)
W LN
.17
.13
.82
.86
Consider the W vs LN rates of Table 7.  The ML and KQ procedures perform about the same
and, in fact, give the same total error rate of 0.29.  Comparison of these results with
those of the KQ procedure for complete samples (see Table 4 of KQ) shows that there is a
rather large loss of information due to censoring, since the corresponding complete-sample
W, LN, and total error rates are 0.19, 0.15, and 0.17, respectively.
For the two-way selection error rates in Table 7 that involve a gamma distribution and
either a Weibull or a lognormal distribution, neither the ML nor the KQ procedure appears
to have an overall advantage.  Also, by comparing these cases with the same cases in
Table 4 of KQ, we feel that the loss of information due to censoring is not as large as
for the W vs LN case commented on above.
The classification rates for three-way selection procedures are
given in Table 8.
Again, neither the ML nor the KQ procedure appears
to have any overall advantage and both perform quite well.
Also,
by comparison with Table 5 of KQ it appears that censoring has little
effect on the probability of correctly classifying a lognormal sample,
but the probabilities of correctly classifying either Weibull or gamma
samples are reduced somewhat.
6.
A USER PROGRAM AND EXAMPLES FOR CENSORED SAMPLES
We have programmed the selection procedures for the three families of distributions in
FORTRAN.  A program listing for the procedures can be obtained from the authors.
The program computes the selection
statistics for complete and censored samples according to the formulas
given in Tables 2 and 6, respectively.
For complete samples, the
maximum likelihood estimate is computed for the shape parameter of the
gamma family.
For censored samples, the maximum likelihood estimates of
the scale and shape parameters are computed for each of the three
families.
In each case, the data is then classified into the family
with the largest valued selection statistic.
For the scale invariant procedure, the selection statistics are computed by the Monte
Carlo method given in Davis and Rabinowitz [9].  The program was tested on several
examples and on many samples produced through simulation.  In general, the estimated
selection statistics did not appear reliable for heavily censored samples.  Therefore,
in the user program for the scale invariant procedure, the selection results will not be
printed if the coefficient of variation of the replicated values in the Monte Carlo
method is larger than 35%.
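The check amounts to comparing the Monte Carlo variability of the replicated values with the estimate itself.  The sketch below shows one way to apply the 35% cutoff, using the standard error of the Monte Carlo mean relative to the estimate; the replicate values and variable names are stand-ins, and whether the cutoff is applied to the standard deviation of the replicates or to the standard error of their mean is an implementation choice assumed here, not restated from the thesis.

  program cv_check
    ! Sketch of the 35% coefficient-of-variation cutoff; w(:) stands in
    ! for the replicated Monte Carlo values of a selection statistic.
    implicit none
    integer, parameter :: dp = kind(1.0d0)
    integer, parameter :: nrep = 1000
    real(dp) :: w(nrep), est, se, cv

    call random_number(w)                            ! stand-in replicate values
    est = sum(w)/nrep
    se  = sqrt((sum(w**2)/nrep - est**2)/(nrep - 1)) ! standard error of the mean
    cv  = se/est
    if (cv > 0.35_dp) then
       print *, 'selection results suppressed: coefficient of variation exceeds 35%'
    else
       print *, 'selection statistic =', est, '  (CV =', cv, ')'
    end if
  end program cv_check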
The program is dimensioned to analyze an original sample of maximum size n = 500.  The
input data necessary for the program are the number of data sets and, for each data set:
(1) the title for the data set, (2) the number of observed values, (3) the (original)
sample size, (4) the truncation point, and (5) the observations.  The information printed
out for each set of data analyzed is: (1) the input data, (2) the maximum likelihood
estimate(s), (3) the value of the selection statistic for each family, with its standard
deviation for the scale invariant procedure, and (4) the family selected.
Example 1.
Birnbaum and Saunders [5] considered a set of data of lifetimes, in
thousands of cycles, of aluminum sheeting under periodic loading, to
illustrate the gamma family.
If we assume that the experiment was
terminated at a prespecified time, say
T = 1900,
then the censored
observations and the results of the selection procedure are presented in
Table 8.
For these data, the Weibull family is selected.  It should also be noted that, although
the details are not given here, the selection procedure based on the selection statistics
given in Table 2 yields the same conclusion for the complete sample.
Example 2.
Bartholomew [4], p. 370 gives the failure times of 15 items that
failed during a specified period of testing from an original sample
of size
n
= 20.
He states that the items have an exponential life
distribution, and uses the exponential distribution to perform analyses
of the data.
We have used these data in the selection program and the
results are given in Table 9.
Both the maximum likelihood and scale
invariant procedures prefer the lognormal distribution to the gamma
and Weibull distributions, which casts doubt on the claim of an
exponential parent distribution.
Table 8.  Results of Selection Procedure.
Lifetimes of Aluminum Sheeting under Periodic Loading (thousands of cycles)

 370   706   716   746   785   797   844   855   858   886
 886   930   960   988   990  1000  1010  1016  1018  1020
1055  1085  1102  1102  1108  1115  1120  1134  1140  1199
1200  1200  1203  1222  1235  1238  1252  1258  1262  1269
1270  1290  1293  1300  1310  1313  1315  1330  1355  1390
1416  1419  1420  1420  1450  1452  1475  1478  1481  1485
1502  1505  1513  1522  1522  1530  1540  1560  1567  1578
1594  1602  1604  1608  1630  1642  1674  1730  1750  1750
1763  1781  1782  1792  1820  1868  1868  1881  1890  1893
1895

Sample Size       = 101
Sample Observed   =  91
Truncation point  = 1900

Maximum Likelihood Estimates

Family       Scale            Shape
Weibull      0.154149D+04     0.404114D+01
Gamma        0.125214D+03     0.112550D+02
Lognormal    0.135159D+04     0.317034D+00

Family       Selection Statistic    Standard Deviation

Maximum Likelihood Procedure
Weibull      0.147164D+03
Gamma        0.146470D+03
Lognormal    0.144092D+03
The family selected is Weibull

Scale Invariant Procedure
Weibull      0.144432D+03
Gamma        0.144103D+03           0.142042D+03
Lognormal    0.141762D+03           0.139649D+03
The family selected is Weibull
Table 9.  Results of Selection Procedure.
Bartholomew Data

  3   19   23   26   27   37   38   41
 45   58   84   90   99  109  138

Sample Size       =  20
Sample Observed   =  15
Truncation point  = 150

Maximum Likelihood Estimates

Family       Scale            Shape
Weibull      0.105498D+03     0.108289D+01
Gamma        0.876146D+02     0.116892D+01
Lognormal    0.682517D+02     0.122585D+01

Family       Selection Statistic    Standard Deviation

Maximum Likelihood Procedure
Weibull      -0.669185D+01
Gamma        -0.664251D+01
Lognormal    -0.660273D+01
The family selected is lognormal

Scale Invariant Procedure
Weibull      -0.720101D+01
Gamma        -0.699896D+01          -0.784711D+01
Lognormal    -0.667347D+01          -0.851129D+01
The family selected is lognormal
REFERENCES

[1]  Aitchison, J. and J. A. C. Brown, The Lognormal Distribution, Cambridge University Press, London (1957).

[2]  Bain, L. J. and M. Engelhardt, "Probability of Correct Selection of Weibull versus Gamma Based on Likelihood Ratio," Communications in Statistics A9:375-381 (1980).

[3]  Barlow, R. E. and R. Campo, "Total Time on Test Processes and Applications to Failure Data Analysis," in Reliability and Fault Tree Analysis: Theoretical and Applied Aspects of System Reliability and Safety Assessment: 451-481, SIAM, Philadelphia (1975).

[4]  Bartholomew, D. J., "The Sampling Distribution of an Estimate Arising in Life Testing," Technometrics 5:361-374 (1963).

[5]  Birnbaum, Z. W. and S. C. Saunders, "A Statistical Model for Life-length of Materials," Journal of the American Statistical Association 53:151-160 (1958).

[6]  Bowman, K. O. and L. R. Shenton, Properties of Estimator for the Gamma Distribution, Report CTC-1, Oak Ridge, Union Carbide Corporation, Nuclear Division, TN (1968).

[7]  Cohen, A. C., "Maximum Likelihood Estimation in the Weibull Distribution Based on Complete and on Censored Samples," Technometrics 7:579-588 (1965).

[8]  Cox, D. R., "Tests of Separate Families of Hypotheses," in Proceedings of the Fourth Berkeley Symposium 1:105-123, University of California Press, Berkeley (1961).

[9]  Davis, P. J. and P. Rabinowitz, Methods of Numerical Integration, Academic Press, New York (1975).

[10] Dumonceaux, R. and C. E. Antle, "Discrimination between the Lognormal and the Weibull Distributions," Technometrics 15:923-926 (1973).

[11] Dumonceaux, R., C. E. Antle and G. Haas, "Likelihood Ratio Test for Discrimination between Two Models with Unknown Location and Scale Parameters," Technometrics 15:19-27 (1973).

[12] Dyer, A. R., "Discrimination Procedures for Separate Families of Hypotheses," Journal of the American Statistical Association 68:970-974 (1973).

[13] Dyer, A. R., "Hypotheses Testing Procedures for Separate Families of Hypotheses," Journal of the American Statistical Association 69:140-145 (1974).

[14] Greenwood, J. A. and D. Durand, "Aids for Fitting the Gamma Distribution by Maximum Likelihood," Technometrics 2:55-65 (1960).

[15] Hajek, J. and Z. Sidak, Theory of Rank Tests, Academic Press, New York (1967).

[16] Harter, H. L. and A. H. Moore, "Maximum Likelihood Estimation of the Parameters of Gamma and Weibull Populations from Complete and from Censored Samples," Technometrics 7:639-643 (1965).

[17] Harter, H. L. and A. H. Moore, "Local-maximum-likelihood Estimation of the Parameters of Three-parameter Lognormal Populations from Complete and Censored Samples," Journal of the American Statistical Association 61:842-851 (1966).

[18] Hogg, R. V., V. A. Uthoff, R. H. Randles and A. S. Davenport, "On the Selection of the Underlying Distribution and Adaptive Estimation," Journal of the American Statistical Association 67:597-600 (1972).

[19] Kent, J. and C. P. Quesenberry, "On Selecting Reliability Models," submitted for publication (1980).

[20] Lehmann, E. L., Testing Statistical Hypotheses, Wiley, New York (1959).

[21] Nelson, W., "Theory and Applications of Hazard Plotting for Censored Failure Data," Technometrics 14:945-966 (1972).

[22] Quesenberry, C. P. and R. R. Starbuck, "On Optimal Tests for Separate Hypotheses and Conditional Probability Integral Transformations," Communications in Statistics: Theory and Methods, Part A, 5:507-524 (1976).

[23] Siswadi, "Selecting Among Weibull, Lognormal and Gamma Distributions for Complete and Censored Samples," unpublished North Carolina State University thesis (1981).

[24] Uthoff, V. A., "An Optimum Test Property of Two Well-known Statistics," Journal of the American Statistical Association 65:1597-1600 (1970).

[25] Uthoff, V. A., "The Most Powerful Scale and Location Invariant Test of the Normal versus the Double Exponential," Annals of Statistics 1:170-174 (1973).

[26] Volodin, I. N., "On the Discrimination of Gamma and Weibull Distributions," Theory of Probability and its Applications 19:383-393 (1974).
7.
SUMMARY AND CONCLUSIONS
The two-parameter Weibull, lognormal and gamma families of distributions are commonly used in reliability and life testing problems.
There are two main difficulties in selecting one of these three families for a particular
sample, either complete or censored.  The first is that the gamma family is not
conformable to either the Weibull or the lognormal family of distributions.  The second
is that some of the selection statistics that must be evaluated are not available in
closed form, which leads to computing difficulties.
In a recent paper, Kent and Quesenberry (1980) considered the
problem of selecting the member of a collection of families of
probability distributions for complete samples of reliability or life
data.
The selection statistic proposed was based upon the value of the
density function of a scale maximal invariant.
The statistics used have
closed form formulas that are readily evaluated.
The general selection procedure based on invariance under a group
of transformations is considered.
For some families of distributions
this leads to selection procedures that are optimal in the sense of
minimizing misclassification error.
For complete samples, the selection procedure invariant under
scale-shape changes is studied.
For two-way selection between the
Weibull and lognormal families, the procedure used here is optimal,
i.e.,
it has the smallest possible total error for an invariant procedure.
A Monte Carlo simulation study has been conducted.
It shows that the
optimal procedure has very little advantage over the procedure
proposed by Kent and Quesenberry, the scale invariant procedure.
For the other two-way and three-way selection problems, the comparisons
of the two procedures are not clear-cut.
Hence, based on the power
achieved and ease of computation, the scale invariant procedure is
preferred.
For censored samples, the maximum likelihood procedure is studied
in addition to the scale invariant procedure.
A small Monte Carlo
simulation study has been conducted for 10% censoring.
The two procedures used perform about the same; in fact, both give the same total error
in selecting between the Weibull and lognormal families.  For the other two-way and
three-way selection problems, the comparisons give neither procedure a clear-cut advantage.
There is a rather large
loss of information due to censoring in selecting the Weibull versus
lognormal.
It appears
that censoring has little effect on the
probability of correctly classifying a lognormal sample, but the
probabilities of correctly classifying either Weibull or gamma samples
are reduced somewhat by censoring.
The above procedures for censored samples have been applied to two sets of data obtained
from the literature, and it is quite interesting that in both cases the family selected
differed from the one assumed in the literature.
8.
LIST OF REFERENCES
Aitchison, J. and J. A. C. Brown. 1957.
Cambridge University Press, London.
The Lognormal Distribution.
Atkinson, A. C. 1970. A method of discriminating between models.
J. Royal Stat. Soc., Ser. B 32:323-345.
Bain, L. J. 1978. Statistical Analysis of Reliability and Life-Testing Models. Marcel Dekker, Inc., New York, NY.
Bain, L. J. and M. Engelhardt. 1980. Probability of correct selection
of Weibull versus gamma based on likelihood ratio. Commun.
Statist., A9:375-38l.
Barlow, R. E. and R. Campo. 1975. Total time on test processes and
applications to failure data analysis. Reliability and Fault Tree
Analysis: Theoretical and Applied Aspects of System Reliability
and Safety Assessment, edited by R. E. Barlow, J. B. Fussell, and
N. D. Singpurwalla. SIAM, Philadelphia, PA, 451-481.
Barlow, R. E. and F. Proschan. 1975. Statistical Theory of Reliability
and Life Testing: Probability Models. Holt, Rinehart and Winston,
Inc., New York, NY.
Barr, D. R. and T. Davidson. 1973. A Kolmogorov-Smirnov test for
censored samples. Technometrics 15:739-757.
Bartholomew, D. J. 1963. The sampling distribution of an estimate
arising in life testing. Technometrics 5:361-374.
Birnbaum, Z. W. and S. C. Saunders. 1958. A statistical model for
life-length of materials. J. Amer. Stat. Assoc. 53:151-160.
Bowman, K. O. and L. R. Shenton. 1968. Properties of Estimator for
the Gamma Distribution. Report CTC-l. Union Carbide Corporation,
Oak Ridge, TN.
Cohen, A. C. 1965. Maximum likelihood estimation in the Weibull
distribution based on complete and on censored samples.
Technometrics 7:579-588.
Cox, D. R. 1961. Tests of separate families of hypotheses. Proceedings
of the Fourth Berkeley Symposium. University of California Press,
Berkeley, CA 1:105-123.
Cox, D. R. 1962. Further results on tests of separate families of
hypotheses. J. Royal Stat. Soc., Ser. B 25:406-424.
Davis, P. J. and P. Rabinowitz. 1975. Methods of Numerical Integration.
Academic Press, New York, NY.
Dufour, R. and U. R. Maag. 1978. Distribution results for modified
Kolmogorov-Smirnov statistics for truncated or censored samples.
Technometrics 20:29-32.
Dumonceaux, R. and C. E. Antle. 1973. Discrimination between the
lognormal and the Weibull distributions. Technometrics 15:923-926.
Dumonceaux, R., C. E. Antle, and G. Haas. 1973. Likelihood ratio test
for discrimination between two models with unknown location and
scale parameters. Technometrics 15:19-27.
Dyer, A. R. 1973. Discrimination procedures for separate families of
hypotheses. J. Amer. Stat. Assoc. 68:970-974.
Dyer, A. R. 1974. Hypotheses testing procedures for separate families
of hypotheses. J. Amer. Stat. Assoc. 69:140-145.
Greenwood, J. A. and D. Durand. 1960. Aids for fitting the gamma
distribution by maximum likelihood. Technometrics 2:55-65.
Hajek, J. and Z. Sidak. 1967. Theory of Rank Tests. Academic Press,
New York, NY.
Harter, H. L. and A. H. Moore. 1965. Maximum likelihood estimation of
the parameters of gamma and Weibull populations from complete and
from censored samples. Technometrics 7:639-643.
Harter, H. L. and A. H. Moore. 1966. Local-maximum-likelihood
estimation of the parameters of three-parameter lognormal
populations from complete and censored samples. J. Amer. Stat.
Assoc. 61:842-851.
Hogg, R. V., V. A. Uthoff, R. H. Randles, and A. S. Davenport. 1972.
On the selection of the underlying distribution and adaptive
estimation. J. Amer. Stat. Assoc. 67:597-600.
IMSL Library, Edition 7, Reference Manual. 1979. International
Mathematical and Statistical Libraries, Houston, TX.
Jackson, O. A. Y. 1968. Some results on tests of separate families of
hypotheses. Biometrika 55:355-363.
Jackson, O. A. J. 1969. Fitting a gamma or lognormal distribution
to fibre-diameter measurements on wool tops. Appl. Statist. 18:
70-75.
Kaplan, E. L. and P. Meier. 1958. Nonparametric estimation from
incomplete observations. J. Amer. Stat. Assoc. 53:457-481.
Kent, J. 1979. An efficient procedure for selecting among five
reliability models. Ph.D. Thesis, North Carolina State University
at Raleigh.
Kent, J. and C. P. Quesenberry. 1980. On selecting reliability models.
Submitted for publication.
Koziol, J. A. and D. P. Byar. 1975. Percentage points of the asymptotic
distributions of one and two sample K-S statistics for truncated
or censored data. Technometrics 17:507-510.
Koziol, J. A. and S. B. Green. 1976. A Cramer-von Mises statistic
for randomly censored data. Biometrika 63:465-474.
Lehmann, E. L. 1950. Some principles of the theory of testing
hypotheses. Ann. Math. Stat. 21:1-26.
Lehmann, E. L. 1959. Testing Statistical Hypotheses. John Wiley and
Sons, Inc., New York, NY.
Mann, N. R., R. E. Schafer, and N. D. Singpurwalla. 1974. Methods for
Statistical Analysis of Reliability and Life Data. John Wiley and
Sons, Inc., New York, NY.
Nelson, W. 1972. Theory and applications of hazard plotting for
censored failure data. Technometrics 14:945-966.
Pettit, A. N. and M. A. Stephens. 1976. Modified Cramer-von Mises
statistics for censored data. Biometrika 63:291-298.
Quesenberry, C. P. 1975. Transforming samples from truncation
parameter distributions to uniformity. Commun. Statist.
4:1149-1155.
Quesenberry, C. P. and R. R. Starbuck. 1976. On optimal tests for
separate hypotheses and conditional probability integral transformations. Commun. Statist. - Theor. Meth., Part A 5:507-524.
Uthoff, V. A. 1970. An optimum test property of two well-known
statistics. J. Amer. Stat. Assoc. 65:1597-1600.
Uthoff, V. A. 1973. The most powerful scale and location invariant
test of the normal versus the double exponential. Ann. Stat.
1:170-174.
Van der Vaart, H. R. 1979. Unpublished note on statistical inference.
Volodin, I. N. 1974. On the discrimination of gamma and Weibull
distributions. Theory Prob. Applications 19:383-390.
9.
APPENDICES
Appendix 9.1.  User Program for the Selection Procedure
C
C     SELECT THE BEST FITTING FAMILY AMONG THE WEIBULL, GAMMA AND
C     LOGNORMAL FAMILIES OF DISTRIBUTIONS FOR COMPLETE AND CENSORED SAMPLES
C
C     NSAM  IS THE NUMBER OF SAMPLE(S) ANALYZED.
C     NSIZE IS THE OBSERVED SAMPLE SIZE.
C     NORIG IS THE ORIGINAL SAMPLE SIZE.
C     TC    IS THE TRUNCATION POINT.
C     THE SAMPLE (BEFORE CENSORING) IS LIMITED TO 500 AS DIMENSIONED.
C     L     IS THE NUMBER OF EVALUATIONS FOR SIMPSON'S RULE ON
C           COMPLETE SAMPLES.
C     INDEX IS THE NUMBER OF EVALUATIONS FOR EXPECTED VALUES (ON
C           CENSORED SAMPLES) OBTAINED BY SAMPLING.
C     ON COMPLETE SAMPLES, DATA ARE TRANSFORMED (AFTER OBTAINING THE MLE OF
C     THE SHAPE PARAMETER FOR GAMMA) SUCH THAT (X(1), X(N)) BECOMES (0.1, 10).
C     ON CENSORED SAMPLES, DATA ARE MULTIPLIED BY 10/TC.
C
IMPLICIT REAL*8CA-H.O-Z}
RF.Al.*8 X<lS.)M, XXC H).).) , W( 1000) ,SSTc 3}
RF.AL*8 SWC 2) ,see 2) • ST.e 2) , TITLE( 18)
REAL*4 WCJOO),TC,A2,Rc l),~~T7.F.
lU!:AL*4 PllOU, ALOG. ALP4, UP, T4, ALGAl1A
2
3
4
12
13
14
l(J
16
11
18
19
20
21
22
23
24:l5
26
Ul~SlOli
WK(6)
,U1ST(~)
,ALl'(2) ,4..:1"(2) ,8U1'I(2) ,XI1A.X(2)
DATA DIST/'WEIllIILL','GAMMA'
456
46
401
402
,'LG.NOI~'/
DATA .\nIN/0, ID0/,AMAX/10.D0/
DATA L/101/,UP.'\13/LD0/
INDEX= 1000
READ(1,4/S6)NSAM
FORMAT( 13)
DO 457 TD=l.NSAM
READC 1,40) (TITLE( I) , 1= 1,10)
1"OlU'Lo\"l'( 14MB)
1lliAJ.)( 1, 40 [\IS 1~, .NUlU G, 1'C
FORMATC213,¥10.5)
RE.\D(1,42)(W( I}, l=l,NSIZE)
FORMATe 8FI0.4)
XSlZE=NSlZE
XXSS~DFLOAT(NSIZE)
WRITF.( 3,44) (TITI.E( I> • 1= 1,10)
44
45
46
28
FORMAT (' l ' ,65C ' -') / ' \)' • 10A8/' 0' ,65C '
WRITE(3,45HWC I>. l=t,NST7.F.)
l"01U1A1'( IX, 61"10.4)
Wll1TJl:(~, 46) rWll1G
FOlU1A1'( '~)', '~AMl'Lt: ~I~
=',17)
IFCNORIG - NSIZE)451,47,49
-'»
29
41
CONTINUE
30
31
59
WRITE(3,59)
FORMAT(/lX,65('-')//1X, 'FAMILY', 14X. 'SELECTION STATISTIC'//IX,
$40C '-') )
32
33
PT=3.14159200
l L= (L-l) /2
~4
~5
XX( 1> =0
36
31
38
39
III =UP AB/ IL
"'W( 1> = 1./6
we L) =wW( 1)
XXC L> = 1 • *UPAB
DO 11 1=1, IL
.
71
40
41
42
1'1=2* 1
WWUf) =4. /6.
43
Ml=M-l
XX< ID =DFLOAT< Ml) /( L-l) *UPAB
44
45
tF(K.F.Q.L) CO TO 11
46
WW(TO=2./6.
47
M2=K-l
XX<K)=DFLOAT(M2)/(L-l)*UPAR
K=2*I+l
4U
49
11
C
C
C01'4T UIU~.
TO FIND ALPHA FOR GAl'IMA
C
30
SSS=0.
SUML=0. 00
DO 5 I=l.NSIZE
51
52
S~S=SSS+W( t)
Z=w( t>
SUML=SUML+DLOC(Z)
53
54
a5
56
5
C01'4Tl1'4U~
C
57
58
59
Y=DLOG(SSS/XSIZE)-SUML/XSIZE
IF(Y-0.5772)6.6,7
6
60
61
ALPa~=(e.5000876+0.1648852*Y-0.0544274*Y*y)/Y
GO TO 8
7
ALPHA=(8.898919+9.059950*Y+0.9775373*Y*Y)/(Y*(17.79728+11. 968477*Y
S+Y*Y> )
62
63
8
xu.= W<
. 64
68
1)
DO 12 1=2,1'4S1ZE
65
66
67
CONTtNlffi
XL=W( 1)
IF< XL. GT. W<
12
69
70
71
1) )
XL= W(
1)
IF(XR.LT.W( I»XR=W(I)
CONTINUE
ALP( 1) = 1.
ALP(2) = ALPHA
ALPN= ALPHA*XSIZE
72
CF(I)=OLGAMA(XXSS)
73
CF( 2)
=DLCAMA( ALFN) -NSt7.F.:t:Ol.CAMA( ALPHA)
C
C
74
C
77
79
BSC= ( DLOG( A11I Ie) - DLOG( AMAX> ) / ( DLOG( XL) - DLOG( Xll> )
.1SC=~\X/(XR**BSC)
75
76
78
DATA SCAL11'4G
DO 65 ISC=I.NSIZE
X(ISC)=ASC*W(ISC)**BSC
65
CONTINl~
CP=I.00
80
SUML=O.00
81
SUMQL=0.m)
U2
03
84
DO 55 1=1.~SI~~
Cl'=('::l'*X( 1)
Z=DLOG(X(I»
85
86
87
SUML=SUML+Z
55
SUMQL=SUMQL+Z**2
CONTINUE
C
C
C
UU
SELECTION STATISTIC FOR LOGNORMAL
72
89
90
c
C
SSL=(SUMQL-SUML**2/XSIZE>
SST(3)=DLOG(.5D9)-.5*DLOG<XXSS) -AL*DLOG(PI)+DLGAftA(AL)-AL*
SOLOO(SSL>
START WORKING ON INTEGRATION BY SIMPSON'S RULE
C
~1
~UM(I)=O.
92
93
94
95
96
97
c)8
99
100
101
102
103
104
105
106
107
71
200
125
202
82
~tTM'( IWG)=~UM'( IWG)*OEXP(TR)
XM'~X( IWC)=S~
~18=SS-XMAX( lWC)
8S.1=0
lY(SIS.~r.(-175»SSI=V~·(SlS)
80
70
81
343
346
126
127
128
129
34~
130
312
131
1:32
:344
SUM(IWG)=SUM(IWG)+SSl
CONTINUE
CONTINUE
DO 81 IST= t ,2
SST( TST)=XMAX(IST)+CF( IST)+DLOG(SUM(IST»
CONTINl1F.
342 IT= 1,3
no
WRITE<3,343)OlST( IT),SST( IT)
YOlWAT( '0',A8,4X,V20.6)
11"( 1'1'. Etl. 1) GO TO :345
IF(SEL-SST(lT»346,342,342
SEL=SST(IT)
NSEL=IT
GO TO 342
NSEL=1
~F.T.=~ST(l)
l:.J3
134
AC= (NS 17.F.-2) *Or.OG( -xr.m -xr.U+OT.OG( W( TN) ) +OLOG( HI >
80 IWC=1,2
SS=-AUJ(lWG)*XLU*DLOG(CP)-NSlZE*ALP( IWC>*OLOC(SUMI)+AG
IY(IN.EQ.2)XMA}(IWG)=SS
IF( XMAX< HiG> • GE. SS) 00 TO 02
TR=XMAX(IWG)-SS
IF(TR+175)200,200,201
SUM(IWG)=O
GO TO 202
201
120
121
122
123
124
CONTINl1F.
DO
t08
109
110
III
112
113
114
115
116
It7
118
119
SUM(2)=0.
LX=L- 1
DO 70 Ifl=2,LX
XLU=DLOG( XX< IN) )
SUMI=O
DO 71 1= 1, NS I ZE
~lJ1"lI=SlJMI+X( J) **( -XLU>
347
CONTINt1F.
WRITE<3,344) ALPHA
1"OlWAT('O','ESTl1'tATED ALPHA FOR GAMMA =',012.5)
Wl11TI!:( 3, :347) IH~T( NSEL>
FOHlIAT( , '-" ,40( , -' ) /' 0' , , TJJ..I!: 1"UILY SI!:LECTEU 18 ',AD
• / ' 0' , 40( , -' ) / '0' ,40( , -' ) )
135
136
137
13R
13c)
140
49
GO TO 348
CONTINUE
TP=2.50662801DO
TI .ON= 2 • 302585 1D0
NI=l
141
DSEEX=6291950,00
DIN=DFLOAT( INDEX)
142
143
UUIN=VLOG(UIN)
TllUN=TC
73
144
145
146
147
148
149
150
151
152
153
154
Xl.=TRtJN
XR.=0.00
DO 72 1= I.NSI7.E
U'( XL. CT. W( J} ) XL= W( 1>
11"( Xll. LT. W( D ) Xll= W( 1)
CONTINUE
SSS=0.D0
SUML=0.00
SUHQL=0.D0
SIDIQ.=O.OO
CS= 10. OO/TRUN
72
C
G
DATA SCALING BY MULTIPLYING BY 10/TRUN
C
155
156
157
158
159
160
161
162
16a
164
165
166
167
=
00 14 1 1 • NS 1~
X< I> =CS*W( 1)
SSS= SSS+ X( I)
SUMQ=SUMQ+X<I)**2
CCC=DLOG(X(I»
SUML=SillIL+CCC
SUMQL=SUMQL+CCC*~2
COl'fTINUF.
TRUN= HL 110
14
XU=<.:S*XR
XL=<.:S*XL
UN= l>SWlT( XXSS)
C
G
G
G
DETERMINING INITIAL VALUES FOR ITERATIONS BY METHOD OF MOMENTS
FOR COMPLETE SAMPLE
16A
X.~I=SSS
169
170
171
172
173
174
175
176
XS(lJ=SIJJItQ
U~L= ( Xll- XL)
NNS=NSl~+l
IND=0
IND=IND+l
XST=XSI+X( J)
XSQT=XSQI+X(T)*X(I)
AC=IlLOCC XC J»
X."L= XSL+A.C
XSQL= XSQL+ AC**2
2
CONTJNUIC
XORIG=DFLO~r(NOUIG)
XBAR= XS l/XORIG
SVAR= (XSQ!-XBAR*XBAR*XORIG> /( KORIG-l. D0)
Cl=O.DO
187
IRA
Tl=SVAR/XBAR
Al=XRAR**2/SVAR
CALI. AI.PHM(NSTZF..NORTG.TRUN.X.Cl.Tl,Al, INGAl'I. TNGAl'II.TET.ALPHA.SGL.
SELNM)
189
190
191
192
193
194
195
/ ( NS 1ZE- 1>
DO 2 !=NNS.NORIG
X(!)=TRUN+DEL*IND
177
17A
179
100
18 1
182
183
184
185
186
}{"';L=SUML
XSQI.=SUMQI.
IFCINGAM.EQ.O)CO TO
501
5001
~OOI
WIUTIC( a. 50D
FOl\1.lAT( • l ' •• A1lGU~T 01'4 U'JSL !lOUT 11'41C UCArU1A IS LARGER THAN ti7')
GO TO 457
IF (INGAMI.EQ.O)GO TO 500
WRITE(3,502)
74
196
~02
197
198
199
500
::000
201
202
203
204
FORK~T( 'l','O\~LOW ON EV.~UATING THE FIRST DERIVATIVE OF'/'
SIX, 'THE INCOMPLETE G~"1MA· FUNCTION')
GO TO 457
CONTI i'OF.
ALP4=ALPHA
A:!: x.xsS*ALPHA
(;ALL U.I!:TA( t4S lZ.I!:, t40RIG, TUUN, X, 8UML, KCOUNT, BH)
SG(I)=~GL
Ul=XSL/XORIG
S2:XSQL-(XSL**2)/XORIG
Sl:DSQRT(S2/XORIG)
TA1:6.D0
CALL LOtiHM( NS JZE, NORIG, TRUN, X, TAl, U1, S I, EMUJ, S IGMAJ ,SLL>
20~
206
207
Sf.( 1) :SU.
~08
C
C
WEIBULL
(;
209
210
211
212
213
214
215
10
S-( XXSS*RH) *J)(.OC( TRW)
216
217
218
48
C
C
219
220
221
222
223
224
225
226
227
SW( 2): OLGAMA( XXSS) +( XXSS-1. D0) *OI.OG( RH) +( RH-l • D0) *SlJML -XXSS*
SVLOG( SUT> +.l!:LNM
WllITE( a, 4U> t48 lZ.l!:, T(;
FOill'lAT( 1X, 'SAMPLE OnSJ.!:llV ED ='. 17/ 1X, •TllUt4(;AT1ON 1'0 iNT= ' , 1"1 a. 5/
SlX,65( '-'»
COMPUTING SCALE PARAMETERS TO ORIGINAL SCALE
C
THWW=THW/CS
TF.TG=TF.T/CS
T.I!:TL= m:xp( F:MU.J) /CS
WillTE( a, 1949)
1949 FOIU1A'f( 'O',5X,'MAXIMUM LIKELIHOOD ESTI.MA1~'/lX,4o('-')/IX,'FAMILY
S' . 12X, 'SCALE' ,12K, , SIIA1'E'/lX,45( '-'»
WRITE(3,1950)DIST( 1) ,1'lIWW,DH,DIST(2), TETG, AlJ'IIA,lJIST(:.n ,TETL,81Gl'IA
SJ
1950 FORMAT(lX,A8,3X,D14.6,2X,D14.6)
WRITE(3.1951>
1951 FORMAT(IX,45('-')/'l',65('-')//lX,'FAMILY',14X,'SELECTION STATISTI
SC',6X,'STANDARD DF.VJATJON'//tX.65('-'»
C
(;
228
SUM.U:.). DO
DO 10 I: 1, NS IZE
SUMB:SUMB+X(I)**BH
CONTINUE
SUT:SUMB+(XORIG-XXSS)*TRUN**BH
TRW: (SlIT/XS IZE) **( 1. OO/BH)
SW( 1) :F.LNM+NS JZF.*DLOG( BH)+( BH-1. D0) *SUl'IL-( THW**( -BH» *S'UT
C
C
229
230
23 t
232
233
234
C
C
GAMMA AND LOGNORMAL
CLOSED FORM PARTS
SG(2)=DLGAMA(XXSS*ALPIIA)-XXSS*DLGAMA(AL!'IIA)+(AL!'llA-l.J.)0)*SUMLSXXSS*ALPHA*DLOG(SSS)+ELNM
SL(2)=(1.D0-XXSS)*DLOG(2.5066283D0*SIGMAJ)-.5*DLOG(XXSS)-SUMLS( .5D0/SJGMAJ**2)*(S~IQL-SUML**2/XXSS) +ELNM
ZG=SG(2)
ZT.= SJ.( 2)
STDC=-9999999999.D0
STDL=-9999999999.D0
11"(NSIZ.1!: . .I!:tl.NOlllG)GO TO 2014
FINDING EXPECTED VALUES BY SAMPLING
·e
75
23u
236
237
238
239
240
241
242
243
C
XM~AN=SUML/XXSS
60 10
244
DO 30 lUX= 1, ltWEX
CALL GGA.l'11l( DSEt:X, A2 , ~ 1 , WK., 10
up=lnUN*U(1)/SSS
CALL MDGAM( UP , ALP4 , PROD, I En)
CGI=-9999999999.DO
IF(PROB.EQ.I.)GO TO 6010
CGI=(XORIG-XXSS)*DLOG(l.DO-PROB)
CONT I NIJF. .
CAI.I. AOEXP( mX,XMG,CCI ,~~G,~QG)
C
245
246
247
248
249
250
251
252
253
2u4
25~
256
257
258
259
260
261
CALL GGNnL(V~EEX,Nl,R)
U=S IGnAJ/VSWll'( ~S) *ll< 1) - XMEAl4
UL=(U+TLON)/SIGMAJ
CALL MDNORD(UL,PL)
CLI=-9999999999.DO
IF(PL.EQ.I.)GO TO 6001
CLI=(XORIG-XXSS)*DLOG(I.D0-PL)
6001 CONTINUE
. CAI.1. AOEXPC mx. XMJ., CI. I. SST., SQL)
IFC PROR. Ell. 1. AND. PL. EO.• 1. ) CO TO 6002
30
COIHINUE
6002
~L(2)=SL(2)+XML+DLOG(SSL)-DDIN
Vr.=C~QI.-~Sr.**2/0TN)/CDIN*(DTN-l.D0»
I.I'~.
~TDL=
h~n..
262
263
264
IFC VL.
0) CO TO 2011
. 5D0*DLOC( VL) +ZL+
265
266
267
268
269
270
271
272
273
274
2011
2014
442
441
277
279
280
281
282
448
DO 212 IC=1,2
IF(IC.NE.l)GO TO 44i
WRITE(3,442)
FORMAT( 'O'.3X,'MAXIMUM LIKELIHOOD PROCEDURE'/IX,8X,28('-'»
GO TO 443
CONTTNIJF.
CVC=DEXPCSTOC-SCCTC»*100.D0
CVL= DEXPC STDL-SL< IC) ) 100. D0
lY(CVG.LJ£.35.V0.AND.CVL.LE.3a.DO)GO TO 447
WUl'ru( 3,440>
FORMAT(///1X,'**** TOE SCALI!: INVAllIAl41' SJ£LJ£CTIO~ RJ£SUL1~ ARE NOT'/
81X,'**** PRINTED SINCE AT LEAST ONE OF TOE CO&~YICIJ£NT OY'/
8lX,
VARIATIONS EXCEEI>S 35 ro')
GO TO 212
WRTTE.C3,444)
FORMATC 'O',8X,'SCALE INVARIANT PROCEDURE'/9X,25('-'»
CONTINUE
WRITEC3,440)DISTC t) ,SWC IC)
l 'OllMATC ' 0' , A8, 4X, V20 . 6)
Wl11TI!:C3,440)IHSTC2) ,SGC It:>
IF(IC.EQ.2)WUlTE(3,445)STDG
FORMAT( '+',36X,D20.6)
'****
447
444
443
2B3
284
285
440
286
287
445
288
289
290
CONTINUE
CONTINUE
*
27u
276
273
COtf1'1~UE
. SG(2)=SG(2)+XMG+DLOG(SSG)-VV1~
VG=<SQG-SSG**2/DIN)/(DIN*<DIN-1.D0»
IF(VG.LE.0)GO TO 2010
STDG=.5DO*DLOG(VG)+ZG+XMG
2010 CONTINUE
o
~~ITE(3,440)DIST(3),SL(IC)
IF(IC.EQ.2)WRITE(3,445)STDL
IF(SW(IC).GT.SG(IC).AND.SW(IC).GT.SL(IC»NSEL=l
76
291
292
293
294
295
296
297
298
299
1Y(SG(
1C).AND.8G(IC).GT.SL( IC»NSEL=2
1C).~r.sW(
IF(SL(lC).GT.SW(lC).AND.SL(lC).GT.SG(IC»~SKL=3
WRITE(3,446)DIST(NSEL)
FORMAT('O',65('-')/'O','TBE FAMILY SELECTED IS
446
212
348
457
S65( , -'»
',Aa/'O',
CONTINUE
CONTINUE
CONTTNITF.
STOP
)!;l'W
C
C
THIS SUBROUTINE DOES ADDITION FOR SUM AND SS IN EXPONENTIAL TERMS
C
SUBROUTINE ADEXP( N, XMAX. VALUE, SUM, SQ)
IMPLICIT REAL*8(A-H.O-Z)
TF(N.NE.l)GO TO 10
XKA}{= VAT .1fF.
300
301
302
303
304
305
306
307
308
309
310
311
312
313
10
SUM= 1. D0
sa= 1.00
GO TO 11
IY(XMAX.GE.VALUJ!:)GO 'fO.20
TRAN= XMAX- VALUE
IF(TRAN+175)21,21.22
21
SUM=O.DO
22
DT=DEXP( TRAN)
SUM= SIJM*DT
CONTINUE
GO TO 23
314
23
315
316
317
318
24
SQ=U"DU
GO TO 26
25
26
SQ=SQ:r.DT**2
319
320
:121
1}1'(
20
:122
28
CO~TUIUJ!:
If ( 8C . EtA. 0) GO TO 31
SUM=Sill1+SC
328
329
IF(2:r.ST+175)31.31,32
330
32
331
332
333
31
334
CONTI NIJE
XMAX=VALUE
CONTINUE
ST=VALUE-XMAX
SC=O.DO
It' ( ST. I.E. ( - 175» GO TO 28
8C=DEXP( 8T>
323
324
325
326
327
2*TRAN+ 175) 24; 24. 25
11
SQ.=SQ+SC**2
CONTINUE
CONTTNUE
RETTJRN
c
C
C
C
END
NEWTON-RAPHSON METHOD FOR MLE OF THE SHAPE PARAMETER OF WEIBULL
THIS SUBROUTINE IS KENT'S ADJUSTED FOR TYPE I CENSORING
33~
SUBROUTINE BETA< N8 IZE, NORIG, TRUN, X, 8UML, KCOUNT, BID
337
REAL*8 X(500).XL(SOO).XL2(SOO)
DATA NTOT/50/. FTOT./ 1. D- 10/ •BTOL/ 1.0-4/
336
338
339
340
IMPLICIT REAL*8 (A-H.o-Z)
DO 12 J=I,NST7.E
XL( 1)=DLOG(X( I»
77
341
342
343
344
345
346
347
348
349
350
12
C2=2.00
ta'a= •aU4)
Ffl=UFLOAT(NSIZ~)
KCOUNT=0
AVEL=SUML/FN
DO 225 LB= 1, 100
BH=DFLOAT(LB)/C2
SXB=~}. DO
SXRT .X= 0 . no
DO 185 L.J= I • NS T7.F.
351
352
353
354
355
356
357
358
359
360
361
XB=X(LJ):lC:lCHH
Sxu=SXU+XU
SXULX=SXULX+XU*XL( LJ)
185
362
363
364
365
366
367
368
369
370
371
372
373
XL2C J) =XJ.( Tl:I:XL( J)
CONTTNUF.
Cl=1.D9
190
195
200
CONTINUE
TXB=(NORIG-NSIZE)*TRUN**Bll
SXB=SXB+TXB
SXBLX=SXBLX+TXB*DLOG(TRUN)
FB=SX8LX/SXB-Cl/BH-AVEL
rFCFR)225,235.199
IF(LR-I) 195,195,205
HH=OH/C2
SXU::O.UO
SXULX=O.UO
DO 200 LJ=l,NSIZE
XB=X(W)**BH
SXB=SXB+XB
SXBLX=SXBLX+XB*XL(W)
CONTINUE
TXR=CNORIG-NSlZE)*TRUN**BH
SXR=SXR+TXR
374-
SXHLX=SXBLX+TXB*DLOC(TRUN)
375
376
377
378
379
380
Fll=SXULX/SXU-Cl/BH-AV~L
IF(Fll)210,23a,195
205
210
;181
382
383
384
3U5
366
387
388
389
390
DO 220 11= 1, NTOL
SXB=0.DO
SXBLX=O.DO
SXBL2X=0.DO
no 215 T..J= 1. NS rZE
XR= X( T•• J) *:lCRH
215
391
392
393
394
395
396
397
398
399
400
BII=rm-CP5
SXB=SXB+XB
SXULX=SXULX+XH*XL(LJ)
SXUL2X=SXUL2X+XU*XL2(LJ)
CONTINUE
TXB=(NORIG-NSIZE)*TRUN**BH
TI..=DLOG(TRUN)
SXB::SXB+TXB
Sh'RLX= SXBLX+TXB*TL
SXRI.2X= SXRI.2X+ TXB*TL*TL
FR= SXRr .X/SXR-C I /RH- AVF.T.
IF(DABS(FB).LT.FTOLlCO TO 235
FPll=(SXUL2X*SXH-SXBLX*SXBLX)/(SXR*SXB)+Cl/(BH*RH)
U·(lt'l'll.Et!.O)GO TO 235
DELTAB= 1"D/n'D
IF(DABS(DELTAB).LT.BTOL)GO TO 235
BH::BH-DELTAB
220
CONTINUE
78
401
402
403
404
405
406
407
GO TO 2::10
225
23~j
COt4T1t4Ul!:
KCOUt4T= 1
235
nll=0.DO
CONTINUE
REroRN
C
C
C
C
C
END
THE FOLLOWING SUBROUTINES ARE HARTER AND MOORE'S ADJUSTED FOR TYPE I
CENSORING
MR IS THE NUMBER OF OBSERVATIONS CENSORED FROM BELOW
FOR THIS WORK, OBSERVATIONS NEED NOT BE ORDERED
C
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
SUBROUTINE ALPHM( M. N. TIUffl, 1', C1, 1'1, At, INCAn, INCAn1, TllJ, ALJ, EL, J£J.J'fM>
IMPLICIT ~~*8(A-H.o-Z)
DIMENSION 1'(300) ,C(1100),THETA(1100> ,ALPHA(1100)
DIMENSION DLT(50).DLA(SO) ,AL(SO) ,DLC(50) ,CE(50) ,TH(50)
DATA MR/O/.SSl.SS2,SS3/2*I.DO,O.DO/,JI.JH/2*20/
C(
1)
=Cl
THETA( t> =1'1
ALl'llA( 1) =Al
J!:I'4=t4
86
87
88
109
110
89
111
139
117
110
135
119
120
121
136
122
123
124
El'1=l'1
ELNM=0.D0
Erm=~m.
MRP=MR+l
NM=N-M+ 1
DO 88 I=NM. N
F. 1= I
ELNM=ELNM+DLOC( EI)
l1"( I11l> 66,89,109
UO 110 1= 1, I11l
EI= 1
ELNM= ELNM- DLOG( E I)
DO 63 J= 1. 1100
IF(J-066.112.111
JJ=J-l
IFLJ-.J06, 139, 139
IF( (.f/.JR-.n /.JH) 6.6. 117
J2=J-2
J3=J-3
11"< SS 1) 119, 119, 118
D2T=Till:TA(JJ)-2.00*'l'1lJ!:TA(J2)+TlU!:TA(J3)
DT=THETA(JJ)-TIIETA(J2)
IF(D2T) 135,119,135
NT=DABS(DT/D2T)
GO TO 120
NT=999999
IF(SS2) 122,122.121
D2A= ALPHA< .J.T> -2. D0*ALPRA( .J2>+ALPRA< .13)
UA=ALPRA(JJ)-ALPHA(J2)
1l"( U2A) 136, 122, 136
NA= UAUS( UA/U2A)
GO TO !23
NA=999999
IF(SS3) 125,125.124
D2C=C(JJ)-2.DO*C(J2)+C(J3)
DC=C(JJ)-C(J2)
IF(CLJ.T>+5.D-5-T( 1» 140,125.125
a
.,
79
454
455
456
457
4a8
4a9
460
461
462
463
464
465
466
467
468
469
470
411
412
473
474
475
476
477
418
479
4LJ0
4tH
482
B3
e:
140
141
137
12:>
126
IF (C(JJ)-5.D-5) 125.125,141
IF( D2C) 137. 125. 137
NC=DABSCDC/D2C)
CO TO 126
NC=999999
NS=2*MIN0CNT,NA,NC)
11"( NS) 6,6, 142
IY(NS-999999} 13LJ,6,6
142
138
EliS=NS
127
THETACJ)=THETACJJ)
128
THETA(J)=TRETA(JJ)+CDT+.25D0*CENS+l.D0)*D2T)*ENS
THF.TA(.J) =DMAXt<TIlF.TA< J) , 1. D-4)
IF(SSl) 127, 127,128
GO TO 129
129
130
131
132
133
IFCSS2)13~,130,131
ALPHA( J) =ALPHA(
GO TO 132
.JJ)
ALPllA(J} =AU'llAe JJ)+(DA+.2GDo*e ENS+ I.DD}*D2A)*ENS
ALPllA( J) =DM.AXI ( AU'llA( J) , 1 . D-4)
IF(SS3) 133,133,134
CCJ)=C(JJ)
GO TO 112
134
6
7
C(J)=C(JJ)+(DC+.25DO*(ENS+1.D0)*D2C)*ENS
C(.n =DMAXI (C( J) .0. DO)
C(.n =OMlN1 (C(.n ,T( 1»
IF« l.OO-E.MRJ*CCJ)-TC 1» 112.6,6
Till:TA(J) =Tlf.El'AC JJ)
lit'< SS 1) 13, 13, 7
81=0.DO
DO 8 I=MRP,M
8
Sl=Sl+T(I)-C(JJ)
4:
486
487
73
4B8
74
THETACJ)=SI/(EM*ALPHA(JJ)
CO TO 13
CONTlNtJF.
CALL CAMSC ALPHAC .J.n • INCAM, CMA)
lY(INGAM.EQ.lJGO TO 999
IF(N-M+MFU66.18,74
489
490
491
4?2
4?3
494
495
496
491
49U
4??
500
501
502
503
504
505
506
501
50U
50?
510
KS=O
DO 100 K= 1 , 50
KK=K-l
GMAI=GAMI(CTRUN-C(JJ»/TBETAeJ),ALPHACJJ»
GMAI2=GAMI(CTCMRP)-CCJJ»/TBETACJ),ALPBACJJ»
DLTCK)=-EM*ALPHACJJ)/THETACJ)+SI/CTHETA(J)**2)+(EN-ElD*<TRUN-C(JJ)
1)**ALPHA(JJ)*DEXP(CC(JJ)-TRUN)/THETA(J»/CTBETA(J)**(ALPHA(JJ)+1.D
20) *C CMA-CMA 1» +F.MRULPHA( .J.n /THETA<.J) -EMR;I:( T( MRP) -C(JJ» **ALPHA( JJ
3} *DEXP( (C( J.n -T< MRP) ) /TRF.TAC.J> ) /C TRF.TA(.J) **( ALPHA< .J.J> + t .00) *CMA 12)
TU< KJ =THETA( J)
l1"(DLT(KH 101, 13, 102
101
102
103
104-
KS=KS-l
IF(KS+K) 105.103,105
KS=KS+ 1
IFC KS-IO 105.104. 105
THETA(J)=.5D0*THCK)
CO TO 108
TRF.TA( .n = 1. 5D0*1'H( ro
CO TO 108
105
1U6
lYCDLT(K>*DLTCKK»101,13,106
KK=KK-l
GO TO 105
107
TIIETA( J) =m( 10 +DLT( K):l:( Tll( KJ -Tll( KIO),1( DLT( KK> -lJLT( 10)
80
511
512
513
~14
110'( DAllS( TllliTA( J) -Tll( 10 ) -1. D-4) 13, 13, 108
108
13
14
515
516
US
517
518
519
16
:>20
521
522
523
CONTINUE
ALPHACJ)=ALPHACJJ)
IF(882)44,44,15
SL=O. DO
00 16 I=MRP. M
SI.=SI.+OLOGC 1'C T> -CC JJ) )
K..';;=0
DO 43 K= 1 ,50
KK.=K-l
CALL GAtJSC ALl'UAC J) , l1'4GA1I, GnAJ
IF(INGAM.EQ. l)GO TO 999
IFCN-M+MR)66,30,21
524
21
525
526
30
527
528
529
76
77
:>30
531
532
:.J2
533
GMAI=GAMICCTRUN-CCJJ»/THETACJ),ALPa~CJ»
G~~12=GAMICCTCMRP)-CCJJ»/THETACJ),.\Lpa\CJ»
DG=DGAM(ALPHA(J»
TFCN-M+MR)66,77.32
DLAC K) =-F.mm.oc-:c TIJF.1'AC.J» +SI.-F.lf*J)G/GMA
GO TO 78
GALL XJ)GAnl( CTllUN-GCJJ) )/THETA< J) ,ALPHAC J) ,DCI, TNCAMI>
I1o'C l1'4GAnl .l!:Q. 1) GO TO UUU
CALL XDGAMH (T( Mill') -C( JJ» /TllliTA( J) ,ALj'UAC J) ,0012, INGA1Il>
IFC INGAMI. EQ. 1) GO TO 888
534
38
535
78
537
39
KS=KS-I
11"( KS+K)70,41, 70
40
KS=KS+l
1J.t'( KS-K) 70,42,70
ALI'IlAC J) = . 5DO*ALC 10
GO TO 43
ALPHACJ)=1.5D0*ALCK)
GO TO 43
IFC OLA< K) *DLAC KJ{) )72.44.71
536
538
:>39
540
541
41
542
543
544
545
42
70
546
71
547
548
72
549
550
551
552
553
554
555
556
43
44
85
45
143
79
C(J)=C(JJ)
IF C883) 112, 112, 45
IFCl.D0-ALPHACJ»79,143,143
IFC881+882)57.57.79
IFCN-M)66,83,46
U3
KS=O
DO 56 K= 1,50
69
8R=0.D0
DO 69 I=MRP, M
8R=8R+l.D0/CTCI)-CCJ»
IFCN-M+MR)66.80.81
DLC( TO =( 1. D0-ALPHA( J) ) *8R+E:M/THETAC J)
KK=K-l
565
566
80
567
568
81
569
KK=KK- t
GO 1'0 70
ALPHA( J) =AU K>+DLAC TO *C AI.< TO -AL< KIO) /C OI.A( KK) -m.AC TO)
IFCDAllSCALPUA(J)-ALCK»-I.D-4)44,44,43
G01'4T11'4Ul!:
CONTINUE
CALI. GAJlfS( ALPHAC.J) • TNGAM, GMA)
IF( INCAM. F.Q. t> GO 1'0 999
560
561
562
563
564
AI.< TO =ALPHAC J)
TFC OLAC K»
:19,44.40
46
557
558
559
DL~CK)=-EM*DLOGCTHETA(J»+8L-EN*DG/GMA+CEN-El.D*(DG-DGI)/CGMA-GMAI)
I+El~DLOGCTHETACJ»+Elm*DGI2/GMAI2
GO TO 82
CMAI=CAMI « l'R.UN-<:(.J» /THF.TA<.J) ,ALPHA(.J) )
GMAI2=GAl'1I ( (T( MRP) -CC J) ) /THETA(.J> ,ALPHA(.J»
81
DI.CC K) =C1. D9-ALPHAC ,n) *:;;R+C EM-F.MR.) /THETALJ)+( EN-E1"D *( TRUN-C( J» **(
lALPHAC.J) -1, D0) *DEXPC -C TRUN-CC ,n ) /THETAC ,J> ) /C THF.TAC ,J> **ALPHAC ,J) *( GM
a70
2A-GnAl»-~MH*(TCMl~)-CCJ»**CALPHACJ)-I,D0)*DEXPC-CT(MRP)-C<.J»/TJ{
571
572
373
374
375
576
577
578
579
500
581
582
583
384
585
586
a87
a88
5U9
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
696
607
60U
609
610
611
612
613
614615
616
617
61U
619
620
621
622
623
624
82
51
90
91
52
53
54
55
67
68
56
57
112
Ita
114
113
116
58
92
96
98
99
3ETArJ»/(T~TA(J)**AW'HA(J)*GnA12)
CEC I{) =CC J)
IFCDLC(I{»90.112,91
KS=KS-l
IF(KS+K)54,52.54
KS=KS+l
IFCKS-K)54.53,54o
CC ,J> = . 5D9*CF.< K)
GO TO 68
CCJ)=CECK)+,5D0*<TC1)-GECK»
GO TO 6U
IF(DLC(IO*DLC(KK)67,l12,55
KK=KK-l
.
GO TO 54
C( J) =CE( K) +DLC( K) *( CE( K) -CE( 100 ) /( DLC( KIO -DLC( K) )
IF(DABSeC(J)-CEeK»-1.D-4)112,l12,56
CONTINUE
CO TO 112
C(.J>=TCl)
U·CMlU66,l13.58
00 115 l=l,n
IFC C( J) + 1. D-4-T( I» 116. 114, 114
MR=MR+l
C( 1) =Te 1)
IFeMR)66,58,86
SI=O, D0
:;;1.=0.D9
00 1)2 I=MRP, M
Sl=Sl+TC O-CC,J>
SL=SL+ULOGCTCI)-CCJ»
CALL GAmS( AU'HAC J) , l~GAl'1, GMAJ
U'( INGAl'l, EQ, 1> GO TO 999
IF(H-M+MR)66.98,96
G~~I=GAMICeTRUN-C(J»/TDETACJ).ALPllACJ»
G~~I2=G"\MI«TeMRP)-C(J»/THETA(J),ALPHAeJ»
EL=ELNM-EM*DLOGeGMA)-EM*ALPHA(J)*DLOG(TBETA(J»+(ALPK\(J)-l.D0)*SL
1 -Sl/THETA< J)
IFCN-M+KR)66.100.1)1)
CONTINUE
~L=~L+(EN-EM)*CDLOGCCMA-GMAI)-DLOGCGMA»
100
60
61
62
63
4
999
888
1+KMll*ALl'1!AC J) *ULOG( TID:TA< J) ) +El'IR;IcDLOCC GMAI2)
CJ=C(J)
mJ=TIIETAeJ)
ALJ= ALPHAc J)
IF(J-2)63,63.60
IF(DABSeC(J)-ceJJ»-I.D-4)61.61.63
IFCDABSCTHF.TACJ)-THETACJJ»-1.D-4)62,62,63
IFC DAB:;;C ALPHAC ,n -ALPHAC J.n ) - 1. D-4o) 4.4.63
CONTJNUF.
CO~Tl~UE
INGA.Ml=0
INGAl'l=0
GO TO 66
CONTINUE
INGl\MI=O
GO TO 661
INGAM=O
82
625
626
627
628
629
ALJ=0.D0
THJ=0.D0
EL=0.D0
RETURN
F.ND
661
66
SUBROUTINE CAMS(Y,INCAM,CMA)
630
631
632
633
634
635
636
637
2
INGAM=1
638
3
RF.TtmN
Ull'LlC IT R.lf.AL*lH A-a, o-Z)
GnA=0.U0
IflGAn=0
IF(Y-57)l,I,2
GM= DGAMM!\( Y)
1
GO TO 3
639
F.ND
640
641
642
643
644
645
646
647
648
to'UNCTiON DeAJlf( Y)
IM1'LIC 1T lU!:.AL*U< A-ll, O-~)
z=y
DG=0.D0
IF(Z-9.D0)2,2,3
DG=DG-l.D0/Z
Z=Z+I.DO
GO TO 1
1
2
3
DGAM=DG+(7o-.~D0)/Z+DLOG(Z)-I.D0-1.D0/(12.D0*Z**2)+t.D0/(120.D0*Z**
1
2
649
650
651
652
653
654
655
4)-1.D0/(252.D0*Z**6)+I.D0/(240.D0*7.**8)-I.00/(132.D0*Z**t0)
+691.00/(32760.D0*7o**12)-I.00/(12.00*7.**14)
CALL GAMS(Y,lNGAM,GAMY)
DGAM=DGAM*GAnY
RETURN
END
FUNCTION GAMI(W.Z)
IMPLICIT REAL*8(A-H.0-Z)
RF.AI.*4 W4. 7.4 • PROB. GAMMA
W4=W
656
657
65U
659
660
661
CALL mJGAM( W4, Z4, PROD, IER>
GAMl=GAI'1l'IA( Z4) *l'llOll
662
663
664
665
666
667
666
669
670
671
672
673
674
675
676
677
676
SUBROUTINE XDGAMI (W. Z. DGAHI. NGDINC)
IMPLICIT REAL*8(A-H,O-Z)
DIMF.NSION U(100).V(100)
Wl.=Dr.OG( W>
XX=Z*WL
lY(XX.GT.170)GO lU 8
U( 1) = W**Z*WL/Z
XX=Z*WL-2*DLOG(Z)
IF(XX.GT.170)GO TO 8
V( 1> =W:uz/Z**2
SU=U( 1> -V( 1)
DO 1 L=2.100
1.1.=1.-1
F.I.I.= I.I.
XX= DABS ( U( LL> )
U(L)=0.
1l"(XX• .I!:Q.0.U0) Ga TO 5
~4=Z
RETURN
END
83
679
680
661
662
683
684
685
686
687
688
689
690
691
5
XX=DLOG(~+WL+2.DO*DLOGCZ+ELL-1.DO)-2.D0*DLOG{Z+ELL>-DLOG(ELL)
TF(XX.T.E.(-175)H~O
TO
IF(XX.CT.170)CO TO R
V( L)
=-
1
W*( Z+ELL-l. 00) **2/(
(Z+ET.J.) **2*EI.J.> *V( LT.)
TUV= U( L) -V( L)
6~2
693
694
695
696
697
698
699
700
701
702
703
704
XX=Dr.OG( XX>+WL-m.OG! EU.l+DLOGCZ+ELL-I. DO)-DLOGCZ+ELL)
IF<XX.LE.(-175»CO TO 5
lY(XX.GT.170)CO TO 8
U( L) =
(Z+ELL-l. 00) /( Z+ELL) *( -U< LL) /ELL*W>
CONTIl'WE
XX=DABS( V( LL»
IF(UCLL).EQ.0.D0.AND.XX.EQ.0.D0)GO TO 6
VCL)=O.D0
IFCXX.EQ.O.DO) GO TO 1
UST=TUV/SU
IF(DADS(RST).LE.(1.D-15»GO
1
6
7
8
9
SU=SU+UC L) -V( L)
CONTINUE
CONTINUE
OOAMI=SU
NcDrNC=0
CO TO 9
1~
7
DCAMI=0.DO
NGIHNC= 1
Il1!:TUIU4
END
705
706
707
708
709
710
711
SUBROUTINE LOGHM(M.N.TRUN,T,TAl,UI,Sl,EnUJ,SIGnAJ,EL)
IMPLICIT RL\L*8(a-H.0-Z)
DIMENSION T(500).EMU(550>,SIGMA(550) ,DM(50) ,EUM<50> ,DS(50),SIG(50)
DTMENSION DT(50).TAC50).TAU<550)
DATA SSl,SS2.SS3i2*1.00,.) .. DO/
712
SlGM.A( 1) =Sl
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
7a2
73a
734
735
736
737
TAU(I)=TAI
EM.U( 1) =Ul
MIl=0
EN=N
EM:=M
83
7
6
EMR=MR
MRl=MR+l
ELNM= -. 5DO*( EM--EMIl> *DLOG( 6 • 2831853071800)
NM=N-M'+ 1
DO 8 I=NM. N
EI= l
ELNM=ELNM+DLOG( EO
U'( M.ll. E(,L 0HX) TO ~
DO 6') 1= i.MIl
EI= I
69
9
65
ELNl'1= ELNM-DLOG( E I )
DO 30 J= 1. 550
IF(.J- t) 34, 22.65
.J.J=.J-l
EMU< .J> =F.MlH .J.n
1.F(SSI>34, Hi,10
10
11
70
12
SI=0.D0
DO 11 1 =MIll , .M
S 1=S 1+DLOG< '!'( 0
IF ( MR) 34 , 70, 13
IF( N-M) 34,12,13
EMUC J) =S l/EM
-TAU( JJ) )
84
738
739
740
741
742
743
144
745
746
747
148
749
750
751
752
133
154
155
756
757
758
759
760
761
762
163
764
765
166
767
168
769
770
711
112
773
174
775
176
177
778
779
700
781
182
183
184
183
786
787
788
7119
790
791
792
193
194
195
196
197
13
GO TO 15
KS=0
DO 14 K= 1,50
KK=K-l
X=(DLOG(TRUN-TAU(JJ»-EMU(J»/SlGMA(JJ)
FL=.39894022804001D0*D~~(-.5D0*X**2)
71
12
35
36
37
38
39
400
41
14
15
16
11
13
18
19
YGS=-X
CALL MDNORD(YGS.FU)
nK< K) =( ~I-< EM-EMR) *EMU( J» /SIGMA( JJ) **2+< EN-Elf) *FL/( SIGlfA( JJ) *FU>
IF(MR.)34,72.71
X= (DLOC( T( MRl> -TAU( .J.n) -F.M'lJ!.n) /~ IGMALJ.n
t'L= , 391194228040ID0*DEXP( -. 5D0*X**2)
CALL MUl'WIW( X,I"U)
DM( K) =DM< K) -I!:Mll*Jl:L/( S IGMA< JJ) *I"U)
EUM(K)=EMU(J)
IF ( DM( 10 ) 35, 13, 36
KS=KS-I
IF( KS+IO 39,37.39
I<S=KS+ 1
IF( KS-TO 39.38.39
EMU( .1) =F.UK( TO -. 5n0*nAR~( F.IJM( K) )
GO TO 14
!!:MU(J)=Jl:UM(K)+.5D0*DAllS(EUM(K»
GO TO 14
IF(DM(K)*DM(KI{»4J., J.5,40
KK=KK-l
GO TO 39
EMU( J) =EUM( K> +DM( K> *< EUM( 10 -EUM( KIO) /( DM( KIO -DM( 10)
IF(DABS(EMU(J)-EUM(IO)-1.D-4)15.15.14
CONTTNITF.
EM'lJ< 1 ) =- F.M'lJ< 1)
GO TO 9
SIGMA(J)=SIGMA(JJ)
11"( SS2) 34,48,16
S2=0.D0
DO 17 I=MRl, M
S2=S2+(DLOG(T(I)-TAU(JJ»-EMU(J»**2
IFOlR)34.73.19
IF(N-M)34.1B.19
~ TGMA ( .J) = n~Q.RT( S2/EM>
co TO 48
KS=0
DO 21 K= 1,50
KK=K-l
X= (DLOG( TIlUIFl'AU( JJ) ) -!!:MU( J) ) /S I(;MA( J)
FL=.398942280401D0*DEXP(-.5D0*X**2)
YGS=-X
CALL MDNORD( YGS. FU>
DS(K)=-(EM-EMR)/SIGMA(J)+S2/SIGK~(J)**3+(EN-EM>*X*FL/(SIGMA(J)*FU)
74
75
42
43
IF( MR.):H. 75.74
X= (m .OG( T( MR.1 ) -TAU< .J.n ) -F.MtJ<.n ) /S IGMA< J)
FL=.39R9422H040100*nF.xp(-.500*X**2)
CALL MUl'WRD( X, l"U)
DS( K) =DS( K) -1!:M1l*X*t'L/( SIGMA( J) *1"0)
S 1(;( K) =S IGfI.A.( J)
IF(DS(K»42,48,43
KS=KS-l
IF(KS+K) 406. 440,46
KS:KS+ I
IF( KS-K) 46.45.46
_
~
85
796
799
600
801
802
803
804
805
806
807
B06
609
610
811
812
813
814
44
45
46
47
20
21
48
49
81~
816
817
818
619
020
821
822
823
824
825
826
827
828
629
630
831
832
833
834
50
76
51
52
77
53
54
83~
8.'16
837
838
639
640
041
842
843
844
845
846
847
848
849
650
051
852
853
854
855
856
8.'"
55
56
57
58
59
60
61
22
70
79
SlGMA(J)=.5DO*SIC(K)
GO '1'0·21
SlGMA(J)=1.5U0*SlG(KJ
GO TO 21
IF( DS( 10 *DS( KIO ) 20,48,47
KK=KK-1
GO TO 46
SlGMA(.J) =SIG( IO +DS( IO *( SIG( IO -SIG( KIO) /( DS( KIO -DS<IO)
IF( DARS( SIGMA(.n -SIG( TO) -1. D-4) 48. 48. 21
CONi' I NlJF.
TAU( J) =TAU( .J.n
!JI' ( SS3) 34. 22. 49
KS=O
DO 61 K= 1,50
KK=K-1
SR=0.D0
S3=0.00
00 50 I=MR.t. M
SR=SR+l.n0/(T(I)-TAU(J»
S3=S3+( DI.OC( T< J) -TAU(.n ) -EMtJ< J) ) /( '1'( J) -TAU< J) )
IF( JllR) 34.76.52
110'( 1'4-M) 34. 51, J2
un KJ =S3/S 1GMA( J) **2+S11
GO TO 53
X=(DLOG(TRun-TAU(J»-EMU(J»/SIGMA(J)
FL=.39894228040100*DEXP(-.500*X**2)
YGS=-X
CALL MDNORD(YGS,FU)
DT( TO =S3/S ICMA< J) **2+SR+( EN-EMl *FV( (T( Ml-TAU< J» *SIGMA( J) *FUl
IF(MR)34,l'i3.77
X= (DLOC( '1'( MRl) -TAu<.J> ) -F.JIflH.n) /S IGMA(.n
YL=.398942280401DO*DEXP(-.5DO*X**2)
GALL M.U1'4OlW< x, 1"U)
DT( K) =un K) -EMn*I"V( ('1'( mU)-TAU( J» *SIGMA( J)*1"U)
TA( 10 =TAU( J)
IF ( DT( K> ) 54 , 22,55
KS=KS-l
IF( KS+IO 58. 56,58
l{S=KS+ 1
IF< l{S-ro 58, 57,58
TAU< .J) =. 5D~)*TA< l{)
GO TO 61
'fAU(J)=.5UO*('fA(K)+T(I»
GO TO 61
IF( DT( K) *DT( KIO) 60, 22,59
KK=KK-1
GO TO 58
TAU( J) =TA( IO +DT( IO *( TA( IO -TA( KIO ) /( DTe KK) -DT( IO)
IF(DABS(TAU(J)-TA(IO)-I.D-4)22.22.61
CONTINlJF.
S2=0.D0
SL=0.DO
110'( n 1) -TAU< J) -J. D-(;) 7B, 78, 84
11"( ml> 34,79,64
MR= 1
EMU( 1) =EMU( J)
SIGMA(l)=SIG~IA(J)
TAU( 1) =T( 1)
DO 80 1=2. M
IF(T(I)-T(1»34,80.83
86
858
859
860
861
862
863
864
865
866
067
868
869
870
871
872
873
874
875
876
077
078
879
880
881
882
883
80
l'11l=Mll+ 1
ro TO
34
23
DO 23 I=MRl,M
SL=SL+DLOG(T(I)-TAU(J»'
S2=S2+(DLOG(T(I)-TAU(J»-EMU(J»**2
EL=ELNM-(EM-EMR>*DLOG(SIGMA(J»-S2/(2.D0*SIGMA(J)**2)-SL
81
X= (DLOCo( T( MR1 ) -TAU< .J> ) -F.MtJ< .J> ) /8 TCom(.n
CALLMDNORD(X,FU)
84
TF(MR)34.82.81
~L=KL+~M1t*ULOG(FU)
82
24
25
27
66
29
26
30
31
34
lit' ( N-ID 34,27,24
X=(DLOG(TRUN-TAU(J»-~MU(J»/SIGMA(J)
YGS=-X
CALL MDNORD(YGS.FU)
EL=EL+(EN-EM)*DLOG(FU)
CONTINUE
TF( .J-1 ) 34.30.66
TF( DARS( F.M1J<.J> -F.M1H .T.J> ) -1. D-4) 29.29. :10
IF( DABS ( S ICJllA( J) -8 leMA( .J.J> ) -1. D-4) 26.26.30
IF(UAllS(TAU(J)-TAU(JJ»-I.D-4)31,31,30
COlnINU~
CONTINUE
SIGMAJ=SIGMA(J)
EMUJ=EMU(J)
RETURN
END
Appendix 9.2.  Graphs of the Nine Densities of Table 3

[Figure panels: density plots of the Weibull W(1,β), lognormal LN(1,σ), and gamma G(1,α) families for the shape values of Table 3.]