OPTIMAL TESTS FOR SEPARABLE FAMILIES
OF HYPOTHESES

by

Robert Richard Starbuck

Institute of Statistics
Mimeograph Series No. 978
Raleigh - January 1975
TABLE OF CONTENTS

1.  INTRODUCTION

2.  REVIEW OF LITERATURE

3.  GENERAL RESULTS
    3.1  Introduction
    3.2  Some Properties of Conditional Probability Integral Transformations
    3.3  Most Powerful Similar and Most Powerful Invariant Tests for Separable Families
    3.4  Optimal Classification Rules

4.  APPLICATIONS
    4.1  Introduction
    4.2  The U.M.P.S.-α Test for the Exponential (θ,λ) Versus the Normal (μ,σ²) Distribution
         4.2.1  Exponential class as the null class of distributions
         4.2.2  Normal class as the null class of distributions
         4.2.3  Discrimination between the Exponential (θ,λ) and Normal (μ,σ²) distributions
    4.3  The U.M.P.S.-α Test for the Exponential (0,λ) Versus the Uniform (0,θ) Distribution
         4.3.1  Exponential class as the null class of distributions
         4.3.2  Uniform class as the null class of distributions
         4.3.3  Distribution of T(0)_{e,u} when X_1, ..., X_n are i.i.d. Uniform (0,θ) random variables
         4.3.4  Distribution of T(0)_{e,u} when X_1, ..., X_n are i.i.d. Exponential (0,λ) random variables
         4.3.5  Critical values of T(0)_{e,u} and power for various sample sizes
         4.3.6  Discrimination between the Exponential (0,λ) and Uniform (0,θ) distributions
    4.4  Tests for the Exponential (0,λ) Versus the Lognormal (0,σ²) Distribution
         4.4.1  σ known
         4.4.2  Critical values of T_{e,ℓ}(σ) and power for n = 10, 20
         4.4.3  σ unknown
         4.4.4  Critical values of T*_{e,ℓ} and power for n = 10, 20
    4.5  The U.M.P.S.-α Test for the Uniform (θ_1,θ_2) Versus the Normal (μ,σ²) Distribution
         4.5.1  Uniform class as the null class of distributions
         4.5.2  Normal class as the null class of distributions
         4.5.3  Critical values of T_{u,n} and power for various sample sizes
         4.5.4  Discrimination between the Uniform (θ_1,θ_2) and Normal (μ,σ²) distributions
    4.6  The U.M.P.S.-α Test for the Uniform (θ_1,θ_2) Versus the Exponential (θ,λ) Distribution
         4.6.1  Uniform class as the null class of distributions
         4.6.2  Exponential class as the null class of distributions
         4.6.3  Distribution of T_{e,u} when X_1, ..., X_n are i.i.d. Uniform (θ_1,θ_2) random variables
         4.6.4  Distribution of T_{e,u} when X_1, ..., X_n are i.i.d. Exponential (θ,λ) random variables
         4.6.5  Critical values of T_{e,u} and power for various sample sizes
         4.6.6  Discrimination between the Uniform (θ_1,θ_2) and Exponential (θ,λ) distributions
    4.7  The U.M.P.S.-α Test for the Uniform (0,θ) Versus the Right Triangular (0,θ) Distribution
         4.7.1  Right triangular class as the null class of distributions
         4.7.2  Uniform class as the null class of distributions
         4.7.3  Distribution of T_{u,r} when X_1, ..., X_n are i.i.d. right triangular random variables
         4.7.4  Distribution of T_{u,r} when X_1, ..., X_n are i.i.d. Uniform (0,θ) random variables
         4.7.5  Critical values of T_{u,r} and power for various sample sizes
         4.7.6  Discrimination between the Right Triangular and Uniform distributions
    4.8  The U.M.P.S.-α Test for the Pareto Versus the Lognormal Distribution

5.  SUMMARY

6.  LIST OF REFERENCES
1.  INTRODUCTION
The problem of deciding which family of distributions describes
the behavior of a random sample of observations has been examined
rather extensively in the goodness-of-fit and hypothesis testing
literature.
Given a null class of distributions, if particular
alternative families of distributions are considered, then a
question arises as to the largest possible power a statistical test
may achieve.
When a particular alternative distribution or
alternative family of distributions is considered, the testing
problem is essentially that of testing separable hypotheses.
For a composite null hypothesis class, tests can be constructed
as a function of conditional probability integral transformations
(C.P.I.T.'s), which have been considered in O'Reilly and
Quesenberry [16].
Under general conditions these transformations
map a random sample from an unspecified member of a continuous
parametric class of distributions to a smaller set of independent
random variables with uniform distributions on the unit interval.
Then any statistic which measures the divergence of the transformed
values from a uniform pattern may reasonably be considered as a
goodness-of-fit test statistic for the original composite null
hypothesis class.
Many such tests may be constructed and, indeed,
many exist in the extensive goodness-of-fit literature.
In this work, it is shown that a most powerful similar test
may be obtained as a function of C.P.I.T.'s for a composite
goodness-of-fit null hypothesis, and sufficient conditions are given to
assure that such a test is uniformly most powerful against a
composite alternative class.
It is further shown under general
conditions that this test identifies with certain uniformly most
powerful invariant tests.
2.  REVIEW OF LITERATURE
The problem of testing H: P ∈ 𝒫₀ versus K: P ∈ 𝒫₁, where
𝒫₀ and 𝒫₁ are separable families of distributions, separable in the
sense that an arbitrary member of one family cannot be obtained as
the limit of members of the other family, was first considered in
detail by D. R. Cox [5].  Cox was the first writer to clearly identify
and point out the importance of tests of separable families and give
specific tests.  The term "separable" is due to him.
Cox developed a
general method for this testing situation based on the logarithm of
the Neyman-Pearson maximum likelihood ratio (M.L.R.).  The statistic
he examined was

    T = ℓn M.L.R. − E_θ(ℓn M.L.R.) |_{θ = θ̂} ,

where

    M.L.R. = sup_θ f(x_1, ..., x_n; θ) / sup_β g(x_1, ..., x_n; β) ,

and f and g denote the probability density functions arising from
𝒫₀ and 𝒫₁, respectively.  Cox examined the asymptotic properties of
this test, and, in particular, the asymptotic variance, in order to
construct a statistic whose asymptotic distribution is standard normal.
In [6], he develops the test for H: Poisson versus K: Geometric.
Jackson [13] investigated the adequacy of Cox's results for
H: Lognormal versus K: Exponential and derives the power of the test.
Jackson also compares the test with other tests and derives the test
for H: Lognormal versus K: Gamma.
Using the same principle,
Atkinson [2] developed a test for a mixed model including a set of
hypothesized families of distributions in order to determine which
family adequately describes the data.
The M.L.R. statistics proposed by Cox lack the property of
invariance, in general.  Lehmann [14] gave the general theory of
uniformly most powerful invariant (U.M.P.I.) tests and gave integral
expressions for the location-scale parameter case.  A great deal more
reduction is required for particular cases.  In two papers, Uthoff
[20, 21] derived the U.M.P.I. test for testing two-parameter Normal
versus Uniform, Normal versus Exponential, Uniform versus Exponential,
and for Normal versus Double Exponential.  Uthoff also shows that the test
for H: Normal versus K: Double Exponential is asymptotically
equivalent to the M.L.R. test and to Geary's test [11].  In a series
of papers, Dyer [8, 9, 10] investigated various test statistics,
including the U.M.P.I. and M.L.R. test statistics, and compared their
relative efficiencies from a discrimination point of view for several
alternative families of distributions.  Antle, Dumonceaux, and Haas
[1] examined the M.L.R. test for several location-scale parameter
families and compared its power with the power of the U.M.P.I. test.
Their recommendation for using the M.L.R. test instead of the U.M.P.I.
test, when the two differ, is based on the ease of computation and
relatively good performance of the M.L.R. test with respect to the
U.M.P.I. test.  Dumonceaux and Antle [7] followed with the M.L.R.
procedure for discriminating between the lognormal and Weibull
distributions.
It should also be pointed out that many goodness-of-fit tests
are also tests for separable hypotheses, and that even though it would
intuitively be expected that tests utilizing the knowledge of the
alternative class would have better power than those tests that do
not, there is, in fact, no proof that this will occur.
Note, for
example, the remarkably strange behavior of some goodness-of-fit tests
reported by Dyer [8, 9, 10].
3.  GENERAL RESULTS

3.1  Introduction

Let 𝒳 denote a Borel set of real numbers, 𝒜 the Borel subsets of 𝒳,
and X = (X_1, ..., X_n) a vector of independent and identically
distributed (i.i.d.) random variables, each distributed according to an
absolutely continuous distribution P on the Borel space (𝒳, 𝒜); further,
suppose that P is a member of a parametric class of distributions
𝒫 = {P_θ ; θ ∈ Ω} defined on the sample space (𝒳, 𝒜).  The set Ω is
assumed to be a K-dimensional Borel set with elements θ = (θ_1, ..., θ_K).

Also, put 𝒳ⁿ = 𝒳 × ⋯ × 𝒳, 𝒜ⁿ = 𝒜 × ⋯ × 𝒜, and Pⁿ = P × ⋯ × P, P ∈ 𝒫;
i.e., Pⁿ is the product measure on (𝒳ⁿ, 𝒜ⁿ) corresponding to P.  The class
of product measures on (𝒳ⁿ, 𝒜ⁿ) is 𝒫ⁿ = {P_θⁿ ; θ ∈ Ω}, which is also
written as {Pⁿ ; P ∈ 𝒫}.  It is also assumed that there exists a
K-dimensional sufficient statistic T = (T_1, ..., T_K) for θ (or P).

Let g: 𝒳 → 𝒳 be a one-to-one transformation, and let gⁿ be the
corresponding one-to-one transformation of 𝒳ⁿ onto 𝒳ⁿ defined by
gⁿ(x_1, ..., x_n) = (g(x_1), ..., g(x_n)).  For a given gⁿ, suppose there
exists a function ḡ: Ω → Ω (or ḡ: 𝒫ⁿ → 𝒫ⁿ) such that

    P_{ḡθ}(X ∈ gⁿA) = P_θ(X ∈ A)   for every A ∈ 𝒜ⁿ .

Let G be a transformation subgroup on 𝒳, and for g ∈ G let ḡ be defined as
above.  Let G be such that the set of transformations Gⁿ and the
corresponding set of transformations Ḡ are transformation groups on 𝒳ⁿ and
Ω, respectively, Gⁿ denoting the corresponding product transformation
group on 𝒳ⁿ.
Definition

A transformation group on a space is said to be transitive if
the maximal invariant of the group is constant on the space (cf.
Lehmann [14], p. 216).

Denote by 𝒜_S the sub σ-algebra of 𝒜ⁿ induced by a statistic S;
by h_2 ∘ h_1 the composition of a function h_1 with a function h_2;
and by I_A the indicator function of a set A.  With the usual abuse of
notation the same symbol, g or g⁻¹, will be used to denote a point
function and the corresponding set function.

Lemma 3.1

For g: 𝒳 → 𝒳, one-to-one, and S any statistic defined on 𝒳ⁿ,

(3.1.1)    P_θ(A | 𝒜_S) = P_{ḡθ}(gⁿA | 𝒜_{S∘g⁻ⁿ}) ∘ gⁿ   a.s.

Proof

The function gⁿ establishes a one-to-one correspondence between 𝒜_S and
𝒜_{S∘g⁻ⁿ}.  Thus P_{ḡθ}(gⁿA | 𝒜_{S∘g⁻ⁿ}) ∘ gⁿ is 𝒜_S-measurable.  In fact,
for B ∈ 𝒜_S,

    ∫_B P_{ḡθ}(gⁿA | 𝒜_{S∘g⁻ⁿ}) ∘ gⁿ dP_θ = ∫_{gⁿB} P_{ḡθ}(gⁿA | 𝒜_{S∘g⁻ⁿ}) dP_{ḡθ}(y) = P_{ḡθ}(gⁿA ∩ gⁿB) = P_θ(A ∩ B) .

By the Radon-Nikodym theorem, a.s.,

    P_{ḡθ}(gⁿA | 𝒜_{S∘g⁻ⁿ}) ∘ gⁿ = P_θ(A | 𝒜_S) .
Lemma 3.2

If T is a sufficient statistic for Ω (or 𝒫), and Gⁿ is a
product transformation group on 𝒳ⁿ that induces a transitive group
Ḡ on Ω, then the distribution function of the conditional distribution on
𝒳ⁿ for fixed T is invariant under Gⁿ, i.e.,

(3.1.2)    F(x_1, ..., x_n | 𝒜_T) = F(gx_1, ..., gx_n | 𝒜_{T∘g⁻ⁿ}) ∘ gⁿ   a.s. Pⁿ,  ∀ g ∈ G (gⁿ ∈ Gⁿ) .

Proof

Let θ ∈ Ω be fixed and gⁿ ∈ Gⁿ.  Then there exists a θ′ ∈ Ω such that for
the corresponding ḡ, θ′ = ḡθ.  Let x = (x_1, ..., x_n) be an element of 𝒳ⁿ
and J_x = {(y_1, ..., y_n) ; y_i ≤ x_i , i = 1, ..., n}.  Then

    F_θ(x_1, ..., x_n | 𝒜_T) = P_θ(J_x | 𝒜_T)
        = P_{ḡθ}(gⁿJ_x | 𝒜_{T∘g⁻ⁿ}) ∘ gⁿ   a.s. P_θⁿ ,   by Lemma 3.1,
        = P_{θ′}(gⁿJ_x | 𝒜_{T∘g⁻ⁿ}) ∘ gⁿ
        = F_θ(gx_1, ..., gx_n | 𝒜_{T∘g⁻ⁿ}) ∘ gⁿ   a.s. P_θⁿ ,   by sufficiency of T.

By the sufficiency of T, the subscript θ can be omitted on F, leaving

    F(x_1, ..., x_n | 𝒜_T) = F(gx_1, ..., gx_n | 𝒜_{T∘g⁻ⁿ}) ∘ gⁿ   a.s. Pⁿ ,

where the exceptional set may depend on gⁿ, and thus F is almost
invariant under Gⁿ.  However, since 𝒫ⁿ is a dominated family on a
Euclidean space, and both 𝒳ⁿ and Ω are Euclidean sets, it follows by
Lehmann [14], Theorem 4 and discussion on p. 226, that the exceptional
set does not depend on gⁿ.
3.2  Some Properties of Conditional Probability Integral Transformations
Conditional probability integral transformations are introduced
in O'Reilly and Quesenberry [16], and extended to a larger collection
of classes of distributions in Quesenberry [17].  Some basic properties
of the transformations given in [16] are developed here.  The same
results can be obtained for the transformations in [17] by the same,
or very similar, arguments.

Put

(3.2.1)    u_j = u_j(x_1, ..., x_n) = F(x_j | T = t) ,   j = 1, ..., n−K ,

and u(x_1, ..., x_{n−K}) = (u_1, ..., u_{n−K}).  In [16] it is shown that if
the conditional distribution of X_1, ..., X_{n−K} given T is absolutely
continuous, then (u_1(X_1), ..., u_{n−K}(X_{n−K})) are independently and
identically distributed U(0,1) random variables.  From this and a result
of Basu [3], the next theorem is immediate.

Theorem 3.1

If T = (T_1, ..., T_K) is a complete and sufficient statistic for
Ω, and if the conditional distribution of (X_1, ..., X_{n−K}) given T
is absolutely continuous, then (U_1, ..., U_{n−K}) and (T_1, ..., T_K)
are independent vectors.

This theorem has important applications for constructing inference
procedures that may be alternatives to nonparametric or robust
procedures.  The sufficient statistic T contains all the information
for making inferences within the family 𝒫 (or Ω), whereas the
statistic U = (U_1, ..., U_{n−K}) contains information about the family
𝒫.  Thus U may be used to make inferences about the class 𝒫, such
as a goodness-of-fit test for the class 𝒫, and T to make a
parametric test within 𝒫, and the independence exploited to assess
overall error rates.  Inferences based on U are considered in the
following sections.

Theorem 3.2

If G is a group of transformations of 𝒳 such that the
induced group Ḡ on Ω is transitive, and if 𝒫 has absolute
continuity rank n − K and sufficient statistic T = (T_1, ..., T_K),
then u of (3.2.1) is equivalent to an invariant statistic, i.e.,

    u_j(x_1, ..., x_n) = u_j(gx_1, ..., gx_n) ,   j = 1, ..., n−K ,   a.s.  ∀ g ∈ G .

Proof

By Lemma 2.1 of [16], u_j = E(I[X_j ≤ x_j] | T), j = 1, ..., n−K a.s. P.
By Lemma 3.2 above, E(I[X_j ≤ x_j] | T) is invariant under G.  The
result follows.

The following lemma is a consequence of the fact that the transforming
functions of (3.2.1) are (conditional) distribution functions.

Lemma 3.3

In the conditional space for fixed T = t, there is a.s. a one-to-one
correspondence between (x_1, ..., x_n) and (u_1, ..., u_{n−K}), i.e.,
(X_1, ..., X_n) and (U_1, ..., U_{n−K}) are equivalent statistics in
this space.
3.3  Most Powerful Similar and Most Powerful Invariant Tests for Separable Families

Using the values (X_1, ..., X_n), consider testing the hypothesis

(3.3.1)    H: P ∈ 𝒫₀ = {P_θ ; θ ∈ Ω}

against the composite alternative

(3.3.2)    K: P ∈ 𝒫₁ .

It will also sometimes be useful to consider a simple alternative

(3.3.3)    K′: P = P_1 ,   P_1 ∈ 𝒫₁ .

Let f_θ and F_θ denote the density and distribution functions,
respectively, for P_θ under H, and f_1 and F_1 the density and
distribution functions, respectively, for P_1 of K′.

If 𝒫₀ of (3.3.1) is specified by a class of densities {f_θ ; θ ∈ Ω} where
f_θ has a particular functional form, then H is a classical composite
goodness-of-fit null hypothesis.  In work in classical composite
goodness-of-fit testing, it is usually (tacitly) assumed that the null
hypothesis class contains all distributions for which f_θ is a
density, i.e., that Ω in (3.3.1) is a natural parameter space.

Definition

A test φ is similar-α for H of (3.3.1) if E_{P_θ}(φ) = α for every θ ∈ Ω.

Tests for composite goodness-of-fit hypotheses are traditionally
required to be similar-α.  This restriction on tests has obvious
appeal in that, for example, if it is desired to test for normality,
then all normal distributions are equally normal, so the probability
of rejection should not vary on the null class.

Under H, let U_1, ..., U_{n−K} denote the n−K i.i.d. U(0,1)
random variables obtained by (3.2.1).  Also, let f_1 denote the
parent density of the sample X_1, ..., X_n under K′ of (3.3.3), and
h_1 the corresponding density of U_1, ..., U_{n−K}.  From the remark
preceding Lemma 3.1, it follows that h_1(u_1, ..., u_{n−K}) is zero a.s.
except in the unit hypercube.  The next lemma is a direct
application of the Neyman-Pearson Lemma.

Lemma 3.4

The most powerful level-α test of H versus K′ based on
U_1, ..., U_{n−K} is

(3.3.4)    ψ(u_1, ..., u_{n−K}) = 1, if h_1(u_1, ..., u_{n−K}) > c ,
                                  0, otherwise,

where c is determined by P{h_1(U_1, ..., U_{n−K}) > c} = α for
U_1, ..., U_{n−K} i.i.d. U(0,1) random variables.

Let φ = ψ ∘ U.  The following theorem shows that if T is
boundedly complete, then φ is a most powerful similar-α (M.P.S.-α)
test for H versus K′.
Theorem 3.3

If T is a boundedly complete sufficient statistic for Ω of
(3.3.1), then the test φ = ψ ∘ U above is a most powerful similar-α
test for H versus K′.

Proof

By Lehmann [14], Theorem 2, p. 134, every similar-α test has Neyman
structure with respect to T.  Thus to find a most powerful test in the
class of similar-α tests it is sufficient to find a most powerful
conditional size-α test on the conditional space of X_1, ..., X_n
given T, i.e., to find the most powerful Neyman-structure test.  But
for T = t fixed, (X_1, ..., X_n) and (U_1, ..., U_{n−K}) are equivalent
statistics by Lemma 3.3.  Thus, the test φ is a most powerful similar-α
test.

It will sometimes be the case that φ does not depend on P_1 ∈ 𝒫₁
of (3.3.2).  Then, of course, φ is a uniformly most powerful
(U.M.P.) similar-α test for H versus K.  Conditions under which
such tests exist are considered in the following theorem.

Theorem 3.4

If the conditions for both Theorem 3.2 and Theorem 3.3 are
satisfied by 𝒫₀, then a U.M.P. invariant level-α test exists for
testing H versus K, provided Ḡ is also transitive on 𝒫₁.
Moreover, this test is equivalent to the U.M.P.S.-α test of Theorem 3.3.

Proof

If φ is invariant level-α, then since Ḡ is transitive it
follows from Lehmann [14], Theorem 3, p. 220, that E_P(φ) = α
if P ∈ 𝒫₀, i.e., φ is a similar-α test.  Thus if a test is
M.P. similar-α, it will be M.P. invariant level-α, provided it is
invariant.  But by Theorem 3.3, a M.P. similar-α test can a.s. be
written as a function of u_1, ..., u_{n−K} only, and is
𝒜_U-measurable and invariant a.s. by Theorem 3.2.

Thus under rather general conditions U.M.P.S.-α and U.M.P.I.-α tests
identify.  The two approaches of finding the U.M.P.S.-α test vary from
example to example in the amount of effort required to construct the
test.  If u_1, ..., u_{n−K} of (3.2.1) are rather complicated functions
of X_1, ..., X_n, then the task of obtaining the marginal density of
U_1, ..., U_{n−K} required in Lemma 3.4 is a difficult one.  When this
occurs, the invariance approach might be superior in terms of effort
required, although this is not in general true.

The next definition, and, particularly, Theorem 3.5 are motivated
by a result of Dyer [8, 9, 10].  He demonstrates empirically that a
number of important goodness-of-fit tests have the property that the
power is less when the value of a parameter is assumed known than for
the case when the same parameter is assumed unknown, under the null
hypothesis.  This behavior is not particularly remarkable in that none
of the tests considered have any known optimal power properties.  In
the next definition and Theorem 3.5, natural conditions are given which
assure that the power of the U.M.P.S.-α test for a smaller null
hypothesis family is never less than that of the U.M.P.S.-α test for
a larger family.

Definition

Two separable families of distributions on the same space (𝒳, 𝒜)
are said to be conformable with respect to a group G of transformations
if the corresponding group Ḡ on the parameter space is transitive for
each family.

Consider two testing problems

(3.3.5)    H_1: 𝒫_{1H}   versus   K_1: 𝒫_{1K} ,

and

(3.3.6)    H_2: 𝒫_{2H}   versus   K_2: 𝒫_{2K} ,

where 𝒫_{1H} ⊂ 𝒫_{2H}, 𝒫_{1K} ⊂ 𝒫_{2K}, and 𝒫_{iH} and 𝒫_{iK} are
conformable separable families of distributions, i = 1, 2.

Theorem 3.5

If φ_1 is U.M.P.S.-α for (3.3.5) and φ_2 is U.M.P.S.-α for
(3.3.6), then

(3.3.7)    E_P(φ_1) ≥ E_P(φ_2)   for P ∈ 𝒫_{1K} .

Proof

The class of tests that are similar-α for (3.3.6) is a subclass
of the class of tests that are similar-α for (3.3.5).
3.4  Optimal Classification Rules

In practice, one may confront the problem of deciding which class
among a set of classes of distributions the data X = (X_1, ..., X_n)
has arisen from.  The general classification model is to base the
decision as to which class the data arise from on a statistic
S = S(X_1, ..., X_n).  When the set of classes of distributions consists
of only two members, say 𝒫₀ and 𝒫₁, a classification rule is

(3.4.1)
    Outcome        Decision
    S > c          X arises from 𝒫₁
    S ≤ c          X arises from 𝒫₀

It is assumed that S has a continuous distribution.

It has not been mentioned, but is implied by the conditions of
Theorem 3.3, that the power function of the U.M.P.S.-α test for testing
H versus K is constant on 𝒫₁, i.e., is not a function of P ∈ 𝒫₁.
With this in mind, it seems reasonable to use h_1 of Lemma 3.4 as
the discriminant function in (3.4.1).  If equal error probabilities of
misclassification are required, then the next theorem shows that the
decision function based on h_1 is optimal among the class of decision
functions having equal error probabilities of misclassification.

Theorem 3.6

Consider the classification rule (3.4.1).  Assuming that 𝒫
satisfies the conditions of Theorem 3.3, let d(n) denote the
classification function based on sample size n and statistic h_1 of
Lemma 3.4 with error probabilities of misclassification equal to α(n).
Let d′(n′) denote a classification function based on sample
size n′ and a statistic S with error probabilities of
misclassification equal to α′(n′).  Let n₀ = min{n ; α(n) ≤ α₀ , n = 1, 2, ...}.
If α′(n′) ≤ α₀, then n′ ≥ n₀.

Proof

Assume that n′ < n₀ for some n′.  By definition,

    α′(n′) = P(d′(n′) = 0 | 𝒫₁) = P(d′(n′) = 1 | 𝒫₀) = β′(n′) .

By Theorem 3.5, if n = n′, then α′(n′) ≥ α(n′), since the d(n′)
classification function is based on h_1.  By the definition of n₀,
α(n₀ − 1) > α₀, and therefore α′(n′) ≥ α(n′) ≥ α(n₀ − 1) > α₀ if
n′ < n₀.  Clearly, then, α′(n′) > α₀, a contradiction.  Therefore
n′ ≥ n₀.
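As a computational aside, the equal-error rule of (3.4.1) and the minimal sample size n₀ of Theorem 3.6 can be illustrated by a small Monte Carlo sketch.  The snippet below is not part of the original study: the discriminant statistic, the two sampling routines, and the replication count are placeholders (here the exponential-versus-normal statistic of Section 4.2 is used as an example), and the cutoff giving equal error probabilities is located on an empirical grid.

    import numpy as np

    rng = np.random.default_rng(1)

    def stat(x):
        # Hypothetical discriminant statistic; here T_{e,n} of Section 4.2,
        # with large values favoring the normal class.
        x = np.asarray(x, dtype=float)
        return (x.mean() - x.min()) / np.sqrt(np.mean((x - x.mean()) ** 2))

    def equal_error(n, reps=5000):
        # Simulate the statistic under both classes and find the cutoff c at which
        # the two misclassification probabilities are (approximately) equal.
        t0 = np.sort([stat(rng.exponential(size=n)) for _ in range(reps)])  # null class P0
        t1 = np.sort([stat(rng.normal(size=n)) for _ in range(reps)])       # alternative class P1
        grid = np.unique(np.concatenate([t0, t1]))
        err0 = 1.0 - np.searchsorted(t0, grid, side="right") / reps  # P(S > c | P0)
        err1 = np.searchsorted(t1, grid, side="right") / reps        # P(S <= c | P1)
        k = np.argmin(np.abs(err0 - err1))
        return grid[k], max(err0[k], err1[k])

    alpha0 = 0.05
    for n in range(5, 60):
        c, err = equal_error(n)
        if err <= alpha0:
            print(f"n0 = {n}, c = {c:.3f}, attained equal error = {err:.3f}")
            break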
4.  APPLICATIONS

4.1  Introduction

In this section the results of Chapter 3 are applied to obtain
M.P.S. and U.M.P.S.-α tests, and to show that certain tests studied
previously by other writers are M.P.S. or U.M.P.S.-α tests.  When the
conditions of Theorem 3.4 are satisfied, the minimal sample size n₀
of Theorem 3.6 is obtained for α₀ = .10, .05, and .01.

In most of the testing situations considered here, 𝒫₀ and 𝒫₁
are location-scale parameter families, and the testing problem has
already been considered from the invariance and R.M.L. approaches.
When the conditions of Theorem 3.4 are satisfied, the invariance and
similar approaches yield equivalent test statistics.  The invariance
approach is easier to use in practice in many cases.  The R.M.L. is
also equivalent to the best invariant test statistic in many examples,
although this does not happen in general (cf. Dyer [8, 9, 10]).

Tables were generated for various test statistics by empirical
methods whenever the distribution of the test statistic was
mathematically intractable.  For the generation of normal and
lognormal variates, the algorithm developed by Chen [4] was used.
Variates from other distributions considered were generated using
I.B.M.'s RANDU program together with the inverse of the distribution
function.  For testing H: P ∈ 𝒫₀ versus K: P ∈ 𝒫₁, tables of critical
values and power are given for samples of size n = 10, 20, and 30
(with one exception) for levels α = .10, .05, and .01.  The
primary purpose of these tables is to enable the reader to compare
various goodness-of-fit statistics in a variety of testing situations.
These tables are not intended to provide a comprehensive study of a
particular testing problem.  Antle, Dumonceaux, and Haas [1] give
additional tables for the Normal versus Cauchy, Normal versus
Exponential, and Normal versus Double Exponential testing problems.

For determining n₀ of Theorem 3.6 in a particular classification
problem, tables are presented giving n₀, c, and the attainable
equal error probabilities for α₀ = .10, .05, and .01.

It is interesting to note that in the examples considered the
test statistics for testing two location-scale parameter families are
ratios of functions of the complete and sufficient statistics for the
respective families.  This is particularly appealing from the
computational viewpoint, in that after the respective complete and
sufficient statistics have been computed, the test statistic can be
computed directly from them, avoiding the additional and often
cumbersome task of calculating the U-statistics.
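As a rough, modern illustration of the simulation scheme just described (a present-day generator standing in for Chen's algorithm and I.B.M.'s RANDU), the sketch below draws variates by inverting the distribution function and estimates upper critical values of a test statistic by Monte Carlo.  The statistic used as an example anticipates Section 4.3; all parameter choices are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def uniform_0_theta(n, theta=1.0):
        # Inverse-c.d.f. method: F^{-1}(u) = theta * u for the Uniform (0, theta) law.
        return theta * rng.random(n)

    def exponential_0_lam(n, lam=1.0):
        # Inverse-c.d.f. method: F^{-1}(u) = -lam * ln(1 - u) for the Exponential (0, lam) law.
        return -lam * np.log1p(-rng.random(n))

    def critical_value(statistic, sampler, n, alpha, reps=30000):
        # Empirical upper-alpha point of the statistic under the sampled null class.
        t = np.array([statistic(sampler(n)) for _ in range(reps)])
        return np.quantile(t, 1.0 - alpha)

    # Example: upper critical values of T(0)_{e,u} = Xbar / X_(n) under exponentiality.
    t_eu0 = lambda x: np.mean(x) / np.max(x)
    for n in (10, 20, 30):
        print(n, [round(critical_value(t_eu0, exponential_0_lam, n, a), 3) for a in (.10, .05, .01)])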
4.2  The U.M.P.S.-α Test for the Exponential (θ,λ) Versus the Normal (μ,σ²) Distribution

The problem of testing exponentiality versus normality arises in
the life testing area of reliability.  It is desired to determine
whether failure rates follow an exponential or normal distribution
before inference is undertaken.  This problem may be solved by using
the U.M.P.S.-α test developed below.
4.2.1  Exponential class as the null class of distributions.  Let
the null class of densities be

    λ exp{−λ(x−θ)} I_{(θ,∞)}(x) ,   −∞ < θ < +∞ ,  λ > 0 .

Uthoff [20] derives the U.M.P. location and scale invariant test for
testing this family against a Normal (μ,σ²) alternative family.  The
test statistic obtained is

(4.2.1)    T_{e,n} = (X̄ − X_(1)) / [ Σ_{i=1}^n (X_i − X̄)² / n ]^{1/2} .

The equivalent R.M.L. statistic is presented in Antle, Dumonceaux, and
Haas [1].  An alternative method of deriving the U.M.P.S.-α test
statistic is the C.P.I.T. approach based on Theorem 3.4.  The C.P.I.T.
transformations for a random sample X_1, ..., X_n from this class of
densities are (from Corollary 2.1 of [16]):

(4.2.2)    u_{r−2} = { 1 − (z_{r−1} − z_n) / Σ_{i=1}^{r−1} (z_i − z_n) }^{r−2} ,   r = 3, ..., n ,

where z_n = x_(1), the smallest sample member, and the other z_i's are
defined as follows.  Suppose x_j = x_(1).  Then z_i = x_i, i = 1, ..., j−1,
and z_i = x_{i+1}, i = j, ..., n−1.

Let u_1, ..., u_{n−2} be defined as in (4.2.2) and let
u_{n−1} = z_1 − z_n and u_n = z_n.  Then u_i ∈ (0,1), i = 1, ..., n−2;
u_{n−1} > 0; and −∞ < u_n < ∞.  The inverse transformation is

    z_n = u_n ,   z_1 = u_n + u_{n−1} ,
    z_i = u_n + a_i u_{n−1} ,   a_i = (1 − u_{i−1}^{1/(i−1)}) ∏_{j=1}^{i−1} u_j^{−1/j} ,   i = 2, ..., n−1 .

The Jacobian of the transformation is

    u_{n−1}^{n−2} / { (n−2)! [ ∏_{j=1}^{n−2} u_j^{1/j} ]^{n−1} } .

The joint distribution of u_1, ..., u_n must be obtained under
the assumption that X_1, ..., X_n constitute a random sample from a
normal distribution, N(μ,σ²), (μ,σ²) unknown.  Let X_1, ..., X_n be
i.i.d. N(μ,σ²) random variables.  Then, letting the z's be as described
above, the joint density of z_1, ..., z_n is

    f(z_1, ..., z_n) = (σ√(2π))^{−n} exp{ −Σ_{i=1}^n (z_i − μ)² / 2σ² } ∏_{i=1}^{n−1} I_{(z_n,∞)}(z_i) .

Applying the u-transformation to the z_i's, the joint density of
u_1, ..., u_n is

    h(u_1, ..., u_n) = [ u_{n−1}^{n−2} / ( (n−2)! (σ√(2π))ⁿ ( ∏_{j=1}^{n−2} u_j^{1/j} )^{n−1} ) ]
        × exp{ −[ (u_n − μ)² + (u_n + u_{n−1} − μ)² + Σ_{j=2}^{n−1} (u_n + a_j u_{n−1} − μ)² ] / 2σ² } ∏_{j=1}^{n−2} I_{(0,1)}(u_j) ,

where a_j is as defined above, j = 2, ..., n−1.  Integrating
h(u_1, ..., u_n) with respect to u_{n−1} and u_n, the joint density of
u_1, ..., u_{n−2} is, with

    α = 1 + Σ_{j=2}^{n−1} a_j²   and   β = 1 + Σ_{j=2}^{n−1} a_j ,

(4.2.3)    h*(u_1, ..., u_{n−2}) ∝ [ ( ∏_{j=1}^{n−2} u_j^{1/j} )^{n−1} ( α − β²/n )^{(n−1)/2} ]^{−1} ∏_{j=1}^{n−2} I_{(0,1)}(u_j) .

Expressing (4.2.3) in terms of z_1, ..., z_n (ignoring the constant
coefficient), (4.2.3) reduces to

(4.2.4)    { Σ_{i=1}^n (z_i − z_n) / [ Σ_{i=1}^n (z_i − z̄)² ]^{1/2} }^{n−1} ,

and, finally, in terms of the original X variables,

(4.2.5)    { Σ_{i=1}^n (x_i − x_(1)) / [ Σ_{i=1}^n (x_i − x̄)² ]^{1/2} }^{n−1} .

By Theorem 3.4, the hypothesis of exponentiality is rejected if
expression (4.2.5) exceeds a critical value c, or, equivalently, if

(4.2.6)    T_{e,n} = (X̄ − X_(1)) / [ Σ_{i=1}^n (X_i − X̄)² / n ]^{1/2} > c ,

where P(T_{e,n} > c | 𝒫₀) = α.

As mentioned at the end of Section 4.1, T_{e,n} is a ratio of
functions of the respective complete and sufficient statistics, and,
in particular, is a ratio of estimators of dispersion for the
respective families.  Also, the statistic T_{e,n} is independent of
the complete and sufficient statistics of the respective families.
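A direct computational sketch of the statistic (4.2.6) follows; it is an illustration only, with a simulated critical value standing in for the tabled values of [1], and all sample sizes and replication counts are arbitrary.

    import numpy as np

    def t_en(x):
        # T_{e,n} of (4.2.6): (Xbar - X_(1)) / sqrt( sum (X_i - Xbar)^2 / n ).
        x = np.asarray(x, dtype=float)
        return (x.mean() - x.min()) / np.sqrt(np.mean((x - x.mean()) ** 2))

    rng = np.random.default_rng(2)
    n, reps, alpha = 20, 5000, 0.05
    null = np.array([t_en(rng.exponential(size=n)) for _ in range(reps)])   # exponential null class
    c = np.quantile(null, 1.0 - alpha)                                      # reject exponentiality if T_{e,n} > c
    power = np.mean([t_en(rng.normal(size=n)) > c for _ in range(reps)])    # normal alternative
    print(round(c, 3), round(power, 3))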
4.2.2  Normal class as the null class of distributions.  Let the
null class of densities be

    (σ√(2π))^{−1} exp{ −(x−μ)² / 2σ² } I_{(−∞,∞)}(x) ,   σ > 0 ,  −∞ < μ < ∞ .

The C.P.I.T. transformations for a random sample X_1, ..., X_n
from this class of densities are (from [16]):

    u_{i−2} = G_{i−2}{ (i−2)^{1/2} (X_i − X̄_i) / [ (i−1) S_i² − (X_i − X̄_i)² ]^{1/2} } ,   i = 3, ..., n ,

where G_{i−2} is the Student-t distribution function with i − 2
degrees of freedom,

    X̄_i = Σ_{j=1}^i X_j / i   and   S_i² = Σ_{j=1}^i (X_j − X̄_i)² / i .

Since the u's are invariant with respect to linear transformations
of the X's, the U.M.P.S.-α and U.M.P.I.-α tests are equivalent
by Theorem 3.4.  The test statistic is simply the reciprocal of
(4.2.6), with the resulting test being to reject normality if

(4.2.7)    1 / T_{e,n} = [ Σ_{i=1}^n (X_i − X̄)² / n ]^{1/2} / (X̄ − X_(1)) > c′ ,

or, equivalently, if T_{e,n} < c, where P(T_{e,n} < c | 𝒫₀) = α.  This
test is equivalent to the R.M.L. test presented in Antle,
Dumonceaux, and Haas [1].  Also, the remarks in the paragraph following
(4.2.6) apply here.

Tables of critical values and power for the test statistic T_{e,n}
for samples of size n = 10(5)30 are presented in [1].  The table
values were obtained by simulation using 5000 samples of size n for
each n.
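The normal-class C.P.I.T.'s can be computed recursively.  The sketch below follows the formula displayed above, with X̄_i and S_i² based on the first i observations; under normal data the resulting u's should behave like U(0,1) variates, which the final line checks informally.  It is an illustration under the stated reconstruction, not the author's code.

    import numpy as np
    from scipy import stats

    def normal_cpit(x):
        # u_{i-2} = G_{i-2}{ sqrt(i-2) (X_i - Xbar_i) / sqrt((i-1) S_i^2 - (X_i - Xbar_i)^2) },
        # i = 3, ..., n, with Xbar_i and S_i^2 = sum_{j<=i} (X_j - Xbar_i)^2 / i taken over the
        # first i observations and G_{i-2} the Student-t distribution function (i-2 d.f.).
        x = np.asarray(x, dtype=float)
        u = []
        for i in range(3, len(x) + 1):
            xi = x[:i]
            d = x[i - 1] - xi.mean()
            s2 = np.mean((xi - xi.mean()) ** 2)
            u.append(stats.t.cdf(np.sqrt(i - 2) * d / np.sqrt((i - 1) * s2 - d ** 2), df=i - 2))
        return np.array(u)

    rng = np.random.default_rng(3)
    u = normal_cpit(rng.normal(loc=5.0, scale=2.0, size=1000))
    print(stats.kstest(u, "uniform"))   # should not reject uniformity for normal data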
4.2.3  Discrimination between the Exponential (θ,λ) and Normal
(μ,σ²) distributions.  If it is desired to use the statistic T_{e,n}
to discriminate between exponentiality and normality, then it may be
reasonable to require α = P(Type I Error) and β = P(Type II Error)
to be equal.  Table 4.2.1 gives the minimal sample size n₀ required
to obtain α = β ≤ α₀ for α₀ = .10, .05, and .01.  Table values
were obtained by simulation using 5000 samples.  Normality is
accepted if T_{e,n} exceeds the critical value c; otherwise
exponentiality is accepted.

Entries that appear in this and other tables which are derived
by simulation are presented with two or three digits to the right of
the decimal.  By investigating samples of 5,000, 10,000 and 30,000, it
was generally observed that entries based on 5,000 and 10,000 samples
differed only in the second digit, and that entries based on 30,000
samples differed only in the third digit with those based on 10,000
samples.

Table 4.2.1

    α₀      α       n₀      c
    .10     .098    14      1.356
    .05     .046    20      1.386
    .01     .009    33      1.419
4.3  The U.M.P.S.-α Test for the Exponential (0,λ) Versus the Uniform (0,θ) Distribution

4.3.1  Exponential class as the null class of distributions.  Let
the null class of densities be

    λ^{−1} exp(−x/λ) I_{(0,∞)}(x) ,   λ > 0 .

A maximal invariant under the group G of transformations
g(x_1, ..., x_n) = (cx_1, ..., cx_n), c > 0, is the set of ratios of the
observations, since both classes have the positive real line as a
region of support.  The U.M.P.I. test rejects the hypothesis of
exponentiality whenever, [14],

(4.3.1)    ∫_0^∞ v^{n−1} ∏_{i=1}^n f_1(v x_i) dv / ∫_0^∞ v^{n−1} ∏_{i=1}^n f_0(v x_i) dv > c ,

where f_1 is the Uniform (0,θ) density and f_0 is the Exponential
(0,λ) density.  Evaluating the numerator of (4.3.1),

(4.3.2)    θ^{−n} ∫_0^∞ v^{n−1} I_{(0, θ/x_(n))}(v) dv = x_(n)^{−n} / n .

Evaluating the denominator of (4.3.1),

(4.3.3)    λ^{−n} ∫_0^∞ v^{n−1} exp( −v Σ_{i=1}^n x_i / λ ) dv = (n−1)! ( Σ_{i=1}^n x_i )^{−n} .

The test then reduces to rejecting exponentiality if
T(0)_{e,u} = X̄ / X_(n) > c, and accepting otherwise, where
P(T(0)_{e,u} > c | 𝒫₀) = α.  This test is readily shown to be
equivalent to the R.M.L. test.

As mentioned at the end of Section 4.1, T(0)_{e,u} is a ratio of
functions of the respective complete and sufficient statistics, and,
in particular, is a ratio of estimators of dispersion for the
respective families.  Also, the statistic T(0)_{e,u} is independent of the
complete and sufficient statistics of the respective families.
4.3.2  Uniform class as the null class of distributions.  Let the
null class of densities be

    θ^{−1} I_{(0,θ)}(x) ,   θ > 0 .

As in Section 4.2.2, the U.M.P.S.-α test is to reject uniformity if
T(0)_{e,u} < c and accept otherwise, where P(T(0)_{e,u} < c | 𝒫₀) = α.
This test is equivalent to the R.M.L. test.  Also, the remarks in the
paragraph preceding this section apply here.

4.3.3  Distribution of T(0)_{e,u} when X_1, ..., X_n are i.i.d.
Uniform (0,θ) random variables.  Let z_n = x_(n), the largest sample
member, and the other z's be defined as follows.  Suppose x_j = x_(n).
Then z_i = x_i, i = 1, ..., j−1, and z_i = x_{i+1}, i = j, ..., n−1.  Then

(4.3.4)    X̄ / X_(n) = ( 1 + Σ_{i=1}^{n−1} z_i / z_n ) / n .

The joint density of z_1, ..., z_n is

(4.3.5)    f(z_1, ..., z_n) = n θ^{−n} I_{(0,θ)}(z_n) ∏_{i=1}^{n−1} I_{(0,z_n)}(z_i) .

Let t_n = z_n and t_i = z_i / z_n, i = 1, ..., n−1.  Then

(4.3.6)    X̄ / X_(n) = ( 1 + Σ_{i=1}^{n−1} t_i ) / n .

The joint density of t_1, ..., t_n is
(4.3.7)    h(t_1, ..., t_n) = n θ^{−n} t_n^{n−1} I_{(0,θ)}(t_n) ∏_{i=1}^{n−1} I_{(0,1)}(t_i) .

Observing that t_n and (t_1, ..., t_{n−1}) are independent and that
t_1, ..., t_{n−1} are i.i.d. Uniform (0,1) random variables, X̄/X_(n)
has the same distribution as {1 + sum of (n−1) i.i.d. Uniform (0,1)
random variables}/n.  Using the Central Limit Theorem, T(0)_{e,u} is
approximately normally distributed with mean (n+1)/2n and variance
(n−1)/12n².  Therefore, for large enough n, the critical value c
in 4.3.2 is approximately

(4.3.8)    c ≈ { n + 1 − z_α [ (n−1)/3 ]^{1/2} } / 2n ,

where Φ(z_α) = 1 − α and Φ(·) is the standard normal distribution
function.

The distributional properties of a sum of i.i.d. Uniform (0,1)
random variables have received considerable attention in the literature
of statistics.  It is well known that the convergence to normality of
such a sum is very rapid, the approximation being good for many purposes
for samples as small as n = 5.  For the purpose of examining the
tails of the distribution, however, larger sample sizes may be
required.  For n = 10 and α = .01, the critical value obtained by
simulation is 0.351.  Using (4.3.8), c = 0.349, with empirical power
0.0093.  The comparisons improve for α = .05 and .10.
30
n
= nr.. -nexp ( - L;
i=l
Z
.I.,...)
1.
n-l
(z) IT
(zi) •
n
(0, co )
i=l (O,z )
11
11
n
(4.3.9)
Let
t
n
= z
n
and
t. = z .lz
1.
1.
n-l
X/X(n) = (1 +
The joint density of
1.
t , '."
l
h(tl, ••• ,t n ) = nr..
tn
11
(4.3.10)
is
n
-
(t )
n
. 1
n
n-l
The marginal distribution of
11
IT
,
t
h*(tl, ••• ,t _ ) = n!(l+ L; t.)
n l
i=l 1.
n-l
The distribution of Y = L; t. is
1.
i=l
where
~
n
(.)
1.
is
n-l
-n
(4.3.11)
(t.) •
i=l (0,1)
•• 8
1.
1.=
n-l
-n
Then
n-l
-n n-l
t
expr-t (1+L; t.)/A}
(0, co )
P(y) = n!(l+Y)
•
.
t.)/n
L;
i=l
... , n-l
= 1,
i
n
n-l
11
II (t.) •
IT
i=l (0,1)
(4.3.12)
1.
~n-l(Y)
(4.3.13)
is the density function of the Sum of
Uniform (0,1) random variables.
n
Finally, the density of
i.i.d.
R
= T(O)
e,u
is
(4.3.14)
The asymptotic properties of this distribution are not known.
This density is quite complicated for even small
fore, only recommended for use for very small
n.
n, and is, there-
31
4.3.5  Critical values of T(0)_{e,u} and power for various sample
sizes.  Table 4.3.1 gives critical values of T(0)_{e,u} for the test
H: Exponential (0,λ) versus K: Uniform (0,θ) and for the test
H: Uniform (0,θ) versus K: Exponential (0,λ).  For n = 20 and 30
the normal approximation was used to obtain critical values and
power under the assumption of uniformity.  All other entries were
obtained by simulation, using 30,000 samples.

Table 4.3.1  Critical Values and Power of the U.M.P. Similar-α
Test for Discriminating Between Exponential and Uniform Distributions

H: Exponential (0,λ)    K: Uniform (0,θ)    Reject H if T(0)_{e,u} > c

            α=.01            α=.05            α=.10
    n     c      Power     c      Power     c      Power
    10    .594   .30       .524   .61       .487   .76
    20    .471   .80       .419   .95       .389   .98
    30    .415   .98       .371   1.00      .343   1.00

H: Uniform (0,θ)    K: Exponential (0,λ)    Reject H if T(0)_{e,u} < c

            α=.01            α=.05            α=.10
    n     c      Power     c      Power     c      Power
    10    .351   .47       .407   .69       .438   .79
    20    .379   .88       .422   .96       .444   .98
    30    .396   .98       .431   .99       .450   1.00
4.3.6  Discrimination between the Exponential (0,λ) and Uniform
(0,θ) distributions.  If it is desired to use the statistic T(0)_{e,u} to
discriminate between exponentiality and uniformity, then it may be
reasonable to require the error probabilities of misclassification to
be equal.  Table 4.3.2 gives the minimal sample size n₀ required to
obtain α = β ≤ α₀ for α₀ = .10, .05, and .01.  Table values were
obtained by simulation using 5000 samples, and by using the normal
approximation developed in 4.3.3.  Uniformity is accepted if T(0)_{e,u}
exceeds the critical value c; otherwise exponentiality is accepted.

Table 4.3.2

    α₀      α       n₀      c
    .10     .100    14      .440
    .05     .048    20      .420
    .01     .010    34      .401
4.4  Tests for the Exponential (0,λ) Versus the Lognormal (0,σ²) Distribution

The problem of deciding whether data is exponentially or lognormally
distributed arises in the study of survival times of micro-organisms
which have been exposed to a disinfectant or poison (cf. Irwin [12]).
Cox [5, 6] developed tests for the testing problem and
presented the asymptotic distribution of the test statistic.  For
testing H: Lognormal (0,σ²) versus K: Exponential (0,λ), the
test statistic given by Cox is

(4.4.1)    T_ℓ = ℓn( S̄ / S̃₀ ) / [ ( exp(α̂_2) − 1 − α̂_2 − ½ α̂_2² ) / n ]^{1/2} ,

where

    α̂_1 = Σ_{i=1}^n ℓn X_i / n ,
    α̂_2 = Σ_{i=1}^n (ℓn X_i − α̂_1)² / n ,
    S̄ = X̄ ,   and   S̃₀ = exp{ α̂_1 + ½ α̂_2 } .

Lognormality is rejected if T_ℓ > z_α, where Φ(z_α) = 1 − α and
Φ(·) is the standard normal distribution function.

For testing H: Exponential (0,λ) versus K: Lognormal (0,σ²),
Cox gives a corresponding statistic T_e (4.4.2), expressed in terms of
ψ(1), Euler's constant, ψ′(1) = π²/6, and α̂_1, α̂_2, and S̄ as
described above.  Exponentiality is rejected if T_e > z_α, where
Φ(z_α) = 1 − α and Φ(·) is the standard normal distribution function.
It is clear that interchanging H and K will not, in general,
have the effect of inverting the test statistic if Cox's method is
used, nor will the test statistics developed by this method be
invariant, in general.

Srinivasan [19] has also considered the testing problem
H: Exponential (0,λ) versus K: Lognormal (0,σ²).  The statistic
proposed by Srinivasan is

(4.4.3)    D_n = sup_t | S_n(t) − F̃(t; λ̂) | ,

where

    F̃(t; λ̂) = { 1 − (1 − t/nX̄)^{n−1} } I_{(0, nX̄)}(t) + I_{[nX̄, ∞)}(t) ,

the M.V.U.E. of the distribution function under H, and S_n(t) is
the empirical distribution function.  Exponentiality is rejected if
D_n exceeds a specified critical value.  For corrections to and
comments about Srinivasan's results, see Schafer, Finkelstein, and
Collins [18] and Moore [15].

4.4.1  σ known.  Let the null class of densities be

    λ^{−1} exp(−x/λ) I_{(0,∞)}(x) ,   λ > 0 ,

and the alternative class of densities be

    ( x σ √(2π) )^{−1} exp{ −(ℓn x)² / 2σ² } I_{(0,∞)}(x) ,   σ > 0 .

The U.M.P.I.-α and U.M.P.S.-α test is given by (4.3.1), where f_1
is the lognormal density function and f_0 is the exponential
density function.  Evaluating the numerator of (4.3.1),

    ( ∏_{i=1}^n x_i )^{−1} (σ√(2π))^{−n} ∫_0^∞ v^{−1} exp{ −Σ_{i=1}^n (ℓn x_i + ℓn v)² / 2σ² } dv

        = ( ∏_{i=1}^n x_i )^{−1} (σ√(2π))^{−n} ∫_{−∞}^∞ exp{ −Σ_{i=1}^n (t + ℓn x_i)² / 2σ² } dt ,   letting t = ℓn v,

(4.4.4)    = ( ∏_{i=1}^n x_i )^{−1} (σ√(2π))^{−n} (2πσ²/n)^{1/2} exp{ −[ Σ_{i=1}^n (ℓn x_i)² − ( Σ_{i=1}^n ℓn x_i )² / n ] / 2σ² } .

The denominator of (4.3.1) is given in (4.3.3).  The test reduces to
rejecting exponentiality if

    ( Σ_{i=1}^n x_i )ⁿ ( ∏_{i=1}^n x_i )^{−1} exp{ −[ Σ_{i=1}^n (ℓn x_i)² − ( Σ_{i=1}^n ℓn x_i )² / n ] / 2σ² } > c(σ) ,

or, equivalently, if

(4.4.5)    T_{e,ℓ}(σ) ≡ ℓn( Σ_{i=1}^n X_i ) − ( Σ_{i=1}^n ℓn X_i ) / n − [ Σ_{i=1}^n (ℓn X_i)² − ( Σ_{i=1}^n ℓn X_i )² / n ] / 2nσ² > c(σ) ,

where P(T_{e,ℓ}(σ) > c(σ) | 𝒫₀) = α.  It is readily shown that this test
is equivalent to the R.M.L. test.
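The statistic (4.4.5) is straightforward to compute once the log-sums are at hand.  The sketch below is illustrative only: σ and all simulation settings are arbitrary, and the simulated critical value and power merely stand in for the corresponding row of Table 4.4.1.

    import numpy as np

    def t_el(x, sigma):
        # T_{e,l}(sigma) of (4.4.5):
        # ln(sum X) - (sum ln X)/n - [ sum (ln X)^2 - (sum ln X)^2 / n ] / (2 n sigma^2).
        x = np.asarray(x, dtype=float)
        n, lx = len(x), np.log(x)
        return np.log(x.sum()) - lx.sum() / n - (np.sum(lx ** 2) - lx.sum() ** 2 / n) / (2 * n * sigma ** 2)

    rng = np.random.default_rng(5)
    n, sigma, alpha, reps = 10, 1.0, 0.05, 5000
    null = np.array([t_el(rng.exponential(size=n), sigma) for _ in range(reps)])      # exponential null
    c = np.quantile(null, 1.0 - alpha)
    alt = np.array([t_el(rng.lognormal(0.0, sigma, size=n), sigma) for _ in range(reps)])
    print(round(c, 2), round(np.mean(alt > c), 2))   # compare with the n = 10, sigma = 1.0 row of Table 4.4.1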
4.4.2  Critical values of T_{e,ℓ}(σ) and power for n = 10, 20.  Table
4.4.1 gives critical values of T_{e,ℓ}(σ) and power for various values
of σ for testing H: Exponential (0,λ) versus K: Lognormal (0,σ²),
σ known.  All entries were obtained by simulation using 5000 samples.

Table 4.4.1  Critical Values and Power of the T_{e,ℓ}(σ) Test for
Discriminating Between Exponential and Lognormal Distributions

H: Exponential (0,λ)    K: Lognormal (0,σ²), σ known    Reject H if T_{e,ℓ}(σ) > c(σ)

                 α=.01             α=.05             α=.10
    n    σ     c(σ)   Power      c(σ)   Power      c(σ)   Power
    10   0.4   1.63   .93        1.16   1.00       .84    1.00
         0.6   2.07   .38        1.91   .80        1.80   .92
         0.8   2.24   .08        2.18   .36        2.14   .53
         1.0   2.38   .08        2.32   .21        2.30   .35
         1.4   2.75   .24        2.63   .40        2.58   .53
         2.0   3.07   .64        2.91   .78        2.84   .85
         2.4   3.20   .81        3.02   .90        2.93   .94

    20   0.4   1.63   1.00       1.16   1.00       .83    1.00
         0.6   2.52   .93        2.34   1.00       2.23   1.00
         0.8   2.84   .43        2.78   .76        2.73   .89
         1.0   3.02   .23        2.99   .45        2.96   .63
         1.4   3.36   .48        3.28   .68        3.25   .77
         2.0   3.65   .91        3.54   .96        3.49   .98
         2.4   3.75   .98        3.63   .99        3.58   1.00
4.4.3  σ unknown.  Since σ is not a location or scale parameter,
σ occurs in (4.4.4).  Consequently, the left-hand side of
expression (4.4.4) is not a statistic unless σ is known.  When σ
is unknown, test statistics can be formed by replacing σ in
(4.4.4) with σ̂, a sample estimator of σ.  In order to obtain an
invariant test, the expression

(4.4.6)    σ̂ⁿ exp{ −[ Σ_{i=1}^n (ℓn X_i)² − ( Σ_{i=1}^n ℓn X_i )² / n ] / 2σ̂² }

must be invariant with respect to the group G of transformations
g(x_1, ..., x_n) = (cx_1, ..., cx_n), c > 0.  The estimator

(4.4.7)    σ̂² = [ Σ_{i=1}^n (ℓn X_i)² − ( Σ_{i=1}^n ℓn X_i )² / n ] / (n−1)

satisfies the invariance requirement and, furthermore, is an unbiased
estimator of σ² if the underlying distribution is lognormal.
Replacing σ by σ̂ in (4.4.4), a test for H: Exponential (0,λ)
versus K: Lognormal (0,σ²) is given by rejecting exponentiality if
the resulting expression exceeds a critical value, or, equivalently, if

(4.4.8)    T*_{e,ℓ} ≡ ℓn( Σ_{i=1}^n X_i ) − ( Σ_{i=1}^n ℓn X_i ) / n − (n−1) ℓn[ Σ_{i=1}^n (ℓn X_i)² − ( Σ_{i=1}^n ℓn X_i )² / n ] / 2n > c ,

where P(T*_{e,ℓ} > c | 𝒫₀) = α.

The R.M.L. test rejects exponentiality if the maximum likelihood
ratio statistic (4.4.9) exceeds a critical value c, and does not have
the property of invariance.
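For completeness, the σ-unknown statistic (4.4.8) can be sketched as follows; again the critical value is simulated rather than taken from Table 4.4.2, and the settings are arbitrary.

    import numpy as np

    def t_el_star(x):
        # T*_{e,l} of (4.4.8):
        # ln(sum X) - (sum ln X)/n - (n-1) ln[ sum (ln X)^2 - (sum ln X)^2 / n ] / (2 n).
        x = np.asarray(x, dtype=float)
        n, lx = len(x), np.log(x)
        ss = np.sum(lx ** 2) - lx.sum() ** 2 / n
        return np.log(x.sum()) - lx.sum() / n - (n - 1) * np.log(ss) / (2 * n)

    rng = np.random.default_rng(6)
    n, alpha, reps = 10, 0.05, 5000
    c = np.quantile([t_el_star(rng.exponential(size=n)) for _ in range(reps)], 1.0 - alpha)
    print(round(c, 2))   # compare with c = 1.90 for n = 10, alpha = .05 in Table 4.4.2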
4.4.4  Critical values of T*_{e,ℓ} and power for n = 10, 20.  Table
4.4.2 gives critical values of T*_{e,ℓ} and power for the test
H: Exponential versus K: Lognormal.  All entries were obtained by
simulation using 5000 samples.

Table 4.4.2  Critical Values and Power of T*_{e,ℓ} for
Discriminating Between Exponential and Lognormal Distributions

H: Exponential    K: Lognormal (0,σ²)    Reject H if T*_{e,ℓ} > c

    n = 10:   α = .01, c = 2.02;   α = .05, c = 1.90;   α = .10, c = 1.84
    n = 20:   α = .01, c = 2.17;   α = .05, c = 2.10;   α = .10, c = 2.07

                 α=.01     α=.05     α=.10
    n    σ       Power     Power     Power
    10   0.4     .92       .99       1.00
         0.6     .34       .69       .83
         0.8     .08       .26       .43
         1.0     .04       .13       .26
         1.4     .17       .29       .39
         2.0     .55       .67       .75
         2.4     .75       .83       .87

    20   0.4     1.00      1.00      1.00
         0.6     .85       .98       .99
         0.8     .27       .63       .78
         1.0     .12       .35       .52
         1.4     .40       .55       .66
         2.0     .86       .91       .93
         2.4     .97       .98       .98
Power
The U.M.P.S.-cr Test for the Uniform (9 ,9 2 ) Versus the Normal
1
Distribution
4.5.1
Uniform class as the null class of distributions.
Let the
null class of densities be
Uthoff [20J derives the U.M.P. location and scale invariant test for
testing this family against a Normal
2
(~,a)
alternative family.
The
test statistic obtained is
(X
(n) -
X
(1.)
)/(
~ (X. _
i~1.
~
X)2/(n_l»1/2 •
,
(4.5.1.)
39
An alternative method of deriving the U.M.P.S.-ot test statistic is
the C.P.I.T. approach based on Theorem 3.4.
The C.P.I.T. trans-
formations for a random sample
from this class of
Xl' ••• , Xn
densities are (from Corollary 2.1 of [161):
= 2,
i
where
follows.
i = 2,
i
,
zl = x(l)
= k,
Suppose
... , j
... , n-1
Let
,
z
J
1
,
~
z's
and the other
= x. and
zl
= x
n
i = j+l,
•
II
k
e ,
,
are defined as
.
Then
and
z. =
j of: k
k-1
(4.5.2)
1
z. = x. 1
~
J.-
,
Xi + l
To find the U.M.P. similar-ot test the joint
u ' ••• , u _
2
n l
Xl' ••• , X
n
distribution.
n = x(n)
z. = x.
distribution of
that
Z
••• , n-1 ,
must be obtained under the assumption
constitute a random sample from a
Without loss of generality, let
."., z
Normal
2
~,a)
(~,a
= (0,1)
2
)
•
be as described in the preceding paragraph.
n
The joint density of
zl' ••• , zn
is (ignoring constant coefficients)
(4.5.3)
Let
un =
(4.5.2).
Z
n
-
zl ' and
The joint density of
u
z' ••. ,
u ' ••• , un
1
u
be as described in
n-1
is (ignoring constant
coefficients)
g(u , ••• ,u )
1
n
ex:
•
u
n-2
n
2
n-1
2
n-1 2
exp(-[nu +2u u (1+ ~ u.)+u (1+ ~ u.)J/2}
1
1 n
i=2 1
n
i=2 1
1
(u 1)
(-OJ, OJ)
11
n-1
(Un)
(0, OJ)
IT
11
(u.) •
i=2 (0~1)
1
40
Integrating
with respec t to
g(u , ••• , un)
1
density of
and
1
u ,the joint
n
i.s (ignoring constant coefficients)
• • ., u n- 1
00
. S~
u
00
2
~1
exp(-(nv +2vt(1 + ~ u~»/2}dvdt
.
~=
•
n-1
II
(u.)
i=2 (0,1) ~
~
So
2
...
11
n-2
00
t
2
n-1 2
exp(-t «1 + ~ u.)
i=2 ~
n-1
2
n-1
- (1 + ~ u i ) /n)/2}dt. II
(u.)
i=2
i=2 (0~1) ~
11
~
00
S0
- (l +
n-l
•
II
r
(n-1) /2-1
n
exp~-r«l
2
+ ~ u.)
i=2 ~
n-1
2
n-1
~ u.) /n}dr. II
(u.)
i=2 ~
i=2 (0,1) ~
11
11
(4.5.4)
(u.) •
i=2 (0, 1) ~
Expressing (4.5.4) in terms of
(z
n-1
-r
••• , z
n
,(4.5.4) reduces to
• zl ) 2/ (~n (z . • .z)2)(n.1)/2 ,
i=l
~
and, finally, in terms of the original
n
( (x(n) • x(l) )/ (.~
~=1
(x~...
(4.5.5)
X variables,
• x·)2)1/2}n-1 •
(4.5.6)
41
By Theorem 3.4 the hypothesis of uniformity is rejected if
T
o)
Xen) - X
u,n
_ 2
n
> c ,
(4.5.7)
}: (X.-X)
i=l
peT u,n > cl p o)
where
1.
=~ •
As mentioned at the end of Section 4.1,
T
u,n
is a ratio of
functions of the respective complete and sufficient statistics, and in
particular is a ratio of estimators of dispersion for the respective
families.
Also, the statistic
T
u,n
is independent of the complete
and sufficient statistics of the respective families.
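A computational sketch of (4.5.7) follows; it is only an illustration, with a simulated critical value and power in place of the Table 4.5.1 entries, and the settings are arbitrary.

    import numpy as np

    def t_un(x):
        # T_{u,n} of (4.5.7): (X_(n) - X_(1)) / sqrt( sum (X_i - Xbar)^2 ).
        x = np.asarray(x, dtype=float)
        return (x.max() - x.min()) / np.sqrt(np.sum((x - x.mean()) ** 2))

    rng = np.random.default_rng(7)
    n, alpha, reps = 20, 0.05, 30000
    null = np.array([t_un(rng.random(n)) for _ in range(reps)])          # uniform null class
    c = np.quantile(null, 1.0 - alpha)                                   # reject uniformity if T_{u,n} > c
    power = np.mean([t_un(rng.normal(size=n)) > c for _ in range(reps)]) # normal alternative
    print(round(c, 3), round(power, 2))   # compare with the n = 20, alpha = .05 entries of Table 4.5.1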
4.5.2  Normal class as the null class of distributions.  Let the
null class of densities be

    (σ√(2π))^{−1} exp{ −(x−μ)² / 2σ² } I_{(−∞,∞)}(x) ,   σ > 0 ,  −∞ < μ < +∞ .

As in Section 4.2.2, the U.M.P.S.-α test is to reject normality if
T_{u,n} < c and to accept otherwise, where P(T_{u,n} < c | 𝒫₀) = α.  The
remarks in the paragraph following (4.5.7) also apply here.

4.5.3  Critical values of T_{u,n} and power for various sample
sizes.  Table 4.5.1 gives critical values of T_{u,n} for the test
H: Uniform versus K: Normal and for the test H: Normal versus
K: Uniform.  All entries were obtained by simulation using
30,000 samples.
Table 4.5.1  Critical Values and Power of the T_{u,n} Test for
Discriminating Between Uniform and Normal Distributions

H: Uniform (θ_1,θ_2)    K: Normal (μ,σ²)    Reject H if T_{u,n} > c

            α=.01             α=.05             α=.10
    n     c       Power     c       Power     c       Power
    10    1.215   .07       1.138   .22       1.096   .35
    20    .903    .33       .840    .59       .810    .72
    30    .728    .65       .687    .83       .667    .90

H: Normal (μ,σ²)    K: Uniform (θ_1,θ_2)    Reject H if T_{u,n} < c

            α=.01             α=.05             α=.10
    n     c       Power     c       Power     c       Power
    10    .837    .07       .890    .22       .921    .34
    20    .687    .27       .728    .53       .755    .69
    30    .607    .54       .644    .80       .666    .90
4.5.4  Discrimination between the Uniform (θ_1,θ_2) and Normal
(μ,σ²) distributions.  If it is desired to use the statistic T_{u,n}
to discriminate between uniformity and normality, then it may be
reasonable to require the error probabilities of misclassification to
be equal.  Table 4.5.2 gives the minimal sample size n₀ required to
obtain α = β ≤ α₀ for α₀ = .10, .05, and .01.  Table values were
obtained by simulation using 5000 samples.  Normality is accepted if
T_{u,n} exceeds the critical value c; otherwise uniformity is
accepted.

Table 4.5.2

    α₀      α       n₀      c
    .10     .100    31      .657
    .05     .048    41      .587
    .01     .009    68      .467
4.6  The U.M.P.S.-α Test for the Uniform (θ_1,θ_2) Versus the Exponential (θ,λ) Distribution

4.6.1  Uniform class as the null class of distributions.  Let the
null class of densities be

    (θ_2 − θ_1)^{−1} I_{(θ_1,θ_2)}(x) ,   −∞ < θ_1 < θ_2 < ∞ .

Uthoff [20] derives the U.M.P. location and scale invariant test for
testing this family against an Exponential (θ,λ) alternative family.
The test statistic obtained is

(4.6.1)    T_{e,u} = ( X̄ − X_(1) ) / ( X_(n) − X_(1) ) ,

which is easily seen to be equivalent to the R.M.L. test statistic.

An alternative method of deriving the U.M.P.S.-α test statistic is the
C.P.I.T. approach based on Theorem 3.4.  The C.P.I.T. transformations
for a random sample X_1, ..., X_n from this class of densities are
given in (4.5.2).  To find the U.M.P.S.-α test the joint
distribution of u_2, ..., u_{n−1} must be obtained under the assumption
that X_1, ..., X_n constitute a random sample from an Exponential
(θ,λ) distribution.  Without loss of generality, let (θ,λ) = (0,1).

Let z_1, ..., z_n be defined as in (4.5.2).  The joint density
of z_1, ..., z_n is (ignoring constant coefficients)

(4.6.2)    f(z_1, ..., z_n) ∝ exp( −Σ_{i=1}^n z_i ) I_{(0,∞)}(z_1) I_{(z_1,∞)}(z_n) ∏_{i=2}^{n−1} I_{(z_1,z_n)}(z_i) .

Let u_1 = z_1, u_n = z_n − z_1, and u_i, i = 2, ..., n−1, be as in
(4.5.2).  The joint density of u_1, ..., u_n is (ignoring
constant coefficients)

    g(u_1, ..., u_n) ∝ u_n^{n−2} exp{ −n u_1 − u_n (1 + Σ_{i=2}^{n−1} u_i) } I_{(0,∞)}(u_1) I_{(0,∞)}(u_n) ∏_{i=2}^{n−1} I_{(0,1)}(u_i) .

Integrating g(u_1, ..., u_n) with respect to u_1 and u_n, the joint
density of u_2, ..., u_{n−1} is (ignoring constant coefficients)

    h(u_2, ..., u_{n−1}) ∝ ∫_0^∞ t^{n−2} exp{ −t (1 + Σ_{i=2}^{n−1} u_i) } dt ∏_{i=2}^{n−1} I_{(0,1)}(u_i)

(4.6.3)    ∝ ( 1 + Σ_{i=2}^{n−1} u_i )^{−(n−1)} ∏_{i=2}^{n−1} I_{(0,1)}(u_i) .

Expressing (4.6.3) in terms of z_1, ..., z_n, (4.6.3) reduces to

(4.6.4)    { (z_n − z_1) / Σ_{i=1}^n (z_i − z_1) }^{n−1} ,

and, finally, in terms of the original X variables,

(4.6.5)    { (x_(n) − x_(1)) / Σ_{i=1}^n (x_i − x_(1)) }^{n−1} .

By Theorem 3.4, the hypothesis of uniformity is rejected if

(4.6.6)    T_{e,u} = ( X̄ − X_(1) ) / ( X_(n) − X_(1) ) < c ,

where P(T_{e,u} < c | 𝒫₀) = α.

As mentioned at the end of Section 4.1, T_{e,u} is a ratio of
functions of the respective complete and sufficient statistics, and
in particular is a ratio of estimators of dispersion for the
respective families.  Also, the statistic T_{e,u} is independent of the
complete and sufficient statistics for the respective families.
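As with the earlier statistics, (4.6.6) is simple to compute and to calibrate by simulation.  The sketch below is illustrative only; the simulated critical value and power merely stand in for the corresponding entries of Table 4.6.1.

    import numpy as np

    def t_eu(x):
        # T_{e,u} of (4.6.6): (Xbar - X_(1)) / (X_(n) - X_(1)).
        x = np.asarray(x, dtype=float)
        return (x.mean() - x.min()) / (x.max() - x.min())

    rng = np.random.default_rng(8)
    n, alpha, reps = 10, 0.05, 30000
    null = np.array([t_eu(rng.random(n)) for _ in range(reps)])               # uniform null class
    c = np.quantile(null, alpha)                                              # reject uniformity if T_{e,u} < c
    power = np.mean([t_eu(rng.exponential(size=n)) < c for _ in range(reps)]) # exponential alternative
    print(round(c, 3), round(power, 2))   # compare with the n = 10, alpha = .05 entries of Table 4.6.1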
4.6.2  Exponential class as the null class of distributions.
Let the null class of densities be

    λ^{−1} exp{ −(x−θ)/λ } I_{(θ,∞)}(x) ,   −∞ < θ < +∞ ,  λ > 0 .

As in 4.2.2, the U.M.P.S.-α test is to reject exponentiality if
T_{e,u} > c and to accept otherwise, where P(T_{e,u} > c | 𝒫₀) = α.  The
remarks in the paragraph following (4.6.6) also apply here.

4.6.3  Distribution of T_{e,u} when X_1, ..., X_n are i.i.d. Uniform
(θ_1,θ_2) random variables.  Let U_2, ..., U_{n−1} be as described in
(4.5.2).  As mentioned at the beginning of Section 3.2,
U_2, ..., U_{n−1} are i.i.d. Uniform (0,1) random variables.
Observing that

(4.6.7)    T_{e,u} = ( X̄ − X_(1) ) / ( X_(n) − X_(1) ) = ( 1 + Σ_{i=2}^{n−1} U_i ) / n ,

and applying the Central Limit Theorem, for large n the distribution
of Σ_{i=2}^{n−1} U_i is approximately normal with mean (n−2)/2 and variance
(n−2)/12.  Therefore, for large n, the distribution of
T_{e,u} = (X̄ − X_(1))/(X_(n) − X_(1)) is approximately normal with mean
1/2 and variance (n−2)/12n², and the critical value c in (4.6.6)
is approximately

(4.6.8)    c ≈ 1/2 − z_α [ (n−2)/3 ]^{1/2} / 2n ,

where Φ(z_α) = 1 − α and Φ(·) is the standard normal distribution
function.

The distributional properties of a sum of i.i.d. Uniform (0,1)
random variables have received considerable attention in the literature
of statistics.  It is well known that the convergence to normality of
such a sum is very rapid, the approximation being good for many
purposes for samples as small as n = 5.  For the purpose of
examining the tails of the distribution, however, larger samples may
be required.  For n = 10 and α = .01, the critical value obtained
by simulation is 0.315.  Using (4.6.8), c = 0.310, with empirical
power 0.0091.  The comparisons improve for α = .05 and .10.
4.6.4  Distribution of T_{e,u} when X_1, ..., X_n are i.i.d. Exponential
(θ,λ) random variables.  Let U_2, ..., U_{n−1} be as described in (4.5.2).
The density of Y = Σ_{i=2}^{n−1} U_i is, using (4.6.3),

(4.6.9)    f(y) = (n−1)! (1+y)^{−(n−1)} ψ_{n−2}(y) ,

where ψ_{n−2}(·) is the density function of the sum of n−2 i.i.d.
Uniform (0,1) random variables.  Using relation (4.6.7), the density
of R = T_{e,u} is

(4.6.10)    g(r) = n (n−1)! (nr)^{−(n−1)} ψ_{n−2}(nr − 1) .

The asymptotic properties of this distribution are not known.
This density is quite complicated for even small n, and is, therefore,
only recommended for use for very small n.
4.6.5  Critical values of T_{e,u} and power for various sample sizes.
Table 4.6.1 gives critical values of T_{e,u} for the test H: Uniform
versus K: Exponential and for the test H: Exponential versus
K: Uniform.  For n ≥ 20, the normal approximation was used for
critical values and power under the assumption of uniformity.  Other
entries were obtained by simulation, using 30,000 samples of size n
for each n.

Table 4.6.1  Critical Values and Power of the T_{e,u} Test for
Discriminating Between Uniform and Exponential Distributions

H: Uniform (θ_1,θ_2)    K: Exponential (θ,λ)    Reject H if T_{e,u} < c

            α=.01            α=.05            α=.10
    n     c      Power     c      Power     c      Power
    10    .315   .42       .366   .64       .395   .75
    20    .357   .85       .398   .94       .420   .97
    30    .382   .98       .416   .99       .435   1.00

H: Exponential (θ,λ)    K: Uniform (θ_1,θ_2)    Reject H if T_{e,u} > c

            α=.01            α=.05            α=.10
    n     c      Power     c      Power     c      Power
    10    .558   .24       .489   .55       .454   .71
    20    .455   .77       .404   .94       .376   .98
    30    .406   .97       .361   1.00      .338   1.00

4.6.6  Discrimination between the Uniform (θ_1,θ_2) and Exponential
(θ,λ) distributions.  If it is desired to use the statistic T_{e,u} to
discriminate between uniformity and exponentiality, then it may be
reasonable to require the error probabilities of misclassification to
be equal.  Table 4.6.2 gives the minimal sample size n₀ required to
obtain α = β ≤ α₀ for α₀ = .10, .05, and .01.  Table values were
obtained by simulation, using 5000 samples, and by using the normal
approximation developed in Section 4.6.3.  Exponentiality is
accepted if T_{e,u} exceeds the critical value c; otherwise uniformity
is accepted.

Table 4.6.2

    α₀      α       n₀      c
    .10     .095    15      .409
    .05     .049    21      .401
    .01     .010    36      .391

It is interesting to note that the sample sizes in Table 4.6.2 are
larger than the corresponding sample sizes in Table 4.3.2, and the
power values in Table 4.6.1 are smaller than the corresponding values
in Table 4.3.1.  This is a direct consequence of the result stated in
Theorem 3.5.
4.7  The U.M.P.S.-α Test for the Uniform (0,θ) Versus the Right Triangular (0,θ) Distribution

4.7.1  Right triangular class as the null class of distributions.
Let the null class of densities be

    2 θ^{−2} x I_{(0,θ)}(x) ,   θ > 0 .

Since both classes are scale parameter classes, the U.M.P.I.-α
and U.M.P.S.-α test is given by (4.3.1), where f_0 is the right
triangular density function and f_1 is the uniform density function.
Evaluating the denominator of (4.3.1),

(4.7.1)    (2θ^{−2})ⁿ ( ∏_{i=1}^n x_i ) ∫_0^∞ v^{2n−1} I_{(0, θ/x_(n))}(v) dv = 2^{n−1} ∏_{i=1}^n x_i / ( n x_(n)^{2n} ) .

The numerator of (4.3.1) is given in (4.3.2).  The test is then to
reject right triangularity if ∏_{i=1}^n ( X_(n) / X_i ) > c, or equivalently,
if

(4.7.2)    T_{u,r} ≡ Σ_{i=1}^n ℓn( X_(n) / X_i ) / n > c ,

where P(T_{u,r} > c | 𝒫₀) = α.  This test is equivalent to the R.M.L.
test.

As mentioned at the end of Section 4.1, the U.M.P. similar-α test
is based on the ratio of functions of the respective complete and
sufficient statistics.  Also, T_{u,r}, the logarithm of this ratio, is
independent of the complete and sufficient statistics of the respective
families.
4.7.2  Uniform class as the null class of distributions.  Let the
null class of densities be

    θ^{−1} I_{(0,θ)}(x) ,   θ > 0 .

As in Section 4.2.2, the U.M.P.S.-α test is to reject uniformity if
T_{u,r} < c and accept otherwise, where P(T_{u,r} < c | 𝒫₀) = α.  The
remarks in the paragraph following (4.7.2) also apply here.
4.7.3  Distribution of T_{u,r} when X_1, ..., X_n are i.i.d. right triangular random variables.  Let z_n := x_(n), the largest sample member, and let the other z's be defined as follows.  Suppose x_j = x_(n); then z_i := x_i, i = 1, ..., j-1, and z_i := x_{i+1}, i = j, ..., n-1.  The joint density of z_1, ..., z_n is

    f(z_1, ..., z_n) = n (2θ^{-2})^n ( ∏_{i=1}^{n} z_i ) 1_{(0,θ)}(z_n) ∏_{i=1}^{n-1} 1_{(0,z_n)}(z_i) .        (4.7.3)

Let y_i := z_i/z_n, i = 1, ..., n-1, and y_n := z_n.  The joint density of y_1, ..., y_n is

    g(y_1, ..., y_n) = n (2θ^{-2})^n y_n^{2n-1} ( ∏_{i=1}^{n-1} y_i ) 1_{(0,θ)}(y_n) ∏_{i=1}^{n-1} 1_{(0,1)}(y_i) .        (4.7.4)

The marginal density of y_1, ..., y_{n-1} is

    h(y_1, ..., y_{n-1}) = n (2θ^{-2})^n ( ∏_{i=1}^{n-1} y_i 1_{(0,1)}(y_i) ) ∫_0^θ v^{2n-1} dv
                         = 2^{n-1} ∏_{i=1}^{n-1} y_i 1_{(0,1)}(y_i) .        (4.7.5)

Let t_i := -Σ_{j=1}^{i} ℓn y_j, i = 1, ..., n-1.  Note that t_{n-1} = nT_{u,r}.  The joint density of t_1, ..., t_{n-1} is

    h(t_1, ..., t_{n-1}) = 2^{n-1} exp(-2 t_{n-1}) ( ∏_{i=1}^{n-2} 1_{(0,t_{i+1})}(t_i) ) 1_{(0,∞)}(t_{n-1}) .        (4.7.6)

The marginal density of t_{n-1} is

    p(t) = (Γ(n))^{-1} 2^n t^{n-1} exp(-2t) 1_{(0,∞)}(t) ,        (4.7.7)

so 4nT_{u,r} is a chi-square random variable with 2n degrees of freedom.  The distribution function of T_{u,r} is

    F(t) = P(T_{u,r} ≤ t) = P(χ²_{2n} ≤ 4nt) .        (4.7.8)

4.7.4  Distribution of T_{u,r} when X_1, ..., X_n are i.i.d. Uniform (0,θ) random variables.  Let z_1, ..., z_n be defined as in Section 4.7.3.  The joint density of z_1, ..., z_n is

    f(z_1, ..., z_n) = n θ^{-n} 1_{(0,θ)}(z_n) ∏_{i=1}^{n-1} 1_{(0,z_n)}(z_i) .        (4.7.9)

Let y_i := z_i/z_n, i = 1, ..., n-1, and y_n := z_n.  The joint density of y_1, ..., y_n is

    g(y_1, ..., y_n) = n θ^{-n} y_n^{n-1} 1_{(0,θ)}(y_n) ∏_{i=1}^{n-1} 1_{(0,1)}(y_i) ,        (4.7.10)

so Y_1, ..., Y_{n-1} are i.i.d. Uniform (0,1) random variables.  Let t_i := -Σ_{j=1}^{i} ℓn y_j, i = 1, ..., n-1.  Note that t_{n-1} = nT_{u,r}.  The joint density of t_1, ..., t_{n-1} is

    h(t_1, ..., t_{n-1}) = exp(-t_{n-1}) ( ∏_{i=1}^{n-2} 1_{(0,t_{i+1})}(t_i) ) 1_{(0,∞)}(t_{n-1}) .        (4.7.11)

The marginal density of t_{n-1} is

    p(t) = (Γ(n))^{-1} t^{n-1} exp(-t) 1_{(0,∞)}(t) ,        (4.7.12)

so 2nT_{u,r} is a chi-square random variable with 2n degrees of freedom.  The distribution function of T_{u,r} is

    G(t) = P(T_{u,r} ≤ t) = P(χ²_{2n} ≤ 2nt) .        (4.7.13)
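The chain of transformations in Sections 4.7.3 and 4.7.4 can be mirrored numerically.  The sketch below (my own illustration, not part of the thesis) draws a sample from each null class, forms the ratios y_i = z_i/z_(n) of the non-maximal observations to the maximum, and accumulates t_{n-1} = nT_{u,r}; by (4.7.5) the ratios have density 2y on (0,1) under right triangularity, while by (4.7.10) they are Uniform (0,1) under uniformity.

    import numpy as np

    rng = np.random.default_rng(1)

    def ratios_and_nT(x):
        # y_i = z_i / z_(n) for the n-1 non-maximal observations;
        # t_{n-1} = -sum_i ln y_i = n * T_{u,r}.
        x = np.asarray(x, dtype=float)
        y = np.delete(x, np.argmax(x)) / x.max()
        return y, float(-np.log(y).sum())

    n = 25
    x_rt = np.sqrt(rng.uniform(size=n))   # right triangular (0,1): density 2x
    x_un = rng.uniform(size=n)            # uniform (0,1)

    y_rt, nT_rt = ratios_and_nT(x_rt)     # ratios with density 2y on (0,1), cf. (4.7.5)
    y_un, nT_un = ratios_and_nT(x_un)     # ratios Uniform(0,1), cf. (4.7.10)
    print(nT_rt / n, nT_un / n)           # T_{u,r} near 1/2 and near 1, respectively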
4.7.5  Critical values of T_{u,r} and power for various sample sizes.  Table 4.7.1 gives critical values of T_{u,r} for the test H: Uniform versus K: Right Triangular and for the test H: Right Triangular versus K: Uniform.  Formulae (4.7.8) and (4.7.13) were used to obtain the entries in the table.
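A sketch of how such entries can be reproduced from (4.7.8) and (4.7.13) follows.  It is my own illustration (Python with scipy assumed), not part of the thesis; for n = 10 and α = .01 it returns the critical values .4130 and .9392 that appear in Table 4.7.1.

    from scipy import stats

    def uniform_null_entry(n, alpha):
        # H: Uniform vs K: Right Triangular; reject H if T_{u,r} < c.
        c = stats.chi2.ppf(alpha, df=2 * n) / (2.0 * n)        # size from (4.7.13)
        power = stats.chi2.cdf(4.0 * n * c, df=2 * n)          # power from (4.7.8)
        return c, power

    def rt_null_entry(n, alpha):
        # H: Right Triangular vs K: Uniform; reject H if T_{u,r} > c.
        c = stats.chi2.ppf(1.0 - alpha, df=2 * n) / (4.0 * n)  # size from (4.7.8)
        power = stats.chi2.sf(2.0 * n * c, df=2 * n)           # power from (4.7.13)
        return c, power

    for n in (10, 20, 30, 40, 50):
        print(n, uniform_null_entry(n, 0.01), rt_null_entry(n, 0.01))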
4.7.6  Discrimination between the Right Triangular and Uniform distributions.  If it is desired to use the statistic T_{u,r} to discriminate between right triangularity and uniformity, then it may be reasonable to require the error probabilities of misclassification to be equal.  Table 4.7.2 gives the minimal sample size n_0 required to obtain α = β ≤ α_0 for α_0 = .10, .05, and .01.
Table 4.7.1  Critical Values and Power of T_{u,r} Test for Discriminating Between
             Uniform and Right Triangular Distributions

    H: Uniform    K: Right Triangular    (Reject H if T_{u,r} < c)

             α = .01            α = .05            α = .10
      n      c       Power      c       Power      c       Power
     10     .4130    .316      .5425    .643      .6221    .794
     20     .5541    .706      .6627    .918      .7263    .968
     30     .6248    .908      .7198    .986      .7743    .996
     40     .6693    .977      .7549    .998      .8035   1.000
     50     .7007    .995      .7793   1.000      .8236   1.000
      ∞     1        1         1        1         1        1

    H: Right Triangular    K: Uniform    (Reject H if T_{u,r} > c)

             α = .01            α = .05            α = .10
      n      c       Power      c       Power      c       Power
     10     .9392    .536      .7853    .735      .7103    .820
     20     .7961    .818      .6970    .926      .6476    .959
     30     .7365    .937      .6590    .981      .6200    .991
     40     .7021    .980      .6367    .995      .6036    .998
     50     .6790    .994      .6217    .999      .5925   1.000
      ∞     .5       1         .5       1         .5       1

Table 4.7.2  Minimal Sample Sizes Required to Obtain α = β ≤ α_0 for Discriminating
             Between the Right Triangular and Uniform Distributions

      α_0      n_0      α = β        c
      .10      15       .09250      .67786
      .05      23       .04981      .68316
      .01      46       .00975      .68815
Table values were obtained by using formulae (4.7.8) and (4.7.13).  Uniformity is accepted if T_{u,r} exceeds the critical value c; otherwise right triangularity is accepted.
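The equal-error sample sizes of Table 4.7.2 can be recovered along the following lines (a sketch of mine, not from the thesis): for each n, the cutoff c that equates the two misclassification probabilities is located from (4.7.8) and (4.7.13), and n is increased until the common error falls below α_0.

    from scipy import stats, optimize

    def equal_error_point(n):
        # Find c with P(T_{u,r} > c | right triangular) = P(T_{u,r} < c | uniform),
        # using the chi-square forms (4.7.8) and (4.7.13).
        def gap(c):
            return stats.chi2.sf(4 * n * c, df=2 * n) - stats.chi2.cdf(2 * n * c, df=2 * n)
        c = optimize.brentq(gap, 0.5, 1.0)
        return c, stats.chi2.cdf(2 * n * c, df=2 * n)   # common error probability

    def minimal_sample_size(alpha0, n_max=200):
        # Smallest n whose common error probability does not exceed alpha0.
        for n in range(2, n_max + 1):
            c, err = equal_error_point(n)
            if err <= alpha0:
                return n, err, c
        return None

    for a0 in (0.10, 0.05, 0.01):
        # roughly (15, .0925, .678), (23, .0498, .683), (46, .0098, .688)
        print(a0, minimal_sample_size(a0))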
4.8  The U.M.P.S.-α Test for the Pareto Versus the Lognormal Distribution

Let X_1, ..., X_n be i.i.d. according to one of the two following classes of densities:

    f_1(x) = θ_1 θ_2^{θ_1} x^{-(θ_1 + 1)} 1_{(θ_2,∞)}(x) ,    θ_1 > 0 , θ_2 > 0    (Pareto density),

    f_2(x) = (√(2π) σ x)^{-1} exp{ -(ℓn x - μ)^2 / 2σ^2 } 1_{(0,∞)}(x)    (Lognormal density).

Let Y_i = ℓn X_i, i = 1, ..., n.  Then Y_1, ..., Y_n are i.i.d. according to either the Exponential (ℓn θ_2, θ_1) or the Normal (μ, σ²) distribution.  The U.M.P.S.-α test for this testing problem is given in 4.1.  It is readily shown that this test is equivalent to the R.M.L. test.
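The reduction is easy to see numerically.  The following sketch (mine, with a standard Pareto parametrization assumed rather than taken from the thesis) generates samples from the two classes and applies the logarithmic transformation, after which the exponential-versus-normal problem of Section 4.2 applies to the transformed data.

    import numpy as np

    rng = np.random.default_rng(2)

    def pareto_sample(n, shape, scale):
        # Inverse-c.d.f. sampling for the Pareto density shape*scale^shape * x^-(shape+1), x > scale.
        return scale * (1.0 - rng.uniform(size=n)) ** (-1.0 / shape)

    n = 50
    x_pareto = pareto_sample(n, shape=2.0, scale=3.0)
    x_lognormal = rng.lognormal(mean=0.5, sigma=1.2, size=n)

    # Under the Pareto model, ln X follows a two-parameter exponential law with
    # location ln(scale); under the lognormal model, ln X is Normal(mean, sigma^2).
    y_pareto = np.log(x_pareto)
    y_lognormal = np.log(x_lognormal)
    print(y_pareto.min(), y_pareto.mean(), y_lognormal.mean())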
SUMMARY
Under rather general conditions it has been shown that a most
powerful similar test may be obtained as a function of C.P.I.T.'s for
a composite goodness-of-fit null hypothesis.
Sufficient conditions
are given to assure that such a test is uniformly most powerful
against a composite alternative class.
It is further shown under
general conditions that this test identifies with certain uniformly
most powerful invariant tests.
Also, it is shown that if certain
conditions are satisfied then the addition of information about the
parameters of either the null or alternative classes results in a
revised test with power at least as great as the test based on the
original information.
The usefulness of such optimal tests is limited in practice by
the following considerations.
The separable hypotheses testing
problem consists essentially of two parametric classes of distributions,
i.e., the null hypothesis class and the alternative class.
When either of these classes is changed, a different test statistic
with a different null distribution will obtain, in general.
Thus,
each new testing problem requires a new set of significance points.
Since for many cases the only practical way to obtain significance
points is by Monte Carlo simulation, the computational effort and
resultant tables would be large if tests are to be available for
many cases of interest.
In practice, one often uses the same test statistic for a
particular null hypothesis class against all alternative classes.
Also, the C.P.I.T. approach mentioned earlier can be used to
establish a specified size test for many different testing problems
and only one set of significance points is required.
Either of
these types of tests will, in general, be suboptimal for a particular
problem.
When such a suboptimal test is used, the power of the
foregoing optimal test will provide a least upper bound for the power
of such tests.
Finally, it was shown that under general conditions the test
statistic used to construct the most powerful similar test for a
particular testing problem can be used to construct an optimal
classification rule.
6.  LIST OF REFERENCES

1.  Antle, C., R. Dumonceaux, and G. Haas. 1973. Likelihood ratio test for discrimination between two models with unknown location and scale parameters. Technometrics 15:19-28.

2.  Atkinson, A. C. 1970. A method of discriminating between models. J. Royal Stat. Soc., Series B, 32:323-345.

3.  Basu, D. 1955. On statistics independent of a complete sufficient statistic. Sankhya 15:377-380 and 20:223-226.

4.  Chen, E. H. 1971. Random normal number generator for 32-bit-word computers. J. Amer. Stat. Assoc. 66:400-403.

5.  Cox, D. R. 1961. Tests of separate families of hypotheses. Proceedings of the Fourth Berkeley Symposium, Vol. 1, University of California Press, Berkeley, Calif., 105-123.

6.  Cox, D. R. 1962. Further results on tests of separate families of hypotheses. J. Royal Stat. Soc., Series B, 24:406-424.

7.  Dumonceaux, R. and C. Antle. 1973. Discrimination between the log-normal and the Weibull distributions. Technometrics 15:923-926.

8.  Dyer, A. R. 1971. A comparison of classification and hypothesis testing procedures for choosing between competing families of distributions, including a survey of the goodness of fit tests. Technical Memorandum No. 18, Aberdeen Research and Development Center, Aberdeen Proving Ground, Maryland.

9.  Dyer, A. R. 1973. Discrimination procedures for separate families of hypotheses. J. Amer. Stat. Assoc. 68:970-974.

10. Dyer, A. R. 1974. Hypothesis testing procedures for separate families of hypotheses. J. Amer. Stat. Assoc. 69:140-145.

11. Geary, R. C. and E. S. Pearson. 1955. The ratio of the mean deviation to the standard deviation as a test of normality. Biometrika 27:310-335.

12. Irwin, J. O. 1942. The distribution of the logarithm of survival times when the true law is exponential. J. of Hygiene 42:328-333.

13. Jackson, O. A. Y. 1968. Some results on tests of separate families of hypotheses. Biometrika 55:355-363.

14. Lehmann, E. L. 1959. Testing Statistical Hypotheses. John Wiley and Sons, Inc., New York City, New York.

15. Moore, D. 1973. A note on Srinivasan's goodness of fit test. Biometrika 60:209-211.

16. O'Reilly, F. J. and C. P. Quesenberry. 1973. The conditional probability integral transformation and applications to obtain composite chi-square goodness-of-fit tests. Annals of Statistics 1:74-83.

17. Quesenberry, C. P. 1973. On conditional probability integral transformations and unbiased distribution functions. Unpublished manuscript. Department of Statistics, North Carolina State University, Raleigh, N.C.

18. Schafer, R., J. Finkelstein, and J. Collins. 1972. On a goodness of fit test for the exponential distribution with mean unknown. Biometrika 59:222-224.

19. Srinivasan, R. 1970. An approach to testing the goodness of fit of incompletely specified distributions. Biometrika 57:605-611.

20. Uthoff, V. A. 1970. An optimum test property of two well-known statistics. J. Amer. Stat. Assoc. 65:1597-1600.

21. Uthoff, V. A. 1973. The most powerful scale and location invariant test of the normal versus the double exponential. Annals of Statistics 1:170-174.