ON CERTAIN TESTS AND THE MONOTONICITY OF THEIR POWERS
by
K. V. Ramachandran
Special report to the Office of Naval
Research of work at Chapel Hill under
Contract NR 042 031, Project N7-onr284(02),
for research in Probability and Statistics.
Institute of Statistics
Mimeograph Series No. 120
November 16, 1954
ACKNOWLEDGMENT
I wish to express my sincere and heartfelt
thanks to Professor S. N. Roy, who suggested the
problem, and without whose constant advice and
help the work would not have been completed.
To the Institute of Statistics, and particularly to Professor Harold Hotelling, I am obliged
for the financial and other facilities which were
provided me during the progress of this work.
My thanks are due to the United States
Educational Foundation in India and the Institute
of International Education for their financial
assistance.
Finally, I wish to thank Mrs. Mary Ann
Taylor for her careful typing of the manuscript.
TABLE OF CONTENTS

                                                                Page
ACKNOWLEDGMENT                                                    ii
INTRODUCTION                                                      vi

Chapter
I.   UNIVARIATE TESTS ON VARIANCES FROM NORMAL POPULATIONS         1
     1.1  Case of One Population                                   1
     1.2  Case of Two Populations                                  4
II.  MULTIVARIATE TESTS ON COVARIANCE MATRICES                    10
     2.1  Case of One Population                                  10
     2.2  Case of Two Populations                                 12
     2.3  Properties of a Certain Special Function                14
     2.4  Reduction and Evaluation of the Integral                24
     2.5  On the Joint C.D.F. of the Largest and
          Smallest Roots                                          35
     2.6  Power Function of the Test Procedure
          Given in (2.2)                                          38
III. TUKEY TEST ON THE EQUALITY OF MEANS AND
     HARTLEY TEST ON THE EQUALITY OF VARIANCES                    51
     3.1  Introduction                                            51
     3.2  The Tukey q Test                                        53
     3.3  Power Function of the q Test                            54
     3.4  Canonical Reduction of the Power Function               57
     3.5  Uniform Unbiased Nature of the q Test                   59
     3.6  Lower Bound to the Power of the q Test                  69
     3.7  The Hartley F_max Ratio Test                            70
     3.8  Power Function of the F_max Test                        71
     3.9  Canonical Reduction of the Power Function               73
     3.10 Uniform Unbiased Nature of the F_max Test               74
     3.11 Lower Bound to the Power of the F_max Test              82
     3.12 Multivariate Analogue of Tukey and
          Hartley Tests                                           83
IV.  THE SIMULTANEOUS ANALYSIS OF VARIANCE TEST                   85
     4.1  Introduction                                            85
     4.2  Quasi-independent Test of Hypotheses                    86
     4.3  Simultaneous Analysis of Variance Model
          and Tests of Hypotheses                                 90
     4.4  Evaluation of the Probability Statement
          Given in the Left Side of (4.3.4)                       92
     4.5  Special Case when t_i = t (i = 1, ..., k)               95
     4.6  Studentized Largest Chi-square                          96
     4.7  Power Series Expansion for the Incomplete
          Gamma Type Integrals                                    97
     4.8  Convergence of the Series on the Right
          Side of (4.7.2)                                         98
     4.9  Distribution of the Studentized Largest
          Chi-square                                             100
     4.10 Power Function of the Sim. Anova Test                  102
V.   THE STUDENTIZED MAXIMUM MODULUS TEST                        109
     5.1  Introduction                                           109
     5.2  Power Function of the Studentized
          Maximum Modulus Test                                   112

APPENDIX                                                         114
BIBLIOGRAPHY                                                     117
INTRODUCTION

The subject of this dissertation is the monotonicity property
of a class of tests which are derived by the union-intersection
principle [18]¹, the tests being all within the Neyman-Pearson
set-up of two-decision problems.

It is well known that most non-sequential parametric tests
(within the Neyman-Pearson set-up), some of them in current use
and some of them more or less recently proposed by various workers
for different situations, have power functions with the following
property.
In each case the power happens to be a function of
certain parameters, which are functions of the original parameters,
being much less in number than the original parameters. Furthermore,
this set of parametric functions is such that the set as a
whole can be appropriately regarded as a measure of deviation from
the hypothesis to be tested. For example, in most cases it turns
out that every member of the set is zero, and in some cases every
member is unity, if and only if the hypothesis tested (to be called
the null hypothesis) is true. For many of the current non-sequential
parametric tests (to be called the classical tests),
it is further known that this power function has the additional
property of unbiasedness in the sense of Neyman; later designated

1. Numbers in square brackets refer to the bibliography.
as complete or uniform unbiasedness, in contrast with local
unbiasedness, whose meaning is also obvious in the context of the
Neyman-Pearson theory. Many of these uniformly or completely unbiased
tests happen to possess what is indeed a stronger property and one
that automatically implies complete unbiasedness. This property
is that the power function of the test monotonically increases as
each of the parameters involved in the power function (to be called
the deviation parameters) tends away from its value on the null
hypothesis, the value being usually zero or unity. This property
has a considerable significance in terms of the loss functions of
the general Wald theory, but this point need not be elaborated
here. We shall call this property, when it exists, the monotonicity
property of the power function, and note that it implies the weaker
property of complete unbiasedness.

In this dissertation a number of tests recently proposed in
relation to means and variances of univariate and multivariate
normal populations will be studied from this point of view, and it
will be shown that many of these tests have the monotonicity
property, while some of them are completely unbiased but do not
have the monotonicity property. It is also shown, incidentally,
that each of these tests can be derived from a uniform principle
of test construction called the union-intersection principle, which
yields in most cases a lower bound to the power function which
turns out in many cases to be pretty good [19].
The monotonicity property of the multivariate analysis of
variance test, and of the test for the independence of two sets of
variables from a (p + q)-variate normal population, each test being
based on the union-intersection principle, has been proved by
Roy [19].

In Chapter I, we have proved that, under a certain partition
of the tail areas, the two-sided F test for the equality of two
variances from two univariate normal populations has the
monotonicity property. The case of one population follows as a
corollary to the above result. Percentage points connected with
the modified test are given for different values of the parameters.

In Chapter II, we have shown that, if we impose a certain
restriction on the tail areas, we get a test for the equality of
two covariance matrices from two multinormal populations which
has the monotonicity property when all the characteristic roots
of a certain matrix are equal. The case of one population follows
as a corollary to the above result. The distribution problem
connected with the test procedure is also given. The results
given in this chapter generalize those given in Chapter I to
multinormal populations.

In Chapter III, we have investigated certain power properties
of the Tukey Studentized range q and the Hartley F_max ratio tests.
We have shown that these tests are uniformly unbiased, but that
the power functions do not have the monotonicity property. Also
we have obtained certain useful lower bounds to the power functions
of these tests. Multivariate extensions of these tests are also
considered.

In Chapter IV, power properties of the sim. anova test are
investigated. We have shown that the sim. anova test has the
monotonicity property. We have also obtained lower bounds to
certain probability statements connected with the sim. anova test.
The distribution problem connected with the test has been solved,
and upper 5 per cent points of the Studentized largest chi-square
are given for different values of the parameters.

In Chapter V, certain optimum power properties of the
Studentized maximum modulus test have been proved. The
distribution problem connected with the test was solved by the
author jointly with Pillai [16]. It is shown that the Studentized
maximum modulus test has the monotonicity property. Using a
certain result due to Kimball [11], we have obtained useful
lower bounds to certain probability statements connected with the
test.
NOTATION

All vectors are column vectors, and primes indicate their transposes.
"X(p x q)" denotes "a matrix with p rows and q columns."
"c(X)" means "the characteristic roots of X(p x p)."
"Sup c(X)" means "the largest c(X)."
"u_max" stands for "the maximum in the set (u₁, …, u_k)."
"u_min" stands for "the minimum in the set (u₁, …, u_k)."
"|x|" stands for "the absolute value of x."
"a.e." stands for "almost everywhere."
"p.d." stands for "positive definite."
"n.d." stands for "negative definite."
"p.d.f." stands for "probability density function."
"c.d.f." stands for "cumulative distribution function."
"d.f." stands for "degrees of freedom."
"N(μ, σ²)" stands for "a random variable x having a normal
p.d.f. with mean μ and variance σ²."
"N(ξ, Σ)" stands for "a random vector x(p x 1) having a
multinormal p.d.f. with mean vector ξ(p x 1) and covariance
matrix Σ(p x p)."
CHAPTER I

UNIVARIATE TESTS ON VARIANCES FROM NORMAL POPULATIONS

1.1 Case of one population. Let xᵢ (i = 1, 2, …, n) be a random
sample of size n from an N(μ, σ²) population. To test the hypothesis
σ²/σ₀² = 1 against the alternative σ²/σ₀² ≠ 1, we have the test
procedure: accept σ² = σ₀² if χ₁² ≤ χ² ≤ χ₂², where

(1.1.1)   χ² = Σᵢ₌₁ⁿ (xᵢ − x̄)² / σ₀²,   x̄ = Σᵢ₌₁ⁿ xᵢ / n,

and χ² is a chi-square variable with n−1 d.f.; and reject the
hypothesis otherwise.

The usual procedure is to choose χ₁² and χ₂² such that

(1.1.2)   ∫₀^(χ₁²) p(χ²/1; n−1) dχ² = ∫_(χ₂²)^∞ p(χ²/1; n−1) dχ² = α/2,

where α is the given level of significance of the test, and where
p(χ²/δ²; n−1) is defined to be equal to

(1.1.3)   Const. (χ²/δ²)^((n−3)/2) e^(−χ²/2δ²) / δ².

If we do not impose the restriction (1.1.2), then the test
procedure given above depends on two quantities χ₁² and χ₂² which
we can choose in an infinite number of ways such that the level of
significance of the test is α. Equation (1.1.2) gives one partition
of the tail areas.
We shall choose the tail areas in such a way that the resulting
test has the monotonicity property.
This presumably can be
accomplished in various ways, one being as follows:
Under the alternative hypothesis let P denote the power of
the test procedure, i.e., let

(1.1.4)   P = 1 − ∫_(χ₁′²)^(χ₂′²) p(χ²/δ²; n−1) dχ²

denote the power of the test procedure based on χ₁′² and χ₂′², where

(1.1.5)   1 − α = ∫_(χ₁′²)^(χ₂′²) p(χ²/1; n−1) dχ²,

and δ² = σ²/σ₀² is the deviation parameter. We shall choose χ₁′²
and χ₂′² such that, in addition to (1.1.5), we also have

   ∂P/∂δ² = 0 when δ² = 1, i.e., when σ² = σ₀².

This condition imposes a further restriction on χ₁′² and χ₂′²
which, together with (1.1.5), fixes χ₁′² and χ₂′².
We shall presently show that under this condition the test
procedure has the monotonicity property.
Now,

(1.1.6)   P = 1 − c ∫_(χ₁′²)^(χ₂′²) (χ²/δ²)^((n−3)/2) e^(−χ²/2δ²) dχ²/δ²

is the power function of the test procedure. Notice that c > 0
is a pure constant independent of δ². Hence

(1.1.7)   ∂P/∂δ² = (c/δ^(n+1)) [ (χ₂′²)^((n−1)/2) e^(−χ₂′²/2δ²) − (χ₁′²)^((n−1)/2) e^(−χ₁′²/2δ²) ].

The condition ∂P/∂δ² = 0 when δ² = 1 is equivalent to

(1.1.8)   (χ₁′²)^((n−1)/2) e^(−χ₁′²/2) = (χ₂′²)^((n−1)/2) e^(−χ₂′²/2),

that is,

   (χ₂′²/χ₁′²)^((n−1)/2) = e^((χ₂′² − χ₁′²)/2).

Now (1.1.7) reduces to

   ∂P/∂δ² = (c/δ^(n+1)) (χ₁′²)^((n−1)/2) e^(−χ₁′²/2δ²) [ e^((1/2)(1 − 1/δ²)(χ₂′² − χ₁′²)) − 1 ],

using (1.1.8). It is now easy to check that

   ∂P/∂δ² > 0 if δ² > 1   and   ∂P/∂δ² < 0 if δ² < 1.

Thus P, the power of the modified test, is a monotonic
increasing function of δ² if δ² > 1, and a monotonic decreasing
function of δ² if δ² < 1. Hence the modified test has the
monotonicity property. The values of χ₁′² and χ₂′² for α = .05
are given in Table 1 (see appendix) for different values of n.
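The two conditions (1.1.5) and (1.1.8) can be solved numerically for χ₁′² and χ₂′². The sketch below (an illustration added here, not part of the original tables) does this by bisection for given n and α, using a power-series evaluation of the chi-square c.d.f.; the function g is the logarithm of c^((n−1)/2) e^(−c/2), so g(χ₁′²) = g(χ₂′²) is exactly condition (1.1.8).

```python
import math

def gammainc_lower(a, x):
    # regularized lower incomplete gamma function P(a, x), via its power series
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    k = 0
    while term > 1e-16 * total:
        k += 1
        term *= x / (a + k)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def chi2_cdf(x, df):
    # c.d.f. of a chi-square variable with df degrees of freedom
    return gammainc_lower(df / 2.0, x / 2.0)

def g(c, df):
    # log of c^(df/2) e^(-c/2), the quantity equalized by condition (1.1.8);
    # it peaks at c = df, so the two solutions straddle df
    return (df / 2.0) * math.log(c) - c / 2.0

def modified_limits(df, alpha):
    # solve (1.1.5) and (1.1.8) for the modified limits chi1'^2 < df < chi2'^2
    def upper_of(c1):
        lo, hi = float(df), df + 200.0 * math.sqrt(df)
        for _ in range(200):          # bisect for c2 > df with g(c2) = g(c1)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if g(mid, df) > g(c1, df) else (lo, mid)
        return 0.5 * (lo + hi)
    lo, hi = 1e-9, float(df)
    for _ in range(100):              # coverage decreases as c1 grows
        c1 = 0.5 * (lo + hi)
        cover = chi2_cdf(upper_of(c1), df) - chi2_cdf(c1, df)
        lo, hi = (c1, hi) if cover > 1 - alpha else (lo, c1)
    c1 = 0.5 * (lo + hi)
    return c1, upper_of(c1)
```

Calling modified_limits(n − 1, 0.05) then gives the pair (χ₁′², χ₂′²) for sample size n, values of the kind tabulated in Table 1 of the appendix.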
1.2 Case of two populations. Let x₁ᵢ (i = 1, 2, …, n₁) and
x₂ᵢ (i = 1, 2, …, n₂) be independent samples of sizes n₁ and n₂
from N(μ₁, σ₁²) and N(μ₂, σ₂²) respectively. To test the hypothesis
σ₁² = σ₂² against the alternative σ₁² ≠ σ₂², we have the test
procedure: accept σ₁² = σ₂², that is σ₁²/σ₂² = 1, if F₁ ≤ F ≤ F₂,
where

(1.2.1)   F = [Σᵢ (x₁ᵢ − x̄₁)²/(n₁−1)] / [Σᵢ (x₂ᵢ − x̄₂)²/(n₂−1)],
          nⱼ x̄ⱼ = Σᵢ₌₁^(nⱼ) xⱼᵢ  (j = 1, 2),

and F has an ordinary F distribution with n₁−1, n₂−1 d.f.; and
reject the hypothesis otherwise.

The usual procedure is to choose F₁ and F₂ such that

(1.2.2)   ∫₀^(F₁) p(F/1; n₁−1, n₂−1) dF = ∫_(F₂)^∞ p(F/1; n₁−1, n₂−1) dF = α/2,

where α is the given level of significance of the test.

If we do not impose the restriction (1.2.2), then the test
procedure given above depends on two quantities F₁ and F₂ which
we can choose in an infinite number of ways such that the level
of significance of the test is α. Equation (1.2.2) gives one
partition of the tail areas.
We shall choose the tail areas in such a way that the
resulting test has the monotonicity property.
This presumably can
be accomplished in various ways, one being as follows:
Under the alternative hypothesis let P denote the power of the
test procedure, i.e., let

(1.2.4)   P = 1 − ∫_(F₁′)^(F₂′) p(F/δ²; n₁−1, n₂−1) dF

denote the power of the test procedure based on F₁′ and F₂′, where

(1.2.5)   1 − α = ∫_(F₁′)^(F₂′) p(F/1; n₁−1, n₂−1) dF,

and δ² = σ₁²/σ₂² ≠ 1 is the deviation parameter. We shall choose
F₁′ and F₂′ such that, in addition to (1.2.5), we also have

   ∂P/∂δ² = 0 when δ² = 1, i.e., when σ₁² = σ₂².

This condition imposes a further restriction on F₁′ and F₂′ which,
together with (1.2.5), fixes F₁′ and F₂′.
We shall presently show that under this condition the test
procedure has the monotonicity property.
Now,

(1.2.6)   P = 1 − c ∫_(F₁′)^(F₂′) (F/δ²)^((n₁−3)/2) (1 + (n₁−1)F/((n₂−1)δ²))^(−(n₁+n₂−2)/2) dF/δ²

is the power function of the test procedure. Notice that c > 0 is
a pure constant independent of δ². Hence

(1.2.7)   ∂P/∂δ² = (c/δ^(n₁+1)) [ F₂′^((n₁−1)/2) (1 + (n₁−1)F₂′/((n₂−1)δ²))^(−(n₁+n₂−2)/2)
          − F₁′^((n₁−1)/2) (1 + (n₁−1)F₁′/((n₂−1)δ²))^(−(n₁+n₂−2)/2) ].

The condition ∂P/∂δ² = 0 when δ² = 1 is equivalent to
(1.2.8)   F₁′^((n₁−1)/2) (1 + (n₁−1)F₁′/(n₂−1))^(−(n₁+n₂−2)/2)
          = F₂′^((n₁−1)/2) (1 + (n₁−1)F₂′/(n₂−1))^(−(n₁+n₂−2)/2).

Now (1.2.7) reduces to

   ∂P/∂δ² = (c/δ^(n₁+1)) F₁′^((n₁−1)/2) (1 + (n₁−1)F₁′/((n₂−1)δ²))^(−(n₁+n₂−2)/2)
   · { [ (1 + (n₁−1)F₂′/(n₂−1)) (1 + (n₁−1)F₁′/((n₂−1)δ²))
       / ( (1 + (n₁−1)F₁′/(n₂−1)) (1 + (n₁−1)F₂′/((n₂−1)δ²)) ) ]^((n₁+n₂−2)/2) − 1 },

using (1.2.8). It is now easy to check that

   ∂P/∂δ² > 0 if δ² > 1, that is, if σ₁² > σ₂², and

   ∂P/∂δ² < 0 if δ² < 1.
Thus P, the power of the modified test, is a monotonic
increasing function of δ² if δ² > 1, and a monotonic decreasing
function of δ² if δ² < 1. Hence the modified test has the
monotonicity property. The values of F₁′ and F₂′ for α = .05 are
given in Table 2 (see appendix) for different values of n₁ and n₂.

It is worth noting that the results for the case of one
population can be derived from those for the case of two
populations by making n₂ large. Also it is worth noting that the
likelihood ratio criterion in either of these situations does not
give a test which has the monotonicity property. As a matter of
fact the tests based on the likelihood ratio criterion in the
two situations are each biased.
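The two-sample construction can also be checked numerically. The sketch below (with illustrative degrees of freedom; not part of the original) solves (1.2.5) and (1.2.8) for F₁′ and F₂′ by bisection, using only the central F density and Simpson's rule, and lets one verify that the resulting power 1 − ∫ over (F₁′/δ², F₂′/δ²) of the null density increases as δ² moves away from 1 on either side.

```python
import math

D1, D2 = 9, 12            # illustrative d.f.: n1 - 1 = 9, n2 - 1 = 12
ALPHA = 0.05

C_F = (math.gamma((D1 + D2) / 2) / (math.gamma(D1 / 2) * math.gamma(D2 / 2))
       * (D1 / D2) ** (D1 / 2))

def f_pdf(u):
    # central F density with (D1, D2) degrees of freedom
    return C_F * u ** (D1 / 2 - 1) * (1 + D1 * u / D2) ** (-(D1 + D2) / 2)

def simpson(f, a, b, steps=4000):
    # composite Simpson rule on (a, b)
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, steps))
    return s * h / 3

def h_crit(F):
    # log of F^((n1-1)/2) (1 + (n1-1)F/(n2-1))^(-(n1+n2-2)/2), the quantity
    # equalized by condition (1.2.8); it peaks at F = 1
    return (D1 / 2) * math.log(F) - ((D1 + D2) / 2) * math.log(1 + D1 * F / D2)

def upper_of(F1):
    lo, hi = 1.0, 1e4                  # bisect for F2 > 1 with h_crit(F2) = h_crit(F1)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h_crit(mid) > h_crit(F1) else (lo, mid)
    return 0.5 * (lo + hi)

def modified_limits():
    lo, hi = 1e-3, 1.0                 # coverage decreases as F1 grows
    for _ in range(80):
        F1 = 0.5 * (lo + hi)
        cover = simpson(f_pdf, F1, upper_of(F1))
        lo, hi = (F1, hi) if cover > 1 - ALPHA else (lo, F1)
    F1 = 0.5 * (lo + hi)
    return F1, upper_of(F1)

def power(F1, F2, delta2):
    # P(delta^2) = 1 - integral of the null density over (F1/delta^2, F2/delta^2)
    return 1 - simpson(f_pdf, F1 / delta2, F2 / delta2)
```

Evaluating power at a few points on each side of δ² = 1 exhibits the monotone behavior proved above.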
CHAPTER II

MULTIVARIATE TESTS ON COVARIANCE MATRICES

2.1 Case of one population. Let X(p x n) be a random sample
of size n from an N(ξ, Σ) population. To test the hypothesis
H₀: Σ = Σ₀, i.e., ΣΣ₀⁻¹ = I(p), against the alternative
ΣΣ₀⁻¹ = Γ(p x p) ≠ I(p), we have the following test procedure [18]:
accept H₀ if

(2.1.1)   ℓ₀ ≤ ℓᵢ ≤ ℓ₀′   (i = 1, …, p),

and reject H₀ otherwise, where ℓ₁, …, ℓₚ are the p characteristic
roots of (n−1)SΣ₀⁻¹, where (n−1)S = XX′ − n x̄ x̄′, x̄ being
the sample mean vector. ℓ₀ and ℓ₀′ are so chosen that

(2.1.2)   P(ℓ₀ ≤ ℓᵢ ≤ ℓ₀′ (i = 1, …, p) | H₀) = 1 − α,

α being the given level of significance of the test.

It is well known that under the null hypothesis the joint
p.d.f. of the roots is

(2.1.3)   Const. ∏ᵢ₌₁ᵖ ℓᵢ^((n−p−2)/2) e^(−ℓᵢ/2) ∏_(i>j) (ℓᵢ − ℓⱼ).
The test procedure given above depends on two quantities ℓ₀ and ℓ₀′
which we can choose in an infinite number of ways such that
the level of significance of the test is α. We shall choose ℓ₀
and ℓ₀′ such that in addition to (2.1.2) we have

(2.1.4)   [∂P/∂γᵢ]_(γ=1) = 0   (i = 1, …, p),

where γ₁, …, γₚ are the characteristic roots of Γ, and P
is the power of the test based on ℓ₀ and ℓ₀′. It is easy to
check, and it will be shown in (2.6), that the p equations (2.1.4)
are all equivalent each to the other, each being also equivalent
to [dP/dγ]_(γ₁ = ⋯ = γₚ = γ, γ = 1) = 0. Thus (2.1.4) imposes just one
further restriction on ℓ₀ and ℓ₀′, which, together with (2.1.2),
fixes ℓ₀ and ℓ₀′. We have shown that under this restriction the
resulting test will have the monotonicity property if all the γ's
are assumed to be equal and to stay equal.

It is conjectured that when the γ's are not equal, the test
based on ℓ₀ and ℓ₀′ will have the monotonicity property if either
all γ's are > 1 or if all are < 1.

The distribution problem connected with the test will be solved
in (2.5).
2.2 Case of two populations. Let X₁(p x n₁) and X₂(p x n₂) be
random samples of sizes n₁ and n₂ from N(ξ₁, Σ₁) and N(ξ₂, Σ₂)
respectively. To test the hypothesis H₀: Σ₁ = Σ₂, i.e.,
Σ₁Σ₂⁻¹ = I(p), against the alternative Σ₁Σ₂⁻¹ = Γ(p x p) ≠ I(p),
we have the following test procedure [18]: accept H₀ if

(2.2.1)   θ₀ ≤ θᵢ ≤ θ₀′   (i = 1, …, p),

where θ₁, …, θₚ are the p characteristic roots of the determinantal
equation |S₁ − θ(S₁ + cS₂)| = 0, c = (n₂−1)/(n₁−1), and
(nᵢ−1)Sᵢ = XᵢXᵢ′ − nᵢ x̄ᵢ x̄ᵢ′, x̄ᵢ being the sample mean vector of
the i-th population (i = 1, 2). θ₀ and θ₀′ are so chosen that

(2.2.2)   P(θ₀ ≤ θᵢ ≤ θ₀′ (i = 1, …, p) | H₀) = 1 − α,

α being the given level of significance of the test.

It is well known that the joint p.d.f. of the roots is

(2.2.3)   Const. ∏ᵢ₌₁ᵖ θᵢ^m (1 − θᵢ)^n ∏_(i>j) (θᵢ − θⱼ),

where m = (n₁−p−2)/2 and n = (n₂−p−2)/2, and 0 < θ₁ < ⋯ < θₚ < 1.
13
The test procedure given above depends on two quantities
-9
I
0
~O
and
which we can choose in an infinite number of ways such that the
level of significance of the test is a. We shall choose
.eo
I
and -90
such that in addition to (2.2.2),we have
~]
o"i
=0
(1 = 1" 2" ... , p) "
1=];
,
where "1" , •• , r p (=! ) are the characteristic roots of r,and P is
I
the power of the test based on ~O and ~O. It is easy to check
and it will be shown in (2.6) that the p equations (2.2.4) are all
equivalent to ~ (p)
07
rl ,
""
rp
i
= r-l ,,=1,
= O.
ThuB (2.2',4) imposes
just one further restriction on ~O and .eO which" together with (2.2,2)
fixes .eO and
•
~O.
We have shown that under this restriction the
resulting test will have the monotonicity property if all the r's are
assumed to be equal and to stay equal.
It is conjectured that when the rls are not equal" the test
based on
all
"I S
~O
and
,
~O
will have the monotonicity property i f either
are> 1 or if all are < 1.
To solve the distribution problem connected with the test we
need the joint distribution of the largest and smallest roots of
certain determinental equations. We shall derive in the next few
sections certain mathematical results which we shall use to obtain
the joint c.d.f. of the largest and smallest roots.
2.3 Properties of a certain special function. Let

(2.3.1)   β′(x, y; m_s,n_s; …; m₁,n₁)
   = ∫ₓ^y x_s^(m_s) (1−x_s)^(n_s) dx_s ∫ₓ^(x_s) x_(s−1)^(m_(s−1)) (1−x_(s−1))^(n_(s−1)) dx_(s−1) ⋯ ∫ₓ^(x₂) x₁^(m₁) (1−x₁)^(n₁) dx₁,

and let

   β[x, y; | m_s,n_s  ⋯  m₁,n₁ |
           |    ⋮            ⋮  |
           | m_s,n_s  ⋯  m₁,n₁ | ]

denote the corresponding s-th order pseudo-determinant, each of whose
s rows is (m_s,n_s), …, (m₁,n₁). The meaning of the pseudo-determinant
is made clear by considering, for illustration, the case of s = 3,
for which

(2.3.2)   β[x, y; ⋯] = β′(x,y; m₃,n₃; m₂,n₂; m₁,n₁) − β′(x,y; m₃,n₃; m₁,n₁; m₂,n₂)
   − β′(x,y; m₂,n₂; m₃,n₃; m₁,n₁) + β′(x,y; m₂,n₂; m₁,n₁; m₃,n₃)
   + β′(x,y; m₁,n₁; m₃,n₃; m₂,n₂) − β′(x,y; m₁,n₁; m₂,n₂; m₃,n₃).

In opening out the pseudo-determinant it is very important to
stick to the order of the factors indicated in the expansion on the
right side of (2.3.2), and to keep in mind that the factors are
non-commutative. It is also clear that the whole expression will be
zero if any two columns of the pseudo-determinant become equal.

Again let

(2.3.3)   β(x, y; m, n) = ∫ₓ^y x^m (1−x)^n dx,

so that β′(x, y; m, n) = β(x, y; m, n). Also let

(2.3.4)   β₀(x; m, n) = x^m (1−x)^n.

Using (2.3.3), we can write (2.3.2) as

(2.3.5)   Σ (−1)^r β′(x, y; m₃′,n₃′; m₂′,n₂′; m₁′,n₁′),

where (m₃′,n₃′; m₂′,n₂′; m₁′,n₁′) is any permutation of
(m₃,n₃; m₂,n₂; m₁,n₁), the summation is taken over all such
permutations, and r is the total number of inversions of the order
of the subscripts in (m₃′,n₃′; m₂′,n₂′; m₁′,n₁′). Care is to be
taken to preserve on the right side of (2.3.2) the order of the
operations with regard to the x's from x₃ through x₂ to x₁.

Similarly, (2.3.1) can be rewritten as

(2.3.6)   β[x, y; ⋯] = Σ (−1)^r β′(x, y; m_s′,n_s′; …; m₁′,n₁′),

where (m_s′,n_s′; …; m₁′,n₁′) is any permutation of
(m_s,n_s; …; m₁,n₁), the summation is taken over all such
permutations, r being the total number of inversions of the order of
the subscripts in (m_s′,n_s′; …; m₁′,n₁′). Care is to be taken to
preserve the order of the operations with regard to the x's from x_s
through x_(s−1), …, to x₁.
Lemma 1.

(2.3.7)   (m+n+1) ∫_(x₀)^(y₀) x^m (1−x)^n f(x, x₀) dx
   = x₀^m (1−x₀)^(n+1) f(x₀, x₀) − y₀^m (1−y₀)^(n+1) f(y₀, x₀)
   + ∫_(x₀)^(y₀) x^m (1−x)^(n+1) f′(x, x₀) dx
   + m ∫_(x₀)^(y₀) x^(m−1) (1−x)^n f(x, x₀) dx.

Proof: The result follows immediately by integration by parts,
integrating (1−x)^n and differentiating x^m f(x, x₀). We
shall assume here that m, n > −1, 0 ≤ x₀ < y₀ ≤ 1, and f(x, x₀) is
such that f′(x, x₀) and the two integrals on the right side of
(2.3.7) all exist.
Lemma 2.

(2.3.8)   Σ β′(x, y; m_s′,n_s′; m_(s−1)′,n_(s−1)′; …; m₁′,n₁′) = ∏ᵢ₌₁ˢ β(x, y; mᵢ, nᵢ),

where on the left side (m_s′,n_s′; …; m₁′,n₁′) is any permutation of
(m_s,n_s; …; m₁,n₁), the summation being taken over all such
permutations.

Proof: The mechanism of the proof will be evident if we consider,
for simplicity of algebra, the case of s = 2. We have

(2.3.9)   β′(x,y; m₂,n₂; m₁,n₁) + β′(x,y; m₁,n₁; m₂,n₂)
   = ∫ₓ^y x₂^(m₂)(1−x₂)^(n₂) dx₂ ∫ₓ^(x₂) x₁^(m₁)(1−x₁)^(n₁) dx₁
   + ∫ₓ^y x₂^(m₂)(1−x₂)^(n₂) dx₂ ∫_(x₂)^y x₁^(m₁)(1−x₁)^(n₁) dx₁

(obtained by interchanging, in the second term on the left side of
(2.3.9), the variables x₂ and x₁, and rewriting the domain of
integration in the appropriate manner)

   = ∫ₓ^y x₂^(m₂)(1−x₂)^(n₂) dx₂ · ∫ₓ^y x₁^(m₁)(1−x₁)^(n₁) dx₁
   = β(x, y; m₂,n₂) β(x, y; m₁,n₁).
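For s = 2, Lemma 2 simply says that the two orderings x ≤ x₁ ≤ x₂ ≤ y and x ≤ x₂ ≤ x₁ ≤ y of the nested integral together fill out the square, so their sum is the product of the one-dimensional β integrals. The sketch below checks (2.3.8) numerically for s = 2 with Simpson's rule; the limits and exponents are hypothetical values chosen only for the check.

```python
def simpson(f, a, b, steps=400):
    # composite Simpson rule on (a, b)
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, steps))
    return s * h / 3

def beta1(x, y, m, n):
    # beta(x, y; m, n) of (2.3.3)
    return simpson(lambda t: t**m * (1 - t)**n, x, y)

def beta_nested(x, y, m2, n2, m1, n1):
    # beta'(x, y; m2,n2; m1,n1): the outer variable x2 runs over (x, y),
    # the inner variable x1 over (x, x2)
    return simpson(lambda t: t**m2 * (1 - t)**n2 * beta1(x, t, m1, n1), x, y)

x, y = 0.2, 0.9                         # hypothetical limits, 0 <= x < y <= 1
m2, n2, m1, n1 = 3.5, 2.0, 1.5, 4.0     # hypothetical exponents, all > -1

lhs = beta_nested(x, y, m2, n2, m1, n1) + beta_nested(x, y, m1, n1, m2, n2)
rhs = beta1(x, y, m2, n2) * beta1(x, y, m1, n1)
```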
Lemma 3.

(2.3.10)   Σ_(r=1)^s β_r′( ) = β(x, y; m, n) β′(x, y; m_(s−1),n_(s−1); …; m₁,n₁),

where β_r′( ) is the result of putting (m, n) in the r-th place
and filling up the other positions with (m_(s−1),n_(s−1)), …, (m₁,n₁);
r going from 1 to s. Note that each β_r′( ) is an s-fold integral,
while β′(x, y; m_(s−1),n_(s−1); …; m₁,n₁) is an (s−1)-fold integral.

Proof: The mechanism of the proof is brought out by considering, in
particular, the case s = 3, where we have

(2.3.11)   β′(x,y; m,n; m₂,n₂; m₁,n₁) + β′(x,y; m₂,n₂; m,n; m₁,n₁)
   + β′(x,y; m₂,n₂; m₁,n₁; m,n)
   = β(x, y; m, n) β′(x, y; m₂,n₂; m₁,n₁)

(by interchanging the variables and adjusting the domain of
integration, so that for fixed x₁ ≤ x₂ the variable carrying the
index (m, n) sweeps out the whole interval (x, y)).
Lemma 4.

(2.3.12)   Σ_(r=1)^s β_r[x, y; ⋯] = Σ_(r=1)^s (−1)^(r−1) β(x, y; m_(s−r+1), n_(s−r+1)) β_rr[x, y; ⋯],

where β_r[ ] on the left side is the result of replacing the r-th
row of β[x, y; ⋯] by (m_s,n_s; …; m₁,n₁), and β_rr[ ] on the right
side is the result of suppressing the r-th row and r-th column of
β[ ]. Notice that β_r[ ] is an s x s, and β_rr[ ] an
(s−1) x (s−1), pseudo-determinant.

Proof: The proof will be obvious if we consider, for simplicity of
algebra, the case of s = 3, and pick out from the expansion of each
pseudo-determinant on the left side of (2.3.12) (for the case s = 3)
the terms involving the index, say, (m₃,n₃), and then put together
all such terms (with index m₃,n₃). We have thus the following
contribution from such terms:

(2.3.13)   β′(x,y; m₃,n₃; m₂,n₂; m₁,n₁) − β′(x,y; m₃,n₃; m₁,n₁; m₂,n₂)
   + β′(x,y; m₂,n₂; m₃,n₃; m₁,n₁) − β′(x,y; m₁,n₁; m₃,n₃; m₂,n₂)
   + β′(x,y; m₂,n₂; m₁,n₁; m₃,n₃) − β′(x,y; m₁,n₁; m₂,n₂; m₃,n₃)

   = β(x,y; m₃,n₃) [ β′(x,y; m₂,n₂; m₁,n₁) − β′(x,y; m₁,n₁; m₂,n₂) ]
   (using Lemma 3)

   = β(x,y; m₃,n₃) β₁₁[x, y; | m₂,n₂  m₁,n₁ |
                              | m₂,n₂  m₁,n₁ | ]
   (by definition).

This shows that if, in the general case, from the expansion of each
pseudo-determinant (with the proper sign) on the left side of
(2.3.12) we pick out the terms with the index (m_s′,n_s′) and add
together such terms (with the same index), we shall have the
following contribution:

(2.3.14)   β(x, y; m_s,n_s) β₁₁[x, y; | m_(s−1),n_(s−1)  ⋯  m₁,n₁ |
                                       |       ⋮                ⋮  |
                                       | m_(s−1),n_(s−1)  ⋯  m₁,n₁ | ],

whence the proof is obvious by combining together the corresponding
expressions like (2.3.14) involving the different indices
(m_r, n_r), r = 1, …, s.
2.4 Reduction and evaluation of the integral

   β[x, y; | m_s,n  ⋯  m₁,n |
           |   ⋮          ⋮  |
           | m_s,n  ⋯  m₁,n | ],

where m_s > ⋯ > m₁ > −1, n > −1, and the m's differ by integers.
We have already seen from (2.3.6) that the pseudo-determinant can
be expanded into Σ (−1)^r β′(x, y; m_s′,n; …; m₁′,n), where
(m_s′, …, m₁′) is any permutation of (m_s, …, m₁), the summation
is taken over all such permutations, s! in number, and r is the
total number of inversions of the order of the subscripts in
(m_s′, …, m₁′). Recalling from (2.3.2) that β will be zero if any
two columns of the pseudo-determinant become equal, let us try to
reduce m_s to m_(s−1) by successive integration by parts. To this
end consider the typical term in the expansion. The largest exponent
in this will of course be m_s. To reduce this by 1 we proceed as
follows:
We have

(2.4.2)   β′(x, y; …; m_(r+1),n; m_s,n; m_(r−1),n; …; m₁,n)
   = ∫ ⋯ ∫ₓ^(x_(r+1)) x_r^(m_s) (1−x_r)^n β′(x, x_r; m_(r−1),n; …; m₁,n) dx_r,

the exponent m_s standing in the r-th place. Using (2.3.7) on the
x_r-integral, we get

(2.4.3)   (m_s+n+1) ∫ₓ^(x_(r+1)) x_r^(m_s) (1−x_r)^n β′(x, x_r; m_(r−1),n; …; m₁,n) dx_r
   = x^(m_s) (1−x)^(n+1) β′(x, x; m_(r−1),n; …; m₁,n)
   − x_(r+1)^(m_s) (1−x_(r+1))^(n+1) β′(x, x_(r+1); m_(r−1),n; …; m₁,n)
   + ∫ₓ^(x_(r+1)) x_r^(m_s+m_(r−1)) (1−x_r)^(2n+1) β′(x, x_r; m_(r−2),n; …; m₁,n) dx_r
   + m_s ∫ₓ^(x_(r+1)) x_r^(m_s−1) (1−x_r)^n β′(x, x_r; m_(r−1),n; …; m₁,n) dx_r

(note that on the right side of (2.4.3) the first term vanishes for
r ≥ 2 since β′(x, x; ⋯) = 0, the second and third terms are each
absorbed into an (r−1)-fold integral, while the fourth term is an
r-fold integral). Now using (2.4.3), we have (2.4.2) reducing to

(2.4.4)   (m_s+n+1) β′(x, y; …; m_(r+1),n; m_s,n; m_(r−1),n; …; m₁,n)
   = − β′(x, y; …; m_(r+2),n; m_(r+1)+m_s, 2n+1; m_(r−1),n; …; m₁,n)
   + β′(x, y; …; m_(r+1),n; m_s+m_(r−1), 2n+1; m_(r−2),n; …; m₁,n)
   + m_s β′(x, y; …; m_(r+1),n; m_s−1,n; m_(r−1),n; …; m₁,n),

where the first and second β′'s are each an (s−1)-fold integral,
while the third β′ is an s-fold integral with index reduced to
m_s − 1. It is easy to check that this reduction holds for
r = s−1, …, 2. If r = s, it is easy to check that (2.4.4) will be
replaced by

(2.4.5)   (m_s+n+1) β′(x, y; m_s,n; m_(s−1)′,n; …; m₁′,n)
   = − β₀(y; m_s, n+1) β′(x, y; m_(s−1)′,n; …; m₁′,n)
   + β′(x, y; m_(s−1)′+m_s, 2n+1; m_(s−2)′,n; …; m₁′,n)
   + m_s β′(x, y; m_s−1,n; m_(s−1)′,n; …; m₁′,n),

and, if r = 1, (2.4.4) will be replaced by

(2.4.6)   (m_s+n+1) β′(x, y; m_s′,n; …; m₂,n; m_s,n)
   = β₀(x; m_s, n+1) β′(x, y; m_s′,n; …; m₂,n)
   − β′(x, y; m_s′,n; …; m₃,n; m₂+m_s, 2n+1)
   + m_s β′(x, y; m_s′,n; …; m₂,n; m_s−1,n).
We shall now introduce the rather convenient notations

(2.4.7)   β′(x, y; …; m_(r+1),n; ←(m_s, n+1); m_(r−1),n; …; m₁,n)
   = β′(x, y; …; m_(r+1)+m_s, 2n+1; m_(r−1),n; …; m₁,n),

where ←(m_s, n+1) is supposed to be added to the (m_(r+1), n) on its
left, so as to reduce the integral by one dimension;

(2.4.8)   β′(x, y; ←←(m_s, n+1); m_(s−1),n; …; m₁,n)
   = β₀(y; m_s, n+1) β′(x, y; m_(s−1),n; …; m₁,n);

(2.4.9)   β′(x, y; …; (m_s, n+1)→; m_(r−1),n; …; m₁,n)
   = β′(x, y; …; m_(r−1)+m_s, 2n+1; m_(r−2),n; …; m₁,n),

where (m_s, n+1)→ is supposed to be added to the (m_(r−1), n) on its
right, so as to reduce the integral by one dimension; and

(2.4.10)   β′(x, y; …; m₂,n; (m_s, n+1)→→)
   = β₀(x; m_s, n+1) β′(x, y; …; m₂,n).

In this notation, applying (2.4.4), (2.4.5) and (2.4.6) to every
term of the expansion (2.3.6) and collecting the results, we get

(2.4.11)   β[x, y; m_s,n ⋯ m₁,n (s identical rows)]
   = − (1/(m_s+n+1)) β[x, y; ←←(m_s,n+1) m_(s−1),n ⋯ m₁,n (s identical rows)]
   + (1/(m_s+n+1)) β[x, y; m_(s−1),n ⋯ m₁,n (m_s,n+1)→→ (s identical rows)]
   + (m_s/(m_s+n+1)) β[x, y; m_s−1,n m_(s−1),n ⋯ m₁,n (s identical rows)].
Recalling the notations introduced earlier, and using Lemma 4,
it is easy to see that

(2.4.12)   β[x, y; ←←(m_s,n+1) m_(s−1),n ⋯ m₁,n (s identical rows)]
   = β₀(y; m_s, n+1) β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]
   + Σ_(r=1)^(s−1) (−1)^r β_r[x, y; ⋯],

where β_r[ ] is the (s−1)-fold pseudo-determinant obtained by
substituting (m_s+m_(s−1), 2n+1), …, (m_s+m₁, 2n+1) in the r-th row
of the (s−1)-fold pseudo-determinant
β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]. Thus (2.4.12) is
equal to

   β₀(y; m_s, n+1) β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]
   + Σ_(r=1)^(s−1) (−1)^r β(x, y; m_s+m_(s−r), 2n+1) β_rr[x, y; ⋯],

where β_rr[ ] is the (s−2)-fold integral obtained by suppressing
the r-th row and r-th column of the (s−1)-fold pseudo-determinant.

Similarly

(2.4.13)   β[x, y; m_(s−1),n ⋯ m₁,n (m_s,n+1)→→ (s identical rows)]
   = (−1)^(s−1) β₀(x; m_s, n+1) β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]
   + Σ_(r=1)^(s−1) (−1)^(r−1) β(x, y; m_s+m_(s−r), 2n+1) β_rr[x, y; ⋯].

Using (2.4.12) and (2.4.13), we find that (2.4.11) becomes equal to
(2.4.14)   β[x, y; m_s,n ⋯ m₁,n (s identical rows)]
   = − (1/(m_s+n+1)) { β₀(y; m_s, n+1) + (−1)^s β₀(x; m_s, n+1) }
     · β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]
   + (2/(m_s+n+1)) Σ_(r=1)^(s−1) (−1)^(r−1) β(x, y; m_s+m_(s−r), 2n+1) β_rr[x, y; ⋯]
   + (m_s/(m_s+n+1)) β[x, y; m_s−1,n m_(s−1),n ⋯ m₁,n (s identical rows)].

Notice that the expression on the left side of (2.4.11) is an
s-th order pseudo-determinant, while on the right side of (2.4.14)
the first group of terms involves β_rr[ ], each such β_rr[ ] being
an (s−2)-th order pseudo-determinant, the β₀ terms multiply an
(s−1)-th order pseudo-determinant, and the last term has a β[ ]
which is an s-th order pseudo-determinant. It may also be noticed
that β_rr[ ] is the pseudo-determinant on the indices
(m_(s−1),n), …, (m₁,n) with (m_(s−r),n) omitted.

(2.4.14) thus gives a recurrence relation, whereby, proceeding along
the chain and reducing m_s to m_(s−1) (in which case the
pseudo-determinant will vanish), we have the following reduction of
the integral by one dimension:

(2.4.15)   β[x, y; m_s,n ⋯ m₁,n (s identical rows)]
   = − Σ_(r′=1)^(m_s−m_(s−1)) ((m_s)_(r′−1)/(m_s+n+1)_(r′))
     { β₀(y; m_s−r′+1, n+1) + (−1)^s β₀(x; m_s−r′+1, n+1) }
     · β[x, y; m_(s−1),n ⋯ m₁,n ((s−1) identical rows)]
   + 2 Σ_(r=1)^(s−1) Σ_(r′=1)^(m_s−m_(s−1)) (−1)^(r−1) ((m_s)_(r′−1)/(m_s+n+1)_(r′))
     β(x, y; m_s+m_(s−r)−r′+1, 2n+1) β_rr[x, y; ⋯],

where (m)_p = m(m−1) ⋯ (m−p+1).

The s-th order pseudo-determinant is thus thrown back on (s−1)-th
and (s−2)-th order pseudo-determinants, and so on, till we get to
first order pseudo-determinants, which are easily evaluated from
the incomplete beta function tables [13].
2.5 On the joint c.d.f. of the largest and smallest roots.

We have noticed in (2.2) that in testing for the equality of
covariance matrices from two multivariate normal populations we run
into the joint distribution of the largest and smallest roots of the
determinantal equation |S₁ − θ(S₁ + cS₂)| = 0, where
c = (n₂−1)/(n₁−1) and (nᵢ−1)Sᵢ = XᵢXᵢ′ − nᵢ x̄ᵢ x̄ᵢ′ (i = 1, 2).
Note that S₁ and S₂ will be a.e. p.d. Now

(2.5.1)   P(0 < θ₀ ≤ θᵢ ≤ θ₀′ < 1 (i = 1, …, p))
   = c(p,m,n) ∫ ⋯ ∫_(θ₀ ≤ θ₁ ≤ ⋯ ≤ θₚ ≤ θ₀′) ∏ᵢ₌₁ᵖ θᵢ^m (1−θᵢ)^n ∏_(i>j) (θᵢ−θⱼ) dθ₁ ⋯ dθₚ,

where m = (n₁−p−2)/2, n = (n₂−p−2)/2, and c(p,m,n) is the
normalizing constant of the joint density (2.2.3). On expanding
∏_(i>j)(θᵢ−θⱼ), the right side becomes a p-th order
pseudo-determinant of the kind studied in (2.3) and (2.4).

Now, using the results given in (2.3) and (2.4), it is easy to
obtain the final reduction for the exact c.d.f. of the largest and
smallest roots. For p = 2,

(2.5.2)   P₂ = P(θ₀ ≤ θᵢ ≤ θ₀′)
   = (c(2,m,n)/(m+n+2)) { 2β(θ₀,θ₀′; 2m+1, 2n+1)
     − [ β₀(θ₀′; m+1, n+1) + β₀(θ₀; m+1, n+1) ] β(θ₀,θ₀′; m,n) }.
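Since the reduction for p = 2 is only a rearrangement of the ordered double integral in (2.5.1), it can be checked directly. The sketch below compares the combination of β and β₀ functions in (2.5.2) (as reconstructed here, with the constant c(2,m,n) omitted from both sides) against straightforward numerical integration, for illustrative values of m, n, θ₀ and θ₀′.

```python
def simpson(f, a, b, steps=600):
    # composite Simpson rule on (a, b)
    h = (b - a) / steps
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, steps))
    return s * h / 3

def beta_inc(x, y, m, n):
    # beta(x, y; m, n) of (2.3.3)
    return simpson(lambda t: t**m * (1 - t)**n, x, y)

def beta0(x, m, n):
    # beta_0(x; m, n) of (2.3.4)
    return x**m * (1 - x)**n

m, n = 2.5, 3.0       # illustrative values of m, n
x, y = 0.1, 0.8       # illustrative values of theta_0, theta_0'

# direct evaluation of the ordered double integral behind (2.5.1) for p = 2
direct = simpson(
    lambda t: t**m * (1 - t)**n
              * simpson(lambda u: u**m * (1 - u)**n * (t - u), x, t),
    x, y)

# the reduction (2.5.2), with the constant c(2, m, n) omitted on both sides
reduced = (2 * beta_inc(x, y, 2*m + 1, 2*n + 1)
           - (beta0(y, m + 1, n + 1) + beta0(x, m + 1, n + 1))
             * beta_inc(x, y, m, n)) / (m + n + 2)
```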
For p = 3 and p = 4 the same reduction applies. The resulting
expressions, (2.5.3) and (2.5.4), are built up in the same way from
the functions β and β₀; (2.5.3) involves β(θ₀,θ₀′; 2m+3, 2n+1) and
β(θ₀,θ₀′; m+1, n), while (2.5.4), with constant c(4,m,n) and
denominator (m+n+4), involves β(θ₀,θ₀′; 2m+5, 2n+1), the boundary
terms β₀(θ₀′; m+3, n+1) + β₀(θ₀; m+3, n+1), and the expression P₂
of (2.5.2).

The joint c.d.f. of the largest and smallest roots in the
case of one population can easily be obtained from the results
given in (2.5) by making n₂ large.

It is worth noting at this point that the c.d.f. of the
largest or smallest root alone can easily be obtained from the
expressions (2.5.2) - (2.5.4). The joint c.d.f. of the largest and
smallest roots for p > 4 can easily be obtained from (2.4.14). But
since the expressions are lengthy, they are not given here.
2.6 Power function of the test procedure given in (2.2).

It has been shown in [19] that the second kind of error of
the test procedure given in (2.2.1) is

(2.6.1)   β = Const. ∫_D ∏ᵢ₌₁ᵖ γᵢ^(−(n₁−1)/2) Exp[ −(1/2) tr { D_γ⁻¹ Z₁Z₁′ + Z₂Z₂′ } ] dZ₁ dZ₂,

where D_γ = diag(γ₁, …, γₚ). We shall show that when
γ₁ = ⋯ = γₚ = γ, i.e., Σ₁Σ₂⁻¹ = γ I(p),

(2.6.2)   ∂β/∂γ < 0 if γ > 1   and   ∂β/∂γ > 0 if γ < 1,

provided that we choose θ₀ and θ₀′ in such a way that in addition
to (2.2.2) we also have

(2.6.3)   [∂β/∂γ]_(γ=1) = 0.

In the case where the γ's are not equal, it is conjectured that

(2.6.4)   ∂β/∂γᵢ < 0 if γ_min > 1   and   ∂β/∂γᵢ > 0 if γ_max < 1,

provided that we choose θ₀ and θ₀′ in such a way that in addition to
(2.2.2) we also have

(2.6.5)   [∂β/∂γᵢ]_(γ₁ = ⋯ = γₚ = 1) = 0   (i = 1, …, p).
40
There are p equations in (2.6.5). We shall show that these p
equations are all equivalent each to the other, each being equivalent
.
~ ~
to
~
07
7
ll,r2 , ••• ,7p=7- 7=
further restriction on
~O
I
~O.
and
to just one
Thus (2.6.5) imposes just one
l
and
t
.1
~O
which, together with (2.2.2),fiKes
To prove that the p equations (2.6.5) are all equivalent
condition~ -77=1
Differentiating
~1
(2.6.6)
~O
1 • O.
c
~
• '0, we proceed as follows:
Differentiating β given in (2.6.1) with respect to γ_i, we get,

(2.6.7)   ∂β/∂γ_i = Const. ∫_D { ⋅ }_i ∏_j γ_j^{-(n₁+1)/2}
              exp[-(1/2) tr{D_{1/γ} Z₁Z₁′ + Z₂Z₂′}] dZ₁ dZ₂.

Now make use of the transformation (2.6.8) given in [19].
Under this transformation, after integrating out L₁ and L₂, we get

(2.6.9)   ∂β/∂γ₁ = Const. ∫_D exp[-(1/2) tr{U D_{1+θ} U′}] { ⋅ }₁
              |U|^{n₁+n₂-p} ∏_i θ_i^m dθ_i ∏_{i>j} (θ_i − θ_j) dU,

where D is the domain θ₀/γ ≤ θ_p ≤ ⋯ ≤ θ₁ ≤ θ̄₀/γ,
−∞ < u_ij < ∞.  Similarly

(2.6.10)  ∂β/∂γ_i = Const. ∫_D exp[-(1/2) tr{U D_{1+θ} U′}] { ⋅ }_i
              |U|^{n₁+n₂-p} ∏_j θ_j^m dθ_j ∏_{j>l} (θ_j − θ_l) dU.

Now since the u_ij's are to be integrated out in the region
−∞ < u_ij < ∞, and since the expressions { ⋅ }₁ and { ⋅ }_i in
curly brackets on the right side of (2.6.9) and (2.6.10) are
symmetric in the θ's, it is evident that

(2.6.11)  ∂β/∂γ₁ |_{γ=1} = ∂β/∂γ_i |_{γ=1},   (i = 2, ..., p).

Hence the p equations ∂β/∂γ_i |_{γ=1} = 0 (i = 1, ..., p) are
all equivalent.
Now adding the p equations, we get,

(2.6.12)  (∂β/∂γ₁ + ⋯ + ∂β/∂γ_p) |_{γ=1}
            = Const. ∫_D exp[-(1/2) tr{U D_{1+θ} U′}] { ⋅ } dθ dU.

Now when γ₁ = ⋯ = γ_p = γ, we have,

(2.6.13)  ∂β/∂γ |_{γ=1} = (∂β/∂γ₁ + ⋯ + ∂β/∂γ_p) |_{γ=1}.

Hence ∂β/∂γ_i |_{γ=1} = 0 (i = 1, ..., p) is equivalent to
∂β/∂γ |_{γ=1} = 0.
We shall now prove the following theorem:

Theorem 1.  If in the set up given in (2.2) all the γ's are
equal and equal to γ, say, then the power function P of the test
procedure given in (2.2) will be a monotonic increasing function
of γ if γ > 1, and a monotonic decreasing function of γ if γ < 1,
provided that we choose θ₀ and θ̄₀ in such a way that in addition
to (2.2.2), we also have ∂P/∂γ |_{γ=1} = 0.

Proof:
From (2.6.1), the second kind of error of the test procedure
is

(2.6.14)  β = Const. ∫_D ∏_i γ_i^{-(n₁+1)/2}
              exp[-(1/2) tr{D_{1/γ} Z₁Z₁′ + Z₂Z₂′}] dZ₁ dZ₂.

Now using the transformation given in (2.6.8) and integrating
out L₁, L₂ and U, we get,

(2.6.15)  β = Const. ∫_{D₁} ∏_{i=1}^p θ_i^m (1+θ_i)^{-n} ∏_{i>j} (θ_i − θ_j) dθ₁ ⋯ dθ_p
            = Const. ∫_{D₁} f(θ₁, ..., θ_p) dθ₁ ⋯ dθ_p   (say)

(where n = (n₁+n₂+p+1)/2), D₁ being the domain
θ₀/γ ≤ θ_p ≤ ⋯ ≤ θ₁ ≤ θ̄₀/γ.
Now it is easy to verify that

(2.6.16)  ∂β/∂γ = (c/γ) [ θ₀ (θ₀/γ)^m (1+θ₀/γ)^{-n} I₁(γ)
                         − θ̄₀ (θ̄₀/γ)^m (1+θ̄₀/γ)^{-n} I₂(γ) ]

(where c is positive), I₁(γ) and I₂(γ) being the (p−1)-fold
integrals obtained from f(θ₁, ..., θ_p) by putting θ_p = θ₀/γ and
θ₁ = θ̄₀/γ respectively:

(2.6.17)  I₁(γ) = ∫ ⋯ ∫ f(θ₁, ..., θ_{p-1}, θ₀/γ) / [(θ₀/γ)^m (1+θ₀/γ)^{-n}] dθ₁ ⋯ dθ_{p-1},

          I₂(γ) = ∫ ⋯ ∫ f(θ̄₀/γ, θ₂, ..., θ_p) / [(θ̄₀/γ)^m (1+θ̄₀/γ)^{-n}] dθ₂ ⋯ dθ_p   (say),

the integrations extending over θ₀/γ ≤ θ_{p-1} ≤ ⋯ ≤ θ₁ ≤ θ̄₀/γ
and θ₀/γ ≤ θ_p ≤ ⋯ ≤ θ₂ ≤ θ̄₀/γ respectively.
The proof of the theorem will be complete if we show that

     ∂β/∂γ < 0  if γ > 1   and   ∂β/∂γ > 0  if γ < 1,

subject to the condition

(2.6.18)  ∂β/∂γ |_{γ=1} = 0.

Condition (2.6.18) is equivalent to

(2.6.19)  θ₀^{m+1} (1+θ₀)^{-n} I₁(1) − θ̄₀^{m+1} (1+θ̄₀)^{-n} I₂(1) = 0.
Hence the proof will be complete if we show that, for γ > 1,

     θ₀^{m+1} (1+θ₀/γ)^{-n} I₁(γ) < θ̄₀^{m+1} (1+θ̄₀/γ)^{-n} I₂(γ),

with the inequality reversed for γ < 1.  Now if γ > 1,

     (1+θ̄₀/γ)/(1+θ₀/γ) < (1+θ̄₀)/(1+θ₀),

and if γ < 1,

     (1+θ̄₀/γ)/(1+θ₀/γ) > (1+θ̄₀)/(1+θ₀).

Thus, in view of (2.6.19), the proof will be complete if we
show that I₁(γ) is an increasing function of γ and I₂(γ) is a
decreasing function of γ.
Now, differentiating the (p−1)-fold integrals I₁(γ) and I₂(γ)
under the integral sign, it is easily verified that

     ∂I₁(γ)/∂(1/γ) < 0   and   ∂I₂(γ)/∂(1/γ) > 0.
Hence I₁(γ) is a decreasing function of 1/γ, and hence is an
increasing function of γ; and I₂(γ) is an increasing function of
1/γ, and hence is a decreasing function of γ.  Hence the theorem.

Making n₂ large, we get, as a corollary to theorem 1, the
following theorem:

Theorem 2.  If in the set up given in (2.1), all the γ's are
equal and equal to γ, say, then the power function P of the test
procedure given in (2.1) will be a monotonic increasing function
of γ if γ > 1 and a monotonic decreasing function of γ if γ < 1,
provided we choose θ₀ and θ̄₀ in such a way that in addition to
(2.1.2), we also have ∂P/∂γ |_{γ=1} = 0.
CHAPTER III
TUKEY TEST ON THE EQUALITY OF MEANS AND
HARTLEY TEST ON THE EQUALITY OF VARIANCES
3.1 Introduction.  In testing the equality of means of k
univariate normal populations with a common but unknown variance
σ², Fisher proposed the analysis of variance z test or the
equivalent F test based on the ratio of two independent mean
squares.  This test has several optimum properties, including that
of the monotonicity of its power function.  As an alternative
procedure Tukey [23] proposed the short cut test based on the
Studentized range q.  This test has the advantage that it is
rather easy to carry out from the arithmetical point of view.  We
have shown that the q test is completely unbiased, but its power
function does not have the monotonicity property.  Another feature
of this test is that its power depends on k − 1 parameters,
whereas, in the case of the anova F test, the power depends only
on one simple function of the deviation parameters.  Moreover it
is well known [17], [21] that the anova F test is an all contrast
test for the means, whereas the q test is a test built around all
two by two differences of the means and hence is a test related to
a subset of all contrasts.  It is also well known that the anova F
test is a likelihood ratio test.

When analysing data the experimenter is frequently faced with
the necessity of testing the homogeneity in a set of estimated
variances.  When it is desired to combine a number of variances to
obtain an estimate of the common variance, it is necessary to
apply such a test.  For general use in such cases Neyman and
Pearson [12] have suggested a test, namely, the L₁ test.  The L₁
test has been modified by Bartlett [1].  The exact distribution
problem connected with the likelihood ratio criterion as well as
the modified criterion of Bartlett is rather difficult.  Several
approximations have been suggested for the distribution of the
likelihood ratio criterion [2], [9] and percentage points
tabulated [22].  G. W. Brown [3] has shown that the power function
of the likelihood ratio criterion is unbiased when the d.f. of all
the sample variances are equal.  Also he has shown that when the
d.f. are not equal the likelihood ratio criterion is biased but is
unbiased in the limit.  It is easy to check that the likelihood
ratio criterion test does not have the monotonicity property.
Also it is known that the power of the likelihood ratio criterion
depends on k − 1 parameters.

Cochran [4] has suggested for use in these situations a
rather simple test based on w = s²_max / (s₁² + ⋯ + s_k²), where
s₁², ..., s_k² are the k sample variances with n d.f. each.  The
distribution of w has been tabulated [7].  No study of the power
function of this test is yet available.
Hartley [10] has intuitively suggested for use in these
situations a test based on the statistic F_max = s²_max / s²_min.
He recommends the short cut test based on F_max when each s_i² is
based on the same number of d.f.  It is easy to check that the
Hartley F_max ratio acceptance region is the intersection of the
acceptance regions based on all the k(k−1)/2 two by two variance
ratio F's.  The distribution problem connected with the F_max
ratio test has been solved and percentage points tabulated [5].
We have shown that the F_max ratio test is completely unbiased,
but its power function does not have the monotonicity property.
Also the power of the F_max test depends on k − 1 parameters.

An all contrast test in the case of testing for the equality
of several variances is unknown.  The Hartley F_max ratio test,
whose power properties we shall study in detail, being a test
built around two by two ratios of variances, is a test related to
a subset of all contrasts.  An all contrast for k variances will
be ∏_{i=1}^k σ_i^{2c_i}, subject to Σ_{i=1}^k c_i = 0.
3.2 The Tukey q test.  Let x_ij (i = 1, ..., k; j = 1, ..., n) be
k independent samples of size n from N(ξ_i, σ²) (i = 1, ..., k)
populations.  Also let s² be an independent estimate of σ² based
on m d.f. (say, the error mean square in anova).  To test the
hypothesis H₀: ξ₁ = ⋯ = ξ_k, the anova F test of Fisher, which is
also equivalent to an all contrast test of the form
Σ_{i=1}^k c_i ξ_i = 0 for all c_i subject to Σ_{i=1}^k c_i = 0, is
well known.  In this situation it is also well known that if the
null hypothesis is not true, then the power of the test would
involve as a deviation parameter only the quantity
η² = n Σ_{i=1}^k (ξ_i − ξ̄)² / σ², where ξ̄ = Σ_{i=1}^k ξ_i / k.
Also it is well known that the power of the test is a monotonic
increasing function of |η|.  We shall show that an alternative
test for the equality of several means from normal populations
with a common variance σ², based on the Studentized range q, which
is much simpler than the anova F test from the computational point
of view, has the following properties.  The power of the test
would involve as parameters the k − 1 differences
η_{i−1} = ξ₁ − ξ_i (i = 2, ..., k).  The test is completely
unbiased, but the power function does not have the monotonicity
property.
3.3 Power function of the q test.  Under the set up given in
(3.2), let x̄_i = Σ_{j=1}^n x_ij / n, which is N(ξ_i, σ²/n).  Let
s² be an independent and unbiased estimate of σ² (say, the error
mean square in anova) with m d.f.  The hypothesis
H₀: ξ₁ = ⋯ = ξ_k is equivalent to the hypothesis H_ij: ξ_i = ξ_j
(all i ≠ j).  Now for any two ξ's, the hypothesis ξ_i = ξ_j can be
tested using Fisher's t with m d.f.  The hypothesis ξ_i = ξ_j is
accepted if

     |x̄_i − x̄_j| ≤ t_α s √(2/n),

where t_α is the upper α point of Fisher's t with m d.f.  Now
since H₀ is equivalent to H_ij (for all i ≠ j = 1, ..., k), we get
a test of H₀ as follows:  Take the intersection of all the
k(k−1)/2 two by two Fisher's t acceptance regions and accept H₀ if
it lies in the intersection.  It is easy to check that this is the
same as accepting H₀ if

     q = (x̄_max − x̄_min) / (s/√n) ≤ Q,

where Q is the upper α point of the Studentized range with m d.f.
This is the Tukey q test.
If the hypothesis H₀ is true, then

(3.3.1)  1 − α = P[ (x̄_max − x̄_min)/s ≤ Q' ]
               = Σ_{i=1}^k ∫₀^∞ p(s) ds ∫_{−∞}^∞ p(x̄_i) dx̄_i
                 ∏_{j≠i} ∫_{x̄_i}^{x̄_i+Q's} p(x̄_j) dx̄_j,

where

     p(x̄) = (√n / (σ√(2π))) e^{−n x̄² / (2σ²)},

     p(s) = [2 (m/2)^{m/2} / (Γ(m/2) σ^m)] e^{−m s² / (2σ²)} s^{m−1},

     Q' = Q/√n,

and α is the given level of significance of the test.

If H₀ is not true, then the second kind of error of the test
is

(3.3.2)  β = Σ_{i=1}^k ∫₀^∞ p(s) ds ∫_{−∞}^∞ p(x̄_i; ξ_i) dx̄_i
             ∏_{j≠i} ∫_{x̄_i}^{x̄_i+Q's} p(x̄_j; ξ_j) dx̄_j,

where

     p(x̄; ξ) = (√n / (σ√(2π))) e^{−n (x̄−ξ)² / (2σ²)}.
It is easily checked that the right side of (3.3.1) is equal
to

(3.3.3)  Σ_{i=1}^k ∫₀^∞ p(s) ds ∫_{−∞}^∞ p(y_i) dy_i
            ∏_{j≠i} ∫_{y_i}^{y_i+Q''s} p(y_j) dy_j,

where y_i = √n x̄_i / σ, Q'' = Q/σ, and
p(y) = (1/√(2π)) e^{−y²/2}.

Similarly the right side of (3.3.2) is equal to

(3.3.4)  Σ_{i=1}^k ∫₀^∞ p(s) ds ∫_{−∞}^∞ p(y_i; ξ'_i) dy_i
            ∏_{j≠i} ∫_{y_i}^{y_i+Q''s} p(y_j; ξ'_j) dy_j,

where ξ'_i = √n ξ_i / σ and p(y; ξ') = (1/√(2π)) e^{−(y−ξ')²/2}.
3.4 Canonical reduction of the power function.  We shall presently
show that β given in (3.3.2) could involve as parameters only the
k − 1 differences η_{i−1} = ξ'₁ − ξ'_i (i = 2, ..., k).

From (3.3.4) we see that

(3.4.1)  β = Σ_{i=1}^k I(0,∞; m; −∞,∞; ξ'_i; y₁, y₁+Q''s; ξ'₁;
             y₁, y₁+Q''s; ξ'₂; ...; y₁, y₁+Q''s; ξ'_{i−1};
             y₁, y₁+Q''s; ξ'_{i+1}; ...; y₁, y₁+Q''s; ξ'_k),

where

     I(0,∞; m; −∞,∞; ξ'₁; y₁, y₁+Q''s; ξ'₂; ...; y₁, y₁+Q''s; ξ'_k)
        = ∫₀^∞ p(s) ds ∫_{−∞}^∞ p(y₁; ξ'₁) dy₁
          ∫_{y₁}^{y₁+Q''s} p(y₂; ξ'₂) dy₂ ⋯ ∫_{y₁}^{y₁+Q''s} p(y_k; ξ'_k) dy_k.

Now putting y_i − ξ'_i = z_i and ξ'₁ − ξ'_i = η_{i−1}, after a
little simplification, we get
(3.4.2)  β = Σ_{i=1}^{k−1} I(0,∞; m; −∞,∞; 0; z₁−η_i, z₁−η_i+Q''s; 0;
               z₁−η_i+η₁, z₁−η_i+η₁+Q''s; 0; ...;
               z₁−η_i+η_{k−1}, z₁−η_i+η_{k−1}+Q''s; 0)
           + I(0,∞; m; −∞,∞; 0; z₁+η₁, z₁+η₁+Q''s; 0;
               z₁+η₂, z₁+η₂+Q''s; 0; ...;
               z₁+η_{k−1}, z₁+η_{k−1}+Q''s; 0).

From (3.4.2) it is evident that β could involve as parameters
only the k − 1 η's.  Hence the power (= 1 − β) of the q test could
involve as parameters only the k − 1 η's.  It is worth noting at
this point that the right side of (3.4.2) is symmetric in the η's.
Hence the power of the q test is also symmetric in the η's.

3.5 Uniform unbiased nature of the q test.  To prove the uniform
unbiased nature of the q test we need to use certain lemmas which
we shall prove now.
Lemma 1.  If (1) in the domain D: { a_i ≤ x_i ≤ b_i } the function
f(x₁, ..., x_k) exists, all partial derivatives of order one and
two exist, and all partial derivatives of order one vanish
simultaneously at one and only one inner point
P = (x₁₀, x₂₀, ..., x_k₀) of D, (2) the matrix of second partials
evaluated at P is negative definite (n.d.), and (3) at every point
(x₁, ..., x_k) on the boundary of D,
f(x₁, ..., x_k) < f(x₁₀, ..., x_k₀), that is, < A (say), then

(3.5.1)  f(x₁, ..., x_k) < f(x₁₀, ..., x_k₀), that is, < A,
         for all x ∈ D, x ≠ P.

Proof:  Condition (1) implies there is one and only one stationary
point P inside the domain D.  By condition (2) we see that at this
point there is actually a local maximum.  Hence there cannot be
another point inside D where f(x₁, ..., x_k) ≥ A, for otherwise
there would be a contradiction.  There could still be points on
the boundary of D where f(x₁, ..., x_k) ≥ A, but by (3) this is
impossible.  Hence f(x₁, ..., x_k) < A for all x ∈ D, x ≠ P.
Lemma 2.  If the conditions of lemma 1 are satisfied as a_i → −∞
or b_i → ∞ for any i and for fixed values of a_j, b_j
(j ≠ i = 1, ..., k), then f(x₁, ..., x_k) < A for all
x ∈ D: { −∞ < x's < ∞ }.

Proof:  The proof follows obviously from lemma 1.
Theorem 1.  The Studentized range test of Tukey is completely
unbiased, but its power function does not have the monotonicity
property.
Proof:  The second kind of error of the q test for a given
significance level α is

(3.5.2)  β = P[ (x̄_max − x̄_min)/s ≤ Q ]
           = Σ_{i=1}^{k−1} I(0,∞; m; −∞,∞; 0; z₁−η_i, z₁−η_i+Qs; 0;
               ...; z₁−η_i+η_{k−1}, z₁−η_i+η_{k−1}+Qs; 0)
           + I(0,∞; m; −∞,∞; 0; z₁+η₁, z₁+η₁+Qs; 0; ...;
               z₁+η_{k−1}, z₁+η_{k−1}+Qs; 0),

where Q > 0 is so chosen that

     1 − α = k I(0,∞; m; −∞,∞; 0; z₁, z₁+Qs; 0; z₁, z₁+Qs; 0;
               ...; z₁, z₁+Qs; 0).
Now differentiating β with respect to η₁, we get, after some
simplification, an expression (3.5.3) for ∂β/∂η₁ as a sum of
multiple integrals.  It is easy to check that the right side of
(3.5.3) will be negative if η₁ < 0 and η₁ ≤ η_i (i = 2, ..., k−1),
and positive if η₁ > 0 and η₁ ≥ η_i (i = 2, ..., k−1).  By the
symmetry in the variables the same is true of ∂β/∂η_i
(i = 2, ..., k−1), i.e.,
(3.5.4)  ∂β/∂η_i < 0  if η_i < 0 and η_i = η_min,   and
         ∂β/∂η_i > 0  if η_i > 0 and η_i = η_max.

Also it is evident that

     ∂β/∂η₁ |_{η = 0} = 0.

Similarly

     ∂β/∂η_i |_{η = 0} = 0,   (i = 2, ..., k−1).

Now suppose η ≠ 0.  Then either η_max > 0 or η_min < 0.
Hence the first partials can vanish simultaneously only at
(0, 0, ..., 0).
Again it is easily verified that

(3.5.5)  ∂²β/∂η_i² |_{η=0} = −2(k−1) Q C(Q),   (i = 1, ..., k−1),

where

     C(Q) = ∫₀^∞ s p(s) ds ∫_{−∞}^∞ { ⋅ } dz₁ > 0.

Hence ∂²β/∂η_i² |_{η=0} < 0, (i = 1, ..., k−1).  Also

(3.5.6)  ∂²β/∂η_i ∂η_j |_{η=0} = 2 Q C(Q) > 0,   (i ≠ j = 1, ..., k−1).

Hence the matrix of second partials when η = 0,

     M = [ −2(k−1)f(Q)     2f(Q)     ...     2f(Q)
              2f(Q)     −2(k−1)f(Q)  ...     2f(Q)
               ...
              2f(Q)        2f(Q)     ...  −2(k−1)f(Q) ],

where f(Q) = Q C(Q), is n.d.
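The negative definiteness of M can be checked directly: M = 2f(J − kI) with J the all-ones matrix of order k − 1, so its eigenvalues are −2f (once) and −2kf (k − 2 times), all negative.  A quick numerical confirmation (a sketch; the function name is ours, and the value used for f(Q) is an arbitrary positive stand-in, since only its sign matters):

```python
import numpy as np

def second_partials_matrix(k, f):
    """(k-1) x (k-1) matrix of second partials of beta at eta = 0:
    diagonal entries -2(k-1)f, off-diagonal entries 2f."""
    d = k - 1
    return 2.0 * f * (np.ones((d, d)) - k * np.eye(d))

# f(Q) = Q C(Q) > 0; the value 1.0 here is an arbitrary stand-in.
M = second_partials_matrix(k=5, f=1.0)
eigs = np.linalg.eigvalsh(M)          # all eigenvalues are negative
```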
It will now suffice to show that β → 0 at each point of the
boundary of the domain D: { ε_i ≤ η_i ≤ h_i; i = 1, ..., k−1 } as,
say, ε₁ → −∞ or h₁ → ∞ for fixed values of ε_i, h_i
(i = 2, ..., k−1).  Now

(3.5.9)  β = ∫₀^∞ p(s) ds ∫_{−∞}^∞ (1/√(2π)) e^{−z₁²/2} dz₁
               ∫_{z₁+η₁}^{z₁+η₁+Qs} (1/√(2π)) e^{−z₂²/2} dz₂ ⋯
           + ∫₀^∞ p(s) ds ∫_{−∞}^∞ (1/√(2π)) e^{−z₁²/2} dz₁
               ∫_{z₁−η₁}^{z₁−η₁+Qs} (1/√(2π)) e^{−z₂²/2} dz₂ ⋯
           + ⋯
           + ∫₀^∞ p(s) ds ∫_{−∞}^∞ (1/√(2π)) e^{−z₁²/2} dz₁
               ∫_{z₁−η_{k−1}}^{z₁−η_{k−1}+Qs} (1/√(2π)) e^{−z₂²/2} dz₂ ⋯ .
Now consider the case where η₁ = ε₁ and η₂, ..., η_{k−1} are
in the domain D′: { ε_i ≤ η_i ≤ h_i, i = 2, ..., k−1 }.  Then

(3.5.10)  lim_{ε₁→−∞} ∫_{z₁−ε₁}^{z₁−ε₁+Qs} e^{−z²/2} dz = 0   and
          lim_{ε₁→−∞} ∫_{z₁+ε₁}^{z₁+ε₁+Qs} e^{−z²/2} dz = 0.
Similarly when η₁ = h₁ and η₂, ..., η_{k−1} are in D′, we
have,

(3.5.11)  lim_{h₁→∞} ∫_{z₁−h₁}^{z₁−h₁+Qs} e^{−z²/2} dz = 0   and
          lim_{h₁→∞} ∫_{z₁+h₁}^{z₁+h₁+Qs} e^{−z²/2} dz = 0.

Also ∫_{−∞}^∞ e^{−x²/2} dx exists finitely.  Hence as
ε₁ → −∞ or h₁ → ∞, the value of β at each point on the boundary
→ 0, while the value of β at the point where η = 0 is 1 − α > 0.
Hence

(3.5.12)  1 − α = β(0) > β(D as ε → −∞ or h → ∞) = 0.

Hence all the conditions given in lemma 2 are satisfied by
the function β(η).  Hence β(0) > β(η) for every η ≠ 0.  Hence the
Tukey q test is completely unbiased.
Now by equation (3.5.4) we have ∂β/∂η_i < 0 if η_i < 0 and
η_i = η_min, and ∂β/∂η_i > 0 if η_i > 0 and η_i = η_max
(i = 1, 2, ..., k−1).  Hence the power function of the q test does
not have the monotonicity property.
3.6 Lower bound to the power of the q test.  The second kind of
error of the q test is

(3.6.1)  β = P[ (x̄_max − x̄_min)/s ≤ Q ]
           ≤ P_i[ |x̄_i − x̄_j|/s ≤ Q/√n,
               j = 1, 2, ..., i−1, i+1, ..., k ].

Hence

(3.6.2)  Power of q test ≥ 1 − P_i[ |x̄_i − x̄_j|/s ≤ Q/√n,
             j = 1, 2, ..., i−1, i+1, ..., k ],   (i = 1, ..., k).

There are k such lower bounds and the power of the q test
will be greater than the g.l.b. (greatest lower bound).

The lower bound given in (3.6.2) can be evaluated by using
the distribution of a multivariate analogue of Student's t
considered by Dunnett and Sobel [6].  Extensive tables of the
distribution of the bivariate analogue of Student's t are given in
[6].

It is to be noticed that the evaluation of the exact value of
the power function of the q test is extremely difficult.  The
evaluation of the lower bounds given in (3.6.2) is difficult, but
the tabulation of the expression on the right side of (3.6.2) is
useful not only in this situation but also in other situations
including the important situation of ranking means of normal
populations.
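Where the Dunnett-Sobel tables are not at hand, the bound (3.6.2) also lends itself to simulation.  The following sketch (modern notation, unit variance assumed; the function name, seed, and all numerical choices are illustrative and not from the report) estimates the right side of (3.6.2) by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_power_lower_bound(xi, n, m, Q, i=0, reps=20000):
    """Monte Carlo estimate of the bound (3.6.2):
    power >= 1 - P_i[ |xbar_i - xbar_j| <= Q s / sqrt(n)  for all j != i ].

    xi -- vector of the k true means (unit common variance assumed)
    n  -- common sample size;  m -- d.f. of the independent estimate s^2
    Q  -- Studentized-range critical point
    """
    xi = np.asarray(xi, float)
    k = xi.size
    # sample means ~ N(xi_j, 1/n); s^2 ~ chi^2_m / m, independent
    means = xi + rng.standard_normal((reps, k)) / np.sqrt(n)
    s = np.sqrt(rng.chisquare(m, reps) / m)
    diffs = np.abs(means - means[:, [i]])            # |xbar_i - xbar_j|
    diffs = np.delete(diffs, i, axis=1)
    accept = (diffs <= (Q * s / np.sqrt(n))[:, None]).all(axis=1)
    return 1.0 - accept.mean()
```

The bound grows toward 1 as the mean chosen for the index i moves away from the others.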
3.7 The Hartley F_max ratio test.  Let x_ij (i = 1, ..., k;
j = 1, ..., n+1) be independent samples of size (n+1) from
N(ξ_i, σ_i²) (i = 1, ..., k).  Let

     s_i² = Σ_{j=1}^{n+1} (x_ij − x̄_i)² / n,
     where x̄_i = Σ_{j=1}^{n+1} x_ij / (n+1),

be the unbiased estimate of σ_i² based on n d.f.  To test the
hypothesis H₀: σ₁² = ⋯ = σ_k², we can obtain a test procedure as
follows:  To test the hypothesis H_ij: σ_i² = σ_j², we have the
well known variance ratio F test of Fisher given by
F_ij = s_i²/s_j² with d.f. (n, n).  The hypothesis σ_i² = σ_j² is
accepted if 1/F_α ≤ s_i²/s_j² ≤ F_α, where F_α is the upper α
point of Fisher's F with d.f. (n, n).  Since H₀ is equivalent to
H_ij (all i ≠ j) we get a test of H₀ as follows:  Take the
intersection of all the k(k−1)/2 Fisher's F_ij acceptance regions
and accept H₀ if it lies in the intersection.  It is easy to check
that this is the same as accepting H₀ if
F_max = s²_max / s²_min ≤ F_α, where F_α is the upper α point of
the F_max distribution with d.f. (n, n).  This is the Hartley
F_max ratio test.

We shall show that the power function of the F_max test could
involve as parameters only the k−1 ratios η_{i−1} = σ₁²/σ_i²
(i = 2, ..., k).  Also we shall show that the F_max test is
completely unbiased, but the power function does not have the
monotonicity property.  A set of useful lower bounds is obtained
on the power of the F_max test, which can be evaluated using
tables of the distribution of the Studentized largest chi-square.
The distribution of the Studentized largest chi-square is given in
Chapter IV.  A multivariate generalization of the F_max test is
given in (3.12).
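The mechanics of the test are simple (a sketch in modern notation, not from the report; the data and the function name are hypothetical, and the critical point F_α would be taken from the tabulated F_max distribution):

```python
import numpy as np

def hartley_fmax(samples):
    """F_max = s^2_max / s^2_min for k groups of common size n + 1.

    samples -- k x (n+1) array; each row yields the unbiased variance
               estimate s_i^2 on n d.f.
    """
    s2 = samples.var(axis=1, ddof=1)    # s_i^2 = sum (x - xbar)^2 / n
    return s2.max() / s2.min()

# Hypothetical data with row variances 2 and 8 (n = 1 d.f. each).
x = np.array([[0.0, 2.0],
              [0.0, 4.0]])
fmax = hartley_fmax(x)                  # 8 / 2 = 4.0
```

H₀ is accepted exactly when the computed F_max does not exceed F_α.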
3.8 Power function of the F_max test.  Let x_ij (i = 1, ..., k;
j = 1, ..., n+1) be k(n+1) independent N(ξ_i, σ_i²; i = 1, ..., k)
variables.  It is well known that

     v_i² = n s_i² = Σ_{j=1}^{n+1} (x_ij − x̄_i)²,
     where x̄_i = Σ_{j=1}^{n+1} x_ij / (n+1),

is such that v_i²/σ_i² has a chi-square distribution with n d.f.
When the hypothesis H₀: σ₁² = ⋯ = σ_k² is true, then

(3.8.1)  1 − α = P[ s_i²/s_j² ≤ F for all i ≠ j ]
               = Σ_{i=1}^k ∫₀^∞ p(v_i²) dv_i²
                 ∏_{j≠i} ∫_{v_i²}^{F v_i²} p(v_j²) dv_j²,

where

     p(v²) = [1 / (2^{n/2} Γ(n/2) σ^n)] (v²)^{(n−2)/2} e^{−v²/(2σ²)},

and α is the given level of significance of the test.

If H₀ is not true, then the second kind of error of the test
is

(3.8.2)  β = P[ s²_max / s²_min ≤ F | σ² ]
           = Σ_{i=1}^k ∫₀^∞ p(v_i²; σ_i²) dv_i²
             ∏_{j≠i} ∫_{v_i²}^{F v_i²} p(v_j²; σ_j²) dv_j²,

where

     p(v_i²; σ_i²) = [1 / (2^{n/2} Γ(n/2) σ_i^n)]
                     (v_i²)^{(n−2)/2} e^{−v_i²/(2σ_i²)},

and σ² = (σ₁², ..., σ_k²).
3.9 Canonical reduction of the power function.  We shall presently
show that β given in (3.8.2) could involve as parameters only the
k−1 ratios η_{i−1} = σ₁²/σ_i² (i = 2, ..., k).

From (3.8.2) we see that

(3.9.1)  β = Σ_{i=1}^k I_i(σ₁², ..., σ_k²),

where

     I_i(σ₁², ..., σ_k²) = ∫₀^∞ p(s_i²; σ_i²) ds_i²
        ∏_{j≠i} ∫_{s_i²}^{F s_i²} p(s_j²; σ_j²) ds_j².

Now putting η_{i−1} = σ₁²/σ_i², we get, after a little
simplification, an expression (3.9.2) for β in which the limits of
integration involve only v₁², F v₁² and the ratios
η₁, ..., η_{k−1}.

From (3.9.2) it is evident that β could involve as parameters
only the k−1 η's.  Hence the power (= 1 − β) of the F_max test
could involve as parameters only the k−1 η's.  It is worth noting
at this point that the right side of (3.9.2) is symmetric in the
η's.  Hence the power of the F_max test is also symmetric in the
η's.
3.10 Uniform unbiased nature of the F_max test.  To prove the
uniform unbiased nature of the F_max test we need to use a lemma
which is

Lemma 3.  If the conditions of lemma 1 are satisfied as a_i → 0
or b_i → ∞, for any i and for fixed values of a_j, b_j
(j ≠ i = 1, ..., k), then f(x₁, ..., x_k) < A for all
x ∈ D: { 0 < x's < ∞ }.

Proof:  The proof follows obviously from lemma 1.

Theorem 2.  The F_max ratio test of Hartley is completely
unbiased, but its power function does not have the monotonicity
property.

Proof:  The second kind of error of the F_max test for a given
significance level α is

(3.10.1)  β = Σ_{i=1}^{k−1} I(0,∞; v₁², F v₁²; η₁/η_i, ..., η_{k−1}/η_i)
            + I(0,∞; v₁², F v₁²; η₁, ..., η_{k−1}),

where F > 1 is so chosen that

(3.10.2)  1 − α = k I(0,∞; v₁², F v₁²; 1, ..., 1).
Now differentiating β with respect to η₁, we get, after some
simplification, an expression (3.10.3) for ∂β/∂η₁ as a sum of
multiple integrals of chi-square densities.  It is easy to check
that the right side of (3.10.3) will be negative if η₁ > 1 and
η₁ ≥ η_i (i = 2, ..., k−1), and positive if η₁ < 1 and η₁ ≤ η_i
(i = 2, ..., k−1).  By the symmetry in the variables, the same is
true of ∂β/∂η_i (i = 1, ..., k−1), i.e.,

(3.10.4)  ∂β/∂η_i < 0  if η_i > 1 and η_i = η_max,   and
          ∂β/∂η_i > 0  if η_i < 1 and η_i = η_min.
Also it is evident that

     ∂β/∂η₁ |_{η = 1} = 0.

Similarly

(3.10.5)  ∂β/∂η_i |_{η = 1} = 0,   (i = 2, ..., k−1).

Now suppose η ≠ 1.  Then either η_max > 1 or η_min < 1.
Hence the first partials can vanish simultaneously only at
(1, ..., 1).
Again it is easily verified that

(3.10.6)  ∂²β/∂η_i² |_{η = 1} = (k−1)(1−F) C(F),

where

     C(F) = [1 / Γ^k(n/2)] ∫₀^∞ (u²/2)^{(n−2)/2} e^{−u²/2}
            [ ∫_{u²/2}^{F u²/2} w^{(n−2)/2} e^{−w} dw ]^{k−2} d(u²/2) > 0.

Hence ∂²β/∂η_i² |_{η=1} < 0, (i = 1, ..., k−1).  Also

(3.10.7)  ∂²β/∂η_i ∂η_j |_{η = 1} = (F−1) C(F) > 0,
          (i ≠ j = 1, ..., k−1).

Hence the matrix of second partials when η = 1,

     M = [ −(k−1)g(F)     g(F)     ...     g(F)
              g(F)     −(k−1)g(F)  ...     g(F)
               ...
              g(F)        g(F)     ...  −(k−1)g(F) ],

where g(F) = (F−1) C(F), is n.d.
It will now suffice to show that β → 0 at each point of the
boundary of the domain D: { ε_i ≤ η_i ≤ λ_i; i = 1, ..., k−1 } as,
say, ε₁ → 0 or λ₁ → ∞, for fixed values of ε_i and λ_i
(i = 2, ..., k−1).  Now β can be written as a sum of terms each of
which is a product of integrals of the form

     ∫ (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2),

taken between limits which, on the boundary, are multiples of
u₁² ε₁, u₁²/ε₁, u₁² λ₁ or u₁²/λ₁.
Now consider the case where η₁ = ε₁ and η₂, ..., η_{k−1} are
in the domain D′: { ε_i ≤ η_i ≤ λ_i, i = 2, ..., k−1 }.  Then

(3.10.8)  lim_{ε₁→0} ∫_{u₁² ε₁ / 2}^{F u₁² ε₁ / 2} (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2) = 0
          and
          lim_{ε₁→0} ∫_{u₁² / (2ε₁)}^{F u₁² / (2ε₁)} (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2) = 0.

Similarly when η₁ = λ₁ and η₂, ..., η_{k−1} are in D′, we
have,

(3.10.9)  lim_{λ₁→∞} ∫_{u₁² λ₁ / 2}^{F u₁² λ₁ / 2} (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2) = 0
          and
          lim_{λ₁→∞} ∫_{u₁² / (2λ₁)}^{F u₁² / (2λ₁)} (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2) = 0.
Also ∫₀^∞ (u²/2)^{(n−2)/2} e^{−u²/2} d(u²/2) exists finitely.
Hence as ε₁ → 0 or λ₁ → ∞, the value of β at each point on the
boundary → 0, while the value of β at the point where all η's = 1
is 1 − α > 0.  Hence

(3.10.10)  1 − α = β(1) > β(D as ε → 0 or λ → ∞) = 0.

Hence all the conditions given in lemma 3 are satisfied by
the function β(η).  Hence β(1) > β(η) for every η ≠ 1.
Hence the Hartley F_max ratio test is completely unbiased.

Now by equation (3.10.4), we have,

(3.10.11)  ∂β/∂η_i < 0  if η_i > 1 and η_i = η_max,   and
           ∂β/∂η_i > 0  if η_i < 1 and η_i = η_min,
           (i = 1, ..., k−1).

Hence the power function of the F_max test does not have the
monotonicity property.
3.11 Lower bound to the power of the F_max test.  The second kind
of error of the F_max test is

(3.11.1)  β = P[ s²_max / s²_min ≤ F | σ² ]
            ≤ P_i[ s_i²/s_j² ≤ F,
                j = 1, ..., i−1, i+1, ..., k | σ² ],
              (i = 1, ..., k).

Hence

(3.11.2)  Power of F_max test ≥ 1 − P_i[ s_i²/s_j² ≤ F,
              j = 1, ..., i−1, i+1, ..., k | σ² ],   (i = 1, ..., k).

There are k such lower bounds and the power of the F_max test
will be greater than the g.l.b.

The lower bound given in (3.11.2) can be evaluated by using
tables of the distribution of the Studentized largest chi-square.
The distribution of the Studentized largest chi-square is derived
in Chapter IV.

It is to be noted that the evaluation of the exact value of
the power function of the F_max test is extremely difficult.  The
evaluation of the lower bound given in (3.11.2) is rather easy.
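The bound (3.11.2) can likewise be estimated by simulation when tables are not available (a sketch; the function name, seed, and numerical choices are illustrative and not from the report):

```python
import numpy as np

rng = np.random.default_rng(0)

def fmax_power_lower_bound(sigma2, n, F, i=0, reps=20000):
    """Monte Carlo estimate of the bound (3.11.2):
    power >= 1 - P_i[ s_i^2 / s_j^2 <= F  for all j != i ].

    sigma2 -- vector of the k true variances
    n      -- common d.f. of each s_j^2;  F -- F_max critical point
    """
    sigma2 = np.asarray(sigma2, float)
    k = sigma2.size
    # s_j^2 ~ sigma_j^2 * chi^2_n / n, independently over j
    s2 = sigma2 * rng.chisquare(n, (reps, k)) / n
    ratios = s2[:, [i]] / np.delete(s2, i, axis=1)
    accept = (ratios <= F).all(axis=1)
    return 1.0 - accept.mean()
```

The bound approaches 1 as the variance chosen for the index i grows relative to the others.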
3.12 Multivariate analogue of Tukey and Hartley tests.  Given
random samples X_i (p × (n_i+1)) of sizes (n_i+1) (i = 1, ..., k)
from k independent N(ξ_i, Σ_i) (i = 1, ..., k) populations, the
hypothesis H₀: ξ₁ = ⋯ = ξ_k can be tested by using a multivariate
analogue of the q test.  This test, based on the largest of a set
of Hotelling's T², is due to Roy and Bose [20].

Let X_i (p × (n+1)) (i = 1, ..., k) be k independent samples
of sizes (n+1) from N(ξ_i, Σ_i) (i = 1, ..., k).  To test the
hypothesis H₀: Σ₁ = ⋯ = Σ_k against the alternative H ≠ H₀, we can
get a multivariate extension of the Hartley test as follows:  The
test of the hypothesis H_ij: Σ_i = Σ_j can be carried out, as
given in Chapter II, using the joint distribution of the largest
and smallest roots of S_i(S_i + S_j)⁻¹ or equivalently of
S_iS_j⁻¹, where S_i = X_iX_i′ − (n+1) x̄_i x̄_i′.  Hence the test
of H₀ will be based on sup_{i≠j} c(S_iS_j⁻¹), where the supremum
is to be taken over all i ≠ j = 1, ..., k.  The distribution
problem connected with this test is quite difficult.  Power
properties of this test will not be discussed here.

So far in this chapter we considered the F_max test when all
the s_i²'s are based on the same number of d.f. n.  Investigation
is proceeding on the behaviour of the F_max test when the d.f. are
unequal.  Power properties of similar generalizations of the q
test are also being investigated.
CHAPTER IV
THE SIMULTANEOUS ANALYSIS OF VARIANCE TEST

4.1 Introduction.  It is well known that in situations involving
the testing of the significance of k mean squares, the usual
method of anova gives tests which are not independent.  To test
all the k hypotheses together we can either use the usual analysis
of variance test (which we shall call the joint test) or else we
can use a simultaneous test (which we shall call the sim. anova
test).  This sim. anova test is due to M. N. Ghosh [8].  It will
be presently seen that the sim. anova test has a very close tie up
with the individual tests of hypotheses, unlike the joint test.

The essential difference between the sim. anova test and the
usual joint test can be best illustrated if we consider, for
simplicity of exposition, the case of two hypotheses H₁ and H₂.
Individual tests on H₁ and H₂ are well known.  They are based each
on the F statistic.  The sim. anova test has an acceptance region
which is given by the intersection of the individual F acceptance
regions of the two hypotheses.  Thus if the significance levels of
the tests of H₁ and H₂ are α₁ and α₂, that of the sim. anova test
will be a function of (α₁, α₂) but ≤ α₁ + α₂.  The extension to
the case of several hypotheses is immediate.  The joint test is an
F test obtained by considering the hypotheses H₁ and H₂ together.
We shall in this chapter study certain distribution problems
connected with the test and shall also investigate certain power
properties of the test.  Suppose in a field experiment there are
k hypotheses H₁, ..., H_k based on, say, n₁, ..., n_k d.f. and
suppose s₁², ..., s_k² are the k mean squares corresponding to
these hypotheses.  Let s² be an independent estimate of the common
unknown variance σ² based on m d.f. (s² will be the usual error
mean square in the anova).  In the usual anova situations we test
each hypothesis H_i individually by the F ratio

(4.1.1)  F_i = s_i² / s²,   (i = 1, ..., k),

with (n_i, m) d.f.  These k tests are not independent because we
are using the same estimate of error variance for all the k tests,
and because also of the possible non-orthogonality of the
estimates.  We shall consider the problem of simultaneous tests of
hypotheses by the method of anova.  We introduce below the notion
of quasi-independent tests of multiple hypotheses which proved
useful in Ghosh's development of simultaneous tests.
4.2 Quasi-independent tests of hypotheses.  Consider k hypotheses
H₁, ..., H_k.  For any test of hypotheses we consider the first
and second kinds of error.  The second kind of error depends upon
the alternative hypotheses H₁(ξ₁), ..., H_k(ξ_k), where the
hypotheses H₁, ..., H_k are, say, H₁: ξ₁ = 0, ..., H_k: ξ_k = 0.
Tests T₁, ..., T_k of H₁, ..., H_k are defined to be
quasi-independent if, for i = 1, ..., k,

(4.2.1)  P_{T_i}(accept H_i | ξ_i ≠ 0; ξ₁, ..., ξ_{i−1},
         ξ_{i+1}, ..., ξ_k) is independent of
         ξ₁, ..., ξ_{i−1}, ξ_{i+1}, ..., ξ_k,

and

(4.2.2)  P_{T_i}(accept H_i | ξ_i = 0; ξ₁, ..., ξ_{i−1},
         ξ_{i+1}, ..., ξ_k) is independent of
         ξ₁, ..., ξ_{i−1}, ξ_{i+1}, ..., ξ_k.
As an example consider the anova of k linear hypotheses
H₁, ..., H_k.  Let X(n × 1) be a set of n uncorrelated stochastic
variates with the same (unknown) variance σ² and let E(X) be
subject to the constraint:

(4.2.3)  E(X) = A(n × p) ξ(p × 1),

where p ≤ n and ξ(p × 1) is a set of unknown parameters (to be
estimated or about which we are interested in testing certain
hypotheses) and A is a matrix of rank r ≤ p ≤ n, whose elements
are given by the particular experimental design.  Assuming that
each x_i is N(E(x_i), σ²) (i = 1, ..., n), we wish to obtain the
test for the hypotheses:
(4.2.4)  C(q × p) ξ(p × 1) = 0(q × 1),

where q = Σ_{i=1}^k q_i, C = (C₁′, ..., C_k′)′, and each C_i is a
given matrix of rank t_i ≤ min(r, q_i).  Taking (C_i11 C_i12)
(i = 1, ..., k) to be the set of t_i independent row vectors of
C_i, and A₁(n × r) the matrix formed by the r independent columns
of A, we get

(4.2.5)  t_i s_i² / σ²

to be a chi-square variable with t_i d.f.  Now if the
orthogonality condition (4.2.6) on the C_i11, C_i12 and A₁
matrices holds, then t_i s_i²/σ² and t_j s_j²/σ² will be
independent.
Let s² be an independent and unbiased estimate of σ² with
m d.f. (say, the error mean square in the anova).  Then we have,
on the null hypothesis (4.2.4), estimates s₁², ..., s_k² of σ²
corresponding to H₁, ..., H_k.  We construct F_i = s_i²/s²
(i = 1, ..., k), and obtain, from the joint distribution of
s₁², ..., s_k², s² given by

(4.2.7)  P(s₁², ..., s_k²; s²) = Const. ∏_{i=1}^k (s_i²)^{(t_i−2)/2}
            (s²)^{(m−2)/2} exp[ −(1/(2σ²)) ( Σ_{i=1}^k t_i s_i² + m s² ) ],

the joint distribution of F₁, ..., F_k equal to

(4.2.8)  P(F₁, ..., F_k) = [ Γ((Σt_i + m)/2) / ( ∏_{i=1}^k Γ(t_i/2) Γ(m/2) ) ]
            ∏_{i=1}^k (t_i/m)^{t_i/2} ∏_{i=1}^k F_i^{(t_i−2)/2}
            [ 1 + Σ_{i=1}^k t_i F_i / m ]^{−(Σt_i+m)/2}.

The marginal distributions are, of course, the usual
distributions of ratios of chi-square variables.
Since
any deviation of
H , •.• ,H from the null hypothesis does not affect the marginal
2
k
distribution of F , the first and second kinds of errors in the
l
90
test of HI are independent of the parameters under H2 , ••• ,
Similarly for the other hypotheses.
~.
Thus the usual F tests of
mUltiple hypotheses where these are orthogonal are quasi-independent.
Even when the Xi2 variables were not independent, if the marginal
distribution of F
i
did not involve the parameters of the other
hypotheses, then the F tests would be qUdsi-independent by definition.
There may be different points of view for assigning
significance limits in the case of stmultaneous tests of hypotheses.
In certain situations where the decisions regarding the hypotheses
HI' ••• , Hk are unrelated it is proper to consider the significance
level of each hypothesis individually at 5 per cent or I per cent
(say).
But when these decisions have a joint import it is proper
to consider the first kind of error of the simultaneous test as the
error of rejection of at least one of the hypotheses when all are
in ftict true.
The significance level of a simultaneous test is
defined as the probability of rejecting at least one of the
hypotheses when all are in fact true.
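The relation between the individual and the simultaneous significance levels is easy to check numerically. The following sketch (an illustration, not part of the original report) simulates k quasi-independent F tests sharing one error mean square; the choices k = 3, t = 2, m = 10 and the closed-form 5 per cent cutoff for an F with 2 and m d.f. are assumptions made for the example.

```python
import random

def chi_square(df, rng):
    # chi-square variate as a sum of df squared N(0,1)'s
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))

def simultaneous_level(k=3, t=2, m=10, reps=20000, seed=1):
    # Each F_i = (chi2_t/t)/(chi2_m/m); the numerators are independent but
    # all k ratios share one denominator (quasi-independent tests).
    # For t = 2, P(F > c) = (1 + 2c/m)^(-m/2), so the exact per-test
    # 5 per cent cutoff is c = (m/2) * (0.05**(-2/m) - 1), about 4.10 here.
    cutoff = (m / 2.0) * (0.05 ** (-2.0 / m) - 1.0)
    rng = random.Random(seed)
    rejected = 0
    for _ in range(reps):
        denom = chi_square(m, rng) / m
        if any(chi_square(t, rng) / t / denom > cutoff for _ in range(k)):
            rejected += 1
    return rejected / reps
```

Each test alone rejects a true hypothesis with probability .05, while the estimated probability of rejecting at least one of the three true hypotheses lies between .05 and 1 − (.95)³ ≈ .143, the shared denominator making the tests positively dependent.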
4.3 Simultaneous analysis of variance model and tests of hypotheses. Let x(n × 1) denote a set of n uncorrelated stochastic variates with the same (unknown) variance σ², and let E(x) be subject to the constraint:

(4.3.1)  E(x) = A(n × p) ξ(p × 1),

where p ≤ n, ξ(p × 1) is a set of unknown parameters (to be estimated or about which we are interested in testing certain hypotheses), and A is a matrix of rank r ≤ p ≤ n, whose elements are given by the particular experimental design.
Let us assume that each x_i is N(E(x_i), σ²), (i = 1, …, n). To obtain the simultaneous test for the k hypotheses on ξ:

(4.3.2)  C_i(q_i × p) ξ(p × 1) = 0 (q_i × 1),  (i = 1, 2, …, k),

where C_i is a given matrix of rank t_i ≤ min(r, q_i), we use, for testing the hypothesis C_i ξ = 0, certain results given in [19], with A_1 and (C_{i1}, C_{i2}) as defined in (4.2). We note that each F_i is distributed as an F with t_i and (n − r) d.f. But the F's are not mutually independent. If the tests are quasi-independent, then the numerators of the different F's are independent chi-square variables with t_i d.f. (i = 1, …, k). But if the tests are not quasi-independent, then the different chi-squares are not independent.
The hypothesis C(q × p) ξ(p × 1) = 0 (q × 1) is accepted if F_i ≤ a_i (i = 1, …, k). The optimum choice of the a_i is not known. We shall choose a_i proportional to t_i. If we want the significance level of the simultaneous test to be α, then we choose the a_i such that

(4.3.4)  P{F_1 ≤ a_1, …, F_k ≤ a_k | H} = 1 − α.

A method to evaluate the probability in the left side of (4.3.4) will be presented in (4.4) - (4.9). If the tests are not quasi-independent, then the numerators of the different F's are not independent. These tests can easily be made quasi-independent, and the results given for quasi-independent tests can then be applied.
4.4 Evaluation of the probability statement given in the left side of (4.3.4). From (4.3.4) we see that the sim. anova test depends on the evaluation of expressions of the form (with m = n − r)

(4.4.1)  c ∫_0^{a_1} ⋯ ∫_0^{a_k} ∏_{i=1}^k F_i^{(t_i−2)/2} (1 + Σ_{i=1}^k F_i)^{−(Σt_i+m)/2} ∏_{i=1}^k dF_i,

where c = c(t_1, …, t_k; m) is a function of t_1, …, t_k and m. Usually we will be interested in obtaining a_1, …, a_k such that

(4.4.2)  c ∫_0^{a_1} ⋯ ∫_0^{a_k} ∏_{i=1}^k F_i^{(t_i−2)/2} (1 + Σ_{i=1}^k F_i)^{−(Σt_i+m)/2} ∏_{i=1}^k dF_i = 1 − α.
This can be evaluated as follows. Denoting the left side of (4.4.2) by I(a_1, …, a_k; t_1, …, t_k; m), we get, by integration by parts,

(4.4.3)  I(a_1, …, a_k; t_1, …, t_k; m)
  = [ c(t_1, …, t_k; m) a_k^{t_k/2} / { ½(Σt_i + m − 2) (1 + a_k)^{(t_k+m−2)/2} } ]
    × ∫_0^{a_1/(1+a_k)} ⋯ ∫_0^{a_{k−1}/(1+a_k)} ∏_{i=1}^{k−1} F_i^{(t_i−2)/2} (1 + Σ_{i=1}^{k−1} F_i)^{−(Σt_i+m−2)/2} ∏_{i=1}^{k−1} dF_i + ⋯ .
Successive reduction will leave us with the evaluation of integrals of the form

(4.4.4)  ∫_0^{b_1} ⋯ ∫_0^{b_k} ∏_{i=1}^k F_i^{(t_i−2)/2} (1 + Σ_{i=1}^k F_i)^{−(Σt_i+m)/2} ∏_{i=1}^k dF_i.

Now it is easy to see that (4.4.4) is equivalent to

(4.4.5)  Const. ∫_0^∞ v^{(m−2)/2} e^{−v} dv ∫_0^{b_1 v} ⋯ ∫_0^{b_k v} ∏_{i=1}^k u_i^{(t_i−2)/2} e^{−u_i} du_i.

Also from [15] we get a power series expansion of the form

∫_0^x u^{1/2} e^{−u} du = 2 x^{3/2} e^{−x} Σ_{i=0}^∞ A_i x^i.
The evaluation of (4.4.2) for given values of a_1, …, a_k can be carried out successively for different values of t_1, …, t_k and m by using the reduction formula (4.4.3). When m is large, (4.4.2) can be evaluated using tables of the incomplete gamma function [14].

It is easy to notice that the tabulation of (4.4.2) is rather tedious because of the large number of parameters involved. In the next section we shall consider the special but important case when t_i = t, (i = 1, …, k).
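For even t_i and m the probability in (4.4.2) also reduces to a single numerical integration over the denominator chi-square, since the gamma c.d.f. has a finite closed form at integer shape. The sketch below (an illustration, not the report's reduction formula) computes P{F_1 ≤ a, …, F_k ≤ a} with F_i = t_i s_i²/(m s²) in this way; the particular values t_i = 2 and m = 10 are assumptions made for the check.

```python
import math

def gamma_cdf(x, shape):
    # CDF of a Gamma(shape, 1) variate for integer shape >= 1:
    # 1 - exp(-x) * sum_{j < shape} x^j / j!
    if x <= 0.0:
        return 0.0
    s, term = 0.0, 1.0
    for j in range(shape):
        if j:
            term *= x / j
        s += term
    return 1.0 - math.exp(-x) * s

def joint_prob(a, ts, m, steps=8000, upper=80.0):
    # P(F_1 <= a, ..., F_k <= a) for F_i = t_i s_i^2 / (m s^2):
    # with u_i = t_i s_i^2/(2 sigma^2) ~ Gamma(t_i/2) and
    # v = m s^2/(2 sigma^2) ~ Gamma(m/2), F_i <= a iff u_i <= a v, so the
    # k-fold integral collapses to E_v[ prod_i P(u_i <= a v) ] (Simpson's rule).
    half_m = m // 2
    h = upper / steps
    total = 0.0
    for j in range(steps + 1):
        v = j * h
        f = (v ** (half_m - 1)) * math.exp(-v) / math.factorial(half_m - 1)
        for t in ts:
            f *= gamma_cdf(a * v, t // 2)
        w = 1.0 if j in (0, steps) else (4.0 if j % 2 else 2.0)
        total += w * f
    return total * h / 3.0
```

For k = 2, t_i = 2, m = 10 this gives a joint probability near .95 at a ≈ 1.064, and the joint probability always lies between the product of the marginals (the Kimball-type lower bound of (4.9.4)) and the smallest marginal.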
4.5 Special case when t_i = t (i = 1, …, k). In this case we have to obtain an a such that

(4.5.1)  1 − α = c(k,t;m) ∫_0^a ⋯ ∫_0^a ∏_{i=1}^k F_i^{(t−2)/2} (1 + Σ_{i=1}^k F_i)^{−(kt+m)/2} ∏_{i=1}^k dF_i,

where c(k,t;m) = Γ(½(kt+m)) / [ Γ^k(t/2) Γ(m/2) ].

It is evident that the right side of (4.5.1) is equivalent to the following statement:

(4.5.2)  P{ t s_max² / (m s²) ≤ a } = 1 − α.

Let us call the statistic u = t s_max²/(m s²) the Studentized largest chi-square. In order to obtain an a such that (4.5.2) is satisfied, we shall study the distribution of the Studentized largest chi-square.
4.6 Studentized largest chi-square. Let x_1, …, x_k be k independent chi-square variables with the common p.d.f. given by

(4.6.1)  p(x) = x^n e^{−x} / Γ(n+1).

Let y be another independent chi-square variable with the p.d.f.

(4.6.2)  p(y) = y^m e^{−y} / Γ(m+1).

The Studentized largest chi-square is defined as

(4.6.3)  u = x_max / y.

We shall derive in the next few sections certain mathematical results which we shall use to obtain the distribution of u.
4.7 Power series expansion for the incomplete gamma type integrals. Let

(4.7.1)  I(n,k;x) = [ ∫_0^x u^n e^{−u} / Γ(n+1) du ]^k.

Using methods similar to those given in [16], we find an appropriate expansion for I(n,k;x) is given by

(4.7.2)  I(n,k;x) = [ x^{k(n+1)} / {Γ(n+2)}^k ] e^{−k(n+1)x/(n+2)} Σ_{i=0}^∞ A_i^{(k)} x^i,
where the A's satisfy the recurrence relation

(4.7.3)  A_i^{(k)} = Σ_{j=0}^{i} A_j^{(1)} A_{i−j}^{(k−1)},  (i = 0, 1, 2, …),

so that Σ_{i=0}^∞ A_i^{(k)} x^i = ( Σ_{i=0}^∞ A_i^{(1)} x^i )^k. Notice that A_0^{(k)} = 1 and A_1^{(k)} = 0 for all k.

We shall now prove the convergence of the series Σ_{i=0}^∞ A_i^{(k)} x^i.
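The coefficients A_i^{(1)} can be generated directly as a Cauchy product, multiplying the exponential factor back into the term-by-term integral of u^n e^{−u}/Γ(n+1); the sketch below (an illustration, not from the report) does this, taking e^{−(n+1)x/(n+2)} as the normalizing factor in (4.7.2), and checks the expansion against the closed form 1 − e^{−x} available at n = 0.

```python
import math

def series_coeffs(n, terms):
    # A_i in I(n,1;x) = x^(n+1)/Gamma(n+2) * exp(-(n+1)x/(n+2)) * sum_i A_i x^i,
    # computed as the Cauchy product of exp(+(n+1)x/(n+2)) with
    # (n+1) * sum_j (-1)^j x^j / (j! (n+j+1))  (term-by-term integration).
    alpha = (n + 1.0) / (n + 2.0)
    coeffs = []
    for i in range(terms):
        s = 0.0
        for j in range(i + 1):
            s += (alpha ** (i - j) / math.factorial(i - j)) * \
                 ((-1) ** j * (n + 1.0) / (math.factorial(j) * (n + j + 1.0)))
        coeffs.append(s)
    return coeffs

# check at n = 0, where I(0,1;x) = 1 - exp(-x):
x = 1.5
A = series_coeffs(0, 30)
expansion = x * math.exp(-x / 2.0) * sum(a * x ** i for i, a in enumerate(A))
```

A_0 = 1 and A_1 = 0 come out automatically, as noted above, and at n = 0 the odd coefficients vanish.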
4.8 Convergence of the series on the right side of (4.7.2). Consider

(4.8.1)  I(n;x) = ∫_0^x u^n e^{−u} / Γ(n+1) du = [ x^{n+1} / Γ(n+2) ] e^{−(n+1)x/(n+2)} Σ_{i=0}^∞ A_i^{(1)} x^i,

where

(4.8.2)  A_i^{(1)} = Σ_{j=0}^{i} [ ((n+1)/(n+2))^{i−j} / (i−j)! ] · [ (−1)^j (n+1) / (j! (n+j+1)) ].

Since we will be interested in cases where n is of the form r/2, (r = −1, 0, 1, …), we shall prove the convergence of the series on the right side of (4.8.1) for the cases r = 0, 1, …. The case r = −1 has been already considered [16].

Case 1. n = 0, i.e., r = 0. In this case

(4.8.3)  A_{2i+1}^{(1)} = 0  (i = 0, 1, …),   A_{2i}^{(1)} = 1 / [ 2^{2i} · 2·3 ⋯ (2i+1) ]  (i = 1, 2, …),   and A_0^{(1)} = 1.
Hence

(4.8.4)  A_{2i}^{(1)} / A_{2(i−1)}^{(1)} = 1 / [ 4 · 2i(2i+1) ] < 1 / (16 i²).

Hence Σ_0^∞ A_i^{(1)} is convergent, and the value of the ratio of the i-th to the (i−1)-th term of the power series in (4.8.1) is less than x²/(16 i²). Hence the series Σ_0^∞ A_i^{(1)} x^i is convergent, and therefore the powers of the series are also convergent. It may be noticed that the series (4.7.2) is rather rapidly convergent, so that for a relatively small x, only a few terms of the series will suffice for any degree of accuracy desired in practice.
Case 2. n > 0, i.e., r > 0. From (4.8.2), after a little simplification, we get

(4.8.6)  A_i^{(1)} = [ sum of the first (i+1) terms in (1 + 1/(n+1))^{−(n+1)} ] − [ (n+1) / {(n+2)(n+i+1)} ] [ sum of the first i terms in (1 + 1/(n+1))^{−(n+1)} ].

Hence, if i is large,

(4.8.7)  A_i^{(1)} / A_{i−1}^{(1)} < 1/i.

Hence Σ_0^∞ A_i^{(1)} is convergent, and the value of the ratio of the i-th to the (i−1)-th term of the power series in (4.8.1) is less than x/i. Hence the series Σ_0^∞ A_i^{(1)} x^i is convergent, and therefore the powers of the series are also convergent.
4.9 Distribution of the Studentized largest chi-square. Under the set-up given in (4.6), the p.d.f. of x_max = v is

(4.9.1)  P(v) = [ k / Γ(n+1) ] v^n e^{−v} [ ∫_0^v x^n e^{−x} / Γ(n+1) dx ]^{k−1}
  = [ k / { Γ(n+1) (Γ(n+2))^{k−1} } ] v^{k(n+1)−1} e^{−v[k(n+1)+1]/(n+2)} Σ_{i=0}^∞ A_i^{(k−1)} v^i,

using (4.7.2).
Multiplying (4.6.2) and (4.9.1), using the transformation u = v/y, and integrating with respect to y in the interval 0 to ∞, we get

(4.9.2)  p(u) = [ k / { Γ(n+1) (Γ(n+2))^{k−1} Γ(m+1) } ] Σ_{i=0}^∞ A_i^{(k−1)} Γ(k(n+1)+m+i+1) u^{k(n+1)+i−1} / [ 1 + u{k(n+1)+1}/(n+2) ]^{k(n+1)+m+i+1}.

From (4.9.2) it is evident that the distribution of u can be tabulated using tables of the incomplete beta function [13]. Upper 5 per cent points of u are given in Table 3 (see appendix) for k = 2 and for different values of m and n.
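Table 3 can also be reproduced by a single numerical integration over the denominator variable. The sketch below assumes the tabulated statistic is x_max/y rescaled by (m+1)/(n+1) — i.e. the ratio of the largest mean square to the independent mean square — a scaling inferred from the tabulated entries rather than stated in the text.

```python
import math

def gamma_cdf(x, shape):
    # CDF of a Gamma(shape, 1) variate for integer shape >= 1
    if x <= 0.0:
        return 0.0
    s, term = 0.0, 1.0
    for j in range(shape):
        if j:
            term *= x / j
        s += term
    return 1.0 - math.exp(-x) * s

def prob_u(c, k, m, n, steps=6000):
    # P( ((m+1)/(n+1)) * x_max / y <= c ) with x_1..x_k iid Gamma(n+1)
    # and y ~ Gamma(m+1): integrate the Gamma(m+1) density times the k-th
    # power of the Gamma(n+1) CDF (Simpson's rule over y).
    a = (n + 1.0) * c / (m + 1.0)
    upper = 40.0 + 4.0 * m
    h = upper / steps
    total = 0.0
    for j in range(steps + 1):
        y = j * h
        f = (y ** m) * math.exp(-y) / math.factorial(m)
        f *= gamma_cdf(a * y, n + 1) ** k
        w = 1.0 if j in (0, steps) else (4.0 if j % 2 else 2.0)
        total += w * f
    return total * h / 3.0

def upper_5_pct(k, m, n):
    # bisect P(u <= c) = 0.95 for the upper 5 per cent point
    lo, hi = 1e-6, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if prob_u(mid, k, m, n) < 0.95:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With k = 2, m = 2, n = 0 the computed 5 per cent point comes out near the tabulated 6.90.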
The methods presented in (4.4) - (4.9) will enable us to evaluate integrals of the form

(4.9.3)  ∫_0^∞ du ∫_0^{b_1 u} ⋯ ∫_0^{b_k u} u^m e^{−u} ∏_{i=1}^k v_i^{n_i} e^{−v_i} dv_i.

These integrals are found to be useful in obtaining lower bounds to the power of the Hartley test for the equality of several variances from univariate normal populations, which is discussed in Chapter III.

Before proceeding to the study of the power of the sim. anova test, it is interesting to note that a very useful lower bound to the probability statement on the left side of (4.3.4) can be obtained by using a result due to Kimball [11]. Using the result given in [11], we get

(4.9.4)  P{F_1 ≤ a_1, …, F_k ≤ a_k} ≥ ∏_{i=1}^k P{F_i ≤ a_i}.

The expression on the right side of (4.9.4) can be easily obtained from [13].
4.10 Power function of the sim. anova test. When the hypothesis given in (4.3.2) is not true, let the alternative hypothesis be

(4.10.1)  C_i(q_i × p) ξ(p × 1) = ε_i(q_i × 1),  (i = 1, …, k).

In the anova situation it is well known that the power function of the test of H_1 would involve as a parameter only the deviation parameter λ_1. In a similar way it is easy to verify that the power function of the sim. anova test, in the case of quasi-independent tests, would involve as parameters only λ_1, …, λ_k, where λ_j is the deviation parameter corresponding to H_j. Also it is well known that under this set-up the power function would be equal to

(4.10.2)  P = 1 − c ∫ ⋯ ∫_D ⋯ ,

where D is the domain F_i ≤ a_i (i = 1, …, k), and c > 0 is a pure constant independent of the λ's.
We shall now prove an optimum property of the sim. anova
test.
Theorem 1.
The power function of the sim. anova test, in the case
of quasi-independent tests, is a monotonic increasing function of
the absolute value of the square root of each of the deviation
parameters separately.
Proof: The second kind of error (complement of the power) of the sim. anova test is equal to

(4.10.3)  β = c ∫ ⋯ ∫_D p(s_1², …, s_k²; s²) ds_1² ⋯ ds_k² ds².
It is well known that, for the purpose of discussing the power properties of the sim. anova test, we can start, without any loss of generality, from the canonical probability law:

(4.10.4)  Const. e^{−½[ Σ_{i=1}^k Σ_{j=1}^{t_i} x_{ij}² + Σ_{i=1}^m y_i² ]} ∏_{i=1}^k ∏_{j=1}^{t_i} dx_{ij} ∏_{i=1}^m dy_i,

where −∞ ≤ x's ≤ ∞ and −∞ ≤ y's ≤ ∞. Also it is well known that

(4.10.5)  t_1 s_1² = (x_11 + √λ_1)² + x_12² + ⋯ + x_{1t_1}²,
          ⋯
          t_k s_k² = (x_k1 + √λ_k)² + x_k2² + ⋯ + x_{kt_k}²,
          m s² = y_1² + ⋯ + y_m².

Under this set-up, the second kind of error of the sim. anova test is equal to

(4.10.6)  β = c ∫ ⋯ ∫_{D_1} e^{−½[ ΣΣ x_{ij}² + Σ y_i² ]} ∏∏ dx_{ij} ∏ dy_i,

where D_1 is the domain t_i s_i² ≤ a_i m s² (i = 1, …, k), with the s_i² and s² given by (4.10.5), and c > 0 is a pure constant independent of the λ's.
It is easy to see that β is symmetric in the λ's. Hence we shall prove the theorem only for λ_1. For any other λ_i the theorem is then immediate because of the symmetry in the variables.

Notice that λ_1 occurs only with x_11. In (4.10.6) perform first the integration over x_11. The contribution to the total p.d.f. (4.10.4) made by x_11 is Const. e^{−½x_11²}. From (4.10.5), the upper and lower limits of the x_11 integration are ℓ_1 and ℓ_2, given by

(4.10.8)  ℓ_1 = ( a_1 Σ_{i=1}^m y_i² − Σ_{i=2}^{t_1} x_{1i}² )^{1/2} − √λ_1   and   ℓ_2 = −( a_1 Σ_{i=1}^m y_i² − Σ_{i=2}^{t_1} x_{1i}² )^{1/2} − √λ_1.
If we now differentiate with respect to √λ_1 the integral of (4.10.4) over the domain D_1, we get, through the x_11 integral, an integrand which is

(4.10.9)  Const. [ e^{−½ℓ_2²} − e^{−½ℓ_1²} ].

For all positive values of √λ_1, the expression in (4.10.9) will be negative, and for all negative values of √λ_1 it will be positive. Thus

(4.10.10)  ∂β/∂√λ_1 < 0 if √λ_1 > 0, and > 0 if √λ_1 < 0.

By the symmetry in the variables, the same is true of any √λ_i:

(4.10.11)  ∂β/∂√λ_i < 0 if √λ_i > 0, and > 0 if √λ_i < 0,  (i = 1, …, k).
Hence the second kind of error of the sim. anova test is a decreasing function of each |√λ_i| separately, and hence the power of the test (the complement of the second kind of error) is an increasing function of each |√λ_i| separately. Hence the theorem.
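Theorem 1 is easy to see in simulation using the canonical representation (4.10.5); the sketch below (an illustration, not part of the report) shifts x_11 by √λ_1 and watches the rejection rate rise. The choices k = 2, t_i = 2, m = 10 and the cutoff are assumptions made for the example.

```python
import random

def power_estimate(sqrt_lams, t=2, m=10, a=0.8206, reps=20000, seed=7):
    # Canonical form (4.10.5): t s_i^2 = (x_i1 + sqrt(lam_i))^2 plus (t-1)
    # more squared N(0,1)'s; m s^2 = sum of m squared N(0,1)'s. Reject H_i
    # when t s_i^2 > a * m s^2; a = 2*4.103/10 makes each test a 5 per cent
    # F test with 2 and 10 d.f.
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        ms2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(m))
        reject = False
        for sl in sqrt_lams:
            ts2 = (rng.gauss(0.0, 1.0) + sl) ** 2 + \
                  sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(t - 1))
            if ts2 > a * ms2:
                reject = True
        if reject:
            hits += 1
    return hits / reps
```

At √λ = 0 the estimate is the simultaneous significance level; it increases steadily with each |√λ_i|, as the theorem asserts.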
CHAPTER V

THE STUDENTIZED MAXIMUM MODULUS TEST

5.1 Introduction. In a 2^n factorial experiment suppose we are interested in testing the hypothesis that all linear functions of the treatment effects t_ij are simultaneously zero. The estimates of the treatment effects are assumed to be independently and normally distributed with a common variance σ², which can be independently estimated by an appropriate multiple of the error mean square in the anova.

The test of this hypothesis can be obtained by taking the intersection of the n Student's 't' acceptance regions [20]. It is easily shown that this test is based on the Studentized maximum modulus given by u_n = |x|/s, where x_1, …, x_n are independent N(0, σ²) variates, s² is an unbiased and independent estimate of σ² based on m d.f., and |x| is the maximum of |x_1|, …, |x_n|.

The distribution problem connected with the test has been solved in [16]. We shall investigate certain optimum power properties of the test in (5.2). Before proceeding with the power properties of the Studentized maximum modulus test, we shall prove two lemmas which are useful in demonstrating the optimum properties of the test.
Lemma 1.

(5.1.1)  I(0, ∞; m) U(−us, us; ξ) < I(0, ∞; m) U(−us, us; 0)  for every ξ ≠ 0,

where I(0, ∞; m) = c ∫_0^∞ s^{m−1} e^{−ms²/2} ds and U(x, y; ξ) = ∫_x^y (1/√(2π)) e^{−(t−ξ)²/2} dt.

Proof: It is easy to see that the expression on the left side of (5.1.1) can be put in the form

(5.1.2)  I(0, ∞; m) U(−us−ξ, us−ξ; 0).

Also it is easy to check that

(5.1.3)  U(−a+b, a+b; 0) < U(−a, a; 0)  for every b ≠ 0.

Hence the lemma.
Lemma 2. If

(5.1.4)  ψ(ξ) = I(0, ∞; m) U(−us−ξ_1, us−ξ_1; 0) ⋯ U(−us−ξ_n, us−ξ_n; 0),

then ∂ψ/∂ξ_i < 0 if ξ_i > 0, and > 0 if ξ_i < 0, (i = 1, …, n).

Proof: Differentiating under the integral sign,

(5.1.5)  ∂ψ/∂ξ_i = I(0, ∞; m) [ φ(−us−ξ_i) − φ(us−ξ_i) ] U(−us−ξ_1, us−ξ_1; 0) ⋯ U(−us−ξ_{i−1}, us−ξ_{i−1}; 0) U(−us−ξ_{i+1}, us−ξ_{i+1}; 0) ⋯ U(−us−ξ_n, us−ξ_n; 0),

where φ(t) = (1/√(2π)) e^{−t²/2}; this is negative if ξ_i > 0 and positive if ξ_i < 0. Hence, by the symmetry in the variables, we get

(5.1.6)  ∂ψ/∂ξ_i < 0 if ξ_i > 0, and > 0 if ξ_i < 0,  (i = 1, …, n).

Hence the lemma.
5.2 Power function of the Studentized maximum modulus test. Under the set-up given in (5.1), if the hypothesis is not true, let x_1, …, x_n be independent N(ξ_i, σ²), (i = 1, …, n).

The second kind of error of the Studentized maximum modulus test is

(5.2.1)  β = P{ |x|/s ≤ u } = P{ |x_i|/s ≤ u for all i = 1, …, n } = I(0, ∞; m) U(−us, us; ξ_1) ⋯ U(−us, us; ξ_n),

where u is so chosen that, when ξ_1 = ⋯ = ξ_n = 0, (5.2.1) equals 1 − α; |x| is the maximum of |x_1|, |x_2|, …, |x_n|; and α is the given level of significance of the test.

We shall now prove the following properties of the Studentized maximum modulus test.

Property I. The Studentized maximum modulus test is completely unbiased.

Proof: The proof follows from lemma 1 and (5.2.1).

Property II. The power function of the Studentized maximum modulus test is a monotonically increasing function of each of the absolute values of the deviation parameters ξ_1, …, ξ_n separately.

Proof: The proof follows from lemma 2 and (5.2.1).
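Both properties can be checked numerically from (5.2.1), writing U in terms of the normal c.d.f. and averaging over s = χ_m/√m. The sketch below (an illustration, not from the report) does the s-integral by Simpson's rule; the values of u, m and n used in the check are assumptions.

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def second_kind_error(u, xis, m, steps=4000, upper=6.0):
    # (5.2.1): beta = E_s[ prod_i ( Phi(u s - xi_i) - Phi(-u s - xi_i) ) ],
    # where s = chi_m/sqrt(m) has density
    # 2 (m/2)^(m/2) s^(m-1) exp(-m s^2/2) / Gamma(m/2).
    const = 2.0 * (m / 2.0) ** (m / 2.0) / math.gamma(m / 2.0)
    h = upper / steps
    total = 0.0
    for j in range(steps + 1):
        s = j * h
        f = const * s ** (m - 1) * math.exp(-m * s * s / 2.0)
        for xi in xis:
            f *= norm_cdf(u * s - xi) - norm_cdf(-u * s - xi)
        w = 1.0 if j in (0, steps) else (4.0 if j % 2 else 2.0)
        total += w * f
    return total * h / 3.0
```

β falls as any |ξ_i| grows (Property II), is largest at ξ = 0 (Property I, complete unbiasedness), and β(ξ) = β(−ξ) by the symmetry in the variables.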
Property III.

(5.2.3)  P{ ∩_{i=1}^n ( |x_i| ≤ us ) } ≥ ∏_{i=1}^n P{ |x_i| ≤ us }.

Proof: This follows from the result given in [11]. Notice that the right side of (5.2.3) is easy to evaluate, and hence we can easily obtain an upper bound to the error probability of the first kind.
APPENDIX
TABLE 1

Values of χ_1'² and χ_2'² (see Chapter I) for α = .05 and for different values of n', where n' = n − 1 is the d.f. of χ².

 n'    χ_1'²    χ_2'²         n'    χ_1'²    χ_2'²
  1    .0332     7.82         13     5.32    25.90
  2    .08       9.53         14     5.95    27.26
  3    .30      11.19         16     7.24    29.95
  4    .61      12.80         18     8.58    32.61
  5    .99      14.37         20     9.96    35.23
  6    1.43     15.90         22    11.36    37.82
  7    1.90     17.39         24    12.79    40.39
  8    2.41     18.86         26    14.24    42.93
  9    2.95     20.31         28    15.71    45.45
 10    3.52     21.73         30    17.21    47.96
 11    4.10     23.13         40    24.86    60.32
 12    4.70     24.52         60    40.93    84.23
TABLE 2*

Values of F_1 and F_2 (see Chapter I) for α = .05 and for different values of n_1' and n_2', where n_1 = n_1' − 1 and n_2 = n_2' − 1 are the d.f. of F.

 n_2\n_1    2     4     6     8    10    12    16    20    24    30
   2      39.0  30.5  28.0  26.8  26.1  25.6  25.1  24.8  24.6  24.4
   4      12.9  9.60  8.56  8.05  7.75  7.55  7.30  7.16  7.06  6.97
   6      9.14  6.64  5.82  5.42  5.17  5.01  4.81  4.69  4.61  4.53
   8      7.73  5.53  4.80  4.43  4.21  4.07  3.88  3.77  3.69  3.62
  10      7.00  4.95  4.27  3.93  3.72  3.58  3.40  3.29  3.22  3.14
  12      6.56  4.61  3.95  3.62  3.41  3.28  3.10  3.00  2.93  2.85
  16      6.05  4.21  3.58  3.26  3.06  2.93  2.75  2.65  2.58  2.51
  20      5.76  3.98  3.37  3.06  2.87  2.74  2.56  2.46  2.39  2.32
  24      5.58  3.84  3.24  2.93  2.74  2.61  2.43  2.34  2.27  2.20
  30      5.41  3.70  3.12  2.81  2.62  2.49  2.31  2.22  2.15  2.07

*The values given in the table are F_2. To obtain the value of F_1 for n_1', n_2', take the reciprocal of F_2 with n_2', n_1'.
TABLE 3

Upper 5 per cent points of u (see Chapter IV) for different values of m and n when k = 2.

 m\n     0      1      2      3      4      5
  2     6.90   5.82   5.38   5.14   4.98   4.89
  3     5.86   4.83   4.33   4.19   4.04   3.96
  4     5.32   4.34   3.93   3.71   3.56   3.48
  5     4.99   4.03   3.63   3.42   3.27   3.19
  6     4.77   3.83   3.44   3.21   3.06   2.98
  7     4.65   3.69   3.29   3.07   2.92   2.85
  8     4.51   3.57   3.19   2.96   2.82   2.75
  9     4.41   3.46   3.11   2.88   2.75   2.66
  ∞     3.69   2.79   2.41   2.19   2.05   1.94
BIBLIOGRAPHY

[1]  "Properties of Sufficiency and Statistical Tests," Proceedings of the Royal Society of London, 160 A (1937), 268-282.

[2]  and Nair, U. S., "A Note on Certain Methods of Testing for the Homogeneity of a Set of Estimated Variances," Supplement to the Journal of the Royal Statistical Society, VI (1939), 89-99.

[3]  "On the Power of the L_1 Test for Equality of Several Variances," Annals of Mathematical Statistics, X (1939), 119-128.

[4]  Cochran, W. G., "The Distribution of the Largest of a Set of Estimated Variances as a Fraction of Their Total," Annals of Eugenics, XI (1941), 47-52.

[5]  "Upper 5 and 1 Per Cent Points of the Maximum F-Ratio," Biometrika, XXXIX (1952), 422-424.

[6]  Dunnett, C. W., and Sobel, M., "A Bivariate Generalization of Student's Distribution, with Tables for Certain Special Cases," Biometrika, XLI (1954), 153-169.

[7]  Eisenhart, Hastay and Wallis, Techniques of Statistical Analysis, McGraw-Hill Book Company, Inc., First edition (1947).

[8]  Ghosh, M. N., "Simultaneous Test of Linear Hypotheses by the Method of Analysis of Variance," (unpublished manuscript), 1953.

[9]  Hartley, H. O., "Testing the Homogeneity of a Set of Variances," Biometrika, XXXI (1939-40), 249-255.

[10] "The Maximum F-Ratio as a Short Cut Test for Heterogeneity of Variance," Biometrika, XXXVII (1950), 308-312.

[11] "On Dependent Tests of Significance in the Analysis of Variance," Annals of Mathematical Statistics, XXII (1951), 600-602.

[12] Neyman, J., and Pearson, E. S., "On the Problem of k Samples," Bulletin de l'Académie Polonaise des Sciences et des Lettres, Série A (1931), 460-468.

[13] Pearson, K., Tables of the Incomplete Beta Function, London, The "Biometrika" Office, 1934.

[14] Pearson, K., Tables of the Incomplete Γ-Function, Cambridge University Press, 1946.

[15] Pillai, K. C. S., "On the Distributions of Midrange and Semirange in Samples from a Normal Population," Annals of Mathematical Statistics, XXI (1950), 100-105.

[16] Pillai, K. C. S., and Ramachandran, K. V., "On the Distribution of the Ratio of the i-th Observation in an Ordered Sample from a Normal Population to an Independent Estimate of the Standard Deviation," Annals of Mathematical Statistics, XXV (1954), 565-572.

[17] Roy, S. N., "Notes on Testing Composite Hypotheses II," Sankhyā, IX (1948-49), 19-38.

[18] Roy, S. N., "On a Heuristic Method of Test Construction and Its Use in Multivariate Analysis," Annals of Mathematical Statistics, XXIV (1953), 220-238.

[19] Roy, S. N., "Lecture Notes on Multivariate Analysis," (unpublished).

[20] Roy, S. N., and Bose, R. C., "Simultaneous Confidence Interval Estimation," Annals of Mathematical Statistics, XXIV (1953), 513-536.

[21] "A Method for Judging All Contrasts in the Analysis of Variance," Biometrika, XL (1953), 87-104.

[22] Thompson, C. M., and Merrington, M., "Tables for Testing the Homogeneity of a Set of Estimated Variances," Biometrika, XXXIII (1946), 296-304.

[23] Tukey, J. W., "Allowances for Various Types of Error Rates," (unpublished invited address, Blacksburg meeting of the Institute of Mathematical Statistics, March, 1952).