Mudholkar, G. S. (1963). "Some contributions to the theory of univariate and multivariate statistical analysis."

SOME CONTRIBUTIONS TO THE THEORY OF UNIVARIATE
AND MULTIVARIATE STATISTICAL ANALYSIS

Govind Shrikrishna Mudholkar

Institute of Statistics
Mimeo Series No. 360
May 1963

UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.

SOME CONTRIBUTIONS TO THE THEORY OF UNIVARIATE
AND MULTIVARIATE STATISTICAL ANALYSIS

by
Govind Shrikrishna Mudholkar

May 1963
Grant No. AF AFOSR-84-63
A number of problems in univariate and multivariate statistical analysis are considered. Extensive classes of solutions with desirable properties are obtained for some of them. Properties from the classical and modern viewpoint are investigated.

This research was supported by the Mathematics Division of the Air Force Office of Scientific Research.

Institute of Statistics
Mimeo Series No. 360
ACKNOWLEDGEMENTS

It is a pleasure to acknowledge my gratitude to all those who have been of help, direct and indirect, during the course of this work. I can mention here but a few of them.

I am highly indebted to Professor S. N. Roy for getting me started on this work and providing encouragement and valuable guidance at the early stages.

To Dr. W. J. Hall I extend very sincere thanks for agreeing to be the chairman of my examination committee and for reading the manuscript very carefully and suggesting many valuable improvements.

To Dr. W. Hoeffding and Dr. Harold Hotelling I extend warm appreciation for their valuable criticism.

I am very thankful to Dr. George E. Nicholson, Jr. for his help and advice during my study here. Through him I wish to thank the faculty of the Department for excellent teaching of a balanced and ambitious graduate program in statistics, without which this work would have been impossible.

To the members of the office staff of the Department I am grateful; in particular to Miss Martha Jordan for her kindness and to Mrs. Doris Gardner for excellent typing of this dissertation from a messy manuscript.

I am very thankful to the Mathematics Division of the Air Force Office of Scientific Research and the H. E. Fund, Bombay, for the financial assistance which made this work possible.

In conclusion I must record, although very inadequately, my sentiments towards my family. I owe them most for their unfailing faith in me.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS

INTRODUCTION AND SUMMARY

CHAPTER I. FOUR PROBLEMS IN MULTIVARIATE ANALYSIS
  1.0 Introduction and Summary
  1.1 The Multivariate Analysis of Variance
    1.1.1 The multivariate linear model and the general multivariate linear hypothesis
    1.1.2 Canonical form and notation
    1.1.3 Preliminaries and development
  1.2 The Problem of Testing Independence Between Two Sets of Variates
    1.2.1 The model, the hypothesis and reduction
    1.2.2 Canonical form
    1.2.3 Further reduction
    1.2.4 Preliminaries and development
  1.3 Testing the Equality of Two Dispersion Matrices
    1.3.1 The model and the hypothesis
    1.3.2 Canonical form
    1.3.3 The development
  1.4 Testing Whether a Dispersion Matrix Equals a Given Matrix
    1.4.1 The model and the hypothesis
    1.4.2 Canonical form
    1.4.3 The development

CHAPTER II. SOME PROPERTIES OF PERCENTAGE POINTS AND PROBABILITY INTEGRALS OF SOME RANDOM VARIABLES
  2.0 Introduction and Summary
  2.1 Some Properties of Percentage Points and Integrals Connected with Chi-Square Tests
  2.2 Some Properties of Percentage Points and Integrals Connected with Variance-Ratio Tests
  2.3 Some Properties of Percentage Points and Integrals Associated with Some Multivariate Tests
    2.3.1 Power function of T² tests
    2.3.2 Percentage points of the distribution of a class of statistics

CHAPTER III. A STUDY OF SOME UNION-INTERSECTION PROCEDURES
  3.0 Introduction and Summary
  3.1 A Problem due to Neyman and Pearson
  3.2 The Union-Intersection Principle
  3.3 Simultaneous Analysis of Variance
    3.3.1 Model
    3.3.2 Canonical form
    3.3.3 Comparison of the analysis of variance test and the simultaneous analysis of variance test
  3.4 Modified Simultaneous Analysis of Variance, a Multiple Decision Procedure
  3.5 A Property of the Power Functions of Some Intersection Tests

CHAPTER IV. SIMULTANEOUS INTERVAL ESTIMATION
  4.0 Introduction and Summary
  4.1 Simultaneous Interval Estimation
  4.2 Simultaneous Interval Estimation in the Univariate Analysis of Variance
    4.2.1 Univariate model
    4.2.2 Univariate estimation
    4.2.3 Applications
  4.3 Simultaneous Interval Estimation in Multivariate Analysis of Variance
    4.3.1 Multivariate model
    4.3.2 Multivariate estimation
    4.3.3 Applications
  4.4 Comparison of Various Bounds

BIBLIOGRAPHY
INTRODUCTION AND SUMMARY

Statistical problems which do not admit 'known optimum' solutions are not rare; but nonexistence of 'known optimum' procedures is almost a characteristic property of multivariate problems, that is, problems associated with the statistical analysis of multiresponse data. It has been generally recognized that these problems can be reduced to an extent by invariance considerations. One of the earliest explicit uses of the invariance method is due to Hotelling [23] in his treatment of a multivariate problem of two sets of variables. The invariance reduction, however, does not lead to a unique solution. Under these circumstances, reasonable heuristic principles like the likelihood ratio principle and the union-intersection principle, which are known to have some good properties, have been used to obtain valid and workable procedures for many of these problems. All these procedures have the desirable invariance properties. There is, thus, no available way of deciding under what conditions one of these procedures is preferable to the others. Therefore the current interest is in studying intrinsic and comparative properties of these procedures (see, for instance, Ito [21]).

Roy and Mikhail [51] and Mikhail [32] have shown that, in the case of many of the multivariate testing problems, the power functions of the union-intersection tests have certain monotonicity properties. In the first chapter of this dissertation we shall consider four multivariate problems of testing hypotheses. For each of these problems we shall obtain an infinite class of invariant procedures with power functions having the desirable monotonicity properties.
In the second chapter of this dissertation we shall obtain a number of properties of percentage points and probability integrals associated with chi-square tests and variance-ratio tests in univariate statistical analysis and with some tests in multivariate statistical analysis. Some of these properties have been generally known from tables and charts, but proofs are not known to exist in the related literature. Our proofs shall be based on some results in the classical theory of testing statistical hypotheses.

The studies in chapter two have some bearing on the study of some union-intersection procedures, which we shall undertake in the third chapter. The problems discussed here have the characteristic property mentioned in the first paragraph of this introduction. Mikhail [32] has used a method due to Stein [52], with slight modification, to prove admissibility of some union-intersection tests in multivariate analysis. We shall use a similar argument for comparing the power functions of some union-intersection procedures. More specifically, we shall compare a simultaneous analysis of variance procedure with the corresponding analysis of variance procedure for distant restricted alternatives. Also, we shall suggest a modified simultaneous analysis of variance procedure and study it as a multiple decision procedure.

In the fourth chapter, we shall obtain a result with a view to comparing the sharpness of various multiple comparison methods in the analysis of variance.
CHAPTER I

FOUR PROBLEMS IN MULTIVARIATE ANALYSIS¹

1.0 Introduction and Summary

In this chapter we shall consider the following four problems in multivariate analysis:

(i) testing the general multivariate linear hypothesis, i.e. multivariate analysis of variance, abbreviated MANOVA,

(ii) testing independence between two sets of variates, to be called the 'independence' problem,

(iii) testing equality of two dispersion matrices, to be called the 'equidispersion' problem, and

(iv) testing that a dispersion matrix is equal to a given matrix.

Each of these problems can be reduced to an extent by the method of invariance. Each of the tests which has been suggested for these problems is based on the characteristic roots of some matrices, and these roots are the maximal invariants under the invariance reduction. None of these tests, however, is known to be optimum in a sense which would give it precedence over the others. The current interest is, therefore, in the investigation of good properties of these tests and their comparisons (Ito [21]²).

¹This research was supported by the Mathematics Division of the Air Force Office of Scientific Research.

²Numbers in square brackets refer to the bibliography.
The main competitors for each of the above problems are the tests given by the likelihood ratio principle and the union-intersection principle, except that for the MANOVA problem Hotelling's trace criterion is also very popular.

It has been shown by Roy and Mikhail [51] and Mikhail [32] that the power functions of the union-intersection tests for the above problems increase monotonically as each of a number of noncentrality parameters, which can be regarded as measures of departure from the hypotheses, increases separately. This property implies that these tests are unbiased. Somewhat weaker results in this direction were previously obtained by Anderson [2] and Narain [42]. We shall show that for each of these problems there exists an infinite class of test procedures, characterized by the ordered characteristic roots of certain matrices and their elementary symmetric functions, which have the same or similar properties.

More specifically, for the first two of the above problems, we shall prove that any test with acceptance region of either of the two special forms given below has the monotonicity property. The two forms are

    a'e <= constant    and    a'λ <= constant ,

where λ and e are the vectors of ordered characteristic roots of certain matrices and of their elementary symmetric functions respectively, and a is a vector of arbitrary nonnegative constants a_i (the a_i corresponding to the largest characteristic root being strictly positive). The largest root test, the likelihood ratio test and the trace criterion belong to this class and thus share the monotonicity property. The third and the fourth of the above problems are very similar from the point of view of this investigation. For the third problem we have obtained a class of test procedures, characterized by the form of their acceptance regions as above, which have a weaker monotonicity property. After observing that the class of procedures suggested for the third problem has as its counterpart a class of test procedures for the fourth problem, members of which have analogous weaker monotonicity properties, we shall show that a subclass of the procedures in this class has a stronger monotonicity property.
1.1 The Multivariate Analysis of Variance

1.1.1 The Multivariate Linear Model and the General Multivariate Linear Hypothesis

Let z_1, z_2, ..., z_N be a sample of N independent observations from a p-variate normal population with a common covariance matrix Σ, and means given by

    E(Z') = D ξ ,

with Z' of order N x p, D of order N x m and ξ of order m x p, where

(i) Z (p x N) denotes the matrix formed by z_1, ..., z_N as columns,

(ii) D (N x m) is the design matrix of known constants determined by the design of the experiment, with rank(D) = r <= m <= N and a basis D_1, so that D = [D_1 : D_2] with D_1 of order N x r and D_2 of order N x (m - r), and

(iii) ξ (m x p) is a matrix of unknown parameters, ξ' = [ξ_1' : ξ_2'] being the partition of ξ associated with the partition D_1, D_2 of D.
Under this set-up the general multivariate linear hypothesis is

    H_0:  C ξ U = 0     against    H_1:  C ξ U != 0 ,

with C of order s x m, ξ of order m x p, U of order p x u and 0 of order s x u, where C and U are matrices given by the hypothesis with rank(C) = s <= r <= m < N, the partition C_1, C_2 of C is the partition associated with the partition D_1, D_2 of D, and rank(U) = u <= p.
It is well known (Roy [42]) that this set-up can be reduced to the following. Let z*_1, z*_2, ..., z*_s, z*_{s+1}, ..., z*_r, z*_{r+1}, ..., z*_N be a sample of N independent observations from a u-variate normal population with a common covariance matrix and means

    E(z*_i) = ξ*_i ,   i = 1, 2, ..., r ;
    E(z*_i) = 0 ,      i = r+1, ..., N .

In this reduced set-up the general multivariate linear hypothesis H_0 reduces to

    H*_0:  ξ*_1 = ξ*_2 = ... = ξ*_s = 0

against H*_1: ξ*_i = η*_i, i = 1, 2, ..., s, where not all η*_i are 0 simultaneously. Let Z*_1 = [z*_1, ..., z*_s] and Z*_3 = [z*_{r+1}, ..., z*_N]. Then all the suggested tests for the problem use symmetric functions of the characteristic roots of

    (Z*_1 Z*_1')(Z*_3 Z*_3')^{-1}

as the test statistics.
Roy [42] has shown that the power function of the maximum root test involves, aside from the degrees of freedom, only t = min(u, s) noncentrality parameters, which are the characteristic roots of a matrix of parameters. His argument can, however, be simplified as follows.

The problem of testing the general multivariate linear hypothesis is known to be invariant under the following groups of transformations (Anderson [2], Lehmann [32], Roy [42], James [25]):

G_1: Addition of arbitrary constants to the components of the variables z*_{s+1}, ..., z*_r.

G_2: Orthogonal transformations of the structure

    Z**_1 = A_1 Z*_1   and   Z**_3 = A_3 Z*_3

on the matrices Z*_1 (u x s) and Z*_3 (u x (N-r)), where A_1 and A_3 are real u x u orthogonal matrices.

G_3: Transformations of the structure Z**_1 = B Z*_1 and Z**_3 = B Z*_3, where B is any u x u nonsingular matrix.
Under these groups of transformations, the t = min(u, s) roots of the determinantal equation

    | Z*_1 Z*_1' - λ Z*_3 Z*_3' | = 0 ,

or, in terms of the original variates, the roots of the determinantal equation

    | S_{H_0} - λ S_E | = 0 ,

where S_{H_0} and S_E are two u x u matrices, called the hypothesis matrix and the error matrix respectively, with

    S_{H_0} = U' Z D_1 (D_1'D_1)^{-1} C_1' [C_1 (D_1'D_1)^{-1} C_1']^{-1} C_1 (D_1'D_1)^{-1} D_1' Z' U ,

    S_E = U' [ Z Z' - Z D_1 (D_1'D_1)^{-1} D_1' Z' ] U ,

form a set of maximal invariants. A test for H_0 will be invariant under G_1, G_2 and G_3 if, and only if, it depends on the N observations through the t roots of the above determinantal equation.
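This invariance is easy to check numerically: a transformation in G_3 replaces S_{H_0} and S_E by B S_{H_0} B' and B S_E B', which leaves the roots of |S_{H_0} - λ S_E| = 0 unchanged, since B S_{H_0} B' (B S_E B')^{-1} is similar to S_{H_0} S_E^{-1}. A minimal sketch (the simulated matrices and the helper `roots` are illustrative, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
u, s, m = 3, 4, 10

Z1 = rng.standard_normal((u, s))   # plays the role of Z*_1 (hypothesis part)
Z3 = rng.standard_normal((u, m))   # plays the role of Z*_3 (error part)

SH = Z1 @ Z1.T                     # hypothesis matrix
SE = Z3 @ Z3.T                     # error matrix

def roots(SH, SE):
    # roots of |SH - lambda SE| = 0, i.e. eigenvalues of SH SE^{-1}
    return np.sort(np.linalg.eigvals(SH @ np.linalg.inv(SE)).real)

B = rng.standard_normal((u, u))    # any nonsingular u x u matrix (a G_3 element)
lam0 = roots(SH, SE)
lam1 = roots(B @ SH @ B.T, B @ SE @ B.T)
assert np.allclose(lam0, lam1)     # the t roots are invariant under G_3
```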
Each of the above groups of transformations induces a corresponding group of transformations in the space of the parameters Σ and ξ_i, i = 1, 2, ..., r. A set of maximal invariants under these induced groups is the set of t noncentrality parameters, which are the roots of the determinantal equation

    | Γ - θ I | = 0 ,

where

    Γ = (U' Σ U)^{-1} U' ξ_1' C_1' [C_1 (D_1'D_1)^{-1} C_1']^{-1} C_1 ξ_1 U .

Then a reference to the following theorem (Lehmann [32]) shows that the power functions of all the tests based on the characteristic roots of S_{H_0} S_E^{-1} depend only on the noncentrality parameters mentioned above.
Theorem: Let X be distributed according to a probability density p_θ(x), θ in Ω. Let G be a group of transformations of the sample space onto itself, and let G* be the group of transformations of the parameter space induced by G. Let T(X) be invariant under G. Then, if V(θ) is maximal invariant under the induced group G*, the distribution of T(X) depends only on V(θ).

When t = 1 there exists a UMP invariant test, which coincides with the T²-test due to Hotelling [22]. But for t > 1 the problem of MANOVA, even though considerably reduced, does not have an optimum test. There are three well-known tests which are all invariant under G_1, G_2 and G_3. These are, in terms of critical regions:

(i) Roy's maximum characteristic root test (Roy [41, 42]):

    ch_max(S_{H_0} S_E^{-1}) >= constant ,

(ii) the likelihood ratio test (Wilks [62], Rao [46]):

    Λ = |S_E| / |S_{H_0} + S_E| <= constant ,

(iii) Hotelling's T_0² test (Hotelling [22]):

    trace(S_{H_0} S_E^{-1}) >= constant .

Pillai [44] has suggested three other functions of the characteristic roots as test statistics; but, because of their arbitrary nature, we do not consider them here.
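In terms of the ordered roots λ_i of S_{H_0} S_E^{-1}, the three criteria are λ_u, Λ = Π(1 + λ_i)^{-1} and Σ λ_i. A short numerical illustration (simulated matrices standing in for S_{H_0} and S_E; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
u = 3
Z1 = rng.standard_normal((u, 5))
Z3 = rng.standard_normal((u, 12))
SH, SE = Z1 @ Z1.T, Z3 @ Z3.T          # hypothesis and error matrices

# ordered characteristic roots of SH SE^{-1}
lam = np.sort(np.linalg.eigvals(SH @ np.linalg.inv(SE)).real)

max_root = lam[-1]                     # Roy's largest root criterion
wilks    = np.prod(1.0 / (1.0 + lam))  # likelihood ratio |SE| / |SH + SE|
trace    = lam.sum()                   # Hotelling's trace criterion

# Wilks' Lambda agrees with the determinant ratio, the trace with trace(SH SE^{-1}):
assert np.isclose(wilks, np.linalg.det(SE) / np.linalg.det(SH + SE))
assert np.isclose(trace, np.trace(SH @ np.linalg.inv(SE)))
```

All three statistics are functions of the roots alone, which is exactly the invariance property used in the text.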
If we wish to restrict to tests which are invariant under the groups G_1, G_2, G_3, we can use the following canonical form due to Roy and Mikhail [51], which can be obtained by transformations in G_1, G_2 and G_3.
1.1.2 Canonical Form and Notation

Let X = (x_ij), i = 1, 2, ..., u, j = 1, 2, ..., s, and Y = (y_ij), i = 1, 2, ..., u, j = 1, 2, ..., n, be two matrices of random variables with joint probability density

    constant . exp{ -(1/2) [ Σ_{i=1}^u Σ_{j=1}^n y_ij²  +  Σ_{i=1}^t (x_ii - θ_i)²  +  Σ' x_ij² ] }  Π dx_ij Π dy_ij ,

where Σ' denotes the sum over the remaining elements x_ij of X, and where u and s are the same as in the original model, namely the effective number of variates and the number of degrees of freedom for the hypothesis; n = N - r is the number of degrees of freedom for error; and the quantities θ_i² are the t = min(u, s) noncentrality parameters mentioned above.

The general multivariate linear hypothesis in canonical form is

    H_0:  θ_1 = θ_2 = ... = θ_t = 0

against

    H_1:  at least one θ_i != 0 ,   i = 1, 2, ..., t .

Under this canonical form we shall restrict to tests which involve the x_ij's and y_ij's only through the characteristic roots of (XX')(YY')^{-1}. We shall order these roots and denote them by λ_1 <= λ_2 <= ... <= λ_u. Let

    e_1 = Σ_i λ_i ,   e_2 = Σ_{i<j} λ_i λ_j ,   ... ,   e_u = λ_1 λ_2 ... λ_u

be the u elementary symmetric functions of the u roots. Obviously only t of the λ's and e's will be nonzero. Let

    λ = (λ_1, ..., λ_u)'   and   e = (e_1, ..., e_u)' .
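The roots and their elementary symmetric functions can be computed directly; the following sketch (illustrative dimensions u = 4, s = 2, so t = 2; not part of the original text) also checks that only t of the λ's, and hence only e_1, ..., e_t, are nonzero:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
u, s, n = 4, 2, 9                  # s < u, so t = min(u, s) = 2
X = rng.standard_normal((u, s))
Y = rng.standard_normal((u, n))

# ordered characteristic roots of (XX')(YY')^{-1}
lam = np.sort(np.linalg.eigvals(X @ X.T @ np.linalg.inv(Y @ Y.T)).real)
lam[np.abs(lam) < 1e-10] = 0.0     # clean tiny numerical noise in the zero roots

# elementary symmetric functions e_1, ..., e_u of the u roots
e = [sum(np.prod(c) for c in combinations(lam, j)) for j in range(1, u + 1)]

assert np.sum(lam > 1e-10) == 2    # only t = min(u, s) roots are nonzero
assert np.allclose(e[2:], 0.0)     # hence e_j = 0 for j > t
```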
We shall denote a test by φ = φ(X, Y), its acceptance region in the space of (X, Y) by A_φ, and the boundary of the acceptance region by B_φ. We shall be interested in certain sections of A_φ. To describe these sections, let us express the matrix X as

    X = [x_1 : X_1] .

Then the sections we shall be interested in are those of A_φ for fixed X_1 and Y. We denote these and their boundaries, respectively, by A_{φ,X_1,Y} and B_{φ,X_1,Y}, both in the space of x_1' = (x_11, x_21, ..., x_u1).
1.1.3 Preliminaries and Development

Roy and Mikhail [51] have shown that the power function of φ with

    A_φ:  λ_u <= constant

is a monotonically increasing function of θ_1, ..., θ_t separately. In this section we shall obtain two infinite classes of tests for which this property holds. For this we shall need to investigate some properties of φ, which we now define.

Definition 1: A test φ for the MANOVA problem is said to have the 'monotonicity' property if its power function increases monotonically as each of the noncentrality parameters |θ_i|, i = 1, 2, ..., t, increases separately.

Definition 2: A test φ for the MANOVA problem is said to have the 'property Q' if B_{φ,X_1,Y} is a homogeneous quadric in the space of the components x_11, ..., x_u1 of x_1.
Definition 3: A test φ for the MANOVA problem is said to have the 'property S' if x_1 in A_{φ,X_1,Y} implies c x_1 in A_{φ,X_1,Y}, 0 <= c <= 1.

Definition 4: A test φ for the MANOVA problem is said to have the 'property J' if the intersection of A_{φ,X_1,Y} and any line L, through the origin in the space of x_1, is a finite segment of the line.
Now we shall prove a theorem, which is implicit in Roy and Mikhail [51], connecting these properties.

Theorem 1: If a test φ for the MANOVA problem has all three properties Q, S and J, then it has the monotonicity property.

Proof: We have to show that the integral of the density over A_φ decreases as each |θ_i|, i = 1, 2, ..., t, increases separately; or, equivalently, that the integral, with θ_i = 0 in the integrand, decreases monotonically as A_φ is given a translation composed of translations θ_i, i = 1, 2, ..., t, in the directions of the x_ii, i = 1, 2, ..., t. We assert that, because of the form of the integrand, we may give these translations successively. Moreover, as A_φ involves the x_ij's only through the characteristic roots of (XX')(YY')^{-1}, whatever is true of x_1 will be true of all the columns of X. It will thus be sufficient to show that the integral, with θ_i = 0, i = 1, 2, ..., t, in the integrand, decreases as A_φ is given a translation θ_1 in the direction x_11.

To do this, observe that A_{φ,X_1,Y}, because of the properties Q, S and J, must be an ellipsoid in the space of (x_11, ..., x_u1). This ellipsoid can always be referred to its principal axes; in other words, there always exists an orthogonal transformation

    z = M x_1 ,   M (u x u) orthogonal,

such that z_1, ..., z_u are the principal axes of the ellipsoid. If the rows of M are (m_i1, ..., m_iu), i = 1, 2, ..., u, then a translation θ_1 in the direction x_11 is equivalent to translations m_11 θ_1 along z_1, m_21 θ_1 along z_2, ..., m_u1 θ_1 along z_u. Now the equation of an ellipsoid referred to its principal axes is free from product terms, and

    ∫_a^b (2π)^{-1/2} e^{-x²/2} dx

decreases monotonically as |a + b| increases. Hence we have the theorem.
We shall now proceed to develop the relationships among these properties in the following lemmas and theorems.

Lemma 1: A test φ for the MANOVA problem with

    B_φ:  a'λ = constant   or   B_φ:  a'e = constant ,

where a' = (a_1, a_2, ..., a_u) is a fixed real vector, has the property Q.

Proof: (YY') is a u x u symmetric and p.d. (a.e.) matrix. Therefore there exists a nonsingular u x u triangular matrix V, with zeros above the diagonal, such that

    YY' = V V' .

Then the characteristic roots of (XX')(YY')^{-1} are the same as those of

    S* = X* X*' ,   where   X* = V^{-1} X .

Now consider the determinantal equation

    | S* - ν I | = 0 .

We can write this as

    ν^u - t_1 ν^{u-1} + t_2 ν^{u-2} - ... ± t_u = 0 ,

where t_j is the sum of all (u choose j) j-rowed principal minors of S*, j = 1, 2, ..., u. Thus t_1 = trace(S*) and t_u = |S*|. It is clear from the theory of equations that t_j = e_j, the j-th elementary symmetric function of the roots of the equation.

Now let us fix X_1 and Y, i.e. V, and investigate the nature of e_j as a function of x_1. When X_1 and V are fixed, the elements of x*_1, the first column of X*, are fixed linear functions of the elements of x_1, and X*_1, the submatrix of X* corresponding to X_1, is fixed. Then any j-rowed principal minor of

    S* = X* X*' ,   X* = [x*_1 : X*_1] ,

is a homogeneous quadratic function of (x*_11, ..., x*_u1), with coefficients which are functions of the other x*_ij's, plus a constant which is also a function of the other x*_ij's. But (x*_11, ..., x*_u1) are linear functions of (x_11, ..., x_u1). Therefore, given Y and X_1, any e_j, and hence any linear combination a'e of the e_j, is a homogeneous quadratic function of (x_11, ..., x_u1), with coefficients which are functions of Y and X_1, plus a constant depending upon Y and X_1. Thus

    B_φ:  a'e = constant

is a homogeneous quadric in the space of x_1 when (X_1, Y) is fixed. This proves the first half of the lemma.

For the second part we observe that if λ_j is a root of (XX')(YY')^{-1}, it satisfies the equation |S* - λ_j I| = 0. Thus, if X_1 and Y are fixed, a fixed linear combination a'λ of the λ_j's, equated to a constant, again defines a homogeneous quadric in the space of x_1.
An immediate consequence of this lemma is the following theorem.

Theorem 2: Any test φ for the MANOVA problem with the properties S and J and with

    B_φ:  a'e = constant   or   B_φ:  a'λ = constant ,

where a' = (a_1, ..., a_u) is a real vector, has the monotonicity property.

Examples: The maximum characteristic root test has B_φ: a'λ = constant with a' = (0, 0, ..., 1). It has been shown that it has the properties S and J (Roy and Mikhail [51]). The trace test has B_φ of either of the following two forms:

    B_φ:  a'λ = constant ,   a' = (1, 1, ..., 1) ,
or
    B_φ:  a'e = constant ,   a' = (1, 0, ..., 0) .

The likelihood ratio criterion has B_φ of the form

    B_φ:  a'e = constant ,   a' = (1, 1, ..., 1) .

In what follows we shall show that both these tests have the properties S and J. It will then follow that all three tests have the monotonicity property.
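The identity behind the likelihood ratio example is Π(1 + λ_i) = 1 + e_1 + ... + e_u, so a level set of Λ = Π(1 + λ_i)^{-1} is a level set of a'e with a' = (1, 1, ..., 1). A one-line numerical check (arbitrary nonnegative roots; illustrative only):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
lam = np.abs(rng.standard_normal(4))   # any nonnegative roots will do

# elementary symmetric functions e_1, ..., e_4 of the roots
e = [sum(np.prod(c) for c in combinations(lam, j)) for j in range(1, 5)]

# product form of the likelihood ratio vs. linear form in the e_j
assert np.isclose(np.prod(1 + lam), 1 + sum(e))
```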
Now we shall state a lemma which will be very useful in the sequel.

Lemma 2: If ν_1 <= ... <= ν_u are the u ordered characteristic roots of a symmetric u x u matrix A, and ω_1 <= ... <= ω_u are the u ordered characteristic roots of the matrix (A + B), where B is a symmetric, at least p.s.d. matrix, then

    ν_j <= ω_j ,   j = 1, 2, ..., u ,

with strict inequality if B is p.d.

This result, though known, is not very widely available. Bellman [6] has given a proof which involves properties of continuous functions. A matrix proof of this result has been given by Everitt [11].
Lemma 3: Let λ_1(c) <= ... <= λ_u(c) denote the u ordered characteristic roots of the matrix

    [ c² x_1 x_1' + X_1 X_1' ] (YY')^{-1} .

Then λ_j(c), j = 1, 2, ..., u, are nondecreasing functions of |c|.

Proof: We can write

    [ c_2² x_1 x_1' + X_1 X_1' ] (YY')^{-1} = [ c_1² x_1 x_1' + X_1 X_1' ] (YY')^{-1} + [ (c_2² - c_1²) x_1 x_1' ] (YY')^{-1} .

Application of the previous lemma proves that the ordered characteristic roots of [c_2² x_1 x_1' + X_1 X_1'](YY')^{-1} are greater than or equal to the ordered characteristic roots of [c_1² x_1 x_1' + X_1 X_1'](YY')^{-1} if c_2² > c_1².
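A numerical check of Lemma 3 (illustrative x_1, X_1 and Y; `ordered_roots` is a helper defined here, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(5)
u, s, n = 3, 2, 8
x1 = rng.standard_normal((u, 1))   # the column being translated
X1 = rng.standard_normal((u, s))   # the fixed remaining columns
Y  = rng.standard_normal((u, n))
W  = np.linalg.inv(Y @ Y.T)

def ordered_roots(c):
    # ordered characteristic roots of (c^2 x1 x1' + X1 X1')(YY')^{-1}
    M = (c**2 * x1 @ x1.T + X1 @ X1.T) @ W
    return np.sort(np.linalg.eigvals(M).real)

r1, r2 = ordered_roots(1.0), ordered_roots(2.0)
assert np.all(r2 >= r1 - 1e-10)    # each ordered root is nondecreasing in |c|
```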
We shall now prove the following.

Theorem 3: Any test φ for the MANOVA problem with the property J and with

    A_φ:  a'λ <= constant   or   A_φ:  a'e <= constant

has the monotonicity property.

Proof: The tests have the property Q. To see that they also have the property S, we observe that (i) all the ordered characteristic roots are nonnegative, and (ii) the ordered characteristic roots of

    ( c² x_1 x_1' + X_1 X_1' )(YY')^{-1} ,   0 <= c <= 1 ,

are not greater than the ordered characteristic roots of (x_1 x_1' + X_1 X_1')(YY')^{-1}; and the same is true of a linear function a'λ or a'e with nonnegative coefficients. Therefore, if x_1 is in A_{φ,X_1,Y} then c x_1 is in A_{φ,X_1,Y}, 0 <= c <= 1. By the hypothesis of the theorem the tests have the property J. Therefore, by Theorem 1, they have the monotonicity property.

Examples: All three tests mentioned in this section, namely the largest root test, the likelihood ratio test and the trace test, have the properties mentioned in the theorem.

Theorem 4: All tests φ for the MANOVA problem with

    A_φ:  a'e <= constant

have the monotonicity property.

Proof: We know that all the tests φ of the theorem have the property Q and the property S. We shall show that they have the property J. Consider a line L in the space of x_1 with direction cosines (l_1, ..., l_u). The equations of L may be written as

    x_11 / l_1 = x_21 / l_2 = ... = x_u1 / l_u ,

and any point on L may be written as (l_1 r, ..., l_u r), where r is the distance of the point from the origin.

Now if (l_1 r, ..., l_u r) is in A_{φ,X_1,Y}, then at least one elementary symmetric function of the characteristic roots of the matrix

    [ r² l l' + X_1 X_1' ] (YY')^{-1} ,   l' = (l_1, ..., l_u) ,

is nonzero and finite. This implies that the maximum root of the matrix is finite. But, by Lemma 2,

    r² l'(YY')^{-1} l = ch_max[ r² l l' (YY')^{-1} ] <= ch_max[ (r² l l' + X_1 X_1')(YY')^{-1} ] <= constant < ∞ ,

so that r is bounded. This proves that the tests φ have the property J. Hence the theorem is proved.
Theorem 5: All tests φ for the MANOVA problem with

    A_φ:  a'λ <= constant ,   a_u > 0 ,

have the monotonicity property.

Proof: Since a_u > 0 and the λ's are all nonnegative, a'λ <= constant implies that

    λ_u <= constant < ∞ .

The proof of Theorem 4 indicates that this implies that the tests φ have the property J. It has been shown before that the tests have the property Q and the property S. Hence the theorem is proved.

Examples: Theorems 4 and 5 show that the three tests for the MANOVA problem have the monotonicity property.
Remark: We have shown that all three tests for the MANOVA problem have the monotonicity property. For this we have, essentially, verified that these tests have the properties Q, S and J. It may be remarked that for the individual cases the verification of the properties is much easier.
1.2 The Problem of Testing Independence Between Two Sets of Variates

1.2.1 The Model, the Hypothesis and Reduction

Let

    z = [ z_1 ; z_2 ] ,   z_1 of p components, z_2 of q components, p <= q,

be distributed as a (p+q)-variate normal with a symmetric p.d. matrix Σ of order (p+q) x (p+q) as the covariance matrix. Let Σ be partitioned as

    Σ = [ Σ_11 , Σ_12 ; Σ_12' , Σ_22 ] ,

where the p and q rows and columns of Σ correspond, respectively, to the p-set z_1 and the q-set z_2. Let

    Z = [ Z_1 ; Z_2 ]   ((p+q) x n*)

be a sample of n* from the above population. Then the problem is that of testing independence between the p-set and the q-set, i.e.

    H_0:  Σ_12 = 0   (p x q)    against    H_1:  Σ_12 != 0 .

As in the case of the general multivariate linear hypothesis, we can reduce the problem by invariance. The problem of testing H_0 against H_1 remains invariant under the following groups of transformations:

G_1: Addition of arbitrary constants to the elements of Z, i.e.

    Z* = Z + B .
G_2: Nonsingular linear transformations of the variates within the two sets, i.e.

    Z* = C Z ,

where C is a (p+q) x (p+q) nonsingular matrix with the structure

    C = [ C_11 , 0 ; 0 , C_22 ] .

A set of maximal invariants, under the groups of transformations induced in the space of the sufficient statistics, is the set of p characteristic roots of the matrix

    S_11^{-1} S_12 S_22^{-1} S_12' ,

where

    S = Z Z' - n* z̄ z̄' ,   z̄ = (1/n*) Σ_{i=1}^{n*} z_i ,

the z_i being the columns of Z, and S is partitioned as

    S = [ S_11 , S_12 ; S_12' , S_22 ] ,

with p and q rows and columns as for Σ. Also, a set of maximal invariants under the groups of transformations induced by G_1, G_2 in the space of the parameters is the set of p characteristic roots of the matrix

    Σ_11^{-1} Σ_12 Σ_22^{-1} Σ_12' .

These are the squares of the p population canonical correlations (Hotelling [23]).

All the invariant tests of the hypothesis H_0 will involve the observations only through the characteristic roots of S_11^{-1} S_12 S_22^{-1} S_12', and their power functions will depend, aside from the degrees of freedom, only on the characteristic roots of Σ_11^{-1} Σ_12 Σ_22^{-1} Σ_12'.

Thus, as for the MANOVA problem, no UMP invariant test exists for the independence problem. The two well-known tests for the problem are:

(i) the largest root test, with critical region

    ch_max( S_11^{-1} S_12 S_22^{-1} S_12' ) >= constant ,

(ii) the likelihood ratio test, with critical region

    |S| / ( |S_11| |S_22| ) <= constant .

Roy and Mikhail [51] have shown that the power function of the largest characteristic root test is a monotonically increasing function of each of the p characteristic roots of Σ_11^{-1} Σ_12 Σ_22^{-1} Σ_12' separately. We shall show that there exist at least two infinite classes of invariant tests which have this property. We shall work in the following canonical form due to Roy [42].
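The maximal invariant statistics above can be sketched numerically: for a sample from a population with Σ_12 = 0, the p roots of S_11^{-1} S_12 S_22^{-1} S_12' are the squared sample canonical correlations and lie between 0 and 1 (simulated data; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(6)
p, q, nstar = 2, 3, 50
Z = rng.standard_normal((p + q, nstar))       # a sample with Sigma_12 = 0

zbar = Z.mean(axis=1, keepdims=True)
S = Z @ Z.T - nstar * zbar @ zbar.T           # S = Z Z' - n* zbar zbar'

S11, S12, S22 = S[:p, :p], S[:p, p:], S[p:, p:]
R = np.linalg.inv(S11) @ S12 @ np.linalg.inv(S22) @ S12.T

roots = np.sort(np.linalg.eigvals(R).real)    # squared sample canonical correlations
assert roots.shape == (p,)
assert np.all(roots >= -1e-10) and np.all(roots <= 1.0)
```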
1.2.2 Canonical Form

Let X (p x n) and Y (q x n) be random matrices with elements x_ij, i = 1, ..., p, j = 1, ..., n, and y_ij, i = 1, ..., q, j = 1, ..., n, and with probability density

    { (2π)^{(p+q)n/2} Π_{i=1}^p (1 - ρ_i²)^{n/2} }^{-1} exp{ -(1/2) [ Σ_{i=1}^p (1 - ρ_i²)^{-1} Σ_{j=1}^n (x_ij² - 2 ρ_i x_ij y_ij + y_ij²) + Σ_{i=p+1}^q Σ_{j=1}^n y_ij² ] }  Π dx_ij Π dy_ij .

The independence hypothesis is then

    H_0:  ρ_1 = ρ_2 = ... = ρ_p = 0

against

    H_1:  at least one ρ_i != 0 ,   i = 1, 2, ..., p .

The invariant tests will involve the x_ij's and y_ij's only through the characteristic roots of

    (XX')^{-1} (XY') (YY')^{-1} (YX') .

Let us denote these characteristic roots by ν_1 <= ν_2 <= ... <= ν_p.

1.2.3 Further Reduction

This canonical form can be further reduced for our purpose (Roy and Mikhail [51]). Towards this end we put

    Y = T L ,   T (q x q), L (q x n),

where T is a triangular matrix with zeros above the diagonal and L is an orthonormal matrix, L L' = I_q, which can be completed by an (n-q) x n matrix M into an orthogonal matrix [L ; M], so that

    [ L ; M ] [ L' : M' ] = I_n .

Let

    X* = X [ L' : M' ] = [ U* : V* ] ,   U* (p x q), V* (p x (n-q)), say.

Then the probability density of U*, V* and T can be written as (Roy and Mikhail [51])

    constant . Π_{i=1}^p (1 - ρ_i²)^{-n/2} exp{ -(1/2) Σ_{i=1}^p (1 - ρ_i²)^{-1} [ Σ_{j=1}^q (u*_ij - ρ*_ij t_ij)² + Σ_{j=1}^{n-q} v*_ij² ] } ,

multiplied by factors involving T only, where

    ρ*_ij = ρ_i ,   j = 1, 2, ..., i ,   i = 1, 2, ..., p ;
    ρ*_ij = 0   otherwise.

Next put

    U = D U*  (p x q) ,   V = D V*  (p x (n-q)) ,

where D is a diagonal p x p matrix with elements 1/√(1 - ρ_i²), i = 1, 2, ..., p, and let

    γ_ij = γ_i = ρ_i / √(1 - ρ_i²) ,   j = 1, 2, ..., i ,   i = 1, 2, ..., p ;
    γ_ij = 0   otherwise.

Then the distribution of U, V and T is

    constant . exp{ -(1/2) [ Σ_{i=1}^p Σ_{j=1}^q (u_ij - γ_ij t_ij)² + Σ_{i=1}^p Σ_{j=1}^{n-q} v_ij² ] } dU dV ,

multiplied by factors involving T only. The independence hypothesis now reduces to

    H_0:  γ_1 = γ_2 = ... = γ_p = 0

against

    H_1:  at least one γ_i != 0 .

Also, the ordered characteristic roots λ_1 <= λ_2 <= ... <= λ_p of (UU')(VV')^{-1} are related to the characteristic roots (ν_1, ν_2, ..., ν_p) by (Roy and Mikhail [51])

    λ = ν / (1 - ν) ,   or   ν = λ / (1 + λ) .

Thus the λ's are the characteristic roots of (UU')(VV')^{-1}.

1.2.4 Preliminaries and Development

Let us denote

    λ = (λ_1, ..., λ_p)'   and   e = (e_1, ..., e_p)' ,
where,
e , e , ••• , e are the p elementary symmetric functions
2
l
p
A. • Then all tests based on the roots A.. ,
J.
P
i
= 1, ••• , P or their symmetric functions will be invariant, under
the groups
G
l
Now let φ = φ(U, V) be a test for H_0 with acceptance region A_φ and boundary B_φ, both in the space of (U, V). We shall consider tests φ based on the characteristic roots of (UU')(VV')^{-1}, i.e. of

    (S_11^{-1} S_12 S_22^{-1} S_21)(I − S_11^{-1} S_12 S_22^{-1} S_21)^{-1}.

The power function of such a test will involve, aside from the degrees of freedom, only the p canonical correlations in the population, viz. ρ_i, i = 1,2,...,p, or their functions γ_i = ρ_i/√(1 − ρ_i²), i = 1,2,...,p.

It is easy to see that there exists a close tie-up between the distribution problem of the MANOVA problem and that of the independence problem. This was exploited by Roy and Mikhail [52] to show the monotonicity property of the greatest characteristic root test for the independence problem. We shall define:
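Numerically, the sample analogues of the roots ν_i are the characteristic roots of S_11^{-1} S_12 S_22^{-1} S_21, i.e. the squared sample canonical correlations; a small sketch (numpy, with illustrative data and notation assumed, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, n = 2, 3, 500
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))

S11, S22, S12 = X.T @ X, Y.T @ Y, X.T @ Y
# roots of S11^{-1} S12 S22^{-1} S21 (squared canonical correlations)
M = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S12.T)
nu = np.sort(np.linalg.eigvals(M).real)

assert nu.shape == (p,)
assert np.all((nu >= 0) & (nu <= 1))
lam = nu / (1 - nu)        # the transformed roots lambda = nu/(1 - nu)
assert np.all(lam >= 0)
```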
Definition 5: A test φ for the independence problem is said to have the monotonicity property if its power function increases monotonically as each of the p population canonical correlations increases separately.

Let U_1 = (u_11, u_21, ..., u_p1)', and let A_{φ,U_1,V} and B_{φ,U_1,V}, both in the space of U_1, denote the sections of A_φ and their boundaries for different values of the remaining u_ij's and V. Then we can define, for the tests φ(U, V), the analogues of the properties introduced in the case of the MANOVA problem. And we have the following theorem, which is implicit in Roy and Mikhail [52].
Theorem 6: If a test φ(U, V) for the independence problem has the properties referred to above, then it has the monotonicity property.

Proof: We have to show that

    ∫_{A_φ} const. · exp{−(1/2)[Σ_{i=1}^p Σ_{j=1}^q (u_ij − γ'_ij t_ij)² + Σ_{i=1}^p Σ_{j=q+1}^n v_ij²]} dU dV ∏_{i=1}^q t_ii^{n−i} dT

decreases as each of the γ_i's increases separately. Or, equivalently, we have to show that

    ∫_{A*_φ} const. · exp{−(1/2)[Σ_{i=1}^p Σ_{j=1}^q u_ij² + Σ_{i=1}^p Σ_{j=q+1}^n v_ij²]} dU dV ∏_{i=1}^q t_ii^{n−i} dT,

where A*_φ is A_φ translated by γ'_ij t_ij along u_ij, i = 1,2,...,p, decreases as each of the γ_i's increases separately.

By an argument similar to the one used in proving Theorem 1, it can easily be shown that, for each fixed T, the integral decreases as A_φ is translated by γ_1 t_11 along u_11. We can then introduce the density function of T and see that the unconditional integral decreases as A_φ is moved by γ_1 t_11 along u_11. We can then repeat this for the other γ_i's, i = 2, ..., p, and we shall get the theorem.
From the method of proof of Theorem 6 of this section the following theorem is immediate.

Theorem 7: Any test φ for the independence problem with

    A_φ : a'e ≤ constant,   a' = (a_1, a_2, ..., a_p) ≥ (0, 0, ..., 0),

or

    A_φ : a'λ ≤ constant,   a_1, ..., a_{p−1} ≥ 0, a_p > 0,

has the monotonicity property.
Examples: The greatest characteristic root test has

    ν_p ≤ constant,

which is equivalent to λ_p ≤ constant. The likelihood ratio test has

    ∏_{i=1}^p (1 − ν_i) ≥ constant.

This can be reduced as:

    ∏_{i=1}^p (1 − λ_i/(1 + λ_i)) ≥ constant,

i.e.

    ∏_{i=1}^p (1 + λ_i)^{-1} ≥ constant,

i.e.

    ∏_{i=1}^p (1 + λ_i) ≤ constant,

i.e.

    1 + e_1 + ... + e_p ≤ constant.
This proves that the likelihood ratio test has the monotonicity property.
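The last step of this reduction rests on the identity ∏(1 + λ_i) = 1 + e_1 + ... + e_p; a quick numerical confirmation (numpy sketch, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
lam = rng.uniform(0.1, 2.0, size=p)   # stand-ins for the roots lambda_i

# np.poly(-lam) gives the coefficients of prod(t + lam_i) =
# t^p + e1 t^{p-1} + ... + ep, so the trailing coefficients are e_1..e_p.
e = np.poly(-lam)[1:]

assert np.isclose(np.prod(1 + lam), 1 + e.sum())
```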
1.3 Testing the Equality of Two Dispersion Matrices

1.3.1 The Model and the Hypothesis

Let Z_1 (p×(n_1+1)) and Z_2 (p×(n_2+1)) be two samples of sizes (n_1+1) and (n_2+1) from N(ξ_1, Σ_1) and N(ξ_2, Σ_2) respectively, where ξ_1, ξ_2 (p×1) are the mean vectors and Σ_1, Σ_2 (p×p) are both symmetric and p.d. We are interested in testing the hypothesis

    H_0 : Σ_1 = Σ_2

against various alternatives.

This problem is invariant under the group of nonsingular transformations of the structure

    Z*_i = C Z_i, i = 1, 2,   C (p×p) nonsingular,

and under the group of translations of the type

    Z*_i = Z_i + B_i, i = 1, 2,

where B_1 and B_2 are real matrices, each with all its columns equal. The characteristic roots of S_1 S_2^{-1}, where S_i is the matrix of corrected sums of squares and products of the i-th sample, form a set of maximal invariants under these groups of transformations. Also, the maximal invariants under the groups of transformations induced in the parametric space are the characteristic roots of Σ_1 Σ_2^{-1}, say,

    γ_1 ≤ γ_2 ≤ ... ≤ γ_p.

The hypothesis of equal dispersion matrices then becomes

    H_0 : γ_1 = γ_2 = ... = γ_p = 1.
Roy and Gnanadesikan [58] have considered this hypothesis against several alternatives. Out of these alternatives Mikhail [38] has considered the following four:

    H_1 : All γ_i's > 1.
    H_2 : All γ_i's < 1.
    H_3 : The largest γ_i = γ_p > 1.
    H_4 : The smallest γ_i = γ_1 < 1.

For each of these four alternatives Roy and Gnanadesikan [58] have given procedures, some of which are three-decision procedures. Mikhail [38] has obtained different kinds of monotonicity properties for these procedures. We shall show that in some cases we can obtain very broad classes of procedures having the properties he has obtained, and in other cases classes of procedures having somewhat weaker monotonicity properties. We shall consider only invariant tests and can restrict consideration to the following canonical form.
1.3.2 Canonical Form

Let X (p×n_1) and Y (p×n_2) be two matrices of random variables with probability density

    constant · ∏_{i=1}^p γ_i^{−n_1/2} exp{−(1/2) tr(D_{1/γ} XX' + YY')} dX dY,

where D_{1/γ} is the diagonal matrix with diagonal elements 1/γ_i, i = 1,...,p. Under this canonical form the hypothesis is

    H_0 : γ_1 = γ_2 = ... = γ_p = 1,

and all the tests will involve X and Y only through the characteristic roots of (XX')(YY')^{-1}, say λ_1 ≤ λ_2 ≤ ... ≤ λ_p. Let e_j, j = 1,2,...,p, denote the elementary symmetric functions of λ_1, ..., λ_p.
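The roots λ_i of (XX')(YY')^{-1} solve det(XX' − λ YY') = 0, a symmetric-definite generalized eigenproblem; a scipy sketch (illustration only, with arbitrary dimensions):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
p, n1, n2 = 3, 20, 25
X = rng.standard_normal((p, n1))
Y = rng.standard_normal((p, n2))

# characteristic roots of (XX')(YY')^{-1}: det(XX' - lam YY') = 0
lam = eigh(X @ X.T, Y @ Y.T, eigvals_only=True)

assert lam.shape == (p,)
assert np.all(lam > 0)            # both matrices p.d. with probability 1
assert np.all(np.diff(lam) >= 0)  # returned as lam_1 <= ... <= lam_p
```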
1.3.3 The Development

We shall now consider the problems of testing H_0 against H_1 and H_2 as Case 1, and those of testing H_0 against H_3 and H_4 as Case 2.

Case 1. To test

    H_0 : γ_1 = γ_2 = ... = γ_p = 1

against

    H_1 : all γ's > 1,

and against

    H_2 : all γ's < 1.

For these problems Roy and Gnanadesikan [58] have given the following three-decision procedures:

(i) Accept H_0 against H_1 if ch_max(XX')(YY')^{-1} < μ_1, a constant;
    accept H_1 against H_0 if ch_min(XX')(YY')^{-1} > μ_1;
    make no decision otherwise.

(ii) Accept H_0 against H_2 if ch_min(XX')(YY')^{-1} > μ_2, a constant;
     accept H_2 against H_0 if ch_max(XX')(YY')^{-1} ≤ μ_2;
     make no decision otherwise.

The constants μ_1 and μ_2 are determined by the condition that the probability of accepting H_0 when it is true equals a given number 1 − α.
Mikhail [38] has shown that the probability of accepting H_0 against H_1, when H_1 is true, decreases monotonically as each of the noncentrality parameters increases separately; and the probability of accepting H_2 against H_0, when H_2 is true, increases monotonically as the γ's decrease separately. We shall call these the 'strong monotonicity properties'. We shall, however, concern ourselves with a weaker property, namely, monotonicity of the above mentioned probabilities when the γ_i's are all equal to, say, γ and γ increases or decreases. We shall call this the weak monotonicity property.

We shall, in this section, give procedures φ characterized only by regions over which H_0 or H_2 may be accepted; these regions will be denoted by A_φ. The procedures themselves may be two-decision procedures or three-decision procedures. More specifically, we shall prove the following:
Theorem 8: In the canonical form of section 1 replace s by n_1, n by n_2 and u by p. Let A_φ be the acceptance region of any test φ for this MANOVA problem which has the properties referred to there. Then all the procedures for testing H_0 against H_1 and H_0 against H_2 which

(i) accept H_0 against H_1 if (X, Y) ∈ A_φ, and reject it otherwise, or

(ii) accept H_2 against H_0 if (X, Y) ∈ A_φ, and reject it otherwise,

have the weak monotonicity property.

Proof: We are concerned with

    ∫_{A_φ} constant · ∏_{i=1}^p γ_i^{−n_1/2} exp{−(1/2) tr(D_{1/γ} XX' + YY')} dX dY.
This is the same as

    ∫_{A*_φ} const. · exp{−(1/2)[Σ_{i=1}^p Σ_{j=1}^{n_1} x_ij² + tr YY']} dX dY,

where A*_φ is the domain A_φ expanded by the quantity √γ_i in the directions (x_i1, x_i2, ..., x_i,n_1), i = 1,2,...,p.

But we are concerned with the alternatives γ_1 = γ_2 = ... = γ_p = γ, with γ > 1 in testing H_0 against H_1 and γ < 1 in testing H_0 against H_2. Therefore we may regard A*_φ as the domain A_φ expanded by the quantity √γ in the directions (x_1j, x_2j, ..., x_pj), j = 1, 2, ..., n_1.
Now these expansions are cumulative. It is, therefore, sufficient to see what happens when A_φ is expanded by the quantity √γ along (x_11, x_21, ..., x_p1).

In the notation of section 1, with the proper changes, the section A_{φ,X_1,Y} is an ellipsoid in the space of (x_11, ..., x_p1). This ellipsoid can be referred to its principal axes. The expansion of A_{φ,X_1,Y} along (x_11, x_21, ..., x_p1) by the quantity √γ is then equivalent to an expansion of this ellipsoid by √γ along its principal axes. Thus the integral decreases if γ > 1 and increases if γ < 1.

Thus, the probability of accepting H_0 against H_1, when H_1 is true, decreases monotonically as γ (= γ_1 = γ_2 = ... = γ_p) increases; and the probability of accepting H_2 against H_0, when H_2 is true, increases monotonically as γ decreases.
Examples: Procedures which accept H_0 against H_1 (and reject it otherwise), or which accept H_2 against H_0 (and reject it otherwise), over

    A_φ : a'e ≤ constant,   a_j ≥ 0, j = 1, 2, ..., p,   a' = (a_1, a_2, ..., a_p),

or

    A_φ : a'λ ≤ constant,   a' = (a_1, ..., a_p),   a_j ≥ 0, j = 1, 2, ..., p−1,   a_p > 0,

have the weak monotonicity property.
Case 2. To test

    H_0 : γ_1 = γ_2 = ... = γ_p = 1

against

    H_3 : the largest γ_i = γ_p > 1,

or against

    H_4 : the smallest γ_i = γ_1 < 1.

For these problems the following test procedures are known (Roy and Gnanadesikan [58]):

(i) Accept H_0 against H_3 if ch_max(XX')(YY')^{-1} ≤ const., and reject it otherwise.

(ii) Accept H_4 against H_0 if ch_max(XX')(YY')^{-1} ≤ const., and reject it otherwise.

Mikhail [38] has shown that the power functions of both these tests are monotonically increasing functions of the noncentrality parameters, i.e. the power functions increase monotonically as γ_p increases or γ_1 decreases, respectively.
We shall prove the following:
Theorem 9: With the changes in the notation of the canonical form of section 1 suggested in Theorem 8, if we accept H_0 against H_3 over A_φ of Theorem 8, or accept H_4 against H_0 over A_φ, and reject them otherwise, then the power function of φ for

(i) testing H_0 against H_3 increases monotonically as γ_p increases;

(ii) testing H_0 against H_4 increases monotonically as γ_1 decreases.

Proof: Again we are concerned with

    ∫_{A*_φ} const. · exp{−(1/2)[tr XX' + tr YY']} dX dY,

where A*_φ is A_φ expanded by √γ_p > 1 along (x_p1, x_p2, ..., x_p,n_1).

The expansions being cumulative, we may make them successively. It is enough to show that the integral decreases under an expansion √γ_p along x_p1. Now A_{φ,X_1,Y} is the interior of an ellipsoid in the space of (x_11, x_21, ..., x_p1) for any given values of the remaining variables. An expansion of this region by √γ_p along x_p1 can be considered as the result of a number of expansions ℓ_1 √γ_p, ℓ_2 √γ_p, ..., ℓ_p √γ_p along the principal axes of the ellipsoid, where (ℓ_1, ℓ_2, ..., ℓ_p) are the direction cosines of x_p1 with respect to the principal axes. Since √γ_p > 1, it follows that the integral decreases. Thus the power function of each test φ for testing H_0 against H_3 given by the previous theorem increases as γ_p increases.

The proof for the monotonicity of the power functions of the tests φ for testing H_0 against H_4 is similar.

Examples:
All tests φ which accept H_0 against H_3 over A_φ and reject it otherwise, or accept H_4 against H_0 over A_φ and reject it otherwise, where

    A_φ : a'e ≤ constant,   a_j ≥ 0, j = 1, 2, ..., p,   a' = (a_1, a_2, ..., a_p),

or

    A_φ : a'λ ≤ constant,   a_j ≥ 0, j = 1, 2, ..., p−1,   a_p > 0,

have power functions which increase monotonically as, respectively, γ_p increases or γ_1 decreases.
1.4 Testing Whether a Dispersion Matrix Equals a Given Matrix

1.4.1 The Model and the Hypothesis

Let Z (p×(n+1)) be a sample of (n+1) observations from a N(ξ, Σ), ξ (p×1), where Σ is a symmetric p.d. (p×p) matrix. We are interested in testing

    H_0 : Σ = Σ_0,

Σ_0 being a given symmetric p.d. matrix.

In many respects this problem is similar to the "equidispersion" problem, and especially so in the case of the alternatives to be considered and the monotonicity properties of the power functions.

It is well known and easy to see that the problem can be reduced by invariance, and we may restrict our consideration to the following canonical form:
1.4.2 Canonical Form

Let X = (x_ij), p×n, be a matrix of random variables with probability density

    Constant · ∏_{i=1}^p γ_i^{−n/2} exp{−(1/2) tr D_{1/γ} XX'} dX.

We are interested in testing

    H_0 : γ_1 = γ_2 = ... = γ_p = 1

against various alternatives:

    H_1 : all γ_i's > 1;
    H_2 : all γ_i's < 1;
    H_3 : the largest γ_i = γ_p > 1;
    H_4 : the smallest γ_i = γ_1 < 1.

The tests φ(X) are restricted to involve X only through the characteristic roots λ_1 ≤ λ_2 ≤ ... ≤ λ_p of (XX').
1.4.3 The Development

All the results of the previous section on the tests of equality of two dispersion matrices can be extended to this section; that is, the results on the weak monotonicity when testing against H_1 or H_2, and the monotonicity in γ_p and γ_1, respectively, when testing against H_3 or H_4. We shall not consider these results further; instead we shall derive some strong monotonicity properties for a single class of tests of H_0 against H_1.
Theorem 10: All procedures φ with

    A_φ : a'e ≤ constant,   a_j ≥ 0, j = 1, 2, ..., p,   a' = (a_1, ..., a_p),

for testing H_0 against H_1 have power functions which increase monotonically as γ_1, γ_2, ..., γ_p increase separately.

Proof:
We are interested in showing that

    ∫_{A_φ} constant · ∏_{i=1}^p γ_i^{−n/2} exp{−(1/2) tr D_{1/γ} XX'} dX,

which is the same as

    ∫_{A*_φ} constant · exp{−(1/2) tr XX'} dX,

where A*_φ is A_φ expanded by the quantity √γ_i along (x_i1, x_i2, ..., x_in), i = 1, 2, ..., p, decreases as each γ_i increases separately. It is sufficient to show that the integral decreases when A_φ is expanded by √γ_1 along X_1 = (x_11, x_12, ..., x_1n).

Let X̄_1 denote the matrix of the remaining rows of X, and consider the sections A_{φ,X̄_1} of A_φ. We shall show that these are interiors of ellipsoids in the n-dimensional space of (x_11, ..., x_1n). This will prove the previous contention.

Consider the equation

    |XX' − λI| = 0.
This can be written, as before, as

    λ^p − e_1 λ^{p−1} + e_2 λ^{p−2} − ... + (−1)^p e_p = 0,

where e_j is the sum of all (p choose j) j-rowed principal minors of XX', j = 1, 2, ..., p.

It is easy to see that, given X̄_1, any e_j will be a homogeneous quadratic function of (x_11, ..., x_1n) plus a constant. Thus any e_j will be a homogeneous quadric in the n-dimensional space of (x_11, ..., x_1n).

Next consider the matrix obtained from X by replacing its first row X_1 by cX_1, 0 < c < 1. Any e_j, j = 1, 2, ..., p, of this matrix will contain (p−1 choose j−1) principal minors which contain the first row and column, whereas the remaining will not contain these. Now each of the (p−1 choose j−1) principal minors which contains the first row and column is c² times the corresponding minor when c = 1, and each of the remaining (p choose j) − (p−1 choose j−1) is the same as the corresponding minor when c = 1. Thus, when 0 < c < 1, any e_j, j = 1, 2, ..., p, of the modified matrix is less than the corresponding e_j of XX'. Therefore, for a_j ≥ 0, j = 1, 2, ..., p, and

    A_φ : a'e ≤ constant,

φ has the property that (X_1, X̄_1) ∈ A_φ implies (cX_1, X̄_1) ∈ A_φ, 0 < c < 1.
Next consider a line L in the space of (x_11, ..., x_1n) with equations

    x_11/ℓ_1 = x_12/ℓ_2 = ... = x_1n/ℓ_n (= r).

Consider the intersection of L and A_{φ,X̄_1}. We notice that this is a segment of the line itself. Now if a point (ℓ_1 r, ℓ_2 r, ..., ℓ_n r) on this line belongs to A_{φ,X̄_1}, the section of the acceptance region, then for given X̄_1 some e_j, j = 1, 2, ..., p, of XX' is finite; also it is positive. Now this e_j will be the sum of the (p−1 choose j−1) j-rowed principal minors containing the first row and column and the remaining j-rowed principal minors not containing the first row and column. Thus 0 ≤ e_j < ∞ implies that

    0 ≤ K_1 r² + K_2 < ∞,

where K_1, K_2 are positive and finite. This implies that r is finite. Therefore the section A_{φ,X̄_1} is the interior of an ellipsoid in the space of (x_11, ..., x_1n). This completes the proof of Theorem 10.
It is easy to prove on parallel lines that all procedures φ for testing H_0 against H_2, which accept over A_φ of Theorem 10, have the analogous strong monotonicity property.
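The mechanism of the proof, that every e_j of XX' decreases when the first row of X is multiplied by c, 0 < c < 1, can be checked numerically; e_j is read off the characteristic polynomial of XX' (numpy sketch, illustration only):

```python
import numpy as np

def elem_sym(X):
    # det(lam I - XX') = lam^p - e1 lam^{p-1} + ... + (-1)^p ep, so
    # e_j = (-1)^j * (j-th coefficient) of np.poly(XX').
    p = X.shape[0]
    c = np.poly(X @ X.T)
    return np.array([(-1) ** j * c[j] for j in range(1, p + 1)])

rng = np.random.default_rng(4)
X = rng.standard_normal((3, 8))
e_full = elem_sym(X)

Xc = X.copy()
Xc[0] *= 0.5                      # shrink the first row: 0 < c < 1
e_shrunk = elem_sym(Xc)

assert np.all(e_full > 0)
assert np.all(e_shrunk < e_full)  # every e_j decreases, as in the proof
```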
CHAPTER II

SOME PROPERTIES OF PERCENTAGE POINTS AND PROBABILITY INTEGRALS

2.0 Introduction and Summary

In this chapter we shall obtain a number of properties of percentage points and integrals associated with the distributions of some random variables. Some of these properties are generally known from charts and tables of these quantities, but proofs of these are not known to have appeared in the related literature. In the first section we shall study the monotonicity properties of the percentage points and some integrals associated with a noncentral chi-square distribution with respect to various parameters, and obtain some order relations for the integrals. In the second section we shall do the same for the noncentral F-distribution and pose some unsolved problems in this area. In the third section we shall extend some of the results of the second section to the distribution of Hotelling's T²-statistic and investigate some monotonicity properties of the percentage points of a class of statistics suggested in Chapter I. The methods of investigation and the proofs are based on various results in the theory of hypothesis testing, for which we may refer to Lehmann [32, 33].

2.1 Some Properties of Percentage Points and Integrals Connected with Chi-square Tests
Many problems of testing hypotheses in univariate and multivariate inference use statistics with central or noncentral chi-square distributions (Kendall and Stuart [29], Patnaik [43]). The percentage points, as well as incomplete integrals, are available in extensive tables for the chi-square distribution (Bancroft [6], Greenwood [18]). Tabulation for the noncentral chi-square distribution, however, is not extensive. In this section we shall list and prove some properties of the percentage points and the integrals connected with the noncentral chi-square distribution. These properties have been generally recognized from the tables and charts, but proofs have not heretofore appeared.
Definition 1: A stochastic variable X is said to have the χ²(n, δ²)-distribution, i.e., the noncentral chi-square distribution with n degrees of freedom and the noncentrality parameter δ², or, equivalently, X is said to be distributed as a χ²(n, δ²)-variate, if its probability density is

    χ²(x; n, δ²) = e^{−δ²/2} Σ_{r=0}^∞ (1/r!) (δ²/2)^r χ²(x; n + 2r, 0),   x > 0,

where

    χ²(x; n, 0) = [2^{n/2} Γ(n/2)]^{−1} x^{n/2 − 1} e^{−x/2},   x > 0;   = 0,   x ≤ 0.

When δ = 0 the distribution is said to be the (central) chi-square distribution with n degrees of freedom. We shall denote

    I_χ²(x; n, δ²) = ∫_0^x χ²(t; n, δ²) dt.
It is well known that if X_i, i = 1, 2, ..., n, are independently normally distributed random variables with common variance σ² and means E(X_i) = a_i, then Σ_{i=1}^n X_i²/σ² is distributed as a χ²(n, δ²)-variate, where δ² = Σ_{i=1}^n a_i²/σ². Conversely, a random variable X distributed as a χ²(n, δ²)-variate may be expressed as a sum of squares of n independently normally distributed random variables with unit variance.
Theorem 1: If α is a fixed number, 0 < α < 1, and μ = μ_α(n, δ²) is defined by

    ∫_0^μ χ²(x; n, δ²) dx = 1 − α,

then μ_α(n, δ²) is an increasing function of n for each fixed δ and an increasing function of δ for each fixed n.

Proof: Consider a random variable X = X_1² + X_2² + ... + X_n², where the X_i, i = 1, 2, ..., n, are independently normally distributed random variables with variance unity and means E X_1 = δ, E X_i = 0, i = 2, 3, ..., n. Let Y be a N(0,1) variable independent of X. Then X + Y² is a χ²(n+1, δ²)-variate. Now the event X + Y² ≤ b, where b > 0 is constant, implies the event X ≤ b. From this the first half of the theorem easily follows.

For the second half of the theorem we observe that

    I_χ²(b; n, δ²) = ∫_0^b χ²(t; n−1, 0) [(2π)^{−1/2} ∫_{−√(b−t)}^{√(b−t)} e^{−(x−δ)²/2} dx] dt.

The inner integral on the r.h.s. decreases as δ (≥ 0) increases, and the proof of the theorem is complete.
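Theorem 1 can be observed directly from the noncentral chi-square quantile function in scipy (a sketch added for illustration; scipy's nc parameter is the noncentrality δ²):

```python
from scipy.stats import ncx2

alpha = 0.05

def mu(n, nc):
    # mu_alpha(n, delta^2): the point with I(mu; n, delta^2) = 1 - alpha
    return ncx2.ppf(1 - alpha, df=n, nc=nc)

# increasing in n for fixed noncentrality
assert mu(5, 2.0) < mu(6, 2.0) < mu(10, 2.0)
# increasing in the noncentrality for fixed n
assert mu(5, 1.0) < mu(5, 2.0) < mu(5, 5.0)
```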
In the next theorem we shall consider the behaviour of I_χ²(μ; n, δ) for varying values of n and δ, when μ is selected by some boundary condition.

Theorem 2: If n and n* are positive integers, n* > n, μ and μ* are positive numbers, and δ and δ* satisfy

    I_χ²(μ; n, δ) = I_χ²(μ*; n*, δ),

then

    I_χ²(μ; n, δ*) < I_χ²(μ*; n*, δ*),   δ* > δ,
    I_χ²(μ; n, δ*) > I_χ²(μ*; n*, δ*),   δ* < δ.

Also, for δ = O(1) and δ* = O(1),

    I_χ²(μ; n, δ*) − I_χ²(μ*; n*, δ*) → 0   as n → ∞.

Proof: Let X be a χ²(n, δ)-variate and Y be a χ²(n*−n, 0)-variate independent of X. Consider the problem of testing

    H_0 : the noncentrality equals δ   against   H_1 : the noncentrality equals δ* > δ.

For this problem there exists a U.M.P. test given by a critical region X > constant. To see this, let p_δ(x, y) denote the joint probability density of X and Y, and notice that

    p_δ(x, y) = χ²(x; n, δ) χ²(y; n*−n, 0).

By the Neyman-Pearson lemma the U.M.P. test rejects when

    p_δ*(x, y) / p_δ(x, y) ≥ constant,

i.e.

    χ²(x; n, δ*) / χ²(x; n, δ) ≥ constant.
But we have:

Lemma 1: The ratio χ²(x; n, δ*)/χ²(x; n, δ) is a monotonically increasing function of x, provided δ* > δ (Lehmann [32]).

Thus the rejection region of the U.M.P. test is

    X ≥ constant,

the constant being chosen to satisfy the level condition. The test with critical region X ≥ constant is, therefore, more powerful than the test with critical region X + Y ≥ constant, X + Y being distributed as a χ²(n*, δ)-variate. Thus if

    I_χ²(μ; n, δ) = I_χ²(μ*; n*, δ),

then

    I_χ²(μ; n, δ*) ≤ I_χ²(μ*; n*, δ*),   δ* > δ,

and

    I_χ²(μ; n, δ*) ≥ I_χ²(μ*; n*, δ*),   δ* < δ.
To prove the second part we shall use an approximation of a χ²(n, δ)-variate by a χ²(ν, 0)-variate, due to Patnaik [43]. According to this approximation, the incomplete integrals of a χ²(n, δ)-variate can be approximated by those of a ρ·χ²(ν, 0)-variate, where

    ρ = 1 + δ/(n + δ),   ν = (n + δ)/ρ,

and the error of approximation will tend to zero as n tends to infinity, provided that δ = O(1). Thus for large n and δ, δ* = O(1), we can approximate both I_χ²(μ; n, δ) and I_χ²(μ; n, δ*) by I_χ²(μ; n, 0), and similarly for the integrals with μ* and n*. This proves the result.
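Patnaik's approximation, with ρ = 1 + δ/(n + δ) and matched degrees of freedom ν = (n + δ)/ρ, is easy to examine numerically (scipy sketch, illustration only; here δ denotes the noncentrality, scipy's nc):

```python
from scipy.stats import chi2, ncx2

n, delta = 50, 4.0                     # delta = O(1), n large
rho = 1 + delta / (n + delta)          # Patnaik's scale factor
nu = (n + delta) / rho                 # matched (fractional) d.f.

for x in (40.0, 55.0, 70.0):
    exact = ncx2.cdf(x, df=n, nc=delta)
    approx = chi2.cdf(x / rho, df=nu)  # P(rho * chi^2_nu <= x)
    assert abs(exact - approx) < 0.01  # the error is small for large n
```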
This theorem explains the nature of the power curves of a chi-square test depending upon the noncentral chi-square distribution, e.g. the analysis of variance with known population variance. For fixed n, a curve of this family starts at δ = 0 with power equal to the level of significance, and increases monotonically to 1 as δ → ∞. The curve for n* > n lies completely below the curve for n. As n increases, the curves gradually become parallel to the abscissa.

The next theorem will prove a property of incomplete chi-square integrals.
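These features of the power curves can be reproduced numerically (scipy sketch, for illustration; the test rejects above the central (1 − α) point):

```python
from scipy.stats import chi2, ncx2

alpha = 0.05

def power(n, nc):
    return 1 - ncx2.cdf(chi2.ppf(1 - alpha, df=n), df=n, nc=nc)

assert abs(power(5, 1e-8) - alpha) < 1e-4             # starts at the level
assert power(5, 1.0) < power(5, 3.0) < power(5, 9.0)  # increases with delta
assert power(10, 3.0) < power(5, 3.0)                 # n* > n lies below n
```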
Theorem 3: Let n, n* be positive integers, n* > n, and let μ and μ* satisfy

    I_χ²(μ; n, 0) = I_χ²(μ*; n*, 0).

Then

    I_χ²(cμ; n, 0) > I_χ²(cμ*; n*, 0),   c < 1,
    I_χ²(cμ; n, 0) < I_χ²(cμ*; n*, 0),   c > 1.

Proof: Let X_1, X_2, ..., X_n* be independently normally distributed random variables with zero means and common variance σ². Consider the problem of testing

    H_0 : σ² = 1   against   H_1 : σ² > 1.

For this problem there exists a U.M.P. test which rejects H_0 when

    Σ_{i=1}^{n*} X_i² > constant,

the constant being chosen so as to satisfy the level condition. The U.M.P. test is certainly more powerful than the test with the same level of significance and with critical region

    Σ_{i=1}^{n} X_i² > constant,

when the alternative is σ² = 1/c, c < 1. This proves the first inequality; the second may be proved analogously by considering the uniformly least powerful test of H_1 against H_0.
Now the general nature of the incomplete noncentral chi-square integral is similar to that of the incomplete chi-square integral. It would be interesting to know whether there exists a property for the noncentral chi-square integral analogous to that given by Theorem 3 for the central chi-square integral. More specifically, we pose the following problem: let n* > n and let

    I_χ²(μ; n, δ) = I_χ²(μ*; n*, δ),

δ fixed; then does there exist any order between I_χ²(cμ; n, δ) and I_χ²(cμ*; n*, δ) for varying c?

The property of the incomplete chi-square distribution given by Theorem 3 can be readily translated into a similar property of incomplete gamma integrals.
2.2 Some Properties of Percentage Points and Integrals Connected with the Variance-Ratio Test

Definition: A stochastic variate X > 0 with probability density function

    F(x; n_1, n_2, δ) = e^{−δ²/2} Σ_{r=0}^∞ (1/r!) (δ²/2)^r [B(n_1/2 + r, n_2/2)]^{−1} (n_1/n_2)^{n_1/2 + r} x^{n_1/2 + r − 1} (1 + n_1 x/n_2)^{−(n_1 + n_2)/2 − r},   x > 0,

where B(·, ·) denotes the beta function, is said to be distributed as an F(n_1, n_2, δ)-variate or, equivalently, is said to have the noncentral F-distribution with n_1, n_2 degrees of freedom and the noncentrality parameter δ. It is then well known that the distribution of an F(n_1, n_2, δ)-variate is the same as that of the stochastic variate (n_2/n_1)(Y/Z), where Y and Z are independently distributed random variates with probability density functions χ²(y; n_1, δ) and χ²(z; n_2, 0) respectively (Kendall and Stuart [29], Patnaik [43]).
Now let α be a fixed number, 0 < α < 1, and let

    I_F(μ; n_1, n_2, δ) = ∫_0^μ F(x; n_1, n_2, δ) dx = 1 − α.

Then values of μ = μ_α(n_1, n_2), for δ = 0, α = 0.05 or 0.01, and various values of n_1 and n_2, are very widely available. The tables of μ_α(n_1, n_2) do not show any monotonicity with respect to n_1 and n_2. The following theorem gives a monotonicity property of μ_α(n_1, n_2).

Theorem 4: μ*_α = (n_1/n_2) μ_α(n_1, n_2) is an increasing function of n_1 and a decreasing function of n_2.
Proof: Let Y, Y_1, Z, Z_1 be independent random variables with respective probability densities χ²(y; n_1, δ), χ²(y_1; 1, 0), χ²(z; n_2, 0) and χ²(z_1; 1, 0). Then the probability densities of

    (n_2/n_1)(Y/Z),   (n_2/(n_1 + 1))((Y + Y_1)/Z)   and   ((n_2 + 1)/n_1)(Y/(Z + Z_1))

are, respectively, F(·; n_1, n_2, δ), F(·; n_1 + 1, n_2, δ) and F(·; n_1, n_2 + 1, δ). Now

    (Y + Y_1)/Z ≤ k   implies that   Y/Z ≤ k,

and

    Y/Z ≤ k   implies that   Y/(Z + Z_1) ≤ k,

where k is a constant. Therefore

    Pr{(Y + Y_1)/Z ≤ k} ≤ Pr{Y/Z ≤ k} ≤ Pr{Y/(Z + Z_1) ≤ k}.

The theorem follows from this.
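Theorem 4 (in the δ = 0 case) can be observed from scipy's F quantiles; μ*_α = (n_1/n_2)μ_α is the cut-off for Y/Z rather than for F (illustration only):

```python
from scipy.stats import f

alpha = 0.05

def mu_star(n1, n2):
    return (n1 / n2) * f.ppf(1 - alpha, n1, n2)

assert mu_star(4, 10) < mu_star(5, 10) < mu_star(8, 10)  # increasing in n1
assert mu_star(5, 8) > mu_star(5, 12) > mu_star(5, 30)   # decreasing in n2
```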
We shall next consider a property of incomplete integrals of F(n_1, n_2, 0)-variates.

Theorem 5: Let n_1, n_2, n_1*, n_2* be positive integers with n_1* ≥ n_1 and n_2* ≥ n_2, and let μ and μ* satisfy

    I_F(μ; n_1, n_2, 0) = I_F(μ*; n_1*, n_2*, 0).

Then

    I_F(cμ; n_1, n_2, 0) > I_F(cμ*; n_1*, n_2*, 0),   c < 1,
    I_F(cμ; n_1, n_2, 0) < I_F(cμ*; n_1*, n_2*, 0),   c > 1.

Proof: Let X_1, X_2, ..., X_{n_1}, X_{n_1+1}, ..., X_{n_1*}; Y_1, ..., Y_{n_2}, Y_{n_2+1}, ..., Y_{n_2*} be independently normally distributed random variables with zero means and variances σ_x², σ_y² respectively. Consider the problem of testing

    H_0 : σ_x² = σ_y²   against   H_1 : σ_x² > σ_y².

For this problem there exists a U.M.P. unbiased test with critical region

    (Σ_{i=1}^{n_1*} X_i² / n_1*) / (Σ_{j=1}^{n_2*} Y_j² / n_2*) > constant.

The power of this test against the alternative σ_x²/σ_y² = c_1 > 1 will be greater than that of the unbiased test which rejects when

    (Σ_{i=1}^{n_1} X_i² / n_1) / (Σ_{j=1}^{n_2} Y_j² / n_2) > constant.

This proves the theorem.
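A numerical check of Theorem 5 with scipy (the reading n_1* ≥ n_1, n_2* ≥ n_2 of the hypothesis is assumed; illustrative values):

```python
from scipy.stats import f

n1, n2, n1s, n2s, alpha = 4, 6, 8, 12, 0.10
mu = f.ppf(1 - alpha, n1, n2)
mu_s = f.ppf(1 - alpha, n1s, n2s)  # equal incomplete integrals at c = 1

for c in (0.4, 0.7):               # c < 1
    assert f.cdf(c * mu, n1, n2) > f.cdf(c * mu_s, n1s, n2s)
for c in (1.5, 3.0):               # c > 1
    assert f.cdf(c * mu, n1, n2) < f.cdf(c * mu_s, n1s, n2s)
```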
Whether there exists any similar property for a noncentral F-distribution is an open question. Also, since the F-distribution and the Beta-distribution may be obtained from each other by a monotone transformation, the above property can be easily extended to incomplete beta integrals.
We shall next prove a property of the power function of the analysis of variance test.
Theorem 6: Let n_2* > n_2, and let μ, μ* satisfy

    I_F(μ; n_1, n_2, Δ) = I_F(μ*; n_1, n_2*, Δ).

Then

    I_F(μ; n_1, n_2, Δ*) > I_F(μ*; n_1, n_2*, Δ*),   Δ* > Δ,
    I_F(μ; n_1, n_2, Δ*) < I_F(μ*; n_1, n_2*, Δ*),   Δ* < Δ.

Proof: Let X_1, X_2, ..., X_{n_1}; Y_1, Y_2, ..., Y_{n_2}, Y_{n_2+1}, ..., Y_{n_2*} be independent, normally distributed random variables with common variance σ² and means

    E X_i = η_i, i = 1, 2, ..., n_1;   E Y_i = 0, i = 1, 2, ..., n_2*.

Consider the problem of testing

    H_0 : Σ_{i=1}^{n_1} η_i²/σ² = Δ²

against

    H_1 : Σ_{i=1}^{n_1} η_i²/σ² = Δ*² > Δ².

For this problem there exists a U.M.P. invariant test, invariant under the group of orthogonal transformations of the variates X_i, i = 1, 2, ..., n_1, and the group of scale changes

    X_i* = cX_i, i = 1, 2, ..., n_1;   Y_j* = cY_j, j = 1, 2, ..., n_2*.

The critical region of this test is given by

    Σ_{i=1}^{n_1} X_i² / Σ_{j=1}^{n_2*} Y_j² > constant.

This test is more powerful than the invariant test with critical region

    Σ_{i=1}^{n_1} X_i² / Σ_{j=1}^{n_2} Y_j² > constant.

This proves the result.
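Theorem 6 can be examined with scipy's noncentral F distribution; the constants are matched so that the two incomplete integrals agree at the noncentrality Δ² = 4 (illustrative values only):

```python
from scipy.stats import ncf

n1, n2, n2s = 3, 6, 15
nc = 4.0                       # Delta^2 at which the integrals are matched
mu = ncf.ppf(0.80, n1, n2, nc)
mu_s = ncf.ppf(0.80, n1, n2s, nc)

# at a larger noncentrality the test with more error d.f. is more
# powerful, so its acceptance integral is the smaller one
assert ncf.cdf(mu, n1, n2, 9.0) > ncf.cdf(mu_s, n1, n2s, 9.0)
# at a smaller noncentrality the order reverses
assert ncf.cdf(mu, n1, n2, 1.0) < ncf.cdf(mu_s, n1, n2s, 1.0)
```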
Now it is well known from tables and charts that the power of the analysis of variance test increases as the number of degrees of freedom for error increases, and decreases as the number of degrees of freedom for hypothesis increases (Feldt and Mahmoud [13], Fox [12], Hartley and Pearson [21], Lehmer [31], Tang [59]). The former of these two results is a special case of the theorem. The latter result is, however, not yet proved. As can be seen, most of the properties of integrals proved in this chapter are proved, very simply, by statistical methods. The last problem stated above, namely, the variation of the power of the analysis of variance test with respect to hypothesis degrees of freedom, gives rise to an interesting statistical problem which is not yet solved.
Let X_1, ..., X_{n_1}, Y, Z_1, Z_2, ..., Z_{n_2} be independent, normally distributed random variables with common variance σ² and means

    E X_i = η_i, i = 1, 2, ..., n_1;   E Y = ξ;   E Z_j = 0, j = 1, 2, ..., n_2.

Consider the problem of testing

    H_0 : η_1 = η_2 = ... = η_{n_1} = 0.
·e
51
The problem can be reduced by invariance (Lehmann
the U.M.P. invariate test which rejects
n
I;l
H
["327)
and we get
when
o
x~
i=l
n2
J.
>
constant
=
•
I; Z2
j=l j
This test can also be obtained as the U.M.P. similar region test.
is, therefore, better than any other
Now suppose that
n
1::1
i=l
E Y
= l;
= O.
It
similar region test.
Then tests with critical regions
~ + y2
>
=
constant
and
>
=
constant
are both similar region and invariant tests.
But now the last: ·01' the
three tests is U.M.P. invariant, and is more powerful than the first
two.
We are interested, however, in comparing the first two tests
between themselves.
We want to show that the first test is more power-
ful than the second test.
It can be shown that the first test is U .MoP.
among tests which have Neyman structure with respect to the sufficient
_
statistic T - (~v2i
+ ~
~ZJ2" y2). But thi s S tat'J.S ti C i S not camp1 e t e,
~
and hence the second test need not necessarily have Neyman structure
with respect to T.
It can be shown that it does not have it.
Thus
Thus the comparison is not possible by this argument.

It may be observed that as n_2 → ∞ all three tests above are equivalent to the corresponding chi-square tests, with power functions depending on the corresponding noncentral chi-square distribution. It will, therefore, follow that, for a sufficiently large number of observations, the power function of the analysis of variance test is a decreasing function of the number of hypothesis degrees of freedom for fixed values of the noncentrality parameter.
2.3 Some Properties of Percentage Points and Integrals Associated with Some Multivariate Tests

In this section we shall consider some properties of percentage points and power functions connected with some multivariate tests.

2.3.1 Power Function of the T²-test

Let us first consider the MANOVA problem of Chapter I. We have noted that when t = 1 there exists a U.M.P. invariant test, which is Hotelling's T²-test.
there exists a U.M.P. invariant test which is
This test depends on the only nonzero charac-
teristic root of the matrix
(XX' )(yyt fl.
To see this we observe
that in this case X is either a s-row vector (u =.1) or au-column
vector (s = 1).
When u = 1, the critical region of the test is
s 2
I: Xl'
1
J
> constant
or
>
constant
.e
When s = 1, the critical region is

    Σ_{i=1}^{u} Σ_{j=1}^{u} x_i s^{ij} x_j  >  constant,

where the s^{ij} are the elements of (YY')⁻¹. In the former case n T² is distributed as a noncentral F-variate with s and n degrees of freedom and noncentrality parameter Q₁; when the null hypothesis is true the distribution is the corresponding central distribution. In the latter case T²(n−u+1)/u is distributed as a noncentral F-variate with u and (n−u+1) degrees of freedom and noncentrality parameter Q₁; under the null hypothesis the distribution is the corresponding central distribution (Anderson [4]).
Thus we get:

Theorem 7: When t = 1, the power function of the U.M.P. invariant test for the multivariate linear hypothesis is a monotonically increasing function of the number of degrees of freedom of error and hence of the number of observations.

In view of the remarks concerning the variation of the power function of the analysis of variance test with respect to the hypothesis degrees of freedom, we may remark that the power function of the test mentioned in the theorem above is a monotonically decreasing function of s if u, the effective number of variates, equals unity, and a monotonically decreasing function of u if s, the number of degrees of freedom for the hypothesis, equals unity.
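Both monotonicity statements, Theorem 7 and the remark following it, can be illustrated with the noncentral F distribution; the level and noncentrality below are illustrative choices:

```python
from scipy.stats import f, ncf

alpha, nc = 0.05, 8.0  # illustrative level and fixed noncentrality

def power(dfn, dfd):
    """Power of the size-alpha F-test with dfn hypothesis and dfd error d.f."""
    return ncf.sf(f.ppf(1 - alpha, dfn, dfd), dfn, dfd, nc)

# Theorem 7: power increases with the error degrees of freedom.
inc = [power(3, dfd) for dfd in (5, 10, 20, 40)]
assert all(a < b for a, b in zip(inc, inc[1:]))

# Remark: for fixed noncentrality, power decreases with the hypothesis d.f.
dec = [power(dfn, 20) for dfn in (1, 2, 4, 8)]
assert all(a > b for a, b in zip(dec, dec[1:]))
```
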
2.3.2 Percentage Points of the Distribution of a Class of Statistics

In Chapter I we have characterized two classes of tests for the MANOVA problem by statistics which are linear functions of the ordered characteristic roots of the matrix (XX')(YY')⁻¹, or of the elementary symmetric functions of these characteristic roots. For the other three problems also, in that chapter, we have given such classes. The distribution problems associated with these problems are very closely related. Restricting our attention to the MANOVA problem, the test statistics, in the notation of Chapter I, are

    a'λ = R(λ), say,  and  a'e = E(λ), say,

where λ is the vector of ordered characteristic roots and e the vector of their elementary symmetric functions.
Let

    Prob{ R(λ) ≤ μ_α | u, s, n } = 1 − α
and
    Prob{ E(λ) ≤ ν_α | u, s, n } = 1 − α.

Then for using a test belonging to one of the above two classes we shall need the μ_α's and ν_α's. However, these percentage points are not available for all u, s and n even for the three well known test statistics associated with the three tests belonging to these classes, namely, the maximum characteristic root, the likelihood ratio and the trace. A large number of workers have investigated the distribution problem associated with these statistics (e.g. Anderson [1, 2, 4], Hotelling [20], Hsu [23, 24, 25], Ito [26, 27] and references therein, Rao [40], Roy [41]). We, however, have not been able to use their methods for determining the percentage points μ_α and ν_α.
Now we shall show that these percentage points have certain monotonicity properties with respect to s and n. For this let, as in Chapter I, λ_i(X, Y), i = 1, 2, ..., u, denote the ordered characteristic roots of (XX')(YY')⁻¹. Then we know that these roots are greater than or equal to the corresponding ordered characteristic roots λ_i(X₁, Y), i = 1, 2, ..., u, of (X₁X₁')(YY')⁻¹, where X₁ is obtained from X by deleting columns. This implies that, for fixed constants a and b,

    R(λ(X, Y)) ≤ a  and  E(λ(X, Y)) ≤ b

will imply, respectively, that

    R(λ(X₁, Y)) ≤ a  and  E(λ(X₁, Y)) ≤ b.

This will imply that the percentage points of the distributions of R(λ) and E(λ) are monotonically increasing functions of s, the number of degrees of freedom for hypothesis, when u and n are fixed.

Now let us write Y = (Y₁ : Y₂) and assume that (XX') is (a.e.) a nonsingular, positive definite matrix, so that s ≥ u. Then the roots of (XX')(YY')⁻¹ are the reciprocals of the roots of (YY')(XX')⁻¹. By a method similar to that above it can be shown that the ordered characteristic roots of (YY')(XX')⁻¹ are not less than the corresponding ordered characteristic roots of (Y₁Y₁')(XX')⁻¹. From this it will follow that, for fixed a and b,

    R(λ(X, Y₁)) ≤ a  and  E(λ(X, Y₁)) ≤ b

will, respectively, imply that

    R(λ(X, Y)) ≤ a  and  E(λ(X, Y)) ≤ b,

where a and b are positive constants. Thus we have proved the following:
Theorem 8: The percentage points of the distributions of the class of statistics R(λ) and E(λ) are monotonically increasing functions of s, the number of degrees of freedom for hypothesis, and are monotonically decreasing functions of n, the number of degrees of freedom for error, provided for the latter case s ≥ u.
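Theorem 8 can be illustrated by simulation for the largest-root statistic, which belongs to both classes. The dimensions below are illustrative; under the null hypothesis the entries of X and Y are taken as independent standard normals:

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_root_quantile(u, s, n, reps=4000, q=0.95):
    """Empirical upper percentage point of the largest root of (XX')(YY')^-1."""
    roots = np.empty(reps)
    for k in range(reps):
        X = rng.standard_normal((u, s))
        Y = rng.standard_normal((u, n))
        M = (X @ X.T) @ np.linalg.inv(Y @ Y.T)
        roots[k] = np.linalg.eigvals(M).real.max()
    return np.quantile(roots, q)

q_s2 = largest_root_quantile(2, 2, 20)
q_s8 = largest_root_quantile(2, 8, 20)
q_n40 = largest_root_quantile(2, 8, 40)
assert q_s2 < q_s8      # increasing in the hypothesis d.f. s
assert q_n40 < q_s8     # decreasing in the error d.f. n (here s >= u)
```
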
We shall now make some remarks concerning the power functions of the tests for the MANOVA problem mentioned above. It is believed that, for a given set of noncentrality parameter values, each of these power functions is a monotonically decreasing function of s for fixed u and n, a monotonically increasing function of n for fixed s and u, and a monotonically decreasing function of u for fixed s and n. Hsu [23, 24] has proved that as N, the number of observations, becomes large the likelihood ratio test and the trace criterion become equivalent.
More specifically, it has been shown that, for large numbers of observations and Σ_{i=1}^{t} Q_i = Q = O(1), the statistics −n log Λ and n T₀² can both be approximated by a noncentral chi-square statistic with u·s degrees of freedom and noncentrality parameter Q. Thus we can consider both the likelihood ratio test and the trace test as chi-square tests when the number of observations is large. We can then approximate the powers of these tests by the incomplete noncentral chi-square integrals of section 2.1. It then follows that these approximations to the power functions will be monotonically decreasing functions of both the parameters u and s.
CHAPTER III

A STUDY OF SOME UNION-INTERSECTION PROCEDURES

3.0 Introduction and Summary

The object of this chapter is to discuss power properties of some union-intersection tests and also to study them as multiple decision procedures. For this purpose the simultaneous analysis of variance test of Ghosh [12] and Ramachandran [44, 42], and a modification of it, will be taken as a model union-intersection test.

An empirical investigation of the power properties of various intuitively appealing test procedures, for a problem which does not admit a U.M.P. test, was done by Neyman and Pearson [43] with a view to comparing these tests when the alternatives are restricted to 'certain directions'. In section 3.1 we shall present and explain the relevant part of their findings. After observing that the procedures considered are union-intersection procedures we shall state the union-intersection principle in section 3.2. In section 3.3 we shall generalize the problem and the findings of Neyman and Pearson to the simultaneous analysis of variance situation. We shall show that against certain alternatives the simultaneous analysis of variance test is more powerful than the 'corresponding' analysis of variance test. In section 3.4 we shall investigate a modified simultaneous analysis of variance procedure. We shall show that under certain conditions this procedure is unbiased and that it is a Bayes solution. In the final section 3.5 we shall study the variation of the power of some union-intersection test procedures with respect to the number of intersections. We shall show that the power decreases as the number of intersections increases.
3.1 A Problem due to Neyman and Pearson

Neyman and Pearson [43] have considered the following problem. Let U and V be independent, normally distributed random variables with unit variance and means

    E U = ξ,  E V = η.

Consider the problem of testing

    H₀: ξ = η = 0.

The critical regions of the two tests are

    φ₁: reject when u² + v² > constant = μ₁, say,

and

    φ₂: reject when max(u², v²) > constant = μ₂, say,

where the constants are determined by the significance level of the test; they are the exteriors of the circle and the square in the following diagram.

[Diagram: the circle u² + v² = μ₁ and the square max(u², v²) = μ₂ in the (U, V)-plane, with the directions A and B along the axes and C and D along the diagonals.]
60
Consider the problem of comparing the powers of these two tests.
Neyman and Pearson obtained the values for the powers of these two
tests when the noncentrality is in the directions
In directions
A, B or C, D.
A and B only one of the parameters
nonzero, whereas, in the directions
C and D
I;
sand
= i 11.
1'\
is
Their
computations are given in the following table:
Power functions of the two tests
Direction
or
,A
test
¢1
0.0
B
¢1
C
¢2
¢1
0.050
0.050
0.050
0.2
.05;
.05;
.05;
0.4
.062
.062
.062
1.0
.1;;
.1;1
.13,
1.5
.249
.250
.249
2.0
.415
.422
.415
0.,69
2.5
.602
.614
.602
o•
;.0
;.5
.770
•78;
.770
0.702
.889
.899
.889
4.0
.956
.962
.956
0·92,
5.0
.996
.997
.996
0·991
Y
NoW ¢1
is a
=
L2
Jr!,
+ 11
chi~s.quare
direction but only on y
0.050
0.125
2
test and its power does not depend on
as can. be seen in the table.
From this computation Neyman and Pearson concluded that in the directions A and B the test φ₂ is 'slightly more powerful' than the test φ₁ when γ is large, but in the directions C and D the test φ₂ is 'considerably less powerful' than the test φ₁. In this section we shall explain this behavior of the power functions only partially, and pose some problems. Generalization of this problem will be considered in section 3.3.
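The entries of the table can be reproduced from the two critical regions. The sketch below recomputes the γ = 2.5 row with SciPy; the size-0.05 constants follow from the chi-square and normal distributions:

```python
from math import sqrt
from scipy.stats import chi2, ncx2, norm

mu1 = chi2.ppf(0.95, 2)                     # circle test: reject if u^2 + v^2 > mu1
c = norm.ppf((1 + sqrt(0.95)) / 2)          # square test: reject if max(|u|,|v|) > c

def power_phi1(gamma):
    """Power of the chi-square (circle) test; depends only on gamma."""
    return ncx2.sf(mu1, 2, gamma ** 2)

def power_phi2(xi, eta):
    """Power of the square test with means (xi, eta)."""
    accept = 1.0
    for m in (xi, eta):
        accept *= norm.cdf(c - m) - norm.cdf(-c - m)
    return 1.0 - accept

g = 2.5
axis = power_phi2(g, 0.0)                    # direction A (or B)
diag = power_phi2(g / sqrt(2), g / sqrt(2))  # direction C (or D)

# Along the axes phi2 beats phi1; along the diagonals phi1 beats phi2.
assert axis > power_phi1(g) > diag
```
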
Lemma 1: μ₂ ≤ μ₁.

Proof: The event u² + v² ≤ a implies the event max(u², v²) ≤ a. Therefore

    Prob( u² + v² ≤ a ) ≤ Prob( max(u², v²) ≤ a ).

Thus if Prob( u² + v² ≤ μ₁ ) = Prob( max(u², v²) ≤ μ₂ ), then μ₂ ≤ μ₁.

Lemma 2: If a > b > 0, then

    lim_{y→∞}  ∫_{−a+y}^{a+y} e^{−x²/2} dx  /  ∫_{−b+y}^{b+y} e^{−x²/2} dx  =  ∞.

Proof: As y → ∞ the ratio becomes indeterminate. We, therefore, by using L'Hospital's rule, get

    lim_{y→∞}  ∫_{−a+y}^{a+y} e^{−x²/2} dx / ∫_{−b+y}^{b+y} e^{−x²/2} dx
        = lim_{y→∞}  [ e^{−(a+y)²/2} − e^{−(a−y)²/2} ] / [ e^{−(b+y)²/2} − e^{−(b−y)²/2} ]
        = lim_{y→∞}  e^{−(a²−b²)/2} ( e^{−ay} − e^{ay} ) / ( e^{−by} − e^{by} )
        = lim_{y→∞}  e^{−(a²−b²)/2} e^{(a−b)y}
        = ∞.
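Lemma 2 is also easy to see numerically. The sketch below evaluates the two integrals through the normal survival function (a = 2 and b = 1 are illustrative):

```python
from scipy.stats import norm

def ratio(y, a=2.0, b=1.0):
    """Ratio of the integrals of exp(-x^2/2) over (y-a, y+a) and (y-b, y+b),
    computed from survival functions for accuracy far out in the tail."""
    num = norm.sf(y - a) - norm.sf(y + a)
    den = norm.sf(y - b) - norm.sf(y + b)
    return num / den

r = [ratio(y) for y in (2.0, 5.0, 8.0)]
# The ratio grows without bound as y increases, as the lemma asserts.
assert r[0] < r[1] < r[2] and r[2] > 100
```
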
Now, when the noncentrality γ is in the direction A or B, the probabilities of the errors of the second kind for the two tests φ₁ and φ₂ can be written as

    β₁(γ) = Prob{ u² + v² ≤ μ₁ | ξ = γ, η = 0 }

and

    β₂(γ) = Prob{ max(u², v²) ≤ μ₂ | ξ = γ, η = 0 },

each an integral of the normal density over the corresponding acceptance region at distance γ from the origin. Applying Lemma 2 with a = √μ₁ and b = √μ₂ (and noting, by Lemma 1, that μ₂ ≤ μ₁, with strict inequality here), it follows that there exists a number y** such that, given a number N however large,

    β₁(γ) / β₂(γ) > N  for  γ > y**.

It is then easy to deduce that there exists a number γ* such that for γ > γ*, that is for γ > γ* in the directions A or B, the test φ₂ is more powerful than the test φ₁.
At this stage we cannot say that γ* belongs to the range of γ considered in the computation. We can see from the table that for γ ≥ 1.5 the test φ₂ is more powerful than φ₁. If it is true that φ₂ being better than φ₁ for γ = γ₀ implies that it is better than φ₁ for all γ ≥ γ₀, then γ* ≤ 1.5. In the table we see that for γ = 1, φ₁ is better than φ₂, the corresponding values of the power being 0.133 and 0.131; intuitively one would not expect otherwise. A local comparison of φ₁ and φ₂ would be interesting. Though φ₂ would not be expected to be an L.M.P. test for the problem, one would expect φ₁ to be better than φ₂ locally, and only as good as φ₂ when the departure from the null hypothesis is in the directions A or B.

The test φ₁ is the likelihood ratio test for the problem of testing ξ = 0 = η against the violation of it; it is also a union-intersection test for the problem of testing the hypothesis ξ = 0 = η against the alternatives c₁ξ + c₂η ≠ 0, c₁, c₂ not vanishing simultaneously. The test φ₂ is the union-intersection principle test for testing the same hypothesis against the alternative that only one of ξ and η is nonzero.
3.2 The Union-Intersection Principle

A heuristic method of test construction was suggested by Roy [41]. This method, later called 'the union-intersection' method, is, in outline, as follows. Suppose that X is a stochastic variate with distribution depending on a parameter ξ. Suppose that we are interested in testing a hypothesis

    H₀: ξ ∈ Ξ₀  against  H₁: ξ ∈ Ξ₁.

Suppose that H₀ and H₁ are expressible as

    H₀ = ∩_{γ ∈ Γ} H₀γ  and  H₁ = ∪_{γ ∈ Γ} H₁γ,

where H₀γ and H₁γ are given by

    H₀γ: ξ ∈ Ξ₀γ  and  H₁γ: ξ ∈ Ξ₁γ,

and Γ is a finite or infinite index set. Suppose that, for each component problem of testing H₀γ against H₁γ, we have some good test procedure with acceptance region A_γ, γ ∈ Γ, say. Then the union-intersection principle dictates that the hypothesis H₀ be accepted over ∩_{γ ∈ Γ} A_γ and be rejected otherwise.

In this chapter we intend to study various properties of intersection tests, from both the classical power point of view and the modern loss-function-risk point of view. For this purpose we shall consider the problem of simultaneous analysis of variance, which may be considered as a generalization of the above problem due to Neyman and Pearson.
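In code, the principle is simply 'accept iff every component test accepts'. The sketch below (hypothetical helper names) shows how the Neyman and Pearson test φ₂ arises from intersecting the acceptance regions of the two component tests of ξ = 0 and of η = 0:

```python
C = 2.2365  # illustrative size constant for the two component tests

def ui_accept(x, acceptance_regions):
    """Union-intersection rule: accept H0 iff every component region accepts."""
    return all(region(x) for region in acceptance_regions)

regions = [
    lambda x: abs(x[0]) <= C,  # component test of xi = 0
    lambda x: abs(x[1]) <= C,  # component test of eta = 0
]

# The intersection region is exactly the square max(|u|, |v|) <= C.
for point in ((0.5, 1.0), (3.0, 0.2), (2.0, -2.5), (-1.0, 0.0)):
    assert ui_accept(point, regions) == (max(abs(point[0]), abs(point[1])) <= C)
```
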
3.3 Simultaneous Analysis of Variance

3.3.1 The Model

We shall borrow the following model for the simultaneous analysis of variance problem from Ramachandran [44]. Let x (n×1) denote a set of observations on n independent, normally distributed random variables with common variance σ² and means given by

    E(x) = D ξ,

where D (n×m) is a matrix of known constants such that rank(D) = r ≤ m < n, and ξ (m×1) is a vector of m unknown parameters. We wish to test, simultaneously, the k hypotheses on ξ

    H_i: C_i ξ = 0,  i = 1, 2, ..., k,

where the q×m matrix C is partitioned into submatrices C₁, C₂, ..., C_k of orders q_i×m, q = Σ_{i=1}^{k} q_i, and C is a given hypothesis matrix of rank s ≤ min(r, q).
Now the analysis of variance test for testing

    H_i: C_i ξ = 0

is given by the critical region

    F_i ≥ constant,

where F_i is the usual variance-ratio statistic and, as in section 1.1.1, D₁, D₂ and C_{i1}, C_{i2} are the partitions of D and C_i, and t_i is the rank of C_i. F_i defined above is distributed as a variance-ratio with t_i and (n−r) degrees of freedom when the hypothesis is true.

We shall assume that the k hypotheses are quasi-independent, i.e. that the condition making the numerator sums of squares of the F_i mutually independent holds for all combinations (i, j), i ≠ j; i, j = 1, 2, ..., k. Under these conditions the numerators of F_i, i = 1, 2, ..., k, are distributed as k independent χ²(t_i, 0)-variates under the null hypothesis, and as k independent χ²(t_i, δ_i)-variates, i = 1, 2, ..., k, under the alternative

    C ξ = η ≠ 0,  η' = (η₁', η₂', ..., η_k').

The simultaneous analysis of variance test accepts the hypothesis C ξ = 0 if, and only if,

    F_i ≤ μ_i,  i = 1, 2, ..., k.

The optimum choice for the μ_i is not known. Ramachandran [44] suggests that the μ_i be chosen proportional to the t_i and so as to satisfy the level condition

    Prob{ F_i ≤ μ_i, i = 1, 2, ..., k | H₀ } = 1 − α,  0 < α < 1.

We shall comment on this aspect in section 3.4, where we shall consider the problem of simultaneous analysis of variance as a multiple-decision problem. The distribution problem of the simultaneous analysis of variance has been solved by Ramachandran [42].
In the following section we shall compare the simultaneous analysis
of variance test with the corresponding analysis of variance test for a
restricted class of alternatives.
For this we shall need the following:
3.3.2 Canonical Form

Let Y_{ij} (i = 1, 2, ..., k; j = 1, 2, ..., t_i), Y_i (i = 1, 2, ..., r−s), Z_i (i = 1, 2, ..., n−r), where Σ_{i=1}^{k} t_i = s, be n independent normally distributed random variables with common variance σ² and means

    E(Y_{ij}) = ξ_{ij},  i = 1, 2, ..., k;  j = 1, 2, ..., t_i,
    E(Y_i) = ξ_i,  i = 1, 2, ..., r−s,
    E(Z_i) = 0,  i = 1, 2, ..., n−r.

We wish to test the hypothesis

    H₀: ξ_{ij} = 0,  i = 1, 2, ..., k;  j = 1, 2, ..., t_i,

against the alternative that the ξ_{ij}, i = 1, 2, ..., k; j = 1, 2, ..., t_i, are not all zero. The usual analysis of variance test rejects the hypothesis when

    Σ_{i=1}^{k} Σ_{j=1}^{t_i} Y_{ij}² / Σ_{i=1}^{n−r} Z_i²  >  constant (μ*, say).

The simultaneous analysis of variance test rejects the hypothesis when

    Σ_{j=1}^{t_i} Y_{ij}² / Σ_{i=1}^{n−r} Z_i²  >  constant (μ_i, say)

for at least one i = 1, 2, ..., k.
3.3.3 Comparison of the analysis of variance test and the simultaneous analysis of variance test

For this comparison let us assume that t₁ = t₂ = ... = t_k = t. The situations when all hypothesis degrees of freedom are equal are very common in the analysis of factorial designs, where we simultaneously test the significance of a number of effects and interactions. We shall assume that we are testing

    H₀: ξ_{ij} = 0,  i = 1, 2, ..., k;  j = 1, 2, ..., t,

against

    H₁: only one of the k t-vectors (ξ_{i1}, ξ_{i2}, ..., ξ_{it}), i = 1, 2, ..., k, is nonnull.

This alternative corresponds to what Neyman and Pearson [43] call a shift in a particular direction. An interest in restricted alternatives of this type has also been shown by Roy [50], where some advantages of using the union-intersection principle in certain experimental situations are considered. It has been conjectured there that the simultaneous analysis of variance test with t = 1 will be more powerful than the corresponding analysis of variance test if the departure from the hypothesis is in particular directions. We shall show that this is true, at least for sufficiently large deviations from the hypothesis in the particular directions.
Let

    η_i = Σ_{j=1}^{t} ξ_{ij}² / (2σ²),  i = 1, 2, ..., k.

Then we can express our hypothesis as: all η_i = 0, i = 1, 2, ..., k, and the alternative as: only one of the η_i's is nonzero, say η. The simultaneous analysis of variance test accepts the hypothesis when

    Σ_{j=1}^{t} Y_{ij}² ≤ μ Σ_{i=1}^{n−r} Z_i², say,

for all i = 1, 2, ..., k, and rejects it otherwise. The analysis of variance test accepts the hypothesis when

    Σ_{i=1}^{k} Σ_{j=1}^{t} Y_{ij}² ≤ μ* Σ_{i=1}^{n−r} Z_i², say,

and rejects it otherwise.
We can, therefore, write the probabilities of errors of the second kind for the two tests, respectively, as

    β = ∫₀^∞ χ²(v; n−r, 0) [ ∫₀^{μv} χ²(u₁; t, η) du₁ ] ∏_{i=2}^{k} [ ∫₀^{μv} χ²(u_i; t, 0) du_i ] dv

and

    β* = ∫₀^∞ χ²(v; n−r, 0) [ ∫₀^{μ*v} χ²(u; kt, η) du ] dv,

where χ²(·; m, δ) denotes the density of a chi-square variate with m degrees of freedom and noncentrality parameter δ, and μ and μ* are the constants mentioned above, being determined by

    ∫₀^∞ χ²(v; n−r, 0) ∏_{i=1}^{k} [ ∫₀^{μv} χ²(u_i; t, 0) du_i ] dv = 1 − α

and

    ∫₀^∞ χ²(v; n−r, 0) [ ∫₀^{μ*v} χ²(u; kt, 0) du ] dv = 1 − α,
α being the level of significance. Each of β and β* can be rewritten with the noncentral factor expressed as an integral of e^{−x²/2} against a central chi-square integral with one fewer degree of freedom, χ²(·; t−1, 0) and χ²(·; kt−1, 0) respectively. Now it can, as before, be easily seen that μ < μ*. An application of Lemma 2 will then give us the existence of an η* such that for values of η greater than η* the simultaneous analysis of variance test will be more powerful than the analysis of variance test. What happens for η < η* is not known. However, it is believed that the simultaneous analysis of variance test would be at least as powerful as the analysis of variance test for small values of the noncentrality parameter η when it is in the restricted directions. We may summarize this argument in the following:
Theorem 1: For the restricted simultaneous analysis of variance problem, as stated above, there exists an η* > 0 such that for η ≥ η* the simultaneous analysis of variance test is more powerful than the analysis of variance test.
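A known-variance analogue of Theorem 1 can be computed exactly with chi-square distributions. Here k = 3 component hypotheses of t = 2 degrees of freedom each, at overall level 0.05, are illustrative choices; only the first component is noncentral:

```python
from scipy.stats import chi2, ncx2

alpha, k, t = 0.05, 3, 2
mu = chi2.ppf((1 - alpha) ** (1 / k), t)    # common componentwise cutoff
mu_star = chi2.ppf(1 - alpha, k * t)        # cutoff of the overall chi-square test

def power_simultaneous(delta):
    """Reject if any component exceeds mu; component 1 has noncentrality delta."""
    return 1 - ncx2.cdf(mu, t, delta) * chi2.cdf(mu, t) ** (k - 1)

def power_overall(delta):
    return ncx2.sf(mu_star, k * t, delta)

# Both tests have size alpha, but for a large shift in one component
# the simultaneous (intersection) test is the more powerful of the two.
assert abs(1 - chi2.cdf(mu, t) ** k - alpha) < 1e-8
assert power_simultaneous(40.0) > power_overall(40.0)
```
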
In the following section we shall consider the simultaneous analysis of variance procedure as a multiple decision procedure.
3.4 Modified Simultaneous Analysis of Variance, a Multiple-decision Procedure

The problems in analysis of variance, for which the simultaneous analysis of variance procedure was developed, are more in the nature of multiple-decision problems than in the nature of two-decision problems. To illustrate this, we may consider the analysis of variance of classical factorial experiments. In this the experimenter is more interested in knowing which of the effects and the interactions are significant and which are not than in testing some total hypothesis.

In this section we shall discuss properties of the following modification of the procedure in section 3.3.1. In the notation of that section, suppose that we have tested the k hypotheses

    H_i: C_i ξ = 0  against  H̄_i,  i = 1, 2, ..., k,

by the tests of that section, which depend upon the test statistics F_i, i = 1, 2, ..., k, at levels of significance α_i, 0 < α_i < 1, i = 1, 2, ..., k. Then the modified simultaneous analysis of variance procedure dictates that among these k hypotheses only those accepted by the component F-tests be accepted and the remaining be rejected.

This modified procedure of the simultaneous analysis of variance belongs to the following formulation of multiple-decision problems (Lehmann [36]). Let X be a stochastic variable with its distribution depending upon a parameter θ ∈ Θ, where Θ is the parameter space. Suppose that we are interested in testing a number of hypotheses

    H_{0i}: θ ∈ Θ_i ⊂ Θ  against  H_{1i}: θ ∈ Θ − Θ_i,  i = 1, 2, ..., k,

concerning the parameter θ, and in determining, simultaneously, which of these hypotheses are true and which are false. Suppose that we use procedures π_i (i = 1, 2, ..., k) for testing H_{0i}, and as a simultaneous decision accept only those which are accepted by the individual procedures. Let us also suppose that the total procedure based on the individual procedures is logically consistent, that is, there are no contradictions in the possible decisions arrived at by this procedure based on the component procedures.
For determining some properties of such a total procedure in relation to those of the component procedures, let us assume the following loss function. Let a_i and b_i (i = 1, 2, ..., k) be the losses due to falsely rejecting and falsely accepting H_{0i} by procedure π_i, where the a_i and b_i are not necessarily constant with respect to i. Notice that such a loss function is appropriate for the kind of problems in classical factorial experiments which we have mentioned: losses due to errors in misjudgment concerning main effects and first order interactions, in these experiments, may be more serious than those due to higher order interactions. For this loss function, it may be noted, the risk function is simply a weighted sum of the probabilities of the errors of the two kinds.

Let us further assume that the losses are additive, that is, the loss due to the total procedure is the sum of the losses for the individual procedures.
With this set-up in mind, suppose that we test each of the hypotheses H_{0i} (i = 1, 2, ..., k) at the levels of significance

    α_i = b_i / (a_i + b_i),  i = 1, 2, ..., k.

Now the analysis of variance test is strictly unbiased in the classical, i.e. Neyman and Pearson [43], sense. It is, therefore, unbiased in the Lehmann [36] sense, i.e. the average loss using the analysis of variance test is minimum when the decision is correct. Because of the nature of the loss function, the total risk of the total procedure is the sum of the component risks of the component procedures. It, therefore, follows that when the individual F-tests are used at the levels of significance α_i, i = 1, 2, ..., k, the modified simultaneous analysis of variance procedure is unbiased in the Lehmann sense. This gives us a rule concerning the choice of the levels of significance for the individual F-tests of the modified simultaneous analysis of variance procedure. The distribution problem connected with the test has been solved by Ramachandran [42].
We shall next prove that the modified simultaneous analysis of variance procedure is a Bayes solution. Towards this end we shall state a result due to Duncan [7, 8] and Lehmann [36].

Lemma 3: If the component procedures π_i, i = 1, 2, ..., k, are Bayes solutions with respect to a common a priori distribution on the parameter space, then the total procedure is also a Bayes solution with respect to the same a priori distribution.

Now it is well known (Hsu [22], Wald [63] and Wolfowitz [65]) that the analysis of variance test is a Bayes solution with respect to an a priori distribution, in the space of parameters, which is uniform on the surfaces where the noncentrality parameter is constant. In the case of the modified simultaneous analysis of variance procedure we can consider an a priori distribution Λ, say, which is uniform on each of the k spheres

    Σ_{j=1}^{t_i} ξ_{ij}² = c_i² σ²,  i = 1, 2, ..., k,

for all real numbers c_i. Then each of the component procedures is a Bayes solution with respect to this a priori distribution. Therefore, by virtue of the above lemma, the modified simultaneous analysis of variance procedure is also a Bayes solution with respect to this a priori distribution. We may summarize the results of this section in the following:

Theorem 2: The modified simultaneous analysis of variance procedure is unbiased if the levels of significance of the component tests are α_i = b_i/(a_i + b_i), i = 1, 2, ..., k. Furthermore, the modified simultaneous analysis of variance procedure is a Bayes solution for the problem when Λ is the a priori distribution in the parametric space.
3.5 A Property of the Power Functions of Some Intersection Tests

The simultaneous analysis of variance test considered in the previous section is not the union-intersection test for the problem stated there. The union-intersection test accepts the hypothesis when all k quantities

    Σ_{j=1}^{t} Y_{ij}² / σ²,  i = 1, 2, ..., k,

are less than a constant, and rejects it otherwise. When σ², the population variance, is known, this test coincides with the corresponding test given by the simultaneous analysis of variance procedure as well as that given by the likelihood ratio principle. This test has an intersection of k chi-square test acceptance regions for its acceptance region. In this section we shall examine the variation of the power function of this test, with the intersection of chi-square test acceptance regions for its acceptance region, with respect to the number of intersections. This problem arises in the following context also.

Let X₁, X₂, ..., X_k be independent normally distributed random variables with variance unity and means E(X₁) = θ, E(X_i) = 0, i = 2, 3, ..., k. Consider the problem of testing H₀: θ = 0 against H₁: θ ≠ 0. We know that the U.M.P. test accepts H₀ only when X₁² ≤ constant. Other valid acceptance regions for the hypothesis are

    B_j: Σ_{i=1}^{j} X_i² ≤ constant, say,  j = 2, 3, ..., k,

and

    A_j: X_i² ≤ constant, say, i = 1, 2, ..., j,  j = 2, 3, ..., k.

We are interested in comparing the powers of these tests. We know that, for sufficiently large alternatives |θ|, A_j will be more powerful than B_j. It is felt that A_j will be at least as good as B_j for all |θ| > 0. Whether A_{j−1}, the acceptance region with j−1 intersections, will be more powerful than B_j, the chi-square region with j degrees of freedom, for all |θ| > 0 is an open problem.
76
In what follows we shall show that for the intersection of chi-squares
tests, the power decreases as the number of intersections increases.
Theorem 3;
Let k
and k* be positive integers,
k* > k.
Let
then
<
Proof:
It is sufficient to prove that
2
1~
. X (Xi
(
t, ~) dx
x2(x;
¥*
2
X (x; t, ~) dx
J"
<
1
~2
X (Xi t, 0) dx
t, 0) dx
Towards this end we shall need the following:
ill X2 (Xi n, 6) dx
Lemma 4:
=
J'\2(x; n, 0) dx
o
is a monotonically increasing function of
~
for each fixed 0 > 0 •
This lemma can be proved by stnightforward differentiation of
R(IJ., 8) w.r.t.
~.
However, as the distribution of a noncentral chi-
square variate has monotone likelihood ratio (M.L.R.), (Lehmann £)4,
32,7) it is interesting to note that the above lemma is a particular
case of the following lemma due to Hall
f117.
.~
77
Lemma 5:
Let a continuous random variable
of densities
for
Q'
<
fg(X)
X
have a M.L.R. family
with distribution functions
Q, the ratio
FQ(X)/FQ,(x)
FQ(X)
•
Then
is a monotonically increasing
function of x.
Proof:
Let us write
f g , (x) = g{x).
sgn
Fg(X)
= F(X),
fQ(X)
= f(x)
, FQ.(X)
= G(x),
Then
f ~ ~~~~
_7
= sgn ff(x)
G(X) - F(X) g(x>-7
X
=
Now
fg
ftX~ >
gx
is a M.L.R.
fey)
irYJ
•
Thus
sgn
£_1 £ ~f~l - ~!~
family of densities.
'
An observation that
Therefore, for
y
<x ,
the integrand in the last step being nonnegative,
the integral is positive.
increasing function of
3g(x).g(y) dy3
x.
It follows, therefore, that
F(X)/G{x)
is an
The proof of theorem is thus complete.
IJ.*
>
IJ.
completes the proof of Theorem 3.
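Both Theorem 3 and Lemma 4 can be verified numerically (t = 2 and δ = 5 are illustrative choices):

```python
from scipy.stats import chi2, ncx2

alpha, t, delta = 0.05, 2, 5.0

def power(k):
    """Power of the intersection of k size-adjusted chi-square tests when
    only the first component is noncentral with parameter delta."""
    mu = chi2.ppf((1 - alpha) ** (1 / k), t)
    return 1 - ncx2.cdf(mu, t, delta) * chi2.cdf(mu, t) ** (k - 1)

# Theorem 3: power decreases as the number of intersections increases.
pw = [power(k) for k in (1, 2, 4, 8)]
assert all(a > b for a, b in zip(pw, pw[1:]))

# Lemma 4: the ratio of the noncentral to the central c.d.f. increases in mu.
ratios = [ncx2.cdf(m, t, delta) / chi2.cdf(m, t) for m in (1.0, 3.0, 6.0, 12.0)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```
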
CHAPTER IV

SIMULTANEOUS INTERVAL ESTIMATION

4.0 Introduction and Summary

Even though the concept of simultaneous interval estimation is old (Working and Hotelling [67]), its application to the problem of multiple comparisons in analysis of variance is comparatively recent (Scheffé [54, 55], Tukey [59, 60]). The method of simultaneous interval estimation was first systematized by Roy and Bose [53] and Roy [48]. They obtained both the Scheffé bounds and the Tukey bounds as particular cases of their argument. It is well known that pairwise comparisons by Scheffé's method compare unfavorably with those by Tukey's method.

In this chapter we shall discuss the problem of simultaneous interval estimation in univariate and multivariate analysis of variance situations. We shall obtain various solutions to the problem in the univariate analysis of variance situation and extend them to the multivariate situation. In section 4.4 we shall obtain a theorem which will explain the sharpness of the Tukey bounds as compared with the Scheffé bounds, and of the Dunnett bounds as compared with both these bounds. Using the theorem we can extend these sharpness properties from the univariate to the multivariate situation.
4.1 Simultaneous Interval Estimation

Let y = (y₁, y₂, ..., y_n) be an observed set of random variables with distribution depending on a set of unknown parameters ξ = (ξ₁, ξ₂, ..., ξ_k), ξ ∈ Ξ, where Ξ is a set. Let

    { ψ_γ = ψ_γ(ξ),  γ ∈ Γ },

where Γ is a finite or infinite index set, be a set of parametric functions. The problem is that of making simultaneous confidence statements

    l_γ ≤ ψ_γ ≤ u_γ,  γ ∈ Γ,

with a known simultaneous confidence coefficient (1 − α), 0 < α < 1.

This problem occurs in several branches of applied statistics, especially in the analysis of variance and covariance of experiments and in regression analysis. The earliest nontrivial example of simultaneous interval estimation of the kind to be discussed in this chapter occurs in the paper of Working and Hotelling [67], where they have obtained a confidence bound for a regression line, which may be regarded as a set of simultaneous confidence statements on all linear functions c₁β₁ + c₂β₂ (c₁ ≠ 0) of the regression parameters β₁ and β₂. Tukey [59, 60] and Scheffé [54, 55] have given methods for the simultaneous interval estimation of, respectively, the k(k−1)/2 pairwise contrasts and all linear contrasts of the parameters ξ₁, ξ₂, ..., ξ_k.
80
The earliest systematic investigation of the problem, however,
appears to be due to Roy and Bose £5]}.
Examples of confidence in-
terval estimation can be found in, Roy and Bose £5]}, Roy £4§.7 '
D'wass £l]}, Dunnet £27, Durand ["12.7, Hoel £12,.7, Mandel £3§}
Scheffe
be
f5§!.
Ro~,r
~ under
Lemma.:
and Bose
f5!:.7
?'
have shown that the problem, ·can
following circumstances:
the
Roy and Bose £51:.7
Suppose that it is possible to find a set of functions
Y
€
r
Y
€
r
Y
€
r
such that
III <
= ¢y
<
,
112
=
is equivalent to
fy
where
<
=
<
w
u
=
y
y
,
(.~. )
III and 112 are constants independent of y.
For a given
let
<
;::;
be the set of points
true.
y
y
~
Is}
in the sample space En'
Let
;::;
If
112
rl
€
r
is a Borel set for each
S,
and
1 - a~
0
<a<
1
,
S
.e
81
S
are true with the same probability•
•r-'
:!... , then (.x.)
•
€
4.2 Simultaneous Interval Estimation in the Univariate Analysis of Variance

Without any loss of generality we may confine ourselves to two-factor design terminology.

4.2.1 Univariate Model

Let y' = (y₁, y₂, ..., y_n), where y₁, y₂, ..., y_n are independent, normally distributed random observables with common variance σ² and means E(y_i), i = 1, 2, ..., n, given by

    E(y) = A τ + X β,

where

(i) A (n×v) and X (n×b) are matrices of known constants depending upon the design of the experiment, with rank(A) = r and rank(A : X) = r + q, r + q < n;

(ii) τ (v×1) and β (b×1) are vectors of unknown parameters. In the design terminology the elements of τ may be regarded as the v treatment effects and the elements of β as the b block effects.

4.2.2 Univariate Estimation

The experimenter is, usually, interested in testing the significance of the treatment effects, i.e. testing the hypothesis

    H₀: τ₁ = τ₂ = ... = τ_v = 0,

and in estimating some linear contrasts of the treatment effects.
82
Suppose that the experimenter is, specially, interested in a particular set of contrasts
where
~
v
2
1: c
j=l j
and the rank of
~
\)
being the space of all real vectors
=
v
1: c
j=l j
,
1
=
0
> (v-l) •
Under these circumstances we can express the null hypothesis as
H
o,~
where H : c' T
oc
-Ho against
-1
where H
denotes
=
Let us consider the problem of testing
0
'not H'.
Suppose that the
T'S
are estimable.
Let
be the best unbiased linear estimate of the treatment effects
T'
Then e' t
= (Tl'
••••• , TV)
is the best unbiased linear estimate of c'
be the estimate of the variance of
r
Then _~'
2
(.:!: - !.t72 Is c
~'.:!:,
with ne
T •
Let
s
2
c
degrees of freedom.
is distributed as the square of Student's
t
.e
With n
e
degrees of freedom.
We have, therefore, a test for
Hoc
as:
Accept if, and only if,
<
=
<
i.e.
=
constant
<
=
2
=
IJ. (,,),0;
1J.(,,),0;
say
s;:
Thus by the union-intersection principle we accept Hover
o
< c'
= -
t
where 1J.(,,),o; is given by
It is easy to see that from this test procedure we get the following
set of simultaneous interval estimates for
c'
T,
C €
6"
with the simultaneous confidence coefficient 1 - 0; •
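In computational terms, once a critical constant $\mu$ is in hand, the simultaneous intervals are simply $c'\hat\tau \pm \mu\, s_c$ over the chosen contrast set. A minimal Python sketch (all numbers are invented for illustration, and `simultaneous_intervals` is not a library routine):

```python
import numpy as np

def simultaneous_intervals(tau_hat, contrasts, s_c, mu):
    """For each contrast c (with its standard error s), return the interval
    c' tau_hat - mu * s  <=  c' tau  <=  c' tau_hat + mu * s."""
    out = []
    for c, s in zip(contrasts, s_c):
        centre = float(np.dot(c, tau_hat))
        out.append((centre - mu * s, centre + mu * s))
    return out

# Illustration with made-up numbers: three treatment estimates, two
# normalized contrasts, each with estimated standard error 0.5, mu = 2.8.
tau_hat = np.array([1.0, 2.0, 4.0])
contrasts = [np.array([1, -1, 0]) / np.sqrt(2),
             np.array([1, 0, -1]) / np.sqrt(2)]
intervals = simultaneous_intervals(tau_hat, contrasts, [0.5, 0.5], 2.8)
```

All intervals share the one constant $\mu$; it is the choice of contrast set that determines how large $\mu$ must be, which is the subject of the applications below.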
4.2.3 Applications

By considering different sets $\Delta_\nu \subseteq \Delta$, we shall indicate how some of the previous results in this field are particular cases of this set-up, essentially due to Roy and Bose [51].

(i) $\Delta_\nu = \Delta$, the entire space. In this case we get the simultaneous interval estimates for all the contrasts $c'\tau$, with a specified simultaneous confidence coefficient $(1 - \alpha)$. This is due to Scheffé [54, 55]. The constant $\mu_{(\nu),\alpha}$ is determined from the tables of the distribution of the variance ratio.

(ii) $\Delta_\nu = \Delta_o$, a set of $(v-1)$ orthogonal vectors $c \in \Delta$. This gives us the simultaneous interval estimates for a set of $(v-1)$ orthogonal contrasts $c'\tau$. The distribution problem, in this case, has been solved by K. V. Ramachandran [45].

(iii) $\Delta_\nu = \Delta_p$, the set of $\binom{v}{2}$ vectors $c \in \Delta$ such that all elements of $c$, except two, are zero, the nonnull elements being $\pm 1/\sqrt{2}$. In this case we get the simultaneous interval estimates for all the pairwise contrasts $(\tau_i - \tau_j)/\sqrt{2}$. The determination of the constant $\mu_{(\nu),\alpha}$ uses the distribution of the studentized range statistic. This solution is due to Tukey [60].

(iv) $\Delta_\nu = \Delta_f$, a set of $(v-1)$ vectors $c \in \Delta$ such that a fixed element of $c$ is $1/\sqrt{2}$, only one of the remaining is $-1/\sqrt{2}$, and the rest are zero. This gives us the simultaneous interval estimates of the pairwise comparisons of the treatments with a control. The solution is due to Dunnett [9].
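The critical constants of cases (i) and (iii) can be computed from standard distributions. A hedged sketch in Python with SciPy follows; the values of $v$, $n_e$ and $\alpha$ are made up for illustration, the Scheffé constant uses the standard relation $\mu^2 = (v-1)F_{\alpha;\,v-1,\,n_e}$, and `scipy.stats.studentized_range` requires SciPy 1.7 or later:

```python
import numpy as np
from scipy.stats import f, studentized_range

alpha, v, ne = 0.05, 5, 20   # made-up: 5 treatments, 20 error d.f.

# Scheffe, case (i), all contrasts: mu^2 = (v - 1) * F_{alpha; v-1, ne}.
mu_scheffe = np.sqrt((v - 1) * f.ppf(1 - alpha, v - 1, ne))

# Tukey, case (iii), pairwise contrasts (tau_i - tau_j)/sqrt(2): the
# constant comes from the studentized range; with the contrasts
# normalized as above it is q_{alpha; v, ne} / sqrt(2).
mu_tukey = studentized_range.ppf(1 - alpha, v, ne) / np.sqrt(2)

print(mu_scheffe > mu_tukey)   # smaller contrast set, sharper constant
```

That the Tukey constant comes out smaller than the Scheffé constant for the same data is exactly the monotonicity phenomenon proved in Section 4.4.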
4.3 Simultaneous Interval Estimation in Multivariate Analysis of Variance

4.3.1 Multivariate Model

Again, with no loss of generality, we speak in terms of a two-factor design model. Let $\underset{n\times p}{Y}$ be a matrix with the $n$ $p$-dimensional random observables $y_1, \ldots, y_n$ as the columns of $Y'$. Let $y_1, y_2, \ldots, y_n$ be independent and distributed in a $p$-variate normal distribution, with common positive definite variance-covariance matrix $\Sigma$ and means $E(y_i)$, $i = 1, 2, \ldots, n$, given by

$$E(\underset{n\times p}{Y}) = \underset{n\times v}{A}\,\underset{v\times p}{\tau} + \underset{n\times b}{X}\,\underset{b\times p}{\beta},$$

where

(i) $A$ and $X$ are two matrices with the same meaning as in the univariate case;

(ii) $\tau$ and $\beta$ are matrices whose $v$ and $b$ rows, respectively, signify the treatment effects and the block effects.

Let $\tau_1', \tau_2', \ldots, \tau_v'$ be the $v$ rows of $\tau$. Then we define a contrast-vector, a multivariate analogue of a contrast, as a vector $\underset{1\times v}{c'}\,\underset{v\times p}{\tau}$, where $c \in \Delta$, the same space of real vectors as in the previous section.
4.3.2 Multivariate Estimation

The experimenter, as before, is interested in testing the hypothesis of no treatment effects, i.e.

$$H_0 : \tau_1 = \tau_2 = \cdots = \tau_v = 0,$$

and making simultaneous confidence statements about the $\tau$'s. In this case it is possible to make simultaneous confidence interval estimates of all the linear compounds, like $c'\tau a$, $c \in \Delta$, $a \in \mathcal{A}$, where $\mathcal{A}$ is the space of all real, nonnull $p$-dimensional vectors $a$, of a contrast-vector $c'\tau$. See Roy and Bose [51].

Again suppose that the experimenter is interested in a particular set of contrast-vectors specified by a set $\Delta_\nu \subseteq \Delta$, rank$(\Delta_\nu) \ge v - 1$. We can decompose the hypothesis of no treatment effects as

$$H_0 = \bigcap_{c \in \Delta_\nu,\; a \in \mathcal{A}} H_{0,c,a}, \quad \text{where } H_{0,c,a} : c'\tau a = 0.$$

Now consider the problem of testing $H_0$ against $H_1$, where $H_1$ denotes 'not $H_0$'. Suppose that the elements of $\underset{v\times p}{\tau}$ have for their maximum likelihood estimates the corresponding elements of a matrix $\underset{v\times p}{V}$. Also let $S_c$ denote the covariance matrix of the estimate $c'V$ of the contrast-vector $c'\tau$. Then the variance of the linear compound $c'Va$ is $a'S_c a$. We have, then, a test for $H_{0,c,a}$ against $H_{1,c,a}$ whose acceptance region is

$$\frac{(c'Va - c'\tau a)^2}{a' S_c\, a} \le \text{constant} = \mu^2_\alpha(\nu), \text{ say.}$$

Then, as before, by the union-intersection principle we accept $H_0$ over $\Delta_\nu$ if, and only if,

$$\bigcap_{c \in \Delta_\nu}\; \bigcap_{a \in \mathcal{A}} \big\{ (c'Va - c'\tau a)^2 \le \mu^2_\alpha(\nu)\, a' S_c\, a \big\},$$

which is known to be equivalent to

$$\bigcap_{c \in \Delta_\nu} \big\{ T_c^2 \le \mu^2_\alpha(\nu) \big\},$$

where $T_c^2$ is the Hotelling's $T^2$-statistic for testing $c'\tau = 0$, and $\mu_\alpha(\nu)$ is determined by

$$\Pr\Big[ \bigcap_{c \in \Delta_\nu} \big\{ T_c^2 \le \mu^2_\alpha(\nu) \big\} \;\Big|\; H_0 \Big] = 1 - \alpha, \quad 0 < \alpha < 1.$$

Notice that, as before, the acceptance region $A(\nu)$ gives us a set of simultaneous interval estimates for all linear compounds of the contrast-vectors $c'\tau$, $c \in \Delta_\nu$, namely

$$c'Va - \mu_\alpha(\nu)\sqrt{a' S_c\, a} \;\le\; c'\tau a \;\le\; c'Va + \mu_\alpha(\nu)\sqrt{a' S_c\, a}, \quad c \in \Delta_\nu,\; a \in \mathcal{A},$$

with the simultaneous confidence coefficient equal to $1 - \alpha$.
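When $\Delta_\nu$ contains a single contrast-vector $c$, the constant above is just the Hotelling $T^2$ critical value, obtainable through the familiar $F$-transform; over a larger set $\Delta_\nu$ the simultaneous constant $\mu_\alpha(\nu)$ is generally larger and must come from the joint distribution. A sketch of the single-contrast computation (Python with SciPy; the function names and numerical inputs are illustrative assumptions, not part of the text):

```python
import numpy as np
from scipy.stats import f

def t2_constant(p, ne, alpha):
    """Critical value of Hotelling's T^2 with dimension p and ne error d.f.,
    via the standard relation T^2 ~ p*ne/(ne - p + 1) * F(p, ne - p + 1)."""
    return p * ne / (ne - p + 1) * f.ppf(1 - alpha, p, ne - p + 1)

def compound_interval(cV, S_c, a, mu2):
    """Interval for the linear compound c' tau a:
       c'V a  -/+  mu * sqrt(a' S_c a),  with mu = sqrt(mu2)."""
    centre = float(a @ cV)
    half = np.sqrt(mu2) * np.sqrt(float(a @ S_c @ a))
    return centre - half, centre + half

# Made-up numbers: p = 2 responses, ne = 18 error d.f., one contrast whose
# estimate c'V is (1.0, 0.5) with covariance matrix 0.04 * I.
mu2 = t2_constant(2, 18, 0.05)
lo, hi = compound_interval(np.array([1.0, 0.5]),
                           0.04 * np.eye(2), np.array([1.0, 0.0]), mu2)
```

Maximizing the squared studentized compound over all $a$ yields exactly $T_c^2$, which is why every compound of this one contrast is covered by the single constant `mu2`.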
4.3.3 Applications

As in the univariate case we shall see that different subsets $\Delta_\nu$ of $\Delta$ yield different sets of simultaneous confidence interval estimates.

(i) $\Delta_\nu = \Delta$. In this case we get the simultaneous interval estimates for all the linear compounds of all contrast-vectors $c'\tau$. The determination of the constant $\mu_\alpha(\nu)$ is done by using the distribution of the maximum characteristic root. This is a generalization of Scheffé's procedure to the multivariate set-up.

(ii) $\Delta_\nu = \Delta_o$. We get the simultaneous confidence interval estimates on all the linear compounds of a set of orthogonal contrast-vectors.

(iii) $\Delta_\nu = \Delta_p$. We get the estimates on all the linear compounds of the set of pairwise contrast-vectors. The distribution problem is unsolved.

(iv) $\Delta_\nu = \Delta_f$. We get the estimates on all the linear compounds of a set of comparisons-with-a-control contrast-vectors.
4.4 Comparison of Various Bounds

Now we shall state and prove a theorem concerning the nature of the constants $\mu_{(\nu),\alpha}$ and $\mu_\alpha(\nu)$ as functions from the class of subsets of $\Delta$ to the positive real line.

Theorem: $\mu_{(\nu),\alpha}$ and $\mu_\alpha(\nu)$ are monotone set functions from the class of subsets $\Delta_\nu$ of $\Delta$ to the positive real line.

Proof: We have to show that if $\Delta_\nu \subset \Delta_{\nu'}$, then $\mu_{(\nu),\alpha} \le \mu_{(\nu'),\alpha}$ and $\mu_\alpha(\nu) \le \mu_\alpha(\nu')$, where $\mu_{(\nu),\alpha}$ and $\mu_{(\nu'),\alpha}$ are defined by

$$\Pr\Big[ \bigcap_{c \in \Delta_\nu} \big\{ (c'\hat\tau - c'\tau)^2 / s_c^2 \le \mu^2_{(\nu),\alpha} \big\} \;\Big|\; H_0 \Big] = 1 - \alpha$$

and

$$\Pr\Big[ \bigcap_{c \in \Delta_{\nu'}} \big\{ (c'\hat\tau - c'\tau)^2 / s_c^2 \le \mu^2_{(\nu'),\alpha} \big\} \;\Big|\; H_0 \Big] = 1 - \alpha.$$

Now $\Delta_\nu \subset \Delta_{\nu'}$ implies

$$\bigcap_{c \in \Delta_{\nu'}} \big\{ (c'\hat\tau - c'\tau)^2 / s_c^2 \le \mu^2_{(\nu'),\alpha} \big\} \;\subseteq\; \bigcap_{c \in \Delta_\nu} \big\{ (c'\hat\tau - c'\tau)^2 / s_c^2 \le \mu^2_{(\nu'),\alpha} \big\},$$

so that

$$\Pr\Big[ \bigcap_{c \in \Delta_\nu} \big\{ (c'\hat\tau - c'\tau)^2 / s_c^2 \le \mu^2_{(\nu'),\alpha} \big\} \;\Big|\; H_0 \Big] \ge 1 - \alpha,$$

and hence $\mu_{(\nu),\alpha} \le \mu_{(\nu'),\alpha}$. This proves the result for the univariate case. The argument for $\mu_\alpha(\nu)$ is analogous.

This theorem provides us with the explanation of the sharpness of the Dunnett bounds in comparison with the Tukey bounds, and of the Tukey bounds as compared with the Scheffé bounds, when all three sets of bounds are obtained from the same data. The reason for the sharpness of the geometrical width of bounds on a set of orthogonal contrasts in comparison with that of the Scheffé bounds is the same. It is easy to see that these comparisons of the geometrical width extend to the multivariate extensions of the various bounds. However, we still have no means of comparing the sharpness of bounds on two sets of contrasts when one of the sets is not contained in the other, e.g. the set of pairwise contrasts and the set of orthogonal contrasts.
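The monotonicity in the theorem can also be seen numerically: simulating under $H_0$ and taking the $(1-\alpha)$-quantile of the maximum studentized contrast over a nested pair of contrast sets gives a smaller constant for the smaller set. A Monte Carlo sketch (Python; the toy setting with independent estimates of known unit variance, so that $s_c = 1$ for every normalized contrast, is an assumption made purely for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: v = 4 independent treatment estimates with known unit
# variance, simulated under H0, so each normalized contrast has s_c = 1.
v, reps, alpha = 4, 20000, 0.05
Z = rng.standard_normal((reps, v))   # rows are draws of tau_hat under H0

def pairwise(v):
    """All (v choose 2) pairwise contrasts, normalized to sum of squares 1."""
    out = []
    for i in range(v):
        for j in range(i + 1, v):
            c = np.zeros(v); c[i], c[j] = 1, -1
            out.append(c / np.sqrt(2))
    return np.array(out)

def mu_hat(contrasts):
    """Monte Carlo (1 - alpha)-quantile of max_c |c' tau_hat| over the set."""
    stats = np.abs(Z @ contrasts.T).max(axis=1)
    return np.quantile(stats, 1 - alpha)

control = pairwise(v)[:v - 1]        # comparisons with treatment 1 only
print(mu_hat(control) <= mu_hat(pairwise(v)))   # smaller set, smaller mu
```

Because the comparisons-with-a-control set is contained in the pairwise set, the maximum over the smaller set is dominated draw by draw, so the inequality between the estimated constants holds in every replication, not merely on average.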
BIBLIOGRAPHY

[1] Anderson, T. W. (1948). "The asymptotic distribution of the roots of certain determinantal equations." J. Roy. Stat. Soc. Ser. B 10, 132-139.
[2] Anderson, T. W. (1950). "The asymptotic distribution of certain characteristic roots and vectors." Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley and Los Angeles, 103-130.
[3] Anderson, T. W. (1955). "The integral of a symmetric unimodal function." Proc. Amer. Math. Soc. 6, 170-176.
[4] Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. New York, John Wiley and Sons, Inc.
[5] Bancroft, T. A. (1950). "Probability values for the common tests of significance." J. Amer. Stat. Assoc. 45, 211-217.
[6] Bellman, R. E. (1960). Introduction to Matrix Analysis. New York, McGraw-Hill.
[7] Duncan, D. B. (1955). "Multiple range and multiple F-tests." Biometrics 11, 1-42.
[8] Duncan, D. B. (1961). "Bayes rules for a common multiple comparison problem and related Student-t problems." Ann. Math. Stat. 32, 1013-1033.
[9] Dunnett, C. W. (1955). "A multiple comparison procedure for comparing several treatments with a control." J. Amer. Stat. Assoc. 50, 1096-1121.
[10] Durand, D. (1954). "Joint confidence regions for multiple regression coefficients." J. Amer. Stat. Assoc. 49, 130-146.
[11] Dwass, M. (1955). "A note on simultaneous confidence interval estimation." Ann. Math. Stat. 26, 146-147.
[12] Everitt, W. N. (1962). "Two theorems in matrix theory." Amer. Math. Monthly 69, 856-859.
[13] Feldt, L. S. and Mahmoud, M. W. (1958). "Power function charts for specifying numbers of observations in analyses of variance of fixed effects." Ann. Math. Stat. 29, 871-877.
[14] Fox, M. (1956). "Charts of the power of the F-test." Ann. Math. Stat. 27, 484-497.
[15] Ghosh, M. N. (1953). "Simultaneous tests for linear hypotheses by the method of analysis of variance." Unpublished manuscript.
[16] Greenwood, J. A. and Hartley, H. O. (1962). Guide to Tables in Mathematical Statistics. Princeton University Press, Princeton, New Jersey.
[17] Hall, W. J. (1963). Personal communication.
[18] Hartley, H. O. and Pearson, E. S. (1951). "Charts of the power function of analysis of variance tests." Biometrika 38, 112-130.
[19] Hoel, P. G. (1950). "Confidence regions for linear regressions." Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley and Los Angeles, 75-82.
[20] Hotelling, H. (1931). "The generalization of Student's ratio." Ann. Math. Stat. 2, 360-378.
[21] Hotelling, H. (1936). "Relations between two sets of variates." Biometrika 28, 321-377.
[22] Hotelling, H. (1951). "A generalized T-test and measure of multivariate dispersion." Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley and Los Angeles, 23-41.
[23] Hsu, P. L. (1940). "On generalized analysis of variance." Biometrika 31, 221-237.
[24] Hsu, P. L. (1941). "On the limiting distribution of roots of a determinantal equation." J. London Math. Soc. 16, 183-194.
[25] Hsu, P. L. (1945). "On the power functions of the E²-test and the T²-test." Ann. Math. Stat. 16, 278-286.
[26] Ito, K. (1956). "Asymptotic formulae for the distribution of Hotelling's generalized T₀² statistic." Ann. Math. Stat. 27, 1091-1105.
[27] Ito, K. (1960). "On multivariate analysis of variance tests." Bull. Internat. Inst. Stat. 38, 87-98.
[28] James, A. T. (1954). "Normal multivariate analysis and the orthogonal group." Ann. Math. Stat. 25, 40-75.
[29] Kendall, M. G. and Stuart, A. (1958). The Advanced Theory of Statistics. London, C. Griffin.
[30] Lawley, D. N. (1938). "A generalization of Fisher's z-test." Biometrika 30, 180-187.
[31] Lawley, D. N. (1939). "A correction to 'A generalization of Fisher's z-test'." Biometrika 30, 467.
[32] Lehmann, E. L. (1950). "Some principles of the theory of hypothesis testing." Ann. Math. Stat. 21, 1-26.
[33] Lehmann, E. L. (1951). "A general concept of unbiasedness." Ann. Math. Stat. 22, 587-592.
[34] Lehmann, E. L. (1955). "Ordered families of distributions." Ann. Math. Stat. 26, 399-419.
[35] Lehmann, E. L. (1959). Testing Statistical Hypotheses. New York, John Wiley and Sons, Inc.
[36] Lehmann, E. L. (1957). "A theory of some multiple decision problems, I." Ann. Math. Stat. 28, 1-25.
[37] Lehmer, E. (1944). "Inverse tables of probabilities of errors of the second kind." Ann. Math. Stat. 15, 388-398.
[38] Mandel, J. (1958). "A note on confidence regions in regression problems." Ann. Math. Stat. 29, 903-.
[39] Mikhail, W. F. (1960). "On the monotonicity and admissibility of some tests in multivariate analysis." Ph.D. Thesis, University of North Carolina, Chapel Hill, N. C.
[40] Narain, R. D. (1950). "On the completely unbiased character of tests of independence in multivariate analysis." Ann. Math. Stat. 21, 293-298.
[41] Neyman, J. and Pearson, E. S. (1938). "Contributions to the theory of testing statistical hypotheses." Stat. Res. Mem. Vol. II, 25-57.
[42] Patnaik, P. B. (1949). "The non-central χ²- and F-distributions and their applications." Biometrika 36, 202-232.
[43] Pillai, K. C. S. (1955). "Some new test criteria in multivariate analysis." Ann. Math. Stat. 26, 117-121.
[44] Ramachandran, K. V. (1954). "On certain tests and the monotonicity of their power functions." University of North Carolina Mimeo Series No. 120.
[45] Ramachandran, K. V. (1956). "Simultaneous analysis of variance test." Ann. Math. Stat. 27, 521-528.
[46] Rao, C. R. (1952). Advanced Statistical Methods in Biometric Research. New York, John Wiley and Sons, Inc.
[47] Roy, S. N. (1953). "On a heuristic method of test construction and its use in multivariate analysis." Ann. Math. Stat. 24, 220-238.
[48] Roy, S. N. (1954). "Some further results in simultaneous confidence interval estimation." Ann. Math. Stat. 25, 752-761.
[49] Roy, S. N. (1958). Some Aspects of Multivariate Analysis. New York, John Wiley and Sons, Inc.
[50] Roy, S. N. (1961). "Some remarks on normal multivariate analysis of variance." University of North Carolina Mimeo Series No. 304.
[51] Roy, S. N. and Bose, R. C. (1953). "Simultaneous confidence interval estimation." Ann. Math. Stat. 24, 513-536.
[52] Roy, S. N. and Gnanadesikan, R. (1960). "Equality of two dispersion matrices against alternatives of intermediate specificity." Ann. Math. Stat. 33, 423-438.
[53] Roy, S. N. and Mikhail, W. F. (1961). "On the monotonic character of the power function of two multivariate tests." Ann. Math. Stat. 32, 1145-1151.
[54] Scheffé, H. (1953). "A method for judging all contrasts in the analysis of variance." Biometrika 40, 87-104.
[55] Scheffé, H. (1959). The Analysis of Variance. New York, John Wiley and Sons, Inc.
[56] Scheffé, H. (1961). "Simultaneous interval estimates of linear functions of parameters." Bull. Internat. Inst. Stat. 38, Part IV, 245-253.
[57] Stein, C. (1956). "The admissibility of Hotelling's T²-test." Ann. Math. Stat. 27, 616-623.
[58] Tang, P. C. (1938). "The power function of the analysis of variance tests with tables and illustrations of their use." Stat. Res. Mem. Vol. II, 126-149.
[59] Tukey, J. W. (1951). "Quick and dirty methods in statistics, Part II." Proc. Fifth Annual Convention, Amer. Soc. for Quality Control, 189-197.
[60] Tukey, J. W. (1952). "Allowances for various types of error rates." Unpublished invited address, Blacksburg meeting of the IMS, March 1952.
[61] Wald, A. (1942). "On the power function of the analysis of variance test." Ann. Math. Stat. 13, 434-439.
[62] Wilks, S. S. (1932). "Certain generalizations in the analysis of variance." Biometrika 24, 471-494.
[63] Working, H. and Hotelling, H. (1929). "Application of the theory of error to the interpretation of trends." J. Amer. Stat. Assoc., March Suppl., 73-85.
[64] Wolfowitz, J. (1949). "The power of the classical tests associated with the normal distribution." Ann. Math. Stat. 20, 540-551.