UNIVERSITY OF NORTH CAROLINA
Department of Statistics
Chapel Hill, N. C.
Mathematical Sciences Directorate
Air Force Office of Scientific Research
Washington 25, D. C.
AFOSR Report No.
ON THE MONOTONIC CHARACTER OF THE POWER
FUNCTIONS OF TWO MULTIVARIATE TESTS
by
S. N. Roy and W. F. Mikhail
August, 1959
Contract No. AF 49(638)-213
The power function of the largest root test of normal multivariate linear hypothesis or of independence between two sets of variates involves, in each case, aside from the degrees of freedom, certain nonnegative noncentrality parameters. This report supplies a relatively simple and elegant proof that the power function monotonically increases as each parameter, separately, increases - a result that was conjectured and proved (but not published) by one of the authors several years ago by a very lengthy and laborious method.
Qualified requestors may obtain copies of this report from the ASTIA Document Service Center, Arlington Hall Station, Arlington 12, Virginia. Department of Defense contractors must be established for ASTIA services, or have their "need-to-know" certified by the cognizant military agency of their project or contract.
Institute of Statistics
Mimeograph Series No. 238
ON THE MONOTONIC CHARACTER OF THE POWER
FUNCTIONS OF TWO MULTIVARIATE TESTS 1
By S. N. Roy and W. F. Mikhail
1. Summary. The largest characteristic root test for multivariate analysis of variance or independence between two sets of variates has in each case a number of well known properties [1,2,3] including, in particular, the following. (i) The power function involves as arguments, aside from the degrees of freedom and the level of significance, a set of non-negative non-centrality parameters that are statistically meaningful, (ii) the power function has a lower bound which is a monotonically increasing function of each of these parameters, separately, and (iii) it is possible, by using the distribution of the test statistic under the null hypothesis alone, to obtain, with a confidence coefficient greater than or equal to a preassigned level, simultaneous confidence bounds on a set of parametric functions that might be interpreted as measures of departure, respectively, from the total hypothesis or from partial hypotheses defined, for multivariate analysis of variance, by cutting out one or more variates and one or more factor levels, and for independence between a p-set and a q-set, by cutting out one or more of the p-set and one or more of the q-set.
It is the purpose of the present paper to prove that, for each test, the power function is a monotonically increasing function of each non-centrality (or deviation) parameter separately, a fact which was stated

1. This research was supported by the United States Air Force through the Air Force Office of Scientific Research of the Air Research and Development Command, under Contract No. AF 49(638)-213. Reproduction in whole or in part is permitted for any purpose of the United States Government.
without proof in [1]. There the remark was made that the very long and tedious proof available to the author of [1] was not being offered, in the hope that a far simpler and more elegant proof would be forthcoming in the near future. The kind of proof looked for at that stage is being offered here.
2. Preliminaries for the multivariate analysis of variance situation [3]. Let u, s and n-r denote respectively the "effective" number of variates, the degrees of freedom of the hypothesis and the degrees of freedom for the error, and let t = min(u, s). Then with

(2.1)   $X_{u\times s} = \begin{pmatrix} x_{11} & \cdots & x_{1s}\\ \vdots & & \vdots\\ x_{u1} & \cdots & x_{us} \end{pmatrix}, \qquad Y_{u\times(n-r)} = \begin{pmatrix} y_{11} & \cdots & y_{1,n-r}\\ \vdots & & \vdots\\ y_{u1} & \cdots & y_{u,n-r} \end{pmatrix},$
the canonical form for the elementary distribution is

(2.2)   $[1/(2\pi)]^{u(s+n-r)/2}\,\exp\Big[-\tfrac{1}{2}\Big\{\sum_{i=1}^{u}\sum_{j=1}^{n-r} y_{ij}^{2}+\sum_{i=1}^{t}\big(x_{ii}-\sqrt{\lambda_i}\big)^{2}+{\sum\sum}'\,x_{ij}^{2}\Big\}\Big]\ \prod_{i=1}^{u}\prod_{j=1}^{s} dx_{ij}\ \prod_{i=1}^{u}\prod_{j=1}^{n-r} dy_{ij},$

where ${\sum\sum}'$ extends over all the remaining $x_{ij}$'s, that is, over all pairs $(i,j)$ with $1\le i\le u$, $1\le j\le s$ other than the pairs $i=j\le t$,
and the acceptance region (at level $\alpha$) for the linear hypothesis $H_0$ of analysis of variance (under which the $\lambda$'s $=0$) can be expressed as

(2.3)   $\mathrm{ch}_{\max}\big[(XX')(YY')^{-1}\big] \le \mu_{\alpha},$

where $\mu_{\alpha}$ is given by

(2.4)   $P\big[\mathrm{ch}_{\max}\{(XX')(YY')^{-1}\} \le \mu_{\alpha} \mid \lambda_1=\cdots=\lambda_t=0\big] = 1-\alpha.$
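As an illustrative aside not in the original report: the defining relation (2.4) fixes $\mu_{\alpha}$ through the null distribution of the largest root, and it can be approximated by simulating that distribution. The dimensions, replication count, and level below are arbitrary choices made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
u, s, n_r = 2, 3, 20      # illustrative u, s and n-r
alpha = 0.05

def ch_max(X, Y):
    """Largest characteristic root of (XX')(YY')^{-1}."""
    A, B = X @ X.T, Y @ Y.T
    # roots of B^{-1}A have the same spectrum as those of A B^{-1}
    return np.max(np.linalg.eigvals(np.linalg.solve(B, A)).real)

# Under the null hypothesis every lambda_i = 0, so all entries are
# independent standard normals as in (2.2).
roots = np.array([ch_max(rng.standard_normal((u, s)),
                         rng.standard_normal((u, n_r)))
                  for _ in range(4000)])
mu_alpha = np.quantile(roots, 1 - alpha)   # Monte Carlo estimate of mu_alpha in (2.4)
```

By construction the acceptance region then has null probability close to $1-\alpha$, which is exactly the calibration that (2.4) demands.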
If we denote the domain (2.3) by $\mathcal{D}$, then, with an obvious notation for the differential elements, the probability of the second kind of error can be written as

(2.5)   $\mathrm{const.}\int_{\mathcal{D}} \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{u}\sum_{j=1}^{n-r}y_{ij}^2+\sum_{i=1}^{t}\big(x_{ii}-\sqrt{\lambda_i}\big)^2+{\sum\sum}'\,x_{ij}^2\Big\}\Big]\,dX\,dY$

(the primed double sum extending over the $x_{ij}$'s other than $x_{11},\ldots,x_{tt}$). The problem now is to prove that the integral (2.5) is a monotonically decreasing function of each $\lambda_i$ separately. If, now, we regard the domain $\mathcal{D}$ as one in a Euclidean space of dimensionality $us$ (for X) and $u(n-r)$ (for Y), then it is clear that we can rewrite (2.5) as
(2.6)   $\mathrm{const.}\int_{\mathcal{D}^*} \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{u}\sum_{j=1}^{n-r}y_{ij}^2+\sum_{i=1}^{u}\sum_{j=1}^{s}x_{ij}^2\Big\}\Big]\,dX\,dY,$

where $\mathcal{D}^*$ is merely the domain $\mathcal{D}$ translated by $\sqrt{\lambda_i}$ along $x_{ii}$, that is, along the ii-th axis (with i=1,2,...,t). Notice that if, in the integral (2.6), we replace the domain $\mathcal{D}^*$ by $\mathcal{D}$, the integral over the new domain becomes equal to $1-\alpha$, where $\alpha$ is the probability of the first kind of error.
It is useful now to put

(2.7)   $(YY')^{-1} = \tilde V'\tilde V,$

where $\tilde V$ is a $u\times u$ triangular matrix with zeroes above the diagonal, and observe that

$\mathrm{ch}_{\max}\big[(XX')(YY')^{-1}\big] = \mathrm{ch}_{\max}\big[(\tilde VX)(\tilde VX)'\big] = \mathrm{ch}_{\max}\big[(XX')(\tilde V'\tilde V)\big],$

where

(2.8)   $\tilde VX = \begin{pmatrix} v_{11} & 0 & \cdots & 0\\ v_{21} & v_{22} & \cdots & 0\\ \vdots & & & \vdots\\ v_{u1} & v_{u2} & \cdots & v_{uu} \end{pmatrix}\begin{pmatrix} x_{11} & \cdots & x_{1s}\\ \vdots & & \vdots\\ x_{u1} & \cdots & x_{us} \end{pmatrix},$
and rewrite (2.3), that is, the domain $\mathcal{D}$, as

(2.9)   $\mathcal{D}:\quad \mathrm{ch}_{\max}\big[(\tilde VX)(\tilde VX)'\big] \le \mu_{\alpha}.$

Notice that the domain $\mathcal{D}^*$ is a shift of $\mathcal{D}$ by $\sqrt{\lambda_i}$ along $x_{ii}$ with i=1,2,...,t. The problem now can be rephrased in the following way. How does the integral

(2.10)   $\int_{\mathcal{D}} \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{u}\sum_{j=1}^{n-r}y_{ij}^2+\sum_{i=1}^{u}\sum_{j=1}^{s}x_{ij}^2\Big\}\Big]\,dX\,dY$

over the domain $\mathcal{D}$ given by (2.9) change under successive translations of $\sqrt{\lambda_1}$ along $x_{11}$, of $\sqrt{\lambda_2}$ along $x_{22}$, ..., of $\sqrt{\lambda_t}$ along $x_{tt}$?
It is clear that the successive changes are cumulative and that at any stage, say the i-th, the change depends on $\sqrt{\lambda_i}$. It will also be seen from the mechanics of the demonstration that if we can prove that the integral decreases for the first shift of $\mathcal{D}$, namely, by $\sqrt{\lambda_1}$ along $x_{11}$, then the general theorem itself will be proved.
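Before turning to the proof, the theorem itself can be illustrated numerically. The following Monte Carlo sketch (an addition for illustration, not part of the original report; the dimensions, the grid of $\lambda_1$ values, and the replication counts are arbitrary assumptions) estimates the power of the largest-root test as a single noncentrality parameter grows.

```python
import numpy as np

rng = np.random.default_rng(0)
u, s, n_r = 2, 3, 30
reps = 3000

def ch_max(X, Y):
    # largest characteristic root of (XX')(YY')^{-1}
    return np.max(np.linalg.eigvals(
        np.linalg.solve(Y @ Y.T, X @ X.T)).real)

def power(lam1, mu_alpha):
    hits = 0
    for _ in range(reps):
        X = rng.standard_normal((u, s))
        X[0, 0] += np.sqrt(lam1)          # canonical form: E x_11 = sqrt(lambda_1)
        Y = rng.standard_normal((u, n_r))
        hits += ch_max(X, Y) > mu_alpha   # reject when the root exceeds the cutoff
    return hits / reps

# crude null critical value, then power at widely spaced lambda_1 values
null = [ch_max(rng.standard_normal((u, s)), rng.standard_normal((u, n_r)))
        for _ in range(3000)]
mu_alpha = float(np.quantile(null, 0.95))
powers = [power(lam, mu_alpha) for lam in (0.0, 10.0, 40.0)]
```

In such runs the estimated power increases with $\lambda_1$, which is the behaviour the following sections establish exactly.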
3. Proof of the monotonicity property for the multivariate analysis of variance situation. The proof is developed in three main steps discussed in the following subsections.

3.1 The proof for the univariate case. In this case u=1 and we can drop the first subscript in X, Y. The domain $\mathcal{D}$ of (2.9) now takes on the form
(3.1.1)   $\sum_{j=1}^{s} x_j^2 \;\le\; \mu_{\alpha}\sum_{j=1}^{n-r} y_j^2,$

and the integral (2.10) the form

(3.1.2)   $\int_{\mathcal{D}} \exp\Big[-\tfrac12\Big(\sum_{j=1}^{n-r} y_j^2+\sum_{j=1}^{s} x_j^2\Big)\Big]\ \prod_{j=1}^{s}dx_j\ \prod_{j=1}^{n-r}dy_j.$
Notice that now $\mathcal{D}^*$ is just a shift of $\mathcal{D}$ along $x_1$ by $\sqrt{\lambda}$. It is evident from the form of (3.1.2) that the integral (3.1.2) decreases under this shift if, for any given set of $y_j$'s (j=1,2,...,n-r) and $x_j$'s (j=2,3,...,s),

(3.1.3)   $\int_{-a+\sqrt{\lambda}}^{a+\sqrt{\lambda}} \exp\big[-\tfrac12 x_1^2\big]\,dx_1 \;<\; \int_{-a}^{a} \exp\big[-\tfrac12 x_1^2\big]\,dx_1,$

where $a$ is the positive square root of $\mu_{\alpha}\sum_{j=1}^{n-r} y_j^2-\sum_{j=2}^{s} x_j^2$. Then it is clear that it doesn't matter whether we take $\sqrt{\lambda}$ to be positive or negative.
It is easy to verify an even more general result than this, namely that

(3.1.4)   $\int_{-a+\lambda}^{a+\lambda} \varphi(x)\,dx \;<\; \int_{-a}^{a} \varphi(x)\,dx,$

where $a$ is positive and $\lambda$ might be taken to be either positive or negative and $\varphi(x)$ is a continuous function of $x$, symmetrical about 0 and monotonically decreasing with $|x|$.
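This inequality is easy to check numerically. The sketch below (an illustration added here, not part of the original argument) integrates the Gaussian kernel, one admissible choice of $\varphi$, over shifted windows of fixed width by the midpoint rule.

```python
import math

# Numerical check of (3.1.4): for a density phi symmetric about 0 and
# decreasing in |x|, the integral over [-a+lam, a+lam] shrinks as |lam| grows.
def phi(x):
    return math.exp(-0.5 * x * x)  # standard normal kernel, constant omitted

def mass(a, lam, steps=20000):
    lo, hi = -a + lam, a + lam
    h = (hi - lo) / steps
    return sum(phi(lo + (k + 0.5) * h) for k in range(steps)) * h

a = 1.5
vals = [mass(a, lam) for lam in (0.0, 0.5, 1.0, 2.0)]
# the window of fixed width 2a captures less mass the farther it is shifted
```

The computed masses decrease steadily in the shift, and by the symmetry of $\varphi$ the sign of $\lambda$ is immaterial, exactly as asserted in the text.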
It is also clear from the nature of the proof that the left side of (3.1.3) steadily decreases with $|\lambda|$.

3.2 The nature of the multivariate domain (2.9). We go back now to the multivariate case and to the domain $\mathcal{D}$ of (2.9). Let us investigate the nature of the domain in $x_{11}, x_{21}, \ldots, x_{u1}$ for a given set of values of
$\mu_{\alpha}$, $Y$ (that is, $\tilde V$) and of the elements of the matrix $X$ except those in the first column. Toward this end, put $X^* = \tilde VX$ and observe that, if $\nu$ is any
characteristic root of $X^*X^{*\prime}$, then, denoting that matrix by $[s^*_{ij}]$, we have that

(3.2.1)   $\begin{vmatrix} s^*_{11}-\nu & s^*_{12} & \cdots & s^*_{1u}\\ s^*_{12} & s^*_{22}-\nu & \cdots & s^*_{2u}\\ \vdots & & & \vdots\\ s^*_{1u} & s^*_{2u} & \cdots & s^*_{uu}-\nu \end{vmatrix} = 0,$

or

$|s^*_{ij}| - \big(\text{sum of the }(u-1)\text{-rowed principal minors of }[s^*_{ij}]\big)\,\nu + \big(\text{sum of the }(u-2)\text{-rowed principal minors of }[s^*_{ij}]\big)\,\nu^2 - \cdots + (-1)^u \nu^u = 0,$

where $s^*_{ij} = \sum_{k=1}^{s} x^*_{ik}x^*_{jk}$ (i,j=1,2,...,u). Now $|s^*_{ij}| = |X^*X^{*\prime}|$, which, given the $x^*_{ij}$'s (i=1,2,...,u; j=2,...,s), is easily seen to be a homogeneous quadric function of $(x^*_{11},\ldots,x^*_{u1})$ + a constant which is really a function of the other $x^*_{ij}$'s just mentioned. The coefficients of the quadric function are each polynomial functions of the $x^*_{ij}$'s (i=1,2,...,u; j=2,3,...,s). Likewise,
if we take any q-rowed principal minor of $[s^*_{ij}]$, say the one with rows and columns numbered $(1,2,\ldots,q)$, then that minor is the determinant of the Gram matrix of the first $q$ rows of $X^*$, which, given the $x^*_{ij}$'s (i=1,2,...,q; j=2,...,s), is a homogeneous quadratic function of $(x^*_{11},\ldots,x^*_{q1})$ (in which the coefficients are polynomials in the $x^*_{ij}$'s, with i=1,2,...,q and j=2,...,s) + a constant which is really a
function of the other $x^*_{ij}$'s just mentioned. Thus, given $\nu$ and the $x^*_{ij}$'s (i=1,2,...,u; j=2,...,s), the equation (3.2.1) in $(x^*_{11},\ldots,x^*_{u1})$ yields a homogeneous quadric surface. Now recall from (2.8) that, given the $y_{ij}$'s, that is $\tilde V$, the $(x^*_{11},\ldots,x^*_{u1})$ are linear functions of $(x_{11},\ldots,x_{u1})$, and likewise $(x^*_{1j},\ldots,x^*_{uj})$ are linear functions of $(x_{1j},\ldots,x_{uj})$ (with j=2,...,s). Thus, given $\nu$, $\tilde V$ and the $x_{ij}$'s (i=1,...,u; j=2,...,s), the equation (3.2.1) yields a homogeneous quadric surface in $(x_{11},\ldots,x_{u1})$ in which the coefficients and the constant term are all functions of $\nu$, $Y$ and the other $x_{ij}$'s already referred to. This is so for any characteristic root $\nu$.
Keeping this result in mind, let us examine more closely the nature of the domain (2.9), which is the same as the one given by (2.3). Let us rewrite (2.3) in the equivalent form

(3.2.2)   $\sum_{j=1}^{s}\Big(x^*_{1j}+\sum_{i=2}^{u} a_i x^*_{ij}\Big)^2 \;\le\; \mu_{\alpha}\,(1+\mathbf{a}'\mathbf{a}) \quad \text{for every } \mathbf{a},$

where $\mathbf{a}'$ stands for the (u-1)-dimensional vector $(a_2,\ldots,a_u)$. Now, given $\mu_{\alpha}$, $Y$ and the $x_{ij}$'s (i=1,...,u; j=2,...,s), (3.2.2) represents the domain for $(x_{11},\ldots,x_{u1})$ in a u-dimensional Euclidean space, the boundary being given by the surface defined by the equality sign. An equivalent form of the same surface is the homogeneous quadric associated with (3.2.1) after $\nu$ is replaced by $\mu_{\alpha}$. Next, (3.2.2) tells us the following:

(i) if a particular point $(x_{11},\ldots,x_{u1})$ belongs to the domain just mentioned, then $c(x_{11},\ldots,x_{u1})$, where $0 < c \le 1$, also belongs to the domain;

(ii) $(0,\ldots,0)$ belongs to the domain;

(iii) radiating from $(0,\ldots,0)$ along any given direction there is only a finite length belonging to the domain.
Thus, given $\mu_{\alpha}$, $Y$ and the other $x_{ij}$'s (already described), (2.3) or (2.9) represents a domain for $(x_{11},\ldots,x_{u1})$ which is the interior of a u-dimensional ellipsoid whose boundary is given by (3.2.1) after $\mu_{\alpha}$ is substituted for $\nu$. It is well known that there is an orthogonal transformation by which the ellipsoid can be referred to principal axes, or in other words, the transformed equation to the surface becomes free from the product terms in the transformed variables and involves only the square terms with positive coefficients. Let $\mathbf{x}' = (x_{11},\ldots,x_{u1})$ and let $\mathbf{z} = L\mathbf{x}$, where $L$ is the orthogonal matrix that transforms the ellipsoid into principal axes. This $L$ can be determined, and the rows of $L$, say $(\nu_{i1},\ldots,\nu_{iu})$ (i=1,2,...,u), are the direction cosines of the different principal axes. Note that $\mathbf{x}'\mathbf{x} = \mathbf{z}'\mathbf{z}$. It would be useful to rewrite (2.6), after substitution of $\mathcal{D}$ for $\mathcal{D}^*$ and omission of the constant, in the form

(3.2.4)   $\int_{\mathcal{D}} \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{u}\sum_{j=1}^{n-r}y_{ij}^2+\sum_{i=1}^{u}\sum_{j=2}^{s}x_{ij}^2+\sum_{i=1}^{u}z_i^2\Big\}\Big]\ \prod_{i=1}^{u}\prod_{j=1}^{n-r}dy_{ij}\ \prod_{i=1}^{u}\prod_{j=2}^{s}dx_{ij}\ \prod_{i=1}^{u}dz_i,$

where, given $\mu_{\alpha}$, $Y$ and the $x_{ij}$'s, the domain $\mathcal{D}$, as a domain in $(z_1,\ldots,z_u)$, forms the interior of an ellipsoid referred to principal axes (that is, in a form which is free from the product terms of the $z$'s and involves only the square terms with positive coefficients). In other words, $\mathcal{D}$ is symmetrical about the origin in each $z_i$ separately. A displacement $\sqrt{\lambda_1}$ along the direction of $x_{11}$ might be regarded as the resultant of a displacement $\nu_{11}\sqrt{\lambda_1}$ along $z_1$, that is, along the direction with cosines $(\nu_{11},\nu_{12},\ldots,\nu_{1u})$, a displacement $\nu_{21}\sqrt{\lambda_1}$ along $z_2$, that is, along the direction with cosines $(\nu_{21},\nu_{22},\ldots,\nu_{2u})$, and so on, and finally a displacement $\nu_{u1}\sqrt{\lambda_1}$ along $z_u$, that is, along the direction with cosines $(\nu_{u1},\ldots,\nu_{uu})$. It should be remembered that these $\nu_{ij}$'s are functions of $\mu_{\alpha}$, $Y$ and the $x_{ij}$'s of (3.2.4).
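The principal-axes reduction used above is simply the spectral decomposition of a positive definite quadratic form, and it can be checked numerically. The sketch below (an illustration added here, not from the original report; the matrix is randomly generated) shows that the orthogonal matrix of direction cosines removes the product terms while preserving lengths.

```python
import numpy as np

rng = np.random.default_rng(2)

# A positive definite matrix S defines an ellipsoid x'Sx <= 1.
B = rng.standard_normal((4, 4))
S = B @ B.T + 4 * np.eye(4)

# Rows of L are the direction cosines of the principal axes of the ellipsoid.
evals, evecs = np.linalg.eigh(S)
L = evecs.T

x = rng.standard_normal(4)
z = L @ x

# In the z coordinates the form has square terms only, with positive
# coefficients, and the orthogonal change of variables preserves x'x = z'z.
form_x = float(x @ S @ x)
form_z = float(np.sum(evals * z * z))
```

The diagonalized form and the length identity are exactly the two facts the argument of 3.2 relies on.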
3.3 The final step in the proof of the monotonicity property. Looking at (3.2.4) and using (3.1.4) we observe that a displacement of $\mathcal{D}$ by $\nu_{11}\sqrt{\lambda_1}$ along $z_1$ will decrease the integral under (3.2.4), because, for any given set $\mu_{\alpha}$, $Y$, $x_{ij}$'s and $z_2,z_3,\ldots,z_u$,

(3.3.1)   $\int_{-a+\nu_{11}\sqrt{\lambda_1}}^{a+\nu_{11}\sqrt{\lambda_1}} \exp\big[-\tfrac12 z_1^2\big]\,dz_1 \;<\; \int_{-a}^{a} \exp\big[-\tfrac12 z_1^2\big]\,dz_1,$

where $a$ and $\nu_{11}\sqrt{\lambda_1}$, without any loss of generality, can be assumed to be positive, $a$ being a function of $\mu_{\alpha}$, $Y$, the $x_{ij}$'s and $z_2,\ldots,z_u$. Using the same argument for successive displacements by $\nu_{21}\sqrt{\lambda_1}$ along $z_2$, by $\nu_{31}\sqrt{\lambda_1}$ along $z_3$, and so on, and finally by $\nu_{u1}\sqrt{\lambda_1}$ along $z_u$, we have a successive decrease of the integral. In other words, the resultant displacement, which is along $x_{11}$ and by $\sqrt{\lambda_1}$, decreases the integral. At this point we go back to (2.10), forget about the $z_i$'s, use the result just stated about a displacement by $\sqrt{\lambda_1}$ along $x_{11}$, apply successive displacements by $\sqrt{\lambda_2}$ along $x_{22}$, $\sqrt{\lambda_3}$ along $x_{33}$, and so on, and finally $\sqrt{\lambda_t}$ along $x_{tt}$, and eventually obtain an integral over the displaced domain $\mathcal{D}^*$ which is less than the one over the original domain $\mathcal{D}$. It is also clear from the mechanics of the proof that the integral over $\mathcal{D}^*$ decreases as each $\sqrt{\lambda_i}$ (i=1,2,...,t) increases separately. This proves the monotonicity property.
4. The case of the test for independence between two sets of variates. With a (p+q)-set (p $\le$ q) of variates let us assume, for a sample of size n+1 (> p+q), the canonical distribution law

(4.1)   $\{1/(2\pi)\}^{(p+q)n/2}\ \prod_{i=1}^{p}(1-\rho_i^2)^{-n/2}\ \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{p}\frac{1}{1-\rho_i^2}\sum_{j=1}^{n}\big(x_{ij}^2+y_{ij}^2-2\rho_i x_{ij}y_{ij}\big)+\sum_{i=p+1}^{q}\sum_{j=1}^{n}y_{ij}^2\Big\}\Big]\ \prod_{i=1}^{p}\prod_{j=1}^{n}dx_{ij}\ \prod_{i=1}^{q}\prod_{j=1}^{n}dy_{ij},$

where the $\rho_i$'s are the population canonical correlation coefficients, and the hypothesis of independence $H_0$ is equivalent to the hypothesis that the $\rho_i$'s $=0$. The acceptance region for $H_0$ is given by

(4.2)   $\mathcal{D}:\quad \mathrm{ch}_{\max}\big[(XX')^{-1}(XY')(YY')^{-1}(YX')\big] \le \mu_{\alpha},$

and X and Y are given by

$X_{p\times n} = \begin{pmatrix} x_{11} & \cdots & x_{1n}\\ \vdots & & \vdots\\ x_{p1} & \cdots & x_{pn} \end{pmatrix}, \qquad Y_{q\times n} = \begin{pmatrix} y_{11} & \cdots & y_{1n}\\ \vdots & & \vdots\\ y_{q1} & \cdots & y_{qn} \end{pmatrix}.$
It would be profitable to reduce still further both the canonical distribution law (4.1) and the domain $\mathcal{D}$ of (4.2). Toward this end put

(4.3)   $Y_{q\times n} = T_{q\times q}\,L_{q\times n},$

where $T$ is a triangular matrix of the type already described after (2.7) and $L$ is an orthonormal matrix, or in other words, $LL' = I(q)$. Next complete $L$ (q x n) into an orthogonal matrix

(4.4)   $\begin{pmatrix} L\\ M \end{pmatrix}$, with $M$ of order $(n-q)\times n$, so that $\begin{pmatrix} L\\ M \end{pmatrix}(L' : M') = I(n)$.
Now put

(4.5)   $X^*_{p\times n} = X_{p\times n}\,(L' : M') = (U^* : V^*)\quad\text{(say)},$

where

$U^*_{p\times q} = \begin{pmatrix} x^*_{11} & \cdots & x^*_{1q}\\ \vdots & & \vdots\\ x^*_{p1} & \cdots & x^*_{pq} \end{pmatrix}\ \text{(say)}, \qquad V^*_{p\times(n-q)} = \begin{pmatrix} x^*_{1,q+1} & \cdots & x^*_{1n}\\ \vdots & & \vdots\\ x^*_{p,q+1} & \cdots & x^*_{pn} \end{pmatrix}\ \text{(say)}.$

Thus

$X = (U^* : V^*)\begin{pmatrix} L\\ M \end{pmatrix} = U^*L + V^*M.$
The Jacobian of the transformation given by (4.3) and the first line of (4.5) is [1]

(4.6)   $\mathrm{const.}\ \prod_{i=1}^{q} t_{ii}^{\,n-i},$

where the $t_{ij}$'s are the elements of $T$ and $dT$ and $dL_1$ are as in [1]. Thus, under this transformation, we have

(4.7)   $dX\,dY = \mathrm{const.}\ \prod_{i=1}^{q} t_{ii}^{\,n-i}\ dU^*\,dV^*\,dT\,dL_1.$
Next observe that

(4.8)   $\sum_{i=1}^{p}\sum_{j=1}^{n} x_{ij}^2 = \mathrm{tr}\,XX' = \mathrm{tr}\,(U^*U^{*\prime}+V^*V^{*\prime}),$

and that the ii-th element of $XY'$ = the ii-th element of $(U^*L+V^*M)L'T'$ = $\sum_{j=1}^{i} u^*_{ij}t_{ij}$, so that $\sum_{j=1}^{n} x_{ij}y_{ij} = \sum_{j=1}^{i} u^*_{ij}t_{ij}$.
We have now, for $T$, $L_1$, $U^*$ and $V^*$, the distribution

(4.9)   $\mathrm{const.}\ \prod_{i=1}^{p}(1-\rho_i^2)^{-n/2}\ \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{p}\frac{1}{1-\rho_i^2}\Big(\sum_{j=1}^{q}u_{ij}^{*2}+\sum_{j=q+1}^{n}v_{ij}^{*2}+\sum_{j=1}^{i}t_{ij}^2-2\rho_i\sum_{j=1}^{i}u^*_{ij}t_{ij}\Big)+\sum_{i=p+1}^{q}\sum_{j=1}^{i}t_{ij}^2\Big\}\Big]\ \prod_{i=1}^{q}t_{ii}^{\,n-i}\ dU^*\,dV^*\,dT\,dL_1.$
To express the domain $\mathcal{D}$ of (4.2) in terms of the transformed variables we observe that

$\mathrm{ch}_{\max}\big[(XX')^{-1}(XY')(YY')^{-1}(YX')\big] = \mathrm{ch}_{\max}\Big[(X^*X^{*\prime})^{-1}X^*\begin{pmatrix} L\\ M\end{pmatrix}L'T'(TT')^{-1}TL(L' : M')X^{*\prime}\Big]$

$= \mathrm{ch}_{\max}\Big[(X^*X^{*\prime})^{-1}X^*\begin{pmatrix} I(q)\\ 0\end{pmatrix}\big(I(q) : 0\big)X^{*\prime}\Big] = \mathrm{ch}_{\max}\big[(U^*U^{*\prime}+V^*V^{*\prime})^{-1}(U^*U^{*\prime})\big].$
Hence we can rewrite (4.2) as

(4.10)   $\mathcal{D}:\quad \mathrm{ch}_{\max}\big[(U^*U^{*\prime})(V^*V^{*\prime})^{-1}\big] \le \frac{\mu_{\alpha}}{1-\mu_{\alpha}}.$
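This reduction is easy to verify numerically. The sketch below (added for illustration; the dimensions are arbitrary, and the triangular factorization $Y = TL$ is obtained from a QR decomposition of $Y'$) checks that the largest root of the statistic in (4.2) agrees with the reduced form through the monotone map $\theta \mapsto \theta/(1-\theta)$ of (4.10).

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, n = 2, 3, 12
X = rng.standard_normal((p, n))
Y = rng.standard_normal((q, n))

def ch_max(A):
    return float(np.max(np.linalg.eigvals(A).real))

# Largest root of (XX')^{-1}(XY')(YY')^{-1}(YX'), the statistic of (4.2).
G = X @ Y.T @ np.linalg.solve(Y @ Y.T, Y @ X.T)
theta = ch_max(np.linalg.solve(X @ X.T, G))

# Reduction of the text: write Y = TL with L having orthonormal rows (here
# via a QR decomposition of Y'), put U* = XL' and V*V*' = XX' - U*U*'.
Qmat, R = np.linalg.qr(Y.T)       # Y' = Qmat R, so Y = R'Qmat' and L = Qmat'
U = X @ Qmat                      # U* = XL'
VV = X @ X.T - U @ U.T            # V*V*'
phi = ch_max(U @ U.T @ np.linalg.inv(VV))

# theta and phi should be tied by phi = theta/(1 - theta), as in (4.10)
```

Here $\theta$ is the largest squared sample canonical correlation, so it lies strictly between 0 and 1 with probability one, and the map to $\phi$ is monotone, which is why the two forms of the acceptance region are equivalent.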
Starting from (4.9) we can also integrate out over $L_1$ [1] and have, for $T$, $U^*$ and $V^*$, the distribution

(4.11)   $\mathrm{const.}\ \prod_{i=1}^{p}(1-\rho_i^2)^{-n/2}\ \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{p}\frac{1}{1-\rho_i^2}\sum_{j=1}^{q}\big(u^*_{ij}-\rho^*_{ij}t_{ij}\big)^2+\sum_{i=1}^{q}\sum_{j=1}^{i}t_{ij}^2+\sum_{i=1}^{p}\frac{1}{1-\rho_i^2}\sum_{j=q+1}^{n}v_{ij}^{*2}\Big\}\Big]\ \prod_{i=1}^{q}t_{ii}^{\,n-i}\ dU^*\,dV^*\,dT,$

where $\rho^*_{ij} = \rho_i$ for j=1,2,...,i and i=1,2,...,p; and $=0$ otherwise.
At this stage set

(4.12)   $u_{ij} = u^*_{ij}\big/\sqrt{1-\rho_i^2}\,, \qquad v_{ij} = v^*_{ij}\big/\sqrt{1-\rho_i^2}\,, \qquad \gamma_{ij} = \rho^*_{ij}\big/\sqrt{1-\rho_i^2}\,,$

where $\gamma_{ij} = \rho_i/\sqrt{1-\rho_i^2} = \gamma_i$ (say), for j=1,2,...,i and i=1,2,...,p; and $=0$ otherwise. Next observe that under this transformation [1] the characteristic roots of the matrix in (4.10) stay invariant, so that we can rewrite (4.10) as
(4.13)   $\mathcal{D}:\quad \mathrm{ch}_{\max}\big[(UU')(VV')^{-1}\big] \le \frac{\mu_{\alpha}}{1-\mu_{\alpha}} = \mu^*\ \text{(say)}.$

We have now, for $U$, $V$ and $T$, the distribution

(4.14)   $\mathrm{const.}\ \exp\Big[-\tfrac12\Big\{\sum_{i=1}^{p}\sum_{j=1}^{q}\big(u_{ij}-\gamma_{ij}t_{ij}\big)^2+\sum_{i=1}^{q}\sum_{j=1}^{i}t_{ij}^2+\sum_{i=1}^{p}\sum_{j=q+1}^{n}v_{ij}^2\Big\}\Big]\ \prod_{i=1}^{q}t_{ii}^{\,n-i}\ dU\,dV\,dT.$
The probability of the second kind of error is given by integrating (4.14) over the domain (4.13). It is easy to see that, aside from the positive constant factor, this is equivalent to

(4.15)   $\int_{\mathcal{D}^*}\exp\Big[-\tfrac12\Big(\sum_{i=1}^{q}\sum_{j=1}^{i}t_{ij}^2+\sum_{i=1}^{p}\sum_{j=1}^{q}u_{ij}^2+\sum_{i=1}^{p}\sum_{j=q+1}^{n}v_{ij}^2\Big)\Big]\,dU\,dV\ \times\ \prod_{i=1}^{q}t_{ii}^{\,n-i}\,dT,$

where, for any given set of $T$ and $V$, $\mathcal{D}^*$ is just $\mathcal{D}$ displaced by $\gamma_1 t_{11}$ along $u_{11}$, by $\gamma_2 t_{21}$ along $u_{21}$ and $\gamma_2 t_{22}$ along $u_{22}$, and so on, and finally by $\gamma_p t_{p1}$ along $u_{p1}$, $\gamma_p t_{p2}$ along $u_{p2}$, ..., $\gamma_p t_{pp}$ along $u_{pp}$. Notice that when $H_0$ is true, that is, when all the $\gamma_i$'s $=0$, we should have $\mathcal{D}^*$ replaced by $\mathcal{D}$ in the integral (4.15).
Using the same kind of argument as in section 3, it follows that, for any given $T$, the integral

(4.16)   $\int_{\mathcal{D}^*}\exp\Big[-\tfrac12\Big(\sum_{i=1}^{p}\sum_{j=1}^{q}u_{ij}^2+\sum_{i=1}^{p}\sum_{j=q+1}^{n}v_{ij}^2\Big)\Big]\,dU\,dV$

decreases as $\mathcal{D}$ is displaced by $\gamma_1 t_{11}$ along $u_{11}$, and we can continue using the same reasoning for the other displacements. Finally, introducing the density function of $T$ and going back to (4.15), it is easy to see from considerations of symmetry that the integral (4.15) monotonically decreases as each $|\gamma_i|$, that is, each $|\rho_i|$, separately increases. This proves the monotonicity property of the power function of the test for independence between two sets of variates.
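As a closing numerical illustration (again an addition, not part of the original report), a small Monte Carlo run with p = 1 exhibits the monotonicity just proved: the rejection rate of the largest-root test, which for p = 1 reduces to the squared sample multiple correlation, grows with the single canonical correlation $\rho$. The sample size, grid of $\rho$ values, and replication counts are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 30, 3000

def stat(x, Y):
    # squared sample multiple correlation: (XY')(YY')^{-1}(YX') / (XX')
    b = np.linalg.solve(Y @ Y.T, Y @ x)
    return float((x @ Y.T @ b) / (x @ x))

def reject_rate(rho, cutoff):
    hits = 0
    for _ in range(reps):
        Y = rng.standard_normal((2, n))
        e = rng.standard_normal(n)
        x = rho * Y[0] + np.sqrt(1 - rho**2) * e   # canonical correlation rho
        hits += stat(x, Y) > cutoff
    return hits / reps

null = [stat(rng.standard_normal(n), rng.standard_normal((2, n)))
        for _ in range(3000)]
cutoff = float(np.quantile(null, 0.95))
rates = [reject_rate(r, cutoff) for r in (0.0, 0.5, 0.8)]
```

In such runs the estimated rejection rate climbs from the nominal level toward one as $\rho$ increases, matching the theorem.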
Concluding remarks. The power functions of the $\Lambda$-criteria for the multivariate linear hypothesis and for the test of independence between two sets of variates have also somewhat similar monotonicity properties that will be discussed in a subsequent paper.
References

[1] S. N. Roy, Some Aspects of Multivariate Analysis, John Wiley and Sons (1958), chapters 6, 10, 11, 14, and chapters 4, 6 and 7 of the appendix.

[2] S. N. Roy and R. Gnanadesikan, "Further contributions to multivariate confidence bounds," Biometrika, Vol. 44 (1957), pp. 399-410.

[3] S. N. Roy and R. Gnanadesikan, "Some contributions to ANOVA in one or more dimensions - Parts I and II," Annals of Mathematical Statistics, Vol. 30 (1959), pp. 304-340.