Chakravorti, S.R. (1974). "A note on step-down procedure in MANOVA problem with unequal dispersion matrices."

BIOMATHEMATICS TRAINING PROGRAM
A NOTE ON STEP-DOWN PROCEDURE IN MANOVA
PROBLEM WITH UNEQUAL DISPERSION MATRICES

By
S. R. Chakravorti
Department of Biostatistics
University of North Carolina at Chapel Hill
and
University of Calcutta, India

Institute of Statistics Mimeo Series No. 910

FEBRUARY 1974
A NOTE ON STEP-DOWN PROCEDURE IN MANOVA
PROBLEM WITH UNEQUAL DISPERSION MATRICES¹

By S. R. Chakravorti²
University of North Carolina, Chapel Hill
In this paper we have considered the step-down procedure in the multivariate analysis of variance problem when the dispersion matrices are different and unknown. The distribution problem of the test criterion has also been studied under the null hypothesis.
1. Introduction.
The step-down procedure in the standard MANOVA problem has been considered by J. Roy [7]. The essential feature of this procedure is that if on some a priori grounds the variates are arranged in descending order of importance, then the test procedure can be carried out sequentially by considering marginal and conditional distributions of the variates concerned. At each stage an F-statistic can be used; these statistics are independently distributed under the null hypothesis, so that the overall hypothesis can be tested by combining the component tests. Optimum properties of this procedure have been discussed by Roy [7] and Roy et al. [9].
¹Work sponsored by the Aerospace Research Laboratory, U.S. Air Force Systems Command, Contract F33615-71-C-1927. Reproduction in whole or in part permitted for any purpose of the U.S. Government.
²On leave of absence from the University of Calcutta, India.
In this article the problem has been considered when the dispersion matrices are not equal. It has been shown that by transforming the original vector variables to Scheffé-vector-variables (eqn. (2.2) below), Hotelling's T² test can be constructed at each stage of the step-down procedure, and these T²'s are shown to be independently distributed. It may be noted that the distributions of the Scheffé-vector-variables chosen from the original vector variables at different stages (vide the Remark at the end of Sec. 2) may not, in general, be the same as those considered earlier; they coincide only under certain restrictions (to be shown in Sec. 3), and the test construction is difficult in this situation. However, a reasonably satisfactory solution can be attained in Bhargava's [3] procedure, which is a modification of Anderson's [2] procedure for the generalized multivariate Behrens-Fisher problem.
2.1. Hypothesis and Test Construction. Let x_α^(t) (1×p) be the α-th observation vector in the t-th population (α = 1,…,n_t; t = 1,…,m), distributed as N_p(η^(t), Σ_t), where η^(t) (1×p) is the mean vector and Σ_t (p×p) the dispersion matrix of the t-th population. The problem is to test

(2.1)  $H_0: \eta^{(1)} = \eta^{(2)} = \cdots = \eta^{(m)}$

against the alternative of at least one inequality among the η^(t)'s.

Anderson [2] proposed a Hotelling's T² test for (2.1) by assuming n₁ ≤ ⋯ ≤ n_m and transforming the variables x_α^(t) to the Scheffé-vector-variables

(2.2)  $u_\alpha^{(r)} = x_\alpha^{(1)} - \sqrt{n_1/n_r}\, x_\alpha^{(r)} + (n_1 n_r)^{-1/2} \sum_{\beta=1}^{n_1} x_\beta^{(r)} - n_r^{-1} \sum_{\beta=1}^{n_r} x_\beta^{(r)}$

for r = 2,…,m, α = 1,…,n₁, where U_α = (u_α^(2),…,u_α^(m)) jointly follows a p(m−1)-variate normal distribution.
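As a concrete numerical illustration (ours, not part of the paper), the transformation (2.2) can be sketched in Python with NumPy; the function name and the array layout are our own choices:

```python
# Illustrative sketch (not from the paper): forming the Scheffe-type variables
# of eqn. (2.2),
#   u_a^(r) = x_a^(1) - sqrt(n1/n_r) x_a^(r)
#             + (n1 n_r)^(-1/2) sum_{b=1..n1} x_b^(r) - mean of sample r,
# for r = 2,...,m and a = 1,...,n1.
import numpy as np

def scheffe_variables(samples):
    """samples: list of m arrays, samples[t] of shape (n_t, p), with the
    first sample the smallest.  Returns U of shape (n1, p*(m-1)) whose
    a-th row stacks u_a^(2), ..., u_a^(m)."""
    x1 = np.asarray(samples[0], dtype=float)
    n1 = x1.shape[0]
    blocks = []
    for xr in samples[1:]:
        xr = np.asarray(xr, dtype=float)
        nr = xr.shape[0]
        u = (x1
             - np.sqrt(n1 / nr) * xr[:n1]               # paired term
             + xr[:n1].sum(axis=0) / np.sqrt(n1 * nr)   # centering correction
             - xr.mean(axis=0))                         # subtract sample-r mean
        blocks.append(u)
    return np.hstack(blocks)
```

Each u_α^(r) then has mean η^(1) − η^(r), and the joint covariance has the block structure of (2.5); for degenerate (constant) samples u_α^(r) reduces exactly to η^(1) − η^(r), which is a quick sanity check on the centering terms.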
Now suppose the p variables in the vector x_α^(t) are arranged in descending order of importance, the ordering remaining the same for each t = 1,…,m and given by

(2.3)  $x_\alpha^{(t)} = (x_{1\alpha}^{(t)}, \ldots, x_{p\alpha}^{(t)})$.

After transforming the vectors x_α^(t) by (2.2), if we order the variables in u_α^(r) (1×p) as (u_{1α}^(r),…,u_{pα}^(r)), these new variables correspond to the ordered variables in (2.3) through this transformation. Now writing U_α (1×p(m−1)) = (U_{1α},…,U_{pα}), where U_{iα} = (u_{iα}^(2),…,u_{iα}^(m)), we have the distribution of U_α as N_{p(m−1)}(θ, Γ), where for r = 2,…,m, i = 1,…,p,

(2.4)  $\theta = (\theta_1,\ldots,\theta_p),\quad \theta_i = (\theta_i^{(2)},\ldots,\theta_i^{(m)}),\quad \theta_i^{(r)} = \eta_i^{(1)} - \eta_i^{(r)}$,
(2.5)  $\Gamma\ (p(m-1)\times p(m-1)) = (t_{ij}),\quad t_{ij}\ (m-1\times m-1) = \sigma_{ij}^{(1)} E_{m-1} + \mathrm{diag}\Big(\frac{n_1}{n_2}\sigma_{ij}^{(2)},\ldots,\frac{n_1}{n_m}\sigma_{ij}^{(m)}\Big)$,

where i, j = 1,…,p, σ_{ij}^(t) is the (i,j)-th element of Σ_t, t = 1,…,m, and E_{m−1} (m−1×m−1) is the matrix with all elements unity.
Now if we consider the j-th step of the step-down procedure, we are to consider the conditional distribution of U_{jα} (1×m−1) for fixed U_{(j−1)α} = (U_{1α},…,U_{(j−1)α}), which is an (m−1)-variate normal distribution with mean vector

(2.6)  $E(U_{j\alpha} \mid U_{(j-1)\alpha}) = \theta_j + U_{(j-1)\alpha} B_{j-1}$

and residual dispersion matrix Γ_{j·1,…,j−1}, where

(2.7)  $B_{j-1} = (B_{j,1}',\ldots,B_{j,j-1}')'$,

B_{j,s} being a matrix of order (m−1)×(m−1) and B_{j−1} the (j−1)-th order step-down regression matrix.
Under this set-up, the hypothesis (2.1) can be written as

(2.8)  $H_0 = \bigcap_{j=1}^{p} H_0^{(j)},\quad H_0^{(j)}: [\theta_j = 0 \mid \theta_{(j-1)} = 0]$.

Thus the component hypothesis H₀^(j): [θ_j = 0] can be tested from model (2.6) by Hotelling's T² (Anderson [1], page 187), where

(2.9)  $T_j^2 = (n_1-(m-1)(j-1)-1)\, n_1\, \hat\theta_j \hat S_{U_j}^{-1} \hat\theta_j'$,

$\hat\theta_j$ being the estimate of θ_j from model (2.6) and $\hat S_{U_j}$ the residual sum of products matrix of U_{jα}. This $\hat S_{U_j}$ is distributed as $W_{m-1}(n_1-(m-1)j,\ \Gamma_{j\cdot 1,\ldots,j-1})$, and

$(n_1-(m-1)j)\,(m-1)^{-1}\,(n_1-(m-1)(j-1)-1)^{-1}\, T_j^2$

is distributed as $F(m-1,\ n_1-(m-1)j)$, j = 1,2,…,p.
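The step-down statistic T_j² of (2.9) can be computed as the Hotelling test of a zero intercept in the multivariate regression of the j-th block on the preceding ones. The following sketch is ours, not the paper's, and assumes the standard regression degrees-of-freedom convention, which may differ in constants from the one above:

```python
# Hypothetical sketch of the j-th step-down Hotelling T^2 (cf. eqn. (2.9)):
# regress block U_j (n1 x (m-1)) on an intercept plus the earlier blocks
# U_1,...,U_{j-1}, then test whether the intercept vector vanishes.
import numpy as np
from scipy import stats

def stepdown_T2(U, j, m):
    """U: (n1, p*(m-1)) array of Scheffe variables in blocks of width m-1.
    Returns (T2, p_value) for the j-th component hypothesis (j is 1-based)."""
    q = m - 1
    n1 = U.shape[0]
    Uj = U[:, (j - 1) * q : j * q]                          # response block U_j
    Z = np.hstack([np.ones((n1, 1)), U[:, : (j - 1) * q]])  # intercept + U_(j-1)
    coef, *_ = np.linalg.lstsq(Z, Uj, rcond=None)           # least-squares fit
    resid = Uj - Z @ coef
    df = n1 - Z.shape[1]                                    # residual d.f.
    S = resid.T @ resid / df                                # residual covariance
    theta_hat = coef[0]                                     # estimated intercept
    g = np.linalg.inv(Z.T @ Z)[0, 0]                        # its variance factor
    T2 = theta_hat @ np.linalg.solve(S, theta_hat) / g
    F = T2 * (df - q + 1) / (df * q)                        # F transform of T^2
    return T2, stats.f.sf(F, q, df - q + 1)
```

For j = 1 this reduces to the ordinary one-sample Hotelling T² of the first block.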
2.2. Independence of T₁²,…,T_p². It is clear that for fixed j, $\hat\theta_j$ and $\hat S_{U_j}$ are independently distributed (Anderson [1]). To prove that T₁²,…,T_p² are independently distributed under H₀, let us consider a vector ℓ (1×m−1) of real elements, so that $\ell \hat S_{U_j} \ell' / \ell\, \Gamma_{j\cdot 1,\ldots,j-1}\, \ell'$ is distributed as $\chi_j^2(n_1-(m-1)j)$. Now following Roy et al. [9], page 47, it follows that this $\chi_j^2$ is distributed independently of $U_{(j-1)}\ell'$, whether H₀^(j) is true or not, so that $\chi_1^2,\ldots,\chi_p^2$ are independently distributed. Also, under H₀^(j), $\hat\theta_j \ell'$ is distributed independently of $U_{(j-1)}\ell'$, and this is true for every ℓ, so that $\hat\theta_j$ is distributed independently of $U_{(j-1)}$. It follows, therefore (Rao [6], page 453), that T_j² in (2.9) can be written

(2.10)  $T_j^2 = [n_1-(m-1)(j-1)-1]\, n_1\, \hat\theta_j \Gamma_{j\cdot 1,\ldots,j-1}^{-1} \hat\theta_j' \cdot \dfrac{\hat\theta_j \hat S_{U_j}^{-1} \hat\theta_j'}{\hat\theta_j \Gamma_{j\cdot 1,\ldots,j-1}^{-1} \hat\theta_j'}$,

where the distribution of $\hat\theta_j \hat S_{U_j}^{-1} \hat\theta_j' / \hat\theta_j \Gamma_{j\cdot 1,\ldots,j-1}^{-1} \hat\theta_j'$ does not depend on $\hat\theta_j$ (Rao [6], page 458). Hence, under H₀^(j), the conditional distribution of T_j² for fixed U_{(j−1)α} does not depend on U_{(j−1)α} (j = 2,…,p), and T₁² is distributed as (n₁−m+1)⁻¹(m−1)F with d.f. (m−1, n₁−m+1). Hence unconditionally also T₁²,…,T_p² are independently distributed.
The test criterion for the overall hypothesis (2.8) can be constructed from the component tests either by using the union-intersection principle (Roy [8]) or by considering the test criterion $\Lambda = \prod_{j=1}^{p} \Lambda^{(j)}$, where $\Lambda^{(j)} = \{1 + (n_1-(m-1)(j-1)-1)^{-1} T_j^2\}^{-1}$; each Λ^(j) is the product of (m−1) independent beta variables. The exact as well as the asymptotic null distribution of the statistic Λ are available (Chakravorti [4], eqns. 4.39 and 4.55 with zero non-centrality parameter).
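A minimal sketch (our naming) of assembling the product criterion Λ from the component statistics:

```python
# Minimal sketch of the combined criterion Lambda = prod_j Lambda^(j), with
# Lambda^(j) = {1 + (n1 - (m-1)(j-1) - 1)^(-1) T_j^2}^(-1), as in the text.
def overall_lambda(T2_values, n1, m):
    """T2_values: the step-down statistics T_1^2, ..., T_p^2."""
    lam = 1.0
    for j, t2 in enumerate(T2_values, start=1):
        lam *= 1.0 / (1.0 + t2 / (n1 - (m - 1) * (j - 1) - 1))
    return lam
```

Small values of Λ are evidence against H₀; its null distribution is that of a product of independent beta variables, as noted above.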
REMARK. It may be noted that if we start with the marginal and conditional distributions of x_{1α}^(t); x_{2α}^(t) given x_{1α}^(t); …; x_{pα}^(t) given x_{1α}^(t),…,x_{(p−1)α}^(t) from the distribution of x_α^(t), and then choose the Scheffé variables at each stage of the step-down procedure, the distributions of the resulting vector-variables will not be the same as those obtained from (2.2). Since these variables are correlated both variate-wise and group-wise, we are to impose a number of restrictions on the regression coefficient matrix B_{j−1} to satisfy this requirement, in which case a satisfactory solution of the test construction is not easily available by Anderson's procedure. However, if we assume that the regression coefficients of x_j^(t) on x_s^(t) (s = 1,2,…,j−1) are the same for t = 1,2,…,m, we can apply the procedure of Bhargava [3], which is the modification of Anderson's procedure.
3. Following Bhargava [3], let us assume that n₁ = (m−1)n′, n₂ = ⋯ = n_m = n′, and consider the following transformation on x_α^(t):

(3.1)  $u_\alpha^{*(r)} = x_{(r-2)n'+\alpha}^{(1)} - x_\alpha^{(r)}$,  r = 2,…,m; α = 1,…,n′.
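Assuming the pairing form of Bhargava's transformation — the first sample of size (m−1)n′ split into m−1 blocks of n′, block r−1 paired with sample r — a sketch (our names, and the block split is the assumption):

```python
# Sketch of the pairing transformation (3.1) as reconstructed here: with
# n1 = (m-1)n' and n_2 = ... = n_m = n', pair the (r-1)-th block of n'
# observations from sample 1 with sample r:
#   u_a^(r) = x_{(r-2)n'+a}^(1) - x_a^(r).
# Disjoint blocks of sample 1 make the m-1 components independent,
# consistent with the diagonal structure of Gamma* in (3.3).
import numpy as np

def bhargava_variables(samples):
    """samples[0]: ((m-1)*n', p) array; samples[1..m-1]: (n', p) arrays."""
    x1 = np.asarray(samples[0], dtype=float)
    m = len(samples)
    nprime = x1.shape[0] // (m - 1)
    blocks = []
    for r, xr in enumerate(samples[1:], start=2):
        seg = x1[(r - 2) * nprime : (r - 1) * nprime]  # block r-1 of sample 1
        blocks.append(seg - np.asarray(xr, dtype=float)[:nprime])
    return np.hstack(blocks)
```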
Then under the set-up considered in Sec. 2.1, we can choose the vector $U_\alpha^* = (U_{1\alpha}^*,\ldots,U_{p\alpha}^*)$, which is distributed as $N_{p(m-1)}(\theta, \Gamma^*)$, where

(3.2)  $\theta = (\theta_1,\ldots,\theta_p);\quad \theta_i = (\theta_i^{(2)},\ldots,\theta_i^{(m)});\quad \theta_i^{(r)} = \eta_i^{(1)} - \eta_i^{(r)}$,

(3.3)  $\Gamma^* = (t_{ij}^*);\quad t_{ij}^* = \sigma_{ij}^{(1)} I_{m-1} + \mathrm{diag}\big(\sigma_{ij}^{(2)},\ldots,\sigma_{ij}^{(m)}\big);\quad i, j = 1,\ldots,p$.
Then the j-th step of the step-down procedure will lead to the model (2.6), where the θ_j's are defined according to (3.2) and the submatrices of B_{j−1} given by (2.7) are, for s = 1,…,j−1,

(3.4)  $B_{j,s}\ (m-1\times m-1) = \mathrm{diag}\big(\beta_{j,s}^{(2,2)},\ldots,\beta_{j,s}^{(m,m)}\big)$.

When β_{j,s}^(r,r) remains constant for r = 2,…,m (= β_{j,s}, say), the conditions stated in the Remark of Sec. 2 are satisfied.

Now since the components of $U_{j\alpha}^* = (u_{j\alpha}^{*(2)},\ldots,u_{j\alpha}^{*(m)})$ are independently distributed, let us write the null hypothesis (2.8), following (3.2), as

(3.5)  $H_0 = \bigcap_{j=1}^{p} \bigcap_{r=2}^{m} H_{0j}^{(r)},\quad H_{0j}^{(r)}: [\theta_j^{(r)} = 0 \mid \Theta_{(j-1)} = 0],\quad \Theta_{(j-1)} = (\theta_1,\ldots,\theta_{j-1})$.

Thus we can construct a test for H₀ⱼ^(r) by using the individual estimates of β_{j,s} for r = 2,…,m, and hence the combined test for $H_{0j} = \bigcap_{r=2}^{m} H_{0j}^{(r)}$ by

(3.6)  $L_j = \prod_{r=2}^{m} \big\{1 + \nu_1 \nu_2^{-1} F_j^{(r)}\big\}^{-1}$,

where F_j^(r) follows an F-distribution with (ν₁, ν₂) d.f. Obviously L_j is the product of (m−1) independent beta variables, the distribution of which is well known (Anderson [1], Rao [5]).
The test criterion for the overall hypothesis can therefore be constructed by considering $L = \prod_{j=1}^{p} L_j$. The independence of the L_j for j = 1,…,p can be verified by arguments similar to those of Sec. 2.2.

The author is grateful to Professor P. K. Sen for helpful discussion.
REFERENCES

[1] ANDERSON, T.W. (1958). An Introduction to Multivariate Statistical Analysis. John Wiley & Sons, New York.

[2] ANDERSON, T.W. (1963). A test for equality of means when covariance matrices are unequal. Ann. Math. Statist. 34 671-672.

[3] BHARGAVA, R.P. (1971). A test for equality of means of multivariate normal distributions when covariance matrices are unequal. Cal. Stat. Assoc. Bulletin 20 153-156.

[4] CHAKRAVORTI, S.R. (1973). On some tests of growth curve model under Behrens-Fisher situation. Institute of Statistics Mimeo Series No. 870, University of North Carolina, Chapel Hill.

[5] RAO, C.R. (1951). An asymptotic expansion of the distribution of Wilks' Λ-criterion. Bull. Inst. Internat. Stat. 33 Part II 177-180.

[6] RAO, C.R. (1965). Linear Statistical Inference and Its Applications. John Wiley & Sons, New York.

[7] ROY, J. (1958). Step-down procedure in multivariate analysis. Ann. Math. Statist. 29 1177-1187.

[8] ROY, S.N. (1953). On a heuristic method of test construction and its use in multivariate analysis. Ann. Math. Statist. 24 220-238.

[9] ROY, S.N., GNANADESIKAN, R., SRIVASTAVA, J.N. (1971). Analysis and Design of Certain Quantitative Multiresponse Experiments. Pergamon Press, New York.