Dix, Lynn Palmer (1983). "Minimum Norm Quadratic Variance-Covariance Estimation in a General Multivariate Linear Model with Special Emphasis on the Multivariate One-Way Random Model."

MINIMUM NORM QUADRATIC VARIANCE-COVARIANCE ESTIMATION
IN A GENERAL MULTIVARIATE LINEAR MODEL WITH SPECIAL EMPHASIS
ON THE MULTIVARIATE ONE-WAY RANDOM MODEL
by
LYNN PALMER DIX
A thesis submitted to the Graduate Faculty of
North Carolina State University
in partial fulfillment of the
requirements for the Degree of
Doctor of Philosophy
DEPARTMENT OF STATISTICS
Raleigh
1983
APPROVED BY:
Chairman of Advisory Committee
ABSTRACT
DIX, LYNN PALMER. Minimum Norm Quadratic Variance-Covariance Estimation in a General Multivariate Linear Model with Special Emphasis on the Multivariate One-Way Random Model. (Under the direction of FRANCIS GIESBRECHT.)
In this paper the problem of minimum norm quadratic estimation (MINQE) in the following multivariate linear model is studied:

    y_i = Xβ_i + Uε_i,    i = 1, ..., p
    E(ε_kε_l') = Σ_{i=1}^q θ_i(kl)F_i,    k, l = 1, ..., p

where (θ_i) = (θ_i(kl)), i = 1, ..., q, are the unknown p x p matrices of variance-covariance components. Several different MINQE are derived: MINQE(U,I), MINQE(U), MINQE, MINQE(I), and MINQE subject to X'AX = 0, where the letter U indicates unbiased and I indicates translation invariance.

The special case for which the matrices F_i are idempotent and pairwise orthogonal is considered. It is shown that MINQE and MINQE(I) estimators of the matrices θ_1, ..., θ_q are positive definite and non-negative definite, respectively.
Finally, estimation for the multivariate one-way random model

    y_ijk = μ_k + a_ik + u_ijk,    i = 1, ..., g;  j = 1, ..., n_i;  k = 1, ..., p

is studied in detail. If g > 1 and n_i > 1 for at least one i, then all parametric forms tr(C_1θ_1 + C_2θ_2) are shown to be MINQE(U,I)-estimable. Given T_1 ≥ 0 and T_2 > 0 as prior estimates of the among and within covariance matrices and n_1, ..., n_g, the group sizes, the MINQE(U,I) equations are expressed in terms of the group sizes, T_2^(-1), and the eigenvectors and eigenvalues of T_2^(-1/2)T_1T_2^(-1/2). The special case for which T_1 and T_2 are diagonal is considered, and explicit expressions for the MINQE(U,I) estimates of θ_1 and θ_2 are given. Finally, the MINQE(I) and MINQE estimates of θ_1 and θ_2 are developed.
BIOGRAPHY
Lynn Palmer Dix was born May 13, 1949 in Oak Ridge, Tennessee.
In
May, 1967 she graduated from Oak Ridge High School and entered the
University of North Carolina at Chapel Hill where she studied mathematics.
After graduation, she married James F. Dix and enrolled in North
Carolina State University, earning a Master of Science in Operations
Research in 1973.
While working as a computer programmer for the North
Carolina State Department of Education, she began a part-time study of
statistics. In 1976 she entered the graduate program, completing the
Master's degree in May 1977 and passing the Ph.D. preliminary examinations in September 1978.
She was also designated the first Gertrude Cox Fellow in 1978.
The author began full-time research in September 1981
and, upon graduation, plans to continue working in the field of
statistics.
ACKNOWLEDGMENTS
I wish to express my sincere appreciation to Dr. Francis G. Giesbrecht for his advice, support, and especially patient understanding throughout my study.
I would also like to thank Dr. H. Robert van der Vaart, Dr. Thomas
M. Gerig, and Dr. Thomas W. Reiland for serving on my committee.
Barbara Faucette is to be thanked for her conscientious typing of this
manuscript.
Finally, I would like to express my deepest thanks to my husband,
Jim, for his love, patience, and encouragement, without which I could
never have completed this research.
TABLE OF CONTENTS

1. INTRODUCTION AND REVIEW OF THE LITERATURE
   1.1 Introduction
   1.2 Review of the Literature
       1.2.1 Iterated MINQE(U,I) and Asymptotic Results
       1.2.2 Computational Burden
       1.2.3 Non-negative Estimation
       1.2.4 Covariance Component Estimation
   1.3 Outline of this Research

2. THE MINQUE PRINCIPLE
   2.1 Introduction
   2.2 The Form of the General Model
   2.3 Minimum Norm Quadratic Estimation
   2.4 Minimum Norm Quadratic Unbiased Translation Invariant Estimation
   2.5 Minimum Norm Quadratic Estimation Under Some Special Conditions

3. MINIMUM NORM QUADRATIC ESTIMATION IN THE MULTIVARIATE LINEAR MODEL
   3.1 Introduction
   3.2 The Multivariate Linear Model
   3.3 MINQE(U,I) Estimation
   3.4 Multivariate MINQE Under Some Specialized Conditions

4. MINIMUM NORM ESTIMATION GIVEN SPECIAL STRUCTURE OF THE DISPERSION MATRIX
   4.1 Introduction
   4.2 MINQE and MINQE(I) Estimation

5. MINIMUM NORM QUADRATIC ESTIMATION IN THE MULTIVARIATE ONE-WAY RANDOM MODEL
   5.1 Introduction
   5.2 The Multivariate One-way Model
   5.3 Identifiability and Estimability
   5.4 Preliminary Results
   5.5 The MINQE(U,I) Equations
   5.6 MINQE(U,I) Estimation with Diagonal Priors
   5.7 MINQE(I) and MINQE Estimation

6. SUMMARY

7. LIST OF REFERENCES
1. INTRODUCTION AND REVIEW OF THE LITERATURE
1.1 Introduction
The estimation of variance-covariance components finds many useful
applications especially. in the biological and behavioral sciences.
As a
result, since the early papers of Daniels (1939) and Crump (1946) the
problem of variance-covariance estimation has been a frequent topic in
the literature.
For a long time, the methods of equating mean squares
to their expectations put forth by Henderson (1953) predominated.
Henderson's methods were relatively easy to compute and were widely
applied.
However, their shortcomings, lack of uniqueness for methods II and III and bias in mixed models for method I, were discussed by Searle (1958), and the area of research remained very much alive.
A major new development occurred when Hartley and Rao (1967) applied maximum likelihood
methodology to the area of variance-covariance estimation.
It was C. R.
Rao who with a series of papers (1970) (1971a) (1972) introduced an
entirely new approach which he called MINQUE (minimum norm quadratic
unbiased estimation).
This paper is in the area of minimum norm quadratic estimation, so a short review of the literature is given in section (1.2).
Section (1.3) contains an outline of this research.
1.2 Review of the Literature
The reader is referred to Kleffe (1977b) and Searle (1979) for excellent surveys of the area of variance component estimation. Sahai (1979) is an exhaustive bibliography of the field. Rao (1979) and Kleffe (1980) contain exceptionally thorough discussions of the recent progress in the area of MINQE theory.
Rao (1970) introduced his now famous principle of minimum norm
quadratic unbiased estimation in an application to the estimation of
heteroscedastic variances in linear models.
He soon followed this in
Rao (1971a) (1972) with a more general application to two models:
    y = Xβ + Σ_{i=1}^q U_iε_i    (1.2.1)

where y is an n-vector of random variables, X is a given n x m matrix, β is an m-vector of unknown parameters, U_i is a given n x c_i matrix, and ε_i is a c_i-vector of random variables with zero expectation and dispersion matrix σ_i^2 K_i, i = 1, ..., q. Furthermore, the ε_i are uncorrelated. Hence, the covariance structure of the vector y is

    D(y) = Σ_i σ_i^2 V_i    where V_i = U_iK_iU_i'.

The covariance components model considered is

    y = Xβ + Σ_{i=1}^q U_iε_i    (1.2.2)

where y, X, and β are as in (1.2.1) and ε_i is a c-vector such that

    E(ε_i) = 0,    E(ε_iε_i') = Θ,    E(ε_iε_j') = 0,  i ≠ j,

where Θ is a c x c matrix of the unknown variance-covariance components. For each of these models, Rao gives MINQE(U,I) (minimum norm quadratic unbiased translation invariant with respect to the mean parameter) and r-MINQE(U) (minimum norm quadratic unbiased, where r is an estimate of the order of magnitude of β compared to ε) estimates for linear parametric forms of the unknown variance-covariance components. For example, for model (1.2.1), let α_1, ..., α_q be prior estimates of σ_1^2, ..., σ_q^2, and define

    V_α = Σ_i α_iV_i
    P_α = X(X'V_α^(-1)X)⁻X'V_α^(-1)
    R_α = V_α^(-1)(I − P_α);

then the MINQE(U,I) estimate of f'σ^2 = Σ_i f_iσ_i^2 is

    Σ_i λ_i y'R_αV_iR_αy

where λ is a solution of the system of equations, often called the MINQE(U,I) defining equations,

    Σ_i λ_i tr(R_αV_iR_αV_j) = f_j,    j = 1, ..., q.    (1.2.3)

The MINQE(U,I) estimate exists if and only if the system (1.2.3) is consistent. The form of the r-MINQE(U) estimate is considerably more complicated and more difficult to compute.
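The defining equations (1.2.3) translate directly into a few lines of linear algebra. The sketch below is not part of the thesis: it builds R_α for a small hypothetical one-way layout (three groups of two observations) and solves the system with NumPy; the design matrices, priors, and target f are illustrative assumptions only.

```python
import numpy as np

# Hypothetical one-way layout: 3 groups of 2 observations (n = 6), with
# y = X beta + U1 eps1 + eps2, so V_1 = U1 U1' (group effect), V_2 = I (error).
n = 6
X = np.ones((n, 1))                        # overall mean only
U1 = np.kron(np.eye(3), np.ones((2, 1)))   # group incidence matrix
V = [U1 @ U1.T, np.eye(n)]                 # V_1, V_2

alpha = [1.0, 1.0]                         # prior estimates of sigma_1^2, sigma_2^2
Va = sum(a * Vi for a, Vi in zip(alpha, V))
Vainv = np.linalg.inv(Va)
Pa = X @ np.linalg.solve(X.T @ Vainv @ X, X.T @ Vainv)
Ra = Vainv @ (np.eye(n) - Pa)              # R_alpha = V_alpha^{-1}(I - P_alpha)

# Defining equations (1.2.3): S lam = f with S_jk = tr(Ra V_j Ra V_k)
S = np.array([[np.trace(Ra @ Vj @ Ra @ Vk) for Vk in V] for Vj in V])
f = np.array([1.0, 0.0])                   # target parametric form: sigma_1^2
lam = np.linalg.solve(S, f)

# MINQE(U,I) estimate of sigma_1^2 at the priors alpha
y = np.random.default_rng(0).standard_normal(n)
estimate = sum(l * (y @ Ra @ Vi @ Ra @ y) for l, Vi in zip(lam, V))
```

The matrix A = Σ_i λ_i R_αV_iR_α then satisfies the MINQE(U,I) side conditions AX = 0 and tr(AV_j) = f_j by construction.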
The development of MINQE theory was independent of distributional assumptions; however, Rao (1971b) pointed out that when the variables are normally distributed the MINQE(U,I) estimates are locally minimum variance unbiased estimates, that is, at the point for which the true population parameters are equal to the prior estimates. He also proved that under normality the MINQE(U,I) estimate for model (1.2.1) is the uniformly minimum variance unbiased estimate if and only if for all triplets i,j,s there exist scalars t_1, ..., t_q such that

    (MV_iM)(MVM)⁻(MV_jM)(MVM)⁻(MV_sM) = Σ_i t_iMV_iM

where M = I − X(X'X)⁻X' and V = Σ_i MV_iM.
Seely (1971) provided another result which even further strengthened the theoretical footing of MINQE estimation. Given the following assumptions:

    y ~ N(Xβ, Σ_i θ_iV_i),
    {θ : Σ_i θ_iV_i > 0} contains a non-void open set,    (1.2.4)
    V_q = I, and
    the mean parameter β and (θ_1, ..., θ_q) are functionally independent,

the estimator y'(Σ_i a_iMV_iM)y is the uniformly minimum variance unbiased estimator of its expectation if and only if the span of the matrices MV_1M, ..., MV_qM is a quadratic subspace, i.e., for all d_1, ..., d_q there exist scalars t_1, ..., t_q such that

    (Σ_i d_iMV_iM)^2 = Σ_i t_iMV_iM.

As a consequence, under normality and balanced data with design matrices consisting of zeros and ones, the usual ANOVA estimators and MINQE(U,I) estimators are identical.
Soon many other articles on the principle of MINQE began appearing. While Rao (1970) had given two sets of sufficient conditions for the MINQE(U) estimability of all the heteroscedastic variances of a diagonal dispersion matrix, Mallela (1972) gave a necessary and sufficient set. Pringle (1974) developed a more specific solution for the MINQE(U) estimator in the general variance component model (1.2.1) and derived the MINQE(U,I) estimator using the residuals from a weighted least squares analysis for fitting β using V_α^(-1) as the weight.
1.2.1 Iterated MINQE(U,I) and Asymptotic Results
C. R. Rao (1972) suggested that after solving the MINQE(U,I) equations, the resulting estimate be used as the new prior to calculate the MINQE(U,I) equations again, solving for a new estimate repeatedly until two successive estimates are "close". The resulting iterative values were named I-MINQE(U,I) by Brown (1976), who showed that I-MINQE(U,I) and MINQE(U,I) are asymptotically normal. A general asymptotic theory of MINQE(U,I) and I-MINQE(U,I) was developed by Schmidt and Thrum (1981), based on quite arbitrary sequences of variance component models. They found surprisingly simple and general conditions for strong and weak consistency of MINQE(U,I) if V = Σ_i σ_i^2 V_i is such that σ_i^2 > 0 for all i. Brown (1976) proved the asymptotic normality of MINQE(U,I) for m-replicated models assuming a universal normalizing factor m^(1/2).
1.2.2 Computational Burden
From (1.2.3) it is clear that brute force attempts to calculate MINQE(U,I) estimates involve an extremely large computational burden. Inverting the matrix V_α, which is n x n where n is the number of observations, is prohibitive in many practical applications. However, several different approaches are available which reduce this problem of calculating the defining equations to one of inverting a matrix whose order is equal to the number of random levels in the design.
The W-transformation which was developed by Hemmerle and Hartley
(1973) was applied by Corbeil and Searle (1976) in an iterative method
for obtaining restricted maximum likelihood estimates.
The
W-transformation was similarly extended to the MINQE(U,I) equations by
Liu and Senturia (1977).
Hemmerle and Lorens (1976) developed a
Cholesky type algorithm for forming the W-transformation, and Goodnight
and Hemmerle (1979) wrote an extremely efficient code with computational
efficiency and storage economy equivalent to Hemmerle and Lorens.
In addition to the W-transformation, other approaches have been
suggested for evaluating the MINQE(U,I) equations.
Schaffer (1975)
showed that Henderson's mixed model equations and solution contained all
the necessary information.
Hartley, Rao, and Lamotte (1978) put forth a
simple "synthesis"-based method for the MINQE(U,I) problem when V_α = I.
Most recently, Wansbeek (1980) showed that the residuals from several
regressions using instrumental variables may be used to construct the
MINQE(U,I) defining equations.
Several authors have given explicit solutions for the MINQE(U,I)
equations for specific models.
These include Ahrens (1978) and Swallow
and Searle (1978) who gave straightforward methods for the one-way random model and Kleffe (1977a) who gave explicit formulae for the
r-MINQE(U) problem. In addition, Kleffe (1979) gave closed form solutions for the problem of MINQE(U,I) estimation in the two-way nested
design and the two-way crossed with no interactions design for both
mixed and random effects.
Shah and Puri (1976) applied MINQE(U,I) estimation to incomplete block designs and derived simple formulae for a
class of designs which includes balanced designs.
Giesbrecht and
Burrows (1978) gave a compact and efficient method for computing
MINQE(U,I) estimates for hierarchical classification models.
1.2.3 Non-negative Estimation
The issue of computational burden aside, there still remains the
problem of negative estimates of variance components.
P. S. R. S. Rao
and Chaubey (1978) developed MINQE (without invariance and without
unbiasedness) estimates for the heteroscedastic variance model and
showed that these estimates are non-negative.
Chaubey (1980) extended
this result to the variance component model (1.2.1).
Brown (1977) also
studied the problem of negative heteroscedastic variances and
constructed non-negative estimates which are identical to MINQE(U,I)
estimates whenever the MINQE(U,I) estimates are non-negative.
Pukelsheim (1980) considered MINQE(U,I) given that V_α = I and that the span of the matrices MV_1M, ..., MV_qM is a quadratic subspace. He proved that either the estimate of Σ_i f_iθ_i is ≥ 0 or else it is not non-negatively estimable. Pukelsheim (1981) reviewed previous work on non-negative estimation and gave many new results involving projections onto closed convex cones. Geometrically the estimates described are clear; however, the solutions are quite difficult.
1.2.4 Covariance Component Estimation
With the exception of Rao's covariance model (1.2.2), very few applications of the principle of MINQE to covariance component estimation appeared in the literature until 1979. At the heart of the problem was the question of a natural estimator. Given the model y = Xβ + Uε, where D(ε) = Σ_i θ_iF_i and θ is restricted to the set {θ : Σ_i θ_iF_i ≥ 0}, Rao (1979) and Rao and Kleffe (1979) settled upon Σ_i u_i ε'F_iε as a natural estimator of the linear parametric form f'θ. Recall from Seely (1971) that under the assumptions (1.2.4) the natural estimator described above is the uniformly minimum variance unbiased estimator of its expectation. Minimum norm quadratic estimation under several different sets of side conditions was given. Chaubey (1982) considered a special case of the above and worked out explicit expressions for the intraclass covariance regression model.
Rao and Kleffe (1979) studied the m-replicated linear model in
detail.
Pukelsheim and Styan (1979) proved, using the dispersion mean model results from Pukelsheim (1977a), that if the span of the matrices MV_1M, ..., MV_qM is a quadratic subspace then with probability 1 MINQE(U,I) estimation provides a non-negative estimate of D(y), and a positive definite estimate if the sample dispersion matrix is positive definite.
1.3 Outline of this Research
In this paper the problem of minimum norm quadratic estimation in the following multivariate linear model is studied:

    y_i = Xβ_i + Uε_i,    E(ε_i) = 0,    i = 1, ..., p
    E(ε_kε_l') = Σ_{i=1}^q θ_i(kl)F_i,    k, l = 1, ..., p

where (θ_i) = (θ_i(kl)), i = 1, ..., q, are the unknown p x p matrices of variance-covariance components. Several different MINQE are derived: MINQE(U,I), MINQE(U), MINQE, MINQE(I), and MINQE subject to X'AX = 0, where the letter U indicates unbiased and I indicates translation invariant.
The special case for which the matrices F_i are idempotent and pairwise orthogonal is considered. It is shown that MINQE and MINQE(I) estimators of the matrices θ_1, ..., θ_q are positive definite and non-negative definite, respectively.
Finally, estimation for the multivariate one-way random model

    y_ijk = μ_k + a_ik + u_ijk,    i = 1, ..., g;  j = 1, ..., n_i;  k = 1, ..., p

is studied in detail. If g > 1 and n_i > 1 for at least one i, then all parametric forms tr(C_1θ_1 + C_2θ_2) are shown to be MINQE(U,I)-estimable. Given T_1 ≥ 0 and T_2 > 0 as prior estimates of the among and within covariance matrices and n_1, ..., n_g, the group sizes, the MINQE(U,I) equations are expressed in terms of the group sizes, T_2^(-1), and the eigenvectors and eigenvalues of T_2^(-1/2)T_1T_2^(-1/2). The special case for which T_1 and T_2 are diagonal is considered, and explicit expressions for the MINQE(U,I) estimates of θ_1 and θ_2 are given. Finally, the MINQE(I) and MINQE estimates of θ_1 and θ_2 are developed.
2. THE MINQUE PRINCIPLE
2.1 Introduction
In this chapter the form of the general linear model used throughout
the remaining discussions is described.
The principle of minimum norm
quadratic estimation using the Euclidean norm is discussed.
Finally,
the minimum norm quadratic estimator y'Ay is defined under five different sets of side conditions:
unbiasedness and translation
invariance, unbiasedness alone, translation invariance alone, X'AX = 0,
and A symmetric.
2.2 The Form of the General Model
Let y be an n-vector of random variables with a linear structure

    y = Xβ + Uε    (2.2.1)

where X is a given n x m matrix, β is an m-vector of unknown parameters, U is a given n x c matrix, and ε is a c-vector of random variables with zero mean value and dispersion matrix

    D(ε) = E(εε') = Σ_{i=1}^q θ_iF_i.

For i = 1, ..., q, F_i is a given c x c symmetric matrix, and θ = (θ_1, ..., θ_q)' is restricted to the full rank cone of all q-vectors such that Σ_i θ_iF_i is non-negative definite. The dispersion matrix of y is

    D(y) = Σ_{i=1}^q θ_iV_i

where V_i = UF_iU' for i = 1, ..., q. Notice that this model admits the representation of covariance components as well as variance components.
Definition (2.2.1): θ is said to be identifiable in distribution if D(y) given θ = θ^(1) equal to D(y) given θ = θ^(2) implies that θ^(1) = θ^(2). In the case D(y) = Σ_i θ_iV_i, θ is identifiable in distribution if and only if the matrices V_1, ..., V_q are linearly independent. This is an important assumption of the general linear model.
Let F_α = Σ_i α_iF_i be a positive definite prior estimate of D(ε), write F_α = F_α^(1/2)F_α^(1/2), and transform model (2.2.1) to

    y = Xβ + U*ε*    (2.2.2)

where U* = UF_α^(1/2) and ε* = F_α^(-1/2)ε.
A quadratic form y'Ay used as an estimator of f'θ is said to be translation invariant if it is unaffected by changes in the fixed effects parameter β. This means that if β is mapped to β + δ, then y'Ay is unchanged, i.e.,

    δ'X'AXδ + 2δ'X'A(Xβ + Uε) = 0    for any δ.

A necessary and sufficient condition for this is AX = 0.
Definition (2.2.3): A quadratic form y'Ay is said to be an unbiased estimator of f'θ if

    E(y'Ay) = β'X'AXβ + Σ_i θ_i tr(AV_i)    (2.2.3)

is equal to Σ_i f_iθ_i for all β and θ. This implies that

    X'AX = 0 and tr(AV_i) = f_i,    i = 1, ..., q.    (2.2.4)
Definition (2.2.4): Under the transformed model (2.2.2) a natural estimator of f'θ is defined to be

    γ = Σ_i u_iε*'F_i*ε* = ε*'Nε*    (2.2.5)

where F_i* = F_α^(-1/2)F_iF_α^(-1/2) and u_1, ..., u_q are chosen such that E(ε*'Nε*) = f'θ for all θ.
Note that this natural estimator differs from that originally proposed by Rao, since the intention is to extend minimum norm estimation to models more general than the variance components or covariance components models considered in Rao (1971).
Lemma (2.2.1):
If the matrices V_1, ..., V_q are linearly independent then N exists and is unique.
Proof:
Since ε*'Nε* is an unbiased estimator of f'θ, u = (u_1, ..., u_q)' is a solution of Su = f where

    S = ( tr(F_i*F_j*) ) = ( tr(F_iF_α^(-1)F_jF_α^(-1)) ).

If S is non-singular, then u is the unique solution and N is uniquely defined. But it is a well-known result that S is non-singular if and only if the matrices F_1*, ..., F_q* are linearly independent. The linear independence of F_1*, ..., F_q* follows from the linear independence of V_1, ..., V_q. To see this, assume

    0 = Σ_i c_iF_i*.

Multiplying on the left by UF_α^(1/2) and on the right by F_α^(1/2)U' gives

    0 = Σ_i c_iUF_iU' = Σ_i c_iV_i.

Therefore c_i = 0, i = 1, ..., q. Hence F_1*, ..., F_q* are linearly independent.
A consequence of Lemma (2.2.1) is that under the very reasonable assumption that θ is identifiable in distribution, the natural estimator of f'θ exists and is uniquely defined.
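Lemma (2.2.1) is constructive: the weights u come from the linear system Su = f. The following sketch is not from the thesis; the matrices F_i, the priors α, and the target f are illustrative assumptions, and the square root of F_α is taken via its spectral decomposition.

```python
import numpy as np

# Hypothetical ingredients: two symmetric F_i and positive priors alpha
F = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]
alpha = [2.0, 3.0]
Fa = sum(a * Fi for a, Fi in zip(alpha, F))       # positive definite prior F_alpha

# F_alpha^{-1/2} via the spectral decomposition
w, P = np.linalg.eigh(Fa)
Fa_mh = P @ np.diag(w ** -0.5) @ P.T
Fstar = [Fa_mh @ Fi @ Fa_mh for Fi in F]          # F_i* = F_alpha^{-1/2} F_i F_alpha^{-1/2}

# S_ij = tr(F_i* F_j*) = tr(F_i F_alpha^{-1} F_j F_alpha^{-1}); solve S u = f
S = np.array([[np.trace(Fi @ Fj) for Fj in Fstar] for Fi in Fstar])
f = np.array([1.0, 1.0])                          # coefficients of the parametric form f'theta
u = np.linalg.solve(S, f)
N = sum(ui * Fi for ui, Fi in zip(u, Fstar))      # matrix of the natural estimator
```

Unbiasedness of ε*'Nε* for f'θ amounts to tr(NF_i*) = f_i for each i, which holds by construction of u.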
2.3 Minimum Norm Quadratic Estimation
The natural estimator has several desirable properties. It is unbiased, and it always exists and is uniquely defined if V_1, ..., V_q are linearly independent. Seely (1971) proved another desirable property of the natural estimator. If ε follows a multivariate normal distribution, if the space spanned by the matrices F_1, ..., F_q is a quadratic subspace, i.e., A = Σ_i a_iF_i implies that there are scalars c_1, ..., c_q such that A^2 = Σ_i c_iF_i, and if the remaining assumptions of (1.2.4) hold, then the natural estimator is the uniformly minimum variance unbiased estimate of f'θ.
But ε is unobservable; therefore, the natural estimator is not available. Minimum norm quadratic estimation leads to an estimate y'Ay which in some sense approximates the natural estimate by making the norm of their difference "small".
Consider the transformed linear model described in (2.2.2) and a quadratic estimator γ = y'Ay of f'θ. Now the difference between y'Ay and the natural estimator ε*'Nε* is

    [ε*' β'] [ U*'AU* − N   U*'AX ] [ε*]
             [ X'AU*        X'AX  ] [β ].    (2.3.1)

The minimum norm estimator is obtained by minimizing an appropriately chosen norm of the matrix of the quadratic form in (2.3.1):

    [ U*'AU* − N   U*'AX ]
    [ X'AU*        X'AX  ].    (2.3.2)

The choice of matrix norm to be used is an important decision in theory, but in practice the class of norms considered is limited to the weighted and unweighted Euclidean norm, owing to its mathematical tractableness. The unweighted Euclidean norm will be used throughout this paper. Notice that the squared Euclidean norm of (2.3.2) is

    ||U*'AU* − N||^2 + 2||U*'AX||^2 + ||X'AX||^2.    (2.3.3)

Minimizing the Euclidean norm of (2.3.2) is equivalent to minimizing its squared norm or, using (2.3.3), minimizing

    tr(U*'AU* − N)(U*'AU* − N) + 2tr(U*'AXX'AU*) + tr(X'AXX'AX).    (2.3.4)
2.4 Minimum Norm Quadratic Unbiased Translation Invariant Estimation
A quadratic form y'Ay is an unbiased (2.2.3) and translation invariant (2.2.2) estimator of f'θ if

    AX = 0 and tr(AV_i) = f_i,    i = 1, ..., q.    (2.4.1)

Hence y'Ay is a minimum norm quadratic unbiased translation invariant estimator of f'θ, denoted MINQE(U,I), if the matrix A minimizes (2.3.4) subject to AX = 0 and tr(AV_i) = f_i, i = 1, ..., q. Under these side conditions (2.3.4) becomes

    tr(U*'AU* − N)(U*'AU* − N) = tr(U*'AU*U*'AU*) − 2tr(U*'AU*N) + tr(NN).    (2.4.2)

Now U* = UF_α^(1/2), so

    tr(AUF_αU'AUF_αU') = tr(AV_αAV_α).    (2.4.3)

Since N = Σ_i u_iF_i*,

    tr(U*'AU*N) = tr(F_α^(1/2)U'AUF_α^(1/2) Σ_i u_iF_i*)
                = Σ_i u_i tr(U'AUF_i)
                = Σ_i u_i tr(AV_i)
                = Σ_i u_if_i,    since tr(AV_i) = f_i, i = 1, ..., q.    (2.4.4)

Using (2.4.3) and (2.4.4), minimizing tr(U*'AU* − N)(U*'AU* − N) subject to AX = 0 and tr(AV_i) = f_i, i = 1, ..., q, is equivalent to finding

    min tr(AV_αAV_α)
    subject to AX = 0, tr(AV_i) = f_i, i = 1, ..., q.    (2.4.5)

This problem is addressed in the following theorem.
Theorem (2.4.1): Rao and Kleffe (1979), Kleffe (1980)
Given the transformed model (2.2.2), if V_α + XX' > 0 and

    C_UI^f = {A : A symmetric, AX = 0, tr(AV_i) = f_i, i = 1, ..., q}

is not empty, then the MINQE(U,I) estimate of f'θ, obtained by solving (2.4.5), is

    γ = Σ_i λ_i y'A_iy    (2.4.6)

where A_i = (MV_αM)+V_i(MV_αM)+, i = 1, ..., q, and λ = (λ_1, ..., λ_q)' is any solution of Qλ = f, with (Q_ij) = ( tr(A_iV_j) ).
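Theorem (2.4.1) can be transcribed numerically: form M, the Moore-Penrose inverse (MV_αM)+, the matrices A_i, and the system Qλ = f. The sketch below is not from the thesis; the one-way design, the priors, and the target f are hypothetical choices.

```python
import numpy as np

# Hypothetical design: one-way layout, 3 groups of 2 observations
n = 6
X = np.ones((n, 1))
U1 = np.kron(np.eye(3), np.ones((2, 1)))
V = [U1 @ U1.T, np.eye(n)]                         # V_1, V_2
Va = sum(a * Vi for a, Vi in zip([1.0, 1.0], V))   # priors alpha = (1, 1)

M = np.eye(n) - X @ np.linalg.pinv(X.T @ X) @ X.T  # M = I - X(X'X)^- X'
G = np.linalg.pinv(M @ Va @ M)                     # (M V_alpha M)^+
A = [G @ Vi @ G for Vi in V]                       # A_i of (2.4.6)

# Q lambda = f with Q_ij = tr(A_i V_j)
Q = np.array([[np.trace(Ai @ Vj) for Vj in V] for Ai in A])
f = np.array([0.0, 1.0])                           # estimate sigma_2^2, say
lam = np.linalg.solve(Q, f)

y = np.random.default_rng(2).standard_normal(n)
gamma = sum(l * (y @ Ai @ y) for l, Ai in zip(lam, A))  # MINQE(U,I) estimate
```

Because the range of (MV_αM)+ lies inside the range of M, the combined matrix Σ_i λ_iA_i automatically satisfies the invariance condition AX = 0 as well as the unbiasedness conditions tr(AV_j) = f_j.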
The following lemmas were stated but not proved in Rao and Kleffe (1979)
and Kleffe (1980).
Lemma (2.4.1):
If V_α + XX' > 0 then r(S_M) = r(S_(MV_αM)+), where r(A) denotes the range of A, the matrix S_A is defined by (S_A)_ij = tr(AV_iAV_j), and A+ is the Moore-Penrose inverse of the matrix A.
Proof:
Let T = V_α + XX'. Then

    (MV_αM)+ = (M(V_α + XX')M)+ = (MTM)+.    (2.4.7)

Since T > 0,

    (MTM)+ = T^(-1) − T^(-1)X(X'T^(-1)X)⁻X'T^(-1).

Several equalities follow directly:

    M(MTM)+ = (MTM)+M = (MTM)+.    (2.4.8)
    (MTM)+(MTM) = (MTM)(MTM)+ = M.    (2.4.9)

Let vec(A) be the vector formed by stacking the columns of A one under the other in a single column. Then tr(AB) = vec(A)'vec(B'). Consider

    C  = [vec(T^(1/2)M(MTM)+V_1(MTM)+MT^(1/2)), ..., vec(T^(1/2)M(MTM)+V_q(MTM)+MT^(1/2))]'
    C* = [vec(T^(1/2)MV_1MT^(1/2)), ..., vec(T^(1/2)MV_qMT^(1/2))]'.

Then

    (CC')_ij = tr(T^(1/2)M(MTM)+V_i(MTM)+MT^(1/2)T^(1/2)M(MTM)+V_j(MTM)+MT^(1/2))
             = tr(V_i(MTM)+V_j(MTM)+) = (S_(MTM)+)_ij,

i.e., CC' = S_(MTM)+, and, using (2.4.9),

    (CC*')_ij = tr(V_iMV_jM) = (S_M)_ij,

i.e., CC*' = S_M. Therefore,

    r(S_M) ⊂ r(C) = r(CC') = r(S_(MTM)+) = r(S_(MV_αM)+).    (2.4.10)

Similarly, consider

    D  = [vec(MV_1M), ..., vec(MV_qM)]'
    D* = [vec((MTM)+V_1(MTM)+), ..., vec((MTM)+V_q(MTM)+)]'.

Then DD' = S_M and, using (2.4.8), (DD*')_ij = (S_(MTM)+)_ij, i.e., DD*' = S_(MTM)+. Therefore,

    r(S_(MV_αM)+) = r(S_(MTM)+) ⊂ r(D) = r(DD') = r(S_M).    (2.4.11)

Using (2.4.10) and (2.4.11), r(S_(MV_αM)+) = r(S_M).
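The closed form for (MTM)+ quoted after (2.4.7), together with the identities (2.4.8) and (2.4.9), can be verified numerically; the X and T below are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 2
X = rng.standard_normal((n, m))                  # full column rank (almost surely)
B = rng.standard_normal((n, n))
T = B @ B.T + n * np.eye(n)                      # T = V_alpha + XX' > 0, illustrative

M = np.eye(n) - X @ np.linalg.pinv(X.T @ X) @ X.T
Tinv = np.linalg.inv(T)

# closed form: (MTM)^+ = T^{-1} - T^{-1} X (X'T^{-1}X)^- X'T^{-1}
closed = Tinv - Tinv @ X @ np.linalg.pinv(X.T @ Tinv @ X) @ X.T @ Tinv
direct = np.linalg.pinv(M @ T @ M)               # Moore-Penrose inverse computed directly
```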
Lemma (2.4.2):
The MINQE(U,I) of f'θ exists if and only if f is in the range of S_M.
Proof:
If the MINQE(U,I) of f'θ exists then f is in r(S_M) by Lemma (2.4.1). Conversely, if f is in the range of S_M then there is a λ = (λ_1, ..., λ_q)' such that

    f_j = Σ_i tr(MV_iMV_j)λ_i = tr( (Σ_i λ_iMV_iM) V_j ),    j = 1, ..., q.

The matrix Σ_i λ_iMV_iM is in C_UI^f; therefore, C_UI^f is not empty and the MINQE(U,I) estimator of f'θ exists.
As a consequence of this lemma, to check the MINQE(U,I)-estimability of the parametric function f'θ only the range of S_M need be investigated.
2.5 Minimum Norm Quadratic Estimation Under Some Special Conditions
The existence of the MINQE(U,I) estimate is not assured in all situations, since the joint conditions of invariance and unbiasedness may lead to constraints AX = 0 and tr(AV_i) = f_i, i = 1, ..., q, which are inconsistent. Hence it is reasonable to investigate other side conditions. In particular, the following classes of symmetric matrices are considered:

    C_U^f = {A : X'AX = 0, tr(AV_i) = f_i, i = 1, ..., q}    (2.5.1)
    C     = {A : A symmetric}                                (2.5.2)
    C_0   = {A : X'AX = 0}                                   (2.5.3)
    C_I   = {A : AX = 0}.                                    (2.5.4)

The estimators y'Ay obtained by minimizing (2.3.4) subject to the restrictions (2.5.1), (2.5.2), (2.5.3), and (2.5.4) are denoted MINQE(U), MINQE, MINQE(0), and MINQE(I), respectively.
If only unbiasedness is considered and translation invariance is no longer guaranteed, then it is advisable to use a prior estimate of β, say β_0, map y to y − Xβ_0 and β to (β − β_0), denoted y* and β*, respectively, and work with this transformed model in addition to the transformation indicated in (2.2.2). The MINQE(U) involves not only different side conditions but also a different objective function, since for A in C_U^f the objective

    ||U*'AU* − N||^2 + 2||U*'AX||^2 + ||X'AX||^2

reduces, up to terms constant on C_U^f, to

    tr(A(V_α + XX')A(V_α + XX'))    since X'AX = 0.    (2.5.5)

Theorem (2.5.1): Rao and Kleffe (1979)
Let V_α > 0 and P_α = X(X'V_α^(-1)X)⁻X'V_α^(-1). If C_U^f is not empty, then the MINQE(U) estimate of f'θ, obtained by minimizing (2.5.5) subject to A in C_U^f, is Σ_i λ_i y*'A_iy* where

    A_i = (V_α + XX')^(-1)(V_i − P_αV_iP_α')(V_α + XX')^(-1),    i = 1, ..., q,    (2.5.6)

and λ = (λ_1, ..., λ_q)' is any solution of Qλ = f with (Q_ij) = ( tr(A_iV_j) ).
If, as in (2.5.2) through (2.5.4), the condition of unbiasedness is withdrawn, the minimum norm quadratic estimator of f'θ is no longer independent of the natural estimator N:

    N = Σ_i u_i F_α^(-1/2)F_iF_α^(-1/2)

where u is the unique solution of Su = f, S = ( tr(F_iF_α^(-1)F_jF_α^(-1)) ). Also, the three classes of symmetric matrices will never be empty; hence, the existence of the MINQE, MINQE(0), and MINQE(I) is guaranteed.
Theorem (2.5.2): Rao and Kleffe (1979)
Let V_α > 0, V_N = U*NU*' where N is as defined in (2.2.5), and R_α = V_α^(-1)(I − P_α).
(i) The MINQE of f'θ is y*'Ay* where

    A = (V_α + XX')^(-1)V_N(V_α + XX')^(-1).    (2.5.7)

(ii) The MINQE(0) of f'θ is y*'Ay* where

    A = (V_α + XX')^(-1)(V_N − P_αV_NP_α')(V_α + XX')^(-1).    (2.5.8)

(iii) The MINQE(I) of f'θ is y'Ay where

    A = R_αV_NR_α.    (2.5.9)
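Theorem (2.5.2)(iii) composes pieces already computed: the natural-estimator weights u of Lemma (2.2.1) give V_N, and R_α gives the MINQE(I) matrix. A numerical sketch follows; it is not from the thesis, and the choices U = I and diagonal F_i are illustrative assumptions that keep V_α positive definite.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
X = np.ones((n, 1))
U = np.eye(n)                                         # so V_i = F_i (illustrative)
F = [np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]),
     np.diag([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])]
alpha = [1.0, 2.0]
Fa = sum(a * Fi for a, Fi in zip(alpha, F))
Fainv = np.linalg.inv(Fa)

# weights of the natural estimator (Lemma (2.2.1)): S u = f
S = np.array([[np.trace(Fi @ Fainv @ Fj @ Fainv) for Fj in F] for Fi in F])
f = np.array([1.0, 0.0])
u = np.linalg.solve(S, f)

# V_N = U* N U*' collapses to sum_i u_i V_i; here V_i = F_i
VN = sum(ui * (U @ Fi @ U.T) for ui, Fi in zip(u, F))

Va = U @ Fa @ U.T                                     # V_alpha > 0 here
Vainv = np.linalg.inv(Va)
Pa = X @ np.linalg.solve(X.T @ Vainv @ X, X.T @ Vainv)
Ra = Vainv @ (np.eye(n) - Pa)                         # R_alpha = V_alpha^{-1}(I - P_alpha)

A = Ra @ VN @ Ra                                      # MINQE(I) matrix, (2.5.9)
y = rng.standard_normal(n)
minqe_i = y @ A @ y
```

Since R_αX = 0, the resulting A is symmetric and satisfies AX = 0, so y'Ay is indeed translation invariant.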
3. MINIMUM NORM QUADRATIC ESTIMATION IN THE MULTIVARIATE LINEAR MODEL
3.1 Introduction
In this chapter minimum norm estimation is extended to the multivariate linear model y_i = Xβ_i + Uξ_i, i = 1, ..., p, by applying results from the preceding chapter to an equivalent univariate model. The defining equations for MINQE(U,I), MINQE, MINQE(I), and MINQE(0) are derived. It is also shown that if θ is identifiable in distribution in the univariate model then θ_1, ..., θ_q are identifiable in the multivariate model. Another concept which is preserved is that the matrices F_1, ..., F_q span a quadratic subspace in the univariate model if and only if their counterparts in the multivariate model also span a quadratic subspace. Hence the natural estimator in both the univariate and the multivariate models is the uniformly minimum variance unbiased estimator if the matrices F_1, ..., F_q span a quadratic subspace and the remaining assumptions in (1.2.4) hold.
3.2 The Multivariate Linear Model
In describing the multivariate linear model the following notation
will be useful.
vec(A) is the vector formed by stacking the columns of A one under the other in a single column.
vech(A) of a symmetric matrix A is formed by using only that part of each column which is on or below the diagonal.
A⊗B is the Kronecker product of A and B.
a_i(kl) = (A_i)_kl is element (k,l) of the matrix A_i.
δ_kl is equal to 1 when k = l; otherwise δ_kl is equal to zero.
e_i is the i-th unit vector.
E_kl is a square matrix of all zeros except elements (k,l) and (l,k) equal to 1; i.e., E_kl = e_ke_l' + e_le_k' − δ_kl e_ke_k'.
Consider the linear model

    y_i = Xβ_i + Uξ_i,    i = 1, ..., p    (3.2.1)

where y_i is an n-vector of random variables, i = 1, ..., p,
X is a known n x k matrix,
U is a known n x r matrix,
β_i is a k-vector of unknown parameters, i = 1, ..., p, and
ξ_i is an r-vector of random variables, i = 1, ..., p, with

    E(ξ_i) = 0,    i = 1, ..., p,
    E(ξ_kξ_l') = Σ_{i=1}^q θ_i(kl)F_i,    k, l = 1, ..., p,

where F_1, ..., F_q are known r x r symmetric matrices and θ_i, i = 1, ..., q, are unknown p x p matrices.
Using matrix notation, model (3.2.1) becomes

    Y = XB + UΞ    (3.2.2)

where Y = [y_1 ... y_p], B = [β_1 ... β_p], and Ξ = [ξ_1 ... ξ_p].
Stacking Y by rows yields an equivalent univariate model:

    vec(Y') = vec((XB)') + vec((UΞ)')
            = (X⊗I)vec(B') + (U⊗I)vec(Ξ')    using vec(ABC) = (C'⊗A)vec(B)
            = (X⊗I)β + (U⊗I)ε    (3.2.3)

where β = (B_11, B_12, ..., B_kp)' = (β_1(1), β_2(1), ..., β_p(k))' and
ε = (Ξ_11, Ξ_12, ..., Ξ_rp)' = (ξ_1(1), ξ_2(1), ..., ξ_p(r))'.
Lemma (3.2.1):

    D(ε) = Σ_i F_i⊗θ_i = Σ_i Σ_{k≤l} θ_i(kl) F_i⊗E_kl.

Proof:
E(ε_skε_tl) = E(ξ_k(s)ξ_l(t)) = Σ_i θ_i(kl)f_i(st). Let Ξ_s = (Ξ_s1, ..., Ξ_sp)', s = 1, ..., r. The (k,l) element of E(Ξ_sΞ_t') is E(Ξ_skΞ_tl). Therefore,

    E(Ξ_sΞ_t') = Σ_i f_i(st)θ_i.

But E(Ξ_sΞ_t') is the (s,t) submatrix of E(εε'); hence,

    D(ε) = E(εε') = Σ_i F_i⊗θ_i = Σ_i Σ_{k≤l} θ_i(kl) F_i⊗E_kl.
Summarizing, the multivariate model (3.2.1) has an equivalent univariate expression:

    y = vec(Y') = (X⊗I)β + (U⊗I)ε    (3.2.4)

where E(y) = (X⊗I)β and

    D(y) = (U⊗I)D(ε)(U⊗I)' = Σ_i Σ_{k≤l} θ_i(kl) V_ikl

where V_ikl = V_i⊗E_kl.
Lemma (3.2.2):

The matrices V_1,...,V_q are linearly independent if and only if the matrices V_i ⊗ E_kl, i = 1,...,q, 1 ≤ k ≤ l ≤ p, are linearly independent.

Proof:

Assume that V_1,...,V_q are linearly independent and

    Σ_i Σ_{k≤l} c_ikl V_i ⊗ E_kl = 0.

Then Σ_i V_i ⊗ C_i = 0 where (C_i)_kl = c_ikl. Therefore,

    Σ_i (V_i ⊗ C_i)_{kl,k'l'} = 0 for all kl, k'l',

or Σ_i (V_i)_{kk'} c_ill' = 0 for all kl, k'l', or

    Σ_i V_i c_ill' = 0 for all ll'.

Therefore, c_ill' = 0 for all i and all ll'.

Conversely, assume the matrices V_i ⊗ E_kl, i = 1,...,q, 1 ≤ k ≤ l ≤ p, are linearly independent and Σ_i c_i V_i = 0. Then Σ_i V_i ⊗ C_i = 0 where (C_i)_ll' = c_i for all ll'. The linear independence of the matrices V_i ⊗ E_kl implies that C_i = 0; therefore, c_i = 0, i = 1,...,q.
As a consequence of Lemma (3.2.2), only the linear independence of V_1,...,V_q need be checked to assure the linear independence of the matrices V_ikl in the multivariate linear model.
Definition (3.2.1):

The space S spanned by the matrices F_1,...,F_q is said to be a quadratic subspace if A = Σ_i c_i F_i in S implies A² in S; i.e., there are scalars d_1,...,d_q such that A² = Σ_i d_i F_i.
Lemma (3.2.3):

The space S spanned by the matrices F_1,...,F_q is a quadratic subspace of the space of all symmetric matrices of order r if and only if the space S_2 spanned by the matrices F_i ⊗ E_kl, i = 1,...,q, 1 ≤ k ≤ l ≤ p, is a quadratic subspace of the space of all symmetric matrices of order rp.
Proof:

Assume F_1,...,F_q span a quadratic subspace. Let

    A = Σ_i Σ_{k≤l} t_ikl F_i ⊗ E_kl = Σ_i F_i ⊗ T_i

be an arbitrary element in S_2. Writing C_i = F_i ⊗ T_i,

    A² = Σ_{ij} F_i F_j ⊗ T_i T_j = Σ_i C_i² + Σ_{i<j} (C_i C_j + C_j C_i).

Now F_i² is in S; writing F_i² = Σ_j d_j F_j,

    C_i² = F_i² ⊗ T_i² = Σ_j d_j F_j ⊗ T_i² is in S_2.    (3.2.5)

By definition, C_i, C_j, C_i², C_j², and (C_i + C_j)² are in S_2.    (3.2.6)

Therefore,

    (C_i + C_j)² - C_i² - C_j² = C_i C_j + C_j C_i is in S_2.    (3.2.7)

Hence, A² is in S_2 since, using (3.2.5), (3.2.6), and (3.2.7), A² is seen to be a sum of matrices in S_2.

Conversely, assume that S_2 is a quadratic subspace. Let F = Σ_i t_i F_i be an arbitrary element of S. By definition F ⊗ E_11 is in S_2 and (F ⊗ E_11)² is in S_2. But (F ⊗ E_11)² = F² ⊗ E_11² = F² ⊗ E_11. Since (F ⊗ E_11)² is in S_2 there exist matrices A_1,...,A_q such that

    F² ⊗ E_11 = Σ_i F_i ⊗ A_i.

Equating elements,

    (F² ⊗ E_11)_{mn,m'n'} = Σ_i (F_i)_{mm'} (A_i)_{nn'}.

Now (E_11)_{nn'} = δ_n1 δ_n'1, so

    (F²)_{mm'} = Σ_i (F_i)_{mm'} (A_i)_11,    m,m' = 1,...,r,

or F² = Σ_i (A_i)_11 F_i. Hence F² is in S.
As a consequence of Lemma (3.2.3), the natural estimator in the multivariate model will be the uniformly minimum variance unbiased estimator if ε has a multivariate normal distribution, S is a quadratic subspace, and the remaining assumptions of (1.2.4) hold.

The following results will be useful in the next section.
Lemma (3.2.4):

Let A = (a_ij) and B = (b_ij) be arbitrary symmetric matrices. Then

    tr(A E_kl B E_k'l') = a_kl' b_k'l + (1 - δ_kl)(1 - δ_k'l') a_k'l b_kl'
                        + (1 - δ_k'l') a_kk' b_ll' + (1 - δ_kl) a_ll' b_kk'.    (3.2.8)

Proof:

Writing E_kl = e_k e_l' + (1 - δ_kl) e_l e_k' and using tr(A e_a e_b' B e_c e_d') = a_da b_bc,

    tr(A E_kl B E_k'l') = a_l'k b_lk' + (1 - δ_k'l') a_k'k b_ll'
                        + (1 - δ_kl) a_l'l b_kk' + (1 - δ_kl)(1 - δ_k'l') a_k'l b_kl',

and (3.2.8) follows from the symmetry of A and B.

Furthermore, if A = B,

    tr(A E_kl A E_k'l') = a_kl' a_k'l (2 - δ_kl - δ_k'l' + δ_kl δ_k'l') + a_kk' a_ll' (2 - δ_kl - δ_k'l').

But δ_k'l' a_kl' a_k'l = δ_k'l' a_kk' a_ll', so

    tr(A E_kl A E_k'l') = a_kl' a_k'l (2 - δ_kl) + a_kk' a_ll' (2 - δ_kl)(1 - δ_k'l')
                        = (2 - δ_kl)(a_kl' a_k'l + (1 - δ_k'l') a_kk' a_ll').
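The two trace identities of Lemma (3.2.4) can be checked numerically over all index combinations; the following sketch (NumPy, arbitrary symmetric matrices and dimension) is purely illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p)); A = A + A.T   # arbitrary symmetric A
B = rng.standard_normal((p, p)); B = B + B.T   # arbitrary symmetric B

def E(k, l, p):
    # E_kl: all zeros except elements (k,l) and (l,k), which equal 1
    Z = np.zeros((p, p))
    Z[k, l] = 1.0
    Z[l, k] = 1.0
    return Z

d = lambda i, j: 1.0 if i == j else 0.0   # Kronecker delta

ok_general = True
ok_special = True
for k, l, k2, l2 in product(range(p), repeat=4):
    lhs = np.trace(A @ E(k, l, p) @ B @ E(k2, l2, p))
    rhs = (A[k, l2] * B[k2, l]
           + (1 - d(k, l)) * (1 - d(k2, l2)) * A[k2, l] * B[k, l2]
           + (1 - d(k2, l2)) * A[k, k2] * B[l, l2]
           + (1 - d(k, l)) * A[l, l2] * B[k, k2])
    ok_general &= np.isclose(lhs, rhs)
    # A = B special case
    lhs2 = np.trace(A @ E(k, l, p) @ A @ E(k2, l2, p))
    rhs2 = (2 - d(k, l)) * (A[k, l2] * A[k2, l]
                            + (1 - d(k2, l2)) * A[k, k2] * A[l, l2])
    ok_special &= np.isclose(lhs2, rhs2)
ok_general, ok_special = bool(ok_general), bool(ok_special)
```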
Lemma (3.2.5):

Let δ_klk'l' = 1 if k = l = k' = l', 0 otherwise. Note that δ_klk'l' = δ_kl δ_k'l' δ_kl' = δ_kl δ_kk' δ_ll', etc. Then

    tr(E_kl E_k'l') = 2 δ_kl' δ_k'l + 2 δ_kk' δ_ll' - 3 δ_klk'l',    (3.2.9)

and if l ≥ k and l' ≥ k', then

    tr(E_kl E_k'l') = (2 - δ_kl) δ_kk' δ_ll'.

Proof:

Using Lemma (3.2.4) with A = B = I and i_kl = δ_kl,

    tr(E_kl E_k'l') = (2 - δ_kl)(δ_kl' δ_k'l + (1 - δ_k'l') δ_kk' δ_ll')
                    = 2 δ_kl' δ_k'l + 2 δ_kk' δ_ll' - 3 δ_klk'l'.

Now if l ≥ k and l' ≥ k', then δ_kl' δ_k'l = δ_klk'l', since l' ≥ k' = l ≥ k and l' = k implies that l' = k' = l = k. Hence, as a special case of (3.2.9), if l ≥ k and l' ≥ k',

    tr(E_kl E_k'l') = 2 δ_kk' δ_ll' - δ_kl δ_kk' δ_ll' = (2 - δ_kl) δ_kk' δ_ll'.
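Both forms of the trace in Lemma (3.2.5) admit a direct numerical check (a sketch with an arbitrary dimension, not part of the original text):

```python
import numpy as np
from itertools import product

p = 4

def E(k, l, p):
    Z = np.zeros((p, p))
    Z[k, l] = 1.0
    Z[l, k] = 1.0
    return Z

# delta that is 1 exactly when all of its indices coincide
d = lambda *idx: 1.0 if len(set(idx)) == 1 else 0.0

ok_trace = True
ok_ordered = True
for k, l, k2, l2 in product(range(p), repeat=4):
    lhs = np.trace(E(k, l, p) @ E(k2, l2, p))
    rhs = 2 * d(k, l2) * d(k2, l) + 2 * d(k, k2) * d(l, l2) - 3 * d(k, l, k2, l2)
    ok_trace &= np.isclose(lhs, rhs)
    if k <= l and k2 <= l2:
        # ordered special case
        ok_ordered &= np.isclose(lhs, (2 - d(k, l)) * d(k, k2) * d(l, l2))
ok_trace, ok_ordered = bool(ok_trace), bool(ok_ordered)
```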
Lemma (3.2.6):

Let A and B be arbitrary p x p symmetric matrices. Then

    (ABA)_kl = Σ_{k'≤l'} (a_kl' a_k'l + (1 - δ_k'l') a_kk' a_ll') b_k'l'.

Proof:

    (ABA)_kl = Σ_{k',l'} a_kk' b_k'l' a_l'l
             = Σ_{k'<l'} (a_kk' a_ll' + a_kl' a_k'l) b_k'l' + Σ_{k'} a_kk' a_k'l b_k'k'
             = Σ_{k'≤l'} (a_kl' a_k'l + (1 - δ_k'l') a_kk' a_ll') b_k'l'.
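The half-vectorized expansion of Lemma (3.2.6) can also be verified numerically; this sketch (arbitrary dimension and seed) is illustrative only.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
p = 5
A = rng.standard_normal((p, p)); A = A + A.T   # symmetric A
B = rng.standard_normal((p, p)); B = B + B.T   # symmetric B

ok_aba = True
for k, l in product(range(p), repeat=2):
    # sum over ordered pairs k2 <= l2, with (1 - delta) on the a_kk' a_ll' term
    rhs = sum((A[k, l2] * A[k2, l]
               + (0.0 if k2 == l2 else A[k, k2] * A[l, l2])) * B[k2, l2]
              for k2 in range(p) for l2 in range(k2, p))
    ok_aba &= np.isclose((A @ B @ A)[k, l], rhs)
ok_aba = bool(ok_aba)
```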
3.3 MINQE(U,I) Estimation

Let T_i, i = 1,...,q, be prior estimates of Θ_i, i = 1,...,q, such that V_α = Σ_i V_i ⊗ T_i is non-negative definite. Applying Theorem (2.4.1) to the multivariate model (3.2.4) using V_α as a prior estimate of D(y) leads to the following theorem.

Theorem (3.3.1):

Given model (3.2.4) and prior estimates T_1,...,T_q of Θ_1,...,Θ_q such that V_α + (XX' ⊗ I) > 0, the parametric form Σ_i tr(C_i Θ_i) is MINQE(U,I)-estimable if and only if there is a q x p(p+1)/2 matrix Γ which solves

    S_M Γ = [c_11 c_12 ... c_pp],

where S_M = (tr(M V_i M V_j)) and the i-th row of the right-hand side is vech(C_i)'.

The MINQE(U,I) estimate of Σ_i tr(C_i Θ_i), C_1,...,C_q symmetric p x p matrices, is

    Σ_i y' (M V_α M)_m⁺ (V_i ⊗ Λ_i) (M V_α M)_m⁺ y

where (M V_α M)_m⁺ = (Σ_i M V_i M ⊗ T_i)⁺ and Λ_1,...,Λ_q are any solution of

    Σ_i' tr((M V_α M)_m⁺ (V_i' ⊗ Λ_i') (M V_α M)_m⁺ (V_i ⊗ E_kl)) = (2 - δ_kl) c_i(kl),
    i = 1,...,q,  1 ≤ k ≤ l ≤ p.    (3.3.2)
Proof:

The proof is a straightforward application of Theorem (2.4.1) and Lemma (2.4.2) with the following correspondence of terms:

    Model (2.2.4)        Model (3.2.4)
    X                    X ⊗ I
    M                    M ⊗ I
    V_i                  V_ikl = V_i ⊗ E_kl
    θ_i                  θ_i(kl)

Since C_i is symmetric,

    Σ_i tr(C_i Θ_i) = Σ_i Σ_{k≤l} (2 - δ_kl) c_i(kl) θ_i(kl).

From Lemma (2.4.2), the parametric form Σ_i Σ_{k≤l} (2 - δ_kl) c_i(kl) θ_i(kl) has a MINQE(U,I) estimate if and only if the vector ((2 - δ_kl) c_i(kl) : i = 1,...,q, 1 ≤ k ≤ l ≤ p) is in the range of S_(M⊗I); i.e., there exists a vector (γ_i(kl) : i = 1,...,q, 1 ≤ k ≤ l ≤ p) which solves

    (2 - δ_kl) c_i(kl) = Σ_i' Σ_{k'≤l'} γ_i'(k'l') tr((M ⊗ I)(V_i' ⊗ E_k'l')(M ⊗ I)(V_i ⊗ E_kl))
                       = Σ_i' Σ_{k'≤l'} γ_i'(k'l') tr(M V_i' M V_i) δ_kk' δ_ll' (2 - δ_kl)    using Lemma (3.2.5)    (3.3.3)

or

    c_i(kl) = Σ_i' γ_i'(kl) tr(M V_i' M V_i).    (3.3.4)

Let γ_kl = (γ_1(kl),...,γ_q(kl))' and c_kl = (c_1(kl),...,c_q(kl))'. Then (3.3.4) becomes

    S_M γ_kl = c_kl,    1 ≤ k ≤ l ≤ p.    (3.3.5)

Let Γ = [γ_11 γ_12 ... γ_1p γ_22 ... γ_pp]. Then the p(p+1)/2 systems of equations in (3.3.5) are equivalent to the matrix equation S_M Γ = [c_11 c_12 ... c_pp].

Now given that ((2 - δ_kl) c_i(kl) : i = 1,...,q, 1 ≤ k ≤ l ≤ p) is in the range of S_M, the MINQE(U,I) estimate of Σ_i Σ_{k≤l} (2 - δ_kl) c_i(kl) θ_i(kl) is

    Σ_i Σ_{k≤l} λ_i(kl) y' (M V_α M)_m⁺ (V_i ⊗ E_kl) (M V_α M)_m⁺ y

where (λ_i(kl)) = (Λ_i)_kl, i = 1,...,q, 1 ≤ k ≤ l ≤ p, is any solution of

    (2 - δ_kl) c_i(kl) = Σ_i' Σ_{k'≤l'} λ_i'(k'l') tr((M V_α M)_m⁺ (V_i' ⊗ E_k'l') (M V_α M)_m⁺ (V_i ⊗ E_kl))
                       = Σ_i' tr((M V_α M)_m⁺ (V_i' ⊗ Λ_i') (M V_α M)_m⁺ (V_i ⊗ E_kl)).    (3.3.6)
Kleffe (1980) considers the multivariate linear model (3.2.4) and gives the MINQE(U,I) estimate when the prior estimates of Θ_i are non-negative multiples of a positive definite matrix. Some interesting simplifications result.
Theorem (3.3.2): Kleffe (1980)

Suppose T_i = w_i T, w_i ≥ 0, i = 1,...,q, are prior estimates of Θ_i, i = 1,...,q, where T is a positive definite p x p matrix. Then if V_α + (XX' ⊗ I) > 0 and if there is a q x p(p+1)/2 matrix Γ which solves the estimability equations of Theorem (3.3.1), then the MINQE(U,I) estimate of Σ_i tr(C_i Θ_i) is

    Σ_i tr(X_i Y' (MWM)⁺ V_i (MWM)⁺ Y)

where W = Σ_i w_i V_i, the symmetric matrices X_1,...,X_q are an arbitrary solution to

    (S_(MWM)⁺ ⊗ I) [X_1 ; ... ; X_q] = [C_1 ; ... ; C_q],

and Y = [y_1 ... y_p] is the n x p matrix of the observations.
Proof:

Since T_i = w_i T, V_α = Σ_i V_i ⊗ w_i T = W ⊗ T, so

    (M V_α M)_m⁺ = (MWM ⊗ T)⁺ = (MWM)⁺ ⊗ T⁻¹.    (3.3.7)

So

    (S_(MV_αM)_m⁺)_{ikl,i'k'l'} = tr(((MWM)⁺ ⊗ T⁻¹)(V_i ⊗ E_kl)((MWM)⁺ ⊗ T⁻¹)(V_i' ⊗ E_k'l'))
        = tr((MWM)⁺ V_i (MWM)⁺ V_i') tr(T⁻¹ E_kl T⁻¹ E_k'l')
        = (S_(MWM)⁺)_{ii'} (2 - δ_kl)(t_kl' t_k'l + (1 - δ_k'l') t_kk' t_ll')    (3.3.8)

using Lemma (3.2.4) and T⁻¹ = (t_kl).

Let Λ_1,...,Λ_q be a solution of (3.3.2). From Theorem (3.3.1),

    γ̂ = Σ_i y' (M V_α M)_m⁺ (V_i ⊗ Λ_i) (M V_α M)_m⁺ y
       = Σ_i y' ((MWM)⁺ V_i (MWM)⁺ ⊗ T⁻¹ Λ_i T⁻¹) y    using (3.3.7).

Let Z_i = T⁻¹ Λ_i T⁻¹ with elements z_i(kl). Then

    γ̂ = Σ_i Σ_{kl} z_i(kl) y' (((MWM)⁺ V_i (MWM)⁺) ⊗ e_k e_l') y
       = Σ_i tr(Z_i Y' (MWM)⁺ V_i (MWM)⁺ Y).    (3.3.9)

It remains to show that if the symmetric matrices X_1,...,X_q are a solution of

    (S_(MWM)⁺ ⊗ I) [X_1 ; ... ; X_q] = [C_1 ; ... ; C_q]    (3.3.10)

then the matrices T X_1 T,...,T X_q T are a solution of the MINQE(U,I) equations (3.3.2); using this, the theorem follows from (3.3.9) with Λ_i = T X_i T, since then Z_i = X_i.

Let x_kl = (x_1(kl),...,x_q(kl))' and c_kl = (c_1(kl),...,c_q(kl))', k,l = 1,...,p. Using this notation, (3.3.10) becomes

    S_(MWM)⁺ x_kl = c_kl,    k,l = 1,...,p,

or

    c_i(kl) = Σ_i' (S_(MWM)⁺)_{ii'} x_i'(kl).

Continuing, let Λ_i = T X_i T, so that X_i = T⁻¹ Λ_i T⁻¹. Then

    (2 - δ_kl) c_i(kl) = Σ_i' (S_(MWM)⁺)_{ii'} (2 - δ_kl)(T⁻¹ Λ_i' T⁻¹)_kl
        = Σ_i' Σ_{k'≤l'} (S_(MWM)⁺)_{ii'} (2 - δ_kl)(t_kl' t_k'l + (1 - δ_k'l') t_kk' t_ll') λ_i'(k'l')    using Lemma (3.2.6)
        = Σ_i' Σ_{k'≤l'} (S_(MV_αM)_m⁺)_{ikl,i'k'l'} λ_i'(k'l')    using (3.3.8),

which is (3.3.2). Hence the matrices T X_1 T,...,T X_q T are a solution of the MINQE(U,I) equations.
Theorem (3.3.2) may be considered the multivariate analog of the well-known result that in MINQE(U,I) estimation of variance components an estimate based on a vector α of prior values will be the same as one based on the vector of prior values wα, w > 0.
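The factorization (M V_α M)_m⁺ = (MWM)⁺ ⊗ T⁻¹ underlying Theorem (3.3.2) can be spot-checked numerically; the sketch below (arbitrary small dimensions, NumPy's Moore–Penrose pseudoinverse) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
n, kx, p, q = 6, 2, 3, 2
X = rng.standard_normal((n, kx))
M = np.eye(n) - X @ np.linalg.pinv(X)        # projector onto the orthogonal complement of C(X)
T = rng.standard_normal((p, p)); T = T @ T.T + np.eye(p)   # positive definite T
w = [1.5, 0.7]
V = [rng.standard_normal((n, n)) for _ in range(q)]
V = [Vi @ Vi.T for Vi in V]                  # non-negative definite V_i
W = sum(wi * Vi for wi, Vi in zip(w, V))

# V_alpha = sum_i V_i (x) (w_i T) = W (x) T, so
# ((M (x) I) V_alpha (M (x) I))^+ = (MWM)^+ (x) T^(-1)
lhs = np.linalg.pinv(np.kron(M @ W @ M, T))
rhs = np.kron(np.linalg.pinv(M @ W @ M), np.linalg.inv(T))
ok_factor = bool(np.allclose(lhs, rhs, atol=1e-8))
```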
3.4 Multivariate MINQE Under Some Specialized Conditions

The multivariate MINQE(U,I) was derived by applying Theorem (2.4.1) to the equivalent univariate model. This same approach is used to construct the multivariate versions of MINQE(U), MINQE, MINQE(0), and MINQE(I).
Theorem (3.4.1):

Let V_α = Σ_i V_i ⊗ T_i > 0, T_α = (V_α + XX' ⊗ I)⁻¹, and

    P_α = (X ⊗ I)((X' ⊗ I) V_α⁻¹ (X ⊗ I))⁻ (X' ⊗ I) V_α⁻¹.

If

    {A : A symmetric, (X' ⊗ I) A (X ⊗ I) = 0, tr(A (V_i ⊗ E_kl)) = (2 - δ_kl) c_i(kl), i = 1,...,q, 1 ≤ k ≤ l ≤ p}

is not empty, then the MINQE(U) estimate of Σ_i tr(C_i Θ_i) is

    Σ_i y*' Ã_i y*

where Ã_i = T_α (V_i ⊗ Λ_i - P_α (V_i ⊗ Λ_i) P_α') T_α, y* = y - X β_0 with β_0 a prior estimate of β, and Λ_1,...,Λ_q is any solution of

    (2 - δ_kl) c_i(kl) = Σ_i' tr(A_ikl (V_i' ⊗ Λ_i')),    i = 1,...,q,  1 ≤ k ≤ l ≤ p,    (3.4.1)

where

    A_ikl = (V_α + XX' ⊗ I)⁻¹ (V_i ⊗ E_kl - P_α (V_i ⊗ E_kl) P_α') (V_α + XX' ⊗ I)⁻¹,    i = 1,...,q,  1 ≤ k ≤ l ≤ p.

Proof:

Employing Theorem (2.5.1) with V_ikl = V_i ⊗ E_kl, the MINQE(U) estimate of Σ_i tr(C_i Θ_i) is

    γ̂ = Σ_i Σ_{k≤l} λ_ikl y*' A_ikl y*
       = Σ_i Σ_{k≤l} λ_ikl y*' T_α (V_i ⊗ E_kl - P_α (V_i ⊗ E_kl) P_α') T_α y*
       = Σ_i y*' T_α (V_i ⊗ Λ_i - P_α (V_i ⊗ Λ_i) P_α') T_α y*,    where (Λ_i)_kl = λ_ikl,
       = Σ_i y*' Ã_i y*

where (λ_ikl) is any solution of

    (2 - δ_kl) c_i(kl) = Σ_i' Σ_{k'≤l'} λ_i'k'l' tr(A_i'k'l' (V_i ⊗ E_kl))
                       = Σ_i' tr(A_ikl (V_i' ⊗ Λ_i')).
Theorem (3.4.2):

Let F_α = Σ_i F_i ⊗ T_i, V_α = Σ_i V_i ⊗ T_i, and V_N = Σ_i V_i ⊗ T̃_i, where the matrices T̃_1,...,T̃_q are the unique solution of the matrix equations

    Σ_i' tr((F_i' ⊗ T̃_i') F_α⁻¹ (F_i ⊗ E_kl) F_α⁻¹) = (2 - δ_kl) c_i(kl),    i = 1,...,q,  1 ≤ k ≤ l ≤ p,    (3.4.2)

and let R_α = V_α⁻¹(I - P_α).

(i) The MINQE of Σ_i tr(C_i Θ_i) is y*' Ã y* where

    Ã = (V_α + XX' ⊗ I)⁻¹ V_N (V_α + XX' ⊗ I)⁻¹.    (3.4.3)

(ii) The MINQE(0) of Σ_i tr(C_i Θ_i) is y*' Ã y* where

    Ã = (V_α + XX' ⊗ I)⁻¹ (V_N - P_α V_N P_α') (V_α + XX' ⊗ I)⁻¹.    (3.4.4)

(iii) The MINQE(I) of Σ_i tr(C_i Θ_i) is y' Ã y where

    Ã = R_α' V_N R_α.    (3.4.5)

Proof:

Employing Theorem (2.5.2) with F_ikl = F_i ⊗ E_kl, F_ikl* = F_α^(-1/2) F_ikl F_α^(-1/2), and U* = (U ⊗ I) F_α^(1/2),

    N = Σ_i Σ_{k≤l} u_ikl F_α^(-1/2) (F_i ⊗ E_kl) F_α^(-1/2).

Therefore, with T̃_i = Σ_{k≤l} u_ikl E_kl,

    V_N = Σ_i U F_i U' ⊗ T̃_i = Σ_i V_i ⊗ T̃_i.

Finally, u_ikl is defined by

    (2 - δ_kl) c_i(kl) = Σ_i' Σ_{k'≤l'} u_i'k'l' tr((F_i' ⊗ E_k'l') F_α⁻¹ (F_i ⊗ E_kl) F_α⁻¹)
                       = Σ_i' tr((F_i' ⊗ T̃_i') F_α⁻¹ (F_i ⊗ E_kl) F_α⁻¹).
4. MINIMUM NORM ESTIMATION GIVEN SPECIAL STRUCTURE OF THE DISPERSION MATRIX

4.1 INTRODUCTION

In many multivariate analysis of variance models the random component ξ has dispersion matrix

    D(ξ) = Σ_i F_i ⊗ Θ_i

where the matrices F_1,...,F_q are idempotent and pairwise orthogonal; i.e., F_i F_j = δ_ij F_i. An example which illustrates this structure is the following linear model. Let

    y = X β + U e,    e = (e_1',...,e_q')',

where

    E(e_i e_j') = δ_ij I_a_i ⊗ Θ_i

and I_a_i is the a_i x a_i identity matrix. Then

    D(e) = Σ_i F_i ⊗ Θ_i

where F_i is the block diagonal matrix whose j-th diagonal block is δ_ij I_a_j, j = 1,...,q. Then

    F_i F_i' = diag(δ_ij I_a_j) diag(δ_i'j I_a_j) = diag(δ_ij δ_i'j I_a_j) = δ_ii' F_i.

Given this special structure the computation of the MINQE and MINQE(I) estimates of Θ_i is greatly simplified. But more importantly, the MINQE and MINQE(I) estimates of Θ_i are positive definite and non-negative definite, respectively.
4.2 MINQE and MINQE(I) ESTIMATION

In this section F_α is assumed to have the form

    F_α = Σ_i F_i ⊗ T_i > 0,    with F_i F_j = δ_ij F_i,    i,j = 1,...,q.
Lemma (4.2.1):

Let F_1,...,F_q be pairwise orthogonal idempotent matrices of order s. If there are symmetric matrices T_1,...,T_q of order p such that Σ_i F_i ⊗ T_i > 0, then Σ_i F_i = I and the rank of T_i is p, i = 1,...,q.

Proof:

The matrices F_i ⊗ T_i, i = 1,...,q, are pairwise orthogonal since

    (F_i ⊗ T_i)(F_j ⊗ T_j) = F_i F_j ⊗ T_i T_j = δ_ij F_i ⊗ T_i².

Therefore,

    rank(Σ_i F_i ⊗ T_i) = Σ_i rank(F_i ⊗ T_i) = Σ_i rank(F_i) rank(T_i).

Now rank(T_i) ≤ p, so

    Σ_i rank(F_i) rank(T_i) ≤ p Σ_i rank(F_i),    (4.2.1)

and since Σ_i F_i ⊗ T_i > 0,

    sp = rank(Σ_i F_i ⊗ T_i).    (4.2.2)

Combining (4.2.1) and (4.2.2),

    s ≤ Σ_i rank(F_i) = rank(Σ_i F_i) ≤ s.

Hence rank(Σ_i F_i) = s. This implies that the projection matrix onto the column space of Σ_i F_i is the identity matrix; i.e., since Σ_i F_i is symmetric and idempotent,

    I = (Σ_i F_i)((Σ_i F_i)'(Σ_i F_i))⁻(Σ_i F_i)' = Σ_i F_i.

Finally, rank(T_i) = p for all i, since if rank(T_i) < p for some i then sp = rank(Σ_i F_i ⊗ T_i) = Σ_i rank(F_i) rank(T_i) < sp. This is a contradiction; hence, T_i is non-singular, i = 1,...,q.
Lemma (4.2.2):

If F_1,...,F_q are pairwise orthogonal idempotent matrices and Σ_i F_i ⊗ T_i > 0, then

    (Σ_i F_i ⊗ T_i)⁻¹ = Σ_i F_i ⊗ T_i⁻¹.

Proof:

By Lemma (4.2.1), Σ_i F_i = I and T_i > 0, i = 1,...,q. Then

    (Σ_i F_i ⊗ T_i)(Σ_i F_i ⊗ T_i⁻¹) = Σ_i F_i ⊗ I_p = I.
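The inversion formula of Lemma (4.2.2) is easy to confirm numerically for a small pair of complementary orthogonal projections; the sketch below (arbitrary sizes and seed) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 3
# pairwise orthogonal idempotents summing to the identity
F1 = np.diag([1.0, 1.0, 0.0, 0.0])
F2 = np.diag([0.0, 0.0, 1.0, 1.0])

def pd(p):
    A = rng.standard_normal((p, p))
    return A @ A.T + np.eye(p)   # positive definite

T1, T2 = pd(p), pd(p)

Fa = np.kron(F1, T1) + np.kron(F2, T2)
Fa_inv_formula = np.kron(F1, np.linalg.inv(T1)) + np.kron(F2, np.linalg.inv(T2))
ok_inv = bool(np.allclose(Fa @ Fa_inv_formula, np.eye(4 * p)))
```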
Lemma (4.2.3):

Let A and B be arbitrary n x n non-negative definite matrices. Then it is well known that

(i) A ⊗ B ≥ 0,

(ii) tr(AB) ≥ 0, with strict inequality if A ≠ 0 and B > 0.
Lemma (4.2.4):

Let V_N = Σ_i V_i ⊗ T̃_i where the matrices T̃_1,...,T̃_q are the unique solution of the matrix equations

    Σ_i' tr((F_i' ⊗ T̃_i') F_α⁻¹ (F_i ⊗ E_kl) F_α⁻¹) = (2 - δ_kl) c_i(kl),    i = 1,...,q,  1 ≤ k ≤ l ≤ p.    (4.2.3)

If F_i F_j = δ_ij F_i then T̃_i = (tr F_i)⁻¹ T_i C_i T_i, where Σ_i F_i ⊗ T_i = F_α.

Proof:

F_α = Σ_i F_i ⊗ T_i; therefore, F_α⁻¹ = Σ_i F_i ⊗ T_i⁻¹ by Lemma (4.2.2). Let T_i⁻¹ = S_i. Using the pairwise orthogonality of the F_i, the defining equation (4.2.3) collapses to

    tr(F_i) tr(T̃_i S_i E_kl S_i) = (2 - δ_kl) c_i(kl),    i = 1,...,q,  1 ≤ k ≤ l ≤ p.    (4.2.4)

Since tr(S_i T̃_i S_i E_kl) = (2 - δ_kl)(S_i T̃_i S_i)_kl, (4.2.4) is equivalent to

    C_i = tr(F_i) S_i T̃_i S_i.

Solving for T̃_i,

    T̃_i = tr(F_i)⁻¹ T_i C_i T_i,    i = 1,...,q.
Theorem (4.2.5):

Let V_N = Σ_i V_i ⊗ T̃_i as defined in (4.2.3). The MINQE estimate of Θ_i, i = 1,...,q, is positive definite. Similarly, the MINQE(I) estimate of Θ_i, i = 1,...,q, is non-negative definite.

Proof:

Consider C_i = ½(1 + δ_kl) E_kl, with C_i' = 0 for i' ≠ i. Then

    Σ_i' tr(C_i' Θ_i') = tr(½(1 + δ_kl) E_kl Θ_i)
        = ½(1 + δ_kl)((1 - δ_kl) θ_i(lk) + θ_i(kl))
        = ½(1 + δ_kl)(2 - δ_kl) θ_i(kl)
        = θ_i(kl).    (4.2.5)

Hence in estimating θ_i(kl),

    V_N = Σ_i' V_i' ⊗ (tr F_i')⁻¹ T_i' C_i' T_i'    using Lemma (4.2.4)
        = ½(1 + δ_kl)(tr F_i)⁻¹ V_i ⊗ T_i E_kl T_i    using (4.2.5).    (4.2.6)

Now the MINQE estimate of θ_i(kl) is

    y*' (V_α + XX' ⊗ I)⁻¹ V_N (V_α + XX' ⊗ I)⁻¹ y*

and the MINQE(I) estimate of θ_i(kl) is y' R_α V_N R_α y, where V_N is defined in (4.2.6). Both cases may be represented by

    θ̂_i(kl) = z' A V_N A z

where for MINQE, z = y*, A = (V_α + XX' ⊗ I)⁻¹ > 0, and for MINQE(I), z = y, A = R_α ≥ 0.

Let x = (x_1,...,x_p)' ≠ 0; then x' Θ̂_i x ≥ 0. To see this,

    x' Θ̂_i x = Σ_kl x_k x_l θ̂_i(kl)
             = Σ_kl x_k x_l z' A (½(1 + δ_kl)(tr F_i)⁻¹ (V_i ⊗ T_i E_kl T_i)) A z
             = ½(tr F_i)⁻¹ z' A (Σ_kl x_k x_l (1 + δ_kl)(V_i ⊗ T_i E_kl T_i)) A z
             = (tr F_i)⁻¹ z' A (V_i ⊗ T_i x x' T_i) A z
             = (tr F_i)⁻¹ tr((V_i ⊗ T_i x x' T_i) A z z' A),

since Σ_kl x_k x_l (1 + δ_kl) E_kl = 2 x x'. Now tr F_i = rank(F_i) > 0 since F_i is idempotent. Using Lemma (4.2.3)(i), V_i ⊗ T_i x x' T_i ≥ 0. Finally, A z z' A ≥ 0, with strict inequality if z ≠ 0 and A > 0. Hence, using Lemma (4.2.3)(ii), the MINQE estimate of Θ_i is positive definite, while the MINQE(I) estimate is non-negative definite.
5. MINIMUM NORM QUADRATIC ESTIMATION IN THE MULTIVARIATE ONE-WAY RANDOM MODEL

5.1 Introduction

Minimum norm quadratic estimation for the multivariate one-way random model y_ijk = μ_k + a_ik + u_ijk, i = 1,...,g, j = 1,...,n_i, k = 1,...,p, is discussed in this chapter. If g > 1 and n_i > 1 for at least one i, then all parametric forms tr(C_1 Θ_1 + C_2 Θ_2) are shown to be MINQE(U,I)-estimable. Given T_1 a non-negative definite estimate of Θ_1 and T_2 a positive definite estimate of Θ_2, the MINQE(U,I) defining equations are expressed in terms of the group sizes n_i, T_2⁻¹, and the eigenvectors and eigenvalues of T_2^(-1/2) T_1 T_2^(-1/2). In section 5.6 the special case in which the prior estimates T_1 and T_2 are diagonal matrices is considered, and explicit expressions for the MINQE(U,I) estimates of θ_1(kl), θ_2(kl), 1 ≤ k ≤ l ≤ p, are given. Finally, in section 5.7 the MINQE(I) and MINQE estimates of Θ_1 and Θ_2 are developed.
5.2 The Multivariate One-way Model

In the sections which follow the notation below will be useful.

Σ⁺_{i=1}^g A_i is the block diagonal matrix diag(A_1,...,A_g).

j_n is the n-vector whose elements are all equal to 1.

J_{n_i,n_j} is the n_i x n_j matrix whose elements are all 1; J_n = J_{n,n}.
The multivariate one-way model is written

    y_ijk = μ_k + a_ik + u_ijk,    i = 1,...,g,  j = 1,...,n_i,  k = 1,...,p,    (5.2.1)

where μ_k is an unknown fixed effect,

    E(a_ik) = 0 = E(u_ijk),
    E(a_ik a_i'k') = δ_ii' θ_1(kk'),  and
    E(u_ijk u_i'j'k') = δ_ii' δ_jj' θ_2(kk').

It is also assumed that n_i > 1 for at least one i = 1,...,g. Let n. = Σ_i n_i, μ = (μ_1,...,μ_p)', A the g x p matrix with (i,k) element a_ik, and E_0 the n. x p matrix with rows u_ij' = (u_ij1,...,u_ijp). Model (5.2.1) may be written in matrix form as

    Y = j_n. μ' + (Σ⁺_i j_n_i) A + E_0 = X μ' + U Ξ    (5.2.2)

where X = j_n., U = [Σ⁺_i j_n_i , I_n.], and Ξ = [A ; E_0]. Taking the vec of the transpose of both sides of (5.2.2) yields an equivalent univariate expression,

    y = vec(Y') = (j_n. ⊗ I_p) μ + (U ⊗ I_p) ξ    (5.2.3)

where ξ = (a_11, a_12,...,a_gp, u_111, u_112,...,u_gn_gp)'. The random variable ξ has zero expectation and dispersion matrix

    D(ξ) = F_1 ⊗ Θ_1 + F_2 ⊗ Θ_2,    F_1 = diag(I_g, 0),  F_2 = diag(0, I_n.).    (5.2.4)

Therefore,

    D(y) = (U ⊗ I_p)(F_1 ⊗ Θ_1 + F_2 ⊗ Θ_2)(U' ⊗ I_p)
         = U F_1 U' ⊗ Θ_1 + U F_2 U' ⊗ Θ_2 = V_1 ⊗ Θ_1 + V_2 ⊗ Θ_2.    (5.2.5)

V_1 and V_2 are calculated using (5.2.2) and (5.2.4):

    V_1 = Σ⁺_i J_n_i  and  V_2 = I_n. .

Hence given T_1 and T_2 as prior estimates of Θ_1 and Θ_2,

    V_α = Σ⁺_i J_n_i ⊗ T_1 + I_n. ⊗ T_2 = Σ⁺_i (J_n_i ⊗ T_1 + I_n_i ⊗ T_2).    (5.2.6)

Model (5.2.3) and dispersion matrix (5.2.6) are in a form such that Theorem (3.3.1) may be applied to obtain the MINQE(U,I) estimate of the parametric form tr(C_1 Θ_1 + C_2 Θ_2).
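The Kronecker representation (5.2.5) of D(y) can be checked against a direct element-by-element construction from the model's covariance assumptions; the following sketch (small arbitrary design, NumPy) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
g, p = 2, 2
n = [2, 3]
ndot = sum(n)

def nnd(p):
    A = rng.standard_normal((p, p))
    return A @ A.T   # non-negative definite

Th1, Th2 = nnd(p), nnd(p)

# V1 = diag(J_{n_1},...,J_{n_g}), V2 = I_{n.}
V1 = np.zeros((ndot, ndot))
r = 0
for ni in n:
    V1[r:r + ni, r:r + ni] = np.ones((ni, ni))
    r += ni
V2 = np.eye(ndot)

Dy = np.kron(V1, Th1) + np.kron(V2, Th2)

# direct construction: Cov(y_ij, y_i'j') = delta_ii' Theta_1 + delta_ii' delta_jj' Theta_2
groups = [i for i, ni in enumerate(n) for _ in range(ni)]
Dy_direct = np.zeros_like(Dy)
for a in range(ndot):
    for b in range(ndot):
        blk = np.zeros((p, p))
        if groups[a] == groups[b]:
            blk += Th1
        if a == b:
            blk += Th2
        Dy_direct[a * p:(a + 1) * p, b * p:(b + 1) * p] = blk
ok_dy = bool(np.allclose(Dy, Dy_direct))
```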
5.3 Identifiability and Estimability

An important assumption of Theorem (3.3.1) was that Θ_1 and Θ_2 be identifiable in distribution, i.e., that V_1 and V_2 be linearly independent. To verify this for model (5.2.3), suppose there are scalars c_1 and c_2 such that c_1 V_1 + c_2 V_2 = 0. Then c_1 J_n_i + c_2 I_n_i = 0 for i = 1,...,g. If n_i > 1 for at least one i, then c_1 = c_2 = 0. At least one n_i > 1 is a necessary and sufficient condition for the linear independence of V_1 and V_2.
The question of estimability is considered in the following theorem.

Theorem (5.3.1):

If g > 1 and there is at least one i such that n_i > 1, then all parametric functions tr(C_1 Θ_1 + C_2 Θ_2) have MINQE(U,I) estimates.

Proof:

All parametric functions have MINQE(U,I) estimates if S_M is non-singular. Now S_M = (tr(M V_i M V_j))_{i,j=1,2}, and S_M is non-singular if and only if the matrices M V_1 M and M V_2 M are linearly independent. Here

    M = I - X(X'X)⁻X' = I - j_n.(j_n.' j_n.)⁻ j_n.' = I_n. - n.⁻¹ J_n. .

The (i,j) submatrix of M V_2 M is

    [M V_2 M]_ij = δ_ij I_n_i - n.⁻¹ J_{n_i,n_j}.    (5.3.1)

The (i,j) submatrix of M V_1 M is

    [M V_1 M]_ij = Σ_kl [M]_ik [V_1]_kl [M]_lj
                 = δ_ij J_n_i - n.⁻¹ (n_i + n_j) J_{n_i,n_j} + n.⁻² (Σ_k n_k²) J_{n_i,n_j}.    (5.3.2)

Now suppose there are scalars c_1 and c_2 such that c_1 M V_1 M + c_2 M V_2 M = 0. Then

    c_1 [M V_1 M]_ij + c_2 [M V_2 M]_ij = 0,    i,j = 1,...,g.    (5.3.3)

Let n_i > 1. Substituting (5.3.1) and (5.3.2) into (5.3.3), for i = j,

    c_2 I_n_i + (c_1 (1 - 2 n_i n.⁻¹ + n.⁻² Σ_k n_k²) - c_2 n.⁻¹) J_n_i = 0,    (5.3.4)

and for i ≠ j,

    (c_1 (n.⁻² Σ_k n_k² - (n_i + n_j) n.⁻¹) - c_2 n.⁻¹) J_{n_i,n_j} = 0.    (5.3.5)

From (5.3.4), since n_i > 1 makes I_n_i and J_n_i linearly independent, c_2 = 0. Then using (5.3.5), c_1 = 0. Hence M V_1 M and M V_2 M are linearly independent and S_M is non-singular.
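For a concrete small design, the non-singularity of S_M asserted in Theorem (5.3.1) can be checked directly; the sketch below (g = 2, group sizes 2 and 3, chosen arbitrarily) is illustrative only.

```python
import numpy as np

n = [2, 3]
ndot = sum(n)

# V1 = diag(J_{n_1}, J_{n_2}), V2 = I, M = I - J/n.
V1 = np.zeros((ndot, ndot))
r = 0
for ni in n:
    V1[r:r + ni, r:r + ni] = np.ones((ni, ni))
    r += ni
V2 = np.eye(ndot)
M = np.eye(ndot) - np.ones((ndot, ndot)) / ndot

SM = np.array([[np.trace(M @ Vi @ M @ Vj) for Vj in (V1, V2)] for Vi in (V1, V2)])
det_SM = float(np.linalg.det(SM))
ok_nonsingular = abs(det_SM) > 1e-8
```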
5.4 Preliminary Results

The following results are used in computing the MINQE(U,I) defining equations.

Theorem (5.4.1): Rao (1965)

Let A and B be arbitrary square matrices of order p. Then

    det(J_n ⊗ B + I_n ⊗ (A - B)) = (det(A - B))^(n-1) det(A + (n-1)B).    (5.4.1)
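Rao's determinant identity (5.4.1) is easy to confirm numerically; the sketch below (arbitrary n, p, and seed) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 4, 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, p))

# det(J_n (x) B + I_n (x) (A - B)) = det(A - B)^(n-1) det(A + (n-1)B)
lhs = np.linalg.det(np.kron(np.ones((n, n)), B) + np.kron(np.eye(n), A - B))
rhs = np.linalg.det(A - B) ** (n - 1) * np.linalg.det(A + (n - 1) * B)
ok_det = bool(np.isclose(lhs, rhs, rtol=1e-6))
```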
Lemma (5.4.2):

Let K = T_2⁻¹ T_1, with T_1 and T_2 real symmetric p x p matrices, T_1 ≥ 0, and T_2⁻¹ = LL'. Let x_1,...,x_p be an orthonormal set of eigenvectors of L'T_1L with corresponding eigenvalues λ_1,...,λ_p. Then

    K = L (Σ_{s=1}^p λ_s x_s x_s') L⁻¹.

Proof:

The existence of the set of p orthonormal eigenvectors of L'T_1L is guaranteed by the fact that L'T_1L is real symmetric and, hence, unitarily similar to the diagonal matrix of its eigenvalues Λ; i.e.,

    L'T_1L = X Λ X'  where  X'X = I,  L'T_1L X = X Λ.    (5.4.2)

The columns of X = (x_1,...,x_p) constitute an orthonormal set of eigenvectors of L'T_1L. Now

    K = LL'T_1 = L (L'T_1L) L⁻¹ = L X Λ X' L⁻¹ = L (Σ_s λ_s x_s x_s') L⁻¹    using (5.4.2).
Lemma (5.4.3):

As in Lemma (5.4.2), let K = T_2⁻¹ T_1 = LL'T_1 and let x_1,...,x_p be an orthonormal set of eigenvectors of L'T_1L with corresponding eigenvalues λ_1,...,λ_p. Let W = I + nK, n > 0. Then

    W⁻¹ = L (Σ_{s=1}^p x_s x_s' (1 + n λ_s)⁻¹) L⁻¹.    (5.4.3)

Proof:

Note that since x_1,...,x_p are orthonormal vectors, Σ_s x_s x_s' = XX' = I. Also, since L'T_1L ≥ 0, λ_s ≥ 0 and 1 + n λ_s > 0. Therefore,

    I + nK = L (Σ_s x_s x_s') L⁻¹ + n L (Σ_s λ_s x_s x_s') L⁻¹    using Lemma (5.4.2)
           = L (Σ_s x_s x_s' (1 + n λ_s)) L⁻¹.

Hence

    W W⁻¹ = L (Σ_s x_s x_s' (1 + n λ_s)) (Σ_s x_s x_s' (1 + n λ_s)⁻¹) L⁻¹ = L (Σ_s x_s x_s') L⁻¹ = I.
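Lemma (5.4.3)'s inversion formula can be confirmed numerically by building W⁻¹ from the eigendecomposition of L'T_1L; the dimensions and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
p, nn = 3, 5.0
A0 = rng.standard_normal((p, p)); T1 = A0 @ A0.T              # T1 >= 0
A1 = rng.standard_normal((p, p)); T2 = A1 @ A1.T + np.eye(p)  # T2 > 0
L = np.linalg.cholesky(np.linalg.inv(T2))                     # T2^(-1) = L L'
lam, Xv = np.linalg.eigh(L.T @ T1 @ L)                        # L'T1L = X diag(lam) X'

K = np.linalg.inv(T2) @ T1
W = np.eye(p) + nn * K
# W^(-1) = L (sum_s x_s x_s' (1 + n lam_s)^(-1)) L^(-1)
Winv = L @ (Xv * (1.0 / (1.0 + nn * lam))) @ Xv.T @ np.linalg.inv(L)
ok_winv = bool(np.allclose(W @ Winv, np.eye(p)))
```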
5.5 The MINQE(U,I) Equations

Theorem (3.3.1) required prior estimates of the variance-covariance components to be such that V_α + XX' ⊗ I is non-singular. In what follows, more restrictive assumptions will be made, i.e., T_2 positive definite and T_1 non-negative definite. These are sufficient to insure that V_α is non-singular, in which case

    R_α = (M V_α M)_m⁺ = V_α⁻¹ - V_α⁻¹ X (X' V_α⁻¹ X)⁻ X' V_α⁻¹

and the MINQE(U,I) defining equations may be expressed in terms of the eigenvalues and eigenvectors of T_2^(-1/2) T_1 T_2^(-1/2), T_2⁻¹, and the group sizes n_i.
Recall from Theorem (3.3.1) that the MINQE(U,I) estimate of tr(C_1 Θ_1 + C_2 Θ_2) is Σ_i y' R_α (V_i ⊗ Λ_i) R_α y, where Λ_1 and Λ_2 solve

    (2 - δ_kl) c_i(kl) = Σ_{i'=1}^2 tr(R_α (V_i' ⊗ Λ_i') R_α (V_i ⊗ E_kl))
        = Σ_{k'≤l'} λ_1(k'l') tr(R_α (V_1 ⊗ E_k'l') R_α (V_i ⊗ E_kl))
        + Σ_{k'≤l'} λ_2(k'l') tr(R_α (V_2 ⊗ E_k'l') R_α (V_i ⊗ E_kl)).

In Theorem (5.5.1) and Theorem (5.5.2), expressions for V_α⁻¹ and (X'V_α⁻¹X)⁻¹, respectively, are derived in terms of the quantities above. In Theorem (5.5.3), R_α is derived. The elements s_{ikl,i'k'l'} = tr(R_α V_ikl R_α V_i'k'l') are developed in Theorem (5.5.4), and the matrix S_(MV_αM)_m⁺ is presented in Theorem (5.5.5). Finally, in Theorem (5.5.6) the quadratic forms y' R_α V_ikl R_α y, i = 1,2, 1 ≤ k ≤ l ≤ p, are derived and the results from the previous theorems are summarized.
Theorem (5.5.1):

Let T_1 and T_2 be non-negative definite prior estimates of Θ_1 and Θ_2, respectively. Then

    V_α = Σ⁺_{i=1}^g (J_n_i ⊗ T_1 + I_n_i ⊗ T_2)

is non-singular if and only if T_2 is non-singular, in which case

    V_α⁻¹ = Σ⁺_{i=1}^g (M_n_i ⊗ T_2⁻¹ + n_i⁻¹ J_n_i ⊗ K_i T_2⁻¹)    (5.5.1)

where M_n_i = I_n_i - n_i⁻¹ J_n_i, K = T_2⁻¹ T_1, and K_i = (I + n_i K)⁻¹, i = 1,...,g.

Proof:

V_α is non-singular if and only if J_n_i ⊗ T_1 + I_n_i ⊗ T_2 is non-singular, i = 1,...,g. Using Theorem (5.4.1) with B = T_1 and A = T_2 + T_1, J_n_i ⊗ T_1 + I_n_i ⊗ T_2 is non-singular if and only if A - B = T_2 and A + (n_i - 1)B = T_2 + n_i T_1 are non-singular. Now T_2 and T_2 + n_i T_1 are non-singular if and only if T_2 and I + n_i T_2⁻¹ T_1 = I + n_i K are non-singular. Furthermore, from Lemma (5.4.3), T_2 > 0 and T_1 ≥ 0 are sufficient for the non-singularity of I + n_i K.

Verifying (5.5.1), first note that K K_i = n_i⁻¹ (I - K_i), so that T_1 K_i T_2⁻¹ = T_2 K K_i T_2⁻¹ = n_i⁻¹ T_2 (I - K_i) T_2⁻¹. Then, since J_n_i M_n_i = 0 and J_n_i² = n_i J_n_i,

    (J_n_i ⊗ T_1 + I_n_i ⊗ T_2)(M_n_i ⊗ T_2⁻¹ + n_i⁻¹ J_n_i ⊗ K_i T_2⁻¹)
        = M_n_i ⊗ I + J_n_i ⊗ (T_1 K_i T_2⁻¹ + n_i⁻¹ T_2 K_i T_2⁻¹)
        = M_n_i ⊗ I + n_i⁻¹ J_n_i ⊗ I = I_n_i ⊗ I_p.    (5.5.2)
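The blockwise inverse (5.5.1) admits a direct numerical check on a single group block; the sketch below (arbitrary n_i, p, and seed) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(8)
p, ni = 3, 4
A0 = rng.standard_normal((p, p)); T1 = A0 @ A0.T              # T1 >= 0
A1 = rng.standard_normal((p, p)); T2 = A1 @ A1.T + np.eye(p)  # T2 > 0

J = np.ones((ni, ni))
I = np.eye(ni)
Mn = I - J / ni
K = np.linalg.inv(T2) @ T1
Ki = np.linalg.inv(np.eye(p) + ni * K)

block = np.kron(J, T1) + np.kron(I, T2)
block_inv = np.kron(Mn, np.linalg.inv(T2)) + np.kron(J / ni, Ki @ np.linalg.inv(T2))
ok_block = bool(np.allclose(block @ block_inv, np.eye(ni * p)))
```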
Theorem (5.5.2):

Let T_2⁻¹ = LL', T̄ = L'T_1L, and let x_1,...,x_p be an orthonormal set of eigenvectors of T̄ with corresponding eigenvalues λ_1,...,λ_p. Let

    G_s = x_s x_s',    t(λ_s) = Σ_{i=1}^g n_i (1 + n_i λ_s)⁻¹,    g_s = (t(λ_s))⁻¹,    s = 1,...,p,    (5.5.3)

and

    G = Σ_{s=1}^p g_s G_s.    (5.5.4)

Then

    (X' V_α⁻¹ X)⁻¹ = (L⁻¹)' G L⁻¹.    (5.5.5)

Proof:

    X' V_α⁻¹ X = (j_n. ⊗ I_p)' Σ⁺_i (M_n_i ⊗ T_2⁻¹ + n_i⁻¹ J_n_i ⊗ K_i T_2⁻¹) (j_n. ⊗ I_p)
               = Σ_i (j_n_i' M_n_i j_n_i) T_2⁻¹ + Σ_i n_i⁻¹ (j_n_i' J_n_i j_n_i) K_i T_2⁻¹
               = Σ_i n_i K_i T_2⁻¹    since j_n_i' M_n_i j_n_i = 0 and j_n_i' J_n_i j_n_i = n_i²
               = Σ_i n_i L (Σ_s G_s (1 + n_i λ_s)⁻¹) L'    using Lemma (5.4.3)
               = L (Σ_s G_s t(λ_s)) L'.

Hence

    (X' V_α⁻¹ X)⁻¹ = (L')⁻¹ (Σ_s G_s t(λ_s))⁻¹ L⁻¹ = (L')⁻¹ (Σ_s g_s G_s) L⁻¹ = (L⁻¹)' G L⁻¹.
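Formula (5.5.5) can be confirmed numerically against a brute-force computation of (X'V_α⁻¹X)⁻¹; the sketch below (arbitrary small design and seed) is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(9)
p = 2
n = [2, 3, 4]
ndot = sum(n)
A0 = rng.standard_normal((p, p)); T1 = A0 @ A0.T
A1 = rng.standard_normal((p, p)); T2 = A1 @ A1.T + np.eye(p)

# V_alpha = blockdiag_i (J_{n_i} (x) T1 + I_{n_i} (x) T2), X = j_{n.} (x) I_p
Va = np.zeros((ndot * p, ndot * p))
r = 0
for ni in n:
    blk = np.kron(np.ones((ni, ni)), T1) + np.kron(np.eye(ni), T2)
    Va[r:r + ni * p, r:r + ni * p] = blk
    r += ni * p
X = np.kron(np.ones((ndot, 1)), np.eye(p))

lhs = np.linalg.inv(X.T @ np.linalg.inv(Va) @ X)

# closed form: (X'Va^-1 X)^-1 = (L^-1)' G L^-1
L = np.linalg.cholesky(np.linalg.inv(T2))
lam, Xv = np.linalg.eigh(L.T @ T1 @ L)
gs = 1.0 / np.array([sum(ni / (1.0 + ni * ls) for ni in n) for ls in lam])
G = (Xv * gs) @ Xv.T
Linv = np.linalg.inv(L)
rhs = Linv.T @ G @ Linv
ok_xvx = bool(np.allclose(lhs, rhs))
```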
Theorem (5.5.3):

Let R_α = (M V_α M)_m⁺ = V_α⁻¹ - V_α⁻¹ X (X' V_α⁻¹ X)⁻ X' V_α⁻¹. Then the (i,j) submatrix of R_α is

    [R_α]_ij = δ_ij (M_n_i ⊗ T_2⁻¹ + n_i⁻¹ J_n_i ⊗ K_i T_2⁻¹) - J_{n_i,n_j} ⊗ K_ij    (5.5.6)

where K_ij = K_i L G L' K_j'.

Proof:

    V_α⁻¹ X = [j_n_1 ⊗ K_1 T_2⁻¹ ; ... ; j_n_g ⊗ K_g T_2⁻¹]

since M_n_i j_n_i = 0. Using this and Theorem (5.5.2), the (i,j) submatrix of V_α⁻¹ X (X' V_α⁻¹ X)⁻ X' V_α⁻¹ is

    J_{n_i,n_j} ⊗ K_i T_2⁻¹ (L⁻¹)' G L⁻¹ T_2⁻¹ K_j' = J_{n_i,n_j} ⊗ K_i L G L' K_j' = J_{n_i,n_j} ⊗ K_ij.    (5.5.7)

Hence the (i,j) submatrix of R_α is as given in (5.5.6).
Theorem (5.5.4):

Let S_(MV_αM)_m⁺ = (tr(R_α V_ikl R_α V_i'k'l')) = (s_{ikl,i'k'l'}), i,i' = 1,2, 1 ≤ k ≤ l ≤ p, and let

    A_im = δ_im K_i T_2⁻¹ - (n_i n_m)^(1/2) K_im,    i,m = 1,...,g.

Then

    s_{1kl,1k'l'} = Σ_im n_i n_m tr(A_im E_kl A_mi E_k'l'),
    s_{1kl,2k'l'} = Σ_im n_m tr(A_im E_kl A_mi E_k'l'),  and
    s_{2kl,2k'l'} = Σ_im tr(A_im E_kl A_mi E_k'l') + (n. - g) tr(T_2⁻¹ E_kl T_2⁻¹ E_k'l').

Proof:

V_1kl = Σ⁺_j (J_n_j ⊗ E_kl), so, using (5.5.6), M_n_i J_{n_i,n_j} = 0, and J_n_i J_{n_i,n_j} = n_i J_{n_i,n_j},

    [R_α V_1kl]_ij = [R_α]_ij (J_n_j ⊗ E_kl)
                   = δ_ij J_n_i ⊗ K_i T_2⁻¹ E_kl - n_j J_{n_i,n_j} ⊗ K_ij E_kl
                   = (n_j / n_i)^(1/2) J_{n_i,n_j} ⊗ A_ij E_kl.    (5.5.8)

Similarly,

    [R_α V_2kl]_ij = δ_ij M_n_i ⊗ T_2⁻¹ E_kl + (n_i n_j)^(-1/2) J_{n_i,n_j} ⊗ A_ij E_kl.    (5.5.9)

Using (5.5.8) and (5.5.9) together with tr((J_{n_i,n_j} ⊗ A)(J_{n_j,n_i} ⊗ B)) = n_i n_j tr(AB), tr(M_n_i²) = n_i - 1, and tr(M_n_i J_n_i) = 0,

    s_{2kl,2k'l'} = tr(R_α V_2kl R_α V_2k'l')
                  = Σ_im tr(A_im E_kl A_mi E_k'l') + (n. - g) tr(T_2⁻¹ E_kl T_2⁻¹ E_k'l').    (5.5.10)

In a similar manner,

    s_{1kl,2k'l'} = Σ_im n_m tr(A_im E_kl A_mi E_k'l')    (5.5.11)

and

    s_{1kl,1k'l'} = Σ_im n_i n_m tr(A_im E_kl A_mi E_k'l').    (5.5.12)
Theorem (5.5.5):

Given the notation used in Theorems (5.5.2) through (5.5.4), let

    l_s = L x_s,
    a(λ_s, n_i, n_m) = δ_im (1 + n_i λ_s)⁻¹ - (n_i n_m)^(1/2) (1 + n_i λ_s)⁻¹ g_s (1 + n_m λ_s)⁻¹,
    i,m = 1,...,g,  s = 1,...,p.

Let the matrix Q_im of order p(p+1)/2 be defined by the following equation:

    (Q_im)_{kl,k'l'} = (2 - δ_kl) Σ_ss' a(λ_s, n_i, n_m) a(λ_s', n_i, n_m)
        [ (l_s)_k (l_s)_l' (l_s')_k' (l_s')_l + (1 - δ_k'l')(l_s)_k (l_s)_k' (l_s')_l (l_s')_l' ].

Also, define T_2* by the following equation:

    (T_2*)_{kl,k'l'} = tr(T_2⁻¹ E_kl T_2⁻¹ E_k'l')
        = (2 - δ_kl) [ (T_2⁻¹)_kl' (T_2⁻¹)_k'l + (1 - δ_k'l')(T_2⁻¹)_kk' (T_2⁻¹)_ll' ].

Then

    S_(MV_αM)_m⁺ = [ S_11  S_12 ; S_12  S_22 ]

where

    S_11 = Σ_im n_i n_m Q_im,    S_12 = Σ_im n_m Q_im,    and    S_22 = Σ_im Q_im + (n. - g) T_2*.

Proof:

Using Lemma (5.4.3) and the fact that G_s G_s' = δ_ss' G_s,

    K_i T_2⁻¹ = L (Σ_s G_s (1 + n_i λ_s)⁻¹) L⁻¹ L L' = L (Σ_s G_s (1 + n_i λ_s)⁻¹) L'    (5.5.13)

and

    K_im = K_i L G L' K_m' = L (Σ_s G_s (1 + n_i λ_s)⁻¹ g_s (1 + n_m λ_s)⁻¹) L'.    (5.5.14)

Hence

    A_im = δ_im K_i T_2⁻¹ - (n_i n_m)^(1/2) K_im
         = L (Σ_s G_s a(λ_s, n_i, n_m)) L' = Σ_s a(λ_s, n_i, n_m) l_s l_s'.    (5.5.15)

From (5.5.15) it is clear that A_im is symmetric and that A_mi = A_im. Using Lemma (3.2.4),

    tr(A_im E_kl A_mi E_k'l') = tr(A_im E_kl A_im E_k'l')
        = (2 - δ_kl) [ (A_im)_kl' (A_im)_k'l + (1 - δ_k'l')(A_im)_kk' (A_im)_ll' ]
        = (Q_im)_{kl,k'l'}.    (5.5.16)

Therefore, using (5.5.12),

    (S_11)_{kl,k'l'} = Σ_im n_i n_m tr(A_im E_kl A_mi E_k'l'),

so S_11 = Σ_im n_i n_m Q_im. Similarly, using (5.5.10) and (5.5.11),

    S_12 = Σ_im n_m Q_im    and    S_22 = Σ_im Q_im + (n. - g) T_2*.
Theorem (5.5.6):

Let y_i = (y_i1',...,y_in_i')', i = 1,...,g, where y_ij' = (y_ij1,...,y_ijp). Recall from Theorem (5.5.4) that A_ij = δ_ij K_i T_2⁻¹ - (n_i n_j)^(1/2) K_ij, so that

    A_ij = Σ_s a(λ_s, n_i, n_j) L_s,

where a(λ_s, n_i, n_j) = δ_ij (1 + n_i λ_s)⁻¹ - (n_i n_j)^(1/2) (1 + n_i λ_s)⁻¹ g_s (1 + n_j λ_s)⁻¹ and L_s = l_s l_s', s = 1,...,p. Define

    y_i. = Σ_j y_ij,    the vector of sums from group i,
    ȳ_i. = n_i⁻¹ y_i.,    the vector of means from group i,
    x_i = K_i T_2⁻¹ y_i.,
    w_i = n_i Σ_j K_ij y_j.,
    z_i = x_i - w_i,    i = 1,...,g,
    t_is = T_2⁻¹ (y_is - ȳ_i.),    i = 1,...,g,  s = 1,...,n_i.

Then

(i) the MINQE(U,I) quadratic forms are (y' R_α V_1kl R_α y, y' R_α V_2kl R_α y : 1 ≤ k ≤ l ≤ p), where

    y' R_α V_1kl R_α y = (2 - δ_kl) Σ_i (z_i)_k (z_i)_l,

(ii)

    y' R_α V_2kl R_α y = (2 - δ_kl) ( Σ_is (t_is)_k (t_is)_l + Σ_i n_i⁻¹ (z_i)_k (z_i)_l ),

and

(iii) the MINQE(U,I) estimate of tr(C_1 Θ_1 + C_2 Θ_2) is

    Σ_{i=1}^g z_i' (Λ_1 + n_i⁻¹ Λ_2) z_i + Σ_is t_is' Λ_2 t_is

where the symmetric matrices Λ_1 and Λ_2 solve the MINQE(U,I) defining equations:

    S_11 vech(Λ_1) + S_12 vech(Λ_2) = ((2 - δ_kl) c_1(kl) : 1 ≤ k ≤ l ≤ p),
    S_12 vech(Λ_1) + S_22 vech(Λ_2) = ((2 - δ_kl) c_2(kl) : 1 ≤ k ≤ l ≤ p),

with S_11 = Σ_im n_i n_m Q_im, S_12 = Σ_im n_m Q_im, and S_22 = Σ_im Q_im + (n. - g) T_2* as in Theorem (5.5.5).
Proof:

Using (5.5.8), (5.5.9), and (5.5.6),

    [R_α V_1kl R_α]_ij = Σ_m [R_α V_1kl]_im [R_α]_mj
        = δ_ij J_n_i ⊗ K_i T_2⁻¹ E_kl K_i T_2⁻¹
        + J_{n_i,n_j} ⊗ ( Σ_m n_m² K_im E_kl K_mj - n_i K_i T_2⁻¹ E_kl K_ij - n_j K_ij E_kl K_j T_2⁻¹ )    (5.5.17)

and

    [R_α V_2kl R_α]_ij = δ_ij ( M_n_i ⊗ T_2⁻¹ E_kl T_2⁻¹ + n_i⁻¹ J_n_i ⊗ K_i T_2⁻¹ E_kl K_i T_2⁻¹ )
        + J_{n_i,n_j} ⊗ ( Σ_m n_m K_im E_kl K_mj - K_i T_2⁻¹ E_kl K_ij - K_ij E_kl K_j T_2⁻¹ ).    (5.5.18)

For an arbitrary p x p matrix B,

    y_i' (J_{n_i,n_j} ⊗ B) y_j = y_i.' B y_j.    (5.5.20)

and

    y_i' (M_n_i ⊗ B) y_i = Σ_s y_is' B y_is - n_i⁻¹ y_i.' B y_i. = Σ_s (y_is - ȳ_i.)' B (y_is - ȳ_i.).    (5.5.21), (5.5.22)

Using (5.5.18) and (5.5.20) through (5.5.22),

    y' R_α V_2kl R_α y = Σ_ij y_i' [R_α V_2kl R_α]_ij y_j
        = Σ_is (y_is - ȳ_i.)' T_2⁻¹ E_kl T_2⁻¹ (y_is - ȳ_i.)
        + Σ_ij y_i.' ( δ_ij n_i⁻¹ K_i T_2⁻¹ E_kl K_i T_2⁻¹ + Σ_m n_m K_im E_kl K_mj
              - K_i T_2⁻¹ E_kl K_ij - K_ij E_kl K_j T_2⁻¹ ) y_j.
        = tr(E_kl Σ_is t_is t_is') + tr(E_kl A),    (5.5.23)

where, with x_i = K_i T_2⁻¹ y_i. and w_i = n_i Σ_j K_ij y_j.,

    A = Σ_i n_i⁻¹ x_i x_i' + Σ_i n_i⁻¹ w_i w_i' - Σ_i n_i⁻¹ w_i x_i' - Σ_i n_i⁻¹ x_i w_i'
      = Σ_i n_i⁻¹ (x_i - w_i)(x_i - w_i)' = Σ_i n_i⁻¹ z_i z_i'.

Clearly A is symmetric. Continuing with (5.5.23), since tr(E_kl H) = (2 - δ_kl) H_kl for symmetric H,

    y' R_α V_2kl R_α y = (2 - δ_kl) ( Σ_is (t_is)_k (t_is)_l + Σ_i n_i⁻¹ (z_i)_k (z_i)_l ).
In a similar manner, using (5.5.17) and (5.5.20),

    y' R_α V_1kl R_α y = (2 - δ_kl) Σ_i (z_i)_k (z_i)_l.    (5.5.24)

Reconstructing z_i,

    z_i = x_i - w_i = K_i T_2⁻¹ y_i. - n_i Σ_j K_ij y_j.
        = Σ_j n_i^(1/2) n_j^(-1/2) ( δ_ij K_i T_2⁻¹ - (n_i n_j)^(1/2) K_ij ) y_j.
        = Σ_j n_i^(1/2) n_j^(-1/2) A_ij y_j.    (5.5.25)
        = n_i^(1/2) Σ_js n_j^(1/2) a(λ_s, n_i, n_j) L_s ȳ_j.
        = n_i Σ_s L_s (1 + n_i λ_s)⁻¹ ( ȳ_i. - g_s Σ_j n_j (1 + n_j λ_s)⁻¹ ȳ_j. ).    (5.5.26)

Finally, suppose (λ_1(kl), λ_2(kl) : 1 ≤ k ≤ l ≤ p)' is a solution of the MINQE(U,I) equations. Then the MINQE(U,I) estimate of tr(C_1 Θ_1 + C_2 Θ_2) is

    Σ_{i'=1}^2 Σ_{k≤l} λ_i'(kl) y' R_α V_i'kl R_α y
        = tr(Λ_1 Σ_i z_i z_i') + tr(Λ_2 Σ_i n_i⁻¹ z_i z_i') + tr(Λ_2 Σ_is t_is t_is')    using (5.5.23) and (5.5.24)
        = Σ_i z_i' (Λ_1 + n_i⁻¹ Λ_2) z_i + Σ_is t_is' Λ_2 t_is.
5.6
If T1
MINQE(U,I) Estimation with Diagonal Priors
= diag(t 1s
s=1, ... ,p) and T2 = diag(t 2s s=1, ... ,p) t 1s ) 0,
t 2s > 0 5=1, ... ,p are the prior estimates of 01 and 02 then the
MINQE(U,I) estimates of 01 (kl) and O2 (kl) l<k<l<p may be expressed
explicitly in terms of ni
Theorem (5.6.1):
Let T1 = diag(t 1s )
i=1, ... ,g, tls and t 2s s=l, ... ,p.
° s=l, ... ,p)
and T2 = diag(t 2s > 0 s=l, ... ,p) be
prior estimates of 01 and 02' respectively.
In addition, define
n·1
_
-1
y. k - n.
E y"k
, •
1 j=l
1J
71
s
12,kl
= E
n q.
im m lm,kl
l<:k<:1<:p
i ,m=1 , ... ,g
Then the MINQE(U,I) estimates of 01 (kl) and O2(kl) are
0 1 (kl)
= dkl -1(s22,kl ~ (zi)k(zi)l - s12,ki
1
(E
;
ni - 1 (z;)k
and
;=1, ... , 9
Proof:

    T_2⁻¹ = diag(t_2s⁻¹) = diag(t_2s^(-1/2)) diag(t_2s^(-1/2)) = LL'

and

    T̄ = L'T_1L = diag(t_2s^(-1/2)) diag(t_1s) diag(t_2s^(-1/2)) = diag(t_1s t_2s⁻¹).

The unit vectors e_1,...,e_p form an orthonormal set of eigenvectors of L'T_1L with corresponding eigenvalues λ_s = t_1s t_2s⁻¹, s = 1,...,p. Therefore,

    l_s = L x_s = t_2s^(-1/2) e_s  and  (l_s)_k = δ_sk t_2s^(-1/2),    s,k = 1,...,p.    (5.6.1)

Also,

    t(λ_s) = Σ_i n_i (1 + n_i t_1s t_2s⁻¹)⁻¹ = Σ_i n_i t_2s (t_2s + n_i t_1s)⁻¹ = t_2s h_s⁻¹,

so

    g_s = (t(λ_s))⁻¹ = t_2s⁻¹ h_s,    s = 1,...,p.    (5.6.2)

Now, from Theorem (5.5.5), substituting (5.6.1) into the expression for (Q_im)_{kl,k'l'},

    (Q_im)_{kl,k'l'} = (2 - δ_kl) a(λ_k, n_i, n_m) a(λ_l, n_i, n_m) t_2k⁻¹ t_2l⁻¹ ( δ_kl' δ_k'l + (1 - δ_k'l') δ_kk' δ_ll' )
                     = (2 - δ_kl) δ_kk' δ_ll' q_im,kl,    k ≤ l,  k' ≤ l',    (5.6.3)

since δ_kl' δ_k'l = δ_k'l' δ_kk' δ_ll' when k ≤ l and k' ≤ l'. Therefore Q_im = diag((2 - δ_kl) q_im,kl : 1 ≤ k ≤ l ≤ p), where

    q_im,kl = t_2k⁻¹ t_2l⁻¹ a(λ_k, n_i, n_m) a(λ_l, n_i, n_m).

Writing a(λ_k, n_i, n_m) = t_2k γ_ik ( δ_im - (n_i n_m)^(1/2) h_k γ_mk ), with γ_is = (t_2s + n_i t_1s)⁻¹,

    q_im,kl = γ_ik γ_il ( δ_im (1 - n_i h_k γ_ik - n_i h_l γ_il) + n_i n_m h_k h_l γ_mk γ_ml )

using (5.6.2).
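The closed form for q_im,kl just derived can be checked numerically against the direct definition t_2k⁻¹ t_2l⁻¹ a(λ_k, n_i, n_m) a(λ_l, n_i, n_m); the sketch below (arbitrary design and priors) is illustrative only.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(10)
g, p = 3, 3
n = np.array([2.0, 4.0, 5.0])
t1 = rng.uniform(0.5, 2.0, p)   # diagonal of T1 (>= 0)
t2 = rng.uniform(0.5, 2.0, p)   # diagonal of T2 (> 0)

lam = t1 / t2
gs = 1.0 / np.array([np.sum(n / (1.0 + n * lam[s])) for s in range(p)])
h = 1.0 / np.array([np.sum(n / (t2[s] + n * t1[s])) for s in range(p)])
gam = 1.0 / (t2[None, :] + n[:, None] * t1[None, :])   # gamma_{is} = (t_2s + n_i t_1s)^(-1)

def a(s, i, m):
    # a(lambda_s, n_i, n_m) from Theorem (5.5.5)
    return ((1.0 if i == m else 0.0) / (1.0 + n[i] * lam[s])
            - np.sqrt(n[i] * n[m]) * gs[s] / ((1.0 + n[i] * lam[s]) * (1.0 + n[m] * lam[s])))

ok_q = True
for i, m, k, l in product(range(g), range(g), range(p), range(p)):
    q_direct = a(k, i, m) * a(l, i, m) / (t2[k] * t2[l])
    dim = 1.0 if i == m else 0.0
    q_closed = gam[i, k] * gam[i, l] * (dim * (1.0 - n[i] * h[k] * gam[i, k]
                                               - n[i] * h[l] * gam[i, l])
                                        + n[i] * n[m] * h[k] * h[l] * gam[m, k] * gam[m, l])
    ok_q &= np.isclose(q_direct, q_closed)
ok_q = bool(ok_q)
```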
T * is also diagonal.
2
(T2*)kl,k'l' = (2 - t'ikl)( (T2-1)kl,(T2-1)k'l
+ (l - t'i k I 1 I
)(
T2-1) kk I (T2 -1) 11
= (2
- t'ikl)~k-1t21 -l(okl'ok"
= (2
- t'ikl)t'ikk'oll ' t.. -It -1
-lk
21
I )
+ (1 - t'ik ' 1 ')t'ikk't'il1
I
(5.6.4).
l-ok-ol-op)
Now, recall from Theorem (5.5.5) that

    S(M V_a M)^+ = \begin{bmatrix} S_{11} & S_{12} \\ S_{12} & S_{22} \end{bmatrix},        (5.6.5)

where S_{12} = \sum_{im} n_m Q_{im}. Using (5.6.3) and (5.6.4),

    S_{11} = \mathrm{diag}( (2 - \delta_{kl}) \sum_{im} n_i n_m q_{im,kl} : 1 \le k \le l \le p )
           = \mathrm{diag}( (2 - \delta_{kl}) s_{11,kl} : 1 \le k \le l \le p ),        (5.6.6)

    S_{12} = \mathrm{diag}( (2 - \delta_{kl}) \sum_{im} n_m q_{im,kl} : 1 \le k \le l \le p )
           = \mathrm{diag}( (2 - \delta_{kl}) s_{12,kl} : 1 \le k \le l \le p ),        (5.6.7)

    S_{22} = \mathrm{diag}( (2 - \delta_{kl}) \sum_{im} q_{im,kl} + (n_. - g)(2 - \delta_{kl}) t_{2k}^{-1} t_{2l}^{-1} : 1 \le k \le l \le p )
           = \mathrm{diag}( (2 - \delta_{kl}) s_{22,kl} : 1 \le k \le l \le p ).        (5.6.8)
It is a well-known result that the inverse of a matrix partitioned as in (5.6.5) can be expressed in terms of W = (S_{22} - S_{12}' S_{11}^{-1} S_{12})^{-1}. Using this and (5.6.6) through (5.6.8), and dropping the subscript range 1 \le k \le l \le p:

    [ (S(M V_a M)^+)^{-1} ]_{22} = W
        = ( S_{22} - S_{12}' S_{11}^{-1} S_{12} )^{-1}
        = \left( \mathrm{diag}\left( (2 - \delta_{kl}) s_{22,kl} - (2 - \delta_{kl})^2 s_{12,kl}^2 (2 - \delta_{kl})^{-1} s_{11,kl}^{-1} \right) \right)^{-1}
        = \left( \mathrm{diag}\left( (2 - \delta_{kl}) ( s_{22,kl} - s_{12,kl}^2 s_{11,kl}^{-1} ) \right) \right)^{-1}
        = \left( \mathrm{diag}\left( (2 - \delta_{kl}) s_{11,kl}^{-1} ( s_{11,kl} s_{22,kl} - s_{12,kl}^2 ) \right) \right)^{-1}.        (5.6.9)

Similarly, the remaining blocks of the inverse are diagonal.        (5.6.10), (5.6.11)
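The partitioned-inverse step used here can be checked numerically on any symmetric positive definite matrix; a NumPy sketch with generic blocks (not the S matrices of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3

# A generic symmetric positive definite matrix, partitioned into p x p blocks.
A = rng.standard_normal((2 * p, 2 * p))
M = A @ A.T + 2 * p * np.eye(2 * p)
S11, S12, S22 = M[:p, :p], M[:p, p:], M[p:, p:]

# The (2,2) block of M^{-1} is the inverse of the Schur complement of S11.
W = np.linalg.inv(S22 - S12.T @ np.linalg.inv(S11) @ S12)
block22 = np.linalg.inv(M)[p:, p:]
# W and block22 agree to numerical precision.
```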
Now, in estimating \theta_i(kl), \Lambda = (\Lambda_{i'k'l'}) is any solution of

    S(M V_a M)^+ \Lambda = f,    where f_{i'k'l'} = \delta_{ii'} \delta_{kk'} \delta_{ll'},        (5.6.12)

since the (i,i') submatrices of (S(M V_a M)^+)^{-1} are diagonal. Using (5.6.12) and Theorem (5.5.6), the MINQE(U,I) estimate of \theta_1(kl) is

    \hat{\theta}_1(kl) = d_{kl}^{-1} \left( s_{22,kl} \sum_i (z_i)_k (z_i)_l
        - s_{12,kl} \left( \sum_i n_i^{-1} (z_i)_k (z_i)_l + t_k t_l \right) \right)

and the MINQE(U,I) estimate of \theta_2(kl) is

    \hat{\theta}_2(kl) = d_{kl}^{-1} \left( s_{11,kl} \left( \sum_i n_i^{-1} (z_i)_k (z_i)_l + t_k t_l \right)
        - s_{12,kl} \sum_i (z_i)_k (z_i)_l \right),

where d_{kl} = s_{11,kl} s_{22,kl} - s_{12,kl}^2.
where

    z_i = n_i^{1/2} \sum_m n_m^{1/2} A_{im} \bar{y}_{m.}
        = n_i^{1/2} \sum_{m,s} n_m^{1/2} a(\lambda_s, n_i, n_m) t_{2s}^{-1} e_s \bar{y}_{m.s}.

Finally, since e_s is the s-th unit vector,

    (z_i)_k = n_i^{1/2} \sum_m n_m^{1/2} a(\lambda_k, n_i, n_m) t_{2k}^{-1} \bar{y}_{m.k},    k = 1, ..., p,
    = n_i^{1/2} \sum_m n_m^{1/2} (1 + n_i \lambda_k)^{-1}
        \left( \delta_{im} - (n_i n_m)^{1/2} t_{2k} (t_{2k} + n_m t_{1k})^{-1} t_{2k}^{-1} h_k \right) t_{2k}^{-1} \bar{y}_{m.k}
    = n_i^{1/2} \sum_m n_m^{1/2} \gamma_{ik} \left( \delta_{im} - (n_i n_m)^{1/2} \gamma_{mk} h_k \right) \bar{y}_{m.k}
    = n_i \gamma_{ik} \left( \bar{y}_{i.k} - h_k \sum_m n_m \gamma_{mk} \bar{y}_{m.k} \right).
Also,

    t_k = h_k \sum_m n_m \gamma_{mk} \bar{y}_{m.k},    k = 1, ..., p,        (5.6.13)

with (5.6.14) giving the corresponding expressions in terms of the deviations (\bar{y}_{i.s} - \bar{y}_{..s}) and (y_{isk} - \bar{y}_{i.k}).        (5.6.14)
5.7  MINQE(I) and MINQE Estimation

The development of the MINQE(U,I) defining equation in Chapter 2 involved transforming the general linear model. In the case of unbiased translation invariant estimation the minimum norm problem was min tr(A V_a A V_a) subject to AX = 0 and tr(A V_i) = f_i, i = 1, ..., p, which required only that V_a + XX' be non-singular. However, in the case of MINQE(I) and MINQE estimation the explicit involvement of the natural estimator

    N = \sum_i F_a^{-1/2} (F_i \otimes \tilde{T}_i) F_a^{-1/2}

requires prior estimates T_1 and T_2 to be such that F_a is positive definite.

In the multivariate linear one-way random model the matrices F_1 and F_2 are idempotent and pairwise orthogonal; hence F_i F_j = \delta_{ij} F_i, i, j = 1, 2, and Lemma (4.2.4) may be used to calculate the quadratic form of the natural estimator

    N = \sum_{i=1}^{2} F_a^{-1/2} (F_i \otimes \tilde{T}_i) F_a^{-1/2}.

Clearly, F_a is positive definite if and only if both T_1 and T_2 are positive definite.
Lemma (5.7.1):

Let T_1 and T_2 be positive definite prior estimates of \theta_1 and \theta_2, respectively, and let V_N = \sum_i V_i \otimes \tilde{T}_i, where \tilde{T}_i is defined in Lemma (4.2.4).

(i) If \epsilon^{*'} N \epsilon^* is the natural estimator of \theta_2(kl), then

    V_N = \tfrac{1}{2} (1 + \delta_{kl}) n_.^{-1} ( I_{n_.} \otimes T_2 E_{kl} T_2 ).

(ii) If \epsilon^{*'} N \epsilon^* is the natural estimator of \theta_1(kl), then

    V_N = \tfrac{1}{2} (1 + \delta_{kl}) g^{-1} \bigoplus_{i=1}^{g} ( J_{n_i} \otimes T_1 E_{kl} T_1 ).

Proof:

(i) As an estimate of \theta_2(kl),

    \tilde{T}_2 = (\mathrm{tr} F_2)^{-1} T_2 \left( \tfrac{1}{2} (1 + \delta_{kl}) E_{kl} \right) T_2
                = \tfrac{1}{2} (1 + \delta_{kl}) n_.^{-1} T_2 E_{kl} T_2        (5.7.1)

and \tilde{T}_1 = 0. Therefore,

    V_N = V_2 \otimes \tilde{T}_2 = \tfrac{1}{2} (1 + \delta_{kl}) n_.^{-1} ( I_{n_.} \otimes T_2 E_{kl} T_2 ).

(ii) As an estimate of \theta_1(kl),

    \tilde{T}_1 = (\mathrm{tr} F_1)^{-1} T_1 \left( \tfrac{1}{2} (1 + \delta_{kl}) E_{kl} \right) T_1
                = \tfrac{1}{2} (1 + \delta_{kl}) g^{-1} T_1 E_{kl} T_1        (5.7.2)

and \tilde{T}_2 = 0. Therefore,

    V_N = V_1 \otimes \tilde{T}_1 = \tfrac{1}{2} (1 + \delta_{kl}) g^{-1} \bigoplus_{i=1}^{g} ( J_{n_i} \otimes T_1 E_{kl} T_1 ).

Theorem (5.7.2):

Recall the notation of Theorem (5.5.6), and in particular

    x_i = K_i T_2^{-1} \bar{y}_{i.},    w_i = \sum_j n_j K_{ij} \bar{y}_{j.},    z_i = x_i - w_i,    i = 1, ..., g.

Let T_1 and T_2 be positive definite prior estimates of \theta_1 and \theta_2, respectively. Then

(i) the MINQE(I) estimate of \theta_2 is

    \hat{\theta}_2 = n_.^{-1} T_2 \left( \sum_i n_i^{-1} z_i z_i' + t t' \right) T_2,        (5.7.3)

and (ii) the MINQE(I) estimate of \theta_1 is

    \hat{\theta}_1 = g^{-1} T_1 \left( \sum_i z_i z_i' \right) T_1.        (5.7.4)

Proof of (i):

Using the expression (5.7.1) for \tilde{T}_2 and Theorem (3.4.2), the MINQE(I) estimate of \theta_2(kl) is

    \hat{\theta}_2(kl) = y' R_a V_N R_a y
        = n_.^{-1} \left( \sum_i n_i^{-1} (T_2 z_i)_k (T_2 z_i)_l + (T_2 t)_k (T_2 t)_l \right)    using (5.5.23),

since (1 + \delta_{kl})(2 - \delta_{kl}) = 2. Hence

    \hat{\theta}_2 = n_.^{-1} \left( \sum_i n_i^{-1} T_2 z_i z_i' T_2 + T_2 t t' T_2 \right)
                   = n_.^{-1} T_2 \left( \sum_i n_i^{-1} z_i z_i' + t t' \right) T_2.

The proof of (ii), using (5.5.24) and (5.7.2), is quite similar.
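Once z_i and t are in hand, (5.7.3) and (5.7.4) are plain sandwich products; a NumPy sketch in which the z_i and t are random stand-ins rather than quantities computed from data, and the priors and group sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
p, g = 2, 4
n = np.array([3, 5, 4, 2])
n_dot = n.sum()
T1 = np.diag([1.0, 2.0])           # assumed prior for theta_1
T2 = np.diag([2.0, 1.0])           # assumed prior for theta_2

z = rng.standard_normal((g, p))    # stand-ins for the z_i of Theorem (5.5.6)
t = rng.standard_normal(p)         # stand-in for t

inner2 = sum(np.outer(z[i], z[i]) / n[i] for i in range(g)) + np.outer(t, t)
theta2_hat = T2 @ inner2 @ T2 / n_dot                                    # (5.7.3)
theta1_hat = T1 @ sum(np.outer(z[i], z[i]) for i in range(g)) @ T1 / g   # (5.7.4)
# Both estimates are symmetric and nonnegative definite by construction.
```

That both estimates come out nonnegative definite regardless of the inputs illustrates the nonnegative definiteness of the MINQE(I) estimators in this special case.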
Theorem (5.7.3):

Let y^* = y - X\beta_0, where \beta_0 is a prior estimate of \beta. In addition, define

    V^* = \left( I + \sum_s t(\lambda_s) L_s \right)^{-1},

    K_{ij}^* = K_i T_2^{-1} V^* K_j T_2^{-1},
    x_i^* = K_i T_2^{-1} \bar{y}_{i.}^*,
    w_i^* = \sum_j n_j K_{ij}^* \bar{y}_{j.}^*,
    z_i^* = x_i^* - w_i^*,    i = 1, ..., g,  j = 1, ..., g,

and t^* defined as t with y replaced by y^*. Let T_1 and T_2 be positive definite estimates of \theta_1 and \theta_2, respectively. Then

(i) the MINQE estimate of \theta_2 is

    \hat{\theta}_2 = n_.^{-1} T_2 \left( \sum_i n_i^{-1} z_i^* z_i^{*'} + t^* t^{*'} \right) T_2,        (5.7.5)

and (ii) the MINQE estimate of \theta_1 is

    \hat{\theta}_1 = g^{-1} T_1 \left( \sum_i z_i^* z_i^{*'} \right) T_1.        (5.7.6)
Proof:

V_a is non-singular; therefore,

    (V_a + XX')^{-1} = V_a^{-1} - V_a^{-1} X (X' V_a^{-1} X + I)^{-1} X' V_a^{-1}.

Now

    V_a^{-1} X = \begin{bmatrix} j_{n_1} \otimes K_1 T_2^{-1} \\ \vdots \\ j_{n_g} \otimes K_g T_2^{-1} \end{bmatrix}.

Using this,

    [ V_a^{-1} X (X' V_a^{-1} X + I)^{-1} X' V_a^{-1} ]_{ij}
        = ( j_{n_i} \otimes K_i T_2^{-1} ) (1 \otimes V^*) ( j_{n_j}' \otimes K_j T_2^{-1} )
        = J_{n_i \times n_j} \otimes K_i T_2^{-1} V^* K_j T_2^{-1}
        = J_{n_i \times n_j} \otimes K_{ij}^*

and

    [ (V_a + XX')^{-1} ]_{ij}
        = \delta_{ij} \left( ( I_{n_i} - n_i^{-1} J_{n_i} ) \otimes T_2^{-1} + n_i^{-1} J_{n_i} \otimes K_i T_2^{-1} \right)
          - J_{n_i \times n_j} \otimes K_{ij}^*.

Notice the correspondence between [ (V_a + XX')^{-1} ]_{ij} and

    \delta_{ij} \left( ( I_{n_i} - n_i^{-1} J_{n_i} ) \otimes T_2^{-1} + n_i^{-1} J_{n_i} \otimes K_i T_2^{-1} \right)
          - J_{n_i \times n_j} \otimes K_{ij}.

Since (K_{ij}^*)' = K_{ji}^*, all the steps involved in calculating the MINQE estimate of \theta_i(kl),

    \hat{\theta}_i(kl) = y^{*'} (V_a + XX')^{-1} V_N (V_a + XX')^{-1} y^*,

are exactly the same as those for calculating the MINQE(I) estimate. Hence the MINQE estimates of \theta_2 and \theta_1 are

    \hat{\theta}_2 = n_.^{-1} T_2 \left( \sum_i n_i^{-1} z_i^* z_i^{*'} + t^* t^{*'} \right) T_2

and

    \hat{\theta}_1 = g^{-1} T_1 \left( \sum_i z_i^* z_i^{*'} \right) T_1.
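The inversion identity that opens the proof is the standard Woodbury-type formula; a quick NumPy check on generic matrices (sizes and seed arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2

A = rng.standard_normal((n, n))
Va = A @ A.T + n * np.eye(n)       # a nonsingular (positive definite) V_a
X = rng.standard_normal((n, m))

lhs = np.linalg.inv(Va + X @ X.T)
Vi = np.linalg.inv(Va)
rhs = Vi - Vi @ X @ np.linalg.inv(X.T @ Vi @ X + np.eye(m)) @ X.T @ Vi
# lhs and rhs agree to numerical precision.
```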
6.  SUMMARY

In this paper the problem of minimum norm quadratic estimation in the following multivariate linear model was studied:

    y_i = X \beta_i + U \epsilon_i,    E(\epsilon_i) = 0,    i = 1, ..., p,

    E(\epsilon_k \epsilon_l') = \sum_{i=1}^{q} \theta_i(kl) F_i,    k, l = 1, ..., p,

where (\theta_i) = (\theta_i(kl)), i = 1, ..., q, were the unknown p x p matrices of variance-covariance components. Several different MINQE were derived: MINQE(U,I), MINQE(U), MINQE, MINQE(I), and MINQE subject to X'AX = 0, where the letter U indicated unbiasedness and I indicated translation invariance.

The special case for which the matrices F_i were idempotent and pairwise orthogonal was considered. It was shown that MINQE and MINQE(I) estimators of the matrices \theta_1, ..., \theta_q were positive definite and non-negative definite, respectively.

Finally, estimation for the multivariate one-way random model

    y_{ijk} = \mu_k + a_{ik} + u_{ijk},    i = 1, ..., g,  j = 1, ..., n_i,  k = 1, ..., p,

was studied in detail. If g > 1 and n_i > 1 for at least one i, then all parametric forms tr(C_1 \theta_1 + C_2 \theta_2) were shown to be MINQE(U,I)-estimable. Given T_1 > 0 and T_2 > 0 as prior estimates of the among and within covariance matrices and n_1, ..., n_g, the group sizes, the MINQE(U,I) equations were expressed in terms of the group sizes, T_2^{-1}, and the eigenvectors and eigenvalues of T_2^{-1/2} T_1 T_2^{-1/2}. The special case for which T_1 and T_2 were diagonal was considered, and explicit expressions for the MINQE(U,I) estimates of \theta_1 and \theta_2 were given. Finally, the MINQE(I) and MINQE estimates of \theta_1 and \theta_2 were developed.
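The one-way random model restated above is straightforward to simulate; a short NumPy sketch (all parameter values are assumed, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

g, p = 5, 2
n = [4, 6, 3, 5, 7]                              # assumed group sizes n_i
mu = np.array([1.0, -1.0])                       # assumed mean vector
theta1 = np.array([[1.0, 0.3], [0.3, 0.5]])      # among-group covariance
theta2 = np.array([[2.0, 0.4], [0.4, 1.0]])      # within-group covariance

# y_ijk = mu_k + a_ik + u_ijk: one random effect a_i per group,
# one error u_ij per observation.
y = [mu
     + rng.multivariate_normal(np.zeros(p), theta1)
     + rng.multivariate_normal(np.zeros(p), theta2, size=ni)
     for ni in n]
# y[i] has shape (n_i, p); row j holds (y_ij1, ..., y_ijp).
```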
One area of further research is the adaptation of computational
techniques developed for univariate models to this multivariate model.
Another potentially fruitful area would be the extension of the results
for non-negative variance estimation to the case of non-negative definite covariance matrix estimation.
7.  LIST OF REFERENCES

1.  Ahrens, H. 1978. MINQE and ANOVA estimator for one-way classification--a risk comparison. Biometrical Journal 20(6):535-556.

2.  Brown, K. G. 1976. Asymptotic behavior of MINQUE-type estimators of variance components. Ann. Statist. 4:746-754.

3.  Brown, K. G. 1977. Estimation of diagonal covariance matrices by MINQUE. Commun. Statist. Theor. Methods A6(5):471-484.

4.  Chaubey, Y. P. 1980. Minimum norm quadratic estimation of variance components. Metrika 27:255-262.

5.  Chaubey, Y. P. 1982. Minimum norm quadratic estimation of a covariance matrix in linear models. Biometrical Journal 24:457-461.

6.  Corbeil, R. R., and Searle, S. R. 1976. Restricted maximum likelihood (REML) estimation of variance components in the mixed model. Technometrics 18:31-38.

7.  Crump, S. L. 1946. The estimation of variance components in analysis of variance. Biom. Bull. 2:7-11.

8.  Daniels, H. E. 1939. The estimation of components of variance. J. Roy. Statist. Soc. Suppl. 6:186-197.

9.  Giesbrecht, F. G., and Burrows, P. M. 1978. Estimating variance components in hierarchical structures using MINQUE and restricted maximum likelihood. Commun. Statist. Theor. Methods A7(9):891-904.

10. Goodnight, J., and Hemmerle, W. J. 1979. A simplified algorithm for the W-transformation in variance component estimation. Technometrics 21:265-267.

11. Hartley, H. O., and Rao, J. N. K. 1967. Maximum likelihood estimation for the mixed analysis of variance model. Biometrika 54:93-108.

12. Hartley, H. O., Rao, J. N. K., and LaMotte, L. R. 1978. A simple "synthesis"-based method of variance component estimation. Biometrics 34:233-242.

13. Hemmerle, W. J., and Hartley, H. O. 1973. Computing maximum likelihood estimates for the mixed A.O.V. model using the W-transformation. Technometrics 15:819-832.

14. Hemmerle, W. J., and Lorens, J. A. 1976. Improved algorithm for the W-transform in variance component estimation. Technometrics 18:207-211.

15. Kleffe, J. 1977a. A note on the infinity-MINQUE in variance covariance models. Math. Operationsforsch. Statist. 8:337-343.

16. Kleffe, J. 1977b. Optimal estimation of variance components--a survey. Sankhya 39B:211-244.

17. Kleffe, J. 1979. C. R. Rao's MINQUE under four two-way ANOVA models. Biometrical Journal 22(2):93-104.

18. Kleffe, J. 1980. On recent progress in MINQUE theory--non-negative estimation, consistency, asymptotic normality and explicit formulae. Math. Operationsforsch. Statist. 11:563-588.

19. Liu, L. M., and Senturia, J. 1977. Computation of MINQUE variance component estimates. J. Amer. Statist. Assoc. 72:867-868.

20. Mallela, P. 1972. Necessary and sufficient conditions for MINQU-estimability of heteroscedastic variances in linear models. J. Amer. Statist. Assoc. 67:486-487.

21. Pringle, R. M. 1974. Some results on the estimation of variance components by MINQUE. J. Amer. Statist. Assoc. 69:987-989.

22. Pukelsheim, F. 1980. On the existence of unbiased nonnegative estimates of variance covariance components. Ann. Statist. 8:293-299.

23. Pukelsheim, F. 1981. Linear models and convex geometry: aspects of non-negative variance estimation. Math. Operationsforsch. Statist. 12:271-286.

24. Pukelsheim, F., and Styan, G. P. H. 1979. Non-negative definiteness of the estimated dispersion matrix in a multivariate linear model. Bulletin de l'Academie Polonaise des Sciences 27:327-330.

25. Rao, C. R. 1970. Estimation of heteroscedastic variances in linear models. J. Amer. Statist. Assoc. 65:161-172.

26. Rao, C. R. 1971a. Estimation of variance and covariance components--MINQUE theory. J. Multivar. Anal. 1:257-275.

27. Rao, C. R. 1971b. Minimum variance quadratic unbiased estimation of variance components. J. Multivar. Anal. 1:445-456.

28. Rao, C. R. 1972. Estimation of variance and covariance components in linear models. J. Amer. Statist. Assoc. 67:112-115.

29. Rao, C. R. 1979. MINQE theory and its relation to ML and MML estimation of variance components. Sankhya 41:138-153.

30. Rao, C. R., and Kleffe, J. 1979. Estimation of variance components. Technical Report No. 79-1, Dept. of Statistics, University of Pittsburgh, Pittsburgh, Pennsylvania.

31. Rao, P. S. R. S., and Chaubey, Y. P. 1978. Three modifications of the principle of the MINQUE. Commun. Statist. Theor. Meth. 7(8):767-778.

32. Sahai, H. 1979. A bibliography on variance components. Intern. Statist. Review 12:177-222.

33. Schaeffer, L. R. 1975. Disconnectedness and variance component estimation. Biometrics 31:969-977.

34. Schmidt, W. H., and Thrum, R. 1981. Contributions to asymptotic theory in regression models with linear covariance structure. Math. Operationsforsch. Statist. 12:243-269.

35. Searle, S. R. 1968. Another look at Henderson's methods of estimating variance components. Biometrics 24:749-778.

36. Searle, S. R. 1979. Notes on variance component estimation: a detailed account of maximum likelihood and kindred methodology. Paper BU-673-M, Biometrics Unit, Cornell University, Ithaca, New York.

37. Seely, J. 1971. Quadratic subspaces and completeness. Ann. Math. Statist. 42:710-721.

38. Shah, K. R., and Puri, S. C. 1976. Application of MINQUE procedure to block designs. Commun. Statist. Theor. Methods A5(2):191-196.

39. Swallow, W. H., and Searle, S. R. 1978. Minimum variance quadratic unbiased estimation (MIVQUE) of variance components. Technometrics 20:265-272.

40. Wansbeek, T. 1980. A regression interpretation of the computation of MINQUE variance component estimates. J. Amer. Statist. Assoc. 75:375-376.