Giesbrecht, F. (1978). "Combining Experiments to Estimate Variance Components."

COMBINING EXPERIMENTS TO ESTIMATE VARIANCE COMPONENTS

by

F. Giesbrecht

Institute of Statistics Mimeograph Series No. 1182
Raleigh - July 1978
COMBINING EXPERIMENTS TO ESTIMATE VARIANCE COMPONENTS
by
F. Giesbrecht
ABSTRACT
Occasionally information from a series of balanced experiments is to be combined to provide estimates of variance components. Combining the data typically leads to an unbalanced set and consequently makes the variance component estimation problem difficult. It is shown that the Minimum Norm Quadratic Unbiased Estimation theory leads to a system of equations that can be formed by taking linear functions of the sums of squares obtained in the analyses of the separate data sets, with weights proportional to the reciprocals of the squares of the expected values of the sums of squares evaluated at the prior values for the components. Limited simulation studies indicate that the iterative scheme implied above converges rapidly and is insensitive to the choice of starting values. An intermediate result is the proof that MINQUE and the analysis of variance yield identical estimates for balanced data sets.
COMBINING EXPERIMENTS TO ESTIMATE VARIANCE COMPONENTS
F. Giesbrecht
1. INTRODUCTION
The analysis of variance method has long been accepted as the appropriate method for estimating variance components in the case of balanced data sets and the classical linear model. Under modest distributional assumptions, this method is known to yield minimum variance quadratic unbiased estimates, and if one assumes normality, then minimum variance unbiased estimates (Searle 1971, Graybill 1961). For unbalanced data sets Rao (1971a) has proposed a method which yields minimum norm quadratic unbiased estimates (MINQUE) of the variance components. If good prior information about the relative magnitudes of the variance components is available and normality holds, then Rao's (1971b) minimum variance quadratic unbiased estimation (MIVQUE) technique is available. These estimates are minimum variance in the sense that if the prior values agree with the true (but unknown) values, then the estimates have minimum variance among all unbiased quadratic estimates. Unfortunately, computing either MINQUE or MIVQUE for large data sets can be rather formidable.
Patterson and Thompson (1971) proposed the restricted maximum likelihood (REML) method for estimating variance components. Harville (1977) has shown that if the prior values used to obtain MINQUE agree with the final estimates obtained, then MINQUE and REML coincide. Giesbrecht and Burrows (1976) have shown that the REML equations can be arranged to correspond exactly with the MINQUE equations. The iterative scheme suggested by MIVQUE corresponds exactly with REML. Sriburi (1978) has shown that the REML estimates are consistent, asymptotically normal and efficient in the sense of attaining the Cramér-Rao lower bound for the covariance matrix.
The object of this paper is to obtain the MINQUE or REML equations when the data set is sufficiently balanced to permit an orthogonal partitioning of the total sum of squares. This is particularly relevant when a number of balanced but dissimilar data sets are to be combined in order to estimate variance components. The quadratic forms in these equations are shown to be weighted linear functions of the sums of squares in the separate analyses of variance. The weights are simple functions of the prior values for the variance components. If one does not iterate but uses prior values for the variance components as fixed constants, then the estimates are linear functions of orthogonal quadratic forms. Variances and covariances of the estimates can then be obtained directly. If an iterative scheme is used to obtain the REML estimates, then asymptotic variances and covariances of the estimates are available. A small Monte Carlo study indicates not only that the method tends to converge rapidly but also that (a) the answers are not seriously biased, (b) the formula for the asymptotic variances and covariances is applicable and (c) in some cases the REML estimators are much more efficient than estimators obtained by other methods.
2. THE MINQUE EQUATIONS
The linear model treated by Rao (1971a, 1971b) can be written as

\[ Y = X_0 \beta + X_1 \epsilon_1 + \cdots + X_m \epsilon_m \tag{2.1} \]

where $Y$ is the $n$-vector of observations, the $\{X_i\}$ are known $n \times c_i$ matrices, $\beta$ is a $c_0$-vector of unknown parameters and the $\{\epsilon_i\}$ are independent $c_i$-vectors of independent random variables with mean zero and variance $\sigma_i^2$ respectively. It follows that the variance-covariance matrix of $Y$ is given by

\[ V^* = \sum_{i=1}^{m} \sigma_i^2 V_i \]

where $V_i = X_i X_i'$ for $i = 1, \ldots, m$.
The MINQUE principle leads one to:

a) Compute $V = \sum_i \alpha_i V_i$, where the $\{\alpha_i\}$ are proportional to prior estimates of the unknown $\{\sigma_i^2\}$. Note that $V$ is a function of the $\{\alpha_i\}$ and that the notation should indicate as much. However, this is cumbersome and the risk of confusion resulting from the imprecise notation is minimal.

b) Compute $Q_V = V^{-1} - V^{-1} X_0 (X_0' V^{-1} X_0)^{-} X_0' V^{-1}$, where $A^{-}$ is the Moore-Penrose generalized inverse of $A$.

c) Compute the quadratic forms $Y' Q_V V_i Q_V Y$ for $i = 1, \ldots, m$.

d) Express the expected values of the quadratic forms in (c) as linear functions of the components to be estimated and solve the system of equations obtained by equating observed quadratic forms to their expected values.
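To fix ideas, the four steps can be sketched in a few lines of matrix code. The sketch below is purely illustrative and is not part of the original development: the arrays y, X0, X (a list holding X_1, ..., X_m), the prior values alpha and the function name are conveniences introduced here, with numpy's pinv playing the role of the Moore-Penrose generalized inverse.

    import numpy as np

    def minque(y, X0, X, alpha):
        # Step (a): V = sum_i alpha_i V_i with V_i = X_i X_i'.
        V = sum(a * Xi @ Xi.T for a, Xi in zip(alpha, X))
        Vinv = np.linalg.inv(V)
        # Step (b): Q_V = V^{-1} - V^{-1} X0 (X0' V^{-1} X0)^- X0' V^{-1}.
        Q = Vinv - Vinv @ X0 @ np.linalg.pinv(X0.T @ Vinv @ X0) @ X0.T @ Vinv
        # Step (c): observed quadratic forms y' Q V_i Q y.
        q = np.array([y @ Q @ Xi @ (Xi.T @ (Q @ y)) for Xi in X])
        # Step (d): since Q X0 = 0, E[y' Q V_i Q y] = sum_j tr(Q V_i Q V_j) sigma_j^2;
        # equate observed forms to expected values and solve the linear system.
        C = np.array([[np.trace(Q @ Xi @ Xi.T @ Q @ Xj @ Xj.T)
                       for Xj in X] for Xi in X])
        return np.linalg.solve(C, q)

For balanced data this reproduces the analysis of variance estimates, in line with the intermediate result noted in the abstract.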
For this paper a model with somewhat more structure than that implied by (2.1) is required. In particular, it is assumed that

\[ X_i = (X_{i1} \; \cdots \; X_{is_i}) \qquad \text{for } i = 0, 1, \ldots, m. \]

The model now becomes

\[ Y = \sum_j X_{0j} \beta_j + \sum_{i>0} \sum_j X_{ij} \epsilon_{ij} \tag{2.2} \]

and

\[ V^* = \sum_{i>0} \sigma_i^2 \Bigl( \sum_j X_{ij} X_{ij}' \Bigr). \]

Define $V_{ij} = X_{ij} X_{ij}'$ for all $ij$. It follows that

\[ V^* = \sum_{i>0} \sigma_i^2 \Bigl( \sum_j V_{ij} \Bigr). \]
It is assumed that no fixed effect in (2.2) is nested in a random effect, and that the model is balanced in the sense that there exists an analysis of variance of the form

\[ Y'Y = \sum_i \sum_j Y' A_{ij} Y \]

with orthogonal sums of squares. Note that it is quite possible that the expected values of several sums of squares may be identical or may differ by no more than a constant. The $\{A_{ij}\}$ have the usual properties as spelled out by Cochran's theorem.
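For later reference, those properties may be written explicitly (this restates standard facts rather than proving anything new):

\[ A_{ij} = A_{ij}', \qquad A_{ij} A_{ij} = A_{ij}, \qquad A_{ij} A_{i'j'} = 0 \ \text{for } (i,j) \neq (i',j'), \qquad \sum_i \sum_j A_{ij} = I, \]

so that under normality and the balance assumed here each $Y' A_{ij} Y$ is distributed as a multiple of a chi-square variable, independently of the others.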
It is convenient for our purposes to let

\[ A_{0j} = X_{0j} (X_{0j}' X_{0j})^{-} X_{0j}' \qquad \text{for } j = 1, \ldots, s_0. \]
It follows that there exist constants $\{\theta_{ij,i'j'}\}$ and $\{\varphi_{ij,i'j'}\}$ such that

\[ A_{ij} = \sum_{i'j'} \theta_{ij,i'j'} V_{i'j'} \qquad \text{and} \qquad V_{ij} = \sum_{i'j'} \varphi_{ij,i'j'} A_{i'j'} \]

for all $ij$.
Also there exist $\{d_{ij}\}$, functions of the $\{\varphi_{ij,i'j'}\}$ and the $\{\alpha_i\}$, such that

\[ V = \sum_{i>0} \sum_j \alpha_i V_{ij} = \sum_{i>0} \sum_j \alpha_i \Bigl( \sum_{i'j'} \varphi_{ij,i'j'} A_{i'j'} \Bigr) = \sum_{i'j'} \Bigl( \sum_{i>0} \sum_j \alpha_i \varphi_{ij,i'j'} \Bigr) A_{i'j'} = \sum_{i'j'} d_{i'j'} A_{i'j'} \]

and

\[ V^{-1} = \sum_i \sum_j d_{ij}^{-1} A_{ij}. \]

The known non-singularity of $V$ guarantees that $d_{ij} \neq 0$ for all $ij$. The MINQUE principle requires $Y' Q_V V_i Q_V Y$ for $i = 1, \ldots, m$, where

\[ Q_V = V^{-1} - V^{-1} X_0 (X_0' V^{-1} X_0)^{-} X_0' V^{-1}. \]
The balance in the structure implies $A_{0j} V_{ij'} = 0$ for all $j'$ and $i > 0$, and in particular $A_{0j} A_{0j'} = 0$ for $j \neq j'$. It follows that

\[ V^{-1} X_0 (X_0' V^{-1} X_0)^{-} X_0' V^{-1} = \sum_j d_{0j}^{-1} X_{0j} (X_{0j}' X_{0j})^{-} X_{0j}' = \sum_j d_{0j}^{-1} A_{0j} \]

and

\[ Q_V = \sum_{i>0} \sum_j d_{ij}^{-1} A_{ij}. \]
The desired quadratic forms follow as

\[ Y' Q_V V_i Q_V Y = \sum_{i'>0} \sum_{j'} d_{i'j'}^{-2} \Bigl( \sum_j \varphi_{ij,i'j'} \Bigr) Y' A_{i'j'} Y \qquad \text{for } i = 1, \ldots, m. \tag{2.3} \]

In order to obtain expressions for the $d_{ij}$, define $d_{ij}^{*}$ as $d_{ij}$ evaluated at the true components $\{\sigma_i^2\}$ rather than at the prior values $\{\alpha_i\}$, and consider

\[ E[Y' A_{ij} Y] = \operatorname{tr}(V^* A_{ij}) = d_{ij}^{*} \operatorname{tr}(A_{ij}). \]

It follows that $d_{ij}^{*}$ is exactly the expected value of the mean square obtained from the quadratic form $Y' A_{ij} Y$ in the analysis of variance. Consequently the $\{d_{ij}\}$ are the expected values of the mean squares evaluated at prior values of the variance components.
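The first equality above rests on the standard expectation identity for quadratic forms; as a reminder (a restatement of a well-known fact, not a new result), for $Y$ with mean $X_0 \beta$ and covariance matrix $V^*$,

\[ E[Y' A_{ij} Y] = \operatorname{tr}(V^* A_{ij}) + \beta' X_0' A_{ij} X_0 \beta, \]

and under the balance assumptions the random-effect lines ($i > 0$) satisfy $A_{ij} X_0 = 0$, so the second term vanishes, while $V^* = \sum_{i'j'} d_{i'j'}^{*} A_{i'j'}$ and the orthogonality of the $\{A_{ij}\}$ reduce the trace to $d_{ij}^{*} \operatorname{tr}(A_{ij})$.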
If one is willing to assume normality, then the result in (2.3) can be obtained via REML. The assumptions in model (2.2) together with normality imply the existence of $k$ independent sums of squares $\{SS_i\}$ based on $\{\nu_i\}$ degrees of freedom and with expected mean squares $\{\sum_j w_{ij} \sigma_j^2\}$ respectively. Setting the partial derivatives of the log-likelihood equal to zero yields

\[ \sum_i SS_i\, w_{ij} \Bigl( \sum_h w_{ih} \sigma_h^2 \Bigr)^{-2} = \sum_i \nu_i\, w_{ij} \Bigl( \sum_h w_{ih} \sigma_h^2 \Bigr)^{-1} \tag{2.4} \]

for $j = 1, \ldots, m$. An approach to solving (2.4) is to use prior values for the $\{\sigma_h^2\}$ in the denominators and then to equate the observed quadratic forms to their expected values. This, however, is exactly equivalent to (2.3) if one identifies $\{Y' A_{ij} Y\}$ with $\{SS_i\}$, $\{d_{ij}\}$ with $\{\sum_h w_{ih} \alpha_h\}$ and $\{\sum_j \varphi_{ij,i'j'}\}$ with $\{w_{ij}\}$.
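The scheme just described is easily programmed once the analysis-of-variance summaries are in hand. The following is a minimal sketch, not the author's original program: it assumes the user supplies the sums of squares SS, the degrees of freedom nu and a matrix W whose rows hold the expected-mean-square coefficients, and the fixed iteration count and negative-value replacement (see Section 4) are illustrative choices.

    import numpy as np

    def reml_from_aov(SS, nu, W, alpha0, n_iter=10):
        # SS : k-vector of sums of squares from the separate analyses
        # nu : k-vector of degrees of freedom
        # W  : k x m matrix with E[MS_i] = sum_j W[i, j] * sigma_j^2
        sigma2 = np.asarray(alpha0, dtype=float)
        for _ in range(n_iter):
            E = W @ sigma2                     # expected mean squares at priors
            q = W.T @ (SS / E**2)              # observed quadratic forms, cf. (2.4)
            C = (W.T * (nu / E**2)) @ W        # coefficients of E[q] in the sigma_j^2
            sigma2 = np.linalg.solve(C, q)     # equate forms to expected values
            sigma2 = np.where(sigma2 < 0, 0.0, sigma2)  # replace negative solutions
        return sigma2

One pass of the loop with fixed priors gives the non-iterated MINQUE solution; repeated passes give the REML solution when the iteration converges.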
3. EXAMPLES

3.1 Combining a Series of r x c Tables with Subsampling.
We consider now the special case where data are available from a series of $m$ experiments, with $r_k$ rows, $c_k$ columns and $n_k$ observations in each cell for $k = 1, \ldots, m$. The row, column, interaction and error variance components are assumed to be constant across all experiments. If $RSS_k$, $CSS_k$, $RCSS_k$ and $ESS_k$ represent the row, column, interaction and error sums of squares respectively for the $k$th experiment, and $\alpha_R$, $\alpha_C$, $\alpha_{RC}$ and $\alpha_E$ the prior values for the variance components, then (2.3) implies that the following 4 quadratic forms be set equal to their expected values:
a) $\displaystyle \sum_k \frac{c_k n_k \, RSS_k}{(c_k n_k \alpha_R + n_k \alpha_{RC} + \alpha_E)^2}$

b) $\displaystyle \sum_k \frac{r_k n_k \, CSS_k}{(r_k n_k \alpha_C + n_k \alpha_{RC} + \alpha_E)^2}$

c) $\displaystyle \sum_k n_k \left[ \frac{RSS_k}{(c_k n_k \alpha_R + n_k \alpha_{RC} + \alpha_E)^2} + \frac{CSS_k}{(r_k n_k \alpha_C + n_k \alpha_{RC} + \alpha_E)^2} + \frac{RCSS_k}{(n_k \alpha_{RC} + \alpha_E)^2} \right]$

d) $\displaystyle \sum_k \left[ \frac{RSS_k}{(c_k n_k \alpha_R + n_k \alpha_{RC} + \alpha_E)^2} + \frac{CSS_k}{(r_k n_k \alpha_C + n_k \alpha_{RC} + \alpha_E)^2} + \frac{RCSS_k}{(n_k \alpha_{RC} + \alpha_E)^2} + \frac{ESS_k}{\alpha_E^2} \right]$
These quadratic forms are set equal to their expected values and the resulting system solved. The iterative process is obtained by replacing the prior values with the estimates (negative values must be replaced) after the first cycle. If $n_k = 1$, then the value for $ESS_k$ in d) is replaced by zero. Note also that allowing a different error variance component for some or all of the experiments introduces no new difficulties; a computational sketch of the whole procedure is given below.
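The four forms and their expected-value coefficients can be assembled directly from the per-experiment summaries. The sketch below is a hypothetical implementation: the argument names (r, c, n, rss, and so on) are chosen here for exposition, and it reuses the reml_from_aov sketch given after (2.4).

    import numpy as np

    def combine_rc_tables(r, c, n, rss, css, rcss, ess, alpha0, n_iter=10):
        # Arrays of length m: r_k, c_k, n_k and the four sums of squares.
        # Build one analysis-of-variance line per (experiment, source); each
        # line carries its SS, d.f. and E[MS] coefficients on
        # (sigma_R^2, sigma_C^2, sigma_RC^2, sigma_E^2).
        SS, nu, W = [], [], []
        for k in range(len(r)):
            SS += [rss[k], css[k], rcss[k], ess[k]]
            nu += [r[k] - 1, c[k] - 1, (r[k] - 1) * (c[k] - 1),
                   r[k] * c[k] * (n[k] - 1)]
            W += [[c[k] * n[k], 0, n[k], 1],    # row line
                  [0, r[k] * n[k], n[k], 1],    # column line
                  [0, 0, n[k], 1],              # interaction line
                  [0, 0, 0, 1]]                 # subsampling (error) line
        keep = [i for i, v in enumerate(nu) if v > 0]  # drop empty lines (n_k = 1)
        SS, nu, W = np.array(SS)[keep], np.array(nu)[keep], np.array(W)[keep]
        return reml_from_aov(SS, nu, W, alpha0, n_iter)

Dropping the zero-degree-of-freedom error lines is the programmatic analogue of replacing $ESS_k$ by zero when $n_k = 1$.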
3.2 An Experiment with Common Row and Column Variance Component.

Consider the case where it is assumed that $\sigma_R^2 = \sigma_C^2 = \sigma_A^2$ and data are available from an $r \times c$ table with one observation per cell. Let $RSS$, $CSS$ and $ESS$ be the row, column and error sums of squares respectively. If $\alpha_A$ and $\alpha_E$ represent prior values, then the appropriate quadratic forms are

\[ \frac{c\,RSS}{(c \alpha_A + \alpha_E)^2} + \frac{r\,CSS}{(r \alpha_A + \alpha_E)^2} \qquad \text{and} \qquad \frac{RSS}{(c \alpha_A + \alpha_E)^2} + \frac{CSS}{(r \alpha_A + \alpha_E)^2} + \frac{ESS}{\alpha_E^2}. \]
3.3 Numerical Example.

Corn yield data from a series of three experiments conducted by Dr. R. H. Moll in the Genetics Department at NCSU are used to illustrate the methodology developed. In experiment I, 64 males, grouped into blocks of 4, were each used on 4 different females. The identical matings were tested at two locations. Experiment II is similar, except that only 56 males were used. Experiment III again involved 64 males; however, in this experiment the blocks were replicated at each location.
Table I shows the analysis of variance for each experiment. Note that a common model involving an effect due to replications within locations is used to obtain the expected mean squares for all three experiments, even though only one replicate was used in experiments I and II. We note that the analysis of variance for experiment III has ten lines while each of the other two has only seven. Ten variance components are to be estimated. It is assumed that all effects in each experiment are random. The ten quadratic forms required in the first iteration are given in Table II. Rather than use prior values for the unknown $\{\sigma_i^2\}$, all were set equal to one for the first cycle.
Table I. Analyses of Variance of Three Experiments

Experiment I.

Source          M.S.       d.f.
Loc             4.501875      1
Block            .110619     15
LxB              .121897     15
Male (B)         .016996     48
LxM (B)          .011032     48
Female (M,B)     .012128    192
LxF (M,B)        .010947    192

Experiment II.

Source          M.S.       d.f.
Loc             5.733175      1
Block            .024852     13
LxB              .047801     13
Male (B)         .022411     42
LxM (B)          .010157     42
Female (M,B)     .008474    168
LxF (M,B)        .005724    168

Experiment III.

Source          M.S.       d.f.
Loc             1.476984      1
Block            .078833     15
LxB              .045938     15
Rep (L,B)        .011495     32
Males (B)        .051411     48
LxM (B)          .006574     48
RxM (L,B)        .004850     96
Female (M,B)     .017559    192
LxF (M,B)        .004980    192
RxF (M,L,B)      .005076    384

The E.M.S. column of each analysis expresses the expected mean square of each line as the usual balanced-design linear combination of the ten components σ²_e, σ²_LF(MB), σ²_F(MB), σ²_RM(LB), σ²_LM(B), σ²_M(B), σ²_R(LB), σ²_LB, σ²_B and σ²_L under the common model; for example, the coefficient of σ²_L in the Loc line is 256, 224 and 512 for experiments I, II and III respectively.
TABLE II. Quadratic Forms for First Iteration

Symbolic Form                     Quadratic Form (first iteration)
Σ_t (Y'Q_V V_L Q_V Y)_t               .033425
Σ_t (Y'Q_V V_B Q_V Y)_t               .01249
Σ_t (Y'Q_V V_LB Q_V Y)_t              .03627
Σ_t (Y'Q_V V_R(LB) Q_V Y)_t           .04589
Σ_t (Y'Q_V V_M(B) Q_V Y)_t            .07050
Σ_t (Y'Q_V V_LM(B) Q_V Y)_t           .09222
Σ_t (Y'Q_V V_RM(LB) Q_V Y)_t          .15545
Σ_t (Y'Q_V V_F(MB) Q_V Y)_t           .76186
Σ_t (Y'Q_V V_LF(MB) Q_V Y)_t         1.37352
Σ_t (Y'Q_V V_RF(MLB) Q_V Y)_t        3.16347
In particular, we note that the first quadratic form is computed from only the location lines in the three analysis of variance tables. It is given by

\[ \frac{256 \times 4.501875}{298 \times 298} + \frac{224 \times 5.733175}{266 \times 266} + \frac{512 \times 1.476984}{575 \times 575}. \]
The third quadratic form is based on the first three lines in the analyses of variance. In particular, it is given by

\[ 16 \left[ \frac{15 \times .121897}{42 \times 42} + \frac{15 \times .110619}{84 \times 84} + \frac{4.501875}{298 \times 298} \right] + 16 \left[ \frac{13 \times .047801}{42 \times 42} + \frac{13 \times .024852}{84 \times 84} + \frac{5.733175}{266 \times 266} \right] + 32 \left[ \frac{15 \times .045938}{63 \times 63} + \frac{15 \times .078833}{147 \times 147} + \frac{1.476984}{575 \times 575} \right]. \]

We notice that the first quadratic form involves only the sums of squares whose expected values contain $\sigma_L^2$. Similarly, the third quadratic form involves all sums of squares whose expected values contain $\sigma_{LB}^2$. The tenth quadratic form is based on all lines in all three analyses.
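The displayed sums can be checked by direct arithmetic; in the snippet below (plain Python, with the constants copied from Table I and the expressions above) the two totals reproduce the first and third entries of Table II to within rounding of the printed mean squares.

    # First quadratic form (location lines only):
    q1 = (256 * 4.501875 / 298**2 + 224 * 5.733175 / 266**2
          + 512 * 1.476984 / 575**2)          # -> 0.0334  (Table II: .033425)

    # Third quadratic form (Loc, Block and LxB lines of all three experiments):
    q3 = (16 * (15 * .121897 / 42**2 + 15 * .110619 / 84**2 + 4.501875 / 298**2)
          + 16 * (13 * .047801 / 42**2 + 13 * .024852 / 84**2 + 5.733175 / 266**2)
          + 32 * (15 * .045938 / 63**2 + 15 * .078833 / 147**2 + 1.476984 / 575**2))
    # -> 0.0363  (Table II: .03627)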
The estimates are obtained by equating the 10 quadratic forms to their expected values and solving the resulting system. These estimates are then used as prior values for the next cycle. Table III shows the results of 10 cycles of iteration. Table IV shows the estimated asymptotic variance-covariance matrix of the estimates.
TABLE III. Results of 10 Iterations of the Estimation Procedure

                                 Iteration No.
Variance Component     1          2          3          7         8, 9 & 10
σ²_E                 .005258    .005538    .005050    .005714    .005715
σ²_LF(MB)            .001602    .001074    .000904    .000810    .000809
σ²_F(MB)             .001984    .002177    .002223    .002248    .002249
σ²_RM(LB)           -.000059*  -.000082*  -.000096*  -.000104*  -.000104*
σ²_LM(B)             .000413    .000461    .000477    .000486    .000486
σ²_M(B)              .001356    .001324    .001323    .001323    .001323
σ²_R(LB)             .000628    .000447    .000427    .000421    .000417
σ²_LB                .002594    .003007    .003087    .003087    .003100
σ²_B                -.000427*  -.000550*  -.000572*  -.000572*  -.000576*
σ²_L                 .014684    .015072    .015071    .015070    .015070

* These values were replaced by zeros when used for the next iteration.
Table IV. Estimated Asymptotic Variance-Covariance Matrix of Estimated Variance Components.

[10 x 10 symmetric matrix over the components σ²_E*, σ²_LF, σ²_RM, σ²_LM, σ²_M, σ²_F, σ²_R, σ²_LB, σ²_B and σ²_L; many entries are exactly zero, and the nonzero entries range in magnitude from about 10⁻¹⁰ to 10⁻⁴.]

* This component could also be labeled σ²_RF.
4. ASYMPTOTIC PROPERTIES

The complexity of the formulas implied by (2.3) makes it difficult to assess exact distributional properties of the estimators obtained. However, several points can be made. First of all, if one does not iterate, then the resulting estimators are unbiased (Rao 1971a, 1971b), regardless of the prior values used. If the prior values are proportional to the true values and one assumes normality, then the estimators also have minimum variance among all unbiased quadratic estimators (Rao 1971b).

Sriburi (1978) has shown that if one iterates and obtains the REML solutions, then the estimates are consistent, asymptotically normal and efficient in the sense of attaining the Cramér-Rao lower bound for the covariance matrix. The asymptotic covariance matrix is twice the inverse of the matrix of coefficients obtained when the expected values of the quadratic forms (2.3) are expressed as linear functions of the variance components. We note that there is no guarantee that the solutions to the MINQUE equations will be non-negative. Consequently, any iterative procedure must employ some method to replace negative values if they appear.
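In the notation of the sketch following (2.4), the coefficient matrix is the C computed there, so the asymptotic covariance matrix is obtained in essentially one line (an illustration under the same assumed variable names):

    # At the converged sigma2, recompute the coefficient matrix and invert:
    E = W @ sigma2
    C = (W.T * (nu / E**2)) @ W
    acov = 2.0 * np.linalg.inv(C)   # estimated asymptotic covariance of sigma2-hat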
The adequacy of the asymptotic theory can be judged from a Monte Carlo run involving 1000 sets of five experiments, each involving rows, columns, interaction and subsampling with $\sigma_r^2 = .5$, $\sigma_c^2 = .2$, $\sigma_{rc}^2 = .9$ and $\sigma_e^2 = 1$. The sizes of the five experiments are given in Table V. Table VI gives the exact variances and covariances of the estimators of the four variance components, using three alternative methods of estimation. Note that the variances and covariances for the restricted M.L. estimators are for the case where the actual values of the components are used in (2.4) and the procedure is not iterated. Table VII shows the results of the simulation experiment. Several points are rather striking.
Table V. Sizes of the experiments combined to generate the variances and covariances of the estimates given in Tables VI and VII.

No. of Rows    No. of Cols.    No. of Subsamples
    10             10                 2
    10              4                 2
     4              4                 2
    10              4                10
     4              4                10
Table VI. Exact variance-covariance matrices for estimates of σ²_r, σ²_c, σ²_rc and σ²_e using three methods of estimation.

Method of Estimation: Averaging Unbiased AOV Estimates

            σ²_r      σ²_c      σ²_rc     σ²_e
σ²_r        .0471     .0019    -.0416     0
σ²_c        .0019     .0182    -.0076     0
σ²_rc      -.0416    -.0076     .0351    -.0040
σ²_e        0         0        -.0040     .0086

Method of Estimation: Pooling Sums of Squares

σ²_r        .0460     .0007    -.0054     .0000
σ²_c        .0007     .0288    -.0033     .0000
σ²_rc      -.0054    -.0033     .0252    -.0008
σ²_e        .0000     .0000    -.0008     .0030

Method of Estimation: Restricted M.L.

σ²_r        .0342     .0005    -.0040     .0000
σ²_c        .0005     .0127    -.0025    -.0000
σ²_rc      -.0040    -.0025     .0211    -.0010
σ²_e        .0000    -.0000    -.0010     .0030
Table VII. Means and variances of 1000 simulated sets of estimates.

Method of Estimation                              σ²_r     σ²_c     σ²_rc    σ²_e

Means
Averaging Unbiased AoV Estimates                  .495     .199     .905     1.000
Averaging Estimates with Neg. Values Eliminated   .512     .231     .907     1.000
Pooling Sums of Squares                           .492     .201     .902     1.001
First Iterate, Restricted ML                      .504     .204     .897     1.001
Fourth Iterate, Restricted ML                     .498     .198     .903     1.001
Restricted ML Using True Values                   .498     .198     .903     1.001

Variances
Averaging Unbiased AoV Estimates                  .0477    .0200    .0394    .0080
Averaging Estimates with Neg. Values Eliminated   .0437    .0157    .0387    .0080
Pooling Sums of Squares                           .0417    .0194    .0264    .0030
First Iterate, Restricted ML                      .0330    .0125    .0210    .0030
Fourth Iterate, Restricted ML                     .0331    .0125    .0217    .0029
Restricted ML Using True Values                   .0331    .0125    .0217    .0029
First of all, it appears that bias is not a problem, except for the obvious case of computing an average after replacing negative values by zeros. Secondly, the iterative scheme appears to converge very rapidly. In all cases, the initial values used were obtained by averaging the unbiased AoV estimators (with negative values replaced by zero). Finally, the variances observed for the estimators obtained via the iterative scheme agree very closely with the exact values given in Table VI.
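For readers wishing to repeat the Monte Carlo run, a minimal sketch of one replication is given below. It is an illustration under stated assumptions, not the author's original program: it simulates one balanced r x c table with n subsamples per cell at the parameter values above and returns the four analysis-of-variance sums of squares, from which any of the methods in Table VII can be assembled over the five experiments of Table V.

    import numpy as np

    rng = np.random.default_rng()

    def simulate_rc_table(r, c, n, s2r=.5, s2c=.2, s2rc=.9, s2e=1.0):
        # Generate y_ijk = a_i + b_j + (ab)_ij + e_ijk and return
        # (RSS, CSS, RCSS, ESS) for one experiment.
        a = rng.normal(0, np.sqrt(s2r), (r, 1, 1))
        b = rng.normal(0, np.sqrt(s2c), (1, c, 1))
        ab = rng.normal(0, np.sqrt(s2rc), (r, c, 1))
        y = a + b + ab + rng.normal(0, np.sqrt(s2e), (r, c, n))
        cell = y.mean(axis=2)
        rbar, cbar, gbar = cell.mean(axis=1), cell.mean(axis=0), cell.mean()
        RSS = c * n * ((rbar - gbar) ** 2).sum()
        CSS = r * n * ((cbar - gbar) ** 2).sum()
        RCSS = n * ((cell - rbar[:, None] - cbar[None, :] + gbar) ** 2).sum()
        ESS = ((y - cell[:, :, None]) ** 2).sum()
        return RSS, CSS, RCSS, ESS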
REFERENCES

Giesbrecht, F. G. and Burrows, P. (1976). Estimating variance components for the nested model, MINQUE or restricted maximum likelihood. Inst. Stat. Mimeo Series No. 1075, Raleigh, NC.

Graybill, F. A. (1961). An Introduction to Linear Statistical Models, Vol. 1. McGraw-Hill Book Co., New York, NY.

Harville, D. A. (1977). Maximum likelihood approaches to variance component estimation and to related problems. J. Amer. Statist. Assoc. 72, 320-340.

Patterson, H. D. and Thompson, R. (1971). Recovery of inter-block information when block sizes are unequal. Biometrika 58, 545-554.

Rao, C. R. (1971a). Estimation of variance and covariance components - MINQUE theory. J. Mult. Analysis 1, 257-275.

Rao, C. R. (1971b). Minimum variance quadratic unbiased estimation of variance components. J. Mult. Analysis 1, 445-456.

Searle, S. R. (1971). Linear Models. John Wiley & Sons, New York, NY.

Sriburi, S. (1978). Properties of variance component estimators obtained by restricted maximum likelihood and by MINQUE. Inst. Stat. Mimeo Series No. 1175, Raleigh, NC.