Bush, N.; (1962)Estimating varinace components in a multi-way classification." Ph. D. Thesis.

ESTDvlATING VARIANCE COMPONENTS IN
A MULTI-WAY CLASSIFICATION
by
Norman Bush
and
R, L. Anderson
Institute of Statistics
Mimeograph Series No, 324
June, 1962
ERRATA
Line
from
top
Line
from
bottom
6
4
Small
p should be p (rho)
All ,e I s should be £
18
21
5
'V
21
2
lBmall p should be capital P
22
1
var(t) should be var(~)
22
12
Rightmost' p'.l should be p-l,
26
3
RightmostP~l should be Pa-1 ,
27
e
Correction
should be
'V
-1
-1
Rightmost pb should be Pb '
8
27
4
The word "on" is replaced by the word Itbi'
28
1
Vr should be V
33
4
The '\vord
II
givesu is replaced by the word "giveil
The word
Ii
andll is replaced by the word "an"
39
4
40
12
r
The first word "ot' in the line is replaced by the
word "on"
40
3
42
2
51
63
Insert commas after cov(R,C) and var(S)
9
56
Replace the word" intersting l by "interesting"
2
2
2
~2
~2
~2
orc' cr r and 0e should be arc' or and 0e
3
The word "there" is replaced by the '!Vord IItheseil
'fhe word sequence
2
II
it is'' is replaced by the i'Tords
"they are"
e
63
'7
The word "a" is replaced by the word It of"
64
5
The rightmost X should be
a
X~
66
5
The ivord sequence lin .._ =1" is replaced by"n .. = 1 or 0
77
95.
3
The word
8
~J
If
The ntullber
crossedll is replaced by the word
II
161" should be "160. '7"
~J
II
cross"
II
iv
TABLE OF CONTENTS
Page
LIST OF TABLES ..
if
•
..
•
•
0
LIST OF APPENDIX TABLES. • •
...
0-
•
•
....
....
. . . . . . . . .. ..
•
•
•
•
o
'0
0
vi
• vii
CHAPrER
I
II
III
INTRODUCTION • • • • • •
REVIEW OF LITERATURE ..
~
. .
THEORETICAL DEVELOPMENT"
0
•
0
~
00.000.
..
•
0
•
..
•
0'
•
. .. . .
•
0
(II
<>
q
0
•
..
. .. .
GO.
a
'0
1
3
6
6
.3.1 Introduction. .. • .. • • • • • • • • • • .. • • ..
6
.3.2 Analysis of Variance in Model I • • • • • • • ..
.3.3 Constructing C'-matrices for Model I. • • • • • • .. 10
3.4 Variance Components in Model II • • • • • • • • • • 19
3.5
IV
Variances of Estimates of the Variance
Components. • • • • • • • • .. • • • •
22
COMPARISON OF THREE PROCEDURES FOR ESTIMATING VARIANCE
COMPONENTS, • .. • • • • • • • • • • • • • • • • • • .. • ..
2~.
4.1 Introduction. • • .. .. .. • • • .. .. ...
4.2
4.3
4.4
4.5
v
• • • • .. 2~.
Estimating Equations for v and var(v) .. • .. • • • .. 26
Calculating Ph1 Pa and Pb- ••• • .-. • .. • •
28
Calculating var(~h)p var(~a) and var(~b)" • • • .. • 34
A Computer Comparison • .. .. .. .. • .. • • • .. • • • • 36
A SIMPLIFIED COMPUTATIONAL METHOD FOR PROCEDURE A.
Introduction. • .. • .. .. • .. •
Searle 1 s results for var(~) ..
Computing var'( I*) ~ Gov(R» I~*') J
cov(S.llI~!')
..
VI
EXTENSION OF RESULTS TO THE
MULTI~WAY
.. • •
.
o
0
•
•
0-
•
•
•
4>
0
.• •
.. .
·
48
48
. 48
51
CLASSIFICATION. .. • 56
6.1 Introduction.. .. • • • • .. .. .. • .. • • • .. • • .. • 56
Sums of Squares in the 2 x 2 x .3 Classification
56
The Completely Random Situation • .. • • • • • • • • 71
6.4 Estimating Variances of the Estimates of the
Variance Components for the Nested Design • • .. .. 74
6.5 Other Mathematical Models • • • • • .. .. • • .. • " • 76
6.2
603
0
•
v
TABLE OF CONTENTS (continued)
Page
VII
SUMMARY AND CONCLUSIONS • • • • • • • •
.... •
• • • ••
77
..
7.1 Introduction• • • • • • • • • • • • • • • • • • • • 77
7.2 The Two-way Classification•• • • • • • • .
• • 77
7.3 Suggested Future Research •• • • •
• • • • 81
~
LIST OF REFERENCES • • • • • • • • • • • •
0
•
•
.0440'·.·82
APPENDIX A.
COMPUTER PROGRAM FOR VAR(!h)' VAR(!a) AND VM(.!b)'
IN THE TWO-WAY CLASSIFICATION • • • • • • • • • • • 84
APPENDIX B.
COMPUTER RESULTS FOR VARIANCES OF ESTIMATES OF
VARIANCE COMPONENTS • • • • • • • • • • • • • • • • 101
vi
LIST OF TABLES
Table
Page
4.1. The nij arr angements for the 6x6 designs. • • • • • • • • • 37
4.2. The n .. arrangements for the 3x3 designs. • • • • • • • • • 38
~J
4.3. General results from the 6x6 designs S, C and L • • • • • • 39
Relative variances of estimates of variance components
2
222
for a R 4, a ~ 2, a
R 0 and a
~ 1. • • • .. • • • • •
rc
e
r
c
Relative variances of estimates of variance components
2
?
2
.
2
for a
16,
a- ~ 2, a
R 1/4 and a
R 1 ••••••
• •
r
c
rc
e
h4
Relative variances of estimates of variance components
222
2
for a r ~ 1, a c = 1, arc = 1 and a e ~ 1• • • • • • • • • •
45
t
4.7. Relative variances of estimates of variance components
2
2 = 4 and (j 2 1
f or ar
16,2
a = 0, a
• • • • • • • • • 46
c
rc
e
R
IRe
4.8. Relative variances of estimates of variance components
22·
2
for a 2 .. 1, a c = 1, (jrc 16 and a e .. 1 • • • • • • • • • 47
r
R
Subclass n. . data for a two-way classification. • • • • ••
50
Coefficients of terms in var(t ). • • • • • • • • • • • ••
55
~J
-a
6.1. Three factor design in terms of nijk's. • • • • • • • • • • 57
6.2. Sums of squares for estimation procedures A, B and H
in a three-way classification • • • • • • • • • • • • ••
69
6.3. Sums of squares for the four stage nested classification.. 75
vii
LIST OF APPENDIX TABlES
Table
Page
B-1.
B-2.
Design 6x6-equal. • • • • • • • • • • • • • • • " • • •
" 102
DeSign 516.. • • • • • • • • • • • • •
•
• • • • • • .. 103
B-3.
Design 822. • •
B-4..
Design C18 ... • •
B-5.
Design c24. • • • • •
..
•
· .. • • •
B-6.
Design I20. • •
. • .
•
..
• •
B-7.
Design I24.
.
..
• • • • • • • • • •
· .. . •
. ·.
•
•
e
_
..
•
· . ..
. • 104
• • • .. 105
. . .
.
•
•
.
•
.
. .
•
..
•
.. ..
• • • • • • • • • • • • • . •
.
•
.
• 109
Design D18-1. • • • • • • • • • • • • • .. • • • • • • •
.
" 110
.. ..
• III
•
•
..
•
.. • • • .. ..
B-8. Design 3x3-equal. •
B-9.
.
· .. •
.
..
B-10.
Design D18-2. • • • • •
B-ll..
Design D18-3.
B-12.
Design D18-1.j.. • • • • • •
.
.
•
• •
. ..
• •
. ..
•
.
.
.·.
.
·.
• • • • • • • • • • •
• • • • • • • •
.
.
• • • ..
..
.
•
•
0
• • • • • • • • • • • • .
• 106
•
• 107
108
112
. . .. 113
CHAPTER I
D1TRODUCTION
The primary purpose of this dissertation is to describe and illustrate an analytic procedure for obtaining variances of' estimates of'
variance components in a general multi-way classification, when there
are unequal numbers of replications in the subclasses.
component model is always taken to be Eisenhart's
The variance
["'19417
Model II,
and the estimation procedure is based on equating the expected mean
squares, from the analysis of variance table, to the computed mean
squares, (as if it was a fixed model), and then solving a set of equations for the estimates of the variance components.
For the fixed ef-
fects models we will use Eisenhart's definition of Model I, the only
variance component being
2
(J'
e
Specifically, three estimating procedures, A, B and H, are worked
out for the two- and three-way classifications, and the generalization
to higher-way classifications is indicated.
the method of' fitting constants, Yates
Procedure A is based on
["193!±7
and Henderson
Procedure B, the method of weighted squares of' means, Yates
and Procedure H, the unadjusted sums of squares, Henderson
and
De
Roy and Gluckowski ["1 96")}•
["1952];
['193!±7;
[195"2.7
For the two-way classification,
the three procedures are computed for a number of' experimental designs and parameter values (true variance components) with the use
of' a UNIVAC
1105
digital computer.
2
The computer comparison was designed to compare experimental
designs, f'or estimating variance components, as well as to compare
the three estimation procedures.
A sunnnary of' the computer results
is given in Section 4.5, a.'tld the actual results in Appe:n:dix B.
The
computer program is shown in Appendix A.
A simplified
computatiol~l me~~od
for Procedure A, in the
way classification, is given in Chapter V,
a.~d
two~
the extensions to
multi-vl'ay and nested classifications i.n Chapter VI.
All the experimental designs that
sertation
follows:
ar~
cOTIrlected designs.
a!~
considered in this
A connected
desi~~
dis~
is defined as
1
itA treatment and a block are said to be associated if the treatment occurs in tha"c particular block.
Two treatments, two
blocks or a treatment and a block are said to be connected if
it is possible to pass from one to the other by means of a
chain, consisting alternately of treatments or blocks such
that
a.~y
two cOl1secu·tive members are associated.
Thus if
treatments i ' and in are connected, we must have a chain
O
of treatmen'ts and bl.ocks say,
i O' jl' iI' j2' ••• , in~l' jn' in '
'
..
where i O' iF •.• , in are treatments, a.'Yld jl' J2 , ••. , jn
are blocks, an.a i ~,i occur in block j • A design is said
p=~
to be
corll~ected
p
i f every block a.nd
P
treatme~t
of the design is
cotLnected to every ot.c':ler block a:n.d tl"eat'ment. Ii
~. C. Bose gives this definition in his class notes of Analysis
of Variance with application to experimental designs, University
of North Carolina, Chapel Hill, Nori;h Carolina.
3
CHAPTER II
REVIEW OF LITERATURE
For the completely random model, in the multi-way crossed classification with unequal numbers in the subclasses, very little work
has been done in the area of' what is the best procedure and best experimental design for simultaneous estimation of variance components.
Crump ,["1946, 195"J:7 and Anderson and Bancrof't L195g7 discuss the
estimation problem for the balanced ease, and mention the difficulties in the unbalanced case.
["19517
Henderson ["195"2..7 and Henderson et al.
describe, with some examples, Procedures A and H for a two-
way classification (these are Methods 3 and 1, respectively, in the
article), but do not evaluate the estimation procedures.
Searle
["1956, 195§.7 develops a matrix procedure by which the variances of'
the estimated variance components for Procedure H can be obtained in
the two-way classification.
These results are extended to the three-
stage nested design by Searle ['"1962:,7.
Estimating procedure
]I
is
extended to the three-way classification by Le Roy and Gluckowski
["196jJ.
None of' the above articles attempts to compare or evaluate
the various estimation procedures, except for Searle's work on Procedure H.
The area of' design for estimating variance components is even
more recent than the work in the estimation procedures.
The earliest
work, to the author's knowledge, on designs for the two-way
cation is by Gaylor
L196y].
classif'i~
Anderson ,["196'1:,7 reviews t..'lJ.e problems
5
In this dissertation the L-design will be compared to some
other designs; however, the BDR-design will not be considered,
since it is not a connected design.
Instead we have been led to two connected rectangles design
which are denoted as the S- and C-designs.
rows and six columns and n
ij
For the case of six
= 0 or 1, these designs appear
as follows:
C-design
110000
111000
111000
111000
o 1 1 100
011100
001110
001110
000111
000111
000011
00011 1
4
of design in the nested classification and comments on Gaylor's
["1969..7
work for the two-way classification model.
Gaylor points out that, when the experimenter is interested in
simulatneously estimating both the row and colunm variance component
2 and 0-2 ), the optimum design depends upon the estimator used and
r
c
.
(0-
the values of the parameters.
He studied the simultaneous estima=
2
2
2
"'2
"'2.
r and 0-c , when 0"r = 0"c and Var(o-r ) = Var(()c ), for two
designs which are denoted as the L~design and the balanced disjointed
tion of
2
0"
rectangles
design (BDR-design).
Examples of the
Land BDR-designs
are shown below for a two-way classification with six rows and six
columns, in tenns of the number of observations in each subclass.
L-design
BDR-design
110 0 0 0
1 1 1 0 0 0
11000 0
1 1 1 0 0 0
110 0 0 0
1 1 1 0 0 0
1 100 0 0
o0
1 1 1 1 1 1
000 I I I
III III
o0
0 1 1 1
0 1 1 1
Gaylor's general conclusions were based on changes in
and
0-;,
O"~, cr;c and
0";
p,where
represent the row, column, interacti,on and.
error components in the random model.
As
p increases, the L-design
is better than the BDR design, and as
p decreases the BDR-design is
The variances of the estimates of' 0"2r (and 0-2c ) for the two
designs are about equal when p is approximately 2.
better.
6
CHAPrER III
THEORETICAL DEVELOPMENT
3.1
Introduction
The main purpose of this chapter is to illustrate, in a general
way, how to calculate the variances of the estimates of the variance
components for different estimation procedures.
Thus it becomes pos-
sible to compare and study a class of estimation procedures over a
suitable range of parameter values and unequal subclass numbers.
In
order to understand the develol?Ulent of this analytic procedure, it
is necessary to discuss first the calculation of sums of squares in
Model I, through the cell means rather than the more conventional
row, column and interaction effects.
Then, as it will be shown for
Model II, it is relatively easy to obtain the variance-covariance
matrix of a vector of sum of squares, which is the key computation
in obtaining the variances of the estimates of the variance components.
3.2 Analysis of Variance in Model I
The mathematical model for a multi-way classification can be
written as
Y"k
J.J ••• = ~ + a.). + ~.J + ok + ••• +
(a
~) lJ
.. +
(a o)).'k'+ •••
+(a~O)i'k+···+e'·k
J
lJ, •• ' ;
Y.lJ'k ••• - the observed value for cell ijk••• ,
!.I.
-
the population mean ,
7
a,~,o,
etc. - the population main effects,
(a ~), (a
0),
(a ~
0),
etc. - the population two factor and
higher interaction effects,
e, 'k
l.J •••
- a ra..ndom residual assumed normally independently
distributed with a mean zero
2 ; NID ( 0,
e
(j
(j
variance
~~d
2) •
e
In matrix notation (3.1) can be expressed as
l.
:=
nxl
where
+
A
~
man
mx1
e
md
2
X
column vector of observed values,
A
known matrix of ones and zeroes for a particular design,
where Rank(A)
:=
t :::: m, and n > t ,
vector or parameters,
e
error vector, where
~,
ai'
E(~)
=
~j"'.
0, E(~ ~')
=
a~
I
,
nxn
I being the identity matrix.
In the multi-way classification, the mathematical model can
also be written in terms of subclass parameters, Le., t..ne expected.
values of the means of each non-empty cell.
'1!b.is model would then
appear as
2
Underscored Roman or Greek small letters will be colum~ vectors;
the prime will denote transpose. Thus y
is a column vector of
nXl
n elements, \-lhile y' is a row vector. Capital Roman le'tters
will be used for matrices, e.g.
A is a matrix of n rows ffild
runn
m columns.
8
I
nxl
where now
=
I
A
nxt
+
txl
,
e
nXl
A is of' f'ull rank (=t), and
the cell mean parameters.
Note that
I
t
is the column vector of'
equals the number of' non-
empty subclasses in the experimental design.
To illustrate the formulation of
(3.2)
(3.3),
and
consider a
two-way ( 2 x 3) classification.
Y.·k=1J.
~J
where
i = 1, 2; j
+a.~ +13.J + (af3) lJ
.. +e l
.. k ;
J'
= 1,
2, 3; k == 1, 2, ••• , n .. , i.e. two rows
~J
and three columns, with (possibly) unequal numbers in each subclass.
In (3.2),
(a (3)21'
which has 12 parameters, among which only
contrasts are estimable, if' all
which has
n..
~J
6
(a
(3)22'
(a (3)2~'
independer::t linear
> 0 ; and in (3.3)
6 estimable parameters if all n .. > O.
A.YJ.y hypothesis
lJ
that can be expressed as some linear ftLYlction of tl1e parametel"'S in
1,
can also be expressed as a linear function of' the
?/ t s.
A
com~
plete and detailed description of testing hypo't:.t:teses in uniYariate
analysis of variance, in terms of (3.2) can be found in a book by
Roy
Ll9f>17
and an article by Roy and Gnanadesikan
[19527.
Some
comments on testing for (3.3) are in a paper by Elston ~YJ.d Bush
L19627.
9
For the remainder of' this dissertation the development of' the
necessary mathematics depends on computing sum. of' squares through
the estimates of' the parameters in (3.3); thus, (3.1) and (3.2)
will only be used for illustrative purposes.
In model I, any hypothesis can be expressed as
C'
sxt
where
the
y
=:
til
0
sxl
,
C'
vs
f
I
0,
C' - matrix in terms of its coef'ficients reflects the
particular hypothesis to be tested and is of
s.
r~~~
Since
A is
of' f'ull rank in (3.3), the customary F statistic for testing HO
as shown by Roy ["19517 is
=
F
where
s
[1'A(AtA)-lC ["C'(A'A)-lQ7- l Ct(A'A)-lA'~/S
,
(;;:.';;:. - ;;:.'A(A'A)-l A'z)/ne
is the hypothesis degrees of freedom (d.f.) and n, =n-t .is .
e
the error .'d.f'.
The sum. of' squares for hypothesis, shown between the curly brac:kets of' t..he numerator of' (3.5) can be
= ~f
SSH
where
1\
vrri tten
as
C ["C' (A f A)-lQ7- l a' ~,With
txs
sxs
sxt txl
iit
1
I =I' A(AtA)- ,
f
t..~e
sample estimate of
s
r,
(3.6)
d.f.,
and
1\
E(l) = r .
1\
Note that I
is simply a vector of t..he reea:.11 values from each non-empty
cell of' the design.
Defining
=
txt
Q
C["c t (A t A)-lc7- l C' ,
-
it is easy to see for different hypotheses, say HO: Cj Y
j
1, 2, ••• , that in
~
0 ,
=
(3.8)
10
as
j
varies only the
Q matrix is altered.
Regardless of the
total number of observations in the experiment, the dimension of the
Q matrix depends only on the number of non-empty cells in the experiment.
3.3
Constructing O'-matrices for Model I
The main purpose of this section is to show that all the sums
of squares of interest, which come frrnn a
cO!L~ected
design,
c~~
be
found either directly in terms of (3.6), or through some linear combination of SSH's.
Although some of the SSH's to be described below
are com.putationally very simple without the use of matrix algebra,
it is the associated C'-matrices that are necessary for the theoretical development•.
For illustrative purposes, consider the 2 x 3 classification
shown in (3.4).
Using the convention that a dot instead of a sub:5
script denotes a total over that subscript (E y..:: y. ), the obj=l ~J
J..
servations for the experiment are:
Columns
I
,
.
2
:5
Totals
1
YIL
Y12.
Yl:5.
Y1 •.
2
Y2l.
Y22.
Y23.
Y2
y
y
y
Y
Rows
Totals
.1.
•2.
.3 •
..
... =G
11
Note that in this example and throughout the dissertation the order of the :Parameters, for the puDpose of constructing matrices,
r il'
will al'ways be
7
12
, •.. , 7
21
, 7
22
The numbers in
, .•• .
each subclass, and the corresponding parameters are shown below:
Columns
nIl
n
n
n
Parameters
Totals
n
12
13
nl.
711
712
713
23
112 .
7 21
722
7 23
Rows
21
Totals
n. l
In (3.3), t
= 6,
1 1
o0
Af
b:xn
=
o0
o0
n
22
n. 2
and the
1
·
. . ·0
. .·0
• • • 0
~
nIl cols
n
n.)
Af
and
(A'A)
matrices appear as
1 1
...0
.1
o0
o0
·· ·0
· · ·0
o0
o0
o0
.
· · ·1
o 0 · .. 0
~
o0
o
0
1 1
0
~
n
12
cols
nIl 0
(AlA) ""
6x6
= n
n
cols
22
0
0
0
0
0
0
0
0
0
0
0
0
0
n
0
0
n
0
0
0
0
u
0
0
0
0
n
0
0
0
0
0
X2
13
21
22
1 1
n
23
·0
· 0 .
·•· ,
·.·0
· 1
cols
0
n
23
In the general two--way classification (expressed as the model
in (3.3)), (A fA) 'Will be a
diagonal.
txt
diagonal matrix with
n .. 's on the
~J
12
The sum of squares with one d.f. due to the mean, denoted as
M, is
= G2 /n
M
In terms of (3. 6) if
(3.9)
then
7 -1
r"O'(A'Atlo
- -m
-rrr-
==
_~. -m
a
lin,;
== G
and
M
=-A"J t
r.c+w(A 'A)
0
-m L
t
-1
7 -1 -m
c' -A
r ::. G2 In .
0
-m-
As usual the sum of squares due to rows (El', with two d.f., uncorrected for the mean and unadjusted for columns, will be
R =
+
The O'-matrix for (3.6) that will yield R is
C'
ru
=
(3.10)
Also the C'-matrix for the sum of squares due to columns
with three
d.f~
(C),
uncorrected for the mean and unadjusted for rows
is
0
n
0
n12 0
0
n
0
0
0
0
'It
0
--11
C'
cu
=
n
13
21
0
0
22
0
(3·11)
n
23
The uncorrected sum of squares f'or subclasses is
s ::
2
3
Z
L:
j==l
i=l
matrix ==
I
6x6
(y;.
.LJ .
In .. ),
~J'
with six d.f. where the C'-
13
The interaction sum of squares, I*, with two d.f., which is
adjusted for row and column effects as obtained from the method of
fitting constants or. the weighted squares of means Yates'
["193!J:.7,
can
be found by us ing
-1
o
-1
1
o
-1
-1
o
The general construction of the
~J
C' -matrices will be explained
rc
at the end of this section.
With linear combinations of M, R, C, S, and I*, it is possible to obtain other meaningful sums of squares for the twO-Y1aY classification.
For example denote
R* as the sum of squares for rows,
adjusted for columns, with one d. l' ., when interaction is not ineluded in the model.
This is what would be obtained for adjusted
rows on the method of fitting constants.
Then for the
2 x 3
classification,
R*
=
S - C - I*
Corrected sUIas of squares for R, C and S are obtained by
just subtracting M from each.
It is possible to calculate row and column sums ot squares
as weighted squares of the means (see Yates
L193'27 and
["194§}) in terms of the parameters in (3.3).
the rows and cobxmns in the
2 x 3
Snedecor
The Ct-matrices for
classification are shawn below
and a proof of the identity of the sum of squares by means of (3.6)
with the weighted squares of' means will be given after the development of the C'-matrices f'or the
r x c
classification.
14
Testing the hypothesis that all the row means are equal is
=
or
(R**
For the weighted squares of means for rows
with one d,f.)
the C'-matrix is
c~
=
(1
1
1
-1
-1
-1)
Testing the hypothesis that all column means are equal is
or
1'23)
=
0
+ 1'23)
=
0
(I'll + 1'21)
(1'13+
(1'12 + 1'22)
(1'13
For the weighted squares of means for columns (C**, with two
d.f.) the
C'-matrix is
Cc1
Although the
=
1
[0
S'LmlS
o
-1
1
o
1
-1
o
1
-lJ
-1
of squares described above are for a 2 x 3
classification, it is easy to see the generalization of the C'matrices in (3.9), (3·10), (3.11), (3·13), and (3,14) to a
classification.
n .. 's
~J
~~
lxrc
For e:xanr.Ple in a
r x c
r x c
classification (all
> 0; here rc = t):
= (nIl'
n12,···,nlc ; n2l , n22,···,n2c;···;nrl,nr~,···,nrc)
15
n
11
n
n
12
0
1c
0
0
0
n
0
0
0
0
C'
=
ru
rxrc
n
0
11
0
n
C'cu=
12
0
21
n
0
n
22
0
0
n
0
0
21
2c
0
0
n
22
0
0
0
0
0
0
0
0
0
0
0
o ... n
0
n
0
0
n
r1
r2
.. n
rc
0
0
r1
n
0
r2
cxrc
•
0
C'r
0
n
1c
0
0
n
0
2c
0
n
rc
o
-1-1
-1
1
o0
o0
o
-1-1
-1
o
1 1
1
-1-1
-1
1
1
1
o
0
o
o
o
o
1 1
o
o
o
o0
1
o
o
-1
1
0
o
-1
1
0
o
-1
o
1
o
-1
0
1
o
-1
o
1
o
-1
o
o ...
1
-1
0
0 ... 1
-1 ... 0
0
1
-1
=
(r-1)xrc
=
c
{c-1)xrc
C'
The corresponding sums of squares would have respective d.f. of
r, c, r=l,
If any
c~l
.
n .. = 0, the column associated with that
~J
nored and the above natrices are compressed.
C'c
n .. is ig..
~J
Note that for
C' and
r
one must be careful that the orthogonality property with res-
pect to the other factor is preserved.
'I'hat is, the elements in
1,
16
each row must always add to zero.
C~-
struct
and
C~-roatrices
In this way it is possible to con-
for cases where some
n
ij
== 0, and there-
by extend the weighted squares of means procedure.
Now it will be shown that in a two-way classification when all
n
ij
> 0, the sums of squares from (3.6) with the Cf-matrix
identical to the weighted squares
V1
i
= 11
c
Z
. 1
J=
0:['
means for rows,
=1,2,
i
(lin .. ) ,
J.J
C~
is
Defining
... , r
j = 1, 2, .. "
c
then the weighted squares of means for rows is
r
Z
i=l
vi.
[C
L;
j=l
J.
A
J
2
Y .
i J_
.[~
. 1
J.=
Ii.
J.
(~. 1 ~J.J..
J=
)J/ ~
• 1
J.=
-
Ii.,
J.
Since the weighted squares of means is in terms of
c
A
row totals of subclass means, denote t.
L;
y. .
Thus for
with r-l d.f.
J..
A
_yf Cr
=
=.
1
J=
J.J.
1 - t
7,
Lr-t l . - t r. , t 2 . - t r. , ... , t r-.
r.-
and
where
D(l/vl.) is a diagonal :matrix with l/tvi , i = 1, 2, ... , r-l,
J.
on the diagonal and
.J.
is a vector of' ones.
Using Roy f s f195§1
101Ork on inverting patterned ma.:t,rices, and defining
(W , W , ... , W _ ),
l
2
r 1
it is seen tlat
Ii'
=
17
I'
WI (~ IVi - Wl )
-
vI
1 1'12
2
-W
1,'1
-Ttl
Wr-l
1
r-l
I'
I'
= (l/I:
vI
. 1 i
1'1 (I: H -Ttl )
2. i 2
2
•
- VI2 Iv1
)
2
2=
I'
(2; 1'7. -1'1
VI.
r-l i
2
r-l
)
Now by combining terms
+ ...
. 22
Hr-t
I'
=
I'
2
2
I'
r.
rc
I'
H. w. 7/I:VT
2. j • 2 J- i i
+2L: I: t. t
i~j
I'
L: \'1. t. - (I: 1'1. t . ) /I: 1,(.
J.. 2 •
•
22 • •
:L
i
2
J.
which is the weighted squares of' means for rows.
lopment can be
sh~n~
An analogous deve-
for the weighted squares of means for columns.
It is possible to
down a set of rules for constructing in-
~rrite
teraction matrices, such as (3.12) f'or a cO!'..nected design.
Basically
it consists of iITiting down all possible one d. f. interaction contrasts and then selecting a basis which is denoted as the Ct =matrix.
rc
Any basis vnll yliela the same
Q matrix as calculated by
(3.1).
example in the 2 x 3 classification, there are three possible one
d.f. interaction contrasts.
They are shQ'tm as
V',
(It,
(t.
-=1 1..2 ..;.;3-
For
18
Cell
11
12
13
21
22
23
If.
1·
1
-1
0
-1
1
0
1M,
1
0
-1
-1
0
1
('3'.
0
1
-1
0
-1
1
Note that i
2 - Ii =13 ;
is given in (3.12).
<1.1 ' I 2 ),
hence, one basis is
However
<!.2'
which
.f.3 ) and <'ll' 13 ) are
also suit-
x c
classif'i-
In a general
able f'or interaction C f -matrices.
rc
I'
cation if' all the possible one d.f'. contrasts are listed as
f , f2J ... , f ' then it is known f'or a connected design that
l
c
there will be
n.
J.n
= t -
I' -
can be selected as a basis.
C
+ 1
linearly independent If s that
One way to find
n.
such ifS
J.n
to f'irst write dawn all the its in the design.
The general
is
I'
x c
case would appear as
Cell
11
13
lc
21
22
23
2c
1
12
...1
It.l'
0
0
-1
1
0
0
.f'2'.
1
0
-1
0
-1
0
1
0
1.3'f .
0
1
-1
0
0
-1
1
.,
First ta.1te any
another
Ilf)
.!.
say
0: ,
AJ
Now eliminate any If
three combinations.
i.
f
as your f'irst rCJ\il in
emd" compute
o! +
A:L
. ..
rc' say
or. , (/: ...
f:
-J
0
0
0
0
0
0
0
0
0
0
0
ii,
and
select
fI': _ AJ.
(/!
AJ
.
in the set that is equal to any of above
Then select a third If , say
l.k'
and take all possible additions and diff'erences of'
.f.k.'
rc
.. .
0
Of
AJ' AJ.
rl r2 r3
i ~ j ~ k,
(Ii,!j) 'With
If' any of' these additions or dif'f'ere:'1ces appear as any of' the
remaining It's, then they are to be eliminated from the set.
...
0
19
Continue this process until
lltn
Itf s
CI
the C I -118 trix f'or interaction.
rc'
only of
are found.
Call this set
i'
Since all the
lS
0, 1, and -1, the addition and subtraction of
will yield all other linearly dependerrt
i
l
if
If some
IS.
consist
1
n
s
::: 0,
ij
then those cells cannot be used in constructing an interaction
contrast.
Another way to find the contrasts in
and
C1 matrices.
c
Denote the elements of the
e~ and the k-th row vec'tor in
= 1,
tively; f
is through the
2, .,., t.
o~
as
m~th
c'r
row vector in
(Cr)mf' and (Cc)kf respec-
Then for the case where
n ..
~J
>
°
(all
i,j,), the row vector.
L, (0 c )kl' (c rllJ.<::
L,..,(o cc'
)12"" J (0 rm
) t(ec )kt-7 ,
-r(O r.u;w..
given any
m and k, is an inter'"dction contrast.
(r-l) (C-l) combinations of
be formed.
When some
those combinations of
tion contrast.
n ..
~J
By taking all the
m and k, all the rows of
= 0,
m and
Of
rc
will
all tr.tat is required is to eliminate
k
'which do not make up an interac-
Note that in order for the above multiplication to
work, the cont2asts in
and
CS
c
mast be linear forms.
3 . J+ Variance COilll)onents in Model
II
Model II 1irlll be the satl](:l as that pro!losed by Eisenhart £194,]].
In the two-'way cla.ssification, the :mathenatical model is
y. 'k ::: u + r, + c, + (rc)., + e, '}
~J
.
~
J
~J
~J c
'W'here
is a constant)
means zero and respective variances
anct e ijk are all lUD vr.t th
t)
2
2
cr ) ~c and ~, If (3.15),
c
rc
e
20
is rewritten in terms of subclass random variables, and subclass
A
means
C:y.1..
j
= y.1. j)
the mathematical model is
A
...
Yij = Yij
where
+ e ..
J..J.
are the subclass random variables
YJ..' '
J
= ~+ri+c.+(rc)
.. ;
J
lJ
average error for the i, J·-th subclass, and
e ..
J..J.
A
~
.. has a
lJ..J
multivariate normal distribution 1vith variance-covariance matrix
which shall be denoted by V
l'iIo-iv e.. ~ NID (,0, 0'2/n •. )
J..J'
e J..J'
txt
It is sho\~ in Section 3.2 that all sums of squares for (3.3)
can be written in the quadratic form of (3.8).
Using results ob-
tained from vlhittle ~195~7, Lancaster ~195~7 and Searle ~195~7,
=
E(SSj)
Var(SS.)
J
=
COV(SSj' SSk)
where
tr(A)
(3.16)
trey Qj )
2tr(V Q.)2
J
= 2tr(V
Qj ) (V~)
(3.18)
is the trace of matrix A.
For the general multi-way classification, say k
cations, there vn.ll be
(including
~
O'~).
e
k
2
classifi-
variance components to be estimated
However since the within cells sum of squares
is independent of' all other between cells sums of squares,
0'
2
e
need not be included in the set of variance components to be estimated.
The sum of squares f'or error (SSE) divided by 0'2 in
e
2
the analysis of variance has a X distri"bution vTi th n = n - t
e
d.f.; hence, the variance of the estimate of
2cr
1+
in
e e
0'
2
e
is
21
One procedure
lect a set
~or
o~ sun~ o~
squares, such that their expected values
o~
are linear in terms
estimating variance components is to se-
the parameters (the variance components),
and that the produced set of linear equations uniquely determine
the estimates.
Let
= II
pXQ.
t*
pxl
squares,
Thus
~*
of another set of
H
=I
(p
pxp
= q),
t , q ~ P., denote such a set of' s~ of
qxl
can also be expressed as a linear combination
s~
then
of squares, through the matrix
H.
If
=t
t*
Define the expected value of
t* as
=
p
v
,
px(p+l) (p+l)Xl
where
p
= 2k
(including ~2)
and P
e
2
(3,19) .
v
- 1,
contains the p+l true variance components
is the coefficient matrix that
is considered as the last element in
If ()e
satis~ies
v, then
(3,19) can be rewritten as
(3.20)
==
.where
P
v +
pxp pxl
p
is partitioned, P '"'
~
2
e
m
,
pXl
LF !!!:-7.
Substituting
Ht
for
(3,20) becomes
H
l'IOIv denoting
v
E(.!) "" p ! +
()
2
ill
e-
as the vector of estimates of !.
LE(::!J = '!!.],
t-l!.-,
22
and replacing E(~) by its sample values, the estimating equation
Ht
is obtained.
= Pv+
Solving for !
AtJ.
cr m
e -
yields
1\2
cr m 7
e--
.
(3.21)
However there is more than one set of sums of' squares that
will satisfy the above conditions.
Thus the questions becomes,
which set should be used for estimating the variance components?
3.5 Variances of Estimates of the Variance Components
One ""'lay of evaluating which set of sums of squares will yield
the best estimates of y , would be to select the set for which var
is smallest.
Taking the variance of both sides of
(!)
(3.21)
gives
var(v)
pxt
var(t) Ht + (2cr4/n ) m m'
PxP - pxq
qiq qxp
e e pil lxp
= p-l;- H
cov(~,~)= 0,
Since
7 p-l
(3.22)
pxp
there is no cross product term on the right
hand side of (3.22).
It is always assumed that
t
never contains
any within subclass sum of squares.
In any given analysis of data following a multi-w..y classi-
..
fication,
P
and H are fixed matrices, and
~
is a fixed vector,
The difficult computation is to find var(~) (a qxq matrix).
if it is possible to express the sums of squares in !
of the
C'-matrices in
HO'~rever
in terms
(3.6), then by using (3.17) and (3.18), it
is easy to construct var(!).
One of the many advantages, computationally, in the use of
(3.8) with (3.17) and (3.18) in finding var(t), is that the largest
23
matrix,
V
has the order of the number of non-empty subclasses
txt
(t) in the :nru.lti-way classification. Also note that Q
is of
txt
the general form
C
B
C'; hence, since usually s < < t
txs sxs sxt
using the identity tr(AB) = tr(BA), equations (3,17) and (3.18)
become computationally simpler.
It should be noted that
var(!), for a particular experiment,
will be a function of the true magnitude of the variance components
and the arrangement of the subclass mxmbers.
Searle ~195~7 considers a special case of (3.22) for a twoi~y
classification.
Chapter IV.
This will be discussed in more detail in
24
CHAPTER IV
CCMPARISON OF THREE PROCEDURES FOR
ESTJNATDTG VARIANCE CCMPONENTS
4.1
Introduction
As mentioned in Chapter III, there are many different sets
of sums of squares that could be used to determine !' and
hence,
This chapter will consider only three such sets
yare!) .
for the general two-way classification.
!*h
=
~
-
=[~
-1
J [1
0
0
0
1
0
-1
-1
-1
-1
1
1
M
- M
- R- C+M
"""
J
R
C
S
M
where
!),
The first set is
0
1
-1
0
0
1
-lJ
-i '
=
!h
UJ
This set is described by Henderson £1952.7 and Searle £195'§..7 in
the Method 1 procedure and is comprised of unadjusted sums of
squares.
Flor this dissertation the estimating procedure will be
denoted as Procedure E.
..
The next set is
t*
-a
:::
[R*]
C~
~''''
I*
:::
t~
-1
1
0
1
0
0
-1
-1
1
J
'D
.l.t
C
S
I*
25
where
R*
=S
and C*
- C - 1*
=S
- R - 1* , from Section 3.3
and
H
a
t:
=
-1
R
1
0
1
0
0
-1 ]
-~
,
t
a
C
::::
S
I*
This set of sums of squares (~*a) has been used at the
Department of Experimental Statistics, North Carolina State
College for a number of years and is based on the method of
fitting constants, Yates ~193~7.
Method 3 in Henderson ~195L7.
It is also described as
In this diss~rtation it will
be denoted as Procedure A.
The last set to be considered is
t*
~ b
Bb::::
where
1.
= t
-b
::::
[=]'
1*
These are the weighted squares of means des-
cribed by Yates ~193~7.
The estimating procedure from
~
-b
will be denoted as Procedure B.
The main distinction between
t*a
,~
t* interaction is igno:r'ed in computing
-a
in
~,
and
_t. ,. 1.8 that in
b
R*, and
C*, while
R**, and l"':** are obtained with interaction iIi. the modeL
26
4.2
Est~mating
and var(~)
Equations for v
For Procedure A, we can let
E(t* )
-a
::::
(J
(r-l)
2
r
0
a
0
a
a4
(c-1)
(J
0
0
a <::;
noIn
(J
a
l
3
2
./
2
c
2
rc
2
(J
e
::::
2
0
a
0
a
aI,.,.
(J
2
c
0
0
a
(J
2
rc
a
1
3
(J
2
5
r-l
r
+
c~l
(J
2
e
nin
or in matrix notation
P
E(t*
-a ) = a
v
m
+
(J
2
e
Solving for the estimates of ! ' as in (3.21)
v
-a
=
r
-1 H t
P
a
a-a
"2
(J
e
and from (3. 2:c: )
(~.• 1)
var(v )
~a
where
yareR)
var(t )=
-a
cov(R,C)
cov(R,S)
cov(R, I*)
var(C)
cov(C,S)
cov(C,I*)
var(S)
cov(S, I*)
var(I*)
,
(4.2)
27
For Procedure B, we can let
0
b
0
b
b
0
0
b
E(~) =
l
3
a
cr
2
2
r
2
cr
4
c
2
cr
5
rc
r-l
+ c-l
cr
2
,
e
n,
l.n
or in matrix notation
+
cr
ill
-
2
e
Again solving for the estimates of
v.=P.-1
-b
b
V ,
Le.
!b
L.':'b
r.0
t - c r"2m
e
and
where
va:r(R**)
var(~)
=
cov(R**,C**)
cov(R'**, I*)
var(C**)
cov( C**, I*)
var(I*)
For Procedure H, usi:ng results on Searle L195§.7,
E(~*h) ""
hI
h
hlj.
h
-h
4
2
5
~h2
1,
A_
2
r
2
cr
3
cr
'h
-6
h
cr
rc
7
or in matrix notation
E(~*)
::::
Ph v
c
2
2
+ m cre
we can let
r~l
+
cc;l
n,
J.n
2
cr
e
28
Solving for
!h'
(4.4)
and
(4.5)
where
var(R)
cov(R,C)
cov(R,S)
cov(R,M)
var(C)
cov(C,S)
cov(C,M)
var(S)
cov(S,M)
(4.6)
var(M)
Pa , P , Ph' var(~a)'
b
The computational details for obtaining
var(~) and var(.!:x) are given in Section
4.3
4.4.
and
Although
Searle LI95~.7 has worked out all the computational details for
(4.4)
and
(4.5),
they will be repeated in this dissertation as an
illustration of one special case of' a more general procedure.
sides some of the matrices in Procedure H are
4.3
r~eded
ance-covariance matrix of
A
J
2
for Model II.
Y is the vari-
J
For the two-way classi-
V is comprised of' four parameters
Since the four parameters appear additively in
to separate
in Procedure A.
Calculating Ph' Pa and Pb
As stated in (3.16), E(SS,) == tr(VQ.), where
fication,
Be-
222
0-r
'
0-C '
o-rc and
2
0-e
•
V, it is possible
V into four matrices, where each matrix is only as-
sociated with one of the parameters.
V
txt
=:
2
(J",,,
Vr
txt
Then
2
+ (J"2 y
+ (J"2 V + 0- 11
tSd rc
t§hC
t~tte
(4.7)
29
For the case where all
n .. >0 (t=rc) , it is easy to see
lJ
that
Vr =
txt
Vrc
txt
J
0
cxc
cxc
...
0
I
I
I
cxc
exc
cxe
exe
0
J
0
cxc
cxe
cxe
V
c ""
I
I
I
cxc
cxe
exc
t:x:t
=:
0
0
exe
cxe
,
I
txt
•
V
0
•
I
I
cxe
cxe
(A' Afl
txt;
=:
e
txt
J
cxe
,
J
...
I
cxe
is a ma<trix of all ones,
0, is a matrix of all zeroes and I is an identity matrix.
cxc
fore (4.7) becomes
V
:::::
txt
2V
r r
(4.8)
(J"
When some cells have
V
n
ij
=:
0, the associated rows and columns of
will not appear for those cells, and the
in V
and
r
There-
,T
and
I
matrices
Vc will have different dimensions; however, the
pattern shown will be the same.
Now in Procedux'e H, f'or
A
SS. ""
oJ
A
:( Qj I
has
,,
a has
;
R
S has
"" (AlA);
M has
::: (c a' )/n
-m -m
where these matrices are defined in Section 3.3.
30
For R-M, corrected rows,
CI fJel+ 'tr(V Q
a \7elc
E(R) - E(M) = L'tr(Vr Qru )~tr(V
. r~
r L
c ru )-tr(Vc1mL
+ ~tr~u-tr~7~;c+ ~tr(A1A)~lQru-tr(AIA)-1~7~~
Simplifying (4..9 L i:t is seen that "ehe followi.ng relB;tionships hold
Searle's
Notation
["195§,7
Identl ties
tr(Vr Qru )
~
tr(e' V.C B )
ru r ru ra
~ n
k
'" tr(e'ru Cru Bru )"'" 1£12.r above
tr~ ::; (e'
C )/n
-m. -m.
tr(A1A)=1C\
~ru
::
i:
j
n;.
/n
.LJ
7~
~ trr-a' (A1A)~lC B
L ru'
ru ru-
'I'heref'ore
E(R)
Z
i
=
E(JI.1)
tr
2
31
For e-M, the additional expressions in E(e) - E(M) are
Identities
Searle's
~195~7 Notation
2
iir(V Q, )::: tr(e' V e B )::: Z (Z n . . /n .)
r cu
cu r cu cu
j
i lJ oJ
tr(V Q, )::: tr(e' V e B )
cu c cu cu
c cu
::: tr(e' e B )
cu cu cu
::: tr
I
=:
k
2l
n
= k21
:::
c
cxc
Therefore
For
S-R~C+M,
the only unknown quantity is the coefficient of
2
and it is just n-k].2-k2l +11: •
3
by Searle
195§.7.
crrc
Finally then, as is also shown
L
n
Ph
-
1
n
-
k
l
k
21
k
k1"'2
k21 - k l
k
..
k
=
2
""
12
-
k;e
k
-
k
2
k
2l
-
=
kJ2
Xl =
k
A
For Procedv..re A, I*
k
2
r'
A
j
12
Z , where
rc
Q
3
-
k
2i + k3
32
For rows,
E(R*)
= E(S)
- E(C) - E(I*)
= 'tr(V Q )-tr(V Q )-tr(V Q )7rl+ 'trQ -trQ
L
l'
s
l'
cu
l'
rc-
L
l'
s
cu -trQrc-70"2rc
2 2
= (n-k2l )0"2
+ L·
r-n-k21-tr(c
' Crc Brc-)7cr-rc +(r-l)cre
l'
rc
Note that treyl'Qrc )::: 0, and that r-l=t-c-n.In
For columns,
E(C*)
= E(S)
- E(R) - E(I*)
2
2
= 'tr(V Q )-tr(V Q )-tr(V Q L'7cr + 'trQ -trQ -trQ.
L
c s
C ru
c rc
c L
S
ru
rc-7crrc
2
::: (n-k / )crc + L·
I n-k -tr(C ' C B
rc rc rc
12
12
)J + (c-l)cr:2e
•
In E(C*), note also that treyCQ.1'c )::: 0 •
For Interaction
E(I*)
..
::0:
2
2
(trQ~c )cr1'0
.. + 'tr(A' A)-lQ.
1 0.
L
1'0-1 ' e
2 2
=.
~r(C' C B )0"
+ n. cr •
1'0 ro rc re
In e
Therefore, defining
n - k2l
P
a
:::
k
30
:::
tr(e'1'cCrcBro )
0
0
n -
0
0
k2l - ~O
n - k
12 - k30
n -
k
12
~O
(4.10)
33
Searle and Henderson ~196!7 give a simplified procedure that
can be used for finding
P
but would not be useful in this
a
general matrix approach for finding
R** has Q
r
= Cr
C 7- 1CI
Lr-C'(A'A)-l
r
For Procedure B,
var(v).
-a
= Cr
B Ct
J- 1C'c -= Cc
Bc Cct
r-1
r-
C** has Q = C r-C'(A'A)-l C
c
c Lee
r
r
Then
E(R**)
'7 2
= L tr(Vr Qr-:-)/~
r
2
+ (trQ)~2 + Lr tr(AtA) -1Q '7
/~
r rc
r- e
and
E(C**) =
2
'7
+
L tr(Vc Qc l/~
C
L
-1
'7
rc + tr(A'A) Qc-/~e
2
(trQ)~2
c
2
= Lr-tr(C'V
C B )7~2 + ["tr(C'C
c c c c- c
c c Bc-)7~ rc +
Since
e
E(I*) is the same as (4010),
0
tr(C'V C B )
r r r r
P
b
(c-l)~2 •
=:
0
tr(C1V C B )
c c e c
0
Henderson et ale
tr(C'C B )
r r r
0
tr(C'C B )
c c c
tr(e' C B )
TC rc rc
L195]] gives the elements of
P
b
for the
special case of n, . > 0 (all i, j).
~J
simple functions of the
n .. t s
~J
In all three procedures
~
m'
=:
(r-l, c-1, n. ) .
~n
4.4 Calculating
var(~hL var(!a)
and var(~)
Using the V-matrix and the Q-matrices from Section 4.3 in
equations (3.17), and (3.18), it is an easy matter to write down
all the elements of the three sums of squares variance-covariance
matrices (4.2), (4.3), and (4.6).
Again it should be mentioned
that for Procedure H, Searle ~195§7 has 'worked out explicit ex~
pressions for the elements in var(t ), (4.6) in terms of the n .. 's
h
~
and the four parameters
2
cr-, cr-, cr
r
c:
rc
?
?
.
~J
0
and crt.:.. 'Ihe equivalent
e
expressions are shown below in terms of the general development
described in Section 3.4.
For Procedure H, the ten unique elements of
var(R)
= 2tr(~QI'u )2 = 2tr(CI'uVC ruBI'u )2
var(C)
= 2tI'(VQeu )2 = 2tr(C'cuVCeuBeu )2
are
I
cov(R.C)
/
var(s)
var(~h)
= 2tI'{~Q
" I'uVQcu ) = 2tr(C'cuVC ruBruC'ruveI'uBI'u )
= 2tr(VQs )2 = 2tr(VA'A)2
= 2t~(C'
VAIAV~ B )
- , I'u .. - -- ":ru I'u
cov(R.S)
/
~
2tr(VQI'uVQ)
S
covCe,s)
=:
2tI'(-ilQ VQ) '" ~~tr(CI 'lA' Aye B )
cu s
cu
eli cu
eov(C,M) = 2tI'(VQeliVO)
"""m
cov(S,M)
= 2(Ot
V ecu Bcu C'eu V -m
e )/n
-;n
= 2tr(VQs V ~) ""
2(~ VA' A V Sn.)/n
35
Now in Procedure A, the first six elements shown for var(t )
-a
are the same as above and the new last four elements are:
var(I*) :::: 2tr(VQrc )
2
:::: 2tr(C'rc VCrc Brc )
cov(R,I*) :::: 2tr(VQI'uVQI'c )
2
= 2tr(C'rc VCI'uBI'uC'I'uVC rc Brc )
cov(C,I*) :: 2tr(VQeuVQ. rc ):::: 2tr(C'I'c VO euBeuC'euVC rc Brc )
= 2tr(C're VA'AVercBrc ).
cov(S,I*) :::: 2tr(VQsVQrc)
Finally for Procedure B, only var(r*) need not be repeated.
The remaining five elements are:
var(R**) :::: 2tr ( VQ)
r
2 2
= 2tr(C' VC B)
r
r r
var(C**) :::: 2tr(VQ )2 :::: 2tr(C' VC B)2
c
c
c c
cov(R**,C**) :: 2tr(VQ VQ ) :: 2tI'(C'VC B elve B )
r e
err r c c
:::: 2tr(VQr VQrc ):::: 2tr(C'rc va r Br c've
cov(R**.I*)
,
r rc Brc )
cov(C**,I*) :::: 2tr(VQ VQ ):::: 2tr(C' VC B elvc B )
c rc
rc c cere rc
Note that the matrices have been arranged so that there is
a minimum of computation.
The computer program. descr:ibed in Appendix A evaluates vaI'(v. ),
-n
var (v
) and var (v:.J,
for corrib:inati.ons of different sets of nl.J
..
-a
'~l
and parameter values
. 2
l~
r
,
2
~
c
,
2
~
rc
,
2)
~
e
.
Tbe
program was written in
order to compare the three procedur'es described above, and also to
investigate experimental designs for their efficiency in simultane=
ously . estimating
~
2
r
a:nd~
2
c
36
4.5 A Computer Comparison
Using an UNIVAC 1105, a computer program was written to compute the variances of the estimates of the variance components, in
a two-way classification, for procedures A, Band H.
The computer
program listing is given in Appendix A, and the results from the
various computer runs, by design, is given in Appendix B.
The
two main purposes of maki:ng a computer eomparison are:
i)
ii)
To compare the 'three estimating procedures,
To compare some experimental designs for estimating
variance components.
Because the results did not indicate any general type of invariance over the changing n. , patterns, or over changing para~J
222
2
meter sets (cr , cr ,cr ,cr :: 1), a limited number of com'Puter
r
C'
rc e
-
runs were made.
However the data from these runs are suff:icient
to indicate the difficulties encountered in attempting to choose
an optimum procedux'e or design.
The computer runs 'were divided into two groups.
T'he purpose
of one group was to campare possible experimental designs for the
use of estimating varia:.'lce compone:'1ts.
For these designs there
were always six rows and six colul1lD.s (the largest experimental
design that the program. woulcl acc:omoda"Ge).
1J:r..\e three basic types
of designs, S, C and L, are given in 11able 4.1.
':rhe
Sand
C
type of design are variations of Gaylorls ~196Q7 balanced disjointed rectangles design.
t..'I1e
L design.
Gaylor
£19627
has suggested and analyzed
'!he complete tabulation of computer results are
given in AppendiX B, for each of' the designs shown in Table 4.1.
37
2
2
Note that f'or the parameter values in Appendix B, cr > cr •
r - c
Because of' the symmetry in most of the designs it was decided not
to do the redundant combinations where
with replication, cr
2
e
cr
2
c
> cr2r • Also for designs
~~e other major group is a set of 3 x 3
= 1.
4.2.
designs which are shown in Table
Equal
S 16
S 22
1
1
1
1
1
1
1
1
0
0
0
0
2
1
0
0
0
0
1
1
1
1
1
1
1
.....
1
1
0
0
0
1
2
1
0
0
0
1
1
1
1
1
1
0
1.
1.
1
0
0
0
1
2
1.
0
0
1
1
1
1
1
1
0
0
1
1
1
0
0
0
1
2
1
0
1
1.
1
1
1.
1
0
0
0
1
1.
1.
a a a
1.
2
1
1
1.
1
1
1
1.
a a
0
0
1
1
0
0
0
0
1
2
c
18
c 24
1.
1
1
0
a
0
2
1
1.
0
0
0
1.
1.
1.
0
0
a
2
1
1
a
0
0
a
1
1
1
0
0
a
2
1.
1
0
a
0
0
1,
1
1.
0
0
0
1
1.
2
0
0
0
a
1.
1.
1
0
0
0
1.
1
f_
a
0
0
1
1
1.
0
0
0
.J...
"I
1.
2
0
0
L 20
L
0
:24-
1
1
0
0
0
0
1
.1
0
J~
1.
0
0
0
0
1
"I
.....
0
a
a a
1
1.
0
a a
0
2
1
0
0
0
a
1
1
a
0
0
0
1
2
0
0
0
0
1
1
1.
1
1
1
1.
1
2
1
..:L
1.
1.
1.
1.
1
1.
1
1.
1.
1
2
1.
1
0
,
The n .. arrangements for the 3 x 3 designs
Table 4.2.
l.J
Equal
D
18-1
D
18-2
2
2
2
1
1
1
1
2
3
2
2
2
2
2
2
2
1
2
2
2
3
3
3
3
3
1
D
18-3
D
2
18-4
1
1
h
1
1
1
1
4
1
1 2
3
4
1
1
1
5
3
For these designs the total number of observations was held
fixed and the n .. patterns were changed.
l.J
In all instances, only
connected designs are considered.
Some of the more striking results of Appendix B are shown. as
relative variances of variance components, for a fixed parameter
set, in Tables 4.4
through 4.8.
In all cases, the variances
for the 6 x 6-equal designs are used as the base of the compariFirst all the variances were multiplied by (n / 36), where
t
is the total nurrilier of observations for the desi~~, then the
son.
n
t
adjusted variances "Jere divide<:1 by the vari.ances i'or the 6 x 6-
equal design.
If a..YJ.y relative variance in Tables 4.~. through
4.8, is less than one, the design for
~hat
particular variance
component is more efficient than the 6 x 6-equal design; and if
greater than one, it is a less efficient design.
39
From the comparison-of-designs viewpoint, the most inf'ormation per sample occurs with S, C or L designs whenever
(j2
and
I'
(jc2 dominate (j2 , and the least information when (j2 dominates
rc
rc
2
2
(jr and (jc • In other words, and 5, C or L design can be much
better than the balanced design for purposes of estimating variance components when (j
Although Tables
2
rc
2
2
is smaller than (j and (j •
r
c
4.4 through 4.8 are constructed primarily
to make comparisons between desi-gns, it. i.s also easy to compare
the three estimating procedures.
From all the data in Appendix
4.3 summarizes the major results for the comparisons
B, Table
of designs 5, C and L, and estimating procedures
Table
4.3. General results from the 6x6 designs S, C and L
Parameter
Combinations
(j2
(j2
I'
c
« (j2
rc
= (j2rc
«
<
=
L
H
(j2
rc
L
A
or C for rows
L for columns
a
A
S or C
A
<
=
(j2
re
>.....
..- (j2
»
(j2
rc
rc
Best
Design
Best
Estimating
Procedure
(j2
rc
> (j2
rc
A, Band H.
[s
a In this case H appears to be slightly better for rows than A;
however, since H gives such a poor estimate for columns, procedure A is generally reconnnended. See Table 4.7 as an
example.
40
Note that Tables
4.4 through 4.8 illustrate each of the four
parameter combinations in Table
4.3. The analysis of the computer
results will be mainly in terms of jointly estimating
Procedures
A and B have the same
var(~rc ),
computed the same ,·ray in both systems.
dures A and B give a smaller
var(~.J
ey2
I'
and
since interaction is
In most situations Proce-
than Procedure fl,
ex~
The
.:,..v
:2
')
ception occurs when
eye
rc
dominates
ey2.
C
ey
r
and
;2
ey •
4.8)
f
,See Table
c
As noted in Table 4·.3, Procedure H is also lJest for estimating
2
ey
I'
and
2
ey
2
ey
c' when rc is much larger than
since Procedure H
car~
2
ey
I'
I)
and
ey~
c
.
However
be extremely erratic under different
n ..
~J
arrangements and parameter combinations, it cannot be a recommended procedure unless the experimenter has some a priori information of the magnitude of the variance components.
The dif-
ferences between Procedures A and B are not ver'y great, a.11.d in
general, Procedure A has variances which are smaller than Procedure B, for designs S, C or L.
The results for the
through 4.8.
3x3 designs are also given in Tables 4.4
As with the 6x6 designs, there are small differences
between the variances for Procedures A and B.
Procedure B has slightly lower variances,
occur when
4.5).
2
r
ey
ey
2
c
are larger than
HOi-leVer in this case
The exceptions seem to
2
Tables
rc • (See
~
ey
It is intersting to note that ch&"1.ges in the
"2
1\2
1"
C
4.4 and
n .. arrange~J
ment do not drastically affect var(ey ), a.11.d var(ey ) for A a.."ld B.
However, as in the 6x6 designs, Procedure H is again erratic.
4.1
An interesting comparison between the 3x3 and 6x6 designs
222
is that when O"rc dominates (J and 0" , H is best, and B is
r
c
poorest for 6x6 designs, while B is best and H is poorest for
3x3 designs.
Procedures A and B have the desirable characteristic that
~
2
var(O"r) remains invariant oyer any change in O"c' and vice versa.
However, since Procedure IT depends upon unadjusted sums of' squares,
it can be seen, from Appendix. B, that, i:f'
:2
fixed, an increasing er
thing is true for
causes an increasing
var(~),
when er'2, (l
COl"O
and er'2 is increased.
r
c
0";,
er;c and er; are held
A;;J
Var«(f~).
r
T'ne same
and er'2 are held fixed
e
This is one of the objections to procedure
.
'2
H, in that it is so easily affected by changes In err
A somewhat analogous situation with procedures A and B is that
for a fixed (f'2 and er'2, an increasing (f2 causes var(~) to inI'
e
ro
r
'22.2.
crease. Also for a fixed er and er , an increasing 0"
causes
o
e
ro
var(~)
to increase. However this effect (in procedures A and
c
B) is also noticed in the "balanced case.
The computer p:r'ogram can also print ouf; the eovaria:nces
between the row, column
and interaction variance components.
However, a preliminary analysis of these covariances, in terms
of' correlations, did not indicate any consistent behavior, so
no further analysis was attempted.
SummariZing the reSUlts, it seems that if' the experimenter
does not have any a priori information concerning the magnitudes
of the varia:.'1ce components, then an S=design would probably be the
42
best f'irst choice.
When there are more rows than columns, or
vice versa, it is still possible to work out some type of' Sdesign.
say r
When the number of' rows equals the number of' columns,
=c =k
an S-design has the desirable f'eature that it
will always have
k-l d.f'. f'or rows, columns and interaction.
The interaction contrasts matrix C: is easy to construct since
rc
there are only k-l interaction contrasts.
For any f'orm of' an
S, C or L design, it would probably be best to use Procedure A
f'or estimating the variance components and then switch to
dure H, if'
2
ro is much greater than 0-r and (f2.
c
2
0-
proce~
Chapter V of' this
dissertation discusses the computational aspects of' Procedure A.
e
e
e
a
Table 4.4. Relative variances of estimates of variance components for
222
2
crr = 4, crc 2, crrc = 0 and cre = 1
:=0
Type of design
and n,
lJ
arr angement
Columns
Procedure
Rows
Procedure
0
A
B
LOO
1.00
1.00
.67
.9.3
.60
.82
CI8
G24
.. 69
.. 95
6x6b ... L20
6x6 ... L24
-
equal
6x6
b
6x6 - S16
6x6 ... S2:2
b
6x6 6x6
-
--
equal
3x3
D18...1
3x3
.3x3 ... D18-2
3x.3 - D18-3
3x3 - Dl8-4
Interaction
Procedure
H
AiB
A
B
1.00
1.00
LOO
.62
.84
1.43
2.. 20
.68
.89
.72
.. 91
25.3
44.0
2.22
3031
.. 63
.82
.66
.86
1.44
2 .. 05
.72
.87
.. 75
.91
23.7
38.7
1 .. 79
3.54
.85
1003
.78
.90
.82
.95
1.34
1.79
.82
.93
,,86
.99
16.. 7
28.7
1 .. 54
4.. 81
1.25
1.38
1.46
1.80
J.,,43
1.25
1•.38
1..26
1.29
1.39
1.25
1.38
1.28
1.30
1.34
1,,25
1.25
2.05
3038
1..59
1.25
1.25
1.26
1.33
1.40
1.25
1..25
1.29
1 ..35
1.36
1..13
1•.34
21.2
54,,9
6.80
1.13
1•.34
1.50
2 0 25
1.,64
H
H
1.00
1.00
-.
a Each variance was divided by the variance for 6x6-equal, then adjusted for
different numbers of observations; e .. g., after the variances for S16 were
divi.ded by those for 6x6-equal.9 they were multiplied by 16/36 •
b For these designs cr2
e
:=0
2 was increased by one •
O~" and crrc
-I=""
\>l
tit
tit
Tabl'8 4.5.
Type of de si gn
and no .
~J
arr angement
tit
a
Re1ative variances of estimates of variance components for
222
2
cr - 16, cr ~ 2~ cr
rc = 1/4 and cre = 1
r
c
Interaction
Procedure
Colunms
Procedure
Rows
Procedure
AtB
H
A
B
H
A
B
1.00
1.00
1.00
1000
1,,00
1.00
516
322
.. 51
.. 69
.55
.. 78
.56
.79
7.. 29
13 .. 0
•73
.. 95
.. 77
.. 98
109 ..
199.
2.22
2.. 99
C2~"
..55
073
.58
.79
.61
.81
7.40
11.3
.76
.. 92
.. 79
097
109 ..
172.
1.79
3 .. 00
6x6 ... L20
6x6 ... L24
.73
.88
.75
.. 87
.79
.93
4.92
7.20
.84
.97
.88
1.02
10~.•
1026
1.26
1.}..j.0
1.26
1.27
1.,41
1.26
1.40
1.26
1.27
1032
1.30
1.3.3
4.55
9.79
2.. 35
1.30
1.33
1.34
1.39
1 .. 48
1..30
1 •.34
1.34
1.40
1.42
6x6
""
b
6x6
6x6
-
equal
6x6b ... C18
6x6 b
3x3
3x3
3x3
3x3
3x3
-- equal
DI8-I
...
...
-
D18-2
DI8-J
D18...4
10~.0
1.32
1<.41
1.41
H
1000
67 02
1 .. .35
1.55
52 .. 9
1.38
1601
1.00
1.54
3.69
1035
1055
1.67
2.. 28
1.,81
...
a Each variance was divided by the variance for 6x6-equal J then adjusted for
different numbers of observations; eog." after the variances for 516 were
divided by those for 6x6-equalJl they were multiplied by 16/36 0
b For these designs
0
2
e
2
~ 0 and 0 rc
was increased by one •
t=
e
e
e
a
Table 4~6.. Relative variances of estimates of variance components for
222
2
CJ
r ""' 1~~ CJc "'" 1, CJrc "'"' 1 and CJe "'" 1
Type of design
and n..
~J
arrangement
-
ColUIlllJ.s
Procedure
Rows
Procedure
H
B
A
H
Interaction
Procedure
---A
B
H
AiB
equal
6x6
6x6b .,. 31.6
6x6 "'" 522
1~00
1000
1~00
LOO
1.00
1000
1.00
1000
1 ..20
1070
1028
1.56
1.42
1..67
1.20
1.70
1.28
1.56
1042
1,.67
~~ .89
4•.52
2.. 22
2,,77
6x6b ... 018
6x6 "" 024
1 ..13
1,.57
1.10
1040
1.20
1.49
1.18
1059
1~13
1.39
L23
1.51
2.. 53
3.98
1.78
2.. w.s
b
6x6 - L-'20
6x6 ... L24
1.,18
1 ..49
1.09
1 .. 27
1.15
1..31
1.18
1 .. 49
1.09
1.27
1.15
1031
2..10
3.54
1 .. 54.
2,,50
1~6,
1..6,
1.83
1..78
1.90
2.. 01
1.65
1083
1 .. 73
1.8,
1,,85
1.65
1.80
2,,23
3 .. 27
2,,13
1.65
1..80
1 .. 78
1.090
1.6,5
1 .. 73
1.7.3
1085
108"
1..84
2,,04
3".37
6,,27
2,,65
1,,%.
3x3
3x3
3x3
3x3
3:xj
--
equal.
D18",,1
D18~2
D18-3
D18-4
1083
2.. 23
.3 .. 27
2.13
2~01
2,,04
2..10
2..50
2..30
a Each va~iance was divided by the variance for 6x6~equalJ then adjusted fQr.
different numbers of observations~ e.g.,9 after the variances for 316 were
d,i"ltide'::l by those for 6x6-equal,9 they were multiplied by 16/36.
b For these d
'
. es~gns
a2
e
~
0 an d CJ2 was ~ncrease
.
db y one"
rc
\);"
e
e
e
Table 4.7. Relative variances a of estimates of variance components for
222
2
(J
"'"' 16, (J
=- 0, (J
"'- 4 and (J
=0 1
r
c
rc
e
Type of design
Rows
Procedure
and !l.ij
arr an f!ement
Columns
Procedure
A
B
1000
1000
1.00
•.52
.. 71
.. 62
.89
.64
.90
-
.54
.75
.64
.. 87
-
.73
.89
1.35
1,,50
1.,40
1.51
1..54
H
-
equal
6x6
6x6b .,. 51.6
6x6 "'" 622
b
6x6 "" C18
6x6
c24
b
6x.6 "" 120
6x6
L2~,
-- equal
.. D18...
- D18-3
D18-4
3x3
3x3
3x.3
3x3
3x3 "'"
1
D18~2
I
Interaction
Procedure
i
A
B
H
1.00
1 00
1.00
1.00
42.0
77 .. 0
6.67
7.94
7 .6~.
8.86
8.13
14.4
2.22
2.87
.. 68
.,91
40.7
65.6
4.47
6.. 04
5.10
6.. 80
7.85
1203
1079
2.34
.,79
.93
.83
.97
23.2
35.2
3.30
4.10
4.08
3.W
4.. 83
1.54
1.93
1035
1.50
1.38
1.39
1.55
1035
1.50
1.36
1 ..36
1.42
5.. 07
6.49
20.7
47.1
11.. 7
5.07
6.49
6.. 08
6.44
7.30
5.07
5.42
5.37
5.68
5.83
2 •.55
H
1000
0
7..13
2.83
5.. 28
9.. 74
3.85
A
B
2.. 55
2.83
2.. 78
2.93
3.,11
I
a Each variance was divided by the variance for 6x6-equal 3 then adjusted for
different numbers of observationsj e.g .. , after the variances for 516 were
diidded by those for 6x6-equal, they were multiplied by 16/36.
b For these designs r} =- 0 and
e
(irc
was increased by one.
.p:-0\
e
e
Table 4.8.
e
a
Relative variances of estimates of
22
0'2
~ 1, 0'
1, 0' ;:, 16 and
rc
r
c
::0
Type of design
and n ..
lJ
arrangement
Interaction
Procedure
Colunms
Procedure
RoWS
Procedure
H
A
B
H
A
B
H
AtB
-
equal
1 ..00
1000
1 ..00
1.00
1000
1000
1.00
1.00
-
816
322
2 0 28
3.68
4.44
5.69
5.07
6.35
2.. 28
3 ..68
4.44
5.69
5.07
6 ..35
1..70
2.49
2.22
3.02
c18
1.. 94
3.01
3..11
4.29
3.42
4.59
2.00
3.. 28
3.12
4.45
3.54
5.00
1.53
2..19
1.79
2.41
1.,93
2.. 70
2.41
3.. 10
2.. 52
3,,07
1093
2..70
2.41
3 .. 10
2.. 52
3 .. 07
1.,43
1.77
1..54
1 .. 91
4..40
4.88
6..12
9 ..42
6.56
4.. 40
4.. 88
5.15
5.. 24
6.. 25
4.40
4.. 88
4.. 53
4.. 58
4.84
4.. 40
4.40
5.54
5 .. 15
5.24
6.. 25
4..40
4.. 58
4.53
~...58
4.. 84
2,,94
3,,26
3.,0
4.. 23
3.70
2.94
3 .. 26
3.. 15
3 .. 20
3.59
6x6
b
6x6 6x6
b
6x6 6x6
b
6x6 6x6
3x3
3x3
3x3
3x3
3x3
variance components for
0'2 ~ 1
e
- c24
120
-- equal
D18-1
D18-2
-- D18-3
- D18-4
L2h
5.54
6.. 12
9.. 42
6.56
a Each variance was divided by the variance for 6x6-equal, then adjusted for
different numbers of observations; e.g.. , after the variances for 316 were
divided by those for 6x6-equal, they were multiplied by 16/36.
b lUr these designs 0'2
e
=0
and 0'2 was increased by one ..
rc
.f="'
--J
48
CHAPTER V
A SIMPLIFIED CCMPUTATIONAL :METHOD FOR PROCEDURE A
5.1
Introduction
The main purpose of this chapter is to shaw an easy com;putational procedure for obtaining the
matrix var(t ), ahovm in (4.2).
-a
4 x 4 variance-covariance
In general, the major portion
of the computation in evaluating yare!) is in finding
yare!).
Searle £195'9} and Henderson et al. ,['19517 show a simplified set
of equations for evaluating var (!h)' in Procedure H,
Searle
-was able to reduce the matrix products to simple combinations of
sums of the
n .. 's
and the true variance components.
J.J
Since six
of the ten unique elements in yare":'h
t . . ) and yare t ) are identical,
-a
Searle's results 'will be repeateCJ. in this chapter.
The other
work in this chapter is concerned with obtaining a computational
simplification of the remaining four elements in var(t ), (vari-a
ances and covariances with
I* ) .
5·2 Searle's Results for
var(t~)
-.L1
Because there is no consistent notation between the Searle
£195'9.7 and Henderson et al.£19517 papers, the notation used in
this section for defining
and products of the
su~
different from that of Searle
n
r
c
t
n
,
=
in some instances.
n .. ; n .j
J.J
=
n
n .. IS
J.J
Define
=
~
~
i
j
=
=
=
num.ber of rOi-TS in the experimental design,
ij
ni •
~
j
~
i
ij
,.
number of columns in the experimental design,
number of non-empty subclasses.
v7il1 be
49
Also we have the following
k
k
k
k
k
k
k
k
k
Definition
i
l
2
12
21
3
26
k :
i
2
L: n.
2·
2.
In
k
k
2
L: n ./n
k
2
L:· -rL:..n.2J·/n.2.-7
k
·
J
•J
2
J
2
'
L: rL: ni./n . 7
j-i
J .J2
L: L: n . . /n
2J
i j
L:
i
2
L:(n .In. n .)
. i J 2. . J
k
k
k
J
5
L:rL:
n~ .In. 7
. - . 2J 2.-
k
8
~L~ n~j/n. j_7
k
where
2
J
J
2
Definition
i
14
23
L:
.
J
L:
LL:(n~
.n. )/n . 7
. 2J 2.
• J2
rL:(n~.n
. )/n. 7
2J. J
2.-
. -.
2
J
27
L:
i
28
L:
i
L:(n~
.In. n .)
. 2J 2. • J
J
4
L:(n .In. n .)
iJ
.
J
r:
7
2
~ _ ~ n 2ij_ 72 Ini.
lO
~ L~ nij_7 In. j
24
2L: L: (L: n ..n. , j )2 /n . n. t
i<i j 2J l
2, l .
25
2L: L: (L: n .n .. t )2 In .n '1
i J lJ
' J 'J
J.<j" 2
2
J
2
J
already been defined in Section 4.3.
tation of k
24
in Table 5.1.
) k
25
2
2
2
'
k ) k , k ) k ) k ) k , k
and k
28
14
24
26
8
25
23
27
Henderson et al. £19517, and that
l. . J
are the same as in
k , k ) ~) k
and k
12
l
2
21
have
To best illustrate the ca,frlpu-
and the :rest of the k's consider the
n
ij
data
50
Table 5.1.
Subclass
n .. data for a two-way classification
J.J
Columns
Rows
Totals
1
2
3
1
0
1
1
2
:::
nl.
2
2
1
1
4-
:e:
n2 •
3
1
1
2
4
:::
n3 •
3
3
4
n .1
n. 2
Totals
n
10 ""n
.3
Now
::: 51/8 ,
+
k
2
:::
3.4
k
12
:::
4
,
,
,
k
:::
25/6
,
k
k
k
l
2l
3
26
~
k
3.6
-"'~
"" 1.4
:::0
:::
57/48
,
,
6
8 ::: 6.5
,
16
+
12
16]
=:
12
k
23
=:
13.75
,
k
~
85/48
,
k
~1~"1/48
,
27
28
}
7 "" ~.5
k10 =217/3 8 •
k
22
3'
51
Searle {195fJ..7 and Henderson et al.
+ nk ( 40-
3
ru
B )2, which would be
ru ru
considerably more difficult to COIl1:pute.
V
of (4.8) in
will be obtained.
show that
2 2
2 2
2 2
~ 2
2 2
(j
+ 40- (j )+n( 40- (j )+k {4<r-(j +4cr ( j )
r c
l' rc
r e
12
c e
1'c e
In Section 4.4, var(R) ::: 2tr(C' VC
of
L19517
By using the expansion
~
2tr(C;-uVC:ruBru)-' it can 'be seen that (5.1)
Table 5,2
at tbe end of Section 5.3 gives
the remaining coefficients developed by Searle L195~7 and Henderson et a1. {19517 for var(C), cov(R" C)
var(S)
cov(R,S) and
cov(C,S) .
5.3
CO!llJ?uting var(1*), cov(R,1*), cov(C,I*) and cov(S,I*)
In the expansion of each of the four quantities, var (1*),
cov(R,I*) ,cov (C,1*)
to a
Cf
rc
and cov(S,1*), the matrix
interaction matrix
Since
general~
Using the
f
C1'c
Vl'""
a'1'cvC:::;() '
Thus i'or the considered four items, the general
V-matriK can be VITitten as
v=
is next
at1'c is constructed in
order to eliminate rOiil and cc,lunm effects, then
in
V
I
txt
V in (5.2)
52
= 2tr
var(I*)
ref (e,.2 I + (J2(A fA)-l)e B 72
rc
e
rc rc-
L . rc
72
= 2tr -;-(e'rce rcBrc )(J2rc
:::: 2tr
r( e ro e roBro0-2ro
I
-
+ (e' (A'A)-le B )(J2
rc
rc rc e-
rl.
~n
= (2(J4ro ) tr(e'roc
1
2 2
cr 7
e-
+ I
Jal
i
n
B )2+(40- 2 0-2)trre~ e
ro ro
TO
e
B )
\ ro ro TO
1+
+ (2cr )n
e in
..
Before expanding cov(R,I*), note that (A'A)-l e
ru
is alvmys
a matrix of ones and zeroes, where the ones reflect the nonempty subclasses of' rows in the experimental design.
J"ust as
= 0 ' re'c (A 'A)-Ie r u
== 0
'
e' V
rc r
since the
Theref'ore
C' -matrix
ro
consists of' contrasts which are orthogonal to rows and columns"
Analogously
e t (A 'Afle
TC
cu
= O.
Hence
'I
;-0-2 I +- (J2(A 'A)-17e B
- rc
e
- rc rc)
~
°
(2(J 4ro )tr(C'rceruBr'J.0'ru. roBrc ),
since every other expression contains
null matrix>
Of (AfA)-lc ,which is a
ro
I'u
Analogously
cov(O,I*)
= (2(J4ro )tr(C rcCcuBoue'cuCrcBrc )
f
53
Also
2
rrr
I+<r2(A'Ar 1 7A'Arrr2 I+rr2(A'A)-17C B )
lrc-rc
e
-rc
e
rcrcj
cOV(S,I*) ::: 2tr (C'
4
2 2
= (2rrrc
)trrC·
(A'A)C rc Brc-7+(4rrrc rre )tr(C'rc Crc Brc )
- rc
Using
30
= tr( C'rcCrc Brc ),
3l
= tr( e' e B ) 2 ,
rc rc rc
k
k
k
k
as previously done in Section 4.3,
32
= tr(e'rce ruBrue rue rc Brc ),
33
= tr(e'rcCcuBcue cue rcBrc )
f
f
k..
:> 4
= tr( e I (A I A)e
rc
and
B ),
rc rc
var(I*), cov(R,I'*), cov(C,I*) and COV(S,I*) are represented in
Table
5.2.
When some
n
ij
:=
0, there does not seem to be any !flay of
simplifying k30 '··" 'k;s4"
However when all
n ij
> 0, for any
given two-way classifj.cation, it is possible to write out some
of the
IDatrix
products in terms of the
re~uire a tabulation of matrices
~
binations of
r
and
This would then
(Of n :l.J
.. 'S') for all pertinent com-
c.
In any event the computation of
which is obtained by inverting
matrix.
:n •• t s .
lJ
~O'·'· ,k
34 involves B
7, an (n. xu. )
rC'rc (A'Ar Ie rc-
-
lU
ln'
It does not seem likely that this inversion can be by-
passed in the general unbalanced case,
Finally after var(t) is obtained through the use of' the
-a
results in Table 5.2, it is then necessary to evaluate (4.1) in
order to obtain the desired variances of' the esti:ma.tes of' the
variance components.
Since only the diagonal terms of' the right
hand side of' (4.1) are required, it is not necessary to compute
the covariance terms of' var(v ).
-a
e
e
Coefficients of terms in var(t )
-a
Table '02.
Elements of
var(t )
2a4
var(R)
nk
1
r
-a
var(C)
k
25
+
k10
2a4
c
k + k
24
7
2a4
rc
k
e
7
2a4
e
Coefficients of
22
2 2
4aO"
r c 40"rO"rc
22
40"0"
r e
2 2
4o-co-rc
2 2
4o-rcO"e
k12
k12
n
k2l
k12
k
27
n
n
k12
k12
n
k 2l
--
--
k
30
~=
=~
r
nk
nk
n
k
3
3
7
nk 2
klO
c
nk
klO
k 21
nk
3
3
cov(R,C)
k
14
~3
k2S
k26
nk
3
k
S
k2l
k,
var(S)
nk1
nk 2
nk
t
nk
nk3
n
Ilk
cov(R,S)
nk1
k
23
k,
r
nk
nk
n
k,
cov(CjS)
k
14
nk 2
kS
c
nk3
k 2l
nk
k
31
noJ.ll
var(I~})
~=
=~
cov(R,I*)
-=
~=
cov(C,?I*)
==
","",,,,,,,"
cOV(S,I1~)
-=
==
3
3
3
~~
k
3
S
~-
~~
22
40"0"
c e
3
3
k
32
k
33
k
34
n.~n
=-
--
=-
k
30
\Jl
\Jl
CHAPTER VI
EXTENSION OF RESULTS TO TIm MULTI-WAY CLASSIFICATION
6.1
L~troduction
In the previous three chapters the general two-way classi-
fication was thoroughly investigated with respect to the three
estimating procedures A, Band H.
It is the specific purpose
of this chapter to show how each of these estimating procedures
can be extended to the three-way classification, and also to
indicate general extensions to four- and higher-way classifi-
A 2 x 2 x 3 experimental design is used to illustrate
cations.
the necessary matrices for each estimating procedure.
6.2
Sums of Squares in the 2x?x3 Classification
Consider the mathematical model as
(6.1)
~ Il+A,o+B·+{;k+(AB),. .+(AC)·k+(BC) ·k+(ABC). 'k+ei.kf!
1 J.
lJ
1.
J
lJ
J A
(1=1,2
1.=1,2, ••• ,no Ok)
l.J
where in Model I, there a:r'e al.1 fixed effects except
e. 'k fi
J.J _"
nijk's
-
?
NID (0 , 0--) • Tl1.e experimental de sign :i.n terms of the
e
is shown in Table 6.1.
(6.2)
51
Table 6.1.
Three factor design in terms of
A
l
n" k
lJ
f
S
A
2
B
l
B
2
B
1
B
2
Totals
C
l
n
n
121
1121l
n
n .. l
C
2
n
122
:n2 J.~
",,?
C
3
n
otals
lll
l12
l13
nIl.
n
n
123
1112.
221
:no .2
H,;)??)
~_I:-
Tl
2I3
11223
llrl
C. •
1122 •
n
n
• ·3
..
where
As before, the parameters
r" k of the model shown in (6.1)
l.J
A
are the true subclass means of each cell and
served cell mea..l'J.s.
Y
ijk
are the
ob~
No"W as in the t"Wo-way classifieation, it is
important t.o be able to express the relevant SU!l1S of squares
through the C: -matrices, as shown in (3.6).
Because there will
have to be a large nU.l'l1ber of sums of squares defined in the threeway classification, the carrvention will. be adDpted to use SS(C )
j
as the sum of squares associ.ated w:i:th the C. mat:r'ix.
J
First assU!l1ing that a.II
in the 2X2x3 design for
are as follows:
The general mean:
ll.
'k
lJ -
obtainir~
> 0, the relevant
C~
-matrices
all the necessary sums of squares
Factor A, unadjusted and uncorrected:
o
C'
au
2xl2
o
o
=
o
o
0
o
Factor B, unadjusted and uncorrected:
n
C' =
bu
2xl2
ll1
0
0
n
211
n121 0
n
0
n
221
l12
0
'n
~"212
n 190 0
0
~.<:
Factor C, unadjusted and
cu
2xl2
0
o
o
o
o
o
o
o
l13
0
'n
-4213 0
0
n~V)2
.G
>;'1123 0
n
o
o
o
o
o
o
o
223
Q~corrected:
o
C' =
n
0
o
o
o
o
o
o
For the uncorrected subclasses, C ~
aucu
=:
I
12x.l2
Now it is also
necessary to be able to express the set of sums of' squares which would
be obtained from equation (6.1) if it were reduced to all the possi=
ble two-way classifications.
For "the two=way subclasses AB(u..~co:rre(:d;ed.L
!l111
nl~,L.
n
0
0
0
n
0
0
0
o
n
211
o
o
-0
n
21..
o
o
o
o
~-113
0
0
0
nIl.
nIl.
o
Cf =:
abu
4xl2
l12
llJ22
0
n 12 •
o
o
o
o
o
o
o
o
o
o
:n~23
n
22.J
59
For the two-way subclasses
n
nUl
-lll
nl. l nl. l
0
0
AC (uncorrected),
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
~--o
rl
0
0
0
0
0
0
0
0
0
n211 ll221
n2 • l n2 • 1
0
0
--:.- - - - 0
0
0
0
n
n
122
L2
0
l12
-----
0
---0
n1. 2
t
Cacu
=
6xJ2
0
0
0
0
0
0
0
0
0
0
0
0
0
--~O
0
0
0
0
0
0
0
0
0
0
n2l3 ll223
n2 • 3 n2 • 3
n
n
212 222
n
n...,c.. :2 --2.2
n
For the two-way subclasses
n Il1
0
n .11
0
n121
n
Il3 123
n1. 3 llL3
Be (uncorrected)
n
211
n .11
0
0
0
0
0
0
0
0
0
-0
0
0
0
0
0
0
0
0
0
. . . (;..~ 0
0
0
0
0
0
0
0
0
~-=O
-0
ll.21
Ct :::: 0
bcu
6xJ2
0
0
0
0
0
0
0
n 221
n. 21
'!1
n 01 '?
111
l..!.
0
0
122
=---0
~O
0
0
0
0
--112 0
--.12
n
....
n 02
0
.12
)1299
n. 22
:n11:;;
~._.;
n
ri
.13
213
:n
.13
n
0
0
0
0
0
0
0
0
0
123
n. 23
--0
".,
--223
n
.23
(6.3 )
60
For the two-way interaction AB (adjusted for A and B, ignoring C),
matrix C~b(C) is
For the two-way interaction AC (adjusted for A and C, ignoring B),
matrix C~C(b)
is
n1l1 n121 -n211 -n221 ~nl12 -n122 n212 n222
n
n
nn
n
n
n
n
1.1
~.1
1.1
2.1
1.2
1.2
o
2.2
o
o
o
o
o
2.2
o
For the two-way interaction BC (adjusted for B and C, ignoring
A),
matrix Cbc(a) is
n
-n
n
-n
111
221
121 211
-- - - n. 11 n o21 noll n o21
-n
n
-n
n
o
o
o
l12 J22
2J2 222
--- - - 0
n. 12 n. 22 n. 12 n o22
Now again in terms of the three factor
o
~n1l3
o
l'lJ23
o
=11
213
u223
n. 13 n o23 n. 13 n. 23
mode1~
Factor A (weighted squares of means)
Ct
aw
::::
(1
1
-1
-1
1
1
-1
-1
1
1
-1
=1).
1
-1).
Factor B (weighted squares of means)
atbw
=:
(1
-1
·1
-1
1
-1
61
Factor C (weighted squares of means)
C'
cw
=
~
1
1
1
0
0
1
1
1
-1
-1
0
-1
-1
-1
-1 -1
0
0
0
0
-~.
Interaction lIB
C'abw= (1
1
1
~1
-1
~1
-1
1
1
1).
-1
-I
Interaction AC
C'acw =
~1
1
-1
-1
0
0
0
0
-1
-1
1
1
-1
-1
-1
-1
1
1
0
0
0
-1
1
-1
0
0
0
0
-1
1
-1
-1
1
-1
-1
1
-1
1
0
0
0
-1
-1
1
0
0
0
0
-1
1
1
-1
-1
1
-1
1
1
-1
0
0
0
:1-
Interaction BC
C'bcw=
~1
Interaction ABC
-k
C'abcw- 1
:l
-:]
(6.5 )
Note that as in Section (3.3), 'the above four interaction matrices
can be constructed by multiplying the proper contrasts.
Ca'bcw can be obtained by multiplying the columns of'
As before when some
n"
J.J k
= 0,
For examp.le
C1
aT:7
with C~bcw' •
the interaction matrices are con-
structed by eliminating those contrasts which are not interaction
contrasts.
62
The necessary remaining sums of squares are obtained by combining some of the above CI-matrices and taking linear combinations
of them.
C
abw
All interactions:
Ct 7
~w
7xl2
=
C
acw
C
abcw
C
C_
' abw
3xJ2
AB, ABC:
C'acw
4xJ2
AC, ABC:
abw
=
Cabcw
=
t
eacw
M""tt
v
abcw
at
et_
BC, ABC:
.'
bew
4xJ2
Let us use the convention that
bew
c
Ct
abcw
R(!J., A, B, etc.) denotes the total
sum of squares for a model involving the parameters shown inside
the parentheses.
Those i.tems that do
!lO't
theses are not in. the model. For example
Y-lJ'
....
::=
appear inside the
paren.~
R(I-t, A) refers to a model,
p. + A. + e.. •
~
~J
In order to obtain the remaining sums of squares for the general
three-way classification, a proof will be presented on the partitioning
bfthesums of squares due to regression into two or more independent
The particular distinction in finding these inde-
component~.
pendent components is that it is obtained from two mathematical
models.
For the three-way classification, the two models are (6.1),
and (6.2).
However, to be more general, consider Model ~ as (3.2),
is the cell mean re-
where a basis of
A
is X
Model
man
nxt
parameterization, and is sho~~ as (3.3).
A , is already a full rank.
met
regression with t dofo is
~~e
coefficient matrix
'l"hus the sum of' squares due to total
Model M2
Model
~
Consider a subset of the parameters in Model
~
which are in-
volved in a part of the total regressi.on su.m. of squares, say
R(~,
A, B, etc.)
with
q
~
y'Q y, with
-
a=
contrasts in Model
V"'3'
regression sum of squares, say
t~q
d.fo
Also consider a Cf-matrix,
involving another part of the total
.tv with
q dofo
applying Cochran 1 s theorem, as shown in Graybill
independence of quadratic forms, if
squares associated with
Roy
II ~
r19577
-
L
QQ
a b
""
Therefore by
L.196l7,
0, t;hen the
on the
SurriS
of
Q and ~ are independerrt under Model
a
sho'\'ls that quadra't::tc forms like y' Q y
-
a-
r.. 3
and
are orthogonal to the error sum of' squares in his chapter
on least squares.
'lherefore sl.nce
ZfQa.l and
of the total regression sum of squares and the
x.'V are
SuIll
subsets
of their dof.
3There is no assurance that the SU!:.1S of squa:t'Eis components
which are independent under Model I, are also independent
under Model II. See an example from t..h.e nested design by
Prairie L196g7o
64
= t)
(t - q + q
equals the total regression d.f., it follows that
NOiv the key requirement is that
partitioning of
cb
t
o.
Consider
X
a
nxt-q
as the
X so that
X (XtX )-~
a a a
a
and
Q ~ :::
a
,
=
is the C1 -matrix such that., as in the development of the
qxt
F statistic in
(3.5),
b,
It can be shown, for the proper contrasts in
X~A(At Aflco
::: 0,
whieh implies
Qa~ = O.
C
Regardless of how
is constructed it is possible to show that for any
["X'
ment,
A(N Afl
7 is
(t-q)xt
independent of the n. .
8-'1
L:
j
represent
8 d.f.
L:
k
~J""
t
S
arrange-
•
exam.ple consider the three-way
classification shown in t.Ilable 6.1.
= L:i
n..
X~
lJ •••
To illustrate thi,s ,.,ith
n
that
n. 'k' and q ""
4.
For this example t "" 12,
X will be taken to
a
R(~t, A, B.? C, AB, AC), which in this exam.ple will have
lJ,
The matrix
The convention will 'be used that
d
is a column vector of'
all ones,
0
is a COlum..11 vector of all zeroes, and that the length
of
0
is shown on the top of each column of
d
or
the right of each row of'
A.
Thus
XI, and at
a
65
x'a =
8xn
d'
d'
dt
d'
d'
d'
d'
d'
d'
0'
0'
d'
d'
0'
0'
A
0'
d'
0'
d'
0'
d'
0'
d'
0'
B
d'
d'
0'
0'
0'
0'
0'
<c
0'
0'
d'
d'
0'
0'
d'
0'
0'
0'
d'
d'
d'
d'
0'
d'
0'
d'
d'
d'
0'
0'
d'
A
d'
d'
d'
d'
0'
0'
0'
d'
0'
0'
0'
0'
0'
0'
0'
0'
0'
j
d'
d'
0'
0'
0'
0'
d'
d'
0'
d
o
o
o
o
o
o
000
o
o
n
o
d
o
o
o
o
o
000
o
o
n
o
o
d
o
o
o
o
000
o
o
n
o
o
o
d
o
o
o
000
o
o
n
o
o
o
o
d
o
o
000
o
o
n
= o
o
o
o
o
d
o
o
o
o
o
o
n
o
o
o
o
o
o
d
o
o
o
o
o
n
o
o
o
o
o
o
o
d
o
o
o
o
o
o
o
o
o
o
o
d
o
o
O 11--113
o
o
o
o
o
o
o
o
o
d
o
o
:t1
o
o
o
o
o
o
o
o
o
o
d
o
:':1
o
o
o
o
o
o
o
o
o
o
o
d
n
nxl2
0'
111
121
211
221
l12
122
212
123
213
223
66
(A1Afl
=
D(l/n. 'k)
12xJ2
J2x12
J.J
is a diagonal matrix with lin. 'k on the
D(l/n
)
J.J
ijk
is
diagonal in the same order as the colmL~s in A. Hence XIA(AtA)-l
a
where
Note in
1
1
1
1
1
1
1
1
1
1
1
1
l1.
1
1
0
0
1
1
0
0
1
1
0
0
A
1
0
1
0
1
0
1
0
1
0
1
0
B
1
1
1
1
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
0
0
0
0
~C
1
0
0
0
1
0
0
0
1
0
0
0
1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
1
0
0
0
0
0
0
Xl
a
and
AB
~
AC
X1A(AtA)-1 that the column vectors are identified
a
on the right-hand side of the matrix by their respective effects.
Thus because the
n ijk 1s cancel out in
X~A (A'Af\ all that
is necessary is to find a Cb-matrix which is orthogonal to
X~A(AI Ar l • For the exam.ple if Cb is taken to be otew' then
Generalizing again, if Model M is set up, so that all nij ••• ""l,
2
it can be seen that the interaction contrasts., from the weighted
squares of means, are orthogonal to all columns of the coefficient
matrix
XtA(AtA)-l which represent main effects, or interactions
a
of the same order or a smaller order.
F'or e:l<"J31!1ple, in a three-way
61
,"J"
classification
Cl
a b cw
is orthogonal to the column vectors for
\-L, A, B, C, AB, AC, and BC, while CI -bcw is orthogonal to the
column vectors for \-L, A, B, C, AB and AC. Using the previous developed notation, this implies that for the three-way classification
SS(REG)
= R(\-L,
A, B, C, AB, AC, BO) +
SS(REG) = R(\-L, A, B, 0, AB, AC) +
Thus the sum of squares for interaction
ss(e abcw )
SS(C~b
Be,
ow )
when ABC :is not in the
model, is
R(\-L, A, B, C, AB, AC, Be) - R(\-L, A, B, C, AB, AC)
or, from the above identity,
ss(~)
ss(Cab cw )
Note the substantially reduced amount of calculation involved in obtaining the BC interaction f'rom
ss(~--bcw ) - SS(Cab cw ).
The extensions of the above concepts to multi-way classifications and also to regression sets which contain more than two sums
of' squares should be straightforward.
Thus from the generalized t.h.eorem on partitioning sums of squares,
the following relationships hold for the 2x2x3 example.
(6.6)
R(\-L, A, B, C, AB, AC, BO) -
R(~,
R(\-L, A, B, C, AB, AC, BO)
R(\-L, A, B, 0, AB, BO)
=
A, B, 0, AC, Be)
(6.8)
68
R(~)
A, B, C, AB, AC, BC) -
R(~,
A, B, C, AB, AC)
Now that all the pertinent sums of' squares are defined, it is
possible to take linear combinations of them to obtain the analogous
sets of sums of squares in the three<'actor eX.Qsy'imeTlt.
ProcedUY"E;
H as before, will use all unadjusted sums of squares; Procedure B,
H0l:4eVer the ext,ension to Procedure
all weighted squares of' means.
A is not so obvious.
illustrated.
For the main effects, only factor A will be
We want the surn of squares f'or A, adjusted for Band
C, when all the interactions
sS(AIB,c)
Note that
R(P"
= R(~,
a~e
ignored, i.eo,
A, B, C) -
R(~,
B, C)
B, C) is the total regression sum of squares of a tw'o~
factor experiment with
R(P., B, c)
B and C.
SS(Cb euJ
Hence, using (6.3) and (604),
- ssCCb c ( a,)
With the result from (606), it is seen t,hat
SS (A 8
~ ) - SS (C7'
I B, G) ::: SS (. C..
aoeu
\ :l,!f')
-
SS (C,.0(~U
. ) + SS (. CbJ~. ( a »
JL'Ylalogous e:x;pressions can be obtained f'm:'
ss(e IA,B).
associated with a main
H.
and
iI'he inte:eaction sums o:f squares :for Procedure B a:re
In summary, Table 6.2 shows al.1 tlw
and
sS(BIA,C),
.
e~fect
SlmlS
of squares that can be
and an interaction for Procedure A, B
Table 6.2.
Sums of squares for estimation procedures
A, Band H in a three-way classification
Source of
Variation
Estimation Procedures
B
A
H
A
R(~,A,B,O)-R(~,BJC)
SS(C
, aw )
SS(Ca)=SS(Crn )
B
R(~,A,BJC)~R(~JAJC)
SS(C-'
\ b"\v )
SS(Cbu)=SS(Cm)
0
R(~,A,BJO)~R(~JAJB)
SS(Ccw )
SS(Ceu )=88(0)
. m
AB
SS(C
)
SS(Cabw )=
abcw
SS(C ar .)
SS(C
. abw )~8S(C)
ill
AO
SS(Cacw )~ SS(O abcvl )
SS(r;v
)
SSe 'i;cw )abcrw
SS(C
SS(C
J
BO
ABO
SS(C ah cw )
Error
,
l).
am"
)
acu')=SS(O)
m
SS(Cbcw )
SS(C eu )~SS(C)
If!
b
SS(Cah
cw)
SS(Cabcu)-SS(Cm)
S€I
Se
S
e
For procedure A in the
2x2~3
(6.7), (6.8)
are easy to compute in terms of the model
~~d (609)
experiment the
~wo
factor interactions
(6.1) since the largest matrix to be inverted is only a 4:x4Q
slightly dif'ferent arrangement o:f' the sums of squares for
dure B: is given by Ie Roy and Gluckowski
L'1961} for the
A
Proce~
t.h:r'ee~
way classification.
It is seen that any tnree=way classification will yield
exact arrangement
01~
the
SVlI1S
t~he
of squares shoW's. in Table 6.2.
i1he
2x2x3 example which was 'Worked o'u:t above ea;;.'1. be generalized to
any other
three~way
classification.
W11en there are some
nijk::::O J
it is just a matter of' constructir.g su:i.table contrasts on the main
effects of' ProcedUJ:'Ei B and obtaining the interaction matrices by
the multiplication FQle.
70
Extension of the above results to the general multi-way
classification is not too difficult to write out although it may
involve considerable computation.
Procedure B is the easiest
to extend, since one needs only to write out the main effect
contrasts; all the interaction contrasts can be obtained by
multiplication as noted in the two- and three-way classifications.
For Procedure H, the extension is also not too difficult.
The only problem might be the construction of the subclass matrices.
To illustrate the
e~ension,
say in a four way-classification,
a typical term for the two-way subclass is n, 'kll/n..
l.J"
the three-way subclass
always involve
n, 'kd/n, 'k
J.J"
l.J .•
•
l.J ••
,and for
The highest order subclass
an identity matrix which is the order of the num-
ber of non-empty cells.
Procedure A can also be extended.
Again using the four-way
classification as an illustration, the sum of squares for main
effect A, adjusted for all other main effects, ignoring interactions, is computed by
R(~,
A, H,
0,
D)-
R(~,
H, C, D)
or
where the notation is an extension of that used in the threeway classification.
71
A two factor interaction can be found by
where
Ss(oabw) - SS(CI w)
3
includes the AB contrasts and all higher order
C
aEw
actions while
C~
~3Yl
inter~
includes contrasts from all three factor and
higher order interactions.
Of course if all the j,nteractions are grouped together as in
C-:- , all three procedures have considerably less computation.
J.w
6.3
The Completely Random Situation
Consider the mathematical model as
y,
(6.10)
'kf! = P-+A,+B,+ck+(AB), ,+(AC)'k+(BC) 'k+(ABC). 'k+e, 'kif
~
J. J
J.J
~
J
J.J
~J ~
~J
(i = 1, 2, ••• ,a; j=1,2, ••• ,b; k~1,2, ••• ,c; !=1,2, ••• ,nijk )
where
~ is a constant, Ai'
Bj ,
ok' (AB)ij' (AC)ik' (BC)jk'
(ABC)ijk' e ijk! are all NID with means zero, and respective vari2
2
2
2
2
2
2
2
ances, era' ~b' (J'c , (J'au
'1-.' (J'ac , <Tb c ' (Jac
band (Je
As in t~e two-way classification, if (6.10) is rewritten in
1\
terms of' subclass random variables and subclass means (y. 'k ::: y. 'k)'
J.J.
~J
the mathematical model is
A
y.~J'k
where
'k
lJ.'k + e.~J.
=:. )',
y. "k are the subclass random variables,
J.J
72
A
is the average error for the (ijk) subclass, and 7ijk has a multivariate normal distribution with variance covariance matrix V •
txt
Using the general results described in Section 3.4 on the sums
of squares in Table 6.2, it is seen that for procedure A, the first
seven items will make up the vector t*
-a
,
and
1
-1
0
0
1.
0
0
=1
0
0
0
0
SS(C abcu )
1
-1
0
1
0
0
~l
0
0
0
0
0
SS(Crw)
1
-1
1
0
0
=1
0
0
0
0
0
0
SS(Cab(n»
.<.:.
t*= 0
-a
0
0
0
0
0
0
0
1
0
0
~1
SS(Cac(b) )
0
0
0
0
0
0
0
0
0
1
0
-1
SS(Cbc(a) )
0
0
0
0
0
0
0
0
0
0
1
-1
SS(C ab u )
0
0
0
0
0
0
0
0
0
0
0
1
SS(Cacu )
SS(Cbcu )
SS(C:5
a w)
SS(Cacw )
SS(~cw)
SS(C abcW )
For Procedure B, the f'irst seven ite1l1'S under B.'I in Table 6.2.'/
are
~
=<!b ' since there are no linear combinations.
73
Finally, for Procedure H, it is seen that
1
0
0
0
0
0
0
-1
SS(C
0
1
0
0
0
0
0
-1
SS(C )
bu
0
0
1
0
0
0
0
~1
SS(Ccu)
t* _ 0
0
0
1
0
0
0
~l
SS(Cabu )
0
0
0
0
1
0
0
~l
SS(C acu )
0
0
0
0
0
1
0
-1
SS(Cbeu )
0
0
0
0
0
0
1
-1
88(0 b
""11-
au
)
a cu )
SS(O )
m
In the three-way classification there eight variance components
to be estimated
However since the sum of squares for error is in-
0
"2 ) can be
(J" 8...'l1d var «(J"
dependent of' all the other sums of' squares, "2
e
e
estim.ated separately.
Thus the main problem is to estimate and
find the variances of the estimates for the seven variance
2
222
2
2
2
nents (J"a , O':b' (J"c , (J"a b' (J"ac , (J"bc
and
a(J"'b.
e
classif'ication the
compo~
Again as for the two-way
variance~covariance matrix
of' the subclass means
A
lijk is requiredo
sum of' eight
However now it is desired to express
V'
each associated wit...11 one parameter.
~
2
0
2
~
~
?
V = (J"2V +a:G:v. -kJ V -kJ<- V -kJ V ·kC V" -kr- V -kJ-V
a a b b c c ab ab ac ac bc be a"be abc e e '
matrice~,
where t...1J.e size of
V depends upon t:.l1e number of'
classes in the experiment.
Now
no:a.~ zero
as a
That is
(6 11)
•
sub-
E(!:), E(~) &YJ.d E(~li) can be ob-
tained so that the seven variance components can be estimated for
each procedure.
With (3.21) expressed for each procedure, the vari-
ances.of the estimates of the variance components can be found by
use of (3.22).
Although it is not done here, it is expected that
74
many of the matrix products can be reduced to simple functions of
the nijk's and the ~2,s through the use of the expansion of
(6.11).
V in
From a computational viewpoint, note that var(t ) is a
-a
12xl2, var (~) is a 7x7 and var(~) is an 8x8 matrix.
If the breakdown of the interaction into separate components
is not required, then the analysis of three-way classification data
can be somewhat simplified.
The total interaction component for
Procedure A and B would be obtained from
from
SS(Cab cu ).
SS(C~).
~1tl
for Procedure H,
The new var(t
. . . ) is a 4x4 and
-a ) is an 8x8, var(t.
-u
var (~) is a 5x5 matrix..
6.4
Estimating Variances of the Estimates of the Variance
Components for the Nested Design
Recent work by Searle
L196"};.7
and Prairie
L196g} have
con-
.sidered the nested model
(6.12)
Yijk ::: ~ + Ai + B~j + Cijk'
(i=1,2, ••• ,a;
j=1,2, ••• , b.; k=1,2, ••• ,no .)
~
~J
Ai' Bij and Cijk are all NID with means
222
zero and respective variances ~A' ~B and ~C. Since the nested de-
where
~
is a constant and
sign can be
consider~d
as a special case of the multi-way classifi-
cation, (6.12) can be obtained from the two-way classification model
byequating r. =A" (rc) .. ""B .. , e ..k:C .. ka,rtd c_ =0.
1
1
2J·
~J
~J.
1J
J
2222222
For the parameters ~r := ~A' ~rc ::: ~B' ~e
::: ·
~C and 0- ::: o.
Then with
c
the development shown for Procedure H, in the two-way classification
and with the above changes under model (6 .. 12), the variances of the
75
"'2A" "2
(JB
· tde
·
es t 3.m.a
var3.ance
component s,
(J
and
"'2C
(J
can b e
d
0 bt·
aJ.ne.
These results are exactly the same as those shown by Searle L196")}
.
or Prairie L196g7.
nested models is not
Extension to the analysis of higher order
difficult~
As long as it is possible to write out the sums of squares
for Procedure H, it will 'be possible to obtain a linear combination of these sums of squares to represent the
squares.
x~sted
sums of
As an additional 8:YE.mple, consider the :nested model
y. 'klJ
J.J ~
:=
I.l + A.
1
+ BJ.J
.. + C. 'k + D. .kl! ,
J.J
3.J ~
A., B ., C" k and D. 'k fl are all N:ID
iJ
1.
J.J .
lJ "
222
2
with means zero and respective variances (JA' (JB' (J 0 and (JD.
where
I.l
is a constant and
Again we use results "'orked out for the model in (6~2) and equate
A. ::: A., (AB) ..=B •. , (ABC). 'k"" C. 'k' e 1 'kll"'" D 'k ll and B. ::: Ok :::
iJ A
3.
1.
1.J 1 . J 1 J
1.J
J~
J
2
2
(AC)
:::1
(BC)
= O. For the variances 0: =< (J2 ::: (J2 :::: 0:
= O.
ij
'b
jk
The appropriate sums of squares for
Table
6.3.
SUln of' squares
a
~
I
ssle
'- au;'=88(0)
m
a
ss(e
a
B in A classes
~
a
L:
i
::L
b. -
J.
a
L: c .. - Z b.
j
J.J. 1. 3.
n-L:L:c ..
1 j
n - 1
l.J
abu
)-SS(O
au
)
ss(e aucu
~ )-SS(O b )
a u
a 01
Total
be
Sums of squares for the foux' stage
nested classification
A classes
D in C classes
ac
(6.13) are shown in Table
Source
C in B classes
c
S
e
6.3~
76
It is anticipated that an algebraic simplification can be
worked out for any nested classification so that the variances
of the estimates of the variance components appear explicitly
in terms of linear combinations of the populatfon variance
components.
6.5 other Mathematical Models
Now that a general procedure has been :i.llustrated a.i'ld described for the nulti-way and nested classifications, it should
not be too difficult to extend these results to combinations
of multi-way and nested designs and possibly to the mixed model.
In fact for any experimental design where the number of sums of
squares in
t*
is the same as the number of variance components
to be estimated (excluding lT~), the general results of Section
t::
3.5 can be applied.
The importa."1t criterion is to be able to
find the sums of squares in
t* either directly, or through
some linear combination of other sums of' squares, where all necessary sums of squares are express:i.ble in terms of a
for the model under consideration.
OJ-matrix
77
CHAPTER VII
SUMMARY
Al\ID
CONCLUSIONS
7.1 Introduction
The basic problem was to develop a procedure by ,.,hieh
va!'i~
ances of estimates of the varianee components in Model I I could
be computed for multi-way classifications, when there are unequal
numbers in the subclasses.
Subsequently it was possible to make
comparisons of estimation procedures and experimental designs.
As a by-product of this development the extension to nested designs was wOI'ked out.
In all cases the variances of estimates of the variance components involved the true parameters (the variance components) and
the number of observations in the subclasses
(n .. ).
~J
A computer
program was vITitten on the UNrvAC 1105 for evaluating the variances of the estimates of the variance components for three estimation procedures (A, B and H) in the two-way classification.
computer program is given in Appertdix A.
The
The computer results,
for 18 sets of parameter values, a number of different designs and
for each estimation procedures, is shown in Appendix B.
7.2
Tbe TWO-Way Classification
The analytic results, and examples of the construction o.f the
necessary matrices, for the two-way crossed classification are given
in Chapters III and IV.
The essential requirement, based on the
fixed effects model (Model I), is to be able to express the sums of
78
squares for a particular estimation procedure in terms of linear
combinations of quadratic forms such as
=
A
yl
A
Q.
'1
lxt tit til
,
A
where
r
is a vector of the t cell means, and
of constants.
puting mean
Q
j
is a matrix
Then by the analysis of variance procedure of
com~
squares based on Model I, and equating these mean
squares to their expectations under Model II, a set of equati.ons,
linear in the parameters, is obtained.
For this discussion we
will assume that only one mean square is computed for each source
of variation in the analysis of variance, e.g., main effect A,
main effect B, ••• , interaction AB, ••••
In this case there are
as many equations as unknowns (including 0";; which can be esti~
mated independently of the other variance com.ponents).
Thus the
variance-covariance matrix of the estim.ates (y) of the variance
components is
.r.>. 0"4
7 p~
---
(~) m
ne
mt
where the above notation is explained in Section
IV
3.4.
This is a generalization of Searlevs ~195~7 work for the es~
timation procedure using unadjusted sums of squares (Procedure H).
A
In Model II, the Yarianc6=covariance matrix of the cell means
is
V
txt
,
Yij
and the variances and the covariances of the sums of squares,
79
var(~),
can be found through
var(SS.)
J
=
2tr(VQ.)2
J
The computational details, in matrix notation, for the estimation procedures,
A - based on method of fitting constants;
B - based on weight8d squares of means;
H - based on unadjusted sums of squares,
are described in Chapter IV.
Searle Ll95§.7 shows a simplified set
of equa.:bions for obtaLning var(y) using Procedure H, and Chapter V
gives an analogous simplified set of equations for obtaining var(.!)
using Procedure A.
The computer comparison evaluates Procedures A, Band H for the
following experimental designs:
c
6x6-equal
6x6 - S
1 1 1 1. 1 1
1 1 0 000
1 1. ]. 0 0 0
1 1 0 000
1 1 1 1 1 1
1 1 1. 0 0 0
1. 1. 1 0 0 0
1. 1 0 0 0 0
1 1 1 1 1 1
o1
o1
1 1 0 0
1. 1 0 0 0 0
1 1 1 1 1 1
o 0 1 11.0
o0 1 1 1 0
1 1 0 0 0 0
1 1 1 1 1 1
000 1 1 1
000 1 1 l
111 1 1 1
1 1 1 1 1. 1
000 0 1. 1.
00011:1..
1. 1 1 1 1 1
3x3
1 1 0 0
- equal
2
:2
:2
2
:2
:2
2
6x6 -
2
:2
-D
3x3
nll
n
2l
n
31
n
n
6x6 - L
12
22
11
32
n
13
n
23
n
33
,
3
3
L:
L:
i
j
n
ij
==
18
80
A summary of the computer results, for the 6x6 designs
2
2
showed that when
and r:r
dominate r:r
the best design is
r
c
rc'
ri
S or
C, and the best estimation procedure is
dominates
r:r; and
A.
2
When r:r
rc
r:r~, the best design is L, and the best esti-
mation Procedure is
H.
depending upon whether
If'
'2
r:r
I"
r:r2
rc
or
falls between r:r'2 and r:r'2, then
I'
'2
r:r
e
C
is more important, either
S, C or L would be used; however, Procedure A is the best estimation procedure.
For the 3x3 designs, Procedure B is
sli~~tly
better than Procedure A; however since the differences appear to
be negligible and Procedure A is easier to calculate, it is
recommended.
Based on these results, it is suggested that if -the experimenter has no prior knowledge of the relative magnitude of the
variance components in a two-way classification, and if the observations are either costly or time consuming, then he should
use the S type of design with estimation based on Procedure A.
' d'~cat es a 1arge r:r' 2
,
.
t +
'2
d '2
If Proce d ure A ~n
ro , ~n conuras vO r:rr an r:r,
c
a recomputat:lon of the data usirJ.g Procedure II might be an improvemente
The varia1"J.ces of the estimates of the var:ia.'1ce components
can be estimated by using the sample estlmates for the parameters
in
(7.1).
The general matriX development for Procedures A., B and II in
the three-way and higher-way classification is outlined in Chapter
VI.
Although only three procedures of estimation are discussed
in this dissertation, the general matrix set-up is suitable for
81
other procedures, as well as other experimental designs.
It is al-
most impossible to consider a comparison of estimation procedures
or experimental designs without the use of a digital computer.
The
computer program in Appendix A is written in the Remington Rand
UNICODE language which is very similar to the FORTRAN language.
It
should not be a difficult task for those who are interested, to re=
write the UNICODE program into a different compiler language.
7.3
L
Su~gested
Future Research
Compare experimental designs and estimation procedures for twoway classifications with the larger core digital computers in
order to handle more than six rows and columns.
2.
Compare experimental designs and estimation procedures for threeway classification modeJs using a digital computer.
3.
Investigate the problem of what to do with negative estimates
of variance components.
4.
Develop an analytic procedure for the purpose of put·ting confidence bounds on the variance component,s.
5.
Obtain the distribution of the estimates of the variance components for the aboye procedures, a...."1alytically, or by empi.rical
sampling, especially when the random. variables do not follow a
normal distribution.
6.
There is a need for investigating and evaluating procedures, other
than the analysis of varia....'Ilce technique, by which we could obtain
estimates of- the var-iance components.
82
LIST OF REFERENCES
Anderson, R. L. 1961. Designs for estimating variance components.
Inst. of Stat. Mimeo Series No. 310.
Anderson, R. L. and Bancroft, T. A. 1952. Statistical Theory in
Research. McGraw-Hill Book Co., Inc., New York"
Crump, S. L. 1946. The estimation of' variance components in
analysis of variance~ Biometrics 2: 7=11.
Crump, S. L.
analysis~
1951. The present status of variance component
Biometrics 7: 1-16.
Eisenhart, C. 1947 ~ The assumpt::l.ons U".11derlying tJ.1e analysis of
variance. Biometrics 1: 1-21.
Elston, R. C. and Bush, N. 1963. The hypotheses tested by different sums of squares when there are interactions in the
model. Submitted for pUblication to Biometrics.
Gaylor, D. W. 1960. The construction and evaluation of some designsfor the estimation of parameters in random models~
Unpublished Ph.D~ Thesis, North Carolina State College, Raleigh.
Inst. of Stat. Mimeo Series No. 256.
Graybill, F. A. 1961. A.'I1 Introduction to Linear Statistical
Models, Vol. I~ McGraw-Hill Book Co", Inc~ N. Y.
Henderson, C~ R.
components.
1953. Estimation of variance and covariance
Biometrics 2: 226-252.
Henderson, C. R., Searle;! S. R. and. Robson, D. S. 1957. Seminar
on unbalanced data. Mimeographed class notes, Cornell Univ~
Ithaca, N. Y.
Lancaster, H~ o. 1954. '!'Taces and cumulants of quadratic forms
in normal variables~ J. Roy. stat. Soc. (B) 16: 247-254~
Le Roy, H. L. and Gluckmrski, W.
1961. Die bestimmung der varianz
kom.ponenten im abc ~ faktoren versuch mit ung1eichen
klassenfrequenzen" Biom.etrische Zeitschrift .. .2.: 73-91.
Prairie, R. R. 1962~ Optimal designs to estimate variance com.ponents and to reduce product variability for nested classifications. Unpublished Ph.D. Thesis;! North Carolina State
College, Raleigh.. Inat. of Stat. Mimeo Series No. 313.
Roy, S. N. 1957. Some Aspects of Multivariate Analysis.
Wiley and Sons, N. Y.
John
Roy, S. N., and Gnanadesikan, R. 1959. Some contributions to
ANOVA in one or more dimensions: I. Annals Math. Stat. 30:
304-317.
Roy, S. N. and Sarhan, A. E. 1956. On inverting a class of
patterned matrices. Biometrika 43~ 227-231.
Searle, S. R. 1956. Matrix methods in components of varia.'I1.ce
and covariance a.'l1alysis. Armals Math. Stat. 27: 737-748.
Searle, S. R. 1958. Sampling variances of estimates of components
of variance. Annals Math. stat. 29: 167-178.
Searle, S. R. 1961. Variance components in the unbalanced twoway nested classification. Annals Math. stat. 32: 1161-1166.
searle, S. R. and Henderson, C. R. 1961. Computing procedures
for estimating components of variance in the two-way classificiation, mixed model. Biometrics 17: 607-616.
Snedecor, G. W. 1946. Statistical Methods, 291-296, fourth
edition. Iowa State College Press, Ames, Iowa.
Whittle, P. 1953. The analysis of multiple stationary time
series. J. Roy. Stat. Soc. (B) 15: 125-139.
Yates, F. 1934. The analysis of multiple classifications with
unequal numbers in the different classes. J. Am. Stat. Assoc.
29: 51-66.
84
APPENDIX A
COMPUTER PROGRAM FOR VAR(.!l)' VARC.~a) AND VAR<,~)
IN THE TWO-WAY CLASSIFICATION
The computer program evaluates variances of estimates of variance
components for Procedures H,9 A and B in the two-way classification.
Wbr each procedure
varey) ~ p-lLH var(~)
is computed and printed out o
4
.... (2cr /; n )
e/ e
_111
-Ii
_m t7p
:J
The program was written in the UNICODE
1105 digital computer (with an 8K core and 32K
language for the UNIVAC
drum) 0
HI
A description of the input.9 output.9 notation and flow diagram
follows~
INPUT
Program
Explanation
hI
H
h2
H matrix for Procedure A
h
matrix for Procedure H
a
const(O) "'" irmtl
Number of rows in design
const(l) ~ icc!
Nixmber of GO~wmns in design
const(2) - indfi
const(J) ~ mac
~
dof o for interaction
,'
No print out
Prirlt out constants for Procedure
const(4)
const(5)
const(6)
const(?)
(~t)
Number of nOD!,,,empty cells
(>'" 00
~4 ~:tr:ino···uttOBut
"J,.,L!..
matrix
rc
(:>
0
~> 0
No print out
Print out Php Pa and P matrices
b
(=>
No print out
Print out V and VB
t
0
t?0
s
matrices
H
const(8)
f::c 0
(>
0
const(9)
~ ...
0
Go
No print out
Print out var(!l)' var(!c) and var(!b)
matrices:
a 2 ... 1, and some n .. > 1
e
~J
2
a :: 0, all n. . = 1, or 0
e
~J
cr
CI-matrix for rows in Procedure B
cc
CI-matrix for columns in Procedure B
ci
CI
cofr
r
c
rc
for interaction in Procedures A and B
~matrix
tr(C~VrCrBr)J) that is the expected value for 0-;
in the estimating equations of Procedure B
tr(C6VcCcBc)' that is the expected value for a c
in the estimating equations of Procedure B
cru
0 1 -matrix where 1 represents a non-empty cell.
ccu
..
2
cafe
ru
Note the program places the correct value of n ..
in the CI -matrix
~J
ru
CI
cu-matrix where 1 represents a non-empty cell.
Note the program places the correct value of nij
in the CI -matrix
eu
~
2
sigrna(O)
Value of ar
sigrna(l)
Value of crc
sigrna(2)
Value of ara
sigma(3)
Value of 0-e J) which is always one in a parameter
set
2
2
2
The program can handle 36 parameter sets for a particular n..
arrangement.
2
A parameter set is (el, a J) a
~J
2
2
rc , ( e ). The parameters
are entered through the sigma vector in groups of four numbers.
2
The stopping rule is to put 0- .... 0 after the last parameter set.
I'
e
c
86
1!br example the sigma vector
set 1
set 3
set 2
~~~
1,1,1,1,
2,2,2,1,
1,16,2,1,
0,0,0,0, ••• , 0
will compute three parameter sets, for a particular design, before
continuing on with the next n.. design in the program.
lJ
n .. arrangement for a particular design
n
lJ
The program is set up for an unspecified number of n .. matrices.
lJ
Once the zero cells of an n .. matrix are located, these cells remain
lJ
zero for the computer runo
However any cell having n .. > 0 can be
lJ
changed to any nohioOzero value.
For example i f
n
n
""'
[:21
n
12
22
~J
23
,
n
any arrangement of values for the n .. fs in the 2 x 3 classification
lJ
wOll1d be acceptable.
The program is designed to handle any two-way classification,
up to six rows, six columns and 20 non-empty cells.
Because of the
amount of storage space needed for program and data, it was not
feasible, in terms of computer time, to consider a program to handle
more rows, columns or non-empty cel180
,
.
Thus" for a particular set of n.. arrangements, the program will
l.J
compute the variances of the estimates- of the variance components for
every combination of n .. arrangements fuJ.d parameter setso
lJ
The n ..
lJ
arrangements for which the zero cell pattern are different constitute
different computer runs.
87
OUTPUT
The output is partially governed by the const( ) vector.
Const(4) through const(8) provide for an optional output.
The outputs
which are not optional are ~
1- The n.. arrangement
J.J
2- The parameter set
3- The variances of the estimates of the variance
components
NOTATION
Constants in program
Ex:planation
indfr
dof. for rows
indfc
dof. for columns
ne
d.f. for error
ns
n
sigr
sigc
sigrc
nk15
n(kJ.)
nk1
n(k2 )
uk3
n(k )
3
k12
nk17
uk5
k21
P2(2,2)
k
30
88
Matrices in program
e.f
0'
-m
p2
Pa
p3
Pb
pI
Ph
br
Br
be
Be
bi
Bre
Bru
bru
beu
Beu
bre
BI'0'I'
bee
Be 06
B or
re re
bie
brue
Bru0'I'u
beue
Beu0'eu
V
v
vbs
e.fv
.'
Explanation
VBs,
OIV
-ill
erv
O'V
I'
eev
C'V
e
0'rc V
eiv
eruv
eeuv
.fmm
.fmme
Of V
I'u
Cf V
eu
m m'
(2/ne ) ~ !!', for
2 ::.1
e
(J
ssa
var(!a)
ssb
var(~)
ssh
var(~)
vca
var(v )
vcb
var(!b)
vch
var(.!h)
......:i
FLOW DIAGRAM
Sentences
Description
maximum. storage allocation for
matrices and vectors
1
3, 20, 30
input
40, 160, 344
fixed output
47, 48, 106, 122.3
165, 312
optional output
4
-
7
- 16
6
make floating point numbers
fixed point integers
30.1 - 3D.9
31
50
61
-
-
46
01
.....ml
constants for Procedure H
58
63
Ph
(21n ) m ml~ since 0 2 = 1
e - -
e
70
..
72.1
74
-
°ru
76.1
Ccu
80
- 87
Br
88
-
95
B
96
- 103
B
107
- 122
Fa and Pb
c
re
90
shift parameter sets
150.1 - 150.6
160.1 - 164
V and VB
s
var(~), var(~) and var(~)
170
- 302
330
- 331
var(~)
332
- 333
var(!a)
334
- 335
var(y )
..:..tl
Subroutines
400
- 404
floating to fixed
800
- 800.6
~tr(
A
B)
pxq qxp
899
- 899.22
C
:0
pxn
997
- 997.18
998
- 998.22
invert matrix in place
C
=.
pY.ll
999
- 999.22
A Bf
pxq qxn
ct
1Xn
A
B
pxq qxn
:0
at
lXq
B
qxn
unicode program •
exact variances of variance components •
norman bush jan 1962 •
1
2
3
3~"
3~'2
3~ ,3
3~,4
3.,5
3~,6
3.17
3.,8
3~19
3.2
3 ~ 2i
3.22
3.3
~
5
6
7
8
9
10
11
12
13
,~
15
16
20
20.,
20.2
2i
22
23
23~ 1
23.2
91
dimension n(6,6), const(10), cru(6,20), ccu(6,20), cf(20),
cr(5,20),cc(5,20),
ci(9,20), sigma('50); v(20,20),
vbs(20,20), cruv(6,20), bruc(6,20), ccuv(6,20), bcuc(6,20),
bru(6), bcu(6), cfv(20), tt'(9,20), tt2(9,20), civ(9,20),
bic(9,20), crv(5,20),brc(5,20), ccv(5;20), bcc(5,20), ssh(~,~),
ssa(~,~), ssb(3,3), fmm(27,27), fmme(3,3), vCh(3,3), vca(3,3),
vcb(3,3), h'(3,4), h2(3,~), p'(3,3), p2(3,3), p3(3,3), br(5,5),
bc(5,5), bi(9,9), cofr(5,5), cofc(5,5), tnk5(6), tnk17(6),
tv' (20), tv2 (20) •
start ..
read h, , h2 , const, cr, cc, ci, cofr, cofe ..
cruv( 0, a) = a •
ccuv(o,o) = a •
cfv( 0) = a •
tt2(O,O) = a ~
civ(O,O) = a ..
bic(O;O) = a ..
crv(O,o) = a ..
brc(O,O) = a ..
ccv(o,O) = a •
bcc(O,O) = a ..
tv1 ( 0) = a ..
tv2(O) = a .
if eonst(9) G a jump to sentence 7 •
vary i 0(,)2 sentences 5 thru 6 •
vary j 0(,)2 sentence 6 ..
fmm(i,j) = const(i)Xconst(j) ..
compute fix(const(O» ..
irow = kk ..
indfr = irow - 1 ..
compute fixe const(' » •
ico1 = kk ..
indfc = ico1 - 1 ..
compute fix(const(2» ..
indfi = kk~l •
compute fix(const(3» ..
insc = kk-1 co
read cru, ccu, sigma ..
rom = -, •
m, = a ..
vary i 0(,)8 sentences 22 thru 23.2 •
bcu(i)=O co
bru(i)=O.
tnk5(i)=0.
tnk, 7(i) =0 ..
r
24
25
25~1
25~2
25.3
26
27
28'
28~1
28~2
28~3
28~4
28.5
30'
30~ 1
30~2
30~3
30~4
30.5
30~6
30~ 7
30~8
30~9
30.91
31'
31.1
33
34
35
36
37
38
39
40
41
42
43
44
45
46,
46.1
47
48
vary i 0(1)2 sentences 25 thru 27 •
vary j 0(1)2 sentences 26 thru 27 •
vch(i,j)=O.
vca(i,j)=O.
vcb(i,j)=O.
p2(i,j)=0.
p3(i,j)=0.
ns=O.
nk3=0.
nk1 =0 •
nk15=0.
nk5=0.
nk17=0.
read n, if end of data, jump to sentence 399 •
k =0 •
i =0 •
j =0 •
if n(i,j) = 0 jump to sentence 30.7 •
cf(k) = n(i,j) •
k = k+1 •
j = j+1 •
if j L= icol jump to sentence 30.4 •
i = i+1 •
if i L= irow jump to sentence 30.3 •
vary i 0(1 )insc sentence 31.1 •
ns = ns + cf(i) •
vary i 0(1 )irow sentences 34 thru 40 •
vary j O(i)icol sentences 35 thru 40 •
nk3 = nk3 + n(i,j) X n(i,j) •
bru(i)=bru(i) + n(i,j) •
bcu(j)=bcu(j) + n(i,j) •
tnk5(j)=tnk5(j) + n(i,j) X n(i,j) •
tnk1 7( i )=tnk1 7( i) + n( i, j) X n( i, j) •
list i, j, n(i,j), tape 5, «subclass matrix» •
vary i 0(1) irow sentences 42 thru 43 •
nk15=nk15 + bru(i) X bru(i) •
nk17=nk17+( tnk17( i)/bru( i» •
vary j 0(1) icol sentences 45 thru 46
nkl =nk1 +bcu( j) X bcu( j) •
nk5=nk5+( tnk5 (j )/bcu( j» •
if const(4) = 0 jump to sentence 50 •
listnkl, nk3, nk15, nk5, nk17, tape 5.
list ns, tape 5 e
93
50
51
52
53
54
55
56
57
58
pl (0,0)==ns-(nk15/ns) •
pl (0,1 )==nk17-(nkl/ns) •
pl(0,2)==nk17-(nk3/ns) •
p1 (1 ,0)==nk5-(nk1 5/ns) •
p1 (1,1 )==ns-(nk1/ns) •
p1(1,2)==nk5-(nk3/ns) •
p1 (2,0)==-pl (1 ,0) •
p1 (2,1 )==-pl (0,1) •
p1 (2,2)==nS-nk17-nk5+(nk3/ns)
59
60
61
62
63
70
71
72
72.1
74
75
7676.1
if const(9) G jump to sentence 70 •
ne == ns-const(3) •
vary i 0(1)2 sentences 62 thru 63 •
vary j a(1)2 sentence 63 •
fume(i,j) == (2.a/ne) X fum(i,j) •
vary i a(l)irow sentences 71 thru 72.1
vary j a(1)insc sentences 72 thru 72.1
cru(i,j) == cru(i,j) X cf(j) •
bruc(i,j) == cru(i,j)/bru(i) •
vary i a(l )icol sentences 75 thru 76.1
vary j a(l)insc sentences 76 thru 76.1
ccu(i,j) == ccu(i,j) X cf(j) •
bcuc(i,j) == ccu(i,j)/bcu(i) •
80
81
82
83
vary i a(l)indfr sentences 81 thru 82 •
vary j a(l )insc sentence 82 •
ttl (i,j) == cr(i,j)/cf(j) •
compute prodt(br(O,O), ttl(O,O), indfr, insc, cr(a,a), indfr) •
if indfr == a jump to sentence 87 •
compute invert(br(a,indfr» •
jump to sentence 88 •
br(a,a) == 1.a/br(a,a) •
vary i a(l)indfc sentences 89 thru 9a •
vary j a(1)insc sentence 9a •
ttl (i,j) == cc(i,j)/cf(j) •
compute prodt(be(a,a), tt1(a,0), indfc, inse, cc(a,a), indfc ) •
if indfc =
jump to sentence 95 •
compute invert(bc(a,indfe» •
jump to sentence 96 •
bc( a,a) == 1 • a/be ( a,a) •
vary i a(l)indfi sentences 97 thru 98 •
vary j a(l )inse sentence 98 •
ttl (i,j) == ci(i,j)/ef(j} •
compute prodt(bi(a,O), ttl(O,a), indfi, insc, ci(a,a), indfi } •
if indfi == a jump to sentence 1a3 •
compute invert(bi(a,indfi») •
jump to sentence 1 a3.1 •
bi(a,a) == l.a/bi(a,a) •
84
85
86
87
88
89
90_
, 91
92
93
94
95
96
97
98
99
100
101
102
103
°
°
•
•
•
•
°
1 03.1
104
105
106
107
108
109
110
111
112
112.1
113
114
115
116
116.1
117
118
119
120
121
122'
if const( 5) ==
jump to sentence 107 •
vary i 0(1 )indfi sentences 105 thru 106 •
vary j O(l)i sentence 106 •
list i, j, bi(i,j), tape 5 •
compute prodt(ttl(O,O), ci(O,O), indfi,insc, ci(O,O), indfi ) •
compute tr(ttl (0,0), bi(O,O), indfi, indfi } •
p2(2,2} == trace/2.0 •
p3(2,2) == p2(2,2} •
compute prodt( ttl (0,0), cr( 0, O}, indfr, insc,' cr( 0,0.), indfr } •
compute tr( ttl ( 0, 0), br( 0,0), indfr, indfr ) •
p3(O,2) == trace/2.0 •
.
compute tr(cofr(O,O), br(O,O), indfr, indfr ) •
p3(0,0) == trace/2.0 •
compute prodt(ttl(O,O), cc(O,O), indfc, insc, cc(O,O), indfc } •
eompute tr( ttl (
be ( 0), indfc, indfc } •
p3(1,2) == trace/2.0 •
compute tr(cofc(O,O), bc(O,O), indfc, indfc ) •
p3(1,1) == trace/2.0 •
p2(0,0) == ns - nk5 •
p2(0,2) == p2(0,0) - p2(2,2} •
p2(1 ,1) == ns - nkl 7 •
p2(1,2) == p2(1,1) - p2(2,2) •
122~05if const(6) == 0 jump to sentence 123 •
122~1 vary i 0(1)2 sentences 122.2 thru 122.3 •
122~2 vary j 0(1)2 sentence 122.3 •
122.3 list i, j, pl(i,j}, p2(i,j), p3(i,j), tape 5 •
"'O""~u·t~).o!tV,,"
~""'rt(....
I c../
"-.I.J:J
.1:" 'l \(()
V,'-J J •
124 compute invert(p2(0,2» •
125
compute invert(p3(0,2» •
130
compute prod(brc(O,O), br(O,O), indfr, indfr, cr(O,O}, insc) •
131
compute prod(bcc(O,O), bc(O,O), indfc, indfc, cc(O,O}, insc) •
132
compute prod(bic(O,O), bi(O,O), indfi, indfi, ci(O,O), insc} •
., >")7
° °), °,
.9
f) \ ,
c:;
150~ 1 mm := mm+l •
150.2 ml == 4Xrmn •
150~3 m2 == ml -+ 3 •
150.4 vary i 0(1)3 with j ml(l )m2 sentence 150.5 •
150~5 sigma(i) == sigma(j) •
150.6 if sigma (3) :=
jump to sentence 20 •
°
151
152
153
153.1
154
155
156
,60
cfvc==O •
trts==O •
trtf==O •
tcts==O •
tctf==O "
tstf==O •
tsti==O ..
list sigma (0), sigma (1), sigma (2), sigma (3), tape 5,
«scale factors for theoretical variance components», (row s.f.),
(column s.f.), (interaction s.f.), (error s.f.) •
1 60~ 1 sigr = sigma( 0) •
160~ 11 sigc = sigma(l) •
160~12sigrc
sigma(2) + sigr + sigc •
160~13i = 0 •
160~14iv = 0 •
160 ~ 1 5 j = -1 •
160~ 16jv = iv •
160~ 17if iv G insc jump to sentence 161 •
160~ 18j = j+1 •
160~19if j G icol jump to sentence 160.6 •
160~2 k = i •
160~21m = j •
160~22if n(i,j) = 0 jump to sentence 160.18 •
160.23v(iv,jv) = sigrc •
160~24jv = jv+l •
160~ 25m = m+"j •
160.26if m G icol jump to sentence 160.31 •
160~27if n(k,m) = 0 jump to sentence 160.25 •
160~28v(iv,jv) = sigr •
160~29v(jv,iv) = sigr •
160~3 jump to sentence 160.24 •
160~31m = 0 •
160~32k = k+1 •
160~33if k G irow jump to sentence 160.5 •
160~34if m G icol jump to sentence 160.31 •
160~35if n(k,m) = 0 jump to sentence 160.4 •
160~36if m = j jump to sentence 160.42 •
160~37v(iv,jv) = 0 •
160~38v(jv,iv) = 0 •
160~39jv = jv+l •
160~4 m = m+l •
160 ~ 41 jump to sentence 160.34 •
160~42v(iv,jv) = sigc •
160.43v(jv,iv) = sigc •
160~44jv = jv+l •
160 ~ 45m = m+"! •
160~46jump to sentence 160.34 •
160~5 iv = iv+l •
160~51jump to sentence 160.16
160~6 j = 0 •
160~61i = i+1 •
160~62jump to sentence 160.2 •
160~ 7 if conat( 9) G 0 jump to sentence 160.91 •
160.8 sigee "" 1.0 ..
160~9 jump to sentence 161 •
,60.91 sigee = 0 •
161
vary i 0(1 )inse sentences 162 thru 164 •
162 v(i,i) = v(i,i) + (sigee/cf(i» •
163
vary j O(l)insc sentence 164 •
164 vbs(i,j) = v(i,j)Xef(j) •
164~1 if const(7) = 0 jump to sentence 170 •
164~2 vary i 0(1 )inse sentences 164.3 thru 165 •
164.3 vary j O(l)inse sentence 165 •
165
list i,j, v(i,j), vbs(i,j), tape 5 •
=
95
170
171
172
180
181
182
183
184
185
186
187
188
189
190
191
192
193
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
compute prod(cruv(O,O), cru(O,O), irow, insc, v(O,O), insc) ~ 96
compute prod(ccuv(o,O), ccu(o,O), icol, insc, v(o,O), insc) •
compute vtprod(ctv(O), cf(O), insc, v(o,O), insc) •
compute prodt(ttl(O,O), cruv(o,O), irow, insc, bruc(O,O), irow) •
compute tr( ttl (0,0), ttl (0 ,0), irow, irow) •
ssh(O,O) == trace.
ssa(O,O) == trace.
compute prodt(ttl(O,O), ceuv(O,O), ieol, insc, beuc(O,O), ieol) •
compute tr(ttl (0,0), ttl (0,0), ieol, icol) •
ssh(1 ,1 )==traee •
ssa(1,1 )==traee •
compute tr(vbs(O,O), vbs(O,O), inse, inse)
sSh(2,2)=traee •
ssa(2,2)==traee •
vary i 0(1 )inse sentence 192 •
cfve=efvc + cfv(i) X cf(i) •
ssh(3,3) == 2.0 X (erve/ns) X (etvc/ns) •
compute prodt(ttl(O,O), ccuv(O,O), icol, inse, brue(O,O), irow) •
compute prodt(tt2(0,0), eruv(O,O), irow, insc, beue(O,O), ieol) •
compute tr(ttl(O,O), tt2(0,0), ieol, irow) •
ssh(O,l)==traee.
ssh(l,O)==traee •
ssa(O,l)==trace.
ssa(l,O)==trace
compute prodt(ttl (0,0), eruv(O,O), irow, inse, vbs(O,O), insc) •
compute prodt(tt2(0,0), ttl (0,0), irow, insc, bruc(O,O), irow) •
vary i O(l)irow sentence 205 •
trts==trts+tt2(i.i)
trts==2.0 X trts
ssh(0,2)==trts •
sSh(2,0)=trts •
ssa(0,2)=trts •
ssa(2,0)==trts •
compute vtprod(tvl(O), efv(O), inse, brue(O,O), irow).
compute vtprod(tv2(0), cf(O), insc, cruv(O,O), irow) •
vary i O(l)irow sentence 214 •
trtf==trtf + tv1(i) X tv2(i) •
ssh(0,3) == 2.0 X (trtf/ns) ~
ssh(3,0) == 2.0 X (trtf/ns) •
compute prodt(ttl(O,O), eeuv(O,O), icol, insc, vbs(O,O), insc)'.
compute prodt(tt2(0,0), ttl(O,O), icol, inse, bcuc(O,O) icol) •
vary i O(l)icol sentence 220 •
tcts == tcts + tt2(i,i) •
tcts == 2.0 X tets •
ssh(1,2) == tcts •
ssh(2,1} == tcts •
ssa(1,2) == tcts •
ssa(2,1) == tcts •
compute vtprod(tv1(0), efv(O), insc, bcuc(O,O), ieol).
compute vtprod(tv2(0), cf(O), insc, ccuv(O,O), icol} •
vary i O(l)icol sentence 229 •
tetf == tctf + tv1 (i) X tv2(i) •
ssh(1,3) == 2.0 X (tetf/ns) •
ssh(3,1) == 2.0 X (tctf/ns) •
0
--
--
_.
-
----,---"
I
0
232
233
234
235
236
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
280
oFh
~~.
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
310
311
312
97
compute vtprod(tvl(O), cfv(O), inse, vbs(O,O), inse) •
vary i O(l)insc sentence 234 •
tstf=tstf + tvl (i) X ef( i) •
ssh(2,3) = 2.0 X (tstf/ns) •
ssh(3,2) = 2.0 X (tstf/ns) •
compute prod(eiv(O,O), ei(O,O), indfi, inse, v(O,O), inse) •
compute prodt(ttl(O,O), eiv(O,O), indfi, inse, bie(O,O), indfi) •
compute tr( ttl (0,0), ttl (0,0), indfi, indfi) •
ssa(3,3)=traee •
sSb(2,2)=traee •
compute prodt(ttl(O,O), eiv(O,O), indfi, inse, brue(O,O), irow) •
compute prodt(tt2(0,0), cruv(O,O), irow, inse, bic(O,O), indfi) •
compute tr( ttl (0,0), tt2( 0,0):> indfi, irow) •
ssa(0,3)=traee •
ssa(3,O)=traee •
compute prodt(ttl(O,O), civ(O,O), indfi, inse, bcuc(O,O), icol) •
compute prodt(tt2(0,0), ecuv(O,O), icol, inse, bic(O,O), indfi) •
compute tr(ttl(O,O), tt2(0,0), indfi, icol) •
ssa(1,3)=trace •
ssa(3,1)=trace.
compute prodt(t~I(O,O), eiv(O,O), indfi, inse, vbs(O;O); insc) •
compute prodt(tt2(O,O), ttl (0,0), indfi, inse, bie(O,O), indfi) •
vary i 0(1) indfi sentence 268 •
tsti=tsti + tt2(i,i) •
ssa(2,3) = 2.0 X tsti •
ssa(3,2) = 2.0 X tsti •
compute prod(crv(O,O}, er(O,O}, indfr, inse, v(O,O), inse) •
""omnut'"
.....
.t"
...... -nrr-d(c",,,fn
1:"
v \
3 1"1'
- J J "''''(1''1
-...... ...." 0'
J' l'ndf'"
'"
- /) ;ns'"
- ..... , ,rf(j
.. \ . . . . , (j\
..., I ' 1'1''''':('''\
.......... - I
compute prodt(ttl(O,O), erv(O,O), indfr, inse, bre(O,O), indfr) •
compute tr( ttl ( 0), tt'j ( 0, 0), indfr, indfr) •
ssb(O,O)=traee •
compute prodt(ttl(O,O), ecv(O,O), indfe, inse, bec(O,O), indfe) •
compute tr( ttl (0" L ttl ( 0, 0), indfe., indfe) •
ssb(l ,1 )=trace •
compute prodt(ttl(O"O), eev(O,O), indfe, inse, brc(O,O), indfr) •
compute prodt(tt2(0,0), crv(O,O), indfr, inse; bee(O,O), indfe) •
compute tr(ttl (O,O), tt2(0,0};I indfc, indfr) •
ssb(0,,1)=trace.
ssb(~,O)=trace •
compute prodt(ttl(O"O), eiv(O,O), indfi, inse, bre(O,O), indfr)
compute prodt(tt2(O,O), er~(O,O), indfr, inse, bie(O,O), indfi) •
compute tr(ttl(O,O), tt2(0,0}, indfi, indfr) •
ssb(0,2)=trace •
ssb(2,0)=trace •
compute prodt(ttl(O,O), civ(O,O), indfi, inse, bee(O,O), indfe) •
compute prodt(tt2(0,0), cev(O,O), indfc, insc, bic(O,O), indfi) •
compute tr(ttl(O,O), tt2(O,O), indfi, indfe) •
ssb(1,,2)=trace.
ssb(2,1)=trace.
if const(8) =
jump to sentence 313 •
vary i 0(1)3 sentences 311 thru 312 •
vary j 0(1}3 sentence 312 •
list i" j, ssh(i,j}, ssa(i,j), ssb(i,j), tape 5,
« sums of squares matrix }) •
v
'-"
'J
~
9
°,
°
0
°
98
e
..
313
314
315
316
317
320
321
322
323
324
compute prod(ttl (0,0), h1 (0,0), 2, 3, ssh(O,O), 3 ) .'
compute prodt(ssh(O,O), ttl (0,0), 2, 3, hl (0,0), 2 ).
compute prod(tt1 (0,0), h2(0,0), 2, 3, ssa(O,O), 3 ) .:
compute prodt(ssa(O,O), ttl(O,O), 2, 3, h2(0,0), 2 ) •
if const(9) G 0 jump to sentence 330 •
vary i 0(1)2 sentences 321 thru 324 •
vary j 0(1)2 sentences 322 thru 324 •
ssh(i,j) = ssh(i,j) + fmme(i,j)
ssa(i,j) = ssa(i,j) + fmme(i,j) •
ssb(i,j) = ssb(i,j) + fmme(i,j) •
330
331
332
333
334
335
343
344
345
399
computeprod(ttl(O,O), p'i(O,O), 2, 2, ssh(0,0),2).
compute prodt(vch(O,O), ttl(O,O), 2, 2, pl(O,O), 2 )'.
compute prod(ttl (0,0), p2(O,O), 2, 2, ssa(0,O),2 ) .'
compute prodt(vca(O,O), ttl(O,O), 2, 2, p2(O,0), 2 ) .
compute prod(ttl (0,0), p3(0,0), 2, 2, ssb(O,O), 2 ) •
compute prodt(vcb(O,O), ttl(O,O), 2, 2, p3(0,0), 2 ) •
vary i 0(1)2 sentence 344 •
list, i, YCh(i,i)' vca(i,i)' vCb(i,i)' tape 5,
« variance of estimates for rows, columns, and interactions
( ), (h est.), (a est.), (b est.) •
jump to sentence 150.1 •
stop •
400
401
402
403
404
fixe q(ka» •
zz = q(ka) •
vary kk 0(1)9999 with zz zZ(-l )-2 sentence 403 •
if zz L 0.5 jump to sentence 404 •
exit •
800
tr(aa1 (igmin,kgmin),bbl (kfinin,ifinin), itr, itc ) •
800.1 trace = 0 •
800.2 vary ig igmin(l)itr sentences 800.3 thru 800.4 •
800.3 vary kg kgmin(l)itc sentence 800.4 •
800.4 trace = trace + aal (ig,kg)Xbbl (kg,ig) •
800.5 trace = 2.0Xtrace
800.51igmin = ifmin •
800.52kgmin = igmin •
800.53kgmin = kfmin •
800.6 exit •
»,
99
899
prodt(cg(igmin, jgmin), ag(ifmin, kgmin), ifmax, kgmax,
bg(kfmin, jfmin), jfmax) •
899~03jg = jgmin.
899.04jf = jfmin •
899.05produc = 0 •
899.06kg = kgmin •
899.07kf = kfmin •
899.08produc = produc + ag(ifmin,kg)Xbg(jf,kf) •
899.1 kg = kg + 1 •
899.11 kf = kf + 1 •
899.12if kg L= kgmax jump to sentence 899.08 •
899.13cg(igmin,jg) = produc •
899.15jf = jf + 1 •
899.16jg = jg + 1 •
899.17if jf L= jfmax jump to sentence 899.05 •
899.19ifmin = ifmin + 1 •
899.2 igmin = igmin + 1 •
899.21 if ifmin L= ifmax jump to sentence 899.03 •
899.22exit •
invert(arraY(ie, maxi» •
997
997.01 factor=l!array( ie, ie) •
997.02je=maxi •
997.03array(ie, je)=array(ie, je)Xfactor •
997.04je=je-l •
997.05if jeG=O, jump to sentence 997.03
997.06array(ie, ie)= factor.
997 007 je=je+l •
997.08if je=ie, jump to sentence 997.15 •
997.09factor=-array(je, ie) •
997.1 ke=O •
997.11array(je, ke)=array(je, ke)+factorXarray(ie, ke) •
997.12ke=ke+l •
997.13if keL=maxi, jump to sentence 997011 •
997.14array(je, ie)=array(ie, ie)Xfactor •
997.15if jeLmaxi, jump to sentence 997.07 •
997.16ie=ie+l 0
997.17if ieL=maxi, jump to sentence 997.01 •
997.1 8exit •
100
998
prod(cg(igmin, jgmin), ag(ifmin, kgmin), ifmax, kgmax,
bg(kfmin, jfmin) , jfmax) •
998~03jg=jgmin •
998~04jf=jfmin •
998~05produc=0 •
998.06kg=kgmin •
998~07kf=kfmin •
998~08produc=produc+ag(ifmin, kg)Xbg(kf, jf) •
998~ 1 kg=kg+1 •
998.11 kf=kf+1 •
998~12if kgL=kgmax, jump to sentence 998.08 •
998.13cg(igmin, jg)=produc •
998.15jf=jf+1 •
998.16jg=jg+1 •
998.17if jf L= jfmax, jump to sentence 998.05 •
998.19ifmin=ifmin+1 •
998~20igmin=igmin+1 •
998.21 if ifminL=ifmax, jump to sentence 998.03 •
998.22exit •
vtprod(cg(jgmin), ag(kgmin), kgmax, bg(kfmin, jfmin), jfmax) •
•
999.04jf=j:fuJ.in •
999~05produc=0 •
999.06kg=kgmin •
999.07kf=kfmin •
999.08produc=produc+ag(kg)Xbg(jf,kf) •
999.1 kg=kg+l •
999.llkf=kf+1 •
999~12if kgL=kgmax, jump to sentence 999.08 •
999.13cg(jg)=produc •
999~ 15jf=jf+1 •
999~ 16jg=jg+1 •
999.17if jf L= jfmax, jump to sentence 999.05 •
999.22exit •
zzzzzzend of tape •
999
999~03jg=jgmin
..
101
APPENDIX B
COMPUTER RESULTS FOR VARIANCES OF ESTIMATES
OF VARIANCE COMPONENTS
The computer results for each design in Tables
shown on the following
page~.
4.1
and
4.2
are
The variances of the estimates of the
variance components are given for estimation procedures A, B and H
and for 18 different parameter sets.
102
Table B-1.
2
rc
(J
Rows
Procedure
H, A and B
0
1
6.95
1/4 1/4 1
1.95
Parameters
2
2
c
(J
(J
4
2
r
2
16
2
(J
2:
e
1/4 1
1/2 1/2 1/2 1
2
Design 6x6.equal
1/2 l/a 1
105.
0230
2.03
1
1
1
1
16
1
1L
1
4
1/4
1
1
7.52
4
4
1
JL
7.52
16
4
1
1
107.
16
16
1
1
107.
2
1/2
2
1
2052
2
2
2
1
2.52
1
0
4
1
1040
16
0
4
1
113.
16
1.1<
4
1
113.
16
16
4
1
113.
1
1
16
lL
0720
107.
6052
Columns
Procedure
H, A and B
Interaction
Procedure
H, A and B
1.88
.080
.088
1.95
.125
.125
.230
.180
.230
.180
0'72-0
0320
0720
0320
0145
0320
7.52
.320
7.. 52
0320
1070
0'420
2.52
.320
.720
.720
41333
2.00
..333
2.00
9040
1130
6.52
2.00
2.00
23.1
e
e
e
Table B-2"
Rows
Procedure
Parameters
2
r
2
e
2
rc
(j
2
1
1/4 5/4
16
:2' 5/4
1/2 1/2 3/2
2
1/2 3/2
0
0
0
0
0
(j
(j
2
c
(j
Columns
Procedure
.A.
B
120.
",706
3.12
9,.41
3.21
130.
.869
3.55
9.69
3040
132.
.975
3.79
2
2
0
0
0
0
0
0
1.95
120..
9.68
15.6
134,.
203.
2.08
135.
11.6
11.6
135.
135.
2.30
1380
12.2
12.2
138.
138.
1.95
28.4
3.00
15.6
48.5
203.
2.08
2.08
1.03
11 06
11.6
135.
2..30
2.30
1.17
12.2
12 .. 2
138.
1/2 3
2
3
0
.5
0
.5
4
.5
16
5
1 17
0
0
0
0
0
0
0
4.72
6.34
4.11
132.
152•
225.
33.4
6.10
6.10
7.48
159.
159.
159.
65.1
'" 6.70
6.70
8.. 48
164•
164.
164.
74.4
2.67
6.34
2.91
31.5
60.2
225.
33.4
2.52
6.10
5.00
5.00
20.6
159..
65..1
2.86
6.70
5.73
5.73
22.5
164.
74.4
4
2
1
16
4
1/4
16
16
4
16
4
Design 316
1
1
4
2
2
2
2
H
10.4
2.71
H
6.05
1.00
32 01
.706
1.46
A
2.90
.467
3.21
.869
..869
2
1
16
16
16
1
3;.05
.530
3.40
.975
.975
4.56
1.24
30.6
.947
1.71
.400
.625
•.625
.900
.900
2.08
28.6
3.70
9.63
42.5
112.
1.60
1.60
1.60
1.60
1.60
1.. 60
4.15
5.77
7.97
36.5
5604
130.
88.5
3060
3.. 60
10.0
10.0
10.0
10.0
116.
,
.~,
2
B
Interaction
Procedure
A and B
H
I-'
8
-
e
Table B-3..
2
r
()
2
c
2
()
rc
(le
2
1.
0
4
2 1/4 1/4 1
2
16
1/4 1
1/2 1/2 1/2 1
2
1/2 1/2 1
1
1
16
16
1
1/4
4
4
1.6
1
1
1
1.
1
2
1/2
2
1.
2
1.
16
1.6
16
1
2
2
0
0
4
16
1.
4
4
1.
1
4
16
1
1
1
16
4
~.
1.
L~
1.
1.
1
1
1
1
1.
Design 522
Rows
Procedure
Parameters
cr
e
H
A
B
10.. 6
2.. 54
119 ..
.668
3.. 01
9.. 39
3.04
1.35.
.701
3.36
9.51
3.13
136.
.. 756
3.48
2000
1190
9.. 47
1608
136.
22.3 ..
4.. 82
6082
4..41
132.
157.
250"
39.3
Interactl.on
(jOl.umns
H
6.76
1.11
41.7
.668
1060
Procedure
A
B
Procedure
A and B
H
2.75
.. 341
.3.04
.701
.701
2081
..370
3.13
.. 756
.756
5.. 76
1055
40 .. 8
1.12
2006
.433
.612
0612
.842
.842
140 ..
11.5
11.5
1400
140..
1097
142.
11..9
11.9
142 0
1420
2.. 00
37 .. 8
3.64
16 08
60.3
223 ..
1084
1084
.820
11.5
11.5
140.
1 .. 97
1.97
.903
11..9
11.9
142 ..
2.37
38.1
4.50
11.9
55.4
143.
1.45
1..45
1.45
1.45
1.45
1.45
5.. 76
5.. 76
6081
1640
164..
164.
60.7
6.15
6015
7.50
168.
1680
168.
67.5
3.00
6.82
3.29
42.0
74.6
250.
39.3
2.17
5.. 76
4.. 34
4d4
2002
164.
60.7
2..39
6015
4.83
4.83
21 ..2
168.
67.5
4.. 60
6061
8.34
4700
71.5
165.
94.3
3.28
3028
9.38
9.38
9.38
9.38
114.
108~.
g
-
e
e
Table B-4. Design C18
Parameters
2
r
(J
(J
2
c
2
(J
rc
cr
e
.1
2
0
4
2 1/4 5/4 0
16
2 5/4 0
1/2 1/2 3/2 0
2
1/2 3/2 0
1
16
4
4
16
16
2
2
1
16
16
16
1
Rows
Procedure
I
2
H
A
B
9.. 61
2..51
115..
.. 576
2.84
8.67
2.85
122.
.631
3.07
9.14
3.01
128 ..
.689
3.28
H
5.44
.830
29.0
.597
1.22
Columns
Procedure
A
B
2.. 71
.320
2.95
.638
.638
2.. 81
.. 358
3.08
.. 705
.705
Interaction
Procedure
H
A and B
3,,78
1.. 01
27.3
.740
1,,37
.286
.446
.. 446
.643
.643
2
2
2
2
2
2
0
0
0
0
0
0
106.3
115.
9.08
1307
1260
182.
1.59
12?..
10,,4
10.. 4
127.
127.
1.,73
133..
11.. 0
11.0
133.
133.
1.,69
25 ..6
2.50
14..4
43.4
191.,
1.62
1.62
.673
10.8
10.8
134.
1.77
1.77
.759
11..3
11..3
136..
1.62
25.5
.3.. 05
7.. 72
36.7
92.6
1.14
1.14
1.14
1.14
1.14
1.14
1/2 3
2
3:
0
5
0
5
4
5
16
5
1 17
0
0
0
0
0
0
0
4.14
5..36
3.36
126.
142..
20l.
2503
4.88
4.88
5.09
145.
145..
1~.5 ..
40.5
5.27
5.. 27
5.59
153 ..
153 ..
153.
44.6
2.15
5.58
2..19
27.1
52 .. 4
210.
26.1
1.,69
5.00
2.98
2.98
17.3
153.
40.6
1.90
5.38
3,,40
3.40
18.6
1.57.
46.. 1
3.. 28
4.. 50
60.36
31.3
47 .. 0
106.
70.7
2.57
2.57
7..14
7.14
7.14
7.14
82.6
1
1
1/4
4
4
16
....
&-
-
e
Table B-5.,
2
cr
2 cr2 cr2
rc
c
e
Procedure
H
A
B
2
0 1
4
2 1/4 1/4 1
16
2 1/4 1
1/2 1/2 1/2 1
2 1/2 1/2 1
9.. 86
2.40
116..
<;555
2.79
8.54
2.71
124.
.. 557
2.96
8.94
2..86
128.
.603
3..13
1
16
4
4
16
16
1069
116..
9.01
15.0
130.
202.
1.51
129.
10.3
10.3
129.
129.
1.. 61
133.
1008
10.8
133.
133.
4.26
5.82
3.58
127.
148.
224.
29.5
4.84
4.84
5010
149.
149.
149.
42.0
5.09
5.09
5.42
154.
154.
154.
44.9
r
2
2
1
16
16
16
1
1
1
1
1
1
1
1/4
4
4
16
1
1
1
1
1
1
1/2 2
2
2
0
4
0
4
4
4
16
4
1 16
J);lsign c24
Rows
Parameters
cr
e
1
1
1
1
1
1
1
1
1
H
Columns
Interaction
Procedure
Procedure
A
B
H
A and B
2.46
.259
2.70
.554
..554
2.. 57
.291
2.83
.613
0613
1.72
29.7
2.85
15.1
49.8
202.
1.50
1.50
.603
10.3
10.3
129.
1.63
1.63
.677
10.7
10.7
130..
1..91
29.9
3057
9.53
44.3
116.
1.19
1.19
1.19
1.19
1.19
1.19
2.44
5.95
2.60
32.8
61.7
225.
32.1
1.62
4.79
3.02
3.02
17 .0
148.
43.5
1.81
5016
3.41
3.4l
18.2
152.
48.9
3.70
5025
6.75
37.0
57.1
133.
7.5.9
2.55
2.55
7.03
7.03
7.03
7.03
83.3
5.79
.870
33.3
.563
1030
4..65
1.26
32.2
0926
1.. 66
.42.5
.562
.562
.. 735
.735
g
e
-
e
Table B-6. Design L20
Rows
Procedure
Parameters
2
(J
r
(J
2
c
2
rc
(J
2
(J
H
A
B
H
10.6
2.83
138"
•.533
3,,11
9.71
2,,96
142.
.526
3.16
10.2
3.11
149"
.550
3.31
4,,54
,,535
17.3
.533
,,883
1053
139.
10.5
13.7
147.
186,.
1.42
145"
11..1
2
2
0
0
0
0
0
0
145.
145"
1049
153.
11.6
11.6
153.
153.
1/2 3
2
.3
0
5
0
5
4
5
16
5
1 17
0
0
0
0
0
0
0
4.31
5,,12
3..24
150"
162..
202 ..
22,,7
4.55
4.55
3.. 94
161.
161.
161.
28.3
4,,77
4,.77
4.13
169 ..
169.
169.
29.6
e
I
2
0
4
2 1/4 5/4 0
16
2 5/4 0
1/2 1/2 3/2 0
2 1/2 3/2 0
1
16
4
4
16
16
2
2
1
16
16
16
1
Columns
Procedure
1
1.
1/4
4
4
16
2
2
2
2
liel.
A
B
2" 77
.. 246
2.96
,,526
.526
2,,91
.258
3,,11
.550
.550
1.53
13.9
1.. 47
13.7
30,,4
186.
1042
1.42
.493
11.1
11.1
145.
1.61
5,,12
1..72
13,,9
37.0
202"
22.7
1028
4.55
1.98
1.98
16,,2
161.
28.3
Interaction
Procedure
A and B
H
2.40
.660
15.1
.575
,,926
.222
.347
.347
.500
.500
1049
1..49
,,516
11,,6
11,,6
153.
1..21
13.6
1090
5.12
21 .. 8
60.7
.889
.889
.889
.889
.889
.889
1033
4,,77
2,,08
2,,08
17.0
169.
29,,6
2.45
3.26
5.25
17.4
28.8
69.5
59.4
2.00
2.00
5.56
5.56
5.56
5.56
64.2
I-'
.2
e
e
Table B-7.
Parameters
2
r
0'
2
0'
c
(irc
0-
2
e
2
0 1
4
2 1/4 1/4 1
2 1/}.j. 1
1.6
1/2 1/2 1/2 1
2
1/2 1/2 1.
1
16
4
~,
16
16
2
2
1
16
16
16
1.
9.36
2.83
1.38 ..
.. 489
.3 .. 03
9.92
2.99
146..
..508
3..19
5.05
.600
21.2
.5.32
.985
142.
10,,8
10.8
142 ..
142"
10.37
1.42
150.
lL,4
11.4
150.
150.
4.48
4.48
4.02
1.58.
158,.
158.
30.. 3
4.. 64
4.64
4.06
166..
166.
166.
30,,0
1.61
1.
1/4
4
4
16
1
1
1
1
1.
1
10.5
1408
150.
1.
1.
203.
1/2
2
0
0
4
16
2
2
4
4
4
4
16
1
4,,46
5.56
.3.48
151.
167.
1
Procedure
A
10 ..8
2.77
139.
.532
].10
1
1
Procedure
R
1
1
Columns
B
1
1.
1
Rows
A
1
1
Design L24
R
1
1.
e
139~
222,.
26.. 4.
Interaction
B
Procedure
A and B
R
2.64
.220
2.83
.489
.489
2.80
.228
2.99
.508
.508
3.45
1.13
19.5
.980
1.043
1076
14.. 8
36.4
20.3.
1037
1.37
.469
10.8
10.. 8
142.
L.w.2
1,,42
.477
11.4.
150.
1.70
17.3
2.56
6.. 91
28 .. 5
80.9
1.20
1.20
1.20
1.20
1.. 20
1.20
1.84
5•.56
1097
17.6
44 .. 8
222.
26.4
1.26
4.48
2.05
2.05
16.2
158.
30.3
4,,64
2.04
2,.04
16.7
166.
30.0
3.. 01
4.10
5.68
2.27
2.27
5.78
5.78
5.78
5.78
64 .. 7
1.. 61
17.2
11o~.
1028
210.3
37.0
92 .. 0
61.4
.577
.691
.691
.832
.832
g
109
Table B-8.
Parameters;
(j
2
r
4
2
16
(j
2
c:
2
2
rc
(J
0
1
(j
2
e
1/4 1/4 .1
2
1/4 1
1/2 1/2 1/2 1
2
1/2 1/2 1
Design 3x3-equal
Rows
Procedure
H, A and B
17.4
5009
264.
.750
5.50
Co.luIDns
Procedure
H,p A and B
4.71
.281
5.. 09
Interaction
Procedure
H,9 A andB
~181
0337
.337
.750
.556
0750
..,556
1
1
1
1
16
1-
.1
1
4
1/4
.1
1
2004
4
4
1
1
20.4
16
4
1
1
272 0
16
16
1
.1
272.
2
1/2
2
1
8037
2 1.3
31'18
2
2
2
.1
8837
8.37
3.18
.1
0
4
1
7038
3038
100 2
16
0
4
1
3038
100 2
16
4
4
.,
....
307 •
16
16
4
1
3070
1
1
16
1
2838
272.
3-01.
57 ..4
2.38
1818
2838
1018
0688
1.18
ao.. 4
1.18
20.4
1.18
272 ..
0
31=4
3010
57ob.
1.. 18
10",2
10.2
136..
-
110
Table B-9.
.parameters,
2 (J2
(J2 (Jrc
e
c
4
2
2
16
1
1/4 1/4 1
2
1/4 1
1/2 1/2 1/2 1
2
L;Ol.U1l1I1S
liOWS
(J 2
r
0
Design D18-1
1/2 1/2 1
Procedure
H, A and B
Procedure
H and A
B
19 .. 2
4.72
4.87
.312
5.63
292 0
5.17
.830
6.08
..317
5.. 26
Interaction
Procedure
H,? A and B
.,215
.386
..386
.822
.. 813
.. 626
.822
, .. 813
,,626
1
1
1
1
16
1
1
1
4
1/4
1
1
22,,5
4
4
1
1
22.5
16
4
1
1
302 0
16
16
1
1
302.
2
1/2
a
1
9.. 26
2..53
2,,26
3052
2
2
2
1
9.26
9 11
8.62
3..52
1
0
4
1
80 15
4.34
3.60
11.,3
16
0
4
1
340.
4.34
3.60
1l..3
16
4
4
1
340.
16
16
4
1
340.
1
1
16
1
2.63
302 ..
63.5
2.59
2.49
1.31
2,,59
2..49
1.. 31
.. 818
.. 751
1.31
20.. 9
20,,7
1.31
20.9
20 ..7
1.31
274..
274.
0
?4
..J
'1
e.,&,.
315.
72.3
':\1
1,.31
0
1,"':"0/
309.
59 ..5
11.3
11.3
151.
e
.
e
,
Table B-10.
Parameters
2
r
0-
0-
2
c
2
1/4
16
2
1/2 1/2
2 1/2
4
2
2
rc
0-
0
1/4
1/4
1/2
1/2
1
1
1
e
1
1
1
16
4
4
16
16
1
1
1
1
1/4
4
4
16
1
1
1
1
1
1
1
1
1
2
1/2
2
2
2
2
1
16
16
16
1
0
0
4
16
1
4
4
4
4
16
1
1
1
1
1
1
1
1
Design D18-2
Columns
Procedure
Rows
Procedure
2
0-
e
R
A
B
R
20.3
5.. 41
276..
.995
6014
17.5
5022
2650
0828
5.68
1707
5026
265.
0819
5.68
7075
0681
1708
.995
1044
3021
281.
2106
2705
2980
367.
2057
2740
20.9
2009
274.
27~.•
2.50
274.
20 .. 7
2007
274.
274.
9093
1104
9.. 29
319 ..
344.
4170
79 .. 8
8.95
8095
8..36
313 ..
313.
3130
66.9
8062
8.62
7065
309.
309.
309.
58.9
A
B
Interaction
Procedure
A and B
R
4.78
0321
5022
.828
0828
4.87
.322
5..26
.819
0819
3.21
11.2
1096
27 .. 5
46 04
367 ..
2057
2057
.803
20.9
20.9
274.
2050
2..50
.757
2007
2007
274.
2..16
10.2
2.59
8049
27.4
96 00
1.35
1.35
1.35
1.35
1.35
1.35
3.64
1104
5..18
13 08
66.6
417.
79 .. 8
2.44
8095
4.. 05
4.. 05
33-3
313.
66 ..9
2.26
8.. 62
3.. 58
30.58
31..9
309.
5809
4.91
6.. 43
12 .. 5
21..1
4505
119 ..
162
3.52
3.52
3.40
.. 830
13.2
.916
1.. 37
.240
.417
.417
.661
.661
li.l
11.1
11.1
11..1
146.
'I
1-1
~
.
e
,
e
Table B-ll.
(J2 (J2 (Je2
c
rc
2
1/4
16
2
1/2 1/2
2 1/2
H
25qO
5.99
295.
1q47
7.28
17.9
5.42
266.
.921
5.89
18.1
.5.49
267.
.915
5.92
12.7
1.43
38.4
1.47
2.68
4.97
.382
.5.42
.921
..921
5.08
.385
5.49
.91.5
0915
8.80
1.8.5
34.3
1.71
2.92
2.73
276.
21,,3
2l..3
276•.
276.
2.67
275.
21.2
21.. 2
275.
275.
4q72
26.0
4.23
39.2
88.2
517.
2.73
2.73
.901
2103
21.,3
276.
2.67
2.67
.853
21.2
21,.2
275.
4.02
25.3
5.21
2005
69.5
246.
1.60
1.60
1.60
1.60
1.60
1.60
9.. 25
9.2.5
8.69
.315 •
315q
315.
68.1
8.92
8.92
7.96
310.
310.
310.
59 .. 8
6.39
16.8
8.67
31.4
124.
595.
123.
2.61
9.25
4.29
4.29
33.9
315.
68.1
2.42
8.92
3.79
3.79
32•.5
310..
59.8
8.01
12.0
16.3
39qO
102.
291.
196.
3.90
3.90
11.7
11.7
11.7
11.7
148.
1.
1
1.
1
1
1
1.
4072
296.
2309
39 0 2
340.
517.
1/2
2
2
:2
1.
1.
12.9
1608
0
0
4
16
4
1
1..3,,2
1
1
2
2
1
16
16
16
1
B
1
1
1
1
1
1
16
4
4
16
16
1/4
~,
4
16
1
4
1.
1.
4
1
16
1.
1.
4
Interaction
Procedure
A and B
H
A
1
1
1
1
2
Columns
Procedure
H
0
1/4
1/4
1/2
1/2
4
Design D18-3
Rows
Procedure
Parameters
(J2
r
e
.343.
~.06.
59.5 ..
123"
A
B
.360
•.570
•.570
.846
.846
!-'
!-'
I'\)
e
e
Table B-ll.
Design D18-3
H
Rows
Procedure
A
B
2
0 1
4
2 1/4 1/4 1
16
2 1/4 1
1/2 1/2 1/2 1
2 1/2 1/2 1
25.0
5.99
2950
1.47
7.28
1709
5.42
266 ..
0921
5.. 89
18.1
.5.49 I
267.
.915 I
5092
1
16
4
4
16
16
4
16
4,,72
296"
23,,9
39,,2
340.
5170
2.73
2760
2L3
2103
276•.
276.
2
1/2
12.9
16.8
13,,2
343.
9.. 25
9025
8.69
315.
315.
315.
68.1
Parameters
0'r2
2
1
16
16
16
1.
2
2
0'2c 0'rc
0'e
1.
1
1/4
~.
2
0
0
4
16
1
1.
1
1
1
1.
1.
1.
1.
1.
:1
1
1
2
2
1
1
4
4
1.
1.
4
4
1
~.06.
1
1.6
:1
5950
123.
e
H
Colurrms
Procedure
A
B
Interaction
Procedure
H
A and B
12.7
1.43
38.4
1047
2.68
4.97
.382
5.42
0921
.. 921
5.08
.385
5.49
.915
0915
8.80
1.85
34.3
1.71
2092
2.67
275.
21.2
21.. 2
275.
275 ..
4.72
26.0
4.23
3902
8802
517.
2.73
2.73
.901
21.3
21.3
276 ..
2.67
2.67
.853
21.2
21,,2
275.
4.02
2503
5.21
2005
6905
2460
1.60
1.60
1.60
1.60
1.60
1.60
8.92
8.92
7.96
310.
310.
310.
59.8
6039
16.8
8.67
31.4
124.
595.
123.
2061
9.. 25
4.29
4.29
33.9
.315.
68.1
2.42
8.92
3.79
3.79
.32 05
310"
59 .. 8
8001
1200
16.3
39.0
102.
3.90
3.90
11.7
11.7
11.7
11.7
148.
I
I
291.
196.
.360
.570
.570
.846
.846
f-J
~
•
.
,
Table B-12..
2
r
2
Cf
c
Cf
2
rc
2
Cf
e
Columns
Procedure
H
A
B
:2
0 1
4
2 1/4 1/4 1
16
2 1/4 1
1/2 1/2 1/2 1
;2
1/2 1/2 1
1909
5078
295.
0969
6.40
19.4
5.. 78
295.
.924
6.33
18.6
5.56
278.
.880
5.99
1
16
4
4
16
16
1
1
1
1
1/4
4
1
1
4
2.91
306.
23 .. 4
23 ..4
16
1
1
3.07
305.
23.4
2409
309.
326.
306.
2.66
286.
21..8
2108
286.
286.
2
1/2
2
2
10..4
10..8
10.2
351.
357.
375.
8506
10.2
10.2
9.89
353.
353.
3530
81.6
9.10
9.10
8.16
323.
32.3.
323.
63.3
2
1
16
16
16
1
2
0
0
4
16
1
1
1
1
1
1
1
1
1
1
4
4
1
4
16
1
4
1
1
De sign D18-4
Rows
Procedure
Parameters
Cf
e
e
306.
H
5.98
.456
9.22
.969
1.09
A
B
Interaction
Procedure
A and
H
B
5027
.353
5.78
0924
.924
5.14
.352
5.56
.880
.880
3.07
5.45
1..25
2409
29.9
326..
2.91
2.91
..920
2304
23.4
306.
2.66
2.66
.. 818
21 .. 8
21..8
286.
1.70
4.08
1.84
3•.31
8030
25.. 4
1.48
1.48
1.48
1.48
1.48
1.48
3.18
10.. 8
5.23
7.77
46.8
375.
85 .. 6
2.84
10.2
4.88
4088
38.3
353.
81.6
2.42
9.10
3..88
.3.88
33.5
323.
6303
4.29
4068
12.8
15.4
2104
39 0 6
171..
3.91
3.91
12.4
12.4
12.4
12.4
166.
1.09
.574
4.01
.794
.914
.263
.452
,,452
.717
.717
I-'
~
INSTITUTE OF STATISTICS
NORTH CAROLINA STATE COLLEGE
(Mimeo Series available for distribution at cost)
273. Schutzenberger, M. P. On the definition of a certain class of automata. January, 1961.
274. Roy, S. N. and J. N. Shrizastaza. Inference on treatment effects and design of experiments in relation to such inferences.
January, 196!.
275. Ray-Chaudhuri, D. K. An algorithm for a minimum cover of an abstract complex. February, 196!.
276. Lehman, E. H., Jr. and R. L. Anderson. Estimation of the scale parameter in the W'eibull distribution using samples cen·
sored by time and by number of failures. March, 196!.
4
277. Hotelling, Harold. The behavior of some standard statistical tests under non-standard conditions. February, 1961.
278. Foata, Dominique. On the construction of Bose-Chaudhuri matrices with help of Abelian group characters. February,
196!.
279. Eicker, FriedheIm. Central limit theorem for sums over sets of random variables. February, 196!.
280. Bland, R. P. A minimum average risk solution for the problem of choosing the largest mean. March, 1961.
28!. ,,yilliams, J. S., S. N. Roy and C. C. Cockerham. An evaluation of the worth of some selected indices. May, 1961.
282. Roy, S. N. and R. Gnanadesikan. Equality of two dispersion matrices against alternatives of intermediate specificity.
April, 1961.
283. Schutzenberger, M. P. On the recurrence of patterns. April, 196!.
284. Bose, R. C. and 1. M. Chakravarti. A coding problem arising in the transmission of numerical data.
285. Patel, M. S. Investigations on factorial designs. May, 1961.
286. Bishir, J. W. Two problems in the theory of stochastic branching processes. May, 1961.
287. Konsler, T. R. A quantitative analysis of the growth and regrowth of a forage crop. May, 196!.
288. Zaki, R. M. and R. L. Anderson.
ning over time. May, 196!.
April, 1961.
Applications of linear programming techniques to some problems of production plan-
289. Schutzenberger, M. P. A remark on finite transducers. June, 1961.
b·+mc'+p in a free group. June, 1961.
290. Schutzenberger, M. P. On the equation a·+n
29!. Schutzenberger, M. P. On a special class of recurrent events. June, 1961.
=
292. Bhattacharya, P. K. Some properties of the least square estimator in regression analysis when the 'independent' variables
are stochastic. June, 1961.
293. Murthy, V. K. On the general renewal process. June, 196!.
294. Ray-Chaudhuri, D. K. Application of geometry of quadrics of constructing PBIB designs. June, 1961.
295. Bose, R. C. Ternary error correcting codes and fractionally replicated designs. May, 1961.
296. Koop, J. C. Contributions to the general theory of sam pling finite populations without replacement and with unequal
probabilities. September, 1961.
297. Foradori, G. T. Some non-response sampling theory for two stage designs. Ph.D. Thesis. November, 1961.
.
-
298. Mallios, W. S. Some aspects of linear regression systems. Ph.D. Thesis. November, 1961.
299. Taeuber, R. C. On sampling with replacement: an axiomatic approach. Ph.D. Thesis. November, 196!.
300. Gross, A. J. On the construction of burst error correcting codes. August, 1961.
30l. Srivastava, J. N. Contribution to the construction and analysis of designs. August, 1961.
302. Hoeffding, ,"Yassily. The strong laws of large numbers for u-statistics. August, 1961.
303. Roy, S. N. Some recent results in normal multivariate confidence bounds. August, 1961.
304. Roy, S. N. Some remarks on normal multivariate analysis of variance. August, 1961.
305. Smith, ,,y. L.
A necessary and sufficient condition for the convergence of the renewal density.
306. Smith, ,"V. L.
A note on characteristic functions which vanish identically in an interval.
307. Fukushima, Kozo.
308. Hall, ,,y. J.
A comparison of sequential tests for the Poisson parameter.
Some sequential analogs of Stein's two-stage test.
309. Bhattacharya, P. K.
310. Anderson, R. L.
September, 1961.
September, 1961.
Use of concomitant measurements in the design and analysis of experiments.
Designs for estimating variance components.
31l. Guzman, Miguel Angel.
Thesis-January, 1962.
312. Koop, John C.
August, 1961.
September, 1961.
November, 1961.
November, 1961.
Study and application of a non-linear model for the nutritional evaluation of proteins.
An upper limit to the difference in bias between two ratio estimates.
Ph.D.
January, 1962.
313. Prairie, Richard R. and R. L. Anderson. Optimal designs to estimate variance components and to reduce product variability for nested classifications. January, 1962.
314. Eicker, FriedheIm.
Lectures on time series.
January, 1962.
315. Potthoff, Richard F.
Use of the Wilcoxon statistic for a generalized Behrens-Fisher problem.
316. Potthoff, Richard F.
Comparing the medians of the symmetrical populations.
317. Patel, R. M., C. C. Cockerham and J. O. Rawlings.
February, 1962.
February, 1962.
Selection among factorially classified variables.
February, 1962.