ESTIMATION OF THE MEAN AND VARIANCE COMPONENTS IN
SOME RANDOM EFFECTS MODELS WITH COMPOSITED SAMPLES
By
David Anthony Edelman
Department of Biostatistics
University of North Carolina, Chapel Hill, N.C.
Institute of Statistics Mimeo Series No. 854
,
DECEMBER 1972
..
ESTIMATION OF THE MEAN AND VARIANCE
COMPONENTS IN SOME RANDOM EFFECTS
MODELS WITH COMPOSITED SAMPLES
by
David Anthony Edelman
.e
A dissertation submitted to the faculty
of the University of North Carolina at
Chapel Hill in partial fulfillment of
the requirements for the degree of
Doctor of Philosophy in the Department
of Biostatistics
Chapel Hill
1972
Approved by:
Adviser
•
Reader
Reader
ABSTRACT
DAVID ANTHONY EDELMAN. Estimation of the Mean and Variance Components
in some Random Effects Models with Composited Samples (Under the
direction of LAWRENCE L. KUPPER.)
A method is presented for estimating the mean, variance components, and their variances in a two-way crossed classification where
the experimental units are composited.
Estimates of the variance
components are obtained using a procedure similar to Henderson's method
I in which the sums of squares are computed using weighted or unweighted
observations or cell means.
.e
and compared.
Four estimation procedures were suggested
The results show that any best estimation procedure
depends on the relative sizes of the variance components.
Conditions
on the various design parameters were investigated to determine when
exact tests of hypotheses on the variance components can be made.
For
all estimation procedures considered, the designs should be completely
balanced if exact F-tests are required.
Using some of the results from the two-way crossed classification,
estimates of the mean, variance components and their variances in 2- and
3-stage nested designs were derived.
These nested designs allowed for
the experimental units to be composited in all but the last stage.
estimation procedures were proposed and evaluated.
For both the 2- and
3-staqe nested designs any best estimation procedure depends on the
relative sizes of the variance components.
Four
The procedure for obtaining estimates of the variance components
and their variances in the 3-stage nested design was extended to the
r + I stage nested design in which compositing could occur in any of
the first r stages •
.e
ACKNOWLEDGMENTS
Special thanks go to Drs. L.L. Kupper, D.W. Gaylor, R.W. Helms,
P.A. Lachenbruch, and D.C. Leighton.
This investigation was supported by NIH Training Grant No. Tal
GM00038 from the National Institute of General Medical Sciences •
.e
TABLE OF CONTENTS
LIST OF TABLES
Paqe
vi
Chapter
I.
INTRODUCTION AND REVIEW OF THE LITERATURE
1
1.1 Introduction
1.2 . Review of the Literature
4
1.2.1 - Methods of Estimation
1.2.2 Variances of the Estimators of the
Variance Components
1.3
1.4
.e
II.
10
13
15
17
2.1
2.2
2.3
17
17
2.5
2.6
IV.
5
THE TWO-WAY CROSSED CLASSIFICATION WITH COMPOSITING
2.4
III.
Comparison of Estimation Procedures
Outline of Research
1
Introduction
The Model
A Method for Estimating the Variance Components
and their Variances
Expected Values, Variances, and Covariances of
the Tk' k = 1,2,3,4
Estimating the Mean
An Alternative Model for a Two-Way Crossed
Classification with Compositing
20
24
41
42
COMPARISON OF ESTIMATION PROCEDURES IN A TWO-WAY
,CROSSED CLASSIFICATION
46
3.1
3.2
3.3
46
47
48
A Method for Comparing Estimation Procedures
Designs
Results
HYPOTHESIS TESTING IN THE TWO-WAY CROSSED CLASSIFICATION
WITH COMPOSITED SAMPLES
4.1
4.2
4.3
4.4
Introduction
Restrictions on si' t , Vij' and n
for Procedures
j
ij
1 and 3
Restrictions on si' t. , Vij' and nij for Procedures
J
2 and 4
Independence of Sums of Squares
67
67
67
75
77
V.
5.1
5.2
5.3
5.4
5.5
5.6
5.7
5.8
VI.
6.4
Introduction
The Model
A Method for Estimating the Variance Components
and their Variances
Determination of var(~), W, and !r+l
80
81
83
91
92
102
107
108
118
118
118
120
124
THE 2-STAGE NESTED DESIGN WITH COMPOSITED SAMPLES
130
7.1
7.2
7.3
130
130
7.4
7.5
7.6
7.7
7.8
Vln.
Introduction
The Model
Estimates of the Variance Components and their
Variances
Estimating the Mean
Numerical Comparison of Estimation Procedures
A Comparison of some Designs
Two Models Studied by Kussmaul and Anderson
An Evaluation of some of Kussmaul's and
Anderson's Results
SAMPLING VARIANCES OF THE VARIANCE COMPONENTS IN THE
r+l STAGE NESTED DESIGN WITH COMPOSITED SAMPLES
6.1
6.2
6.3
VII.
Page
80
COMPOSITING IN THE 3-STAGE NESTED DESIGN
Introduction
The Model
Estimates of the Variance Components and their
Variances
The Effects of Compositing on the Estimate of 0;
Estimating the Mean
.
Comparisons of the Four Estimation Procedures
A Comparison of some Designs
Some Remarks about Estimating ~ and
0;
131
135
138
141
148
151
SUMMARY OF RESULTS AND SUGGESTIONS FOR FUTURE WORK
154
8.1
8.2
154
158
Summary of Results
Suggestions for Future Work
LIST OF REFERENCES
160
e.
vi
LIST OF TABLES
Paqe
2.3.1
2.3.2
22
Analysis of Variance Table for the Two-Way Crossed
Classification
23
u..
2.5.1
Values of
3.2.1
Designs Used to Compare the-Four Estimation Procedures
when ti tj v·1)·n··
= 100, 200
1)
49
Relative Efficiencies of Four Estimation Procedures for
a Two-Way Crossed Classification with Composited Samples
55
Best Estimation Procedures for the Two-Way Crossed
Classification with Composited Samples
65
=n
68
3.3.1
3.3.2
.e
Values of fij' 9ij, hij, and mij for Four.Estimation
Procedures
1)
for Four Estimation Procedures
4.2.1
Expected Mean Squares for Procedure 1 when n,.
4.3.1
Expected Mean Squares for Procedure 2 when
.. = v and n··
=n
v.1)
1)
1)
4.3.2
Expected Mean squares for Procedure 2 when si
tj = t, v ij = v, and nij = n
42
75
= s,
76
5.3.1
Analysis of Variance Table for the 3-Stage Nested Design
83
5.3.2
.. , h .. , and m:. for Four Estimation Procedures
Values of f 1)
1)
1)
84
5.4.1
Values of Uij for Each Estimation Procedure
90
5.5.1
3-Stage Nested Designs Used in the Comparison of Four
Estimation Procedures when t t Sitijnij = 50,100,200
93
\
i
5.5.2
5.5.3
j
Relative Efficiencies (R.E.'s) of Four Estimation
Procedures for Estimating the Variance Components and
Mean in a 3-Sta~e Nested Design with Composited Samples
Best Estimation Procedures 'for the 3-Stage Nested
Design with Composited Samples
97
102
5.6.1
5.8.1
Relative Efficiencies for the Comparison of Designs
1,4,8, and 10, and Designs 6 and 19
Designs to Evaluate the Biases in Estimating cr~;r and
cr;, when cr~ > 0 and si'= nij
5.8.2
105
=1
Percent Bias in Estimating cr c2;r and cr r2 when s.1
111
= n 1)
..
= 1
and cr; > 0, and Relative Efficiencies of Two Estimation
Procedures
112
6.3.1
Analysis of Variance Table for the r+l Stage Nested Design
120
7.3.1
Values of f.1 and m.1 for Four Estimation Procedures
132
7.3.2
Analysis of Variance Table for the 2-stage Nested Design
132
7.5.1
Values of
7.6.1
Designs Used to Compare Estimation Procedures for the
2-Stage Nested Design with Composited Samples when
L sini = 10,30,100
u
1
for Four Estimation Procedures
139
142
i
7.6.2
7.6.3
7.7.1
Relative Efficiencies of Four Estimation Procedures for
a 2-Stage Nested Design with Composited Samples
144
Best Estimation Procedures for the 2-Stage Nested Design
with Composited Samples
148
Relative Efficiencies for the Comparison of some of the
Designs given in Table (7.6.1)
150
e.
CHAPTER I
INTRODUCTION AND REVIEW OF THE LITERATURE
1.1
Introduction
During the past fifteen years the estimation of variance,
components in random and mixed effects models has received considerable
attention in the literature.
For mixed models the emphasis has been
on finding methods for estimating the variance components that are not
.e
biased by the presence of the fixed effects, but little attention has
been given to the problem of jointly estimating the random and fixed
effects and finding estimation procedures and/or classes of designs
that provide efficient estimates of both the random and fixed 'effects.
In random effects models the emphasis has been on finding procedures
to estimate the variance components.
While various procedures have
been suggested for obtaining estimates of the variance components, and
finding the variances of these estimates for a few special classes of
designs, little work has been done in determining the relative merits
of the different estimation procedures.
In both random and fixed
effects models little attention has been given to finding methods for
estimating the variance components in situations where the usual analysis of variance assumptions and/or experimental procedures are not
valid.
2
In this study procedures are investigated for estimating the mean
and variance components in some basic experimental designs in which the
experimental units have been composited.
In the usual experimental
design situation the sampling procedure is to select a group of experimental units, apply some treatment to each unit, and then make one or
more measurements on the experimental units.
In the situation where the
experimental units are composited, groups of experimental units are
pooled into a single unit prior to measurement.
Compositing has been
used in several fields of research, e.g., soil sampling, where the estimation of the mean has been the primary objective, with estimates of the
variance components used for the sole purpose of estimating the variance
of the sample mean.
The only systematic study of compositing was done by Kussmaul and
Anderson (18) who considered compositing in a three stage nested design
in which groups of experimental units from the second stages were composited.
Kussmaul and Anderson studied different procedures for esti-
mating the mean and variance components using three different cost
functions that related the total cost of the experiment to the costs of
measurement and the costs of sampling from the first and second stages.
Perhaps the greatest constraint on their study is that they assumed
measurements are made without error, i.e., the component of variance due
to measurement error is zero.
Further discussion of the above study is
given in this chapter and in Chapter 6.
Compositing of the experimental units can either be designed into
the experimental situation or can occur inadvertently.
Some general
examples of compositing and situations in which compositing can occur or
e.
3
might be the appropriate experimental procedure are given below:
1) The experimental units are too small for a single reliable measurement to be made.
Consequently, several experimental units arc
combined into one unit so that a single measurement can be made.
2) The costs of measurement are expensive relative to the costs of
the experimental units.
If measurements were cheap and easy to
obtain, one would not composite but would make measurements on
each experimental unit.
3) Some experimental units were inadvertently combined prior to
measurement.
Some specific examples of compositing are:
.e
1) An experimenter wants to determine the mean DDT concentration in
a certain lake.
He suspects that the concentration will depend
on the location in the lake at which the sample is taken.
Loca-
tion is not a factor of interest, but nevertheless the experimenter still wants to estimate the location component of variance.
A reasonable procedure would be to composite the samples from
several locations and then make one or more measurements on the
composited samples.
Provided that all experimental units are not
composited into a single unit, estimates of the mean DDT concentration and of the among and within location components of variance can be obtained.
The above is an example of compositing in
a 2-stage nested design.
4
2) Cameron (4) gives the following example of compositing in a 3stage nested design.
The purchase price and customs levy-on a
shipment of raw wool depends on the amount of wool present, i.e.,
its clean content.
A shipment of raw wool comes in bales.
The
clean content varies considerably from bale to bale and among
cores (samples) within a bale.
Since the laboratory determination
of the clean content of a core is costly and time consuminq,
Cameron devised a sampling plan in which cores were composited
prior to measurement of the clean content.
Under this sampling
plan the various components of variance can be estimated.
3) The following is an example of compositing in a two-way crossed
classification.
To determine the effects of alcohol on the fat
accumulation in rat livers, rats are daily given a certain amount
of alcohol.
After one month each rat liver is biopsied.
In
order to accurately measure the fat accumulation, at least 1 gram
of hepatic tissue must be obtained.
To satisfy this requirement,
a number of biopsies (2-5) are performed on each liver; these
biopsies are then pooled into two or more groups so that measurements of the fat accumulation can be obtained for each group.
From such an experiment the mean fat accumulation and the rat,
biopsy, and measurement error components of variance can be estimated.
1.2
Review of the Literature
In this section no attempt is made to review all aspects of vari-
ance component estimation since this has already been ably done by
e.
5
Harville (13) and Searle (2a).
Instead, the most significant contribu-
tions to the following three areas of variance component estimation will
be discussed since they have a bearing on some of the results ohtained
in this study:
methods of estimation, variances of the estimators of
the variance components, and comparisons of estimation procedures.
Harville (13) presents a comprehensive review and analysis of the
literature dealing with one-way random classification.
This extensive
study covers all of the literature pertaining either directly or indirectly to the non-sequential point estimation of the variance components
in the one-way random effects model.
Harville discusses the estimation
of variance components when the data are either balanced or unbalanced,
Bayesian and non-Bayesian estimation procedures, and the effects on the
.e
estimation of the variance components when some of the usual analysis of
variance assumptions are relaxed.
Searle (28) provides an extensive review of variance component
estimation.
His article is divided into three parts.
The first part
presents a lucid discussion of the various models underlying variance
component estimation, the second part deals with variance component
estimation when the data are balanced, with prime consideration being
given to the analysis of variance method of estimation.
The third part
is primarily concerned with the various methods of estimating the variance components when the data are unbalanced.
1.2.1 Methods of Estimation
In 1953 Henderson (14) gave the first systematic approach to the
estimation of the variance components in random or mixed models when the
6
data are unbalanced.
Method 1:
He proposed the following three Methods:
The sums of squares for each source of variation are calculated in the same way as though the design was balanced.
The
estimates of the variance components are obtained by equating
the various sums of squares to their expected values and
solving the resulting system of equations.
Hethod 2:
If the model contains fixed effects, then obtain some least
squares estimates of the fixed effects and adjust the data
for the fixed effects.
The estimates of the variance compo-
nents are obtained by applying Method 1 to the adjusted data.
Method 3:
Estimates of the variance components are obtained by computing
the reductions in the sums of squares due to fitting appropriate sub-models of the model being considered, and then
equating these reductions to their expected values.
Henderson pointed out that Method 1 leads to biased estimates of
the variance components if. certain effects in the model are fixed or if
some of them are correlated.
Method 2 also will give biased estimates
if some of the elements of the model are correlated.
Method 3 yields
unbiased estimates, but it can require a large amount of computing.
Henderson illustrated the use of the three methods using a replicated
two-way crossed classification.
Searle ·(26) reviewed Henderson's three methods using the general
linear model and showed that:
1) Method 1 gives biased estimates of the variance components if the
model contains any fixed effects other than the overall mean.
e.
7
2) If the model contains fixed effects, then there are many ways in
which the data can be corrected for these effects depending on
the choice of a generalized inverse used in the estimation of the
fixed effects.
Moreover, Method 2 cannot be used if there are
non-zero interactions among the fixed and random effects.
3) Method 3 does lead to a unique set of unbiased estimates of the
variance components.
Gaylor and Hartwell (7) indicate a procedure for estimating the
variance components in models containing fixed, random, or finite effects.
Sums of squares are computed as in Method 1, but the form of the expected
mean squares depends on which effects are considered fixed, random, or
.e
finite.
For the model
i
j
k
= 1, ....• r
= 1, ..... c
= O,l, ••• nij
(1.2.1.1)
the expected mean squares are obtained by taking the expected value of
each mean square in which the Yijk are replaced by the above model.
These expected mean squares will include terms such as E(riri') for
i
= i'
and i ~ i'.
Gaylor and Hartwell define C1~ as
Sr
where ~ r i
i=l
= 0,
the population.
and Sr is the number of levels of the r i effects in
They show that
8
=
E(r r )
i i"
S - 1
r
Sr
Or
1
- -S
Or
r
2
= i"
i
2
i ; i"
Using similar definitions for the Cj and (rc) ij effects, the expected
mean squares can be derived; the only unknowns being 0r2 , 0 2 , 0 2 , and 0 2 •
c
rc
e
This procedure gives unbiased estimates of the variance components and
can be used if the model contains fixed, finite, or random effects.
Koch (17) presents a general procedure for estimating the variance
components in any random effects model.
This method does not require the
computation of any analysis of variance table as other procedures do.
Koch uses the fact that the expected value of the squared difference between pairs of observations is a function of the variance components.
For example, in model (1.2.1.1),
E(y.1)'k -yo1 .. )· ..k ... )
2
:AI
2(0:)
if i = i. j = j, k
2 (0'2 + 0 2 + 0'2 )
e
c
rc
if i
.
..
= i,
if i
~
i, j
2 (0 2 +0 2 + 0 2 + 0 2 ) if i
r
e
c
rc
~
i, j f' j, k
222
2 (Oe + O'r +Orc)
...
...
..
..
~ k
j ~ j, k ~ k
=
...
j, k ~ k
...
~ k
..
...
.
The sums of squared differences can be used to obtain unbiased estimates
of the variance components.
Forthofer (6) modified and extended Koch's estimation procedure.
The procedure given by Forthofer overcomes one difficulty in Koch's approach in that in some cases Koch's procedure leads to solving p equations in r unknowns (p > r), in which case a unique set of estimators is
not obtained.
~orthofer's
extension allows for the model to contain
..
9
both fixed and random effects, and the estimation procedure does not require that the random effects come from any particular distribution.
In
the mixed effects model the variance components are estimated without
first estimating the fixed effects.
The maximum likelihood method provides a procedure for estimating
the fixed and random effects in a model.
One disadvantage of this pro-
cedure is that the likelihood equations cannot be solved explicitly for
the fixed and random effects, and that solving the equations using itterative techniques often requires an excessive amount of computing.
and Rao (12) developed a general
Hartley
set of likelihood equations and provided
an itterative procedure for solving the equations.
The authors discussed
the asymptotic properties of the estimates and showed that they are con-
.e
sistent and asymptotically efficient.
Joreskog (16) studied the maximum likelihood estimation of the
fixed and random effects in a more general model than the one considered
by Hartley and Rao (12).
Joreskog assumed the following model
E(Y) = X B P
where
Y is a nxp matrix of n observations on p response variables, X and Pare
known full rank matrices of sizes nxq and hxp respectively, q < n, h < p,
and B is a qxh matrix of fixed effects.
The rows of Yare assumed to be
independently distributed, each haviag a multivariate normal distribution
with the same variance-covariance matrix V, where
A, L, the symmetric matrix F, and the diagonal matrices 0 and Tare
10
parameter matrices.
Joreskog (16) gave a procedure for estimating the
fixed effects (B) and the parameters of V using maximum likelihood
methods.
1.2.2
Variances of the Estimators of the Variance Components
Hammersley (10) and Tukey (29) derived the variances of the
estimators of the variance components for the one-way classification
without assuming that the random effects are normally distributed.
Most
authors have derived the sampling variances under normality assumptions,
and the remaining discussion in this section is confined to this situation.
Searle (23) derived the sampling variances of Henderson's Method
1 estimators for the one-way classification.
Although they had previously
been derived by Crump (5), Searle's principal contribution was that he
provided a procedure for deriving the sampling variances using matrix
methods.
Searle made use of the fact that sums of squares can be repre-
sented as a quadratic form
...
~Ai~'
and if
~
has a p-variate normal dis-
...
tribution with 0 mean and covariance matrix V, then cov (~
2 trace(VAiVA i "').
...
Ai~' Y
Ai ...~)
=
Searle (24, 25) then obtained the variances of Hen-
derson's Method 1 estimators for the two-way crossed classification and
the 3-stage nested design, respectively.
Mahamunulu (20) using Method 1 extended Searle's results to the
4-stage nested design.
Blischke (1) found the sampling variances for a
3-way crossed classification in which one effect was assumed to be fixed.
He used Method 2 to estimate the variance components.
Later, Blischke
(2) developed a general procedure for finding the sampling variances in
e.
11
an r-way crossed classification random effects model using Henderson's
Method 1.
Bush and Anderson (3) studied three procedures for estimating the
variance components in a 2-way crossed classification.
nate the three procedures as:
The authors desig-
the method of fitting constants, unadjusted
sums of squares (Method l), and weighted squares of means.
For all three
procedures they derived the variances of the estimates of the variance
components.
Hirotsu (lS) derived the sampling variances for a two-way crossed
classification when the variance components are estimated using the method
of unweighted means.
.e
In Chapter II it is shown that this estimation pro-
cedure is related to Method 1 •
Forthofer (6) obtains the variance-covariance matrix of the modified
Koch estimators.
The great utility of these results is that the procedure
for finding the sampling variances is not restricted to any particular
class of designs or to random effects models.
A procedure for directly obtaining numerical values of the coefficients in the formulae for the expected mean squares in the general
random effects model was given by Hartley (II).
He termed the procedure
'synthesis' since the coefficients are computed using the columns of the
design matrix as synthetic data which are analyzed using the analysis of
variance method.
Hartley's method can also be used to obtain the vari-
ances and covariances of the mean squares.
In the few cases where Hart-
ley's method has been used to give mathematical formulae for the coefficients in the expected mean squares, the formulae are the same as those
12
obtained using Method 1.
Rao (22) extended Hartley's method to general
design matrices (Hartley required the design matrix to have exactly one
1 in each row and the remaining elements being 0), and to the general
mixed model.
Searle (27) gives a procedure for computing the large-sample
variance-covariance matrix of the maximum likelihood estimators in the
general linear model.
Searle shows that if a2 is the vector of q vari-
ance components then
where
V
is the covariance matrix of the observations and Vi is the
2
partial derivative of V with respect to a i •
Searle points out that in
the case of unbalanced data analytic expressions for var(£2) are difficult to obtain since in this situation V-I is not amenable to any analytic form.
Kussmaul and Anderson (18) considered the problem of estimating
the variance components in the following model:
Yij
i=l, •••• ,r
j = l, .... ,ci
(1.2.2.1)
tij > 1
The above model represents a 3-stage nested design where tij
experimental units in the jth sub-class within the ith-class have been
composited into a single unit on which a single measurement is made.
Measurement error is assumed to be zero.
Kussmaul and Anderson con-
sidered three procedures for obtaining the sums of squares for classes
and one procedure for obtaining the sub-class sums of squares.
They
e.
13
derived the variances of the estimators of the variance components for
each of the estimation procedures.
In Chapter VI it is shown that model
(1.2.2.1) is a special case of a more general model and that the
estimation procedures considered are related'to Henderson's Method 1.
1.3
Comparison of Estimation Procedures
About the only feasible way to compare different estimation pro-
cedures is to compare the variances of the estimators of the variance
components.
Since these variances are functions of the unknown variance
components and are also complicated functions of the design parameters,
analytical comparisons are not feasible even for the simplest designs.
In the studies discussed below the comparisons among the estimation pro-
.e
cedures have been made using numerical methods involving the computation
of the sampling variances for a given design and estimation procedure.
Crump (5) compared, for the one-way classification, Method 1 and
the method of unweighted means.
Crump concluded that the unweighted
means estimator is best, i.e., has smaller variance, if the among classes
component of variance is large compared to the within class component;
otherwise, it is poorer.
Bush and Anderson (3) compared the following three estimation procedures:
the method of fitting constants, Method 1, and weighted squares
of means.
The comparisons among the estimation procedures were based on
12 two-way crossed classifications and 18 sets of variance component
values.
They concluded that if all components of variance are small
relative to the error component then Method 1 in combination with L-shaped
14
designs gives estimators with the smallest variance.
If this is not the
case then the method of fitting constants should be used with diagonal
type designs.
Hirotsu (15) showed that the unweighted means estimators
had uniformly smaller variances, than the estimators given by the three
Bush and Anderson methods, except for
a~.
a:cwhen a:c is small compared to
However, Hirotsu's study was done using only 5 of the Bush and An-
derson designs in which there were no empty cells, and only four sets of
values of the variance components.
One of the more extensive evaluations of estimation procedures was
done by Forthofer (6).
He compared Methods 1 and 3 with the modified
Koch procedure for the two-way crossed classification with and without
subsampling, and Method 1 with the modified Koch for the 2- and 3-stage
nested designs.
Forthofer arrived at the following conclusions:
TWo-way crossed classification:
the type of design considered.
The best estimation procedure depends on
Usually the modified Koch procedure gives
estimates that are quite competitiYe with those of Method 1.
Method 3
can be better or worse than Method 1 depending on the particular design.
2-stage nested design:
If the design is quite unbalanced then Method 1
is better than the modified Koch method.
In other cases there is little
difference in the two methods.
3-stage nested design:
Usually Method 1 is better than the modified
Koch method.
Forthofer points out that the sampling variances for Methods 1 and
3 have been worked out for only a few specific classes of designs.
For
the modified Koch method this is not a constraint on its use, and perhaps
15
this advantage outweights the somewhat lower efficiencies it tends to
give.
Kussmaul and Anderson (18) using model (1.2.2.1) compared the following estimation procedures:
1)
Method of weighted means
2) Method of unweighted class means
3) Method of unweighted measurements
.
i ng a 2 and a 2 / as t h e among f'1rst stages and among second
Des1gnat
r
c r
stages within first stages variance components,
Kussmaul and Anderson
concluded that
1) If a:;la~/r > 1, procedure 2) is best for estimating ~, a~, and
.e
2
ac / r •
2) If
a:;la~/r
< 1, procedure 1) is best for estimating
either procedures 2 or 3 should be used to estimate
~
a:,
but
and
ar2 + a 2 / •
c r
1.4
outline of Research
In this study the problem of estimating the mean and variance com-
ponents in some basic random effects models in which the experimental
units are composited is investigated.
For the two-way crossed classification with column and/or row compositing, a procedure similar to Henderson's Method 1 is derived for estimating the mean and variance components and their variances.
This
16
procedure permits the estimates to be computed using sums of squares
obtained from weighted or unweighted observations or cell means.
Four
estimation procedures are proposed and their relative merits are investigated.
Using the results from the two-way crossed classification estimators for the mean and variance components and their variances are
derived for 2- and 3-stage nested designs.
For both of these designs four
estimation procedures, i.e., four different ways ot weighting the observations, are compared.
For the r + I stage nested
des~gn
a method for obtaining estimates
of the variance component and their variances is given, when compositing
can occur in any of the first r stages.
e.
•
CHAPTER II
THE TWO-WAY CROSSED CLASSIFICATION WITH COMPOSITING
2.1
Introduction
In this chapter a procedure is presented for obtaining estimates
of the ~J:Il~a~_~!1~_~ar!:~~~t!colllP()n.ent~,and the variances of these estimates,
in a two-way crossed classification which allows for both across rows
and across columns compositing.
The procedure is similar to Henderson's
•
Method I for estimating the variance components, but the sums of squares
.e
are calculated using weighted or unweighted observations or cell means •
The estimates of the variance components are obtained by equating the
sums of squares to their expected values and solving the resulting system
of equations.
2.2
The Model
In Chapter I an example of compositing
sification was given.
i~ __a__1;~-way
crossed clas-
In this example the compositing was only across
columns (biopsies), and not across rows (rats).
The general model
underlying this example is given by:
Yijk
(rc) ijR.m + e ijk
(2.2.1)
18
i = l, •.... ,r
where,
j
= 1, ..... ,c
k
= O,l, ••••• ,nij
•
= 0,
( For k
Yijo does not exist )
and,
S.
1.
= the
number of experimental units composited in each cell of
row i.
t j = the number of
_
col~mn
~xperimen~~~_~n~~s composited
in each cell of
j.
v .. = the number of experimental units composited in the i_jth cell.
1.)
The V
are such that Max(si,t j )
ij
Also v ij is such that Vij
= ais i
~
V
ij
~
Sitj for all i and j.
= bjt , where a and b are
j
j
i
positive integers.
The.notation
is used to indicate that the summation is not neces-
sarily over all Sitj pairs of 1-m values.
=L:L:
If v
ij
= Sitj' then
In the remainder of this study summations of the
1=1 m=l
form
~
f:1
will be indicated by
L: .
i
For model (2.2.1) it is assumed that the only fixed effect is the
mean,
~,
and that the row effects, r
i1
, the column effects, c, , the
Jm .
interaction effects, (rc), .n_' and the measurement error effects, e. 'k'
1.) MU
.-2:l..are random, and are independently and normally distributed with zero
•
1 y.
means and variances cr 2 , cr 2 , cr 2 , and cr 2 , respect1.ve
r
c
rc
e
It is also as-
sumed that all covariances among the effects are zero.
A total of N
_ _ n . __ . .
."._'_.
_ _. _ " . -
_._~",
measurements are made on C experimental units, where
•
e.
19
•
and C ..
There are some restrictions on model (2.2.1).
L:L: v .
i
j
j
1
For each cell in
/-
the i th ' row of the_c:.~mposited design for whi:.~_nij~_~ the same number
of each of the row effects arecomposit..=?, Le., 5i is constant for cells
(i,l), (i,_,2) ••••• (i,c).
the same number of column
Similarly,
__£ol\1lll!Lof
fo!:_~nY__ 9-!,,~!1
effects_ar~_<=~m~sited within
the design
each cell in that
column.
In order to clarify model (2.2.1) and some of its associated nota--"~-----
tion, a schematic representation of the model is given below for a com=
.e
V
= v 22 = 4,
12
and v 21
= 8.
Starting with a non-composited design with
4 rows and 6 columns, designate the experimental unit in the 1 th row and
mth column by (1,m).
fx~p~u)
Columns
1
l}!,;
2
Rows
3
4
1
1,1) "
(2,1)
~:.~:
2
(1,2)
3
'(1,3)
4
(1,4)
L(2,2)\
(2,3)
(2,4)
(3,2)
(3,3)
(3.4)
(4,2)
(4,3)
~
(4,4
!( 3,5)
/(3,6{
----..J
(4,5)
'--
/(4,6'(
'--.J
Let the experimental units from the above design be composited in
the following way:
)
I
uni ts (1,5), (1,6), (2,5), and (2,6)
20
..; Units (3,5) , - (3 (6) , (4,5) , and (4,6)
~/Units (3,1) ,
:J
(3 , 3)" (3 i 4), (4, '1) ,
(3,2) ,
(4,'2) ,
(4,3) , and (4,4)
•
Units (1,1) , (2,2) , (1,3) , and (2,4)
Units (2,1) , (1,2), (2,3) , and (1,4 ) are discarded and no measurements are made on them.
From this compositing scheme.J:!te
fo!!~...!.!.ng
design is obtained:
Columns (c .., 2)
hi
1
V
., 4
n
V
=2
= 6 l?
V
·'4
n
=1
n
=1
11
1
Rows (r
= 2)
11
21
2
21
V
•
n
= 1
12
12
22
22
4
Note that nij measurements are made on the v ij composited units.
The model for any observation in the above design can be represented by model (2.2.1).
For example, the model for the second meas-
urement in row 1, column 1, of the composited design is:
~cjm;!.f(rc)
+
4\
1111
1
4~
2.3
(rc)
1122
+ (rc)
1113
+ (rc)
)+e
112~
112
A Method for Estimating the Variance Components and Their Variances
Henderson's Method 1 for estimating the variance components in a
random effects model requires that the sums of squares for each source
of variation be computed as if the design were balanced.
Using this idea,
the various sums of squares for model (2.2.1), or any two-way crossed
..
21
classification, can be represented in a rather general form as follows:
SS(Rows)
=~/)D,
,y. 'k)2 -ILLLm ..y"k)2= T
~~ k 1) 1)
~ i j k 1) 1)
SS-(Interaction) ...
LLh..fLY,
'k)2i j 1 J \k 1J
2
SS(Error) =L: LLYi 'k
i j k
J
T - T + T .. T
1
2
..
-LL:~(LYi'k)2 =
i
j
ij
k
J
1
- T
..
- T
3
- T + T
1
T
- T
5
2
..
6
(2.3.1)
The f .. , g .. , h
1)
1J
ij
, and mi' are sets of weights that depend only on
J
the particular way in which the observations or cell means are weighted •
.e
The weights are subject to some restrictions which are given in section
2.4.
In Table (2.3.1) the values of the f
, gij' h ij , and m for four
ij
ij
different estimation procedures are given. These estimation procedures
are not arbitrary, and there is some rationalization for their use.
The
weights for two of the procedures correspond to the weights used to compute the usual analysis of variance sums of squares as though the data
were balanced (Procedure 1), and to compute the sums of squares using
the method of unweighted means (Procedure 3) if for all i and j, nij > O.
The other two estimation procedures (2 and 4) are derived from the first
two u.sing the v.. as weights.
1J
In Chapter III a comparison of the four
estimation procedures is made to determine which procedure is best for
estimating the variance components in a composited two-way crossed classification.
22
Table (2.3.1)
Values of f ij , gij' h , and m for Four Estimation Procedures
ij
ij
Procedure
f ij
1
1/;n:y\ 1..
v
V •.
1.)
2
LV
..
•
1.' jn1.)
1.
1
3
n 1.)
..
1
1
1
n .. ~
J c*i
1.)
)
V •.
V .•
1.J
4
n.jJLv
..
1.
•
1.)
n
1.
1.)
2
n··JLLv
..
1.)
• •
1.)
ij
e.
1. )
In Table (2.3.1),
N~
= total
number of cells for which n .. > O.
1.)
(N~ ~
rc).
ci*
= the
number of cells in row i for which nij > O. (ci*
r.*
= the
number of cells in column j for which n ij >
)
o.
~
c).
(rj*
~
r) •
The sums of squares corresponding to any of the four estimation
procedures are computed by substituting the values of the f .. , g .. , h .. ,
1.)
1.)
1.)
and m.. given in Table (2.3.1) into the sums of squares defined by
1.)
(2.3.1).
If any n
ij
= 0,
then the summations in (2.3.1) are over those
values of i, j, and k for which n ij > O.
In this and subsequent sections,
various formulae are derived and presented which contain n .. as a divisor.
1.)
In those cases where n ij
= 0,
the formulae are still valid since in these
~
..
23
cases the summations over i and j are defined only for those values of
i and j for which nij >
o.
The analysis of variance table for the two-way crossed c1assification using sums of squares defined by (2.3.1) is given in Table
(2.3.2).
Table (2.3.2)
Analysis of Variance Table for the Two-Way Crossed Classification
-
Source
d.f.
Rows
r-1
Columns
c-1
~
N-N
Error
Expected Sums of Squares
2
2
2
2
w (Jr+w (J +w (J +w (J
lit
e
11
12 c 13 rc
222
2
W (Jr+W (Jc+W (Jrc+W (J
21
22
23
21t e
W (J2+w 02+w 0 2 +w (J2
31 r 32 c 33 rc 31t e
(N - N")(J2
e
SS .. T -T
1
1 It
SS .. T -T
2
2 It
N -r-c+1
Interaction
Sums of Squares
SS == T -T -T +T
3
3 1 2 It
~
SS
.
It
.
T -T
5
6
Searle (24) suggested the following procedure for obtaining estimates of the variance components and the variance-covariance matrix of
the estimates.
If the estimate of (J2 is denoted by
02r
s
...
(J"'2
H
C
0"'2
rc
w
w
12
W
W
21
W
W
23
W
31
W
11
W
..
22
32
...
02 ,
1
0
0
-1
0
1
0
-1
-1
-1
1
1
W
13
W
33
then let
W
-It
.
Itl
W
1t2
N-N
and
~
24
or
T
t
=
1
2
T
3
T
4
Then,
~2 + w
w
Ht
",2
=
Ws + a
w
e-..
11 r
"'2
=
W
21
[
a +
r
"'2
ow a
31
r
+
"'2
12
"'2
W
22
W
a +
c
"'2
32
"'2
"2
"'2
w14 a e
"2
a + w13 a rc
c
a +
c
W a
23 rc
"'2
W
33
a
rc
+
+
W
]
a
2 .. e
+ (N-N")&~
Then, Henderson's Method 1 estimators are defined by
or
Ht
s
=
W- 1 (H!
-
"'2
has a a
e
"'2
aw
=
From Table (2.3.2), a
2
-
Ws
2
=
e
X distribution with
ss
e-a.
..
-
"'2
awl
(2.3.2)
e-a.
/(N-N").
It can be shown that SS
4
N-N" degrees of freedom and that SS.. ' and
hence &~, is independently distributed of SSi' i = 1,2,3, i.e.,
cov(~:,SSi) = 0,
i
= 1,2,3.
Thus, the variances and covariances of
&~,
cr~, &:c can be obtained as linear functions of var(a:) and the variances
= 1,2,3, k = 1,
var(a~) = 2a:/(N-N").
and covariances of SSi which are functions of the T , i
k
2,3,4.
Since SS .."
a: X~-N'"
From (2.3.2), and using
var(~:)
cov(~:'SSi)
is given by,
= 0, i = 1,2,3,
var (~ - (H var(~H- + !!,,!(var(~:J) (w-f
W-
l
cov(~:,s) = -w-l~var(a:J
(2.3.3)
e.
25
To find var(s) it is necessary to find var(t) and the elements
Since w1.'j is the coefficient of a~ in E(SS,), and the SS.
J
1.
1.
of Wand w.
-;
are linear functions of the T , the wij
k
are determined by evaluating
the appropriate sums and/or differences in the E(Tk ), k • 1,2,3,4.
2.4
Expected Values, Variances, and Covariances of the T , k
k
R
1,2,3,4.
To facilitate the derivations of the expected values, variances,
and covariances of the Tk , k • 1,2,3,4, some vector and matrix notation
is defined.
~
is a 1 x v row vector of ones.
!y!,;" = J v
.e
is a v x v matrix of ones •
Drc(X ij ) is a rc x rc diagonal matrix.
• • • •• X Ie' X 21'
••••• , X2c , ••••• , Xr·1-'
DC(X ij ) is a c x c diagonal matrix.
D (X,.). diag (X ,
rc 1.J
11
••••• ,
·X
Xrc )·
Dc(X ij ) = diag(X i1 , X , •••
i2
•• , Xic)
B~
Q
=
.'
,
(B~·• B~'• ••••••'B~)
=z" '
-1'"
.
= (Ql:,Q.
2:····: Qr)'
.
Let
y~
Qi
= DC (qij) = diag(Qil'
be a 1 x rc row vector of
(2.2.1) •
Y~
= (Y , Y , •••.• , Yl '
11
12
C
=
~
( Xl,
~
~)
I.2' ..... , "i.r '
12'
qi2' ••••• qic)
the rc cell means of model
26
The Tk , k
in Y.
= 1,2,3,4,
can now be represented as quadratic forms
Using these quadratic forms, the expected values and the variance-
covariance matrix of the Tk will be determined.
~(~~)ijYijk)'
• J;(~>ijnijYijr
T, •
Similarly,
T, •
•
~(~ aijYij)'
=
~(~j)'-i)' • ~ x.i~~:ii • !'Dr ( ~iAi) !
~(J; qijyij)' =
T 3 ""L:L:P
.. y~,
.
,
1) 1)
T.
Y'Q'QY, and
qij
(2.4.1)
(2.4.2)
qijnij
=
2
=Y"'PY,
-
t~bijYij)'
a ij • fijn ij
,where
(2.4.3)
and P1'j "" h 1· j n i ).
= y'BB'y, and
bij
= Dlijnij
(2.4.4)
Lancaster (19) has shown that if Y is normally distributed
N(lJ,V), and Pk is a symmetric matrix then,
(2.4.5)
r
2tr(Pkv
+ 4lJ"'FkVF~
2tr(FkVFk"'V) + 4lJ"'P kVFk ",lJ
if k
= k'"
if k ~ k'"
(2.4.6)
"tr" denotes the trace operation on a matrix.
use will be made of the following trace
In this section frequent
prope~ties:
tr(zzz),
231
whenever the indicated matrix operations can be performed.
e.
27
Por each of the quadratic forms Y"'Pk!., where PI ...
P
...
2
= Q Q,
P
= P,
s
Dr(~i~i},
...
and P
Let
matrix Pk bY!Kij.
= ~ , denote the ijth column of the rc x rc
~
... ~l
, then
-r:c
~
lJfijnij
1'" f
~~~ij
~fijnij
~gijnij 4: gijnij
...
, k
III
1
, k • 2
~
2
lJhijnij
~m, .n . j
~J
~
, k ... 3
LL
m. j n, . ,
i j ~ ~J
k = 4
Por each of the four estimation procedures defined in Table
(2.3.]) it can be verified that
~l'" f
"
...-r:c sij ... ~l;cf~ij , for i • l, ••• ,r, j ... l, •• ,c,
... Ill'" f
-":c-2~J
when the appropriate values of f ij , gij' h , and m are substituted
ij
ij
into lJl'" f ".
-":~~J
H....pkVF k~' and
Thus, for each of the four estimation procedures ~"'Pk~'
~"'FkVFk ...~, k .; k'" can be treated as constants.
Since Ht
in (2.3.2) consists of pairs of differences among the T , k ... 1,2,3,4,
k
Ht and consequently!. and var(!.) will not contain any terms in lJ.
Al-
though E(Tk), var(Tk ), and COV(Tk,TkJ do contain terms in lJ, they will
be computed as tr(VF k ), 2tr(VFk
)2,
sand var(!.) are independent of
~.
and 2tr(VFkVF ...) respectively, since
k
Using the method of section 2.3 to obtain estimates of the variance components and the variances of these estimates, restrictions on
the estimation procedures can now be stated.
1)
If T
k
They are:
... !"'Pk!., then F must be symmetric.
k
The matrices in the
quadratic forms given by (2.4.1), (2.4.2), (2.4.3), and (2.4.4)
are symmetric.
28
2)
H... FJe,ll,
ll"'FkVFJe,ll, and
that s and
var(~)
H... FkVFk ...ll f
k J4 k"', must be constants, so
given by (2.3.2) and (2.3.3) are independent
of ll.
Model (2.2.1) can be rewritten in terms of the cell means as,
~
y ..
+
~J
i
s' t·
l:rL
c' m + vI ~f
J m
J
ij~ m
= 1, •••• ,
r, j
= 1, ..•. ,
(rc)"/I
~n
..
+ 1
ff.iT
~J
~J~m
c, k = 0,1, ••.• , n ..
(2.4.7)
~J
For model (2.4.7), cov(Yij' y ... j ...) is given by:
i
cr 2
r
cr 2
c
cr 2
rc
cr 2
e
Si
tj
Vij
n ij
if i =
--+--+ - - + - -
o
.... ,
~
j
=
j'"
e.
if i
= .... ,
if i
~
i"', j
if i
~
i"', j # j'"
~
=
j'"
Let V be the variance-covariance matrix of Y, the 1 x rc vector
of cell means, and let V be partitioned into r 2 matrices Vii' of size
c x c.
v
11
V
12
. . . . . . . . . Vlr
V21
V
0:
V
rl
V
r2
.......
Vrr
e
29
where,
(2.4.8)
Using the partitioned form of V given
by (2.4.8)
and the quad-
ratic forms defined by (2.4.1), (2.4.2), (2.4.3), and (2.4.4), the expected values and the variance-covariance matrix of the Tk , k • 1,2,3,4,
can now be determined using (2.4.5) and (2.4.6), and dropping the terms
in
~
for reasons given previously •
.e
" - 1 "L..J f. jn. .
= (Jr2L..J
1 s1 ( j 1. 1.)
+
2
L:f i jn1.)
..
e L:
.,
(J2
1. )
2
)
2 L:L:fljn1.~)'
f~ .n~.
+ CJc
+ CJ2 "
" 1.) 1.)
L..J L.J
i j
tj
rc 1 j v ij
(2.4.9)
30
+L: L: (b.~ ",V. '"
.~ ...
~r~
A~V .• A.
-~ ~~~
=<
~
~
.!.)2
J:
(2.4.10)
~
CJ2 ( ~a .. )2
~
Si
li
1)
(2.4.12)
e.
E(T 2 ] and 1/2 var(T 2 J can be obtained directly from E(T 1 ] and
1/2 var(Tll by simply interchanging the summations over i and j, inter-
changing
CJ
-E
si
CJ2
and -£, and by replacing f .. by gij'
t
+
~)
j
2",~
2
2
e i j
~J
~J
CJ LJLJg . . n ..
Then,
(2.4.14)
31
~2
+ 0e2}:
g 2.. n ..
•
1)1)
1
.e
Then,
~!~(~Vii"Q·1(L.Q
. .V
i. i )!.
.
1...
1
... 1
1 1 1
1
32
sion gives,
=LLn~.(~ qiJ' L f .....n. j ... + °t~fijL: g .... jn ....
• •
1.1
+
02
rc
1.J
1.
f .. gi .n. .
1.J
v.
J 1.J
.... 1.J
J
+
1.j
02 f
e
J'
1.
.... 1.
1.
~2
ijgij
1. J
(2.4.16)
e.
(2.4.17)
1
2" var (T3J = tr (PV) 2
= tr L:L:p.v.....
P ....V .....
• •
1. 1.1.
1.
1. 1.
1. 1.
=
~
tr L:(PiV
)2
ii
i \
+
trL~
i~~
P.V .... P ...V .....
1. i 1.
1.
1. 1.
(2.4.18)
33
tr
LL
p. v. "P"V"i
.~., ~ ~ ~ 1
~
..
~r~
0; PiJe
P. v. i .... ~ ~
8
i
2
"'tr 0 e(PijPi 'j~
acIt L:'~~i
t2
J
1r
(I}
2~
~
+ 0 e p.~ 0 e r
- + a
- 1 ) + 0e2 Pi Dc nij
-1 )
~j
rc PiDe Vij
.
Let Zij' be the j-j' th element of PiV
.
ii
, then
.e
(2.4.20)
(2.1.21)
Combining (2.4.19), (2.4.20), and (2.4.21) into (2.4.18), and
substituting for Pij • hijn~j gives,
~ var (T. J= ~(:: hijntj + ~ hij n1 j + :~; hijnlj + a~ hijni~'
+
*
hijnI{1hj,nk- hijDfj) +
:i hijDfj~hijDkhijDtj)]
(2.1.22)
34
= tr""A.A.V ..... P ....V .....
~~-~-~ ~ ~ ~
~
~~
~
=""
'LJ
i i"
A:"V i ~.... p.~ "'V ~.....~_~
A·
L.J-~
='"
.. P.~V ~~_~
.. A.
L.J A:"V
_~ ~~
i
+Li'-. J'i'"
A:"V ~~
.....P ~....V.~ "'iA
L.J_~
_~.
(2.4.23)
~~i"'Vi'
... p ....V ..... !.
i J'i
~
~
~
1
~
(2.4.24)
e.
+
(J
2 (ail
a iC) P.V2+
- , •••• , ViI
v.~c
~
re
2 (ail
a iC ) 1/2
- , • • • • , - P.
nil
n Ie
~
(Je
Then,
a.
1
~
2
j)
(2.4.25)
Substituting (2.4.24) and (2.4.25) into (2.4.23), and
a ij
=:
f
p ij
ij nand
ij
= h ~)
.. n~.,
1)
gives
.~
35
02
+....!:£ f.jn ..
v..
1.)
1.
1.)
(2.4.26)
By making the appropriate changes in (2.4.26),
~ COV(T••T,) ·~tijn~j~; F9i'jni'j
+
+ ::
9ijn,j + :~; 9,jnij
a; 9j)' + :i 9~jn~j(~,hij,nk- hit~j)]
.e
1
=
~B1.':V'
• -
1.
1.1
B~V.
B~V. )
, ~-1.
'""
'""
1.2 , •••• , ~-,..1.
1.
LL
~~V
i i
=
ii
1.
1.r
B
-
"~1.' ..
= '""
B':Vi,B.
,,~"".. _1.
B:"V·1.1._1.
...B· ..
~-1.
1.-1. + ~~
-... 1.1.-1.
2
)2
= S.r "L...JD
'1.'j
1. j
~
(2.4.28)
i~i
i
B;V .. B.
(2.4.271
+ 0
b~,
L~
c j t,
2
)
+ 0
b~
L..2:1
rc, v. .
2
)
1.)
+ 0
b~.
L21
e
n ..
2
j
(2.4.29)
1.)
(2.4.30)
SUbstituting (2.4.29) and (2.4.30) into (2.4.28), and replacing
36
+ CJ 2
e
LL·
2 n 2.•
m..
. .
1.)
(2.4.31)
1.)
1. )
1
2' var(T It ]
= tr(B l!. V) 2
= B"'v B !!...v .!!
From (2.4.31)
=
22: 1Ptm.
[r
CJ
_
i Si
j
n.
1j 1.j
)2 + 22:et(D
j
CJ
C j
i
•• n..
1) 1)
)2 + rc2LLmrjnij
CJ
i
j
Vij
(2.4.32)
= -B"'V 0 r (A.A~)
V -B
:-1.-1
B.. .V
-
=~B"'''''V
..... ,~B; . . V1..... 2 , •••• ,~B-:
. . V ..... )
..,.
1. 1 4 . .-....
...- 1 1 r
1.
. . ( . .) =~....
. B.....V· . . 1A1A 1. . , •••• ,2:'"
B.....V..... ArAr. . )
B V 0 r -1-1
A.A.
-
Then,
1.
i -1
1
~
-
i ....- 1
1. r- -
e.
37
-=
L(!~V'':~i
•
1
1
1~
+
.
L
.... ~.
1 r1
~i'" ...V....
1 i~i~ 2
(2.4.33)
B ':V .. A.
- 1 11-1
.e
(2.4.34)
(2.4.35)
Combining (2.4.34) and (2.4.35) into (2.4.33), and letting
(2.4.36)
By making the appropriate changes in (2.4.36),
38
C2.4.37)
1
~COV(T3,T~J
V ~~~V)
= trCP
Then,
e.
C2.4.38)
"
" ~ ~V pl./2
L.JB.
i~-~
~
~ .. p.1/2 + L.J
" " B.~ ~V •.
= B.V
-~ ~~ 1
i~1i -1
~~
r
~ 1/2
biJ·~l P.
= -.
81
-e
....
J
b
+
02
~
il
---, •••• ,
rc ( ViI
°2 ( L
+ c
2
.~~.
~
•
+ arc (
r~
b.11
b.~~I
~,
I
"11'····,
1~
~p.1/2
~
b. }
~
--=.:.
° ~b'l
- t, .••• ,
tc
+ c2
b. )
~c
v:~c
I
1/2
p.
~
..... ,
b.~C )
Vic
+
2
°e
~
r~
I
2
Pit
~
b. )
n.- ' .... , -=.:..
n.
c
(b.11
~I
~ b.~
~ c
.~.
p.l. /2
-tc
2
+ °e
)
~
pU
2
1
1/2
Pi
(b.~I
nil'· .. ·'
bic )
iik
pl/
2
39
SUbstituting the above result into (2.4.38) and replacing b .. by
1.)
m.. n ..
1.) 1.)
(2.4.39)
The variance-covariance matrix of the Tk'S, var(!), is given by
var(T I]
var(t)
= cJIt
COV(T I ,T 2]
J
var(T 2
+
COV(TI,T,]
COV(T ,T
I It
3]
COV(T 2 ,Tit
COV(T 2 ,T
var(T
3]
J,
(2.4.40)
COV[T"TIt ]
var[T lt ]
.e
where the elements of the variance-covariance matrix or the right-handside of (2.4.40) are given by formulae (2.4.13), (2.4.15), (2.4.16),
(2.4.22), (2.4.26), (2.4.27), (2.4.32), (2.4.36), (2.4.37), and (2.4.39),
and c
= 4~2~cFkVFk~C'
where ~cFk is a constant for each of the four
estimation procedures.
The E(TkJ, k
= 1,2,3,4
(2.4.17), and (2.4.31).
are given by formulae (2.4.9), (2.4.14),
The elements of W and
determined from the coefficients of the
oj
~lt'
namely the Wij' are
in E(SSiJ, where E(SSiJ is a
For example, E(ssll = E(T I ] - E(T It ] , and
2 0c'
"
t s 0 f 0 r'
2 arc'
2
.
4 31) t h e coe ff l.c1.en
4 9 ) and (2 ••
a nd 0e2 g1.ven
from ( 2••
linear function of the E(TkJ.
by wII ' w12 ' w13 ' and WIlt respectively, can be determined.
i
m
1,2,3, j
= 1,2,3,4
are given in (2.4.41)
The Wij'
40
2
W
21
=
2
LL:gij~ij
S1
.
i
_
)
~l
'"' mijn .. )2
. Si ~
1)
j
1
_.
W
24
= LL:g~
.n ..
i j 1) 1)
W
=
W
31
32
LLhijnij
..
Si
1 )
= L~hijn1j
i j
-
~~m~jnij
-
W
-
W
tj
ll
12
-
W
21
-
W
22
-
W
W
33
34
= LL:h.jn
..
i j 1 1)
-
= Vij
=
1 for
)
~~j(~mijnij)
)
1
_
L~mij~~j
2
-W
If si ~ t j
1
_
2
= LLhijnij
V' •
i j
1)
W
13
14
-W
-
23
W
24
.
~P1m
. Si . 1)D.1)-) ,
••
1 )
-
2
2
V
1)
2:2:m~
jn. j
i j 1 1
(2.4.41)
all i and j then the formulae for
COV(Tk,Tk~J' and the coefficients of the
oj
in the expected sums of
E(Tk ],
41
squares correspond to those given by Hirotsu (15) when the f .. , g .. , h ij ,
.
1.)
1)
and m.. for procedure 3 are used, and to those given by Searle (24) when
1.)
procedure 1 is used.
2.5
Estimating the Mean
An unbiasecl estimator of the mean, ll, is
=
.e
LB·
.nij
i j
1.)
y ··
1)
-
u .. = Uijnij' and the Uij are subjected to the restriction
1.)
(![; ~
Let u'
~ ~)
, where u1, = (ui l ,
~~Uij
Ui2' .. ··'U ir
... 1.
J.
Then, II" = U'Y, and
"
var (ll)
=0
u, V
U
(2.5.1)
U.Vi.U
-1.
.
1- i
=0
a:(Lu..)2
S.
1
j
1.)
2
ij + 2 ' " u~.
2 '"U
1)
arc
L
J
- aeLJn'j
j v 1)
..
j
1.
(2.5.2)
42
L
• 'J'
Ui"'",V.",.U.
1. r1.
1.
-
(2.5.3)
1.--1.
SUbstituting (2.5.2) and (2.5.3) into (2.5.1) and replacing Uij by
iiijnij gives,
var (ll)
= ()'2L:~(Lii'jn
..)2
r . si . 1. 1.J
1.
J
+
()'~:Etl.(Liii.n'j)2
j
J
.
J
l.
1.
(2.5.4)
Corresponding to each estimation procedure defined in Table
(2.3.1) a set of ii .. 's can be defined.
1.J
They are given in Table (2.5.1).
Table (2.5.1)
Values of
...
u
ij for Four Estimation Procedures
Estimation Procedure
1
lIN
2
3
4
vi'jln1.J
{ ..L:L:v
. . ij
1.
2.6
J
An Alternative Model for a Two-Way Crossed Classification with
Compositing
Model (2.2.1) is not the most general model for a two-way crossed
classification with compositing.
A more general model is given by:
e.
43
Cjm +
V~j
nL
A.,m
*
(rc) ij1m + e ijk
(2.6.1)
i = l, •••• ,r, j = l, •••• ,c, k = O,l, •••• ,nij
where,
Sij
= the
number of row effects composited in the i_jth cell.
t ij = the number of column effects composited in the i_jth cell.
Again, if Yij is the mean of the i_jth cell, then model (2.6.1)
can be rewritten as:
t
ij
~
I
I
L-ic.Jm +
v··
= lJ + Sij
m
1J
L:
1,m
•
(rc) ij1m +
n~.
1)
L:k ttijk
(2.6.2)
COV (Yi j , Yi .. j ..
J is
given by
2
ac2
a~
a~
arc
+ -Vij
- +nij
Sij'" tij
o
i = i; j
=
j
44
COV(Yij'Yiij = E~ij - E(Yii))(Yii ' = E(Yid)
~c.
"e \
= E(-=--'Lr. + 1
+ _l_LL(rc) .. + _1
'(ij £ 1.£ "Gj~)m Vij £ m
1.)£m
n ij ~ i j j
· (S~i fri!
+
E~j ,~Cj'm + V~j Fprc) ij 'R.m+ n~j ~ "; j1<)
Then,
Min {Sij ,Sij")
=
-:s~.-.~;-.-.-" E ( ~ r ~£
1.)
1.)
e.
+ cross product
x.
0'2
=
r
Min (S' . , S ....) =
s.jS....
1.)
1.)
1.
1.)
The covariance matrix, V, of
y
Ma:X(S.1.).,s.1.). ..)
can be partitioned as in (2.4.8)
to give
1 )
( Vij
+
2
O'e Dc
(2.6.3)
S11.
.. is a c x c matrix whose elements are
1
(
Max Sij ,Sij" ) , and Tii" is a
c x c diagonal matrix whose diagonal elements are
1
Max(t1.)'
.. t 1.....J ) .
4S
If Vii and Vii~ given in (2.6.3) are used to derive E(Tk ] and
COV(Tk,Tk~]' k, k~
= 1,2,3,4,
then algebraic expressions for these
quantities similar to those given in section 2.4 cannot be obtained.
For example, the coefficient of
a~ in E(T l ] is given by ~~~Sii~i'
i
Using the above definition of Sii no simplification of this term is
obtained.
In this study no further consideration will be given to model
(2.6.1), since the methods for obtaining the expected values, variances
and covariances of the Tk , k
= 1,2,3,4,
are no longer applicable.
One
possible solution to obtaining estimates of the variance components and
the variances of these estimates in model (2.6.1) would be to use the
.e
estimation procedures proposed by Koch (17) and Forthofer (6).
no work along these lines has been done.
To date,
CHAPTER III
COMPARISON OF ESTIMATION PROCEDURES IN THE TWO-WAY
CROSSED CLASSIFICATION
3.1
A Method for Comparing Estimation Procedures
In Chapter II four estimation procedures were defined that could
be used to obtain unbiased estimates of the mean and variance components
in model (2.2.1).
It was shown in section 2.3 that the vector of estim-
ated variance components,
S
~,
is given by
-1 [H! - a"2)
e!"
= W
y'
e.
and,
(3.1.1)
var (_t) H'" + w
var (~)
The principal drawback to obtaining
is that ~1 has to be computed.
var(a~J ) W
",_.1
W'"
-It It
~
and
var(~)
using (3.1.1)
In special cases, e.g., for estimation
procedures 1 and 3 given in Table (2.3.1), if nij = n for i
and j
= l, •••• ,c
then explicit expressions for
obtained without first finding
balanced, i.e., if n ij
j
= l, •••• ,c,
~
= n,
si
w- 1
= s,
~
and
var(~)
= l, •••• ,r
can be
Also, if the design is completely
tj
= t,
and Vij
=v
and var(s) can readily be determined.
for i
= l, •.•. ,r,
However, in this
situation sand var(s) will be the same for all four estimation procedures.
If the design is unbalanced an analytic expression for
w- 1
can
..
47
still be obtained, for example, in terms of its cofactors and determinant,
...
but this would then lead to cumbersome expressions for s and
var(~)
Even if explicit expressions in terms of the elements of
H, and var(!) were derived for
~
and
var(~
•
w- 1 ,
~,
their complexity would pre-
clude the possibility of analytically determining which estimation procedure is best, i.e., which one gives estimates with the smallest variance.
Another difficulty is that not only do ~, var(~), ~, and var(~)
depend on the design parameters, namely, r, c, si' tj' .vij' and nij' but
222
2
they also depend on the unknown variance components, Or'
OC' Orc' and Oe.
In this study comparisons among the estimation procedures are
."
.e
made using numerical methods to compare the variances of the estimated
mean and variance components.
structed, and for each design
A number of unbalanced designs are con-
'"
var(~)
and
var(~)
are evaluated for each
222
2
estimation procedure for certain choices of or'
0c' Orc' and oe.
This
is the same approach taken by Bush and Anderson (3), Hirotsu (15), and
Forthofer (6) in their comparisons of estimation procedures in the twoway crossed classification.
3.2
Designs
The four estimation procedures are compared using twenty designs
constructed under the following constraints:
= c = 4,8
1)
r
2)
LL
i
j
v .. n ..
1.)
1.)
= 100,
200.
48
In experimental designs in which there is no compositing the
total number of measurements, N =~~nij' is usually fixed.
i
A gen-
j
eralization of this restriction to designs where the experimental units
are composited is to fix ~~Vijnij' Although there are other ways
i
j
of weighting the nij' the constraint ~~vijnij = 100, 200 was used
i
j
since it helps to simplify the problem of constructing unbalanced designs
used in the comparisons of the estimation procedures.
Table C3.2.1) lists the 20 designs, 01 - 020, used to compare
the estimation procedures.
Vij' and nij are given.
For each of the designs, values of si' tj'
The si are the numbers shown on the left-hand
side of the design, and the t· are those shown above the design.
In
J
each cell of the design the upper number is v·1.J"
nij'
In all designs if nij =
o
then set Vij = O.
the lower one being
This choice of Vij
is arbitrary since if nij = 0 then none of the quantities required to
compute
~, varC~),
or
with the i-jth cell.
"
varC~)
depend on the design parameters associated
For designs 01 - 010,
~~Vijnij
i
designs 011 - 020,
L~Vijnij =
i
3.3
200.
= 100, and for
j
For all designs max.
j
(Si,tj'~
'\
')
Results
For each design in Table C3.2.1), var(~)/ae~' var(a2)/a~,
e
e
varca;)/a:, varca~)/a:, and var(a~c)/a: were computed using (2.5.4) and
(2.3.3) for the following combinations of PI = a:/a~, P
2
= a~/a~, and
e.
49
Table (3.2.1)
Designs Used To Compare the Four Estimation Procedures
.
~~Vijnij
When
• 100,200
DI
D2
4
0
0
154
4
0 16
2
1
0
2
2
1
2
4
10
1
8
1
3
12
1
3
2
2
3
0
0
10
1
8
1
1
0
0
1
1
3
2
1
3
3
2
3
2
0
0
12
1
1
0
0
0
0
1
1
2
2
1
D3
.e
3
1
2
12
3 3
9
1
0
0
0
0
9
3
1
0
0
3
D5
D4
1
4
7
8
1
1
6
4
3
0
0
0
0
2
0
0
1
1
3
4
1
7
3
8
2
5
3
5
3
1
0
0
4
3
7
1
10
1
10
1
1
0
0
4
1
7
1
3
3
2
255
0
0
2
5
0
0
1
3
3
2
1
0
0
1
3
1
2
1
2
6
2
2
3
1
4
466
0
3
0
0
2
0
2
0
0
1
4
1
6
1
6
3
0
0
8
1
6
0
0
6
1
12
3
0
0
0
0
1
4
2
6
1
6
1
2
2
D6
07
4261322
0
0
0
0
0
12
6
6
0
0
0
0
0
1
1
2
0
0
2
3
1
3
1
1
1
1
1
2
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
4
6
12
1
6
1
6
1
0
0
0
0
0
0
0
0
0
0
4
1
12
1
0
0
0
0
0
0
0
0
0
0
0
0
3
0
0
6
1
6
1
3
1
0
0
0
0
0
0
0
0
6
6
1
6
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
6
1
1
1
3
1
0
0
0
0
0
0
6
6
1
6
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
1
2
3
2
2
0
0
0
0
3
6
1
3
1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
3
2
2
2
2
1
0
0
2
1
6
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
0
0
0
0
2
1
2
1
2
1
1
2
1
3
1
7
1
3
1
1
1
1
2
1
2
0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
2
2
1
1
2
1
3
1
1
7
1
3
1
1
1
1
1
1
2
1
2
1
2
50
Table (3.2.1)
08
1
7
0
0
5
0
0
3
0
0
2
2
1
221
2
0
1
1
0
2
1
1
3
1
7
5
2
1
2
1
0
0
1
1
0
0
1
1
1
1
1
0
0
1
3
1
2
1
2
1
0
0
0
0
1
2
1
0
0
0
0
0
0
2
1
0
0
2
2
1
1
1
1
1
0
0
0
0
3
1
2
1
2
1
2
1
0
0
1
3
5
7
5
1
1
3
1
0
0
2
1
2
1
1
1
0
0
1
0
0
5
1
0
0
2
1
2
1
0
0
1
1
1
1
1
0
0
5
1
3
1
0
0
0
0
0
0
0
0
0
0
5
0
0
010
323
3
2
0
2
0
1
3
3
1
1
0
0
1
0
0
1
I
09
e.
1
2
0
0
2
0
0
2
0
0
2
0
0
2
0
0
1
0
0
1
0
0
1
6
0
0
7
14
1
14
1
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
1
3
1
2
1
3
1
3
1
0
0
0
0
4
4
1
4
1
4
1
4
1
0
0
0
0
0
0
0
0
2
0
0
0
0
0
0
0
0
3
1
0
0
0
0
0
0
2
0
0
0
0
2
2
2
1
0
0
0
0
0
0
0
0
2
6
1
10
1
6
1
2
2
0
0
0
0
0
0
2
0
0
0
0
2
3
2
2
0
0
0
0
0
0
0
0
3
0
0
0
0
0
0
6
1
0
0
0
0
3
1
3
1
0
0
1
0
0
0
0
0
0
2
3
2
1
0
0
0
O·
0
0
3
0
0
0
0
0
0
3
1
3
1
3
1
0
0
0
0
1
0
0
0
0
0
0
2
1
2
3
2
1
1
2
1
1
3
0
0
0
0
0
0
3
1
0
0
3
1
3
1
3
1
0
0
0
0
0
0
0
0
2
2
2
1
1
1
1
2
3
0
0
0
0
0
0
3
1
3
1
3
1
0
0
0
0
8
2
8
5
1
51
Table (3.2.1)
011
.
.e
11
33
1
5
3
2
0
0
6
2
6
4
012
541
35
8
7
7 1
2
1
2
11
2
10
1
6
2
4
2
3
15
2
12
2
1
11
2
5
1
3
1
2
3
2
10
2
8
2
2
2
1
1
1
11
2
5
3
3
2
0
0
1
5
2
4
1
1
1
1
1
3
1
13
13
4
013
11
2
11
0
1
0
1
0
0
1
13
3
11
4
2
1
1
0
0
11
1
0
0
1
1
1
1
0
0
3
9
1
0
0
0
0
2
0
0
3
15
3
9
3
0
0
0
0
5
0
0
15
3
5
1
5
1
2
2
6
1
3
2
10
3
6
1
6
1
2
1
5
0
0
15
1
5
3
5
2
0
0
2
4
1
7
1
5
1
3
1
3
2
1
1
10
0
0
0
0
10
1
10
3
1
0
0
1
0
0
1
0
0
1
0
0
017
3
0
0
5
0
0
15
1
15
2
10
1
0
0
0
0
0
0
0
0
5
0
0
15
1
5
1
9
1
5
3
0
0
0
0
3
0
0
3
1
3
1
3
1
3
1
3
2
1
0
0
0
0
3
1
2
2
1
1
0
0
0
0
0
0
0
0
0
0
0
0
2
2
0
0
0
0
0
0
0
0
0
0
1
1
6
15
4
3
21
2
1
015
3
14
14
4
7
7
2
1
3
1
3
1
014
331
5
016
2
0
0
7
1
7
1
1
7
2
3
0
0
5
0
0
6
0
0
6
0
0
9
0
0
10
0
0
10
0
0
7
7
2
21
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
5
5
1
15
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
3
3
1
3
1
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
2
0
0
2
2
3
6
1
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
1
1
1
2
1
1
1
2
1
2
1
2
1
1
1
1
3
1
3
1
0
0
5
1
0
0
6
1
0
0
6
1
0
0
0
0
10
1
0
0
10
2
0
0
1
1
1
1
1
2
1
1
3
1
5
1
6
2
6
1
10
1
10
1
7
1
1
1
I
9
1
9
1
52
Table (3.2.1)
018
.
1
877
8
7
0
1
0
1
5
5
1
5
0
0
211
2
0
1
2
2
0
1
8
1
7
1
0
0
5
1
5
1
2
1
1
1
1
1
1
0
0
7
1
7
1
5
1
5
1
2
2
0
0
0
0
1
0
0
7
1
7
1
5
1
0
0
2
1
1
1
1
1
1
0
0
7
1
7
1
5
1
0
0
2
1
1
1
1
2
1
8
1
7
1
0
0
5
1
0
0
2
2
1
3
0
0
1
0
0
1
1
7
1
5
1
5
1
2
1
1
1
1
1
1
0
0
0
0
0
0
5
5
1
1
2
3
1
1
0
0
3
3
2
2
0
0
2
0
0
020
1
0
0
1
0
0
1
1
1
1
0
0
e.
17
17
1 1
16
16
2
10
0
0
9
0
0
019
3
0
0
3
0
0
1
0
0
1
0
0
1
3
0
0
1
17
1
16
1
10
1
0
0
0
0
0
0
0
0
0
0
1
3
1
3
1
2
2
0
0
1
2
0
0
1
1
1
1
1
17
0
16
2
10
1
9
1
0
0
0
0
0
0
0
0
1
3
2
0
0
2
1
0
0
1
1
0
0
0
0
1
1
1
0
0
0
0
10
1
9
1
3
1
0
0
0
0
0
0
0
0
9
1
3
2
3
1
0
0
0
0
0
0
0
0
0
0
0
0
2
1
0
0
2
1
1
0
0
0
0
0
0
1
1
4
1
1
1
4
1
1
1
4
1
1
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
3
1
0
0
3
1
3
2
1
1
1
1
0
0
1
1
12
1
0
4 0
12
2
12
1
8
1
0
0
8
2
8
1
4
1
0
4
1
4
1
4
1
0
0
0
4
1
0
0
0
0
0
0
0
0
0
0
0
0
1
1
1
1
0
0
21
1
7
1
0
0
7
1
7
1
7
1
1
1
1
1
4
4
7
a
a
0
0
0
53
P,
= o~c/o~: ( 1 / 8 ' 1/ 8 ' 1/ 8 )
1
, ( / 8 ' 1/ a ' aJ
'
(l/a ' a, 1/aJ' (a, 1/a ' 1/aJ' (1,1,1) ,
(a,a,l/a),(a,l/ a ,a},(l/a,a,a) and (a,a,a).
"2 "2
"2
2
" Or'
The variances of lJ,
0c' and arc are functions of 0 e2 , 0 r' 0 c2 ,
2
2
2
depends on
and o;c. If y = lJ, o~, Or'
°c' or arc' then var(y)/o~
e
2
(Pl'P 2 ,P,), except when y .. 0 e •
In this case var(o;J/o:
m
2/(N-N').
Comparisons of the estimation procedures are made by computing their
relative efficiencies (R.E.'s) for estimating y.
The R.E.'s are computed
by,
R.E ...
min. (var ('9) /o~J
where,
var(y)/o~
min.(var(Y)/o~J is the smallest value of var(Y)/o~ for any design for a
.e
fixed value of
~~Vijnij and
i
j
a given value of (P 1 ,P ,P,).
2
Defined in
this way the R.E.'s do not depend on the actual sizes of 0 2 , 0 2
e
o~c' but are a function of PI' Pa ' and p"
variance components.
r'
02
c'
and
the relative sizes of the
Also, the R.E.'s are such that 0
~
R.E.
~
1.
In Table (3.3.1) the R.E.'s have been tabulated for all designs,
estimation procedures, and values of (P 1 ,P 2 ,P 3).
Examination of Table (3.3.1) shows that any best estimation procedure depends on the relative sizes of P 1 ' P2 ' and P 3 •
An estimation
procedure is judged to be "best" if most of the time its R.E. is larger
than the corresponding R.E. of "the other 3 procedures, and in those
cases where some other
the two R.E.'s is small.
procedure has a larger R.E., the difference in
Table (3.3.2) gives the best estimation pro-
2 •
cedure for estimating lJ, 0r2 , 0 c2 , and 0 rc
Note that var(02)
e is the same
for each estimation procedure for all (Pl,P2'P,) values, but, varies
54
from design to design.
In some instances no single estimation procedure
satisfied the above criterion of best.
In these cases two or more of
the procedures were selected as best.
Although the above procedure for selecting the "best" estimation
procedure is subjective it does lead to essentially the same conclusions
as more objective procedures such as one based on the average ranks of
the estimation procedures.
II'l
II'l
Design
No.
1
p
p
1
Table (3.3.1)
Relative Efficiencies (R.E.'s) of Four Estimation Procedures for A Two-Way
Crossed Classification with Composited Samples
Estimation Procedures
1
P
2
3
l/ e 1/ 8 1/e
~2
e
.572
r
c
rc
,..
~
"2
"2
Or
0c
"2
arc
.... ~2
~
Or
3
"2
°c
"2
arc
"
~
"2
Or
4
"2
°c
"2
arc
.478 .051 .184 .089
.541 .072 .301 .136
.372 .028 .084 .048
.334 .035 .448 .170
.628 .099 .617 .180
.559 .088 .678 .195
.607 .096 .484 .147
8
.212 .015 .298 .048
.420 .048 .404 .059
.341 .052 .376 .089
.459 .042 .425 .033
e
.501 .352 .772 .166
.571 .372 .552 .103
.520 .3661.000 .185
.551 .347 .210 .044
111
8 8 l/ e
.401 .108 .399 .126
.315 .163 .362 .055
.586 .166 .439 .095
.517 .297 .514 .080
.532 .186 .507 .141
.435 .296 .516 .127
.559 .133 .308 .054
.544 .300 .491 .052
l/ e 8
.451 .233 .502 .146
.600 .352 .662 .159
.533 .327 .911 .198
.596 .348 .387 .107
l/ e 8 8
888
.217 .017 .308 .092
.309 .132 .371 .073
.448 .073 .448 .137
.528 .290 .555 .112
.361 .064 .419 .158
.441 .274 .556 .152
.477 .078 .480 .104
.561 .306 .543 .081
.519 .096 .149 .220
.423 .042 .063 .044
.448 .108 ~166 .263
.374 .054 .081 .061
1/e 1/e 8
.287 .041 .117 .133
.472 .062 .187 .103
.336 .056 .172 .151
.510 .083 .244 .125
l/e 8 l/ e
.369 .025 .369 .059
.444 .024 .359 .023 •• 353 .046 .360 .101
.450 .043 .385 .039
1/ S 1/ e
.371 .379 .065 .055
.437 .355 .071 .026
.351 .367 .117 .094
.441 .374 .108 .037
.452 .164 .220 .135
.517 .135 .183 .043
.433 .199 .278 .181
.510 .176 .241 .064
.385 .237 .283 .061
.466 .226 .280 .028
.367 .295 .371 .112
.476 .303 .389 .051
.343 .258 .086 .088
.451 .301 .116 .060
.345 .290 .147 .121
.469 .341 .170 .081
8
.329 .028 .268 .094
.447 .036 .336 .064
.336 .045 .313 .123
.468 .059 .3E7 .090
8
.360 .187 .239 .073
.466 .209 .281 .042
.357 .244 .332 .115
.484 .284 .390 .069
1/
e
8 1/
8 1/ 1/
8
8
1/ s 1/81/ e
8
1
1
1
8
8 l/ e
8 l/ S 8
l/ e 8
8
e
~
2
cr2----~a2
.524 .052 .400 .176
1/ 1/ 8
e 8
2
A
8
.501
"
-
.
e
e
e
\0
3
6
"2
"
l.I
"2
"2
6
"
l.I
6
4
"2
r
rc
c
.580 .263 .101 .201
r
c
rc
.561 .292 .144 .340
8
.368 .131 .124 .212
.513 .188 .145 .209
.490 .201 .204 .240
.524 .255 .176 .230
/e
.463 .179 .339 .173
.671 .178 .412 .108
.503 .387 .352 .277
.669 .246 .392 .135
8 l/e l/e
1 1 1
8 8 1/.
.278 .403 .034 .046
.452 .320 .183 .193
.352 .340 .204 .063
.271 .395 .033 .037
.495 .337 .176 .135
.389 .352 .218 .052
.282 .424 .126 .148
.470 .388 .272 .265
.367 .426 .367 .188
.260 .370 .090 .086
.453 .330 .197 .168
.383 .371 .355 .110
8 l/ e 8
.284 .356 .057 .101
.294 .380 .061 .093
.303 .410 .174 .201
.289 .382 .142 .163
/ e / e / e 1. 000
.417 .117 .261 .169
.344 .312 .187 .087
.496 .286 .147 .524
.620 .151 .328 .154
.393 .344 .211 .076
.507 .322 .053 .170
.487 .237 .321 .229
.371 .414 .345 .200
.510 .277 .092 .240
.653 .248 .362 .197
.394 .383 .346 .143
.414 .202 .030 .087
8
.193 .075 .039 .109
.498 .299 .072 .165
.429 .224 .067 .154
.541 .397 .088 .157
.330 .164 .244 .181
.810 .343 .385 .105
.618 .287 .314 .144
.828 .535 .353 .096
.207 .369 .083 .130
.321 .254 .128 .175
.258 .292 .163 .064
.204 .391 .022 .035
.421 .380 .111 .113
.324 .378 .151 .045
.231 .431 .065 .095
.430 .376 .155 .163
.338 .408 .211 .080
.209 .383 .034 .048
.417 .363 .106 .087
.335 .401 .210 .072
/e 8
1
a
1.000
1 Ie
8 8
888
111
1/ 1/
8
8
1/
8
8
1/
8
u
2
"2
r
rc
c
.626 .299 .157 .393
1/ e 1/ e
4
"
a"2
a"2
c
r
rc
.404 .196 .061 .192
1 /e 1/ e 1 /e
1
3
1
"2
e
Table (3.3.1)
2 Estimation procedur s
Lfl
Design
No.
PI P2 P,
.
cr
a
a
a
2
2
a
"2
a
u"
"2
a
8
1
8
1/ 1/
8
8
8
1/ 8
8
.196 .282 .037 .075
.226 .402 .034 .074 •• 249 .419 .052 .102
.236 .415 .053 .098
1/ 8
8
8
8
8
.226 .063 .138 .088
.239 .237 .115 .063
.720 .267 .249 .130
.332 .390 .135 .062
.555 .210 .207 .133
.340 .400 .166 .089
.781 .479 .284 .145
.348 .434 .195 .093
8
1
8
1
1/
8
f"III
Table (3.3.1)
Design
No.
5
Pl P2 P 3
1'2
C1 e
r
02
c
&2
rc
,..
lJ
&2
r
Estimation Procedures
3
,..
ac arc
2
2
lJ
(;2
.....2
C1
"0 2
,..
lJ
4
'(12
,&2 ""2
a
rc
r
c
.474 .116 .022 .042
.553 .110 .016 .048
.496 .104 .022 .047
.490 .110 .032 .093
.516 .145 .053 .102
.508 .139 .037 .096
.576 .189 .062 .101
a
.835 .118 .334 .037
.664 .160 .298 .042
.819 .198 .349 .048
.762 .229 .308 .055
8 l/ a l/ a
1 1 1
8 8 l/ a
.276 .309 .011 .016
.514 .227 .050 .040
.418 .275 .093 .023
.365 .326 .014 .018
.559 .249 .050 .040
.487 .313 .120 .031
.240 .292 .030 .037
.461 .240 .065 .045
.373 .285 .157 .042
.293 .302 .062 .056
.515 .264 .097 .050
.432 .316 .247 .082
8 l/ a 8
.297 .293 .014 .033
.388 .326 .023
046
.264 .294 .025 .051
.324 .320 .058 .082
.723 .099 .177 .067
.419 .265 .076 .030
.844 .283 .261 .234
.627 .143 .212 .080
.487 .314 .110 .043
.701 .125 .125 .097
.739 .153 .201 .083
.381 .287 .116 .048
.858 .245 .248 .204
.731 .237 .238 .096
.442 .333 .204 .086
.635 .097 .109 .079
.620 .194 .309 .338
.865 .204 .426 .303
.735 .210 .373 .350
.858 .192 .442 .281
.768 .099 .780 .133
.914 .061 .837 .064
.833 .090 .826 .121
.869 .056 .797 .057
8 l/ a l/ a
1 1 1
8 8 1/
a
.541 .729 .110 .110
.783 .468 .450 .233
.653 .589 .579 .153
.904 .826 .099 .063
.999 .398 .396 .117
.977 .625 .629 .102
.653 .799 .116 .105 .975 .856 .120 .064
.887 .480 .483 .216 .980 .364 .386 .100
.761 .639 .638 .157 1.000 .638 .668 .104
8 1/ 8 8
.547 .635 .171 .215
.928 .757 .193 .172
.664 .708 .194 .223 1.000 .791 .228 .175
.705 .125 .636 .255
.637 .533 .525 .193
.914 .108 .806 .201
.973 .594 .636 .150
.791 .127 .723 .261 .885 .103 .798 .191
.748 .587 .602 .203 1.000 .611 .683 .153
1/ a l1e 11e
1/
a
8 1/
l/ a 8
8
8
.501
8
8
1/ a 1/ a 1/ a
1/ a 1/ a 8
1/
8
8 1/
l/ a 8
8 8
e
lJ
'0 2
2
rc
r
c
.473 .113 .017 .044
1/ a 1/ a 8
6
"
1
8
8
a
.286
e.
e
e
e
.
e
::0
VI
Design
PI P2 P3
No.
7
1
arc a
4
'(12
(12
rc
a
.875
.216
.150
.633
.089
.148 .064
.250 .423 .360
.799
.273
.260
.404
.885
.234
.534 .332
.159 .514
.107 .599 .083
.659
.134
.844
.153
.484
.109
.505 .075
.175
.445
.647
.172 .591
.230 .704
.210 .578
.517 .083 .063
.341 .392 .106
.459 .490 .120
.412
.680
.515
.447
.406
.419
.138
.528
.704
.131
.203
.224
.635
.699
.581
.534
.332
.475
.082 .056
.375 .093
.451 .112
.159
.260 .643
.541 .182 .210
.448
.453
.201
.289
.689
.559
.196 .198
.537 .141 .565 .309 .557
.401 .346 .506 .245 .599
1/ 8 1/ 8 1/ 8 .501 .963 .961 1.000 1.000 .906
.181 .632 .297 .664
.485 .531 .196 .527
.725 .412 .509 1.000
.193
.428
.989
.683
.626
.882
.358
.290
.826
.529
.603
.836
.180
.501
.637
.571 .277
.510 .187
.323 .420
lie lie 8
.716 1.000 1.000
.986
.999
.835 .935
a
0'2
1/
8
8
8 1/
8 1/ 1/
8
III
8
8
8 1/ 8
8
8 1/ 8 8
1/ 8
8
8
8
'0 2
2
~2
'(12
'02
.185 .662
rc
c
r
.097 .140 .074
.168
.357 .888
.572 .103
.822
.304 .394
.526 .350
.400 .354
.322 .378
c
.188
.516 .210
8
8
a
3
r
.246
e
r
1/ 8 1/ 8 1/8 .143 .726 .277
1/ 1/
8
8
8
/'(J1.2
Table (3.3.1)
2 Estimation procedures
,&2
c
"2
(J
r
~2
c
()2
rc
.623 .422
.745
.877 .960
.866 .795 .939
.844
8 1/ 8
.500 .753
.674 1.000 .956
.773 .948 .529
.655 1.000
8 1/ 8 1/ 8
1 1 1
8 8 1/ 8
.468 .995
.656 .896
.495 .940
.908 1.000 .450
.869 1. 000 • 796
.828 .789 .620
.960 .274 .351
.923 .763 .579
.952 .909 .478
.471 1.000 1.000 .989
.742 1. 000 1. 000 .936
.558 1.000 1.000 1.000
.424
.769
.603
.907
.887
.924
.270 .332
.725 .502
.954 .489
8 1/ 8 8
.480 .903
.763
.853 .497 1.000 .445 .681
.995 1. 000 1. 000
.474
.969
.457 .686
8
8
.496 .429
.494 .868
.658
.799
.839 .948
.793 .637
1/
8
1/ 8
8
8
8
.783 .921 .858
.994 .903 .623
.507
.793
.947 1.000
.941 1. 000 .492
.663 .759 .809 1.000 1.000 1.000 1.000.885
.568 1.000 1.000 1.000 .624 .987 .967 .648
0\
ltl
Design
No.
P1 P2 P,
9
a
8 r2
1
8 c2
2
8 rc
a
8 r2
2
8 c2
8:c
a
3
8 r2
8 c2
8~c
a
4
2
8r
8 c2
8~c
.637 .076 .221 .296
.848 .241 .380 .691
.481 .047 .162 .210
.536 .109 .284 .424
.712 .097 .551 .422
.620 .139 .336 .441
.694 .097 .625 .405
8
.652 .052 .934 .157
.575 .021 .775 .062
.673 .054 .998 .169
.489 .018 .684 .055
8 1/8 1/ 8
1 1 1
8 8 l/ e
.552 .625 .096 .121
.754 .378 .539 .455
.614 .464 .629 .174
.969 .748 .160 .108
.833 .255 .530 .244
.781 .437 .710 .125
.556 .604 .125 .144 1.000 .718 .203 .109
.763 .384 .610 .463 .729 .207 .483 .197
.629 .479 .772 .215 .726 .422 .730 .126
8 l/ e 8
e 8 8
8 8 8
.535 .499 .158 .246
.590 .065 .699 .282
.588 .393 .556 .217
.942 .592 .294 .250
.591 .039 .779 .202
.775 .383 .728 .178
.559 .521 .209 .283 1.000 .613 .385 .260
.636 .078 .800 .315 .525 .037 .749 .197
.615 .428 .695 .263 .736 .388 .785 .184
1 /8 1/ 8 1 /8 .143 .915 .224 .284 .090
.840 .169 .164 .079
.925 .215 .265 .084
1/ l/ 8 8
e
.866 .246 .690 .364
.991 .276 .617 .356
.966 .272 .716 .364 1.000 .271 .598 .343
8 l/ e
.629 .154 .721 .112
.808 .109 .792 .086
.706 .143 .765 .102
.860 .096 .817 .076
8 1/ 8 1/ 8
1 1 1
8 8 1/ 8
.699 .690 .398 .150
.855 .478 .637 .132
.690 .628 .830 .253
.701 .706 .177 .091
.926 .442 .513 .113
.784 .619 .774 .169
.723 .700 .350 .134
.912 .486 .642 .124
.745 .642 .873 .243
.715 .704 .151 .077
.947 .421 .493 .103
.817 .608 .759 .147
8 1/
8
.726 .654 .553 .336
.751 .702 .340 .260
.763 .686 .531 .325
.767 .705 .306 .236
8
8
.647 .190 .747 .323
.697 .603 .846 .326
.835 .173 .830 .290
.800 .621 .811 .247
.732 .197 .802 .320
.757 .633 .898 .322
.885 .161 .852 .271
.833 .614 .798 .223
/8 /8 /8 .787 .944 .296 .425 .808
1/
8
8 1/
l/
1/
8
8
1/ 8 8
8 8
e
8e2
111
1/ 1/ 8
8 8
10
Table (3.3.1)
Estimation Procedures
-
.
.819 .151 .154 .072
e
•
e
e
e
o
1.0
Design p p P
No.
1
2
3
11
e
(1
82
r
82
c
82
rc
.093
r
c
rc
.334 .079 .037 .053
4
8
2
rc
(1
8
2
82
82
l/ l/ l/
e .560 .636 .255 .132 .117
8
.432 .141 .080 .235
.551 .206 .089 .217
.502 .192
.091
.245
.459 .167 .075 .221
e
.307 .092 .358 .142
.442 .135 .399 .096
.376 .260
.401
.230
.436 .106 .399 .099
8
1
8
l / e lle
1
8
1/ e
.285 .381 .016 .082
.592 .323 .192 .142
.515 .345 .212 .034
.325 .402 .019 .070
.678 .335 .197 .089
.642 .394 .264 .035
.275 .377
.609 .351
.536 .380
.060
.244
.334
.226
.160
.092
.326 .395 .018 .075
.599 .266 .145 .090
.659 .400 .296 .049
8
l/ e
8
.298 .352 .038 .133
.351 .398 .050 .134
.294 .365
.090
.220
.355 .398 .054 .151
l /e
8
8
8
8
.311 .090 .282 .179
.386 .313 .160 .068
l / e l / e l / e .240 .578 .129 .335 .095
.462 .163 .327 .176
.473 .369 .183 .059
.426 .046 .125 .018
.385 .190
.402 .347
.513 .109
.323
.214
.316
.231
.117
.101
.467 .163 .341 .187
.468 .397 .246 .103
.323 .039 .086 .023
.485 .082 .311 .280
.556 .076 .294 .160
.393 .056
.259
.259
.478 .076 .275 .193
.215 .127 .424 .271
.273 .063 .449 .054
.175 .270
.393
.362
.264 .097 .444 .082
.559
.708
.746
.591
.394
.540
.484
.396
e e
l /e l /e
1/
e 8
8
12
1
82
Table (3.3.1)
Estimation Procedures
2
3
2
2
2
2
(1
(1
8
8
8
8
82
c
r
rc
r
c
.521 .188 .082 .053 .592 .247 .108
1/ e l / 8
1/
8
8
1
8
8
1/ 8
8
8
1/
1
8
1/
8
1
8
1/8
1/ 8
8
.405
.614
.547
.418
8
8
8
8
.235 .086 .401 .277
.547 .300 .428 .355
1/ 8 1/
1
.335
.259
.341
.301
.622
.462
.464
.563
.425
.194
.303
.318
.379
.201
.379
.355
.267
.350
.483
.507
.082
.044
.074
.176
.308 .079 .446 .151
.756 .348 .469 .165
.324 1.000 .507
.240 .432 .310
.344 .445 1.000
.275 .560 .306
.190 .079
.479 .285
.364
.401
.274
.438
.561
.639
.751
.599
.378
.195
.401
.359
.274
.308
.503
.594
.115
.058
.226
.228
.301 .107 .449 .204
.764 .372 .495 .326
"'\0"'
Design p p p
No.
123
13
~
8~
8~c
8~
8~
~
8~c
.491 .317 .135 .356
8~
0
8~
8~c
.382 .217 .025 .052
.142 .026 .013 .118
.386 .155 .018 .139
.232 .059 .020 .154
.395 .212 .023 .146
.193 .018 .309 .081
.568 .084 .401 .052
.266 .043 .338 .131
.537 .177 .390 .079
8 1/ 8 1/ 8
1 1 1
8 8 1/ 8
.189 .406 .003 .029 .160 .354 .001 .009
.368 .247 .097 .164 .437 .332 .048 .062
.332 .295 .108 .023 •• 376 .338 .054 .009
.195 .419 .005 .053
.427 .312 .123 .193
.379 .353 .144 .034
.156 .327 .002 .019
.417 .311 .063 .066
.368 .331 .092 .018
8 1/ 8 8
.172 .279 .006 .054
.174 .344 .003 .029
.196 .344 .010 .081
.172 .330 .006 .050
.157 .015 .139 .080
.299 .213 .071 .047
.400 .601 .130 .081 .066
.508 .086 .202 .082
.379 .321 .045 .025
.508 .075 .069 .034
.236 .035 .181 .116
.361 .291 .098 .068
.456 .111 .079 .076
.509 .163 .234 .109
.374 .326 .075 .044
.542 .098 .073 .042
.477 .080 .054 .166
.454 .065 .080 .137
.343 .070 .044 .154
.526 .094 .090 .169
8
.291 .047 .433 .089
.225 .051 .297 .056
.264 .054 .391 .126
.267 .078 .366 .089
8 1/8 1/8
1 1 1
8 8 1/8
.378 .361 .013 .055
.666 .245 .175 .091
.607 .311 .245 .029
.428 .391 .018 .041
.619 .221 .158 .053
.580 .344 .212 .019
.268 .300 .031 .130
.514 .217 .189 .117
.474 .284 .309 .064
.421 .394 .025 .064
.669 .261 .198 .076
.624 .376 .302 .043
8 1/8 8
.391 .321 .031 .094
.439 .345 .046 .079
.277 .268 .047 .131
.440 .366 .064 .118
h 8
.305 .049 .312 .126
.601 .276 .195 .068
.244 .048 .256 .098
.577 .300 .187 .050
.266 .055 .271 .138
.465 .251 .225 .115
.291 .081 .319 .145
.626 .345 .263 .097
8
1/ 8
8
8
8 1/
8
8
8
8
8
1/ 1/ 1/
8 8 8
1/8 1/8
1/
8
e
o
4
8~
8~
~
8~c
.454 .288 .030 .085
1/
1
1
1.000 .508 .338 .158 .591
111
/8 /8 /8
1/ 1/
8 8
14
82
e
Table (3.3.1)
2 Estimation procedurjs
8
8
8 1/
8
8
8
A
e.
A
e
.
e
e
e
N
ILl
Design
No.
1
PI P2 P3
8
2
e
15
tl
2
6
r
2
8c
"2
Ore
1.1"
"2
or
"2
°c
"2
ore
Procedur~s
1.1"
2
8r
4
82
c
"2
Ore
"
1.1
2
8r
8c2
2
8rc
1/S 1/S 1/S .400
.551 .033 .344
.071
.529 .023 .210 .024
.432 .025 .302 .069
.451 .021 .224 .029
8
.451 .022 .370
.155
.473 .023 .337 .125
.360 .017 .378 .145
.499 .026 .433 .146
.147 .002 .352
.040
.221 .001 .385 .011
.117 .005 .331 .082
.171 .002 .371 .019
.086
.098
.036
.596 .255 .037 .017
.693 .066 .360 .035
.670 .102 .367 .011
.636 .274 .207 .159
.485 .086 .378 .110
.442 .169 .352 .065
.714 .287 .074 .034
.625 .081 .375 .048
.606 .150 .380 .019
I/ S 1/ S
1/
S
8
1/
S
8
1
8
l/ s l/s
1
8
l/ S
.644 .284 .106
.584 .087 .401
.521 .135 .362
8
1/ s
8
.619 .226 .241
.113
.589 .212 .130 .047
.590 .203 .393 .139
.713 .247 .237 .081
l/s
8
8
8
8
.165 .004 .353
.519 .115 .351
.805 .356 .924
.092
.078
.999
.242 .003 .391 .044
.662 .091 .360 .030
.612 .058 .250 .114
.131 .007 .330 .111
.439 .131 .342 .111
.650 .314 .800 .810
.194 .005 .386 .067
.611 .134 .382 .051
.551 .049 .204 .086
.382 .061 .232
.708
.689 .068 .392 .575
.355 .069 .264 .741
.692 .073 .443 .529
.229 .037 .855
.573
.300 .012 .835 .102
.200 .037 .842 .547
.290 .016 .778 .118
8
16
Table (3.3.1)
2 Estimation
1
1/ S 1/ S 1/ 8 • 600
1/ 8 1/ 8
l/
s 8
8
l/
s
8
1
8
l/ S l/ s
1
8
1/
8
1/ 8
1/ 8
8
8
8
.471 .657 .154 1.000 1.000 .804 .110 .218
.686 .418 .772 1.000 .963 .246 .596 .204
.603 .536 .763 .312 1.000 .490 .739 .089
.406 .652 .153 .951 .930 .765 .136 .224
.593 .426 .776 .962 .908 .250 .558 .178
.524 .553 .783 .357 •• 955 .553 .748 .118
8
.446 .487 .217
.635 1.000 .652 .288 .418
.392 .501 .258 .689
.954 .650 .380 .443
8
8
.233 .035 .680
.577 .404 .610
.549 .337 .021 .785 .298
.508 1.000 .415 .688 .230
.207 .040 .689 .591
.507 .430 .641 .580
.329 .029 .765 .334
.964 .477 .720 .291
1
8
1"'1
\D
.667
rc
c
r
.208 .233 .101 .585
Table (3.3.1)
Estimation Procedures
2
3
2
2
2
2
'"
8r 8c 8rc
8r 8c2
lJ
.136 .107 .036 .615 .229 .197
.580
.111 .284 .397 .728
.210 .235 .329
.598
.206
.029 .326 .107 .374
.027 .410 .039
.310
.559
.462
.437 .016 .089 .285
.342 .294 .176 .633
.368 .246 .056 .550
8
.332
8
8
Design
PI P2 P3
No.
17
8
2
8
2
2
8
lJ'"
4
2
lJ'"
.102
.549
.155 .282
.384
.730
.232 .220 .322
.231
.044 .345
.137
.391
.029 .438 .039
.353 .005 .021
.298 .219 .068
.321 .211 .020
.269
.539
.447
.416 .018
.358 .306
.374 .271
.098
.186
.067
.285
.632
.559
.354 .005 .018
.296 .206 .064
.325 .209 .018
.421 .068 .230 .316
.369 .027 .090
.293
.411 .076
.241
.317
.372 .024 .079
1/ 8 1/ 8 1/ 8 • 360
.231
.469
.986
.042 .322 .243 .418
.346 .243 .149 .566
.936 .667 .391 .943
.057 .417 .151 .259
.326 .218 .066 .456
.736 .208 .126 1.000
.067 .343
.362 .268
.984 .581
.279
.176
.324
.439
.577
.885
.067 .448 .157
.333 .218 .062
.631 .154 .099
8
.586
.352 .219 .937 .997
.963 .270 .869
.774
.629 .297 1.000 1.000 1.000 .259 .788
.348
.496 .642 .979 .733
.881 .947 .390
.446
.994 .706 1. 000
1/ 8 1/ 8 1/ 8
1/
8
8
1/
8
1/
1/ 8
8
8
1
1/
8
1/ 1/
8
8
8
1/
.240
8
1/ 8 1/ 8
1
8
8e
8
8
1
8
8
8
1/
8
8rc
2
8r
8c2
.771
.852 .980 .311
.996
.672
.630
.390
.954
.834
.958 .029 .209
.937 .435 .226
.986 .662 .205
.988 .104 .551
8
8
1
8
1/ 1/
8
8
1
8
1/
8
1/ 8
8
.416
.902 .206 .860 .436 1.000 .127 .643
.422
.954 .315 1. 000
.430
1/ 8
8
8
8
8
.360
.660
.274 .516 .857 .774 .914 .799 .837
.849 .460 .681 .852 1.000 .590 .574
.470
.725
.626 .602 1. 000
.949 .582 1. 000
.818 1.000 .848 .766
.853 .994 .589 .518
8
e
lJ'"
a"'2
rc
.124 .088 .034
1/ 1/
8
8
18
1
2
1
8
.402 1.000 .158 .990 .397 .978 .039 .276
.794 .916 .585 .713 .967 .974 .501 .283
.668 .954 .564 .361 .835 1.000 .678 .237
e.
.394 .983 .196
.860 1. 000 •680
.721 .999 .688
e
e
e
.
e
"1
\L;
Design
No.
PI P2 P3
19
1
&2
e
II'"
&2
r
&2
c
"'2
(J
rc
'"
II
"'2
(J
r
"'2
(J
c
"2
(J
rc
"
II
"2
(J
r
3
4
"2
(J
c
&2
rc
"
II
"2
(J
r
"2
(J
c
"2
(J
rc
.160 .729 .330 .091 .080
.503 .225
.013 .012
.692 .297
.093 .083
.497 .222
.012 .013
.511 .138 .046 .255
.647 .426
.032 .138
.479 .117
.044 .248
.668 .446
.034 .145
8 1/ 8
.594 .062 .633 .106
.955 .210
.643 .033
.541 .049
.592 .099 1.000 .215
.658 .036
8 1/ 8 1/ 8
1 1 1
8 8 1/ 8
.385 .934 ~004 .004
.833 .613 .167 .130
.780 .796 .222 .045
.258 .626
.663 .522
.621 .629
.001 .009
.045 .025
.076 .012
.389 .942
.811 .580
.769 .783
.005 .051
.172 .134
.234 .048
.265 .641
.677 .526
.641 .645
.001 .008
.046 .027
.078 .013
8 1/
.397 .822 .015 .115
.286 .649
.004 .035
.398 .811
.016 .122
.295 .667
.004 .035
.547 .068 .382 .179
.759 .689 .173 .105
1/ 8 1/ 8 1/ 8 .240 .938 .362 .982 .248
.941 .313
.636 .637
.744 .120
.434 .113
.072 .038
.496 .070
.501 .057
.744 .666
.899 .325
.359 .171
.178 .109
.995 .237
.999 .331
.658 .656
.710 .101
.457 .122
.075 .039
.439 .071
.625 .125 .602 .751
.836 .125
.948 .573
.640 .121
.683 .762
.850 .128 1.000 .589
.281 .069 .958 .451
.314 .Q32 1.000 .134
.255 .070
.909 .462
"310 .029
8 1/ 8 1/ 8
1 1 1
8 8 1/ 8
.560 .664 .242 .603
.847 .493 .989 .483
.730 .586 .920 .290
.821 .792
.988 .386
.938 .613
.338 .229
.927 .159
.999 .121
.592 .666 .374 .715
.826 .488 1.000 .479
.711 .597 .912 .346
8 1/ 8 8
.565 .563 .517 .705
.855 .698
.884 .513
.598 .566
.693 .762 °.932 .713 1.000 .539
8
8
.303 .072 .875 .626
.722 .495 .831 .563
.355 .052 1.000 .399
.949 .543 .988 .319
.279 .511
.707 .503
.853 .646
.842 .632
1/ 8 1/ 8 1/ 8
1/ 1/
8
8
1/ 8
1/ 8
8
20
Table (3.3.1)
2 Estimation procedures
8
8
8
1/ 1/
8
8
1/
8
1/ 8
8
8
8
8
8
8
8 1/
8
8
8
.977 .138
.894 .793 .363 .237
.998 .364 .907 .162
.968 .618 1.000 .135
.354 .051 1.000 .420
.982 .556 1.000 .353
65
Table (3.3.2)
Best Estimation Procedures for the Two-Way Crossed
Classification with Composited Samples
Best Estimation Procedure for Estimating
PI
P2
l/ e
l/ e
l/ e
1/
1/
.e
8
1/
ar2
ac2
arc
1,3
1
1
1
4
2,3,4
3,4
1,3
2,4
3,4
2,3,4
1,3
2,3,4
1,2,3,4
1,3,4
1,3
~
8
8
1/
8
8
P3
8
2
8
1/
1
1
1
2
3
3
1,3
8
8
l/ e
2,4
3,4
3,4
3
8
1/
8
2,3,4
2,3,4
3,4
3
8
8
2,4
3,4
4
3,4
8
8
2,4
3,4
3,4
3
1/
e
8
8
8
1/
e
From Table (3.3.2) it is obvious that no single estimation procedure is best for estimating
P 2 ,P 3 )
•
~,
2 a2
2
ar'
c' and arc for all values of
(PI'
It should be emphasized that the results in Table (3.3.2) do
not necessarily apply to any given design.
Table (3.3.2) simply sum-
marizes what appears to be the best estimation procedure for estimating
~,
a;,
a~, and
a;c based on the numerical evaluation of the 20 designs
given in Table (3.2.1).
The only general conclusions that can be drawn
from Table (3.3.2) are:
1) Procedure 1 is best for estimating
and
e
a2 are small relative to
rc
~,
2
2
2
2
2
ar' ac' and arc if ar , ac ,
a~.
2) Procedure 3 generally provides good estimates of
2
2
2
ar' ac' and arc
66
if at least two of the variance components are large relative
to
0'2
e·
3) If no information is available on (p ,p ,p
1
2
3
J,
as is usually the
case, an appropriate strategy would be to estimate o'~, o'~, and
O': c
using procedure 3, and estimate ~ using procedures 2 or 4.
An alternative procedure to 3) above would be to use all 4
methods to estimate the variance components and p ,P , and P , and then
1
using the estimates of P , P , and P
1
2
3
2
3
use Table (3.3.2) to select that
method which gives the best estimates for those PI' p ,p •
2
3
Hirotsu (15) compared Henderson's Method 1 and the method of unweighted means for estimating the variance components in a two-way
crossed classification.
These two methods are the same as estimation
procedures 1 and 3 defined in Table (2.3.1) when s. • t. = 1. i.e., when
1.
there is no compositing.
Hirotsu concluded that the method of unweighted
means gives smaller variances than Method 1, except for
small.
J
rc when
0'2
rc is
0'2
Although the results in Table (3.3.2) give some support to
Hirotsu's conclusions, there is no reason to expect that best estimation
procedures for composited and non-composited designs will be the same.
e.
CHAPTER IV
HYPOTHESIS TESTING IN THE TWO-WAY CROSSED
CLASSIFICATION WITH COMPOSITED SAMPLES
4.1
Introduction
In this chapter restrictions on the si' t , Vij' and n , i • 1,
j
ij
•••• ,r and j = 1, •••• ,c, are determined for each of the four procedures
defined in Table (2.3.1), so that exact tests of hypothesis on the significance of the variance components can be made assuming the model
.e
given by (2.2.1).
In section 4.2, restrictions on the si' t j , Vij' and nij are
determined for procedures 1 and 3 so that the sums of squares corresponding to each source of variation are such that SSi/EMSi~X2(Pi)'
where Pi is the rank of the matrix of the quadratic form for SSi.
section 4.3 gives a similar set of restrictions for procedures 2 and 4.
In section 4.4 it is shown that under the restrictions derived in sections 4.2 and 4.3 the sums of squares are an independent set, in which
case exact tests of the hypotheses
a: • 0,
a~
• 0,
and
a;c = 0 can be
made.
4.2
~
Restrictions on si, ti' Vii' and nii for Procedures 1 and 3
For any of the four procedures the expected mean squares for
rows, columns, and interaction will generally contain terms in
a:,
a~,
68
O~c'
and
0: if the design is unbalanced in the si'
t j , Vij' or n ij ,
Also,
~
the coefficients of the variance components in the expected mean squares
will be different for each source of variation.
Even if SSt/EMSi were
distributed as X2 (Pi)' this would not be helpful in testing H : 0 2
•
0,
since such a test could not be made using the ratio of two independent
X2 ·s divided by their degrees of freedom.
For procedure 1 if n ij
=n
for all i and j, and if the various sums of squares divided by their
expected mean squares are independently distributed as X2 ·s then tests
of H : 0 2
=0
can be made using the ratio of two independent X2 ·s divided
by their degrees of freedom.
for the case nij
=n
The expected mean squares for procedure 1
are given in Table (4.2.1).
tion it is shown that nij
=n
Later on in this sec-
is necessary but not sufficient for the
SSi/EMSi to be distributed as X2 ·s.
e.
Table (4.2.1)
Expected Mean Squares for Procedure 1 when nij
=n
Expected Mean Squares
Source
L
1
nc
n
2
2
0 r2 r
+ 0e
.1. si + °rc cr LLvi·
i j
)
2
n
02
0 c2 nc L1
°rc --iY
e
• .
1.) +
c j tj
e
2
2
n
0
°rc ere
0 e2
Rows
-+
Columns
Interaction
Error
LL:v-h-
~~vh+
Restrictions on si' t j , and Vij' sueh that SSi/EMSi~X2(Pi) are
obtained using the following theorem:
If !..-..N(J!.,V) then !.'QiY/[E(!.'Qi.!.)/Pi)"""" X2 (Pi,A), where
Pi
= rk(Qi)
and A
is idempotent.
=
1/2
~'Qi ~, if and only if QiV/[E(!.'Qi!.)/Pi)
69
A proof of the above theorem .is given by Graybill (9).
From (2.3.1), (2.4.1), and (2.4.4) the row sums of squares,
55 1 ,
is given by,
55,
= .!.'tr(~!,i)
- ~ ~-) .!.
,. Y"'Ql!.' say.
Restrictions on
8
i and Vij can now be obtained by determining the con-
ditions that must be placed on si and v ij for QIV/[E(!....Ql!.)/Pl) to be
idempotent.
Partition Q1 into r 2 cxc matrices Q ......
The elements of
111
Dr(~!i) and ~~... are nijmij' respectively.
Replacing n ij by n, and
fij and mij by their values for procedure 1 given in Table (2.3.1), the
.e
sub-matrices of Q1 are
if
i .. i'"
Q11'i'" ..
Let QIV be partitioned into r 2 cxc matrices, X..... ' and denote
... 11
the first row of this partitioned form by the matrix X
rl
Using the
partitioned form of V given by (2.4.8),
n(r-l) J + 0 2 n(r-l) J Df-!-)+ 0 2
c
rS
rc rc
C~Vij
e
I
n
rc rc JcDc
(v~j)
D
rc ~J
rc c c
(v~j)
_ 02
02
_ 02
1
e -rc J c '
-
02
e
r~ J c )
.....
r-l
rc
-- J
,
-0
c
....2
n
,-u::: - - J
rrs 2 . c
2 n
J
r 'fir
r c
(4.2.1)
If XCI is the first column of the partitioned form of Q V, then
1
XCI
is obtained from
= l, •••• ,r.
i
Xrl
by replacing the si by sl and v
ij
by v
lj
' for
If QIVQIV is partitioned in the same way as QIV, then the
cxc matrix in the first row and column of QIVQIV is
e.
r
!!.[r1) (--l:....) + rtC2
nL..J
~ c
c r j c c v..
l
+
J 0
0
1J
04
+
e
~l-c (r-lJ
- - 2 + r-l )
r
cr
~
Xll /EMS(Rows)
C
i=2
(4.2.2)
Jc
= the
cxc matrix in the first row and column
of QIVQIV divided by (EMS(Rows))2
(4.2.3)
From (4.2.1),
X
1 1"
(--l:....)
0 2 n(r-l)J + 0r2 c n(r-l)J 0
r rs
c
rc
c c v. .
1
1J
+ (12 r-l J ' the right hand side
e rc
c
71
of (4.2.3) is given by (4.2.2), and EMS(Rows) is obtained from Table
(4.2.1).
The only way for the identity in (4.2.3) to hold is to have
51'
= sand
v 1]
..
=v
= l, •••• ,r
for i
and j
= l, •••• ,c.
Under these re-
strictions, (4.2.2) reduces to:
= EMS (Rows)
.e
XI I ' where
EMS(Rows) and XII are obtained from Table (4.2.1) and (4.2.1), respectively, with si and Vij replaced by s and v.
If the sums of squares for columns had been used instead of the
row sums of squares then the restrictions t j .. t and Vij = v would have
been obtained.
So far, it had been shown that QIV/EMS(ROWS) cannot be
idempotent unless si
= s,
Vij = v, and nij
= n.
It remains to be shown
that under these restrictions QIV/EMS(RoWS) is, in fact, idempotent.
are given by
,,'-!.J+
(,,'t c+ -°rcv+ ne
s
c
2
2
0
Vii" =
2
0c
tIc
if
i
~
i"
)
Ic
if i
= 1. "
(4.2.4)
72
if i =
...
1.
(4.2.5)
if i
~
i"
(4.2.6)
Then, Q1V is obtained as
=
EMS (Rows)
c
Rr1 • • • • • • • • ~r
=
EMS (Rows) R
c
(4.2.7)
where
R ..,
1.1.
= i"
if
i
if
i ~ i"
=
(4.2.8)
From the result given by (4.2.7),
=
=
Thus,
QIV
EMS (RowS)
Q1V/EMS(RoWS) ,
using (4.2.7).
is idempotent.
From (4.2.5) and (4.2.8), Q1 can be expressed as, Q1
=
n
~
R.
So,
73
~)
('L'
(
2 (1)2
1
C =~
R
a
Q
Thus, - 1
n
R
, since R2
=
is idempotent.
=r
Also, ~'Ql~
=0
if ~'
•
cR.
Aad,
- 1.
= ~~c'
since the sum of the elements in each
column of Q1 is zero •
.e
In a similar way it can be shown that if t j • t, Vij ... v, and
= n,
then
55(Columns)~ X2 (c-l).
EM5 (Columns)
From (2.3.1), (2.4.1), (2.4.2), (2.4.3), and (2.4.4), the interaction sums of squares, 55
3
,
is
Partition Q3 into r 2 cxc matricies, Q'ii';
n(r:l) (I c -
~ Jc)
if nij
= n,
then
if i ... i '
Qsii' ...
(4.2.9)
- n
r
(rc _!.c
J )
c
if i
~ i~
74
Partition Q3V into r 2 cxc matrices, and denote the matrix in the
ii'th position of Q3V by «Q 3V»·1.1.".
Using the partitioned form of V
given by (4.2.4), the matrix «Q3V»ii' can be written as,
!!. + 0 2) II
v
e ,c
«Q3V»ii'
J)
c
= i'
if i
=
(4.2.10)
O~)(Ic - ~ J c)
- ;(o:c ; +
In (4.2.10), O~c ~ + O~
and nij = n.
_!.c
= EMS (Interaction)
Then, Q V/EMS(Interaction)
3
if i "
= t,
when si - s, tj
= !.
Q3'
n
i~
v ij
= v,
and
-~ J c ) + r;i(I c _ ~ J c)2
=(r:1 )(I c - ~ J c) if i == i '
(r;1)2(I c
Q3V
.\ 2
( EMS(Interactiony
=
-
r.
~ J c)
= - ;(I c =
_ !. J
2 (r-1) (
1)2
r-2
r
Ie - ~ J c
+ ~~c
!.
Q
n 3
C
)2
C
if i " i'
= Q V/EMS(Interaction)
3
Thus Q V/EMS(Interaction) is idempotent, and Q V/EMS(Interaction) ~X2
3
(P3,A) where P3
3
= rk{Q 3) = rk(~
~
Q3) •
tr(Q 3)
= ~ tr(nr(~fc
=
(r - 1)
- ~ c)
J
(c - ~ c)
= (r - 1) (c - 1) •
Now,
,1,
1\ = - 1.1 Q l.l.
2 3-
If !:!.'
= 1.1!.', then
A
=0
since -1'Q 3
=
o.
e.
75
When nij
...
= n,
the sums of squares for procedure 3 can be obtained
from those of procedure 1 by dividing the procedure 1 sums of squares by
n.
The corresponding expected mean squares will also be divided by n.
Thus, for rows, columns, and interaction, !~Qi!l[E(!~Qi!)/Pi) will be
the same for procedures 1 and 3.
In this section it has been shown that a necessary and sufficient
condition for the row, column, and interaction sums of squares divided
by their respective expected aean squares to have a central chi-square
distribution is for si
rand j = l, •••• c.
= s,
tj
= t,
Vij
= v,
and n ij
= n,
for i
= 1, •••• ,
It should be noted that under these balanced condi-
tions all four procedures are identical.
,e
4.3
Restrictions on si' tj' vii and n ij for Estimation Procedures 2
and 4
For estimation procedures 2 and 4, exact tests of the hypotheses
a~
= 0,
a~
= 0,
and a~c
=0
cannot be made even if nij
= n.
Some sim-
plification in the expected mean squares can be obtained if v ij
and nij = n.
= Sitj
The expected mean squares for this situation are given in
Table (4.3.1).
Table (4.3.1)
Expected Mean Squares for Procedure 2 when
v·1). = v and n·1).
Source
Rows
=n
Expected Mean Squares
76
2
0 c2 nLsi + no rc
+
Columns
0 e2
c-l
~L
~
~s.
~
i
- .L2
J
L:. t.J
...
)
Ls~
i ~
02
2
e
no rc +
(r-l) (c-l)
Interaction
L:t~
t·
j
e
Lt2
_L2
---
4:i s i
Lt.
j
)
Error
From Table (4.3.1) it is obvious that exact tests of the hypotheses O~
tj
= t.
= 0,
O~
= 0,
and O~c
=0
cannot be made unless si
= sand
The expected mean squares for this case are presented in Table
e.
(4.3.2)
Table (4.3.2)
Expected Mean Squares for Procedure 2 when
si
= s,
tj = t, Vij
Source
= v,
and nij
= n.
Expected Mean Squares
Rows
2
ntc 0 r2 + n °rc
+ st0e2
Columns
2
nsr 0 r2 + n °rc
+ sto~
2
2
n 0rc + st0e
Interaction
0 e2
Error
For si = s, t j = t, Vij
= v,
and n ij
= n,
the expected mean
squares for procedure 4 are equal to those of procedure 2 divided by n.
A similar relationship exists between the sums of squares for rows,
columns, and interaction for the two estimation procedures.
si
= 5,
tj = t, and Vij
= v,
Also, if
then the expected mean squares for rows,
77
columns, and interaction in Table (4.2.1) multiplied by st are identical
to the corresponding expected mean squares of Table (4.3.2).
From the
above it is apparent that the various sums of squares and expected mean
squares for procedures 2 and 4 can be obtained from those of procedure 1
if si
= s,
tj = t, Vij
= v,
and nij
= n.
Under these restrictions it
can then be shown, using the results in section 4.2, that for estimation
procedures 2 and 4 the row, column, and interaction sums of squares divided by their expected mean squares have chi-square distributions with
r-l, c-l, and (r-l) (c-l) degrees of freedom, respectively.
Independence of the Sums of Squares
4.4
In sections 4.2 and 4.3 it was shown that for each of the four
estimation procedures defined in Table (2.3.1) the row, column, and interaction sums of squares divided by their expected mean squares each
have a chi-square distribution if si
for i
= l, •••• ,r
and j
= l, •••• ,c.
cov(SS(Columns), SS(Interaction»
= s,
tj = t, Vij
= v,
and nij
=n
If cov(SS(Rows), SS(Interaction»=
= cov
0, exact tests of the hypotheses 0:
= 0,
(SS(Interaction}, SS(Error»
o~
= 0,
=
o~c = 0 can then be
made using F-tests.
It was shown in section 2.3 that cov(SS(Introduction), SS(Error»
=0
for all four estimation procedures.
The other two covariances will
be evaluated using procedure 1, since, as stated in sections 4.2 and
4.3, if the design is balanced in the si, tj' Vij' and n ij then the sums
of squares and expected mean squares for rows, columns, and interaction
for the four estimation procedures differ only by a multiple of nand/or
st.
78
The covariance between the row and interaction sums of squares
is given by
using (2.4.6).
Q, V, and Q are given by (4.2.4), (4.2.5), and
1
3
(4.2.10), respectively.
From section 4.2,
~~Ql =~,
if
~~
= ~!~.
Therefore,
cov(Y~Q
Y,Y~Q3YJ = 2tr(Q VQ
1- 1
3
vJ.
If the matrix product Q VQ V is partitioned into r 2 cxc matrices
1
3
« Q1VQ 3V » 11
.. ~, then, using (4.2.7) and (4.2.10),
_!J)
c c
e.
+
=0
since Jc €c -
~ J c) = 0
•
Similarly, it can be shown that cov(SS(Columns), SS(Interaction»
In this chapter it has been shown that if si
Vij
= v,
and nij
=n
for i
= l, •••• ,r
and j
= s,
= l, •••• ,c,
= o.
tj = t,
then SS(Rows),
SS(Columns), and SS(Interaction) divided by their expected mean squares
are, for each of the four estimation procedures, distributed as central
chi-squares with r-l, c-l, and (r-l) (c-l) degrees of freedom respectively.
Also, for the above restrictions on si' tj' Vij' and nij'
cOV(SS(Rows), SS(Interaction»
cov(SS(Interaction), SS(Error»
= cov(SS(Columns),
= O.
SS(Interaction»
=
For this situation exact tests of
•
79
the hypotheses a~
= 0,
a~ = 0, and a~c • 0 can be made using the ratio
of two independent chi-squares divided by their degrees of freedom.
CHAPTER V
COMPOSITING IN THE 3-STAGE NESTED DESIGN
5.1
Introduction
Using the results from the two-way crossed classification, given
in Chapter II, estimates of the variance components and their variances
are obtained for the 3-stage nested design in which the experimental
units (sub-classes) have been composited.
In such a design compositing
can occur in either of two ways:
1) The experimental units within a single class (first stage) are
composited into one or more groups.
2) The experimental units from two or more classes are composited
into a single unit.
For either method of compositing the experimental units, four
estimation procedures using weighted or unweighted observations or subclass (second stage) means are defined and compared.
In this chapter the model for the 3-stage nested design is a
generalization of the two models given by Kussmaul and Anderson (18).
Some of the results given by these authors are evaluated assuming the
general model for the 3-stage nested design which is discussed in the
next section.
e.
81
5.2
The Model
In Chapter I an example was given of a 3-stage nested design in
which the sub-class experimental units were composited.
The general
model for a 3-stage nested design in which there is among classes and/or
within sub-class compositing is given by
si
Yijk
=
lJ
1
+ si
r:
si
riR, +
1=1 .
1
Sitij
tij
L L
1=1
c j (i) R.m + eijk
m-1
(5.2.1)
where, i = l, •••• ,r
j = 1, .... ,ci
k = l, •••• ,nij
and,
N
"L:n..1J = """""
4J n.
= 4.
1"
J
1
ci=~the
1
="""""
~ n. J.
J
number of sub-classes within the i th class of model (5.2.1)
si= the number of classes that have been composited to form the i th
class in model (5.2.1)
tij= the number of experimental units that have been composited from the
i1 th class in the non-composited design to form the jth sub-class
in model (5.2.1)
A restriction on model (5.2.1) is that if two or more classes
are composited to form a single class then an equal number of subclasses are composited from each class, i.e., if si > 1, then til
= •••• = t ici •
= til
It should be noted that if si > 1 then the corresponding
value of 0i is also equal to one.
82
For model (5.2.1) it is assumed that the only fixed effect is
the mean,
~,
and that the class effects,
ri~'
the sub-class effects,
c j (i)1m' and the measurement error effects, eijk' are random, and are
normally and independently distributed with zero means and variances
a~, a~/r' and a; respectively.
It is also assumed that all covariances
among the effects are zero.
In order to better understand model (5.2.1), and the way in
which experimental units are composited, a schematic representation of
the model is given below for the case r
1, n
t
23
13
= 2,
= 3.
n
21
= 2,
s
1
= 1,
s
2
= 3,
t
11
= 2,
=
c
1
2, t
= 1,
n
= t 13 = 3,
t
- 3, c
12
2
11
21
= n 12 =
=
t
22
The design prior to compositing has 4 classes, eight sub-
classes within the first class, and 3 sub-classes within each of the
other 3 classes.
Classes
(2,2)
(1,1)
(2,3)
Sub-
Classes
The 8 experimental units
are combined into 3 groups
of 2, 3, and 3.
Measurements
n
11
• 1
n
= 1
12
n
A
13
=2
The 9 experimental units - 3
from each of 3 classes - are
combined into a single group.
n
/\
21
= 2
Measurements are made on each group of experimental units.
=
83
The model for any observation in the above design can be represented by model (S.2.1).
For example, the model for the second measure-
ment in the second class and only sub-class within the second class of
the composited design is given by,
y
= 11 +
212
+c
5.3
!3 (r 21 + r 22 + r 23 ) + !9 (c 1(2)11 + c !l2)12 + c 1(2)13
+c
1(2)21
1(2)22
+c
+c
+c
~+e
1(2)23 1(2)31 1(2)32 1(2)3~
212
+c
Estimates of the Variance Components and their Variances
For the 3-stage nested design, the sums of squares for each
source of variation can be computed using weighted or unweighted observations or sub-class means.
.e
These sums of squares and their expected
values can be obtained from the sums of squares and their exPected
values defined in Chapter I I for the two-way crossed classification.
The analysis of variance table for the 3-stage nested design is
given in Table (5.3.l)
Table (5.3.l)
Analysis of Variance Table for the 3-Stage Nested Design
d.f.
Source
Sums of Squares
r-l
Classes
SS
Sub-Classes LCi-r
i
Error
N-
SS
L:Ci
i
SS
1
3
.
= T
= T
W
1..
, W
23
,
3
..
and w
2 ..
5
3
= T5
T , T , T , T , and T
1
1
6
- T
..
- T
- T
1
6
Expected Sums Squares
2
w (12 + w (1c/r
+ w (12
11 r
1.. e
13
2
w (1c/r +
23
(12
2.. e
W
(N ~:C1a~
are obtained from (2.3.l), and w
are obtained from (2.4.4l) when the following
11
, w
13
84
notational changes are made.
1) t.J.
=
t·J.).
c
...
ci
2)
(5.3.1)
3) gij
4) Vij
=
=
0
Sitij
The weights, f ij , h ij , and m , used to evaluate T
ij
I
,
T~,
and
are given in Table (5.3.2) for each of four estimation procedures.
T~
These
procedures are similar to those used in Chapter II, and are also subject
to a set of restrictions similar to those given in section 2.4.
Table (5.3.2)
Values of f ij , h .. , and mij for Four Estimation Procedures
J.)
Procedure
1
f ij
h ij
m..
J.)
l/Iiii.
l/nij
l/IN
2
lIn J.)
. . Ie:
J.
3
s.t . ./n .. It s.t· .
J. J.) J.) j J. J.)
4
lIn·J.).{ t cJ.·
i
2
s.t .. /n ..
J. 1.) J.)
The sums of squares corresponding to any of the four estimation
.. , and m..
Procedures are computed by substituting the values of fJ.')·' h J.)
J.)
given in
Tab~e
(5.3.2) into the sums of squares defined by (2.3.1) and
using the notational changes given by (5.3.1).
e.
85
From (2.4.9), (2.4.17), (2.4.31), and the changes given by
(5.3.1), the expected values of T , T , and T
l'
design can now be obtained.
for the 3-stage nested
It
They are
(5.3.2)
E(T )
It
(5.3.4)
.e
The coefficients of a~, a~/r' and a~ in Table (5.3.1) are then
obtained by evaluating the appropriate differences in E(T ), E(T ), and
1
E(T).
3
These coefficients are:
It
W
11
w
13
(5.3.5)
W
lit
W
23
86
Note that the coefficient of O~ in the expected sub-class sums of squares
is zero for each of the four estimation procedures.
2
Estimates of 0r2 , 0 c/r'
and 0 e2 can directly be obtained from Table
(5.3.1), by equating the sums of squares to the expected sums of squares
and solving the resulting equations.
The estimates are,
(5.3.6)
2
0c/r
(T
=
3
-
T1
W
2~
Oe2 )/w 29
- T
(5.3.7)
(5.3.8)
W
11
~
The variances and covariances of Tl, T 3 , and
T~
are obtained
from (2.4.13), (2.4.22), (2.4.27), (2.4.32), (2.4.36), and (2.4.39)
using the changes given by (5.3.1).
These variances and covariances
are given by,
~
2
+ °c/r L.-J
j
f~.n~.
J.J
J.J
t
si ij
+
O~~. jn~. ~Lh n~
+ s.
J.
COV(T I,T
J.
J.J
.... J.J...
." J.J
J
J- 2LLf· .n~j 0~ ~f.
Y
. j
1.
J.J
1.
s.
1.
)
-
hiJ.n~J~~
J.)· .. n··
J.J
..
(5.3.10)
e.
87
(5.3.13)
Since (N - E ci)a~/o~ has a chi-square distribution with
i
.e
(N - E c i ) degrees of freedom and cov(a~,SSi)
= 0,
i • 1,2,3, the vari-
i
ances of the estimated variance components can now be obtained by eva1"'2
"'2
"'2
uating var(Oe)' var(Oc/r)' and var(Or)' using (5.3.5) and (5.3.9) "'2
"'2
"'2
.
(5.3.14), when 0e' 0c/r' and Or are defined by (5.3.6), (5.3.7) and
(5.3.8) respectively.
88
(5.3.16)
- 2(1+ w~) 2:S+(Lm..
1.
. 1.J n'j~2(LfiJ.,niJ.)2
1.
. '"
1.
23
•
J
J
e-
89
-e
90
- 2
(1+ w~)~
~ ~m .. n'j~f ....... ~n· .~~f. ·"n··"
~51' ~ 1J 1 ~ 1) 1J 1J ~ 1J 1J
23
i
j
j"
j
2
2
4(1c/r(1e
~ w"~ ~
~
w
13
1+ - -
2
2
f 2.. n..
1) 1)
Shj
~
f2
.. ~n.·~
1)
1)
+
(
w2
13
w,,1 ~~
--
h 2.. n 3..
1) 1J
Sitij
e(5.3.17)
If 5i
= tij = 1
for i
= l, •••• ,r
and j
= l, •••• ,ci '
f ij , h , and m
ij fore5timation procedure 1 are used, then
ij
and if the
var(cr~/r]
and var(1~)are the same as those derived by Searle (25), and later rederived by Prairie (21) and Goldsmith (8).
91
5.4
Estimating the Mean
An unbiased estimator of the mean, 1.I, is
Corresponding to each of the four estimation procedures defined
in section 5.3, a set of Uij's can be defined.
They are given in Table
(5.4.1).
Table (5.4.1)
Values of
Uij
for each Estimation Procedure
Estimation Procedure
liN
1
-4t
2
sitij~~siti·n
..
i j
) 1)
3
4
Sit .. /ni·~~sit.).
1)
>'t j
1
From (2.5.4) and using the changes given by (5.3.1),
(5.4.1)
92
5.5
Numerical Comparison of Estimation Procedures
The complexity of the formulae for
var(a~/r)' var(a~) and var(~)
given by (5.3.16), (5.3.17), and (5.4.1) precludes the possibility of
making analytical comparisons among the four estimation procedures given
in section 5.3.
This being the case, numerical methods, similar to
those used in the comparison of estimation procedures in the two-way
crossed classification, are used.
The estimation procedures are compared by constructing unbalanced
3-stage nested designs, then computing
2
A
2
var(a~J, var(a~/r)' var(a~J,
2
var(p) for various values of 0e' 0c/r' and Or.
and
Constructing 3-stage
nested design with across and within class compositing is considerably
more difficult than in the non-composited case since the si and tij have
to be specified in addition to r, the c ' and the n.1J'.
i
Table (5.5.1) ennumerates 24 designs used in the comparisons of
the estimation procedures.
These designs were constructed using the
following constraints,
1) The degrees of freedom for classes, sub-classes, and error are
always equal to r - 1.
These constraints materially simplify the construction of the designs.
The primary reason for using the first constraint is that the relative
sizes of the variance components are usually unknown, in which case an
equal assignment of degrees of degrees of freedom to classes, sub-classes,
e-
M
m
Table (5.5.1)
3-stage Nested Designs Used in the Comparison of Four Estimation
Procedures When E I Sitijnij • 50, 100, 200
i j
Design ~ ~ Sitijnij
n ..
t ..
ci
si
No.
1. J
r
1.)
1.1
-43,2,1,1
50
1,1,7,7
1
1,3,4,4,3, (2)*
2,2,1,2,3(1)
3,2,1,1
1,1,2,2
2
1,3,4,4,2,7,7
2,2,1,2,3 (1)
4,3, (1)
3
1,3,5,4
5(1),4,5
3(2),4(1)
4
3(2),1
3(1),12
6(2),1
4(1),2,1,3
5
4(3),4(1)
8
5(1),3,2,2
1,3,1,2,4(1)1(2U,2,3,4 2,1,1,2,1,1,2,1,1,2,
1,1,4,3(1)
6
5,3,2,5(1)
4(1),2,3,6,5
1,4,3,1,1,3,3(1),3,2, 3,4, (1) ,2,1,1,2,1,2,
1,2,1,1
3,3(1)
7
3,3,4{2) ,1,1 6(1),4,6
10(1),3,2,3,2,1,1
2,2,1,2,2,9(1) ,2,3
8
4 1 3,2,1,1
100
1,1,5,5
2,1,2,1,3(1)
8,3,3,9,7,5,5
9
4,3U)
1,1,3,15
4,6,4,6,8,5,2
2,3(1),2,2,1
10
3 (2) ,1
3(1) ,4
10,13,14,15,8,8,2
4(1) ,2,1,3
11
8
5,3 (2),4 (1)
5(1),2,2,3
8,4(1),7,1,9,1,8,1,1, 3,4(1),3,1,2,1,2,1,2,
3 (1)
2,1,2
12
3,5{2) ,1,1
6(1),5,7
6,6(1),2,8,4,2,3,4,2,1 9(1),2,1,2,1,3,4
13
6,3,6(1)
1,1,4,4,7,5,1,1
3,4,1,5,6,1,6,2,2,1,4,4(2),1,1,2,6(1),2,2
1,2,4,3
14
12
5(3) ,2,6(1)
6(1),3(2),3,3,2
4,2,1,5,4,7,3(1),6,6, 2,2,1,2,2,10(1),4,1,
3,3,2,2,3,6(1),2
3,3,4(1)
15
7,5,2,9(1)
7 (1) ,9,10,1,3,13
1,1,3,1,3,3(1) ,9,8,9, 3,3,5,1,4,4(1),2,1,
7,9(1),2,1
3(2),6(1)
16
200
4
3(2),1
3(1),22
2,1,2,3(1),2
11,9,13,4,21,30,2
17
3,2,1,1
1,1,7,13
3(1),2,1,3,1
1,1,7,12,5,4,6
8
7(2),1
18
7(1),18
7,9,12,1,5,9,1,9,4,4, 6(2),8(1),2
2,10,9,3,2
19
5,3,2,5(1)
11(1) ,2,3{3)
4(1),7,18,18,17
8(1),5,7,7,4(1)
20
4,4(2),3(1)
4,2,1,1,3(2) ,8(1)
5(1) ,3,10,11
11,9,14,7,9,3,1,7,1,
3,5,13,10,12
12
21
3,9(2) ,1,1
10(1) ,22,12
5,5,2,7,1,3(6),2,6(1),5(2),3(3),15(1)
8,2,1,2,3,2,2,3
22
7,3,3,2,8(1) 5(1) ,13,6,5,2,10,4,1
7 (1) ,9,15, 7,3,14,2,4, 13(1) ,3,5,3,4,6(1)
9,6(1),2,1
e
e
e
...
e
e
e
Table (5.5.1) (Continued)
o:r
Cl"\
Design
No.
r. r.
~
s. t .. n ..
~
~J
~J
Ci
r
Si
23
5,3,512),5(1) 7(1),9,7,5,12,3
24
8,3,3,9(1)
*
4(1),9,14,2,2,6,3,2,1
3(2) indicates 2,2,2
t·1).
nij
3,4,5,3,8,2,9,7,14,6,
2,1,3,4,8,2,7,3(1),2,
1,2
8(1) ,2,3,3,10,2,13,7,
1,1,2,2,3 (1),3
8(2),10(1),3(2),1,1
11(1) ,3,3,4,2,2,3,6(1)
95
and error seems to be a reasonable procedure.
For each desiqn in Table
(5.5.1) values of r, the ci' si' tij' and nij are specified.
Design
number 2 is used to illustrate how the desiqns in Table (5.5.1) are constructed from the information presented.
Desiqn 2 has 4 classes (r • 4)
and 3,2,1,1 (the ci) sub-classes within each of the classes.
A schematic
representation of design 2 is given below.
Classes
1
2
s ... 1
1
s 2- 1
3
4
s 3• 2
s ... 2
It
Sub-
Classes
_....
Measurements
n
11
=2
n
12
n
-2
13
-I
n
21
-2
n
22
-I
n 31 • 1
... 1
n
S2
The set of tij'S associated with desiqn 2 are 1,3,4,4,2,7,7.
In
the above diaqram the 3 sub-classes within the first class are formed by
compositing the 8 sub-classes in the non-composited design into 3 groups
of sizes 1, 3, and 4, i.e., t
11
- 1. t
12
... 3. and t
13
- 4.
The third and
fourth classes are formed by compos!tinq, from the non-composited desiqn,
2 classes each havinq 7 sub-classes.
Then, the value of the t ij for the
third and fourth classes of design 2 is 7.
For each of the designs in Table (5.5.1), var(a)/o:, var(a~)/o:·
var(a~/r)/o:, and var(o:)/o: were computed and evaluated at the follow-
ing combinations of p
1
... o~/r/o: and p
(1,1), (8,1/a), and (8,8).
2
- o~/o~:
(1/.,1/.), (l/a,8),
Comparisons amonq the estimation procedures
are made using relative efficiencies (R.E.'s) defined in a similar way
96
to those in section 3.3.
R.E.
=
minimum (var (y)/a:)
, where
var(y)/a:
y
= ~,
a~/r' or a;, minimum(Var(y)/a: is the smallest value of
a;,
var(y)/a: for any of the designs and estimation procedures for a given
value of
~~itiJ·niJ· and
i
j
(p ,p).
I
The R.E.'s are tabulated in Table
2
(5.5.2) for all designs, estimation procedures, and (P ,p ) combinations.
I
2
Table (5.5.2) indicates that any best estimation procedure depends on the relative sizes of p
I
and p.
2
An estimation procedure is
judged to be "best" for estimating Y if most of the time its R.E. is
larger than or equal to the R.E. of the other procedures, and in those
instances where some other procedure has a larger R.E., the difference
in the two R.E.'s is "small".
In Table (5.5.3) the best estimation pro-
cedure(s) is (are) given for estimating ~, a~/r' and a~ for the five
(p ,p) values considered.
I
2
For some values of (P ,p ) no estimation
I
2
procedure satisfied the above definition of best, and in these cases
two or more estimation procedures were selected that were best in all
but a few cases.
the value of
~~'sitijnij' with
~
when PI
= 8.
The choice of estimation procedure did not depend on
the exception of estimating
J
In this case procedure 4 is best
if~~s.t'J.niJ'
= 50,
. .
~
but procedures 1 and 4 are best if
a~/r
~~s 1.. t·~J.ni'J
4J4-J
1. J
J
~
~
"" 100, 200.
e.
Table (5.5.2)
l'
0'1
Relative Efficiencies (R.E.'s) of Four Estimation Procedures for
Estimating the Variance Components and Mean in a 3-Stage
Nested Design with Composited Samples
Design
Estimation Procedures
1
2
3
4
p
Ei Ej s·t·
·n·. No.
P
....
1 1J 1J
"2
"2
"2
1
2
2
2
"2
"2
"2
"2
"
"
"
0
8
II
II
II
II
0c/r Or
0c/r Or
0c/r Or
c/r r
e
50
1
1
1
/8 /8
1/ 8
1
8
8
2
1/
1/ 8
8
8
1/
1
8
8
1
8
8
4
e
.265
.361
.588
.306
.565
.554
.735
.713
.113
.176
.331
.331
.358
.273
.513
.427
.284
.424
.550
.361
.683
.521
.684
.851
.147
.209
.316
.316
.293
.361
.643
.347
.091
.148
.325
.325
.362
.205
.406
.438
.249 .091 .102
.223 .184 .320
.352 .234 .440
.420 .289 .750
.406 .113 .444
.474 .176 .385
.723 .331 .929
.270 .147 .365
.410 .209 .455
.549 .316 .891
.409 .091 .430
.425 .148 .314
.670 .325 .846
.279 .289 .373
.539 .331 .562
.346 .316 .440
.548 .325 .557
.667 .369 .820 .125
.194 .820 .022
.398 .703 .105
.173 .703 .014
1/ 8
.159 .820 .240
.251 .593 .227
.244 .418 .231
.607 .820 .277
.416 .593 .102
.586 .418 .205
.227 .703 .254
.338 .564 .223
.345 .433 .237
.596 .703 .249
.384 .564 .072
.552 .433 .161
8
.188 .418 .236
.776 .418 .307
.271 .433 .253
.770 .433 .286
.332
.867
.603
.508
.843
.418
.310
.429
.464
.373
.471
.677
.689
.647
.752
8
1
8
8
1
1/8 1/ 8
1/ 8
8
1
8
8
.184
.234
.289
.289
.259 .091 .051
.429 .147 .297
8
8
.228
.357
.420
.284
.431 .147 .222
.313 .113 .144
8
1/ 8 1/ 8
1/
.324 .113 .083
.667 .434 .184 .303
.667 .431 .184 .378
1/
1/
8
3
8
1
°
1
1/ 8
8
.667 .512
.425
.558
.559
.500
.099
.099
.155
.322
.322
.276
.430
.427
.412
.431
•
.099
.099
.155
.322
.322
e
.200
.463
.379
.402
.440
.095
.095
.151
.324
.324
.287
.388
.412
.443
.405
.095
.095
.151
.324
.324
.235
.466
.417
.400
.465
e
•
e
e
e
Table (5.5.2) (Continued)
co
(j\
E
E s·t·
·n ..
. .
l. l.) l.)
l.
Design
No.
P
)
Estimation Procedures
P
1
2
82
e
5
1
1
8
8
6
1
l/ s
8
1
/s 1/s. 286
l/ s 8
1
8
8
7
1
/s /s· 286
l/ s 8
1
/s
/s
1
l/
1
8
8
8
8
1
l/ s
8
l/ S
9
1
/s
1/
2
8c/r
6 r2
2
0
2
6 c/r
3
6 r2
!l
6~/r
8
1
l/ S
8
1
/s .667
8
1
1s
8 l/ s
8
8
6 r2
.930
.508
.702
.523
.542
.635
.635
.772
.923
.923
.914
.843
.851
.417
.657
.710
.723
.885
.929
.858
.482
.482
.653
.952
.952
.439
.996
.863
.720
.950
.875
.520
.763
.783
.618
.574 .908
.574 .828
.728 1.000
.939 .854
.939 .830
.896
.445
.676
.605
.552
.646
.646
.719
.750
.750
.763
.671
.769
.536
.615
.795
.699
.906
.941
.829
.414
.414
.558
.780
.780
.391
.777
.732
.586
.769
.812
.436
.676
.779
.539
.529
.529
.653
.818
.818
/s .286 1.000 1.000 1.000
l/ S l/ s .667
1
8
8
11
1
1
100
s
8
1
4
!l
.562
.715
.811
.979
.882
.679 .621
.626 .689
.789 .833
.760 1.000
.662 .851
2
6~/r 6 r
.434 .316
.434 .916
.612 .755
.973 1.000
.973 1.000
.311
.311
.466
.828
.828
.261
.707
.622
.689
.777
.596 1.000
.829 1.000
.666 .911
.658 .911
.767 .661 .516 .907 .785
.870 1.000 .661 1.000 .575 .785
.930 1. 000 .821 .822 .793 .906
.554 .744 .977 .509 .681 .995
.743 1.000 .977 .842 .647 .995
.937
.913
.960
.641
.811
.867 .480
.784 .480
.974 .669
.894 1.000
.889 1.000
.599
.941
.889
.693
.891
.306
.211
.320
.495
.262
.030
.030
.058
.206
.206
.269
.223
.294
.822
.295
.247
.442
.463
.627
.546
.021
.021
.044
.198
.198
.095
.299
.236
.536
.394
.302
.261
.369
.554
.324
.027
.027
.053
.202
.202
.195
.248
.285
.734
.328
.194
.549
.445
.554
.674
.018
.018
.039
.192
.192
.057
.313
.182
.388
.412
.319
.237
.353
.557
.296
.018
.018
.038
.179
.179
.221
.255
.294
.615
.329
.254
.540
.514
.614
.651
.016
.016
.035
.178
.178
.089
.314
.227
.396
.397
.283
.204
.309
.556
.258
.015
.015
.033
.176
.176
.183
.247
.276
.650
.328
.196
.540
.444
.542
.658
.014
.014
.030
.169
.169
.069
.288
.094
.374
.374
Table (5.5.2) (Continued)
(11
(11
1:1:s.t .. n ..
i
j
~ 1J 1J
Design
No.
p
p
2
1
10
1
/8
1
12
1
1
82
c/r
r
c/r
82
r
0
82
82
c/r
r
0
4
82
c/r
82
r
.150
.369
.332
.593
.484
.331
.338
.453
.691
.421
.003
.003
.006
.068
.068
.147
.359
.329
.692
.483
.296
.284
.394
.696
.361
.003
.003
.007
.072
.072
.197
.331
.349
.948
.458
.254
.249
.346
.651
.320
.002 .182
.002 .307
.006 .325
.063 1.000
.063 .430
/8 1/8
/8 8
1
1
8 1/ 8
8
8
.286
.625
.399
.510
.570
.458
.442
.442
.574
.791
.791
.856
.617
.693
.383
.635
.164
.370
.528
.665
.446
.398
.398
.535
.788
.788
.783
.530
.661
.997
.651
.556
.413
.557
.445
.450
.467
.457
.564
.727
.727
.607
.627
.592
.295
.581
.523
.468
.640
.772
.560
.415
.415
.553
.801
.801
.623
.617
.697
.798
.743
.286
.750
.791
.932
.670
.827
.272
.272
.400
.654
.654
.610 .570 .156 .360
.813 1.000 .156 .728
.685 .945 .267 .553
.333 .647 .657 .495
.743 1.000 .657 .740
.584
.545
.674
.631
.576
.283
.283
.405
.312
.631
.620
.736
.633
.312
.665
.529
.660
.779
.772
.757
.167
.167
.284
.680
.680
.486
.666
.611
.599
.756
.286
.556
.285
.458
.631
.347
.292
.292
.408
.570
.570
.730
.427
.585
.
.714
.515
.500
.462
.634
.871
.566
.141
.141
.247
.620
.620
.310
.484
.511
.682
.600
.558
.343
.526
.692
.415
.249
.249
.366
.581
.581
.522
.431
.546
.655
.519
.405
.600
.680
.841
.726
.122
.122
.223
.637
.637
.181
.492
.410
.540
.611
.369 1.000
.369 1.000
.559 1.000
.948 .464
.948 .971
.806
.558
.812
.893
.656
.151
.151
.284
.885
.885
.704
.802
.898
.674
.904
.882
.734
.978
.888
.817
.341
.341
.522
.903
.903
.875 .662 .140
.987 .533 .140
.969 .763 .270
.540 1.000 .924
LOOO .646 .924
.657
.762
.847
.883
.923
1
/8
8
1
1/ 8
8
1/ 8 1/ 8
1
/8
1
8
8
8
1
1
/8
8
/8
1
1/ 8
1
8
8
e
0
.003
.003
.008
.075
.075
1
/8
1
1
82
.354
.366
.482
.633
.446
1
8
8
14
82
Estimation Procedures
3
.667
/8
13
e
0
2
/8
8
1
1 ~
/8
8
/e
1
8
8
11
1
82
]
/8
8
1
1/ 8
8
.182 1.000
.741
1.000
.785
.805
e
e
;
e
.
e
e
f
Table (5.5.2) (Continued)
0
0
.-i
L L s.t. ·n··
. .
1. J
1. 1.J 1.J
Design
No.
15
p
1
16
18
19
1
8
8
8 c/r
r
a
8~/r 8:
11
2
8 c/r
8r2
.242
.451
.461
.440
.556
.776 .566
.462 .566
.680 .776
.601 1. 000
• 529 1. 000
.672
.668
.668
.242
.627
.421
.430
.581
.942
.541
.070
.070
.151
.762
.762
.156
.401
.364
.476
.521
.667
.357
.119
.323
.518
.167
.007
.007
.013
.079
.079
.199
.261
.323
.740
.390
.282
.202
.391
.504
.280
.003
.003
.005
.045
.045
.091
.281
.253
.511
.423
.317
.108
.291
.472
.151
.006
.006
.011
.075
.075
.184
.252
.304
.729
.376
.267
.132
.310
.502
.187
.002
.002
.004
.040
.040
.095
.251
.242
.563
.385
.667
.359
.136
.333
.318
.174
.116
.116
.145
.201
.201
.150
.207
.222
.184
.258
.201
.320
.373
.379
.423
.037
.037
.058
.184
.184
.018
.206
.086
.116
.287
.289
.981
.246
.208
.123
.128
.128
.153
.195
.195
.133
.184
.194
.153
.224
.121
.277
.248
.264
.378
.042
.042
.066
.195
.195
.019
.217
.091
.128
.307
.286
.708
.202
.545
.535
.261
.171
.171
.236
.416
.416
.609
.634
.747
.729
.798
.447
.304
.597
.671
.407
.096
.096
.148
.413
.413
.338
.598
.639
.950
.845
.667
.213
.554
.525
.273
.139
.139
.200
.430
.430
.500
.678
.725
.662
.845
.517
.318
.655
.753
.427
.093 .364
.093 .687
.146 .697
.440 1. 000
.440 .963
.286
.782
.308
.710
.502
.366
.695
.695
.682
.645
.645
.308 .576 .221 .044
.390 1.000 .221 .455
.394 1.000 .314 .171
.177 .627 .593 .079
.421 1.000 .593 .466
.559
.154
.401
.297
.186
.695
.695
.682
.645
.645
.380
.345
.380
.192
.380
.676
.610
.971
.730
.720
.221
.221
.314
.593
.593
1
l/e
8
1
l/ e
8
8
a
82
.156
.156
.312
.860
.860
8
1
c/r 8 r
4
3
2
.495
.438
.616
.945
.545
1
lie
2
.773 1.000 .875
.407 1.000 .641
.581 1.000 .617
.406 .843 .187
.430 .843 .518
1/ 8
1/ 8 1/8
1/ 8
8
~
2
82
.182
8
1/ 8 l/ e
1/
8
8
1
8
8
8e
1
1/ 8 l/e
1/ 8 8
1
8
8
2
1/ 8
1/ 8 1/ 8
1/
8 8
1
8
8
17
1
2
1
1
/8 /8
1/ 8 8
1
8
8
200
P
.115
.412
.295
.191
.503
Table (5.5.2) (Continuedt
r-l
0
r-l
Design
~ ~ si tijnij No.
1
Estimation Procedures
p
1
J
20
1
8
8
22
24
8
4
(Jc/r
"2
"'2
(Jr
.036
.036
.062
.305
.305
.135
.417
.377
.631
.621
.182 1.000
.244
.679
.589
.308
.732 1.000
.732 .857
.806 1.000
.923 .463
.923 .981
.430
.333
.610
.663
.445
.422
.422
.520
.801
.801
.169
.617
.531
.591
.981
.948
.311
.755
.509
.366
.712 .690
.712 1. 000
.753 .892
.824 .281
.824 .958
.271 .473 .072
.489 .• 473 .768
.519 .601 .350
.516 .955 .362
.641 .955 1.000
.182
.958
.219
.614
.523
.274
.673
.673
.744
.833
.833
.624
.547
.633
.186
.609
.711
.221
.590
.663
.292
.124
.124
.198
.585
.585
.320
.400
.500
.505
.555
.788
.192
.537
.498
.246
.656
.656
.716
.779
.779
.497
.531
.606
.233
.616
.574
.221
.565
.784
.304
.095
.095
:160
.633
.633
.221
.436
.461
.543
.627
.182
.952
.199
.607
.819
.270
.313
.313
.422
.765
.765
.777
.567
.812
.810
.774
.870
.255
.711
.983
.348
.178
.178
.268
.698
.698
.497
.549
.725
.931
.791
.960
.259
.720
.789
.341
.276
.276
.374
.690
.690
.624 .799
.706 .302
.840 .771
.639 1.000
.905 .411
.146
.146
.229
.704
.704
.402
.653
.734
.895
.928
.182
.894 1.000
.174 1.000
.532 1.000
.634 .988
.232 .988
.788
.444
.667
.591
.591
.598
.131
.401
.630
.182
.299
.299
.384
.660
.660
.384
.326
.462
.511
.461
.767
.167
.486
.487
.216
.989
.989
.959
.923
.923
.569
.481
.619
.392
.600
.886 .565
.268 .565
.730 .695
.853 1.000
.358 1.000
.415
.536
.646
.465
.718
8
8
8~/r Or
"
1.1
.118
.118
.170
.346
.346
1
1
8~/r 8~
"2
.617
.185
.500
.560
.244
1/ 8
1/8
3
"
1.1
.216
.358
.430
.847
.532
8
.127
.127
.184
.388
.388
"
lJ
.048
.048
.080
.321
.321
1
.628
.135
.404
.498
.181
2
~~/r 8 r
.474
.130
.378
.678
.183
1/8
1/ 8 1/ 8
1/
8
8
1
8
8
e
1
1/ 8
a
.426
.429
.539
.445
.546
8
1/ 8 1/ 8
1/
8
8
1
8
8
286
1
1/ 8 1/ 8
1/ 8 8
1
8
8
23
8e2
1/ 8
1/ 8 1/ 8
1/ 8 8
2
1
2
1
1
/8 / 8 .
1/ 8 8
1
8
8
21
P
e
.355
.397
.551
.435
.635
.397
.207
.470
.682
.289
e
102
Table (5.5.3)
Best Estimation Procedures for the 3-Stage Nested Design
with Composited Samples
Best Estimation Procedure for Estimating
2
cr r2
1
1
1
8
2,4
1
2,3,4
1
1
2,4
1
1,3
8
lie
2,4
1,4
8
8
2,4
1,4
1
P2
lie
lie
lie
p
crc/r
2,3,4
4
A few general conclusions can be obtained from Table (5.5.3).
1) If cr~/r and cr~ are small relative to cr~ then procedure 1 is
best for estimating ~, cr~/r' and cr~.
2) If cr~/r and 0; are greater than o~ then procedure 1 should not
be used to estimate ~ or cr~.
3) Procedure 3 is never best for estimating ~ or cr~/r.
The above conclusions are valid only for 3-stage nested designs
restricted to have an equal number of degrees of freedom for classes,
sub-classes and error.
5.6
A Comparison of Some Designs
In this section some selected comparisons are made among some
of the designs given in Table (5.5.1), to illustrate some of the difficuIties involved in trying to obtain optimal designs for the joint
e.
103
estimation of the mean and variance components in 3-stage nested designs
having both within and across class compositing.
The. only difference between designs 1 and 2 is that for design 1
s
= s
3..
= 7, t
31
= t
32
= 2,
and for design 2
s
• s
3..
= 2,
t
31
= t
32
= 7.
From Table (5.4.2) it is seen that the effect of decreasing the amount
of across class compositing, for
fixed~~s,t. ,nij , is to obtain a
i
j
1. 1.J
poorer estimate of the mean, but an improved estimate of a~.
The esti-
mate of a~/r was the same for designs 1 and 2.
The values of r, the c i ' and the n
are the same for designs 4
ij
and 10, and for designs 1 and 8.
For each of these designs there are 4
classes, and a total of 7 sub-classes.
.e
= 50,
and for designs 8 and 10
For designs 1 and 4 ~~s, t. ,n ..
~~sitijnij = 100.
i
j
1. 1.J 1.J
The four designs
are compared using R.E.'s defined by,
var(y)/a: for designs 4 ~r 10 with Si-tij=l
=
R.E.
var(y)/a:
Note that in a non-composited design, i.e., when si • tij = 1, estimation procedures 1 and 2, and 3 and 4 are equivalent.
In computing the
R.E.'s, the estimation procedure for which var(y)/a: was a ainimum for
a given value of (p , P ) was used.
1
2
Sometimes the minimum was for pro-
cedures 1 and 2, and sometimes for procedures 3 and 4.
19 were also compared;
Designs 6 and
the comparisons being made using R.E.'s computed
in a similar way to those for designs 1, 4, 8, and 10, except, the noncomposited design used to calculate var(y)/a: in the numerator was
104
= tij
design 19 with si
• 1.
The R.E.'s computed for the above six
designs are given in Table (5.6.1).
The essential difference between designs 4 and 10, and between
designs 1 and 8 is that there is more across class compositing and less
within class compositing in designs 1 and 4 than in designs 8 and 10.
Comparing the R.E.'s in Table ( 5.6.1) shows that if the best estimation
procedure is used, then for some values of (P ,p ) designs 8 and 10 are
1
better for estimating
for estimating a~/r.
~,
2
but for all values of p
designs 1 and 4 better
1
Design 8 is better than design 1 for estimating
a~, but design 10 is better than design 4 for estimating a~ if p > 1.
1
A similar set of comparisons between designs 6 and 19 shows that design
19 is best for estimating ~, and for estimating a~ if p
1
=8
and p
=
2
From Table (5.6.1) it can be seen that a design with no compositing, i.e., when si
= t ij
= 1, usually does not give better esti-
mates of ~ and a~, but for the six designs considered does give better
estimates of a~/r'
For some designs the gain in efficiency of a com-
posited design over a non-composited design when estimating ~ and a~
is quite large for all estimation procedures, especially when (p ,p ) =
1
(8,1/ e ), (8,8).
2
But, non-composited designs give better estimates of
Based on the above it seems that if ~ and a~ are of principal
interest then a design with some within and across class compositing
should be used.
However, if one is interested in estimating a~/r then
such designs should not be used.
If efficient designs are sought for
e.
Table (5.6.1)
In
0
.-4
~ ~ Sitijnij
~
)
50
Relative Efficiencies For the Comparison of Designs 1, 4, 8 and 10,
And Designs 6 and 19
Estimation Procedure
1
3
2
Design
"2
"2
2
"2
"2
2
"
"
"
8
8
No.
P
lJ
II
II
II"
P
Or
°c/r
°c/r Or
r
c/r
1
2
1
1/ 8 1/ 8
1/ 8
1
8
8
50
4
8
10
1
1/ 8
8
1
1/ 8
8
1/ 8 1/ 8
1/ 8 8
1
8
8
e
8
1/ 8 1/ 8
1/
8
8
1
8
8
100
1/ 8
1/ 8 1/ 8
1/
8 8
1
8
8
100
8
1
1
1/ 8
8
4
"2
°c/r
"2
Or
1.100
.772
1.023
1.803
.988
.496
.496
.602
.717
.717
1.092
.545
1.103
4.669
1.034
.760
1. 913
1.587
3.171
2.485
.305
.305
.452
.822
.822
.300
.736
.830
4.076
1.443
1.013
1.913
1. 214
2.364
1.257
.396
.396
.537
.785
.785
.800
.603
1. 099
5.103
1.173
.608
2.213
1. 493
2.955
2.965
.246 .182
.246 .746
.380 .623
.806 3.217
.806 1.466
1.202
1.438
1.597
2.407
1.742
.268
.268
.400
.800
.800
.993
.884
1.296
3.270
1.457
.779
2.933
1. 728
2.167
2.939
.268
.268
• 400
.800
.800
.721
.952
1.151
2.707
1. 486
.981
1.048
1. 228
1.990
1. 300
.257
.257
.388
.804
• 804
1. 033
.799
1.254
3.524
1.370
1.107
2.291
1.974
2.766
2.620
.257 .845
.257 .959
.388 1.267
.804 3.184
.804 1.570
1.504
.769
1.117
3.514
1.097
.096 1.203
.096
.566
.172 1.362
.562 16.50
.562 1. 329
.852
1.618
1.617
4.482
2.299
.440
.069
.760
.069
.131 1.090
.538 10.68
.538 1. 777
1. 040
.957
1.289
3.939
1. 362
.878
.087
.087
.631
.157 1. 318
.551 14.52
.551 1.478
.668
2.007
1. 555
3.927
2.840
.059 .257
.059 .796
.114 .684
.522 7.721
.522 1.857
1.220
1.341
1.684
4.483
1.867
.011
.671
.011
.939
.023 1.534
.205 11. 71
.205 2.181
1.140
1. 236
1.583
4.905
1.770
.009
.660
.009
.913
.019 1. 521
.185 13.80
.185 2.178
1.020
1.038
1. 376
4.943
1.516
.881 .876 .008 .814
.010
.842 .909 .008 .781
.010
.021 1.615 1.206 .017 1.502
.196 18.91 4.643 .1711~95
.196 2.064 1.341 .171 1.937
e
•
e
e
•
e
e
107
the joint estimation of ~, a~/r' and
a:
then designs with some within
and across class compositing might be optimal.
5.7
Two Models Studied by Kussmaul and Anderson
Kussmaul and Anderson (18) studied the following two models.
r:
(5.7.1)
m=l
where,
k
=1
i = 1, ••• • ,r
j = 1, ••• . Ie.
1
N
.e
= Lc.1
o
1
r
Ci
LL
Yijk -
(5.7.2)
t=l m-l
where,
i
= 1,2
j
= 1,. •• .,N/2
k = 1
The eijk terms are shown in parentheses since Kussmaul and Anderson
assumed that measurements are made without error, i. e., a~ ...
o.
Models (5.7.1) and (5.7.2) are special cases of model (5.2.1),
the general model for a 3-stage nested design in which the experimental
units are composited.
Model (5.7.1) allows for within class, but not
across class, compositing.
ing is used.
In model (5.7.2) only across class composit-
108
Cameron (4) considered model (5.7.2) and presented a method for
obtaining estimates of ~,
0;,. and O~/r'
and their variances.
Kussmaul
and Anderson (18) made an extensive study of model (5.7.1), in which
three different estimation procedures, i.e., ways of weighting the observations, were used to estimate O~, and one procedure was used to estimate O~/r.
~ and
Two of the Kussmaul and Anderson procedures to estimate
0: are equivalent to procedures 1 and 4given in section 5.3;
the
third procedure, not considered in this study, computes the class sums
of squares using unweighted class means.
Their estimate of O~/r is de-
rived using a procedure equivalent to procedure 4.
The estimates of ~, O~/r' and O~, and their variances in model
(5.7.1) can be obtained from the appropriate formulae in section 5.3 by
incorporating the following changes:
1) si = 1, i = l, •••• ,r
2) n··
1)
= 1,
i
= l, •••• ,r
and j =l, .... ,c.
1
3) 0 e2 = 0
5.8
An Evaluation of Some of Kussmaul's and Anderson's Results
In their study of the 3-stage nested design with within class
compositing, Kussmaul and Anderson (18) assumed O~
ments are made without error.
rarely, if ever, assume that
= 0,
i.e., measure-
In the practical situation one can
0; = O.
For the results of Kussmaul and
Anderson to be of some practical value it is necessary to determine how
small O~ must be relative to O~/r and O~ so that it has a negligible
effect on
""2
~,
"2
0c/r' Or and their variances.
If it is assumed that O~
=0
e.
109
when in actuality 0; > 0, then the expected bias in estimating a~/r'
E[B(O~/r)]' is,
(5.8.1)
and the expected bias in estimating a~, E[B(a~)], is
2
E[B(Or)]'"
-O~
Wll
~Wilt
-
wI 3w2 It )
~;"..-;;;,..:..
w23
-
m~.)
1.)
.e
(5.8.2)
The expected bias in estimating ~ is zero even if O~ > 0 when it is
assumed to be zero.
The size of the biases given by (5.8.1) and (5.8.2) will depend
on the particular design, estimation procedure, and on the relative
sizes of the variance components.
A numerical evaluation of the biases was made using 5 basic
designs.
These designs are similar to some of the designs evaluated by
Kussmaul and Anderson (18) in that si • nij
j
= l, •••• ,ci.
=1
for i
= l, •••• ,r
and
Each of the basic designs was nearly balanced in the
•
110
t ij , but not in the ci' the number of sub-classes within the i th class.
From each basic design an additional one or two designs were generated
that were decidedly unbalanced in the t ij •
In these additional designs
..
,
the ci and
~~tij
1.
design.
were the same as those in the corresponding basic
J
The basic as well as the additional designs are enumerated in
Table (5.8.1).
For each design in Table (5.8.1),
E [B (O~) ]
x 100 and
x 100, as well as the mean square errors (MSE's) for~, O~/r'
02
r
and 0: were evaluated for the following values of PI
= O;;O~/r
and P2
=
(O~/O~/r) : (4,0), (4.0.01),(4,0.1),(4,1),(1/.. ,0),(1/.. ,0.01),(1/.. ,0.1),
and(l/.. ,l).
e.
The mean square error for 8 is,
MSE(8) = var(a) + (E[B(a)])2
The expected biases in a expressed as a percentage of a, where a
= ~,
O~/r or O~, as well as the relative efficiencies (R.E.'s) for~, O~/r'
and O~ are presented in Table (5.8.2).
"
R.E. (8)
"
min(var(8»
= the
=
"
MSE(8)
min (var (e)}
where
" for all designs within a basic design
smallest var(8)
and for given balues of PI and P2 •
Note that when nij
= 1,
as is the
case for all designs in Table (5.8.1), then estimation procedures 1 and
2, and estimation procedures 2 and 4 are equivalent.
From the results in Table (5.8.2) the following conclusions are
obtained:
Table (5.8.1)
r-4
r-4
r-4
Designs to Evaluate the Biases in Estimating O~/r and
0
Des!9.n No.
1a
2
r
when 0e2 > 0 and s.
1
r
Ci
15
6(2)*,9(1)
tij
1b
3,11(2) ,3(3) ,2(2) ,4(1)
2(5) ,2(4) ,2(3) ,3(2) ,12(1)
2a
10
3(3),3(2),4(1)
5,6,3 (4),7,3(5) ,4,5,6,3 (5) ,4,6,7,3
2b
15,15,16,3(8) ,3(5) ,10(1)
2c
9(1) ,3,3,2,3,3,4,4(17)
3a
8
4,2(3) ,2(2) ,3(1)
3b
4,3,5,6,2(3) ,5(4) ,5,2(3) ,5,2(4)
7 (1),2 (6) ,7,2 (6) ,5,8,9,2 (6)
4a
5
5,2(3),2(2)
4(8) ,9,2(8) ,7,2(6) ,5,8,9,2(6)
4b
2(19),1,11,1,21,1,1,15,1,1,16,1,11,1
4c
12,7(13) ,7(1)
5a
3
5,2,1
6,4,5,6,4,5,4,6
5b
2(11) ,3(1) ,8,1,6
5c
5 (1) ,2 (12) ,11
*
e
= n 1J
.. = 1
6(2) indicates 2,2,2,2,2,2, etc.
1'-
e
.
-
•
e
e
~
e
Table (5.8.2)
N
.....
.....
Percent Bias in Estimating a 2
and a: when s. = n, , = 1 and
c1r
1
1J
a~
> 0, and the Relative Efficiencies of
Two Estimation Procedures
Design
No.
P
--
p
Estimation Procedures
2
1
1
2
E[B(a~/r)]X100 E[B(a;)]x100 RE(a)RE(8~/r)RE(8;) E[B(a~/r)]X100 E[B(a;)]x100 RE(a)RE(8~/r)RE(8:)
2
a r2
O.
O.
O.
ac / r
1a
4
0
.01
.1
1.
1/4
O.
.01
.1
1.
1b
4
0
.01
.1
1.
11
2a
4
4
O.
.01
.1
1.
O.
.01
.1
1.
-2.1
-20.6
-206.
O.
-2.1
-20.6
-206.
O.
-2.0
-19.8
-198.
O.
-2.0
-19.8
-198.
O.
-4.9
-48.7
-487.
.4
4.1
O.
.6
6.4
64.
O.
.1
1.1
10.8
O.
1.7
17.3
173.
O.
O.
- .1
- .5
2
a r2
O.
O.
O.
ac/ r
1.000
.999
.986
.873
.971
.961
.879
.474
.996
.995
.630
.045
.996
.995
.630
.045
1.000
.996
.963
.670
.756
.733
.570
.112
.978
.977
.965
.856
.836
.829
.767
.440
.757
.734
.534
.048
.757
.734
.534
.048
.928
.924
.889
.571
• 400
.390
.313
• 063
1.000
.999
.989
.902
.987
.890
.304
.007
1.000
.998
.977
.793
-2.1
-20.7
-207.
O.
-2.1
-20.7
-207.
O.
-2.7
-27.2
-272.
O.
-2.7
-27.2
-272.
O.
-4.9
-49.3
-493.
.1
1.0
O.
2.
1.6
16.
O.
.1
1.1
10.6
O•
1.7
17.0
170 •
O.
O.
O.
- .3
.924 1.000
.930 .959
.917 .631
.810 .045
1.000 1.000
.989 .959
.895 .631
.459 .045
.950
.947
.917
.662
1.000
.970
.747
.146
.579 1. 000
.578 .946
.572 .537
.515 .026
.769 1. 000
.759 .946
.684 .537
.346 .026
.629
.627
.610
.433
.862
.832
.608
.079
.990 1. 000
.989 .899
.979 .301
.891 .007
.989
.986
.965
.781
Table (5.8.2)
M
r-l
r-l
DeSign
No.
p
Estimation Procedures
p
2
1
1
2
E[B(Oc'r)]x100 E[B(0;)]x100 RE(a)RE(8~/r)RE(8:) E[B(0~/r)x100 E[B(0~)~100 RE(O)RE(B~/r)RE(8;)
2
°e7r
l/It
O.
.01
.1
1.
2b
4
O.
.01
.1
.1
1 lit
O.
•OJ.
.1
1
2c
4
o.
.01
.1
1.
1/ It
O.
.01
.1
1.
3a
4
O.
.01
.1
1.
e
O.
- 4.9
- 48.7
- 489.
O.
- 2.4
- 23.8
- 238.
O.
- 2.4
- 23.8
- 238.
O.
- 1.3
-12.8
-128.
O.
- 1.3
- 12.8
- 128.
O.
- 3.8
- 37.8
- 378.
0 r2
O.
.1
.8
7.5
O.
.1
1.1
10.7
o.
1.7
17.2
172.
o.
- .1
- .6
- 6.4
O.
- 1.0
- 10.2
- 102.
O.
o.
o.
- .4
°c/r
0 r2
O.
O.
2
.995
.983
.883
.435
.961
.960
.951
.870
.669
.663
.615
• 358
.987
.890
.304
.007
.509
.496
.363
.026
.509
.996
.363
.026
.968
.936
.706
•119
.896
.894
.874
.667
.197
.193
.161
.035
.960
.959
.950
.870
•664
.658
.611
.357
.865
.846
.666
.079
.865
.846
.666
.079
.924
.922
.904
.740
.413
.407
.347
.091
1.000
.999
.991
.916
.967
.895
.388
.011
1.000
•998
.980
.820
•
e
- 4.9
- 49.3
- 493 •
O.
- 6.6
- 66.3
- 663.
O.
- 6.6
- 66.3
- 663.
O.
- 1.3
- 16.5
- 165.
O.
- 1.3
- 16.5
- 165.
O.
- 3.9
- 39.2
- 392.
- .1
- .5
- 5.1
0
.1
1.4
13.5
O.
2.2
21.6
216 •
o.
- .2
- 1.6
- 16.1
O•
- 2.6
- 25.8
- 258.
O.
O•
- .1
- .5
1.000 1.000
.985 .899
.881 .301
.425 .007
.372 1.000
.372 .862
369 .199
.343 .004
.442 1.000
.437 .8i2
.395 .199
.202 .004
1.000
.966
.726
.121
.374
.374
.368
.304
.487
.473
.350
.035
.835 1.000
.833 .967
.816 .672
.678 .050
.877 1.000
.851 .967
.678 .672
.224 .050
.834
.830
.797
.524
.899
.853
.524
.036
.972 1.000
.971 .920
.963 .380
.889 .011
.991
.989
.971
.808
e
•
e
•
e
e
Table (5.8.2)
"1
.-;
.-;
De-
sign
p
p
1
No.
2
1
2
E[B(O~/r)JX100 E[B(0~)JxI00 RE(O)RE(8~/r)RE(8~) E[B(0~/r)JxI00 E[B(cr~)JxI00 RE(O)RE(8~/r)RE(8~)
2
11..
°c/r
O.
.01
.1
1.
3b
O.
4
• 01
•1
1.
1/
4a
4
O.
.01
•1
1.
O.
4
• 01
•1
1.
1/
4b
4
4
O.
.01
.1
1.
O.
.01
.1
1.
o.
- 3.8
- 37.8
- 378.
O.
- 1.6
- 15.7
- 157.
O.
- 1.6
- 15.7
- 157.
O.
- 7.2
- 71.9
- 719.
O.
- 7.2
- 71.9
- 719.
O.
- 1.6
- 15.9
- 159.
0 r2
2
ac/ r
O.
- .1
- .7
- 6.5
O.
- .1
- .5
- 5.4
O.
- .9
- 8.7
- 86.6
O.
O.
O.
.3
O.
.1
.5
4.6
O.
O.
O.
- .3
O.
1.000
.989
.903
.480
.967
.895
.388
.011
.983
.957
.759
.159
.977
.976
.968
.896
.785
.778
.723
.424
.716
.697
.541
.056
.716
.697
.541
.056
.959
.957
.941
.790
.595
.585
.499
.135
1.000
.999
.933
.932
1.000
.990
.908
.497
.971
.830
.180
.003
.971
.830
.180
.003
1.000
.999
.984
.856
1.000
.978
.801
.197
- 7.4
- 73.9
- 739.
.0
- 7.4
- 73.9
- 739.
.966
.965
.959
.902
.787
.766
.575
.051
.930
.927
.915
.798
- 3.4
- 34.4
- 344.
- 3.9
- 39.2
- 392.
O•
0 r2
O.
- .1
- .8
- 7.9
O.
- 3.
- 30.4
- 304.
- .1
- .6
- 5.9
3•
- 30.4
- 304.
1.0
- 9.5
- 94.9
o.
o.
o.
O.
O.
O•
O.
.3
.0
.0
.4
4.4
O.
- .1
- 1.4
- 13.7
.991 1.000
.980 .920
.890 .380
.467 .011
1.000
.973
.764
.154
.954 1.000
.953 .938
.942 .460
.846 .016
.977 1.000
.963 .938
.847 .460
.385 .016
.984
.982
.957
.746
.994
.961
.713
.105
.937 1.000
.936 .847
.956 .174
.875 .003
.947 1.000
.938 .847
.863 .174
.478 .003
.965
.963
.950
.829
.984
.960
.788
.195
.937 1.000
.935 .926
.922 .374
.814 .009
.965
.963
.934
.693
Table (5.8.2)
In
~
~
Design
No.
P
p
1
Estimation Procedure
2
2
1
E[B(a~/r)]XI00 E[B(O~)]x100 RE(a)RE(8~/r)RE(8;) E[B(a~/r)]X100 E[B(O:)]x100 RE(a)RE(8~/r)RE(8:)
lilt
4c
4
II
Sa
4
II
5b
e
.
4
..
o.
o.
o.
.671
.667
.629
.400
.787
.766
.575
.051
.430
.421
.366
.126
- 3.4
- 34.4
- 344.
.973
.973
.967
.910
.497
.483
.361
.027
.931
.929
.916
.794
- 8.1
- 81.2
- 812.
.729
.724
.679
.420
.497
.483
.361
.027
• 343
.338
.296
.098
- 8.1
- 81.2
- 812.
- .1
- 1.3
- 12.8
.963
.962
.957
.903
.965
.958
.885
.515
.980
.890
.362 .
.011
.980
.890
.362
.011
.958
.957
.937
.789
.970
.942
.750
.164
O.
.0
- .4
- 4.1
• 938
.938
.935
.932
.763
.745
.581
.074
.910
.908
.892
.753
O.
• 01
•1
1.
O.
- 1.6
- 15.9
- 159.
O.
• 01
.1
1.
O.
- 2.2
- 22.4
- 224.
o.
o.
O.
.01
.1
1.
O.
- 2.2
- 22.4
- 224.
o.
O.
.01
•1
1.
O.
.01
.1
1.
O.
- 4.8
- 47.5
- 475.
O.
- 4.8
- 47.5
- 475.
O.
.01
.1
1.
O.
-1.6
- 16.1
- 161.
- .4
- 4.2
.4
4.2
.7
6.6
66.4
o.
.0
- .1
- 1.0
o.
..
e.
o.
o.
o.
- 4.9
- 48.6
- 486.
o.
- 4.9
- 48.6
- 486.
o.
- 3.4
- 34.0
- 340.
o.
- 2.2
- 21.9
- 219.
.947 1.000
.928 .926
.783 .374
.305 .009
.984
.938
.623
.057
O.
.1
.7
6.7
.489 1.000
.489 .830
.486 .145
.462 .002
.393
.393
.388
.344
O•
1.1
10.7
107.
.526 1.000
.521 .830
.484 .145
.280 .002
.447
.438
.363
.081
O.
- .1
- 1.0
O.
- .2
- 1.6
- 15.9
.973 1.000
.973 .905
.968 .357
.911 .011
.979 1.000
.972 .905
.896 .357
.511 .011
1.000
.998
.979
.812
1.000
.970
.763
.156
O•
- .1
- .9
- 8.8
.973 1.000
.972 .932
.963 .448
.873 .016
1.000
.997
.970
.746
o.
-
"
e
.
'e
e
Table (5.8.2)
\D
....
....
Design
0
1
No.
P
Estimation Procedure
2
1
2
E[B(0~/r)]xI00 E[B(0~)~100 RE(a)RE(6~/r)RE(6~) E[B(0~/r)]XI00 E[B(0~JxI00 RE(a)RE(6~/r)RE(6~)
2
°c/r
II
5c
..
4
O.
.01
.1
1.
O.
.01
.1
1.
1
I ..
O.
.01
.1
l.
0 r2
O•
- 1.6
- 16.1
- 161.
O.
- 1.2
- 12.2
- 122.
O.
- 1.2
- 12.2
- 122.
2
°c/r
O.
- .7
- 6.6
- 66.
-
O.
.1
.8
8.0
O.
- 1.3
- 12.9
- 129.
O.
.738
.734
.690
.442
.763
.745
.581
.074
.910
.908
.892
.753
.934
.932
.929
.880
.832
.815
.607
.112
.936
.934
.918
.772
3.2
- 32.
- 320.
.693
.687
.651
.426
.832
.815
.607
.112
.729
.713
.594
.147
- 3.2
- 32.
- 132.
- 3.4
- 34.0
- 340.
-
O.
o.
0 r2
o.
- 1.4
- 14.1
- 141.
O.
- .1
- .8
- 8.3
O.
- 1.3
- 13.2
- 132.
.979 1.000
.965 .932
.847 .448
.387 .016
1.000
.963
.697
.102
1.000 1.000
.999 .935
.984 .456
.876 .016
.953
.950
.923
.699
1.000 1.000
.986 .935
.841 .456
.348 .016
.985
.949
.675
.091
117
1) E[B(O~/r)] x 100/0~/r can be as large as 8.1 for O~ ~ 0.01
0 2/ •
C r
The percent bias in O~/r depends on the design and estimation
procedure~
it is minimized for an unbalanced choice of the tij
and is usually smaller for procedure
2) For procedure 3, MSE (a~/r) depends on
~.
~~t
..
. . 1J
1
dividual t
ij
when O~
= O.
and not the in-
J
There is little loss in efficiency in
estimating O~/r if O~ ~ 0.01 O~/r.
3) E[B(O~)] x 100/0~ is usually smaller for procedure 1 than for
procedure 2, and is minimized when the designs are nearly
•
balanced 1n
the t ij •
·e
~or
E[B(O;)] x 100/0; < 3\.
0e2
~
0.01 Or2 and PI = 4,
I
/~,
If the design is nearly balanced in the
t ij then E[B(O;)] x 100/0~~ 7\ if O~ ~ 0.10:, and PI = 4, I/~.
4) For PI
=4
or I/~ and O~ ~ 0.01 O~/r there is little loss in
efficiency in estimating 0:.
For some designs even if O~ •
25-40\ O~ there is less than a 20\ drop in R.E.(a:).
5) var(p) is insensitive to the size of P2
P2
~
1, or PI • 0.25 and P2
~
0.1.
= O~/O~/r'
if PI
=4
and
CHAPTER VI
SAMPLING VARIANCES OF THE VARIANCE COMPONENTS IN
THE r + 1 STAGE NESTED DESIGN WITH
COMPOSITED SAMPLES
6.1
Introduction
In this chapter a procedure for obtaining numerical estimates of
the variance components and their variances is derived for the r + 1
stage nested design in which compositing can occur in any of the first
r stages.
Specific formulae for the a~ and var(a~), k
= l, .... ,r
+ 1,
are not given, but can be derived for any particular nested design.
6.2
The Model
For the following model it is assumed that there is no compositing
in the first p-l stages, and that compositing first occurs in the p-th.
stage.
It is also assumed that if some of the p-th.stages have been
composited, then prior to compositing there are the same number of p + k
stages within each of the p + k - 1 stages, k = l, ••.• ,r - p.
The model
is:
y. .
.
J. J. ••••• J.
1 2
r+l
+ ••.•. + ex
=
s·J., •••• J..
1
+
s,
1, •••• 1
p
exp,J.•
'
p
j =1
P
•
P-l , J. P - 1 (i 1 ••• i _ 2 )
p
P ('J.,
•
•••• J.. _ 1) ) p
p
+
119
s·1
1
..
1 .......p+l
L
jp+I=l
+ •••• +
....1
51'
;
I .......p
51'
_
;
I .••• "'P+
I •••• 5 1. 1 •
• ••
{l r,1.
ir
r
(6.2.1)
where,
i
·e
.2
ik
= 1, ..•• ,
i
r
= 1, •..• ,a1· I···· i r-I
1
= l, •.•• ,n..
1 1.
i +
r
= 1, •••• ail
a··
.
1,12 •••• 1 k_1
1 2
If P = 1 and r
=2
•••• J..
r
then model (6.2.1) is equivalent to (5.2.1), the
model for the 3-stage nested design with across and within class compositinge
In model (6.2.1) it is assumed that the {lk effects are indepen2
dently and normally distributed with zero means and variances, ok'
k = 1,
••.• r+l.
zero.
It is also assumed that all covariances
among~he
effects are
120
6.3
A Method for Estimating the Varinace Components and their Variances
The method for estimating the variance components and their
variances in the r+l stage nested design is similar to the one used in
Chapter II to estimate the variance components and their variances in the
two-way crossed classification with composited samples.
The analysis of variance table for the r+l stage nested design
is given in Table (6.3.1).
Before presenting this table it will be he1p-
ful to introduce some notational changes.
2)
The summation over subscripts ik,through i k , k'<k, i.e. ,
is designated by
L:
k" ,k
Table (6.3.1)
Analysis of Variance Table for the r+1 stage Nested Design
Expected Sums
of Squares
Sums of Squares
d. f.
Source
r+l
SSl = Tl-T o
a -1
1st stages
1
L:
k=l
r+l
I:
2nd stages
k=2
pth stages
L
a 1 iP-l-
l,p-l
r+1 th stages
If -
L
l,r- 1
L
l,p-2
w
2k
r+l
al
iP-2
SSp=Tp-Tp _ l
L
k=p
(N -
L..J
"'"
1,
r-l
2
a 1 ; r- 1 )0 r+ 1
121
In Table (6.3.1), N·
~ a lir •
Also, the Tk' k • 0,1, •••• , r
1,r
are defined by
Tk =
=
~ (J;+,
f',r Y"r+}
k Gf"r n"r Y,,~
2
k • l, •••• ,r
k •
where,
·e
b
k lir
= k f lir
n
lir
(6.3.1)
a
, k = O,l, •••• ,r.
The error sums of squares, SS + ' are given by
r 1
SS
r+l
=
~
l,r+l
y2
1 ir+l
1
~
l,r
n
lir
(r;, Y"r+)
The y 1 ir+l are .the actual observations, and the y a r e the r th stage
lir
k = O,l, •••• ,r are sets of weights that depend only
means. The kflor'
,
on the way in which the observations or k-th stage means are weighted.
These weights are subject to .ome restrictions which will be indicated
later in this section.
Let
! be the vector of the Yl.r
, ordered by 1st stages, then 2nd
stages within 1st. stages, ••••• , and then r-l tho stages within r-th.
stages.
Let~,
k - O,l, •••• ,r be a vector whose elements are
122
f
k 1;r
n
1;r
and are ordered in the same way as those of Y.
To obtain estimates of the variance components and their variance one has to compute E(T k ) and cov(Tk,Tk '), k, k'= O,l, •••• ,r.
In com-
puting these quantities it is assumed that the r+l stage nested design is
balanced in the first r stages, i.e.,
i.p
= l, •••• ,ap
= l, .... ,n t ;r
If any element of ! is missing then the corresponding element of
k~
is
set equal to zero.
Let Tk =
Each of the Tk can be expressed as quadratic forms in Y.
r'Qk!' and var(Y)
E(T k )
= V.
= tr(QkV)
cov(Tk,Tk ')
The elements of
Then,
~,
k
+ ~!'Qk~
= 2tr(~V~'V)
= O,l, •••• ,r
+ 4~!'QkVQk'~
(6.3.2)
depend on the particular estimation
procedure used, and are subject to the following two restrictions.
1)
~
is a symmetric matrix
2) For any estimation procedure ~!' Qk!~ and 4~!'QkVQk' ~
constants that are independent of k, for k
The matrix Q
k
tion k b, where b.
k 1,r
(k
~
are
= O,l, •••• ,r.
1) is obtained in the following way.
= kf l,r
. n l,r
. ,
into
.
~
J=
1
a J, vectors of size
Parti-
~ ~k
.
= +1
av, xl.
123
k
If
~ a· is designated by l,k then,
.J=1 J
~- = (JJ!-; ,~, ...., ~; ,k )
l
and
Q~ = D"k k~j ~),
the
j~laj
cov(Tk,Tk ") =
If k
Le., Qk is a diagonal matrix consistinq of
diagonal matricies,
E(Tk) = tr€l,k
= 0,
,
'"
k~j k~j·
~j k~)~
2tr€l,k(k~j
Then,
+ ll!."Qk.!}J
kbj)V DI,Jt{k"ej k ..
then o~ is not partitioned and Qo
=
~j)0
(6.3.3)
+
o~b".
Estimates of the variance components are obtained by equating the
·e
SSk to E(SSk) with
aj
replaced by
8j
for j
resulting system of equations for the
8j.
= l, •••• ,r,
and solving the
Let
.
~+l = (wI r+l' w2r+I'·····'wrr+l)
W=
H
-1
1
o
-1
0
o
1 0 ••• 0
=
o ••••• 0 -1 1 0
o ...•
Then,
0 0 -1 1
124
Ht
or
s
= Ws
+ 6~+1 !C+l
= W- 1 (Ht
r+ 1 w
.::.r+ 1 )
b2
-
U
var(s) = W-1 (H var (
t) "
H + var (b2
U r +1 )
and
(6.3.4)
.... 1
~+1 ~+l)
W-
(6.3.5)
where,
In order to compute
var(~),
given by (6.3.5), var(t),
~+1'
and
Whave to be determined.
Determination of
6.4
Var(~,
The elements of W and
O,l, ••.• ,r, given by (6.3.3).
W, and
~+l
~+l
are obtained by evaluating E(Tk ), k =
Before this can be done it is necessary
to determine V, the variance-covariance matrix of Y.
Using the notation
of section 2.4, V is given by,
V = Da (Vi i ), where
1
Vi i
1 1
= o~
1
1
J 2 ,r + o~ D2 ,z(J s ,r) + • • • • + o~ D2 ,i(Ji+l,r) +
o~ D, 'P(~1 :p Jp+,.r) +O~, D, .p+l (S"p ~I ;pH Jp+' .r)
+ ••••• +
0: DZ,r
(6.4.1)
A matrix of the form D2 P+l{
1
,
\Sl;pSl;P+l
consisting of p; 1a , matrices of size
J
j-2
J +
pr,r
(~
j=p+z
a\
) is a diagonal matrix
~
x (
:
a\
~=p+z /
on the
125
diagonal.
The elements of each sub-matrix on the diagonal are
If V, given by (6.4.1), is substituted into E(Tk) and E(Tk _ l
)
2
defined by (6.3.1), expressions for the Wkj' the coefficients of the OJ
in the expected sums of squares for the k-th. stage are obtained.
Assuming P > k, then
•
+
IL
~+l,r
Then,
k-l b 1 : ) 2 + ••••• +
/
a: L
l,r
1
S
I
:P
••••• S
Ii
r
0
2
P
""
L.-J
l,p
1
SliP
•
126
(6.4.2)
2
Substituting the Wkj into (6.3.4) estimates of the OJ are obtained.
for
If these estimates are substituted into V, a numerical value
2tr(QkvQk~V)
is determined.
values for the elements of
Doing this for all k and k', numerical
var(~)
are derived.
Var(s) is then obtained
by sUbstituting W, var(~), !r+l, and var(6;+1) into (6.3.5) and performing the indicated matrix calculations.
For any given nested design explicit expressions ·for the wkj
and the
c~(Tk,Tk')
can be obtained using the methods presented in this
section.
The model for a 4-stage nested desiqn with no compositing is:
a 3k Cij) + a~1(ijk) , where
127
i
l, .... ,a
0::
l
j • l, ••.• ,ai
k ". l, •••• ,a ..
1)
R.
1, .... ,n
0::
ijk
If Henderson I s method 1 is used to estilDllt. the variance components, then
the weights used to determine the expected sums of squares for second
stages are:
lfijk
0::
1/(ni •• '/2 and, 2fijk = 1/(n .J/2, where
ij
From (6.4.2),
E(SS2) ... E(T 2 )-E(T I
=
)
a: ~[~ijk!nJY{~nijk/n~]
+
1
{t~(pijk/n~ y-~Wijk/n~)
a~a
i
i
-a)
+
+
(6.4.3)
I
The result given by (6.4.3) agrees with the results obtained by Gaylor
and Hartwell (7) and Mahamunulu (20).
In a similar way the other ex-
pected sums of squares can be determined.
128
For any given nested design explicit expressions for the coefficients of the aja~~ terms in COV(Tk,Tk~) can be derived using the
methods of this section.
This is illustrated by determining the coef-
ficient of a~ in var(T2 ) of a 4-stage nested design in which there is
compositing across the second stages.
The model is,
+ ClltR. (ijk) , where
i
= l, •••• ,a l
j
= I, .... , ai
k
= l, ... .,aij
R.
= l, •••• ,nijk
For a 4-stage nested design which is balanced in the first 3
stages the ranges of the subscripts i, j, and k would be
i
= 1, .... ,a 1
j
= I, .... , a2
k
= l, .... ,a s
L~t the vector of weights, 2~~' be
"" (2b~, b~, ••••• , b: a ), then
-1 2--2
The coefficient of
2
2tr(Q2V) , and
.
1S,
a;
2---1 2
in var(T 2 ) is given by the coefficient of
a;
in
129
b'"
2-1
....L
b'"
b
2-1 2--2
0
sll
b'"
2~la2 2~la2
a,
1
9'12 J a
3
tr
0
J
0
0
1
sa a J a 3
1 2
,
In a similar way expressions for the other terms in var(T 2 ) can
be obtained.
CHAPTER VII
THE 2-STAGE NESTED DESIGN WITH COMPOSITED SAMPLES
7.1
Introduction
Using some of the results presented in Chapter II, estimates of
the mean and variance components, and the variances of these estimates
are obtained for the two-stage nested design in which experimental units
from the first stages are composited.
Formulae for the variances of the
mean and variance components are derived, and numerical and analytical
comparisons of these variances are made for four different estimation
procedures.
7.2
The Model
The model underlying the Chapter
I
example of a 2-stage nested
design (one-way classification) in which experimental units from the
first stage are composited is given by,
si
Yik =
where,
~
+
s~1 1=1
~ ri~
+ e ik
(7.2.1)
i = l, •••• ,r
= l, •••. ,ni
k
and,
s
i
= the
number of experimental units composited into each class
(first stage) of model (7.2.1)
,
e·
131
For model (7.2.1) the following assumptions are made,
1) The only fixed effect is the mean,
~.
2) The class effects, ril' and the measurement error effects, eik'
are random, and are independently distributed with zero means
and variances
r and
(12
e respectively.
(12
3) All covariances among the effects are zero.
A total of N measurements are made on S experimental units, where
N •
7.3
and
Estimates of the Variance Components and their Variances
Estimates of the variance components are obtained by equating
the class and error sums of squares to their expected values and solving
the resulting equations.
The class and error sums of squares are de-
fined in a way similar to the row and error sums of squares for the two-
(7.3.1)
The f i and m are sets of weights that depend only on the way in
i
which the observations, yik' or the class means, yi =
ntL
Yik' are
k
weighted.
The weights are subject to restrictions similar to those
given in section 2.4 for the two-way crossed classification.
In Table
132
(7.3.1) values of the f i and mi are given for four estimation procedures.
These estimation procedures are similar to those used in Chapters II and
V.
p
Table (7.3.1)
Values of f i and mi for Four Estimation Procedures
f.
1.
Procedure
mi
l/N Y2
2
1/nr
1.
1
Sit;Sin)Y.
s./(s.ni~
1.
1.
2
3
l/ni
4
si/s.1.~ n.1.
l/nir
si/S
X2
Y,
2
ni
The sums of squares corresponding to any of the four estimation
procedures are obtained by substituting
and m. into (7.3.1).
1.
the appropriate values of f.
1.
The analysis of variance table for the 2-stage
nested design using sums of squares defined by (7.3.1) is given in
Table (7.3.2).
Table (7.3.2)
Analysis of Variance Table for the 2-Stage Nested Design
Source
d. f.
Sums of squares
Classes
r-l
55 l
= T 1 - T..
Error
N-r
55 ..
= T5-
T
6
Expected Sums of Squares
2
wllO r
+
2
wl"O e
(N-r)02
e
2
.
f ij = f i' m
d 0 c2 = 0 rc
By sett1.ng
ij = mi' n ij = n i , an
(2.4.9) and (2.4.31), E(T
l
)
and E(T.. ) are then obtained as,
= O'1.n
e.
133
E(T It )
Then,
(7.3.2)
By making the above changes in (2.4.13), (2.4.32), and (2.4.36),
var(T 1 ), var(T It ), and cov(T 1 ,T It ) are given by,
var (T 1)
2
•
L
(O;flnl/si +
i
var(T It ) = 2 (0r2 ~
~
i
m~n~/s.
1
1
1
O~flni) 2
2 ~m~n.) 2
+ 0 e~
1 1
(7.3.3)
i
From Table (7.3.2) estimates of 0 2 and 0 2 are obtained as,
e
r
8~
= SSIt/(N
- r)
Since (N-r)8~/0~ has a chi-square distribution with N - r degrees of
freedom, and cov (8~ , ss 1)
•
0, the variances of 8~ and 8: are,
(7.3.4)
var(8 2 )
r
=
(var(T
1
)
+ var(T ) - 2cov(T ,T »/w 2
It
1
It
11
+ (w
lit
/w 11 )2 var (8e2 )
(7.3.5)
134
For each of the four estimation procedures defined in Table
(7.3.1), formulae for var(&;) are obtained by substituting var(T 1 ),
var(T~), cov(TI,T~), wII ' wI~' and var(&~) given by (7.3.3), (7.3.2),
and (7.3.4) into (7.3.5).
A subscript on var(e) denotes the estimation
+ 2 e(r-l) (N-l)!(N-r)
(7.3.6)
2
- 2
~sini!
Dini' + 40;0~(Esini + ESinl~s~1/~ Si n .\
~
~
~
~
~
~
\~ i.)
- 2
L(s.~ n.~ ) I LSin~Y+ 20: '~iL s i + f\'
sini! '\"si n~
~~
~ V
- 2
~~nJ~Sini +(~Sini ~Si- -¥inJ/(N-rl~SinJ))
2/
i
2
-
i
(7.3.7)
var,(O;1 (~- ~)~s~r Eo: ~ -~)~S~2 +r-2~sd2)
=
+ 40 2 0 2
r e
+
-~) L (Sini)-l + r- Ls~l L n
\$ r i i i ' " j))
((1
1
2a~ (~1 -~~iJ2/(N-r) +~- ~)~n~2 +r-Wi1)J)
(7.3.8)
135
varlt(~~>
(7.3.9>
If the si
=1
for i • I, •••• , r then var 1 (~~> • var2(~~>' and
they are both equal to var(8~) for the one-way classification derived by
Crump (5) and Searle (23).
7.4
.e
2
The Effects of Compositing on the Estimate of ar
The variances of 8;, given by (7.3.6) - (7.3.9>, for the four
estimation procedures are functions of the si.
If experiments are de-
signed to obtain efficient estimates of a~, then it is useful to know
what conditions - if any - should be placed on the si so that vark(~~)'
k
= 1,2,3,4,
is a minimum.
In this section it is shown that for certain
choices of the n , the number of measurements made within each class,
i
m~~(vark(8~» occurs for si • I, i • l, •••• ,r.
If the si
= I,
then designate vark(B~) by var~(8~).
2-stage nested design is balanced in the n i , then ni
= var lt (8;>.
For
If the
= N/r,
and var2(B~)
this situation it is shown below that m~J.1(var2 (B~»
1.
=
136
(7.4.1)
Subject to the constraint S
= ~Si'
~si/s
S -
i
s. = Sir, i = 1, •••• ,r.
1-
S
= ~Si' ~si
i
si
+
i
= Sir.
is minimized when
i
Crump (5) showed that, subject to the constraint
(~slls)'
- 2
~sl/s
is also minimized when
i
In this case
(7.4.2)
e.
The right-hand-side of (7.4.2) is minimized when S is a minimum, i.e.,
when S
= r,
or equivalently si = 1, i - 1, •••• ,r.
The coefficient of cr~
in (7.4.1) is independent of the si' and the coefficients of cr;cr~ and cr~
are then minimized when si
= 1.
Hence,
var (&~)
!.
When n
i
= N/r, i = 1, •••• ,r, varlta~) = var3(&~)' and varl(8~)
~~- Hf=s~) -. ~a~(~)'~SI ~~)' Jr'
-2~;f/r) 4a~~(~ ~- ~) ~s~ 2a:(r-l) IN_ll/(N-0
+
+
The coefficient of cr~, involving only terms in the si is
=
(7.4.3)
137
(7.4.4)
One form of Tchebychef's inequality is
(7.4.5)
In (7.4.5) let ai = llsi' then a lower bound on (7.4.4) is
this bound is attained when s.1
terms in the si is
l/L.-!. ,
i si
= Sir.
= 1.
The coefficient of a~a~ involving
and is a minimum when
Ls ~
i
is a maximum,
1
Similarly the a: term is a minimum
i.e., when si = 1, i = 1, •••• , r.
when the si
(1- ~); , and
Thus,
.e
min (var 1 (8~»
si
= var (8~)
1
If the design is not balanced in the n , i.e., if ni
i
min(var3(8~» = var3(8~).
si
-
~
N/r , then
Attempts to establish similar results for es-
timation procedures 1,2, and 4 have not been successful.
T~e coefficient of a~ in var3(8~) is minimized when
Ls~ /~s~)2
.
1
1
.
1
1
is a minimum.
Using (7.4.5) with a i = llsi' the coef-
ficient of a~ is a minimum when si
of
a:
is a minimum when
(~.~)_2
= Sir,
i • l, •••• ,r.
The coefficient
is a minimum. i.e. when the si
C
1.
The coefficient of a~a~ in (7.3.8) that must now be minimi~ed is
(i~r (~- ~) ~l/.i~+
2
r-
~l/Si
t
lIn i )
(7.4.6)
138
(7.4.7)
..Then,
)~ f i (si)
is equal to the expression given by (7.4.6).
Since
i
l/~Sj' ~
l/r,
J
(~- f)
f.J. (s·)
=
J.
r-
2
~l/nj~/~l/S'''L:s./s.
J.
.,
J
j
1.
J
J
~ ~ - ~l
f (si) is minimized when the si/Sj are maximized for j • 1, •••• , r.
i
For
-
Similarly, fk(sk) and hence
1, •••• ,r.
Thus,
~fi(Si)
is a
i
minimum for si
= 1,
i
and var3(8~) > var (8~) for any choice
= 1, •••• ,r,
1..
of the si and ni •
7.5
Estimating the Mean
An unbiased estimator of the mean,
=
y.
1.
u.1.
= ~Yik/n;,
k·
~u.yi'
•
l.
J.
~,
is
where
and the u. are a set of weights such
1.
correspondjng to each of the four estimation
are given in Table (7.5.1).
that~ui
= 1.
i
procedu~es
The
in section 7.3
e.
139
Table (7.5.1)
..
Values of ui for Four Estimation Procedures
Procedure
1
lIN
2
3
4
-
2
From (2.5.4), var(O) is obtained by setting ac2 • arc
u i ' nij .. n i , and by eliminating the summation over j.
For each of the four estimation procedures
var(~)
= 0,
u J.)
....
Then,
are given by
(7.5.1)
(7.5.2)
(7.5.3)
(7.5.4)
For estimation procedures 1, 2, and 3
max (vark (0»
si
the s i .. 1, i .. 1, ••.•• , r, for fixed r and n i •
occurs when
The coefficients of a~
in var 1 (a) and vars(a) are obviously maximized when the si .. 1, and the
coefficients of a~ in these variances do not depend on the si.
coefficient of a~ in var 3 (O) is a maximum when the si • 1.
The
The
140
this variance can be written as
LSinl
i
=
(7.5.5)
+
~ ~ s·n·s· ...n· ...
~~~
111
1
From (7.5.5) it is seen that each of the r terms in the numerator multiplied by si also appears in the denominator.
Since si > 1 the denomina-
tor will increase faster than the numerator when the si are increased.
Then, (7.5.5) will be maximized when s.1 = 1, i
above, maxCvarkCO»
si
occurs when the si
=1
= l, •••• ,r.
for k
= 1,2,3.
From the
If P
= a~/a~
then it can be shown that var~CO) ~ var4CO) , whenever p ~
, providing r
7.6
~S.
If r
= S,
then si = 1,
Comparisons of the Four Estimation Procedures
Comparisons among the four estimation procedures considered in
this chapter are made using methods similar to those used to compare the
mean and variaqce components in the two-way crossed classification and
the 3-stage nested design.
The estimation procedures are compared by constructing 34 unbalanced 2-stage nested designs that cover a wide range of ni and si configurations, and then evaluating var(o;), varCo~), and varCQ) for various
values of a; and a~.
The 34 designs used in the comparisons of the esti-
mation procedures are presented in Table (7.6.1).
These designs were
141
constructed under the following constraints,
-..,
1) r = 3,5,10,15
These constraints help simplify the problem of constructing unbalanced
2-stage nested designs.
For each design in Table (7.6.1), var(a)/a:, var(8~)/a:, and
var(&:)/a: were computed and evaluated for p
p
I
-~,~,1,4,8,
where
Comparisons among the estimation procedures were ~en made
= a;;a;.
using relative efficiencies (R.E.ls) defined in a way similar to those
in section 3.3.
minimum(var(~)/a:)
R.E. -
, where
var(y)/a:
y
= ll,a~,
or a~, and minimUDl(var(~)/a:) is the smallest value of
var(y)/a: for any of the designs and estimation procedures for a given
value of
~sini and
p.
In Table (7.6.2) the R.E.ls have been tabulated
i
for all designs, estimation procedures, and values of p.
Note that in
Table (7.6.2). for any design var(&~)/a: is the same for all estimation
procedures.
•
~
It
e
e
Table (7.6.1)
N
"2'
.-l
Designs Used to Compare Estimation Procedures for the 2~Stage Nested Design
With Composited Samples for t sini = 10,30,100.
i
Design
No.
I
2
3
4
5
6
7
8
9
10
~sini
i
10
30
11
12
13
14
15
16
17
100
N
--
S
--
r
si
ni
6
8
6
8
9
12
20
23
15
15
20
19
19
22
6
5
9
7
14
3
1,3,2
1,1,3
4(2)*,1
2,2,3(1)
4,4,3(2)
3L3),2,2
1,1,4,4,5
4 (1) ,8
3,3,2,2,6 (1)
10(2)
1,1,6(2) ,3,3
2,3,3,12 (1)
12(1),4,5,5
7(1),8(2)
11,11,10,2,2
1,1,30,31,31
2,3,4,5,6
3,1,2
4,3,1
4 (1) ,2
3(1),2,3
3,3,3(1)
3(2),3,3
9,8,3(1)
10,10,3(1)
3,3,2,7(1)
5(2),5(1)
6,6,8(1)
3.2,2,12(1)
3,2,2,12 (l)
7 (2) ,8 (1)
3(3),1,1
4,4,3(1)
2,3,5,5,7
11
11
22
5
5
13
15
12
16
20
20
20
26
23
36
94
20
10
15
5
* 4(2) indicates 2,2,2,2 etc.
M
qo
....
Design
No.
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
•
Ls.n.
1. 1.
i
100
l
Table (7.6.1)
..
N
22
36
50
16
16
20
20
30
32
39
60
25
31
33
40
45
65
-S
23
si
-r
3(5),4,4
3(3),1,1
5 (2)
15,10,5,4,2,5 (1)
2,3,4,4,11,11,4(12)
9,6,6,4,3,3,4'2)
15,15,4,4,3,4(2),1
5 (10) ,5 (2)
4(3),4,4,3,2,3,4
4,3,3,3{2) ,4(1)
5 (1) ,5 (5)
5 (7) ,15 (2)
6,6,3(4),3,3,2,12(2)
1,1,2,2,3(3),3(4),10(5)
3(3),17(2)
10 (2) ,5 (3) ,5 (2)
5 (2) ,15 (1)
5
11
10
41
83
39
50
60
32
20
30
65
56
77
43
45
25
10
20
...
--
ni
3 (4) ,5,5
11,11,10,2,2
20,15,3(5)
3,3,2,2,6(1)
4,3'2),6(1)
4,3,3,3(2) ,4(1)
10 (2)
5(1),5(5)
6(3),3(4),2
9,6,6,4,3,3,4(2)
5 (10) ,5 (2)
5(2),15(1)
3,3,6(2) ,12(1)
4,4,3,3,312),13(1)
8,8,4,4,16 (1)
15(2),5(3)
5(7),15(2)
'"
"
e
144
-
Table (7.6.2)
Relative Efficiencies of Four Estimation Procedures for a
2-Stage Nested Design with Composited Samples
DeSign~Sini
No.
10
o~
1/ 8 .60
1/ 1t
1
4
8
1
2
.
1
/8
1.000
/
4
8
1/ 8
1/
3
.200
lit
4
8
1/ 8
1/ 1t
4
.600
1
4
8
5
30
lie
1/1t
.222
1
4
8
6
1
1/8
/
1
4
8
7
.
1
1/8
.389
.833
/It
8
,
Estimation Procedures
i
P
1
'"
1
1
4
8
1/8
/It
1
4
8
1.000
1
II
.749
.748
.623
.454
.407
.884
.820
.576
.379
.332
.823
.881
.900
.778
.728
1.000
1.000
.834
.610
.547
.446
.478
.540
.592
.590
.568
.585
.576
.542
.515
.562
.439
.241
.160
.140
.589
.446
.235
.153
.134
O?
II
.346 .686
.405 .734
.531 .750
.459 .648
.427 .607
1.000 .791
1.000 .798
.801 .682
.494 .506
.428 .455
.101 .791
.135 .874
.336 1.000
.650 .982
.735 .958
.568 .950
.687 .992
1.000 .938
.907 .754
.845 .692
.035 .411
.060 .438
.144 .484
.226 .516
.246 .510
.116 .556
.184 .581
.310 .600
.294 .593
.272 .571
.681 .516
.742 .491
.404 .388
.174 .307
.135 .279
1.000 .431
1.000 .403
.484 .304
.203 .234
.156 .211
2
3
4
o~
o~
o~
II
II
.231 .673 .230 .460 .141
.285 .721 .284 .515 .178
.465 .736 .461 .623 .337
.501 .636 .497 .660 .462
.491 .596 .487 .663 .491
.560 .741 .515 .378 .312
.636 .763 .592 .424 .373
.743 .689 .722 .514 .551
.578 .534 .583 .548 .557
.525 .486 .535 .551 .540
.079 .791 .079 .709 .063
.108 .874 .108 .793 .086
.284 1.000 .284 .953 .233
.665 .982 .665 1.000 .611
.822 .958 .822 1.000 .810
.421 .890 .374 .706 .264
.527 .950 .472 .777 .339
.896 .957 .847 .873 .673
.969 .817 1.000 .839 .947
.940 .762 .997 .813 1.000
.037 .342 .029 .431 .033
.062 .372 .049 .464 .056
.141 .4.41 .121 .535 .138
.200 .516 .202 .601 .225
.209 .527 .226 .604 .284
.103 .556 .103 .510 .089
.165 .581 .165 .539 .144
.297 .600 .297 .579 .211
.305 .593 .305 .598 .299
.288 .571 .288 .583 .289
.181 .376 .088 .204 .019
.283 .396 .147 .229 .033
.387 .421 .285 .311 .089
.251 .429 .269 .463 .179
.206 .418 .239 .527 .217
.257 .361 .194 .114 .067
.379 .366 .297 .129 .112
.418 .341 .426 .184 • 235
.239 .306 .330 .304 .288
.193 .287 .288 .368 .285
"
e·
..
e
145
'.
-
Table (7.6.2)
Estimation Procedures
DeSignLsini
No.
i
P
9
1/8
1/..
O'~
.278
1
4
8
10
1/ 8
1/ ..
.278
1
4
8
11
1/ 8
1/ ..
.556
1
4
8
12
1/ 8
1/ ..
.222
1
4
8
T
.e
13
1/8
1/ It
.222
1
4
8
14
1/ 8 .389
1/ It
l:
4
8
15
1/ 8 .120
1/ It
1
4
8
16
1/ 8 .120
1/ It
1
4
8
17
1/ 8 .340
1/ It
1
4
8
e
1
2
3
4
2
0 :r;
o~
Jl
Jl
Jl
O'~
C1~
Jl
.719 .147 .599 .131 .589 .138 .688 .134
.749 .248 .623 .218 .615 .232 .721 .226
.763 .536 .634 .459 .632 .495 .753 .505
.743 .658 .616 .547 .621 .599 .752 .652
.713 .643 .591 .532 .597 .583 .728 .647
.730 .091 .730 .091 .661 .072 .661 .072
.769 .151 .769 .157 .708 .125 .708 .125
.818 .376 .818 .376 .800 .318 .800 .318
.834 .548 .834 .548 .876 .533 .876 .533
.811 .565 .811 .565 .874 .582 .874 .582
.729 .377 .740 .199 .594 .105 .479 .067
.637 .557 .729 .326 .636 .179 .524 .116
.418 .615 .625 .574 .714 .414 .642 .290
.299 .349 .524 .477 .777 .560 .788 .492
.265 .281 .482 .411 .773 .562 .820 .551
.906 .159 .724 .141 .807 .157 .865 .150
.940 .275 .758 .244 .839 .270 .907 .261
.946 .680 .789 .595 .849 .660 .945 .654
.909 .982 .786 .853 .822 .939 .942 .980
.869 .989 .759 .862 .787 .944 .910 1.000
.868 .192 .556 .084 .810 .145 .456 .061
.872 .329 .606 .149 .845 .252 .506 .108
.791 .745 .738 .415 .866 .631 .661 .317
.693 .869 .896 .758 .848 .912 .903 .704
.645 .809 .929 .819 .815 .912 .986 .848
1.000 .326 .939 .213 .939 .213 .741 .134
1.000 .537 .979 .364 .979 .364 .795 .231
.896 1. 000 1. 000 .818 1.000 .818 .904 .577
.778 .926 .977 1.000 .977 1.000 1.000 .902
.722 .826 .938 .958 .938 .958 1.000 .951
.220 .005 .194 .003 .164 .005 .213 .004
.267 .009 .237 .005 .197 .009 .259 .007
.381 .026 .346 .015 .268 .025 .376 .022
.484 .084 .457 .056 .309 .072 .490 .080
.483 .123 .466 .090 .296 .099 .496 .122
.167 .048 .073 .001 .138 .011 .065 <.001
.167 .073 .092 .001 .163 .019 .081 <.001
.129 .090 .155 .004 .205 .051 .139 .001
.087 .087 .302 .027 .212 .095 .278 .005
.071 .085 .411 .061 .193 .104 .388 .016
.401 .045 .370 .042 .334 .038 .399 .044
.451 .071 .415 .066 .376 .062 .449 .070
.478 .117 .438 .106 .405 .101 .478 .116
.407 .170 .371 .152 .350 .150 .409 .170
.350 .187 .319 .166 .302 .166 .352 .188
146
DeSign~Sini
No.
Estimation Procedures
i
P
Ie
1~
18
1
0
2
.340
4
8
19
100
II e
.620
1/4
1
4
8
20
lie
.900
1/ 4
1
4
8
21
II e
.120
1/ 4
1
4
8
22
II e
.120
1/ 4
1
4
8
23
lie
.200
1/4
1
4
8
24
lie
.200
1/4
1
4
8
25
lie
.400
1/4
1
4
8
26
lie
.440
1/4
1
4
8
27
e·
Table (7.6.2)
1 Ie
1/4
1
4
8
.580
1
l.l
.406
.460
.503
.441
.382
.525
.510
.376
.249
.201
.554
.476
.286
.173
.137
.311
.368
.473
.503
.465
.304
.353
.421
.406
.363
.390
.463
.602
.651
.606
.378
.437
.511
.482
.437
.494
.515
.444
.321
.265
.583
.653
.688
.583
.500
.650
.683
.600
.440
.363
0
2
.037
.060
.104
.160
.179
.237
.292
.238
.208
.203
.451
.457
.253
.183
.171
.024
.044
.114
.267
.328
.007
.014
.041
.121
.154
.018
.033
.089
.247
.332
.039
.068
.148
.268
.304
.108
.167
.226
.246
.247
.087
.141
.242
.352
.384
.263
.374
.428
.455
.461
2
l.l
.404
.459
.509
.453
.394
.493
.476
.346
.227
.183
.554
.476
.286
.173
.137
.189
.231
.344
.471
.489
.222
.275
.432
.671
.753
.314
.375
.496
.552
.519
.200
.246
.376
.554
.600
.326
.388
.509
.559
.523
.570
.646
.707
.621
.538
.550
.574
.496
.360
.297
.0 2
.036
.058
.103
.162
.182
.224
.261
.190
.158
.152
.451
.457
.253
.183
.171
.007
.012
.038
.144
.224
.002
.004
.013
.077
.156
.015
.027
.073
.201
.272
.009
.017
.054
.199
.299
.017
.031
.092
.239
.292
.077
.127
.233
.365
.407
.222
.312
.346
.362
.366
3
l.l
.404
.459
.509
.453
.394
.314
.322
.268
.189
.155
.505
.487
.356
.235
.189
.246
.285
.333
.313
.277
.260
.319
.477
.664
.696
.304
.361
.469
.507
.471
.378
.437
.511
.482
.437
.326
.388
.509
.559
.523
.568
.644
.705
.619
.537
.499
.527
.469
.347
.287
0
2
.036
.058
.103
.162
.182
.131
.175
.174
.172
.172
.306
.364
.273
.227
.219
.029
.051
.121
.245
.289
.003
.005
.019
.098
.175
.016
.029
.077
.212
.285
.039
.068
.148
.268
.304
.017
.031
.092
.239
.292
.077
.127
.232
.364
.405
.227
.321
.362
.384
.389
4
J..l
.392
.448
.504
.454
.397
.512
.504
.381
.256
.208
.505
.487
.356
.235
.189
.244
.297
.431
.560
.566
.168
.210
.342
.592
.719
.385
.458
.600
.656
.613
.200
.246
.376
.554
.600
.146
.182
.293
.487
.574
.530
.607
.688
.628
.550
.642
.678
.602
.445
.368
0
.'
2
.034
.057
.100
.161
.182
.206
.264
.237
.218
.214
.306
.364
.273
.227
.219
.013
.024
.073
.235
.329
.001
.002
.006
.040
.100
.017
.032
.087
.245
.333
.009
.017
.054
.199
.299
.002
.004
.016
.085
.172
.066
.110
.213
.359
.409
.254
.365
.425
.457
.465
,
e.
e
147
e
Table (7.6.2)
Estimation Procedures
DeSignLsini
No.
i
P
•
0 e2
~/8 1.000
28
I,.
1
4
8
1/
1 8
29
.100
I,.
1
4
8
1/ 8
1/,.
30
.220
1
4
8
31
,
.e
100
1/ 8
1/,.
.260
1
4
8
32
lie .400
1/,.
1
4
8
33
1/8
1/,.
.500
1
4
8
34
1/8
1/,.
1
4
8
.
.900
1
J1
0 r2
.608 1.000
.507 .930
.293 .433
.174 .291
.138 .269
.494 .011
.593 .021
.810 .074
.947 .341
.910 .567
.622 .028
.736 .052
.944 .155
1.000 .489
.924 .679
.581 .059
.634 .103
.618 .219
.498 .320
.412 .333
.693 .081
.748 .140
.705 .300
.544 .506
.455 .559
.823 .147
.925 .246
.986 .470
.843 .749
.725 .832
1.000 .732
1.000 1.000
.782 1.000
.535 .958
.435 .948
2
l.I
.554
.581
.509
.373
.308
.364
.442
.637
.818
.822
.483
.573
.744
.804
.747
.513
.615
.833
.960
.917
.645
.681
.606
.448
.371
.803
.911
.996
.875
.758
.870
.851
.637
.424
.343
0 r2
3
l.I
.268 .554
.402 .581
.430 .509
.375 .373
.358 .308
.008 .449
.015 .536
.053 .713
.260 .799
.462 .755
.026 .498
.048 .591
.139 .766
.428 .827
.593 .769
.019 .503
.036 .603
.113 .818
.392 .943
.546 .901
.093 .463
.158 .548
.299 .698
.434 .735
.463 .677
.135 .806
.228 .913
.452 1.000
.757 .878
.852 .761
.740 .799
.956 .832
.835 .713
.744 .514
.725 .424
0 r2
4
1.I
0 r2
.268 .280 .039
.402 .334 .068
.430 .444 .159
.375 .495 .327
.358 .466 .392
.012 .461 .010
.023 .557 .019
.078 .782 .066
.338 .961 .322
.542 .944 .556
.026 .572 .027
.047 .679 .049
.141 .882 .148
.450 .953 .484
.633 .887 .685
.019 .380 .009
.036 .465 .016
.114 .694 .055
.381 .958 .266
.520 1.000 .477
.040 .513 .043
.072 .603 .078
.194 .753 .209
.531 .770 .558
.709 .702 .735
.135 .768 .123
.229 .877 .209
.454 .986 .430
.762 .889 .756
.859 .777 .865'
.571 .963 .661
.807 .985 .925
.898 .810 .993
.931 .570 1.000
.937 .466 1.000
148
In spite of the large number of comparisons that can be made in
Table (7.6.2), some surprisingly uniform conclusions can be drawn.
conclusions are presented in Table (7.6.3).
estimation procedure for estimating ~ and
a;
These
This table gives the best
for p < 1, P • 1, and p > 1.
•
An estimation procedure is judged to be "best" if most of the time its
R.E. is larger than the R.E. of the other procedures, and in those instances where some other procedure has a larger R.E., the difference in
the two R.E. 's is "small".
Table (7.6.3)
The Best Estimation Procedures for the 2-Stage
Nested Design with Composited Samples
Best Estimation Procedure for Estimating:
p
<1
1
>1
1
1
3,4
1
4
3
The choice of a best estimation procedure did not depend on the
value of
~sini.
With respect to estimating a~ the results of Table
i
(7.6.3) agree with those of Crump (5), namely, to use procedure 1 if P
is small and procedure 3 if P is large.
7.7
A Comparison of some Designs
In this section some selected comparisons among the designs given
in Table (7.6.1) are made in order to evaluate the effects of compositing
on the estimates of ~ and cr~, and to gain some insight in the problem of
obtaining optimal designs to estimate these parameters.
•
149
By comparing the R.E.'s - given in Table (7.6.2) - for designs 15
and 16, 23 and 24, and 12 and 13, it is seen that for designs with fixed
rand N, decreasing S does not always result in an estimate of
0:
~ith a
smaller variance, and th.t increasing S does not always give an improved
estimate of
~
for the entire range of p values considered.
It would seem
that an important feature of any design would be the way in which the si
are combined with the ni' in addition to the values of r, N, and S.
Designs in which the large si are combined with the small n i , or
the small si are combined with the large ni' generally yield poor estimates, i.e., ones with large variances, of o~ especially when estimation
procedures 2 or 4 are used.
This can be seen by examining the R.E.'s
in Table (7.6.2) for designs 7, 16, 22, and 25 •
.e
In Table (7.7.1), 10 of the designs given in Table (7.6.1) are
In computing the R.E.'s in Table (7.7.1), var(a)/O~ or
compared.
var(8;ro: for a balanced design with N
i
= l, •••• ,r
= 100,
were used in the numerator.
r • 5 or 10, and si
= 1,
The R.E.'s were computed using
the best estimation procedure given in Table (7.6.3).
For estimating O~
procedure 1 was used for p < 1, and procedure 3 was used for p > 1.
estimating
~,
For
=1
procedure 1 was used for p < 1, procedures 3 or 4 for p
(
and procedure 4 for p > 1.
For a 2-stage nested design without compositing, Crump (5) has
shown that for fixed Nand r a balanced design, i.e., a design in which
ni = N/r, i
= 1, ••••
r, is optimal for estimating
Crump's optimal designs with r
= s,
10 and N
~
= 100,
2
and or'
Relative to
none of the designs
considered in Table (7.7.1) are very efficient for estimating a~ for
~ P ~ 8.
Except when p =
1
1
Ie
Ie, for most of the designs in Table (7.7.1)
]')0
Table (7.7.1)
e
Relative Efficiencies for the Comparison of some of
the Designs Given in Table (7.6.1)
Relative Efficiency For
Design No.
r
N
S
P
lJ
4
5
8
7
1/8
.23
.67
1.20
.37
1.26
2.26
.68
2.33
4.07
.38
.72
1.43
.93
1.63
1.94
.30
.69
1.42
.31
.79
1.60
.41
1.23
3.10
.63
1.69
2.84
.66
1.22
2.41
1
8
12
6
13
1/8
1
8
22
18
23
1/
1
8
8
23
12
1/
8
8
1
8
50
20
10
1/ 8
1
8
9
10
15
16
1/ 8
1
8
11
20
20
1/ 8
1
8
24
20
50
1/ 8
1
8
26
32
32
l/
1
8
28
60
30
S
1/ 8
1
8
(12
r
.02
.25
.76
.02
.23
.77
.02
.25
~79
.14
.36
.76
.24
.61
.94
.02
.19
.70
.04
.22
.67
.02
.18
.59
.03
.28
.78
.39
.51
.69
.,
-"
.
151
there can be a large gain in efficiency in
estimating~.
It should be
noted that for the designs in Table (7.7.1) the total sample size, N, is
always less than the total sample size for Crump's optimal design with
N
= 100,
and r = 5,10, and in this respect it is not surprising that the
R.E. 's for o~ in Table (7.7.1) are all less than one.
7.8
Some Remarks about Estimating ~ and o~
Some of the results and conclusions from the preceeding four sec-
tions are summarized below.
1) If estimation of the mean,
~,
is of principal interest then the
experimental units should be composited.
.e
2) The best procedure for estimating ~ and
0;,
of the four procedures
considered, depends on the relative sizes of the unknown variance
components o~ and o~.
3) If estimation of G~ is of principal interest then the experimental
units should not be composited.
4) With respect to estimating ~ and
0;,
an important feature of any
design is the way in which ni are combined with the si'
In section 7.6 the four estimation procedures defined in Table
(7.3.1) were compared, and in Table (7.6.3) it is seen that the best
estimation procedure depends on p • o~o~.
so that
l/ S ~
In this study p was chosen
P ~ 8, and the results of section 7.6 are applicable only
if P is in this interval.
The determination of any best estimation pro1
cedure for estimating
~
and 0 r2 when p < /e or p > 8 has not been given
152
any consideration in this paper except for the two limiting cases discussed below.
If P = 0, Le., when
a; .. 0,
then it can be shown that
(7.8.1)
and that var 3 (a) ~ var
..
(a)
or var
3
(a) ~
var
...
(0)
depending on the design,
i.e., the particular configuration of the si and ni.
From (7.5.1) -
(7.5.3) it is seen that establishing the inequality given by (7.8.1) is
equivalent to showing
(7.8.2)
since the si
~ 1, L ni si ~ N and
hence
i
minimized when ni = Nlr, i
= liN.
= l, ••• ,r,
and in this case
Thus, the inequality given by (7.8.2) is established.
It is
worth noting that in section 7.6, estimation procedure 1 was the recommended procedure if
1
Ie
~ p < 1.
However, if P is alose to zero it seems
that procedure 2 is better than procedure 1 for estimating
If P
= " CO"
i.e., when a~
= 0,
~.
then the following can be estab-
lished.
1) var .. (~) ~ var 3 (~)
....
2) var .. (p)
,.,
~ var2(~)
3) All other inequalities among the vark(~)' k
= 1,2,3,4
the particular configuration of the si and n i •
depend on
153
Showing that 1)
is true is equivalent to showing
This is equivalent to one of the inequalities in (7.8.2).
2)
Showing that
holds is equivalent to showing
l/S
~
4=sini
1.
~ s·~ L.J
~s.n~
(~Sini)2 ~ L.J,
i
i
or,
1.
(7.8.3)
1. 1.
(7.8.3) holds if and only if the following inequality is satisfied
(7.8. )
.e
The coefficient of
sisi~
on the left-hand-side of (7.8.3) is 2nini"
on the right-hand-side is n~ + nf~
and
But,
2nini~) = sisi~(ni- ni,)2 ~ 0, and then
ni~)2 = LLsi~sinl
i
~ i~
-
L LSiniSi~ni'?
i ~ i~
0
which implies that
Due to the complexity of the formulae for vark(8~), k = 1, •••• ,4
sets of inequalities on vark(6~) for the case
.,
been derived.
a; · 0 or a~ = 0 have not
However, it is conjectured that of the four estimation
procedures considered procedure 1 is best when a~ «
is best when a~ « a~.
a~,
but procedure 3
CHAPTER VII I
r
SUMMARY OF RESULTS AND SUGGESTIONS FOR FUTURE WORK
8.1
Summary of Results
The emphasis in this study has been on finding estimates of the
mean and variance components and their variances in some random effects
models in which the experimental units are composited prior to measurement.
For each of the models considered, four different procedures for
estimating the mean and variance components were given and then compared
e.
using numerical methods.
The basic model investigated in this study was the two-way
crossed classification in which the experimental units are composited
across columns and/or across rows.
The procedure used to
est~te
the
variance components is similar to Henderson's method 1 in which sums of
squares are calculated as if the design were balanced, and then are
equated to the expected sums of squares.
The resulting system of
equations is then solved to give estimates of the variance components.
The principal difference between Henderson's method 1 and the methods
used in this study is that the sums of squares in this study are calculated using weighted or unweighted observations or cell means.
Four sets of weights were defined that gave four different sets
of estimators of the mean and variance components if the design is
155
unbalanced.
If there is no compositing then two of the sets of weights
gave estimates of the variance components that correspond to those obtained using the method of unweighted measurements (Henderson's method
1), and to the method of unweighted means.
ject to some restrictions.
Each set of weights is sub-
The purpose of these restrictions is to
assure that estimates of the variance components and their variances are
not functions of the overall mean in the two-way crossed classification
model.
since simple expressions for of the variance components and
their variances cannot be obtained; it was not possible to analytically
compare the four estimation procedures to determine under what circum)
stances anyone of them can be considered as best, i.e., which procedure
gives estimates with the smallest variance.
The four estimation pro-
cedures were then compared using numerical methods.
A number of un-
balanced designs were constructed and for each design the variances of
the mean and variance components were evaluated for certain values of
the variance components.
From this part of the study the following con-
clusions were obtained.
1) Any best estimation procedure dePends on the relative sizes of
the variance components.
2) If no information is available on the relative sizes of the
variance components then the method of unweighted measurements
should not be used to estimate the mean and variance components.
The method of unweighted measurements is the "best" estimation
procedure - of the four considered - only if the row, column,
156
and interaction variance components are small relative to the
error variance component.
The problem of making exact tests of hypotheses on the significance of the variance components was investigated.
For the four esti-
mation procedures considered it was determined that the two-way crossed
classification should be balanced in all design parameters if the significance of the variance components is tested using exact F-tests defined by the ratio of two independent chi-squares divided by their respective degrees of freedom.
Using some of the results obtained on the two-way crossed
classification model, estimates of the mean and variance components and
(
their variances for the 3- and 2-stage nested designs were derived.
The
model for the 3-stage nested design allowed for across and within class
compositing where one or more measurements are made on the experimental
units after compositing.
In this respect the model for the 3-stage
nested design is a generalization of the one considered by Kussmaul and
Anderson (18).
They allowed for only one measurement on each group of
composited experimental units r and also assumed that measurements were
made without error.
Explicit formulae for the estimates of the mean,
variance components, and their variances were derived using sums of
squares computed from weighted or unweighted observations.
Four pro-
cedures for weighting the observations were suggested, but due to the
complexity of the resulting variance formulae comparisons among the
estimation procedures were made using numerical methods.
these comparisons showed that:
Results from
157
1) Any best estimation procedure depends on the relative sizes of
the variance components.
2) For all four estimation procedures, composited designs give
better estimates of the mean and within class variance component
than do non-composited designs.
The opposite is true for the
among classes variance component.
The within class compositing model of Kussmaul and Anderson was evaluated
without using their assumption that a~, the error component of variance,
Results from this study indicate that if it is assumed a~
is zero.
=0
when in fact a~ > 0, the biases in the estimates of the other two variance components can be quite large depending on the design and the
•
.e
relative sizes of the variance components •
The procedure for obtaining estimates of the variance components
and their variances in the 3-stage nested design was extended to the
r + 1 stage nested design in which compositing can occur in any of the
first r stages.
For any qiven r + 1 stage nested design a procedure for
obtaining explicit expressions for the expected mean squares and for obtaining the variances of the variance components was indicated using
sums of squares in which the observations are either weighted or unweighted.
For the 2-stage nested design it was shown that if the design is
balanced in the number of observations within each class and if one
wants efficient estimates of the mean then one should
~omposit
as many
experimental units as possible, but if efficient estimates of a~, the
among class variance component, are wanted then the experimental units
158
should not be composited.
Four estimation procedures, similar to those
used in the 3-stage nested design, were compared and evaluated.
Results
of these comparisons show that no single estimation procedure is best
for all values of P
= a~/a~,
and that if P is large it is preferable to
weight the observations in some way.
8.2
Suggestions for Future Work
Although the methods for obtaining estimates of the mean, vari-
ance components, and theor variances could be extended to the three-way
and higher order classifications, more general methods need to be found
so that estimates of the mean, variance components and their variances
can be determined in composited models that include fixed effects other
than the mean, and/or when the random effects in the model are not
,normally distributed.
To this end the
following are recommended.
1) Extend the estimation procedures proposed by Koch (17) and later
modified by Forthofer (6) to the situation where the experimental units are composited.
2) Investigate the use of maximum likelihood methods for estimating
the mean and variance components in composited models.
Some specific problems that have been left unsolved in this
study include,
1) Procedures for obtaining optimal designs for the joint estimation of the mean and variance components in the two-way crossed
classification and 2- and 3-stage nested designs.
r
159
2) Determination of the effects on the estimates of the mean and
variance components when the random effects are not normally
distributed.
3) Determine if ways, other than those given in this study, of
weighting the observations give more efficient estimates of
the mean and variance components.
,
LIST OF REFERENCES
"
1.
Blischke, W.R. 1966. Variances of Estimates ov Variance Components
in a Three-Way Classification. Biometrics, 22, 553-65.
2.
Blischke, W.R. 1968. Variances of Moment Estimators of Variance
Components in the Unbalanced r-Way Classification. Biometrics~
24, 527-40.
3.
Bush, N. and Anderson, R.L. 1963. A Comparison of Three Different
Procedures for Estimating Variance Components. Technometrics,~,
421-40.
4.
Cameron, J.M. 1951. The Use of Components of Variance in Preparing
Schedules for S.ampling of Baled Wool. Biometrics, 'i, 83-96.
5.
Crump, P.P. 1954. Optimal Designs to Estimate the Parameters of a
Variance Component Model. Unpublished Ph.D. thesis. North Carolina State University.
6.
Forthofer, R.N. 1970. An Extension of Koch's Method of Estimating
Components of Variance. Unpublished Ph.D. thesis. University of
North Carolina. Institute of Statistics Mimeo Series No. 699.
7.
Gaylor, D.W. and Hartwell, T.D. 1969. Expected Mean sqUares for
Nested Classifications. Biometrics, 25,427-30.
8.
Goldsmith, C.H. 1969. Three-Stage Nested Designs for Estimating
Variance Components. Unpublished Ph.D. thesis. North Carolina
State University. Institute of Statistics Mimeo Series No. 624.
9.
Graybill, F.A. 1961.
V.l, McGraw Hill.
An Introduction to Linear Statistical Models.
10.
Hammersley, J.M. 1949. The Unbiased Estimate and Standard Error of
the Interclass Variance. Metron, 15, 189-205.
11.
Hartley, H.O. 1967. Expectation, Variances, and Covariances of
ANOVA Mean squares by 'Synthesis'. Biometrics, 23, 105-14.
12.
Hartley, H.O. and Rao, J.N.K. 1967. Maximum Likelihood Estimation
for the Mixed Analysis of Variance Model. Biometrika, 54, 93-108.
161
"
•
13.
Harville, D.A. 1969. Variance Component Estimation for the Unbalanced One-Way Random Classification - A Critique. Aerospace
Research Laboratories, ARL - 69 - 0180.
14.
Henderson, C.R. 1953. Estimation of Variance and Covariance Components, Biometrics, ~, 226-52.
15.
Hirotsu, C. 1966. Estimating Variance Components in a Two-Way
Layout With Unequal Numbers of Observations. Reports of Statistical Application Research, Japanese Union of Scientists and
Engineers, ~, 29-34.
16.
Joreskog, K.G. 1970. A General Method for Analysis of Covariance
Structures, Biometrika, 57, 239-51.
17.
Koch r G.G. 1968. Some Further Remarks on 'A General Approach to
the Estimation of Variance Components. Technometrics,~, 373-90.
18.
Kussmaul, K.L. and Anderson, R.L. 1966. Estimation of the Mean and
Variance Components in a Two-Stage Nested Design With Composited
Samples. Unpublished Ph.D. thesis. North Carolina State University. Institute of statistics Mimeo Series No. 473.
19.
Lancaster, H.O. 1954. Traces and Cumulants of Quadratic Forms in
Normal Variables. J. Roy. Stat. Soc., suppl.,16, 247-54.
20.
Mahamunulu, D.M. 1963. Sampling Variances of the Estimates of Variance Components in the Unbalanced 3-Way Nested Classification.
Annals of Mathematical Statistics, 34, 521-7.
21.
prairie, R.R. 1962, Optimal Designs to Estimate Variance Components
and to Reduce Product Variabili~y for Nested Classifications. Unpublished Ph.D. thesis, North Carolina State University. Institute of Statistics Mimeo Series No. 313.
22.
Rao, J.N.K. 1968, On Expectations, Variances, and Covariances of
ANOVA Mean Squares by 'Synthesis'. Biometrics, 24, 963-78.
23.
Searle, S.R., 1956.
ponents Analysis.
24.
Searle, S.R. 1958. sampling Variances of Estimates of Components
of Variance. Annals of Mathematical Statistics, 29, 167-78.
25.
Searle, S.R. 1961. Variance Components in the Unbalanced 2-Way
Nested Classification. Annals of Mathematical Statistics, 32,
1161-6.
26.
Searle, S.R. 1968. Another Look at Henderson's Methods of Estimating Variance Components. Biometrics, 24, 749-88.
)
Matrix Methods in Variance and Covariance ComAnnals of Mathematical Statistics, 27, 737-48.
162
27.
Searle, S.R. 1970. Large Sample Variances of Maximum Likelihood
Estimators of Variance Components. Biometrics, 26, 505-24.
28.
Searle, S.R. 1971. Topics In Variance Component Estimation.
metrics, 27, 1-76.
29.
Tukey, J.W. 1957. Variances of Variance Components: II. The
Unbalanced Single Classification. Annals of Mathematical
Statistics, 28, 43-56.
Bio-
f
r
© Copyright 2026 Paperzz