~
GENERALIZED LEAST SQUARES ANALYSIS OF THE
SPLIT-PLOT MODEL USING AN ESTIMATED
VARIANCE-COVARIANCE MATRIX
Luis Leon Landois
Inst. of Statistics Mimeo
Series~#1~3~5~2
~
ABSTRACT
LANDOIS, LUIS LEON.
Generalized Least Squares Analysis of the Split-
Plot Model Using An Estimated Variance-Covariance Matrix.
(Under the
direction of FRANCIS GIESBRECHT.)
The variance-covariance matrix of the observations obtained from a
split-plot type of experiment depends on two variance components.
In
the unbalanced case the best linear unbiased estimates of the treatment
effects depends upon these unknown components.
In this thesis the use
of generalized least squares to estimate the treatment effects using an
estimated variance-covariance matrix is considered.
C. R. Rao's (1979)
minimum norm quadratic estimation (MINQE) and invariance with respect to
translation of the fixed effects are principles used to obtain estimates
of the variance components.
It is shown that invariant unbiased MINQE
of the two variance components converge in probability to their true
values.
This is used to establish asymptotic properties of the general-
ized least squares estimators.
Alternate estimators are obtained by
using an iterated invariant MINQUE and by imposing the side condition
that the estimate of the split-plot component be non-negative.
A method
for estimating the degrees of freedom associated with estimated variances
of linear constraints among the generalized least squares estimates of
the fixed effects is developed.
This permits the construction of tests
of hypotheses about constraints among the fixed effects.
A small simula-
tion is included to check on the performance of the procedure .
..
ii
BIOGRAPHY
Luis Leon Landois-Pa1encia was born August 27, 1943, in Los
Herreras Nuevo Leon, Mexico.
He received his elementary education in
Torreon Coahuila, Mexico, his secondary and preparatory education in
Monterrey Nuevo Leon, Mexico.
,
He received the degree of Ingeniera-Agronomo from the Facu1tad-deAgronomia in the Universidad de Nuevo Leon in December 1967.
He worked
as an assistant professor for six months in the Facu1tad-de-Agronomia.
He entered in 1968 to the Centro-de-Estad{stica-y-Calculo of the
Colegio-de-Postgraduados at Chapingo, Mexico, where he finished his
courses in statistics in 1970, and he received the Master of Science in
Statistics in May 1976.
While he was pursuing his master's degree, he was an assistant
professor for the Biometry course at the undergraduate level, a position
that he kept until 1970.
He then became a professor of the course and
held this position until 1976.
In 1970, he began to work as a consultant in Statistics and
Statistical Computing and as a professor in the Computer Department at
the same center.
From 1972 until December 1976 he was the Head of the
Computer Department and at the same time he was a part-time professor
at the graduate level for the Computer Science Department.
In January 1977, the author enrolled in the Department of Statistics
at North Carolina State University at Raleigh on a full-time basis until
May 1981.
He has now returned to his duty at the Centro-de-Estad1stica-
y-Calculo at Chapingo, Mexico.
iii
The author has been married to Guadalupe Alvarez-Icaza de Landois
since 1970.
His wife has studied Economics at the Universidad Nacional
Aut~noma de Mexico.
·e
iv
ACKNOWLEDGEMENTS
The author wishes to express his appreciation to many people who
have advised and assisted in the preparation of this thesis.
In
particular, he would like to express his sincere appreciation and
special thanks to Professor F. G. Giesbrecht, Chairman of the Graduate
Committee, whose advice and guidance have been invaluable.
Thanks are
also extended to Professors T. M. Gerig, A. R. Gallant, R. E. Hartwig and
J. W. Bishir for their helpful suggestions to this study.
His appreciation is extended to Professor J. F. Monahan for his
-•
helpful discussions and to Professor D. D. Dickey for letting him use
his computing library.
Also, the author acknowledges all professors
in Statistics, Mathematics and Computer Science who assisted him in
any way in the pursuit of his doctoral studies.
He also wishes to express his gratitude to Professor A. Carrillo,
Head of the Centero-de-Estad{stica-y-Calculo, for the support and interest
during his studies.
Further, he gives thanks to CONACYT (Consejo Nacional de Ciencia
ye Tecnologia), the Mexican Agency which gave him financial aid and
academic support in the realization of his doctoral studies, and to the
Ford Foundation for providing partial support to his family.
To his
friend, Said E. Said, he also gives thanks for his invaluable help in
the correction of his English writing.
The author would like to express special thanks to his mother,
Carmen Palencia de Chavarria, for her sacrifices and encouragement
v
during all his educational life; in memorial to his stepfather,
Demetrio Chavarria A., who was always pleased with his achievements;
to his brothers, Jesus, Armando, Demetrio y Roberto; to his aunts,
Refugio and Maria Luisa; and to his cousins, Esther and Bertha, he
gives very special thanks.
Also, he extends thanks to his family-in-
law for their moral support.
He also expresses thanks to Mrs. Carolyn K. Samet for the excellent typing of this thesis.
Finally, he would like to express his deepest thanks to his wife,
Guadalupe, for her love, patience, time and encouragement, without
which he could never have completed these studies.
--
vi
TABLE OF CONTENTS
Page
LIST OF TAB LES • • • • • • • • • •
••• •
• viii
LIST OF FIGURES •
. • • •
ix
••••
1.
INTRODUCTION.
2.
REVIEW OF LITERATURE. •
2.1
. • • • • • • . . . • •
Estimation of the Parameters
• • • . • • • • •
e in
3.
3
a General
Linear Model
-e
1
3
2.2
Missing Values and Unbalanced Data.
7
2.3
Estimation of Variance Components ••
9
MINQUE ESTIMATORS FOR THE SPLIT-PLOT VARIANCE COMPONENTS.
3.1
The Split-Plot Model • • • . • • • • • • . • .
3.2
The MInimum Norm Quadratic Unbiased Estimator
(MINQUE) • • •
3.3
Positive Definite MINQUE Estimators.
3.4
The Variance Component Estimators for the
Split-Plot Model . . •
3.5
10
10
11
19
21
Seely's Method to Obtain the Estimators of the
Variance Components under the Invariance
Condition. • •
4.
ASYMPTOTIC PROPERTIES
4.1
33
40
Properties of the MINQUE Estimators of the
Variance Components . .
40
vii
Page
4.2
e
Behavior of the Vector of Parameters B When the
Variance Components Are Replaced by Those
Obtained by the MINQUE Method.
5.
TESTING HYPOTHESES.
5.1
48
52
Procedure to Test the Degrees of Freedom
Associated with a t-test for Testing a Linear
Combination of the Vector of Parameters B• ••
5.2
The Accuracy of the Formula for Approximating
5.3
the Degrees of Freedom • • •
A2 A2
Distribution of Z = f(01,02) =
52
56
A
V Is Computed Using the MINQUE Estimators of
the Variance Components ••
5.4
65
Simulation Study to Evaluate Adequacy of
Approximation. • .
68
6.
THE ANALYSIS OF VARIANCE ••
75
7.
SIMULATION STUDIES • • • •
81
7.1
Standard Conditions ••
81
7.2
Non-standard Conditions ••
83
8.
Sill1MARY AND CONCLUSIONS •
9.
RECO~mNDATIONS
.
95
97
BIBLIOGRAPHY.
101
APPENDICES. •
104
A.
MINQUE Estimators for the Split-Plot Error Variances. • .
105
B.
Programs for the Simulation of the Behavior of the
Estimate Vector of Parameters B
C.
122
Programs for Formula 5.1.3 for Approximating the
Degrees of Freedom . • • . . • . . . • . • . .
125
e-
viii
LIST OF TABLES
Table
5.2.1
Page
Degrees of freedom associated with different degrees
of unbalanced designs ••
5.2.2
58
Mean, variance, degrees of freedom of the distribution
A2 A2
of Z = f(01,02)' (d.f.Z) and degrees of freedom
associated with formula (5.1.3) (d.f.F ) for the 27
o
cases considered . . • • . • • • . . . .
63
ix
LIST OF FIGURES
Figure
Page
5.2.1 Distribution of Z for testing a linear contrast between
blocks for testing a split-plot model with no missing
observations • • • •
5.2.2
60
Distribution of Z for testing a contrast between whole
plot effects for a split-plot model with no missing
observations
5.2.3
62
Distribution of Z for testing a linear contrast between
split-plot effects for the split-plot model with no
missing observations •
5.4.1
Distribution of Z for testing a contrast between split
plot effects for a split-plot model. •
7.1
71
Flow chart of the simulation of the behavior of the
estimate vector of parameters S. . • . • •
7.2
66
82
Distribution of the estimated general mean for the
split-plot model under normal distribution, when the
variance components were replaced by the MINQUE
estimators
7.3
84
Distribution of the estimated general mean for the sp1itplot model, with missing observations under normal
distribution, when the variance components were
replaced by the MINQUE estimators • • • • •
85
e-
x
Figure
7.4
Page
Distribution of the estimated general mean for the
split-plot model, with missing observations under
uniform distribution, when the variance components
were replaced by the MINQUE estimators •
7.5
87
Distribution of the estimated general mean for the
split-plot model, with missing observations under
normal distribution, when the variance components
were replaced by the restricted MINQUE estimators.
7.6
88
Distribution of the estimated general mean for the
split-plot model, with missing observations under
uniform distribution, when the variance components
-e
were replaced by the restricted MINQUE estimators.
7.7
89
Distribution of the estimated general mean for the
split-plot model, with missing observations under
normal distribution, when the variance components
were replaced by the restricted MINQUE estimators.
7.8
91
Distribution of the estimated general mean for the
split-plot model, with missing observations under
uniform distribution, when the variance components
were replaced by the P.S.D MINQUE estimators
7.9
92
Distribution of the estimated general mean for the
split-plot model, under slash distribution when
the variance components were replaced by the MINQUE
estimators •
94
1.
INTRODUCTION
The split-plot experiment comes naturally when the researcher has
data that arise from one of the following cases:
Case A.
The random selection of whole units from which several measures
(or subunits) are made.
Case B.
The random selection of whole units followed by the selection
of several subunits at random within each of the whole units.
Case C.
A factorial arrangement of the factors A and B with p and q
levels respectively in which the factor A with p-1 degrees of
freedom is completely confounded with whole plots and B is
-4It
confounded with subdivisions of the plots, i.e., split plots.
Case D.
A large number of treatments assigned to plots in a randomly
selected group of blocks, i.e., the incomplete block design,
In each one of these cases the structure of the errors in the model
consists of two random elements, one associated with the whole plots (or
whole units or blocks in case of the incomplete block design) and the
second associated with the split plots (or split units or plots in the
incomplete block).
All of these random elements are assumed to be inde-
pendently distributed with mean zero.
.
2 t h e second group
var1ance
01'
Those in the first group have
°22 ,
In general, the model associated with the split-plot experiment can
be written as
Y = XI3
+
E:
2
where Y is an r-vector of observations, x is a r x m matrix of known
constants, S is an m-vector of parameters and € has structure U €1 + V2€2
1
where U and U are r x mi matrices and €1 and €2 are mi-vectors of
1
2
random errors, such that E(€.) = 0 and variance-covariance matrix
~
2
= I m. cr.,
i = 1,2.
~
D(€i)
~
Analyses are straightforward when the number of split plots is
constant for all whole plots.
The object of this study is to consider
cases when this is not true.
In particular, this thesis is a report of
the study of the behavior of the generalized least squares estimator of
the parameters S, say 8
= (X'V-~)-X'V-~,
using an estimated variance-
covariance matrix V, i.e., to observe the behavior of
when cr
8=
(X'V-~)-X'V-~
2
2
and cr are estimated using the MINQUE method.
2
1
A second problem to be considered here is to obtain a general
formula for computing the degrees of freedom associated with at-test
for testing a linear function of the parameters S of the form
H:
o
against H :
a
>"'S=>..IS.
0
~ > <
when the variance-covariance matrix V in the generalized least squares
estimator 8 is replaced by an estimate V that comes from the MINQUE
procedure.
e-
3
2.
REVIEW OF LITERATURE
The general Gauss Markoff linear model can be written as
= XS + e:
y
where:
(2.1)
Y is the n x 1 response vector, X is an n x p matrix with rank
q < p, S is a p x 1 vector of parameters and e: is an n x 1 vector of
random errors.
It is common to assume that e: has E[e:]
covariance matrix D[e:]
=
10
2
,
0
2
unknmvn.
= 0 and variance-
Techniques for estimating
under these conditions are well known.
Under certain conditions it is much more reasonable to assume that
D[e:]
= V.
If V is known, or at least known up to a constant multiplier,
then again the analysis is well known, though in general the computing
may present some difficulties.
The object of this thesis is to examine several methods for estimating S when V is unknown but has some structure; i.e., depends on 2
unknown parameters
0
2
and
1
2
0'2'
In particular it is assumed that the data
(and model) are from an unbalanced split plot experiment and that
a;
a~
and
can be estimated.
2.1
Estimation of the Parameters S in
A General Linear Model
One of the first persons to work with the weighted least squares
principle was Aitken (1943).
unbiased estimator of
S=
(X'V-1X)-X'V-~.
He showed that the minimum variance linear
S, under the linear model (2.1), is
The obvious difficulty with this approach is that
4
V is often unknown.
A possible alternative is to ignore V and use
ordinary least squares.
Magness and McGuire (1962) were able to express the differences
between variances of estimated regression coefficients under ordinary
least squares and generalized least squares as functions of the variancecovariance matrix V.
Following along similar lines, McElroy (1967) found that ordinary
least squares estimators also are best linear unbiased, when all errors
have common variance and common non-negative coefficient of correlation
between all p"air-s.
Williams (1967) extended this to show that ordinary
least squares estimators were best linear unbiased when the rows of X
were a full-rank linear combination of the characteristic vectors of V.
Zyskind (1969) gives several conditions on the form of the covariance matrix such that simultaneously for all models with common specified systematic part every ordinary least square estimator is also best
linear unbiased.
He shows that models with such a covariance structure
may be viewed as possessing just one error term.
Linear models for many
complex experiments, like the split-plot design, with an induced covariance structure under which all linear ordinary least square estimators
are also best linear unbiased estimators, often possess several natural
error terms.
Bement and Williams (1969) studied the regression model with independent but not homogeneous errors.
They examined the performance of
weighted least squares estimators with estimated weights.
Their conclu-
sion was that the weighted least squares estimators had smaller variance
than ordinary least squares estimators if the variances could be estimated with at least 10 degrees of freedom each.
~
5
J. N. Rao and Subrahmaniam (1971) studied the following two
problems:
(i)
eter
where Y is the mean of n observations being normally
i
i
~,
and independently distributed with mean
(ii)
= 1,2, .•. ,k of a param-
Combining k independent estimators Yi' i
J.
= 1,2, ••• ,n.,
J.
E .. , j
J.J
known constants and the
E ••
J.J
and variance
cr~.
B in a regression model Yij
Estimating the parameters a and
a + Bx. +
~
i
= 1,2, ..• ,k,
=
where the x. are
J.
are normally and independently dis2
tributed with mean zero and variance cr .•
J.
Their approach differed from that of the previous authors in that they
used the MInimum Norm Quadratic Unbiased Estimation (MINQUE) principle
introduced by C. R. Rao (1970) to estimate the unknown variances.
-e
They found explicit formulas for the MINQUE estimates of the variances in (i), given by
n.
n(n (n-2»
i
where n
=
~
•
1.
n. and s
2
J.
-1
~
J.
_
(YiJ'-Y)
2
-1 2
- (n-2)
s
j=l
- 2
= (n-1) -1 ~.~.(Y
.. -y) •
J. J
J.J
These estimators are not necessarily positive and must be modified to
provide satisfactory weights.
On
the basis of a simulation study, they
concluded that weighted least squares using the modified MINQUE values
is more efficient than the weighted least squares using
small and k is large.
s~J. when n is
Another conclusion was that MINQUE may not lead
to substantial gain in efficiency when n.
1.
=m~
8, specially for small k.
Fuller and J. N. K. Rao (1978) considered the problem of estimating
the parameter
Y
= XB +
E
B in the linear model with heteroscedastic variances,
where
E
is the n-vector of random variables with mean zero
6
= block
and dispersion matrix V
unknown variances and I
diagonal (olZ1
n
is an n
n
i
least squares estimator of 8 is
x n
i
, ••• ,oZI
i
n
), {oiZ} are
k
identity matrix.
i
S = (X'X) -~'Y.
The ordinary
They defined the two
step weighted least squares estimator of 8 by 8 = (X'V-~)-X'V-~ where
"
V
.
"Z
d1agona1 {o11
= block
"Z
, ••. ~okI
n
n
i
} and
k
"Z .
0. 1S
1
Z
an estimator of oi.
The
They
first step was to compute estimator V and the second to compute S.
studied the class of two step estimators of S given by 8
w
(Y - XS)',
{W1In1,···wkI~} and wi
Y
Z
<
00
= g(n i )
°"Zi
= n
8w
reduced to
-1 "z
l:e:
, W = block diagonal
i
ij
for some g(.) such that 0 < Y < wi <
1
for all i and Y and Y being constants.
Z
1
model, the
=
S= (X'V-~)-X'V-~.
When they replicated the
They used the two step
Z
estimator Sw to construct a new estimator of oi and inserted these esti-
mated variances into Sw to obtain the three step estimator of S.
They
viewed the maximum likelihood estimator as the limit of an iterated
process with W = I.
They investigated the special case of the two step
estimators of a common mean and found that the two step estimator was
superior to the maximum likelihood estimator for a considerable range
of parameter values.
Fuller and Battese (1973) considered a linear model with a nested
error structure.
They demonstrated that it is possible to make a rela-
tively simple transformation and then compute the generalized least
squares estimators of the fixed parameters.
estimates of the variance components.
The transformation requires
These they estimated using the
fitting constants method of Henderson's method III described in Searle
(1968).
They estimated the variance-covariance matrix, say V, and
obtained the value of the generalized least squares estimator 8
=
" 1
" 1
"-1/2
(X'V- X)-X'V- Y by the ordinary least squares regression of V
Y on
e-
7
V- l / 2X.
They gave sufficient conditions under which the estimated
generalized least squares estimator is unbiased and asymptotically
equivalent to the generalized least squares estimator.
Williams (1975) studied convergence rates of weighted least squares
to best linear unbiased estimators.
He wrote the model as follows:
where Y is a n x 1 response vector, X is a n x p matrix of full rank p,
l 2
6 is a p x 1 vector of parameters and V / is a synnnetric square root of
o
0
the dispersion matrix V of the vector Y, and the respective mean vector
o
and dispersion matrix of Z was 0
n
and I .
n
His study concentrated on
situations where the matrix V could be written as V(8 ), i.e., a funco
0
tion of a relatively limited number of parameters which remained constant as n increased.
Under these conditions he showed that for
sufficient large n a weighted least squares estimator
Bw
=
(X'V-~)-X'V-ly can be constructed and regarded either as an estimator
of B "r of the best linear unbiased estimator B
o
2.2
=
(X'V-lX) -XlV-lYe
0
0
Missing Values and Unbalanced Data
In a general experiment if the number of observations belonging to
the subclass is the same for all subclasses, then the experiment is said
to have balanced data or no missing observations.
In contrast, if there
is an unequal number of observations in the subclasses or if some subclasses contain no observations at all, the experiment is called
unbalanced or with missing data.
One of the oldest papers that describes methods to estimate the
yield of a missing plot is due to Allan and Wishart (1930).
They
8
provided formulas for a single missing plot.
Yates (1933) extended that
method to estimate several missing plots, and provided an iterative procedure to estimate the missing plots.
Anderson (1946) derived formulas
for one missing plot in a split-plot experiment.
He used the covariance
method for the derivation in both cases when there is one subplot missing and when there is one whole plot missing.
In principle the covariance
analysis method can be extended to more missing values.
However, this
quickly becomes impractical and one is forced to use methods appropriate
for the general unbalanced case.
Hocking and Speed (1974) compared a number of methods for handling
unbalanced data sets.
Speed, Hocking and Hackney (1978) reviewed the existing methods
for analyzing experimental design models with unbalanced data and related them with existing computer programs.
The methods are distin-
guished by the hypotheses associated with the sum of squares which are
generated.
Their claim is that the choice of the method should be based
on the appropriateness of the hypotheses rather than on oomputational
convenience or the orthogonality of the quadratic form.
One possible alternative for the experimenter faced with missing
cells is provided by Hocking, Speed and Coleman (1980).
This alternative
is based on the assumption that the experimenter would test the usual
hypotheses if the experiment had been balanced.
are derived from the balanced case.
The hypotheses suggested
They also provided a criterion for
choosing the hypotheses to be tested in the unbalanced case and in addition also provided an algorithm for testing the desired hypotheses.
e-
9
The salient feature in the Hocking and Speed (1975), Speed, Hocking
and Hackney (1978) and Hocking, Speed and Coleman (1980) papers is that
only the simple error structure with independent identically distributed
errors is considered.
This thesis deals with unbalanced data and a more complex error
structure.
Emphasis will be on the error structure and the questions
concerning hypotheses to be tested will not be addressed.
2.3
Estimation of Variance Components
The large body of literature dealing with variance component estimation has recently been reviewed by Harville (1977).
Subsequent to
that, Rao and Kleffe (1979) described a series of modifications of the
MInimum Norm Quadratic (MINQ) estimation principle.
In particular,
they discussed MINQ-unbiased, MINQ-invariant and MINQ-unbiased, invariant estimators.
10
3.
MINQUE ESTIMATORS FOR THE SPLIT-PLOT
VARIANCE COMPONENTS
3.1
The Split-Plot Model
The linear statistical model appropriate for the basic split-plot
design in which observations are taken on n
i
split-plots in the ith
whole-plot can be written as:
m
Yij
= k:1
where the {Y
ij
j
xijkS k + u ij
=
=
1,2, ... ,n., i
~
1,2, ... ,a
} are the observed response values, the {x
ijk
} are the m
different control variables, the {B } are the m fixed unknown parameters,
k
and the {u .. } are the unobservable random errors.
~J
These errors consist
of two components, a random element associated with the ith whole-plot,
say vi' and a second independent random element associated with the jth
subplot in the ith whole-plot, say e'.. ; i.e., u ..
~J
~J
= v. +
~
e...
~J
The {vi}
and {e .. } are assumed to be independently distributed with zero mean
~J
and variances
IJ
2
0,
>
v -
IJ
2
e
° respectively.
>
are independent of each other.
!
I
E[u .. U., 01]
~J
~
J
~l
2
:;
v +
IJ
IJ
2
e
Also, the {v.} and {e .. }
~
~J
These assumptions imply
if
i
= i'
if
i
=
if
i :f i'
and j
=
j'
i' and j :f j '
Alternatively, in matrix notation, the statistical model for the
split-plot becomes a general Gauss Markoff model
Y
= XB +
E:
(3.1)
e·
11
where Y is an r-vector of observations, X is an r x m matrix,
B £ Rm is
an m-vector of parameters, and £ is an r-vector of random errors.
For the split-plot design, £ has a special structure.
written as £
= Ul£l
It can be
+ U2£2 where U and U2 are r x m and r x m matrices
l
2
l
of known constants and £1 and £2 are m and m -vectors of random errors.
l
2
Also, E[£t 1
= 0,
E[£t£;l
= 02 Imt
for t
= 1,2,
E[£1£i 1
=0
where I
p
is
the p x p identity matrix.
It follows that the variance-covariance matrix of £ (or Y) is given
by D(£)
= V(o) = VIol2
2
2
2
+ V20 2 where 01 and 02 are the whole and split plot
error variances respectively.
with Y'
.
-
Note that if the elements of Yare arranged
=
(Yll' Y12'.··' Ya ), then VI is a block diagonal matrix, where
na
each block contains unitary elements and is of order n. x n. and V is
2
1.
1.
the identity matrix of order r x r .
The remainder of this chapter will be devoted to obtaining esti-
2
2
mates for 01 and 02·
3.2 The MInimum Norm Quadratic
Unbiased Estimator (MINQUE)
The purpose of this section is to present the UINQUE theory for
the variance components as initially presented in C. R. Rao (1970, 1972)
and C. R. Rao and J. Kleffe (1979).
Let Y
= XB +
£ be a linear model such that Y is an r-vector of
observations, X is an r x m matrix,
B £ Rm is an m-vector of parameters,
and s is an r-vector of random errors, such that the structure of s is
+Us
where U ,
l
P
1,2, ..• , P matrices of known constants and sl' .•• , sp are mQ,p
Q, =
2
= 0, E[s Q, s']
=
anI , where I
Q,
mQ,
mQ,
is the mQ, x mQ, identity matrix for Q, = 1,2, ... , p and E[sQ,S~'] =
vectors of random errors.
Also, E[sn]
Iv
.
Q,=lQ,' •
Iv
°
12
It follows that the variance-covariance matrix of £ (or Y) is given
by D(£)
2
2
= V(o) = VIol + ... + VpO p '
where V~
= U~U~
11-=
1,2, ••• , p.
In general, the above papers deal with the problem of obtaining
estimators for the linear parametric function
y(B,o*)
= c'B +
f'o*
m
where c £ R , f £ RP ,
B £ Rm, 0* £ Land L C RP is an open set.
Note
that the model contains m fixed effects and p variance components.
The estimators of the linear parametric function y(B,o) are functions of the observed values and have the form g(Y) = a'y + y'AY.
Before proceeding to the estimation technique itself, a number of
preliminary concepts must be established.
3.2.1
Identifiability
Definition 3.2.1
A parametric function y (B ,0) is said to be identifiable Hf V(o 1) =
V(02) and XB l = XB 2 implies y(Bl,ol) = y(B ,02).
2
The following lemma establishes the necessary and sufficient conditions to have identifiability of a parametric function.
Lemma 3.2.1
The parametric function y(B,o)
C E
o(X') and f
E
=
c'S + f'o is identifiable iff
O(W) where o(A) is the vector space generated by the
columns of A, W is the matrix with elements w.. equal to tr(V.V.), and
~J
tr represents the trace of a matrix.
The proof is given in C. R. Rao and J. Kleffe (1979).
~
J
13
3.2.2
Unbiasedness
Suitable conditions to establish unbiasedness are given by the
following theorem.
Theorem 3.2.1
The estimator g(Y) = a'y + y'AY is an unbiased estimator of the
parametric function y(B,a) = c'B +,f'a if c' = a'X, X'AX = 0 and
tr(AV ) = f , i = 1,2, .•• , p where A is a symmetric matrix.
i
i
The proof of this theorem is given in C. R. Rao and J. K1effe
(1979).
These authors also prove the following theorem concerning the
existence of an unbiased estimator.
Theorem 3.2.2
There exists an unbiased estimator g(Y) if c
€
o(X') and f
€
o(Q)
where Q = (qii') is a matrix with q£i'= tr(ViV i , - PXViPXV i ,) and where
P
x is
3.2.3
the projection operator of X onto o(X').
An Invariance Principle
If the vector of parameters
B is replaced by
Ba = B - B0
where B is arbitrary, then the model (3.1) becomes
o
Y - XB
0
= X(B-B ) +
Letting Y
d
€ •
0
=Y
- XB
o
leads to
and the quadratic estimator corresponding to y'AY becomes
.
Y~
AY .
d
The
14
quadratic estimator y'AY is said to be invariant with respect to X if
YciAYd and y'AY are equal for all So'
= (Y -
xs o )'A(Y
= y'AY
- 2S'X'AY + S'X'AXS
o
0
0
It is clear from the expression
- XS )
0
= y'AY
that AX
3.2.4
=0
is both necessary and sufficient for invariance to hold.
Minimum Norm Principle
If the
{£~}
were known, then a natural invariant estimator of the
parametric function f'o is
f
+ ... +-E..£,£.
m
p p
p
This can be written as
£1
(£i' £2' ... , £;) /).
(3.2.4.1)
£2
£
P
where
o
/j.
=
o
f
-E.. r
m
p
m
p
.
15
{E~}
In practice, the
are not observable and the natural estimator
of f'a is
g(Y)
= y'AY.
Imposing the condition for invariance leads to
g (Y)
=
(Xe
=
£'
+ E)'
A (Xe
+
£)
A £
and since £ has structure UlE l + .•. + Up£p' then
+ U £ )' A (U l £2 + ... + U £ )
P P
P P
A (U £2 + ... + Up£p)
+ £p'U')
p
l
g(Y) = (Ul£l +
= (£ 1'u'1 +
U'
1
.e
£p, )
= ("
E1 , E2 , ""
U'
2
£1
A (U :U :
l 2
...
:U )
p
U'
£2
£
P
P
= £*' U' A U £*
(3.2.4.2)
where
£1
£ = £2
*
£
U = (U :U :
l 2
and
...
:U )
p
P
A reasonable strategy is to make the difference between (3.2.4.1)
and (3.2.4.2),
£~(U'AU
-
~)£*
small.
The MINQUE principle is to minimize
II u' AV
- ~ II, where
denotes the Euclidean norm /tr(H H') of a matrix.
II HII
16
This leads to the formal definition of the MINQUE of a parametric
function:
Definition 3.2.2
€~U'AV€*
The quadratic form g(Y) = y'AY =
is said to be a MINQUE of
2
p
~
f~at if the matrix A is determined such
k=l
that Ilu'AU - ~II is a minimum, subject to the conditions AX. = 0 and
the parametric function f'o =
In order to obtain an explicit solution, it is convenient to minimize the square of the Euclidean norm,
-
~)).
I IU'AU
~I 12 = tr«U'AU - ~)(U'AU
-
This can be simplified somewhat by using the following result.
Lemma 3.2.2
Let A be a real symmetric matrix subject to the conditions AX = 0
and tr(AVg,) =
f~,
g, = 1, ••• , p.
Then
tr(U'AU~)
=
tr(~~).
Proof
o
o
·f
1
m
p
= tr «U :
1
...
f
:U ) I A(..1:. U :
p
m 1
1
f
f
UI AU ..1:. U' AU ~
1 1 m 12m
2
1
f'
f
= tr( Ui AU 2 --1:. U'AU ~
m 22m
1
2
f
f
2
UI AU ..1:. UI AU
p 1 m p 2 m
1
2
I
m
p
f
:1 U ))
m
p
p
f
U' AU -E..
1 p m
p
f
U'AU -E..
2 p m
)
p
f
U' AU -E.
p p m
p
e
17
,
= tr (U1AU1
f1
m ) + tr(Ui AU 2
1
1
tr(AU U') +
1 1
m~)
m~
f
= L -t
t mQ,
=
where VQ,
tr(AV,Q)
f2
Q,
L
Q, mQ,
p
= UQ, U'Q,
because tr(AVQ,)
1
f
1
-1
m m
1
1
tr(~~)
f'
~)
+ ... + tr(U'AU
ppm
f
+
~
tr(AU U')
+
•..
U
tr(AU 2 i)
m
p P
P
f
f
= -m1
f
0
if
2.. 1
m m1
1
I
I
f
~1
0
mp
-e
.
r-
i
= tr(
= f Q,
0
)
I
f
0
mp
~1
mp mp
;...
f2
2.. 1
2 m
= tr(
m
1
1
)
f2
~1
mp m
p
It
follows that
tr«U'AU -
~)(U'AU
-
~))
= tr(U'AUU'AU) -
2tr(U'AU~)
+
tr(~~)
18
= tr(U'AUU'AU)
= tr(AVAV)
tr(~~)
Since
-
-
tr(~~)
tr(~~),
where V
does not involve A, the problem of MINQUE is reduced to
=0
minimizing tr(AVAV) subject to the conditions AX
£~,
= 1,2,
~
= UU'.
and
=
tr(AV~)
.•• , p.
E~
Also, since the
in the model (3.1) may have different standard
E~(U'AU
deviations, it is reasonable to rewrite the difference
in terms of standardized variables
~~ = E~/a~.
~)E*
-
This leads to the
quadratic form
0
allm
1
~
,
(U'AU -
~)
.
~
a I
p m
p
0
0
allm
1
a I
p m
p
0
e-
The problem is now to minimize tr(AV*AV*) under the conditions
AX
=
0 and trAV
t
=
f~,
t
=
1,2, .•. , P and V*
= v 1 a 21 + ... +
2
vpa p •
In
practice the {a:} will be unknown and will have to be replaced by the
1
best value available from prior knowledge.
Note that knowledge of the
ratios of the variance components is sufficient.
The required solution is obtained via the following theorem.
Theorem 3.2.3
Let P by the projector operator onto the space generated by the
columns of X, using inner product (X,Y)
definite matrix, and defined by P
=
~A£QVVtQv,
~
where V is a positive
= X(X'V- 1X)'-X,v- 1 .
tr(AVAV) subject to the conditions AX
is attained at A*
= X'V- 1y,
=0
and tr(AV )
t
-1
v =V
where Q
The minimum of
=ft,
(I-P) and A'
=
t
=
1, •.. , p
(A , .•• , A )
p
1
~
•
19
is determined from the equations SA
S
= (Stt')
=f
where f'
=
(f , ••• , f )' and
1
p
= tr(QvVtQvVt')·
and Stt'
The proof of this theorem is given in C. R. Rao (1972).
2
It follows that the MINQUE of l:tf'tOt is
= y'AY
g(Y)
=
Y'(l:tAtQVV QV)Y
=
A'q
where q' = (Y'QVV1QVY' Y'QVV2QVY' ••• , Y'QVVpQvY) and A is a solution of
l:tAttr(QVVtQvVt')
= £t" t' = 1,2, ... , p, that is SA =
f.
In this case A = S-l f is a solution to SA = f, and provided the
inverse of Sexists.
-e
Also
A'q
= (S-lf ), q
= f,s-l q
= f'o
where
° is
a solution to So
3.3
= q.
Positive Definite MINQUE Estimators
A criticism of the estimators defined in section 3.2 is that the
estimates may be negative.
Rao and K1effe (1979) derived a procedure to obtain non-negative
definite (n.n.d.) estimators of the variance components.
They showed that if the following hold:
(i)
v
(ii)
v
(iii)
~
t
=
a for t
=
l: V
t
V(t) =
v-
V
~
1,2, ••• , p (V
t
~
a means positive semidefinite)
20
(iv)
and B
t
=X
'VtX , where X is any matrix in the space orthogonal
to o(X'), then there exists an n.n.d quadratic unbiased estimator of
o~
iff O(B t )
f
O(B(t))'
Rao and Kleffe (1979) also showed that o(B )
t
to (I-PG)Vt(I-P )
G
~
0, where P
G
¢ o(B(t))
is equivalent
is the projector operator onto the space
generated by the columns of the compound matrix
3.3.1
Positive Semi-definite Estimator for
the Split-Plot Error Variance
It is informative to apply the above result to the model for the
split-plot experiment.
Clearly, V2
= I,
VI is a block diagonal matrix and G
=
[X:V ].
l
Also (I-P )V (I-P ) = (I-PG)(I-P )
G 2
G
G
=
I-P
G
~
0 because P
G
is not the identity matrix.
Consequently there exists a non-negative definite quadratic estimator
for
2
0Z'
"z
2
The non-negative quadratic unbiased estimator 0z of 02 is given by
Unbiasedness follows from
"2
E(oZ) = E(Y'(I-PG)Y/tr(I-P ))
G
-1
=
(tr(I-P G))
=
(tr(I-PG))-ltr«I-PG)E(YY'))
=
(tr(I-P G))
(tr (I-PG))
-1
-1
tr(E(Y'(I-PG)Y))
tr«I-PG)V)
2
[tr «I-PG)V 1)01 +
e-
21
It is to be noted that this unbiased estimator for
0; agrees with
the one obtained by Henderson's method III.
Observe that in the notation of S. R. Searle (1968) the model (3.1)
can be rewritten as
Using the notation X
a,b
=
[Xa :x..]
-0
write
~X'X
a a X~~lX'X
b a
[X
a
:~]'
•
Xb~
Also let R(Sa' Sb) = Y'P X Y.
a,b
It follows that E[Y'y - R(Sa' Sb)] involves only
Y' (I-P
X
O
2
2
and Y'Y - R(Sa,Sb)
=
)Y.
a,b
Nmv
note that o(X:V )
l
the estimator of
= o(X:U l ), since VI = UlUi.
It follows that
0; obtained from Henderson's method III agrees with the
n.n.d estimator of Rao and Kleffe.
3.4
3.4.1
The Variance Component Estimators
for the Split-Plot Model
Preliminary Definitions
In order to construct explicit forms for the estimators for the
variance components of the split plot experiment, the following sequence
of definitions is needed.
22
Definition 3.4.1
A submatrix or block matrix is a matrix that is obtained from the
original matrix by deleting certain rows and columns.
Definition 3.4.2
A partitioned matrix is a matrix whose elements are block matrices.
Example:
all a 12 : a 13 a 14 a 15
.....................
A=
a
a
2l
3l
a
a
a
22
• a
32
a
23
a
33
24
34
a
a
25
=
[All ~2J
A A
2l 22
•
35
It is convenient to denote a partition matrix
A=
PMATX(~,;
or simply A
= PMATX
k = 1,2, ••• , a, k' = 1,2, ..• , b)
(~k')'
if there is no confusion in the order of the
indices.
Definition 3.4.3
The matrix A is a block diagonal matrix if A contains block matrices
in the diagonal and zeros elsewhere, and is written as
A
= BMATX
or more simply A
(~;
k
= 1,2, .•• , a)
= BDMATX
(~).
23
Definition 3.4.4
The matrix A is a column matrix if A is written as
A
a
where the
{~}
are block matrices having the same number of columns.
The notation
or
-e
A
= CMATX
A
=
(~;
k
= 1,2,
CMATX (~)
••• , a)
will be used.
Definition 3.4.5
Define a as a column vector if a can be written as
a
=
where the {ail are real numbers.
or
a
= CVECTOR
(a.; i
a
= CVECTOR
(a.).
~
Also write
= 1,2, ..• , k)
~
Definition 3.4.6
The matrix A is a row matrix if A is written as A = [A :A : ..• :A ]
2
1 2
where the
{~}
are block matrices having the same number of rows.
will also be written as A
= RMATX
(~;
k
This
= 1,2, •.. , a) or A = RMATX
(~).
24
Definition 3.4.7
The
symbol~'
will denote a row vector if a' = [a ,a , ••• ,
l 2
where the {a.} are real numbers.
Alternatively
1.
or
a'
= RVECTOR
(a.; i
a'
= RVECTOR
(a.).
1.
-
1.
~]
= 1,2,
••• , k)
Using this notation one can write
... ,
1.i
=
y'
= RMATX
-1.J
x~ .
=
[x
and
Xi
=
CMATX (x ~ . ;
3.4.2
The MINQUE Equations
[Yil' Yi2 ,
(1.i) ,
ijl x ij2 '
-1.J
Yin ]
i
,
... , x ijm] ,
j = 1,2, ... , n )·
i
e-
Recall from section 3.1 the error vector
€
has the structure
where VI is a block diagonal matrix, each block an n i x n i matrix of
ones.
It is convenient to let J
of ones for i = 1,2, •.• , a.
n
i
or simply J. represent an n. x n i matrix
1.
1.
It follows that
Now assume that prior information indicates that
2
guesses for 011 and
0:
2
respectively.
2
ai
and
a;
Use this to write V as
are good
Z5
Lennna 3. 4 •1
Z
Z
2
If Ai = J i a 1 + Iia where a
Z
1
-1..
then Ai
-Z
i
Z -Z
1S g1ven by a Z Ii - a a Z
1
It follows that Va
~
= - a 21a -Z
Z
(n a
i
Z
1
-l/Z
1
Z
(n a +
i 1
Z
> 0, Ai of order n x ni,
i
Z
Z-l
a ) J ·
i
Z
= BDMATX (aoI i + aiJ i ) where a O = a -Z
Z and
2-1
+ a Z)
Observe that V
BDMATX (n.
1
0 and a
•
= BDMATX (J i ) can be written as V~V~ where V~ =
J.).
1
In section 3.Z it was shown that the MINQUE of
where q'
-e
=
2
~ifiOi
is g(Y) = A'q,
(Y'QvV1QVY' ••. , Y'QvQpQvY) and A is a solution for
LiAitr(QVVtQvVt') = f t "
t' = 1, ••• , p.
A
Also, it was shown that g(Y) = flo where a is a solution to
Z
~tottr(QVVtQVVi')
= Y'QVVi,QvY,
Therefore, letting t,t'
t'
= 1,Z,
= 1,
••• , p.
the estimators for the split-plot
variance component will be completely determined by solving the fol1owing system of equations:
tr (QVV1QVV1)
[ tr(QVVZQVV )
l
=
(3.4.Z.1)
In order to obtain an explicit equation for the estimators of the variance components, it is necessary to compute the terms involved in
(3.4.2.1) .
26
3.4.2.1
Y'OVV1~Y
Computing
Note first of all that Y'QVV1QVY
Now obtain
where
V~QvY as follows:
8= CX'V-1y and
= X(X'V-~)-X'V-1,
P
V
= Y'QVV~V~QvY.
C
= (X'V-~)-
and
(X'V-~)- is the generalized inverse of X'V-~.
Observe that
-1/2
BDMATX (n,
(a
=
~
+ ain,)J ).
~
i
0
~
Also XB
= CVECTOR (L:x, , kBk;
and
= L:ck t
13
k
k
0"
~J
t
~t
where
~J
(a L:L:x, , y"
~Jt
~J
= 1, ... , n.,
j
~
+ L:a,x,+ y,+), k
t
,~~
~
~
are the elements of C and x i + t
= 1,
... , a)
= 1,2,
..• , m
i
= L:x,
't
,
~J
J
and "1,+
1: .
e-
= L:y. , •
j
~J
Observe that
~
Y - XB
~
=
CVECTOR (y,. -
L:x. 'kBk)
~J
b 1
and V V- (y - X8)
,~J
J
= BDMATX (n~1/2(a
~
0
1
+ a,n.)J,) • CVECTOR (y., ~ ~
~
~J
L:x, 'kBk)
k ~J
= CMATX ( n.-1/2(
a + a.n. )( y _+ - L:x-+kB~ k )1 .).
~
0
~~
~
k~
"'"1.
It
follows that
Y, QVV1QVY
-1(
= L:n,
a
i ~
0
=
,
+ a,n, )2( y,+ - L:x'+kBk )2 1,1,
~
L(a + a.n.)
i
0
~ ~
~
2
~
(y_+
~
k
~
-~--'l.
(3.4.2.1)
27
3.4.2.2
Computing
Y'Qv~Y
The first step is to compute
-1
= V (Y -
A
XB)
Consequently,
-e
A
+ 2 aoai(Yij - ~kXijkBk)(Yit - ~kxi+kBk)]
=
2
A
~.~.a (y .. - ~kx .. kBk)
1. J
0
1.J
1.J
2
+ ~.a.(2a
1. 1.
A
0
+ a.n.)(y.+ - ~x.+k8k)
1. 1.
1.
k
1.
(3.4.2.2)
2
•
28
3.4.2.3
Computing
tr(9vVl~Vl)' tr(9vVl~)and tr(~~l
As a first step observe that tr (V1Q VQv) = tr (V1Qv ) . Also observe
v
2
2
that tr(QVV QVV) = tr(QVV Q V )a. + tr(QVV QV)a. • This implies
1
1 V1
1
2
-2
2
tr(QVV1QVV1 ) = 0. 1 [tr(V 1QV) - a.2tr(~V1QV)]·
= tr(Qv).
Also observe that tr(QvvQv)
Then
implies
evaluated.
e-
Now
= (I -
and tr(QvV)
V-~CX')
=
tr(1 - V-~CX')
=
tr(1) - tr(V
-1
XCX').
-1
Note that V XCX' is idempotent.
Using the property, that the
trace of an idempotent matrix is equal to its rank, and R(X)
R(XCX'V-~) ~ R(XCX'V- 1 ) ~ R(X)
=
=
q where R(A) means the rank of A, it
follows that
(3.4.2.4)
Now
tr(QV)
=
tr(V- 1 - V-~CX'V-1)
=
tr(V- 1 ) - tr(V- 1XCX'V- 1 )
= tr(V- 1 )
- tr(CX'V-~-lX).
Z9
Observe that tr(V
-1
)
=
Litr(a o I.~ + a.J
)
~ i
=
L.(a
+ ai)n.,
~
0
~
(3.4.Z.5)
(3.4.Z.6)
X'V-~-~
=
[Xi:Xi: ... :X~]V-~-l Xl
X
.z
X
= l:.Xi'(aZI.
0
+ (Za a. + a:n.)J.)X.,
= a2L.CX~Xi
o
+ r.(2a a. +
~
-e
a
~
~
and tr(CX'V-~-~)
~
=
0
~
~
0
~
~
~
~
~
a:n.)CX~J.X.,
~
~
~
~
~
aZL.tr(CX~X.) + L.(Za a. + a:n.)tr(CX~JiX').
O~
~~
~
O~
~~
~
~
(3.4.Z.7)
Recall that tr(CX'V-~)
=q
= tr(CX'
=
a
o
• BDMATX(a I. + a.J.)X)
o
L.tr(CX~X.)
~
~
~
+
~
~
~
L.a.tr(CX~J.X.)
~
~
~
~
~
which can be rewritten as
a O~
L.tr(CX~X.)
~~
=q
-
(3.4.2.8)
L.a.tr(CX~J.X.)
~~
~~~.
Substituting (3.4.2.8) in (3.4.2.7) gives
=
a q + L.(a a. + a:n.)tr(CX~J.X.)
o
~
0
~
~
~
~
~
~
30
(3.4.2.9)
where
=
t.1.
(3.4.2.10)
tr(CX~J,Xi)'
1. 1.
From (3.4.2.5) and (3.4.2.9),
(3.4.2.11)
From (3.4.2.4) and (3.4.2.11),
tr(QVV1 )
= Ct. 1-2 ["(n-q)"-Ct. 22 (E i (a 0+1a.)n.
.1.
=
- a 0 q - E.ai(a
+ a.n.)t
1.
0
1. 1.
i )]
-2
-2 2
Ct. 1 (n-q) - Ct. 1Ct. 2 (E i (a o + ai)n i - aoq - Liai(a o + aini)t i ).
(3.4.2.12)
To compute tr(QvQv)' observe that
=
e-
tr(V-~-l) - 2tr(CX'v-~-~-~) + tr(CX'v-~-~CX'V-~-~).
(3.4.2.13)
From (3.4.2.6) it follows immediately that
tr(V-~-l) = tr(BDMATX(a 2oI.1. + (2a 0 a.1. + a:n.)J
))
1. 1.
i
L.(a
1.
=
Note also that
2
0
2
~.(a
1.
0
2
tr(I.) + (2a a. + a.n.)tr(J.))
1.
0
2
1.
+ 2a0 a.1. + a.n.)n
.•
1. 1.
1.
1. 1.
1.
(3.4.2.14 )
31
It follows that
2 2
= a2
(q - L.a.tr(CX~J,Xi))
+ L.(3a
a. + 3a a
o
11
11
1
01
o i
=
2
2
2
a q + L.(2a a. + 3a a.n. +
=
2
2
2
3 2
a o q + L.(2a
a. + 3a 0 ain.1 + a.nih
.•
101
1
1
2
Recalling CX'V- X
o
101
,
= a 0L.CXiX.
+
1
1
2
1
tr(cx,v- XCX'V- X)
011
3 2
ain.)tr(CX~J.X.)
1
111
(3.4.2.15)
2 )
'
1 ead s to
L.1 (2 a 0 a i + a.n.
CX.J.X.
11
111
= tr«a o L.CX~X.
+
1
1 1
L.(2a a. + a7n.)CX~J.X.)2)
101
1
1
111
-e
2
+ 2ao Li L.,(2a
ad + a.,n.,)t
.. ,
1
0 1
1
1
211
2
2
+ L.L.,(2a
a. + a.n.)(2a
a., + a.,n
,)t 11
.. ,
1 1
0 1
1 1
0 1
1
i
3
{3. 4.2.16)
=
where
tr(CX~X.CX~,x.,),
{3.4.2.17)
tr(CX~X CX~
(3.4.2.18)
1 1
1
1
i
1
1
,J. ,X. ,)
1
1
and
t .. ,
311
= tr(CX~J.X.CX',J.,x.,).
1 1
1
i
1
1
Expressions (3.4.2.14), (3.4.2.15), and (3.4.2.16) lead to
2
2
2
2
= L.(a
+ 2a a. + a.n.)n. - 2(a q + L.(2a a.
10
01
111
0
l.
Ol.
2
3 2
+ 3ao a,n,
+ a.n.)t.)
l. l.
l. l.
l.
{3.4.2.19)
32
Combining expressions (3.4.2.3), (3.4.2.4), (3.4.2.11), (3.4.2.12),
and (3.4.2.20) yields:
= a. 1-4 (n-q)
- 2a.
-4 2
a. (L:(a + a.)n. - a q
1 2 i 0
110
- L:a.(a
+ a.n.)ti)
.1011
1
-4 4
2
2
+ a.1 a. 2 (L:(a
+ 2a 0 a.1 + a.n.)n.
.
0
111
1
2
2
2
3 2
- 2(a q + L:(2a a. + 3a a.n. + a.n.)t )
i
o
i
0 1
011
1 1
2
2
+ EE(a t 1 .. , + 2a (2a a., + a.n.,)t 2 .. ,
." 0
11
0
0 1
1 1
11
11
2
2
+ (2a o a.1 + a.n.)(2a
a., + a.,n.,)t .. ,».
1 1
0 1
1
1
3 11
(3.4.2.21)
33
La.(a
i
- a
~
0
+ a.n.)t.)
~
~
~
-2 2
2
2
a (L(a + 2a a. + a.n.)n.
l 2 i 0
0 ~
~ ~
~
2
2
2
3 2
- 2(a q + L(2a a. + 3a a.n + a.ni)t.)
i
o
i
o~
o~
~
~
(3.4.2.22)
3.5
Seely's Method to Obtain the Estimators
of the Variance Components under
the Invariance Condition
An alternative approach to variance components estimation was
-e
developed by Seely (1970a, 1970b).
This technique will now be used to
obtain estimators for the variance components in the split plot subject
to the invariance condition.
Xa
First, recall the model (3.1) Y =
assumed that E[£.] = 0,
~
E[£.£~] = I m.
~
~
~
0:
~
+
U~£l
+
U~£2'
E[£.£~]=O
~
J
in which it was
i :f j, and D(Y)
2
2
= VIol + V20 2 , where VI = UtUl*' and V2 = I = U~U~'.
Second, let ai and
a~
represent prior values or guesses for
2
2
o~
and
Use these to write V = VIal + V a 2 , where V is positive definite.
2
l 2 l 2
2
Let v l / be the square root matrix of V, that is V = v / v / .
Third, obtain the generalized least squares estimator of the
parameters
a under
the transformed model
34
Fourth, obtain the residuals
Fifth, let Z x Z
=
ZlZ2
where x denotes the kronecker product
of matrices
Z Z
r r
,
u
1i
and let U =
i
i
u' .
-rl.
Observe that
= 1,2, where
~i
=
. . ]'.
[~li' ... ,u
Jrn.l.
l.
e-
35
The expectation of the vector'Z x Z is
E[Z
where
x
Z]
= He
2
2),
0 = ( 01' 02
-u ,
u
11 11
and
,
uZ1u n
-e
,
~lun
H
=
,
u11~1
,
uZ1~1
, ,
u1Zu 1Z
uI u
Z2 1Z
,
~2u12
,
u12~2
,
~22~Z
,
u'
~1~1 -r2~2
Observe that
and that
(3.5.1)
36
-
u
Ulu 1l
Uu
l 2l
u
=
(I x U )
l
Ul~l
1l
2l
(3.5.2)
~l
Definition 3.5.1
Given a matrix A of order m x,n, the vector of A, say vec(A) , is
the mn-vector that is obtained by writing vertically all the elements of
A starting with the first element of A and proceeding in the lexicographical order.
Under the latter definition the right hand side of (3.5.2) becomes
(3.5.3)
e-
From (3.5.3) and (3.5.2)
Finally, the Seely's estimators of the variance components is
obtained through the model
z
®Z
= He
+ error terms
by using least squares solutions, say
H 'He = H' Z
®
(3.5.4)
Z.
The following two lemmas can be found in Rao and Kleffe (1979).
Lemma 3.5.1
Let A, Band C be m x n, n x k and k x s matrices.
i)
tr(AB)
= vec (A)'
vec (B'),
Then
37
ii)
vee (AB)
=
(A
iii)
vee (AB)
=
(I (!O B') vee (A),
m
iv)
vee (ABC)
=
~
(A
0
1k ) vee (B),
C') vee (B).
Lemma 3.5.2
Let A, B, C and D be m x n, k x s, n x q and s x t matrices.
(A ~ B) (C ~ D)
=
Then
AC ~ BD.
Using the latter lemmas the elements of the matrix H'H say H'H(i,j)
are:
=
H'H(2 ,1) ,
and the elements of H'Z x Z, say H'Z x Z(i) are
38
= vec
Recall that U =
i
value of U and U ,
l
2
and
(Z'U U 'Z)
1 1
v- l / 2 (I
-
X(X'V-~)-X'V-l)Uf'
then replacing the
39
Therefore (3.5.4) becomes
[:~ ]
=
Observe that this is the MINQUE estimator of the variance components.
Therefore, under the invariant condition, Seely's estimators of
the variance components are also MINQUE estimators.
40
4.
ASYMPTOTIC PROPERTIES
This chapter consists of two major parts.
Section 4.1 deals with
the asymptotic properties of the MINQUE of the variance components as
the number of observations increases.
Section 4.2
proceeds to the
asymptotic properties of the generalized least squares estimate
B
obtained by replacing the elements of the unknown variance-covariance
matrix by functions of the MINQUE of the variance components.
4.1
Properties of the MINQUE Estimators
of the Variance Components
Consider the model (3.1), Y = XB + e where Y ix t x 1, and let t
increase to infinity in such a way that Y permits a partition into n
r-vectors, each one representing a replica for the unbalanced splitplot model in the study.
Rewrite model (3.1) as
e*
1
+
Y
n
XB
n
e*
2
e*
n
= U*e ..
~
e-
41
which can also be written as
Yt
=n
{I~X)S +
t
{I~U*)E:
n
t
where In is the n x n identity matrix,
~,
~t
=
(~'
~1'··
S~ =
(Si' ••. ,
S~)
and
~').
.,
~n
It is assumed that Y , Y ' ••• , Yare iid random variables such
Z
1
,
n
that E{Y )
i
= XS i
definite.
Or
and D{Y )
i
equivalently
E[In~
Observe that
= V,
where V
E{E:~) =
~
U*)E: ]
t
= V10 1Z +
D{E:~) =
0 and
= 0,
Z
V 0 , is positive
2 2
V.
~
and
D[b0 U*)E: ] = {I ® U*)D[E: ] (Tu~ U*')
t
n
t
-e
=
(I ~ u*)( I Q9 D[E: . ]) (I ® U* ' )
~
n
n
n
=
(IQ9 U*D[E:.]U*')
n
~
= (I
n
® V)
•
Z
The goal is to estimate a linear parametric function L.q.a.. of
~
~
~
the variance components by the quadratic function y'A Y subject to the
t t t
invariance, unbiasedness and minimum norm conditions of the M1NQUE procedure.
Let a
Z
Z
1
and a
2
Z
Z
Z
V a + VZa ' where V is positive definite.
Z
1 1
1 2 1 2
square root matrix or V, such V = v / v / and
V
=
For each 1
<
2
be prior values for 01 and 02' and use this to write
i
<
1 Z
Also let v / be the
v1 / 2 = v1 / Z '.
n let S. be the generalized least square estimator
~
of the parameters S., under the transformed model
~
z.
~
= v- 1 / 2 {y.
~
- XS.).
~
v- 1 / 2y.,
~
and let
42
Let
2
2
2
t
=
V- l / 2 (y
1
2
V- l / 2 (Y
=
V- l /2('y
2
n
1
2
n
_
£8 )
-
x8 2 )
1
-X-)
8n
V- l / 2 (1 _ P )Y
V 1
l
2
V- / (1 _ P )Y
V 2
=
V- l / 2
.
(i _ PV)Yn
-1 -1
v = X(X'V X) XlV
where P
X8 l + U!E ll + U~E12
X8 2 + U!E 2l + U~E22
eEll
=
(I
IQo,
n~
v- l / 2 (1 - P ))(1<19 U*)
+ (1/-V\
v- l / 2 (I n~
where
v- l / 2 (1
-
U = (I G9
2
n
v- l / 2 (I
- P
=
1
p) (I
V
JCI
n\.:Y
PV))(ln~
(I Q9
n
Ul
n
V
v
U*) E22
2
U!)
)) (I ® U~)
n
E2l
43
and
8
tl
8
11
8
21
..
=;
8
and
8
t2
8
12
8
22
8
n2
=.
nl
.
Now apply Seely's method of section 3.5.
The elements of the
matrix H'H are
=
-e
l
tr[«In@ V- /2CI - Pv))(lh® U!))
«I,Q
v- 1/2 (I - PV ))(In~
19\ U*))'
nl:::l
1
«In® V-
1/2
(I -
«I~V-1/2(I ~
= H'H(2,1)
Pv))(lh~
U!))
P ))(I(Z)U*))']
V
n
1
-
44
Also the elements of the vect or H' Zt
H'Zt~
'eJ
Zt(l)
=
Z'U
U'Z
t 1 1 t
=
Z~ «:!hd} v- 1/2 (I
«I
Arl
n\::l
v- 1 / 2 (I
are
v ) )(:!h@ U!))
- P
- P ))(1
V
Vn
~ v- 1/2
n'<:>'
)(1
n
(V\
n'O
1 2
(I - P')V- / ) (I
= y't (In ®
«I
® Zt
U*))
1
'z t
® v- 1 / 2 (I
~ U*))
'(I ® v1
n
- P )) (I
V
1/2
n
®
U*)
1
,ci - PV))Y t
and
H'Z t
® Zt
(2)
= L:Y~QVV2QVY'.
.-:1.
1.
-:1.
Therefore, the Seely's estimators of the variance components are solutions to the system of equations
H'Hcr
=
H'Z
t
~
(4.1.1)
Z
t
where
ntr (QVV 1 QvV1)
H'H
=
[ntr(Q V Q V )
V1 V 2
ntr(QVV 1 QvV 2
)1
ntr(QvV2QVV2)_
45
and
Observe that (4.1.1) are the MINQUE equations for the variance
components, and that the matrix A that minimizes the quadratic form
t.
g(Y ) =
t
Y~AtYt
~jqj
with respect to the parametric function
2
j is
(4.1.2)
where A is a solution to the equations
j
(4.1. 3)
and
and
q.,
J
= tr(At(I@V.,)),
n
j'
J
= 1,2.
Using a generalized version of section 3.2.3, it is observed that
Y~AtYt
a necessary and sufficient condition for
to be invariant with
respect to the translation of St is that (InQ9 X')A
t
= O.
The following lemma shows that the quadratic form g(Y )
t
unbiased with respect to the parametric function
Y' A Y is
t t t
2
~jqjcrj.
Lemma 4.1.1
The quadratic form g(Y )
t
= y'A
Y
t t t
parametric function Lq.cr: iff q.
J J J
J
is unbiased with respect to the
= tr(A (IJX'IV.)).
t
nV' J
46
Proof:
E[Y~AtYt]
=
tr(E[Y~At Y ])
=
tr(AtE[YtY~])
t
= tr(A t (In ® V»
2
= tr (A (I ® r .V. cr .) )
t n
J J J
2
= Hr(A (I@V.»cr j
t n
j"
J
2
= Eq.cr .•
j
J J
From (4.1.3) it is observed that
H'HA = q
and a solution is given by
"
A
=
(H'H)
-1
q.
From (4.1.2)
=
y'A Y
t t t
(4.1.4)
Let Y , Y2 , " ' , Y be independent and identical distributed
1
n
normal random variables with mean XS. and variance-covariance matrix
~
v.
It is known that if Band D are symmetric matrices, then under the
latter condition
~
47
2tr(BVDV) +
= 2tr (BVDV)
4S~XBVDXS.
1
1
under invariance.
Observe that
var(Y~(~® QVV1 Qv)Y t , Yt(~® QVV1 Qv)Y t )
=
2tr( In<a9 QVV1QVVQVV1QVV)
=
2ntr(QVV1QVVQVV1QVV)
=
2ntr(QVv1QVV1QVV)
=
2ntr(Q V Q V )
V 1 V 1
Then var(g(Y )) = q'(H'H)-lD(H'Z G9 Z )(H'H)-l q
t
t
t
=
!
~
= q'(H'H)-l
2q' (H'H)
=
®QV
Q )Y )
V 1 V t (H'H)-l q
Y~ (I~ QVV2QV)Y t
'(T
q'(H'H)-lD t
It
2 (H'H) (H'H)-l q
-1
q
Using Chebyshev's inequality
Taking the limit when n
lim P{\g(Y
n~
t
)
-
+
~
2
z.q.cr.1
111
> £}
= 0,
establishing that the MINQUE estimators are consistent .
...
48
4.2 Behavior of the Vector of Parameters 8
When the Variance Components Are Replaced
By Those Obtained By the MINQUE Method
It is not possible to guarantee that the MINQUE of the variance
components will be positive or even non-negative.
It follows that it is
possible that the estimated matrix V may not be positive definite.
Con-
sequent1y, care must be exercised in using V to compute generalized
least squares estimates of 8.
4.2.1
Three possible strategies come to mind.
Behavior of 8 When V Is Computed with
2
2
The MINQUE Estimators of 01 and 02
The justification of the procedure is rather weak, because the
A2
estimators 01 could have negative values and yield undesirable results.
A_1
It is also possible that V
does not exist and the method fails. But
A2
2
since it was shown that 0. converge in probability to 0. where the latter
~
~
always takes positive values, then the probability of obtaining positive
estimates increases for large experiments and correspondingly the probability of obtaining a positive definite V increases.
A small simu1a-
tion study reported in Chapter 7 shows that this is indeed true and
8
appears to be asymptotically normal.
4.2.2
Behavior of 8 when V Is Computed with
The MINQUE Estimators of
oi and
0;, But
The Estimators Are Restricted to Be
Positive
The procedure is defined as follows:
(0
=)
1
Let
A2
°1
I
2
I °1
if
~2
°1
<
0
otherwise
e
~
49
~Z
if a
0.01
Z
< 0
otherwise.
The justification of the procedure is based in the fact that since
"'z and a"z converg: in probability to a Z and a Z where the latter always
Z
l
l
Z
a
"z
are positive, so a
l
"z
and a
Z
also converge in probability to a
Z
Z
and a •
l
Z
Moreover, the procedure guarantees that V is positive definite for
all n.
Note that the vectors Yl , Y ' ••• , Y have bounded fourth moments.
Z
n
This assumption guarantees that y'A Y is 0 (n- a ), a > O.
n n n
p
Assumption 1
-e
The elements of I
® V are
functions of a Z-dimensional vector of
a
-1
parameters a such that the elements of the matrix Gni(a) = - IZ( x -'eIV ,
aa.
i = 1,Z, are continuous functions of an open sphere 0 of aO, ~here aO
is the true value of the parameter vector a.
Assumpt ion Z
The sequence of matrices {I ® X} and {I ® V} are such that
n
n
lim n -l(~@ X')(~® V) -lql~ X) = M(a) where M(a) is a p x p matrix of
n-+<x>
-1
fixed constants such that M (a) exist for all aeo and
H (a) where H (0) is a matrix whose elements
r
r
n-+<x>
2
are continuous functions of 0i' i = 1,Z.
Assumption 3
An estimator
= Vn (0) for Vn
V-l(~) exist for all nand
n
o
>
o.
0
=
V (0) is available, such that
n
satisfies the condition ;Z =
0
Z
+
0 (n
p
-0
),
50
Theorem 4.2.2.1 (Fuller and Battese, 1973)
Assumptions 1, 2, 3 are sufficient conditions for the estimator
Bn
= (1~
"-1
(X'V
-
B = (I
the estimator
"-1
X) X'V
191
\:>'
)Y to have the same asymptotic distribution as
n
(X 'V-~) -X 'V-l)y under the model Y
n
n
= (I ® X) Bn
+ (I ® U*)e:n.
Applying the Fuller and Battese theorem, it follows that
(X'V-~)-X'V-ly has the same asymptotic distribution as
A small simulation study of the behavior of
"2
"2
S=
B=
(X'V-~)-X'V-ly.
B when V is computed
with the restricted estimators 01 and 02 is given in Chapter 7.
The
distribution of B appeared to be normal.
4.2.3
Behavior of B When V Is Computed with A
Positive Definite Estimator of
o~ and A
Restricted Positive Estimator of
oi
This procedure also restricts the estimators of the variance com.
ponents 012 an d 022 to b e non-negat1ve.
where
projector operator onto the space of [X:U ].
1
(
Also let
"2
=
°1
i0
I
<
I
I
"
"2
1°
\
"2
°2
=
"2
°2H
otherwise
e-
51
AZ
The justification for this procedure is that 0lH is the MINQUE
positive definite estimator defined by Rao and Kleffe (1979).
is the
Analysis of Variance estimator.
AZ
Moreover, Rao and Kleffe (1979)
Z'
show :hat the distribution of 0ZH has a scaled X
AZ
that a
Also, it
distribution.
Note
Z
is unbiased for 0 •
1
Chapter 7 gives the results of a small simulation study of the
properties of
this section.
-e
B, where V is computed using the estimators defined in
52
5.
TESTING HYPOTHESES
Chapter 5 will be devoted to testing an estimable linear parametric
function of the parameters S, say A'B, that is,
H:
o
A'S=A'S
#,
against H :
a
0
<, >
In Chapter 4 it was established that in large samples S has a
normal distribution with mean S and variance-covariance matrix (X'V-~)
where the latter can be approximated by
(X'V-~)-.
The present chapter deals with testing hypotheses with small samples.
The procedure is to test a linear combination of the parameters S using
an approximate t-test.
t
=
A'S - A'S
The method is to compute
o
where 8 = (X'V-~)-X'V-ly and compare with tabulated percentage points
of the t-distribution.
The difficult question is to determine the
degrees of freedom associated with the above t-test.
5.1 Procedure to Test the Degrees of Freedom
Associated with a t-test for Testing a Linear
Combination of the Vector of Parameters 8
A survey of the literature reveals a general solution to some
similar problems.
Specifically, Satterthwaite (1946), B. L. Welch (1947),
G. S. James (1951) and G. E. P. Box (1954) were concerned with the
e-
53
problem of approximating the distribution of a quadratic form.
Their
general method will be applied to the problem at hand.
Let A'B be an estimable linear function of the vector of parameters
S, and
Z
AZ
AZ
= f(ol' °Z)
= A' (X'V-lX)-A
where
V
AZ
AZ
= VIOl + IO Z •
The distribution of Z will be approximated by a gamma distribution
with density
rr
l
-e
P (t)dt
= -2
with parameters a
t ¥;"'l
(-)
Zg
1
= Zf
and S
t
t
exp(- -)d(-)
2g
Zg
= Zg,
(5.1.1)
f and g being constants.
In Section 5.Z the justification for this assumption will be
examined in some detail.
The values f and g are chosen by equating the first two moments of
Z and the approximating distribution.
Using Taylor's theorem one can write
+ an error term.
Given that the error term is sufficiently small, one can write
54
and
Note that
= A'( ~2(X'V-lX)-)A
dOl
= - A'(X'V-lX)-
~2(X'V-lX)(X'V-~)-A
dO 1
= - A'(X'V-lX)-X'( ~2(V-l»X(X'V-~)-A
dOl
=-A'(X'V-~)-X'V-l( ~2(V»V-~(X'V-lX)-A
dOl
=-q'V q
1
where
Similar manipulations yield
Consequently,
~ -
55
Observe that if A'B is estimable, then A'
and
= q'X, q' = A'(X'V- 1X)-X'V- 1
E(Z): A'(X'V-~)-A
= A'(X'V-1X)-X'v-~q
=
q'Vq
=
,
2
, 2
q V1 q01 + q q02 •
(5.1.2)
Now, equating these two moments of Z and the first two moments of
the approximating distribution yields the following system of equations,
E(Z)
and
var(Z)
-e
,
2
2
= q V1 q01 + q'qo 2
= fg
A2
2
A2
2
A2
= var(ol) (q'V 1 q) + var(02)(q'q) + 2 cov(ol'
=
2fl
A2
°2 )q'V1 qoq'q
Solving this system of equations leads to
f =
2( q ' V qo12 + q ,2)2
q02
1
----:A:-::2-----=-2---~A2=---"~---:2:------,-A"::'"2-A:-:2:-----
var(ol) (q'v q)
1
+
var(oZ)(~'~)
+ 2 cov(ol' 0Z)q'v 1 qoq'q
(5.1.3)
as the "degrees of freedom" parameter for the approximating gamma distribution and consequently the "t" statistic for the test of hypothesis.
Notice that the formula (5.1.3) for the degrees of freedom is a
Z
Z
function of the true parameters 01 and oZ.
Two alternate suggestions are available to compute the degrees of
freedom.
i)
Since the MINQUE are based on prior information, a reasonable thing
to do is to use such information in the formula (5.1.3) to compute the
degrees of freedom.
56
ii)
An alternative suggestion is to iterate the MINQUE procedure several
times.
Experience shows that the process tends to converge rapidly and
hence provides unique values to compute the degrees of freedom by the
formula (5.1.3).
5.2 The Accuracy of the Formula for
Approximating the Degrees of Freedom
A check on the adequacy of the foregoing procedure was performed
with a small simulation study.
~
Step 1.
Generate Y
N(O,l)
Step 2.
Assume prior information and compute the MINQUE for the com-
2
2
ponents, 01 and 02'
Step 3.
Select two different designs and different degrees of unbal-
anced data.
Design I.
A split-plot design arranged in randomized complete blocks,
with two blocks, four whole plots, two split plots and five different
cases of unbalance:
(i)
no missing data
(ii)
missing two observations say Ylll and Y242
(iii)
missing two observations say Yl12 and Y
212
(iv)
missing two observations say Y
and Y
222
22l
(v)
Design II.
missing Y
222
A split-plot design arranged in randomized complete blocks,
with four blocks, three whole plots, two split plots and four different
cases of unbalance:
(i)
(ii)
(iii)
no missing observations
missing two observations say Y
and Y
lll
l12
missing four observations say Y
, Y
, Y
and Y
4ll
222
lll
322
~
57
(iv)
missing twelve observations say Y
, Y
' Y
, Y
,
222
lll
l12
22l
Y23l , Y232 ' Y3ll' Y3l2 ' Y42l' Y422 , Y43l and Y432 •
Step 4.
Select three different contrasts:
(i)
testing the difference between blocks, say Al
(ii)
testing the difference between whole plots, say A
2
(iii)
testing the difference between split plots, say A
3
Combining the different cases of unbalance in Designs I and I I with the
contrasts for testing hypotheses yields a total of twenty-seven cases.
The notation used to identify the twenty-seven cases is given by (design,
unbalance case, contrast), i.e., (I, (i), AI) corresponds to Design I,
no missing observations, and a contrast for testing differences between
blocks.
-e
Step 5.
For the twenty-seven cases, the degrees of freedom approximated
by formula (5.1.3) were computed.
Step 6.
For the twenty-seven cases, the estimates of the variance com-
ponents were computed 100 times, and with these estimates the function
~2
~2
~ -1
Z = f(ol' 02) = A'(X'V X) A evaluated. The mean and the variance of
the distributions were computed and histograms constructed.
The degrees
of freedom of the distributions were computed, and compared with the one
obtained in step 5.
For the first three cases (I, (i), ll)' (I, (i), l2) and
(I, (i), l3)' the function Z was evaluated 200 times.
The purpose of
this was to obtain a better approximation for the distribution of Z.
The results obtained from the simulation are summarized in Table
5.2.1.
Table 5.2.1.
Degrees of freedom associated with different degrees of unbalanced designs
I, (i) Anova
I, (ii) Anova
Source
B
W
Error (a)
S
SW
Error (b)
Total (Cov)
II, (i) Anova
II, (ii) Anova
II, (iii) Anova
II, (iv) Anova
Source
Source
Source
Source
B
3
H
2
Error (a)
6
S
1
2
WS
Error (b)
9
Total (Cov) 23
B:
=
W:
S:
WS: =
B
H
Error (a)
S
WS
Error (b)
Total (Cov)
d.f.
1
3
3
1
3
2
13
d .f.
3
2
5
1
2
7
21
Source
I, (iv) Anova
d.f.
B
1
W
3
Error (a)
3
S
1
SW
3
Error (b)
4
Total (Cov) 15
d.f.
Source
I, (iii) Anova
B
W
Error (a)
S
SW
Error (b)
Total (Cov)
B
W
Error (a)
S
WS
Error (b)
Total (Cov)
d .f.
1
3
3
1
3
2
13
d.f.
3
2
6
1
2
5
19
Source
B
W
Error (a)
S
Sl-l
Error (b)
Total (Cov)
I, (v) Anova
d.f.
1
3
2
1
3
3
13
Source
d.f.
1
B
W
3
Error (a)
3
S
1
Sl-l
3
Error (b)
3
Total (Cov) 14
d. f.
3
B
2
W
Error (a)
0
S
1
2
WS
Error (b)
3
(total (Cov) 11
Blocks or repetitions
Whole plot
Split plots
Interaction between whole and split plots
V1
00
e
e
e
59
5.2.1
Results Obtained from the Simulation
of Case (I, (i), All
The degrees of freedom associated with the t-test for testing the
difference between blocks given by formula (5.1.3) was 3.00.
In Table (5.2.1) it is observed that error (a) which would normally
be used to test block effects has 3 degrees of freedom.
Observe the
closeness between the degrees of freedom obtained from formula (5.1.3)
and the one from Table (5.2.1).
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of 0.746206 and 0.416883 for the mean
and variance respectively.
The degrees of freedom associated with this
distribution was 2.67.
-e
A second frequency histogram based on 200 values was constructed,
and gave the values of 0.685476 and 0.327729 for the mean and variance
respectively.
The degrees of freedom associated with this distribution
was 2.86.
The graph of the latter frequency distribution is shown in
Figure 5.2.1.
Observe that the graph has the shape of a gamma distribu-
tion.
5.2.2
Results Obtained from the Simulation
of Case (I, (i), Azl
The degrees of freedom associated
with the t-test for testing the
difference between whole plots, given by formula (5.1.3), was 3.00.
In Table (5.2.1) it is observed that error (a) which would normally
be used to test whole plot effects has 3 degrees of freedom.
Observe
the closeness between the degrees of freedom obtained from formula
(5.1.3) and the one from Table (5.2.1).
60
>
50..,
.
.
··
·
48-i
.,i
.,,•i
I
...
F
..,
I
I
R 38-1
I
E
a
u
E
.
I
~
i
4I
e
N 29
C
y
2.5
8.0
ZVALUES
FIGURE 5.2. 1 DISTRIBUTION Of Z
FOR TESTING ALTIIEAR CONTRAST BfT1JEEN BLOCKS
FOR Aspur PLOT MODEL UITH tlO MISSING OBSERVAnONS
.
61
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of 1.3265 and 1.3175 for the mean and
variance respectively.
The degrees of freedom associated with the dis-
tribution was 2.67.
A second frequency histogram based on 200 values was constructed
and gave the values of 1.2186 and 1.0357 for the mean and variance
respectively.
The degrees of freedom associated with this distribution
was 2.86.
The graph of the latter frequency distribution is shown in Figure
(5.2.2).
5.2.3
Observe that the graph has the shape of a gamma distribution.
Results Obtained from the Simulation
of Case (I, (i), A31
The degrees of freedom associated with the t-test for testing the
difference between split plots, given by formula (5.1.3) was 4.00.
In Table (5.2.1) it is observed that error (b) which would normally
be used to test split-plot effects has 4 degrees of freedom.
Observe
the closeness between the degrees of freedom obtained from formula
(5.1.3) and the one from Table (5.2.1).
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of .174674 and .014510 for the mean and
variance respectively.
The degrees of freedom associated with the
distribution was 4.18.
A second frequency histogram based on 200 values was constructed
and gave the values of .162663 and .014226 for the mean and the variance
respectively.
was 3.71.
The degrees of freedom associated with this distribution
62
.:
~
~
-18"';
,,
~
"'i
I
F
-i,
R 38-4
E
I
~
~
c
...l
1
283
i
~
~~
e·
~Y/; ~
V~~~f1~!)~~~ ~lV71
~"'-'V ~.V-'-l~LVJ1.u,.Y
9-.,.1
.Jf-%'-I.I,~V" '-L/ L..t,.J~,.
f71r,',';1
~
11'-J.r.!lV..J-J,/!I.J.V
J1..1&..i
I
.il
rT
'/
9 2 3
ZVALUES
FIGURE 5.2.2 DISTRIBUTION OF Z
FOR TESTING ACONTRAST BETUEEN YHOL£ PLOT EFFECTS
FOR ASPLlT PLOT MODEL YITH NO MISSING OBSERVAnoNS
63
Table 5.2.2.
Mean, variance, degrees of freedom of the distribution of
2 2
Z = f(01,02)' (d.f.Z) and degrees of freedom associated
with Formula (5.1.3) (d. £OF ) for the 27 cases considered
0
Mean
Case
-e
d.LZ
d.f.F
(I,
(i) , A )
1
.746206
.416883
2.67
3.00
(I,
(i) , A )*
1
.685476
.327729
2.86
3.00
(I,
(i) , A )
2
1. 326589
1.317556
2.67
3.00
(I,
(i) , A )*
2
1.218625
1. 03578
2.86
3.00
(I,
(i) , A )
3
.174674
.01451
4.18
4.00
(I,
(i), A )*
3
.162663
.014226
3.71
4.00
(I,
(ii), A )
1
.694981
.632815
2.94
2.88
(I,
(ii) , A )
2
.785614
.471430
2.61
2.25
(I,
(ii) , A )
3
.433751
.094709
3.97
2.25
(I, (iii), A1 )
(I, (iii) , A2 )
.882843
.552669
2.82
2.92
2.68
2.63
.060799
3.39
3.36
1.02667
1.113545
1.89
2.00
2.053346
4.454139
1.89
2.00
(I, (iii) , A )
3
(I, (iv) , A1 )
(I, (iv), A2 )
e
Variance
2.266146
.321029
3.82542
(I,
(iv) , A )
3
.357754
.091751
2.78
3.00
(I,
(v) , A )
1
.814711
.748071
1.77
2.95
(I,
(v) , A )
2
1. 65
2.63
(I,
(v) , A )
3
2.31
3.25
(II,
(i) , A )
1
2.09957
7.19
6.00
(II,
(i), A )
2
1. 04979
.306500
7.19
6.00
(II,
(i), A )
3
.001733
10.71
9.00
5.81
5.00
(II, (ii) , A )
1
2.07997
.304351
.096352
2.63044
5.23834
.079917
1.22603
2.38098
0
64
Table 5.2.l.
Continued
Case
Mean
Variance
d.f.Z
d.f.F
(II,
(ii), A )
2
0.292271
.029393
5.81
5.00
(II,
(ii), A )
3
.104107
.002057
10.53
8.00
(II, (iii) , AI)
2.161932
1. 618287
5.77
5.41
(II, (iii) , A )
2
(II, (iii), A )
3
(II, (iv) , AI)
.427687
.046884
7.80
7.37
.150781
.007013
6.48
5.51
3.719712
29.287296
.94
1.00
(II ,
(iv), A )
2
.543057
.658062
.89
1.00
(II,
(iv) , A )
3
.197671
.022929
3.40
3.00
0
e-
65
The graph of the latter frequency distribution is shown in Figure
(5.2.3).
5.2.4
Observe that the graph has the shape of a gamma distribution.
Results Obtained from the Simulation
of the Remaining Cases
The remaining cases are summarized in Table (5.2.2).
It is clear
that the degrees of freedom computed by formula (5.1.3) are satisfactory
for approximating the distribution of Z.
5.3
Distribution of Z
A2 A2
A1.
= f(cr1~2) = A'(X'V--X)-A
When V Is Computed Using the MINQUE Estimators
of the Variance Components
In Chapter 4 it was shown that the parametric function r.q.cr
~
a MINQUE of the form Y'AY, where A satisfies the conditions X'A
·e
tr(AV.) = q .•
~
In specific terms the equations
~
[tr(QVV1QVV1)
cr
['21
=
A2
cr
2
tr(QVV1 Qv)
[~~l
=
-1
S
r
tr(QVV1 Qv)
tr (QVQv )
[Y'QVV1QVY]
Y'QvQVY
[Y'QVV1QvYj
Y'QvQvY
were obtained.
j
Let {Si } denote the elements of S-l.
It follows that
tr(QvQV)/Det(S)
12
= - tr(QVV1 Qv)/Det(S)
S
tr(Q VV Qv)/Det(S)
1
where Det(S) is the determinant of the matrix S.
~
2
i
=0
has
and
66
F
R 68
E
a
u
e·
E
N
C
y
8.5
1.8
ZVALUES
FIGtRE 5.2.3 DISTRIBUTION OF Z
FOR TESTING AI.lNOR CONTRAST BETIlEEN SPlTI PlOT EFFECTS
FOR THE SPLIT PlOT HODU IITIH t(} HISSING 0B5ERVATIONS
1.5
67
~2
It follows that 01
= Y'~Y,
~2
and 02
= Y'A2Y.
Note that Al and A are
2
symmetric but not idempotent.
Scheffe (1970) found that under the assumption of normality of Y
with mean
Q
= y'AY
~
and positive definite variance matrix V, the quadratic form
2'
.£'2 where fA 1 are the characterh
• 1.
i , u.1.
r
1.
0 2 is a chi-square random variable with hi
can be written as Q
2'
istic roots of AV, X h
=
EX. X
i' i
2
degrees of freedom and non-centrality parameter 0t.
However, since
MINQUE are not necessarily positive, a more general statement is needed.
J. P. Imhof (1961) worked with the quadratic form Q
-e
2'
EiAix h .
1.
,0:
1.
= y'AY =
2
where the x ' .£'2 are chi-square random variables with h.
h ,u.
1.
i
2
1.
degrees of freedom and non-centrality parameters 0i' and where Al > A
2
> ... > A > 0 > AI> .•• > A are the characteristic roots of AV.
p
He
m
p+
found that under the assumption of normality of Y, the distribution of
the quadratic form Q can be accurately approximated by
P {Q > X}
j = 1,2,3.
The invariance condition of the estimators, AX
simplification C.
J
= Er Ajh
for j = 1,2,3.
r r
It follows that Y'AlY and
Y'A Y have approximate chi-squared distributions,
2
tively where hi
= a leads to the
X~i
and
= (C (i))3/(C (i))2, cji) = Er(A;i))jh (i)
2
3
r
x~,
~
respec-
= 1,2,3
and A(i) > A(i) > ... are the characteristic roots of A.V for i
1 2 1 .
=
1,2.
68
5.4
Simulation Study to Evaluate
Adequacy of Approximation
In this section a simulation study to evaluate the adequacy of the
A2 A2
approximation to the distribution of Z = f(01,02)
A2
A2
01 and 02 are MINQUE will be investigated.
5.4.1
Simulation of the Distribution of
A2 A2
Z = f(01~2) ~Vhen the Prior 'Values
Are Used to Compute the Parameters
a and S of the Approximate Gamma
Distribution
The simulation that is proposed is given as follows:
~
Step 1.
Generate Y
N(O,l)
Step 2.
Assume prior values and compute the MINQUE for the variance
2
2
components, 01 and 02·
Step 3.
Select a balance design with two blocks, four whole plots, two
split plots and a linear parametric function for testing split plot
effects.
Step 4.
Compute the approximate parameters of the gamma distribution
given by:
)
q'V q
2
(5.4.1.1)
S
q'V q
2 )
(5.4.1.2)
by using prior values instead of the variance components.
e-
69
Step 5.
Generate the estimates of the variance components 100 times,
and with these estimates evaluate the function Z
A'(X'V-~)-A.
"2 "2
= f'(01,02) =
Use a, Sand Z to generate a gamma distribution by using
the function u = CDGAMA(Z,a,S).
Step 6.
Test the hypothesis
H:
u e:
1"
H :
u i
1"
o
against
a
,
where
1"
is the set of gamma distributions.
Three different competing test procedures will be used.
The first
• b ase d on Watson ' s U2 stat1st1c
. . g i ven b y
1S
2
U
-e
where u.
1
~
= l2l N +
= P(Z
~
i=l
{(2i-l) _ u(;)}2
2N
...
z) is the cumulative distribution function of Z and
u(i) are the ordered u
observations.
where t
15
r
=
N(u - 0.5)2
i
values, u is the mean, and N is the number of
212 2 2 2
The second procedure is based on P = N(t +t 2+t +t )
4
l
3 4
liZ
IIT (u.) for r = 1,2,3,4 and ITl(u) =
. r
J 2
J
'-
(u - 1/2), IT (u) =
2
3
- 1/2), IT (u) = v7 (20(u - 1/2) - 3(u - 1/2» and
3
4
2
IT (u) = 2l0(u - 1/2) - 45(u - 1/2) + 9/8. The third procedure is a
4
2
X text based on the fact that one tenth of the {u.} values should fall
(6(u - 1/2)
1
in each of the intervals [0, .1), [.1, .2), ... , [.9,1.0].
Step. 7.
Graph the distribution of Z.
Results of the Simulations
The parameters a and S show values of a = 1.99 and S = 0.08, the
computed values for
13.2.
u2 , P~
and
x2
2
show U = .075,
P~ =
2
.1533 and X =
70
2
The critical values for u , PZ and
X~.05
pZ,=,.05 = 9.49 and
=
x2
2
are U
=, .05 =
0.187 and
16.9.
Therefore in each case the null hypothesis was not rejected. It
A2 A2
is concluded that Z = f(01,02) belongs to the class of gamma distributions.
A second simulation was performed to know how many times the value
H :
is rejected under the hypothesis
o
against
A'13 = 0
H :
a
It was found that in 3 of 100 cases "the null hypothesis was rejected.
e-
This means less than 5% of the time.
A third simulation was performed as follows:
Step 1.
Generate N(O,l).
Step 2.
Select an unbalanced design with two blocks, four whole plots
and two split plots and assume there are missing values, say Y22l' Y222.
Step 3.
Assume prior values and compute the MINQUE for the variance
2
2
components 01 and 02·
Step 4.
Compute the approximate parameters of the gamma distribution by
formulas (5.4.1.1) and (5.4.1.2).
Step 5.
Generate the estimates of the variance components 100 times,
A2 A2
and with these evaluate the function Z = f(01,02). Use a, 13 and Z to
generate a gamma distribution by using the function u
Step 6.
=
CDGAMA(Z,a,I3).
Use two criteria to test the hypothesis
H:
u E T
H :
u
o
against
a
i
T,
where
T
is the set of gamma distributions.
71
F
R 38
E
a
·e
u
[
N 28
C
Y
19
9!-+-4LL.L..~..J.LJ-L+L-L.L...---+-'-.L..--.,.----,....---.
9.8
9.5
1.8
ZVALUES
fIGtI<E 5.4. I DISTRIBUTION OF Z
FOR TESTING ACONTRAST BETIIEEN SlUT PLOT EfFECTS
FOR ASPLIT PlOT l100a
1.5
72
2
The criteria used were based on Watson's U and Neyman's P4.
Results
The parameters a and S show values of a = 1.4999, S = 0.1333.
2
The computed values for U and
1. 99323.
P~
2
show values of U = 0.097 and
P~
=
222
The critical points for U and P are Uoo ,0.05 = 0.187 and
4
2
p ,00 , •05 = 9.49.
4
Hence, the null hypothesis was not rejected in both
cases.
Finally, a simulation was performed to know how many times the
value
was rejected under the hypothesis
H:
o
against H :
a
~
It was found that in 2 of 100 cases the hypothesis was rej ected.
means less than 5% of the time.
e·
A'S=O
This
We conclude from this that the distri-
but ions belong to the class of gamma distributions.
5.4.2
Simulation of the Distribution of
t = A'S/lvir(A'S) Under the Null
Hypothesis A'S =
° By Using the
MINQUE Iterated Estimators in the
-
Computation of A'S
The simulation that is proposed is given as follows:
Step 1.
Generate N(O,l).
Step 2.
Select a balanced design with two blocks, four whole plots,
two split plots and a linear parametric function for testing split-plot
effects.
73
Step 3.
Assume prior values.
Step 4.
Generate the MINQUE estimates of the variance components 01
and
°22 ,
2
100 times, by iterating the MINQUE procedure several times.
-
Step 5.
Compute t = A'B/lvar(A'B) by using the MINQUE iterated esti-
mators.
Use the approximated degrees of freedom given by formula
(5.1.3) to generate the distribution function of the t-distribution.
Step 6.
Use three criteria to test the hypothesis
against
H:
o
u e: T
H :
a
u i
T,
where
T
is the set of t-distribution.
The criteria were based on Watsen's
u2 ,
Neyman's P4 and a
x2
test
using 10 intervals.
-e
Results of the Simulation
2
2
2
2
The computed values for U , P and X shows values of U =
4
2
2
2
0.0957, P = 0.0178 and X = 10.6, with critical points u00,.05 = 0.187,
4
2
2
p
= 9.49 and X9,
.05 = 16.9. Hence, the null hypothesis was
4,00, .05
accepted.
Also, the number of times that the null hypothesis A'B = 0 was
rejected at the 5% level of significance was 2 of 100.
This means
less than 5%.
A second simulation was performed as before but instead of step 2,
now select an unbalanced design with missing values, say Y22l' Y222'
Results of the Simulation
The computed values for
p2
4
=
2
2.333 and X
2
9.49 and X9,. 05
= 3.8
16.9.
u2 , P~
and
x2
show values of
with critical points
u2 = 0.09224,
U: , .05 = 0.187,
p2
=
4,00,.05
Hence, the null hypothesis was accepted.
74
The number of times that the null hypothesis A'S = 0 was rejected
at the 5% level of significance was 6 of 100.
This means over 5%.
As a conclusion of the above simulations, it is legitimate to
assume that t
=
A'S/I~r(A'B) be distributed approximately as
t-distributions with degrees of freedom given by formula (5.1.3).
e-
75
6.
THE ANALYSIS OF COVARIANCE
The methodology developed in Chapters 4 and 5 will now be applied
to the analysis of covariance for the unbalanced split-plot experiment.
For the complete balanced
experime~t
the model can be written as
i
= 1,
... , a
j = 1, ""
n
k=l, ... ,b
Note that this model allows for different slopes at the whole-plot and
·e
the split-plot level.
In matrix notation this model becomes
(6.1)
Also, vector 1, columns of X and Zl lie in the space spanned by the
a
columns of U .
1
Without further mention, it will be assumed in the remainder of
this chapter that the model (6.1) has been appropriately modified to
allow for unbalanced experiments.
or gueses for oi and
0;,
Also, let ai and a; be prior values
and use this to write V
= vlai +
v a;.
2
-1/2
Transforming the model (6.1) by V
yields
(6.2)
76
Applying generalized least squares and reordering leads to the
normal equations:
l'V-ll
X'V-ll
a
Xbv-lJ,
l'V-~
l'V-\,
a
X'v-~
a
a
x~v-\,
l'V-lX
ab
X'V-~
ab
l'V-lZ
1
X'v-lz
a
1
!.'V-lZ
2
X'v-lz
a
2
l
Xbv-~a Xbv-l~ Xbv-~ab Xbv-lzl XbV- Z2
X' v-ll X' v-lx X' v-l~ X' v-~
X' V-lz X' v-lz
ab
- ab
a ab
ab
ab ab
1 ab
2
Z'V-ll z'v-~ ziv - \ ' Z' v-lx
z'v-lz
z'v-lz
1
2
a
1
ab 1
1
1
2
z'v-lz
Z'V-ll z'v-lx
z'v-lz
Z' v-lx
v
2
a zi - \ '
2
ab 2
1
2
2
2
J.l
l'V-~
a
X'v-~
a
b
Xbv-~
X' V-ly
ab
Z'V-ly
1
Z'V-ly
2
(6.3)
Also, let v-
l/2
z
= V- l/2 [Zl:z2]
I
Define
e-
of order n x n.
and
T1 = [_
Z'V-~(X'V-lX)-
of order n
x
2.
:]
Transform (6.3) by T
l
ab
(6.4)
77
This leads to a solution for Sand S, say
1
Z
(6.5)
In order to test hypotheses about the ab interaction, define
Xl
= [!:Xa:~]' Px = Xl(XiV-~l)-XiV-l
1
I
and
TZ =
-
XabV-~l(XiV-~l)-
o
Transform (6.4) by T
Z
·e
o
o
I
o
0
I
to get
X'V-ly
X'v-lz
1
1
X' (V-l_V-~ )y
ab
Xl
a
X' (V-l_V-lp )z
ab
Xl
b
Z' (V-l_V-~X) Z
ab
=
Z' (V-l_V-lp ) y
X
(6.6)
This leads to a solution for ab, say
s
- X' (V-l_V-lp )Z (All).
ab
Xl
B
Z
(6.7)
Hypotheses about linear functions among the interaction parameters,
i.e., \'ab can be tested by replacing V by V, and then using the approximated t-test developed in section 5.1 in the form
.
78
A
:\'ab - :\'ab
t =
1I'v;r(ab)
and using formula (5.1.3) to approximate the degrees of freedom associated with the t-test.
If interest is in split-plot treatment effects, then the analysis
must go one step further.
Let X
=
[l:X) and P
xa
Observe that the vector 1 lies in the space generated by the columns of
X •
a
Therefore, there exists a vector
~
such that 1 =
Xa~.
This implies
that
e·
Define
and
I
o
o
o
o
I
o
o
o
- X'v-~ (X'V-lX )b
a a
a
I
o
o
o
o
I
79
Transform (6.6) by T to get
3
l'V-~
ab
X'V-~
a
ab
X-' (V-1_V-~
)X
~D
X
ab
a
X' (V-1_V-~ )X
ab
X ab
a
l'V-~
~
a
X'V- 1y
a
b
ab
=
Xb(V-1_V-~x )Y •
(6.8)
a
·e
81
X' (V- 1 _V- 1p )Y
ab
Xl
82
Z' (V-1_V-~ )y
X
A solution for b is given by
(6.9)
A linear combination of the parameters b can be tested by using
the approximated t-test method of section 5.1 by computing
>"'b - >"'b
t
= -----
and using the formula (5.1.3) to approximate the degrees of freedom
associated with the t-test.
80
Finally, to derive a test on whole plots treatments, let PI •
1(1 'V-II) -ltv-I.
Define
I
T
4
= -
X V-ll(l'V-ll)a
o
o o
I
o
o
I
Transform (6.8) by T to get
4
l'v·lz
x' (V-l_V-~ )Z
a
l'v-ly
u
1
x· (V-l_y-~ )Z
a
X
a
x· (V-l_V-~ )Z
ab
Xl
Z· (V-l_y-~ )Z
X
!.
!.
!!.
S
1
S
2
X'(V-l_V-~ )Y
a
1
.
X· (V-l_V-~
b
)Y
a
X. (V-l_V-~
ab
X
)Y
Xl
Z. (y.l-v-lp )Y
X
A solution for a is given by
(6.11)
Now it is possible to test a parametric linear combination of the
parameters
~
by using the approximated t-test method of section 5.1 of
the form
A'a - A'a
t
=
•
Formula (5.1.3) is used to approximate the degrees of freedom associated with the t-test.
e·
81
"
7.
SIMULATION STUDIES
Previous chapters have presented asymptotic and large sample
properties of the estimators obtained via generalized least squares
based on an estimated variance-covariance matrix.
In this chapter some
insight into the small sample properties of these estimates will be
obtained from a series of small simulations experiments.
These simulations all follow the same general pattern.
A set of
observations is generated according to a specified split-plot model.
The first step in the estimation procedure is to compute values for
A2
A2
01 and 02. In the second step generalized-least-squares estimates of
the fixed effects in the model are obtained by using the estimated
variance-covariance matrix obtained from the first step.
A listing of
the Fortran program used for these simulations is in Appendix A.
A
flow chart is given in Figure 7.1.
7.1
Standard Conditions
The first simulation was performed under standard or ideal conditions as a check on the computer program.
Observations for a sp1it-
plot experiment with two blocks, four whole plots per block and two
split plots per whole plot were generated.
drawn from N(O,l).
Both error components were
2
2
Estimates for both 01 and 02 were obtained using
the MINQUE equations with prior values a
1
= a 2 = 1.
Note that in this
case the estimates are identical with those obtained from the usual
analysis of variance.
Also in this case the weighted least squares
estimates of the fixed effects, i.e., the treatment effects, are
82
INPUT
I 1
I 2
parameters of
the
program
construction of
the. split-plot
model
~
~
tions
I
iteradone?
yes
Display
histogram and
graph
......
~
No
-------
~
~
I 8
I 4
Compute
A2
A2
oland O2
Compute
Watson's and
Wayman's test
,
It
I 9
I 5
2
Display p
and u 2
statistical
values
Compute
-
S
/
/
1/
I
10-
Figure 7.1.
7
6
Compute
histogram and
graph
Flow chart of the simulation of the behavior of the
estimate vector of parameter S
.,
83
identical with the usual treatment means.
This simulation was repeated
100 times.
The resulting "observations, Le., estimate of the general mean,
are given in Figure 7.2.
Clearly, this distribution is close to normal.
2
The Watson's U statistic for testing normality is 0.03563, well beyond
2
= 0.187.
the critical value U.
05
. i c was 1.32074,
Neyman's P42 statl.st
again well below the critical value p
7.2
2
= 9.49.
4,.05
Non-standard Conditions
The second simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block and two split plots per whole plot with two missing
observations, say Ylll and Y242 , were generated.
were drawn from N(O,l).
oi
Estimates for both
using the MINQUE equations with prior values a
Both error components
0; were obtained
and
a
=
l
2
=
1.
This simula-
tion was repeated 100 times.
The resulting "observations," Le., estimates of the general mean,
are given in Figure 7.3.
2
The Watson's U statistic for testing normality
2
is 0.02422, well below the critical value U.
2
0.187.
=
05
Neyman's P4
statistic was 0.74839, again well below the critical value p
2
=
4,0.05
9.49.
The third simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block and two split plots per whole plot with two missing
observations, say Y
and Y
, were generated.
242
lll
were drawn for U(-l,l).
Estimates for both
oi
using the MINQUE equations with prior values a
•
tion was repeated 100 times.
Both error components
0; were obtained
and
l
=
a
2
=
1.
This simula-
84
F
R 39
E
Q
U
E
N
C
Y
-0.5
0.0
8.5
1.8
1.5
B VALUES
FIGURE 7.2 DISTRIBUTION OF THE ESTI11ATED GENERAL MEAN
FOR THE SPLIT PLOT MODEL
UNDER NORMAL DISTRIBUTION, ~HEN THE VARIANCE COMPONENTS
~ERE REPLACED BY THE MlllQUE ESTDt4TORS
85
F
R
E
Q
U
E
N 28
C
y
18
9-+--r---,--~-r.L.U.,.u...LJ~~U-...r---r---,----,
-1.5
-1.9
-9.5
9.9
9.5
1.9
1.5
BVALUES
FIGl.RE 7.3 DISTRlBUTION Cf Tt£ ESTIMATED GENERAl. ~
FeR Tt£ SPlIT PLOT MOCtl., \lITH HISSING OBSERVATIONS
lNOER NORMAL DISTRIBUlION,\ItfN Tlf VARIANCE cafONOOS
1m lIPLAC8) BY 1lf MINQl[ ESTIMATORS
•
86
The resulting "observations," Le., estimates of the general mean,
are given in Figure 7.4.
2
The Watson's u statistic for testing nor-
2
mality is 0.04703, well below the critical value U •
= 0.187.
O 05
P~
statistic was 1.15893, again well below the critical value
Neyman's
P~,0.05 =
9.49.
The fourth simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block and two split plots per whole plot, and two missing
observations, Y
and Y
, were generated. Both error components were
242
lll
2
2
drawn from N(O,l). Estimates for both 01 and 02 were obtained using the
restricted MINQUE estimators defined in section 4.2.2 with prior values
a
l
= a 2 = 1.
This simulation was repeated 100 times.
The resulting "observations," estimates of the general mean, are
given in Figure 7.5.
2
The Watson's u statistic for testing normality
U~.05
2
Neyman's P
4
2
=
statistic was 0.67622, again well below the critical value p
4,0.05
is 0.02350, well beyond the critical value
=
0.187.
9.49.
The fifth simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block, two split plots per whole plot and two missing observations, Y
and Y
, were generated. Both error components were drawn
242
lll
2
2
from U(-l,l). Estimates for both 01 and 02 were obtained using the
restricted MINQUE estimators defined in section 4.2.2 with prior values
a
l
= a 2 = 1.
This simulation was repeated 100 times.
The resulting "observations," estimates of the general mean, are
given in Figure 7.6.
4It
2
The Watson's u statistic for testing normality
2
is 0.04925, well beyond the critical value U .
O 05
= 0.187.
Neyman's p2
4
87
F
R
E
Q
·e
U
E
N
C
Y
9-1---r---,---,.---r---'+1-'.f-~~---r----r---r---,
-1.5
-1.9
-0.5
0.9
0.5
1.8
t.5
BVALUES
FIGURE 7.4 DISTRIBUTION Of TI£ ESTIMATED GENERAL MEAN
FOR THE SPLIT PlOT MODa, YTIH MISSING OBSERVATIONS
UNDER UNIfORM DISTRIBUTION, ytIN TJ£ VARIANCE COMPONENTS
YERE REPLACED BY TI£ MINQUE ESTIMATORS
.
88
F
R
E
o
U
E
N
C
Y
81-+-~~~~t..I.,I-J~.&..+I-'-4-J~~~......,
-1.5
-1.8
-0.5
8.8
8.5
I.a
1.5
BVALUES
FIGURE 7.5 DISTRIBUTION Cf TIE ESTIMATED SENERAL MEAN
FOR THE SPtlT PLOT l1OOa,lIITIl HISSING OBSERVATIONS
UNDER NORMAL DISTRIBUTION, m· TI£ VARIANCE COMPONENTS
WERE REPLACED BY THE RESTRlCiED /1INOUE ESTIMATORS
89
F
R
E
Q
·e
U
E
N
C
Y
9-+--r--r--r-~...t,..U...I..f-J.~L..L.,....--------
-1.5
-1.9
-0.5
9.9
9.5
1.0
1.5
BVALUES
FIGURE 7.6 DISTRIBUTION OF TI£ ESIDlATED GENERAL MEAN
FOR TIf SPlIT PLOT 11lDEl. WITH I1ISSING OBSERVATIONS
UNDER UNIFORM DISTRIBUTION WHEN TIf VARIANCE COMPONENTS
WERE REPLACED BY TIE RESTRICTED l1INauE ESTIMATORS
1
I
90
statistic was 1.14965.
p2
4,0.05
This was again well below the critical value
= 9.49.
The sixth simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block, two split plots per whole plot, and two missing
observations, say Y
and Y
, were generated. Both error components
242
lll
2
2
were drawn from N(O,l). Estimates for both 01 and 02 were obtained
using the positive semi-definite MINQUE estimators defined in section
4.2.3 with prior values a
l
=
a
2
= 1.
This simulation was repeated 100
times.
The resulting "observations," estimates of the general mean, are
given in Figure 7.7.
2
The Watson's u statistic for testing normality
2
O 05
is 0.03291, well beyond the critical value U •
statistic was 0.7935.
= 0.187.
Neyman's p2
4
This is again well below the critical value
p2
= 9.49.
4,0.05
The seventh simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block, two split plots per whole plot, and two missing
observations, say Y
and Y
, were generated. Both error components
242
lll
2
2
were drawn from U(-l,l). Estimates for both 01 and 02 were obtained
using the positive semi-definite MINQUE estimators defined in section
4.2.3 with prior values a
l
= a 2 = 1. This simulation was repeated 100
times.
The resulting "observations," estimates of the general mean, are
2
f or test1ng
'
"
. F'19ure 78Th
g i ven 1n
•.
e Watson ,u
s
stat1st1c
norma I'1ty
2
is 0.04645, well beyond the critical value U .
= 0.187.
O 05
statistic was 1.26313.
p2
= 9.49.
4,0.05
NeYman's p2
This is again well below the critical value
4
e·
91
F
R
E
Q
U
E
N
C
y
0-+---.----.---.....---,..&.~..r....i.f_"""_4J~U_..,.___.___,,.___,
-1.5
-1.8
-a.5
0.8
0.5
\.8
1.5
BVALUES
FIGURE 7.7 DISTRIBUTION Of THE ESTIMATED 6ENERAL MEAN
FOR TI£ SPLIT PLOT MODEL, \lITH MISSlNS OBSERVATIONS
UNDER NORMAL DISTRIBUTION, YHEN rtf VARIANCE COI1PONENTS
WERE REPLACED BY Ttf RESTRICTED t1INCUE ESTIMATORS
92
58
F
R 38
E
Q
U
E
N
C
Y
8-+-...,.-...,.--r--~¥"'-"'f-oUo~"-r--"'T"""""""""'--r--"1
-1.5
-1.8
-0.5
8.8
8.5
1.8
1.5
BVALUES
FIGURE 7. 8 DISTRIBUTION Of TIf ESTIl1ATED GENERAL /£AN
FOR TIf SPLIT PLOT MODEL, \lITH MISSING OBSERVATIONS
UNDER UNIFORM DISTRIBUTION, ~t£N Tt£ VARIANCE COK>ONENTS
\lERE REPLACED BY THE P.S.D I1INQUE ESTIMATORS
93
Finally, the last simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block and two split plots per whole block were generated.
Both error components were drawn from the slash distribution.
Estimates
2
2
for both 01 and 02 were obtained using the MINQUE equations with prior
values a
l
= a 2 = 1.
This simulation was repeated 100 times.
The resulting "observations," estimates of the general mean, are
given in Figure 7.9.
This distribution is symmetrical with tails
heavier than the normal.
Watson's
u2
statistic for testing normality
was 3.9252 which is greater than the critical value
Neyman's
p2
4,0.05
·e
u2•05,00 = 0.187 •
P~ statistic was 65.9737, again greater than the critical value
= 9.49.
,
94
F
R 38
E
Q
U
e·
E
N 28
C
Y
10
0-+---.L.,w.......r;J+I.~f-LJ-I'..J+J-u.L,{J-J.J~-....-.....,
-Ie
-5
o
5
Ie
BVALUES
FIGURE 7.9 DISTRIBUTION Of THE ESTIMATED GENERAt. MEAN
FOR TIl SPLIT PLOT l1JDEL, UNDER SLASH DISTRIBUTION
\/HEN TIl VARL\NCE COMPONENTS IlERE REPLACED BY
TI£ HINCUE ESTIMATORS
95
8.
SUMMARY AND CONCLUSIONS
The generalized least squares analysis for the split-plot model
with missing observations and estimated variance-covariance matrix was
discussed in this thesis.
Many methods have been suggested to estimate the variance-covariance
matrix, a few of them are:
Seely's method.
Henderson's method III, Rao's method and
The behavior of the generalized least squares estima-
tors of the split-plot model when using an estimated variance-covariance
matrix depends on the distribution of the random errors and the method
used to estimate the variance-covariance matrix.
·e
In this thesis it was found that the estimated variance-covariance
matrix V, converges in probability to the variance-covariance matrix V,
i.e., V
+
p
V, when using Seely's method under the invariance condition
with respect to the fixed effects.
Under the invariance condition
Seely's method is equivalent to the MINQUE method.
Also, three procedures were proposed to observe the behavior of
the generalized least squares estimator of the parameters S, when the
variance-covariance matrix was replaced by its estimate.
Under simula-
tion it was found that each one behaves normally under the assumption
that the random errors are both normal (0,1) and uniform (-1,1).
In many practical problems, the researcher wants to know how many
degrees of freedom he needs to consider in a t-test for testing a linear
combination of the parameters when he uses an estimate of the variancecovariance matrix.
In this thesis the answer is given in terms of a
96
formula (formula 5.1.3), when the variance-covariance matrix is estimated by the MINQUE method.
Also, it is suggested that either prior
values or iterated MINQ estimators be used to evaluate the formula.
Simulations indicate that this formula provides a good approximation
for the degrees of freedom associated with the t-test.
Finally, Appendix A shows explicit formulas for the MINQUE estimators of the variance components for a complete randomized design in
both the balanced and the unbalanced case.
But the latter was restricted
to the condition that if a split plot in the ith whole plot is missing,
the same split plot must be missing for all the repetitions of the ith
whole plots.
e·
97
9.
RECOMMENDATIONS
It is recommended that in the future an efficient simulation study
be performed by using some variance reduction techniques for the different methods used in the simulations of this thesis.
There are several methods used to reduce the variability of an
estimator; one of them is the control variates.
The goal of this
method is to obtain an efficient estimator t of the parameters A'S.
This method is defined as follows:
"2
"2
Let 0"1 and 0"2 denote the MINQUE estimators of the variance compon2
2
ents 0" 1 and 0"2·
Step 1.
Define t
Let a
1
=
!n
2
1
and a
2
2
be prior values of the variance components.
,n A'S where 81.' =
loi=l
i
are known matrices.
Step 2.
Define t
1
n"
"
, lA'S where S1.' =
n 'lo1.=
2 = -
Since 8 and Si come from the same generalized random numbers, they
i
are likely to be highly positively correlated, and therefore t
should be positively correlated.
Step 3.
Define t = A'S + t
l
- t
2
as the new estimator of A'S.
l
and t
2
98
Step 4.
Define var(t) = var(t ) + var(t ) - 2 cov(t ,t ) and observe
l
2
l 2
that var(t) < var(t ) if var(t ) < 2 cov(t ,t ).
l
2
l 2
Since this is true
because of the higher positive correlated coefficient, then the estimator t is an efficient estimator of A'B with small variance.
Another way to look at the same problem is when there is an especial
interest in observing the number of times that the value
is rejected under the null hypothesis A'B =
o.
e·
n _
n
A
Define the estimated power t = 0.05 +
L t - L t, and observe that
'
l.=l i ' l
l.=l .
n_
n
n_
n
var(estimated power t) = var( E t,) + var( E t ) - 2 cov( E t" E t )
i=l l.
i=l l
i=l l. i=l i
A
A
n _
and again if the var(estimated power t) < var( L t,), then it is an
, 1 l.
l.=
efficient estimator.
Another method to reduce the variability of an estimator is the
antithetic variates.
Step 1.
1 -
~l'
Define
1 -
~2'
This method is defined as follows:
~l' ~2'
... , 1 -
... ,
~s
~s
as uniform random numbers and define
as also uniform random numbers.
99
Step 2.
With
~1'
••• ,
~s
construct normal desviates such that U. =
U1£1 + U2£2 where £1 and £2 are N(O,l) and Uland u 2 are known matrices.
Step 3.
With 1 -
~1'
~s
••. ,1 -
construct normal desviates, such that
U* = U1£*1 + U2£*2 where £*1 and £*2 are N(O,l) and U1 and u 2 are known
matrices.
n
1
Let t = - E A'S. where Si = (X'V-~)-X'V-~i where Y = U •
i
1
i
n i=l
1
n
1
Let t = - E A'S*. where S*i = (X'V:~)-X'V:~*i where
2
n i=l
1
-
Step 4.
Step 5.
Step 6.
t
1
Let t = i(t
and t
Step 7.
t
2
2
1
+ t 2 ); this will produce a new estimator such that
are negatively correlated.
Var(t) =
41 var(t
) +
41 var(t 2 )
1
are negatively correlated var(t)
~
+ 2 cov(t 1 ,t 2 ) and since t 1 and
var(t ), then t is an efficient
1
estimator.
Another way to look at this is when it is desired to count the
number of times that the value
A'S
t = ----"---
is rejected under the null hypothesis A'S = O.
Define t
li
A'S
1
if
o
otherwise
=
>
t
f,0.05
100
1
t
Zi
A'B *
if
>
t
f,0.05
var(A'B*)
=
0
otherwise
where f is computed with formula (5.1.3).
1
2
n _-n -
L t . + L t ) and var(estimated power t)
i=l 1J.
i=l Zi
1
n_
1
n_
n_
n_
= 4 var( L t ) + 4 var( L t ) + Z cov( L t , L t Zi ) and again
Zi
1i
1i
i=l
i=l
i=l
i=l
n _
Define estimated power t =
var(estimated power t) < var(.L t ).
1i
J.=1
e·
101
BIBLIOGRAPHY
Aitken, A. C. (1934). On least squares and linear combinations of
observations. Proceedings of the Royal Society of Edinburgh A.
Vol. 55, pp. 42-47.
Allan, F. E. and J. Wishart. (1930. A method of estimating the yield
of a missing plot in field experimental work. Jour. Agr. Sci. 20,
pt. 3, pp. 399-406.
Anderson, R. L. (1946).
No.3, pp. 41-47.
Missing-plot techniques.
Biometrics, Vol. 2,
Anderson, T. W. (1973). Asymptotically efficient estimation of
covariance matrices with linear structure. The Annals of Statistics,
Vol. 1, No.1, pp. 135-141.
-e
Bement, T. R. and J. S. Williams. (1969). Variance of weighted regression estimators when sampling errors are independent and heteroscedastic. Journal of the Amer. Stat. Assoc. 64, pp. 1367-82.
Box, G. E. P. (1954). Some theorems on quadratic forms applied in the
study of analysis of variance problems. I. Effect of inequality
of variance in the one-way classification. Ann. Math. Statist. 25,
pp. 290-302.
Dickey, D. A. (1978). A program for generating and analyzing large
Monte-Carlo studies. North Carolina State University, Department
of Statistics. Unpublished note in programming.
Fuller, W. A. and G. E. Battese. (1973). Transformations for estimation
of linear models with nested error structure. JASA, Vol. 68,
pp. 626-632.
Fuller, W. A. and J. N. K. Rao. (1978). Estimation for a linear
regression model with unknown diagonal covariance matrix. The Ann.
of Statist., Vol. 6, No.5, pp. 1149-1158.
Harville, D. A. (1977). Maximum likelihood approaches to variance
components estimation and to related problems. JASA, 72, pp. 320340.
Henderson, C. R.
components.
(1953). Estimation of variance and covariance
Biometrics 9, pp. 226-252.
Hocking, R. R. and F. M. Speed. (1975). A full rank analysis of some
linear models problems. JASA, 70, pp. 706-712.
102
Hocking, R. R., F. M. Speed, and A. T. Coleman. (1980). Common
hypotheses to be tested with unbalanced data. Commun. Statist.
A9(2):117-129.
Imhoff, J. P. (1961). Computing the distribution of quadratic forms
in normal variables. Biometrika 48.3 and 4, pp. 419-426.
James, G. S. (1951). The comparation of several groups of observations
when the ratios of the population variances are unknown. Biometrika 38, pp. 324-329.
(1954). Test of linear 'hypothesis in univariab1e and mu1tivariable analysis when the ratios of the population variances are
unknown. Biometrika 41, pp. 19-43.
Johansen, S. (1980). The Welch-James approximation to the distribution
of the residual sum of squares in 2-way weighted linear regression.
Biometrika 67, pp. 85-92.
Lamote, L. R. (1976). Invariant quadratic estimations in the random,
one-way Anova model. Biometrics 32, pp. 793-804.
Magness, T. A. and J. B. McGuire. (1962). Comparation of least squares
and minimum variance estimates of regression parameters. The Annals
of Mathematical Statistics, 33, pp. 462-70.
McElroy, F. W. (1967). A necessary and sufficient condition that
ordinary least squares estimators be best linear unbiased. JASA 62,
pp. 1302-1304.
Quesenberry, C. P. (n.d.) Some transformation methods in goodness of
fit. North Carolina State University, Department of Statistics.
Unpublishe~.notes on goodness of fit test.
Rao, C. R. (1970). Estimation of heteroscedastic variances in linear
models. JASA 65, pp. 161-171.
(1972). Estimation of variance and covariance components in
linear models. JASA 67, pp. 112-115.
(1979). MINQE theory and its relations to ML and MML
estimation of variance components. Samkhya: The Indian Journal
of Statistics, Vol. 41, Series B, Pts. 3 and 4, pp. 138-153.
Rao, C. R. and J. Kleffe. (1979). Variance and covariance components
estimation and applications. Technical report, No. 181. The Ohio
State University, Department of Statistics.
Rao, J. N. K. and K. Subrahmaniam. (1971). Combining independent
estimators and estimation in linear regression with unequal
variances. Biometrics, Vol. 27, No.4, pp. 971-991.
e-
103
Satterthwaite~
F. E. (1946). An approximate distribution of estimates
of variance components. Biometrics Builetin, Vol. 2, pp. 110-114.
Scheffe, H. (1959).
Inc., New York.
The Analysis of Variance.
John Wiley and Sons,
Searle, S. R. (1968). Another look at Henderson's method of estimating
variance components. Biometrics 24, pp. 749-788.
(1971).
Linear Models.
John Wiley and Sons, Inc., New York.
Seely, J. (1970). Linear spaces and unbiased estimation.
Statist., Vol. 14, No.5, pp. 1725-34.
The Ann. Math.
(1970). Linear spaces and unbiased estimation, application
to mixed linear models. The Ann. Math. Statist., Vol. 41, No.5,
pp. 1935-48.
Seely, J. and G. Zyskind.
unbiased estimation.
pp. 691-703.
(1971). Linear spaces and minimum variance
The Ann. Math. Statist., Vol. 42, No.2,
(1971). Quadratic subspaces and completeness.
Math. Statist., Vol. 42, No.2, pp. 710-721.
·e
The Ann.
Speed, F. M., R. R. Hocking and O. P. Hackney. (1978). Methods of
analysis of linear models with unbalanced data. JASA, 73, pp. 105112.
Welch, B. L. (1947). The generalization of student's problem when
several different populations variances are involved. Biometrika
34, pp. 28-35.
(1951). On the comparation of several means values:
alternative approach. Biometrika 38, pp. 330-336.
an
Williams, J. S. (1975). Lower bounds on convergence rates of weighted
least squares to best linear unbiased estimators: a survey of
statistical design and linear models. J. N. Srivastava, ed.,
North-Holland Publishing Company, Amsterdam, pp. 555-570.
(1967). The variance of weighted regression estimators.
JASA 62, pp. 1290-1301.
Yates, F. (1933). The analysis of replicated experiments when the
field results are incomplete. Exp. Jour. Exp. Agr. 1:129-142.
Zyskind, George. (1969). Parametric augmentations and error structure
under which certain sample least squares and analysis of variance
procedures are also best. JASA, 64, pp. 1353-1368.
105
APPENDIX A
MINQUE ESTIMATORS FOR THE SPLIT-PLOT
ERROR VARIANCES
Consider a split-plot design arranged in randomized complete blocks.
Let
=
i
-e
=
1, ••• , n ; j
h
1, ••• , n
hi
•
In matrix notation this becomes Y
= XS +
Ulo + U e where Y is a n x 1
2
vector of observations, X is a n x (1 + a + b + 2b) matrix,
S is a
(1 + a + b + ab) x 1 vector of parameters, U is an n x ra matrix, ~ is
l
an:ra x 1 vector of random errors, U is an n x n identity matrix and e
2
is an n x 1 vector of random errors.
Let a
2
l
and a
2
2
be prior values or guesses for the variance components
2
2
01 and 02 and use this to write V
I
a
hi
o
= BDMATX(Jhia 2l +
I
2
a ) where J
and
hi 2
hi
+ ah,J
.) where
x n hi , and V-I = BDMATX (Ih,a
1.0
1. h 1.
2 2
2
2-1
-2
a 2 and ~i = - al(a2(nhial + a 2 )) •
are of order
~i
Note that X =
CMATX(~i)
where
~i
is an n hi x (1 + a + b + ab)
block matrix that contains the rows of the ith whole plot of the h'th
repetition.
Rewrite
~
i
=
RMATX(X_. ;
-luQ,
Q, =
l, ... , t )
106
where t is the number of different terms which are included in the
fixed effects part of the split-plot model.
=
In the present example t
4.
The first submatrix
~il =
~i2
.!rti is an
is an
~i
~i
x 1 vector of ones due to the mean in the model.
x a matrix of the form
[~:
: 0: 1: 0: ••• : 0] where
the column 1 is in the ith column.
~i3
is an n
x b matrix which has exactly one non-zero element equal
hi
to one in each row.
This matrix can be formed from an identity matrix
of order b x b by deleting the jth row of that matrix if the split plot
of the ith whole plot is missing.
CMATX(~) where~,
~i4
= RMATX(Dki ;
= 1,
k
= 1,2,
k
••• , n
D1.
k
if i
ka
hi
~i3
hi
x ab row matrix where the
e-
x a and of the form
=k
=
o
Alternatively
otherwise.
~i4 =
[0: •••
:0:~i3:0:
••• :0],
These definitions are now used to compute
~i3
is the ith position.
X'V-~.
X'V-~ = RMATX(~'.)
• BDMATX(a L . + a . J .) • CMATX(~ .)
-n1.
0 -h1.
h1. h 1.
-n1.
= RMATX
=
__ I 0
Let
=
is a unitary vector.
••• , a) is an n
block matrices are of order n
. t~i3
It is also possible to write
(~'.)
-n1.
• CMATX(a oh1.
~. +
ah.Jh'~
1.
1.-n1..)
[[a X' .X · + [[ah'~' ·Jh·Xh ··
hi 0 h 1. h 1.
hi 1.-n1. 1. 1.
if the jth split plot treatment is missing in the ith
whole plot of the hth repetition
I 1 otherwise.
~
l
n = ~n
~j = Ho j "
where n = ~~n
h
~~ hi' i
~ hi' u++
hi].
Observe
j
0+'
...
=
j
L: 0h' •
h
].
108
o
o
o
o
o
o
o
o
0
0
0
0
1
1
b
1
dhi dU '
dhi dU '
.. -,
~id:i'
b
b
0
dhid hi , 0
, ... , 0
,
... ,
0
o,
e-
o
,
hi'
o,
o
109
Thus,
... ,
1
1
=
[~ahinhiohi'
n
2
],
·.. ,
~~~iXhi1Jhi~i3 = [~~ahi~iohi'
~~~iXhi1Jhi~i4
a
h ha ha
·.. ,
... ,
1,2,
~ahonh
h
~
1
0
~
0h ~
0
,
1
h~ah a n ha 0ha '
-e
a) ,
b
·.. , h~ahonhooho
~
~
~
·.. ,
~
n
ob
h~a ha ha
·.. ,
~ahanhiohi
b
1
~ah n 0h '
h a ha a
... ,
~ah
h
b
a n h a 0h a
·.. ,
=
· .. ,
~a
h
h~
~
b
and
hi
~
b
b
0h o0h o
~
1 1
1 1
oOhooho' ... , a OOhOoh"
~
~
•
1·
~ahoohooho'
h
~~aho
~
~
h
h~
~
b
~
b
... ,~a
oOhooh o
h h~ ~ ~
~
~a
h ha
~a
1 1
1 b
0h 0h , ... ,~a 0h 0h
a a . hha a a
b
1
b
b
h ha 0h a 0ha , ... ,~ah
h a 0ha 0ha
110
1 lIb
~ahiohiohi,···,~ahiohiohi
b 1·
b
b
~ahiohiohi'···'~~iohioh~
.
1 lIb
Ea 0h 0h , ••• ,Ea 0h 0h
hna a a . hna a a
Let each whole plot treatment have the same number of repetitions, that
is n
a for all i.
=
i
Now consider the MINQUE estimators for the variance components in
these two cases:
first, when there is no missing split-plot treatment
and, second, when there are missing split-plot treatments.
If there are no missing observations, then
=
ob serve t hat
a
1
a
=
hi
rab
rbI'
ral'
rl'
rbl
rbI
rJ
rIxl'
ral
rJ
raI
rlxI
rl
rIxl
rlxI
rI
=-
~i
Ct. 2( Ct. 2(b Ct. 2
1
2
l
~i
N
Thus,
EEX~ .Jh.~.
hi
1
1
1
2
for all h,i and
+ ~ 2 ))-1 f or a 11 h and i , so 1et
2
for all hand i.
2
rb 1'
rab1 '
rbI'
2
rb 1
2
rb I
rbJ
rbIx1 '
rab1
rbJ
raj
rJ
rb1
rblxl
rJ
r-BDMATX(J)
rab
=b
=
e-
111
and hence
rab(a +a b)
o 1
X'V- 1X =
ra(a +a b).!.'
o 1
r(a +a b).!.'
o 1
rb(a +a b).!. rb(a +a b)I
o 1
o 1
r(a +a b)J
o 1
r(a o+a b)I@!.'
1
ra(a o+a b).!. r(a +a b)J
1
o 1
ra(a I+a J)
o
1
r(aJ:.~I+a1J)
r(a~I+a1J)
r o BDMATX(a I+a J)
o
1
r(a o+a 1b).!.
rb(a o+alb).!.'
r(a o+a 1b)J@!
Notice that the columns in the design matrix X that correspond to
the mean, whole plots and split plots are linear combinations of the
columns of the interaction between whole and split plots.
Given the
inverse of the matrix r o BDMATX(a I+a J), the generalized inverse of the
1
o
matrix X'V
-1
X can be computed.
0
0
-e
Therefore,
1
C = (C'V- X)- =
r- 1 BDMATX(b I +b J )
on. 1 n.
0
1.
where b
0
1.
-1
-1
-1
•
= a 0 and b 1 = - a 1 a o (ba 1+a)
0
Recall that a
o
The quadratic form Y'QVV1QVY can now be developed.
b b
Recall that V = V V , then
1
1 1
(A.1)
112
(A.2)
Also observe that
b -1
-1 -L_
b -L_
-1V1V X(X'V X) X'V -y = V1V -X(X'V X) RMATX(Xhi)·CMATX«aolhi+a1Jhi)Yhi)
b -1
= ~~V1V
'
'CMATX(~'i,)(CXhi(aolhi+a1Jhi)Yhi)
b -1
=h
EEV1V .CMATX(~ 'i,CXh.(a Iho+a.Jh·)Yho)
ih
~
0
~
~
~
~
•
(A.3)
0
0
0
0
~i1
0
0
0
0
~i2
0
0
0
0
~i3
0
0
0
r
-1
•BDMATX(b l +b J )
o b 1 b
~i4
Then A.3 becomes
b -1
= ~V1V
b -1
=
h V1V
·CMATX(r
·CMATX(r
-1
-1
(bo~ + b 1J b )(a o l h 'i' + aiJh'i')Yh'i')
~'i'Yh'i')
e-
113
(A.4)
From (A.2) and (A.4)
Similarly
Y'QvQvY
= ~~~(aOYhij
= HLa 2 (Yhi ,
hij
·e
0
- r
-1
J
y+,,)
2
J.J
+ LLa 1 (2a + alb)
hi
0
The tr(QVV Q V ) now is developed.
1 V1
From (3.4.2.21)
2
2
+ LL LL (a t~
+2a (2a a 1+a1b)t Zh 'i'h'
0
, J.' 'h'
0
0
J.
'"
J.
hh ' J.J.
2
2
+ (2a a +a b)(2a a +a b)t
»
1 1
3h 'i 'hi
o 1 1
0
where
q =
=
R(X),
tr(~
·4r-1.BDMATX(b 0 I n, +b J n, ).~,
'4Jh')
--hJ.
-lu
J.
1
J.
J.
115
Finally, observe that t
3h
'i'hi = 0 if i 1 i', then for i = i'
(A.8)
Therefore, from (A.5), (A.6), and (A.7),
2
2 -L 2
2
+ r a(a (r ~(b +2b bl+blb»
000
+ 2ao (2a 0 +alb)r
-2
2
(b 0 +blb) b
Also, from (3.4.2.22) and (A.5), (A.6) and (A.7)
2 2
2
2
- ala2(~~(ao+2aoal+alb)b
- 2(a q + ~~(2a2al+3a aib+aib2)r-l(b +blb)b
o
hi
0
0
0
+ 2a o (a 0 +alb)r
-2
2
(b 0 +b1b) b
e·
116
Finally, from (3.4.2.20) and (A.5), (A.6) and (A.7)
2
2
2
3 2 -1
2 (a q+H (2a a +3a a b+a b)r (b +b b)b
1
1
1
1
o hi
0
0
0
2··
2
-2 2
2
+ rr r(2a a +a b)(2a a +a b)r b (b +b b) •
1
1
1
hh I i
0
0 1
0
0
If there are missing split plots, then in order to compute a
generalized inverse for XIV-~, it is first necessary to compute a
generalized inverse of the matrix
·e
This matrix is a block diagonal matrix with blocks
= r(a 01*.
+
1.
B.1.
where
a.J*.)
1.1.
is a b x b matrix with zeros or
=
ones on the diagonal,
o
°hi
2
[ °hi °hi
2
°hi
and
J*
=
i
is a b x b matrix with zeros or
b
°hi
ones.
-
Observe that 1* has n
hi
ones on the diagonal.
117
A generalized inverse of Bi is B
i
and b i
=-
-1
aia o (oiai + a o )
-1
= r -1 (boI*
+ biJ*) , where b o
= a -1
o
,where 0i is the number of split-plot
treatments in the ith whole plot.
Therefore, a generalized inverse of X'V-~ is
o
o
o
where b o
= a 22
and b i
2
= ale
The quadratic form Y'QVV1QvY can now be developed.
Recall that Vl
b b
= V1V
l'
then
(A.9)
Observe that
(A. 10)
Also observe that
=
=
b -1
L:l:V1V
hi
CMATX(K,.,)CK-' .(a Ih.+a.Jh.)Y .)
-0 1.
-111
0
1.
1.
1.
h 1.
~~vblv-l CMATX(K,.,CK-' .(a Ihi+a.Jho)Y .)
h1 . '
-0 1 . 0 1 .
0
1.
1.
hi
(A.ll)
e·
118
Note that
0
0
0
0
X
hi1
0
0
0
0
~i2
0
0
0
0
~i3
0
0
0
r
-1
oBDMATX(b oI*.+b 1 J*.)
~
~
~i4
= r -1 (b Ou.
I~ +b.J~ )
~u.·
~
~
Then (A. 11) becomes
=
b -1
~V1V
oCMATX(r
-1
h
~
b -1
= ~V1V
h
(b o
I . +b.J
)a O
Ih.+a.Jh.)Y
.)
~.
~~~
h~
oCMATX(r
-1
~
(b 0a0 Ih.+(b
a.+b.a
+a.b.
~)Jh~)Yh~)
~
O~
~o
~~ . . . . . . . . .
=
CMATX(r
-1 -1/2
n. ,.,(a +a.,nh,.,)Jh,.,)Jh,.,Y+.)
h~
0
~
~
~
~
~
=
CMATX(r
-1 -1/2
nh,.,(a
+a.,nh,.,)y+.
1,.,).
~
0
~
~
~~ ~
From (A.IO) and (A.12)
(A.12)
119
Similarly,
From
-
r~(a
hi
-4 4
2
2
+a.nh.)t ) + a 1 a 2 (EE(a +2a a.+a.nh')~ .
a ~ ~ hi
hi a
a ~ ~ ~ h~
2
2
2
3 2
- 2(a q+EE(2a a.+3a ainh.+a.nhi)th')
a hi
a ~
a
~
~
~
2
+ EE H (a t 1h 'i'h·+2a (2a +ainh · )t 2h ,· 'hi
hh'ii' a
~
a
a
~
~
+ a,a,
,(2a a+a.~
'i,)(2a a +a.n
·)t 3h , ~"h'))
~ ~
~ h
~ h~
~
where
q
= R(X),
and
= tr(~h~'4r
=
tr(~
'3r
-n~
-1
BDMATX(b a I* . +b.J*
)~' '4Jh')
~
,h~
~
~
-1
~
(b a
I*. +b,J*
)~ '3Jh')
~
.n~
~
~
= tr(r -1 (b aI~
+b,J~
u.
~
~u,
~
~
)J .)
h~
= r -1 (b a +b.nh.)tr(Jh .)
~
= r
-1
~
,
(b a +b,nh.)n
~
~
h~
~
(A.13)
120
=0
Observe that t 1h 'i'hi
= tr(~'i3(r
=r
=r
-2
-2
-1
if i
~
i', then for i
(boI*i+biJ*.))~i3~i3(r
-1
1
= i'
(boI*.+biJ*.))~i3
1
1
(tr«bo+Ih'i+biJh'i) (boIhi+biJhi))
2
2
(tr(b o I , 1.+(2b 0 b.+binh.)J
,)
1
1
h
h1
(A.14)
Also observe that t 2h 'i'hi
=0
if i
~
=0
if i
i', then for i
= i'
-e
Finally, observe that t 3h 'i'hi
~
= i'
i', then for i
Therefore,
=
4
Ill(Hn , -q)
hi h 1
-
~~(r
-1
hi
- 2(a
+
hi
o
1
1l2(~~(b
hi
0
+a.)n ,
h1
1
+ 2
2
1l2(~~(a
2
-4 2
(b +b.nh.)nh,)(a +a,n ,))
0
1
1
1
0
1 h1
-4 4
+ III
211
+2a a. a.nh.)n h ,
0
0 1
1
1
1
2
2
3 2
-1
a.+3a a.nh.+a.nh,)(r (b +b.nh')Yh'))
q+~~(2a
hi
0
1
0
1
1
1
1
2 -1 2
2
Z~ Z(a (r
(b +2b b.+b.~ .)n .
h1
hh'i 0
0
0 1
1 h1
0
1
1
1
121
Also,
tr(QVV1QV)
-2
= a l (LL(a
+a.)n .
hi 0 ~ h ~
- a qLLa.(a +a.nhi)r
0 hi ~ 0 ~
-1
(b +binhi)n h .)
0
~
222
2
- ala2(~~(ao+2aoai+ainhi)~i
2
2
3 2
-1
- 2(aoq+~~(2aoai+3aoainhi+ai~i)r (bo+binhi)nhi
2 -2 2
2
(b +2b bi+bi~ .)n .
h~
0
0
0
n~
+ LL L(a r
hh'i
+ 2a (a +a.nh.)r
o 0 ~ ~
-2
(b +b.n .)
0
~ h~
2
~
.
n~
e·
Finally,
2
2
= hi
LL(a +2a a.+a.nh')~'
0
0 ~
~
~
~
2
2
2
3 2
-1
2(a q + LL(2a a.+3a a.nh.+a.nh.)r (b +b.nh.)n .
o
hi
0 ~
0 ~ ~ ~ ~
0 ~ ~ h~
+ a2 LL L(r -2 (b 2+2b b.+b.nh.)n
2
.
o hh'i
0
0 ~ ~ ~ h~
2
+ 2a LL L(2a a.+a.~ .)r
0hh'i
0 ~ ~ n~
2
-1
2
(b +b.n .) n .
h~
0 ~ h~
2
+ LL L(2a a.+a i n h .)(2a a.+a.nh.)r
hh'i
0 ~
~
0 ~ ~ ~
-2
(b +b.n .)
0 ~ h~
2 2
~
.'
n~
APPENDIX B
PROGRAMS FOR THE SIMULATION OF THE BEHAVIOR OF
-e
THE ESTIMATE VECTOR OF PARAMETER S
123
DIMENSION TT(4).TC4)
RCAL.S TIT/' X·/
REAL LA1C30l.LA2(~OJ.LA3C30l.YAI130).LAC44J
QUtENSIOlll XC30 .30 J. VI C.JO .30 J • V1'30.30), QV (30.301. W( 30.30)
DIMENSION GC30.30J.OP(30.30J.A(30.~OJ.POXI30.30J
DIMENSION U1C,J0.301
DlMEhSIOlll YC72J.ZCI441.Rw'4.J.S(2.2J.DELTAC21 .ZA(72J.Q1C72)
DIMENSION SI(2,21
DIMENSION AVIC30.30J,AQVC30.30J.ASC2.2J.ASIC2,2J
DIMENSION UCIOOOI
DIMENSION UU(2001
INTEGER R.C.CA.NBC20J.8L.PRCIOJ
INTEGER BC30l,.P(30).SPC30J
INTEGER HA.f30/.NG/30/.N3/30/
CALL $CLPARCPR.1T.1TM,ITN.XI.X2.CONSf.R.X.NR.NC.BL.NB.C.CAJ
HAJIC-CA-C
DO 70 1-I.R
00 70 .Jal.NU1C
70
U1C~.I)-XCI.C+~)
XII-XI
XI2-X2
CALL INTIAL(1J
IDISTaPRC (0)
C... END GENERATE RANDOM ERRORS
CALL·SCRNYCy.Z,R.ID1Sf)
CALL SCRNYCYAI.Z.A.IOISf)
CALL SCA8CYAI.UI.l.NU1C.R.YA1,Z.I.NR.NR.NR.I.NR.NR.I)
DO S.J ~~I,R
83
YC~)-YC~~J+YA1C~~J
CALL SCSNICX.VI.VI.QV ••• G,R.C.NR.NC.BL.NB.Z.ZA.S.SI.XI.X2.CONSf)
~CPRC9J.eQ.l) GO TO 66
00 99 Kl -1.ITN
C••• NINQUE ESTIMATORS
C... CONPUTE NINOUE
YYlaFQFCY,Vl.R.NR.NRI
YY2-FQFCV.QV.R.NR.NRJ
XllaYY1.SICI.IJ+YYZ.SIC1.2J
X12-VY1.SIC1.2)+YYZ*SIC2.2J
wRITEC3.60Z)Xll.XI2
602
FOANATC2FI0.6)
CALL SCSNICX,VI.Vl.QV ••• G.R.C.NR.NC.BL.NB.Z.ZA.S.SI.
•
Xl1.X12.CONSTJ
C VI ::aQV*Vl*QV • QV::-Qv*QV
99
CONTINUE
IFIPR(9).EQ.2)XI-XII
IFCPRC9J.EQ.2JX2-X12
66
CONTINUE
CALL RSTART(563.542J
105
READC6.2.ENDalOo)(LAClJ.lal.CJ
nA-O
NDPaO
2
FO"M~TC40F2.0J
CALL
CALL
CALL
CALL
SCVICVl.R.XI.X2.BL.NB.NR.NRI
SCATBCX.VI.C.R.R••••A.NR.NC.NR.NR.NR.NR.I.11
SCABC •• X.C.R.C.G ••A.NR.NR.NR.NC.NC.NC.l.1J
SCG1CG.C ••• TRI.ZA.L.CONST.NC.NC.NR.NR)
e-
124
SCAB(VI.X.R.R.C. W.WA.NR.NR.NR.NC.NR.NR.l.l)
SCAS( W.G.R.C.Co W.Z.NR.NR.NC.NC.NR.NR.NR.l)
CALL SCA8(W.LA.R.C.l.Ql.Z.NR.NR.NRol.NR.l.l.l)
CA~ SCV1( A.R.BL.NB.NR.HR)
QV1QaFaFCQ1.A.R.NR.NR)
C~ SCI( A.R.NR.HR)
QQaFQFCQ1.A.R.NR.NR)
AA-GV1Q.Xl.aQ.X2
~
CA~L
SS-2.CQV1Q.QV1Q.SI(1.1)+QQ.~Q.SIC2.2)+2.QV1Q.QQ.SIC1.2)
ALaAA**Z/SS
ae-S8/AA
OFFaZ.AL
~lTEl~••3)OFF.AL.SE.lLAlKl).KI.l.C)
FORNATl//· OE&REES OF FREEDOM °oF15.1// o AL••• F1S.6//
•• SE··.F15.6//
•
• LAN·/Z(30F4.0»
lDFaDFF + .05
00 100 l-l.1T
END ~ENEAATE RANDOM ERRORS
C~ SCRNYlY.Z.R.IOIST)
~ SCRNYCYA1.Z.NU1C.10IST)
CA~ SCAS(YA1.Ul.l.NU1C.R.YA1.Z.1.NR.NR.NR.l.NR.NR.l)
DO 84 ~~.l.R
43
YC~~)·YC~~)+YA1(~~)
106
MINQUE ESTIMATORS
YV1-FQF(V.Vl.R.NR.NRJ
YYa-FQF(Y.QV.A.NR.NR)
Yl-YY1*SI(1.1'+YYZ*SI(1.2)
VZ-YY1*Sll1.Z)+YVZ*SllZ.Z)
CALL SCVllVI.R.Vl.YZ.8L.NS.NR.NR,
CALL SCATS(X.Vl.C.A.R ••• WA.NR.NC.NR.NR.NR.NR.l o1)
CALL SCAS(W.X.C.R.C.G.WA.HR.NR.NR.NC.NC.NC.l.l)
CALL SCG1CG.C ••• TRl.ZA.Z.CONST.NC.NC.NR.NR)
ALPLaFQFlLA.G.C.NR.NR)
WAIT£C3.609)ALPL
FORMATC· Z-VALue·.Fl0.6)
CA~ SCASCV1.X.R.R.C.W o.A.NR.NR.NR.NC.NR.NR.l.l)
CALL SCASlW.G.R.C.C ••• Z.HR.NR.NC.NC.NR.NR.NR.l,
C~ SCASlV.W.l.R.C.Ql.Z.1.HR.NR.NR.l.NR.l o l)
C~ SCASCQ1.LA.l.C.l.SLA.Z.l.NR.NR.l.1.1.1.1)
TVaBLA/SQRTCALPLJ
PRCS-PAGTlTV.IOF)
WRITEC3.610ITV.PROB
FORMATlo T-VALUE·.Fl0.6.· PRQSASIL1TV·.FI0.6)
IFIPROS.LT• • OZ5 .OR.PROS .GT• • 975 )HOF-NDF+l
IFICOGAMAIALPOF.AL.SE).LE. O.)GO TO 100
ITA-ITA+l
U(lTA)aCOGANA(ALPL.AL.SE)
CONTINUE
GO TO 105
CONTINUE
600
C••
FORMATlo N R.·oIS)
GOCONESS OF FIT TEST
-e
609
610
100
~ITE(3.o00)HOF
*
CA~
END
SCCDFlU.l.~.ITA.2)
APPENDIX C
PROGRAMS FOR FORMULA 5.1.3 FOR APPROXIMATING
THE DEGREES OF FREEDOM
126
-e
OI~E~SION TTC4J.T(4)
REAL*a TIT~' X·~
REAL LA1C30J.LAZC30J.LA3C30J.YA1C30J.LA(44)
OI_EhSION XC30.30).VIC30.30).Vl(30.30).QVC30.30J.W(30.30J
DIMENSION GI30.30).OPC30.~OJ.AI30.30).POX(30.30)
DIMEhSlON U1C30.30J
OI.E~SION YC7Z).ZCI4.J.RWC44J.SCZ.ZJ.OELTACZ)
.ZA(72).QIC7ZJ
01 N~NS ION SI( Z. Z)
OI~E~SION AVIC30.30J.AQVC30.30J.ASC2.2J.ASICZ.2)
DIMENSION UIIOOO)
DIMEhSION UUCZOO)
INTEGER R.C.CA.NSCZOJ.SL.PRCIO)
INTEGER SC30J.wP(30).SPC30J '
INTEGER NR~30~.N~30~.N~30~
CALL SCLPARCPR.IT.ITM.ITN.XI.XZ.CONSr.R.X.NR.NC.SL.NS.C.CAJ
NUIC-CA-C
C•• CCiMPI.TE Ul
00 70 l-l.R
00 70 .J=1.NUIC
70
UIIJ.IJ-XCI.C+~J
CALL INT IALC 1 J.
CALL RSTARTC563.S42)
C POX
CALL SCATACX.R.CA.G••• NR.NC.NC.NC.NA.NR)
CALL SCGICG.CA.W.TR.Y.Z.CONST.NC.NC.NR.NRJ
CALL SCASCX.G.R.CA.CA.W ••A.NR.NC.NC.NC.NA.NR.l.l)
CALL SCASrCW.X.R.CA.~ ••• y.NR.NA.NA.NC.NR.NR.NR.l)
CALL SCICQV.R.NR.NAJ
CALL SCAMSCaV.W.R.A.POX.NA.NR.NA.NA.NR.NAJ
C END POX
XII-Xl
XI2=XZ
IDIST-PRC 10)
C**. GENERATE RANDOM ERRORS
CALL SCRNYCY.Z.R.ID1STJ
CALL SCRNYCYAl.Z.NUIC.IDISTJ
CALL ~CA8CYAl.Ul.l.NUIC.A.YAl.Z.1.NA.NR.NR.l.NR.NR.1J
DO 83 J,lal.R
83
YlJ~)-YlJJJ+YAICJJJ
C••• ~ND GENERATE NANDON E~ORS
IFCPRC9J.EQ.l) GO TO 66
DO 99 1==1.ITN
C*** COMPUTE MINQUE
YY1=FQFCY.Vl.R.N~.NRJ
YY2&FQFlY.QV.R.NR.NR)
XII-YYI.SICI.IJ+YYZ*SICI.2J
XlZ-=YY I*SI C1 .2J +YY2*SI CZ .Z)
CALL SC~MICX.VI.VI.QV ••• G.R.C.NR.NC.SL.NS.Z.ZA.S.SI.
•
Xll.XIZ.CONST)
C VI ::=QV*VI*QV
• QV::=Qv-av
99
CONT lNUE
IFCPRC9J.EQ.ZJX1=XII
IF1PR(9).EQ.ZJXZ-XIZ
66
CO"T I NUE
10PT=PRC 7)
NYI=O
127
NY 2-0
00 100 I-l.IT
GlaO.
G2"0.
00 85 .J-l.ITM
GENERATE RANOOM ERRORS
CALL SCANYCY.Z.R.I0IST)
CALL SCRNYIYA1.Z.NU1C.IUIST)
CALL SCA81YAl.ul.l.NUlC.A.YAl.Z.1.NR.NR.NA.l.NR.NR.l)
00 84 ...I=l.R
Y( .... ).. Y( .... )+YAl( .... )
END GENERATE RANOOM ERRORS
MINOUE ESTIMATORS
Yl ..FOF(Y.~1.R.NR.NR).SICl.l)+FOFIY.OV.R.NR.NA).SI(1.2)
Y2aFQF1Y.Vl.R.NH.NA).SICl.2)+FOFIY.QV.R.NA.NR)*SIC2.2)
GO TO(150.151.15ZJ.IOPT
RESTAICTED MINQUE ESTIMATORS
CONTINUE
IF1Yl.LT.0.)NY1-NY1+l
IFCYl.LT.O.)Yl-O.
IF1Y2.~T.O.)NY2aNY2+1
IFIY2.LE.0.)Y2-0.01
GO TO 150
CONTINUE
152
C.... P.S.D MINQUE ESTIMATORS
Y2aFQF(Y.POX.A.NR.NA)/TR
YlalFOFCY.Vl.A.NR.NR)-Y2.SCl.Z»/S(1.1)
IFIY1.LT.O.)NYl"NY1+1
IF1Yl.LT.O.)Yl-0.
COhTlNUE
150
CALL SCvllVI.R.Yl.Y2.8L.H8.NR.NR)
CALL SCAT81X.VI.C.R.R••••A.NR.NC.NA.NR.NA.NR.l.l)
CALL SCAS1W.X.C.A.C.G.WA.NR.HA.NR.NC.NC.NC.l.l)
C~
as
~CGI(G.C.W.TR1.ZA.Z.CONST.NC.HC.HR.NR)
CALL SCAS(VI.X.A.A.C. W.WA.NR.NR.NA.NC.NR.NR.l.l)
CALL SCASe W.G.R.C.C. W.Z.NR.NR.NC.NC.NR.NR.NA.l)
CALL SCASCY.W.l.R.C.Ql.Z.l.NA.NA.NR.l.NA.l.1)
~1"Gl+QU 1)
G2-G2+QU2)
GI-Gl/FLOATIITM)
~Z·G2/FLOATCITM)
Ull)"Gl
HISTOGRAM
CALL HSTGMCGl.G2.G3.G4.1.Z)
CONTINUE
100
C••• GAAPHS
CALL GETPCT(lT.9)
IlfAITEI3.Z0)
ZO
FORMAT(lHl.16X.· SO •• 24X.· Sl·.24X.· 82".Z4X." 83·)
CALL PANT U .ZO)
IlIR nEe 3.21)
FORMATC1Hl.20X."GRAPH OF SO"//)
Zl
CALL G.APHll.1J
WA 11£(.3.Z2)
FORMATC1Hl.20X.·GRAPH OF 81·//)
22
CALL ..RPH(l.Z)
,
e
128
2.
1
C***
-e
WRnE( 3.24)
FORMAT(lHl.· MOMENTS·)
CAU- RAlilMOMU.I T )
WRITE(3.1INY1.NY2
FO~MAT(elNY1.',lo,'
Nyz.e,16)
GOODNESS OF FIT TeST
CAu. SCCDF(U.l.4.IT.2)
END
129
DIMENSION TT(4).T(4)
REAL*8 TIT/' X'/
REAL LAl(30).LA2(30,.L43(30).YA1C30).LAC4.)
DIHENSION XC30.30).VIC30.30).VI(30.30).QVC30.30).W(30.30'
DIMENSION GC30.30,.OP(30.30,.AC30.30 •• POXC30.30,
DIMENSION UIC30.30)
DIMENSION Y(72).ZCI44).RWC44 •• SC2.2 •• DELTAC2) .ZAC72•• QIC72.
DIHENSION SIC2.2)
OI~ENSION AVIC30.30).AQV(30.30).ASC2.2,.ASIC2.2'
DIMENSION UCIOOO.
DIMENSION UUC200J
INTEGER R.C.CA.NBC20 •• BL.PRC10)
INTEGER B(30J ••P(30 •• SPI30),
INTEGER NR/30/.NC/30/.N3/30/
CALL SCLPARCPR.IT.ITM.ITN.XI.X2.CONST.R.X.NR.NC.BL.NB.C.CA)
NUICaCA-C
00 70 lal.A
00 70 ,Jal.NUIC
70
Ull,J.I)aXCI.C+,J)
XXaXl
XYaX2
XllaXI
Xl2aX2
CALL INTIALCIJ
IDISTaPAUO)
CALL RSTARTCSo3.S42)
105
READ(o.2.ENDaIOo"LACI,.J-l.C)
ITAaO
NO,.. 0
DO 100 1-1.1 T
Xl-XX
X2-XY
C••• END GENERATE RANDOM ERRORS
CALL SCANYIy.Z.R.IDIST)
CALL SCRNYCYAl.Z.R.IOlST,
CALL SCABCYAl,UI.l.NUIC,R.~Al.Z.l.NR.NR.NA.l.NA.NA.l)
00 83 ,J,Ja I.R
83
YC.J~)aYC,J.J)+YAIC.J.JJ
CALL SCSMJCX.VI.VI.QV ••• G.R.C.NA.NC.BL.NB.Z.ZA.S.SI.Xl.X2.CONST'
IFCPR(9J.EQ.IJ GO TO 66
00 99 KI -1. I TN
C*** MINQUE ESTIMATORS
C*.* COMPUTE MINQUE
YY1=FQFCy.Vl.R.NR.HAJ
YY2aFQF(Y.av.R.NR,NR)
Xl l=YY 1* 51 U. U +YY2.SI U .2 J
Xla=Y~I.5ICl.2)+YY2.SIC2.2)
wRITEC3,6021Xll.X12
FORMAT(2Fl0.o)
CALL 5CSMICX.Vl.VI.QV ••• G.R.C.NR.NC.8L.NB.Z.ZA.S.SI.
•
Xll.XI2.CONST)
C VI ::aQV*Vl*QV • QV::-OV*QV
99
CONTINUE
IFCPR(9 •• EO.2)Xl=Xll
IFCPR(9).EO.21X2=XI2
66
COhTINUE
602
e-
130
oil
-e
609
610
100
100
FOAMAT(40Fa.OJ
CALL SCVI(Vl.R.XI.XZ.BL.Ne.NR.NRJ
CA~ SCATo(X.VI.C.A.R.W.WA.NA.NC.NR.NA.NR.NR.I.I)
CALL SCAB(W.X.C.R.C.G.WA.NR.NA.NA.NC.NC.NC.I.IJ
CALL SCGl(G.C.W.TRI.ZA.Z.CONST.NC.NC.NR.NAJ
CA~ SCAB(VI.X.R.A.C. W.WA.NR.NA.NR.NC.NA.NA.I.l)
CALL SCAB( W.G.R.C.C. W.Z.NA.NA.NC.NC.NR.NR.NR.1J
CALL SCAB(W.LA.R.C.l.QI.Z.NA.NA.NA.I.NR.l.l.l)
CALL SCVI( A.R.SL.Ne.NA.NA)
QV1QaFQF(QI.A.R.NA.NA)
CALL SCI( A.R.NR.NAJ
QQaFQF(UI.A.R.NR.NRJ
AA=QVIQ.X1+QQ*xa
BBaZ*(QVIQ*QVIQ*SI(1.1J+QQ*QQ*SI(Z.ZJ+a*QVIQ$QQ*Sl(I.Z)J
AL=AA.*a/BB
BEaBB/AA
DFP=2*AL
WRITE(3.01IJDFF .AL.BE
FORMAT(· OF·.FIO.o.· A·.FI0.6.· B·.FIO.oJ
IOF-OFF + .05
YYI=FQF(Y.VI.A.NR.NR)
YY2aFQF(Y.QV.R.NA.NR)
YI=YYI*SI(I.I)+YYZ.Sl(l.ZJ
YZ=YY1*SI(I.Z)+YY2*SJCZ.Z)
CALL SCVICVl.A.YI.YZ.BL.NB.NR.NRJ
CA~ SCATB(X.Vl.C.A.A ••••A.NR.NC.NR.NR.NR.NR.l.IJ
CALL SCAB( •• X.C.R.C.G.WA.NR.HR.NA.NC.NC.NC.I.l)
CALL SCGI(G.C.W.TR1.ZA.Z.CONST.NC.NC.NR.NR)
ALPLaFQFCLA.G.C.NR.NRJ
WRITE(3.609JALPL
FOAMAT(· Z-VALUE •• F10.6)
CA~ SCAB(Vl.X.R.R.C • • • WA.NR.NR.NR.NC.NA.NR.I.IJ
CALL SCAB(W.G.R.C.C.W.Z.NR.NR.NC.NC.NA.NA.NR.1)
CA~ SCAB(Y.W.I.R.C.Ql.Z.l.NR.NA.HR.l.NR.l.lJ
CALL SCAB(Q1.LA.l.C.I.BLA.Z.l.NA.NR.l.1.1.1.1J
TVaBLA/SQRT(ALPLJ
PROB=PRGT(TV.IDFJ
WRITE(3.610)TV.PROB
FORMAT(· T-VALUE".F10.o." PROBABILITY".F10.oJ
1F(PROB.LT • • 025 .OR.PROB .GT • • 975 )NDFaNOF+1
ITA:ITA+l
U(ITAJ=PROB
CONTINUE
GO T~ 105
CONTINUE
WRITE(3.000JNDF
FORMAT(" N Ra l .15J
GOODNESS OF FIT TEST
CALL SCCOFCU.l.o.ITA.ZJ
~D
131
SUBROUTINE POOL
CU.N2.~L.IL.NSAM.W)
C
C
C
POOL AND I:ANK U-V ALUES
C
CIMENSION UCI000). WCI000)
"3 • No
NL • ~L + 112
00 2 I '" 1.H2
2
'" U(I)
IF (IISAM.Gl.ILI GO TO 6
CALL RANIC OhNL I
WRITE (3.1)
1
FOR~AT
WCM~+II
(~/.10X.'POOLEC
AND
RA~ICEO
U··S·I
CO 4 I '" 1.NL
WRt T! (3.! I I. W( I I
4
5
FORMAT (T2.I4.·-·.FI2.4)
ReTURN
END
6
SUBROUTINE RANK(U.N21
C
C
SUBROUTINE AANK(U.N21
C
DIMENSION U(1000)
M'-"'2-1
00 202 I-l.Ml
IC 1- 1+1
DO 203 ..-1C1.N2
209 IF (UCIJ-U(.JJ) 210.210.220
210 GO TO 203
220 Zt-U( I I
U(II-U(.JI
U( ... I-ll
203 CONTt~UE
202 CONTI"'ue
RETUR~
END
SUBROUTINE SCGI(G.M.O.TR.0.WIC.CONST.N1.N2.N3.N41
DIMENSION G(Nt.N21.0(N~.N4J.O(11.WIC(lJ
CALL !CIU].M.N3.N41
CALL LSVOF(G.Nl.M.M.C.II3.M.O.WIC.IERJ
CALL SCSVINCO.M.WIC.CONSTI
CALL SCOOOP(G.0.a.M._IC.M.M.NI.N2.N3.N4)
TRaG.
33
CO 33 I-l.1f
IFCD(II.NE.O.ITR=TR+l.
CONTINUE
IOETURIIl
END
132
SUBROUTINE TESTS(U,NZ)
C
C
SUBROUTINE TO COMPUTE TEST STATISTICS FOR UNIFORMITV, UZ,NSZ, NS3,NS4
C
C
COMPUTE ~E~.AN SMOOTH TESTS: NSZ, NS3,NS4
IMPLICIT REAL*S(A-H,o-Z)
REAL*. U( 1000)
P~I"O
P12-0
P13-0
P14_0
00 I I - I,NZ
Pit '"' Pit + (DSQRTU2.CO))$(U(I. ,:",.S)
PIZ - PIZ + (OSQRT(S.DO))*( 6*(U(I) - .S)**Z - .5.
Pl3
PI3+ (DSQRT(7.CO).*(20*(U(I. - .5)**3 -3*(U(I) -.5 ••
pl. - ZI0.(U(I) - .5' ••4 - .S*(U(I. -.S)**Z + g./8 +PI.
AS2" (PII**2 + PIZ**Z)/NZ
AS3 '"' AS2 + PI3**2/N2
AS4 ,. AS3 + PI4**Z/NZ
=
C
C
COMPUTE WATSON STATISTIC: UZ
C
-e
ws - 1./IZ.ffl2
DO 7 1-1, ..2
7
"s • "5 + (( 2.*1 - 1 ./2.I'NZ - U(t) '**2
usa - 0.0
DO 7~ t
I . l ' NZ
7g1
usa - usa + U(I)
usa-.s
~2*«USaI'NZ O.S )**2)
U2
- (CSo-O.l/N2+0.I/N2/NZ).(I.O+O.S/NZ)
WRITE (3,5)
5
FOR~'T(I'/.7X,'WATSON U2'.4X,'NEVMAN SNODTH 2',SX,'HS3',7X,
.'NS.',12X.'NUM8ER OF U"'S")
WRITE f3,E) U2,AS2,AS3.AS4,N2
6
FOR~AT (//,SX.Fl0.S,7X.Fl0.5,SX,F8.4,2X.FS.4,9X.IS,I')
ReTUR~
END
133
SUBROUTINE TRANS(X.N.NP.U.MOOI
CIME~SIO~ X(IOOI.UCIOOI.Z(IOO)
GO TO (1.2.3.4.5.6.71 • ~O
C
C
SCALE PARAMETER EXPONENTIAL CLASS. MOD
=I
C
5UM I = 0.0
5U'I2 • X( I )
00 101 I • 2.N
SU'II
SU'II + x ( I - l )
5U'I2 - 5UM2 + X(I)
101
U(I-l1
1($UMI/SUM2)**(I-l)
GO TO 200
C
ONE PA~AM~Tep CLOCATION) !XPD~eNTIAL CLASS. MOD
2
CALL SOR(X.N.Tl.Z)
I
=
=
~2
DO
161
c
C
= 1'1-1
161 I
•
UCI) = 1 GO TO 200
TWo-PARAM~TER
=2
1.J112
EXP(TI-Z(I))
=3
EXPONENTIAL CLASS. MOD
C
3
CALL SOR(X.N.TI.Z)
Nl - N-I
SUllll
0.0
5Ua42 • Z<t) - TI
00 171 I • 2.NI
SUMI
SU~1 + Z(I-I) - Tl
5UIII2 = 5UM2 + Zel) - Tl
U<I- 19 1
(SU"'I/$UM2)**<I-l)
GO TO 200
=
=
171
C
C
TWO-PARAMETER NORMAL CLASS. iliaD •
C
4
TI
=
T2
=
= 0.0
X(I) + X(2)
DO 201 I • 3. N
T1
T1 + XU)
DO 202 J •
202
201
I. I
T2 = T2 + (X(J) - Tl/I,**2
A=CSQATCI-2.,)*eX(I)-TI/I)/(SQRT(ABSC(I-I.)*T2/I-(X(I)-T1/1)**2)))
UCI-2). PPGT(A.I-2)
GO TO 200
C
C
C
5
300
co :!OO I • 1. N
XCI' • ALOG(X(I))
GO TO "
C
C
C
6
4
TI = XCI'
T2
X( 1 )
co 601 I • 2.N
IF (Tt.LE.X(tlt GO TO eOI
=
e-
134
601
602
604
603
606
60S
20
21
22
·e
Tl .. Xft)
CON"fI"Ue
CO 602 I .. 2.N
IF (XUt.LE.T2t GO TO e02
T2 • X( It
CON"fINUE
00 603 I .. 1. N
IF (X(lt.EO.Tlt GO TO e04
GO TO 603
Ito .. I
CONU"'UE
CO 60! I .. toN
IF (X(It.eC.T2t GO TO 606
GO TO 60!
120:a I
CONTINUE
YF (120-110) 20.20.21
hIYN .. 120
I"'AX • 110
GO TO 22
I"'IN" 110
IMA)l .. 120
IF (IMIN.EO.l) GO TO 10
I"INI .. IMIN
1
DO 11 I " 1. IIIIINI
11
ZO) .. X(1t
10
12
14
IMIN2 .. I~I" + 1
IF (IMAX - YMIN) 13.13.14
IMAX1:a IMAX - I
DO IS I :a IMIN2. IIIIAxl
IS
ze 1- 1)
13
IMAX2" IMAX + 1
YF (flI
IMAX) 16.16.17
YMAX2" IM.X + 1
DO 1~ I .. IMAX2. N
Z(I - 2t .. XCI)
N2" flI - 2
DO 19 I .. I.N2
U(I) .. (Z(I)
Tl)/(T2 GO TO 200
17
18
16
19
c
c
..
lC
(I)
Tl)
TWO-P.R.IIIE"fER PARETO CLASS. 11100 .. 7
C
00 613 I
613
.. to N
X(It .. ALOEex(I))
200
GO TO "3
RETURN
7
END
135
c
c
c
~EAL
FUNCTION PRGT*4(TVALUE.OF)
T-oISTRIBUTION
IMPLICI~
c
C
C
FUNC~ION
SUBROUTINE
~EAL*S(A-H.o-2)
REAL*4 TVALUE
INTEGER OF
FOR OF<-20 COMPUTE EXACT PROB>ITt
REF: ~URNAL OF OUALITY TECHNOLOGY. VOL 4. NO.4. OCT.1972. P 196
C
TSO-TVALue·*TVALUE
PRGT-l.00
IF(OF.LT.l)RETUAN
IF(TSC.LT.l.D-10)GO TO 3!
V-oF
IF(OF.GT.20'GO TO 50
IF(TSQ.GT.l.08tGO TO 30
c
THETA.OATAN(OSORT(TSQ~\"
(OF.l!)
TwOS 1M THEtA'
e-oCOSn ..eTAJ
M-1ll0C
IF(~.EQ.O'GO TO 10
PAGT-l.00-2.00*THETA'3.141592653589793
IF(OF.EO.l.GO TO 25
Ta 2.00*T*C/3.141592653!89793
C
10
PR~.PRGT-T
NT-(Of"-M-2'/2
IF(N~.L~.I'GO
TO 25
C2aC*C
0-/111
00 IS I-I ...T
0.0+2.00
T-T*C2*(D-l.00./0
15
PRG~.PRG~-T
C
25
30
35
IF(PRGT.GT.O.O)GO TO 35
PRGT-O.O
PRGT-PRGT/2.0
IF(TVALUe.GE.O.O'PRGT-I.O-P~T
RETURN
C
C
C
C
FOR O~20 USE FISHER'S eXPANSION (FIRST 3 TE~MS'
ABSOLUTE ERROR < .00002
REF: JOHNSON & KOTZ P. 102
'CONTINUOUS UNIVARIATE 0IST-2'
HOUGHTON MIFFLIN CO. 1~70
C
50
C
IF(TSC.GT.36.00.GO TO ~O
T-OSQRT (TS C'
X_T/(2.00*V'*(TSO+l.00-(3.00+TSO*(S.OO+TSO*<7.00-3.00*TSO).)'
•
(24.00*V .-( 15.00+TSQ.C~.Oo-TSO*(6.OO+TSO*( 14.0 o-TSO* (11. 00-T50' )
• • '.'(96.CO*V*V"
PRGTaOERFC(T/l.414213!e237.+0EXP(-.91S9385332-TSC'2.00)*x
GO TO 25
136
C
END
T-DIST~IBUTION
SUe~OUTINE
C
END
C
C
CPIT -aOEL TESTING P~CG~AM
DIMEhSION XC100 •• UCIOOO'.W(IOOO •• Z(100.
NL
=
0
DO 8~~ IL • I. NSAM
100 FORMAT (3I~'
wRITE (3.1' NSAM.MOD
FORMAT (//.5X.'CPIT MOCEL ANALYSIS PROGRAM. ON THIS RUN WE ANALYZE
.',2X.I3.2X.'SAMPLE5. TME MODEL ASSUMED IS MOC = '.13.'(5EE MODEL C
.OOE Ih PROGRAM'"
C
C
C
C
C
C
C
MOO
I 2 3
4
IS A~ INDEX FOR THE NULL ~POTHESIS MODEL TO BE TESTED:
SCALE PARAMETER EXPONENTIAL
LCCATION PARAMETeR EXPO~E~TIAL
TwO-PARAMETER EXPO~ENTIAL
T.o-PA~AMETER NORMAL
5
TWO-PA~AMETE~
LOGNO~MAL
UNIFORM
TWO-PA~AMETER PARETO
ETC. (ADO OTHER FAMILIES)
N • Ne. OF OBSERVATION!
NP • hO. 0 .. PARAMETERS
NSAM • h~MBE~ OF SAMPLES. MUST eE SPECIFIED IN A PROGRAM CARD
6
7 ,
C
C
C
C
C
C
C
Two-PA~AMETER
CATA INPUT
C
C
C
C
NULL CLASS TRANSFORMATION
C
821
822
823
721
833
WRI TE (3.82 II
FORMAT(IHO.//.T7.'ORIGJNAL DATA')
DO 822 I . I.N
wRnE (3.823' I.X(I)
FORMAT (T2.I4,' - '.FI2.4)
N2 • ~-NF
CON 11 NUE
CALL TRANS(X.N.NP,U.MOC)
CALL PANK (U. N2.
CALL TESTS (U. N2)
CALL POOL (U.N2.NL.IL.NSAM •• )
CONTI~UE
CALL TESTS (W.NL)
CALL EXIT
STOP
ENO
137
SUB~OUTINE
SCXCX.B ••• S.L.NI.N2.8L.NB.C.CA)
XCN1.N2)
INTEGER BClt •• Clt.NB(1).BL.C.CA.SCI)
INTEGER EM.X ••"'AX.SMAX •• A.BA
BMA lIaBC 1 )
_"'AXa.Ct)
SMAll-SCI ,
CO 1 1-2.L
IFCBMAX.LT.SCIJtBMAXaeCIJ
IFC.MAX.LT •• CIJJWMAX-.CIJ
IFCSMAX.LT.SCI)JSMAXaSCI)
CDNT IhUE
SLaO
_A-W ( 1 )
BA-eCl)
N=I
CO 2 1-2.L
IFCBA.EQ.BCIJ
.ANO• • • • l!a.wCI)'GO TO 3
BL:aBL+l
NBCBL ):aN
WA-WC II
BA-BC I J
DIME~SIC~
"1
3
2
GO TO 2
"N+l
CONTINUE
ElL-BL+I
NBCBL)-N
Mlat
"210111 +BMA X
M3a"2+WMAX
M4aM3+SMAX
Me-"4+WlIIIA X-. SlIIIA X
CaMS
CA-C +WMA X*eMAX
DO " lal.L
XCI.I,..I.
00 S I-t.L
DO e .l=2.C.XC I ••U-O.
XU.lll+BC In-I.
XCI.M2+W(I))=I.
XCI.M~+S(I))"I.
5
xU • "'''+C WC I t-t )*SMAx+!C I' )=1.
xc 1.115+(5(1)-1 )*WMAX+.( I))al.
f;ETUf;~
END
e·
138
SUBROUTINE SOR(X.N.Tl.Z)
C
DI~EN!ION ~(100).Z(100)
Tl ,. X( 1 )
00 111 1-2.N
IF (Tl.LE.X(I»
GO TC ltl
Tt
11 1
.. XU)
CONT I ...lJE
Nl ,. N - 1
CO 121 I - 1. N
IF (XU ).EC.TU GO TQ 131
GO TO 121
131
lliltN" I
121
1
CONTI"'UE
IF (IMIN - 1) 1.1.2
CO 3 1=2."
3
Z(I-t) -
XCI)
GO TO 4
2
5
7
IF (N 00
Z(n
GO TC
6
8
I~IN)
= 1.
7 I
5.5.6
Nl
.. XCI)
4
1MB. 1 .. HtlN-l
CO 8
I -
Z(n
..
1.IMINl
XU)
00 9
9
4
·e
I . IMIN. Nl
Z(I)" X(I+l)
RETURN
END
SUBROUTINE SCLBM(R.C.X.XA.IOP.Nl.N2.N3.N4)
DIMENSION ~(Nl.N2).XA(N3.N4J
INTEGER R.C.IR(100).IC(100)
READ(1.1.E ...D"50)N.M
IP( IOP.EC.lJGO TO 49
READ(7.1.END=SO)(IR(IJ.I-l.N)
READ(7.1.E"'DaSOJ(IC(I).I-l.~)
49
50
t
CALL SCBMAT(X.XA.N.M.ICP.IR.IC.Nl.N2.N3.N4)
R-N
CaM
RETURN
POR.. n (40 12)
END
139
SUBROUTINE SPRINT(T.A.L._.PR.Nl.N2)
CI~E~SION ~(Nl.N2)
INTEGES; ~R
REAL FICOe)'·C7X.·.·
•
REAL
•
t
:3
·.·CttH·.·
).,
2'.'
~'.'
'.'
'.
·COL.·.·I~)·.·
F2(~)"
I'.'
7' . '
IFCPR.NE.l'RETURN
"RI TECJ. t 'T
FOR IUT Cl x. ~e)
Kt--8
K2aCl
I(t-I< t +9
l!'. •
4'.'
5'.'
6' •
9' /
1(2-1<2+9
IFCK2.GT.lIII)K2-M
1<-9
IFCI<2.EC._)KaK2-Kl+t
FI( 2 l-F2(K)
WRITE(3. Ft HI. I-KI.1(2)
IFCK2.LT.lIII)GO TO 3
DO 4 l-t.L
4
"RITECJ.IO'I.CA(I.~).J-I.lIII)
10
FORlIIIAT(lX.·ADW·.13.9EI4.6.IOO(,7X.9EI4.6'
RETUR~
END
t03
102
104
107
105
108
106
SUBROUTINE SCRNYCY.Z.R.IDlST)
DIMEhSfON Y(tl.Z(l)
INTEGER R
GO TOC'0~.104.IOS).IDI!T
DO 102 L-t.R
Y(L)-RNDRCO)
GO TO 106
00 107 L-t.R
Y(L )-VHf CO)
GO TQ 106
DO le8 L-l.R
Y(L 1-r:NQAC OJ
ZCL)-V"lleOl
IFCZeL).Ec.O.)VCLl-O.
IFCZCL).NE.O.)YCL)·VCL)/ZCL)
CONTf~UE
RETURN
END
FUNCTION FOFCY.A.L.Nt.N21
OI~E~SIO~ ~(Nl.~2).YC1)
FQF-O.
DO t lal.L
DO t Jal.L
FQFaFOF+YCI).ACI.JI*YCJ)
AETURl\;
END
e·
140
SUBROUTINE SCQVCV,X,G,L,~,Qy,_.NI.N2.N3.N4,NS,N6.N7.N8.N9,NIO)
DIMENSION YCNI.N2t.XCN3.~4).GCNS.N6).OVCN7,N8t.WCN9.NIOt
CALL SCAecy.X,L.L,M.CY.WA,NI.~2.N3.N4.N7,N8.1.lt
CALL !CA8CCy.G.L.M.M.Qy._,N?,~8,NS,N6.N7,NS,N9.lt
CALL SCABTCOV.X.L,M,L,OV •• ,N7,NS,N3.N4,N7,NS.N9.1)
CALL SCABCCV.V,L.L,L.Cy, •• N7.NS.NI.N2,N7,NS.N9,1)
CALL SCAM8CV.QV.L,L.OV.NI.N2.N7.N8,N7.NS)
RETUR~
END
SUBROUTINE SeVICYI.L,N.NB.NI.N2)
IMPLICIT REAL*e (A-H.O-Z)
DIMEhSION VICNI.N2t.N8CI)
Klal
K2=N! CI)
11=1
DO I lal.L
00 2 J:a1.L
IFC~.LT.KI.OR.~.GT.K2tGO TO 3
VIC I.~)-I.
GO TO 2
3
2
YICI.~t-O.
CONTINUE
IFCI.LT.K2.0R.I.EQ.LtGO TO
Kl-KI+NBC I I )
II- I 1+1
1<2al<2 +NB CI I t
CONT INUE
RETUR'"
END
100
SUBROUTINE SYECINCD.NJ
~AL*e TIT"
00,
DIMENSION CClt
DO 100 Ial.N
IFCOClt*OCIJ.LT•• OIJOClt=O.
IF(D(I).NE.O.JOCI)-I/DCI)
RETUf;~
END
FUNCTION FCTRCECA.L.NI.N2)
DIMENSION _CNt.N2t
FeTReE=O.
00 1 I=t.L
FCTRCe=FCT~CE+ACI.It
RETURN
END
I
141
SUBROUTINE SCAAT(A.~.N.C.W.Nl.N2.N3.N4.N5.N6)
IMPLICIT REAL*! (A-H.C-Z.
DIME~SION .(Nt.N2 •• C(N3.N4).W(N5.N6)
IF(N6.EQ.l)GO TO 21
00 t9 I-"IiI
CO 19 J.I.~
W( I. J .=0.
00
19
19 1(.1 ....
W(I.J)-W(I.~'+A(I.K.-.(J.I('
00
20 l-t.IiI
00 20 J=t.IiI
20
2t
C(I.JJ-W(I.J.
RETUl= ...
IF(N5.~Q.t.GO
00 2 2 I
23
22
24
TO 24
• iii
DO 2~ Jat.1iI
W(.I. t ).0.
00 23 I<.t ....
W(.I.t.-W(J.t)+A(I.KJ-A(.I.K.
co 22 J=I.It
C(I.JJ-W(J.I.
RETUR'"
00 25 I.I.~
00 25 .I-t.1III
C( I. J
00 25
25
~'1
'.0.
1<-,,'"
C(I.JJ=C(I • .I'+A(I.KJ-A( ... K'
fOETUl=N
END
SUBROUTINE SCAT8(A.B.L.M.N.C.W.Nt.N2.N3.~.N5.N6.N7.Ne.
IMPLICIT REAL-! (A-H.O-Z.
DIME"SIC ... A(Nl.N2 •• B(N3.N4J.C(NS.N6J.W(N7.N8J
IF(Ne.EQ.1JGQ TO !5
00 3 l-l.L
DO 3 J=t.N
III (
3
J. J
'-0.
Ie-I."
DO :3
_(I ... )... (I .... +A(K.I).8(1< ... )
DO .. I-I.L
DO 4 Jat ....
C( I ... )• • ( I •
5
8
7
9
10
.1'
RETUl=N
IF(,,7.EC.I. GO TO 9
CO 7 l·l.L
DO ! Jat.N
W(.I.I)-O.
00 e l(-t.M
.. ( .I • t ••• ( J.I ) +A (K. I J _e (I( ... J
00 7 JaI.N
C ( J • J ).W ( J • I )
I=ETURN
DO Ie 1= I.L
CO 10 .I-t.N
C(I.J.=O.
00 tOK-I."
C(I.J)=C(I."'+A(K.I •• e(K • .I'
RETUR'"
END
142
t2
13
14
16
tS
17
18
SUBROUTINE SCATA(A.~.N.C.W.Nl.N2.N3.N4.NS.N6)
I~PLICIT PEAL*8 (A-H.C-Z)
DIMENSION A(Nl.N2).C(N~.N41 •• (NS.N6)
IF(~6.eO.I)GO TO 14
DO 12 I al ....
00 12 Jal. ~
WCI. J 1=0.
DO t2 K=t."
W(I.J).WCI.J)+A(K.I).A(K.J)
00 13 1st ....
DO 13 J=t.N
CCI.J)=W(I.J)
I:ETURN
IF(M5.EO.I)GO TO t7
00 15 la I.N
00 16 Jat ....
"(J.1taO.
00 16 K=t.'"
.. (4.1)-W(J.I)+A(K.I)*A(K.J)
00 15 J-l.N
CCI.J'=W(J.II
F:ETUr:'"
00 Ie lal ..
00 t8 J-t ...
C(t.J)=O.
00 18 Kat.N
C(I.J)-C(I.J)+A(K.I)*A(K.J)
RETURN
alO
SUBROUTINE SCA8(A.e.L."'.N.C.W.NI.N2.N3.N4.NS.N6.N7.N8)
I~PLICIT REAL*8 (A-H.C-Z)
OI~E"SION A(Nl.N2).B(N~.N.).CCNS.N6)
•• (N7.N8)
IF(N8.EO.I) GO TO S
00 3 I=I.L
CO 3 J-I.N
w(
3
5
8
7
9
10
I . ,.n-o.
00 :! K=I.",
W(I.J)=W(I.J)+A(I.K).S(K.J)
00 • (al.L
00 4 J-I.N
C( I.J)=w( (.J)
AETUF:'"
IF(N7.EO.tt GO TO 9
00 7 Iat.L
00 e Jal.N
IIII(J.t )-0.
00 e Kal.",
"CJ.tl=wCJ.t )+ACI.K).B(K.J'
00 7 J=t.N
C C I • J)
J.t )
AETUr:P;
00 10 I=t.L
00 to J=I."
CCI.J'=O.
00 to K=I. '"
C(I.J)aC(I.J)+A(I.K)*S(K.J)
AETUAN
END
=.(
143
32
SUBROUTINE SCAPB(A.a.L.~.C.Nl.N2.N3.N4.NS.N61
IMPLICIT ~EAL.8 (A-H.O-ll
DI_EN!ION A(Nt.N21.B(~3.~4).C(NS.N6)
00 32 lal.L
00 32 J=t ••
C(I.J)-A(I.J)+B(I.J)
RETU~~
END
SUBROUTINE SCVI(VI.L.A.B.N.~e.NI.N2)
IMPLICIT RE~*8 (A-H.O-ZI
DIME~SIO~ ~1(Nt.N2).Ne(l)
I-I
1C2-NI! ( l
K
)
11:11
DO t I-t.L
CO 2 J- •• L
IF(J.LT.ICI.O~.J.GT.K2)GO
3
2
TO 3
C--A/(B*(NB(II)*A+8»
IF(I.EO.J)C-C+l/B
VUI • .J)aC
GO TO 2
VI(I • .J)-O.
CONTINUE
IF(I.LT.K2.0~.I.eO.L)GOTO
1
Kl=Kt+NB( II)
II- 1 1+1
K2-1C2+NB( II)
CONTINue
ReTURh
END
3t
SUBROUTINE SCAMB(A.8.L.M.C.Nl.N2.N3.N4.N5.N6)
IMPLICIT REAL.8 (A-H.O-l)
DIMENUDN A(Nl.N2) .e(~3."4) .C(N5.N61
00 31 I-l.L
CO 3t J-l."
CCI.J)=A(I • .J)-B(I.JI
RETURN
END
SUBROUTINE SCI(A.L.NI.h2)
IMPLICIT REAL*e (A-H.O-l)
DIMENSIO~ _(NI.N2)
00 34 I-t.L
00 34 ,J-t.L
_n.J)-o.
34
IF( I.EQ.J 1"(1 • .1)= I.
CONTINUE
f;ETUR~
END
144
SU8ROUTINE SCOOOPCG.0.C.L.WK.M.N.NI.N2.N3.N4)
OIME~SIDN GCNI.N2).OCN~.~.).OC1)
••KCI)
CD I lal.L
DO 2 J=I.~
WK(J}-O.
DO 2 Kal."
2
~KCJ,sWK(J)+GCI.K)*DCK)*OCK.J)
I J=I.N
GCl.J)·"ICCJI
RETURN
END
DO
SCOOOPCO.C.CP.L.W.M.N.Nl.N2.N3.N41
IMPLICIT REAL*e CA-H.D-Z)
DIMENSION OCNI.N2).OPC~3.N41.0CI).WCI)
DO 2 Jal.N
DO :3 I-I.L
"(11-0.
DO J Kal.11
W(I) •• CI)+CCI.K)*OPCK.J)*OCI)
DC 2 lal.L
OPCI.JI-.. CI)
RI!TURh
END
SueROUTI~E
3
2
3
SUBROUTINE SCABTCA.B.L.M.N.C.W.Nt.N2.N3.N4.NS.N6.N7.N8)
IMPLICIT REAL*e CA-H.O-Z)
DIME~SIO~ ~CNI.N2).BC~3.~4).CCN5.N6'
•• <N7.N81
IFCNe.EO.l)GO TO 5
DO :3 I=I.L
00 3 J=1.N
w(I.J)-O.
CO 3 te-I. M
w(I.J)=wCI.JI+ACI.K)*SCJ.K,
DO 4 I=t.L
CO 4
.I=I.~
C ( I • J )-. C I • J I
RETURN
5
8
7
9
10
IFC~7.EC.l' GO TO 9
CO 7 Ial.L
DO e JaI.N
.(J.l)aO.
DO e Kal.M
WCJ.II-WCJ.tl+ACI.K)*SCJ.KI
DO 7 J=I.N
C ( I • J 1='11 C J • 1 I
f:'ETUr:N
DO 10 1= I.L
CO to J=I. II:
CC I. J )=0.
00 10 K=t.",
C(I.J)=CCI.J)+ACI.K)*eCJ.K)
RETUR~
END
145
SUBROUTINE SCSVIN(O.N.W.CONST)
t(I).W(I)
CO 1 l= 1 .N
OI~e~SIO~
w(
n.o( I )
W(N+I)-I
SUIIII=O.
N~I="-l
DO 2 r=I.~"'1
11'1" 1+1
DO 3 .pIP1.N
IF(W(IJ.LT.W(JJJGO TO 3
..... W( Il
W(IJ=W(J'
w(J »=11...
WA=W(N+I)
lHN+I)-w(I\+J»
W(N+~)=WA
3
CONT I~UE
SUM8SU~+'(I)*W(I)
IF(SUM.LT.CONSTJD(W(N+I»=O.
IF(SUM.GT.CONST'GQ TO 4
2
4
5
CONTI~UE
CO !5 l-l.N
IF(O(I).NE.O.)O(lt·l./C(I)
CONTINue
~ETU,"N
END
SUBROUTINE SCTRS(OV.Vl.L.S.Wl.W2.Nl.N2.N3.N4.NS.N6)
CV(Nt.N2t.Vl(N3.N4).S (2.2).Wt(N5.N6).W2(t»
CALL SCAB(OV.OV.L.L.L.Wl.W2.Nl.N2.Nl.N2.N!5.N6.1.1)
OIIlllE~SION
S(2.2)=FCT~Ce(Wl.L.NS.N6'
CALL SCAB(CV.Vl.L.L.L.lt.W2.NS.N6.Nt.N2.NS.N6.1.1)
CALL SCAB(Wl.OV.L.L.L.Wl.W2.NS.N6.Nt.N2.NS.N6.N7.1)
S(I.2)"FCT~CE(Wl.L.NS."6)
5(2.1)=S(I.2'
CALL SCAe('I.Vt.L.L.L •• t.W2.NS.N6.N~.N4.N!5.N6.N7.1)
S(I.l)=FCTf;CE(wl.L.NS."6)
RETU~"
END
•
146
.--
COUBLE
PReCISIO~
t~PLICtT
FUNCTION CCGAMA(X.A.B)
~EAL.8(A-H.O-l)
IF(A.GT.O.CO .AND. B.GT.O.OO)G] TO 10
COGAIi'A·-l.00
RETU~"
10
IF(X.GT.O.rOt GO TO 20
COGA"'A-O.OO
20
l=x/!!
IF(X.LT...*etGO TO 200
Al=1.00
A2=1.00+Z-A
Bl= 1.00
82.. Z
~ETUf;N
...·1
F 1-"21'82
100
110
120
200
210
220
AM-I. CO-A
AN=O.OO
NaN+l
IF(N.GT.51)GO TO 400
IF(MOC(N.2t.Ea.I)GO TO 110
AN="N+I.OO
A3""2+AN.Al.
83=B2+AN*BI
GO TO 120
AM-" 111+ 1. 00
A3=Z.A2+A~."1
83= Z*E2+"~*Bl
F2-"3.IB3
IF(OAes(F2-Fl).LT.l.D-10)GO TO 300
Fl-F2
AI"" 2
A2=A3
BI=82
82=83
GO TO 100
Al=l.DO
A2.... + . . 00-2
Bl-1.CO
B2=A+l.00
Nal
FI- .. 2/82
AMa-A.Z
ANaO.DO
BC=A+I.OO
NaN+I
[F(N.GT.!!I tGO TO 400
IF(Ii'OC(N.2t.EO.ltGO TO 220
o4N=:o4N+Z
BC=BC+I.OO
~3=e C.o42 +A ,,*.. t
B3.8C.B2+o4 .... BI
GO TO 230
o4"'=o4II-Z
BC=BC+l.00C
o43=eC'''2+AIi'.''1
147
e-.
230
300
310
.00
E!3""8C*B2.AIf*et
F2=A3;B3
IFCCABSCF2-Fl).LT.t.O-tO, GO TO 300
FI=F2
Al=A2
A2=A3
el=82
B2... e~
GO TO 210
COEF= •• CLOGCZ)-Z-OLGA~ACA)
COEFaCEXPCCOEF)
IFCX.LT.A*e)GO TO 310
COGAIfA=1.OO-cOEF,CZ.F2)
RETURN
COGAIfA=CCEF'CA.F2)
JlETUR"
COGAMA--2.00
RETUR'"
ENO
SUBJlOUTINE SCLPARCPR.IT.ITM.ITN.Xt.X2.CONST.R.X.NR.NC.eL.NB.C.CA)
DIMEN!IO'" 'CNR.NC)
INTEGER PR(10).R.BL.NBCt).C.CA
INTEGER B(72).WPC72).SFC72)
JlEAOCt.t)PR.IT.ITM.ll .... 'I.X2.CONST
FORMATCtOI1.315.3F10.O)
WRITEC3.~)
5
t 1
6
3
to
•
FORMATC"
B WP SP' •
R=t
REAOC5.3.E ...O=tO)BCR'.WPCR).SPCR)
WRITEC3.e)BCR).WP(R).SPCR)
FORM"TUX. 313)
FOJlMA1C3It )
FhoR+t
GO TO tl
R=R-t
CALL SCXtx.8.WP.SP.R.NR.NC.BL.NB.C.CA'
.RITEC3 •• )PR.R.C.CA.IT.ITM.ITN.XI.X2.CONST.BL.CNBCI,.1=1.BL)
FORMATC' OP •••••••• ·.IOlt
. , . ROWS •••••• ·.13
. , . COL ••••••• ·.13
./. COL x ••••• ·.13
. , . IT!R •••••• ·.16
*/. ITER"'••••• • .16
. / . ITERN ••••• •• 16
.,. At •••••••• ·.FIO.3
. , . A2 •••••••• ·.Ft 0.3
.,. CONST ••••• •• E t3.6
. / ' N eL•••••• ·.I3
./. N OB IN BL.·.2013
* f;ETU
) 1=1\
END
© Copyright 2026 Paperzz