ABSTRACT
LANDOIS, LUIS LEON. Generalized Least Squares Analysis of the Split-Plot Model Using An Estimated Variance-Covariance Matrix. (Under the direction of FRANCIS GIESBRECHT.)
The variance-covariance matrix of the observations obtained from a
split-plot type of experiment depends on two variance components.
In
the unbalanced case the best linear unbiased estimates of the treatment
effects depend upon these unknown components.
In this thesis the use
of generalized least squares to estimate the treatment effects using an
estimated variance-covariance matrix is considered.
C. R. Rao's (1979)
minimum norm quadratic estimation (MINQE) and invariance with respect to
translation of the fixed effects are principles used to obtain estimates
of the variance components.
It is shown that invariant unbiased MINQE
of the two variance components converge in probability to their true
values.
This is used to establish asymptotic properties of the generalized least squares estimators.
Alternate estimators are obtained by
using an iterated invariant MINQUE and by imposing the side condition
that the estimate of the split-plot component be non-negative.
A method
for estimating the degrees of freedom associated with estimated variances
of linear constraints among the generalized least squares estimates of
the fixed effects is developed.
This permits the construction of tests
of hypotheses about constraints among the fixed effects.
A small simulation is included to check on the performance of the procedure.
BIOGRAPHY
Luis Leon Landois-Palencia was born August 27, 1943, in Los Herreras, Nuevo Leon, Mexico.
He received his elementary education in
Torreon Coahuila, Mexico, his secondary and preparatory education in
Monterrey Nuevo Leon, Mexico.
He received the degree of Ingeniero-Agronomo from the Facultad-de-Agronomia of the Universidad de Nuevo Leon in December 1967.
He worked
as an assistant professor for six months in the Facultad-de-Agronomia.
In 1968 he entered the Centro-de-Estadistica-y-Calculo of the
Colegio-de-Postgraduados at Chapingo, Mexico, where he finished his
courses in statistics in 1970, and he received the Master of Science in
Statistics in May 1976.
While he was pursuing his master's degree, he was an assistant
professor for the Biometry course at the undergraduate level, a position
that he kept until 1970.
He then became a professor of the course and
held this position until 1976.
In 1970, he began to work as a consultant in Statistics and
Statistical Computing and as a professor in the Computer Department at
the same center.
From 1972 until December 1976 he was the Head of the
Computer Department and at the same time he was a part-time professor
at the graduate level for the Computer Science Department.
In January 1977, the author enrolled in the Department of Statistics
at North Carolina State University at Raleigh on a full-time basis until
May 1981.
He has now returned to his duty at the Centro-de-Estadistica-
y-Calculo at Chapingo, Mexico.
The author has been married to Guadalupe Alvarez-Icaza de Landois
since 1970.
His wife has studied Economics at the Universidad Nacional
Autonoma de Mexico.
ACKNOWLEDGEMENTS
The author wishes to express his appreciation to many people who
have advised and assisted in the preparation of this thesis.
In
particular, he would like to express his sincere appreciation and
special thanks to Professor F. G. Giesbrecht, Chairman of the Graduate
Committee, whose advice and guidance have been invaluable.
Thanks are
also extended to Professors T. M. Gerig, A. R. Gallant, R. E. Hartwig and
J. W. Bishir for their helpful suggestions to this study.
His appreciation is extended to Professor J. F. Monahan for his
helpful discussions and to Professor D. D. Dickey for letting him use
his computing library.
Also, the author acknowledges all professors
in Statistics, Mathematics and Computer Science who assisted him in
any way in the pursuit of his doctoral studies.
He also wishes to express his gratitude to Professor A. Carrillo,
Head of the Centro-de-Estadistica-y-Calculo, for the support and interest
during his studies.
Further, he gives thanks to CONACYT (Consejo Nacional de Ciencia
y Tecnologia), the Mexican Agency which gave him financial aid and
academic support in the realization of his doctoral studies, and to the
Ford Foundation for providing partial support to his family.
To his
friend, Said E. Said, he also gives thanks for his invaluable help in
the correction of his English writing.
The author would like to express special thanks to his mother,
Carmen Palencia de Chavarria, for her sacrifices and encouragement
during all his educational life; in memorial to his stepfather,
Demetrio Chavarria A., who was always pleased with his achievements;
to his brothers, Jesus, Armando, Demetrio and Roberto; to his aunts,
Refugio and Maria Luisa; and to his cousins, Esther and Bertha, he
gives very special thanks.
Also, he extends thanks to his family-in-law for their moral support.
He also expresses thanks to Mrs. Carolyn K. Samet for the excellent typing of this thesis.
Finally, he would like to express his deepest thanks to his wife,
Guadalupe, for her love, patience, time and encouragement, without
which he could never have completed these studies.
TABLE OF CONTENTS

                                                                    Page

LIST OF TABLES ................................................... viii
LIST OF FIGURES .................................................. ix

1. INTRODUCTION .................................................. 1

2. REVIEW OF LITERATURE .......................................... 3
   2.1  Estimation of the Parameters β in a General Linear Model .. 3
   2.2  Missing Values and Unbalanced Data ....................... 7
   2.3  Estimation of Variance Components ........................ 9

3. MINQUE ESTIMATORS FOR THE SPLIT-PLOT VARIANCE COMPONENTS ..... 10
   3.1  The Split-Plot Model ..................................... 10
   3.2  The MInimum Norm Quadratic Unbiased Estimator (MINQUE) ... 11
   3.3  Positive Definite MINQUE Estimators ...................... 19
   3.4  The Variance Component Estimators for the Split-Plot Model 21
   3.5  Seely's Method to Obtain the Estimators of the Variance
        Components under the Invariance Condition ................ 33

4. ASYMPTOTIC PROPERTIES ......................................... 40
   4.1  Properties of the MINQUE Estimators of the Variance
        Components ............................................... 40
   4.2  Behavior of the Vector of Parameters β When the Variance
        Components Are Replaced by Those Obtained by the MINQUE
        Method ................................................... 48

5. TESTING HYPOTHESES ............................................ 52
   5.1  Procedure to Test the Degrees of Freedom Associated with a
        t-test for Testing a Linear Combination of the Vector of
        Parameters β ............................................. 52
   5.2  The Accuracy of the Formula for Approximating the Degrees
        of Freedom ............................................... 56
   5.3  V̂ Is Computed Using the MINQUE Estimators of the Variance
        Components ............................................... 65
   5.4  Simulation Study to Evaluate Adequacy of Approximation ... 68

6. THE ANALYSIS OF VARIANCE ...................................... 75

7. SIMULATION STUDIES ............................................ 81
   7.1  Standard Conditions ...................................... 81
   7.2  Non-standard Conditions .................................. 83

8. SUMMARY AND CONCLUSIONS ....................................... 95

9. RECOMMENDATIONS ............................................... 97

BIBLIOGRAPHY ..................................................... 101

APPENDICES ....................................................... 104
   A.  MINQUE Estimators for the Split-Plot Error Variances ...... 105
   B.  Programs for the Simulation of the Behavior of the Estimate
       Vector of Parameters β .................................... 122
   C.  Programs for Formula 5.1.3 for Approximating the Degrees
       of Freedom ................................................ 125
LIST OF TABLES

Table                                                               Page

5.2.1  Degrees of freedom associated with different degrees of
       unbalanced designs ........................................ 58

5.2.2  Mean, variance, degrees of freedom of the distribution of
       $Z = f(\hat\sigma_1^2, \hat\sigma_2^2)$ (d.f.Z) and degrees
       of freedom associated with formula (5.1.3) (d.f.F$_0$) for
       the 27 cases considered ................................... 63
LIST OF FIGURES

Figure                                                              Page

5.2.1  Distribution of Z for testing a linear contrast between
       blocks for a split-plot model with no missing observations . 60

5.2.2  Distribution of Z for testing a contrast between whole
       plot effects for a split-plot model with no missing
       observations .............................................. 62

5.2.3  Distribution of Z for testing a linear contrast between
       split-plot effects for the split-plot model with no
       missing observations ...................................... 66

5.4.1  Distribution of Z for testing a contrast between split-
       plot effects for a split-plot model ....................... 71

7.1    Flow chart of the simulation of the behavior of the
       estimate vector of parameters β ........................... 82

7.2    Distribution of the estimated general mean for the
       split-plot model under normal distribution, when the
       variance components were replaced by the MINQUE
       estimators ................................................ 84

7.3    Distribution of the estimated general mean for the split-
       plot model, with missing observations under normal
       distribution, when the variance components were replaced
       by the MINQUE estimators .................................. 85

7.4    Distribution of the estimated general mean for the
       split-plot model, with missing observations under
       uniform distribution, when the variance components
       were replaced by the MINQUE estimators .................... 87

7.5    Distribution of the estimated general mean for the
       split-plot model, with missing observations under
       normal distribution, when the variance components
       were replaced by the restricted MINQUE estimators ......... 88

7.6    Distribution of the estimated general mean for the
       split-plot model, with missing observations under
       uniform distribution, when the variance components
       were replaced by the restricted MINQUE estimators ......... 89

7.7    Distribution of the estimated general mean for the
       split-plot model, with missing observations under
       normal distribution, when the variance components
       were replaced by the restricted MINQUE estimators ......... 91

7.8    Distribution of the estimated general mean for the
       split-plot model, with missing observations under
       uniform distribution, when the variance components
       were replaced by the P.S.D. MINQUE estimators ............. 92

7.9    Distribution of the estimated general mean for the
       split-plot model, under slash distribution, when the
       variance components were replaced by the MINQUE
       estimators ................................................ 94
1. INTRODUCTION
The split-plot experiment comes naturally when the researcher has
data that arise from one of the following cases:
Case A. The random selection of whole units from which several measures (or subunits) are made.

Case B. The random selection of whole units followed by the selection of several subunits at random within each of the whole units.

Case C. A factorial arrangement of the factors A and B with p and q levels respectively, in which the factor A with p-1 degrees of freedom is completely confounded with whole plots and B is confounded with subdivisions of the plots, i.e., split plots.

Case D. A large number of treatments assigned to plots in a randomly selected group of blocks, i.e., the incomplete block design.
In each one of these cases the structure of the errors in the model
consists of two random elements, one associated with the whole plots (or
whole units or blocks in case of the incomplete block design) and the
second associated with the split plots (or split units or plots in the
incomplete block).
All of these random elements are assumed to be independently distributed with mean zero. Those in the first group have variance $\sigma_1^2$, those in the second group $\sigma_2^2$.
In general, the model associated with the split-plot experiment can be written as

$$Y = X\beta + \epsilon$$

where Y is an r-vector of observations, X is an $r \times m$ matrix of known constants, $\beta$ is an m-vector of parameters and $\epsilon$ has structure $U_1\epsilon_1 + U_2\epsilon_2$, where $U_1$ and $U_2$ are $r \times m_i$ matrices and $\epsilon_1$ and $\epsilon_2$ are $m_i$-vectors of random errors, such that $E(\epsilon_i) = 0$ and variance-covariance matrix $D(\epsilon_i) = I_{m_i}\sigma_i^2$, $i = 1, 2$.
Analyses are straightforward when the number of split plots is
constant for all whole plots.
The object of this study is to consider
cases when this is not true.
In particular, this thesis is a report of the study of the behavior of the generalized least squares estimator of the parameters $\beta$, say $\hat\beta = (X'V^{-1}X)^{-}X'V^{-1}Y$, using an estimated variance-covariance matrix $\hat V$, i.e., to observe the behavior of $\hat\beta = (X'\hat V^{-1}X)^{-}X'\hat V^{-1}Y$ when $\sigma_1^2$ and $\sigma_2^2$ are estimated using the MINQUE method.
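The two-stage computation just described can be sketched numerically. The following illustration is mine, not the thesis's; it takes a pair of variance-component values as given where, in the thesis, MINQUE estimates would be substituted:

```python
import numpy as np

def split_plot_V(n_i, sigma1_sq, sigma2_sq):
    """V = sigma1^2 * V1 + sigma2^2 * I, where V1 is block diagonal with
    an n_i x n_i block of ones for each whole plot."""
    r = sum(n_i)
    V1 = np.zeros((r, r))
    start = 0
    for n in n_i:
        V1[start:start + n, start:start + n] = 1.0
        start += n
    return sigma1_sq * V1 + sigma2_sq * np.eye(r)

def gls(X, Y, V):
    """Generalized least squares estimate (X' V^-1 X)^- X' V^-1 Y."""
    Vinv = np.linalg.inv(V)
    return np.linalg.pinv(X.T @ Vinv @ X) @ (X.T @ Vinv @ Y)

# Unbalanced layout: whole plots with 3, 2 and 4 split plots (hypothetical).
n_i = [3, 2, 4]
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(9), rng.standard_normal(9)])
V = split_plot_V(n_i, sigma1_sq=2.0, sigma2_sq=1.0)
Y = X @ np.array([1.0, 0.5]) + rng.multivariate_normal(np.zeros(9), V)
print(gls(X, Y, V))
```

Replacing the fixed `sigma1_sq` and `sigma2_sq` by estimates computed from Y itself gives the estimated-V procedure whose behavior the thesis studies.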
A second problem to be considered here is to obtain a general formula for computing the degrees of freedom associated with a t-test for testing a linear function of the parameters $\beta$ of the form

$$H_0\colon\ \lambda'\beta = \lambda'\beta_0 \qquad \text{against} \qquad H_a\colon\ \lambda'\beta \neq \lambda'\beta_0$$

when the variance-covariance matrix V in the generalized least squares estimator $\hat\beta$ is replaced by an estimate $\hat V$ that comes from the MINQUE procedure.
2. REVIEW OF LITERATURE

The general Gauss Markoff linear model can be written as

$$Y = X\beta + \epsilon \qquad (2.1)$$

where Y is the $n \times 1$ response vector, X is an $n \times p$ matrix with rank $q \le p$, $\beta$ is a $p \times 1$ vector of parameters and $\epsilon$ is an $n \times 1$ vector of random errors. It is common to assume that $\epsilon$ has $E[\epsilon] = 0$ and variance-covariance matrix $D[\epsilon] = I\sigma^2$, $\sigma^2$ unknown. Techniques for estimating $\beta$ under these conditions are well known.
Under certain conditions it is much more reasonable to assume that $D[\epsilon] = V$. If V is known, or at least known up to a constant multiplier, then again the analysis is well known, though in general the computing may present some difficulties.

The object of this thesis is to examine several methods for estimating $\beta$ when V is unknown but has some structure; i.e., depends on 2 unknown parameters $\sigma_1^2$ and $\sigma_2^2$. In particular it is assumed that the data (and model) are from an unbalanced split-plot experiment and that $\sigma_1^2$ and $\sigma_2^2$ can be estimated.
2.1 Estimation of the Parameters $\beta$ in a General Linear Model

One of the first persons to work with the weighted least squares principle was Aitken (1943). He showed that the minimum variance linear unbiased estimator of $\beta$, under the linear model (2.1), is $\hat\beta = (X'V^{-1}X)^{-}X'V^{-1}Y$. The obvious difficulty with this approach is that V is often unknown. A possible alternative is to ignore V and use ordinary least squares.
Magness and McGuire (1962) were able to express the differences
between variances of estimated regression coefficients under ordinary
least squares and generalized least squares as functions of the variance-covariance matrix V.
Following along similar lines, McElroy (1967) found that ordinary
least squares estimators also are best linear unbiased, when all errors
have common variance and common non-negative coefficient of correlation
between all pairs.
Williams (1967) extended this to show that ordinary
least squares estimators were best linear unbiased when the rows of X
were a full-rank linear combination of the characteristic vectors of V.
Zyskind (1969) gives several conditions on the form of the covariance matrix such that simultaneously for all models with common specified systematic part every ordinary least square estimator is also best
linear unbiased.
He shows that models with such a covariance structure
may be viewed as possessing just one error term.
Linear models for many
complex experiments, like the split-plot design, with an induced covariance structure under which all linear ordinary least square estimators
are also best linear unbiased estimators, often possess several natural
error terms.
Bement and Williams (1969) studied the regression model with independent but not homogeneous errors.
They examined the performance of
weighted least squares estimators with estimated weights.
Their conclusion was that the weighted least squares estimators had smaller variance
than ordinary least squares estimators if the variances could be estimated with at least 10 degrees of freedom each.
J. N. Rao and Subrahmaniam (1971) studied the following two problems:

(i) Combining k independent estimators $\bar y_i$, $i = 1, 2, \dots, k$, of a parameter $\mu$, where $\bar y_i$ is the mean of $n_i$ observations normally and independently distributed with mean $\mu$ and variance $\sigma_i^2$.

(ii) Estimating the parameters $\alpha$ and $\beta$ in a regression model $Y_{ij} = \alpha + \beta X_i + \epsilon_{ij}$, $j = 1, 2, \dots, n_i$, $i = 1, 2, \dots, k$, where the $X_i$ are known constants and the $\epsilon_{ij}$ are normally and independently distributed with mean zero and variance $\sigma_i^2$.

Their approach differed from that of the previous authors in that they used the MInimum Norm Quadratic Unbiased Estimation (MINQUE) principle introduced by C. R. Rao (1970) to estimate the unknown variances.
They found explicit formulas for the MINQUE estimates of the variances in (i), given by

$$\hat\sigma_i^2 = \frac{n}{n_i(n-2)}\sum_{j=1}^{n_i}(y_{ij}-\bar y)^2 \;-\; \frac{1}{(n-1)(n-2)}\sum_i\sum_j (y_{ij}-\bar y)^2$$

where $n = \sum_i n_i$, $\bar y$ is the overall mean and $s_i^2 = (n_i-1)^{-1}\sum_{j=1}^{n_i}(y_{ij}-\bar y_i)^2$.
These estimators are not necessarily positive and must be modified to provide satisfactory weights. On the basis of a simulation study, they concluded that weighted least squares using the modified MINQUE values is more efficient than the weighted least squares using $s_i^2$ when $n_i$ is small and k is large. Another conclusion was that MINQUE may not lead to substantial gain in efficiency when $n_i = m \ge 8$, especially for small k.
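Since the garbled original makes the exact printed formula hard to verify, the closed form given above can be checked numerically against a direct MINQUE computation, solving the defining linear system $S\sigma = q$ with identity prior weights. This sketch is an added illustration, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_i = np.array([4, 6, 5])
n = int(n_i.sum())
groups = np.repeat(np.arange(len(n_i)), n_i)
y = 3.0 + rng.standard_normal(n) * np.array([1.0, 2.0, 0.5])[groups]

# Closed form: n q_i / (n_i (n-2)) - T / ((n-1)(n-2))
ybar = y.mean()
q = np.array([np.sum((y[groups == i] - ybar) ** 2) for i in range(len(n_i))])
T = q.sum()
sig_closed = n * q / (n_i * (n - 2)) - T / ((n - 1) * (n - 2))

# Direct MINQUE: solve S sigma = q with S_{ii'} = tr(Q V_i Q V_i'),
# q_i = y' Q V_i Q y, and Q = I - J/n (removing the common mean).
Q = np.eye(n) - np.ones((n, n)) / n
Vs = [np.diag((groups == i).astype(float)) for i in range(len(n_i))]
S = np.array([[np.trace(Q @ Vi @ Q @ Vj) for Vj in Vs] for Vi in Vs])
qv = np.array([y @ Q @ Vi @ Q @ y for Vi in Vs])
sig_direct = np.linalg.solve(S, qv)

print(np.max(np.abs(sig_closed - sig_direct)))  # agreement up to rounding
```

The agreement holds for any data set, since the closed form is the Sherman-Morrison solution of the same system.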
Fuller and J. N. K. Rao (1978) considered the problem of estimating the parameter $\beta$ in the linear model with heteroscedastic variances, $Y = X\beta + \epsilon$, where $\epsilon$ is the n-vector of random variables with mean zero and dispersion matrix $V = \text{block diagonal}(\sigma_1^2 I_{n_1}, \dots, \sigma_k^2 I_{n_k})$, the $\{\sigma_i^2\}$ are unknown variances and $I_{n_i}$ is an $n_i \times n_i$ identity matrix. The ordinary least squares estimator of $\beta$ is $\tilde\beta = (X'X)^{-}X'Y$. They defined the two step weighted least squares estimator of $\beta$ by $\hat\beta = (X'\hat V^{-1}X)^{-}X'\hat V^{-1}Y$ where $\hat V = \text{block diagonal}(\hat\sigma_1^2 I_{n_1}, \dots, \hat\sigma_k^2 I_{n_k})$ and $\hat\sigma_i^2$ is an estimator of $\sigma_i^2$. The first step was to compute the estimator $\hat V$ and the second to compute $\hat\beta$. They studied the class of two step estimators of $\beta$ given by $\hat\beta_w = (X'W^{-1}X)^{-}X'W^{-1}Y$, where $W = \text{block diagonal}(w_1 I_{n_1}, \dots, w_k I_{n_k})$ and $w_i = g(\hat\sigma_i^2)$, with $\hat\sigma_i^2 = n_i^{-1}\sum_j \hat\epsilon_{ij}^2$ computed from the ordinary least squares residuals $\hat\epsilon = Y - X\tilde\beta$, for some $g(\cdot)$ such that $0 < \gamma_1 \le w_i \le \gamma_2$ for all i, $\gamma_1$ and $\gamma_2$ being constants. When they replicated the model, the $\hat\beta_w$ reduced to $\hat\beta = (X'\hat V^{-1}X)^{-}X'\hat V^{-1}Y$. They used the two step estimator $\hat\beta_w$ to construct a new estimator of $\sigma_i^2$ and inserted these estimated variances into $\hat\beta_w$ to obtain the three step estimator of $\beta$. They viewed the maximum likelihood estimator as the limit of an iterated process with $W = I$. They investigated the special case of the two step estimators of a common mean and found that the two step estimator was superior to the maximum likelihood estimator for a considerable range of parameter values.
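The two-step scheme just reviewed can be sketched compactly. This is my illustration of the general idea, with invented group structure; it is not code from Fuller and Rao's paper:

```python
import numpy as np

rng = np.random.default_rng(5)
n_i = [5, 5, 5]
n = sum(n_i)
groups = np.repeat(np.arange(3), n_i)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.array([2.0, 1.0])
Y = X @ beta + rng.normal(0.0, np.array([0.5, 1.0, 2.0])[groups])

# Step 1: ordinary least squares residuals give per-group variance estimates.
b_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
resid = Y - X @ b_ols
s2 = np.array([np.mean(resid[groups == i] ** 2) for i in range(3)])

# Step 2: weighted least squares with the estimated variances as weights.
w = 1.0 / s2[groups]
XtW = X.T * w
b_two_step = np.linalg.solve(XtW @ X, XtW @ Y)
print(b_two_step)
```

Re-estimating the variances from the step-2 residuals and reweighting once more would give the three-step estimator described above.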
Fuller and Battese (1973) considered a linear model with a nested error structure. They demonstrated that it is possible to make a relatively simple transformation and then compute the generalized least squares estimators of the fixed parameters. The transformation requires estimates of the variance components. These they estimated using the fitting-of-constants method (Henderson's method III) described in Searle (1968). They estimated the variance-covariance matrix, say $\hat V$, and obtained the value of the generalized least squares estimator $\hat\beta = (X'\hat V^{-1}X)^{-}X'\hat V^{-1}Y$ by the ordinary least squares regression of $\hat V^{-1/2}Y$ on $\hat V^{-1/2}X$. They gave sufficient conditions under which the estimated generalized least squares estimator is unbiased and asymptotically equivalent to the generalized least squares estimator.
Williams (1975) studied convergence rates of weighted least squares to best linear unbiased estimators. He wrote the model as follows:

$$Y = X\beta + V_0^{1/2} Z_n$$

where Y is an $n \times 1$ response vector, X is an $n \times p$ matrix of full rank p, $\beta$ is a $p \times 1$ vector of parameters and $V_0^{1/2}$ is a symmetric square root of the dispersion matrix $V_0$ of the vector Y, and the respective mean vector and dispersion matrix of $Z_n$ are 0 and $I_n$. His study concentrated on situations where the matrix $V_0$ could be written as $V(\theta_0)$, i.e., a function of a relatively limited number of parameters which remained constant as n increased. Under these conditions he showed that for sufficiently large n a weighted least squares estimator $\hat\beta_w = (X'\hat V^{-1}X)^{-}X'\hat V^{-1}Y$ can be constructed and regarded either as an estimator of $\beta$ or of the best linear unbiased estimator $\hat\beta_0 = (X'V_0^{-1}X)^{-}X'V_0^{-1}Y$.

2.2 Missing Values and Unbalanced Data
In a general experiment if the number of observations belonging to
the subclass is the same for all subclasses, then the experiment is said
to have balanced data or no missing observations.
In contrast, if there
is an unequal number of observations in the subclasses or if some subclasses contain no observations at all, the experiment is called
unbalanced or with missing data.
One of the oldest papers that describes methods to estimate the
yield of a missing plot is due to Allan and Wishart (1930).
They
provided formulas for a single missing plot.
Yates (1933) extended that
method to estimate several missing plots, and provided an iterative procedure to estimate the missing plots.
Anderson (1946) derived formulas
for one missing plot in a split-plot experiment.
He used the covariance
method for the derivation in both cases when there is one subplot missing and when there is one whole plot missing.
In principle the covariance
analysis method can be extended to more missing values.
However, this
quickly becomes impractical and one is forced to use methods appropriate
for the general unbalanced case.
Hocking and Speed (1974) compared a number of methods for handling
unbalanced data sets.
Speed, Hocking and Hackney (1978) reviewed the existing methods
for analyzing experimental design models with unbalanced data and related them with existing computer programs.
The methods are distinguished by the hypotheses associated with the sums of squares that are generated.
Their claim is that the choice of the method should be based
on the appropriateness of the hypotheses rather than on computational
convenience or the orthogonality of the quadratic form.
One possible alternative for the experimenter faced with missing
cells is provided by Hocking, Speed and Coleman (1980).
This alternative
is based on the assumption that the experimenter would test the usual
hypotheses if the experiment had been balanced. The hypotheses suggested are derived from the balanced case.
They also provided a criterion for
choosing the hypotheses to be tested in the unbalanced case and in addition also provided an algorithm for testing the desired hypotheses.
The salient feature in the Hocking and Speed (1975), Speed, Hocking
and Hackney (1978) and Hocking, Speed and Coleman (1980) papers is that
only the simple error structure with independent identically distributed
errors is considered.
This thesis deals with unbalanced data and a more complex error
structure.
Emphasis will be on the error structure and the questions
concerning hypotheses to be tested will not be addressed.
2.3
Estimation of Variance Components
The large body of literature dealing with variance component estimation has recently been reviewed by Harville (1977).
Subsequent to
that, Rao and Kleffe (1979) described a series of modifications of the
MInimum Norm Quadratic (MINQ) estimation principle.
In particular,
they discussed MINQ-unbiased, MINQ-invariant and MINQ-unbiased, invariant estimators.
3. MINQUE ESTIMATORS FOR THE SPLIT-PLOT VARIANCE COMPONENTS

3.1 The Split-Plot Model

The linear statistical model appropriate for the basic split-plot design in which observations are taken on $n_i$ split-plots in the ith whole-plot can be written as:
$$y_{ij} = \sum_{k=1}^{m} x_{ijk}\beta_k + u_{ij}, \qquad j = 1, 2, \dots, n_i,\quad i = 1, 2, \dots, a$$

where the $\{y_{ij}\}$ are the observed response values, the $\{x_{ijk}\}$ are the m different control variables, the $\{\beta_k\}$ are the m fixed unknown parameters, and the $\{u_{ij}\}$ are the unobservable random errors. These errors consist of two components, a random element associated with the ith whole-plot, say $v_i$, and a second independent random element associated with the jth subplot in the ith whole-plot, say $e_{ij}$; i.e., $u_{ij} = v_i + e_{ij}$. The $\{v_i\}$ and $\{e_{ij}\}$ are assumed to be independently distributed with zero mean and variances $\sigma_v^2 \ge 0$ and $\sigma_e^2 > 0$ respectively. Also, the $\{v_i\}$ and $\{e_{ij}\}$ are independent of each other. These assumptions imply

$$E[u_{ij}u_{i'j'}] = \begin{cases} \sigma_v^2 + \sigma_e^2 & \text{if } i = i' \text{ and } j = j' \\ \sigma_v^2 & \text{if } i = i' \text{ and } j \ne j' \\ 0 & \text{if } i \ne i'. \end{cases}$$
Alternatively, in matrix notation, the statistical model for the split-plot becomes a general Gauss Markoff model

$$Y = X\beta + \epsilon \qquad (3.1)$$
where Y is an r-vector of observations, X is an $r \times m$ matrix, $\beta \in R^m$ is an m-vector of parameters, and $\epsilon$ is an r-vector of random errors.

For the split-plot design, $\epsilon$ has a special structure. It can be written as $\epsilon = U_1\epsilon_1 + U_2\epsilon_2$, where $U_1$ and $U_2$ are $r \times m_1$ and $r \times m_2$ matrices of known constants and $\epsilon_1$ and $\epsilon_2$ are $m_1$- and $m_2$-vectors of random errors. Also,

$$E[\epsilon_\ell] = 0, \qquad E[\epsilon_\ell\epsilon_\ell'] = \sigma_\ell^2 I_{m_\ell} \ \text{for}\ \ell = 1, 2, \qquad E[\epsilon_1\epsilon_2'] = 0$$

where $I_p$ is the $p \times p$ identity matrix.

It follows that the variance-covariance matrix of $\epsilon$ (or Y) is given by $D(\epsilon) = V(\sigma) = V_1\sigma_1^2 + V_2\sigma_2^2$, where $\sigma_1^2$ and $\sigma_2^2$ are the whole and split plot error variances respectively. Note that if the elements of Y are arranged with $Y' = (Y_{11}, Y_{12}, \dots, Y_{a n_a})$, then $V_1$ is a block diagonal matrix, where each block contains unitary elements and is of order $n_i \times n_i$, and $V_2$ is the identity matrix of order $r \times r$.

The remainder of this chapter will be devoted to obtaining estimates for $\sigma_1^2$ and $\sigma_2^2$.
3.2 The MInimum Norm Quadratic Unbiased Estimator (MINQUE)

The purpose of this section is to present the MINQUE theory for the variance components as initially presented in C. R. Rao (1970, 1972) and C. R. Rao and J. Kleffe (1979).
Let $Y = X\beta + \epsilon$ be a linear model such that Y is an r-vector of observations, X is an $r \times m$ matrix, $\beta \in R^m$ is an m-vector of parameters, and $\epsilon$ is an r-vector of random errors, such that the structure of $\epsilon$ is given by $\epsilon = U_1\epsilon_1 + U_2\epsilon_2 + \dots + U_p\epsilon_p$, where $U_1, \dots, U_p$ are $r \times m_\ell$, $\ell = 1, 2, \dots, p$, matrices of known constants and $\epsilon_1, \dots, \epsilon_p$ are $m_\ell$-vectors of random errors. Also, $E[\epsilon_\ell] = 0$, $E[\epsilon_\ell\epsilon_\ell'] = \sigma_\ell^2 I_{m_\ell}$ for $\ell = 1, 2, \dots, p$, and $E[\epsilon_\ell\epsilon_{\ell'}'] = 0$ for $\ell \ne \ell'$, where $I_{m_\ell}$ is the $m_\ell \times m_\ell$ identity matrix.

It follows that the variance-covariance matrix of $\epsilon$ (or Y) is given by $D(\epsilon) = V(\sigma) = \sum_\ell V_\ell\sigma_\ell^2$, where $V_\ell = U_\ell U_\ell'$, $\ell = 1, 2, \dots, p$.
In general, the above papers deal with the problem of obtaining estimators for the linear parametric function

$$\gamma(\beta, \sigma_*) = c'\beta + f'\sigma_*$$

where $c \in R^m$, $f \in R^p$, $\beta \in R^m$, $\sigma_* \in T$ and $T \subset R^p$ is an open set. Note that the model contains m fixed effects and p variance components.

The estimators of the linear parametric function $\gamma(\beta, \sigma)$ are functions of the observed values and have the form $g(Y) = a'Y + Y'AY$.

Before proceeding to the estimation technique itself, a number of preliminary concepts must be established.
3.2.1 Identifiability

Definition 3.2.1
A parametric function $\gamma(\beta, \sigma)$ is said to be identifiable iff $V(\sigma_1) = V(\sigma_2)$ and $X\beta_1 = X\beta_2$ implies $\gamma(\beta_1, \sigma_1) = \gamma(\beta_2, \sigma_2)$.

The following lemma establishes the necessary and sufficient conditions to have identifiability of a parametric function.

Lemma 3.2.1
The parametric function $\gamma(\beta, \sigma) = c'\beta + f'\sigma$ is identifiable iff $c \in \mathcal{S}(X')$ and $f \in \mathcal{S}(W)$, where $\mathcal{S}(A)$ is the vector space generated by the columns of A, W is the matrix with elements $w_{ij}$ equal to $\mathrm{tr}(V_iV_j)$, and tr represents the trace of a matrix.

The proof is given in C. R. Rao and J. Kleffe (1979).
3.2.2 Unbiasedness

Suitable conditions to establish unbiasedness are given by the following theorem.

Theorem 3.2.1
The estimator $g(Y) = a'Y + Y'AY$ is an unbiased estimator of the parametric function $\gamma(\beta, \sigma) = c'\beta + f'\sigma$ if $c' = a'X$, $X'AX = 0$ and $\mathrm{tr}(AV_t) = f_t$, $t = 1, 2, \dots, p$, where A is a symmetric matrix.

The proof of this theorem is given in C. R. Rao and J. Kleffe (1979). These authors also prove the following theorem concerning the existence of an unbiased estimator.

Theorem 3.2.2
There exists an unbiased estimator $g(Y)$ if $c \in \mathcal{S}(X')$ and $f \in \mathcal{S}(Q)$, where $Q = (q_{tt'})$ is a matrix with $q_{tt'} = \mathrm{tr}(V_tV_{t'} - P_XV_tP_XV_{t'})$ and where $P_X$ is the projection operator onto the space generated by the columns of X.
3.2.3 An Invariance Principle

If the vector of parameters $\beta$ is replaced by $\beta - \beta_0$, where $\beta_0$ is arbitrary, then the model (3.1) becomes

$$Y - X\beta_0 = X(\beta - \beta_0) + \epsilon.$$

Letting $Y_d = Y - X\beta_0$ leads to $Y_d = X(\beta - \beta_0) + \epsilon$, and the quadratic estimator corresponding to $Y'AY$ becomes $Y_d'AY_d$. The quadratic estimator $Y'AY$ is said to be invariant with respect to X if $Y_d'AY_d$ and $Y'AY$ are equal for all $\beta_0$. It is clear from the expression

$$Y_d'AY_d = (Y - X\beta_0)'A(Y - X\beta_0) = Y'AY - 2\beta_0'X'AY + \beta_0'X'AX\beta_0 = Y'AY$$

that $AX = 0$ is both necessary and sufficient for invariance to hold.
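A quick numerical illustration of this fact (an added sketch, not part of the original text): any symmetric A obtained by sandwiching a matrix between copies of $I - P$, with P the projector onto the column space of X, satisfies $AX = 0$, and the resulting quadratic form is unchanged by translations of the fixed effects.

```python
import numpy as np

rng = np.random.default_rng(2)
r, m = 8, 3
X = rng.standard_normal((r, m))

# Construct a symmetric A with A X = 0.
P = X @ np.linalg.pinv(X)          # projector onto the column space of X
M = np.eye(r) - P
B = rng.standard_normal((r, r))
A = M @ ((B + B.T) / 2) @ M        # symmetric, and A X = 0

Y = rng.standard_normal(r)
beta0 = rng.standard_normal(m)
Yd = Y - X @ beta0                 # translate the fixed effects
print(np.isclose(Yd @ A @ Yd, Y @ A @ Y))
```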
3.2.4 Minimum Norm Principle

If the $\{\epsilon_\ell\}$ were known, then a natural invariant estimator of the parametric function $f'\sigma$ is

$$\frac{f_1}{m_1}\epsilon_1'\epsilon_1 + \dots + \frac{f_p}{m_p}\epsilon_p'\epsilon_p.$$

This can be written as

$$\epsilon_*'\,\Delta\,\epsilon_* \qquad (3.2.4.1)$$

where $\epsilon_* = (\epsilon_1', \dots, \epsilon_p')'$ and

$$\Delta = \text{block diagonal}\Big(\frac{f_1}{m_1}I_{m_1}, \dots, \frac{f_p}{m_p}I_{m_p}\Big).$$
In practice, the $\{\epsilon_\ell\}$ are not observable and the natural estimator of $f'\sigma$ is $g(Y) = Y'AY$. Imposing the condition for invariance leads to

$$g(Y) = (X\beta + \epsilon)'A(X\beta + \epsilon) = \epsilon'A\epsilon$$

and since $\epsilon$ has structure $U_1\epsilon_1 + \dots + U_p\epsilon_p$, then

$$g(Y) = (U_1\epsilon_1 + \dots + U_p\epsilon_p)'A(U_1\epsilon_1 + \dots + U_p\epsilon_p) = \epsilon_*'\,U'AU\,\epsilon_* \qquad (3.2.4.2)$$

where $\epsilon_* = (\epsilon_1', \epsilon_2', \dots, \epsilon_p')'$ and $U = (U_1 : U_2 : \dots : U_p)$.

A reasonable strategy is to make the difference between (3.2.4.1) and (3.2.4.2), $\epsilon_*'(U'AU - \Delta)\epsilon_*$, small. The MINQUE principle is to minimize $\|U'AU - \Delta\|$, where $\|H\|$ denotes the Euclidean norm $\sqrt{\mathrm{tr}(HH')}$ of a matrix.
This leads to the formal definition of the MINQUE of a parametric function:

Definition 3.2.2
The quadratic form $g(Y) = Y'AY = \epsilon_*'U'AU\epsilon_*$ is said to be a MINQUE of the parametric function $f'\sigma = \sum_{\ell=1}^{p} f_\ell\sigma_\ell^2$ if the matrix A is determined such that $\|U'AU - \Delta\|$ is a minimum, subject to the conditions $AX = 0$ and $\mathrm{tr}(AV_\ell) = f_\ell$, $\ell = 1, 2, \dots, p$.
In order to obtain an explicit solution, it is convenient to minimize the square of the Euclidean norm, $\|U'AU - \Delta\|^2 = \mathrm{tr}((U'AU - \Delta)(U'AU - \Delta))$. This can be simplified somewhat by using the following result.
Lemma 3.2.2
Let A be a real symmetric matrix subject to the conditions $AX = 0$ and $\mathrm{tr}(AV_\ell) = f_\ell$, $\ell = 1, \dots, p$. Then $\mathrm{tr}(U'AU\Delta) = \mathrm{tr}(\Delta\Delta)$.

Proof
Since $\Delta$ is block diagonal with blocks $(f_\ell/m_\ell)I_{m_\ell}$, the $\ell$th diagonal block of $U'AU\Delta$ is $(f_\ell/m_\ell)U_\ell'AU_\ell$. Hence

$$\mathrm{tr}(U'AU\Delta) = \sum_{\ell=1}^{p} \frac{f_\ell}{m_\ell}\,\mathrm{tr}(U_\ell'AU_\ell) = \sum_{\ell=1}^{p} \frac{f_\ell}{m_\ell}\,\mathrm{tr}(AV_\ell) = \sum_{\ell=1}^{p} \frac{f_\ell^2}{m_\ell}$$

where $V_\ell = U_\ell U_\ell'$, because $\mathrm{tr}(AV_\ell) = f_\ell$. On the other hand,

$$\mathrm{tr}(\Delta\Delta) = \sum_{\ell=1}^{p} \Big(\frac{f_\ell}{m_\ell}\Big)^2\,\mathrm{tr}(I_{m_\ell}) = \sum_{\ell=1}^{p} \frac{f_\ell^2}{m_\ell}\,. \qquad \square$$

It follows that

$$\mathrm{tr}((U'AU - \Delta)(U'AU - \Delta)) = \mathrm{tr}(U'AUU'AU) - 2\,\mathrm{tr}(U'AU\Delta) + \mathrm{tr}(\Delta\Delta) = \mathrm{tr}(U'AUU'AU) - \mathrm{tr}(\Delta\Delta) = \mathrm{tr}(AVAV) - \mathrm{tr}(\Delta\Delta)$$

where $V = UU'$. Since $\mathrm{tr}(\Delta\Delta)$ does not involve A, the problem of MINQUE is reduced to minimizing $\mathrm{tr}(AVAV)$ subject to the conditions $AX = 0$ and $\mathrm{tr}(AV_\ell) = f_\ell$, $\ell = 1, 2, \dots, p$.
Also, since the $\epsilon_\ell$ in the model (3.1) may have different standard deviations, it is reasonable to rewrite the difference $\epsilon_*'(U'AU - \Delta)\epsilon_*$ in terms of standardized variables $\eta_\ell = \epsilon_\ell/\sigma_\ell$. This leads to the quadratic form

$$\eta_*'\,D(U'AU - \Delta)D\,\eta_*, \qquad D = \text{block diagonal}(\sigma_1 I_{m_1}, \dots, \sigma_p I_{m_p}).$$

The problem is now to minimize $\mathrm{tr}(AV_*AV_*)$ under the conditions $AX = 0$ and $\mathrm{tr}(AV_\ell) = f_\ell$, $\ell = 1, 2, \dots, p$, where $V_* = V_1\sigma_1^2 + \dots + V_p\sigma_p^2$. In practice the $\{\sigma_\ell^2\}$ will be unknown and will have to be replaced by the best values available from prior knowledge. Note that knowledge of the ratios of the variance components is sufficient.
The required solution is obtained via the following theorem.

Theorem 3.2.3
Let P be the projection operator onto the space generated by the columns of X, using the inner product $(X, Y) = X'V^{-1}Y$, where V is a positive definite matrix, and defined by $P = X(X'V^{-1}X)^{-}X'V^{-1}$. The minimum of $\mathrm{tr}(AVAV)$ subject to the conditions $AX = 0$ and $\mathrm{tr}(AV_\ell) = f_\ell$, $\ell = 1, \dots, p$, is attained at $A^* = \sum_\ell \lambda_\ell Q_V'V_\ell Q_V$, where $Q_V = V^{-1}(I - P)$ and $\lambda' = (\lambda_1, \dots, \lambda_p)$ is determined from the equations $S\lambda = f$, where $f' = (f_1, \dots, f_p)$, $S = (s_{\ell\ell'})$ and $s_{\ell\ell'} = \mathrm{tr}(Q_VV_\ell Q_VV_{\ell'})$.

The proof of this theorem is given in C. R. Rao (1972).

It follows that the MINQUE of $\sum_\ell f_\ell\sigma_\ell^2$ is

$$g(Y) = Y'A^*Y = Y'\Big(\sum_\ell \lambda_\ell Q_V'V_\ell Q_V\Big)Y = \lambda'q$$

where $q' = (Y'Q_V'V_1Q_VY,\ Y'Q_V'V_2Q_VY,\ \dots,\ Y'Q_V'V_pQ_VY)$ and $\lambda$ is a solution of $\sum_\ell \lambda_\ell\,\mathrm{tr}(Q_VV_\ell Q_VV_{\ell'}) = f_{\ell'}$, $\ell' = 1, 2, \dots, p$, that is, $S\lambda = f$. In this case $\lambda = S^{-1}f$ is a solution to $S\lambda = f$, provided the inverse of S exists. Also

$$\lambda'q = (S^{-1}f)'q = f'S^{-1}q = f'\hat\sigma$$

where $\hat\sigma$ is a solution to $S\hat\sigma = q$.
3.3
= q.
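As an illustrative numerical sketch of this computation (not part of the original development), Python can be used as a matrix language. The layout below, three whole plots of sizes 2, 3 and 2, a two-column design matrix, and unit prior values, is an assumption chosen only for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n_i = [2, 3, 2]                      # illustrative whole-plot sizes
n = sum(n_i)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # fixed effects
V1 = np.zeros((n, n))                # block diagonal of J_i (whole-plot part)
s = 0
for ni in n_i:
    V1[s:s+ni, s:s+ni] = 1.0
    s += ni
V2 = np.eye(n)                       # split-plot (residual) part

V = 1.0 * V1 + 1.0 * V2              # prior values alpha_1^2 = alpha_2^2 = 1
Vinv = np.linalg.inv(V)
P = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)  # projector P
Q = Vinv @ (np.eye(n) - P)           # Q_V = V^{-1}(I - P)

# S_{ll'} = tr(Q V_l Q V_l') and the minimizing matrix A* for f = (1, 0)
S = np.array([[np.trace(Q @ Vl @ Q @ Vm) for Vm in (V1, V2)]
              for Vl in (V1, V2)])
f = np.array([1.0, 0.0])             # target parametric function sigma_1^2
lam = np.linalg.solve(S, f)
A = lam[0] * Q @ V1 @ Q + lam[1] * Q @ V2 @ Q
```

The conditions of Theorem 3.2.3 can then be confirmed directly: $AX = 0$ (invariance) and $\mathrm{tr}(AV_\ell) = f_\ell$ (unbiasedness).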
3.3 Positive Definite MINQUE Estimators

A criticism of the estimators defined in section 3.2 is that the estimates may be negative.

Rao and Kleffe (1979) derived a procedure to obtain non-negative definite (n.n.d.) estimators of the variance components. They showed that if the following hold:

(i) $V_\ell \geq 0$ for $\ell = 1, 2, \ldots, p$ ($V_\ell \geq 0$ means positive semidefinite),

(ii) $V = \sum_\ell V_\ell$,

(iii) $V_{(\ell)} = V - V_\ell$,

(iv) $B_\ell = \tilde X'V_\ell\tilde X$ and $B_{(\ell)} = \tilde X'V_{(\ell)}\tilde X$, where $\tilde X$ is any matrix in the space orthogonal to $C(X')$,

then there exists an n.n.d. quadratic unbiased estimator of $\sigma_\ell^2$ iff $C(B_\ell) \not\subset C(B_{(\ell)})$.

Rao and Kleffe (1979) also showed that $C(B_\ell) \not\subset C(B_{(\ell)})$ is equivalent to $(I - P_G)V_\ell(I - P_G) \neq 0$, where $P_G$ is the projection operator onto the space generated by the columns of the compound matrix $G = [X : V_{(\ell)}]$.
3.3.1 Positive Semi-definite Estimator for the Split-Plot Error Variance

It is informative to apply the above result to the model for the split-plot experiment. Clearly, $V_2 = I$, $V_1$ is a block diagonal matrix, and $G = [X : V_1]$. Also
$$(I - P_G)V_2(I - P_G) = (I - P_G)(I - P_G) = I - P_G \neq 0$$
because $P_G$ is not the identity matrix. Consequently there exists a non-negative definite quadratic estimator for $\sigma_2^2$.

The non-negative quadratic unbiased estimator $\hat\sigma_2^2$ of $\sigma_2^2$ is given by
$$\hat\sigma_2^2 = Y'(I - P_G)Y/\mathrm{tr}(I - P_G).$$
Unbiasedness follows from
$$E(\hat\sigma_2^2) = \big(\mathrm{tr}(I - P_G)\big)^{-1}\mathrm{tr}\big(E(Y'(I - P_G)Y)\big) = \big(\mathrm{tr}(I - P_G)\big)^{-1}\mathrm{tr}\big((I - P_G)E(YY')\big)$$
$$= \big(\mathrm{tr}(I - P_G)\big)^{-1}\mathrm{tr}\big((I - P_G)V\big) \qquad \text{(using } (I - P_G)X = 0)$$
$$= \big(\mathrm{tr}(I - P_G)\big)^{-1}\big[\mathrm{tr}\big((I - P_G)V_1\big)\sigma_1^2 + \mathrm{tr}\big((I - P_G)V_2\big)\sigma_2^2\big]$$
$$= \big(\mathrm{tr}(I - P_G)\big)^{-1}\big(\mathrm{tr}(I - P_G)\big)\sigma_2^2 = \sigma_2^2,$$
since $(I - P_G)V_1 = 0$, the columns of $V_1$ lying in the space generated by the columns of $G$.

It is to be noted that this unbiased estimator for $\sigma_2^2$ agrees with the one obtained by Henderson's method III.
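A numerical sketch of this estimator follows; the layout sizes, coefficients and variance components are illustrative assumptions, not data from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n_i = [2, 3, 2]                        # illustrative whole-plot sizes
n = sum(n_i)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
U1 = np.zeros((n, len(n_i)))           # whole-plot incidence; V_1 = U1 U1'
s = 0
for k, ni in enumerate(n_i):
    U1[s:s+ni, k] = 1.0
    s += ni
G = np.hstack([X, U1])                 # compound matrix G = [X : U1]
PG = G @ np.linalg.pinv(G)             # projector onto C(G)
M = np.eye(n) - PG
# one simulated response with sigma_1^2 = 0.5, sigma_2^2 = 1.0
Y = X @ np.array([1.0, 2.0]) \
    + U1 @ (np.sqrt(0.5) * rng.standard_normal(len(n_i))) \
    + rng.standard_normal(n)
sigma2_hat = (Y @ M @ Y) / np.trace(M)  # n.n.d. estimator of sigma_2^2
```

Because $(I - P_G)U_1 = 0$, the whole-plot component drops out of $E(Y'(I - P_G)Y)$, which is exactly the unbiasedness argument above; the estimator is a ratio of non-negative quantities and so is always non-negative.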
Observe that in the notation of S. R. Searle (1968) the model (3.1) can be rewritten as
$$Y = X_a\beta_a + X_b\beta_b + \varepsilon,$$
where $X_a = X$, $X_b = U_1$, $\beta_a = \beta$, $\beta_b = \varepsilon_1$ and $\varepsilon = U_2\varepsilon_2$. Using the notation $X_{a,b} = [X_a : X_b]$, write
$$X_{a,b}'X_{a,b} = \begin{bmatrix} X_a'X_a & X_a'X_b \\ X_b'X_a & X_b'X_b \end{bmatrix}.$$
Also let $R(\beta_a, \beta_b) = Y'P_{X_{a,b}}Y$. It follows that $E[Y'Y - R(\beta_a, \beta_b)]$ involves only $\sigma_2^2$, and $Y'Y - R(\beta_a, \beta_b) = Y'(I - P_{X_{a,b}})Y$.

Now note that $C(X : V_1) = C(X : U_1)$, since $V_1 = U_1U_1'$. It follows that the estimator of $\sigma_2^2$ obtained from Henderson's method III agrees with the n.n.d. estimator of Rao and Kleffe.
3.4 The Variance Component Estimators for the Split-Plot Model

3.4.1 Preliminary Definitions

In order to construct explicit forms for the estimators of the variance components of the split-plot experiment, the following sequence of definitions is needed.
Definition 3.4.1
A submatrix or block matrix is a matrix that is obtained from the
original matrix by deleting certain rows and columns.
Definition 3.4.2
A partitioned matrix is a matrix whose elements are block matrices.
Example:
$$A = \left[\begin{array}{cc|ccc} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ \hline a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end{array}\right] = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}.$$
It is convenient to denote a partitioned matrix
$$A = \mathrm{PMATX}(A_{kk'};\ k = 1, 2, \ldots, a,\ k' = 1, 2, \ldots, b)$$
or simply $A = \mathrm{PMATX}(A_{kk'})$ if there is no confusion in the order of the indices.
Definition 3.4.3
The matrix $A$ is a block diagonal matrix if $A$ contains block matrices in the diagonal and zeros elsewhere, and is written as
$$A = \mathrm{BDMATX}(A_k;\ k = 1, 2, \ldots, a)$$
or more simply $A = \mathrm{BDMATX}(A_k)$.

Definition 3.4.4
The matrix $A$ is a column matrix if $A$ can be written as
$$A = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_a \end{bmatrix},$$
where the $\{A_k\}$ are block matrices having the same number of columns. The notation
$$A = \mathrm{CMATX}(A_k;\ k = 1, 2, \ldots, a) \qquad \text{or} \qquad A = \mathrm{CMATX}(A_k)$$
will be used.

Definition 3.4.5
Define $a$ as a column vector if $a$ can be written as
$$a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{bmatrix},$$
where the $\{a_i\}$ are real numbers. Also write
$$a = \mathrm{CVECTOR}(a_i;\ i = 1, 2, \ldots, k) \qquad \text{or} \qquad a = \mathrm{CVECTOR}(a_i).$$

Definition 3.4.6
The matrix $A$ is a row matrix if $A$ is written as $A = [A_1 : A_2 : \cdots : A_a]$, where the $\{A_k\}$ are block matrices having the same number of rows. This will also be written as $A = \mathrm{RMATX}(A_k;\ k = 1, 2, \ldots, a)$ or $A = \mathrm{RMATX}(A_k)$.

Definition 3.4.7
The symbol $a'$ will denote a row vector if $a' = [a_1, a_2, \ldots, a_k]$, where the $\{a_i\}$ are real numbers. Alternatively
$$a' = \mathrm{RVECTOR}(a_i;\ i = 1, 2, \ldots, k) \qquad \text{or} \qquad a' = \mathrm{RVECTOR}(a_i).$$
Using this notation one can write
$$y_i' = [y_{i1}, y_{i2}, \ldots, y_{in_i}], \qquad Y' = \mathrm{RMATX}(y_i'),$$
and
$$x_{ij}' = [x_{ij1}, x_{ij2}, \ldots, x_{ijm}], \qquad X_i = \mathrm{CMATX}(x_{ij}';\ j = 1, 2, \ldots, n_i).$$

3.4.2 The MINQUE Equations
Recall from section 3.1 that the error vector $\varepsilon$ has the structure $\varepsilon = U_1\varepsilon_1 + U_2\varepsilon_2$, and
$$D(\varepsilon) = U_1U_1'\sigma_1^2 + U_2U_2'\sigma_2^2 = V_1\sigma_1^2 + V_2\sigma_2^2,$$
where $V_1$ is a block diagonal matrix, each block an $n_i \times n_i$ matrix of ones. It is convenient to let $J_{n_i}$, or simply $J_i$, represent an $n_i \times n_i$ matrix of ones for $i = 1, 2, \ldots, a$. It follows that
$$V_1 = \mathrm{BDMATX}(J_i;\ i = 1, 2, \ldots, a).$$
Now assume that prior information indicates that $\alpha_1^2$ and $\alpha_2^2$ are good guesses for $\sigma_1^2$ and $\sigma_2^2$ respectively. Use this to write $V$ as
$$V = V_1\alpha_1^2 + V_2\alpha_2^2 = \mathrm{BDMATX}(J_i\alpha_1^2 + I_i\alpha_2^2).$$
Lemma 3.4.1
If $A_i = J_i\alpha_1^2 + I_i\alpha_2^2$, where $\alpha_1^2 \geq 0$ and $\alpha_2^2 > 0$, $A_i$ of order $n_i \times n_i$, then $A_i^{-1}$ is given by
$$A_i^{-1} = \alpha_2^{-2}I_i - \alpha_1^2\alpha_2^{-2}(n_i\alpha_1^2 + \alpha_2^2)^{-1}J_i.$$
It follows that $V^{-1} = \mathrm{BDMATX}(a_0I_i + a_iJ_i)$, where $a_0 = \alpha_2^{-2}$ and
$$a_i = -\alpha_1^2\alpha_2^{-2}(n_i\alpha_1^2 + \alpha_2^2)^{-1}.$$
Observe that $V_1 = \mathrm{BDMATX}(J_i)$ and $V_1^{1/2} = \mathrm{BDMATX}(n_i^{-1/2}J_i)$, since $J_i^2 = n_iJ_i$.
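The inversion formula of Lemma 3.4.1 is easy to verify numerically; the values of $n_i$, $\alpha_1^2$ and $\alpha_2^2$ below are arbitrary illustrative choices:

```python
import numpy as np

ni, a1sq, a2sq = 4, 0.7, 1.3          # illustrative n_i, alpha_1^2, alpha_2^2
J = np.ones((ni, ni))
I = np.eye(ni)
A = J * a1sq + I * a2sq               # A_i = J_i alpha_1^2 + I_i alpha_2^2
# closed form of Lemma 3.4.1
A_inv = I / a2sq - (a1sq / a2sq) / (ni * a1sq + a2sq) * J
```

Multiplying out, the $J$-terms cancel because $J_i^2 = n_iJ_i$, which is exactly why the inverse stays within the span of $I_i$ and $J_i$.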
1
In section 3.2 it was shown that the MINQUE of
2
L~f~o~
is g(Y)
=
A'q,
whereq' = (Y'QVvlQVY' .•• , Y'QvQpQvY) and A is a solution for
L~A~tr(QvV~Qvv~,)
=
f~"
~'
= 1,
Also, it was shown that g(Y)
... , p.
= fro"
where 0 is a solution to
2
L~02tr(QVV2QVV~,) = Y'QVV~,QVY' 2' = 1, •.. , p.
Therefore, letting 2,2'
= 1,2,
the estimators for the split-plot
variance component will be completely determined by solving the following system of equations:
[~~]
.
(3.4.2.1)
In order to obtain an explicit equation for the estimators of the variance components, it is necessary to compute the terms involved in
(3.4.2.1) .
3.4.2.1 Computing $Y'Q_VV_1Q_VY$

Note first of all that $Y'Q_VV_1Q_VY = Y'Q_VV_1^{1/2}V_1^{1/2}Q_VY$. Now obtain $V_1^{1/2}Q_VY$ as follows. Write
$$Q_VY = V^{-1}(I - P_V)Y = V^{-1}(Y - X\hat\beta),$$
where $P_V = X(X'V^{-1}X)^-X'V^{-1}$, $\hat\beta = CX'V^{-1}Y$, and $C = (X'V^{-1}X)^-$ is the generalized inverse of $X'V^{-1}X$. Observe that
$$V_1^{1/2}V^{-1} = \mathrm{BDMATX}\big(n_i^{-1/2}(a_0 + a_in_i)J_i\big).$$
Also
$$X\hat\beta = \mathrm{CMATX}\Big(\mathrm{CVECTOR}\Big(\sum_k x_{ijk}\hat\beta_k;\ j = 1, \ldots, n_i\Big);\ i = 1, \ldots, a\Big)$$
and
$$\hat\beta_k = \sum_t c_{kt}\Big(a_0\sum_i\sum_j x_{ijt}y_{ij} + \sum_i a_ix_{i+t}y_{i+}\Big), \qquad k = 1, 2, \ldots, m,$$
where the $c_{kt}$ are the elements of $C$, $x_{i+t} = \sum_j x_{ijt}$ and $y_{i+} = \sum_j y_{ij}$. Observe that
$$Y - X\hat\beta = \mathrm{CVECTOR}\Big(y_{ij} - \sum_k x_{ijk}\hat\beta_k\Big)$$
and
$$V_1^{1/2}V^{-1}(Y - X\hat\beta) = \mathrm{CVECTOR}\Big(n_i^{-1/2}(a_0 + a_in_i)\Big(y_{i+} - \sum_k x_{i+k}\hat\beta_k\Big)\Big).$$
It follows that
$$Y'Q_VV_1Q_VY = \sum_i (a_0 + a_in_i)^2\Big(y_{i+} - \sum_k x_{i+k}\hat\beta_k\Big)^2. \tag{3.4.2.2}$$
3.4.2.2 Computing $Y'Q_VQ_VY$

The first step is to compute
$$Q_VY = V^{-1}(Y - X\hat\beta) = \mathrm{BDMATX}(a_0I_i + a_iJ_i)\,\mathrm{CVECTOR}\Big(y_{ij} - \sum_k x_{ijk}\hat\beta_k\Big)$$
$$= \mathrm{CVECTOR}\Big(a_0y_{ij} - a_0\sum_k x_{ijk}\hat\beta_k + a_iy_{i+} - a_i\sum_k x_{i+k}\hat\beta_k\Big).$$
Consequently,
$$Y'Q_VQ_VY = a_0^2\sum_i\sum_j\Big(y_{ij} - \sum_k x_{ijk}\hat\beta_k\Big)^2 + \sum_i a_i(2a_0 + a_in_i)\Big(y_{i+} - \sum_k x_{i+k}\hat\beta_k\Big)^2. \tag{3.4.2.3}$$
3.4.2.3 Computing $\mathrm{tr}(Q_VV_1Q_VV_1)$, $\mathrm{tr}(Q_VV_1Q_V)$ and $\mathrm{tr}(Q_VQ_V)$

As a first step observe that $\mathrm{tr}(V_1Q_VVQ_V) = \mathrm{tr}(V_1Q_V)$, since $Q_VVQ_V = V^{-1}(I - P_V)(I - P_V) = Q_V$. Also observe that
$$\mathrm{tr}(Q_VV_1Q_VV) = \mathrm{tr}(Q_VV_1Q_VV_1)\alpha_1^2 + \mathrm{tr}(Q_VV_1Q_V)\alpha_2^2.$$
This implies
$$\mathrm{tr}(Q_VV_1Q_VV_1) = \alpha_1^{-2}\big[\mathrm{tr}(V_1Q_V) - \alpha_2^2\,\mathrm{tr}(Q_VV_1Q_V)\big].$$
Also observe that $\mathrm{tr}(Q_VVQ_V) = \mathrm{tr}(Q_V)$. Then
$$\mathrm{tr}(Q_VV) = \mathrm{tr}(Q_VV_1)\alpha_1^2 + \mathrm{tr}(Q_V)\alpha_2^2$$
implies that $\mathrm{tr}(Q_VV_1)$ is determined once $\mathrm{tr}(Q_VV)$ and $\mathrm{tr}(Q_V)$ are evaluated.

To evaluate $\mathrm{tr}(Q_VV)$, write
$$Q_VV = (V^{-1} - V^{-1}XCX'V^{-1})V = I - V^{-1}XCX'$$
and
$$\mathrm{tr}(Q_VV) = \mathrm{tr}(I - V^{-1}XCX') = \mathrm{tr}(I) - \mathrm{tr}(V^{-1}XCX').$$
Note that $V^{-1}XCX'$ is idempotent. Using the property that the trace of an idempotent matrix is equal to its rank, and
$$R(X) = R(XCX'V^{-1}X) \leq R(XCX'V^{-1}) \leq R(X) = q,$$
where $R(A)$ means the rank of $A$, it follows that
$$\mathrm{tr}(Q_VV) = n - q. \tag{3.4.2.4}$$
Now
$$\mathrm{tr}(Q_V) = \mathrm{tr}(V^{-1} - V^{-1}XCX'V^{-1}) = \mathrm{tr}(V^{-1}) - \mathrm{tr}(CX'V^{-1}V^{-1}X).$$
Observe that
$$\mathrm{tr}(V^{-1}) = \sum_i \mathrm{tr}(a_0I_i + a_iJ_i) = \sum_i (a_0 + a_i)n_i, \tag{3.4.2.5}$$
$$V^{-1}V^{-1} = \mathrm{BDMATX}\big(a_0^2I_i + (2a_0a_i + a_i^2n_i)J_i\big), \tag{3.4.2.6}$$
$$X'V^{-1}V^{-1}X = [X_1' : X_2' : \cdots : X_a']\,V^{-1}V^{-1}\begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_a \end{bmatrix} = a_0^2\sum_i X_i'X_i + \sum_i (2a_0a_i + a_i^2n_i)X_i'J_iX_i,$$
and
$$\mathrm{tr}(CX'V^{-1}V^{-1}X) = a_0^2\sum_i \mathrm{tr}(CX_i'X_i) + \sum_i (2a_0a_i + a_i^2n_i)\,\mathrm{tr}(CX_i'J_iX_i). \tag{3.4.2.7}$$
Recall that $\mathrm{tr}(CX'V^{-1}X) = q$, and
$$\mathrm{tr}(CX'V^{-1}X) = \mathrm{tr}\big(CX'\,\mathrm{BDMATX}(a_0I_i + a_iJ_i)\,X\big) = a_0\sum_i \mathrm{tr}(CX_i'X_i) + \sum_i a_i\,\mathrm{tr}(CX_i'J_iX_i),$$
which can be rewritten as
$$a_0\sum_i \mathrm{tr}(CX_i'X_i) = q - \sum_i a_i\,\mathrm{tr}(CX_i'J_iX_i). \tag{3.4.2.8}$$
Substituting (3.4.2.8) in (3.4.2.7) gives
$$\mathrm{tr}(CX'V^{-1}V^{-1}X) = a_0q + \sum_i (a_0a_i + a_i^2n_i)\,\mathrm{tr}(CX_i'J_iX_i) = a_0q + \sum_i a_i(a_0 + a_in_i)t_i, \tag{3.4.2.9}$$
where
$$t_i = \mathrm{tr}(CX_i'J_iX_i). \tag{3.4.2.10}$$
From (3.4.2.5) and (3.4.2.9),
$$\mathrm{tr}(Q_V) = \sum_i (a_0 + a_i)n_i - a_0q - \sum_i a_i(a_0 + a_in_i)t_i. \tag{3.4.2.11}$$
From (3.4.2.4) and (3.4.2.11),
$$\mathrm{tr}(Q_VV_1) = \alpha_1^{-2}(n - q) - \alpha_1^{-2}\alpha_2^2\Big(\sum_i (a_0 + a_i)n_i - a_0q - \sum_i a_i(a_0 + a_in_i)t_i\Big). \tag{3.4.2.12}$$
To compute $\mathrm{tr}(Q_VQ_V)$, observe that
$$\mathrm{tr}(Q_VQ_V) = \mathrm{tr}(V^{-1}V^{-1}) - 2\,\mathrm{tr}(CX'V^{-1}V^{-1}V^{-1}X) + \mathrm{tr}(CX'V^{-1}V^{-1}XCX'V^{-1}V^{-1}X). \tag{3.4.2.13}$$
From (3.4.2.6) it follows immediately that
$$\mathrm{tr}(V^{-1}V^{-1}) = \mathrm{tr}\big(\mathrm{BDMATX}(a_0^2I_i + (2a_0a_i + a_i^2n_i)J_i)\big) = \sum_i \big(a_0^2\,\mathrm{tr}(I_i) + (2a_0a_i + a_i^2n_i)\,\mathrm{tr}(J_i)\big)$$
$$= \sum_i (a_0^2 + 2a_0a_i + a_i^2n_i)n_i. \tag{3.4.2.14}$$
Note also that
$$V^{-1}V^{-1}V^{-1} = \mathrm{BDMATX}\big(a_0^3I_i + (3a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)J_i\big).$$
It follows that
$$\mathrm{tr}(CX'V^{-1}V^{-1}V^{-1}X) = a_0^3\sum_i \mathrm{tr}(CX_i'X_i) + \sum_i (3a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)\,\mathrm{tr}(CX_i'J_iX_i)$$
$$= a_0^2\Big(q - \sum_i a_i\,\mathrm{tr}(CX_i'J_iX_i)\Big) + \sum_i (3a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)\,\mathrm{tr}(CX_i'J_iX_i)$$
$$= a_0^2q + \sum_i (2a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)t_i. \tag{3.4.2.15}$$
Recalling
$$CX'V^{-1}V^{-1}X = a_0^2\sum_i CX_i'X_i + \sum_i (2a_0a_i + a_i^2n_i)CX_i'J_iX_i,$$
the term $\mathrm{tr}(CX'V^{-1}V^{-1}XCX'V^{-1}V^{-1}X)$ leads to
$$\mathrm{tr}\Big(\Big(a_0^2\sum_i CX_i'X_i + \sum_i (2a_0a_i + a_i^2n_i)CX_i'J_iX_i\Big)^2\Big)$$
$$= \sum_i\sum_{i'}\big(a_0^4t_{1ii'} + 2a_0^2(2a_0a_{i'} + a_{i'}^2n_{i'})t_{2ii'} + (2a_0a_i + a_i^2n_i)(2a_0a_{i'} + a_{i'}^2n_{i'})t_{3ii'}\big), \tag{3.4.2.16}$$
where
$$t_{1ii'} = \mathrm{tr}(CX_i'X_iCX_{i'}'X_{i'}), \tag{3.4.2.17}$$
$$t_{2ii'} = \mathrm{tr}(CX_i'X_iCX_{i'}'J_{i'}X_{i'}), \tag{3.4.2.18}$$
and
$$t_{3ii'} = \mathrm{tr}(CX_i'J_iX_iCX_{i'}'J_{i'}X_{i'}). \tag{3.4.2.19}$$
Expressions (3.4.2.14), (3.4.2.15), and (3.4.2.16) lead to
$$\mathrm{tr}(Q_VQ_V) = \sum_i (a_0^2 + 2a_0a_i + a_i^2n_i)n_i - 2\Big(a_0^2q + \sum_i (2a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)t_i\Big)$$
$$+ \sum_i\sum_{i'}\big(a_0^4t_{1ii'} + 2a_0^2(2a_0a_{i'} + a_{i'}^2n_{i'})t_{2ii'} + (2a_0a_i + a_i^2n_i)(2a_0a_{i'} + a_{i'}^2n_{i'})t_{3ii'}\big). \tag{3.4.2.20}$$
Combining expressions (3.4.2.4), (3.4.2.11), (3.4.2.12), and (3.4.2.20) yields:
$$\mathrm{tr}(Q_VV_1Q_VV_1) = \alpha_1^{-4}\Big[(n - q) - 2\alpha_2^2\Big(\sum_i (a_0 + a_i)n_i - a_0q - \sum_i a_i(a_0 + a_in_i)t_i\Big)$$
$$+ \alpha_2^4\Big(\sum_i (a_0^2 + 2a_0a_i + a_i^2n_i)n_i - 2\Big(a_0^2q + \sum_i (2a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)t_i\Big)$$
$$+ \sum_i\sum_{i'}\big(a_0^4t_{1ii'} + 2a_0^2(2a_0a_{i'} + a_{i'}^2n_{i'})t_{2ii'} + (2a_0a_i + a_i^2n_i)(2a_0a_{i'} + a_{i'}^2n_{i'})t_{3ii'}\big)\Big)\Big]. \tag{3.4.2.21}$$
Similarly,
$$\mathrm{tr}(Q_VV_1Q_V) = \alpha_1^{-2}\Big[\sum_i (a_0 + a_i)n_i - a_0q - \sum_i a_i(a_0 + a_in_i)t_i$$
$$- \alpha_2^2\Big(\sum_i (a_0^2 + 2a_0a_i + a_i^2n_i)n_i - 2\Big(a_0^2q + \sum_i (2a_0^2a_i + 3a_0a_i^2n_i + a_i^3n_i^2)t_i\Big)$$
$$+ \sum_i\sum_{i'}\big(a_0^4t_{1ii'} + 2a_0^2(2a_0a_{i'} + a_{i'}^2n_{i'})t_{2ii'} + (2a_0a_i + a_i^2n_i)(2a_0a_{i'} + a_{i'}^2n_{i'})t_{3ii'}\big)\Big)\Big]. \tag{3.4.2.22}$$
3.5 Seely's Method to Obtain the Estimators of the Variance Components under the Invariance Condition

An alternative approach to variance components estimation was developed by Seely (1970a, 1970b). This technique will now be used to obtain estimators for the variance components in the split plot subject to the invariance condition.

First, recall the model (3.1), $Y = X\beta + U_1^*\varepsilon_1 + U_2^*\varepsilon_2$, in which it was assumed that $E[\varepsilon_i] = 0$, $E[\varepsilon_i\varepsilon_i'] = I_{m_i}\sigma_i^2$, $E[\varepsilon_i\varepsilon_j'] = 0$ for $i \neq j$, and $D(Y) = V_1\sigma_1^2 + V_2\sigma_2^2$, where $V_1 = U_1^*U_1^{*\prime}$ and $V_2 = I = U_2^*U_2^{*\prime}$.

Second, let $\alpha_1^2$ and $\alpha_2^2$ represent prior values or guesses for $\sigma_1^2$ and $\sigma_2^2$. Use these to write $V = V_1\alpha_1^2 + V_2\alpha_2^2$, where $V$ is positive definite. Let $V^{1/2}$ be the square root matrix of $V$, that is, $V = V^{1/2}V^{1/2}$.

Third, obtain the generalized least squares estimator of the parameters $\beta$ under the transformed model
$$V^{-1/2}Y = V^{-1/2}X\beta + V^{-1/2}(U_1^*\varepsilon_1 + U_2^*\varepsilon_2).$$
Fourth, obtain the residuals
$$Z = V^{-1/2}Y - V^{-1/2}X\hat\beta = V^{-1/2}(I - X(X'V^{-1}X)^-X'V^{-1})Y$$
$$= V^{-1/2}(I - X(X'V^{-1}X)^-X'V^{-1})(X\beta + U_1^*\varepsilon_1 + U_2^*\varepsilon_2) = U_1\varepsilon_1 + U_2\varepsilon_2,$$
where
$$U_1 = V^{-1/2}(I - X(X'V^{-1}X)^-X'V^{-1})U_1^* \quad\text{and}\quad U_2 = V^{-1/2}(I - X(X'V^{-1}X)^-X'V^{-1})U_2^*.$$

Fifth, form $Z \otimes Z$, where $\otimes$ denotes the Kronecker product of matrices, and let $u_{ji}'$ denote the $j$th row of $U_i$, $j = 1, 2, \ldots, n$, $i = 1, 2$.

The expectation of the vector $Z \otimes Z$ is
$$E[Z \otimes Z] = H\theta, \tag{3.5.1}$$
where $\theta' = (\sigma_1^2, \sigma_2^2)$ and the $i$th column of $H$ is
$$h_i = \mathrm{CVECTOR}(U_iu_{ji};\ j = 1, 2, \ldots, n) = (I \otimes U_i)\,\mathrm{vec}(U_i), \qquad i = 1, 2. \tag{3.5.2}$$

Definition 3.5.1
Given a matrix $A$ of order $m \times n$, the vector of $A$, say $\mathrm{vec}(A)$, is the $mn$-vector that is obtained by writing vertically all the elements of $A$, starting with the first element of $A$ and proceeding in lexicographical (row by row) order.

Under the latter definition the right hand side of (3.5.2) becomes
$$(I \otimes U_i)\,\mathrm{vec}(U_i) = \mathrm{vec}(U_iU_i'), \qquad i = 1, 2, \tag{3.5.3}$$
so that, from (3.5.3) and (3.5.2), $H = [\mathrm{vec}(U_1U_1') : \mathrm{vec}(U_2U_2')]$.

Finally, Seely's estimators of the variance components are obtained through the model
$$Z \otimes Z = H\theta + \text{error terms}$$
by using least squares solutions, say
$$H'H\hat\theta = H'(Z \otimes Z). \tag{3.5.4}$$
The following two lemmas can be found in Rao and Kleffe (1979).

Lemma 3.5.1
Let $A$, $B$ and $C$ be $m \times n$, $n \times k$ and $k \times s$ matrices. Then
i) $\mathrm{tr}(AB) = \mathrm{vec}(A)'\,\mathrm{vec}(B')$,
ii) $\mathrm{vec}(AB) = (A \otimes I_k)\,\mathrm{vec}(B)$,
iii) $\mathrm{vec}(AB) = (I_m \otimes B')\,\mathrm{vec}(A)$,
iv) $\mathrm{vec}(ABC) = (A \otimes C')\,\mathrm{vec}(B)$.

Lemma 3.5.2
Let $A$, $B$, $C$ and $D$ be $m \times n$, $k \times s$, $n \times q$ and $s \times t$ matrices. Then
$$(A \otimes B)(C \otimes D) = AC \otimes BD.$$
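With the row-by-row vec of Definition 3.5.1, which coincides with NumPy's row-major `ravel`, the identities of Lemma 3.5.1 can be checked directly; the matrix sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))
D = rng.standard_normal((4, 3))       # AD is square, so tr(AD) is defined

def vec(M):
    # row-by-row ("lexicographical") vec of Definition 3.5.1
    return M.ravel()

ok_i = np.isclose(np.trace(A @ D), vec(A) @ vec(D.T))               # (i)
ok_ii = np.allclose(vec(A @ B), np.kron(A, np.eye(2)) @ vec(B))     # (ii)
ok_iii = np.allclose(vec(A @ B), np.kron(np.eye(3), B.T) @ vec(A))  # (iii)
ok_iv = np.allclose(vec(A @ B @ C), np.kron(A, C.T) @ vec(B))       # (iv)
```

Note that under the more common column-by-column vec convention the Kronecker factors in (ii) through (iv) appear in the transposed order; the forms stated in Lemma 3.5.1 are the ones matching Definition 3.5.1.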
Using the latter lemmas, the elements of the matrix $H'H$, say $H'H(i,j)$, are
$$H'H(i,j) = \mathrm{vec}(U_iU_i')'\,\mathrm{vec}(U_jU_j') = \mathrm{tr}(U_iU_i'U_jU_j'), \qquad H'H(1,2) = H'H(2,1),$$
and the elements of $H'(Z \otimes Z)$, say $H'(Z \otimes Z)(i)$, are
$$H'(Z \otimes Z)(i) = \big[(I \otimes U_i)\,\mathrm{vec}(U_i)\big]'(Z \otimes Z) = \mathrm{vec}(U_iU_i')'\,\mathrm{vec}(ZZ') = Z'U_iU_i'Z.$$
Recall that $U_i = V^{-1/2}(I - X(X'V^{-1}X)^-X'V^{-1})U_i^*$; then, replacing the values of $U_1$ and $U_2$,
$$H'H(i,j) = \mathrm{tr}(Q_VV_iQ_VV_j) \qquad\text{and}\qquad H'(Z \otimes Z)(i) = Y'Q_VV_iQ_VY.$$
Therefore (3.5.4) becomes
$$\begin{bmatrix} \mathrm{tr}(Q_VV_1Q_VV_1) & \mathrm{tr}(Q_VV_1Q_VV_2) \\ \mathrm{tr}(Q_VV_2Q_VV_1) & \mathrm{tr}(Q_VV_2Q_VV_2) \end{bmatrix}\begin{bmatrix} \hat\sigma_1^2 \\ \hat\sigma_2^2 \end{bmatrix} = \begin{bmatrix} Y'Q_VV_1Q_VY \\ Y'Q_VV_2Q_VY \end{bmatrix}.$$
Observe that this is the MINQUE estimator of the variance components. Therefore, under the invariance condition, Seely's estimators of the variance components are also MINQUE estimators.
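This equivalence can be verified numerically by building $H$ from the transformed matrices $U_i$ and comparing $H'H$ with the MINQUE trace matrix. The small layout and prior values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_i = [2, 3]
n = sum(n_i)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
U1s = np.zeros((n, len(n_i)))          # U_1^*: whole-plot incidence
s = 0
for k, ni in enumerate(n_i):
    U1s[s:s+ni, k] = 1.0
    s += ni
V1, V2 = U1s @ U1s.T, np.eye(n)
V = 0.8 * V1 + 1.2 * V2                # prior values (illustrative)
Vinv = np.linalg.inv(V)
P = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
Q = Vinv @ (np.eye(n) - P)             # Q_V
w, E = np.linalg.eigh(V)
Vmh = E @ np.diag(w ** -0.5) @ E.T     # symmetric V^{-1/2}
U1 = Vmh @ (np.eye(n) - P) @ U1s       # transformed U_i of section 3.5
U2 = Vmh @ (np.eye(n) - P)
H = np.column_stack([(U1 @ U1.T).ravel(), (U2 @ U2.T).ravel()])
S = np.array([[np.trace(Q @ Va @ Q @ Vb) for Vb in (V1, V2)]
              for Va in (V1, V2)])     # MINQUE coefficient matrix
```

The check exploits the symmetry $Q_V' = Q_V$ and the identity $(I - P)'V^{-1}(I - P) = Q_V$, which is what collapses $H'H(i,j)$ into $\mathrm{tr}(Q_VV_iQ_VV_j)$.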
4. ASYMPTOTIC PROPERTIES

This chapter consists of two major parts. Section 4.1 deals with the asymptotic properties of the MINQUE of the variance components as the number of observations increases. Section 4.2 proceeds to the asymptotic properties of the generalized least squares estimate $\hat\beta$ obtained by replacing the elements of the unknown variance-covariance matrix by functions of the MINQUE of the variance components.

4.1 Properties of the MINQUE Estimators of the Variance Components

Consider the model (3.1), $Y = X\beta + \varepsilon$, where $Y$ is $t \times 1$, and let $t$ increase to infinity in such a way that $Y$ permits a partition into $n$ $r$-vectors, each one representing a replicate of the unbalanced split-plot model in the study. Rewrite model (3.1) as
$$Y_t = \mathrm{CVECTOR}(Y_i;\ i = 1, \ldots, n), \qquad Y_i = X\beta_i + \varepsilon_i^*, \qquad \varepsilon_i^* = U^*\varepsilon_i,$$
which can also be written as
$$Y_t = (I_n \otimes X)\beta_t + (I_n \otimes U^*)\varepsilon_t,$$
where $I_n$ is the $n \times n$ identity matrix, $\beta_t' = (\beta_1', \ldots, \beta_n')$ and $\varepsilon_t' = (\varepsilon_1', \ldots, \varepsilon_n')$.

It is assumed that $Y_1, Y_2, \ldots, Y_n$ are iid random variables such that $E(Y_i) = X\beta$ and $D(Y_i) = V$, where $V = V_1\sigma_1^2 + V_2\sigma_2^2$ is positive definite. Or equivalently, $E(\varepsilon_i^*) = 0$ and $D(\varepsilon_i^*) = V$. Observe that $E[(I_n \otimes U^*)\varepsilon_t] = 0$ and
$$D[(I_n \otimes U^*)\varepsilon_t] = (I_n \otimes U^*)D[\varepsilon_t](I_n \otimes U^{*\prime}) = (I_n \otimes U^*)(I_n \otimes D[\varepsilon_i])(I_n \otimes U^{*\prime})$$
$$= I_n \otimes U^*D[\varepsilon_i]U^{*\prime} = I_n \otimes V.$$
The goal is to estimate a linear parametric function $\sum_i q_i\sigma_i^2$ of the variance components by the quadratic function $Y_t'A_tY_t$, subject to the invariance, unbiasedness and minimum norm conditions of the MINQUE procedure.

Let $\alpha_1^2$ and $\alpha_2^2$ be prior values for $\sigma_1^2$ and $\sigma_2^2$, and use these to write $V = V_1\alpha_1^2 + V_2\alpha_2^2$, where $V$ is positive definite. Also let $V^{1/2}$ be the square root matrix of $V$, such that $V = V^{1/2}V^{1/2}$ and $V^{1/2} = V^{1/2\prime}$.

For each $1 \leq i \leq n$ let $\hat\beta_i$ be the generalized least squares estimator of the parameters $\beta_i$ under the transformed model $V^{-1/2}Y_i$, and let $Z_i = V^{-1/2}(Y_i - X\hat\beta_i)$.
Let
$$Z_t = \begin{bmatrix} V^{-1/2}(Y_1 - X\hat\beta_1) \\ V^{-1/2}(Y_2 - X\hat\beta_2) \\ \vdots \\ V^{-1/2}(Y_n - X\hat\beta_n) \end{bmatrix} = \begin{bmatrix} V^{-1/2}(I - P_V)Y_1 \\ V^{-1/2}(I - P_V)Y_2 \\ \vdots \\ V^{-1/2}(I - P_V)Y_n \end{bmatrix},$$
where $Y_i = X\beta_i + U_1^*\varepsilon_{i1} + U_2^*\varepsilon_{i2}$. Then
$$Z_t = \big(I_n \otimes V^{-1/2}(I - P_V)\big)(I_n \otimes U_1^*)\varepsilon_{t1} + \big(I_n \otimes V^{-1/2}(I - P_V)\big)(I_n \otimes U_2^*)\varepsilon_{t2},$$
where
$$\varepsilon_{t1}' = (\varepsilon_{11}', \varepsilon_{21}', \ldots, \varepsilon_{n1}') \qquad\text{and}\qquad \varepsilon_{t2}' = (\varepsilon_{12}', \varepsilon_{22}', \ldots, \varepsilon_{n2}').$$
Now apply Seely's method of section 3.5. Writing $W = I_n \otimes V^{-1/2}(I - P_V)$, the columns of $H$ are built from the transformed matrices $W(I_n \otimes U_i^*)$, $i = 1, 2$, and the elements of the matrix $H'H$ are
$$H'H(i,j) = \mathrm{tr}\big[W(I_n \otimes U_i^*)(I_n \otimes U_i^{*\prime})W'\,W(I_n \otimes U_j^*)(I_n \otimes U_j^{*\prime})W'\big] = n\,\mathrm{tr}(Q_VV_iQ_VV_j).$$
Also, the elements of the vector $H'(Z_t \otimes Z_t)$ are
$$H'(Z_t \otimes Z_t)(i) = Z_t'\,W(I_n \otimes V_i)W'\,Z_t = Y_t'(I_n \otimes Q_VV_iQ_V)Y_t.$$
Therefore, Seely's estimators of the variance components are solutions to the system of equations
$$H'H\hat\sigma = H'(Z_t \otimes Z_t), \tag{4.1.1}$$
where
$$H'H = n\begin{bmatrix} \mathrm{tr}(Q_VV_1Q_VV_1) & \mathrm{tr}(Q_VV_1Q_VV_2) \\ \mathrm{tr}(Q_VV_2Q_VV_1) & \mathrm{tr}(Q_VV_2Q_VV_2) \end{bmatrix} \quad\text{and}\quad H'(Z_t \otimes Z_t) = \begin{bmatrix} Y_t'(I_n \otimes Q_VV_1Q_V)Y_t \\ Y_t'(I_n \otimes Q_VV_2Q_V)Y_t \end{bmatrix}.$$
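Since the replicates are iid, $H'H$ is $n$ times the trace matrix of a single replicate, which is easy to confirm numerically. The replicate size, replicate count, design matrix and prior values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
r, nrep = 3, 4                         # illustrative replicate size and count
V1, V2 = np.ones((r, r)), np.eye(r)
X = np.column_stack([np.ones(r), rng.standard_normal(r)])
V = 0.7 * V1 + 1.0 * V2                # prior values (illustrative)
Vinv = np.linalg.inv(V)
P = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
Q = Vinv @ (np.eye(r) - P)
S1 = np.array([[np.trace(Q @ Va @ Q @ Vb) for Vb in (V1, V2)]
               for Va in (V1, V2)])    # one-replicate trace matrix
In = np.eye(nrep)
Qb, V1b, V2b = np.kron(In, Q), np.kron(In, V1), np.kron(In, V2)
Sb = np.array([[np.trace(Qb @ Va @ Qb @ Vb) for Vb in (V1b, V2b)]
               for Va in (V1b, V2b)])  # stacked-model trace matrix
```

The factor $n$ follows from $(I_n \otimes A)(I_n \otimes B) = I_n \otimes AB$ and $\mathrm{tr}(I_n \otimes M) = n\,\mathrm{tr}(M)$, which is the same replication structure that drives the consistency argument below.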
Observe that (4.1.1) are the MINQUE equations for the variance components, and that the matrix $A_t$ that minimizes the quadratic form $g(Y_t) = Y_t'A_tY_t$ with respect to the parametric function $\sum_j q_j\sigma_j^2$ is
$$A_t = \sum_j \lambda_j (I_n \otimes Q_VV_jQ_V), \tag{4.1.2}$$
where $\lambda$ is a solution to the equations
$$\sum_j \lambda_j\,n\,\mathrm{tr}(Q_VV_jQ_VV_{j'}) = q_{j'}, \qquad j' = 1, 2. \tag{4.1.3}$$
Using a generalized version of Theorem 3.2.3, it is observed that a necessary and sufficient condition for $Y_t'A_tY_t$ to be invariant with respect to the translation of $\beta_t$ is that $(I_n \otimes X)'A_t = 0$.

The following lemma shows that the quadratic form $g(Y_t) = Y_t'A_tY_t$ is unbiased with respect to the parametric function $\sum_j q_j\sigma_j^2$.

Lemma 4.1.1
The quadratic form $g(Y_t) = Y_t'A_tY_t$ is unbiased with respect to the parametric function $\sum_j q_j\sigma_j^2$ iff $q_j = \mathrm{tr}(A_t(I_n \otimes V_j))$.
Proof:
$$E[Y_t'A_tY_t] = \mathrm{tr}(A_tE[Y_tY_t']) = \mathrm{tr}(A_t(I_n \otimes V)) \qquad \text{(using the invariance condition)}$$
$$= \mathrm{tr}\Big(A_t\Big(I_n \otimes \sum_j V_j\sigma_j^2\Big)\Big) = \sum_j \mathrm{tr}(A_t(I_n \otimes V_j))\sigma_j^2 = \sum_j q_j\sigma_j^2.$$

From (4.1.3) it is observed that $H'H\lambda = q$, and a solution is given by $\lambda = (H'H)^{-1}q$. From (4.1.2),
$$g(Y_t) = Y_t'A_tY_t = q'(H'H)^{-1}\begin{bmatrix} Y_t'(I_n \otimes Q_VV_1Q_V)Y_t \\ Y_t'(I_n \otimes Q_VV_2Q_V)Y_t \end{bmatrix}. \tag{4.1.4}$$
Let $Y_1, Y_2, \ldots, Y_n$ be independent and identically distributed normal random variables with mean $X\beta_i$ and variance-covariance matrix $V$. It is known that if $B$ and $D$ are symmetric matrices, then under the latter condition
$$\mathrm{cov}(Y_i'BY_i, Y_i'DY_i) = 2\,\mathrm{tr}(BVDV) + 4\beta_i'X'BVDX\beta_i = 2\,\mathrm{tr}(BVDV)$$
under invariance. Observe that
$$\mathrm{var}\big(Y_t'(I_n \otimes Q_VV_1Q_V)Y_t\big) = 2\,\mathrm{tr}\big((I_n \otimes Q_VV_1Q_VV)(I_n \otimes Q_VV_1Q_VV)\big)$$
$$= 2n\,\mathrm{tr}(Q_VV_1Q_VVQ_VV_1Q_VV) = 2n\,\mathrm{tr}(Q_VV_1Q_VV_1),$$
since $Q_VVQ_V = Q_V$. Then
$$\mathrm{var}(g(Y_t)) = q'(H'H)^{-1}D\big(H'(Z_t \otimes Z_t)\big)(H'H)^{-1}q = 2q'(H'H)^{-1}(H'H)(H'H)^{-1}q = 2q'(H'H)^{-1}q.$$
Using Chebyshev's inequality,
$$P\Big\{\Big|g(Y_t) - \sum_i q_i\sigma_i^2\Big| > s\Big\} < \frac{\mathrm{var}(g(Y_t))}{s^2}.$$
Since $H'H$ grows proportionally to $n$, $\mathrm{var}(g(Y_t)) \to 0$, and taking the limit as $n \to \infty$,
$$\lim_{n\to\infty} P\Big\{\Big|g(Y_t) - \sum_i q_i\sigma_i^2\Big| > s\Big\} = 0,$$
establishing that the MINQUE estimators are consistent.
4.2 Behavior of the Vector of Parameters $\beta$ When the Variance Components Are Replaced by Those Obtained by the MINQUE Method

It is not possible to guarantee that the MINQUE of the variance components will be positive or even non-negative. It follows that it is possible that the estimated matrix $\hat V$ may not be positive definite. Consequently, care must be exercised in using $\hat V$ to compute generalized least squares estimates of $\beta$. Three possible strategies come to mind.

4.2.1 Behavior of $\hat\beta$ When $\hat V$ Is Computed with the MINQUE Estimators of $\sigma_1^2$ and $\sigma_2^2$

The justification of this procedure is rather weak, because the estimator $\hat\sigma_1^2$ could take negative values and yield undesirable results. It is also possible that $\hat V^{-1}$ does not exist, and the method then fails. But since it was shown that $\hat\sigma_1^2$ and $\hat\sigma_2^2$ converge in probability to $\sigma_1^2$ and $\sigma_2^2$, where the latter always takes positive values, the probability of obtaining positive estimates increases for large experiments, and correspondingly the probability of obtaining a positive definite $\hat V$ increases. A small simulation study reported in Chapter 7 shows that this is indeed true, and $\hat\beta$ appears to be asymptotically normal.
4.2.2 Behavior of $\hat\beta$ When $\hat V$ Is Computed with the MINQUE Estimators of $\sigma_1^2$ and $\sigma_2^2$, But the Estimators Are Restricted to Be Positive

The procedure is defined as follows. Let
$$\tilde\sigma_1^2 = \begin{cases} 0 & \text{if } \hat\sigma_1^2 < 0 \\ \hat\sigma_1^2 & \text{otherwise} \end{cases} \qquad\text{and}\qquad \tilde\sigma_2^2 = \begin{cases} 0.01 & \text{if } \hat\sigma_2^2 < 0 \\ \hat\sigma_2^2 & \text{otherwise.} \end{cases}$$

The justification of the procedure is based on the fact that since $\hat\sigma_1^2$ and $\hat\sigma_2^2$ converge in probability to $\sigma_1^2$ and $\sigma_2^2$, where the latter are non-negative, $\tilde\sigma_1^2$ and $\tilde\sigma_2^2$ also converge in probability to $\sigma_1^2$ and $\sigma_2^2$.

Moreover, the procedure guarantees that $\hat V$ is positive definite for all $n$.
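The truncation rule of this section can be stated as a small function; the 0.01 floor for the split-plot component is the value used in the text:

```python
def restrict_positive(s1_hat, s2_hat):
    """Restricted variance component estimates of section 4.2.2:
    truncate the whole-plot estimate at 0 and the split-plot
    estimate at the floor 0.01."""
    s1 = 0.0 if s1_hat < 0 else s1_hat
    s2 = 0.01 if s2_hat < 0 else s2_hat
    return s1, s2
```

With $\tilde\sigma_2^2$ bounded away from zero, $\hat V = V_1\tilde\sigma_1^2 + I\tilde\sigma_2^2$ is always positive definite, which is the property the procedure is designed to secure.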
Note that the vectors $Y_1, Y_2, \ldots, Y_n$ have bounded fourth moments. This assumption guarantees that $Y_n'A_nY_n$ is $O_p(n^{-\delta})$, $\delta > 0$.

Assumption 1
The elements of $I_n \otimes V$ are functions of a 2-dimensional vector of parameters $\sigma$ such that the elements of the matrices
$$G_{ni}(\sigma) = \frac{\partial}{\partial\sigma_i^2}\,(I_n \otimes V), \qquad i = 1, 2,$$
are continuous functions on an open sphere $S$ of $\sigma^0$, where $\sigma^0$ is the true value of the parameter vector $\sigma$.

Assumption 2
The sequences of matrices $\{I_n \otimes X\}$ and $\{I_n \otimes V\}$ are such that
$$\lim_{n\to\infty} n^{-1}(I_n \otimes X)'(I_n \otimes V)^{-1}(I_n \otimes X) = M(\sigma),$$
where $M(\sigma)$ is a $p \times p$ matrix of fixed constants such that $M^{-1}(\sigma)$ exists for all $\sigma \in \Theta$, and
$$\lim_{n\to\infty} n^{-1}(I_n \otimes X)'\,\frac{\partial (I_n \otimes V)^{-1}}{\partial \sigma_r^2}\,(I_n \otimes X) = H_r(\sigma),$$
where $H_r(\sigma)$ is a matrix whose elements are continuous functions of $\sigma_i^2$, $i = 1, 2$.

Assumption 3
An estimator $\hat V_n = V_n(\hat\sigma)$ for $V_n = V_n(\sigma)$ is available, such that $\hat V_n^{-1}$ exists for all $n$ and $\hat\sigma$ satisfies the condition
$$\hat\sigma_i^2 = \sigma_i^2 + O_p(n^{-\delta}), \qquad \delta > 0.$$
Theorem 4.2.2.1 (Fuller and Battese, 1973)
Assumptions 1, 2, 3 are sufficient conditions for the estimator
$$\hat\beta_n = \big(I_n \otimes (X'\hat V^{-1}X)^-X'\hat V^{-1}\big)Y_n$$
to have the same asymptotic distribution as the estimator
$$\tilde\beta = \big(I_n \otimes (X'V^{-1}X)^-X'V^{-1}\big)Y_n$$
under the model $Y_n = (I_n \otimes X)\beta_n + (I_n \otimes U^*)\varepsilon_n$.

Applying the Fuller and Battese theorem, it follows that $\hat\beta = (X'\hat V^{-1}X)^-X'\hat V^{-1}Y$ has the same asymptotic distribution as $\tilde\beta = (X'V^{-1}X)^-X'V^{-1}Y$.

A small simulation study of the behavior of $\hat\beta$ when $\hat V$ is computed with the restricted estimators $\tilde\sigma_1^2$ and $\tilde\sigma_2^2$ is given in Chapter 7. The distribution of $\hat\beta$ appeared to be normal.
4.2.3 Behavior of $\hat\beta$ When $\hat V$ Is Computed with a Positive Definite Estimator of $\sigma_2^2$ and a Restricted Positive Estimator of $\sigma_1^2$

This procedure also restricts the estimators of the variance components $\sigma_1^2$ and $\sigma_2^2$ to be non-negative. Let
$$\hat\sigma_{2H}^2 = Y'(I - P_G)Y/\mathrm{tr}(I - P_G),$$
where $P_G$ is the projection operator onto the space of $[X : U_1]$. Also let $\hat\sigma_1^2$ equal zero if the MINQUE estimate of $\sigma_1^2$ is negative, and equal the MINQUE estimate otherwise.

The justification for this procedure is that $\hat\sigma_{2H}^2$ is the MINQUE positive definite estimator defined by Rao and Kleffe (1979). Also, it is the Analysis of Variance estimator. Moreover, Rao and Kleffe (1979) show that the distribution of $\hat\sigma_{2H}^2$ is a scaled $\chi^2$ distribution. Note that $\hat\sigma_{2H}^2$ is unbiased for $\sigma_2^2$.

Chapter 7 gives the results of a small simulation study of the properties of $\hat\beta$, where $\hat V$ is computed using the estimators defined in this section.
5. TESTING HYPOTHESES

Chapter 5 will be devoted to testing an estimable linear parametric function of the parameters $\beta$, say $\lambda'\beta$; that is,
$$H_0: \lambda'\beta = \lambda'\beta_0 \qquad\text{against}\qquad H_a: \lambda'\beta \neq,\ <,\ \text{or}\ > \lambda'\beta_0.$$
In Chapter 4 it was established that in large samples $\hat\beta$ has a normal distribution with mean $\beta$ and variance-covariance matrix $(X'V^{-1}X)^-$, where the latter can be approximated by $(X'\hat V^{-1}X)^-$.

The present chapter deals with testing hypotheses with small samples. The procedure is to test a linear combination of the parameters $\beta$ using an approximate t-test. The method is to compute
$$t = \frac{\lambda'\hat\beta - \lambda'\beta_0}{\sqrt{\widehat{\mathrm{var}}(\lambda'\hat\beta)}},$$
where $\hat\beta = (X'\hat V^{-1}X)^-X'\hat V^{-1}Y$, and compare with tabulated percentage points of the t-distribution. The difficult question is to determine the degrees of freedom associated with the above t-test.

5.1 Procedure to Estimate the Degrees of Freedom Associated with a t-test for Testing a Linear Combination of the Vector of Parameters $\beta$

A survey of the literature reveals a general solution to some similar problems. Specifically, Satterthwaite (1946), B. L. Welch (1947), G. S. James (1951) and G. E. P. Box (1954) were concerned with the
problem of approximating the distribution of a quadratic form.
Their
general method will be applied to the problem at hand.
Let $\lambda'\beta$ be an estimable linear function of the vector of parameters $\beta$, and let
$$Z = \lambda'(X'\hat V^{-1}X)^-\lambda = f(\hat\sigma_1^2, \hat\sigma_2^2),$$
where $\hat V = V_1\hat\sigma_1^2 + I\hat\sigma_2^2$.

The distribution of $Z$ will be approximated by a gamma distribution with density
$$P(t)\,dt = \frac{1}{\Gamma(f/2)}\Big(\frac{t}{2g}\Big)^{f/2 - 1}\exp\Big(-\frac{t}{2g}\Big)\,d\Big(\frac{t}{2g}\Big), \tag{5.1.1}$$
with parameters $\alpha = f/2$ and $\beta = 2g$, $f$ and $g$ being constants. In Section 5.2 the justification for this assumption will be examined in some detail.

The values $f$ and $g$ are chosen by equating the first two moments of $Z$ and the approximating distribution. Using Taylor's theorem one can write
$$Z = f(\sigma_1^2, \sigma_2^2) + \frac{\partial f}{\partial\sigma_1^2}(\hat\sigma_1^2 - \sigma_1^2) + \frac{\partial f}{\partial\sigma_2^2}(\hat\sigma_2^2 - \sigma_2^2) + \text{an error term}.$$
Given that the error term is sufficiently small, one can write $E(Z) \doteq f(\sigma_1^2, \sigma_2^2)$ and evaluate the derivatives as follows.
Note that
$$\frac{\partial}{\partial\sigma_1^2}\,\lambda'(X'V^{-1}X)^-\lambda = -\lambda'(X'V^{-1}X)^-\Big(\frac{\partial}{\partial\sigma_1^2}(X'V^{-1}X)\Big)(X'V^{-1}X)^-\lambda$$
$$= -\lambda'(X'V^{-1}X)^-X'\Big(\frac{\partial}{\partial\sigma_1^2}V^{-1}\Big)X(X'V^{-1}X)^-\lambda$$
$$= \lambda'(X'V^{-1}X)^-X'V^{-1}\Big(\frac{\partial V}{\partial\sigma_1^2}\Big)V^{-1}X(X'V^{-1}X)^-\lambda = q'V_1q,$$
where $q = V^{-1}X(X'V^{-1}X)^-\lambda$. Similar manipulations yield
$$\frac{\partial}{\partial\sigma_2^2}\,\lambda'(X'V^{-1}X)^-\lambda = q'q.$$
Consequently, observe that if $\lambda'\beta$ is estimable, then $\lambda' = q'X$, $q' = \lambda'(X'V^{-1}X)^-X'V^{-1}$ and
$$E(Z) \doteq \lambda'(X'V^{-1}X)^-\lambda = q'Vq = q'V_1q\,\sigma_1^2 + q'q\,\sigma_2^2. \tag{5.1.2}$$
Now, equating these two moments of $Z$ with the first two moments of the approximating distribution yields the following system of equations:
$$E(Z) = q'V_1q\,\sigma_1^2 + q'q\,\sigma_2^2 = fg$$
and
$$\mathrm{var}(Z) = \mathrm{var}(\hat\sigma_1^2)(q'V_1q)^2 + 2\,\mathrm{cov}(\hat\sigma_1^2, \hat\sigma_2^2)(q'V_1q)(q'q) + \mathrm{var}(\hat\sigma_2^2)(q'q)^2 = 2fg^2.$$
Solving this system of equations leads to
$$f = \frac{2[E(Z)]^2}{\mathrm{var}(Z)} = \frac{2(q'V_1q\,\sigma_1^2 + q'q\,\sigma_2^2)^2}{\mathrm{var}(Z)} \tag{5.1.3}$$
as the "degrees of freedom" parameter for the approximating gamma distribution, and consequently for the "t" statistic for the test of hypothesis.

Notice that the formula (5.1.3) for the degrees of freedom is a function of the true parameters $\sigma_1^2$ and $\sigma_2^2$. Two alternate suggestions are available to compute the degrees of freedom.

i) Since the MINQUE are based on prior information, a reasonable approach is to use that information in formula (5.1.3) to compute the degrees of freedom.

ii) An alternative suggestion is to iterate the MINQUE procedure several times. Experience shows that the process tends to converge rapidly and hence provides unique values with which to compute the degrees of freedom by formula (5.1.3).
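As a sketch, formula (5.1.3) can be computed from the two scalars $q'V_1q$ and $q'q$ once moments of the component estimators are available. The delta-method variance used below, including the cross term, is an assumption about terms the text does not write out in full:

```python
def satterthwaite_df(qV1q, qq, s1sq, s2sq, var_s1, var_s2, cov_s12=0.0):
    """Approximate degrees of freedom f = 2 E(Z)^2 / var(Z), formula (5.1.3).
    qV1q, qq: the scalars q'V1 q and q'q; s1sq, s2sq: variance components;
    var_s1, var_s2, cov_s12: (co)variances of their estimators."""
    mean_z = qV1q * s1sq + qq * s2sq                     # E(Z), (5.1.2)
    var_z = (qV1q ** 2) * var_s1 + (qq ** 2) * var_s2 \
        + 2.0 * qV1q * qq * cov_s12                      # delta-method var(Z)
    return 2.0 * mean_z ** 2 / var_z
```

As a sanity check of the formula: for a pure error variance ($q'V_1q = 0$) with $\mathrm{var}(\hat\sigma_2^2) = 2\sigma_2^4/\nu$, the function returns $\nu$, as one would expect for a scaled $\chi^2_\nu$ variable.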
5.2 The Accuracy of the Formula for Approximating the Degrees of Freedom

A check on the adequacy of the foregoing procedure was performed with a small simulation study.

Step 1. Generate $Y \sim N(0, 1)$.

Step 2. Assume prior information and compute the MINQUE for the components $\sigma_1^2$ and $\sigma_2^2$.

Step 3. Select two different designs and different degrees of unbalanced data.

Design I. A split-plot design arranged in randomized complete blocks, with two blocks, four whole plots, two split plots and five different cases of unbalance:
(i) no missing data
(ii) missing two observations, say $Y_{111}$ and $Y_{242}$
(iii) missing two observations, say $Y_{112}$ and $Y_{212}$
(iv) missing two observations, say $Y_{221}$ and $Y_{222}$
(v) missing $Y_{222}$

Design II. A split-plot design arranged in randomized complete blocks, with four blocks, three whole plots, two split plots and four different cases of unbalance:
(i) no missing observations
(ii) missing two observations, say $Y_{111}$ and $Y_{112}$
(iii) missing four observations, say $Y_{111}$, $Y_{222}$, $Y_{322}$ and $Y_{411}$
(iv) missing twelve observations, say $Y_{111}$, $Y_{112}$, $Y_{221}$, $Y_{222}$, $Y_{231}$, $Y_{232}$, $Y_{311}$, $Y_{312}$, $Y_{421}$, $Y_{422}$, $Y_{431}$ and $Y_{432}$
Step 4. Select three different contrasts:
(i) testing the difference between blocks, say $\lambda_1$
(ii) testing the difference between whole plots, say $\lambda_2$
(iii) testing the difference between split plots, say $\lambda_3$

Combining the different cases of unbalance in Designs I and II with the contrasts for testing hypotheses yields a total of twenty-seven cases. The notation used to identify the twenty-seven cases is given by (design, unbalance case, contrast); i.e., (I, (i), $\lambda_1$) corresponds to Design I, no missing observations, and a contrast for testing differences between blocks.

Step 5. For the twenty-seven cases, the degrees of freedom approximated by formula (5.1.3) were computed.

Step 6. For the twenty-seven cases, the estimates of the variance components were computed 100 times, and with these estimates the function $Z = f(\hat\sigma_1^2, \hat\sigma_2^2) = \lambda'(X'\hat V^{-1}X)^-\lambda$ was evaluated. The mean and the variance of the distributions were computed and histograms constructed. The degrees of freedom of the distributions were computed and compared with the ones obtained in Step 5.

For the first three cases, (I, (i), $\lambda_1$), (I, (i), $\lambda_2$) and (I, (i), $\lambda_3$), the function $Z$ was evaluated 200 times. The purpose of this was to obtain a better approximation for the distribution of $Z$.

The results obtained from the simulation are summarized in Table 5.2.1.
Table 5.2.1. Degrees of freedom associated with different degrees of unbalance in Designs I and II

Source       I,(i)  I,(ii)  I,(iii)  I,(iv)  I,(v)  II,(i)  II,(ii)  II,(iii)  II,(iv)
B              1      1       1        1       1      3       3        3         3
W              3      3       3        3       3      2       2        2         2
Error (a)      3      3       3        2       3      6       5        6         0
S              1      1       1        1       1      1       1        1         1
WS             3      3       3        3       3      2       2        2         2
Error (b)      4      2       2        3       3      9       8        5         3
Total (Cov)   15     13      13       13      14     23      21       19        11

B = blocks or repetitions; W = whole plots; S = split plots; WS = interaction between whole and split plots.
5.2.1 Results Obtained from the Simulation of Case (I, (i), $\lambda_1$)

The degrees of freedom associated with the t-test for testing the difference between blocks given by formula (5.1.3) was 3.00.
In Table (5.2.1) it is observed that error (a) which would normally
be used to test block effects has 3 degrees of freedom.
Observe the
closeness between the degrees of freedom obtained from formula (5.1.3)
and the one from Table (5.2.1).
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of 0.746206 and 0.416883 for the mean
and variance respectively.
The degrees of freedom associated with this
distribution was 2.67.
A second frequency histogram based on 200 values was constructed,
and gave the values of 0.685476 and 0.327729 for the mean and variance
respectively.
The degrees of freedom associated with this distribution
was 2.86.
The graph of the latter frequency distribution is shown in
Figure 5.2.1.
Observe that the graph has the shape of a gamma distribution.
5.2.2 Results Obtained from the Simulation of Case (I, (i), $\lambda_2$)
The degrees of freedom associated
with the t-test for testing the
difference between whole plots, given by formula (5.1.3), was 3.00.
In Table (5.2.1) it is observed that error (a) which would normally
be used to test whole plot effects has 3 degrees of freedom.
Observe
the closeness between the degrees of freedom obtained from formula
(5.1.3) and the one from Table (5.2.1).
60
>
50..,
..
.
,
.
:
48-i.
.,
.,,
I
I
I
-..;,
I
F
R
E
I
38-1I
a
u
E
.
~I
4
I
N 29
C
Y
2.5
8.0
ZVALUES
FIGURE 5.2. I DISTRIBUTION OF Z
FOR TESTING AIlNEAR CONTRAST BfT1JEEN BLOCKS
FOR ASPilT PLOT MODEL YITH lID HISSING OBSERVATIONS
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of 1.3265 and 1.3175 for the mean and
variance respectively.
The degrees of freedom associated with the dis-
tribution was 2.67.
A second frequency histogram based on 200 values was constructed
and gave the values of 1.2186 and 1.0357 for the mean and variance
respectively.
The degrees of freedom associated with this distribution
was 2.86.
The graph of the latter frequency distribution is shown in Figure 5.2.2. Observe that the graph has the shape of a gamma distribution.

5.2.3 Results Obtained from the Simulation of Case (I, (i), $\lambda_3$)
The degrees of freedom associated with the t-test for testing the
difference between split plots, given by formula (5.1.3) was 4.00.
In Table (5.2.1) it is observed that error (b) which would normally
be used to test split-plot effects has 4 degrees of freedom.
Observe
the closeness between the degrees of freedom obtained from formula
(5.1.3) and the one from Table (5.2.1).
The frequency histogram from the distribution of Z, which was based
on 100 values, gives the values of .174674 and .014510 for the mean and
variance respectively.
The degrees of freedom associated with the
distribution was 4.18.
A second frequency histogram based on 200 values was constructed and gave the values of .162663 and .014226 for the mean and the variance, respectively. The degrees of freedom associated with this distribution was 3.71.
[Figure 5.2.2: Distribution of Z for testing a contrast between whole-plot effects for a split-plot model with no missing observations.]
Table 5.2.2. Mean, variance, degrees of freedom of the distribution of Z = f(σ̂₁²,σ̂₂²) (d.f.Z) and degrees of freedom associated with Formula (5.1.3) (d.f.F₀) for the 27 cases considered

Case               Mean        Variance     d.f.Z    d.f.F₀
(I, (i), A1)       .746206     .416883      2.67     3.00
(I, (i), A1)*      .685476     .327729      2.86     3.00
(I, (i), A2)       1.326589    1.317556     2.67     3.00
(I, (i), A2)*      1.218625    1.03578      2.86     3.00
(I, (i), A3)       .174674     .01451       4.18     4.00
(I, (i), A3)*      .162663     .014226      3.71     4.00
(I, (ii), A1)      .694981     .632815      2.94     2.88
(I, (ii), A2)      .785614     .471430      2.61     2.25
(I, (ii), A3)      .433751     .094709      3.97     2.25
(I, (iii), A1)     .882843     .552669      2.82     2.92
(I, (iii), A2)     2.266146    3.82542      2.68     2.63
(I, (iii), A3)     .321029     .060799      3.39     3.36
(I, (iv), A1)      1.02667     1.113545     1.89     2.00
(I, (iv), A2)      2.053346    4.454139     1.89     2.00
(I, (iv), A3)      .357754     .091751      2.78     3.00
(I, (v), A1)       .814711     .748071      1.77     2.95
(I, (v), A2)       2.07997     5.23834      1.65     2.63
(I, (v), A3)       .304351     .079917      2.31     3.25
(II, (i), A1)      2.09957     1.22603      7.19     6.00
(II, (i), A2)      1.04979     .306500      7.19     6.00
(II, (i), A3)      .096352     .001733      10.71    9.00
(II, (ii), A1)     2.63044     2.38098      5.81     5.00
(II, (ii), A2)     0.292271    .029393      5.81     5.00
(II, (ii), A3)     .104107     .002057      10.53    8.00
(II, (iii), A1)    2.161932    1.618287     5.77     5.41
(II, (iii), A2)    .427687     .046884      7.80     7.37
(II, (iii), A3)    .150781     .007013      6.48     5.51
(II, (iv), A1)     3.719712    29.287296    .94      1.00
(II, (iv), A2)     .543057     .658062      .89      1.00
(II, (iv), A3)     .197671     .022929      3.40     3.00

*Based on 200 values.
The graph of the latter frequency distribution is shown in Figure (5.2.3). Observe that the graph has the shape of a gamma distribution.

5.2.4 Results Obtained from the Simulation of the Remaining Cases

The remaining cases are summarized in Table (5.2.2).
It is clear
that the degrees of freedom computed by formula (5.1.3) are satisfactory
for approximating the distribution of Z.
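The agreement between d.f.Z and formula (5.1.3) can be probed directly: the d.f.Z entries of Table (5.2.2) are consistent with matching the first two moments of the simulated Z to a scaled chi-square, giving d.f. = 2·mean²/variance. A minimal sketch under that assumption (Python, not the thesis' Fortran program; the function name is hypothetical):

```python
def df_from_moments(mean, variance):
    """Moment-matched degrees of freedom: if Z ~ (c/h) * chi^2_h for some
    scale c, then E[Z]^2 / Var(Z) = h/2, so h = 2 * mean^2 / variance."""
    return 2.0 * mean ** 2 / variance

# Case (I, (i), A1) of Table 5.2.2: mean .746206, variance .416883.
print(round(df_from_moments(0.746206, 0.416883), 2))  # 2.67, the tabled d.f.Z
```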
5.3 Distribution of Z = f(σ̂₁²,σ̂₂²) = λ'(X'V̂⁻¹X)⁻λ When V̂ Is Computed Using the MINQUE Estimators of the Variance Components

In Chapter 4 it was shown that the parametric function Σᵢ qᵢσᵢ² has a MINQUE of the form Y'AY, where A satisfies the conditions X'A = 0 and tr(AVᵢ) = qᵢ. In specific terms the equations

    [ tr(Q_V V₁ Q_V V₁)   tr(Q_V V₁ Q_V) ] [ σ̂₁² ]   [ Y'Q_V V₁ Q_V Y ]
    [ tr(Q_V V₁ Q_V)      tr(Q_V Q_V)    ] [ σ̂₂² ] = [ Y'Q_V Q_V Y    ]

that is,

    [ σ̂₁² ]        [ Y'Q_V V₁ Q_V Y ]
    [ σ̂₂² ] = S⁻¹ [ Y'Q_V Q_V Y    ]

were obtained. Let {sⁱʲ} denote the elements of S⁻¹. It follows that

    s¹¹ = tr(Q_V Q_V)/Det(S),
    s¹² = s²¹ = −tr(Q_V V₁ Q_V)/Det(S),
    s²² = tr(Q_V V₁ Q_V V₁)/Det(S),

where Det(S) is the determinant of the matrix S.
[Figure 5.2.3: Distribution of Z for testing a linear contrast between split-plot effects for the split-plot model with no missing observations.]
It follows that σ̂₁² = Y'A₁Y and σ̂₂² = Y'A₂Y, where A₁ = s¹¹Q_V V₁Q_V + s¹²Q_V Q_V and A₂ = s²¹Q_V V₁Q_V + s²²Q_V Q_V. Note that A₁ and A₂ are symmetric but not idempotent.
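As a concrete illustration, the 2×2 system above can be assembled and solved numerically. The following sketch (Python; the toy balanced layout, prior values and names are illustrative assumptions, not the thesis' Fortran program) takes V = σ₁²V₁ + σ₂²I with V₁ = Z₁Z₁' and Q_V = V⁻¹ − V⁻¹X(X'V⁻¹X)⁻X'V⁻¹, so that Q_V X = 0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layout: 4 whole plots, 2 split plots each (n = 8); mean-only X.
n_wp, n_sp = 4, 2
n = n_wp * n_sp
X = np.ones((n, 1))
Z1 = np.kron(np.eye(n_wp), np.ones((n_sp, 1)))   # whole-plot incidence
V1 = Z1 @ Z1.T                                   # whole-plot dispersion
V = 1.0 * V1 + 1.0 * np.eye(n)                   # prior values a1 = a2 = 1

Vinv = np.linalg.inv(V)
Q = Vinv - Vinv @ X @ np.linalg.pinv(X.T @ Vinv @ X) @ X.T @ Vinv

S = np.array([[np.trace(Q @ V1 @ Q @ V1), np.trace(Q @ V1 @ Q)],
              [np.trace(Q @ V1 @ Q),      np.trace(Q @ Q)]])

# Simulate one Y from the split-plot model with sigma1^2 = sigma2^2 = 1.
Y = Z1 @ rng.normal(size=n_wp) + rng.normal(size=n)
rhs = np.array([Y @ Q @ V1 @ Q @ Y, Y @ Q @ Q @ Y])
sig1_hat, sig2_hat = np.linalg.solve(S, rhs)   # MINQUE of (sigma1^2, sigma2^2)
```

In the balanced case with these priors the solution coincides with the usual analysis-of-variance estimators, as noted in Chapter 7.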
Scheffe (1970) found that under the assumption of normality of Y with mean μ and positive definite variance matrix V, the quadratic form Q = Y'AY can be written as Q = Σᵢ λᵢ χ²(hᵢ,δᵢ²), where the {λᵢ} are the characteristic roots of AV and χ²(hᵢ,δᵢ²) is a chi-square random variable with hᵢ degrees of freedom and non-centrality parameter δᵢ². However, since MINQUE are not necessarily positive, a more general statement is needed.

J. P. Imhof (1961) worked with the quadratic form Q = Y'AY = Σᵢ λᵢ χ²(hᵢ,δᵢ²), where the χ²(hᵢ,δᵢ²) are chi-square random variables with hᵢ degrees of freedom and non-centrality parameters δᵢ², and where λ₁ > λ₂ > ... > λ_p > 0 > λ_{p+1} > ... > λ_m are the characteristic roots of AV. He found that under the assumption of normality of Y, the distribution of the quadratic form Q can be accurately approximated by

    P{Q > x} ≈ P{χ²(h) > y},

where h = (C₂)³/(C₃)² and Cⱼ = Σᵣ λᵣʲ hᵣ for j = 1, 2, 3. The invariance condition of the estimators, AX = 0, leads to the simplification δᵢ² = 0. It follows that Y'A₁Y and Y'A₂Y have approximate chi-squared distributions, χ²(h₁) and χ²(h₂) respectively, where hᵢ = (C₂⁽ⁱ⁾)³/(C₃⁽ⁱ⁾)², Cⱼ⁽ⁱ⁾ = Σᵣ (λᵣ⁽ⁱ⁾)ʲ hᵣ, and λ₁⁽ⁱ⁾ > λ₂⁽ⁱ⁾ > ... are the characteristic roots of AᵢV for i = 1, 2.
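A minimal numerical sketch of the approximation above, assuming each characteristic root carries hᵣ = 1 degree of freedom (Python; the function name is hypothetical):

```python
import numpy as np

def approx_df(A, V):
    """Degrees of freedom h = (C2)^3 / (C3)^2 with C_j = sum_r lambda_r^j,
    the lambda_r being the characteristic roots of AV (each taken with
    h_r = 1 degree of freedom)."""
    lam = np.linalg.eigvals(A @ V)
    c2 = float(np.sum(lam ** 2).real)
    c3 = float(np.sum(lam ** 3).real)
    return c2 ** 3 / c3 ** 2

# Sanity check: if AV is idempotent of rank 3 (roots 1, 1, 1, 0, 0), then
# Y'AY is exactly chi-square with 3 d.f., and the formula recovers 3.
A = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
print(round(approx_df(A, np.eye(5)), 6))  # 3.0
```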
5.4 Simulation Study to Evaluate Adequacy of Approximation

In this section a simulation study to evaluate the adequacy of the approximation to the distribution of Z = f(σ̂₁²,σ̂₂²) when σ̂₁² and σ̂₂² are MINQUE will be investigated.

5.4.1 Simulation of the Distribution of Z = f(σ̂₁²,σ̂₂²) When the Prior Values Are Used to Compute the Parameters α and β of the Approximate Gamma Distribution
The simulation that is proposed is given as follows:

Step 1. Generate Y ~ N(0,1).

Step 2. Assume prior values and compute the MINQUE for the variance components, σ₁² and σ₂².

Step 3. Select a balanced design with two blocks, four whole plots, two split plots and a linear parametric function for testing split-plot effects.

Step 4. Compute the approximate parameters of the gamma distribution given by

    α = (q'V₁q σ₁² + q'V₂q σ₂²)² / [var(σ̂₁²)(q'V₁q)² + var(σ̂₂²)(q'V₂q)² + 2cov(σ̂₁²,σ̂₂²)(q'V₁q)(q'V₂q)]    (5.4.1.1)

    β = [var(σ̂₁²)(q'V₁q)² + var(σ̂₂²)(q'V₂q)² + 2cov(σ̂₁²,σ̂₂²)(q'V₁q)(q'V₂q)] / (q'V₁q σ₁² + q'V₂q σ₂²)    (5.4.1.2)

by using prior values instead of the variance components.
Step 5. Generate the estimates of the variance components 100 times, and with these estimates evaluate the function Z = f(σ̂₁²,σ̂₂²) = λ'(X'V̂⁻¹X)⁻λ. Use α, β and Z to generate a gamma distribution by using the function u = CDGAMA(Z,α,β).

Step 6. Test the hypothesis H₀: u ∈ τ against Hₐ: u ∉ τ, where τ is the set of gamma distributions.
Three different competing test procedures will be used. The first is based on Watson's U² statistic given by

    U² = Σᵢ [u₍ᵢ₎ − (2i−1)/(2N)]² + 1/(12N) − N(ū − 0.5)²,

where uᵢ = P(Z ≤ zᵢ) is the cumulative distribution function of Z evaluated at the observed values, the u₍ᵢ₎ are the ordered u values, ū is the mean, and N is the number of observations.

The second procedure is based on

    P₄² = N(t₁² + t₂² + t₃² + t₄²),

where tᵣ = (1/N) Σⱼ πᵣ(uⱼ) for r = 1, 2, 3, 4 and π₁(u) = √12 (u − 1/2), π₂(u) = √5 (6(u − 1/2)² − 1/2), π₃(u) = √7 (20(u − 1/2)³ − 3(u − 1/2)) and π₄(u) = 210(u − 1/2)⁴ − 45(u − 1/2)² + 9/8.

The third procedure is a χ² test based on the fact that one tenth of the {uᵢ} values should fall in each of the intervals [0, .1), [.1, .2), ..., [.9, 1.0].

Step 7. Graph the distribution of Z.
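The first two test statistics can be sketched as follows (Python; a hedged rendering of the formulas above, not the thesis' program — the χ² count over ten equal cells is a routine tally and is omitted):

```python
import math

def watson_u2(u):
    """Watson's U^2 for u_1, ..., u_N nominally Uniform(0,1)."""
    u = sorted(u)
    n = len(u)
    ubar = sum(u) / n
    w2 = sum((ui - (2 * i - 1) / (2 * n)) ** 2
             for i, ui in enumerate(u, start=1)) + 1 / (12 * n)
    return w2 - n * (ubar - 0.5) ** 2

def neyman_p4(u):
    """Neyman's P4^2 = N(t1^2 + ... + t4^2), t_r the sample mean of the
    r-th normalized shifted Legendre polynomial pi_r(u)."""
    n = len(u)
    pi = [
        lambda x: math.sqrt(12) * (x - 0.5),
        lambda x: math.sqrt(5) * (6 * (x - 0.5) ** 2 - 0.5),
        lambda x: math.sqrt(7) * (20 * (x - 0.5) ** 3 - 3 * (x - 0.5)),
        lambda x: 210 * (x - 0.5) ** 4 - 45 * (x - 0.5) ** 2 + 9 / 8,
    ]
    return n * sum((sum(p(x) for x in u) / n) ** 2 for p in pi)
```

For a perfectly uniform sample u₍ᵢ₎ = (2i−1)/(2N) both statistics are tiny, far below the 5% critical points 0.187 and 9.49 used in this chapter.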
Results of the Simulations

The parameters α and β show values of α = 1.99 and β = 0.08. The computed values for U², P₄² and χ² show U² = .075, P₄² = .1533 and χ² = 13.2. The critical values for U², P₄² and χ² are U²(.05) = 0.187, P₄²(.05) = 9.49 and χ²₉(.05) = 16.9. Therefore in each case the null hypothesis was not rejected. It is concluded that Z = f(σ̂₁²,σ̂₂²) belongs to the class of gamma distributions.
A second simulation was performed to determine how many times the value t = λ'β̂/√var̂(λ'β̂) is rejected under the hypothesis H₀: λ'β = 0 against Hₐ: λ'β ≠ 0. It was found that in 3 of 100 cases the null hypothesis was rejected. This means less than 5% of the time.
A third simulation was performed as follows:

Step 1. Generate Y ~ N(0,1).

Step 2. Select an unbalanced design with two blocks, four whole plots and two split plots and assume there are missing values, say Y₂₂₁ and Y₂₂₂.

Step 3. Assume prior values and compute the MINQUE for the variance components σ₁² and σ₂².

Step 4. Compute the approximate parameters of the gamma distribution by formulas (5.4.1.1) and (5.4.1.2).

Step 5. Generate the estimates of the variance components 100 times, and with these evaluate the function Z = f(σ̂₁²,σ̂₂²). Use α, β and Z to generate a gamma distribution by using the function u = CDGAMA(Z,α,β).

Step 6. Use two criteria to test the hypothesis H₀: u ∈ τ against Hₐ: u ∉ τ, where τ is the set of gamma distributions.
71
F
R 38
E
a
u
£'
N 28
C
y
18
91-4-'.L.LL.u.,u.~4-LoU-.---f-J...L..-~---.-----.
0.0
9.5
1.9
ZVAlUES
FIGtJ<E 5. 4. I DISTRIBUTION OF Z
FOR TESTING ACONTRAST BETIIEEN SPllT PLOT EFfECTS
FOR ASPllT PlOT HOOB.
1.5
72
The criteria used were based on Watson's U² and Neyman's P₄².

Results

The parameters α and β show values of α = 1.4999 and β = 0.1333. The computed values for U² and P₄² show values of U² = 0.097 and P₄² = 1.99323. The critical points for U² and P₄² are U²(.05) = 0.187 and P₄²(.05) = 9.49. Hence, the null hypothesis was not rejected in either case.
Finally, a simulation was performed to determine how many times the value t = λ'β̂/√var̂(λ'β̂) is rejected under the hypothesis H₀: λ'β = 0 against Hₐ: λ'β ≠ 0. It was found that in 2 of 100 cases the hypothesis was rejected. This means less than 5% of the time. We conclude from this that the distributions belong to the class of gamma distributions.
5.4.2 Simulation of the Distribution of t = λ'β̂/√var̂(λ'β̂) Under the Null Hypothesis λ'β = 0 By Using the MINQUE Iterated Estimators in the Computation of λ'β̂

The simulation that is proposed is given as follows:

Step 1. Generate Y ~ N(0,1).

Step 2. Select a balanced design with two blocks, four whole plots, two split plots and a linear parametric function for testing split-plot effects.
Step 3. Assume prior values.

Step 4. Generate the MINQUE estimates of the variance components σ₁² and σ₂², 100 times, by iterating the MINQUE procedure several times.

Step 5. Compute t = λ'β̂/√var̂(λ'β̂) by using the MINQUE iterated estimators. Use the approximated degrees of freedom given by formula (5.1.3) to generate the distribution function of the t-distribution.

Step 6. Use three criteria to test the hypothesis H₀: u ∈ τ against Hₐ: u ∉ τ, where τ is the set of t-distributions.

The criteria were based on Watson's U², Neyman's P₄² and a χ² test using 10 intervals.
Results of the Simulation

The computed values for U², P₄² and χ² show values of U² = 0.0957, P₄² = 0.0178 and χ² = 10.6, with critical points U²(.05) = 0.187, P₄²(.05) = 9.49 and χ²₉(.05) = 16.9. Hence, the null hypothesis was accepted. Also, the number of times that the null hypothesis λ'β = 0 was rejected at the 5% level of significance was 2 of 100. This means less than 5%.
A second simulation was performed as before but instead of Step 2, now select an unbalanced design with missing values, say Y₂₂₁ and Y₂₂₂.

Results of the Simulation

The computed values for U², P₄² and χ² show values of U² = 0.09224, P₄² = 2.333 and χ² = 3.8, with critical points U²(.05) = 0.187, P₄²(.05) = 9.49 and χ²₉(.05) = 16.9. Hence, the null hypothesis was accepted. The number of times that the null hypothesis λ'β = 0 was rejected at the 5% level of significance was 6 of 100. This means over 5%.

As a conclusion of the above simulations, it is legitimate to assume that t = λ'β̂/√var̂(λ'β̂) is distributed approximately as a t-distribution with degrees of freedom given by formula (5.1.3).
6.
THE ANALYSIS OF COVARIANCE
The methodology developed in Chapters 4 and 5 will now be applied to the analysis of covariance for the unbalanced split-plot experiment. For the complete balanced experiment, with i = 1, ..., a, j = 1, ..., n and k = 1, ..., b, the model allows for different slopes at the whole-plot and the split-plot level. In matrix notation this model becomes

    Y = 1μ + X_a a + X_b b + X_ab(ab) + Z₁β₁ + Z₂β₂ + U₁δ + ε.    (6.1)

Also, the vector 1 and the columns of X_a and Z₁ lie in the space spanned by the columns of U₁.
Without further mention, it will be assumed in the remainder of this chapter that the model (6.1) has been appropriately modified to allow for unbalanced experiments. Also, let a₁² and a₂² be prior values or guesses for σ₁² and σ₂², and use these to write V = V₁a₁² + V₂a₂². Transforming the model (6.1) by V^(−1/2) yields

    V^(−1/2)Y = V^(−1/2)(1μ + X_a a + X_b b + X_ab(ab) + Z₁β₁ + Z₂β₂) + V^(−1/2)(U₁δ + ε).    (6.2)
76
Applying generalized least squares and reordering leads to the normal equations

    [ 1'V⁻¹1      1'V⁻¹X_a      1'V⁻¹X_b      1'V⁻¹X_ab      1'V⁻¹Z₁      1'V⁻¹Z₂    ] [ μ  ]   [ 1'V⁻¹Y    ]
    [ X_a'V⁻¹1    X_a'V⁻¹X_a    X_a'V⁻¹X_b    X_a'V⁻¹X_ab    X_a'V⁻¹Z₁    X_a'V⁻¹Z₂  ] [ a  ]   [ X_a'V⁻¹Y  ]
    [ X_b'V⁻¹1    X_b'V⁻¹X_a    X_b'V⁻¹X_b    X_b'V⁻¹X_ab    X_b'V⁻¹Z₁    X_b'V⁻¹Z₂  ] [ b  ] = [ X_b'V⁻¹Y  ]    (6.3)
    [ X_ab'V⁻¹1   X_ab'V⁻¹X_a   X_ab'V⁻¹X_b   X_ab'V⁻¹X_ab   X_ab'V⁻¹Z₁   X_ab'V⁻¹Z₂ ] [ ab ]   [ X_ab'V⁻¹Y ]
    [ Z₁'V⁻¹1     Z₁'V⁻¹X_a     Z₁'V⁻¹X_b     Z₁'V⁻¹X_ab     Z₁'V⁻¹Z₁     Z₁'V⁻¹Z₂   ] [ β₁ ]   [ Z₁'V⁻¹Y   ]
    [ Z₂'V⁻¹1     Z₂'V⁻¹X_a     Z₂'V⁻¹X_b     Z₂'V⁻¹X_ab     Z₂'V⁻¹Z₁     Z₂'V⁻¹Z₂   ] [ β₂ ]   [ Z₂'V⁻¹Y   ]

where V is of order n × n. Define X = [1 : X_a : X_b : X_ab] and let Z = [Z₁ : Z₂], of order n × 2. Define

    T₁ = [ I                    0 ]
         [ −Z'V⁻¹X(X'V⁻¹X)⁻   I ]

and P_X = X(X'V⁻¹X)⁻X'V⁻¹. Transform (6.3) by T₁ to get

    [ X'V⁻¹X   X'V⁻¹Z             ] [ (μ, a, b, ab)' ]   [ X'V⁻¹Y             ]
    [ 0        Z'(V⁻¹ − V⁻¹P_X)Z ] [ (β₁, β₂)'      ] = [ Z'(V⁻¹ − V⁻¹P_X)Y ]    (6.4)
This leads to a solution for β̂₁ and β̂₂, say

    (β̂₁, β̂₂)' = [Z'(V⁻¹ − V⁻¹P_X)Z]⁻ Z'(V⁻¹ − V⁻¹P_X)Y.    (6.5)

In order to test hypotheses about the ab interaction, let X₁ = [1 : X_a : X_b] and P_X₁ = X₁(X₁'V⁻¹X₁)⁻X₁'V⁻¹, and define

    T₂ = [ I                           0   0 ]
         [ −X_ab'V⁻¹X₁(X₁'V⁻¹X₁)⁻   I   0 ]
         [ 0                           0   I ]

Transform (6.4) by T₂ to get

    [ X₁'V⁻¹X₁   X₁'V⁻¹X_ab                  X₁'V⁻¹Z                  ] [ (μ, a, b)' ]   [ X₁'V⁻¹Y                  ]
    [ 0          X_ab'(V⁻¹ − V⁻¹P_X₁)X_ab   X_ab'(V⁻¹ − V⁻¹P_X₁)Z   ] [ ab         ] = [ X_ab'(V⁻¹ − V⁻¹P_X₁)Y   ]    (6.6)
    [ 0          0                           Z'(V⁻¹ − V⁻¹P_X)Z       ] [ (β₁, β₂)'  ]   [ Z'(V⁻¹ − V⁻¹P_X)Y       ]

This leads to a solution for ab, say

    âb = [X_ab'(V⁻¹ − V⁻¹P_X₁)X_ab]⁻ { X_ab'(V⁻¹ − V⁻¹P_X₁)Y − X_ab'(V⁻¹ − V⁻¹P_X₁)Z (β̂₁, β̂₂)' }.    (6.7)

Hypotheses about linear functions among the interaction parameters, i.e., λ'ab, can be tested by replacing V by V̂ and then using the approximated t-test developed in section 5.1 in the form

    t = (λ'âb − λ'ab) / √var̂(λ'âb),

using formula (5.1.3) to approximate the degrees of freedom associated with the t-test.
If interest is in split-plot treatment effects, then the analysis must go one step further. Let X_a denote [1 : X_a] (the mean column adjoined to the whole-plot treatment columns) and P_X_a = X_a(X_a'V⁻¹X_a)⁻X_a'V⁻¹. Observe that the vector 1 lies in the space generated by the columns of X_a; therefore, there exists a vector s such that 1 = X_a s. This implies that P_X_a 1 = 1. Define

    T₃ = [ I                            0   0   0 ]
         [ −X_b'V⁻¹X_a(X_a'V⁻¹X_a)⁻   I   0   0 ]
         [ 0                            0   I   0 ]
         [ 0                            0   0   I ]
Transform (6.6) by T₃ to get the system whose rows correspond to μ, a, b, ab and (β₁, β₂), with right-hand side

    ( 1'V⁻¹Y, X_a'V⁻¹Y, X_b'(V⁻¹ − V⁻¹P_X_a)Y, X_ab'(V⁻¹ − V⁻¹P_X₁)Y, Z'(V⁻¹ − V⁻¹P_X)Y )'.    (6.8)

A solution for b is given by

    b̂ = [X_b'(V⁻¹ − V⁻¹P_X_a)X_b]⁻ { X_b'(V⁻¹ − V⁻¹P_X_a)Y − X_b'(V⁻¹ − V⁻¹P_X_a)X_ab âb − X_b'(V⁻¹ − V⁻¹P_X_a)Z (β̂₁, β̂₂)' }.    (6.9)

A linear combination of the parameters b can be tested by using the approximated t-test method of section 5.1 by computing

    t = (λ'b̂ − λ'b) / √var̂(λ'b̂)

and using the formula (5.1.3) to approximate the degrees of freedom associated with the t-test.
Finally, to derive a test on whole-plot treatments, let P₁ = 1(1'V⁻¹1)⁻¹1'V⁻¹. Define

    T₄ = [ I                       0   0   0   0 ]
         [ −X_a'V⁻¹1(1'V⁻¹1)⁻¹   I   0   0   0 ]
         [ 0                       0   I   0   0 ]
         [ 0                       0   0   I   0 ]
         [ 0                       0   0   0   I ]

Transform (6.8) by T₄ to get the system whose right-hand side is

    ( 1'V⁻¹Y, X_a'(V⁻¹ − V⁻¹P₁)Y, X_b'(V⁻¹ − V⁻¹P_X_a)Y, X_ab'(V⁻¹ − V⁻¹P_X₁)Y, Z'(V⁻¹ − V⁻¹P_X)Y )'.    (6.10)

A solution for a is given by

    â = [X_a'(V⁻¹ − V⁻¹P₁)X_a]⁻ { X_a'(V⁻¹ − V⁻¹P₁)Y − X_a'(V⁻¹ − V⁻¹P₁)X_b b̂ − X_a'(V⁻¹ − V⁻¹P₁)X_ab âb − X_a'(V⁻¹ − V⁻¹P₁)Z (β̂₁, β̂₂)' }.    (6.11)

Now it is possible to test a linear combination of the parameters a by using the approximated t-test method of section 5.1 in the form

    t = (λ'â − λ'a) / √var̂(λ'â).

Formula (5.1.3) is used to approximate the degrees of freedom associated with the t-test.
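The transformations T₁–T₄ are, in effect, sweep operations that absorb earlier effects through projections. Numerically, the same point estimates and the approximate t of section 5.1 can be obtained by direct generalized least squares on the full design with V replaced by V̂; the small design, effect values and stand-in variance estimates below are illustrative assumptions, not the thesis' program:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wp, n_sp = 4, 2
n = n_wp * n_sp

# Fixed effects: mean, a two-level whole-plot treatment, a two-level
# split-plot treatment (indicator coding; covariates omitted for brevity).
wp_trt = np.repeat([0.0, 1.0, 0.0, 1.0], n_sp)
sp_trt = np.tile([0.0, 1.0], n_wp)
X = np.column_stack([np.ones(n), wp_trt, sp_trt])
U1 = np.kron(np.eye(n_wp), np.ones((n_sp, 1)))

s1_hat, s2_hat = 0.8, 1.2                  # stand-ins for MINQUE estimates
V_hat = s1_hat * (U1 @ U1.T) + s2_hat * np.eye(n)
Vinv = np.linalg.inv(V_hat)

Y = X @ np.array([1.0, 0.5, -0.5]) + U1 @ rng.normal(size=n_wp) \
    + rng.normal(size=n)
XtVX = X.T @ Vinv @ X
beta_hat = np.linalg.solve(XtVX, X.T @ Vinv @ Y)   # GLS estimates

lam = np.array([0.0, 0.0, 1.0])            # contrast on the split-plot effect
se = np.sqrt(lam @ np.linalg.solve(XtVX, lam))
t = (lam @ beta_hat) / se   # refer to t with d.f. from formula (5.1.3)
```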
7.
SIMULATION STUDIES
Previous chapters have presented asymptotic and large sample
properties of the estimators obtained via generalized least squares
based on an estimated variance-covariance matrix.
In this chapter some insight into the small-sample properties of these estimates will be obtained from a series of small simulation experiments.
These simulations all follow the same general pattern.
A set of
observations is generated according to a specified split-plot model.
The first step in the estimation procedure is to compute values for σ̂₁² and σ̂₂². In the second step generalized-least-squares estimates of the fixed effects in the model are obtained by using the estimated variance-covariance matrix obtained from the first step. A listing of the Fortran program used for these simulations is in Appendix A. A flow chart is given in Figure 7.1.
7.1 Standard Conditions

The first simulation was performed under standard or ideal conditions as a check on the computer program. Observations for a split-plot experiment with two blocks, four whole plots per block and two split plots per whole plot were generated. Both error components were drawn from N(0,1). Estimates for both σ₁² and σ₂² were obtained using the MINQUE equations with prior values a₁ = a₂ = 1. Note that in this case the estimates are identical with those obtained from the usual analysis of variance. Also in this case the weighted least squares estimates of the fixed effects, i.e., the treatment effects, are
82
INPUT
I 1
I 2
parameters of
the
program
construction of
the. split-plot
model
~
~
itera~
tions done?
I 7
--
yes
Display
histogram and
graph
........
-
No
------
----
I
/
I 8
I 4
Compute
,,2
"2
(J 1 and (J2
Compute
Watson's and
Wayman's test
V
/
I 5
I
2
Display P
and u 2
statistical
values
Compute
-S
-
It
-
I 6
-
Figure 7.1.
Compute
histogram and
graph
Flow chart of the simulation of the behavior of the
estimate vector of parameter S
9
83
identical with the usual treatment means.
This simulation was repeated
100 times.
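One replicate of this standard-conditions check can be sketched as follows (a hedged illustration: under the balanced null model with no treatment effects, the GLS estimate of the general mean reduces to the grand mean whatever estimated variance-covariance matrix is used, so the MINQUE step is not needed for this particular summary):

```python
import numpy as np

rng = np.random.default_rng(2)

def general_mean_estimate(n_blocks=2, n_wp=4, n_sp=2):
    """One replicate: both error components N(0,1), all effects zero.
    In the balanced case the GLS general-mean estimate is the grand mean."""
    wp_err = rng.normal(size=(n_blocks * n_wp, 1))     # whole-plot errors
    sp_err = rng.normal(size=(n_blocks * n_wp, n_sp))  # split-plot errors
    y = wp_err + sp_err
    return y.mean()

estimates = [general_mean_estimate() for _ in range(100)]
# The 100 values scatter around zero, roughly normally (cf. Figure 7.2).
```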
The resulting "observations," i.e., estimates of the general mean, are given in Figure 7.2. Clearly, this distribution is close to normal. The Watson's U² statistic for testing normality is 0.03563, well below the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 1.32074, again well below the critical value P₄²(.05) = 9.49.
Non-standard Conditions
The second simulation was performed as follows:
Observations for a split-plot experiment with two blocks, four whole
plots per block and two split plots per whole plot with two missing
observations, say Ylll and Y242 ' were generated.
were drawn from N(O,l).
ai
Estimates for both
using the MINQUE equations with prior values a
Both error components
a;
and
=
l
a
2
were obtained
= 1.
This simula-
tion was repeated 100 times.
The resulting "observations," Le., estimates of the general mean,
are given in Figure 7.3.
2
The Watson's u statistic for testing normality
2
is 0.02422, well below the critical value U.
= 0.187.
05
2
Neyman's P
4
2
=
statistic was 0.74839, again well below the critical value p
4,0.05
9.49.
The third simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block and two split plots per whole plot, with two missing observations, say Y₁₁₁ and Y₂₄₂, were generated. Both error components were drawn from U(−1,1). Estimates for both σ₁² and σ₂² were obtained using the MINQUE equations with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.
[Figure 7.2: Distribution of the estimated general mean for the split-plot model under the normal distribution, when the variance components were replaced by the MINQUE estimators.]
[Figure 7.3: Distribution of the estimated general mean for the split-plot model with missing observations, under the normal distribution, when the variance components were replaced by the MINQUE estimators.]
The resulting "observations," i.e., estimates of the general mean, are given in Figure 7.4. The Watson's U² statistic for testing normality is 0.04703, well below the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 1.15893, again well below the critical value P₄²(.05) = 9.49.
The fourth simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block and two split plots per whole plot, with two missing observations, Y₁₁₁ and Y₂₄₂, were generated. Both error components were drawn from N(0,1). Estimates for both σ₁² and σ₂² were obtained using the restricted MINQUE estimators defined in section 4.2.2 with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.

The resulting "observations," estimates of the general mean, are given in Figure 7.5. The Watson's U² statistic for testing normality is 0.02350, well below the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 0.67622, again well below the critical value P₄²(.05) = 9.49.
The fifth simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block, two split plots per whole plot and two missing observations, Y₁₁₁ and Y₂₄₂, were generated. Both error components were drawn from U(−1,1). Estimates for both σ₁² and σ₂² were obtained using the restricted MINQUE estimators defined in section 4.2.2 with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.

The resulting "observations," estimates of the general mean, are given in Figure 7.6. The Watson's U² statistic for testing normality is 0.04925, well below the critical value U²(.05) = 0.187. Neyman's P₄²
[Figure 7.4: Distribution of the estimated general mean for the split-plot model, with missing observations, under the uniform distribution, when the variance components were replaced by the MINQUE estimators.]
[Figure 7.5: Distribution of the estimated general mean for the split-plot model, with missing observations, under the normal distribution, when the variance components were replaced by the restricted MINQUE estimators.]
[Figure 7.6: Distribution of the estimated general mean for the split-plot model, with missing observations, under the uniform distribution, when the variance components were replaced by the restricted MINQUE estimators.]
statistic was 1.14965, again well below the critical value P₄²(.05) = 9.49.
The sixth simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block, two split plots per whole plot, and two missing observations, say Y₁₁₁ and Y₂₄₂, were generated. Both error components were drawn from N(0,1). Estimates for both σ₁² and σ₂² were obtained using the positive semi-definite MINQUE estimators defined in section 4.2.3 with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.

The resulting "observations," estimates of the general mean, are given in Figure 7.7. The Watson's U² statistic for testing normality is 0.03291, well below the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 0.7935, again well below the critical value P₄²(.05) = 9.49.
The seventh simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block, two split plots per whole plot, and two missing observations, say Y₁₁₁ and Y₂₄₂, were generated. Both error components were drawn from U(−1,1). Estimates for both σ₁² and σ₂² were obtained using the positive semi-definite MINQUE estimators defined in section 4.2.3 with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.

The resulting "observations," estimates of the general mean, are given in Figure 7.8. The Watson's U² statistic for testing normality is 0.04645, well below the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 1.26313, again well below the critical value P₄²(.05) = 9.49.
[Figure 7.7: Distribution of the estimated general mean for the split-plot model, with missing observations, under the normal distribution, when the variance components were replaced by the restricted MINQUE estimators.]
[Figure 7.8: Distribution of the estimated general mean for the split-plot model, with missing observations, under the uniform distribution, when the variance components were replaced by the P.S.D. MINQUE estimators.]
Finally, the last simulation was performed as follows: Observations for a split-plot experiment with two blocks, four whole plots per block and two split plots per whole plot were generated. Both error components were drawn from the slash distribution. Estimates for both σ₁² and σ₂² were obtained using the MINQUE equations with prior values a₁ = a₂ = 1. This simulation was repeated 100 times.

The resulting "observations," estimates of the general mean, are given in Figure 7.9. This distribution is symmetrical with tails heavier than the normal. Watson's U² statistic for testing normality was 3.9252, which is greater than the critical value U²(.05) = 0.187. Neyman's P₄² statistic was 65.9737, again greater than the critical value P₄²(.05) = 9.49.
[Figure 7.9: Distribution of the estimated general mean for the split-plot model under the slash distribution, when the variance components were replaced by the MINQUE estimators.]
8.
SUMMARY AND CONCLUSIONS
The generalized least squares analysis for the split-plot model
with missing observations and estimated variance-covariance matrix was
discussed in this thesis.
Many methods have been suggested to estimate the variance-covariance matrix; a few of them are Henderson's method III, Rao's method and Seely's method.
The behavior of the generalized least squares estima-
tors of the split-plot model when using an estimated variance-covariance
matrix depends on the distribution of the random errors and the method
used to estimate the variance-covariance matrix.
In this thesis it was found that the estimated variance-covariance matrix V̂ converges in probability to the variance-covariance matrix V, i.e., V̂ →ₚ V, when using Seely's method under the invariance condition with respect to the fixed effects.
Under the invariance condition
Seely's method is equivalent to the MINQUE method.
Also, three procedures were proposed to observe the behavior of
the generalized least squares estimator of the parameters β when the variance-covariance matrix was replaced by its estimate. In the simulations it was found that each one behaves normally both when the random errors are normal (0,1) and when they are uniform (−1,1).
In many practical problems, the researcher wants to know how many
degrees of freedom he needs to consider in a t-test for testing a linear
combination of the parameters when he uses an estimate of the variance-covariance matrix. In this thesis the answer is given in terms of a formula (formula 5.1.3), when the variance-covariance matrix is estimated by the MINQUE method. Also, it is suggested that either prior values or iterated MINQUE estimators be used to evaluate the formula.
Simulations indicate that this formula provides a good approximation
for the degrees of freedom associated with the t-test.
Finally, Appendix A shows explicit formulas for the MINQUE estimators of the variance components for a completely randomized design in both the balanced and the unbalanced case. The latter, however, was restricted to the condition that if a split plot in the ith whole plot is missing, the same split plot must be missing for all the repetitions of the ith whole plot.
9.
RECOMMENDATIONS
It is recommended that in the future an efficient simulation study
be performed by using some variance reduction techniques for the different methods used in the simulations of this thesis.
There are several methods used to reduce the variability of an estimator; one of them is the method of control variates. The goal of this method is to obtain an efficient estimator t of the parameter λ'β. This method is defined as follows:

Let σ̂₁² and σ̂₂² denote the MINQUE estimators of the variance components σ₁² and σ₂², and let a₁² and a₂² be prior values of the variance components.

Step 1. Define t₁ = (1/n) Σᵢ λ'β̂ᵢ, where β̂ᵢ = (X'V̂⁻¹X)⁻X'V̂⁻¹Yᵢ and V̂ is computed from σ̂₁² and σ̂₂².

Step 2. Define t₂ = (1/n) Σᵢ λ'β̃ᵢ, where β̃ᵢ = (X'V⁻¹X)⁻X'V⁻¹Yᵢ and V is computed from the prior values a₁² and a₂².

Since β̂ᵢ and β̃ᵢ come from the same generated random numbers, they are likely to be highly positively correlated, and therefore t₁ and t₂ should be positively correlated.

Step 3. Define t = λ'β + t₁ − t₂ as the new estimator of λ'β.

Step 4. Define var(t) = var(t₁) + var(t₂) − 2 cov(t₁,t₂) and observe that var(t) < var(t₁) if var(t₂) < 2 cov(t₁,t₂). Since this holds because of the high positive correlation, the estimator t is an efficient estimator of λ'β with small variance.
Another way to look at the same problem is when there is a special interest in observing the number of times that the value t = λ'β̂/√var̂(λ'β̂) is rejected under the null hypothesis λ'β = 0. Define the estimated power t = 0.05 + (1/n) Σᵢ t̂ᵢ − (1/n) Σᵢ t̃ᵢ, where t̂ᵢ and t̃ᵢ indicate rejection in the ith replicate for the analyses of Steps 1 and 2 respectively, and observe that

    var(estimated power t) = var((1/n) Σᵢ t̂ᵢ) + var((1/n) Σᵢ t̃ᵢ) − 2 cov((1/n) Σᵢ t̂ᵢ, (1/n) Σᵢ t̃ᵢ);

again, if var(estimated power t) < var((1/n) Σᵢ t̂ᵢ), then it is an efficient estimator.
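A scalar toy version of the control-variate idea can be sketched as follows (Python; the data model and weighting are illustrative assumptions — t₁ plays the role of the analysis with estimated weights, t₂ the analysis whose expectation λ'β is known exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean = 1.0          # stands in for lambda' beta
n_rep = 2000

def paired_estimates():
    """One simulated data set analysed two ways from the SAME draws:
    with data-estimated weights (t1-type) and with the equal-weight
    analysis whose expectation is known to be true_mean (t2-type)."""
    y1 = true_mean + rng.normal(size=4)          # unit-variance half
    y2 = true_mean + 2.0 * rng.normal(size=4)    # variance-4 half
    w1, w2 = 1.0 / y1.var(ddof=1), 1.0 / y2.var(ddof=1)
    t1 = (w1 * y1.sum() + w2 * y2.sum()) / (4 * w1 + 4 * w2)
    t2 = np.concatenate([y1, y2]).mean()
    return t1, t2

pairs = np.array([paired_estimates() for _ in range(n_rep)])
t1, t2 = pairs[:, 0], pairs[:, 1]
t_cv = true_mean + t1 - t2      # the control-variate estimator of Step 3
```

Because t₁ and t₂ are computed from the same draws they are strongly positively correlated, and var(t) = var(t₁) + var(t₂) − 2 cov(t₁,t₂) falls below var(t₁), as claimed in Step 4.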
Another method to reduce the variability of an estimator is the method of antithetic variates. This method is defined as follows:

Step 1. Define ξ₁, ξ₂, ..., ξ_s as uniform random numbers and define 1 − ξ₁, 1 − ξ₂, ..., 1 − ξ_s, which are also uniform random numbers.

Step 2. With ξ₁, ..., ξ_s construct normal deviates such that U = U₁E₁ + U₂E₂, where E₁ and E₂ are N(0,1) and U₁ and U₂ are known matrices.

Step 3. With 1 − ξ₁, ..., 1 − ξ_s construct normal deviates such that U* = U₁E₁* + U₂E₂*, where E₁* and E₂* are N(0,1) and U₁ and U₂ are known matrices.

Step 4. Let t₁ = (1/n) Σᵢ λ'β̂ᵢ, where β̂ᵢ = (X'V̂⁻¹X)⁻X'V̂⁻¹Yᵢ and Yᵢ = Uᵢ.

Step 5. Let t₂ = (1/n) Σᵢ λ'β̂ᵢ*, where β̂ᵢ* = (X'V̂*⁻¹X)⁻X'V̂*⁻¹Yᵢ* and Yᵢ* = Uᵢ*.

Step 6. Let t = ½(t₁ + t₂); this will produce a new estimator such that t₁ and t₂ are negatively correlated.

Step 7. Var(t) = ¼ var(t₁) + ¼ var(t₂) + ½ cov(t₁,t₂), and since t₁ and t₂ are negatively correlated, var(t) ≤ var(t₁); then t is an efficient estimator.
Another way to look at this is when it is desired to count the number of times that the value t = λ'β̂/√var̂(λ'β̂) is rejected under the null hypothesis λ'β = 0. Define

    t₁ᵢ = 1 if λ'β̂/√var̂(λ'β̂) > t(f, 0.05), and 0 otherwise,

    t₂ᵢ = 1 if λ'β̂*/√var̂(λ'β̂*) > t(f, 0.05), and 0 otherwise,

where f is computed with formula (5.1.3). Define the estimated power t = ½((1/n) Σᵢ t₁ᵢ + (1/n) Σᵢ t₂ᵢ) and

    var(estimated power t) = ¼ var((1/n) Σᵢ t₁ᵢ) + ¼ var((1/n) Σᵢ t₂ᵢ) + ½ cov((1/n) Σᵢ t₁ᵢ, (1/n) Σᵢ t₂ᵢ);

again, the estimated power t is an efficient estimator if var(estimated power t) < var((1/n) Σᵢ t₁ᵢ).
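The antithetic construction can be sketched as follows (Python; the monotone statistic exp(·) is an illustrative stand-in for λ'β̂ — for the plain sample mean the antithetic pair cancels exactly, which would hide the effect):

```python
from statistics import NormalDist, mean, variance
import math
import random

random.seed(4)
nd = NormalDist()
n_rep, n_obs = 2000, 8
t1_vals, t2_vals = [], []
for _ in range(n_rep):
    xi = [random.random() for _ in range(n_obs)]     # Step 1: uniforms
    e = [nd.inv_cdf(x) for x in xi]                  # Step 2: N(0,1) deviates
    e_star = [nd.inv_cdf(1 - x) for x in xi]         # Step 3: antithetic pair
    t1_vals.append(mean(math.exp(v) for v in e))     # Step 4 (toy statistic)
    t2_vals.append(mean(math.exp(v) for v in e_star))  # Step 5

t = [(a + b) / 2 for a, b in zip(t1_vals, t2_vals)]  # Step 6
# Step 7: var(t) = var(t1)/4 + var(t2)/4 + cov(t1,t2)/2 < var(t1),
# because t1 and t2 are negatively correlated.
```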
BIBLIOGRAPHY
Aitken, A. C. (1934). On least squares and linear combinations of
observations. Proceedings of the Royal Society of Edinburgh A.
Vol. 55, pp. 42-47.
Allan, F. E. and J. Wishart. (1930). A method of estimating the yield of a missing plot in field experimental work. Jour. Agr. Sci. 20, pt. 3, pp. 399-406.
Anderson, R. L. (1946). Missing-plot techniques. Biometrics, Vol. 2, No. 3, pp. 41-47.
Anderson, T. W. (1973). Asymptotically efficient estimation of
covariance matrices with linear structure. The Annals of Statistics,
Vol. 1, No.1, pp. 135-141.
Bement, T. R. and J. S. Williams. (1969). Variance of weighted regression estimators when sampling errors are independent and heteroscedastic. Journal of the Amer. Stat. Assoc. 64, pp. 1367-82.
Box, G. E. P. (1954). Some theorems on quadratic forms applied in the
study of analysis of variance problems. I. Effect of inequality
of variance in the one-way classification. Ann. Math. Statist. 25,
pp. 290-302.
Dickey, D. A. (1978). A program for generating and analyzing large
Monte-Carlo studies. North Carolina State University, Department
of Statistics. Unpublished note in programming.
Fuller, W. A. and G. E. Battese. (1973). Transformations for estimation
of linear models with nested error structure. JASA, Vol. 68,
pp. 626-632.
Fuller, W. A. and J. N. K. Rao. (1978). Estimation for a linear
regression model with unknown diagonal covariance matrix. The Ann.
of Statist., Vol. 6, No.5, pp. 1149-1158.
Harville, D. A. (1977). Maximum likelihood approaches to variance components estimation and to related problems. JASA, 72, pp. 320-340.
Henderson, C. R. (1953). Estimation of variance and covariance components. Biometrics 9, pp. 226-252.
Hocking, R. R. and F. M. Speed. (1975). A full rank analysis of some
linear models problems. JASA, 70, pp. 706-712.
Hocking, R. R., F. M. Speed, and A. T. Coleman. (1980). Common
hypotheses to be tested with unbalanced data. Commun. Statist.
A9(2):117-129.
Imhof, J. P. (1961). Computing the distribution of quadratic forms
in normal variables. Biometrika 48, 3 and 4, pp. 419-426.
James, G. S. (1951). The comparison of several groups of observations
when the ratios of the population variances are unknown. Biometrika 38, pp. 324-329.
(1954). Tests of linear hypotheses in univariate and multivariate
analysis when the ratios of the population variances are unknown.
Biometrika 41, pp. 19-43.
Johansen, S. (1980). The Welch-James approximation to the distribution
of the residual sum of squares in 2-way weighted linear regression.
Biometrika 67, pp. 85-92.
LaMotte, L. R. (1976). Invariant quadratic estimators in the random,
one-way ANOVA model. Biometrics 32, pp. 793-804.
Magness, T. A. and J. B. McGuire. (1962). Comparison of least squares
and minimum variance estimates of regression parameters. The Annals
of Mathematical Statistics, 33, pp. 462-70.
McElroy, F. W. (1967). A necessary and sufficient condition that
ordinary least squares estimators be best linear unbiased. JASA 62,
pp. 1302-1304.
Quesenberry, C. P. (n.d.) Some transformation methods in goodness of
fit. North Carolina State University, Department of Statistics.
Unpublished notes on goodness of fit tests.
Rao, C. R. (1970). Estimation of heteroscedastic variances in linear
models. JASA 65, pp. 161-171.
(1972). Estimation of variance and covariance components in
linear models. JASA 67, pp. 112-115.
(1979). MINQE theory and its relations to ML and MML
estimation of variance components. Sankhya: The Indian Journal
of Statistics, Vol. 41, Series B, Pts. 3 and 4, pp. 138-153.
Rao, C. R. and J. Kleffe. (1979). Variance and covariance components
estimation and applications. Technical report, No. 181. The Ohio
State University, Department of Statistics.
Rao, J. N. K. and K. Subrahmaniam. (1971). Combining independent
estimators and estimation in linear regression with unequal
variances. Biometrics, Vol. 27, No.4, pp. 971-991.
Satterthwaite, F. E. (1946). An approximate distribution of estimates
of variance components. Biometrics Bulletin, Vol. 2, pp. 110-114.
Scheffé, H. (1959). The Analysis of Variance. John Wiley and Sons,
Inc., New York.
Searle, S. R. (1968). Another look at Henderson's method of estimating
variance components. Biometrics 24, pp. 749-788.
(1971). Linear Models. John Wiley and Sons, Inc., New York.
Seely, J. (1970). Linear spaces and unbiased estimation. The Ann.
Math. Statist., Vol. 41, No. 5, pp. 1725-1734.
(1970). Linear spaces and unbiased estimation, application
to mixed linear models. The Ann. Math. Statist., Vol. 41, No. 5,
pp. 1735-1748.
Seely, J. and G. Zyskind. (1971). Linear spaces and minimum variance
unbiased estimation. The Ann. Math. Statist., Vol. 42, No. 2,
pp. 691-703.
(1971). Quadratic subspaces and completeness. The Ann.
Math. Statist., Vol. 42, No. 2, pp. 710-721.
Speed, F. M., R. R. Hocking and O. P. Hackney. (1978). Methods of
analysis of linear models with unbalanced data. JASA, 73, pp. 105-112.
Welch, B. L. (1947). The generalization of Student's problem when
several different population variances are involved. Biometrika
34, pp. 28-35.
(1951). On the comparison of several mean values: an
alternative approach. Biometrika 38, pp. 330-336.
Williams, J. S. (1975). Lower bounds on convergence rates of weighted
least squares to best linear unbiased estimators. In A Survey of
Statistical Design and Linear Models, J. N. Srivastava, ed.,
North-Holland Publishing Company, Amsterdam, pp. 555-570.
(1967). The variance of weighted regression estimators.
JASA 62, pp. 1290-1301.
Yates, F. (1933). The analysis of replicated experiments when the
field results are incomplete. Emp. Jour. Exp. Agr. 1:129-142.
Zyskind, George. (1969). Parametric augmentations and error structure
under which certain sample least squares and analysis of variance
procedures are also best. JASA, 64, pp. 1353-1368.
APPENDIX A
MINQUE ESTIMATORS FOR THE SPLIT-PLOT
ERROR VARIANCES
Consider a split-plot design arranged in randomized complete blocks.
Let i = 1, ..., n_h index the whole plots of the hth repetition and
j = 1, ..., n_hi index the split plots within the (h,i)th whole plot.
In matrix notation the model becomes

    Y = Xβ + U_1 δ + U_2 e,

where Y is an n x 1 vector of observations, X is an n x (1 + a + b + ab)
matrix, β is a (1 + a + b + ab) x 1 vector of parameters, U_1 is an
n x ra matrix, δ is an ra x 1 vector of random errors, U_2 is an n x n
identity matrix and e is an n x 1 vector of random errors.
Let α_1^2 and α_2^2 be prior values or guesses for the variance components
σ_1^2 and σ_2^2, and use these to write V = BDMATX(J_hi α_1^2 + I_hi α_2^2),
where J_hi and I_hi are of order n_hi x n_hi, and

    V^{-1} = BDMATX(I_hi a_0 + a_hi J_hi),

where a_0 = α_2^{-2} and a_hi = -α_1^2 (α_2^2 (n_hi α_1^2 + α_2^2))^{-1}.
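The stated form of V^{-1} can be verified one block at a time.  A small pure-Python check; the block size n and the prior values below are illustrative, not from the thesis:

```python
# Check that (a0*I + ahi*J) inverts (s2*I + s1*J) for one n x n block,
# with a0 = 1/s2 and ahi = -s1/(s2*(n*s1 + s2)).
n = 4
s1, s2 = 2.0, 3.0          # illustrative prior values alpha_1^2, alpha_2^2
a0 = 1.0 / s2
ahi = -s1 / (s2 * (n * s1 + s2))

V = [[s1 + (s2 if i == j else 0.0) for j in range(n)] for i in range(n)]
W = [[ahi + (a0 if i == j else 0.0) for j in range(n)] for i in range(n)]

# The product V @ W should be the identity matrix.
for i in range(n):
    for j in range(n):
        prod = sum(V[i][k] * W[k][j] for k in range(n))
        assert abs(prod - (1.0 if i == j else 0.0)) < 1e-12
```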
I
hi
Note that X
CMATX(~i)
where
~i
is an n hi x (1 + a + b + ab)
block matrix that contains the rows of the ith whole plot of the
repetition.
= RMATX(Xh . ;
1.£
£
= 1, ••• , t)
h~th
106
where t is the number of different terms which are included in the
fixed effects part of the split-plot model.
=
In the present example t
4.
The first submatrix X_hi1 = 1_{n_hi} is an n_hi x 1 vector of ones due
to the mean in the model.  X_hi2 is an n_hi x a matrix of the form
[0: ... :0:1:0: ... :0], where the column of ones is in the ith column.
X_hi3 is an n_hi x b matrix which has exactly one non-zero element,
equal to one, in each row.  This matrix can be formed from an identity
matrix of order b x b by deleting the jth row of that matrix if the jth
split plot of the ith whole plot is missing.  It is also possible to
write X_hi3 = CMATX(d'_k), where d'_k, k = 1, ..., n_hi, is a unitary
vector.  X_hi4 = RMATX(D_k ; k = 1, 2, ..., a) is an n_hi x ab row
matrix where the block matrices are of order n_hi x b and of the form

    D_k = X_hi3 if i = k, and D_k = 0 otherwise.

Alternatively X_hi4 = [0: ... :0:X_hi3:0: ... :0], where X_hi3 is in
the ith position.
These definitions are now used to compute X'V^{-1}X:

    X'V^{-1}X = RMATX(X'_hi) · BDMATX(a_0 I_hi + a_hi J_hi) · CMATX(X_hi)
              = RMATX(X'_hi) · CMATX(a_0 X_hi + a_hi J_hi X_hi)
              = ΣΣ_hi a_0 X'_hi X_hi + ΣΣ_hi a_hi X'_hi J_hi X_hi.

Let

    δ^j_hi = 0 if the jth split plot treatment is missing in the ith
             whole plot of the hth repetition, and δ^j_hi = 1 otherwise,
where n = ΣΣ_hi n_hi.  Observe that

    Σ_j δ^j_hi δ^j_hi = Σ_j δ^j_hi = n_hi.
The product X'_hi3 J_hi X_hi3 is the b x b matrix whose (j, j') element
is δ^j_hi δ^j'_hi:

    X'_hi3 J_hi X_hi3 = [ δ^1_hi δ^1_hi  ...  δ^1_hi δ^b_hi
                          ...
                          δ^b_hi δ^1_hi  ...  δ^b_hi δ^b_hi ].

Thus,

    ΣΣ_hi a_hi X'_hi1 J_hi X_hi1 = ΣΣ_hi a_hi n^2_hi,

    ΣΣ_hi a_hi X'_hi1 J_hi X_hi2 = [Σ_h a_h1 n^2_h1, ..., Σ_h a_ha n^2_ha],

    ΣΣ_hi a_hi X'_hi1 J_hi X_hi3 = [ΣΣ_hi a_hi n_hi δ^1_hi, ..., ΣΣ_hi a_hi n_hi δ^b_hi],

    ΣΣ_hi a_hi X'_hi1 J_hi X_hi4 = [Σ_h a_h1 n_h1 δ^1_h1, ..., Σ_h a_h1 n_h1 δ^b_h1, ...,
                                    Σ_h a_ha n_ha δ^1_ha, ..., Σ_h a_ha n_ha δ^b_ha],

    ΣΣ_hi a_hi X'_hi2 J_hi X_hi2 = BDMATX(Σ_h a_hi n^2_hi ; i = 1, 2, ..., a),

    ΣΣ_hi a_hi X'_hi3 J_hi X_hi3 = [ΣΣ_hi a_hi δ^j_hi δ^j'_hi]_{j,j' = 1, ..., b},

and

    ΣΣ_hi a_hi X'_hi4 J_hi X_hi4 = BDMATX(Σ_h a_hi δ^j_hi δ^j'_hi ; i = 1, 2, ..., a),

with the remaining products built from the same elements
Σ_h a_hi n_hi δ^j_hi and Σ_h a_hi δ^j_hi δ^j'_hi arranged in the
corresponding block positions.
Let each whole plot treatment have the same number of repetitions, that
is, n_i = r for all i.
Now consider the MINQUE estimators for the variance components in
these two cases:
first, when there is no missing split-plot treatment
and, second, when there are missing split-plot treatments.
If there are no missing observations, then

    ΣΣ_hi X'_hi X_hi = r · [ ab      b1'_a         a1'_b         1'_ab
                             b1_a    bI_a          J             I_a ⊗ 1'_b
                             a1_b    J'            aI_b          1'_a ⊗ I_b
                             1_ab    I_a ⊗ 1_b     1_a ⊗ I_b     I_ab ].

Observe that a_hi = -α_1^2 (α_2^2 (b α_1^2 + α_2^2))^{-1} for all h and
i, so let a_1 = a_hi for all h and i.  Then

    ΣΣ_hi a_hi X'_hi J_hi X_hi = a_1 r · [ ab^2     b^2 1'_a       ab1'_b        b1'_ab
                                           b^2 1_a  b^2 I_a        bJ            b(I_a ⊗ 1'_b)
                                           ab1_b    bJ'            aJ_b          1'_a ⊗ J_b
                                           b1_ab    b(I_a ⊗ 1_b)   1_a ⊗ J_b     BDMATX(J_b) ],

and hence

    X'V^{-1}X =
      [ rab(a_0+a_1b)        rb(a_0+a_1b)1'_a        ra(a_0+a_1b)1'_b            r(a_0+a_1b)1'_ab
        rb(a_0+a_1b)1_a      rb(a_0+a_1b)I_a         r(a_0+a_1b)J                r(a_0+a_1b)(I_a ⊗ 1'_b)
        ra(a_0+a_1b)1_b      r(a_0+a_1b)J'           ra(a_0 I_b + a_1 J_b)       r(1'_a ⊗ (a_0 I_b + a_1 J_b))
        r(a_0+a_1b)1_ab      r(a_0+a_1b)(I_a ⊗ 1_b)  r(1_a ⊗ (a_0 I_b + a_1 J_b))  r · BDMATX(a_0 I_b + a_1 J_b) ].
Notice that the columns in the design matrix X that correspond to
the mean, whole plots and split plots are linear combinations of the
columns of the interaction between whole and split plots.
Given the inverse of the matrix r·BDMATX(a_0 I + a_1 J), the generalized
inverse of the matrix X'V^{-1}X can be computed.
Therefore,

    (X'V^{-1}X)^- = BDMATX(0, 0, 0, r^{-1} · BDMATX(b_0 I_b + b_1 J_b ; i = 1, ..., a)),

where b_0 = a_0^{-1} and b_1 = -a_1 (a_0 (a_0 + a_1 b))^{-1}.  Recall
that a_0 = α_2^{-2}.
The quadratic form Y'Q_V V_1 Q_V Y can now be developed.  Recall that
V_1 = V^b_1 V^b_1', where V^b_1 = BDMATX(n_hi^{-1/2} J_hi); then

    V^b_1 V^{-1} Y = CMATX(n_hi^{-1/2} J_hi (a_0 I_hi + a_hi J_hi) Y_hi)    (A.1)
                   = CMATX(n_hi^{-1/2} (a_0 + a_hi n_hi) J_hi Y_hi).        (A.2)
Also observe that

    V^b_1 V^{-1} X (X'V^{-1}X)^- X'V^{-1} Y
        = ΣΣ_hi V^b_1 V^{-1} · CMATX(X_hi (X'V^{-1}X)^- X'_hi (a_0 I_hi + a_1 J_hi) Y_hi).   (A.3)

Because (X'V^{-1}X)^- is zero outside its last diagonal block,

    X_hi (X'V^{-1}X)^- = [0 : 0 : 0 : X_hi4] · BDMATX(0, 0, 0, r^{-1} · BDMATX(b_0 I_b + b_1 J_b))
                       = r^{-1} X_hi4 · BDMATX(b_0 I_b + b_1 J_b).
Then (A.3) becomes

    ΣΣ_hi V^b_1 V^{-1} · CMATX(r^{-1} (b_0 I_b + b_1 J_b)(a_0 I_hi + a_1 J_hi) Y_hi)
        = ΣΣ_hi V^b_1 V^{-1} · CMATX(r^{-1} Y_hi),    (A.4)

since (b_0 I_b + b_1 J_b)(a_0 I_b + a_1 J_b) = I_b.  From (A.2) and
(A.4) the quadratic form Y'Q_V V_1 Q_V Y is obtained, and the companion
form follows similarly.
The trace tr(Q_V V_1 Q_V V_1) is now developed.  From (3.4.2.21),

    tr(Q_V V_1 Q_V V_1) = α_1^{-4} α_2^4 ( ΣΣ_hi (a_0^2 + 2a_0 a_1 + a_1^2 b) b^2
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_1 + 3a_0 a_1^2 b + a_1^3 b^2) t_hi)
        + ΣΣ_hh' ΣΣ_ii' (a_0^2 t_{1h'i'hi} + 2a_0 (2a_0 a_1 + a_1^2 b) t_{2h'i'hi}
        + (2a_0 a_1 + a_1^2 b)(2a_0 a_1 + a_1^2 b) t_{3h'i'hi}) ),

where q = R(X) and

    t_hi = tr(X'_hi4 r^{-1} · BDMATX(b_0 I_b + b_1 J_b) X_hi4 J_hi).
Finally, observe that t_{3h'i'hi} = 0 if i ≠ i'; then for i = i',

    t_{3h'ihi} = r^{-2} (b_0 + b_1 b)^2 b^2.    (A.8)
Therefore, from (A.5), (A.6), and (A.7),

    tr(Q_V V_1 Q_V V_1) = α_1^{-4} α_2^4 ( ΣΣ_hi (a_0^2 + 2a_0 a_1 + a_1^2 b) b^2
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_1 + 3a_0 a_1^2 b + a_1^3 b^2) r^{-1} (b_0 + b_1 b) b)
        + ΣΣ_hh'i (a_0^2 r^{-2} (b_0^2 + 2b_0 b_1 + b_1^2 b) b
        + 2a_0 (2a_0 a_1 + a_1^2 b) r^{-2} (b_0 + b_1 b)^2 b^2
        + (2a_0 a_1 + a_1^2 b)^2 r^{-2} (b_0 + b_1 b)^2 b^2) ).

Also, from (3.4.2.22) and (A.5), (A.6) and (A.7),

    tr(Q_V V_1 Q_V) = α_1^{-2} ( ΣΣ_hi (a_0 + a_1) b - a_0 q - ΣΣ_hi a_1 (a_0 + a_1 b) r^{-1} (b_0 + b_1 b) b )
        - α_1^{-2} α_2^2 ( ΣΣ_hi (a_0^2 + 2a_0 a_1 + a_1^2 b) b
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_1 + 3a_0 a_1^2 b + a_1^3 b^2) r^{-1} (b_0 + b_1 b) b)
        + ΣΣ_hh'i (a_0^2 r^{-2} (b_0^2 + 2b_0 b_1 + b_1^2 b) b
        + 2a_0 (a_0 + a_1 b) r^{-2} (b_0 + b_1 b)^2 b^2
        + a_1 (2a_0 + a_1 b)(2a_0 + a_1 b) r^{-2} (b_0 + b_1 b)^2 b^2) ).

Finally, from (3.4.2.20) and (A.5), (A.6) and (A.7),

    tr(Q_V Q_V) = ΣΣ_hi (a_0^2 + 2a_0 a_1 + a_1^2 b) b
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_1 + 3a_0 a_1^2 b + a_1^3 b^2) r^{-1} (b_0 + b_1 b) b)
        + a_0^2 ΣΣ_hh'i r^{-2} (b_0^2 + 2b_0 b_1 + b_1^2 b) b
        + 2a_0 ΣΣ_hh'i (2a_0 a_1 + a_1^2 b) r^{-1} (b_0 + b_1 b) b^2
        + ΣΣ_hh'i (2a_0 a_1 + a_1^2 b)(2a_0 a_1 + a_1^2 b) r^{-2} (b_0 + b_1 b)^2 b^2.
If there are missing split plots, assume the same split plots are
missing in every repetition, so that a_hi = a_i for all h.  Then in
order to compute a generalized inverse for X'V^{-1}X, it is first
necessary to compute a generalized inverse of the matrix

    B = ΣΣ_hi X'_hi4 (a_0 I_hi + a_i J_hi) X_hi4.

This matrix is a block diagonal matrix with blocks

    B_i = r(a_0 I*_i + a_i J*_i),

where I*_i is a b x b matrix with zeros or ones on the diagonal,
J*_i = c_hi c'_hi is a b x b matrix with zeros or ones, and
c_hi = (δ^1_hi, ..., δ^b_hi)'.  Observe that I*_i has n_hi ones on the
diagonal.

A generalized inverse of B_i is B_i^- = r^{-1}(b_0 I*_i + b_i J*_i),
where b_0 = a_0^{-1} and b_i = -a_i (a_0 (o_i a_i + a_0))^{-1}, where
o_i is the number of split-plot treatments in the ith whole plot.

Therefore, a generalized inverse of X'V^{-1}X is

    (X'V^{-1}X)^- = BDMATX(0, 0, 0, r^{-1} · BDMATX(b_0 I*_i + b_i J*_i ; i = 1, ..., a)),

where b_0 = α_2^2.
The quadratic form Y'Q_V V_1 Q_V Y can now be developed.  Recall that
V_1 = V^b_1 V^b_1'; then

    V^b_1 V^{-1} Y = CMATX(n_hi^{-1/2} J_hi (a_0 I_hi + a_i J_hi) Y_hi)    (A.9)
                   = CMATX(n_hi^{-1/2} (a_0 + a_i n_hi) J_hi Y_hi).        (A.10)
Also observe that

    V^b_1 V^{-1} X (X'V^{-1}X)^- X'V^{-1} Y
        = ΣΣ_hi V^b_1 V^{-1} · CMATX(X_hi (X'V^{-1}X)^- X'_hi (a_0 I_hi + a_i J_hi) Y_hi).   (A.11)
Note that

    X_hi (X'V^{-1}X)^- = [0 : 0 : 0 : X_hi4] · BDMATX(0, 0, 0, r^{-1} · BDMATX(b_0 I*_i + b_i J*_i))
                       = r^{-1} X_hi4 · BDMATX(b_0 I*_i + b_i J*_i).
Then (A.11) becomes

    ΣΣ_hi V^b_1 V^{-1} · CMATX(r^{-1} (b_0 I*_i + b_i J*_i)(a_0 I_hi + a_i J_hi) Y_hi)
        = ΣΣ_hi V^b_1 V^{-1} · CMATX(r^{-1} (b_0 a_0 I_hi + (b_0 a_i + b_i a_0 + a_i b_i n_hi) J_hi) Y_hi)
        = CMATX(r^{-1} n_hi^{-1/2} (a_0 + a_i n_hi) J_hi Y_hi).    (A.12)

From (A.10) and (A.12) the quadratic form Y'Q_V V_1 Q_V Y is obtained.
Similarly for the companion form.  From (3.4.2.21),

    tr(Q_V V_1 Q_V V_1) = α_1^{-4} α_2^4 ( ΣΣ_hi (a_0^2 + 2a_0 a_i + a_i^2 n_hi) n_hi
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_i + 3a_0 a_i^2 n_hi + a_i^3 n^2_hi) t_hi)
        + ΣΣ_hh' ΣΣ_ii' (a_0^2 t_{1h'i'hi} + 2a_0 a_i (2a_0 + a_i n_hi) t_{2h'i'hi}
        + a_i a_i' (2a_0 + a_i n_hi)(2a_0 + a_i' n_{h'i'}) t_{3h'i'hi}) ),

where q = R(X), and

    t_hi = tr(X'_hi4 r^{-1} · BDMATX(b_0 I*_i + b_i J*_i) X_hi4 J_hi)
         = tr(X'_hi3 r^{-1} (b_0 I*_i + b_i J*_i) X_hi3 J_hi)
         = tr(r^{-1} (b_0 I*_i + b_i J*_i) J_hi)
         = r^{-1} (b_0 + b_i n_hi) n_hi.    (A.13)
Observe that t_{1h'i'hi} = 0 if i ≠ i'; then for i = i',

    t_{1h'ihi} = tr(X'_hi3 (r^{-1} (b_0 I*_i + b_i J*_i)) X_hi3 X'_h'i3 (r^{-1} (b_0 I*_i + b_i J*_i)) X_h'i3)
               = r^{-2} tr((b_0 I_h'i + b_i J_h'i)(b_0 I_hi + b_i J_hi)).    (A.14)

Also observe that t_{2h'i'hi} = 0 if i ≠ i'.  Finally, observe that
t_{3h'i'hi} = 0 if i ≠ i'; then for i = i',

    t_{3h'ihi} = r^{-2} (b_0 + b_i n_hi)^2 n^2_hi.    (A.15)
Therefore,

    tr(Q_V V_1 Q_V V_1) = α_1^{-4} (ΣΣ_hi n_hi - q)
        - 2 α_1^{-4} α_2^2 ( ΣΣ_hi (a_0 + a_i) n_hi
            - ΣΣ_hi (r^{-1} (b_0 + b_i n_hi) n_hi)(a_0 + a_i n_hi) )
        + α_1^{-4} α_2^4 ( ΣΣ_hi (a_0^2 + 2a_0 a_i + a_i^2 n_hi) n_hi
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_i + 3a_0 a_i^2 n_hi + a_i^3 n^2_hi) r^{-1} (b_0 + b_i n_hi) n_hi)
        + ΣΣ_hh'i (a_0^2 r^{-2} (b_0^2 + 2b_0 b_i + b_i^2 n_hi) n_hi
        + 2a_0 a_i (2a_0 + a_i n_hi) r^{-2} (b_0 + b_i n_hi)^2 n^2_hi
        + a_i^2 (2a_0 + a_i n_hi)^2 r^{-2} (b_0 + b_i n_hi)^2 n^2_hi) ).
Also,

    tr(Q_V V_1 Q_V) = α_1^{-2} ( ΣΣ_hi (a_0 + a_i) n_hi - a_0 q
            - ΣΣ_hi a_i (a_0 + a_i n_hi) r^{-1} (b_0 + b_i n_hi) n_hi )
        - α_1^{-2} α_2^2 ( ΣΣ_hi (a_0^2 + 2a_0 a_i + a_i^2 n_hi) n_hi
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_i + 3a_0 a_i^2 n_hi + a_i^3 n^2_hi) r^{-1} (b_0 + b_i n_hi) n_hi)
        + ΣΣ_hh'i (a_0^2 r^{-2} (b_0^2 + 2b_0 b_i + b_i^2 n_hi) n_hi
        + 2a_0 (a_0 + a_i n_hi) r^{-2} (b_0 + b_i n_hi)^2 n^2_hi
        + a_i (2a_0 + a_i n_hi)(2a_0 + a_i n_hi) r^{-2} (b_0 + b_i n_hi)^2 n^2_hi) ).
Finally,

    tr(Q_V Q_V) = ΣΣ_hi (a_0^2 + 2a_0 a_i + a_i^2 n_hi) n_hi
        - 2(a_0^2 q + ΣΣ_hi (2a_0^2 a_i + 3a_0 a_i^2 n_hi + a_i^3 n^2_hi) r^{-1} (b_0 + b_i n_hi) n_hi)
        + a_0^2 ΣΣ_hh'i r^{-2} (b_0^2 + 2b_0 b_i + b_i^2 n_hi) n_hi
        + 2a_0 ΣΣ_hh'i (2a_0 a_i + a_i^2 n_hi) r^{-1} (b_0 + b_i n_hi) n^2_hi
        + ΣΣ_hh'i (2a_0 a_i + a_i^2 n_hi)(2a_0 a_i + a_i^2 n_hi) r^{-2} (b_0 + b_i n_hi)^2 n^2_hi.
APPENDIX B
PROGRAMS FOR THE SIMULATION OF THE BEHAVIOR OF
THE ESTIMATE VECTOR OF PARAMETER β
      DIMENSION TT(4),T(4)
      REAL*8 TIT/'       X'/
      REAL LA1(30),LA2(30),LA3(30),YA1(30),LA(44)
      DIMENSION X(30,30),VI(30,30),V1(30,30),QV(30,30),W(30,30)
      DIMENSION G(30,30),OP(30,30),A(30,30),POX(30,30)
      DIMENSION U1(30,30)
      DIMENSION Y(72),Z(144),RW(44),S(2,2),DELTA(2),ZA(72),Q1(72)
      DIMENSION SI(2,2)
      DIMENSION AV1(30,30),AQV(30,30),AS(2,2),ASI(2,2)
      DIMENSION U(1000)
      DIMENSION UU(200)
      INTEGER R,C,CA,NB(20),BL,PR(10)
      INTEGER B(30),WP(30),SP(30)
      INTEGER NR/30/,NC/30/,N3/30/
      CALL SCLPAR(PR,IT,ITM,ITN,X1,X2,CONST,R,X,NR,NC,BL,NB,C,CA)
      NU1C=CA-C
      DO 70 I=1,R
      DO 70 J=1,NU1C
   70 U1(I,J)=X(I,C+J)
      X11=X1
      X12=X2
      CALL INTIAL(1)
      IDIST=PR(10)
C*** GENERATE RANDOM ERRORS
      CALL SCRNY(Y,Z,R,IDIST)
      CALL SCRNY(YA1,Z,R,IDIST)
      CALL SCAB(YA1,U1,1,NU1C,R,YA1,Z,1,NR,NR,NR,1,NR,NR,1)
      DO 83 JJ=1,R
   83 Y(JJ)=Y(JJ)+YA1(JJ)
C*** END GENERATE RANDOM ERRORS
      CALL SCSMI(X,VI,V1,QV,W,G,R,C,NR,NC,BL,NB,Z,ZA,S,SI,X1,X2,CONST)
      IF(PR(9).EQ.1) GO TO 66
      DO 99 K1=1,ITN
C*** COMPUTE MINQUE
      YY1=FQF(Y,V1,R,NR,NR)
      YY2=FQF(Y,QV,R,NR,NR)
      X11=YY1*SI(1,1)+YY2*SI(1,2)
      X12=YY1*SI(1,2)+YY2*SI(2,2)
      WRITE(3,602)X11,X12
  602 FORMAT(2F10.6)
      CALL SCSMI(X,VI,V1,QV,W,G,R,C,NR,NC,BL,NB,Z,ZA,S,SI,
     *           X11,X12,CONST)
C     V1::=QV*V1*QV , QV::=QV*QV
   99 CONTINUE
      IF(PR(9).EQ.2)X1=X11
      IF(PR(9).EQ.2)X2=X12
   66 CONTINUE
      CALL RSTART(563,542)
  105 READ(6,2,END=106)(LA(I),I=1,C)
      ITA=0
      NDF=0
    2 FORMAT(40F2.0)
      CALL SCVI(V1,R,X1,X2,BL,NB,NR,NR)
      CALL SCATB(X,VI,C,R,R,W,WA,NR,NC,NR,NR,NR,NR,1,1)
      CALL SCAB(W,X,C,R,C,G,WA,NR,NR,NR,NC,NC,NC,1,1)
      CALL SCGI(G,C,W,TR1,ZA,Z,CONST,NC,NC,NR,NR)
      CALL SCAB(VI,X,R,R,C,W,WA,NR,NR,NR,NC,NR,NR,1,1)
      CALL SCAB(W,G,R,C,C,W,Z,NR,NR,NC,NC,NR,NR,NR,1)
      CALL SCAB(W,LA,R,C,1,Q1,Z,NR,NR,NR,1,NR,1,1,1)
      CALL SCV1(A,R,BL,NB,NR,NR)
      QV1Q=FQF(Q1,A,R,NR,NR)
      CALL SCI(A,R,NR,NR)
      QQ=FQF(Q1,A,R,NR,NR)
      AA=QV1Q*X1+QQ*X2
      BB=2*(QV1Q*QV1Q*SI(1,1)+QQ*QQ*SI(2,2)+2*QV1Q*QQ*SI(1,2))
      AL=AA**2/BB
      BE=BB/AA
      DFF=2*AL
      WRITE(3,43)DFF,AL,BE,(LA(K1),K1=1,C)
   43 FORMAT(//' DEGREES OF FREEDOM ',F15.1//' AL=',F15.6//
     * ' BE=',F15.6//' LAM='/2(30F4.0))
      IDF=DFF + .05
      DO 100 I=1,IT
C*** GENERATE RANDOM ERRORS
      CALL SCRNY(Y,Z,R,IDIST)
      CALL SCRNY(YA1,Z,NU1C,IDIST)
      CALL SCAB(YA1,U1,1,NU1C,R,YA1,Z,1,NR,NR,NR,1,NR,NR,1)
      DO 84 JJ=1,R
   84 Y(JJ)=Y(JJ)+YA1(JJ)
C*** END GENERATE RANDOM ERRORS
C*** MINQUE ESTIMATORS
      YY1=FQF(Y,V1,R,NR,NR)
      YY2=FQF(Y,QV,R,NR,NR)
      Y1=YY1*SI(1,1)+YY2*SI(1,2)
      Y2=YY1*SI(1,2)+YY2*SI(2,2)
      CALL SCVI(VI,R,Y1,Y2,BL,NB,NR,NR)
      CALL SCATB(X,VI,C,R,R,W,WA,NR,NC,NR,NR,NR,NR,1,1)
      CALL SCAB(W,X,C,R,C,G,WA,NR,NR,NR,NC,NC,NC,1,1)
      CALL SCGI(G,C,W,TR1,ZA,Z,CONST,NC,NC,NR,NR)
      ALPL=FQF(LA,G,C,NR,NR)
      WRITE(3,609)ALPL
  609 FORMAT(' Z-VALUE',F10.6)
      CALL SCAB(VI,X,R,R,C,W,WA,NR,NR,NR,NC,NR,NR,1,1)
      CALL SCAB(W,G,R,C,C,W,Z,NR,NR,NC,NC,NR,NR,NR,1)
      CALL SCAB(Y,W,1,R,C,Q1,Z,1,NR,NR,NR,1,NR,1,1)
      CALL SCAB(Q1,LA,1,C,1,BLA,Z,1,NR,NR,1,1,1,1,1)
      TV=BLA/SQRT(ALPL)
      PROB=PRGT(TV,IDF)
      WRITE(3,610)TV,PROB
  610 FORMAT(' T-VALUE',F10.6,' PROBABILITY',F10.6)
      IF(PROB.LT..025.OR.PROB.GT..975)NDF=NDF+1
      IF(CDGAMA(ALPL,AL,BE).LE.0.)GO TO 100
      ITA=ITA+1
      U(ITA)=CDGAMA(ALPL,AL,BE)
  100 CONTINUE
      GO TO 105
  106 CONTINUE
      WRITE(3,600)NDF
  600 FORMAT(' N R=',I5)
C*** GOODNESS OF FIT TEST
      CALL SCCDF(U,1,9,ITA,2)
      END
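The degrees-of-freedom step in the listing above computes AA = QV1Q·X1 + QQ·X2, BB = 2(QV1Q²·SI(1,1) + QQ²·SI(2,2) + 2·QV1Q·QQ·SI(1,2)), AL = AA²/BB, BE = BB/AA and DFF = 2·AL.  The same arithmetic can be sketched in Python; the function name and all input numbers are illustrative, not thesis output:

```python
# Satterthwaite-type approximation: match the estimated variance AA and its
# variance BB to a scaled chi-square with shape AL = AA**2/BB and scale
# BE = BB/AA, giving approximate degrees of freedom DFF = 2*AL.
def satterthwaite_df(q1, q2, x1, x2, si):
    aa = q1 * x1 + q2 * x2
    bb = 2 * (q1 * q1 * si[0][0] + q2 * q2 * si[1][1]
              + 2 * q1 * q2 * si[0][1])
    al = aa ** 2 / bb
    be = bb / aa
    return 2 * al, al, be

dff, al, be = satterthwaite_df(1.5, 3.0, 0.8, 1.2, [[0.5, 0.1], [0.1, 0.4]])
assert abs(dff - 2 * al) < 1e-12
assert abs(al * be - (1.5 * 0.8 + 3.0 * 1.2)) < 1e-12  # AL*BE = AA
```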
APPENDIX C
PROGRAMS FOR FORMULA 5.1.3 FOR APPROXIMATING
THE DEGREES OF FREEDOM
      DIMENSION TT(4),T(4)
      REAL*8 TIT/'       X'/
      REAL LA1(30),LA2(30),LA3(30),YA1(30),LA(44)
      DIMENSION X(30,30),VI(30,30),V1(30,30),QV(30,30),W(30,30)
      DIMENSION G(30,30),OP(30,30),A(30,30),POX(30,30)
      DIMENSION U1(30,30)
      DIMENSION Y(72),Z(144),RW(44),S(2,2),DELTA(2),ZA(72),Q1(72)
      DIMENSION SI(2,2)
      DIMENSION AV1(30,30),AQV(30,30),AS(2,2),ASI(2,2)
      DIMENSION U(1000)
      DIMENSION UU(200)
      INTEGER R,C,CA,NB(20),BL,PR(10)
      INTEGER B(30),WP(30),SP(30)
      INTEGER NR/30/,NC/30/,N3/30/
      CALL SCLPAR(PR,IT,ITM,ITN,X1,X2,CONST,R,X,NR,NC,BL,NB,C,CA)
      NU1C=CA-C
C**  COMPUTE U1
      DO 70 I=1,R
      DO 70 J=1,NU1C
   70 U1(J,I)=X(I,C+J)
      CALL INTIAL(1)
      CALL RSTART(503,542)
C     POX
      CALL SCATA(X,R,CA,G,W,NR,NC,NC,NC,NR,NR)
      CALL SCGI(G,CA,W,TR,Y,Z,CONST,NC,NC,NR,NR)
      CALL SCAB(X,G,R,CA,CA,W,WA,NR,NC,NC,NC,NR,NR,1,1)
      CALL SCABT(W,X,R,CA,R,W,Y,NR,NR,NR,NC,NR,NR,NR,1)
      CALL SCI(QV,R,NR,NR)
      CALL SCAMB(QV,W,R,R,POX,NR,NR,NR,NR,NR,NR)
C     END POX
      X11=X1
      X12=X2
      IDIST=PR(10)
C*** GENERATE RANDOM ERRORS
      CALL SCRNY(Y,Z,R,IDIST)
      CALL SCRNY(YA1,Z,NU1C,IDIST)
      CALL SCAB(YA1,U1,1,NU1C,R,YA1,Z,1,NR,NR,NR,1,NR,NR,1)
      DO 83 JJ=1,R
   83 Y(JJ)=Y(JJ)+YA1(JJ)
C*** END GENERATE RANDOM ERRORS
      IF(PR(9).EQ.1) GO TO 66
      DO 99 K=1,ITN
C*** COMPUTE MINQUE
      YY1=FQF(Y,V1,R,NR,NR)
      YY2=FQF(Y,QV,R,NR,NR)
      X11=YY1*SI(1,1)+YY2*SI(1,2)
      X12=YY1*SI(1,2)+YY2*SI(2,2)
      CALL SCSMI(X,VI,V1,QV,W,G,R,C,NR,NC,BL,NB,Z,ZA,S,SI,
     *           X11,X12,CONST)
C     V1::=QV*V1*QV , QV::=QV*QV
   99 CONTINUE
      IF(PR(9).EQ.2)X1=X11
      IF(PR(9).EQ.2)X2=X12
   66 CONTINUE
      IOPT=PR(7)
      NY1=0
      NY2=0
      DO 100 I=1,IT
      G1=0.
      G2=0.
      DO 85 J=1,ITM
C*** GENERATE RANDOM ERRORS
      CALL SCRNY(Y,Z,R,IDIST)
      CALL SCRNY(YA1,Z,NU1C,IDIST)
      CALL SCAB(YA1,U1,1,NU1C,R,YA1,Z,1,NR,NR,NR,1,NR,NR,1)
      DO 84 JJ=1,R
   84 Y(JJ)=Y(JJ)+YA1(JJ)
C*** END GENERATE RANDOM ERRORS
C*** MINQUE ESTIMATORS
      Y1=FQF(Y,V1,R,NR,NR)*SI(1,1)+FQF(Y,QV,R,NR,NR)*SI(1,2)
      Y2=FQF(Y,V1,R,NR,NR)*SI(1,2)+FQF(Y,QV,R,NR,NR)*SI(2,2)
      GO TO(150,151,152),IOPT
C*** RESTRICTED MINQUE ESTIMATORS
  151 CONTINUE
      IF(Y1.LT.0.)NY1=NY1+1
      IF(Y1.LT.0.)Y1=0.
      IF(Y2.LT.0.)NY2=NY2+1
      IF(Y2.LE.0.)Y2=0.01
      GO TO 150
  152 CONTINUE
C*** P.S.D. MINQUE ESTIMATORS
      Y2=FQF(Y,POX,R,NR,NR)/TR
      Y1=(FQF(Y,V1,R,NR,NR)-Y2*S(1,2))/S(1,1)
      IF(Y1.LT.0.)NY1=NY1+1
      IF(Y1.LT.0.)Y1=0.
  150 CONTINUE
      CALL SCVI(VI,R,Y1,Y2,BL,NB,NR,NR)
      CALL SCATB(X,VI,C,R,R,W,WA,NR,NC,NR,NR,NR,NR,1,1)
      CALL SCAB(W,X,C,R,C,G,WA,NR,NR,NR,NC,NC,NC,1,1)
      CALL SCGI(G,C,W,TR1,ZA,Z,CONST,NC,NC,NR,NR)
      CALL SCAB(VI,X,R,R,C,W,WA,NR,NR,NR,NC,NR,NR,1,1)
      CALL SCAB(W,G,R,C,C,W,Z,NR,NR,NC,NC,NR,NR,NR,1)
      CALL SCAB(Y,W,1,R,C,Q1,Z,1,NR,NR,NR,1,NR,1,1)
      G1=G1+Q1(1)
   85 G2=G2+Q1(2)
      G1=G1/FLOAT(ITM)
      G2=G2/FLOAT(ITM)
      U(I)=G1
C*** HISTOGRAM
      CALL HSTGM(G1,G2,G3,G4,1,2)
  100 CONTINUE
C*** GRAPHS
      CALL GETPCT(IT,9)
      WRITE(3,20)
   20 FORMAT(1H1,16X,' B0',24X,' B1',24X,' B2',24X,' B3')
      CALL PRNT(U,20)
      WRITE(3,21)
   21 FORMAT(1H1,20X,'GRAPH OF B0'//)
      CALL GRPH(1,1)
      WRITE(3,22)
   22 FORMAT(1H1,20X,'GRAPH OF B1'//)
      CALL GRPH(1,2)
      WRITE(3,24)
   24 FORMAT(1H1,' MOMENTS')
      CALL RAWMOM(U,IT)
      WRITE(3,1)NY1,NY2
    1 FORMAT(' NY1=',I6,' NY2=',I6)
C*** GOODNESS OF FIT TEST
      CALL SCCDF(U,1,4,IT,2)
      END
      DIMENSION TT(4),T(4)
      REAL*8 TIT/'       X'/
      REAL LA1(30),LA2(30),LA3(30),YA1(30),LA(44)
      DIMENSION X(30,30),VI(30,30),V1(30,30),QV(30,30),W(30,30)
      DIMENSION G(30,30),OP(30,30),A(30,30),POX(30,30)
      DIMENSION U1(30,30)
      DIMENSION Y(72),Z(144),RW(44),S(2,2),DELTA(2),ZA(72),Q1(72)
      DIMENSION SI(2,2)
      DIMENSION AV1(30,30),AQV(30,30),AS(2,2),ASI(2,2)
      DIMENSION U(1000)
      DIMENSION UU(200)
      INTEGER R,C,CA,NB(20),BL,PR(10)
      INTEGER B(30),WP(30),SP(30)
      INTEGER NR/30/,NC/30/,N3/30/
      CALL SCLPAR(PR,IT,ITM,ITN,X1,X2,CONST,R,X,NR,NC,BL,NB,C,CA)
      NU1C=CA-C
      DO 70 I=1,R
      DO 70 J=1,NU1C
   70 U1(J,I)=X(I,C+J)
      XX=X1
      XY=X2
      X11=X1
      X12=X2
      CALL INTIAL(1)
      IDIST=PR(10)
      CALL RSTART(563,542)
  105 READ(6,2,END=106)(LA(I),I=1,C)
      ITA=0
      NDF=0
      DO 100 I=1,IT
      X1=XX
      X2=XY
C*** GENERATE RANDOM ERRORS
      CALL SCRNY(Y,Z,R,IDIST)
      CALL SCRNY(YA1,Z,R,IDIST)
      CALL SCAB(YA1,U1,1,NU1C,R,YA1,Z,1,NR,NR,NR,1,NR,NR,1)
      DO 83 JJ=1,R
   83 Y(JJ)=Y(JJ)+YA1(JJ)
C*** END GENERATE RANDOM ERRORS
      CALL SCSMI(X,VI,V1,QV,W,G,R,C,NR,NC,BL,NB,Z,ZA,S,SI,X1,X2,CONST)
      IF(PR(9).EQ.1) GO TO 66
      DO 99 K1=1,ITN
C*** COMPUTE MINQUE
      YY1=FQF(Y,VI,R,NR,NR)
      YY2=FQF(Y,QV,R,NR,NR)
      X11=YY1*SI(1,1)+YY2*SI(1,2)
      X12=YY1*SI(1,2)+YY2*SI(2,2)
      WRITE(3,602)X11,X12
  602 FORMAT(2F10.6)
      CALL SCSMI(X,VI,V1,QV,W,G,R,C,NR,NC,BL,NB,Z,ZA,S,SI,
     *           X11,X12,CONST)
C     V1::=QV*V1*QV , QV::=QV*QV
   99 CONTINUE
      IF(PR(9).EQ.2)X1=X11
      IF(PR(9).EQ.2)X2=X12
   66 CONTINUE
    2 FORMAT(40F2.0)
      CALL SCVI(V1,R,X1,X2,BL,NB,NR,NR)
      CALL SCATB(X,VI,C,R,R,W,WA,NR,NC,NR,NR,NR,NR,1,1)
      CALL SCAB(W,X,C,R,C,G,WA,NR,NR,NR,NC,NC,NC,1,1)
      CALL SCGI(G,C,W,TR1,ZA,Z,CONST,NC,NC,NR,NR)
      CALL SCAB(VI,X,R,R,C,W,WA,NR,NR,NR,NC,NR,NR,1,1)
      CALL SCAB(W,G,R,C,C,W,Z,NR,NR,NC,NC,NR,NR,NR,1)
      CALL SCAB(W,LA,R,C,1,Q1,Z,NR,NR,NR,1,NR,1,1,1)
      CALL SCV1(A,R,BL,NB,NR,NR)
      QV1Q=FQF(Q1,A,R,NR,NR)
      CALL SCI(A,R,NR,NR)
      QQ=FQF(Q1,A,R,NR,NR)
      AA=QV1Q*X1+QQ*X2
      BB=2*(QV1Q*QV1Q*SI(1,1)+QQ*QQ*SI(2,2)+2*QV1Q*QQ*SI(1,2))
      AL=AA**2/BB
      BE=BB/AA
      DFF=2*AL
      WRITE(3,611)DFF,AL,BE
  611 FORMAT(' DF',F10.6,' A=',F10.6,' B=',F10.6)
      IDF=DFF + .05
      YY1=FQF(Y,VI,R,NR,NR)
      YY2=FQF(Y,QV,R,NR,NR)
      Y1=YY1*SI(1,1)+YY2*SI(1,2)
      Y2=YY1*SI(1,2)+YY2*SI(2,2)
      CALL SCVI(V1,R,Y1,Y2,BL,NB,NR,NR)
      CALL SCATB(X,VI,C,R,R,W,WA,NR,NC,NR,NR,NR,NR,1,1)
      CALL SCAB(W,X,C,R,C,G,WA,NR,NR,NR,NC,NC,NC,1,1)
      CALL SCGI(G,C,W,TR1,ZA,Z,CONST,NC,NC,NR,NR)
      ALPL=FQF(LA,G,C,NR,NR)
      WRITE(3,609)ALPL
  609 FORMAT(' Z-VALUE',F10.6)
      CALL SCAB(VI,X,R,R,C,W,WA,NR,NR,NR,NC,NR,NR,1,1)
      CALL SCAB(W,G,R,C,C,W,Z,NR,NR,NC,NC,NR,NR,NR,1)
      CALL SCAB(Y,W,1,R,C,Q1,Z,1,NR,NR,NR,1,NR,1,1)
      CALL SCAB(Q1,LA,1,C,1,BLA,Z,1,NR,NR,1,1,1,1,1)
      TV=BLA/SQRT(ALPL)
      PROB=PRGT(TV,IDF)
      WRITE(3,610)TV,PROB
  610 FORMAT(' T-VALUE',F10.6,' PROBABILITY',F10.6)
      IF(PROB.LT..025.OR.PROB.GT..975)NDF=NDF+1
      ITA=ITA+1
      U(ITA)=PROB
  100 CONTINUE
      GO TO 105
  106 CONTINUE
      WRITE(3,600)NDF
  600 FORMAT(' N R=',I5)
C*** GOODNESS OF FIT TEST
      CALL SCCDF(U,1,6,ITA,2)
      END
C
C     POOL AND RANK U-VALUES
C
      SUBROUTINE POOL(U,N2,NL,IL,NSAM,W)
      DIMENSION U(1000), W(1000)
      M3 = NL
      NL = NL + N2
      DO 2 I = 1,N2
    2 W(M3+I) = U(I)
      IF (NSAM.GT.IL) GO TO 6
      CALL RANK(W,NL)
      WRITE (3,1)
    1 FORMAT (//,10X,'POOLED AND RANKED U''S')
      DO 4 I = 1,NL
    4 WRITE (3,5) I,W(I)
    5 FORMAT (T2,I4,'-',F12.4)
    6 RETURN
      END
SUBRQUTINE RANK(U.N2)
C
C
SUBROUTINE RANK(U.N2)
C
DIMENSION U(IOOO)
IIII-N2-1
DQ 202 I-I.MI
K 1- I + I
DO 203 "'-lCl.N2
209 IF (U(I)-U(J)
210.210,220
210 GO TO 203
220 ZI-U(1)
U( I ) -U( J )
U( J ) - l l
203 CONTI"UE
202 CONTI"'ue
RI!TUR'"
END
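RANK is an exchange sort that orders the u-values ascending in place; an equivalent Python sketch (the function name is illustrative):

```python
def rank(u):
    # Exchange sort, as in subroutine RANK: for each position i, swap in any
    # later element that is smaller, leaving the list in ascending order.
    u = list(u)
    n = len(u)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if u[i] > u[j]:
                u[i], u[j] = u[j], u[i]
    return u

assert rank([0.9, 0.1, 0.5]) == [0.1, 0.5, 0.9]
```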
      SUBROUTINE SCGI(G,M,O,TR,D,WK,CONST,N1,N2,N3,N4)
      DIMENSION G(N1,N2),O(N3,N4),D(1),WK(1)
      CALL SCI(O,M,N3,N4)
      CALL LSVDF(G,N1,M,M,O,N3,M,D,WK,IER)
      CALL SCSVIN(D,M,WK,CONST)
      CALL SCDDQP(G,D,O,M,WK,M,M,N1,N2,N3,N4)
      TR=0.
      DO 33 I=1,M
      IF(D(I).NE.0.)TR=TR+1.
   33 CONTINUE
      RETURN
      END
      SUBROUTINE TESTS(U,N2)
C
C     SUBROUTINE TO COMPUTE TEST STATISTICS FOR UNIFORMITY:
C     U2, NS2, NS3, NS4
C
C     COMPUTE NEYMAN SMOOTH TESTS: NS2, NS3, NS4
C
      IMPLICIT REAL*8(A-H,O-Z)
      REAL*4 U(1000)
      PI1=0
      PI2=0
      PI3=0
      PI4=0
      DO 1 I = 1,N2
      PI1 = PI1 + (DSQRT(12.D0))*(U(I) - .5)
      PI2 = PI2 + (DSQRT(5.D0))*( 6*(U(I) - .5)**2 - .5)
      PI3 = PI3 + (DSQRT(7.D0))*(20*(U(I) - .5)**3 - 3*(U(I) - .5))
    1 PI4 = 210*(U(I) - .5)**4 - 45*(U(I) - .5)**2 + 9./8. + PI4
      AS2 = (PI1**2 + PI2**2)/N2
      AS3 = AS2 + PI3**2/N2
      AS4 = AS3 + PI4**2/N2
C
C     COMPUTE WATSON STATISTIC: U2
C
      WS = 1./(12.*N2)
      DO 7 I=1,N2
    7 WS = WS + (( 2.*I - 1 )/(2.*N2) - U(I) )**2
      USQ = 0.0
      DO 791 I = 1,N2
  791 USQ = USQ + U(I)
      U2 = WS - N2*((USQ/N2 - 0.5 )**2)
      U2 = (U2-0.1/N2+0.1/(N2*N2))*(1.0+0.8/N2)
      WRITE (3,5)
    5 FORMAT(//,7X,'WATSON U2',4X,'NEYMAN SMOOTH 2',5X,'NS3',7X,
     * 'NS4',12X,'NUMBER OF U''S')
      WRITE (3,6) U2,AS2,AS3,AS4,N2
    6 FORMAT (//,5X,F10.5,7X,F10.5,8X,F8.4,2X,F8.4,9X,I5,/)
      RETURN
      END
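The Watson U² portion of TESTS, including Stephens' small-sample modification applied on the last line, can be sketched in Python (an illustrative re-implementation following the subroutine's structure, not a transcription):

```python
def watson_u2(u):
    # Watson U^2 for a sample of (putatively) uniform(0,1) values:
    # start from the Cramer-von Mises sum, subtract the mean correction,
    # then apply Stephens' modification for finite n.
    u = sorted(u)
    n = len(u)
    w2 = 1.0 / (12 * n) + sum(((2 * i - 1) / (2.0 * n) - ui) ** 2
                              for i, ui in enumerate(u, start=1))
    ubar = sum(u) / n
    u2 = w2 - n * (ubar - 0.5) ** 2
    return (u2 - 0.1 / n + 0.1 / n ** 2) * (1.0 + 0.8 / n)

# For the single value 0.5 the sum vanishes, W^2 = 1/12, the mean
# correction is zero, and the modified statistic is (1/12)*1.8 = 0.15.
assert abs(watson_u2([0.5]) - 0.15) < 1e-9
```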
      SUBROUTINE TRANS(X,N,NP,U,MOD)
      DIMENSION X(100),U(100),Z(100)
      GO TO (1,2,3,4,5,6,7), MOD
C
C     SCALE PARAMETER EXPONENTIAL CLASS, MOD = 1
C
    1 SUM1 = 0.0
      SUM2 = X(1)
      DO 101 I = 2,N
      SUM1 = SUM1 + X(I-1)
      SUM2 = SUM2 + X(I)
  101 U(I-1) = (SUM1/SUM2)**(I-1)
      GO TO 200
C
C     ONE PARAMETER (LOCATION) EXPONENTIAL CLASS, MOD = 2
C
    2 CALL SOR(X,N,T1,Z)
      N2 = N-1
      DO 161 I = 1,N2
  161 U(I) = 1 - EXP(T1-Z(I))
      GO TO 200
C
C     TWO-PARAMETER EXPONENTIAL CLASS, MOD = 3
C
    3 CALL SOR(X,N,T1,Z)
      N1 = N-1
      SUM1 = 0.0
      SUM2 = Z(1) - T1
      DO 171 I = 2,N1
      SUM1 = SUM1 + Z(I-1) - T1
      SUM2 = SUM2 + Z(I) - T1
  171 U(I-1) = (SUM1/SUM2)**(I-1)
      GO TO 200
C
C     TWO-PARAMETER NORMAL CLASS, MOD = 4
C
    4 T1 = X(1) + X(2)
      DO 201 I = 3,N
      T1 = T1 + X(I)
      T2 = 0.0
      DO 202 J = 1,I
  202 T2 = T2 + (X(J) - T1/I)**2
      A=(SQRT(I-2.))*(X(I)-T1/I)/(SQRT(ABS((I-1.)*T2/I-(X(I)-T1/I)**2)))
  201 U(I-2) = PRGT(A,I-2)
      GO TO 200
C
C     TWO-PARAMETER LOGNORMAL CLASS, MOD = 5
C
    5 DO 300 I = 1,N
  300 X(I) = ALOG(X(I))
      GO TO 4
C
C     TWO-PARAMETER UNIFORM CLASS, MOD = 6
C
    6 T1 = X(1)
      T2 = X(1)
      DO 601 I = 2,N
      IF (T1.LE.X(I)) GO TO 601
      T1 = X(I)
  601 CONTINUE
      DO 602 I = 2,N
      IF (X(I).LE.T2) GO TO 602
      T2 = X(I)
  602 CONTINUE
      DO 603 I = 1,N
      IF (X(I).EQ.T1) GO TO 604
      GO TO 603
  604 I10 = I
  603 CONTINUE
      DO 605 I = 1,N
      IF (X(I).EQ.T2) GO TO 606
      GO TO 605
  606 I20 = I
  605 CONTINUE
      IF (I20-I10) 20,20,21
   20 IMIN = I20
      IMAX = I10
      GO TO 22
   21 IMIN = I10
      IMAX = I20
   22 IF (IMIN.EQ.1) GO TO 10
      IMIN1 = IMIN - 1
      DO 11 I = 1,IMIN1
   11 Z(I) = X(I)
   10 IMIN2 = IMIN + 1
      IF (IMAX - IMIN) 13,13,14
   14 IMAX1 = IMAX - 1
      DO 15 I = IMIN2,IMAX1
   15 Z(I-1) = X(I)
   13 IMAX2 = IMAX + 1
      IF (N - IMAX) 16,16,17
   17 IMAX2 = IMAX + 1
      DO 18 I = IMAX2,N
   18 Z(I-2) = X(I)
   16 N2 = N - 2
      DO 19 I = 1,N2
   19 U(I) = (Z(I) - T1)/(T2 - T1)
      GO TO 200
C
C     TWO-PARAMETER PARETO CLASS, MOD = 7
C
    7 DO 613 I = 1,N
  613 X(I) = ALOG(X(I))
      GO TO 3
  200 RETURN
      END
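The MOD = 1 branch of TRANS is the conditional probability integral transform for the scale-parameter exponential class, u_{i-1} = (S_{i-1}/S_i)^{i-1}, where S_i is the running sum of the first i observations.  A Python sketch (the function name is illustrative):

```python
def cpit_scale_exponential(x):
    # Conditional probability integral transform, scale-parameter
    # exponential class: u_{i-1} = (S_{i-1}/S_i)**(i-1), with S_i the
    # running sum of the first i observations.  Under the null hypothesis
    # the resulting u's are iid uniform(0,1).
    u = []
    s_prev, s = 0.0, x[0]
    for i in range(2, len(x) + 1):
        s_prev += x[i - 2]
        s += x[i - 1]
        u.append((s_prev / s) ** (i - 1))
    return u

u = cpit_scale_exponential([1.0, 1.0, 1.0])
assert abs(u[0] - 0.5) < 1e-12            # (1/2)**1
assert abs(u[1] - (2.0 / 3.0) ** 2) < 1e-12
```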
      REAL FUNCTION PRGT*4(TVALUE,DF)
C
C     T-DISTRIBUTION FUNCTION SUBROUTINE
C
      IMPLICIT REAL*8(A-H,O-Z)
      REAL*4 TVALUE
      INTEGER DF
C
C     FOR DF<=20 COMPUTE EXACT PROB>|T|
C     REF: JOURNAL OF QUALITY TECHNOLOGY, VOL 4, NO. 4, OCT. 1972, P 196
C
      TSQ=TVALUE*TVALUE
      PRGT=1.D0
      IF(DF.LT.1)RETURN
      IF(TSQ.LT.1.D-10)GO TO 35
      V=DF
      IF(DF.GT.20)GO TO 50
      IF(TSQ.GT.1.D8)GO TO 30
C
      THETA=DATAN(DSQRT(TSQ/V))
      M=MOD(DF,2)
      T=DSIN(THETA)
      C=DCOS(THETA)
      IF(M.EQ.0)GO TO 10
      PRGT=1.D0-2.D0*THETA/3.141592653589793D0
      IF(DF.EQ.1)GO TO 25
      T=2.D0*T*C/3.141592653589793D0
C
   10 PRGT=PRGT-T
      NT=(DF-M-2)/2
      IF(NT.LT.1)GO TO 25
      C2=C*C
      D=M
      DO 15 I=1,NT
      D=D+2.D0
      T=T*C2*(D-1.D0)/D
   15 PRGT=PRGT-T
C
   25 IF(PRGT.GT.0.0)GO TO 35
   30 PRGT=0.0
   35 PRGT=PRGT/2.0
      IF(TVALUE.GE.0.0)PRGT=1.0-PRGT
      RETURN
C
C     FOR DF>20 USE FISHER'S EXPANSION (FIRST 3 TERMS)
C     ABSOLUTE ERROR < .00002
C     REF: JOHNSON & KOTZ P. 102
C     'CONTINUOUS UNIVARIATE DIST-2'
C     HOUGHTON MIFFLIN CO. 1970
C
   50 IF(TSQ.GT.36.D0)GO TO 30
      T=DSQRT(TSQ)
      X=T/(2.D0*V)*(TSQ+1.D0-(3.D0+TSQ*(5.D0+TSQ*(7.D0-3.D0*TSQ)))/
     * (24.D0*V)-(15.D0+TSQ*(3.D0-TSQ*(6.D0+TSQ*(14.D0-TSQ*(11.D0-TSQ
     * )))))/(96.D0*V*V))
      PRGT=DERFC(T/1.414213562373)+DEXP(-.9189385332-TSQ/2.D0)*X
      GO TO 25
C
      END
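The small-df branch of PRGT evaluates the exact finite trigonometric series for the Student-t tail probability.  The same series can be re-derived in Python; this is an independent sketch of the method, not a transcription of the FORTRAN:

```python
import math

def t_upper_tail(t, df):
    # P(T > t) for Student's t with integer df, via the exact finite
    # trigonometric series (PRGT uses this form for df <= 20).
    x = abs(t)
    theta = math.atan(x / math.sqrt(df))
    c, s = math.cos(theta), math.sin(theta)
    if df % 2 == 1:
        # odd df: P(|T| <= x) = (2/pi)*(theta + sin(theta)*sum of cos powers)
        term, total = c, 0.0
        for k in range(1, (df - 1) // 2 + 1):
            total += term
            term *= c * c * (2 * k) / (2 * k + 1)
        a = 2.0 / math.pi * (theta + s * total)
    else:
        # even df: P(|T| <= x) = sin(theta)*sum of cos powers
        term, total = 1.0, 0.0
        for k in range((df - 2) // 2 + 1):
            total += term
            term *= c * c * (2 * k + 1) / (2 * k + 2)
        a = s * total
    p = (1.0 - a) / 2.0
    return p if t >= 0 else 1.0 - p

assert abs(t_upper_tail(1.0, 1) - 0.25) < 1e-12   # Cauchy: P(T > 1) = 1/4
assert abs(t_upper_tail(0.0, 5) - 0.5) < 1e-12
```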
C
C     CPIT MODEL TESTING PROGRAM
C
      DIMENSION X(100),U(1000),W(1000),Z(100)
C
C     CONTROL CARDS
C
      NL = 0
      READ (6,100) NSAM,MOD
  100 FORMAT (3I6)
      WRITE (3,1) NSAM,MOD
    1 FORMAT (//,5X,'CPIT MODEL ANALYSIS PROGRAM. ON THIS RUN WE ANALYZE
     *',2X,I3,2X,'SAMPLES. THE MODEL ASSUMED IS MOD. ',I3,' (SEE MODEL C
     *ODE IN PROGRAM.)')
C
C     MOD IS AN INDEX FOR THE NULL HYPOTHESIS MODEL TO BE TESTED:
C       1 = SCALE PARAMETER EXPONENTIAL
C       2 = LOCATION PARAMETER EXPONENTIAL
C       3 = TWO-PARAMETER EXPONENTIAL
C       4 = TWO-PARAMETER NORMAL
C       5 = TWO-PARAMETER LOGNORMAL
C       6 = TWO-PARAMETER UNIFORM
C       7 = TWO-PARAMETER PARETO
C           ETC. (ADD OTHER FAMILIES)
C     N    = NO. OF OBSERVATIONS
C     NP   = NO. OF PARAMETERS
C     NSAM = NUMBER OF SAMPLES, MUST BE SPECIFIED IN A PROGRAM CARD
C
      DO 833 IL = 1,NSAM
C
C     DATA INPUT
C
  102 FORMAT(F10.5)
C
C     NULL CLASS TRANSFORMATION
C
      WRITE (3,821)
  821 FORMAT(1H0,//,T7,'ORIGINAL DATA')
      DO 822 I = 1,N
  822 WRITE (3,823) I,X(I)
  823 FORMAT (T2,I4,' - ',F12.4)
      N2 = N - NP
  721 CONTINUE
      CALL TRANS(X,N,NP,U,MOD)
      CALL RANK (U,N2)
      CALL TESTS (U,N2)
      CALL POOL (U,N2,NL,IL,NSAM,W)
  833 CONTINUE
      CALL TESTS (W,NL)
      CALL EXIT
      STOP
      END
      SUBROUTINE SCX(X,B,W,S,L,N1,N2,BL,NB,C,CA)
      DIMENSION X(N1,N2)
      INTEGER B(1),W(1),NB(1),BL,C,CA,S(1)
      INTEGER BMAX,WMAX,SMAX,WA,BA
      BMAX=B(1)
      WMAX=W(1)
      SMAX=S(1)
      DO 1 I=2,L
      IF(BMAX.LT.B(I))BMAX=B(I)
      IF(WMAX.LT.W(I))WMAX=W(I)
      IF(SMAX.LT.S(I))SMAX=S(I)
    1 CONTINUE
      BL=0
      WA=W(1)
      BA=B(1)
      N=1
      DO 2 I=2,L
      IF(BA.EQ.B(I) .AND. WA.EQ.W(I))GO TO 3
      BL=BL+1
      NB(BL)=N
      WA=W(I)
      BA=B(I)
      N=1
      GO TO 2
    3 N=N+1
    2 CONTINUE
      BL=BL+1
      NB(BL)=N
      M1=1
      M2=M1+BMAX
      M3=M2+WMAX
      M4=M3+SMAX
      M5=M4+WMAX*SMAX
      C=M5
      CA=C+WMAX*BMAX
      DO 4 I=1,L
    4 X(I,1)=1.
      DO 5 I=1,L
      DO 6 J=2,CA
    6 X(I,J)=0.
      X(I,M1+B(I))=1.
      X(I,M2+W(I))=1.
      X(I,M3+S(I))=1.
      X(I,M4+(W(I)-1)*SMAX+S(I))=1.
      X(I,M5+(B(I)-1)*WMAX+W(I))=1.
    5 CONTINUE
      RETURN
      END
      SUBROUTINE SOR(X,N,T1,Z)
C
      DIMENSION X(100),Z(100)
      T1 = X(1)
      DO 111 I=2,N
      IF (T1.LE.X(I)) GO TO 111
      T1 = X(I)
  111 CONTINUE
      N1 = N - 1
      DO 121 I = 1,N
      IF (X(I).EQ.T1) GO TO 131
      GO TO 121
  131 IMIN = I
  121 CONTINUE
      IF (IMIN - 1) 1,1,2
    1 DO 3 I=2,N
    3 Z(I-1) = X(I)
      GO TO 4
    2 IF (N-IMIN) 5,5,6
    5 DO 7 I = 1,N1
    7 Z(I) = X(I)
      GO TO 4
    6 IMIN1 = IMIN-1
      DO 8 I = 1,IMIN1
    8 Z(I) = X(I)
      DO 9 I = IMIN,N1
    9 Z(I) = X(I+1)
    4 RETURN
      END
      SUBROUTINE SCLBM(R,C,X,XA,IOP,N1,N2,N3,N4)
      DIMENSION X(N1,N2),XA(N3,N4)
      INTEGER R,C,IR(100),IC(100)
      READ(7,1,END=50)N,M
      IF(IOP.EQ.1)GO TO 49
      READ(7,1,END=50)(IR(I),I=1,N)
      READ(7,1,END=50)(IC(I),I=1,M)
   49 CALL SCBMAT(X,XA,N,M,IOP,IR,IC,N1,N2,N3,N4)
      R=N
      C=M
   50 RETURN
    1 FORMAT(40I2)
      END
      SUBROUTINE SPRINT(T,A,L,M,PR,N1,N2)
      DIMENSION A(N1,N2)
      INTEGER PR
      IF(PR.NE.1)RETURN
      WRITE(3,1)T
    1 FORMAT(1X,A8)
      K1=-8
      K2=0
    3 K1=K1+9
      K2=K2+9
      IF(K2.GT.M)K2=M
      K=9
      IF(K2.EQ.M)K=K2-K1+1
      WRITE(3,2)(I,I=K1,K2)
    2 FORMAT(7X,9(11H      COL. ,I3))
      IF(K2.LT.M)GO TO 3
      DO 4 I=1,L
    4 WRITE(3,10)I,(A(I,J),J=1,M)
   10 FORMAT(1X,'ROW',I3,9E14.6,100(/,7X,9E14.6))
      RETURN
      END
      SUBROUTINE SCRNY(Y,Z,R,IDIST)
      DIMENSION Y(1),Z(1)
      INTEGER R
      GO TO (103,104,105),IDIST
  103 DO 102 L=1,R
  102 Y(L)=RNOR(0)
      GO TO 106
  104 DO 107 L=1,R
  107 Y(L)=VNI(0)
      GO TO 106
  105 DO 108 L=1,R
      Y(L)=RNOR(0)
      Z(L)=VNI(0)
      IF(Z(L).EQ.0.)Y(L)=0.
      IF(Z(L).NE.0.)Y(L)=Y(L)/Z(L)
  108 CONTINUE
  106 RETURN
      END
      FUNCTION FQF(Y,A,L,N1,N2)
      DIMENSION A(N1,N2),Y(1)
      FQF=0.
      DO 1 I=1,L
      DO 1 J=1,L
    1 FQF=FQF+Y(I)*A(I,J)*Y(J)
      RETURN
      END
      SUBROUTINE SCQV(V,X,G,L,M,QV,W,N1,N2,N3,N4,N5,N6,N7,N8,N9,N10)
      DIMENSION V(N1,N2),X(N3,N4),G(N5,N6),QV(N7,N8),W(N9,N10)
      CALL SCAB(V,X,L,L,M,QV,W,N1,N2,N3,N4,N7,N8,1,1)
      CALL SCAB(QV,G,L,M,M,QV,W,N7,N8,N5,N6,N7,N8,N9,1)
      CALL SCABT(QV,X,L,M,L,QV,W,N7,N8,N3,N4,N7,N8,N9,1)
      CALL SCAB(QV,V,L,L,L,QV,W,N7,N8,N1,N2,N7,N8,N9,1)
      CALL SCAMB(V,QV,L,L,QV,N1,N2,N7,N8,N7,N8)
      RETURN
      END
      SUBROUTINE SCV1(V1,L,N,NB,N1,N2)
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION V1(N1,N2),NB(1)
      K1=1
      K2=NB(1)
      I1=1
      DO 1 I=1,L
      DO 2 J=1,L
      IF(J.LT.K1.OR.J.GT.K2)GO TO 3
      V1(I,J)=1.
      GO TO 2
    3 V1(I,J)=0.
    2 CONTINUE
      IF(I.LT.K2.OR.I.EQ.L)GO TO 1
      K1=K1+NB(I1)
      I1=I1+1
      K2=K2+NB(I1)
    1 CONTINUE
      RETURN
      END
      SUBROUTINE SVECIN(D,N)
      REAL*8 TIT/'       D'/
      DIMENSION D(1)
      DO 100 I=1,N
      IF(D(I)*D(I).LT..01)D(I)=0.
  100 IF(D(I).NE.0.)D(I)=1/D(I)
      RETURN
      END
      FUNCTION FCTRCE(A,L,N1,N2)
C     TRACE OF THE L BY L MATRIX A
      DIMENSION A(N1,N2)
      FCTRCE=0.
      DO 1 I=1,L
    1 FCTRCE=FCTRCE+A(I,I)
      RETURN
      END
      SUBROUTINE SCAAT(A,M,N,C,W,N1,N2,N3,N4,N5,N6)
C     FORM C = A*A' FOR THE M BY N MATRIX A.  SCRATCH W:
C     N6.GT.1 FULL-MATRIX SCRATCH, N6=1 AND N5.GT.1 ROW SCRATCH,
C     N5=N6=1 NONE (C MUST NOT OVERLAP A)
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),C(N3,N4),W(N5,N6)
      IF(N6.EQ.1)GO TO 21
      DO 19 I=1,M
      DO 19 J=1,M
      W(I,J)=0.
      DO 19 K=1,N
   19 W(I,J)=W(I,J)+A(I,K)*A(J,K)
      DO 20 I=1,M
      DO 20 J=1,M
   20 C(I,J)=W(I,J)
      RETURN
   21 IF(N5.EQ.1)GO TO 24
      DO 22 I=1,M
      DO 23 J=1,M
      W(J,1)=0.
      DO 23 K=1,N
   23 W(J,1)=W(J,1)+A(I,K)*A(J,K)
      DO 22 J=1,M
   22 C(I,J)=W(J,1)
      RETURN
   24 DO 25 I=1,M
      DO 25 J=1,M
      C(I,J)=0.
      DO 25 K=1,N
   25 C(I,J)=C(I,J)+A(I,K)*A(J,K)
      RETURN
      END
      SUBROUTINE SCATB(A,B,L,M,N,C,W,N1,N2,N3,N4,N5,N6,N7,N8)
C     FORM C = A'*B; A IS M BY L, B IS M BY N.  SCRATCH
C     CONVENTIONS AS IN SCAAT, GOVERNED BY N7 AND N8
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),B(N3,N4),C(N5,N6),W(N7,N8)
      IF(N8.EQ.1)GO TO 5
      DO 3 I=1,L
      DO 3 J=1,N
      W(I,J)=0.
      DO 3 K=1,M
    3 W(I,J)=W(I,J)+A(K,I)*B(K,J)
      DO 4 I=1,L
      DO 4 J=1,N
    4 C(I,J)=W(I,J)
      RETURN
    5 IF(N7.EQ.1) GO TO 9
      DO 7 I=1,L
      DO 6 J=1,N
      W(J,1)=0.
      DO 6 K=1,M
    6 W(J,1)=W(J,1)+A(K,I)*B(K,J)
      DO 7 J=1,N
    7 C(I,J)=W(J,1)
      RETURN
    9 DO 10 I=1,L
      DO 10 J=1,N
      C(I,J)=0.
      DO 10 K=1,M
   10 C(I,J)=C(I,J)+A(K,I)*B(K,J)
      RETURN
      END
      SUBROUTINE SCATA(A,M,N,C,W,N1,N2,N3,N4,N5,N6)
C     FORM C = A'*A FOR THE M BY N MATRIX A.  SCRATCH
C     CONVENTIONS AS IN SCAAT, GOVERNED BY N5 AND N6
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),C(N3,N4),W(N5,N6)
      IF(N6.EQ.1)GO TO 14
      DO 12 I=1,N
      DO 12 J=1,N
      W(I,J)=0.
      DO 12 K=1,M
   12 W(I,J)=W(I,J)+A(K,I)*A(K,J)
      DO 13 I=1,N
      DO 13 J=1,N
   13 C(I,J)=W(I,J)
      RETURN
   14 IF(N5.EQ.1)GO TO 17
      DO 15 I=1,N
      DO 16 J=1,N
      W(J,1)=0.
      DO 16 K=1,M
   16 W(J,1)=W(J,1)+A(K,I)*A(K,J)
      DO 15 J=1,N
   15 C(I,J)=W(J,1)
      RETURN
   17 DO 18 I=1,N
      DO 18 J=1,N
      C(I,J)=0.
      DO 18 K=1,M
   18 C(I,J)=C(I,J)+A(K,I)*A(K,J)
      RETURN
      END
      SUBROUTINE SCAB(A,B,L,M,N,C,W,N1,N2,N3,N4,N5,N6,N7,N8)
C     FORM C = A*B; A IS L BY M, B IS M BY N.  SCRATCH W:
C     N8.GT.1 FULL MATRIX, N8=1 AND N7.GT.1 ROW VECTOR (ALLOWS
C     C TO OVERLAP A), N7=N8=1 NONE (C MUST NOT OVERLAP A OR B)
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),B(N3,N4),C(N5,N6),W(N7,N8)
      IF(N8.EQ.1) GO TO 5
      DO 3 I=1,L
      DO 3 J=1,N
      W(I,J)=0.
      DO 3 K=1,M
    3 W(I,J)=W(I,J)+A(I,K)*B(K,J)
      DO 4 I=1,L
      DO 4 J=1,N
    4 C(I,J)=W(I,J)
      RETURN
    5 IF(N7.EQ.1) GO TO 9
      DO 7 I=1,L
      DO 6 J=1,N
      W(J,1)=0.
      DO 6 K=1,M
    6 W(J,1)=W(J,1)+A(I,K)*B(K,J)
      DO 7 J=1,N
    7 C(I,J)=W(J,1)
      RETURN
    9 DO 10 I=1,L
      DO 10 J=1,N
      C(I,J)=0.
      DO 10 K=1,M
   10 C(I,J)=C(I,J)+A(I,K)*B(K,J)
      RETURN
      END
      SUBROUTINE SCAPB(A,B,L,M,C,N1,N2,N3,N4,N5,N6)
C     FORM C = A + B FOR L BY M MATRICES
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),B(N3,N4),C(N5,N6)
      DO 32 I=1,L
      DO 32 J=1,M
   32 C(I,J)=A(I,J)+B(I,J)
      RETURN
      END
      SUBROUTINE SCVI(VI,L,A,B,N,NB,N1,N2)
C     INVERSE OF THE BLOCK-DIAGONAL COVARIANCE MATRIX
C     V = B*I + A*J (J = BLOCK OF ONES), BLOCK SIZES IN NB
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION VI(N1,N2),NB(1)
      K1=1
      K2=NB(1)
      II=1
      DO 1 I=1,L
      DO 2 J=1,L
      IF(J.LT.K1.OR.J.GT.K2)GO TO 3
      C=-A/(B*(NB(II)*A+B))
      IF(I.EQ.J)C=C+1./B
      VI(I,J)=C
      GO TO 2
    3 VI(I,J)=0.
    2 CONTINUE
      IF(I.LT.K2.OR.I.EQ.L)GO TO 1
      K1=K1+NB(II)
      II=II+1
      K2=K2+NB(II)
    1 CONTINUE
      RETURN
      END
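The constant C above is the closed-form inverse of a compound-symmetry block: for V = B*I + A*J with J an n-by-n block of ones, the inverse is (1/B)I - (A/(B(nA+B)))J. A quick numerical check in pure Python (the function names and the small test matrix are illustrative, not part of the program):

```python
def block_inverse(n, a, b):
    """Inverse of V = b*I + a*J (J = n x n block of ones), using the
    same closed form as SCVI: off-diagonal c = -a/(b*(n*a+b)),
    diagonal c + 1/b."""
    c = -a / (b * (n * a + b))
    return [[c + (1.0 / b if i == j else 0.0) for j in range(n)]
            for i in range(n)]

def matmul(x, y):
    """Naive square-matrix product, enough for the check below."""
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, a, b = 4, 0.7, 1.3
V = [[a + (b if i == j else 0.0) for j in range(n)] for i in range(n)]
P = matmul(V, block_inverse(n, a, b))  # should be the 4 x 4 identity
```

The identity follows because (bI + aJ)((1/b)I - (a/(b(na+b)))J) cancels the J terms exactly, using J*J = nJ.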
      SUBROUTINE SCAMB(A,B,L,M,C,N1,N2,N3,N4,N5,N6)
C     FORM C = A - B FOR L BY M MATRICES
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),B(N3,N4),C(N5,N6)
      DO 31 I=1,L
      DO 31 J=1,M
   31 C(I,J)=A(I,J)-B(I,J)
      RETURN
      END
      SUBROUTINE SCI(A,L,N1,N2)
C     SET A TO THE L BY L IDENTITY
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2)
      DO 34 I=1,L
      DO 34 J=1,L
      A(I,J)=0.
      IF(I.EQ.J)A(I,J)=1.
   34 CONTINUE
      RETURN
      END
      SUBROUTINE SCGDQP(G,D,Q,L,WK,M,N,N1,N2,N3,N4)
C     OVERWRITE G WITH G*DIAG(D)*Q
      DIMENSION G(N1,N2),Q(N3,N4),D(1),WK(1)
      DO 1 I=1,L
      DO 2 J=1,N
      WK(J)=0.
      DO 2 K=1,M
    2 WK(J)=WK(J)+G(I,K)*D(K)*Q(K,J)
      DO 1 J=1,N
    1 G(I,J)=WK(J)
      RETURN
      END
      SUBROUTINE SCQDQP(Q,D,QP,L,W,M,N,N1,N2,N3,N4)
C     OVERWRITE QP WITH DIAG(D)*Q*QP
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION Q(N1,N2),QP(N3,N4),D(1),W(1)
      DO 2 J=1,N
      DO 3 I=1,L
      W(I)=0.
      DO 3 K=1,M
    3 W(I)=W(I)+Q(I,K)*QP(K,J)*D(I)
      DO 2 I=1,L
    2 QP(I,J)=W(I)
      RETURN
      END
      SUBROUTINE SCABT(A,B,L,M,N,C,W,N1,N2,N3,N4,N5,N6,N7,N8)
C     FORM C = A*B'; A IS L BY M, B IS N BY M.  SCRATCH
C     CONVENTIONS AS IN SCAB, GOVERNED BY N7 AND N8
      IMPLICIT REAL*8 (A-H,O-Z)
      DIMENSION A(N1,N2),B(N3,N4),C(N5,N6),W(N7,N8)
      IF(N8.EQ.1)GO TO 5
      DO 3 I=1,L
      DO 3 J=1,N
      W(I,J)=0.
      DO 3 K=1,M
    3 W(I,J)=W(I,J)+A(I,K)*B(J,K)
      DO 4 I=1,L
      DO 4 J=1,N
    4 C(I,J)=W(I,J)
      RETURN
    5 IF(N7.EQ.1) GO TO 9
      DO 7 I=1,L
      DO 6 J=1,N
      W(J,1)=0.
      DO 6 K=1,M
    6 W(J,1)=W(J,1)+A(I,K)*B(J,K)
      DO 7 J=1,N
    7 C(I,J)=W(J,1)
      RETURN
    9 DO 10 I=1,L
      DO 10 J=1,N
      C(I,J)=0.
      DO 10 K=1,M
   10 C(I,J)=C(I,J)+A(I,K)*B(J,K)
      RETURN
      END
      SUBROUTINE SCSVIN(D,N,W,CONST)
C     THRESHOLDED PSEUDO-INVERSE OF THE SINGULAR VALUES IN D.
C     THE SMALLEST VALUES ARE ZEROED WHILE THE CUMULATIVE SUM OF
C     THEIR SQUARES IS BELOW CONST; THE REST ARE RECIPROCATED.
C     W (LENGTH 2N) HOLDS THE SORTED VALUES AND THEIR INDICES
      DIMENSION D(1),W(1)
      DO 1 I=1,N
      W(I)=D(I)
    1 W(N+I)=I
      SUM=0.
      NM1=N-1
      DO 2 I=1,NM1
      IP1=I+1
      DO 3 J=IP1,N
      IF(W(I).LT.W(J))GO TO 3
      WA=W(I)
      W(I)=W(J)
      W(J)=WA
      WA=W(N+I)
      W(N+I)=W(N+J)
      W(N+J)=WA
    3 CONTINUE
      SUM=SUM+W(I)*W(I)
      IF(SUM.LT.CONST)D(W(N+I))=0.
      IF(SUM.GT.CONST)GO TO 4
    2 CONTINUE
    4 DO 5 I=1,N
      IF(D(I).NE.0.)D(I)=1./D(I)
    5 CONTINUE
      RETURN
      END
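SCSVIN's zeroing rule, restated in Python for clarity (the function name and list-based interface are illustrative): sort the values, zero the smallest while the running sum of their squares stays below CONST, then reciprocate the survivors.

```python
def thresholded_reciprocal(d, const):
    """Mimic SCSVIN: zero the smallest entries of d while the running
    sum of their squares stays below const, then invert the rest."""
    order = sorted(range(len(d)), key=lambda i: d[i])  # ascending
    out = list(d)
    total = 0.0
    for i in order:
        total += d[i] * d[i]
        if total < const:
            out[i] = 0.0       # cumulative energy still below threshold
        if total > const:
            break              # remaining (larger) values are kept
    return [1.0 / v if v != 0.0 else 0.0 for v in out]

vals = thresholded_reciprocal([3.0, 0.1, 0.2], 0.06)
# the two smallest values (0.1 and 0.2) are zeroed, 3.0 is inverted
```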
      SUBROUTINE SCTRS(QV,VI,L,S,W1,W2,N1,N2,N3,N4,N5,N6)
C     TRACES FOR THE MINQUE EQUATIONS.  S(1,1)=TR(QV*VI*QV*VI),
C     S(1,2)=S(2,1)=TR(QV*VI*QV), S(2,2)=TR(QV*QV)
      DIMENSION QV(N1,N2),VI(N3,N4),S(2,2),W1(N5,N6),W2(1)
      CALL SCAB(QV,QV,L,L,L,W1,W2,N1,N2,N1,N2,N5,N6,1,1)
      S(2,2)=FCTRCE(W1,L,N5,N6)
      CALL SCAB(QV,VI,L,L,L,W1,W2,N1,N2,N3,N4,N5,N6,1,1)
      CALL SCAB(W1,QV,L,L,L,W1,W2,N5,N6,N1,N2,N5,N6,L,1)
      S(1,2)=FCTRCE(W1,L,N5,N6)
      S(2,1)=S(1,2)
      CALL SCAB(W1,VI,L,L,L,W1,W2,N5,N6,N3,N4,N5,N6,L,1)
      S(1,1)=FCTRCE(W1,L,N5,N6)
      RETURN
      END
      DOUBLE PRECISION FUNCTION CDGAMA(X,A,B)
C     GAMMA CDF P(T .LE. X) FOR SHAPE A AND SCALE B, BY CONTINUED
C     FRACTIONS (LOWER TAIL FOR X .LT. A*B, UPPER TAIL OTHERWISE).
C     RETURNS -1. FOR INVALID PARAMETERS, -2. IF THE FRACTION
C     FAILS TO CONVERGE IN 50 STEPS
      IMPLICIT REAL*8(A-H,O-Z)
      CDGAMA=-1.D0
      IF(A.GT.0.D0 .AND. B.GT.0.D0)GO TO 10
      RETURN
   10 IF(X.GT.0.D0) GO TO 20
      CDGAMA=0.D0
      RETURN
   20 Z=X/B
      IF(X.LT.A*B)GO TO 200
C     UPPER-TAIL CONTINUED FRACTION
      A1=1.D0
      A2=1.D0+Z-A
      B1=1.D0
      B2=Z
      N=1
      F1=A2/B2
      AM=1.D0-A
      AN=0.D0
  100 N=N+1
      IF(N.GT.51)GO TO 400
      IF(MOD(N,2).EQ.1)GO TO 110
      AN=AN+1.D0
      A3=A2+AN*A1
      B3=B2+AN*B1
      GO TO 120
  110 AM=AM+1.D0
      A3=Z*A2+AM*A1
      B3=Z*B2+AM*B1
  120 F2=A3/B3
      IF(DABS(F2-F1).LT.1.D-10)GO TO 300
      F1=F2
      A1=A2
      A2=A3
      B1=B2
      B2=B3
      GO TO 100
C     LOWER-TAIL CONTINUED FRACTION
  200 A1=1.D0
      A2=A+1.D0-Z
      B1=1.D0
      B2=A+1.D0
      N=1
      F1=A2/B2
      AM=-A*Z
      AN=0.D0
      BC=A+1.D0
  210 N=N+1
      IF(N.GT.51)GO TO 400
      IF(MOD(N,2).EQ.1)GO TO 220
      AN=AN+Z
      BC=BC+1.D0
      A3=BC*A2+AN*A1
      B3=BC*B2+AN*B1
      GO TO 230
  220 AM=AM-Z
      BC=BC+1.D0
      A3=BC*A2+AM*A1
      B3=BC*B2+AM*B1
  230 F2=A3/B3
      IF(DABS(F2-F1).LT.1.D-10) GO TO 300
      F1=F2
      A1=A2
      A2=A3
      B1=B2
      B2=B3
      GO TO 210
  300 COEF=A*DLOG(Z)-Z-DLGAMA(A)
      COEF=DEXP(COEF)
      IF(X.LT.A*B)GO TO 310
      CDGAMA=1.D0-COEF/(Z*F2)
      RETURN
  310 CDGAMA=COEF/(A*F2)
      RETURN
  400 CDGAMA=-2.D0
      RETURN
      END
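CDGAMA's switch at X = A*B mirrors the usual practice of pairing a lower-tail expansion for small Z with an upper-tail continued fraction for large Z. A minimal Python sketch of the same quantity, using the lower-tail power series throughout (the name `gamma_cdf` is illustrative and not from the program; the series converges everywhere but more slowly than the continued fraction when x/b is large):

```python
import math

def gamma_cdf(x, a, b):
    """P(T <= x) for T ~ Gamma(shape a, scale b), via the series
    P(a, z) = z**a * exp(-z)/Gamma(a) * sum_{n>=0} z**n/(a(a+1)...(a+n)),
    with z = x/b."""
    if x <= 0.0:
        return 0.0
    z = x / b
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > 1e-15 * total:
        n += 1
        term *= z / (a + n)
        total += term
    return math.exp(a * math.log(z) - z - math.lgamma(a)) * total
```

For shape 1 the gamma law reduces to the exponential, so `gamma_cdf(x, 1.0, b)` should equal `1 - exp(-x/b)`, which gives an easy sanity check.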
      SUBROUTINE SCLPAR(PR,IT,ITM,ITN,X1,X2,CONST,R,X,NR,NC,BL,NB,C,CA)
C     READ RUN PARAMETERS AND THE BLOCK/WHOLE-PLOT/SPLIT-PLOT
C     CODES, BUILD THE DESIGN MATRIX X VIA SCX, AND ECHO THE SETUP
      DIMENSION X(NR,NC)
      INTEGER PR(40),R,BL,NB(1),C,CA
      INTEGER B(72),WP(72),SP(72)
      READ(1,1)PR,IT,ITM,ITN,X1,X2,CONST
    1 FORMAT(40I1,3I5,3F10.0)
      WRITE(3,5)
    5 FORMAT('1   B WP SP')
      R=1
   11 READ(5,3,END=10)B(R),WP(R),SP(R)
      WRITE(3,6)B(R),WP(R),SP(R)
    6 FORMAT(1X,3I3)
    3 FORMAT(3I1)
      R=R+1
      GO TO 11
   10 R=R-1
      CALL SCX(X,B,WP,SP,R,NR,NC,BL,NB,C,CA)
      WRITE(3,4)PR,R,C,CA,IT,ITM,ITN,X1,X2,CONST,BL,(NB(I),I=1,BL)
    4 FORMAT(' OP........',40I1
     */' ROWS......',I3
     */' COL.......',I3
     */' COL X.....',I3
     */' ITER......',I6
     */' ITERM.....',I6
     */' ITERN.....',I6
     */' A1........',F10.3
     */' A2........',F10.3
     */' CONST.....',E13.6
     */' N BL......',I3
     */' N OB IN BL',20I3
     *)
      RETURN
      END