Mohammed, Ali A. "The Analysis of Autonormal Models and Their Correlation Structure as Applied to Uniformity Data."

THE ANALYSIS OF AUTONORMAL MODELS AND THEIR CORRELATION STRUCTURE
AS APPLIED TO UNIFORMITY DATA
by
ALI ABULGASIM MOHAMMED
DEPARTMENTS OF STATISTICS AND BIOSTATISTICS
Institute of Statistics Mimeograph Series No. 1313
TABLE OF CONTENTS

1.  INTRODUCTION

2.  REVIEW OF LITERATURE
    2.1  General Review
    2.2  Two-Dimensional Stationary Processes
         2.2.1  Correlation structure of some two-dimensional stationary processes
         2.2.2  A conditional probability approach to the spatial processes
    2.3  Statistical Analysis of Lattice Systems
         2.3.1  Maximum likelihood estimation
         2.3.2  A coding method on the rectangular lattice
         2.3.3  Unilateral approximation on the rectangular lattice

3.  A COMPUTATIONAL PROCEDURE FOR ORD'S METHOD
    3.1  The Determination of the Eigenvalues of W
    3.2  The Computation of the Variance-Covariance Matrix

4.  UNIFORMITY TRIALS WITH SORGHUM, ONIONS AND WHEAT
    4.1  Materials and Methods
         4.1.1  Sorghum uniformity trial
         4.1.2  Wheat uniformity trial
         4.1.3  Onion uniformity trial
    4.2  Results
         4.2.1  Auto-normal analysis
                4.2.1.1  Auto-normal analysis of sorghum plots data
                4.2.1.2  Auto-normal analysis of onion plots data
                4.2.1.3  Auto-normal analysis of wheat plots data
         4.2.2  Fitted correlograms and correlation functions
    4.3  Discussion

5.  CONCLUSIONS

6.  REFERENCES

7.  APPENDICES
1.  INTRODUCTION
"Statistics may be regarded (i) as the study of populations,
(ii) as the study of variation, (iii) as the study of methods of reduction of data."
statistics.
This is how Fisher (1925) defined the scope of statistics.
It is the second item that we intend to investigate in some
detail.
Different approaches and techniques have been used to study the
yield variation of field crops with a view to ascertain the principal
component of this variability.
One such an important tool is the study
of the sample autocorrelation coefficients.
These measure the
correlation between yield from plots that are at various distances apart.
The domain of the study of the autocorre1ations lies in the field of time
series.
Thus, given
N successive observations of a time series,
Y1'Y2' ••• 'YN' we can form
(N-1)
(Y1'Y2)'(Y2'Y3)' •.• '(YN-1'YN) •
pairs of observations, namely
By regarding the first observation in
each pair as one variable while the second observation as a second
variable, the correlation coefficient between
Yt
and
Yt+1
can be
calculated in the same way as the product moment correlation.
This is
referred to as an autocorrelation coefficient or serial correlation.
In a similar fashion the correlation between observations that are a
distance
k
apart can be obtained by first forming
observations, namely
(N-k)
(Y 1 'Yk+1)'(Y2'Yk+2)' ••. '(YN-k'YN)'
calculating the correlation coefficient between
Yt
and
pairs of
and then
Yt+k.
This
is referred to as the autocorrelation coefficient at lag k.
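The lag-k sample autocorrelation just described can be computed directly; a minimal sketch in Python (the estimator below divides by the total sum of squares about the overall mean, one of several conventions in common use):

```python
import numpy as np

def autocorrelation(y, k):
    """Sample autocorrelation at lag k, computed from the (N - k)
    pairs (y[t], y[t+k]) as a product-moment type correlation about
    the overall mean."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    dev = y - y.mean()
    # covariance between the series and its lag-k shift
    num = np.sum(dev[:n - k] * dev[k:]) if k > 0 else np.sum(dev * dev)
    den = np.sum(dev**2)
    return num / den
```

At lag 0 the coefficient is 1 by construction; plotting the coefficients against k gives the sample correlogram referred to throughout this study.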
In order to apply these techniques to the study of the correlation between the yields of plots that are at various distances apart, Whittle (1954) introduced the concept of a line transect, which he defined as a straight line laid over an area, along which observations are taken equidistantly. He regarded the observations of the transect as being generated by a one-dimensional stochastic process, in a similar fashion to the terms of a time series.
However, he drew attention to an important difference between the two cases. In the case of the time series there is a distinction between the past and the future: the value of any observation depends only on past values and, hence, dependence extends backwards only. For the transect, however, the dependence extends in both directions.
To demonstrate the consequences of this difference he considered the first-order (unilateral) autoregressive model, where the value of the random variable at time t is a linear function of the value at time t-1 plus a random error, and the (bilateral) transect model, where the value of the random variable at any time t is a linear function of the values at times t+1 and t-1 plus a random error.

He pointed out that in the case of the (unilateral) autoregressive model the parameters could be estimated by minimizing the residual sum of squares in the usual way, while it would not be legitimate to estimate the parameters of the (bilateral) transect model by carrying out the same minimization. As a consequence of this difference, he showed that it would not be possible to use ordinary least squares to estimate the parameters of a transect (bilateral) model. As an alternative, he proposed minimizing the product of a function of the parameters and the ordinary residual sum of squares.
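For the unilateral model, the legitimacy of least squares is easy to illustrate by simulation; a minimal sketch with an assumed coefficient of 0.6 (hypothetical data, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.6                      # assumed unilateral (AR(1)) parameter
n = 5000
y = np.zeros(n)
for t in range(1, n):
    # value at time t depends only on the value at time t-1 plus error
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Ordinary least squares: regress y_t on y_{t-1}
x, z = y[:-1], y[1:]
phi_hat = np.sum(x * z) / np.sum(x * x)
```

For the bilateral transect model no such regression recovers the structural parameter, which is why Whittle replaced the residual sum of squares with a parameter-dependent multiple of it.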
Several other approaches have also been proposed. These include that of Besag (1974), who introduced the concept of autonormal models in contrast to the autoregressive models. The details of autonormal models will be discussed fully in this study; basically, he examined the formulation of conditional probability models for finite systems of spatially interacting random variables, where the variables themselves have a normal distribution.

When these autonormal models are examined and the corresponding parameters estimated, one can, at least theoretically, derive and estimate the respective correlation functions and correlograms following the method developed by Besag (1972). The alternative approach adopted in this study is that of estimating the autocorrelations generated by certain diffusion processes suggested by Whittle (1962).

All these estimated autocorrelations can then be compared with the observed ones. Agreement between the observed and the estimated autocorrelations will indicate that the model generating the latter gives an adequate description of the data, while disagreement will be taken to indicate the inadequacy of the postulated model.
In order to examine autoregressive models, we will start by developing a computational procedure for obtaining the eigenvalues of the weighting matrix W introduced by Ord (1975) in his autoregressive model. Then we will fit the auto-normal model developed by Besag (1974) to determine the appropriate order for three data sets. According to the order of this model, whether first, second or third, we will determine its unilateral representation as outlined by Whittle (1954). Then we will estimate the parameters of this unilateral model and fit its corresponding correlogram, using Besag's (1972) method. This correlogram will then be compared with the observed one as well as with the correlogram computed assuming the lag 1 correlation suggested by Quenouille (1949).

We will also examine two correlation functions, the first derived assuming a symmetric autoregressive model (Whittle, 1954), the second derived assuming a certain diffusion process (Whittle, 1962). These two fitted correlation functions will then be compared with the observed ones.

Finally, we will examine the results of all these methods and techniques in an effort to reach some conclusions regarding the variability of crop yield.
2.  REVIEW OF LITERATURE

2.1  General Review
Fairfield Smith (1938) developed his empirical law describing the relationship between the variance of yield and plot size. The law is given by the relationship

    V_x = V_1 x^(-b)                                                  (2.1)

where

    V_x is the variance of yield per basic unit for plots of x units,
    V_1 is the variance among plots of one basic unit,
    x   is the number of basic units in the combined plot size, and
    b   is a parameter to be estimated, frequently called the soil heterogeneity index.

Later, numerous uniformity experiments were reported, in almost all of which the soil heterogeneity index, b, was employed in order to determine the optimum plot size, defined as that area for which the cost per unit of information would be minimum.
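The index b is estimated from the slope of log V_x against log x; a minimal sketch on hypothetical data generated exactly from the law (an unweighted fit here, whereas Fairfield Smith used a weighted regression):

```python
import numpy as np

# Hypothetical plot sizes (basic units) and per-unit variances,
# generated from V_x = V_1 * x**(-b) with V_1 = 10, b = 0.6.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
v1, b_true = 10.0, 0.6
vx = v1 * x ** (-b_true)

# The slope of log V_x on log x estimates -b
slope, intercept = np.polyfit(np.log(x), np.log(vx), 1)
b_hat = -slope
```

With real uniformity data the points scatter about the line, and the weighting reflects the differing precision of the variance estimates at each plot size.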
On the other
hand, Whittle (1956) examined the spatial covariance function of yield
(from the knowledge of the way yield varies with plot size
and shape).
Under the assumption of an isotropic covariance, so that
the covariance
pes), was a function of the distance, s, only,
•
6
and that plots were of constant shape (i.e., they could only vary
similarly by changing all its dimensions in the same ratio), he coneluded that if the variance of plots of increasing size was observed
to increase as
1 <
~
< 2),
distances.
A~
then
for large areas (A being plot area and
p (s)
would decay as
s
(~-2)
for large
However, he noted that some of the results presented by
Fairfield Smith would seem to indicate that the dependence of variance
on shape might be weak and that the variance might be determined almost
entirely by plot
area'~
This inference led Whittle to suggest that this was
presumably true only if the shape was not too extreme, e.g., not tOQ
narrow and elongated.
Nonetheless, he made use of the Fairfield Smith approximate
variance of yield per unit in plots of area A, which could be represented as a curve const.
A-· 749 ,
and derived a covariance function as
pes) '" const. s
-3/2
(2.2)
He concluded that these results provided evidence of two intriguing possibilities:

a)  that covariances decay as s^(-λ), λ → 0;

b)  that the rate of decay might be so small that yield variance increases faster than plot area.

However, he noted that none of the simple linear models proposed in his paper led to covariances of the above type. So he argued that any model which is to provide a satisfactory explanation of the power law decay observed in agricultural work must embody two features: (a) it must be nonlinear, and (b) it must consider the variate (yield density) to be a function of time as well as of the spatial coordinates. He cited as an example of such models the situation where the fertility gradients in the soil were smoothed out in the course of time by a diffusion process, which was nonlinear to the extent that only gradients greater than a certain value would tend to diminish.
Whittle (1962) examined some models of deterministic diffusion in two and three dimensions, into which pure random error was continually being injected. He found that such models would explain the s^(-1) behavior of agricultural covariance. He indicated, however, that he was unable to find any diffusion model which would generate s^(-1/2) and s^(-3/2) laws, despite a fair amount of experimentation with shape, dimensionality of region (area) and boundary conditions.
On the other hand, Besag (1972) considered the correlation structure of the class of stationary, unilateral, linear autoregressions defined on a plane lattice. He showed that although the standard techniques for deriving the correlogram analytically were not immediately appropriate, it was nevertheless possible to give a simple solution in a certain region of the plane when the number of regressors was small. He added that the remainder of the correlogram would tend to be analytically awkward but could be calculated numerically.

Hitherto, we have reviewed the various correlation functions and correlograms that were put forward by various authors. As to the actual computations involved, however, these would depend on the method of estimation of the parameters appearing in the models which led to these derived correlation functions and correlograms. To estimate the coefficient of soil heterogeneity, b, Fairfield Smith (1938) performed a weighted regression of log V_x on log x. He also listed the b values estimated from uniformity data for several kinds of crops. The histogram for the thirty-nine estimated b values has its main peak in the interval b = 0.41 - 0.50, and a second peak as well as an upper cut-off point at b = 0.71 - 0.80. These values led Whittle (1956) to suggest that the values b = 1/2 and 3/4 were in some way distinguished. Whittle (1962) noted that low values of b came largely from irrigated crops, while high values were associated with the opposite conditions.
Quenouille (1949) reviewed work by Osborne (1942), who suggested the possible use of ρ(s) = e^(-hs), (h > 0), and by Mahalanobis, who calculated correlations for a paddy field of 800 cells together with the function e^(-s), which gave a quite good fit. Quenouille himself considered an "elliptical" Markov process given by

    ρ(s,t) = exp{ -( s²/a² - 2mst/(ab) + t²/b² )^(1/2) }              (2.3)

where s and t are the lag distances in the two dimensions and a, b, and m are constants. By taking m = 0 and changing the units in which the distances were measured, he worked with the process

    ρ(r) = e^(-ar)

which he called a circular Markov process.
The formulations

    ρ(s,t) = ρ_10^s ρ_01^t                                            (2.4)

where ρ_10 and ρ_01 are population values for the lag 1 correlations along the rows and columns, respectively, and

    ρ(s,t) = exp{ -|as + bt| }                                        (2.5)

were called degenerate Markov processes of the first and second order, respectively. He also indicated that by using a circular Markov process it would be possible to obtain numerically a law in substantial agreement with the Fairfield Smith law over a wide range of values.
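Quenouille's elliptical process can be evaluated directly; a minimal sketch assuming the standard elliptical quadratic form s²/a² - 2mst/(ab) + t²/b² under the square root:

```python
import math

def rho_elliptical(s, t, a, b, m):
    """Quenouille-type 'elliptical' Markov correlation:
    rho(s,t) = exp(-sqrt(s^2/a^2 - 2*m*s*t/(a*b) + t^2/b^2))."""
    q = (s / a) ** 2 - 2.0 * m * s * t / (a * b) + (t / b) ** 2
    return math.exp(-math.sqrt(q))
```

With m = 0 and a = b this reduces to exp(-r/a), where r is the Euclidean lag distance, i.e., the circular Markov process.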
Patankar (1954) worked with Mercer and Hall's (1911) wheat crop data, which consisted of 500 plots in a rectangular field arranged in 20 rows and 25 columns. He regarded the data as homogeneous as far as the variations from north to south were concerned, but there was a linear trend in the data from west to east, which he estimated, adjusting the data accordingly. He then calculated the serial correlations for the rows and columns of both the adjusted and the original data. He noticed that these calculated correlations were very small, and so he used the degenerate Markov process of the first kind, as suggested by Quenouille, to calculate the correlation function. He estimated ρ_01 and ρ_10 by r_01 and r_10, the observed lag 1 correlations along the rows and columns, respectively, taken over the whole area. The fit was quite satisfactory in the case of the adjusted data.
On the other hand, Whittle (1954) argued that it would always be possible to find a unique process which would generate a given set of autocorrelations. As to the estimates of the parameters encountered in such processes, he explained that it was no longer correct to minimize the residual sum of squares (u) and that the correct equations for the least squares estimates would be obtained by minimizing ku, where k is a known function of the parameters. Thus he fitted a symmetric autoregressive model to the Mercer and Hall wheat data, the model being of the form

    Y_{s,t} = a_0 + a(Y_{s+1,t} + Y_{s-1,t} + Y_{s,t-1} + Y_{s,t+1}) + e_{s,t}     (2.6)

where Y_{s,t} is the yield for the plot at row s and column t, e_{s,t} is a random normally distributed error with mean zero and variance σ², and a_0 and a are constants.

The correlation function corresponding to this process turned out to be of the form

    ρ(r) = const. (kr) K_1(kr)                                        (2.7)

where K_1 is the modified Bessel function of the second kind, order one, k = (1/a - 4)^(1/2), and r is the lag.

However, instead of estimating a, he equated the correlation function above with the two extreme values of the observed correlation to obtain the values of the constant and k. It turned out that k = 0.13, whence a = 0.2489. This resulted in an impressive agreement between the observed and the fitted correlation, but he warned that the apparent agreement should be discounted considerably, since almost any monotone decreasing function would fit the observed curve reasonably well if only the end points were arranged to coincide. However, he added that if one were to fit, for example, an exponential curve in the same fashion, the agreement would not be at all as good, for the exponential curve sags too much in the middle.
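Whittle's correlation (2.7), ρ(r) proportional to (kr)K_1(kr) with k = sqrt(1/a - 4), can be evaluated with only the standard library by using an integral representation of K_1; a minimal sketch (the quadrature settings are ad hoc but adequate for moderate arguments):

```python
import math

def k1_bessel(x, upper=20.0, n=4000):
    """Modified Bessel function of the second kind, order one, via
    K_1(x) = integral_0^inf exp(-x cosh t) cosh t dt
    (trapezoidal rule on [0, upper]); requires x > 0."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return total * h

def whittle_correlation(r, a, const=1.0):
    """rho(r) = const * (k r) * K_1(k r), k = sqrt(1/a - 4); needs a < 1/4."""
    k = math.sqrt(1.0 / a - 4.0)
    return const * (k * r) * k1_bessel(k * r)
```

With a = 0.2489 this gives k ≈ 0.133, consistent with the fitted value k = 0.13 quoted above; and since xK_1(x) → 1 as x → 0, the curve is flat at the origin, as Whittle noted.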
Whittle also noted that the relation (2.7), which was expressed in terms of a Bessel function, was of interest in that it might be regarded as the 'elementary' correlation in two dimensions, similar to the exponential e^(-a|x|) in one dimension. Both correlation curves are monotone decreasing, but the former differs in that it is flat at the origin and that its rate of decay is lower than the exponential. He added that two-dimensional processes could be constructed which would have exponential correlation functions, but that such processes were very artificial. However, Whittle (1962) took a different approach and derived the autocorrelations by considering a process in which random variation would diffuse in a deterministic fashion through physical space. Thus he was able to derive a correlation function that had no singularity at the origin and that had an s^(-1) behavior at infinity, similar to the conventional diffusion processes.

Besag (1974) established a conditional probability approach to spatial processes. By applying this approach to the basic lattice models he was able to develop a parameter estimation procedure (the coding technique) and, at least for binary and Gaussian varieties, to develop straightforward goodness-of-fit tests of the model. By applying these coding methods he was able to fit autocorrelations for the Mercer and Hall data.
There was some disparity between the observed and fitted correlograms. As to this disparity, Whittle, in the discussion that followed Besag's article, argued that the correlogram presumably reflected uncorrelated noise superimposed on the variables of the spatial model.
2.2.  Two-Dimensional Stationary Processes
Besag (1972) considered a large two-dimensional rectangular lattice, each site (k,l) of which has a random variable Y_{k,l} associated with it. He assumed that these variables interact and that each might be expressed as a unilateral linear autoregression on {Y_{k-i,l}, i > 0} and {Y_{k-i,l-j}, j > 0, i unrestricted}. That is, borrowing Whittle's (1954) terminology, Besag considered the stochastic equation

    Y_{k,l} = Σ a_{i,j} Y_{k-i,l-j} + e_{k,l}                         (2.8)

the sum extending over the unilateral set of lags just described, where {e_{k,l}} is a sequence of uncorrelated error variables, each having zero mean and variance v², and {a_{i,j}} is a set of real parameters, independent of position (k,l) and satisfying Σ|a_{i,j}| < 1. He imposed this condition to ensure the existence of a stationary process satisfying (2.8).
2.2.1.  Correlation Structure of Some Two-Dimensional Stationary Processes

Besag assumed the autoregression (2.8) to be finite, with a_{i,j} = 0 for all i < -P, P being the least nonnegative integer for which this holds. By considering a stationary finite autoregression satisfying (2.8), and letting ρ(s,t) denote the autocorrelation at lags s and t in k and l respectively, he derived the autocorrelations as

    ρ(s,t) = Σ a_{i,j} ρ(s-i, t-j)                                    (2.9)

provided that E{e_{k,l} Y_{k-s,l-t}} = 0, where the summation in (2.9) extends over i > 0, j = 0 and i ≥ -P, j > 0. As Besag pointed out, the primary physical characteristic of a unilateral autoregression is that it can be generated stochastically, element by element and row by row, provided that arbitrarily remote boundary conditions exist to the top and left of the array. Accordingly, (2.9) is valid at least for s > 0 when t = 0 and for all s when t > 0. Moreover, the range of validity may be extended, since e_{k,l} does not, for example, influence the values of Y_{k-s,l+1} for s > P, nor the values of Y_{k-s,l+2} for s > 2P, and so on. Hence (2.9) could be used for t = 0, s > 0, and for t < 0, s ≥ -Pt.

He then explained the difficulties involved in solving the recurrence relations (2.9), and suggested the following method of solution. From (2.9), "we have in particular

    ρ(s,t) = Σ a_{i,j} ρ(s+i, t+j),  (t < 0, s ≥ -Pt)                 (2.12)

so that

    ρ(s,t) = Σ a_{i,j} ρ(s-i, t-j),  (t < 0, s ≥ -Pt),                (2.13)

and the trial solution

    ρ(s,t) = Σ_u c_u λ_u^s μ_u^t,  (t ≤ 0, s ≥ -Pt)                   (2.14)

yields characteristic equations

    Σ a_{i,j} λ^i μ^j = 1,    Σ a_{i,j} λ^{-i} μ^{-j} = 1.            (2.15)

The occurrence of a pair of equations in λ and μ determines an even number of roots (λ_u, μ_u) and, further, we may discard half of these roots, since if (λ_u, μ_u) satisfies (2.15) so does (λ_u^{-1}, μ_u^{-1}), and the form (2.14) demands |λ_u| ≤ 1. In any specific problem, the values of the nonzero c_u may be found by applying (2.9) for values of (s,t) in a neighborhood of (0,0)." "The solution (2.14) may be extended at least to t ≤ 0, s ≥ -Pt."
As Besag noted, the usefulness of the above method would depend upon the number of regressors. Its advantage, however, was that it would immediately give the region where a simple form of the solution would exist. He also added that outside this region the solution might become increasingly complex, but the correlations could still be calculated numerically by successive applications of (2.9), using (2.14) to provide boundary values.
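The row-by-row generation scheme, and the numerical calculation of correlations it permits, can be illustrated by simulating a hypothetical first-order unilateral autoregression (the coefficients below are assumed for illustration and satisfy the summability condition):

```python
import numpy as np

rng = np.random.default_rng(1)
a10, a01 = 0.4, 0.3              # assumed coefficients, |a10| + |a01| < 1
nrow, ncol = 300, 300
y = np.zeros((nrow, ncol))
# Element-by-element, row-by-row generation; the zero first row and
# column stand in for remote boundary values to the top and left.
for k in range(1, nrow):
    for l in range(1, ncol):
        y[k, l] = a10 * y[k - 1, l] + a01 * y[k, l - 1] + rng.standard_normal()

# Sample autocorrelation at lag (1, 0), after discarding a margin so
# that the boundary conditions are effectively 'remote'
inner = y[50:, 50:]
r10 = np.corrcoef(inner[:-1, :].ravel(), inner[1:, :].ravel())[0, 1]
```

The sample correlogram obtained this way can be compared with values computed from the recurrence (2.9) when an analytic solution is awkward.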
2.2.2.  A Conditional Probability Approach to the Spatial Processes

Besag (1974) also examined some stochastic models which might be used to describe certain types of spatial processes. He gave some examples of these spatial processes, classified according to the nature of

(a)  the system of sites (regular or irregular),
(b)  the individual sites (points or regions), and
(c)  the associated random variable (discrete or continuous).

We will be interested in examining a regular lattice of point sites with continuous variables, which commonly occurs in agricultural experiments where aggregate yields are measured. Multivariate normality will be assumed.

Besag explained that there appear to be two main approaches to the specification of spatial stochastic processes. These stem from the nonequivalent definitions of a "nearest neighbor" system originally due to Whittle (1963) and Bartlett (1955, section 2.2, 1967, 1968), respectively.
We will restrict our attention to a rectangular lattice with sites labelled by integer pairs (i,j) and with an associated set of random variables {Y_{i,j}}. Then Whittle's basic definition requires that the joint probability distribution of the variates should be of the product form (2.16), where y_{i,j} is the value of the random variable Y_{i,j}. On the other hand, Bartlett's definition requires that the conditional probability distribution of Y_{i,j}, given all other site values, should depend only upon the values at the four nearest sites to (i,j), namely Y_{i-1,j}, Y_{i+1,j}, Y_{i,j-1} and Y_{i,j+1}.
As Besag has noted, the conditional probability formulation had more intuitive appeal, but this was marred by a number of disadvantages. Firstly, there was no obvious method of deducing the joint probability structure associated with a conditional probability model. Secondly, the conditional probability structure itself was subject to some nonobvious and highly restrictive consistency conditions. Referring to these conditions, Besag quoted Brook (1964), who showed that the conditional probability formulation was degenerate with respect to (2.16). To overcome these difficulties, Besag suggested the consideration of wider classes of conditional probability models in which the conditional distribution of Y_{i,j} was allowed to depend upon the values at more remote sites. Thus he built a hierarchy of models which would eventually include (2.16) and any particular generalization of it. That is, he extended the concept of first, second and higher order Markov chains in one dimension to the realm of spatial processes, which removed any degeneracy associated with the conditional probability models.

Besag went on to derive the joint probability distribution associated with those sites. He showed that by specifying the neighbors and the associated conditional probability structure for each of the sites, their joint probability would be uniquely determined. In order to derive this joint probability and establish its uniqueness, he started by defining Markov fields and cliques as follows:
(a)  A Markov field: "Any system of n sites, each with specified neighbors, generates a class of valid stochastic schemes. Any member of this class is called a Markov field."

(b)  A clique: "Any set of sites which either consists of a single site or else in which every site is a neighbor of every other site in the set is called a clique."

Then he went on to prove Hammersley and Clifford's theorem, which addressed the following problem: "Given the neighbors of each site, what is the most general form which Q(y) may take in order to give a valid probability structure to the system?"
It turned out that

    Q(y) = Σ y_i G_i(y_i) + ΣΣ y_i y_j G_{i,j}(y_i,y_j) + ...
           + y_1 y_2 ··· y_n G_{1,2,...,n}(y_1,y_2,...,y_n)           (2.17)

where, for any 1 ≤ i < j < ... < s ≤ n, the function G_{i,j,...,s} in (2.17) may be non-null if and only if the sites i,j,...,s form a clique. Subject to this restriction, he showed that the G functions might be chosen arbitrarily and hence, given the neighbors of each site, the general form of Q(y) and the conditional distributions could be obtained.

Having considered the Hammersley-Clifford theorem, Besag went on to consider some particular schemes within the general framework, as follows:
Given n sites, labelled 1,2,...,n, and the set of neighbors for each, he considered Q(y), which was well defined and had the representation

    Q(y) = Σ y_i G_i(y_i) + ΣΣ β_{i,j} y_i y_j                        (2.18)

where β_{i,j} = 0 unless sites i and j are neighbors of each other. Besag termed such schemes auto-models.
Besag considered a specific auto-model which arises, for example, in plant ecology, when it is reasonable to assume the joint distribution of the site variables (plant yields), possibly after suitable transformation, to be multivariate normal.

In particular, Besag considered schemes for which

    p_i(·) is normal with mean μ_i + Σ_j β_{i,j}(y_j - μ_j)
    and constant variance σ²                                          (2.19)

where p_i(·) denotes the conditional probability distribution of Y_i given all other site values. Thus the joint density is

    P(y) ∝ exp{ -(1/(2σ²)) (y - μ)'B(y - μ) }                         (2.20)

where μ is an n × 1 arbitrary vector of finite means, μ_i, and B is an n × n matrix whose diagonal elements are unity and whose (i,j) off-diagonal element is -β_{i,j}. Thus B is symmetric, and he argued that it should also be positive definite in order for the formulation to be valid.
He also pointed out the distinction between the process (2.19) defined above, for which

    E(Y_i | all other site values) = μ_i + Σ_j β_{i,j}(y_j - μ_j),    (2.21)

and the process defined by the set of n simultaneous autoregressive equations, typically

    Y_i = μ_i + Σ_j β_{i,j}(Y_j - μ_j) + ε_i,                         (2.22)

where ε_1, ε_2, ..., ε_n are independent Gaussian variates, each with zero mean and variance σ². Besag showed that, in contrast to (2.20), the latter process has joint probability density function

    P(y) ∝ exp{ -(1/(2σ²)) (y - μ)'B'B(y - μ) }                       (2.23)

where B is as defined before, but it is no longer necessary that β_{j,i} = β_{i,j}, only that B should be non-singular.
The construction of conditional probability models on a finite regular lattice is simplified by the existence of a fairly natural hierarchy in the choice of neighbors for each site. For our purpose we shall be primarily interested in a rectangular lattice with sites defined by integer pairs (i,j) over a finite region, and in particular we will concentrate for most of the time on auto-normal schemes, since they are of relevance to analyzing our data later on.
Three homogeneous schemes are of particular interest. They are:

(a)  the first order scheme, for which Y_{i,j}, given all other site values, is normally distributed with mean

    μ + β_1{(y_{i-1,j} - μ) + (y_{i+1,j} - μ)} + β_2{(y_{i,j-1} - μ) + (y_{i,j+1} - μ)}   (2.24)

and constant variance σ²;

(b)  the second order scheme, for which Y_{i,j}, given all other site values, is normally distributed with mean equal to (2.24) augmented by the diagonal-neighbor terms

    γ_1{(y_{i-1,j-1} - μ) + (y_{i+1,j+1} - μ)} + γ_2{(y_{i-1,j+1} - μ) + (y_{i+1,j-1} - μ)}   (2.25)

and constant variance σ²; and

(c)  the third order scheme, for which Y_{i,j}, given all other site values, is normally distributed with mean containing, in addition to the terms of the second order scheme, the terms

    ξ_1(y_{i-2,j+1} + y_{i+2,j-1}) + ξ_2(y_{i-2,j-1} + y_{i+2,j+1})   (2.26)

and constant variance σ².

These schemes will be used to analyze crop yields in our uniformity trials later, since it has been argued by various investigators, such as Bartlett (1975) and Besag (1974), that through local fluctuations in soil fertility or the influence of competition it was no longer reasonable to assume statistical independence.
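For the first order scheme, the conditional mean is a linear combination of the four nearest neighbors; a minimal sketch assuming the mean-corrected parametrization μ + β_1 Σ(row neighbors - μ) + β_2 Σ(column neighbors - μ), which is the standard first-order auto-normal form:

```python
import numpy as np

def first_order_conditional_mean(y, i, j, mu, b1, b2):
    """Conditional mean of a first-order auto-normal scheme at an
    interior site (i, j): mu plus b1 times the mean-corrected
    north/south neighbors and b2 times the east/west neighbors."""
    return (mu
            + b1 * ((y[i - 1, j] - mu) + (y[i + 1, j] - mu))
            + b2 * ((y[i, j - 1] - mu) + (y[i, j + 1] - mu)))
```

If every neighbor equals μ, the conditional mean is μ itself, so β_1 and β_2 measure only the local departures from the overall level.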
2.3.  Statistical Analysis of Lattice Systems

In this section we will describe the methods of parameter estimation and some goodness of fit tests applicable to spatial Markov schemes defined over a rectangular lattice. These methods can be extended to non-lattice situations, as Besag (1974) has shown. We will concentrate on first, second and third order lattice schemes, since these are particularly interesting in applications. As to the methods of estimation, we will consider the following three: maximum likelihood, coding methods, and unilateral approximation on the rectangular lattice (homogeneous first-order spatial schemes).
2.3.1.  Maximum Likelihood Estimation

Besag (1974) has illustrated that, in general, a direct approach to statistical inference through maximum likelihood is intractable because of the extremely awkward nature of the normalizing function. However, in certain exceptional cases, when the variates have an auto-normal structure, the normalizing function can be evaluated numerically. Another instance where maximum likelihood estimation is possible is when there are only one or two parameters determining the normalizing function.
A method of computation was developed by Ord (1975) for cases when the number of sites is limited (about 40 or less), but we were able to extend this to handle any number of sites, no matter how large. Both the method and its extension will be discussed in the next section and in Chapter 3. In Chapter 3 we will develop a computational procedure for Ord's method, but in the remainder of this section we give a summary of the method as presented by Ord.
He considered the model

    Y_i = α + ρ Σ_{j ∈ J(i)} W_{ij} Y_j + ε_i,   (i = 1, ..., n)      (2.27)

where

    Y_i is the yield of plot i, i = 1, ..., n;
    Y_j is the yield of plot j, a neighbor of plot i, where J(i) may include all locations other than i;
    α and ρ are parameters to be estimated;
    {W_{ij}} is a set of non-negative weights, to be defined, which represent the degree of possible interaction of plot j on plot i; and
    ε_i are random disturbance terms, uncorrelated with zero means and equal variances. We will assume normality, so that ε_i ~ NID(0, σ²).
The model given in (2.27) may be formulated in matrix terms, momentarily taking α = 0, as

    Y = ρWY + ε                                                       (2.28)

where W is an n × n matrix of weights and Y and ε are n × 1 vectors. The constant α was suppressed to simplify the initial exposition and was restored when he considered the regression model.
Given that ε ~ NID(0, σ²I) and, from (2.28), ε = AY, where A = (I - ρW), the log likelihood function for ρ and σ², given Y = y, is

    L(ρ, σ²) = const - (n/2) ln σ² - (1/(2σ²)) Y'A'AY + ln|A|.        (2.29)

From (2.29) he derived the maximum likelihood (ML) estimator of σ² as

    σ̂² = n⁻¹ Y'A'AY.                                                  (2.30)
He explained that the principal difficulty in determining ρ̂ from (2.29) centers on the evaluation of |A| = |I - ρW|. However, he argued that if W has eigenvalues λ_1, λ_2, ..., λ_n, then

    |A| = Π_{i=1}^{n} (1 - ρλ_i).                                     (2.31)
As Ord pointed out, the advantage of (2.31) is that the {λ_i} can be determined once and for all, so that ρ̂ is the value of ρ which would maximize

    L(ρ, σ̂²) = const - (n/2) ln( σ̂² |A|^(-2/n) ),                     (2.32)

i.e., the value of ρ that would minimize

    σ̂² |A|^(-2/n),                                                    (2.33)

where YL = WY. The ML estimator is that value of ρ which minimizes

    f(ρ) = -(2/n) Σ_{i=1}^{n} ln(1 - ρλ_i) + ln(S²)                   (2.34)

where S² = S²(ρ) = Y'Y - 2ρ Y'YL + ρ²(YL)'YL and f(ρ) is the logarithm of expression (2.33), up to an additive constant.
The derivatives of f(ρ) are

    f_ρ(ρ) = (2/n) Σ_{i=1}^{n} λ_i/(1 - ρλ_i) + 2{ρ(YL)'YL - Y'YL}/S²   (2.35)

and

    f_ρρ(ρ) = (2/n) Σ_{i=1}^{n} λ_i²/(1 - ρλ_i)² + 2(YL)'YL/S².         (2.36)
Then ρ̂ may be determined iteratively from the expression

    ρ̂_{m+1} = ρ̂_m - f_ρ(ρ̂_m)/f_ρρ(ρ̂_m),                               (2.37)

taking as a starting point ρ̂_0 = Y'YL/Y'Y.
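Ord's device of determining the eigenvalues of W once, then iterating on f(ρ) with the derivatives (2.35)-(2.36), can be sketched as follows (a minimal illustration; the clipping safeguard and the assumption that the eigenvalues of W lie in [-1, 1], as for a row-standardized W, are ours):

```python
import numpy as np

def ord_rho_mle(y, w, iters=50):
    """Newton iteration for the ML estimate of rho in Y = rho*W*Y + e,
    using ln|I - rho*W| = sum_i ln(1 - rho*lam_i) (Ord's device)."""
    n = len(y)
    lam = np.linalg.eigvals(w)          # computed once and for all
    yl = w @ y                          # YL = WY
    yy, yyl, ylyl = y @ y, y @ yl, yl @ yl
    rho = yyl / yy                      # starting value Y'YL / Y'Y
    for _ in range(iters):
        s2 = yy - 2 * rho * yyl + rho**2 * ylyl
        fp = (2 / n) * np.sum(lam / (1 - rho * lam)).real \
            + 2 * (rho * ylyl - yyl) / s2           # f_rho
        fpp = (2 / n) * np.sum((lam / (1 - rho * lam)) ** 2).real \
            + 2 * ylyl / s2                         # f_rho_rho
        rho = float(np.clip(rho - fp / fpp, -0.99, 0.99))
    return rho
```

Once ρ̂ is found, σ̂² follows from (2.30); the point of the eigenvalue decomposition is that each Newton step costs only O(n), not a fresh determinant evaluation.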
On considering model (2.27) with the constant term restored (μ, say), (2.28) becomes

    ε = (I − ρW)Y − μ1 ,    (2.38)

where 1 is an n x 1 vector of ones. Thus, for ρ unknown, the ML procedure leads to estimators of the same form, with the ML estimator ρ̂ replacing ρ. Then μ and σ² are estimated by

    μ̂ = (Σ_i Z_i)/n    (2.39)

and

    σ̂² = n⁻¹{Z′Z − (Σ_i Z_i)²/n} ,    (2.40)

where Z = (I − ρ̂W)Y.
Substituting back into the likelihood function, ρ̂ is that value of ρ which maximizes, as before, (2.32), and he showed that for computational purposes the expression (2.33) could be used, except that the second bracket would be replaced by

    (Y′Y − Y_T²/n) − 2ρ(Y′YL − Y_T(YL)_T/n) + ρ²{(YL)′YL − (YL)_T²/n} ,    (2.41)
where Y_T = Σ_{i=1}^{n} Y_i and (YL)_T = Σ_{i=1}^{n} (YL)_i .
He gave the asymptotic variance-covariance matrix, for ω = σ² and ρ, in that order, as

    V(ω,ρ) = ω² [ n/2        E(ε′YL)             ]⁻¹    (2.42)
                [ E(ε′YL)    ωE{(YL)′YL} + αω²   ]

where

    α = −∂²Ln|A|/∂ρ² = Σ_i λ_i²/(1 − ρλ_i)² ,    (2.42a)

and if B = W(I − ρW)⁻¹, then E(ε′YL) = ω tr(B) and E{(YL)′(YL)} = ω tr(B′B). For ω, ρ and μ, in that order, the variance-covariance matrix is given by

    V(ω,ρ,μ) = ω² [ n/2        E(ε′YL)             0          ]⁻¹    (2.43)
                  [ E(ε′YL)    ωE{(YL)′YL} + αω²   ω1′E(YL)   ]
                  [ 0          ω1′E(YL)            nω         ]
where E(YL) = μ(B₁, B₂, ..., B_n)′, E(ε′YL) = ω tr(B), and E(YL)′(YL) = ω tr(B′B) + {E(YL)}′{E(YL)}, with B_i = the sum of the elements of the ith row of B, i = 1, ..., n.

2.3.2. A Coding Method on the Rectangular Lattice.
Coding methods of estimation were introduced by Besag (1972) in the context of binary data, and he then extended them in order to fit first and second order schemes (Besag, 1974).

The method was developed as follows. It was assumed that the conditional distributions p_{i,j}(·) of Y_{i,j} were of a given functional form but collectively contained a number of unknown parameters whose values were to be estimated on the basis of a single realization Y of the system. Thus, in order to fit a first-order scheme, he began by labelling the interior sites of the lattice alternately by X and ·, as shown in Fig. (2.1). Then, according to the first-order Markov assumption, the variables associated with the X sites, given the observed values at all other sites, would be mutually independent. This resulted in the simple conditional likelihood

    ∏ p_{i,j}(y_{i,j}) ,    (2.44)

for the X site values, the product being taken over all X sites. Thus the conditional maximum likelihood estimates of the parameters could be obtained in the usual way.
[Fig. (2.1). Coding Pattern for a First-Order Scheme (Besag's Fig. 1, 1974): interior lattice sites labelled alternately X and · .]

By using a shift in the coding
pattern, alternative estimates might be obtained by maximizing the likelihood function for the · sites conditional upon the remainder. Besag noted that the two procedures were likely to be highly dependent, but in practice the results could be combined appropriately.
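For the first-order auto-normal scheme with zero mean, the coding estimates reduce to an ordinary least-squares regression of each X-site value on its two neighbour sums, the X sites being conditionally independent given the rest. A minimal sketch, with a parity rule standing in for the X/· labelling and any field values hypothetical:

```python
import numpy as np

def coding_estimates(y, shift=(0, 0)):
    """Conditional ML estimates (beta1, beta2) for a zero-mean first-order
    auto-normal scheme by the coding method: regress each coded interior
    value on its N/S and E/W neighbour sums."""
    rows, cols = y.shape
    X, t = [], []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # interior sites labelled 'X' lie on one parity class;
            # shifting the code gives the alternative set of estimates
            if (i + j + sum(shift)) % 2 == 0:
                X.append([y[i-1, j] + y[i+1, j], y[i, j-1] + y[i, j+1]])
                t.append(y[i, j])
    X, t = np.asarray(X), np.asarray(t)
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    return beta  # (beta1_hat, beta2_hat)
```

Calling the routine with `shift=(1, 0)` gives the second, shifted coding; combining the two sets of estimates is then a separate averaging step, as described in the text.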
Again, in order to estimate the parameters of a second-order scheme, he coded the internal sites as shown in Fig. (2.2), and by considering the joint distribution of the X site variables given the · site values, the conditional maximum likelihood estimates of the parameters might be obtained. By performing shifts of the entire coding frame over the lattice, four sets of estimates would be available, and these might again be combined appropriately.

By using the coding methods, likelihood ratio tests might be constructed to examine the goodness-of-fit of particular schemes. Here, Besag stressed three points:

a. "It is highly desirable that the wider class of schemes against which we test is one which has intuitive spatial appeal, otherwise the test is likely to be weak."

b. "The two maximum likelihoods we obtain must be comparable, e.g., if a first order scheme is tested against one of second order, the resulting likelihood-ratio test will only be valid if both schemes have been fitted to the same set of data, that is by using Fig. (2.2) coding, say."

c. There will be more than one test available (under shifts in coding), and as Besag has suggested, these should be considered collectively.
[Fig. (2.2). Coding Pattern for a Second-Order Scheme (Besag's Fig. 2, 1974).]
2.3.3. Unilateral Approximations on the Rectangular Lattice.

Besag (1974) has investigated and extended an alternative estimation procedure for homogeneous first-order spatial schemes that involved the construction of a simple process which has approximately the required probability structure but which is much easier to handle. This procedure was first developed by Whittle (1954), who proved that it is always possible to find a unilateral representation of any two-dimensional stationary process. However, Besag's (1974) approach is equivalent to that of Bartlett and Besag (1969).

He began his method by defining the set of predecessors of any site (i,j) in the positive quadrant to consist of those sites (k,t) on the lattice which satisfy either

    (i)  t < j , or
    (ii) t = j and k < i .

Then a unilateral stochastic process {Y_{i,j}: i > 0, j > 0} in the positive quadrant would be generated by specifying the distribution of each variable Y_{i,j} conditional upon the values at sites which are predecessors of (i,j). In practice, the distribution of Y_{i,j} would be allowed to depend only on a limited number of predecessors. Thus the simplest unilateral representation of a two-dimensional process had the form

    P(Y_{i,j} | all predecessors) = q(Y_{i,j}; Y_{i−1,j}, Y_{i,j−1}) ,    (2.45)

where Y_{i−1,j} and Y_{i,j−1} are two predecessors of Y_{i,j}. He considered, as a better approximation, processes where more predecessors were included, e.g.,

    P(Y_{i,j} | all predecessors) = q(Y_{i,j}; Y_{i−1,j}, Y_{i,j−1}, Y_{i+1,j−1}) .    (2.46)
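Because each site in (2.45) depends only on its predecessors, such a process can be generated by a single sweep in predecessor order; a sketch with Gaussian q and hypothetical coefficients b1, b2:

```python
import numpy as np

def simulate_unilateral(shape, b1, b2, sigma=1.0, seed=0):
    """Generate {Y_ij} on the positive quadrant from the simplest
    unilateral representation (2.45): each Y_ij depends only on the
    two predecessors Y_{i-1,j} and Y_{i,j-1} (zero boundary values)."""
    rng = np.random.default_rng(seed)
    m, n = shape
    Y = np.zeros((m + 1, n + 1))       # row 0 / column 0 act as the boundary
    for j in range(1, n + 1):          # sweep in predecessor order
        for i in range(1, m + 1):
            Y[i, j] = b1 * Y[i-1, j] + b2 * Y[i, j-1] \
                      + sigma * rng.standard_normal()
    return Y[1:, 1:]
```

The same loop accommodates (2.46) by adding a `Y[i+1, j-1]` term from the previous column, since that site is also a predecessor.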
So far, we have reviewed the relevant literature for finding statistical models that would describe yields from agricultural experiments. In the coming chapters of this study, we will apply these methods to our data, which consist of three uniformity trials on three crops, namely, sorghum, onions and wheat. However, we will start in Chapter 3 by developing a computational procedure for obtaining the eigenvalues of the weighting matrix, W, that has been employed by Ord (1975) to develop his autoregressive model (2.28). In Chapter 4 we will start by examining the auto-normal models, suggested by Besag (1974), to find the order of the model which will fit the data. In particular, we will examine the first, second and third-order models (i.e., the models (2.24), (2.25) and (2.26), respectively). Then we will fit the correlogram (2.10), according to the model order that fits the data, and its unilateral representation. Also, we will fit the Markov process of the first order, (2.4), as was developed by Quenouille (1949). These two correlograms will then be compared with the observed one.

Then we will compare the two correlation functions that were developed by Whittle (1954, 1962), where he assumed an autoregressive model to derive the first function and a diffusion process in the case of the second function. For the estimation of these two correlations, we will employ the computational procedure that we will develop in Chapter 3 to compute the eigenvalues of the weighting matrix for the symmetric autoregressive model (2.6).
All these results will then be compared. Thus we will attempt to evaluate the usefulness of the auto-normal models in characterizing the data variability. We will compare the various fitted autocorrelations and their usefulness for prediction purposes.
3. A COMPUTATIONAL PROCEDURE FOR ORD'S METHOD

3.1. The Determination of the Eigenvalues of W.

In order to apply Ord's method we need to determine the eigenvalues of W. Ord gave these values as the solution of a polynomial of degree n, and showed that the solution of these polynomials is possible provided that n is not large (about 40). In fact, Besag (1974) referred to the determination of these eigenvalues as an obstacle to using Ord's method when n is large.

In this section, however, we will describe a method for obtaining these eigenvalues for any regular lattice of size p x q = n. The method will be illustrated for the case of p = 24, q = 50 and n = 1200. The reason for this particular choice is that the data that we are going to consider in this study come from uniformity trial fields of size 24 x 50 = 1200 plots, but the method does not restrict the size of the lattice. The method is illustrated in what follows.
We start by considering the model of the form

    Y_{r,s} = (1/4)ρ(Y_{r−1,s} + Y_{r+1,s} + Y_{r,s−1} + Y_{r,s+1}) + ε_{r,s} ,
        r = 1, ..., 24 ;  s = 1, ..., 50 ,    (3.1)

where all the terms are as explained earlier in Chapter 2. Suppose that the cells are numbered as in the diagram below:

       1     2     3     4   ...    50
      51    52    53    54   ...   100
       .     .     .     .           .
    1151  1152  1153  1154   ...  1200
The weighting matrix (W₁) can be written as

              [ C  I  0  ...  0  0 ]
              [ I  C  I  ...  0  0 ]
    W₁ = (1/4)[ 0  I  C  ...  0  0 ]    (3.2)
              [ .           .    . ]
              [ 0  0  ...  I  C  I ]
              [ 0  0  ...  0  I  C ]

where C = (c_{ij}) is a (50 x 50) matrix of the form

        [ 0  1  0  0  ...  0  0 ]
        [ 1  0  1  0  ...  0  0 ]
    C = [ 0  1  0  1  ...  0  0 ]    (3.3)
        [ .              .     . ]
        [ 0  0  0  ...  1  0  1 ]
        [ 0  0  0  ...  0  1  0 ]

Then,

    W₁ = (1/4){(I₂₄ ⊗ C₅₀) + (B₂₄ ⊗ I₅₀)} ,    (3.4)

where ⊗ is the Kronecker product, subscripts indicate dimensions, and B is a (24 x 24) matrix of the same form as C.
On the other hand, if we consider the model of the form

    Y_{r,s} = (1/8)ρ(Y_{r−1,s−1} + Y_{r−1,s} + Y_{r−1,s+1} + Y_{r,s−1} + Y_{r,s+1}
              + Y_{r+1,s−1} + Y_{r+1,s} + Y_{r+1,s+1}) + ε_{r,s} ,
        r = 1, ..., 24 ;  s = 1, ..., 50 ,    (3.5)
the weighting matrix (W₂) can be written as

              [ C   C₁  0   ...  0   0  ]
              [ C₁  C   C₁  ...  0   0  ]
    W₂ = (1/8)[ .             .         ]
              [ 0   ...  C₁  C   C₁     ]
              [ 0   ...  0   C₁  C      ]

where C is the (50 x 50) matrix of the previous form, while C₁ is a (50 x 50) matrix of the following form:

         [ 1  1  0  0  ...  0  0 ]
         [ 1  1  1  0  ...  0  0 ]
    C₁ = [ 0  1  1  1  ...  0  0 ]  = I + C .
         [ .              .     . ]
         [ 0  0  0  ...  1  1  1 ]
         [ 0  0  0  ...  0  1  1 ]

Then, the (1200 x 1200) matrix W₂ is

    W₂ = (1/8){(I₂₄ ⊗ C₅₀) + (B₂₄ ⊗ C₁)}    (3.6)
       = (1/8){((I₂₄ + B₂₄) ⊗ C₅₀) + (B₂₄ ⊗ I₅₀)} .

Since the W_i (i = 1, 2) are composite matrices, to calculate the eigenvalues we make use of the following result (Lancaster, 1977, p. 259):
"Consider a polynomial φ in two variables x and y with complex coefficients. Thus for certain complex numbers c_{ij} and integer p,

    φ(x,y) = Σ_{i,j=0}^{p} c_{ij} x^i y^j .    (3.7)

If A (m x m) and B (n x n) are complex matrices, let us define φ(A,B) as follows:

    φ(A,B) = Σ_{i,j=0}^{p} c_{ij} (A^i ⊗ B^j) .    (3.8)

If λ₁, λ₂, ..., λ_m are the eigenvalues of A and μ₁, μ₂, ..., μ_n are the eigenvalues of B, then the eigenvalues of φ(A,B) are the mn values φ(λ_r, μ_s), where r = 1, ..., m and s = 1, ..., n."

Thus, to evaluate the eigenvalues of W_i (i = 1, 2) we apply the above result to get:
    Λ_{W₁} = (1/4){(1₂₄ ⊗ Λ_C) + (Λ_B ⊗ 1₅₀)}    (3.9)

and

    Λ_{W₂} = (1/8){(Λ_{I+B} ⊗ Λ_C) + (Λ_B ⊗ 1₅₀)}
           = (1/8){((1₂₄ + Λ_B) ⊗ Λ_C) + (Λ_B ⊗ 1₅₀)} ,    (3.10)

where Λ_{W₁}, Λ_{W₂}, Λ_B and Λ_C are the vectors of eigenvalues of W₁, W₂, B and C, respectively, and 1₂₄ and 1₅₀ are the vectors of ones of length 24 and 50, respectively.
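The Lancaster result underlying (3.9) and (3.10) can be checked numerically for the polynomial φ(x,y) = x + y, the case needed for W₁; the matrices below are arbitrary symmetric examples, not the thesis matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)); A = A + A.T   # symmetric, so eigenvalues are real
B = rng.standard_normal((3, 3)); B = B + B.T

# phi(x, y) = x + y, so phi(A, B) = A (x) I + I (x) B  (a Kronecker sum)
phi_AB = np.kron(A, np.eye(3)) + np.kron(np.eye(4), B)

lam_A = np.linalg.eigvalsh(A)
mu_B = np.linalg.eigvalsh(B)
# the mn eigenvalues of phi(A, B) are phi(lambda_r, mu_s) = lambda_r + mu_s
predicted = np.sort([l + m for l in lam_A for m in mu_B])
computed = np.sort(np.linalg.eigvalsh(phi_AB))
print(np.allclose(predicted, computed))   # True
```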
However, it should be noted that it was possible to obtain a closed form for the evaluation of the eigenvalues of W₂ only because of the linear relationship between C₁ and C, i.e., C₁ = I₅₀ + C. Now, if instead C₁ = aC, where a is a constant, it would turn out that

    W₂ = (1/8){(I ⊗ C) + (B ⊗ aC)}
       = (1/8){(I + aB) ⊗ C} ,    (3.11)

and in this case Λ_{W₂} = (1/8){(1₂₄ + aΛ_B) ⊗ Λ_C}. Similar results could be obtained for W₁. Further, we should note that the W_i (i = 1, 2) are symmetric and, as a consequence, Σ_j w_{ij} ≠ 1 for the plots that are either on the sides or at the corners. This might lead to values of ρ̂ > 1, but the estimates will not be affected, since the ML estimators were derived without any restriction on the form of W.
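Since B and C are tridiagonal 0-1 matrices, their eigenvalues are known in closed form (2cos(kπ/(p+1)), k = 1, ..., p, and likewise for C), so (3.9) yields all pq eigenvalues of W₁ without ever forming the matrix. A sketch, verified here against a direct eigendecomposition on a small lattice:

```python
import numpy as np

def eig_W1(p, q):
    """Eigenvalues of W1 = (1/4)(I_p (x) C_q + B_p (x) I_q), using the
    closed-form eigenvalues of the 0-1 tridiagonal matrices B and C."""
    lam_B = 2 * np.cos(np.arange(1, p + 1) * np.pi / (p + 1))
    lam_C = 2 * np.cos(np.arange(1, q + 1) * np.pi / (q + 1))
    # (3.9): every eigenvalue is (lam_B[r] + lam_C[s]) / 4
    return 0.25 * (lam_B[:, None] + lam_C[None, :]).ravel()

# check against a direct eigendecomposition on a small lattice
p, q = 4, 5
C = np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
B = np.diag(np.ones(p - 1), 1) + np.diag(np.ones(p - 1), -1)
W1 = 0.25 * (np.kron(np.eye(p), C) + np.kron(B, np.eye(q)))
assert np.allclose(np.sort(eig_W1(p, q)), np.sort(np.linalg.eigvalsh(W1)))
```

For the thesis lattice, `eig_W1(24, 50)` returns the 1200 eigenvalues at negligible cost.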
3.2. The Computation of the Variance-Covariance Matrix

The variance-covariance matrices for (ω,ρ) and (ω,ρ,μ), as derived by Ord, were given by (2.42) and (2.43). Also, we have seen how the eigenvalues λ₁, ..., λ_n of W may be obtained. Thus it can be shown that the eigenvalues of B = W(I − ρW)⁻¹ are

    λ_i/(1 − ρλ_i) ,  i = 1, 2, ..., n ,

while the eigenvalues of B′B are

    λ_i²/(1 − ρλ_i)² ,  i = 1, 2, ..., n .

Therefore,

    E(ε′YL) = ω tr(B) = ω Σ_{i=1}^{n} λ_i/(1 − ρλ_i) = ωα₁ (say)

and

    E{(YL)′(YL)} = ω tr(B′B) = ω Σ_{i=1}^{n} λ_i²/(1 − ρλ_i)² = ωα₂ (say) .

Also, since E(YL) = μ(B₁, B₂, ..., B_n)′ when the mean μ is present, it follows that

    E(YL)′(YL) = ω tr(B′B) + {E(YL)}′{E(YL)} = ωα₂ + μ² Σ_{i=1}^{n} B_i² .

Hence, we can rewrite the variance-covariance matrices in the following computational form:

    V(ω,ρ) = ω² [ n/2    ωα₁          ]⁻¹
                [ ωα₁    ω²α₂ + αω²   ]

and

    V(ω,ρ,μ) = ω² [ n/2    ωα₁                          0      ]⁻¹
                  [ ωα₁    ω(ωα₂ + μ²Σ_i B_i²) + αω²    ωμα₃   ]
                  [ 0      ωμα₃                         nω     ]

where α₃ = the sum of the elements of B.
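The quantities α₁ = tr(B) and α₂ = tr(B′B) entering this computational form follow directly from the eigenvalues of W, since B = W(I − ρW)⁻¹. A sketch (the W and ρ below are hypothetical, and W is taken symmetric as in this chapter):

```python
import numpy as np

def alpha_terms(lam, rho):
    """alpha1 = tr(B) and alpha2 = tr(B'B) for B = W(I - rho W)^{-1},
    expressed through the eigenvalues lambda_i of a symmetric W."""
    a1 = np.sum(lam / (1 - rho * lam))
    a2 = np.sum(lam**2 / (1 - rho * lam)**2)
    return a1, a2

# verify against the matrix traces on a small symmetric W
rng = np.random.default_rng(2)
W = rng.standard_normal((6, 6)); W = 0.1 * (W + W.T)
rho = 0.3
B = W @ np.linalg.inv(np.eye(6) - rho * W)
a1, a2 = alpha_terms(np.linalg.eigvalsh(W), rho)
assert np.isclose(a1, np.trace(B))
assert np.isclose(a2, np.trace(B.T @ B))
```

Note that the identity for α₂ relies on W (and hence B) being symmetric, which holds for the W₁ and W₂ of Section 3.1.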
4.
UNIFORMITY TRIALS WITH SORGHUM, ONIONS AND WHEAT
4.1.
Materials and Methods
The data used in this study came from three uniformity trials that
were carried out by the author when he was employed by the Agricultural
Research Corporation in the Sudan.
The trials were conducted on three
different crops, namely, sorghum, wheat and onions.
4.1.1.
Sorghum Uniformity Trial
In this trial sorghum was grown in an area of 2.6 acres, approximately, at the Gezira Research Farm in the Sudan, in the season 1971-72.
In performing the agricultural operations, the Gezira Station Field
Practice manual was followed.
At harvest the guards were removed leaving a net area of 96 meters
long by 60 meters wide.
This area was divided into basic units, each
measuring 4 meters long x 2 ridges (1.2 meters), thus in all we had
1200 basic units.
Each unit was harvested separately, placed into
large sacks and tagged.
Then the heads were threshed and the weight of
grain yield for each basic unit was recorded.
4.1.2.
Wheat Uniformity Trial
This uniformity trial was also conducted at the Gezira Research Farm in the Sudan in the season 1971-72, in an area of 2.6 acres. At harvest the guards were removed, leaving a net area of 96 m long x 50 m wide. These were divided into basic units of 4 m long x 1 m wide each. Again, as in the case of the sorghum trial, each basic unit was harvested separately and its grain yield recorded separately.
4.1.3. Onion Uniformity Trial

This uniformity trial was conducted by the late Dr. Hilo at the Girba Research Sub-Station in the Sudan in the season 1972-73. The trial occupied an area of approximately 2.6 acres. However, the harvest was carried out by the author. After removing the guard area, there was a net area of 96 m long x 60 m wide. This was divided into basic units of area 4 m long x 1.2 m (2 ridges) wide. Also, as in the case of the previous trials, the weight of bulbs for each basic unit was recorded separately.
Thus, for each one of the trials we had 1200 basic units, and the fields formed rectangular lattices of 24 rows and 50 columns each.

4.2. Results

In Chapter 2 we discussed the various statistical analysis techniques that could be employed in order to find a proper model that would fit the data. If this proper model is found, then we can go on to fit the proper correlation functions and correlograms generated by these models, in the manner that has been explained earlier. Therefore, our strategy in presenting the results will be to:

(a) find the best fitting auto-normal model,
(b) fit the correlation function and correlogram according to the above model,
(c) compare the fitted autocorrelations with the observed ones.

So far nothing has been reported in the literature as to what would be the effect of changing plot size and shape on the observed and fitted autocorrelations. To investigate this matter, we will repeat steps (a), (b) and (c) above for various plot sizes and shapes.
The plot shapes and sizes that will be examined are as follows:

a) 1 x 1 = 1 basic unit
b) 2 x 1 = 2 basic units
c) 3 x 1 = 3 basic units
d) 4 x 1 = 4 basic units
e) 1 x 2 = 2 basic units
f) 1 x 4 = 4 basic units

4.2.1. Auto-normal Analysis.

We start our analysis by fitting the third order auto-normal model under Fig. (4.1) codings. Under these codings we will obtain the conditional maximum likelihood estimates of the parameters by considering the joint distribution of the X site variables given the · site values. Then, by performing shifts of the entire framework over the lattice, nine sets of estimates will be available and these will be combined appropriately.
4.2.1.1. Auto-normal analysis of sorghum plots data.

The analysis for the sorghum plots data for plots of size 1 x 1 is shown in Appendix Tables (A.1) through (A.9), these being the first analysis through the ninth analysis obtained by shifting the coding every time. The analysis is aimed at testing the effect of the first order parameters β₁, β₂, the significance of the parameters γ₁, γ₂, δ₁, δ₂, due to the second order model, as well as the parameters θ₁, θ₂, φ₁, φ₂, χ₁ and χ₂, due to the third order model.

In Table (4.1) we present an approximate summary of the nine separate analyses. This is obtained by taking a simple average for the effect mean squares and degrees of freedom, and a weighted average for the error mean square (weighting by degrees of freedom).
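The summary rule just described can be sketched as follows; the numbers in the example are hypothetical, not those of any of the trials.

```python
def summarize(effect_ms, error_ms, error_df):
    """Combine several coding analyses: a simple average for the effect
    mean squares, and an average weighted by degrees of freedom for the
    error mean square."""
    avg_effect = sum(effect_ms) / len(effect_ms)
    avg_error = sum(m * d for m, d in zip(error_ms, error_df)) / sum(error_df)
    return avg_effect, avg_error

# e.g. three hypothetical analyses of the same effect
eff, err = summarize([5.2, 4.8, 5.0], [0.08, 0.10, 0.06], [89, 89, 90])
```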
[Fig. (4.1). Coding Pattern for a Third-Order Scheme.]
Table (4.1). Auto-normal analysis of sorghum plots data (plots of size 1 x 1): summary analysis of variance under Fig. (4.1) codings.

    Effect                          Sum of squares   d.f.   M.S.   F-ratio
    β₁ β₂                               10.03          2    5.01    60.93*
    γ₁ γ₂                                 .25          2     .13     1.52
    δ₁ δ₂ θ₁ θ₂ φ₁ φ₂ χ₁ χ₂              1.18          8     .15     1.79
    Residual                             7.34         89     .08
    Total                               18.80        101
This table clearly indicates that the first-order model is highly significant, while the second and third order models are not. This result is also confirmed by the Rₚ² values displayed in Table (4.2), where R₁², R₂² and R₃² are the coefficients of multiple determination for the first, second and third order models, respectively. There is no appreciable increase in the value of Rₚ² when a third order model is used instead of a first order one. However, we notice that though the first order model terms are highly significant, the Rₚ² values are not very high. They range from .37 to .69 for R₁², with an average value of .52, while the R₃² values range from .41 to .76 with an average value of .62. Thus the first order model is the only highly significant model, and there is no appreciable increase in the value of Rₚ² due to the second or third order models.
To examine the effect of plot size and shape on the choice of an appropriate model, we have adopted the codings of Fig. (2.2). Under this coding we have a larger number of degrees of freedom for error compared with those obtainable under Fig. (4.1) codings, and there are four separate analyses. The results are shown in Appendix Tables (A.10) through (A.14) for plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4 units, respectively.

The summary Table (4.3), which has been calculated in the same way as Table (4.1), shows that, as in the case of plots of size 1 x 1, all the first order effects are highly significant. The second order effects, however, are only significant for plots of size 2 x 1 and 1 x 2.
Table (4.2). Auto-normal analysis of sorghum plots data (plots of size 1 x 1): Rₚ² values for the first through ninth analysis of variance.

    Analysis no.    R₁²    R₂²    R₃²
         1          .47    .48    .52
         2          .37    .37    .41
         3          .50    .53    .62
         4          .49    .50    .56
         5          .69    .71    .76
         6          .66    .67    .74
         7          .56    .57    .62
         8          .47    .49    .58
         9          .51    .54    .59
    Average         .52    .54    .62
Table (4.3). Auto-normal analysis of sorghum plots data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): summary analysis of variance under Fig. (2.2) codings.

                Plots of      Plots of      Plots of      Plots of      Plots of
                2 x 1 units   3 x 1 units   4 x 1 units   1 x 2 units   1 x 4 units
    Effect      d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.
    β₁ β₂         2  13.63*     2  25.96*     2  25.30*     2  18.56*     2  31.33*
    γ₁ γ₂         2   1.09      2    .71      2    .78      2   3.53      2   1.60
    Error       115    .33     67    .55     43    .79    116    .22     50    .87

    *Significant at P < .05.
In Table (4.4) we show the Rₚ² values for these plot sizes and shapes. Again, as in the case of plots of one basic unit, there is no appreciable increase in the value of Rₚ² when fitting a second order model instead of a first order one. However, neither R₁² nor R₂² is high enough, for any one of these plot sizes and shapes, to enable us to assert the goodness-of-fit of either model. The plots of size 2 x 1 show a different pattern of behavior, and we will discuss possible reasons for this behavior in the discussion section.
4.2.1.2. Auto-normal analysis of onion plots data.

Again, as in the case of the sorghum data, we started by performing an auto-normal analysis of the onion plots data. Appendix Tables (A.15) through (A.23) display the results for plots of size 1 x 1 basic units.

The summary in Table (4.5) shows that the terms of the first order model are highly significant, while none of the second order and third order terms have reached the significance level. The Rₚ² values in Table (4.6) show, on the other hand, no appreciable increase when a second order model is fitted instead of a first order one; at the same time, the addition of the third order terms produces no appreciable improvement. Thus, as in the case of sorghum, we notice that though the first order model terms are highly significant, the Rₚ² values are not very high. Table (4.6) indicates that the R₁² values range from as low as .27 to .56, with an average value of .42, while those of R₃² range from .38 to .68 with an average value of .50.

In order to analyze the data for the combined plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4 basic units, we have again adopted the
Table (4.4). Auto-normal analysis of sorghum plots data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): Rₚ² values for the first through fourth analyses.

    Plot size   Analysis No.    R₁²    R₂²
    2 x 1            1          .39    .45
                     2          .48    .51
                     3          .37    .38
                     4          .39    .41
                  Average       .41    .44
    3 x 1            1          .53    .54
                     2          .63    .63
                     3          .60    .63
                     4          .53    .56
                  Average       .57    .59
    4 x 1            1          .59    .61
                     2          .68    .72
                     3          .52    .53
                     4          .57    .58
                  Average       .59    .61
    1 x 2            1          .49    .60
                     2          .52    .61
                     3          .56    .64
                     4          .54    .66
                  Average       .53    .63
    1 x 4            1          .57    .58
                     2          .63    .68
                     3          .56    .59
                     4          .50    .53
                  Average       .57    .60
Table (4.5). Auto-normal analysis of onion plots data (plots of size 1 x 1): summary analysis of variance under Fig. (4.1) codings.

    Effect                          Sum of squares   d.f.    M.S.    F-ratio
    β₁ β₂                               331.94         2    165.97    37.08*
    γ₁ γ₂                                 6.71         2      3.36     < 1
    δ₁ δ₂ θ₁ θ₂ φ₁ φ₂ χ₁ χ₂              62.27         8      7.78     1.74
    Residual                            398.33        89      4.48
    Total                               799.25       101
Table (4.6). Auto-normal analysis of onion plots data (plots of size 1 x 1): Rₚ² values for the first through ninth analysis.

    Analysis No.    R₁²    R₂²    R₃²
         1          .41    .41    .50
         2          .38    .38    .42
         3          .50    .52    .59
         4          .46    .46    .52
         5          .56    .57    .68
         6          .53    .55    .58
         7          .32    .32    .46
         8          .27    .29    .38
         9          .32    .33    .44
    Average         .42    .42    .50
codings of Fig. (2.2). The results for these analyses are shown in Appendix Tables (A.24) through (A.28), with a summary analysis in Table (4.7). This summary analysis indicates that the only significant terms are those of the first order model for all plot sizes, except for plots of size 1 x 2 where, here again as in the case of the sorghum data, the first as well as the second order terms are significant. These results are supported by the Rₚ² values shown in Table (4.8). There is hardly any increase in the Rₚ² value due to the addition of the second order model terms, except for plots of size 1 x 2, where the R₂² value is as much as twice or more the value of R₁² in certain analyses, such as the fourth. But again, even for these plots the fit is not that impressive.
4.2.1.3. Auto-normal analysis of wheat plots data.

The results for these analyses under Fig. (4.1) codings are tabulated in Appendix Tables (A.29) through (A.37), for plots of size 1 x 1 basic units. The approximate summary analysis in Table (4.9) shows that only the first order terms are significant. On examining the Rₚ² values for these nine analyses, as shown in Table (4.10), we notice that these values are very low. They range, for R₁², from an extremely low value of .02, for the ninth analysis, to .43, for the first analysis, with an approximate average value of .22. Thus, it seems that a first order model will not fit the data for plots of size 1 x 1. However, even if we were to adopt a third order model, the fit would not be that good, since the values for R₃² range from .11, for the third analysis, to .49, for the first analysis, with an average value of .34.
Table (4.7). Auto-normal analysis of onion plot data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): summary analysis of variance under Fig. (2.2) codings.

                Plots of      Plots of      Plots of      Plots of      Plots of
                2 x 1 units   3 x 1 units   4 x 1 units   1 x 2 units   1 x 4 units
    Effect      d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.
    β₁ β₂         2  430.36*    2  698.17*    2  456.29*    2  451.48*    2  969.12*
    γ₁ γ₂         2   15.95     2   34.35     2   29.52     2  348.87     2   36.93
    Error       115   14.80    67   27.36    43   32.36   116   12.03    50   37.76

    *Significant at P < .05.
Table (4.8). Auto-normal analysis of onion data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): Rₚ² values for the first through fourth analysis.

    Plot size   Analysis No.    R₁²    R₂²
    2 x 1            1          .51    .53
                     2          .54    .57
                     3          .43    .43
                     4          .53    .55
                  Average       .50    .52
    3 x 1            1          .48    .49
                     2          .30    .32
                     3          .52    .55
                     4          .39    .41
                  Average       .42    .44
    4 x 1            1          .37    .42
                     2          .36    .41
                     3          .37    .37
                     4          .44    .44
                  Average       .39    .41
    1 x 2            1          .33    .55
                     2          .26    .52
                     3          .39    .58
                     4          .22    .48
                  Average       .30    .53
    1 x 4            1          .49    .51
                     2          .55    .58
                     3          .43    .43
                     4          .53    .55
                  Average       .50    .52
Table (4.9). Auto-normal analysis of wheat plots data (plots of size 1 x 1): summary analysis of variance under Fig. (4.1) codings.

    Effect                          Sum of squares   d.f.   M.S.   F-ratio
    β₁ β₂                                1.54          2     .77    15.01*
    γ₁ γ₂                                 .11          2     .05     1.07
    δ₁ δ₂ θ₁ θ₂ φ₁ φ₂ χ₁ χ₂               .71          8     .09     1.73
    Error                                4.57         89     .05
    Total                                6.92        101
Table (4.10). Auto-normal analysis of wheat data (plots of size 1 x 1): Rₚ² values for the first through ninth analysis of variance.

    Analysis No.    R₁²    R₂²    R₃²
         1          .43    .43    .49
         2          .13    .16    .30
         3          .06    .06    .11
         4          .21    .24    .31
         5          .17    .20    .26
         6          .39    .41    .48
         7          .34    .36    .47
         8          .13    .15    .27
         9          .02    .02    .25
    Average         .22    .24    .34
As in the case of sorghum and onions, the analyses for plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4 were carried out under Fig. (2.2) codings. The results are shown in Appendix Tables (A.38) through (A.42). The approximate summary analysis of variance (Table (4.11)) shows that once again, as in the case of the 1 x 1 basic unit results, only the first order terms are significant, except for plots of size 1 x 2 where the first as well as the second order terms are significant.

On examining the Rₚ² values in Table (4.12), we notice that for plots of size 2 x 1, 3 x 1 and 4 x 1, as in the case of plots of size 1 x 1 units, the R₁² values are extremely low and the R₂² values are not much better. For plots of size 1 x 2 the R₂² values are twice the values of R₁² in three out of the four analyses, but once again even the R₂² values are not high. In the case of plots of size 1 x 4, the R₂² values are of the same magnitude as those for plots of size 1 x 2, except that the R₁² values for the former are slightly higher.

Thus, it appears that for all plot sizes considered the first order model does not seem to provide a reasonable fit for the data, and that the fit cannot be improved by considering a second or a third order model, except for plots of size 1 x 2 and 1 x 4, where the second order model provides an improved fit over the first order model. Further, the third order model might provide a still better fit for these latter plot sizes, but it was not examined since the degrees of freedom would be very low, especially for plots of size 1 x 4.
Table (4.11). Auto-normal analysis of wheat plots data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): summary analysis of variance under Fig. (2.2) codings.

                Plots of      Plots of      Plots of      Plots of      Plots of
                2 x 1 units   3 x 1 units   4 x 1 units   1 x 2 units   1 x 4 units
    Effect      d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.    d.f.  M.S.
    β₁ β₂         2    .80*     2    .89      2    .15      2   1.83      2   3.62
    γ₁ γ₂         2    .19      2    .20      2    .18      2   1.52      2    .49
    Error       115    .18     67    .39     43    .51    116    .12     50    .30

    *Significant at P < .05.
Table (4.12). Auto-normal analysis of wheat data (plots of size 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4): Rₚ² values for the first through fourth analysis.

    Plot size   Analysis No.    R₁²    R₂²
    2 x 1            1          .06    .06
                     2          .08    .11
                     3          .07    .09
                     4          .08    .09
                  Average       .07    .09
    3 x 1            1          .08    .09
                     2          .03    .06
                     3          .09    .10
                     4          .05    .06
                  Average       .06    .08
    4 x 1            1          .14    .14
                     2          .02    .02
                     3          .23    .25
                     4          .08    .13
                  Average       .12    .14
    1 x 2            1          .24    .35
                     2          .14    .30
                     3          .18    .35
                     4          .14    .28
                  Average       .18    .32
    1 x 4            1          .18    .23
                     2          .26    .32
                     3          .42    .45
                     4          .44    .46
                  Average       .33    .37
4.2.2. Fitted correlograms and correlation functions.

In the previous section we examined the results of the auto-normal analysis. It was clear that the first order model was highly significant for all three trials, irrespective of the plot size and shape considered. However, judging by the Rₚ² values, the first order model did not seem to fit the data well. Now, in order to fit the correlograms and correlation functions, we will assume a first order model.

We have already seen that, under the assumption of a first order auto-normal model, each Y_{i,j}, given all other site values, is normally distributed with mean given by (2.24) and variance σ². So, in order to fit the proper correlograms, we will proceed to find a unilateral representation of this assumed first order auto-normal model, as was first presented by Whittle (1954), later developed by Besag (1972), as shown in Section 2.3.3, and further developed by Besag (1974).

Besag (1974) took, as an approximation to the scheme, the stationary autoregression

    Y_{i,j} = b₁Y_{i−1,j} + b₂Y_{i,j−1} + Z_{i,j} ,    (4.1)

where {Z_{i,j}; i,j = 0, ±1, ±2, ...} is a set of independent Gaussian variates, each with zero mean and equal variance. Then he proceeded to show the estimates of β₁ and β₂ in (2.24) to be

    β̂₁ = b̂₁/(1 + b̂₁² + b̂₂²)  and  β̂₂ = b̂₂/(1 + b̂₁² + b̂₂²) ,

where b̂₁ and b̂₂ are estimates of b₁ and b₂ in (4.1), respectively. Therefore, according to (4.1) and what Besag (1972) has shown, the autocorrelation ρ(s,t) for the first order scheme might be estimated by

    ρ̂(s,t) = λ̂₁^s λ̂₂^t ,  (s ≥ 0, t ≥ 0) ,    (4.2)

where

    λ̂₁ = b̂₁/(1 − b̂₂²)  and  λ̂₂ = b̂₂/(1 − b̂₁²) .

Thus, one way to fit the correlograms would be to use (4.2). Another possibility would be to fit the correlogram that was suggested by Quenouille (1949), given by equation (2.5). For comparison purposes, we will fit both autocorrelations. Also, for abbreviation, we shall refer to (4.2) and (2.5) as Besag's and Quenouille's autocorrelations, respectively.

Tables (4.13) and (4.14) show the functional forms of the fitted autocorrelations according to Besag's and Quenouille's functions, for plots of size 1 x 1, 2 x 1, 3 x 1, 4 x 1, 1 x 2 and 1 x 4. The fitted correlograms were computed for plots of size 3 x 1 for each trial. The reasons for choosing plots of size 3 x 1 to fit the autocorrelations, instead of the natural choice of plots of size 1 x 1, will be explained in the discussion section.
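Under the relation between (b̂₁, b̂₂) and (λ̂₁, λ̂₂) as reconstructed above (an assumption, but one that reproduces the printed Table (4.13) entries to rounding), the fitted Besag correlogram can be computed as:

```python
import numpy as np

def besag_correlogram(b1, b2, s_max, t_max):
    """Fitted autocorrelation rho(s,t) = lam1^s * lam2^t, as in (4.2),
    from the unilateral coefficients b1, b2; the mapping to lam1, lam2
    is the reconstruction used in the text, not a quoted formula."""
    lam1 = b1 / (1 - b2**2)
    lam2 = b2 / (1 - b1**2)
    s = np.arange(s_max + 1)[:, None]
    t = np.arange(t_max + 1)[None, :]
    return lam1**s * lam2**t

# sorghum 3 x 1 plots: b1 = .399, b2 = .185 give roughly (.413)^s (.220)^t,
# agreeing with the tabulated (.416)^s (.222)^t up to rounding of the estimates
rho = besag_correlogram(0.399, 0.185, 4, 24)
```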
Table (4.13). The parameter estimates for the unilateral representation of the first-order models and the corresponding Besag's functions for the three crops, taking various plot sizes and shapes.

    Crop      Plot size     b̂₁      b̂₂     ρ̂(s,t) = λ̂₁^s λ̂₂^t  (s ≥ 0, t ≥ 0)
              and shape
    Sorghum    1 x 1       .279     .323    (.316)^s(.355)^t
               2 x 1       .452     .117    (.460)^s(.148)^t
               3 x 1       .399     .185    (.416)^s(.222)^t
               4 x 1       .380     .212    (.402)^s(.250)^t
               1 x 2       .132     .457    (.168)^s(.467)^t
               1 x 4       .224     .372    (.262)^s(.467)^t
    Onions     1 x 1       .204     .401    (.303)^s(.418)^t
               2 x 1       .166     .424    (.203)^s(.439)^t
               3 x 1       .112     .479    (.146)^s(.487)^t
               4 x 1       .281     .334    (.320)^s(.367)^t
               1 x 2       .255     .356    (.296)^s(.385)^t
               1 x 4       .219     .392    (.261)^s(.416)^t
    Wheat      1 x 1       .479     .077    (.481)^s(.100)^t
               2 x 1       .426     .162    (.440)^s(.199)^t
               3 x 1       .476     .078    (.479)^s(.101)^t
               4 x 1       .428     .170    (.444)^s(.211)^t
               1 x 2       .389     .276    (.414)^s(.211)^t
               1 x 4       .279     .348    (.282)^s(.385)^t
Table (4.14). Parameter estimates and fitted autocorrelations according to Quenouille's function (2.5).

    Trial     Plot size    ρ̂₀₁ = r₀₁   ρ̂₁₀ = r₁₀   ρ̂(s,t) = ρ̂₁₀^|s| ρ̂₀₁^|t| , ∀s and ∀t
              and shape
    Sorghum    1 x 1         .534         .459       (.459)^|s|(.534)^|t|
               2 x 1         .531         .580       (.580)^|s|(.531)^|t|
               3 x 1         .590         .549       (.549)^|s|(.590)^|t|
               4 x 1         .591         .632       (.632)^|s|(.591)^|t|
               1 x 2         .672         .463       (.463)^|s|(.672)^|t|
               1 x 4         .661         .481       (.481)^|s|(.661)^|t|
    Onions     1 x 1         .468         .373       (.373)^|s|(.468)^|t|
               2 x 1         .483         .301       (.301)^|s|(.483)^|t|
               3 x 1         .554         .187       (.187)^|s|(.554)^|t|
               4 x 1         .508         .463       (.463)^|s|(.508)^|t|
               1 x 2         .470         .430       (.430)^|s|(.470)^|t|
               1 x 4         .572         .413       (.413)^|s|(.572)^|t|
    Wheat      1 x 1         .037         .087       (.087)^|s|(.037)^|t|
               2 x 1         .037         .142       (.142)^|s|(.037)^|t|
               3 x 1         .034         .072       (.072)^|s|(.034)^|t|
               4 x 1         .069         .125       (.125)^|s|(.069)^|t|
               1 x 2         .120         .086       (.086)^|s|(.120)^|t|
               1 x 4         .241         .095       (.095)^|s|(.241)^|t|
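Quenouille's separable form in Table (4.14) needs only the two observed lag-one correlations in the row and column directions; a sketch:

```python
import numpy as np

def quenouille_correlogram(r10, r01, s_max, t_max):
    """Fitted rho(s,t) = r10^|s| * r01^|t| for all s and t, from the
    observed row- and column-direction lag-one correlations."""
    s = np.arange(-s_max, s_max + 1)[:, None]
    t = np.arange(-t_max, t_max + 1)[None, :]
    return r10 ** np.abs(s) * r01 ** np.abs(t)

# sorghum 1 x 1 plots: r10 = .459, r01 = .534 (Table 4.14)
rho = quenouille_correlogram(0.459, 0.534, 2, 2)
```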
The actual correlograms that were observed for plots of size 1 x 1
and 3 x 1 and those which were computed according to Quenouille's and
Besag's functions above, have been tabulated in Appendix Tables (A.43a)
through (A.43d) for sorghum data, Appendix Tables (A.44a)througbL (A.44d)_
for onion data, and :Appendix Tables (A.45a) through (A.45d) show for
wheat data.
Fig. (4.2), Fig. (4.3) and Fig. (4.4) show the graphs of the
observed autocorrelations for plots of size 1 x 1 and 3 x 1 units,
together with Quenouille's and Besag's autocorrelations.
were plotted for the first column (i.e., p(s,t), s
the above tables.
= 0,
These graphs
= 0, ... ,24)
t
of
On examining these figures, we noticed that the
fitted values were very low for both functions
when compared with the
observed ones.
Also, these fitted autocorrelations decayed so fast that they
reached zero values even for small values of the lag t.
Autocorrelations
fitted according to Besag's function had lower values than the observed
ones and also decayed at a faster rate.
function was not appreciably
The rate of decay for Quenouille's
s~Qwe~ t~n
Besag's.
So, as
aposs~ble
remedy for this pattern of behavior, we though of modifying these
functions in such a way that the rate of decay would be slower.
Consequently, we considered as a modification to Quenouille's function (2.5), the function

     ρ(s,t) = ρ10^(k1|s|) ρ01^(k2|t|) ,   for all s and t,            (4.3)
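Assuming the modification in (4.3) raises the lag-1 correlations to the powers k1|s| and k2|t| (a reconstruction of the damaged equation, consistent with the requirement that k1, k2 < 1 slow the decay), the effect is easy to see numerically:

```python
# Sketch of the modified Quenouille function (4.3), under the assumed
# form rho(s, t) = r10**(k1*|s|) * r01**(k2*|t|).  With k1, k2 < 1 the
# geometric decay of the correlogram is slowed.
def modified_quenouille(r10, r01, s, t, k1, k2):
    return r10 ** (k1 * abs(s)) * r01 ** (k2 * abs(t))

# With k1 = k2 = 1/3, the value at lag t = 6 equals the original
# function's value at lag t = 2 (sorghum 1 x 1 lag-1 correlations):
val = modified_quenouille(0.459, 0.534, 0, 6, 1/3, 1/3)
print(abs(val - 0.534 ** 2) < 1e-12)  # → True
```

That is, the modified correlogram reaches a given level three times as slowly as the original when k1 = k2 = 1/3.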
Fig. (4.2).  The observed autocorrelation for plots of size 1 x 1 and 3 x 1 units and the fitted autocorrelation according to Quenouille's and Besag's functions, for sorghum plots data. [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 1 x 1 and 3 x 1 autocorrelations and the fitted Quenouille and Besag autocorrelations.]
Fig. (4.3).  The observed autocorrelation for plots of size 1 x 1 and 3 x 1 units and the fitted autocorrelation according to Quenouille's and Besag's functions, for onion plots data. [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 1 x 1 and 3 x 1 autocorrelations and the fitted Quenouille and Besag autocorrelations.]
Fig. (4.4).  The observed autocorrelation for plots of size 1 x 1 and 3 x 1 units and the fitted autocorrelation according to Quenouille's and Besag's functions, for wheat plots data. [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 1 x 1 and 3 x 1 autocorrelations and the fitted Quenouille and Besag autocorrelations.]
while the modified Besag's function was of the form

     ρ(s,t) = λ1^(k1 s) λ2^(k2 t) ,   (s ≥ 0, t ≥ 0),                 (4.4)

where all the terms are as explained earlier, and k1, k2 were constants each less than unity.

We have examined a large number of possibilities for the values of k1 and k2. These chosen values led in certain cases to very large fitted correlations and in some others to correlations which were comparable with the observed. As an illustration, Appendix Tables (A.46a), (A.47a) and (A.48a) show the fitted values according to the modified Quenouille's function when k1 = k2 = 1/3 for sorghum and onions, and k1 = 1/3, k2 = 1/4 for wheat data. The corresponding values for the modified Besag's function are shown in Appendix Tables (A.46b), (A.47b) and (A.48b), respectively.
In Fig. (4.5), Fig. (4.6) and Fig. (4.7) we have plotted the graphs of the observed autocorrelations for plots of size 3 x 1 units and those of the modified Quenouille's and Besag's functions. Again, these were plotted for the first column of Appendix Tables (A.43b), (A.44b), (A.45b), (A.46a), (A.46b), (A.47a), (A.47b), (A.48a) and (A.48b).

Having examined the correlograms, next we will examine the autocorrelation functions. We have seen in Chapter 2 that, in order to fit an autocorrelation function to the Mercer and Hall data, Whittle (1954) considered the symmetric autoregression model (2.7), where the correlation was expressed in terms of the modified Bessel function of the second kind, order one, as given by (2.8). In order to fit this autocorrelation, Whittle calculated the value of k and the constant
Fig. (4.5).  The observed and fitted autocorrelations according to Quenouille's and Besag's modified functions, for sorghum data (plots of size 3 x 1 units). [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 3 x 1 autocorrelation and the fitted modified Quenouille and modified Besag autocorrelations.]
Fig. (4.6).  The observed and fitted autocorrelations according to Quenouille's and Besag's modified functions, for onion data (plots of size 3 x 1 units). [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 3 x 1 autocorrelation and the fitted modified Quenouille and modified Besag autocorrelations.]
Fig. (4.7).  The observed and fitted autocorrelations according to Quenouille's and Besag's modified functions, for wheat data (plots of size 3 x 1 units). [Figure: autocorrelation plotted against lag distance (t), t = 0 to 24; curves shown for the observed 3 x 1 autocorrelation and the fitted modified Quenouille and modified Besag autocorrelations.]
that appears in the function, by equating the fitted function with the two extreme observed values.

We took another approach, however, in order to estimate k and the constant. We estimated α for the symmetric model (2.6) using Ord's (1975) method as outlined in Chapter 2 and the computational procedure that we developed in Chapter 3. Then we estimated k, since it is related to α. Thus k̂ as in (4.5) below was taken as an estimate of k:

     k̂ = {(1 − 4α̂)/α̂}^(1/2) .                                       (4.5)
As to the value of the constant in (2.7), we tried two values: first, by taking the constant equal to unity, and second, by equating the observed value of the correlation with the fitted one when s = 1.

In Table (4.15), we have tabulated the parameter estimates (α̂) for the symmetric autoregression model (2.6), for all three trials and various plot sizes and shapes. Using these estimates, the general form of the fitted autocorrelation functions was obtained for a general value of the constant (C). These are shown in Table (4.16). Fitted correlations calculated by taking the value of the constant to be unity, ρ1.1(s), and by choosing the constant so that the observed and fitted autocorrelations coincide when s = 1, ρ1.2(s), are shown for plots of size 1 x 1, 3 x 1 and 1 x 2 units for sorghum, wheat and onion data in Tables (4.17a), (4.17b) and (4.17c), respectively. The autocorrelations for plots of size 2 x 1, 4 x 1 and 1 x 4 are shown in Appendix Tables (A.49a), (A.49b) and (A.49c).
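Assuming the estimate in (4.5) has the form k̂ = {(1 − 4α̂)/α̂}^(1/2) (a reconstruction of the damaged equation; this form reproduces the coefficients of Table (4.16) from the α̂ values of Table (4.15)), the computation is a one-liner:

```python
import math

# Sketch of the estimate k-hat in (4.5), under the assumed form
# k = sqrt((1 - 4*alpha)/alpha) for the symmetric autoregression.
# The resulting k multiplies s inside the fitted correlogram
# C * k*s * K1(k*s) tabulated in Table (4.16).
def k_hat(alpha):
    return math.sqrt((1.0 - 4.0 * alpha) / alpha)

# alpha-hat values from Table (4.15), plots of size 1 x 1:
print(round(k_hat(0.179), 2))  # → 1.26 (Table 4.16 shows 1.264, from unrounded alpha-hat) -- sorghum
print(round(k_hat(0.011), 2))  # → 9.32 (Table 4.16 shows 9.322) -- wheat
```

Small discrepancies against the tabulated coefficients reflect the rounding of α̂ to three decimals in Table (4.15).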
Table (4.15).  Parameter estimates (α̂) for the symmetric autoregression model (2.7), using different plot shapes.

Plot size and shape                  Crop
(length x width)       Sorghum      Wheat      Onions
1 x 1                   .179         .011       .163
2 x 1                   .174         .006       .163
3 x 1                   .188         .003       .180
4 x 1                   .187         .011       .158
1 x 2                   .188         .033       .161
1 x 4                   .159         .072       .086
Table (4.16).  Fitted autocorrelation functions according to the symmetric model (2.7), for different plot sizes and shapes (s ≠ 0).

Plot size and shape                            Crop
(length x width)    Sorghum                Wheat                     Onion
1 x 1     C·1.264s·K1(1.264s)     C· 9.322s·K1( 9.322s)     C·1.464s·K1(1.464s)
2 x 1     C·1.316s·K1(1.316s)     C·12.490s·K1(12.490s)     C·1.468s·K1(1.468s)
3 x 1     C·1.152s·K1(1.152s)     C·20.986s·K1(20.986s)     C·1.246s·K1(1.246s)
4 x 1     C·1.176s·K1(1.176s)     C· 9.214s·K1( 9.214s)     C·1.522s·K1(1.522s)
1 x 2     C·1.154s·K1(1.154s)     C· 5.128s·K1( 5.128s)     C·1.488s·K1(1.488s)
1 x 4     C·1.514s·K1(1.514s)     C· 3.160s·K1( 3.160s)     C·2.774s·K1(2.774s)

*K1 is the modified Bessel function of the second kind, order one.
Table (4.17a).  Observed and fitted autocorrelations for sorghum data, plots of size 1 x 1, 3 x 1 and 1 x 2 units (s ≠ 0).

                s:     0     1     2     3     4     5     6     7
1 x 1
  Observed             1   .496  .401  .455  .378  .359  .442  .288
  Fitted  ρ1.1(s)†        .449  .181  .076  .017  .006  .002  .001
          ρ2.1(s)         .798  .479  .331  .250  .199  .166  .142
          ρ1.2(s)         .496  .200  .084  .019  .007  .002  .001
          ρ2.2(s)         .496  .298  .206  .156  .124  .103  .088
3 x 1
  Observed             1   .570  .533  .434  .470  .387  .318  .236
  Fitted  ρ1.1(s)         .542  .218  .081  .029  .010  .003  .001
          ρ2.1(s)         .781  .476  .330  .249  .199  .166  .143
          ρ1.2(s)         .570  .229  .085  .031  .010  .003  .001
          ρ2.2(s)         .570  .347  .241  .182  .145  .121  .104
1 x 2
  Observed             1   .568  .463  .510  .413  .384  .483  .257
  Fitted  ρ1.1(s)         .541  .217  .081  .029  .010  .003  .001
          ρ2.1(s)         .781  .476  .330  .240  .200  .167  .143
          ρ1.2(s)         .568  .228  .085  .030  .010  .003  .001
          ρ2.2(s)         .568  .346  .240  .174  .145  .121  .104

†Refer to pages ___ and ___ for the definition of these autocorrelations.
Table (4.17b).  Observed and fitted autocorrelations for wheat data, plots of size 1 x 1, 3 x 1 and 1 x 2 units (s ≠ 0).

                s:     0     1     2     3     4     5     6     7
1 x 1
  Observed             1   .062  .090  .067  .023  .020  .058  .053
  Fitted  ρ1.1(s)†         0     0     0     0     0     0     0
          ρ2.1(s)         .987  .500  .333  .250  .200  .167  .143
          ρ1.2(s)         .062   0     0     0     0     0     0
          ρ2.2(s)         .062  .032  .021  .016  .013  .010  .009
3 x 1
  Observed             1   .053  .075  .168  .053  .078  .123  .110
  Fitted  ρ1.1(s)          0     0     0     0     0     0     0
          ρ2.1(s)         .998  .500  .333  .250  .200  .167  .143
          ρ1.2(s)         .053   0     0     0     0     0     0
          ρ2.2(s)         .053  .027  .017  .013  .010  .009  .007
1 x 2
  Observed             1   .103  .088  .105  .064  .048  .111  .131
  Fitted  ρ1.1(s)         .019   0     0     0     0     0     0
          ρ2.1(s)         .959  .499  .332  .250  .200  .167  .143
          ρ1.2(s)         .103   0     0     0     0     0     0
          ρ2.2(s)         .103  .053  .036  .027  .021  .018  .015

†Refer to pages ___ and ___ for the definition of these autocorrelations.
Table (4.17c).  Observed and fitted autocorrelations for onion data, plots of size 1 x 1, 3 x 1 and 1 x 2 units (s ≠ 0).

                s:     0     1     2     3     4     5     6     7
1 x 1
  Observed             1   .421  .259  .199  .213  .248  .248  .198
  Fitted  ρ1.1(s)†        .429  .128  .035  .009  .002  .001   0
          ρ2.1(s)         .819  .484  .332  .250  .200  .167  .143
          ρ1.2(s)         .421  .126  .031  .009  .002  .001   0
          ρ2.2(s)         .421  .249  .170  .129  .103  .086  .074
3 x 1
  Observed             1   .371  .358  .232  .253  .240  .298  .118
  Fitted  ρ1.1(s)         .505  .186  .063  .020  .007  .002  .001
          ρ2.1(s)         .794  .479  .331  .250  .200  .167  .143
          ρ1.2(s)         .371  .137  .062  .015  .005  .001  .001
          ρ2.2(s)         .371  .223  .155  .117  .093  .078  .067
1 x 2
  Observed             1   .450  .282  .270  .208  .235  .239  .193
  Fitted  ρ1.1(s)         .421  .123  .033  .008  .002  .001   0
          ρ2.1(s)         .822  .484  .332  .250  .200  .167  .143
          ρ1.2(s)         .450  .131  .035  .009  .002  .001   0
          ρ2.2(s)         .450  .265  .182  .137  .109  .091  .078

†Refer to pages ___ and ___ for the definition of these autocorrelations.
79
Another approach to develop an autocorrelation function was that
of Whittle (1962).
He considered a process in which random variation
would diffuse in a deterministic fashion through physical space.
In
the case of agriculture, he viewed the process as the diffusion of
nutrient salts through the three dimensional medium that constituted
the soil, diffusion would have a uniforming effect, but cultivation,
weather and application of fertilizers would have the effect of introducing new variation and eventually a balance would be reached between
these uniforming and diversifying effects, and the form of this
balance should be evident in the pattern of correlation of growth on
the surface.
Whittle derived the autocorrelation for this process as
     ρ(s) = const. · (1 − e^(−s/2k))/s .                              (4.6)
We have applied this function to our data, where k was estimated by k̂ given by (4.5). The reason for using the same estimate for fitting the autocorrelations (2.7) and (4.6) is that the functions were derived as solutions of basically similar processes, though under different conditions and assumptions.
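The diffusion correlation (4.6), in the reading adopted here, can be evaluated directly. The sketch below is illustrative only: the rate parameter k is an assumption (its exact relation to k̂ of (4.5) cannot be recovered from the damaged equation), and the constant is calibrated against the observed lag-1 correlation for sorghum plots of size 1 x 1 (.496, Table 4.17a); the resulting values come out close to the ρ2.2 row of that table.

```python
import math

# Whittle's diffusion correlation, read as
#   rho(s) = const * (1 - exp(-s/(2k))) / s.
def whittle_rho(s, k, const=1.0):
    if s == 0:
        return const / (2.0 * k)  # limit of (1 - e^{-s/2k})/s as s -> 0
    return const * (1.0 - math.exp(-s / (2.0 * k))) / s

k = 0.313                      # illustrative rate parameter (assumption)
c = 0.496 / whittle_rho(1, k)  # match the observed lag-1 value at s = 1
fitted = [round(c * whittle_rho(s, k), 3) for s in range(1, 5)]
print(fitted)                  # → [0.496, 0.298, 0.206, 0.155]
```

Note the characteristic 1/s tail: for large s the fitted values decay like const/s, which is much slower than the geometric decay of the separable functions.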
Table (4.18) shows these fitted autocorrelations in the general form for different plot sizes and shapes, for the three trials. For the actual computation of the fitted autocorrelations, the value of the constant c was calculated as in the previous case: namely, by taking c = 1 in one case, denoted ρ2.1(s), and by finding c such that the observed and the fitted autocorrelations coincide when s = 1, denoted ρ2.2(s). These functions are shown in Tables (4.17a), (4.17b) and (4.17c) for sorghum, wheat and onion, respectively, for plots of size 1 x 1, 3 x 1 and 1 x 2. The fitted functions for plots of size 2 x 1, 4 x 1 and 1 x 4 are shown in Appendix Tables (A.49a), (A.49b) and (A.49c) for sorghum, wheat and onions, respectively.
4.3.  Discussion

We begin our discussion by examining the autonormal analysis results. However, in order to judge the goodness-of-fit, we must determine the significance level to be used. Two methods have been suggested. The first, proposed by D. R. Cox in the discussion that followed Besag's (1974) paper, was to take the average of the exact significance levels, or P values. Besag thought that Cox's method would be too conservative, and he used the simple technique of multiplying the minimum P value by the number of tests.
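The two summaries can be stated in a few lines; the P values below are hypothetical, for illustration only:

```python
# Two ways of summarizing the exact significance levels from several
# coding analyses: Cox's suggestion averages the P values, while
# Besag's simple alternative multiplies the smallest P value by the
# number of tests (a Bonferroni-type bound, capped at 1).
p_values = [0.03, 0.20, 0.11, 0.08]  # hypothetical P values

cox_summary = sum(p_values) / len(p_values)
besag_summary = min(1.0, min(p_values) * len(p_values))

print(round(cox_summary, 3))    # → 0.105
print(round(besag_summary, 3))  # → 0.12
```

Besag's summary is driven entirely by the single most significant coding, which is why he regarded the averaging approach as the more conservative of the two.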
To judge the goodness-of-fit, we have adopted Besag's method, as well as carrying out a summary analysis of variance for each plot size considered. As explained earlier, this summary was obtained by taking a simple average of the effects mean squares and a weighted average of the error mean squares. It is only an approximation, since the individual analyses are not independent, and it provides conservative tests of significance.

Since the results for plots of size 1 x 2 were different from those of the other plot sizes considered, in that they gave highly significant second order terms, a possible explanation for this behavior may be stated as follows. In the case of plots of size 1 x 1, 2 x 1 and 3 x 1, the response for each plot is an average effect taken along its length
and across its width. Since the width remains the same for all three of these plot sizes, any change in the response will be due to the averaging process taking place along the length of the plot. So, as there is no change in the pattern of the significance level, this may indicate that there is no change in the fertility pattern along the length of the field. Now, plots of size 1 x 2 have twice the width of their predecessors, so the high significance of the second order terms may be an indication of the presence of a fertility gradient across the width of the field. However, since the second order terms, for plots of size 1 x 4, are not significant, we may hypothesize that plots of size 1 x 2 happen to coincide with some fertility pattern that resulted in this unique behavior.
The wheat results were interesting in that, though it was generally true that the first order terms had higher P values than second order ones, a number of the first order terms were not significant. This result might indicate that, though it is generally true that yields from neighboring plots are correlated, it is not necessarily true that the first order model is always needed. The reason for bringing up this point is that those who advocate adjusting yield for the effect of neighboring plots, using such methods as Papadakis's adjustment for the effect of the four neighbors, are essentially taking a first order model as a first approximation.
The R²_p values indicate that a first order model does not provide a good fit, and even for the second or third order the fit is not improved tremendously. These remarks are particularly true for the wheat data, where these values are almost zero for certain plot sizes. However, although these R²_p values were very low, we notice that the lag 1 correlations were of the same order of magnitude; clearly, the lag 1 correlations should be the primary factor determining the "goodness" of the first order model. Thus, while the R²_p values are not high, they are consistent with the magnitude of the observed lag 1 correlations.
The correlograms presented were based on plots of size 3 x 1 units. This size of plot was chosen deliberately to improve the chance of agreement with the fitted autocorrelation functions, which in all cases are monotonically decreasing functions. While the observed correlations for plots of size 1 x 1 did tend to decrease with distance, there were many cyclical jumps with an approximate cycle of length three. Consequently, 3 x 1 units exhibited much smoother correlograms.

A possible explanation for this smoothness might be the following. In order to irrigate the field we had to run ditches twenty-four meters apart, with a high ridge halfway between each pair of ditches. Thus all plots of length 12 meters (3 units) ran from ditch to ridge and hence experienced similar conditions along their length, with the result being the smooth correlation exhibited.

However, though we intended to irrigate by taking the whole width of the field as one plot, the unevenness of the land made this impossible, and so it was left to the waterman to connect the ridges to the ditches to make the water level the same for the whole field, thus insuring a similar level of watering. This practice resulted in different widths of irrigation units; thus, for plots of size 3 x 1 units, the correlation across the width of the field was not as smooth as it was along the length. So the fitted autocorrelations were computed for plots of size 3 x 1, for all three crops, and then compared with the observed ones for these plot sizes. This tactic amounts to averaging the lag correlations of the smaller units, and this results in smoother decay. This smoothing of the correlogram should improve the degree of fit of the various fitted autocorrelation functions to the observed data. If there is an obvious lack of fit in the case of plots of size 3 x 1, then one can expect the fit for plots of size 1 x 1 to be even poorer.
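The aggregation argument above can be sketched numerically; the yield data here are synthetic and purely illustrative:

```python
# Averaging runs of three 1 x 1 plots into 3 x 1 units smooths the
# column-direction correlogram, since the aggregated lag correlations
# average over the small-scale cyclical pattern.  Synthetic data only.
def correlogram(x, max_lag):
    """Biased sample autocorrelations r(1..max_lag) of a 1-D series."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [
        sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / (n * var)
        for lag in range(1, max_lag + 1)
    ]

column = [1.0, 2.0, 0.5, 1.5, 2.5, 1.0, 2.0, 3.0, 1.5, 2.0, 3.5, 2.0]
units_3x1 = [sum(column[i:i + 3]) / 3 for i in range(0, len(column), 3)]
print(len(units_3x1))          # → 4 aggregated units
print(correlogram(column, 3))  # lag 1-3 correlations of the 1 x 1 column
```

With real uniformity data, the correlogram of `units_3x1` would show the smoother, more nearly monotone decay described in the text.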
All the fitted correlograms, without a single exception, decayed faster than the observed ones. However, the two modified functions, which were made to decay at a slower rate than the originals by choosing k1 and k2 in (4.3) and (4.4) less than unity, had higher values than the observed for low values of the lag.

Neither Besag's nor Quenouille's function seemed to provide a good fit, and at the same time the suggested modifications offered no improvement over their predecessors. Thus, there is a need for a reinvestigation of both of these functions, with the hope of an improvement in the fit.
On examining the fitted correlations and their respective observed values, we notice that for the sorghum and onion trials the functions ρ1.1(s) and ρ2.1(s), where the values of the constants were taken as unity, gave values that were higher than the observed to start with. However, both fitted functions decayed faster than the observed correlations, with the result that for large s the observed values were almost always larger than the fitted ones. On the other hand, the functions ρ1.2(s) and ρ2.2(s), where the fitted values were made to coincide with the observed values for s = 1, gave a reasonable fit for short lags (i.e., small values of s), but for long lags (i.e., large values of s) these fitted functions decayed faster than the observed ones.
The autocorrelations for the wheat trial were slightly different from those for sorghum and onion. First of all, the observed values were very low irrespective of plot size and shape, but at the same time the rate of decay was very slow compared with those for sorghum and onions. The fitted function ρ2.1(s) gave values that were vastly different and extremely high in comparison with the observed, but the ρ2.2(s) values were somewhat similar to the observed, especially for low values of s. The most interesting results were those of ρ1.1(s) and ρ1.2(s). They gave values of zero for all s > 1, except for plots of size 1 x 2 and 1 x 4. So, even though ρ1.1(s) was very low, it showed some reasonable agreement with the observed ones, in comparison with the other functions.
At this stage, it is worth mentioning the efforts made by Whittle (1954), Patankar (1954) and Besag (1974) to explain the discrepancy between the observed autocorrelations and those they fitted to the Mercer and Hall (1911) wheat plots data.

Whittle thought that a possible reason for the noted disparity might be the fact that the data were integrated observations of growth over plots rather than point observations, while Patankar suggested that the process was non-stationary. On the other hand, Besag thought that the answer might well lie in the use of a third order autonormal scheme. Also, Whittle, in the discussion that followed Besag's (1974) paper, pointed out that he and Besag would have achieved a better fit to the Mercer and Hall wheat plots data had they docked the central spike off the observed correlogram. Besag (1974), in his reply to Whittle's comment, explained Whittle's phrase as meaning that they should have fitted the model

     Y_{i,j} = X_{i,j} + Z_{i,j} ,                                    (4.7)

where Y_{i,j} denotes plot yield, Z_{i,j} denotes uncorrelated noise and X_{i,j} is a first order auto-normal scheme. Thus, X_{i,j} can be thought of as reflecting variations in soil fertility and Z_{i,j} as reflecting the intrinsic variability of the wheat itself, from plot to plot. However, Besag noted that, unfortunately, the conditional probability scheme for the yields was no longer a finite-order process and would lead to complications in maximum-likelihood estimation and in testing goodness-of-fit; in any case, Besag doubted whether docking the central spike would really satisfy Professor Whittle.
Whittle (1954) argued that if the observed correlogram did not decay very quickly, then in order to fit a correlogram one could use the rather direct method of equating the observed and the theoretical autocorrelation coefficients.

All our observed correlograms decay rather slowly, and hence Whittle's suggestion may be a reasonable one to adopt. Alternatively, one could assume a higher order auto-normal model instead of using a first order model as an approximation. In such a case the fitted autocorrelations may provide a better fit, but it may not be a much better one than those obtained under the assumption of a first order model, since the third order model does not provide a very good fit, as the values of R²_3 indicate.
5.  CONCLUSIONS

In the preceding sections, an attempt has been made to use the conditional probability approach, as applied to spatial processes, to fit correlation functions to three uniformity trials. The fitted autocorrelations were not as satisfactory as they were intended to be. The correlation function (2.7) resulted in a fit that is somewhat better than the others; see, for example, the fitted autocorrelation ρ1.2(s) for onion data for plots of size 1 x 1, 3 x 1 and 1 x 2 units in Table (4.17c). For this function the value of the parameter k was estimated by k̂ as given by (4.5). This estimation was made easier through the use of Ord's method and the computational technique that we suggested for obtaining the eigenvalues needed for the estimation.

The autonormal analysis results indicated that the first order model was highly significant, but the correlograms fitted under the assumption of this model order did not give a good fit. Thus, we are inclined to support Besag's (1974) call for an alternative suggestion on the specification of lattice models for aggregated data. If such an alternative is found, it may result in new or modified correlation functions that give a good fit.

However, since we have not been able to find a correlation function that would fit the data well, using the observed autocorrelations might be the only option available to us at this stage.

Finally, it may be worthwhile to point out the agreement between the values of R²_1 and the corresponding lag 1 correlations for the three crops examined in this study. This result reflects the functional relationship between these two values.
LIST OF REFERENCES

Bartlett, M. S. 1955. An Introduction to Stochastic Processes. Cambridge University Press, Cambridge.

Bartlett, M. S. 1967. Inference and stochastic processes. J. R. Statist. Soc. A 130:457-477.

Bartlett, M. S. 1968. A further note on nearest neighbour models. J. R. Statist. Soc. A 131:579-580.

Bartlett, M. S. 1975. The Statistical Analysis of Spatial Pattern. Chapman and Hall, London.

Bartlett, M. S. 1978. Nearest neighbour models in the analysis of field experiments. J. R. Statist. Soc. B 40:147-174.

Besag, J. E. 1972. On the correlation structure of some two-dimensional stationary processes. Biometrika 59(1):43-48.

Besag, J. E. 1974. Spatial interaction and the statistical analysis of lattice systems. J. R. Statist. Soc. B 36(2):192-236.

Brook, D. 1964. On the distinction between the conditional probability and the joint probability approaches in the specification of nearest neighbour systems. Biometrika 51:481-483.

Fairfield Smith, H. 1938. An empirical law describing heterogeneity in the yields of agricultural crops. J. Agric. Sci. 28:1-23.

Fisher, R. A. 1925. Statistical Methods for Research Workers. Oliver and Boyd, London. 1-7.

Gray, A. and G. B. Mathews. 1922. A Treatise on Bessel Functions and Their Applications to Physics. Second Ed. Macmillan and Co., London. 313-315.

Lancaster, P. 1977. Theory of Matrices. Academic Press, New York. 259-261.

Mahalanobis, P. C. 1944. On large-scale sample surveys. Phil. Trans. Roy. Soc. B 231:329-340.

Mead, R. 1971. Models for interplant competition in irregularly spaced populations. In G. P. Patil, E. C. Pielou and W. E. Waters, eds., Statistical Ecology, Vol. 2. Pennsylvania State Univ. Press, University Park. 13-30.

Mercer, W. B. and A. D. Hall. 1911. The experimental error of field trials. J. Agric. Sci. 4:107-132.

Ord, K. 1975. Estimation methods for models of spatial interaction. J. Amer. Statist. Assoc. 70:120-126.

Osborne, J. G. 1942. Sampling errors of systematic and random surveys of cover-type areas. J. Amer. Statist. Assoc. 37:256-264.

Patankar, V. N. 1954. The goodness of fit of frequency distributions obtained from stochastic processes. Biometrika 41:450-462.

Pearce, S. C. 1976. An examination of Fairfield Smith's law of environmental variation. J. Agric. Sci. 87:21-24.

Quenouille, M. H. 1949. Problems in plane sampling. Ann. Math. Statist. 20:355-375.

Whittle, P. 1954. On stationary processes in the plane. Biometrika 41:434-449.

Whittle, P. 1956. On the variation of yield variance with plot size. Biometrika 43:337-343.

Whittle, P. 1962. Topographic correlation, power-law covariance functions, and diffusion. Biometrika 49:305-314.

Whittle, P. 1963. Stochastic processes in several dimensions. Bull. Int. Statist. Inst. 40:974-994.
APPENDICES
Appendix Table (A.1).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): first analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                   11.03          2     5.52    48.61*
γ1, γ2                                     .05          2      .02     < 1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .93          8      .12     1.03
Residual                                 11.23         99      .11
Total                                    23.24        111

*Significant at P < .05.

Appendix Table (A.2).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): second analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                    5.87          2     2.94    28.62*
γ1, γ2                                     .02          2      .01     < 1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .73          8      .09     < 1
Residual                                  9.42         92      .10
Total                                    16.04        104

*Significant at P < .05.
Appendix Table (A.3).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): third analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                   11.94          2     5.97    61.64*
γ1, γ2                                     .68          2      .34     3.52*
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            2.22          8      .28     2.87*
Residual                                  8.91         92      .10
Total                                    23.76        104

*Significant at P < .05.
Appendix Table (A.4).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): fourth analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                    7.56          2     3.78    48.71*
γ1, γ2                                     .13          2      .06     < 1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .82          8      .10     1.32
Residual                                  6.44         83      .08
Total                                    15.30         95

*Significant at P < .05.
Appendix Table (A.5).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): fifth analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                   12.31          2     6.15   141.34*
γ1, γ2                                     .36          2      .18     4.17*
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .92          8      .12     2.64*
Residual                                  4.31         99      .04
Total                                    17.90        111

*Significant at P < .05.
Appendix Table (A.6).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): sixth analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                   16.86          2     8.43   116.80*
γ1, γ2                                     .25          2      .13     1.73
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            1.90          8      .24     3.30*
Residual                                  6.64         92      .07
Total                                    25.65        104

*Significant at P < .05.
Appendix Table (A.7).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): seventh analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                    9.94          2     4.97    68.29*
γ1, γ2                                     .09          2      .04     < 1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .91          8      .11     1.56
Residual                                  6.69         92      .07
Total                                    17.62        104

*Significant at P < .05.
Appendix Table (A.8).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): eighth analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                    6.74          2     3.37    42.65*
γ1, γ2                                     .24          2      .12     1.53
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            1.27          8      .16     2.01
Residual                                  6.09         77      .08
Total                                    14.35         89

*Significant at P < .05.
Appendix Table (A.9).  Auto-normal analysis of sorghum plots data (plots of size 1 x 1): ninth analysis of variance under Fig. (4.1) codings.

Effect                              Sum of squares   d.f.    M.S.    F-ratio
β1, β2                                    8.00          2     4.00    48.48*
γ1, γ2                                     .43          2      .21     2.58
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             .91          8      .11     1.38
Residual                                  6.36         77      .08
Total                                    15.70         89

*Significant at P < .05.
Appendix Table (A.10).  Auto-normal analysis of sorghum data, for plots of 2 x 1 units; under Fig. (2.2) coding.

                  First analysis    Second analysis    Third analysis    Fourth analysis
Effect   d.f.     M.S.   F-ratio    M.S.   F-ratio     M.S.   F-ratio    M.S.   F-ratio
β1, β2     2     14.35   42.14*    15.21   55.85*     12.54   34.02*    12.41   38.31*
γ1, γ2     2      2.20    6.28*     1.00    3.69*       .43    1.17       .72    3.21*
Error    115       .35               .27                .37               .32

*Significant at P ≤ .05.

Appendix Table (A.11).  Auto-normal analysis of sorghum data, for plots of 3 x 1 units; under Fig. (2.2) coding.

                  First analysis    Second analysis    Third analysis    Fourth analysis
Effect   d.f.     M.S.   F-ratio    M.S.   F-ratio     M.S.   F-ratio    M.S.   F-ratio
β1, β2     2     21.42   38.26*    28.96   56.88*     27.94   54.40*    25.52   40.33*
γ1, γ2     2       .33    < 1        .11    < 1        1.21    2.35      1.19    1.88
Error     67       .56               .51                .51               .63

*Significant at P < .05.
Appendix Table (A.12).  Auto-normal analysis of sorghum data, for plots of 4 x 1 units; under Fig. (2.2) coding.

                  First analysis    Second analysis    Third analysis    Fourth analysis
Effect   d.f.     M.S.   F-ratio    M.S.   F-ratio     M.S.   F-ratio    M.S.   F-ratio
β1, β2     2     30.56   32.82*    23.44   52.43*     20.62   23.86*    26.59   29.46*
γ1, γ2     2      1.18    1.27      1.22    2.73        .34    < 1        .39    < 1
Error     43       .93               .45                .86               .90

*Significant at P < .05.

Appendix Table (A.13).  Auto-normal analysis of sorghum data, for plots of 1 x 2 units; under Fig. (2.2) coding.

                  First analysis    Second analysis    Third analysis    Fourth analysis
Effect   d.f.     M.S.   F-ratio    M.S.   F-ratio     M.S.   F-ratio    M.S.   F-ratio
β1, β2     2     14.75   71.81*    20.03   78.17*     18.52   89.15*    20.95   93.14*
γ1, γ2     2      3.25   15.85*     3.55   13.85*      2.65   12.73*     4.68   20.83*
Error    116       .21               .26                .21               .22

*Significant at P ≤ .05.
Appendix Table (A.14). Auto-normal analysis of sorghum data, for plots of 1 x 4 units; under Fig. (2.2) coding.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2     32.56   33.88*    40.88   49.68*    29.89   34.49*    32.00   39.09*
γ1, γ2       2       .38   <1         3.29    4.00*     1.62    1.87      1.10    1.34
Error       50       .96               .82               .87               .82

*Significant at P < .05.
Appendix Table (A.15). Auto-normal analysis of onion data: first analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   357.52         2    178.76   40.40*
γ1, γ2                                     1.70         2       .85   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            83.35         8     10.42    2.35*
Error                                    438.05        99      4.42
Total                                    880.62       111

*Significant at P < .05.
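The entries in these analysis-of-variance tables follow the usual arithmetic: each mean square is the sum of squares divided by its degrees of freedom, and each F-ratio is the effect mean square divided by the error mean square. A minimal sketch of that arithmetic, using the β1, β2 line of Appendix Table (A.15) as the worked case:

```python
# Recompute the beta line of Appendix Table (A.15) from its sum of
# squares and degrees of freedom, as listed in the table above.
def mean_square(ss, df):
    return ss / df

ms_beta = mean_square(357.52, 2)     # effect mean square: 178.76
ms_error = mean_square(438.05, 99)   # error mean square: about 4.42
f_beta = ms_beta / ms_error          # F-ratio: about 40.4

print(round(ms_beta, 2), round(ms_error, 2), round(f_beta, 1))
```

This reproduces the tabulated 178.76, 4.42 and 40.40 up to rounding.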
Appendix Table (A.16). Auto-normal analysis of onion data: second analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   284.24         2    142.12   29.98*
γ1, γ2                                     2.83         2      1.41   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            22.63         8      2.83   <1
Error                                    436.12        92      4.74
Total                                    745.83       104

*Significant at P < .05.
Appendix Table (A.17). Auto-normal analysis of onion data: third analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   426.88         2    213.44   56.30*
γ1, γ2                                    16.31         2      8.16    2.15
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            60.21         8      7.53    1.99
Error                                    348.80        92      3.79
Total                                    852.21       104

*Significant at P < .05.
Appendix Table (A.18). Auto-normal analysis of onion data: fourth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   389.51         2    194.76   36.47*
γ1, γ2                                     6.08         2      3.04   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            45.12         8      5.64    1.06
Error                                    411.19        77      5.34
Total                                    851.91        89

*Significant at P < .05.
Appendix Table (A.19). Auto-normal analysis of onion data: fifth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   390.34         2    195.17   66.96*
γ1, γ2                                     5.12         2      2.56   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            78.93         8      9.87    3.38*
Error                                    224.45        77      2.91
Total                                    698.83        89

*Significant at P < .05.
Appendix Table (A.20). Auto-normal analysis of onion data: sixth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   362.25         2    181.12   52.86*
γ1, γ2                                     5.76         2      2.88   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            21.35         8      2.67   <1
Error                                    284.41        83      3.43
Total                                    673.77        95

*Significant at P < .05.
Appendix Table (A.21). Auto-normal analysis of onion data: seventh analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   271.13         2    135.57   28.85*
γ1, γ2                                     2.91         2      1.46   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2           116.37         8     14.55    3.10*
Error                                    465.16        99      4.70
Total                                    855.58       111

*Significant at P < .05.
Appendix Table (A.22). Auto-normal analysis of onion data: eighth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   196.67         2     98.34   19.92*
γ1, γ2                                    15.01         2      7.50    1.52
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2            66.58         8      8.32    1.69
Error                                    454.15        92      4.93
Total                                    732.41       104

*Significant at P < .05.
Appendix Table (A.23). Auto-normal analysis of onion data: ninth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                   308.88         2    154.44   26.73*
γ1, γ2                                     4.70         2      2.35   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2           109.82         8     13.73    2.38*
Error                                    531.54        92      5.78
Total                                    954.95       104

*Significant at P < .05.
Appendix Table (A.24). Auto-normal analysis of onion data, for plots of 2 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2    394.48   28.07*   443.43   28.17*   339.53   25.82*   544.00   33.46*
γ1, γ2       2     42.40    3.02      1.12   <1          .39   <1        19.90    1.22
Error      115     14.05             15.74             13.15             16.26

*Significant at P < .05.

Appendix Table (A.25). Auto-normal analysis of onion data, for plots of 3 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2    663.57   31.46*   493.90   14.51*  1011.00   39.28*   624.20   21.84*
γ1, γ2       2     10.03   <1        36.05    1.06     62.16    2.41     29.17    1.02
Error       67     21.10             34.03             25.74             28.58

*Significant at P < .05.
Appendix Table (A.26). Auto-normal analysis of onion data, for plots of 4 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2    490.39   13.94*   304.06   13.34*   518.49   12.60*   512.20   16.90*
γ1, γ2       2     67.62    1.92     40.56    1.78      1.44   <1         2.46   <1
Error       43     35.19             22.79             41.14             30.31

*Significant at P < .05.
Appendix Table (A.27). Auto-normal analysis of onion data, for plots of 1 x 2 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2    518.32   42.03*   365.65   31.84*   592.70   54.14*   329.26   24.68*
γ1, γ2       2    346.77   28.12*   353.89   30.82*   299.43   27.35*   395.40   29.64*
Error      116     12.33             11.48             10.95             13.34

*Significant at P < .05.
Appendix Table (A.28). Auto-normal analysis of onion data, for plots of 1 x 4 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2   1226.05   25.13*   913.73   31.55*   774.36   18.95*   962.32   29.66*
γ1, γ2       2     50.04    1.02     49.51    1.71      3.18   <1        44.98    1.39
Error       50     48.80             28.96             40.86             32.44

*Significant at P < .05.
Appendix Table (A.29). Auto-normal analysis of wheat data: first analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                     3.69         2      1.85   41.66*
γ1, γ2                                      .02         2       .01   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2              .58         8       .07    1.62
Error                                      4.39        99       .04
Total                                      8.67       111

*Significant at P < .05.
Appendix Table (A.30). Auto-normal analysis of wheat data: second analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                     1.01         2       .51    8.62*
γ1, γ2                                      .24         2       .12    2.04
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             1.17         8       .15    2.49*
Error                                      5.40        92       .06
Total                                      7.82       104

*Significant at P < .05.
Appendix Table (A.31). Auto-normal analysis of wheat data: third analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                      .42         2       .21    2.92*
γ1, γ2                                      .00         2       .00   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2              .40         8       .05   <1
Error                                      6.88        92       .07
Total                                                 104

*Significant at P < .05.
Appendix Table (A.32). Auto-normal analysis of wheat data: fourth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                      .95         2       .48   12.01*
γ1, γ2                                      .12         2       .06    1.46
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2              .31         8       .04   <1
Error                                      3.06        77       .04
Total                                      4.44        89

*Significant at P < .05.
Appendix Table (A.33). Auto-normal analysis of wheat data: fifth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                     1.02         2       .51    9.08*
γ1, γ2                                      .14         2       .07    1.25
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2              .39         8       .05   <1
Error                                      4.34        77       .06
Total                                      5.88        89

*Significant at P < .05.
Appendix Table (A.34). Auto-normal analysis of wheat data: sixth analysis of variance under Fig. (4.1) codings.
Appendix Table (A.35).
Auto-normal analysis of wheat data: seventh
analysis of variance under Fig. (4.1) codings.
Sum of
squares
Effect
d. f.
M. S.
F-ratio
31.29*
1\
62
3.11
2
1.56
Yl
Y2
.18
2
.09
1.82
°1
°2
81
82
Ql
Ql 2
X2
1.03
Xl
8
.13
2.60
Error
4.92
99
.05
Total
9.25
III
l
* Significant at P < .05 •
Appendix Table (A.36). Auto-normal analysis of wheat data: eighth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                      .81         2       .40    8.26*
γ1, γ2                                      .10         2       .05    1.06
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2              .73         8       .09    1.87
Error                                      4.50        92       .05
Total                                      6.15       104

*Significant at P < .05.
Appendix Table (A.37). Auto-normal analysis of wheat data: ninth analysis of variance under Fig. (4.1) codings.

Effect                               Sum of squares   d.f.    M.S.   F-ratio
β1, β2                                      .09         2       .04    1.00
γ1, γ2                                      .01         2       .00   <1
δ1, δ2, θ1, θ2, φ1, φ2, χ1, χ2             1.25         8       .16    3.54*
Error                                      4.07        92       .04
Total                                      5.41       104

*Significant at P < .05.
Appendix Table (A.38). Auto-normal analysis of wheat data, for plots of 2 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2       .79    3.38       .65    4.36*      .65    5.34*     1.10    4.95*
γ1, γ2       2       .04   <1          .20    1.30       .08   <1          .43    1.93
Error      115       .23               .15               .12               .22

*Significant at P < .05.

Appendix Table (A.39). Auto-normal analysis of wheat data, for plots of 3 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2      1.33    3.24      1.13    3.23       .58    1.13       .53    1.86
γ1, γ2       2       .06   <1          .23   <1          .48   <1          .03   <1
Error       67       .41               .35               .52               .28

*Significant at P < .05.
Appendix Table (A.40). Auto-normal analysis of wheat data, for plots of 4 x 1 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2      2.33    3.48      2.40    6.56*      .42    1.21       .60    2.08
γ1, γ2       2       .08   <1          .09   <1          .24   <1          .20   <1
Error       43       .67               .37               .35               .29

*Significant at P < .05.

Appendix Table (A.41). Auto-normal analysis of wheat data, for plots of 1 x 2 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2      2.60   21.13*     1.44   11.29*     1.93   16.00*     1.34   11.10*
γ1, γ2       2      1.26   10.21*     1.68   13.15*     1.77   14.65*     1.38   11.45*
Error      116       .12               .13               .12               .12

*Significant at P < .05.
Appendix Table (A.42). Auto-normal analysis of wheat data, for plots of 1 x 4 units; under Fig. (2.2) codings.

                    First Analysis    Second Analysis   Third Analysis    Fourth Analysis
Effect     d.f.     M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio    M.S.   F-ratio
β1, β2       2      2.62    5.92*     2.62    9.42*     4.15   19.01*     5.09   20.21*
γ1, γ2       2       .72    1.62       .67    2.40       .26    1.17       .31    1.21
Error       50       .44               .28               .22               .25

*Significant at P < .05.
Appendix Table (A.43a). The observed autocorrelations for plots of size 1 x 1 for sorghum plot data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.534 0.464 0.449 0.474 0.401 0.349 0.355 0.495 0.367 0.284 0.298 0.382 0.263 0.150 0.212 0.304 0.224 0.165 0.183 0.256 0.196 0.098 0.147 0.291
s=1: 0.459 0.255 0.216 0.175 0.270 0.168 0.091 0.139 0.244 0.105 0.041 0.028 0.118 0.006 -.091 -.019 0.075 0.032 -.036 -.014 0.049 0.016 -.102 -.065 0.082
s=2: 0.338 0.192 0.132 0.117 0.211 0.089 0.078 0.066 0.165 0.082 0.009 0.004 0.115 -.015 -.074 -.025 0.064 0.046 -.026 -.018 0.056 -.010 -.075 -.047 0.083
s=3: 0.461 0.329 0.255 0.251 0.324 0.213 0.188 0.174 0.306 0.210 0.148 0.178 0.262 0.161 0.077 0.096 0.210 0.164 0.109 0.093 0.166 0.157 0.043 0.074 0.209
s=4: 0.282 0.166 0.068 0.095 0.160 0.073 0.021 0.036 0.176 0.047 0.009 0.008 0.093 -.007 -.087 -.100 0.074 0.015 -.054 — 0.012 -.031 -.180 -.139 0.000
s=5: 0.317 0.179 0.128 0.110 0.185 0.110 0.057 0.081 0.224 0.098 0.044 0.038 0.120 0.016 -.069 -.055 0.145 0.083 0.013 -.033 0.065 -.022 -.155 -.092 0.049
s=6: 0.484 0.379 0.287 0.286 0.379 0.277 0.236 0.236 0.371 0.292 0.209 0.214 0.318 0.195 0.146 0.174 0.321 0.261 0.176 0.094 0.227 0.109 0.018 0.079 0.220
s=7: 0.221 0.132 0.064 0.032 0.110 0.041 -.031 -.016 0.118 0.029 -.046 -.026 0.072 -.046 -.094 -.104 0.061 0.006 -.084 -.115 -.008 -.103 -.191 -.195 -.019
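Observed entries of this kind are sample autocorrelations of the plot yields at row lag s and column lag t. A minimal sketch of such a computation (the grid below and the normalization by the overall mean and variance are illustrative assumptions, not taken from the thesis):

```python
# Illustrative lattice sample autocorrelation r(s, t): the average
# product of mean-centred yields s rows and t columns apart, divided
# by the overall variance of the lattice.
def lattice_autocorr(grid, s, t):
    n_rows, n_cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    prods = [
        (grid[i][j] - mean) * (grid[i + s][j + t] - mean)
        for i in range(n_rows - s)
        for j in range(n_cols - t)
    ]
    return sum(prods) / (len(prods) * var)

# A tiny made-up 4 x 4 "field" of yields; r(0, 0) is 1 by construction.
field = [[1.0, 2.0, 1.5, 0.5],
         [2.0, 3.0, 2.5, 1.0],
         [1.0, 1.5, 1.0, 0.0],
         [0.5, 1.0, 0.5, 0.5]]
print(round(lattice_autocorr(field, 0, 0), 6))  # 1.0
```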
Appendix Table (A.43b). The observed autocorrelations for plots of size 3 x 1 for sorghum data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.590 0.499 0.468 0.552 0.432 0.356 0.393 0.551 0.380 0.267 0.283 0.407 0.236 0.094 0.180 0.320 0.238 0.122 0.146 0.265 0.178 0.025 0.094 0.298
s=1: 0.549 0.334 0.234 0.232 0.356 0.201 0.140 0.139 0.342 0.163 0.074 0.092 0.241 0.064 -.069 -.032 0.175 0.095 -.009 -.013 0.102 0.057 -.132 -.081 0.143
s=2: 0.566 0.421 0.286 0.273 0.406 0.274 0.198 0.209 0.418 0.287 0.178 0.176 0.321 0.160 0.037 0.050 0.313 0.210 0.106 0.006 0.168 0.030 -.144 -.075 0.170
s=3: 0.401 0.241 0.092 0.089 0.256 0.113 0.017 0.037 0.292 0.142 0.033 0.055 0.272 0.002 -.106 -.094 0.205 0.110 -.026 -.042 0.035 -.112 -.290 -.261 0.092
s=4: 0.388 0.189 0.116 0.074 0.220 0.036 -.039 -.024 0.153 0.056 -.016 0.026 0.227 0.030 -.153 -.091 0.087 0.086 0.024 -.062 0.011 -.130 -.175 -.165 0.111
s=5: 0.341 0.163 0.082 0.015 0.090 0.030 -.050 -.022 0.175 0.044 0.006 0.083 0.168 0.027 -.086 -.046 0.206 0.184 0.012 -.061 0.031 -.156 -.191 -.205 0.056
s=6: 0.279 0.145 0.081 0.021 0.146 0.070 -.038 0.043 0.193 0.141 0.034 0.202 0.437 0.218 0.078 0.054 0.242 0.214 0.120 0.080 0.167 -.104 -.220 -.170 0.152
s=7: 0.078 -.040 -.205 -.136 -.025 -.067 -.130 -.071 -.086 -.101 -.040 0.210 0.262 0.005 -.071 0.106 0.371 0.297 0.193 0.020 -.178 -.259 -.096 -.054 0.119
Appendix Table (A.43c). The fitted autocorrelation according to Quenouille's function for plots of size 3 x 1 for sorghum data. Rows are lag |s| = 0, ..., 7; entries within a row are lags |t| = 0, 1, ..., 24.

|s|=0: 1 .5897 .3478 .2051 .1209 .0713 .0421 .0248 .0146 .0086 .0051 .0030 .0018 .0010 .0006 .0004 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=1: .5493 .3240 .1910 .1127 .0664 .0392 .0231 .0136 .0080 .0047 .0028 .0016 .0010 .0006 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=2: .3018 .1780 .1049 .0619 .0365 .0215 .0127 .0075 .0044 .0026 .0015 .0009 .0005 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=3: .1658 .0978 .0577 .0340 .0200 .0118 .0070 .0041 .0024 .0014 .0008 .0005 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=4: .0911 .0537 .0317 .0187 .0110 .0065 .0038 .0023 .0013 .0008 .0005 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=5: .0500 .0295 .0174 .0103 .0061 .0036 .0021 .0012 .0007 .0004 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=6: .0274 .0162 .0096 .0056 .0033 .0020 .0012 .0007 .0004 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=7: .0151 .0089 .0053 .0031 .0018 .0011 .0006 .0004 .0002 .0001 .0001, then .0000 through |t| = 24
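The fitted values in this table and in Table (A.43d) are separable in the two lags: each entry is the product of the corresponding first-column and first-row entries, ρ(s, t) = ρ(s, 0) ρ(0, t). A minimal sketch of how such a table is generated, with the two decay rates read off Table (A.43c) (ρ(1, 0) ≈ .5493 and ρ(0, 1) ≈ .5897):

```python
# Separable autocorrelation of the Quenouille type: each entry is the
# product rho(s, t) = lam_s**s * lam_t**t.  The thesis evidently carried
# more decimal places in lam_s and lam_t, so products can differ from
# the tabulated values in the last digit.
lam_s, lam_t = 0.5493, 0.5897

table = [[lam_s ** s * lam_t ** t for t in range(25)] for s in range(8)]

print(round(table[0][2], 4))  # close to the tabulated .3478
print(round(table[1][1], 4))  # close to the tabulated .3240
```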
Appendix Table (A.43d). The fitted autocorrelation according to Besag's function for plots of size 3 x 1 for sorghum data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1 .2223 .0494 .0110 .0024 .0005 .0001, then .0000 through t = 24
s=1: .4161 .0925 .0206 .0046 .0010 .0002 .0001, then .0000 through t = 24
s=2: .1732 .0385 .0086 .0019 .0004 .0001, then .0000 through t = 24
s=3: .0721 .0160 .0036 .0008 .0002, then .0000 through t = 24
s=4: .0300 .0067 .0015 .0003 .0001, then .0000 through t = 24
s=5: .0125 .0028 .0006 .0001, then .0000 through t = 24
s=6: .0052 .0012 .0003 .0001, then .0000 through t = 24
s=7: .0022 .0005 .0001, then .0000 through t = 24
Appendix Table (A.44a). The observed autocorrelations for plots of size 1 x 1 for onion data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.468 0.356 0.256 0.298 0.307 0.280 0.222 0.203 0.193 0.184 0.153 0.176 0.125 0.099 0.111 0.150 0.165 0.177 0.167 0.135 0.172 0.160 0.192 0.195
s=1: 0.373 0.229 0.130 0.082 0.108 0.116 0.090 0.038 0.038 0.057 0.014 0.020 0.081 0.036 0.025 0.059 0.093 0.102 0.130 0.136 0.131 0.181 0.149 0.124 0.139
s=2: 0.161 0.104 0.071 0.038 0.055 0.039 0.052 0.008 -.022 0.005 -.052 -.069 -.035 -.013 0.024 0.045 0.083 0.076 0.103 0.132 0.081 0.162 0.092 0.090 0.038
s=3: 0.142 0.066 0.064 0.027 0.043 0.040 0.025 0.000 -.015 0.002 -.003 -.065 -.078 -.084 0.013 0.020 0.062 0.057 0.089 0.097 0.066 0.125 0.127 0.075 0.041
s=4: 0.128 0.070 0.038 0.007 0.037 0.083 0.042 -.028 -.021 -.020 -.008 -.035 -.039 -.034 -.022 0.037 0.051 0.054 0.099 0.083 0.083 0.127 0.132 0.100 0.061
s=5: 0.189 0.147 0.109 0.070 0.082 0.134 0.120 0.080 0.030 0.041 0.014 0.025 -.001 0.010 0.041 0.044 0.073 0.121 0.081 0.128 0.079 0.070 0.066 0.079 0.089
s=6: 0.216 0.135 0.120 0.058 0.068 0.078 0.094 0.098 0.035 0.016 0.002 0.023 0.010 0.014 0.022 -.001 0.060 0.103 0.105 0.131 0.081 0.067 0.042 0.085 0.088
s=7: 0.173 0.147 0.087 0.082 0.053 0.054 0.064 0.043 0.026 0.064 0.022 0.046 -.021 0.025 0.053 0.053 0.102 0.047 0.109 0.085 0.106 0.080 0.071 0.018 0.036
Appendix Table (A.44b). The observed autocorrelations for plots of size 3 x 1 for onion data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.554 0.383 0.268 0.336 0.330 0.313 0.226 0.203 0.178 0.175 0.151 0.220 0.150 0.125 0.170 0.197 0.194 0.223 0.221 0.169 0.245 0.211 0.238 0.268
s=1: 0.187 0.106 0.081 0.036 0.043 0.070 0.056 -.014 -.026 0.014 -.028 -.080 -.103 -.071 0.031 0.074 0.145 0.145 0.190 0.181 0.152 0.259 0.255 0.179 0.092
s=2: 0.332 0.230 0.178 0.118 0.139 0.174 0.174 0.146 0.056 0.042 0.009 0.029 0.012 0.014 0.014 0.024 0.090 0.128 0.123 0.167 0.110 0.086 0.050 0.055 0.060
s=3: 0.196 0.158 0.078 0.068 0.008 0.029 0.005 -.072 -.087 0.047 0.044 0.030 0.023 0.108 0.143 0.171 0.198 0.143 0.166 0.117 0.154 0.176 0.089 -.005 0.140
s=4: 0.170 0.133 0.047 -.066 -.061 -.013 0.046 -.051 -.147 -.061 -.118 -.085 -.203 -.137 -.182 -.108 -.107 -.057 0.005 0.033 -.077 0.076 -.068 -.099 -.102
s=5: 0.149 0.179 0.072 0.027 -.106 -.021 -.018 0.033 -.089 -.039 -.081 0.009 -.095 -.050 -.019 0.145 0.112 0.177 0.261 0.214 0.092 0.068 0.129 0.027 0.036
s=6: 0.282 0.332 0.192 0.121 -.009 0.016 -.005 -.018 -.134 -.123 -.247 -.232 -.082 0.003 0.136 0.168 0.135 0.059 0.067 0.024 0.031 -.020 0.012 -.015 0.099
s=7: 0.010 0.156 0.115 0.066 -.056 -.017 -.044 -.044 -.129 0.054 0.004 -.016 -.070 -.182 0.021 0.190 0.228 0.263 0.423 0.383 0.453 0.175 0.317 0.278 0.388
Appendix Table (A.44c). The fitted autocorrelation according to Quenouille's function for plots of size 3 x 1 for onion data. Rows are lag |s| = 0, ..., 7; entries within a row are lags |t| = 0, 1, ..., 24.

|s|=0: 1 .5539 .3069 .1700 .0942 .0522 .0289 .0160 .0089 .0049 .0027 .0015 .0008 .0005 .0003 .0001 .0001, then .0000 through |t| = 24
|s|=1: .1870 .1036 .0574 .0318 .0176 .0098 .0054 .0030 .0017 .0009 .0005 .0003 .0002 .0001, then .0000 through |t| = 24
|s|=2: .0350 .0194 .0107 .0059 .0033 .0018 .0010 .0006 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=3: .0065 .0036 .0020 .0011 .0006 .0003 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=4: .0012 .0007 .0004 .0002 .0001 .0001, then .0000 through |t| = 24
|s|=5: .0002 .0001 .0001, then .0000 through |t| = 24
|s|=6: .0000 for all |t|
|s|=7: .0000 for all |t|
Appendix Table (A.44d). The fitted autocorrelation according to Besag's function for plots of size 3 x 1 for onion data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1 .4873 .2375 .1157 .0564 .0275 .0134 .0065 .0032 .0015 .0008 .0004 .0002 .0001, then .0000 through t = 24
s=1: .1460 .0711 .0347 .0169 .0082 .0040 .0020 .0010 .0005 .0002 .0001 .0001, then .0000 through t = 24
s=2: .0213 .0104 .0051 .0025 .0012 .0006 .0003 .0001 .0001, then .0000 through t = 24
s=3: .0031 .0015 .0007 .0004 .0002 .0001, then .0000 through t = 24
s=4: .0005 .0002 .0001 .0001, then .0000 through t = 24
s=5: .0001, then .0000 through t = 24
s=6: .0000 for all t
s=7: .0000 for all t
Appendix Table (A.45a). The observed autocorrelations for plots of size 1 x 1 for wheat data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.037 0.092 0.085 0.031 0.049 0.062 0.056 0.075 0.037 0.084 0.059 0.077 0.062 0.214 0.068 0.081 0.022 0.046 0.030 0.089 0.073 0.091 0.049 0.178
s=1: 0.087 0.030 -.018 0.013 0.002 -.001 0.027 -.011 -.024 -.057 0.035 -.018 -.027 -.016 0.024 -.005 -.010 -.012 0.005 0.005 -.005 0.016 0.010 -.027 0.134
s=2: 0.087 -.005 0.009 0.041 -.024 0.006 -.010 -.007 -.009 -.016 0.025 -.047 -.044 -.009 -.023 -.048 0.017 -.014 -.041 -.024 -.058 0.004 -.028 -.010 0.109
s=3: 0.049 0.072 0.025 0.037 0.056 0.006 0.034 0.014 0.020 0.000 0.055 0.001 -.021 0.054 0.068 0.046 0.049 0.023 -.016 -.021 -.022 0.058 0.087 0.048 0.147
s=4: 0.015 0.026 -.007 0.004 -.004 -.026 -.031 -.030 -.014 -.054 -.008 -.014 -.071 0.031 -.001 -.023 0.025 0.015 -.026 -.034 0.017 0.019 0.069 0.023 0.003
s=5: -.009 -.005 -.034 -.042 -.051 -.072 -.060 -.066 -.024 -.010 -.029 -.036 0.018 -.064 0.006 -.047 -.058 -.003 -.088 -.048 0.016 -.009 -.041 -.015 -.034
s=6: 0.053 0.026 0.008 0.037 0.032 0.015 0.010 0.016 0.022 0.005 0.044 0.022 0.057 0.074 0.063 0.020 0.012 0.056 0.075 0.061 0.080 0.037 0.081 0.025 0.026
s=7: 0.050 -.030 -.013 -.041 -.042 -.045 -.029 -.058 -.064 -.071 0.013 -.006 -.089 -.030 0.082 -.077 -.037 -.055 -.071 -.051 -.054 -.071 -.057 -.066 0.028
Appendix Table (A.45b). The observed autocorrelations for plots of size 3 x 1 for wheat data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1.000 0.034 0.097 0.122 0.031 0.036 0.055 0.022 0.090 -.012 0.122 0.039 0.086 0.016 0.271 0.068 0.067 0.012 -.021 -.072 0.078 0.096 0.090 0.009 0.217
s=1: 0.072 0.090 -.019 0.016 0.029 -.046 0.011 -.024 -.061 -.035 0.086 -.045 -.100 0.037 -.023 -.012 0.032 -.014 -.052 -.029 -.100 0.021 0.040 -.013 0.302
s=2: 0.053 0.016 0.002 -.015 0.008 -.074 -.046 -.074 -.013 -.062 0.016 -.009 -.003 -.001 0.090 -.070 -.022 0.040 -.027 -.028 0.085 -.008 0.080 0.009 -.045
s=3: 0.214 0.042 0.022 0.061 -.128 -.106 -.015 -.062 -.051 -.111 -.022 0.057 -.129 0.045 0.169 0.002 -.068 -.044 -.161 -.080 -.096 0.079 -.114 -.002 0.076
s=4: 0.074 0.042 0.018 -.025 -.066 0.041 -.034 -.068 -.098 -.100 -.051 -.030 -.108 -.093 -.032 -.042 -.036 -.059 -.211 -.102 -.094 0.159 -.026 -.002 0.126
s=5: 0.120 0.048 -.105 -.023 -.048 -.014 -.115 -.174 -.221 -.149 -.030 0.057 -.160 -.163 0.176 0.133 -.122 -.190 -.180 -.018 0.078 0.210 0.061 0.083 0.157
s=6: 0.190 0.112 0.015 0.030 -.088 -.139 -.032 0.052 -.128 -.125 -.061 -.128 -.299 0.034 0.165 -.019 0.027 -.054 -.192 0.047 0.099 0.149 0.136 0.047 0.033
s=7: 0.197 0.008 -.010 0.002 -.224 -.299 -.102 0.162 -.062 -.044 -.189 -.122 0.062 0.089 -.021 -.290 -.208 0.164 0.081 0.204 0.121 0.168 0.290 0.222 0.050
Appendix Table (A.45c). The fitted autocorrelation according to Quenouille's function for plots of size 3 x 1 for wheat data. Rows are lag |s| = 0, ..., 7; entries within a row are lags |t| = 0, 1, ..., 24.

|s|=0: 1 .0340 .0012, then .0000 through |t| = 24
|s|=1: .0721 .0025 .0001, then .0000 through |t| = 24
|s|=2: .0052 .0002, then .0000 through |t| = 24
|s|=3: .0004, then .0000 through |t| = 24
|s|=4 through |s|=7: .0000 for all |t|
Appendix Table (A.45d). The fitted autocorrelation according to Besag's function for plots of size 3 x 1 for wheat data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1 .1013 .0103 .0010 .0001, then .0000 through t = 24
s=1: .4797 .0486 .0049 .0005 .0001, then .0000 through t = 24
s=2: .2301 .0233 .0024 .0002, then .0000 through t = 24
s=3: .1104 .0112 .0011 .0001, then .0000 through t = 24
s=4: .0530 .0054 .0005 .0001, then .0000 through t = 24
s=5: .0254 .0026 .0003, then .0000 through t = 24
s=6: .0122 .0012 .0001, then .0000 through t = 24
s=7: .0058 .0006 .0001, then .0000 through t = 24
Appendix Table (A.46a). The fitted autocorrelation according to the modified Quenouille's function for plots of size 3 x 1 for sorghum data. Rows are lag |s| = 0, ..., 7; entries within a row are lags |t| = 0, 1, ..., 24.

|s|=0: 1 .8386 .7033 .5898 .4946 .4147 .3478 .2917 .2446 .2051 .1720 .1443 .1210 .1014 .0851 .0713 .0598 .0502 .0421 .0353 .0296 .0248 .0208 .0174 .0146
|s|=1: .8192 .6869 .5761 .4831 .4051 .3397 .2849 .2389 .2004 .1680 .1409 .1182 .0991 .0831 .0697 .0584 .0490 .0411 .0345 .0289 .0242 .0203 .0170 .0143 .0120
|s|=2: .6710 .5627 .4719 .3957 .3319 .2783 .2334 .1957 .1641 .1376 .1154 .0968 .0812 .0681 .0571 .0479 .0401 .0337 .0282 .0237 .0199 .0167 .0140 .0117 .0098
|s|=3: .5497 .4610 .3866 .3242 .2718 .2280 .1912 .1603 .1344 .1127 .0946 .0793 .0665 .0558 .0468 .0392 .0329 .0276 .0231 .0194 .0163 .0136 .0114 .0096 .0080
|s|=4: .4503 .3776 .3167 .2655 .2227 .1867 .1566 .1313 .1101 .0924 .0775 .0650 .0545 .0457 .0383 .0321 .0269 .0226 .0189 .0159 .0133 .0112 .0094 .0079 .0066
|s|=5: .3688 .3093 .2594 .2175 .1824 .1530 .1283 .1076 .0902 .0757 .0634 .0532 .0446 .0374 .0314 .0263 .0221 .0185 .0155 .0130 .0109 .0092 .0077 .0064 .0054
|s|=6: .3021 .2534 .2125 .1782 .1494 .1253 .1051 .0881 .0739 .0620 .0520 .0436 .0365 .0307 .0257 .0216 .0181 .0152 .0127 .0107 .0089 .0075 .0063 .0053 .0044
|s|=7: .2475 .2076 .1741 .1460 .1224 .1026 .0861 .0722 .0605 .0508 .0426 .0357 .0299 .0251 .0211 .0177 .0148 .0124 .0104 .0087 .0073 .0061 .0052 .0043 .0036
Appendix Table (A.46b). The fitted autocorrelation according to the modified Besag's function for plots of size 3 x 1 for sorghum data. Rows are lag s = 0, ..., 7; entries within a row are lags t = 0, 1, ..., 24.

s=0: 1 .6369 .4056 .2583 .1645 .1048 .0667 .0425 .0271 .0172 .0110 .0070 .0045 .0028 .0018 .0012 .0007 .0005 .0003 .0002 .0001 .0001 .0000 .0000 .0000
s=1: .7468 .4756 .3029 .1929 .1229 .0783 .0498 .0317 .0202 .0129 .0082 .0052 .0033 .0021 .0013 .0009 .0005 .0003 .0002 .0001 .0001 .0001 .0000 .0000 .0000
s=2: .5577 .3552 .2262 .1441 .0918 .0584 .0372 .0237 .0151 .0096 .0061 .0039 .0025 .0016 .0010 .0006 .0004 .0003 .0002 .0001 .0001 .0000 .0000 .0000 .0000
s=3: .4165 .2653 .1689 .1076 .0685 .0436 .0278 .0177 .0113 .0072 .0046 .0029 .0019 .0012 .0008 .0005 .0003 .0002 .0001 .0001 .0001 .0000 .0000 .0000 .0000
s=4: .3110 .1981 .1262 .0803 .0512 .0326 .0208 .0132 .0084 .0054 .0034 .0022 .0014 .0009 .0006 .0004 .0002 .0001 .0001 .0001 .0000 .0000 .0000 .0000 .0000
s=5: .2323 .1479 .0942 .0600 .0382 .0243 .0155 .0099 .0063 .0040 .0026 .0016 .0010 .0007 .0004 .0003 .0002 .0001 .0001 .0000 .0000 .0000 .0000 .0000 .0000
s=6: .1735 .1105 .0704 .0448 .0285 .0182 .0116 .0074 .0047 .0030 .0019 .0012 .0008 .0005 .0003 .0002 .0001 .0001 .0001 .0000 .0000 .0000 .0000 .0000 .0000
s=7: .1295 .0825 .0525 .0335 .0213 .0136 .0086 .0055 .0035 .0022 .0014 .0009 .0006 .0004 .0002 .0001 .0001 .0001 .0000 .0000 .0000 .0000 .0000 .0000 .0000
Appendix Table (A.47a). The fitted autocorrelation according to the modified Quenouille's function for plots of size 3 x 1 for onion data. Rows are lag |s| = 0, ..., 7; entries within a row are lags |t| = 0, 1, ..., 24.

|s|=0: 1 .8213 .6745 .5540 .4550 .3737 .3069 .2520 .2070 .1700 .1396 .1147 .0942 .0774 .0635 .0522 .0429 .0352 .0289 .0237 .0195 .0160 .0132 .0108 .0089
|s|=1: .5722 .4700 .3860 .3170 .2603 .2138 .1756 .1442 .1185 .0973 .0799 .0656 .0539 .0443 .0364 .0299 .0245 .0201 .0165 .0136 .0112 .0092 .0075 .0062 .0051
|s|=2: .3274 .2689 .2209 .1814 .1490 .1223 .1005 .0825 .0678 .0557 .0457 .0375 .0308 .0253 .0208 .0171 .0140 .0115 .0095 .0078 .0064 .0052 .0043 .0035 .0029
|s|=3: .1874 .1539 .1264 .1038 .0852 .0700 .0575 .0472 .0388 .0319 .0262 .0215 .0176 .0145 .0119 .0098 .0080 .0066 .0054 .0044 .0037 .0030 .0025 .0020 .0017
|s|=4: .1072 .0880 .0723 .0594 .0488 .0401 .0329 .0270 .0222 .0182 .0150 .0123 .0101 .0083 .0068 .0056 .0046 .0038 .0031 .0025 .0021 .0017 .0014 .0012 .0010
|s|=5: .0613 .0504 .0414 .0340 .0279 .0229 .0188 .0155 .0127 .0104 .0086 .0070 .0058 .0047 .0039 .0032 .0026 .0022 .0018 .0015 .0012 .0010 .0008 .0007 .0005
|s|=6: .0351 .0288 .0237 .0194 .0160 .0131 .0108 .0088 .0073 .0060 .0049 .0040 .0033 .0027 .0022 .0018 .0015 .0012 .0010 .0008 .0007 .0006 .0005 .0004 .0003
|s|=7: .0201 .0165 .0135 .0111 .0091 .0075 .0062 .0051 .0042 .0034 .0028 .0023 .0019 .0016 .0013 .0010 .0009 .0007 .0006 .0005 .0004 .0003 .0002 .0002 .0002
130
Appendix Table (A.47b).  The fitted autocorrelation according to the
modified Besag's function for plots of size 3 x 1 for onion data.

 t      s=0      s=1      s=2      s=3      s=4      s=5      s=6      s=7
 0     1        .5269    .2776    .1462    .0771    .0406    .0214    .0113
 1      .8060   .4247    .2237    .1179    .0621    .0327    .0172    .0091
 2      .6496   .3423    .1803    .0950    .0501    .0264    .0139    .0073
 3      .5236   .2759    .1453    .0766    .0403    .0213    .0112    .0059
 4      .4220   .2224    .1171    .0617    .0325    .0171    .0090    .0048
 5      .3402   .1792    .0944    .0497    .0262    .0138    .0073    .0038
 6      .2742   .1445    .0761    .0401    .0211    .0111    .0059    .0031
 7      .2210   .1164    .0613    .0323    .0170    .0090    .0047    .0025
 8      .1781   .0938    .0494    .0260    .0137    .0072    .0038    .0020
 9      .1436   .0756    .0399    .0210    .0111    .0058    .0031    .0016
10      .1157   .0610    .0321    .0169    .0089    .0047    .0025    .0013
11      .0933   .0491    .0259    .0136    .0072    .0038    .0020    .0011
12      .0752   .0396    .0209    .0110    .0058    .0031    .0016    .0008
13      .0606   .0319    .0168    .0089    .0047    .0025    .0013    .0007
14      .0488   .0257    .0136    .0071    .0038    .0020    .0010    .0006
15      .0394   .0207    .0109    .0058    .0030    .0016    .0008    .0004
16      .0317   .0167    .0088    .0046    .0024    .0013    .0007    .0004
17      .0256   .0135    .0071    .0037    .0020    .0010    .0005    .0003
18      .0206   .0109    .0057    .0030    .0016    .0008    .0004    .0002
19      .0166   .0088    .0046    .0024    .0013    .0007    .0004    .0002
20      .0134   .0071    .0037    .0020    .0010    .0005    .0003    .0002
21      .0108   .0057    .0030    .0016    .0008    .0004    .0002    .0001
22      .0087   .0046    .0024    .0013    .0007    .0004    .0002    .0001
23      .0070   .0037    .0019    .0010    .0005    .0003    .0001    .0001
24      .0057   .0030    .0016    .0008    .0004    .0002    .0001    .0001
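As a numerical cross-check (an observation about the printed values, not a formula quoted from the thesis), the entries of Appendix Table (A.47b) agree, to the four decimals shown, with a separable product of per-direction decay rates read off the first row and first column of the table. A minimal sketch under that assumption; the names `lam_s` and `lam_t` are illustrative:

```python
# Spot-check: Appendix Table (A.47b) is numerically consistent with
#   rho(s, t) ~= lam_s**|s| * lam_t**|t|,
# where lam_s = rho(1, 0) and lam_t = rho(0, 1) are taken from the table.
lam_s = 0.5269   # entry at (s, t) = (1, 0)
lam_t = 0.8060   # entry at (s, t) = (0, 1)

def rho(s, t):
    """Separable approximation to the tabulated autocorrelation."""
    return lam_s ** abs(s) * lam_t ** abs(t)

# Compare against a few tabulated cells (4-decimal rounding).
checks = {(1, 1): 0.4247, (2, 0): 0.2776, (2, 2): 0.1803, (3, 1): 0.1179}
for (s, t), table_value in checks.items():
    assert abs(rho(s, t) - table_value) < 5e-4, (s, t)
```

The same check succeeds for the other fitted tables in this appendix, with crop-specific rates.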
Appendix Table (A.48a).  The fitted autocorrelation according to the
modified Quenouille's function for plots of size 3 x 1 for wheat data.

|t|    |s|=0    |s|=1    |s|=2    |s|=3    |s|=4    |s|=5    |s|=6    |s|=7
 0    1         .4166    .1736    .0723    .0301    .0126    .0052    .0022
 1     .4296    .1790    .0746    .0311    .0129    .0054    .0022    .0009
 2     .1845    .0769    .0320    .0133    .0056    .0023    .0010    .0004
 3     .0793    .0330    .0138    .0057    .0024    .0010    .0004    .0002
 4     .0340    .0142    .0059    .0025    .0010    .0004    .0002    .0001
 5     .0146    .0061    .0025    .0011    .0004    .0002    .0001    .0000
 6     .0063    .0026    .0011    .0005    .0002    .0001    .0000    .0000
 7     .0027    .0011    .0005    .0002    .0001    .0000    .0000    .0000
 8     .0012    .0005    .0002    .0001    .0000    .0000    .0000    .0000
 9     .0005    .0002    .0001    .0000    .0000    .0000    .0000    .0000
10     .0002    .0001    .0000    .0000    .0000    .0000    .0000    .0000
11     .0001    .0000    .0000    .0000    .0000    .0000    .0000    .0000
12     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
13     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
14     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
15     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
16     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
17     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
18     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
19     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
20     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
21     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
22     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
23     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
24     .0000    .0000    .0000    .0000    .0000    .0000    .0000    .0000
Appendix Table (A.48b).  The fitted autocorrelation according to the
modified Besag's function for plots of size 3 x 1 for wheat data.

 t      s=0      s=1      s=2      s=3      s=4      s=5      s=6      s=7
 0     1        .7830    .6131    .4801    .3759    .2943    .2305    .1805
 1      .5641   .4417    .3459    .2708    .2121    .1660    .1300    .1018
 2      .3183   .2492    .1951    .1528    .1196    .0937    .0733    .0574
 3      .1795   .1406    .1101    .0862    .0675    .0528    .0414    .0324
 4      .1013   .0793    .0621    .0486    .0381    .0298    .0233    .0183
 5      .0571   .0447    .0350    .0274    .0215    .0168    .0132    .0103
 6      .0322   .0252    .0198    .0155    .0121    .0095    .0074    .0058
 7      .0182   .0142    .0111    .0087    .0068    .0054    .0042    .0033
 8      .0103   .0080    .0063    .0049    .0039    .0030    .0024    .0019
 9      .0058   .0045    .0035    .0028    .0022    .0017    .0013    .0010
10      .0033   .0026    .0020    .0016    .0012    .0010    .0008    .0006
11      .0018   .0014    .0011    .0009    .0007    .0005    .0004    .0003
12      .0010   .0008    .0006    .0005    .0004    .0003    .0002    .0002
13      .0006   .0005    .0004    .0003    .0002    .0002    .0001    .0001
14      .0003   .0003    .0002    .0002    .0001    .0001    .0001    .0001
15      .0002   .0001    .0001    .0001    .0001    .0001    .0000    .0000
16      .0001   .0001    .0001    .0001    .0000    .0000    .0000    .0000
17      .0001   .0000    .0000    .0000    .0000    .0000    .0000    .0000
18      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
19      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
20      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
21      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
22      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
23      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
24      .0000   .0000    .0000    .0000    .0000    .0000    .0000    .0000
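The two wheat tables, Appendix Tables (A.48a) and (A.48b), can be compared the same way (again an observation about the printed values, not a formula quoted from the thesis): both are numerically consistent with products of per-direction rates read off each table's first row and column, but the fitted Quenouille correlations decay much faster than the Besag ones, and the two directions are far from interchangeable. A minimal sketch under that product-form assumption:

```python
# Per-direction decay rates read off the wheat tables:
# (lam_s, lam_t) = (rho(1,0), rho(0,1)).
params = {
    "Quenouille (A.48a)": (0.4166, 0.4296),
    "Besag (A.48b)":      (0.7830, 0.5641),
}
# A few interior cells of each table, to 4 decimals.
table_values = {
    "Quenouille (A.48a)": {(1, 1): 0.1790, (2, 1): 0.0746},
    "Besag (A.48b)":      {(1, 1): 0.4417, (2, 1): 0.3459},
}
for name, (lam_s, lam_t) in params.items():
    for (s, t), v in table_values[name].items():
        # Check the separable product against the tabulated value.
        assert abs(lam_s ** s * lam_t ** t - v) < 5e-4, (name, s, t)
```

Note also that the anisotropy runs the opposite way for the onion data in Table (A.47b), where the t-direction rate (.8060) exceeds the s-direction rate (.5269).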
Appendix Table (A.49a).  Observed and fitted autocorrelations for sorghum
data, plots of size 4 x 1, 2 x 1 and 1 x 4 units, at lags (s, 0).

                    s      0      1      2      3      4      5      6      7
4 x 1  Observed            1    .612   .470   .430   .492   .227
       Fitted ρ̂1.1(s)(a)       .535   .212   .078   .026   .009
              ρ̂2.1(s)          .783   .479   .330   .249   .200
              ρ̂1.2(s)          .612   .243   .089   .030   .010
              ρ̂2.2(s)          .612   .373   .258   .195   .156
1 x 4  Observed            1    .571   .475   .501   .326   .318
       Fitted ρ̂1.1(s)          .412   .108   .032   .008   .002
              ρ̂2.1(s)          .824   .484   .332   .250   .200
              ρ̂1.2(s)          .571   .150   .044   .011   .003
              ρ̂2.2(s)          .571   .335   .230   .173   .139
2 x 1  Observed            1    .556   .471   .457   .418   .356   .319   .294
       Fitted ρ̂1.1(s)          .479   .165   .052   .017   .005   .001     0
              ρ̂2.1(s)          .802   .480   .331   .250   .200   .166   .143
              ρ̂1.2(s)          .556   .192   .060   .020   .006   .001     0
              ρ̂2.2(s)          .556   .333   .229   .173   .139   .115   .099

(a) Refer to pages ___ and ___ for the definition of these autocorrelations.
Appendix Table (A.49b).  Observed and fitted autocorrelations for wheat
data, plots of size 4 x 1, 2 x 1 and 1 x 4 units, at lags (s, 0).

                    s      0      1      2      3      4      5      6      7
4 x 1  Observed            1    .097   .171   .100   .136   .143
       Fitted ρ̂1.1(s)(a)        0      0      0      0      0
              ρ̂2.1(s)          .986   .500   .333   .250   .200
              ρ̂1.2(s)          .097    0      0      0      0
              ρ̂2.2(s)          .097   .050   .034   .026   .020
2 x 1  Observed            1    .089   .061   .070   .071   .098   .072   .069
       Fitted ρ̂1.1(s)           0      0      0      0      0      0      0
              ρ̂2.1(s)          .989   .499   .333   .250   .200   .167   .143
              ρ̂1.2(s)          .089    0      0      0      0      0      0
              ρ̂2.2(s)          .089   .045   .030   .022   .018   .015   .013
1 x 4  Observed            1    .168   .163   .259   .128   .063   .186   .043
       Fitted ρ̂1.1(s)          .112   .006    0      0      0      0      0
              ρ̂2.1(s)          .919   .497   .332   .250   .200   .167   .143
              ρ̂1.2(s)          .168   .009    0      0      0      0      0
              ρ̂2.2(s)          .168   .091   .061   .046   .037   .031   .026

(a) Refer to pages ___ and ___ for the definition of these autocorrelations.
Appendix Table (A.49c).  Observed and fitted autocorrelations for onion
data, plots of size 4 x 1, 2 x 1 and 1 x 4 units, at lags (s, 0).

                    s      0      1      2      3      4      5      6      7
4 x 1  Observed            1    .485   .384   .306   .312   .192
       Fitted ρ̂1.1(s)(a)       .410   .116   .029   .007   .002
              ρ̂2.1(s)          .825   .485   .332   .250   .200
              ρ̂1.2(s)          .485   .137   .034   .008   .002
              ρ̂2.2(s)          .485   .285   .195   .147   .118
2 x 1  Observed            1    .393   .294   .221   .269   .274   .244   .183
       Fitted ρ̂1.1(s)          .429   .116   .032   .008   .002   .001     0
              ρ̂2.1(s)          .820   .484   .332   .250   .200   .167   .143
              ρ̂1.2(s)          .393   .127   .035   .009   .002   .001     0
              ρ̂2.2(s)          .393   .232   .159   .120   .090   .080   .068
1 x 4  Observed            1    .493   .281   .222   .186   .268   .346   .304
       Fitted ρ̂1.1(s)          .147   .012   .001    0      0      0      0
              ρ̂2.1(s)          .904   .495   .332   .250   .200   .167   .143
              ρ̂1.2(s)          .493   .040   .003    0      0      0      0
              ρ̂2.2(s)          .493   .270   .181   .136   .109   .091   .078

(a) Refer to pages ___ and ___ for the definition of these autocorrelations.
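The observed entries in Appendix Tables (A.49a)-(A.49c) are sample autocorrelations of plots formed by combining adjacent field units. A minimal sketch of that kind of computation, assuming the uniformity-trial yields are available as a NumPy array; `block_means` and `row_autocorr` are illustrative names, not routines from the thesis, and the estimator shown is one common choice rather than necessarily the one used here:

```python
import numpy as np

def block_means(grid, rows, cols):
    """Aggregate a uniformity grid into plots of size rows x cols
    (grid dimensions are assumed divisible by the block shape)."""
    m, n = grid.shape
    return grid.reshape(m // rows, rows, n // cols, cols).mean(axis=(1, 3))

def row_autocorr(plots, s):
    """Sample autocorrelation at lag (s, 0): plots s columns apart
    in the same row, with the overall plot mean removed."""
    x = plots - plots.mean()
    if s == 0:
        return 1.0
    return float((x[:, :-s] * x[:, s:]).sum() / (x * x).sum())

# Illustration on synthetic data (the tables use the actual yields).
rng = np.random.default_rng(0)
field = rng.normal(size=(24, 24))     # stand-in for a yield grid
plots = block_means(field, 2, 1)      # 2 x 1 plots, as in the tables
r1 = row_autocorr(plots, 1)           # observed rho at lag (1, 0)
```

Repeating this with block shapes (4, 1) and (1, 4) gives the remaining observed rows of the tables.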