Nasoetion, A.H. (1965). An evaluation of two procedures to estimate genetic and environmental parameters in a simultaneous selfing and partial diallel test-crossing design.

AN EVALUATION OF TWO PROCEDURES TO ESTIMATE GENETIC
AND ENVIRONMENTAL PARAMETERS IN A SIMULTANEOUS SELFING
AND PARTIAL DIALLEL TEST CROSSING DESIGN

Andi Hakim Nasoetion
Institute of Statistics
Mimeograph Series No. 425
1965
TABLE OF CONTENTS

LIST OF TABLES .......................................................... vii
LIST OF FIGURES ......................................................... ix
0.  INTRODUCTION ........................................................ 1
1.  REVIEW OF LITERATURE ................................................ 2
    1.1  The Mating Design .............................................. 2
    1.2  Genetic Interpretation of Design Components of Variance ....... 7
    1.3  Estimation of Genetic and Environmental Variances from Experimental Data ... 8
2.  ESTIMATION OF THE GENETIC AND ENVIRONMENTAL PARAMETERS .............. 11
    2.1  Methods of Estimation .......................................... 11
    2.2  The Design Matrices C1 and C2 .................................. 12
    2.3  The Covariance Matrices of M1 and M2 ........................... 15
    2.4  Point Estimators Generated by Method I ......................... 17
    2.5  Point Estimators Generated by Method II ........................ 21
    2.6  Point Estimators Generated by Method III ....................... 23
    2.7  The Generalized Form of the Point Estimators ................... 23
    2.8  The Covariance Matrix of the Estimators ........................ 24
3.  NUMERICAL EVALUATION OF THE COVARIANCE MATRIX OF THE ESTIMATORS .... 25
    3.1  Nature of the Variances and Covariances of the Estimators ..... 25
    3.2  Programming of the Covariance Matrix of the Estimators ........ 25
    3.3  Choice of the Parameter Points ................................. 29
4.  DISCUSSION .......................................................... 50
    4.1  Conservation of Pattern of the Covariance Matrix of the Estimators ... 50
    4.2  Evaluation of the Numerical Results ............................ 52
    4.3  Allocation of the Design Parameters ............................ 53
    4.4  The Variance of σ̂²_A ........................................... 53
    4.5  The Variance of σ̂²_D ........................................... 54
    4.6  The Variance of σ̂²_AA .......................................... 55
    4.7  The Variances of σ̂²_1e and σ̂²_2e ............................... 56
    4.8  The Covariance of the Estimators ............................... 57
    4.9  The Generalized Variance ....................................... 58
    4.10 Estimation of Parameters for Small Values of σ²_D and σ²_AA .... 58
    4.11 Influence of Block Size on the Magnitude of Environmental Variance ... 59
5.  SUMMARY AND CONCLUSIONS ............................................. 60
6.  LIST OF REFERENCES .................................................. 65
7.  APPENDIX: THE MEAN PRODUCT OF SELF AND BIPARENTAL PROGENY MEANS ..... 67
LIST OF TABLES

1.1.  Analysis of variance of self progenies ............................ 4
1.2.  Analysis of variance of biparental progenies ...................... 6
1.3.  Analysis of covariance of biparental and self progeny means ....... 6
3.1.  Relative magnitudes of the genetic and environmental variances, selected as representatives of the parameter space ... 30
3.2.  Possible allocations of m, d and r for t = 480 .................... 32
3.3.  Variances of estimators of genetic variances by Methods I, II and III and the corresponding generalized variances for various parametric combinations ... 33
      3.3.1.  (P1 = 1/2, P2 = 1, P3 = 1, P4 = 1) ........................ 34
      3.3.2.  (P1 = 1/2, P2 = 0, P3 = 1, P4 = 1) ........................ 35
      3.3.3.  (P1 = 1/2, P2 = 1, P3 = 0, P4 = 1) ........................ 36
      3.3.4.  (P1 = 1/2, P2 = 0, P3 = 0, P4 = 1) ........................ 37
      3.3.5.  (P1 = 1/2, P2 = 0, P3 = 0, P4 = 1/2) ...................... 38
3.4.  Covariances of the estimators by Methods I, II and III for various parametric combinations ... 39
      3.4.1.  (P1 = 1/2, P2 = 1, P3 = 1, P4 = 1) ........................ 40
      3.4.2.  (P1 = 1/2, P2 = 0, P3 = 1, P4 = 1) ........................ 42
      3.4.3.  (P1 = 1/2, P2 = 1, P3 = 0, P4 = 1) ........................ 44
      3.4.4.  (P1 = 1/2, P2 = 0, P3 = 0, P4 = 1) ........................ 46
      3.4.5.  (P1 = 1/2, P2 = 0, P3 = 0, P4 = 1/2) ...................... 48
5.1.  Choice of estimation method and its corresponding allocation of design parameters ... 64
Analysis of variance of self progeny means (Appendix) ................... 67
Analysis of variance of biparental progeny means (Appendix) ............. 68

LIST OF FIGURES

1.1.  Simultaneous selfing and partial diallel test crossing mating design (m = 4) ... 3
0.  INTRODUCTION

In 1960, a combination design was introduced by Matzinger, Mann and Robinson [13], designated as a simultaneous selfing and diallel test crossing design.
It allows estimation of additive, dominance and
additive by additive genetic variance, together with early generation
evaluation of selfed progeny.
Several methods of estimation, applied to data obtained from this
design, will, under the assumption of a limited genetic model
(i.e.
absence of third and higher order epistatic variances) lead to unbiased
estimators of the genetic variances, which are not necessarily equally
efficient.
Moreover, variation of the design parameters influences
the magnitude of the sampling variances of the estimators.
The present study is an attempt to evaluate the efficiency of two
estimating procedures, assuming normality of the data obtained from
the experiment.
1.
REVIEW OF LITERATURE
1.1
The Mating Design
The current procedure for estimating genetic variances starts by generating relatives through some system of mating, the mating design [3;4].  A review of commonly used mating designs in relation to genetic variance estimation is given in [4].

The combination design introduced in [13], and to be studied in this dissertation, is basically an amalgamation of Comstock and Robinson's Design II [5;6] -- called a factorial mating design in [4] -- and a selfing design.  An advantage of using this design in comparison to Design II is the possibility of estimating σ²_AA in addition to σ²_A and σ²_D.  Moreover, simultaneous selection of pure lines is also possible [11].  The interpretation and analysis of this design for a special case are given by Matzinger and Cockerham [12].

The description of the experiment for the general case can be summarized as follows:  Random individuals from the reference population are chosen as parents.  The mating design is composed of multiples of 2m parent plants; m are designated as male parents, numbered 1 through m, and m as female parents, numbered (m+1) through 2m.  Each of these parental plants is selfed to give self progeny, X_i or X_j.  At the same time all possible crosses are made between the two sets of m parental plants, yielding biparental progenies or full-sib families designated C_ij (Figure 1.1).  From additional samples of 2m parental plants, d sets of these mating designs are formed.
[Figure 1.1 appears here: a diagram of the mating design for m = 4, showing the self progenies X_1, ..., X_8 of the eight parents and the sixteen biparental progenies C_51 through C_84 obtained by crossing the parents numbered 5-8 with the parents numbered 1-4.]

Figure 1.1.  Simultaneous selfing and partial diallel test crossing mating design (m = 4)
In the field each set is assigned to a block at random, each block consisting of the m² full-sib families, 2m self families, the 2m parents of the original cross and their F1 and F2 generations.  Each block is comprised of r replications.  The inclusion of the original parents and the F1 and F2 generations is to provide information on heterosis and inbreeding depression and to aid in making block adjustments to compensate for soil heterogeneity among blocks, preparatory to making selections of parents for advanced generations.
In this design separate analyses of variance of the two groups of materials are obtained.  The model for self progenies is

    Y_1ijk = μ_1 + d_1i + r_1ij + s_1ik + e_1ijk                          (1.1.1)

where
    μ_1    = the mean for self progenies,
    d_1i   = the ith block effect for self progenies,
    r_1ij  = the jth replicate effect in the ith block for self progenies,
    s_1ik  = the effect of the kth self progeny grown in the ith block and
    e_1ijk = the experimental error associated with the Y_1ijk observation.

The components of variance of interest are
    σ²_1e = self progeny error variance and
    σ²_1s = variance of effects of self progenies.
The analysis of variance associated with this model is given in
Table 1.1.
Table 1.1.  Analysis of variance of self progenies

Source of variation    Degrees of freedom    Expectation of mean square
Blocks                 d-1                   M11 = σ²_1e + 2mσ²_1r + 2rmσ²_1d
Reps/Blocks            d(r-1)                M12 = σ²_1e + 2mσ²_1r
Selfs/Blocks           d(2m-1)               M13 = σ²_1e + rσ²_1s
  Males/Blocks         d(m-1)                M14 = σ²_1e + rσ²_1m
  Females/Blocks       d(m-1)                M15 = σ²_1e + rσ²_1f
  M vs F/Blocks        d                     M16 = σ²_1e + rσ²_1mf
Error                  d(r-1)(2m-1)          M17 = σ²_1e
Total                  2drm-1
The model for biparental progenies is

    Y_2ijkl = μ_2 + d_2i + r_2ij + m_2ik + f_2il + mf_2ikl + e_2ijkl        (1.1.2)

where
    μ_2     = the mean for biparental progenies,
    d_2i    = the ith block effect for biparental progenies,
    r_2ij   = the effect of the jth replicate in the ith block for biparental progenies,
    m_2ik   = the effect of the kth male parent,
    f_2il   = the effect of the lth female parent,
    mf_2ikl = the interaction effect of the kth male and the lth female parents and
    e_2ijkl = the experimental error associated with the Y_2ijkl observation.

The analysis of biparental progenies has been described by Comstock and Robinson [5], and interpreted in detail by Cockerham [3].  A presentation of the analysis of variance of biparental progenies is given in Table 1.2.  In this case, the components of variance of interest are
    σ²_2e  = biparental progeny error variance,
    σ²_2mf = variance due to interactions of effects among male and female parents and
    σ²_2p  = average variance of male and female parental effects.

A co-analysis is also made of the self and biparental progenies.  The mean product is computed as r√m times the covariance of the self and half-sib family means from the same parents, and is an estimate of r√m Cov(HS,S)
Table 1.2.  Analysis of variance of biparental progenies

Source of variation    Degrees of freedom    Expectation of mean square
Blocks                 d-1
Reps/Blocks            d(r-1)
Parents/Blocks         2d(m-1)               M23 = σ²_2e + rσ²_2mf + rmσ²_2p
  Males/Blocks         d(m-1)                M24 = σ²_2e + rσ²_2mf + rmσ²_2m
  Females/Blocks       d(m-1)                M25 = σ²_2e + rσ²_2mf + rmσ²_2f
M x F/Blocks           d(m-1)²               M26 = σ²_2e + rσ²_2mf
Error                  d(r-1)(m²-1)          M27 = σ²_2e
Total                  drm²-1
Table 1.3.  Analysis of covariance of biparental and self progeny means

Source of variation    Degrees of freedom    Expectation of mean product
Parents/Blocks         2d(m-1)               MP3 = r√m Cov(HS,S)
  Males/Blocks         d(m-1)                MP4 = r√m Cov(HS,S)_m
  Females/Blocks       d(m-1)                MP5 = r√m Cov(HS,S)_f
(see Table 1.3).
Reasons for this definition of the mean product are
stated in 7.1.
1.2  Genetic Interpretation of Design Components of Variance

Under the assumptions of:
  1)  normal diploid segregation,
  2)  no maternal or paternal effects,
  3)  gene frequency of 0.5 for all segregating loci,
  4)  no linkage,
  5)  no epistatic variability except σ²_AA, i.e. the total genetic variance consists only of additive, dominance and additive by additive genetic variance,
and the validity of the following relationships:
  σ²_1s     = covariance of self-sibs (self offspring from the same parent),
  σ²_2p     = covariance of half-sibs (biparental offspring with one parent common),
  σ²_2mf    = covariance of full-sibs (biparental offspring with both parents common) minus twice the covariance of half-sibs and
  Cov(HS,S) = covariance of biparental and self-sibs,
the design components of variance can be translated into components of genetic variance [3;12].  With non-inbred parents the translation is:

    | σ²_1s     |   | 1     1/4   1    |
    | σ²_2p     |   | 1/4   0     1/16 |   | σ²_A  |
    | σ²_2mf    | = | 0     1/4   1/8  | . | σ²_D  |                        (1.2.1)
    | Cov(HS,S) |   | 1/2   0     1/4  |   | σ²_AA |
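As a quick numerical check of the translation, the design components implied by a chosen set of genetic variances can be computed directly; the sketch below is illustrative only, and the unit values of σ²_A, σ²_D and σ²_AA are an arbitrary example.

```python
# Numerical illustration of the translation (1.2.1): the design
# components of (co)variance implied by unit genetic variances
# (sigma2_A = sigma2_D = sigma2_AA = 1 is an arbitrary example).
import numpy as np

T = np.array([[1.0, 1/4, 1.0 ],    # sigma2_1s
              [1/4, 0.0, 1/16],    # sigma2_2p
              [0.0, 1/4, 1/8 ],    # sigma2_2mf
              [1/2, 0.0, 1/4 ]])   # Cov(HS,S)
print(T @ np.array([1.0, 1.0, 1.0]))   # -> [2.25, 0.3125, 0.375, 0.75]
```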
1.3  Estimation of Genetic and Environmental Variances from Experimental Data

A mating design supplies the experimenter with data in the form of variances or covariances of metrical characters.  Let M̂ be a vector of observed values supplied by the experiment.  Then M̂ can generally be expressed as

    M̂ = Cβ + ε                                                             (1.3.1)

where E(ε) = 0, β is the vector of genetic and environmental components of variance to be estimated and C is a matrix determined by the mating design and the physical layout of the experiment [9].
The earliest attempt to estimate β based on M̂ was made by Mather [10], who suggested an unweighted least squares approach, i.e. assuming E(εε') = σ²I, which is usually not true.  Using another mating design, Nelder [15] advocates weighting with the inverse of the variances of M̂.  This type of estimation will be referred to as the weighted least squares method.  Because the variances are unknown, this method is an iterative method, which is also the case with Hayman's maximum likelihood method [9].  The latter is in fact also a weighted least squares method, but taking correlations into account.  Matzinger and Cockerham denote this method as an iterative weighted correlated least squares estimation [12].

Cooke et al. [7] applied the weighted and the weighted correlated least squares methods to experimental data and found no significant differences between the results.
From a purely statistical standpoint, the problem of least squares
estimation when heteroscedasticity and correlations are present was
first discussed by Aitken [1].
A current review on joint estimation of
several parameters by this generalized least squares approach is given
by Goldberger in his text [8].
Some concepts and theorems related to
the present study will be cited.
Let β' = (β_1, β_2, ..., β_k) be a vector of parameters and b' = (b_1, b_2, ..., b_k) the corresponding estimator vector.  Then
  1)  b is an unbiased estimator of β if E(b) = β,
  2)  b is a minimum variance estimator of β if the difference between the covariance matrix of b̃ and that of b is non-negative definite, where b̃ is any other unbiased estimator of β,
  3)  b is a best unbiased (or efficient) estimator if 1) and 2) hold jointly,
  4)  a best unbiased estimator minimizes the generalized variance within the class of unbiased estimators,
  5)  because each diagonal element of a non-negative definite matrix is either positive or zero, 2) implies that the variance of every estimator b_i (i = 1, 2, ..., k) is also minimized and
  6)  b is a best linear unbiased estimator of β if b is a linear estimator (i.e., a linear form in the sample observations), unbiased, and is the minimum variance estimator within the class of linear unbiased estimators of β.
Now consider the following generalized linear regression model:

    y = Xβ + ε                                                             (1.3.2)

where

    E(ε) = 0                                                               (1.3.3)
    E(εε') = σ²Σ, with Σ positive definite                                 (1.3.4)
    X is an n x k matrix of fixed values with rank k < n                   (1.3.5)

Then, the following theorems are applicable.

Theorem 1.3.1:  In the generalized linear regression model, the best linear unbiased estimator vector of β is

    b = (X'Σ⁻¹X)⁻¹X'Σ⁻¹y                                                   (1.3.6)

whose covariance matrix is

    V = σ²(X'Σ⁻¹X)⁻¹                                                       (1.3.7)

Theorem 1.3.2:  Application of classical least squares estimation to (1.3.2) when in fact the generalized linear regression model is appropriate yields

    b* = (X'X)⁻¹X'y

as the unbiased estimator vector of β, with covariance matrix

    V* = σ²(X'X)⁻¹(X'ΣX)(X'X)⁻¹.                                           (1.3.8)
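The contrast between the two theorems is easy to see numerically.  The following sketch is illustrative only; the design matrix X, the error covariance Σ and the observations y below are hypothetical and not taken from the experiment.

```python
# Sketch of Theorems 1.3.1 and 1.3.2: generalized (best linear unbiased)
# least squares versus classical least squares when the errors are
# heteroscedastic.  X, Sigma and y are hypothetical; sigma^2 = 1.
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
X = rng.normal(size=(n, k))                    # fixed values, rank k < n
beta = np.array([1.0, -2.0, 0.5])              # true parameter vector
Sigma = np.diag(np.linspace(0.5, 4.0, n))      # positive definite E(ee')
y = X @ beta + rng.multivariate_normal(np.zeros(n), Sigma)

Sigma_inv = np.linalg.inv(Sigma)

# Theorem 1.3.1: b = (X' S^-1 X)^-1 X' S^-1 y, covariance (X' S^-1 X)^-1
b_gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)
V_gls = np.linalg.inv(X.T @ Sigma_inv @ X)

# Theorem 1.3.2: b* = (X'X)^-1 X' y, covariance (X'X)^-1 (X' S X) (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
V_ols = XtX_inv @ (X.T @ Sigma @ X) @ XtX_inv

# Both estimators are unbiased; V_ols - V_gls is non-negative definite,
# so every diagonal element of the difference is >= 0.
print("diag(V_ols - V_gls) =", np.diag(V_ols - V_gls).round(4))
```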
2.  ESTIMATION OF THE GENETIC AND ENVIRONMENTAL PARAMETERS

2.1  Methods of Estimation

The analyses of variance and covariance (see Tables 1.1, 1.2 and 1.3) supply the following vector of observed mean squares and products:

    M̂1' = (M̂13, M̂17, M̂23, M̂26, M̂27, M̂P3)                                  (2.1.1)

Moreover, by equating these observed values to their corresponding expected values and solving for the latter, the following vector of design components of variance and covariance can be obtained:

    M̂2' = (σ̂²_1s, σ̂²_2mf, σ̂²_2p, Côv(HS,S), σ̂²_1e, σ̂²_2e)                  (2.1.2)

In the sense of equation (1.3.1) these two vectors can be expressed as the sum of a fixed and a random component.  Hence

    M̂1 = C1β + ε1                                                          (2.1.3)

and

    M̂2 = C2β + ε2                                                          (2.1.4)

where

    β' = (σ²_A, σ²_D, σ²_AA, σ²_1e, σ²_2e)                                 (2.1.5)

is the parameter vector of interest and E(ε1) = E(ε2) = 0.  To estimate the genetic and environmental parameter vector β, these two equations are subjected to an unweighted and a weighted correlated least squares analysis.  A listing of the two methods follows:
Method I:   Unweighted least squares estimation of the genetic and environmental parameters using observed mean squares and products, i.e. unweighted least squares analysis on (2.1.3).

Method II:  Unweighted least squares estimation of the genetic and environmental parameters using design components of variance, i.e. unweighted least squares analysis on (2.1.4).

Method I is proposed in [12] as a substitute for the estimation method utilized in [13], while Method II, though not identical with respect to the design components of variance used, is comparable to the method applied by Matzinger et al. in [13].

These two methods are compared to the weighted correlated least squares procedure on either observed mean squares and products or design components of variance.  As is mentioned in 1.3, this theoretical procedure yields a set of estimators with minimum variances and generalized variance.  For ease of reference, this procedure, although not exactly a method of estimation, is called Method III.
2.2  The Design Matrices C1 and C2

Let M1 and M2 respectively be the population equivalents of M̂1 and M̂2, i.e.

    M1 is the vector of expected mean squares and products as defined in Tables 1.1, 1.2 and 1.3,    (2.2.1)

and

    M2 is the vector of population components of variance.                                           (2.2.2)

In terms of (2.1.3) and (2.1.4) the following is true:

    M1 = C1β                                                               (2.2.3)
and
    M2 = C2β                                                               (2.2.4)

From Tables 1.1, 1.2 and 1.3, the following relation holds between the vector of expected mean squares and products and the vector of genetic and environmental parameters:

    M1 = Aβ                                                               (2.2.5)

where

                | r        r/4    r             1    0 |
                | 0        0      0             1    0 |
    A       =   | rm/4     r/4    r(m+2)/16     0    1 |
    (6 x 5)     | 0        r/4    r/8           0    1 |
                | 0        0      0             0    1 |
                | r√m/2    0      r√m/4         0    0 |

Hence

    C1 = A                                                                (2.2.6)

Between population components of variance and covariance and expectations of mean squares and products the following relation is also true:

    M2 = K M1                                                             (2.2.7)

where

                | 1/r   -1/r    0         0         0      0       |
                | 0      0      0         1/r      -1/r    0       |
    K       =   | 0      0      1/(rm)   -1/(rm)    0      0       |
    (6 x 6)     | 0      0      0         0         0      1/(r√m) |
                | 0      1      0         0         0      0       |
                | 0      0      0         0         1      0       |

Substitution of (2.2.5) into the right-hand side of (2.2.7) yields

    M2 = KAβ                                                              (2.2.8)

Hence

                   | 1      1/4    1       0    0 |
                   | 0      1/4    1/8     0    0 |
    C2 = KA    =   | 1/4    0      1/16    0    0 |                       (2.2.8a)
                   | 1/2    0      1/4     0    0 |
                   | 0      0      0       1    0 |
                   | 0      0      0       0    1 |

By definition, M̂2 can be obtained by substituting M̂1 for M1 in the right-hand side of (2.2.7), i.e.

    M̂2 = K M̂1                                                             (2.2.9)

This implies ε2 = K ε1.  To summarize the results, equations (2.1.3) and (2.1.4) can now be rewritten as

    M̂1 = Aβ + ε1    and    M̂2 = KAβ + Kε1.
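The relations of this section are easy to tabulate for particular design values.  The following sketch is a minimal illustration only: r = 2 and m = 4 are arbitrary example values, and A and K are written as reconstructed above.

```python
# Sketch of the design matrices of Section 2.2 for arbitrary example
# values r = 2, m = 4.  Rows of A correspond to (M13, M17, M23, M26,
# M27, MP3); rows of K and of C2 = K A correspond to (sigma2_1s,
# sigma2_2mf, sigma2_2p, Cov(HS,S), sigma2_1e, sigma2_2e); columns of A
# and C2 correspond to (sigma2_A, sigma2_D, sigma2_AA, sigma2_1e, sigma2_2e).
import numpy as np

def design_matrices(r, m):
    s = np.sqrt(m)
    A = np.array([
        [r,       r/4, r,           1, 0],
        [0,       0,   0,           1, 0],
        [r*m/4,   r/4, r*(m+2)/16,  0, 1],
        [0,       r/4, r/8,         0, 1],
        [0,       0,   0,           0, 1],
        [r*s/2,   0,   r*s/4,       0, 0],
    ])
    K = np.array([
        [1/r, -1/r,  0,        0,        0,    0      ],
        [0,    0,    0,        1/r,     -1/r,  0      ],
        [0,    0,    1/(r*m), -1/(r*m),  0,    0      ],
        [0,    0,    0,        0,        0,    1/(r*s)],
        [0,    1,    0,        0,        0,    0      ],
        [0,    0,    0,        0,        1,    0      ],
    ])
    return A, K

A, K = design_matrices(r=2, m=4)
C2 = K @ A
# The first four rows of C2 carry the translation coefficients of (1.2.1);
# the last two rows pick out the error variances.
print(C2)
```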
2.3  The Covariance Matrices of M̂1 and M̂2

Let Δ1 and Δ2 respectively be the covariance matrices of M̂1 and M̂2; then (2.2.9) implies the following relation between the two matrices:

    Δ2 = K Δ1 K'                                                          (2.3.1)

Under the assumption of normality of experimental errors in (1.1.1) and (1.1.2), Δ1 can be calculated using the properties of Wishart's distribution [14].  An appropriate formula is given by Mode and Robinson [15]:

    Cov(a_ij, a_kl) = (σ_ik σ_jl + σ_il σ_jk)/p                           (2.3.2)

where a_ij represents an observed mean square if i = j and an observed mean product if i ≠ j, with p degrees of freedom.

The variances of the observed mean squares and products can be calculated by using special cases of (2.3.2), i.e. if i = j = k = l or if i = k ≠ j = l:

    Var a_ii = 2σ²_ii/p
    Var a_ij = (σ_ii σ_jj + σ²_ij)/p

Hence:

    Var M̂13 = 2M13²/{d(2m-1)}
    Var M̂17 = 2M17²/{d(2m-1)(r-1)}
    Var M̂23 = 2M23²/{2d(m-1)}
    Var M̂26 = 2M26²/{d(m-1)²}
    Var M̂27 = 2M27²/{d(m²-1)(r-1)}
and

    Var M̂P3 = Var ½(M̂P4 + M̂P5)
            = ¼{Var M̂P4 + Var M̂P5 + 2 Cov(M̂P4, M̂P5)}
            = 2(M13 M23 + MP3²)/{4d(m-1)} = (M13 M23 + MP3²)/{2d(m-1)}

The covariances between a mean square and a mean product, or between two mean squares, can be calculated by further special cases of (2.3.2), i.e. if i = j = k ≠ l,

    Cov(a_ii, a_ij) = 2σ_ii σ_ij/p,

and if i = j ≠ k = l,

    Cov(a_ii, a_jj) = 2σ²_ij/p.

Therefore, for example,

    Cov(M̂23, M̂P3) = ¼ (2M24 MP4 + 2M25 MP5)/{d(m-1)} = M23 MP3/{2d(m-1)},

and the remaining non-vanishing covariances among the elements of M̂1 follow in the same way.
These results, arranged in a matrix, yield the desired covariance matrix Δ1 of M̂1: its diagonal elements are the variances listed above, its only non-zero off-diagonal elements are the covariances among M̂13, M̂23 and M̂P3 obtained from the special cases of (2.3.2), and every element carries the factor 1/d.                                   (2.3.5)
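For a given allocation (m, d, r) the diagonal of Δ1 follows directly from the formulas above.  A minimal sketch is given below; the numerical expected mean squares used in the call are arbitrary example values, not experimental results.

```python
# Sketch of the diagonal of Delta1 (the variances of the observed mean
# squares and of the mean product) for given expected values and an
# allocation (m, d, r); the numerical inputs below are hypothetical.
def delta1_diagonal(M13, M17, M23, M26, M27, MP3, m, d, r):
    return [
        2 * M13**2 / (d * (2*m - 1)),
        2 * M17**2 / (d * (2*m - 1) * (r - 1)),
        2 * M23**2 / (2 * d * (m - 1)),
        2 * M26**2 / (d * (m - 1)**2),
        2 * M27**2 / (d * (m**2 - 1) * (r - 1)),
        (M13 * M23 + MP3**2) / (2 * d * (m - 1)),
    ]

print(delta1_diagonal(M13=3.0, M17=1.0, M23=4.0, M26=1.5, M27=1.0,
                      MP3=1.2, m=4, d=5, r=2))
```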
2.4  Point Estimators Generated by Method I

By theorem 1.3.2 the estimator vector of β using Method I is

    β̂_I = (A'A)⁻¹A'M̂1                                                     (2.4.1)

where

             | r²(m²+4m+16)/16    r²(m+4)/16     r²(m²+10m+64)/64     r      rm/4       |
             | r²(m+4)/16         3r²/16         r²(m+20)/64          r/4    r/2        |
    A'A  =   | r²(m²+10m+64)/64   r²(m+20)/64    r²(m²+20m+264)/256   r      r(m+4)/16  |        (2.4.2)
             | r                  r/4            r                    2      0          |
             | rm/4               r/2            r(m+4)/16            0      3          |
and (A'A)⁻¹ = (1/x1)(y_ij), with y_ij = y_ji and

    x1  = r²m(4m² + 37m + 72),
    y11 = 16(m² + 21m + 98),
    y12 = 16(3m² + 20m + 56),
    y13 = -32(2m² + 19m + 56),
    y14 = 2r(9m² + 48m),
    y15 = -2r(9m² + 48m),
    y22 = 32(m³ + 33m² + 80m + 16),
    y23 = -64(5m² + 18m + 16),
    y24 = 4r(-2m³ + m² + 24m),
    y25 = -8r(m³ + 19m² + 48m),
    y33 = 256(m² + 5m + 8),
    y34 = -8r(7m² + 24m),
    y35 = 8r(7m² + 24m),
    y44 = r(3m³ + 37m² + 72m),
    y45 = rm³.

Therefore (A'A)⁻¹A' can be written as (1/h1) Z1, where h1 and the elements of Z1 are as follows:
~::::
-8
8
11
-8
8
12
-8
8
8
c
11
12
13
14
-8
8
14
where
b
b
b
13
b
14
-b
14
11
-(b11 +ell)
c11
f
12
-(b12 +°12)
c
12
f
-(b
)
c
f
+b )
14 14
8
+b )
14 14
°14
13
14
-(8
14
(8
hI
:'=.
rm( )+m2 +37m+72)
all
=:
2
18m 196m
b
:::: 32m2 +188m ,
11
f
8
b
+C
J
J
2
11 :::: (-8m +16m+336 )Jm
12
12
13
J
2
ell :::: - (18m 196m)
·e
13
J
:::: - (8~-4m2-96m) ,
:::: 80m2 +272m ,
2
c12 = - (8m3 +152m +384m)
f
8
b
c
f
12
13
13
13
13
J
:::: (-56m2-128m+192).jm ,
::::-
2
:::: - (64m +256m) ,
:::: 56m2 +192m ,
:::: (32m2 +16m-384)Jm ,
13
14
f
11
12
13
14
-f
14
(2.4.4)
21
_
a14 b
14
3
TIn
,
:::: 6rm2 ,
2
c14 :::: r(3~ +37m +72m)
f 14 :::: - 5rm2Jm
2.5  Point Estimators Generated by Method II

Analogous to 2.4, the estimator vector of β using Method II is

    β̂_II = (A'K'KA)⁻¹A'K'M̂2 = (A'K'KA)⁻¹A'K'K M̂1                          (2.5.1)

where

                  | 21/16   1/4    73/64     0    0 |
                  | 1/4     1/8    9/32      0    0 |
    A'K'KA    =   | 73/64   9/32   277/256   0    0 |                     (2.5.2)
                  | 0       0      0         1    0 |
                  | 0       0      0         0    1 |

and (A'K'KA)⁻¹ is block diagonal: its upper 3 x 3 block (for the genetic parameters) is

           | 115    103   -148 |
    (8/63) | 103    244   -172 |                                          (2.5.3)
           | -148  -172    208 |

and its lower 2 x 2 block (for the environmental parameters) is the identity.
Again, as in 2.4, the weight matrix (A'K'KA)-lA'K1 K can be written
in a form similar to (2.4.4):
22
=111
(A'K'KA)-lA'K'K
Z
2
2
such that
Z2
=
-a
-a
-a
21
22
23
a
a
a
a 24
c
24
8
=8
b
21
b
22
b
23
b
24
-b
24
-(b
21
-(b
22
-(b
23
-(a
24
(8
24
where
21
= -58m
21
= 164Jm
22
= 64m
,
22
= 120
,
22
= -316m,
£22
= 68Jm ,
b
23
= -192
23
= 136m,
c
.e
f
a
b
c
c
,
,
,
21
22
23
24
24
+c
+c
+c
+b
21
22
23
24
+b
24
)
c
)
c
)
c
)
a
)
c
21
22
23
f
f
f
21
22
23
24
£24
24
=:f'24
(2.5.4)
2.6  Point Estimators Generated by Method III

By theorem 1.3.1, the weighted correlated least squares analysis on observed mean squares and products yields the following estimator vector:

    β̂_III = (A'Δ1⁻¹A)⁻¹A'Δ1⁻¹M̂1                                           (2.6.1)

Since the vector of design components of variance is a linear transformation of the vector of observed mean squares and products with a non-singular transformation matrix K (see (2.2.7)), the weighted correlated least squares analysis on design components of variance also yields (2.6.1) as the estimator vector.
2.7  The Generalized Form of the Point Estimators

The estimator vectors β̂_I, β̂_II and β̂_III can be written in the form

    β̂_i = (A'X_i A)⁻¹A'X_i M̂1                                             (2.7.1)

where

    X_i = I       if i = I,
    X_i = K'K     if i = II,
    X_i = Δ1⁻¹    if i = III.

Hence, Method II can be thought of as a weighted correlated least squares estimation procedure on M̂1, using K'K instead of Δ1⁻¹ as the weighting matrix.
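A compact way to see the three methods as one formula is to vary only the weight matrix.  The sketch below does exactly that; the matrices A, K and Δ1 and the parameter values are hypothetical stand-ins, used only to show that when M̂1 equals its expectation Aβ every choice of weight matrix returns β exactly.

```python
# Sketch of the generalized form (2.7.1): one routine, three choices of
# weight matrix.  All numerical inputs are hypothetical.
import numpy as np

def estimator(A, X, M1_hat):
    """beta_hat_i = (A' X_i A)^{-1} A' X_i M1_hat  -- equation (2.7.1)."""
    AtX = A.T @ X
    return np.linalg.solve(AtX @ A, AtX @ M1_hat)

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 5))
K = rng.normal(size=(6, 6))
P = rng.normal(size=(6, 6))
Delta1 = P @ P.T + 6 * np.eye(6)                 # stand-in for the covariance of M1_hat
beta = np.array([1.0, 0.5, 0.25, 1.0, 1.0])
M1_hat = A @ beta                                # observations equal to their expectation

weights = {"I": np.eye(6), "II": K.T @ K, "III": np.linalg.inv(Delta1)}
for method, X in weights.items():
    print(method, estimator(A, X, M1_hat).round(6))   # each recovers beta
```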
2.8  The Covariance Matrix of the Estimators

The covariance matrix of β̂_i is, by theorems 1.3.1 and 1.3.2, equal to

    V_i = (A'X_i A)⁻¹A'X_i Δ1 X_i'A(A'X_i A)⁻¹      (i = I, II, III)      (2.8.1)

where X_i is defined as in (2.7.1).

Let (jk) denote Cov(j,k), where j and k, for j, k = 1, 2, 3, 4, 5, stand for the estimators of σ²_A, σ²_D, σ²_AA, σ²_1e and σ²_2e, respectively.  Then, in the case of Methods I and II, the elements of V_i can be expressed readily in terms of Δ1 and the rows of the Z1 and Z2 matrices defined in (2.4.4) and (2.5.4).  Writing z_wj for the jth row of Z_w and h_w for the scalar divisor of the corresponding weight matrix (w = 1 for Method I, w = 2 for Method II), the variances and covariances of the estimators are

    (jk)_w = z_wj Δ1 z_wk' / h_w² .
3.  NUMERICAL EVALUATION OF THE COVARIANCE MATRIX OF THE ESTIMATORS

3.1  Nature of the Variances and Covariances of the Estimators

From 2.8 it can be concluded that the variances and covariances of the estimators are complicated functions of the genetic, environmental and design parameters, i.e. of σ²_A, σ²_D, σ²_AA, σ²_1e, σ²_2e, m, d and r.  There is no explicit method to compare the variances and covariances of the three sets of estimators, nor is there any direct way to find out which combination of the design parameters will yield estimators with least variance.  To obtain an idea on these questions, an exploration of the magnitude reached by these variances and covariances at several points of the parameter space is necessary.
3.2  Programming of the Covariance Matrix of the Estimators

The covariance matrix of β̂_i as given in (2.8.1) can be simplified into the following form:

    V_I   = (A'A)⁻¹A'Δ1A(A'A)⁻¹
    V_II  = (B'B)⁻¹B'KΔ1K'B(B'B)⁻¹ ,   with B = KA                        (3.2.1)
    V_III = (A'Δ1⁻¹A)⁻¹

For computational purposes, the Δ1 matrix as defined in (2.3.5) can be rewritten in the form

    Δ1 = (β'H_ij β),    i, j = 1, 2, ..., 6,

where the elements of the matrices H_ij are composed of linear functions of products of elements of the rows of the A matrix defined in (2.2.5).  In particular,
2
I\l = d(2m-l)
r
r
H
-
2
r2
l;"
2
r2
~3
2
0
ib
r2
r2
l;"
r2
r
0
r
~
r
1
0
0
0
0
0
0
r
·e
d(m-l)
r
4""
2
1
2
r
l;"
22 - d(r-l)(2m-l)
=
r
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
0
0
0
0
0
0
2
0
4
(6f)
0
~
r 2 (m+2)2
256
0
r(~t)
0
0
0
0
r
r(~62)
0
1
lb
r 2m
r
"Tb
0
rm
4"
2
ib
2
2
r
2
r m~m+2)
rm
1b
4
0
0
2 2
rm
r m~m+2)
r
~
~4+2)
~
4
r
2
rm
r
27
2
H44
IS5
0
= d(m-l)2
1
66 - 2d(m-1)
0
0
r2
2
r
32
0
}i:'
r
lb
0
r2
32
b4"
0
"S
0
0
0
0
0
0
}i:'
0
1
r
r
2
= d( r-l) (m2 _1)
-
0
0
·e
H
0
r
'8'
r
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
1
2
L1!!
2
r 2 (m+4)
32
r 2 (9m+2)
32
rm
2
r 2 (m+4)
32
r 2 (9m+2)
32
8"
2
r 2 (m+18 )
128
"S
"S
r(m+2)
32
r
2
r
lb
r 2 (m+18)
128
r 2 (m+l)
8
rm
r
r
2
r
r
r(m+2)
32
0
1
2
r
r
2
1
2
0
8"
'8'
r
2
"S
.28
2r2m
I1.3 = d( 2m-I)
1
0
B"
0
0
0
0
0
0
0
B"
0
Ib
0
0
0
0
0
0
0
0
0
0
0
0
1
H16
--~
d(2m-l
r
2
r
.e
_ rJm
- d(m-1)
1
r
Ib
1
*
r
32
lb
0
if
r
32
1
~6
1
4
4
0
0
0
r
1
4
B"
0
1
;:;
0
8
0
0
0
0
0
0
0
rm
"8
r
Ib
r(~:+2)
0
1
;:;
r
Ib
r(3 m+2)
0
-32r
1
64
0
4
r
32
0
0
0
8
0
0
0
0
r(m~)
0
0
0
8
1
1
The computation of the covariance matrix V_i (for i = I, II, III), together with its corresponding determinant, i.e. its generalized variance, was programmed in the Fortran language and run on IBM 1620 and 1410 computers to evaluate the magnitudes reached for a set of parameter points.
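The original Fortran program is not reproduced here; the following sketch (in Python, with hypothetical input matrices) shows the computation of (3.2.1) and of the generalized variance that the program carried out.

```python
# Sketch of the computation programmed for (3.2.1): the covariance
# matrices of the estimators for Methods I, II and III and their
# determinants (the generalized variances).  A, K and Delta1 are assumed
# to have been built for the allocation (m, d, r) under study; the
# inputs used below are hypothetical stand-ins.
import numpy as np

def covariance_matrices(A, K, Delta1):
    AtA_inv = np.linalg.inv(A.T @ A)
    W_I = AtA_inv @ A.T                          # Method I weight matrix
    V_I = W_I @ Delta1 @ W_I.T

    B = K @ A                                    # = C2
    BtB_inv = np.linalg.inv(B.T @ B)
    W_II = BtB_inv @ B.T @ K                     # Method II weight matrix
    V_II = W_II @ Delta1 @ W_II.T

    V_III = np.linalg.inv(A.T @ np.linalg.inv(Delta1) @ A)
    return {"I": V_I, "II": V_II, "III": V_III}

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 5))
K = rng.normal(size=(6, 6))
P = rng.normal(size=(6, 6))
Delta1 = P @ P.T + 6 * np.eye(6)

for method, V in covariance_matrices(A, K, Delta1).items():
    print(method, "generalized variance =", np.linalg.det(V))
```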
3.3  Choice of the Parameter Points

In choosing the values of the parameters to be used in the exploration, σ²_2e was used as the unit of measure, i.e. σ²_2e was set equal to one.  Moreover, a reparametrization of the genetic and environmental parameters was made, to the relative magnitudes P1, P2, P3 and P4 shown in Table 3.1, assuming the environmental error to be constant regardless of block size.  The parameter vector β', given in (2.1.5), can therefore be redefined in terms of P1, P2, P3, P4 and σ²_2e.

The chosen values of P1, P2, P3 and P4 are shown in Table 3.1, representing cases where:
  i)    all three types of genetic variances are present in equal magnitude as the environmental variance,
  ii)   dominance variance is absent,
  iii)  no additive by additive variance is present,
  iv)   only additive variance is present besides the environmental variance and
  v)    only additive variance is present, while the environmental variance for selfed progenies is less than that for biparental progenies.
Table 3.1.  Relative magnitudes of the genetic and environmental variances, selected as representatives of the parameter space

    Case     P1     P2     P3     P4
    i)       1/2    1      1      1
    ii)      1/2    0      1      1
    iii)     1/2    1      0      1
    iv)      1/2    0      0      1
    v)       1/2    0      0      1/2
The total number of experimental plots t is related to the number of blocks d, the number of maternal or paternal parents within blocks m and the number of replications within blocks r in the following manner:

    t = drm(m+2)                                                          (3.3.3)

This quantity can also be considered as the maximum feasible size of a block.  Comparison of the variances and covariances of the estimators was done on the basis of equal magnitudes of t.  The value of t taken into consideration was t = 480.  Twenty-nine (m, d, r) triplets are possible, of which eleven were used in the exploration (see Table 3.2).
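The twenty-nine triplets can be enumerated mechanically.  The sketch below assumes, as Table 3.2 appears to, that m ≥ 2 and r ≥ 2; these restrictions are an assumption made here for the illustration.

```python
# Enumerate the (m, d, r) allocations with a fixed total number of plots
# t = d*r*m*(m+2).  With t = 480 and the assumed restrictions m >= 2 and
# r >= 2 this gives the twenty-nine triplets mentioned in the text.
def allocations(t, min_m=2, min_r=2):
    triplets = []
    m = min_m
    while m * (m + 2) * min_r <= t:
        per_replication = m * (m + 2)            # plots per replication of one block
        if t % per_replication == 0:
            dr = t // per_replication
            for d in range(1, dr + 1):
                if dr % d == 0 and dr // d >= min_r:
                    triplets.append((m, d, dr // d))
        m += 1
    return triplets

triples = allocations(480)
print(len(triples), triples[:5])                 # 29 allocations in all
```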
The numerical values of the variances and covariances of the estimators of the genetic and environmental variances by Methods I, II and III are listed in Tables 3.3 and 3.4, together with the corresponding values of the generalized variance.
Table 3.2.  Possible allocations of m, d and r for t = 480 (chosen values are marked by a cross)

     m     d     r
     2     1    60    x
     2     2    30
     2     3    20
     2     4    15
     2     5    12
     2     6    10
     2    10     6
     2    12     5    x
     2    15     4
     2    20     3
     2    30     2    x
     3     1    32
     3     2    16
     3     4     8
     3     8     4
     3    16     2
     4     1    20
     4     2    10
     4     4     5
     4     5     4
     4    10     2
     6     1    10    x
     6     2     5    x
     6     5     2    x
     8     1     6    x
     8     2     3    x
     8     3     2    x
    10     1     4    x
    10     2     2    x
Table 3.3.  Variances of the estimators by Methods I, II and III, and the corresponding generalized variance, for various parametric combinations

    r    = 480/dm(m+2)
    m    = number of parents of each sex
    d    = number of blocks
    i    = method of estimation
    (11) = Var σ̂²_A
    (22) = Var σ̂²_D
    (33) = Var σ̂²_AA
    (44) = Var σ̂²_1e
    (55) = Var σ̂²_2e
    |V|  = generalized variance
    yEz  = y · 10^z
    x    = mark to denote minimum value within the selected group of allocations with the same parental number
34
m
2
i
d
1
I
II
12
30
6
1
2
·e
5
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
8
1
2
3
10
1
2
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(11)
(22)
(33 )
(44)
(55 )
IVI
2.016
2.083
2.002
1.191
2.179
1.183
.279x
•549x
.275x
·397
·759
.388
.274
.286
.216
.2llx
•234x
.170x
.243
.320
.212
.250
•236x
.180
.229x
.239
.17lx
.260
.305
.205
.256x
.229x
.172x
.284
·303
.205
3·117
3.376
3·113
.492
.512
.487
.427x
.44lx
•418x
.517
.590
·513
.351
.408
.347
•285x
.376x
•283x
.422
.490
.416
·320
.399x
.316
.303x
.421
.301x
.387
.470x
.382
.327x
.472
.326x
5.516
.01lx
.01lx
.023
.014
.014
.023x
.022
.022
.104
.020x
.020x
.037
.023
.023
.034x
.036
.034
.057
.027x
.026x
.036x
.033
.032
.040
.044
.040
.046x
.035x
.034x
.046
.053
.046
5.516
.011x
.01lx
.023
.014
.014
.023x
.022
.022
.093
.006x
.006x
.024
.007
.007
.013x
.011
.011
.041
.006x
.006x
.015x
.008
.008
.013
.011
.010
.025
.007x
.007x
.014x
.010
.010
.157E 0
.345E-3
.160E-3
•264E-5
.249E-5x
.106E-5x
.179E-5x
·295
·292
·292
.232x
.222x
.222x
.193
.230
.192
.123
.146
.123
.088x
.113x
.088x
.133
.163
.132
.092
.119
.092
.081x
.114x
.081x
.105
.137
.105
.079x
•119x
.079x
~347E-5
•138E-5
.134E-4
.102E-5
.709E-6
.123E-5
•441E-6x
.298E-6x
.427E-6x
.580E-6
.342E-6
.334E-5
.615E-6
.427E-6
.670E-6
. 461E-6x
•296E-6x
.566E-6x
.679E-6
•388E-6
•170E-5
•587E-6x
•392E-6x
.793E-6x
.850E-6
. 468E-6
35
m
2
d
i
(11)
(22)
(33 )
(44)
(55 )
1
I
II
III
I
II
III
I
II
III
I
1.180
1.273
.946
.206
.210
.206
•184x
.179x
.179x
.134
.162
.129
.091
.108
.090
.072x
.093x
.072x
.098
.120
.097
.073
.093x
.07)
.067x
.094
.067x
.082
.105
.082
.066x
.099x
.066x
.368
.637
.366
.155x
.287x
.154x
·312
.567
.304
.138
.133
.107
.125x
.132x
.10Ix
.186
.249
.166
.144x
.129x
.104x
.156
.163
.119
.198
.239
.161
•164x
.144x
.112x
.217
.239
.162
1.592
1.849
1.407
·331x
·353x
.331x
.340
.357
.336
.311
.355
·310
.239
.275x
.237
•228x
.304
. 228x
.280
·319
.278
.240x
.296x
. 239x
.243
.341
.243
.278
·329x
.276
.264x
.382
.264x
1.534
.011x
.01Ix
.018x
.014
.014
.022
.022
.022
.056
.020x
.020x
.029
.023
.022
.033x
.036
.033
.040
.027x
.026x
.032x
.033
.031
.039
.044
.039
.038x
.035x
.034x
.045
.053
.044
1.534
.011x
.01Ix
.018x
.014
.014
.022
.022
.022
.045
.006x
.006x
.016
.007
.007
.012x
.011
.011
.023
.006x
.006x
.012x
.008
.008
.012
.011
.010
.016
.007x
.007x
.012x
.010
.010
12
30
6
1
II
2
·e
5
8
1
2
3
10
1
2
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
IVI
·309E~2
. 237E-4
.114E-4
.515E-6x
.669E-6x
·299E~6x
. 787E=6
.159E-5
.640E-6
.110E-5
.172E=6
.120E-6
.220E=6
.121E-6x
.822E=7x
.186E-6x
•273E=6
.157E-6
.481E-6
.149E=6x
.105E-6x
.204E=6x
.175E~6
.113E-6
.246E-6
.32lE-6
.179E-6
.383E=6
.188E-6x
.127E-6x
.344E<-6x
.404E-6
.216E-6
m
2
d
i
(11)
(22)
(33 )
(44)
(55 )
IVI
1
I
II
III
I
II
III
1.039
1.038
1.035
.177
.168
.167
.161x
.148x
.147x
.129
.132
.127
.085
.087
.082
.065x
.077x
.064x
.094
.098
.091
.067
.076x
.065
.061x
.079
.060x
.077
.085
.075
.060x
.083x
.060x
.612
1.000
.612
•198x
.370x
.197x
.344
.634
.336
.1'71
.147
.123
•151x
.144x
.113x
.207
.260
.178
.173x
.134x
.111x
.181
.168
.127
.222
.244
.170
.193x
.143x
.116x
.243
.240
.168
1.364
1.381
1.349
. 277x
•268x
.263x
.296
·293
.282
.251
.239
.228
.194
.193x
.177x
.192x
.242
2.518
.011x
.01lx
.020x
.014
.014
.023
.022
.022
.066
.020x
.020x
.031x
.023
.022
.033
.036
.033
.044
.027x
.026x
.033x
.033
.032
.039
.044
.039
.040x
.035x
.034x
.045
.053
.045
2.518
.01lx
.01lx
.020x
.014
.014
.023
.022
.022
.054
.020x
.006x
.018
.007
.007
.012x
.011
.011
.027
.006x
.006x
.013
.008
.008
.012x
.011
.010
.019
.007x
.OO7x
.013x
.010
.010
.545E-2
. 252E-4
•123E-4
•342E-6x
.389E-6x
.163E-6x
.464E-6
.893E-6
.349E-6
.105E-5
.121E-6
·958E-7
.160E-6
.708E-7x
·533E-7x
.111E-6x
.149E-6
.927E-7
.383E-6
·915E-7x
. 734E-7
.131E-6x
.965E-7
.687E"7x
.147E..6
.175E-6
.107E-6
.270E-6
.107E-6x
.823E-7x
.207E..6x
.220E-6
.131E..6
12
30
6
1
2
·e
5
8
1
2
3
10
1
2
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
~187
.223
•216x
.200
•195x
.216
.182x
.202
.269
.196
.222
.228x
.20lx
. 217x
·300
.210
37
m
2
d
i
1
I
II
III
I
II
III
I
II
III
I
II
III
I
12
30
6
1
2
II
·e
5
8
1
2
3
10
1
2
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(11)
.(22)
(33 )
(44)
(55 )
IV!
.501
.532
·317
.113x
.11lx
.11lx
.123
.115
.115
.086
.089
.086
.060
.062
.059
.052x
.062x
.052x
.068
.070
.067
.052
.058x
.051
.050x
.064
•05Ox
.060
.063x
.058
.050x
.068
.050x
.057x
.12lx
.051x
.097
.163
.096
.267
.464
.261
.058x
.044x
.040x
.076
.068
.057
.155
.200
.138
.080x
.057x
.05lx
.115
.107
.083
.165
.189
.132
.109x
.079x
.067x
.180
.186
.130
.401
.488
.221
.163x
.162x
.159x
.228
.230
.221
.106x
.103x
.100x
.112
.109
.103
.148
.190
.146
•120x
.112x
.108x
.135
.147
.127
.156
.211
.153
.140x
.139x
.127x
.167
.236
.164
.248
.01lx
.01lx
.016x
.014
.014
.022
.022
.022
.031
.020x
.020x
.024x
.023
.022
.032
.036
.032
.030
.027x
.026x
.030x
.033
.030
.038
.044
.037
.034x
.035x
.033x
.044
.053
.042
.248
.011x
.011x
.016x
.014
.014
.022
.022
.022
.019
.006x
.006x
.01lx
.007
.007
.012
.011
.011
.014
.006x
.006x
.010x
.008
.008
.011
.011
.010
.012
.007x
.007x
.011x
.010
.010
.164E-5
.887E-7
.368E-7
.443E-7x
•683E-7x
.308E-7x
.174E-6
.349E-6
.137E-6
.290E-7
.871E-8x
.738E-8x
.180E-7x
.128E-7
.989E-8
.421E-7
.602E-7
.366E-7
.300E-7x
.134E-7x
.111E-7x
.318E-7
.29°E-7
.207E-7
.560E-7
.714E-7
•423E-7
.428E-7x
.248E-7x
.194E-7x
.789E-7
·903E-7
.518E-7
38
m
2
d
i
1
I
II
III
I
II
III
12
30
6
1
2
·e
5
8
1
2
3
10
1
2
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(11)
(22)
(33 )
(44)
(55 )
·500
.530
.310
.108x
.105
.104
.111
.100x
.097x
.084
.086
.084
.058
.057
.056
.048x
.047x
.045x
.065
.065
.064
.049
.048
.047
.046x
.046x
.043x
.056
.055
.054
.045x
.047x
.043x
.052x
.117x
.044x
·090
.157
.089
.257
.451
.248
.042x
.037x
.032x
.057
.059
.048
.126
.180
.122
.055x
.047x
.04lx
.082
·092
.070
.122
.165
.114
.072x
.064x
.054x
.124
.157
.111
.387
.474
.196
.14lx
•136x
.135x
.183
.170
.168
.077x
.075
.073
.080
.072x
.070x
.100
.104
·091
.081x
.072x
.070x
·090
.082
.078
.101
.107
.092
·09lx
.078x
.076x
.105
.113
·095
.227
.003x
.003x
.006
.003
.003
.006x
.006
.006
.015
.005x
.005x
.008x
.006
.006
.009
.009
.009
.011
.007x
.007x
.009x
.008
.008
.010
.011
.010
.011x
.009x
.008x
.012
.013
.012
.236
.01lx
.01lx
.016
.014
.014
.022x
.022
.022
.016
.006x
.006x
.010x
.007
.007
.011
.011
.011
.011
.006x
.006x
.009x
.008
.008
.010
.011
.010
.009x
.007x
.007x
.010
.010
.009
IVI
.789E-6x
.208E-7
. 774E-8
.905E-8
.114E-7x
.435E-8x
.232E~7
. 435E-8x
.138E-7
.536E-8
•147E-8x
.119E-8x
. 252E-8x
.187E-8
.133E-8
. 442E-8
•675E-8
.399E-8
.387E-8
•196E-8x
.153E-:8x
.330FP8x
.357E-8
.245E-8
.528E-8
. 776E-8
. 475E-8
•429E-8x
•320E-8x
.240E-8x
.676E-8
.957E-8
.593E-8
Table 3.4.  Covariances of the estimators by Methods I, II and III for various parametric combinations

    r    = 480/dm(m+2)
    m    = number of parents of each sex
    d    = number of blocks
    i    = method of estimation
    (12) = Cov(σ̂²_A, σ̂²_D)
    (13) = Cov(σ̂²_A, σ̂²_AA)
    (14) = Cov(σ̂²_A, σ̂²_1e)
    (15) = Cov(σ̂²_A, σ̂²_2e)
    (23) = Cov(σ̂²_D, σ̂²_AA)
    (24) = Cov(σ̂²_D, σ̂²_1e)
    (25) = Cov(σ̂²_D, σ̂²_2e)
    (34) = Cov(σ̂²_AA, σ̂²_1e)
    (35) = Cov(σ̂²_AA, σ̂²_2e)
    (45) = Cov(σ̂²_1e, σ̂²_2e)
    x    = mark to denote minimum absolute value within the selected group of allocations with the same parental number (in the case of more than one minimum value, only one is marked, such that the location corresponds to minima for other parametric combinations)
Table 3.4.1.
(13 )
(14)
m
d
1
(12)
2
1
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
'I
II
III
.658
·932
.647
.111
.131
.115
.105x
•114x
•115x
-1.994 - .276
.0OOx
-2.133
-1.987
.OOOx
.008x
- .322
.003
- ·323
.002
- .318
.013
- .276x
'.010
- .266x
.010
- .266x
.067
.124
.074
.054
.094x
.055
-
12
30
6
1
2
5
'.174
.229
.176
.128
.167
.129
.053x - .115x
- .164x
.105
.052x - .115x
(PI = 1/2, P2 = 1, P
3
(15 )
.277
'.OOOx
.000x
- .007x
- .002
- .001
- .012
-.009
- .009
.008
- .007
.002x
.COOx
•00Ox
.00lX
.00lX
.000x
.004
.000
.000
.002
.009
.017
.009
-
~
- .002
- .004
- .002
(23 )
= 1,
P4
(24)
-1.341
- .217
'.OOCX
-1'.847
.000x
-1.335
- .229x - .004
.003
- ·315x
.001
- .233x
.001x
- .235
'.011
-·339
.004
- .245
... '.274
'.070
.002x
- ·332
.001x
- .259
.026
- .191
.005
- .240x
.001
:. .178
.003x
- .16lX
'.018
- '.255
.002
- .155x
= 1)
(25 )
(34)
.218
'.OOOx
.000x
-
.005x
.013
.011
.044
.055
.047
- .072 - .002x - .002x -
(35 )
(45 )
.154
- .153
-5·503
.000x
.000x
.000x
.000x
.000x
.000x
.012x - .008
.011x
.006
.000
.005
.000
.004
.005
.000x
.024
.025
.024
.000
.023
.022
.000
.021
.020
.021
- '.084
.00lX
.000x
.003x
.001x
.002x
.000x
.010
.013x
- .015
.000
.003
.009
.002
.000
.005
- .032
- .006
- .005
-
- .030x
- '.028
- .022
- .023
- '.038
- .023
.009x
.012
.008
.000x
.000
.001
g
e
e
Table 3.4.1-
m
8
d
i
1
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
2
3
10 1
2
e
( continued)
(23)
(24)
(25 )
(34)
.047
'.OO5x
.001x
- .052
- '.004x
- .003x
- .031
.000x
.000x
.016
.011
.002
- .03lx
- .012
- .010
- .018
-.009x
- .005x
- .016x
- .023
- .013
.015
.002x
.001x
.000
- .001
.000
-.232
- '.277
- .213
- .180
- .240x
- .165
.009
.006
.003
- .004
.000
.000
.008
.020
.008
-.001
- '.Q04
- .001
- .170x
- .271
-.161x
.003x
'.023
.002
- .033
- .026
- .021
- .024
- .047
- .024
.008x
.011
.006
.001x
.000
.001
- .112
- .166x
- .113
.002x
.008x
.003x
.000x
-.OOlx
.OOOx
- '.218
- .267x
-.196
'.033
.009x
.002
-.045
- .007x
- .006x
- .017x
- .018x
:, .009x
.012
.004x
.002x
-.013
.000x
.000x
..
- .107x
- .184
- .107x
.008
.024
.008
.000
- .004
- .001
- .182x
- ·292
- .172x
.001x
.027
.002x
- .037x
- .024
- .020
- .025
- .056
- .025
.007x
.011
.005
.001x
.000
.001
(13 )
(14)
.054
.099
.057
.049x
.092x
.049x
-.128
~ .177
- .129
- .108x
- .156x
- .108x
.000x
'.004x
.OO2x
.004
.010
.004
.050
.108
.050
- .108
- .172
- ,.108
.050
.094x
.052
.049x
.113
.050x
(12)
•
(15)
.002
'.OOOx
.OOOx
(35 )
(45 )
+"
I-'
Table 3.4.2.
m
d
i
2
1
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
12
30
6
-
e
e
1
2
5
(12)
(P1 = 1/2, P2 = 0, P = 1, P4 = 1)
3
(13)
(14)
(15 )
.246
·522
.255
.074x
.097x
.074x
- .963
-1.135
- ·755
- .213
- .222
- .213
-.595
'.OOox
.000x
.596
.000x
.000x
.002x
.003
.002
.088
.101
.094
- .217
- .214x
-.212x
.011
.010
.010
.021x
.062
.033x
- .080
- .120
- .082
- .012
.002x
.000x
.029
.057x
.033
- .077x
- .105x
- .078x
.041
.083
.041
- ·090
- .130
- ·090
(23)
(24)
(25 )
- '.054
'.OOOx
.000x
.054
'.OOOx
.000x
- .001x
- .002
- .001
-.593
- '.920
- .574
- .149x
- .204x
- .150x
- .001x
.003
.001
- .010
- '.009
- .009
- .192
- .272
- .198
.014
.000x
.OOOx
.00Ox
.004
.002
.008
.017
.008
(34)
(35 )
(45 )
·530
.OOOx
.000x
- .529
.000x
.000x
-1.521
.000x
.000x
- .008x
- .013
- .011
- .005x
- .005
- .004
.006x
.006
.005
.002
'.011
.004
- .045
- .055
- .047
- .022
- .023
- .021
.023
.024
.022
- '.157
- .188
- .154
'.034
.002x
.001x
-.036
- .002x
- .002x
- .005x
- .003x
- .002x
.005x
.001x
.001x
- .036
.000x
.000x
.002
.000
.000
- .125x
- .154x
- .120x
.014
.005
.001
- .020x
- .006
- .005
- .008
- .009
- .005
.005
.003
.002
- .006
.000
.000
- .00lx
- .004
- .002
- .127
- .204
- .124
.000x
.018
.001
-.027
- .028
- .022
- .022
- .038
- .022
.008
.012
.007
- .003
.000
.000
'.OOOx
.000
.000
.001x
.000
.001
-I='
f\)
e
e
Table 3.4.2.
m
8
d
i
(12)
(13 )
(14)
1
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
.024x
.055x
.031x
- .070x
- .102x
- .071x
- .002
'.004
.001
.032
.064
.034
- .075
- .110
- .076
.003
.010
.004
.037
.086
.039
- .084
- .136
-.084
2
3
10 1
2
.029x - '.070x
.060x - .106x
.033x - .07lx
.036
- .083
.090
- .145
- .083
.039
(15 )
.004
'.OOOx
.000x
•
e
(continued)
(23 )
(24)
(25 )
(34)
(35 )
(45 )
- .150
- '.l73x
- .140
.024
'.005x
.001x
- .030
- '.004x
- .004x
- .010x
- .009x
- .004x
.007
.002x
.001x
- .014
.000x
.000x
.000
- .001
.000
- .13lx
- .174
- .124x
.008
.011
.002
- .023x
- .012
- .010
- .013
- .023
- .012
.006
.006
.003
- .001x
.000
.000
.008
.020
.008
.0OOx
-.004
- .001
-.133
-.217
- .130
- .028
- .026
- .020
- .023
- .047
- .023
.006x
.011
.006
.002
.000
.001
.00lx
.008x
.002x
.001
- .0Olx
.000x
- '.153
- .182x
- .141
'.017
.009x
.002
-.029x
-.007x
- .006x
- .012x
- .018x
- .009x
.007
.004x
.002x
-.005
.000x
.000x
.008
.024
.007
.000x
- .004
.000
- .142x
- .235
- .139x
- .004x
.027
.001x
- .031
- .024
- .019
- .024
- .056
- .024
.006x
.011
.005
.003x
.000
.002
- .001x
.023
.001
-I=""
\J;J
e
e
Table 3.4.3.
m
d
i
(12)
(13 )
2
1
I
II
III
.424
.457
~L020
12
30
6
1
2
5
...
T
II
III
I
II
III
I
II
III
I
II
III
I
II
III
.
4~)I'::
,-,/
-1.021
-1.012
.080x - .196x
.073x = .182
.o84x = .184
.085
- .196
= .179x
.079
-.182x
.095
.068
- .122
- .122
.069
.057 - .114
= .092
.053
.057x - .095x
.044x - .086x
.051x - .089x
.078
- .114
- .086
.045
(PI
= 1/2,
(14)
·099
'.OOOx:
.000x
(15 )
~
·098
.000x
.000x
P
3
= 0,
P4
= 1)
(23 )
(24)
(25 )
-.666
- '.779
- .667
- .146x
- .180x
- .150x
- '.018
.000x:
.000x
.019
'.OOOx
.000x
=
- .001x
.003
.001
- .009x
- .013
= .011
-
(34)
.002
.011
.004
- .045
- .055
- .047
-
.192
.00Ox
.00Ox
.o14x
.005
.005
.025
.023
.022
- .010
- .161
.000x - .144
.OOOx - .128
= .127x
= .005
.000
- .124x
.000
- .102x
'.048
.002x
.001x
.019
.005
.002
-
-
.035
.003x
.003x
.018x
.009
.007
- .003x
- .004
- .002
.002x
.018
.003
- '.029
- '.028
- .023
.010x
.003
.003
.014
.010
.011
- .0Q9x
- .002
= .002
.012
.022x
.01lx
.007x
.004
.003
.010
.017
.010
= 1,
P2
e
= .013
- .009
- .010
- .185
.254
-.196
=
- .127
- .181
- .114
.050
.002x
.002x
.025x
.006
.005
=
=
- .025
- .038
- .025
(35 )
(45)
.193
.0OOx
.000x
-2.506
.OOOx
.000x
.015x
,006
.006
- .005
.000
.000
.034
.001x
.001x
.016x
.003
.002
.026
.024
.023
.011
.012
.008
.000x
.000
.000
- .046
.000x
.OOOx
= .008
.000
.000
.001x
.000
.001
+:-
+
e
e
Table 3.4.3.
m
d
8 1
2
3
10 1
2
i
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
.
e
(continued)
(13 )
(14)
(15 )
(23 )
(24)
(25)
(34 )
.058
.058x
.046
- .097
- .099
- .089
.009
'.oo'4x
.003x
-.007
'.OOOx
.000x
- .150
- '.13lx
- .113
.034
'.OO5k
.002x
-.040
- '.oo4x
- .004x
- .026
- .009x
- .006x
.024
.002x
.002x
- .018
.000x
.000x
. 05Ox
.061
.041x
- .083x
- .098x
- .079x
.008x
.010
.006
- .OO3x
- .001·
.000
- .132x
- .142
- .105x
.012
.011
.003
- .027x
- .012
- .010
- .020x
- .023
- .015
.012
.006
.004
-.002x
.000
.000
.050
.081
.044
- .085
- .120
- .083
.010
.020
.011
- .003
-.004
- .002
- .133
- .189
- .116
.001x
.023
.003
- .031
- .026
- .021
- .027
- .047
- .027
.010x
.011
.007
.002
.000
.001
.055
.059x
.043x
- .089
- .098x
- .082x
.008x
.008x
.005x
- .005
- .00lx
.000x
- '.152
- .140x
- .113x
'.025
.009x
.003x
- .037
- .007x
- .o06x
- .024x
- .018x
- .012x
.018
.004x
.002x
- .008
.000x
.000x
.051x
.084
.044
- .086x
- .129
- .084
.010
.024
.011
- .002x
- .004
- .001
- .143x
- .201
- .121
.0OOx
.027
,003
- .035x
- .024
- .020
- .028
- .056
- .030
.010x
.011
.006
.002x
.000
.001
(12)
(35 )
(45 )
+
\J1
e
e
Table 3.4.4.
m
2
d
,
~
12
30
6
1
2
5
i
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(13 )
(14)
- ·357
- .208
.000x
.00Ox
(12)
.108
.197
.075
.051x
.05.lx
.051x
(PI
= 1/2,
(15 )
.209
P2
-
•
= 0,
P
3
= 0,
P4
= 1)
(23 )
(24)
- .036
.00Ox
.00Ox
.037
.000x
.000x
.206
.00Ox
.OOOx
- .205
.OOOx
oOOOx
=
.236
.OOOx
oOOOx
.OOOx
.003
.001
- .OlOx
- .013
- .011
.007x
- .005
-.005
.008x
.006
.006
=
.001
.024
.024
.022
.010
.001x
.00lx
=
- .002
.000
.000
(25 )
(34)
- .117x
- .114x
- .114x
.OO5x
.003
.003
- .004x
- .002
- .002
- .129'
~ .234
- 0097
=
.o85x
- .100x
- .086x
- .011
- .009
- .010
- .149
- .200
- .156
.002
.011
.004
- .046
- .055
- .047
- .063x
- .056x
- .052x
.015
.002x
.OOlx
- .017
- .002x
- .002x
- .023
=
.023
- .021
= .011
- .oo3x
- .003x
- 0415
- 0175
.000x
.000x
=
.071
.070
.0'78
=
.149
- .140
- .142
.012
.010
.011
.027x
•03Ox
.026x
- .054x
= .057
- ,.053
.OOL'<:
.002x
.001x
.030
.032
.026
=
.054
-.055x
- .05l.x
.004
.004
.003
- .002
.000
.000
-.070
- .066
- .058
.008
.005
.002
- .o14x
- .006
= 0005
- .011x
- .009
- .007
.008x
.003
.002
0039
.062
.036
- .068
- .089
- .067
.009
.017
.010
.003
.044
- .002
0095
- .142
- .089
.00Ox
.018
.002
- .026
= .028
- .022
- .023
- .038
= .024
.009
.012
.008
.OOOx
.OOOx
.000x
=
=
=
(45 )
(35)
.000
.000
.00Ox
.000
.000
.011
.OOOx
.000x
.002x
.000
.001
-g:,
e
Table 3.4.4.
m
e
e
•
d
8 1
2
3
10 1
2
i
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(12)
(13 )
.030x - .053x
.030x - .053x
.026x - .050x
.034 - .057
.041
- .066
.029
- .055
.038
- .066
.063
- .094
- .065
.035
.034x - .056x
.035x - .060x
.028x - .052x
.038 - .066
.066
- .101
- .065
.035
(14)
(15)
(continued)
(23 )
(24)
.004x - .002x - .078x
.013
.004x
.0OOx - .064x
.005x
.OOOx - .059x
.003x
.002x
.006 - .002 - .088
.004
.010
.011
- .001
- .094
.006
.000
.003
- .073
- .002
.009
- .100
- .003x
.020
- .004
- .148
.023
.010
.002
- .001
- .090
.006x - 0002x - .094x
.010
.008x - .001x - .082x
.009x
.0OOx - .07lx
.005x
.003
.010
- .002
- .107 - .o06x
.024 - .004
.027
- .157
.002x
.010 - .001
- .095
(25 )
- .019
- .oo4x - .oo4x - .019
- .012
- .010
- .026x - 0026
- .021
- .021x - .OO7x - .o06x - .029
- .024
- .020
-
(34)
.014x
.009x
.007x
.016
.023
.015
.024
.047
.026
.016x
.018x
.013x
.026
.056
.028
(35 )
(45 )
.011
- .005
.002x
.000x
.002x
.000x
.008x
.000x
.006
.000
.004
.001
.008
.003
.011
.000
.006
.002
.01lx - .001x
.004x
.00Ox
.003x
.000x
.008
.004
.011
.000
.006
.002
~
~
e
Table 3.4.5.
m d
2
1
12
30
6
-
e
1
2
5
(P1
i
(12)
(13)
(14)
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
.107
.196
.068
- .353
- .411
- .163
- .205
.000x
.000x
.051x
.046x
.053x
- .110x
- .104x
- .104x
.071
.058
.083
.024x
.027
.023x
= 1/2,
P2
= 0,
P
3
(23 )
(15)
= 0,
P4
(24)
= 1/2)
(25)
(34)
(35 )
(45 )
- .041
.000x
.OOOx
.041
.00Ox
.OOOx
.004x
.001
.001
.206
- .122
.000x - .227
.000x - .084
- .005x - .077x
- .002
- .088x
- .002
- .079x
.000x
.001
.000
- .0lOx
- .013
- .011
- .004x
.000
.000
.009x
.006
.006
- .130
- .112
- .115
.006
.003
.003
- .012
- .009
- .010
- .001
.003
.001
- .045
- .055
- .047
- .008
- .005
- .005
.025
.024
.023
.001x
.000x
.0OOx
.010
.001x
.000x
- .011
- .002x
- .002x
- .006
.000x
.000x
.007
.001x
.001x
- .008
.00Ox
.000x
.027
.026x
.023
-
.005
.001
.000
- .010x
- .006
- .005
- .006x
- .001
- .001
.007x
.003
.003
- .002
.000
.000
.038
.046
.035
- .055
- .055
- .050
.001x
.005
.000
- .022
- .028
- .021
- .008
- .009
- .006
.010
.012
.009
.047
.049
.047
.046x
.043x
.042x
.000x .000x .000x -
.138
.172
.150
.045x
.042x
.038x
.003
.001
.001
- .002
.000
- .052
- .048
- .043
.004
.004
.003
- .004
- .004
- .003
- .079
- .100
- .073
.000
.207 - .206
.OOOx
.000x
.000x
.000x
- .223
.000x
.000x
- .001
.000
.000
.000x
.000
.000
.000x
.000
.001
g;
e
e
4
Table 3.4.5.
m
d
8 1
2
3
10 1
2
i
I
II
III
I
II
III
I
II
III
I
II
III
I
II
III
(12)
(13)
(14)
..
e
(continued)
(23 )
(15)
(24)
(25 )
(34)
(35)
(45 )
.026x
.024x
.022X
- .043x
-.04lx
- .040x
.003x
.001x
.001x
- .002x - .054x
.000x - .044x
.000x - .041x
.008
- .012x
.001x - .oo4x
.000x - .004x
- .008
- .001x
- .001x
.009
.002
.002
-.003
.000x
.000x
.031
.031
.025
- .046
- .043
- .041
.003
.003
.002
- .002
- .001
- .001
.004
.003
.000
- .007x
- .005
- .004
.008
.006
.005
.000x
.000
.000
.036
.044
.032
- .051
- .053
- .046
.004
.005
.004
- .003
- .004
- .002
.009
.011
.008
.001
.000
.001
.029x
.026x
.023x
- .044x
- .040x
- .039x
.044x
.002x
.001x
.036
.044
.030
- .050
- .054
- .045
.004
- .002x - .064x
- .001x - .053x
.000x - .048x
- .082
- .003
- .004
- .098
- .002
- .070
.00Ox - .Ve.!.
- .009
.006
- .026
- .011
.000
- .007
- .019
.007 - .013x - .008x
.002x - .007x - .004x
.001
- .006x - .003x
.0OOx - .020
- .009
.007 - .024
- .013
.000x - cOlS
- .008
.ooh
.004
-
.065
.063
.053
.079
.098
.070
- .013
-
ClO12
- $010
f"\r""'a~
.009x - .001x
.004x
.00Ox
.000x
.003x
.009
.011
.007
.001
.000
.001
$
4.  DISCUSSION

4.1  Conservation of Pattern of the Covariance Matrix of the Estimators

Define the pattern of the covariance matrix of the estimators to be the relative magnitude of the covariance matrix for all possible values of the design parameters, restricted by (3.3.3), given that the true values of the genetic and environmental parameters are known.  To show the existence of conservation of pattern with respect to variation of the total number of experimental plots, the following theorems are necessary:
Theorem 4.1.1:  Let τ_k be the set of all possible triplets (m_k, d_k, r_k) for t_k.  Form the set φ_j of all triplets (m_j, nd_j, r_j) from all the possible triplets (m_j, d_j, r_j) ∈ τ_j, where t_k = nt_j.  Then φ_j ⊂ τ_k.

Proof:  Because t_j satisfies the relation t_j = d_j r_j m_j(m_j + 2), t_k = nt_j = (nd_j) r_j m_j(m_j + 2).  Hence any triplet (m_j, nd_j, r_j) ∈ φ_j satisfies t_k = d_k r_k m_k(m_k + 2) with d_k = nd_j, r_k = r_j and m_k = m_j.  Therefore any (m_j, nd_j, r_j) ∈ φ_j is also ∈ τ_k.    Q.E.D.
Theorem 4.1.2:  Let d_j = d_k/n, where n = 1, 2, ..., d_k.  Then the covariance matrix V_i of β̂_i for d = d_j is n times the covariance matrix of β̂_i for d = d_k, i.e.

    [V_i]_{d=d_j} = n [V_i]_{d=d_k}                                       (4.1.1)

Proof:  From (2.3.5) it is obvious that Δ1 = Q/d, where Q is some matrix independent of d.  Substituting this quantity into (2.8.1) yields

    V_i = (A'X_i A)⁻¹A'X_i Q X_i'A(A'X_i A)⁻¹ / d.

For i = I and II, where X_i is free of d, it is readily seen that (4.1.1) holds.  For i = III, X_i = dQ⁻¹.  Therefore V_III = (A'Δ1⁻¹A)⁻¹ = (A'Q⁻¹A)⁻¹/d, and thus (4.1.1) is also true.    Q.E.D.
Theorem 4.1.1 implies that any allocation within the set of allocations with a specified total number of experimental plots t_j can be derived from another class of designs whose t-value t_k, say, is an integer multiple n of t_j.  The (m_j, d_j, r_j) triplets can be found by choosing every triplet (m_k, d_k, r_k) whose d-value d_k is divisible by n; then (m_j, d_j, r_j) = (m_k, d_k/n, r_k).  Even for non-integer values of n, some (m_j, d_j, r_j) triplets can also be derived by the same method from triplets belonging to another set, provided d_k is an integer multiple of n.
As an illustration, the triplets for t = 80 can be derived from the triplets corresponding to t = 480 (see Table 3.2) by choosing every triplet whose d-value is divisible by six, i.e. (2,6,10), (2,12,5) and (2,30,2).  The possible allocations for t = 80 are then (2,1,10), (2,2,5) and (2,5,2).  For t = 90, only (3,3,2) can be derived, from (3,16,2), by multiplying 16 by 90/480.

The covariance matrix of the estimators for an allocation in the derived set is, by theorem 4.1.2, equal to n times the covariance matrix of the estimators using the corresponding allocation in the original set.  Therefore it can be concluded that the pattern of the covariance matrix which could be observed within the set of allocations with a special t-value will be retained for other values of t.
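Theorem 4.1.2 can also be checked numerically.  In the sketch below the matrices are hypothetical stand-ins, with Δ1 = Q/d; the product of each variance with n stays constant as d is multiplied by n.

```python
# Numerical check of Theorem 4.1.2 with hypothetical matrices: if
# Delta1 = Q/d, multiplying d by n divides the covariance matrix of the
# estimators by n for each of the three methods.
import numpy as np

def V(A, X, Delta1):
    W = np.linalg.inv(A.T @ X @ A) @ A.T @ X     # weight matrix of (2.7.1)
    return W @ Delta1 @ W.T                      # covariance matrix (2.8.1)

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 5))
K = rng.normal(size=(6, 6))
P = rng.normal(size=(6, 6))
Q = P @ P.T + 6 * np.eye(6)                      # Delta1 = Q / d

for d, n in [(5, 1), (10, 2), (20, 4)]:
    Delta1 = Q / d
    X = {"I": np.eye(6), "II": K.T @ K, "III": np.linalg.inv(Delta1)}
    scaled = {i: round(n * V(A, Xi, Delta1)[0, 0], 8) for i, Xi in X.items()}
    print(d, scaled)                             # identical across the three rows
```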
Another implication of theorem 4.1.1 is that the difference in
magnitude of the variances of the estimators for all methods considered,
will decrease with the increase of the total number of experimental
plots.
Hence for larger values of t, the choice of estimation method
is of less importance than for smaller values.
4.2  Evaluation of the Numerical Results

As was mentioned in the introductory chapter, two main questions can be raised in evaluating the numerical results, i.e. which method and which combination of the design parameters would be best.  Generally, in performing a test crossing, the experimenter is interested in estimating either a linear function or a ratio of linear functions of the genetic and environmental parameters.  In the first instance, an estimator of the form

    u_i = a'β̂_i ,    i = I, II, III                                       (4.2.1)

with variance

    V(u_i) = a'V_i a                                                      (4.2.2)

is of interest, while in the latter, an estimator in the form of a ratio of two such linear functions (4.2.3), with variance (4.2.4), has to be taken into consideration.

Given the values of the vectors involved, the problem of choosing the best method and design can be attempted by numerical methods.  In the present study, however, only some special cases of (4.2.1) could be contemplated, i.e. a' = (a1, a2, a3, a4, a5) such that every element but one is zero.  Moreover, the generalized variance could also be studied for the various methods and combinations of design parameters, assuming smallness of the value of the variance attained to be a favorable criterion.

In comparing the numerical results for the three methods, it is necessary to recall (see 2.1) that Method III is included in the comparison not with the intention of considering it as a potential estimation method.  Its main purpose is to function as a standard in evaluating the efficiencies of the other two methods.
4.3 Allocation of the Design Parameters
In constructing the possible allocations for a specified total
number of experimental plots, the allocations can be grouped and
ordered according to parental number (see Table 3.2).
In the following, a small, sub-intermediate, intermediate, super-intermediate or large parental number indicates the relative position of an allocation with respect to the magnitude of its parental number.  A similar classification of the number of blocks refers to the relative magnitude of d within the group of allocations with the same parental number.
4.4  The Variance of σ̂²_A

Table 3.3 shows the magnitude of the variances of the estimators of the additive variance for various allocations.  To aid in revealing patterns of local minima, the minimum values of the variances within groups of allocations with the same parental number were marked by a cross.  Because not every possible allocation was considered, these local minima are not true minima.  However, observation of these values will indicate the direction in which the variance of the estimator decreases.

For the variance of σ̂²_A, all methods show the same trend in every one of the five genetic and environmental parameter combinations considered.  Within the group of allocations with the same parental number, the variance decreases as the number of blocks increases, thereby decreasing the number of replications.  Hence, using the smallest number of replications, i.e. r = 2, will result in the smallest variance of the estimator.  Considering allocations with r = 2, the
In section 1.3 it is implied that Method III always yields the
smallest variance.
In the forthcoming discussion, if a statement is
made concerning the smallness of the variance of an estimator by any
•
method other than Method III, it is tacitly assumed that the comparison
is made not with respect to Method III.
In estimating the additive variance, Method I is always better,
prOVided not too many replications were used within a block.
For small
values of r it is even practically as efficient as Method III.
4.5  The Variance of σ̂²_D

In the case of the variance of the estimator of the dominance variance, more variation in the trend occurs with variation in the method of estimation and in the values of the five population parameters.  For a fixed parental number and in the absence of both σ²_D and σ²_AA (and presumably also for very low values of these parameters), the variance of the estimator of dominance variance by any method reaches a minimum using the smallest number of blocks.  Comparing these conditional minima, the smallest variance will be obtained by using Method II with a sub-intermediate parental number.

In the absence of only σ²_AA, the smallest variance of the estimator will be reached by using Method II with a super-intermediate parental number and the smallest number of blocks.

For the other cases, the minimum value of the variance for a fixed parental number will be reached using an intermediate number of blocks.  Of all these minima, the one resulting from using Method I on an allocation with a sub-intermediate value of m will be smallest.

Disregarding the values of the population parameters, an allocation using an intermediate parental number and a small to sub-intermediate number of blocks will yield, either with Method I or Method II, an estimator of the dominance variance with a fairly small variance.  For the population and design parameter values considered, the magnitude is less than one third of the environmental variance.
4.6  The Variance of σ̂²_AA

As in 4.5, Var σ̂²_AA also shows a profound variation in the trend of its value.  In the absence of both σ²_D and σ²_AA, the variance of the
estimator of the additive by additive variance within groups of
allocations with the same parental number is smaller if a small to
56
intermediate number of blocks is used.
This variance is smallest if
Method II is applied using an allocation with a sub- to super-intermediate parental number.
For the case where only dominance variance is absent, an intermediate to large number of blocks will, for a fixed parental number, yield an estimator of σ²_AA with a small variance, especially if Method I is applied to an allocation with a sub- to super-intermediate value of m.

If only σ²_AA is absent, an intermediate d-value will, for a fixed parental number, yield a small variance, the smallest being reached by Method II for sub-intermediate values of m.
If all genetic variances are present, a small variance will be
·e
obtained using the maximum number of blocks.
This variance will be
smallest if Method I is used on an allocation with a sub-intermediate
number of parents.
Generally, as in the case of estimation of dominance variance, a
fairly good estimator of the additive variance will be obtained, if an
allocation with an intermediate number of parents and a small to intermediate number of blocks is used, either with Method I or Method II.
4.7 The Variances of σ̂²_1e and σ̂²_2e

Except for small values of r in combination with large values of m, Method II is always preferable to Method I in the estimation of σ²_1e. The best allocation is to use the minimum number of parents and blocks, allowing for the use of the largest number of replications. Under this allocation, however, σ²_1e is estimated most inefficiently by Method I.
In the case of σ²_2e, Method II is also more desirable than Method I, the better allocations being those in which m is intermediate and d smallest. Deviation from these optimum allocations of the design parameters will generally not inflate the variances of the estimators of the two environmental parameters to a large extent, as the variances of the estimators by any method are fairly small, except when a jointly small number of parents and blocks is used, which, as was already mentioned, is a very inefficient allocation for estimating σ²_1e by Method I.
4.8 The Covariance of the Estimators

Table 3.4 shows the values of the covariances of the estimators for the given parameter values. Less regularity exists in the pattern for the five cases considered. However, several useful properties can be extracted.

With regard to the covariances of both types of environmental variance estimators with any estimator, it can be said that the magnitudes are very small in absolute value, except when Method I is used on an allocation with jointly low parental and block numbers.

For the covariances between estimators of genetic parameters, the absolute value of the covariance can be kept fairly small, and less variable with a change in the allocation of design parameters, if sub-intermediate to large values of m are used. As to the method of estimation, Method II always has covariances with a larger absolute value than the corresponding covariances by Method I or Method III. The latter two have covariances which do not differ much.
4.9 The Generalized Variance

From the standpoint of unbiased estimators, it was mentioned in 1.3 that the best unbiased estimator vector minimizes the generalized variance within the class of unbiased estimator vectors. Given that the limited genetic model is true, all three methods of estimation yield unbiased estimators, Method III of course being the one yielding the best unbiased estimator vector.

In the numerical computations, the generalized variance was also computed for each parametric combination, with the intention of obtaining an impression of the relative magnitudes reached by the three methods. It can be seen from Table 3.3 that quite often Method II yields much smaller values than Method I, and hence is closer to Method III with respect to the magnitude of the generalized variance, while examination of the variances and covariances reveals a much closer resemblance in magnitude between Method I and Method III. This can be explained by the fact that the covariance of any estimator with the estimator of σ²_1e or σ²_2e is most often very close to zero. A parallelism can, however, be drawn between the generalized variance and the corresponding variances and covariances of the estimators, i.e. to obtain a small generalized variance, a sub- to super-intermediate parental number is necessary.
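The generalized variance is simply the determinant of the covariance matrix of the estimator vector. The short sketch below uses two invented 3 x 3 covariance matrices (not values taken from Tables 3.3 or 3.4) to illustrate the qualitative point made above: when the covariances with the environmental-variance estimator are near zero, a method whose genetic estimators have larger variances but strongly correlated errors can still show a much smaller generalized variance.

```python
import numpy as np

# Invented covariance matrices of a three-element estimator vector,
# ordered as (genetic parameter 1, genetic parameter 2, environmental variance).
# Both have near-zero covariances with the environmental estimator.

# Case A: modest variances, nearly uncorrelated genetic estimators.
cov_a = np.array([[1.00, 0.10, 0.02],
                  [0.10, 1.00, 0.01],
                  [0.02, 0.01, 1.00]])

# Case B: larger variances, strongly correlated genetic estimators.
cov_b = np.array([[1.20, 1.00, 0.02],
                  [1.00, 1.20, 0.01],
                  [0.02, 0.01, 1.00]])

for name, cov in (("A", cov_a), ("B", cov_b)):
    print(f"case {name}: variances = {np.diag(cov)}, "
          f"generalized variance = {np.linalg.det(cov):.3f}")
```

Case B has uniformly larger variances and covariances yet less than half the generalized variance of case A, which is the pattern noted above for Method II relative to Method I.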
4.10 Estimation of Parameters for Small Values of σ²_A and σ²_AA

The covariance matrix defined in (2.3.5) tends to become diagonal with a decrease in magnitude of both the additive and the additive by additive variance, because M_13 tends to zero.
When σ²_A = 0 and σ²_AA = 0,

    Var(M̂_i) = (2/dr) s_ii ,        i = 1, 2, ..., 6,        (4.9.1)

where

    s_11 = (σ² + r²σ²_D/16)² / (2m-1)
    s_22 = σ⁴ / (r-1)(2m-1)
    s_33 = (σ² + r²σ²_D/16)² / 2(m-1)
    s_44 = (σ² + r²σ²_D/16)² / (m-1)
    s_55 = σ⁴ / (r-1)(m²-1)
    s_66 = (σ² + r²σ²_D/16)² / 2(m-1)
If, in addition, the dominance variance is small compared to the environmental variance, a weighted least squares procedure with the inverse of the number of degrees of freedom as weights will be more desirable than Methods I and II.
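A minimal sketch of this approximation is given below. The design matrix, the observed mean squares and the degrees of freedom are placeholders (the actual design matrices are developed in section 2.2), and one reading of the rule above is assumed: "the inverse of the number of degrees of freedom" is taken as the approximate variance of each observed mean square, so that its reciprocal enters the normal equations.

```python
import numpy as np

def weighted_ls(C, M, dof):
    """Weighted least squares on observed mean squares and products.

    C   : design matrix relating mean squares to parameters (f x p)
    M   : vector of observed mean squares and products (f,)
    dof : degrees of freedom of each observed mean square (f,)

    The covariance matrix of M is approximated by a diagonal matrix
    proportional to 1/dof, so the weight applied to each observation
    is proportional to its degrees of freedom.
    """
    W = np.diag(dof.astype(float))   # inverse of the approximate covariance matrix
    return np.linalg.solve(C.T @ W @ C, C.T @ W @ M)

# Hypothetical numbers, for illustration only.
C = np.array([[1.0, 0.50],
              [1.0, 0.25],
              [1.0, 0.00]])
M = np.array([2.1, 1.6, 1.1])        # observed mean squares
dof = np.array([30, 30, 120])        # their degrees of freedom

print(weighted_ls(C, M, dof))
```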
4.11 Influence of Block Size on the Magnitude of Environmental Variance

In the previous discussions, the environmental variance has been assumed constant and unrelated to the size of the block. As is sometimes found in practice [16], the environmental variance increases with the size of the block. Should this happen, allocations with smaller block sizes will have higher efficiencies than is indicated by the results of the numerical calculations.
5. SUMMARY AND CONCLUSIONS

Two procedures to estimate genetic and environmental parameters from a simultaneous selfing and partial diallel test crossing design, i.e. an unweighted least squares procedure on observed mean squares and products and an unweighted least squares procedure on design components of variance, were compared with respect to the corresponding weighted correlated least squares procedure, a theoretical estimation procedure which yields estimators with least variance. The weighted correlated least squares analysis on either the observed mean squares and products or on the design components of variance, which in practice cannot be realized, is used as a standard in evaluating the other two because of its property of yielding estimators with least variance. All three procedures can be thought of as a weighted correlated least squares analysis on observed mean squares and products, using as weights respectively the identity matrix, a matrix determined by parental number and replications within block, and the covariance matrix of the observed mean squares and products.
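Viewed this way, the three methods differ only in the weight matrix inserted into one and the same least squares formula. The sketch below makes that explicit; the matrices are hypothetical stand-ins (the true design matrix and covariance matrix are those of sections 2.2 and 2.3), and the routine is only meant to show the common algebraic form of the estimator with W the inverse of the assumed covariance matrix.

```python
import numpy as np

def least_squares_with_weight(C, M, V):
    """Correlated least squares on observed mean squares and products.

    C : design matrix (f x p)
    M : vector of observed mean squares and products (f,)
    V : assumed covariance matrix of M (f x f); its inverse is the weight.
    """
    W = np.linalg.inv(V)
    return np.linalg.solve(C.T @ W @ C, C.T @ W @ M)

# Hypothetical design matrix and observations, for illustration only.
C = np.array([[1.0, 2.0],
              [1.0, 1.0],
              [1.0, 0.0]])
M = np.array([5.2, 3.1, 1.0])

f = C.shape[0]
V_method_I   = np.eye(f)                        # Method I: identity weight
V_method_II  = np.diag([1 / 8, 1 / 8, 1 / 24])  # Method II: diagonal matrix set by parental
                                                #   number and replications (illustrative values)
V_method_III = np.array([[0.30, 0.05, 0.00],    # Method III: full covariance matrix of M
                         [0.05, 0.20, 0.02],    #   (an invented, positive definite example)
                         [0.00, 0.02, 0.05]])

for name, V in (("I", V_method_I), ("II", V_method_II), ("III", V_method_III)):
    print(f"Method {name}:", least_squares_with_weight(C, M, V))
```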
A numerical method was utilized to obtain information on the best method and allocation to estimate the additive, dominance, additive by additive and environmental variances, considering cases where:

i) all three types of genetic variances are present in the same magnitude as the environmental variance,
ii) dominance variance is absent,
iii) no additive by additive variance is present,
iv) only additive variance is present besides the environmental variance, and
v) only additive variance is present, while the environmental variance for selfed progenies is less than that for biparental progenies.
It is concluded from the numerical results that, to estimate σ²_A only, Method I should be used. The allocation to be utilized should be chosen from the group of allocations with the largest possible number of parents, such that it has the largest number of blocks. To estimate σ²_D only, an allocation with a small to intermediate number of blocks, chosen from the group of allocations with an intermediate parental number, should be used. The additive by additive variance should be estimated utilizing an allocation almost similar to that for σ²_D. For both σ²_D and σ²_AA, either a least squares analysis on observed mean squares and products or one on design components of variance is applicable, depending on the composition of the population genetic parameters. As a rule it can be stated that Method II should be used to estimate these two genetic variances whenever the additive by additive variance is negligible.

To estimate the environmental variances for selfed and for biparental progenies, the least squares procedure on design components of variance is always preferable. For σ²_1e the best allocation is to choose, from the group of allocations with the smallest possible parental number, the one with the smallest number of blocks. For σ²_2e an allocation with an intermediate parental number and the smallest number of blocks is desirable.
A joint estimation of all the population parameters requires a compromise rule. Method I should be applied on an allocation with a small to intermediate number of blocks, chosen from the group of allocations with an intermediate parental number. In following this rule, estimators with small variances and covariances will be obtained.

If all the genetic parameters are very low in magnitude, then estimation of the parameters should be done by using an approximation to Method III, i.e. by performing a weighted least squares analysis on observed mean squares and products, using the inverse of the number of degrees of freedom as weights.

A more detailed specification of the rules for choosing the estimation procedure and its corresponding allocation of design parameters is given in Table 5.1.
In this table the rule to be followed is symbolized as

    i(m,d)        (5.0.1)

where

    i = the method to be used (i = I, II, III),
    m = the number of parents to be used (m = s, i-, i+, ℓ, meaning respectively small, sub-intermediate, super-intermediate, large), and
    d = the number of blocks to be used after it is decided that m parents are to be used (d = s, i, ℓ, meaning respectively small, intermediate, large).
As an example, II(i+,s) indicates that Method II should be used; moreover, one has to consider, from all possible allocations, the group of allocations with a super-intermediate parental number, and then choose the allocation with the smallest number of blocks.
The generalized variance of the estimators fails to serve as a criterion for choosing the best method and allocation to estimate the population parameters. A low generalized variance is compatible with large variances and covariances, provided some of the covariances are zero.

For increasing values of the total number of experimental plots, the covariance matrices of both the least squares analysis on mean squares and products and that on design components of variance converge to that of the weighted correlated least squares analysis.
Table 5.1. Choice of estimation method and its corresponding allocation of design parameters (see (5.0.1) for additional explanation of symbols). Within each row, the choice among the listed rules depends on which of σ²_A, σ²_D and σ²_AA are present and which are negligible.

Estimation of     Method and allocation

σ²_A              I(ℓ,ℓ); III(ℓ,ℓ)
σ²_D              I(i±,i); II(i+,s); II(i-,s); III(i-,s)
σ²_AA             I(i-,ℓ); I(i+,i to ℓ); II(i-,i); II(i+,s to i); III(i±,s to i)
σ²_1e             II(s,s); III(s,s)
σ²_2e             II(i+,s); III(i±,s)
all               I(i±,s to i); III(i±,s to i)
6. LIST OF REFERENCES

1. Aitken, A. C. 1933. On fitting polynomials to data with weighted and correlated errors. Proc. Roy. Soc. Edin. 54:12-16.

2. Anderson, T. W. 1958. An Introduction to Multivariate Statistical Analysis. John Wiley and Sons, Inc., New York.

3. Cockerham, C. C. 1956. Analysis of quantitative gene action. Brookhaven Symposia in Biology 9:56-68.

4. Cockerham, C. C. 1963. Estimation of genetic variances. Symp. Statistical Genetics and Plant Breeding. Nat. Acad. Sci.-Nat. Res. Coun. Pub. 982, pp. 53-94.

5. Comstock, R. E. and H. F. Robinson. 1948. The components of genetic variance in populations of biparental progenies and their use in estimating the average degree of dominance. Biometrics 4:254-266.

6. Comstock, R. E. and H. F. Robinson. 1952. Estimation of average dominance of genes. In J. W. Gowen, ed. Heterosis. Iowa State College Press, Ames, pp. 494-516.

7. Cooke, P., R. M. Jones, K. Mather, G. W. Bonsall and J. A. Nelder. 1962. Estimating the components of continuous variation. Heredity 17:115-133.

8. Goldberger, A. S. 1964. Econometric Theory. John Wiley and Sons, Inc., New York.

9. Hayman, B. I. 1960. Maximum likelihood estimation of genetic components of variation. Biometrics 16:369-381.

10. Mather, K. 1949. Biometrical Genetics. Methuen and Co., Ltd., London.

11. Matzinger, D. F. 1963. Experimental estimates of genetic parameters and their applications in self-fertilizing plants. Symp. Statistical Genetics and Plant Breeding. Nat. Acad. Sci.-Nat. Res. Coun. Pub. 982, pp. 253-279.

12. Matzinger, D. F. and C. C. Cockerham. 1963. Simultaneous selfing and partial diallel test crossing. I. Estimation of genetic and environmental parameters. Crop Science 3:309-314.

13. Matzinger, D. F., T. J. Mann and H. F. Robinson. 1960. Genetic variability in flue-cured varieties of Nicotiana tabacum. Hicks Broadleaf x Coker 139. Agron. J. 52:8-11.

14. Mode, C. J. and H. F. Robinson. 1959. Pleiotropism and the genetic variance and covariance. Biometrics 15:518-537.

15. Nelder, J. A. 1959. The estimation of variance components in certain types of experiments on quantitative genetics. In O. Kempthorne, ed. Biometrical Genetics. Pergamon Press, New York, pp. 139-158.

16. Smith, H. F. 1938. An empirical law describing heterogeneity in the yields of agricultural crops. J. Agr. Sci. 28:1-23.
7. APPENDIX: THE MEAN PRODUCT OF SELF AND BIPARENTAL PROGENY MEANS

Consider an analysis of variance of self and biparental progeny means. Then the model for self progenies is (7.0.1) and for biparental progenies is (7.0.2). The components of the models (7.0.1) and (7.0.2) are as defined in (1.1.1) and (1.1.2), while the presence of a dot instead of a subscript indicates an arithmetic averaging over the range of the replaced subscript.

The analysis of variance tables corresponding to these models are given in Tables 7.1 and 7.2.
Table 7.1. Analysis of variance of selfed progeny means

Source of variation        Degrees of freedom   Expectation of mean square
Blocks                     d-1                  N_11 = (1/r)σ²_1e + σ²_1s + 2m(σ²_1d + (1/r)σ²_1r)
Self prog. means/blocks    d(2m-1)              N_13 = (1/r)σ²_1e + σ²_1s
Males/blocks               d(m-1)               N_14 = (1/r)σ²_1e + σ²_1m
Females/blocks             d(m-1)               N_15 = (1/r)σ²_1e + σ²_1f
M vs F/blocks              d                    N_16 = (1/r)σ²_1e + σ²_1mf
Total                      2dm-1

Table 7.2. Analysis of variance of biparental progeny means

Source of variation          Degrees of freedom   Expectation of mean square
Blocks                       d-1                  N_21 = (1/r)σ²_2e + σ²_2mf + m σ²_2p + m²(σ²_2d + (1/r)σ²_2r)
Parental prog. means/blocks  2d(m-1)              N_23 = (1/r)σ²_2e + σ²_2mf + m σ²_2p
Males/blocks                 d(m-1)               N_24 = (1/r)σ²_2e + σ²_2mf + m σ²_2m
Females/blocks               d(m-1)               N_25 = (1/r)σ²_2e + σ²_2mf + m σ²_2f
M x F/blocks                 d(m-1)²              N_26 = (1/r)σ²_2e + σ²_2mf
Total                        dm²-1
It is readily seen that

and

These relations also hold for the estimators. Therefore the covariance between N̂_23 and N̂_13 is

However, the following is also true:

Substitution of (7.0.6) into (7.0.5) and subsequent rearrangement yields

The observed values N̂_14 and N̂_24 are respectively the observed mean squares of self progeny means from male parents (spm) and of male parental progeny means (ppm), each with d(m-1) degrees of freedom.
By (2.3.4) the covariance of these two observed mean squares is

    Cov(N̂_14, N̂_24) = (2/d(m-1)) [E{mean product deviation of (spm) and (ppm)}]²
                     = (2/d(m-1)) [Cov(HS,S)]²                          (7.0.8)

The use of (7.0.8) in (7.0.7) results in
In arriving at the covariance between M̂_13 and M̂_23, the formulae resulting from (2.3.2) cannot be used directly because of a difference in the number of degrees of freedom. It would, however, be convenient for computational purposes to define it analogously as twice the square of a mean product divided by some number representing degrees of freedom. Looking back at (7.0.9), r/m Cov(HS,S) can be defined as the mean product between self and biparental progenies from the same parent, which is estimated as r/m times the covariance of self and biparental progeny means.
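The identity behind (7.0.8), namely that the covariance of two mean squares built from paired effects equals twice the squared covariance of those effects divided by the degrees of freedom, can be checked numerically. The following sketch is only an illustration of that general identity with arbitrary values; it is not a simulation of the mating design itself.

```python
import numpy as np

rng = np.random.default_rng(1)

f = 30            # degrees of freedom, playing the role of d(m-1)
cov_xy = 0.6      # covariance of the paired effects, playing the role of Cov(HS,S)
n_rep = 100_000   # Monte Carlo replicates

cov_matrix = np.array([[1.0, cov_xy],
                       [cov_xy, 1.5]])

# n_rep independent sets of f+1 paired observations
samples = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=(n_rep, f + 1))
devs = samples - samples.mean(axis=1, keepdims=True)
ms = (devs ** 2).sum(axis=1) / f             # mean squares, each with f degrees of freedom

empirical = np.cov(ms[:, 0], ms[:, 1])[0, 1]
theoretical = 2 * cov_xy ** 2 / f            # analogue of (2/d(m-1)) [Cov(HS,S)]^2
print(f"empirical:   {empirical:.5f}")
print(f"theoretical: {theoretical:.5f}")
```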