Johnston, Bruce O.; (1968)Spacing of information in the simple exponential model."

SR~CING
OF
INFO~\TION
IN THE SIMPLE EXPONENTIAL MODEL
Bruce o. Johnston
Ph.D. Thesis
Institute of Statistics
Mimeograph Series No. 578
April 1968
~\
iv
TABLE OF CONTENTS
Page
LIST OF TABLES • •
LIST OF FIGURES
· . . . vi
.........
• • • •vii
.......
·.
1.
INTRODUCTION.
2.
REVIEW OF THE LITERATURE.
3.
ESTIMATION OF THE PARAMETERS AND THE CHOICE OF LEVELS
3.1
3.2
3.3
3.4
3.5
3.6
3.7
4.
Introduction
~
·."
1
4
· . . . • 13
. . . . . . . . . . . . . • . . .....
· 13
The Normal Equations and the Iterative Solution • • •
13
The Information and Asymptotic Covariance Matrices
· 18
Selection of the n Levels • • • • • . • • • • • •
· • 19
The Asymptotic Covariance Matri~ for the n Levels • • • • 25
A Compari$on of the Types of Spacing
· . 31
The Value of Replication • • . •
• . . 35
.)8
THE THREE LEVEL SOLUTION •
4.1 Introduction
• • • • • • • •
• • 38
4.2 The Asymptotic Covariance Matri~ and Its Determinant •• 39
4.3 Allocation of the Three Levels
• • • • ••
•• 4;
4.3.1 Allocation by Ma~imizing the Determinant • • • • 4,
4.3.2 Allocation by Minimizing the Ma~imum Term of
Var 'Y • • • • • • • • • • • • • • • • • • • • 52
4.3.3 Allocation of a Level to a Parameter. • • •
53
A
4.4 Three Level Estimation, Bias and Variance •
4.4.1 Estimates of the Parameters
4.4.2 The Bias of the Estimates
4.4.3 The Variance of the Estimates
4.5
·.
• • 55
· • · · • · · 55
56
··
· · · • 58
Optimal Sample Size at the Three Levels . .
· · · · · 61
4.5.1 Sample Size Chosen to
III · ·
· · · 61
4.5.2 Sample Size Chosen to Minimi:?:e the Asymptotic
Variances . • . . . . . . .
• 62
64
Sample Size Chosen to Minimize the Trace of I.
M~imize
·
4.5.3
4.6 The Three Level Solution Compared with the n Level
Solutions • •
• • •
• • • • • • • • • • • • • •
4.7 A Comparison of the ~act and Asymptotic Variance of
r..
6E5
69
v
TABLE OF CONTENTS (continued)
Page
5.
5.1
5.2
5.3
5.4
6.
8.
9.
Introduction
• • • • • , • • •
• • •
The Asymptotic Covariance Matrix and Its Determinant..
Three Levels vs. Four Levels •• • • • • • • • • • ••
Three Levels vs. Four Levels: Numerical Calculations..
CONDUCTING AN EXPERIMENT •
6.1
6.2
6.3
6.4
6.5
7.
71
THE FOUR LEVEL SOLUTION
·....·.
·..... ·...
·..
Introduction • • • • • • •
Selection of the Levels • • • •
Choice of a New Region
• •
Sample Size for the New Region
Form of the Function in the New Region
· .· ..· ... ...
·......
LIST OF REFERENCES •
·..·.
APPENDICES • • • • . . .
· ... .·...
9.1 Linear Spacing: Asymptotic Variances and Covariances ·
9.2 Exponential Spacing: Asymptotic Variances and
Covariances . .
·.···
· ·Spacing:
· · · · ·Asymptotic
· · · · . Variances
9.3 Centered Exponential
and Covariances
· · · · · Variances
· • · · · and · · · • •
9.4 Coverage Spacing: ·Asymptotic
Covariances . .
· · · · • Variances
· · · · · and Covariances· •
9.5 Linear Spacing: Asymptotic
for Large Values of r
······· ·.·····
SUMMARY
·...
71
71
73
79
88
88
89
90
94
95
97
100
101
102
105
108
111
114
vi
LIST OF TABLES
Page
...
3.1 Powered linear spacing
·........
..
3.2 Powered exponential spacing
3.3 Powered centered spacing
3.6
• 27
28
...........
Asymptotic covariance matrices .
·..
Replication of a powered linear spacing .
3.4 Powered coverage spacing
3.5
•• 26
3.7 Replication of a powered protection spacing
.
. . . 29
• 33
· 37
37
4.1 Optimal sample sizes to minimize var(y) when xl = 0,
x
2
= e - 1/
and x
. ..
3 =1
4.2 Optimal sample size to minimize the trace of I
4.3 Three point spacing; asymptotic covariance matrix
A
4.4 The three terms of var(y)/y
2
...........
· 65
•. 67
. . . . .68
• 70
2
Numerical value of
27e H (t ,t )
1 2 3
n3
· . . . . . . . . . . . . . . 84
· . . . . . . . . . . . . . . 85
5.3 Numerical value of
· . . . . . . . . . . . . . . 86
5.4 Numerical value of
· . . • . . . . . . • • . . . 87
vii
LIST OF FIGURES
Page
........... ..
2
5. 1 Region where three levels are better than four levels •
· 80
6.1 Maximizing the distance between curve and line • • •
· 91
6.2 The triangle • • • • • • • • •
· 91
1.1
The function
......
1.
INTRODUCTION
If, in an experimental situation, the form of the expected
response is known and if the levels, at which the function is observed,
can be selected before the experiment is performed, then the precision
in estimating the parameters of the function can be affected by the
positioning of the levels.
Thus it is worth considering at what levels
the observations should be taken.
In this regard some criterion is
needed to choose the levels of the variable of observation.
terion and the form of the function will determine, for any
This crifi~ed
total
sample size, the levels of the independent variables.
This study assumes that the experiment is restricted to a determined region and that the function of interest is the single exponential.
If the region is given by [a,b], then the function can be writ-
ten as:
(1.1)
for
a < z :::: b,
-00
<
Ci,
t3 <
00
and l > O.
The func tion and region can
be transformed by letting:
z-a
x = b-a
(1. 2)
The function and region are then given by:
O<x<l.
(1. 3)
The parameters of the function have a quite natural meaning.
value of the function when x = 0 is
of the function.
Ci
and hence
Ci
The
is the initial value
The total growth or decline that occurs in the func-
tion over the region 0 < x < 1 is given by t3.
The parameter l is a
2
measure of the relative change in growth over the interval.
shown in Figure 1.1 for positive a and
This is
~.
I
Y
a + ~X 1
Y
)'
a~
y
=:
a + ~X 2
=:
ex + f3x 3
~
>0
)'
for a,
and
aL----~-
o',.--------------t-i----x
Figure 1.1
The function
The fields of application for the function include the areaS of
agriculture in fertilizer experiments, biology and medicine as a growth
function, actuarial science, in a modified form, for graduating life
tables and business for re1atin~ time (or advertising) and sales volume.
In a fertilizer trial, a represents the growth or yield with no
fertilizer,
~
the total growth due to the application of the maximum
acceptable level of fertilizer and )' is the rate of response of the
plant to increasing fertilizer levels.
Different criteria are used to choose the levels at which to take
observations.
All the criteria are based on asymptotic properties be-
cause exact terms are too difficult to either find or use.
The
3
criterion used to choose the levels must be related to the experimental
goal.
If the goal is to estimate one of the parameters, then the cri-
terion for the selection of the levels will be the asymptotic variance
of the parameter.
It is more likely that the experimental goal will be to estimate
all the parameters and thus the function.
Criteria to achieve this are
the maximization of the determinant of the asymptotic information
matrix or minimizing the terms of the asymptotic covariance matrix
through choice of the experimental levels.
The selection of n suitable levels at which the function is to be
observed will be attempted by considering mathematical expressions
which generate n levels and then choosing the best of the expressions
under the criteria of maximum information and minimum variances and covariances.
The effect of replication of the n levels can be judged by
these same criteria.
The minimum number of levels which can be chosen to estimate the
three parameters is three.
The best three levels, on the basis of the
maximum determinant of the information matrix, will be found.
The
numerical comparison of n observations will be made when the observations are taken at three different levels, four different levels and n
different levels.
The goals and background knowledge brought to an experiment will
help determine the levels, the number of replications and thus the manner in which it is to be performed.
When the experiment has been con-
ducted and analyzed, interest may focus on a subregion.
If a further
experiment is to be run in the subregion, the function must be redefined.
A definite relationship will exist between the new and old function.
4
2.
REVIEW OF THE LITERATURE
There are two branches of development in the choice of levels to
observe known functional forms.
The first area is to develop general
criteria by which the levels can be chosen.
These criteria should re-
late to the experimental goals of either estimating functions of the
parameters or of testing hypotheses about the parameters.
Application
of these criteria to different models in order to find the appropriate
levels at which each model should be observed provides a second area of
development.
de la Garza (1954) developed a method to select a spacing for an
m+l parameter polynomial function of order m where the criteria of
selecting the spacing is based on the covariance matrix.
He states
that for any experiment with N observations on a polynomial of order m
there is another experiment of N observations with the same information
matrix and with the N observations being taken at m+l levels.
These
m+l levels will all be bounded by the levels of the original experiment.
Let the polynomial of order m be written as
p(x)
=
~'~
observations at x., i
~
= l, ••• ,N
are taken with errors E. which are
~
random with a mean of zero and a variance of
~..
matrix for these observations be X'WX where X'
diagonal matrix with elements
2
~.
~
2
~
=
Let the information
(~')
and W is a
in the i,i position.
Let the optimal m+l spacing be designated by r., i
~
1, ••• ,m+l,
and the optimal information matrix for it by R'UR where R is the matrix
5
of observations and U is the diagonal variance matrix.
If R exists it
is nonsingular because its determinant is a Vandermonde determinant.
The condition desired is
xlwx
R'UR
or
Because m+l points completely specify an m+l parameter function, the
function p(x) can be written in terms of the m+l rls as given by the
Lagrange interpolation formula.
This leads to the solution for the rls
from the equation B(r) where
B( r)
where the
m
mB
+ ~2r + ••• + ~m+lr + r
coefficients are given by
~
f
= ~l
f
o
f
m-l
f
m
~l
m
=
(sym)
with
f.
~
f~
=~
j
i
w. x.
J
J
i
~m+l
= O,l,2,· •• ,~+l
f~+l
•
Thus, for any given N (> m+l) spacing, an m+l spacing can be
found with the same information matrix.
de la Garza also points out that if, in an experimental design
problem, an N spacing is not given, then some m+l design is best and
6
can be chosen to minimize the variance at some particular point, say at
the point where the variance is a maximum.
Chernoff (1953) considers a function containing k parameters of
which only s are to be estimated.
mental points of which the i
th
A design is to consist of r experi-
point will be replicated Pi times to
make a total of N observations.
The basis of the criteria will be the
asymptotic covariance matrix as defined by Fisher.
In particular the
experimental points are chosen to minimize the trace of this matrix,
that is the sum of the asymptotic variances.
The asymptotic covariance
matrix will often be a function of the parameters and so the choice of
experimental points will also be a function of the parameters.
Hence a
design will be chosen on the basis of prior values of the parameters.
Thus the design will only be optimal about these prior values or 10ca11y optimal.
In some cases, such as multiple regression functions, the
asymptotic covariance matrix is not a function of the parameter and
then the design will be an optimal one.
It is shown that, under mild conditions, locally optimal designs
exist for large n and that they can be obtained by r experimental
points where r ~ k + (k-l) + ••• + (k-s+l).
A justification for using the sum of the variances as the criterion follows from the consideration of a loss function for the estimates
t ,t ,···t of e ,e ,· •• ,e of the form
1 2
1 2
s
s
g(!,£)
= g(tl,t2,···,ts,el,e2,···,es)
which is a minimum when t
i
=ei •
Under the assumption that g is well-
behaved and that N is large so that t i is close to e with high probi
ability, then
7
g(!,~)
= ~(~,~)
s
~
+
( t . -8. )( t . -8 .) +
1
i, j=l
J
1
J
2
a( t 1. -8.)
1
The value of the statistic is measured by how small Eg(!,~) is.
•
Under
mi ld condi tions
= g (8-'-8)
Eg (t
8)
-'where
s
n i, ~~j=l
+ I
aij~ij
+
o(!)
n
~ij is the i th jth element of the asymptotic covariance matrix Qf
c2g(~,~)
t and a ..
A reasonably criterion for the statistic t is
at. at.J
1J
1
that
s
~
i,j
a .. ~ ..
1J 1J
be a minimum.
Elfing
(1952) considered a similar criteria as Chernoff for a two
independent variable regression.
Here the covariance matrix is not a
function of the parameters and hence the solution is optimal rather
than locally optimal.
is illustrated.
A graphical method for finding the design points
This method of finding the solution is applicable to
the general problem proposed by Chernoff
Kiefer
(1953).
(1959) shows, by the example of the completely randomized
design for three treatments, that the usual design of equally replicated treatments is not optimal in the sense of being locally mQst
powerful about the null hypothesis for testing the hypothesis that:
(1)
or
8
= 8 = 8
1
23
=
a
8
If the experimenter's object was to make either test (1) or (2) then he
could have selected a better design,
determined.
be found.
Thus the particular goal must be
Corresponding to this goal , an optimabi1ity criterion must
On the basis of this criterion a design can be selected.
Specific design criteria are given by Kiefer when an experimenter
is interested in a linearly independent function of the parameters>
~J'
=
~
i
c .. e ..
J1
1
First, define V as the covariance matrix of the best
d
linear unbiased estimator of the
~.
J
for any design d from 6.
given design d used to test the hypothesis that all
~.
J
=0
For a
under the
assumption of normality, let S~(c) be the infimum of the power function
of the test ~ over all alternatives for which ~ ~~
2
crr .
J
Six optimality criteria are.
(1 )
M-optimum i f sup S~ (c) is a maximum over d.
(2 )
L-optimum i f
(3)
D-optimum i f Ivdl is a maximum.
(4 )
E-optimum i f maximum eigenvalue of V is minimized,
d
(5)
A-optimum i f trace V is minimized"
d
(6 )
G-optimum, for regression functions: i f supremum of the
~
are only locally M optimum.
variability difference between the true value and the least
squares predicted value is minimized,
Criteria M, Land E correspond to tests of hypothesis about the
~j'
The A optimum criterion minimizes the average variance of the best
linear estimator of the
~'s.
Under the assumption of normality) a D-
optimum design minimizes the volume of the smallest invariant confidence region of the
~1S
It also achieves a test whose power function
has maximum curvature at the null hypothesis among all locally unbiased
9
tests of a given size.
Also, a design which is optimal for one set of
w's is optimal for any other set of W's which are related by a nonsingular linear transformation to the first set.
Thus D-optimum or A-
optimum criteria for choosing designs should be used if the experimenter's goal is estimation of the parameters.
The criteria of choosing a design by minimizing the determinant of
the asymptotic covariance matrix, 11situations.
1
1,
has been used in nonlinear
Box and Lucas (1959) used this criteria when considering a
general function of the form
where
s is
a vector of k independent variables and
parameters which are to be estimated.
If 8
o
~
is a vector of p
designates the true values
of the parameters and if f(1,~) is approximately linear in the neighborhood of 8
0
,
then minimizing 11-11 is the same as minimizing Wilks'
generalized variance for the estimates.
Also, if the errors are normal-
1y and independently distributed with constant variance, then the
1
design which minimizes 11- , minimizes the volume of the confidence
region in the parameter space.
The number of design points of the independent variables was
restricted by Box and Lucas to be equal to the number of parameters to
be estimated.
That is, if 1
i
designates the levels of the variables
for one observation, then p such
of parameters.
li
's are taken where p is the number
They develop the levels for a simple decay and growth
curve, a simple first-order decay with rate a function of temperature,
the Mitscherlich equation and a function simulating a two stage chemical decomposition.
10
Box and Hunter (1965) ~eport the use of the criterion of minimizing
11-1 ,
in a sequence
·
0f
exper~ments.
The parameters of a model describ-
ing a chemical process were first estimated from a preliminary experimente
Trial levels were then tested in the determinant to find which
values minimized it.
When appropriate levels were found a new experi-
ment was performed at those levels, giving more data, new estimates of
the parameters and then new levels at which to collect more data.
Verbeck (1965) used a nonlinear model to relate twist multipliers
to the count strength of yarn.
He also found levels at which to per-
form an experiment by choosing the levels which maximized the determi_
III
nant
for appropriate parameter values.
A generalization of the functional form (1.1) was considered by
Ne1der (1961).
The function is defined as
W =
[1 + e
A
-(1+Kt)/8 8
t
>0
J
When 8 = -1, this is a form of the simple exponential function which
has been called the 'monomolecular' curve or diminishing return curve.
The iterative least squares solutions were developed for the case when
8
=
1, the logistic curve, and for an unknown 8.
For this function Ne1der considered the allocation of n levels
through one type of spacing.
He considered whether increasing the
range of the variable at which observations are taken enhances estimation.
Also, for a given range, do the extra points significantly aid
estimation.
The method of allocation considered consists in selecting
levels for the variable t in units or half units of
6~,
lOt and
l4~
where
~ =
l+Kt.
~
over the range
The asymptotic variances were one set
11
of measurements of the usefulness of the allocation.
Another criterion
was the generalized information per point which was defined as the
determinant of the information matrix divided by n.
On the basis of these criteria, NeIder was able to say that, for
the six allocations, the more intense sampling at half units of
was
't"
In changing from a range of 6T to
somewhat better than in units of T.
lOT or lOT to 14T there was a noticeable reduction in the asymptotic
variances and an increase in the generalized information per point.
In a more pragmatic way, Inkson (1964) studied the choice of the
phosphate fertilizer P205 to fit the Mitscherlich exponential function
Y
= A(1 _
10-C(X+B))
where Y is the response of swede turnip crops to fertilizer.
sults of six different fertilizer
suits could be compared.
The re-
trials were coded so that the.
re-
The amounts of fertilizer applied, after
being scaled down, for the six cases are:
i)
iV)
0,1,2
ii)
0,1,2,3
v)
iii)
0,1,3
0,1,2,4
vi)
0,1,4
0,1,4,8
The predicted results were calculated as i f the range covered in each
case was 144 1bs/acre P205'
i)
iV)
0,72 ,144
0,48,9 6,144
That is, the observations for levels
ii)
v)
0,48,144
0,36,72 ,144
Hi)
vi)
0,36,144
0,18,72 ,144
were calculated respectively for the estimated equations.
12
A
The asymptotic variance of B is given by:
AA
A
AA
+ 2(10BC) cov(a,§) _ 2( ~ ) 10BC cov(a,p)
qp
A
B
- 2(~) 10
qp
AA
2BC
A
A
Cov(~,p)J
where the variances and covariances are given by Stevens (1951) for the
estimates of the parameters of the equation
Y = a + ~px, and q is the
level of the lowest application of fertilizer for each of the spacings
given above.
A2
A
A table was formed showing A Var(B) for the values of B
C ranging
above experimental spacings.
and small over these values of
from .005 to .020 for the six
A2
A
Any spacing which kept A Var(B) constant
Cand Bwas
considered good.
By looking
at the tabulated values, it was decided that spacing ii) was best, iii)
and v) were very good, vi) was good and that the equal spacing cases i)
and iV) were poor.
13
3.
ESTIMATION OF THE PARAMETERS AND THE CHOICE OF LEVELS
3.1
Introduction
n observations of the function
(3.1.1)
are to be taken.
These observations need not be taken at n different
levels of X.
The method by which these observations are used to calculate the
estimates of the parameters a,
~
and y is developed.
This method also
yields asymptotic variance and covariance expressions for the
estimate~
of the parameters.
Let any set of n distinct levels be called a spacing of size n or
a spacing.
Four different criteria for choosing a spacing lead to
eight spacings.
Four of the spacings are not
are functions of y.
functio~s
of y and four
The eight types of spacings will be judged on the
basis of the asymptotic covariance matrix and its determinant.
The effect of the replication of a spacing on the asymptotic covariance matrix will be determined when k replicates are taken at each
of m levels where n
= mk.
This will give a basis for choosing between
using more than m levels and replication of the m levels.
3 ,}
. • c.
The Normal Equations and the Iterative Solution
The independent variable X and the dependent variable Yare related by the function given in (3.1.1).
Let the i
l
"'h
observation, y.,
1.
at the level x. be represented by:
~
i
= O,1,2, ••• ,n-1
(3.2.1)
14
were
h
E.
1
.
1S
t h e 1. th ran d om error.
The errors are assumed to be un-
correlated and to have a variance of
2
(J"
Let estimates of the parameters a,
tively.
•
~
and f be a, band g respec-
The values of the estimates will be obtained by choosing those
values of a, band g which minimize the sum of the squares of the deviations of the observations from the predicted values at the observed
levels.
These solutions are the solutions of the normal equations.
As the normal equations are not easily solved for the estimates,
an iterative solution is required.
This involves guessing the param-
eters and then solving for increments to these guessed values.
The
addition of the increments to the guessed values gives new estimates
which usually are improved solutions to the normal equations.
New in-
crements can be calculated from the first solution and improved estimates can be formed.
This process of finding increments to the last
solution can be continued until the incrementing values are small.
An observation, with the estimators replacing the parameters, can
be written as:
Yi = a + bx~1 + e.1
i
0, 1, 2, .•• , n-l
A
= Yi + e.
1
where e is the i
i
given by
y. = a
1
th
residual and y i is the predicted value at xi and is
A
+ bx~.
1
The normal equations are obtained by minimizing N where
The partial derivatives of N with respect to a, band g are given by:
15
cy.
A
N
n
a
a
~
==
N
-'L.y.
g
L:x~1.
-'L.y.x.
~
g
bL:x.1ogx.
g
1.
1.
2g
~
g
bL:x. logx.
~
b
~
-b'L.y.x .1ogx.
~
1.
1.
1
L...2:.
~/oa
-'L.(y. -y.
~
==
?JY.
-y.
)~
1.
1. au
-'L.(y.
,", cY
i
-'L.(Y. -y. h.-.--=.
/
1.
1.
og
(3.2.4)
where
(IN(a, b, g)
da
'
, an d
N == :scN(a,b,g)
g
u g
A
Expand y. in a Taylor series about y. ignoring second and higher
1.
1.
order differences.
A
Then,
A
?JY.
Cy.
cY.1.
+
1. 6a + CJ) b.b +
1. 6g
Yi == Yi
dg
da
(3.2.6)
Placing this expression in the right hand side of (3.2.4) gives
n
N
a
1.
==
g
L:x~1.
g
bL:x.1ogx.
1.
where
a
-'L.y.
1.
2g
bL:x. logx.
1.
~
-'L.y.x.
1. ~
b
-b'L.y.x~logx.
1
1. 1.
-reg)
b.b
16
n-1
L: (
oJ.
dY. oJ.
2
~
~)
ada
~
da db
L:
r(g) =
(sym)
A
When the appropriate partial derivatives of y. are taken and substi1.
tuted in (3.2.8 ), r (g) becomes:
r(g) =
1
a
a
a
1
a
a
a
b
M(g)
1
a
a
a
1
a
a
a
b
(3. 2 .9)
where
Dc
n
g
g
Dc. logx.
i
~
~
(3.2.10)
M(g) =
2g
Dc.~ logx.~
Dc~logx.
J-
~
g
2
LX.2~ ( logx.)
1.
By substituting (3.2.9) and (3.2.10) into (3.2.7), the increments in
(3.2.7) can be obtained as:
-l:Ia
1.
g
-L:y.x.
1. 1.
-&
-bD.g
a
-L:y.
n
g
Dc.1ogx.
1.
1.
2g
Dc. logx.
1.
1.
b
g
-L:y.x.1ogx.
~
1.
1.
1
(3.2.11 )
17
Note that the first two rows of M(g) are the same as the first two
columns of the matrix by which M-l(g) is to be multiplied in (3.2.11).
Hence, using this information, the expression can be written as:
a
-&
=
-bL\g
b
_ M-l(g)
g
I:y.x.
~
(3.2.12 )
~
g
I:y . x. 10 gx.
0
~
~
or
a + 6a
b +&
I:yi
=
g
I:y.x.
M-l(g)
~
~
g
I:y.xilogx.
~
~
b 6 8
Equation (3.2.13) with (3.2.10) is to be used to obtain an iterative
solution for the parameters.
mated to start the
The only parameter that needs to be esti-
iterativ~
procedure is I.
I, estimates of the parameters a,
a + 6a,
and
g
1
directly in equation (3.2.13).
=g
~
0
With the estimate g
and I can be calculated as a
+6g
where a
l
and b
l
l
o
of
=
are obtained
The value of gl is calculated by solv-
ing for b 6 g in (3.2.13), finding 6g as b 6 g/bl' and then forming gl =
go + L\g.
With this new value of g, gl' replacing g in the right hand side
of (3.2.13), new estimates of the parameters can be found.
The itera-
tive process can continue until there is no significant change in the
estimates.
For most situations the process generates convergent
18
estimates.
~
Under some conditions, such as small
and y close to 1,
the iterative process is unstable.
3.3
The Information and Asymptotic Covariance Matrices
The information matrix, as defined by Fisher, is given by:
cFN
(j2 N
(j2 N
(ja 2
dadb
dadg
(j2 N
(jb 2
(j2 N
d'f)"dg
E
(3.3.1)
(j2 N
(sym)
(jg2
where N is given in (3.2.3).
(j2
By using (3.2.3),
N
dbdg
can be written
as:
(j2 N
d'f)"dg
oY.1. dY.1.
(j2
A
Yi
L:dg db - L: (yi -Y i) d'f)"dg
A
or
(j2N
E dadg
(j2
=
oY i oY i
A
(j2
Yi
L: dg db - L: dbdg E (Yi-Y i )
A
A
Yi
The term L: ~ E(y.-y.) is zero
A
OlJog
1.
1.
Ee.
~
=0
2
(j y.
or if dbd~ and E(Yi-Yi) are orthogonal.
Under any of these conditions,
19
A similar development and conditions can be formed for the other
elements of the information matrix (3.3.1).
(j2A
Yi
?;T
(j2 A
Yi
a , d.idb
::;
Note that:
(j 2y.
A
(j2 A
Yi
a , dadg
::;
a
~
and
~
::;
a .
By comparing the elements of the information matrix and the
matrix r(g) of (3.2.8), it is clear that r(g) is the information matrix.
The asymptotic covariance matrix is the inverse of the information
matrix multiplied by ~2.
r-1(g)~2
This can be written as r-1(g)~2 which, by
1
a
a
a
1
a
a
a
b
M- 1 (g)
1
1
a
a
a
1
a
a
a
b
2
~
1
(3.3.2)
where
n
g
Lx. logx.
~
-1
~
2g
Lx. logx.
~
~
g
Lx.2 ( logx.) 2
~
~
1
M- (g) will be called the modified asymptotic covariance matrix and
M(g) will be called the modified information matrix.
3.4
Selection of the n Levels
There are an uncountable number of ways of selecting n different
levels or numbers between zero and one.
rf a particular method of
20
choosing n levels is called a spacing, then eight basically different
spacings are considered.
Four of the spacings form a class in that
they do not involve the parameters.
The four others form a class in
that they are a function of the parameter y.
Each member of the first
class has a counterpart involving y in the second.
the
fo~
That is, if Z. is
~
of a spacing, the equivalent spacing involving y is given by
x. :::: 2.'1.
~
~
The first spacing to be considered is equal spacing.
That is, ob-
servations are to be taken at:
=
i
i
n-l
= 0,1,2, ••• ,n-l •
(3.4.1)
This is a spacing which is often recommended regardless of the form of
the model being studied.
linear regression.
It is the usual recommendation for simple
For a number of nonlinear models, with observations
taken at equal intervals, tables have been formed to aid in the ca1cu1ation of the estimates of the parameters.
For example, extensive tables
have been prepared by Lipton (1961) for the model as given by Stevens
(1951 ).
In linear regression, equal spacing of the
ing" to the corresponding Eyls.
feature.
XIS
gives equal "spac-
Possibly this is really the desirable
This suggests that a spacing pattern for the nonlinear equa-
tion be chosen to keep constant the expected value of the difference
between any two successive yls.
EYi+l - EYi
where k is a constant.
Ey n- 1 - Ey 0
That is, choose the
=k
i
XIS
so that
= 0,1,2,···,n-2
The distance between EYn_1 and Eyo is
Y(l) - Y(O)
[a + ~(l)]
- [a
+ ~(O)]
~
as:
=~ •
21
Hence) the length between each succeeding set of Ey. 's is ~/(n-l).
1.
Thus
~/n-l
or
~/n-l
That is)
x: = n-l1
for
1.
As x
= 0)1)2)···)n-2
i
0) the value of xi from (3.4.2) are given by:
o
1
i
= O'
J
1
xl
1
xl
0 - n-l
xl
1
xl
1 - n-l
or
1
xl
1 - n-l
or
2
xl
2 - n-l
or
xl
(_1_)1
= n-l
1
i
1·J
i
n-2;
Thus the i
th
2
'Y
n-l
n-l - n-l
Xl
-
or
x
1
2
= (~)I
n-l
or
1 .
level is given by:
1
xi
= (2-)1
n-l
In order to obtain a different spacing consider that the possible
values for I are discrete and equally spaced.
i
That is) let
0) 1 J 2) ..• , n-l •
22
Associated with each Y is a function of the form (3.1.1) which can be
i
written as:
y.
Yi
=: rv
I.h
+ f3X
~
For any two successive curves Y + and Y there is a value of xi
i l
i
at which the vertical distance between the curves will be a maximum.
This value of x. should be a good place to distinguish between the two
~
given curves.
For any X the vertical distance is
The value of X which maximizes this distance, xi' is given by:
1
i
When Ii
i
= n-l
=:
O,l,2, ••• ,n-l •
(3.4.4)
' the spacing will be called coverage spacing and it is
given by:
i
=:
O,1,2, ••• ,n-1 •
1
When"! 1
= (~)r
,
n.. .L
then the spacing 1s given by:
n-l
x
1
=
I
(3.4.6)
and it is called powered coverage spacing.
Another type of spacing is suggested from the results of the three
point spacing of chapter four.
It is found that the middle of the
three levels should be taken at e - 1/"7 •
ThiS suggests that possibly
23
n-2 internal levels should be taken as some function of e- III or some
exponential form.
x
x
Possibly the simplest method is to let
n-l-i
= e -i
o
=0
n-l-i
= e-
i
= O,1,2, ••• ,n-2
i
= O,1,2, ••• ,n-2
or
x
ill
=0
(3.4.8)
or possibly centering the spacing about e - III which suggests
x
x
n-l-i
=e
o
=0
2i
(n-l )
-
i
= O,1,2, ... ,n-2
(3.4.9)
or
x
x
=e
n-l-i
2i
(n-l)/'
-
i
= O,1,2, ... ,n-2
i
= O,1,2, ••• ,n-l
i
= O,1,2,· •• ,n-2
0
o
Thus four possible spacings are:
(1 )
Linear Spacing
x.
l.
(2 )
i
= n-l
Exponential Spacing
x
x
n-l-i
= e -i
o
=0
24
(3)
Centered Exponential Spacing
x
n-l-i
21
n-l
=e
i
= O,l,2, ••• ,n-2
i
= O,1,2,···,n-2
o
(4) Coverage Spacing
( i
i+l
)n-l
=1
Four further spacings which are functions of I are:
(5) Powered Linear Spacing
(6)
i
= O,l,2, ••• ,n-l
i
0, 1,2, ••• , n-2
Powered Exponential Spacing
x _ _
n 1 i
=
i
I
e
=0
(7 ) Powered Centered Exponential Spacing
x
X
(8 )
n-l-i
=e
o
=0
-
2i
(n-l )y
i
O,1,2,···,n-2
i
= O,1,2,···,n-2
Powered Coverage Spacing
x.~
=
x
=
n-l
n-l
i
I
(i+l)
--
1
The levels for the four powered spacings at values of y equal to
1.0, .5, .1 and 2.0 for values of n from 3 to 10 are given in Tables
3.1, 3.2, 3.3 and 3.4.
The levels for the non-powered spacings are the
same as the equivalent powered spacing when y
= 1.0.
Table 3.2 shows that, because of the many observations near zero,
powered exponential spacing or exponential spacing in effect replicates
x = 0 with some observations scattered between zero and one.
This then
really defeats the purpose of n different levels in that replication is
a factor.
If replication is to be considered, as it will be, it will
be shown that there are better ways to replicate than to just replicate
at X
= o.
3.5
The Asymptotic Covariance Matrix for the n Levels
The powered spacings all involve terms to the power of l/y.
They
can be written as:
x.1
=
where Z. is the form of the equivalent non-powered spacing.
1
1
The modified asymptotic covariance matrix M- (y) of (3.2.10) can
then be written as:
M(y)
where
=
1
0
0
0
1
0
0
0
1
-y
W
1
0
0
0
1
0
0
0
1
-y
(3.5. 1 )
26
Table 3.1
a
Powered linear spacing
6
8
1
2
3 Loa a
3 .50 a
3 .10 a
3 .05 a
3 2.00 0
·5000
.2500
.0010
.0000
.7071
1.0000
1.0000
1.0000
1.0000
1.0000
4 Loa
4 .50
4 .10
4 .05
4 2.00
.3333
.1111
.0000
.0000
.5774
• 6667
.4444
.0173
.0003
.8165
1. 0000
1.0000
1.0000
1.0000
1.0000
5 1.00 0 .2500
5 .50 0 .062 5
5 .10 0 .0000
5 .05 0 .0000
5 2.00 0 ·5000
·5000
.2500
.0010
.0000
.7071
.7500
• 5625
.0563
.0032
6 1.00 0 .2000
6 .50 0 .0400
6 .10 0 .0000
6 .05 0 .0000
6 2.00 0 .4472
.4000
.1600
.0001
.0000
.6325
.6000
.3600
.0060
.0000
.7746
.8000
• 6400
.1074
.0115
.8944
1.0000
1. 0000
1.0000
1.0000
1.0000
7 l.00 0 • 1667
7 .50 0 .0278
7 .10 0 .0000
7 .05 0 .0000
7 2.00 0 .4082
.3333
.1111
.0000
.0000
• 5774
.5000
.2500
.0010
.0000
.7071
.6667
.4444
.0173
.0003
.8165
.8333
• 69 11- 11.1615
.0261
• 9129
1.0000
1. 0000
1.0000
1.0000
1. 0000
8 1.00 0 .1429
8 .50 0 .0204
8 .10 0 .0000
8 .05 0 .0000
8 2.00 0 .3780
• 28 57
.0816
.0000
.0000
.5345
.4286
.1837
.0002
.0000
.6547
.5714
.3265
.0037
.0000
.7559
.7 143
.5102
.0346
.0012
.8452
.8571
.7347
.2141
.0458
.9258
1.0000
1.0000
1.0000
1.0000
1. 0000
9 1.00 0 .1250
9 .50 0 .0156
.0000
9 .10
9 .05 0 .0000
9 2.00 0 .3536
.2500
.0625
.0000
.0000
.5000
.3750
.1406
.0000
.0000
.6124
·5000
.25)0
.0010
.0000
.7071
.6250
.3906
.0091
.0000
.7906
.7500
.562 5
.0563
.0032
.8660
.875)
.7656
.2631
.0692
.9354
1. 0000
1. 0000
1. 0000
1. 0000
1. 0000
10 1.00 0 .1111
10 .50 0 .0123
10 .10 0 .0000
10 .05 0 .0000
10 2.00 0 .3333
.2222
.0494
.0000
.0000
.47 1 4
.3333
.1111
.0000
.0000
.5774
.4444
.1975
.0003
.0000
.6667
.5556
.3086
.0028
.0000
.7454
.6667
.4444
.0 173
.0003
.8165
.7778
.604·9
.0810
.0066
.8819
.8889
• 7901
• 3079
.0948
.9428
N
1
0
0
0
0
0
°
3
4
5
7
9
1.0000
1. 0000
1. 0000
1.0000
•8660 1. 0000
1.0000
1. 0000
1. 0000
1.0000
1. 0000
27
Table 3.2
N
I
0
1
2
Powered exponential spacing
3
4
5
6
7
8
9
3 1.00 0 .36 79 1.0000
3 .::0 0 .1353 1.0000
3 .10 0 .0000 1.0000
3 .05 0 .0000 1.0000
3 2.00 0 .6065 1.0000
e
1.0000
1.0000
1.0000
1.0000
.6065 1.0000
4 1.00 0 .1353
4 .::0 0 .0183
4 .10 0 .0000
4 .05 0 .0000
4 2.00 0 .3679
.3679
.1353
.0000
.0000
5 1.00 0 .0498
5 .::0 0 .0025
5 .10 0 .0000
5 .05 0 .0000
5 2.00 0 .2231
.1353
.0183
.0000
.0000
.3679
.3679
.1353
.0000
•0000
6 1.00 0 .0183
6 .::0 0 .0003
6 .10 0 .0000
6 .05 0 .0000
6 2.00 0 .1353
.0498
.00 2 5
.0000
.0000
.2231
.1353
.0183
.0000
.0000
.36 79
.3679
.1353
.0000
. 0000
7 1.00 0 .0067
7 .::0 0 .0000
7 .10 0 .0000
7 .05 0 .0000
7 2.00 0 .0821
.0183
.0003
.0000
.0000
.1353
.04 8
.0025
.0000
.0000
.2231
.1353
.0183
.0000
.0000
.3679
.3 679
.1353
.0000
•0000
8 1.00 0 .00 2 5
8 .::0 0 .0000
8 .10 0 .0000
8 .05 0 .0000
8 2.00 0 .0498
.00 6 7
.0000
.0000
.0000
.0821
.0183
.0003
.0000
.0000
.1353
.0498
.0025
.0000
.0000
.2231
.1353
.0183
.0000
.0000
.3 679
.3679
.1353
.0000
.0000
9 1.00 0 .0009
9 .::0 0 .0000
9 .10 0 .0000
9 .05 0 .0000
9 2.00 0 .0302
.00 2 5
.0000
.0000
.0000
.0498
.0067
.0000
.0000
.0000
.0821
.0183
.0003
.0000
.0000
.1353
.0498
.0025
.0000
.0000
.2231
.1353
.0183
.0000
.0000
.3679
.36 79
.1353
.0000
. 0000
.6065
1.0000
1.0000
1.0000
1. 0000
1.0000
10 1.00 0 .0003
10 .::0 0 .0000
10 .10 0 .0000
10 .05 0 .0000
10 2.00 0 .0183
.0009
.0000
.0000
.0000
.0302
.0025
.0000
.0000
.0000
.0498
.0067
.0000
.0000
.0000
.0821
.0183
.0003
.0000
.0000
.1353
.0498
.00 2 5
.0000
.0000
.2231
.1353
.0183
.0000
.0000
.3679
• 3679
.1353
•0000
· 0000
1.0000
1.0000
1.0000
1.0000
.6065 1. 0000
1.0000
1.0000
1.0000
1. 0000
.6065 1. 0000
1.0000
1.0000
1.0000
1.0000
.6065 1. 0000
1.0000
1.0000
1.0000
1.0000
.60 6 5 1.0000
1. 0000
1. 0000
1. 0000
1. 0000
•6065 1. 0000
28
Powered centered spacing
Table 3.3
a
1
2
3 1.00
3 • <JJ
3 .10
3 .05
3 2.00
a
a
a
a
a
• 3679
• 1353
· 0000
.0000
.6065
1. 0000
1. 0000
1. 0000
1.0000
1. 0000
4 1.00
4 •<JJ
4 .10
4 .05
4 2.00
a
a
a
a
a
.2636
.0695
.0000
.0000
.5134
.5134
.2636
.0013
.0000
.7165
5 1.00
5 • <JJ
5 .10
5 .05
5 2.00
a
a
a
a
a
.2231
.0498
.0000
.0000
.47 2 4
.3 679
.1353
.0000
.0000
.6065
• 3679
•0067
· 0000
.7788
1. 0000
1. 0000
1. 0000
1. 0000
6 1.00
6 • <JJ
6 .10
6 .05
6 2.00
a
a
a
a
a
.2019
.0408
.0000
.0000
.4493
.3012
.0907
.0000
.0000
.5488
.4493
.2019
.0003
.0000
.6703
· 670 3
• 4493
.0183
•000 3
· 8187
1. 0000
1. 0000
1.0000
1. 0000
1. 0000
7 1.00
7 • <JJ
7 .10
7 .05
7 2.00
a
a
a
a
a
.1889
.0357.
.0000
.0000
.4346
.2636
.0695
.0000
.0000
.5134
.3679
.1353
.0000
.0000
.6066
.5134
.2636
.0013
.0000
.7 16 5
.7165
.5134
.0357
. 0013
. 8465
1. 0000
1. 0000
1.0000
1. 0000
1. 0000
8 1.00
8 • <JJ
8 .10
8 .05
8 2.00
a
a
a
a
a
.1801
.0324
.0000
.0000
.4244
.2397
.0574
.0000
.0000
.4895
.3189
.10 17
.0000
.0000
.5647
.4244
.1801
.0002
.0000
.651 4
.5647
.3189
.0033
.0000
.751 5
.7515
• 5647
.0574
.0033
.8669
9 1.00
9 •<JJ
9 .10
9 .05
9 2.00
a
a
a
a
a
.1738
.0302
.0000
.0000
.4169
.2231
.0498
.0000
.0000
.47 2 4
.286 5
.0821
.0000
.0000
.5353
.3679
.1353
.0000
.0000
.60 6 5
.47 2 4
.2231
.0006
.0000
.6873
.3679
.0067
.0000
.7788
•0821 1. 0000
.0067 1.0000
.8825 1. 0000
10 1.00
10 • <JJ
10 .10
10 .05
10 2.00
a
a
a
a
a
.1690
.0286
.0000
.0000
.4111
.2111
.0446
.0000
.0000
.4594
.2636
.0695
.0000
.0000
.5134
.3292
.1084
.0000
.0000
.5738
.4111
.1690
.0001
.0000
.6412
.5134
.2636
.0013
.0000
.7165
.6412
.4111
.0117
.0001
.8007
N
)'
3
4
5
6
7
8
9
1.0000
1. 0000
1.0000
1.0000
1.0000
•60 65 1. 0000
1.0000
1. 0000
1.0000
1.0000
1. 0000
.60 6 5 .7788 1.0000
.6065 1. 0000
.8007
.6412
• 1084
.0117
•8948
1.0000
1. 0000
1. 0000
1.0000
1. 0000
29
Table 3.4
N
)'
0
1
2
Powered coverage spacing
3
4
5
6
7
8
9
3 1.00 0 • 2500 1. 0000
3 .50 0 .0625 1. 0000
3 .10 0 .0000 1.0000
3 .05 0 •0000 1. 0000
3 2.00 0 • 5000 1. 0000
4 1.00 0 .1250
4 .50 0 .0156
4 .10 0 .0000
4 .05 0 .0000
4 2.00 0 .3536
.2963
.0 878
.0000
.0000
• 5443
1.0000
1.0000
1. 0000
1.0000
1.0000
5 1.00 0 .0625
5 .50 0 .0039
5 .10 0 .0000
5 .05 0 .0000
5 2.00 0 .2500
.1975
.0390
.0000
.0000
.4444
.3164
• 1001
•0000
•0000
• 5625
1.0000
1. 0000
1.0000
1. 0000
1. 0000
6 1.00 0 .0312
6 .50 0 .0010
6 .10 0 .0000
6 .05 0 .0000
6 2.00 0 .1768
.131'7
.0 173
.0000
.0000
.3629
.2373
.0563
.0000
.0000
.48 71
•32'77
• 1074
.0000
.0000
.5724
1.0000
1. 0000
1.0000
1. 0000
1.0000
7 1.00 0 .0156
7 .50 0 .0002
7 .10 0 .0000
7 .05 0 .0000
7 2.00 0 .1250
.0878
.0077
.0000
.0000
.2963
.1780
.03 17
.0000
.0000
.4219
.2621
.0687
.0000
.0000
.5120
.3349
.1122
.0000
.0000
.578 7
1.0000
1.0000
1.0000
1.0000
1.0000
8 1.00 0 .0078
8 .50 0 .0000
8 .10 0 .0000
8 .05 0 .0000
8 2.00 0 .0884
.0585
.0034
.0000
.0000
.2419
.1335
.0178
.0000
.0000
.3654
.2097
.0440
.0000
.0000
.4579
.2791
.0779
.0000
.0000
.528 3
.3399
.1155
.0000
•0000
• 58 30
1.0000
1.0000
1.0000
1. 0000
1. 0000
9 1.00 0 .0039
9 .50 0 .0000
9 .10 0 .0000
9 .05 0 .0000
9 2.00 0 .0625
.0390
.00 1 5
.0000
.0000
.1975
.1001
.0100
.0000
.0000
.3164
.1678
.0281
.0000
.0000
.4096
.2326
.0541
.0000
.0000
.4823
.2914
.0849
.0000
.0000
.5398
.3436
• 1181
.0000
.0000
• 5862
1.0000
1. 0000
1.0000
1.0000
1. 0000
10 1.00 0 .0020
10 ·500 .0000
10 .10 0 .0000
10 .05 0 .0000
10 2.00 0 .0442
.0260
.0751
.0056
.0000
.0000
.2740
.1342
.0180
.0000
.0000
.3664
.1938
.0376
.0000
.0000
.4402
.2497
.0624
.0000
.0000
.4997
.3007
.0904
.0000
.0000
.5483
.3464
.1200
.0000
•0000
• 5886
.0007
.0000
.0000
.1613
1.0000
1.0000
1.0000
1. 0000
1.0000
30
n
2:Z.
2:Z.logZ.
2:Z~1
2:Z.logZ.
1
W
1
1
2
1
(3.5. 2)
1
2
2:Zi(logzi)
Hence, by substituting
2
(3.5.1) and (3.5.2) into (3.2.9), the informa-
tion matrix I(y) is given by:
I( y)
=
1
0
0
0
1
0
0
0
~
W
y
1
0
0
0
1
0
0
0
f3
y
and the asymptotic covariance matrix is given by:
I-I (y)
=
1
0
0
0
1
0
0
0
13
W- l
y
1
0
0
0
1
0
0
0
(3.5.4)
y
13
Similarly, for the non-powered spacings, a matrix W( y) can be calculated.
This matrix, W( y), has the form
W(y) ""
1
0
0
0
1
0
0
0
where M(y) is given by
M(Y)
1
0
0
0
1
0
0
0
(3.2.16) and W(l) = W of
The elements of W(y) in
(3.5.5)
(3.~5.2).
(3.5.5) have been adjusted for
y in the
same way that the elements of r(y) have been adjusted for f3 in
(3.2.9).
31
In this way the elements of the asymptotic covariance matrix can be
directly compared for the power cases.
The elements of
w- 1 (y)
and IW(y)
I
are tabulated in Appendices 9.1
through 9.4 for the spacings where the number of levels are from three
to twenty.
1,
1
The non-powered spacings are tabulated for values of y
1
1
2' 10 ' and 20·
= 2,
The results for y = 1 are also the results for the
powered spacing of the equivalent type.
The table headings show how
to obtain the asymptotic covariance terms of I-l(y).
Appendix
9.5 continues the calculations of linear spacing as in
Appendix 9.1 but for values of y of 1.50, 2.00, 2.50, 3.00 and 4.00.
3.6 A Comparison of the Types of Spacing
First, a comparison of the non-powered and powered spacings can be
made by looking at the variances, covariances and the determinant which
are given in Appendices 9.1 to 9.4.
If a powered spacing is formed in
such a way that the theoretical value of y is used, then the variances
etc. for the powered spacing are given in the appendices for a value of
y
= 1.
For extreme values of y, such as y
= .05,
for all four types of
non-powered spacings, the asymptotic variances are equal to or greater
than those for the powered spacings.
Thus, for extreme y, powered
spacings are better than non-powered spacings.
This recommendation is
not universally true for all spacings over all y.
However, it is very
nearly always true and hence only powered spacings will be considered
further.
32
The powered spacings can be compared by contrasting the variances,
covariances and determinants which are given in Table
3.5.
was formed from the information contained in Appendices
This table
9.1 to 9.4.
As mentioned earlier, exponential spacing, whether powered or nonpowered, effectively replicates the point X
creases.
= 0 as the sample size in-
This reduces the variances and covariances associated with a
but does not aid those associated with
~
and!.
This is also indicated
by a relatively small determinant as the sample size increases.
If
replication can be incorporated in the experiment, it will be shown
that there is a better way to replicate than to just replicate X = O.
Thus, whether on the basis of inefficient replication or small determinant, powered exponential spacing is not a good spacing.
Because of the relatively large variances, covariances and the
small determinant, for a sample size greater than five, powered centered
spacing is not a good spacing.
On the basis of the determinant, powered linear spacing should be
chosen over powered coverage spacing.
However, for a sample greater
than five, the asymptotic variances and covariances for powered coverage
spacing are less than those for powered linear spacing with the exception of those
involving~.
On this basis, there is an indication that
powered coverage spacing might be the better of the two.
Appendix
9.1 shows that, for n greater than five, the variances
and covariances for linear spacing are on the whole smaller when ! = 2
than when ! = 1.
!
=
1.
Also, the determinant is larger for ! = 2 than for
These variances, covariances and determinant for!
=
2 are the
variances covariances and determinant for a powered spacing of the form:
33
Table
N
2
(Ja
3.5 Asymptotic covariance matrices~
(JCXt3
f(J
)' a)'
2
(Jt3
f(J
)' t3)'
(J2
2 )'
2
t3 II()')1
"2
12.48
8.344
6.308
4.736
3.913
.1201
.3745
1. 337
4.523
9.414
11.34
9.958
6.946
5.745
5.436
.1353
.3382
1.138
2.583
3.794
11.34
7.517
6.020
5.0 8 5
4.597
.1353
.3906
1.192
3.355
6.206
13.52
9.268
5.029
3.164
2.502
.1201
.3485
1.377
4.235
7.856
-t3
)'
2
)'
Powered Linear Spacing
3
5
9
15
20
1
.9608
• 8287
.6551
.5509
- 1
-.8907
-.7053
-.5300
-.4371
1.442
1. 735
1.686
1.399
1.195
2
1.622
1.132
.7784
.6108
-
0
.4366
.6434
.6094
.5424
Powered Exponential Spacing
3
5
9
15
20
.9999
.6940
.2126
.0937
.0638
-.9999
-.6422
-.1844
-.0810
-.0552
1.7 18
1.869
.697 2
.3012
.2051
1.999
1.577
1. 141
1.051
1.029
- .7182
- .9 655
.1584
.4871
.5702
Powered Centered Spacing
3
5
9
15
20
1
.9518
.8818
.8011
.7457
- 1
-.8946
-.75 15
-.6457
-.5845
1. 718
1.78 3
1. 793
1. 702
1.611
2
1.754
1. 339
1.002
.8412
-
.7183
.6160
.7140
.7917
.8023
Powered Coverage Spacing
3
5
9
15
20
1
.7670
.3974
.2300
.1702
~See
- 1
-.7337
-.3599
-.1920
-.1323
2.164
1.956
1.024
.6068
.4562
2
1.694
1.309
1.130
1.063
-1. 442
-1.163
- .2222
.1941
.3380
Appendices 9.1, 9.2, 9.3 and 9.4.
34
x.
i
~
= 0,1,2, ••• ,n-l
.
Let this spacing be designated as squared powered linear spacing.
n greater than five, it is better than powered linear spacing.
also quite comparable to powered coverage spacing.
For
It is
Roughly, squared
powered linear spacing in comparison with powered coverage spacing has
2
somewhat larger values for ~a' ~~ and
2
and ~I and a larger determinant.
~al'
smaller values for
2
~~, ~~I
1
Any powered spacing can be written as x.
~
= Z:
where Z. is the
~
~
If the spacing is not correctly deter-
equivalent non-powered form.
~
~
mined because I is unknown and taken to be I where I
=k/,
then the
spacing is given by:
1
=
ZI
i
1
= x.
k
~
The variance and covariance associated with the powered spacings are
given by I
= 11k in the appropriate appendix.
If I is not chosen accurately in powered coverage spacing then
there can be a serious increase in the variance and covariance with a
comparable loss in information.
Similarly, if the squared powered
linear spacing is not estimated accurately because of the unknown value
of I, then the variances and covariances change.
Appendix 9.1 and 9.5
show this change for any n < 20 and it is very slight over a wide range
of I.
Thus squared powered linear spacing is the best of the n level
spacings because it yields relatively small variances and covariances
35
for the estimates of the parameters, it gives high information per
point and the results are fairly stable even if the parameter
is not
y
known exactly.
When three levels are taken, powered linear exponential spacing
and powered centered linear exponential spacing represent the same spacing.
This spacing, which is 0, e - 1/1
' ,1 gives asymptotic variances
and covariances which are essentially less than or equal to those of
either powered linear spacing, squared powered linear spacing or
powered covarage spacing.
Also, the determinant for powered linear
exponential spacing at 0, e - 1/1 , 1 is greater t h an
latter three spacings.
t h at
forme d f or t h e
The spacing 0, e - 1/1 , 1 will be used as a basis
for showing the value of replication by comparing its asymptotic covariances and determinant with those of other spacings.
That this is
the best possible three level spacing is developed in the next chapter.
3.7 The Value of Replication
By using the asymptotic variances and covariances given in Appendices 9.1 to
9.1~,
the value of taking m levels with k observations at
each level, where n
= mk,
can be determined.
Designate the modified asymptotic information matrix
M(l) of the m
levels with k observations per level by Mmk(l) and of m levels with no
replications by Mml(l).
-1( )
Mmk 1
and
=
Then,
1 -l()
k ~1 1
(3.7.1)
Because of the relationship given in (3.7.2), define the information per point as:
Ie;)
where
is a 3 by 3 matrix as here.
When
Ie;)
is derived from an r
parameter model, 1(;) will be an r x r matrix and, when n > r observations are taken, the information per point would be:
This is somewhat different than the definition of generalized information per point as given by Nelder (1961).
An indication that
(3.7.4) is a "natural" definition is shown in Table 4.3 by the constant
value of
~2II(;) l/n3 ;2 when n
= 3i, i = 1, 2, 3, 4,
5 and 6.
Tables 3.6 and 3.7 give the asymptotic variances, covariances, information and information per point for powered linear and protection
spacings when observations are taken with varying number of levels and
replications at these levels to make up 16, 18 or 20 observations.
For
powered protection spacing the principal gain of replication is in the
reduction of the variance of
point.
~
and the increase of the information per
For linear spacing, the less levels and more replications that
are taken the smaller are the variances and covariances and the greater
the information per point.
Thus there is a definite gain in replicating an experiment rather
than taking extra levels.
This gain is in reducing the variances and
it also allows for a measure of the sampling error as well as the experimental error.
Offsetting this gain is a possible loss in the ability to
test for models which are different than the model (1.1) considered here.
37
Table 3.6
Replication of a powered linear spacing
Info.
Per Pt.
Info.
2
n levels reps
(m)
(k)
CT
a
CT~
2
2
2 ~CT
t3 2 t32II(y)
CT
y
t3 Y t3Y "2
Y
Y
~CT
y ay
CT
I
t32l I(~)
Y
20
4
5
10
20
5
4
2
1
.1971
.2402
.3980
.5509
-.1895
-.2227
-.3346
-.4371
18
3
6
9
18
6
3
2
1
.1667
.3100
.4144
.5889
-.1667 .2403
-.2792 .5873
-.3526 .8430
-.4703 1.27 14
2.080 25.9418
.3333 0
. 488 7 -.1784 2.548 14.8365
.5660 -.321 7 3.154 10.696
.6735 -.5693 4.207 7.173
.004448
.00 2543
.001834
.001230
16
4
8
16
4 .2464 -.2368 .4112 .4518 -.0682 2.371 11+.9888
2 .4312 -.3726 .8630 .6125 -.3134 3.341 8.208
1 .6316 -.508 5 1.354 .7399 -.5963 4.546 5.317
.003659
.002004
.001298
Table 3.7
.3290
.4338
.8200
1.1950
.3614
.4055
.5260
.6108
-.0546
-.1092
-.3246
-.54 24
1.897
2.086
2.988
3.913
29.275
23.968
13.624
9.414
L
n
.003 659
.002996
.00 1703
.001176
Replication of a powered protection spacing
In£o.
Per Pt.
diliu
3
n levels reps
(m)
(k)
20
4
5 .1837
.1918
2 .177 2
1 .1702
-.1798
-.1834
-.1584
-.1323
.4504
.4890
.4598
.4562
.3748
.4235
.6320
1.0630
-.3004
-.2907
-.57 15
.3380
2.202
2.317
2.273
2.50 2
27.000
22.304
13.944
7. 8 56
.003375
.002788
.001743
.000982
3
6
9
6
3
2
1
-.1667
-.1972
-.1800
-.1520
.3607
.5390
.5140
.5060
.3333
.5153
.6545
1.085
-.2403
-.2702
-.1111
.2915
2.253
2.557
2.514
2.721
25.9418
14.4504
11.016
6.271
.004448
.002478
.001889
.001075
4
8
4 .2297 -.2248 .5630 .0469 -.3755 2.770 13.824
2 .2264 -.2076 .5840 .0683 -.1808 2.827 8.424
1 .2149 -.1769 .5688 1.114 .2310 2.997 4.868
18
16
n
5
10
20
18
l
16
4
.1667
.2096
.1987
.1899
.003375
.002057
.001188
4.
THE THREE LEVEL SOLUTION
4.1
Introduction
At least three levels of the independent variable X at which to
take observations are needed in order to estimate the three parameters
of the model
O<X<l.
(4.1.1)
It is conceivable that it might be optimal, under some criterion, to
take three levels rather than some greater number of levels.
For the simple linear model of one variable, under the criterion
of either minimizing the variance of the estimate of the slope or maximizing the determinant of the information matrix, it is best to take
half the number of observations at each end of the region of interest.
Analogously de la Garza (1954) shows that for any N-observation experiment to estimate the parameters in an m parameter polynomial there is
another N-observation experiment with the same information matrix and
which takes observations at no more than m different levels.
This sug-
gests that, for linear models, the number of parameters in the model
should be the number of levels at which to take observations.
The prob-
lem becomes one of selecting the best set of these levels.
If the requirement that the number of levels should equal the
number of parameters carries over to the nonlinear model (4.1.1) used
here, then three levels should be used.
The value of the three levels
which should be taken is developed in this chapter.
Also, the number
of observations which should be taken at each level is determined.
39
When there are only three levels, the estimates of the parameters
are particularly simple.
three unknowns.
They are found by solving three equations in
The bias and variance of the estimates of the parame-
ters are developed.
Finally, the asymptotic properties of the three level solution can
be compared with the n level properties developed in Chapter 3.
4.2
The Asymptotic Covariance Matrix and Its Determinant
Let the three levels at which observations are to be taken be xl'
xl < x 2 < x ~ 1. Suppose it has been determined
3
that n observations are to be taken so that n , n and n observations
l
2
3
x 2 and x
3
where 0
~
are taken at xl' x 2 and x respectively where n = n + n 2 + n .
l
3
3
Each observation can be written as in (3.2.1) with a subscript indicating at which level, xl' x
2
or x ' the observation is taken.
3
That
is,
a + t3x )'li + Eli
i
1,2, ••• ,n
E2i
i
= 1,2, ••• , n2
Y3i = a + t3x 3i + E3i
i
1,2, ••. ,n
Yli
Y2i
)'
= a + t3x 2i +
)'
l
3
(4.2.1)
.
The equations, (3.2.13), which lead to the solution for
a, t3 and
include the following terms in Yi:
(4.2.2)
~.x?logx.
~ ~
~
40
where
n·
I;J
Yji
-Y = i=l
j
n.
J
j
=
1, 2 or 3
Hence the problem is actually the fitting of the equation to the
Yl , Y2
vations
obser~
and Y3.
The modified information matrix,
M(/),
of (3.2.10) can be written
as:
n
M(/)
=
(sym)
The modified asymptotic covariance matrix is M-1 ( I ) cr2 where
M- 1 (/)
1
D(x ,x 2 ,x )
l
3
An
-A 12
A
-A 12
A22
-A
A
-A23
A
13
13
23
(4.2.4)
33
with D(xl'x ,x ) being the determinant of M(/) and
2 3
2
x
xl
xl
l log l log - 2 }
x/x
+
x/x
log
D(Xl'x 2 ,x ) = nln2n3(xix~
x
x
x
1 3
2 3
3
2
3
3
(4.2.5)
41
(4.2.6)
If the three points are xl' x
detenninant is:
2
and 1, that is x
3
= 1,
then the
42
If the three points are 0, x 2 and x ' that is xl
3
detenninant is:
= 0,
then the
(4.2.8)
If the three points are 0, x
2
and 1, that is xl = 0 and x
=
3
1,
then the detenninant is:
and the modified asymptotic covariance matrix is:
(sym)
(4.2.10)
These results can be found directly as was done for xl' x 2 ' and x or
3
they can be derived from the results of (4.2.4), (4.2.5) and (4.2.6) by
using the fact that:
and
lim
x 1-> 0
xi log xl
=0
for
1'>0.
4.3 Allocation of the Three Levels
The levels at which an experiment is to be run have often been
determined by maximizing the determinant of the information matrix.
See Box and Lucas
(1959), Box and Hunter (1965), Nelder (1961), Kiefer
l
The covariance matrix, r- (7), is made up
(1959) and Verbeck (1965).
of the determinant of the minors of the information matrix divided by
the determinant.
Thus the maximizing of the determinant, Ir(7)/, is
akin to minimizing some average of all the terms of the covariance
matrix.
The graph of the function as in Figure
tionshould be observed at xl =
direct estimates of a and
~
1.1 suggests that the func-
° and x3 = 1 since with these two points
are available.
Also it will be shown that
the criterion of maximizing the determinant gives this result.
The
value of x 2 also has to be found and will be calculated by:
(1)
maximiZing the determinantj
(2)
minimizing the dominant term of the asymptotic variance of '"7j
(3)
choosing that value of x which most clearly distinguishes be..
tween the curve of interest and members of the family of
curves which are close to this curve.
Fortunately these three criteria all lead to the same solution for x •
2
Thus this solution has the above three conditions as its properties.
4.3.1 Allocation by Maximizing the Determinant
First it will be shown that the triplet (O,x ,1) maximizes the
2
determinant D(x l ,x 2 ,x ) for all values of Xl' x2 and x such that
3
3
~ Xl < x < x ~ 1.
Following that, the value of x2 which maximizes
2
°
3
D(O,x ,1) is calculated.
2
44
For 0 ~ xl < x 2 < x ~ 1, using (4.2.5), (4.2.7) and (4.2.8)
3
(4.3.1.1)
i f and only if
Xl
xl
x2
< X"Ix "I log - - x"Ix "I log - + x2"Ix 3"llog x < 0 •
1 2
1 3
x
x
2
3
3
(4.3.1.2)
< x~logX2 <
(4.3.1. 3)
Clearly,
-00
0
o < x2 < 1 .
for
Next it will be shown that:
Consider
O<x<l.
(4.3.1.5)
It is easily shown that
lim
and
g(x) = 0
lim g(x) =
x->l
x-)()
~
(4.3.1.6)
"I
Maximum, minimum and inflection points for g(x) in the range
0 <x< 1 can be established by considering g I (x) which is:
g I (x)
= x"l-l(x"l_l_logx"l)
(4.3.1. 7)
(x"l_1)2
Now g' (x)
=0
x "1-1
implies that
=0
and/or
x"l-l-logx"l
or else it is not possible for g'(x)
= O.
=0
Now x "1-1
(4.3.1.8)
=0
is of no
interest because either x
=0
or x =
00
depending on I and these are
ruled out for the space 0 < x < 1.
The expansion of log Xl gives:
x I -1 - log x I
+ •.•
As 0 < x < 1, (l_x/)k > 0 for k = 1,2,3,··· .
for 0 < x < 1 and I > O.
Hence, x/-l-logx l > 0
Thus g(x) does not have a maximum, minimum or
inflection point in the region 0 < x < 1 and it is a monotonically increasing function there.
Consider 0 ~ xl < x 2 < 1.
Then 0 < g(x l ) < g(x 2 ).
That is,
(4.).1.9)
Therefore
(x~-l)xi log xl < (xi -l)x~ log x 2
(4.).1.10)
or
(4.).1.11)
As
0
~
xl < x 2 < 1, then
-00
and
log xl - log x 2 < 0 •
Hence
-00
< log xl < log xl - log x 2 < 0
and
as
< log xl < log x 2 < 0
o < x~ <
1 .
(4.3.1.12)
46
That is,
or
(4.3.1.13)
Using (4.3.1.11) and (4.3.1.13) gives
or
-00
I
I I
xl
I
I
< x 2 log x 2 < x 1x 2 log x - xl log xl + x 2 log x 2 < 0
2
(4.3.1.14)
as was desired.
It now must be established that
(4.3.1. 15)
Let
(4.3.1.16)
q(y) = yd(log y/d) - Y logy
where
O<y<d<l.
Then,
q' (y) = (d-1) log y - d log d + d - 1
and
q"(y) = d-1
Now,
lim q(y) = 0
y->O
Y
<0
as
and
0 < Y < d < 1 •
lim q(y) = -d log d •
y->d
Maximum, minimum and inflection points for q(y), 0 < Y < dare obtained by setting q'(y) = O.
This implies that:
(d-1) log Y = d log d - d + 1
or the maximum occurs when
y
=-e-
This Y is a maximum because q"(y) < O.
However, this maximum occurs
outside the range of interest, namely when y > d.
e
-1
1
I:d
- d
=e
-1
- e
Consider
1
- I:d log d
= e -1
_{ (I-d) + (1_d)2
=e
-1
e
-1
3
2
+ ... }
e
or
1
e
-1
d 1 -d = e -1
e
-1
e
-6
where
Now
as
e -6
< 1
for
6>0.
Hence,
1
e- 1
or
Thus
d 1-d
= e- 1 (1_e- 6 )
>0
0
<
d
<
1 •
48
>
d
or the maximum y > d •
Hence, q(y) is monotonically increasing for 0 < Y < d.
d =
x;.
Let
Then
for
O
< x1
I' < xl' < x I'
23
= d
< 1 •
or
as the I' in log xl' becomes a multiplier and then cancels out.
There-
fore,
xl
By adding xl'xl' log- to each side of (4.3.1.17) the result (4.3.1.15)
1 2
x
2
appears and which is:
Finally, it must be established that
xl
X
Let
xl
x2
y
- xl'xl' log - + xl'x log
< 0 •
1 3
x
x
2 3
x3
2
3
l'xl' log -
1 2
p(Z)
;:
10gZ
1-Z
O<Z<l.
(4.3.1. 19)
Now,
lim
p (Z)
;:
and
-00
lim p(Z) ;: -1 .
Z->l
2->0
Also,
pI (Z) ;: 1 - Z + Z log Z
Z(1_Z)2
with
lim pI (Z) = +
2->0
00
1 - Z + Z log Z
=
and
;:"21 .
p' (z)
lim
Z->l
Consider
1 .. Z +Z( (l-Z) + (1_Z)2 + (1_Z)3 + ••• }
2
3
;: 1 _ Z _ Z (l-Z) + 8
for some 8 such that 0 < 8 < 1.
(1_~)2
Therefore,
2
8Z(1_Z)2
1 - Z + Z 10g Z ;: I - 22 + Z 2
Hence the slope of p(Z), p'(Z), is always positive,
o~
Z
~
Thus p(Z),
1, has a minimum at Z ;: 0, a maximum at Z ;: 1 and has no maxi-
mum, minimum or inflection points between Z ;: 0 and Z ;: 1.
Xl
Xl
As x 2 < x ' then ->
-x
x
3
2
3
Xl
x
Xl
x
Hence 0 < -- < -
3
Xl 'Y
;: (x )
2
2
Also, from 0 < Xl < x 2 < 1,
< 1 or
0
Xl Y
Xl Y
< (-)
< (-) <
x
x
3
I
2
which gives 0 < Z1 < Z2 < 1.
Now
Let
<
which gives
xl
xl' log x
3
3
xl' - xlI'
3
xl
x '1 log 2
x
2
<
xl' - xlI'
2
As x I' - xi and x I' - xl'1 are both greater than zero, then
2
3
xl
(x;-xi) xl'3 log(-)
x
3
<
xl
(x~-xD xl'2 log(-)
x
2
or
Hence
as was desired for (4.3.1.18).
The proof of (4.3.1.3), (4.3.1.4), (4.3.1.15) and (4.3.1.18) gives
(4.3.1.2) which is that
-00
From the definitions of the determ{nants given in (4.2.5), (4.2.7) and
51
(4.2.9), the inequality
o~
D(x 1,x 2 ,x ) ~ D(x 1,x 2 ,1) < D(0,x ,1)
2
3
of (4.,.1.1) is established.
Thus, if three levels are to be chosen to maximize the determinant
of the asymptotic covariance matrix, then the first level should be as
small as possible or xl
possible, or x
== 1.
3
0, and the third level should be as large as
==
The middle level, x ' can be chosen to maximize
2
D(O ,x 2 ' 1).
Now,
D(0,x 2 ,l)
dD(0,x
2
==
nln2n3(x~ logX 2 )2
,1)
dX 2
The maximum occurs when
em
diS
== 0
or when
(4.3.1.20)
Clearly it is a maximum as
lim
x
D(0,x
2 -> 0
2
,1)
==
0,
lim
x
2-> 1
D(0,x ,1)
2
o
and
D(0,x ,l) > 0 .
2
(4.3. 1. ~l)
The conclusion of this argument is that if three levels are to be
chosen in such a way as to maximize the determinant of the asymptotic
covariance matrix, then the levels should be:
1
xl
== 0,
x
2
1
== e
and
x
3
== 1
(4.3.1. 22)
and
D(O,e
- -1l
,1)
==
n n n
1 2 3
l
2 2
e
(4.3.1. 23)
A
4.3.2 Allocation by Minimizing the Maximum Term of Var r
The position of the levels can be chosen to best estimate the
parameters.
When the determinant of the asymptotic information matrix
was maximized it was found that xl
= 0 and x = 1.
Also, the informa-
3
tion from these two levels yields simple estimates for a and ~
For these reasons xl will be taken to be zero and x
3
(4.4.3).
as 1 and only x
2
will be selected.
A method of selecting the middle level is to select that x
minimizes the asymptotic variance of the estimates of
the only variance involving x
2
is the variance of
r.
2
that
In (4.2.10),
r which is:
2
A
Var(r) =
(j
or
2
CT
and
where
As
0 < x
(4.3.1. 24)
2
< 1, then 0 < x~ < 1.
Therefore
or
Also,
<
0
as x r
2
-
2 < 0
from
o < x~ < 1.
53
Hence
or
1
(log x )
2
2
1
<
Thus
or g2(x ) is the dominant term in Var(l).
2
Rather than minimizing
var(l), which is all but impossible for fixed n., the dominant term,
~
g2(x), will be minimized.
However, g2(x ), using (4.2.9), can be
2
written as:
n n n
l 2 3
D(O,x 2 ,1)
=
Thus minimizing g2(x ) is the same as maximizing D(O,x ,1).
2
2
(4.3.1.26), the value to choose for x
2
By
is again
1
=
e
I
It is worth noting that when the nls are also chosen to minimize
the variance of
1
x
2
=e
4.3.3
I
1,
the value of x
2
which minimizes Var(y) is still
This is shown in 4.5.2.
Allocation of a Level to a Parameter
It is possible to make a correspondence between three levels of
the variable and the three parameters.
function has an expected value of a.
Thus at the level of zero the
Between the levels of zero and
one the function 4as an expected growth
=0
to choose the levels xl
and x
3
of~.
Hence it seems natural
= 1.
This leaves unchosen a level between zero and one which in some
way corresponds to I'
That value of x which most clearly distinguishes
the members of the family of curves
o<
will be chosen.
Here a and
~
x
< 1
1 > 0
and
are considered fixed because they are
estimated from the information at Xl
=0
and x3
= 1.
Thus choose that x which best discriminates between the curve with
parameters a, ~, 1_61 and the curve with parameters a, ~'I'
be the difference between the curves.
~(x)
= (a +
~X/)
_
Let ~(x)
Then,
(a + ~x(/-6/)) = ~(Xl _ x Cr -6/ )) •
(4.3.3.1)
Not surprisingly, the value of x which distinguishes most clearly
among the curves for a change in 1 is not a function of a.
does not depend on
~
~
Also, it
which is a proportionality constant in (4.3.3.1).
indicates the relative ease with which the curves can be dis tin-
guished.
The value of x depends solely on the parameter I'
The curves
wil~
most easily be distinguished if x is chosen so
that ~(x) is a maximum.
Now,
The maximum occurs when ~I(X)
=
(1 _ 6 1 )61
1
which gives that
1
1
x
=0
=
1
55
1
x = e
)'
Thus the value of x which should be chosen is:
1
x
)'
e
=
If three levels are to be chosen to observe the function
Y
=a
+ ~x)',
a~
x ~ 1, then, on the basis of maximizing the deter-
minant of the asymptotic covariance matrix, minimizing the dominant
"'-
term of the asymptotic variance of r or of choosing levels to best
distinguish amongst the family of curves, the three levels to choose
are:
1
)'
=a ,
and
= 1 •
4.4 Three Level Estimation, Bias and Variance
4.4.1 Estimates of the Parameters
Let the three levels at which observations are to be taken be
xl
= 0,
xl' x
2
x
2
and x
and x
3
3
=
1.
When n , n and n observations are taken at
l
2
3
respectively, then the average value at these
XIS
can be
written as:
y(O)
y(x 2 )
y(l)
where
= a+ E
I
a + ~x)' + E
2
2
= a + ~ + E
3
(4.4.1)
ni
~
€i
: : ; j=l
n.
i =
1,2,3
~
and €ij is a random normal variable which has a mean of zero and a
.
var~ance
0
f cr.
2
As Ey(O) = a, Ey(xi = a + f3x~ and Ey(l) = a + f3 the parameters a,
f3 and yare given by:
a=Ey(O),
f3 = Ey(l) - Ey(O)
and
Ey(x 2 ) - Ey(O)
y = (log Ey(l) _ Ey(O) ) / log x 2 •
(4.4.2)
A
A
A
The estimates for a, f3 and Y, which are designated by a, f3 and Y
respectively are given by
a = y(O)
(4.4,2) without the expectation signs or:
g = y(l)
- yeO)
and
(4.4.3)
4.4.2 The Bias of the Estimates
Clearly, from
unbiased.
using
(4.4.2) and (4.4.3), the estimates of a and f3 are
This is not necessarily true for the estimate of y.
(4.4.1),
Y can be written
as:
Now,
57
1
y = log x
A
2
(log(y(x 2 ) - y(O)) - log(y(l) - y(O))}
log(l +
€ -E
3 1)}
~
or
A
_
y - y
+
1
log x
00
( L:
2
i=l
(_1)i+1
i
(E 2 -E 1 )i
~xY
2
2
00
-
L:
i=l
(_1)i+1
i
(E -E 1 )i
3
~
)
.
(4.4.2.1)
2
For k = 2 or 3, Ek-E 1 ~ N(O, ~ + ~).
n
n
k
1
Then,
Hence, from (4.4.2.1),
(:1
co , 1
J=
L:
i=l
(2j_1)cl i
2'
2i ~ ~
)
.
(4.4.2.2)
From this expression, it is clear that the bias is given by:
A
Ey-y
(4.4.2.3)
Note that the bias is small when
~
>
~
and when the n. are large.
If
~
1
2)'
3 )
-.!:.-+-.!:.n
n
x
2
=
A
(
l
1
nl
(4.4.2.4)
1
+n
2
A
then E)' = )' or )' is unbiased.
x
2
The solution for x
2
is not valid for
1 because (4.4.2.3) is undefined for this value.
=
A meaningful
result occurs for x 2 if n is taken greater than n and the value of x
2
2
3
is calculated as in (4.4.2.4).
A different approach would be to choose x
2
at some convenient
value such as e- 1/)' and then, for this value of x , choose n p
2
n
2
and
so that (4.4.2.4) holds. The restriction that n = n + n + n
l
2
3
3
would still be required. The choice of the sample sizes could be made
n
subject to some criterion and two restrictions.
4.4.3
The Variance of the Estimates
From (4.4.3) and (4.4.1), the variances for
a and § are
given by:
Var(a)
and
Var
A
~
=
Var(y(l) - y(o))
An expression for the variances of )' can also be developed.
Var(y)
and using (4.4.2.1),
A
A
E()' - E()'))
2
A
2
A
= E()'-)') - ()'-E)')
2
Now,
(4.4.3.2)
59
... }
(4.4.3.3)
Now for i and j equal to 2 or 3
E(E.-E
l )
1
221
= ~ (--
n
i
1
+--)
n
l
(4.4.3.4)
and
60
Hence, using (4.4.3.4) in (4.4.3.3),
E(1_,)2
2
=
[0"
1
~
(1 ogx2 )2 ~
{...!.- +
n
3
1
n2x2,
2
+...!.- (1 _ ...!.-)2}
n1
x'
2
From (4.4.2.2)
2
1
[.!!- {
2
( 1ogx2) 2 2~
4
+ 30"
4~4
{
Using (4.4.3.6) and (4.4.3.5) in (4.4.3.2) gives
var(l )
4
+ 0"
~
{
5 ( 1- + -1) 2 } + ••• ] •
n
n
1
3
+ -
2
61
A
Note that the first term is the asymptotic variance of y as given in
(4.2.10 ).
4.5
Optimal Sample Size at the Three Levels
Observations are to be taken at the three levels xl - 0, x
x
3
=
1 with the total number of observations being n.
of observations to be taken at 0, x
ly where n = n
l
2
2
and
Let the number
and 1 be n , n and n respective2
l
3
+ n2 + n y
Sample Size Chosen to Maximize ~
4.5.1
The choice of the value of the n.
l.
can be determined by the de-
IS
sire to fit an individual parameter or the parameters in total.
This
latter is attempted by maximizing the determinant D(0,x ,1) where
2
y
2
=
n n n ( x log x )
2
l 2 3 2
=
nln 2 (n-n l -n2 ) (x~ log x 2 )2
and
ClD(0,x , 1)
2
anI
cm(0,x ,1 )
2
dIl 2
The value of n , n and n which maximizes D(0,x ,1) such that n + n
2
l
2
l
2
3
+ n
3
=
n is given by setting
=
Then
o
and
::;=
o.
62
or that equal numbers of observations should be taken at the three
levels.
Obviously the solution is a maximum.
If xl' x 2 and x are taken as 0, e
3
n1
= n2 = n 3 =
3' then from
- l/y and 1 respectively and if
(4.2.9) ,
1
D(O,e
from
Y,
(4.2.3)
~ (e+l)
n
3
e
n (e 2;1)
"3 e
n
-~
3'1e
and from
n
- 3ye
n
- --2
3ye
(4.5.1.4)
n
22
3'1 e
(4.2.10)
3
3 (e-1)'1
n
n
n
- -n3
-n
6
3(e-2)y
3
M- 1(y)
n
3(e-1)y
e(e-2)y
n
n
(4.5.1. 5)
2
6(e -e+l)y
n
4.5.2 Sample Size Chosen to Minimize the Asymptotic Variances
i
If the observations are to be chosen to minimize the asymptotic
'" then, from
variance of a,
be taken at Xl.
(4.2.10) or (4.4.3.1), all observations should
That is,
n
2
= 0
and
If the observations are to be chosen to minimize the asymptotic
variance of ~ where, from (4.4.3.1),
2
er
~
1
= (-
1
2
+ - ) er
n1
n3
then an equal number of observations should be taken at xl and x
none at x •
2
3
and
That is,
n
and
= 0
2
A
As the asymptotic variance of 1 is a function of x ' that is:
2
2
erA =
1
1
2
t3 (logx2)
2
(
-n1
+
1
21
n x
2 2
3
+2--. (1 - 2--.)2 } er2
n1
x 21
(4.2.10)
the choice of n. to minimize this variance depends on the level of x .
2
1
_ 1/1
2
Assume x = X where X is specified and X = e
Then,
2
2
2
2
2
2
erA
1
With n
3
12
-n1
1
- -.
1
+_e_ + (l-e
n
n
2
1
3
(
=~
21
12
t3
2 )2
2
(4.5. 2 .3)
} er
== n - n 1 - n 2 '
21
12
2
L
12
12
)2
1
2
2
(l-e
+
_e_
+
(
} er
erA
2
n
n
n-n 1-n2
t3
2
1
1
Now
2
derA
1
dill
derA2
1
Cln
2
L
2
12
=~(
1
2
12
(
1
(n-n 1-n2 )
)2
2
} er
21
12
2
=~
12
2
n
1
(n-n 1-n )
2
t3
t3
(l-e
2
2
~} er
n
2
(4.5. 2 • 4 )
64
2
The solutions for n l and n which maximize
2
d~2
~A
are obtained by setting
A
=0
dIl~
and
Because n
o.
=
3
= n-n l -n2 ,
the final values are:
With these values of n l , n and n3' (4.5. 2 .3) becomes
2
2
~
A
e
2
= 41'2
I'
21'
1'2
=
7
4
2
I'
nl3 (X log X )2
2
2
The value of 1'2' or equivalently X2 , can be obtained by minimizing
As
~= is proportional to D(0,X2,1), the solution is the same as
I'
(4.3.1.20) namely X2
=e
_
1/1'
or 1'2
= 1'.
If x 2 is chosen optimally in the sense of section 3.4, then
x
2
= e - 1/1'
for the true value of 1'.
Then using (3.5.10) with 1'2 = 1',
the number of observations at the levels are:
nl
_ n (e-l)
- 2' -e-
The values of n
2
~A
I'
4.5.3
i
=
'
and
are given in Table 4.1.
n
3
n
=-
2e
With these values of n,
22
4I' e
n13
(4.5. 2 • 8 )
2
Sample Size Chosen to Minimize the Trace of
I
Chernoff (1953) suggests that the trace of the asymptotic covariance matrix be minimized.
(3.2.9) and (4.2.10)
Let T(n ,n ,u ) represent the trace.
1 2 3
By
65
= 0,
Table 4.1
Optimal sample sizes to minimize var(y) when xl
x = e- 1/"1 and x = 1
2
3
n
3
4
5
6
7
8
9
10
11
12
13
14
15
n
1
1
2
2
2
2
3
3
4
4
4
4
5
1
2
2
3
4
4
4
5
5
6
7
7
7
1
1
1
1
1
2
2
2
2
2
2
3
3
n
n
1
2
3
Assume that x
2
is chosen optimally in that x
2
=
e - 1/"1
Then,
2
2
2
2 2
T(n ,n ,n )
1 + .L..!1 2 3
= [2 + "I (e-1) ] + (1 +"1 )1:2
n1
2
t)2
n3
0t) n
2
?
Now, n , n and n are to be chosen to minimize T of (4.5.3.2) subject
1
2
3
to n = n 1 + n 2 + n or n = n - n 1 - n 2 • Substituting n = n - n 1 - n
2
3
3
3
into (4.5.3.2) and taking the partial derivatives with respect to n
l
and n
2
gives:
dT(n 1 ,n2 ,n )/0-2
3
2ln2
2 2
2
+ (1 +L) _ _1_~
- I-"A2n22
A2 ( n-n -n )2
I-"
"I e
1
2
When the partial derivatives are set equal to zero and the equations
66
are solved, the values of n. which minimize the trace subject to the
1.
restriction are found.
n
n
n
n
1
2
3
J2
These values are:
+ (1.)2(e_1)2
13
=
nye
I3g
=
nJ1 + (1)2
13
/g
/g
where
g
=
j1 + (1)2
~
+ Ie
13 +
J2 + (1.)2(e_1)2
13
Table 4.2 gives the values of n. for different nand 1/13.
1.
4.6
The Three Level Solution Compared with the n Level Solutions
As indicated at the beginning of the chapter, there is some reason
to feel that the optimal way to take observations is to choose three
levels and replicate at these levels.
that the levels should be xl
The development here suggests
= 0, x 2 = e- 1/1 and x =
3
1.
Also, if the
replications, n , n and n , at the three levels are chosen to maximize
1
2
3
the determinant of the asymptotic information matrix, then they should
be of equal size.
This will be done as far as possible in keeping with
the restriction that the nts must be integers.
The terms of the asymptotic covariance matrix for the three levels
0, e - 1/ 1 and 1 are given in Table 4.3.
The number of observations at
the three levels are to be as close to being equal as possible.
If
there is One observation above equal numbers at each level, then an
extra observation is taken at x
= O.
If there are two such
Table 4.2
.,/f'
1
n
n
Optimal sample size to minimize the trace of I
3
1
1
4
n
2
.1
.5
n
n
n
1
1
n
1
1
1
1
1
1
1
1
2
n
.01
n
1
n
2
n
1
n
2
n
1
3
1
1
2
1
2
1
1
2
1
1
2
1
1
5
2
2
1
2
2
1
2
\
2
2
1
2
6
2
3
1
2
2
2
3
1
2
3
1
2
7
2
3
2
3
2
2
4
1
2
4
1
2
8
3
3
2
3
3
2
4
1
3
4
1
3
9
3
4
2
4
3
2
5
1
3
5
1
3
10
4
4
2
4
3
3
5
1
4
5
1
4
11
4
4
3
5
3
3
6
1
4
6
1
4
12
4
5
3
5
4
3
6
1
5
7
1
4
13
4
6
3
5
4
4
7
1
5
7
1
5
14
5
6
3
6
4
4
7
2
5
8
1
5
15
5
7
3
6
5
4
8
2
5
8
1
6
3
3
observations, one is taken at x = 0 and the other at x = 1.
of Table
The terms
4.3 are calculated from W- 1(i), which, using (3.5.1) and
(4.2.10), is given by
(sym)
3
68
Table 4.3
Three point spacing; asymptotic covariance matrix
Info.
n n 1 n2 n
3
3 1 1 1
Info.
Per Pt.
[32
3"2 II (Y)1
n /
[32 2
-CY
2 /
2
[32 II (y)1
1. 000 -1. 000 1.718 2.00 -.718 11.342
.135
.00~1
.141
9. 86 5
.27 1
.00423
2
CYcx
CYcx[3 ~CY
y CXy
2
CY[3 ~CY
/ [3/
/
/
4 2
1 1
.~O
-
5 2
1 2
.~O
- .~O
. 859 1. 00 -. 359
9.3 65
.541
.00433
6 2 2 2
.~O
-
.~O
. 859 1. 00 -. 359
5.671
1.083
.00~1
7 3 2 2
.333 - .333
.573
.83 -.073
5.179
1.624
.00473
8 3 2
3
.333 - .333
·573
.67 -.240
5.012
2.436
.0047 6
9 3 3 3
.333 - .333
·573
.67 -.240
3.78 1
3.654
.00~1
.~O
.859
1.~
10 4 3
3
.2~
-
.2~
.430
.58 -.097 3.535
4.872
.00487
11 4
4
.2~
-
.2~
.430
. ~ -.180
3.451
6.496
.00488
12 4 4 4
.2~
-
.2~
.430
• ~ -.180
2.835
8.661
.00~1
13 5 4 4
.200 - .200
.344
.45 -.094
2.688
10.8 27
.00493
14 5 4
5
.200 - .200
.344
.40 -.144
2.638
13.534
.00493
15 5 5 5
.200 - .200
.344
.40 -.144
2.268
16.917
.00~1
16 6
5 5
.167 - .167
.286
.37 -.086
2.170
20.300
.00496
17 6
5 6
.167 - .167
.286
.33 -.119
2.137
24.360
.00496
18 6 6 6
.1 67 - .167
.286
.33 -.119
1.890
29.232
.00~1
19 7
6 6
.143 - .143
.245
.31 -.078
1.820
34.104
.00497
20 7
6 7
.143 - .143
.245
.29 -.102
1. 796
39.788
.00497
3
and
IW(y)
I
nl n n
2 3
= -""'-2"':;" =
A2
~
II(y) I •
y
e
The asymptotic properties of this three point spacing with replication can be compared with the asymptotic properties for the n level
spacings which are given in Appendices 9.1 to 9.5-
It is quite clear
that three point spacing with replication is better than any of the n
level spacings for the same total number of observations.
Thus, as in
section 3.7, the value of replication is indicated.
A
A Comparison of the Exact and Asymptotic Variance of y
A
It is difficult to compare the exact and asymptotic variances of /
in the general case.
However, when an experiment is restricted to the
three levels 0, e- l/y and 1 with an equal number of observations taken
at each level, namely n/3, then the asymptotic variance from (4.5.1.5)
and (3.3.2) is given by:
Var(y)
rv
rv
3y2(2e2 _ 2e + 2)~2
n13
2
From 4.4.3, the exact variance is given by
A
Var( y)
3 2
2
= Y2 [(~)(2e
-2e+2)
nl3
3(/ 2
4
3 2
+ (~) (lOe -4e -e -4e+l0)
nl3
3i
256 6
5
4 84 3
2
256
e -24e -6e - --9 e -6e -24e+ ~)z) + ••• ] •
+ (---)(--2
n13
The
f~rst
3
term of the exact variance is the asymptotic variance.
70
The i
th
term of the exact variance of r is of the form
2 i
k. (30"2)
1.
n~
quantity 30"2/n~2 is small relative to k., then the
Hence, if the
1.
series converges.
Now O"/~ is the standard deviation of the mean of the n/3
observations at one of the three levels 0, e- l/r and 1.
term
~3~/n~2
Thus the
is the ratio of the standard deviation of the mean of
the observations to the total
growth~.
be small in any experimental situation.
One would expect this ratio to
Should it not be so, the
asymptotic results would not indicate the true variability.
Table 4.4
gives the three terms and their sum for the exact variance of
2
by r .
y divided
This suggests that the ratio of the standard deviation of the
mean to the total growth should be of the order of 1/100 in order for
the asymptotic terms to be reasonable estimates of the true values.
Table 4.4
1st term
l/JlO
1.13416
The three terms of var(Y)/r
2nd term
2
3rd term
Sum
4.573737
30.32487 6
36.03 278
1/10
.113416
.045737
.030324
.18948
1/100
.00113416
.00000 457
.00000 000
.00113 88
1/1000
.00001 13416
.00000 00046
.00000 00000
.00001 13462
71
5.
THE FOUR LEVEL SOLUTION
5.1
Introduction
On the basis of the calculations of the asymptotic properties of
three levels in Chapter 4 and those of n levels considered in Chapter 3,
it appears that three levels is the best spacing.
If that is the case,
then three levels should be better than four levels.
This will be con-
sidered here with the criterion being the determinant of the asymptotic
information matrix.
The levels in the t}:lree level spacing are 0, e-
1/ /
and 1.
The
four level spacing will also have the two end levels 0 and 1.
5.2
The Asymptotic Covariance Matrix and Its Determinant
Let the four levels at which observations are to be taken be 0, w '
2
w and 1 where 0 < w < w < 1.
2
3
3
Suppose it has been determined that n
observations are to be taken such that m , m , m and m4 observations
2
l
3
are taken at 0, w , ~ and 1 respectively where n = m + m + m + m .
l
2
2
4
3
With the usual additive error,
E,
and a subscript indicating the
value of X at which the observation is taken, an observation can be
written as one of:
Yli = a + Eli
i = 1,2, ••• ,m l
Y2i = a + l3m,12 + E2i
i
Y3i
= a +~
+
Y4i
= a+t3
+ E4i
E
3i
= 1,2, ••• ,m2
i
= 1,2, ••• ,m 3
i
= 1,2, ••• ,m4
These are the same form as the equations of (4.2.1).
(5. 2 • 1 )
72
By using (3.2.10) with (5.2.1), the modified information matrix
M(Y) is given
by
n
M(y) =
(sym)
The asymptotic covariance matrix is given by:
M- 1 (y) =
1
D(0,(1)2,(1)3,1)
All
-A 12
-A 12
A
22
A
13
-A
A
13
-A
23
A
33
23
where D(0,(1)2,(1)3,1) is the determinant of M(y) and
( y
_
D( 0,(1)2,(1)3' 1) - m1m2m4 (1)2108W2)
+
2
( y
2
+ m1m m4 (1)310g(1)3)
3
(1)2 2
m1m2m3((1)~~10g ~)
Y
y
2
- (1)2108W2 + (1)3 10g(1)3)
( y Y
(1)2
+ m2m3m4 (1)2(1)3 10g ~
(5. 2 .3)
73
+
m2m4w~(w~-1)logw2 + m3m4~(~-1)10g~
_
2)'
2)'
()'
2
()'
2
A - m1m2w2 + m1m3~ + m1m4 + m2m4 w2 -1) + m m4 ~-1)
33
3
5.3
Three Levels vs. Four Levels
The relationship between the four level determinant and the three
level determinant is to be studied.
In particular, it has been indi-
cated that an n level, and thus a four level, determinant is always
less than or equal to the maximum value of the three level determinant
or that, by (5.2.4) and (4.5.1.3),
< D(O,e- 1/)' ,1)
=
or
This will be shown to be true in the region of the three point solution
and then again by numerical calculation for the total region
<W <1.
3
Using (4.2.5), (4.2.7), (4.2.8) and (4.2.9), let
Then the four level determinant can be written as:
D(O,w2,w3,1) = m1m2m4 D1(O,w2 ,1) + m1m3m4D1(O,w3,1)
+ m1m2m D1 (O,w2 ,w ) + m2m3m4D(w2'~' 1) •
3
3
0
< w2
When (4.3.1.1) is divided by n n2n , one obtains:
l
3
from which can be deduced:
D (ill ,ill ,1)
l 2 3
< Dl (0,ill ,1)
D (0,ill ,ill )
l
2 3
< Dl (0,ill2,1)
3
from all 0 < ill2 < ~ < 1.
The larger of Dl (0,ill2 ,ill ) and Dl(O,~,l)
3
depends primarily on the value of ill • Similarly, the larger of
3
D(ill ,ill ,1) and D(0,ill ,1) depends primarily on ill •
2 3
2
2
If ill2 is close to zero, then D (0,ill2 ,1) is small and Dl(0,ill2'~)
l
is even smaller.
See (4.3.1.21).
If ill is close to 1, D (0,ill ,1) is
l
3
3
small and D (ill2 ,ill ,1) is even smaller. If ill is close to zero and ~
2
l
3
is close to 1, then D(0,ill2,~,1) is small and there is no need to
compare it with D(O, e- 1/)' ,1).
If ill
D (0,ill , 1) is close to its maximum.
2
l
2
is close to e- 1/)' , then
If ill
3
is also close to e- 1/)' ,
then ill2 and ill are close which means that D (0,ill ,ill ) and D (ill ,ill ,1)
2 3
l
l 2 3
3
are close to zero. The conditions to investigate then are that:
case
i)
case
it)
case iii)
Case i)
ill2
ill and ill are close together, ill ~ ~, particularly
2
2
3
1/)'
and ill ~ e- 1/)' •
when ill ~ e
2
3
- 1/)' •
~ ~ 1, particularly when ill2 ~ e
ill2
~
0, particularly when ill
3
~
e- 1/)' •
~ ~:
If ill2 ~ ill , then Dl (0,ill2 ,1) ~ D (0,ill ,1) and Dl (0,ill ,ill ) ~ Dl (ill2 ,
l
2 3
3
3
ill ,1) ~ O. Let Dl (0,ill2 ,1) = Dl(o,~,l) = c l and Dl (0,ill2 ,ill ) = Dl (ill2 ,
3
3
ill ,1) = c 2 where c » c 2 ~ O. These equalities of the Dlls hold
l
3
exactly when ill~10gill2
= ill~10gill3.
75
Then, from (5.3.3),
The choice of m , m , m and m which maximize D(O,w ,w ,1) and subject
2
2 3
1
4
3
to the restriction that n = m + m + m + m can be found by differen1
2
4
3
tiating
with respect to m , m and m •
1
2
3
Now,
dD(O,w2,~,1)
d11 1
To investigate for maximum, minimum or inflection points the partial
derivatives are set equal to zero.
Hence,
=
0
Thus either m
2
If m = n-m , then m2~3 = n.
2
3
the only solution is that
= m or m2 = n-m ,
3
3
But n = m1+m2~3~4 and m
1
From setting (5.3.8) equal to zero,
f
O.
Thus
and using
(5.3.11),
The substitution of
(5.3.11) and (5.3.12) into (5.3.9) gives that
Then
m = n [
2
when
2c 1 -
bD(O,w2,w ,1)
3
2ID2
o.
The unique solution for m is revealed in the test for a maximum.
2
The substitution of
(5.3.14) into (5.3.15) gives that
Hence the value of m which maximizes D(O,W
2
2
Then, using
m
4
,wy
1) is:
(5.3.16) in (5.3.11), (5.3.12) and the restriction
= n-m1-m2 -m
3
, the following results are obtained:
77
2
2c 1 - c 2 - /ci - c 1c 2 + c 2
m :::: m :::: n [
]
2
·3
6(c 1-c 2 )
(5.3.17)
I
2
cl. - 2c 2 + ci - c 1c 2 + c 2
]
ml. :::: m4 :::: n [
6(cCc 2 )
Ag~in,
in
~as~
i) the assumptions gave the fact that c 1
(5.3.18)
»
c
2
~
O.
Putting c 2 :::: 0 in (5.3. 16 ), (5.3.17) and (5.3.18) gives:
n
m1 :::: m4 :::: "3
Substituting (5.3.19) into (5.3.6) gives
c
2
(n)3 c1 + (~)3
2
D ( 0,(I.)2'Wy 1) "'i"3
.J
T~us
the same
when (1.)2 a:p.d (1)3 are IIc1ose ll , D(0,(I.)2,(I.)3,1) has approximately
va1u~
as D(0,(I.)2,1).
Also, the designation of observations to
be taken at each level virtually gives the three level answer,
One
third of the observations are to be taken at each of the ends for both
the three and four level spacing.
For the three level case, one third
of the observat;i.ons are to be taken at e- 1lt .
(4.5.1.2).
See (4.3.1.20) and
For the four level spacing, one sixth of the observations
are to be taken at (1.)2 and one sixth at
~,
to e- I //' to make D(0,(I.)2,(I.)3'1) large.
This is essentially saying that
both of which should be near
one third of the observations should be taken at e- 1//' •
It is important to note that if w2 and ~ are close but not equal
to (;'!-1/m ~hen D(0,w2,~,1) is much smaller than D(O,e- 1/1' ,1), as
D(0,W ,w3'1)
2
by (4.3.1.20) for w2
~3 ~
Case ii)
f
~
D(0,w2,1) < D(0,e-
1
/1' ,1)
1
e- /1'.
1:
if ~ ~ 1, then by (4.3.1.21),
Dl (0,m ,1) ~ D1 (O,w2 ,w ) »D1 (0,m ,1) ~ D1 (m2,w ,1) ~
2
3
3
3
°.
Let D1 (0,m2,1) ; D(0,m2'~) ~ c 1 and D1 (O,w ,1) ; D (m2,m ,1) ; c2
1
3
3
wherec » c z 0. Tqen, from (5.3.3),
8
1
If in (5.?21) the subscripts onm2 and m4 are interchanged the equation becomes (5.5.6). Hence the solution for the values of m , m , m
2 3
l
and m4 whichma~imize (5.3.21) will be the same as ~iven in (5.3.16),
(5.3.17) and (5.3.18) when the noted subscripts are changed.
The value
of D(o,m2 ,m ,l) will be as given in (5.3.20), namely
3
D(O,m2,m ,1)
3
= (3)3
D1 (0,m2,1)
and tqe number pf observations at the levels are as in (5.3.19) or
and
)
e -t/I' and m ~ 1, then D(
0,m2,m
,1
3
3
If (.Q2 h not near e- 1/1' with m ~ 1, then
3
If m2
~
~
D(o,w2,m3'1) < D(0,e- 1/1' ,1)
and the three level solution will be the better one.
D(O,e -1/1' ,1 ) •
79
Case iii)
If (1)2
~
0, and
~
hac3 sane other value, then, in a similar argument to
case ii)
=
an d
n
. h
m r m4 = 3
W1t
m1 = m2 = bn .
3
1
If (1)3 ~ e- /)' with (1)2 ~ 0, then D(0,(1)2,(1)y1)
~
1
D(0,e- /)',1).
If
(1)3 is not near e - 1/)', then
D(0,(1)2,(1)y1)
<
1
D(0,e- /)' ,1)
and the three level solution is the better one.
From the three cases considered, the four level solution only becomes attractive when it is approaching the three level optimal conditions.
The number of observations at the four levels indicates that
the spacing of the three level solution should be maintained.
Figure 5.1 indicates the regions considered.
The· indicates an
optimal three level solution.
5.4
Three Levels vs. Four Levels:
Numerical Calculations
The determinant of the information matrix (5.3.3) when m , m2 , m
l
3
and m4 observations are taken at levels 0, (1)2' (1)3 and 1 respectively
where n
= m1
+ m
2
+ m
3
+ m4 can be written as:
80
case iii
ii
e - 1/1'
---f~-----'---------+----X
o
Figure
By
e -1/1'
1
5.1 Regions where three levels are better t:lan four levels
(5.3.1) the four level determinant is less than the three level
determinant if
Now, using
(5.2.4),
]
Let m~
= t2
and m~
then given by
= t3
and define 1'2D(o,m ,m ,1) as H(t ,t ) which is
2 3
2 3
81
t
2
2
( t 2 t 10 g t 3 )
3
+ ------'--
for 0 < t
2
< t
3
< 1.
The four level determinant is less than the three
level determinant if
( ~)3 1
3
2e
or
27 2
- ; - H( t 2 , t ) have been made at one
3
n
twentieth units of (t , t ) for the following four sets of mIS:
2 3
Calculations of the value of
n
(1)
m
1
='4
(2 )
m1
="t)
2n
2n
(3 ) m1 =1)
(4 ) m1
n
="5
n
m2
='4
m
2
="5
m
2
=1)
m
2
="5
n
2n
n
m
n
3 ='4
m
n
3 ="5
m
n
3 ="5
m
2n
3 =1)
n
m4
='4
m4
="t)
m4
="5
m
4
=1)
2n
n
2n
(5.4.5)
Subscripts are placed on H(t ,t ) to designate the above four
2 3
choices of mls.
Then:
82
t2
2
(t t log -- - t logt + t logt )
2
2
2 3
t
3
3
+ - - - - - - " -3- - - : 2 , . . - - - - - - - - -
t
2
2
(t t log -- - t logt + t log t )
2
2
2 3
t
3
3
3
+------"---.....".....------2
]
2
=
(~)3
! [
32
(t logt )
2
2
2
+
2
t2
+ (t t log -- - t logt + t lo gt )
2
2
2 3
t
3
3
]
3
The calculations of
i
are shown in Tables
1, 2, 3 and 4
5.1, 5.2, 5.3 and 5.4 respectively.
Note that all
the tabulated elements are less than one and hence condition
(5.4.4)
is satisfied.
Thus a four level spacing which is close to a three level spacing
yields a smaller determinant of the asymptotic information matrix.
Also, by numerical calculation over the total region of interest, a
three level spacing yields more information per point than a four level
spacing.
-
e
e
27e2H1 (t 2, t )
Table 5.1 Numerical value of
3 3
n
t
t
.05
3 2
.26
.10
.15 .39
.20 .:;0
.25 .59
.30 .66
.35 .69
.40 ·70
.45 .69
.50 .66
·55 .61
.60 .55
.65 .48
.70 .41
·75 .33
.80 .27
.85 .21
.90 .17
.95 .14
.10
.15 .20
.25 .30
.35 .40 .45 .50 .55 .60
.65 .70
.75 .80 .85 .90
.44
.40
.38
.37
.39
.42
.25
.22 .16
.22 .15 .09
.24 .16 .09 .04
.43
·52
.60
.66
.69
.71
.70
.68
.64
.60
.55
.49
.44
.39
.35
.33
.32
.59
.65
.70
.73
.7 4
.74
.7 2
·70
.66
.62
.70
.74
.77
.78
.78
.77
.74
.7 1
.68
.58 .65
.54 .62
.51 .59
.48 .58
.47 .59
.48 .61
.78
.81 .83
.81 .83 .84
.81 .83 .83 .82
.80 .81 .81 .80 .78
·77 .79 .79 .77 .75
·75 .7 6 .76 ·75 .7 2
.7 2 .74 .74 .7 2 .69
.69 .7 1 .7 1 .69 .66
.67 ·70 .70 .68 .64
.66 .69 .69 .68 .64
.65 .69 ·70 .69 .65
.67 .71 .73 .7 1 .68
.70 .75 .77 .7 6 .73
.7 1
.68
.64
.61
.59
.59
.59
.62
.67
.63
.59
.56
.53
.52
.53
.55
.54
.50
.47
.45
.45
.47
.60 .51
.34
.31
.30
.30
.33
CP
+""
-
e
e
27e2H2(t2, t )
Table 5.2 Numerical value of
3 3
n
t
t
.05 .10
3 2
.10
.29
.15 .42 .:e
.20 .j3 .60
.25 -<061 ~68
.30 .-67 -7'
.35 .70 .76
.1.K>
.71 ·77
.45 .69 ·'"l5
.65 .72
.~
.55 .60 .68
.60 .54 .63
.65 .47 .57
.70 .40 .51
·75 .33.44
.80 .26 .39
.85 .20 .34
.90 .16 .31
.95 .13 .29
a .996
.15
.20
.25 .30
.35 .40 .45 .5) .55 .60
.65 ·70
·75 .80 .85 .90
.69
.76 .£3
.80
.83
.-84
.83
.80
.76
.71
.66
.61
.56
.87
.90
.90
.89
.87
.83
.79
.74
.69
.64
.51
.47
.45
.44
.60
·57
.55
·55
.9}
.95
.95
.94
.92
.88
.84
.79
.75
.70
.67
.64
.63
.64
.98
.98
.97
.94
.91
.87
.82
.78
.74
.71
.69
.68
.69
LOO
a
.98
.95
.92
.87
.83
.79
.75
.7 2
.70
.70
.7 1
.97
.94
.91
.86
.82
.77
.74
.7 1
.69
.69
.70
.92
.88 .84
.84 .80 .75
.79 .75 .70
.74 .70 .65
.7 1 .66 .60
.68 .63 ·57
.66 .61 .55
.66 .61 .54
.67 .62 .56
.64
.59 .52
.54 .47
.43
.~
.48 .40
.47 .39
.48 .40
.40
.36
.33
.31
.32
.29
.25 .19
.23 .16 .11
.23 .16 .09 .05
CP
VI
e
e
e
27e2H (t 2,t )
Table 5.3 Numerical value of
3 3 3
n
.15
.20
.25
.30
.35
.40
.45
.5)
·55
.60
.65
.70
.75
.80
.85
.90
.95
.05 .10 .15 .20
.25 .30
.35 .40 .45 .5)
.19
.27
.34
.39
.43
.45
.46
.45
.44
.41
.38
.34
.30
.26
.22
.19
.17
.16
.69
.7!)
.7 1
.7 1
.71
.70
·70
.69
.69
.70
.7 1
.73
·77
.82
.1'5
.74
.7 4
.73
.72
.7 2
.73
.74
.7 6
.79
.84
.91
.35
.41
.4-6
.49
.51
.52
.52
.51
.5)
.47
.45
.42
.40
.38
.37
.36
.37
.55 .60 .65 .70
.75 .80
.85 .90
.49
.53 .61
.c:rJ .63
.59 .65
.60 .66
.60 .66
.59 .66
.58 .65
·57 .65
.55 .64
.54 .63
.53 .63
.52 .63
.53 .65
.54 .67
.56 .7 1
.75
.74
.74
.73
.73
.7 2
.7 2
.7 2
.73
.75
.78
.83
.89
.74
.73
.7 2
.71
.70
.71
.7 2
.74
.78
.83
.90
.70
.69
.67
.67
.67
.67
·70
.73
.79
.86
.65
.63
.62
.61
.62
.63
.67
.7 2
.79
.r;J3
.56
.5)
·55
.55
.56
.59
.64
.7 1
.48 .41
.47 .39
.48 .39
.5) .41
.54 .4it
.60 .5)
.32
.31
.31
.34
.39
.24
.23 .16
.24 .16 .09
.28 .19 .10
.05
(J)
0'\
e
e
e
27e 2H4(t 2,t )
3 3
Table 5.4 Numerical value of
n
t
t
.05 .10
3 2
.26
.10
.15 .42 .41
.20
·55 .52
.25 .66 .61
.30 .73 .68
.35 .78 .72
.40 .79 .74
.45 .77 .73
.50 .73 ·70
.55 .67 .65
.60 .60 .59
.65 .51 .52
.70 .42 .45
.75 .34 .38
.80 .25 .32
.85 .18 .26
.90 .13 .22
.95 .09 .20
.15 .20
.25 .30
.35 .40
.45 ·50
.55 .60
.65 .70
·75 .80 .85 .90
.54
.62
.68
.71
.73
.72
.69
.65
.60
.64
.69
.7 2
.73
.7 2
.70
.67
.62
.55
.49
.43
.38
.33
.30
.29
·57
.52
.47
.42
.39
.37
.36
.71
.73
.74
.73
.7 1
.67
.63
.59
.54
.49
.45
.43
.41
.42
.74
.75
.73
.7 1
.68
.63
.59
·55
.51
.47
.45
.44
.45
·75
.73
.7 1
.67
.63
.59
.54
·50
.47
.45
.45
.46
.73
·70 .68
.66 .64
.62 .60
·57 .55
.53 .51
.49 .47
.46 .44
.44 .42
.44 .42
.46 .44
.62
.58 .55
.53
.48
.44
.41
.39
.38
.40
.50
.45
.40
.37
.35
.34
.36
.47
.41
.36
.33
.30
.30
.31
.38
.33
.28
.26
.25
.26
.29
.24 .21
.21 .17 .13
.19 .15 .11 .07
.20 .15 .10 .06 .03
co
-..::J
88
6.
CONDUCTING AN EXPERIMENT
6.1
Introduction
The way in which an experimenter designs an experiment depends to
a great extent on his knowledge of the situation.
ments which explain
wh~t
Theoretical develop-
an experimenter can expect to observe as con-
ditions change will aid in conducting an experiment if, of course, the
derivations and assumptions are appropriate to the situation.
any previous
exper~mental
Also,
results will help to provide guidelines as to
how the new experiment should be conducted and possibly indicate what
factors
a~e
important,
A fairly accurate and complete knowledge of the experimental
situation is required here.
It is assumed that it includes the in-
formation that the function relating the variables X and Y or Z and Y
i~
of the form
Y =
a + l3(b~-a)
y
_a
<
Z
<
b
(1. 1)
o<
X
< 1
(1. 3)
a
or
for
-00
< a, 13 <
00
and y > O.
In some cases this knowledge of the
experimental situation inclqdes some knowledge of y and this can be
used in choosing levels of X.
In many experiments the independent variable is such that the
values at which to conduct the experiment can be selected.
For the
function (1.1), there are some levels of X which lead to more precise
estimates of the
parame~ers
of the levels is important.
than other levels.
Hence the proper choice
6.2
Selection of the Levels
The choice of the levels at which to observe the function has been
discussed in the previous chapters.
a knowledge of the parameter y.
The recommended levels all involve
This mayor may not be available.
If there is a strong evidence that the functional form (1. 3) is
valid but no knowledge of y is available, then a reasonable way to start
would be to take three levels and perform a preliminary experiment to
estimate y.
Then, with this estimate of y, three levels which are
quite efficient at providing estimates of the parameters can be selected
as in 4.3 with the sample size at the levels as discussed in 4.5.
It is probably more likely that the experimenter will know that
some form of exponential growth is to be expected.
If he is restricted
to the function (1.3) he probably can give a fairly accurate estimate
of y.
In this case the experimenter might really be model building as
much as estimating the parameters for the simple exponential function.
Here it is undoubtedly advisable to take the independent variable at
more than three levels for otherwise only equations with three or less
parameters can be estimated.
That is, the parameters of a cubic po1y-
nomia1 could not be estimated by observations at three levels.
If an estimate of y is available, then a fairly efficient choice
of levels is given by:
x. =
~
i
= 0, 1, 2, ••• , n- 1
and which has been called squared powered linear spacing.
(y.6.1)
Even if y is
not accurately known, a guessed value of y can be used to give the
level.
Appendix 9.5 indicates that the covariance matrix will not be
affected greatly by a fairly wild guess.
90
In choosing between levels and replications at the levels it
should be borne in mind the usefulness of replication as indicated in
section 3.7 and 4.6.
6.3
Choice of a New Region
Assume that the parameters in equation (1.1) have been estimated.
An experimenter might wish to perform another experiment in a subregion, say Zl < Z < Z2'
In this way the function may be more precise-
ly described in a critical area or estimates of the parameters and
their variances can be obtained under conditions which more nearly
approximate the theoretical assumptions.
In some cases the experimenter will choose the subregion on the
basis of his subject matter interest.
In other cases some aid in making
the choice of the subregion is needed.
One such method follows from
choosing the center point of the subregion so that the distance between
any member of the family of curves (1.1) and the straight line belonging to that family is maximized.
For certain applications it is meaningful to maximize the distance
between a member of the family of curves and a straight
an intercept of a and a slope of 8.
li~e
which has
This will first be found in a
general form and then applied to a model predicting economic
~rawth.
The function and line are shawn in Figure 6.1.
Now
°1 = °2 = 90 - 8.
Therefore
8 = 82
as ABD is a right tri-
angle.
Consider triangle ABD and note that AD
Figure 6.2.
t3x'Y - x tan 8, as in
91
y
y 1 ;:: 0: + x tan e
·~-+--_.J...
o
x
Figure 6.1
__
-+I_........
..l-_~_-r-
x+6x
.... X
1
Maximizing the distance between curve and line
D
A
Figure 6.2
Th~
triangle
92
d
e
Thus
cos
or
d = (3x"l cos e _ x sin e .
(3x"l - x tan e
The value of x which maximizes
(6.).1)
~
d or AB is the same as maximizing AD.
is desired.
That is
Nqte that maximizing
maximi~~ng
from the line is the same as maximizing the distance
dd
dX
(3 rx "l-l cos
=
e -
£r~
the distanGe
the X axis.
sin e
(6.).2)
Hence the value of x which maximizes d is
1
x = ( (3 "I
tan e
)'l-"Y
If in this argument
tance between Y
=a
(6.).;)
e is fixed so that tan e = ~, tpen the dis-
+ (3x"l and Y
=a
+ (3x is being maxi~ized.
The value
of x at which the distance between the curve and the line belonging to
the family of curves is maximized is:
= "I
x
1
1-"1
(6.3.4)
This has the desirable property that the center
o~
tpe
r~gion
of in-
terest, on x, is not a function of (3.
Let the central point of the new region be
1
x
g
=g
1_g
The choice of the subregion will be made by forming an a level confidence interval about the predicted Y at x
y
g
= a + b x
g
g
g
where
93
The variance of y is approximately equal to the
square of the total derivative.
expe~ted
value of
t~e
The expected values of the products of
the derivatives of the parameters are replaced by the covariances.
the
total derivative is given by:
x
dy
g
da + x g db + b g (1 + log x ) dg •
g
l-g
g
g
Then
2
0"
Y
2
~
+
O"a
x
2g 2
2 2g
b x (1 + log x )
g
g O"b +
g
(l-g)
g
2bx (l + log x )
+ -~g:::..--=---_.:::g,....
crag
l-g
2
+
2bx
2g
g
(1 + log x )
1-g
g
The 95 percent confidence interval of Y is formed by calculating
Y
3
=
Y + 2cr.y
g
The end points of the region are then given by using Y and Y of
l
3
(6.3.9) to find xl and x .
3
Their values are:
1
Yl-a
x1
1
g
= (-b-)
Y -a
,
x
= (_3_)
b
3
g
(6.3. 10 )
In the economic situation, with a cost function replacing the
line Y = a + X tan 8, the solution for x and the choice of the new region is similar to the preceding work.
Suppose the dependent
ment Y is yield and the price per unit yield is p.
unit x be tan 8.
Then the cost is c
=
x tan 8.
~easure-
Let the Gost per
The value of x which
94
maximizes the increase in price over cost is, from (6.3.3)
1
= (
X
P~I
)1- 1
tan e
(6311)
"
Then, the central point of interest would be:
1
xg
=
pbg )l- g
( tan
e
The region of interest for this x
g
can be found by using x
parameters in (6.3.8), (6.3.9) and (6.3.10).
g
and the
It is quite possible that
the new region exceeds the bounds of the old region.
6.4
Sample Size for the New Region
In general, the new region will be subregion of the original space
of interest.
cise.
This will make the estimates of the parameters
1es~
pre-
This has been shown, on the basis of the determinant, in
(4.3.1.1).
This determinant suggests how to make the parameters of the
subregion approximately as precise as the estimates of the
for the original region.
as n •
g
paramet~rs
Designate the sample size for the subregion
Then, take a sample of size
n
g
=
nD(0,x 2 ,1)
D(X 1,X g 'X )
3
where Xl and X are given in (6.3. 10), X in (6.3.5) and the deter3
3
minants in (4.2.5) and (4.2.9).
To estimate parameters with a certain efficiency, the asymptotic
variances from (4.5.1.5) may be used.
95
6.5 Form of the Function in the New Region
When an experiment is performed in a subregion Zl < Z
<
Z2' then
either the form of the original equation (1.1) can be kept or the
parameters can be kept by adjusting the equation.
When the form of the equation is to be unchanged, the independent
variable remains in the form of the ratio of the distance the variable
is in the subregion to the total distance of the subregion, namely
This is the form of the original variable of the equation in the region
a < Z < b, that is
Z-a
b-a
Thus the equation becomes
y
a:o + f3 0
(9.5.2)
When the parameters are not to be changed, the form of the independent variable must be adjusted.
The adjustment is that the variable
is not a measure of the relative distance traveled in the subregion.
It is rather the relative distance traveled in the total region [a,b],
namely
Z-a
b-a
and the equation is:
Regardless which form is used, one solution can be obtained from the
other.
The relationships between the new and old parameters are given
by:
Zl-a y
a0
=
a + ~(~)
~o
=
Z2- a I
~[(b- a)
Zl-a I
(b-a)
J
(Z _a)1 - (zl-a)1
logE
(z2- a )Y - (Zl-a)
Yo
If a,
ao '
~
=
yJ
(6.5.4)
Z -Zl
log (Z2- Zl)
and I have been calculated then the calculations to yield
~o and 10 follow from
(6.5.4).
If a
,~
and I
000
lated, then the corresponding values for a,
have been calcu-
and y can be calculated
~
by first solving the equation involving y and y.
o
This can easily be
done by iteration.
Probably the easiest way to obtain solutions is to find a
o
,~
0
and
Yo using the equations of (4.4.3) and then solving equations (6.5.4)
for a,
~
and I.
Otherwise, solutions for a,
~
and y can be obtained
iteratively using (3.2.13) or just by guessing at different values of
y until the means of the observations at Zl' Z and Z2 are exactly given
by the equation.
All solutions must be the same and they are all the
least squares solutions because the three parameter equation must pass
through the three means of the observations at the three points and
these means exactly determine the equation.
97
7.
SUMMARY
The selection of the levels of the independent variable in the
simple exponential
° < X < 1,
was considered.
-00
<
C't,
r:. <
00
and 1 >
°
The criteria used to choose a particular spacing of n
levels was the asymptotic variances and covariances and a measure of
the information, the determinant of the asymptotic information matrix.
Particular spacings considered in Chapter 3 were linear, exponential,
exponen1;:ial centered about e- l!-I' and a coverage spacing for ,.
best spacing of those considered has the i
i
th
The
level given by:
= 0,1,2,···,n-l
It is best in that the variances and covariances of the estimates of
the parameters are about the same size or significantly smaller than
the other spacings, the information is considerably greater and these
results hold even if 1 is not known but only rather inaccurately estimated.
If it is possible to take replications at the levels, then it is
advantageous to do so.
The relationship between the number of levels
and the number of replications per level has been calculated for two
types of spacing and is easily calculated for the other spacings.
advantage of replications is that the asymptotic variances and covariances are reduced.
where it is defined by
ill
r
n
Also the information per point is increased
The
when n observations are taken on an r parameter model which has an information matrix I.
With k replications at each of m levels it is not
only possible to estimate the parameters for this model as well as for
any model with up to m parameters, it is possible to obtain estimates
of both sampling and experimental errors.
The minimum number of levels needed to be able to estimate the
parameters is of course three.
If the functional form exactly fits the
real world problem then three levels can be taken with replications at
these levels.
In order to maximize the determinant of the asymptotic
information for the three levels Xl' X and X , the three levels should
2
3
be chosen at
and
The size of the sample at the three levels Xl' X and X for a
2
3
fixed total sample size of n shQuld be
respectively when they are chosen to maximize
Irl.
The number of rep-
lications at each of the three levels can be chosen to minimize the
variance of the estimators of 0,
the asymptotic covariance matrix.
given in section
~
and y or to minimize the trace of
The sample sizes for these cases are
4.3.
The value of the information per point for any of the n level spacings considered is greatest when n = 3 with Xl = 0, X = e- lly and
2
Xl
= 1.
Also, it has been shown numerically that four levels are not
as good as three levels by this same criteria.
In the neighborhood of
the three level solution, the value for the four level solution is less
99
than the value for the three level solution.
Thus there is an indica-
tion that three levels are optimal by this criteria.
The wayan experimenter will start gaining information depends on
what information is already available.
If the functional form is known
to be true and a good estimate of i is available, then a three level
experiment can be performed.
If the form of the function is to be
tested then an n level solution with the levels given by squared
powered linear spacing can be developed.
If the experiment can be done
sequentially, then a quick three level experiment to estimate i can be
performed, followed by an m level experiment with k replications per
level where n
= mk can be performed to check out the function and then
a comprehensive three level experiment can be performed to accurately
obtain estimates of a,
~
and i.
Hence there are a number of alternatives in fitting the simple
exponential.
The choice of levels and replications of these levels
depends on the knowledge of the experimental situation and the object
of the experiment.
100
8.
LIST OF REFERENCES
Box, G. E. P. and W. G. Hunter. 1965. The experimental study of
physical mechanisms. Technometrics 7:23-42.
Box, G. E. P. and H. L. Lucas. 1959. Design of experiments in nonlinear situations. Biometrika 46:77-90.
Chernoff, H. 1953. Locally optimal designs for estimating parameters.
Ann. Math. Stat. 24:586-602.
E1fing, G. 1952. Optimum allocation in linear regression theory.
Ann. Math. Stat. 23:255-262.
de 1a Garza, A. 1954. Spacing of information in polynomial regression. Ann. Math. Stat. 25:123-130.
lnkson, R. H. E. 1964. The precision of estimates of the soil content
of phosphate using the Mitscher1ich response equation. Biometrics
20:873-8 8 2.
Kiefer, J. 1959. Optimum experimental designs.
(London) B21:272-304.
J. Roy. Stat. Soc.
Lipton, S. 1961. On the extension of Stevens' tables for asymptotic
regression. Biometrics 17:321-353.
Nelder, J. A. 1961. The fitting of a generalization of the logistic
curve. Biometrics 17:89-110.
Stevens, W. L.
1951.
Asymptotic regression.
Biometrics 9:247-267.
Verbeck, A. R. 1965. Statistical method of characterizing spinning
quality. Textile Research Journal 35:1-14.
9.
A P PEN DIe E S
102
Appendix
N
y
9.1
Linear Spacing: Asymptotic Variances and Covariances for
Small Values of y
2
CT
CT
a
Ol3
t?CT
)'
a)'
2
CT
13
t?)'
cr
(3)'
13
2
'2
2
CT
)'
~II()') I
)'
3 1.00 1.000
.5) 1.000
.10 1.000
.05 1.000
2.00 .9999
-1.000
-1.000
-1.000
-1.000
- .9999
1.442
1.199
1.035
1.017
2.164
2.000
1.999
2.000
2.000
1.999
0
1.690
13.39
27.83
-1. 442
4 1.00
.5)
.10
.05
2.00
.9855
.9991
.9999
.9999
.85)5
-
.9473
.9866
.9994
.9998
.7951
1.645
1. 314
1.058
1.028
1.927
1.807
1.801
1.757
1.749
1. 719
- .27 28
.7595
7.449
15.59
- .8373
9.485
14.65
185.7
692.3
10.59
.2342
.1548
.01476
.00413
.2283
5 1.00
.5)
.10
.05
2.00
.9608 - .8907
.9972
.9999
.9999
.7298
- .9732
- .9988
-.9997
- .6442
1.735
1.38'9
1.073
1.036
1.708
1.622
1.642
1. 595
1.584
1.497
- .4366
•251l~
4.819
10.30
- .4894
8.31+4
10.78
110.4
402.4
9.041
.371+5
.2834
.03268
.00940
.3758
6 1.00
.5)
.10
.05
2.00
.9301
.9944
.9999
.9999
.6377
-
.8376
.9609
.9984
.9996
.5370
1.762
1.440
1.083
1.041
1.529
1.466
1.520
1.485
1.475
1.326
- .5351
- .06904
3.369
7.435
- .2831
7.643
8.851
76.68
271~. 4
7.955
.5495
.4459
.05949
.017 16
.57 29
7 1.00
.5)
.10
.05
2.00
.8967
.9909
.9999
.9999
.5660
-
.7892
.9496
.9980
.9995
.4585
1.754
1. 1-1,76
1.092
1.045
1.382
1. 335
1.426
1.408
1.398
1.190
- .5936
- .2889
2.463
5. 662
- .1545
7.116
7.696
58.02
204.5
7.123
.76:J)
.6425
.09252
.0 2755
.8 279
8 1.00
.5)
.10
.05
2.00
.8625
.9867
.9999
.9999
.5)87
-
.7452
.9391
.9976
.9994
.3991
1.726
1.5)1
1.098
1.048
1.260
1.225
1.351
1.351
1.342
1.079
- .6267
- .4485
1.846
4.4'70
- .07109
6.682
6.925
46.37
161.3
6.4'57
1.026
.8736
.135)
.040 69
1.1484
9 1.00
.8287
.9819
.9999
.9999
.4620
-
.7053
.9292
.9974
.9993
•3527
1.686
1.519
1.103
1.051
1.158
1.132 - .6434
1.289 - .5684
1.403
1.307
3.618
1.299
.9875 - .01520
6.308
6.372
38.49
132.3
5.909
1.337
1.139
.1861
.05666
1.542
.;0
.10
.05
2.00
12.48
26.40
448.2
1725.
13.52
2
.1201
.06005
.004182
.00112
.1201
continued
103
App~ndix
;
N
10
11
l~
,
1
2
O"a
0"0f3
~O"
1
Ci:y
2
O"~
~O"
1
~l
~2
2
2
0"
2 )'
~2II(1)
)'
)'
1.703
1.442
F'OO
.7959
.9767
.9999
·9999
.4232
-
.6691
.9197
.9971
.9992
.3156
1.640
1.532
1.108
1.053
1.07 1
1.052 - .6493
1.238 - .6613
1.069
1.272
2.982
1.265
.02308
.9100
5.977
5.953
32.83
111.6
5.448
1.00
.5J
.10
.05
2.00
.7 645
.9711
.9999
.9999
.3904
-
.6362
.9105
.9969
.9992
.2853
1.592
1.540
1.112
1.055
.9958
.9830 - .6480
1.194 - .7347
1.244
.8102
2.491
1.238
.8438
.0497 6
5.680
5.624
96.28
5·055
2.129
1. 781
.3148
.0973
2.580
1.00
.5J
.10
.05
.73 47
.9652
.9999
.9999
.3623
- .6061 1.542
- .9016 1. 546
- .9966 1, 115
- .9991 1.057
- .2602
.9304
.9223 - .6421
1.155 - .7936
1. 221
.6031
1. 216
2.101
.0 68 54
.786 5
5.412
5.357
25.32
84.44
4.7 15
2.620
2.157
.3925
.1220
3.239
.70 65
.9590
.9999
.9999
.3380
-
.5786
.8929
.9964
.9991
.2390
1.493 .8687 - .6330
1. 548 1.122 - .8414
1.118 1.202
.4342
1.784
1.058 1.197
.08182
.8730 .7365
5.167
5.135
22.7 2
75.08
4.418
3.179
2.57 2
.4792
.1498
4.002
.6800
.9526
.9998
.9999
.3168
-
.5535
.8844
.9963
.9990
.2209
1. 445 .8210 - .6218
1. 548 1.092 - .8804
1, 121 1.185
.2939
1,060 1.181
1. 523
.8222 .6925
.09120
4.943
4.946
20.60
67.5J
4.156
3.812
3.025
.5751
.1807
4.877
.6551 .9460 .9998 .9999 .2981 -
·5300
.87 61
.9961
.9990
.2054
1, 399 .7784 - .6094
1, 547 1,065 - .9 12 5
1.124 1.17 1
.1757
1.061 1.168
1. 303
.7770 .6534
.09777
4.736
4.7 83
18.85
61.26
3.924
4.523
3.520
.6801
.2146
5.870
.6316
.9393
.9998
.9999
.2184
.5J 85
.8679
.9959
.9990
.1918
1.354
1, 544
1.126
1.062
.73 65
4.546
4.639
17.38
56.04
3·717
5.317
4.055
.7942
.2515
6.990
1.00
.5J
.10
.05
~.OO
13
9.1 (continued)
~.OO
.~
.10
.05
2.00
l4 l.OO
.5J
.10
.05
2.00
15 1.00
.5J
.10
.05
2.00
16 1.00
.50
~ 10
.05
2.00
-
.7399 - .5963
1.040 - .9390
.07483
1.159
1.116
1.156
.1022
.6185
28.60
I
.2460
.07551
2.0 17
continued
104
Appendix 9.1 (continued)
N
I
2
(Jex
(J0f3
~(J
y
exy
2
(J~
~CT
y
~y
~2
2
y
-
2
(J
y
2
~2II(Y) I
y
.10
.05
2.00
.6096 - .488 7 1. 312
.7051 - .5828
.9324 - .8598 1.540 1.017
- .9609
.9998 - .9958 1.128 1.148 - .01230
.9999 - .9989 1.063 1.146
.9559
.2666 - .1799
.1052
.6999 .587 2
4.370
4.5 12
16.13
51. 62
3.530
6.199
4.632
.9176
.2917
8.244
18
1.0
.5
.1
.05
2.00
.5888
.9255
.9998
.9999
.2532
.4702 1. 27 1 .6735 - .5693
.8518 1. 535 .9969 - .9790
- .08826
.9957 1.1)0 1.138
.8163
.9989 1.064 1.137
.6668 .5588
.1070
.1693
4.207
4.398
15.05
47. 82
3.361
7.17 2
5.252
1.050
.3349
9.639
19
1.0
.1
.05
2.00
.5693 - .4531
.9184 - .8439
.9998 - .9955
.9999 - .9989
.2411 - .1599
1. 232
.6445 - .5558
1.529
.9776 - .9939
1.131 1.130
- .1550
1.0 65 1.129
.6939
.1080
.6367 .5331
4.055 8.242
4.295
5.9 17
14.12
1.192
.3813
44.53
3.208 11.18
1.0
.5
.1
.05
2.00
.5509 .9114 .9997 .9999 .2)01 -
1.195 .6180
1.522
.9595
1.133 1.122
1.066 1.122
.6092 .5096
3.913
4.200
13.20
41. 6 5
3.068
17
1.00
.50
.5
20
-
.4371
.8361
.9954
.9988
.1515
- .5424
-1.006
- .2141
.5858
.1084
9.414
6.626
1. 343
.4309
12.88
105
Appendix 9.2
N
r
Exponential Spacing:
2
.10
.05
2.00
5 1.00
.~
.10
.05
2.00
6 1.00
.~
.10
.05
2.00
7
1.00
.~
.10
.05
2.00
8
1.00
.~
.10
.05
2.00
9
1.00
.~
.10
.05
2.00
2
(3211(r)
/
/
(32 2
(T
2 /
- .9999
-1.000
-1.000
- .9999
-1.000
1. 7 18
1.297
1.051
1.0 2 5
3.194
1.999
2.000
2.000
1.999
2.000
- .7182
.7025
8.948
18.97
-2.694
11. 34
16.55
223.2
843.1
24.10
.1353
.09196
.008187
.002262
.07326
.907 1 - .8756
.9908 - .9692
.9999 - .9984
~9999 - .9996
.6576 - .6492
2.054
1.564
1.103
2.562
1.833
1.896
1.861
1.848
1.640
-1.1 69
-1. 984
4.231
9.282
-2.0 47
10.12
8.739
62.69
222.5
22.93
.2326
.2419
.04048
.01230
.1196
.6940
.9518
.9999
.9999
.4164
-
.6422
.8946
.9954
.9988
.4093
1.869
1. 782
1.156
1.076
1.697
1. 577
1. 753
1. 747
1. 726
1. 401
- .9 6 55
- . 61 59
2.261
5·330
-1.186
9.958
7.516
28.57
94.31
19.83
.3382
.3905
.1115
.03718
.1893
.4893
.8635
.9996
.9999
.2959
-
.4386
.7701
.9907
.9976
.2906
1.448
1. 867
1.210
1.102
1. 216
1.375
1.578
1.658
1.634
1.285
- .5466
- .7357
1. 225
3.312
- .7128
9.092
7.434
16.55
17.91
.4915
.5360
.2311
.08438
.2665
.3522
.7316
.9990
.9999
.2285
-
.3098
.6 153
.9842
.9959
.2244
1.091
1. 778
1.266
1.129
.9410
1. 254
1. 396
1. 585
1.562
1. 219
- .2117
- .6311
.5986
2.130
- .4419
8.165
7.374
11.16
30.73
16.78
.6865
.7070
.4035
.1609
.2671
.5876
.9979
.9998
.1860
-
.2327
.4660
.9758
.9939
.1826
.8452
1. 565
1.322
1.156
.7 66 1
1.184
1.241
1. 523
.0112
- .4096
.1823
1.372
- .2703
7.452
7.057
8.381
20.7 1
16.06
.9062
.9307
.6281
.2730
.4239
.2126 .4606 .9959 .9997 .1568 -
.1844
.3461
.9653
.9914
.1540
.6792
1. 313
1.378
1.183
.6460
.1584
- .17 2 5
- .1135
.8526
- .1523
6.946
6.5 61
6.812
14.99
15.57
1.138
1. 220
.9014
.4256
.9999
.9999
.10 1.0000
.05 .9999
2.00 1.000
(Tot?>
1.0~
1.~5
1,178
1.141
1.128
1.467
1.458
1.1~
~(T
r (3/
-
(T(3
(Ta
.~
.~
2
~(T
r ar
3 1.00
4 1.00
Asymptotic Variances and Covariances
~.22
I
.34~
.~28
continued
106
Appendix 9.2 (continued)
N
'Y
2
era
erQt3
~er
I
al
2
er~
~er
I
~I
~2
i
-
2
(J
I
~2
2"1 1 (1) I
I
.1759
.3613
.9926
.9995
.1356
... 1523
... 2593
... 9522
... 988 5
... 1331
.5641 1.113
1.083 1.052
1.433 1. 415
1.211 1. 417
.5584 1.130
.028~9
6. 58~5
6.02'(
- .3335
• 476~)
... 06633
5.881
11. 48
15.21
1. 376
1.576
1. 21 7
.6226
.5816
1.00 .1497
.;<> .2880
.10 .9875
.05 .9991
2.00 .1194
... 1295
... 1993
... 9362
... 98;<>
... 1172
.4809
.8951
1.487
1.238
.4917
1.093
1.003
1.365
1.382
1.114
.3315
.1829
... ;<>21
.1934
... 0008'(0
6. ::321
5.5)+2
5.316
9.20'(
111.• 93
1.616
1.989
1.570
.8665
.6605
... 1126
.. .1579
... 9170
... 9810
.. . 1047
.4187
.7473
1. 537
1.266
.4392
1.079
.38 54
.9'713
.2975
1. 315 ... 6330
1. 351 ... 02685
1.102
.0:D 62
6.122
5·133
1+.973
7.665
14.'{2
2.4;<>
1.954
1.158
.7394
13 1.00 .1153 ... 0997
.;<> .1953 ... 1289
.10 .9698 ... 8941
.05 .9979 ... 97 65
2.00 .09639 ... 09462
.3706
.6329
1.58)
1.294
.39 69
•14-2'{1
1.067
.3821
.9499
1.264 ... 7339
1.322 ... 2029
1.092
.09219
5.968
11-.799
4.771
6.587
14.54
2.099
2.945
2.3 6 5
1.499
.8183
14 1.00
.;<>
.10
.05
2.00
.1033 ... 08939
.1660 ... 1080
.9559 .. .8673
.9969 ... 9713
.08792 ... 08631
.3323
.5442
1.622
1.322
.3620
.4602
1.058
.4453
.935)
1.212 ... 8098
1.295 ... 34.69
1.084
.1264
5. 811-5
11-.530
11-.660
5.815
14.40
2.341
3.466
2.801
1.888
.8972
15 1.00
.;0
.10
.05
2.00
.09370
.1437
.9379
.9955
.08081
.3012 1.051
- .08102
.4871
... 09263
.493)+
.4746
.9243
- .8362 1.653 1.159 ... ~633
... 9654 1.35) 1.270 ... 1+667
.3327 1.077
.1551
- .07933
5. '7 14-5
1+. 313
4.60'7
5.2,53
14.28
2.583
4.006
3.263
2.324
.9761
16 1.00
.;0
.10
.05
2.00
.0856'7
.1263
.9154
.9937
.07477
- .07407
- .08089
- .8009
... 9587
- .07340
1.045
• ;<>9 1+
.9164
• 5307
1.103 - .8962
1.211-5 ... 5678
1.071
.1796
5.662
4.137
11-.587
4.838
14.18
2.82')
4.557
3.754
2.805
1.055
10
11
1.00
.;<>
.10
.05
2.00
12 1.00
.;<>
.10
.05
2.00
.1303
.2345
.9801
.9986
.1066
.2754
.4193
1.674
1.377
.3078
.2592
1.8~
continued
107
Appendix 9.2 (continued)
N
17
'1
2
<To:
(i,'Y
~<T
'1
~'Y
~2
2
'1
'1
2<T
~2
2 11 ('1) I
'1
1.040
.9103
1.046
1. 221
1.066
.5282
.5603
- .9098
- .6540
.2006
5.592
3.993
4.584
4.530
14.09
3.068
5.117
4.280
3.329
1.133
1.00
... 06324
.2351
- .06440
.3380
- .7184 1.680
... 9427
1.430
... 06385 .2678
1.036
.5443
. :;342
.90 55
.9888 ... 9055
... 7281
1.197
1.062
.2189
5.532
3.874
4.584
4.301
14.01
3.310
5.683
4.847
3.895
1.212
.2191
.3077
1.664
1.456
.2514
3.552
6.252
5.465
.07314
.1013
.10 .8561
.05 .9884
~.oo
20
'1
2
<T~
... 06823 .2537
... 07173
.3746
... 7616
1.684
1. 404
- .95 12
.2864
- .06829
.~
19
~<T
.07891
.1125
.10 .8882
.05 .99 14
2.00 .06957
1.00
.~
18
<T0f3
.06~4
.06815
.09212
.~
.10 .8 196
.05 .9847
2.00 .06107
... 05893
... 05843
... 6723
... 9332
... 05995
1.032
.5581
.6039
.90 17
.9304 - .8846
1.173 ... 7919
.2349
l.o58
5.481
3.774
4.57 6
4.131
13.94
1. 291
1.00
.2051 1.029
... 05517
.5702
.2822
.6203
... 05347
.8985
...
8492
... 62 38 1.634
.8725
1.149
- .8470
1.479
- .9227
.2491
- .o56~ .2370 1.055
5.436
3.689
4.555
4.006
13.88
3.794
6.824
6.147
5.145
1.370
1.00
.06380
.08440
.~
.10 .7791
.05 .9801
2.00 .05755
4.~1
108
Appendix 9.3
N
Y
Centered Exponential Spacing:
Covariances
2
0"0f3
O"cx
~O"
1
CXl
2
0"13
Asymptotic Variances and
~O"
1 13 1
-13
"I
2
2
0"2
1
13
1
2
1
2 1 (1)
I
3 1.00
.5)
.10
.05
2.00
1.000
1.000
1.000
1.000
1.000
-1.000
-1.000
-1.000
-1.000
-1.000
1.718
1.297
1.052
1.0 2 5
3.195
2.000
2.000
2.000
2.000
2.000
- .7183
.7026
8.9)+8
18.97
-2.695
11. 34
16.56
223.2
843.1
24.10
.1353
.09197
.008187
.002262
.07326
4 1.00
.5)
.10
.05
2.00
.9743
.99 8 1
1.000
1.000
.8135
-
.9457
.9 8 56
.9993
.9998
.7896
1. 754
1.367
1.068
1.034
2.238
1.885
1.894
1. 8 53
1.844
1. 763
- .6237
.4832
6.7 66
14.30
-1. 5)7
8.515
11.26
130.1
482.4
12.94
.2554
.1896
.02049
.005839
.1948
.9518
5 1.00
.5)
.9963
.10 1.000
.05 1.000
2.00
.6940
-
.8946
.9734
.9988
.9997
.6422
1.783
1.408
1.077
1.038
1.869
1. 754
1.779
1.7 27
1. 714
1. 578
- .6160
.2558
5.331
11.37
- .9 6 55
7.517
9.117
94.31
346.2
9.959
.3906
.3094
.03719
.01077
.3383
6 1.00
.9321 - .8515
.5)
.9947 - .9688
.10 1.000
- .9985
.05 1.000
- .9996
2.00
.6114 - .5361
1.798
1.434
1.082
1.040
1.662
1.630
1. 678
1.628
1. 615
1.432
- .6359
.06241
4.330
9.365
- .6443
6.952
7. 873
74.66
272.0
8.559
.5491
.4542
.05835
.01706
.5068
7
1.00
.9141
.5)
.9932
.10 1.000
.05 1.000
2.00
.5495
-
.8153
.9562
.99 82
.9995
.4565
1.803
1. 452
1.085
1.042
1. 518
1. 520
1. 593
1.552
1.539
1. 313
- .6630
- .09647
3.595
7.916
- .4311
6.561
7.040
62.04
224.8
7. 689
.7343
.6247
.08399
.02472
.7065
8
1.00
.8975
.5)
.99 17
.10 1.000
.05 1.000
2.00
.5006
-
.7 843
.949)
.9980
.9995
.3949
1.800
1.46 5
1.088
1.043
1.406
1. 423
1. ~21
1. 91
1.479
1.213
- .6899
- .2269
3.034
6.820
- .2813
6.263
6.436
53.20
191.8
7.059
.9481
.8213
.1141
.03374
.9429
9
1.00
.5)
.10
.05
2.00
.8818
.9903
1.000
1.000
.4606
-
.7555
.9447
.9979
.9995
.3461
1. 793
1.475
1.090
1.044
1. 314
1. 339
1. 461
1.442
1.430
1.128
- .7140
- .3350
2.592
5.963
- .1726
6.020
5.97 6
46.64
167.5
6.561
1.192
1.044
.1487
.04413
1. 221
continued
109
Appendix 9.3 (continued)
N
)'
2
erQf3
era
l?er
/'
a)'
er2
13
l?er
)' 13)'
2
13
- 2 er2)'
)'
2
2 1 1 (/,)
)'
13
.8669
.9889
.10 1.000
.05 1.000
2.00
• 42 7 1
-
,7339
.9422
.9977
.9994
.30 67
1.782
1.482
1.091
1.045
1. 235
1. 26 5
1.409
1. 401
1.390
1.055
- .7345
- .4255
2.235
5. 273
- .09 173
5.814
5.612
41. 57
148.7
6.148
.8527
.9876
.10 1.000
.05 1.000
2.00
.3985
-
.7188
.9363
.9976
.9994
.2744
1. 768
1.487
1.092
1.046
1.167
1.200
1.36 5
1. 367
1.357
.9909
- .7516
- .~22
1.941
4.707
- .03064
5.635 1. 775
5.316 1.570
.2314
37.53
.06900
133.8
5.794 1.916
1.00
-
.6938
.9328
.9975
.9994
.2475
1,753
1. 492
1.093
1.046
1.106
1.142
1. 327
1. 338
1.329
.9344
- .7 655
- .5678
1.694
4.233
.01612
5.477 2.116
5.071 1.873
34.23
.2795
121.6
.08348
5.484 2.342
13 1.00
.8260 - .6764
.9849 - .9296
.~
.10 1.000
- .9975
.05 1.000
- .9994
2.00
.3519 - .2248
1. 737
1.495
1.094
1.046
1.052
1.091
1.293
1. 314
1.305
.8840
- .77 66
- .6245
1.485
3.832
.0522 7
5.334 2.493
4.864 2.203
31.48
.3321
111.5
.09933
5.208 2.8 2 5
14 1.00
1.7 20
1.497
1.095
1.047
1.002
1.044
1.264
1.292
1.285
.8389
- .7852
- .6739
1. 304
3.487
.08041
5.205 2.005
4.687 2.560
29.16
.3891
.1166
103.0
4.961 3.369
- .79 17
- .7 172
1.147
3.188
.1024
5.085 3.355
4.533 2.945
27.17
.4~7
.1351
95. 67
4.738 3.978
.9643 - .7965
1. 213 - ·7555
1.009
1.257
1.251
2.925
.7 613
.1197
4.974 3.844
4.399 3.356
25.45
.5168
89.35
.1551
4.534 4.655
10
1.00
.~
11
1.00
.~
12
.8391
.9862
.~
.10 1.000
.05 1.000
2.00
.3737
.8133 - .6605
.9836 - .9 267
.~
.10 1.000
- .9974
.05 1.000
- .9994
2.00
.3327 - .2055
15 1.00
.8011
.9823
.10 1.000
.05 1.000
2.00
.3155
.~
-
.6457
.9240
.9974
.9993
.1889
.7893 - .6319
.9810 - .9216
.10 1.000
- .9973
.05 1.000
- .9993
2.00
.3001 - .1745
1.002
1.702
1. 499
1. 237
1.095 1. 273
1.266
1.047
.7982
.957 8
16 1.00
1.684
.50
1.~0
1.096
1.047
.9170
I
1.467
1.294
. 1878
.05588
1.544
continued
110
Ap~endix
9.3 (continued)
,1;
N
7
2
(ja
cr0f3
~ cr
I'
al'
2
crt'
~cr
I'
t'1'
t'2 2
- 2 crI'
I'
2
t'2II(I')
I'
17 l.OO
... 6180
... 9192
... 9973
... 9993
... 1699
1.666
.9296 ... 7997
1
1.192
... 7895
1. 50
1.096 1,242
. 887{
1.048 1,236
2.694
.8796 .7276
.1333
4.871 4.372
4.280 3.795
23.94
.5873
83.82
.1764
4.348 5.403
18 1.00
.50
- .6069
... 9170
...9972
... 9993
... 1509
1,647
1, 50 2
.8977 ... 8016
1.173 ... 8199
1.~29
1~097
.7790
1.048 1.224
2.487
.8452 .6969
.1439
4.774 4.941
4.175 4.261
22.61
.6623
.1991
78.95
4.177 6.228
.7561
.977 1
1.000
1.000
.2618
... 59 44
... 9149
... 997 2
... 9993
... 1411
1.629
.8683 ... 8025
2
1.155 ". • 8472
1.50
.6816
1.097 1.217
1.048 1.212
2.303
.1522
.8134 .6687
4.683 5·552
4.080 4.755
21,43
.7419
74.62
.2231
4.019 7.132
1.00 .7457
·50 .9758
.10 1.000
.05 1.000
2.00 .2512
... 58~5
... 91 29
... 997 2
... 9993
... 1324
.8412 ... 8023
1.611
1.503 1.139 ... 8719
1.098 1.206
.5939
1,048 1.202
2.136
.1587
.7839 .6426
4.597 6.206
3.994 5.277
.8259
20.37
.2485
70.75
3.873 8.118
.7779
.9797
.~
.10 1.000
.05 1.000
2.00 .2861
.7669
.9784
.10 1.000
.05 1.000
2.00 .2734
19 1.00
.5}
.10
.05
2.00
20
I
111
~ppendix
N
1
9.4 Coverage Spacing:
2
O"a
1
t'l
2
t'2 2
2 0"1
~II(l)1
-
1
1
2.000
2.000
1.999
2.000
2.000
-1.442
0
6.140
13.39
-5.049
13.52
12.48
121.8
448.2
62.70
.1201
.1.201
.01456
.004182
.03002
1.00
-
.8991
.9762
.9987
.9996
.6813
2.252
1.593
1.104
1.051
3.565
1.874
1.933
1.909
1.899
1. 676
-1.~2
- .3360
4.184
9.273
-3.143
11.08
8.208
56.78
202.4
38.74
.2160
.2537
.04398
.01339
.07 26 3
.7670 - .7337
.9617 - .9190
.9999 - .9958
.9999 - .9989
.5104 - . ~52
1.956
1.683
1.142
1.070
2.573
1.694
1.823
1. 788
1. 765
1.499
-1.163
- .4656
2.841
6.415
-2.144
9.268
7. 14 5
34.74
117.3
27.51
.3485
.3902
.09 247
.03010
.1363
.6287
.9067
.9997
.9999
.4097
-
.5917
.8349
.9916
.9978
.4043
1. 617
1.689
1.181
1.090
2.407
1.546
1.692
1. 671
1.640
1.398
- .8107
- .4545
1.908
4.459
-1.614
7.67 2
6.662
24.04
76.04
21.68
.5352
.5419
.1641
.057 22
.2164
.5267
.8338
.9994
.9999
.3425
-
.4892
.7395
.9864
.9965
.3369
1.356
1.622
1.221
1.109
1.705
1.442
1.562
1. 569
1. 536
1. 331
- .5490
- .3610
1.251
3.123
... 1.269
6.497
6.266
18.02
53.03
17.96
.7718
.7246
.2618
.09760
.3138
.4528 - .4 153
.7554 - .6472
.9990 - .9805
.9999 - .99~
.2941 - .2884
1.168
1. 514
1.260
1.129
1. 461
1.366
1.446
1.484
1.452
1.282
- .3615
- .2358
.7778
2.193
-1.023
5.654
5.863
14.32
39.13
15.36
1.053
.9497
.387 1
.1537
.4289
.3974
.6808
.9983
.9998
.257 6
1.028
1.392
1.298
1.148
1.278
1.309
1. 348
1.412
1.386
1.245
- .2222
- .1111
. 42 75
1. 529
- .8397
5.029
5.464
11.92
30.19
13.42
1.377
1.223
.5402
,2277
.5618
5 1.00
.~
.10
.05
2.00
6 1.00
.~
.10
.05
2.00
1.00
.~
.10
.05
2.00
1.00
.~
.10
.05
2.00
9
al
~O"
2.164
1.442
1.072
1.035
5.410
.10
.05
2.00
8
1
2
O"t'
-1.000
-1.000
-1.000
-1.000
-1.000
.~
7
~O"
1.000
1.000
.~
.10 1.000
.05 1.000
2.00 1.000
3 1.00
4
0"0f3
Asymptotic Variances and Covariances
1.00
.50
.10
.05
2.00
.9187
.9914
.9999
.9999
.6862
.,.
-
.3599
.56 58
.9741
.9935
.2518
continued
112
~ppendix
N
Y
9.4 (continued)
2
O"a
O"af3
~O"
y
ay
2
0"f3
~O"
y f3y
f32 2
- 2 0"Y
Y
2
f3 2 !I(Y)
10
1.00
.50
.10
.05
2.00
.3543
.6147
.9973
.9998
.2292
-
.3167
.4974
.9672
.9919
.2233
.9196 1.264
1.27 6 1. 26cr
1. 335 1. 351
1.166 1. 332
1.136 1. 21 7
- .1143
- .001327
. 161 5
1.042
- .6968
4.546 1. 743
5.095 1. 548
10.28
.7209
24.16
.3213
11.93
.7125
11
1.00
.50
.10
.05
2.00
.3198
.5581
.9958
.9997
.2063
-
.2820
.4407
.9598
.9902
.2005
.8327 1. 227
1.17 2 1.199
1.369
1.299
1.184 1.289
1.022 1.194
- .02804
.0904
- .04545
.6758
- .5827
4.161 2.153
4.7 67 1.925
9.122
.9283
19.92
.4358
.8809
10.74
12 1.00
.50
.05
2.00
.2913
·5102
.9940
.9996
.1877
-
.2535
.3938
.9520
.9885
.1819
.7 61 2
1.082
1.402
1.202
.9 298
1.197
1.143
1.253
1.253
1.175
.04264
. 16 55
- .2098
.3926
- .4894
3.845
4.484
8.278
16.85
9.77 2
2.353
1.161
.57 22
1.067
1,.00
.50
.10
.05
2.00
.2675
.49 67
.9915
.9994
.17 21
-
.229 6
.3546
.9438
.9868
.1662
.7014
1.005
1.432
1.219
.8524
1.171
1.094
1. 212
1.223
1.159
.1015
.2268
- .3425
.1692
- ..• 4118
3.581
4.240
7.649
14.55
8.9 66
3.105
2.832
1. 419
.7312
1. 27 1
14 1.00
.50
.10
.05
2.00
.2473
.4352
.9884
.9992
.1589
-
.2094
.3215
.9351
.9849
.1530
.6506
.9404
1.460
1.235
.7869
1.149
1.052
1.175
1.197
1.146
.1514
.277 2
- .4511
- .01072
- .3462
3.357
4.030
7.170
12.80
8.286
3.647
3.362
1.701
.9132
1.492
15 1.00
.50
.10
.05
2.00
.2300
.4056
.9845
.9989
.1475
-
.1920
.2932
.9258
.9831
.1416
.6068
.8839
1.484
1.251
.7309
1.130
1.014
1.142
1.175
1.134
.1941
.3189
- .5407
- .1581
- ,2900
3.164
3.847
"6.797
11.43
7.704
4.235
3.943
2.007
1.118
1. 732
16 1.00
.50
.10
.05
2.00
.2149 - .1769
.3799 - .2687
.9797 - .9 159
.9986 - .9811
.1377 - .1318
.2310
.5688 1.114
.8349
.9 81 7
.3537
- .6151
1. 50 6 1.110
1.267
1.155 - .2808
.6823 1.125 - .2413
2.997
3.688
6.503
10.34
7.201
4.868
4.577
2.336
1. 347
1.989
~1O
13
I
1
2.607
continued
113
Appendix 9.4 (continued)
~O"
2
0"13
~O"
1 131
2
-13 2 0"12
1
2
13 11 (1)
2
1
.1637
.2473
.9055
.9791
.1232
.5355
·7920
1.525
1.282
.6397
1.099
.9518
1.081
1.138
1.116
.2632
.3830
- .67 69
- .3843
- .1987
2.85)
3.546
6.265
9.470
6.761
5.547
5.264
2.690
1.599
2.264
-
.1520
.22 8 5
.8944
.9770
.1156
• ryJ60 1.08 5
·7539 .9247
1.540 1.053
1.297 1.122
.6022 1.108
.2915
.4078
- .7281
- .47 27
- .1611
2.721
3.420
6.069
8.754
6.373
6.271
6.006
3.068
1.874
2.556
.1795
.3197
.9599
.9969
.1148
-
.1416
.2119
.8827
.9748
.1088
.4797 1.073
.9000
.7 199
1.552 1.027
1.311 1.108
.5689 1.101
.31 65
.4289
- .7702
- .5491
- .1277
2.605
3.306
5.904
8.163
6.028
7.040
6.805
3.471
2.172
2. 867
.1702
.3038
.9512
.9962
.1088
-
.1323
.1970
.8704
.97 25
.1028
.4562 1.063
.8772
.6893
1.001
1.560
1.324 1.095
.5391 1.095
.3388
.4469
- .8045
- .6156
- .0979
2. ryJ2
3·203
5.762
7. 670
5·720
7.856
7.662
3.902
2.493
3.195
2
N
1
17
1.00
.ryJ
.10
.05
2.00
.2016
.3574
.9741
.9981
.1291
-
18 1.00
.ryJ
.10
.05
2.00
.1899
.3375
.9675
.9976
.1216
19
1.00
.ryJ
.10
.05
2.00
20
1.00
.ryJ
.10
.05
2.00
0"0f3
O"a
1
a1
I
114
Appendix 9.5
N
/
Linear Spacing:
Large Values of
2
CT
a
CT0f3
Asymptotic Variances and Covariances £9r
~CT
/
CX:y
2
CTt'
f?CT
/
t'/
2
2 CT/
t'2
-
/
2
t'2
\1(/)1
/
3 1.:0 1.000
2.00 :L.OOO
2.50 1.000
3.00 1.000
4.00 1.000
-1.000
... 1.000
-1.000
-1.000
-1. 000
1.759
2.164
2.687
3.366
5.410
2.000
2.000
2.000
2.000
2.000
- .7968
-1.443
-2.110
-2.885
-5.05)
11.42
13.53
18.21
26.36
62.70
.1351
.1201
.09384
.06756
.03003
4 1.:0
2.00
2. :0
3.00
4.00
.9364
.8:05
.7573
.6806
.5865
- .8800
- .7951
... 7152
- .6527
- .5763
1.878
1.927
1.854
1.766
1.732
1. 774
1. 7 19
1.666
1.622
1.566
- .97 82
- .8373
- .8891
.... 9276
-1.096
9·771
10.60
11.30
12.08
14.95
.2371
.2283
.2218
.2135
.1801
5 1.:0
.00
.8587
.7 299
.6229
.5470
.4575
- .7 6:;0 1.835
.... 6442 1.709
- .5566 1.531
- .4984 1.388
- .4320 1. 234
1.558
1.498
1.458
1.432
1.401
- .5605 8.791
.... 4894 9.042
.... 4080 9.1Ol
.... 3692 9·235
.... 3971 10.06
.3736
.3759
.3817
.3820
.3588
6
1.50
2.00
2.50
3.00
4.00
.7847
.6378
.5320
.4617
.3801
- .6705
- .5371
.... 4525
- .4003
.... 3432
1.740
1. 530
1.328
1.182
1.012
1.380
1.326
1.303
1.295
1.289
- .467:3
.... 2831
- .1501
.... 08033
.... 05539
7.987
7.955
7.836
7. 8 19
8.147
.5554
·5730
.5887
.5931
.5717
7
1. :0
2.00
2. :0
3.00
4.00
.7189
.5660
.4654
.4007
.3266
- .5944 1.633
.... 4586 1. 383
- .3792 1.180
- .3320 1.040
.... 2819
.8742
1.235
1.190
1.181
1.186
1.200
.... 3961
- .1546
.002 56
.08954
.1468
7.304
7.124
6.948
6.886
7.046
.7901
.8279
.8528
.8582
.8300
8
1.50
2.00
2.50
3.00
4.00
.6615
.5088
.4140
.3545
.2868
- .5328
- .3991
- .32:0
.... 2819
- .2373
1. 529
1.261
1.065
.9329
1.115
1.079
1.081
1.096
1.126
.... 3413
- .07110
.09744
.1947
.2746
6.718
6.458
6.266
6.192
6.285
1, 08 5
1.148
1.182
1.186
1.142
1.:0
2.00
.6U7
.4620
.3731
.3181
.2560
.... 4822 1.432
1.016
- .3527 1.158
.9875
8
1
- .2 35 .97 3 .9976
.8476 1.019
- .2439
1.062
.7006
- .2036
- .2984
- .0 1521
.1580
.2614
.3574
6.212
5.909
5.7 17
5.643
5.709
1.446
1.542
1.585
t.584
1.516
~.oo
2. :0
,.00
9
2.~
3.00
4.00
.7760
continued
115
Appendix 9.5 (continued)
N
r
0"af3
~O"
r ar
2
0"(3
~O"
r (3r
(32
(32 2
2" Q"r
r
2" 11(r)
r
• :;684
.4232
.3396
.2885
.2312
... 4402 1.343
.9328 - .2643
... 3156 1.07 1
.9100
.02309
... 2~8 .8933
.9261
.1972
... 2142
.3038
.7774 .9529
.4116
... 1773 .6400 1.006
5· 773
5.449
5.262
5.192
5.246
1.881
2.017
2.069
2.060
1.958
1.~
.5305
.3904
.3118
.2641
.2109
... 4047 1.264
... 2854 .9959
... 2245 .8272
... 1905 .7184
... 1564 .5898
.8618 ... 2366
.8438
.04976
.8642
.2225
.3306
.8949
.4466
.9552
5.390
5.055
4.876
4.812
4.863
2.397
2.580
2.641
2.621
2.475
1.~
.4971
.3623
.2882
.2436
.1940
... 3744
.. • 2602
... 2030
... 1711
... 1394
1.192
.9305
.7704
.6680
.5475
.8007 ... 2138
.7 866
.06855
.8101
.238 5
.3468
.8437
.4686
.9095
5.053
4.7 15
4.545
4.486
4.537
3.000
3.240
3.310
3.275
3.072
.4675
.3381
.2679
.2260
.1795
... 3483 1.127
... 2391
.8730
... 18~ .7209
... 1551 .6244
... 1254 .5109
.7476 ... 1948
.08183
.7366
.2482
.7624
.7980
.3560
.8680
.4815
4.754
4.418
4.257
4.203
4.255
3.697
4.003
4.083
4.028
3.756
1.~
.4412
.3168
.25)4
.2108
.1671
... 3255 1.069
... 2210
.8223
... 1698 .6775
... 1416 .5862
... 1137 .4792
.7010 ... 1787 4.488
.09121 4.157
.6926
.7 201
.2536 4.004
.3601
.7570
3.955
.4881 4.008
.8300
4.495
4.877
4.967
4.888
4.533
15 1.50
2.00
2.50
3.00
4.00
• 4176
.2981
.1975
.1563
... 3055 1.015
... 2054 .7770
... 1568
.6390
... 1301 .5524
... 1038 .4513
,6599 ... 1649 4.2~
.6535
.09777 3.925
.6822
.2560
3.779
.7201
.3609
3.735
.4901 3.789
.7952
5.401
5.871
5.970
5.862
5.409
.3964
.2815
.2214
.1859
.1469
... 2878
... 1918
... 1456
... 1203
... 09532
.6232 ... 1530
.6186
.1023
.6481
.2562
.6865
.3593
.4890
.7631
6.422
6.990
7.100
6.957
6.390
10
1.~
2.QO
2.5)
3.00
4.00
11
2.00
2.5)
3.00
4.00
~2
2.00
2.~
3.00
4.00
13
1.~
2.00
2.~
3.00
4.00
14
2.00
2.5)
3.00
4.00
16
1.~
2.00
2.~
;e
2
O"a
3.00
4.00
.23~
.9671
.73 65
.6047
.5224
.4265
4.035
3.717
3.579
3.538
3.594
I
continued
116
Appendix 9.5 (continued)
N
I'
2
a
CY
CYQf3
~CY
I'
al'
2
CY
t3
~CY
I' t31'
t3
2
2
2'CY
I'
I'
2
t3
2
I'
11(1')1
1.50
2.00
2.50
3.00
4.00
.377 2
.2666
.2093
.1755
.1385
-
.2720
.1799
.1358
.1117
.08800
.9230
.7000
.5739
.4955
.4044
.5904 - .1427
.1052
.587 2
.6 172
.2549
.6560
.3560
.48 55
.7334
3.841
3.530
3.398
3.361
3.419
7.564
8.244
8.364
8.180
7.483
18 1·50
2.00
2.50
3.00
4.00
.3598
.2533
.1984
.1662
.1310
-
.2579
.1694
.1272
.1043
.08164
.8826
.6669
.5461
.47 13
.3845
.5609
.5589
.5891
.6280
.7059
- .1336
.107 1
.2525
.3516
.4805
3.664
3.361
3.236
3.202
3.260
8.835
9.640
9.769
9.539
8.693
t9
1.50
2.00
2.50
3.00
4.00
.3439
.2412
.1886
.1579
.1243
-
.2451
.1600
.1196
.097 67
.07606
.8455
.6368
.5209
.4493
.3665
.5342 - .1255
.1081
.5332
.2494
.5635
.6023
.3463
.6804
.4742
3.503
3.208
3.088
3.057
3.116
10.24
11.18
11.32
11.04
10.03
20
1.50
2.00
2.50
3.00
4,00
.3293
.2302
.1798
.1504
.1183
-
.2336
.1515
.1129
.09182
.07113
.8113
.6093
.4979
.4293
.3501
.5098 - .1184
.1084
.5097
.2458
.5400
.3406
.5786
.6566
.4672
3.355
3.068
2.953
2.924
2.984
11.79
12.89
13.04
12.69
11.49
17