Johnston, B. O. and A. H. E. Grandage (1962). A sampling study of estimators of the non-linear parameter in the exponential model. M.S. Thesis.

A SAMPLING STUDY OF ESTIMATORS OF THE NONLINEAR
PARAMETER IN THE EXPONENTIAL MODEL

by

BRUCE OWEN JOHNSTON
and
A. H. E. GRANDAGE

Institute of Statistics
Mimeograph Series No. 329
June 1962
TABLE OF CONTENTS

LIST OF TABLES
INTRODUCTION
METHODS OF ESTIMATING THE PARAMETERS
    Least Squares Estimator
    Quadratic Estimators
BIAS AND VARIANCE OF THE ESTIMATORS
THE SAMPLING STUDY
RESULTS AND DISCUSSION
RECOMMENDATIONS
LIST OF REFERENCES
APPENDIX A - A PROOF THAT THE ESTIMATOR PROPOSED BY MONROE IS r(1,0)
APPENDIX B - FORMULAE FOR PATTERSON'S APPROXIMATIONS
LIST OF TABLES

1. Sampling results, bias
2. Sampling results, variance
3. Coefficient of the approximate variance of r as derived by Patterson
4. Coefficient of the approximate bias of r as derived by Patterson
5. Approximate bias of r(1,0) as given by Portman (1961)
6. Approximate variance of r(1,0) as given by Portman (1961)
7. Coefficient of the asymptotic variance of the least squares estimator
8. Computational notes
INTRODUCTION

In recent years, nonlinear equations have been receiving close study. The estimation of the parameters α, β and ρ in the useful equation

    y = α - βρ^x

has received much attention.

Stevens (1951) presented an iterative least squares solution for the estimation of the parameters. Patterson (1956, 1958, 1959) developed a class of linear and quadratic estimators, r, for ρ. He then estimated α and β from the linear regression of y on r^x. This class of estimators includes the ones derived by Hartley (1948) and Monroe (1949). Approximate biases and variances for the class of quadratic estimators of ρ were presented by Patterson (1958, 1959). Portman (1961) derived different approximate bias and variance formulae for the Monroe estimator.

The purpose of this study is to determine empirically the small sample properties of these estimators of ρ and to contrast them with the approximate or asymptotic properties which have been derived.
METHODS OF ESTIMATING THE PARAMETERS

Least Squares Estimator

An observation, y, which comes from the single exponential nonlinear equation, can be represented by

    y = α - βρ^x + ε .                                                (1)

The random errors, the ε's, are assumed to be uncorrelated and to have a variance of σ². The estimators of the parameters α, β, and ρ are a, b, and r respectively. The equation then becomes

    y = a - br^x + e

where e is the residual.

In least squares we minimize N, the sum of the squares of these residuals. That is,

    N = N(a, b, r) = Σ e² = Σ (y - a - br^x)² ,

the sums running over the n observations.
The least squares equations are

    N_a = ∂N/∂a = - an - b Σ r^x + Σ y = 0

    N_b = ∂N/∂b = - a Σ r^x - b Σ r^{2x} + Σ r^x y = 0

    N_r = ∂N/∂r = - a Σ x r^{x-1} - b Σ x r^{2x-1} + Σ x r^{x-1} y = 0 ,

constant factors having been dropped. It is impossible to obtain explicit solutions for a, b, and r. Several schemes have been devised to iterate or otherwise obtain solutions.
Stevens (1951) used a method which he attributes to Fisher.
By taking the total derivatives of the partials we obtain

    dN_a = 0 = N_a + (∂N_a/∂a) Δa + (∂N_a/∂b) Δb + (∂N_a/∂r) Δr ,

and similarly for N_b and N_r. Let

    M = | ∂²N/∂a²    ∂²N/∂a∂b   ∂²N/∂a∂r |
        | ∂²N/∂a∂b   ∂²N/∂b²    ∂²N/∂b∂r |
        | ∂²N/∂a∂r   ∂²N/∂b∂r   ∂²N/∂r²  | .

Then

    M (Δa, Δb, Δr)' = - (N_a, N_b, N_r)' .

For given values of a, b, and r, the three equations become functions of Δa, Δb and Δr for which we can solve. That is, if M⁻¹ exists, then

    (Δa, Δb, Δr)' = - M⁻¹ (N_a, N_b, N_r)' .
Stevens (1951) has shown that M can be altered so that it is only a function of r. Then, if we start with a value of r, say r₁, we can find a Δr₁ so that r₂ = r₁ + Δr₁. With r₂ we find an a₂ and b₂ so that N(a₂, b₂, r₂) will be smaller than N(a₁, b₁, r₁). This iteration process is then repeated until Δr is as small as desired.

The negative of the expected value of M is the information matrix, I. The inverse of I is the asymptotic variance covariance matrix, as is well known. Apart from a constant term, b, in some of the elements, M can be written as
    M = | n              Σ r^i          Σ i r^{i-1}    |
        | Σ r^i          Σ r^{2i}       Σ i r^{2i-1}   |
        | Σ i r^{i-1}    Σ i r^{2i-1}   Σ i² r^{2i-2}  | ,

where each sum runs over i = 0, 1, ..., n-1, and which is symmetric. It is easy to see that the matrix is singular when r = 0 and r = 1. It follows that the matrix is ill-conditioned when r is near either of the two values. Stevens' method, then, will not necessarily iterate to a stable value at the extremes of ρ.
The least squares method of obtaining r requires a good deal of calculation. If there are fewer than thirteen equally spaced observations, then tables are available to aid in the calculations. This method also has the unfortunate property of using an ill-conditioned matrix for large or small values of ρ. To avoid this difficulty one can fit a quadratic polynomial as recommended by Stevens (1951).
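In modern terms the iteration is easy to sketch. The following is a minimal Gauss-Newton version of the scheme (our code, not Stevens'): for a trial r the model is linear in a and b, and the increment in r comes from the normal equations built from the matrix of first partials, which plays the role of the expectation of M. The 0.0005 stopping rule anticipates the criterion used later in the sampling study.

    import numpy as np

    def stevens_fit(y, r0, tol=5e-4, max_iter=10):
        """Fit y = a - b*r**x, x = 0, 1, ..., n-1, by Gauss-Newton.

        A sketch in the spirit of Stevens (1951); M is replaced by its
        expectation J'J, where J holds the first partials of the fitted
        value with respect to (a, b, r).
        """
        x = np.arange(len(y), dtype=float)
        r = r0
        for _ in range(max_iter):
            # For fixed r the model is linear in a and b.
            X = np.column_stack([np.ones_like(x), -r**x])
            a, b = np.linalg.lstsq(X, y, rcond=None)[0]
            e = y - (a - b * r**x)                      # residuals
            J = np.column_stack([X, -b * x * r**(x - 1)])
            # J'J, like M, is ill-conditioned for r near 0 or 1.
            step = np.linalg.solve(J.T @ J, J.T @ e)
            r += step[2]
            if abs(step[2]) <= tol:
                return a, b, r
        return None                                     # did not converge

Starting the iteration from a quadratic estimate, as was done in the sampling study described below, keeps the first step inside the well-conditioned region unless ρ is near an extreme.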
Quadratic Estimators

Patterson (1956, 1958, 1959) presented some estimators of ρ which are simpler to calculate than the least squares estimator and are reasonably efficient and unbiassed. Estimates of α and β are then obtained by linear regression of y on r^x. The method requires that the intervals between the x's be equal. This is not required in the least squares procedure. However, Patterson (1956) showed that the least squares procedure presented by Stevens (1951) is a complicated quadratic method.

The estimators presented by Patterson are ratios of either linear or quadratic functions of the observations. The quadratic estimators are of most interest. Writing y₁ = (y₀, y₁, ..., y_{n-2})' and y₂ = (y₁, y₂, ..., y_{n-1})' for the lagged vectors of observations, they are of the form

    r(ρ₀, k/ℓ) = [k y₁'Dy₂ + ℓ y₂'Dy₂] / [k y₁'Dy₁ + ℓ y₁'Dy₂]         (2)

where ρ₀, k and ℓ are arbitrary constants and D is an n-1 by n-1 matrix whose elements we can choose.

The elements of D are chosen to minimize an approximate variance which is given by Patterson (1958). This variance is derived using the method of Finney (1958). The D matrix turns out to be a function of ρ, the parameter we wish to estimate, and so we must first specify that ρ = ρ₀ in order to have a determined matrix with which to calculate the estimator r(ρ₀, k/ℓ). Thus the procedure of estimating ρ becomes most efficient when ρ = ρ₀.
Patterson (1959) asserts that Hartley (1948) derived an estimator of ρ which is the same as r(1, 1). Patterson believes that Monroe (1949) developed the estimator equivalent to r(1, 0). This assertion is shown to be correct in Appendix A. Patterson also suggests that r(1, 1.5) might be the best of the three estimators. In these three cases ρ₀ = 1.
When ρ₀ = 1, Patterson (1958) has shown that we can form the matrix D by forming its elements c_ij, where

    c_ij = i(n-j)/n - 3ij(n-i)(n-j)/[n(n²-1)]        i ≤ j

    c_ij = j(n-i)/n - 3ij(n-i)(n-j)/[n(n²-1)]        i > j .          (3)

The three estimators are then

    r(1, 1.5) = [3 y₁'Dy₂ + 2 y₂'Dy₂] / [3 y₁'Dy₁ + 2 y₁'Dy₂]

    r(1, 1)   = [y₁'Dy₂ + y₂'Dy₂] / [y₁'Dy₁ + y₁'Dy₂]

    r(1, 0)   = y₂'Dy₂ / y₁'Dy₂ .                                     (4)
It should be noted that the last estimator, r(1, 0), does not contain the observation y₀ in the numerator. The first observations are undoubtedly the most important elements in determining the slope of the curve, particularly when ρ is small. For this reason, we might expect the r(1, 0) estimator to be less stable than the other two quadratic estimators.
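For concreteness, the three estimators in (4) can be sketched numerically as follows (our code; the quadratic forms follow the reconstruction given in (3) and (4) above):

    import numpy as np

    def patterson_D(n):
        """The (n-1) x (n-1) matrix D of (3) for rho_0 = 1."""
        D = np.empty((n - 1, n - 1))
        for i in range(1, n):
            for j in range(1, n):
                lo, hi = min(i, j), max(i, j)
                D[i - 1, j - 1] = (lo * (n - hi) / n
                                   - 3*i*j*(n - i)*(n - j) / (n*(n*n - 1)))
        return D

    def r_quadratic(y, k, l):
        """r(1, k/l) as a ratio of quadratic forms in the lagged vectors."""
        D = patterson_D(len(y))
        y1, y2 = y[:-1], y[1:]          # (y0,...,y_{n-2}) and (y1,...,y_{n-1})
        return ((k * (y1 @ D @ y2) + l * (y2 @ D @ y2))
                / (k * (y1 @ D @ y1) + l * (y1 @ D @ y2)))

    # r(1, 1.5), r(1, 1), r(1, 0) are r_quadratic(y, 3, 2), (1, 1), (0, 1).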
BIAS AND VARIANCE OF THE ESTIMATORS

The least squares procedure gives an asymptotic variance of r which is obtained from I, the information matrix. For known values of ρ, the asymptotic variances of r have been calculated and are given in Table 7.

Patterson (1958) presented formulae for the approximate bias and variance of r. These formulae were developed using the method of Finney (1958).
Let A and B be functions of y and consistent estimators of two functions of ρ whose ratio is ρ. Then

    r = A/B

where

    E(A) = ξ₀ ,    E(B) = η₀    and    ρ = ξ₀/η₀ .

Now

    r = A/B = (A/η₀) [1 + (B - η₀)/η₀]⁻¹ ,

and by use of the expansion of [1 + (B - η₀)/η₀]⁻¹ we get a series in powers of (B - η₀)/η₀. Patterson (1958) used parts of the first three terms to obtain the bias of r and of the first two terms to obtain the variance of r. The parts used were the constants and terms involving σ². Terms of higher order in σ were ignored.
Patterson showed that the approximations for
the bias and variance of the three estimators are given by
    bias r = (σ²/β²) [(ℓ - ρk) trace D + k trace DU - ℓρ trace DU'
                      + {(ℓρ² + 2kρ - ℓ)F₁ - 2kF₂}/F₀] / [(k + ℓρ)F₀]

    var r = (σ²/β²) [(1 + ρ²)F₁ - 2ρF₂] / F₀²

where

    D is as given in (3);
    ℓ and k refer to the specific quadratic estimate;
    F₀ = R'DR ,    F₁ = R'DDR ,    F₂ = R'D'UDR ;
    R' = (1, ρ, ρ², ..., ρ^{n-2}) ;

U is the n-1 by n-1 matrix with ones on the subdiagonal and zeros elsewhere,

    U = | 0 0 0 . . . 0 0 |
        | 1 0 0 . . . 0 0 |
        | 0 1 0 . . . 0 0 |
        | . . .       . . |
        | 0 0 0 . . . 1 0 | ;

and

    trace D = (n² - 4)/15 ,    trace DU = (2n² - 15n + 22)/30 .

Since D is symmetric, trace DU' = trace DU.

The approximate variance of r is a function of ρ, β and the matrix D. It is by minimizing this variance that Patterson specifies the elements of D.

The calculations of the approximate bias and variance require formulae for F₀, F₁ and F₂. These are given in Appendix B for n = 4 to n = 10. The coefficients of the approximate variance and of the approximate bias are given in Tables 3 and 4. Multiplying the entries in the bias and variance tables by σ²/β² gives the bias and the variance approximations.
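Once D is in hand the approximations are mechanical to evaluate. The sketch below (ours, not Patterson's) computes F₀, F₁, F₂ and the two coefficients directly from the matrix definitions; it reproduces the tabled values, e.g. a bias coefficient of 12.5393 (Table 4) and a variance coefficient of 1.7488 (Table 3) for r(1, 0) with n = 4 and ρ = .1.

    import numpy as np

    def patterson_coefficients(n, rho, k, l):
        """Coefficients of sigma^2/beta^2 in the approximate bias and
        variance of the quadratic estimator r(1, k/l)."""
        D = np.empty((n - 1, n - 1))
        for i in range(1, n):
            for j in range(1, n):
                lo, hi = min(i, j), max(i, j)
                D[i - 1, j - 1] = (lo * (n - hi) / n
                                   - 3*i*j*(n - i)*(n - j) / (n*(n*n - 1)))
        R = rho ** np.arange(n - 1)
        U = np.eye(n - 1, k=-1)                  # ones on the subdiagonal
        F0, F1, F2 = R @ D @ R, R @ D @ D @ R, R @ D @ U @ D @ R
        trD = np.trace(D)                        # equals (n^2 - 4)/15
        trDU = np.trace(D @ U)                   # equals (2n^2 - 15n + 22)/30
        bias = ((l - rho*k)*trD + (k - rho*l)*trDU
                + ((l*rho**2 + 2*k*rho - l)*F1 - 2*k*F2) / F0)
        bias /= (k + l*rho) * F0
        var = ((1 + rho**2)*F1 - 2*rho*F2) / F0**2
        return bias, var

    print(patterson_coefficients(4, 0.1, 0, 1))  # about (12.5393, 1.7488)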
Portman (1961) derived different approximations to the bias and variance for the estimator r(1, 0). She wrote

    r = y'T'PTy / [y'T'PTy - y'T'Py] = A / (A - C) ,

where the matrices T and P are defined in Appendix A. If we let A - C = B we could then expand the ratio of A to B in the same manner that Patterson and Finney did. However, Portman expanded the estimator in a Taylor's series about A₀ and C₀. The partial derivatives of r with respect to A and C evaluated at A₀ and C₀ are

    r_A = -C₀/(A₀ - C₀)² ,          r_C = A₀/(A₀ - C₀)² ,

    r_AA = 2C₀/(A₀ - C₀)³ ,         r_CC = 2A₀/(A₀ - C₀)³ ,

    r_AC = -(A₀ + C₀)/(A₀ - C₀)³ .

If A₀ = E(A) and C₀ = E(C), then

    E(A - A₀) = E(C - C₀) = 0 ,

    E(A - A₀)² = Var A ,    E(C - C₀)² = Var C ,

    E(A - A₀)(C - C₀) = Cov(A, C) .
Using terms through the quadratic in the Taylor's expansion we find that

    E(r) ≈ A₀/(A₀ - C₀) + ½ [r_AA Var A + 2 r_AC Cov(A, C) + r_CC Var C] .

This gives the bias as

    bias(r) = E(r) - ρ .

By a similar procedure, using only the linear terms, the approximation to the variance was found to be

    Var(r) ≈ r_A² Var A + 2 r_A r_C Cov(A, C) + r_C² Var C .
Tables of the bias and variance for certain values of α, β and ρ were calculated by Portman (1961). They are reproduced as Tables 5 and 6.
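Portman's expansion is what would now be called the delta method. A minimal sketch, with the moments of A and C taken as given (she derived them for the particular quadratic forms entering r(1, 0)); the function name is ours:

    def taylor_bias_var(a0, c0, var_a, var_c, cov_ac, rho):
        """Second-order Taylor bias and first-order variance of
        r = A/(A - C), expanded about A0 = E(A) and C0 = E(C)."""
        d = a0 - c0
        r_a, r_c = -c0 / d**2, a0 / d**2                 # first partials
        r_aa, r_cc = 2*c0 / d**3, 2*a0 / d**3            # second partials
        r_ac = -(a0 + c0) / d**3
        mean_r = a0/d + 0.5*(r_aa*var_a + 2*r_ac*cov_ac + r_cc*var_c)
        var_r = r_a**2 * var_a + 2*r_a*r_c*cov_ac + r_c**2 * var_c
        return mean_r - rho, var_r                       # (bias, variance)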
It is worth noting that the two approximations to the bias and variance of r given by Portman and Patterson are structurally quite different. Patterson used the first terms of the series expansion of [1 + (B - η₀)/η₀]⁻¹, and Portman used the first terms of the Taylor series expansion of A/(A - C). Also, Portman used the expansion about E(A) and E(C) whereas Patterson expanded about ξ₀ and η₀, where ρ = ξ₀/η₀. Thus, although the ideas of the approximation are similar, the practical methods of obtaining these approximations are quite different.
THE SAMPLING STUDY

An empirical study of the four estimating procedures was carried out to see which might be the better of the four methods of estimating ρ in the equation

    y = α - βρ^x + ε ,    x = 0, 1, ..., n-1 .                        (5)

The four methods are:

    (a) Stevens' iterative least squares estimate

and Patterson's quadratic estimates

    (b) r(1, 1.5)
    (c) r(1, 0)
    (d) r(1, 1)

where the quadratic estimates are given in (4). ρ was taken to be .1, .3, .5, .7, and .9 with values of β of 5, 10 or 50, and n was taken as 4, 6, 8 or 10. The value of α is immaterial to all estimating procedures but it was generally taken to be 100. The ε's are uncorrelated and have a variance of one. Not all combinations of the β, ρ and n were taken. However, a wide enough selection of the combinations was used so that some conclusions could be drawn.
For a given set of α, β and ρ, a set of ε's was added to form the equation (5). These y's were then used to calculate a value of r for each of the procedures. For this given α, β and ρ, 100 sets of ε's were used to give us 100 values of r. The particular values of the parameters α, β and ρ were chosen to conform with the work Portman (1961) did on approximate bias and variance for the r(1, 0) procedure.
The sampling computations were done on an IBM 650. As was stated earlier, Stevens' iterative procedure for estimating r might not converge, due to the properties of the matrix being inverted. The program stopped iterating after calculating nine new r's from the original one. The criterion for saying that a value of r was the correct solution to the normal equations was that Δr be less than or equal to 0.0005. For some samples, particularly when ρ = .9, Stevens' estimate was too unstable to calculate. Stevens' procedure was started using the r(1, 1.5) estimator.

The mean of the sample of the r's and the bias are shown in Table 1 along with the approximations to the bias. In Table 2 the sample variance and the error mean square of the sample of r's are given along with approximations to the variance and the efficiency relative to the least squares procedure.

For those samples where the least squares estimator was calculated the residual variance was also computed. This was formed by taking the sum of the squares of the observations subtracted from the predicted value and dividing this sum by n-3, where n is the number of observations. The means of these for each sample are shown in Table 8.
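The design of the study is easy to restate in code. The sketch below substitutes scipy's general-purpose least squares fitter for the IBM 650 implementation of Stevens' iteration, so the convergence behaviour will not be identical, but the sampling scheme (100 sets of unit-variance ε's per parameter combination) is the one described above:

    import numpy as np
    from scipy.optimize import curve_fit

    alpha, beta, rho, n = 100.0, 10.0, 0.5, 6    # one parameter combination
    x = np.arange(n, dtype=float)
    rng = np.random.default_rng(1962)

    def model(x, a, b, r):
        return a - b * r**x

    estimates = []
    for _ in range(100):                         # 100 sets of epsilons
        y = model(x, alpha, beta, rho) + rng.standard_normal(n)
        try:
            (a, b, r), _ = curve_fit(model, x, y, p0=[y[0] + beta, beta, 0.5])
            estimates.append(r)
        except RuntimeError:                     # did not converge; cf. Table 8
            pass

    r_hat = np.array(estimates)
    print(len(r_hat), r_hat.mean() - rho, r_hat.var(ddof=1))

(In the study itself the iteration was started from the r(1, 1.5) estimate rather than from a fixed value.)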
RESULTS AND DISCUSSION

Table 1 gives the information on the bias of the estimates. Where samples were drawn for each estimator, the least squares one has in general a smaller bias than the quadratic procedures, but is only slightly better than the r(1, 1.5) estimator. However, the least squares estimate is unstable for some points, as shown by Table 1 and Table 8. The bias in the r(1, 0) estimator is much larger than any of the others. The approximate bias derived by either Patterson or Portman does not appear to be too accurate.

The variance and the error mean square of r are given in Table 2. Here we see that the quadratic estimator r(1, 1.5) has a smaller variance than the other estimators. Again, the estimator r(1, 0) shows up as being unstable. This is partially due to wild values of r, one of which went as high as 359. The efficiencies of the quadratic estimators as compared to Stevens' clearly contrast the procedures.

The approximate variances are quite accurate. The agreement between the two approximations is surprising, particularly for larger β's. There appears to be no significant difference between Patterson's approximate variance and Stevens' asymptotic variance. It is the r(1, 0) estimator that is not approximated well. The wild values that come out of the r(1, 0) estimator make it hard to predict. If Portman's (1961) procedure of a Taylor's series expansion on r(1, 1.5) were used to obtain approximate variances, it could more fairly be contrasted with Patterson's procedure.

The coefficients of the approximate variance as derived by Patterson are strikingly similar to the asymptotic variance coefficients as derived by Stevens. See Tables 3 and 7.

Table 8 shows some of the difficulties given by Stevens' estimator and r(1, 0).
RECOMMENDATIONS

The sampling study suggests that the quadratic r(1, 1.5) estimator is the better of the four estimators of the parameter ρ in

    y = α - βρ^x .

Its mean sample bias is lower than all but the least squares estimator's. Its sample variance is quite a bit better than the least squares sample variance.

However, although the r(1, 1.5) estimator appears to be the best estimator of ρ, this does not say that the prediction equation

    y = a - b [r(1, 1.5)]^x

is the best equation that can be fitted. It is certain that the least squares equation will give a better fit in the sense of minimizing the error sum of squares. This is in the process of being studied further.

Other least squares procedures might give interesting results. The procedure proposed by Stevens (1951) converges quickly for large β and ρ < .9. However, at ρ = .9 and/or small β convergence is not certain. Possibly the procedure advocated by Pimentel Gomes (1953) would not have these problems.

Besides estimating ρ, it is also necessary to estimate α, β and the residual variance. It would be interesting to see how the estimates for the quadratic procedure compare with the least squares method for these values.
Table 1. Sampling results, bias

                 r(1,1.5)   r(1,0)    r(1,1)    L.S.a/

n=4  β=5   ρ=.1
  r̄              .2154      .2596     .2341     .2068
  bias            .1154      .1596     .1341     .1068
  P.biasb/        .0138      .5016     .0276
  M.biasc/                  37.6491

n=4  β=5   ρ=.5
  r̄              .6173      .7088     .6833     .6132
  bias            .1175      .2088     .1833     .1132
  P.bias          .0535      .2940     .0803
  M.bias                     1.2724

n=4  β=10  ρ=.1
  r̄              .0866      .2009     .0897     .0870
  bias           -.0134      .1009    -.0103    -.0130
  P.bias          .0032      .1254     .0069
  M.bias                      .7722

n=4  β=10  ρ=.3
  r̄              .2824      .3741     .2872     .2828
  bias           -.0176      .0741    -.0128    -.0172
  P.bias          .0058      .0632     .0102
  M.bias                      .1049

n=4  β=50  ρ=.5
  r̄              .5077      .5059     .5061     .5061
  bias            .0077      .0059     .0061     .0061
  P.bias          .0005      .0029     .0008
  M.bias                      .0030

n=4  β=50  ρ=.7
  r̄              .7013      .6976     .6975     .6970
  bias            .0013     -.0024    -.0025    -.0030
  P.bias          .0017      .0060     .0023
  M.bias                      .0061

n=4  β=50  ρ=.9
  r̄              .9482      .9236     .9275     M.S.d/
  bias            .0482      .0236     .0275     M.S.
  P.bias          .0158      .0436     .0202
  M.bias                      .0493

n=6  β=5   ρ=.3
  r̄              .3295      .1301     .3575     .3224
  bias            .0295     -.1699     .0575     .0224
  P.bias          .0091      .3304     .0338
  M.bias                      .8119

n=6  β=5   ρ=.7
  r̄              .7232     1.2115     .8614     .7300
  bias            .0232      .5115     .1614     .0300
  P.bias          .0078      .2658     .0433
  M.bias                      .4419

n=6  β=10  ρ=.7
  r̄              .6707      .8036     .6811     .6744
  bias           -.0293      .1036    -.0189    -.0256
  P.bias          .0020      .0665     .0108
  M.bias                      .0774

n=6  β=50  ρ=.9
  r̄              .9022      .9146     .9041     .9036
  bias            .0022      .0146     .0041     .0036
  P.bias          .0009      .0118     .0026
  M.bias                      .0120

n=8  β=5   ρ=.5
  r̄              .5107     1.4322     .5378     .4896
  bias            .0107      .9322     .0378    -.0104
  P.bias          .0039      .2067     .0265
  M.bias                      .2544

n=8  β=10  ρ=.1
  r̄              .1003      .1021     .1005     .1002
  bias            .0003      .0021     .0005     .0002
  P.bias          .0000      .0011     .0002
  M.bias                      .0011

[The remaining blocks of the table, for n=8 β=5 ρ=.9, n=8 β=50 ρ=.9, n=10 β=5 ρ=.7, n=10 β=10 ρ=.5, n=10 β=10 ρ=.9 and n=10 β=50 ρ=.3, are not legible in this copy.]

a/ L.S. is the least squares estimator.
b/ P.bias is the approximate bias derived by Patterson, the coefficient of which comes from Table 4.
c/ M.bias is the approximate bias derived by Portman for r(1,0), the value of which comes from Table 5.
d/ M.S. is a missing sample.
Table 2. Sampling results, variance

                 r(1,1.5)   r(1,0)    r(1,1)    L.S.

n=4  β=5   ρ=.1
  Var.            .0934     2.8771     .0947     .1056
  E.M.S.          .1058     2.884      .1117     .1159
  P.S., M.Var.    .0700    48.7576               .0689
  Eff.            113%       5%        112%      100%

n=4  β=5   ρ=.5
  Var.            .5459     2.0253     .7199     .5804
  E.M.S.          .5542     2.0487     .7464     .5874
  P.S., M.Var.    .1736      .7288               .1736
  Eff.            106%      29%         81%      100%

n=4  β=10  ρ=.1
  Var.            .0190     1.5175     .0191     .0199
  E.M.S.          .0190     1.5125     .0190     .0198
  P.S., M.Var.    .0175      .1684               .0172
  Eff.            105%       1%        104%      100%

n=6  β=5   ρ=.3
  Var.                                 .0654     .0722
  E.M.S.                               .0681     .0720
  P.S., M.Var.                                   .0534
  Eff.                                 110%      100%

n=6  β=5   ρ=.7
  Var.            .1529    11.3344     .9145     .1688
  E.M.S.          .1519    11.4827     .9631     .1680
  P.S., M.Var.    .0945      .1646               .0944
  Eff.            110%       1%         18%      100%

n=6  β=10  ρ=.7
  Var.            .0277      .2442     .0274     .0294
  E.M.S.          .0284      .2525     .0275     .0297
  P.S., M.Var.    .0236      .0251               .0236
  Eff.            106%      12%        107%      100%

n=8  β=5   ρ=.5
  Var.            .0391    22.104      .0380     .0550
  E.M.S.          .0388    22.752      .0391     .0546
  P.S., M.Var.    .0375      .0453               .0367
  Eff.            141%       .2%       145%      100%

n=8  β=10  ρ=.9
  Var.            .0345      .0445     .0357     M.S.
  E.M.S.          .0344      .0554     .0354     M.S.
  P.S., M.Var.    .0360      .0406               .0360

n=8  β=50  ρ=.9
  Var.            .0013      .0013     .0013     M.S.
  E.M.S.          .0013      .0014     .0013     M.S.
  P.S., M.Var.    .0014      .0010               .0014

n=10 β=5   ρ=.7
  Var.            .0224                          .0303
  E.M.S.          .0222                          .0310
  P.S., M.Var.                                   .0230
  Eff.            136%                           100%

n=10 β=50  ρ=.3
  Var.            .0005                .0005     .0004
  E.M.S.          .0005                .0005     .0004
  P.S., M.Var.    .0005                          .0004
  Eff.             92%                  93%      100%

[Entries for the blocks n=4 β=10 ρ=.3, n=4 β=10 ρ=.5, n=4 β=50 ρ=.1, n=6 β=50 ρ=.9, n=8 β=5 ρ=.9, n=8 β=10 ρ=.1, n=8 β=50 ρ=.7, n=10 β=10 ρ=.5 and n=10 β=10 ρ=.9 are not legible in this copy.]

Var. is the sample variance of the 100 values of r and E.M.S. is their error mean square. P.S., M.Var. gives the approximate variance: Patterson's value for the quadratic estimators, Stevens' asymptotic value for L.S., and Portman's approximate variance (M.Var.) for r(1,0). Eff. is the efficiency relative to the least squares procedure.
Table 3. Coefficient of the approximate variance of r as derived by Patterson

  ρ       n=4       n=5       n=6       n=7      n=8      n=9      n=10
  0      1.5556    1.5000    1.5400    1.6178   1.7143   1.8214   1.9352
  .1     1.7488    1.5276    1.4815    1.5006   1.5506   1.6177   1.6951
  .2     2.0311    1.5773    1.4211    1.3709   1.3693   1.3935   1.4330
  .3     2.4637    1.6790    1.3817    1.2506   1.1929   1.1731   1.1752
  .4     3.1575    1.8758    1.3898    1.1614   1.0423    .9779    .9443
  .5     4.3394    2.2465    1.4837    1.1280    .9372    .8259    .7576
  .6     6.5548    2.9678    1.7392    1.1901    .9022    .7344    .6293
  .7    11.3845    4.5397    2.3617    1.4468    .9904    .7344    .5782
  .8    25.2457    8.9375    4.1441    2.2724   1.3986    .9369    .6695
  .9   100.2214   31.7554   13.2004    6.5006   3.5998   2.1737   1.4030

Var r = (σ²/β²) × (coefficient)
Table 4. Coefficient of the approximate bias of r as derived by Patterson

Coefficient for bias r(1,1.5)

  ρ       n=4       n=9      n=10
  0       .2963    4.8750   6.5525
  .1      .3198    3.1494   4.2730
  .2      .4100    1.8980   2.6076
  .3      .5783    1.0438   1.4605
  .4      .8597     .5005    .7239
  .5     1.3384     .1819    .2887
  .6     2.2255     .0102    .0542
  .7     4.1472    -.0797   -.0652
  .8     9.6444    -.1447   -.1436
  .9    39.4543    -.3107   -.3276

[The columns for n = 5 to n = 8 are not legible in this copy.]

Coefficient for bias r(1,0)

  ρ       n=4       n=5       n=6       n=7       n=8       n=9       n=10
  .1    12.5393   19.7893   27.1012   35.0887   43.9195   53.6544   64.3198
  .2     7.4871   10.1611   12.7376   15.5490   18.6672   22.1113   25.8865
  .3     6.3154    7.3037    8.2593    9.3719   10.6599   12.1184   13.7401
  .4     6.3697    6.2717    6.3344    6.5968    7.0179    7.5658    8.2199
  .5     7.3497    6.1875    5.5518    5.2548    5.1667    5.2168    5.3650
  .6     9.6705    7.0086    5.5771    4.7662    4.2899    4.0103    3.8555
  .7    14.9698    9.4118    6.6459    5.1088    4.1796    3.5831    3.1839
  .8    30.0253   16.5004   10.3583    7.1486    5.2898    4.1278    3.3577
  .9   108.8989   52.6795   29.4729   18.2483   12.1725    8.5963    6.3506

Coefficient for bias r(1,1)

  ρ       n=4       n=9      n=10
  0       .6667    7.0179   9.1667
  .1      .6900    4.6799   6.0926
  .2      .8032    3.0210   3.9009
  .3     1.0196    1.8957   2.4051
  .4     1.3844    1.1734   1.4378
  .5     2.0063     .7413    .8527
  .6     3.1561     .5102    .5293
  .7     5.6327     .4231    .3807
  .8    12.6638     .4883    .3751
  .9    50.4193    1.0957    .7269

[The columns for n = 5 to n = 8 are not legible in this copy.]

bias = (σ²/β²) × (coefficient)
Table 5. Approximate bias of r(1,0) as given by Portman (1961)

  n    β       ρ=.1        ρ=.3      ρ=.5      ρ=.7      ρ=.9
  4    5      37.6491     1.5225    1.2724    4.6115    24,742,973.1
       10       .7722      .1049     .1072     .2584    14.7250
       50       .0058      .0026     .0030     .0061      .0493
  6    5      10.2634      .8119     .4489     .4419     3.5997
       10       .9360      .1132     .0658     .0774      .4488
       50       .0120      .0033     .0022     .0028      .0120
  8    5       6.0148      .6568     .2544     .2262      .7814
       10      1.1292      .1373     .0588     .0460      .1464
       50       .0192      .0044     .0021     .0017      .0049
Table 6. Approximate variance of r(1,0) as given by Portman (1961)

  n    β       ρ=.1        ρ=.3      ρ=.5      ρ=.7      ρ=.9
  4    5      48.7576      .6748     .7288    5.3793    18,195,014,208.0
       10       .1684      .0238     .0472     .1633    29.1107
       50       .0007      .0010     .0017     .0046      .0433
  6    5      13.6430      .2491     .0896     .1646     4.1872
       10       .2651      .0146     .0148     .0251      .2225
       50       .0006      .0005     .0006     .0009      .0053
  8    5       6.6462      .1898     .0453     .0451      .3188
       10       .3620      .0115     .0084     .0096      .0406
       50       .0006      .0005     .0004     .0004      .0010
Table 7. Coefficient of the asymptotic variance of the least squares estimator

  ρ       n=4       n=5       n=6       n=7      n=8      n=9      n=10
  .1     1.7216    1.4337    1.3038    1.2306   1.1838   1.1512   1.1273
  .2     2.0189    1.5299    1.3246    1.2161   1.1501   1.1059   1.0743
  .3     2.4587    1.6573    1.3354    1.1730   1.0786   1.0179    .9761
  .4     3.1556    1.8668    1.3702    1.1278    .9914    .9067    .8503
  .5     4.3396    2.2431    1.4763    1.1155    .9182    .7989    .7214
  .6     6.5547    2.9666    1.7367    1.1861    .8963    .7262    .6183
  .7    11.3845    4.5393    2.3609    1.4458    .9890    .7325    .5756
  .8    25.2457    8.9374    4.1439    2.7216   1.3983    .9365    .6691
  .9   100.2214   31.7553   13.2004    6.5005   3.5997   2.1736    .9443

Asymptotic variance = (σ²/β²) × (coefficient)
Table 8. Computational notes

The table lists, for each combination of n, β and ρ in the study: d.n.c., the proportion of least squares samples (out of 100) that did not converge; the number of samples where r(1,0) was greater than 10; the number of least squares samples where the increment Δr was greater than 10; and R.V., the mean of the least squares residual variances s² (estimating σ²).

[The alignment of the individual entries is not recoverable in this copy. Legible values include non-convergence proportions such as 0/100, 2/100, 4/100, 5/100, 6/100, 12/100, 16/100 and 17/100; overflow counts of 0, 1, 2 and 7; and mean residual variances ranging from .7498 to 18.2580.]

Notes: in some of the samples where r(1,0) exceeded 10, r(1,1.5) and r(1,1) also exceeded 10; m.s. denotes a missing sample or a least squares estimate that was not calculated; individual entries were also marked "sample not used" and "contains an error."
LIST OF REFERENCES

Finney, D. J. 1958. The efficiencies of alternative estimators for an asymptotic regression equation. Biometrika 45:370-388.

Hartley, H. O. 1948. The estimation of non-linear parameters by internal least squares. Biometrika 35:32-45.

Monroe, R. J. 1949. On the use of non-linear systems in the estimation of nutritional requirements of animals. Unpublished Ph.D. Thesis, North Carolina State College, Raleigh.

Patterson, H. D. 1956. A simple method for fitting an asymptotic regression curve. Biometrics 12:323-329.

Patterson, H. D. 1958. The use of autoregression in fitting an exponential curve. Biometrika 45:389-400.

Patterson, H. D. and Lipton, S. 1959. An investigation of Hartley's method for fitting an exponential curve. Biometrika 46:281-292.

Pimentel Gomes, F. 1953. The use of Mitscherlich's regression law in the analysis of experiments with fertilizers. Biometrics 9:498-516.

Portman, Ruth. 1961. A study of the Monroe estimator of the non-linear parameter in the exponential model. Unpublished Master's Thesis, North Carolina State College, Raleigh.

Stevens, W. L. 1951. Asymptotic regression. Biometrics 7:247-267.
APPENDIX A

A PROOF THAT THE ESTIMATOR PROPOSED BY MONROE IS r(1,0)

Portman (1961) showed that the estimator of ρ proposed by Monroe (1949) is a ratio of quadratic terms. Following the development of Monroe, Portman wrote the model as

    E(y) = w = α - βρ^x .

This is the general solution of the first order differential equation in w,

    dw/dx = c(α - w) ,    where ρ = e^{-c} .

The model can be generated exactly by the first order linear difference equation of the form

    w_i = α(1 - ρ) + ρ w_{i-1} .                                      (1)

If this equation is summed over all values of w_i and there are equal increments of the independent variable, one obtains

    w_j = λ₀ + λ₁ x_j + λ₂ X_j                                        (2)

where

    X_j = Σ_{i=1}^{j} w_i ,    i, j = 1, ..., n-1 ,
    x = 0, 1, ..., n-1 ,
    n = the number of observations,
    λ₀ = w₀ ,    α = -λ₁/λ₂ ,    1/ρ = 1 - λ₂ .

Since the parameters, the λ's of (2), enter linearly, they can be estimated from actual data, y_i = w_i + ε_i, by the least squares method if n is equal to, or greater than, 3.
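The identity (2) is exact for the noiseless w's, which is easy to confirm numerically; the following check (ours, with the λ's written out under the conventions above) verifies it for one choice of parameters:

    import numpy as np

    alpha, beta, rho, n = 100.0, 10.0, 0.7, 8
    x = np.arange(n)
    w = alpha - beta * rho**x
    X = np.cumsum(w[1:])                # X_j = w_1 + ... + w_j, j = 1,...,n-1
    lam2 = -(1 - rho) / rho             # so that 1/rho = 1 - lambda_2
    lam1 = -alpha * lam2                # so that alpha = -lambda_1/lambda_2
    lam0 = w[0]
    assert np.allclose(w[1:], lam0 + lam1 * x[1:] + lam2 * X)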
Portman (1961) took this development and showed how the estimator of r could be written as a ratio of quadratic terms. She developed the normal equations in matrix form as follows. Let T be the n by n summation matrix for which (Ty)_j = X_j, that is,

    T = | 0 0 0 . . . 0 |
        | 0 1 0 . . . 0 |
        | 0 1 1 . . . 0 |
        | . . .       . |
        | 0 1 1 . . . 1 | ,

let S = [1, x] be the n by 2 matrix whose columns are a column of ones and the column of x values, and partition

    K' = [S | Ty]' ,    λ̂' = [θ̂' | λ̂₂] ,    θ̂ = (λ̂₀, λ̂₁)' .

Then the normal equations K'Kλ̂ = K'y are

    S'S θ̂ + S'Ty λ̂₂ = S'y                                            (3)

    y'T'S θ̂ + y'T'Ty λ̂₂ = y'T'y .                                    (4)

Premultiplying (3) by y'T'S(S'S)⁻¹ and subtracting the result from (4) gives

    λ̂₂ y'T'[I_n - S(S'S)⁻¹S']Ty = y'T'[I_n - S(S'S)⁻¹S']y .

Writing P = I_n - S(S'S)⁻¹S', this is

    λ̂₂ = y'T'Py / y'T'PTy .

Now, since 1/r = 1 - λ̂₂,

    r = y'T'PTy / [y'T'PTy - y'T'Py] = y'T'PTy / y'T'P(T - I_n)y .

This is the compact form which Portman (1961) used to develop her approximate bias and variance formulae. It is very similar to the form used by Patterson (1958). With further simplification the equivalence of the two can be shown.
The element in the k-th row and j-th column of S(S'S)⁻¹S' is needed first. For x = 0, 1, ..., n-1 it is

    1/n + (x_k - x̄)(x_j - x̄) / Σ(x - x̄)² ,    Σx = n(n-1)/2 .

Using this, the elements of T'P, and from them the elements of T'PT and T'P(T - I_n), can be written out term by term. In each product the nonzero elements form an n-1 by n-1 matrix W and, after some simplification, we find that

    w_ij = i(n-j)/n - 3ij(n-i)(n-j)/[n(n²-1)]        i ≤ j

    w_ij = j(n-i)/n - 3ij(n-i)(n-j)/[n(n²-1)]        i > j .

By comparing this with (3) of the main text, we see that

    W = D ,

so that, with y₁ = (y₀, ..., y_{n-2})' and y₂ = (y₁, ..., y_{n-1})',

    y'T'PTy = y₂'Dy₂    and    y'T'P(T - I_n)y = y₁'Dy₂ .

Thus

    r = y'T'PTy / y'T'P(T - I_n)y = y₂'Dy₂ / y₁'Dy₂ = r(1, 0) ,

or the estimator proposed by Monroe (1949) is r(1,0) in Patterson's (1958) notation.
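The equivalence can also be checked numerically on arbitrary data: fit (2) by least squares and compare 1/(1 - λ̂₂) with the quadratic-form version of r(1, 0). A sketch, assuming the forms as reconstructed in (3) and (4) of the main text:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 7
    y = 100.0 - 10.0 * 0.6**np.arange(n) + rng.standard_normal(n)

    # Monroe's route: least squares on y_j = l0 + l1*x_j + l2*X_j, j = 0,...,n-1.
    X = np.concatenate([[0.0], np.cumsum(y[1:])])
    K = np.column_stack([np.ones(n), np.arange(n), X])
    lam = np.linalg.lstsq(K, y, rcond=None)[0]
    r_monroe = 1.0 / (1.0 - lam[2])

    # Patterson's r(1,0) = y2'D y2 / y1'D y2 with D from (3).
    D = np.empty((n - 1, n - 1))
    for i in range(1, n):
        for j in range(1, n):
            lo, hi = min(i, j), max(i, j)
            D[i - 1, j - 1] = (lo * (n - hi) / n
                               - 3*i*j*(n - i)*(n - j) / (n*(n*n - 1)))
    y1, y2 = y[:-1], y[1:]
    r_patterson = (y2 @ D @ y2) / (y1 @ D @ y2)
    print(r_monroe, r_patterson)        # the two agree to rounding error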
APPENDIX B

FORMULAE FOR PATTERSON'S APPROXIMATIONS

Formulae used in the calculation of the approximate bias and variance of Patterson's quadratic estimators of r are given below.

n=4:
    F₀ = (1/10)[3 - 2ρ - 2ρ² - 2ρ³ + 3ρ⁴]
    F₁ = (1/100)[14 - 6ρ - 16ρ² - 6ρ³ + 14ρ⁴]
    F₂ = (1/100)[-1 + 4ρ - 6ρ² + 4ρ³ - ρ⁴]

n=5:
    F₀ = (1/10)[4 - ρ² - 6ρ³ - ρ⁴ + 4ρ⁶]
    F₁ = (1/100)[24 + 12ρ - 14ρ² - 44ρ³ - 14ρ⁴ + 12ρ⁵ + 24ρ⁶]
    F₂ = (1/100)[4 + 12ρ - 9ρ² - 14ρ³ - 9ρ⁴ + 12ρ⁵ + 4ρ⁶]

n=6:
    F₀ = (1/210)[100 + 40ρ + 28ρ² - 112ρ³ - 112ρ⁴ - 112ρ⁵ + 28ρ⁶ + 40ρ⁷ + 100ρ⁸]
    F₁ = (1/210²)[3850 + 4270ρ + 1456ρ² - 5404ρ³ - 8344ρ⁴ - 5404ρ⁵ + 1456ρ⁶ + 4270ρ⁷ + 3850ρ⁸]
    F₂ = (1/210²)[1225 + 3220ρ + 721ρ² - 2464ρ³ - 5404ρ⁴ - 2464ρ⁵ + 721ρ⁶ + 3220ρ⁷ + 1225ρ⁸]

n=7:
    F₀ = (1/28)[15 + 10ρ + 11ρ² - 8ρ³ - 14ρ⁴ - 28ρ⁵ - 14ρ⁶ - 8ρ⁷ + 11ρ⁸ + 10ρ⁹ + 15ρ¹⁰]
    F₁ = (1/28²)[364 + 560ρ + 476ρ² - 112ρ³ - 728ρ⁴ - 1120ρ⁵ - 728ρ⁶ - 112ρ⁷ + 476ρ⁸ + 560ρ⁹ + 364ρ¹⁰]
    F₂ = (1/28²)[154 + 420ρ + 322ρ² - 532ρ⁴ - 728ρ⁵ - 532ρ⁶ + 322ρ⁸ + 420ρ⁹ + 154ρ¹⁰]

n=8:
    F₀ = (1/84)[49 + 42ρ + 54ρ² + 2ρ³ - 21ρ⁴ - 84ρ⁵ - 84ρ⁶ - 84ρ⁷ - 21ρ⁸ + 2ρ⁹ + 54ρ¹⁰ + 42ρ¹¹ + 49ρ¹²]
    F₁ = (1/84²)[4116 + 7644ρ + 8736ρ² + 3948ρ³ - 3780ρ⁴ - 12600ρ⁵ - 16128ρ⁶ - 12600ρ⁷ - 3780ρ⁸ + 3948ρ⁹ + 8736ρ¹⁰ + 7644ρ¹¹ + 4116ρ¹²]
    F₂ = (1/84²)[2058 + 5880ρ + 6468ρ² + 3864ρ³ - 2898ρ⁴ - 9072ρ⁵ - 12600ρ⁶ - 9072ρ⁷ - 2898ρ⁸ + 3864ρ⁹ + 6468ρ¹⁰ + 5880ρ¹¹ + 2058ρ¹²]

n=9:
    F₀ = (1/180)[112 + 112ρ + 157ρ² + 62ρ³ + 17ρ⁴ - 136ρ⁵ - 186ρ⁶ - 276ρ⁷ - 186ρ⁸ - 136ρ⁹ + 17ρ¹⁰ + 62ρ¹¹ + 157ρ¹² + 112ρ¹³ + 112ρ¹⁴]
    F₁ = (1/180²)[22848 + 48048ρ + 64428ρ² + 50448ρ³ + 12828ρ⁴ - 45024ρ⁵ - 94104ρ⁶ - 118944ρ⁷ - 94104ρ⁸ - 45024ρ⁹ + 12828ρ¹⁰ + 50448ρ¹¹ + 64428ρ¹² + 48048ρ¹³ + 22848ρ¹⁴]
    F₂ = (1/180²)[12768 + 37968ρ + 50298ρ² + 44868ρ³ + 11298ρ⁴ - 32784ρ⁵ - 77364ρ⁶ - 94104ρ⁷ - 77364ρ⁸ - 32784ρ⁹ + 11298ρ¹⁰ + 44868ρ¹¹ + 50298ρ¹² + 37968ρ¹³ + 12768ρ¹⁴]

n=10:
    F₀ = (1/165)[108 + 120ρ + 178ρ² + 108ρ³ + 78ρ⁴ - 64ρ⁵ - 132ρ⁶ - 264ρ⁷ - 264ρ⁸ - 264ρ⁹ - 132ρ¹⁰ - 64ρ¹¹ + 78ρ¹² + 108ρ¹³ + 178ρ¹⁴ + 120ρ¹⁵ + 108ρ¹⁶]
    F₁ = (1/165²)[22572 + 51876ρ + 77088ρ² + 76098ρ³ + 49038ρ⁴ - 6600ρ⁵ - 69696ρ⁶ - 126324ρ⁷ - 148104ρ⁸ - 126324ρ⁹ - 69696ρ¹⁰ - 6600ρ¹¹ + 49038ρ¹² + 76098ρ¹³ + 77088ρ¹⁴ + 51876ρ¹⁵ + 22572ρ¹⁶]
    F₂ = (1/165²)[13662 + 41976ρ + 62403ρ² + 67188ρ³ + 42603ρ⁴ - 1320ρ⁵ - 58806ρ⁶ - 104544ρ⁷ - 126324ρ⁸ - 104544ρ⁹ - 58806ρ¹⁰ - 1320ρ¹¹ + 42603ρ¹² + 67188ρ¹³ + 62403ρ¹⁴ + 41976ρ¹⁵ + 13662ρ¹⁶]
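As a spot check, the polynomial forms above can be compared with the matrix definitions F₀ = R'DR, F₁ = R'DDR, F₂ = R'D'UDR of the main text. The sketch below (our code) does this for n = 4:

    import numpy as np

    n, rho = 4, 0.3
    D = np.empty((n - 1, n - 1))
    for i in range(1, n):
        for j in range(1, n):
            lo, hi = min(i, j), max(i, j)
            D[i - 1, j - 1] = (lo * (n - hi) / n
                               - 3*i*j*(n - i)*(n - j) / (n*(n*n - 1)))
    R = rho ** np.arange(n - 1)
    U = np.eye(n - 1, k=-1)
    matrix_F = (R @ D @ R, R @ D @ D @ R, R @ D @ U @ D @ R)
    poly_F = (np.polyval([3, -2, -2, -2, 3], rho) / 10,     # coefficients are
              np.polyval([14, -6, -16, -6, 14], rho) / 100, # listed highest
              np.polyval([-1, 4, -6, 4, -1], rho) / 100)    # power first
    print(np.allclose(matrix_F, poly_F))                    # True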