ESTIMATION OF THE EXPONENTIALS
IN A
DOUBLE EXPONENTIAL REGRESSION CURVE

by

Marcelo M. Orence and A. H. E. Grandage

This research supported in part by Research Grant GM 07143
and Computing Support Grant FR-00011 from the National
Institutes of Health.

Institute of Statistics Mimeograph Series No. 382
December 1963
TABLE OF CONTENTS

                                                                     Page

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . .    v

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . .     1

2. RELATED STUDIES . . . . . . . . . . . . . . . . . . . . . . . . .    4
   2.1. Single Exponential . . . . . . . . . . . . . . . . . . . . .    4
   2.2. General Model . . . . . . . . . . . . . . . . . . . . . . .     6

3. LINEAR DIFFERENCE EQUATIONS AS MODELS FOR ESTIMATING FUNCTIONS
   OF P_1 AND P_2 . . . . . . . . . . . . . . . . . . . . . . . . .    10
   3.1. Transformation of the Model . . . . . . . . . . . . . . . .    10
   3.2. Obtaining Approximate Biases . . . . . . . . . . . . . . . .   14

4. CHOICE OF PARAMETERS AND OBTAINING SAMPLES FROM ARTIFICIAL
   POPULATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . .   22

5. SAMPLE ESTIMATES . . . . . . . . . . . . . . . . . . . . . . . .    24
   5.1. Estimates of the Coefficients . . . . . . . . . . . . . . .    24
        5.1.1. Biases of Estimates . . . . . . . . . . . . . . . . .   25
        5.1.2. Variances of Estimates . . . . . . . . . . . . . . .    27
   5.2. Estimates of P_1 and P_2 . . . . . . . . . . . . . . . . . .   30
        5.2.1. Biases of Estimates . . . . . . . . . . . . . . . . .   32
        5.2.2. Variances of Estimates . . . . . . . . . . . . . . .    34

6. SUMMARY AND CONCLUSIONS . . . . . . . . . . . . . . . . . . . . .   36

7. LIST OF REFERENCES . . . . . . . . . . . . . . . . . . . . . . .    40

8. APPENDICES . . . . . . . . . . . . . . . . . . . . . . . . . . .    41
   8.1. Derivation of Model 3.5, m = 1 from a Second-Order
        Differential Equation . . . . . . . . . . . . . . . . . . .    41
   8.2. Means, Biases and Variances of c and d . . . . . . . . . . .   44
   8.3. Empirical Means and Variances of r_1 and r_2 . . . . . . . .   50
   8.4. Numerical Example Using Linear Difference Equations for
        Estimating P in a Single Exponential Curve . . . . . . . . .   56
   8.5. Numerical Example Using Linear Difference Equations for
        Estimating P_1 and P_2 in a Double Exponential . . . . . . .   59
LIST OF TABLES

                                                                     Page

4.1. Values of parameters used in the artificial populations . . . .   22

5.1. Empirical and approximate biases of c and d in model 3.5,
     m = 1, y_{x+2} = a + c y_{x+1} + d y_x + e'_x . . . . . . . . .   26

5.2. Differences in empirical biases and variances of estimates
     c and d obtained from models (3.5) and (3.6) with m = 1 . . . .   28

5.3. Number of populations from which samples give estimates c
     and d whose empirical biases fall over specified ranges . . . .   29

5.4. Number of populations from which samples give estimates c
     and d whose empirical variances fall over specified ranges . .    31

5.5. Empirical means and biases of r_1 and r_2 in model 3.5,
     m = 1, y_{x+2} = a + c y_{x+1} + d y_x + e'_x . . . . . . . . .   33
LIST OF APPENDIX TABLES

                                                                     Page

8.2.1. Means, biases and variances of c and d in model (3.5,
       m = 1), y_{x+2} = a + c y_{x+1} + d y_x + e'_x . . . . . . .    44

8.2.2. Means, biases and variances of c and d in model (3.6,
       m = 1), y_{x+3} - y_{x+2} = c(y_{x+2} - y_{x+1})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   45

8.2.3. Means, biases and variances of c and d in model (3.5,
       m = 2), y_{x+4} = a + c y_{x+2} + d y_x + e'_x . . . . . . .    46

8.2.4. Means, biases and variances of c and d in model (3.6,
       m = 2), y_{x+5} - y_{x+4} = c(y_{x+3} - y_{x+2})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   47

8.2.5. Means, biases and variances of c and d in model (3.5,
       m = 3), y_{x+6} = a + c y_{x+3} + d y_x + e'_x . . . . . . .    48

8.2.6. Means, biases and variances of c and d in model (3.6,
       m = 3), y_{x+7} - y_{x+6} = c(y_{x+4} - y_{x+3})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   49

8.3.1. Empirical means and variances of r_1 and r_2 in model (3.5,
       m = 1), y_{x+2} = a + c y_{x+1} + d y_x + e'_x . . . . . . .    50

8.3.2. Empirical means and variances of r_1 and r_2 in model (3.6,
       m = 1), y_{x+3} - y_{x+2} = c(y_{x+2} - y_{x+1})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   51

8.3.3. Empirical means and variances of r_1 and r_2 in model (3.5,
       m = 2), y_{x+4} = a + c y_{x+2} + d y_x + e'_x . . . . . . .    52

8.3.4. Empirical means and variances of r_1 and r_2 in model (3.6,
       m = 2), y_{x+5} - y_{x+4} = c(y_{x+3} - y_{x+2})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   53

8.3.5. Empirical means and variances of r_1 and r_2 in model (3.5,
       m = 3), y_{x+6} = a + c y_{x+3} + d y_x + e'_x . . . . . . .    54

8.3.6. Empirical means and variances of r_1 and r_2 in model (3.6,
       m = 3), y_{x+7} - y_{x+6} = c(y_{x+4} - y_{x+3})
       + d(y_{x+1} - y_x) + e*_x . . . . . . . . . . . . . . . . . .   55
1. INTRODUCTION
Many measurable phenomena encountered by research workers in
different fields can be mathematically represented by a linear model,
e.g. a straight line. This is a commonly used representation which
could be an exact relationship, but more often is a satisfactory
approximation, especially if the independent variable takes values
over a short range. Estimating the constants in such a system requires
computations that are quite easy to perform.
However, a linear function may not always be adequate. At times a
non-linear model may be desirable as a better mathematical description
of a process, e.g. growth of animals before reaching maturity, effects
of fertilizers on crop yields, and many physical and chemical phenomena.
Working with such models often involves complex mathematical problems.
The non-linear model under consideration in this study is the
asymptotic regression curve which represents a function approaching a
finite asymptotic value as the independent variable tends to infinity.
Of particular interest is the linear combination of two exponentials,

    y_x = α + β_1 P_1^x + β_2 P_2^x + ε_x ,                        (1.1)

for which y_x is observed at x = 0, 1, ..., n-1, and n is an integer
equal to or greater than 5. The random errors ε_x are uncorrelated
with zero mean and variance σ². The asymptotic value α and the
other parameters are unknown. It is to be noted that P_i, i = 1, 2,
can be expressed as an exponential, P_i = e^{-k_i}, k_i > 0; hence
the name attributed to this regression model.
The fitting of an exponential regression function to observed data
presents problems of a statistical nature. Of primary importance is the
estimation of the unknown parameters. Much work has been done on the
estimation of P in a single exponential (2.1) 1/, yet most of the
proposed estimation procedures are difficult to extend to the case of
more than one exponential term in the model.
The scope of the present work is briefly indicated as follows:

(1) Finding a transformation of the double exponential model which
can generate some simple forms of difference equations whose
coefficients are functions of the two exponentials P_1 and P_2,

(2) Obtaining estimates of such coefficients and consequently
deriving the estimates of P_1 and P_2 from these quantities, and

(3) Investigating the sampling properties of the estimates,
including the variance and bias of each estimate indicated in (2) and
the approximate biases of the estimates of the coefficients γ_m and δ_m.

Although this work is practically limited to finding estimates of
P_1 and P_2, the other parameters could readily be estimated by
considering the model (1.1) as an ordinary multiple linear regression
of y_x on r_1^x and r_2^x, where r_1 and r_2 are estimates of P_1 and
P_2, respectively. Thus, it is clear why much emphasis or interest is
given to obtaining good estimates r_1 and r_2.
A transformation of the original model is discussed in Chapter 3.
One of the simpler forms of the transformation referred to is the model
used by Morrison (1956), which is included in this study for comparative
purposes.

1/ Equations in each chapter are numbered consecutively; (2.1)
refers to the first equation in Chapter 2.
As stated in (3) above, a sampling investigation of the properties
of the estimates has been conducted to find out if these difference
equations could be used as reasonable models for estimation purposes.

The interest in the estimation problem in a double exponential
function is based on the practical consideration that a more general
model than the single exponential can be of wider applicability in many
cases where the simpler system is not adequate to represent the
regression of one variable on another. It will be seen in Chapter 4
that the double exponential model can generate a variety of curves
which may provide research workers with a choice of appropriate
mathematical descriptions of processes under consideration.
2. RELATED STUDIES

2.1. Single Exponential
A considerable amount of interest has been shown by several
workers in proposing different methods of estimating P in a single
exponential regression,

    y_x = α + β P^x + ε_x ,    0 < P < 1,                          (2.1)

where x = 0, 1, ..., n-1.
Patterson and Lipton (1959) showed that the estimates of P
obtained by different writers could be generalized into the form,

    r = Σ_{x=1}^{n-1} w_x y_x / Σ_{x=1}^{n-1} w_x y_{x-1} .

These include the following:
Hartley's (1948) estimate has w_x as linear functions of y_x.
This was referred to as a "quadratic estimator" by Patterson. Hartley
obtained his estimate from the so-called "internal regression" of y_x
on the independent variable x and a partial sum of y_x, where S_x is a
partial sum defined by (2.4), the definition differing according as n
is even or odd. Hartley's estimator is r = (2 + b_3)/(2 - b_3). The
"internal regression" comes as a result of fitting a differential
equation to the observed data. This consists of integrating or summing
the differential equation after substituting some functions of y_x for
the derivatives.
However, the choice of the particular partial sum in (2.4)
motivated other workers to consider a more general form of the partial
sum, (2.5), and the corresponding estimate, (1 + k b_3)/(1 - ℓ b_3),
is denoted by r(1, k/ℓ).

Monroe (1949) developed procedures for r(1, 0).
Stevens' (1951) least squares estimate has for w_x some compli-
cated functions of r itself. His procedure involves an iterative
method which requires a preliminary estimate of P and the inverses of
information matrices. He provided approximations 2/ to the asymptotic
covariance matrix, corresponding to preliminary estimates of P for
n = 6, 7 and 8, and the tables were extended later by Linhart (1960),
Lipton (1961) and Hiorns (1962). It is now possible to use such a
procedure for all values over the ranges n = 5(1)30 and
r = .10(.01).90.

2/ An inverse corresponding to a preliminary estimate can be
considered an approximation to the inverse corresponding to the
parameter being estimated.
Patterson's (1956) so-called "linear estimate" has the numerator
and denominator as linear functions of y_x, i.e. the coefficients w_x
are constants.
Finney (1958) obtained his estimates of P by considering the
regression,

    y_x = α(1 - P) + P y_{x-1} ,                                   (2.6)

and also another equation, (2.7).
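Finney's regression (2.6) is easy to verify numerically. The sketch below (a modern Python illustration, not part of the original study; the parameter values are arbitrary) regresses y_x on y_{x-1} for error-free data and recovers P exactly, since y_x = α(1 - P) + P y_{x-1}:

```python
import numpy as np

# Error-free data from the single exponential y_x = alpha + beta*p**x.
# Illustrative values only.
alpha, beta, p = 50.0, -30.0, 0.7
x = np.arange(10)
y = alpha + beta * p**x

# Regress y_x on y_{x-1}; the slope estimates p, the intercept alpha*(1-p).
slope, intercept = np.polyfit(y[:-1], y[1:], 1)
assert abs(slope - p) < 1e-8
assert abs(intercept - alpha * (1 - p)) < 1e-8
print(round(slope, 6))   # 0.7
```

With noisy data the slope is of course only an estimate of P, and is biased because y_{x-1} is itself measured with error.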
Patterson (1958) used a more general relationship than that of
Hartley, i.e. (2.8), where P_0 = a particular value of P, and his
estimate is given by (P_0 + k b_3)/(1 - ℓ b_3), or simply by
r(P_0, k/ℓ).
2.2. General Model

The problem of estimation has also been considered for the general
model by a few workers. Of special mention is Prony's method described
in Whittaker and Robinson (1944), which can be directly applied to the
exponential curve, single or multiple, if the constant term is zero.
It is desirable to describe the method briefly because of the relation
that it has with the proposed estimation procedure in Chapter 3.

Let y_x be a function which has observed values at
x = 0, w, 2w, ..., (n-1)w, w known, such that y_x may be approximately
represented by a sum of k exponentials, where n > k. If y_x could be
represented exactly as a sum of k exponentials, i.e. ε_x = 0 for all x,
then y_x would satisfy a linear difference equation,

    y_{x+k} + B y_{x+k-1} + ... + M y_x = 0 ,

where x = 0, w, ..., (n-1)w - k. Since y_x is measured with some
error, the above equation is not exactly zero, i.e. (2.10).
Prony used the least squares method to estimate the coefficients
B, ..., M in (2.10). The estimates of P_1^w, P_2^w, ..., P_k^w are the
roots of the kth degree equation,

    X^k + b X^{k-1} + ... + m = 0 ,                                (2.11)

where b, ..., m are estimates of B, ..., M, respectively. Since w
is known, the estimates of the exponentials are readily obtained from
these quantities. After these estimates are substituted for the
respective parameters in (2.9), the unknown coefficients are estimated
by the ordinary least squares procedure.
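Prony's procedure can be sketched for the case k = 2 with zero constant term and unit spacing (w = 1). The function below is an illustrative modern rendering, not the author's computation; the names are hypothetical:

```python
import numpy as np

def prony_two_exp(y):
    """Estimate p1, p2 in y_x ~ b1*p1**x + b2*p2**x by Prony's method."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Least squares fit of the difference equation y[x+2] + B*y[x+1] + M*y[x] ~ 0,
    # rewritten as y[x+2] = -B*y[x+1] - M*y[x].
    A = np.column_stack([y[1:n-1], y[0:n-2]])
    coef, *_ = np.linalg.lstsq(A, -y[2:n], rcond=None)
    B, M = coef
    # The exponentials are the roots of z**2 + B*z + M = 0 (equation 2.11 with k = 2).
    return np.sort(np.roots([1.0, B, M]).real)[::-1]

# Exact data: y_x = 3*(0.8)**x + 1*(0.3)**x, so the roots come out exactly.
x = np.arange(10)
y = 3 * 0.8**x + 1 * 0.3**x
r1, r2 = prony_two_exp(y)
print(round(r1, 6), round(r2, 6))   # ≈ 0.8 and 0.3
```

With observational error the fitted difference equation no longer holds exactly, which is precisely the situation studied in the chapters that follow.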
8
Morrison (1956) eliminated the constant term in the model for
k
=2
by taking successive differences of the y's.
He used a form
derived from the resulting equation,
(2.12)
where x
= 0, 1, ..• , n - 2. Equation (2.12) can be simply rewritten
as follows:
where ~~ = ~l(Pl - 1)
and ~~ = ~2(P2 - 1); (2.12) is the same type
of model (2.9) as proposed by Prony.
Morrison studied the sampling
behavior of the estimates of -(PI + P2 ) and (PIP2)~/' the coefficients in the difference equation that he used as a model.
Hartley (1961) developed a modification of the Gauss-Newton method
of iterative solution. An application of this method to the double
exponential model results in the same equations used by Stevens (1951)
for finding the increments or correction factors for the preliminary
estimates. However, this procedure has an additional feature which has
the merit of guaranteed convergence.
The most recent work on the exponential model is by Cornell (1962)
who described a procedure which he called the Method of Partials. He
formed partial sums of equal numbers of observations y_x. The number
of partial sums equals the number of parameters to be estimated.
Forming the partial sums obviously depends on the sample size. He then
equated each sum to its corresponding expected value, thus forming a
system of equations whose solutions are estimates of the parameters or
functions of the parameters.

3/ The difference of signs of these constants from those of the
difference equations in Chapter 3 is to be noted.
Although the last three methods are available for the double
exponential model, the search for a procedure which could give good
estimates over the whole useful range of the parameters P_1 and P_2
would seem to be useful.
3. LINEAR DIFFERENCE EQUATIONS AS MODELS FOR ESTIMATING
   FUNCTIONS OF P_1 AND P_2
The use of the least squares method on model (1.1) involves a set
of normal equations whose solutions are not readily obtainable. How-
ever, there are iterative procedures by Stevens (1951) and Hartley
(1961) which could be used provided that preliminary estimates of P_1
and P_2 are available. Hence, there is a need to get these estimates
by some simple methods.

3.1. Transformation of the Model

The following discussion shows a transformation of the double
exponential model. This transformation can generate some models which
may be used for estimating P_1 and P_2 or, more explicitly, some
functions of P_1 and P_2.

Let the expected value of y_x be denoted by Y_x, where

    Y_x = α + β_1 P_1^x + β_2 P_2^x .                              (3.1)

Also, let E be a symbolic operator denoting the operation of
increasing the argument by one, i.e.

    E Y_x = Y_{x+1} ,

or, in general,

    E^m Y_x = Y_{x+m} ,

where m is a positive integer.
Let it be shown that the following relationship (3.2) is true:

    (E^m - P_1^m)(E^m - P_2^m) Y_x = α(1 - P_1^m)(1 - P_2^m) .     (3.2)

Performing the indicated operation on the left hand side in (3.2) leads
to the expression,

    Y_{x+2m} - (P_1^m + P_2^m) Y_{x+m} + (P_1 P_2)^m Y_x ,

which can be shown to be equal to the constant α(1 - P_1^m)(1 - P_2^m)
by substituting for Y_{x+2m}, Y_{x+m} and Y_x the corresponding
expectations involving the parameters. Hence, the relationship (3.2)
reduces to the equation,

    Y_{x+2m} = α_m + γ_m Y_{x+m} + δ_m Y_x ,                       (3.3)

where

    α_m = α(1 - P_1^m)(1 - P_2^m), γ_m = (P_1^m + P_2^m), δ_m = -(P_1 P_2)^m,

and x = 0, 1, ..., n - 2m - 1.
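The identity (3.3) can be checked numerically on an error-free curve; the sketch below (an illustrative modern check, using the parameter values of population 9 of Table 4.1) verifies it for m = 1, 2 and 3:

```python
# Error-free curve Y_x = a + b1*P1**x + b2*P2**x; values from population 9.
a, b1, b2, P1, P2 = 250.0, -200.0, 35.0, 0.9, 0.3
Y = lambda x: a + b1 * P1**x + b2 * P2**x

for m in (1, 2, 3):
    # Coefficients of relationship (3.3).
    a_m = a * (1 - P1**m) * (1 - P2**m)
    gamma_m = P1**m + P2**m
    delta_m = -(P1 * P2)**m
    for x in range(6):
        lhs = Y(x + 2*m)
        rhs = a_m + gamma_m * Y(x + m) + delta_m * Y(x)
        assert abs(lhs - rhs) < 1e-9       # identity holds exactly
print("identity (3.3) verified for m = 1, 2, 3")
```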
By taking successive differences of the Y's in (3.3), the lag-
regression model is obtained, i.e.

    Y_{x+2m+1} - Y_{x+2m} = γ_m(Y_{x+m+1} - Y_{x+m}) + δ_m(Y_{x+1} - Y_x) ,   (3.4)

where x = 0, 1, ..., n - 2m - 2.

In terms of the observed y's, equations (3.3) and (3.4) can be
written in the following forms:

    y_{x+2m} = α_m + γ_m y_{x+m} + δ_m y_x + ε'_x                  (3.5)

and

    y_{x+2m+1} - y_{x+2m} = γ_m(y_{x+m+1} - y_{x+m}) + δ_m(y_{x+1} - y_x) + ε*_x ,   (3.6)

or simply

    y_{x+2m} = a + c y_{x+m} + d y_x + ε'_x

and

    y_{x+2m+1} - y_{x+2m} = c(y_{x+m+1} - y_{x+m}) + d(y_{x+1} - y_x) + ε*_x ,

where c = γ̂_m and d = δ̂_m.

The choice of m depends on n since the condition,

    n - 2m ≥ 3 ,

must be satisfied so that enough values are available to make
estimation possible.
Six particular cases of (3.5) and (3.6) considered in this study
are those for which m = 1, 2 and 3. It is to be noted that model
(3.5) for m = 1 could be used for all sample sizes, n ≥ 5.
Equation (3.5) for m = 1 can be derived by using another approach.
In Appendix 8.1 the method indicated is essentially that of Hartley's
so-called "internal regression" of using as the model a differential
equation involving the first and second derivatives. It is shown there
that if the functions, y_{x+1} + q y_x and y_{x+2} + i y_{x+1} + j y_x,
are used as approximations to the first and second derivatives,
respectively, the estimates are independent of i, j and q, and the
resulting model reduces to (3.5) for m = 1.
Under the assumptions of uncorrelated and homogeneous errors ε_x
with expectation zero in the original model (1.1), the resulting random
errors ε'_x in (3.5) have the following properties:

    E(ε'_x) = 0   and   E(ε'²_x) = σ²(1 + γ_m² + δ_m²) ,

where

    ε'_x = ε_{x+2m} - γ_m ε_{x+m} - δ_m ε_x ,

and the elements of the covariance matrix can be obtained from the
following expression:

    E(ε'_x ε'_{x+h}) = E(ε_{x+2m} - γ_m ε_{x+m} - δ_m ε_x)(ε_{x+h+2m} - γ_m ε_{x+h+m} - δ_m ε_{x+h}) .

Thus

    E(ε'_x ε'_{x+h}) = -γ_m σ²(1 - δ_m) ,   h = m ,
                     = -δ_m σ² ,            h = 2m ,
                     = 0 ,                  otherwise.
A special case of uncorrelated errors is attained when n = 3m
and m ≥ 3, since each y, and hence each ε_x, appears only once in the
system of equations represented by (3.5).

In the lag-regression model (3.6) the random error ε*_x has
expected value equal to zero and common variance given by the
expression,

    2σ²(1 + γ_m + γ_m² + δ_m² - γ_m δ_m) ,   m = 1 ,
    2σ²(1 + γ_m² + δ_m²) ,                   m > 1 .
The covariances are also functions of P_1 and P_2. For given P_1 and
P_2 in models (3.5) and (3.6), the variances of the errors approach σ²
and 2σ², respectively, and the covariances approach zero as m becomes
large.
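The covariance pattern of the errors ε'_x can be verified by expanding each ε'_x directly over the underlying ε's; the sketch below (an illustrative check with arbitrary values of m, γ_m and δ_m) confirms the cases h = m, h = 2m and otherwise, together with the variance σ²(1 + γ_m² + δ_m²):

```python
import numpy as np

def eprime_coeffs(x, m, gamma, delta, length):
    """Coefficient vector of e'_x = e_{x+2m} - gamma*e_{x+m} - delta*e_x
    over the underlying uncorrelated errors e_0, ..., e_{length-1}."""
    v = np.zeros(length)
    v[x + 2*m] += 1.0
    v[x + m] -= gamma
    v[x] -= delta
    return v

m, gamma, delta, sigma2 = 2, 0.9, -0.2, 1.0   # arbitrary illustrative values
L = 20
for h in range(0, 3*m + 1):
    # For uncorrelated e's with variance sigma2, the covariance is sigma2
    # times the dot product of the two coefficient vectors.
    cov = sigma2 * (eprime_coeffs(0, m, gamma, delta, L)
                    @ eprime_coeffs(h, m, gamma, delta, L))
    if h == 0:
        assert np.isclose(cov, sigma2 * (1 + gamma**2 + delta**2))
    elif h == m:
        assert np.isclose(cov, -gamma * sigma2 * (1 - delta))
    elif h == 2*m:
        assert np.isclose(cov, -delta * sigma2)
    else:
        assert np.isclose(cov, 0.0)
print("covariance pattern confirmed")
```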
3.2. Obtaining Approximate Biases

The ordinary least squares method has been used in the sampling
investigation for estimating the coefficients γ_m and δ_m in each of
the six models represented by (3.5) and (3.6) corresponding to
m = 1, 2, and 3.
Let the estimates be denoted by

    c = C/B   and   d = D/B ,

where for model (3.5) the determinants B, C and D are shown below:

        | n - 2m      Sy_{x+m}         Sy_x           |
    B = | Sy_{x+m}    Sy²_{x+m}        Sy_x y_{x+m}   |              (3.8)
        | Sy_x        Sy_x y_{x+m}     Sy²_x          | ,

        | n - 2m      Sy_{x+2m}            Sy_x           |
    C = | Sy_{x+m}    Sy_{x+m} y_{x+2m}    Sy_x y_{x+m}   |          (3.9)
        | Sy_x        Sy_x y_{x+2m}        Sy²_x          | ,

and

        | n - 2m      Sy_{x+m}         Sy_{x+2m}          |
    D = | Sy_{x+m}    Sy²_{x+m}        Sy_{x+m} y_{x+2m}  |          (3.10)
        | Sy_x        Sy_x y_{x+m}     Sy_x y_{x+2m}      | ,

where S denotes summation from x = 0 to n - 2m - 1.
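The estimates c = C/B and d = D/B can be computed exactly as written and agree with an ordinary least squares solver; the sketch below (simulated data from population 9 of Table 4.1, an illustration only) does both:

```python
import numpy as np

# Simulated sample from population 9: y_x = 250 - 200*(.9)**x + 35*(.3)**x + e_x.
rng = np.random.default_rng(0)
n, m = 12, 1
x = np.arange(n)
y = 250 - 200 * 0.9**x + 35 * 0.3**x + rng.standard_normal(n)

lo = y[: n - 2*m]        # y_x,      x = 0, ..., n-2m-1
mid = y[m : n - m]       # y_{x+m}
hi = y[2*m :]            # y_{x+2m}

# Determinants (3.8)-(3.10): B is the normal-equation matrix; C and D
# replace its second and third columns, respectively.
B = np.array([[n - 2*m,   mid.sum(),       lo.sum()],
              [mid.sum(), (mid**2).sum(),  (lo*mid).sum()],
              [lo.sum(),  (lo*mid).sum(),  (lo**2).sum()]])
C = B.copy(); C[:, 1] = [hi.sum(), (mid*hi).sum(), (lo*hi).sum()]
D = B.copy(); D[:, 2] = [hi.sum(), (mid*hi).sum(), (lo*hi).sum()]
c = np.linalg.det(C) / np.linalg.det(B)
d = np.linalg.det(D) / np.linalg.det(B)

# The same regression via a library least squares solve on [1, y_{x+m}, y_x].
X = np.column_stack([np.ones(n - 2*m), mid, lo])
(_, c2, d2), *_ = np.linalg.lstsq(X, hi, rcond=None)
assert np.isclose(c, c2) and np.isclose(d, d2)
print("c =", round(c, 4), " d =", round(d, 4))
```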
Replacing y_{x+2m} by α_m + γ_m y_{x+m} + δ_m y_x + ε'_x in the
elements of the second column of C or third column of D gives the
elements in the following forms:

    Sy_{x+2m} = (n - 2m)α_m + γ_m Sy_{x+m} + δ_m Sy_x + Sε'_x ,

    Sy_{x+m} y_{x+2m} = α_m Sy_{x+m} + γ_m Sy²_{x+m} + δ_m Sy_x y_{x+m} + Sε'_x y_{x+m} ,   (3.11)

and

    Sy_x y_{x+2m} = α_m Sy_x + γ_m Sy_x y_{x+m} + δ_m Sy²_x + Sε'_x y_x .

After substituting the expressions (3.11) in C and D, these two
determinants can be simplified by some ordinary operations.
The resulting expression for each of the estimates c and d is
given as the sum of the parameter to be estimated and a ratio of two
determinants in which the numerator involves the random error ε'_x, i.e.

              | n - 2m      Sε'_x            Sy_x           |
    c = γ_m + | Sy_{x+m}    Sε'_x y_{x+m}    Sy_x y_{x+m}   | / B    (3.12)
              | Sy_x        Sε'_x y_x        Sy²_x          |

and

              | n - 2m      Sy_{x+m}         Sε'_x           |
    d = δ_m + | Sy_{x+m}    Sy²_{x+m}        Sε'_x y_{x+m}   | / B . (3.13)
              | Sy_x        Sy_x y_{x+m}     Sε'_x y_x       |
It is obvious from (3.12) and (3.13) that the exact expressions
for the biases are given by the expected values of the ratios of the
determinants, i.e.

    Bias(c) = E(C*/B)   and   Bias(d) = E(D*/B) ,                  (3.14)

where C* and D* are the numerator determinants of (3.12) and (3.13),
namely

         | n - 2m      Sε'_x            Sy_x           |
    C* = | Sy_{x+m}    Sε'_x y_{x+m}    Sy_x y_{x+m}   |             (3.15)
         | Sy_x        Sε'_x y_x        Sy²_x          |

and

         | n - 2m      Sy_{x+m}         Sε'_x           |
    D* = | Sy_{x+m}    Sy²_{x+m}        Sε'_x y_{x+m}   |            (3.16)
         | Sy_x        Sy_x y_{x+m}     Sε'_x y_x       | .

Hence, approximations to the biases of c and d can be obtained if
approximate expected values of C*/B and D*/B are available. These
can be obtained by taking the first terms of the Taylor's expansion of
C*/B and D*/B about the expectation of each element of the
determinants C*, D* and B.
It is clearly indicated by (3.15) and (3.16) that the approximate
biases are zero if the independent variables y_x and y_{x+m} are non-
random. This is due to the fact that each element of the second column
of C* and third column of D* would have zero expectation, the ex-
pected value of ε'_x being zero for all x. However, if y_x and y_{x+m}
are random variables, as in the present problem, the approximate biases
are zero if there exist no correlations between ε'_x and the independent
variables.
Making use of the properties of the error ε'_x in (3.5) and its
covariances with the independent variables, namely,

    E(ε'_x) = 0 ,   E(ε'_x y_{x+m}) = -γ_m σ² ,   E(ε'_x y_x) = -δ_m σ² ,

gives the following expressions for the approximate biases:

    Bias_1(c) = -(n-2m)σ²/B_0 { γ_m [(n-2m)²σ² + (n-2m)SY²_x - (SY_x)²]
                               - δ_m [(n-2m)SY_x Y_{x+m} - SY_x SY_{x+m}] }    (3.17)

and

    Bias_1(d) = (n-2m)σ²/B_0 { -δ_m [(n-2m)²σ² + (n-2m)SY²_{x+m} - (SY_{x+m})²]
                              + γ_m [(n-2m)SY_x Y_{x+m} - SY_x SY_{x+m}] } ,   (3.18)
where the Y's are the expectations of the y's. The following
expressions are needed in computing the approximate biases: the sums
of the Y's and of their squares and products, (3.19), and

          | n - 2m      SY_{x+m}                SY_x                |
    B_0 = | SY_{x+m}    SY²_{x+m} + (n-2m)σ²    SY_x Y_{x+m}        |    (3.20)
          | SY_x        SY_x Y_{x+m}            SY²_x + (n-2m)σ²    | ,

where S again denotes summation from x = 0 to n - 2m - 1.

Similarly, the approximate biases of c and d, the estimates of
the coefficients in model (3.6), can be obtained; these are given by
the expressions (3.21) and (3.22).
19
(3.21)
Bias2 (c) =
and
2
(n-2m-l)cr
Bl
if m=l,
Bias2 (d)
=
where
i = 0, 1,
the
y's,
and the
d 's
i
are the differences of
Le.
The following expressions are needed in computing the approximate
biases given by the formulas (3.21) and (3.22): the sums of squares and
products of the d's, (3.23), the determinant

          | Sd²_{10} + 2(n-2m-1)σ²       Sd_{00}d_{10} - (n-2m-1)σ²  |
    B_1 = |                                                          | ,
          | Sd_{00}d_{10} - (n-2m-1)σ²   Sd²_{00} + 2(n-2m-1)σ²      |

in which the off-diagonal elements are simply Sd_{00}d_{10} when
m = 2, 3 (the term -(n-2m-1)σ² entering only for m = 1), and

    T_i = β_i (1 - P_i^{n-2m-1})/(1 - P_i) .
In the sampling investigation the expectations of y_x are known.
Hence, for computing the approximate biases for all the cases con-
sidered, it was convenient to use directly the expressions,

    E(c) - γ_m   and   E(d) - δ_m ,                                (3.25)

as the approximate biases of c and d, respectively. The terms E(c)
and E(d) are approximate expected values of c and d which are
obtained by taking the expected values of each element of the
determinants involved in c and d.

Values of these approximate biases both for models (3.5) and (3.6)
are compared with the empirical values discussed in Chapter 5.
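A small Monte Carlo sketch of this bias computation (modern code, not the original IBM 650 calculation; population 1 of Table 4.1 is used for illustration) estimates the empirical biases of c and d directly by fitting model (3.5), m = 1, to repeated samples:

```python
import numpy as np

rng = np.random.default_rng(7)
a, b1, b2, P1, P2 = 250, -100, 5, 0.5, 0.1        # population 1 of Table 4.1
gamma1, delta1 = P1 + P2, -(P1 * P2)              # true coefficients .6 and -.05
x = np.arange(12)
curve = a + b1 * P1**x + b2 * P2**x

ests = []
for _ in range(2000):
    y = curve + rng.standard_normal(12)           # errors N(0, 1)
    # Model (3.5), m = 1: regress y_{x+2} on [1, y_{x+1}, y_x], x = 0..9.
    X = np.column_stack([np.ones(10), y[1:11], y[0:10]])
    sol, *_ = np.linalg.lstsq(X, y[2:12], rcond=None)
    ests.append(sol[1:])                          # keep (c, d)
bias_c, bias_d = np.mean(ests, axis=0) - [gamma1, delta1]
assert bias_c < 0 < bias_d                        # c biased downward, d upward
print(round(bias_c, 2), round(bias_d, 2))
```

The signs agree with the sampling results reported in Chapter 5, where c is negatively and d positively biased in every population for m = 1.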
4. CHOICE OF PARAMETERS AND OBTAINING SAMPLES FROM
   ARTIFICIAL POPULATIONS

The sampling investigation consisted of obtaining random samples
from artificial populations and using these samples for the estimation
of the parameters. Fifteen sets of parameters, (α, β_1, β_2, P_1, P_2),
were selected and are given in Table 4.1.
Table 4.1. Values of parameters used in the artificial populations

Population
    No.        α      β_1     β_2     P_1     P_2
     1        250    -100       5      .5      .1
     2        250    -150       5      .7      .1
     3        250    -200       5      .9      .1
     4         10      50      15      .4      .2
     5         10     125      15      .6      .2
     6         10     175      15      .8      .2
     7        250    -100      35      .5      .3
     8        250    -150      35      .7      .3
     9        250    -200      35      .9      .3
    10         10     125      45      .6      .4
    11         10     175      45      .8      .4
    12        250    -150      75      .7      .5
    13        250    -200      75      .9      .5
    14         10     175     150      .8      .6
    15        250    -200     225      .9      .7
With a set of parameters substituted in model (4.1),

    y_x = α + β_1 P_1^x + β_2 P_2^x + ε_x ,                        (4.1)

the observations y_x were generated by adding to the expectation a
random normal variate ε_x with zero mean and variance one, for every
value of x used. From each of the fifteen populations one hundred
random samples of size twelve were obtained. The twelve observations
correspond to x = 0, 1, ..., 11. Each sample of 12 observations was
used in all the six difference equations which are particular forms of
equations (3.5) and (3.6) for m = 1, 2 and 3.

In population 9 an additional estimation used only the first nine
observations of each of the hundred samples. This part of the investi-
gation was included to find the effect of using a smaller sample size.

All computations for generating samples, estimating the parameters
and obtaining approximate biases were done with the IBM 650.
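The generation scheme can be sketched as follows (a modern illustration; the original computations were done on the IBM 650, and the random generator and seed here are of course arbitrary):

```python
import numpy as np

# 100 samples of size 12 from one artificial population:
# y_x = a + b1*P1**x + b2*P2**x + e_x, with e_x ~ N(0, 1).
rng = np.random.default_rng(1963)
a, b1, b2, P1, P2 = 250, -100, 5, 0.5, 0.1        # population 1 of Table 4.1
x = np.arange(12)                                  # x = 0, 1, ..., 11
curve = a + b1 * P1**x + b2 * P2**x                # error-free expectation
samples = curve + rng.standard_normal((100, 12))   # one row per sample
assert samples.shape == (100, 12)
# Averaged over the 100 samples, the data track the error-free curve.
assert np.allclose(samples.mean(axis=0), curve, atol=0.5)
print(samples.shape)
```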
5. SAMPLE ESTIMATES

The discussion of the sampling results will be divided into two
parts, namely,

(1) Sample estimates of the coefficients of the difference
equations, and

(2) Sample estimates of the parameters P_1 and P_2.

In both parts the properties of the estimates which will be of partic-
ular concern are the biases and variances, which could provide means
for comparing the models for the same or different values of m,
whichever is meaningful.
5.1. Estimates of the Coefficients

In Chapter 3, it was indicated that equation (3.5),

    y_{x+2m} = α_m + γ_m y_{x+m} + δ_m y_x + ε'_x ,

and the corresponding lag-regression (3.6), where m = 1, 2 and 3,
involve the desired unknown parameters P_1 and P_2 in simple
functional forms. Least squares provides a very convenient method of
estimating the coefficients. However, since the usual assumptions of
uncorrelated errors, non-correlation between errors and independent
variables, and non-randomness of the independent variables are not
satisfied in (3.5) and (3.6), the procedure does not generally yield
best linear unbiased estimates. It then becomes important to know the
sampling behavior of the estimates.
In the sampling investigation two sets of estimates of γ_m and
δ_m, one for each of the models (3.5) and (3.6) for a given m, were
obtained from every sample.
5.1.1. Biases of Estimates

Consider model 3.5, m = 1,

    y_{x+2} = α_1 + γ_1 y_{x+1} + δ_1 y_x + ε'_x ,

or simply,

    y_{x+2} = a + c y_{x+1} + d y_x + e'_x ,

where x = 0, 1, ..., 9. The empirical and approximate biases of c
and d for the above mentioned model are given in Table 5.1. Approxi-
mate biases refer to those computed by using (3.25). These values are
all negative for c and positive for d, with the empirical results
ranging from -.74116 to -.021169 for c and .028239 to .40734 for d.

The values of the empirical biases agree very closely with the
approximate values. Hence, the dependence of the empirical bias on
both the coefficients and also the other parameters may be indicated by
formulas (3.21) and (3.22), which clearly show how γ_m and δ_m
affect the approximate biases. However, the relation between the biases
and the other parameters is not clearly defined or indicated, as both
the numerators and especially the denominators are complicated
functions of these constants.
A comparison between the two models for a given value of m could
show whether using the differences of the observed y's
as variables
Table 5.1. Empirical and approximate biases of c and d in model a/ 3.5, m = 1,
           y_{x+2} = a + c y_{x+1} + d y_x + e'_x

Pop.  Population parameters      True values          Biases of c                 Biases of d
No.   (α, β_1, β_2, P_1, P_2)    (γ_1, δ_1)       Empirical   Approximate c/   Empirical   Approximate c/
 1    (250, -100,   5, .5, .1)   ( .6, -.05)      -.51440      -.46944          .26616      .24314
 2    (250, -150,   5, .7, .1)   ( .8, -.07)      -.47425      -.46715          .33939      .33429
 3    (250, -200,   5, .9, .1)   (1.0, -.09)      -.36348      -.39897          .33450      .36664
 4    ( 10,   50,  15, .4, .2)   ( .6, -.08)      -.62142      -.54780          .22025      .19407
 5    ( 10,  125,  15, .6, .2)   ( .8, -.12)      -.49553      -.48265          .27811      .27082
 6    ( 10,  175,  15, .8, .2)   (1.0, -.16)      -.24248      -.27573          .18361      .20900
 7    (250, -100,  35, .5, .3)   ( .8, -.15)      -.49289      -.48229          .29319      .28726
 8    (250, -150,  35, .7, .3)   (1.0, -.21)      -.14342      -.15950          .14418      .12686
 9    (250, -200,  35, .9, .3)   (1.2, -.27)                   -.053333         .40734      .40421
                                                 (-.038084)b/ (-.049679)      ( .040600)  ( .050630)
10    ( 10,  125,  45, .6, .4)   (1.0, -.24)      -.74116      -.73735          .14916      .17450
11    ( 10,  175,  45, .8, .4)   (1.2, -.32)      -.20813      -.24340          .19136      .19431
12    (250, -150,  75, .7, .5)   (1.2, -.35)      -.22152      -.22352          .028239     .032110
13    (250, -200,  75, .9, .5)   (1.4, -.45)      -.021169     -.028470         .36106      .38961
14    ( 10,  175, 150, .8, .6)   (1.4, -.48)      -.51372      -.55491          .044570     .041090
15    (250, -200, 225, .9, .7)   (1.6, -.63)      -.089914     -.083260         .05443

a/ c and d in the model are estimates of γ_1 and δ_1, respectively.
b/ Values in parentheses are based on one hundred samples consisting of the
   first nine observations of the original samples of size twelve.
c/ Obtained by using formulas (3.25).
in the regression model (3.6) has any advantage over the model contain-
ing the constant term and the original y's as the variables.
Comparison of biases is meaningful only between models for a given m,
for which the same values of the coefficients are being estimated in
both models.

Numerical values of the differences of the biases for m = 1 are
given in Table 5.2, where each difference is defined in the following
manner,

    Difference = Bias in (3.6) - Bias in (3.5) .

Since the biases of c are negative, a negative difference shows that
the bias in (3.6) is larger in absolute value than that of (3.5).
These differences clearly show that model (3.5) yields better estimates
in the sense of smaller biases of both c and d.
The complete results for all the models are given in the Appendix,
Tables 8.2.1 to 8.2.6. From these tables and also Table 5.3, it is
shown that all the biases of c are negative in all populations for
all models and all m considered, while those of d are positive
except in populations 4 and 13 for m = 2 and 3. The apparently larger
absolute values for m = 1 than those for m = 2 and 3 may be due to the
fact that the coefficients estimated for m = 1 are much larger in
absolute value than for the other values of m.
5.1.2. Variances of Estimates

The difference in the number of equations represented by the two
regression models, i.e., n - 2m for (3.5) and n - 2m - 1 for (3.6),
could in part account for the difference in the empirical variances of
Table 5.2. Differences a/ in empirical biases and variances of estimates c
           and d obtained from models (3.5) and (3.6) with m = 1

Pop.  True values of          Differences in biases of   Differences in variances of
No.   coefficients (γ_1, δ_1)      c           d               c            d
 1    ( .6, -.05)              -.15449      .08301        -.005160     -.000736
 2    ( .8, -.07)              -.18354      .14071        -.007672     -.002677
 3    (1.0, -.09)              -.20682      .20465        -.017348     -.011920
 4    ( .6, -.08)              -.22599      .27511        -.014653     -.001796
 5    ( .8, -.12)              -.28562      .03438        -.004218     -.002263
 6    (1.0, -.16)              -.32007      .21731         .018344      .007559
 7    ( .8, -.15)              -.23452      .16677        -.004396      .001234
 8    (1.0, -.21)              -.20139      .18333         .012526      .011309
 9    (1.2, -.27)              -.091021     .08689         .007253      .001015
10    (1.0, -.24)              -.25734      .12973        -.014369     -.004979
11    (1.2, -.32)              -.45451      .29356         .033979      .014161
12    (1.2, -.35)              -.25144      .24576         .059003      .029634
13    (1.4, -.45)              -.14386      .098871        .014581      .015252
14    (1.4, -.48)              -.54788      .36844         .017858      .007073
15    (1.6, -.63)              -.68809      .41956         .062209      .023316

a/ Difference in biases = (Bias in (3.6)) - (Bias in (3.5));
   difference in variances = (Variance in (3.6)) - (Variance in (3.5)).
Table 5.3. Number of populations from which samples give estimates c and d
           whose empirical biases fall over specified ranges

                                 c                                   d
Range of       m = 1       m = 2       m = 3       m = 1       m = 2       m = 3
bias         (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6)
   < -.60      2     9
 -.60 - -.40   5     3     1     2
 -.40 - -.20   4     1     7     9     4     4
 -.20 - 0.00   4a/   2     7     4    11    11
 0.00 -  .20                                       7a/   2    14    13
  .20 -  .40                                       7     4     1     2
  .40 -  .60                                       1     8     0
    >  .60                                               1
Total         15a/  15    15    15    15    15    15a/  15    15    15    15    15

a/ Samples of size nine are also used in one population.
each of c and d in both models for a given m. Generally, it is
expected that the variances are larger for (3.6) than (3.5). Such
differences are illustrated for m = 1 and are shown in Table 5.2.
The decrease in variances, contrary to what is expected, in using (3.6)
in six of the fifteen populations may be due to sampling fluctuations.

Table 5.4 shows the ranges of the empirical values of the vari-
ances for all the populations and models. Numerical results are given
in Appendix 8.2. The empirical variances are generally larger for
model (3.6) than (3.5), although the differences are less pronounced
between models for m = 2 and 3.
5.2. Estimates of P_1 and P_2

Estimation of the coefficients γ_m and δ_m of the linear
difference equations provides an intermediate step for obtaining the
desired quantities, the estimates of the unknown parameters P_1 and
P_2. The roots of the following quadratic equation involving the
coefficients c and d,

    z² - cz - d = 0 ,

are the mth powers of the estimates r_1 and r_2, from which r_1
and r_2 can be readily computed. The choice of which root estimates
which parameter is arbitrary. This arbitrariness is in a way an
advantage in investigating the properties of the estimates
Table 5.4. Number of populations from which samples give estimates c and d
           whose empirical variances fall over specified ranges

Range of                         c                                   d
empirical      m = 1       m = 2       m = 3       m = 1       m = 2       m = 3
variances    (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6) (3.5) (3.6)
 .00 - .02           2     3     2     2     2     7a/   6    13    11    11    11
 .02 - .04     3     1     3     3     0     0     6     6     1     3     3     2
 .04 - .06     3     8     2     2     2     2     1     3     1     1     0     1
 .06 - .08     5     3     0     1     1     1     1     0     1     1     1     2
 .08 - .10     1
 .10 - .12     3     0     0     1
 .12 - .14     2     2     1     0
 .14 - .16     0     2     0     1
   > .16       1     2     7     7
Total         15a/  15    15    15    15    15    15a/  15    15    15    15    15

a/ Samples of size nine are also used in one population.
32
r1 and r2. In the sampling results, the larger root has been considered as the estimate of P1, P1 being larger than P2 in the sets of parameters chosen for the sampling study.
Furthermore, it is to be recalled that in defining the double exponential model, each of the parameters P1 and P2 is limited to values within the segment (0, 1). The occurrence of values outside the segment (0, 1) could complicate any attempted interpretation of the sampling results. This includes the possibility of imaginary values, non-positive real values, or values equal to or greater than 1.0.
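The recovery of r1 and r2 from fitted coefficients c and d, together with the screening for values outside (0, 1), can be sketched as follows. This is a minimal illustration in Python; the function name and the None-signalling convention are ours, not part of the original computations.

```python
import math

def roots_to_estimates(c, d, m=1):
    """Solve z**2 - c*z - d = 0; the roots are the m-th powers of the
    estimates r1 and r2.  Returns (r1, r2) with r1 taken from the larger
    root, or None when the roots are imaginary.  Non-positive roots are
    reported as None so the caller can screen them."""
    disc = c * c + 4.0 * d
    if disc < 0.0:                       # imaginary estimates
        return None
    z1 = (c + math.sqrt(disc)) / 2.0     # larger root -> estimate of P1**m
    z2 = (c - math.sqrt(disc)) / 2.0     # smaller root -> estimate of P2**m
    def mth_root(z):
        return z ** (1.0 / m) if z > 0.0 else None
    return mth_root(z1), mth_root(z2)
```

With the coefficients of the numerical example in Appendix 8.5 (c = 1.446002, d = -.507278, m = 1), this returns approximately (.84731, .59869).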
5.2.1. Biases of Estimates

The means a/ of r1 and r2 and their corresponding biases are illustrated by the results for model 3.5, m = 1, and are given in Table 5.5. It is clearly shown that the means of r1 agree very closely with the parameter P1, with biases ranging from -.073 to .058. Those of r2 range from -.696 to -.026, not counting population 15, which is based on thirty real values only. It is further observed in Table 5.5 that the relatively larger absolute values of the biases of either r1 or r2 occur where |P1 - P2| = .2, the smallest difference of these two parameters considered in the investigation.

It is interesting to note that in population 9, where both sample sizes 9 and 12 were used, the means of the estimates obtained from the smaller samples are slightly better than those of the larger samples.

a/ Based on real values of estimates.
Table 5.5. Empirical means and biases of r1 and r2 in model 3.5, m = 1 a/, y_{x+2} = a + c y_{x+1} + d y_x + e'_x

Population  Number b/ of real    Parameters    Means of          Biases of
No.         values of estimates  (P1, P2)     r1       r2       r1       r2

 1          100                  (.5, .1)    .508    -.423     .008    -.523
 2          100                  (.7, .1)    .706    -.381     .006    -.481
 3          100                  (.9, .1)    .906    -.267     .006    -.367
 4           98                  (.4, .2)    .366    -.402    -.034    -.602
 5          100                  (.6, .2)    .580    -.265    -.020    -.465
 6          100                  (.8, .2)    .789    -.018    -.011    -.218
 7           98                  (.5, .3)    .558    -.237     .058    -.537
 8          100                  (.7, .3)    .719     .137     .019    -.163
 9          100 c/               (.9, .3)    .905c/   .256c/   .005    -.044
            100                  (.9, .3)    .907     .243     .007    -.057
10          100                  (.6, .4)    .560    -.296    -.040    -.696
11          100                  (.8, .4)    .772     .220    -.028    -.180
12           93                  (.7, .5)    .770     .184     .070    -.316
13          100                  (.9, .5)    .904     .474     .004    -.026
14          100                  (.8, .6)    .727     .160    -.073    -.440
15           30                  (.9, .7)    .897     .684    -.003    -.016

a/ c and d in the model are estimates of γ1 and δ1, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
c/ Based on samples of size nine.
The complete results for all the models are given in Appendix 8.3, where it is shown that the estimates r1 are very close to P1, considering real values only, while those of P2 are generally negatively biased.

The presence of imaginary estimates in using m = 2 in both models suggests that this value of m is not appropriate under some situations. There are fewer imaginary values for m = 3 than for m = 2. The use of m = 1 in model (3.5) resulted in two imaginary values in each of populations 4 and 7, seven values in population 12, and seventy in population 15, where in all these populations |P1 - P2| = .2. It may be worthwhile to note that in these populations where the difference between P1 and P2 is .2, the greater number of imaginary values occurred in populations 12 and 15, where both P1 and P2 are relatively large compared to populations 4 and 7, where only two samples each resulted in imaginary values.
5.2.2. Variances of Estimates

The estimate r1 in (3.5, m = 1) has variances ranging from .0000358 to .00598, while those of r2 are much larger, ranging from .00114 to .0725. Complete results for all models are given in Appendix 8.3.

Comparison of variances between models for the same or different values of m is not very meaningful because of the existence of imaginary values. However, for those populations where all hundred samples yielded real values, the smaller variances for both r1 and r2 are observed where m is smaller, which means that the difference in the number of equations involving the observations available in each model plays an important role. However, in population 14 the variance of r1 is lower for model 3.6, m = 1 than for model 3.5, m = 1, although the former represents nine observational equations while the latter has ten such equations. This discrepancy may be due to sampling variations.
6. SUMMARY AND CONCLUSIONS

The estimation of the exponentials of the double exponential model,

y_x = α + β_1 P_1^x + β_2 P_2^x + ε_x,

was considered for the case where the independent variable x is evenly spaced and conveniently coded as 0, 1, ..., n-1 and the random error is N(0, 1).

A transformation of the model was proposed and it provided some particular cases of difference equations, namely model (3.5),

y_{x+2m} = a_m + γ_m y_{x+m} + δ_m y_x + e'_x,   where x = 0, 1, ..., n - 2m - 1.

The corresponding lag-regression form, model (3.6), is

y_{x+2m+1} - y_{x+2m} = γ_m(y_{x+m+1} - y_{x+m}) + δ_m(y_{x+1} - y_x) + e*_x,   where x = 0, 1, ..., n - 2m - 2.

Both equations could be considered as linear regression models.
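For a noise-free double exponential the coefficients of model (3.5) are γ_m = P1^m + P2^m and δ_m = -(P1 P2)^m (compare the true values listed in Appendix 8.2), and the difference equation holds exactly with constant term a_m = α(1 - P1^m)(1 - P2^m). A quick numerical check of this identity, with illustrative parameter values of our own choosing:

```python
def double_exp(x, alpha, b1, p1, b2, p2):
    # y_x = alpha + b1*p1**x + b2*p2**x, the error-free double exponential
    return alpha + b1 * p1 ** x + b2 * p2 ** x

alpha, b1, p1, b2, p2 = 5.0, 2.0, 0.8, 1.0, 0.5   # illustrative values only
for m in (1, 2, 3):
    g = p1 ** m + p2 ** m                          # gamma_m
    dlt = -(p1 * p2) ** m                          # delta_m
    a_m = alpha * (1 - p1 ** m) * (1 - p2 ** m)    # implied constant term
    for x in range(6):
        lhs = double_exp(x + 2 * m, alpha, b1, p1, b2, p2)
        rhs = (a_m + g * double_exp(x + m, alpha, b1, p1, b2, p2)
                   + dlt * double_exp(x, alpha, b1, p1, b2, p2))
        assert abs(lhs - rhs) < 1e-12              # identity holds exactly
```

The check passing for every m and x is what justifies estimating γ_m and δ_m by regressing y_{x+2m} on y_{x+m} and y_x when noise is present.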
Least squares estimation provided a very convenient method of obtaining estimates of the coefficients in (3.5) and (3.6). However, since the usual assumptions of uncorrelated errors, non-correlation between errors and independent variables, and non-randomness of the independent variables are not satisfied in these regression models, the procedure does not generally yield best linear unbiased estimates. It then became important to know the sampling behavior of the estimates.

A sampling investigation was undertaken, and this consisted of generating samples from artificial populations corresponding to fifteen sets of chosen parameters. In one of the populations smaller samples of size nine were obtained from the larger samples by deleting the last three observations; hence, two sets of one hundred samples were available for estimation from this population.
Three particular cases of each of models (3.5) and (3.6), corresponding to m = 1, 2 and 3, were used for estimating the coefficients γ_m and δ_m, from which estimates of P1 and P2 were obtained. The properties of the estimates c, d, r1 and r2 which were of particular interest are the biases and variances, which provided means for comparing the models for the same or different values of m, whichever was meaningful.
Empirical and approximate biases of the estimates c and d of the coefficients of the difference equations were obtained. The approximate bias of each estimate was computed as the difference between an approximate expected value and the corresponding parameter. The approximate expected value of an estimate was taken from the first term of the Taylor's expansion of the said estimate, which is a ratio of two determinants, about the expected values of the elements of the determinants. The approximate biases were attributed to the correlations existing between the errors and the independent variables. The empirical biases of c and d agreed very closely with the approximate biases. This close agreement afforded a means of indicating implicitly how both the coefficients and the other parameters could affect each of the empirical biases of c and d. The biases were all negative for c and positive for d, except in two populations.
The sampling variances of c and d were generally smaller for those obtained in model (3.5) than in (3.6) for a given m. Such difference could be attributed to the greater number of equations represented by (3.5) than by (3.6). Regarding both biases and variances, the differences between the two models were more pronounced for m = 1 than for the other values of m considered.
The estimation of the coefficients of the difference equations was an intermediate step to obtaining the desired quantities, the estimates of the exponentials P1 and P2. These were computed from the estimates c and d. In so doing, the occurrence of values of the r's outside the segment (0, 1), the range over which the P's are defined, complicated the interpretation of the sampling properties of the estimates. This included the existence of imaginary values, non-positive real values and values equal to or greater than 1.0.
The empirical results showed that the real-valued estimates of P1 in all populations considered were very close to the parameter. The estimates of P2 were generally negatively biased. Of special interest were the results for model 3.5, m = 1, where imaginary estimates were obtained only where the absolute difference |P1 - P2| = .2, the smallest difference of the exponentials considered in the sampling study. The use of m = 2 resulted in many imaginary values, while m = 3 was generally better than m = 2 in the sense that it yielded fewer imaginary values.
In the population where both samples of sizes nine and twelve were used, the empirical biases of the estimates r1 and r2 from the smaller samples were slightly less than those from the larger samples. The variance of r1 was smaller for the larger samples, but that of r2 was smaller for the smaller samples.

As a result of this study it could be concluded that these difference equations, when used as models, could not be applied to all situations. However, the use of model 3.5, m = 1 may be recommended for estimation of P1 and P2, provided that in cases where negative, large or imaginary estimates are obtained, such occurrence of an undesirable value of the estimate be taken as an indication that the parameter being estimated could possibly be taken as zero. In a situation like this, it may be worthwhile to consider using one of the other values of m if the sample size warrants its use, or to fit a single exponential to the sample. The fitting of a single exponential curve when such situations arise may be encouraged, since the double exponential, when one of the P's approaches zero, approaches the value of the other exponential, or approaches one.
7. LIST OF REFERENCES

Cornell, R. G. 1962. A method for fitting linear combinations of exponentials. Biometrics 18:104-113.

Finney, D. J. 1958. The efficiencies of alternative estimators for an asymptotic regression equation. Biometrika 45:370-388.

Hartley, H. O. 1948. The estimation of non-linear parameters by 'internal least squares.' Biometrika 35:32-45.

Hartley, H. O. 1961. Modified Gauss-Newton method for fitting of non-linear regression functions by least squares. Technometrics 3:269-280.

Hiorns, R. W. 1962. A further note on the extension of Stevens' table for asymptotic regression. Biometrics 18:123.

Linhart, H. 1960. Tables for W. L. Stevens' asymptotic regression. Biometrics 16:125.

Lipton, S. 1961. On the extension of Stevens' table for asymptotic regression. Biometrics 17:321.

Monroe, R. J. 1949. On the use of non-linear systems in the estimation of nutritional requirements of animals. Unpublished Ph.D. thesis, Department of Statistics, North Carolina State College, Raleigh.

Morrison, R. D. 1956. Some studies on the estimates of the exponents in models containing one and two exponentials. Unpublished Ph.D. thesis, Department of Statistics, North Carolina State College, Raleigh.

Patterson, H. D. 1956. A simple method for fitting an asymptotic regression curve. Biometrics 12:323-329.

Patterson, H. D. 1958. The use of autoregression in fitting an exponential curve. Biometrika 45:389-400.

Patterson, H. D. and Lipton, S. 1959. An investigation of Hartley's method for fitting an exponential curve. Biometrika 46:281-292.

Stevens, W. L. 1951. Asymptotic regression. Biometrics 7:247-267.

Whittaker, E. and Robinson, G. 1944. The calculus of observations. Blackie and Sons Limited, London and Glasgow.
8. APPENDICES

8.1. Derivation of Model 3.5, m = 1 from a Second-Order Differential Equation

Let the double exponential model 1.1 be written as

Y_x = α + β_1 e^{-k_1 x} + β_2 e^{-k_2 x},   k_i > 0, i = 1, 2.   (8.1)
Consider the first derivative of Y_x with respect to x:

Y'_x = -k_1 β_1 e^{-k_1 x} - k_2 β_2 e^{-k_2 x}.   (8.2)

Eliminating e^{-k_2 x} from (8.1) and (8.2) gives the result

Y'_x = k_2 α - k_2 Y_x + (k_2 - k_1) β_1 e^{-k_1 x}.   (8.3)

The second derivative is given by

Y''_x = -k_2 Y'_x - k_1 (k_2 - k_1) β_1 e^{-k_1 x}.   (8.4)

Eliminating e^{-k_1 x} from (8.3) and (8.4) results in a differential equation involving the first and second derivatives,

Y''_x = k_1 k_2 α - (k_1 + k_2) Y'_x - k_1 k_2 Y_x,   (8.5)

or simply denoted by

Y''_x = a + b Y'_x + c Y_x.   (8.6)
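The elimination can be checked numerically. With a = k1 k2 α, b = -(k1 + k2) and c = -k1 k2 (our reconstruction of the eliminated coefficients; the parameter values below are illustrative only), (8.6) holds identically:

```python
import math

alpha, b1, k1, b2, k2 = 2.0, 3.0, 0.7, 1.5, 0.2   # illustrative values only

def Y(x):    # the double exponential (8.1)
    return alpha + b1 * math.exp(-k1 * x) + b2 * math.exp(-k2 * x)

def Yp(x):   # first derivative (8.2)
    return -k1 * b1 * math.exp(-k1 * x) - k2 * b2 * math.exp(-k2 * x)

def Ypp(x):  # second derivative
    return k1 * k1 * b1 * math.exp(-k1 * x) + k2 * k2 * b2 * math.exp(-k2 * x)

a, b, c = k1 * k2 * alpha, -(k1 + k2), -k1 * k2
for x in (0.0, 1.3, 4.7, 10.0):
    # (8.6): Y'' = a + b*Y' + c*Y holds for every x
    assert abs(Ypp(x) - (a + b * Yp(x) + c * Y(x))) < 1e-9
```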
In order to use equation (8.6) for estimation, some reasonable approximations of Y'_x and Y''_x have to be found using the observed data. Different approximations will lead to different forms of (8.6). The simplest form to be considered here is where three consecutive y's, y_x, y_{x+1} and y_{x+2}, are used to form general linear functions to be substituted for the derivatives.

However, in substituting some linear functions for the derivatives, it is desirable to find the exact forms of the coefficients a, b and c in (8.6), not necessarily equal to the corresponding coefficients in (8.5), in order to maintain the equality in (8.6). Consider y_{x+2} + j y_{x+1} + p y_x and y_{x+1} + q y_x as approximations to Y''_x and Y'_x, respectively, chosen such that the coefficients a, b and c are simple forms. Equation (8.6) can thus be expressed by

y_{x+2} + j y_{x+1} + p y_x = a + b(y_{x+1} + q y_x) + c y_x,   (8.7)

where

a = α(1 - P_1)(1 - P_2),
b = (P_1 + P_2 + j),   (8.8)
and
c = -[P_1 P_2 + q(P_1 + P_2 + j) - p].
It can be shown that estimates obtained using (8.7) are equal to those obtained using model 3.5, m = 1. If model (8.7) is written in terms of the observed y's and the sum of squares formed, after simplification the sum of squares is given by

S [y_{x+2} - a - (b - j) y_{x+1} - (bq + c - p) y_x]^2 ,

which is the same expression obtained by using model 3.5, m = 1. This indicates that (8.7) reduces to model 3.5, m = 1.
8.2. Means, Biases and Variances of c and d
Appendix Table 8.2.1. Means, biases and variances of c and d in model (3.5, m = 1) a/, y_{x+2} = a + c y_{x+1} + d y_x + e'_x
True Values
Means of
Popu- of
Coeffi1atioD cients
No.
c
d
(Yl' 81 )
.21616
1 ( .6, -.05)
.08557
2 ( .8, - .07) ·32575
.26939
.24450
3 (1.0, - .09) .63652
4 ( .6, -.08) -.021426 .14025
.15811
5 ( .8, -.12) .30447
.023613
6 (1.0, - .16) ·75752
(
.8,
-.15)
.14319
·30711
7
8 (1.0, - .21) .85658 -.095817
(1.16192~1 -.22 94dJ
9 (1.2, - .27) (1.15032 - .21937
10 (1.0, - .24) .25884
.16734
11 (1.2, - .32) .99187 -.17084
12 (1.2, - .35) ·97848 -.15864
13 (1.4, - .45) 1.37883 -.42718
14 (1.4, -.48) .88628 -.11894
15 (1.6, - .63) 1.51009 -.58543
Biases of c
Empirical
-·51440
- .47425
- .36348
- .62142
- .49553
- .24248
- .49289
-.14342
- .038084£1
- .049679
-.74116
-.20813
- .22152
-.021169
- ·51372
-.089914
APproxij
mate :£
-.46944
-.46715
-.39897
-.54780
- .48265
- .27573
-.48229
-.15950
Biases of d
Approxi-"
mate :£1
.24314
.33429
.36664
.19407
.27082
.20900
.28726
.12686
Variances of
d
.017834
.030271
.057968
.0090878
.018622
.021399
.021147
.013767
.040600~.I
.0067422~1 .0079504£1
-.053333 .050630 .05443 .0060475 .0062569
.40421 .079890
.024021
-.73735 .40734
.010164
-.24340 .14916
.17450 .019301
.021966
-.22352 .19136
.19431 .030405
-.028470 .028239 .032110 .0022925 .0030918
.38961 .071859
.035588
- .55491 .36106
-.083260 .044570 .041090 .0050542 .0011095
Empirical
.26616
.33939
.33450
.22025
.27811
.18361
.29319
.11418
c
.065525
.059146
.069638
.071502
.059209
.036154
.058666
.022699
a/ c and d in the model are estimates of γ1 and δ1, respectively.
b/ Obtained by using (3.25).
c/ Based on samples consisting of the first nine observations of original samples of size twelve.
Appendix Table 8.2.2.
PopuTrue Values
1ation of Coefficients
No.
("1' 81 )
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
( .6,
( .8,
(1.0,
( .6,
( .8,
(1.0,
( .8,
(1..0,
(1..2,
(1.0,
(1.2,
(1.2,
(1.4,
(1.4,
(1.6,
-.05)
- .07)
- .09)
- .08)
-.12)
- .16)
- .15)
- .21)
- .27)
- .24)
- .32)
-.35)
- .45)
- .48)
- .63)
Means, biases and variances of c and d in model (3.6, m = 1) a/, y_{x+3} - y_{x+2} = c(y_{x+2} - y_{x+1}) + d(y_{x+1} - y_x) + e*_x
Biases of c
Means of
c
d
APprOXij
Empirical APprOXij
mate E. Empirical mate !2.
-.66889 -.71035
-.068893
.29917
.41010
.14221
- .65779 - .68348
.42270
.44915
-·57730 -.61655
.21071
-.84741 - .86433
- .24741
.018849
-.78115 -.80959
·30049
.24092
- ..56255 -.61402
.43745
-.72741 -·75520
.30996
.072587
.087511 -.34481 - .37919
.65519
-.14070 - .16391
1.05930 - .13247
.0014982 .29707
- ·99850 -1.06940
-.66264 - .73874
.12272
.53736
.087121 - .47296 -.53221
·72703
- .16503 -.17688
1.23497 - .32289
.24950 -1.06160 -1.18590
.33840
.82200 - .16587
-.77800 - .84123
~c and d in the model are estimates of "1 and
E.IObtained by
Biases of d
8 ,
1
.34917
.48010
·53915
.29071
.31249
.40092
.45996
.29751
.13752
.53707
.44272
.43712
.12711
·72950
.46413
.37415
.49935
.57561
.29591
.43499
.43756
.47892
.32823
.15879
·57292
.49568
.49633
.13828
.81399
.50542 .
Variances of
c
d
.060365
.051474
.052290
.056849
.054991
.054488
.054270
.035225
.013300
.065521
.053280
.059003
.016873
.089717
.067263
.018570
.027594
.046048
.0072919
.0163.59
.028958
.022381
.025076
.015272
.019042
.024325
.051600
.018344
.042661
.024426
respectively.
using (3.25).
Appendix Table 8.2.3.
Means, biases and variances of c and d in model (3.5, m = 2) a/, y_{x+4} = a + c y_{x+2} + d y_x + e'_x
Means of
PopuTrue Values
lation of Coefficients
No.
(72 , °2)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
( .26,
( .50,
( .82,
( .20,
( .40,
( .68,
( .34,
( .58,
( ·90,
( .52,
( .80,
( .74,
(1.06,
(1.00,
(1.30,
!lc
- .0025)
- .0049)
- .0081)
-.0064)
- .0144)
- .0256)
- .0225)
- .0441)
- .0729)
-.0576)
-.1024)
- .1225 )
- .2025)
-.2304)
- .3969)
c
and d in model (3.5, m = 2)~/
Biases of' c
c
d
-.0662ge
.13921
.53534
-.11095
.097499
·54058
.078781
.55114
.88746
.084006
.69519
.67040
1.05287
.74457
1.2500
.082568
.17644
.22912
.034168
.084725
.056995
.063011
-.025907
- .061742
.072496
-.046667
-.070977
-.19688
-.10326
-·38572
Biases of d
Empirical APproxi
mate :Q.j
Empirical APprOXij
mate :2.
- .32630
-.36079
-.28466
- .31096
-·30250
- .13942
-.26121
- .028858
- .012348
- .43599
-.10481
-.069601
-.0071247
- .25543
-.049974
.085068
.18134
.23722
.04058
.099124
.082595
.085511
.018193
.011158
.13010
.055733
.051522
.0056211
.12714
.011181
and d in the model are estimates of' 72
EJObtained by using
e
- .24531
- .34056
-·31096
-.19569
- .28030
-.17733
- .23786
- .079890
- .021940
-.40544
- .12410
-.092070
- .0081300
- .31314
- .058160
and 02'
.064129
.17168
.25980
.025884
.091672
.10464
.078461
.047794
Variances of'
c
.12630
.10585
.081980
.13239
.10787
.055471
.10752
.022522
.0066041
~022029
.12431
.17587
.065485 .031534
.066607 .030600
.0099600 .0024710
.15544 .057989
.012540 .012629
d
.0084836
.026660
.057438
.0028694
.011771
.019547
.011563
.0077118
.0070899
.016166
.0088732
.015925
.0050618
.014408
.00061407
respectively.
(3.25).
Appendix Table 8.2.4.
PopuTrue Values
lation of Coefficients
No.
('Y 2' 52)
1 ( .26,
2 ( .50,
3 ( .82,
4 ( .20,
5 ( .40,
6 ( .68,
7 ( .34,
8 ( .58,
9 ( ·90,
10 ( ·52,
11 ( .80,
12 ( .74,
13 . (1. 06,
14 (1.00,
15 (1.30,
!/c
and
- .0025)
- .0049)
- .0081)
-.0064)
-.0144)
- .0256)
- .0225)
- .0444)
- .0729)
- .0576)
-.1024)
- .1225)
-.2025)
- .2304)
- .3969)
d
Means, biases and variances of c and d in model (3.6, m = 2) a/, y_{x+5} - y_{x+4} = c(y_{x+3} - y_{x+2}) + d(y_{x+1} - y_x) + e*_x
Biases of c
Means of
c
-.13932
.16514
.51854
-.023314
.091551
.46611
.096035
·52123
.88543
.050431
·56116
.63564
1.02916
.47548
1.05706
d
.091495
.16774
.24683
.018904
.080340
.090924
.065817
- .001468~
-.064446
.086206
.013356
-.042114
-.18848
.020490
- .31744
Biases of d
ApproxiEmpirical APprOXij Empirical
mate 12.
mate EJ
-·39932
- ·33486
-.30146
-.20233
- .30845
-.21389
- .24396
-.058769
- .014572
-.46957
-.23884
- .10436
- .030837
- .52452
- .24293
in the model are estimates of
E./Obtained by
e
'Y2
-.23942
-.35360
-.33586
- .19686
- ·32120
-.25775
-.26168
-.10974
- .03677
- .45999
-.25252
-.14092
-.03228
-·57004
-.25673
and
51'
·093995
.17264
.25493
.025304
.094740
.11652
.088317
.042632
.0084536
.14381
.11576
.080386
.014024
.25089
.079461
.063873
.18205
.28395
.024992
.099628
.14176
.093987
.071389
.030732
.13624
.12192
.10803
.01752
.27381
.08274
= 2 f!:./,
Variance~of
c
.16336
.13351
.088114
.20461
.14569
.059290
.14302
.033330
.0075096
.12276
.044225
.036565
.0032722
.072793
.023024
d
.015344
.034035
.061678
.0045874
.015444
.020296
.016954
.012863
.0082568
.011562
.011631
.020938
.0060503
.017226
.0025104
respectively.
using (3.25).
Appendix Table 8.2.5.
Means, biases and variances of c and d in model (3.5, m = 3) a/, y_{x+6} = a + c y_{x+3} + d y_x + e'_x
PopuTrue Values
lation of Coefficients
No.
(1'3' 53)
c
d
( .126,
( .344,
( .730,
( .072,
( .224,
( .520,
( .152,
( .370,
( .756,
( .280,
( .576,
( .468,
( .85 4,
( .728,
(1.072,
-.0040079
.085143
·52023
- .035423
.063790
.46030
.060729
.34002
.74280
- .031335
·55133
.41541
.85008
·51513
1.03709
.016462
.091686
.15888
.0046579
.029208
.023692
.012566
.0010383
-.0055163
.038669
- .021505
-.010028
-.094048
- .034668
- .24854
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
-.000125)
- .000343)
-.000729)
- .000512)
-.001728)
-.004096)
-.003375)
-.00926 )
-.019683)
-.013824)
-.032768)
-.042875)
-.091125)
-.110592)
-.250047)
Means of
:::.J Obtained
c
and d in model (3.5, m = 3)!./,
Biases of c
Biases of d
ApproxiEmpirical APprOXij
mate J2. Empirical mate J2./
- .13001
-.25886
-.20977
-.10742
-.16021
-.059696
-.091271
-.029981
- .013199
- ·31133
- .024670
-.052590
-.003915e
- .21287
- .034906
!.Ie and d in the model are estimates of 1'3
b /
e
Variances of
c
- .12289
.016587 .01614 .29495
- .27575
·092029 .097989 .26110
.15961 .22905 .12586
-.29980
-.072081 .0051699 .0036556 .24797
.030936 .036535 .29988
- .18994
.027788 .082211 .096191
-.17930
-.13025
.015941 .023187 .30245
.010298 .032131 .053525
-.07153
- .015630 .014167 .015924 .0061837
-.24690
.052493 .042790 ·32322
.011263 .048182 .091162
- .12379
-.067620 .032847 .041181 .040091
-.0039600 -.0029226 .004235 .0014037
-.30065
.075924 .10557 .23911
-.095500 .0015049 .0064970 .061268
and 53'
d
.0052256
.033182
.074637
.0011940
.011194
.020251
.0099170
.011093
.0076916
.0096635
.013607
.015974
.0063832
.029334
.00053428
respectively.
.
by using (3.25).
is
Appendix Table 8.2.6. Means, biases and variances of c and d in model (3.6, m = 3) a/, y_{x+7} - y_{x+6} = c(y_{x+4} - y_{x+3}) + d(y_{x+1} - y_x) + e*_x
True Values
Popu1ation .of Coefficients
No.
("3' °3)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
( .126,
( .344,
( .730,
( .072,
( .224,
( .520,
( .152,
( .370,
( .756,
( .280,
( .576,
( .468,
( .854,
( .728,
(1.072,
-.000125)
-.000343)
-.000729)
- .000512)
-.001728)
-.004096)
-.003375)
-.00926)
-.019683)
-.013824)
-.032768)
-.042875)
-.091125)
-.110592)
-.250047)
Means of
c
d
.076391
.11805
.48992
.056020
.088748
.38798
.090655
.29314
·73371
.027367
.45237
·38875
.83642
.35656
·92095
.0092109
.082746
.18594
-.0040679
.021389
.050985
.011712
.024845
-.000032369
.026585
.011695
.0057754
-.089029
.017669
- .22820
Biases of c
Biases of d
APprOXij
Empirical APprOXij
mate 12. Empirical mate :!2.
-.049609
-.22594
- .24008
-.015980
- .13525
- .13202
- .061345
-.076861
- .022293
-.25263
- .12363
- .079251
- .017575
-·37144
-.15104
-.12305
- ·31038
- .30322
-.071767
-.19886
- .21425
- .13375
-.079859
-.022633
-.25924
-.17883
-.081590
-.016960
-.40094
- .19247
!/c and d in the model are estimates of "3 and 8 ,
3
'2../Obtained by using (3.25).
.0093359
.083089
.18667
-.0035559
.023117
.055081
.015087
.034106
.019651
.040409
.044463
.048650
.0020960
.12826
.021850
.016472
.11221
.23360
.0034923
.036514
.092724
.025765
.037908
.016786
.043414
.064474
.050126
.0038380
.13674
.025767
Variances of
c
d
.43006
.30062
.14101
.41281
·35423
.10367
.35236
.068196
.0056613
.36055
.085827
.040306
.0018545
.37364
.053499
.0077530
.039029
.085723
.0018169
.012540
.020890
.013196
.014505
.0071494
.010728
.012609
.016260
.0057109
.045195
.00057479
respectively.
8.3. Empirical Means and Variances of r1 and r2

Appendix Table 8.3.1. Empirical means and variances of r1 and r2 in model (3.5, m = 1) a/, y_{x+2} = a + c y_{x+1} + d y_x + e'_x
Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1          100                  (.5, .1)    .508    -.423     .000165    .0677
 2          100                  (.7, .1)    .706    -.381     .0000368   .0603
 3          100                  (.9, .1)    .906    -.267     .0000477   .0712
 4           98                  (.4, .2)    .366    -.402     .000691    .0584
 5          100                  (.6, .2)    .580    -.265     .000107    .0618
 6          100                  (.8, .2)    .789    -.018     .0000648   .0348
 7           98                  (.5, .3)    .558    -.237     .000501    .0725
 8          100                  (.7, .3)    .719     .137     .000398    .0281
 9          100 c/               (.9, .3)    .905c/   .256c/   .00102c/   .00114c/
            100                  (.9, .3)    .907     .243     .000249    .00823
10          100                  (.6, .4)    .560    -.296     .000118    .0798
11          100                  (.8, .4)    .772     .220     .000206    .0158
12           93                  (.7, .5)    .770     .184     .000837    .0311
13          100                  (.9, .5)    .904     .474     .00140     .00612
14          100                  (.8, .6)    .727     .160     .000238    .0658
15           30                  (.9, .7)    .897     .684     .00598     .00161

a/ c and d in the model are estimates of γ1 and δ1, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
c/ Based on samples of size nine.
Appendix Table 8.3.2. Empirical means and variances of r1 and r2 in model (3.6, m = 1) a/, y_{x+3} - y_{x+2} = c(y_{x+2} - y_{x+1}) + d(y_{x+1} - y_x) + e*_x

Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1          100                  (.5, .1)    .509    -.587     .00170     .0749
 2          100                  (.7, .1)    .715    -.573     .000110    .0529
 3          100                  (.9, .1)    .913    -.491     .000111    .0542
 4          100                  (.4, .2)    .351    -.571     .000783    .0888
 5          100                  (.6, .2)    .558    -.540     .000168    .0533
 6          100                  (.8, .2)    .757    -.314     .000192    .0552
 7          100                  (.5, .3)    .591    -.509     .000715    .0705
 8          100                  (.7, .3)    .764    -.107     .000712    .0436
 9          100                  (.9, .3)    .909     .150     .000967    .0191
10          100                  (.6, .4)    .546    -.545     .000144    .0642
11          100                  (.8, .4)    .713    -.167     .000329    .0513
12           98                  (.7, .5)    .825    -.0915    .00143     .0717
13           86                  (.9, .5)    .868     .348     .00393     .0225
14          100                  (.8, .6)    .697    -.356     .0000746   .0899
15           54                  (.9, .7)    .551     .101     .00453     .0363

a/ c and d in the model are estimates of γ1 and δ1, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
Appendix Table 8.3.3. Empirical means and variances of r1 and r2 in model (3.5, m = 2) a/, y_{x+4} = a + c y_{x+2} + d y_x + e'_x

Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1           13                  (.5, .1)    .549     .380     .00888     .0193
 2           14                  (.7, .1)    .705     .394     .00172     .0378
 3           21                  (.9, .1)    .894     .349     .000129    .0297
 4           15                  (.4, .2)    .607     .316     .0119      .00876
 5           22                  (.6, .2)    .644     .374     .00758     .0223
 6           32                  (.8, .2)    .809     .380     .000148    .0198
 7           13                  (.5, .3)    .551     .373     .0140      .0215
 8           47                  (.7, .3)    .696     .335     .000376    .0160
 9           80                  (.9, .3)    .894     .318     .000320    .0161
10           25                  (.6, .4)    .644     .372     .0148      .0209
11           68                  (.8, .4)    .801     .331     .00106     .0236
12           38                  (.7, .5)    .720     .387     .00113     .0224
13           92                  (.9, .5)    .901     .472     .00105     .00987
14           81                  (.8, .6)    .771     .452     .00267     .0212
15           49                  (.9, .7)    .932     .678     .00575     .00198

a/ c and d in the model are estimates of γ2 and δ2, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
Appendix Table 8.3.4. Empirical means and variances of r1 and r2 in model (3.6, m = 2) a/, y_{x+5} - y_{x+4} = c(y_{x+3} - y_{x+2}) + d(y_{x+1} - y_x) + e*_x

Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1           19                  (.5, .1)    .643     .364     .0160      .0310
 2           16                  (.7, .1)    .703     .416     .00138     .0308
 3           15                  (.9, .1)    .895     .372     .000312    .0331
 4           18                  (.4, .2)    .702     .349     .0256      .00977
 5           24                  (.6, .2)    .636     .378     .00751     .0204
 6           25                  (.8, .2)    .797     .349     .000327    .0208
 7           11                  (.5, .3)    .623     .337     .0129      .0216
 8           35                  (.7, .3)    .698     .366     .000577    .0178
 9           78                  (.9, .3)    .888     .334     .000510    .0187
10           20                  (.6, .4)    .643     .373     .0108      .0147
11           41                  (.8, .4)    .789     .343     .00115     .0229
12           37                  (.7, .5)    .731     .356     .000813    .0168
13           92                  (.9, .5)    .885     .475     .00202     .0142
14           40                  (.8, .6)    .741     .389     .00118     .0323
15           21                  (.9, .7)    .886     .670     .00335     .00185

a/ c and d in the model are estimates of γ2 and δ2, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
Appendix Table 8.3.5. Empirical means and variances of r1 and r2 in model (3.5, m = 3) a/, y_{x+6} = a + c y_{x+3} + d y_x + e'_x

Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1           89                  (.5, .1)    .584    -.234     .0217      .398
 2           89                  (.7, .1)    .712    -.398     .00139     .375
 3           97                  (.9, .1)    .900    -.317     .000200    .335
 4           83                  (.4, .2)    .527    -.303     .0450      .320
 5           93                  (.6, .2)    .642    -.264     .0128      .387
 6          100                  (.8, .2)    .807    -.0912    .00160     .314
 7           77                  (.5, .3)    .622    -.333     .0200      .381
 8           78                  (.7, .3)    .710    -.314     .000685    .170
 9           99                  (.9, .3)    .899     .0528    .000714    .190
10           95                  (.6, .4)    .623    -.401     .0172      .345
11          100                  (.8, .4)    .813     .0274    .00339     .276
12           72                  (.7, .5)    .752    -.281     .00112     .192
13           86                  (.9, .5)    .902     .364     .00158     .101
14          100                  (.8, .6)    .788     .316     .00645     .0821
15           53                  (.9, .7)    .960     .664     .00844     .00322

a/ c and d in the model are estimates of γ3 and δ3, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
Appendix Table 8.3.6. Empirical means and variances of r1 and r2 in model (3.6, m = 3) a/, y_{x+7} - y_{x+6} = c(y_{x+4} - y_{x+3}) + d(y_{x+1} - y_x) + e*_x

Population  Number b/ of real    Parameters    Means of          Variances of
No.         values of estimates  (P1, P2)     r1       r2       r1         r2

 1           92                  (.5, .1)    .630    -.228     .0352      .384
 2           91                  (.7, .1)    .728    -.374     .00523     .386
 3           95                  (.9, .1)    .900    -.415     .00172     .277
 4           78                  (.4, .2)    .564    -.349     .0876      .311
 5           83                  (.6, .2)    .654    -.308     .0215      .406
 6          100                  (.8, .2)    .797    -.178     .00102     .317
 7           83                  (.5, .3)    .646    -.307     .0236      .391
 8           76                  (.7, .3)    .717    -.363     .000748    .211
 9          100                  (.9, .3)    .898     .0578    .000611    .185
10           93                  (.6, .4)    .630    -.299     .0174      .385
11          100                  (.8, .4)    .794    -.106     .00237     .288
12           74                  (.7, .5)    .752    -.304     .00142     .199
13           87                  (.9, .5)    .899     .372     .00138     .0888
14          100                  (.8, .6)    .765    -.0272    .00471     .369
15           35                  (.9, .7)    .952     .655     .00703     .00309

a/ c and d in the model are estimates of γ3 and δ3, respectively.
b/ Based on one hundred samples; a set of estimates (r1, r2) is not listed if at least one of the r's is imaginary.
8.4. Numerical Example Using Linear Difference Equations for Estimating p in a Single Exponential Curve

The following computations show how the linear difference equations discussed in Chapter 3 can be adapted as models for estimating p in a single exponential curve. The least squares procedure is performed in some detail for each model.

The data are temperature readings (°F) at six consecutive half-minute intervals, given as an example by Stevens (1951) to illustrate his iterative method of obtaining the least squares estimates.
x      0      1      2      3      4      5
y_x   57.5   45.7   38.7   35.3   33.1   32.2

(i) Model 3.5, m = 1:  y_{x+1} = a(1 - p) + p y_x + e'_x,  x = 0, 1, ..., 4.

Sy_x = 210.3          Sy_x^2 = 9,234.13
Sy_{x+1} = 185.0      Sy_x y_{x+1} = 7,996.70

The least squares estimate of p is given by

r = [Sy_x y_{x+1} - Sy_x Sy_{x+1}/5] / [Sy_x^2 - (Sy_x)^2/5] = .55437
(ii) Model 3.5, m = 2:  y_{x+2} = a(1 - p^2) + p^2 y_x + e'_x,  x = 0, 1, ..., 3.

Sy_x = 177.2          Sy_{x+2} = 139.3

r^2 = [Sy_x y_{x+2} - Sy_x Sy_{x+2}/4] / [Sy_x^2 - (Sy_x)^2/4] = .2949126698

r = .54306

(iii) Model 3.5, m = 3:  y_{x+3} = a(1 - p^3) + p^3 y_x + e'_x,  x = 0, 1, 2.

Sy_x = 141.9          Sy_x^2 = 6,892.43
Sy_{x+3} = 100.6      Sy_x y_{x+3} = 4,788.56

r^3 = [Sy_x y_{x+3} - Sy_x Sy_{x+3}/3] / [Sy_x^2 - (Sy_x)^2/3] = .1671466548

log r = (1/3) log(.1671466548) = 9.74103 - 10

r = .55085
(iv) Model 3.6, m = 1:  y_{x+2} - y_{x+1} = p(y_{x+1} - y_x) + e*_x,  x = 0, 1, ..., 3.

First differences y_{x+1} - y_x:  -11.8, -7.0, -3.4, -2.2, -0.9

S(y_{x+1} - y_x)^2 = 204.64

r = S(y_{x+1} - y_x)(y_{x+2} - y_{x+1}) / S(y_{x+1} - y_x)^2 = 115.86/204.64 = .56616
(v) Model 3.6, m = 2:  y_{x+3} - y_{x+2} = p^2(y_{x+1} - y_x) + e*_x,  x = 0, 1, 2.

r^2 = S(y_{x+1} - y_x)(y_{x+3} - y_{x+2}) / S(y_{x+1} - y_x)^2 = 58.58/199.80 = .2931931932

r = .54147
The different values of the estimates of p, including that given by Stevens (1951), are summarized as follows:

Models or Methods used             Values of Estimates
(i)   Model (3.5, m = 1)               .55437
(ii)  Model (3.5, m = 2)               .54306
(iii) Model (3.5, m = 3)               .55085
(iv)  Model (3.6, m = 1)               .56616
(v)   Model (3.6, m = 2)               .54147
(vi)  Stevens (one iteration)          .55187
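The computations above can be reproduced programmatically. A minimal sketch in Python (the function name is ours): for model (3.5) the slope of the least-squares regression of y_{x+m} on y_x estimates p^m, and the m-th root recovers p.

```python
def fit_single_exponential(y, m=1):
    """Estimate p in model (3.5) for a single exponential:
    y_{x+m} = a(1 - p**m) + (p**m)*y_x.  The least-squares slope of
    y_{x+m} on y_x estimates p**m; its m-th root estimates p."""
    k = len(y) - m                   # number of observational equations
    ylag = y[:k]                     # y_x
    ylead = y[m:]                    # y_{x+m}
    sx, sy = sum(ylag), sum(ylead)
    sxx = sum(v * v for v in ylag)
    sxy = sum(a * b for a, b in zip(ylag, ylead))
    slope = (sxy - sx * sy / k) / (sxx - sx * sx / k)
    return slope ** (1.0 / m)

temps = [57.5, 45.7, 38.7, 35.3, 33.1, 32.2]   # Stevens (1951) data
# m = 1, 2, 3 reproduce the values .55437, .54306 and .55085 above
```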
8.5. Numerical Example Using Linear Difference Equations for Estimating ρ1 and ρ2 in a Double Exponential

The next example shows how the six difference equations are used as models for the estimation of ρ1 and ρ2. In each case the estimation is by the least squares method.
The data are viscosities in centipoises of sucrose solutions containing 20% sucrose by weight at twelve successive five-degree (°C) intervals:

°C     y_x       °C     y_x
 0     3.804     30     1.504
 5     3.154     35     1.331
10     2.652     40     1.193
15     2.267     45     1.070
20     1.960     50     0.970
25     1.704     55     0.884
If these original data are fitted to a double exponential curve, a transformation from the five-degree intervals to x = 0, 1, ..., 11 will lead to estimates of ρ1^5 and ρ2^5. In the following discussion these powers will be denoted by ρ1* and ρ2* and their estimates simply by r1 and r2, respectively. The unknown coefficients a, c and d in the models are the corresponding estimates of the constants involved in the difference equations explained in Chapter 3.
(i) Model 3.5, m = 1:   y_{x+2} = a + c y_{x+1} + d y_x + e'_x,   x = 0, 1, ..., 9.

x     y_{x+2}     y_{x+1}     y_x
0     2.652       3.154       3.804
1     2.267       2.652       3.154
2     1.960       2.267       2.652
3     1.704       1.960       2.267
4     1.504       1.704       1.960
5     1.331       1.504       1.704
6     1.193       1.331       1.504
7     1.070       1.193       1.331
8     0.970       1.070       1.193
9     0.884       0.970       1.070

S y_x = 20.639                  S y_{x+1} = 17.805              S y_{x+2} = 15.535
S y_x^2 = 49.937467             S y_{x+1}^2 = 36.407951
S y_x y_{x+1} = 42.624401       S y_{x+1} y_{x+2} = 31.484065   S y_x y_{x+2} = 36.836610

The normal equations are expressed in matrix notation as follows:

[ 10          17.805000    20.639000 ] [a]   [ 15.535000 ]
[ 17.805000   36.407951    42.624401 ] [c] = [ 31.484065 ]
[ 20.639000   42.624401    49.937467 ] [d]   [ 36.836610 ]

The forward solution of the abbreviated Doolittle procedure gives d = -.507278, and back substitution gives

c = .812554 - 1.248720(-.507278) = 1.446002 .

The estimates of ρ1* and ρ2* are the roots of the quadratic equation z^2 - cz - d = 0, namely r1 = .84731 and r2 = .59869.
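The normal equations can also be solved in double precision as a check. The sketch below (Python; helper names are my own) rebuilds the sums from the viscosity data and applies Gaussian elimination. Because the original abbreviated Doolittle solution rounds its intermediate values in an ill-conditioned system, exact arithmetic reproduces r1 and r2 only to about three decimal places:

```python
y = [3.804, 3.154, 2.652, 2.267, 1.960, 1.704,
     1.504, 1.331, 1.193, 1.070, 0.970, 0.884]

# Model 3.5, m = 1: y_{x+2} = a + c*y_{x+1} + d*y_x, x = 0, ..., 9
y0, y1, y2 = y[:10], y[1:11], y[2:12]
n = 10

def S(u, v):
    """Sum of products of paired observations."""
    return sum(a * b for a, b in zip(u, v))

# Normal equations  M (a, c, d)' = rhs
M = [[n,        sum(y1),   sum(y0)],
     [sum(y1),  S(y1, y1), S(y0, y1)],
     [sum(y0),  S(y0, y1), S(y0, y0)]]
rhs = [sum(y2), S(y1, y2), S(y0, y2)]

# Gaussian elimination with back substitution (the pivots are nonzero here)
for i in range(3):
    for j in range(i + 1, 3):
        f = M[j][i] / M[i][i]
        M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
        rhs[j] -= f * rhs[i]
sol = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    sol[i] = (rhs[i] - sum(M[i][k] * sol[k] for k in range(i + 1, 3))) / M[i][i]
a, c, d = sol

# Roots of z^2 - c z - d = 0 estimate rho1* and rho2*
disc = (c * c + 4 * d) ** 0.5
r1, r2 = (c + disc) / 2, (c - disc) / 2
print(round(r1, 3), round(r2, 3))  # 0.847 0.597
```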
(ii) Model 3.5, m = 2:   y_{x+4} = a + c y_{x+2} + d y_x + e'_x,   x = 0, 1, ..., 7.

x     y_{x+4}     y_{x+2}     y_x
0     1.960       2.652       3.804
1     1.704       2.267       3.154
2     1.504       1.960       2.652
3     1.331       1.704       2.267
4     1.193       1.504       1.960
5     1.070       1.331       1.704
6     0.970       1.193       1.504
7     0.884       1.070       1.331

S y_x = 18.376                  S y_{x+2} = 13.681              S y_{x+4} = 10.616
S y_x^2 = 47.369318             S y_{x+2}^2 = 25.519335
S y_x y_{x+2} = 34.733520       S y_{x+2} y_{x+4} = 19.598284   S y_x y_{x+4} = 26.633285

The forward solution of the Doolittle procedure gives

c = 1.045992   and   d = -.234916 .

The roots of z^2 - cz - d = 0 are the estimates r1^2 = .719488 and r2^2 = .326504. Hence,

r1 = .84823   and   r2 = .57141.
(iii) Model 3.5, m = 3:   y_{x+6} = a + c y_{x+3} + d y_x + e'_x,   x = 0, 1, ..., 5.

x     y_{x+6}     y_{x+3}     y_x
0     1.504       2.267       3.804
1     1.331       1.960       3.154
2     1.193       1.704       2.652
3     1.070       1.504       2.267
4     0.970       1.331       1.960
5     0.884       1.193       1.704

S y_x = 15.541                  S y_{x+3} = 9.959
S y_x^2 = 43.335741             S y_{x+3}^2 = 17.341331
S y_x y_{x+3} = 27.375716       S y_x y_{x+6} = 18.916252

The solutions of the normal equations are

c = .883545   and   d = -.157954 .

The quadratic equation, z^2 - cz - d = 0, has roots r1^3 = .634668 and r2^3 = .248876. From these roots the desired estimates can be obtained:

r1 = .85937   and   r2 = .62901.
(iv) Model 3.6, m = 1:   y_{x+3} - y_{x+2} = c(y_{x+2} - y_{x+1}) + d(y_{x+1} - y_x) + e''_x,   x = 0, 1, ..., 8.

x     y_{x+3} - y_{x+2}     y_{x+2} - y_{x+1}     y_{x+1} - y_x
0     -.385                 -.502                 -.650
1     -.307                 -.385                 -.502
2     -.256                 -.307                 -.385
3     -.200                 -.256                 -.307
4     -.173                 -.200                 -.256
5     -.138                 -.173                 -.200
6     -.123                 -.138                 -.173
7     -.100                 -.123                 -.138
8     -.086                 -.100                 -.123

The normal equations,

.674116 c + .855305 d = .537605 ,
.855305 c + 1.086616 d = .681869 ,

have the solutions c = 1.0058849 and d = -.1642433. The roots of z^2 - cz - d = 0 are the estimates r1 = .80078 and r2 = .20510.
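Working with first differences removes the intercept, leaving a two-parameter regression through the origin that a 2x2 system solves directly. A sketch of the m = 1 computation (Python; variable names are my own):

```python
y = [3.804, 3.154, 2.652, 2.267, 1.960, 1.704,
     1.504, 1.331, 1.193, 1.070, 0.970, 0.884]
d1 = [b - a for a, b in zip(y, y[1:])]   # first differences d_x = y_{x+1} - y_x

# Regress d_{x+2} on d_{x+1} and d_x with no intercept, x = 0, ..., 8
u = d1[1:10]   # d_{x+1}, the c-regressor
v = d1[0:9]    # d_x,     the d-regressor
w = d1[2:11]   # d_{x+2}, the response

Suu = sum(a * a for a in u)              # .674116
Suv = sum(a * b for a, b in zip(u, v))   # .855305
Svv = sum(a * a for a in v)              # 1.086616
Suw = sum(a * b for a, b in zip(u, w))   # .537605
Svw = sum(a * b for a, b in zip(v, w))   # .681869

# Cramer's rule on the 2x2 normal equations
det = Suu * Svv - Suv ** 2
c = (Svv * Suw - Suv * Svw) / det        # 1.0058849
d = (Suu * Svw - Suv * Suw) / det        # -0.1642433

disc = (c * c + 4 * d) ** 0.5
r1, r2 = (c + disc) / 2, (c - disc) / 2
print(f"{r1:.5f} {r2:.5f}")              # approximately .80078 and .20510
```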
(v) Model 3.6, m = 2:   y_{x+5} - y_{x+4} = c(y_{x+3} - y_{x+2}) + d(y_{x+1} - y_x) + e''_x,   x = 0, 1, ..., 6.

x     y_{x+5} - y_{x+4}     y_{x+3} - y_{x+2}     y_{x+1} - y_x
0     -.256                 -.385                 -.650
1     -.200                 -.307                 -.502
2     -.173                 -.256                 -.385
3     -.138                 -.200                 -.307
4     -.123                 -.173                 -.256
5     -.100                 -.138                 -.200
6     -.086                 -.123                 -.173

S(y_{x+3} - y_{x+2})^2 = .412112
S(y_{x+1} - y_x)^2 = 1.052443

The values of c and d which satisfy the normal equations are

c = .949035   and   d = -.172784 .

The roots of the quadratic equation, z^2 - cz - d = 0, are r1^2 = .703390 and r2^2 = .245645. Hence, the estimates are r1 = .83868 and r2 = .49562.
(vi) Model 3.6, m = 3:   y_{x+7} - y_{x+6} = c(y_{x+4} - y_{x+3}) + d(y_{x+1} - y_x) + e''_x,   x = 0, 1, ..., 4.

x     y_{x+7} - y_{x+6}     y_{x+4} - y_{x+3}     y_{x+1} - y_x
0     -.173                 -.307                 -.650
1     -.138                 -.256                 -.502
2     -.123                 -.200                 -.385
3     -.100                 -.173                 -.307
4     -.086                 -.138                 -.256

S(y_{x+4} - y_{x+3})^2 = .248758
S(y_{x+1} - y_x)^2 = .982514

The normal equations have the solutions

c = .755239   and   d = -.0925322 ,

and the roots of z^2 - cz - d = 0 are r1^3 = .6013705 and r2^3 = .1538685. Hence,

r1 = .84408   and   r2 = .53586.
A summary of the different values of the estimates is presented in the following table:

Model Used                     r1         r2
(i)   Model 3.5, m = 1       .84731     .59869
(ii)  Model 3.5, m = 2       .84823     .57141
(iii) Model 3.5, m = 3       .85937     .62901
(iv)  Model 3.6, m = 1       .80078     .20510
(v)   Model 3.6, m = 2       .83868     .49562
(vi)  Model 3.6, m = 3       .84408     .53586
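Every row of the table comes from the same final step: solve z^2 - cz - d = 0 and, for lag m, take the m-th roots. A small helper (Python; the function name is my own), applied here to the fitted (c, d) pairs of models (i) and (vi) as a consistency check on the table:

```python
def roots_to_estimates(c, d, m):
    """Roots of z^2 - c z - d = 0 estimate rho1^m and rho2^m; return their m-th roots."""
    disc = (c * c + 4 * d) ** 0.5        # assumes real, distinct roots
    z1, z2 = (c + disc) / 2, (c - disc) / 2
    return z1 ** (1.0 / m), z2 ** (1.0 / m)

# Model 3.5, m = 1 coefficients fitted above
print(roots_to_estimates(1.446002, -0.507278, 1))   # approximately (.84731, .59869)
# Model 3.6, m = 3 coefficients fitted above
print(roots_to_estimates(0.755239, -0.0925322, 3))  # approximately (.84408, .53586)
```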
In addition to the above computations, another possible model in which the errors are uncorrelated can be used. This is not included in the sampling study but is a particular case of the general transformation indicated in Chapter 3. If model 3.5, m = 4 is used, the estimates are r1 = ____ and r2 = ____ .
INSTITUTE OF STATISTICS
NORTH CAROLINA STATE COLLEGE
© Copyright 2026 Paperzz