NONLINEAR REGRESSION METHODS

by

A. RONALD GALLANT

Institute of Statistics Mimeograph Series No. 890
Raleigh
September 1973
NONLINEAR REGRESSION METHODS

by

A. Ronald Gallant

ABSTRACT

The modified Gauss-Newton method for finding the least squares
estimates of the parameters appearing in a nonlinear regression model
is described.  A description of software implementing the method
available to TUCC users is included.
NONLINEAR REGRESSION METHODS

by

A. Ronald Gallant
A sequence of responses $y_t$ to inputs $x_t$ is assumed to be generated
according to the nonlinear regression model

    $y_t = f(x_t, \theta) + e_t$        (t = 1, 2, ..., n).

The inputs $x_t$ are k-dimensional and the unknown parameter $\theta$ is
p-dimensional,

    $\theta = (\theta_1, \theta_2, \ldots, \theta_p)'$ .

The least squares estimator is that value

    $\hat{\theta} = (\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_p)'$

which minimizes

    $SSE(\theta) = \sum_{t=1}^{n} [y_t - f(x_t, \theta)]^2$ .
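To make the criterion concrete, the short Fortran function below is a minimal
sketch of an SSE evaluator.  It is illustrative only: the name SSEVAL is
invented here, it assumes at most 20 parameters, and it assumes a user-written
FUNCT with the calling sequence documented later in Exhibit II.

      REAL*8 FUNCTION SSEVAL(FUNCT, Y, X, THETA, N, K, IP)
C     SKETCH ONLY -- EVALUATE SSE(THETA) FOR A TRIAL PARAMETER VALUE
C     BY SUMMING SQUARED DEVIATIONS OVER THE N OBSERVATIONS.
      REAL*8 Y(N), X(K,N), THETA(IP), VAL, DEL(20), DEV
      EXTERNAL FUNCT
      SSEVAL = 0.D0
      DO 10 IT = 1, N
C        ISW = 1 ASKS FUNCT FOR THE FUNCTION VALUE ONLY, NOT THE
C        PARTIAL DERIVATIVES (SEE EXHIBIT II).
         CALL FUNCT(X(1,IT), THETA, VAL, DEL, 1)
         DEV = Y(IT) - VAL
         SSEVAL = SSEVAL + DEV*DEV
   10 CONTINUE
      RETURN
      END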
The following vector and matrix notation will be useful in describing the
modified Gauss-Newton method:

    $y = (y_1, y_2, \ldots, y_n)'$,  the $n \times 1$ vector of responses;

    $f(\theta)$ = the $n \times 1$ vector whose t-th element is $f(x_t, \theta)$;

    $\nabla f(x_t, \theta)$ = the $p \times 1$ vector whose j-th element is
        $\frac{\partial}{\partial \theta_j} f(x_t, \theta)$;

    $F(\theta)$ = the $n \times p$ matrix whose t-th row is $\nabla' f(x_t, \theta)$.

To illustrate, consider the 50 responses $y_t$ and inputs $x_t$ listed in
Exhibit I.  We assume that

    $f(x, \theta) = \theta_1 e^{\theta_2 x}$

so that p = 2 and k = 1.
The vectors and matrices introduced above are, for this example,

    $y = (.824, .515, \ldots, .576)'$    (50 x 1),

    $f(\theta)$ = the 50 x 1 vector whose t-th element is $\theta_1 e^{\theta_2 x_t}$,
        that is, $(\theta_1 e^{.883\,\theta_2}, \theta_1 e^{.249\,\theta_2}, \ldots)'$,

    $F(\theta)$ = the 50 x 2 matrix whose t-th row is
        $(e^{\theta_2 x_t},\ \theta_1 x_t e^{\theta_2 x_t})$ .
The modified Gauss-Newton (Hartley, 1961) algorithm proceeds as follows.
Find a start value $\theta_0$; methods of finding start values are discussed
later.  From the starting estimate $\theta_0$ compute

    $D_0 = [F'(\theta_0) F(\theta_0)]^{-1} F'(\theta_0)\,[y - f(\theta_0)]$ .

Find a $\lambda_0$ between 0 and 1 such that

    $SSE(\theta_0 + \lambda_0 D_0) < SSE(\theta_0)$ ,

and let $\theta_1 = \theta_0 + \lambda_0 D_0$.  Compute

    $D_1 = [F'(\theta_1) F(\theta_1)]^{-1} F'(\theta_1)\,[y - f(\theta_1)]$ .

Find a $\lambda_1$ between 0 and 1 such that

    $SSE(\theta_1 + \lambda_1 D_1) < SSE(\theta_1)$ ,

and let $\theta_2 = \theta_1 + \lambda_1 D_1$.
These iterations are continued until terminated according to some stopping
rule; stopping rules are discussed later.
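For the two-parameter exponential example the normal equations that define the
correction $D_i$ are only 2 x 2 and can be written out explicitly.  The Fortran
fragment below is a sketch of a single unmodified Gauss-Newton correction for
that model; it is illustrative only (the name GNSTEP and its argument list are
invented here) and is not the distributed routine DMGN documented in Exhibit II.

      SUBROUTINE GNSTEP(Y, X, N, T1, T2, D)
C     SKETCH ONLY -- ONE GAUSS-NEWTON CORRECTION FOR THE MODEL
C     F(X,THETA) = THETA(1)*EXP(THETA(2)*X), I.E. P = 2, K = 1.
      REAL*8 Y(N), X(N), T1(2), T2(2), D(2)
      REAL*8 A11, A12, A22, B1, B2, F1, F2, R, E, DET
      A11 = 0.D0
      A12 = 0.D0
      A22 = 0.D0
      B1  = 0.D0
      B2  = 0.D0
      DO 10 IT = 1, N
C        T-TH ROW OF F(THETA):  F1 = DF/DTHETA(1),  F2 = DF/DTHETA(2)
         E  = DEXP(T1(2)*X(IT))
         F1 = E
         F2 = T1(1)*X(IT)*E
C        RESIDUAL AT THE CURRENT ESTIMATE
         R  = Y(IT) - T1(1)*E
C        ACCUMULATE F'F AND F'(Y - F(THETA))
         A11 = A11 + F1*F1
         A12 = A12 + F1*F2
         A22 = A22 + F2*F2
         B1  = B1  + F1*R
         B2  = B2  + F2*R
   10 CONTINUE
C     SOLVE THE 2 BY 2 NORMAL EQUATIONS FOR THE CORRECTION D
      DET  = A11*A22 - A12*A12
      D(1) = ( A22*B1 - A12*B2)/DET
      D(2) = (-A12*B1 + A11*B2)/DET
C     UNMODIFIED NEW ESTIMATE; A STEP LENGTH LAMBDA IS CHOSEN SEPARATELY
      T2(1) = T1(1) + D(1)
      T2(2) = T1(2) + D(2)
      RETURN
      END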
Hartley's (1961) paper sets forth assumptions such that these iterations
will converge to $\hat{\theta}$.  Slightly weaker assumptions (Gallant, 1971)
are listed in Appendix I.
As users of iterative methods are well aware, a mathematical proof of
convergence is no guarantee that convergence will obtain on a computer.  In
this author's experience, convergence fails for two reasons:  i) the model
chosen does not fit the data, or ii) poor start values are chosen.  When the
inputs are scalars (k = 1) the first difficulty can be avoided.  Plot the
data.  If your visual impression of the plotted data differs from the
regression model chosen, expect difficulties.  When the inputs are vector
valued (k > 1) the same considerations apply but are more difficult to
verify.
The choice of start values is entirely an ad hoc process.  They may be
obtained from prior knowledge of the situation, inspection of the data, grid
search, or trial and error.  For examples, see Gallant (1968) and Gallant and
Fuller (1973).  A more general approach to finding start values is given by
Hartley and Booker (1965).  A simpler variant of their idea is the following.
Choose p observations $y_{t_i}$ and inputs $x_{t_i}$ (i = 1, 2, ..., p) and
solve the p equations

    $y_{t_i} = f(x_{t_i}, \theta)$    (i = 1, ..., p)

for the p unknown parameters.  Illustrating with our example, we will choose
the observations with the largest and smallest inputs, obtaining two equations
of the form $y_{t_i} = \theta_1 e^{\theta_2 x_{t_i}}$.  The solution of these
equations is $\theta_0 = (.444, .823)'$.
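For the exponential model these two equations can be solved in closed form.
Writing the two chosen observations as $(y_a, x_a)$ and $(y_b, x_b)$ (generic
labels used only for this illustration), dividing one equation by the other
gives $y_a / y_b = e^{\theta_2 (x_a - x_b)}$, so that

    $\theta_2 = \ln(y_a / y_b)\,/\,(x_a - x_b)$    and    $\theta_1 = y_a e^{-\theta_2 x_a}$ .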
There are several ways of choosing $\lambda_i$ at each iterative step so as
to satisfy $SSE(\theta_i + \lambda_i D_i) < SSE(\theta_i)$.  Hartley (1961)
suggests two methods in his paper.  In this author's experience, it doesn't
make very much difference how one chooses $\lambda_i$.  What is important is
that the computer program check that the condition

    $SSE(\theta_i + \lambda_i D_i) < SSE(\theta_i)$

is satisfied before taking the next iterative step.
In an intermediate step in Hartley's (1961) proof one sees that there is
an $\epsilon > 0$ such that for every $\lambda$ between 0 and $\epsilon$,

    $SSE(\theta_i + \lambda D_i) < SSE(\theta_i)$ .

For this reason, the author prefers the following method of choosing
$\lambda_i$.  For some $a$ between 0 and 1, say $a = .6$, successively check
the values $\lambda = a^j$, j = 0, 1, ..., and choose $\lambda_i$ to be the
largest $a^j$ such that

    $SSE(\theta_i + a^j D_i) < SSE(\theta_i)$ .

It can happen that no $\lambda_i$ satisfying this condition can be found
within the limits of the machine.  This situation is discussed in a later
paragraph.
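A minimal Fortran sketch of this step-length search follows.  It is
illustrative only: STEP is an invented name, SSEVAL is the hypothetical SSE
evaluator sketched earlier, and the limit of 40 trial values matches the limit
quoted in the DMGN documentation of Exhibit II.

      SUBROUTINE STEP(FUNCT, Y, X, T1, D, N, K, IP, T2, IER)
C     SKETCH ONLY -- CHOOSE LAMBDA = .6**J, J = 0, 1, ..., 40, AS THE
C     LARGEST VALUE THAT REDUCES THE RESIDUAL SUM OF SQUARES.
      REAL*8 Y(N), X(K,N), T1(IP), D(IP), T2(IP)
      REAL*8 SSEVAL, S0, S1, V
      EXTERNAL FUNCT
      S0 = SSEVAL(FUNCT, Y, X, T1, N, K, IP)
      V  = 1.D0
      DO 20 J = 0, 40
         DO 10 I = 1, IP
            T2(I) = T1(I) + V*D(I)
   10    CONTINUE
         S1 = SSEVAL(FUNCT, Y, X, T2, N, K, IP)
C        ACCEPT THE FIRST (LARGEST) STEP THAT LOWERS SSE
         IF (S1 .LT. S0) GO TO 30
         V = 0.6D0*V
   20 CONTINUE
C     NO IMPROVING STEP FOUND WITHIN THE LIMITS OF THE MACHINE
      IER = 1
      RETURN
   30 IER = 0
      RETURN
      END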
There are a variety of stopping rules, or tests for convergence, used to
decide when to terminate the iterations.  For example, one might choose a
tolerance $\epsilon > 0$ and terminate when

    $\| \theta_i - \theta_{i+1} \| \le \epsilon\, \| \theta_i \|$ .

(The symbol $\| x \|$ denotes the Euclidean norm; $\| x \| = (\sum_{i=1}^{p} x_i^2)^{1/2}$.)
There is one situation where one must stop.  This is when no $\lambda$ can be
found such that $SSE(\theta_i + \lambda D_i) < SSE(\theta_i)$.
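The relative-change test above takes only a few lines of Fortran; the sketch
below (DONE is an invented name used only for illustration) returns .TRUE.
when the change from TOLD to TNEW is small relative to TOLD in the Euclidean
norm.

      LOGICAL FUNCTION DONE(TOLD, TNEW, IP, EPS)
C     SKETCH ONLY -- RELATIVE-CHANGE STOPPING TEST:
C     STOP WHEN ||TOLD - TNEW|| .LE. EPS*||TOLD||  (EUCLIDEAN NORM)
      REAL*8 TOLD(IP), TNEW(IP), EPS, DIFF, SIZE
      DIFF = 0.D0
      SIZE = 0.D0
      DO 10 I = 1, IP
         DIFF = DIFF + (TOLD(I) - TNEW(I))**2
         SIZE = SIZE + TOLD(I)**2
   10 CONTINUE
      DONE = DSQRT(DIFF) .LE. EPS*DSQRT(SIZE)
      RETURN
      END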
The author's preference is not to use a stopping rule other than some
prechosen limit on the number of iterations.  The values of $\theta_i$ and
$SSE(\theta_i)$ are printed out for each iteration until either this limit is
reached or no $\lambda$ can be found to improve $\theta_i$.  The observations,
predicted values, and residuals from this last iteration are printed out as
well.  If the last few iterations are identical to seven significant digits
and the predicted values and residuals indicate that these values are
acceptable, they are used.  A further check is to try another start value and
see if the same answers are obtained.
The value of this last iteration will be taken as $\hat{\theta}$.  From this
last iteration are computed

    $\hat{\sigma}^2 = \frac{1}{n} SSE(\hat{\theta})$    and    $\hat{C} = [F'(\hat{\theta}) F(\hat{\theta})]^{-1}$ .

Under the assumptions listed in Appendix II, the least squares estimator
$\hat{\theta}$ is approximately normally distributed with mean $\theta$ and
variance-covariance matrix $\hat{\sigma}^2 \hat{C}$.
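The standard errors and t-statistics reported by the program (Exhibit VII)
follow from this approximation.  Assuming the usual conventions, the j-th
standard error is the square root of the j-th diagonal element of
$\hat{\sigma}^2 \hat{C}$ and the t-statistic is the ratio of the estimate to
its standard error:

    $se(\hat{\theta}_j) = [\hat{\sigma}^2 \hat{C}_{jj}]^{1/2}$ ,    $t_j = \hat{\theta}_j / se(\hat{\theta}_j)$ .

For the example, the estimate .449362 with standard error .0254525 gives a
t-statistic of about 17.65, which agrees with the output reproduced in
Exhibit VII.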
A FORTRAN subroutine, DMGN, is available to TUCC users which will perform
one modified Gauss-Newton iterative step.  The user is free to handle his own
input, output, and stopping rules.  The documentation is displayed as Exhibit
II.  The requisite JCL is as follows:

    //NONLIN    JOB   xxx.yyy.zzzz,programmer-name
    //STEP1     EXEC  FTGCG
    //C.SYSIN   DD  *
         source code
    //G.SYSLIB  DD  DSN=NCS.ES.B4139.GALLANT.GALLANT,DISP=SHR
    //          DD  DSN=SYS1.FORTLIB,DISP=SHR
    //          DD  DSN=SYS1.SUBLIB,DISP=SHR
    //G.SYSIN   DD  *
         data
    //

Source coding which will handle input and output is shown in Exhibit III;
its documentation is shown in Exhibit IV.  Users at NCSU, Duke, and UNC may
obtain a source deck by calling the author.
The exponential example we have been considering will be used to illustrate
the use of this program.  Assume that the data of Exhibit I have been punched
one observation per card according to the format (2F10.3).  Subroutine INPUT
is coded in Exhibit V and Subroutine FUNCT is coded in Exhibit VI.  The deck
arrangement is as follows:

    //NONLIN    JOB   xxx.yyy.zzzzz,programmer-name
    //STEP1     EXEC  FTGCG
    //C.SYSIN   DD  *
         source deck
         subroutine INPUT deck
         subroutine FUNCT deck
    //G.SYSLIB  DD  DSN=NCS.ES.B4139.GALLANT.GALLANT,DISP=SHR
    //          DD  DSN=SYS1.FORTLIB,DISP=SHR
    //          DD  DSN=SYS1.SUBLIB,DISP=SHR
    //G.SYSIN   DD  *
         data cards
    //
Output for the example is shown in Exhibit VII.  Note that the program
prints a correlation matrix computed from the variance-covariance matrix
$\hat{\sigma}^2 \hat{C}$ rather than the matrix $\hat{\sigma}^2 \hat{C}$
itself.  The variance-covariance matrix can be recovered by using the
standard errors printed on the same page.  Please report any errors in
documentation or difficulties with the program to the author.
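To make the recovery explicit: if $r_{ij}$ denotes a printed correlation and
$s_i$, $s_j$ the printed standard errors of $\hat{\theta}_i$ and
$\hat{\theta}_j$, then the corresponding element of the variance-covariance
matrix is

    $\hat{\sigma}^2 \hat{C}_{ij} = r_{ij}\, s_i\, s_j$ .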
Exhibit I.  Observations

      t     y_t     x_t            t     y_t     x_t
      1    0.824   0.883          26    0.821   0.625
      2    0.515   0.249          27    0.764   0.629
      3    0.560   0.539          28    0.541   0.150
      4    0.949   0.995          29    0.478   0.236
      5    0.495   0.113          30    0.622   0.065
      6    0.874   0.722          31    0.510   0.260
      7    0.452   0.315          32    0.709   0.973
      8    0.482   0.393          33    0.629   0.495
      9    0.600   0.521          34    0.407   0.212
     10    0.660   0.535          35    0.654   0.815
     11    0.874   0.815          36    0.865   0.931
     12    0.554   0.629          37    0.562   0.553
     13    0.534   0.432          38    0.755   0.486
     14    0.775   0.931          39    0.844   0.931
     15    0.850   0.695          40    0.568   0.214
     16    0.698   0.792          41    0.982   0.902
     17    0.622   0.493          42    0.600   0.480
     18    0.626   0.830          43    0.610   0.764
     19    0.771   0.538          44    0.424   0.260
     20    0.616   0.758          45    0.639   0.682
     21    0.819   0.706          46    0.695   0.748
     22    0.741   0.410          47    0.507   0.345
     23    0.481   0.106          48    0.657   0.339
     24    0.736   0.939          49    0.993   0.929
     25    0.753   0.680          50    0.576   0.515
Exhibit II

DMGN                                                              4/20/72

PURPOSE
   COMPUTE A MODIFIED GAUSS-NEWTON ITERATIVE STEP FOR THE REGRESSION
   MODEL YT=F(XT,THETA)+ET.

USAGE
   CALL DMGN(FUNCT,Y,X,T1,N,K,IP,T2,E,D,C,VAR,IER)

SUBROUTINES CALLED
   FUNCT, DMPRD, DSWEEP

ARGUMENTS
   FUNCT - USER WRITTEN SUBROUTINE CONCERNING THE FUNCTION TO BE
           FITTED.  FOR AN INPUT VECTOR XT AND PARAMETER THETA FUNCT
           SUPPLIES THE VALUE OF THE FUNCTION F(XT,THETA), STORED IN
           THE ARGUMENT VAL, AND THE PARTIAL DERIVATIVES WITH RESPECT
           TO THETA, STORED IN THE ARGUMENT DEL.
           SUBROUTINE FUNCT IS OF THE FORM:
               SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
               REAL*8 XT(K),THETA(IP),VAL,DEL(IP)
               (STATEMENTS TO SUPPLY VAL)
               IF(ISW.EQ.1) RETURN
               (STATEMENTS TO SUPPLY DEL)
               RETURN
               END
           THIS STATEMENT MUST BE INCLUDED IN THE CALLING PROGRAM:
               EXTERNAL FUNCT
   Y     - AN N BY 1 VECTOR OF OBSERVATIONS.
           ELEMENTS OF Y ARE REAL*8.
   X     - A K BY N MATRIX CONTAINING THE INPUT K BY 1 VECTORS STORED
           AS COLUMNS OF X.  COLUMN I OF X CONTAINS THE INPUT VECTOR
           CORRESPONDING TO OBSERVATION Y(I).  STORED COLUMNWISE
           (STORAGE MODE 0).
           ELEMENTS OF X ARE REAL*8.
   T1    - INPUT IP BY 1 VECTOR CONTAINING THE START VALUE OF THETA
           OR THE VALUE COMPUTED IN THE PREVIOUS ITERATIVE STEP.
           ELEMENTS OF T1 ARE REAL*8.
   N     - NUMBER OF OBSERVATIONS.
           INTEGER
   K     - DIMENSION OF AN INPUT VECTOR IN THE MODEL F(XT,THETA).
           INTEGER
   IP    - NUMBER OF PARAMETERS IN THE MODEL F(XT,THETA).
           INTEGER
   T2    - AN IP BY 1 VECTOR CONTAINING THE NEW ESTIMATE OF THETA.
           THE ELEMENTS OF T2 ARE REAL*8.
   E     - AN N BY 1 VECTOR OF RESIDUALS FROM THE MODEL WITH THETA=T1.
           ELEMENTS OF E ARE REAL*8.
   D     - AN IP BY 1 VECTOR CONTAINING AN UNMODIFIED GAUSS-NEWTON
           CORRECTION VECTOR.  SET THETA=T1+D TO OBTAIN AN UNMODIFIED
           NEW ESTIMATE.
           ELEMENTS OF D ARE REAL*8.
   C     - AN IP BY IP MATRIX CONTAINING THE ESTIMATED VARIANCE-
           COVARIANCE MATRIX OF T1 PROVIDED T1 IS THE LEAST SQUARES
           ESTIMATE OF THETA.  C=VAR*INVERSE(SUM(DEL*DEL')).  STORED
           COLUMNWISE (STORAGE MODE 0).
           ELEMENTS OF C ARE REAL*8.
   VAR   - ESTIMATED VARIANCE OF OBSERVATIONS PROVIDED T1 IS THE
           LEAST SQUARES ESTIMATE.
           REAL*8.
   IER   - INTEGER ERROR PARAMETER CODED AS FOLLOWS:
           IER=0  NO ERROR
           IER=1  T2=T1+V*D WHERE V=.6**L FAILED TO REDUCE THE
                  RESIDUAL SUM OF SQUARES FOR L=0, 1, 2, ..., 40.
           IER.GT.9  AN INVERSION ERROR OCCURRED, UNITS POSITION OF
                  IER HAS THE SAME MEANING AS ABOVE.

REFERENCE
   HARTLEY, H. O.  THE MODIFIED GAUSS-NEWTON METHOD FOR THE FITTING OF
   NON-LINEAR REGRESSION FUNCTIONS BY LEAST SQUARES.  TECHNOMETRICS, 3.

PROGRAMMER
   DR. A. RONALD GALLANT
   DEPARTMENT OF STATISTICS
   NORTH CAROLINA STATE UNIVERSITY
   RALEIGH, NORTH CAROLINA 27607
Exhibit III

      REAL*8 R(3000)
      ISW=1
      CALL INPUT(ISW,N,K,IP,ITER,R,R,R)
      M0=1
      M1=N+M0
      M2=K*N+M1
      M3=IP+M2
      M4=IP+M3
      M5=N+M4
      M6=IP+M5
      M7=IP*IP+M6
      M8=1+M7
      WRITE(3,1) M8
      WRITE(3,2)
      DO 10 I=1,M8
   10 R(I)=0.D0
      ISW=2
      CALL INPUT(ISW,N,K,IP,ITER,R(M0),R(M1),R(M2))
      IOUT=1
      ...
      IOUT=2
      CALL NONLIN(R(M0),R(M1),R(M2),R(M3),R(M4),R(M5),R(M6),R(M7),
     *            N,K,IP,ITER,IOUT)
      STOP
    1 FORMAT('1'/////36X,'THE VECTOR R MUST BE DIMENSIONED AT LEAST',
     *       ' AS LARGE AS M8 = ',I9)
    2 FORMAT(' '/////36X,'PLEASE REPORT ANY PROBLEMS WITH THIS',
     *       ' PROGRAM TO:'/
     *       36X,'DR. A. RONALD GALLANT'/
     *       36X,'DEPARTMENT OF STATISTICS'/
     *       36X,'NORTH CAROLINA STATE UNIVERSITY'/
     *       36X,'RALEIGH, NORTH CAROLINA 27607'/
     *       36X,'(919) 737-2531')
      END

      SUBROUTINE NONLIN(Y,X,T1,T2,E,D,C,VAR,N,K,IP,ITER,IOUT)
      REAL*8 Y(N),X(K,N),T1(IP),T2(IP),E(N),D(IP),C(IP,IP),VAR
      ...
      END
Exhibit IV

SOURCE DECK FOR MODIFIED GAUSS-NEWTON NONLINEAR ESTIMATION

USER SUPPLIED SUBROUTINES INPUT AND FUNCT REQUIRED.

SUBROUTINE INPUT IS OF THE FORM:
      SUBROUTINE INPUT(ISW,N,K,IP,ITER,Y,X,TO)
      REAL*8 Y(N),X(K,N),TO(IP)
      IF(ISW.EQ.1) GO TO 1
      IF(ISW.EQ.2) GO TO 2
    1 CONTINUE
      (CODING TO SUPPLY N, K, IP, ITER)
      RETURN
    2 CONTINUE
      (CODING TO SUPPLY Y, X, TO)
      RETURN
      END

SUBROUTINE FUNCT IS DESCRIBED IN THE DOCUMENTATION OF SUBROUTINE
DMGN AND IS OF THE FORM:
      SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
      REAL*8 XT(K),THETA(IP),VAL,DEL(IP)
      (CODING TO SUPPLY VAL)
      IF(ISW.EQ.1) RETURN
      (CODING TO SUPPLY DEL)
      RETURN
      END

ARGUMENTS OF INPUT AND FUNCT:
   ISW   - INTEGER SWITCH.
           SUPPLIED BY CALLING PROGRAM.
           INTEGER*4
   N     - NUMBER OF OBSERVATIONS.
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           INTEGER*4
   K     - DIMENSION OF AN INPUT VECTOR IN THE MODEL F(XT,THETA).
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           INTEGER*4
   IP    - NUMBER OF PARAMETERS IN THE MODEL F(XT,THETA).
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           INTEGER*4
   ITER  - NUMBER OF ITERATIONS DESIRED.
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           INTEGER*4
   Y     - AN N BY 1 VECTOR OF OBSERVATIONS.
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           REAL*8
   X     - A K BY N MATRIX CONTAINING THE K BY 1 INPUT VECTORS STORED
           AS COLUMNS OF X.  COLUMN I OF X CONTAINS THE INPUT VECTOR
           CORRESPONDING TO OBSERVATION Y(I).
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           REAL*8
   TO    - INPUT IP BY 1 VECTOR CONTAINING THE START VALUE OF THETA.
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           REAL*8
   XT    - A K BY 1 VECTOR CONTAINING AN INPUT VECTOR.
           SUPPLIED BY CALLING PROGRAM.
           REAL*8
   THETA - AN IP BY 1 VECTOR CONTAINING PARAMETER VALUES.
           SUPPLIED BY CALLING PROGRAM.
           REAL*8
   VAL   - VAL=F(XT,THETA).
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           REAL*8
   DEL   - AN IP BY 1 VECTOR CONTAINING THE PARTIAL DERIVATIVES OF
           F(XT,THETA) WITH RESPECT TO THETA.
           SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
           REAL*8
Exhibit V

      SUBROUTINE INPUT(ISW,N,K,IP,ITER,Y,X,TO)
      REAL*8 Y(50),X(1,50),TO(2)
      IF(ISW.EQ.1) GO TO 1
      IF(ISW.EQ.2) GO TO 2
    1 CONTINUE
      N=50
      K=1
      IP=2
      ITER=25
      RETURN
    2 CONTINUE
      TO(1)=.444D0
      TO(2)=.823D0
      READ(1,100) (Y(I),X(1,I),I=1,50)
  100 FORMAT(2F10.3)
      RETURN
      END
Exhibit VI

      SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
      REAL*8 XT(1),THETA(2),VAL,DEL(2)
      VAL=THETA(1)*DEXP(THETA(2)*XT(1))
      IF(ISW.EQ.1) RETURN
      DEL(1)=DEXP(THETA(2)*XT(1))
      DEL(2)=THETA(1)*XT(1)*DEXP(THETA(2)*XT(1))
      RETURN
      END
-"-,~~"~-'"--
-e:--·---" .'-'.-' -,.". .
.......
',"'"'' --"'.""
e.
.-.---------.--',.
'''1
I
~
·
~~~$~ttt.~
THEVECTO~
•
,
• • • ~ • • • • • • • • $ • • • • • • • • • • • ~ • • • • • • *.~
•••• t ••• ,t •••
••
"~
R MUST BE DIMENSIONEO AT lEAST
162.
AS LARce AS Ma •
•
•
••• ~ •• ~~.~ •• $
,~ •• t •••••••••
•
•
e."""""""""""'Q" ••••••
~
~
~
t
,~
.!
~
I
.I'.
+.~.~.t
••
•
.'
•
•
•
•............ ............................•.•......
~,
PLEASE REPORT ANY PROOLEHS WITH THIS PROCRAM TO;
OR. A. RONALD GALLANT
Of.PARTMENT OF STATISTICS
NORTH CAROLINA STATE UNIVERSITY
RALEIGH. NGRTH CAROLINA 27607
19191 737-2531
••
•
•
•
•
~.....
0"
~
•
•
•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
.....
.c+
;~
I.H
I.
I
1
I
.'
'.,.
""It "'it .... "' ..... 1fI $ "' • • •
•
••
•••
~ .. 6!. e ...
$:"
It: "' • • ~. o. (t • • • • 0 . . . . . . . . . . . . . . • •~ • • • • • • • • • •,• •'• • •
MOOlfl(DCAUS!;-'I(WT(it~
*~.~
•••
~.~*.~.'
'..
~
• ••
ITCRATIONS
•
•
•
$ •••••••••••••••••••••••••••••••••••••••
1
1
!
i
i
ERR{lR SUM OF
SQUARES
PAR.u~ETER
--- ... _---------
0
0.7349j68"O 00
1
Z
0.44"o00000 00
0.8230,00000 00
1
0.454472620 00
1
2
0.447490530 00
0.673350780 00
2
0."'53567l10 00
1
2
O. "49 330" 70 00
0.65'1282920 00
.
I
INTERMfDIATe
eSTIMATE
STCP
NUMUE~
I
.1
i
--------- ---------------
3
0.'053567080 00
1
Z
0.449361500 00
0.65'H61440 00
'0
0.'053567080 00
1
2
0.449161660 00
0.659160910 00
5
0.453'567080 00
1
Z
0.'049361660 00
0.6$9160900 00
6
0.'053567080 00
1
Z
0.449361660 00
0.659160900 00
7
0.453567080 00
1
2
0.'049361660 00
0.659160900 00
Ii
I
I·
I
I
I
l•
h
!
I,
j
1
~
'4
0.4'09361660
1
)
Z
0.659160900
ON THIS ESTIMATe
~"
•••
•
••
•• ~ ••••• , ••••••••••••• t •••••••••••••••••• ~
CHECK THESE ITERATIONS TO see IF CONVERGE~CE HAS
oeen ACHIEVED 8EFORE'USING THE FOLLOWING ReSULTS
.
••
•
•
•
•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
~i
.,
t
I.
.
-
. . . . . . . -- -
-.-,..4_
-.--.- ,.--. -.
.---------------~----------------------~~-_._-------~----------------+I
MODIFIEO CAUSS-NEWTON PARAMeTER
ESTI~ArES
I
f
·f
I
+------------------------------------------~-~----------------------+I
I
PARAMETfR
I
NUMseR
ESTIMATE
SlANOARD ERROR
l-STATISTIC
I
I
I
0.449362
0.2545250-01
17.6549
I
I
0.659161
0.8026480-01
8.2123
2
I
ESTIMATED VARIANCE
0.9071340-02
RESIP\JAL SUM OF SCUAP£S
NUMBER OF {lIlSERVA Tl eNS
NUMBER OF'PARAMETERS
0.45356708
50
2
I
I
I
f
I
I
I
I
I
f
+-------------------------------------------------~----- -------------+
'.. . . ,_ . .
~._...,.
......
N_~N~·__
~,~,
....'
. .,. . ;. ".,.,;'" :" " - f
.
........ :_~~ ....j
~:..i.;.-..:-.:-,-..:...
-i;
_ _ ...__
w.
• ......
_~_
~
_ __.;.
- - - . - .. - - _ _ ... - - - - - - . - -
•• • '* •• ..
.
......
....
i"
.....
.....
.....
.....
..... .., ....
......." -." .,...,"
.......'" . ..........
....... . "......
....."
••
....
.
........ ... ...........
.
.., ..... .....".....
..." .ij... ....
..... .
...... '" .......
.
.."...
....
....
...
.......
.....
....,. * •••.• ".*
<;
~
~
w
VI
w
«
...
w
,.,
w
l&
Cl
<
c..
fl.
<>
to:
of
<
-'
U
ct
D:
0
V
~(:,;
I
I
I
I
I
I
I
I
!
I
·0
I
I
•
I
I
!
!
.0'
•
c
I
:r
u.
)(
fl
N
0II'
W
Cl
fl
..----
it
e
C>
0
-.
CO
CO
CO
...
'"'"
'"
'"
C'
•
0
I
~----
-
....
~,.~~
••
e
•
. , . . . . . ~,. • , . , . . _
. .w
•
e
.•."
.
•
"
.•• ,
, ••
"
...
• •"
·
· . · · · · -••• • •• ~ ••• ~ •••••••••~ ••••••••••• ~ ••••••••• ~e
i
·•
_ _, .._
,
_..•
~ ••••••••
•
C8SCRV~ ~f;SPOnSr.s. "Rt01CTlO RE~PON~r~. RfSIDUALS
•
'
.• •
~
j
.~
•
•~
1
~~~~~~~~~o*~~ce~$te.$.$~~~~~¢$.*~¢e~~$o.~ •• ~t¢$.t.*•• ~ •• ~t*.~,~~~~~*t~
y
O.8l40000 00
O.~h()OOO 00
0.5600000 00
0.9490000 ~O
i·
!
Ii
I
f
ii
I
I
i
.!
!
t
II
I
, - _ _ ~_-,-_.;...
0.49S00CO 00
0.6140000 00
0.4S70000 00
0.4B2rOOO 00
0.6000000 00
0.6600000 00
0.8740000 00
O. S540000 00
0.534000D 00
0.775COOO 00
0.8500000 00
0.69110000 00
0.62~OOOO 00
0.6260000 00
0.7710000 00
0.6160000 00
0.819000D CO
0.7410000 00
0.4810000 00
0.7360000 00
O.7~brOCO CO
0.6210000 00
0.76400Cn 00
0.5410000 00
0.47AOOCn 00
0.b220000 00
0.5100dcD 00
0.7090000 00
0.6290000 00
0.4C7COCO CO
0.654;)000 00
0.0610600 00
0.5620000 00
0.11S0000 00
0.8440000 00
0.5660000 00
0.9820000 00
0.6COOOCO 00
0.6100000 00
0.4240000 00
0.6390000 00
0.69S0000 00
0.5070000 00
...... _:"-o..0..6~Q()OlLQ.lL
0.9930000 00
0.5760000 00
1
YKAT
0.~042IS0 CO
O.')l'}SHO 00.
0.6410~SO 00
0.S6!>8340 00
0.4841110 00
0.72J7410 00
0.~S30"80 00
0.~e7237C 00
0.6334940 00
0.6601910 00
.0.7689~40 00
0.1>807360 00
0."973990 00
0.8300670 00
0.1104830 00
0.7S73q40 00
0.6219090 00
0.77660$0 00
0.6406~30 00
0.740bO~O 00
0.7156530 00
0.SA87v~0
0.411IA~ZO
O.8344~bD
0.7034130
0.6764450
O.66073~n
I
IlfSitlUAl
0.1978460-01
-0.1451310-01
-0.8105530-01
0.8316610-01
0.496r630
0.r,Z49·/l,0
0.46'lOHL: ClO
0.5333670 00
0.~S3~6qO 00
0.6227300 00
0.S167~6~ 00
0.16B?1.4C 00
0.CS1BR1D 00
I
~
O.10~8~"n-Ol
O.1507~90 00
-0.1010~eo 00
I
I
-0.1002370 00
-0.3349470-01
-0.7907140-03
0.10S0360 00
-0.12623',11 00
j
-0.63399l0-01
-0.5506740-01
0.13~51l0 00
~0.5939390-01
1
0.9063520-04
:-0.15060S0 00
·0.1303670 00
-0.12460&0 00
0.1033470 00
O.I~1207U 00
-0.081QlbO-Ol
00
00
00
00
CO
00
00
00
'I
-0.9~41610-0t·
0.S45C73n.Ol·
0.14255S0 00
O~837637~-01
0.449]740-01
-0'.469.,570-01
0.1529670 00
-0.23367tD-Ol
-0.14436~O 00
. 0.6210220-02
-0.1097560 CO
-0.1149640 00
O.711944D-02
0~6469q90 00~0.64'~8SU-Ol
0.6190460 00
00
0.5174370 00
0.8143510 00
0.616603~ 00
0.7435430 00
0.5333670 00
0.1044210 00
0.7357430 00
0.5641040 00
0.8300~70
....... .;..._Q...5/:1~l1LOQ..':""
0.8289740 00
0.6309940 00
0.13'~54G 00
0.1393260-01
• 0.'056260-01
0.1676490 00
.
-0~1660290-01
-0.1335430 00
-0.1093670 00
-0.6S42010-01
-0.4014260-01
-0.5710400~01
-0..a.9!illZ1U=.Q.L
.0.1640260 00
-0.5499370-01
-':_:"'"
.:..-
REFERENCES

Gallant, A. R. (1968)  "A note on the measurement of cost-quantity
    relationships in the aircraft industry,"  Journal of the American
    Statistical Association, 63.  p. 1241-52.

Gallant, A. R. (1971)  "Statistical inference for nonlinear regression
    models,"  Ph.D. dissertation, Iowa State University.

Gallant, A. R. (1973)  "Inference for nonlinear models,"  Institute of
    Statistics Mimeograph Series No. 875, Raleigh.

Gallant, A. R. and Fuller, W. A. (1973)  "Fitting segmented polynomial models
    whose join points have to be estimated,"  Journal of the American
    Statistical Association, 68.  p. 144-47.

Hartley, H. O. (1961)  "The modified Gauss-Newton method for the fitting of
    non-linear regression functions by least squares,"  Technometrics, 3.
    p. 269-80.

Hartley, H. O. and Booker, A. (1965)  "Non-linear least squares estimation,"
    The Annals of Mathematical Statistics, 36.  p. 638-50.

Jennrich, R. I. (1969)  "Asymptotic properties of non-linear least squares
    estimators,"  The Annals of Mathematical Statistics, 40.  p. 633-43.

Malinvaud, E. (1970)  "The consistency of nonlinear regressions,"  The Annals
    of Mathematical Statistics, 41.  p. 956-69.
APPENDIX I

We are given a regression model with observations $(y_t, x_t)$
(t = 1, 2, ..., n).  Let $Q(\theta) = SSE(\theta)$.

Conditions.  There is a convex, bounded subset $S$ of $R^p$ and a point
$\theta_0$ in $S$ such that:

    i)    $\nabla f(x_t, \theta)$ exists and is continuous over $S$ for
          t = 1, 2, ..., n;

    ii)   $\theta$ in $S$ implies the rank of $F(\theta)$ is $p$;

    iii)  $Q(\theta_0) < Q^* = \inf\{Q(\theta)\colon\ \theta$ a boundary point of $S\}$;

    iv)   there do not exist two points $\theta'$, $\theta''$ in $S$ with
          $\theta' \ne \theta''$ such that $Q(\theta') = Q(\theta'')$ and
          $\nabla Q(\theta') = \nabla Q(\theta'') = 0$.

Construction.  Construct the sequence $\{\theta_\alpha\}$ as follows:

    0)  Compute $D_0 = [F'(\theta_0) F(\theta_0)]^{-1} F'(\theta_0)[y - f(\theta_0)]$.
        Find $\lambda_0$ which minimizes $Q(\theta_0 + \lambda D_0)$ over
        $\Lambda_0 = \{\lambda\colon 0 \le \lambda \le 1,\ \theta_0 + \lambda D_0 \in S\}$.
        Set $\theta_1 = \theta_0 + \lambda_0 D_0$.

    1)  Compute $D_1 = [F'(\theta_1) F(\theta_1)]^{-1} F'(\theta_1)[y - f(\theta_1)]$.
        Find $\lambda_1$ which minimizes $Q(\theta_1 + \lambda D_1)$ over
        $\Lambda_1 = \{\lambda\colon 0 \le \lambda \le 1,\ \theta_1 + \lambda D_1 \in S\}$.
        Set $\theta_2 = \theta_1 + \lambda_1 D_1$.

    2)  Continue in this fashion.

Conclusions.  Then $\theta_\alpha$ is an interior point of $S$ for
$\alpha = 1, 2, \ldots$, and $\nabla Q(\theta^*) = 0$ for every limit point
$\theta^*$ of the sequence $\{\theta_\alpha\}$.

Proof.  Gallant (1971).
APPENDIX II

In order to obtain asymptotic results, it is necessary to specify the
limiting behavior of the inputs $x_t$ as n becomes large.  A general way of
specifying the limiting behavior of nonlinear regression inputs is due to
Malinvaud (1970).  His definitions are merely stated here; a complete
discussion and examples are contained in his paper.

Let $X$ be a subset of $R^k$ from which the inputs $x_t$ (t = 1, 2, ...) are
chosen.  Let $G$ be the Borel subsets of $X$ and let $I_A(x_t)$ be the
indicator function of a subset $A$ of $X$.  The measure $\mu_n$ on $(X, G)$
determined by the first n terms of the sequence $\{x_t\}_{t=1}^{\infty}$ of
inputs chosen from $X$ is defined by

    $\mu_n(A) = \frac{1}{n} \sum_{t=1}^{n} I_A(x_t)$    for each $A \in G$.

Definition.  A sequence of measures $\{\mu_n\}$ on $(X, G)$ is said to
converge weakly to a measure $\mu$ with domain $G$ if for every bounded,
continuous real valued function $g$ on $X$,

    $\lim_{n \to \infty} \int_X g \, d\mu_n = \int_X g \, d\mu$ .

Asymptotic results may be obtained under the following set of Assumptions.

Assumptions.  The parameter space $\Theta$ and the set $X$ are compact
subsets of the p-dimensional and k-dimensional reals, respectively.  The
function $f(x, \theta)$ and the partial derivatives
$\frac{\partial}{\partial \theta_j} f(x, \theta)$ are continuous on
$X \times \Theta$.  The sequence of inputs $\{x_t\}_{t=1}^{\infty}$ is chosen
such that the sequence of measures $\{\mu_n\}_{n=1}^{\infty}$ converges
weakly to a measure $\mu$ defined over $(X, G)$.  The true value of $\theta$,
$\theta^0$, is contained in an open set which, in turn, is contained in
$\Theta$.  If $f(x, \theta) = f(x, \theta^0)$ except on a set of $\mu$
measure zero, it is assumed that $\theta = \theta^0$.  The $p \times p$
matrix

    $\Sigma = \left[ \int_X \frac{\partial}{\partial \theta_i} f(x, \theta^0)\,
      \frac{\partial}{\partial \theta_j} f(x, \theta^0)\, d\mu(x) \right]$

is nonsingular.  The errors $\{e_t\}$ are independently and identically
distributed with mean zero and variance $\sigma^2$.

These Assumptions are patterned after those used by Malinvaud (1970) to show
that the least squares estimator is consistent.  A similar set of Assumptions
which does not require that the inputs be bounded or that the second partial
derivatives of $f(x, \theta)$ exist may be found in Gallant (1971).  An
alternative set of Assumptions may be found in Jennrich (1969).
Theorem.  Let $\hat{\theta}$ denote the function of $y$ which minimizes
$SSE(\theta)$ and let $\hat{\sigma}^2 = n^{-1} SSE(\hat{\theta})$.  Under the
Assumptions listed above:

    i)    The estimator $\hat{\theta}$ is consistent for $\theta^0$.

    ii)   The estimator $\hat{\sigma}^2$ is consistent for $\sigma^2$.

    iii)  $n^{-1} F'(\hat{\theta}) F(\hat{\theta})$ is consistent for $\Sigma$.

    iv)   $\sqrt{n}\,(\hat{\theta} - \theta^0)$ is asymptotically normal with
          mean zero and variance-covariance matrix $\sigma^2 \Sigma^{-1}$.

Proof.  Gallant (1971 or 1973).

Remark.  In computations, it is customary to absorb the term $n^{-1}$ in the
variance-covariance matrix.  Thus, one enters the tables taking
$\hat{\theta}$ to be approximately normally distributed with mean $\theta^0$
and variance-covariance matrix $\hat{\sigma}^2 [F'(\hat{\theta}) F(\hat{\theta})]^{-1}$.