The Estimation of a Simultaneous Equation Generalized Probit Model
Takeshi Amemiya
Econometrica, Vol. 46, No. 5 (September, 1978), pp. 1193-1205.
Stable URL: http://links.jstor.org/sici?sici=0012-9682%28197809%2946%3A5%3C1193%3ATEOASE%3E2.0.CO%3B2-Z
THE ESTIMATION OF A SIMULTANEOUS EQUATION GENERALIZED PROBIT MODEL¹

This article considers a two-equation simultaneous equation model in which one of the dependent variables is completely observed and the other is observed only to the extent of whether or not it is positive. A class of generalized least squares estimators is proposed and their asymptotic variance-covariance matrices are obtained. The estimators are based on a principle which is applicable whenever structural parameters need to be determined from estimates of the reduced form parameters.
1. INTRODUCTION
HECKMAN [4] CONSIDERS A CLASS OF MODELS in which a subset of the endogenous variables of a simultaneous equation model with normally distributed errors is observed only with respect to sign. This class contains as special cases the usual simultaneous equation model, which occurs if the above-mentioned subset is null, and the multivariate probit model of Ashford and Sowden [3], which occurs if the subset is in fact the whole set and there is no simultaneity. In addition, Heckman assumes that some of the dummy variables corresponding to the dichotomously observed endogenous variables may appear on the right-hand side of the equations. He considers in detail a two-equation model in which one of the endogenous variables is completely observed and the other is dichotomously observed, with or without the addition of the dummy variable in the equations, and discusses, among other things, an interesting method for obtaining a consistent estimator which is computationally easier than the maximum likelihood estimator. He correctly observes that the results obtained for the two-equation model can easily be generalized to more general models.
The main results of this paper are as follows. First, we obtain the asymptotic variance-covariance matrix of Heckman's estimator for the two-equation model with or without the dummy variable. Simple consistent estimators may be used either as starting values for an iteration to obtain the maximum likelihood estimator or as ends in themselves on the basis of which statistical inference can be made; in the latter case the asymptotic variance-covariance matrix is needed. Second, we propose a class of least-squares-type consistent estimators for the case in which the dummy variable is not included in the equations, and show that Heckman's estimator is asymptotically equivalent to a member of this class and that the optimal member of this class, namely the generalized least squares estimator, is asymptotically more efficient than Heckman's estimator, though computationally slightly more difficult. Third, for the model with the dummy variable, we propose a modification of Heckman's estimator and obtain its asymptotic variance-covariance matrix. Fourth, we discuss the maximum likelihood estimation and another asymptotically efficient estimation method for the model without the dummy variable.

¹ This work was supported by National Science Foundation Grant SOC73-05453-A02 at the Institute for Mathematical Studies in the Social Sciences, Stanford University. The author wishes to thank an editor and a referee for helpful comments.
The class of estimators we propose can be used generally whenever structural parameters need to be determined from estimates of reduced form parameters. When applied to the usual simultaneous equation model, the optimal member of this class, the generalized least squares estimator, coincides with the two-stage least squares estimator. Amemiya [2] discusses an application to the simultaneous equation Tobit model of Nelson and Olson [5], which differs from Heckman's model in that in their model a subset of the endogenous variables is truncated rather than dichotomously observed.
2. THE MODEL
Heckman's two-equation model, with a slight change in notation to conform to the more standard notation of the simultaneous equation literature, is defined by

(2.1)  y1t = γ1 y*2t + β1'x1t + δ1 dt + u1t

and

(2.2)  y*2t = γ2 y1t + β2'x2t + δ2 dt + u2t,

with

(2.3)  dt = 1 if y*2t > 0, dt = 0 otherwise  (t = 1, 2, . . . , T).

It is assumed that y1t and dt are observable scalar random variables; y*2t, u1t, and u2t are unobservable scalar random variables; x1t and x2t are K1- and K2-component vectors of known constants, respectively; γ1, γ2, δ1, and δ2 are scalar unknown parameters; and β1 and β2 are K1- and K2-component vectors of unknown parameters, respectively. It is further assumed that {u1t, u2t} are i.i.d. bivariate normal variables and that

(2.4)  γ2δ1 + δ2 = 0,

which, as Heckman shows, is necessary for the logical consistency of the model.
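The role of the coherency assumption (2.4) can be seen by eliminating y1 between the two structural equations; the following is a sketch of that step, which the text leaves implicit:

```latex
% Substitute the y_1 equation into the y_2^* equation and collect terms:
(1-\gamma_1\gamma_2)\,y^*_{2t}
  \;=\; \gamma_2\,\beta_1' x_{1t} + \beta_2' x_{2t}
      + (\gamma_2\delta_1+\delta_2)\,d_t
      + \gamma_2 u_{1t} + u_{2t}.
% The dummy d_t drops out of the reduced form of y_2^* exactly when its
% coefficient vanishes, i.e. when (2.4): \gamma_2\delta_1+\delta_2 = 0.
```

Without (2.4), dt would enter its own determining equation through the reduced form, and the model would not assign a unique dt to every realization of the errors.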
We will write (2.1) and (2.2) in the obvious vector notation as

(2.5)  y1 = γ1 y*2 + X1β1 + δ1 d + u1

and

(2.6)  y*2 = γ2 y1 + X2β2 + δ2 d + u2.

The reduced form of the above can be written as

(2.7)  y1 = XΠ1 + δ1 d + v1

and

(2.8)  y*2 = XΠ2 + v2,

where X is a T × K matrix consisting of the distinct column vectors in (X1, X2) and Π1, Π2, v1, and v2 are defined in the obvious way. For ease of obtaining the asymptotic results, we will assume lim(T→∞) T⁻¹X'X is a finite and nonsingular matrix. Note that d does not appear on the right-hand side of (2.8) because of assumption (2.4). We denote the variances and covariance of (v1t, v2t) by σ1², σ2², and σ12, and we assume, in addition to the usual identifiability conditions, the normalization

(2.9)  σ2² = 1.

Since y*2 is only dichotomously observed, one could identify Π2, and hence some of the structural parameters, only up to a scalar multiple without a normalization such as (2.9). In the following sections we will use the equality

(2.10)  v1t = σ12 v2t + et,

where v2t and et are independent because of the joint normality of (v1t, v2t). We define α1' = (γ1, β1') and α2' = (γ2, β2').
3. HECKMAN'S TWO-STEP ESTIMATOR
Heckman's estimation is carried out in two steps. In the first step, a consistent estimator Π̂2 of Π2 is obtained by maximum likelihood estimation applied to the standard probit model defined by (2.8) and (2.3); in the second step, the structural parameters (α1, α2) are estimated using Π̂2 in the manner explained in detail below.
Estimation of α1

Inserting (2.8) into (2.5) and putting δ1 = 0, we obtain

(3.1)  y1 = XÂα1 + w1,

where we have defined Â = (Π̂2, J1), J1 is the matrix consisting only of zeroes and ones in appropriate positions so that XJ1 = X1, and w1 = v1 − γ1X(Π̂2 − Π2). Heckman's estimator of α1 is defined by the least squares method applied to (3.1): namely,

(3.2)  α̂1 = (Â'X'XÂ)⁻¹Â'X'y1.
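To make the two-step procedure concrete, here is a minimal numerical sketch in Python. Every parameter value, variable name, and design choice below is a hypothetical illustration, not taken from the paper; the probit first step is done by direct likelihood maximization rather than any particular canned routine.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
T = 20000

# Hypothetical design: X collects the distinct columns of (X1, X2); K = 4.
X = np.column_stack([np.ones(T)] + [rng.normal(size=T) for _ in range(3)])
X1 = X[:, :2]                           # regressors of the first equation (K1 = 2)

# True structural parameters (the delta_1 = delta_2 = 0 case).
gamma1 = 0.5
beta1 = np.array([1.0, -1.0])
Pi2 = np.array([0.3, 0.4, 0.8, -0.5])   # reduced-form coefficients of y2*

# Errors: u1 correlated with v2; var(v2) = 1 is the normalization (2.9).
v2 = rng.normal(size=T)
u1 = 0.6 * v2 + rng.normal(size=T)

y2_star = X @ Pi2 + v2                  # latent; only its sign is observed
d = (y2_star > 0).astype(float)
y1 = gamma1 * y2_star + X1 @ beta1 + u1

# Step 1: probit MLE of Pi2, maximizing the log likelihood (3.3).
def probit_negll(p):
    z = X @ p
    return -np.sum(d * norm.logcdf(z) + (1.0 - d) * norm.logcdf(-z))

Pi2_hat = minimize(probit_negll, np.zeros(X.shape[1]), method="BFGS").x

# Step 2: least squares of y1 on X A_hat = (X Pi2_hat, X1), i.e. (3.2).
XA = np.column_stack([X @ Pi2_hat, X1])
alpha1_hat = np.linalg.lstsq(XA, y1, rcond=None)[0]
print(alpha1_hat)      # close to (gamma1, beta1) = (0.5, 1.0, -1.0)
```

With T this large the estimates should sit near the true values, illustrating the consistency argument; the sampling noise in the first step is exactly the w1 term of (3.1).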
To obtain the asymptotic variance-covariance matrix of α̂1, we need the following preliminary results. The estimator Π̂2 is obtained by maximizing the logarithmic likelihood function

(3.3)  log L = Σ [dt log Ft + (1 − dt) log(1 − Ft)],

where Ft is the standard normal distribution function evaluated at Π2'xt and the range of the summation is for t from 1 to T. From (3.3) we have

(3.4)  ∂log L/∂Π2 = Σ (dt − Ft) ft Ft⁻¹(1 − Ft)⁻¹ xt,

where ft is the standard normal density function evaluated at Π2'xt, and

(3.5)  E ∂²log L/∂Π2∂Π2' = −Σ ft² Ft⁻¹(1 − Ft)⁻¹ xt xt'.

Therefore, by the standard asymptotic theory,² we have

(3.6)  Π̂2 − Π2 ≅ [Σ ft² Ft⁻¹(1 − Ft)⁻¹ xt xt']⁻¹ Σ (dt − Ft) ft Ft⁻¹(1 − Ft)⁻¹ xt,

where ≅ means that both sides of (3.6) have the same asymptotic distribution.³ Define Λ as the diagonal matrix whose tth diagonal element is ft²Ft⁻¹(1 − Ft)⁻¹. Then, from (3.6) we have

(3.7)  V(Π̂2) = (X'ΛX)⁻¹,

where V(·) denotes the asymptotic variance-covariance matrix of its argument throughout this paper.⁴ Since

(3.8)  E(dt − Ft)v1t = σ12 ft,

we obtain, using (2.10) and (3.6),

(3.9)  AE(Π̂2 − Π2)v1'X = σ12 I,

where AE(·) denotes the asymptotic mean (or the mean of the limit distribution) of its argument. Using (3.7) and (3.9), we obtain⁵

(3.10)  X'V(w1)X = cX'X + γ1²X'X(X'ΛX)⁻¹X'X,

where c = σ1² − 2γ1σ12. Therefore, we finally obtain the asymptotic variance-covariance matrix of Heckman's estimator of α1 as

(3.11)  V(α̂1) = (H'X'XH)⁻¹H'X'V(w1)XH(H'X'XH)⁻¹,

where H = (Π2, J1) and X'V(w1)X is as given in (3.10). Note that we are allowed to use H in (3.11) because of the consistency of Π̂2.
² See, for example, the appendix of Amemiya [1].
³ It means, in other words, that √T times both sides of (3.6) have the same limit distribution, since we have assumed X'X goes to infinity at the rate of T.
⁴ That is to say, the variance-covariance matrix of the limit distribution of √T(Π̂2 − Π2) is equal to lim(T→∞) T(X'ΛX)⁻¹.
⁵ The purist might object to the expression V(w1) below, since the number of elements in w1 increases with T. It should be interpreted as the exact variance-covariance matrix of the random vector obtained by replacing Π̂2 − Π2 with the right-hand side of (3.6) in the expression for w1. Or, one may wish to rewrite X'V(w1)X as V(X'w1) to avoid this difficulty. The same remark applies to V(w2), which appears in (3.14) below.
Estimation of α2

Inserting (2.8) into (2.6), solving for y1, and putting δ2 = 0, we get⁶

(3.12)  y1 = XΘ̂λ + w2,

where Θ̂ = (Π̂2, −J2), J2 is the matrix consisting only of zeroes and ones in appropriate positions so that XJ2 = X2, λ' = (γ2⁻¹, γ2⁻¹β2'), and w2 = v1 − γ2⁻¹X(Π̂2 − Π2). Heckman proposes estimating λ by the least squares method applied to (3.12): namely,

(3.13)  λ̂ = (Θ̂'X'XΘ̂)⁻¹Θ̂'X'y1.

Then one can uniquely obtain α̂2 from λ̂ because there is a one-to-one correspondence between α2 and λ.

We will first obtain the asymptotic variance-covariance matrix of λ̂ and then subsequently that of α̂2. Using (3.7) and (3.9) we obtain

(3.14)  X'V(w2)X = γ2⁻²[dX'X + X'X(X'ΛX)⁻¹X'X],

where d = γ2²σ1² − 2γ2σ12. Therefore we have

(3.15)  V(λ̂) = (Q'X'XQ)⁻¹Q'X'V(w2)XQ(Q'X'XQ)⁻¹,

where Q = (Π2, −J2) and X'V(w2)X is as given in (3.14). Note that we are allowed to use Q in (3.15) because of the consistency of Π̂2.

In order to derive the asymptotic variance-covariance matrix of α̂2 from that of λ̂, we must establish an asymptotic relationship between α̂2 − α2 and λ̂ − λ. Partition λ' = (λ1, λ2') so that λ1 = γ2⁻¹ and λ2' = γ2⁻¹β2'. Then we have

(3.16)  γ̂2 − γ2 = λ̂1⁻¹ − λ1⁻¹ ≅ −γ2²(λ̂1 − λ1),

where the last asymptotic equality is based on the consistency of λ̂1. We similarly have

(3.17)  β̂2 − β2 = λ̂1⁻¹λ̂2 − λ1⁻¹λ2 ≅ γ2(λ̂2 − λ2) − γ2β2(λ̂1 − λ1),

where the last asymptotic equality is based on the consistency of λ̂1. We can write (3.16) and (3.17) together in vector notation as

(3.18)  α̂2 − α2 ≅ P(λ̂ − λ),

where

(3.19)  P = [ −γ2²    0'  ]
            [ −γ2β2   γ2I ],

where 0 denotes the vector of zeroes. Finally we have

(3.20)  V(α̂2) = PV(λ̂)P',

where V(λ̂) is as given in (3.15).

⁶ Here it is implicitly assumed that γ2 ≠ 0. This is a drawback of Heckman's method, which does not arise with an alternative method we will present in the next section. As is noted at the end of the next section, the larger γ2, the higher the efficiency of Heckman's estimator.
4. CLASS OF LEAST SQUARES ESTIMATORS WHEN δ1 = 0
The structural parameters (α1, α2) are related to the reduced form parameters (Π1, Π2) according to

(4.1)  Π1 = γ1Π2 + J1β1

and

(4.2)  Π2 = γ2Π1 + J2β2,

where J1 and J2 are defined after (3.1) and (3.12), respectively. The class of estimators we propose is based on the principle of determining (α1, α2) from estimates of (Π1, Π2) using the above relationships in a direct way. We assume that the structural parameters are over-identified, because otherwise our problem does not arise. An application of this principle to any general model which requires the determination of structural parameters from estimates of reduced form parameters can be surmised from the subsequent results and is delineated in Amemiya [2].
Estimation of α1
Let Π̂1 be the least squares estimator of Π1 obtained from (2.7) after putting δ1 = 0, and let Π̂2 be as defined in the first paragraph of Section 3. From (4.1) we have

(4.3)  Π̂1 = Âα1 + η1,

where Â = (Π̂2, J1) as before and η1 = (Π̂1 − Π1) − γ1(Π̂2 − Π2). We define ᾱ1 as the least squares estimator of α1 applied to (4.3),

(4.4)  ᾱ1 = (Â'Â)⁻¹Â'Π̂1,

and define α̃1 as the generalized least squares estimator of α1 applied to (4.3),

(4.5)  α̃1 = (Â'Ω̂1⁻¹Â)⁻¹Â'Ω̂1⁻¹Π̂1,

where Ω̂1 is a consistent estimator of V1 = V(η1), the asymptotic variance-covariance matrix of η1. We complete our class of estimators of α1 by adding any "wrong" generalized least squares estimator obtained by treating V1 as if it were an arbitrary K × K positive definite matrix.
We will obtain the asymptotic variance-covariance matrices of ᾱ1 and α̃1 and compare them with that of α̂1 given in (3.11). Noting

(4.6)  Π̂1 − Π1 = (X'X)⁻¹X'v1

and using (3.7) and (3.9), we can easily obtain

(4.7)  V1 = V(η1) = c(X'X)⁻¹ + γ1²(X'ΛX)⁻¹,

where c = σ1² − 2γ1σ12 as defined before. Therefore we have

(4.8)  V(α̃1) = (H'V1⁻¹H)⁻¹

and

(4.9)  V(ᾱ1) = (H'H)⁻¹H'V1H(H'H)⁻¹.

To facilitate the comparison, we rewrite (3.11) as

(4.10)  V(α̂1) = (H'X'XH)⁻¹H'X'XV1X'XH(H'X'XH)⁻¹,

where we have used the equality

(4.11)  X'V(w1)X = X'XV1X'X,

obtained from (3.10) and (4.7). Either directly from (3.2) or indirectly from (4.10) it is clear that Heckman's estimator of α1 is identical to a "wrong" generalized least squares estimator applied to regression equation (4.3) treating V1 as if it were (X'X)⁻¹. Hence we can conclude that

(4.12)  V(α̂1) − V(α̃1) = non-negative definite

and

(4.13)  V(α̂1) − V(ᾱ1) = indefinite.
However, it is interesting to note that if the variance of Π̂1 clearly dominated γ1² times the variance of Π̂2, the efficiency of α̂1 would be close to that of α̃1.
The computation of α̃1 is more involved than that of ᾱ1 or α̂1 but is quite feasible, since it involves the inversion of only a K × K matrix. In constructing a consistent estimator of V1, the γ1 which appears in V1 may be consistently estimated by arbitrarily picking one estimate of γ1 obtainable from (Π̂1, Π̂2). A consistent estimate of σ1² can be obtained from the least squares residuals of (2.7) after putting δ1 = 0, and, finally, σ12 can be consistently estimated by T⁻¹Σ dt v̂1t f̂t⁻¹, where v̂1t is the least squares residual of (2.7) after putting δ1 = 0 and f̂t is obtained by replacing Π2 in ft with Π̂2. That this estimate of σ12 is consistent follows from (2.10) and (3.8). As can easily be shown, one could get a more efficient estimate of α1 than α̃1 by applying the generalized least squares method to regression equation (3.1). However, this estimator has the disadvantage of requiring the inversion of a T × T matrix.
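The feasible generalized least squares estimator (4.5) can be sketched numerically as follows; the simulation design, seeds, and parameter values are all hypothetical, and only the formulas cited in the comments come from the text.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
T = 20000
K, K1 = 4, 2
X = np.column_stack([np.ones(T)] + [rng.normal(size=T) for _ in range(K - 1)])
J1 = np.zeros((K, K1)); J1[0, 0] = J1[1, 1] = 1.0    # X J1 = X1

gamma1, beta1 = 0.5, np.array([1.0, -1.0])
Pi2 = np.array([0.3, 0.4, 0.8, -0.5])
Pi1 = gamma1 * Pi2 + J1 @ beta1                      # relation (4.1)

v2 = rng.normal(size=T)
u1 = 0.6 * v2 + rng.normal(size=T)
v1 = gamma1 * v2 + u1                                # reduced-form error of y1
y1 = X @ Pi1 + v1
d = (X @ Pi2 + v2 > 0).astype(float)

# Reduced-form estimates: OLS for Pi1, probit MLE for Pi2.
Pi1_hat = np.linalg.lstsq(X, y1, rcond=None)[0]
def negll(p):
    z = X @ p
    return -np.sum(d * norm.logcdf(z) + (1.0 - d) * norm.logcdf(-z))
Pi2_hat = minimize(negll, np.zeros(K), method="BFGS").x

# Pieces of V1_hat = c (X'X)^{-1} + gamma1^2 (X'Lambda X)^{-1}, eq. (4.7).
A_hat = np.column_stack([Pi2_hat, J1])
g1 = np.linalg.lstsq(A_hat, Pi1_hat, rcond=None)[0][0]   # one consistent gamma1
v1_hat = y1 - X @ Pi1_hat
s11 = v1_hat @ v1_hat / T                                # sigma_1 squared
z = X @ Pi2_hat
f, F = norm.pdf(z), norm.cdf(z)
s12 = np.mean(d * v1_hat / f)        # T^{-1} sum of d_t v1t_hat / f_t_hat
c = s11 - 2.0 * g1 * s12
XLX = (X * (f**2 / (F * (1.0 - F)))[:, None]).T @ X      # X' Lambda X
V1 = c * np.linalg.inv(X.T @ X) + g1**2 * np.linalg.inv(XLX)

# Feasible GLS of Pi1_hat on A_hat with weight V1^{-1}, eq. (4.5).
W = np.linalg.inv(V1)
alpha1_gls = np.linalg.solve(A_hat.T @ W @ A_hat, A_hat.T @ W @ Pi1_hat)
print(alpha1_gls)    # close to (0.5, 1.0, -1.0)
```

Note that, as the text emphasizes, only K × K matrices are inverted here, whereas GLS applied directly to (3.1) would require a T × T inversion.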
Estimation of α2

From (4.2) we have

(4.14)  Π̂2 = γ2Π̂1 + J2β2 + (Π̂2 − Π2) − γ2(Π̂1 − Π1)
            = Ĝα2 + η2,

where Ĝ = (Π̂1, J2), J2 is defined after (3.12), and η2 = (Π̂2 − Π2) − γ2(Π̂1 − Π1). We define ᾱ2 as the least squares estimator of α2,

(4.15)  ᾱ2 = (Ĝ'Ĝ)⁻¹Ĝ'Π̂2,

and define α̃2 as the generalized least squares estimator of α2,

(4.16)  α̃2 = (Ĝ'Ω̂2⁻¹Ĝ)⁻¹Ĝ'Ω̂2⁻¹Π̂2,

where Ω̂2 is a consistent estimator of V2 = V(η2), the asymptotic variance-covariance matrix of η2. We complete our class of estimators of α2 by adding any "wrong" generalized least squares estimator obtained by treating V2 as if it were an arbitrary K × K positive definite matrix.
We will obtain the asymptotic variance-covariance matrices of ᾱ2 and α̃2 and compare them with that of α̂2 given in (3.20). Using (4.6), (3.7), and (3.9), we can easily obtain

(4.17)  V2 = V(η2) = d(X'X)⁻¹ + (X'ΛX)⁻¹,

where d = γ2²σ1² − 2γ2σ12 as defined before. Then we have

(4.18)  V(α̃2) = (G'V2⁻¹G)⁻¹

and

(4.19)  V(ᾱ2) = (G'G)⁻¹G'V2G(G'G)⁻¹,

where G = (Π1, J2). Since we have from (3.14) and (4.17)

(4.20)  X'V(w2)X = γ2⁻²X'XV2X'X,

and since it can be shown that

(4.21)  Q = −γ2⁻¹GP,

we can rewrite (3.20) as

(4.22)  V(α̂2) = (G'X'XG)⁻¹G'X'XV2X'XG(G'X'XG)⁻¹.

Equation (4.22) makes it apparent that Heckman's estimator of α2 is asymptotically equivalent to a "wrong" generalized least squares estimator applied to regression equation (4.14) treating V2 as if it were (X'X)⁻¹. Hence we can conclude that

(4.23)  V(α̂2) − V(α̃2) = non-negative definite

and

(4.24)  V(α̂2) − V(ᾱ2) = indefinite.

However, if γ2² times the variance of Π̂1 clearly dominated the variance of Π̂2, the efficiency of α̂2 would be close to that of α̃2.

Regarding the computation of α̃2, a remark similar to that in the last paragraph of the preceding subsection on the estimation of α1 applies.
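The algebra linking λ, P, G, and Q can be checked numerically. The following sketch uses arbitrary hypothetical parameter values, builds reduced-form coefficients satisfying (4.1)-(4.2), and verifies relation (4.21):

```python
import numpy as np

K, K1, K2 = 4, 2, 2
gamma1, gamma2 = 0.5, 0.8            # hypothetical; gamma1 * gamma2 != 1
beta1 = np.array([1.0, -1.0])
beta2 = np.array([0.7, 0.3])

# Selection matrices: X J1 = X1 (columns 0-1), X J2 = X2 (columns 2-3).
J1 = np.zeros((K, K1)); J1[0, 0] = J1[1, 1] = 1.0
J2 = np.zeros((K, K2)); J2[2, 0] = J2[3, 1] = 1.0

# Reduced-form coefficients solving (4.1) and (4.2) jointly.
Pi1 = (J1 @ beta1 + gamma1 * (J2 @ beta2)) / (1.0 - gamma1 * gamma2)
Pi2 = gamma2 * Pi1 + J2 @ beta2
assert np.allclose(Pi1, gamma1 * Pi2 + J1 @ beta1)   # (4.1) holds

# The Jacobian P of (3.19) mapping lambda-errors to alpha2-errors.
P = np.zeros((1 + K2, 1 + K2))
P[0, 0] = -gamma2**2
P[1:, 0] = -gamma2 * beta2
P[1:, 1:] = gamma2 * np.eye(K2)

G = np.column_stack([Pi1, J2])       # G = (Pi1, J2)
Q = np.column_stack([Pi2, -J2])      # Q = (Pi2, -J2)
print(np.allclose(Q, -(1.0 / gamma2) * G @ P))   # relation (4.21): True
```

The first column of GP equals −γ2(γ2Π1 + J2β2) = −γ2Π2, which is exactly how (4.2) enters the identity.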
5. ESTIMATION WHERE δ1 ≠ 0
In the case of δ1 ≠ 0, the class of estimators we proposed in Section 4 becomes more involved, because the right-hand side of (2.7) contains the endogenous variable d and therefore Π1 can no longer be consistently estimated by least squares. Of course, Π1 can still be consistently estimated by the instrumental variables method, but that means we must construct an instrumental variable for d. For this reason we will not discuss this class of estimators in this section. Heckman's estimator for the present case has the advantage of not requiring an estimate of Π1 in the first step, but it also has a certain disadvantage which we will point out later. Therefore we will modify his estimator, hoping to retain the advantage and eliminate the disadvantage.
Estimation of α1*

Inserting (2.8) into (2.5) we obtain

(5.1)  y1 = Z1α1* + w1,

where Â and w1 are as defined after (3.1) and we have newly defined Z1 = (XÂ, d) and α1*' = (α1', δ1). Premultiply (5.1) by X' to obtain

(5.2)  X'y1 = X'Z1α1* + X'w1.

The estimators of α1* which we propose in lieu of Heckman's estimator are the least squares and the generalized least squares estimators applied to (5.2). The generalized least squares estimator reduces to α̃1 of Section 4 if δ1 = 0. The computation of these estimators and their asymptotic variance-covariance matrices is relatively easy because X'V(w1)X has the simple form given in (3.10).
To define Heckman's estimator, rewrite (5.1) as

(5.3)  y1 = XÂα1 + δ1F̂ + w1*,

where w1* = w1 + δ1(d − F) − δ1(F̂ − F), and F and F̂ are T-component vectors whose tth elements are F(Π2'xt) and F(Π̂2'xt), respectively. Heckman proposes estimating α1 and δ1 by the least squares estimator applied to (5.3). A disadvantage of this estimator is that the expression for V(w1*) is quite lengthy, as shown in (5.4) below, and hence so is the asymptotic variance-covariance matrix.⁷
Define D1 and D2 as T × T diagonal matrices whose tth diagonal elements are ft and Ft(1 − Ft), respectively. Then we can show

(5.4)  X'V(w1*)X = σ1²X'X + δ1²X'D2X + 2δ1σ12X'D1X
         + X'(γ1I + δ1D1)X(X'ΛX)⁻¹X'(γ1I + δ1D1)X
         − X'(γ1I + δ1D1)X(X'ΛX)⁻¹X'(σ12Λ + δ1D1)X
         − X'(σ12Λ + δ1D1)X(X'ΛX)⁻¹X'(γ1I + δ1D1)X.
One cannot make a definite comparison between the asymptotic variance-covariance matrix of Heckman's estimator of α1* and that of either the least squares or the generalized least squares estimator of α1* defined above.
⁷ V(w1*) is the exact variance-covariance matrix of the random vector obtained by replacing F̂ − F, which appears in the definition of w1*, by its asymptotic equivalent, which is obtained by approximating F̂ − F by the first term of the Taylor expansion f(Π2'xt)xt'(Π̂2 − Π2), where we further replace Π̂2 − Π2 by the right-hand side of (3.6). As in the occurrence of V(w1) and V(w2) earlier in the paper (see footnote 5), the justification for making the substitution in computing V(w1*) is that the resulting expressions of the form X'V(w1*)X do in fact give correctly the value of V(X'w1*).
However, we can show that the least squares or the generalized least squares estimator applied to (5.2) is better than the least squares or the generalized least squares estimator applied to

(5.5)  X'y1 = X'XÂα1 + δ1X'F̂ + X'w1*,

which we get by premultiplying (5.3) by X'. This follows from

(5.6)  X'V(w1*)X − X'V(w1)X = non-negative definite.
Estimation of α2

Inserting (2.8) into (2.6) and solving for y1, we obtain, using assumption (2.4),

(5.7)  y1 = Z2λ* + w2,

where Z2 = (XΘ̂, d), λ*' = (λ', δ1), and Θ̂ and w2 are as defined after (3.12). Premultiply (5.7) by X' to obtain

(5.8)  X'y1 = X'Z2λ* + X'w2.

As in the estimation of α1*, we propose the least squares and the generalized least squares estimators of λ* applied to (5.8). The computation of these estimators and their asymptotic variance-covariance matrices is relatively easy because X'V(w2)X has the simple form given in (3.14). Given an estimator of λ, the corresponding estimator of α2 is uniquely determined by the obvious one-to-one correspondence, and its asymptotic variance-covariance matrix can be obtained from that of the estimator of λ using (3.18).
To define Heckman's estimator, rewrite (5.7) as

(5.9)  y1 = XΘ̂λ + δ1F̂ + w2*,

where w2* = w2 + δ1(d − F) − δ1(F̂ − F). Heckman proposes estimating λ and δ1 by the least squares estimator applied to (5.9). A disadvantage of this estimator is that the expression for V(w2*) is quite lengthy. It is easy to show that V(w2*) is obtained by replacing (γ1I + δ1D1) with (γ2⁻¹I + δ1D1) in (5.4). Regarding the comparison between Heckman's estimator of λ* and the least squares and generalized least squares estimators of λ* defined above, we can make a statement similar to that in the last paragraph of the preceding subsection on the estimation of α1*; in fact, (5.6) holds verbatim if w1* and w1 are replaced with w2* and w2, respectively. Note that we have ignored the problem of utilizing the constraint (2.4) in estimation, having obtained two separate estimates of δ1 from the estimates of α1* and λ*.
6. ASYMPTOTICALLY EFFICIENT ESTIMATION
In this section we will define the maximum likelihood estimator of all the parameters of the model, indicate how to obtain its asymptotic variance-covariance matrix, and present a method of calculating an estimator that has the same asymptotic distribution as the maximum likelihood estimator utilizing relationships (4.1) and (4.2). These estimators may be too cumbersome to compute in practice, but the results of this section will be useful at least for the purpose of understanding the difference between the estimators we have considered in the previous sections and the asymptotically efficient estimators of this section.
Because of the joint normality of v1 and v2, we have

(6.1)  v2t = ρv1t + εt,

where ρ = σ12/σ1² and εt is independent of v1t with variance equal to σ2² − σ12²/σ1². For the maximum likelihood estimation it is better to use a normalization different from (2.9): namely,

(6.2)  σ2² − σ12²/σ1² = 1.

From (2.7), (2.8), and (6.1), the reduced form can be written as

(6.3)  y1t = Π1'xt + v1t

and

(6.4)  y*2t = Π'xt + ρy1t + εt,

where Π = Π2 − ρΠ1. Since the error terms in (6.3) and (6.4) are independent, the log likelihood function can be simply written as

(6.5)  log L = Σ log f1(y1t − Π1'xt) + Σ dt log F(Π'xt + ρy1t)
             + Σ (1 − dt) log [1 − F(Π'xt + ρy1t)],

where F is the standard normal distribution function as before and f1 is the density function of N(0, σ1²).
Define θ' = (Π1', σ1², Π', ρ) and ω' = (α1', α2', σ1², ρ). If we had the exactly identified model, there would be a one-to-one correspondence between θ and ω, and therefore one could simply maximize (6.5) with respect to θ without constraint. This maximization is easy because the first term on the right-hand side of (6.5) is the log likelihood of the standard regression model and the second and third terms constitute that of the standard probit model. The asymptotic variance-covariance matrix is equal to [−E ∂²log L/(∂θ∂θ')]⁻¹, which can be easily calculated from

(6.6)  E ∂²log L/∂Π1∂Π1' = −(1/σ1²)X'X

and the analogous expressions for the remaining blocks, where Λ is the T × T diagonal matrix whose tth diagonal element is ft²Ft⁻¹(1 − Ft)⁻¹ evaluated at xt'Π + ρy1t. Note that all the other cross-product terms vanish.
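Because (6.5) separates into a Gaussian regression part and a probit part, the unconstrained maximization over θ can be sketched directly: OLS delivers Π̂1 and σ̂1², and a probit of dt on (xt, y1t) delivers Π̂ and ρ̂. The values below are hypothetical illustrations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
T = 20000
X = np.column_stack([np.ones(T), rng.normal(size=T)])
Pi1 = np.array([0.2, 1.0])
Pi = np.array([-0.3, 0.5])
rho, sig1 = 0.6, 1.5

y1 = X @ Pi1 + sig1 * rng.normal(size=T)              # (6.3)
eps = rng.normal(size=T)                              # variance 1 by (6.2)
d = (X @ Pi + rho * y1 + eps > 0).astype(float)       # (6.4) with (2.3)

# First term of (6.5): a standard regression -> OLS and residual variance.
Pi1_hat = np.linalg.lstsq(X, y1, rcond=None)[0]
s1_hat = np.mean((y1 - X @ Pi1_hat) ** 2)             # estimate of sigma_1^2

# Second and third terms of (6.5): a standard probit of d on (x_t, y1t).
W = np.column_stack([X, y1])
def negll(p):
    z = W @ p
    return -np.sum(d * norm.logcdf(z) + (1.0 - d) * norm.logcdf(-z))
est = minimize(negll, np.zeros(3), method="BFGS").x
Pi_hat, rho_hat = est[:2], est[2]
print(rho_hat, s1_hat)     # close to rho = 0.6 and sig1**2 = 2.25
```

This is exactly why the unconstrained maximization is "easy": no joint optimization over the two blocks of θ is ever required.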
If the model is over-identified, however, one must maximize (6.5) with respect to ω. Then not only is the maximization more cumbersome, but the asymptotic variance-covariance matrix also becomes the more complicated formula given by

[ (∂θ'/∂ω)(−E ∂²log L/∂θ∂θ')(∂θ/∂ω') ]⁻¹.
It is easy to show that one obtains an asymptotically efficient estimator if one applies the generalized least squares method to (4.3) and (4.14) simultaneously, where the hatted variables are now the unconstrained maximum likelihood estimates. This shows that the estimators we discussed in the previous section are inefficient for two reasons: (1) the estimates of Π used in (4.3) and (4.14) are not as efficient as the unconstrained maximum likelihood estimates, and (2) the generalized least squares method was not applied to (4.3) and (4.14) simultaneously. However, note that avoiding either cause of the inefficiency would considerably increase the computational cost. For example, even though the unconstrained maximum likelihood estimates of Π1 and Π2 are just as easy to compute, their asymptotic variance-covariance matrix is much more complicated than those of Π̂1 and Π̂2 which we defined in the previous sections.
Stanford University
Manuscript received June, 1977; final revision received October, 1977.
REFERENCES
[I] AMEMIYA,T.: "The Maximum Likelihood and the Nonlinear Three-Stage Least Squares
Estimator in the General Nonlinear Simultaneous Equation Model," Econometrica, 45 (1977).
955-968.
121 : "The. Estimation of a Simultaneous-Equation Tobit Model," Technical Report No. 236,
Economics Series. Institute for Mathematical Studies in the Social Sciences, Stanford University, May, 1977.
[3] ASHFORD,J. R., AND R. R. SOWDEN:"Multivariate Probit Analysis," Biometries, 26 (1970),
535-546.
[4] HECKMAN,J. J.: "Dummy Endogenous Variables in a Simultaneous Equation System," mimeographed, University of Chicago and NBER, March, 1977.
[ 5 ] NELSON,F., AND L. OLSON:"Specification and Estimation of a Simultaneous-Equation Model
with Limited Dependent Variables," Social Science Working Paper No. 149, California
Institute of Technology, January, 1977.