A supplement to “Asymptotic Distributions of Quasi-Maximum Likelihood Estimators for Spatial Autoregressive Models” (for reference only; not for publication)
Appendix A: Some Useful Lemmas
A.1 Uniform Boundedness of Matrices in Row and Column Sums
Lemma A.1 Suppose that the spatial weights matrix $W_n$ is a non-negative matrix with its $(i,j)$th element $w_{n,ij} = d_{ij}/\sum_{l=1}^n d_{il}$, where $d_{ij} \ge 0$ for all $i, j$.
(1) If the row sums $\sum_{j=1}^n d_{ij}$ are bounded away from zero at the rate $h_n$ uniformly in $i$, and the column sums $\sum_{i=1}^n d_{ij}$ are $O(h_n)$ uniformly in $j$, then $\{W_n\}$ are uniformly bounded in column sums.
(2) If $d_{ij} = d_{ji}$ for all $i$ and $j$, and the row sums $\sum_{j=1}^n d_{ij}$ are $O(h_n)$ and bounded away from zero at the rate $h_n$ uniformly in $i$, then $\{W_n\}$ are uniformly bounded in column sums.
Proof: (1) Let $c_1$ and $c_2$ be positive constants such that $c_1 h_n \le \sum_{j=1}^n d_{ij}$ for all $i$ and $\sum_{i=1}^n d_{ij} \le c_2 h_n$ for all $j$, for large $n$. It follows that $\sum_{i=1}^n w_{n,ij} = \sum_{i=1}^n d_{ij}/\sum_{l=1}^n d_{il} \le \frac{1}{c_1 h_n}\sum_{i=1}^n d_{ij} \le \frac{c_2}{c_1}$ for all $j$.
(2) This is a special case of (1) because $\sum_{l=1}^n d_{il} = O(h_n)$ and $\sum_{i=1}^n d_{ij} = \sum_{i=1}^n d_{ji}$ imply $\sum_{i=1}^n d_{ij} = O(h_n)$.
Q.E.D.
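The column-sum bound in the proof of (1) is easy to observe numerically. The sketch below (the dimension and the non-negative $d_{ij}$ values are illustrative assumptions, not from the paper) row-normalizes $d_{ij}$ and checks that every column sum of $W_n$ is at most $c_2/c_1$.

```python
# Numerical check of Lemma A.1(1): for W_n with w_ij = d_ij / sum_l d_il,
# every column sum of W_n is bounded by c2/c1, where c1 is a lower bound
# on the row sums of d and c2 an upper bound on its column sums.
# The d_ij values below are arbitrary illustrative choices.
import random

random.seed(0)
n = 50
d = [[random.random() for _ in range(n)] for _ in range(n)]

row_sums = [sum(d[i]) for i in range(n)]
col_sums = [sum(d[i][j] for i in range(n)) for j in range(n)]
c1 = min(row_sums)   # row sums of d bounded away from zero by c1
c2 = max(col_sums)   # column sums of d bounded above by c2

W = [[d[i][j] / row_sums[i] for j in range(n)] for i in range(n)]
max_col_sum_W = max(sum(W[i][j] for i in range(n)) for j in range(n))

# The proof's chain of inequalities: sum_i w_ij <= (1/c1) sum_i d_ij <= c2/c1.
assert max_col_sum_W <= c2 / c1 + 1e-12
print("max column sum of W_n:", round(max_col_sum_W, 4),
      "bound c2/c1:", round(c2 / c1, 4))
```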
Lemma A.2 Suppose that $\limsup_n \|\lambda_0 W_n\| < 1$, where $\|\cdot\|$ is a matrix norm. Then $\{\|S_n^{-1}\|\}$ is uniformly bounded; in particular, taking $\|\cdot\|$ to be the maximum row sum norm and the maximum column sum norm, $\{S_n^{-1}\}$ are uniformly bounded in both row and column sums.
Proof: For any matrix norm $\|\cdot\|$, $\|\lambda_0 W_n\| < 1$ implies that $S_n^{-1} = \sum_{k=0}^\infty (\lambda_0 W_n)^k$ (Horn and Johnson 1985, p.301). Let $c = \sup_n \|\lambda_0 W_n\|$. Then, $\|S_n^{-1}\| \le \sum_{k=0}^\infty \|\lambda_0 W_n\|^k \le \sum_{k=0}^\infty c^k = \frac{1}{1-c} < \infty$ for all $n$.
Q.E.D.
Lemma A.3 Suppose that $\{\|W_n\|\}$ and $\{\|S_n^{-1}\|\}$, where $\|\cdot\|$ is a matrix norm, are bounded. Then $\{\|S_n(\lambda)^{-1}\|\}$, where $S_n(\lambda) = I_n - \lambda W_n$, is uniformly bounded in a neighborhood of $\lambda_0$.
Proof: Let $c$ be a constant such that $\|W_n\| \le c$ and $\|S_n^{-1}\| \le c$ for all $n$. We note that $S_n^{-1}(\lambda) = (S_n - (\lambda - \lambda_0)W_n)^{-1} = S_n^{-1}(I_n - (\lambda - \lambda_0)G_n)^{-1}$, where $G_n = W_n S_n^{-1}$. By the submultiplicative property of a matrix norm, $\|G_n\| \le \|W_n\| \cdot \|S_n^{-1}\| \le c^2$ for all $n$.
Let $B_1(\lambda_0) = \{\lambda : |\lambda - \lambda_0| < 1/c^2\}$. It follows that, for any $\lambda \in B_1(\lambda_0)$, $\|(\lambda - \lambda_0)G_n\| \le |\lambda - \lambda_0| \cdot \|G_n\| < 1$. As $\|(\lambda - \lambda_0)G_n\| < 1$, $I_n - (\lambda - \lambda_0)G_n$ is invertible and $(I_n - (\lambda - \lambda_0)G_n)^{-1} = \sum_{k=0}^\infty (\lambda - \lambda_0)^k G_n^k$. Therefore, $\|(I_n - (\lambda - \lambda_0)G_n)^{-1}\| \le \sum_{k=0}^\infty |\lambda - \lambda_0|^k \|G_n\|^k \le \sum_{k=0}^\infty |\lambda - \lambda_0|^k c^{2k} = \frac{1}{1 - |\lambda - \lambda_0|c^2} < \infty$ for any $\lambda \in B_1(\lambda_0)$. The result follows by taking a closed neighborhood $B(\lambda_0)$ contained in $B_1(\lambda_0)$. In $B(\lambda_0)$, $\sup_{\lambda \in B(\lambda_0)} |\lambda - \lambda_0|c^2 < 1$, and, hence,
$$\sup_{\lambda \in B(\lambda_0)} \|S_n^{-1}(\lambda)\| \le \|S_n^{-1}\| \cdot \sup_{\lambda \in B(\lambda_0)} \|(I_n - (\lambda - \lambda_0)G_n)^{-1}\| \le \frac{c}{1 - \sup_{\lambda \in B(\lambda_0)}|\lambda - \lambda_0|c^2} < \infty.$$
Q.E.D.
Lemma A.4 Suppose that $\|W_n\| \le 1$ for all $n$, where $\|\cdot\|$ is a matrix norm. Then $\{\|S_n(\lambda)^{-1}\|\}$, where $S_n(\lambda) = I_n - \lambda W_n$, are uniformly bounded in any closed subset of $(-1, 1)$.
Proof: For any $\lambda \in (-1, 1)$, $\|\lambda W_n\| \le |\lambda| \cdot \|W_n\| < 1$ and, hence, $S_n^{-1}(\lambda) = \sum_{k=0}^\infty \lambda^k W_n^k$. It follows that, for any $|\lambda| < 1$, $\|S_n^{-1}(\lambda)\| \le \sum_{k=0}^\infty |\lambda|^k \cdot \|W_n\|^k \le \sum_{k=0}^\infty |\lambda|^k = \frac{1}{1-|\lambda|}$. Hence, for any closed subset $B$ of $(-1, 1)$, $\sup_{\lambda \in B} \|S_n^{-1}(\lambda)\| \le \sup_{\lambda \in B} \frac{1}{1-|\lambda|} < \infty$.
Q.E.D.
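The Neumann-series argument behind Lemmas A.2-A.4 can be illustrated concretely. The sketch below (the circulant nearest-neighbor $W_n$, the dimension $n = 10$ and $\lambda = 0.5$ are illustrative assumptions, not from the paper) sums $S_n^{-1}(\lambda) = \sum_{k=0}^\infty \lambda^k W_n^k$ for a row-normalized $W_n$ with $\|W_n\|_\infty = 1$ and checks the geometric bound $\|S_n^{-1}(\lambda)\|_\infty \le 1/(1-|\lambda|)$.

```python
# Neumann-series check of Lemma A.4: with ||W||_inf <= 1 and |lambda| < 1,
# ||S^{-1}(lambda)||_inf <= 1/(1 - |lambda|).  W is an illustrative
# row-normalized "two nearest neighbors on a circle" weights matrix.
n = 10
lam = 0.5
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    W[i][(i - 1) % n] = 0.5
    W[i][(i + 1) % n] = 0.5

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Accumulate S^{-1}(lam) = sum_{k>=0} (lam W)^k; 100 terms is far beyond
# float precision since each term shrinks by the factor |lam| = 0.5.
Sinv = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
term = [row[:] for row in Sinv]
for _ in range(100):
    term = matmul(term, W)
    term = [[lam * x for x in row] for row in term]
    for i in range(n):
        for j in range(n):
            Sinv[i][j] += term[i][j]

row_norm = max(sum(abs(x) for x in row) for row in Sinv)
assert row_norm <= 1.0 / (1.0 - lam) + 1e-9
print("||S^{-1}(0.5)||_inf =", round(row_norm, 6), "<=", 1.0 / (1.0 - lam))
```

Because this $W_n$ has row sums exactly one, the bound $1/(1-|\lambda|)$ is attained in the limit of the series.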
Lemma A.5 Suppose that the elements of the $n \times k$ matrices $X_n$ are uniformly bounded, and the limiting matrix of $\frac{1}{n}X_n'X_n$ exists and is nonsingular. Then the projectors $M_n$ and $(I_n - M_n)$, where $M_n = I_n - X_n(X_n'X_n)^{-1}X_n'$, are uniformly bounded in both row and column sums.
Proof: Let $B_n = (\frac{1}{n}X_n'X_n)^{-1}$. From the assumption of the lemma, $B_n$ converges to a finite limit. Therefore, there exists a constant $c_b$ such that $|b_{n,ij}| \le c_b$ for all $n$, where $b_{n,ij}$ is the $(i,j)$th element of $B_n$. By the uniform boundedness of $X_n$, there exists a constant $c_x$ such that $|x_{n,ij}| \le c_x$ for all $n$, $i$ and $j$. Let $A_n = \frac{1}{n}X_n(\frac{X_n'X_n}{n})^{-1}X_n'$, which can be rewritten as $A_n = \frac{1}{n}\sum_{s=1}^k\sum_{r=1}^k b_{n,rs}x_{n,r}x_{n,s}'$, where $x_{n,r}$ is the $r$th column of $X_n$. It follows that $\sum_{j=1}^n |a_{n,ij}| \le \sum_{j=1}^n \frac{1}{n}\sum_{s=1}^k\sum_{r=1}^k |b_{n,rs}x_{n,ir}x_{n,js}| \le k^2 c_b c_x^2$ for all $i = 1, \cdots, n$. Similarly, $\sum_{i=1}^n |a_{n,ij}| \le \sum_{i=1}^n \frac{1}{n}\sum_{s=1}^k\sum_{r=1}^k |b_{n,rs}x_{n,ir}x_{n,js}| \le k^2 c_b c_x^2$ for all $j = 1, \cdots, n$. That is, $\{X_n(X_n'X_n)^{-1}X_n'\}$ are uniformly bounded in both row and column sums. Consequently, $\{M_n\}$ are also uniformly bounded in both row and column sums.
Q.E.D.
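The proof's bound $\sum_j |a_{n,ij}| \le k^2 c_b c_x^2$ can be observed directly. The sketch below (the design matrix with an intercept and a bounded cosine regressor is an illustrative assumption, not from the paper) computes the maximum absolute row sum of $X_n(X_n'X_n)^{-1}X_n'$ for growing $n$ and checks it against that bound with $k = 2$ and $c_x = 1$.

```python
# Check of Lemma A.5: row sums of P_n = X(X'X)^{-1}X' stay bounded as n
# grows, and satisfy the proof's bound k^2 * c_b * c_x^2.  The design
# X = [1, cos(i)] is an illustrative choice with elements bounded by 1.
import math

def projector_max_abs_row_sum(n):
    X = [[1.0, math.cos(i)] for i in range(n)]
    # (X'X)/n for a 2-column design, inverted explicitly.
    a = sum(x[0] * x[0] for x in X) / n
    b = sum(x[0] * x[1] for x in X) / n
    c = sum(x[1] * x[1] for x in X) / n
    det = a * c - b * b
    B = [[c / det, -b / det], [-b / det, a / det]]  # B_n = ((X'X)/n)^{-1}
    cb = max(abs(B[r][s]) for r in range(2) for s in range(2))
    max_row = 0.0
    for i in range(n):
        # p_ij = (1/n) x_i' B x_j
        row = sum(abs(sum(X[i][r] * B[r][s] * X[j][s]
                          for r in range(2) for s in range(2))) / n
                  for j in range(n))
        max_row = max(max_row, row)
    return max_row, cb

for n in (50, 200, 800):
    max_row, cb = projector_max_abs_row_sum(n)
    assert max_row <= 4 * cb + 1e-9  # k^2 * c_b * c_x^2 with k = 2, c_x = 1
    print(n, round(max_row, 3), "<=", round(4 * cb, 3))
```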
A.2 Orders of Some Relevant Quantities
Lemma A.6 Suppose that the elements of the sequences of vectors $P_n = (p_{n1}, \cdots, p_{nn})'$ and $Q_n = (q_{n1}, \cdots, q_{nn})'$ are uniformly bounded for all $n$.
1) If $\{A_n\}$ are uniformly bounded in either row or column sums, then $|Q_n'A_nP_n| = O(n)$.
2) If the row sums of $\{A_n\}$ and $\{Z_n\}$ are uniformly bounded, $|z_{i,n}A_nP_n| = O(1)$ uniformly in $i$, where $z_{i,n}$ is the $i$th row of $Z_n$.
Proof: Let $c_1$ and $c_2$ be constants such that $|p_{ni}| \le c_1$ and $|q_{ni}| \le c_2$. For 1), there exists a constant $c_3$ such that $\frac{1}{n}\sum_{i=1}^n\sum_{j=1}^n |a_{n,ij}| \le c_3$. Hence, $|Q_n'A_nP_n| = |\sum_{i=1}^n\sum_{j=1}^n a_{n,ij}q_{ni}p_{nj}| \le c_1c_2\sum_{i=1}^n\sum_{j=1}^n |a_{n,ij}| \le nc_1c_2c_3$. For 2), let $c_4$ be a constant such that $\sum_{j=1}^n |a_{n,ij}| \le c_4$ for all $n$ and $i$. It follows that $|e_{ni}'A_nP_n| = |\sum_{j=1}^n a_{n,ij}p_{nj}| \le c_1\sum_{j=1}^n |a_{n,ij}| \le c_1c_4$, where $e_{ni}$ is the $i$th unit column vector. Because $\{Z_n\}$ is uniformly bounded in row sums, $\sum_{j=1}^n |z_{n,ij}| \le c_z$ for some constant $c_z$. It follows that $|z_{i,n}A_nP_n| \le \sum_{j=1}^n |z_{n,ij}| \cdot |e_{nj}'A_nP_n| \le (\sum_{j=1}^n |z_{n,ij}|)c_1c_4 \le c_zc_1c_4$.
Q.E.D.
Lemma A.7 Suppose $\{A_n\}$ are uniformly bounded either in row sums or in column sums. Then,
1) the elements $a_{n,ij}$ of $A_n$ are uniformly bounded in $i$ and $j$,
2) $tr(A_n^m) = O(n)$ for $m \ge 1$, and
3) $tr(A_nA_n') = O(n)$.
Proof: If $A_n$ is uniformly bounded in row sums, let $c_1$ be the constant such that $\max_{1\le i\le n}\sum_{j=1}^n |a_{n,ij}| \le c_1$ for all $n$. On the other hand, if $A_n$ is uniformly bounded in column sums, let $c_2$ be the constant such that $\max_{1\le j\le n}\sum_{i=1}^n |a_{n,ij}| \le c_2$ for all $n$. Therefore, $|a_{n,ij}| \le \sum_{l=1}^n |a_{n,il}| \le c_1$ if $A_n$ is uniformly bounded in row sums; otherwise, $|a_{n,ij}| \le \sum_{k=1}^n |a_{n,kj}| \le c_2$ if $A_n$ is uniformly bounded in column sums. The result 1) implies immediately that $tr(A_n) = O(n)$. If $A_n$ is uniformly bounded in row (column) sums, then $A_n^m$ for $m \ge 2$ is uniformly bounded in row (column) sums. Therefore, 1) implies $tr(A_n^m) = O(n)$. Finally, as $tr(A_nA_n') = \sum_{i=1}^n\sum_{j=1}^n a_{n,ij}^2$, $|tr(A_nA_n')| \le \sum_{i=1}^n(\sum_{j=1}^n |a_{n,ij}|)^2 \le nc_1^2$ if $A_n$ is uniformly bounded in row sums; otherwise, $|tr(A_nA_n')| \le \sum_{j=1}^n(\sum_{i=1}^n |a_{n,ij}|)^2 \le nc_2^2$.
Q.E.D.
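A quick numerical confirmation of part 3): for a banded matrix (an illustrative choice, not from the paper) whose row sums are bounded by $c_1$, $tr(A_nA_n')$ grows at most like $nc_1^2$.

```python
# Check of Lemma A.7(3): if the row sums of A are bounded by c1, then
# tr(A A') = sum_ij a_ij^2 <= n * c1^2.  A is an illustrative banded matrix.
def check(n):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(max(0, i - 2), min(n, i + 3)):
            A[i][j] = 1.0 / (1 + abs(i - j))
    c1 = max(sum(abs(x) for x in row) for row in A)
    tr_AAt = sum(A[i][j] ** 2 for i in range(n) for j in range(n))
    return tr_AAt, n * c1 ** 2

for n in (10, 100, 1000):
    tr_AAt, bound = check(n)
    assert tr_AAt <= bound + 1e-9
    print(n, round(tr_AAt, 2), "<=", round(bound, 2))
```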
Lemma A.8 Suppose that the elements $a_{n,ij}$ of the sequence of $n \times n$ matrices $\{A_n\}$, where $A_n = [a_{n,ij}]$, are $O(\frac{1}{h_n})$ uniformly in all $i$ and $j$, and $\{B_n\}$ is a sequence of conformable $n \times n$ matrices.
(1) If $\{B_n\}$ are uniformly bounded in column sums, the elements of $A_nB_n$ have the uniform order $O(\frac{1}{h_n})$.
(2) If $\{B_n\}$ are uniformly bounded in row sums, the elements of $B_nA_n$ have the uniform order $O(\frac{1}{h_n})$.
For both cases (1) and (2), $tr(A_nB_n) = tr(B_nA_n) = O(\frac{n}{h_n})$.
Proof: Consider (1). Let $a_{n,ij} = \frac{c_{n,ij}}{h_n}$. Because $a_{n,ij} = O(\frac{1}{h_n})$ uniformly in $i$ and $j$, there exists a constant $c_1$ so that $|c_{n,ij}| \le c_1$ for all $i$, $j$ and $n$. Because $\{B_n\}$ is uniformly bounded in column sums, there exists a constant $c_2$ so that $\sum_{k=1}^n |b_{n,kj}| \le c_2$ for all $n$ and $j$. Let $a_{i,n}$ be the $i$th row of $A_n$ and $b_{n,l}$ be the $l$th column of $B_n$. It follows that $|a_{i,n}b_{n,l}| \le \frac{1}{h_n}\sum_{j=1}^n |c_{n,ij}b_{n,jl}| \le \frac{c_1}{h_n}\sum_{j=1}^n |b_{n,jl}| \le \frac{c_1c_2}{h_n}$, for all $i$ and $l$. Furthermore, $|tr(A_nB_n)| = |\sum_{i=1}^n a_{i,n}b_{n,i}| \le \sum_{i=1}^n |a_{i,n}b_{n,i}| \le c_1c_2\frac{n}{h_n}$. These prove the results in (1). The results in (2) follow from (1) because $(B_nA_n)' = A_n'B_n'$ and the uniform boundedness in row sums of $\{B_n\}$ is equivalent to the uniform boundedness in column sums of $\{B_n'\}$.
Q.E.D.
Lemma A.9 Suppose that $A_n$ are uniformly bounded in both row and column sums, elements of the $n \times k$ matrices $X_n$ are uniformly bounded, and $\lim_{n\to\infty}\frac{X_n'X_n}{n}$ exists and is nonsingular. Let $M_n = I_n - X_n(X_n'X_n)^{-1}X_n'$. Then
(i) $tr(M_nA_n) = tr(A_n) + O(1)$,
(ii) $tr(A_n'M_nA_n) = tr(A_n'A_n) + O(1)$,
(iii) $tr[(M_nA_n)^2] = tr(A_n^2) + O(1)$, and
(iv) $tr[(A_n'M_nA_n)^2] = tr[(M_nA_nA_n')^2] = tr[(A_nA_n')^2] + O(1)$.
Furthermore, if $A_{n,ij} = O(\frac{1}{h_n})$ for all $i$ and $j$, then
(a) $tr^2(M_nA_n) = tr^2(A_n) + O(\frac{n}{h_n})$ and
(b) $\sum_{i=1}^n ((M_nA_n)_{ii})^2 = \sum_{i=1}^n (A_{n,ii})^2 + O(\frac{1}{h_n})$.
Proof: The assumptions imply that elements of the $k \times k$ matrix $(\frac{1}{n}X_n'X_n)^{-1}$ are uniformly bounded for large enough $n$. Lemma A.6 implies that elements of the $k \times k$ matrices $\frac{1}{n}X_n'A_nX_n$, $\frac{1}{n}X_n'A_nA_n'X_n$ and $\frac{1}{n}X_n'A_n^2X_n$ are also uniformly bounded. It follows that
$$tr(M_nA_n) = tr(A_n) - tr[(X_n'X_n)^{-1}X_n'A_nX_n] = tr(A_n) + O(1),$$
$$tr(A_n'M_nA_n) = tr(A_n'A_n) - tr[(X_n'X_n)^{-1}X_n'A_nA_n'X_n] = tr(A_n'A_n) + O(1),$$
and $tr((M_nA_n)^2) = tr(A_n^2) - 2tr[(X_n'X_n)^{-1}X_n'A_n^2X_n] + tr([(X_n'X_n)^{-1}X_n'A_nX_n]^2) = tr(A_n^2) + O(1)$. By (iii), $tr[(A_n'M_nA_n)^2] = tr[(M_nA_nA_n')^2] = tr[(A_nA_n')^2] + O(1)$.
When $A_{n,ij} = O(\frac{1}{h_n})$, from (i), $tr^2(M_nA_n) = (tr(A_n) + O(1))^2 = tr^2(A_n) + 2tr(A_n)\cdot O(1) + O(1) = tr^2(A_n) + O(\frac{n}{h_n})$. Because $A_n$ is uniformly bounded in column sums and elements of $X_n$ are uniformly bounded, $X_n'A_ne_{ni} = O(1)$ for all $i$. Hence, $\sum_{i=1}^n (M_nA_n)_{ii}^2 = \sum_{i=1}^n (A_{n,ii} - x_{i,n}(X_n'X_n)^{-1}X_n'A_ne_{ni})^2 = \sum_{i=1}^n (A_{n,ii} + O(\frac{1}{n}))^2 = \sum_{i=1}^n [(A_{n,ii})^2 + 2A_{n,ii}\cdot O(\frac{1}{n}) + O(\frac{1}{n^2})] = \sum_{i=1}^n (A_{n,ii})^2 + O(\frac{1}{h_n})$. Q.E.D.
Lemma A.10 Suppose that $A_n$ is an $n \times n$ matrix with its column sums being uniformly bounded, and elements of the $n \times k$ matrix $C_n$ are uniformly bounded. The elements $v_i$ of $V_n = (v_1, \cdots, v_n)'$ are i.i.d.$(0, \sigma^2)$. Then, $\frac{1}{\sqrt{n}}C_n'A_nV_n = O_P(1)$. Furthermore, if the limit of $\frac{1}{n}C_n'A_nA_n'C_n$ exists and is positive definite, then
$$\frac{1}{\sqrt{n}}C_n'A_nV_n \xrightarrow{D} N\Big(0, \sigma^2\lim_{n\to\infty}\frac{1}{n}C_n'A_nA_n'C_n\Big).$$
Proof: This is Lemma A.2 in Lee (2002). These results can be established by Chebyshev's inequality and the Liapounov double array central limit theorem.
Q.E.D.
A.3 First and Second Moments of Quadratic Forms and Limiting Distribution
For the lemmas in this subsection, the $v$'s in $V_n = (v_1, \cdots, v_n)'$ are i.i.d. with zero mean, variance $\sigma^2$ and finite fourth moment $\mu_4$.
Lemma A.11 Let $A_n = [a_{ij}]$ be an $n$-dimensional square matrix. Then
1) $E(V_n'A_nV_n) = \sigma^2tr(A_n)$,
2) $E(V_n'A_nV_n)^2 = (\mu_4 - 3\sigma^4)\sum_{i=1}^n a_{ii}^2 + \sigma^4[tr^2(A_n) + tr(A_nA_n') + tr(A_n^2)]$, and
3) $var(V_n'A_nV_n) = (\mu_4 - 3\sigma^4)\sum_{i=1}^n a_{ii}^2 + \sigma^4[tr(A_nA_n') + tr(A_n^2)]$.
In particular, if the $v$'s are normally distributed, then $E(V_n'A_nV_n)^2 = \sigma^4[tr^2(A_n) + tr(A_nA_n') + tr(A_n^2)]$ and $var(V_n'A_nV_n) = \sigma^4[tr(A_nA_n') + tr(A_n^2)]$.
Proof: The result in 1) is trivial. For the second moment,
$$E(V_n'A_nV_n)^2 = E\Big(\sum_{i=1}^n\sum_{j=1}^n a_{ij}v_iv_j\Big)^2 = E\Big(\sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\sum_{l=1}^n a_{ij}a_{kl}v_iv_jv_kv_l\Big).$$
Because the $v$'s are i.i.d. with zero mean, $E(v_iv_jv_kv_l)$ will not vanish only when $i = j = k = l$, $(i = j) \neq (k = l)$, $(i = k) \neq (j = l)$, and $(i = l) \neq (j = k)$. Therefore,
$$E(V_n'A_nV_n)^2 = \sum_{i=1}^n a_{ii}^2E(v_i^4) + \sum_{i=1}^n\sum_{j\neq i} a_{ii}a_{jj}E(v_i^2v_j^2) + \sum_{i=1}^n\sum_{j\neq i} a_{ij}^2E(v_i^2v_j^2) + \sum_{i=1}^n\sum_{j\neq i} a_{ij}a_{ji}E(v_i^2v_j^2)$$
$$= (\mu_4 - 3\sigma^4)\sum_{i=1}^n a_{ii}^2 + \sigma^4\Big[\sum_{i=1}^n\sum_{j=1}^n a_{ii}a_{jj} + \sum_{i=1}^n\sum_{j=1}^n a_{ij}^2 + \sum_{i=1}^n\sum_{j=1}^n a_{ij}a_{ji}\Big]$$
$$= (\mu_4 - 3\sigma^4)\sum_{i=1}^n a_{ii}^2 + \sigma^4[tr^2(A_n) + tr(A_nA_n') + tr(A_n^2)].$$
The result 3) follows from $var(V_n'A_nV_n) = E(V_n'A_nV_n)^2 - E^2(V_n'A_nV_n)$ and those of 1) and 2). When the $v$'s are normally distributed, $\mu_4 = 3\sigma^4$.
Q.E.D.
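The moment formulas of Lemma A.11 hold exactly for any i.i.d. mean-zero distribution, so they can be verified by exhaustive enumeration rather than simulation. The sketch below (the $3\times 3$ matrix $A$ and the two-point distribution for $v$ are illustrative assumptions, not from the paper) enumerates all $2^3$ outcomes of $(v_1, v_2, v_3)$ and compares the exact moments of $V_n'A_nV_n$ with the formulas in 1) and 3).

```python
# Exact finite check of Lemma A.11 (moments of the quadratic form V'AV).
# The matrix A and the two-point distribution for v are illustrative.
from itertools import product

n = 3
A = [[0.5, -1.0, 2.0],
     [1.5, 0.25, -0.5],
     [-2.0, 1.0, 0.75]]

# Two-point distribution with mean zero: v = -1 w.p. 2/3, v = 2 w.p. 1/3.
support = [(-1.0, 2.0 / 3.0), (2.0, 1.0 / 3.0)]
sigma2 = sum(p * v ** 2 for v, p in support)   # variance = 2
mu4 = sum(p * v ** 4 for v, p in support)      # fourth moment = 6

def quad_form(vs):
    return sum(A[i][j] * vs[i] * vs[j] for i in range(n) for j in range(n))

# Enumerate all 2^n outcomes of (v_1, ..., v_n) exactly.
m1 = m2 = 0.0
for outcome in product(support, repeat=n):
    vs = [v for v, _ in outcome]
    prob = 1.0
    for _, p in outcome:
        prob *= p
    q = quad_form(vs)
    m1 += prob * q
    m2 += prob * q ** 2

trA = sum(A[i][i] for i in range(n))
trAA_t = sum(A[i][j] ** 2 for i in range(n) for j in range(n))
trA2 = sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))
sum_aii2 = sum(A[i][i] ** 2 for i in range(n))

# Lemma A.11: E(V'AV) = sigma^2 tr(A);
# var(V'AV) = (mu4 - 3 sigma^4) sum_i a_ii^2 + sigma^4 [tr(AA') + tr(A^2)].
assert abs(m1 - sigma2 * trA) < 1e-9
var_formula = (mu4 - 3 * sigma2 ** 2) * sum_aii2 + sigma2 ** 2 * (trAA_t + trA2)
assert abs((m2 - m1 ** 2) - var_formula) < 1e-9
print("Lemma A.11 moment formulas verified on a 3x3 example")
```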
Lemma A.12 Suppose that $\{A_n\}$ are uniformly bounded in either row or column sums, and the elements $a_{n,ij}$ of $A_n$ are $O(\frac{1}{h_n})$ uniformly in all $i$ and $j$. Then, $E(V_n'A_nV_n) = O(\frac{n}{h_n})$, $var(V_n'A_nV_n) = O(\frac{n}{h_n})$ and $V_n'A_nV_n = O_P(\frac{n}{h_n})$. Furthermore, if $\lim_{n\to\infty}\frac{h_n}{n} = 0$, then $\frac{h_n}{n}V_n'A_nV_n - \frac{h_n}{n}E(V_n'A_nV_n) = o_P(1)$.
Proof: $E(V_n'A_nV_n) = \sigma^2tr(A_n) = O(\frac{n}{h_n})$. From Lemma A.11, the variance of $V_n'A_nV_n$ is $var(V_n'A_nV_n) = (\mu_4 - 3\sigma^4)\sum_{i=1}^n a_{n,ii}^2 + \sigma^4[tr(A_nA_n') + tr(A_n^2)]$. Lemma A.8 implies that $tr(A_n^2)$ and $tr(A_nA_n')$ are $O(\frac{n}{h_n})$. As $\sum_{i=1}^n a_{n,ii}^2 \le tr(A_nA_n')$, it follows that $\sum_{i=1}^n a_{n,ii}^2 = O(\frac{n}{h_n})$. Hence, $var(V_n'A_nV_n) = O(\frac{n}{h_n})$. As $E((V_n'A_nV_n)^2) = var(V_n'A_nV_n) + E^2(V_n'A_nV_n) = O((\frac{n}{h_n})^2)$, the generalized Chebyshev inequality implies that $P(\frac{h_n}{n}|V_n'A_nV_n| \ge M) \le \frac{1}{M^2}(\frac{h_n}{n})^2E((V_n'A_nV_n)^2) = \frac{1}{M^2}O(1)$ and, hence, $\frac{h_n}{n}V_n'A_nV_n = O_P(1)$. Finally, because $var(\frac{h_n}{n}V_n'A_nV_n) = O(\frac{h_n}{n}) = o(1)$ when $\lim_{n\to\infty}\frac{h_n}{n} = 0$, the Chebyshev inequality implies that $\frac{h_n}{n}V_n'A_nV_n - \frac{h_n}{n}E(V_n'A_nV_n) = o_P(1)$.
Q.E.D.
Lemma A.13 Suppose that $\{A_n\}$ is a sequence of symmetric matrices with row and column sums uniformly bounded, and $\{b_n\}$ is a sequence of constant vectors with its elements uniformly bounded. The moment $E(|v|^{4+2\delta})$ for some $\delta > 0$ of $v$ exists. Let $\sigma_{Q_n}^2$ be the variance of $Q_n$, where $Q_n = b_n'V_n + V_n'A_nV_n - \sigma^2tr(A_n)$. Assume that the variance $\sigma_{Q_n}^2$ is $O(\frac{n}{h_n})$ with $\{\frac{h_n}{n}\sigma_{Q_n}^2\}$ bounded away from zero, the elements of $A_n$ are of uniform order $O(\frac{1}{h_n})$, and the elements of $b_n$ of uniform order $O(\frac{1}{\sqrt{h_n}})$. If $\lim_{n\to\infty}\frac{h_n^{1+2/\delta}}{n} = 0$, then $\frac{Q_n}{\sigma_{Q_n}} \xrightarrow{D} N(0, 1)$.
Proof: The asymptotic distribution of the quadratic random form $Q_n$ can be established via the martingale central limit theorem. Our proof of this lemma follows closely the original arguments in Kelejian and Prucha (2001). In their paper, $\sigma_{Q_n}^2$ is assumed to be bounded away from zero with the $n$-rate. Our subsequent arguments modify theirs to take into account the different rate of $\sigma_{Q_n}^2$.
The $Q_n$ can be expanded into $Q_n = \sum_{i=1}^n b_{ni}v_i + \sum_{i=1}^n a_{n,ii}v_i^2 + 2\sum_{i=1}^n\sum_{j=1}^{i-1} a_{n,ij}v_iv_j - \sigma^2tr(A_n) = \sum_{i=1}^n Z_{ni}$, where $Z_{ni} = b_{ni}v_i + a_{n,ii}(v_i^2 - \sigma^2) + 2v_i\sum_{j=1}^{i-1} a_{n,ij}v_j$. Define the $\sigma$-fields $J_i = \langle v_1, \ldots, v_i\rangle$ generated by $v_1, \ldots, v_i$. Because the $v$'s are i.i.d. with zero mean and finite variance, $E(Z_{ni}|J_{i-1}) = b_{ni}E(v_i) + a_{n,ii}(E(v_i^2) - \sigma^2) + 2E(v_i)\sum_{j=1}^{i-1} a_{n,ij}v_j = 0$. The $\{(Z_{ni}, J_i)|1 \le i \le n, 1 \le n\}$ forms a martingale difference double array.
We note that $\sigma_{Q_n}^2 = \sum_{i=1}^n E(Z_{ni}^2)$ as the $Z_{ni}$ are martingale differences. Also $\frac{h_n}{n}\sigma_{Q_n}^2 = O(1)$. Define the normalized variables $Z_{ni}^* = Z_{ni}/\sigma_{Q_n}$. The $\{(Z_{ni}^*, J_i)|1 \le i \le n\}$ is a martingale difference double array and $\frac{Q_n}{\sigma_{Q_n}} = \sum_{i=1}^n Z_{ni}^*$. In order for the martingale central limit theorem to be applicable, we would show, first, that there exists a $\delta^* > 0$ such that $\sum_{i=1}^n E|Z_{ni}^*|^{2+\delta^*}$ tends to zero as $n$ goes to infinity. Second, it will be shown that $\sum_{i=1}^n E(Z_{ni}^{*2}|J_{i-1}) \xrightarrow{p} 1$.
Pi−1
For any positive constants p and q such that 1p + 1q = 1, |Zni | ≤ |an,ii |·|vi2 −σ 2 |+|vi |(|bni |+2 j=1 |an,ij |·
Pi−1
1
1
1
1
1
1
|vj |) = |an,ii | p |an,ii | q |vi2 −σ 2 |+|vi |(|bni | p |bni | q +2 j=1 |an,ij | p |an,ij | q |vj |). The Holder inequality for inner
products applied to the last term implies that

 p1 
 1q q


i
i−1


X
X
1
1
1
1
1
|Zni |q ≤ (|bni | p )p +
(|an,ij | p )p  (|bni | q |vi |)q + (|an,ii | q |vi2 − σ 2 |)q +
(|an,ij | q 2|vi | · |vj |)q 




j=1
j=1

= |bni | +
i
X
j=1
 pq 
|an,ij | |bni | · |vi |q + |an,ii | · |vi2 − σ 2 |q +
i−1
X
j=1

|an,ij |2q |vi |q |vj |q  .
As {An } are uniformly bounded in row sums and elements of bn are uniformly bounded, there exists a
q
Pn
constant c1 such that j=1 |an,ij | ≤ c1 /2 and |bni | ≤ c1 /2 for all i and n. Hence |Zni |q ≤ 2q c1p (|bni | · |vi |q +
P
q
|an,ii |·|vi2 −σ 2 |q +|vi |q i−1
j=1 |an,ij ||vj | ). Take q = 2+δ. Let cq > 1 be a finite constant such that E(|v|) ≤ cq ,
E(|v|q ) ≤ cq and E(|v 2 − σ 2 |q ) ≤ cq . Such a constant exists under the moment conditions of v. It follows
q
Pn
Pn
Pn
Pi
Pn
1
∗ 2+δ
2+δ
that i=1 E|Zni |q ≤ 2q c1p c2q i=1 (|bni | + j=1 |an,ij |) = O(n). As i=1 E|Zni
|
= σ2+δ
i=1 E|Zni |
Qn
P
δ
δ
δ
2+δ
2
∗ 2+δ
and σQ
= ( hnn σQ
)1+ 2 ( hnn )1+ 2 ≥ c · ( hnn )1+ 2 for some constant c > 0 when n is large, ni=1 E|Zni
|
=
n
n
6
1+ 2
δ
1+ δ
2
δ
O( hn δ ) = O( hnn ) 2 , which goes to zero as n tends to infinity.
n2
It remains to show that $\sum_{i=1}^n E(Z_{ni}^{*2}|J_{i-1}) \xrightarrow{p} 1$. As
$$E(Z_{ni}^2|J_{i-1}) = (\mu_4 - \sigma^4)a_{n,ii}^2 + \sigma^2\Big(b_{ni} + 2\sum_{j=1}^{i-1}a_{n,ij}v_j\Big)^2 + 2\mu_3a_{n,ii}\Big(b_{ni} + 2\sum_{j=1}^{i-1}a_{n,ij}v_j\Big),$$
and $E(Z_{ni}^2) = (\mu_4 - \sigma^4)a_{n,ii}^2 + 4\sigma^4\sum_{j=1}^{i-1}a_{n,ij}^2 + \sigma^2b_{ni}^2 + 2\mu_3a_{n,ii}b_{ni}$, because $E(b_{ni} + 2\sum_{j=1}^{i-1}a_{n,ij}v_j)^2 = b_{ni}^2 + 4\sigma^2\sum_{j=1}^{i-1}a_{n,ij}^2$. Hence,
$$E(Z_{ni}^2|J_{i-1}) - E(Z_{ni}^2) = 4\sigma^2\Big(\sum_{j=1}^{i-1}\sum_{k\neq j}^{i-1}a_{n,ij}a_{n,ik}v_jv_k + \sum_{j=1}^{i-1}a_{n,ij}^2(v_j^2 - \sigma^2)\Big) + 4(\sigma^2b_{ni} + \mu_3a_{n,ii})\Big(\sum_{j=1}^{i-1}a_{n,ij}v_j\Big),$$
and
$$\sum_{i=1}^n E(Z_{ni}^{*2}|J_{i-1}) - 1 = \frac{1}{\sigma_{Q_n}^2}\sum_{i=1}^n[E(Z_{ni}^2|J_{i-1}) - E(Z_{ni}^2)] = \frac{4\sigma^2}{\frac{h_n}{n}\sigma_{Q_n}^2}(H_{1n} + H_{2n}) + \frac{4}{\frac{h_n}{n}\sigma_{Q_n}^2}H_{3n},$$
where
$$H_{1n} = \frac{h_n}{n}\sum_{i=1}^n\sum_{j=1}^{i-1}\sum_{k\neq j}^{i-1}a_{n,ij}a_{n,ik}v_jv_k, \qquad H_{2n} = \frac{h_n}{n}\sum_{i=1}^n\sum_{j=1}^{i-1}a_{n,ij}^2(v_j^2 - \sigma^2),$$
and $H_{3n} = \frac{h_n}{n}\sum_{i=1}^n(\sigma^2b_{ni} + \mu_3a_{n,ii})\sum_{j=1}^{i-1}a_{n,ij}v_j$. We would like to show that the $H_{jn}$, for $j = 1, 2, 3$, converge in probability to zero. It is obvious that $E(H_{3n}) = 0$. By exchanging summations, $\sum_{i=1}^n(\sigma^2b_{ni} + \mu_3a_{n,ii})\sum_{j=1}^{i-1}a_{n,ij}v_j = \sum_{j=1}^{n-1}\big(\sum_{i=j+1}^n(\sigma^2b_{ni} + \mu_3a_{n,ii})a_{n,ij}\big)v_j$. Thus,
µ3 an,ii ) j=1 an,ij vj = j=1 ( i=j+1 (σ 2 bni + µ3 an,ii )an,ij )vj . Thus,
2
E(H3n
)=
i=1 (σ
Pi−1
a2n,ij (vj2 − σ 2 )) + 4(σ 2 bni + µ3 an,ii )(
k6=j
hn
n
an,ij vj ),
j=1
a2n,ij + σ 2 b2ni + 2µ3 an,ii bni , because E(bni + 2
j=1 k6=j
and
i−1
X
j=1
n
n
n−1
n−1
X X
σ 4 h2n X X
µ3
σ 4 h2n
µ3
hn
2
2
(
(b
+
a
)a
)
≤
(
max
|b
+
a
|)
(
|an,ij |)2 = O( )
ni
n,ii n,ij
ni
n,ii
n2 j=1 i=j+1
σ2
n2 1≤i≤n
σ2
n
j=1 i=j+1
Pn−1 Pn
because maxi,j |an,ij | = O( h1n ), maxi |bni | = O( √1h ) and j=1 ( i=j+1 |an,ij |)2 = O(n). E(H2n ) = 0 and
n
Pn−1 Pn
H2n can be rewritten into H2n = hnn j=1 ( i=j+1 a2n,ij )(vj2 − σ 2 ). Thus
2
E(H2n
)=(
n−1
n
n−1
n
X X
X X
hn 2
hn
1
) (µ4 − σ 4 )
(
a2n,ij )2 ≤ ( )2 (µ4 − σ 4 ) max |an,ij |2 ·
(
|an,ij |)2 = O( ).
1≤i,j≤n
n
n
n
j=1 i=j+1
j=1 i=j+1
We conclude that $H_{3n} = o_P(1)$ and $H_{2n} = o_P(1)$. $E(H_{1n}) = 0$, but its variance is relatively more complex than those of $H_{2n}$ and $H_{3n}$. By rearranging terms, $H_{1n} = \frac{h_n}{n}\sum_{i=1}^n\sum_{j=1}^{i-1}\sum_{k\neq j}^{i-1}a_{n,ij}a_{n,ik}v_jv_k = \frac{h_n}{n}\sum_{j=1}^{n-1}\sum_{k\neq j}^{n-1}\bar{S}_{n,jk}v_jv_k$, where $\bar{S}_{n,jk} = \sum_{i=\max\{j,k\}+1}^na_{n,ij}a_{n,ik}$. The variance of $H_{1n}$ is
$$E(H_{1n}^2) = \Big(\frac{h_n}{n}\Big)^2\sum_{j=1}^{n-1}\sum_{k\neq j}^{n-1}\sum_{r=1}^{n-1}\sum_{s\neq r}^{n-1}\bar{S}_{n,jk}\bar{S}_{n,rs}E(v_jv_kv_rv_s).$$
As $k \neq j$ and $s \neq r$, $E(v_jv_kv_rv_s) \neq 0$ only for the cases that $(j = r) \neq (k = s)$ and $(j = s) \neq (k = r)$. The variance of $H_{1n}$ can be simplified and
$$E(H_{1n}^2) = 2\sigma^4\Big(\frac{h_n}{n}\Big)^2\sum_{j=1}^{n-1}\sum_{k\neq j}^{n-1}\bar{S}_{n,jk}^2 \le 2\sigma^4\Big(\frac{h_n}{n}\Big)^2\sum_{j=1}^{n-1}\sum_{k\neq j}^{n-1}\Big(\sum_{i_1=1}^n\sum_{i_2=1}^n|a_{n,i_1j}a_{n,i_1k}a_{n,i_2j}a_{n,i_2k}|\Big)$$
$$\le 2\sigma^4\Big(\frac{h_n}{n}\Big)^2\sum_{i_1=1}^n\sum_{j=1}^n\Big(\sum_{k\neq j}|a_{n,i_1j}a_{n,i_1k}|\Big)\cdot\max_{1\le j\le n}\sum_{i_2=1}^n|a_{n,i_2j}|\cdot\max_{i_2,k}|a_{n,i_2k}|$$
$$\le 2\sigma^4\Big(\frac{h_n}{n}\Big)^2\max_{1\le j\le n}\sum_{i_2=1}^n|a_{n,i_2j}|\cdot\max_{i_2,k}|a_{n,i_2k}|\cdot\sum_{i_1=1}^n\sum_{j=1}^n\Big(|a_{n,i_1j}|\cdot\sum_{k=1}^n|a_{n,i_1k}|\Big) = O\Big(\frac{h_n}{n}\Big),$$
because $A_n$ is uniformly bounded in row and column sums and $a_{n,ij} = O(\frac{1}{h_n})$ uniformly in $i$ and $j$. Thus, $H_{1n} = o_P(1)$, as $\lim_{n\to\infty}\frac{h_n}{n} = 0$ is implied by the condition $\lim_{n\to\infty}\frac{h_n^{1+2/\delta}}{n} = 0$.
As the $H_{jn}$, $j = 1, 2, 3$, are $o_P(1)$ and $\lim_{n\to\infty}\frac{h_n}{n}\sigma_{Q_n}^2 > 0$, $\sum_{i=1}^nE(Z_{ni}^{*2}|J_{i-1})$ converges in probability to 1. The central limit theorem for the martingale difference double array is thus applicable (see Hall and Heyde, 1980; Pötscher and Prucha, 1997) to establish the result.
Q.E.D.
Lemma A.14 Suppose that $A_n$ is a constant $n \times n$ matrix uniformly bounded in both row and column sums. Let $c_n$ be a column vector of constants. If $\frac{h_n}{n}c_n'c_n = o(1)$, then $\sqrt{\frac{h_n}{n}}c_n'A_nV_n = o_P(1)$. On the other hand, if $\frac{h_n}{n}c_n'c_n = O(1)$, then $\sqrt{\frac{h_n}{n}}c_n'A_nV_n = O_P(1)$.
Proof: The first result follows from Chebyshev's inequality if $var(\sqrt{\frac{h_n}{n}}c_n'A_nV_n) = \sigma^2\frac{h_n}{n}c_n'A_nA_n'c_n$ goes to zero. Let $\Lambda_n$ be the diagonal matrix of eigenvalues of $A_nA_n'$ and $\Gamma_n$ be the orthonormal matrix of eigenvectors. As eigenvalues in absolute value are bounded by any norm of the matrix, the eigenvalues in $\Lambda_n$ in absolute value are uniformly bounded because $\|A_n\|_\infty$ (or $\|A_n\|_1$) are uniformly bounded. Hence, $\frac{h_n}{n}c_n'A_nA_n'c_n \le \frac{h_n}{n}c_n'\Gamma_n\Gamma_n'c_n\cdot|\lambda_{n,\max}| = \frac{h_n}{n}c_n'c_n|\lambda_{n,\max}| = o(1)$, where $\lambda_{n,\max}$ is the eigenvalue of $A_nA_n'$ with the largest absolute value.
When $\frac{h_n}{n}c_n'c_n = O(1)$, $\frac{h_n}{n}c_n'A_nA_n'c_n \le \frac{h_n}{n}c_n'c_n|\lambda_{n,\max}| = O(1)$. In this case, $var(\sqrt{\frac{h_n}{n}}c_n'A_nV_n) = \sigma^2\frac{h_n}{n}c_n'A_nA_n'c_n = O(1)$. Therefore, $\sqrt{\frac{h_n}{n}}c_n'A_nV_n = O_P(1)$.
Q.E.D.
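The eigenvalue bound used in this proof can be checked without computing eigenvalues, since $c_n'A_nA_n'c_n \le c_n'c_n\lambda_{n,\max}$ and $\lambda_{n,\max}(A_nA_n') \le \|A_n\|_\infty\|A_n\|_1$. The sketch below (the random $A_n$ and $c_n$ are illustrative assumptions, not from the paper) verifies the resulting inequality $c_n'A_nA_n'c_n \le c_n'c_n\|A_n\|_\infty\|A_n\|_1$.

```python
# Check of the eigenvalue bound used in Lemma A.14:
# c'AA'c <= (c'c) lambda_max(AA') and lambda_max(AA') <= ||A||_inf ||A||_1.
import random

random.seed(1)
n = 40
A = [[random.uniform(-1, 1) / n for _ in range(n)] for _ in range(n)]
c = [random.uniform(-1, 1) for _ in range(n)]

Ac = [sum(A[j][i] * c[j] for j in range(n)) for i in range(n)]  # A'c
quad = sum(x * x for x in Ac)                                   # c'AA'c
cc = sum(x * x for x in c)                                      # c'c
norm_inf = max(sum(abs(x) for x in row) for row in A)           # max row sum
norm_one = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

assert quad <= cc * norm_inf * norm_one + 1e-9
print("c'AA'c =", round(quad, 6), "<=", round(cc * norm_inf * norm_one, 6))
```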
Appendix B: Detailed Proofs
Proof of Consistency (Theorem 3.1 and Theorem 4.1)
We shall prove that $\frac{1}{n}\ln L_n(\lambda) - \frac{1}{n}Q_n(\lambda)$ converges in probability to zero uniformly on $\Lambda$, and that the identification uniqueness condition holds, i.e., for any $\epsilon > 0$, $\limsup_{n\to\infty}[\max_{\lambda\in\bar{N}_\epsilon(\lambda_0)}(\frac{1}{n}Q_n(\lambda) - \frac{1}{n}Q_n(\lambda_0))] < 0$, where $\bar{N}_\epsilon(\lambda_0)$ is the complement of an open neighborhood of $\lambda_0$ in $\Lambda$ with radius $\epsilon$.
For the proof of these properties, it is useful to establish some properties of $\ln|S_n(\lambda)|$ and $\sigma_n^2(\lambda)$, where
$$\sigma_n^2(\lambda) = \frac{\sigma_0^2}{n}tr(S_n'^{-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}) = \sigma_0^2\Big[1 + 2(\lambda_0 - \lambda)\frac{1}{n}tr(G_n) + (\lambda_0 - \lambda)^2\frac{1}{n}tr(G_nG_n')\Big].$$
There is also an auxiliary model which has useful implications. Denote $Q_{p,n}(\lambda) = -\frac{n}{2}(\ln(2\pi) + 1) - \frac{n}{2}\ln\sigma_n^2(\lambda) + \ln|S_n(\lambda)|$. The log likelihood function of a SAR process $Y_n = \lambda W_nY_n + V_n$, where $V_n \sim N(0, \sigma_0^2I_n)$, is $\ln L_{p,n}(\lambda, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 + \ln|S_n(\lambda)| - \frac{1}{2\sigma^2}Y_n'S_n'(\lambda)S_n(\lambda)Y_n$. It is apparent that $Q_{p,n}(\lambda) = \max_{\sigma^2}E_p(\ln L_{p,n}(\lambda, \sigma^2))$, where $E_p$ is the expectation under this SAR process. By the Jensen inequality, $Q_{p,n}(\lambda) \le E_p(\ln L_{p,n}(\lambda_0, \sigma_0^2)) = Q_{p,n}(\lambda_0)$ for all $\lambda$. This implies that $\frac{1}{n}(Q_{p,n}(\lambda) - Q_{p,n}(\lambda_0)) \le 0$ for all $\lambda$.
Let $\lambda_1$ and $\lambda_2$ be in $\Lambda$. By the mean value theorem, $\frac{1}{n}(\ln|S_n(\lambda_2)| - \ln|S_n(\lambda_1)|) = \frac{1}{n}tr(W_nS_n^{-1}(\bar{\lambda}_n))(\lambda_2 - \lambda_1)$, where $\bar{\lambda}_n$ lies between $\lambda_1$ and $\lambda_2$. By the uniform boundedness of Assumption 7, Lemma A.8 implies that $\frac{1}{n}tr(W_nS_n^{-1}(\bar{\lambda}_n)) = O(\frac{1}{h_n})$. Thus, $\frac{1}{n}\ln|S_n(\lambda)|$ is uniformly equicontinuous in $\lambda$ in $\Lambda$. As $\Lambda$ is a bounded set, $\frac{1}{n}(\ln|S_n(\lambda_2)| - \ln|S_n(\lambda_1)|) = O(1)$ uniformly in $\lambda_1$ and $\lambda_2$ in $\Lambda$.
The $\sigma_n^2(\lambda)$ is uniformly bounded away from zero on $\Lambda$. This can be established by a contradiction argument. Suppose that $\sigma_n^2(\lambda)$ were not uniformly bounded away from zero on $\Lambda$. Then, there would exist a sequence $\{\lambda_n\}$ in $\Lambda$ such that $\lim_{n\to\infty}\sigma_n^2(\lambda_n) = 0$. We have shown that $\frac{1}{n}(Q_{p,n}(\lambda) - Q_{p,n}(\lambda_0)) \le 0$ for all $\lambda$, and $\frac{1}{n}(\ln|S_n(\lambda_0)| - \ln|S_n(\lambda)|) = O(1)$ uniformly on $\Lambda$. This implies that $-\frac{1}{2}\ln\sigma_n^2(\lambda) \le -\frac{1}{2}\ln\sigma_0^2 + \frac{1}{n}(\ln|S_n(\lambda_0)| - \ln|S_n(\lambda)|) = O(1)$. That is, $-\ln\sigma_n^2(\lambda_n)$ is bounded from above, a contradiction. Therefore, $\sigma_n^2(\lambda)$ must be bounded away from zero uniformly on $\Lambda$.
(uniform convergence) Show that $\sup_{\lambda\in\Lambda}|\frac{1}{n}\ln L_n(\lambda) - \frac{1}{n}Q_n(\lambda)| = o_P(1)$.
Note that $\frac{1}{n}\ln L_n(\lambda) - \frac{1}{n}Q_n(\lambda) = -\frac{1}{2}(\ln\hat{\sigma}_n^2(\lambda) - \ln\sigma_n^{*2}(\lambda))$. Because $M_nS_n(\lambda)Y_n = (\lambda_0 - \lambda)M_nG_nX_n\beta_0 + M_nS_n(\lambda)S_n^{-1}V_n$,
$$\hat{\sigma}_n^2(\lambda) = \frac{1}{n}Y_n'S_n'(\lambda)M_nS_n(\lambda)Y_n = (\lambda_0 - \lambda)^2\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + 2(\lambda_0 - \lambda)H_{1n}(\lambda) + H_{2n}(\lambda), \quad (B.1)$$
where $H_{1n}(\lambda) = \frac{1}{n}(G_nX_n\beta_0)'M_nS_n(\lambda)S_n^{-1}V_n$ and $H_{2n}(\lambda) = \frac{1}{n}V_n'S_n'^{-1}S_n'(\lambda)M_nS_n(\lambda)S_n^{-1}V_n$.
As $H_{1n}(\lambda) = \frac{1}{n}(G_nX_n\beta_0)'M_nV_n + (\lambda_0 - \lambda)\frac{1}{n}(G_nX_n\beta_0)'M_nG_nV_n$, Lemma A.10 and the linearity of $H_{1n}(\lambda)$ in $\lambda$ imply $H_{1n}(\lambda) = o_P(1)$ uniformly in $\lambda\in\Lambda$. Note that
$$H_{2n}(\lambda) - \sigma_n^2(\lambda) = \frac{1}{n}V_n'S_n'^{-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}V_n - \frac{\sigma_0^2}{n}tr(S_n'^{-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}) - T_n(\lambda),$$
where $T_n(\lambda) = \frac{1}{n}V_n'S_n'^{-1}S_n'(\lambda)X_n(X_n'X_n)^{-1}X_n'S_n(\lambda)S_n^{-1}V_n$. By Lemma A.10,
$$\frac{1}{\sqrt{n}}X_n'S_n(\lambda)S_n^{-1}V_n = \frac{1}{\sqrt{n}}X_n'S_n^{-1}V_n - \lambda\frac{1}{\sqrt{n}}X_n'G_nV_n = O_P(1).$$
Thus, $T_n(\lambda) = \frac{1}{n}(\frac{1}{\sqrt{n}}X_n'S_n(\lambda)S_n^{-1}V_n)'(\frac{X_n'X_n}{n})^{-1}(\frac{1}{\sqrt{n}}X_n'S_n(\lambda)S_n^{-1}V_n) = o_P(1)$. By Lemma A.12,
$$\frac{1}{n}[V_n'S_n'^{-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}V_n - \sigma_0^2tr(S_n'^{-1}S_n'(\lambda)S_n(\lambda)S_n^{-1})] = o_P(1)$$
uniformly in $\lambda\in\Lambda$. These convergences are uniform on $\Lambda$ because $\lambda$ appears simply as linear or quadratic factors in those terms. That is, $H_{2n}(\lambda) - \sigma_n^2(\lambda) = o_P(1)$ uniformly on $\Lambda$. Therefore, $\hat{\sigma}_n^2(\lambda) - \sigma_n^{*2}(\lambda) = o_P(1)$ uniformly on $\Lambda$. By the Taylor expansion, $|\ln\hat{\sigma}_n^2(\lambda) - \ln\sigma_n^{*2}(\lambda)| = |\hat{\sigma}_n^2(\lambda) - \sigma_n^{*2}(\lambda)|/\tilde{\sigma}_n^2(\lambda)$, where $\tilde{\sigma}_n^2(\lambda)$ lies between $\hat{\sigma}_n^2(\lambda)$ and $\sigma_n^{*2}(\lambda)$. Note that $\sigma_n^{*2}(\lambda) \ge \sigma_n^2(\lambda)$ because $\sigma_n^{*2}(\lambda) = (\lambda_0 - \lambda)^2\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + \sigma_n^2(\lambda)$. As $\sigma_n^2(\lambda)$ is uniformly bounded away from zero on $\Lambda$, $\sigma_n^{*2}(\lambda)$ will be so too. It follows that, because $\hat{\sigma}_n^2(\lambda) - \sigma_n^{*2}(\lambda) = o_P(1)$ uniformly on $\Lambda$, $\hat{\sigma}_n^2(\lambda)$ will be bounded away from zero uniformly on $\Lambda$ in probability. Hence, $|\ln\hat{\sigma}_n^2(\lambda) - \ln\sigma_n^{*2}(\lambda)| = o_P(1)$ uniformly on $\Lambda$. Consequently, $\sup_{\lambda\in\Lambda}|\frac{1}{n}\ln L_n(\lambda) - \frac{1}{n}Q_n(\lambda)| = o_P(1)$.
(uniform equicontinuity) We will show that $\frac{1}{n}Q_n(\lambda) = -\frac{1}{2}(\ln(2\pi) + 1) - \frac{1}{2}\ln\sigma_n^{*2}(\lambda) + \frac{1}{n}\ln|S_n(\lambda)|$ is uniformly equicontinuous on $\Lambda$. The $\sigma_n^{*2}(\lambda)$ is uniformly continuous on $\Lambda$. This is so because $\sigma_n^{*2}(\lambda)$ is a quadratic form in $\lambda$ and its coefficients, $\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)$, $\frac{1}{n}tr(G_n)$ and $\frac{1}{n}tr(G_nG_n')$, are bounded by Lemmas A.6 and A.8. The uniform continuity of $\ln\sigma_n^{*2}(\lambda)$ on $\Lambda$ follows because $\frac{1}{\sigma_n^{*2}(\lambda)}$ is uniformly bounded on $\Lambda$. Hence $\frac{1}{n}Q_n(\lambda)$ is uniformly equicontinuous on $\Lambda$.
(identification uniqueness) At $\lambda_0$, $\sigma_n^{*2}(\lambda_0) = \sigma_0^2$. Therefore,
$$\frac{1}{n}Q_n(\lambda) - \frac{1}{n}Q_n(\lambda_0) = -\frac{1}{2}(\ln\sigma_n^2(\lambda) - \ln\sigma_0^2) + \frac{1}{n}(\ln|S_n(\lambda)| - \ln|S_n(\lambda_0)|) - \frac{1}{2}[\ln\sigma_n^{*2}(\lambda) - \ln\sigma_n^2(\lambda)]$$
$$= \frac{1}{n}(Q_{p,n}(\lambda) - Q_{p,n}(\lambda_0)) - \frac{1}{2}[\ln\sigma_n^{*2}(\lambda) - \ln\sigma_n^2(\lambda)].$$
Suppose that the identification uniqueness condition did not hold. Then, there would exist an $\epsilon > 0$ and a sequence $\{\lambda_n\}$ in $\bar{N}_\epsilon(\lambda_0)$ such that $\lim_{n\to\infty}[\frac{1}{n}Q_n(\lambda_n) - \frac{1}{n}Q_n(\lambda_0)] = 0$. Because $\bar{N}_\epsilon(\lambda_0)$ is a compact set, there would exist a convergent subsequence $\{\lambda_{n_m}\}$ of $\{\lambda_n\}$. Let $\lambda_+$ be the limit point of $\{\lambda_{n_m}\}$ in $\Lambda$. As $\frac{1}{n}Q_n(\lambda)$ is uniformly equicontinuous in $\lambda$, $\lim_{n_m\to\infty}[\frac{1}{n_m}Q_{n_m}(\lambda_+) - \frac{1}{n_m}Q_{n_m}(\lambda_0)] = 0$. Because $(Q_{p,n}(\lambda) - Q_{p,n}(\lambda_0)) \le 0$ and $-[\ln\sigma_n^{*2}(\lambda) - \ln\sigma_n^2(\lambda)] \le 0$, this is possible only if $\lim_{n_m\to\infty}(\sigma_{n_m}^{*2}(\lambda_+) - \sigma_{n_m}^2(\lambda_+)) = 0$ and $\lim_{n_m\to\infty}(\frac{1}{n_m}Q_{p,n_m}(\lambda_+) - \frac{1}{n_m}Q_{p,n_m}(\lambda_0)) = 0$. The $\lim_{n_m\to\infty}(\sigma_{n_m}^{*2}(\lambda_+) - \sigma_{n_m}^2(\lambda_+)) = 0$ is a contradiction when $\lim_{n\to\infty}\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) \neq 0$. In the event that $\lim_{n\to\infty}\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) = 0$, the contradiction follows from the relation $\lim_{n\to\infty}(\frac{1}{n}Q_{p,n}(\lambda_+) - \frac{1}{n}Q_{p,n}(\lambda_0)) = 0$ under Assumption 9. This is so because, in this event, Assumption 9 is equivalent to $\lim_{n\to\infty}[\frac{1}{n}(\ln|S_n(\lambda)| - \ln|S_n|) - \frac{1}{2}(\ln\sigma_n^2(\lambda) - \ln\sigma_0^2)] = \lim_{n\to\infty}\frac{1}{n}[Q_{p,n}(\lambda) - Q_{p,n}(\lambda_0)] \neq 0$ for $\lambda \neq \lambda_0$. Therefore, the identification uniqueness condition must hold.
The consistency of $\hat{\lambda}_n$ and, hence, $\hat{\theta}_n$ follows from this identification uniqueness and uniform convergence (White 1994, Theorem 3.4).
Q.E.D.
Proof of Theorem 3.2
(Show that $\Sigma_\theta$ is nonsingular): Let $\alpha = (\alpha_1', \alpha_2, \alpha_3)'$ be a column vector of constants such that $\Sigma_\theta\alpha = 0$. It is sufficient to show that $\alpha = 0$. From the first row block of the linear equation system $\Sigma_\theta\alpha = 0$, one has $\lim_{n\to\infty}\frac{X_n'X_n}{n}\alpha_1 + \lim_{n\to\infty}\frac{X_n'G_nX_n\beta_0}{n}\alpha_2 = 0$ and, therefore, $\alpha_1 = -\lim_{n\to\infty}(X_n'X_n)^{-1}X_n'G_nX_n\beta_0\cdot\alpha_2$. From the last equation of the linear system, one has $\alpha_3 = -2\sigma_0^2\lim_{n\to\infty}\frac{tr(G_n)}{n}\cdot\alpha_2$. By eliminating $\alpha_1$ and $\alpha_3$, the remaining equation becomes
$$\Big\{\lim_{n\to\infty}\frac{1}{n\sigma_0^2}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + \lim_{n\to\infty}\frac{1}{n}\Big[tr(G_nG_n') + tr(G_n^2) - \frac{2}{n}tr^2(G_n)\Big]\Big\}\alpha_2 = 0. \quad (B.2)$$
Because $tr(G_nG_n') + tr(G_n^2) - \frac{2}{n}tr^2(G_n) = \frac{1}{2}tr[(C_n' + C_n)(C_n' + C_n)'] \ge 0$ and Assumption 8 implies that $\lim_{n\to\infty}\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)$ is positive, it follows that $\alpha_2 = 0$ and, so, $\alpha = 0$.
(the limiting distribution of $\frac{1}{\sqrt{n}}\frac{\partial\ln L_n(\theta_0)}{\partial\theta}$): The matrix $G_n$ is uniformly bounded in row sums. As the elements of $X_n$ are bounded, the elements of $G_nX_n\beta_0$ for all $n$ are uniformly bounded by Lemma A.6. With the existence of high order moments of $v$ in Assumption 1, the central limit theorem for quadratic forms of double arrays of Kelejian and Prucha (2001) can be applied, and the limiting distribution of the score vector follows.
(Show that $\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\theta\partial\theta'} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\theta\partial\theta'} \xrightarrow{p} 0$): The second-order derivatives are
$$\frac{\partial^2\ln L_n(\theta)}{\partial\beta\partial\beta'} = -\frac{1}{\sigma^2}X_n'X_n, \qquad \frac{\partial^2\ln L_n(\theta)}{\partial\beta\partial\lambda} = -\frac{1}{\sigma^2}X_n'W_nY_n, \qquad \frac{\partial^2\ln L_n(\theta)}{\partial\beta\partial\sigma^2} = -\frac{1}{\sigma^4}X_n'V_n(\delta),$$
$$\frac{\partial^2\ln L_n(\theta)}{\partial\lambda^2} = -tr([W_nS_n^{-1}(\lambda)]^2) - \frac{1}{\sigma^2}Y_n'W_n'W_nY_n, \qquad \frac{\partial^2\ln L_n(\theta)}{\partial\sigma^2\partial\lambda} = -\frac{1}{\sigma^4}Y_n'W_n'V_n(\delta),$$
and
$$\frac{\partial^2\ln L_n(\theta)}{\partial\sigma^2\partial\sigma^2} = \frac{n}{2\sigma^4} - \frac{1}{\sigma^6}V_n'(\delta)V_n(\delta).$$
As $\frac{X_n'X_n}{n} = O(1)$, $\frac{X_n'W_nY_n}{n} = O_P(1)$ and $\tilde{\sigma}_n^2 \xrightarrow{p} \sigma_0^2$, it follows that
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\beta\partial\beta'} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\beta\partial\beta'} = \Big(\frac{1}{\sigma_0^2} - \frac{1}{\tilde{\sigma}_n^2}\Big)\frac{X_n'X_n}{n} = o_p(1),$$
and
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\beta\partial\lambda} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\beta\partial\lambda} = \Big(\frac{1}{\sigma_0^2} - \frac{1}{\tilde{\sigma}_n^2}\Big)\frac{X_n'W_nY_n}{n} = o_p(1).$$
As $V_n(\tilde{\delta}_n) = Y_n - X_n\tilde{\beta}_n - \tilde{\lambda}_nW_nY_n = X_n(\beta_0 - \tilde{\beta}_n) + (\lambda_0 - \tilde{\lambda}_n)W_nY_n + V_n$,
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\beta\partial\sigma^2} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\beta\partial\sigma^2} = \Big(\frac{1}{\sigma_0^4} - \frac{1}{\tilde{\sigma}_n^4}\Big)\frac{X_n'V_n}{n} + \frac{X_n'X_n}{n\tilde{\sigma}_n^4}(\tilde{\beta}_n - \beta_0) + \frac{X_n'W_nY_n}{n\tilde{\sigma}_n^4}(\tilde{\lambda}_n - \lambda_0) = o_p(1).$$
Let $G_n(\lambda) = W_nS_n^{-1}(\lambda)$. By the mean value theorem, $tr(G_n^2(\tilde{\lambda}_n)) = tr(G_n^2) + 2tr(G_n^3(\bar{\lambda}_n))\cdot(\tilde{\lambda}_n - \lambda_0)$; therefore,
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\lambda^2} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\lambda^2} = -2\frac{tr(G_n^3(\bar{\lambda}_n))}{n}(\tilde{\lambda}_n - \lambda_0) + \Big(\frac{1}{\sigma_0^2} - \frac{1}{\tilde{\sigma}_n^2}\Big)\frac{Y_n'W_n'W_nY_n}{n} = o_p(1),$$
because $tr(G_n^3(\bar{\lambda}_n)) = O(\frac{n}{h_n})$ and $Y_n'W_n'W_nY_n = O_P(\frac{n}{h_n})$. Note that $G_n(\bar{\lambda}_n)$ is uniformly bounded in row and column sums uniformly in a neighborhood of $\lambda_0$ by Lemma A.3 under Assumption 5. Therefore, $tr(G_n^3(\bar{\lambda}_n)) = O(\frac{n}{h_n})$.
On the other hand, because
$$\frac{1}{n}Y_n'W_n'V_n(\tilde{\delta}_n) = \frac{Y_n'W_n'X_n}{n}(\beta_0 - \tilde{\beta}_n) + (\lambda_0 - \tilde{\lambda}_n)\frac{Y_n'W_n'W_nY_n}{n} + \frac{Y_n'W_n'V_n}{n} = \frac{Y_n'W_n'V_n}{n} + o_P(1)$$
and
$$\frac{1}{n}V_n'(\tilde{\delta}_n)V_n(\tilde{\delta}_n) = (\tilde{\beta}_n - \beta_0)'\frac{X_n'X_n}{n}(\tilde{\beta}_n - \beta_0) + (\tilde{\lambda}_n - \lambda_0)^2\frac{Y_n'W_n'W_nY_n}{n} + \frac{V_n'V_n}{n}$$
$$+ 2(\beta_0 - \tilde{\beta}_n)'\frac{X_n'V_n}{n} + 2(\lambda_0 - \tilde{\lambda}_n)\frac{Y_n'W_n'V_n}{n} + 2(\tilde{\lambda}_n - \lambda_0)(\tilde{\beta}_n - \beta_0)'\frac{X_n'W_nY_n}{n} = \frac{V_n'V_n}{n} + o_P(1),$$
it follows that
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\sigma^2\partial\lambda} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\sigma^2\partial\lambda} = -\frac{Y_n'W_n'V_n(\tilde{\delta}_n)}{\tilde{\sigma}_n^4n} + \frac{Y_n'W_n'V_n}{\sigma_0^4n}$$
$$= \frac{Y_n'W_n'X_n}{\tilde{\sigma}_n^4n}(\tilde{\beta}_n - \beta_0) + \frac{Y_n'W_n'W_nY_n}{\tilde{\sigma}_n^4n}(\tilde{\lambda}_n - \lambda_0) + \Big(\frac{1}{\sigma_0^4} - \frac{1}{\tilde{\sigma}_n^4}\Big)\frac{Y_n'W_n'V_n}{n} = o_p(1),$$
and
$$\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\sigma^2\partial\sigma^2} - \frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\sigma^2\partial\sigma^2} = \frac{1}{2\tilde{\sigma}_n^4} - \frac{V_n'(\tilde{\delta}_n)V_n(\tilde{\delta}_n)}{n\tilde{\sigma}_n^6} - \frac{1}{2\sigma_0^4} + \frac{V_n'V_n}{n\sigma_0^6}$$
$$= \frac{1}{2}\Big(\frac{1}{\tilde{\sigma}_n^4} - \frac{1}{\sigma_0^4}\Big) + \Big(\frac{1}{\sigma_0^6} - \frac{1}{\tilde{\sigma}_n^6}\Big)\frac{V_n'V_n}{n} + o_P(1) = o_p(1).$$
(Show $\frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\theta\partial\theta'} - E(\frac{1}{n}\frac{\partial^2\ln L_n(\theta_0)}{\partial\theta\partial\theta'}) \xrightarrow{p} 0$): By Lemma A.10, $\frac{1}{n}X_n'W_nY_n = \frac{1}{n}X_n'G_nX_n\beta_0 + o_P(1)$, $\frac{1}{n}X_n'G_nV_n = o_P(1)$ and $\frac{1}{n}X_n'G_n'G_nV_n = o_P(1)$. It follows that $\frac{1}{n}Y_n'W_n'V_n = \frac{1}{n}V_n'G_n'V_n + o_P(1)$, and
$$\frac{1}{n}Y_n'W_n'W_nY_n = \frac{1}{n}(X_n\beta_0)'G_n'G_nX_n\beta_0 + \frac{1}{n}V_n'G_n'G_nV_n + o_P(1).$$
Lemmas A.11 and A.8 imply $E(V_n'G_n'V_n) = \sigma_0^2tr(G_n)$ and
$$var\Big(\frac{1}{n}V_n'G_n'V_n\Big) = \frac{(\mu_4 - 3\sigma_0^4)}{n^2}\sum_{i=1}^nG_{n,ii}^2 + \frac{\sigma_0^4}{n^2}[tr(G_nG_n') + tr(G_n^2)] = O\Big(\frac{1}{nh_n}\Big).$$
Similarly, $E(V_n'G_n'G_nV_n) = \sigma_0^2tr(G_n'G_n)$ and
$$var\Big(\frac{1}{n}V_n'G_n'G_nV_n\Big) = \frac{(\mu_4 - 3\sigma_0^4)}{n^2}\sum_{i=1}^n(G_n'G_n)_{ii}^2 + \frac{2\sigma_0^4}{n^2}tr((G_n'G_n)^2) = O\Big(\frac{1}{nh_n}\Big).$$
By the law of large numbers, $\frac{1}{n}V_n'V_n \xrightarrow{p} \sigma_0^2$. With these properties, the convergence result follows.
Finally, from the expansion $\sqrt{n}(\hat{\theta}_n - \theta_0) = -\Big(\frac{1}{n}\frac{\partial^2\ln L_n(\tilde{\theta}_n)}{\partial\theta\partial\theta'}\Big)^{-1}\frac{1}{\sqrt{n}}\frac{\partial\ln L_n(\theta_0)}{\partial\theta}$, the asymptotic distribution of $\hat{\theta}_n$ follows.
Q.E.D.
Proof of Theorem 4.2 The nonsingularity of $\Sigma_\theta$ will now be guaranteed by Assumption 9 instead of Assumption 8. With (B.2) in the proof of Theorem 3.2, under Assumption 8$'$, one arrives at
$$\lim_{n\to\infty}\frac{1}{n}\Big[tr(G_n'G_n) + tr(G_n^2) - \frac{2}{n}tr^2(G_n)\Big]\alpha_2 = 0.$$
Because $\frac{1}{n}[tr(G_nG_n') + tr(G_n^2) - \frac{2}{n}tr^2(G_n)] = \frac{1}{2n}tr[(C_n' + C_n)(C_n' + C_n)'] > 0$ for large $n$, implied by Assumption 9, it follows that $\alpha_2 = 0$. Hence $\Sigma_\theta$ is nonsingular. The remaining arguments are similar to those in the proof of Theorem 3.2.
Q.E.D.
Proof of Theorem 5.1
(Show that $\frac{h_n}{n}[\ln L_n(\lambda) - \ln L_n(\lambda_0) - (Q_n(\lambda) - Q_n(\lambda_0))] \xrightarrow{p} 0$ uniformly on $\Lambda$): From (2.7) and (3.3), by the mean value theorem,
$$\frac{h_n}{n}[\ln L_n(\lambda) - \ln L_n(\lambda_0) - (Q_n(\lambda) - Q_n(\lambda_0))] = -\frac{h_n}{2}[\ln\hat{\sigma}_n^2(\lambda) - \ln\hat{\sigma}_n^2(\lambda_0) - (\ln\sigma_n^{*2}(\lambda) - \ln\sigma_n^{*2}(\lambda_0))]$$
$$= -\frac{h_n}{2}[(\ln\hat{\sigma}_n^2(\lambda) - \ln\sigma_n^{*2}(\lambda)) - (\ln\hat{\sigma}_n^2(\lambda_0) - \ln\sigma_n^{*2}(\lambda_0))] = -\frac{h_n}{2}\frac{\partial[\ln\hat{\sigma}_n^2(\bar{\lambda}_n) - \ln\sigma_n^{*2}(\bar{\lambda}_n)]}{\partial\lambda}(\lambda - \lambda_0).$$
With the expressions of $\hat{\sigma}_n^2(\lambda)$ in (2.6) and $\sigma_n^{*2}(\lambda)$ in (3.2), it follows that
$$\frac{\partial\hat{\sigma}_n^2(\lambda)}{\partial\lambda} = -\frac{2}{n}Y_n'W_n'M_nS_n(\lambda)Y_n,$$
and
$$\frac{\partial\sigma_n^{*2}(\lambda)}{\partial\lambda} = \frac{1}{n}\{2(\lambda - \lambda_0)(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) - 2\sigma_0^2tr[G_n'S_n(\lambda)S_n^{-1}]\}.$$
These imply that
$$\frac{h_n}{n}[(\ln L_n(\lambda) - \ln L_n(\lambda_0)) - (Q_n(\lambda) - Q_n(\lambda_0))]$$
$$= \frac{1}{\hat{\sigma}_n^2(\bar{\lambda}_n)}\frac{h_n}{n}\Big\{Y_n'W_n'M_nS_n(\bar{\lambda}_n)Y_n - \frac{\hat{\sigma}_n^2(\bar{\lambda}_n)}{\sigma_n^{*2}(\bar{\lambda}_n)}[(\lambda_0 - \bar{\lambda}_n)(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + \sigma_0^2tr(G_n'S_n(\bar{\lambda}_n)S_n^{-1})]\Big\}(\lambda - \lambda_0)$$
$$= \frac{1}{\hat{\sigma}_n^2(\bar{\lambda}_n)}\Big\{\frac{h_n}{n}\big\{Y_n'W_n'M_nS_n(\bar{\lambda}_n)Y_n - [(\lambda_0 - \bar{\lambda}_n)(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + \sigma_0^2tr(G_n'S_n(\bar{\lambda}_n)S_n^{-1})]\big\}$$
$$- \frac{\hat{\sigma}_n^2(\bar{\lambda}_n) - \sigma_n^{*2}(\bar{\lambda}_n)}{\sigma_n^{*2}(\bar{\lambda}_n)}\frac{h_n}{n}[(\lambda_0 - \bar{\lambda}_n)(G_nX_n\beta_0)'M_n(G_nX_n\beta_0) + \sigma_0^2tr(G_n'S_n(\bar{\lambda}_n)S_n^{-1})]\Big\}(\lambda - \lambda_0).$$
By using $S_n(\lambda)S_n^{-1}=I_n+(\lambda_0-\lambda)G_n$, the model implies that
\[
\begin{aligned}
Y_n'W_n'M_nS_n(\lambda)Y_n
&=(G_nX_n\beta_0)'M_nS_n(\lambda)S_n^{-1}X_n\beta_0+(G_nX_n\beta_0)'M_nS_n(\lambda)S_n^{-1}V_n\\
&\quad+V_n'G_n'M_nS_n(\lambda)S_n^{-1}X_n\beta_0+V_n'G_n'M_nS_n(\lambda)S_n^{-1}V_n\\
&=(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)(\lambda_0-\lambda)+(G_nX_n\beta_0)'M_nV_n+2(G_nX_n\beta_0)'M_nG_nV_n(\lambda_0-\lambda)\\
&\quad+V_n'G_n'M_nV_n+V_n'G_n'M_nG_nV_n(\lambda_0-\lambda).
\end{aligned}
\]
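The identity $S_n(\lambda)S_n^{-1}=I_n+(\lambda_0-\lambda)G_n$ driving this expansion can be confirmed numerically. The sketch below is illustrative only; the weights matrix and the values of $\lambda_0$ and $\lambda$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Arbitrary row-normalized weights matrix with zero diagonal.
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

lam0, lam = 0.3, 0.7
S0 = np.eye(n) - lam0 * W          # S_n = S_n(λ_0)
S = np.eye(n) - lam * W            # S_n(λ)
G = W @ np.linalg.inv(S0)          # G_n = W_n S_n^{-1}

lhs = S @ np.linalg.inv(S0)
rhs = np.eye(n) + (lam0 - lam) * G
print(np.abs(lhs - rhs).max())  # numerically zero
```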
Lemma A.9 implies that $\operatorname{tr}(M_nG_n)=\operatorname{tr}(G_n)+O(1)$ and $\operatorname{tr}(G_n'M_nG_n)=\operatorname{tr}(G_n'G_n)+O(1)$. The law of large numbers in Lemma A.12 shows that $\frac{h_n}{n}(V_n'M_nG_nV_n-\sigma_0^2\operatorname{tr}(G_n))=o_P(1)$ and $\frac{h_n}{n}(V_n'G_n'M_nG_nV_n-\sigma_0^2\operatorname{tr}(G_n'G_n))=o_P(1)$. Under Assumption 10, $\frac{h_n}{n}(G_nX_n\beta_0)'M_nV_n=o_P(1)$ and $\frac{h_n}{n}(G_nX_n\beta_0)'M_nG_nV_n=o_P(1)$ by Lemma A.14. Therefore,
\[
\begin{aligned}
&\frac{h_n}{n}\bigl\{Y_n'W_n'M_nS_n(\lambda)Y_n-(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)(\lambda_0-\lambda)-\sigma_0^2\operatorname{tr}(G_n'S_n(\lambda)S_n^{-1})\bigr\}\\
&=\frac{h_n}{n}\bigl\{(G_nX_n\beta_0)'M_nV_n+2(\lambda_0-\lambda)(G_nX_n\beta_0)'M_nG_nV_n\\
&\qquad+V_n'G_n'M_nV_n+(\lambda_0-\lambda)V_n'G_n'M_nG_nV_n-\sigma_0^2\operatorname{tr}(G_n')-\sigma_0^2(\lambda_0-\lambda)\operatorname{tr}(G_n'G_n)\bigr\}\\
&=o_P(1).
\end{aligned}
\]
From (B.1) and (3.2),
\[
\begin{aligned}
\hat\sigma_n^2(\lambda)-\sigma_n^{*2}(\lambda)
&=2(\lambda_0-\lambda)\frac{1}{n}(G_nX_n\beta_0)'M_nS_n(\lambda)S_n^{-1}V_n\\
&\quad+\frac{1}{n}\bigl\{V_n'S_n^{\prime-1}S_n'(\lambda)M_nS_n(\lambda)S_n^{-1}V_n-\sigma_0^2\operatorname{tr}[S_n^{\prime-1}S_n'(\lambda)M_nS_n(\lambda)S_n^{-1}]\bigr\}\\
&\quad+\frac{\sigma_0^2}{n}\bigl\{\operatorname{tr}[S_n^{\prime-1}S_n'(\lambda)M_nS_n(\lambda)S_n^{-1}]-\operatorname{tr}[S_n^{\prime-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}]\bigr\}\\
&=o_P(1),
\end{aligned}
\]
uniformly in $\lambda$ by Chebyshev's LLN, Lemma A.12 and Lemma A.9. Note that $\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)$ is $O(1)$ and $\frac{h_n}{n}\operatorname{tr}(G_n'S_n(\lambda)S_n^{-1})=O(1)$. When $h_n\to\infty$, $\sigma_n^2(\lambda)=\sigma_0^2\bigl[1+2(\lambda_0-\lambda)\frac{\operatorname{tr}(G_n)}{n}+(\lambda-\lambda_0)^2\frac{\operatorname{tr}(G_nG_n')}{n}\bigr]\to\sigma_0^2$ uniformly on $\Lambda$. As $\sigma_n^{*2}(\lambda)\ge\sigma_n^2(\lambda)$ and $\sigma_0^2>0$, $\frac{1}{\sigma_n^{*2}(\bar\lambda_n)}$ is $O(1)$ and $\frac{1}{\hat\sigma_n^2(\bar\lambda_n)}$ is $O_P(1)$. In conclusion, $\frac{h_n}{n}[(\ln L_n(\lambda)-\ln L_n(\lambda_0))-(Q_n(\lambda)-Q_n(\lambda_0))]=o_P(1)$ uniformly in $\lambda\in\Lambda$.
(Show the uniform equicontinuity of $\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))$): Recall that
\[
\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))=-\frac{h_n}{2}(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_0^2)+\frac{h_n}{n}(\ln|S_n(\lambda)|-\ln|S_n(\lambda_0)|).
\]
As $\operatorname{tr}(S_n^{\prime-1}S_n'(\lambda)S_n(\lambda)S_n^{-1})-n=(\lambda_0-\lambda)\operatorname{tr}(G_n'+G_n)+(\lambda-\lambda_0)^2\operatorname{tr}(G_n'G_n)$,
\[
\begin{aligned}
h_n(\sigma_n^{*2}(\lambda)-\sigma_0^2)
&=(\lambda-\lambda_0)^2\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\sigma_0^2\frac{h_n}{n}\bigl\{\operatorname{tr}[S_n^{\prime-1}S_n'(\lambda)S_n(\lambda)S_n^{-1}]-n\bigr\}\\
&=(\lambda-\lambda_0)^2\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\sigma_0^2(\lambda_0-\lambda)\frac{h_n}{n}\operatorname{tr}(G_n'+G_n)+\sigma_0^2(\lambda_0-\lambda)^2\frac{h_n}{n}\operatorname{tr}(G_n'G_n)
\end{aligned}
\]
is uniformly equicontinuous in $\lambda\in\Lambda$. By the mean value theorem, $h_n(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_0^2)=\frac{h_n}{\bar\sigma_n^2(\lambda)}(\sigma_n^{*2}(\lambda)-\sigma_0^2)$, where $\bar\sigma_n^2(\lambda)$ lies between $\sigma_n^{*2}(\lambda)$ and $\sigma_0^2$. As $\sigma_n^{*2}(\lambda)$ is uniformly bounded away from zero on $\Lambda$ and $\sigma_0^2>0$, $\bar\sigma_n^2(\lambda)$ is uniformly bounded away from zero, so $\frac{1}{\bar\sigma_n^2(\lambda)}$ is uniformly bounded from above. Hence, $h_n(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_0^2)$ is uniformly equicontinuous on $\Lambda$. The term $\frac{h_n}{n}(\ln|S_n(\lambda)|-\ln|S_n(\lambda_0)|)=-\frac{h_n}{n}\operatorname{tr}(W_nS_n^{-1}(\bar\lambda_n))(\lambda-\lambda_0)$ is uniformly equicontinuous on $\Lambda$ because $\frac{h_n}{n}\operatorname{tr}(W_nS_n^{-1}(\bar\lambda_n))=O(1)$. In conclusion, $\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))$ is uniformly equicontinuous on $\Lambda$.
(Identification uniqueness): For identification, let $D_n(\lambda)=-\frac{h_n}{2}(\ln\sigma_n^2(\lambda)-\ln\sigma_0^2)+\frac{h_n}{n}(\ln|S_n(\lambda)|-\ln|S_n(\lambda_0)|)$. Then $\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))=D_n(\lambda)-\frac{h_n}{2}(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_n^2(\lambda))$. By the Taylor expansion,
\[
\ln\sigma_n^{*2}(\lambda)-\ln\sigma_n^2(\lambda)=\frac{\sigma_n^{*2}(\lambda)-\sigma_n^2(\lambda)}{\bar\sigma_n^{*2}(\lambda)}=\frac{(\lambda-\lambda_0)^2}{\bar\sigma_n^{*2}(\lambda)}\cdot\frac{1}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0),
\]
where $\bar\sigma_n^{*2}(\lambda)$ lies between $\sigma_n^{*2}(\lambda)$ and $\sigma_n^2(\lambda)$. Because $\sigma_n^{*2}(\lambda)\ge\sigma_n^2(\lambda)$ for all $\lambda\in\Lambda$,
\[
h_n(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_n^2(\lambda))\ge\frac{1}{\sigma_n^{*2}(\lambda)}(\lambda-\lambda_0)^2\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0).
\]
For the situation in Assumption 8$'$, $\sigma_n^{*2}(\lambda)-\sigma_n^2(\lambda)=o(1)$ uniformly on $\Lambda$. Thus, $\lim_{n\to\infty}\sigma_n^{*2}(\lambda)=\sigma_0^2$. Therefore, under Assumption 10(a),
\[
-\lim_{n\to\infty}h_n(\ln\sigma_n^{*2}(\lambda)-\ln\sigma_n^2(\lambda))\le-\lim_{n\to\infty}\frac{(\lambda-\lambda_0)^2}{\sigma_n^{*2}(\lambda)}\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)
=-\lim_{n\to\infty}\frac{(\lambda-\lambda_0)^2}{\sigma_0^2}\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)<0,
\]
for any $\lambda\neq\lambda_0$. Furthermore, under the situation in Assumption 10(b), $D_n(\lambda)<0$ whenever $\lambda\neq\lambda_0$. It follows that $\lim_{n\to\infty}\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))<0$ whenever $\lambda\neq\lambda_0$. As $\frac{h_n}{n}(Q_n(\lambda)-Q_n(\lambda_0))$ is uniformly equicontinuous, the identification uniqueness condition holds and $\theta_0$ is identifiably unique.
The consistency of $\hat\lambda_n$ follows from the uniform convergence and the identification uniqueness condition.
For the pure SAR process, $\beta=0$ is imposed in the estimation. The consistency of the QMLE of $\lambda$ follows by arguments similar to those above. For the pure process, $\hat\sigma_n^2(\lambda)=\frac{1}{n}Y_n'S_n'(\lambda)S_n(\lambda)Y_n$ and the concentrated log likelihood function is $\ln L_n(\lambda)=-\frac{n}{2}(\ln(2\pi)+1)-\frac{n}{2}\ln\hat\sigma_n^2(\lambda)+\ln|S_n(\lambda)|$. For the pure process, $Q_n(\lambda)$ happens to have the same expression as that of the case where $G_nX_n\beta_0$ is multicollinear with $X_n$. The simpler analysis corresponds to setting $X_n=0$ and $M_n=I_n$ in the preceding arguments.
Q.E.D.
Proof of Theorem 5.2 (Show $\frac{h_n}{n}\bigl(\frac{\partial^2\ln L_n(\tilde\lambda_n)}{\partial\lambda^2}-\frac{\partial^2\ln L_n(\lambda_0)}{\partial\lambda^2}\bigr)=o_P(1)$): The first and second order derivatives of the concentrated log likelihood are
\[
\frac{\partial\ln L_n(\lambda)}{\partial\lambda}=\frac{1}{\hat\sigma_n^2(\lambda)}Y_n'W_n'M_nS_n(\lambda)Y_n-\operatorname{tr}(W_nS_n^{-1}(\lambda)),
\]
and
\[
\frac{\partial^2\ln L_n(\lambda)}{\partial\lambda^2}=\frac{2}{n\hat\sigma_n^4(\lambda)}\bigl(Y_n'W_n'M_nS_n(\lambda)Y_n\bigr)^2-\frac{1}{\hat\sigma_n^2(\lambda)}Y_n'W_n'M_nW_nY_n-\operatorname{tr}\bigl([W_nS_n^{-1}(\lambda)]^2\bigr),
\]
where $\hat\sigma_n^2(\lambda)=\frac{1}{n}Y_n'S_n'(\lambda)M_nS_n(\lambda)Y_n$. For the pure SAR process, $\beta_0=0$ and the corresponding derivatives are similar with $M_n$ replaced by the identity $I_n$. So it is sufficient to consider the regressive model.
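As a numerical sanity check of these derivative formulas (not part of the proof), one can compare the analytic score of the concentrated log likelihood with a finite-difference derivative on simulated data. All concrete values below (the sample size, the weights matrix, $\beta_0$, $\lambda_0$, and the evaluation point) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)            # arbitrary row-normalized weights
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

lam0, beta0, sig0 = 0.4, np.array([1.0, -0.5]), 1.0
V = sig0 * rng.standard_normal(n)
Y = np.linalg.solve(np.eye(n) - lam0 * W, X @ beta0 + V)   # SAR data

def conc_loglik(lam):
    S = np.eye(n) - lam * W
    sig2 = (S @ Y) @ M @ (S @ Y) / n          # \hat{σ}_n^2(λ)
    _, logdet = np.linalg.slogdet(S)
    return -n / 2 * (np.log(2 * np.pi) + 1) - n / 2 * np.log(sig2) + logdet

def score(lam):
    # Analytic first derivative of the concentrated log likelihood.
    S = np.eye(n) - lam * W
    sig2 = (S @ Y) @ M @ (S @ Y) / n
    return (W @ Y) @ M @ (S @ Y) / sig2 - np.trace(W @ np.linalg.inv(S))

lam, eps = 0.25, 1e-6
numeric = (conc_loglik(lam + eps) - conc_loglik(lam - eps)) / (2 * eps)
print(abs(numeric - score(lam)))  # small finite-difference error
```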
Because $M_nX_n=0$ and $S_n(\lambda)=S_n+(\lambda_0-\lambda)W_n$, one has
\[
Y_n'W_n'M_nW_nY_n=(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+2(G_nX_n\beta_0)'M_nG_nV_n+V_n'G_n'M_nG_nV_n,
\]
and
\[
\begin{aligned}
Y_n'W_n'M_nS_n(\lambda)Y_n&=Y_n'W_n'M_nS_nY_n+(\lambda_0-\lambda)Y_n'W_n'M_nW_nY_n=Y_n'W_n'M_nV_n+(\lambda_0-\lambda)Y_n'W_n'M_nW_nY_n\\
&=(G_nX_n\beta_0)'M_nV_n+V_n'G_n'M_nV_n+(\lambda_0-\lambda)\bigl[(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)\\
&\qquad+2(G_nX_n\beta_0)'M_nG_nV_n+V_n'G_n'M_nG_nV_n\bigr].
\end{aligned}
\]
As shown in the proof of Theorem 5.1, $\frac{h_n}{n}(G_nX_n\beta_0)'M_nV_n=o_P(1)$ and $\frac{h_n}{n}(G_nX_n\beta_0)'M_nG_nV_n=o_P(1)$. Hence,
\[
\frac{h_n}{n}Y_n'W_n'M_nW_nY_n=\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\frac{h_n}{n}V_n'G_n'M_nG_nV_n+o_P(1)
\]
and
\[
\frac{h_n}{n}Y_n'W_n'M_nS_n(\lambda)Y_n=\frac{h_n}{n}V_n'G_n'M_nV_n+(\lambda_0-\lambda)\Bigl[\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\frac{h_n}{n}V_n'G_n'M_nG_nV_n\Bigr]+o_P(1).
\]
Lemma A.12 implies that $V_n'G_n'M_nV_n=O_P(\frac{n}{h_n})$ and $V_n'G_n'M_nG_nV_n=O_P(\frac{n}{h_n})$. Thus $\frac{h_n}{n}Y_n'W_n'M_nS_n(\lambda)Y_n=O_P(1)$ uniformly on $\Lambda$. From the proof of Theorem 5.1, $\hat\sigma_n^2(\lambda)=\sigma_n^2(\lambda)+o_P(1)=\sigma_0^2+o_P(1)$ uniformly on $\Lambda$, when $\lim_{n\to\infty}h_n=\infty$. Thus, $\frac{1}{\hat\sigma_n^2(\lambda)}\frac{h_n}{n}V_n'G_n'M_nG_nV_n=\frac{1}{\sigma_0^2}\frac{h_n}{n}V_n'G_n'M_nG_nV_n+o_P(1)$ uniformly on $\Lambda$.
Therefore, one has
\[
\frac{h_n}{n}\frac{\partial^2\ln L_n(\lambda)}{\partial\lambda^2}=-\frac{1}{\sigma_0^2}\Bigl[\frac{h_n}{n}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\frac{h_n}{n}V_n'G_n'M_nG_nV_n\Bigr]-\frac{h_n}{n}\operatorname{tr}\bigl([W_nS_n^{-1}(\lambda)]^2\bigr)+o_P(1),
\]
uniformly on $\Lambda$. By Lemma A.8, under Assumption 7, $\frac{h_n}{n}\operatorname{tr}(G_n^3(\lambda))=O(1)$ uniformly on $\Lambda$. Therefore, by the Taylor expansion,
\[
\frac{h_n}{n}\Bigl(\frac{\partial^2\ln L_n(\tilde\lambda_n)}{\partial\lambda^2}-\frac{\partial^2\ln L_n(\lambda_0)}{\partial\lambda^2}\Bigr)=-\frac{h_n}{n}\bigl\{\operatorname{tr}([W_nS_n^{-1}(\tilde\lambda_n)]^2)-\operatorname{tr}(G_n^2)\bigr\}+o_P(1)=-2\frac{h_n}{n}\operatorname{tr}(G_n^3(\bar\lambda_n))(\tilde\lambda_n-\lambda_0)+o_P(1)=o_P(1),
\]
for any $\tilde\lambda_n$ which converges in probability to $\lambda_0$.
(Show $\frac{h_n}{n}\bigl(\frac{\partial^2\ln L_n(\lambda_0)}{\partial\lambda^2}-E(P_n(\lambda_0))\bigr)\stackrel{p}{\to}0$): Define
\[
P_n(\lambda_0)=-\frac{1}{\sigma_0^2}\bigl[(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+V_n'G_n'M_nG_nV_n\bigr]-\operatorname{tr}(G_n^2).
\]
Then $\frac{h_n}{n}\frac{\partial^2\ln L_n(\lambda_0)}{\partial\lambda^2}=\frac{h_n}{n}P_n(\lambda_0)+o_P(1)$. From Lemma A.9, $\operatorname{tr}(G_n'M_nG_n)=\operatorname{tr}(G_n'G_n)+O(1)$, $\operatorname{tr}[(G_n'M_nG_n)^2]=\operatorname{tr}[(G_nG_n')^2]+O(1)$ and $\sum_{i=1}^n((G_n'M_nG_n)_{ii})^2=\sum_{i=1}^n((G_n'G_n)_{ii})^2+O(\frac{1}{h_n})$. The $\operatorname{tr}(G_n'M_nG_n)$ and $\operatorname{tr}((G_n'M_nG_n)^2)$ are $O(\frac{n}{h_n})$ and $\sum_{i=1}^n((G_n'G_n)_{ii})^2=O(\frac{n}{h_n^2})$ from Lemma A.8. Therefore,
\[
E(P_n(\lambda_0))=-\frac{1}{\sigma_0^2}(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)-[\operatorname{tr}(G_nG_n')+\operatorname{tr}(G_n^2)]+O(1).
\]
As $\frac{h_n}{n}[P_n(\lambda_0)-E(P_n(\lambda_0))]=-\frac{1}{\sigma_0^2}\Delta_n+o(1)$, where $\Delta_n=\frac{h_n}{n}[V_n'G_n'M_nG_nV_n-\sigma_0^2\operatorname{tr}(G_n'M_nG_n)]$, $\frac{h_n}{n}[P_n(\lambda_0)-E(P_n(\lambda_0))]=o_P(1)$ if $\Delta_n=o_P(1)$. By Lemma A.11 and the orders of relevant terms,
\[
E(\Delta_n^2)=\Bigl(\frac{h_n}{n}\Bigr)^2\operatorname{var}(V_n'G_n'M_nG_nV_n)=\Bigl(\frac{h_n}{n}\Bigr)^2\Bigl[(\mu_4-3\sigma_0^4)\sum_{i=1}^n(G_{n,i}'M_nG_{n,i})^2+2\sigma_0^4\operatorname{tr}((G_n'M_nG_n)^2)\Bigr]=O\Bigl(\frac{h_n}{n}\Bigr),
\]
which goes to zero. Therefore, $\frac{h_n}{n}\bigl(\frac{\partial^2\ln L_n(\lambda_0)}{\partial\lambda^2}-E(P_n(\lambda_0))\bigr)=o_P(1)$.
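The variance formula for quadratic forms invoked here (the content of Lemma A.11), $\operatorname{var}(V_n'AV_n)=(\mu_4-3\sigma_0^4)\sum_i a_{ii}^2+\sigma_0^4[\operatorname{tr}(AA')+\operatorname{tr}(A^2)]$ for i.i.d. mean-zero disturbances, can be verified exactly on a small example by enumerating a discrete disturbance distribution. The matrix and the three-point distribution below are arbitrary illustrative choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))    # stands in for G_n' M_n G_n

# iid mean-zero v_i on {-1, 0, 2}; σ_0^2 and μ_4 follow from the distribution.
vals = np.array([-1.0, 0.0, 2.0])
probs = np.array([0.4, 0.4, 0.2])
sig2 = (vals ** 2) @ probs
mu4 = (vals ** 4) @ probs

# Exact variance of V'AV by enumerating all 3^n outcomes.
mean = 0.0
second = 0.0
for idx in product(range(3), repeat=n):
    v = vals[list(idx)]
    p = probs[list(idx)].prod()     # joint probability of this outcome
    q = v @ A @ v
    mean += p * q
    second += p * q * q
var_exact = second - mean ** 2

a = np.diag(A)
var_formula = ((mu4 - 3 * sig2 ** 2) * (a ** 2).sum()
               + sig2 ** 2 * (np.trace(A @ A.T) + np.trace(A @ A)))
print(abs(var_exact - var_formula))  # numerically zero
```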
(Show $\sqrt{\frac{h_n}{n}}\frac{\partial\ln L_n(\lambda_0)}{\partial\lambda}\stackrel{D}{\to}N(0,\sigma_\lambda^2)$ when $\lim_{n\to\infty}h_n=\infty$): Let $q_n=V_n'C_n'M_nV_n$. Thus,
\[
\sqrt{\frac{h_n}{n}}\frac{\partial\ln L_n(\lambda_0)}{\partial\lambda}=\frac{1}{\hat\sigma_n^2(\lambda_0)}\sqrt{\frac{h_n}{n}}\bigl[(G_nX_n\beta_0)'M_nV_n+q_n\bigr].
\]
The mean of $q_n$ is $E(q_n)=\sigma_0^2\operatorname{tr}(M_nC_n)=-\sigma_0^2\operatorname{tr}[(X_n'X_n)^{-1}X_n'C_nX_n]=O(1)$ because $\frac{X_n'C_nX_n}{n}=O(1)$ and $(\frac{X_n'X_n}{n})^{-1}=O(1)$ from Lemma A.6. The variance of $q_n$ from Lemma A.11 is
\[
\begin{aligned}
\sigma_{q_n}^2&=(\mu_4-3\sigma_0^4)\sum_{i=1}^n((C_n'M_n)_{ii})^2+\sigma_0^4\bigl[\operatorname{tr}(C_n'M_nC_n)+\operatorname{tr}((C_n'M_n)^2)\bigr]\\
&=(\mu_4-3\sigma_0^4)\sum_{i=1}^n(C_{n,ii})^2+\sigma_0^4\bigl[\operatorname{tr}(C_n'C_n)+\operatorname{tr}(C_n^2)\bigr]+O(1),
\end{aligned}
\]
where the last expression follows because $\sum_{i=1}^n((C_n'M_n)_{ii})^2=\sum_{i=1}^n(C_{n,ii})^2+O(\frac{1}{h_n})$, $\operatorname{tr}(C_n'M_nC_n)=\operatorname{tr}(C_n'C_n)+O(1)$ and $\operatorname{tr}[(C_n'M_n)^2]=\operatorname{tr}(C_n^2)+O(1)$ from Lemma A.9. The covariance of a linear term $Q_n'V_n$ and a quadratic form $V_n'P_nV_n$ is $E(Q_n'V_n\cdot V_n'P_nV_n)=Q_n'\sum_{i=1}^n\sum_{j=1}^np_{n,ij}E(V_nv_{ni}v_{nj})=Q_n'\mathrm{vec}_D(P_n)\mu_3$. Denote $\sigma_{lq_n}^2=\operatorname{var}((G_nX_n\beta_0)'M_nV_n+q_n)$. Thus,
\[
\sigma_{lq_n}^2=\sigma_0^2(G_nX_n\beta_0)'M_n(G_nX_n\beta_0)+\sigma_{q_n}^2+2(G_nX_n\beta_0)'M_n\mathrm{vec}_D(C_n'M_n)\mu_3.
\]
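The moment identity $E(Q_n'V_n\cdot V_n'P_nV_n)=Q_n'\mathrm{vec}_D(P_n)\mu_3$ for i.i.d. mean-zero disturbances can likewise be verified by exact enumeration over a discrete disturbance distribution; all concrete values below are arbitrary illustrative choices.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 4
P = rng.standard_normal((n, n))   # stands in for C_n' M_n
Q = rng.standard_normal(n)        # stands in for M_n G_n X_n β_0

# iid mean-zero v_i on {-1, 0, 2} with a nonzero third moment μ_3.
vals = np.array([-1.0, 0.0, 2.0])
probs = np.array([0.4, 0.4, 0.2])
mu3 = (vals ** 3) @ probs

# Exact expectation E(Q'V · V'PV) by enumerating all 3^n outcomes.
expect = 0.0
for idx in product(range(3), repeat=n):
    v = vals[list(idx)]
    p = probs[list(idx)].prod()
    expect += p * (Q @ v) * (v @ P @ v)

print(abs(expect - mu3 * (Q @ np.diag(P))))  # numerically zero
```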
As $\sqrt{\frac{h_n}{n}}E(q_n)=O(\sqrt{\frac{h_n}{n}})$, which goes to zero,
\[
\begin{aligned}
\sqrt{\frac{h_n}{n}}\frac{\partial\ln L_n(\lambda_0)}{\partial\lambda}
&=\frac{\sqrt{\frac{h_n}{n}}\,\sigma_{lq_n}}{\hat\sigma_n^2(\lambda_0)}\cdot\frac{(G_nX_n\beta_0)'M_nV_n+q_n-E(q_n)}{\sigma_{lq_n}}+\sqrt{\frac{h_n}{n}}\frac{E(q_n)}{\hat\sigma_n^2(\lambda_0)}\\
&=\frac{\sqrt{\frac{h_n}{n}}\,\sigma_{lq_n}}{\hat\sigma_n^2(\lambda_0)}\cdot\frac{(G_nX_n\beta_0)'M_nV_n+q_n-E(q_n)}{\sigma_{lq_n}}+o_P(1)
\stackrel{D}{\to}N\Bigl(0,\lim_{n\to\infty}\frac{h_n}{n}\frac{\sigma_{lq_n}^2}{\sigma_0^4}\Bigr).
\end{aligned}
\]
As $(C_{n,ii})^2=O(\frac{1}{h_n^2})$, $\frac{h_n}{n}\sum_{i=1}^n(C_{n,ii})^2=O(\frac{1}{h_n})$, which goes to zero when $\lim_{n\to\infty}h_n=\infty$. Finally, as $\frac{h_n}{n^2}\operatorname{tr}^2(G_n)=O(\frac{1}{h_n})=o(1)$,
\[
\lim_{n\to\infty}\frac{h_n}{n}\bigl[\operatorname{tr}(C_nC_n')+\operatorname{tr}(C_n^2)\bigr]=\lim_{n\to\infty}\frac{h_n}{n}\Bigl[\operatorname{tr}(G_nG_n')+\operatorname{tr}(G_n^2)-\frac{2}{n}\operatorname{tr}^2(G_n)\Bigr]=\lim_{n\to\infty}\frac{h_n}{n}\bigl[\operatorname{tr}(G_nG_n')+\operatorname{tr}(G_n^2)\bigr].
\]
The limiting distribution of $\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)$ follows from the Taylor expansion and the convergence results above.
Q.E.D.
Proof of Theorem 5.3 As $S_n(\hat\lambda_n)=S_n+(\lambda_0-\hat\lambda_n)W_n$,
\[
\begin{aligned}
\hat\beta_n-\beta_0&=(X_n'X_n)^{-1}X_n'V_n-(\hat\lambda_n-\lambda_0)(X_n'X_n)^{-1}X_n'W_nY_n\\
&=(X_n'X_n)^{-1}X_n'V_n-(\hat\lambda_n-\lambda_0)(X_n'X_n)^{-1}X_n'G_nX_n\beta_0-(\hat\lambda_n-\lambda_0)(X_n'X_n)^{-1}X_n'G_nV_n.
\end{aligned}
\]
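This decomposition is an exact algebraic identity given $\hat\beta_n=(X_n'X_n)^{-1}X_n'S_n(\hat\lambda_n)Y_n$, which can be checked numerically. In the sketch below the data-generating values and the stand-in value for $\hat\lambda_n$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])

lam0, beta0 = 0.4, np.array([1.0, 2.0])
V = rng.standard_normal(n)
Y = np.linalg.solve(np.eye(n) - lam0 * W, X @ beta0 + V)

lam_hat = 0.47                                 # stands in for the QMLE \hat{λ}_n
beta_hat = np.linalg.solve(X.T @ X, X.T @ ((np.eye(n) - lam_hat * W) @ Y))

direct = beta_hat - beta0
decomp = (np.linalg.solve(X.T @ X, X.T @ V)
          - (lam_hat - lam0) * np.linalg.solve(X.T @ X, X.T @ (W @ Y)))
print(np.abs(direct - decomp).max())  # numerically zero
```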
As $(\hat\lambda_n-\lambda_0)(X_n'X_n)^{-1}X_n'G_nV_n=O_P(\frac{\sqrt{h_n}}{n})$ because $\frac{X_n'X_n}{n}=O(1)$, $\frac{X_n'G_nV_n}{\sqrt{n}}=O_P(1)$ and $\hat\lambda_n-\lambda_0=O_P(\sqrt{\frac{h_n}{n}})$ by Theorem 5.2, $\hat\beta_n-\beta_0=(X_n'X_n)^{-1}X_n'V_n-(\hat\lambda_n-\lambda_0)(X_n'X_n)^{-1}X_n'G_nX_n\beta_0+O_P(\frac{\sqrt{h_n}}{n})$. In general,
\[
\begin{aligned}
\sqrt{\frac{n}{h_n}}(\hat\beta_n-\beta_0)&=\frac{1}{\sqrt{h_n}}\Bigl(\frac{X_n'X_n}{n}\Bigr)^{-1}\frac{X_n'V_n}{\sqrt{n}}-\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)\cdot(X_n'X_n)^{-1}X_n'G_nX_n\beta_0+O_P\Bigl(\frac{1}{\sqrt{n}}\Bigr)\\
&=-\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)\cdot(X_n'X_n)^{-1}X_n'G_nX_n\beta_0+O_P\Bigl(\frac{1}{\sqrt{h_n}}\Bigr).
\end{aligned}
\]
If $\beta_0$ is zero, $\sqrt{n}(\hat\beta_n-\beta_0)=(\frac{X_n'X_n}{n})^{-1}\frac{X_n'V_n}{\sqrt{n}}+O_P(\sqrt{\frac{h_n}{n}})\stackrel{D}{\to}N\bigl(0,\sigma_0^2\lim_{n\to\infty}(\frac{X_n'X_n}{n})^{-1}\bigr)$.
As
\[
\hat\sigma_n^2=\frac{1}{n}Y_n'S_n'(\hat\lambda_n)M_nS_n(\hat\lambda_n)Y_n=\frac{1}{n}Y_n'S_n'M_nS_nY_n+2(\lambda_0-\hat\lambda_n)\frac{1}{n}Y_n'W_n'M_nS_nY_n+(\lambda_0-\hat\lambda_n)^2\frac{1}{n}Y_n'W_n'M_nW_nY_n
\]
and $\frac{1}{n}Y_n'S_n'M_nS_nY_n=\frac{1}{n}V_n'M_nV_n$, it follows that
\[
\begin{aligned}
\sqrt{n}(\hat\sigma_n^2-\sigma_0^2)&=\frac{1}{\sqrt{n}}(V_n'V_n-n\sigma_0^2)-\frac{1}{\sqrt{n}}V_n'X_n(X_n'X_n)^{-1}X_n'V_n\\
&\quad-2\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)\cdot\frac{\sqrt{h_n}}{n}Y_n'W_n'M_nS_nY_n+\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)^2\cdot\frac{\sqrt{h_n}}{n}Y_n'W_n'M_nW_nY_n.
\end{aligned}
\]
Because $M_nX_n=0$, and $\sqrt{\frac{h_n}{n}}(G_nX_n\beta_0)'M_nV_n=O_P(1)$ and $\frac{h_n}{n}(G_nX_n\beta_0)'M_nG_nV_n=O_P(1)$ under Assumption 10,
\[
\frac{\sqrt{h_n}}{n}Y_n'W_n'M_nS_nY_n=\frac{\sqrt{h_n}}{n}(G_nX_n\beta_0+G_nV_n)'M_nV_n=\frac{\sqrt{h_n}}{n}V_n'G_n'M_nV_n+O_P\Bigl(\frac{1}{\sqrt{n}}\Bigr)=O_P\Bigl(\frac{1}{\sqrt{h_n}}\Bigr)
\]
and
\[
\frac{\sqrt{h_n}}{n}Y_n'W_n'M_nW_nY_n=\frac{\sqrt{h_n}}{n}(G_nX_n\beta_0+G_nV_n)'M_n(G_nX_n\beta_0+G_nV_n)=\frac{\sqrt{h_n}}{n}V_n'G_n'M_nG_nV_n+O_P\Bigl(\frac{1}{\sqrt{h_n}}\Bigr)=O_P\Bigl(\frac{1}{\sqrt{h_n}}\Bigr)
\]
by Lemmas A.12 and A.14. As $E(\frac{1}{\sqrt{n}}V_n'X_n(X_n'X_n)^{-1}X_n'V_n)=\frac{\sigma_0^2}{\sqrt{n}}\operatorname{tr}(X_n(X_n'X_n)^{-1}X_n')=\frac{\sigma_0^2k}{\sqrt{n}}$ goes to zero, the Markov inequality implies that $\frac{1}{\sqrt{n}}V_n'X_n(X_n'X_n)^{-1}X_n'V_n=o_P(1)$. Hence, as $\lim_{n\to\infty}h_n=\infty$,
\[
\sqrt{n}(\hat\sigma_n^2-\sigma_0^2)=\frac{1}{\sqrt{n}}(V_n'V_n-n\sigma_0^2)+o_P(1)\stackrel{D}{\to}N(0,\mu_4-\sigma_0^4).
\]
Q.E.D.
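The exact expansion of $\hat\sigma_n^2$ used at the start of the preceding argument can be verified numerically. The sketch below simulates one data set; the parameter values and the stand-in value for $\hat\lambda_n$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 25
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
M = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)

lam0, beta0 = 0.3, np.array([0.5, 1.5])
V = rng.standard_normal(n)
S0 = np.eye(n) - lam0 * W
Y = np.linalg.solve(S0, X @ beta0 + V)

lam_hat = 0.36                                  # stands in for the QMLE
Sh = np.eye(n) - lam_hat * W
sig2_hat = (Sh @ Y) @ M @ (Sh @ Y) / n          # \hat{σ}_n^2

# (1/n)V'MV + 2(λ0-λ̂)(1/n)Y'W'M S_n Y + (λ0-λ̂)^2 (1/n)Y'W'M W Y
expansion = (V @ M @ V / n
             + 2 * (lam0 - lam_hat) * (W @ Y) @ M @ (S0 @ Y) / n
             + (lam0 - lam_hat) ** 2 * (W @ Y) @ M @ (W @ Y) / n)
print(abs(sig2_hat - expansion))  # numerically zero
```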
Proof of Theorem 5.4 Let $X_n=(X_{1n},X_{2n})$, $M_{1n}=I_n-X_{1n}(X_{1n}'X_{1n})^{-1}X_{1n}'$ and $M_{2n}=I_n-X_{2n}(X_{2n}'X_{2n})^{-1}X_{2n}'$. Using a matrix partition for $(X_n'X_n)^{-1}$, $\hat\beta_{n1}-\beta_{01}=(X_{1n}'M_{2n}X_{1n})^{-1}X_{1n}'M_{2n}V_n-c_{1n}(\hat\lambda_n-\lambda_0)+O_P(\frac{\sqrt{h_n}}{n})$, and $\hat\beta_{n2}-\beta_{02}=(X_{2n}'M_{1n}X_{2n})^{-1}X_{2n}'M_{1n}V_n+O_P(\frac{\sqrt{h_n}}{n})$. Therefore,
\[
\begin{aligned}
\sqrt{\frac{n}{h_n}}(\hat\beta_{n1}-\beta_{01})&=\frac{1}{\sqrt{h_n}}\Bigl(\frac{1}{n}X_{1n}'M_{2n}X_{1n}\Bigr)^{-1}\frac{1}{\sqrt{n}}X_{1n}'M_{2n}V_n-c_{1n}\cdot\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)+O_P\Bigl(\frac{1}{\sqrt{n}}\Bigr)\\
&=-c_{1n}\cdot\sqrt{\frac{n}{h_n}}(\hat\lambda_n-\lambda_0)+O_P\Bigl(\frac{1}{\sqrt{h_n}}\Bigr),
\end{aligned}
\]
and $\sqrt{n}(\hat\beta_{n2}-\beta_{02})=(\frac{1}{n}X_{2n}'M_{1n}X_{2n})^{-1}\cdot\frac{1}{\sqrt{n}}X_{2n}'M_{1n}V_n+O_P(\sqrt{\frac{h_n}{n}})$. The asymptotic distributions of $\hat\beta_{n1}$ and $\hat\beta_{n2}$ follow.
Q.E.D.
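The matrix-partition step above is the Frisch-Waugh decomposition of least squares, which can be checked numerically; all data in the sketch below are arbitrary random draws used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k1, k2 = 15, 2, 2
X1 = rng.standard_normal((n, k1))
X2 = rng.standard_normal((n, k2))
X = np.hstack([X1, X2])
y = rng.standard_normal(n)

# Full OLS coefficients.
b = np.linalg.solve(X.T @ X, X.T @ y)

# Partitioned (Frisch-Waugh) form used in the proof.
M2 = np.eye(n) - X2 @ np.linalg.solve(X2.T @ X2, X2.T)
b1 = np.linalg.solve(X1.T @ M2 @ X1, X1.T @ M2 @ y)
print(np.abs(b[:k1] - b1).max())  # numerically zero
```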