Non-Linear Regression with Multidimensional Indices
Naveen K. Bansal, G. G. Hamedani
Department of Mathematics, Statistics, and Computer Science
Marquette University
Milwaukee, WI 53233
and
Hao Zhang
Program in Statistics
Washington State University
Pullman, WA 99164
Summary
We consider a non-linear regression model in which the index variable is multidimensional. Sufficient conditions on the non-linear function are given under which the least squares estimators are strongly consistent and asymptotically normally distributed. These sufficient conditions are satisfied by harmonic type functions, which are also of interest in the one dimensional index case, where Wu's (1981) and Jennrich's (1969) sufficient conditions are not applicable.
1 Introduction.
Statistical modeling with multidimensional indices is an important problem in spatial or temporo-spatial processes, in signal processing, and in texture modeling. For example, suppose the problem is to model the growth of vegetation on a particular farm over a period of time; this requires modeling with three dimensional indices. For examples in signal processing, see Rao, Zhao and Zhou (1994) and McClellan (1982). For examples in texture modeling, see Francos, Meiri and Porat (1993), Yuan and Subba Rao (1993), and Mandrekar and Zhang (1996). Most of the models in these areas are non-linear regression models. There has been extensive work in the literature on non-linear regression models with a one dimensional index (Wu (1981), Jennrich (1969), and Gallant (1987)). A non-linear regression model with a one dimensional index can be defined as follows:
$$y_t = f(x_t, \theta) + \varepsilon_t, \qquad t = 1, 2, \ldots, n, \qquad (1)$$
where $\{y_t,\ t = 1, \ldots, n\}$ is the observed data, $x_t$, $t = 1, \ldots, n$, are some known constants, $\theta \in \mathbb{R}^p$ is an unknown parameter vector, $\varepsilon_t$, $t = 1, \ldots, n$, are the random errors, and $f$ is a known function. The problem of estimating $\theta$ has been investigated quite extensively in the literature. Wu (1981) and Jennrich (1969) gave sufficient conditions on the function $f$ and the design to establish certain asymptotic properties of the least squares estimators. These conditions, however, are not satisfied if the function $f$ is of harmonic type (Kundu (1993)).
In the present work, we consider an extension of (1) to multidimensional indices,
$$y_t = f(x_t, \theta) + \varepsilon_t, \qquad t \preceq n, \qquad (2)$$
where $t = (t_1, t_2, \ldots, t_k)$ and $n = (n_1, n_2, \ldots, n_k) \in \mathbb{N}^k$ (the set of $k$-dimensional non-negative integer vectors), $\preceq$ denotes the partial ordering, i.e., for $m = (m_1, m_2, \ldots, m_k) \in \mathbb{N}^k$ and $n = (n_1, n_2, \ldots, n_k) \in \mathbb{N}^k$, $m \preceq n$ if $m_i \le n_i$ for $i = 1, \ldots, k$; $\{\varepsilon_t,\ t \in \mathbb{N}^k\}$ is an independent field of random variables such that
$$E(\varepsilon_t) = 0 \quad \text{and} \quad \mathrm{Var}(\varepsilon_t) = \sigma^2 \quad \forall\, t \in \mathbb{N}^k; \qquad (3)$$
$\theta \in \mathbb{R}^p$ is a parameter vector, $\{x_t,\ t \in \mathbb{N}^k\}$ is a set of known constant vectors, and $f$ is a known non-linear function.
Remark. For signal processing models (see Rao, Zhao and Zhou (1994) for an example), $y_t$, $f(\cdot, \cdot)$ and $\varepsilon_t$ are complex valued. Here, for notational convenience, we assume them to be real valued.
A natural choice for estimating $\theta$ is the method of least squares, i.e., minimizing
$$Q_n(\theta) = \sum_{t \preceq n} |y_t - f(x_t, \theta)|^2. \qquad (4)$$
Here we will not deal with numerical methods of estimating $\theta$. Our aim is to find sufficient conditions on the function $f(\cdot, \cdot)$ under which the least squares estimators are strongly consistent and asymptotically normally distributed as $|n| = \prod_{i=1}^{k} n_i \to \infty$. Note that consistency and asymptotic normality as $|n| \to \infty$ are much stronger results than consistency and asymptotic normality as $\min(n_1, n_2, \ldots, n_k) \to \infty$, as assumed, for example, in Rao, Zhao and Zhou (1994). Our results are also of interest in the one dimensional case, since they can be applied to harmonic type functions which do not satisfy Wu's and Jennrich's sufficient conditions (Kundu (1993)).
To illustrate, we will consider the following example, which can be used to model textures (Yuan and Subba Rao (1993) and Mandrekar and Zhang (1996)):
$$y_t = \sum_{k=1}^{p} \alpha_k \cos(t_1 \lambda_{1k} + t_2 \lambda_{2k}) + \varepsilon_t, \qquad (5)$$
where $t = (t_1, t_2)$, the $\alpha_k$'s are real unknown parameters, and the $\lambda_{1k}, \lambda_{2k}$ are unknown parameters in $[0, \pi]$.
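As a concrete illustration (ours, not part of the original development), the field (5) is straightforward to simulate. The sketch below is in Python, with Gaussian errors as one admissible choice satisfying (3); the function name and all numerical values are hypothetical.

```python
import numpy as np

def harmonic_field(n1, n2, alphas, lam1, lam2, sigma=1.0, rng=None):
    """Simulate y_t from model (5) on the grid {1,...,n1} x {1,...,n2}.

    alphas, lam1, lam2 are length-p sequences of amplitudes and
    frequencies (the lambdas lie in [0, pi]); the errors are taken
    i.i.d. N(0, sigma^2), which satisfies (3).
    """
    rng = np.random.default_rng() if rng is None else rng
    t1 = np.arange(1, n1 + 1)[:, None]   # column of t1 indices
    t2 = np.arange(1, n2 + 1)[None, :]   # row of t2 indices
    signal = sum(a * np.cos(t1 * l1 + t2 * l2)
                 for a, l1, l2 in zip(alphas, lam1, lam2))
    return signal + sigma * rng.standard_normal((n1, n2))

# One harmonic component (p = 1), as in model (22) below.
y = harmonic_field(64, 64, alphas=[2.0], lam1=[0.8], lam2=[1.3], sigma=0.5)
```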
This model will be taken up in detail in Section 2 as well as in Section 3. In Section 2, we present sufficient conditions, in terms of the function $f$, which establish the strong consistency of $\hat\theta_n$. In Section 3, sufficient conditions for the asymptotic normality of $\hat\theta_n$ are given, and we obtain its asymptotic distribution.
2 Strong Consistency.
Let $\theta_0$ be the true parameter vector. Our objective is to prove the strong consistency of $\hat\theta_n$ in the sense that $\hat\theta_n \to \theta_0$ a.s. as $|n| \to \infty$. Note that if $\{y_t,\ 1 \preceq t \preceq n\}$, where $1 = (1, 1, \ldots, 1) \in \mathbb{N}^k$, is the observed data, then the total number of observations is $|n|$. To prove the strong consistency, we need the following lemma, which is similar to Lemma 1 of Wu (1981).
Lemma 2.1. Let $\{Q_n(\theta)\}$ be a field of measurable functions such that, for every $\delta > 0$,
$$\liminf_{|n| \to \infty}\ \inf_{|\theta - \theta_0| \ge \delta}\ \frac{1}{|n|}\left(Q_n(\theta) - Q_n(\theta_0)\right) > 0 \quad a.s. \qquad (6)$$
Then $\hat\theta_n$, which minimizes $Q_n(\theta)$, converges to $\theta_0$ a.s. as $|n| \to \infty$.

Proof: Suppose $\hat\theta_n \not\to \theta_0$ a.s. as $|n| \to \infty$. Then there exist a subsequence $\{n_s,\ s \ge 1\}$ and a $\delta > 0$ such that $|\hat\theta_{n_s} - \theta_0| \ge \delta$ for all $s \ge 1$ with positive probability. Thus, from (6),
$$Q_{n_s}(\hat\theta_{n_s}) - Q_{n_s}(\theta_0) > 0$$
with positive probability for all $s \ge M$, for some $M > 0$. This contradicts the fact that $\hat\theta_{n_s}$ is a least squares estimator.
We now make the following assumptions:

Assumption 1: The parameter space $\Theta$ is compact.

Assumption 2: The function $f(x_t, \theta)$ satisfies

(i) $|f(x_t, \theta_1) - f(x_t, \theta_2)| \le a_t\,|\theta_1 - \theta_2|$ for all $\theta_1 \ne \theta_2$, where $a_t > 0$, $t \in \mathbb{N}^k$, are some constants such that $\sum_{t \preceq n} a_t = o(|n|^3)$;

(ii) $\sup_{t \in \mathbb{N}^k,\, \theta \in \Theta} |f(x_t, \theta)| \le M_0$ for some $M_0 > 0$.

Assumption 3: For every $\delta > 0$,
$$\liminf_{|n| \to \infty}\ \inf_{|\theta - \theta_0| \ge \delta}\ \frac{1}{|n|} \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right]^2 > 0.$$

Let $\hat\theta_n$ denote the least squares estimator, which minimizes (4).

Theorem 2.2. Under Assumptions 1–3 and (3), $\hat\theta_n \to \theta_0$ a.s. as $|n| \to \infty$.
Proof: Observe that
$$\frac{1}{|n|}\left(Q_n(\theta) - Q_n(\theta_0)\right) = \frac{1}{|n|} \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right]^2 - \frac{2}{|n|} \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] \varepsilon_t.$$
From Assumption 3, the first term on the right-hand side is bounded away from zero for $|\theta - \theta_0| \ge \delta$. Thus, from Lemma 2.1, the consistency of $\hat\theta_n$ follows if we prove that
$$\sup_{|\theta - \theta_0| \ge \delta}\left|\frac{1}{|n|} \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] \varepsilon_t\right| \to 0 \quad a.s. \text{ as } |n| \to \infty. \qquad (7)$$
Now, in view of Assumption 2, the function $d(x_t, \theta) = f(x_t, \theta) - f(x_t, \theta_0)$ is bounded, i.e.,
$$\sup_{t \in \mathbb{N}^k,\, \theta \in \Theta} |d(x_t, \theta)| < \infty,$$
and it satisfies condition (8) below; hence (7) follows from the following lemma.
Lemma 2.3. Let $g(x_t, \theta)$ be such that

(i) $|g(x_t, \theta_1) - g(x_t, \theta_2)| \le a_t\,|\theta_1 - \theta_2|$ for all $\theta_1 \ne \theta_2$, where $\sum_{t \preceq n} a_t = o(|n|^3)$, (8)

(ii) $\sup_{t \in \mathbb{N}^k,\, \theta \in \Theta} |g(x_t, \theta)| = B < \infty$, (9)

and let $\{\varepsilon_t,\ t \in \mathbb{N}^k\}$ be an i.i.d. real valued random field with mean 0 and variance $\sigma^2$. Then
$$\sup_{\theta \in \Theta}\left|\frac{1}{|n|} \sum_{t \preceq n} g(x_t, \theta)\,\varepsilon_t\right| \to 0 \quad a.s. \text{ as } |n| \to \infty.$$
Proof: To prove this, we use the following result of Mikosch and Norvaisa (1987) on Banach space valued random fields.

Let $\{X_n,\ n \in \mathbb{N}^k\}$ be a field of Banach space valued random variables with norm $\|\cdot\|$. Let $\{a_n,\ n \in \mathbb{N}^k\}$ be a field of positive numbers such that $\lim_{|n| \to \infty} a_n = \infty$, satisfying conditions A–D of Mikosch and Norvaisa (1987) and
$$\{a_n\} \in F_2 = \left\{\{a_n\} : \sum_{j=q}^{\infty} j^{-3}\,|A_j| = O\!\left(q^{-2}\,|A_q|\right) \text{ as } q \to \infty\right\}, \qquad (10)$$
where $A_j = \{n : a_n \le j\}$ and $|A_j|$ is the cardinality of $A_j$. If there exist a constant $c > 0$ and a positive random variable $X$ such that
$$\sup_{n \in \mathbb{N}^k} P(\|X_n\| > x) \le c\,P(X > x) \quad \forall\, x > 0 \qquad (11)$$
and
$$\sum_{n \in \mathbb{N}^k} P(X > a_n) < \infty, \qquad (12)$$
then $\lim_{|n| \to \infty} \frac{1}{a_n} \sum_{j \preceq n} X_j = 0$ a.s. is equivalent to
$$\lim_{|n| \to \infty} \frac{1}{a_n} \sum_{j \preceq n} X_j = 0 \quad \text{in probability}. \qquad (13)$$
We define $X_t = g(x_t, \cdot)\,\varepsilon_t$. Then $\{X_t,\ t \in \mathbb{N}^k\}$ is a field of $C(\Theta)$-valued random variables, where $C(\Theta)$ is the space of continuous functions on the compact metric space $\Theta$ with the sup norm. Define $a_n = |n|$. It is easy to see that $\{a_n,\ n \in \mathbb{N}^k\}$ satisfies conditions A–D of Mikosch and Norvaisa (1987).

Now, from the above mentioned result of Mikosch and Norvaisa, it suffices to show that $\{a_n,\ n \in \mathbb{N}^k\}$ and $\{X_n,\ n \in \mathbb{N}^k\}$ satisfy (10)–(13).
To show (10), we need to prove that
$$\sum_{j=q}^{\infty} j^{-3}\,|A_j| = O\!\left(q^{-2}\,|A_q|\right) \text{ as } q \to \infty. \qquad (14)$$
Using mathematical induction, it can be shown that
$$\int \cdots \int_{1 \le y_1 y_2 \cdots y_k \le j} dy_1 \cdots dy_k = j\left[\frac{(\ln j)^{k-1}}{(k-1)!} - \frac{(\ln j)^{k-2}}{(k-2)!} + \cdots\right].$$
Therefore
$$|A_j| \asymp j\,(\ln j)^{k-1} \text{ as } j \to \infty. \qquad (15)$$
Thus
$$\sum_{j=q}^{\infty} j^{-3}\,|A_j| \asymp \sum_{j=q}^{\infty} j^{-3}\, j\,(\ln j)^{k-1} \asymp \int_q^{\infty} y^{-2} (\ln y)^{k-1}\, dy. \qquad (16)$$
(The notation $b_j \asymp c_j$ means $b_j/c_j = O(1)$ and $c_j/b_j = O(1)$, i.e., as $j \to \infty$, $M_1 c_j \le b_j \le M_2 c_j$ for some $M_1, M_2 > 0$.)
Now, since
$$0 < y^{-2}\left[(\ln y)^{k-1} - (\ln M)^{k-1}\right] < y^{-1.5} \quad \forall\, y > M,$$
where $M$ is sufficiently large, we have
$$0 < \int_M^{\infty} y^{-2} (\ln y)^{k-1}\, dy - (\ln M)^{k-1} \int_M^{\infty} y^{-2}\, dy < \infty.$$
We now conclude, from (15) and (16), that
$$\sum_{j=q}^{\infty} j^{-3}\,|A_j| \asymp \int_q^{\infty} y^{-2} (\ln y)^{k-1}\, dy \asymp q^{-1} (\ln q)^{k-1} = O\!\left(q^{-2}\,|A_q|\right) \text{ as } q \to \infty.$$
This proves (14).
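The counting estimate (15) is easy to check numerically. The sketch below (ours, not part of the paper) counts the lattice points $\{(n_1, n_2) : n_1, n_2 \ge 1,\ n_1 n_2 \le j\}$ for $k = 2$ and compares the count with $j \ln j$, the growth rate behind (15).

```python
import numpy as np

# For k = 2, |A_j| = #{(n1, n2) : n1, n2 >= 1, n1*n2 <= j}
#            = sum over d = 1..j of floor(j / d),
# which should grow like j * ln(j), matching (15) up to constants.
for j in (10**2, 10**4, 10**6):
    d = np.arange(1, j + 1)
    count = int(np.sum(j // d))
    print(f"j = {j:8d}   |A_j| = {count:12d}"
          f"   j*ln(j) = {j * np.log(j):12.0f}"
          f"   ratio = {count / (j * np.log(j)):.3f}")
```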
To prove (11) and (12), note that, since $g(x_t, \theta)$ is bounded, for any $x > 0$,
$$P(\|X_t\| > x) = P\!\left(\sup_{\theta \in \Theta} |g(x_t, \theta)|\,|\varepsilon_t| > x\right) \le P(R\,|\varepsilon_1| > x), \qquad (17)$$
where $R$ is the upper bound of $|g(\cdot, \cdot)|$. Now, substituting $X = R\,|\varepsilon_1|$, we get
$$\sum_{n \in \mathbb{N}^k} P(|X| > |n|) \le \sum_{n \in \mathbb{N}^k} \frac{R^2 \sigma^2}{|n|^2} < \infty. \qquad (18)$$
The inequalities (17) and (18) prove (11) and (12), respectively.
To prove (13), we first note that, from (8), the $X_t$, $t \preceq n$, are independent random variables with mean zero belonging to a subspace of $C(\Theta)$,
$$\mathrm{Lip}(C(\Theta)) = \left\{h \in C(\Theta) : |h(\theta_1) - h(\theta_2)| < A\,|\theta_1 - \theta_2| \text{ for all } \theta_1 \ne \theta_2\right\},$$
where $A$ is some positive constant. Therefore, from inequality (6) of the Appendix of Wu (1981), we have
$$E\left\|\sum_{t \preceq n} X_t\right\|^2 \le K \sum_{t \preceq n} E\|X_t\|_L^2 \quad \text{for some } K < \infty, \qquad (19)$$
where, for $h \in \mathrm{Lip}(C(\Theta))$, the norm $\|\cdot\|_L$ is defined as
$$\|h\|_L = \sup_{\theta_1 \ne \theta_2} \frac{|h(\theta_1) - h(\theta_2)|}{|\theta_1 - \theta_2|} + |h(\theta_a)|, \qquad (20)$$
for $\theta_a$ some fixed point in $\Theta$.
Now, from (19), for any $\epsilon > 0$, we have
$$P\!\left(\frac{1}{|n|}\left\|\sum_{t \preceq n} X_t\right\| > \epsilon\right) \le \frac{E\left\|\sum_{t \preceq n} X_t\right\|^2}{|n|^2 \epsilon^2} \le \frac{K \sum_{t \preceq n} E\|X_t\|_L^2}{|n|^2 \epsilon^2} = \frac{K \sum_{t \preceq n} \|g(x_t, \cdot)\|_L^2\, E\varepsilon_t^2}{|n|^2 \epsilon^2} = \frac{K \sigma^2 \sum_{t \preceq n} \|g(x_t, \cdot)\|_L^2}{|n|^2 \epsilon^2}.$$
From (8), (9) and (20), we have
$$\|g(x_t, \cdot)\|_L \le a_t + B \quad \text{for all } t \in \mathbb{N}^k. \qquad (21)$$
Thus, from (21),
$$P\!\left(\frac{1}{|n|}\left\|\sum_{t \preceq n} X_t\right\| > \epsilon\right) \le \frac{2 K \sigma^2 \left(\sum_{t \preceq n} a_t^2 + |n|\,B^2\right)}{|n|^2 \epsilon^2} \to 0 \text{ as } |n| \to \infty.$$
This completes the proof of Lemma 2.3, and thus the proof of Theorem 2.2.
We now show that the least squares estimators for model (5) are strongly consistent under assumption (3). For notational convenience, we assume that $p = 1$ and deal only with
$$y_t = \alpha \cos(\lambda_1 t_1 + \lambda_2 t_2) + \varepsilon_t, \qquad 1 \preceq t \preceq n, \qquad (22)$$
where $\lambda_1, \lambda_2 \in [0, \pi]$. Further, we assume that $|\alpha| \le M < \infty$ for some $M > 0$. This is a reasonable assumption, since $\alpha$ represents the amplitude of the waves.

To prove the strong consistency via Theorem 2.2, it suffices to show that Assumptions 1–3 are satisfied for this model. We let $\theta = (\alpha, \lambda_1, \lambda_2)^T$ and $\theta_0 = (\alpha_0, \lambda_{10}, \lambda_{20})^T \in \Theta = [-M, M] \times [0, \pi]^2$. Clearly, Assumptions 1 and 2 are satisfied, by taking $a_t = 1 + M(t_1 + t_2)$ and $M_0 = M$. To verify Assumption 3, note that
$$\frac{1}{|n|} \sum_{t \preceq n} \left[\alpha \cos(\lambda_1 t_1 + \lambda_2 t_2) - \alpha_0 \cos(\lambda_{10} t_1 + \lambda_{20} t_2)\right]^2$$
$$= \frac{1}{|n|} \sum_{t \preceq n} \left[\alpha\, \frac{e^{i(\lambda_1 t_1 + \lambda_2 t_2)} + e^{-i(\lambda_1 t_1 + \lambda_2 t_2)}}{2} - \alpha_0\, \frac{e^{i(\lambda_{10} t_1 + \lambda_{20} t_2)} + e^{-i(\lambda_{10} t_1 + \lambda_{20} t_2)}}{2}\right]^2$$
$$= \frac{\alpha^2}{4}\left[2 + \frac{1}{|n|} \sum_{t \preceq n} e^{2i(\lambda_1 t_1 + \lambda_2 t_2)} + \frac{1}{|n|} \sum_{t \preceq n} e^{-2i(\lambda_1 t_1 + \lambda_2 t_2)}\right]$$
$$\quad + \frac{\alpha_0^2}{4}\left[2 + \frac{1}{|n|} \sum_{t \preceq n} e^{2i(\lambda_{10} t_1 + \lambda_{20} t_2)} + \frac{1}{|n|} \sum_{t \preceq n} e^{-2i(\lambda_{10} t_1 + \lambda_{20} t_2)}\right]$$
$$\quad - \frac{\alpha \alpha_0}{2 |n|} \sum_{t \preceq n} \left[e^{i((\lambda_1 + \lambda_{10}) t_1 + (\lambda_2 + \lambda_{20}) t_2)} + e^{-i((\lambda_1 + \lambda_{10}) t_1 + (\lambda_2 + \lambda_{20}) t_2)} + e^{i((\lambda_1 - \lambda_{10}) t_1 + (\lambda_2 - \lambda_{20}) t_2)} + e^{-i((\lambda_1 - \lambda_{10}) t_1 + (\lambda_2 - \lambda_{20}) t_2)}\right]. \qquad (23)$$
Now, since, for all $\omega_1, \omega_2 \in (0, 2\pi)$,
$$\sum_{t \preceq n} e^{i(\omega_1 t_1 + \omega_2 t_2)} = e^{i(\omega_1 + \omega_2)}\, \frac{\left(1 - e^{i n_1 \omega_1}\right)\left(1 - e^{i n_2 \omega_2}\right)}{\left(1 - e^{i \omega_1}\right)\left(1 - e^{i \omega_2}\right)},$$
which is bounded in $n$, Assumption 3 follows from (23).
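As a numerical illustration (ours, not part of the paper), the least squares estimates for model (22) can be seen drifting toward $\theta_0$ as the grid grows. All values below are hypothetical, and the optimizer is started at the true frequencies purely for convenience: $Q_n$ is highly multimodal in $(\lambda_1, \lambda_2)$, and a practical fit would start from, e.g., a peak of the two-dimensional periodogram.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
alpha0, lam10, lam20, sigma = 2.0, 0.8, 1.3, 1.0  # hypothetical values

def fit(n1, n2):
    """Least squares fit of model (22) on an n1 x n2 grid."""
    t1, t2 = np.meshgrid(np.arange(1, n1 + 1), np.arange(1, n2 + 1),
                         indexing="ij")
    y = (alpha0 * np.cos(lam10 * t1 + lam20 * t2)
         + sigma * rng.standard_normal((n1, n2)))
    # Residuals whose squared sum is Q_n(theta) of (4).
    resid = lambda th: (y - th[0] * np.cos(th[1] * t1 + th[2] * t2)).ravel()
    return least_squares(resid, x0=[1.0, lam10, lam20]).x

for n in (10, 40, 160):
    print(n, np.round(fit(n, n), 4))  # approaches (2.0, 0.8, 1.3)
```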
3 Asymptotic Normality.
Observe that the least squares estimator $\hat\theta_n$ is a zero of $Q_n'(\theta)$. (Primes are used to denote derivatives with respect to $\theta$.) Thus, expanding $Q_n'(\theta)$ about $\theta_0$ (the true parameter value) and evaluating it at $\hat\theta_n$, we get
$$0 = Q_n'(\theta_0) + Q_n''(\bar\theta_n)\left(\hat\theta_n - \theta_0\right), \qquad (24)$$
where $\bar\theta_n = h\,\theta_0 + (1 - h)\,\hat\theta_n$ for some $0 \le h \le 1$.
Note that
$$Q_n'(\theta) = -2 \sum_{t \preceq n} \left(y_t - f(x_t, \theta)\right) f'(x_t, \theta), \qquad (25)$$
$$Q_n''(\theta) = -2 \sum_{t \preceq n} \left(y_t - f(x_t, \theta)\right) f''(x_t, \theta) + 2 \sum_{t \preceq n} [f'(x_t, \theta)][f'(x_t, \theta)]^T \qquad (26)$$
$$= -2 \sum_{t \preceq n} \varepsilon_t\, f''(x_t, \theta) + 2 \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] f''(x_t, \theta) + 2 \sum_{t \preceq n} [f'(x_t, \theta)][f'(x_t, \theta)]^T.$$
Now, we impose the following assumptions on the function $f(\cdot, \cdot)$, in addition to Assumptions 1–3.

Assumption 4: $f(x_t, \theta)$ is twice continuously differentiable in $\theta$.

Assumption 5: Let $\{D_n,\ n \in \mathbb{N}^k\}$ be a field of $p \times p$ non-singular matrices such that

(i) $\frac{1}{|n|}\, D_n^T \sum_{t \preceq n} [f'(x_t, \theta)][f'(x_t, \theta)]^T D_n$ converges to a positive definite matrix $\Gamma(\theta_0)$ uniformly as $|n| \to \infty$ and $|\theta - \theta_0| \to 0$;

(ii) $\frac{1}{|n|}\, D_n^T \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] f''(x_t, \theta)\, D_n \to 0$ uniformly as $|n| \to \infty$ and $|\theta - \theta_0| \to 0$;

(iii) $\left\|D_n^T f''(x_t, \theta)\, D_n\right\|_E \le M$ for all $\theta \in \Theta$, where $\|\cdot\|_E$ is the Euclidean norm on matrices;

(iv) $\left\|D_n^T \left(f''(x_t, \theta_1) - f''(x_t, \theta_2)\right) D_n\right\|_E \le b_{tn}\,|\theta_1 - \theta_2|$ for all $\theta_1 \ne \theta_2$, where $b_{tn} > 0$, $t \in \mathbb{N}^k$, are some constants such that $\sum_{t \preceq n} b_{tn} = o(|n|^3)$;

(v) $\max_{1 \preceq t \preceq n} \frac{1}{|n|} \left\|D_n^T f'(x_t, \theta_0)\right\|_E \to 0$ as $|n| \to \infty$.
Theorem 3.1. Under Assumptions 1–5 and (3),
$$\sqrt{|n|}\, D_n^{-1}\left(\hat\theta_n - \theta_0\right) \xrightarrow{\ L\ } N_p\!\left(0,\ \sigma^2\,\Gamma^{-1}(\theta_0)\right) \text{ as } |n| \to \infty.$$
Proof: From (24),
$$\sqrt{|n|}\, D_n^{-1}\left(\hat\theta_n - \theta_0\right) = -\left(\frac{1}{|n|}\, D_n^T\, Q_n''(\bar\theta_n)\, D_n\right)^{-1} \frac{1}{\sqrt{|n|}}\, D_n^T\, Q_n'(\theta_0). \qquad (27)$$
Now, from (26),
$$\frac{1}{|n|}\, D_n^T\, Q_n''(\theta)\, D_n = -\frac{2}{|n|} \sum_{t \preceq n} \varepsilon_t\, D_n^T f''(x_t, \theta)\, D_n + \frac{2}{|n|}\, D_n^T \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] f''(x_t, \theta)\, D_n + \frac{2}{|n|}\, D_n^T \sum_{t \preceq n} [f'(x_t, \theta)][f'(x_t, \theta)]^T D_n. \qquad (28)$$
Using Assumption 5, parts (iii) and (iv), it can be shown, as in the proof of Lemma 2.3, that
$$\sup_{\theta \in \Theta}\left\|\frac{1}{|n|} \sum_{t \preceq n} \varepsilon_t\, D_n^T f''(x_t, \theta)\, D_n\right\|_E \to 0 \quad \text{in probability as } |n| \to \infty. \qquad (29)$$
Since $\hat\theta_n \to \theta_0$ a.s., we have $\bar\theta_n \to \theta_0$ a.s. as $|n| \to \infty$; thus, from (28), (29), and Assumption 5, parts (i) and (ii), we obtain
$$\frac{1}{|n|}\, D_n^T\, Q_n''(\bar\theta_n)\, D_n \to 2\,\Gamma(\theta_0) \quad \text{a.s. as } |n| \to \infty. \qquad (30)$$
From (25),
$$\frac{1}{\sqrt{|n|}}\, D_n^T\, Q_n'(\theta_0) = -\frac{2}{\sqrt{|n|}} \sum_{t \preceq n} \varepsilon_t\, D_n^T f'(x_t, \theta_0). \qquad (31)$$
To establish the asymptotic normality of (31), we first state a multidimensional-index version of the Hajek-Sidak theorem (Sen and Singer (1993)).

Lemma 3.2. Let $\{X_n,\ n \in \mathbb{N}^k\}$ be a field of independent and identically distributed random variables with mean $\mu$ and variance $\sigma^2$, and let $\{c_{mn},\ m \preceq n \in \mathbb{N}^k\}$ be a field of real numbers such that
$$\max_{1 \preceq m \preceq n} \frac{c_{mn}^2}{\sum_{m \preceq n} c_{mn}^2} \to 0 \quad \text{as } |n| \to \infty.$$
Then
$$\frac{\sum_{m \preceq n} c_{mn}\,(X_m - \mu)}{\sqrt{\sum_{m \preceq n} c_{mn}^2}} \xrightarrow{\ L\ } N(0, \sigma^2) \quad \text{as } |n| \to \infty.$$
The proof follows along the standard lines of the Hajek-Sidak theorem.
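The weight condition in Lemma 3.2 is easy to inspect numerically for the cosine weights that arise from (31) and (32). A small check (ours; the frequencies are hypothetical):

```python
import numpy as np

# Check max c^2 / sum c^2 -> 0 for c_t = cos(0.8*t1 + 1.3*t2),
# the kind of weight produced by D_n^T f'(x_t, theta_0) in (31).
for n in (5, 20, 80, 320):
    t1, t2 = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1),
                         indexing="ij")
    c = np.cos(0.8 * t1 + 1.3 * t2)
    print(f"|n| = {n*n:6d}   max/sum = {(c**2).max() / (c**2).sum():.2e}")
```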
Using this lemma, and Assumption 5, parts (i) and (v), we have, for any $\ell \in \mathbb{R}^p$,
$$\frac{\sum_{t \preceq n} \varepsilon_t\, \ell^T D_n^T f'(x_t, \theta_0)}{\sqrt{\ell^T D_n^T \left(\sum_{t \preceq n} [f'(x_t, \theta_0)][f'(x_t, \theta_0)]^T\right) D_n\, \ell}} \xrightarrow{\ L\ } N(0, \sigma^2).$$
From Assumption 5 (i), we obtain
$$\frac{1}{\sqrt{|n|}} \sum_{t \preceq n} \varepsilon_t\, \ell^T D_n^T f'(x_t, \theta_0) \xrightarrow{\ L\ } N\!\left(0,\ \sigma^2\, \ell^T \Gamma(\theta_0)\, \ell\right) \quad \text{for any } \ell \in \mathbb{R}^p,$$
and, in view of (31),
$$\frac{1}{\sqrt{|n|}}\, D_n^T\, Q_n'(\theta_0) \xrightarrow{\ L\ } N\!\left(0,\ 4\,\sigma^2\,\Gamma(\theta_0)\right).$$
Now the proof of the theorem follows from (27) and (30).
Remark. If Assumption 5 holds only for a subsequence of $\{n,\ n \in \mathbb{N}^k\}$ such that $|n| \to \infty$, then the result of Theorem 3.1 holds along that subsequence. This will be seen in the example below.
We now consider the asymptotic normality of the least squares estimators for model (5). We will show that the parameters of the asymptotic normal distribution depend on subsequences of $n$. Asymptotic normality can be obtained for two types of sequences: (i) a subsequence along which $\min(n_1, n_2) \to \infty$, and (ii) a subsequence along which one of $n_1$ and $n_2$ tends to $\infty$ while the other is kept constant. For notational convenience, as in Section 2, we assume that $p = 1$ and deal only with model (22). Let $\hat\theta_n = (\hat\alpha_n, \hat\lambda_{1n}, \hat\lambda_{2n})^T$ be the least squares estimator, and let $\theta_0 = (\alpha_0, \lambda_{10}, \lambda_{20})^T$ be the true parameter vector. Let
$$D_n = \mathrm{Diag}\left(1,\ n_1^{-1},\ n_2^{-1}\right).$$
In order to apply Theorem 3.1, we need to verify Assumptions 4 and 5. Assumption 4 is clearly satisfied. To verify Assumption 5, observe that
$$f'(x_t, \theta) = \begin{pmatrix} \cos(t_1\lambda_1 + t_2\lambda_2) \\ -\alpha\, t_1 \sin(t_1\lambda_1 + t_2\lambda_2) \\ -\alpha\, t_2 \sin(t_1\lambda_1 + t_2\lambda_2) \end{pmatrix}. \qquad (32)$$
Thus, writing $\psi_t = t_1\lambda_1 + t_2\lambda_2$ for brevity,
$$\sum_{t \preceq n} [f'(x_t, \theta)][f'(x_t, \theta)]^T = \begin{pmatrix}
\sum_{t \preceq n} \cos^2 \psi_t & -\frac{\alpha}{2} \sum_{t \preceq n} t_1 \sin 2\psi_t & -\frac{\alpha}{2} \sum_{t \preceq n} t_2 \sin 2\psi_t \\
-\frac{\alpha}{2} \sum_{t \preceq n} t_1 \sin 2\psi_t & \alpha^2 \sum_{t \preceq n} t_1^2 \sin^2 \psi_t & \alpha^2 \sum_{t \preceq n} t_1 t_2 \sin^2 \psi_t \\
-\frac{\alpha}{2} \sum_{t \preceq n} t_2 \sin 2\psi_t & \alpha^2 \sum_{t \preceq n} t_1 t_2 \sin^2 \psi_t & \alpha^2 \sum_{t \preceq n} t_2^2 \sin^2 \psi_t
\end{pmatrix}. \qquad (33)$$
Note that
$$\sum_{t \preceq n} \cos 2(t_1\lambda_1 + t_2\lambda_2) = \frac{1}{2} \sum_{t \preceq n} \left[e^{2i(t_1\lambda_1 + t_2\lambda_2)} + e^{-2i(t_1\lambda_1 + t_2\lambda_2)}\right]$$
$$= \frac{1}{2}\left[\sum_{1 \le t_1 \le n_1} e^{2i t_1 \lambda_1} \sum_{1 \le t_2 \le n_2} e^{2i t_2 \lambda_2} + \sum_{1 \le t_1 \le n_1} e^{-2i t_1 \lambda_1} \sum_{1 \le t_2 \le n_2} e^{-2i t_2 \lambda_2}\right] \qquad (34)$$
$$= \frac{1}{2}\left[e^{2i(\lambda_1 + \lambda_2)}\, \frac{\left(1 - e^{2i\lambda_1 n_1}\right)\left(1 - e^{2i\lambda_2 n_2}\right)}{\left(1 - e^{2i\lambda_1}\right)\left(1 - e^{2i\lambda_2}\right)} + e^{-2i(\lambda_1 + \lambda_2)}\, \frac{\left(1 - e^{-2i\lambda_1 n_1}\right)\left(1 - e^{-2i\lambda_2 n_2}\right)}{\left(1 - e^{-2i\lambda_1}\right)\left(1 - e^{-2i\lambda_2}\right)}\right].$$
This term is of order $O(1)$. This implies that
$$\frac{1}{|n|} \sum_{t \preceq n} \cos^2(t_1\lambda_1 + t_2\lambda_2) = \frac{1}{2} + \frac{1}{2|n|} \sum_{t \preceq n} \cos 2(t_1\lambda_1 + t_2\lambda_2) \to \frac{1}{2} \text{ as } |n| \to \infty.$$
By differentiating (34) with respect to $\lambda_1$, we get
$$\sum_{t \preceq n} t_1 \sin 2(t_1\lambda_1 + t_2\lambda_2) = O(n_1).$$
Thus
$$\frac{1}{|n|}\, \frac{1}{n_1} \sum_{t \preceq n} t_1 \sin 2(t_1\lambda_1 + t_2\lambda_2) \to 0 \quad \text{as } |n| \to \infty.$$
Similarly, it can be seen that
$$\frac{1}{|n|}\, \frac{1}{n_2} \sum_{t \preceq n} t_2 \sin 2(t_1\lambda_1 + t_2\lambda_2) \to 0 \quad \text{as } |n| \to \infty.$$
By differentiating (34) twice with respect to $\lambda_1$, we get
$$\sum_{t \preceq n} t_1^2 \cos 2(t_1\lambda_1 + t_2\lambda_2) = O(n_1^2).$$
Thus
$$\frac{1}{|n|}\, \frac{1}{n_1^2} \sum_{t \preceq n} t_1^2 \sin^2(t_1\lambda_1 + t_2\lambda_2) = \frac{1}{2|n|\, n_1^2} \sum_{t \preceq n} t_1^2 - \frac{1}{2|n|\, n_1^2} \sum_{t \preceq n} t_1^2 \cos 2(t_1\lambda_1 + t_2\lambda_2)$$
$$= \frac{(n_1 + 1)(2n_1 + 1)}{12\, n_1^2} - \frac{1}{2|n|\, n_1^2} \sum_{t \preceq n} t_1^2 \cos 2(t_1\lambda_1 + t_2\lambda_2),$$
which converges to $1/6$ if $\min(n_1, n_2) \to \infty$ or if $n_1 \to \infty$, and converges to $(n_1 + 1)(2n_1 + 1)/12 n_1^2$ if $n_1$ is fixed but $n_2 \to \infty$.

Similarly, it can be seen that $\frac{1}{|n|}\,\frac{1}{n_1 n_2} \sum_{t \preceq n} t_1 t_2 \sin^2(t_1\lambda_1 + t_2\lambda_2)$ converges to $1/8$ if $\min(n_1, n_2) \to \infty$, converges to $(n_1 + 1)/8 n_1$ if $n_1$ is fixed but $n_2 \to \infty$, and converges to $(n_2 + 1)/8 n_2$ if $n_2$ is fixed but $n_1 \to \infty$. Also, $\frac{1}{|n|}\,\frac{1}{n_2^2} \sum_{t \preceq n} t_2^2 \sin^2(t_1\lambda_1 + t_2\lambda_2)$ converges to $1/6$ if $\min(n_1, n_2) \to \infty$ or if $n_2 \to \infty$, and converges to $(n_2 + 1)(2n_2 + 1)/12 n_2^2$ if $n_2$ is fixed but $n_1 \to \infty$.
Thus, we have that Assumption 5 (i) is satisfied with
$$\Gamma(\theta_0) = \begin{pmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & \alpha_0^2/6 & \alpha_0^2/8 \\ 0 & \alpha_0^2/8 & \alpha_0^2/6 \end{pmatrix} \quad \text{if } \min(n_1, n_2) \to \infty, \qquad (35)$$
with
$$\Gamma(\theta_0) = \begin{pmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & \alpha_0^2\, \dfrac{(n_1+1)(2n_1+1)}{12\, n_1^2} & \alpha_0^2\, \dfrac{n_1+1}{8\, n_1} \\ 0 & \alpha_0^2\, \dfrac{n_1+1}{8\, n_1} & \alpha_0^2/6 \end{pmatrix} \quad \text{if } n_1 \text{ is fixed but } n_2 \to \infty, \qquad (36)$$
and with
$$\Gamma(\theta_0) = \begin{pmatrix} \tfrac{1}{2} & 0 & 0 \\ 0 & \alpha_0^2/6 & \alpha_0^2\, \dfrac{n_2+1}{8\, n_2} \\ 0 & \alpha_0^2\, \dfrac{n_2+1}{8\, n_2} & \alpha_0^2\, \dfrac{(n_2+1)(2n_2+1)}{12\, n_2^2} \end{pmatrix} \quad \text{if } n_2 \text{ is fixed but } n_1 \to \infty. \qquad (37)$$
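The limit (35) can be checked numerically against the finite-sample matrix in (33). A short verification (ours; the parameter values are hypothetical):

```python
import numpy as np

# Sanity check: the normalized information matrix
# (1/|n|) D_n^T [sum_t f'(x_t, theta) f'(x_t, theta)^T] D_n from (33)
# should approach Gamma(theta_0) of (35) as min(n1, n2) -> infinity.
alpha0, lam1, lam2 = 2.0, 0.8, 1.3
n1 = n2 = 400
t1, t2 = np.meshgrid(np.arange(1, n1 + 1), np.arange(1, n2 + 1),
                     indexing="ij")
psi = t1 * lam1 + t2 * lam2
# Gradient (32): one row per parameter, flattened over the grid.
grad = np.stack([np.cos(psi).ravel(),
                 (-alpha0 * t1 * np.sin(psi)).ravel(),
                 (-alpha0 * t2 * np.sin(psi)).ravel()])
Dn = np.diag([1.0, 1.0 / n1, 1.0 / n2])
info = Dn @ (grad @ grad.T) @ Dn / (n1 * n2)
gamma = np.array([[0.5, 0.0, 0.0],
                  [0.0, alpha0**2 / 6, alpha0**2 / 8],
                  [0.0, alpha0**2 / 8, alpha0**2 / 6]])
print(np.round(info, 3))    # close to gamma for large n1, n2
print(np.round(gamma, 3))   # the limit (35)
```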
It is easy to see that Assumption 5, parts (iii) and (iv), are satisfied. To verify Assumption 5 (ii), note that
$$\frac{1}{|n|}\, D_n^T \sum_{t \preceq n} \left[f(x_t, \theta) - f(x_t, \theta_0)\right] f''(x_t, \theta)\, D_n = \frac{1}{|n|}\, D_n^T \sum_{t \preceq n} \left[f'(x_t, \theta_t^*)\right]^T (\theta - \theta_0)\, f''(x_t, \theta)\, D_n,$$
where $\theta_t^* = h_t\,\theta_0 + (1 - h_t)\,\theta$ for some $0 \le h_t \le 1$. Assumption 5 (ii) can now be verified from (32) and Assumption 5 (iii).
Thus, from the remark following the proof of Theorem 3.1, we conclude that
$$\sqrt{|n|}\left(\hat\alpha_n - \alpha_0,\ n_1\left(\hat\lambda_{1n} - \lambda_{10}\right),\ n_2\left(\hat\lambda_{2n} - \lambda_{20}\right)\right)^T \xrightarrow{\ L\ } N_3\!\left(0,\ \sigma^2\,\Gamma^{-1}(\theta_0)\right)$$
as either $\min(n_1, n_2) \to \infty$, or $n_2 \to \infty$ while $n_1$ is held fixed, or $n_1 \to \infty$ while $n_2$ is held fixed, where $\Gamma(\theta_0)$ is given by (35)–(37) for the appropriate subsequence.
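As a final sanity check (ours, not from the paper), one can simulate model (22) repeatedly and compare the sample covariance of $\sqrt{|n|}\,D_n^{-1}(\hat\theta_n - \theta_0)$ with $\sigma^2\,\Gamma^{-1}(\theta_0)$ from (35). All numerical values are hypothetical, and the optimizer is started at $\theta_0$ purely for illustration; in practice the frequencies would be initialized from a periodogram peak.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
alpha0, lam10, lam20, sigma = 2.0, 0.8, 1.3, 0.5  # hypothetical values
n1 = n2 = 30
t1, t2 = np.meshgrid(np.arange(1, n1 + 1), np.arange(1, n2 + 1),
                     indexing="ij")
theta0 = np.array([alpha0, lam10, lam20])

def estimate():
    y = (alpha0 * np.cos(lam10 * t1 + lam20 * t2)
         + sigma * rng.standard_normal((n1, n2)))
    resid = lambda th: (y - th[0] * np.cos(th[1] * t1 + th[2] * t2)).ravel()
    return least_squares(resid, x0=theta0).x

# Replicates of sqrt(|n|) D_n^{-1} (theta_hat - theta_0).
scale = np.sqrt(n1 * n2) * np.array([1.0, n1, n2])
draws = np.array([scale * (estimate() - theta0) for _ in range(300)])

gamma = np.array([[0.5, 0.0, 0.0],
                  [0.0, alpha0**2 / 6, alpha0**2 / 8],
                  [0.0, alpha0**2 / 8, alpha0**2 / 6]])
print(np.round(np.cov(draws.T), 3))                  # empirical covariance
print(np.round(sigma**2 * np.linalg.inv(gamma), 3))  # theory, per (35)
```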
Acknowledgement:
The authors are grateful to Jim Kuelbs for his helpful comments and suggestions. We are indebted to Hira Koul for his careful reading of this manuscript and his invaluable suggestions, which resulted in a correction and an improved presentation of the content.
References:
Francos, J. M., Meiri, A. Z. and Porat, B. (1993). A unified texture model based on a 2-D Wold-like decomposition. IEEE Trans. Signal Processing, 41, 2665–2678.

Gallant, A. R. (1987). Nonlinear Statistical Models. J. Wiley.

Jennrich, R. I. (1969). Asymptotic properties of non-linear least squares estimators. Ann. Math. Statist., 40, 633–643.

Kundu, D. (1993). Asymptotic theory of least squares estimator of a particular nonlinear regression model. Statistics & Probability Letters, 18, 13–17.
Mandrekar, V. and Zhang, H. (1996). On parameter estimation in a texture model. Submitted for publication.

McClellan, J. H. (1982). Multidimensional spectral estimation. Proc. IEEE, 70, 1029–1039.
Mikosch, T. and Norvaisa, R. (1987). Strong laws of large numbers for fields of Banach space valued random variables. Prob. Th. Rel. Fields, 74, 241–253.

Rao, C. R., Zhao, L. and Zhou, B. (1994). Maximum likelihood estimation of 2-D superimposed exponential signals. IEEE Trans. Signal Processing, 42, 1795–1802.
Sen, P. K. and Singer, J. M. (1993). Large Sample Methods in Statistics: An Introduction with Applications. Chapman & Hall.
Wu, C. F. (1981). Asymptotic theory of non-linear least squares estimation. Ann. Statist., 9, 501–513.
Yuan, T. and Subba Rao, T. (1993). Spectrum estimation for random fields with applications to Markov modelling and texture classification. In Markov Random Fields: Theory and Applications, edited by R. Chellappa and A. K. Jain, Academic Press.