Pantula, Sastry G., and Fuller, Wayne A. (1983). "Mean Estimation Bias in Least Squares Estimation of Autoregressive Processes."

January 25, 1983

MIMEOGRAPH SERIES #1623

Mean Estimation Bias in Least Squares Estimation of Autoregressive Processes

by

Sastry G. Pantula and Wayne A. Fuller
Abstract
Estimation of the parameters of an autoregressive process with a mean that is a function of time is considered. Approximate expressions for the bias of the least squares estimator of the autoregressive parameters that is due to estimating the unknown mean function are derived. For the case of a mean function that is a polynomial in time, a reparameterization that isolates the bias is given. Using the approximate expressions, a method of modifying the least squares estimator is proposed. A Monte Carlo study of the second-order autoregressive process is presented. The Monte Carlo results agree well with the approximate theory and, generally speaking, the modified least squares estimators performed better than the least squares estimator.
1. Introduction
Let the time series $Y_t$ satisfy

  $Y_t = \sum_{i=1}^{r} X_{ti}\beta_i + P_t$ ,  $t = 1, 2, \ldots$ ,    (1)

where $X_{ti}$ is a fixed function of time for $i = 1, 2, \ldots, r$, and $P_t$ is a stationary $p$-th order normal autoregressive process satisfying

  $P_t = \sum_{j=1}^{p} \alpha_j P_{t-j} + e_t$ ,    (2)

and $\{e_t\}$ is a sequence of normal independent $(0, \sigma^2)$ random variables. Assume that the roots of the characteristic equation

  $m^p - \sum_{j=1}^{p} \alpha_j m^{p-j} = 0$    (3)

lie inside the unit circle.
Assume that

  $D_n (X'X)^{-1} D_n = O(1)$    (4)

and

  $D_n^{-1} X_t' = O(n^{-1/2})$ ,  uniformly in $t$, $t = 1, 2, \ldots, n$ ,

where $X = (X_1', X_2', \ldots, X_n')'$ and $D_n$ is a diagonal matrix of norming constants. Also assume that there exists a nonsingular $r \times r$ matrix $C$ such that

  $X_{t-1} = X_t C$ ,  $t = 2, 3, \ldots$ .    (5)

For example, these assumptions on $X_t$ are satisfied by polynomial and trigonometric polynomial functions of time.
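
To make the setup concrete, the following sketch (ours, not part of the original text) simulates a realization of models (1) and (2) with a polynomial mean function; the function name, the burn-in device, and the default values are assumptions of the illustration.

    import numpy as np

    def simulate_model(n, beta, alpha, sigma=1.0, burn_in=200, seed=None):
        """Simulate Y_t = X_t beta + P_t of model (1), where P_t is the
        stationary AR(p) process of model (2) and X_t = (1, t, ..., t^(r-1))."""
        rng = np.random.default_rng(seed)
        p = len(alpha)
        # Generate the AR(p) deviations; a burn-in makes the start-up
        # approximately a draw from the stationary distribution.
        e = rng.normal(0.0, sigma, size=burn_in + n)
        P = np.zeros(burn_in + n)
        for t in range(p, burn_in + n):
            P[t] = np.dot(alpha, P[t - p:t][::-1]) + e[t]
        P = P[burn_in:]
        # Polynomial mean function: column i of X holds t^i, t = 1, ..., n.
        tt = np.arange(1, n + 1)
        X = np.column_stack([tt.astype(float) ** i for i in range(len(beta))])
        return X @ np.asarray(beta) + P, X

    # Example: AR(2) deviations about a linear trend.
    Y, X = simulate_model(n=50, beta=[2.0, 0.1], alpha=[0.5, 0.3], seed=1)

A trigonometric design matrix can be substituted for the polynomial columns without changing the rest of the sketch.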
For the case $r = 1$, $p = 1$ and $X_{t1} \equiv 1$, Marriott and Pope (1954), Orcutt and Winokur (1969), Salem (1971), Bora-Senta and Kounias (1980) and Lee (1981) proposed several estimators of $\alpha_1$. These authors also used Monte Carlo studies to compare the small sample properties of the various estimators; see Lee (1981). The small sample properties of the least squares estimator of $\alpha$ have received little attention for higher order processes. Salem (1971) extended the method of Marriott and Pope (1954) to obtain expressions for the approximate biases of the least squares estimator of $\alpha$ for the case $r = 1$, $p = 2$ and $X_{t1} \equiv 1$. The method proposed by Salem has no immediate extension to processes with $p > 2$. Bora-Senta and Kounias (1980) considered an iterative method of moments procedure as an alternative to least squares estimation for higher order processes with $r = 1$ and $X_{t1} \equiv 1$. Ansley and Newbold (1980) compared the properties of the maximum likelihood predictor, the exact least squares predictor and the conditional least squares predictor of $Y_{n+1}$, $Y_{n+2}$ and $Y_{n+10}$ by means of simulation.
We shall consider two procedures for estimating $\alpha$. In the first procedure $Y_t$ is regressed on $(X_t, Y_{t-1}, Y_{t-2}, \ldots, Y_{t-p})$, and the coefficient of $Y_{t-i}$ estimates $\alpha_i$. There are a total of $n - p$ observations in the regression. Because the $X_t$ satisfy (5), this procedure is equivalent to computing residuals $\tilde P_{t-i}$ by regressing $Y_{t-i}$ on $X_{t-i}$, $i = 0, 1, \ldots, p$, and then estimating $\alpha_i$ as the coefficient of $\tilde P_{t-i}$ in the regression of $\tilde P_t$ on $(\tilde P_{t-1}, \tilde P_{t-2}, \ldots, \tilde P_{t-p})$. We denote the estimator of $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_p)'$ obtained in this way by $\tilde\alpha$. See Rao (1967) for a description of this procedure for the polynomial mean function.
The second procedure consists of two steps. First, the ordinary least squares estimator $\hat\beta$ of $\beta$ is constructed, where

  $\hat\beta = (X'X)^{-1} X'Y$    (6)

and $Y = (Y_1, Y_2, \ldots, Y_n)'$. Let the least squares residuals be denoted by $\hat P_t$, where

  $\hat P_t = Y_t - X_t\hat\beta$ ,  $t = 1, 2, \ldots, n$ ,    (7)

and

  $X_t = (X_{t1}, X_{t2}, \ldots, X_{tr})$ .    (8)

The second step of procedure two consists of estimating $\alpha$ by regressing $\hat P_t$ on $(\hat P_{t-1}, \hat P_{t-2}, \ldots, \hat P_{t-p})$. We call this estimator $\hat\alpha$.

In this paper we obtain an approximate expression for the bias in the estimators of $\alpha$ that arises from estimating $\beta$. We propose an adjustment in the estimators based upon the expression for the approximate bias. Using Monte Carlo simulation, the proposed estimator is compared with the original estimator.
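
A minimal implementation of the two procedures (ours, not the authors') might look as follows; the Theorem 3 result below shows that the biases of the two estimators agree to $O(n^{-2})$.

    import numpy as np

    def ar_estimates(Y, X, p):
        """Procedure one: regress Y_t on (X_t, Y_{t-1}, ..., Y_{t-p});
        procedure two: regress OLS residuals P_t on their own p lags."""
        n = len(Y)
        # Procedure one: joint regression with n - p observations.
        Z = np.column_stack([X[p:]] + [Y[p - i: n - i] for i in range(1, p + 1)])
        coef = np.linalg.lstsq(Z, Y[p:], rcond=None)[0]
        alpha_tilde = coef[X.shape[1]:]          # coefficients of the Y lags
        # Procedure two: detrend once, then autoregress the residuals.
        beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
        P = Y - X @ beta_hat
        F = np.column_stack([P[p - i: n - i] for i in range(1, p + 1)])
        alpha_hat = np.linalg.lstsq(F, P[p:], rcond=None)[0]
        return alpha_tilde, alpha_hat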
2. Bias in estimators of $\alpha$

The estimator of $\alpha$ obtained by method one can be written

  $\tilde\alpha = \tilde H^{-1}\tilde g$ ,    (9)

where

  $\tilde H = (n-p)^{-1}\sum_{t=p+1}^{n} \tilde F_{t-1}'\tilde F_{t-1}$ ,  $\tilde g = (n-p)^{-1}\sum_{t=p+1}^{n} \tilde F_{t-1}'\tilde P_t$ ,

and $\tilde F_{t-1} = (\tilde P_{t-1}, \tilde P_{t-2}, \ldots, \tilde P_{t-p})$.
From the arguments of Fuller and Hasza (1981) and Lee (1981), it follows that $\tilde\alpha$ is integrable and

  $E\{\tilde\alpha - \alpha\} = E\{H^{-1}(\tilde g - \tilde H\alpha)\} + O(n^{-2})$ ,

where

  $H = E\{(n-p)^{-1}\sum_{t=p+1}^{n} F_{t-1}'F_{t-1}\}$    (10)

and $F_{t-1} = (P_{t-1}, P_{t-2}, \ldots, P_{t-p})$.
The bias in $\tilde\alpha$ arises from two sources. The first source of bias is inherent in estimating $\alpha$ as the product of the inverse of $\tilde H$ and the vector $\tilde g$, and is present even when the deviations $P_t$ are observed. The second source of bias results from estimating the unknown function $X_t\beta$. Let $\tilde a = \tilde g - \tilde H\alpha$ and let

  $\tilde d = (n-p)^{-1}\sum_{t=p+1}^{n} F_{t-1}'e_t$

denote the corresponding quantity computed from the true deviations, where $e_t = P_t - F_{t-1}\alpha$. The approximate bias in $\tilde\alpha$ arising from estimating the mean function is then given by $E\{H^{-1}(\tilde a - \tilde d)\}$. An expression for $E\{\tilde a - \tilde d\}$ is derived in Theorem 1.
Theorem 1. Assume that the roots of equation (3) lie inside the unit circle, that the $Y_t$ satisfy model (1), and that $X_t$ satisfies conditions (4) and (5). Let $\tilde\alpha$ be given by (9). Then the $i$-th coordinate of $E\{\tilde a - \tilde d\}$ is

  $-(n-p)^{-1}\,\mathrm{tr}\{(X_{(-p)}'X_{(-p)})^{-1}X_{(-p)}'B_iX_{(-p)}\}$ ,    (11)

where $B_i = E\{e P_{(-i)}'\}$, $e = (e_{p+1}, e_{p+2}, \ldots, e_n)'$, $P_{(-i)} = (P_{p+1-i}, P_{p+2-i}, \ldots, P_{n-i})'$, and $X_{(-i)}' = (X_{p+1-i}', X_{p+2-i}', \ldots, X_{n-i}')$.
Proof. Under procedure one,

  $\tilde P_{(-i)} = M P_{(-i)}$ ,  $i = 0, 1, \ldots, p$ ,    (12)

where

  $M = I - X_{(-p)}(X_{(-p)}'X_{(-p)})^{-1}X_{(-p)}'$ .

The single matrix $M$ can be used to construct all of the residuals because, by assumption (5), the column space of $X_{(-i)}$ is the same for every $i$. Because $M$ is symmetric and idempotent, and because $e_t = P_t - \sum_{j=1}^{p}\alpha_j P_{t-j}$, the $i$-th coordinates of $\tilde a$ and $\tilde d$ are

  $\tilde a_i = (n-p)^{-1}\tilde P_{(-i)}'\Big(\tilde P_{(0)} - \sum_{j=1}^{p}\alpha_j\tilde P_{(-j)}\Big) = (n-p)^{-1}P_{(-i)}'Me$

and $\tilde d_i = (n-p)^{-1}P_{(-i)}'e$. It follows that

  $\tilde a_i - \tilde d_i = -(n-p)^{-1}P_{(-i)}'X_{(-p)}(X_{(-p)}'X_{(-p)})^{-1}X_{(-p)}'e$ ,

and taking expectations gives the trace expression (11) for the $i$-th coordinate of $E\{\tilde a - \tilde d\}$. $\square$
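
The trace expression in Theorem 1 can be checked numerically against the polynomial-trend evaluation given in Theorem 2 below. The following check is ours, not part of the original text; it uses the moving average weights $\omega_j$ of the stationary process, for which $E\{e_tP_s\} = \sigma^2\omega_{s-t}$.

    import numpy as np

    alpha = [0.5, 0.3]                       # illustrative AR(2) parameters
    p, n, r, sigma2 = 2, 100, 2, 1.0
    m = n - p
    omega = np.zeros(n)                      # MA weights of the AR(p) process
    omega[0] = 1.0
    for j in range(1, n):
        omega[j] = sum(alpha[k] * omega[j - k - 1] for k in range(p) if j >= k + 1)
    u = np.arange(1, m + 1)
    X = np.column_stack([u.astype(float) ** k for k in range(r)])   # X_(-p)
    A = np.linalg.inv(X.T @ X)
    for i in range(1, p + 1):
        # B_i has (t, s) element sigma^2 * omega_{s - t - i}.
        B = np.array([[sigma2 * omega[s - t - i] if s - t - i >= 0 else 0.0
                       for s in range(m)] for t in range(m)])
        exact = -np.trace(A @ X.T @ B @ X) / m
        approx = -r * sigma2 / (m * (1.0 - sum(alpha)))
        print(i, round(exact, 5), round(approx, 5))   # close for moderate n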
The contribution to the bias arising from the estimation of a
polynomial mean function is evaluated in Theorem 2.
Theorem 2. Assume that the conditions of Theorem 1 are satisfied and assume that

  $X_t = (1, t, \ldots, t^{r-1})$ ,  $t = 1, 2, \ldots$ .    (13)

Then

  $E\{\tilde a - \tilde d\} = -(n-p)^{-1} r\sigma^2\Big(\sum_{j=0}^{\infty}\omega_j\Big)J + O(n^{-2})$
                      $= -(n-p)^{-1} r\sigma^2\Big(1 - \sum_{j=1}^{p}\alpha_j\Big)^{-1}J + O(n^{-2})$ ,    (14)

where $J = (1, 1, \ldots, 1)'$ and the $\omega_j$ satisfy the set of homogeneous difference equations

  $\omega_j = \sum_{k=1}^{p}\alpha_k\omega_{j-k}$ ,  $j \geq 1$ ,

with $\omega_0 = 1$ and $\omega_j = 0$ for $j < 0$.
Proof. It is clear that the $X_t$ of (13) satisfies the conditions (4) and (5). Because the stationary process has the moving average representation $P_t = \sum_{j=0}^{\infty}\omega_j e_{t-j}$, the $(t,s)$-th element of the matrix $B_i = E\{eP_{(-i)}'\}$ of Theorem 1 is $\sigma^2\omega_{s-t-i}$; that is, $B_i$ is $\sigma^2$ times the matrix whose first row is $(0, \ldots, 0, \omega_0, \omega_1, \ldots, \omega_{n-i-p-1})$, whose second row is $(0, \ldots, 0, 0, \omega_0, \ldots, \omega_{n-i-p-2})$, and whose last $i$ rows are zero. Therefore,

  $\mathrm{tr}\{(X_{(-p)}'X_{(-p)})^{-1}X_{(-p)}'B_iX_{(-p)}\} = \sigma^2\sum_{j=0}^{n-i-p-1}\omega_j\Big(\sum_{t} Q_{t,t+i+j}\Big)$ ,

where $Q = X_{(-p)}(X_{(-p)}'X_{(-p)})^{-1}X_{(-p)}'$ and the inner sum is over $1 \leq t \leq n-p-i-j$. For the polynomial regressors (13), $\sum_{t} Q_{t,t+k} = r + O(kn^{-1})$ for each fixed $k$. Since $\sum_{j=0}^{\infty}\omega_j$ and $\sum_{j=0}^{\infty}j|\omega_j|$ are finite, we have that the $i$-th coordinate of $E\{\tilde a - \tilde d\}$ is

  $-(n-p)^{-1} r\sigma^2\Big(\sum_{j=0}^{\infty}\omega_j\Big) + O(n^{-2})$ .

Because the roots of equation (3) are less than one in absolute value, $\sum_{j=0}^{\infty}\omega_j = (1 - \sum_{j=1}^{p}\alpha_j)^{-1}$, and the second expression of (14) follows. $\square$
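
For $p = 2$, the constant in (14) combines with $H^{-1}$ to give the simple second-order form used later in equation (21). A quick numerical confirmation of this identity (ours, not from the paper):

    import numpy as np

    # For p = 2, sigma^2 (1 - a1 - a2)^(-1) H^(-1) (1, 1)' = (1 + a2)(1, 1)',
    # where H is the stationary covariance matrix of (P_{t-1}, P_{t-2}).
    a1, a2, sigma2 = 0.5, 0.3, 1.0
    gamma0 = (1 - a2) * sigma2 / ((1 + a2) * ((1 - a2) ** 2 - a1 ** 2))
    gamma1 = gamma0 * a1 / (1 - a2)
    H = np.array([[gamma0, gamma1], [gamma1, gamma0]])
    lhs = sigma2 / (1 - a1 - a2) * np.linalg.solve(H, np.ones(2))
    print(lhs, 1 + a2)   # both coordinates equal 1 + a2 = 1.3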
We now consider the bias in the second method of constructing an estimator of $\alpha$.

Theorem 3. Assume that the roots of equation (3) lie inside the unit circle, that the $Y_t$ satisfy model (1), and that $X_t$ satisfies conditions (4) and (5). Let $\hat\alpha$ be given by

  $\hat\alpha = (\hat F'\hat F)^{-1}\hat F'\hat P_{(0)}$ ,    (15)

where

  $\hat F' = (\hat F_p', \hat F_{p+1}', \ldots, \hat F_{n-1}')$ ,  $\hat F_{t-1} = (\hat P_{t-1}, \hat P_{t-2}, \ldots, \hat P_{t-p})$ ,

$\hat P_{(0)} = (\hat P_{p+1}, \ldots, \hat P_n)'$, $\hat P_t = Y_t - X_t\hat\beta$, and $\hat\beta$ is defined in (6). Then

  $E\{\hat\alpha - \tilde\alpha\} = O(n^{-2})$ .
Proof. For $i = 0, 1, \ldots, p$ we can write

  $\hat P_{t-i} = \tilde P_{t(-i)} - X_{t-i}(\hat\beta - \tilde\beta_{(i)})$ ,

where $\tilde P_{t(-i)}$ is the $t$-th element of the vector $\tilde P_{(-i)}$ defined in (12) and $\tilde\beta_{(i)}$ is the least squares estimator of $\beta$ computed from the observations with subscripts $p+1-i, \ldots, n-i$. It follows that

  $D_n(\hat\beta - \tilde\beta_{(i)}) = [D_n(X'X)^{-1}D_n]D_n^{-1}X'P - [D_n(X_{(-i)}'X_{(-i)})^{-1}D_n]D_n^{-1}X_{(-i)}'P_{(-i)}$ .

Observe that

  $D_n^{-1}X'P - D_n^{-1}X_{(-i)}'P_{(-i)} = D_n^{-1}\Big[\sum_{t=1}^{p-i}X_t'P_t + \sum_{t=n-i+1}^{n}X_t'P_t\Big]$ ,

which, by condition (4), is $O_p(n^{-1/2})$, and that, because $X_{(-i)} = X_{(-p)}C_{p-i}$ with $C_m = C^m$ by assumption (5),

  $D_n(X'X)^{-1}D_n - D_n(X_{(-i)}'X_{(-i)})^{-1}D_n = O(n^{-1})$ .

Therefore $D_n(\hat\beta - \tilde\beta_{(i)}) = O_p(n^{-1/2})$ for fixed $i$, and each element of the normalized cross products entering $\hat\alpha - \tilde\alpha$ is a sum of products of terms that are $O_p(n^{-1/2})$, so that $\hat\alpha - \tilde\alpha = O_p(n^{-1})$ with zero-mean leading terms. It can be shown that the expectations of powers of the elements of the inverse cross-product matrices are bounded; see Fuller and Hasza (1981). Therefore, since $\tilde\alpha$ and $\hat\alpha$ are square integrable, we have $E\{\hat\alpha - \tilde\alpha\} = O(n^{-2})$. $\square$
15
Note that for
all equal to
!t given in (13), the coordinates of
(n-p)
-1
_~
ra-(l - t
p'
-1
Cl )
j-l j
obtained an analogous expression for
o ' i 1 < i 2 < ••• <1r
are integers.
E[!
~ -~]
are
+ O(n-2 ) • Pantula (1982)
X
• (t
flOt
il,
t
i2
, ••• , t
ir
) , where
Theorem 2 also suggests that it is possible to isolate the effect of estimating the mean function by transforming the model (1). For $p \geq 2$, consider the following reparameterization:

  $P_t = \delta_1 P_{t-1} + \sum_{j=2}^{p}\delta_j(P_{t-j+1} - P_{t-j}) + e_t$ ,    (16)

where

  $\alpha_1 = \delta_1 + \delta_2$ ,  $\alpha_j = \delta_{j+1} - \delta_j$ , $j = 2, \ldots, p-1$ ,  $\alpha_p = -\delta_p$ .    (17)

Therefore, $\delta_1 = \sum_{j=1}^{p}\alpha_j$, and $\delta_1$ is equal to one if and only if one of the roots of the equation (3) is one. The least squares estimator $\hat\delta$ of $\delta$ is obtained by regressing $\hat P_t$ on $\hat P_{t-1}$, $(\hat P_{t-1} - \hat P_{t-2})$, $(\hat P_{t-2} - \hat P_{t-3})$, ..., $(\hat P_{t-p+1} - \hat P_{t-p})$, where $\hat P_t = Y_t - X_t\hat\beta$ and $\hat\beta$ is given by (6). From Theorem 2, it follows that an approximate expression for the bias in $\hat\delta$ arising from estimating the mean function is

  $-G^{-1}g$ ,    (18)

where

  $g = r\sigma^2(n-p)^{-1}(1-\delta_1)^{-1}(1, 0, 0, \ldots, 0)'$ ,

  $G = E\Big\{(n-p)^{-1}\sum_{t=p+1}^{n} W_{t-1}'W_{t-1}\Big\}$ ,

and $W_{t-1} = (P_{t-1}, P_{t-1} - P_{t-2}, \ldots, P_{t-p+1} - P_{t-p})$.
The expression in (14) suggests the following method of correcting for the bias due to estimating the mean function:

(a) Obtain the least squares estimator $\hat\delta = \hat G^{-1}\hat g$ of $\delta$, where

  $\hat G = (n-p)^{-1}\sum_{t=p+1}^{n}\hat W_{t-1}'\hat W_{t-1}$ ,  $\hat g = (n-p)^{-1}\sum_{t=p+1}^{n}\hat W_{t-1}'\hat P_t$ ,

and $\hat W_{t-1} = (\hat P_{t-1}, \hat P_{t-1} - \hat P_{t-2}, \ldots, \hat P_{t-p+1} - \hat P_{t-p})$. Modify the estimator to incorporate the condition that $\delta_1 \leq 1$; the modified estimator is

  $\hat\delta^* = \hat\delta + \hat G^{-1}\hat h$ ,

with

  $\hat h = (\hat h_1, 0, \ldots, 0)'$ , if $p \geq 2$ ,
      $= \hat h_1$ , if $p = 1$ ,

where $\hat h_1 = 0$ if $\hat\delta_1 < 1$, $\hat h_1 = (g^{11})^{-1}(1 - \hat\delta_1)$ otherwise, and $g^{11}$ is the upper left element of $\hat G^{-1}$.

(b) Obtain the adjusted least squares estimator as

  $\hat\delta^+ = \hat\delta^* + \hat G^{-1}\hat f$ ,

where

  $\hat f = (\hat f_1, 0, 0, \ldots, 0)'$ , if $p \geq 2$ ,
      $= \hat f_1$ , if $p = 1$ ,

and

  $\hat f_1 = 0$ , if $\hat\delta_1^* = 1$ ,
       $= [(n-p)(1 - \hat\delta_1^*)]^{-1} r\hat\sigma^2$ , otherwise,

where $\hat\sigma^2$ is the residual mean square error of the regression in (a).

(c) Obtain the corresponding estimators $\hat\alpha^*$ and $\hat\alpha^+$ using the relations in (17).
The estimator $\hat\alpha^+$ of $\alpha$ is relatively easy to construct and, hence, is of practical importance; a schematic implementation of steps (a)-(c) is sketched below. A Monte Carlo study of the estimators $\hat\alpha^*$ and $\hat\alpha^+$ is presented in the next section.
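
The following sketch (ours, in Python) implements steps (a)-(c) for the polynomial mean case. The function name is an assumption of the illustration, and the update $\hat\delta^+ = \hat\delta^* + \hat G^{-1}\hat f$ follows the reading of step (b) adopted above.

    import numpy as np

    def adjusted_estimator(Y, X, p, r):
        """Steps (a)-(c): estimate delta from the reparameterized
        regression (16), shift it by the approximate mean-estimation
        bias, and map back to alpha via (17)."""
        n = len(Y)
        P = Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]   # residuals P-hat
        lag = lambda i: P[p - i: n - i]
        # Regressors: P_{t-1} and the differences (P_{t-j+1} - P_{t-j}).
        W = np.column_stack([lag(1)] + [lag(j - 1) - lag(j) for j in range(2, p + 1)])
        G = W.T @ W / (n - p)
        delta = np.linalg.solve(G, W.T @ P[p:] / (n - p))
        s2 = np.sum((P[p:] - W @ delta) ** 2) / (n - p)    # residual mean square
        Ginv = np.linalg.inv(G)
        if delta[0] >= 1.0:
            # (a): pull delta_1 back to one along the first column of G^{-1}.
            delta = delta + Ginv[:, 0] * ((1.0 - delta[0]) / Ginv[0, 0])
        else:
            # (b): add the approximate bias G^{-1} f, f1 = r s2 / ((n-p)(1-delta_1)).
            delta = delta + Ginv[:, 0] * (r * s2 / ((n - p) * (1.0 - delta[0])))
        # (c): alpha_1 = delta_1 + delta_2, alpha_j = delta_{j+1} - delta_j,
        #      alpha_p = -delta_p (alpha_1 = delta_1 when p = 1).
        if p == 1:
            alpha = delta.copy()
        else:
            alpha = np.empty(p)
            alpha[0] = delta[0] + delta[1]
            for j in range(1, p - 1):
                alpha[j] = delta[j + 1] - delta[j]
            alpha[p - 1] = -delta[p - 1]
        return alpha, delta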
3. Monte Carlo Study
Estimators for a second-order autoregressive process with constant mean, and a second-order autoregressive process with mean function linear in time, are studied. Samples of size 25 and 50 were generated for 18 sets of parameter values $(\alpha_1, \alpha_2)$. The $(\alpha_1, \alpha_2)$ values and the roots of the characteristic equation are given in Table 1. Independent normal (0, 1) random variables $e_t$ were generated for the processes. For the processes with one of the roots equal to one, the initial observations $Y_1, Y_2$ are set equal to zero. For the other parameter values, $Y_1$ and $Y_2$ are generated from the stationary distribution of the process, where

  $\gamma(0) = [(1 + \alpha_2)(1 - \alpha_1 - \alpha_2)(1 + \alpha_1 - \alpha_2)]^{-1}(1 - \alpha_2)$

is the stationary variance, and the remaining observations are given by the autoregressive recursion with the generated $e_t$. For each $(\alpha_1, \alpha_2, n)$ combination, various point estimates are computed using the same set of observations. This is repeated for 1,000 sets of observations. Method two is used to estimate $\beta$ throughout the study. Sample biases, variances and mean square errors for each estimator are obtained by averaging over the 1,000 replications.
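
For reference (ours, not from the paper), the design just described can be reproduced schematically as follows. It reuses the adjusted_estimator sketch above; the start-up of $(Y_1, Y_2)$ is simplified to independent draws with the stationary variance, whereas the paper draws them from the exact stationary distribution.

    import numpy as np

    rng = np.random.default_rng(0)
    a1, a2, n, reps = 0.5, 0.1, 25, 1000     # one illustrative combination
    gamma0 = (1 - a2) / ((1 + a2) * (1 - a1 - a2) * (1 + a1 - a2))
    est = []
    for _ in range(reps):
        Y = np.zeros(n)
        Y[:2] = rng.normal(0.0, np.sqrt(gamma0), size=2)   # crude start-up
        e = rng.normal(size=n)
        for t in range(2, n):
            Y[t] = a1 * Y[t - 1] + a2 * Y[t - 2] + e[t]
        X = np.ones((n, 1))                  # constant mean model, r = 1
        alpha_plus, _ = adjusted_estimator(Y, X, p=2, r=1)
        est.append(alpha_plus)
    bias = np.mean(est, axis=0) - np.array([a1, a2])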
For a second-order autoregressive process, the approximate bias arising from estimating the mean function is

  $H^{-1}E\{\tilde a - \tilde d\} = -r(1 + \alpha_2)(n-2)^{-1}(1, 1)' + O(n^{-2})$ .    (21)

For the sample mean model $r = 1$ and for the time trend model $r = 2$. For the mean model, Salem (1971) and Lee (1981) used (21) to adjust for the bias arising from estimating the unknown mean.
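
As a worked instance of (21) (ours): for the constant mean model with $n = 25$ and $\alpha_2 = 0.50$, the approximate bias is $-1.5/23 \approx -0.065$ in each coordinate, which may be compared with the set 4 rows of Table 2.

    def bias21(alpha2, n, r):
        """Approximate per-coordinate mean-estimation bias (21) for an AR(2)."""
        return -r * (1 + alpha2) / (n - 2)

    print(bias21(0.50, 25, 1))   # -0.0652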
Table 2 contains the empirical biases of the estimators of $\alpha_2$ and $\delta_1$ for the constant mean model. For positive values of $\delta_1$, the least squares estimator $\hat\delta_1^*$ has larger absolute bias than the adjusted estimator $\hat\delta_1^+$. Unless both roots are negative, $\hat\delta_1^+$ underestimates $\delta_1$. The difference between the empirical biases of $\hat\delta_1^*$ and $\hat\delta_1^+$ is close to the theoretical difference $2(n-2)^{-1}(1+\alpha_2)$. This is illustrated in Figure 1. The adjusted estimator $\hat\delta_1^+$ also has smaller absolute bias than the least squares estimator $\hat\delta_1^*$, except when both roots are negative. For most of the parameter values considered, $\hat\alpha_2^+$ has smaller absolute bias than the least squares estimator $\hat\alpha_2^*$. If $\alpha_2$ is close to negative one, then the bias in the least squares estimator of $\alpha_2$ arising from estimating the unknown mean is close to zero and the other source of bias is positive.
Table 3 contains the empirical mean square errors of the two estimators of $\alpha_2$ and of the two estimators of $\delta_1$. The mean square errors of $\hat\delta_1^+$ are less than those of $\hat\delta_1^*$ for positive values of $\delta_1$, even for sample size 50. The mean square error of $\hat\delta_1^+$ is as much as 30 percent less than that of $\hat\delta_1^*$ for positive values of $\delta_1$. When both roots are negative or complex, the mean square error of $\hat\delta_1^+$ is larger than that of $\hat\delta_1^*$; the ratio of the mean square error of $\hat\delta_1^+$ to that of $\hat\delta_1^*$ is largest for the values that correspond to complex roots. For all positive values of $\delta_1$, $\hat\alpha_2^+$ has smaller mean square error than $\hat\alpha_2^*$.
Table 4 contains the empirical biases of the estimators of $\alpha_2$ and $\delta_1$ for the model with an estimated time trend. Except when both roots are negative or complex, $\hat\delta_1^+$ has smaller absolute bias than $\hat\delta_1^*$. The differences in the biases of the estimators $\hat\alpha_2^+$ and $\hat\alpha_2^*$, and of $\hat\delta_1^+$ and $\hat\delta_1^*$, are very close to $2(n-2)^{-1}(1+\alpha_2)$ and $4(n-2)^{-1}(1+\alpha_2)$, respectively. Therefore, for positive values of $\alpha_2$ the reduction in the bias of the least squares estimator is large. When $\alpha_2$ is negative, $\hat\delta_1^+$ has larger absolute bias than $\hat\delta_1^*$. When the roots are complex, $\hat\delta_1^+$ underestimates the true value of $\delta_1$, whereas $\hat\delta_1^*$ overestimates the true value.

The empirical mean square errors for the time trend model are given in Table 5. For positive values of $\delta_1$, $\hat\delta_1^+$ has smaller mean square error than $\hat\delta_1^*$. When $\alpha_2$ is negative and the roots lie inside the unit circle, the mean square error of $\hat\delta_1^+$ is larger than that of $\hat\delta_1^*$. The gain in mean square error over $\hat\delta_1^*$ is as much as 50 percent, and for values of $\delta_1$ greater than 0.5 the reduction in mean square error is about 40 percent. Generally, $\hat\alpha_2^+$ has smaller mean square error than $\hat\alpha_2^*$. If $\alpha_2$ is negative, then $\hat\alpha_2^*$ has smaller mean square error than $\hat\alpha_2^+$.
Generally speaking, the adjustment produces large gains in mean square error for those parameter configurations that originally had large mean square errors, and produces losses for those parameters that originally had small mean square errors. Therefore, the mean square error function for the adjusted estimator is much flatter than the corresponding function for the least squares estimator.
Summary
For an autoregressive process with mean function a polynomial in time, the bias in the least squares estimator arising from estimating the mean function has a relatively simple form. This form can be used to correct the estimator for the bias arising from estimating the mean function. The Monte Carlo study demonstrates that the mean square error of the adjusted estimator is smaller than that of the least squares estimator for a wide range of parameter values. The mean square error is reduced by about 40 percent by the adjustment for $n = 50$ when $\delta_1$ is greater than 0.5.
Table 1. The roots of the characteristic equation and the parameter values.

  Set                            Values of the parameters
  No.   Roots $m_1$, $m_2$    $\alpha_1$   $\alpha_2$   $\delta_1 = \alpha_1 + \alpha_2$
   1     1.0,  0.8               1.8        -0.80         1.00
   2     1.0,  0.5               1.5        -0.50         1.00
   3     1.0,  0.0               1.0         0.00         1.00
   4     1.0, -0.5               0.5         0.50         1.00
   5     1.0, -0.8               0.2         0.80         1.00
   6     0.8,  0.5               1.3        -0.40         0.90
   7     0.8,  0.1               0.9        -0.08         0.82
   8     0.8, -0.5               0.3         0.40         0.70
   9     0.8, -0.8               0.0         0.64         0.64
  10     0.5,  0.0               0.5         0.00         0.50
  11     0.5, -0.5               0.0         0.25         0.25
  12     0.5, -0.8              -0.3         0.40         0.10
  13     0.1, -0.5              -0.4         0.05        -0.35
  14     0.1, -0.8              -0.7         0.08        -0.62
  15    -0.5, -0.8              -1.3        -0.40        -1.70
  16     0.7 +/- 0.7i            1.4        -0.98         0.42
  17     0.2 +/- 0.8i            0.4        -0.68        -0.28
  18    -0.7 +/- 0.7i           -1.4        -0.98        -2.38
Table 2. Empirical bias of various estimators for the mean model.

                                Bias of the estimators
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 25
   1     -0.80      0.153    0.169   -0.070   -0.048
   2     -0.50      0.069    0.092   -0.127   -0.086
   3      0.00     -0.061   -0.024   -0.223   -0.151
   4      0.50     -0.173   -0.119   -0.321   -0.216
   5      0.80     -0.230   -0.169   -0.363   -0.243
   6     -0.40     -0.002    0.027   -0.120   -0.064
   7     -0.08     -0.055   -0.012   -0.164   -0.079
   8      0.40     -0.147   -0.084   -0.243   -0.117
   9      0.64     -0.192   -0.119   -0.278   -0.133
  10      0.00     -0.083   -0.035   -0.149   -0.054
  11      0.25     -0.120   -0.063   -0.173   -0.058
  12      0.40     -0.139   -0.075   -0.197   -0.069
  13      0.05     -0.088   -0.040   -0.115   -0.018
  14      0.08     -0.084   -0.034   -0.095    0.005
  15     -0.40     -0.009    0.021    0.021    0.081
  16     -0.98      0.023    0.026    0.000    0.005
  17     -0.68     -0.025    0.043   -0.008    0.027
  18     -0.98      0.023    0.025    0.042    0.047

  n = 50
   1     -0.80      0.078    0.083   -0.027   -0.018
   2     -0.50      0.037    0.046   -0.061   -0.042
   3      0.00     -0.025   -0.007   -0.109   -0.074
   4      0.50     -0.081   -0.055   -0.157   -0.106
   5      0.80     -0.119   -0.089   -0.185   -0.127
   6     -0.40     -0.006    0.007   -0.052   -0.025
   7     -0.08     -0.045   -0.025   -0.080   -0.041
   8      0.40     -0.076   -0.046   -0.128   -0.068
   9      0.64     -0.086   -0.051   -0.130   -0.061
  10      0.00     -0.045   -0.024   -0.072   -0.030
Table 2. (Continued)

                                Bias of the estimators
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 50
  11      0.25     -0.067   -0.041   -0.086   -0.036
  12      0.40     -0.065   -0.039   -0.092   -0.030
  13      0.05     -0.045   -0.022   -0.066   -0.021
  14      0.08     -0.052   -0.027   -0.054   -0.006
  15     -0.40     -0.010    0.002    0.014    0.038
  16     -0.98      0.009    0.010    0.004    0.006
  17     -0.68     -0.004    0.012    0.015    0.028
  18     -0.98      0.016    0.016    0.017    0.027
Table 3. Empirical mean square error (multiplied by ten) of various estimators for the mean model.

                         Mean square error (multiplied by ten)
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 25
   1     -0.80      0.642    0.671    0.105    0.069
   2     -0.50      0.467    0.499    0.282    0.178
   3      0.00      0.470    0.440    0.795    0.498
   4      0.50      0.792    0.642    1.709    1.110
   5      0.80      0.953    0.697    2.123    1.335
   6     -0.40      0.391    0.419    0.314    0.216
   7     -0.08      0.468    0.468    0.609    0.426
   8      0.40      0.615    0.508    1.280    0.898
   9      0.64      0.776    0.592    1.800    1.311
  10      0.00      0.463    0.454    0.796    0.681
  11      0.25      0.546    0.489    1.233    1.066
  12      0.40      0.660    0.576    1.736    1.554
  13      0.05      0.474    0.452    1.369    1.363
  14      0.08      0.489    0.475    1.588    1.652
  15     -0.40      0.372    0.412    1.468    1.670
  16     -0.98      0.061    0.065    0.025    0.027
  17     -0.68      0.235    0.269    0.381    0.424
  18     -0.98      0.060    0.065    0.212    0.229

  n = 50
   1     -0.80      0.204    0.205    0.013    0.009
   2     -0.50      0.192    0.199    0.063    0.041
   3      0.00      0.220    0.214    0.199    0.130
   4      0.50      0.254    0.220    0.410    0.264
   5      0.80      0.280    0.216    0.573    0.367
   6     -0.40      0.173    0.180    0.076    0.059
   7     -0.08      0.210    0.206    0.187    0.147
   8      0.40      0.255    0.229    0.470    0.372
   9      0.64      0.232    0.193    0.544    0.436
Table 3. (Continued)

                         Mean square error (multiplied by ten)
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 50
  10      0.00      0.217    0.206    0.298    0.264
  11      0.25      0.228    0.218    0.542    0.494
  12      0.40      0.247    0.239    0.718    0.638
  13      0.05      0.217    0.207    0.599    0.598
  14      0.08      0.211    0.200    0.675    0.709
  15     -0.40      0.174    0.182    0.603    0.719
  16     -0.98      0.020    0.023    0.008    0.008
  17     -0.68      0.115    0.122    0.170    0.178
  18     -0.98      0.019    0.023    0.074    0.074
Table 4. (Continued) Empirical bias of various estimators for the time trend model.

                                Bias of the estimators
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 50
  10      0.00     -0.069   -0.026   -0.118   -0.033
  11      0.25     -0.095   -0.042   -0.152   -0.047
  12      0.40     -0.098   -0.038   -0.153   -0.034
  13      0.05     -0.066   -0.020   -0.098   -0.008
  14      0.08     -0.062   -0.015   -0.089   -0.001
  15     -0.40     -0.014    0.007    0.026    0.076
  16     -0.98      0.022    0.022    0.005    0.005
  17     -0.68      0.018    0.031    0.001    0.028
  18     -0.98      0.016    0.020    0.055    0.060
Table 5. (Continued) Empirical mean square error (multiplied by ten) of various estimators for the time trend model.

                         Mean square error (multiplied by ten)
  Set    True
  No.    $\alpha_2$   $\hat\alpha_2^*$   $\hat\alpha_2^+$   $\hat\delta_1^*$   $\hat\delta_1^+$

  n = 50
  10      0.00      0.245    0.218    0.405    0.299
  11      0.25      0.290    0.237    0.682    0.517
  12      0.40      0.295    0.234    0.866    0.703
  13      0.05      0.257    0.246    0.720    0.683
  14      0.08      0.238    0.230    0.818    0.809
  15     -0.40      0.190    0.190    0.650    0.762
  16     -0.98      0.024    0.026    0.009    0.010
  17     -0.68      0.114    0.119    0.168    0.175
  18     -0.98      0.032    0.035    0.112    0.132
[Figure 1. Difference in biases ($n = 25$), mean model, $\delta_1$: the difference between the empirical biases of $\hat\delta_1^*$ and $\hat\delta_1^+$ plotted against $\alpha_2$ from $-1.0$ to $0.8$. The difference increases roughly linearly from about 0.025 to about 0.125, in agreement with the theoretical value $2(n-2)^{-1}(1+\alpha_2)$.]
References

Ansley, C. F., and P. Newbold. 1980. Finite sample properties of estimators for autoregressive moving average models. Journal of Econometrics 13: 159-183.

Bora-Senta, E., and S. Kounias. 1980. Parameter estimation and order determination of autoregressive models. Pages 93-108 in O. D. Anderson, ed. Analyzing Time Series. North-Holland, Amsterdam.

Fuller, W. A., and D. P. Hasza. 1981. Properties of predictors for autoregressive time series. Journal of the American Statistical Association 76: 155-161.

Fuller, W. A., D. P. Hasza, and J. J. Goebel. 1981. Estimation of the parameters of stochastic difference equations. The Annals of Statistics 9: 531-543.

Lee, H. E. 1981. Estimation of seasonal autoregressive time series. Unpublished Ph.D. thesis. Iowa State University, Ames, Iowa.

Marriott, F. H. C., and J. A. Pope. 1954. Bias in the estimation of autocorrelations. Biometrika 41: 390-402.

Orcutt, G. H., and H. S. Winokur. 1969. First order autoregression: inference, estimation, and prediction. Econometrica 37: 1-14.

Pantula, S. G. 1982. Properties of the estimators of the parameters of autoregressive time series. Unpublished Ph.D. thesis. Iowa State University, Ames, Iowa.

Rao, C. R. 1967. Least squares theory using an estimated dispersion matrix and its application to measurement of signals. Pages 355-372 in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. University of California Press, Berkeley.

Salem, A. S. 1971. Investigation of alternative estimators of the parameters of autoregressive processes. Unpublished M.S. thesis. Iowa State University, Ames, Iowa.