June 27, 1985

Estimation for Autoregressive Processes with Several Unit Roots

Sastry G. Pantula
North Carolina State University
Raleigh, NC 27695-8203

ABSTRACT

Let Y_t satisfy the stochastic difference equation Y_t = Σ_{j=1}^p α_j Y_{t-j} + e_t, for t = 1, 2, ..., where the e_t are independent and identically distributed random variables and the initial conditions (Y_{-p+1}, Y_{-p+2}, ..., Y_0) are fixed constants. It is assumed that the true, but unknown, roots m_1, m_2, ..., m_p of m^p - Σ_{j=1}^p α_j m^{p-j} = 0 satisfy m_1 = m_2 = ... = m_d = 1 and |m_j| < 1 for j = d+1, ..., p. We consider a reparameterization of the model for Y_t that is convenient for testing the hypothesis H_d: m_1 = m_2 = ... = m_d = 1 and |m_j| < 1 for j = d+1, ..., p. We consider likelihood ratio type "F-statistics" to test the hypothesis H_d. We characterize the asymptotic distributions of the "F-statistics" under various alternative hypotheses. Using these asymptotic results, we obtain a sequential testing criterion that is asymptotically consistent. Finally, estimated percentiles for the distributions of the test statistics are obtained for d ≤ 5 by Monte Carlo methods.

1. Introduction

Let the time series {Y_t} satisfy

    Y_t = Σ_{j=1}^p α_j Y_{t-j} + e_t ,   t = 1, 2, ... ,                  (1.1)

where {e_t}_{t=1}^∞ is a sequence of independent random variables with mean zero and variance σ². It is assumed either that the e_t are identically distributed or that E{|e_t|^{4+δ}} < M for some δ > 0 and all t. It is further assumed that the initial conditions (Y_{-p+1}, ..., Y_0) are known constants. For the statistics considered in this paper, without loss of generality we assume that σ² = 1 and Y_{-p+1} = ... = Y_0 = 0. The time series is said to be an autoregressive process of order p. Let

    m^p - Σ_{j=1}^p α_j m^{p-j} = 0                                        (1.2)

be the characteristic equation of the process. The roots of (1.2), denoted by m_1, m_2, ..., m_p, are the characteristic roots of the process. Let the observations Y_1, Y_2, ..., Y_n be available.
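The characteristic roots in (1.2) are the roots of an ordinary polynomial, so for a given coefficient vector they can be checked numerically. A minimal sketch (the AR(2) example values and the helper name are ours, not from the paper):

```python
import numpy as np

def char_roots(alpha):
    """Roots m_1, ..., m_p of the characteristic equation (1.2):
    m^p - alpha_1 m^(p-1) - ... - alpha_p = 0."""
    return np.roots([1.0] + [-a for a in alpha])

# AR(2) with one unit root and one stationary root:
# (1 - B)(1 - 0.5B) Y_t = e_t, i.e. alpha = (1.5, -0.5)
roots = sorted(abs(char_roots([1.5, -0.5])), reverse=True)
```

For alpha = (1.5, -0.5) the root moduli are 1 and 0.5, so this process satisfies the hypothesis H_1 of the abstract.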
It is assumed that α = (α_1, α_2, ..., α_p)' and σ² are unknown. The least squares estimator α̂ of α is obtained by regressing Y_t on Y_{t-1}, ..., Y_{t-p}. Lai and Wei (1983) established the strong consistency of α̂ under a very general set of conditions. If |m_j| < 1, j = 1, 2, ..., p, then Y_t converges to a weakly stationary process as t → ∞. In the stationary case, the asymptotic properties of α̂ and of related "F-type" likelihood ratio test statistics are well known. See Mann and Wald (1943), Anderson (1959) and Hannan and Heyde (1972).

Dickey (1976), Dickey and Fuller (1979), and Fuller (1979) have considered testing for a single unit root in a pth order autoregressive process (i.e., m_1 = 1 and |m_j| < 1 for j = 2, 3, ..., p). Hasza (1977) and Hasza and Fuller (1979) considered a pth order autoregressive process with m_1 = m_2 = 1 and |m_j| < 1 for j = 3, 4, ..., p. Hasza and Fuller (1979) characterized the distribution of an "F-type" statistic for testing m_1 = m_2 = 1. Dickey, Hasza and Fuller (1984) obtained similar results for testing for unit roots in seasonal time series. Sen (1985) considered the distribution of symmetric estimators and that of the estimated characteristic roots under the assumption m_1 = m_2 = 1.

With the exception of Sen (1985), no one has considered the distribution of the test statistics under the alternative hypotheses. For a second order process, Sen (1985) studied the asymptotic distribution of the regression type t-statistic for testing the hypothesis H_1: m_1 = 1, |m_2| < 1, under the assumption H_2: m_1 = m_2 = 1. He observed that the probability of rejecting the hypothesis H_1, in favor of the hypothesis H_0: |m_1| < 1, |m_2| < 1, is higher under H_2 than under H_1. That is, it is more likely that we conclude, incorrectly, that the process is stationary when there are really two unit roots present. We propose "F-type" statistics for testing the hypothesis H_d: m_1 = m_2 = ... = m_d = 1 and |m_j| < 1 for j = d+1, ..., p.
We characterize the asymptotic distribution of the "F-type" statistics under the hypotheses H_d, d = 0, 1, ..., p. We use these asymptotic distributions to obtain an asymptotically consistent sequential procedure to test H_d versus H_{d-1}. The procedure is very general and it successfully answers the question "how many times should one difference the series to attain stationarity?"

In Section 2 we present the notation and the main results. In Section 3 we present the estimated percentiles of the "F-type" statistics and some possible extensions. We present an example in Section 4. The proofs of the results are presented in the Appendix.

2. Notation and Main Results

We consider the following reparameterization of the model (1.1),

    Y_{p,t} = Σ_{i=1}^p β_i Y_{i-1,t-1} + e_t ,                            (2.1)

where Y_{i,t} = (1-B)^i Y_t is the ith difference of Y_t and B is the backshift operator. After expanding (1-B)^i in powers of B, if we compare (2.1) with (1.1), we get

    β_i = (-1)^{i-1} Σ_{j=i}^p C(j-1, i-1) α_j - 1                         (2.2)

for i = 1, 2, ..., p, where C(a, b) denotes the binomial coefficient. Therefore, we can write

    β = Tα + c ,                                                           (2.3)

where T is an upper triangular matrix with T_{ij} = (-1)^{i-1} C(j-1, i-1) for j ≥ i, and c = (c_1, c_2, ..., c_p)' with c_i = -1. Since the diagonal elements of T are ±1, T is nonsingular and we can also obtain α from β.

It is easy to see that α_{r+1} = α_{r+2} = ... = α_p = 0 if and only if β_{r+1} = β_{r+2} = ... = β_p = -1. So if one would like to test whether the order of the process is r instead of p, one could use the F-test for testing β_{r+1} = β_{r+2} = ... = β_p = -1 in the regression (2.1). The main purpose of this paper, however, is to test the hypothesis that there are exactly d unit roots. So, we assume that we "know" the order p of the process. (For all our results, all we need to assume is that the order of the process is less than or equal to p.) Now, suppose m_1 = m_2 = ... = m_d = 1 and |m_j| < 1 for j = d+1, ..., p.
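The map (2.3) can be verified mechanically: expand both operator polynomials and compare coefficients of B. The sketch below does this for p = 3 (the helper names and the test value of α are ours, not the paper's):

```python
from math import comb

def beta_from_alpha(alpha):
    """beta = T alpha + c of (2.3): T_ij = (-1)^(i-1) C(j-1, i-1) for j >= i,
    and c_i = -1."""
    p = len(alpha)
    return [sum((-1) ** (i - 1) * comb(j - 1, i - 1) * alpha[j - 1]
                for j in range(i, p + 1)) - 1
            for i in range(1, p + 1)]

def poly_mul(a, b):
    """Multiply two polynomials in B given as ascending coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def operator_poly(beta):
    """Coefficients of (1-B)^p - sum_i beta_i * B * (1-B)^(i-1), ascending in B."""
    p = len(beta)
    powers = [[1.0]]                          # (1-B)^0, (1-B)^1, ..., (1-B)^p
    for _ in range(p):
        powers.append(poly_mul(powers[-1], [1.0, -1.0]))
    out = powers[p][:]
    for i in range(1, p + 1):
        for k, ck in enumerate(powers[i - 1]):
            out[k + 1] -= beta[i - 1] * ck    # subtract beta_i * B * (1-B)^(i-1)
    return out

alpha = [0.5, -0.3, 0.2]
beta = beta_from_alpha(alpha)
lhs = operator_poly(beta)                     # operator polynomial of (2.1)
rhs = [1.0] + [-a for a in alpha]             # operator polynomial of (1.1)
```

Both coefficient lists agree, and a double unit root maps to β_1 = β_2 = 0: beta_from_alpha([2.0, -1.0]) returns [0.0, 0.0], matching the characterization given below (2.8).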
Then we can write (1.1) as

    (1-B)^d ∏_{i=d+1}^p (1 - m_i B) Y_t = e_t .                            (2.4)

Define

    Z_t = (1-B)^d Y_t = Y_{d,t}   and   W_t = ∏_{i=d+1}^p (1 - m_i B) Y_t .   (2.5)

Then W_t is a dth order autoregressive process with d unit roots and Z_t is a (p-d)th order process with stationary characteristic roots. Note that ∏_{i=d+1}^p (1 - m_i B) Z_t = e_t, and we can write Z_t as

    Z_t = Σ_{j=d+1}^p θ_j Z_{t-j+d} + e_t                                  (2.6)

or as

    Z_{p-d,t} = Σ_{j=d+1}^p η_j (1-B)^{j-d-1} Z_{t-1} + e_t .              (2.7)

Since Z_t is (1-B)^d Y_t, we get

    Y_{p,t} = Σ_{j=d+1}^p η_j Y_{j-1,t-1} + e_t .                          (2.8)

Comparing (2.8) with (2.1), we get β_j = 0 for j = 1, ..., d and β_j = η_j for j = d+1, ..., p. Therefore, m_1 = ... = m_d = 1 if and only if β_1 = β_2 = ... = β_d = 0. The assumption that |m_j| < 1 for j = d+1, ..., p imposes extra conditions on the η_j. In particular, η_{d+1} = β_{d+1} < 0.

Let β̂ denote the ordinary least squares estimator of β obtained by regressing Y_{p,t} on Y_{0,t-1}, Y_{1,t-1}, ..., Y_{p-1,t-1}. That is,

    β̂ = (Φ'Φ)^{-1} Φ' Y_{(p)} ,                                           (2.9)

where Φ_t = (Y_{0,t-1}, Y_{1,t-1}, ..., Y_{p-1,t-1})', Φ = (Φ_1, Φ_2, ..., Φ_n)' and Y_{(p)} = (Y_{p,1}, Y_{p,2}, ..., Y_{p,n})'. Define the "F-statistic" for testing β_1 = β_2 = ... = β_i = 0 by

    F_{i,n}(p) = [i(MSE)_n]^{-1} β̂_{(i)}' [C^{(i)}]^{-1} β̂_{(i)} ,        (2.10)

where C^{(i)} is the (i×i) submatrix consisting of the first i rows and i columns of (Φ'Φ)^{-1}, β̂_{(i)} = (β̂_1, β̂_2, ..., β̂_i)', and

    (MSE)_n = mean square error of the regression = (n-p)^{-1} Σ_{t=1}^n [Y_{p,t} - β̂'Φ_t]² .

We use H_d to denote the hypothesis

    H_d: m_1 = m_2 = ... = m_d = 1 and |m_j| < 1 for j = d+1, d+2, ..., p.

We now present the asymptotic properties of F_{i,n}(p) under H_d.

Theorem 1: Assume that Y_t is defined by (1.1). Then, under the hypothesis H_d,

    F_{i,n}(p) →_D F_i(d) if i ≤ d ,   and   F_{i,n}(p) →_P ∞ if i > d ,   (2.11)

where F_i(d) = i^{-1} g*_{(i),d}' [G_d^{(ii)}]^{-1} g*_{(i),d}, and the elements of the vector g*_{(i),d} and the matrix G_d^{(ii)} are functions of g_d and G_d defined in (A.9). The elements of g_d and G_d are functions of weighted chi-squares and normal variables. (See Lemma 4 of the Appendix.)
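The quantities in (2.9) and (2.10) are ordinary regression quantities, so they can be computed directly from a series. A sketch, assuming a simulated AR(2) with one unit root (the function name, the seed, and the degrees-of-freedom convention are ours):

```python
import numpy as np

def F_stats(y, p):
    """beta-hat of (2.9) and F_{i,n}(p), i = 1..p, of (2.10): regress
    Y_{p,t} on Y_{0,t-1}, ..., Y_{p-1,t-1}, where Y_{i,t} = (1-B)^i Y_t."""
    y = np.asarray(y, dtype=float)
    diffs = [y]
    for _ in range(p):
        diffs.append(np.diff(diffs[-1]))
    t_range = range(p, len(y))
    X = np.array([[diffs[i - 1][t - i] for i in range(1, p + 1)] for t in t_range])
    z = np.array([diffs[p][t - p] for t in t_range])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ (X.T @ z)
    resid = z - X @ beta_hat
    mse = resid @ resid / (len(z) - p)
    F = [float(beta_hat[:i] @ np.linalg.inv(XtX_inv[:i, :i]) @ beta_hat[:i]
               / (i * mse)) for i in range(1, p + 1)]
    return beta_hat, F

# (1 - B)(1 - 0.5B) Y_t = e_t, i.e. alpha = (1.5, -0.5); here beta = (0, -0.5)
rng = np.random.default_rng(0)
n = 400
e = rng.standard_normal(n)
y = np.zeros(n + 2)                 # two zero initial conditions
for t in range(n):
    y[t + 2] = 1.5 * y[t + 1] - 0.5 * y[t] + e[t]
beta_hat, F = F_stats(y[2:], 2)
```

Under H_1, β_1 = 0 while β_2 = -0.5 ≠ 0, so F_{2,n}(2) is large (it diverges under H_1, as Theorem 1 predicts), and β̂_2 lands near -0.5.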
Notice that the limiting distribution of F_{i,n}(p) is independent of the order p. Let F_d(d,a) denote the 100(1-a)% percentile of the distribution of F_d(d). Since, under H_d, F_{i,n}(p) converges in probability to infinity for i > d, we suggest the following criterion: Reject H_d in favor of H_{d-1} if F_{i,n}(p) > F_i(i,a) for i = d, d+1, ..., p.

Note that

    lim_{n→∞} P_{H_d}[Rejecting H_d]
        = lim_{n→∞} P_{H_d}[F_{d,n}(p) > F_d(d,a), F_{d+1,n}(p) > F_{d+1}(d+1,a), ..., F_{p,n}(p) > F_p(p,a)]
        = a .

Also, for j > d,

    lim_{n→∞} P_{H_j}[Rejecting H_d] ≤ a ,

and for j < d,

    lim_{n→∞} P_{H_j}[Rejecting H_d] = 1 .

That is, if we use our sequential procedure, asymptotically the chance that we reject the hypothesis that there are exactly d unit roots in favor of the hypothesis that there are exactly d-1 unit roots is (i) a, if there are exactly d unit roots, (ii) less than or equal to a, if there are more than d unit roots present, and (iii) one, if there are less than d unit roots. In this sense, our procedure is asymptotically consistent.

Consider, for example, a second order process. Suppose we wish to test the hypothesis that there is one unit root. A common practice is to use the t-statistic ("equivalently" F_{1,n}(2)) for testing β_1 = 0. Dickey and Fuller (1979) presented the empirical percentiles for the distribution of the t-statistic under the hypothesis H_1: m_1 = 1 and |m_2| < 1. Note that rejecting H_1 may mean that |m_1| < 1, |m_2| < 1, or that m_1 = m_2 = 1. (We are excluding the possibility of |m_i| > 1 or m_i = -1.) However, in practice, the rejection of H_1 is misconstrued as the former. This is because "intuitively" we hope that if there are two unit roots then the test for one unit root will indicate that there is a unit root. Sen (1985), using simulation methods, has shown that a level a test based on the t-statistic rejected H_1 more than 100a% of the time when there are in fact two unit roots.
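The sequential criterion above is a simple loop over d = p, p-1, ...: the estimated number of unit roots is the first d at which H_d is not rejected. A sketch (the function name is ours; the critical values are the asymptotic 5% points F_1(1, .05) = 4.13 and F_2(2, .05) = 3.47 from Table 3.1):

```python
def estimate_d(F, crit):
    """Sequential criterion of Section 2.
    F[i-1]    = observed F_{i,n}(p), i = 1..p
    crit[i-1] = percentile F_i(i, a)
    H_d is rejected in favor of H_{d-1} iff F_{i,n}(p) > F_i(i,a) for all i = d..p."""
    p = len(F)
    d = p
    while d >= 1 and all(F[i - 1] > crit[i - 1] for i in range(d, p + 1)):
        d -= 1
    return d  # estimated number of unit roots

crit = [4.13, 3.47]            # asymptotic 5% points from Table 3.1
d_a = estimate_d([5.00, 3.20], crit)   # H_2 not rejected: two unit roots
d_b = estimate_d([5.00, 4.00], crit)   # both rejected: stationary
d_c = estimate_d([3.00, 4.00], crit)   # H_2 rejected, H_1 retained: one unit root
```

Note that a large F_{1,n} alone (third case) does not lead to a declaration of stationarity; the rejection of H_1 also requires F_{2,n} to exceed its critical point, which is exactly the safeguard the procedure provides.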
Our criterion takes into account that exactly d of the roots are equal to one and the remaining are less than one in modulus, and tests the hypotheses sequentially. However, if one is "sure" that there are at most s unit roots (e.g., say s = 5), then one may modify the criterion for testing H_d (d ≤ s) to "reject H_d in favor of H_{d-1} if F_{i,n}(p) > F_i(i,a) for i = d, d+1, ..., s."

In practical applications it is common to include an intercept term in the model (2.1), i.e., to consider

    Y_{p,t} = β_0 + Σ_{i=1}^p β_i Y_{i-1,t-1} + e_t .                      (2.12)

Let β̂^{(+)} denote the ordinary least squares estimator of β^{(+)} = (β_0, β_1, ..., β_p)' obtained by regressing Y_{p,t} on 1, Y_{0,t-1}, Y_{1,t-1}, ..., Y_{p-1,t-1}. Denote the regression "F-statistic" for testing β_0 = β_1 = ... = β_i = 0 in the regression (2.12) by F^{(1)}_{i,n}(p). Similarly, let F^{(2)}_{i,n}(p) denote the "F-statistic" for testing β_1 = ... = β_i = 0. We now present the asymptotic properties of F^{(1)}_{i,n}(p) and F^{(2)}_{i,n}(p) under the hypothesis H_d.

Theorem 2: Assume that Y_t is defined by (1.1). Then, under the hypothesis H_d,

    F^{(j)}_{i,n}(p) →_D F^{(j)}_i(d) if i ≤ d ,   and   F^{(j)}_{i,n}(p) →_P ∞ if i > d ,

for j = 1, 2, where the elements of the vectors b*_{(i),d} and b**_{(i),d} and those of the matrices H^{(ii)}_{d,1} and H^{(ii)}_{d,2} appearing in F^{(1)}_i(d) and F^{(2)}_i(d) are functions of the vector b_d and the matrix H*_d defined in (A.11). The elements of b_d and H*_d are functions of weighted chi-square and normal variables. (See the proof of Theorem 2 in the Appendix.)

Therefore, an alternative criterion is to reject H_d in favor of H_{d-1} if F^{(1)}_{i,n}(p) > F^{(1)}_i(i,a) for i = d, d+1, ..., p. One may use F^{(2)}_{i,n}(p) in place of F^{(1)}_{i,n}(p) in the above criterion. However, based on the power studies of Hasza and Fuller (1979) and Dickey and Fuller (1981), we recommend the use of F^{(1)}_{i,n}(p) over F^{(2)}_{i,n}(p).
From the results of Hasza (1977), it follows that if Y_t satisfies (2.12) with β_0 ≠ 0, then under the hypothesis H_d, F^{(2)}_{d,n}(p) converges in distribution to d^{-1}χ²_d, where χ²_d denotes a chi-square random variable with d degrees of freedom.

3. Simulation for d ≤ 5

We first present the elements of G_5 and g_5, which will be used in obtaining the empirical percentiles of the random variables F_i(d), F^{(1)}_i(d) and F^{(2)}_i(d). From (A.9) and the results of Lemma 1, we get (upper triangle shown; G_5 is symmetric)

    G_5 =
      | Q_5   ½N_5²   N_4N_5 - Q_4   N_3N_5 - ½N_4²   N_2N_5 - N_3N_4 + Q_3 |
      |        Q_4        ½N_4²        N_3N_4 - Q_3      N_2N_4 - ½N_3²     |
      |                    Q_3            ½N_3²           N_2N_3 - Q_2      |
      |                                    Q_2                ½N_2²         |
      |                                                        Q_1          |

and

    g_5 = ( N_1N_5 - N_2N_4 + ½N_3²,  N_1N_4 - N_2N_3 + Q_2,  N_1N_3 - ½N_2²,  N_1N_2 - Q_1,  ½(N_1² - 1) )' ,

where, with γ_i = 2[(2i-1)π]^{-1}(-1)^{i+1} and {V_i} a sequence of NID(0,1) random variables,

    Q*_j = Σ_{i=1}^∞ γ_i^{2j} V_i² ,  j ≥ 1 ,
    N_1 = 2^{½} Σ_{i=1}^∞ γ_i V_i ,   N_2 = 2^{½} Σ_{i=1}^∞ γ_i² V_i ,

and N_3, N_4, N_5 and the auxiliary variables N*_j are normal random variables of the same form, with weights that are polynomials in the γ_i. Here Q_1 = Q*_1, and Q_2, Q_3, Q_4 and Q_5 are linear combinations of the Q*_j and of squares and cross products of the N_j and N*_j; the exact expressions follow from Lemma 1. Also, the matrix H*_5 and the vector h_5 of (A.11) are obtained by bordering G_5 and g_5 with the vector (N_6, N_5, ..., N_2)', where N_6 is one further weighted normal variable of the same form. All of the F and the F^{(j)} statistics can be computed using the elements of H*_5 and h_5.

Estimates of the percentiles of the F-statistics are presented in Tables 3.1, 3.2 and 3.3. For the finite sample sizes, the F-statistics were computed for samples generated using the model (1.1), with e_t ~ NID(0,1) and Y_0 = Y_{-1} = ... = Y_{-4} = 0. To estimate the percentiles of the limiting distributions of the test statistics we use the method of simulation employed by Dickey (1976) and Hasza (1977).
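For finite n, the null distribution described above can also be approximated by brute-force Monte Carlo in the simplest case d = p = 1: simulate a random walk, compute F_{1,n}(1) from the no-intercept regression of (1-B)Y_t on Y_{t-1}, and take empirical quantiles. A sketch (the replication count and seed are ours; the paper itself uses fifty thousand replicates, and a truncated-series method for the limiting case):

```python
import numpy as np

def F_1n(y):
    """F_{1,n}(1): squared t-ratio from regressing (1-B)Y_t on Y_{t-1}."""
    dy, x = np.diff(y), y[:-1]
    b = (x @ dy) / (x @ x)
    resid = dy - b * x
    mse = resid @ resid / (len(dy) - 1)
    return b * b * (x @ x) / mse

rng = np.random.default_rng(1985)
n, reps = 100, 4000
stats = [F_1n(np.cumsum(rng.standard_normal(n))) for _ in range(reps)]
q90, q95 = np.quantile(stats, [0.90, 0.95])
```

Table 3.1 reports 2.99 and 4.18 for the 0.90 and 0.95 points at n = 100; the Monte Carlo estimates here land nearby, up to simulation error.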
Briefly, the method consists of approximating the sequence {γ_i}_{i=1}^∞ by a finite sequence. Using Monte Carlo methods, sample distribution functions of the statistics are generated by appropriately truncating the infinite series in the definitions of the N_j's, N*_j's and Q*_j's. The percentiles were "smoothed" by fitting a regression function to the original set of estimates. The estimated standard errors of the estimated percentiles are generally less than 0.90 percent of the table entry for the limiting distributions, and generally less than 1.50 percent of the table entry for the finite sample cases. Fifty thousand independent sample statistics were used in constructing the percentiles for the asymptotic distribution.

In our discussion, we assumed that the initial conditions are fixed. The limiting distribution does not depend on the initial conditions (Y_{-p+1}, ..., Y_0), but these values will influence the small sample distribution. The criterion we proposed in Section 2 for testing H_d involved F_d(d,a) rather than F_{d,n}(d,a), where F_{d,n}(d,a) denotes the 100(1-a)% empirical percentile of the statistic F_{d,n}(d). For small samples, one should use F_{d,n}(d,a) instead of F_d(d,a). The asymptotic results of Section 2 are still valid even though we replace F_i(i,a) by F_{i,n}(i,a). Therefore, our criterion is to reject H_d in favor of H_{d-1} if F_{i,n}(p) > F_{i,n}(i,a) for i = d, d+1, ..., p.

Consider, for example, a second order autoregressive process {Y_t}, and suppose a = 0.10. We regress Y_{2,t} = (1-B)²Y_t on Y_{0,t-1} = Y_{t-1} and Y_{1,t-1} = (1-B)Y_{t-1} = Y_{t-1} - Y_{t-2} to obtain β̂_1 and β̂_2, and compute the regression F-statistics F_{1,n}(2) and F_{2,n}(2) for testing the hypotheses (i) β_1 = 0 and (ii) β_1 = β_2 = 0, respectively. Hasza and Fuller (1979) suggest that one should reject H_2: two unit roots (in favor of H_1: exactly one unit root) if F_{2,n}(2) > 2.82. Also,
a common practice is to reject H_1 in favor of stationarity if F_{1,n}(2) > 3.01. Our criterion differs from the above in the sense that we reject H_1 in favor of H_0: no unit roots, only if F_{1,n}(2) > 3.01 and F_{2,n}(2) > 2.82. If we do not add the condition F_{2,n}(2) > 2.82, then with probability greater than 0.10 we will be concluding that the process is stationary when in fact it has two unit roots. However, if we are absolutely sure that there is at most one unit root, then we may use the criterion of rejecting H_1 in favor of H_0 if F_{1,n}(2) > 3.01.

TABLE 3.1  Empirical Percentiles for the F-Statistics

Statistic     n      0.50   0.80   0.90   0.95   0.975   0.99
F_{1,n}(1)    25     0.58   1.89   3.04   4.34   5.74    7.80
              50     0.59   1.89   3.01   4.23   5.54    7.38
             100     0.60   1.89   2.99   4.18   5.42    7.16
             250     0.60   1.89   2.98   4.15   5.35    7.02
             500     0.60   1.89   2.97   4.14   5.32    6.97
              ∞      0.61   1.88   2.96   4.13   5.28    6.91
F_{2,n}(2)    25     0.95   2.04   2.88   3.76   4.71    5.98
              50     0.97   2.02   2.82   3.62   4.45    5.59
             100     0.98   2.02   2.79   3.55   4.32    5.38
             250     0.98   2.01   2.77   3.50   4.24    5.23
             500     0.98   2.01   2.76   3.49   4.22    5.17
              ∞      0.99   2.01   2.75   3.47   4.19    5.10
F_{3,n}(3)    25     1.15   2.22   2.97   3.73   4.49    5.57
              50     1.18   2.20   2.88   3.55   4.21    5.11
             100     1.19   2.19   2.83   3.46   4.07    4.88
             250     1.20   2.18   2.81   3.41   3.99    4.75
             500     1.20   2.18   2.80   3.39   3.96    4.70
              ∞      1.20   2.17   2.80   3.39   3.94    4.66
F_{4,n}(4)    25     1.29   2.35   3.07   3.80   4.56    5.60
              50     1.32   2.31   2.95   3.56   4.17    4.97
             100     1.34   2.29   2.89   3.45   3.99    4.67
             250     1.35   2.28   2.86   3.39   3.88    4.51
             500     1.35   2.28   2.85   3.37   3.85    4.46
              ∞      1.35   2.28   2.84   3.35   3.84    4.46
F_{5,n}(5)    25     1.37   2.44   3.17   3.90   4.64    5.68
              50     1.41   2.38   3.02   3.60   4.16    4.93
             100     1.43   2.36   2.94   3.46   3.95    4.58
             250     1.44   2.34   2.90   3.38   3.84    4.40
             500     1.45   2.34   2.88   3.36   3.81    4.36
              ∞      1.45   2.34   2.87   3.36   3.83    4.38

Table 3.2  Empirical Percentiles of the F^{(1)}-Statistics

Statistic          n      0.50   0.80   0.90   0.95   0.975   0.99
F^{(1)}_{1,n}(1)   25     2.36   4.99   6.95   8.96   10.98   13.84
                   50     2.41   4.94   6.74   8.54   10.36   12.76
                  100     2.43   4.91   6.65   8.35   10.04   12.24
                  250     2.44   4.91   6.60   8.24    9.84   11.93
                  500     2.45   4.91   6.58   8.24    9.78   11.83
                   ∞      2.45   4.91   6.58   8.21    9.69   11.76
F^{(1)}_{2,n}(2)   25     2.55   4.43   5.79   7.15    8.56   10.51
                   50     2.56   4.30   5.49   6.60    7.69    9.14
                  100     2.57   4.24   5.34   6.35    7.33    8.59
                  250     2.58   4.21   5.25   6.22    7.14    8.33
                  500     2.58   4.20   5.23   6.18    7.09    8.27
                   ∞      2.59   4.19   5.20   6.15    7.06    8.23
F^{(1)}_{3,n}(3)   25     2.68   4.39   5.56   6.78    8.00    9.68
                   50     2.67   4.19   5.16   6.11    7.03    8.23
                  100     2.67   4.08   4.96   5.78    6.56    7.54
                  250     2.67   4.02   4.85   5.60    6.30    7.17
                  500     2.67   4.01   4.81   5.55    6.22    7.07
                   ∞      2.67   3.99   4.79   5.52    6.19    7.06
F^{(1)}_{4,n}(4)   25     2.80   4.51   5.67   6.83    8.05    9.76
                   50     2.76   4.20   5.11   5.96    6.74    7.83
                  100     2.74   4.05   4.84   5.55    6.20    7.06
                  250     2.73   3.97   4.69   5.33    5.95    6.70
                  500     2.73   3.94   4.65   5.27    5.88    6.61
                   ∞      2.72   3.93   4.63   5.26    5.84    6.55
F^{(1)}_{5,n}(5)   25     2.40   3.87   4.87   5.90    6.91    8.39
                   50     2.34   3.51   4.23   4.92    5.60    6.46
                  100     2.32   3.37   3.98   4.54    5.08    5.73
                  250     2.31   3.29   3.86   4.36    4.83    5.41
                  500     2.30   3.28   3.84   4.32    4.77    5.34
                   ∞      2.30   3.26   3.82   4.29    4.73    5.29

Table 3.3  Empirical Percentiles of the F^{(2)}-Statistics

Statistic          n      0.50   0.80   0.90   0.95   0.975   0.99
F^{(2)}_{1,n}(1)   25     1.71   3.09   4.12   5.16   6.29    7.77
                   50     1.72   3.00   3.94   4.87   5.81    7.02
                  100     1.72   2.96   3.85   4.72   5.57    6.66
                  250     1.72   2.94   3.80   4.64   5.44    6.46
                  500     1.72   2.94   3.79   4.61   5.39    6.40
                   ∞      1.72   2.94   3.78   4.58   5.36    6.37
F^{(2)}_{2,n}(2)   25     2.05   3.34   4.26   5.20   6.18    7.56
                   50     2.04   3.20   3.99   4.75   5.50    6.48
                  100     2.03   3.13   3.86   4.54   5.20    6.06
                  250     2.03   3.10   3.79   4.43   5.06    5.86
                  500     2.03   3.09   3.76   4.40   5.02    5.82
                   ∞      2.03   3.08   3.75   4.38   4.99    5.78
F^{(2)}_{3,n}(3)   25     2.30   3.61   4.52   5.46   6.43    7.73
                   50     2.26   3.40   4.14   4.86   5.57    6.50
                  100     2.25   3.30   3.96   4.58   5.17    5.92
                  250     2.24   3.24   3.86   4.42   4.94    5.61
                  500     2.23   3.22   3.82   4.37   4.88    5.52
                   ∞      2.23   3.21   3.80   4.36   4.86    5.51
F^{(2)}_{4,n}(4)   25     2.49   3.89   4.86   5.80   6.81    8.26
                   50     2.43   3.59   4.32   5.02   5.65    6.55
                  100     2.40   3.44   4.07   4.65   5.18    5.87
                  250     2.38   3.36   3.93   4.44   4.95    5.55
                  500     2.37   3.33   3.89   4.39   4.89    5.47
                   ∞      2.37   3.31   3.87   4.38   4.85    5.41
F^{(2)}_{5,n}(5)   25     2.63   4.14   5.18   6.23   7.34    8.87
                   50     2.53   3.72   4.44   5.14   5.83    6.71
                  100     2.49   3.54   4.15   4.71   5.25    5.91
                  250     2.47   3.45   4.02   4.52   4.99    5.57
                  500     2.47   3.43   3.99   4.48   4.93    5.49
                   ∞      2.46   3.41   3.97   4.44   4.89    5.44

In addition, we also computed F_{i,n}(d,a) for 1 ≤ i < d ≤ 5. (Tables for F_{i,n}(d,a) are not included.) It is interesting to note that the empirical percentiles F_{i,n}(d,a) increase in d for a fixed i and a ≤ 0.5. For example, F_{1,100}(1, 0.05) = 4.18, F_{1,100}(2, 0.05) = 5.04, F_{1,100}(3, 0.05) = 6.20, F_{1,100}(4, 0.05) = 6.77 and F_{1,100}(5, 0.05) = 7.14. This indicates that if one uses the criterion, "reject the hypothesis that there is exactly one unit root in favor of stationarity if F_{1,n}(p) > F_{1,n}(1,a)," then, with probability greater than a, one would conclude that the process is stationary when really there are several unit roots.

Except for some tedious bookkeeping, the results can be extended to regressions involving a time trend in the model.

4. Example

Pankratz (1983) lists 70 consecutive monthly volumes of commercial bank real estate loans, in billions of dollars, starting from January 1973. By fitting a sixth order autoregressive process we concluded that a third order autoregressive process provides an adequate fit for the data. Regressing Y_{3,t} = (1-B)³Y_t = Y_t - 3Y_{t-1} + 3Y_{t-2} - Y_{t-3} on Y_{0,t-1} = Y_{t-1}, Y_{1,t-1} = (1-B)Y_{t-1} = Y_{t-1} - Y_{t-2} and Y_{2,t-1} = (1-B)²Y_{t-1} = Y_{t-1} - 2Y_{t-2} + Y_{t-3}, we get

    Y_{3,t} = 0.00139 Y_{0,t-1} - 0.1045 Y_{1,t-1} - 1.3061 Y_{2,t-1} + ê_t ,   σ̂² = 0.0839 ,   (4.1)
             (0.00094)           (0.0795)           (0.1233)

where the numbers in the parentheses are estimated standard errors calculated by the usual regression formulas used in most computer regression routines. From (4.1) we calculated the F-statistics for testing the relevant hypotheses. The calculated values are F_{1,70}(3) = 2.19, F_{2,70}(3) = 1.19 and F_{3,70}(3) = 47.29, where F_{i,70}(3) is the regression F-statistic for testing the hypothesis that the first i coefficients of (4.1) are zero.
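Run through the sequential criterion of Section 2, these values stop at d = 2. A sketch of the decision logic, assuming the asymptotic 5% points from Table 3.1 (the paper itself compares against finite-sample 20% points below; the variable names are ours):

```python
F_obs = {1: 2.19, 2: 1.19, 3: 47.29}   # F_{i,70}(3) computed from (4.1)
crit  = {1: 4.13, 2: 3.47, 3: 3.39}    # asymptotic 5% points F_i(i, .05), Table 3.1

reject_H3 = F_obs[3] > crit[3]                    # 47.29 > 3.39 -> rejected
reject_H2 = reject_H3 and F_obs[2] > crit[2]      # 1.19 > 3.47 fails -> retained
reject_H1 = reject_H2 and F_obs[1] > crit[1]      # never reached
d_hat = 3 - [reject_H3, reject_H2, reject_H1].count(True)   # -> 2 unit roots
```

The criterion rejects three unit roots but retains two, in agreement with the conclusion reached below.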
Since 47.29 is greater than any of the table values for F_{3,50}(3) and F_{3,100}(3), we reject the null hypothesis that there are three unit roots. Note that, under the hypothesis H_2 (there are two unit roots and the third root is less than one in absolute value), the statistics F_{2,n}(3) and F_{2,n}(2) are asymptotically equivalent. Comparing the calculated value F_{2,70}(3) = 1.19 with F_{2,50}(2, 0.20) = 2.02 and F_{2,100}(2, 0.20) = 2.02, we fail to reject the hypothesis H_2 at the 20% level of significance. Finally, we fitted a first order autoregressive model for Y_{2,t} = (1-B)²Y_t to obtain

    (1 + 0.36B)(1-B)² Y_t = ê_t ,   σ̂² = 0.0842 ,
         (0.12)

where the number in parentheses is the estimated standard error. The same conclusions were obtained when an intercept was included in the model (4.1).

ACKNOWLEDGEMENTS

I wish to express my thanks to Professors D. Boos, D. Dickey and W. Fuller for their helpful comments. I am very grateful to Dr. Dickey and Dr. Sen for letting me use their Monte Carlo programs.

APPENDIX

We prove Theorem 1 in several steps. First we consider a dth order autoregressive process with d unit roots. For this process we obtain the limiting distributions of F_{i,n}(d), i = 1, 2, ..., d. Then we extend the results to a pth order process with d unit roots. Define

    S_{0,t} = e_t   and   S_{i,t} = Σ_{j=1}^t S_{i-1,j}                    (A.1)

for i = 1, 2, ..., d and t = 1, 2, ..., where e_t is as defined in Section 1. Note that (1-B)^j S_{i,t} = S_{i-j,t} for j ≤ i, and that S_{i,t} is an ith order autoregressive process with i unit roots.

Consider W_t = S_{d,t}, a dth order autoregressive process with d unit roots. In the parameterization of (2.1), we can write W_t as

    W_{d,t} = Σ_{i=1}^d β_i W_{i-1,t-1} + e_t ,                            (A.2)

where W_{i,t} = (1-B)^i W_t and β_1 = β_2 = ... = β_d = 0. Let β̃ denote the ordinary least squares estimator of β in the regression of W_{d,t} on W_{0,t-1}, W_{1,t-1}, ..., W_{d-1,t-1}. That is,

    β̃ = B^{-1}_{d,n} b_{d,n} ,                                            (A.3)

where the (ij)th element of B_{d,n} is

    Σ_{t=1}^{n-1} W_{i-1,t} W_{j-1,t} = Σ_{t=1}^{n-1} S_{d-i+1,t} S_{d-j+1,t}

and the ith coordinate of b_{d,n} is

    Σ_{t=1}^{n-1} W_{i-1,t} W_{d,t+1} = Σ_{t=1}^{n-1} S_{d-i+1,t} S_{0,t+1} .
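The iterated partial sums S_{i,t} of (A.1) are easy to generate, and the identities used repeatedly below, e.g. (1-B)S_{i,t} = S_{i-1,t} and the closed form of Lemma 1(i), can be checked numerically. A sketch (the sample size and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
e = rng.standard_normal(50)

# S_{0,t} = e_t and S_{i,t} = sum_{j<=t} S_{i-1,j}, as in (A.1)
S = [e]
for _ in range(4):
    S.append(np.cumsum(S[-1]))

# differencing lowers the order: (1 - B) S_{i,t} = S_{i-1,t}
assert np.allclose(np.diff(S[3]), S[2][1:])

# closed form for i = 2 (Lemma 1(i)): S_{2,t} = sum_{r=1}^{t} (t - r + 1) e_r
t = 30                                           # 1-based time index
closed = sum((t - r + 1) * e[r - 1] for r in range(1, t + 1))
assert np.isclose(closed, S[2][t - 1])
```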
We now obtain the orders in probability of the elements of B_{d,n} and b_{d,n}. We also obtain recursive relationships between the elements.

Lemma 1: Let S_{i,t} be as defined in (A.1). Then, as n → ∞,

    (i)   S_{i,t} = [(i-1)!]^{-1} Σ_{r=1}^t (t-r+1)(t-r+2)···(t-r+i-1) e_r ,

    (ii)  n^{-(i-½)} S_{i,n} →_D N_i ~ N(0, σ_i²), where σ_i² = [(2i-1){(i-1)!}²]^{-1} ,

    (iii) Σ_{t=1}^{n-1} S_{i,t} S_{j,t} = O_p(n^{i+j}) for i, j ≥ 1 ,

    (iv)  Σ_{t=1}^{n-1} S_{i,t} S_{0,t+1} = O_p(n^i) ,

    (v)   Σ_{t=1}^{n-1} S_{i,t} S_{i-1,t} = ½ S²_{i,n-1} + O_p(n^{2i-2}) for i ≥ 2 ,

    (vi)  Σ_{t=1}^{n-1} S_{i,t} S_{i-k,t} = S_{i,n-1} S_{i-k+1,n-1} - Σ_{t=1}^{n-1} S_{i-1,t} S_{i-k+1,t} + O_p(n^{2i-k-1}) for 2 ≤ k ≤ i .

Proof: The results (i) and (ii) are straightforward. Since E[S²_{i,t}] = O(t^{2i-1}), the Cauchy-Schwarz inequality gives (iii), and

    V[Σ_{t=1}^{n-1} S_{i,t} S_{0,t+1}] = Σ_{t=1}^{n-1} E[S²_{i,t}] = O(n^{2i})

gives (iv). Now, S_{i,t} = S_{i,t-1} + S_{i-1,t}, and therefore

    S²_{i,n-1} = Σ_{t=1}^{n-1} (S²_{i,t} - S²_{i,t-1}) = Σ_{t=1}^{n-1} (2 S_{i,t-1} S_{i-1,t} + S²_{i-1,t})

and hence

    Σ_{t=1}^{n-1} S_{i,t-1} S_{i-1,t} = ½ S²_{i,n-1} - ½ Σ_{t=1}^{n-1} S²_{i-1,t} .

Therefore,

    Σ_{t=1}^{n-1} S_{i,t} S_{i-1,t} = Σ_{t=1}^{n-1} S_{i,t-1} S_{i-1,t} + Σ_{t=1}^{n-1} S²_{i-1,t} = ½ S²_{i,n-1} + O_p(n^{2i-2}) ,

which is (v). Note that, for 2 ≤ k ≤ i, summation by parts gives

    S_{i,n-1} S_{i-k+1,n-1} = Σ_{t=1}^{n-1} S_{i-1,t} S_{i-k+1,t} + Σ_{t=1}^{n-1} S_{i,t-1} S_{i-k,t} ,

and hence

    Σ_{t=1}^{n-1} S_{i,t-1} S_{i-k,t} = S_{i,n-1} S_{i-k+1,n-1} - Σ_{t=1}^{n-1} S_{i-1,t} S_{i-k+1,t} .

Since Σ_{t=1}^{n-1} S_{i,t} S_{i-k,t} = Σ_{t=1}^{n-1} S_{i,t-1} S_{i-k,t} + Σ_{t=1}^{n-1} S_{i-1,t} S_{i-k,t}, and the last sum is O_p(n^{2i-k-1}) by (iii), result (vi) follows.

From Lemma 1 it follows that, for 1 ≤ j ≤ i and i ≥ 2,

    Σ_{t=1}^{n-1} S_{i,t-1} S_{j,t-1}
        = Σ_{r=0}^{k-1} (-1)^r S_{i-r,n-1} S_{j+r+1,n-1} + (-1)^k Σ_{t=1}^{n-1} S²_{i-k,t-1} + O_p(n^{i+j-1}) ,  if i-j is even,
        = Σ_{r=0}^{k-1} (-1)^r S_{i-r,n-1} S_{j+r+1,n-1} + (-1)^k ½ S²_{i-k,n-1} + O_p(n^{i+j-1}) ,             if i-j is odd,

where k = the integer part of ½(i-j).

We now obtain the order in probability results for B^{-1}_{d,n}.

Lemma 2: Let D_n = diag{n^d, n^{d-1}, ..., n}. Define G_{d,n} = D_n^{-1} B_{d,n} D_n^{-1} and g_{d,n} = D_n^{-1} b_{d,n}. Then G_{d,n} = O_p(1), g_{d,n} = O_p(1), and the determinant of G_{d,n} is bounded away from 0 in probability.

Proof: From Lemma 1, it is clear that G_{d,n} = O_p(1) and g_{d,n} = O_p(1). We prove the result on the determinant through induction on d. For d = 1, G_{1,n} = n^{-2} Σ_{t=1}^{n-1} S²_{1,t}; since (Σ_{t=1}^{n-1} S_{1,t})² = S²_{2,n-1} ≤ (n-1) Σ_{t=1}^{n-1} S²_{1,t} and n^{-3} S²_{2,n-1} converges in distribution to a constant times a chi-square random variable with one degree of freedom, G_{1,n} is bounded away from zero. Assume now that det[G^{-1}_{d-1,n}] is bounded in probability. Note that λ_max(G_{d-1,n}) ≤ trace[G_{d-1,n}] = O_p(1), and, with the induction hypothesis, λ_min(G_{d-1,n}) is bounded away from zero in probability. We now show that det[G^{-1}_{d,n}] is O_p(1). Let θ̂(n) denote the vector of regression coefficients in the regression of S_{d,t} on φ_t = (n^{-(d-1)} S_{d-1,t}, ..., n^{-1} S_{1,t})', so that θ̂(n) = G^{-1}_{d-1,n} q_n, where q_n is the vector of suitably normalized cross products Σ_t S_{d,t} S_{j,t}, j = 1, ..., d-1, and

    det[G_{d,n}] = det[G_{d-1,n}] · n^{-2d} Σ_{t=1}^{n-1} [S_{d,t} - θ̂(n)'φ_t]² .

By Lemma 1(v), the leading element of q_n is ½ n^{-(2d-1)} S²_{d,n-1} + o_p(1), and since n^{-(2d-1)} S²_{d,n-1} converges in distribution to σ_d² χ²_1, ||q_n|| is bounded away from zero in probability. Since ||θ̂(n)||² ≥ λ^{-2}_max(G_{d-1,n}) ||q_n||², both ||θ̂(n)|| and ||θ̂(n)||^{-1} are bounded in probability. An argument comparing θ̂(n*) and θ̂(n) for n* = [δn] (for a suitable 0 < δ < 1), together with the fact that λ_min(G_{d-1,n*}) is bounded away from zero in probability, then shows that the normalized residual sum of squares n^{-2d} Σ_t [S_{d,t} - θ̂(n)'φ_t]² is bounded away from zero in probability. Hence det[G_{d,n}] is bounded away from zero in probability.
We now obtain the asymptotic distributions of G_{d,n} and g_{d,n}. Define

    A_n = L_n' L_n ,                                                       (A.4)

where L_n is an (n-1)×(n-1) lower triangular matrix with every element below and on the diagonal equal to 1. Then

    L_n' = 1 1' + I_{n-1} - L_n ,                                          (A.5)

where 1 = (1, 1, ..., 1)' is (n-1)×1 and I_{n-1} is an identity matrix of size (n-1). Let λ_{1n} > λ_{2n} > ... > λ_{n-1,n} denote the eigenvalues of A_n and let x_{in} = (x_{in,1}, x_{in,2}, ..., x_{in,n-1})' denote the eigenvector associated with λ_{in}. Define the orthogonal transformation of e into U_n = (u_{1n}, u_{2n}, ..., u_{n-1,n})' by

    u_{in} = Σ_{t=1}^{n-1} x_{in,t} e_t .
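The spectrum of A_n = L_n'L_n is what produces the weights γ_i = 2[(2i-1)π]^{-1}(-1)^{i+1} in the limit representations below: for fixed i, n^{-2}λ_{in} approaches γ_i². This can be checked numerically (the matrix size is our choice):

```python
import numpy as np

n = 200
L = np.tril(np.ones((n - 1, n - 1)))        # lower triangular matrix of ones
A = L.T @ L                                 # A_n of (A.4)
lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues, largest first

gamma_sq = [(2.0 / ((2 * i - 1) * np.pi)) ** 2 for i in (1, 2, 3)]
approx = [lam[i - 1] / n ** 2 for i in (1, 2, 3)]
```

With n = 200 the three leading normalized eigenvalues already sit within about 0.01 of γ_1² = 4/π², γ_2² = 4/(9π²) and γ_3² = 4/(25π²).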
n,1 (k) (_l)kn-(Zk+l)s(Zk+l)+ 0 (n- l ) f n,i NZi,(n-l) + n p where f(k~ = n -1 L n,1 n a (k) n,i Note that the jth element of f(k~ n,1 is (A.7) -24- n = n j L -1 r=l a (k) . n,1.,r -k 2(k-i)+1 2 L £.=0 where v* k, i,£. is a constant. Now, n-(2k+2) Ak+ 1 e = n n = n- 1 l11' + = n- 1 ~ i=l I k - L J L A e n n n n n-1 lIl' + I n- 1 - L Jf(k~ N n n,1. 2i,(n-1) where a(k:1) = n-lll(I'f(k~) + f(k~ - L f(k~J n,1. n,1. n,1. n n,1. Note that the jth element of a a (k+l) .. n,1.,J = n -k2 i = 1, 2, ... , k. (k+1) . has the form, n,1. 2(k-i)+2 . 2(k-i)+2-£' -1 L v (.1) + O(n ). £.=0 k+1,i,l n Therefore, n -(2k+2) A e n n (k+l) = k+1 L a . N2 · ( i=l n,1. 1., n- 1) + (_1)k+1 S (2k+2) n 0 (n-l) . + p u Note, from (A. 0) and (A. 7) we get 2k 82 n -4k n-1 L =n -4k e 'A e t=l 2k,t n n n k - 2(-1) k k E k L i=l j=l i~l N2i ,(n-1)n + Op(n- 1 ), (k)' 8 (k) N n ,i an,j N 2i,(n-1) 2j,(n-l) -2k a(k). 'S(2k) n,1. n -Z5- and n -(4k+Z) n-l 2 L S t=l 2k+l,t = n -(4k+Z), e n A ~ - Z(-l)k i=1 +0 (n- l p 2k+l e n n N n-(2k+l)f(k~'s(Zk+l) Zi,(n-l) n,1 n >. It can be shown that, n -r-~ n-l _ L (!)m S r,t t=l n for any integers m, r ~ O. V N* ~ N(O,o*Z ) m,r m,r In fact, one can express CD N* = L 0* V. m,r i=l m,r,i 1 where V. 1 ~ NID(O,l). n- 2k a(~)IS(2k) n1 n Therefore, =n -2k n-1 L t=1 = 2(k-i) v L l=o a (k) S . n,1,t 2k,t k,i,l n -Zk-~ n-1 + 0 (n-1) t 2(k-i)-l L (-) S· t=1 n 2k,t - v p N(a) , say. i,k Similarly, n -(Zk+l) f(k)' s(Zk+l) n, i n .L Z(k-i)+1 E l=o * vk,i,l N* 2(k-i)-l+l,2k = N( f) i,k Also, there exist finite positive constants a~ . k and f~ . k such that 1,J, 1,J, .(..01.m a (k)' . a (k). n~ n,1 n,1 = a *. . 1,J, k and Therefore, Q*2k - k L k L i=1 j=1 * a . . k N21·N · 2J 1,J, -Z6- ~ N 1. N + Z(_l)k+l Z i=l 0 (a) i, k and n V -(4k+Z) n-l Z 1: S t=l 2k+l,t k k = QZk+l 1: 1: f~ ° k NZiN Q * - i=l Zk Zj j=l 1.,), k l + Z( _U + Now, using the results of Lemma 1, we get for 0 -(i+j) n-l V 1: S 1.,t. 
1 S)., t -1 t=l n ~ j k 1: i=l ~ N( f) N Zi i,k i and i ~ 2, P d ,1.,) ° • where k-l 1: (_l)r No N)o+r+l + r=O 1.-r k (-1) Qk' if i-j is even Pd,i,j = k-l 1: (_l)r N. N)o+r+l + (_l)k ~N7 k ' if i-j is odd, r=O 1.-r 1.and k = integer part of -1 n-l 1: S n t=l ~(i-j). Note also that V Z SO,t+l~(Nl - U . l,t Therefore, V G d,n - Gd and <A. 9) where G and gd consist of the corresponding limiting variables. d We now present the limiting distributions of the "F-type" statistics. Lemma 4: Let F. (d) be as defined in (Z.1O) with Y = W · t 1., n t F o (d)L 1..-1 g(*'i), d 1.,n (ii) Where G d [G~iil] -1 * g( i), d = F.1. (d) is the i x i upper left hand submatrix of G~ -1 1 and vector consisting of the first i elements of G gd . (Note that d Then, for i ~ d, -27 Proof: Recall, F. (d) = 1.,n B(i ) l i( MS E)n J - 1 = l i(MSE) J n -1 B'U~1. -1 - C(i) fi( i) -1 U~J-l U~ 6 lU~ B 1. 1. d,n 1. where U(i) = (i x i) left hand corner submatrix of I d and 6 = B- 1 b d,n Note that, as defined in (A.3). (MSE) n d,n = (n-d) = (n-d) P ~ -1 -1 n 2 W d,t t=l E n E t=l e - 6' b d,n l 0 (n- ) t + P 2 l. Also, and D n B- 1 d,n D n ...E... d d-l , ... ,n}. where D = diagonal {n ,n n F. 1., n (d) Therefore, --v lJ Now we present the proof of the theorem. Proof of Theorem 1: Recall that Y is a pth order process with d unit roots. t Let W and Zt be as defined in (2.5). t Then, W is a dth order autoregressive t process with d unit roots and Zt is a (p - d)th order autoregressive process with stationary roots. We can express Zt as, -28- P Zp-d,t = j=~+l = where Z. 1,t (1 - B) i Zt' nj Zj-d-l,t-l + e t Note that n + d l < 0. From Fuller (1976), it follows that N(O , T-lr-Zl T- l ,) k - - 11) - V n-"'(11 where 11 (A. 10) is the least squares estimator of 11 obtained by regressing Z on p-d,t ZO,t-l' ... , Zp-d-l,t-l; T is given in (2.3); the (i,j)th element of the matrix Let A B be obtained by regressing i Note that, for Yp,t on Yo,t-l' Y1 ,t-l' ..• , Yp- 1 ,t - l' where Y.1, t = (1- B) Yt • i ~ d, Y. 
= (l_B)i-d Z 1, t t = Z. d 1- ,t . Therefore, Bis obtained by regressing Let us Zp-d,t on YO,t-l' Y1,t-l' ... , Yd - l ,t-1' ZO,t-l' Zl,t-1' ... , Zp-d-l,t-1' partition B as (B(1)" n')'. A We can obtain 11 using several regressions. First, Let regress Zp-d,t on .~ = (YO,t-l' Yl,t-l' A(p-S) R = Z - 41' 11 , be the tth residual from this regression where p-d,t p-d,t t A(p-S) 11 is the corresponding least squares estimator of 11. Similarly, regress Zi,t-l on R p-d,t 4I~ on R For i and obtain Ri,t and n(i), for i = 0, 1, ... , p-d-l. Finally, regress O,t' Rl , t' ... , Rp-d-l,t' to obtain 11. < d, let S*d . -1,t = s*i,t Then, = Y.1, t = l(i-l)IJ -1 t E r=l (t-r+l)(t-r+2) ... (t-r+i-l)Z r Using Kronecker's lemma and the arguments of Lemma 1, it can be shown that, E t=l and y2 i,t -29- n ~ l. t=l for 0 ~ i,j ~ 2d Y• Y. 1= 0 P (n - i -j) 1,t- 1 J,t- d-1. n I: t=l Similarly, one can show that, = Y.1,t- 1 Zt- k and hence n I: t=l for i = 0, 1, 2, ... , d-1; Now, consider, for 0 n Also, for 0 ~ ~ -(2d-i-j) i = Y. 1 Z 1 1,tj,t- ~ ~ i j k = 0, ~ d-1, 1, ... , p; j n = n -(2d-i-j) t=l W.1,t- 1 W.J,t- 1 I: n I: t=l l Yp-d+i,t - nd+1 Y i,t-1 d-1, =- n d+1 n -(d-i) n -1 -1 -1 -1 = I: D 111'111 D lilt 1fJ't Dn n n n t=l D -2 nd+ 1 G d,n ~ Y t=l 1,t- 1 Therefore, = 0, 1, ... , p-d. 1 + 0 (n- ) p l. • Z d +1 p-,t + 0 P (n- 1) - -30- and, we get ... (i) Dn T'I = = 0 p (1) D n "'(p-d) = T'I i = 0,1, ... , p-d-l -n d+l G -1 l 0 (n- ) d,n gd,n + p = 0 (1) p where D = diagonal {n n n ~ t=l R. = R. 1,t J,t = d , n d-l ... , , n - ~ t=l lZi,t-l n .' t n} . Now, note that, for ... (i)j T'I lz ~ Z.1,t- lZ,J,t- 1 + R d t=l j,t-l ° ~ i, j:ilp-d-l, - .'n{j)] t 0 (1) . P Similarly, n n ~ t=l R. 1,t p- ,t = t=l L Z. 1 Z d + 0 P (1). 1,tp-,t Therefore, we get, T'I - T'I and - where B is defined in (A.3), and F. 1,n (p) = F~1,n (d) + T'I is defined in (A.lO). Also, for i V F. ( p ) - F.(d). 1,n 1 d, l 0 (n- ) p where F~ (d) is the ifF-statistic" for testing 13 = 13 = ... = 13. 
= 1 1,n 2 1 regression of Wd,t on WO,t-l' ~ Wl,t-l' ... , Wd-l,t-l. Therefore, ° in the -31- Now, consider, = Fd+1 ,n(P) 1 (d+1)(MSE) n where C(d+1) is the (d+1) x (d+1) submatrix consisting of the first (d+1) rows and (d+1) columns of the matrix, n E t=l ·t~~ An = n E t=l and ~I ~t = (2 !.: " " " ,n 2} " O,t-1' 2 n ~t.tl E t=l ~t~~ ... , 1,t-I' -1 2 p-d-1,t-1 )" d d-l ~ Let, D* = diagonal {n ,n ,""",n,n, n Then, D* A n n Therefore, under the assumption of H : exactly d unit roots d - 2 -1 nd+1 L( d+ 1) w* J p Therefore, _ under Hd " V Fd+i,n - v 00 Now, since Fd+i,n(P) 00 , under Hd " 2, we have that u -32- We now outline the proof of Theorem 2. Proof of Theorem 2: (1, .~, Let ,,(+) ~ be obtained by regressing t' ) where ·t and tt be as defined t n n E E n t=l t t=l n n n A(1)= E E E n t=l t=l ·t t=l ·t·~ n n n E E tt E tt·~ t=l t=l t=l .' n(1)= D and 1 diagonal n T (1) = n E e E .'e t t E t'e t t t=l n t=l n t=l {n~ , n d , n d-l , Y P,t on in the proof of Theorem l. Define, -1 t' t ·tt~ ttt~ ... , n, n ~ , ... , 1 n~t , t Then, it follows from Theorem 1 that 0' r- 1 .] and (A.ll ) where H* d = ~d Pd = ( Nd+l' Nd ' ... , N2)' , ~ = diagonal (1, -n d+ l , -n d+ l , ..• , -n d+ l ) -33- and the matrix G and the vector gd are as defined in Theorem 1. d Now, following the arguments used in proving Theorem 1, we can show that, under M , d F~ 1) (d) F~1)()...R-. 1,n p { 1 , 00 i :;ii d i >d where F~1)( ) = the F-statistic for testing 8 = 1,n p 1 = 8 = 0 in the i regression of (2.12), *1 lH(ii)J- l * F~ 1) (d) = i1 h(i),d h( i), d 1 d,l H(ii) = (i x i) submatrix consisting of the rows 2 to d,l columns 2 to i+l of the matrix H*-1 d i+l and h(i),d = i x 1 vector consisting of the elements 2 to * i+l of the and vector H*-1 h . d d Similarly, F~2)( ) _ 1,n p F~2) (d) V i ~ d 1 { 00 , i >d where F~2)( ) = 1,n p the F-statistic for testing 80 = 8 = 1 = 8. 
= 0 in the regression of (2.12),

    F^{(2)}_i(d) = (i+1)^{-1} h**'_{(i),d} [H^{(ii)}_{d,2}]^{-1} h**_{(i),d} ,

H^{(ii)}_{d,2} is the (i+1) x (i+1) submatrix consisting of the first (i+1) rows and (i+1) columns of the matrix [H*_d]^{-1}, and h**_{(i),d} is the (i+1) x 1 vector consisting of the first (i+1) elements of the vector [H*_d]^{-1} h*_d.

REFERENCES

Anderson, T. W. (1959). On asymptotic distributions of estimates of parameters of stochastic difference equations. Annals of Mathematical Statistics, 30, 676-687.

Dickey, D. A. (1976). Estimation and hypothesis testing in nonstationary time series. Ph.D. Thesis, Department of Statistics, Iowa State University, Ames, Iowa.

Dickey, D. A. and Fuller, W. A. (1979). Distribution of the estimators for autoregressive time series with a unit root. Journal of the American Statistical Association, 74, 427-431.

Dickey, D. A. and Fuller, W. A. (1981). Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica, 49, 1057-1072.

Dickey, D. A., Hasza, D. P., and Fuller, W. A. (1984). Testing for unit roots in seasonal time series. Journal of the American Statistical Association, 79, 355-367.

Fuller, W. A. (1976). Introduction to Statistical Time Series. Wiley, New York.

Fuller, W. A. (1979). Testing the autoregressive process for a unit root. Proceedings of the 42nd Session of the International Statistical Institute, Manila, Philippines.

Hannan, E. J. and Heyde, C. C. (1972). On limit theorems for quadratic functions of discrete time series. Annals of Mathematical Statistics, 43, 2058-2066.

Hasza, D. P. (1977). Estimation in nonstationary time series. Ph.D. Thesis, Iowa State University, Ames, Iowa.

Hasza, D. P. and Fuller, W. A. (1979). Estimation for autoregressive processes with unit roots. Annals of Statistics, 7, 1106-1120.

Lai, T. L. and Wei, C. Z. (1983). Asymptotic properties of general autoregressive models and strong consistency of least-squares estimates of their parameters. Journal of Multivariate Analysis, 13, 1-23.

Mann, H. B. and Wald, A. (1943). On the statistical treatment of linear stochastic difference equations. Econometrica, 11, 173-220.

Pankratz, A. (1983). Forecasting with Univariate Box-Jenkins Models: Concepts and Cases. Wiley, New York.

Rao, M. M. (1978). Asymptotic distribution of an estimator of the boundary parameter of an unstable process. Annals of Statistics, 6, 185-190.

Sen, D. L. (1985). Robustness of single unit root test statistics in the presence of multiple unit roots. Ph.D. Thesis, North Carolina State University, Raleigh, North Carolina.
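As a closing numerical illustration of the regression studied in the proof of Theorem 1, the sketch below regresses the dth difference W_{d,t} = (1-B)^d Y_t on the lagged differences W_{0,t-1}, ..., W_{d-1,t-1} and forms a regression F-type statistic for the hypothesis that all d coefficients vanish. The use of Python with NumPy, the helper name f_statistic, and the exact normalization of the statistic are assumptions of this sketch, not part of the paper; the statistic (2.10) may be normalized differently.

```python
import numpy as np

def f_statistic(y, d):
    """F-type statistic for the hypothesis of d unit roots (illustrative).

    Regresses W_{d,t} = (1-B)^d y_t on W_{0,t-1}, ..., W_{d-1,t-1},
    where W_{i,t} = (1-B)^i y_t, and tests that all d coefficients
    are zero.  The normalization here is the ordinary regression
    F-statistic and is only an approximation of (2.10) in the paper.
    """
    y = np.asarray(y, dtype=float)
    N = len(y)
    diffs = [y if i == 0 else np.diff(y, n=i) for i in range(d + 1)]
    n = N - d                          # usable observations t = d, ..., N-1
    resp = diffs[d]                    # W_{d,t}
    # column i holds W_{i,t-1}; diffs[i][t-1-i] corresponds to W_{i,t-1}
    X = np.column_stack(
        [diffs[i][d - 1 - i: d - 1 - i + n] for i in range(d)]
    )
    beta, _, _, _ = np.linalg.lstsq(X, resp, rcond=None)
    rss1 = np.sum((resp - X @ beta) ** 2)   # unrestricted residual SS
    rss0 = np.sum(resp ** 2)                # restricted: all coefficients zero
    return ((rss0 - rss1) / d) / (rss1 / (n - d))

rng = np.random.default_rng(1985)
e = rng.standard_normal(500)

# Under H_2 (two unit roots): y_t is a double partial sum of the errors.
y_two_roots = np.cumsum(np.cumsum(e))
f_under_h2 = f_statistic(y_two_roots, d=2)

# Under stationarity (AR(1) with root 0.5) the statistic for a single
# unit root diverges, since the coefficient on y_{t-1} is nonzero.
y_stat = np.empty(500)
y_stat[0] = e[0]
for t in range(1, 500):
    y_stat[t] = 0.5 * y_stat[t - 1] + e[t]
f_stationary = f_statistic(y_stat, d=1)
```

Under H_2 the statistic remains bounded in probability, with the nonstandard limiting distribution characterized above, whereas for a stationary series the statistic for a single unit root grows with n; this contrast is what drives the consistency of the sequential testing criterion.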