Economics 310
Lecture 24
Univariate Time Series
Concepts to be Discussed

- Time series
- Stationarity
- Spurious regression
- Trends
Plot of Economic Levels Data
[Figure: monthly levels of the PPI (PPIACO), M1, and employment, plotted against date, 1980-2000.]

Plot of Rate Data
[Figure: the exchange rate (TWEXMMTH) and an interest rate (BA6M), in percent, plotted against date, 1976-2000.]
Stationary Stochastic Process

- Stochastic (random) process
- Realization
- A stochastic process is said to be stationary if its mean and variance are constant over time and the covariance between two time periods depends only on the distance or lag between them, not on the actual time at which the covariance is computed.
- A time series that violates these conditions is not stationary in the sense just defined; it is called a nonstationary time series.
Stationary Stochastic Process

Conditions for stationarity:

Mean:       E(Y_t) = μ                              for all t
Variance:   var(Y_t) = E(Y_t − μ)² = σ²             for all t
Covariance: γ_k = E[(Y_t − μ)(Y_{t+k} − μ)]         for all t and k
Test for Stationarity: Correlogram

Autocorrelation function (ACF):

ρ_k = γ_k / γ_0 = (covariance at lag k) / (variance)

The graph of ρ_k against k gives the population correlogram.

We can compute the sample autocorrelation function:

γ̂_k = Σ (Y_t − Ȳ)(Y_{t+k} − Ȳ) / n
γ̂_0 = Σ (Y_t − Ȳ)² / n
ρ̂_k = γ̂_k / γ̂_0

The plot of the sample autocorrelations is the sample correlogram.
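As a sketch, the sample ACF above can be computed directly. The simulated AR(1) series, seed, and lag range below are illustrative assumptions, not the lecture's PPI data; a highly persistent series is used so the ACF decays slowly, as in the correlograms that follow.

```python
import random

# Illustrative data: a persistent AR(1) series (NOT the lecture's PPI data).
random.seed(0)
y = [0.0]
for _ in range(499):
    y.append(0.9 * y[-1] + random.gauss(0, 1))

n = len(y)
ybar = sum(y) / n

def gamma_hat(k):
    """Sample autocovariance at lag k, with divisor n as on the slide."""
    return sum((y[t] - ybar) * (y[t + k] - ybar) for t in range(n - k)) / n

# Sample autocorrelation: rho_hat_k = gamma_hat_k / gamma_hat_0.
acf = [gamma_hat(k) / gamma_hat(0) for k in range(1, 19)]
```

Plotting `acf` against the lag gives the sample correlogram; with the divisor n used on the slide, every |ρ̂_k| is at most 1.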
Correlogram for PPI

AUTOCORRELATION FUNCTION OF THE SERIES PPIACO

Lag:  1     2     3     4     5     6     7     8     9
ACF:  0.98  0.96  0.95  0.93  0.91  0.90  0.88  0.87  0.85

Lag:  10    11    12    13    14    15    16    17    18
ACF:  0.84  0.83  0.81  0.80  0.79  0.78  0.77  0.76  0.75

[Bar plot of the ACF: the autocorrelations decay very slowly, the signature of a nonstationary series.]
Correlogram for M1

AUTOCORRELATION FUNCTION OF THE SERIES M1

Lag:  1     2     3     4     5     6     7     8     9
ACF:  0.99  0.98  0.97  0.96  0.95  0.94  0.93  0.92  0.91

Lag:  10    11    12    13    14    15    16    17    18
ACF:  0.90  0.89  0.88  0.87  0.86  0.85  0.84  0.83  0.81

[Bar plot of the ACF: again the autocorrelations decay very slowly.]
Testing Autocorrelation Coefficients

- If the data are white noise, each sample autocorrelation coefficient is approximately normally distributed with mean zero and variance 1/n.
- For our levels data, sd = 0.064, so the 5% test cutoff is 0.126.
- For our rate data, sd = 0.059, so the 5% test cutoff is 0.115.
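The cutoffs above follow directly from the 1/n variance: sd = 1/√n and the two-sided 5% cutoff is 1.96·sd. A quick check (the rate-data sample size of 288 is an assumption inferred from the quoted sd of 0.059; the levels n = 242 is from the output later in the lecture):

```python
import math

# Levels data: n = 242 observations (from the SHAZAM output).
n_levels = 242
sd_levels = 1 / math.sqrt(n_levels)        # ~0.064
cutoff_levels = 1.96 * sd_levels           # ~0.126

# Rate data: n ~ 288 is an ASSUMPTION backed out from sd = 0.059.
n_rate = 288
sd_rate = 1 / math.sqrt(n_rate)            # ~0.059
cutoff_rate = 1.96 * sd_rate               # ~0.115
```

Any sample autocorrelation outside ±cutoff rejects white noise at the 5% level for that lag.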
Testing Autocorrelation Coefficients

To test whether all the autocorrelation coefficients are simultaneously equal to zero, we can use the Box-Pierce Q-statistic:

Q = n Σ_{k=1}^{m} ρ̂_k²  ~  χ²_m

or the Ljung-Box statistic:

LB = n(n + 2) Σ_{k=1}^{m} [ρ̂_k² / (n − k)]  ~  χ²_m
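Both statistics are one-line sums. A sketch using the first four PPI autocorrelations from the correlogram (the rounded two-decimal values, so the result only approximately matches the LB value of 902.18 reported in the output below):

```python
# Box-Pierce Q and Ljung-Box LB from the first m sample autocorrelations.
# The acf values are the PPI lags 1-4 read off the correlogram slide
# (rounded to two decimals, so LB is only approximately the reported 902.18).
n = 242
acf = [0.98, 0.96, 0.95, 0.93]
m = len(acf)

Q = n * sum(r ** 2 for r in acf)
LB = n * (n + 2) * sum(r ** 2 / (n - k) for k, r in enumerate(acf, start=1))
```

Note that LB > Q in finite samples; the (n + 2)/(n − k) weights are the small-sample correction that gives the Ljung-Box statistic better χ² behavior.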
Ljung-Box Test for PPI

SERIES PPIACO
NET NUMBER OF OBSERVATIONS = 242
MEAN = 112.01   VARIANCE = 129.36   STANDARD DEV. = 11.374

AUTOCORRELATIONS                                                        STD ERR
LAGS  1-12   0.98 0.96 0.95 0.93 0.91 0.90 0.88 0.87 0.85 0.84 0.83 0.81   0.06
LAGS 13-18   0.80 0.79 0.78 0.77 0.76 0.75                                  0.29

MODIFIED BOX-PIERCE (LJUNG-BOX-PIERCE) STATISTICS (CHI-SQUARE)

LAG      Q    DF  P-VALUE      LAG      Q    DF  P-VALUE
  1   236.25   1   .000         10  2060.22  10   .000
  2   465.31   2   .000         11  2234.89  11   .000
  3   687.34   3   .000         12  2405.13  12   .000
  4   902.18   4   .000         13  2571.38  13   .000
  5  1110.17   5   .000         14  2733.79  14   .000
  6  1311.32   6   .000         15  2892.87  15   .000
  7  1506.47   7   .000         16  3048.83  16   .000
  8  1696.30   8   .000         17  3201.65  17   .000
  9  1880.78   9   .000         18  3351.37  18   .000
Unit Root Test for Stationarity

Consider the model

Y_t = ρ·Y_{t−1} + u_t,   u_t white noise.

If the coefficient of Y_{t−1} is in fact 1, we have the unit root problem: a nonstationary situation. If we run the regression Y_t = ρ·Y_{t−1} + u_t and ρ does not test different from 1, we have a unit root problem; the series is a random walk.

Frequently, one estimates the model

ΔY_t = (ρ − 1)·Y_{t−1} + u_t = δ·Y_{t−1} + u_t

and tests δ = 0.
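To see why ρ̂ sits near 1 for a nonstationary series, here is a minimal sketch: simulate a random walk and run the OLS regression of Y_t on Y_{t−1} (the simulated series and seed are illustrative assumptions, not the lecture's data).

```python
import random

# A pure random walk: Y_t = Y_{t-1} + u_t, so the true rho is 1.
random.seed(1)
y = [0.0]
for _ in range(500):
    y.append(y[-1] + random.gauss(0, 1))

# OLS of Y_t on Y_{t-1} (no constant): rho_hat = sum(Y_{t-1} Y_t) / sum(Y_{t-1}^2)
num = sum(y[t - 1] * y[t] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
rho_hat = num / den
```

The estimate lands very close to 1, which is exactly why the test is reformulated as δ = ρ − 1 = 0 on the differenced equation.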
Results of Our Test

- If a time series is differenced once and the differenced series is stationary, we say the original series (a random walk) is integrated of order 1, denoted I(1).
- If the original series has to be differenced twice before it is stationary, we say it is integrated of order 2, I(2).
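A small sketch of the I(1) idea: the level of a random walk wanders (its sample variance is large), but one round of differencing recovers the white-noise innovations exactly (simulated data; seed and sample size are illustrative assumptions).

```python
import random
import statistics

random.seed(2)
# White-noise innovations, then their cumulative sum: a random walk, I(1).
eps = [random.gauss(0, 1) for _ in range(500)]
level = [sum(eps[:t + 1]) for t in range(500)]

# First difference: Y_t - Y_{t-1} recovers eps (stationary), so level is I(1).
diff = [level[t] - level[t - 1] for t in range(1, 500)]

var_level = statistics.pvariance(level)   # large: the level wanders
var_diff = statistics.pvariance(diff)     # near 1: the difference is stationary
```

An I(2) series would need this step applied twice before the variance settles down.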
Testing for a Unit Root

- In testing for a unit root, we cannot use the traditional t critical values.
- We use revised critical values provided by Dickey and Fuller.
- We call the test the Dickey-Fuller test for unit roots.
Dickey-Fuller Test

For theoretical and practical reasons, we apply the Dickey-Fuller test to the following three equations:

1. ΔY_t = δ·Y_{t−1} + u_t
2. ΔY_t = β_1 + δ·Y_{t−1} + u_t
3. ΔY_t = β_1 + β_2·t + δ·Y_{t−1} + u_t

In each case we test δ = 0.

If the error term is autocorrelated, we apply the D-F test to

4. ΔY_t = β_1 + β_2·t + δ·Y_{t−1} + Σ_{i=1}^{M} α_i·ΔY_{t−i} + ε_t

This is known as the augmented Dickey-Fuller test.
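A sketch of equation (1), the simplest case: regress ΔY_t on Y_{t−1} and form the t-statistic for δ = 0, which is then compared against the Dickey-Fuller (not the normal) critical values. The two simulated series and seed are illustrative assumptions; in practice one would use a packaged ADF routine.

```python
import math
import random

def df_tstat(y):
    """t-statistic for delta = 0 in dY_t = delta * Y_{t-1} + u_t (no constant)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    ylag = y[:-1]
    delta = sum(a * b for a, b in zip(ylag, dy)) / sum(a * a for a in ylag)
    resid = [d - delta * x for d, x in zip(dy, ylag)]
    s2 = sum(e * e for e in resid) / (len(dy) - 1)
    se = math.sqrt(s2 / sum(a * a for a in ylag))
    return delta / se

random.seed(3)
rw = [0.0]   # random walk: should NOT reject the unit root
ar = [0.0]   # stationary AR(1) with rho = 0.5: should reject decisively
for _ in range(500):
    rw.append(rw[-1] + random.gauss(0, 1))
    ar.append(0.5 * ar[-1] + random.gauss(0, 1))

t_rw = df_tstat(rw)   # near zero or mildly negative
t_ar = df_tstat(ar)   # strongly negative
```

The stationary series produces a t-statistic far below any DF critical value, while the random walk does not; this is the pattern the SHAZAM output below exhibits for PPI and M1.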
Dickey-Fuller Test for Our Levels Data: PPI

|_coint ppiaco m1 employ
...NOTE..SAMPLE RANGE SET TO: 1, 242
...NOTE..TEST LAG ORDER AUTOMATICALLY SET
TOTAL NUMBER OF OBSERVATIONS = 242

VARIABLE : PPIACO
DICKEY-FULLER TESTS - NO.LAGS = 14   NO.OBS = 227

NULL HYPOTHESIS          TEST STATISTIC   ASY. CRITICAL VALUE 10%
-----------------------------------------------------------------
CONSTANT, NO TREND
A(1)=0  T-TEST             -0.46372           -2.57
A(0)=A(1)=0                 2.5444             3.78
AIC = -1.298   SC = -1.057
-----------------------------------------------------------------
CONSTANT, TREND
A(1)=0  T-TEST             -2.7258            -3.13
A(0)=A(1)=A(2)=0            4.1554             4.03
A(1)=A(2)=0                 3.7243             5.34
AIC = -1.323   SC = -1.067
-----------------------------------------------------------------
Dickey-Fuller Test for Our Levels Data: M1

VARIABLE : M1
DICKEY-FULLER TESTS - NO.LAGS = 12   NO.OBS = 229

NULL HYPOTHESIS          TEST STATISTIC   ASY. CRITICAL VALUE 10%
-----------------------------------------------------------------
CONSTANT, NO TREND
A(1)=0  T-TEST             -1.5324            -2.57
A(0)=A(1)=0                 1.8752             3.78
AIC = 2.678   SC = 2.888
-----------------------------------------------------------------
CONSTANT, TREND
A(1)=0  T-TEST             -1.9984            -3.13
A(0)=A(1)=A(2)=0            2.2216             4.03
A(1)=A(2)=0                 2.6252             5.34
AIC = 2.673   SC = 2.898
-----------------------------------------------------------------
Dickey-Fuller Test on the First Difference: PPI

VARIABLE : DIFFPPI
DICKEY-FULLER TESTS - NO.LAGS = 14   NO.OBS = 226

NULL HYPOTHESIS          TEST STATISTIC   ASY. CRITICAL VALUE 10%
-----------------------------------------------------------------
CONSTANT, NO TREND
A(1)=0  T-TEST             -4.2399            -2.57
A(0)=A(1)=0                 8.9971             3.78
AIC = -1.299   SC = -1.057
-----------------------------------------------------------------
CONSTANT, TREND
A(1)=0  T-TEST             -4.0255            -3.13
A(0)=A(1)=A(2)=0            5.9875             4.03
A(1)=A(2)=0                 8.9725             5.34
AIC = -1.291   SC = -1.033
-----------------------------------------------------------------
Trend Stationary vs Difference Stationary

To remove the problem of trend in the data, we frequently add t as an explanatory variable to the regression. This is legitimate only if the trend is deterministic and not stochastic.

For the model to be a trend stationary process (TSP),

û_t = Y_t − β̂_0 − β̂_1·t

must be white noise.

A series is difference stationary (DSP) if the data are generated by the model

Y_t = Y_{t−1} + β + ε_t.

This model has a unit root, with positive drift if β > 0.
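A sketch of the TSP case: generate Y_t = β_0 + β_1·t + white noise, fit the linear trend by OLS, and check that the detrended residuals show essentially no autocorrelation (the coefficients, seed, and sample size are illustrative assumptions).

```python
import random

# Trend-stationary data: deterministic trend plus white noise.
random.seed(4)
n = 500
t = list(range(n))
y = [1.0 + 0.5 * ti + random.gauss(0, 1) for ti in t]

# OLS of y on (1, t): remove the deterministic trend.
tbar, ybar = sum(t) / n, sum(y) / n
b1 = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
      / sum((ti - tbar) ** 2 for ti in t))
b0 = ybar - b1 * tbar
resid = [yi - b0 - b1 * ti for ti, yi in zip(t, y)]

# Lag-1 autocorrelation of the residuals: near zero for a TSP.
rbar = sum(resid) / n
r1 = (sum((resid[i] - rbar) * (resid[i + 1] - rbar) for i in range(n - 1))
      / sum((r - rbar) ** 2 for r in resid))
```

For a DSP series this detrending does not help: the residuals would still wander, and differencing is the appropriate cure.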
Spurious Regression

We regress two time-series variables on each other and get a large coefficient of determination. Granger & Newbold suggest that if the coefficient of determination for a regression is larger than the Durbin-Watson statistic, we should worry about spurious regression: what we may be observing is two random walks with positive drift.
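The Granger & Newbold warning sign is easy to reproduce: regress one of two independent random walks with drift on the other, and R² comes out far above the Durbin-Watson statistic (all simulated; the drift, seed, and sample size are illustrative assumptions).

```python
import random

# Two INDEPENDENT random walks with positive drift.
random.seed(5)
x, y = [0.0], [0.0]
for _ in range(500):
    x.append(x[-1] + 0.2 + random.gauss(0, 1))
    y.append(y[-1] + 0.2 + random.gauss(0, 1))

# Simple OLS of y on x.
n = len(y)
xbar, ybar = sum(x) / n, sum(y) / n
b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
     / sum((xi - xbar) ** 2 for xi in x))
a = ybar - b * xbar
e = [yi - a - b * xi for xi, yi in zip(x, y)]

ss_res = sum(ei ** 2 for ei in e)
ss_tot = sum((yi - ybar) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot                              # large
dw = sum((e[t] - e[t - 1]) ** 2 for t in range(1, n)) / ss_res  # tiny
```

Despite x having no causal connection to y, the shared drift produces a high R², while the highly persistent residuals drive DW toward zero: R² > DW, the spurious-regression red flag.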
Relate the Price Level to the Money Supply

              Coefficients    Standard Error   t Stat
Intercept     78.34396361     0.809216059      96.81464
M1            0.041353791     0.000947232      43.65753

Note: For this regression, R-square = 0.888162964 and DW = 0.028682.
Since R-square > DW, we have to fear a spurious regression.
Dickey-Fuller Test

PPI equation:
              Coefficients    Standard Error   t Stat
Intercept     3.9116703       1.139598437      3.432499
trend         0.0050264       0.002010218      2.500442
ppilag        -0.0387279      0.012275819      -3.15481

M1 equation:
              Coefficients    Standard Error   t Stat
Intercept     2.017339156     1.912609208      1.054758
trend         -0.040007924    0.017782745      -2.24982
m1lag         0.007145816     0.004785534      1.493212
Cointegration

- We can have two variables trending upward in a stochastic fashion, yet seeming to trend together. The movement resembles two dancing partners, each following a random walk, whose random walks seem to be in unison.
- Such synchrony is intuitively the idea behind cointegrated time series.
Cointegration

We may have two variables, Y and X, both of which are I(1). Despite this, a linear combination of the two variables may be stationary. More specifically, if

u_t = Y_t − β_1 − β_2·X_t

is I(0), white noise, then we say the variables Y and X are cointegrated.
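A sketch of a cointegrated pair: X is a random walk (I(1)) and Y = β_1 + β_2·X plus stationary noise, so although both series wander, the OLS residual u_t = Y_t − β̂_1 − β̂_2·X_t is stationary (the coefficients 2 and 0.5, the seed, and the sample size are illustrative assumptions).

```python
import random

# X is I(1); Y is tied to X by a long-run relation plus white noise.
random.seed(6)
x = [0.0]
for _ in range(500):
    x.append(x[-1] + random.gauss(0, 1))
y = [2.0 + 0.5 * xi + random.gauss(0, 1) for xi in x]

# OLS cointegrating regression of Y on X.
n = len(y)
xbar, ybar = sum(x) / n, sum(y) / n
b2 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b1 = ybar - b2 * xbar
u = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]

# Lag-1 autocorrelation of the residuals: near 0 for a cointegrated pair,
# near 1 if the equilibrium error were itself a random walk.
ubar = sum(u) / n
r1 = (sum((u[t] - ubar) * (u[t + 1] - ubar) for t in range(n - 1))
      / sum((ui - ubar) ** 2 for ui in u))
```

Both Y and X fail a stationarity check individually, yet the residual passes: that is the cointegration property the next slides test for.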
Cointegration

- We need to check the residuals from our regression to see if they are I(0).
- If the residuals are I(0), i.e. stationary, the traditional regression methodology (including t and F tests) that we have learned so far is applicable to data involving time series.
Tests for Cointegration

1. We can apply the Dickey-Fuller test to the residuals.
2. Cointegrating Regression Durbin-Watson (CRDW) test: we compute the DW statistic for our regression, but test d = 0 (rather than d = 2, as in the standard DW test). The cutoff values are 0.511, 0.386, and 0.322 for 1%, 5%, and 10% tests.

For our PPI-on-money example, the DW was 0.029. We cannot reject the null hypothesis that the errors have a unit root.
Cointegrating Regression: PPI and M1

COINTEGRATING REGRESSION - CONSTANT, NO TREND
REGRESSAND : PPIACO
R-SQUARE = 0.8882   NO.OBS = 242   DURBIN-WATSON = 0.2868E-01

DICKEY-FULLER TESTS ON RESIDUALS - NO.LAGS = 14   M = 2

                    TEST STATISTIC   ASY. CRITICAL VALUE 10%
------------------------------------------------------------
NO CONSTANT, NO TREND
T-TEST                 -2.4007           -3.04
AIC = -1.200   SC = -0.974
------------------------------------------------------------
Error Correction Model

If we have a cointegrating regression, then there exists a long-run relationship between the variables. We use an error correction model to capture the short-run adjustment to the long-run equilibrium:

ΔY_t = α_0 + α_1·ΔX_t + α_2·û_{t−1} + ε_t
     = α_0 + α_1·ΔX_t + α_2·(Y_{t−1} − β_1 − β_2·X_{t−1}) + ε_t
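A sketch of the two-step procedure on simulated cointegrated data: first estimate the cointegrating regression, then regress ΔY_t on a constant, ΔX_t, and the lagged residual û_{t−1}. The error-correction coefficient α_2 comes out negative, meaning Y adjusts back toward the long run (the data-generating values 0.5 and the AR(1) equilibrium error are illustrative assumptions).

```python
import random

def ols(X, yv):
    """OLS via normal equations with Gaussian elimination (small, well-
    conditioned problems only; no pivoting)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, yv)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Simulate a cointegrated pair whose equilibrium error decays (AR 0.5).
random.seed(7)
x, u, y = [0.0], [0.0], [2.0]
for _ in range(500):
    x.append(x[-1] + random.gauss(0, 1))        # X is I(1)
    u.append(0.5 * u[-1] + random.gauss(0, 1))  # stationary equilibrium error
    y.append(2.0 + 0.5 * x[-1] + u[-1])         # long run: Y = 2 + 0.5 X

# Step 1: cointegrating regression, save residuals u_hat.
n = len(y)
xbar, ybar = sum(x) / n, sum(y) / n
b2 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b1 = ybar - b2 * xbar
uhat = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]

# Step 2: ECM regression of dY_t on (1, dX_t, u_hat_{t-1}).
rows = [[1.0, x[t] - x[t - 1], uhat[t - 1]] for t in range(1, n)]
dy = [y[t] - y[t - 1] for t in range(1, n)]
a0, a1, a2 = ols(rows, dy)
```

Here α_2 is negative: when Y sits above its long-run value, next period's change pulls it back down, which is exactly the "short-run adjustment to long-run equilibrium" on this slide.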
Error Correction Model: Exchange Rate & Interest Rate

Regression of exchange rate on interest rate:
              Coefficients    Standard Error   t Stat
Intercept     84.4016519      1.934688878      43.62544
BA6M          1.82542313      0.24291891       7.514537

Error correction model:
              Coefficients    Standard Error   t Stat
Intercept     -0.0369947      0.095387016      -0.38784
diffintr      0.7803093       0.153370941      5.087726
Residuals     -0.0153463      0.007787867      -1.97054