Chapter 8
Vector Autoregression and Impulse Response
Functions

8.1 Vector Autoregressions
Consider two sequences {yt } and {zt }, where the time path of {yt } is affected by
current and past realizations of the {zt } sequence. Moreover, the time path of the
{zt } sequence is also affected by current and past realizations of the {yt } sequence.
Assuming that both variables are stationary, we model them symmetrically, for example, in the following system:

yt = b10 − b12 zt + γ11 yt−1 + γ12 zt−1 + εyt    (8.1)
zt = b20 − b21 yt + γ21 yt−1 + γ22 zt−1 + εzt    (8.2)
where {εyt } and {εzt } are uncorrelated white-noise disturbances, and the standard
deviations of εyt and εzt are σy and σz , respectively. We say that Equations 8.1
and 8.2 constitute a first-order vector autoregression (VAR) because the longest lag is equal
to one. These are not reduced-form equations because of the presence of contemporaneous effects. Using matrix algebra we can write the system in its compact form
[ 1    b12 ] [ yt ]   [ b10 ]   [ γ11  γ12 ] [ yt−1 ]   [ εyt ]
[ b21   1  ] [ zt ] = [ b20 ] + [ γ21  γ22 ] [ zt−1 ] + [ εzt ]
or
Bxt = Γ0 + Γ1 xt−1 + εt    (8.3)
where
B = [ 1    b12 ],   Γ0 = [ b10 ],   Γ1 = [ γ11  γ12 ],   xt = [ yt ],   εt = [ εyt ]
    [ b21   1  ]         [ b20 ]         [ γ21  γ22 ]         [ zt ]         [ εzt ]
Another familiar way to express the VAR is its standard form

xt = A0 + A1 xt−1 + et    (8.4)

which is obtained by premultiplying the compact form by B−1 , and where A0 =
B−1Γ0 , A1 = B−1Γ1 , and et = B−1 εt . This last equation can also be written as:
yt = a10 + a11 yt−1 + a12 zt−1 + e1t    (8.5)
zt = a20 + a21 yt−1 + a22 zt−1 + e2t    (8.6)
where each ai0 is element i of the vector A0 , ai j is the element in row i and
column j of the matrix A1 , and eit is element i of the vector et . Equations 8.5
and 8.6 are called the standard-form VAR, while Equations 8.1 and 8.2 are referred
to as the structural VAR.
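The algebra that maps the compact form into the standard form can be checked numerically. The sketch below uses hypothetical coefficient values (b12 = 0.4, b21 = 0.3, and illustrative intercepts and lag coefficients, none of them from the text), with the 2×2 inverse and matrix product hand-coded:

```python
# Sketch: computing A0 = B^-1 Gamma0 and A1 = B^-1 Gamma1 for
# illustrative (hypothetical) structural coefficients.

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(m, n):
    """Matrix product of two conformable lists-of-rows."""
    return [[sum(m[i][k] * n[k][j] for k in range(len(n)))
             for j in range(len(n[0]))] for i in range(len(m))]

B      = [[1.0, 0.4], [0.3, 1.0]]   # hypothetical b12 = 0.4, b21 = 0.3
Gamma0 = [[0.1], [0.2]]             # hypothetical intercepts
Gamma1 = [[0.7, 0.2], [0.2, 0.7]]   # hypothetical lag coefficients

A0 = matmul(inv2(B), Gamma0)        # intercept vector of the standard form
A1 = matmul(inv2(B), Gamma1)        # lag matrix of the standard form
print(A0, A1)
```

The same two helpers reappear whenever a 2×2 system like this one has to be inverted by hand.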
The variance/covariance matrix of the shocks e1t and e2t is defined as

Σ = [ var(e1t)        cov(e1t, e2t) ]
    [ cov(e1t, e2t)   var(e2t)      ]

and because the elements in Σ do not depend on time, we can also write the matrix Σ
as

Σ = [ σ1²  σ12 ]
    [ σ21  σ2² ]

where obviously σ1² = var(e1t ), σ2² = var(e2t ), and σ12 = σ21 = cov(e1t , e2t ). Recall
that in the multivariate GARCH we were modeling this matrix, Σ = H, to depend on
time, without focusing on the mean equations of yt and zt .
8.1.1 Stationarity Conditions
For the same reason that in the simple first-order autoregressive model we needed
the autoregressive coefficient to be less than one in absolute value, VAR processes also need some
stationarity conditions to hold. Let's use the lag operator to rewrite Equations 8.5
and 8.6 as

yt = a10 + a11 Lyt + a12 Lzt + e1t    (8.7)
zt = a20 + a21 Lyt + a22 Lzt + e2t    (8.8)

or

(1 − a11 L)yt = a10 + a12 Lzt + e1t    (8.9)
(1 − a22 L)zt = a20 + a21 Lyt + e2t    (8.10)
Solving for yt and zt we obtain¹

yt = [a10 (1 − a22 ) + a12 a20 + (1 − a22 L)e1t + a12 e2t−1 ] / [(1 − a11 L)(1 − a22 L) − a12 a21 L²]    (8.11)

zt = [a20 (1 − a11 ) + a21 a10 + (1 − a11 L)e2t + a21 e1t−1 ] / [(1 − a11 L)(1 − a22 L) − a12 a21 L²]    (8.12)
where, as long as a12 and a21 are not both equal to zero, Equations 8.11 and 8.12
both have the same characteristic equation. Convergence requires that the roots of
the polynomial (1 − a11 L)(1 − a22 L) − a12 a21 L² lie outside the unit circle. For
a formal treatment of the stability conditions see Appendix 6.1 in Enders (2010).
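As a quick numerical check (a sketch, not part of the text), the characteristic polynomial above is quadratic in L, so its roots follow from the quadratic formula:

```python
import cmath

def char_roots(a11, a12, a21, a22):
    """Roots of (1 - a11*L)(1 - a22*L) - a12*a21*L^2, i.e. of
    c*L^2 + b*L + 1 with b = -(a11 + a22) and c = a11*a22 - a12*a21."""
    b = -(a11 + a22)
    c = a11 * a22 - a12 * a21
    if c == 0:                         # polynomial degenerates to 1 + b*L
        return (-1 / b,)
    disc = cmath.sqrt(b * b - 4 * c)
    return ((-b + disc) / (2 * c), (-b - disc) / (2 * c))

def is_stationary(a11, a12, a21, a22):
    """Stationary iff every root lies outside the unit circle."""
    return all(abs(r) > 1 for r in char_roots(a11, a12, a21, a22))

print(char_roots(0.7, 0.2, 0.2, 0.7))    # e.g. roots approx. 2.0 and 1.11
print(is_stationary(0.5, 0.5, 0.5, 0.5)) # a unit-root case: False
```

Using cmath means complex roots are handled transparently; stationarity only depends on their moduli.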
8.1.2 Dynamics in Simulated VAR Models
To build intuition about the dynamics of VAR models, we will simulate four different pairs of sequences yt and zt for different values of ai j (i = 1, 2; j = 0, 1, 2) in Equations 8.5 and 8.6. The first two systems are stationary, while the last two are not.
1. The processes y1t and z1t are generated with
y1t = 0.7y1t−1 + 0.2z1t−1 + e1t    (8.13)
z1t = 0.2y1t−1 + 0.7z1t−1 + e2t    (8.14)
where the roots of the polynomial (1 − a11 L)(1 − a22 L) − a12 a21 L2 are 1.11 and
2.0. Because both lie outside the unit circle, the system is stationary.
2. The processes y2t and z2t are generated with
y2t = 0.5y2t−1 − 0.2z2t−1 + e1t    (8.15)
z2t = −0.2y2t−1 + 0.5z2t−1 + e2t    (8.16)
where the roots of the polynomial (1 − a11 L)(1 − a22 L) − a12 a21 L2 are 1.428 and
3.33. Again, because both lie outside the unit circle, this system is also stationary.
The upper panel in Figure 8.1 illustrates the first two processes (y1t and z1t ),
while the lower panel illustrates the other two (y2t and z2t ). The Stata code to
generate the processes and the graphs is:
clear
set obs 150
set seed 12345
gen time=_n
tsset time
gen white1=invnorm(uniform())
set seed 123456
gen white2=invnorm(uniform())
gen y1 = 0
gen z1 = 0
forvalues iter=2/150 {
replace y1 = 0.7*l.y1 + 0.2*l.z1 + white1 if time == `iter'
replace z1 = 0.2*l.y1 + 0.7*l.z1 + white2 if time == `iter'
}
twoway line y1 z1 time, m(o) c(l) scheme(sj) ///
ytitle( "y1 and z1" ) saving(one, replace)
gen y2 = 0
gen z2 = 0
forvalues iter=2/150 {
replace y2 = + 0.5*l.y2 - 0.2*l.z2 + white1 if time == `iter'
replace z2 = - 0.2*l.y2 + 0.5*l.z2 + white2 if time == `iter'
}
twoway line y2 z2 time, m(o) c(l) scheme(sj) ///
ytitle( "y2 and z2" ) saving(two, replace)
gr combine one.gph two.gph, col(1) ///
iscale(0.7) fysize(100) ///
title( "Simulated Stationary VAR Processes" )

¹ For the details, see Enders (2010), page 301.

Fig. 8.1 Simulated Stationary VAR Processes (upper panel: y1 and z1; lower panel: y2 and z2)
The upper panel of the figure shows a tendency for the sequences to
move together. Because a21 is positive, a large realization of y1t induces a large realization of z1t+1 . Likewise, because a12 is positive, a large realization of z1t induces
a large realization of y1t+1 . Here the two series are positively correlated,
which can be easily checked by looking at the cross correlogram in Figure 8.2,
obtained using Equation 7.11.

Fig. 8.2 Cross correlogram for the simulated processes y1t and z1t
In the lower panel a21 and a12 are both negative, so positive realizations of yt can
be associated with negative realizations of zt+1 and vice versa. In this case the
two series are negatively correlated.²
3. The processes y3t and z3t are generated with
y3t = 0.5y3t−1 + 0.5z3t−1 + e1t    (8.17)
z3t = 0.5y3t−1 + 0.5z3t−1 + e2t    (8.18)
4. The processes y4t and z4t are generated with
y4t = 0.5y4t−1 + 0.5z4t−1 + e1t    (8.19)
z4t = 0.5 + 0.5y4t−1 + 0.5z4t−1 + e2t    (8.20)
The Stata code to obtain cases 3 and 4 is:
gen y3 = 0
gen z3 = 0
forvalues iter=2/150 {
replace y3 = + 0.5*l.y3 + 0.5*l.z3 + white1 if time == `iter'
replace z3 = + 0.5*l.y3 + 0.5*l.z3 + white2 if time == `iter'
}
twoway line y3 z3 time, m(o) c(l) scheme(sj) ///
ytitle( "y3 and z3" ) saving(three, replace)
gen y4 = 0
gen z4 = 0
forvalues iter=2/150 {
replace y4 = 0.0 + 0.5*l.y4 + 0.5*l.z4 + white1 if time == `iter'
replace z4 = 0.5 + 0.5*l.y4 + 0.5*l.z4 + white2 if time == `iter'
}
twoway line y4 z4 time, m(o) c(l) scheme(sj) ///
ytitle( "y4 and z4" ) saving(four, replace)
gr combine three.gph four.gph, col(1) ///
iscale(0.7) fysize(100) ///
title( "Simulated Nonstationary VAR Processes" )

² Please see the end of the chapter for the Stata code to generate the cross correlogram in Figure 8.2.

Fig. 8.3 Simulated Nonstationary VAR Processes (upper panel: y3 and z3; lower panel: y4 and z4)
Figure 8.3 shows two processes with a unit root. There is little tendency in these
series (upper and lower panels) to revert to a constant long-run value. The upper
panel shows a multivariate generalization of the random walk model. The lower
panel shows how the value of a10 = 0.5 acts as a drift for both of the series. The
drift appears as a deterministic trend that makes these series nonstationary, along
with the generalized random walk process.
8.1 Vector Autogregressions
95
8.1.3 Estimation
8.1.3.1 Forecasting
Consider the simple first-order model presented in Equation 8.4 and reproduced here:

xt = A0 + A1 xt−1 + et .    (8.21)

Once the coefficients A0 and A1 are estimated using data up to period T , it is easy to
obtain the one-step-ahead forecast as:

ET xT +1 = A0 + A1 xT .    (8.22)

Likewise, the two-step-ahead forecast can be obtained recursively using:

ET xT +2 = A0 + A1 ET xT +1 = A0 + A1 [A0 + A1 xT ].    (8.23)

However, because VAR models are usually overparameterized, this forecast may be
unreliable. One alternative is to drop statistically insignificant coefficients and estimate the remaining model using Seemingly Unrelated Regressions (SUR).
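The recursion in Equations 8.22 and 8.23 is short enough to sketch directly. The numbers below (A0, A1, and the last observation xT) are hypothetical, not estimates from data:

```python
# Sketch of the forecast recursion in Equations 8.22-8.23 with
# hypothetical coefficient values.

def forecast(A0, A1, xT, h):
    """h-step-ahead forecast: iterate E_T x_{T+h} = A0 + A1 E_T x_{T+h-1}."""
    x = list(xT)
    for _ in range(h):
        x = [A0[i] + sum(A1[i][j] * x[j] for j in range(len(x)))
             for i in range(len(x))]
    return x

A0 = [0.1, 0.2]                  # hypothetical intercepts
A1 = [[0.7, 0.2], [0.2, 0.7]]    # hypothetical lag matrix
xT = [1.0, 0.5]                  # last observed [y, z]

print(forecast(A0, A1, xT, 1))   # one-step-ahead forecast (Equation 8.22)
print(forecast(A0, A1, xT, 2))   # two-step-ahead forecast (Equation 8.23)
```

For a stationary system the iterates converge to the unconditional mean (I − A1)⁻¹A0 as h grows.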
8.1.3.2 Identification
There are basically two representations of the VAR: the structural VAR, as presented in Equations 8.1 and 8.2, and the standard-form VAR, as presented in Equations 8.5 and 8.6. Because of the feedback in the VAR process, the structural VAR
cannot be estimated directly. Notice that zt is correlated with the error term εyt and
that yt is correlated with the error term εzt . This is a problem because standard estimation techniques require the regressors to be uncorrelated with the error term.
However, this problem does not arise in the standard-form VAR, where OLS can be
used to obtain estimates of A0 and A1 . Then the key question is whether we can use
the OLS estimates of A0 and A1 to retrieve the structural VAR estimates.

The short answer is no, unless we are willing to impose restrictions on the structural VAR equations. The reason is simple: in the structural VAR we need to estimate
eight coefficients (b10 , b20 , b12 , b21 , γ11 , γ12 , γ21 , and γ22 ) plus two standard
deviations (σy and σz ), while in the standard-form VAR there are only six coefficient
estimates (a10 , a20 , a11 , a12 , a21 , and a22 ) plus three additional values we can calculate
(var(e1t ), var(e2t ), and cov(e1t , e2t )). That is, the standard form yields 9 parameters while the
structural form needs 10. Hence, we say that the structural VAR is underidentified.
A simple identification strategy is to impose a restriction such as b21 = 0, so that
the structural VAR becomes

yt = b10 − b12 zt + γ11 yt−1 + γ12 zt−1 + εyt    (8.24)
zt = b20 + γ21 yt−1 + γ22 zt−1 + εzt    (8.25)
96
8 Vector Autogregression and Impulse Response Functions
This means that zt has a contemporaneous effect on yt , but yt can only affect the
{zt } sequence with a one-period lag. The resulting system is exactly identified. B−1 is

B−1 = [ 1  −b12 ]
      [ 0    1  ]
Premultiplying the structural VAR system by B−1 yields

[ yt ]   [ 1  −b12 ] [ b10 ]   [ 1  −b12 ] [ γ11  γ12 ] [ yt−1 ]   [ 1  −b12 ] [ εyt ]
[ zt ] = [ 0    1  ] [ b20 ] + [ 0    1  ] [ γ21  γ22 ] [ zt−1 ] + [ 0    1  ] [ εzt ]

or

[ yt ]   [ b10 − b12 b20 ]   [ γ11 − b12 γ21   γ12 − b12 γ22 ] [ yt−1 ]   [ εyt − b12 εzt ]
[ zt ] = [      b20      ] + [      γ21              γ22      ] [ zt−1 ] + [      εzt      ]
After estimating A0 and A1 we have the following equations:

a10 = b10 − b12 b20    (8.26)
a20 = b20    (8.27)
a11 = γ11 − b12 γ21    (8.28)
a12 = γ12 − b12 γ22    (8.29)
a21 = γ21    (8.30)
a22 = γ22    (8.31)
e1t = εyt − b12 εzt    (8.32)
e2t = εzt    (8.33)
var(e1 ) = σy² + b12² σz²    (8.34)
var(e2 ) = σz²    (8.35)
cov(e1 , e2 ) = −b12 σz²    (8.36)
that can be used to obtain b10 , b12 , γ11 , γ12 , b20 , γ21 , γ22 , σy2 , and σz2 .
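These identification equations can be solved in closed form. A hedged sketch of the algebra, with hypothetical structural values used only for a round-trip check (build the reduced form, then recover the structural parameters under b21 = 0):

```python
# Sketch: solving Equations 8.26-8.36 for the structural parameters
# under the restriction b21 = 0; all numbers below are hypothetical.

def recover_structural(a10, a20, a11, a12, a21, a22, v1, v2, c12):
    b12 = -c12 / v2                    # cov(e1,e2) = -b12 * sigma_z^2
    sig_z2 = v2                        # var(e2)   = sigma_z^2
    sig_y2 = v1 - b12 ** 2 * sig_z2    # var(e1)   = sigma_y^2 + b12^2 sigma_z^2
    b20 = a20
    b10 = a10 + b12 * b20              # a10 = b10 - b12 * b20
    g21, g22 = a21, a22
    g11 = a11 + b12 * g21              # a11 = g11 - b12 * g21
    g12 = a12 + b12 * g22              # a12 = g12 - b12 * g22
    return b10, b12, b20, g11, g12, g21, g22, sig_y2, sig_z2

# Round trip: start from known structural values, build the reduced form,
# and verify the structural values are recovered.
b10, b12, b20 = 0.3, 0.5, 0.2
g11, g12, g21, g22 = 0.7, 0.2, 0.2, 0.7
sig_y2, sig_z2 = 1.0, 0.25
reduced = (b10 - b12 * b20, b20,
           g11 - b12 * g21, g12 - b12 * g22, g21, g22,
           sig_y2 + b12 ** 2 * sig_z2, sig_z2, -b12 * sig_z2)
print(recover_structural(*reduced))
```

Note the order of solution: b12 comes first, from the covariance restriction, and everything else follows recursively.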
8.2 The Impulse Response Function
If we write the standard-form VAR in matrix form we have

[ yt ]   [ a10 ]   [ a11  a12 ] [ yt−1 ]   [ e1t ]
[ zt ] = [ a20 ] + [ a21  a22 ] [ zt−1 ] + [ e2t ]
Recall that every autoregressive process has a moving-average representation. The
same is true for VAR, where the moving-average representation is called the vector
moving average (VMA). The idea in the moving average representations is to write
the current values of yt and zt in terms of the current and past values of the shocks.
8.2 The Impulse Response Function
97
The moving-average representation of the above system is:³

[ yt ]   [ ȳ ]    ∞  [ a11  a12 ]^i [ e1t−i ]
[ zt ] = [ z̄ ] +  Σ  [ a21  a22 ]   [ e2t−i ]
                 i=0
While this VMA is written in terms of e1t−i and e2t−i , we can use et = B−1 εt to
write the VMA in terms of εyt and εzt :

[ yt ]   [ ȳ ]                     ∞  [ a11  a12 ]^i [   1   −b12 ] [ εyt−i ]
[ zt ] = [ z̄ ] + [1/(1 − b12 b21)] Σ  [ a21  a22 ]   [ −b21    1  ] [ εzt−i ]
                                  i=0
A simplified way of writing this moving-average representation is:

[ yt ]   [ ȳ ]    ∞  [ φ11(i)  φ12(i) ] [ εyt−i ]
[ zt ] = [ z̄ ] +  Σ  [ φ21(i)  φ22(i) ] [ εzt−i ]
                 i=0
where the φjk(i) are the elements of the matrix φi :

φi = [1/(1 − b12 b21)] A1^i [   1   −b12 ]
                            [ −b21    1  ]
The moving-average representation is important because it allows us to examine the
interaction between the {yt } and {zt } sequences. The coefficients φi are used to
trace the effects of the shocks εyt and εzt on the entire time paths of the {yt } and {zt }
sequences. The impact multipliers are the four elements φjk(0). Likewise, the φjk(1) are
the one-period responses, and so on. The impulse response functions are the four sets
of coefficients φ11(i), φ12(i), φ21(i), and φ22(i). They are usually presented as a
plot of φjk(i) against i.
To illustrate the impulse response functions, we will use the same simulated sequences as before. That is, the processes y1t and z1t generated with

[ y1t ]   [ 0.7  0.2 ] [ y1t−1 ]   [ e1t ]
[ z1t ] = [ 0.2  0.7 ] [ z1t−1 ] + [ e2t ]
and the processes y2t and z2t generated with

[ y2t ]   [  0.5  −0.2 ] [ y2t−1 ]   [ e1t ]
[ z2t ] = [ −0.2   0.5 ] [ z2t−1 ] + [ e2t ]
To keep things simple we impose the restrictions b12 = b21 = 0, so that the structural VAR is the same as the standard-form VAR. Figure 8.4 shows the responses
of the sequences {y1t } and {z1t } to e1t and e2t . Likewise, Figure 8.5 shows the responses of the sequences {y2t } and {z2t } to e1t and e2t . Notice the symmetry in the
impulse response functions; it comes directly from the symmetry in the VARs.
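With b12 = b21 = 0, the φ matrices reduce to powers of A1, so the impulse responses can be traced by repeated multiplication. A sketch for the first system (the A1 of Equations 8.13 and 8.14):

```python
# With b12 = b21 = 0, phi(i) = A1^i: trace the response of [y, z]
# to a one-unit shock in e1 by repeated matrix-vector multiplication.

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

A1 = [[0.7, 0.2], [0.2, 0.7]]   # the stationary system of Equations 8.13-8.14
resp = [1.0, 0.0]               # impact response to a unit e1 shock
irf = [resp]
for i in range(20):
    resp = matvec(A1, resp)
    irf.append(resp)

print(irf[0])   # impact multipliers: [1.0, 0.0]
print(irf[1])   # one-period responses: [0.7, 0.2]
```

Because the system is stationary, the responses die out (irf[20] is already close to zero), and the symmetry of A1 is what produces the mirrored panels in Figures 8.4 and 8.5.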
³ This one is obtained by backward iteration and assuming that the stability conditions are met.

Fig. 8.4 Impulse Response Functions for the Simulated Processes y1 and z1 (responses of y1 and z1 to e1t and e2t shocks)
8.3 Estimation in Stata
8.3.1 Vector Autoregressions Models
Estimation of VAR models in Stata is initially a simple task, but you have to make
sure you understand the options because Stata may impose different identifying
restrictions in the structural VAR. Consider the following example
use http://www.stata-press.com/data/r11/lutkepohl2, clear
tsset
var dln_inv dln_inc if qtr>=tq(1961q2), lags(1/2)
Vector autoregression

Sample:  1961q2 - 1982q4                        No. of obs    =        87
Log likelihood =   419.3878                     AIC           = -9.411213
FPE            =   2.80e-07                     HQIC          = -9.297082
Det(Sigma_ml)  =   2.23e-07                     SBIC          = -9.127776

Equation           Parms      RMSE     R-sq      chi2     P>chi2
----------------------------------------------------------------
dln_inv              5      .043968   0.0851   8.092091   0.0883
dln_inc              5      .011489   0.0980   9.452706   0.0507
----------------------------------------------------------------
------------------------------------------------------------------------------
             |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
dln_inv      |
     dln_inv |
         L1. |  -.2325515   .1046414    -2.22   0.026    -.4376449   -.0274582
         L2. |  -.1221506   .1056724    -1.16   0.248    -.3292646    .0849634
     dln_inc |
         L1. |    .748249    .401704     1.86   0.063    -.0390763    1.535574
         L2. |   .3737625   .4001908     0.93   0.350    -.4105971    1.158122
       _cons |   .0001197   .0112041     0.01   0.991      -.02184    .0220794
-------------+----------------------------------------------------------------
dln_inc      |
     dln_inv |
         L1. |    .058984   .0273432     2.16   0.031     .0053924    .1125755
         L2. |   .0546247   .0276126     1.98   0.048     .0005051    .1087443
     dln_inc |
         L1. |   .0331042   .1049667     0.32   0.752    -.1726266    .2388351
         L2. |   .0695935   .1045713     0.67   0.506    -.1353624    .2745494
       _cons |   .0150516   .0029277     5.14   0.000     .0093135    .0207898
------------------------------------------------------------------------------

Fig. 8.5 Impulse Response Functions for the Simulated Processes y2 and z2 (responses of y2 and z2 to e1t and e2t shocks)
There are a number of selection-order statistics to assist in fitting the VAR of the correct order. A useful command in Stata is varsoc, which computes the following
four information criteria: the final prediction error (FPE), Akaike's information criterion (AIC), the Hannan and Quinn information criterion (HQIC), and
Schwarz's Bayesian information criterion (SBIC).
varsoc dln_inv dln_inc
Selection-order criteria
Sample:  1961q2 - 1982q4                        Number of obs =        87
+---------------------------------------------------------------------------+
|lag |    LL      LR      df    p      FPE       AIC       HQIC      SBIC   |
|----+----------------------------------------------------------------------|
|  0 | 410.565                        2.9e-07  -9.3923   -9.36948* -9.33562*|
|  1 | 415.882  10.633*   4  0.031   2.8e-07* -9.42257* -9.35409  -9.2525   |
|  2 | 419.388  7.0124    4  0.135   2.8e-07  -9.41121  -9.29708  -9.12778  |
|  3 |  422.34  5.9045    4  0.206   2.9e-07  -9.38713  -9.22734  -8.99031  |
|  4 | 426.595  8.5107    4  0.075   2.9e-07  -9.393    -9.18756  -8.88281  |
+---------------------------------------------------------------------------+
Endogenous:  dln_inv dln_inc
Exogenous:   _cons

Fig. 8.6 Impulse Response Functions (orthogonalized IRFs with 95% CIs; graphs by irfname, impulse variable, and response variable)
8.3.2 Impulse Response Function
Note that Stata reports the standard form VAR and will not be able to obtain the impulse response function (IRF) without further assumptions. The following command
will estimate the same VAR and will provide the orthogonalized IRF:
set scheme sj
varbasic dln_inv dln_inc, lags(1/2)
The orthogonalized impulse response function imposes the Cholesky decomposition on the structure of the errors. In simple words, it sets b21 = 0 when
going from the standard-form VAR to the structural VAR. Notice that the order in
which you place the variables in the varbasic command is important because a
different order will impose a different restriction on the structure of the errors. Try
estimating:

varbasic dln_inc dln_inv, lags(1/2)

and you will see that the standard-form VAR as reported in the output is exactly the
same as before; however, the IRFs are different because of the different restriction. If
you are willing to impose b21 = b12 = 0, then the IRF can be obtained using:
irf graph irf
Fig. 8.7 Impulse Response Functions (simple IRFs with 95% CIs; graphs by irfname, impulse variable, and response variable)
The resulting IRF is presented in Figure 8.7. Obviously, in this case the order is
not important, and by construction the size of the shock is one. Stata can estimate
different structural VARs depending on the restrictions you are willing to impose;
please see the manual for more details.
8.3.3 Stability Conditions
If you want to test whether the stability conditions hold, Stata has the command
varstable that will calculate the eigenvalues for the stability conditions. Consider the following example using the stationary processes {y1t } and {z1t } that we
simulated before
var y1 z1, lags(1/2)
(output omitted)
varstable
Eigenvalue stability condition
+----------------------------------------+
|        Eigenvalue        |   Modulus   |
|--------------------------+-------------|
|    .9166213              |   .916621   |
|    .6722131              |   .672213   |
|   -.222676               |   .222676   |
|   -.02350165             |   .023502   |
+----------------------------------------+
All the eigenvalues lie inside the unit circle.
VAR satisfies stability condition.
Now, consider the nonstationary processes {y4t } and {z4t } that we simulated earlier
var y4 z4, lags(1/2)
(output omitted)
varstable
Eigenvalue stability condition
+----------------------------------------+
|        Eigenvalue        |   Modulus   |
|--------------------------+-------------|
|    1.000533              |   1.00053   |
|   -.3468524              |   .346852   |
|    .2712863              |   .271286   |
|    .00696671             |   .006967   |
+----------------------------------------+
At least one eigenvalue is at least 1.0.
VAR does not satisfy stability condition.
Note that when the system is not stationary, the IRF will not dissipate over time
because by definition shocks will have a permanent effect on the series.
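What varstable reports can be mimicked for a first-order VAR: the system is stable when all eigenvalues of A1 lie inside the unit circle. A hedged sketch (closed-form 2×2 eigenvalues from the trace and determinant, not Stata's algorithm):

```python
import math

def eig2(m):
    """Eigenvalues of a 2x2 matrix via the trace/determinant formula."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        s = math.sqrt(disc)
        return complex((tr + s) / 2), complex((tr - s) / 2)
    s = math.sqrt(-disc)
    return complex(tr / 2, s / 2), complex(tr / 2, -s / 2)

stable = [[0.7, 0.2], [0.2, 0.7]]   # Equations 8.13-8.14
unit   = [[0.5, 0.5], [0.5, 0.5]]   # Equations 8.17-8.18

print([abs(l) for l in eig2(stable)])   # moduli approx. 0.9 and 0.5: stable
print([abs(l) for l in eig2(unit)])     # moduli 1.0 and 0.0: unit root
```

The moduli 0.9 and 0.5 are the reciprocals of the characteristic roots 1.11 and 2.0 found earlier, so the eigenvalue criterion (inside the unit circle) and the root criterion (outside it) agree.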
Finally, including exogenous variables in the estimation is simple.
var dln_inc dln_inv, lags(1/2) exog(dln_consump)
(output omitted)
The key assumptions are: (1) we know the exact form in which the exogenous variable enters the system; and (2) we rule out any feedback from the endogenous variables
to the exogenous variable.
8.3.4 Granger Causality
Consider the following model:
yt = a10 + Σ_{i=1}^{p} a11(i) L^i yt + Σ_{i=1}^{p} a12(i) L^i zt + e1t    (8.37)

zt = a20 + Σ_{i=1}^{p} a21(i) L^i yt + Σ_{i=1}^{p} a22(i) L^i zt + e2t    (8.38)
One causality test is whether the lags of one variable enter into the equation for another variable. In the two-equation model above we say that {yt } does not Granger
cause {zt } if and only if all the coefficients on the lags of yt in the zt equation are
equal to zero. Hence, if {yt } does not improve the forecasting performance of {zt },
then {yt } does not Granger cause {zt }. Under the assumption that all the VAR variables are stationary, a direct way to test Granger causality is to use the standard F-test
of the restriction

a21(1) = a21(2) = a21(3) = · · · = a21(p) = 0    (8.39)

Consider the following example in Stata:

use http://www.stata-press.com/data/r11/lutkepohl2
var dln_inv dln_inc dln_consump
(output omitted)
vargranger
Granger causality Wald tests
+------------------------------------------------------------------+
|          Equation           Excluded |   chi2     df Prob > chi2 |
|--------------------------------------+---------------------------|
|           dln_inv            dln_inc |  .55668    2    0.757     |
|           dln_inv        dln_consump |  1.9443    2    0.378     |
|           dln_inv                ALL |  7.3184    4    0.120     |
|--------------------------------------+---------------------------|
|           dln_inc            dln_inv |  6.2466    2    0.044     |
|           dln_inc        dln_consump |  5.1029    2    0.078     |
|           dln_inc                ALL |  13.087    4    0.011     |
|--------------------------------------+---------------------------|
|       dln_consump            dln_inv |  4.2446    2    0.120     |
|       dln_consump            dln_inc |  16.275    2    0.000     |
|       dln_consump                ALL |  21.717    4    0.000     |
+------------------------------------------------------------------+
which can also be carried out using a simple F-test:
test [dln_inv]L.dln_inc [dln_inv]L2.dln_inc

 ( 1)  [dln_inv]L.dln_inc = 0
 ( 2)  [dln_inv]L2.dln_inc = 0

           chi2(  2) =    0.56
         Prob > chi2 =    0.7570
A large p-value means we cannot reject the null hypothesis that the lags of dln_inc can be excluded from the dln_inv equation; that is, we find no evidence that dln_inc Granger causes dln_inv.
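The mechanics of the restriction test can be sketched in pure Python (an illustrative simulation, not Stata's vargranger implementation): generate a bivariate system in which y feeds into z but z does not feed into y, then compare the restricted and unrestricted residual sums of squares.

```python
import random

def ols_ssr(y, X):
    """OLS via the normal equations (Gauss-Jordan, no pivoting; fine for
    this small, well-conditioned problem); returns the residual SS."""
    n, k = len(y), len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for r in range(k):
            if r != i:
                f = A[r][i]
                A[r] = [A[r][j] - f * A[i][j] for j in range(k)]
                b[r] -= f * b[i]
    beta = b
    return sum((y[t] - sum(beta[j] * X[t][j] for j in range(k))) ** 2
               for t in range(n))

random.seed(1)
T = 400
y, z = [0.0, 0.0], [0.0, 0.0]
for t in range(2, T):
    y.append(0.5 * y[t - 1] + random.gauss(0, 1))                 # no z in y
    z.append(0.3 * z[t - 1] + 0.4 * y[t - 1] + random.gauss(0, 1))

def granger_F(cause, effect, p=2):
    """F statistic for 'cause does not Granger-cause effect' with p lags."""
    dep = effect[p:]
    Xu = [[1.0] + [cause[t - i] for i in range(1, p + 1)]
                + [effect[t - i] for i in range(1, p + 1)]
          for t in range(p, T)]
    Xr = [[1.0] + [effect[t - i] for i in range(1, p + 1)]
          for t in range(p, T)]
    ssr_u, ssr_r = ols_ssr(dep, Xu), ols_ssr(dep, Xr)
    return ((ssr_r - ssr_u) / p) / (ssr_u / (len(dep) - 2 * p - 1))

print(granger_F(y, z))   # large: y helps forecast z in this DGP
print(granger_F(z, y))   # in this DGP z should not help forecast y
```

The first statistic is large because the 0.4 coefficient makes lagged y genuinely informative about z; the second hovers near its null distribution.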
8.4 Supporting .do files
For Figures 8.2 and 8.8:
clear
set obs 100000
set seed 12345
gen time=_n
tsset time
gen white1=invnorm(uniform())
set seed 123456
gen white2=invnorm(uniform())
gen y1 = 0
gen z1 = 0
forvalues iter=2/100000 {
replace y1 = 0.7*l.y1 + 0.2*l.z1 + white1 if time == `iter'
replace z1 = 0.2*l.y1 + 0.7*l.z1 + white2 if time == `iter'
}
xcorr y1 z1, lags(40)
gen y2 = 0
gen z2 = 0
forvalues iter=2/100000 {
replace y2 = + 0.5*l.y2 - 0.2*l.z2 + white1 if time == `iter'
replace z2 = - 0.2*l.y2 + 0.5*l.z2 + white2 if time == `iter'
}
xcorr y2 z2, lags(40)
Figure 8.8 presents the cross correlogram for the simulated processes y2t and z2t .
Fig. 8.8 Cross correlogram for the simulated processes y2t and z2t