Bootstrap Inference for Impulse Response Functions in Factor-Augmented Vector Autoregressions

Yohei Yamamoto
University of Alberta, School of Business

January 2010; this version: May 2011 (in progress)
Abstract

This paper investigates structural identification and residual-based bootstrap inference techniques for impulse response functions (IRFs) in factor-augmented vector autoregressions (FAVARs). It is well known that parameters are not statistically identified in reduced-form factor models and that principal components estimates are randomly rotated. To address this problem, we first propose structural identification schemes which explicitly account for the factor rotation. The proposed identifying restrictions are based on structural parameters or IRFs and are common in many empirical studies. Then we consider two bootstrap procedures for the identified IRFs: A) bootstrap with factor estimation and B) bootstrap without factor estimation. Although both procedures are asymptotically valid to first order, errors in the factor estimation produce lower-order discrepancies. Monte Carlo simulations indicate that A performs well in all cases. B may produce smaller coverage ratios than the nominal level, especially when N is small compared to T. The asymptotic normal inference also tends to produce smaller coverage in finite samples and can be quite erratic. These results suggest that the uncertainty associated with factor estimation can be relevant for structural IRF estimates in FAVARs.

JEL Classification Numbers: C14, C22
Keywords: structural identification, principal components, coverage ratio, factor estimation errors

University of Alberta, School of Business, 2-37 Business Building, Edmonton, AB, Canada T6G 2R6 ([email protected]).
1 Introduction
Fast-growing attention has recently been paid to factor analysis in macroeconomics and finance. Factor models capture the comovements of large data sets with respect to a handful of common latent factors, and these models have become increasingly useful due to the recent upsurge in computational capacity and data availability, often referred to as the "data-rich environment" (Bernanke and Boivin, 2003). One of the breakthroughs in the field of factor analysis is the factor-augmented vector autoregression (FAVAR), which uses the vector autoregression (VAR) framework to analyze the time series characteristics of the common factors. These factors are considered to drive a large number of data series that cannot directly be accommodated by small-scale VARs.¹ We have lately seen numerous empirical works taking this research in a promising direction. Stock and Watson (2005) provide a comprehensive summary of ongoing FAVAR modeling and estimation. In policy analysis, Bernanke and Boivin (2003), Bernanke et al. (2005), Giannone et al. (2005), Belviso and Milani (2006), and Boivin et al. (2007), among others, apply the technique to US macroeconomic data series and find the device useful for monetary policy analysis. Acconcia and Simonelli (2007) study the effects of economy-wide and sector-specific productivity shocks on the sectoral dynamics of employment. For finance and asset pricing applications, Ang and Piazzesi (2003) and Moench (2008) incorporate a multifactor affine term structure of interest rates, and Ludvigson and Ng (2009) investigate the effects of macro factors on bond market premia fluctuations. Gilchrist et al. (2009) and Boivin et al. (2010) analyze the impact of credit spreads on macroeconomic activity.
For these models, impulse response functions (IRFs) play an important role in the assessment of the dynamic effects of innovations on the variables of interest. When the latent factors are extracted by the popular static principal components (PC) method,² a benchmark inference technique for the coefficients and factors based on the normal approximation is developed by Bai and Ng (2006) in their seminal works. They show that, under certain conditions including √T/N → 0 as N, T → ∞, one can replace the latent factors with their PC estimates in FAVAR models and still rely on the same asymptotic distribution for inference purposes. Given this fact, it is quite straightforward to extend their results to the impulse response estimates; however, one may also anticipate that the errors in latent factor estimation can be relevant in finite samples, especially in cases where √T is much larger than N, i.e., when √T/N → 0 is not appropriate. This motivates us to reexamine a widely used residual-based bootstrap inference for IRFs in FAVARs as an alternative to the normal approximation.

1. This framework also fits the situation where any of the observed variables are imperfect measures of the corresponding theoretical concepts. The same idea in macroeconomics can be traced back to Sims and Sargent (1977), for example. More recent literature includes Stock and Watson (1998, 2002) among others.

2. In contrast to the static PC method, Forni, Hallin, Lippi and Reichlin (2000, 2005) suggest estimating dynamic factors using a dynamic PC technique. Since the static PC method can deal with dynamic factors by stacking them in one vector, we do not take their method into account.
To this end, we first consider structural identification schemes for the IRFs in FAVARs. As Bai and Ng (2010) point out, it is well known that the parameters are not statistically identified in reduced-form factor models and that the PC estimates are randomly rotated. Hence, in order to estimate the individual parameters and IRFs, the identifying restrictions and estimation must explicitly account for this factor rotation. Bai and Ng (2010) propose three sets of parameter restrictions to achieve statistical identification of reduced-form factor models. However, it is also recognized that in structural VARs, meaningful restrictions can be imposed not on the reduced-form parameters but on the structural ones. Following this line, we propose several identification schemes based on structural parameter or IRF restrictions which are common in many empirical studies and still account for the factor rotation.
Next, we investigate the theoretical and finite sample properties of i.i.d. residual-based bootstrap confidence intervals for the identified IRFs. In particular, we focus on the effect of factor estimation uncertainty on the coverage properties. In order to better demonstrate this effect, we compare two bootstrap algorithms: A) bootstrapping with factor estimation and B) bootstrapping without factor estimation. It is shown that, although both are asymptotically valid to first order, errors in factor estimation produce lower-order discrepancies. Monte Carlo simulations indicate that A performs well overall and is of practical use. However, B is not able to capture the effects of the lower-order factor estimation errors and may produce smaller coverage ratios than the nominal level in finite samples. Indeed, our simulation results confirm this finding, especially when N is small compared to T. The asymptotic normal intervals also tend to yield undercoverage and can be quite erratic. These results suggest that the uncertainty associated with factor estimation can be relevant for structural IRF estimates in finite samples.
The rest of the paper is structured as follows. Section 2 introduces the models and regularity conditions. Section 3 discusses identification and estimation methods for the IRFs. We also introduce an extension of the model that incorporates observable factors, as in Bernanke et al. (2005). In Section 4, we propose the bootstrap inference methods, while the asymptotic validity of the methods is given in Section 5. Section 6 assesses the procedures' finite sample properties via simulations using artificial data as well as calibrated models of US macroeconomic data. Section 7 concludes, and the appendices include some technical derivations and details of a bootstrap bias-correction procedure.
Throughout the paper, the following notation is used. The Euclidean norm of a vector x is denoted by ‖x‖. For matrices, the vector-induced norm is used. The symbols "→p" and "→d" represent convergence in probability under the probability measure P and convergence in distribution, respectively. Op(·) and op(·) denote orders of convergence in probability under P. We define P* as the bootstrap probability measure conditional on the original sample. For any bootstrap statistic T*, we write T* →p 0, in probability, or T* = op*(1), in probability, when for all δ > 0, P*(|T*| > δ) = op(1). We write T* = Op*(1), in probability, when for all δ > 0 there exists M > 0 such that lim_{N,T→∞} P[P*(|T*_{NT}| > M) > δ] = 0. We also write T* →d* D, in probability, if, conditional on the original sample with probability converging to one, T* converges in distribution to D under P*. Let δ = min{N, T} and let L be the standard lag operator. Chol[X] denotes the Cholesky factorization of a positive definite matrix X; it returns a lower triangular matrix Z such that ZZ′ = X.
2 Models and Assumptions

2.1 Reduced-form models
Consider the following factor model:

X_t = Λ F_t + u_t,   t = 1, ..., T,   (1)

where X_t is an N × 1 vector of observations and N is the (typically large) number of equations. We assume that X_t is driven by much lower dimensional unobservable factors F_t (r × 1, N ≫ r) with time-invariant unobservable factor loadings Λ = [λ_1 ... λ_N]′ (N × r). u_t = [u_1t ... u_Nt]′ is an N × 1 idiosyncratic shock. We call (1) the observation equation.

In addition, the factors F_t form a VAR with coefficient parameters Φ(L) of order p and an error term e_t (r × 1), so that

F_t = Φ(L) F_{t−1} + e_t.   (2)

Equation (2) is called the VAR equation. Variables written without the "t" subscript denote the entire matrix of observations; for example, X = [X_1 ... X_T]′ is a T × N matrix and F = [F_1 ... F_T]′ is a T × r matrix. We omit intercepts from (1) and (2) to simplify the notation.
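As a concrete illustration of the observation equation (1) and the VAR equation (2), the following sketch simulates a small data set; all dimensions and parameter values are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Simulate the observation equation (1) and the VAR equation (2).
# N, T, r and the parameter values below are hypothetical.
rng = np.random.default_rng(0)
N, T, r = 50, 200, 2

Lam = rng.normal(size=(N, r))            # loadings Lambda (N x r)
Phi = np.array([[0.5, 0.1],
                [0.0, 0.4]])             # stable VAR(1) coefficient matrix

F = np.zeros((T, r))
for t in range(1, T):                    # F_t = Phi F_{t-1} + e_t  (eq. 2)
    F[t] = Phi @ F[t - 1] + rng.normal(size=r)

X = F @ Lam.T + rng.normal(size=(T, N))  # X_t = Lam F_t + u_t  (eq. 1)
print(X.shape)  # (200, 50)
```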
2.2 Structural models

Structural VARs are popularly used to identify the contemporaneous relationships between variables of interest in macroeconomic applications. FAVARs are also able to identify most of these relationships given a particular interpretation of the models (see Bernanke et al., 2005, and Kapetanios and Marcellino, 2006, among others). Stock and Watson (2005) give a comprehensive modeling strategy, hence we take their lead. Using an r × r invertible matrix S, let the structural factor model be defined as

X_t = Λ^s F_t^s + u_t,   (3)

F_t^s = Φ^s(L) F_{t−1}^s + ε_t,   (4)

where Λ^s = ΛS, F_t^s = S⁻¹F_t, Φ^s(L) = S⁻¹Φ(L)S, and ε_t is a structural innovation.

As we will see later in detail, estimation of the structural model is conducted using the reduced-form representation obtained by multiplying (4) by the matrix S from the left, so that

F_t = Φ(L) F_{t−1} + e_t,

where e_t = Sε_t, as is common in the standard structural VAR literature.
2.3 Assumptions

We now state a set of regularity conditions. First, let the data generating processes above be defined on a probability space (Ω, Z, P) and let the following assumptions hold. Note that c < ∞ is some constant.

Assumption 1.
1. The common factors F_t in (1) and (2) satisfy E‖F_t‖⁴ < ∞, and T⁻¹ Σ_{t=1}^T F_t F_t′ →p Σ_F for some r × r positive definite matrix Σ_F;
2. The factor loadings λ_i in (1) satisfy ‖λ_i‖ ≤ c, and N⁻¹ Σ_{i=1}^N λ_i λ_i′ →p Σ_Λ for some r × r positive definite matrix Σ_Λ, as N → ∞.

Assumption 2.
1. For all i and t, E(u_it) = 0 and E|u_it|⁸ ≤ c;
2. E(u_it u_js) = τ_ij,ts with |τ_ij,ts| ≤ |τ_ij| for all (t, s), where N⁻¹ Σ_{i,j=1}^N |τ_ij| ≤ c; |τ_ij,ts| ≤ |τ_ts| for all (i, j), where T⁻¹ Σ_{t,s=1}^T |τ_ts| ≤ c; and (NT)⁻¹ Σ_{i,j,t,s} |τ_ij,ts| ≤ c;
3. For every (t, s), E|N^{−1/2} Σ_{i=1}^N (u_is u_it − E(u_is u_it))|⁴ ≤ c;
4. {λ_i}, {F_t}, and {u_it} are mutually independent groups. Dependence within each group is allowed;
5. For each i, T^{−1/2} Σ_{t=1}^T F_t u_it →d N(0, Γ_{Fu_i}), where Γ_{Fu_i} = plim_{T→∞} T⁻¹ Σ_{t=1}^T Σ_{s=1}^T E(F_t F_s′ u_it u_is).

Assumption 3. E(e_t) = 0; E(e_t e_t′) = Σ_e, an r × r positive definite matrix, and e_t and e_s are independent for s ≠ t; E|e_it e_jt e_kt e_lt| ≤ M for i, j, k, l = 1, ..., r and all t; e_t is independent of the idiosyncratic errors u_is for all i, t, and s.

Assumption 4. The roots of det(I_r − Φ_1 z − Φ_2 z² − ... − Φ_p z^p) = 0 lie outside the unit circle.

Assumption 5. The r × r matrix S has full rank.

Assumption 6. The eigenvalues of the r × r matrix Σ_Λ Σ_F are distinct.
Most assumptions are the usual regularity conditions discussed in the seminal works on factor models by Bai (2003) and Bai and Ng (2006) and in the standard VAR literature such as Lütkepohl (2005). Assumption 1 allows general processes for the factors and loadings. Assumption 2 imposes so-called weak dependence in the cross-section and time-series dimensions of u_it. We allow for this form of u_t in considering identification and estimation; however, these conditions are strengthened to simplify the analysis when we establish the bootstrap procedures in Section 4. We also take advantage of an independence assumption between the factors and the idiosyncratic errors (and loadings). Assumptions 3, 4 and 5 are standard in the VAR literature to ensure a stable system that is estimable by simple least squares. Assumption 3 imposes a white noise property on e_t. Note that a stable covariance matrix Σ_e is needed to obtain the standard types of structural identification which we consider later. Assumption 6 guarantees the uniqueness of the limit of F̂′F/T, which is also standard in factor models and important for discussing the behavior of the rotation matrix.
2.4 Impulse response functions

Next we consider the IRF of variable i with respect to the VAR innovations in both the reduced-form and structural models. For the reduced-form models, the model composed of (1) and (2) can be rewritten in vector moving average form under Assumption 4, so that

X_it = λ_i′ Ψ(L) e_t + u_it,   (5)

where Ψ(L) ≡ Σ_{j=0}^∞ Ψ_j L^j with Ψ_0 = I_r is the moving average polynomial associated with F_t, such that F_t = Ψ(L) e_t and Ψ(L) = [I_r − Φ(L)L]⁻¹. Let the reduced-form IRF of observable i at time horizon h (h = 0, 1, 2, ...) be ψ_ih. Then

ψ_ih ≡ ∂X_{it+h}/∂e_t′ = λ_i′ Ψ_h.

The structural IRFs φ_ih are similarly defined based on the model (3) and (4). It can be straightforwardly shown that

φ_ih ≡ ∂X_{it+h}/∂ε_t′ = λ_i^s′ Ψ_h^s,

where the moving average parameters are now defined such that Ψ^s(L) ≡ Σ_{j=0}^∞ Ψ_j^s L^j = [I_r − Φ^s(L)L]⁻¹ with Ψ_0^s = I_r and Φ^s(L) = S⁻¹Φ(L)S defined in (4). It is noted that the structural IRF can take a form which involves only structural parameters and no reduced-form parameters. This suggests a simple fact: identification of the structural parameters guarantees identification of the structural IRFs. The structural IRF can equivalently be written as

φ_ih = λ_i′ Ψ_h S,

using the reduced-form parameters λ_i and Ψ_h. We use this form to derive the asymptotic distribution in a later section.
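The moving average matrices above satisfy the recursion Ψ_0 = I_r, Ψ_h = Σ_{j=1}^{min(h,p)} Φ_j Ψ_{h−j}, so φ_ih = λ_i′Ψ_h S can be computed directly from the parameters. A minimal sketch, with hypothetical parameter values:

```python
import numpy as np

def ma_coefficients(Phi_list, H):
    """Compute Psi_0, ..., Psi_H from VAR lag matrices Phi_1, ..., Phi_p via
    Psi_0 = I_r and Psi_h = sum_{j=1}^{min(h,p)} Phi_j Psi_{h-j}."""
    r = Phi_list[0].shape[0]
    Psi = [np.eye(r)]
    for h in range(1, H + 1):
        acc = np.zeros((r, r))
        for j, Phi_j in enumerate(Phi_list, start=1):
            if j <= h:
                acc += Phi_j @ Psi[h - j]
        Psi.append(acc)
    return Psi

# Hypothetical one-lag example: structural IRF of variable i at horizon h = 2
# through phi_ih = lam_i' Psi_h S (the reduced-form expression in the text).
Phi1 = np.array([[0.5, 0.1], [0.0, 0.4]])
S = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 1.0]]))
lam_i = np.array([1.0, -0.5])
Psi = ma_coefficients([Phi1], H=4)
phi_i2 = lam_i @ Psi[2] @ S
print(phi_i2.shape)  # (2,)
```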
3 Identification

3.1 Identification of reduced-form models
It is well known that, in the reduced-form factor model (1), the factors and loadings are not statistically identified; in fact, only the space spanned by the factors is identified. As Bai and Ng (2010) point out, this fact per se is not problematic as long as the researcher's interest is in the conditional mean or the values of the dependent variables; however, if the analysis involves the coefficient values, then identification of the individual factors must be achieved. Since the IRF is nothing but a function of individual coefficients, identification of the individual parameters is necessary.
First we briefly review a consequence of this non-identification problem in the FAVAR setting. Suppose that the reduced-form models (1) and (2) are estimated by the following two-step PC procedure. In the first step, we extract the factors using the PC method. This is implemented by finding a solution of

(F̂, Λ̂) = arg min_{Λ,F} Σ_{i=1}^N Σ_{t=1}^T (X_it − λ_i′F_t)².   (6)

In the second step, the VAR equation for F̂_t is estimated by standard least squares.

However, the problem (6) is not uniquely solvable, since for any r × r invertible matrix H, ΛH⁻¹ and HF are also solutions of (6). Also, HF can be generated through (2) by the combination (HΦH⁻¹, He) with the same H. To overcome this observational equivalence between the two sets (Λ, F, u, Φ, e) and (ΛH⁻¹, HF, u, HΦH⁻¹, He) embedded in the system (1) and (2), the PC method uses an arbitrary normalization F′F/T = I_r to fix r² parameters in an arbitrary manner. This device yields an estimate F̂ given by the eigenvectors of XX′/(NT) corresponding to the r largest eigenvalues (multiplied by √T). As Bai and Ng (2002) show, the particular H produced by the above estimation is

H_NT = (Λ′Λ/N)(F′F̂/T) V_NT⁻¹,   (7)

where V_NT is a diagonal matrix whose diagonal elements are the r largest eigenvalues of XX′/(NT) in descending order. It is stressed that the actual value of H_NT depends on the realized unobservable process F_t, the estimate F̂_t, and the unknown parameters Λ. What makes the situation unique is the fact that researchers neither know nor are able to consistently estimate the realization of H_NT.³

3. A classic work of Cattell (1978) called it "accidental" rotation.

Bai and Ng (2010) further investigate this statistical non-identification problem in the reduced-form factor model (1) and provide three sets of parameter restrictions under which PC estimation yields an H_NT that converges to the identity matrix as N, T → ∞ up to sign normalization. In other words, if one of these restrictions holds, then the estimated factors and parameters are individually identified up to sign.
3.2 Structural identification
Due to the above statistical identification problem of factor models, the conventional structural VAR identification schemes do not simply go through in FAVARs under the standard regularity conditions. Also, the identifying assumptions proposed by Bai and Ng (2010) are placed on reduced-form parameters and may not be fully justified by underlying economic interpretations. The recent VAR and DSGE literature emphasizes the importance of structural parameter restrictions; see Rubio-Ramirez, Waggoner and Zha (2010) for structural VARs or Komunjer and Ng (2010) in the DSGE context. Therefore, our identification schemes differ from Bai and Ng (2010) in the sense that we impose identifying restrictions on the structural parameters rather than on the reduced-form parameters, while still accounting for the factor rotation. Indeed, these identification schemes are technically distinct from, but conceptually common in, many existing structural VAR studies. It will also be seen that through these identifying restrictions, the structural parameters are identified even though the reduced-form parameters are not.
To be precise, we introduce the following assumptions:

Assumption 7. The lag order p and the number of factors r are known a priori.

Assumption 8. E(ε′ε/T) = I_r.

Assumption 9. We have either:

1. (short-run restriction) The short-run IRFs φ_0 = [Λ_{1:r}^s′, Λ_{r+1:N}^s′]′ have Λ_{1:r}^s = [λ_1^s ... λ_r^s]′ an r × r (upper or) lower triangular matrix with positive diagonal elements; or

2. (long-run restriction) The long-run IRFs φ_{1:r,∞} ≡ Λ_{1:r}^s (Σ_{h=0}^∞ Ψ_h^s) from the 1st to the rth observation form an r × r (upper or) lower triangular matrix with positive diagonal elements; or

3. (recursive restriction) (Q′)⁻¹S is an (upper or) lower triangular matrix and the signs of its diagonal elements are known, where Q ≡ plim_{N,T→∞} H_NT⁻¹ with H_NT defined in (7).
Assumption 7 excludes model uncertainty from the analysis and simplifies the identification and inference problem. Relaxing this assumption would be practically relevant and of great interest; however, this problem is beyond the scope of this paper. Assumption 8 imposes the orthogonality of the structural shocks, which is standard in the structural VAR literature. Note that Assumption 7 fixes the total number of parameters in the model and Assumption 8 imposes restrictions on (r² + r)/2 parameters, since the covariance matrix is symmetric by definition.

Any one of the three conditions in Assumption 9 leads to a sufficient condition for structural identification, and it plays an essential role in this paper.⁴ Assumption 9.1 provides a set of restrictions on the short-run, or contemporaneous, structural IRFs. It requires researchers to find at least r − 1 observable variables of which the kth (k ≤ r − 1) is contemporaneously affected only by the first k factors. Assumption 9.2 works similarly, but it restricts the long-run IRFs instead of the short-run IRFs. The implication of the long-run IRF restriction follows from Blanchard and Quah (1989), for example.

4. We consider only exact identification cases.

Assumption 9.3 is similar to the popular recursive restriction in structural VARs; it imposes zeros on (r² − r)/2 parameters in the invertible matrix (Q′)⁻¹S. Note that in FAVARs we do not restrict the contemporaneous matrix S itself but its (asymptotic) rotation (Q′)⁻¹S. In this sense, this is not an identifying restriction on the structural parameters and may be of limited use. However, since it involves the most common Cholesky identification procedure, we further break down Assumption 9.3 into the following set of conditions:

Assumption 9.3′. The following three restrictions imply Assumption 9.3:

1. Σ_F is diagonal;

2. Λ′Λ/N is diagonal;

3. S is an (upper or) lower triangular matrix and the signs of the diagonal elements of (Q′)⁻¹S are known, where Q is defined in Assumption 9.3.

The first two parts of Assumption 9.3′ imply that the model involves orthogonal factors and loadings in its reduced form; they are rather statistical assumptions. Given these two statistical restrictions, we are able to impose the recursive structure not on (Q′)⁻¹S but on S, as in conventional structural VARs. The signs of the diagonal elements of (Q′)⁻¹S are hardly known, since the matrix Q does not have a structural interpretation; however, one can easily deduce them from the signs of the structural IRFs, as we discuss in the next subsection. Finally, the reason why these three conditions imply Assumption 9.3 follows the same steps as Bai and Ng (2010)'s PC1 condition. The first two restrictions ensure that the limit of H_NT is asymptotically diagonal, and so is Q. The diagonality of Q then preserves the triangularity of (Q′)⁻¹S as long as S is triangular, which is guaranteed by Assumption 9.3′.3.⁵

Note that each set of restrictions in Assumption 9 imposes (r² − r)/2 zeros on the structural parameters. Hence any one of them together with Assumption 8 achieves the necessary order condition of r² parameter restrictions on the structural models.
Given the above assumptions and the two-step PC estimation, we can proceed by introducing a sufficient identifying condition for the structural parameters and IRFs.

Condition 1 (Sufficient condition for structural identification). We obtain an r × r matrix Ŝ such that

Ŝ − H_NT′ S = op(1)

as N, T → ∞, where H_NT is defined in (7).

The next question is how to obtain such an Ŝ. In the following subsection, we discuss examples of how the restrictions of Assumption 9 enable us to consistently estimate the structural parameters and IRFs.
3.3 Estimation of identified structural models
Once the reduced-form models are estimated by the two-step PC method, structural parameter estimates are obtained via a contemporaneous coefficient matrix Ŝ which satisfies Condition 1. The following three schemes are simple to implement and often used in empirical applications.

ID1 (short-run restriction):

1. Construct a short-run IRF estimate for observations 1 to r:

φ̂_{1:r,0} = Chol[Λ̂_{1:r} (ê′ê/T) Λ̂_{1:r}′];

2. Obtain Ŝ such that

Ŝ = Λ̂_{1:r}⁻¹ φ̂_{1:r,0},

where Λ̂_{1:r} is a reduced-form estimate of Λ_{1:r}. This scheme achieves Condition 1 under Assumption 9.1.

ID2 (long-run restriction):

1. Construct a long-run IRF estimate for observations 1 to r:

φ̂_{1:r,∞} = Chol[Λ̂_{1:r} Ψ̂_∞ (ê′ê/T) Ψ̂_∞′ Λ̂_{1:r}′], with Ψ̂_∞ = [I_r − Σ_{h=1}^p Φ̂_h]⁻¹;

2. Obtain Ŝ such that

Ŝ = Ψ̂_∞⁻¹ Λ̂_{1:r}⁻¹ φ̂_{1:r,∞}.

This scheme achieves Condition 1 under Assumption 9.2.

ID3 (recursive restriction):

1. Obtain Ŝ such that

Ŝ = Chol[ê′ê/T].

2. Adjust the signs of φ̂_ih (h = 0, 1, ...) if sign(φ̂_{i0}) is not what was expected.

This scheme achieves Condition 1 under Assumption 9.3. Note that step 2 normalizes the signs of (Q′)⁻¹S, which are not directly known. However, they are deduced in practice through the signs of the structural IRFs φ_{1:r,0}, for the following reason. Since an estimate of Λ_{1:r}Q is available as Λ̂_{1:r} and we know the correct signs of φ_{1:r,0} = Λ_{1:r}S, these imply the signs of (Q′)⁻¹S. Hence the sign restriction in Assumption 9.3′.3 has a structural interpretation through the signs of φ_{1:r,0}.

5. However, this restriction may be too restrictive. If we assume S is triangular, it eventually requires Σ_F = Ψ(L)SS′Ψ(L)′ to be diagonal by part 1 of Assumption 9.3′. This requirement only allows specific parameter values of Ψ(L) for which the off-diagonal elements are all zeros. Instead of this strong requirement on Ψ(L), assuming the diagonality of S may be more practical. This case reduces to the orthogonal factor restriction in which F^s′F^s/T is also diagonal.
In Appendix B, we prove that these methods provide a contemporaneous matrix estimate Ŝ which satisfies Condition 1. Note that these examples are not the only ones which lead to the condition; however, they cover many empirical studies.⁶
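Among these schemes, ID3 reduces to a Cholesky factorization of the VAR residual covariance, followed by the column sign adjustment of step 2. A sketch with simulated residuals (the loadings and the expected signs are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
T, r = 200, 2
S_true = np.array([[1.0, 0.0], [0.5, 1.0]])
e_hat = rng.normal(size=(T, r)) @ S_true.T     # placeholder VAR residuals

Sigma_e = e_hat.T @ e_hat / T
S_hat = np.linalg.cholesky(Sigma_e)            # ID3 step 1: S_hat S_hat' = Sigma_e

# ID3 step 2: flip the sign of column k if the contemporaneous structural IRF
# has the opposite sign of what is expected.
Lam_1r = np.eye(r)                             # hypothetical loadings of obs 1..r
phi0 = Lam_1r @ S_hat                          # contemporaneous IRFs of obs 1..r
expected = np.array([1.0, 1.0])                # hypothetical expected diag signs
flip = np.where(np.sign(np.diag(phi0)) == expected, 1.0, -1.0)
S_hat = S_hat * flip                           # per-column sign normalization
```

Because the flips are per-column signs, the factorization Ŝ Ŝ′ = Σ̂_e is preserved after the adjustment.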
Theorem 1 (Consistency of the structural parameters). Under Assumptions 1-7 and Condition 1, the following hold for the two-step PC estimators of λ_i^s, Φ^s, and φ_ih:

λ̂_i^s − λ_i^s →p 0,   Φ̂^s − Φ^s →p 0,   and   ‖φ̂_ih − φ_ih‖ →p 0,

for all i, uniformly in h = 0, 1, 2, ..., as N, T → ∞.
Next we move on to the asymptotic distributions of the structural parameters. First we need a high-level condition on the limit distribution of Ŝ.

Condition 2 (Asymptotic normality of Ŝ). The estimate Ŝ satisfies

√T vec(Ŝ − H_NT′ S) →d N(0, Σ_S)

as N, T → ∞ and √T/N → 0, with H_NT defined in (7).

This condition is standard when one uses standard Gaussian VARs; however, there is a caveat with FAVARs. The approximation is reasonable when the simple Cholesky factorization method (ID3) is used; however, if the more practical ID1 or ID2 methods are implemented, the distribution of Ŝ is affected, even asymptotically, by the distributions of the reduced-form IRF estimates used for identification. This is because constructing Ŝ involves an inverse of a reduced-form IRF estimate. This issue is further examined in Appendix C; its consequences are twofold. First, an exact expression of Ŝ's distribution is intractable. Second, the resulting distributions tend to be asymmetric. However, once we obtain Condition 2, the following theorem follows.

6. For instance, Stock and Watson (2005) propose ID1, Acconcia and Simonelli (2007) use ID2, and Gilchrist et al. (2008) exploit ID3, among others.
Theorem 2 (Asymptotic distribution of structural IRFs). Under Assumptions 1-7 and Conditions 1 and 2,

√T (φ̂_ih − φ_ih) →d N(0, Σ_{φih})

for all i, uniformly in h = 0, 1, 2, ..., as T, N → ∞ and √T/N → 0, provided ∂φ_ih/∂θ ≠ 0, where θ = [λ_i′, vec(Φ)′, vec(S)′]′,

Σ_{φih} = (∂φ_ih/∂θ′) Σ_θ (∂φ_ih/∂θ′)′,

and Σ_θ = diag(Σ_{λi}, Σ_Φ, Σ_S), with Σ_{λi} and Σ_Φ being such that

√T (λ̂_i − H_NT⁻¹ λ_i) →d N(0, Σ_{λi}),   √T (Φ̂ − H_NT′ Φ (H_NT′)⁻¹) →d N(0, Σ_Φ),

as shown in Bai and Ng (2006).

Despite the important implication that the structural IRFs are functions of the structural parameters, when we consider the distribution of the IRFs we present the expression in terms of the reduced-form parameters. This is because researchers would straightforwardly construct the IRF estimates from reduced-form parameter estimates, if needed. The above expression can easily be applied to variance computations when one constructs analytical intervals by the normal approximation.⁷
3.4 Impulse response analysis in models with observable factors
In macroeconomic applications, the following model is often considered instead of that given by (1) and (2):

X_t = Λ^f F_t + Λ^y Y_t + u_t,   (8)

[F_t′, Y_t′]′ = Φ(L) [F_{t−1}′, Y_{t−1}′]′ + [e_t^f′, e_t^y′]′,   (9)

where the vector Y_t (m × 1) collects observable factors. An important feature of these models is the fact that the observable factors Y_t enter in the same manner as the latent factors, enabling the researcher to assess the situation in which some small number of observable variables Y_t also affect the overall system. For example, a monetary policy shock to the federal funds rate can be regarded as a driving force of a wide range of macroeconomic variables.

7. See Lütkepohl (1990, 2005).
In this model, estimation of the latent factors is not conducted in the same way as in the model without observable factors. In the simplest case, X_t can first be regressed on Y_t and the factors can then be extracted as the principal components of ZZ′/(NT), not XX′/(NT), where Z_t = X_t − X′Y(Y′Y)⁻¹Y_t. Since the variation of Y_t is already removed from Z_t, the factors thus estimated by the reduced-form model are those for F_t if F_t and Y_t are orthogonal. It is concluded that the individual parameters attached to the observable factors are identified under this assumption.
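The extraction step just described can be sketched as follows: regress X on Y, then take principal components of the residual matrix Z. All simulated dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, r, m = 80, 150, 2, 1
F0 = rng.normal(size=(T, r))                 # latent factors
Y = rng.normal(size=(T, m))                  # observable factor
Lam_f = rng.normal(size=(N, r))
Lam_y = rng.normal(size=(N, m))
X = F0 @ Lam_f.T + Y @ Lam_y.T + 0.5 * rng.normal(size=(T, N))

# Take off the variation explained by Y_t: Z_t = X_t - X'Y (Y'Y)^{-1} Y_t
Z = X - Y @ np.linalg.solve(Y.T @ Y, Y.T @ X)

# First-step PC applied to Z Z'/(NT) rather than X X'/(NT)
val, vec = np.linalg.eigh(Z @ Z.T / (N * T))
F_hat = np.sqrt(T) * vec[:, np.argsort(val)[::-1][:r]]
print(F_hat.shape)  # (150, 2)
```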
It is also important to note that the reduced-form IRFs are statistically identified. To see this, consider the models (8) and (9) in their moving average form

X_t = [Λ^f Ψ^{ff}(L) + Λ^y Ψ^{yf}(L)] e_t^f + [Λ^f Ψ^{fy}(L) + Λ^y Ψ^{yy}(L)] e_t^y + u_t,   (10)

where Ψ(L) = [Ψ^{ff}(L), Ψ^{fy}(L); Ψ^{yf}(L), Ψ^{yy}(L)] are defined as the moving average polynomials for F_t and Y_t such that

F_t = Ψ^{ff}(L) e_t^f + Ψ^{fy}(L) e_t^y,

Y_t = Ψ^{yf}(L) e_t^f + Ψ^{yy}(L) e_t^y.

Let the reduced-form IRF of an observed factor shock on X_i at time horizon h be ψ_{i,h}^y. Then

ψ_{i,h}^y ≡ λ_i^f′ Ψ_h^{fy} + λ_i^y′ Ψ_h^{yy},

where Ψ_h is the moving average coefficient matrix of the hth lag, such that Ψ(L) ≡ Σ_{j=0}^∞ Ψ_j L^j. The following lemma then obtains.

Lemma 1. Assume F_t and Y_t are orthogonal in models (8) and (9). Under Assumptions 1-7, which are extended for F to include Y, and Condition 1,

ψ̂_{i,h}^y − ψ_{i,h}^y = op(1)

as N, T → ∞, for all i and uniformly in h = 0, 1, 2, ....

This result implies that, as long as the latent and observable factors are orthogonal, the standard structural identification schemes among the observable factors go through to deliver structural IRFs. We next move on to inference techniques to evaluate confidence intervals for the identified structural IRFs.
4 Bootstrap inference
This section considers residual-based bootstrap algorithms to construct confidence intervals for the IRFs investigated so far. The methods discussed here focus on i.i.d. bootstraps. We start with the following assumption.

Assumption 10. u_it and e_t are independent and identically distributed for each t, with E(u_it) = 0, E(e_t) = 0, E(u_it²) = σ_i², a positive constant, and E(e_t e_t′) = Σ_e, an r × r positive definite matrix.

This assumption on e_t is widespread in the VAR literature, as imposed in Assumption 3. However, assuming i.i.d. idiosyncratic errors u_it may be thought restrictive in some factor analyses.⁸ Gonçalves and Perron (2010) recently developed a rigorous theory for residual-based wild bootstrap procedures which are applicable to heteroskedastic idiosyncratic errors u_it. Their setup is simpler than ours in the sense that no particular structural relations are assumed in the model; however, applying their residual-based bootstrap in FAVARs, or a method which can account for autocorrelation, e.g., the block bootstrap, would be an important future extension.

We now present two i.i.d. bootstrap algorithms. The first procedure is A: bootstrapping with factor estimation, and the second is B: bootstrapping without factor estimation. The main feature of procedure A is that it includes factor estimation within each bootstrap replication, so that the resulting confidence intervals can properly account for the uncertainty associated with factor estimation. In contrast, procedure B does not re-estimate the factors in the bootstrap replications and takes the original factor estimates as factual.
First, the bootstrap procedure A is outlined as follows:

Procedure A: Bootstrapping with factor estimation

1. Estimate the model by the two-step PC procedure and obtain parameter estimates $\hat\Lambda$, $\hat\Phi$, $\hat S$ and residuals $\hat u_t$ and $\hat e_t$. Obtain the IRF estimate $\hat\varphi_i$.

2. Resample the residuals $\hat e_t$ with replacement, and label them $e_t^*$. Generate the bootstrapped sample $F_t^*$ by $F_t^* = \hat\Phi(L)F_{t-1}^* + e_t^*$ with the initial condition $F_j^* = \hat F_j$ for $j = 1, \dots, p$.⁹ Also resample the residuals $\hat u_t$ with replacement, and label them $u_t^*$. Generate the bootstrapped observations $X_t^*$ by $X_t^* = \hat\Lambda F_t^* + u_t^*$.¹⁰

⁸ Dufour and Stevanovic (2010) explore the usefulness of VARMA in factor-augmented models.
3. Using the bootstrapped observations $X_t^*$ (and $Y_t$), estimate $(\hat F^*, \hat\Lambda^*)$ by the first step of the PC procedure. Then estimate the VAR equation of $\hat F_t^*$ to obtain the bootstrapped estimates $\hat\Phi^*$ and $\hat S^*$ by the second step of the PC procedure. This yields the bootstrap IRF estimates $\hat\varphi_i^*$.

4. Repeat 2 and 3 $R$ times.
5. Store the re-centered statistic $s^* \equiv \hat\varphi_{ih}^* - \hat\varphi_{ih}$. Sort the statistics and pick the $100\alpha$th and $100(1-\alpha)$th percentiles $(s^*_{(\alpha)}, s^*_{(1-\alpha)})$. The resulting $100(1-2\alpha)\%$ confidence interval for $\varphi_{ih}$ is $[\hat\varphi_{ih} - s^*_{(1-\alpha)},\ \hat\varphi_{ih} - s^*_{(\alpha)}]$ for $h = 0, 1, \dots$.¹¹
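The five steps can be sketched in code. The following is a minimal illustration for a one-lag, $r$-factor model, not the author's implementation: the Cholesky factor of the VAR residual covariance stands in for the paper's identification schemes, and all function names are our own.

```python
import numpy as np

def pc_factors(X, r):
    """First-step PC: factors are sqrt(T) times the top-r eigenvectors of XX'/(NT),
    so that F_hat'F_hat/T = I_r; loadings then follow by least squares."""
    T, N = X.shape
    vals, vecs = np.linalg.eigh(X @ X.T / (N * T))
    top = np.argsort(vals)[::-1][:r]
    F = np.sqrt(T) * vecs[:, top]
    Lam = X.T @ F / T
    return F, Lam

def var1_ols(F):
    """Second-step: VAR(1) coefficient and residuals for the (estimated) factors."""
    Y, Z = F[1:], F[:-1]
    Phi = np.linalg.lstsq(Z, Y, rcond=None)[0]
    return Phi, Y - Z @ Phi

def structural_irf(Phi, S, h):
    """Structural IRF at horizon h for a VAR(1): (Phi')^h S."""
    return np.linalg.matrix_power(Phi.T, h) @ S

def bootstrap_A(X, r, R=99, h=4, alpha=0.05, seed=None):
    """Procedure A: the PC factor extraction is repeated inside every replication."""
    rng = np.random.default_rng(seed)
    T, N = X.shape
    F, Lam = pc_factors(X, r)
    Phi, e = var1_ols(F)
    u = X - F @ Lam.T
    S = np.linalg.cholesky(e.T @ e / len(e))       # stand-in for the paper's ID schemes
    phi_hat = structural_irf(Phi, S, h)
    draws = []
    for _ in range(R):
        e_s = e[rng.integers(0, len(e), T)]        # step 2: resample e_hat
        F_s = np.zeros((T, r))
        F_s[0] = F[0]                              # initial condition from F_hat
        for t in range(1, T):
            F_s[t] = F_s[t - 1] @ Phi + e_s[t]
        u_s = u[rng.integers(0, T, T)]             # resample u_hat
        X_s = F_s @ Lam.T + u_s                    # bootstrap observations X*
        F_b, _ = pc_factors(X_s, r)                # step 3: factors RE-estimated
        Phi_b, e_b = var1_ols(F_b)
        S_b = np.linalg.cholesky(e_b.T @ e_b / len(e_b))
        draws.append(structural_irf(Phi_b, S_b, h) - phi_hat)  # s* = phi* - phi_hat
    hi, lo = np.percentile(draws, [100 * (1 - alpha / 2), 100 * alpha / 2], axis=0)
    return phi_hat - hi, phi_hat - lo              # Hall's percentile interval (step 5)
```

The key design point is that `pc_factors` is called again on every bootstrap sample, so the variability of the factor estimates propagates into the interval.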
In particular, the bootstrap sample $X_t^*$ shares the same data generating process as the original sample $X_t$ in step 2. In step 3, the bootstrap estimate involves the same identification and estimation methods as the original. These two ingredients ensure that the dispersion of the bootstrap estimates mimics that of the original estimates.
Procedure B: Bootstrapping without factor estimation

Second, we consider procedure B. It requires a modification only in step 3 of procedure A and is formalized as follows.

3. Using the bootstrapped observations $X_t^*$ and factors $F_t^*$, estimate $\hat\Lambda^*$, $\hat\Phi^*$ and $\hat S^*$. This yields the bootstrap IRF estimates $\hat\varphi_i^*$.
This procedure can be regarded as a natural and simple extension of the methods used in standard VAR analysis. However, the generated confidence intervals will not properly account for the uncertainty associated with factor estimation. Their theoretical and empirical properties are further investigated in the following sections.
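The modified step 3 of procedure B can be sketched as follows (our illustration, with hypothetical function names): the bootstrap factors $F_t^*$ generated in step 2 are treated as observed, so no principal-components re-extraction takes place.

```python
import numpy as np

def step3_procedure_B(X_s, F_s):
    """Procedure B, modified step 3: regress the bootstrap data on the bootstrap
    factors F* directly, skipping the PC re-extraction of procedure A."""
    Lam_b = np.linalg.lstsq(F_s, X_s, rcond=None)[0].T      # loadings Lambda*
    Y, Z = F_s[1:], F_s[:-1]
    Phi_b = np.linalg.lstsq(Z, Y, rcond=None)[0]            # VAR coefficient Phi*
    e_b = Y - Z @ Phi_b
    S_b = np.linalg.cholesky(e_b.T @ e_b / len(e_b))        # contemporaneous matrix S*
    return Lam_b, Phi_b, S_b
```

Because the factor-extraction step is skipped, the resulting draws carry no factor estimation uncertainty, which is precisely the difference the following sections quantify.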
⁹ At this stage, some type of bias correction can be applied, such as the bootstrap-after-bootstrap discussed in Kilian (1998). See Appendix D.

¹⁰ When the model includes observable factors $Y_t$ in the VAR, let $F_t^*$ be $(F_t^*, Y_t)$ and $\hat F_t$ be $(\hat F_t, Y_t)$.

¹¹ This is often called Hall's percentile interval (Hall, 1992). One can alternatively consider what is called Efron's percentile method by storing $s^* \equiv \hat\varphi_{ih}^*$ and constructing $[s^*_{(\alpha)}, s^*_{(1-\alpha)}]$; however, this method is not exact when the interval is not symmetric, even asymptotically (see Lütkepohl, 2005, for instance). Another popular choice is percentile-$t$ intervals (see Hall, 1992); however, they are likely to produce explosive intervals for IRF estimates at long horizons when the sample size is small (see Kilian, 1999). We compare these three intervals in Appendix Table A, where we label Hall's percentile as PER-H, Efron's percentile as PER-E, and percentile-$t$ as PER-T.
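The two percentile constructions compared in the footnote can be written down directly (a sketch of the generic formulas, not the paper's code; percentile-$t$ is omitted since it additionally requires a standard-error estimate for each draw):

```python
import numpy as np

def hall_interval(theta_hat, boot, alpha=0.05):
    """PER-H: invert quantiles of the re-centered statistic s* = theta* - theta_hat."""
    s = np.asarray(boot) - theta_hat
    q_lo, q_hi = np.quantile(s, [alpha / 2, 1 - alpha / 2])
    return theta_hat - q_hi, theta_hat - q_lo

def efron_interval(boot, alpha=0.05):
    """PER-E: quantiles of the bootstrap draws themselves."""
    q = np.quantile(np.asarray(boot), [alpha / 2, 1 - alpha / 2])
    return q[0], q[1]
```

For a bootstrap distribution symmetric around the estimate the two intervals coincide; they differ when the draws are skewed, which is exactly the asymmetry the footnote refers to.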
5 Asymptotic validity

This section discusses the asymptotic validity of the bootstrap inference procedures A and B. The first-order asymptotic results are given in Theorems 3 and 4, and several remarks on higher-order correctness follow the statements. Note that we also need the high-level Conditions A1-A2, stated in Appendix A, to obtain the asymptotic validity of bootstrap procedure A.
Theorem 3 (Asymptotic validity of procedure A) Under Assumptions 1-7, 10, Conditions 1, 2, A1, and A2,
$$\sup_{x\in\mathbb{R}}\left|P^*\!\left[\sqrt{T}(\hat\varphi_{ih}^* - \hat\varphi_{ih}) \le x\right] - P\!\left[\sqrt{T}(\hat\varphi_{ih} - \varphi_{ih}) \le x\right]\right| \overset{p}{\to} 0,$$
for all $i$ and uniformly in $h = 0, 1, 2, \dots$, as $N, T \to \infty$ and $\sqrt{T}/N \to 0$.
These results are again first-order, and in order to better understand finite sample inference results, the lower-order terms in the estimation errors are of interest. First, the errors in the original structural parameter estimation can be expanded into three components: (a) errors regarding the contemporaneous coefficient matrix $\hat S$, (b) factor estimation errors, and (c) combinations of (a) and (b). If we take the structural IRF at time 0 as an example,¹² the expansion of the original estimate is:
$$\begin{aligned}
\sqrt{T}(\hat\varphi_{i0} - \varphi_{i0}) &= T^{-1/2}S'H_{NT}H_{NT}'F'u_i + \underbrace{T^{1/2}\varepsilon' H_{NT}^{-1}\lambda_i' + T^{-1/2}\varepsilon' H_{NT}'F'u_i}_{(a):\ \text{errors in }\hat S} \\
&\quad + \underbrace{T^{-1/2}S'H_{NT}\hat F'(F - \hat F H_{NT}^{-1})\lambda_i' + T^{-1/2}S'H_{NT}(\hat F - FH_{NT})'u_i}_{(b):\ \text{factor estimation errors}} \\
&\quad + \underbrace{T^{-1/2}\varepsilon'\hat F'(F - \hat F H_{NT}^{-1})\lambda_i' + T^{-1/2}\varepsilon'(\hat F - FH_{NT})'u_i}_{(c):\ \text{(a) and (b)}},
\end{aligned}\tag{11}$$
with $\varepsilon = \hat S - H_{NT}'S$. In the original estimate, the terms in (a), (b), and (c) are of order $o_p(1)$, $O_p(\sqrt{T}/\delta_{NT}^2)$, and $o_p(1)$ respectively, where $\delta_{NT} = \min\{\sqrt{N}, \sqrt{T}\}$. Note that $O_p(\sqrt{T}/\delta_{NT}^2) = o_p(1)$ when $\sqrt{T}/N \to 0$. When
¹² Although we present the estimate of the structural IRF at $h = 0$ for simplicity, the same discussion goes through for structural IRFs at any lag, or for any structural parameter estimates, with possibly more terms involved.
we follow the procedure A, the bootstrap parameter estimates take the same form in the
bootstrap space so that:
$$\begin{aligned}
\sqrt{T}(\hat\varphi_{i0}^* - \hat\varphi_{i0}) &= T^{-1/2}\hat S'H_{NT}^*H_{NT}^{*\prime}F^{*\prime}u_i^* + \underbrace{T^{1/2}\varepsilon^{*\prime}H_{NT}^{*-1}\hat\lambda_i' + T^{-1/2}\varepsilon^{*\prime}H_{NT}^{*\prime}F^{*\prime}u_i^*}_{(a):\ \text{errors in }\hat S^*} \\
&\quad + \underbrace{T^{-1/2}\hat S'H_{NT}^*\hat F^{*\prime}(F^* - \hat F^*H_{NT}^{*-1})\hat\lambda_i' + T^{-1/2}\hat S'H_{NT}^*(\hat F^* - F^*H_{NT}^*)'u_i^*}_{(b):\ \text{factor estimation errors}} \\
&\quad + \underbrace{T^{-1/2}\varepsilon^{*\prime}\hat F^{*\prime}(F^* - \hat F^*H_{NT}^{*-1})\hat\lambda_i' + T^{-1/2}\varepsilon^{*\prime}(\hat F^* - F^*H_{NT}^*)'u_i^*}_{(c):\ \text{(a) and (b)}},
\end{aligned}\tag{12}$$
with
$$H_{NT}^* = \left(\frac{\hat\Lambda'\hat\Lambda}{N}\right)\left(\frac{F^{*\prime}\hat F^*}{T}\right)\hat V^{*-1},\tag{13}$$
where $\hat V^*$ is a diagonal matrix whose elements are the leading $r$ eigenvalues of $X^*X^{*\prime}/(NT)$ in descending order, and $\varepsilon^* = \hat S^* - H_{NT}^{*\prime}\hat S$. The validity is shown under Conditions A1 and A2. These conditions guarantee that all the terms in (12) are of the same probability order under $P^*$, in probability,¹³ as the corresponding terms in (11) under $P$. Hence (a) and (c) disappear as $N, T \to \infty$, and so does (b) with the additional condition $\sqrt{T}/N \to 0$.
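The rotation matrix in (13) mirrors the full-sample $H_{NT}$ of Bai (2003). The following minimal numerical check (our illustration on simulated data, assuming the standard convention that $\hat V$ collects the leading eigenvalues of $XX'/(NT)$) confirms that the PC factors approximate the rotated true factors, $\hat F \approx F H_{NT}$:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 200, 100, 2
F = rng.standard_normal((T, r))
Lam = rng.standard_normal((N, r))
X = F @ Lam.T + 0.1 * rng.standard_normal((T, N))    # low noise for a clean check

vals, vecs = np.linalg.eigh(X @ X.T / (N * T))
top = np.argsort(vals)[::-1][:r]
V_hat = np.diag(vals[top])                           # leading eigenvalues of XX'/(NT)
F_hat = np.sqrt(T) * vecs[:, top]                    # PC factors, F_hat'F_hat/T = I_r

# rotation matrix: H = (Lam'Lam/N)(F'F_hat/T) V_hat^{-1}
H = (Lam.T @ Lam / N) @ (F.T @ F_hat / T) @ np.linalg.inv(V_hat)
gap = np.abs(F_hat - F @ H).max()                    # F_hat ~ F H up to estimation error
```

With more noise or smaller $N$, `gap` grows, which is the factor estimation error tracked by the (b) terms above.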
Second, the validity of the procedure B is provided.
Theorem 4 (Asymptotic validity of procedure B) Under Assumptions 1-8, 10, Conditions 1, 2, A1(6)(7), and A3,
$$\sup_{x\in\mathbb{R}}\left|P^*\!\left[\sqrt{T}(\hat\varphi_{i,h}^* - \hat\varphi_{i,h}) \le x\right] - P\!\left[\sqrt{T}(\hat\varphi_{i,h} - \varphi_{i,h}) \le x\right]\right| \overset{p}{\to} 0,$$
for all $i$ and uniformly in $h = 0, 1, 2, \dots$, as $N, T \to \infty$ and $\sqrt{T}/N \to 0$.
When one uses procedure B, the bootstrap estimate of the structural IRF at time 0 is expanded in the bootstrap space as follows:
$$\sqrt{T}(\hat\varphi_{i0}^* - \hat\varphi_{i0}) = T^{-1/2}\hat S'F^{*\prime}u_i^* + \underbrace{T^{1/2}\varepsilon^{*\prime}\hat\lambda_i' + T^{-1/2}\varepsilon^{*\prime}F^{*\prime}u_i^*}_{(a):\ \text{errors in }\hat S^*}.\tag{14}$$

¹³ This means that the arguments in $O_p(\cdot)$ and $O_p^*(\cdot)$ (or $o_p(\cdot)$ and $o_p^*(\cdot)$) are the same.
The lower-order terms associated with the factor estimation errors, (b) and (c), do not appear under procedure B. Hence we expect the intervals constructed by procedure B to be in general narrower than those by procedure A, because of the factor estimation errors. More importantly, the intervals from procedure B may not be as accurate as those from procedure A, especially when $N$ is small relative to $T$ (so that $\sqrt{T}/N \to 0$ is not appropriate), since the terms in (b), which are absent under procedure B, are then relevant. It is also noted that when the errors in the contemporaneous matrix estimate $\varepsilon^*$ are not small for some reason, the terms in (a) and (c) can play a significant role. This leads to a coverage error at short horizons, since the effect of $\varepsilon^*$ diminishes at long horizons. Finally, the asymptotic normal approximation neither accounts for the factor estimation errors (b) and (c) nor is able to capture the effect of (a) well, as explained in Section 3.3. Hence it is also anticipated that its coverage ratios will fall below the nominal level and be quite erratic.
6 Finite sample properties

6.1 Monte Carlo simulations
In this section, we provide simulation results to assess the finite sample properties of the proposed bootstrap procedures. For simplicity we consider a two-factor VAR(1) model. For case 1, the observable variables $x_{i,t}$ are generated as
$$x_{i,t} = \lambda_i f_t + u_{i,t},$$
and the factors ($f_t$: $2\times 1$) evolve such that
$$f_t = \Phi f_{t-1} + e_t,$$
for $i = 1, \dots, N$ and $t = 1, \dots, T$, with $\lambda_i = [\lambda_{i,1}, \lambda_{i,2}]$ and $\Phi$ a $2\times 2$ matrix. The structural IRF of a unit shock to the first factor is studied with $e_t = S\eta_t$, and $S$ is
$$S = \begin{bmatrix} 1 & 0.5 \\ 0 & 1 \end{bmatrix}.$$
For identification, we consider ID1. We also consider a case with an observable factor $y_t$ ($1\times 1$), so that
$$x_{i,t} = \lambda_i^f f_t + \lambda_i^y y_t + u_{i,t},$$
and
$$\begin{bmatrix} f_t \\ y_t \end{bmatrix} = \Phi \begin{bmatrix} f_{t-1} \\ y_{t-1} \end{bmatrix} + e_t,$$
with $\Phi$ a $3\times 3$ matrix. We consider the structural IRF of a unit shock to the observable factor.
Parameter values are set as follows. For case 1, we set $S$ to the identity matrix so that all the parameters can be interpreted as structural parameters. The loadings are $\lambda_{ij} \sim i.i.d.\ U(0,1)$ and $\lambda_{21} = 0$, so that we can use the triangular structure of the first two loadings. The VAR parameter $\Phi$ for ID1 can be general, so
$$\Phi = \begin{bmatrix} 0.4 & 0.2 \\ 0.2 & 0.4 \end{bmatrix}.$$
For the case of the observed factor shock, $\lambda_{ij}^f, \lambda_{ij}^y \sim i.i.d.\ U(0,1)$ and
$$\Phi = \begin{bmatrix} 0.4 & 0.2 & 0 \\ 0.2 & 0.4 & 0 \\ 0 & 0 & 0.4 \end{bmatrix},$$
so that $Y$ and $F$ are asymptotically orthogonal. Since we consider only one observable factor, we do not need a particular identifying restriction among the factors.
We generate quasi-random variables $e_{j,t}$ ($j = 1, 2$) and $u_{i,t}$ following an i.i.d. standard normal distribution (Gaussian errors) or a centered chi-square distribution with one degree of freedom scaled to unit variance (chi-squared errors). To eliminate the effect of initial value assumptions, we generate a sample of size $2T$ and discard the first $T$ observations.
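The case-1 design above can be generated as follows (a sketch with Gaussian errors and the baseline $S = I$; the function name is ours):

```python
import numpy as np

def generate_case1(N, T, seed=None):
    """Case-1 DGP: x_{i,t} = lam_i f_t + u_{i,t}, f_t = Phi f_{t-1} + e_t.
    Generates 2T periods and discards the first T to wash out the initial condition."""
    rng = np.random.default_rng(seed)
    Phi = np.array([[0.4, 0.2], [0.2, 0.4]])
    Lam = rng.uniform(0.0, 1.0, (N, 2))
    Lam[1, 0] = 0.0                      # lambda_{21} = 0: triangular first two rows
    f = np.zeros((2 * T, 2))
    e = rng.standard_normal((2 * T, 2))  # baseline S = I, so e_t is the structural shock
    for t in range(1, 2 * T):
        f[t] = Phi @ f[t - 1] + e[t]
    f = f[T:]                            # burn-in discarded
    X = f @ Lam.T + rng.standard_normal((T, N))
    return X, f, Lam
```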
Since the effect of the sample size on the inference results is of major interest, we compare the results of the four $(N, T)$ combinations of $N = \{50, 200\}$ and $T = \{40, 120\}$. The bootstrap inference method is conducted with $1{,}000$ replications, and results for equal-sided confidence intervals at the 95%, 90% and 85% nominal levels are reported. By default, PER-H is used unless otherwise specified, and the bias correction¹⁴ in the spirit of Kilian (1998) is applied, where the bias for $\hat\Phi$ is estimated by another $R_b = 1{,}000$ bootstrap loop and evaluated by $R_b^{-1}\sum_{j=1}^{R_b}\big[\hat\Phi_j^* - H^{*\prime}\hat\Phi H^{*-1}\big]$ with $j = 1, \dots, R_b$. We also discuss the results for the other intervals (PER-E and PER-T) and the cases where the bias correction is not applied. The number of replications of the Monte Carlo simulations for evaluating the coverage ratio is $1{,}000$, and the impulse responses are considered up to 5 periods ahead. The coverage ratios and the median lengths of the confidence intervals are reported. The reported lengths are normalized so that the value under $N = 200$ and $T = 120$ is unity.
¹⁴ See Appendix C for details.
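The bootstrap-after-bootstrap bias correction can be sketched in a simplified form. The version below handles a plain VAR(1) only and omits both Kilian's (1998) stationarity adjustment and the factor-rotation terms $H^*$ that the FAVAR version requires, so it is an illustration of the idea rather than the paper's procedure:

```python
import numpy as np

def var1_bias_corrected(F, Rb=199, seed=None):
    """Estimate the bias of the VAR(1) OLS coefficient by a bootstrap loop and
    subtract it (simplified Kilian-style correction)."""
    rng = np.random.default_rng(seed)
    T = len(F)
    Y, Z = F[1:], F[:-1]
    Phi = np.linalg.lstsq(Z, Y, rcond=None)[0]
    e = Y - Z @ Phi
    total = np.zeros_like(Phi)
    for _ in range(Rb):
        e_s = e[rng.integers(0, len(e), T)]
        F_s = np.zeros_like(F)
        F_s[0] = F[0]
        for t in range(1, T):
            F_s[t] = F_s[t - 1] @ Phi + e_s[t]
        total += np.linalg.lstsq(F_s[:-1], F_s[1:], rcond=None)[0]
    bias = total / Rb - Phi              # mean bootstrap estimate minus Phi_hat
    return Phi - bias, Phi
```

Since persistent VARs have downward-biased OLS coefficients, the correction typically pushes the estimate toward the true persistence.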
The results for case 1 using the suggested procedure A are shown in Tables 1-a and 1-b, and those for case 2 in Tables 2-a and 2-b. The first observation to note is that, throughout the experiments, bootstrap procedure A shows coverage probabilities close to the corresponding nominal levels. This result is robust to the sample sizes considered. The second notable result is that the coverage rates appear to be robust to the two distributional assumptions on the errors. Finally, sample sizes affect the lengths of the confidence intervals, with more available data inducing tighter intervals, as expected.
Next we compare the results for confidence intervals constructed by bootstrap procedures A and B and by the asymptotic approximation (denoted "N"). In doing so, we only present the results for case 1. The results for case 2 basically follow the same line, but the differences are less clear, so they are not reported here. The case of smaller sample sizes is of interest since the factors are then estimated less precisely, which should highlight the effects of the factor estimation uncertainty. The sample sizes are now chosen as $(N, T) = \{(10, 120), (30, 120), (50, 120)\}$. As for the VAR parameters, we consider diagonal elements of 0.4 and 0.7 with off-diagonal elements of 0.2. Everything else is the same as in the baseline simulation, and results for only the 95% nominal level with normal errors are reported in Table 3.
Table 3 shows that procedure A provides intervals with coverage ratios closer to the nominal levels in finite samples than procedure B. First, for smaller sample sizes, the effect of neglecting the factor estimation uncertainty becomes more distinct. Second, and as frequently seen in empirical data, when the factors are more persistent and have more variability (larger diagonal elements), the difference between the two procedures becomes more distinct. We also find that the asymptotic normal approximation can be quite erratic. This is mainly due to the estimation errors in the contemporaneous matrix $\hat S$, for the reason stated in Section 3.3 and Appendix C.
6.2 Monte Carlo simulation using empirical data
Finally, we present an empirical experiment to ascertain the robustness of the proposed bootstrap procedure A to actual economic data. To this end, we use 110 US macroeconomic series investigated by Stock and Watson (2008). The frequencies of the data are a mixture of quarterly and monthly, spanning 1959Q1 to 2006Q4. We applied the same treatment as the original paper. First, monthly data are converted to quarterly by taking a simple average over three months. Second, all series are transformed into stationary processes following Stock and Watson's (2008) guidelines. In addition, the data are demeaned and standardized to have unit standard deviations. A brief description of the data set is given in Table 6, although more in-depth details are contained in the original paper. Coinciding with the previous simulation experiment, we consider two types of models: case 1, IRFs to factor innovations with the ID1 identifying method, and case 2, IRFs to an observed policy instrument (the Federal Funds rate). We chose the number of factors to be 2, which is justified by the ICp2 criterion of Bai and Ng (2002), although moderate variations of the lag order and the number of factors do not affect the qualitative results. We also find that the first factor is closely related to medium-run real economic activity measures (e.g. production) and the second factor has a stronger correlation with price variables. This is consistent with the findings of Sargent and Sims (1977) or Stock and Watson (2005). Hence, for identification, we select the assumption that the producer price index¹⁵ is contemporaneously affected only by the second factor. We chose the order of the vector autoregression to be four. The observation equations and the VAR equations are identical to those described in the previous subsection, except now with the larger lag order.
The aim here is to evaluate the coverage properties of bootstrap procedure A for the IRFs. However, the coverage probabilities of confidence intervals constructed from actual data cannot be calculated. Hence, we use the following calibration experiment in order to replicate an approximation of the actual data generating process.
1. Estimate the model using the PC method to obtain coefficient estimates and residuals.

2. Generate quasi-observations from the calibrated model, with the error terms resampled from $\{\hat u_t\}$ and $\{\hat e_t\}$ with replacement. Note that $\{\hat e_t\}$ are orthogonalized by $\eta_t = \hat e_t \hat S^{-1}$, where $\hat S$ is the Cholesky factor of the covariance matrix of $\hat e_t$. This allows $\eta_t$ to be interpreted as a structural innovation.

3. Using each generated data set, construct 95% confidence intervals for the IRFs by the proposed bootstrap procedure and check whether the true (calibrated) IRFs are included in the estimated intervals.

4. Repeat 2 and 3 $1{,}000$ times to evaluate the coverage probabilities.
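The orthogonalization in step 2 can be verified numerically. The sketch below (our illustration, assuming the lower-triangular Cholesky convention $\Sigma = \hat S\hat S'$ with $\eta_t = \hat S^{-1}\hat e_t$) shows that the transformed residuals have identity covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated "VAR residuals" e_hat with a known 2x2 covariance
C = np.array([[1.0, 0.6], [0.6, 1.5]])
e_hat = rng.multivariate_normal([0.0, 0.0], C, size=5000)

Sigma = np.cov(e_hat, rowvar=False)
S_hat = np.linalg.cholesky(Sigma)              # lower triangular, Sigma = S S'
eta = e_hat @ np.linalg.inv(S_hat).T           # row-wise eta_t = S^{-1} e_hat_t
cov_eta = np.cov(eta, rowvar=False)            # exactly the identity by construction
```

Because $\hat S$ is computed from the sample covariance itself, $\operatorname{cov}(\eta)$ equals the identity up to floating-point error, which is what makes the $\eta_t$ interpretable as unit-variance structural innovations.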
The considered IRFs are for prices (the personal consumption expenditures price index composed of nondurables excluding food, clothing and oil: GDP275_4), long-term interest rates (the 10-year US Treasury interest rate: FYGM10), a production index (the industrial production index: IPS43), and the unemployment rate (the unemployment rate for all workers 16 years and over: LHUR). For case 2, the federal funds rate (FYFF) is chosen as the policy variable. Results for both case 1 and case 2 are provided in Tables 4-a and 4-b for impulse responses up to 8 periods ahead. To examine the impact of the sample size, we conduct this experiment using the full data set ($T = 190$), shown in Table 4-a, and post-1984 data (1984Q1-2006Q4, $T = 90$), shown in Table 4-b. The results for both cases generally yield values very close to the 95% nominal level for all four variables when using the full sample. These findings also hold for the smaller sample consisting of post-1984 data. Therefore, the good finite sample properties of the bootstrap procedure are confirmed by this calibrated experiment.

¹⁵ For the disaggregated data set, we chose the producer price index (transportation).
Finally, we compare the results with bootstrap procedure B in Table 5. Here we use a smaller data set, choosing only aggregate series from Stock and Watson's data set. For example, we use the industrial production index (total) instead of the industrial production indices for durables, nondurables, and so on. This leaves us with 47 data series (see Table 7); however, the basic structure of the data set remains the same and gives clearer results.¹⁶ We consider the full time span ($T = 190$). It is shown that if the bootstrap is applied without accounting for the uncertainty associated with factor estimation, the resulting confidence intervals become narrower and the coverage ratios fall mostly below the 95% nominal level.
7 Conclusions
This paper has two main contributions. First, we explicitly consider structural identification schemes in FAVAR models. It is stressed that extra care must be taken in structural identification in FAVARs, since the popular PC estimation identifies individual parameters only up to some random rotation. Our strategy is to impose identifying restrictions on structural parameters or IRFs in conventional ways while still accounting for the factor rotation. Second, we investigate popular residual-based bootstrap procedures with a particular emphasis on the effects of the factor estimation errors. We first proposed an algorithm that is valid in theory and practice (procedure A), then compared it with two simple alternatives: the bootstrap without factor estimation (procedure B) and the asymptotic normal approximation. The analysis reveals that the factor estimation errors may be relevant when $N$ is small compared to $T$. In FAVARs, this effect can be further intensified through the contemporaneous coefficient matrix estimation in the structural identifying methods. These results suggest that the uncertainty associated with factor estimation can be relevant, especially in structural IRF estimates in FAVARs, and researchers must pay close attention to identification schemes and to the steps of the bootstrap algorithm.

¹⁶ The results using the full data set follow the same line.
Acknowledgement: T.B.A.
Appendix A: Proof of Theorems

In this appendix, we suppress the subscript $NT$ on the PC rotation matrix $H_{NT}$ and simply write $H$.
Proof of Theorem 1: We show the results for the individual structural parameters $\lambda_i^s$ and $\Phi^s$. Then the continuous mapping theorem immediately yields the result for the structural IRF. First, the reduced-form PC estimate $\hat\lambda_i$ is expanded into the following form (see Bai, 2003, proof of Theorem 2):
$$\hat\lambda_i' = H^{-1}\lambda_i' + T^{-1}H'F'u_i + T^{-1}\hat F'(F - \hat F H^{-1})\lambda_i' + T^{-1}(\hat F - FH)'u_i.\tag{A.1}$$
Under Condition 1, we let $\hat S = H'S + \varepsilon$ with $\varepsilon = o_p(1)$ as $N, T \to \infty$. This implies that $\zeta = \hat S^{-1} - S^{-1}(H')^{-1} = o_p(1)$. Then the estimate for the structural parameter $\lambda_i^{s\prime} = S'\lambda_i'$ is given by:
$$\hat\lambda_i^{s\prime} = \hat S'\hat\lambda_i' = S'\lambda_i' + \varepsilon'H^{-1}\lambda_i' + T^{-1}S'HH'F'u_i + T^{-1}\varepsilon'H'F'u_i + T^{-1}\hat S'\hat F'(F - \hat F H^{-1})\lambda_i' + T^{-1}\hat S'(\hat F - FH)'u_i.\tag{A.2}$$
Note that the last two terms in (A.2) are associated with the factor estimation errors and are shown to be of order $O_p(\delta_{NT}^{-2})$ by using Lemmas B1 and B3 of Bai (2003). Hence, rearranging the terms in (A.2) gives:
$$\hat\lambda_i^{s\prime} - \lambda_i^{s\prime} = T^{-1}S'HH'F'u_i + \varepsilon'H^{-1}\lambda_i' + T^{-1}\varepsilon'H'F'u_i + O_p(\delta_{NT}^{-2}).\tag{A.3}$$
Since the first term in (A.3) is $O_p(T^{-1/2})$ and the second and third terms are $o_p(1)$ by Condition 1, the result for $\lambda_i^s$ is shown. Note that $\sqrt{T}\,O_p(\delta_{NT}^{-2}) = o_p(1)$ under $\sqrt{T}/N \to 0$ as $N, T \to \infty$.
For $\Phi^s$, first we have (Bai and Ng, 2006, proof of Theorem 2¹⁷)
$$\hat F = Z(I_p\otimes H)\,(I_p\otimes H^{-1})\Phi H + eH - (FH - \hat F),\tag{A.4}$$
with $F = [F_{p+1}, F_{p+2}, \dots, F_T]'$, $Z = [F_{-1}, F_{-2}, \dots, F_{-p}]$ with $F_{-j} = [F_{p+1-j}, \dots, F_{T-j}]'$, and $e = [e_{p+1}, e_{p+2}, \dots, e_T]'$. The least squares estimate for the reduced-form parameter $\Phi = [\Phi_1, \Phi_2, \dots, \Phi_p]'$ is given by
$$\hat\Phi = T^{-1}\hat Z'\hat F = (I_p\otimes H^{-1})\Phi H + T^{-1}\big(Z(I_p\otimes H)\big)'eH + T^{-1}\big(\hat Z - Z(I_p\otimes H)\big)'eH - T^{-1}\hat Z'(FH - \hat F).\tag{A.5}$$
Again, the last two terms in (A.5) are factor estimation errors and of order $O_p(\delta_{NT}^{-2})$. The estimate for the structural parameter $\Phi_j^s = S^{-1}\Phi_j S$ ($j = 1, \dots, p$) is then
$$\begin{aligned}
(I_p\otimes\hat S)\,\hat\Phi\,\hat S^{-1}
&= (I_p\otimes S)\Phi S^{-1} + T^{-1}(I_p\otimes S'HH')Z'e\,S^{-1}\\
&\quad+ (I_p\otimes S)\Phi\,\zeta + T^{-1}(I_p\otimes S'HH')Z'e\,\zeta\\
&\quad+ (I_p\otimes\varepsilon')\big[(I_p\otimes(H')^{-1})\Phi + T^{-1}(I_p\otimes H')Z'e\big]S^{-1}\\
&\quad+ (I_p\otimes\varepsilon')\big[(I_p\otimes(H')^{-1})\Phi + T^{-1}(I_p\otimes H')Z'e\big]\zeta + O_p(\delta_{NT}^{-2}),
\end{aligned}\tag{A.6}$$
where $\zeta = \hat S^{-1} - S^{-1}(H')^{-1}$. The second term on the RHS of (A.6) is $O_p(T^{-1/2})$ and, under Condition 1, the terms from the second line to the fourth line are $o_p(1)$; hence we obtain
$$\hat\Phi^s - \Phi^s = O_p(T^{-1/2}) + o_p(1) + o_p(1) + o_p(1) + O_p(\delta_{NT}^{-2}).$$
This proves the result for $\hat\Phi^s$. These results imply the result for any continuous mapping of the structural parameters, i.e. the structural IRF estimate $\hat\varphi_h$.

¹⁷ Bai and Ng (2006) use a VAR of order 1 without loss of generality.
Proof of Theorem 2: First, Theorem 1 shows that $\varphi_{ih}$ is consistently estimated, and we know that it is a function of the reduced-form parameters ($\lambda_i$ and $\Phi$) and $S$. Given that the reduced-form parameter estimates $\hat\lambda_i$ and $\hat\Phi$ are asymptotically normal, up to rotation, as $N, T \to \infty$ and $\sqrt{T}/N \to 0$ under Assumptions 1-4, 6 and 7 (see Bai, 2003, and Bai and Ng, 2006, who prove this under the same or weaker conditions), and given the asymptotic normality of $\hat S$ by Condition 2, the delta method yields the asymptotic normality of the structural IRF, which is free from the rotation, with the variance given in the theorem.
Proof of Lemma 1: Here we omit the lag operator notation and write $\Phi$ for $\Phi(L)L$ without affecting the results of the proof. Let $\Phi$ and $\Psi$ be partitioned as
$$\Phi = \begin{bmatrix}\Phi_{ff} & \Phi_{fy}\\ \Phi_{yf} & \Phi_{yy}\end{bmatrix}\quad\text{and}\quad \Psi = \begin{bmatrix}\Psi_{ff} & \Psi_{fy}\\ \Psi_{yf} & \Psi_{yy}\end{bmatrix}.$$
By a simple manipulation, it can be shown that the components of the moving average coefficients become
$$\begin{aligned}
\Psi_{ff} &= \left\{I - \Phi_{ff} - \Phi_{fy}[I - \Phi_{yy}]^{-1}\Phi_{yf}\right\}^{-1},\\
\Psi_{fy} &= \left\{I - \Phi_{ff} - \Phi_{fy}[I - \Phi_{yy}]^{-1}\Phi_{yf}\right\}^{-1}\Phi_{fy}[I - \Phi_{yy}]^{-1},\\
\Psi_{yf} &= [I - \Phi_{yy}]^{-1}\Phi_{yf}\left\{I - \Phi_{ff} - \Phi_{fy}[I - \Phi_{yy}]^{-1}\Phi_{yf}\right\}^{-1},\\
\Psi_{yy} &= [I - \Phi_{yy}]^{-1} + [I - \Phi_{yy}]^{-1}\Phi_{yf}\left\{I - \Phi_{ff} - \Phi_{fy}[I - \Phi_{yy}]^{-1}\Phi_{yf}\right\}^{-1}\Phi_{fy}[I - \Phi_{yy}]^{-1},
\end{aligned}$$
respectively. Denote the rotated coefficients by $\Psi(H)$. Then we obtain the forms of $\Psi(H)$ simply by plugging $\Phi(H)$ in for $\Phi$ in the above equations. Note that $\Phi_{ff}(H) = H\Phi_{ff}H^{-1}$, $\Phi_{fy}(H) = H\Phi_{fy}$, $\Phi_{yf}(H) = \Phi_{yf}H^{-1}$, and $\Phi_{yy}(H) = \Phi_{yy}$. We also have
$$I - \Phi_{ff}(H) = I - H\Phi_{ff}H^{-1} = H\left(I - \Phi_{ff}\right)H^{-1},$$
$$\left[I - \Phi_{ff}(H)\right]^{-1} = \left[H(I - \Phi_{ff})H^{-1}\right]^{-1} = H\left[I - \Phi_{ff}\right]^{-1}H^{-1}.$$
Hence, $\Psi_{ff}(H) = H\Psi_{ff}H^{-1}$, $\Psi_{fy}(H) = H\Psi_{fy}$, $\Psi_{yf}(H) = \Psi_{yf}H^{-1}$, and $\Psi_{yy}(H) = \Psi_{yy}$. For the parameters in the observation equation, $\lambda^f(H) = \lambda^f H^{-1}$ and $\lambda^y(H) = \lambda^y$. These yield the IRFs
$$\psi_{i,h}^y(H) = \lambda_i^f H^{-1}H\Psi_{fy} + \lambda_i^y\Psi_{yy} = \lambda_i^f\Psi_{fy} + \lambda_i^y\Psi_{yy} = \psi_{i,h}^y,$$
completing the proof.
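The block formulas above are the partitioned (Schur-complement) inverse of $I - \Phi$. A quick numerical check of the identities (our illustration, with an arbitrary stable coefficient matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = 0.3 * rng.standard_normal((3, 3))    # stable coefficient, 2x1 f/y partition
ff, fy = Phi[:2, :2], Phi[:2, 2:]
yf, yy = Phi[2:, :2], Phi[2:, 2:]

Iyy_inv = np.linalg.inv(np.eye(1) - yy)
core = np.linalg.inv(np.eye(2) - ff - fy @ Iyy_inv @ yf)   # common Schur block
Psi_ff = core
Psi_fy = core @ fy @ Iyy_inv
Psi_yf = Iyy_inv @ yf @ core
Psi_yy = Iyy_inv + Iyy_inv @ yf @ core @ fy @ Iyy_inv

Psi = np.linalg.inv(np.eye(3) - Phi)       # direct inverse, for comparison
```

Each `Psi_*` block agrees with the corresponding block of the direct inverse, confirming the formulas used in the proof.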
Lemma A1. Suppose that Assumptions 1-5, 7, 8 and 10 hold. With the idiosyncratic and VAR residuals from the two-step PC estimation, (a) $\operatorname{plim}_{N,T\to\infty}\hat u_i'\hat u_i/T = \sigma_i^2$ for all $i = 1, \dots, N$, and (b) $\operatorname{plim}_{N,T\to\infty}\hat e'\hat e/T = (Q_0')^{-1}\Sigma_e Q_0^{-1}$.
Proof of Lemma A1: (a) The idiosyncratic residuals $\hat u_i$ are expanded as
$$\hat u_i = u_i - (\hat F H^{-1} - F)\lambda_i' - T^{-1}\hat F H'F'u_i - T^{-1}\hat F\hat F'(F - \hat F H^{-1})\lambda_i' - T^{-1}\hat F(\hat F - FH)'u_i.\tag{A.7}$$
Then,
$$\begin{aligned}
\frac{\hat u_i'\hat u_i}{T} &= \frac{u_i'u_i}{T} + \frac{1}{T}\lambda_i(\hat FH^{-1} - F)'(\hat FH^{-1} - F)\lambda_i' + \frac{1}{T^2}u_i'FHH'F'u_i\\
&\quad+ \frac{1}{T^2}\lambda_i(F - \hat FH^{-1})'\hat F\hat F'(F - \hat FH^{-1})\lambda_i' + \frac{1}{T^2}u_i'(\hat F - FH)(\hat F - FH)'u_i + \text{cross terms}\\
&= \frac{u_i'u_i}{T} + O_p(\delta_{NT}^{-2}) + O_p(T^{-1}) + O_p(\delta_{NT}^{-2}) + O_p(\delta_{NT}^{-2}) + \text{cross terms},
\end{aligned}$$
by using Lemmas A1, B1 and B3 in Bai (2003). The cross terms are not presented to conserve space but are all shown to be $o_p(1)$. Hence the result follows from Assumption 10.
(b) First we expand the residuals $\hat e$. The data generating process $F = Z\Phi + e$ can be rewritten, by right-multiplying by the rotation matrix $H$, as
$$FH = Z\Phi H + eH.\tag{A.8}$$
Transforming (A.8) into
$$\hat F = eH + (\hat F - FH) + Z\Phi H$$
yields a stochastic expansion of $\hat e = \hat F - \hat Z\hat\Phi$ such that
$$\hat e = \hat F - \hat Z\hat\Phi = \hat F - \hat Z(T^{-1}\hat Z'\hat F) = eH + (\hat F - FH) - T^{-1}\hat Z\hat Z'eH - T^{-1}\hat Z\hat Z'(FH - \hat F).\tag{A.9}$$
Note that there are two other terms, $Z\Phi H$ and $-T^{-1}\hat Z\hat Z'Z\Phi H$, on the RHS of (A.9), but they cancel out. Then,
$$\begin{aligned}
\frac{\hat e'\hat e}{T} &= H'\frac{e'e}{T}H + \frac{(\hat F - FH)'(\hat F - FH)}{T} + \frac{H'e'\hat Z\hat Z'eH}{T^2} + \frac{(\hat F - FH)'\hat Z\hat Z'(\hat F - FH)}{T^2} + \text{cross terms}\\
&= H'\frac{e'e}{T}H + O_p(\delta_{NT}^{-2}) + o_p(1) + O_p(\delta_{NT}^{-2}) + \text{cross terms}.
\end{aligned}$$
The cross terms are not presented to conserve space; however, they are shown to be $o_p(1)$ using Lemma A1 in Bai and Ng (2006). Hence we can show that
$$\frac{\hat e'\hat e}{T} = H'\frac{e'e}{T}H + o_p(1),\tag{A.10}$$
under $\sqrt{T}/N \to 0$ as $N, T \to \infty$. Using the fact that $H \overset{p}{\to} Q_0^{-1}$ in (A.10), together with Assumption 10, leads to the result.
The following Conditions A1-A3 are high-level assumptions on the bootstrapped statistics, and the bootstrap validity is shown under them. Note that the validity of Condition A1 is partly proven by Gonçalves and Perron (2010).

Condition A1. The following conditions hold, in probability:
1) $T^{-1}(\hat F^* - F^*H^*)'(\hat F^* - F^*H^*) = O_p(\delta_{NT}^{-2})$; 2) $T^{-1}\sum_{t=1}^T(\hat F_t^* - H^{*\prime}F_t^*)F_t^{*\prime} = O_p(\delta_{NT}^{-2})$;
3) $T^{-1}(\hat F^* - F^*H^*)'\hat F^* = O_p(\delta_{NT}^{-2})$; 4) $T^{-1}(\hat F^* - F^*H^*)'e^* = O_p(\delta_{NT}^{-2})$;
5) $T^{-1}(\hat F^* - F^*H^*)'u_i^* = O_p(\delta_{NT}^{-2})$; 6) $T^{-1/2}\sum_{t=1}^T F_t^*u_{it}^* \overset{d}{\to} N(0, \Sigma_{F.u})$;
7) $T^{-1/2}\sum_{t=1}^{T-h}\operatorname{vec}(F_t^*e_{t+h}^{*\prime}) \overset{d}{\to} N(0, \Sigma_{F.e})$; where $H^*$ is defined in (13).

Condition A2. The following condition on the contemporaneous coefficient matrix estimate in bootstrap procedure A holds:
$$\hat S^* - H^{*\prime}\hat S = o_p(1),$$
in probability, as $N, T \to \infty$, and
$$\sqrt{T}\operatorname{vec}(\hat S^* - H^{*\prime}\hat S) \overset{d}{\to} N(0, \Sigma_S),$$
in probability, as $N, T \to \infty$ and $\sqrt{T}/N \to 0$, with $H^*$ defined in (13).

Condition A3. The following condition on the contemporaneous coefficient matrix estimate in bootstrap procedure B holds:
$$\hat S^* - \hat S = o_p(1),$$
in probability, as $N, T \to \infty$, and
$$\sqrt{T}\operatorname{vec}(\hat S^* - \hat S) \overset{d}{\to} N(0, \Sigma_S),$$
in probability, as $N, T \to \infty$.
Proof of Theorem 3. We equivalently show that $\sqrt{T}(\hat\varphi_{ih} - \varphi_{ih}) \overset{d}{\to} N(0, \Sigma_{\varphi,i,h})$ and $\sqrt{T}(\hat\varphi_{ih}^* - \hat\varphi_{ih}) \overset{d}{\to} N(0, \Sigma_{\varphi,i,h})$, in probability $P^*$ that converges to one, as $N, T \to \infty$ given $\sqrt{T}/N \to 0$. In the following, we first confirm the identification of $\varphi_{ih}$ in the bootstrap probability space by showing it for the individual parameters $\lambda_i^s$ and $\Phi^s$. Second, we show the asymptotic normality of $(\hat\varphi_{ih}^* - \hat\varphi_{ih})$. In doing so, we introduce hypothetical sign-adjusted reduced-form parameters via the $r\times r$ matrix $Q_0$, the limit of the bootstrap analogue of the rotation matrix $H^{*-1}$ given the original sample. It is shown that $Q_0$ is a matrix which has $+1$ or $-1$ in the diagonal elements and zeros otherwise under the common regularity conditions. Note that this matrix is not available to the researcher, but it is not needed in practice if the structural IRF is identified, since all $Q_0$'s eventually cancel out. It is introduced only for convenience of the proof.

We start by showing consistency. Let $\varepsilon^* = \hat S^* - H^{*\prime}\hat S$ and $\zeta^* = \hat S^{*-1} - \hat S^{-1}(H^{*\prime})^{-1}$. For $\lambda_i^s$, the bootstrap estimate is given by:
$$\hat\lambda_i^{*s\prime} - \hat\lambda_i^{s\prime} = T^{-1}\hat S'H^*H^{*\prime}F^{*\prime}u_i^* + \varepsilon^{*\prime}H^{*-1}\hat\lambda_i' + T^{-1}\varepsilon^{*\prime}H^{*\prime}F^{*\prime}u_i^* + T^{-1}\hat S^{*\prime}\hat F^{*\prime}(F^* - \hat F^*H^{*-1})\hat\lambda_i' + T^{-1}\hat S^{*\prime}(\hat F^* - F^*H^*)'u_i^*.\tag{A.11}$$
Note that we have the same expression as (A.2), with the original parameters, factors, rotation, and errors replaced by their bootstrap counterparts. The first term in (A.11) is $O_p(T^{-1/2})$ by Condition A1(6), the second and third terms are $o_p(1)$ by Condition A2, and the last two terms are $O_p(\delta_{NT}^{-2})$ by Conditions A1(3) and (5), in probability. Hence the RHS of (A.11) is $o_p(1)$, in probability, as $N, T \to \infty$. For $\Phi^s$, it follows that
$$\begin{aligned}
\hat\Phi^{*s} - \hat\Phi^s
&= T^{-1}\big(I_p\otimes\hat S'H^*H^{*\prime}\big)Z^{*\prime}e^*\hat S^{-1} + (I_p\otimes\hat S)\hat\Phi\zeta^* + T^{-1}\big(I_p\otimes\hat S'H^*H^{*\prime}\big)Z^{*\prime}e^*\zeta^*\\
&\quad+ (I_p\otimes\varepsilon^{*\prime})\big[(I_p\otimes(H^{*\prime})^{-1})\hat\Phi + T^{-1}(I_p\otimes H^{*\prime})Z^{*\prime}e^*\big]\hat S^{-1}\\
&\quad+ (I_p\otimes\varepsilon^{*\prime})\big[(I_p\otimes(H^{*\prime})^{-1})\hat\Phi + T^{-1}(I_p\otimes H^{*\prime})Z^{*\prime}e^*\big]\zeta^* + O_p(\delta_{NT}^{-2}),
\end{aligned}\tag{A.12}$$
in probability, under Condition A1. Now (A.12) is analogous to (A.6). It is easily shown that all the terms are $o_p(1)$, in probability, under Conditions A1 and A2. These two results imply $\hat\varphi_{ih}^* - \hat\varphi_{ih} = o_p(1)$, in probability, as $N, T \to \infty$. Hence we have established that the bootstrap structural IRF estimate converges to the original estimate in probability $P^*$.
Next we move on to the limit distributions. We hypothetically consider the "sign-adjusted" version of the reduced-form parameter estimates in the bootstrap space by multiplying by $Q_0$:
$$\begin{aligned}
\sqrt{T}\big(Q_0\hat\lambda_i^{*\prime} - Q_0H^{*-1}\hat\lambda_i'\big)
&= T^{-1/2}Q_0H^{*\prime}F^{*\prime}u_i^* + T^{-1/2}Q_0\hat F^{*\prime}(F^* - \hat F^*H^{*-1})\hat\lambda_i' + T^{-1/2}Q_0(\hat F^* - F^*H^*)'u_i^*\\
&= T^{-1/2}Q_0H^{*\prime}F^{*\prime}u_i^* + O_p(T^{1/2}\delta_{NT}^{-2}) \overset{d}{\to} N\big(0,\ \sigma_i^2(Q_0)^{-1}\Sigma_F Q_0^{-1}\big),
\end{aligned}\tag{A.13}$$
in probability, as $N, T \to \infty$ and $\sqrt{T}/N \to 0$, under Condition A1(6). We used Conditions A1(3) and (5) for the second equality, $O_p(T^{1/2}\delta_{NT}^{-2}) = o_p(1)$ under $\sqrt{T}/N \to 0$, and $Q_0H^{*\prime} - I_r = o_p(1)$, in probability, by definition, for the limit result in (A.13). To obtain the final form of the limit covariance, we used the implied independence between $F_t^*$ and $u_t^*$, $E^*\big(F^{*\prime}F^*/T\big) = \hat F'\hat F/T$, $E^*\big(u_i^{*\prime}u_i^*/T\big) = \hat u_i'\hat u_i/T$, and Lemma A1(a). For the VAR parameters,
$$\begin{aligned}
&\sqrt{T}\big[(I_p\otimes Q_0^{-1})\hat\Phi^*Q_0 - (I_p\otimes Q_0^{-1}H^{*-1})\hat\Phi H^*Q_0\big]\\
&\quad= T^{-1/2}\big((I_p\otimes Q_0^{-1}H^{*\prime})Z^*\big)'e^*H^*Q_0 + T^{-1/2}(I_p\otimes Q_0^{-1})\big(\hat Z^* - Z^*(I_p\otimes H^*)\big)'e^*H^*Q_0\\
&\qquad- T^{-1/2}(I_p\otimes Q_0^{-1})\hat Z^{*\prime}(F^*H^* - \hat F^*)Q_0\\
&\quad= T^{-1/2}(I_p\otimes Q_0^{-1}H^{*\prime})Z^{*\prime}e^*H^*Q_0 + O_p(T^{1/2}\delta_{NT}^{-2}),
\end{aligned}$$
in probability, where Conditions A1(3) and (4) are used. Hence,
$$\sqrt{T}\operatorname{vec}\big[(I_p\otimes Q_0^{-1})\hat\Phi^*Q_0 - (I_p\otimes Q_0^{-1}H^{*-1})\hat\Phi H^*Q_0\big] \overset{d}{\to} N\Big(0,\ I_p\otimes\big((Q_0)^{-1}\Sigma_F Q_0^{-1}\otimes(Q_0)^{-1}\Sigma_e Q_0^{-1}\big)\Big),$$
in probability, under $\sqrt{T}/N \to 0$ as $N, T \to \infty$, under Condition A1(7). Note that we can also use $Q_0^{-1}H^{*\prime} - I_r = o_p(1)$, in probability, since the probability limit of $H^{*\prime}$ under $P^*$ is $Q_0^{-1}$ and $Q_0^{-1}$ is a diagonal matrix with elements of $+1$ or $-1$. We also used $E^*\big(F^{*\prime}F^*/T\big) = \hat F'\hat F/T$, $E^*\big(e^{*\prime}e^*/T\big) = \hat e'\hat e/T$, the conditional independence of $e^*$ given $Z^*$, and Lemma A1(b). In addition, Condition A2 ensures $\sqrt{T}\operatorname{vec}(\hat S^* - H^{*\prime}\hat S) \overset{d}{\to} N(0, \Sigma_S)$, in probability, under the same conditions. It is noted that since the structural IRF is identified, all the $Q_0$'s eventually cancel out in the construction of the IRF estimate. For the original estimates, it is straightforward to show from (A.1) and (A.5) that
$$\sqrt{T}\big(\hat\lambda_i' - H^{-1}\lambda_i'\big) \overset{d}{\to} N\big(0,\ \sigma_i^2(Q_0)^{-1}\Sigma_F Q_0^{-1}\big),$$
$$\sqrt{T}\operatorname{vec}\big[\hat\Phi - (I_p\otimes H^{-1})\Phi H\big] \overset{d}{\to} N\Big(0,\ I_p\otimes\big((Q_0)^{-1}\Sigma_F Q_0^{-1}\otimes(Q_0)^{-1}\Sigma_e Q_0^{-1}\big)\Big),$$
given Assumption 10. The proof is completed.
Proof of Theorem 4. Under procedure B, $H^*$ does not show up. Hence we do not need to introduce the sign-adjusted parameter estimates. Also, the expansions of the bootstrap parameter estimates have fewer terms, in the absence of the factor estimation errors. For $\lambda_i^s$,
$$\hat\lambda_i^{*s\prime} - \hat\lambda_i^{s\prime} = T^{-1}\hat S'F^{*\prime}u_i^* + \varepsilon^{*\prime}\hat\lambda_i' + T^{-1}\varepsilon^{*\prime}F^{*\prime}u_i^* = o_p(1),\tag{A.14}$$
in probability, under Condition A3, as $N, T \to \infty$. Note that we do not have the factor estimation error terms in (A.14). Since the first term is $O_p(T^{-1/2})$ under Condition A1(6), and the second and third terms are also $o_p(1)$ under Condition A3, in probability, we obtain the above result. For $\Phi^s$,
$$\begin{aligned}
\hat\Phi^{*s} - \hat\Phi^s &= T^{-1}(I_p\otimes\hat S)Z^{*\prime}e^*\hat S^{-1} + (I_p\otimes\hat S)\hat\Phi\zeta^* + T^{-1}(I_p\otimes\hat S)Z^{*\prime}e^*\zeta^*\\
&\quad+ (I_p\otimes\varepsilon^{*\prime})\big[\hat\Phi + T^{-1}Z^{*\prime}e^*\big]\hat S^{-1} + (I_p\otimes\varepsilon^{*\prime})\big[\hat\Phi + T^{-1}Z^{*\prime}e^*\big]\zeta^*\\
&= o_p(1),
\end{aligned}$$
in probability, under Conditions A1(7) and A3. Next, consider the asymptotic distributions of the reduced-form parameters. For $\lambda_i$,
$$\sqrt{T}\big(\hat\lambda_i^{*\prime} - \hat\lambda_i'\big) = T^{-1/2}F^{*\prime}u_i^* \overset{d}{\to} N\big(0,\ \sigma_i^2(Q_0)^{-1}\Sigma_F Q_0^{-1}\big),\tag{A.15}$$
in probability, as $N, T \to \infty$, by Condition A1(6), Condition A3, and Lemma A1(a). For $\Phi$,
$$\sqrt{T}\operatorname{vec}\big(\hat\Phi^* - \hat\Phi\big) = T^{-1/2}\operatorname{vec}\big(Z^{*\prime}e^*\big) \overset{d}{\to} N\Big(0,\ I_p\otimes\big((Q_0)^{-1}\Sigma_F Q_0^{-1}\otimes(Q_0)^{-1}\Sigma_e Q_0^{-1}\big)\Big),\tag{A.16}$$
in probability, as $N, T \to \infty$, by Condition A1(7), Condition A3, and Lemma A1(b). Given (A.15), (A.16) and Condition A3, the rest of the proof follows Theorem 3.
Appendix B: Proof of Condition 1
In this appendix, we show that under Assumptions 1-9, $\hat{S}$ computed by ID1, ID2, and ID3 satisfies Condition 1, respectively. We start with ID3 since it is the simplest case. To this end, we show that 1) the limit of $H'S$ is a triangular matrix and the signs of its diagonal elements are known, and 2) $\hat{e}'\hat{e}/T - H'SS'H = o_p(1)$. We can trivially show 1) since $H'S \to (Q')^{-1}S$ and Assumption 9.3 holds. For 2), as shown in the proof of Lemma A1,
$$\frac{\hat{e}'\hat{e}}{T} = H'\frac{e'e}{T}H + o_p(1) = H'SS'H + o_p(1).$$
We used the definition $e = S\eta$ and Assumption 8 in the second equality. Since $\mathrm{Chol}(X'X) = X$ for a triangular matrix $X$ with positive diagonal elements, 2) is shown. This completes the proof. We next consider the ID1 case. Consider the part inside the Cholesky factorization:
$$\hat{\varphi}_{1:r}\frac{\hat{e}'\hat{e}}{T}\hat{\varphi}_{1:r}' = \hat{\varphi}_{1:r}H'SS'H\hat{\varphi}_{1:r}' + \hat{\varphi}_{1:r}\#\hat{\varphi}_{1:r}',$$
$$= \big[\varphi_{1:r}(H')^{-1}\big]H'SS'H\big[\varphi_{1:r}(H')^{-1}\big]' + \big[\hat{\varphi}_{1:r} - \varphi_{1:r}(H')^{-1}\big]H'SS'H\hat{\varphi}_{1:r}' + \varphi_{1:r}(H')^{-1}H'SS'H\big[\hat{\varphi}_{1:r} - \varphi_{1:r}(H')^{-1}\big]' + \hat{\varphi}_{1:r}\#\hat{\varphi}_{1:r}',$$
$$= \varphi_{1:r}^{s}\varphi_{1:r}^{s\prime} + o_p(1),$$
with $\# \equiv \hat{e}'\hat{e}/T - H'SS'H = o_p(1)$ and since $\hat{\varphi}_{1:r} - \varphi_{1:r}(H')^{-1} = o_p(1)$ by Theorem 2 in Bai (2003). Since $\varphi_{1:r}^{s}$ is a triangular matrix with positive diagonal elements by Assumption 9.1, when one uses ID1,
$$\hat{S} = \hat{\varphi}_{1:r}^{-1}\,\mathrm{Chol}\!\left(\hat{\varphi}_{1:r}\frac{\hat{e}'\hat{e}}{T}\hat{\varphi}_{1:r}'\right) = \hat{\varphi}_{1:r}^{-1}\big[\varphi_{1:r}^{s} + o_p(1)\big] = H'\varphi_{1:r}^{-1}\varphi_{1:r}S + \underbrace{\big(\hat{\varphi}_{1:r}^{-1} - H'\varphi_{1:r}^{-1}\big)}_{o_p(1)}\varphi_{1:r}S + o_p(1) = H'S + o_p(1).$$
This completes the proof. Finally, the ID2 case follows the ID1 case exactly, with the short-run IRF $s_{1:r}\ (= \varphi_{1:r,0})$ replaced by the long-run IRF $\varphi_{1:r,\infty}$, and an estimate of the former replaced by an estimate of the latter. Since $\hat{\varphi}_{1:r,\infty} - \varphi_{1:r,\infty} = o_p(1)$ is straightforward from Theorem 1, the whole discussion of the ID1 case goes through with ID2.
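The identification step above rests on the uniqueness of the Cholesky factor: a triangular matrix with positive diagonal elements is recovered exactly from its own outer product. A minimal numerical check of this property, stated in the lower-triangular convention used by NumPy (the matrix values are arbitrary and purely illustrative):

```python
import numpy as np

# An arbitrary lower-triangular matrix with positive diagonal elements.
X = np.array([[2.0, 0.0, 0.0],
              [0.5, 1.5, 0.0],
              [-0.3, 0.8, 1.0]])

# np.linalg.cholesky returns the unique lower-triangular factor L with
# positive diagonal such that L @ L.T equals its argument, so it must
# recover X itself from X @ X.T.
L = np.linalg.cholesky(X @ X.T)
print(np.allclose(L, X))  # True
```

This uniqueness is exactly what lets the proof read off $\varphi_{1:r}^{s}$ from the Cholesky factorization of $\varphi_{1:r}^{s}\varphi_{1:r}^{s\prime}$.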
Appendix C: Distribution of $\hat{S}$
In this appendix, we illustrate how the distribution of $\hat{S}$ is affected by the distribution of the reduced-form IRF estimates which are used for identification. To this end, we conduct a simple simulation experiment: first, generate two scalar random variables $\hat{\varphi} \sim N(\varphi_0, v_\varphi)$ and $\hat{s} \sim N(s_0, v_s)$. In this setting, estimating $S$ by ID1 or ID2 is equivalent to obtaining an estimate of $s_0$ by $\hat{s}_{12} = \hat{\varphi}^{-1}\sqrt{(\hat{\varphi}\hat{s})^2}$, and estimating $S$ by $\hat{s}_3 = \hat{s}$ is the same as using the ID3 identification scheme. The table below shows the simulated mean, variance, and skewness of $\hat{s}_{12}$ and $\hat{s}_3$ with 10,000 replications. We set $\varphi_0 = (0.1, 0.5, 1.0)$, $s_0 = 1$, $v_\varphi = v_s = 0.1$, and no correlation between $\hat{\varphi}$ and $\hat{s}$ is assumed.
                            s_12                              s_3
              phi_0=0.1   phi_0=0.5   phi_0=1.0
  mean           0.25        0.89        1.00                1.00
  variance       1.05        0.31        0.10                0.10
  skewness      -0.04       -2.17       -0.25               -0.01
Note that the simulated variance of the ID1 or ID2 estimate (represented by $\hat{s}_{12}$) is affected by the value of $\varphi_0$: the smaller $\varphi_0$ is, the more the distribution is contaminated. It is also observed that $\hat{s}_{12}$ is negatively skewed. We conclude that with the ID1 and ID2 identifying schemes, the distribution of the IRFs may be contaminated when the means of the reduced-form IRFs are close to zero.
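The experiment above can be replicated in a few lines. The sketch below is illustrative (variable names are ours); it draws $\hat{\varphi}$ and $\hat{s}$ independently and computes the simulated moments of $\hat{s}_{12} = \hat{\varphi}^{-1}\sqrt{(\hat{\varphi}\hat{s})^2}$ and $\hat{s}_3 = \hat{s}$:

```python
import numpy as np

rng = np.random.default_rng(0)
R, s0, v = 10_000, 1.0, 0.1  # replications, true s, common variance

for phi0 in (0.1, 0.5, 1.0):
    phi = rng.normal(phi0, np.sqrt(v), R)  # reduced-form IRF draws
    s = rng.normal(s0, np.sqrt(v), R)      # draws of s
    s12 = np.sqrt((phi * s) ** 2) / phi    # ID1/ID2-type estimate
    skew = np.mean((s12 - s12.mean()) ** 3) / s12.std() ** 3
    print(f"phi0={phi0}: mean={s12.mean():.2f}, "
          f"var={s12.var():.2f}, skew={skew:.2f}")
```

With $\varphi_0 = 0.1$ the sign of $\hat{\varphi}$ is frequently wrong, which is what drives the mean toward zero and inflates the variance in the table above.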
Appendix D: Finite sample bias-correction procedure for bootstrap
For both of the simulation studies presented in this paper, I applied the following finite sample bias-correction procedure, in the spirit of Kilian (1998), for the VAR parameter $\Phi$. The important difference of our setup from Kilian (1998) is that the bias is estimated by using $\hat{H}\hat{\Phi}_j\hat{H}^{-1}$ $(j = 1, \ldots, p)$. This estimation is straightforward to implement given the asymptotic results for the two-step PC estimates described in the main text.
1. Estimate the model and generate $R_b$ bootstrap replications $\hat{\Phi}^{*}_{i_b}$, $i_b = 1, 2, \ldots, R_b$. Then approximate the bias $B_j = E(\hat{\Phi}_j - H\Phi_j H^{-1})$ by
$$B_j^b = \frac{1}{R_b}\sum_{i_b=1}^{R_b}\big(\hat{\Phi}^{*}_{j,i_b} - \hat{H}\hat{\Phi}_j\hat{H}^{-1}\big),$$
where $H$ is estimated by regressing $\hat{F}_t$ on $F_t$.
2. Calculate the modulus of the largest eigenvalue of the companion matrix
$$\begin{bmatrix}
\hat{\Phi}_1 - B_1^b & \cdots & \hat{\Phi}_{p-1} - B_{p-1}^b & \hat{\Phi}_p - B_p^b \\
I_k & \cdots & 0 & 0 \\
\vdots & \ddots & \vdots & \vdots \\
0 & \cdots & I_k & 0
\end{bmatrix},$$
and if it is less than 1, construct the bias-corrected coefficient estimate $\tilde{\Phi} = \hat{\Phi} - B^b$. If not, let $\tilde{\Phi} = \hat{\Phi}$. This preserves the stationarity of the generated process.
3. Generate the bias-corrected bootstrap replications for the IRFs by using $\hat{\Lambda}$, $\tilde{\Phi}$, $\hat{e}_t$, and $\hat{u}_t$.
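The stationarity check in step 2 can be sketched as follows. This is an illustrative implementation under our notation, not the author's code; `Phi_hat` is the list of estimated lag matrices and `B` the list of estimated biases:

```python
import numpy as np

def bias_correct(Phi_hat, B):
    """Kilian (1998)-style correction: subtract the estimated bias only
    if the corrected VAR stays stationary (largest companion-matrix
    eigenvalue modulus < 1); otherwise keep the original estimates."""
    k = Phi_hat[0].shape[0]          # dimension of the VAR
    p = len(Phi_hat)                 # lag order
    corrected = [Phi - b for Phi, b in zip(Phi_hat, B)]

    # Build the (k*p) x (k*p) companion matrix of the corrected VAR.
    comp = np.zeros((k * p, k * p))
    comp[:k, :] = np.hstack(corrected)
    comp[k:, :-k] = np.eye(k * (p - 1))

    if np.abs(np.linalg.eigvals(comp)).max() < 1.0:
        return corrected
    return Phi_hat

# Example: a stationary bivariate VAR(2); the small bias is subtracted.
Phi = [np.array([[0.5, 0.1], [0.0, 0.4]]),
       np.array([[0.1, 0.0], [0.05, 0.1]])]
B = [np.full((2, 2), 0.01), np.full((2, 2), 0.01)]
Phi_tilde = bias_correct(Phi, B)
```

If subtracting the bias would push a root on or outside the unit circle, the function simply returns the uncorrected coefficients, which is what preserves stationarity of the generated bootstrap process.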
References
Acconcia, A., and S. Simonelli (2008): "Interpreting aggregate fluctuations looking at sectors," Journal of Economic Dynamics & Control, 32(9), 3009-3031.
Ang, A., and M. Piazzesi (2003): "A no-arbitrage vector autoregression of term structure dynamics with macroeconomic and latent variables," Journal of Monetary Economics, 50, 745-787.
Bai, J. (2003): "Inferential theory for factor models of large dimensions," Econometrica, 71, 135-171.
Bai, J., and S. Ng (2006): "Confidence intervals for diffusion index forecasts and inference for factor-augmented regressions," Econometrica, 74, 1133-1150.
Bai, J., and S. Ng (2010): "Principal components estimation and identification of the factors," Working Paper.
Belviso, F., and F. Milani (2006): "Structural factor-augmented VARs (SFAVARs) and the effects of monetary policy," Topics in Macroeconomics, 6(3), The Berkeley Electronic Press.
Bernanke, B., and J. Boivin (2003): "Monetary policy in a data-rich environment," Journal of Monetary Economics, 50, 525-546.
Bernanke, B., J. Boivin, and P. Eliasz (2005): "Measuring the effects of monetary policy: a factor-augmented vector autoregressive (FAVAR) approach," The Quarterly Journal of Economics, 120, 387-422.
Blanchard, O.J., and D. Quah (1989): "The dynamic effects of aggregate demand and supply disturbances," The American Economic Review, 79(4), 655-673.
Boivin, J., M.P. Giannoni, and I. Mihov (2007): "Sticky prices and monetary policy: evidence from disaggregated U.S. data," NBER Working Paper 12824.
Boivin, J., M.P. Giannoni, and D. Stevanovic (2010): "Dynamic effects of credit shocks in a data-rich environment," Working Paper.
Cattell, R.B. (1978): The Scientific Use of Factor Analysis in Behavioral and Life Sciences. New York: Plenum Press.
Christiano, L., M. Eichenbaum, and C. Evans (1996): "The effects of monetary policy shocks: evidence from the flow of funds," Review of Economics and Statistics, 78(1), 16-34.
Davison, A.C., and D.V. Hinkley (1997): Bootstrap Methods and Their Application, Cambridge University Press.
Dufour, J.-M., and D. Stevanović (2010): "Factor-augmented VARMA models: identification, estimation, forecasting and impulse responses," Working Paper.
Efron, B., and R.J. Tibshirani (1994): An Introduction to the Bootstrap, Chapman & Hall/CRC.
Forni, M., M. Hallin, M. Lippi, and L. Reichlin (2000): "The generalized dynamic-factor model: identification and estimation," The Review of Economics and Statistics, 82(4), 540-554.
Forni, M., M. Hallin, M. Lippi, and L. Reichlin (2005): "The generalized dynamic factor model: one-sided estimation and forecasting," Journal of the American Statistical Association, 100(471), 830-840.
Gali, J. (1999): "Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations?," American Economic Review, 89(1), 249-271.
Giannone, D., L. Reichlin, and L. Sala (2005): "Monetary policy in real time," in NBER Macroeconomics Annual, pp. 161-200, The MIT Press, Cambridge.
Gilchrist, S., V. Yankov, and E. Zakrajšek (2009): "Credit market shocks and economic fluctuations: evidence from corporate bond and stock markets," Journal of Monetary Economics, 56, 471-493.
Gonçalves, S., and B. Perron (2010): "Bootstrapping factor-augmented regression models," Working Paper.
Hall, P. (1992): The Bootstrap and Edgeworth Expansion, Springer-Verlag.
Kapetanios, G., and M. Marcellino (2006): "Impulse response functions from structural dynamic factor models: a Monte Carlo evaluation," Working Paper 306, IGIER.
Kilian, L. (1998): "Small-sample confidence intervals for impulse response functions," The Review of Economics and Statistics, 80, 218-230.
Kilian, L. (1999): "Finite-sample properties of percentile and percentile-t bootstrap confidence intervals for impulse responses," The Review of Economics and Statistics, 81(4), 652-660.
Kilian, L. (2001): "Impulse response analysis in vector autoregressions with unknown lag order," Journal of Forecasting, 20, 161-179.
Komunjer, I., and S. Ng (2010): "Dynamic identification of DSGE models," Working Paper.
Ludvigson, S.C., and S. Ng (2009): "Macro factors in bond risk premia," Review of Financial Studies, 22(12), 5027-5067.
Lütkepohl, H. (1990): "Asymptotic distributions of impulse response functions and forecast error variance decompositions of vector autoregressive models," Review of Economics and Statistics, 72, 116-125.
Lütkepohl, H. (2005): New Introduction to Multiple Time Series Analysis, Springer.
Moench, E. (2008): "Forecasting the yield curve in a data-rich environment: a no-arbitrage factor-augmented VAR approach," Journal of Econometrics, 146, 26-43.
Rubio-Ramirez, J., D.F. Waggoner, and T. Zha (2010): "Structural vector autoregressions: theory of identification and algorithms for inference," The Review of Economic Studies, 77(2), 665-696.
Sargent, T.J., and C.A. Sims (1977): "Business cycle modeling without pretending to have too much a priori economic theory," FRB Working Paper #55.
Stock, J.H., and M.W. Watson (1998): "Diffusion indexes," Working Paper 6702, National Bureau of Economic Research.
Stock, J.H., and M.W. Watson (2002): "Forecasting using principal components from a large number of predictors," Journal of the American Statistical Association, 97, 1167-1179.
Stock, J.H., and M.W. Watson (2005): "Implications of dynamic factor models for VAR analysis," Working Paper 11467, National Bureau of Economic Research.
Stock, J.H., and M.W. Watson (2008): "Forecasting in dynamic factor models subject to structural instability," in The Methodology and Practice of Econometrics: A Festschrift in Honour of Professor David F. Hendry, Jennifer Castle and Neil Shephard (eds.), Oxford: Oxford University Press.
Table 1-a. Impulse responses to latent factor innovations (ID1, normal errors)

1) 95% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     95.6  96.0  95.4  94.5  93.7      7.85  4.69  2.95  1.96  1.32
  T=40,  N=200    96.1  96.3  96.4  96.1  95.8      7.41  4.51  3.05  2.07  1.44
  T=120, N=50     95.2  97.9  97.9  96.8  95.8      2.71  1.59  1.12  0.73  0.49
  T=120, N=200    93.8  97.3  97.2  95.8  95.4      2.00  1.52  1.13  0.81  0.57

2) 90% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     91.0  92.4  90.5  89.4  87.8      4.00  2.38  1.60  1.08  0.76
  T=40,  N=200    91.0  92.4  92.4  91.9  91.4      3.65  2.32  1.62  1.17  0.82
  T=120, N=50     90.7  95.3  95.4  94.1  93.1      1.56  1.01  0.72  0.50  0.34
  T=120, N=200    87.2  94.0  94.2  93.5  92.8      1.31  1.00  0.81  0.59  0.42

3) 85% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     84.4  87.0  85.9  84.5  83.3      2.64  1.65  1.15  0.80  0.56
  T=40,  N=200    84.6  88.2  88.0  87.0  86.3      2.43  1.62  1.19  0.87  0.63
  T=120, N=50     84.0  84.2  81.9  81.0  82.8      1.17  0.79  0.58  0.42  0.28
  T=120, N=200    83.5  83.8  81.3  89.6  89.3      1.00  0.82  0.65  0.48  0.34
Table 1-b. Impulse responses to latent factor innovations (ID1, chi-squared errors)

1) 95% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     95.4  95.6  95.8  95.2  94.9      7.33  4.05  2.45  1.57  1.05
  T=40,  N=200    95.1  95.5  95.7  95.5  94.4      7.01  4.02  2.43  1.58  1.10
  T=120, N=50     95.5  97.1  96.8  95.7  93.6      2.97  1.55  0.97  0.61  0.41
  T=120, N=200    95.5  97.5  97.4  96.8  95.6      2.16  1.25  0.83  0.55  0.39

2) 90% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     90.1  91.7  91.3  89.5  87.6      3.68  2.08  1.31  0.87  0.60
  T=40,  N=200    89.5  91.0  91.1  89.9  88.3      3.38  2.04  1.30  0.90  0.62
  T=120, N=50     90.9  93.7  93.5  91.6  89.7      1.63  0.94  0.61  0.41  0.27
  T=120, N=200    90.5  94.0  94.5  93.9  92.3      1.33  0.79  0.55  0.39  0.27

3) 85% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     84.2  86.4  85.2  82.7  79.4      2.47  1.41  0.91  0.62  0.42
  T=40,  N=200    83.9  86.4  85.4  83.3  80.3      2.26  1.39  0.94  0.65  0.45
  T=120, N=50     83.1  87.6  88.7  86.5  85.6      1.22  0.70  0.47  0.33  0.22
  T=120, N=200    83.2  89.7  90.7  90.3  88.6      1.00  0.63  0.44  0.31  0.22
Table 2-a. Impulse responses to observed factor innovations (normal errors)

1) 95% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     93.1  94.5  94.3  98.0  94.7      2.28  2.12  1.40  0.96  0.65
  T=40,  N=200    91.4  93.7  94.5  97.9  94.5      2.30  2.11  1.44  0.97  0.67
  T=120, N=50     94.4  94.4  93.1  95.6  93.7      1.39  1.25  0.90  0.61  0.40
  T=120, N=200    94.8  94.0  94.0  95.8  93.2      1.37  1.22  0.89  0.61  0.40

2) 90% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     88.5  88.7  84.9  91.1  91.3      1.91  1.77  1.16  0.76  0.50
  T=40,  N=200    85.2  87.5  87.1  92.9  91.5      1.92  1.77  1.18  0.78  0.50
  T=120, N=50     89.9  90.4  88.3  90.3  88.4      1.16  1.05  0.75  0.50  0.32
  T=120, N=200    90.1  89.3  87.5  89.8  89.0      1.15  1.02  0.74  0.49  0.32

3) 85% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     84.0  83.1  77.9  81.6  81.8      1.67  1.53  0.99  0.64  0.40
  T=40,  N=200    78.9  81.2  78.1  84.2  85.2      1.67  1.54  1.02  0.65  0.41
  T=120, N=50     85.2  85.0  83.4  83.6  82.3      1.01  0.92  0.65  0.43  0.27
  T=120, N=200    84.3  84.0  81.2  83.3  82.4      1.00  0.90  0.65  0.43  0.28
Table 2-b. Impulse responses to observed factor innovations (chi-squared errors)

1) 95% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     96.2  96.3  95.9  98.5  94.4      2.44  2.25  1.51  0.98  0.65
  T=40,  N=200    95.3  95.8  96.2  97.2  94.6      2.42  2.20  1.50  1.02  0.68
  T=120, N=50     96.7  96.0  95.0  97.4  94.0      1.45  1.29  0.94  0.63  0.41
  T=120, N=200    96.3  95.7  94.5  96.0  93.7      1.39  1.23  0.89  0.60  0.39

2) 90% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     91.4  91.2  89.0  93.5  92.3      1.98  1.83  1.22  0.76  0.48
  T=40,  N=200    89.7  91.7  89.8  94.8  92.6      1.99  1.80  1.22  0.76  0.48
  T=120, N=50     93.4  90.9  90.1  91.8  90.9      1.20  1.07  0.77  0.51  0.33
  T=120, N=200    92.4  91.1  88.6  90.5  90.6      1.15  1.02  0.73  0.49  0.32

3) 85% nominal level
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  T=40,  N=50     86.5  85.8  81.4  85.5  86.4      1.71  1.57  1.04  0.64  0.38
  T=40,  N=200    86.0  87.6  83.8  86.6  86.0      1.70  1.56  1.02  0.66  0.41
  T=120, N=50     87.6  85.1  83.4  85.5  84.7      1.04  0.93  0.67  0.44  0.28
  T=120, N=200    88.7  86.2  83.9  85.2  83.0      1.00  0.89  0.63  0.42  0.27
Table 3. Comparison with bootstrapping without factor estimation (ID1, normal errors, 95% level)

1) diagonal elements 0.4
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  N=10   A        93.4  95.4  95.0  91.3  86.2      5.24  2.10  1.08  0.62  0.36
         B        80.1  92.9  93.4  88.7  83.6      1.06  0.64  0.44  0.31  0.21
         N        37.7  68.3  78.0  78.0  77.5      0.49  0.43  0.32  0.22  0.14
  N=30   A        95.4  96.5  96.7  95.8  93.6      2.79  1.42  0.88  0.55  0.38
         B        90.1  94.1  95.1  93.8  91.4      1.28  0.77  0.54  0.40  0.29
         N        50.0  71.2  82.0  86.5  86.1      0.50  0.42  0.33  0.25  0.17
  N=50   A        95.3  96.6  96.7  94.7  93.1      2.46  1.27  0.80  0.52  0.35
         B        92.4  95.4  95.7  94.0  92.0      1.38  0.74  0.52  0.38  0.28
         N        53.1  69.7  79.4  86.8  87.8      0.50  0.41  0.33  0.25  0.17

2) diagonal elements 0.7
                      Coverage Ratio (%)                Width of C.I. (median)
  h:               0     1     2     3     4         0     1     2     3     4
  N=10   A        89.1  89.3  92.2  94.4  95.3      2.94  2.10  2.01  2.00  1.95
         B        64.4  68.4  75.4  82.8  88.1      0.77  0.66  0.70  0.76  0.82
         N        31.7  46.4  58.3  65.9  70.1      0.47  0.44  0.46  0.47  0.48
  N=30   A        91.9  93.4  95.5  96.6  96.7      1.39  1.20  1.24  1.28  1.35
         B        81.9  83.4  87.9  92.6  94.4      0.75  0.67  0.74  0.80  0.84
         N        47.0  53.6  60.5  67.8  72.3      0.46  0.42  0.44  0.45  0.45
  N=50   A        90.3  92.3  94.7  96.2  96.4      1.20  1.08  1.17  1.21  1.24
         B        85.4  86.2  89.9  93.7  94.5      0.80  0.70  0.74  0.81  0.87
         N        51.0  58.6  63.1  68.5  73.3      0.46  0.43  0.43  0.44  0.45
Table 4-a. Coverage properties for calibrated model (full sample)

1) Impulse responses to factor innovations
                              h:    0      1      2      3      4      5      6      7
  Price               cov        94.6   94.7   96.3   95.8   95.2   94.5   94.3   95.4
                      length      4.11   1.55   0.77   0.80   0.97   1.16   1.15   1.09
  10-yr Tbill rate    cov        94.6   94.8   95.0   94.6   94.5   94.8   94.3   94.3
                      length      6.79   1.25   2.01   3.41   3.34   3.10   2.64   2.16
  Production          cov        94.5   94.3   94.7   94.3   94.0   94.4   94.4   95.0
                      length      2.22   8.38  10.04   9.63   6.50   4.24   2.72   1.59
  Unemployment rate   cov        93.5   94.4   94.6   94.6   94.5   94.3   94.5   94.6
                      length      0.27   7.32   9.64   9.72   7.02   4.79   3.28   1.82

2) Impulse responses to policy innovations
                              h:    0      1      2      3      4      5      6      7
  Price               cov        94.9   93.9   96.6   96.7   96.6   97.2   97.1   95.6
                      length      0.24   0.16   0.18   0.14   0.13   0.12   0.10   0.09
  10-yr Tbill rate    cov        95.6   93.6   95.6   95.3   95.4   95.5   97.1   96.8
                      length      0.19   0.25   0.26   0.23   0.22   0.20   0.17   0.15
  Production          cov        93.2   94.9   94.2   94.3   92.8   93.9   94.2   94.5
                      length      0.29   0.32   0.33   0.34   0.34   0.32   0.29   0.26
  Unemployment rate   cov        93.6   95.0   94.6   94.2   93.2   94.1   94.6   94.2
                      length      0.30   0.33   0.34   0.35   0.35   0.32   0.29   0.26
Table 4-b. Coverage properties for calibrated model (post 1984)

1) Impulse responses to factor innovations
                              h:    0      1      2      3      4      5      6      7
  Price               cov        93.7   93.8   92.5   97.0   95.7   96.4   96.9   96.6
                      length      2.04   0.42   0.37   0.35   0.44   0.26   0.21   0.21
  10-yr Tbill rate    cov        94.7   93.7   94.7   96.1   94.0   95.4   97.3   96.2
                      length      2.13   1.25   1.06   1.01   0.97   0.73   0.58   0.53
  Production          cov        90.8   92.7   94.8   95.0   92.9   93.3   94.6   93.4
                      length      1.00   1.15   0.94   0.89   0.84   0.67   0.52   0.48
  Unemployment rate   cov        91.8   93.5   93.0   93.5   94.5   94.0   93.5   92.9
                      length      0.16   1.25   1.03   0.97   0.91   0.74   0.58   0.55

2) Impulse responses to policy innovations
                              h:    0      1      2      3      4      5      6      7
  Price               cov        95.6   94.8   96.3   95.8   95.2   94.5   94.3   94.5
                      length      0.32   0.28   0.17   0.21   0.23   0.16   0.14   0.15
  10-yr Tbill rate    cov        95.4   93.7   93.6   92.4   93.0   94.2   95.2   92.1
                      length      0.26   0.44   0.47   0.49   0.48   0.44   0.40   0.39
  Production          cov        94.7   94.1   92.7   92.6   93.5   93.3   92.9   92.1
                      length      0.18   0.32   0.35   0.37   0.37   0.35   0.32   0.30
  Unemployment rate   cov        94.0   92.5   93.7   92.4   94.4   94.5   94.9   94.4
                      length      1.98   0.37   0.41   0.42   0.42   0.40   0.37   0.35
Table 5. Comparison with bootstrapping without factor estimation

Coverage ratios
                              h:    0      1      2      3      4      5      6      7
  Price                  A       94.9   94.7   94.8   94.8   95.5   95.8   95.7   95.9
                         B       81.7   77.5   80.6   90.4   89.1   87.0   89.0   88.9
  10-yr Tbill rate       A       92.6   94.7   96.1   96.5   98.3   96.0   94.8   96.5
                         B       63.5   72.1   77.1   66.5   92.7   88.4   85.9   88.0
  Production             A       93.2   94.2   96.9   96.7   98.0   96.1   96.4   95.1
                         B       64.9   70.5   74.8   67.0   88.9   88.0   86.5   88.0
  Unemployment rate      A       94.5   94.1   96.2   97.0   98.0   96.8   96.9   96.6
                         B       67.9   69.9   72.2   69.0   82.8   87.2   87.1   88.2

Length of C.I. (median)
                              h:    0      1      2      3      4      5      6      7
  Price                  A        2.37   1.66   1.21   1.12   1.38   1.06   0.83   0.85
                         B        0.83   0.63   0.46   0.41   0.44   0.31   0.22   0.21
  10-yr Tbill rate       A        8.61   6.05   3.66   4.01   3.13   2.79   2.69   2.53
                         B        3.25   2.58   1.50   1.72   1.04   0.81   0.75   0.62
  Production             A        7.41   5.11   3.01   3.51   2.57   2.27   2.32   2.12
                         B        2.93   2.27   1.33   1.50   0.92   0.70   0.66   0.60
  Unemployment rate      A        7.59   5.18   3.09   3.48   2.68   2.31   2.32   2.15
                         B        2.97   2.30   1.40   1.56   1.08   0.77   0.70   0.62
Table 6. Data description (disaggregate data)

1) Monthly data

       mnemonic   transform   data description
  1    IPS13      5           INDUSTRIAL PRODUCTION INDEX - DURABLE CONSUMER GOODS
  2    IPS18      5           INDUSTRIAL PRODUCTION INDEX - NONDURABLE CONSUMER GOODS
  3    IPS25      5           INDUSTRIAL PRODUCTION INDEX - BUSINESS EQUIPMENT
  4    IPS34      5           INDUSTRIAL PRODUCTION INDEX - DURABLE GOODS MATERIALS
  5    IPS38      5           INDUSTRIAL PRODUCTION INDEX - NONDURABLE GOODS MATERIALS
  6    IPS43      5           INDUSTRIAL PRODUCTION INDEX - MANUFACTURING (SIC)
  7    IPS306     5           INDUSTRIAL PRODUCTION INDEX - FUELS
  8    PMP        1           NAPM PRODUCTION INDEX (PERCENT)
  9    UTL11      1           CAPACITY UTILIZATION - MANUFACTURING (SIC)
  10   CES277R    5           REAL AVG HRLY EARNINGS, PROD WRKRS, NONFARM - CONSTRUCTION (CES277/PI071)
  11   CES278R    5           REAL AVG HRLY EARNINGS, PROD WRKRS, NONFARM - MFG (CES278/PI071)
  12   CES006     5           EMPLOYEES, NONFARM - MINING
  13   CES011     5           EMPLOYEES, NONFARM - CONSTRUCTION
  14   CES017     5           EMPLOYEES, NONFARM - DURABLE GOODS
  15   CES033     5           EMPLOYEES, NONFARM - NONDURABLE GOODS
  16   CES046     5           EMPLOYEES, NONFARM - SERVICE-PROVIDING
  17   CES048     5           EMPLOYEES, NONFARM - TRADE, TRANSPORT, UTILITIES
  18   CES049     5           EMPLOYEES, NONFARM - WHOLESALE TRADE
  19   CES053     5           EMPLOYEES, NONFARM - RETAIL TRADE
  20   CES088     5           EMPLOYEES, NONFARM - FINANCIAL ACTIVITIES
  21   CES140     5           EMPLOYEES, NONFARM - GOVERNMENT
  22   LHEL       2           INDEX OF HELP-WANTED ADVERTISING IN NEWSPAPERS (1967=100;SA)
  23   LHELX      2           EMPLOYMENT: RATIO; HELP-WANTED ADS:NO. UNEMPLOYED CLF
  24   LHNAG      5           CIVILIAN LABOR FORCE: EMPLOYED, NONAGRIC.INDUSTRIES (THOUS.,SA)
  25   LHUR       2           UNEMPLOYMENT RATE: ALL WORKERS, 16 YEARS & OVER (%,SA)
  26   LHU680     2           UNEMPLOY.BY DURATION: AVERAGE(MEAN)DURATION IN WEEKS (SA)
  27   LHU5       5           UNEMPLOY.BY DURATION: PERSONS UNEMPL.LESS THAN 5 WKS (THOUS.,SA)
  28   LHU14      5           UNEMPLOY.BY DURATION: PERSONS UNEMPL.5 TO 14 WKS (THOUS.,SA)
  29   LHU15      5           UNEMPLOY.BY DURATION: PERSONS UNEMPL.15 WKS + (THOUS.,SA)
  30   LHU26      5           UNEMPLOY.BY DURATION: PERSONS UNEMPL.15 TO 26 WKS (THOUS.,SA)
  31   LHU27      5           UNEMPLOY.BY DURATION: PERSONS UNEMPL.27 WKS + (THOUS,SA)
  32   CES151     1           AVG WKLY HOURS, PROD WRKRS, NONFARM - GOODS-PRODUCING
  33   CES155     2           AVG WKLY OVERTIME HOURS, PROD WRKRS, NONFARM - MFG
  34   HSNE       4           HOUSING STARTS:NORTHEAST (THOUS.U.)S.A.
  35   HSMW       4           HOUSING STARTS:MIDWEST(THOUS.U.)S.A.
  36   HSSOU      4           HOUSING STARTS:SOUTH (THOUS.U.)S.A.
  37   HSWST      4           HOUSING STARTS:WEST (THOUS.U.)S.A.
  38   FYFF       2           INTEREST RATE: FEDERAL FUNDS (EFFECTIVE) (% PER ANNUM,NSA)
  39   FYGM3      2           INTEREST RATE: U.S.TREASURY BILLS,SEC MKT,3-MO.(% PER ANN,NSA)
  40   FYGT1      2           INTEREST RATE: U.S.TREASURY CONST MATURITIES,1-YR.(% PER ANN,NSA)
  41   FYGT10     2           INTEREST RATE: U.S.TREASURY CONST MATURITIES,10-YR.(% PER ANN,NSA)
  42   FYBAAC     2           BOND YIELD: MOODY'S BAA CORPORATE (% PER ANNUM)
  43   Sfygm6     1           fygm6-fygm3
  44   Sfygt1     1           fygt1-fygm3
  45   Sfygt10    1           fygt10-fygm3
  46   sFYAAAC    1           FYAAAC-Fygt10
  47   sFYBAAC    1           FYBAAC-Fygt10
  48   FM1        6           MONEY STOCK: M1(CURR,TRAV.CKS,DEM DEP,OTHER CK'ABLE DEP)(BIL$,SA)
  49   MZMSL      6           MZM (SA) FRB St. Louis
  50   FM2        6           MONEY STOCK:M2(M1+O'NITE RPS,EURO$,G/P&B/D MMMFS&SAV&SM TIME DEP(BIL$,
  51   FMFBA      6           MONETARY BASE, ADJ FOR RESERVE REQUIREMENT CHANGES(MIL$,SA)
  52   FMRRA      6           DEPOSITORY INST RESERVES:TOTAL,ADJ FOR RESERVE REQ CHGS(MIL$,SA)
  53   FMRNBA     6           DEPOSITORY INST RESERVES:NONBORROWED,ADJ RES REQ CHGS(MIL$,SA)
  54   BUSLOANS   6           Commercial and Industrial Loans at All Commercial Banks (FRED) Billions $ (SA)
  55   CCINRV     6           CONSUMER CREDIT OUTSTANDING - NONREVOLVING(G19)
  56   PSCCOMR    5           Real SPOT MARKET PRICE INDEX:BLS & CRB: ALL COMMODITIES(1967=100) (PSCCOM/PCEPILFE)
  57   PW561R     5           PPI Crude (Relative to Core PCE) (pw561/PCEPiLFE)
  58   PMCP       1           NAPM COMMODITY PRICES INDEX (PERCENT)
  59   EXRUS      5           UNITED STATES;EFFECTIVE EXCHANGE RATE(MERM)(INDEX NO.)
  60   EXRSW      5           FOREIGN EXCHANGE RATE: SWITZERLAND (SWISS FRANC PER U.S.$)
  61   EXRJAN     5           FOREIGN EXCHANGE RATE: JAPAN (YEN PER U.S.$)
  62   EXRUK      5           FOREIGN EXCHANGE RATE: UNITED KINGDOM (CENTS PER POUND)
  63   EXRCAN     5           FOREIGN EXCHANGE RATE: CANADA (CANADIAN $ PER U.S.$)
  64   FSPCOM     5           S&P'S COMMON STOCK PRICE INDEX: COMPOSITE (1941-43=10)
  65   FSPIN      5           S&P'S COMMON STOCK PRICE INDEX: INDUSTRIALS (1941-43=10)
  66   FSDXP      2           S&P'S COMPOSITE COMMON STOCK: DIVIDEND YIELD (% PER ANNUM)
  67   FSPXE      2           S&P'S COMPOSITE COMMON STOCK: PRICE-EARNINGS RATIO (%,NSA)
  68   FSDJ       5           COMMON STOCK PRICES: DOW JONES INDUSTRIAL AVERAGE
  69   HHSNTN     2           U. OF MICH. INDEX OF CONSUMER EXPECTATIONS(BCD-83)
  70   PMI        1           PURCHASING MANAGERS' INDEX (SA)
  71   PMNO       1           NAPM NEW ORDERS INDEX (PERCENT)
  72   PMDEL      1           NAPM VENDOR DELIVERIES INDEX (PERCENT)
  73   PMNV       1           NAPM INVENTORIES INDEX (PERCENT)
  74   MOCMQ      5           NEW ORDERS (NET) - CONSUMER GOODS & MATERIALS, 1996 DOLLARS (BCI)
  75   MSONDQ     5           NEW ORDERS, NONDEFENSE CAPITAL GOODS, IN 1996 DOLLARS (BCI)

2) Quarterly data

       mnemonic   transform   data description
  76   GDP253     5           Real Personal Consumption Expenditures - Durable Goods, Quantity Index (2000=
  77   GDP254     5           Real Personal Consumption Expenditures - Nondurable Goods, Quantity Index (200
  78   GDP255     5           Real Personal Consumption Expenditures - Services, Quantity Index (2000=100),
  79   GDP259     5           Real Gross Private Domestic Investment - Nonresidential - Structures, Quantity
  80   GDP260     5           Real Gross Private Domestic Investment - Nonresidential - Equipment & Software
  81   GDP261     5           Real Gross Private Domestic Investment - Residential, Quantity Index (2000=100
  82   GDP263     5           Real Exports, Quantity Index (2000=100), SAAR
  83   GDP264     5           Real Imports, Quantity Index (2000=100), SAAR
  84   GDP266     5           Real Government Consumption Expenditures & Gross Investment - Federal, Quantit
  85   GDP267     5           Real Government Consumption Expenditures & Gross Investment - State & Local, Q
  86   LBOUT      5           OUTPUT PER HOUR ALL PERSONS: BUSINESS SEC(1982=100,SA)
  87   LBPUR7     5           REAL COMPENSATION PER HOUR,EMPLOYEES:NONFARM BUSINESS(82=100,SA)
  88   LBMNU      5           HOURS OF ALL PERSONS: NONFARM BUSINESS SEC (1982=100,SA)
  89   LBLCPU     5           UNIT LABOR COST: NONFARM BUSINESS SEC (1982=100,SA)
  90   GDP274_1   6           Motor vehicles and parts Price Index
  91   GDP274_2   6           Furniture and household equipment Price Index
  92   GDP274_3   6           Other Price Index
  93   GDP275_1   6           Food Price Index
  94   GDP275_2   6           Clothing and shoes Price Index
  95   GDP275_3   6           Gasoline, fuel oil, and other energy goods Price Index
  96   GDP275_4   6           Other Price Index
  97   GDP276_1   6           Housing Price Index
  98   GDP276_3   6           Electricity and gas Price Index
  99   GDP276_4   6           Other household operation Price Index
  100  GDP276_5   6           Transportation Price Index
  101  GDP276_6   6           Medical care Price Index
  102  GDP276_7   6           Recreation Price Index
  103  GDP276_8   6           Other Price Index
  104  GDP280A    6           Structures
  105  GDP281A    6           Equipment and software Price Index
  106  GDP282A    6           Residential Price Index
  107  GDP284A    6           Exports Price Index
  108  GDP285A    6           Imports Price Index
  109  GDP287A    6           Federal Price Index
  110  GDP288A    6           State and local Price Index

Note: The transformation code indicates the following: 1 - no transformation, 2 - first difference, 3 - second difference, 4 - logarithms, 5 - first difference after taking logarithms, 6 - second difference after taking logarithms.
Table 7. Data description (aggregate data)

1) Monthly data

       mnemonic   transform   data description
  1    IPS10      5           INDUSTRIAL PRODUCTION INDEX - TOTAL INDEX
  2    IPS43      5           INDUSTRIAL PRODUCTION INDEX - MANUFACTURING (SIC)
  3    UTL11      1           CAPACITY UTILIZATION - MANUFACTURING (SIC)
  4    CES278     6           AVG HRLY EARNINGS, PROD WRKRS, NONFARM - MFG
  5    CES002     5           EMPLOYEES, NONFARM - TOTAL PRIVATE
  6    LHEL       2           INDEX OF HELP-WANTED ADVERTISING IN NEWSPAPERS (1967=100;SA)
  7    LHUR       2           UNEMPLOYMENT RATE: ALL WORKERS, 16 YEARS & OVER (%,SA)
  8    LHU680     2           UNEMPLOY.BY DURATION: AVERAGE(MEAN)DURATION IN WEEKS (SA)
  9    CES151     1           AVG WKLY HOURS, PROD WRKRS, NONFARM - GOODS-PRODUCING
  10   CES155     2           AVG WKLY OVERTIME HOURS, PROD WRKRS, NONFARM - MFG
  11   HSBR       4           HOUSING AUTHORIZED: TOTAL NEW PRIV HOUSING UNITS (THOUS.,SAAR)
  12   HSFR       4           HOUSING STARTS:NONFARM(1947-58);TOTAL FARM&NONFARM(1959-)(THOUS.,SA
  13   FYFF       2           INTEREST RATE: FEDERAL FUNDS (EFFECTIVE) (% PER ANNUM,NSA)
  14   FYGM3      2           INTEREST RATE: U.S.TREASURY BILLS,SEC MKT,3-MO.(% PER ANN,NSA)
  15   FYGT10     2           INTEREST RATE: U.S.TREASURY CONST MATURITIES,10-YR.(% PER ANN,NSA)
  16   FYAAAC     2           BOND YIELD: MOODY'S AAA CORPORATE (% PER ANNUM)
  17   Sfygt10    1           fygt10-fygm3
  18   FMFBA      6           MONETARY BASE, ADJ FOR RESERVE REQUIREMENT CHANGES(MIL$,SA)
  19   FMRRA      6           DEPOSITORY INST RESERVES:TOTAL,ADJ FOR RESERVE REQ CHGS(MIL$,SA)
  20   BUSLOANS   6           Commercial and Industrial Loans at All Commercial Banks (FRED) Billions $ (SA)
  21   CCINRV     6           CONSUMER CREDIT OUTSTANDING - NONREVOLVING(G19)
  22   PWFSA      6           PRODUCER PRICE INDEX: FINISHED GOODS (82=100,SA)
  23   PSCCOM     6           SPOT MARKET PRICE INDEX:BLS & CRB: ALL COMMODITIES(1967=100)
  24   PW561      6           PRODUCER PRICE INDEX: CRUDE PETROLEUM (82=100,NSA)
  25   EXRUS      5           UNITED STATES;EFFECTIVE EXCHANGE RATE(MERM)(INDEX NO.)
  26   FSPCOM     5           S&P'S COMMON STOCK PRICE INDEX: COMPOSITE (1941-43=10)
  27   HHSNTN     2           U. OF MICH. INDEX OF CONSUMER EXPECTATIONS(BCD-83)
  28   PMI        1           PURCHASING MANAGERS' INDEX (SA)
  29   PMNO       1           NAPM NEW ORDERS INDEX (PERCENT)
  30   PMDEL      1           NAPM VENDOR DELIVERIES INDEX (PERCENT)
  31   PMNV       1           NAPM INVENTORIES INDEX (PERCENT)

2) Quarterly data

       mnemonic   transform   data description
  32   GDP251     5           Real Gross Domestic Product, Quantity Index (2000=100), SAAR
  33   GDP252     5           Real Personal Consumption Expenditures, Quantity Index (2000=100), SAAR
  34   GDP256     5           Real Gross Private Domestic Investment, Quantity Index (2000=100), SAAR
  35   GDP263     5           Real Exports, Quantity Index (2000=100), SAAR
  36   GDP264     5           Real Imports, Quantity Index (2000=100), SAAR
  37   GDP265     5           Real Government Consumption Expenditures & Gross Investment, Quantity Index (2
  38   GDP272     6           Gross Domestic Product, Price Index (2000=100), SAAR
  39   GDP273     6           Personal Consumption Expenditures, Price Index (2000=100), SAAR
  40   GDP275_4   6           Other Price Index
  41   GDP277     6           Gross Private Domestic Investment, Price Index (2000=100), SAAR
  42   GDP284     6           Exports, Price Index (2000=100), SAAR
  43   GDP285     6           Imports, Price Index (2000=100), SAAR
  44   GDP286     6           Government Consumption Expenditures & Gross Investment, Price Index (2000=100)
  45   LBOUT      5           OUTPUT PER HOUR ALL PERSONS: BUSINESS SEC(1982=100,SA)
  46   LBMNU      5           HOURS OF ALL PERSONS: NONFARM BUSINESS SEC (1982=100,SA)
  47   LBLCPU     5           UNIT LABOR COST: NONFARM BUSINESS SEC (1982=100,SA)

Note: Same as Table 6.