Numerical Solution of Linear Rational Expectations Models

Lecture Notes
Part I

Eduardo Loyo
John F. Kennedy School of Government
Harvard University
August 2000
1 Introduction
Here, we study numerical solution methods for linear rational expectations
models. The models are represented by linear algebraic relations among
three components: an exogenous stochastic process {xt }, possibly multidimensional but with known probability distribution (or at least enough known
moments); endogenous variables collected in a vector {yt }; and conditional
expectations of both, which may be conditioned on several different information sets. Solving a model of that type involves, roughly speaking, expressing
the trajectory of endogenous variables {yt } as an explicit function of the forcing (or driving) process {xt }. As we shall see, however, one is often more
interested in an intermediate representation of the solution, in which the endogenous variables are written, in part, as a function of some of their own
past values.
2 Linearization
Of course, there is no reason to presume that relevant economic models must
be formed by linear equilibrium conditions. Equilibrium conditions derived
from intertemporal maximization by individual agents typically display nonlinearities. So do certain equilibrium conditions derived from mere accounting identities.
Example 1 An equilibrium condition that often appears in general equilibrium macroeconomic models is an Euler equation for consumption:
u′(c_t) = β E_t [ (R_t/π_{t+1}) u′(c_{t+1}) ]    (1)
This is clearly a nonlinear relation. A mere identity that involves nonlinearities is the flow budget constraint of the government, written in real terms:
b_t = g_t + b_{t-1} R_{t-1}/π_t    (2)
As a result, one needs some justification for relying on linear models and
their respective methods of solution. The usual justification acknowledges
that the true model is likely to be nonlinear, and rests on an approximation
argument. It goes as follows.
One can generally choose a degenerate distribution of the forcing process
{xt } for which it is easy to find the corresponding degenerate process for the
endogenous variables {yt }, as a solution to the nonlinear model at hand. A
typical choice is a steady-state equilibrium, one in which xt ≡ x̄ and yt ≡ ȳ
with probability one. The economy is supposed to fluctuate around that
steady-state, always remaining close enough to it: small enough deviations
of xt from x̄ generating also small enough deviations of yt from ȳ, as dictated
by the model.
Example 2 Consider an economy with exogenous endowments of consumption goods. The goods market will only clear (also an equilibrium condition)
if the consumption plans chosen according to the Euler equation (1) coincide
with the available endowments. Assume that monetary policy is described by
the following reaction function for the nominal interest rate:
R_t = Ψ(π_t) v_t    (3)
where vt represents some random deviation from the basic ‘recipe’ described
by Ψ. Equations (1) and (3) constitute a complete (though simple) dynamic
general equilibrium macroeconomic model with rational expectations, in which
{ct , vt } are exogenous variables, and {Rt , π t } are endogenous. If ct ≡ c̄ and
vt ≡ v̄ with probability one, then the corresponding steady-state solution for
R̄ and π̄ is given by:
R̄/π̄ = 1/β
R̄ = Ψ(π̄) v̄
which can be easily (and uniquely) solved once the functional form of Ψ is
properly specified. The model might also involve the flow budget constraint
(2), perhaps with an exogenous sequence {gt }. The steady-state solution corresponding to gt ≡ ḡ (taking into account that it must involve a real interest
rate of β^{−1}), must satisfy the steady-state version of (2):
b̄ = [β/(β−1)] ḡ    (4)
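As a numerical illustration, the steady-state conditions of Example 2 can be checked directly. The functional form of Ψ below (a Taylor-type rule) and all parameter values are my own assumptions, chosen only to make the sketch concrete:

```python
import numpy as np

# Hypothetical Taylor-type rule (an assumption, not from the notes):
# Ψ(π) = (π*/β)(π/π*)^φ with φ > 1
β, φ, π_star = 0.99, 1.5, 1.02
v_bar, g_bar = 1.0, -0.01          # ḡ < 0: a primary surplus in steady state
Ψ = lambda π: (π_star / β) * (π / π_star) ** φ

# Combining R̄ = π̄/β with R̄ = Ψ(π̄)v̄ gives (π̄/π*)^(φ−1) = 1/v̄:
π_bar = π_star * v_bar ** (-1.0 / (φ - 1.0))
R_bar = π_bar / β

# Steady-state debt from equation (4): b̄ = β ḡ / (β − 1)
b_bar = β * g_bar / (β - 1.0)

assert np.isclose(R_bar, Ψ(π_bar) * v_bar)   # policy rule holds in steady state
assert b_bar > 0                             # positive debt requires a surplus
```

With v̄ = 1 this particular rule delivers π̄ = π* and R̄ = π*/β, and the assumed surplus ḡ = −0.01 supports a positive steady-state debt b̄ = 0.99.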
Once a steady-state solution is found, one can compute a linear approximation to the nonlinear relations that form the model, in a neighborhood of
the steady-state. Those first-order approximations form a linear model that
can be more easily solved by the methods described below. The rationale for
relying on such linearized model is that it will describe the (truly nonlinear)
economy well enough as long as the deviations from the approximation point
(the steady-state) remain small enough. In other words, the lack of accuracy
is a term of smaller order than the deviations from steady-state themselves.
Like any first-order Taylor expansion, linearized models are most conveniently expressed not in terms of the original variables, but in terms of
deviations of these variables from their steady-state values. Those can be
absolute deviations y_t − ȳ, or percentage deviations (y_{i,t} − ȳ_i)/ȳ_i for each
variable, or deviations in one variable as a proportion of the steady-state
value of another variable: (y_{i,t} − ȳ_i)/ȳ_j, j ≠ i. The choice may be a question
of algebraic convenience and/or a question of the parameters that one feels
more comfortable calibrating.
When percentage deviations are used, the model is often said to be
log-linearized. An equation:

E f(x) = 1

can be linearly approximated by:

(df(x)/dx) E(x − x̄) = 0    (5)
where x̄ satisfies f(x̄) = 1 and the derivative is calculated at x = x̄. It would
have been equivalent to take logarithms on both sides of the original equation
to obtain:
log Ef (x) = 0
Disregarding the smaller order implications of Jensen’s inequality (since all
the argument is for x distributed close to x̄ anyway), one can replace the
latter with:
E log f (x) = 0
and approximate the latter by:
(d log f(x)/d log x) E(log x − log x̄) = 0
Recall however that:

d log f(x)/d log x = [d log f(x)/dx] [d log x/dx]^{−1} = x̄ df(x)/dx
since f(x̄) = 1, while:
log x − log x̄ ≅ (x − x̄)/x̄

provided that x ≅ x̄. The latter two can be combined into:

E[ (df(x)/dx) (x − x̄)/x̄ ] = 0    (6)
Note that equation (6) could have been much more easily obtained directly
from equation (5), by simultaneously multiplying and dividing by x̄. One
might prefer to use equation (6) if it is found more convenient to calibrate
an elasticity than a derivative.
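The second-order accuracy of the approximation log x − log x̄ ≅ (x − x̄)/x̄ is easy to verify numerically; the deviations used below are arbitrary illustrations:

```python
import numpy as np

# Check that log deviations match percentage deviations up to second order.
x_bar = 1.0
for dev in (0.001, 0.01, 0.05):
    x = x_bar * (1 + dev)
    exact = np.log(x) - np.log(x_bar)      # log deviation
    approx = (x - x_bar) / x_bar           # percentage deviation
    assert abs(exact - approx) < dev ** 2  # error is of second order
```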
Example 3 Continuing with the example used above, one can approximate
the Euler equation (1) by:
(R_t − R̄)/R̄ − E_t (π_{t+1} − π̄)/π̄ = σ [ E_t (c_{t+1} − c̄)/c̄ − (c_t − c̄)/c̄ ]
where:

σ ≡ −u″(c̄) c̄ / u′(c̄)
With this type of multiplicative relation, log-linearization is usually most convenient: note that the resulting linear equation in percentage deviations involves a single parameter, which is itself an elasticity, and in particular it does
not involve as parameters the steady-state values of inflation or the nominal
interest rate (which would appear in a linear approximation written in terms
of absolute deviations). With regard to the flow budget constraint (2), it can
be linearized by:
β (b_t − b̄)/b̄ = β (g_t − ḡ)/b̄ + (b_{t-1} − b̄)/b̄ + (R_{t-1} − R̄)/R̄ − (π_t − π̄)/π̄
It turns out to be more convenient to write the deviations of primary deficits
as a proportion of the debt stock: if the latter is positive (as usual), then
steady-state primary deficits must be negative (there must be a surplus), according to equation (4); as a result, the sign of percentage deficit deviations
would be inconveniently switched (a positive deviation meaning that the deficit
is smaller than in steady-state).
Once the model is written in linear form, in terms of deviations from
steady-state, we can apply the methods of solution to be described below.
3 Local solutions
When we approximate the exact nonlinear model by a linearized counterpart,
and then find a solution to the latter, the result will be a proper approximation to the solution of the exact model only if it involves sufficiently small
deviations of all variables from steady-state. Solutions to the linearized model
in which variables wander arbitrarily far away from the steady-state are not
a proper approximation to the solution of the exact model.
As a result, in using the linearization method, we must restrict attention
to solutions that are in some sense local to the steady-state, as far as both
forcing processes and endogenous variables are concerned. Because solutions
are relations between forcing process and the trajectory of endogenous variables, it is natural to seek relations of that type with the property that,
whenever the exogenous forcing process happens to be somehow bounded
around the steady-state, so will be the corresponding trajectory of endogenous variables. In principle, there may be any number of such solutions. If
there are many (often, infinitely many), that is a case of local indeterminacy.
If there is exactly one local solution, then the case is one of local determinacy. There may also be cases of local nonexistence of solution in a vicinity
of the steady-state. Note that the steady-state itself is not a solution in the
sense explored here, since it will only satisfy the model equations for a very
particular driving process.
Clearly, the linear model may also involve solutions such that, for a
bounded forcing process, the endogenous variables do not remain bounded.
Those solutions are of little interest since they do not necessarily reflect
solutions to the exact model. The exact model, however, may have its own
solutions in which endogenous variables explode despite bounded forcing processes. The linearization method does not say anything about those, though.
In other words, using the linearization method amounts to disregarding the
possibility of solutions other than those local to the steady-state.
With regard to the dynamic behavior of the endogenous variables, the
truly great danger of being misled by the linearization method arises when
the solution to the linearized system is locally determinate, and yet there
are other conceivable solutions that do not remain in a neighborhood of the
steady-state. There are different types of argument in support of the local
solution being the relevant one, which may be more or less persuasive depending on the model at hand. In some cases, explosive solutions - that is,
explosive trajectories of endogenous variables with bounded forcing process
- are selected away as ‘bubbles’, while the local solution is retained as the
‘fundamental’ one. In such cases, one would typically argue that expectations can be expected to coordinate around the fundamental solution, which
will then materialize (rather than any other) as a self-fulfilling prophecy. In
other cases, explosive solutions would violate transversality conditions for optimization by some inhabitant of the economy, and would thus be inherently
inconsistent with equilibrium.
4 Some simple deterministic cases
Methods of solution of systems of linear expectational difference equations
draw heavily on the standard methods of solution that apply to deterministic difference systems. To put the basic issues in perspective, we start by
examining the latter. In particular, we start with the following first-order
difference system:
y_{t+1} = A y_t + x_t    (7)
Writing the model as a first-order system is convenient for the application of
matrix decomposition methods of solution. That is not a restrictive requirement since higher-order systems can be stacked into first-order systems of a
higher dimension. Take for instance:
yt+1 = A1 yt + A2 yt−1 + xt
which can be written in first-order form as:
[ y_{t+1} ]   [ A1  A2 ] [ y_t     ]   [ x_t ]
[ y_t     ] = [ I    0 ] [ y_{t-1} ] + [ 0   ]
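A quick numerical check of the stacking trick, using arbitrary illustrative matrices A1 and A2:

```python
import numpy as np

# Stack y_{t+1} = A1 y_t + A2 y_{t-1} + x_t into first-order form.
n = 2
A1 = np.array([[0.5, 0.1], [0.0, 0.3]])   # illustrative coefficients
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])

A = np.block([[A1, A2],
              [np.eye(n), np.zeros((n, n))]])   # stacked autoregressive matrix

# one step of the stacked recursion reproduces the original second-order one
y_prev, y_curr = np.ones(n), np.full(n, 2.0)
x = np.array([0.3, -0.1])
step = A @ np.concatenate([y_curr, y_prev]) + np.concatenate([x, np.zeros(n)])
assert np.allclose(step[:n], A1 @ y_curr + A2 @ y_prev + x)   # top block: y_{t+1}
assert np.allclose(step[n:], y_curr)                          # identity block
```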
Returning to the system (7), the solution starts by making a decomposition of matrix A. There are several possibilities here, but the most popular
is to make a Jordan canonical decomposition:
LA = DL    (8)
where D is a diagonal matrix whose diagonal entries are the eigenvalues of
A, while L is the matrix whose rows are the left eigenvectors of A (arranged
in the same order as the corresponding eigenvalues are arranged in D).
With the Jordan decomposition (8), the first-order system (7) can be
rewritten as:
Lyt+1 = DLyt + Lxt
or still:
z_{t+1} = D z_t + w_t    (9)
where zt ≡ Lyt and wt ≡ Lxt . The variables appearing in the vector zt are
all linear combinations of the variables in yt , and are usually referred to as
canonical variables. The system in the canonical variables has the advantage
of being entirely decoupled: to each variable, there corresponds a difference
equation in which no other variable enters.
We can consider some simple cases in which the solution is easy to characterize. Suppose first that all eigenvalues of A lie strictly outside the unit
circle. In this case, D contains no zeros on the main diagonal, being thus
invertible. We can write the system (9) as:
z_t = D^{−1} z_{t+1} − D^{−1} w_t
Since a similar expression holds for zt+1 in terms of zt+2 and wt+1 :
z_t = D^{−2} z_{t+2} − D^{−2} w_{t+1} − D^{−1} w_t
We can iterate forward with similar substitutions to find:
z_t = D^{−T} z_{t+T} − Σ_{s=0}^{T−1} D^{−s−1} w_{t+s}
We want to find a local solution, one in which {z_t} is bounded. If the solution
satisfies that property, with D being a diagonal matrix whose main diagonal
entries all exceed unity in absolute value, then:
lim_{T→∞} D^{−T} z_{t+T} = 0    (10)
and the solution for the canonical variables can be uniquely described by:
z_t = − Σ_{s=0}^{∞} D^{−s−1} w_{t+s}
To recover the solution for the original variables in terms of the original
shocks, we need to invert the transformation L. For now, we simply assume
that L is indeed full rank and thus invertible, which will be the case if all
eigenvalues are distinct or, trivially, if that fails but A is already diagonal.
We shall soon elaborate on rank conditions, making them less stringent.
y_t = − Σ_{s=0}^{∞} L^{−1} D^{−s−1} L x_{t+s}    (11)
Given the forcing process, this fully characterizes the unique bounded
solution to the original system. Note that, at each date, yt depends only
on the current and future values of the forcing process. The solution is
completely pinned down thanks to the transversality conditions imposed by
the requirement of boundedness.
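A minimal sketch of the forward solution (11), using an illustrative A whose eigenvalues both lie outside the unit circle and a forcing process that dies out after a few periods, so the infinite sum truncates exactly:

```python
import numpy as np

# Forward solution when all eigenvalues of A lie outside the unit circle.
A = np.array([[1.5, 0.2],
              [0.0, 2.0]])                    # eigenvalues 1.5 and 2.0
eigvals, V = np.linalg.eig(A)
D, L = np.diag(eigvals), np.linalg.inv(V)     # rows of L: left eigenvectors, LA = DL
assert np.allclose(L @ A, D @ L)

T = 30
x = np.zeros((T, 2))
x[:5] = 1.0                                   # x_t = 0 for t >= 5

def y(t):
    # y_t = −Σ_{s≥0} L^{-1} D^{-s-1} L x_{t+s}  (exact: x vanishes eventually)
    Linv, Dinv = np.linalg.inv(L), np.linalg.inv(D)
    return -sum(Linv @ np.linalg.matrix_power(Dinv, s + 1) @ L @ x[t + s]
                for s in range(T - t))

# the constructed path indeed satisfies y_{t+1} = A y_t + x_t
for t in range(10):
    assert np.allclose(y(t + 1), A @ y(t) + x[t])
```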
Now suppose that A has all eigenvalues strictly inside the unit circle. If
the system (7) is assumed to have described the dynamics of the endogenous
variables since times immemorial, we could solve the system iteratively, only
now doing so backwards in time, and require that the canonical variables
remain bounded as one moves back towards the infinite far past. The solution
would again be uniquely characterized (no other solution would satisfy the
boundedness requirement) and express the endogenous variables in terms of
the past history of the forcing process.
Suppose otherwise that we are provided with initial conditions for the
endogenous variables: they are what they are to start with, and from this
point onwards their dynamics will be governed by the system (7); nothing
is said about the model that governed the dynamics leading up to the time
when the initial conditions hold. In this case, the system (7) provides a recursive rule by which to generate all later values of the endogenous variables;
equivalently, the system (9) provides a recursive rule by which to generate
all later values of the canonical variables. If indeed all eigenvalues of A lie
strictly inside the unit circle, the {zt } thus generated is stationary and satisfies the boundedness requirement. In this case, we do have a unique solution
that satisfies both the initial conditions and remains bounded. The system
(7) is in this case a handy enough characterization of the dynamic properties
of the endogenous variables, and there is no more ‘solving’ to do.
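The backward-stable case can be illustrated by simulating the recursion with a uniformly bounded forcing process; the matrix, initial condition, and bound below are illustrative choices:

```python
import numpy as np

# With all eigenvalues of A strictly inside the unit circle, iterating
# y_{t+1} = A y_t + x_t from any initial condition keeps {y_t} bounded
# for any bounded forcing process.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])                 # eigenvalues 0.9 and 0.5
rng = np.random.default_rng(0)
y, peak = np.array([5.0, -5.0]), 0.0       # arbitrary initial condition
for t in range(5000):
    x = rng.uniform(-1, 1, size=2)         # uniformly bounded forcing
    y = A @ y + x
    peak = max(peak, np.abs(y).max())
assert peak < 15.0                         # trajectory stays within a fixed bound
```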
If, however, initial conditions were given, but the eigenvalues of A all lay
strictly outside the unit circle, then we already know that there is no bounded
solution, starting at any date, except the one described by (11). Only by
sheer coincidence will the first value of the endogenous variables so generated
coincide with the specified initial condition. Except in that coincidence, the
subsequent values recursively generated by (7) will not coincide with the
solution described by (11) either, and are sure to form an explosive sequence.
In other words, to remain bounded, the solution cannot satisfy the initial
conditions. In this case, we do not have any solution satisfying both the
initial condition and the boundedness requirement.
5 A slightly more general deterministic case
The more interesting cases occur when we have initial conditions for some
variables, but not for others, and have some eigenvalues inside and some
outside the unit circle. Variables which are not associated with initial conditions are usually called jumping variables, while variables that must satisfy
initial conditions are called predetermined variables. Typical jumping variables in economic models are flows and prices, whereas stocks are the typical
predetermined variables.
Consider a still relatively simple case in which the system is block-diagonal,
with jumping and predetermined variables decoupled into separate blocks.
Partition the vector of endogenous variables into jumping variables pt and
predetermined variables kt , in that order, and suppose that the system is
described by:
[ p_{t+1} ]   [ A11   0 ] [ p_t ]   [ u_t ]
[ k_{t+1} ] = [  0  A22 ] [ k_t ] + [ v_t ]    (12)
Suppose that all eigenvalues of A11 lie strictly outside the unit circle (these
are called unstable eigenvalues), and that all eigenvalues of A22 lie strictly
inside the unit circle (called stable eigenvalues). We disregard for now the
possibility of eigenvalues lying right on the unit circle, to which we return
later.
We can solve the two blocks separately, using in each case the corresponding methods described in the previous section. Let D11 be the diagonal matrix
of eigenvalues of A11 , and L11 the corresponding matrix of left eigenvectors:
L11 A11 = D11 L11
Since all eigenvalues in D11 are unstable, the unique bounded solution for the
jumping variables is of the same form as the forward looking solution (11):
p_t = − Σ_{s=0}^{∞} L11^{−1} D11^{−s−1} L11 u_{t+s}
provided that L11 is full rank. As for the predetermined variables, we are
satisfied in characterizing their solution by the original difference equation:
kt+1 = A22 kt + vt
which provides a recursive rule that generates a bounded trajectory for {kt }
starting with their initial conditions, since all eigenvalues of A22 are stable.
This block-diagonal example does not yet incorporate all the possible
complications encountered in first-order systems with both jumping and predetermined variables, but it serves to highlight two points that do generalize: (i) with exactly as many unstable eigenvalues as jumping variables,
the system has a unique bounded solution satisfying arbitrarily given initial
conditions for the predetermined variables; (ii) this conclusion holds, and the
unique solution can be easily characterized, provided that L11 is full rank.
With regard to the latter, note that we do not require that the entire L be
full rank.
6 The general deterministic case
We can allow for even greater generality by abandoning the assumption of
decoupled blocks of predetermined and jumping variables. When the system
is not block diagonal, it no longer makes sense to speak of eigenvalues associated with one or the other subset of endogenous variables: eigenvalues will
be associated with canonical variables that are now linear combinations involving both predetermined and jumping variables. We find conditions under
which the solution to the system exists and is unique, following the classical
results of Blanchard and Kahn.¹
In place of (12), consider the general first-order system:
[ p_{t+1} ]   [ A11 A12 ] [ p_t ]   [ u_t ]
[ k_{t+1} ] = [ A21 A22 ] [ k_t ] + [ v_t ]    (13)
Let n be the number of jumping variables (the dimension of pt ) in the system,
and let m be the number of unstable eigenvalues of the autoregressive matrix
A. We can write:
[ L11 L12 ] [ A11 A12 ]   [ D11  0  ] [ L11 L12 ]
[ L21 L22 ] [ A21 A22 ] = [  0  D22 ] [ L21 L22 ]    (14)
where D is the diagonal matrix of eigenvalues and L is the matrix with
the left eigenvectors of A as rows. We are free to assemble D with the
eigenvalues appearing in any order in the main diagonal, provided that we
assemble L with the corresponding eigenvectors in the same order. For ease
of reference, we choose to place all unstable eigenvalues in D11 , which ends
up with dimension m × m. Since A11 has dimension n × n, our partition of
L into blocks will only be conformable if L11 is of size m × n.
Define:
zt ≡ L11 pt + L12 kt
wt ≡ L11 ut + L12 vt
which are, respectively, the system’s canonical variables associated with unstable eigenvalues, and their corresponding canonical forcing processes. These
must satisfy:
zt+1 = D11 zt + wt
¹ Olivier Jean Blanchard and Charles M. Kahn, “The solution of linear difference models under rational expectations”, Econometrica 48(5): 1305–11, 1980.
which is the unstable portion of the diagonalized system.
We can solve the latter forward, as we have done above, to find the unique
bounded solution for {zt }:
z_t = − Σ_{s=0}^{∞} D11^{−s−1} w_{t+s}
Note that boundedness of {zt } is necessary for boundedness of {pt , kt }, the
condition we wish to impose on the solution we seek. From the latter expression, and the definitions of zt and wt , we recover:
L11 p_t + L12 k_t = − Σ_{s=0}^{∞} D11^{−s−1} (L11 u_{t+s} + L12 v_{t+s})    (15)
The solution for {pt , kt } must satisfy equation (15) together with the original
dynamic equations for the evolution of the predetermined variables:
k_{t+1} = A21 p_t + A22 k_t + v_t    (16)
We study the characterization of the solution case by case, according to
the relation between the number of unstable eigenvalues and the number of
jumping variables:
• m = n: Suppose that L11 has full rank: because it is square in the
present case, it is invertible. We can turn equation (15) into:
p_t = −L11^{−1} L12 k_t − Σ_{s=0}^{∞} L11^{−1} D11^{−s−1} (L11 u_{t+s} + L12 v_{t+s})    (17)
Together with equation (16), this completely characterizes the unique
bounded solution for {pt , kt } that simultaneously satisfies the initial
conditions for {kt }. The solution can be found recursively: from (17),
given the initial condition k0 and the entire future trajectory of the
forcing process, one can compute p0 ; from (16), and already knowing
k0 and p0 (and exogenous v0 ), one finds k1 ; the latter enters the computation of p1 according to (17), reinitiating the recursion. One can
indeed verify that the recursion produces a bounded trajectory of {kt }
for any bounded forcing process: use (17) to eliminate the jumping
variables in (16), obtaining:
k_{t+1} = [A22 − A21 L11^{−1} L12] k_t + “f.p.”
where all the eigenvalues of the autoregressive matrix in brackets are
stable,² and the rightmost term is just a bounded combination of the
forcing processes. If the dynamics of {kt } is bounded, it immediately
follows from equation (17) that {pt } is bounded too. This is a case of
local determinacy.
• m > n: Any solution must satisfy equation (15), which we can write
simply as:
L11 p_t = r.h.s.    (18)
where the right-hand side is arbitrary (in each period, it will be a combination of predetermined variables and the unrelated future trajectory
of the forcing process). Here we have a system of m linear equations in
n variables, with an arbitrary r.h.s., which generically has no solution.
So, this is a case of local nonexistence.
• m < n: Assume that L11 has full row rank m. Then there is a choice of
m columns from L11 such that the matrix formed by those columns (call
it L̃11 ), which is square, is invertible. We can always order the jumping
variables in the system in such a way that L̃11 appears as the leftmost
m×m block of L11 (we can do that without interfering with our decision
to order jumping variables ahead of predetermined variables, or with
our decision to order the unstable eigenvalues first - the latter affects
only the ordering of the rows of L). Now consider what happens if we
arbitrarily assign initial conditions to the n − m jumping variables ordered last. If we regard those as predetermined variables in making the
partitions in equation (14), then L̃11 will be the crucial northwest block
of L. It is as if we were in the first case examined above, where that
block is square and invertible: there will be a unique bounded solution
satisfying the initial conditions, including the ones we arbitrarily added
to the system. But, since these initial conditions are indeed chosen arbitrarily, and to each set of initial conditions corresponds a different
solution, then in reality there are infinitely many bounded solutions to
the system satisfying the legitimate initial conditions.
² The eigenvalues are those in D22. This is easy to verify in the special case of L12 = 0, in which the autoregressive matrix in brackets simplifies to A22. From equation (14), L11 A12 = 0, which, L11 being invertible, implies A12 = 0. But equation (14) then implies that L22 A22 = D22 L22, that is, D22 contains the eigenvalues of A22.
These results are the core of the method popularized by Blanchard and
Kahn to verify the existence and uniqueness of solution to a linear model,
and to calculate the solution whenever it is determinate.
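The counting rule at the heart of the method can be sketched in a few lines. The helper below is my own illustration; it assumes a diagonalizable matrix with no roots exactly on the unit circle, as in the text:

```python
import numpy as np

def bk_classify(A, n_jump):
    """Blanchard–Kahn counting: compare the number m of unstable
    eigenvalues of A with the number n of jumping variables."""
    m = int(np.sum(np.abs(np.linalg.eigvals(A)) > 1.0))
    if m == n_jump:
        return "determinate"        # unique bounded solution
    if m > n_jump:
        return "nonexistence"       # no bounded solution
    return "indeterminate"          # infinitely many bounded solutions

# illustrative matrix with one stable and one unstable root
A = np.array([[1.5, 0.3],
              [0.2, 0.7]])
assert bk_classify(A, 1) == "determinate"     # m = n = 1
assert bk_classify(A, 0) == "nonexistence"    # m > n
assert bk_classify(A, 2) == "indeterminate"   # m < n
```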
7 Quick remark on unit roots and boundedness
So far, we have completely disregarded the possibility of eigenvalues with
absolute value exactly equal to unity. That is different from the original
formulation of Blanchard and Kahn, where unit roots are counted among
the stable eigenvalues. At the same time, we require that forcing processes
and endogenous variables be uniformly bounded sequences, while Blanchard
and Kahn make the weaker requirement that the variables in the model do
not explode ‘too fast’.
Unstable eigenvalues need to be strictly greater than one in absolute value, thus guaranteeing that a transversality condition like (10) holds whenever endogenous
variables are required to be somehow bounded (either uniformly or in the
Blanchard and Kahn sense), which is in turn essential to pin down the unique
forward looking solution for the jumping variables. If anywhere, it is among
the stable eigenvalues that unit roots might be accommodated.
Allowing for unit roots among the stable eigenvalues, however, has a
bearing on what boundedness criterion the recursive solution for the predetermined variables will satisfy (and, when the system is not block diagonal,
also on the boundedness of the jumping variables). Unit roots would be
inconsistent with the strict requirement of uniform boundedness: unlike a
stationary process, a unit root autoregression for predetermined variables
may wander infinitely far away from its starting point, even if the forcing
process is itself uniformly bounded. But unit roots would not violate the
weaker criterion of Blanchard and Kahn: variables might be wandering infinitely far away, and yet not do so ‘too fast’.
Therefore, in order to characterize bounded solutions in the Blanchard
and Kahn sense, one can allow for unit roots among the stable eigenvalues
- as they do. In order to characterize solutions that are uniformly bounded,
unit roots must be ruled out.
If the linear model is taken as an exact representation of truth, then solutions that are bounded in the Blanchard and Kahn sense are perfectly acceptable. On the other hand, if the linear model is taken as a mere first-order
approximation in a neighborhood of a steady-state, then uniform boundedness is the appropriate criterion to ensure the accuracy of the results obtained
by linearization, in which case the presence of unit roots poses a problem.
8 Expectational version
Going from the deterministic cases seen above to the expectational version
of the problem does not change the method of solution, and may indeed
facilitate the characterization of the unique bounded solution if one exists.
Consider a linearized rational expectations model written as a first-order
system of expectational difference equations (here, this standard format is
more restrictive than in the deterministic case, as we shall see):
E_t [ p_{t+1} ]   [ A11 A12 ] [ p_t ]   [ u_t ]
    [ k_{t+1} ] = [ A21 A22 ] [ k_t ] + [ v_t ]    (19)
Besides the information directly contained in the system of equations (19),
it is understood that predetermined variables kt are assigned initial conditions for the first date t = T when the system is supposed to hold. Such
initial conditions are inherited from the period before, t = T − 1. Under the
dynamics governed by (19), each kt continues to be entirely determined at
t − 1 (hence the term ‘predetermined’). As a result, it is also understood
that Et kt+1 = kt+1 at all dates.
One can proceed through the same steps followed in the solution of the deterministic case. A Jordan decomposition expresses the system in terms of
canonical variables. Solving recursively forward for the canonical variables
{zt } associated with unstable eigenvalues, in terms of their corresponding
forcing processes {wt } (both defined just as before), one finds:
z_t = D11^{−T} E_t z_{t+T} − Σ_{s=0}^{T−1} D11^{−s−1} E_t w_{t+s}

where the recursions use the fact that E_t [E_{t+s} (·)] = E_t (·) for s ≥ 0. Now we require
that Et wt+s and Et zt+s be uniformly bounded (for all t and s), and taking
T → ∞ we find:
z_t = − Σ_{s=0}^{∞} D11^{−s−1} E_t w_{t+s}
Using no more than the definitions of {zt } and {wt }, that can also be written
as:
L11 p_t + L12 k_t = − Σ_{s=0}^{∞} D11^{−s−1} (L11 E_t u_{t+s} + L12 E_t v_{t+s})
and, if m = n and L11 is full rank:
p_t = −L11^{−1} L12 k_t − Σ_{s=0}^{∞} L11^{−1} D11^{−s−1} (L11 E_t u_{t+s} + L12 E_t v_{t+s})
Thanks to our understanding about the nature of a predetermined variable, this can be put together with equation (16) as a characterization of
the solution, which will indeed satisfy the boundedness criterion we chose to
impose if all stable eigenvalues are strictly inside the unit circle.
Now, the forward looking solution for the jumping variables depends only
on the current expectation of the current and future value of the forcing
processes. By specifying the stochastic properties of these processes, these
conditional moments can be easily calculated. One particularly easy case is
when (ut , vt ) is known at date t and Et ut+s = Et vt+s = 0 for all s > 0 - the
forcing process is an unforecastable innovation. The solution for the jumping
variables simplifies to:
p_t = −L11^{−1} L12 k_t − L11^{−1} D11^{−1} (L11 u_t + L12 v_t)
Actually, we can restrict attention to this case by embedding all the serial
correlation in the forcing process in the model itself. Consider the first-order
model:
Et yt+1 = Ayt + xt
where xt is known at t. Suppose that the forcing process is described by the
following autoregression:
xt = Bxt−1 + et
where et is known at t and Et et+s = 0 for all s > 0. For the forcing process
to be bounded, we require that all eigenvalues of B lie strictly inside the unit
circle. We can then write the model as:
E_t [ y_{t+1} ]   [ A  B ] [ y_t     ]   [ e_t ]
    [ x_t     ] = [ 0  B ] [ x_{t-1} ] + [ e_t ]
which has an unforecastable forcing process. Because the autoregressive matrix in the augmented system is block triangular, its eigenvalues are those of
the main diagonal blocks A and B. Consequently, we have augmented the
system with as many predetermined variables (in xt ) as stable eigenvalues
(those of B), and the determinacy properties are the same as in the original
system.
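The block-triangularity argument is easy to confirm numerically; the matrices A and B below are illustrative:

```python
import numpy as np

# Embedding the AR(1) forcing process x_t = B x_{t-1} + e_t in the state:
# the augmented matrix is block triangular, so its spectrum is the union
# of the spectra of A and B.
A = np.array([[1.4, 0.0], [0.3, 1.2]])   # original system, unstable roots
B = np.array([[0.8, 0.1], [0.0, 0.6]])   # stable forcing autoregression

M = np.block([[A, B],
              [np.zeros((2, 2)), B]])    # augmented autoregressive matrix

aug = np.sort(np.abs(np.linalg.eigvals(M)))
orig = np.sort(np.abs(np.concatenate([np.linalg.eigvals(A),
                                      np.linalg.eigvals(B)])))
assert np.allclose(aug, orig)            # same eigenvalues, stable plus unstable
```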
9 Wrapping up
The results presented above can be summarized in a formal proposition, for
ease of reference:
Proposition 4 Consider the first-order linear expectational difference system:

E_t [ p_{t+1} ]   [ A11 A12 ] [ p_t ]   [ u_t ]
    [ k_{t+1} ] = [ A21 A22 ] [ k_t ] + [ v_t ]
which holds for all t ≥ 0, and must satisfy initial conditions k0 = k̄. Suppose
that, for all t ≥ 0, E_t (u_{t+s}, v_{t+s}) = 0 for s > 0, E_t (u_{t+s}, v_{t+s}) = (u_{t+s}, v_{t+s})
for s ≤ 0, and E_t k_{t+1} = k_{t+1}. Let n be the size of A11. One can always find
a diagonal matrix D and a matrix L satisfying:
[ L11 L12 ] [ A11 A12 ]   [ D11  0  ] [ L11 L12 ]
[ L21 L22 ] [ A21 A22 ] = [  0  D22 ] [ L21 L22 ]
where D11 contains all the entries of D that have absolute value strictly
greater than unity. Let m be the size of D11 . If all entries in D22 have
absolute value strictly smaller than unity, and if L11 is full rank:
(i) If m = n, then there is a unique solution to the system with the
property that Et (pt+s , kt+s ) is uniformly bounded in t and s. That solution
is fully characterized by:
p_t = −L11^{−1} L12 k_t − L11^{−1} D11^{−1} (L11 u_t + L12 v_t)
k_{t+1} = A21 p_t + A22 k_t + v_t
for all t ≥ 0, with k0 = k̄.
(ii) If m > n, then there is no solution to the system with the property
that Et (pt+s , kt+s ) is uniformly bounded in t and s.
(iii) If m < n, then there are infinitely many solutions to the system with
the property that Et (pt+s , kt+s ) is uniformly bounded in t and s.
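Proposition 4(i) can be implemented in a few lines. The sketch below uses an illustrative 2×2 system with one jumping and one predetermined variable and unforecastable innovations; the matrix and shock sizes are my own choices:

```python
import numpy as np

# Proposition 4(i) with m = n = 1 and unforecastable innovations.
A = np.array([[1.6, 0.4],
              [0.2, 0.8]])
eigvals, V = np.linalg.eig(A)
L = np.linalg.inv(V)                       # rows are left eigenvectors: LA = DL
order = np.argsort(-np.abs(eigvals))       # unstable eigenvalue first
eigvals, L = eigvals[order], L[order]
assert np.abs(eigvals[0]) > 1 > np.abs(eigvals[1])   # exactly one unstable root

D11, L11, L12 = eigvals[0], L[0, 0], L[0, 1]

def jump(k, u, v):
    # p_t = −L11^{-1} L12 k_t − L11^{-1} D11^{-1} (L11 u_t + L12 v_t)
    return -(L12 / L11) * k - (L11 * u + L12 * v) / (L11 * D11)

# simulate k_{t+1} = A21 p_t + A22 k_t + v_t from an arbitrary k0
rng = np.random.default_rng(1)
k, path = 1.0, []
for t in range(500):
    u, v = rng.normal(size=2) * 0.1        # small unforecastable innovations
    p = jump(k, u, v)
    path.append(abs(p) + abs(k))
    k = A[1, 0] * p + A[1, 1] * k + v

assert max(path) < 100.0                   # the trajectory remains bounded
```

Footnote 2 implies that the solved dynamics of the predetermined variable inherit the stable root, which is why the simulated trajectory stays bounded.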