Lecture 2
Dynamic Equilibrium Models: Three and More (Finite) Periods
1. Introduction
In ECON 501, we discussed the structure of two-period dynamic general equilibrium
models, some solution methods, and their application to issues such as optimal consumption and savings and asset pricing. In this lecture, we increase the time horizon to three
and more (but finite) periods. We will see that while an increase in the time horizon does
not change the structure of dynamic general equilibrium models, it increases the number
of choice variables, which makes the Lagrangian method of the previous lecture quite
cumbersome (in many cases almost impossible). In this lecture, we will discuss an alternative method, the method of dynamic programming, for solving DGE models. After
learning this method, we will see some of its applications.
Let us begin with an example. We will solve this example using both the Lagrangian
and dynamic programming methods and then compare the structure of these two solution
methods.
Example 1
Suppose that the economy lasts for three periods and the representative consumer
wants to choose an optimal consumption plan subject to his budget constraint. The
optimization problem is as follows:
\max_{c_1, c_2, c_3, a_1, a_2} \ \ln c_1 + \beta \ln c_2 + \beta^2 \ln c_3    (1.1)

subject to

c_1 + a_1 = a_0(1+r)    (1.2)

c_2 + a_2 = a_1(1+r)    (1.3)

c_3 = a_2(1+r)    (1.4)
given a_0 and a_3 = 0. (1.2)-(1.4) are the period budget constraints; a_i is the asset level
in period i and r is the real rate of interest. Let \lambda_1, \lambda_2, and \lambda_3 be the Lagrange
multipliers associated with (1.2), (1.3), and (1.4), respectively. The first order conditions
are
c_1: \quad \frac{1}{c_1} = \lambda_1    (1.5)

c_2: \quad \frac{\beta}{c_2} = \lambda_2    (1.6)

c_3: \quad \frac{\beta^2}{c_3} = \lambda_3    (1.7)

a_1: \quad \lambda_1 = (1+r)\lambda_2    (1.8)

a_2: \quad \lambda_2 = (1+r)\lambda_3.    (1.9)
Combining (1.5), (1.6), and (1.8), we have

\frac{1}{c_1} = \beta(1+r)\,\frac{1}{c_2}.    (1.10)

Similarly, from (1.6), (1.7), and (1.9), we have

\frac{1}{c_2} = \beta(1+r)\,\frac{1}{c_3}.    (1.11)
(1.10) and (1.11) are Euler equations linking consumption in adjacent periods. Using
(1.10) and (1.11) together with the three budget constraints, we can solve for the five
unknowns c_1, c_2, c_3, a_1, and a_2. From (1.2), (1.3), and (1.10), we get
\frac{1}{(1+r)a_0 - a_1} = \frac{\beta(1+r)}{(1+r)a_1 - a_2}.    (1.13)
After some manipulation, we have
a_2 = (1+r)(1+\beta)\,a_1 - \beta(1+r)^2 a_0.    (1.14)
From (1.3), (1.4), and (1.11), we have another expression for a_2:

a_2 = \frac{\beta(1+r)}{1+\beta}\,a_1.    (1.15)
From (1.14) and (1.15) we obtain the solution for a_1:

a_1 = \frac{\beta(1+\beta)(1+r)}{1+\beta(1+\beta)}\,a_0.    (1.16)
Once we have solved for a_1, we can solve for the other choice variables using (1.14) and the three
budget constraints.
Let us look at the structure of the solution method. In the Lagrangian method, we
choose all the choice variables c_1, c_2, c_3, a_1, and a_2 simultaneously. This leads to five first
order conditions, (1.5)-(1.9). In addition, we have three budget constraints. Using these
eight equations we solve for eight unknowns: c_1, c_2, c_3, a_1, a_2, \lambda_1, \lambda_2, and \lambda_3.
In general, when we use the Lagrangian method, we get a system of simultaneous equations in which the number of equations equals the number of choice variables
plus the number of constraints. From the structure of the solution method, it is immediately
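As a concrete illustration of this simultaneous structure, substituting the budget constraints into the Euler equations (1.10)-(1.11) leaves a linear system in (a_1, a_2), which can be solved in a few lines. The values of \beta, r, and a_0 below are illustrative choices, not values from the text:

```python
import numpy as np

# Substituting (1.2)-(1.4) into the Euler equations (1.10)-(1.11) gives:
# (1+beta)(1+r) a1 -          a2 = beta (1+r)^2 a0   [from (1.10)]
#    -beta(1+r) a1 + (1+beta) a2 = 0                 [from (1.11)]
beta, r, a0 = 0.95, 0.04, 100.0   # illustrative parameters

A = np.array([[(1 + beta) * (1 + r), -1.0],
              [-beta * (1 + r), 1 + beta]])
b = np.array([beta * (1 + r) ** 2 * a0, 0.0])
a1, a2 = np.linalg.solve(A, b)

# The answer should match the closed form (1.16)
a1_exact = beta * (1 + beta) * (1 + r) / (1 + beta * (1 + beta)) * a0
assert abs(a1 - a1_exact) < 1e-9
```

The point of the sketch is that all unknowns are determined jointly by one system; as the horizon grows, so does the matrix.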
clear that as the number of choice variables increases (e.g., with more time periods), the system
of simultaneous equations becomes bigger and more complex, making it harder (if not
impossible) to solve. For instance, if we add one more time period to the previous example, we will need to solve a system of 11 simultaneous equations.
In order to solve a multi-period DGE model, then, we need another solution
method, one which reduces the dimension of the system of simultaneous equations we need to
solve. This leads to the method of dynamic programming.
2. Dynamic Programming: Finite Periods
The method of dynamic programming provides a way of breaking the big multi-period optimization problem down into a series of smaller optimization problems which are solved
one at a time. Essentially, the optimization problem is broken down into a sequence of two-period optimization problems. The choice variables are chosen sequentially (recursively)
rather than simultaneously. Under certain conditions (discussed below), the solutions
derived are identical to those of the Lagrangian method. Let us solve the previous optimization
problem using this new method.
Example 2
We begin with the last period. We choose c_3 in order to maximize last period utility
subject to the last period budget constraint. Let W_3(a_2) be a function defined as

W_3(a_2) \equiv \max_{c_3} \ \ln c_3 + \lambda_3\left[(1+r)a_2 - c_3\right].    (2.1)

W_3(a_2) is simply the optimum value of utility in period 3. The first order condition is

c_3: \quad \frac{1}{c_3} = \lambda_3.    (2.2)
From the budget constraint, we have

c_3 = (1+r)a_2.    (2.3)

By the envelope condition, we have

\frac{dW_3(a_2)}{da_2} = \lambda_3(1+r) = \frac{1}{a_2}.    (2.4)
After solving for optimal consumption in the third period, we step back to the second
period. We choose c_2 and a_2 to maximize the following function:

W_2(a_1) \equiv \max_{c_2, a_2} \ \ln c_2 + \beta W_3(a_2) + \lambda_2\left[(1+r)a_1 - c_2 - a_2\right].    (2.5)

W_2(a_1) is the optimum value of the objective function. Here we are choosing c_2 and a_2
in order to maximize the utility of the second period plus the discounted optimum value
of utility in period 3, subject to the second period budget constraint. Notice that (2.5)
involves two periods. The first order conditions are
c_2: \quad \frac{1}{c_2} = \lambda_2    (2.6)

a_2: \quad \beta\,\frac{dW_3(a_2)}{da_2} = \lambda_2.    (2.7)
Using (2.4), (2.6), (2.7), and the second period budget constraint, we have
\frac{\beta}{a_2} = \frac{1}{(1+r)a_1 - a_2}.    (2.8)
From (2.8), we have

a_2 = \frac{\beta(1+r)}{1+\beta}\,a_1    (2.9)

and

c_2 = \frac{1+r}{1+\beta}\,a_1.    (2.10)
By the envelope condition, we have

\frac{dW_2(a_1)}{da_1} = \lambda_2(1+r) = \frac{1+\beta}{a_1}.    (2.11)
Now we step back to period 1 and solve for optimal c_1 and a_1. Define W_1(a_0) as

W_1(a_0) \equiv \max_{c_1, a_1} \ \ln c_1 + \beta W_2(a_1) + \lambda_1\left[(1+r)a_0 - c_1 - a_1\right].    (2.12)
(2.12) has an interpretation similar to (2.5). Notice that the objective function in (2.12)
involves W_2(a_1) but not W_3(a_2), since W_2(a_1) already incorporates W_3(a_2) (see (2.5)). The
first order conditions are
c_1: \quad \frac{1}{c_1} = \lambda_1    (2.13)

a_1: \quad \beta\,\frac{dW_2(a_1)}{da_1} = \lambda_1.    (2.14)
Using (2.11), (2.13), (2.14), and the first period budget constraint, we have
\frac{\beta(1+\beta)}{a_1} = \frac{1}{(1+r)a_0 - a_1}.    (2.15)
From (2.15), we get
a_1 = \frac{\beta(1+\beta)(1+r)}{1+\beta(1+\beta)}\,a_0.    (2.16)
As you can see, the solution for a_1 is exactly the same as we found earlier in (1.16). Using
(2.9) and the budget constraints, we can find the values of the other choice variables. Let
us recapitulate the steps involved in the second method. Basically, we reformulated the
optimization problem (1.1) by breaking it down into three smaller optimization problems as
follows:
W_1(a_0) \equiv \max_{c_1, a_1} \ \ln c_1 + \beta W_2(a_1) + \lambda_1\left[(1+r)a_0 - c_1 - a_1\right]

where

W_2(a_1) \equiv \max_{c_2, a_2} \ \ln c_2 + \beta W_3(a_2) + \lambda_2\left[(1+r)a_1 - c_2 - a_2\right]

where

W_3(a_2) \equiv \max_{c_3} \ \ln c_3 + \lambda_3\left[(1+r)a_2 - c_3\right].
Then we solved this problem backward. We started with the last period and solved for
c_3 and \frac{dW_3(a_2)}{da_2}. Then we stepped back and solved for c_2, a_2, and \frac{dW_2(a_1)}{da_1}. In the final step, we
solved for c_1 and a_1. As you can see, at each step we are choosing only the current period choice
variables subject to the current period budget constraint, which reduces the dimension
of the system of simultaneous equations that needs to be solved at any step. In the
Lagrangian method, we were choosing the choice variables for all the periods at the same
time. It is this feature of the method of dynamic programming which makes it quite
suitable for solving DGE models. Numerically, it is much easier to invert a 10 by 10
matrix 10 times than to invert a 100 by 100 matrix once.
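The sequential structure can be made concrete in a short script: each policy function derived above ((2.3), (2.9), (2.16)) maps a state into the current choices, and the whole plan is recovered by rolling the policies forward from the initial state. The parameter values are illustrative, not from the text:

```python
# Policy functions from the backward solution of Example 2
beta, r, a0 = 0.95, 0.04, 100.0   # illustrative parameters

def a1_policy(a0):   # (2.16): period-1 saving as a function of the state a0
    return beta * (1 + beta) * (1 + r) / (1 + beta * (1 + beta)) * a0

def a2_policy(a1):   # (2.9): period-2 saving as a function of the state a1
    return beta * (1 + r) / (1 + beta) * a1

def c3_policy(a2):   # (2.3): last-period consumption as a function of a2
    return (1 + r) * a2

# Roll the policies forward through the laws of motion
a1 = a1_policy(a0)
a2 = a2_policy(a1)
c1, c2, c3 = (1 + r) * a0 - a1, (1 + r) * a1 - a2, c3_policy(a2)

# The Euler equations (1.10)-(1.11) should hold along the recovered plan
assert abs(1 / c1 - beta * (1 + r) / c2) < 1e-9
assert abs(1 / c2 - beta * (1 + r) / c3) < 1e-9
```

Note that no simultaneous system is ever solved: each function depends only on the current state.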
Let us now discuss some of the elements of the method of dynamic programming.
Functions such as W_3(a_2), W_2(a_1), and W_1(a_0) are called value functions. They are nothing
but indirect utility functions. The value function W_t(a_{t-1}) is a function of a_{t-1}, which
the utility maximizer at time t takes as given. Such variables are known as state variables
(a_0 at time 1, a_1 at time 2, and so on). These variables describe the state of the system at a
particular time and provide inter-temporal linkages. The variables which the utility maximizer
chooses at time t are called control or choice or decision variables (c_1, a_1 at time 1;
c_2, a_2 at time 2; and c_3 at time 3). Notice that the control variables at time t can be state
variables at time t+1. We distinguish between control and state variables on the basis of
which variables the decision maker takes as given and which he can choose at the time of
making his decision.
At each stage of the solution process, the first order conditions together with the
budget constraint and the envelope condition allow us to solve for the choice variables as functions
of the state variables. Such functions are called policy functions (e.g., (2.3), (2.9), (2.10), (2.16)).
Finally, we have budget constraints which express the next period value of a state variable as a
function of the current state and choice variables (e.g., a_2 = (1+r)a_1 - c_2). Such functions are
known as laws of motion; they describe how state variables evolve over time.
Let us look at the method of dynamic programming in a somewhat more formal way.
Let F_t(X_t, U_t) be the period objective function, where X_t is a vector of state variables and
U_t is a vector of control variables at time t. Suppose that the optimization problem
given to us is as follows:

W = \max_{U_t} \ \sum_{t=1}^{T} \beta^{t-1} F_t(X_t, U_t)    (2.17)

subject to the laws of motion

x_{t+1} = m_{tx}(X_t, U_t) \quad \forall\, x_t \in X_t \ \text{and} \ \forall\, t,    (2.18)

given X_0 and X_T. We can recast this optimization problem as

W_t(X_t) = \max_{U_t} \ F_t(X_t, U_t) + \beta W_{t+1}(X_{t+1})    (2.19)

subject to the laws of motion

x_{t+1} = m_{tx}(X_t, U_t) \quad \forall\, x_t \in X_t.    (2.20)
Equations such as (2.19) are known as Bellman equations. Since W_t(X_t) is a
function of another value function, W_{t+1}(X_{t+1}), (2.19) is also known as a functional equation.
The Bellman equation is said to satisfy the principle of optimality. The idea behind the
principle of optimality is as follows. Suppose we solve the optimization problem from time
t onwards, given the state variables at time t, and derive the optimal policy functions (U_t =
G_t(X_t), U_{t+1} = G_{t+1}(X_{t+1}), ..., U_{t+i} = G_{t+i}(X_{t+i}), ..., U_T = G_T(X_T)). These policy
functions provide optimal trajectories of the choice variables as functions of the state variables.
In other words, we derive the optimal plan of actions over time as a function of the state
variables. Now suppose we solve for the optimal plan of action from time t+i, with i \geq 1,
onwards and derive the optimal policy functions (U_{t+i} = G'_{t+i}(X_{t+i}), ..., U_T = G'_T(X_T)).
Then the principle of optimality is satisfied iff G_{t+i}(X_{t+i}) = G'_{t+i}(X_{t+i}) \ \forall\, i.
The implication of the principle of optimality is that the optimal plan of action (the solution) is time consistent. We compute the optimal path from the beginning of the planning
period and start moving along it. After a while, we stop and recalculate the optimal path
from the current period onwards. The principle of optimality tells us that the solution of
the new problem will be the remainder of the original optimal plan. Hence, the decision
maker will not be tempted to change his mind or deviate from the original plan as time
passes. In the parlance of game theory, the optimal plan is sub-game perfect.
So far we have not asked whether all optimization problems can be recast
as dynamic programming problems. The answer is no! Only under certain conditions can we
solve an optimization problem through the method of dynamic programming. The
main conditions are as follows:
(1) The objective function must be time-additive and separable, and
(2) the period objective function F_t(X_t, U_t) and the laws of motion x_{t+1} = m_{tx}(X_t, U_t) must
be functions of current state or control variables or both. They cannot be
functions of past or future state or control variables.
Let us recapitulate the steps involved in solving a dynamic programming problem.
(1) Start at the last period and solve for the control variables as functions of the state variables.
Derive the envelope condition \frac{dW_T(X_T)}{dX_T}.
(2) Step one period back and solve

W_{T-1}(X_{T-1}) = \max_{U_{T-1}} \ F_{T-1}(X_{T-1}, U_{T-1}) + \beta W_T(X_T)    (2.21)

subject to the laws of motion

x_T = m_{T-1,x}(X_{T-1}, U_{T-1}) \quad \forall\, x_{T-1} \in X_{T-1}.    (2.22)

Derive the policy functions and solve for the envelope condition. One can show that the envelope
condition satisfies

\frac{dW_t(X_t)}{dx_t} = \frac{dF_t(X_t, U_t)}{dx_t} + \beta\,\frac{dm_{tx}(X_t, U_t)}{dx_t}\,\frac{dW_{t+1}(X_{t+1})}{dx_{t+1}} \quad \forall\, x_t \in X_t \ \text{and} \ \forall\, t.    (2.23)

The envelope condition is also known as the Benveniste-Scheinkman condition.
(3) Keep repeating step 2 till t = 1.
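The three-step recipe can also be carried out purely numerically on a grid, without deriving the policy functions by hand. The sketch below applies it to the consumption-savings problem of Example 2 and checks the answer against the closed form (2.16); the parameter values and grid size are illustrative choices:

```python
import numpy as np

# Numerical backward induction for the three-period problem of Example 2
# (log utility, gross return 1+r); beta, r, a0 are illustrative.
beta, r, a0 = 0.95, 0.04, 1.0
grid = np.linspace(1e-6, (1 + r) * a0, 2000)   # candidate asset levels

# Step 1: last-period value function, W3(a2) = ln((1+r) a2)
W3 = np.log((1 + r) * grid)

# Step 2: for each state a1 on the grid, choose a2 on the grid to
# maximize ln((1+r) a1 - a2) + beta * W3(a2)
W2 = np.empty_like(grid)
for i, a1 in enumerate(grid):
    c2 = (1 + r) * a1 - grid
    feasible = c2 > 0
    vals = np.full_like(grid, -np.inf)
    vals[feasible] = np.log(c2[feasible]) + beta * W3[feasible]
    W2[i] = vals.max()

# Step 3 (t = 1): choose a1 given the initial state a0
c1 = (1 + r) * a0 - grid
feasible = c1 > 0
vals = np.full_like(grid, -np.inf)
vals[feasible] = np.log(c1[feasible]) + beta * W2[feasible]
a1_numerical = grid[vals.argmax()]

# The grid solution should be close to the closed form (2.16)
a1_exact = beta * (1 + beta) * (1 + r) / (1 + beta * (1 + beta)) * a0
assert abs(a1_numerical - a1_exact) < 5e-3
```

The accuracy is limited only by the grid spacing; a finer grid (or interpolation between grid points) shrinks the gap to the closed form.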
Example 3
Let us solve the problem of optimal consumption and saving using the method of
dynamic programming:

\max_{c_t, k_t} \ \sum_{t=1}^{3} \beta^{t-1} \ln c_t    (2.24)

subject to

k_t = k_{t-1}^{\alpha} - c_t,    (2.25)

given k_0 > 0 and k_3 = 0. We can recast the above problem as

W_t(k_{t-1}) = \max_{c_t, k_t} \ \ln c_t + \beta W_{t+1}(k_t) + \lambda_t\left[k_{t-1}^{\alpha} - c_t - k_t\right], \quad \forall\, t = 1, \ldots, 3.    (2.26)
Here k_{t-1} is the state variable, which the decision maker takes as given at time t. We begin
with the last period, t = 3. We have

W_3(k_2) = \max_{c_3} \ \ln c_3 + \lambda_3\left[k_2^{\alpha} - c_3\right].    (2.27)

The solution is

c_3 = k_2^{\alpha},    (2.28)

which is our policy function (choice variable as a function of the state variable). From the envelope
condition, we get

\frac{dW_3(k_2)}{dk_2} = \frac{\alpha}{k_2}.    (2.29)
Now we go back to period 2. The period 2 problem is

W_2(k_1) = \max_{c_2, k_2} \ \ln c_2 + \beta W_3(k_2) + \lambda_2\left[k_1^{\alpha} - c_2 - k_2\right].    (2.30)

The first order conditions are

c_2: \quad \frac{1}{c_2} = \lambda_2    (2.31)

k_2: \quad \beta\,\frac{dW_3(k_2)}{dk_2} = \lambda_2.    (2.32)
From (2.29), (2.31), (2.32), and the second period budget constraint, we have

\frac{1}{k_1^{\alpha} - k_2} = \frac{\alpha\beta}{k_2}.    (2.33)

(2.33) implies that

k_2 = \frac{\alpha\beta}{1+\alpha\beta}\,k_1^{\alpha}    (2.34)

and

c_2 = \frac{k_1^{\alpha}}{1+\alpha\beta}.    (2.35)
From the envelope condition, we have

\frac{dW_2(k_1)}{dk_1} = \frac{\beta\alpha^2 k_1^{\alpha-1}}{k_2}.    (2.36)

(2.34) and (2.36) imply that

\frac{dW_2(k_1)}{dk_1} = \frac{\alpha(1+\alpha\beta)}{k_1}.    (2.37)
Now we go back to period 1. The period 1 problem is

W_1(k_0) = \max_{c_1, k_1} \ \ln c_1 + \beta W_2(k_1) + \lambda_1\left[k_0^{\alpha} - c_1 - k_1\right].    (2.38)

The first order conditions are

c_1: \quad \frac{1}{c_1} = \lambda_1    (2.39)

k_1: \quad \beta\,\frac{dW_2(k_1)}{dk_1} = \lambda_1.    (2.40)

From (2.37), (2.39), (2.40), and the first period budget constraint, we have

\frac{1}{k_0^{\alpha} - k_1} = \frac{\alpha\beta(1+\alpha\beta)}{k_1}.    (2.41)

From (2.41), we have

k_1 = \frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta(1+\alpha\beta)}\,k_0^{\alpha}.    (2.42)
Once we have solved for k_1, we can solve for the other choice variables.
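The closed forms above are easy to verify numerically; \alpha, \beta, and k_0 below are illustrative values, not taken from the text:

```python
import math

# Closed-form solution of Example 3; alpha, beta, k0 are illustrative.
alpha, beta, k0 = 0.36, 0.95, 1.0
ab = alpha * beta

k1 = ab * (1 + ab) / (1 + ab * (1 + ab)) * k0**alpha   # (2.42)
k2 = ab / (1 + ab) * k1**alpha                          # (2.34)
c1 = k0**alpha - k1                                     # budget constraint (2.25)
c2 = k1**alpha - k2
c3 = k2**alpha                                          # (2.28)

# The Euler equation 1/c_t = alpha*beta*k_t^(alpha-1)/c_{t+1}, implied by
# the first order and envelope conditions, should hold in both periods
assert math.isclose(1 / c1, ab * k1**(alpha - 1) / c2)
assert math.isclose(1 / c2, ab * k2**(alpha - 1) / c3)
```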
Exercise: Solve the above problem using the Lagrangian method.
3. Uncertainty
So far we have dealt with certainty case. Now we introduce uncertainty. We will see
that the method of dynamic programming can be readily applied to solve optimization
problems with uncertainty. Before we extend our analysis to multi-period economies with
uncertainty, it is useful to refresh our memories about joint probability distribution.
Suppose x and y are two random variables, where x and y can take values {1, 2}. Suppose
their joint probability distribution is given as follows:
Table 1: Joint Probability Distribution of x and y

             x = 1    x = 2    Total
   y = 1      0.4      0.1      0.5
   y = 2      0.1      0.4      0.5
   Total      0.5      0.5      1.0
The marginal distribution of x is given by the entries in the last row. For instance,
Prob(x = 1) = 0.5 = Prob(x = 2). Similarly, the marginal distribution of y is given
by the entries in the last column. Using these marginal distributions, one can derive
unconditional averages and variances. The unconditional average of x is given by

E(x) = 1 \cdot P(x = 1) + 2 \cdot P(x = 2) = 1.5.
One can similarly derive the unconditional moments of y. The conditional probability
distribution is derived by using the joint and marginal probability distributions. The
probability that x = 1 given (conditional on) y = 1 is

P(x = 1 \mid y = 1) = \frac{P(x = 1, y = 1)}{P(y = 1)} = \frac{0.4}{0.5} = 0.8.

Similarly,

P(x = 2 \mid y = 1) = \frac{P(x = 2, y = 1)}{P(y = 1)} = \frac{0.1}{0.5} = 0.2.
Notice that the conditional probability distribution adds up to one. Corresponding to the
conditional probability distribution, one can calculate conditional averages and variances. For instance, the average of x conditional on y = 1 is given by

E(x \mid y = 1) = 1 \cdot P(x = 1 \mid y = 1) + 2 \cdot P(x = 2 \mid y = 1) = 1.2,

which is lower than the unconditional average of x.
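These calculations can be reproduced in a few lines directly from the joint distribution of Table 1:

```python
# Joint distribution of Table 1, keyed by (x, y)
joint = {(1, 1): 0.4, (1, 2): 0.1, (2, 1): 0.1, (2, 2): 0.4}

p_y1 = sum(p for (x, y), p in joint.items() if y == 1)   # marginal P(y = 1)
cond = {x: joint[(x, 1)] / p_y1 for x in (1, 2)}         # P(x | y = 1)

Ex = sum(x * p for (x, y), p in joint.items())           # unconditional E(x)
Ex_given_y1 = sum(x * p for x, p in cond.items())        # E(x | y = 1)

assert abs(Ex - 1.5) < 1e-12                   # unconditional mean of x
assert abs(Ex_given_y1 - 1.2) < 1e-12          # conditional mean E(x | y = 1)
assert abs(sum(cond.values()) - 1.0) < 1e-12   # conditional probabilities sum to one
```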
Exercise: Find out conditional distribution of x when y = 2. Derive the conditional
mean and variance.
Two random variables x and y are said to be independent if

P(x = x_i, y = y_j) = P(x = x_i)\,P(y = y_j).

In this case, the conditional and unconditional (marginal) expectations coincide. From the
probabilities given, one can work out the covariance or correlation. If two random
variables are independent, their covariance (and hence their correlation) is zero. In the previous example,
x and y are not independent.
Exercise: What is the covariance between x and y in the previous example?
Exercise: Construct an example where x and y are independent.
Suppose now that x is income in period 1 and y is income in period 2. If period
1 income and period 2 income are not independent (in the above example they
are actually positively correlated), then knowing period 1 income helps us improve our
forecast of period 2 income. In what follows, we will make extensive use of conditional
expectations and variances. We will denote the conditional expectation operator by E_t to show
that the expectation is conditional on information available at the beginning of period
t.
Example 4
Let us introduce uncertainty into Example 3. Suppose the production function is given
by A_t k_{t-1}^{\alpha}, where A_t is a serially dependent (non-i.i.d.) random variable. Suppose that A_t can take
two values, A_l and A_h, in any period t. Also suppose that the value of A_t is realized at the
beginning of period t, before the decision maker chooses consumption and investment for
that period.
The optimization problem is

\max_{c_t, k_t} \ E \sum_{t=1}^{3} \beta^{t-1} \ln c_t    (3.1)

subject to

c_t + k_t = A_t k_{t-1}^{\alpha},    (3.2)

given k_0 > 0 and k_3 = 0. We can recast this optimization problem as

W_t(k_{t-1}, A_t) = \max_{c_t, k_t} \ \ln c_t + \beta E_t W_{t+1}(k_t, A_{t+1}) + \lambda_t\left[A_t k_{t-1}^{\alpha} - c_t - k_t\right].    (3.3)
Notice that there are two state variables, k_{t-1} and A_t, since the state of technology is
revealed to the decision maker before he makes choices for time t. As a convention, if
a random variable is i.i.d., it is not treated as a state variable, as knowledge of it is not
informative about its future values.
The expectation operator E_t denotes the fact that the expectation is conditional on
information available at the beginning of period t (in this case, the values of k_{t-1} and A_t). As
usual, we start with the last period. The optimization problem is
W_3(k_2, A_3) = \max_{c_3} \ \ln c_3 + \lambda_3\left[A_3 k_2^{\alpha} - c_3\right].    (3.4)

Since the economy lasts for three periods, we have

c_3 = A_3 k_2^{\alpha}.    (3.5)

c_3 can take two values depending on whether A_3 is A_l or A_h. From the envelope condition,
we have

\frac{dW_3(k_2, A_3)}{dk_2} = \frac{\alpha}{k_2}.    (3.6)
Now we go back to period 2. The period 2 problem is

W_2(k_1, A_2) = \max_{c_2, k_2} \ \ln c_2 + \beta E_2 W_3(k_2, A_3) + \lambda_2\left[A_2 k_1^{\alpha} - c_2 - k_2\right].    (3.7)

The first order conditions are

c_2: \quad \frac{1}{c_2} = \lambda_2    (3.8)

k_2: \quad \beta E_2\left(\frac{dW_3(k_2, A_3)}{dk_2}\right) = \lambda_2.    (3.9)
From (3.6), (3.8), (3.9), and the second period budget constraint, we have

\frac{1}{A_2 k_1^{\alpha} - k_2} = \frac{\alpha\beta}{k_2}.    (3.10)

The solution for k_2 conditional on A_2 and k_1 is

k_2 = \frac{\alpha\beta}{1+\alpha\beta}\,A_2 k_1^{\alpha}    (3.11)

and

c_2 = \frac{A_2 k_1^{\alpha}}{1+\alpha\beta}.    (3.12)
Since A_2 can take two values, A_l and A_h, there will be two values of k_2 and c_2 for a given k_1.
From the envelope condition, we have

\frac{dW_2(k_1, A_2)}{dk_1} = \frac{\beta\alpha^2 A_2 k_1^{\alpha-1}}{k_2}.    (3.13)

(3.11) and (3.13) imply that

\frac{dW_2(k_1, A_2)}{dk_1} = \frac{\alpha(1+\alpha\beta)}{k_1}.    (3.14)
Now we go back to period 1. The period 1 problem is

W_1(k_0, A_1) = \max_{c_1, k_1} \ \ln c_1 + \beta E_1 W_2(k_1, A_2) + \lambda_1\left[A_1 k_0^{\alpha} - c_1 - k_1\right].    (3.15)

The first order conditions are

c_1: \quad \frac{1}{c_1} = \lambda_1    (3.16)

k_1: \quad \beta E_1\left(\frac{dW_2(k_1, A_2)}{dk_1}\right) = \lambda_1.    (3.17)

From (3.14), (3.16), (3.17), and the first period budget constraint, we have

\frac{1}{A_1 k_0^{\alpha} - k_1} = \frac{\alpha\beta(1+\alpha\beta)}{k_1}.    (3.18)

From (3.18), we have

k_1 = \frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta(1+\alpha\beta)}\,A_1 k_0^{\alpha}.    (3.19)
Once we have solved for k_1, we can solve for the other choice variables. Notice again that k_1
can take two values depending on whether A_1 is equal to A_l or A_h. Suppose that A_1 = A_l;
then

k_1^l = \frac{\alpha\beta(1+\alpha\beta)}{1+\alpha\beta(1+\alpha\beta)}\,A_l k_0^{\alpha}.    (3.20)

Given k_1^l, k_2 can take two values in the second period:

k_2^l = \frac{\alpha\beta}{1+\alpha\beta}\,A_l (k_1^l)^{\alpha}    (3.21)

and

k_2^h = \frac{\alpha\beta}{1+\alpha\beta}\,A_h (k_1^l)^{\alpha}.    (3.22)

Similarly, there will be two values of consumption in period 2. Corresponding to each
value of investment in period two, there will be two values of consumption in period 3
(see (3.5)). Thus from the perspective of the first period, there will be four values of c_3.
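The tree of outcomes can be enumerated directly from the policy functions (3.5), (3.11), and (3.19); the parameter values and the two technology levels below are illustrative, not from the text:

```python
# Outcome tree of Example 4: k1 depends on A1, k2 on (A1, A2), and
# c3 on (A1, A2, A3).  All parameter values are illustrative.
alpha, beta, k0 = 0.36, 0.95, 1.0
A_low, A_high = 0.9, 1.1

def k1_of(A1):          # policy function (3.19)
    s = alpha * beta * (1 + alpha * beta)
    return s / (1 + s) * A1 * k0**alpha

def k2_of(A2, k1):      # policy function (3.11)
    return alpha * beta / (1 + alpha * beta) * A2 * k1**alpha

def c3_of(A3, k2):      # policy function (3.5)
    return A3 * k2**alpha

k1 = k1_of(A_low)       # suppose A1 = A_low is realized, as in (3.20)
c3_values = [c3_of(A3, k2_of(A2, k1))
             for A2 in (A_low, A_high) for A3 in (A_low, A_high)]

# Four distinct c3 values, one per realization of (A2, A3)
assert len(set(round(c, 12) for c in c3_values)) == 4
```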
Exercise: Solve the above example using the Lagrangian method.
4. Asset Pricing
In ECON 501, we learned about asset pricing in two-period economies. Here, we
extend the analysis to more than two (but finitely many) periods. With multi-period economies, the
methodology for pricing assets remains the same as in two-period economies. The price of
an asset equates its marginal cost (in utility terms) to its expected marginal benefit (in
utility terms). Suppose that an asset pays off at time t+i with i \geq 1 and its payoff is
y_{t+i} (suppose the resale value is 0), which is a random variable. Then its price at time t, q_t,
satisfies

q_t u'(c_t) = \beta^i E_t\left[u'(c_{t+i})\,y_{t+i}\right].    (4.1)
The left hand side is the marginal cost of buying the asset in utility terms and the right
hand side is the expected marginal benefit. The asset pays y_{t+i} in period t+i, which is
converted into utility terms by multiplying it by the marginal utility of consumption at
t+i. To make it comparable to time t utility, we multiply it by \beta^i. (4.1) can be rewritten
as

q_t = \beta^i E_t\left[\frac{u'(c_{t+i})\,y_{t+i}}{u'(c_t)}\right].    (4.2)
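As a sketch, (4.2) can be evaluated for log utility (so u'(c) = 1/c) under a made-up two-state scenario for future consumption; all numbers are hypothetical:

```python
# Pricing by (4.2) with log utility, u'(c) = 1/c.  The scenario is hypothetical.
beta, i = 0.98, 2          # discount factor and payoff horizon (t+i)
c_t = 1.0                  # consumption today
# Each scenario: (consumption at t+i, payoff y at t+i, probability)
scenarios = [(1.05, 1.0, 0.5), (0.95, 1.0, 0.5)]

# q_t = beta^i * E_t[ u'(c_{t+i}) y_{t+i} / u'(c_t) ]
q_t = beta**i * sum(p * (1 / c_fut) * y / (1 / c_t)
                    for c_fut, y, p in scenarios)
```

Since the payoff here is a sure 1 unit, q_t is just a two-period bond price; because E[1/c_{t+i}] exceeds 1/E[c_{t+i}] (Jensen's inequality), consumption uncertainty raises the price.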
Long and Short Bonds
Let q^L be the price of a bond today which pays 1 unit in period 3 (a long or two-period
bond), and let q_1^S be the price of a one period bond, which pays 1 unit next period. Let
u(c) = \ln c. Then,

q^L = \beta^2 E_1\left[\frac{c_1}{c_3}\right]    (4.3)

q_1^S = \beta E_1\left[\frac{c_1}{c_2}\right].    (4.4)
From (4.3) and (4.4), we can derive the rates of interest on long and short bonds. The gross
return on the long bond satisfies

(1+r^L)^2 = \frac{1}{q^L} = \frac{1}{\beta^2 E_1\left[\frac{c_1}{c_3}\right]}.    (4.5)

Similarly, the gross return on the short bond satisfies

1+r_1^S = \frac{1}{q_1^S} = \frac{1}{\beta E_1\left[\frac{c_1}{c_2}\right]}.    (4.6)
The pattern of returns on long and short bonds is known as the term structure. The plot
of the term structure against maturity is also known as the yield curve. The term structure or yield
curve embodies forecasts of future consumption growth. In general, the yield curve slopes
up, reflecting growth. A downward sloping yield curve often forecasts a recession.
What is the relationship between the prices of short and long bonds? We turn to a
covariance decomposition. Writing

q^L = \beta^2 E_1\left[\frac{c_1}{c_2}\,\frac{c_2}{c_3}\right],    (4.7)

we have

q^L = q_1^S\, E_1 q_2^S + \mathrm{Cov}\left(\frac{\beta c_1}{c_2}, \frac{\beta c_2}{c_3}\right),    (4.8)

where q_2^S is the second period price of a one period bond. If we ignore the covariance term,
then in terms of returns (4.8) can be written as
\left(\frac{1}{1+r^L}\right)^2 = \frac{1}{1+r_1^S}\,E_1\left[\frac{1}{1+r_2^S}\right].    (4.9)
Taking logarithms, using the fact that \ln(1+r) \approx r, and ignoring Jensen's inequality, we get

r^L \approx \frac{r_1^S + E_1 r_2^S}{2}.    (4.10)

(4.10) suggests that the long bond yield is approximately equal to the arithmetic mean
of the current and expected short bond yields. This is called the expectation hypothesis of
the term structure.
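A quick numerical check of (4.10): with log utility and a hypothetical small-variance i.i.d. two-state process for consumption growth, the covariance and Jensen terms that (4.10) ignores are tiny, so the approximation is nearly exact:

```python
# Checking the expectation hypothesis (4.10) under log utility.  The growth
# process is hypothetical: two equally likely gross growth rates, i.i.d.
beta, c1 = 0.98, 1.0
growth = [(1.01, 0.5), (1.03, 0.5)]      # (gross consumption growth, probability)

E_inv_g = sum(p / g for g, p in growth)  # E[1/growth factor]

q1S = beta * E_inv_g                     # short bond price, (4.4)
qL = beta**2 * E_inv_g**2                # long bond price, (4.3), using independence
r1S = 1 / q1S - 1                        # short rate, (4.6)
rL = (1 / qL) ** 0.5 - 1                 # long rate, (4.5)
E1_r2S = r1S                             # with i.i.d. growth, q2S = q1S in every state

assert abs(rL - (r1S + E1_r2S) / 2) < 1e-3   # (4.10) holds almost exactly here
```

With larger growth variance or serially correlated growth, the ignored covariance term becomes a nontrivial risk premium and the approximation degrades.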
Exercise: Find out the prices of short and long term nominal bonds and their relationship.
Exercise: What is the price of a bond which pays 1 unit of the good in period T in all states?
Forward Prices
Suppose in period 1 you sign a contract which requires you to pay f in period 2
in exchange for a payoff of 1 in period 3. How do we value this contract? Notice that
the price of the contract, which is to be paid in period 2, is agreed in period 1. The
expected marginal cost of the contract in period 1 is \beta E_1 u'(c_2) f. The expected marginal benefit
of the contract is \beta^2 E_1 u'(c_3). Since the price equates the expected marginal cost with the
expected marginal benefit of the asset, we have

f = \frac{\beta E_1 u'(c_3)}{E_1 u'(c_2)} = \frac{q^L}{q_1^S}.    (4.11)
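A minimal sketch of (4.11) in the same kind of hypothetical i.i.d. growth setting used above:

```python
# Forward price (4.11): f = qL / q1S.  The growth numbers are illustrative.
beta = 0.98
E_inv_g = 0.5 / 1.01 + 0.5 / 1.03   # E[1/growth], two-state i.i.d. growth

q1S = beta * E_inv_g                # one period bond price, (4.4)
qL = beta**2 * E_inv_g**2           # two period bond price, (4.3)
f = qL / q1S                        # forward price, (4.11)

# With i.i.d. growth, next period's short bond price q2S equals q1S in every
# state, so the forward price equals the expected future short price exactly.
assert abs(f - q1S) < 1e-12
```

In this special case the forward rate equals the expected future short rate; with serially correlated growth the two generally differ by a covariance (risk premium) term.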
Exercise: Show that the expectation hypothesis implies that forward rates are equal to
the expected future short rates.
Exercise: Find the relationship among forward, long, and short rates of return. Also
use covariance decomposition to define risk-premium on forward prices.
Share
Suppose that a share pays a stream of dividends d_{t+i} for i = 1, \ldots, T-t. The resale value of
the share at time T is zero. Then the price of the share at time t is given by

q_t = E_t \sum_{i=1}^{T-t} \beta^i\,\frac{u'(c_{t+i})}{u'(c_t)}\,d_{t+i}.    (4.12)
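A sketch of (4.12) under log utility, with hypothetical deterministic consumption and dividend paths standing in for the conditional expectation:

```python
# Share price (4.12) with log utility: each dividend d_{t+i} is discounted by
# beta^i and weighted by u'(c_{t+i})/u'(c_t) = c_t/c_{t+i}.  All numbers are
# illustrative; a terminal "dividend" proxies a final large payout.
beta, c_t = 0.98, 1.0
path = [(1.02, 0.05), (1.04, 0.05), (1.06, 1.05)]   # (c_{t+i}, d_{t+i}), i = 1..3

q_share = sum(beta**i * (c_t / c_fut) * d
              for i, (c_fut, d) in enumerate(path, start=1))
```

Dividends paid in states when consumption is high (marginal utility low) contribute less to the price, which is the usual consumption-based risk adjustment.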
Additional Exercises
(1.) Time Inconsistency: Suppose the representative agent maximizes the following utility

\ln(c_1 c_2 c_3) + \beta \ln(c_2 c_3) + \beta^2 \ln c_3

subject to

a_t = a_{t-1} - c_t, \quad \forall\, t = 1, 2, 3,

given a_0 and a_3 = 0. Compute the consumption plan from the perspective of period
1. Now maximize the following utility

\beta \ln(c_2 c_3) + \beta^2 \ln c_3

subject to

a_t = a_{t-1} - c_t, \quad \forall\, t = 2, 3,

and the a_1 computed in the first part. Find the optimal c_2 and c_3. Compare your results
with the original plan. Has the consumer changed his mind? How and why? Does
the Bellman equation hold?
We will discuss some other examples of time-inconsistency later in the course.
(2.) Imagine a three period model with utility function

\ln c_1 + \beta E_1 \ln c_2 + \beta^2 E_1 \ln c_3.

Let \beta = 0.98. Suppose that c_1 = 1.02, but c_2 and c_3 can each take two values, 1.02 and
1.08. Also suppose that in the next period the value stays the same with probability
0.8 and changes to the other value with probability 0.2.
(a.) Write down the joint probability distribution of c_2 and c_3. What is the expected value
of c_3 conditional on c_2 = 1.02?
(b.) What is the expected value of c_3 conditional on c_1 = 1.02?
(c.) Find the values of q_1^S, q^L, and f.
(d.) Find the values of r_1^S and r^L. Also find E_1 r_2^S, the forecast of next period's short
interest rate (Hint: find the possible values of q_2^S first).