Prof. Dr. Thomas Steger
Advanced Macroeconomics | Lecture | SS 2013
Dynamic Optimization
(June 25, 2013)
No-Ponzi-Game condition
Method of Lagrange Multipliers
Dynamic Programming
Control Theory
Institut für Theoretische Volkswirtschaftslehre
Makroökonomik
The No-Ponzi-Game condition
Finite life (time may be continuous or discrete)
Everyone must repay his/her debt, i.e. leave the scene without debt at the terminal point in time.
No-Ponzi-Game condition (NPGC) represents an equilibrium constraint that is imposed on every agent.
Infinite life (time is continuous)
Assume Mr. Ponzi (and his dynasty) wishes to increase consumption today by x€. Consumption
expenditures are financed by borrowing money. Debt repayment as well as interest
payments are financed by increasing indebtedness further. Debt then evolves according to
t       0     1          2           …
debt    x€    x(1+r)€    x(1+r)²€    …
Noting that d(t)=−a(t), the above NPGC may be stated as
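In terms of the debt level d(t) and assets a(t)=−d(t), a standard continuous-time statement of the NPGC reads (a sketch; r denotes the constant interest rate):

```latex
% No-Ponzi-Game condition: debt may grow, but not at (or above) the
% interest rate forever.
\lim_{t\to\infty} e^{-rt}\, d(t) \le 0
\qquad\Longleftrightarrow\qquad
\lim_{t\to\infty} e^{-rt}\, a(t) \ge 0
```

Under the scheme in the table, debt grows at the interest rate, so the present value of debt e^(−rt)d(t) stays equal to x>0 for all t, violating the condition.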
Charles Ponzi (1910)
Charles Ponzi became known in the 1920s as a swindler for his money-making scheme. He promised clients huge profits by buying discounted postal reply coupons in other countries and redeeming them at face value in the US as a form of arbitrage. In reality, Ponzi was paying early investors using the investments of later investors. This type of scheme is now known as a "Ponzi scheme". (Wikipedia, June 3rd 2013)
If Mr. Ponzi increases consumption by x€, financed by employing his innovative financing scheme, debt evolves
according to d(t)=e^(rt)·x€, such that the present value of debt would remain positive; this is precisely what the NPGC excludes.
The Method of Lagrange Multipliers (1)
Consider the problem of maximizing an intertemporal objective function extending over
three periods
where ct denotes the control variable, xt the state variable, and 0<β<1 is the discount factor.
The problem is to maximize the objective function (*) with respect to c₀, c₁, and c₂ subject to
the constraint (**).
This problem can easily be solved by the Method of Lagrange Multipliers. This requires forming
the Lagrangian function. (We focus on interior solutions.)
where we treat x₀ as given and the variables c₀, c₁, c₂, x₁ and x₂ as unknowns.
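A plausible form of the objective (*), the constraint (**), and the Lagrangian, consistent with the unknowns c₀, c₁, c₂, x₁, x₂ and the multipliers λ₁, λ₂, is the following sketch:

```latex
\max_{c_0, c_1, c_2}\; u(c_0) + \beta u(c_1) + \beta^2 u(c_2)
\tag{*}

\text{s.t.}\quad x_{t+1} = f(x_t, c_t), \qquad t = 0, 1
\tag{**}

% One common way to attach the multipliers:
\mathcal{L} = \sum_{t=0}^{2} \beta^t u(c_t)
  + \sum_{t=1}^{2} \lambda_t \big[ f(x_{t-1}, c_{t-1}) - x_t \big]
```

With this convention, differentiating with respect to c₂ yields β²u′(c₂)=0, i.e. the terminal-period FOC discussed on the next slide.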
The Method of Lagrange Multipliers (2)
Differentiating
w.r.t. the unknowns yields
Together with xt+1=f(xt,ct), this system of equations defines c₀, c₁, c₂, x₁, x₂ as well as λ₁ and λ₂.
The FOC u′(c₂)=0 has the following interpretation: for a concave utility function with u′(c)>0 for all c,
it can never be satisfied at an interior point, so it basically says that consumption should be increased
as far as possible. That is, the entire wealth is to be consumed in the terminal period. This implication
can be made explicit by taking boundary conditions into account.
The Method of Lagrange Multipliers (3)
Consider now the following stochastic intertemporal optimization problem
E₀ denotes the expected value given information at t=0 and εt is an i.i.d. random variable with E(εt)=0 and V(εt)=σε².
No-Ponzi Game condition (NPGC) says that a Ponzi-type financing scheme of consumption (increase consumption today by
running into debt and finance repayment and interest payments permanently by further increasing indebtedness) is excluded.
At t=0 the agent decides on c₀, taking x₀ as given. The unknowns at this stage are c₀ and x₁. The
associated Lagrangian function reads
This formulation is in line with the RBC examples (Heer and Maussner, 2004,
p. 37)
It leads to a FOC of the form u′(ct)=λt (this formulation is quite usual).
If consumption today reduces the capital stock today (consumption at the
beginning of period), this formulation is appropriate.
Remark: The definition of the Lagrangian differs from Chow, where it reads
This formulation is in line with Chow (Chapter 2).
It leads to a FOC of the form u′(ct)=λt+1 (this formulation is less usual).
If consumption today reduces the capital stock tomorrow (consumption at the
end of period), this formulation is appropriate.
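As a sketch of the two conventions, assume the transition xt+1 = f(xt,ct) + εt+1 with fc = −1 (consumption reduces the state one for one); the exact expressions are in Heer and Maussner (2004) and Chow:

```latex
% Convention A (Heer/Maussner): attach \beta^t \lambda_t to the period-t
% constraint; consumption reduces the state within the period.
\mathcal{L}^{A} = E_0 \sum_{t} \beta^t \Big\{ u(c_t)
  - \lambda_t \big[ x_{t+1} - f(x_t, c_t) - \varepsilon_{t+1} \big] \Big\}
\;\Rightarrow\; u'(c_t) = \lambda_t

% Convention B (Chow): attach \beta^t \lambda_{t+1} to the same constraint;
% consumption reduces the state of the next period.
\mathcal{L}^{B} = E_0 \sum_{t} \beta^t \Big\{ u(c_t)
  - \lambda_{t+1} \big[ x_{t+1} - f(x_t, c_t) - \varepsilon_{t+1} \big] \Big\}
\;\Rightarrow\; u'(c_t) = \lambda_{t+1}
% (under uncertainty the latter becomes u'(c_t) = E_t[\lambda_{t+1}])
```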
The Method of Lagrange Multipliers (4)
For convenience the Lagrangian for the problem at t=0 is restated
Let us write the relevant parts of the Lagrangian function (i.e. those including c₀ and x₁)
The FOCs ∂ℒ/∂c₀=0 and ∂ℒ/∂x₁=0 read as follows
The Method of Lagrange Multipliers (5)
At t=1 the agent decides on c1, taking x1 as given. The unknowns at this stage are c1 and x2.
The associated Lagrangian function reads
Let us write the relevant parts of the Lagrangian function (i.e. those including c1 and x2)
The FOCs ∂ℒ/∂c₁=0 and ∂ℒ/∂x₂=0 read as follows
Solution strategy: {c₀,c₁,c₂,...} are chosen sequentially, given the information xt at time t (closed-loop policy), rather
than all at once at t=0 (open-loop policy).
The dynamic constraint xt+1=f(xt, ct)+εt+1 and the TVC limt→∞βtEt(λtxt)=0 belong to the set of FOCs.
Because xt is in the information set when ct is to be determined, the expectations are Et and not E0.
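Collecting the pieces, the sequential FOCs imply a stochastic Euler equation of the following form (a sketch, assuming fc = −1 and an interior solution):

```latex
% Combining u'(c_t) = \lambda_t with the costate recursion
% \lambda_t = \beta E_t[\lambda_{t+1} f_x(x_{t+1}, c_{t+1})]:
u'(c_t) = \beta\, E_t\!\big[ u'(c_{t+1})\, f_x(x_{t+1}, c_{t+1}) \big]

% together with the dynamic constraint and the TVC:
x_{t+1} = f(x_t, c_t) + \varepsilon_{t+1}, \qquad
\lim_{t\to\infty} \beta^t E_t\!\big[ \lambda_t x_t \big] = 0
```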
Dynamic Programming: a simple example (1)
Consider an individual living for T periods. The intertemporal utility function is given by
The intertemporal budget constraint reads at+1=(at‐ct)Rt.
The optimal consumption plan {c₀, c₁,..., cT} can be determined by backward induction. At T‐1
the individual has wealth aT‐1 and the maximum utility may be described as follows
The value function VT-1(aT-1) is an indirect utility, i.e. the maximum
attainable utility given wealth aT-1.
The first‐order condition for the problem on the RHS is
Substituting cT‐1=cT‐1(aT‐1) back into equ. (*) gives VT‐1=VT‐1(aT‐1).
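The maximization problem (*) and its FOC can be sketched as follows, assuming that in the final period T the agent consumes all remaining wealth (cT = aT):

```latex
V_{T-1}(a_{T-1})
  = \max_{c_{T-1}} \Big\{ u(c_{T-1})
      + \beta\, u\big( (a_{T-1} - c_{T-1})\, R_{T-1} \big) \Big\}
\tag{*}

% FOC: marginal utility today equals the discounted, interest-augmented
% marginal utility tomorrow
u'(c_{T-1}) = \beta R_{T-1}\, u'\big( (a_{T-1} - c_{T-1})\, R_{T-1} \big)
```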
Dynamic Programming: a simple example (2)
At T‐2 the consumer's maximization problem may be expressed as
This is just like equ. (*) but with "second-period" utility replaced by indirect utility VT‐1(aT‐1); this equation is often
referred to as the Bellman equation.
The first‐order condition for the problem on the RHS is
Substituting cT‐2=cT‐2 (aT‐2) back into the Bellman equation gives VT‐2=VT‐2(aT‐2).
Hence, the entire sequence Vt=Vt(at) for all t∈{0,...,T} can be traced out.
Once we have Vt=Vt(at), optimal ct follows from the first-order conditions
Finally, the time path at for all t∈{0,...,T} results from at+1=(at‐ct)Rt with a₀ given.
Dynamic Programming: a simple example (2a)
As an illustration, consider an agent living for 10 periods who is endowed with initial wealth a₀=1. The subsequent figure
displays the time paths of wealth at and consumption ct, assuming different constellations of R and β.
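As an illustration of the backward-induction logic, here is a minimal Python sketch that generates such paths; it assumes log utility u(c)=ln c and a constant interest factor R, neither of which is stated on the slide, and the illustrative values R=1.05, β=0.95 are not taken from the figure:

```python
# Backward induction for the finite-horizon consumption problem of the
# slides. Assumed here: log utility u(c) = ln(c), a constant gross
# interest factor R, the budget a_{t+1} = (a_t - c_t)*R, and full
# consumption of remaining wealth in the final period.

def consumption_path(a0, R, beta, T=10):
    """Return the wealth and consumption paths for t = 0, ..., T-1."""
    # With log utility the value function is V_t(a) = A_t + B_t*ln(a).
    # Backward induction operates on the coefficients B_t:
    #   B_{T-1} = 1 (consume everything), B_t = 1 + beta*B_{t+1}.
    B = [0.0] * T
    B[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        B[t] = 1.0 + beta * B[t + 1]
    a, c = [a0], []
    for t in range(T):
        c.append(a[t] / B[t])            # optimal policy: c_t = a_t / B_t
        if t < T - 1:
            a.append((a[t] - c[t]) * R)  # budget constraint
    return a, c

# Illustrative parameter values:
a, c = consumption_path(a0=1.0, R=1.05, beta=0.95, T=10)

# The resulting path satisfies the log-utility Euler equation
# u'(c_t) = beta*R*u'(c_{t+1}), i.e. c_{t+1} = beta*R*c_t:
for t in range(9):
    assert abs(c[t + 1] - 0.95 * 1.05 * c[t]) < 1e-12
```

Varying R and β in the call reproduces the different constellations of the figure: βR>1 gives rising consumption, βR<1 falling consumption.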
Dynamic Programming: a more general treatment (1)
The setup
Consider an infinitely-lived, representative agent with intertemporal welfare given by (0<β<1)
The agent is assumed to solve the following problem
Notice that the state variable xt cannot be controlled directly but is under indirect control by choosing ct.
An optimal program is a sequence {ct, ct+1,...} which solves the above stated problem. The value of
an optimal program is denoted by
The value function V(xt) shows the maximum level of intertemporal welfare Ut given the amount of xt.
Dynamic Programming: a more general treatment (2)
Step #1: Bellman equation and first‐order conditions
Since the objective function Ut is additively separable, it may be written as Ut=u(ct)+βUt+1 and
hence the maximization problem may be stated as
This is the Bellman equation. The problem with potentially infinitely many control variables was broken down into
many problems with one control variable.
Notice that, compared to Ut=u(ct)+βUt+1, Ut+1 is replaced by V(xt+1). This says that we assume that the optimal program
for tomorrow is solved and we worry about the maximization problem for today only.
The first‐order condition for the problem on the RHS of Bellman equation, noting xt+1=f(xt,ct), is
The benefit of an increase in ct consists in higher utility today, which is reflected here by marginal utility u′(ct). The cost consists in lower overall utility - the value function V(.) - tomorrow.
The reduction in overall utility amounts to the change in xt+1, i.e. the derivative fc, times the marginal value of xt+1, i.e. V′(xt+1). As the disadvantage arises only tomorrow, it is discounted by the factor β.
Notice that this FOC, together with xt+1=f(xt,ct), implicitly defines ct = ct(xt). As we know very little about the properties
of V(.) at this stage, however, we need to go through two further steps in order to eliminate V(.) from this FOC and
obtain a condition that involves only functions whose properties (such as the signs of first and second derivatives) are known.
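In equations, the Bellman equation and the FOC just described may be sketched as follows (signs chosen to match the FOC u′(ct)=−βV′(xt+1)fc quoted in Step #2):

```latex
V(x_t) = \max_{c_t} \big\{ u(c_t) + \beta V(x_{t+1}) \big\}
\quad\text{s.t.}\quad x_{t+1} = f(x_t, c_t)

% FOC with respect to c_t:
u'(c_t) + \beta V'(x_{t+1})\, f_c(x_t, c_t) = 0
```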
Dynamic Programming: a more general treatment (3)
Step #2: Evolution of the costate variable V′(xt)
At first we set up the maximized Bellman equation. This is obtained by replacing the control variable in
the Bellman equation by the optimal level of the control variable according to the FOC, i.e. ct=ct(xt)
The derivative w.r.t. xt gives
Inserting the first-order condition u′(ct)=‐βV′(xt+1)fc gives
This equation is a difference equation in the costate variable V′(xt). The costate variable is also called the shadow price of the state variable xt.
It says how much an additional unit of the state variable (e.g. of wealth) is valued: As V(xt) gives the value of optimal
behavior between t and the end of the planning horizon, V′(xt) says by how much this value changes when xt is
changed marginally.
This equation describes how the shadow price of the state variable changes over time when the agent behaves optimally.
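Step #2 in equations (a sketch; the last line is the difference equation (***) in the costate variable):

```latex
% Maximized Bellman equation, with c_t = c(x_t) from the FOC:
V(x_t) = u\big(c(x_t)\big) + \beta V\big(f(x_t, c(x_t))\big)

% Differentiate with respect to x_t:
V'(x_t) = u'(c_t)\, c'(x_t)
  + \beta V'(x_{t+1}) \big[ f_x(x_t, c_t) + f_c(x_t, c_t)\, c'(x_t) \big]

% Inserting the FOC u'(c_t) = -\beta V'(x_{t+1}) f_c, the c'(x_t) terms
% cancel (envelope theorem):
V'(x_t) = \beta V'(x_{t+1})\, f_x(x_t, c_t)
\tag{***}
```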
Dynamic Programming: a more general treatment (4)
Step #3: Euler equation
Noting the FOC (**) one may write V′(xt)=‐u′(ct‐1)/(βfc) and V′(xt+1)=‐u′(ct)/(βfc). Inserting into equ. (***)
gives
This is the famous Euler equation.
It represents a difference equation (DE) in marginal utility, which can be transformed into a DE in ct.
If, for instance, the utility function is of the usual CIES shape, then the Euler equation implies
The dynamic evolution is then determined by this equation together with xt+1=f(xt,ct)
Boundary conditions read: x₀ given and a terminal condition c∞=c̄ (convergence to the steady state).
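A sketch of the Euler equation and its CIES form (treating fc as constant, e.g. f(x,c)=g(x)−c, and using u(c)=c^(1−θ)/(1−θ) with θ>0 as the "usual CIES shape"):

```latex
% Euler equation in marginal utility:
u'(c_t) = \beta\, f_x(x_{t+1}, c_{t+1})\, u'(c_{t+1})

% With CIES utility, u'(c) = c^{-\theta}, this becomes a difference
% equation in c_t:
\frac{c_{t+1}}{c_t} = \big[ \beta\, f_x(x_{t+1}, c_{t+1}) \big]^{1/\theta}
```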
Control Theory: Maximum Principle (1)
The dynamic problem under consideration can be stated as follows
I(.), F(.), and f(.) are continuously differentiable functions. x is an n-dimensional vector of state variables
and u is an r-dimensional vector of control variables (n⋛r).
The control trajectory u(t) ∀ t∈[t₀,T] must belong to a given control set U, and must be a piecewise
continuous function of time.
The maximum principle can be considered as an extension of the method of Lagrange multipliers
to dynamic optimization problems. To keep the exposition simple, it is assumed that the control
trajectory u(t) is unconstrained.
Control Theory: Maximum Principle (2)
Recall that the Method of Lagrange Multipliers requires first introducing Lagrange multipliers and then
setting up the Lagrangian function. Subsequently, the saddle point of this function (maximizing w.r.t. the
choice variables and minimizing w.r.t. the Lagrange multipliers) is determined.
At first, we set up a (row) vector of so-called costate variables, which are the dynamic equivalent of the
Lagrange multipliers
Next we set up the Lagrangian function
By analogy to the static case, a saddle point of the Lagrangian would yield the solution. Here, however,
the saddle point is in the space of functions, where (u*(t),λ*(t)) represent a saddle point if
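A sketch of the costate vector, the Lagrangian (*), and the saddle-point condition (following the usual continuous-time convention of adjoining the state equation ẋ = f(x,u,t)):

```latex
\lambda(t) = \big( \lambda_1(t), \ldots, \lambda_n(t) \big)

L = \int_{t_0}^{T} \Big\{ F\big(x(t), u(t), t\big)
    + \lambda(t)\,\big[ f\big(x(t), u(t), t\big) - \dot{x}(t) \big] \Big\}\, dt
\tag{*}

% (u^*(t), \lambda^*(t)) is a saddle point if, for all admissible
% trajectories u(t) and \lambda(t):
L\big(u(t), \lambda^*(t)\big) \le L\big(u^*(t), \lambda^*(t)\big)
  \le L\big(u^*(t), \lambda(t)\big)
```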
Control Theory: Maximum Principle (3)
We now consider the necessary conditions for a saddle point of (*). A change in the costate variable trajectory from λ(t) to λ(t)+Δλ(t), where Δλ(t) is any continuous function of time, would change the Lagrangian to
Hence, one set of first‐order necessary conditions, resulting from ΔL=0, reads as follows
To develop the remaining necessary first‐order conditions, we first rewrite (*) as follows
The preceding equation may be expressed as
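The rewriting is integration by parts of the −∫λẋ dt term; a sketch:

```latex
-\int_{t_0}^{T} \lambda(t)\, \dot{x}(t)\, dt
  = -\big[ \lambda(t)\, x(t) \big]_{t_0}^{T}
    + \int_{t_0}^{T} \dot{\lambda}(t)\, x(t)\, dt

% so that the Lagrangian becomes
L = \int_{t_0}^{T} \Big\{ F(x,u,t) + \lambda\, f(x,u,t)
    + \dot{\lambda}\, x \Big\}\, dt
    - \lambda(T)\, x(T) + \lambda(t_0)\, x(t_0)
\tag{**}
```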
Control Theory: Maximum Principle (4)
We now define the following function, labeled the Hamiltonian function. Equ. (**) may then be written as
Now consider the effect of a change in the control trajectory from u(t) to u(t)+Δu(t), accompanied by a
corresponding change in the state trajectory from x(t) to x(t)+Δx(t), which yields
For a maximum it is necessary that ΔL=0 implying that
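Collecting terms, the necessary conditions of the maximum principle may be sketched as follows (interior solution, x(T) free):

```latex
H(x, u, \lambda, t) = F(x, u, t) + \lambda\, f(x, u, t)

\frac{\partial H}{\partial u} = 0, \qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
\dot{x} = f(x, u, t) = \frac{\partial H}{\partial \lambda}, \qquad
\lambda(T) = 0
```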
Control Theory: Maximum Principle (5)
Comments
To check for a maximum, sufficiency conditions must be considered:
provided that the Hamiltonian is jointly concave in the control and the state variable (Mangasarian
sufficiency conditions), or
provided that the maximized Hamiltonian is concave in the state variable (Arrow sufficiency conditions),
the necessary conditions are also sufficient (Kamien and Schwartz, 1981, part II, sections 3 and 15).
The costate variables λ have the following interpretation: ∂J*/∂x(t₀)=λ*(t₀). That is, the (initial) costate
variables give the change in the optimal value of the objective functional due to changes in the corresponding
initial state variables. This is analogous to the interpretation of the static Lagrange multipliers.
In the context of growth models, the (current-value) Hamiltonian can be viewed as net national product in
utility terms (Solow, 2000, p. 127). This can be seen more clearly by writing the Hamiltonian as H=u(C)+λI(C),
where I denotes investment and I(C) indicates that investment depends negatively on consumption. Note that
the shadow price λ has the dimension "utility per unit of numeraire good".
From this statement it seems plausible that a first-order necessary condition reads ∂H/∂C=0. Moreover, the
costate equation, λ̇=ρλ−∂H/∂K, can be explained as an arbitrage condition or Fisher equation (Solow, 2001, p.
160). For this purpose rewrite the costate equation as λ̇+∂H/∂K=ρλ. The LHS shows the (marginal) benefit
that can be obtained by devoting one unit of output to investment. This comprises an increase in "income in
utility terms", ∂H/∂K, plus the change in the value of capital, λ̇. The RHS gives the opportunity cost of
devoting one unit of output to investment rather than consumption. Notice that λ̇=ρλ−∂H/∂K implies
λ(0)=∫₀^∞ (∂H/∂K) e^(−ρt) dt (provided that an appropriate boundary condition holds), which says that capital
is valued according to fundamentals.