Portfolio Allocation
Ali Lazrak
University of British Columbia
Sauder School of Business
CIMPA UNESCO MOROCCO School
Marrakesh
April 2007
Outline
Merton
  Portfolio allocation models
  The canonical Merton model
Limitation of the Merton predictions
  Clash with the data
Solutions
  Learning
  Predictability in returns
  Consumption habit
A toy model
One period: t = 0 and t = T.
Complete markets and no arbitrage.
N "states" with probabilities p_j.
N - 1 risky assets and 1 riskless asset (N assets in total).
Asset i = 1, ..., N - 1: price S_i(0) and dividend (D_{i1}, D_{i2}, ..., D_{iN}).
Asset i = 0 is risk free and its payoff is D_{0j} = e^{rT} for all j.
The investor/consumer consumes c_0 and c_T.
A toy model
Pricing kernel
State price density \psi = (\psi_1, ..., \psi_N) \in R^N with positive components, \psi_j > 0 for all j = 1, ..., N, such that

  S_i(0) = \sum_{j=1}^N \psi_j D_{ij}, \quad i = 0, 1, ..., N-1.

The risk neutral probability measure Q = (q_j)_{j=1,...,N}:

  q_j = \frac{\psi_j}{\sum_{j=1}^N \psi_j} = \psi_j e^{rT}.

Under Q, the expected return on any asset i is equal to the risk free rate:

  S_i(0) = \sum_{j=1}^N q_j e^{-rT} D_{ij} = E^Q\left[e^{-rT} D_i\right].
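As a quick illustration (a hypothetical two-state market, numbers not from the slides), the state prices \psi and the risk neutral probabilities q can be read off the prices by inverting the payoff matrix:

```python
import numpy as np

# Hypothetical two-state market (illustrative numbers, not from the
# slides): one riskless and one risky asset, so the payoff matrix is
# square and invertible.
r, T = 0.05, 1.0
D = np.array([[np.exp(r * T), np.exp(r * T)],   # riskless: D_0j = e^{rT}
              [1.6, 0.7]])                      # risky dividends per state
S0 = np.array([1.0, 1.0])                       # time-0 prices S_i(0)

# State prices solve S_i(0) = sum_j psi_j D_ij, i.e. D @ psi = S0.
psi = np.linalg.solve(D, S0)
assert (psi > 0).all()        # positive state prices <=> no arbitrage

q = psi * np.exp(r * T)       # risk neutral probabilities q_j = psi_j e^{rT}
print("psi =", psi, "q =", q, "sum(q) =", q.sum())   # q sums to 1

# Under Q every asset earns the riskless rate: discounting recovers S0.
print(np.exp(-r * T) * D @ q)
```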
A toy model
Strategies
Portfolio position: \varphi = (\varphi_0, ..., \varphi_{N-1}) (amounts invested in each asset).
Consumption (c_0, c_T) \in D = (0, \infty) \times (0, \infty)^N.
(c_0, c_T) \in \Lambda(x) is feasible for the initial wealth x > 0:

  c_0 = x - \sum_{i=0}^{N-1} \varphi_i,                    (1)

  c_T = \sum_{i=0}^{N-1} \frac{\varphi_i}{S_i(0)} D_i.     (2)

Proposition: \Lambda(x) = \{(c_0, c_T) \in D \mid x = E^Q(c_0 + e^{-rT} c_T)\}.
A toy model
Preferences
Time additive and exponential discount:

  J(c_0, c_T) = u(c_0) + e^{-\delta T} E(u(c_T)),

u increasing and concave, \delta > 0 (around 5% in the data).
4 economic forces: consumption smoothing (over time/states), preference for immediate consumption, asset substitution and income effect!
A toy model
The allocation problem: primal and dual formulation
The allocation problem:

  V(x) = \sup_{\varphi} J(c_0, c_T)   (3)
  s.t. (1)-(2) hold.

The dual formulation:

  V(x) = \sup_{(c_0, c_T) \in D} J(c_0, c_T)   (4)
  s.t. x = E^Q(c_0 + e^{-rT} c_T).

Principle of optimality: along an optimal path, we must be marginally indifferent.
A toy model
The primal approach
First order conditions:

  u'(c_0) = e^{-\delta T} e^{rT} E(u'(c_T)),

  S_i(0) u'(c_0) = e^{-\delta T} E(u'(c_T) D_i), \quad i = 1, ..., N-1.

Indifference between saving/investing.
The pricing kernel can be retrieved from the optimal choice:

  \frac{e^{-\delta T} u'(c_T(\omega_i))}{u'(c_0)} = \frac{\psi_i}{p_i}.
A toy model
The dual approach
Get rid of the portfolio strategies:

  u'(c_0) = \lambda,

  u'(c_T(\omega_i)) = \frac{\lambda \psi_i}{e^{-\delta T} p_i}, \quad i = 1, ..., N.

Recover the portfolio strategies by duplicating the optimal consumption path.
Example: u(c) = \log(c):

  c_0 = \frac{x}{1 + e^{-\delta T}}, \qquad c_T = x\, \frac{e^{-\delta T}}{1 + e^{-\delta T}}\, \frac{p}{\psi}.
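A minimal numeric check of the log-utility example (two-state numbers are hypothetical, reusing the state prices of the earlier sketch): the closed form satisfies the static budget constraint and the dual FOC state by state:

```python
import numpy as np

# Two-state check of the log-utility dual solution (hypothetical numbers).
delta, r, T, x = 0.05, 0.05, 1.0, 10.0
p = np.array([0.5, 0.5])                # physical probabilities p_i
psi = np.array([0.37127, 0.57996])      # state prices, sum = e^{-rT}

c0 = x / (1 + np.exp(-delta * T))
cT = x * np.exp(-delta * T) / (1 + np.exp(-delta * T)) * p / psi

# Static budget constraint x = c_0 + sum_i psi_i c_T(omega_i):
print(c0 + psi @ cT)                    # ~ 10.0

# Dual FOC u'(c_T(omega_i)) = lambda psi_i / (e^{-delta T} p_i), lambda = 1/c_0:
lam = 1 / c0
print(1 / cT, lam * psi / (np.exp(-delta * T) * p))   # equal state by state
```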
Continuous time model
The market
One risky asset:

  dS_1(t) = S_1(t) \left[\mu_1(t) dt + \sigma_1(t) dz_{1t}\right], \quad S_1(0) given.

The risk free asset:

  dS_0(t) = r_t S_0(t) dt, \quad S_0(0) = 1.

\mu_1, \sigma_1, r are progressively measurable.
Market price of risk:

  \theta_t = \sigma_1^{-1}(t) (\mu_1(t) - r_t).

Notation:

  \beta(t) = \exp\left(-\int_0^t r_s\, ds\right).
Continuous time model
The stochastic discount factor
The risk neutral measure:

  \frac{dQ}{dP}\Big|_{\mathcal{F}_t} = \zeta_t = \exp\left(-\frac{1}{2}\int_0^t \|\theta_s\|^2 ds - \int_0^t \theta_s\, dz_s\right).

Girsanov Theorem: the process

  z_t^Q = z_t + \int_0^t \theta_s\, ds

is a Brownian motion under Q.
The stochastic discount factor:

  M_t = \beta(t)\, \zeta_t.

M is a pricing kernel:

  S_1(t) = E_t\left[\frac{M_s}{M_t} S_1(s)\right], \quad t \le s.
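A Monte Carlo sanity check of the pricing-kernel property, in the special case of constant coefficients (an assumption made here only to simulate): E[M_T S_1(T)] should return S_1(0):

```python
import numpy as np

# Monte Carlo check that M deflates the risky asset to a martingale,
# assuming constant mu, sigma, r (illustrative special case).
rng = np.random.default_rng(0)
mu, sigma, r, T, S0 = 0.09, 0.2, 0.03, 1.0, 100.0
theta = (mu - r) / sigma                      # market price of risk
n = 1_000_000
z = rng.standard_normal(n) * np.sqrt(T)       # z_T

ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * z)
zeta = np.exp(-0.5 * theta**2 * T - theta * z)    # dQ/dP on F_T
MT = np.exp(-r * T) * zeta                        # M_T = beta(T) zeta_T

print((MT * ST).mean())     # ~ 100 = S_1(0), up to Monte Carlo error
```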
Continuous time model
Strategies
Consumption plan: (c, Z) and portfolio (\varphi_0, \varphi_1).
(c, Z) \in \Lambda(x) is feasible for the initial wealth x if

  X(t) = \sum_{i=0}^{1} \varphi_i(t) = x + \sum_{i=0}^{1} \int_0^t \frac{\varphi_i(s)}{S_i(s)}\, dS_i(s) - \int_0^t c_s\, ds,

  X(T) = \sum_{i=0}^{1} \varphi_i(T) = Z.

The wealth obeys

  dX(t) = (r_t X(t) + \varphi_1(t)\sigma_1(t)\theta_t - c_t)\, dt + \varphi_1(t)\sigma_1(t)\, dz_t, \quad X(0) = x.

Proposition: \Lambda(x) = \left\{(c, Z) : E\left[\int_0^T M_t c_t\, dt + M_T Z\right] = x\right\}.
Continuous time model
Time additive utility
  J(c, Z) = E\left[\int_0^T e^{-\delta t} u(c_t)\, dt + e^{-\delta T} u(Z)\right].

CRRA utility:

  u(c) = \frac{c^{1-\gamma}}{1-\gamma}, \quad \gamma > 0.
Continuous time model
The allocation problem
Martingale approach
  V(x) = \sup_{(c,Z) \in \Lambda(x)} J(c, Z) = \sup_{(c,Z) \in \Lambda(x)} E\left[\int_0^T e^{-\delta t} u(c_t)\, dt + e^{-\delta T} u(Z)\right].

Primal problem:

  V(x) = \sup_{c, \varphi} J(c, X(T)) = \sup_{c, \varphi} E\left[\int_0^T e^{-\delta t} u(c_t)\, dt + e^{-\delta T} u(X(T))\right]
  s.t. dX(t) = (r_t X(t) + \varphi_1(t)\sigma_1(t)\theta_t - c_t)\, dt + \varphi_1(t)\sigma_1(t)\, dz_t, \quad X(0) = x.
Continuous time model
Dual approach: a systematic method
FOC:

  e^{-\delta t} u'(c_t) = \lambda M_t, \qquad e^{-\delta T} u'(Z) = \lambda M_T,

so, with I = (u')^{-1},

  c_t = I(\lambda M_t e^{\delta t}), \qquad Z = I(\lambda M_T e^{\delta T}).

Compute \lambda and then X(t); the portfolio \varphi_1 is pinned down by identification.
Continuous time model
The primal approach
Time consistency enforces the dynamic programming equation

  V(t, x) = \sup_{c, \varphi} E_t\left[\int_t^u e^{-\delta(v-t)} u(c_v)\, dv + e^{-\delta(u-t)} V(u, X(u))\right].

Assuming r, \mu, \sigma are functions of a factor f satisfying df(t) = m(f(t)) dt + \eta(f(t)) dz_t, we get the HJB equation

  \delta V = \frac{\partial V}{\partial t} + \sup_{c, \varphi}\left[ u(c) + \frac{\partial V}{\partial x}(rx + \varphi\sigma\theta - c) + \frac{\partial V}{\partial f} m + \frac{1}{2}\frac{\partial^2 V}{\partial x^2}\varphi^2\sigma^2 + \frac{1}{2}\frac{\partial^2 V}{\partial f^2}\eta^2 + \frac{\partial^2 V}{\partial x \partial f}\varphi\sigma\eta \right].
FOC:

  u'(c) = V_x,

and

  \varphi = -\frac{V_x}{V_{xx}} (\sigma')^{-1}\theta - \frac{V_{xf}}{V_{xx}} (\sigma')^{-1}\eta.

Merton mutual fund separation.
Continuous time model
An example
r, \mu, \sigma constant (constant investment opportunity set), CRRA preference.
Merton solution:

  \nu = \frac{1}{\gamma}\left[\delta - r(1-\gamma) - \frac{1-\gamma}{2\gamma}\theta^2\right],

  V(t, x) = \left[\frac{1 + (\nu - 1)e^{-\nu(T-t)}}{\nu}\right]^{\gamma} \frac{x^{1-\gamma}}{1-\gamma},

  c(t, x) = \frac{\nu}{1 + (\nu - 1)e^{-\nu(T-t)}}\, x,

  \varphi(t, x) = \frac{1}{\gamma} (\sigma)^{-1}\theta\, x.
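A numeric reading of the closed form (illustrative parameters): the risky position is a constant fraction \theta/(\gamma\sigma) of wealth, while the consumption rate depends on the horizon through \nu:

```python
import numpy as np

# Merton closed form with illustrative parameters.
mu, sigma, r, delta, gamma, T, x = 0.09, 0.2, 0.03, 0.05, 2.0, 10.0, 100.0
theta = (mu - r) / sigma

nu = (delta - r * (1 - gamma) - (1 - gamma) / (2 * gamma) * theta**2) / gamma

def consumption(t, wealth):
    # c(t,x) = nu x / (1 + (nu - 1) e^{-nu (T - t)})
    return nu * wealth / (1 + (nu - 1) * np.exp(-nu * (T - t)))

def risky_position(wealth):
    # phi(t,x) = (1/gamma) sigma^{-1} theta x: a constant proportion
    return theta / (gamma * sigma) * wealth

print("nu =", nu)
print("c(0,x) =", consumption(0, x))
print("phi/x =", risky_position(x) / x)   # constant fraction in the risky asset
```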
Continuous time model
Predictions
Constant proportion policy
We have to look at the data.
Economists have proposed many extensions:
Labor income
Taxes and home bias
Non iid returns
Habit model
Solutions
Habit: Detemple, Zapatero (1991), Econometrica.
Learning: Cvitanic, Lazrak, Martellini, Zapatero (2006), Review of Financial Studies.
Predictability: Wachter (2002), Journal of Financial and Quantitative Analysis.
Hyperbolic discount: Ekeland, Lazrak (2007), working paper, UBC (Ekeland's website).
Habit formation
A toy model
One period deterministic
Benchmark: \sup_{c_0} u(c_0) + u(x - c_0): perfect smoothing, c_0 = c_1 = x/2.
Internal habit: \sup_{c_0} u(c_0) + u(x - c_0 - \rho c_0).
Perfect smoothing is too costly to sustain!
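A minimal sketch of the one-period habit trade-off, assuming u = log (the slide leaves u generic): the optimizer tilts consumption toward the second date, c_0 < c_1:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# One-period internal-habit toy model, assuming u = log.
x, rho = 1.0, 0.5

def neg_obj(c0):
    # second-period felicity is u(c1 - rho*c0) with c1 = x - c0
    return -(np.log(c0) + np.log(x - c0 - rho * c0))

res = minimize_scalar(neg_obj, bounds=(1e-6, x / (1 + rho) - 1e-6),
                      method="bounded")
c0 = res.x                       # closed form: c0 = x / (2 (1 + rho))
c1 = x - c0
print(c0, x / (2 * (1 + rho)))   # both ~ 0.3333
print(c0 < c1)                   # True: consumption rises over time
```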
Habit formation
Continuous time model
  dS(t) = \mu S(t)\, dt + \sigma S(t)\, dz,

  \psi_t = e^{-rt - \frac{\theta^2}{2}t - \theta z_t} \equiv M_t e^{-rt}, \qquad \theta = \frac{\mu - r}{\sigma},

  U(c, y) = E\left[\int_0^T e^{-\delta t} u(c(t), y_t)\, dt\right],

  dy_t = (\rho c(t) - \alpha y_t)\, dt, \quad y_0 = y,

so that

  y_t = y e^{-\alpha t} + \rho \int_0^t e^{-\alpha(t-s)} c(s)\, ds.
Habit formation
The allocation problem
  V(0, x, y) = \sup_{c, \varphi} U(c, y) = \sup_{c, \varphi} E\left[\int_0^T e^{-\delta t} u(c(t), y_t)\, dt\right]
  s.t. dX(t) = (r X(t) + \varphi(t)(\mu - r) - c(t))\, dt + \varphi(t)\sigma\, dz, \quad X(0) = x.

Equivalently (martingale formulation):

  V(0, x, y) = \sup_{c} U(c, y) = \sup_{c} E\left[\int_0^T e^{-\delta t} u(c(t), y_t)\, dt\right]
  s.t. E\left[\int_0^T \psi_t c(t)\, dt\right] = x.
Habit formation
Computation
The gradient of the utility functional is

  \nabla U(c, y)(t) = e^{-\delta t}\left[u_c(c(t), y_t) + \rho\, E_t \int_t^T e^{-(\delta+\alpha)(\tau-t)} u_y(c(\tau), y_\tau)\, d\tau\right].

First order condition:

  \nabla U(c, y)(t) = \lambda \psi_t.

The problem can be solved under the additional assumption

  u(c, y) = v(c - y) = \frac{(c-y)^{1-\gamma}}{1-\gamma}, \quad \gamma > 0.
Habit formation
Computation
Define \nu_t = v'(c_t - y_t). We have

  \nu_t = \lambda e^{\delta t} \psi_t \left[1 + \frac{\rho}{r+\alpha-\rho}\left(1 - e^{-(r+\alpha-\rho)(T-t)}\right)\right],

  c_t = y_t + [\lambda M_t]^{-1/\gamma}\, \phi(t),

where

  \phi(t) = \left[1 + \frac{\rho}{r+\alpha-\rho}\left(1 - e^{-(r+\alpha-\rho)(T-t)}\right)\right]^{-1/\gamma} e^{-\frac{1}{\gamma}(\delta - r)t}.
Integrating dy yields

  y_t = y_0 e^{(\rho-\alpha)t} + \rho \lambda^{-1/\gamma} \int_0^t e^{(\rho-\alpha)(t-s)} \phi(s) M_s^{-1/\gamma}\, ds,

  c(t) = y_0 e^{(\rho-\alpha)t} + \rho \lambda^{-1/\gamma} \int_0^t e^{(\rho-\alpha)(t-s)} \phi(s) M_s^{-1/\gamma}\, ds + [\lambda M_t]^{-1/\gamma}\, \phi(t).

What is \lambda?
For s > t, we have

  y_s = e^{(\rho-\alpha)(s-t)} y_t + \rho \lambda^{-1/\gamma} \int_t^s e^{(\rho-\alpha)(s-\tau)} \phi(\tau) M_\tau^{-1/\gamma}\, d\tau.
Budget constraint:

  X(t) = E_t\left[\int_t^T \frac{\psi_s}{\psi_t}\, c(s)\, ds\right].

We have

  X(t) = y_t \int_t^T e^{-(r+\alpha-\rho)(s-t)}\, ds + \lambda^{-1/\gamma} M_t^{-1/\gamma} \left(A_1(t, T) + \rho A_2(t, T)\right),

where

  A_1(t, T) = \int_t^T e^{-r(s-t)} \phi(s)\, g_{1-\frac{1}{\gamma}}(s-t)\, ds,

  A_2(t, T) = \int_t^T e^{-r(s-t)} \left[\int_t^s e^{(\rho-\alpha)(s-\tau)} \phi(\tau)\, g_{1-\frac{1}{\gamma}}(\tau-t)\, d\tau\right] ds,

  g_p(t) = E(M_t^p) \equiv e^{\frac{1}{2}\theta^2 p(p-1)t}, \quad t > 0,\ p \neq 0.
Habit formation
Optimal policies
  X(t) - y_t \int_t^T e^{-(r+\alpha-\rho)(s-t)}\, ds = M_t^{-1/\gamma} \lambda^{-1/\gamma} \left[A_1(t, T) + \rho A_2(t, T)\right],

so that

  \lambda^{-1/\gamma} = \frac{x - y_0 \int_0^T e^{-(r+\alpha-\rho)s}\, ds}{A_1(0, T) + \rho A_2(0, T)}.

Optimal policies:

  \varphi(t) = \frac{1}{\gamma}\frac{\theta}{\sigma}\left(X_t - y_t \int_t^T e^{-(r+\alpha-\rho)(s-t)}\, ds\right),

  c(t) = y_t + \frac{X_t - y_t \int_t^T e^{-(r+\alpha-\rho)(s-t)}\, ds}{A_1(t, T) + \rho A_2(t, T)}\, \phi(t).
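A numeric sketch of these formulas (illustrative parameters; quadrature replaces the closed-form integrals):

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Habit-model coefficients, symbols as on the slides (illustrative values).
r, alpha, rho, delta, gamma, theta, T = 0.03, 0.3, 0.2, 0.05, 2.0, 0.3, 10.0
x, y0 = 100.0, 1.0
k = r + alpha - rho

def phi(t):
    # phi(t) = [1 + rho/k (1 - e^{-k(T-t)})]^{-1/gamma} e^{-(delta-r)t/gamma}
    return (1 + rho / k * (1 - np.exp(-k * (T - t)))) ** (-1 / gamma) \
           * np.exp(-(delta - r) * t / gamma)

def g(p, t):
    # g_p(t) = E[M_t^p] = e^{theta^2 p (p-1) t / 2}
    return np.exp(0.5 * theta**2 * p * (p - 1) * t)

p = 1 - 1 / gamma

def A1(t):
    return quad(lambda s: np.exp(-r * (s - t)) * phi(s) * g(p, s - t), t, T)[0]

def A2(t):
    inner = lambda tau, s: (np.exp(-r * (s - t))
                            * np.exp((rho - alpha) * (s - tau))
                            * phi(tau) * g(p, tau - t))
    return dblquad(inner, t, T, lambda s: t, lambda s: s)[0]

lam_pow = (x - y0 * quad(lambda s: np.exp(-k * s), 0, T)[0]) \
          / (A1(0) + rho * A2(0))
print("lambda^{-1/gamma} =", lam_pow)
print("c(0) =", y0 + lam_pow * phi(0))   # since M_0 = 1
```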
Learning
A toy example
2 period model: t = 0, 1, 2. Initial wealth: x.
Risk free asset: r_f = 0.
Risky asset returns: \tilde r_0, \tilde r_1.
Benchmark case: \tilde r_0, \tilde r_1 iid.
Sequential portfolio problem:

  v(z) = \max_{w_1} E_1\left(u(z(1 + w_1 \tilde r_1))\right),

  w_0 = \arg\max_w E\left(v(x(1 + w \tilde r_0))\right),

  u(c) = \frac{c^{1-\gamma}}{1-\gamma}.
In the iid case, the portfolio is w_0 = w_1: flat allocation.
Example:

  \tilde r_0, \tilde r_1 \sim \begin{cases} \bar r & \text{with probability } p \\ -\underline r & \text{with probability } 1-p \end{cases}

If p = 1/2, \bar r = 4, \underline r = 1, \gamma = 2, then

  w_0 = w_1 = \frac{\underline r^{-1/\gamma} - \bar r^{-1/\gamma}}{\bar r^{1-1/\gamma} + \underline r^{1-1/\gamma}} = \frac{1}{6} \approx 16\%.
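The 16% figure can be checked by solving the first order condition E[\tilde r (1 + w \tilde r)^{-\gamma}] = 0 numerically:

```python
from scipy.optimize import brentq

# FOC for the iid toy example: E[r (1 + w r)^{-gamma}] = 0,
# with r in {+4, -1} each with probability 1/2 and gamma = 2.
gamma, p, up, dn = 2.0, 0.5, 4.0, -1.0

def foc(w):
    return p * up * (1 + w * up) ** -gamma + (1 - p) * dn * (1 + w * dn) ** -gamma

w = brentq(foc, 1e-6, 0.99)
print(w, 1 / 6)   # both ~ 0.1667: about 16% in the risky asset
```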
Learning
A toy example with learning
A box with 10 urns: 5 are good and 5 are bad.
Prior distribution:

  \tilde r_0 \sim \{\bar r\ [0.5];\ -\underline r\ [0.5]\},

  \tilde r_1 \mid \tilde r_0 = \bar r \sim \{\bar r\ [0.75];\ -\underline r\ [0.25]\},

  \tilde r_1 \mid \tilde r_0 = -\underline r \sim \{\bar r\ [0.25];\ -\underline r\ [0.75]\}.

Solution: w_1^u = 33%, w_1^d = 3%, w_0 = 12.7%.
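A backward-induction sketch, assuming the payoffs of the iid slide (\bar r = 4, \underline r = 1, \gamma = 2): it reproduces the conditional weights w_1^u \approx 33% and w_1^d \approx 3% and computes the time-0 weight numerically (the exact w_0 figure depends on how the urn prior is parameterized). Swapping the two conditional distributions gives the predictability example later in the deck:

```python
from scipy.optimize import minimize_scalar

# Two-period urn example by backward induction. Payoffs assumed from the
# iid slide: up = +4, down = -1, gamma = 2 (so u(c) = -1/c).
gamma, up, dn = 2.0, 4.0, -1.0

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

def best(p_up, values=(1.0, 1.0)):
    """Optimal weight and value when the up state has probability p_up;
    `values` are multiplicative continuation weights on next-period utility."""
    vu, vd = values
    obj = lambda w: -(p_up * vu * u(1 + w * up) + (1 - p_up) * vd * u(1 + w * dn))
    res = minimize_scalar(obj, bounds=(0.0, 0.999), method="bounded")
    return res.x, -res.fun

# Time-1 problems, conditional on the first draw:
w1u, Ju = best(0.75)     # after an up move:  w1u ~ 0.33
w1d, Jd = best(0.25)     # after a down move: w1d ~ 0.03

# Time-0 problem: with CRRA, wealth factors out of v(z) = z^{1-gamma} J,
# and z^{1-gamma} = (1-gamma) u(z), so continuation values enter as the
# multiplicative weights (1-gamma) J on next-period utility.
w0, _ = best(0.5, values=((1 - gamma) * Ju, (1 - gamma) * Jd))
print(w1u, w1d, w0)
```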
Learning
A Continuous time example
  dS_1(t) = \mu_1 S_1(t)\, dt + \sigma_1 S_1(t)\, dz_1, \qquad \mu_1 \sim N(m_1, v_1).

The filter m_1(t) = E[\mu_1 \mid \mathcal{F}_t^S] is

  m_1(t) = \frac{\sigma_1^2}{\sigma_1^2 + v_1 t}\, m_1 + \frac{v_1 t}{\sigma_1^2 + v_1 t}\, \bar S_1(t),

where

  \bar S_1(t) = \frac{1}{t} \ln\frac{S_1(t)}{S_1(0)} + \frac{1}{2}\sigma_1^2, \qquad v_1(t) = \frac{\sigma_1^2 v_1}{\sigma_1^2 + v_1 t}.

Under the observation filtration,

  dS_1(t) = m_1(t) S_1(t)\, dt + \sigma_1 S_1(t)\, d\bar z_1.
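A simulation sketch of the filter (illustrative parameters): the posterior mean is a precision-weighted average of the prior mean and the realized mean log-return corrected by \sigma^2/2:

```python
import numpy as np

# Simulating the Bayesian filter m_1(t) for an unknown constant drift
# with a normal prior (illustrative parameters).
rng = np.random.default_rng(1)
m1, v1, sigma, T, n = 0.08, 0.02**2, 0.2, 5.0, 100_000
dt = T / n

mu = rng.normal(m1, np.sqrt(v1))     # the true (unobserved) drift
z = rng.standard_normal(n) * np.sqrt(dt)
logS = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * z)   # ln S(t)/S(0)

t = dt * np.arange(1, n + 1)
Sbar = logS / t + 0.5 * sigma**2     # \bar S_1(t)
m1t = (sigma**2 * m1 + v1 * t * Sbar) / (sigma**2 + v1 * t)

print("true mu =", mu, " m1(T) =", m1t[-1])
print("posterior var v1(T) =", sigma**2 * v1 / (sigma**2 + v1 * T))
```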
Learning
The allocation problem
  V(x, m_1) = \sup_{\varphi_1} J(X(T)) = \sup_{\varphi_1} E\left[\frac{(X(T))^{1-\gamma}}{1-\gamma}\right]
  s.t. dX(t) = (r X(t) + \varphi_1(t)(m_1(t) - r))\, dt + \varphi_1(t)\sigma_1\, d\bar z_1, \quad X(0) = x.

Closed form allocation:

  w_1(t) = \frac{1}{\gamma - (1-\gamma)\frac{v_1(T-t)}{\sigma_1^2 + v_1 t}} \cdot \frac{m_1(t) - r}{\sigma_1^2},

and the time-0 hedging demand is

  h_1(0) := w_1(0) - \frac{1}{\gamma} \cdot \frac{m_1 - r}{\sigma_1^2} = \frac{(1-\gamma)\frac{v_1 T}{\sigma_1^2}}{\gamma\left(\gamma - (1-\gamma)\frac{v_1 T}{\sigma_1^2}\right)} \cdot \frac{m_1 - r}{\sigma_1^2}.
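Numerically (illustrative parameters), the closed form splits into the myopic demand plus the hedging demand, which is negative for \gamma > 1:

```python
# Myopic vs hedging demand in the learning model (illustrative numbers).
gamma, r, sigma1, m1, v1, T = 2.0, 0.03, 0.2, 0.09, 0.02**2, 10.0

myopic = (m1 - r) / (gamma * sigma1**2)
a = v1 * T / sigma1**2
w1_0 = 1 / (gamma - (1 - gamma) * a) * (m1 - r) / sigma1**2
h1_0 = (1 - gamma) * a / (gamma * (gamma - (1 - gamma) * a)) \
       * (m1 - r) / sigma1**2

print(w1_0, myopic + h1_0)   # identical by construction
print(h1_0)                  # negative here: gamma > 1 cuts the risky position
```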
Learning
Multiple dimensions
  dS_2(t) = \mu_2 S_2(t)\, dt + \sigma_2 S_2(t)\, dz_2,

  \mu \equiv \begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix} \sim N\left(\begin{pmatrix} m_1 \\ m_2 \end{pmatrix}, \begin{pmatrix} v_1 & v_{12} \\ v_{12} & v_2 \end{pmatrix}\right).

  w_1(0) = \frac{1}{\gamma - (1-\gamma)\frac{v_1 T}{\sigma_1^2}} \cdot \frac{m_1 - r}{\sigma_1^2} + \frac{(1-\gamma) T \frac{v_{12}}{\sigma_1^2}}{\left(\gamma - (1-\gamma)\frac{v_1 T}{\sigma_1^2}\right)\left(\gamma - (1-\gamma)\frac{v_2 T}{\sigma_2^2}\right)} \cdot \frac{m_2 - r}{\sigma_2^2}.
Predictability
A toy model
A box with 10 urns: 5 are good and 5 are bad.
Prior distribution:

  \tilde r_0 \sim \{\bar r\ [0.5];\ -\underline r\ [0.5]\},

  \tilde r_1 \mid \tilde r_0 = \bar r \sim \{\bar r\ [0.25];\ -\underline r\ [0.75]\},

  \tilde r_1 \mid \tilde r_0 = -\underline r \sim \{\bar r\ [0.75];\ -\underline r\ [0.25]\}.

Solution: w_1^u = 3%, w_1^d = 33%, w_0 = 20.7%.
Predictability
The continuous time model
  \mu_t = \alpha + \beta \log\frac{D_t}{S_t},

  dS(t) = \mu_t S(t)\, dt + \sigma S(t)\, dz,

  d\theta_t = a(\bar\theta - \theta_t)\, dt - b\, dz.
Predictability
The allocation problem
  V(0, x, \mu_0) = \sup_{\varphi_1} J(c(\cdot)) = \sup_{\varphi_1} E\left[\int_0^T e^{-\delta t} \frac{(c(t))^{1-\gamma}}{1-\gamma}\, dt\right]
  s.t. dX(t) = (r X(t) + \varphi_1(t)(\mu_t - r) - c(t))\, dt + \varphi_1(t)\sigma\, dz, \quad X(0) = x.

FOC:

  c_t = \left(\lambda \psi_t e^{\delta t}\right)^{-1/\gamma}.
The solution
The continuous time model
  X(t) = E_t\left[\int_t^T \left(\lambda \psi_s e^{\delta s}\right)^{-1/\gamma} \frac{\psi_s}{\psi_t}\, ds\right],

which evaluates to

  X(t) = (\lambda \psi_t)^{-1/\gamma}\, e^{-\frac{\delta t}{\gamma}} \int_0^{T-t} H(\tau, \theta_t)\, d\tau,

where

  H(\tau, \theta) = e^{A_1(\tau)\frac{\theta^2}{2} + A_2(\tau)\theta + A_3(\tau)}.
Hyperbolic discounting
The continuous time model
Real life phenomena: procrastination and addiction.
It is of economic relevance: it is a logical implication of rational dynamic policymaking when some agents' decisions depend on their expectations of future policies.
Examples: government policy, saving/hyperbolic discounting, durable goods, large shareholder ownership policy, etc.
Hyperbolic discounting
Motivation
Canonical model for intertemporal decision making in macroeconomics and finance (Samuelson (1937)):

  U_t(c, X) = \int_t^T e^{-\rho(s-t)} u(c_s)\, ds + e^{-\rho(T-t)} g(X), \quad \rho > 0.

The exponential discount is the only discount function that exhibits the time consistency property: later preferences confirm earlier preferences.
Hyperbolic discounting
A toy example
Example:

  J_0(c_0, c_T, c_{2T}) = u(c_0) + \frac{1}{2} u(c_T) + \frac{1}{2} u(c_{2T}),

  J_1(c_T, c_{2T}) = u(c_T) + \frac{1}{2} u(c_{2T}),

  u(c_0) = 0, \quad u(c_T) = -8, \quad u(c_{2T}) = 12.

From the perspective of time 0,

  J_0 = 0 - \frac{1}{2} \cdot 8 + \frac{1}{2} \cdot 12 = 2 > 0.

But

  J_1 = -8 + \frac{1}{2} \cdot 12 = -2 < 0.

Preference reversal!
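The arithmetic, spelled out:

```python
# Preference-reversal arithmetic: the same plan looks good at time 0
# but bad once time T arrives.
u0, uT, u2T = 0, -8, 12

J0 = u0 + 0.5 * uT + 0.5 * u2T   # time-0 evaluation:  2 > 0, plan accepted
J1 = uT + 0.5 * u2T              # time-T evaluation: -2 < 0, plan regretted
print(J0, J1)
```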
Hyperbolic discounting
Why?
Experimental evidence on hyperbolic discount.
Environmental decision making.
Hyperbolic discount, or any sort of non-constant discount rate, creates preference reversals (Strotz (1956)): optimal control is meaningless.
Hyperbolic discounting
Solution
Assumptions on how people act when they face time inconsistency need to be made: naivete or sophistication; commitment mechanisms.
We assume sophistication and absence of commitment.
Considerable interest in explaining individual consumption, saving and retirement behavior (Laibson (1997), Harris and Laibson (2001), Luttmer and Mariotti (2003), Diamond and Koszegi (2003)).
Hyperbolic discounting
Results
We solve the "intrapersonal game" between successive incarnations of the decision maker in the context of intertemporal decision making with non-constant discount.
The equilibrium is characterized by an equation which generalizes the HJB equation.
We establish existence/uniqueness (in a local sense) under "general" conditions on the discount function and the utility/production functions, and provide a few workable examples.
Hyperbolic discounting
The problem
  V(t, k) = \text{``}\sup_c\text{''} \int_t^T h(s-t)\, u(c(s))\, ds + h(T-t)\, g(k(T))

under the state equation

  \frac{dk(s)}{ds} = f(k(s)) - c(s), \quad k(t) = k.

h is a decreasing positive function with h(0) = 1.
Hyperbolic discounting
The equilibrium
Suppose a global (Markovian) consumption policy
\sigma : [0, T] \times R \to R, (t, k) \mapsto \sigma(t, k) is announced.
Assume that self t can dictate a given consumption level c for
all selves on the interval [t, t + ε].
The expectation of self t is that all instantaneous selves on
(t + ε, T ] will use the policy σ.
Given this expectation, self t contemplates the best consumption level c she can pick (for her coalition): call it c(t, k, \varepsilon).
The policy \sigma is an equilibrium if no self (t, k) by herself has an incentive to deviate from \sigma(\cdot, \cdot) when the commanded coalition is vanishingly small.
Mathematically:

  \lim_{\varepsilon \downarrow 0} c(t, k, \varepsilon) = \sigma(t, k).
Hyperbolic discounting
Characterization
Theorem: A consumption policy \sigma : [0, T] \times R \to R is an equilibrium if and only if

  \frac{\partial V}{\partial t}(t, k) + u(\sigma(t, k)) + \frac{\partial V}{\partial k}(t, k)\left(f(k) - \sigma(t, k)\right) = -\int_t^T h'(s-t)\, u \circ i\left(\frac{\partial V}{\partial k}(s, k_0(s))\right) ds - h'(T-t)\, g(k_0(T)),

with the boundary condition

  V(T, k) = g(k),

with

  \frac{dk_0}{ds} = f(k_0(s)) - i\left(\frac{\partial V}{\partial k}(s, k_0(s))\right), \quad s \ge t, \quad k_0(t) = k,

and

  u'(\sigma(t, k)) = \frac{\partial V}{\partial k}(t, k),

where i = (u')^{-1} denotes the inverse marginal utility.
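A quick consistency check (immediate from the theorem, though not spelled out on the slide): with exponential discount h(t) = e^{-\rho t} we have h'(\tau) = -\rho h(\tau), so the right-hand side becomes

  -\int_t^T h'(s-t)\, u \circ i\left(V_k(s, k_0(s))\right) ds - h'(T-t)\, g(k_0(T)) = \rho \left[\int_t^T e^{-\rho(s-t)}\, u \circ i\left(V_k(s, k_0(s))\right) ds + e^{-\rho(T-t)}\, g(k_0(T))\right] = \rho V(t, k),

since the bracket is exactly the value of following \sigma = i(V_k) along the path k_0. The characterization then reduces to the standard HJB equation

  \rho V = V_t + u(\sigma) + V_k\left(f(k) - \sigma\right), \qquad u'(\sigma) = V_k,

and time consistency is recovered.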