Short course on Comparative Statics and Comparative Information

Course description: This course provides a concise and accessible introduction to some basic concepts and techniques used when comparing information structures or solutions to optimization problems. It introduces concepts such as supermodularity and the single crossing property and shows their relevance for guaranteeing that the solutions to an optimization problem are monotonically increasing (or decreasing) with respect to various parameters. Consideration will be given to decision-making both with and without uncertainty.
Lehmann’s notion of informativeness and its role in valuing information structures will also
be examined. Applications to games with strategic complementarities and to statistical
decision theory will be discussed.
Style: This is a short course and not a seminar. While there will be some discussion of
very new results, most of the time will be spent on known results in this area that I consider
important. The emphasis is on providing an accessible introduction that will allow anyone
interested in the techniques or their applications to do further reading and research. To give
a flavor of how the arguments work, I shall at various points be proving results in some
detail. In other words, I shall be sacrificing some coverage for the sake of providing greater
depth and texture.
Readings:
AMIR, R. (1996): “Cournot Oligopoly and the Theory of Supermodular Games,” Games
and Economic Behavior, 15, 132-148.
AMIR, R. (2005): “Supermodularity and Complementarity in Economics: An Elementary
Survey,” Southern Economic Journal, 71(3), 636-660.
ATHEY, S. (2001): “Single Crossing Properties and the Existence of Pure Strategy Equilibria in Games of Incomplete Information,” Econometrica, 69(4), 861-890.
ATHEY, S. (2002): “Monotone Comparative Statics Under Uncertainty,” The Quarterly
Journal of Economics, 117(1), 187-223.
ATHEY, S., P. MILGROM, AND J. ROBERTS (1998): Robust Comparative Statics.
(Draft Chapters, available on Athey’s webpage.)
BLACKWELL, D. AND M. A. GIRSHICK (1954): Theory of Games and Statistical Decisions. New York: Dover Publications.
ECHENIQUE, F. (2002): “Comparative Statics by Adaptive Dynamics and the Correspondence Principle,” Econometrica, 70(2), 833-844.
GOLLIER, C. (2001): The Economics of Risk and Time. Cambridge: MIT Press.
KARLIN, S. AND H. RUBIN (1956): “The Theory of Decision Procedures for Distributions with Monotone Likelihood Ratio,” The Annals of Mathematical Statistics, 27(2),
272-299.
LEHMANN, E. L. (1988): “Comparing Location Experiments,” The Annals of Statistics,
16(2), 521-533.
MILGROM, P. (1994): “Comparing Optima: Do Simplifying Assumptions Affect Conclusions?” Journal of Political Economy, 102(3), 607-615.
MILGROM, P. AND C. SHANNON (1994):* “Monotone Comparative Statics,” Econometrica, 62(1), 157-180.
MILGROM, P. AND J. ROBERTS (1990):* “Rationalizability, Learning, and Equilibrium
in Games with Strategic Complementarities,” Econometrica, 58(6), 1255-1277.
MILGROM, P. AND J. ROBERTS (1996): “The LeChatelier Principle,” American Economic Review, 86(1), 173-179.
QUAH, J. K.-H. (2007):* “The Comparative Statics of Constrained Optimization Problems,” Econometrica, 75(2), 401-431.
QUAH, J. K.-H. AND B. STRULOVICI (2007):* “Comparative Statics, Informativeness,
and the Interval Dominance Order,” Nuffield College Working Paper No. 4.
TOPKIS, D. M. (1998):* Supermodularity and Complementarity. Princeton: Princeton
University Press.
VIVES, X. (1990): “Nash Equilibrium with Strategic Complementarities,” Journal of
Mathematical Economics, 19(3), 305-321.
VIVES, X. (1999): Oligopoly Pricing. Cambridge: MIT Press.
ZHOU, L. (1994): “The Set of Nash Equilibria of a Supermodular Game is a Complete
Lattice,” Games and Economic Behavior, 7(2), 295-300.
The closest thing to a textbook for the material covered in this course is the book by Topkis.
The survey by Amir and the incomplete notes by Athey et al. are also relevant. The books
by Gollier and Vives are not books on mathematical techniques as such, but they do make
use of the concepts and techniques covered in the course. The book by Blackwell and
Girshick on statistical decision theory is very old, but it is beautifully written and provides
an introduction to the topic that is congenial to economists. The readings most closely
related to the lectures are indicated with asterisks*.
Short Course on
Comparative Statics and Comparative Information
John Quah
One-dimensional comparative statics
Let X ⊆ R and f, g : X → R.
We are interested in comparing argmaxx∈X f (x) with argmaxx∈X g(x).
Standard approach:
Assume X is a compact interval and f and g are quasi-concave
functions.
Let x∗ be the unique maximizer of f . Then f ′ (x∗ ) = 0.
Show that g ′ (x∗ ) ≥ 0. Then optimum has shifted to the right.
This approach makes various assumptions, most notably the
quasi-concavity of f and g. This is not always a natural assumption. Example:
let x be output, P the inverse demand function, and c the marginal cost of producing the good.
The profit function Π(x) = xP(x) − cx is not naturally concave.
One-dimensional comparative statics
Assume that f and g are continuous functions and their domain X is
compact. Then argmaxx∈X f (x) and argmaxx∈X g(x) are nonempty.
But these sets need not be singletons or intervals.
First question: how do we compare sets?
Definition: Let S′ and S′′ be subsets of R. S′′ dominates S′ in the
strong set order (S′′ ≥ S′) if for any x′′ in S′′ and x′ in S′, we have
max{x′′, x′} in S′′ and min{x′′, x′} in S′.
Example: {3, 5, 6, 7} ≱ {1, 4, 6} but {3, 4, 5, 6, 7} ≥ {1, 3, 4, 5, 6}.
Note: if S ′′ = {x′′ } and S ′ = {x′ }, then x′′ ≥ x′ .
More generally, the largest element in S′′ is larger than the largest element in S′,
and the smallest element in S′′ is larger than the smallest element in S′.
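A quick way to get a feel for the definition is to test it on finite sets. The following is a minimal Python sketch (an added illustration, not part of the original notes) that checks the strong set order via the max/min condition; the two example sets are the ones above.

```python
# An added illustration: checking the strong set order for finite subsets of R
# via the max/min condition in the definition above.

def strong_set_order_ge(S2, S1):
    """Return True if S2 dominates S1 in the strong set order."""
    S2, S1 = set(S2), set(S1)
    return all(max(a, b) in S2 and min(a, b) in S1
               for a in S2 for b in S1)

print(strong_set_order_ge({3, 5, 6, 7}, {1, 4, 6}))           # False
print(strong_set_order_ge({3, 4, 5, 6, 7}, {1, 3, 4, 5, 6}))  # True
```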
One-dimensional comparative statics
Definition: g dominates f by the single crossing property (g SC f) if for all x′′ and x′ such that x′′ > x′, the following holds:
f(x′′) − f(x′) ≥ (>) 0 =⇒ g(x′′) − g(x′) ≥ (>) 0.
Let S be a partially ordered set. A family of real-valued functions {f(·, s)}s∈S is an SCP family if the functions are ordered by the single crossing property (SCP), i.e., whenever s′′ > s′, we have f(·, s′′) SC f(·, s′).
Motivation for the term ‘single crossing’: for any x′′ > x′, the function ∆ : S → R defined by
∆(s) = f(x′′, s) − f(x′, s)
crosses the horizontal axis at most once.
Single crossing is an ordinal property: if g SC f, then φ ◦ g SC ψ ◦ f, where φ and ψ are increasing functions mapping R to R.
One-dimensional comparative statics
In this case, f(·, s′′) SC f(·, s′):
[Figure: graphs of f(·, s′) and f(·, s′′) plotted against x.]
One-dimensional comparative statics
[Figure: two panels, each showing functions f and g of x, with f maximized at x∗ and g maximized at x∗∗ > x∗.]
In the top picture g SC f, but not in the bottom picture.
One-dimensional comparative statics
Definition: g dominates f by increasing differences (g IN f) if for all x′′ and x′ such that x′′ > x′, the following holds:
g(x′′) − g(x′) ≥ f(x′′) − f(x′).
Clearly, g IN f implies g SC f.
Similarly, a family {f(·, s)}s∈S satisfies increasing differences if the functions are ordered by increasing differences, i.e., whenever s′′ > s′, we have f(·, s′′) IN f(·, s′).
Let S be an open subset of Rl and X an open interval. Then a sufficient (and necessary) condition for increasing differences is
∂²f/∂x∂si (x, s) ≥ 0
at every point (x, s) and for all i.
One-dimensional comparative statics
Theorem 1: (MS-94) Suppose X ⊆ R and f, g : X → R; then
argmaxx∈Y g(x) ≥ argmaxx∈Y f (x) for any Y ⊆ X if and only if g SC f .
Proof: Assume g SC f , x′′ ∈ argmaxx∈Y f (x), and
x′ ∈ argmaxx∈Y g(x). We have to show that
max{x′ , x′′ } ∈ argmaxx∈Y g(x) and min{x′ , x′′ } ∈ argmaxx∈Y f (x).
We need only consider the case where x′′ > x′ .
Since x′′ is in argmaxx∈Y f (x), we have f (x′′ ) ≥ f (x′ ) and since
g SC f , we also have g(x′′ ) ≥ g(x′ ); thus x′′ is in argmaxx∈Y g(x).
Furthermore, f (x′′ ) = f (x′ ) so that x′ is in argmaxx∈Y f (x). If not,
f (x′′ ) > f (x′ ) which implies (by the fact that g SC f ) that
g(x′′ ) > g(x′ ), contradicting the assumption that g is maximized at x′ .
Necessity: it is clear that if the single crossing property is violated at x′ and x′′, then
argmaxx∈{x′,x′′} g(x) ≱ argmaxx∈{x′,x′′} f(x).
One-dimensional comparative statics
Application: Recall Π(x, −c) = xP(x) − cx. Then {Π(·, −c)}−c∈R− obeys increasing differences, since
∂²Π/∂x∂c = −1, i.e., ∂²Π/∂x∂(−c) = 1 > 0.
By Theorem 1, argmaxx∈X Π(x, −c) is increasing in −c.
For example, argmaxx∈X Π(x, −3) ≥ argmaxx∈X Π(x, −5).
In other words, the profit-maximizing output decreases as the marginal cost of output increases.
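A minimal numerical sketch of this comparative statics (an added illustration, assuming a linear inverse demand P(x) = 10 − x): it maximizes Π over a grid for two marginal costs and confirms that the optimal output falls as c rises.

```python
import numpy as np

# Assumed inverse demand for illustration: P(x) = 10 - x, output on a fine grid.
x = np.linspace(0.0, 10.0, 1001)

def profit(x, c):
    return x * (10.0 - x) - c * x          # Pi(x) = x P(x) - c x

x_star_low_cost  = x[np.argmax(profit(x, 3.0))]   # marginal cost c = 3
x_star_high_cost = x[np.argmax(profit(x, 5.0))]   # marginal cost c = 5
print(x_star_low_cost, x_star_high_cost)          # 3.5 and 2.5: output falls as c rises
```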
One-dimensional comparative statics
Application: Bertrand oligopoly with differentiated products, with
Πa(pa, p−a) = (pa − ca) Da(pa, p−a)
ln Πa(pa, p−a) = ln(pa − ca) + ln Da(pa, p−a).
So {ln Πa(·, p−a)}p−a has increasing differences if
∂²(ln Da)/∂pa∂p−a ≥ 0.
This is equivalent to
∂/∂p−a [ −(pa/Da) ∂Da/∂pa ] ≤ 0;
i.e., firm a’s own-price elasticity of demand,
εa(pa, p−a) = −(pa/Da) ∂Da/∂pa,
decreases with p−a.
If this assumption holds, argmaxpa∈P Πa(pa, p−a) increases with p−a.
Supermodular Games
Let X = X1 × X2 × ... × XN, where each Xi is a compact interval.
Theorem 2: (Tarski) Suppose φ : X → X is an increasing function. Then the set of fixed points {x ∈ X : φ(x) = x} is nonempty.
In fact, x∗∗ = sup{x ∈ X : x ≤ φ(x)} is a fixed point and is the largest fixed point, i.e., for any other fixed point x∗, we have x∗ ≤ x∗∗.
Note: φ need not be continuous.
Theorem 3: Suppose φ(·, t) : X → X is increasing in (x, t). Then the largest fixed point of φ(·, t) is increasing in t.
Supermodular Games
Theorem 3: Suppose φ(·, t) : X → X is increasing in (x, t). Then the largest fixed point of φ(·, t) is increasing in t.
Proof: By Theorem 2, the largest fixed point is x̄(t) = sup{x ∈ X : x ≤ φ(x, t)}.
Suppose t′′ > t′; since φ(x, t′′) ≥ φ(x, t′) for all x,
{x ∈ X : x ≤ φ(x, t′)} ⊆ {x ∈ X : x ≤ φ(x, t′′)}.
Therefore,
sup{x ∈ X : x ≤ φ(x, t′)} ≤ sup{x ∈ X : x ≤ φ(x, t′′)}.
By Theorem 2 again, x̄(t′) ≤ x̄(t′′).
QED
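On a finite grid the largest fixed point can be computed by iterating φ downward from the top of the lattice. The sketch below (an added illustration, with an assumed increasing map φ(x, t) = 0.5√x + t rounded to the grid) does this for two values of t and confirms the monotonicity asserted in Theorem 3.

```python
import numpy as np

def largest_fixed_point(phi, grid):
    """Iterate an increasing map phi downward from the top of a finite grid;
    the sequence is decreasing and stops at the largest fixed point."""
    x = grid[-1]                       # start at sup X
    while True:
        x_next = phi(x)
        if x_next >= x:                # for this decreasing sequence, phi(x) >= x means phi(x) = x
            return x
        x = x_next

# Assumed increasing map for illustration: phi(x, t) = min(1, 0.5*sqrt(x) + t),
# rounded to the nearest grid point (so it is increasing in both x and t).
grid = np.round(np.linspace(0.0, 1.0, 101), 2)
def make_phi(t):
    return lambda x: grid[np.abs(grid - min(1.0, 0.5 * np.sqrt(x) + t)).argmin()]

print(largest_fixed_point(make_phi(0.1), grid))   # largest fixed point at t = 0.1
print(largest_fixed_point(make_phi(0.3), grid))   # a larger fixed point at t = 0.3
```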
Supermodular Games
Bertrand Oligopoly: assume the set of firms is A; the typical firm a chooses its price from the compact interval P to maximize
Πa(pa, p−a) = (pa − ca) Da(pa, p−a).
Recall: if the own-price elasticity is decreasing in p−a, then the family {Πa(·, p−a)}p−a forms an SCP family.
Define Ba(p−a) = argmaxpa∈P Πa(pa, p−a).
This is the set of a’s best responses to the pricing of the other firms. It is nonempty if Πa is continuous.
Define B̄a(p−a) = max [argmaxpa∈P Πa(pa, p−a)].
This is the largest best response to the pricing of the other firms. It is an increasing function of p−a (by Theorem 1).
Define P̄ = P × P × ... × P and the map B̄ : P̄ → P̄ by
B̄(p) = (B̄a(p−a))a∈A.
A fixed point of this map is a NE of the game.
Supermodular Games
By Theorem 1, B̄ is an increasing function.
By Tarski’s Fixed Point Theorem, a fixed point exists.
Specifically, p∗ = sup{p ∈ P̄ : p ≤ B̄(p)} is a fixed point of the map B̄
and thus a NE. In fact, this is the largest NE. Why?
Let p̂ = (p̂a)a∈A be any other NE. Then p̂a ∈ Ba(p̂−a). So
p̂a ≤ B̄a(p̂−a),
and we obtain p̂ ≤ B̄(p̂). Thus, p̂ ∈ {p ∈ P̄ : p ≤ B̄(p)}, which implies
p̂ ≤ p∗ ≡ sup{p ∈ P̄ : p ≤ B̄(p)}.
In this game, the best-response map B̄ is monotone but not necessarily continuous. Existence is guaranteed by appealing to Tarski’s fixed point theorem rather than Kakutani’s. The game has a largest NE, and we can do comparative statics exercises on the largest NE...
Supermodular Games
What happens to the largest NE when firm ã experiences an increase in marginal cost from cã to c′ã? Recall
Πã(pã, p−ã, cã) = (pã − cã) Dã(pã, p−ã)
ln Πã(pã, p−ã, cã) = ln(pã − cã) + ln Dã(pã, p−ã).
Observe that
∂²(ln Πã)/∂pã∂cã > 0.
By Theorem 1, firm ã’s best responses increase with cã (for fixed p−ã). Formally,
Bã(p−ã, c′ã) ≥ Bã(p−ã, cã).
This implies that B̄(p, c′ã) ≥ B̄(p, cã). So the largest fixed point of B̄(·, c′ã) is larger than the largest fixed point of B̄(·, cã) (by Theorem 3). In other words, if firm ã’s marginal cost increases from cã to c′ã, the largest NE increases: every firm increases its price.
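A numerical sketch of the whole argument (an added illustration, assuming three firms with linear demand Da = 1 − pa + 0.5·(mean of p−a), for which the own-price elasticity pa/Da falls as rivals’ prices rise): it computes the largest NE by iterating the largest-best-response map downward from the top of the price box, then repeats the computation after raising one firm’s marginal cost.

```python
import numpy as np

# Assumed demand system for illustration: 3 firms, D_a = 1 - p_a + 0.5*mean(p_-a),
# prices on a grid in [0, 2]; the own-price elasticity p_a/D_a falls as rivals'
# prices rise, so each firm's profits form an SCP family in (p_a, p_-a).
grid = np.linspace(0.0, 2.0, 201)

def largest_best_response(a, p, costs):
    rivals = np.mean([p[b] for b in range(len(p)) if b != a])
    profits = (grid - costs[a]) * np.maximum(1.0 - grid + 0.5 * rivals, 0.0)
    return grid[np.max(np.flatnonzero(profits == profits.max()))]

def largest_NE(costs, n=3):
    p = np.full(n, grid[-1])                  # start from the top of the price box
    for _ in range(1000):
        p_new = np.array([largest_best_response(a, p, costs) for a in range(n)])
        if np.all(p_new == p):
            return p
        p = p_new
    return p

print(largest_NE(costs=[0.2, 0.2, 0.2]))   # largest NE at the original costs
print(largest_NE(costs=[0.6, 0.2, 0.2]))   # every price (weakly) rises when c_1 rises
```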
Decisions Under Uncertainty
Recall that a family {f(·, s)}s∈S is an SCP family (for S ⊆ R) if for any x′′ > x′, the function ∆ : S → R defined by
∆(s) = f(x′′, s) − f(x′, s)
crosses the horizontal axis at most once.
Definition: A function ∆ : S → R is said to have property (⋆) if, whenever s′′ > s′,
∆(s′) ≥ (>) 0 =⇒ ∆(s′′) ≥ (>) 0.
Definition: Consider a family of positive functions h(·, t) : S → R, where S ⊆ R. This family obeys log increasing differences if {ln h(·, t)}t∈T obeys increasing differences. Equivalently, whenever s′′ > s′,
h(s′′, t)/h(s′, t) increases with t.
Decisions Under Uncertainty
Theorem 4: Suppose S ⊆ R, the function ∆ : S → R has property (⋆), and {h(·, t)}t∈T obeys log increasing differences. Then
φ(t) = ∫_S ∆(s) h(s, t) ds has property (⋆).
Indeed, if ∆ crosses the horizontal axis at s = s0, then, for t2 > t1,
φ(t2) ≥ [h(s0, t2)/h(s0, t1)] φ(t1).
Decisions Under Uncertainty
Proof: We split the integral φ(t2) = ∫_S ∆(s) h(s, t2) ds into two:
φ(t2) = ∫_{−∞}^{s0} ∆(s) h(s, t1) [h(s, t2)/h(s, t1)] ds + ∫_{s0}^{∞} ∆(s) h(s, t1) [h(s, t2)/h(s, t1)] ds.
On (−∞, s0], ∆(s) ≤ 0, so the first term on the right is greater than
[h(s0, t2)/h(s0, t1)] ∫_{−∞}^{s0} ∆(s) h(s, t1) ds.
On [s0, ∞), ∆(s) ≥ 0, so the second term is greater than
[h(s0, t2)/h(s0, t1)] ∫_{s0}^{∞} ∆(s) h(s, t1) ds.
Adding up the two lower bounds gives us
φ(t2) ≥ [h(s0, t2)/h(s0, t1)] ∫_S ∆(s) h(s, t1) ds = [h(s0, t2)/h(s0, t1)] φ(t1).
QED
Decisions Under Uncertainty
Application: consider Bernoulli utility functions v(·, t) parameterized by t. Assume that the family {v′(·, t)}t∈T obeys log increasing differences. This is equivalent to
−v′′(z, t2)/v′(z, t2) ≤ −v′′(z, t1)/v′(z, t1)
whenever t2 > t1; i.e., type t2 is less risk averse than type t1.
The agent has wealth w and chooses between a safe and a risky asset to maximize expected utility
V(x, t) = ∫ v((w − x)r + xs, t) λ(s) ds,
where r and s are the payoffs of the safe and risky assets (the latter with density λ) and x ∈ [0, w] is the investment in the risky asset.
Decisions Under Uncertainty
Recall V(x, t) = ∫ v((w − x)r + xs, t) λ(s) ds. Differentiating w.r.t. x,
V′(x, t) = ∫_{s∈S} (s − r) λ(s) v′((w − x)r + xs, t) ds.
By Theorem 4 and the fact that {v′(·, t)}t∈T obeys log increasing differences, for any t2 > t1,
V′(x, t2) ≥ [v′(wr, t2)/v′(wr, t1)] V′(x, t1).
So whenever x′′ > x′,
V(x′′, t2) − V(x′, t2) ≥ [v′(wr, t2)/v′(wr, t1)] [V(x′′, t1) − V(x′, t1)].
Clearly, the family {V(·, t)}t∈T obeys SCP.
By Theorem 1, argmaxx∈[0,w] V(x, t) is increasing in t, i.e., the less risk averse agent invests more in the risky asset.
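A numerical sketch of this conclusion (an added illustration, assuming CRRA utilities v(z, t) = z^t/t with t ∈ (0, 1), so that larger t means less risk averse, and an assumed two-point distribution for the risky payoff):

```python
import numpy as np

# Assumed ingredients for illustration: CRRA utility v(z, t) = z**t / t with
# t in (0, 1) (larger t = less risk averse), wealth w = 1, safe return r = 1.05,
# risky payoff s in {0.5, 1.8} with equal probability.
w, r = 1.0, 1.05
payoffs, probs = np.array([0.5, 1.8]), np.array([0.5, 0.5])
x_grid = np.linspace(0.0, w, 1001)            # investment in the risky asset

def expected_utility(x, t):
    wealth = (w - x) * r + x * payoffs        # terminal wealth in each state
    return np.sum(probs * wealth**t / t)

for t in (0.3, 0.5, 0.7):
    x_star = x_grid[int(np.argmax([expected_utility(x, t) for x in x_grid]))]
    print(t, round(float(x_star), 3))         # the optimal x rises with t
```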
Decisions Under Uncertainty
Definition: A family of density functions {λ(·, t)}t∈T (defined on S ⊆ R) that obeys log increasing differences is said to respect the monotone likelihood ratio order (MLR).
Fact: If {λ(·, t)}t∈T is MLR-ordered then it is ordered by first order stochastic dominance (FOSD), i.e., whenever t2 > t1, λ(·, t2) first order stochastically dominates λ(·, t1).
Theorem 5: Suppose {f(·, s)}s∈S (for S ⊆ R) is an SCP family. Then the family {F(·, t)}t∈T (for T ⊆ R) given by
F(x, t) = ∫_{s∈S} f(x, s) λ(s, t) ds
is also an SCP family if {λ(·, t)}t∈T is MLR-ordered.
Decisions Under Uncertainty
Theorem 5: Suppose {f(·, s)}s∈S (for S ⊆ R) is an SCP family. Then the family {F(·, t)}t∈T given by
F(x, t) = ∫_{s∈S} f(x, s) λ(s, t) ds
is also an SCP family if {λ(·, t)}t∈T is MLR-ordered.
Proof: For x′′ > x′, the function ∆(s) = f(x′′, s) − f(x′, s) has property (⋆). By Theorem 4, the function φ given by
φ(t) = F(x′′, t) − F(x′, t) = ∫_{s∈S} [f(x′′, s) − f(x′, s)] λ(s, t) ds = ∫_{s∈S} ∆(s) λ(s, t) ds
also obeys property (⋆). In other words, {F(·, t)}t∈T is an SCP family.
QED
Decisions Under Uncertainty
Theorem 6: Suppose {f(·, s)}s∈S (for S ⊆ R) satisfies increasing differences. Then the family {F(·, t)}t∈T given by
F(x, t) = ∫_{s∈S} f(x, s) λ(s, t) ds
also satisfies increasing differences if {λ(·, t)}t∈T is FOSD-ordered.
Proof: Since {f(·, s)}s∈S satisfies increasing differences, for x′′ > x′ the function ∆(s) = f(x′′, s) − f(x′, s) is increasing. By a standard result (Rothschild-Stiglitz),
∫_S ∆(s) λ(s, t2) ds ≥ ∫_S ∆(s) λ(s, t1) ds if t2 > t1.
Re-writing this expression, we obtain
F(x′′, t2) − F(x′, t2) ≥ F(x′′, t1) − F(x′, t1),
as required by increasing differences.
QED
Decisions Under Uncertainty
Application: The family {f(·, s)}s∈S given by
f(x, s) = v((w − x)r + xs)
is an SCP family. By Theorem 5, the family {F(·, t)}t∈T given by
F(x, t) = ∫ v((w − x)r + xs) λ(s, t) ds
is an SCP family provided {λ(·, t)}t∈T is MLR-ordered.
By Theorem 1, argmaxx∈[0,w] F(x, t) increases with t.
Interpretation: an MLR shift in the payoff distribution λ(·, t) means that higher payoffs for the risky asset are more likely, so investment in the risky asset increases.
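A small numerical sketch of this interpretation (an added illustration, assuming log utility and a pair of MLR-ordered two-point payoff distributions indexed by the probability q of the high payoff):

```python
import numpy as np

# Assumed ingredients for illustration: log utility, wealth w = 1, safe return
# r = 1.05, risky payoff s in {0.5, 1.8}; raising the probability q of the high
# payoff is an MLR improvement of the payoff distribution.
w, r = 1.0, 1.05
payoffs = np.array([0.5, 1.8])
x_grid = np.linspace(0.0, w, 1001)

def expected_utility(x, q):
    probs = np.array([1.0 - q, q])
    return np.sum(probs * np.log((w - x) * r + x * payoffs))

for q in (0.5, 0.6, 0.7):
    x_star = x_grid[int(np.argmax([expected_utility(x, q) for x in x_grid]))]
    print(q, round(float(x_star), 3))     # investment in the risky asset rises with q
```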
Short Course on
Comparative Statics and Comparative Information
Part II
John Quah
[email protected]
One-dimensional comparative statics
Recall....
Definition: Let S′ and S′′ be subsets of R. S′′ dominates S′ in the strong set order (S′′ ≥ S′) if for any x′′ in S′′ and x′ in S′, we have max{x′′, x′} in S′′ and min{x′′, x′} in S′.
Example: {3, 5, 6, 7} ≱ {1, 4, 6} but {3, 4, 5, 6, 7} ≥ {1, 3, 4, 5, 6}.
Note: if S′′ = {x′′} and S′ = {x′}, then x′′ ≥ x′.
Definition: g dominates f by the single crossing property (g SC f) if for all x′′ and x′ such that x′′ > x′, the following holds:
f(x′′) − f(x′) ≥ (>) 0 =⇒ g(x′′) − g(x′) ≥ (>) 0.
Theorem 1: (MS-94) Suppose X ⊆ R and f, g : X → R; then argmaxx∈Y g(x) ≥ argmaxx∈Y f(x) for any Y ⊆ X if and only if g SC f.
Multidimensional comparative statics
Definitions:
We endow Rl with the product order: x′ ≥ x if x′i ≥ xi for all i.
An element y in Rl is an upper bound of S ⊂ Rl if y ≥ x for all x in S. It is the least upper bound (or supremum) if it is an upper bound and, for any other upper bound y′, we have y′ ≥ y.
An element z in Rl is a lower bound of S ⊂ Rl if z ≤ x for all x in S. It is the greatest lower bound (or infimum) if it is a lower bound and, for any other lower bound z′, we have z′ ≤ z.
Note:
(1) sup S is unique if it exists. (2) sup S need not be in S.
Multidimensional comparative statics
We denote the supremum of x′ and x′′ by x′ ∨ x′′ and their infimum by x′ ∧ x′′. It is easy to check that
x′ ∨ x′′ = (max{x′1, x′′1}, max{x′2, x′′2}, ..., max{x′l, x′′l}) and
x′ ∧ x′′ = (min{x′1, x′′1}, min{x′2, x′′2}, ..., min{x′l, x′′l}).
[Figure: x′, x′′, x′ ∨ x′′ and x′ ∧ x′′ in R2.]
A set S is a sublattice of Rl if for any x′ and x′′ in S, x′ ∨ x′′ and x′ ∧ x′′ are both in S.
Multidimensional comparative statics
More definitions:
Let S′ and S′′ be subsets of Rl. S′′ dominates S′ in the strong set order (S′′ ≥ S′) if for any x′′ in S′′ and x′ in S′, we have x′ ∨ x′′ in S′′ and x′ ∧ x′′ in S′.
Note:
(1) If S′′ = {x′′} and S′ = {x′}, then S′′ ≥ S′ if and only if x′′ ≥ x′.
(2) Suppose S′ and S′′ both contain their suprema; then S′′ ≥ S′ implies that sup S′′ ≥ sup S′.
Proof of (2): Let x′′ and x′ be the suprema of S′′ and S′ respectively. Then x′′ ∨ x′ is in S′′. Since x′′ is sup S′′, we have x′′ ≥ x′′ ∨ x′, which implies x′′ ≥ x′.
QED
Multidimensional comparative statics
Let X be a sublattice of Rl and f : X → R. The function f is supermodular (SPM) if
f(x′ ∨ x′′) − f(x′′) ≥ f(x′) − f(x′ ∧ x′′).
[Figure: x′, x′′, x′ ∨ x′′ and x′ ∧ x′′ in R2.]
The function f is quasisupermodular (QSM) if
f(x′) − f(x′ ∧ x′′) ≥ (>) 0 =⇒ f(x′ ∨ x′′) − f(x′′) ≥ (>) 0.
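A minimal sketch (an added illustration, with assumed example functions) that checks supermodularity on a finite grid in R2, using the componentwise max/min formulas for ∨ and ∧ given above:

```python
import itertools
import numpy as np

def join(x1, x2):                      # x' v x'': componentwise max
    return tuple(max(a, b) for a, b in zip(x1, x2))

def meet(x1, x2):                      # x' ^ x'': componentwise min
    return tuple(min(a, b) for a, b in zip(x1, x2))

def is_supermodular(f, points):
    """Check f(x' v x'') - f(x'') >= f(x') - f(x' ^ x'') for all pairs."""
    return all(f(join(a, b)) - f(b) >= f(a) - f(meet(a, b)) - 1e-12
               for a, b in itertools.product(points, repeat=2))

# Assumed example functions on a grid in R^2.
grid = list(itertools.product(np.linspace(0.1, 2.0, 20), repeat=2))
print(is_supermodular(lambda x: np.sqrt(x[0] * x[1]), grid))   # True (positive cross-partial)
print(is_supermodular(lambda x: -x[0] * x[1], grid))           # False (negative cross-partial)
```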
Multidimensional comparative statics
Theorem 2.1: Let X be a sublattice of Rl and suppose that f : X → R is a QSM function. Then argmaxx∈X f(x) is a sublattice.
Proof: Suppose x′ and x′′ are both in argmaxx∈X f(x). Since X is a sublattice, x′ ∨ x′′ and x′ ∧ x′′ are both in X. Note that f(x′) ≥ f(x′ ∧ x′′) since x′ ∈ argmaxx∈X f(x). The QSM property guarantees that f(x′ ∨ x′′) ≥ f(x′′), which implies that x′ ∨ x′′ ∈ argmaxx∈X f(x). Etc.
QED
[Figure: x′, x′′, x′ ∨ x′′ and x′ ∧ x′′ in R2.]
Multidimensional comparative statics
Definition: g dominates f by the single crossing property (g SC f) if for all x′′ and x′ such that x′′ > x′, the following holds:
f(x′′) − f(x′) ≥ (>) 0 =⇒ g(x′′) − g(x′) ≥ (>) 0.
Definition: g dominates f by increasing differences (g IN f) if for all x′′ and x′ such that x′′ > x′, the following holds:
g(x′′) − g(x′) ≥ f(x′′) − f(x′).
Multidimensional comparative statics
A family {f(·, s)}s∈S satisfies SCP if the functions are ordered by SCP, i.e., whenever s′′ > s′, we have f(·, s′′) SC f(·, s′). Clearly, g IN f implies g SC f.
Similarly, a family {f(·, s)}s∈S satisfies increasing differences if the functions are ordered by increasing differences, i.e., whenever s′′ > s′, we have f(·, s′′) IN f(·, s′).
Fact: Let S be an open subset of Rl and X an open interval. Then a sufficient (and necessary) condition for increasing differences is
∂²f/∂xi∂sj (x, s) ≥ 0
at every point (x, s) and for all i and j.
Multidimensional comparative statics
Theorem 2.1: (MS-94) Suppose X ⊆ Rl is a sublattice and f, g : X → R are two QSM functions. Then argmaxx∈Y g(x) ≥ argmaxx∈Y f(x) for every sublattice Y contained in X if and only if g SC f.
Proof: Let x′ ∈ argmaxx∈Y f(x) and x′′ ∈ argmaxx∈Y g(x). Then f(x′) ≥ f(x′ ∧ x′′). By QSM of f, f(x′ ∨ x′′) ≥ f(x′′). Since g SC f, we obtain g(x′ ∨ x′′) ≥ g(x′′), which implies that x′ ∨ x′′ is in argmaxx∈Y g(x). Etc.
QED
Multidimensional comparative statics
Application:
Let x denote the vector of inputs (drawn from X = Rl+), p the vector of input prices, and R the revenue function mapping the input vector x to revenue (in R). The firm’s profit is
Π(x; p) = R(x) − p · x.
Note that
∂²Π/∂xi∂pj (x; p) = −1 if i = j and 0 otherwise,
so Π has decreasing differences in (x, p) (equivalently, increasing differences in (x, −p)).
If R is supermodular, then Π is supermodular (in x).
Conclusion:
By Theorem 2.1, argmaxx∈X Π(x; p′) ≥ argmaxx∈X Π(x; p′′) if p′ < p′′.
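A numerical sketch (an added illustration, assuming the supermodular revenue function R(x1, x2) = 4(x1·x2)^0.3) that maximizes profit over a grid of input pairs and checks that both inputs (weakly) rise when one input price falls:

```python
import itertools

# Assumed revenue function for illustration: R(x1, x2) = 4*(x1*x2)**0.3,
# which is supermodular (positive cross-partial for x > 0).
def revenue(x):
    return 4.0 * (x[0] * x[1]) ** 0.3

grid = list(itertools.product([0.1 * k for k in range(81)], repeat=2))

def optimal_inputs(p):
    """A profit-maximizing input vector on the grid (ties broken upward)."""
    profits = {x: revenue(x) - p[0] * x[0] - p[1] * x[1] for x in grid}
    best = max(profits.values())
    return max(x for x, v in profits.items() if v >= best - 1e-9)

print(optimal_inputs((1.0, 1.0)))   # optimal inputs at prices (1, 1)
print(optimal_inputs((0.5, 1.0)))   # both inputs (weakly) rise when p1 falls
```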
One-dimensional comparative statics - again!
Recall that g SC f if, for x∗∗ > x∗, we have
f(x∗∗) − f(x∗) ≥ (>) 0 ⇒ g(x∗∗) − g(x∗) ≥ (>) 0.
But look at these pictures:
[Figure: two panels, each showing functions f and g of x, with f maximized at x∗ and g maximized at x∗∗ > x∗.]
g SC f in the upper diagram but not in the lower. This is unsatisfactory since in both cases, the optimum has increased.
The interval dominance order
Motivation for the interval dominance order: to develop an ordering for functions that captures both situations.
Let X ⊆ R and f, g : X → R.
Recall: g SC f if, for any pair x∗∗ > x∗, we have
f(x∗∗) − f(x∗) ≥ (>) 0 ⇒ g(x∗∗) − g(x∗) ≥ (>) 0.
g dominates f by the interval dominance order (g I f) if for any pair x∗∗ > x∗ satisfying f(x∗∗) ≥ f(x) for x in the interval [x∗, x∗∗] = {x ∈ X : x∗ ≤ x ≤ x∗∗}, the following holds:
f(x∗∗) − f(x∗) ≥ (>) 0 ⇒ g(x∗∗) − g(x∗) ≥ (>) 0.
The interval dominance order
g I f if for any pair x∗∗ > x∗ satisfying f(x∗∗) ≥ f(x) for x in the interval [x∗, x∗∗] = {x ∈ X : x∗ ≤ x ≤ x∗∗}, the following holds:
f(x∗∗) − f(x∗) ≥ (>) 0 ⇒ g(x∗∗) − g(x∗) ≥ (>) 0.    (⋆)
[Figure: two panels, each showing functions f and g of x, with f maximized at x∗ and g maximized at x∗∗ > x∗.]
Notice that (⋆) need not be applied to x∗∗ and x∗.
The interval dominance order
A subset I of X is called an interval of X if for any x′ and x′′ in I and x in X such that x′ < x < x′′, we have x in I.
Example: If X = {1, 2, 3, 5, 6}, then {1, 2} and {3, 5, 6} are intervals of X but {3, 6} is not an interval of X.
Theorem 2.2: Suppose that f and g are real-valued functions defined on X ⊂ R. Then g I f if and only if argmaxx∈I g(x) ≥ argmaxx∈I f(x) for any interval I of X.
Compare this with
Theorem: Suppose that f and g are real-valued functions defined on X ⊂ R. Then g SC f if and only if argmaxx∈S g(x) ≥ argmaxx∈S f(x) for any subset S of X.
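A finite sketch (an added illustration, with assumed example functions on X = {1, 2, 3}) contrasting the two characterizations: the f and g below are ordered by interval dominance but not by single crossing, so the argmax comparison holds on every interval of X yet fails for a non-interval subset.

```python
from itertools import combinations

# Assumed finite example: X = {1, 2, 3} with
#   f(1), f(2), f(3) = 0, 2, 1   and   g(1), g(2), g(3) = 1, 2, 0.
# Here g I f holds but g SC f fails (take the pair x' = 1, x'' = 3).
X = [1, 2, 3]
f = {1: 0, 2: 2, 3: 1}
g = {1: 1, 2: 2, 3: 0}

def argmax(h, S):
    m = max(h[x] for x in S)
    return {x for x in S if h[x] == m}

def sso_ge(A, B):                      # A >= B in the strong set order
    return all(max(a, b) in A and min(a, b) in B for a in A for b in B)

def is_interval(S):
    return all(x in S for x in X if min(S) < x < max(S))

for r in range(1, len(X) + 1):
    for c in combinations(X, r):
        S = set(c)
        ok = sso_ge(argmax(g, S), argmax(f, S))
        print(sorted(S), "interval" if is_interval(S) else "not interval", ok)
# Every interval passes; the non-interval subset {1, 3} fails.
```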
IDO with Uncertainty
Basic example of an IDO family: for every s, u(·, s) is quasiconcave (in x) with argmax u(x, s′′) ≥ argmax u(x, s′) for s′′ > s′.
[Figure: quasiconcave functions u(·, s′), u(·, s′′), u(·, s′′′) whose peaks shift to the right as s increases.]
This family of quasiconcave functions with increasing peaks need not be an SCP family.
At each state s there is an optimal action, which increases with s.
The agent chooses x under uncertainty, i.e., before s is realized. He maximizes U(x) = ∫_S u(x, s) λ(s) ds, where λ : S → R is the density function.
IDO with Uncertainty
We expect the optimal choice of x to increase when higher states are more likely.
Recall: a family of density functions {λ(·, t)}t∈T (defined on S ⊆ R) respects the monotone likelihood ratio order (MLR) if for t′′ > t′,
λ(s, t′′)/λ(s, t′) increases with s.
Theorem 2.2: (QS-07) Suppose {u(·, s)}s∈S forms an IDO family and {λ(·, t)}t∈T is ordered by MLR. Then {U(·, t)}t∈T, defined by
U(x, t) = ∫_S u(x, s) λ(s, t) ds,
forms an IDO family. Consequently, argmaxx∈X U(x, t) is increasing in t.
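A numerical sketch of Theorem 2.2 (an added illustration, assuming the quasiconcave family u(x, s) = −(x − s)², whose peaks increase with s, and MLR-ordered geometric-style weights λ(s, t) ∝ t^s on a finite set of states):

```python
import numpy as np

# Assumed illustration: states s = 1,...,5; u(x, s) = -(x - s)**2 is quasiconcave
# in x with its peak at s (an IDO family); weights lam(s, t) proportional to t**s
# are MLR-ordered in t.
states = np.arange(1, 6)
x_grid = np.linspace(0.0, 6.0, 601)

def U(x, t):
    lam = t ** states.astype(float)
    lam = lam / lam.sum()                       # normalized MLR-ordered weights
    return np.sum(lam * -(x - states) ** 2)

for t in (0.5, 1.0, 2.0):
    x_star = x_grid[int(np.argmax([U(x, t) for x in x_grid]))]
    print(t, round(float(x_star), 2))           # the optimal x rises with t
```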
IDO with Uncertainty
Example: a Firm decides on the time x to launch a new product. The more time the Firm gives itself, the more it can improve the quality of the product and its manufacturing process. But there is a Rival who may launch a similar product at time s. If the Firm is anticipated, it suffers a discrete fall in profit. Graphically, the Firm’s family of profit functions {π(·, s)}s∈S looks like
[Figure: profit functions π(·, s) with optimal launch times x̂(s′) ≤ x̂(s′′) ≤ x̂(s′′′), increasing in the Rival’s launch time s.]
The firm chooses x to maximize F(x, λ) = ∫_{s∈S} π(x, s) λ(s) ds.
Note that {π(·, s)}s∈S is an IDO family, but not necessarily an SCP family.
By Theorem 2.2, the Firm’s optimal launch time increases if λ experiences an MLR shift.
Statistical Decision Theory
The agent chooses an action after observing a signal z, but before the realization of the state; z is drawn from an interval Z (Z ⊂ R).
The agent’s decision rule is a map from Z to X ⊂ R.
The distribution over signals at a state s ∈ S is H(z|s) (with density h(z|s)). We refer to the family {H(·|s)}s∈S as the information structure H.
The expected utility of rule φ in state s is ∫_Z u(φ(z), s) dH(z|s).
How should a decision maker choose amongst decision rules?
The Bayesian decision maker has a prior P on the states of the world S: the value to him of decision rule φ is
∫_S [∫_Z u(φ(z), s) dH(z|s)] dP(s).
Statistical Decision Theory
Theorem 2.3: (QS-07) Suppose {u(·, s)}s∈S is an IDO family and {h(·|s)}s∈S is MLR-ordered. Then there exists an increasing decision rule that maximizes the Bayesian’s ex ante utility.
Proof: We denote the posterior distribution (over states) conditional on z by H̃P(·|z). The ex ante utility of rule φ is
∫_S [∫_Z u(φ(z), s) dH(z|s)] dP(s) = ∫_{z∈Z} [∫_{s∈S} u(φ(z), s) dH̃P(s|z)] dνH(z),
where νH is the marginal distribution of z. It follows that an optimal decision rule can be found by choosing, for each signal z, an action in argmaxx∈X ∫_{s∈S} u(x, s) dH̃P(s|z).
We know that {h̃P(·|z)}z∈Z is MLR-ordered because {h(·|s)}s∈S is MLR-ordered. By Theorem 2.2, for z′′ > z′, we have
argmaxx∈X ∫_{s∈S} u(x, s) h̃P(s|z′′) ds ≥ argmaxx∈X ∫_{s∈S} u(x, s) h̃P(s|z′) ds.
For each z, choose φ∗(z) = max [argmaxx∈X ∫_{s∈S} u(x, s) h̃P(s|z) ds].
Then φ∗ is optimal and increasing in z.
QED
Lehmann informativeness
Assume the prior is P and consider information structure H. If φH is the optimal decision rule, then the agent’s ex ante utility is
V(H, P) = ∫_S [∫_Z u(φH(z), s) dH(z|s)] dP(s) = ∫_{z∈Z} [∫_{s∈S} u(φH(z), s) dH̃P(s|z)] dνH(z),
where νH is the marginal distribution of z.
We wish to compare H with another information structure G = {G(·|s)}s∈S. The ex ante utility of G is
V(G, P) = ∫_S [∫_Z u(φG(z), s) dG(z|s)] dP(s),
where φG is the optimal decision rule for G.
When is V(H, P) ≥ V(G, P) for all P?
Lehmann informativeness
Given information structures H ≡ {H(·|s)}s∈S and G ≡ {G(·|s)}s∈S ,
define T (z, s) by H(T (z, s)|s) = G(z|s).
Definition: H is more informative than G (in the sense of Lehmann)
if T (z, ·) is increasing in s.
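A concrete sketch (an added illustration, assuming normal location experiments): if H(·|s) is N(s, 1) and G(·|s) is N(s, 4) (standard deviations 1 and 2), then H(T|s) = G(z|s) gives T(z, s) = s + (z − s)/2, which is increasing in s, so the more precise experiment H is more informative in Lehmann’s sense. The code computes T numerically and checks the monotonicity.

```python
import numpy as np
from scipy.stats import norm

# Assumed location experiments: H(.|s) = Normal(s, sd=1), G(.|s) = Normal(s, sd=2).
def T(z, s):
    """Solve H(T|s) = G(z|s) for T."""
    return norm.ppf(norm.cdf(z, loc=s, scale=2.0), loc=s, scale=1.0)

z = 0.7
for s in np.linspace(-2.0, 2.0, 5):
    print(round(float(s), 1), round(float(T(z, s)), 3))   # T(z, s) increases with s
```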
Lehmann informativeness
Theorem 2.4: (QS-07) Suppose that
(i) information structure H is more informative than G;
(ii) {u(·, s)}s∈S is an IDO family;
(iii) G = {g(·|s)}s∈S is MLR-ordered.
Then V(H, P) ≥ V(G, P).
Other decision criteria...
The risk of a decision rule φ : Z → X is given by
(∫_Z u(φ(z), s) dH(z|s))_{s∈S}.
The Bayesian (with prior P) chooses the rule that maximizes
∫_S [∫_Z u(φ(z), s) dH(z|s)] dP(s).
The minimax criterion chooses the rule that maximizes
min_{s∈S} ∫_Z u(φ(z), s) dH(z|s).
Complete Class Theorem
Definition: A rule φ is better than ψ if
∫_{z∈Z} u(φ(z), s) h(z|s) dz ≥ ∫_{z∈Z} u(ψ(z), s) h(z|s) dz for every s ∈ S.
Geometrically.... when S is finite, we can think of (∫_Z u(φ(z), s) dH(z|s))_{s∈S} as a point in R|S|. Each rule is associated with a point in R|S|.
[Figure: decision rules plotted as points in the (s1, s2) plane of state-by-state expected utilities.]
Complete Class Theorem
Definition: A subset C of decision rules forms an essentially complete class if for every rule ψ there is a rule φ in C such that φ is better than ψ.
Theorem 2.5: (QS-07) Let H be an MLR-ordered information structure and {u(·, s)}s∈S an IDO family. Then the increasing decision rules form an essentially complete class.
Notice that Theorem 2.5 implies Theorem 2.3: let ψ be any rule maximizing ex ante utility for a Bayesian; by Theorem 2.5 there is an increasing rule φ that is better than ψ, so φ must also maximize the Bayesian’s ex ante utility.
Complete Class Theorem
Application (adapted from Manski):
Medical Treatment A is the status quo with known recovery probability p̄A.
Treatment B is the new treatment with unknown recovery probability pB, taking values in the set S.
N subjects are randomly selected to receive Treatment B. The Planner observes the number z who are cured. The Planner’s decision (treatment) rule maps z to the proportion x of the population who will receive treatment B.
Fact: The distribution of z given pB is binomial and {h(·|pB)}pB∈S is MLR-ordered.
Statistical Decision Theory
Assume that the cost of treating a fraction x of the population with B (and the rest with A) is C(x).
Normalize the utility of a cure at 1 and that of no cure at 0. The Planner’s utility if fraction x of the population receives B (and the rest A) is
u(x, pB) = (1 − x) p̄A + x pB − C(x).
Notice that {u(·, pB)}pB∈S is an IDO family.
Conclusion:
By Theorem 2.5, the Planner can confine herself to rules where x increases with z.
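A numerical sketch of this application (an added illustration, assuming p̄A = 0.5, N = 20, a quadratic cost C(x) = 0.05x², and a uniform prior over a grid of values for pB): it computes the Bayesian-optimal treatment rule and confirms that the assigned proportion is increasing in the number cured z.

```python
import numpy as np
from scipy.stats import binom

# Assumed ingredients for illustration: known recovery probability of A,
# trial size N, quadratic cost, uniform prior over a grid of values for pB.
pA, N = 0.5, 20
pB_grid = np.linspace(0.05, 0.95, 19)
prior = np.full(len(pB_grid), 1.0 / len(pB_grid))
x_grid = np.linspace(0.0, 1.0, 101)            # proportion treated with B
cost = lambda x: 0.05 * x ** 2

def optimal_x(z):
    like = binom.pmf(z, N, pB_grid)             # binomial likelihood of z cures
    post = like * prior
    post = post / post.sum()                    # posterior over pB given z
    payoff = [np.sum(post * ((1 - x) * pA + x * pB_grid - cost(x))) for x in x_grid]
    return x_grid[int(np.argmax(payoff))]

for z in range(0, N + 1, 4):
    print(z, round(float(optimal_x(z)), 2))     # the assigned proportion rises with z
```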