Natalia Lazzati
Mathematics for Economics (Part I)

Note 9: Parametric Optimization - Envelope Theorem and Comparative Statics

Note 9 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 8) and Simon and Blume (1994, Ch. 19).

1 Parameterized Optimization in Economics

Most economic analysis considers whole families of optimization problems rather than a single one in isolation. Consider a simple model with one choice variable and a single parameter:

$$\max_x \{\ln x : x^2 \leq \alpha\} \tag{1}$$

where $\alpha \in \mathbb{R}_{++}$ and $x \in \mathbb{R}_{++}$. For any specification of $\alpha$, this is a standard nonlinear programming problem; in particular, since the constraint set entails an inequality, it is a Kuhn-Tucker problem. Problem (1) differs from the ones in Note 8 in that now both the maximizer $x^*$ and the maximum value of the objective function $f(x^*) = \ln x^*$ depend on the value of $\alpha$.

For this parametrized family of optimization problems two questions are of fundamental relevance: How does the maximum value of the objective function depend on $\alpha$? And how does the maximizer depend on $\alpha$? We start by answering them for problem (1), and then extend the idea to more general cases.

For any $\alpha \in \mathbb{R}_{++}$ and $x \in \mathbb{R}_{++}$, all the points in the feasible (or constraint) set satisfy the NDCQ. The Lagrangian function is

$$\mathcal{L}(x, \lambda; \alpha) = \ln x - \lambda (x^2 - \alpha)$$

where we separate the choice variables $x$ and $\lambda$ from the parameter $\alpha$ by a semicolon. Since $f$ is $C^1$ and nonstationary (see Corollary 3 in Note 8), the necessary first order conditions are

$$\frac{\partial \mathcal{L}(x, \lambda; \alpha)}{\partial x} = \frac{1}{x} - 2\lambda x = 0, \qquad \lambda > 0, \qquad x^2 - \alpha = 0.$$

Then $x^* = +\sqrt{\alpha}$, $\lambda^* = 1/(2\alpha)$ and $f(x^*) = (1/2)\ln \alpha$. As we anticipated, all these functions depend on $\alpha$. Moreover,

$$\frac{\partial x^*(\alpha)}{\partial \alpha} = \frac{1}{2}\alpha^{-1/2} > 0 \quad \text{and} \quad \frac{\partial f[x^*(\alpha)]}{\partial \alpha} = \frac{1}{2}\alpha^{-1} > 0 \quad \text{for all } \alpha \in \mathbb{R}_{++}.$$

These are the kinds of statements economists care about. We next provide a general framework to address these issues.

Let $x = (x_1, x_2, \dots, x_n)$ be the choice variables and $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_l)$ be a vector of all the parameters appearing in our model.
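As an aside, the closed forms derived for problem (1) are easy to verify numerically. The following sketch (plain Python; the grid search and step size are illustrative choices, not part of the notes) checks the maximizer by brute force and the envelope prediction $\partial V/\partial \alpha = \lambda^*(\alpha)$ by finite differences:

```python
import math

# Problem (1): max_x { ln x : x^2 <= alpha }.  The constraint binds, so the
# closed forms derived above are x*(a) = sqrt(a), lambda*(a) = 1/(2a) and
# V(a) = (1/2) ln a.
def x_star(a):   return math.sqrt(a)
def lam_star(a): return 1.0 / (2.0 * a)
def V(a):        return 0.5 * math.log(a)

a, h = 2.0, 1e-6
# Envelope prediction: dV/da equals the multiplier lambda*(a).
dV = (V(a + h) - V(a - h)) / (2 * h)
assert abs(dV - lam_star(a)) < 1e-8

# Brute-force maximization of ln x over a fine feasible grid agrees with x*.
grid = [k / 1000 for k in range(1, 3000) if (k / 1000) ** 2 <= a]
assert abs(max(grid, key=math.log) - x_star(a)) < 1e-3
```

Since $\ln x$ is increasing, the grid maximizer is simply the largest feasible grid point, which sits within one grid step of $\sqrt{\alpha}$.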
We will allow both the objective function and the constraint set to depend on $\alpha$, and will write them as $f(x; \alpha)$ and $U(\alpha)$ respectively. The parameters must belong to some set $\Theta$ that we name the admissible parameter set. We consider the general family of problems

$$\max_x \{f(x; \alpha) : x \in U(\alpha)\} \tag{F.P}$$

where $\alpha \in \Theta \subseteq \mathbb{R}^l$ and $x \in X \subseteq \mathbb{R}^n$. We assume both $\Theta$ and $X$ are open sets.

The set of maximizers is described by the decision rule

$$S(\alpha) = \arg\max_x \{f(x; \alpha) : x \in U(\alpha)\}.$$

That is, $S(\alpha)$ is the set of elements of $X$ that are optimal solutions to problem (F.P) for a given $\alpha \in \Theta$. If the solution to (F.P) is unique for each value of $\alpha$, then the decision rule becomes a function, and we write $x = x(\alpha)$. The payoff accruing to an optimizing agent is given by the (maximum) value function $V : \Theta \to \mathbb{R}$, defined by

$$V(\alpha) = \max_x \{f(x; \alpha) : x \in U(\alpha)\} = f(x^*; \alpha), \quad \text{where } x^* \in S(\alpha).$$

As we noticed before, in most economic applications we are interested in how the maximum value function is affected by changes in the parameters, and in the comparative statics of the decision rule $S(\alpha)$. That is, we would like to know how the maximum payoff and the behavior of the agent vary in response to changes in the environment (e.g. the prices the agent faces, income, etc.). The next section answers the first question, and the last one focuses on the second.

2 The Envelope Theorems

This section considers three versions of problem (F.P) that differ with respect to the constraint set, and discusses the corresponding theorems that study the effects of changes in $\alpha$ on the maximum value function $V(\alpha)$. Such theorems are called Envelope Theorems. We start with the Envelope Theorem for unconstrained optimization problems.

Theorem 1. (Envelope Theorem for unconstrained optimization) Consider problem (F.P) with $U(\alpha) = \mathbb{R}^n$, and assume $f(x; \alpha)$ is a $C^1$ function. Let $x(\alpha)$ be a solution to this problem, and suppose $x(\alpha)$ is a $C^1$ function of $\alpha$. Then

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha] \quad \text{for } i = 1, \dots, l.$$

Proof.
By the Chain Rule,

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \sum_{j=1}^n \frac{\partial f}{\partial x_j}[x(\alpha); \alpha] \frac{\partial x_j}{\partial \alpha_i}(\alpha) + \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha] = \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha]$$

since $\partial f/\partial x_j [x(\alpha); \alpha] = 0$ for $j = 1, 2, \dots, n$ by the usual first order conditions.

In problem (F.P) each parameter $\alpha_i$ affects the maximum value function in two ways. In a direct way, $\alpha_i$ affects $V(\alpha)$ because it is part of the objective function $f(x; \alpha)$. Indirectly, $\alpha_i$ affects $V(\alpha)$ through its effect on the choice variables $x(\alpha)$. The Envelope Theorem states that to assess $\partial V/\partial \alpha_i$ we only need to consider the direct effect of $\alpha_i$ on $f(x; \alpha)$, namely $\partial f/\partial \alpha_i$, and evaluate this partial derivative at the optimal $x(\alpha)$!

The next example applies this theorem to a standard firm problem.

Example 1. A competitive firm sells its product at a unit price $\theta p$, where $\theta > 0$ is a positive parameter that captures the strength of demand. Assume the cost of producing $y$ units is $C(y)$, with $C(0) = 0$, $C'(y) > 0$ and $C''(y) > 0$. The firm's profit function is

$$\pi(\theta) = \max_y \{\theta p y - C(y)\}.$$

The conditions on the cost function guarantee that there is a nonzero profit-maximizing output $y(\theta)$ which depends smoothly on $\theta$. By the Envelope Theorem for unconstrained optimization problems we get

$$\frac{\partial \pi(\theta)}{\partial \theta} = p\, y(\theta) > 0.$$

As expected, the firm makes more profit when demand is stronger. ▲

Let us extend the analysis to constrained optimization problems. We start with the Lagrange problem, and end the analysis with the Kuhn-Tucker problem.

Theorem 2. (Envelope Theorem for the Lagrange problem) Consider problem (F.P) with $U(\alpha) = \{x \in \mathbb{R}^n : h_1(x; \alpha) = 0, \dots, h_m(x; \alpha) = 0\}$, and assume $f(x; \alpha)$ is a $C^1$ function. Let $x(\alpha)$ be a solution to this problem. Suppose $x(\alpha)$ and the Lagrange multipliers $\lambda(\alpha)$ are $C^1$ functions of $\alpha$ and that the NDCQ holds. Then

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \frac{\partial \mathcal{L}}{\partial \alpha_i}[x(\alpha), \lambda(\alpha); \alpha] \quad \text{for } i = 1, \dots, l,$$

where $\mathcal{L}$ is the natural Lagrangian function for this problem.

Proof.
Form the Lagrangian for the maximization problem

$$\mathcal{L}(x, \lambda; \alpha) = f(x; \alpha) - \sum_{t=1}^m \lambda_t h_t(x; \alpha). \tag{2}$$

From the first order conditions we know that

$$\frac{\partial f}{\partial x_j}[x(\alpha); \alpha] - \sum_{t=1}^m \lambda_t(\alpha) \frac{\partial h_t}{\partial x_j}[x(\alpha); \alpha] = 0 \quad \text{for } j = 1, \dots, n \tag{3}$$

$$h_t[x(\alpha); \alpha] = 0 \quad \text{for } t = 1, \dots, m. \tag{4}$$

The partial derivative of the Lagrangian with respect to $\alpha_i$ at $[x(\alpha), \lambda(\alpha); \alpha]$ is

$$\frac{\partial \mathcal{L}}{\partial \alpha_i}[x(\alpha), \lambda(\alpha); \alpha] = \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha] - \sum_{t=1}^m \lambda_t(\alpha) \frac{\partial h_t}{\partial \alpha_i}[x(\alpha); \alpha]. \tag{5}$$

We need to show that the derivative of the maximum value function with respect to $\alpha_i$ equals this last expression. Let us then differentiate $V(\alpha)$ with respect to $\alpha_i$:

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \sum_{j=1}^n \frac{\partial f}{\partial x_j}[x(\alpha); \alpha] \frac{\partial x_j}{\partial \alpha_i}(\alpha) + \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha]. \tag{6}$$

Next go back to the first order conditions and rearrange terms:

$$\frac{\partial f}{\partial x_j}[x(\alpha); \alpha] = \sum_{t=1}^m \lambda_t(\alpha) \frac{\partial h_t}{\partial x_j}[x(\alpha); \alpha] \quad \text{for } j = 1, \dots, n. \tag{7}$$

Substituting (7) into (6) and rearranging terms we get

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \sum_{t=1}^m \lambda_t(\alpha) \sum_{j=1}^n \frac{\partial h_t}{\partial x_j}[x(\alpha); \alpha] \frac{\partial x_j}{\partial \alpha_i}(\alpha) + \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha]. \tag{8}$$

The last trick is to go back to the first order conditions and differentiate (4) with respect to $\alpha_i$:

$$\sum_{j=1}^n \frac{\partial h_t}{\partial x_j}[x(\alpha); \alpha] \frac{\partial x_j}{\partial \alpha_i}(\alpha) + \frac{\partial h_t}{\partial \alpha_i}[x(\alpha); \alpha] = 0 \quad \text{for } t = 1, \dots, m. \tag{9}$$

Substituting (9) into (8) we get

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = -\sum_{t=1}^m \lambda_t(\alpha) \frac{\partial h_t}{\partial \alpha_i}[x(\alpha); \alpha] + \frac{\partial f}{\partial \alpha_i}[x(\alpha); \alpha],$$

which is identical to expression (5).

The last theorem is extremely important in economics. It allows us, for instance, to provide a nice interpretation of the Lagrange multipliers.

Example 2. Consider a simple version of the Lagrange problem

$$\max_x \{f(x) : x \in U\} \tag{10}$$

where $U = \{x \in \mathbb{R}^n : h(x) = 0\}$. Now suppose that we perturb the constraint by an amount $\alpha \in \Theta \subseteq \mathbb{R}$ (with $0 \in \Theta$) as follows:

$$\max_x \{f(x) : h(x) = \alpha\}. \tag{11}$$

Notice that (11) reduces to (10) when $\alpha = 0$. Assume the conditions of the Envelope Theorem hold. The Lagrangian is given by

$$\mathcal{L}(x, \lambda; \alpha) = f(x) - \lambda[h(x) - \alpha].$$

Therefore,

$$\frac{\partial V(\alpha)}{\partial \alpha} = \lambda(\alpha).$$

Then $\lambda(\alpha)$ measures the change in the optimal value function with respect to the parameter $\alpha$; that is, how much the maximum value function increases if we relax the constraint by one unit.
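The shadow-price interpretation can be illustrated numerically. The instance below (maximize $x_1 + x_2$ subject to $x_1^2 + x_2^2 = \alpha$) is a hypothetical example chosen because it solves in closed form: by symmetry $x_1^* = x_2^* = \sqrt{\alpha/2}$, so $V(\alpha) = \sqrt{2\alpha}$, and the FOC $1 = 2\lambda x_1^*$ gives $\lambda(\alpha) = 1/\sqrt{2\alpha}$:

```python
import math

# Hypothetical instance of Example 2:  max x1 + x2  s.t.  x1^2 + x2^2 = a.
# Closed forms: x1* = x2* = sqrt(a/2), V(a) = sqrt(2a), lambda(a) = 1/sqrt(2a).
def V(a):   return math.sqrt(2 * a)
def lam(a): return 1.0 / math.sqrt(2 * a)

a, h = 4.0, 1e-6
# A finite-difference derivative of V recovers the multiplier:
dV = (V(a + h) - V(a - h)) / (2 * h)
assert abs(dV - lam(a)) < 1e-8   # dV/da = lambda(a), as the theorem states
```

Relaxing the constraint by one (small) unit raises the maximum value by approximately $\lambda(\alpha)$ units, which is exactly what the finite difference measures.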
▲

Theorem 3. (Envelope Theorem for the Kuhn-Tucker problem) Consider problem (F.P) with $U(\alpha) = \{x \in X : g_1(x; \alpha) \leq 0, \dots, g_k(x; \alpha) \leq 0\}$, and assume $f(x; \alpha)$ is a $C^1$ function. Let $x(\alpha)$ be a solution to this problem. Suppose $x(\alpha)$ and the multipliers $\lambda(\alpha)$ are $C^1$ functions of $\alpha$ and that the NDCQ holds. Then

$$\frac{\partial V(\alpha)}{\partial \alpha_i} = \frac{\partial \mathcal{L}}{\partial \alpha_i}[x(\alpha), \lambda(\alpha); \alpha] \quad \text{for } i = 1, \dots, l,$$

where $\mathcal{L}$ is the natural Lagrangian function for this problem.

Proof. (Prove this result.)

The next two examples end this section.

Example 3. Consider the problem

$$\max_{x \in \mathbb{R}^2_{++}} \left\{ x_1^{1/2} x_2^{1/2} : \alpha_1 x_1 + \alpha_2 x_2 - \alpha_3 \leq 0 \right\}$$

where $\alpha \in \mathbb{R}^3_{++}$. (Find the solution function, multiplier function and maximum value function. Differentiate these expressions with respect to the three parameters, and compare the results with the claim of the Envelope Theorem for the Kuhn-Tucker problem.) ▲

Example 4. (Roy's Identity) Suppose the conditions of Theorem 3 hold in the problem

$$\max_x \left\{ U(x) : \sum_{i=1}^n p_i x_i \leq m \right\}$$

where $(p, m) \in \mathbb{R}^{n+1}_{++}$ and $x \in \mathbb{R}^n_{++}$. Let $U(x)$ be strictly increasing. [Use the last Envelope Theorem to prove that

$$x_i(p, m) = -\frac{\partial V / \partial p_i}{\partial V / \partial m} \quad \text{for } i = 1, \dots, n.]$$

▲

3 Smooth Dependence of the Maximizers on $\alpha$

All the results in the previous section relied on a basic hypothesis: the smooth dependence of the maximizers on the parameters of the problem. In this section we look at this hypothesis more carefully.

Let us first consider the unconstrained problem of Theorem 1

$$\max_x \{f(x; \alpha) : x \in \mathbb{R}^n\} \tag{12}$$

where $\alpha \in \Theta \subseteq \mathbb{R}^l$ and $\Theta$ is an open set. Let $f$ be a $C^2$ function. Since we are assuming that a maximizer $x(\alpha)$ exists, $x(\alpha)$ is a solution to the typical first-order conditions

$$\frac{\partial f}{\partial x_1}(x; \alpha) = 0, \quad \dots, \quad \frac{\partial f}{\partial x_n}(x; \alpha) = 0.$$
By the IFT, we can solve these $n$ equations for the $n$ unknowns $x_1, \dots, x_n$ as $C^1$ functions of the exogenous parameters $\alpha$, provided that the Jacobian of these equations with respect to the endogenous variables $x_1, \dots, x_n$ is non-singular at $[x(\alpha); \alpha]$. But the Jacobian of the first-order partial derivatives $\partial f/\partial x_i$ is simply the Hessian of $f$ at $[x(\alpha); \alpha]$:

$$D^2 f[x(\alpha); \alpha] = \begin{pmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\
\frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2}
\end{pmatrix}. \tag{13}$$

The Hessian matrix evaluated at $[x(\alpha); \alpha]$ is generally non-singular. Moreover, since (12) is a maximization problem, its determinant has the same sign as $(-1)^n$ by the usual second order sufficient conditions. This means that we can replace the hypothesis in Theorem 1 that $x(\alpha)$ is a $C^1$ function of $\alpha$ by the hypothesis that $x(\alpha)$ is a non-degenerate critical point of $f$, in the sense that the Hessian matrix (13) of $f$ is non-singular at $[x(\alpha); \alpha]$. The next theorem captures this observation.

Theorem 4. (Smoothness of $x(\alpha)$ on $\alpha$ in the unconstrained problem) Consider problem (F.P) with $U(\alpha) = \mathbb{R}^n$. Let $x(\alpha)$ be the solution of the parametrized maximization problem. Fix the value of the parameters at $\alpha = \alpha_0$. If the Hessian matrix (13) is non-singular at the point $[x(\alpha_0); \alpha_0]$, then $x(\alpha)$ is a $C^1$ function of $\alpha$ at $\alpha = \alpha_0$.

A similar analysis works for constrained problems. Consider the parametrized constrained maximization problem of Theorem 2

$$\max_x \{f(x; \alpha) : x \in U(\alpha)\}$$

where $U(\alpha) = \{x \in \mathbb{R}^n : h_1(x; \alpha) = 0, \dots, h_m(x; \alpha) = 0\}$, $\alpha \in \Theta \subseteq \mathbb{R}^l$ and we assume that $\Theta$ is an open set. Let $f$ be a $C^2$ function. Assume the NDCQ holds at $[x(\alpha); \alpha]$, that is,

$$\operatorname{rank} \begin{pmatrix}
\frac{\partial h_1}{\partial x_1}[x(\alpha); \alpha] & \cdots & \frac{\partial h_1}{\partial x_n}[x(\alpha); \alpha] \\
\vdots & \ddots & \vdots \\
\frac{\partial h_m}{\partial x_1}[x(\alpha); \alpha] & \cdots & \frac{\partial h_m}{\partial x_n}[x(\alpha); \alpha]
\end{pmatrix} = m. \tag{14}$$
Let us write the corresponding Lagrangian function

$$\mathcal{L}(x, \lambda; \alpha) = f(x; \alpha) - \lambda_1 h_1(x; \alpha) - \dots - \lambda_m h_m(x; \alpha).$$

The constrained maximizer $x(\alpha)$ must satisfy the first order conditions

$$\frac{\partial \mathcal{L}}{\partial x_1}(x, \lambda; \alpha) = 0, \quad \dots, \quad \frac{\partial \mathcal{L}}{\partial x_n}(x, \lambda; \alpha) = 0$$

$$\frac{\partial \mathcal{L}}{\partial \lambda_1}(x, \lambda; \alpha) = 0, \quad \dots, \quad \frac{\partial \mathcal{L}}{\partial \lambda_m}(x, \lambda; \alpha) = 0.$$

This is a system of $n + m$ equations in the $n + m$ unknowns $x_1, \dots, x_n, \lambda_1, \dots, \lambda_m$. Once again, we call on the IFT to provide conditions that guarantee that $x(\alpha)$ and $\lambda(\alpha)$ depend smoothly on the exogenous parameters $\alpha$. The Jacobian of these equations with respect to the endogenous variables must be an $(n+m) \times (n+m)$ non-singular matrix at $[x(\alpha), \lambda(\alpha); \alpha]$. This Jacobian is simply the Hessian of the Lagrangian

$$D^2 \mathcal{L}[x(\alpha), \lambda(\alpha); \alpha] = \begin{pmatrix}
\frac{\partial^2 \mathcal{L}}{\partial x_1^2} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_1 \partial x_n} & \frac{\partial^2 \mathcal{L}}{\partial x_1 \partial \lambda_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_1 \partial \lambda_m} \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
\frac{\partial^2 \mathcal{L}}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_n^2} & \frac{\partial^2 \mathcal{L}}{\partial x_n \partial \lambda_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial x_n \partial \lambda_m} \\
\frac{\partial^2 \mathcal{L}}{\partial \lambda_1 \partial x_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial \lambda_1 \partial x_n} & \frac{\partial^2 \mathcal{L}}{\partial \lambda_1^2} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial \lambda_1 \partial \lambda_m} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 \mathcal{L}}{\partial \lambda_m \partial x_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial \lambda_m \partial x_n} & \frac{\partial^2 \mathcal{L}}{\partial \lambda_m \partial \lambda_1} & \cdots & \frac{\partial^2 \mathcal{L}}{\partial \lambda_m^2}
\end{pmatrix} \tag{15}$$

evaluated at $[x(\alpha), \lambda(\alpha); \alpha]$. This is the matrix we used before to check the second order sufficient conditions for a constrained maximum. The Hessian usually has non-zero determinant at $[x(\alpha), \lambda(\alpha); \alpha]$; in fact, for a non-degenerate constrained maximization problem, its determinant has the same sign as $(-1)^n$. As a consequence, we can replace the conditions of Theorem 2 that $x(\alpha)$ and $\lambda(\alpha)$ are $C^1$ functions of the parameters $\alpha$ and the requirement of the NDCQ by the non-degenerate second order condition for constrained problems, namely that the Hessian of the Lagrangian has a nonzero determinant at $[x(\alpha), \lambda(\alpha); \alpha]$. (It can be shown that the NDCQ is a necessary condition for the latter to hold.)

Theorem 5.
(Smoothness of $x(\alpha)$ and $\lambda(\alpha)$ on $\alpha$ in the Lagrange problem) Consider problem (F.P) with $U(\alpha) = \{x \in \mathbb{R}^n : h_1(x; \alpha) = 0, \dots, h_m(x; \alpha) = 0\}$. Let $x(\alpha)$ be the solution of the parametrized maximization problem, and let $\lambda(\alpha)$ be the corresponding Lagrange multipliers. Fix the value of the parameters at $\alpha = \alpha_0$. If the Hessian matrix (15) is non-singular at the point $[x(\alpha_0), \lambda(\alpha_0); \alpha_0]$, then

(a) $x(\alpha)$ and $\lambda(\alpha)$ are $C^1$ functions of $\alpha$ at $\alpha = \alpha_0$, and

(b) the NDCQ holds at $[x(\alpha_0); \alpha_0]$.

Remark 1. The extension of the previous result to the K-T problem follows immediately.

We next extend Example 4 in Note 3.

Example 5. (Comparative Statics) Let us consider a firm that produces a good $y$ by using a single input $x$. The firm sells the output and acquires the input in competitive markets: the market price of $y$ is $p$, and the cost of each unit of $x$ is $w$. Its technology is given by $f : \mathbb{R}_+ \to \mathbb{R}_+$, where $f$ is a $C^2$ function with $f' > 0$ and $f'' < 0$. Its profits are given by

$$\widetilde{\pi}(x; p, w) = p f(x) - w x. \tag{16}$$

The firm selects the input level $x$ in order to maximize profits. We would like to know how its choice of $x$ is affected by a change in $w$.

Assuming an interior solution, the first-order condition of this optimization problem is

$$\frac{\partial \widetilde{\pi}}{\partial x}(x; p, w) = p f'(x) - w = 0 \tag{17}$$

for some $x = x^*$. We know that if such an $x^*$ exists, it is a global maximum and it is unique. (Why?) Notice that here

$$\frac{\partial^2 \widetilde{\pi}}{\partial x^2}(x^*; p, w) = p f''(x^*) < 0. \tag{18}$$

So the conditions of Theorem 4 are satisfied. It follows that if there is an $x^* = x^*(p, w)$ that satisfies (17), it is differentiable. Moreover, by the IFT we get

$$\frac{\partial x^*}{\partial p}(p, w) = -\frac{f'(x^*)}{p f''(x^*)} > 0 \quad \text{and} \quad \frac{\partial x^*}{\partial w}(p, w) = \frac{1}{p f''(x^*)} < 0. \tag{19}$$

We conclude that if the price of the output increases, then the firm will acquire more of the input; the opposite holds if the price of the input increases. ▲
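The comparative statics of Example 5 can be checked on a concrete technology. The production function $f(x) = \sqrt{x}$ below is a hypothetical choice (it satisfies $f' > 0$, $f'' < 0$); for it, the FOC $p f'(x) = w$ gives the input demand $x^*(p, w) = (p/2w)^2$ in closed form, so the IFT formulas in (19) can be compared against finite differences:

```python
# Hypothetical technology for Example 5: f(x) = sqrt(x), so f'(x) = 1/(2 sqrt(x))
# and f''(x) = -1/(4 x^(3/2)).  The FOC p f'(x) = w yields x*(p, w) = (p/(2w))^2.
def x_star(p, w):
    return (p / (2 * w)) ** 2

p, w, h = 4.0, 1.0, 1e-6
dx_dp = (x_star(p + h, w) - x_star(p - h, w)) / (2 * h)
dx_dw = (x_star(p, w + h) - x_star(p, w - h)) / (2 * h)

x = x_star(p, w)                 # interior optimum
fp  = 0.5 / x ** 0.5             # f'(x*)
fpp = -0.25 / x ** 1.5           # f''(x*)
# IFT predictions from (19): dx*/dp = -f'/(p f'') > 0, dx*/dw = 1/(p f'') < 0.
assert abs(dx_dp - (-fp / (p * fpp))) < 1e-4 and dx_dp > 0
assert abs(dx_dw - 1.0 / (p * fpp)) < 1e-4 and dx_dw < 0
```

Both signs come out as the text predicts: input demand rises with the output price and falls with the input price.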