Math Primer - Sites@Duke

Math Primer
October 15, 2014
1 Optimization
Often in this class we will be concerned with maximizing utility subject to some
budget constraint. The best way to do this is through the method of Lagrange
multipliers.
A simple general form is as follows: we would like to choose variables x, y
to maximize some function f(x, y). Now, if there were no constraints here, we
would simply solve max_{x,y} f(x, y).
However, suppose x, y must also satisfy some constraint g(x, y) = c. We
must account for this constraint in some way. The way to do this is through
the use of a Lagrange multiplier.
First, rewrite the constraint as g(x, y) − c = 0. Then rewrite the problem as

L = max_{x,y} f(x, y) + λ[g(x, y) − c]

λ is called the Lagrange multiplier. We can also subtract the multiplier term;
therefore we can equivalently rewrite the problem as

L = max_{x,y} f(x, y) + λ[c − g(x, y)]

From here we take derivatives with respect to our choice variables x and y
(and λ), and solve the problem normally.
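As a concrete illustration (this example is not from the text, just a minimal sketch), suppose we maximize f(x, y) = xy subject to x + y = 10. The first-order conditions can be set up and solved symbolically with sympy:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

# Hypothetical example: maximize f(x, y) = x*y subject to g(x, y) = x + y = 10
f = x * y
g = x + y
c = 10

# Lagrangian: L = f + lam*(g - c)
L = f + lam * (g - c)

# First-order conditions: set dL/dx, dL/dy, dL/dlam all to zero
focs = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(focs, (x, y, lam), dict=True)
print(sol)  # x = 5, y = 5 at the optimum
```

Note that differentiating with respect to λ simply recovers the constraint x + y = 10, which is why the multiplier is treated as an extra choice variable.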
1.1 Example
Now, in this class the function we will be looking to maximize is typically the
utility function. Our choice variables will typically be the levels of consumption
and leisure (equivalently labor) in each period. The constraint will typically be
a budget constraint.
So, let's consider the two-period model, where y_i = f(l_i). We seek to
maximize U(c1, c2, l1, l2). Our budget constraint is

f(l1) + f(l2)/(1+R) = c1 + c2/(1+R),¹

and individuals choose their levels of consumption and labor. Thus we solve the
problem:

max_{c1,c2,l1,l2} U(c1, c2, l1, l2) + λ[f(l1) + f(l2)/(1+R) − c1 − c2/(1+R)]
To solve this problem we would need to take derivatives with respect to
c1 , c2 , l1 , l2 (and λ) and solve.
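These derivatives can be taken symbolically; here is a minimal sympy sketch of the same Lagrangian, with U and f deliberately left as unspecified functions (no particular utility or production form is assumed):

```python
import sympy as sp

c1, c2, l1, l2, lam, R = sp.symbols('c1 c2 l1 l2 lam R')
U = sp.Function('U')(c1, c2, l1, l2)   # utility, left unspecified
f = sp.Function('f')                    # production function, y_i = f(l_i)

# Lagrangian for the two-period problem
L = U + lam * (f(l1) + f(l2)/(1 + R) - c1 - c2/(1 + R))

# First-order conditions with respect to c1, c2, l1, l2, and lam
focs = {v: sp.diff(L, v) for v in (c1, c2, l1, l2, lam)}

# The c1 condition is U_c1 - lam = 0 and the c2 condition is
# U_c2 - lam/(1+R) = 0; combining them gives U_c1 = (1+R)*U_c2.
print(focs[c1])
print(focs[c2])
```

The λ condition simply reproduces the budget constraint, mirroring the general recipe from Section 1.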
2 Partial Derivatives
To solve some of the problems outlined above, we need to use partial derivatives.
Consider a function of two variables, f(x, y). To take the partial derivative with
respect to x, merely consider y as a constant and differentiate as you normally
would. Similarly for y: consider x a constant and differentiate with respect to y.
• f(x, y) = x² + xy² =⇒ ∂f/∂x = 2x + y²
• =⇒ ∂²f/∂x∂y = 2y
• g(x, y, z) = xz + ln(y) − z³ =⇒ ∂g/∂z = x − 3z²
• =⇒ ∂²g/∂z² = −6z
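The bullet points above can be checked mechanically with sympy, which applies the same "hold the other variables fixed" rule:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# First example from the text: f(x, y) = x**2 + x*y**2
f = x**2 + x*y**2
assert sp.diff(f, x) == 2*x + y**2      # ∂f/∂x: treat y as a constant
assert sp.diff(f, x, y) == 2*y          # ∂²f/∂x∂y

# Second example: g(x, y, z) = x*z + ln(y) - z**3
g = x*z + sp.log(y) - z**3
assert sp.diff(g, z) == x - 3*z**2      # ∂g/∂z
assert sp.diff(g, z, 2) == -6*z         # ∂²g/∂z²

# The order of differentiation in a mixed partial does not matter
assert sp.diff(f, x, y) == sp.diff(f, y, x)
```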
One helpful thing to recall is that, with second derivatives, ∂²f/∂x∂y = ∂²f/∂y∂x, so
the order in which you take derivatives does not matter.

¹ Assume here that b_0 = 0 = b_2 and P_1 = P = P_2.
3 Chain Rule
Suppose we have h(x) = g(f(x)) and we would like to take the derivative of h
with respect to x. To do this we must use the chain rule. The formal definition of
the chain rule here is

dh/dx = f′(x)g′(f(x)).

Basically, we are taking the derivative of f with respect to x and multiplying it by
the derivative of g with respect to f(x). The best way to understand this is through
examples.
3.1 Examples
3.1.1 Example 1
Consider h(x) = ln(3x). Here f(x) = 3x and g(z) = ln(z).
So, f′(x) = 3 and g′(z) = 1/z. As mentioned, the chain rule says that h′(x) =
f′(x)g′(f(x)). Plugging in what we have gives us h′(x) = (3)(1/(3x)) = 1/x.

3.1.2 Example 2
Consider h(x, y) = (sin x + cos y)³. Here g(z) = z³ and f(x, y) = sin x + cos y.
Suppose I care about ∂h/∂x. Then what we are after is²: f_x(x, y)g′(f(x, y)). Note
that g is still only a function of one term, f, even though f is now a function of
two arguments.

So, g′(z) = 3z². Meanwhile, f_x(x, y) = cos x. So f_x(x, y)g′(f(x, y)) =
(cos x)(3(sin x + cos y)²).
Thus ∂h/∂x = (3 cos x)(sin x + cos y)².

3.1.3 Example 3
The chain rule works recursively. Suppose we have j(x) = h(g(f(x))). Then

dj/dx = f′(x)g′(f(x))h′(g(f(x)))

For instance, let f(x) = sin x, g(y) = y², and h(z) = ln(z).
Then j(x) = ln((sin x)²), with
f′(x) = cos x, g′(y) = 2y, and h′(z) = 1/z.
² Note f_x(x, y) = ∂f/∂x.
So, j′(x) = (cos x)(2 sin x)(1/(sin x)²) = 2 cos x / sin x = 2 cot x.
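All three chain-rule examples can be verified symbolically; sympy differentiates the composite functions directly, so its answers should match the hand-computed ones:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Example 1: h(x) = ln(3x)  =>  h'(x) = 1/x
h1 = sp.log(3*x)
assert sp.simplify(sp.diff(h1, x) - 1/x) == 0

# Example 2: h(x, y) = (sin x + cos y)**3  =>  dh/dx = (3 cos x)(sin x + cos y)**2
h2 = (sp.sin(x) + sp.cos(y))**3
assert sp.simplify(sp.diff(h2, x) - 3*sp.cos(x)*(sp.sin(x) + sp.cos(y))**2) == 0

# Example 3: j(x) = ln((sin x)**2)  =>  j'(x) = 2 cot x
j = sp.log(sp.sin(x)**2)
assert sp.simplify(sp.diff(j, x) - 2*sp.cot(x)) == 0
```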