
L2a Back to the basics:
Terms and techniques
Rev 12/11/2016
“Reeling and Writhing, of course, to begin with,” the Mock Turtle replied; “and then the different branches of Arithmetic--Ambition, Distraction, Uglification, and Derision.”
“I never heard of ‘Uglification,’” Alice ventured to say. “What is it?”
The Gryphon lifted up both its paws in surprise. “What! Never heard of uglifying!” it exclaimed. “You know what to beautify is, I suppose?”
“Yes,” said Alice doubtfully: “it means--to--make--anything--prettier.”
“Well, then,” the Gryphon went on, “if you don’t know what to uglify is, you are a simpleton.”
This lab is a veritable cornucopia of terminology and some basic ideas regarding partial
derivatives and PDEs. You will find considerable overlap here with Multi-Variable Calculus.
But the price of admission is: Review your notes for solutions to ODEs, especially the
LINEAR and the EXACT ODEs.
A scalar-valued function has a vector derivative

Let $f(x, y) = axy$ for a nonzero constant $a$. Then $\frac{\partial f}{\partial x} = ay$ and $\frac{\partial f}{\partial y} = ax$. Thus the derivative of $f(x, y)$, a scalar-valued function, is a vector that can be represented in the form

$$\left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y} \right) f(x, y) = \langle f_x(x, y),\ f_y(x, y) \rangle .$$

In the example, that's just $a\langle y, x \rangle$.

Yes, you are in Wonderland! Scalar-valued functions have vector derivatives.
But wait, there's more: if $f(x, y)$ were a vector-valued function, say $f(x, y) = \langle ax, by \rangle$, then we would need a matrix to represent all four first partial derivatives, and there's a name for it: the Jacobian matrix

$$J = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix},$$

named for Carl Gustav Jacob Jacobi (famous for having 'Jacob' in his name twice; but more about him below). Can you determine how we obtained the elements in this Jacobian?
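If you want to check these by machine, here is a minimal Wolfram Language sketch using the built-ins Grad and D on the example functions above:

  (* gradient of the scalar function f(x, y) = a x y *)
  Grad[a x y, {x, y}]
  (* -> {a y, a x} *)

  (* Jacobian of the vector-valued function <a x, b y> *)
  D[{a x, b y}, {{x, y}}]
  (* -> {{a, 0}, {0, b}} *)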
We can take higher-order and mixed partials of scalar functions. Let $f(x, y) = axy$:

$$\frac{\partial^2 f}{\partial x^2} = 0 \quad \text{and} \quad \frac{\partial^2 f}{\partial y^2} = 0, \qquad \frac{\partial^2 f}{\partial x \, \partial y} = \frac{\partial^2 f}{\partial y \, \partial x} = a.$$

And hence, a matrix for the second derivatives:

$$\begin{pmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \, \partial y} \\[6pt] \dfrac{\partial^2 f}{\partial y \, \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{pmatrix} = \begin{pmatrix} 0 & a \\ a & 0 \end{pmatrix}.$$

Another example:
Let $f(x, y) = y e^{2x}$. Then

$$\frac{\partial f}{\partial x} = f_x = 2y e^{2x} \quad \text{and} \quad \frac{\partial f}{\partial y} = f_y = e^{2x},$$

$$\frac{\partial^2 f}{\partial x^2} = f_{xx} = 4y e^{2x}, \qquad \frac{\partial^2 f}{\partial x \, \partial y} = f_{xy} = 2e^{2x} = f_{yx}, \qquad \frac{\partial^2 f}{\partial y^2} = f_{yy} = 0.$$
Which leads to a …

Theorem: If $f$ has continuous second-order derivatives, then $f_{xy} = f_{yx}$: the order of differentiation may be reversed without changing the result. (This is often called Clairaut's Theorem.)
More Examples: See http://www.math.umn.edu/~nykamp/m2374/readings/pderivex/
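A quick machine check of the example above, as a Wolfram Language sketch:

  f[x_, y_] := y Exp[2 x];
  {D[f[x, y], x, y], D[f[x, y], y, x]}
  (* -> {2 E^(2 x), 2 E^(2 x)}: the mixed partials agree *)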
Implicit function definitions

Sometimes it is easier to define a function of two or more variables implicitly, without an explicit statement such as $u(x, y) = ax^2 + by^2 + cx + dy + e$. It is of course possible to take ordinary derivatives implicitly (you knew that already); to be fair, whatever is good for ordinary derivatives must also be good for partials.
Example: Suppose we want the partial derivatives of $u$, where $x^2 + y^2 + u^2 = a^2$ for a constant $a$. Note that we could write this explicitly as two functions:

$$u = \pm\sqrt{a^2 - x^2 - y^2}, \quad \text{where } a^2 \ge x^2 + y^2 .$$

But why not just take the partial derivative implicitly and solve:

$$\frac{\partial u}{\partial x}: \quad 2x + 2u \frac{\partial u}{\partial x} = 0 \quad \text{or} \quad \frac{\partial u}{\partial x} = -\frac{x}{u}.$$

After all, in the partial derivative with respect to $x$, $y$ is constant.

Do the same to find $\dfrac{\partial u}{\partial y}$.
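Here is a minimal Wolfram Language sketch of the same computation; writing $u$ as a function u[x, y] makes D apply the chain rule for us:

  (* implicit partials of u on the sphere x^2 + y^2 + u^2 == a^2 *)
  eq = x^2 + y^2 + u[x, y]^2 == a^2;
  Solve[D[eq, x], D[u[x, y], x]]
  (* -> {{Derivative[1, 0][u][x, y] -> -(x/u[x, y])}} *)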
Partial Practice

Find the partial derivatives (you should do these by hand).

1. $u = \ln(x^2 + y^2)$; find $\dfrac{\partial u}{\partial x}$, $\dfrac{\partial u}{\partial y}$, $\dfrac{\partial^2 u}{\partial x \, \partial y}$.

2. Find $\dfrac{\partial}{\partial x}\left[\dfrac{\sin xy}{\cos y}\right]$.

3. If $u = x^2 + y^2$ and $x = r\cos\theta$, $y = r\sin\theta$ (which you should recognize as polar coordinates), find $\dfrac{\partial u}{\partial r}$, $\dfrac{\partial u}{\partial \theta}$.
Product rule with partial derivatives: what we're multiplying are functions!

Suppose $z = uv$, where $u = xy$ and $v = \cos(xy)$. Then:

$$\frac{\partial u}{\partial x} = y \quad \text{and} \quad \frac{\partial v}{\partial x} = -y\sin xy,$$

so that

$$\frac{\partial z}{\partial x} = \frac{\partial u}{\partial x}v + u\frac{\partial v}{\partial x} = y\cos xy - xy^2 \sin xy .$$

Find $\dfrac{\partial z}{\partial y}$.
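A one-line sanity check in Wolfram Language (a sketch; Simplify confirms the product-rule result above):

  Simplify[D[x y Cos[x y], x] == y Cos[x y] - x y^2 Sin[x y]]
  (* -> True *)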
The Chain Rule for Partial Derivatives

If $z = f(x, y)$ and $x = g(t)$, $y = h(t)$, then

$$\frac{dz}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}.$$

Notice how the differentials that would seem to cancel out don't, because $dx$ isn't the same as $\partial x$.

Example: $f(x, y) = \cos(x^2 + y^2)$, $x = t$, $y = \sqrt{t}$.

$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} = -2t\sin(t^2 + t) - \sin(t^2 + t).$$

Note the answer is in terms of $t$ only.
Important definition: The Total Derivative

Two-variable case:

$$\frac{df(x, y)}{dx} = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\frac{dy}{dx},$$

which can be written in differential form (known as the exact differential):

$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy .$$

Three variables:

$$\frac{df(x, y, t)}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt}.$$

And on and on …
Partial answers to partial practice above:

1. $u = \ln(x^2 + y^2)$:

$$\frac{\partial u}{\partial x} = \frac{2x}{x^2 + y^2}, \qquad \frac{\partial^2 u}{\partial x \, \partial y} = -\frac{4xy}{(x^2 + y^2)^2}$$

2. $\dfrac{\partial}{\partial x}\left[\dfrac{\sin xy}{\cos y}\right] = y\cos(xy)\sec(y)$

3. $u = x^2 + y^2$:

$$\frac{\partial u}{\partial r} = 2r\cos^2\theta + 2r\sin^2\theta = 2r$$
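If you want to confirm your by-hand work, a short Wolfram Language sketch covering the three practice problems:

  Simplify[D[Log[x^2 + y^2], x, y]]
  (* -> -((4 x y)/(x^2 + y^2)^2) *)

  Simplify[D[Sin[x y]/Cos[y], x]]
  (* -> y Cos[x y] Sec[y] *)

  Simplify[D[(r Cos[t])^2 + (r Sin[t])^2, r]]
  (* -> 2 r *)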
More Practice

Find the derivative of $f(x, y, z) = \dfrac{x^2 y}{z + yx}$ with respect to $x$, with respect to $y$, and with respect to $z$.
Some classy functions of more than one variable $z = f(x, y)$

What are the partial derivatives $\dfrac{\partial f}{\partial x}$, $\dfrac{\partial f}{\partial y}$, $\dfrac{\partial^2 f}{\partial x \, \partial y}$, $\dfrac{\partial^2 f}{\partial x^2}$, $\dfrac{\partial^2 f}{\partial y^2}$ for each of the following?

The "monkey saddle": $z = f(x, y) = x(x^2 - 4y^2)$
So named because there is a convenient place for the monkey's tail.

The "handkerchief": $z = \dfrac{x^3}{3} + xy^2 + 2(x^2 - y^2)$
Gently fluttering to the ground.
Want some curly fries with that?
$z = x\sin x + y\cos y$, $-2\pi < x, y < 2\pi$
These functions are a good opportunity to experiment with
Plot3D, perhaps with some Manipulate[ ] as well.
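For example, a minimal sketch (the plot ranges are my own choice):

  Plot3D[x (x^2 - 4 y^2), {x, -2, 2}, {y, -2, 2}, PlotLabel -> "monkey saddle"]
  Plot3D[x^3/3 + x y^2 + 2 (x^2 - y^2), {x, -4, 4}, {y, -4, 4}, PlotLabel -> "handkerchief"]
  Manipulate[
   Plot3D[x Sin[x] + y Cos[y], {x, -a, a}, {y, -a, a}],
   {a, Pi, 2 Pi}]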
For additional practice with functions of several variables (and some very useful plotting tricks), work through all examples in the Mathematica file Functions_Sev_Var.nb. Expand the cells marked with the downwards-pointing arrow on the right-hand side by double-clicking the arrow. Then execute each command line and study the results. Complete the exercises in the sections marked 'Work on'.
Here's some good news: some PDEs are easy to solve, especially those that can be solved just like ODEs.
So that means: review your notes on solutions to ODEs!
Examples

PDEs with only one partial derivative

$$\frac{\partial u}{\partial x} = 3x^2 + xy^2$$
Since $u_y$ doesn't show up, we can treat $y$ as a constant and just integrate with respect to $x$. You should obtain $u(x, y) = x^3 + x^2 y^2/2 + F$. What is $F$? Since it doesn't show up in the given equation for $u_x$, $F$ has to be something whose partial derivative with respect to $x$ is $0$; in other words, $F$ is a function of $y$ alone. It's like a constant of integration in an ODE, just constant with respect to $x$.
Get used to that idea: ‘constants’ that aren’t always constant!
Try some examples and verify that F(y) has no influence on the ability of our solution
u(x, y) to fit the given PDE.
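DSolve in Mathematica handles this kind of PDE directly and reports the arbitrary function of y as C[1][y]; a minimal sketch:

  DSolve[D[u[x, y], x] == 3 x^2 + x y^2, u[x, y], {x, y}]
  (* -> {{u[x, y] -> x^3 + (x^2 y^2)/2 + C[1][y]}} *)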
2nd order PDEs with only a single mixed derivative

$$\frac{\partial^2 u}{\partial x \, \partial y} = 5x + y^2$$

We can just integrate twice. Integrating with respect to $x$, we obtain

$$\frac{\partial u}{\partial y} = 5\frac{x^2}{2} + xy^2 + F(y).$$

Integrating with respect to $y$, we obtain

$$u(x, y) = 5\frac{x^2}{2}y + x\frac{y^3}{3} + F(y) + G(x).$$

Verify that this works!
Why didn’t we have to integrate F(y)dy? It’s an arbitrary function, to be determined by
the initial and boundary conditions on the PDE. One arbitrary function is as good as any
other.
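Verification by machine, as a Wolfram Language sketch (F and G stay arbitrary):

  u[x_, y_] := 5 x^2 y/2 + x y^3/3 + F[y] + G[x];
  Simplify[D[u[x, y], x, y] == 5 x + y^2]
  (* -> True *)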
This PDE looks like an ODE, so maybe it is

$u_{xx} - u = 0$ is just like the ODE $u'' - u = 0$. However, when we integrate with respect to $x$, we again produce functions of $y$ rather than constants of integration. Hence the solution

$$u(x, y) = A(y)e^x + B(y)e^{-x}.$$

Verify that this works!
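Again a minimal check:

  u[x_, y_] := A[y] Exp[x] + B[y] Exp[-x];
  Simplify[D[u[x, y], {x, 2}] - u[x, y]]
  (* -> 0 *)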
PDE with change of variables

$u_{xy} = -u_x$. Set $p = u_x$; then $p_y = u_{xy} = -p$, or $p_y/p = -1$. This is now variables-separable in $y$: $p = c(x)e^{-y}$, and therefore

$$u(x, y) = f(x)e^{-y} + g(y).$$

Verify that this works!
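And its check:

  u[x_, y_] := f[x] Exp[-y] + g[y];
  Simplify[D[u[x, y], x, y] + D[u[x, y], x]]
  (* -> 0 *)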
Remember: constants of integration become functions of the other variable.
Some Essential Terminology
Analytic function – for our purposes, this is just another way of saying that the function can be differentiated as often as we need with respect to the given variables (strictly speaking, an analytic function is one that equals its power series locally). With PDEs, we just have lots more variables.
Linear PDE – all terms (both the function and/or its derivatives) are combined in a linear fashion, which may be stated quasi-formally as:

If the PDE is $L(u, u_x, u_y) = 0$ and $L$ is a linear operator, then the PDE is linear.

$L$ may contain a wide variety of functions and/or derivatives of the independent variables; just nothing like $u_x^2$ or $u\,u_x$, but $u_{xx}$ is fine.
Homogeneous polynomial – all terms have the same degree, e.g., $x^2 + xy + y^2 = 0$
Homogeneous PDE – all terms, including derivatives, have the same degree (and usually the sum or product of functions and their derivatives $= 0$). The PDE $L(u) = g(x)$ is homogeneous if $g(x) = 0$.
It is always easier to solve a homogeneous PDE than a non-homogeneous PDE.
Order – the order of the PDE is the order of the highest partial derivative in the equation. Higher-order PDEs can still be linear. If you don't believe that, go back and reread what 'Linear PDE' means.
Coefficients – anything multiplying a partial derivative or a function; they may be constants or
variables or even functions.
Boundary conditions (BCs) – the solution function u and/or its derivatives have some known
values on the edges of the domain of the problem. Also known as Boundary Values (BVs).
There are three general types of BV Problems (BVPs).
1. Dirichlet problems: Values of the solution function at a particular $x$ are known for all times $t$: e.g., $u(0, t) = 0$ or $u(0, t) = f(t)$.
For 2nd-order PDEs like $u_{xx} + u_{yy} = 0$, the BC might be $u(x, y) =$ a given function of $x$ and $y$ on the boundary of the region $\{x, y\}$ where the problem is defined.
http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Dirichlet.html
Named for Johann Peter Gustav Lejeune Dirichlet (famous for having 5 names), who was a student of Georg Ohm (of Ohm's Law fame) and had a lifelong friend named Carl Jacobi (the Jacobian matrix guy with Jacob as his middle name). Dirichlet had a student named Georg Friedrich Bernhard Riemann (you've heard of his sums, his hypothesis, his zeta function, etc.).
FYI: Gauss had this to say of Dirichlet: “The total number of Dirichlet's publications is not
large: jewels are not weighed on a grocery scale.”
See http://www-history.mcs.st-and.ac.uk/Quotations/Gauss.html
2. Neumann problems: Values of the partial derivatives of the solution function are known for a particular $x$ and all times $t$: $u_x(0, t) = g_1(t)$.
Named for Carl Gottfried Neumann; he was a student of Otto
Hesse, who in turn studied under Jacobi.
http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Neumann_Carl.html
3. Robin problems: A mixture of the two other types in the form of a linear combination: $u_x(0, t) - a\,u(0, t) = g_2(t)$.
Named for Victor Gustave Robin, although “… we have uncovered all of his works (they are
relatively few). Nowhere have we found him using the Robin boundary condition. Robin wrote a
nice thesis in potential theory and also worked in thermodynamics. We have concluded that it is
neither inappropriate nor especially appropriate that the third boundary condition now bears his
name.”
http://www.ams.org/conm/218/articles/B0-8218-0988-1-03039-9.pdf
Note: BVPs that separately combine Dirichlet and Neumann
conditions are sometimes known as Cauchy problems.
http://jeff560.tripod.com/images/cauchy.jpg
Initial conditions (ICs) – a specific type of Boundary Condition, in which the solution function
u(x, t) and/or its derivatives have known values at time t = 0. ICs are also known as initial values
(IVs); such problems are called initial value problems (IVPs).
Consideration of various BCs and ICs is essential to the solution of PDEs that model real-world situations. Solutions often change dramatically with a simple change of BCs.