Homework 7 - Penn Math

Math 425 / AMCS 525
Assignment 7
Dr. DeTurck
Due Tuesday, March 29, 2016
Topics for this week — Convergence of Fourier series; Laplace’s equation and harmonic functions:
basic properties, computations on rectangles and cubes (Fourier!), Poisson’s formula for the disk
Seventh Homework Assignment - due Tuesday, March 29
Reading: Read sections 5.4, 5.5, and 6.1 through 6.3 of the text
Be prepared to discuss the following problems in class:
• Page 134 (page 129 in the first ed) problems 12, 13
• Page 145 (page 139 in the first ed) problems 2, 3
• Page 160 (page 154 in the first ed) problems 2, 5, 7
• Page 164 (page 158 in the first ed) problems 1, 6
• *Page 172 (page 163 in the first ed) problems 1, 2
Page 134, problem 12. The Fourier sine series of $f(x) = x$ on $(0, \ell)$ is
\[
x \sim \sum_{n=1}^\infty A_n \sin\frac{n\pi x}{\ell}
\]
where
\[
A_n = \frac{2}{\ell}\int_0^\ell x\sin\frac{n\pi x}{\ell}\,dx.
\]
Integrate by parts with $u = x$ and $dv = \sin\frac{n\pi x}{\ell}\,dx$, so $du = dx$ and $v = -\frac{\ell}{n\pi}\cos\frac{n\pi x}{\ell}$, to get
\[
A_n = -\frac{2x}{n\pi}\cos\frac{n\pi x}{\ell}\Big|_0^\ell + \frac{2}{n\pi}\int_0^\ell \cos\frac{n\pi x}{\ell}\,dx
= \left[-\frac{2x}{n\pi}\cos\frac{n\pi x}{\ell} + \frac{2\ell}{n^2\pi^2}\sin\frac{n\pi x}{\ell}\right]_0^\ell
= \frac{(-1)^{n+1}\,2\ell}{n\pi}.
\]
Therefore
\[
x \sim \sum_{n=1}^\infty \frac{(-1)^{n+1}\,2\ell}{n\pi}\sin\frac{n\pi x}{\ell}.
\]
Parseval’s theorem says that
\[
\int_0^\ell (f(x))^2\,dx = \sum_{n=1}^\infty A_n^2 \int_0^\ell \sin^2\frac{n\pi x}{\ell}\,dx,
\]
so we have
\[
\int_0^\ell x^2\,dx = \frac{\ell^3}{3} = \sum_{n=1}^\infty \frac{4\ell^2}{n^2\pi^2}\cdot\frac{\ell}{2} = \sum_{n=1}^\infty \frac{2\ell^3}{n^2\pi^2}.
\]
Multiply both sides of the equation
\[
\frac{\ell^3}{3} = \sum_{n=1}^\infty \frac{2\ell^3}{n^2\pi^2}
\]
by $\dfrac{\pi^2}{2\ell^3}$ and obtain
\[
\frac{\pi^2}{6} = \sum_{n=1}^\infty \frac{1}{n^2}.
\]
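As a quick numerical sanity check (not part of the assigned write-up), the computed coefficients $A_n = (-1)^{n+1}\,2\ell/(n\pi)$ can be compared against direct numerical integration, and a partial sum of $\sum 1/n^2$ should approach $\pi^2/6$. The interval length $\ell = 2$ here is an arbitrary choice for the test.

```python
import math

# Check A_n = (-1)^(n+1) * 2l/(n pi) for f(x) = x on (0, l) by the midpoint
# rule, and check a partial sum of the Basel series against pi^2/6.
l = 2.0  # arbitrary interval length (assumption for this check)

def A(n, steps=50_000):
    # A_n = (2/l) * integral_0^l x sin(n pi x / l) dx via the midpoint rule
    h = l / steps
    s = sum(((i + 0.5) * h) * math.sin(n * math.pi * (i + 0.5) * h / l)
            for i in range(steps))
    return (2.0 / l) * s * h

closed_form = [(-1) ** (n + 1) * 2 * l / (n * math.pi) for n in (1, 2, 3)]
numeric = [A(n) for n in (1, 2, 3)]

N = 100_000
basel_partial = sum(1.0 / n**2 for n in range(1, N + 1))
```

The Basel partial sum undershoots $\pi^2/6$ by roughly $1/N$, which is why the tolerance below is taken proportional to $1/N$.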
Page 134, problem 13. We can obtain the Fourier cosine series for $x^2$ by integrating the Fourier sine series of $x$, which we found in the preceding problem:
\[
x^2 = \int_0^x 2t\,dt \sim \int_0^x \sum_{n=1}^\infty \frac{(-1)^{n+1}\,4\ell}{n\pi}\sin\frac{n\pi t}{\ell}\,dt
= \sum_{n=1}^\infty \frac{(-1)^n\,4\ell^2}{n^2\pi^2}\cos\frac{n\pi t}{\ell}\Big|_0^x
= \sum_{n=1}^\infty \frac{(-1)^n\,4\ell^2}{n^2\pi^2}\cos\frac{n\pi x}{\ell} + \sum_{n=1}^\infty \frac{(-1)^{n+1}\,4\ell^2}{n^2\pi^2}.
\]
We can evaluate the constant sum on the right by integrating both sides from $0$ to $\ell$ (all the cosine terms will integrate to zero) and obtain
\[
\int_0^\ell x^2\,dx = \frac{\ell^3}{3} = \sum_{n=1}^\infty \frac{(-1)^{n+1}\,4\ell^3}{n^2\pi^2},
\]
so, dividing this by $\ell$, we can conclude that
\[
x^2 \sim \frac{\ell^2}{3} + \sum_{n=1}^\infty \frac{(-1)^n\,4\ell^2}{n^2\pi^2}\cos\frac{n\pi x}{\ell}.
\]
This time, Parseval’s theorem says
\[
\int_0^\ell x^4\,dx = \frac{\ell^5}{5}
= \frac{\ell^4}{9}\int_0^\ell 1^2\,dx + \sum_{n=1}^\infty \frac{16\ell^4}{n^4\pi^4}\int_0^\ell \cos^2\frac{n\pi x}{\ell}\,dx
= \frac{\ell^5}{9} + \sum_{n=1}^\infty \frac{8\ell^5}{n^4\pi^4}.
\]
Now multiply both sides of the equation
\[
\frac{\ell^5}{5} = \frac{\ell^5}{9} + \sum_{n=1}^\infty \frac{8\ell^5}{n^4\pi^4}
\]
by $\dfrac{\pi^4}{8\ell^5}$ and obtain
\[
\frac{\pi^4}{40} = \frac{\pi^4}{72} + \sum_{n=1}^\infty \frac{1}{n^4}.
\]
Rearrange this to get
\[
\sum_{n=1}^\infty \frac{1}{n^4} = \frac{\pi^4}{40} - \frac{\pi^4}{72} = \frac{\pi^4}{90}.
\]
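The same kind of numerical sanity check applies here: a partial sum of $\sum 1/n^4$ should approach $\pi^4/90$, with a tail of roughly $1/(3N^3)$.

```python
import math

# Partial sums of sum 1/n^4 should approach pi^4/90; the tail after N terms
# is about 1/(3 N^3), so N = 1000 already gives ~10 digits.
N = 1000
partial = sum(1.0 / n**4 for n in range(1, N + 1))
```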
Page 145, problem 2. This was problem 5(b) on the first midterm. But let's try it using the hint. Let
\[
\varphi(t) = \|f + tg\|^2 = \langle f+tg,\, f+tg\rangle = \langle f,f\rangle + 2t\langle f,g\rangle + t^2\langle g,g\rangle.
\]
We know that $\varphi(t) \ge 0$, and the minimum of $\varphi$ happens where $\varphi' = 0$:
\[
\varphi'(t) = 2\langle f,g\rangle + 2t\langle g,g\rangle,
\]
so $\varphi' = 0$ for
\[
t = -\frac{\langle f,g\rangle}{\langle g,g\rangle}.
\]
And the minimum is
\[
\varphi\!\left(-\frac{\langle f,g\rangle}{\langle g,g\rangle}\right)
= \langle f,f\rangle - 2\frac{\langle f,g\rangle}{\langle g,g\rangle}\langle f,g\rangle + \frac{\langle f,g\rangle^2}{\langle g,g\rangle^2}\langle g,g\rangle
= \langle f,f\rangle - 2\frac{\langle f,g\rangle^2}{\langle g,g\rangle} + \frac{\langle f,g\rangle^2}{\langle g,g\rangle}
= \langle f,f\rangle - \frac{\langle f,g\rangle^2}{\langle g,g\rangle}.
\]
The minimum value of $\varphi$ must be non-negative, so
\[
\langle f,f\rangle - \frac{\langle f,g\rangle^2}{\langle g,g\rangle} \ge 0,
\]
in other words
\[
\langle f,f\rangle\,\langle g,g\rangle \ge \langle f,g\rangle^2.
\]
Taking the square root of both sides gives the Schwarz inequality.
Page 145, problem 3. A slight variation of this was also on the midterm (this is Poincaré's inequality), but here's a proof of the version given: Use the (square of the) Schwarz inequality with $f = 1$ and $g = f'$, which says that
\[
\langle 1, f'\rangle^2 \le \|1\|^2\,\|f'\|^2.
\]
Now
\[
\langle 1, f'\rangle^2 = \left(\int_0^\ell f'(x)\,dx\right)^2 = (f(\ell) - f(0))^2
\]
and
\[
\|1\|^2\,\|f'\|^2 = \int_0^\ell 1^2\,dx \int_0^\ell (f'(x))^2\,dx = \ell\int_0^\ell (f'(x))^2\,dx.
\]
So now Schwarz says
\[
(f(\ell) - f(0))^2 \le \ell\int_0^\ell (f'(x))^2\,dx,
\]
which is the version of Poincaré that we were to prove.
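As an illustration (not part of the assigned write-up), the inequality can be checked numerically on a sample function; $f(x) = x^3$ on $[0,1]$ is an arbitrary smooth choice, for which the right-hand side is $\int_0^1 9x^4\,dx = 9/5$ and the left-hand side is $1$.

```python
# Check (f(l) - f(0))^2 <= l * integral_0^l f'(x)^2 dx for the sample
# function f(x) = x^3 on [0, 1] (arbitrary choice for this check).
l = 1.0
steps = 50_000
h = l / steps
# midpoint-rule integral of (f'(x))^2 = (3x^2)^2 = 9x^4
fprime_sq_integral = sum((3 * ((i + 0.5) * h) ** 2) ** 2 for i in range(steps)) * h
lhs = (1.0**3 - 0.0**3) ** 2     # (f(1) - f(0))^2
rhs = l * fprime_sq_integral     # should be 9/5
```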
Page 160, problem 2. If $u$ is a solution of $\Delta u = k^2 u$ that depends only on $r$, then (as we have shown previously, or else from the text) $u(r)$ satisfies
\[
u'' + \frac{2}{r}u' = k^2 u.
\]
Following the hint, let $u = v/r$. Then
\[
u' = \frac{v'}{r} - \frac{v}{r^2}
\quad\text{and}\quad
u'' = \frac{v''}{r} - 2\frac{v'}{r^2} + \frac{2v}{r^3}.
\]
The equation becomes:
\[
\frac{v''}{r} - 2\frac{v'}{r^2} + \frac{2v}{r^3} + \frac{2}{r}\left(\frac{v'}{r} - \frac{v}{r^2}\right) = \frac{v''}{r} = k^2\,\frac{v}{r},
\]
or in other words $v'' = k^2 v$. Since we're given that $k > 0$ (in particular, $k \neq 0$), the solutions of this equation are $v = c_1 e^{kr} + c_2 e^{-kr}$, which in turn yield
\[
u(r) = c_1\frac{e^{kr}}{r} + c_2\frac{e^{-kr}}{r}.
\]
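A quick finite-difference check (not part of the assigned write-up) confirms that $u(r) = e^{kr}/r$ satisfies $u'' + \frac{2}{r}u' = k^2 u$; the values of $k$ and $r$ below are arbitrary test choices.

```python
import math

# Check that u(r) = e^{kr}/r satisfies u'' + (2/r) u' = k^2 u at a sample
# point, using central differences (k, r, h are arbitrary test choices).
k, r, h = 1.3, 2.0, 1e-4

def u(r):
    return math.exp(k * r) / r

up = (u(r + h) - u(r - h)) / (2 * h)            # approximate u'
upp = (u(r + h) - 2 * u(r) + u(r - h)) / h**2   # approximate u''
residual = upp + (2 / r) * up - k**2 * u(r)     # should be ~0
```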
Page 160, problem 5. To solve $\Delta u = 1$ on $r < a$ in $\mathbb{R}^2$ with $u = 0$ when $r = a$, it's enough to look for radial solutions (since we will have shown that the solution of this Dirichlet problem is unique). So we seek a solution $u(r)$ to the ordinary differential equation
\[
u'' + \frac{1}{r}u' = 1
\]
that defines a smooth function at the origin and for which $u(a) = 0$. The general solution of
\[
u'' + \frac{1}{r}u' = 0
\]
is
\[
u_0 = c_1 + c_2\ln r,
\]
and a particular solution of the inhomogeneous equation is
\[
u_p = \frac{r^2}{4},
\]
so the general solution is
\[
u_g = \frac{r^2}{4} + c_1 + c_2\ln r.
\]
We don't want a $\ln r$ term since that will become infinite as $r \to 0$, so we set $c_2 = 0$. In order for $u(a) = 0$, we should set $c_1 = -\frac14 a^2$, and so the solution of the problem is
\[
u(r) = \frac14(r^2 - a^2) = \frac14(x^2 + y^2 - a^2).
\]
Page 160, problem 7. The general solution of
\[
\Delta u = u_{rr} + \frac{2}{r}u_r = 1
\]
is
\[
u = c_1 + \frac{c_2}{r} + \frac{r^2}{6}.
\]
Now we need to arrange for $u = 0$ when $r = a$ and $r = b$, in other words to solve the linear system
\[
\begin{bmatrix} 1 & a^{-1} \\ 1 & b^{-1} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} -\frac16 a^2 \\ -\frac16 b^2 \end{bmatrix}
\]
for $c_1$ and $c_2$. The solution is
\[
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \frac{1}{b^{-1} - a^{-1}}
\begin{bmatrix} b^{-1} & -a^{-1} \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} -\frac16 a^2 \\ -\frac16 b^2 \end{bmatrix}
= \frac{ab}{a-b}
\begin{bmatrix} \frac16(b^2 a^{-1} - a^2 b^{-1}) \\ \frac16(a^2 - b^2) \end{bmatrix}.
\]
Therefore
\[
u = \frac{1}{6}\left(\frac{ab(a+b)}{r} - (b^2 + ab + a^2) + r^2\right).
\]
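The formula can be sanity-checked numerically (not part of the assigned write-up): it should vanish at $r = a$ and $r = b$ and satisfy $u_{rr} + \frac{2}{r}u_r = 1$ in between. The radii below are arbitrary test values.

```python
# Check that u(r) = (1/6)(ab(a+b)/r - (b^2 + ab + a^2) + r^2) vanishes at
# r = a and r = b, and satisfies u_rr + (2/r) u_r = 1 in between
# (a, b are arbitrary test radii).
a, b, h = 1.0, 2.5, 1e-3

def u(r):
    return (a * b * (a + b) / r - (b**2 + a * b + a**2) + r**2) / 6.0

r = 1.7  # an interior sample point
lap = ((u(r + h) - 2 * u(r) + u(r - h)) / h**2
       + (2 / r) * (u(r + h) - u(r - h)) / (2 * h))
```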
Page 164, problem 1. Since this is a pure Neumann problem, it is important to check that the integral of the normal derivative around the boundary of the region is zero, and it is, since the value of the outward-pointing normal derivative is $+a$ on a segment of length $b$ (where $x = 0$), is equal to $-b$ on a segment of length $a$ (where $y = 0$), and is zero on the other two segments. So there is a solution to this problem, and it is unique only up to adding a constant.

We could try writing the solution as the sum of two Fourier cosine series:
\[
u(x,y) = C + \sum_{n=1}^\infty A_n \cosh\frac{n\pi(a-x)}{b}\cos\frac{n\pi y}{b} + \sum_{n=1}^\infty B_n \cosh\frac{n\pi(b-y)}{a}\cos\frac{n\pi x}{a},
\]
but it is simpler to note that since the boundary conditions are constants, we can seek the solution of the problem as $u(x,y) = F(x) + G(y)$, where $F(x)$ is a quadratic polynomial with $F'(0) = -a$ and $F'(a) = 0$, and where $G(y)$ is also a quadratic polynomial with $G'(0) = b$ and $G'(b) = 0$. From the conditions on $F$, we have that $F'$ is linear in $x$ with slope 1, so $F'(x) = x - a$, and likewise $G'(y) = b - y$. So $F(x) = \frac12 x^2 - ax$ and $G(y) = by - \frac12 y^2$ (up to an additive constant), so
\[
u(x,y) = F(x) + G(y) + C = \frac12 x^2 - \frac12 y^2 - ax + by + C.
\]
It's easy to check that this is harmonic and satisfies all the boundary conditions.
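That final check can also be done numerically (not part of the assigned write-up), with the side lengths below as arbitrary test values.

```python
# Check that u(x,y) = x^2/2 - y^2/2 - a x + b y is harmonic and matches the
# Neumann data u_x(0,y) = -a, u_y(x,0) = b, via central differences
# (a, b are arbitrary rectangle side lengths for this check).
a, b, h = 2.0, 3.0, 1e-3

def u(x, y):
    return 0.5 * x**2 - 0.5 * y**2 - a * x + b * y

def lap(x, y):
    # five-point Laplacian; exact up to rounding for a quadratic
    return ((u(x + h, y) - 2 * u(x, y) + u(x - h, y))
            + (u(x, y + h) - 2 * u(x, y) + u(x, y - h))) / h**2

ux0 = (u(h, 1.0) - u(-h, 1.0)) / (2 * h)   # u_x(0, y): should be -a
uy0 = (u(1.0, h) - u(1.0, -h)) / (2 * h)   # u_y(x, 0): should be b
```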
Page 164, problem 6. We have to solve the Neumann problem on the cube, with $u_z(x,y,1) = g(x,y)$ and the normal derivatives zero on the other five sides. So in our separated solutions $X(x)Y(y)Z(z)$ we will have $X'(0) = X'(1) = 0$, $Y'(0) = Y'(1) = 0$ and $Z'(0) = 0$. So our separated solutions are
\[
XYZ = \cos m\pi x\,\cos n\pi y\,\cosh\sqrt{m^2+n^2}\,\pi z
\]
and the solution of the problem is
\[
u(x,y,z) = \sum_{n=0}^\infty\sum_{m=0}^\infty A_{mn}\cos m\pi x\,\cos n\pi y\,\cosh\sqrt{m^2+n^2}\,\pi z.
\]
Now
\[
u_z(x,y,1) = \sum_{n=0}^\infty\sum_{m=0}^\infty A_{mn}\sqrt{m^2+n^2}\,\pi\,\sinh\sqrt{m^2+n^2}\,\pi\,\cos m\pi x\,\cos n\pi y.
\]
So, except when $m = n = 0$ (i.e., the constant term),
\[
\sqrt{m^2+n^2}\,\pi\,\sinh\sqrt{m^2+n^2}\,\pi\; A_{mn}
= \frac{\langle g(x,y),\ \cos m\pi x\cos n\pi y\rangle}{\langle \cos m\pi x\cos n\pi y,\ \cos m\pi x\cos n\pi y\rangle}
= \frac{\displaystyle\int_0^1\!\!\int_0^1 g(x,y)\cos m\pi x\cos n\pi y\,dx\,dy}{\displaystyle\int_0^1\!\!\int_0^1 \cos^2 m\pi x\,\cos^2 n\pi y\,dx\,dy}
= 4\int_0^1\!\!\int_0^1 g(x,y)\cos m\pi x\cos n\pi y\,dx\,dy
\]
(when both $m \ge 1$ and $n \ge 1$; the factor $4$ becomes $2$ when exactly one of $m$, $n$ is zero, since $\int_0^1 \cos^2 m\pi x\,dx = \frac12$ for $m \ge 1$ but equals $1$ for $m = 0$), and $A_{00}$ is undetermined (and we expect the solution of the Neumann problem to be unique only up to the addition of an arbitrary constant). So the solution is
\[
u(x,y,z) = A_{00} + \sum_{\substack{m,n \ge 0 \\ (m,n)\neq(0,0)}}
\frac{4\displaystyle\int_0^1\!\!\int_0^1 g(x,y)\cos m\pi x\cos n\pi y\,dx\,dy}
{\sqrt{m^2+n^2}\,\pi\,\sinh\sqrt{m^2+n^2}\,\pi}\,
\cos m\pi x\,\cos n\pi y\,\cosh\sqrt{m^2+n^2}\,\pi z.
\]
Page 172, problem 1. (a) Since u = 3 sin 2θ + 1 on the boundary of the disk, and the maximum
of this function of θ is 4, we have that 4 is the maximum value of u throughout the disk by the
maximum principle.
(b) The value of u at the origin is the average of u on the boundary of the disk, namely 1.
Page 172, problem 2. We need the harmonic function
\[
u = \frac12 A_0 + \sum_{n=1}^\infty r^n(A_n\cos n\theta + B_n\sin n\theta)
\]
to equal $1 + 3\sin\theta$ when $r = a$. This will be so if all the $A_n = 0$ except $n = 0$, for which $A_0 = 2$, and all the $B_n = 0$ except for $n = 1$, for which
\[
B_1 = \frac{3}{a}.
\]
So
\[
u = 1 + \frac{3}{a}\,r\sin\theta.
\]
Write up solutions of the following to hand in:
• Page 134 (page 129 in the first ed) problems 14, 15
• Page 145 (page 139 in the first ed) problems 4, 12
• Page 160 (page 154 in the first ed) problems 4, 6, 8
• Page 164 (page 158 in the first ed) problems 4, 7
• *Page 172 (page 163 in the first ed) problem 3
Page 134, problem 14. To find the sum of the series $\displaystyle\sum_{n=1}^\infty \frac{1}{n^6}$ we'll find the Fourier series for $x^3$, using the work from problems 12 and 13 above. From problem 13, we know that the cosine series for $x^2$ is
\[
x^2 \sim \frac{\ell^2}{3} + \sum_{n=1}^\infty \frac{(-1)^n\,4\ell^2}{n^2\pi^2}\cos\frac{n\pi x}{\ell},
\]
so we'll integrate 3 times this to get
\[
x^3 = \int_0^x 3t^2\,dt \sim \int_0^x \left[\ell^2 + \sum_{n=1}^\infty \frac{(-1)^n\,12\ell^2}{n^2\pi^2}\cos\frac{n\pi t}{\ell}\right]dt
= \ell^2 x + \sum_{n=1}^\infty \frac{(-1)^n\,12\ell^3}{n^3\pi^3}\sin\frac{n\pi x}{\ell}.
\]
And this will do: the series on the right is the series for $x^3 - \ell^2 x$:
\[
x^3 - \ell^2 x \sim \sum_{n=1}^\infty \frac{(-1)^n\,12\ell^3}{n^3\pi^3}\sin\frac{n\pi x}{\ell},
\]
and we can apply Parseval's theorem to this:
\[
\int_0^\ell (x^3 - \ell^2 x)^2\,dx = \sum_{n=1}^\infty \frac{144\ell^6}{n^6\pi^6}\int_0^\ell \sin^2\frac{n\pi x}{\ell}\,dx.
\]
Since $(x^3 - \ell^2 x)^2 = x^6 - 2\ell^2 x^4 + \ell^4 x^2$, the integral on the left gives $\left(\frac17 - \frac25 + \frac13\right)\ell^7 = \frac{8}{105}\ell^7$. And as usual, all the integrals on the right evaluate to $\frac12\ell$, so we obtain:
\[
\frac{8\ell^7}{105} = \sum_{n=1}^\infty \frac{72\ell^7}{n^6\pi^6}.
\]
Multiply both sides by $\dfrac{\pi^6}{72\ell^7}$ and obtain
\[
\frac{\pi^6}{945} = \sum_{n=1}^\infty \frac{1}{n^6}.
\]
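As with the previous two sums, a partial-sum check (not part of the assigned write-up) confirms the value: the tail of $\sum 1/n^6$ after $N$ terms is about $1/(5N^5)$.

```python
import math

# Partial sums of sum 1/n^6 should approach pi^6/945; the tail after N terms
# is about 1/(5 N^5), so even N = 200 gives ~12 digits.
N = 200
partial = sum(1.0 / n**6 for n in range(1, N + 1))
```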
Page 134, problem 15. (a) To write
\[
1 = \sum_{n=0}^\infty B_n\cos[(n + \tfrac12)x]
\]
we have
\[
B_n = \frac{\langle 1,\ \cos[(n+\frac12)x]\rangle}{\langle \cos[(n+\frac12)x],\ \cos[(n+\frac12)x]\rangle}
= \frac{\displaystyle\int_0^\pi \cos[(n+\tfrac12)x]\,dx}{\displaystyle\int_0^\pi \cos^2[(n+\tfrac12)x]\,dx}
= \frac{\dfrac{1}{n+\frac12}\sin[(n+\tfrac12)x]\Big|_0^\pi}{\tfrac12\pi}
= \frac{(-1)^n\,4}{(2n+1)\pi}.
\]
(b) The series will converge for all $x$ to the extension of the $1$ that is even across $x = 0$, odd across $x = \pi$, and $4\pi$-periodic. So, on the interval $-2\pi < x < 2\pi$, the series converges to
\[
\begin{cases}
-1 & \text{for } -2\pi < x < -\pi \\
\phantom{-}0 & \text{for } x = -\pi \\
\phantom{-}1 & \text{for } -\pi < x < \pi \\
\phantom{-}0 & \text{for } x = \pi \\
-1 & \text{for } \pi < x < 2\pi
\end{cases}
\]
(c) The series is
\[
1 = \sum_{n=0}^\infty \frac{(-1)^n\,4}{(2n+1)\pi}\cos[(n+\tfrac12)x]
\]
for $0 < x < \pi$, and Parseval's theorem says:
\[
\int_0^\pi 1^2\,dx = \sum_{n=0}^\infty \frac{16}{(2n+1)^2\pi^2}\int_0^\pi \cos^2[(n+\tfrac12)x]\,dx,
\]
or:
\[
\pi = \sum_{n=0}^\infty \frac{8}{(2n+1)^2\pi}.
\]
Multiply both sides by $\frac{\pi}{8}$ to get
\[
\frac{\pi^2}{8} = \sum_{n=0}^\infty \frac{1}{(2n+1)^2}.
\]
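A partial-sum check of the odd-squares series (again not part of the assigned write-up): the tail after $N$ terms is about $1/(4N)$.

```python
import math

# Partial sums of sum_{n>=0} 1/(2n+1)^2 should approach pi^2/8;
# the tail after N terms is about 1/(4N).
N = 100_000
partial = sum(1.0 / (2 * n + 1) ** 2 for n in range(N))
```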
Page 145, problem 4. Preliminaries: The boundary conditions for $X(x)$ in the separated solutions are
\[
\ell X'(0) + X(0) - X(\ell) = 0 \quad\text{and}\quad \ell X'(\ell) + X(0) - X(\ell) = 0.
\]
If $f$ and $g$ are two functions that satisfy these boundary conditions, then
\begin{align*}
f'(x)g(x) - f(x)g'(x)\Big|_0^\ell
&= f'(\ell)g(\ell) - f(\ell)g'(\ell) - f'(0)g(0) + f(0)g'(0) \\
&= \frac{1}{\ell}\Big[(f(\ell)-f(0))g(\ell) - f(\ell)(g(\ell)-g(0)) - (f(\ell)-f(0))g(0) + f(0)(g(\ell)-g(0))\Big] \\
&= \frac{1}{\ell}\Big[f(\ell)g(\ell) - f(0)g(\ell) - f(\ell)g(\ell) + f(\ell)g(0) - f(\ell)g(0) + f(0)g(0) + f(0)g(\ell) - f(0)g(0)\Big] \\
&= 0,
\end{align*}
and so by Theorems 1 and 2 of section 5.3, all the eigenvalues are real and eigenfunctions corresponding to distinct eigenvalues are orthogonal.

We are going to use Poincaré's inequality in part (c) to show that there are no negative eigenvalues, but here is a more direct proof: Suppose $\lambda = -\beta^2$; then
\[
X = c_1\cosh\beta x + c_2\sinh\beta x
\quad\text{and}\quad
X' = \beta c_1\sinh\beta x + \beta c_2\cosh\beta x.
\]
The boundary condition $\ell X'(0) + X(0) - X(\ell) = 0$ says
\[
\ell\beta c_2 + c_1 - c_1\cosh\beta\ell - c_2\sinh\beta\ell = 0
\]
and the boundary condition $\ell X'(\ell) + X(0) - X(\ell) = 0$ says
\[
\ell\beta c_1\sinh\beta\ell + \ell\beta c_2\cosh\beta\ell + c_1 - c_1\cosh\beta\ell - c_2\sinh\beta\ell = 0.
\]
We can simplify things a bit by subtracting the first equation from the second, dividing the result by $\beta\ell$, and replacing the second equation with
\[
c_1\sinh\beta\ell + c_2\cosh\beta\ell - c_2 = 0.
\]
Then the first boundary condition and this last equation comprise the following system of two linear equations in the two unknowns $c_1$ and $c_2$:
\[
\begin{bmatrix} 1 - \cosh\beta\ell & \ell\beta - \sinh\beta\ell \\ \sinh\beta\ell & \cosh\beta\ell - 1 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
If $-\beta^2$ is to be an eigenvalue, the determinant of the matrix on the left must be zero. The determinant is
\[
2\cosh\beta\ell - 1 - \cosh^2\beta\ell + \sinh^2\beta\ell - \beta\ell\sinh\beta\ell = 2\cosh\beta\ell - 2 - \beta\ell\sinh\beta\ell.
\]
We need to show there are no nonzero values of $\beta$ for which this quantity is zero. So we study the function $f(z) = 2\cosh z - 2 - z\sinh z$. We have $f(0) = 0$ and $f$ is an even function. We're going to write the Maclaurin series for $f$ and see that all the coefficients of even powers of $z$ (from $z^4$ on) are negative, and all the coefficients of odd powers of $z$ are zero, which will show that $f(z) < 0$ for $z \neq 0$. The series for $\cosh z$ and $\sinh z$ are
\[
\cosh z = \sum_{n=0}^\infty \frac{z^{2n}}{(2n)!}
\quad\text{and}\quad
\sinh z = \sum_{n=1}^\infty \frac{z^{2n-1}}{(2n-1)!}
\]
(just like the series for cosine and sine, but without the alternating signs). So the series for $2\cosh z - 2$ is
\[
2\cosh z - 2 = \sum_{n=1}^\infty \frac{2z^{2n}}{(2n)!},
\]
since subtracting $2$ cancels the $n = 0$ term, and the series for $z\sinh z$ is
\[
z\sinh z = \sum_{n=1}^\infty \frac{z^{2n}}{(2n-1)!}.
\]
Therefore
\[
2\cosh z - 2 - z\sinh z = \sum_{n=1}^\infty \left(\frac{2}{2n} - 1\right)\frac{z^{2n}}{(2n-1)!} = -\sum_{n=2}^\infty \frac{n-1}{n}\cdot\frac{z^{2n}}{(2n-1)!},
\]
which proves the claim that there are no values of $z$ other than $z = 0$ for which this function is zero. Hence there are no negative eigenvalues.
Now we can begin the problem.

(a) Zero is a double eigenvalue of the problem, since any linear function $X(x) = A + Bx$ satisfies the equation $X'' = 0$ together with the boundary conditions, which (as they are printed in the book) say essentially that "the slope equals the slope equals the slope".

Now we can concentrate on the positive eigenvalues and their eigenfunctions. The beginning of the analysis is not so different from the negative case above, except we have trigonometric rather than hyperbolic functions. So, suppose $\lambda = \beta^2$; then
\[
X = c_1\cos\beta x + c_2\sin\beta x
\quad\text{and}\quad
X' = -\beta c_1\sin\beta x + \beta c_2\cos\beta x.
\]
The boundary condition $\ell X'(0) + X(0) - X(\ell) = 0$ says
\[
\ell\beta c_2 + c_1 - c_1\cos\beta\ell - c_2\sin\beta\ell = 0
\]
and the boundary condition $\ell X'(\ell) + X(0) - X(\ell) = 0$ says
\[
-\ell\beta c_1\sin\beta\ell + \ell\beta c_2\cos\beta\ell + c_1 - c_1\cos\beta\ell - c_2\sin\beta\ell = 0.
\]
We can simplify things a bit by subtracting the first equation from the second, dividing the result by $\beta\ell$, and replacing the second equation with
\[
-c_1\sin\beta\ell + c_2\cos\beta\ell - c_2 = 0.
\]
Then the first boundary condition and this last equation comprise the following system of two linear equations in the two unknowns $c_1$ and $c_2$:
\[
\begin{bmatrix} 1 - \cos\beta\ell & \ell\beta - \sin\beta\ell \\ -\sin\beta\ell & \cos\beta\ell - 1 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
If $\beta^2$ is to be an eigenvalue, the determinant of the matrix on the left must be zero. The determinant is
\[
2\cos\beta\ell - 1 - \cos^2\beta\ell - \sin^2\beta\ell + \beta\ell\sin\beta\ell = 2\cos\beta\ell - 2 + \beta\ell\sin\beta\ell.
\]
Using the double-angle formulas for sine and cosine, we can rewrite this as
\[
-4\sin^2\tfrac12\beta\ell + 4\left(\tfrac12\beta\ell\right)\sin\tfrac12\beta\ell\cos\tfrac12\beta\ell.
\]
So the values of $\beta$ that give eigenvalues are the roots of
\[
\sin\tfrac12\beta\ell\left(\tfrac12\beta\ell\cos\tfrac12\beta\ell - \sin\tfrac12\beta\ell\right) = 0.
\]
If the first factor is zero, i.e., $\sin\tfrac12\beta\ell = 0$, then $\beta = \dfrac{2\pi n}{\ell}$ and the system of equations for $c_1$ and $c_2$ becomes:
\[
\begin{bmatrix} 0 & 2\pi n \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
so that $c_2 = 0$ and the corresponding eigenfunction is $X = c_1\cos\dfrac{2\pi n x}{\ell}$ (for $n = 1, 2, 3, \ldots$).

If the second factor is zero, then $\tfrac12\beta\ell$ must be a root of the equation $x = \tan x$. By graphing $y = x$ and $y = \tan x$ on the same graph and observing where the graphs cross, you can see that the values of $\tfrac12\beta\ell$ that yield eigenvalues are all near (and a bit less than, and getting closer and closer to) $(n - \tfrac12)\pi$.

In this case we start by rewriting the system of equations for $c_1$ and $c_2$ in terms of the half-angle and the double-angle formulas, since we know that $\sin\tfrac12\beta\ell = \tfrac12\beta\ell\cos\tfrac12\beta\ell$:
\[
\begin{bmatrix} 2\sin^2\tfrac12\beta\ell & 2\left(\tfrac12\beta\ell - \sin\tfrac12\beta\ell\cos\tfrac12\beta\ell\right) \\ -2\sin\tfrac12\beta\ell\cos\tfrac12\beta\ell & -2\sin^2\tfrac12\beta\ell \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
The second equation (after dividing by $-2\sin\tfrac12\beta\ell$) becomes
\[
(\cos\tfrac12\beta\ell)c_1 + (\sin\tfrac12\beta\ell)c_2 = 0.
\]
Now replace $\sin\tfrac12\beta\ell$ with $\tfrac12\beta\ell\cos\tfrac12\beta\ell$ and divide out the cosine factor to get
\[
c_1 + \tfrac12\beta\ell\,c_2 = 0.
\]
So the eigenfunction in this case will be (a multiple of)
\[
X(x) = \beta\ell\cos\beta x - 2\sin\beta x,
\]
or, using that $\beta = \sqrt\lambda$,
\[
X(x) = \sqrt\lambda\,\ell\cos\sqrt\lambda\,x - 2\sin\sqrt\lambda\,x.
\]
To summarize, we have three sets of eigenvalues:
• $\lambda = 0$, with eigenfunction $A + Bx$
• $\lambda = \dfrac{4n^2\pi^2}{\ell^2}$, with eigenfunction $\cos\dfrac{2n\pi x}{\ell}$, for $n = 1, 2, 3, \ldots$
• $\lambda_n = \beta_n^2$ where $\tfrac12\beta_n\ell$ are the positive roots of $x = \tan x$ (there are infinitely many of these), with eigenfunctions $X_n(x) = \sqrt{\lambda_n}\,\ell\cos\sqrt{\lambda_n}\,x - 2\sin\sqrt{\lambda_n}\,x$.

So the solution of the heat equation is
\[
u(x,t) = A + Bx + \sum_{n=1}^\infty \left[P_n e^{-4kn^2\pi^2 t/\ell^2}\cos\frac{2n\pi x}{\ell} + Q_n e^{-k\lambda_n t}\left(\sqrt{\lambda_n}\,\ell\cos\sqrt{\lambda_n}\,x - 2\sin\sqrt{\lambda_n}\,x\right)\right]
\]
where the $P_n$ and $Q_n$ are the Fourier coefficients of the initial data $\varphi(x)$.
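The transcendental roots of $x = \tan x$ can be computed numerically (not part of the assigned write-up). Bisecting $g(x) = \sin x - x\cos x$ (which has the same roots but is continuous) on each interval $[m\pi, (m+\tfrac12)\pi]$ shows the roots sitting just below the odd multiples of $\pi/2$ and creeping closer to them; the indexing by $m$ here is a choice made for the check.

```python
import math

# Find the first few positive roots of x = tan x by bisecting
# g(x) = sin x - x cos x (same roots, but no poles) on [m*pi, (m + 1/2)*pi].
# g is monotone on each such interval (g'(x) = x sin x has constant sign),
# so there is exactly one root per interval.
def g(x):
    return math.sin(x) - x * math.cos(x)

def root(m, iters=100):
    lo, hi = m * math.pi, (m + 0.5) * math.pi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

roots = [root(m) for m in (1, 2, 3)]
```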
where the Pn and Qn are the Fourier coefficients of the initial data ϕ(x).
(b) In the solution u(x, t) given above, all the terms of the series have factors that are exponentials
of negative numbers times t. So as t → ∞, all these terms go to zero (rapidly!) and we are left with
lim u(x, t) = A + Bx.
t→∞
(c) Using integration by parts (Green's first identity): Suppose $X(x)$ is an eigenfunction with eigenvalue $\lambda$; in the integration by parts, let $u = X$ and $dv = X''\,dx$ so that $du = X'\,dx$ and $v = X'$:
\begin{align*}
\lambda\langle X,X\rangle = \langle\lambda X, X\rangle = -\langle X'', X\rangle
&= -\int_0^\ell X''(x)X(x)\,dx = -\left[X(x)X'(x)\Big|_0^\ell - \int_0^\ell (X'(x))^2\,dx\right] \\
&= -\left[X(\ell)\,\frac{X(\ell)-X(0)}{\ell} - X(0)\,\frac{X(\ell)-X(0)}{\ell}\right] + \int_0^\ell (X'(x))^2\,dx \\
&= \int_0^\ell (X'(x))^2\,dx - \frac{1}{\ell}(X(\ell)-X(0))^2 \\
&\ge 0
\end{align*}
by the result of problem 3. But since $\lambda\langle X,X\rangle \ge 0$ and of course $\langle X,X\rangle > 0$, we must have $\lambda \ge 0$. So there are no negative eigenvalues.
(d) To find $A$ and $B$, we must be careful to use orthogonal eigenfunctions, and the functions $1$ and $x$ are not orthogonal on the interval $0 \le x \le \ell$. However, the functions $1$ and $x - \frac12\ell$ are orthogonal:
\[
\langle 1,\ x - \tfrac12\ell\rangle = \int_0^\ell \left(x - \frac{\ell}{2}\right)dx = \frac{x^2}{2} - \frac{\ell x}{2}\Big|_0^\ell = 0.
\]
Therefore
\[
A + Bx = \frac{\langle\varphi, 1\rangle}{\langle 1,1\rangle} + \frac{\langle\varphi,\ x - \frac12\ell\rangle}{\langle x - \frac12\ell,\ x - \frac12\ell\rangle}\left(x - \frac{\ell}{2}\right).
\]
Now $\langle 1,1\rangle = \int_0^\ell 1^2\,dx = \ell$, and
\[
\langle x - \tfrac12\ell,\ x - \tfrac12\ell\rangle = \int_0^\ell \left(x - \frac{\ell}{2}\right)^2 dx
= \frac{(x - \frac12\ell)^3}{3}\Big|_0^\ell = \frac{\ell^3}{24} - \frac{(-\ell)^3}{24} = \frac{\ell^3}{12}.
\]
Therefore
\begin{align*}
A + Bx &= \frac{1}{\ell}\int_0^\ell \varphi(x)\,dx + \frac{12}{\ell^3}\left(\int_0^\ell \left(x - \frac{\ell}{2}\right)\varphi(x)\,dx\right)\left(x - \frac{\ell}{2}\right) \\
&= \left[\frac{1}{\ell}\int_0^\ell \varphi(x)\,dx - \frac{\ell}{2}\cdot\frac{12}{\ell^3}\int_0^\ell \left(x - \frac{\ell}{2}\right)\varphi(x)\,dx\right]
+ \left[\frac{12}{\ell^3}\int_0^\ell \left(x - \frac{\ell}{2}\right)\varphi(x)\,dx\right]x \\
&= \int_0^\ell \left(\frac{4}{\ell} - \frac{6x}{\ell^2}\right)\varphi(x)\,dx
+ \left[\int_0^\ell \left(\frac{12x}{\ell^3} - \frac{6}{\ell^2}\right)\varphi(x)\,dx\right]x.
\end{align*}
So
\[
A = \int_0^\ell \left(\frac{4}{\ell} - \frac{6x}{\ell^2}\right)\varphi(x)\,dx
\quad\text{and}\quad
B = \int_0^\ell \left(\frac{12x}{\ell^3} - \frac{6}{\ell^2}\right)\varphi(x)\,dx.
\]
Page 145, problem 12. Since we know we can integrate a Fourier series term by term but not necessarily differentiate one, start with the series for $f'(x)$:
\[
f'(x) = A_0 + \sum_{n=1}^\infty A_n\cos nx + B_n\sin nx
\]
where
\[
A_0 = \frac{1}{2\pi}\int_{-\pi}^\pi f'(x)\,dx
\quad\text{and}\quad
A_n = \frac{1}{\pi}\int_{-\pi}^\pi f'(x)\cos nx\,dx,\quad
B_n = \frac{1}{\pi}\int_{-\pi}^\pi f'(x)\sin nx\,dx \quad\text{for } n = 1, 2, \ldots
\]
Because $f(x)$ satisfies periodic boundary conditions, we have
\[
0 = f(\pi) - f(-\pi) = \int_{-\pi}^\pi f'(x)\,dx = 2\pi A_0.
\]
So $A_0 = 0$. Next, the Fourier series for $f(x)$ is
\[
f(x) = C_0 + \sum_{n=1}^\infty C_n\cos nx + D_n\sin nx
\]
where
\[
C_0 = \frac{1}{2\pi}\int_{-\pi}^\pi f(x)\,dx
\quad\text{and}\quad
C_n = \frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos nx\,dx,\quad
D_n = \frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin nx\,dx \quad\text{for } n = 1, 2, \ldots
\]
We are given in the problem that $\int_{-\pi}^\pi f(x)\,dx = 0$, so $C_0 = 0$. Also, on integration by parts,
\[
C_n = \frac{1}{\pi}\int_{-\pi}^\pi f(x)\cos nx\,dx
= \frac{1}{n\pi}f(x)\sin nx\Big|_{-\pi}^\pi - \frac{1}{n\pi}\int_{-\pi}^\pi f'(x)\sin nx\,dx = -\frac{B_n}{n}
\]
and
\[
D_n = \frac{1}{\pi}\int_{-\pi}^\pi f(x)\sin nx\,dx
= -\frac{1}{n\pi}f(x)\cos nx\Big|_{-\pi}^\pi + \frac{1}{n\pi}\int_{-\pi}^\pi f'(x)\cos nx\,dx = \frac{A_n}{n}.
\]
So Parseval's theorem tells us that (using that the integrals of $\cos^2 nx$ and $\sin^2 nx$ from $-\pi$ to $\pi$ are equal to $\pi$ for all $n$)
\[
\int_{-\pi}^\pi (f'(x))^2\,dx = \sum_{n=1}^\infty (A_n^2 + B_n^2)\pi
\ge \sum_{n=1}^\infty \left(\frac{A_n^2}{n^2} + \frac{B_n^2}{n^2}\right)\pi
= \sum_{n=1}^\infty (C_n^2 + D_n^2)\pi = \int_{-\pi}^\pi (f(x))^2\,dx.
\]
And from this we can also see that equality holds if and only if $A_n = B_n = 0$ (and hence $C_n = D_n = 0$) for all $n \ge 2$.
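This Wirtinger-type inequality can be illustrated numerically (not part of the assigned write-up) on a sample admissible function; $f(x) = \sin 2x + \frac12\cos 3x$ is an arbitrary choice that is $2\pi$-periodic with mean zero, for which $\int (f')^2 = 6.25\pi$ and $\int f^2 = 1.25\pi$.

```python
import math

# Check integral of (f')^2 >= integral of f^2 over (-pi, pi) for the sample
# f(x) = sin 2x + 0.5 cos 3x (arbitrary admissible choice): the exact values
# are 6.25*pi and 1.25*pi respectively.
steps = 20_000
h = 2 * math.pi / steps
int_f2 = 0.0
int_fp2 = 0.0
for i in range(steps):
    x = -math.pi + (i + 0.5) * h
    f = math.sin(2 * x) + 0.5 * math.cos(3 * x)
    fp = 2 * math.cos(2 * x) - 1.5 * math.sin(3 * x)
    int_f2 += f * f * h
    int_fp2 += fp * fp * h
```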
Page 160, problem 4. We know (from the very first assignment perhaps) that for functions $u$ that depend only on $r$,
\[
\Delta u = u_{rr} + \frac{2}{r}u_r.
\]
The differential equation $r^2 u'' + 2ru' = 0$ is a Cauchy-Euler equation with general solution $u = c_1 + c_2 r^{-1}$. If $u(a) = A$ and $u(b) = B$, then we have two equations in two unknowns:
\[
\begin{bmatrix} 1 & a^{-1} \\ 1 & b^{-1} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} A \\ B \end{bmatrix},
\]
the solution of which is
\[
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \frac{1}{b^{-1} - a^{-1}}
\begin{bmatrix} b^{-1} & -a^{-1} \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} A \\ B \end{bmatrix}
= \frac{ab}{a-b}
\begin{bmatrix} b^{-1}A - a^{-1}B \\ B - A \end{bmatrix}
= \frac{1}{a-b}
\begin{bmatrix} aA - bB \\ ab(B-A) \end{bmatrix},
\]
so
\[
u = \frac{aA - bB}{a-b} + \frac{ab(B-A)}{(a-b)r}.
\]
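A one-line check (not part of the assigned write-up) that the formula interpolates the boundary values; the constants are arbitrary test values.

```python
# Check that u(r) = (aA - bB)/(a - b) + ab(B - A)/((a - b) r) satisfies
# u(a) = A and u(b) = B (a, b, A, B are arbitrary test values).
a, b, A, B = 1.0, 3.0, 5.0, -2.0

def u(r):
    return (a * A - b * B) / (a - b) + a * b * (B - A) / ((a - b) * r)
```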
Page 160, problem 6. The general solution of
\[
\Delta u = u_{rr} + \frac{1}{r}u_r = 1
\]
is
\[
u = c_1 + c_2\ln r + \frac{r^2}{4}.
\]
Now we need to arrange for $u = 0$ when $r = a$ and $r = b$, in other words to solve the linear system
\[
\begin{bmatrix} 1 & \ln a \\ 1 & \ln b \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} -\frac14 a^2 \\ -\frac14 b^2 \end{bmatrix}
\]
for $c_1$ and $c_2$. The solution is
\[
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \frac{1}{\ln b - \ln a}
\begin{bmatrix} \ln b & -\ln a \\ -1 & 1 \end{bmatrix}
\begin{bmatrix} -\frac14 a^2 \\ -\frac14 b^2 \end{bmatrix}
= \frac{1}{\ln b - \ln a}
\begin{bmatrix} \frac14(b^2\ln a - a^2\ln b) \\ \frac14(a^2 - b^2) \end{bmatrix}.
\]
Therefore
\[
u = \frac{1}{4(\ln b - \ln a)}\Big(b^2\ln a - a^2\ln b + (a^2 - b^2)\ln r\Big) + \frac{r^2}{4}.
\]
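A numerical sanity check of this annulus solution (not part of the assigned write-up), with arbitrary test radii: it should vanish on both circles and satisfy $u_{rr} + \frac1r u_r = 1$ in between.

```python
import math

# Check that the annulus solution vanishes at r = a and r = b and satisfies
# u_rr + u_r / r = 1 away from the boundary (a, b are arbitrary test radii).
a, b, h = 1.0, 2.0, 1e-3

def u(r):
    c = (b**2 * math.log(a) - a**2 * math.log(b)
         + (a**2 - b**2) * math.log(r)) / (4 * (math.log(b) - math.log(a)))
    return c + r**2 / 4

r = 1.5  # an interior sample point
lap = ((u(r + h) - 2 * u(r) + u(r - h)) / h**2
       + (u(r + h) - u(r - h)) / (2 * h * r))
```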
Page 160, problem 8. The general solution of
\[
\Delta u = u_{rr} + \frac{2}{r}u_r = 1
\]
is
\[
u = c_1 + \frac{c_2}{r} + \frac{r^2}{6}.
\]
Now we need to arrange for $u = 0$ when $r = a$ and for $u_r = 0$ when $r = b$, in other words, to solve the linear system
\[
\begin{bmatrix} 1 & a^{-1} \\ 0 & -b^{-2} \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} -\frac16 a^2 \\ -\frac13 b \end{bmatrix}
\]
for $c_1$ and $c_2$. The solution is
\[
\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= -b^2
\begin{bmatrix} -b^{-2} & -a^{-1} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} -\frac16 a^2 \\ -\frac13 b \end{bmatrix}
= -b^2
\begin{bmatrix} \frac16(a^2 b^{-2} + 2a^{-1}b) \\ -\frac13 b \end{bmatrix}
= \begin{bmatrix} -\frac16(a^2 + 2a^{-1}b^3) \\ \frac13 b^3 \end{bmatrix}.
\]
Therefore
\[
u = \frac{1}{6}\left(r^2 + \frac{2b^3}{r} - a^2 - \frac{2b^3}{a}\right).
\]
As $a \to 0$, the "hole" in the middle of the sphere closes up, and the problem would seem to approach a Neumann problem on the solid ball. Unfortunately, though, the necessary condition for a solution will not be satisfied, since the normal derivative on the surface of the ball is zero, but $\Delta u = 1$ in the interior and the integrals of these can't agree. That is why there is a singularity forming, with the $a$ in the denominator of the constant term. The solution will approach $-\infty$ near the $r = b$ boundary.
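The mixed boundary conditions can be verified numerically (not part of the assigned write-up), with arbitrary test radii:

```python
# Check that u(r) = (r^2 + 2b^3/r - a^2 - 2b^3/a)/6 satisfies u(a) = 0 and
# u'(b) = 0 (a, b are arbitrary inner/outer radii for the check).
a, b, h = 1.0, 2.0, 1e-5

def u(r):
    return (r**2 + 2 * b**3 / r - a**2 - 2 * b**3 / a) / 6.0

ur_b = (u(b + h) - u(b - h)) / (2 * h)  # approximate u'(b): should be 0
```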
Page 164, problem 4. To find the harmonic function $u(x,y)$ with the given boundary conditions, we will need to add together harmonic functions $v$ and $w$ that have inhomogeneous conditions on only one side. So let $v$ satisfy
\[
v(x,0) = x,\quad v(x,1) = 0,\quad v_x(0,y) = 0,\quad v_x(1,y) = 0
\]
and let $w$ satisfy
\[
w(x,0) = 0,\quad w(x,1) = 0,\quad w_x(0,y) = 0,\quad w_x(1,y) = y^2.
\]
Since $v_x$ is zero for $x = 0$ and $x = 1$, the series for $v$ will have cosines of $x$, and since $v$ is zero when $y = 1$ but not when $y = 0$, we'll have hyperbolic sines of $1 - y$ (except when $\lambda = 0$, where we have $1 - y$), as follows:
\[
v(x,y) = \frac{a_0}{2}(1-y) + \sum_{n=1}^\infty a_n\sinh n\pi(1-y)\cos n\pi x.
\]
So for $y = 0$, we'll want
\[
v(x,0) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n\sinh n\pi\cos n\pi x = x.
\]
Therefore, we need
\[
a_0 = 2\int_0^1 x\,dx = 1
\]
and
\[
a_n = \frac{2}{\sinh n\pi}\int_0^1 x\cos n\pi x\,dx
= \frac{2}{\sinh n\pi}\left[\frac{x\sin n\pi x}{n\pi} + \frac{\cos n\pi x}{n^2\pi^2}\right]_0^1
= \begin{cases} 0 & \text{even } n \\[1ex] \dfrac{-4}{n^2\pi^2\sinh n\pi} & \text{odd } n \end{cases}
\]
Therefore
\[
v(x,y) = \frac12(1-y) - \sum_{k=0}^\infty \frac{4\sinh[(2k+1)\pi(1-y)]}{(2k+1)^2\pi^2\sinh[(2k+1)\pi]}\cos[(2k+1)\pi x].
\]
Now $w$ is zero for $y = 0$ and $y = 1$, so the series for $w$ will have sines of $y$, and since $w_x$ is zero when $x = 0$ but not when $x = 1$, we'll have hyperbolic cosines of $x$, as follows:
\[
w(x,y) = \sum_{n=1}^\infty b_n\cosh n\pi x\sin n\pi y.
\]
For $x = 1$, we'll want
\[
w_x(1,y) = \sum_{n=1}^\infty n\pi b_n\sinh n\pi\sin n\pi y = y^2,
\]
so we need
\[
b_n = \frac{2}{n\pi\sinh n\pi}\int_0^1 y^2\sin n\pi y\,dy
= \frac{2}{n\pi\sinh n\pi}\left[-\frac{y^2\cos n\pi y}{n\pi} + \frac{2y\sin n\pi y}{n^2\pi^2} + \frac{2\cos n\pi y}{n^3\pi^3}\right]_0^1
\]
\[
= \frac{2}{n\pi\sinh n\pi}\left(\frac{(-1)^{n+1}}{n\pi} + \frac{2((-1)^n - 1)}{n^3\pi^3}\right)
= \begin{cases} \dfrac{-2}{n^2\pi^2\sinh n\pi} & \text{even } n \\[2ex] \dfrac{2}{n^2\pi^2\sinh n\pi} - \dfrac{8}{n^4\pi^4\sinh n\pi} & \text{odd } n \end{cases}
\]
Therefore
\[
w(x,y) = \sum_{k=1}^\infty \left(\frac{2}{(2k-1)^2\pi^2\sinh(2k-1)\pi} - \frac{8}{(2k-1)^4\pi^4\sinh(2k-1)\pi}\right)\cosh[(2k-1)\pi x]\sin[(2k-1)\pi y]
- \sum_{k=1}^\infty \frac{1}{2k^2\pi^2\sinh 2k\pi}\cosh 2k\pi x\sin 2k\pi y.
\]
So finally,
\[
u(x,y) = v(x,y) + w(x,y)
= \frac12(1-y) - \sum_{k=0}^\infty \frac{4\sinh[(2k+1)\pi(1-y)]}{(2k+1)^2\pi^2\sinh[(2k+1)\pi]}\cos[(2k+1)\pi x]
- \sum_{k=1}^\infty \frac{1}{2k^2\pi^2\sinh 2k\pi}\cosh 2k\pi x\sin 2k\pi y
\]
\[
\qquad + \sum_{k=1}^\infty \left(\frac{2}{(2k-1)^2\pi^2\sinh(2k-1)\pi} - \frac{8}{(2k-1)^4\pi^4\sinh(2k-1)\pi}\right)\cosh[(2k-1)\pi x]\sin[(2k-1)\pi y].
\]
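A partial-sum sanity check of the $v$ piece (not part of the assigned write-up): at $y = 0$ the hyperbolic-sine ratios equal 1, and the series reduces to the cosine series of $x$ on $(0,1)$, so a truncation should reproduce $x$ at an interior sample point (the point $x = 0.3$ below is arbitrary).

```python
import math

# At y = 0 the series for v reduces to the cosine expansion of x on (0, 1):
#   x ~ 1/2 - sum_{k>=0} 4 cos((2k+1) pi x) / ((2k+1)^2 pi^2).
# Check convergence at a sample interior point.
x = 0.3
K = 2000
partial = 0.5 - sum(
    4 * math.cos((2 * k + 1) * math.pi * x) / ((2 * k + 1) ** 2 * math.pi ** 2)
    for k in range(K))
```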
Page 164, problem 7. (a) Since $u(0,y) = u(\pi,y) = 0$, we'll have $X(0) = X(\pi) = 0$ in the separated solutions, and so $X(x) = \sin nx$ (with eigenvalue $n^2$). So $Y(y) = ae^{ny} + be^{-ny}$, but because we want the solution to decay to zero as $y \to \infty$, we must have $a = 0$. So the solution is
\[
u(x,y) = \sum_{n=1}^\infty B_n e^{-ny}\sin nx
\]
where
\[
B_n = \frac{2}{\pi}\int_0^\pi h(x)\sin nx\,dx.
\]
(b) If we didn't have the condition at infinity, then we would have
\[
u(x,y) = \sum_{n=1}^\infty (A_n e^{ny} + B_n e^{-ny})\sin nx
\]
and there would be no way to determine the coefficients, since we would know only that
\[
A_n + B_n = \frac{2}{\pi}\int_0^\pi h(x)\sin nx\,dx.
\]
Page 172, problem 3. Let's start by proving the trig identity for $\sin 3\theta$:
\begin{align*}
\sin 3\theta = \sin(\theta + 2\theta) &= \sin\theta\cos 2\theta + \cos\theta\sin 2\theta \\
&= \sin\theta(\cos^2\theta - \sin^2\theta) + 2\cos\theta(\sin\theta\cos\theta) \\
&= 3\sin\theta\cos^2\theta - \sin^3\theta = 3\sin\theta(1 - \sin^2\theta) - \sin^3\theta \\
&= 3\sin\theta - 4\sin^3\theta.
\end{align*}
So we have
\[
\sin^3\theta = \frac34\sin\theta - \frac14\sin 3\theta.
\]
We need the harmonic function
\[
u = \frac12 A_0 + \sum_{n=1}^\infty r^n(A_n\cos n\theta + B_n\sin n\theta)
\]
to equal $\frac34\sin\theta - \frac14\sin 3\theta$ when $r = a$. This will be so if all the $A_n = 0$ and all the $B_n = 0$ except for $n = 1$ and $n = 3$, for which
\[
B_1 = \frac{3}{4a} \quad\text{and}\quad B_3 = -\frac{1}{4a^3}.
\]
So
\[
u = \frac{3}{4a}\,r\sin\theta - \frac{1}{4a^3}\,r^3\sin 3\theta.
\]
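Both the identity and the boundary condition can be verified numerically (not part of the assigned write-up); the radius $a = 2$ below is an arbitrary test choice.

```python
import math

# Check sin^3 t = (3/4) sin t - (1/4) sin 3t at sample points, and that
# u(r, t) = (3/(4a)) r sin t - (1/(4a^3)) r^3 sin 3t equals sin^3 t on the
# circle r = a (a is an arbitrary radius for the check).
a = 2.0

def u(r, t):
    return 3 / (4 * a) * r * math.sin(t) - 1 / (4 * a**3) * r**3 * math.sin(3 * t)

pts = [0.1 * i for i in range(63)]  # sample angles covering [0, 2*pi)
max_identity_err = max(
    abs(math.sin(t) ** 3 - (0.75 * math.sin(t) - 0.25 * math.sin(3 * t)))
    for t in pts)
max_boundary_err = max(abs(u(a, t) - math.sin(t) ** 3) for t in pts)
```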