Math 21b — Infinite Fourier series
OH: Tuesday 4–5, Thursday 3–4
Previously: Inner product spaces, projections, Fourier series
Today: Infinite Fourier series, Parseval’s identity, diagonalizing differential operators
Infinite Fourier series
Last time, we introduced the Fourier series of a function f on [−π, π] as an example of
orthogonal projection in an inner product space; here, the inner product is
\[
\langle f, g \rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)\,g(x)\, dx
\]
and the subspace is TP_n, the space of “trigonometric polynomials”: linear combinations of 1 and the functions \sin(kx), \cos(kx) for k \le n. Of course, n is finite, but it can be as large as we like,
and it turns out that much of the time it makes sense to take the limit:
Theorem 1. If f \in C^\infty, then on [-\pi, \pi] it is equal to the following infinite series:
\[
f(x) = \frac{a_0}{\sqrt{2}} + \sum_{k=1}^{\infty} \bigl( a_k \cos(kx) + b_k \sin(kx) \bigr),
\]
where we recall that
\[
a_k = \langle f, \cos(kx) \rangle, \qquad b_k = \langle f, \sin(kx) \rangle,
\]
and a_0 = \langle f, 1 \rangle / \sqrt{2}.
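As a concrete check of these formulas, here is a minimal sympy sketch (the example function f(x) = x and the helper names are my own, not from the notes) computing a_0, a_k, b_k with the 1/\pi-weighted inner product:

```python
import sympy as sp

x = sp.symbols('x')
f = x  # example function on [-pi, pi]; any integrable f works

def a(k):
    # a_k = <f, cos(kx)> = (1/pi) * integral of f(x) cos(kx) over [-pi, pi]
    return sp.integrate(f * sp.cos(k * x), (x, -sp.pi, sp.pi)) / sp.pi

def b(k):
    # b_k = <f, sin(kx)>
    return sp.integrate(f * sp.sin(k * x), (x, -sp.pi, sp.pi)) / sp.pi

# a_0 = <f, 1> / sqrt(2)
a0 = sp.integrate(f, (x, -sp.pi, sp.pi)) / (sp.pi * sp.sqrt(2))

print(a0)                                        # 0 (f is odd)
print([sp.simplify(a(k)) for k in range(1, 4)])  # [0, 0, 0]
print([sp.simplify(b(k)) for k in range(1, 4)])  # [2, -1, 2/3]
```

For f(x) = x one gets the classical sawtooth coefficients b_k = 2(-1)^{k+1}/k.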
Another way to interpret this is that these trig functions form an infinite orthonormal
basis for C[−π, π], although this statement is imprecise in many ways. However, one fact
about coordinates in orthonormal bases remains true:
Theorem 2. (“Pythagorean theorem”) We have the equality
\[
\|f\|^2 = a_0^2 + \sum_{k=1}^{\infty} \bigl( a_k^2 + b_k^2 \bigr),
\]
where \|f\|^2 = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x)^2\, dx. This is called Parseval’s identity.
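Parseval’s identity can be checked numerically; the sketch below (my own example, not from the notes) uses f(x) = x, whose only nonzero coefficients are b_k = 2(-1)^{k+1}/k:

```python
import math

# For f(x) = x on [-pi, pi]: ||f||^2 = (1/pi) * integral of x^2 dx = 2*pi^2/3,
# and the nonzero Fourier coefficients are b_k = 2*(-1)**(k+1)/k, so b_k^2 = 4/k^2.
norm_sq = 2 * math.pi ** 2 / 3
partial = sum((2 / k) ** 2 for k in range(1, 100001))
print(norm_sq, partial)  # the two numbers agree to about 4 decimal places
```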
Aside from how to do the integrals, this is the only thing you ever need to know about
Fourier series. In doing the integrals, you should:
1. Remember your trig identities! These include the formulas for sums of angles:
\[
\sin(x + y) = \sin(x)\cos(y) + \cos(x)\sin(y), \qquad
\cos(x + y) = \cos(x)\cos(y) - \sin(x)\sin(y),
\]
which you can remember by comparing them to the much easier formulas
\[
\sin(2x) = 2\sin(x)\cos(x), \qquad \cos(2x) = \cos^2(x) - \sin^2(x).
\]
Of course, you must never forget that \sin^2(x) + \cos^2(x) = 1, and using the cosine identity above with this one, you can get formulas for squared trig functions (which are often thought of as half-angle formulas):
\[
\sin^2(x) = \frac{1 - \cos(2x)}{2}, \qquad \cos^2(x) = \frac{1 + \cos(2x)}{2}.
\]
Using the angle-sum formulas and a little algebraic ingenuity, you can also get identities which allow you to break up products of trig functions into sums with different angles:
\[
\begin{aligned}
\sin(x)\sin(y) &= \tfrac{1}{2}\bigl(\cos(x - y) - \cos(x + y)\bigr), \\
\sin(x)\cos(y) &= \tfrac{1}{2}\bigl(\sin(x + y) + \sin(x - y)\bigr), \\
\cos(x)\cos(y) &= \tfrac{1}{2}\bigl(\cos(x - y) + \cos(x + y)\bigr).
\end{aligned}
\]
Knowing these well will speed up your computations a lot.
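If you ever doubt one of these product-to-sum identities mid-computation, a quick numerical spot check (a throwaway sketch, not part of the notes) settles it:

```python
import math
import random

random.seed(0)
for _ in range(100):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert math.isclose(math.sin(x) * math.sin(y),
                        (math.cos(x - y) - math.cos(x + y)) / 2, abs_tol=1e-12)
    assert math.isclose(math.sin(x) * math.cos(y),
                        (math.sin(x + y) + math.sin(x - y)) / 2, abs_tol=1e-12)
    assert math.isclose(math.cos(x) * math.cos(y),
                        (math.cos(x - y) + math.cos(x + y)) / 2, abs_tol=1e-12)
print("all three product-to-sum identities hold at 100 random points")
```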
2. Remember your integration techniques! The single most important one is integration by parts:
\[
\int_a^b u\, dv = uv \Big|_a^b - \int_a^b v\, du,
\]
where u and v are functions of x and dv = v'\,dx is a “differential”. Use this to differentiate polynomials (which tend to go away when you do this) and integrate trig functions. Sometimes, you can also pull tricks when there are no polynomials:
\[
\int e^{2x} \sin(x)\, dx = (e^{2x})(-\cos(x)) - \int (2e^{2x})(-\cos(x))\, dx
= -e^{2x}\cos(x) + 2\int e^{2x}\cos(x)\, dx,
\]
and then again:
\[
\int e^{2x} \cos(x)\, dx = (e^{2x})(\sin(x)) - \int (2e^{2x})(\sin(x))\, dx
= e^{2x}\sin(x) - 2\int e^{2x}\sin(x)\, dx.
\]
All together, you get
\[
\int e^{2x} \sin(x)\, dx = -e^{2x}\cos(x) + 2e^{2x}\sin(x) - 4\int e^{2x}\sin(x)\, dx,
\]
so solving for the integral, you get
\[
\int e^{2x} \sin(x)\, dx = -\frac{1}{5} e^{2x}\cos(x) + \frac{2}{5} e^{2x}\sin(x) + C.
\]
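You can verify this antiderivative by differentiating it; a one-line sympy check (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
# Antiderivative of e^(2x) sin(x) obtained from the integration-by-parts computation.
F = -sp.exp(2 * x) * sp.cos(x) / 5 + 2 * sp.exp(2 * x) * sp.sin(x) / 5
print(sp.simplify(sp.diff(F, x) - sp.exp(2 * x) * sp.sin(x)))  # 0
```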
If you are brave, you can also do this one with complex numbers:
\[
\int e^{2x} \sin(x)\, dx = \int e^{2x}\, \frac{e^{ix} - e^{-ix}}{2i}\, dx
= \frac{1}{2i} \int \bigl( e^{(2+i)x} - e^{(2-i)x} \bigr)\, dx
= \frac{1}{2i} \left( \frac{e^{(2+i)x}}{2+i} - \frac{e^{(2-i)x}}{2-i} \right),
\]
but then you have to turn the answer back into real form!
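Turning it back means expanding the complex exponentials into real and imaginary parts; sympy can do that expansion (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x', real=True)
# The complex-exponential antiderivative of e^(2x) sin(x).
G = (sp.exp((2 + sp.I) * x) / (2 + sp.I)
     - sp.exp((2 - sp.I) * x) / (2 - sp.I)) / (2 * sp.I)
G_real = sp.simplify(sp.expand_complex(G))
print(G_real)  # a real expression, equal to -e^(2x)cos(x)/5 + 2e^(2x)sin(x)/5
```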
3. When you are dealing with piecewise-defined functions, you have to break up the integral. For example,
\[
\langle |x|, \sin(x) \rangle = \frac{1}{\pi} \int_{-\pi}^{\pi} |x| \sin(x)\, dx
= \frac{1}{\pi} \left( \int_{-\pi}^{0} (-x)\sin(x)\, dx + \int_{0}^{\pi} x \sin(x)\, dx \right) = 0,
\]
since substituting x \mapsto -x shows the two pieces cancel: |x|\sin(x) is odd. (If |x| were paired with an even function like \cos(kx) instead, the two pieces would be equal, and the integral would be twice the piece on [0, \pi].)
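A quick sympy evaluation of the two pieces (a sketch of this example, not part of the notes) — since |x| is even and \sin(x) is odd, the pieces cancel exactly:

```python
import sympy as sp

x = sp.symbols('x')
# Split the integral of |x| sin(x) at 0, where |x| changes formula.
left = sp.integrate(-x * sp.sin(x), (x, -sp.pi, 0))   # |x| = -x on [-pi, 0]
right = sp.integrate(x * sp.sin(x), (x, 0, sp.pi))    # |x| = x on [0, pi]
print(left, right, (left + right) / sp.pi)  # -pi pi 0
```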
Diagonalizing differential operators
Remember that when we solved the linear system of differential equations
\[
\frac{d\vec{v}}{dt} = A\vec{v}(t),
\]
we did it by diagonalizing A. If \vec{v}(t) = \sum_\lambda a_\lambda(t)\vec{v}_\lambda is expressed with “moving” coordinates in the eigenbasis, then each coordinate must satisfy a_\lambda'(t) = \lambda a_\lambda(t), and we found that a_\lambda(t) = k_\lambda e^{\lambda t} for some constant k_\lambda. Another way of saying this is:
An eigenvector \vec{v}_\lambda evolves in the continuous dynamical system as e^{\lambda t}\vec{v}_\lambda. If \vec{v}(0) = \sum_\lambda k_\lambda \vec{v}_\lambda is the initial state, then it evolves into \vec{v}(t) = \sum_\lambda k_\lambda e^{\lambda t}\vec{v}_\lambda.
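In matrix form, the recipe is: find the eigen-coordinates of \vec{v}(0), multiply each by e^{\lambda t}, and reassemble. A small numpy sketch (the 2×2 matrix A and initial state are my own example, not from the notes):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # example symmetric matrix
lam, V = np.linalg.eigh(A)          # eigenvalues and orthonormal eigenvectors
v0 = np.array([1.0, 0.0])           # initial state v(0)
k = V.T @ v0                        # coordinates k_lambda of v(0) in the eigenbasis

def v(t):
    # Each eigen-coordinate evolves as k_lambda * e^(lambda * t).
    return V @ (k * np.exp(lam * t))

# For this particular A, the exact solution is v(t) = (cosh t, sinh t).
t = 0.7
print(v(t), np.cosh(t), np.sinh(t))
```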
The point of introducing Fourier series is that we can do this for other kinds of differential
equations now. We will talk next time about how we will actually solve such equations; for
now, we observe that:
The Fourier basis diagonalizes the operator D_x^2 = \partial^2/\partial x^2 on [-\pi, \pi].
That is, 1/\sqrt{2}, \sin(kx), \cos(kx) are an eigenbasis for this differential operator; you might imagine that we can solve equations like D_t f(x, t) = D_x^2 f(x, t) using this fact, and we will do this on Wednesday. Right now, we just verify:
\[
D_x^2 \sin(kx) = \frac{d^2}{dx^2}\sin(kx) = -k^2 \sin(kx),
\]
and likewise D_x^2 \cos(kx) = -k^2 \cos(kx), so that these functions have eigenvalue -k^2 (you
might notice that we have an orthonormal eigenbasis, so that in fact, Dx2 can be said to be
“symmetric”, although we don’t know what it means to take the transpose).
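The verification is a one-line symbolic differentiation (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
k = sp.symbols('k', integer=True, positive=True)
# D_x^2 applied to the basis functions: each is an eigenfunction of d^2/dx^2.
print(sp.diff(sp.sin(k * x), x, 2))   # -k**2*sin(k*x)
print(sp.diff(sp.cos(k * x), x, 2))   # -k**2*cos(k*x)
print(sp.diff(1 / sp.sqrt(2), x, 2))  # 0: the constant has eigenvalue 0
```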