12 Infinite Series
Many of the functions we will be dealing with later are described using “infinite series”, often either power series or series of eigenfunctions for certain self-adjoint operators (“Fourier series” are the best-known examples of eigenfunction series). You were probably introduced to infinite series in calculus, where their main use is in providing fodder for seemingly meaningless convergence tests. With luck, you saw real applications of infinite series in later courses.
In this set of notes, we will quickly review the basic theory of infinite series, and further develop those aspects that we will find most useful later. With luck, we may even stumble across an application.
12.1 Introduction
Recall that an infinite series (often shortened to series) is mathspeak for a summation with an infinite number of terms,
$$\sum_{k=\gamma}^{\infty} u_k = u_\gamma + u_{\gamma+1} + u_{\gamma+2} + u_{\gamma+3} + \cdots .$$
The $\gamma$ is a fixed integer that varies from series to series. The $u_k$'s can be anything that can be added together — numbers, vectors, matrices, functions, … . We will be most interested in infinite series of numbers and of functions. Some examples are:
$$\sum_{k=1}^{\infty} \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots \qquad\text{(the harmonic series)}$$

$$\sum_{k=1}^{\infty} (-1)^{k+1}\frac{1}{k} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots \qquad\text{(the alternating harmonic series)}$$

$$\sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^{k} = \left(\frac{1}{3}\right)^{0} + \left(\frac{1}{3}\right)^{1} + \left(\frac{1}{3}\right)^{2} + \left(\frac{1}{3}\right)^{3} + \cdots \qquad\text{(a geometric series)}$$

$$\sum_{k=0}^{\infty} \left(\frac{i}{3}\right)^{k} = \left(\frac{i}{3}\right)^{0} + \left(\frac{i}{3}\right)^{1} + \left(\frac{i}{3}\right)^{2} + \left(\frac{i}{3}\right)^{3} + \cdots \qquad\text{(another geometric series)}$$

$$3 + .1 + .04 + .001 + .0005 + \cdots \qquad (\pi)$$

version: 1/17/2014
Chapter & Page: 12–2
$$\sum_{k=2}^{\infty} 3^k = 3^2 + 3^3 + 3^4 + 3^5 + \cdots \qquad\text{(yet another geometric series)}$$

$$\sum_{k=0}^{\infty} \frac{1}{k!}\, x^k = x^0 + x^1 + \frac{1}{2}x^2 + \frac{1}{6}x^3 + \cdots \qquad\text{(a power series)}$$

$$\sum_{k=1}^{\infty} \frac{1}{k^2}\sin(k\pi x) = \sin(\pi x) + \frac{1}{4}\sin(2\pi x) + \frac{1}{9}\sin(3\pi x) + \frac{1}{16}\sin(4\pi x) + \cdots \qquad\text{(a Fourier sine series)}$$

$$\sum_{k=0}^{\infty} \frac{1}{k!}\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}^{k} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}^{0} + \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}^{1} + \frac{1}{2}\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}^{2} + \frac{1}{6}\begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}^{3} + \cdots \quad\text{(a power series of matrices)} \tag{12.1}$$
Suppose we have a particular infinite series $\sum_{k=\gamma}^{\infty} u_k$. For each integer $N \ge \gamma$, we will refer to $S_N$, defined by
$$S_N = \sum_{k=\gamma}^{N} u_k = u_\gamma + u_{\gamma+1} + u_{\gamma+2} + \cdots + u_N ,$$
as the corresponding $N$th partial sum of the series.¹
Naturally, we want to be able to say
$$\sum_{k=\gamma}^{\infty} u_k = \lim_{N\to\infty} S_N ,$$
which, of course, requires that the limit exists (in some sense — exactly what the limit means may depend on the application). If this limit exists (in the desired sense), then

1. $\sum_{k=\gamma}^{\infty} u_k$ actually adds up to something (namely, $\lim_{N\to\infty} S_N$). This thing it adds up to is called the sum of the series, and is also denoted by $\sum_{k=\gamma}^{\infty} u_k$.

2. We say the series is convergent.

If the limit does not exist, then

1. $\sum_{k=\gamma}^{\infty} u_k$ does not add up to anything, and we cannot treat this summation as representing anything other than a probably useless expression.

2. We say the series is divergent.
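Convergence in this sense is easy to explore numerically. The sketch below (Python, my own illustration, not part of the original notes) computes partial sums $S_N$ of $\sum_{k=1}^{\infty} 1/k^2$; the sum of that particular series is $\pi^2/6$, a standard fact quoted here only as a check.

```python
import math

def partial_sum(terms, N, start=1):
    """S_N = sum of u_k for k = start, ..., N."""
    return sum(terms(k) for k in range(start, N + 1))

# terms of a convergent series of numbers (my own choice of example)
u = lambda k: 1 / k**2

for N in (10, 100, 10_000):
    print(N, partial_sum(u, N))   # the S_N settle down as N grows

print(math.pi**2 / 6)             # the known sum, for comparison
```

Watching the printed $S_N$ stabilize is exactly watching $\lim_{N\to\infty} S_N$ exist.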
It should be obvious that the convergence or divergence of an infinite series does not depend on its first few terms. After all, if $\sum_{k=\gamma}^{\infty} u_k$ is any infinite series and $\eta$ is an integer greater than $\gamma$, then
$$\sum_{k=\gamma}^{\infty} u_k = \sum_{k=\gamma}^{\eta-1} u_k + \sum_{k=\eta}^{\infty} u_k$$
and, since the summation from $k = \gamma$ to $k = \eta - 1$ is a simple finite sum,
$$\sum_{k=\gamma}^{\infty} u_k \text{ converges} \iff \sum_{k=\eta}^{\infty} u_k \text{ converges} .$$

¹ Alternatively, you can let $S_N$ be the sum of the first $N$ terms. It doesn't really matter.
We should also recall that series can be added together and multiplied by scalars in a natural manner. Consider the general case where
$$\sum_{k=\gamma}^{\infty} A_k \qquad\text{and}\qquad \sum_{k=\gamma}^{\infty} B_k$$
are two series whose terms are all taken from the same vector space (e.g., all the $A_k$'s and $B_k$'s are numbers, or all are functions with a common domain, or all are matrices of the same size) and $\alpha$ and $\beta$ are any two scalars. Since the partial sums are finite, we have
$$\sum_{k=\gamma}^{N} [\alpha A_k + \beta B_k] = \sum_{k=\gamma}^{N} \alpha A_k + \sum_{k=\gamma}^{N} \beta B_k = \alpha \sum_{k=\gamma}^{N} A_k + \beta \sum_{k=\gamma}^{N} B_k .$$
Thus,
$$\lim_{N\to\infty} \sum_{k=\gamma}^{N} [\alpha A_k + \beta B_k] = \lim_{N\to\infty}\left[\alpha \sum_{k=\gamma}^{N} A_k + \beta \sum_{k=\gamma}^{N} B_k\right] = \alpha \lim_{N\to\infty} \sum_{k=\gamma}^{N} A_k + \beta \lim_{N\to\infty} \sum_{k=\gamma}^{N} B_k .$$
So if
$$\sum_{k=\gamma}^{\infty} A_k \qquad\text{and}\qquad \sum_{k=\gamma}^{\infty} B_k$$
are convergent, so is
$$\sum_{k=\gamma}^{\infty} [\alpha A_k + \beta B_k] ,$$
and we have
$$\sum_{k=\gamma}^{\infty} [\alpha A_k + \beta B_k] = \alpha \sum_{k=\gamma}^{\infty} A_k + \beta \sum_{k=\gamma}^{\infty} B_k .$$
Of course, omitting the $B_k$'s in the above would have shown that, for any nonzero scalar $\alpha$,
$$\sum_{k=\gamma}^{\infty} A_k \text{ is convergent} \iff \sum_{k=\gamma}^{\infty} \alpha A_k \text{ is convergent} ,$$
with
$$\sum_{k=\gamma}^{\infty} \alpha A_k = \alpha \sum_{k=\gamma}^{\infty} A_k .$$
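The linearity rule just derived is easy to sanity-check numerically. In this sketch (my own example, not from the notes) $A_k = (1/2)^k$ and $B_k = (1/3)^k$, whose sums are $2$ and $3/2$ by the geometric series formula established later in this chapter:

```python
# alpha * sum(A) + beta * sum(B) should equal sum(alpha*A_k + beta*B_k)
alpha, beta = 5.0, -2.0
N = 200   # partial sums this long are effectively converged here

A = [0.5**k for k in range(N)]      # sums to 2
B = [(1/3)**k for k in range(N)]    # sums to 3/2

combined = sum(alpha * a + beta * b for a, b in zip(A, B))
separate = alpha * sum(A) + beta * sum(B)
print(combined, separate)   # the two agree (up to rounding)
```

The agreement is just the finite-partial-sum identity above, carried to the limit.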
12.2 Series of Numbers
Basics on Convergence
In this section, we'll assume the $u_k$'s in any given $\sum_{k=\gamma}^{\infty} u_k$ are numbers (real or complex). Recall that
$$\sum_{k=\gamma}^{\infty} u_k \text{ converges} \implies |u_k| \to 0 \text{ as } k \to \infty .$$
Taking the contrapositive,
$$|u_k| \not\to 0 \text{ as } k \to \infty \implies \sum_{k=\gamma}^{\infty} u_k \text{ diverges} .$$
However, it is possible for $\sum_{k=\gamma}^{\infty} u_k$ to diverge even though $|u_k| \to 0$ as $k \to \infty$. The classic example is the harmonic series, $\sum_{k=1}^{\infty} \frac{1}{k}$, which diverges (easily verified via the integral test).
Our sense of convergence/divergence can be refined as follows:

1. $\sum_{k=\gamma}^{\infty} u_k$ converges if and only if $\lim_{N\to\infty} \sum_{k=\gamma}^{N} u_k$ exists as a finite number.

2. $\sum_{k=\gamma}^{\infty} u_k$ diverges if and only if it doesn't converge.

3. $\sum_{k=\gamma}^{\infty} u_k$ converges absolutely if and only if $\lim_{N\to\infty} \sum_{k=\gamma}^{N} |u_k|$ exists as a finite number.

4. $\sum_{k=\gamma}^{\infty} u_k$ converges conditionally if and only if the series converges, but does not converge absolutely; i.e.,
$$\lim_{N\to\infty} \sum_{k=\gamma}^{N} u_k \text{ exists as a finite number,} \qquad\text{but}\qquad \lim_{N\to\infty} \sum_{k=\gamma}^{N} |u_k| \text{ does not.}$$
It’s worth noting that an absolutely convergent series is also merely convergent. This is “easily verified” (see the appendix to this section, starting on page 12–14). Consequently, if a series is not convergent (i.e., is divergent) then it cannot be absolutely convergent.
It is also worth remembering that, if $\sum_{k=\gamma}^{\infty} u_k$ converges, then the error in using the $N$th partial sum instead of the entire series is the “tail end” of the series,
$$\text{Error}_N = \sum_{k=\gamma}^{\infty} u_k - \sum_{k=\gamma}^{N} u_k = \sum_{k=N+1}^{\infty} u_k .$$
An almost trivial, yet occasionally useful, consequence is that
$$\sum_{k=\gamma}^{\infty} u_k \text{ converges} \implies \sum_{k=N+1}^{\infty} u_k \to 0 \text{ as } N \to \infty .$$
When dealing with series, it is sometimes useful to remember that the “triangle inequality”,
$$|a + b| \le |a| + |b| ,$$
holds whenever $a$ and $b$ are any two real or complex numbers.² This inequality is easily extended to sums with many terms:
$$\left|u_\gamma + u_{\gamma+1} + u_{\gamma+2} + \cdots + u_N\right| \le |u_\gamma| + |u_{\gamma+1}| + |u_{\gamma+2}| + \cdots + |u_N| .$$
That is,
$$\left|\sum_{k=\gamma}^{N} u_k\right| \le \sum_{k=\gamma}^{N} |u_k| .$$
Letting $N \to \infty$,
$$\left|\sum_{k=\gamma}^{\infty} u_k\right| \le \sum_{k=\gamma}^{\infty} |u_k| .$$
This holds whether or not the series is absolutely convergent; however, if the series is not absolutely convergent, then the right-hand side will be $+\infty$. In particular, applying the triangle inequality to the expression given above for the error in using the $N$th partial sum instead of the entire series, we get
$$|\text{Error}_N| = \left|\sum_{k=N+1}^{\infty} u_k\right| \le \sum_{k=N+1}^{\infty} |u_k| .$$
If a series converges absolutely, it is because the $u_k$'s are shrinking to zero “fast enough” as $k \to \infty$ to ensure that
$$\sum_{k=\gamma}^{\infty} |u_k| < \infty$$
(some tests for determining when the terms are shrinking “fast enough” will be discussed later). A conditionally convergent series does not have its terms shrinking fast enough to ensure convergence. Instead, a conditionally convergent series converges because of a fortuitous pattern of cancellations in the summation.
!◮Example 12.1: Consider the partial sums of the alternating harmonic series
$$\sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots ,$$
with $S_1 = 1$, $S_2 = 1 - \frac{1}{2}$, $S_3 = 1 - \frac{1}{2} + \frac{1}{3}$, and so on. If we plot these partial sums on the real line (see figure 12.1), it becomes clear that this series converges to some value between whatever last two $S_N$'s are plotted. In particular, this value must be between $1/2$ and $1$.

² The validity of this inequality can easily be seen if you treat $a$ and $b$ as complex numbers, and consider the sides of the possible triangles with corners $a$, $b$ and $a + b$ in the complex plane. This also explains why it's called the triangle inequality.
[Figure 12.1: a plot on the real line of the partial sums $S_1$ through $S_7$ of the alternating harmonic series,
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \cdots ,$$
constructed by adding one term at a time. Note how each term partially cancels out the previous term.]
Extending these observations yields

Theorem 12.1 (The Alternating Series Test for Convergence)
Assume $\sum_{k=\gamma}^{\infty} u_k$ is an alternating series; that is,
$$\sum_{k=\gamma}^{\infty} u_k = \pm\left[\,|u_\gamma| - |u_{\gamma+1}| + |u_{\gamma+2}| - |u_{\gamma+3}| + \cdots\right] .$$
Suppose, further, that the terms “steadily decrease to zero”; that is,
$$u_k \to 0 \text{ as } k \to \infty \qquad\text{with}\qquad |u_k| > |u_{k+1}| > |u_{k+2}| > \cdots$$
for all $k$'s bigger than some fixed integer $K$. Then $\sum_{k=\gamma}^{\infty} u_k$ converges at least conditionally. Moreover, the error in using the $N$th partial sum for the entire series is less than the first term neglected,
$$\left|\sum_{k=\gamma}^{\infty} u_k - \sum_{k=\gamma}^{N} u_k\right| \le |u_{N+1}|$$
(provided we choose $N > K$).
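The error bound in theorem 12.1 can be checked numerically on the alternating harmonic series. Its sum is known (a standard calculus fact, quoted here only as a reference value) to be $\ln 2$; the sketch below, my own illustration, confirms that the error of $S_N$ never exceeds the first neglected term, $1/(N+1)$:

```python
import math

def S(N):
    """N-th partial sum of sum_{k=1}^inf (-1)^(k+1) / k."""
    return sum((-1)**(k + 1) / k for k in range(1, N + 1))

for N in (10, 100, 1000):
    err = abs(math.log(2) - S(N))
    bound = 1 / (N + 1)          # |first neglected term|
    print(N, err, bound, err <= bound)
```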
Unfortunately, when the sum of a series depends on “a fortuitous pattern of cancellations”,
changing that pattern changes the sum. This means that simply rearranging the terms in a
conditionally convergent series can change the value to which it adds up — or even yield a series
that no longer converges.
!◮Example 12.2: Consider the alternating harmonic series
$$\sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots ,$$
which, in example 12.1, we saw converged to some value less than $1$. Cleverly moving each negative term further down the series, we get
$$1 + \frac{1}{3} + \frac{1}{5} - \frac{1}{2} + \frac{1}{7} + \frac{1}{9} + \frac{1}{11} - \frac{1}{4} + \cdots$$
$$> 1 + \frac{1}{3} + \frac{1}{6} - \frac{1}{2} + \frac{1}{12} + \frac{1}{12} + \frac{1}{12} - \frac{1}{4} + \cdots$$
$$= 1 + \left[\frac{1}{2} - \frac{1}{2}\right] + \left[\frac{3}{12} - \frac{1}{4}\right] + \cdots$$
$$\ge 1 .$$
Thus, we can get an infinite series with exactly the same terms as in the alternating harmonic series (just rearranged), but which adds up to something greater than $1$ (if it converges at all).
It can be shown that rearranging the terms in an absolutely convergent series does not change
the value to which it adds up. Thus, if you plan to rearrange a series to simplify computations,
make sure it is an absolutely convergent series.
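The rearrangement effect is visible numerically. A classical construction (my own sketch; the specific grouping, two positive terms for every negative one, is a textbook example and not taken from these notes) pushes the rearranged sums near $\frac{3}{2}\ln 2 \approx 1.04$, above $1$, while the original order gives $\ln 2 \approx 0.69$:

```python
import math

G = 2000  # number of (+, +, -) groups

# original order: 1 - 1/2 + 1/3 - 1/4 + ...  (3G terms)
original = sum((-1)**(k + 1) / k for k in range(1, 3 * G + 1))

# rearranged, using each term exactly once:
# 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + 1/9 + 1/11 - 1/6 + ...
rearranged = sum(1/(4*g - 3) + 1/(4*g - 1) - 1/(2*g) for g in range(1, G + 1))

print(original)    # near ln 2  (about 0.69)
print(rearranged)  # near (3/2) ln 2  (about 1.04)
```

Same terms, different order, different sum: exactly what conditional convergence permits.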
The Geometric Series
Definition
Any series of the form
$$\sum_{k=\gamma}^{\infty} a r^k ,$$
where $a$ and $r$ are fixed (nonzero) numbers³, is called a geometric series, with
$$\sum_{k=0}^{\infty} r^k = r^0 + r^1 + r^2 + r^3 + \cdots$$
being the basic geometric series. Note that
$$\sum_{k=\gamma}^{\infty} a r^k = a r^\gamma + a r^{\gamma+1} + a r^{\gamma+2} + a r^{\gamma+3} + \cdots = a r^\gamma \left[r^0 + r^1 + r^2 + r^3 + \cdots\right] = a r^\gamma \sum_{k=0}^{\infty} r^k .$$
Convergence of the Geometric Series
First note that⁴
$$\lim_{k\to\infty} |r|^k = \begin{cases} \infty & \text{if } |r| > 1 \\ 1 & \text{if } |r| = 1 \\ 0 & \text{if } |r| < 1 \end{cases} .$$

³ We insist on $r$ being nonzero to avoid triviality when $\gamma \ge 0$ and to avoid division by $0$ when $\gamma < 0$!
⁴ Verify this, yourself, if it isn't obvious.
So, clearly, $\sum_{k=\gamma}^{\infty} a r^k$ will diverge if $|r| \ge 1$ (since the terms won't shrink to zero).
Now consider the partial sums of the basic geometric series,
$$S_N = \sum_{k=0}^{N} r^k = r^0 + r^1 + r^2 + r^3 + \cdots + r^N = 1 + r^1 + r^2 + r^3 + \cdots + r^N ,$$
when $|r| < 1$. Multiplying by $r$,
$$r S_N = r\left[1 + r^1 + r^2 + r^3 + \cdots + r^N\right] = r^1 + r^2 + r^3 + r^4 + \cdots + r^{N+1} ,$$
and subtracting this from $S_N$:
$$S_N - r S_N = \left[1 + r^1 + r^2 + \cdots + r^N\right] - \left[r^1 + r^2 + \cdots + r^{N+1}\right] ,$$
so
$$(1 - r) S_N = 1 - r^{N+1} .$$
Dividing through by $1 - r$ then gives a simple formula for computing the $N$th partial sum,
$$S_N = \frac{1 - r^{N+1}}{1 - r} . \tag{12.2}$$
This formula does not actually require that $|r| < 1$. But it is important that $|r| < 1$ if we want to let $N \to \infty$, because then
$$\lim_{N\to\infty} S_N = \lim_{N\to\infty} \frac{1 - r^{N+1}}{1 - r} = \frac{1 - 0}{1 - r} ,$$
which is a finite number. Thus, if $|r| < 1$, the basic geometric series converges and we have a simple formula for its sum:
$$\sum_{k=0}^{\infty} r^k = \frac{1}{1 - r} . \tag{12.3}$$
More generally,

Theorem 12.2 (Convergence of the Geometric Series)
The geometric series
$$\sum_{k=\gamma}^{\infty} a r^k$$
converges if and only if $|r| < 1$. Moreover, when $|r| < 1$,
$$\sum_{k=\gamma}^{\infty} a r^k = a r^\gamma \sum_{k=0}^{\infty} r^k = a r^\gamma \frac{1}{1 - r} .$$
!◮Example 12.3:
$$\sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^k = \frac{1}{1 - \frac{1}{3}} = \frac{3}{2}$$
and
$$\sum_{k=2}^{\infty} 8 \left(\frac{1}{3}\right)^k = 8 \left(\frac{1}{3}\right)^2 \sum_{k=0}^{\infty} \left(\frac{1}{3}\right)^k = 8 \cdot \frac{1}{9} \cdot \frac{3}{2} = \frac{4}{3} .$$
However,
$$\sum_{k=0}^{\infty} 3^k$$
diverges.
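Formula (12.2) and the sums in example 12.3 can be verified directly; this Python sketch is my own check, not part of the notes:

```python
def geometric_partial(r, N):
    """S_N = 1 + r + ... + r^N, computed term by term."""
    return sum(r**k for k in range(N + 1))

r, N = 1/3, 20
print(geometric_partial(r, N))        # term-by-term partial sum
print((1 - r**(N + 1)) / (1 - r))     # formula (12.2): same value

# the sums from example 12.3:
print(1 / (1 - 1/3))                      # sum of (1/3)^k       = 3/2
print(8 * (1/3)**2 * (1 / (1 - 1/3)))     # sum_{k>=2} 8(1/3)^k  = 4/3
```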
Convergence Tests for Other Series
The Idea of Bounded Partial Sums
Unfortunately, we usually cannot find such nice formulas as (12.2) for the partial sums of our
series, and so, determining whether a given series converges can be a challenge. That’s one
reason you spent so much time in your calculus class on “tests for convergence”. Of greatest
interest to us are the comparison and ratio tests. First, though, we really should note a very basic
test, the “bounded partial sums” test for series of nonnegative real numbers.
Theorem 12.3 (Bounded Partial Sums Test)
Suppose all the terms in
$$\sum_{k=\gamma}^{\infty} u_k$$
are nonnegative real numbers (i.e., $u_k \ge 0$ for each $k \ge \gamma$). Then this series converges if and only if there is a finite value $M$ bounding every partial sum,
$$S_N = \sum_{k=\gamma}^{N} u_k \le M \qquad\text{for every } N \ge \gamma . \tag{12.4}$$
PROOF: As usual, let $S_N$ denote the $N$th partial sum. Keeping in mind that the terms are all nonnegative real numbers, we see that
$$S_{N+1} = \sum_{k=\gamma}^{N+1} u_k = \sum_{k=\gamma}^{N} u_k + u_{N+1} = S_N + u_{N+1} \ge S_N \ge 0 .$$
So the partial sums form a “never decreasing” sequence,
$$0 \le S_\gamma \le S_{\gamma+1} \le S_{\gamma+2} \le S_{\gamma+3} \le \cdots \le S_N \le S_{N+1} \le \cdots .$$
Clearly, if $\sum_{k=\gamma}^{\infty} u_k$ converges, then we can choose
$$M = \sum_{k=\gamma}^{\infty} u_k = \lim_{N\to\infty} S_N .$$
On the other hand, if (12.4) holds, then the partial sums form a bounded, nondecreasing sequence,
$$S_\gamma \le S_{\gamma+1} \le S_{\gamma+2} \le S_{\gamma+3} \le \cdots \le S_N \le S_{N+1} \le \cdots \le M .$$
It is then easily seen that any such sequence must converge to some value $S_\infty$ less than or equal to $M$. Thus, our series converges with
$$\sum_{k=\gamma}^{\infty} u_k = \lim_{N\to\infty} S_N = S_\infty .$$
(If the convergence of the $S_N$'s is not obvious to you, let $I$ be the set of all real numbers that are less than or equal to at least one of the partial sums,
$$I = \{x \in \mathbb{R} : x \le S_N \text{ for some } N\} .$$
Note that $I$ is a subinterval of $(-\infty, M]$ containing $0$. So, for some real number $S_\infty$ in the interval $[0, M]$, we have
$$I = (-\infty, S_\infty) \qquad\text{or}\qquad I = (-\infty, S_\infty] .$$
Either way, if we choose any $\varepsilon > 0$ and set $x_\varepsilon = S_\infty - \varepsilon$, then $x_\varepsilon$ is in $I$ and, because of the way we constructed this interval, there must be an $N_\varepsilon$ such that
$$S_\infty - \varepsilon = x_\varepsilon \le S_{N_\varepsilon} .$$
By the choice of $x_\varepsilon$ and the fact that the partial sums form a nondecreasing sequence, we now have, for each choice of $\varepsilon > 0$, a corresponding $N_\varepsilon$ such that
$$|S_\infty - S_N| \le \varepsilon \qquad\text{whenever}\qquad N > N_\varepsilon .$$
This, you should recall, is the definition for $\lim_{N\to\infty} S_N$ existing and equaling $S_\infty$.)
The Comparison and Ratio Tests
The trick to using the “bounded partial sums test” is in finding the value M (or showing that
none exists). Many of the more widely used tests are really the bounded partial sums test along
with a moderately clever way of finding that M or showing it doesn’t exist. For example, if you
already have a convergent series of nonnegative real numbers, then you can use the sum of that
series as your M for verifying the convergence of ‘smaller’ series. That yields the well-known
comparison test.
Theorem 12.4 (Comparison Test)
Suppose
$$\sum_{k=\gamma_1}^{\infty} a_k \qquad\text{and}\qquad \sum_{k=\gamma_2}^{\infty} b_k$$
are two infinite series such that, for some integer $K$,
$$0 \le a_k \le b_k \qquad\text{for each } k \ge K .$$
Then,
$$\sum_{k=\gamma_2}^{\infty} b_k \text{ converges} \implies \sum_{k=\gamma_1}^{\infty} a_k \text{ converges}$$
and
$$\sum_{k=\gamma_1}^{\infty} a_k \text{ diverges} \implies \sum_{k=\gamma_2}^{\infty} b_k \text{ diverges} .$$
Combining either test above with the appropriate geometric series then yields the ratio test.

Theorem 12.5 (Ratio Test)
Let $\sum_{k=\gamma}^{\infty} u_k$ be an infinite series, and suppose we can find an integer $K$ and a positive value $r$ such that either
$$\left|\frac{u_{k+1}}{u_k}\right| \le r < 1 \qquad\text{for each } k \ge K$$
or
$$\left|\frac{u_{k+1}}{u_k}\right| \ge r \ge 1 \qquad\text{for each } k \ge K .$$
Then:
$$r < 1 \implies \sum_{k=\gamma}^{\infty} u_k \text{ converges (in fact, converges absolutely).}$$
$$r \ge 1 \implies \sum_{k=\gamma}^{\infty} u_k \text{ diverges.}$$

PROOF: If $r \ge 1$, then
$$\left|\frac{u_{k+1}}{u_k}\right| \ge r \ge 1 \implies |u_{k+1}| \ge |u_k| > 0 \qquad\text{for each } k \ge K .$$
Hence we cannot have $u_k \to 0$ as $k \to \infty$, which means the series must diverge.
On the other hand, if
$$\left|\frac{u_{k+1}}{u_k}\right| \le r < 1 \qquad\text{for each } k \ge K ,$$
then
$$|u_{k+1}| \le r |u_k| \qquad\text{for each } k \ge K .$$
Thus:
$$|u_{K+1}| \le r |u_K|$$
$$|u_{K+2}| \le r |u_{K+1}| \le r \cdot r |u_K| = r^2 |u_K|$$
$$|u_{K+3}| \le r |u_{K+2}| \le r \cdot r^2 |u_K| = r^3 |u_K|$$
$$\vdots$$
Consequently, using what we know about the geometric series (and the fact that $0 \le r < 1$),
$$\sum_{k=K}^{\infty} |u_k| = |u_K| + |u_{K+1}| + |u_{K+2}| + |u_{K+3}| + \cdots \le |u_K| + r |u_K| + r^2 |u_K| + r^3 |u_K| + \cdots = \sum_{j=0}^{\infty} |u_K|\, r^j = |u_K| \frac{1}{1 - r} .$$
Either the bounded partial sums test or the comparison test can clearly be invoked, assuring us that $\sum_{k=\gamma}^{\infty} |u_k|$ converges.
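The ratio test is easy to watch in action. For $\sum_{k\ge 1} k/2^k$ (my own example), $|u_{k+1}/u_k| = (k+1)/(2k)$, which stays at or below $3/4 < 1$ for every $k \ge 2$, so the series converges absolutely; its sum happens to be $2$, a standard power-series fact used here only as a check.

```python
u = lambda k: k / 2**k

# the ratios (k+1)/(2k) decrease toward 1/2; for k >= 2 they stay <= 3/4 < 1,
# so the ratio test applies with K = 2 and r = 3/4
for k in (2, 10, 50):
    print(k, u(k + 1) / u(k))

s = sum(u(k) for k in range(1, 200))
print(s)   # close to 2
```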
The Limit Comparison and Limit Ratio Tests
The limit comparison test is a variation of the comparison test that, when applicable, is often easier to use than the original test. Basically, it involves computing a limit of a ratio to see if the original comparison test could be invoked and what the result of that test would be. To see how this works, assume we have two infinite series
$$\sum_{k=\gamma_1}^{\infty} a_k \qquad\text{and}\qquad \sum_{k=\gamma_2}^{\infty} b_k ,$$
and that the limit
$$\lim_{k\to\infty} \left|\frac{a_k}{b_k}\right|$$
exists as a finite, nonzero value. Then there are two other finite real numbers $c$ and $C$ such that
$$0 < c < \lim_{k\to\infty} \left|\frac{a_k}{b_k}\right| < C < \infty .$$
By the nature of “limits”, this means there is an integer $K$ such that
$$0 < c < \left|\frac{a_k}{b_k}\right| < C < \infty \qquad\text{whenever } k \ge K .$$
By algebra, then, we have
$$0 < c |b_k| < |a_k| < C |b_k| \qquad\text{whenever } k \ge K .$$
By basic algebra and the comparison test, it follows that
$$\sum_{k=\gamma_2}^{\infty} |b_k| \text{ converges} \implies \sum_{k=\gamma_2}^{\infty} C |b_k| \text{ converges} \implies \sum_{k=\gamma_1}^{\infty} |a_k| \text{ converges} ,$$
while
$$\sum_{k=\gamma_1}^{\infty} |a_k| \text{ diverges} \implies \sum_{k=\gamma_2}^{\infty} C |b_k| \text{ diverges} \implies \sum_{k=\gamma_2}^{\infty} |b_k| \text{ diverges} .$$
These results, along with observing what can be salvaged when
$$\lim_{k\to\infty} \left|\frac{a_k}{b_k}\right| = 0 \qquad\text{or}\qquad \lim_{k\to\infty} \left|\frac{a_k}{b_k}\right| = \infty ,$$
can be summarized as follows:
Theorem 12.6 (Limit Comparison Test)
Suppose
$$\sum_{k=\gamma_1}^{\infty} a_k \qquad\text{and}\qquad \sum_{k=\gamma_2}^{\infty} b_k$$
are two infinite series such that
$$\lim_{k\to\infty} \left|\frac{a_k}{b_k}\right|$$
exists as either a finite number or as $+\infty$. Then:
$$\lim_{k\to\infty} \left|\frac{a_k}{b_k}\right| < \infty \ \text{ and }\ \sum_{k=\gamma_2}^{\infty} |b_k| \text{ converges} \implies \sum_{k=\gamma_1}^{\infty} |a_k| \text{ converges} ,$$
while
$$\lim_{k\to\infty} \left|\frac{a_k}{b_k}\right| > 0 \ \text{ and }\ \sum_{k=\gamma_2}^{\infty} |b_k| \text{ diverges} \implies \sum_{k=\gamma_1}^{\infty} |a_k| \text{ diverges} .$$
In a similar manner, if
$$\lim_{k\to\infty} \left|\frac{u_{k+1}}{u_k}\right|$$
exists and is not $1$, then you can easily verify that the ratio test can be applied to the series $\sum_{k=\gamma}^{\infty} u_k$ using a value of $r$ between this limit and $1$. If you think about it, that means you don't really have to find $r$, and you can simplify the ratio test (in this case) to

Theorem 12.7 (Limit Ratio Test)
Let $\sum_{k=\gamma}^{\infty} u_k$ be an infinite series for which
$$\lim_{k\to\infty} \left|\frac{u_{k+1}}{u_k}\right|$$
exists (either as a finite number or as $+\infty$). Then
$$\lim_{k\to\infty} \left|\frac{u_{k+1}}{u_k}\right| < 1 \implies \sum_{k=\gamma}^{\infty} u_k \text{ converges (in fact, converges absolutely).}$$
$$\lim_{k\to\infty} \left|\frac{u_{k+1}}{u_k}\right| > 1 \implies \sum_{k=\gamma}^{\infty} u_k \text{ diverges.}$$
(If the limit is $1$, there is no conclusion.)
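Theorem 12.7 applied to $\sum_{k\ge 0} 1/k!$ (my own example): here $|u_{k+1}/u_k| = 1/(k+1) \to 0 < 1$, so the series converges absolutely. Its sum is $e$, a standard fact quoted only as a numerical check.

```python
import math

# |u_{k+1} / u_k| for u_k = 1/k!  -- this equals 1/(k+1)
ratio = lambda k: (1 / math.factorial(k + 1)) / (1 / math.factorial(k))

print([ratio(k) for k in range(5)])   # heading to 0, so the limit ratio test applies

s = sum(1 / math.factorial(k) for k in range(30))
print(s, math.e)   # the partial sum is already extremely close to e
```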
Other Tests for Convergence
For some other important convergence tests, such as the integral test and the root test, see the first section of the text by Arfken, Weber and Harris or, better yet, see your old calculus text. By the way, some nifty extensions of the comparison test are given on pages 8 and 9 of Arfken, Weber and Harris.
Appendix: On the Convergence of Absolutely Convergent Series

Let us look at that claim made earlier that any absolutely convergent series must be a convergent series. That is, we claimed
$$\sum_{k=\gamma}^{\infty} |u_k| \text{ is convergent} \implies \sum_{k=\gamma}^{\infty} u_k \text{ is convergent} .$$
This can be easily verified by splitting the series into convenient ‘subseries’ and considering those subseries.
Let us start by considering the case where each $u_k$ is a real number and assume
$$\sum_{k=\gamma}^{\infty} |u_k| \text{ is convergent} .$$
We'll split $\sum_{k=\gamma}^{\infty} u_k$ into its positive and negative parts by setting
$$a_k = \begin{cases} |u_k| & \text{if } u_k \ge 0 \\ 0 & \text{if } u_k < 0 \end{cases} \qquad\text{and}\qquad b_k = \begin{cases} 0 & \text{if } u_k \ge 0 \\ |u_k| & \text{if } u_k < 0 \end{cases}$$
for each $k \ge \gamma$. Trivially, we have
$$0 \le a_k \le |u_k| \qquad\text{and}\qquad 0 \le b_k \le |u_k| \qquad\text{for each } k \ge \gamma .$$
Since $\sum_{k=\gamma}^{\infty} |u_k|$ converges, the comparison test assures us that
$$\sum_{k=\gamma}^{\infty} a_k \qquad\text{and}\qquad \sum_{k=\gamma}^{\infty} b_k$$
both also converge. Hence, as noted much earlier,
$$\sum_{k=\gamma}^{\infty} [a_k - b_k] \text{ converges and equals } \sum_{k=\gamma}^{\infty} a_k - \sum_{k=\gamma}^{\infty} b_k . \tag{12.5}$$
But, for each $k \ge \gamma$,
$$a_k - b_k = \begin{cases} |u_k| - 0 & \text{if } u_k \ge 0 \\ 0 - |u_k| & \text{if } u_k < 0 \end{cases} = u_k .$$
So statement (12.5) can be restated as
$$\sum_{k=\gamma}^{\infty} u_k \text{ converges and equals } \sum_{k=\gamma}^{\infty} a_k - \sum_{k=\gamma}^{\infty} b_k ,$$
verifying our claim when the terms are real numbers.
To verify the claim when the terms are complex numbers, simply split the series and its terms into real and imaginary parts, and apply the above.
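The splitting argument can be mirrored numerically. In this sketch (my own example) $u_k = (-1/2)^k$, which converges absolutely; its positive and negative parts recombine to the original sum, which is $2/3$ by the geometric series formula:

```python
N = 100
u = [(-0.5)**k for k in range(N)]

a = [abs(x) if x >= 0 else 0.0 for x in u]   # positive part
b = [abs(x) if x < 0 else 0.0 for x in u]    # negative part (as a nonnegative series)

print(sum(u))            # about 2/3
print(sum(a) - sum(b))   # same value, rebuilt from two nonnegative series
```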
12.3 Infinite Series of Functions
Functions as Terms
Let us now expand our studies to infinite series whose terms are functions. Such a series will be written, generically, as
$$\sum_{k=\gamma}^{\infty} u_k(x) .$$
Each $u_k(x)$ is a function of $x$ (of course other symbols for the variable may be used), and we will assume all these functions are defined on some common domain $D$. This domain can be a subset of either the real line, $\mathbb{R}$, or the complex plane, $\mathbb{C}$.
Initially, we will mainly be interested in power series — series for which there are constants $x_0$ and a bunch of $a_k$'s such that
$$u_k(x) = a_k (x - x_0)^k \qquad\text{for } k = 0, 1, 2, 3, \ldots .$$
If $x_0 = 0$ and all the $a_k$'s are the same value $a$, then the series reduces to a well-known geometric series
$$\sum_{k=0}^{\infty} a x^k .$$
Later, we will find ourselves very interested in series in which the u k (x)’s are given by such
functions as sines, cosines, Bessel functions, Hermite polynomials, and so forth.
Convergence and Error Functions
Suppose we have
$$\sum_{k=\gamma}^{\infty} u_k(x)$$
and a region $D$ on which each $u_k(x)$ is defined. Typically, $D$ is a subinterval of $\mathbb{R}$ when we are considering functions of a real variable, and is a two-dimensional subregion of $\mathbb{C}$ when we are considering functions of a complex variable (in which case we usually use $z$ instead of $x$ as the variable).
The basic notion of convergence of this series of functions (called “pointwise convergence”) naturally depends on the convergence of the series of numbers obtained by replacing the variable with specific values from $D$. We say that
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ converges pointwise on } D$$
if and only if
$$\sum_{k=\gamma}^{\infty} u_k(x_0) \text{ converges (as a series of numbers) for each } x_0 \in D .$$
!◮Example 12.4: We know the geometric series $\sum_{k=0}^{\infty} z^k$ converges whenever $|z| < 1$. So this series converges pointwise on any set of points that lies completely within the unit circle about $0$ in the complex plane. In particular, $\sum_{k=0}^{\infty} x^k$ converges pointwise on the interval $(-1, 1)$. It also converges pointwise on any subinterval of $(-1, 1)$. It does not, however, converge pointwise on, say, $(-1, 1]$, since $\sum_{k=0}^{\infty} x^k$ diverges when $x = 1$.
For much of our work, pointwise convergence will not be sufficient. To help describe a stronger type of convergence, let's look at the error in using the $N$th partial sum in place of the entire series,
$$E_N(x) = \left|\text{error in using } \sum_{k=\gamma}^{N} u_k(x) \text{ in place of } \sum_{k=\gamma}^{\infty} u_k(x)\right| = \left|\sum_{k=\gamma}^{\infty} u_k(x) - \sum_{k=\gamma}^{N} u_k(x)\right| = \left|\sum_{k=N+1}^{\infty} u_k(x)\right| .$$
If $\sum_{k=\gamma}^{\infty} u_k(x)$ converges pointwise on $D$, then $E_N(x)$ is a well-defined function on $D$, and, clearly,
$$E_N(x) \to 0 \text{ as } N \to \infty \qquad\text{for each } x \in D .$$
However, the rate at which $E_N(x) \to 0$ may depend wildly on the choice of $x$. To identify cases where $E_N(x)$ is not so poorly behaved on the region of interest, let $E_{N,\max}$ be the greatest error in using the $N$th partial sum in place of the entire series on $D$; that is,
$$E_{N,\max} = \max\,\{E_N(x) : x \in D\} .$$
We do allow this value to be infinite.⁵ Observe that $E_N(x)$ is a function of $x$, while $E_{N,\max}$ is a single number that “uniformly bounds” the possible values of $E_N(x)$. Accordingly, we then say that
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ converges uniformly on } D$$
if and only if
$$E_{N,\max} \to 0 \qquad\text{as}\qquad N \to \infty .$$
Observe that
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ converges uniformly on } D \implies \sum_{k=\gamma}^{\infty} u_k(x) \text{ converges pointwise on } D .$$
Before looking at an example that makes everything clear, let me tell you about a famous,
yet simple, test for uniform convergence.
⁵ Strictly speaking, we should refer to $E_{N,\max}$ as the least upper bound on the error in using the $N$th partial sum in place of the entire series on $D$, and we should have used “sup” instead of “max”. This is because the actual maximum may not exist, while the least upper bound always will. For example, if
$$\{E_N(x) : x \in D\} = [0, 4) ,$$
then we will use $E_{N,\max} = 4$ even though there is no $x$ in $D$ for which $E_N(x) = 4$.
Theorem 12.8 (Weierstrass M Test)
Let
$$\sum_{k=\gamma}^{\infty} u_k(x)$$
be a series with all the $u_k(x)$'s being functions defined on some domain $D$. Suppose, further, that, for each $k$, we can find a finite number $M_k$ such that

1. $|u_k(x)| \le M_k$ for every $x \in D$, and

2. $\sum_{k=\gamma}^{\infty} M_k$ converges.

Then
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ converges uniformly on } D .$$
Basically, the Weierstrass M test says that, if you can bound a series of functions over some domain by a single convergent series of numbers, then that series of functions will converge uniformly.
PROOF: First of all, for each $N \ge \gamma$ and each $x \in D$,
$$\sum_{k=\gamma}^{N} |u_k(x)| \le \sum_{k=\gamma}^{N} M_k \le \sum_{k=\gamma}^{\infty} M_k .$$
Since the last sum is convergent, it bounds the indicated partial sums, which tells us that $\sum_{k=\gamma}^{\infty} u_k(x)$ converges (absolutely) for each $x$ in $D$. In other words, $\sum_{k=\gamma}^{\infty} u_k(x)$ converges at least pointwise on $D$.
To verify that the convergence is uniform, we first observe that
$$E_N(x) = \left|\sum_{k=N+1}^{\infty} u_k(x)\right| \le \sum_{k=N+1}^{\infty} |u_k(x)| \le \sum_{k=N+1}^{\infty} M_k \qquad\text{for each } x \in D .$$
This, along with the convergence of $\sum_{k=\gamma}^{\infty} M_k$, gives
$$E_{N,\max} = \max\,\{E_N(x) : x \in D\} \le \sum_{k=N+1}^{\infty} M_k \to 0 \qquad\text{as}\qquad N \to \infty ,$$
verifying the uniform convergence of our original series.
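The M test applies directly to the Fourier sine series from the opening examples, $\sum_{k\ge 1} \sin(k\pi x)/k^2$: take $M_k = 1/k^2$, since $|\sin| \le 1$, and $\sum 1/k^2$ converges. The sketch below (my own check, using the standard tail bound $\sum_{k>N} 1/k^2 < 1/N$) confirms the resulting uniform error bound on a grid of $x$ values, with a long partial sum standing in for the full series:

```python
import math

def S(N, x):
    """N-th partial sum of sum_{k>=1} sin(k*pi*x) / k**2."""
    return sum(math.sin(k * math.pi * x) / k**2 for k in range(1, N + 1))

N, K = 10, 2000                          # K-term sum approximates the full series
xs = [i / 50 for i in range(-50, 51)]    # grid on [-1, 1]

worst = max(abs(S(K, x) - S(N, x)) for x in xs)
print(worst, 1 / N)   # the worst-case error stays below the uniform bound 1/N
```

The key point: the bound $1/N$ works for every $x$ at once, which is exactly uniformity.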
Now, for examples, let us consider the basic geometric power series
$$\sum_{k=0}^{\infty} u_k(x) = \sum_{k=0}^{\infty} x^k$$
on various subintervals of $\mathbb{R}$.
!◮Example 12.5: Consider
$$\sum_{k=0}^{\infty} u_k(x) = \sum_{k=0}^{\infty} x^k \qquad\text{on the interval } D = \left[-\frac{1}{2}, \frac{1}{2}\right] .$$
Here, for each $k$ and each $x$,
$$|u_k(x)| = \left|x^k\right| = |x|^k \le \left(\frac{1}{2}\right)^k \qquad\leftarrow\text{use this as } M_k .$$
Since $1/2 < 1$, we know
$$\sum_{k=0}^{\infty} M_k = \sum_{k=0}^{\infty} \left(\frac{1}{2}\right)^k$$
is a convergent geometric series. The Weierstrass M test then assures us that
$$\sum_{k=0}^{\infty} x^k \text{ converges uniformly on } \left[-\frac{1}{2}, \frac{1}{2}\right] .$$
In fact, for every positive integer $N$ and each $x$ in this interval, a bound for the error in using the $N$th partial sum instead of the entire series is easily computed:
$$E_N(x) = \left|\sum_{k=0}^{\infty} x^k - \sum_{k=0}^{N} x^k\right| = \left|\sum_{k=N+1}^{\infty} x^k\right| \le \sum_{k=N+1}^{\infty} |x|^k \le \sum_{k=N+1}^{\infty} \left(\frac{1}{2}\right)^k \qquad\leftarrow\text{this is } E_{N,\max}$$
$$= \left(\frac{1}{2}\right)^{N+1} \left[\frac{1}{1 - \frac{1}{2}}\right] = \left(\frac{1}{2}\right)^{N} .$$
(It's worth noting that, if the series is not a geometric series, it is very unlikely that we can find the exact value of $E_{N,\max}$.)
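Example 12.5's bound $E_N(x) \le (1/2)^N$ can be confirmed numerically; in this sketch (my own) formula (12.3) supplies $1/(1-x)$ as the exact value of the full series:

```python
def E(N, x):
    """Exact error of the N-th partial sum of sum x^k against 1/(1-x)."""
    partial = sum(x**k for k in range(N + 1))
    return abs(1 / (1 - x) - partial)

N = 10
xs = [i / 100 for i in range(-50, 51)]   # grid on [-1/2, 1/2]
worst = max(E(N, x) for x in xs)
print(worst, 0.5**N)   # worst-case error stays within the (1/2)^N bound
```

The worst case occurs at the right endpoint $x = 1/2$, where the bound is actually attained.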
!◮Example 12.6: Now consider the same series
$$\sum_{k=0}^{\infty} u_k(x) = \sum_{k=0}^{\infty} x^k ,$$
but on the interval
$$D = (-1, 1) .$$
From example 12.4, we know this series does converge pointwise on the given interval. What about uniform convergence?
In the previous example, we were able to choose the $M_k$'s for the Weierstrass M test by finding the maximum value of each term over the interval. Attempting the same thing here, where $x$ can be any value between $-1$ and $1$, yields
$$|u_k(x)| = \left|x^k\right| = |x|^k \le 1^k = 1 \qquad\leftarrow\text{use this as } M_k .$$
Clearly, we cannot choose any smaller values for the $M_k$'s. But
$$\sum_{k=0}^{\infty} M_k = \sum_{k=0}^{\infty} 1 = 1 + 1 + 1 + 1 + 1 + \cdots$$
is a divergent series. So we cannot appeal to the Weierstrass M test.
Fortunately, since $\sum_{k=0}^{\infty} x^k$ is a geometric series, we can completely compute the error terms. For each $x$ in the interval,
$$E_N(x) = \left|\sum_{k=N+1}^{\infty} x^k\right| = \left|\frac{x^{N+1}}{1 - x}\right| .$$
Unfortunately, for each positive integer $N$,
$$\lim_{x\to 1^-} E_N(x) = \lim_{x\to 1^-} \frac{x^{N+1}}{1 - x} = \text{“}\frac{1}{0}\text{”} = +\infty .$$
And so, on $(-1, 1)$,
$$E_{N,\max} = \max\,\{E_N(x) : x \in (-1, 1)\} = +\infty .$$
Hence, of course,
$$E_{N,\max} \not\to 0 \text{ as } N \to \infty ,$$
which means that our geometric series $\sum_{k=0}^{\infty} x^k$ does NOT converge uniformly on $(-1, 1)$.
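The failure of uniform convergence on $(-1, 1)$ is visible numerically: for any fixed $N$, the exact error $E_N(x) = |x^{N+1}/(1-x)|$ blows up as $x \to 1^-$ (my own sketch):

```python
def E(N, x):
    """Exact error of the N-th geometric partial sum at x, for |x| < 1."""
    return abs(1 / (1 - x) - sum(x**k for k in range(N + 1)))

N = 10
for x in (0.5, 0.9, 0.99, 0.999):
    print(x, E(N, x))   # grows without bound as x approaches 1
```

No single $N$ works for all $x$ at once, which is exactly what non-uniform convergence means.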
?◮Exercise 12.1: Again, consider the basic geometric series
$$\sum_{k=0}^{\infty} x^k .$$
a: Graph (roughly) $E_N(x)$ on $(-1, 1)$ for an arbitrary choice of $N$. At least get the general shape of the function on the interval, as well as the behavior of the function near the points $x = 1$, $x = 0$, and $x = -1$.
b: What happens to the graph of $E_N(x)$ as $N \to \infty$?
c: Does $\sum_{k=0}^{\infty} x^k$ converge uniformly on $(-1, 0]$?
There are ways of testing for uniform convergence other than the Weierstrass M test. Abel’s
test is worth mentioning (see theorem 12.10 on page 12–22). Unfortunately an explanation of
why Abel’s test works is not for the faint of mathematical heart (see section 12.4 on page 12–22).
Also, if the series ends up being an alternating series on some interval, then you can often
use the error estimate from the alternating series test (theorem 12.1 on page 12–6) to find an
upper bound on E N (x) over the interval for each N . Showing that this upper bound shrinks
to zero as N → ∞ then shows that the series converges uniformly on that interval. This is
especially useful for power series whose coefficients shrink to zero too slowly to ensure absolute
convergence at the endpoints of their “intervals of convergence”.
?◮Exercise 12.2: Using the error estimate from the alternating series test, show that
$$\sum_{k=0}^{\infty} (-1)^k \frac{1}{k+1}\, x^k$$
converges uniformly on $[0, 1]$.
?◮Exercise 12.3:
Problem 1.2.1-a (page 24) of Arfken, Weber and Harris.
The Importance of Uniform Convergence

In Approximations
First of all, if
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ is uniformly convergent on } D ,$$
then it can be uniformly approximated to any degree of accuracy by an appropriate partial sum. That is, for any desired maximum error $\varepsilon > 0$, there is a corresponding $N_\varepsilon$ such that, for every $N \ge N_\varepsilon$,
$$\left|\text{error in using } \sum_{k=\gamma}^{N} u_k(x) \text{ for } \sum_{k=\gamma}^{\infty} u_k(x)\right| = E_N(x) \le \varepsilon \qquad\text{for every } x \in D .$$
All one has to do is choose $N_\varepsilon$ so that
$$E_{N,\max} \le \varepsilon \qquad\text{whenever}\qquad N \ge N_\varepsilon ,$$
which, in theory, is certainly possible since, for a uniformly convergent series,
$$E_{N,\max} \to 0 \text{ as } N \to \infty .$$
(Of course, knowing that $N_\varepsilon$ exists is quite different from actually being able to find $N_\varepsilon$.)
On the other hand, if
$$\sum_{k=\gamma}^{\infty} u_k(x) \text{ is not uniformly convergent on } D ,$$
then the above does not hold. Partial sums cannot “uniformly approximate” the entire series to any desired degree of accuracy. No matter how many terms you pick for a partial sum, there will always be some values of $x$ for which that partial sum is a lousy approximation of $\sum_{k=\gamma}^{\infty} u_k(x)$.
In the Calculus
Suppose $\sum_{k=\gamma}^{\infty} u_k(x)$ converges pointwise on $D$. Then we can define a function $f$ on $D$ via
$$f(x) = \sum_{k=\gamma}^{\infty} u_k(x) \qquad\text{for each } x \in D .$$
Indeed, we may have derived the $u_k(x)$'s to obtain such a series expansion for $f(x)$. (In solving partial differential equations, these $u_k$'s will usually be “eigenfunctions” for some self-adjoint differential operator on a vector space of functions.)
The next theorem is the big theorem on integrating and differentiating uniformly convergent series. We will use it repeatedly (and often without comment) when solving partial differential equations.
Theorem 12.9 (Uniform Convergence and Calculus)
Suppose $\sum_{k=\gamma}^{\infty} u_k(x)$ converges uniformly on an interval $I$, and that each $u_k(x)$ is a smooth function on the interval (i.e., each $u_k$ and its derivative are continuous on $I$). Let
$$f(x) = \sum_{k=\gamma}^{\infty} u_k(x) \qquad\text{for each } x \in I .$$
Then:

1. $f(x)$ is continuous on $I$.

2. $f(x)$ can be integrated “term by term” on $I$. More precisely, if $[a, b]$ is a subinterval of $I$, then
$$\int_a^b f(x)\,dx = \int_a^b \sum_{k=\gamma}^{\infty} u_k(x)\,dx = \sum_{k=\gamma}^{\infty} \int_a^b u_k(x)\,dx .$$

3. If the corresponding series of derivatives, $\sum_{k=\gamma}^{\infty} u_k{}'(x)$, also converges uniformly on $I$, then $f$ can be differentiated “term by term”,
$$f'(x) = \frac{d}{dx} \sum_{k=\gamma}^{\infty} u_k(x) = \sum_{k=\gamma}^{\infty} \frac{d}{dx} u_k(x) = \sum_{k=\gamma}^{\infty} u_k{}'(x) .$$

PROOF: Trust me. Or take a course in real analysis.
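Term-by-term integration can be checked on the geometric series over $[0, 1/2]$, where the convergence is uniform (example 12.5): each $\int_0^{1/2} x^k\,dx = (1/2)^{k+1}/(k+1)$, and the integrated series should reproduce $\int_0^{1/2} dx/(1-x) = \ln 2$. A quick check (my own sketch):

```python
import math

# integrate the series sum x^k term by term over [0, 1/2]
termwise = sum((0.5**(k + 1)) / (k + 1) for k in range(200))

# the integral of the sum, 1/(1-x), over the same interval is ln 2
direct = math.log(2)

print(termwise, direct)   # the two agree
```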
A similar theorem holds for series uniformly convergent on a region in the complex plane.
It should be pointed out that, if you do not have uniform convergence, you cannot assume the integration and differentiation formulas in the above theorem. For example, it can be shown that the series
$$\sum_{k=1}^{\infty} \frac{1}{k} \sin(k\pi x)$$
converges pointwise (but not uniformly) to a “saw-shaped” function on $\mathbb{R}$. However,
$$\sum_{k=1}^{\infty} \frac{d}{dx}\left[\frac{1}{k} \sin(k\pi x)\right] = \sum_{k=1}^{\infty} \pi \cos(k\pi x)$$
“blows up” at odd integer values of $x$.
?◮Exercise 12.4:
Using a computer mathematics package such as Mathematica or Maple,
sketch the 25th partial sum of each of the above two series. Do your sketches seem to verify
the claims just made?
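If Mathematica or Maple isn't handy, the same experiment can be run in a few lines of Python (the language choice and function names here are mine, not the notes'). Sampling the 25th partial sums shows the contrast: the sawtooth series' partial sums stay bounded, while the differentiated series' partial sums can be as large as nπ.

```python
import math

def partial_sum(x, n=25):
    """n-th partial sum of the sawtooth series: sum_{k=1}^{n} sin(k*pi*x)/k."""
    return sum(math.sin(k * math.pi * x) / k for k in range(1, n + 1))

def partial_sum_deriv(x, n=25):
    """n-th partial sum of the term-by-term derivative: sum_{k=1}^{n} pi*cos(k*pi*x)."""
    return sum(math.pi * math.cos(k * math.pi * x) for k in range(1, n + 1))

# The sawtooth partial sums remain modest in size over [-3, 3] ...
biggest = max(abs(partial_sum(i / 100)) for i in range(-300, 301))
print(biggest)                 # stays below 2 (Gibbs overshoot included)

# ... while the differentiated partial sums reach n*pi at x = 0,
# growing without bound as n increases:
print(partial_sum_deriv(0.0))  # 25*pi ≈ 78.54
```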
12.4 ‘Optional’ Addendum for Section 12.3:
Abel’s Test
The next theorem is what Arfken, Weber and Harris call Abel’s test.6 It is a subtle result, and
we will prove it by employing a remarkably clever construction usually attributed to the early
nineteenth-century mathematician Niels Abel.
Theorem 12.10 (Abel's Test)
Let φ1, φ2, φ3, ... be a sequence of functions on an interval I such that, for some finite value M and k = 1, 2, 3, ...,

    0 ≤ φ_{k+1}(x) ≤ φ_k(x) ≤ M   for all x in I.

Suppose further that a1, a2, a3, ... is a sequence of numbers such that ∑_{k=1}^∞ a_k converges. Then

    ∑_{k=1}^∞ a_k φ_k(x)

converges uniformly on I.
The proof that follows was lifted (with some necessary alterations) from the proof of lemma 16.3 in Foundations of Fourier Analysis by Howell. Be warned: I've 'cleaned up' this proof only a little for these notes. And before proving it, I need to state a basic fact about the convergence of series that, up to now, we've not needed:
Lemma 12.11
Let ∑_{k=1}^∞ b_k be a series of numbers. Assume that, for each ε > 0, there is a corresponding integer N_ε such that

    | ∑_{k=N+1}^K b_k | ≤ ε   whenever N_ε ≤ N < K < ∞.

Then ∑_{k=1}^∞ b_k converges, and, for each ε > 0,

    | ∑_{k=1}^∞ b_k − ∑_{k=1}^N b_k | = | ∑_{k=N+1}^∞ b_k | ≤ ε   whenever N ≥ N_ε.
Basically, this lemma gives you permission to say

    Since | ∑_{k=N+1}^K b_k | ≤ ε whenever N_ε ≤ N ≤ K < ∞,

    we have | ∑_{k=N+1}^∞ b_k | = lim_{K→∞} | ∑_{k=N+1}^K b_k | ≤ ε whenever N_ε ≤ N.
6 Actually, it’s not quite the same test as other authors call “Abel’s test”, but it is related to them.
Naively, this may seem obvious. It’s not quite as simple as that. Still, I won’t spend space here
to prove this lemma. Accept it, or take an introductory course in real analysis. It should be part
of the “completeness of the real number system” discussion.
Now, here’s the proof of theorem 12.10:
PROOF (Abel's test): Let a "maximum allowed error" ε > 0 be chosen. Since ∑_{k=1}^∞ a_k is convergent, there is an integer N_ε such that

    | ∑_{k=1}^∞ a_k − ∑_{k=1}^N a_k | = | ∑_{k=N+1}^∞ a_k | < ε/(2M)   whenever N_ε ≤ N < ∞.   (12.6)
Now, pick any integer N with N ≥ N_ε, and, for each integer k greater than N, let

    A_k = ∑_{j=N+1}^k a_j.

Observe that A_{N+1} = a_{N+1} and that, for k = N+2, N+3, N+4, ...,

    a_k + A_{k−1} = A_k

and

    |A_k| = | ∑_{j=N+1}^k a_j | = | ∑_{j=N+1}^∞ a_j − ∑_{j=k+1}^∞ a_j |
          ≤ | ∑_{j=N+1}^∞ a_j | + | ∑_{j=k+1}^∞ a_j | ≤ ε/(2M) + ε/(2M) = ε/M.   (12.7)
Here is the clever bit: Pick any x in the interval I and any integer M > N. For the sake of brevity in the following calculations, let ψ_k denote φ_k(x). Observe that

    ∑_{k=N+1}^M a_k φ_k(x) = ∑_{k=N+1}^M a_k ψ_k
        = a_{N+1} ψ_{N+1} + ∑_{k=N+2}^M a_k ψ_k
        = a_{N+1} ψ_{N+1} + ∑_{k=N+2}^M (a_k + A_{k−1} − A_{k−1}) ψ_k
        = A_{N+1} ψ_{N+1} + ∑_{k=N+2}^M (A_k − A_{k−1}) ψ_k          [since a_k + A_{k−1} = A_k]
        = A_{N+1} ψ_{N+1} + (A_{N+2} − A_{N+1}) ψ_{N+2} + (A_{N+3} − A_{N+2}) ψ_{N+3} + ··· + (A_M − A_{M−1}) ψ_M
        = A_{N+1} (ψ_{N+1} − ψ_{N+2}) + A_{N+2} (ψ_{N+2} − ψ_{N+3}) + ··· + A_{M−1} (ψ_{M−1} − ψ_M) + A_M ψ_M
        = A_M ψ_M + ∑_{k=N+1}^{M−1} A_k (ψ_k − ψ_{k+1}).
This, along with inequality (12.7), gives

    | ∑_{k=N+1}^M a_k ψ_k | ≤ |A_M| |ψ_M| + ∑_{k=N+1}^{M−1} |A_k| |ψ_k − ψ_{k+1}|
                            ≤ (ε/M) [ |ψ_M| + ∑_{k=N+1}^{M−1} |ψ_k − ψ_{k+1}| ].   (12.8)
Remember, 0 ≤ φ_{k+1}(x) ≤ φ_k(x) ≤ M for each positive integer k, and ψ_k is just shorthand for φ_k(x). So |ψ_M| = ψ_M and

    |ψ_k − ψ_{k+1}| = |φ_k(x) − φ_{k+1}(x)| = φ_k(x) − φ_{k+1}(x) = ψ_k − ψ_{k+1}.

Plugging this into inequality (12.8) gives us

    | ∑_{k=N+1}^M a_k ψ_k | ≤ (ε/M) [ ψ_M + ∑_{k=N+1}^{M−1} (ψ_k − ψ_{k+1}) ].
But, since

    ∑_{k=N+1}^{M−1} (ψ_k − ψ_{k+1}) = (ψ_{N+1} − ψ_{N+2}) + (ψ_{N+2} − ψ_{N+3}) + ··· + (ψ_{M−1} − ψ_M) = ψ_{N+1} − ψ_M,

and ψ_{N+1} = φ_{N+1}(x) ≤ M, our last inequality reduces to

    | ∑_{k=N+1}^M a_k ψ_k | ≤ (ε/M) [ (ψ_{N+1} − ψ_M) + ψ_M ] = (ε/M) ψ_{N+1} ≤ ε.
That is, for each x in I,

    | ∑_{k=N+1}^M a_k φ_k(x) | ≤ ε   whenever N_ε ≤ N < M < ∞.

Lemma 12.11 now assures us that ∑_{k=1}^∞ a_k φ_k(x) converges, and that, no matter which x in I we choose,

    | ∑_{k=1}^∞ a_k φ_k(x) − ∑_{k=1}^N a_k φ_k(x) | ≤ ε   whenever N ≥ N_ε.

And that tells us the convergence is uniform.
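To see Abel's test in action numerically, take a_k = (−1)^{k+1}/k (a convergent series) and φ_k(x) = x^k on I = [0, 1] (nonincreasing in k, bounded by M = 1). The sum is ln(1 + x), and the worst-case error over all of [0, 1] should shrink as N grows. The short Python check below is my own sketch, not part of the notes.

```python
import math

def sup_error(N, grid=201):
    """sup over [0,1] of |ln(1+x) - sum_{k=1}^{N} (-1)^(k+1) x^k / k|."""
    worst = 0.0
    for i in range(grid):
        x = i / (grid - 1)
        s = sum((-1) ** (k + 1) * x ** k / k for k in range(1, N + 1))
        worst = max(worst, abs(math.log(1 + x) - s))
    return worst

# The sup error shrinks with N -- numerical evidence of uniform convergence:
print(sup_error(50), sup_error(200))
```

For an alternating series the error at each x is at most the first omitted term, x^(N+1)/(N+1) ≤ 1/(N+1), which matches what the sampled sup error shows.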
12.5 Power Series

Basics
Let us now restrict ourselves to power series. Letting x0 be any fixed point on the real line or the complex plane, the general form for a power series about x0 is

    ∑_{k=0}^∞ a_k (x − x0)^k,

where the a_k's are constants. Depending on the application or the mood of the instructor, the variable x can be real or complex. In practice, the "center" x0 is often 0, in which case the series is

    ∑_{k=0}^∞ a_k x^k.

Even if x0 ≠ 0, we can almost always treat ∑_{k=0}^∞ a_k (x − x0)^k as being ∑_{k=0}^∞ a_k x^k shifted by x0.
Radii of Convergence

The convergence issues for power series are greatly simplified because of the following remarkable lemma:

Lemma 12.12
Suppose we have a power series ∑_{k=0}^∞ a_k x^k and a (nonzero) value r. Then

    ∑_{k=0}^∞ a_k r^k converges  ⟹  ∑_{k=0}^∞ a_k x^k converges absolutely whenever |x| < |r|,

while

    ∑_{k=0}^∞ a_k r^k diverges  ⟹  ∑_{k=0}^∞ a_k x^k diverges whenever |r| < |x|.

PROOF: First, assume ∑_{k=0}^∞ a_k r^k converges and |x| < |r|.
Then we must have

    a_k r^k → 0   as k → ∞,

which, in turn, means that, for some integer K,

    |a_k r^k| ≤ 1   for each k ≥ K.

Thus

    |a_k x^k| = |a_k r^k| · |x^k / r^k| = |a_k r^k| |x/r|^k ≤ |x/r|^k   for each k ≥ K.

Moreover, since |x| < |r|, we have |x/r| < 1. Hence, the geometric series

    ∑_{k=0}^∞ |x/r|^k

converges. The above and the comparison test then tell us that

    ∑_{k=0}^∞ |a_k x^k|

also converges, verifying the first claim.
The second claim actually follows from the first: Assume

    ∑_{k=0}^∞ a_k r^k diverges   and   |r| < |x|.

By what we just verified (with the roles of r and x reversed), we know

    ∑_{k=0}^∞ a_k x^k converges  ⟹  ∑_{k=0}^∞ a_k r^k converges,

contrary to the assumption. Hence, under the given assumptions, ∑_{k=0}^∞ a_k x^k cannot converge; it must diverge.
Keep in mind that a power series ∑_{k=0}^∞ a_k x^k must converge or diverge at each point of [0, ∞). Applying the above lemma, we find that there are exactly three possibilities:

1. The series diverges for every r in (0, ∞). Then the series converges for no x in ℝ or ℂ except 0.

2. The series converges for at least one x = r_c in (0, ∞), and diverges for at least one x = r_d in (0, ∞). In this case, the lemma tells us that the series converges absolutely whenever |x| < r_c and diverges whenever r_d < |x|. By seeking the largest possible r_c and smallest possible r_d, we eventually discover a single positive value R, with the series converging absolutely whenever |x| < R, and diverging whenever R < |x|.

3. The series converges for every x in (0, ∞). Then the lemma assures us that the series converges absolutely at every x in ℝ and ℂ.
All this and "shifting by x0" give us the following major result:

Theorem 12.13 (convergence of power series)
Given a power series ∑_{k=0}^∞ a_k (x − x0)^k, there is an R — which is either 0, a positive value, or +∞ — such that

    |x − x0| < R  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k converges absolutely,

    |x − x0| > R  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k diverges.

(The convergence at each x where |x − x0| = R must be checked separately.)
The R from the theorem is called the radius of convergence for the series. If x denotes a complex variable, then the set of all x in the complex plane satisfying |x − x0| < R is a disk of radius R centered at x0. That is why we call R a "radius". Note that, whether x is a real or complex variable,

    R = 0  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k converges only if x = x0 (which is trivial),

    R = ∞  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k converges for all x (which is very nice).
Suppose, for the moment, that x denotes a real variable. If you think about it briefly, you'll see that the above theorem assures us that the set of all x's for which the series converges is either the trivial interval

    [x0, x0]   when R = 0,

or the interval

    (−∞, ∞)   when R = ∞,

or one of the following intervals:

    (x0 − R, x0 + R),   [x0 − R, x0 + R),   (x0 − R, x0 + R]   or   [x0 − R, x0 + R].

Whichever it is, the interval of all x's for which the series converges is called the interval of convergence for the series. It can be found by

1. determining R, and then, if R is neither 0 nor ∞,

2. testing the series for convergence at each endpoint of (x0 − R, x0 + R).
It is important to know the radius of convergence R for a given power series

    ∑_{k=0}^∞ a_k (x − x0)^k

so that we know when the series can be treated as a valid function and when the series is rubbish.
One way to find R is via the limit ratio test. By that test, we have

    lim_{k→∞} | a_{k+1}(x − x0)^{k+1} / (a_k (x − x0)^k) | < 1  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k converges absolutely,

    lim_{k→∞} | a_{k+1}(x − x0)^{k+1} / (a_k (x − x0)^k) | > 1  ⟹  ∑_{k=0}^∞ a_k (x − x0)^k diverges.

Clearly, the above limit can only equal 1 if |x − x0| = R. Thus, to find the radius of convergence R for ∑_{k=0}^∞ a_k (x − x0)^k, set

    lim_{k→∞} | a_{k+1} R^{k+1} / (a_k R^k) | = 1

and solve for R.^7 This test requires that the appropriate limits exist. If they don't, the test is useless, and you have to try something else — possibly a similar approach using the root test.
!◮Example 12.7: Consider the power series

    ∑_{k=0}^∞ (2^k/(1 + k)) x^k.

Here, the center is x0 = 0, and

    a_k = 2^k/(1 + k).

As noted above, the radius of convergence R must satisfy

    1 = lim_{k→∞} | a_{k+1} R^{k+1} / (a_k R^k) |.

That is,

    1 = lim_{k→∞} [ (2^{k+1}/(2 + k)) R^{k+1} ] / [ (2^k/(1 + k)) R^k ]
      = lim_{k→∞} 2R (1 + k)/(2 + k) = 2R · 1.

Solving this for R gives us our radius of convergence,

    R = 1/2.

So the above power series converges absolutely on the interval (−1/2, 1/2).
Now let's check the convergence at the endpoints of this interval.
^7 Yes, you can simplify this to

    R = lim_{k→∞} | a_k / a_{k+1} |,

but I find it easier to remember the ratio test and derive R than to remember whether it is a_k/a_{k+1} or a_{k+1}/a_k.
At x = 1/2,

    ∑_{k=0}^∞ (2^k/(1 + k)) x^k = ∑_{k=0}^∞ (2^k/(1 + k)) (1/2)^k = ∑_{k=0}^∞ 1/(1 + k) = 1 + 1/2 + 1/3 + 1/4 + ···.

This we recognize as being the harmonic series, which we know diverges. So our power series diverges at the endpoint x = 1/2.

At x = −1/2,

    ∑_{k=0}^∞ (2^k/(1 + k)) x^k = ∑_{k=0}^∞ (2^k/(1 + k)) (−1/2)^k = ∑_{k=0}^∞ (−1)^k/(1 + k) = 1 − 1/2 + 1/3 − 1/4 + ···.

This we recognize as being the alternating harmonic series, which we know converges. So our power series converges at the endpoint x = −1/2.

Hence, the interval of convergence for our power series is [−1/2, 1/2).

(Note: If we had not obtained "well-known" series at x = ±1/2, then we would have had to use one or more of the "tests for convergence" discussed in section 12.2.)
While on the subject of radii of convergence, here is a lemma relating the radius of convergence for any power series to the radius of convergence for the series’ “term-by-term derivative”.
We will need it for the “calculus of power series”.
Lemma 12.14 (radius of convergence for differentiated power series)
The radii of convergence for the two power series

    ∑_{k=0}^∞ a_k (x − x0)^k   and   ∑_{k=0}^∞ k a_k (x − x0)^{k−1}

are equal.
PROOF: For simplicity, we will let x0 = 0 and just consider the convergence of the two power series

    ∑_{k=0}^∞ a_k x^k   and   ∑_{k=0}^∞ k a_k x^{k−1}

using the limit comparison test (theorem 12.6 on page 12–13). The lemma will then follow from the following by simply "shifting by x0".

Let R be the radius of convergence for the first series, and R′ the radius of convergence for the second series.

Suppose x is any point with |x| < R′. Then ∑_{k=0}^∞ |k a_k x^{k−1}| converges, and, computing the limit for the limit comparison test, we get

    lim_{k→∞} | a_k x^k / (k a_k x^{k−1}) | = lim_{k→∞} |x|/k = 0.

The limit comparison test then tells us that ∑_{k=0}^∞ |a_k x^k| also converges, which is only possible if |x| ≤ R. Hence

    |x| < R′  ⟹  |x| ≤ R.   (12.9)
Now assume x is any point with R′ < |x|, and let y be a point satisfying

    R′ < |y| < |x|.

Then

    ∑_{k=0}^∞ | k a_k y^{k−1} |

must diverge, and, computing the limit for the limit comparison test, we get

    lim_{k→∞} | a_k x^k / (k a_k y^{k−1}) | = |y| lim_{k→∞} (1/k) |x/y|^k = |y| lim_{k→∞} r^k/k   where r = |x/y|.

By our choice of y, r > 1. Using this fact and L'Hôpital's rule, we can continue the computation of the above limit:

    lim_{k→∞} | a_k x^k / (k a_k y^{k−1}) | = |y| lim_{k→∞} r^k/k = |y| lim_{k→∞} (d/dk [r^k]) / (d/dk [k]) = |y| lim_{k→∞} (r^k ln r)/1 = +∞.

The limit comparison test then tells us that ∑_{k=0}^∞ |a_k x^k| also diverges, which is only possible if R ≤ |x|. Thus,

    R′ < |x|  ⟹  R ≤ |x|.   (12.10)
Finally, let us note that lines (12.9) and (12.10), together, tell us that R′ = R. If this is not obvious, let x be the midpoint between R′ and R,

    x = (R′ + R)/2.

Then either

    R′ < x < R   or   R < x < R′   or   R′ = x = R.

But line (12.10) rules out R′ < x < R, and line (12.9) rules out R < x < R′, leaving us with

    R′ = x = R.
Letting b_k = k a_k and re-indexing appropriately, we can change this last lemma into another lemma about series of integrated terms:

Lemma 12.15 (radius of convergence for integrated power series)
The radii of convergence for the two series

    ∑_{k=0}^∞ (1/(k + 1)) b_k (x − x0)^{k+1}   and   ∑_{k=0}^∞ b_k (x − x0)^k

are equal.

?◮Exercise 12.5: Derive lemma 12.15 from lemma 12.14.
Uniform Convergence and the Calculus of Power Series
Since we haven't yet discussed complex differentiation or integration over regions of the complex plane, we will now mainly consider power series in which the variable is real.
If we are going to do calculus with a power series,

    ∑_{k=0}^∞ a_k (x − x0)^k,

we had better figure out the intervals over which it is uniformly convergent (so we know when we can compute derivatives and integrals "term by term", as described in the theorem on uniform convergence and calculus, theorem 12.9 on page 12–21).

Let R be the radius of convergence for ∑_{k=0}^∞ a_k (x − x0)^k. To avoid triviality, assume R > 0. Also, choose any positive value S less than R, and consider the convergence of the power series over the interval

    [x0 − S, x0 + S].

This will be an interval over which our power series converges uniformly. To see this, let

    M_k = |a_k| S^k   for each k ≥ 0,
and observe the following:
1. For each x in [x0 − S, x0 + S],

    | a_k (x − x0)^k | = |a_k| |x − x0|^k ≤ |a_k| S^k = M_k.

2. Since x0 + S satisfies |(x0 + S) − x0| < R, we know the series

    ∑_{k=0}^∞ a_k S^k = ∑_{k=0}^∞ a_k ((x0 + S) − x0)^k

converges absolutely. Thus,

    ∑_{k=0}^∞ M_k = ∑_{k=0}^∞ |a_k| S^k

converges.

The Weierstrass M-test then assures us that

    ∑_{k=0}^∞ a_k (x − x0)^k converges uniformly on [x0 − S, x0 + S].
Since S was any real value between 0 and R , what we just derived can be expanded slightly
to:
Lemma 12.16
Let R be the radius of convergence for

    ∑_{k=0}^∞ a_k (x − x0)^k.

Then this series will converge uniformly on any interval [a, b] (or (a, b), etc.) where

    x0 − R < a < b < x0 + R.
Combining all we've learned about the convergence of power series with the theorem on uniform convergence and calculus (theorem 12.9), we get the following:

Theorem 12.17 (calculus of power series)
Let R > 0 be the radius of convergence for a power series

    ∑_{k=0}^∞ a_k (x − x0)^k.

If R = ∞, let I = (−∞, ∞). Otherwise let I = (x0 − R, x0 + R). Define f on I by

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   for each x ∈ I.
Then:

1. The above power series converges uniformly on each closed subinterval of I.

2. f is a continuous function on I.

3. f(x) can be integrated "term by term" on I. More precisely, if [a, b] is any subinterval of I, then

    ∫_a^b f(x) dx = ∫_a^b ∑_{k=0}^∞ a_k (x − x0)^k dx = ∑_{k=0}^∞ ∫_a^b a_k (x − x0)^k dx
                  = ∑_{k=0}^∞ (a_k/(k + 1)) [ (b − x0)^{k+1} − (a − x0)^{k+1} ],

and this last series converges.
4. f(x) is infinitely differentiable on I, and its derivatives can be found by differentiating the series "term by term":

    f′(x) = ∑_{k=0}^∞ d/dx [a_k (x − x0)^k] = ∑_{k=0}^∞ k a_k (x − x0)^{k−1},

    f″(x) = ∑_{k=0}^∞ d²/dx² [a_k (x − x0)^k] = ∑_{k=0}^∞ k(k − 1) a_k (x − x0)^{k−2},

    f‴(x) = ∑_{k=0}^∞ d³/dx³ [a_k (x − x0)^k] = ∑_{k=0}^∞ k(k − 1)(k − 2) a_k (x − x0)^{k−3},
    ⋮

Moreover, each of these power series has R as its radius of convergence and converges uniformly on each closed subinterval of I.
Be assured: Similar results hold for power series in which the variable is complex (in which
case, the interval I is replaced by a disk of radius R about x0 in the complex plane).
Power Series Representations of Functions
Mathematicians and physicists have long been thrilled whenever a function of interest f can be expressed as a power series about a point x0 on some region D,

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   for all x in D.
D , of course, must be in the region — interval or disk about x0 — in which the power series
converges. We will automatically assume D is that region of convergence, unless there is a good
reason not to.
For the moment, let us generally restrict ourselves to functions and power series of a real
variable (with the understanding that almost everything we do will extend to functions and power
series of complex variables).
When a function can be represented by a power series (about x0),

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   for all x in D,

we say "f is analytic (about x0) on D". There are two big advantages of dealing with such a function:

1. The value of f(x) for any specific value of x ∈ D can be closely approximated by a suitable partial sum

    ∑_{k=0}^N a_k (x − x0)^k,

which may be more easily computed than using any other formula or definition for f(x).
2. The calculus with f(x) is essentially the calculus of power series. In particular, f is infinitely differentiable on any [a, b] ⊂ D, with

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k,

    f′(x) = ∑_{k=0}^∞ k a_k (x − x0)^{k−1},

    f″(x) = ∑_{k=0}^∞ k(k − 1) a_k (x − x0)^{k−2},
    ⋮

It's a good exercise to look at these series after writing them out in 'long' form; that is, as

    f(x) = a0 (x − x0)^0 + a1 (x − x0)^1 + a2 (x − x0)^2 + a3 (x − x0)^3 + ···

instead of

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k.
?◮Exercise 12.6: Write out the series just above for f(x), f′(x), etc., in long form and observe how the first few terms of each simplify. Then use your series to verify that

    a0 = f(x0),   a1 = f′(x0),   a2 = (1/2) f″(x0),   ...
From the results of this last exercise it follows that power series representations about a given point are unique. More precisely, if, in some region D containing x0,

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   and   f(x) = ∑_{k=0}^∞ b_k (x − x0)^k,

then

    b_k = a_k   for k = 0, 1, 2, 3, ....
However, we can still have different power series representations for a given function about two different points,

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   and   f(x) = ∑_{k=0}^∞ b_k (x − x1)^k,

with

    b_k ≠ a_k   for k = 0, 1, 2, 3, ....

Moreover, the regions over which these two series converge may also be different. (This will be a more significant issue when our variables are complex.)
Do note that, if you already have a power series representation for some function,

    f(x) = ∑_{k=0}^∞ a_k (x − x0)^k   for all x ∈ D,

then you can get the power series representations about x0 for all derivatives (and integrals) of f by suitably differentiating (and integrating) the known power series for f(x).
Another basic method for determining if a function f is analytic about x0, and for getting its power series about x0, starts with the integral equation

    f(x) − f(x0) = ∫_{x0}^x f′(s) ds.

"Cleverly" integrating by parts, with

    u = f′(s),   dv = ds,   du = f″(s) ds,   v = s − x,

gives

    f(x) − f(x0) = [ f′(s)(s − x) ]_{s=x0}^x − ∫_{x0}^x f″(s)(s − x) ds
                 = f′(x)·(x − x) − f′(x0)·(x0 − x) − ∫_{x0}^x f″(s)(s − x) ds.

Since x − x = 0 and x0 − x = −(x − x0), this means

    f(x) = f(x0) + f′(x0)(x − x0) − ∫_{x0}^x f″(s)(s − x) ds.
Integrating by parts again, with

    u = f″(s),   dv = (s − x) ds,   du = f‴(s) ds,   v = (1/2)(s − x)²,

gives

    f(x) = f(x0) + f′(x0)(x − x0) − [ f″(s)·(1/2)(s − x)² ]_{s=x0}^x + ∫_{x0}^x f‴(s)·(1/2)(s − x)² ds
         = f(x0) + f′(x0)(x − x0) + (1/2) f″(x0)(x − x0)² + (1/2) ∫_{x0}^x f‴(s)(s − x)² ds.
Repeating this process until we finally see the light yields, for each positive integer N,

    f(x) = P_N(x) + R_N(x)

where

    P_N(x) = f(x0) + f′(x0)(x − x0) + (1/2) f″(x0)(x − x0)² + (1/(3·2)) f‴(x0)(x − x0)³
                 + ··· + (1/(N·(N−1)···3·2)) f^(N)(x0)(x − x0)^N
           = ∑_{k=0}^N (1/k!) f^(k)(x0)(x − x0)^k

and

    R_N(x) = (−1)^N (1/N!) ∫_{x0}^x f^(N+1)(s)(s − x)^N ds.
You should recognize P_N(x) as the N-th degree Taylor polynomial about x0, with R_N(x) being the corresponding error in using that polynomial for f(x). If, for each x ∈ D,

    R_N(x) → 0   as N → ∞,
then f is analytic on D, with

    f(x) = lim_{N→∞} P_N(x) = ∑_{k=0}^∞ (1/k!) f^(k)(x0)(x − x0)^k   for each x in D.

This power series is the famous Taylor series for f(x) about x0.^8
Let's briefly consider the problem of showing that

    R_N(x) → 0   as N → ∞

for some given function f. In practice, if you can completely compute the value of R_N(x), then you already know enough about computing f(x) that you don't need to find its Taylor series. More likely, you won't be able to completely compute R_N(x), but you will be able to 'bound' each derivative of f; that is, for each positive integer k, you'll be able to find an M_k(x) such that, for every s in the interval having endpoints x0 and x,

    | f^(k)(s) | ≤ M_k(x).
Then, if x0 ≤ x,

    |R_N(x)| = | (−1)^N (1/N!) ∫_{x0}^x f^(N+1)(s)(s − x)^N ds |
             ≤ (1/N!) ∫_{x0}^x | f^(N+1)(s)(s − x)^N | ds
             ≤ (1/N!) ∫_{x0}^x M_{N+1}(x)(x − s)^N ds
             = (1/N!) M_{N+1}(x) [ −(1/(N + 1))(x − s)^{N+1} ]_{s=x0}^x
             = −(1/(N + 1)!) M_{N+1}(x) [ (x − x)^{N+1} − (x − x0)^{N+1} ]
             = (1/(N + 1)!) M_{N+1}(x)(x − x0)^{N+1}.
Similar computations yield

    |R_N(x)| ≤ (1/(N + 1)!) M_{N+1}(x)(x0 − x)^{N+1}   if x ≤ x0.

Either way, we have

    |R_N(x)| ≤ (1/(N + 1)!) M_{N+1}(x) |x − x0|^{N+1}.

This gives an upper bound on each remainder term. If

    (1/(N + 1)!) M_{N+1}(x) |x − x0|^{N+1} → 0   as N → ∞,

then we know

    R_N(x) → 0   as N → ∞,

^8 Also called the Maclaurin series if x0 = 0.
and f(x) can be represented by its Taylor series,

    f(x) = ∑_{k=0}^∞ (1/k!) f^(k)(x0)(x − x0)^k.
At this point you should further refresh your memory regarding Taylor series, especially the "well-known" cases:

    e^x = ∑_{k=0}^∞ (1/k!) x^k,

    sin(x) = ··· (You figure this out.),

    cos(x) = ··· (You figure this out.),

and the binomial series

    (1 + x)^p = 1 + px + (p(p − 1)/2!) x² + (p(p − 1)(p − 2)/3!) x³ + ···.

Rederive these series, determine the values of x for which they are valid by considering R_N(x) (for the binomial series, this depends on the value of p), and play with them by doing the assigned homework. (See also the subsection Taylor's Expansion starting on page 25 of Arfken, Weber and Harris.)
12.6 Power Series of Other Things

Suppose

    f(x) = ∑_{k=0}^∞ a_k x^k

for all real or complex values of x, and let X be any (real or complex) thingee for which

    X^k = X · X ··· X   (k factors)

makes sense for each nonnegative integer k. Then we can define f(X) by

    f(X) = ∑_{k=0}^∞ a_k X^k.

!◮Example 12.8: Recall that

    exp(x) = e^x = ∑_{k=0}^∞ (1/k!) x^k
for all real x (it also holds for all complex x). Now, if A is any square matrix, say of dimension n×n, then so is

    A^k = A · A ··· A   (k factors)   for k = 1, 2, 3, ....

For k = 0, we naturally assume/define

    A^0 = I_n   (the n×n identity matrix).

So, the "exponential function of square matrices" is defined by

    exp(A) = e^A = ∑_{k=0}^∞ (1/k!) A^k.

This function of matrices turns out to be useful in solving certain systems of differential equations.
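Here is a minimal, dependency-free Python sketch of this definition (helper names are mine): it just sums the series directly. For a nilpotent A the series terminates, so we can check the answer exactly; for a diagonal A, exercise 12.7b below tells us what to expect.

```python
import math

def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    """Truncate exp(A) = sum_{k=0}^{oo} A^k / k! after `terms` terms."""
    n = len(A)
    total = [[float(i == j) for j in range(n)] for i in range(n)]   # A^0 = I_n
    power = [row[:] for row in total]
    for k in range(1, terms):
        power = mat_mul(power, A)
        total = [[total[i][j] + power[i][j] / math.factorial(k)
                  for j in range(n)] for i in range(n)]
    return total

# Nilpotent A: A^2 = 0, so the series stops and exp(A) = I + A exactly.
print(mat_exp([[0.0, 1.0], [0.0, 0.0]]))   # [[1.0, 1.0], [0.0, 1.0]]

# Diagonal A: exp(diag(1, 2)) = diag(e, e^2).
print(mat_exp([[1.0, 0.0], [0.0, 2.0]]))
```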
!◮Example 12.9: Along the same lines, we can even have the differential operator

    exp(d/dx) = e^{d/dx} = ∑_{k=0}^∞ (1/k!) d^k/dx^k.
?◮Exercise 12.7: All the following concern the exponential function of square (n×n) matrices.

a: Let A be a fixed matrix, and show that

    d/dt exp(At) = A exp(At).
b: Let D be the diagonal matrix

    D = diag(λ1, λ2, λ3, ..., λn)

(that is, the n×n matrix with λ1, λ2, ..., λn down the main diagonal and zeroes everywhere else).

i: Convince yourself that, for each nonnegative integer k,

    D^k = diag((λ1)^k, (λ2)^k, (λ3)^k, ..., (λn)^k).

ii: Compute exp(D), simplifying it as much as possible.
c: Is it true that exp(A)† = exp(A†)?^9

^9 Recall: A† = adjoint of A = transpose of the complex conjugate of A.
d: Is exp(A) Hermitian if A is Hermitian?^10

e: Show that

    U† exp(A) U = exp(U† A U)

whenever U is unitary.^11

f: Using results from the above, compute exp(A) when

    A = [ 9 3 ; 3 1 ]   (rows separated by semicolons).
?◮Exercise 12.8: The three Pauli spin matrices are

    σ1 = [ 0 1 ; 1 0 ],   σ2 = [ 0 −i ; i 0 ]   and   σ3 = [ 1 0 ; 0 −1 ].

Using these matrices:

a: Show that, for k = 1, 2, and 3,

    (σ_k)² = I2,

where I2 is the 2×2 identity matrix.

b: Then show that, for k = 1, 2, and 3,

    exp(iσ_k θ) = I2 cos θ + iσ_k sin θ.

?◮Exercise 12.9: Let r be a fixed real number, and show that, for any analytic function f,

    exp(−r d/dx) f(x) = f(x − r).

(Hint: −r = [x − r] − x.)
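Exercise 12.9's identity can be verified mechanically for polynomials, where the operator series terminates after finitely many terms. The Python sketch below (representation and names are my own choices) applies exp(−r d/dx) to f(x) = x³ − 2x + 5 and compares the result with f(x − r):

```python
import math

def poly_deriv(c):
    """Derivative of the polynomial with coefficient c[k] on x^k."""
    return [k * c[k] for k in range(1, len(c))]

def poly_eval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

def shift_by_exp(c, r, x):
    """Apply exp(-r d/dx): sum_k (-r)^k f^(k)(x) / k!  (finite for polynomials)."""
    total, d, k = 0.0, list(c), 0
    while d:
        total += (-r) ** k * poly_eval(d, x) / math.factorial(k)
        d = poly_deriv(d)
        k += 1
    return total

# f(x) = 5 - 2x + x^3, shifted by r = 1.5 and evaluated at x = 2:
c, r, x = [5.0, -2.0, 0.0, 1.0], 1.5, 2.0
print(shift_by_exp(c, r, x), poly_eval(c, x - r))   # both equal 4.125
```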
^10 Remember: A is Hermitian ⟺ A is self-adjoint ⟺ A† = A.
^11 Remember: U is unitary ⟺ U† = U⁻¹.

12.7 Convergence in Norm

Definition and General Commentary

We can generalize many of our notions of convergence to handle any infinite series

    ∑_{k=γ}^∞ u_k
for which all the u_k's are elements of some vector space V having an inner product ⟨·|·⟩ and corresponding norm ‖·‖.^12 Given such a series, we say that it converges (in that norm) if and only if there is an S in V such that

    lim_{N→∞} ‖ S − ∑_{k=γ}^N u_k ‖ = 0.

This S, naturally, is called the sum of the series, and we normally write

    S = ∑_{k=γ}^∞ u_k.
If there is no such S in V, then the series diverges.

It should be noted that the sorts of series convergence already discussed are just special cases of norm convergence. When our series was just a series of numbers, those numbers were elements of the vector space ℝ or ℂ, and the norm was the usual "absolute value" norm,

    ‖u‖ = |u|.

When we were discussing the uniform convergence of functions on some domain D, the vector space was the set of all "reasonable" functions on D (with the exact meaning of "reasonable" dependent on the choice of the u_k(x)'s), and the norm was the "maximum value" norm,

    ‖u‖ = max{ |u(x)| : x ∈ D }.

(Again, this really should be the "least upper bound", and we really should call it the "sup" norm.)

?◮Exercise 12.10: Convince yourself of the validity of the above claims.
Of course, using the standard inner products and norms for traditional vectors and for matrices, we can extend much of our discussion regarding series of numbers to series of traditional vectors and series of matrices. Later, when dealing with Fourier series and solving partial differential equations, we will be especially interested in infinite series of functions defined over some region D using the "energy" inner product and norm

    ⟨f|g⟩ = ∫_D f*(x) g(x) dx   and   ‖f‖ = √⟨f|f⟩ = ( ∫_D |f(x)|² dx )^{1/2},

or even using a "weighted energy" inner product and norm

    ⟨f|g⟩ = ∫_D f*(x) g(x) w(x) dx   and   ‖f‖ = √⟨f|f⟩ = ( ∫_D |f(x)|² w(x) dx )^{1/2},

where w is some positive-valued function on D. (When dealing with these integral norms, we will also extend what we did with "orthogonal sets" and "self-adjoint operators".)

^12 Remember: ‖u‖ = √⟨u|u⟩. You may want to review the basics about inner products and norms in section 3.3 of our notes.
?◮Exercise 12.11: Let our vector space be the set of all continuous functions on the interval [−1, 1] with the standard energy inner product and norm mentioned above. Let f be the analytic function on [−1, 1] given by some power series

    f(x) = ∑_{k=0}^∞ a_k x^k

whose radius of convergence R is larger than 1. Remember, this means that the series converges uniformly (to f) on [−1, 1]. Now show that this series also converges in norm (using the standard energy norm).
Some Basic Inequalities

The Inequalities

At this point, it may be worthwhile to note a couple of relations that possibly should have been mentioned earlier when we discussed general inner products and norms. (Proofs of these inequalities are given in the next subsection for those interested.)

1. (The Schwarz Inequality) Recall that, if u and v are two traditional vectors having an angle of θ between them, then

    u · v = ‖u‖ ‖v‖ cos(θ).

Thus,

    |u · v| ≤ ‖u‖ ‖v‖.

This is a special case of the Schwarz inequality

    |⟨u|v⟩| ≤ ‖u‖ ‖v‖,

which holds whenever u and v are elements of any vector space having any inner product ⟨·|·⟩.
2. (The Triangle Inequality) The classical triangle inequality is that, for any two real or complex numbers u and v,

    |u + v| ≤ |u| + |v|.

Combining this with the Schwarz inequality allows us to derive the general triangle inequality

    ‖u + v‖ ≤ ‖u‖ + ‖v‖,

which holds whenever u and v are elements of any vector space having any inner product ⟨·|·⟩.
The general triangle inequality can be extended to sums with infinitely many terms, just as we extended the classical triangle inequality. That is, we have

    ‖ ∑_{k=γ}^∞ u_k ‖ ≤ ∑_{k=γ}^∞ ‖u_k‖

in general. In some cases this can be used to show that, if ∑_{k=γ}^∞ ‖u_k‖ converges as a series of numbers, then ∑_{k=γ}^∞ u_k converges in norm. However, this is not always the case, and sometimes when you do have "convergence", the convergence is to something not in the vector space you started with. For example, a series of infinitely differentiable functions may converge to a function that is not infinitely differentiable (a common occurrence with Fourier series).

We will discuss these and other issues as necessary later, when the need arises.
Proving the Inequalities

For all the following, assume u and v are arbitrary elements in some arbitrary vector space V having an inner product ⟨·|·⟩.

The easiest way to verify the Schwarz inequality,

    |⟨u|v⟩| ≤ ‖u‖ ‖v‖,

is probably through clever algebra. First observe that this inequality is certainly true if either u or v is the zero element. Now assume neither is the zero element, and, for convenience, let A and B be the scalars

    A = ‖v‖²   and   B = ⟨v|u⟩.

Note that A* = A and that, by the properties of inner products,

    B = ⟨u|v⟩*   and   B* = ⟨u|v⟩.

Furthermore (applying the properties and recalling our conventions),

    0 ≤ ‖Au − Bv‖²
      = ⟨Au − Bv | Au − Bv⟩
      = ⟨Au|Au⟩ − ⟨Au|Bv⟩ − ⟨Bv|Au⟩ + ⟨Bv|Bv⟩
      = A*A ⟨u|u⟩ − A*B ⟨u|v⟩ − B*A ⟨v|u⟩ + B*B ⟨v|v⟩
      = A²‖u‖² − ABB* − B*AB + B*BA
      = A ( ‖u‖² A − |B|² ).

Cutting out the middle and recalling what A and B are yields

    0 ≤ ‖v‖² ( ‖u‖² ‖v‖² − |⟨u|v⟩|² ).

Since ‖v‖ is nonzero here, the last inequality must mean that

    0 ≤ ‖u‖² ‖v‖² − |⟨u|v⟩|².

That is,

    |⟨u|v⟩|² ≤ ‖u‖² ‖v‖².
Taking the square root of both sides then yields the Schwarz inequality.

Verifying the general triangle inequality is now easy:

    ‖u + v‖² = ⟨u + v | u + v⟩
             = ⟨u|u⟩ + ⟨u|v⟩ + ⟨v|u⟩ + ⟨v|v⟩
             = ‖u‖² + ⟨u|v⟩ + ⟨v|u⟩ + ‖v‖²
             ≤ ‖u‖² + |⟨u|v⟩| + |⟨v|u⟩| + ‖v‖²
             ≤ ‖u‖² + ‖u‖ ‖v‖ + ‖u‖ ‖v‖ + ‖v‖²
             = ‖u‖² + 2 ‖u‖ ‖v‖ + ‖v‖²
             = (‖u‖ + ‖v‖)².

Cutting out the middle and taking the square root of both sides then yields

    ‖u + v‖ ≤ ‖u‖ + ‖v‖,

which is the claimed triangle inequality.