Error in Taylor Polynomials

How good are Taylor series?
Learning goal: Investigate the Lagrange error formula for Taylor polynomials
Let’s investigate exactly how accurate the Taylor approximation is.
Let’s start with the zeroth order polynomial expanded around x = a: p0(x) = f(a) (the constant
term). If we move to x = b, how bad an approximation is this?
Well, the correct value is f(b), but our estimate is p0(b) = f(a). So the error is f(b) – f(a). Now
let’s think: why is there an error at all? Why isn’t the zeroth degree polynomial correct? Well,
because the actual function changes. What measures how much the function is changing? The
derivative, of course! Well, there’s a connection between f(b) – f(a) and the derivative. Namely,
the mean value theorem: f(b) – f(a) = f′(c)(b – a) for some choice of c between a and b (given
that the function is differentiable, of course).
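For a concrete illustration (a minimal sketch; the choice f(x) = x^2 on the interval [1, 3] is just an assumption for this example, not taken from the text above), we can solve f(b) – f(a) = f′(c)(b – a) for c and see that it lands between a and b:

# Hypothetical illustration of the mean value theorem for f(x) = x**2 on [1, 3].
def f(x):
    return x**2

def fprime(x):
    return 2 * x

a, b = 1.0, 3.0
slope = (f(b) - f(a)) / (b - a)   # average rate of change: (9 - 1)/2 = 4
c = slope / 2                     # solve f'(c) = 2c = slope, giving c = 2, which lies in (1, 3)
print(slope, c, fprime(c))        # 4.0 2.0 4.0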
OK, so let’s use the first degree polynomial p1(x) = f(a) + f′(a)(x – a). Why does it make an error
when estimating f(b)? Well, because f isn’t a straight line. It curves. What measures how much
it is curving? The second derivative, of course. The larger the second derivative, the worse the
function will curve away from the straight line. To estimate the error, let’s pretend the second
derivative is constant, with value B (for “bad”). To make sure we’re overestimating the error,
we’ll take B to be the worst value, that is, the largest absolute value that the second derivative
takes on the interval. We’re overestimating how much the function is peeling away from the line.
So we know –B ≤ f′′(x) ≤ B. Let’s integrate from a to t: –B(t – a) ≤ f′(t) – f′(a) ≤ B(t – a). Now
let’s integrate again, from a to b: –B(b – a)^2/2 ≤ f(b) – f(a) – f′(a)(b – a) ≤ B(b – a)^2/2. Notice
that the middle term is f(b) – p1(b). So the size of the error is |f(b) – p1(b)| ≤ B(b – a)^2/2. In fact,
with a little more care, you can get an exact version: f(b) – p1(b) = f′′(c)(b – a)^2/2 for some c
between a and b. The error looks kind of like the next term in the Taylor polynomial, only the
derivative is evaluated at some unknown c instead of the base point a.
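Here is a small numerical check of this bound, a sketch only, where f(x) = sin(x), a = 0, and b = 0.5 are assumed purely for illustration:

import math

# Check |f(b) - p1(b)| <= B*(b - a)**2 / 2 for f(x) = sin(x), a = 0, b = 0.5.
a, b = 0.0, 0.5
p1 = math.sin(a) + math.cos(a) * (b - a)   # tangent-line approximation at a
actual_error = abs(math.sin(b) - p1)       # |f(b) - p1(b)|, about 0.0206
B = math.sin(b)                            # |f''(x)| = |sin(x)| is largest at x = b on [0, 0.5]
bound = B * (b - a)**2 / 2                 # about 0.0599
print(actual_error, bound)                 # the actual error is comfortably under the bound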
Would you believe this keeps up?
Taylor’s Theorem (Lagrange form of the error): if the function f is (n + 1)-times differentiable on
the interval from a to b, and pn(x) is the nth degree Taylor polynomial centered at x = a, then the
error is f(b) – pn(b) = f^(n+1)(c)(b – a)^(n+1)/(n + 1)! for some c between a and b.
Aside: a complete proof. To find the exact error, let A be the exact number needed so that
f(b) – pn(b) = A(b – a)^(n+1). Let q(x) = pn(x) + A(x – a)^(n+1) and let e(x) = f(x) – q(x). Note that q(a)
= pn(a) = f(a), and that by definition, q(b) = f(b). So e(a) = e(b) = 0. By Rolle’s theorem, there is
a c0 between a and b so that e′(c0) = 0. Now e′(x) = f′(x) – pn′(x) – (n + 1)A(x – a)^n. Notice that
e′(a) = 0, and e′(c0) = 0. So by Rolle, there is a c1 between a and c0 so that e′′(c1) = 0. Well,
e′′(x) = f′′(x) – pn′′(x) – (n + 1)nA(x – a)^(n – 1). And notice that e′′(a) = 0. So we can keep going.
After n steps, we have e^(n)(x) = f^(n)(x) – pn^(n)(x) – (n + 1)(n)(n – 1)⋯(2)A(x – a). We have e^(n)(a)
= 0, and e^(n)(c_(n–1)) = 0. So by one last Rolle (note the function is n + 1 times differentiable) we
get one last c so that e^(n+1)(c) = 0. But e^(n+1)(x) = f^(n+1)(x) – (n + 1)!A. So plugging in c and solving
for A gives A = f^(n+1)(c)/(n + 1)!, as desired.
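As a sanity check of the theorem (not part of the proof), here is a sketch that finds the promised c for an assumed example, f(x) = e^x with a = 0, b = 1, and n = 2:

import math

# Hypothetical demonstration of Taylor's theorem for f(x) = exp(x), a = 0, b = 1, n = 2.
a, b, n = 0.0, 1.0, 2
p_n = 1 + (b - a) + (b - a)**2 / 2          # second degree Maclaurin polynomial for exp, = 2.5
error = math.exp(b) - p_n                   # f(b) - p_n(b), about 0.2183
# Solve error = exp(c) * (b - a)**3 / 3! for c:
c = math.log(error * math.factorial(n + 1) / (b - a)**(n + 1))
print(error, c, a < c < b)                  # about 0.2183, about 0.2697, True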
Example: estimate the error if x – x^3/6 is used to estimate sin(π/6).
Solution: we know sin(π/6) = ½, while plugging into x – x^3/6 gives 0.499674. So the true error
is 0.000326. Since we are using the third degree polynomial, we know that the error is
f^(4)(c)(b – a)^4/4!. The fourth derivative of sin(x) is sin(x). The center is a = 0. The point we are
trying to estimate is b = π/6. But what is c? We don’t know! But the largest sin(c) can be for any c
between 0 and π/6 is sin(π/6) = ½. So we know the error is less than ½(π/6)^4/4!, or about
0.00156.
(Actually, though, this is also the fourth degree polynomial for sin(x). So we could have used
the error estimate f^(5)(c)(b – a)^5/5!. The fifth derivative of sin(x) is cos(x). Between 0 and π/6,
cos(x) can’t be any bigger than one. So the error is at most (π/6)^5/5!, or about 0.000328. Not
bad!)
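A quick numerical check of the numbers in this example (a minimal sketch using only the quantities discussed above):

import math

# Reproduce the sin(pi/6) example: cubic approximation and the two error bounds.
b = math.pi / 6
approx = b - b**3 / 6
true_error = abs(math.sin(b) - approx)               # about 0.000326
bound_3rd = math.sin(b) * b**4 / math.factorial(4)   # uses |f''''(c)| <= sin(pi/6) = 1/2
bound_4th = 1 * b**5 / math.factorial(5)             # uses |fifth derivative| = |cos(c)| <= 1
print(approx, true_error, bound_3rd, bound_4th)
# roughly 0.499674, 0.000326, 0.00157, 0.000328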
Example: use Taylor polynomials to determine √6, with a bound on the error.
Solution: Let’s make a table to get our coefficients. We can’t work around a = 0, because the
derivative is undefined there. We will use a = 4 because it has a convenient square root. If we
use the third degree Taylor polynomial, we will need the fourth derivative to estimate the error.
So we calculate:
n    f^(n)(x)          f^(n)(4)    coefficient
0    x^(1/2)           2           2
1    x^(–1/2)/2        1/4         1/4
2    –x^(–3/2)/4       –1/32       –1/64
3    3x^(–5/2)/8       3/256       1/512
4    –15x^(–7/2)/16    (only needed for the error estimate)
So the Taylor polynomial is 2 + (x – 4)/4 – (x – 4)^2/64 + (x – 4)^3/512. Plugging in x = 6 we find
that √6 is approximately 2 + ½ – 4/64 + 8/512 = 2 + 29/64 = 2.453125.
The error is –(15/16)c^(–7/2)(b – a)^4/4!. Now b = 6 and a = 4. We choose the c between 4 and 6 that
makes this as large as possible in size, namely c = 4. Notice that no matter what c we choose, the
error is negative, telling us our estimate is too large. So our estimate is too large, but by at most
(15/16)⋅4^(–7/2)⋅2^4/4! = 5/1024, or about 0.005. The exact value of √6 is 2.44949. Our estimate was
too large, by about 0.0037.
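A short sketch that reproduces these numbers (using only the values from this example):

import math

# Reproduce the sqrt(6) example: third degree Taylor polynomial of sqrt(x) about a = 4.
a, b = 4.0, 6.0
p3 = 2 + (b - a)/4 - (b - a)**2/64 + (b - a)**3/512                  # 2.453125
error_bound = (15/16) * a**(-7/2) * (b - a)**4 / math.factorial(4)   # worst case at c = 4, about 0.00488
actual_error = p3 - math.sqrt(b)                                     # positive: the estimate is too large
print(p3, error_bound, actual_error)
# roughly 2.453125, 0.00488, 0.00364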
We often use the simpler form |error| ≤ K(b – a)^(n+1)/(n+1)!, where K is some value that is
known to be at least as big as |f^(n+1)(x)| on the interval from a to b.
Example: show that the series for sin(x) converges to sin(x) when we plug in any x on the interval
from –π to π.
Solution: the error is f^(n+1)(c)⋅x^(n+1)/(n+1)!. Now no matter what, the derivatives are sines and
cosines, so the derivative term can never be larger than one in size. That is, |error| ≤ 1⋅|x|^(n+1)/(n+1)!. Now
this goes to zero for any x, so the error between the polynomial and the true value of the function
goes to zero for any x, not just –π to π. So the series for sine converges to the true value of sine
for any x.
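A small sketch showing the bound shrinking; the choice x = π and the sampled values of n are just for illustration:

import math

# Watch the bound |x|**(n+1)/(n+1)! shrink, even for x as large as pi.
x = math.pi
for n in range(0, 30, 5):
    print(n, x**(n + 1) / math.factorial(n + 1))
# The factorial eventually overwhelms any fixed power of x, so the bound goes to 0.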
Example: How many terms are necessary to approximate sin(5) to within 0.001?
Solution: since we now know the series converges to the true value of sine, we have two
approaches:
(a) use the error formula f^(n+1)(c)⋅5^(n+1)/(n+1)!, bounding the derivative factor by 1 and trying n’s until the bound is less than 0.001.
(b) use the alternating series test, where the error is less than the next term (once they start
decreasing) to make this estimate. This is justified because the alternating series test tells you
how far off the partial sum is from the series, and the previous example already showed that the
series value is the true value of the function. Since the terms are 5^(2n+1)/(2n+1)!, by trial and error
we find that 2n + 1 = 19 is the first term that is small enough to leave out. Yes, we need to go up
to the 17th power to be accurate enough here.
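A brief sketch of the trial-and-error search in approach (b):

import math

# Find the first odd power 2n+1 whose term 5**(2n+1)/(2n+1)! drops below 0.001.
for k in range(1, 30, 2):               # k plays the role of 2n + 1
    term = 5**k / math.factorial(k)
    if term < 0.001:
        print(k, term)                  # prints 19 and roughly 0.000157
        break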
Example: Not everything works out well! Find the Maclaurin series for the function defined by
f(x) = e^(–1/x^2) for x ≠ 0, with f(0) = 0.
It turns out that this function is continuous and
differentiable any number of times on the real numbers.
The graph of this function is remarkably flat near the origin. It turns out that all the derivatives,
evaluated at the origin, are zero. So the Maclaurin series is identically zero! Notice how this does
not converge to the true value of the function for any value of x other than zero. The reason is
that the derivatives of this function become gargantuan very quickly, so the f^(n+1)(c) term
becomes large, even in comparison to the factorial.
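A small sketch comparing the function to its (identically zero) Maclaurin series at a few assumed sample points:

import math

# The Maclaurin series of this f is identically 0, but f itself is not.
def f(x):
    return math.exp(-1 / x**2) if x != 0 else 0.0

for x in (0.1, 0.5, 1.0):
    print(x, f(x))    # 0.1 -> about 3.7e-44, 0.5 -> about 0.018, 1.0 -> about 0.368
# The series value (0) misses f(x) for every x other than 0, even though f(0.1) is astonishingly tiny.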