Chapter 10
Orthogonal Polynomials and Least-Squares Approximations to Functions
10.1 Discrete Least-Squares Approximations
Given a set of data points (x_1, y_1), (x_2, y_2), ..., (x_m, y_m), a common and useful practice in statistics, engineering, and other applied sciences is to construct a curve that is considered the "best fit" for the data, in some sense. Several types of "fits" can be considered, but the one used most often in applications is the "least-squares fit". Mathematically, the problem is the following:
Discrete Least-Squares Approximation Problem
Given a set of m discrete data points (x_i, y_i), i = 1, 2, ..., m, find the algebraic polynomial

    P_n(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n   (n < m)

such that the error E(a_0, a_1, ..., a_n) in the least-squares sense is minimized; that is,

    E(a_0, a_1, ..., a_n) = Σ_{i=1}^{m} (y_i − a_0 − a_1 x_i − a_2 x_i^2 − ··· − a_n x_i^n)^2

is minimum.
Here E(a_0, a_1, ..., a_n) is a function of the (n + 1) variables a_0, a_1, ..., a_n. For this function to be minimized, we must have

    ∂E/∂a_j = 0,   j = 0, 1, ..., n.
Now, simple computations of these partial derivatives yield:

    ∂E/∂a_0 = −2 Σ_{i=1}^{m} (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)
    ∂E/∂a_1 = −2 Σ_{i=1}^{m} x_i (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)
      ⋮
    ∂E/∂a_n = −2 Σ_{i=1}^{m} x_i^n (y_i − a_0 − a_1 x_i − ··· − a_n x_i^n)
Setting these derivatives equal to zero, we have

    a_0 Σ_{i=1}^{m} 1 + a_1 Σ_{i=1}^{m} x_i + a_2 Σ_{i=1}^{m} x_i^2 + ··· + a_n Σ_{i=1}^{m} x_i^n = Σ_{i=1}^{m} y_i
    a_0 Σ_{i=1}^{m} x_i + a_1 Σ_{i=1}^{m} x_i^2 + ··· + a_n Σ_{i=1}^{m} x_i^{n+1} = Σ_{i=1}^{m} x_i y_i
      ⋮
    a_0 Σ_{i=1}^{m} x_i^n + a_1 Σ_{i=1}^{m} x_i^{n+1} + ··· + a_n Σ_{i=1}^{m} x_i^{2n} = Σ_{i=1}^{m} x_i^n y_i

Set

    s_k = Σ_{i=1}^{m} x_i^k,   k = 0, 1, ..., 2n,   and   b_k = Σ_{i=1}^{m} x_i^k y_i,   k = 0, 1, ..., n.

Using these notations, the above equations can be written as:
    s_0 a_0 + s_1 a_1 + ··· + s_n a_n = b_0   (Note that Σ_{i=1}^{m} 1 = Σ_{i=1}^{m} x_i^0 = s_0.)
    s_1 a_0 + s_2 a_1 + ··· + s_{n+1} a_n = b_1
      ⋮
    s_n a_0 + s_{n+1} a_1 + ··· + s_{2n} a_n = b_n
This is a system of (n + 1) equations in the (n + 1) unknowns a_0, a_1, ..., a_n. These equations are called the Normal Equations. This system can now be solved to obtain these (n + 1) unknowns, provided a solution to the system exists. We will now show that this system has a unique solution if the x_i's are distinct.
The system can be written in the following matrix form:

    [ s_0   s_1     ...  s_n     ] [ a_0 ]   [ b_0 ]
    [ s_1   s_2     ...  s_{n+1} ] [ a_1 ] = [ b_1 ]
    [  ⋮     ⋮             ⋮     ] [  ⋮  ]   [  ⋮  ]
    [ s_n   s_{n+1} ...  s_{2n}  ] [ a_n ]   [ b_n ]

or

    S a = b,

where S = (s_{i+j}), i, j = 0, 1, ..., n, is the (n + 1) × (n + 1) matrix above, a = (a_0, a_1, ..., a_n)^T, and b = (b_0, b_1, ..., b_n)^T.
Define the m × (n + 1) matrix

    V = [ 1  x_1  x_1^2  ...  x_1^n ]
        [ 1  x_2  x_2^2  ...  x_2^n ]
        [ 1  x_3  x_3^2  ...  x_3^n ]
        [ ⋮    ⋮     ⋮          ⋮   ]
        [ 1  x_m  x_m^2  ...  x_m^n ]

Then the above system has the form:

    V^T V a = b.

(Indeed, S = V^T V and b = V^T y, where y = (y_1, ..., y_m)^T.)
The matrix V is known as the Vandermonde matrix, and we have seen in Chapter 6 that this matrix has full rank if the x_i's are distinct. In this case, the matrix S = V^T V is symmetric and positive definite [Exercise] and is therefore nonsingular. Thus, if the x_i's are distinct, the equation Sa = b has a unique solution.
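For concreteness, here is a minimal sketch of the discrete least-squares fit in Python (assuming NumPy; the function name poly_lsq and the sample data are illustrative only, not from the text):

    import numpy as np

    def poly_lsq(x, y, n):
        """Fit P_n(x) = a0 + a1*x + ... + an*x^n to data (x_i, y_i)
        by forming and solving the normal equations S a = b, S = V^T V."""
        V = np.vander(x, n + 1, increasing=True)  # Vandermonde matrix V
        S = V.T @ V                               # symmetric positive definite if x_i distinct
        b = V.T @ y                               # b_k = sum_i x_i^k y_i
        return np.linalg.solve(S, b)              # coefficients a0, ..., an

    # Fit a straight line to four data points.
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 1.9, 3.2, 3.9])
    print(poly_lsq(x, y, 1))   # approximately [1.07, 0.97]

In practice one would usually solve the least-squares problem via a QR factorization of V (e.g. numpy.linalg.lstsq) rather than forming V^T V explicitly, precisely because of the conditioning issues discussed below.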
Theorem 10.1 (Existence and uniqueness of discrete least-squares solutions). Let (x_1, y_1), (x_2, y_2), ..., (x_m, y_m) be m points with distinct abscissas x_1, ..., x_m. Then the discrete least-squares approximation problem has a unique solution.
10.1.1 Least-Squares Approximation of a Function
We have described least-squares approximation to fit a set of discrete data. Here we describe continuous least-squares approximation of a function f(x) by polynomials.
First, consider approximation by a polynomial with the monomial basis {1, x, x^2, ..., x^n}.
Least-Squares Approximation of a Function Using Monomial Polynomials

Given a function f(x), continuous on [a, b], find a polynomial P_n(x) of degree at most n,

    P_n(x) = a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n,

such that the integral of the square of the error is minimized; that is,

    E = ∫_a^b [f(x) − P_n(x)]^2 dx

is minimized. The polynomial P_n(x) is called the Least-Squares Polynomial.
Since E is a function of a_0, a_1, ..., a_n, we denote it by E(a_0, a_1, ..., a_n). For minimization, we must have

    ∂E/∂a_i = 0,   i = 0, 1, ..., n.

As before, these conditions give rise to a system of (n + 1) normal equations in the (n + 1) unknowns a_0, a_1, ..., a_n, whose solution yields those unknowns.
10.1.2 Setting up the Normal Equations
Since

    E = ∫_a^b [f(x) − (a_0 + a_1 x + a_2 x^2 + ··· + a_n x^n)]^2 dx,

we have

    ∂E/∂a_0 = −2 ∫_a^b [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx
    ∂E/∂a_1 = −2 ∫_a^b x [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx
      ⋮
    ∂E/∂a_n = −2 ∫_a^b x^n [f(x) − a_0 − a_1 x − a_2 x^2 − ··· − a_n x^n] dx
So,

    ∂E/∂a_0 = 0 ⇒ a_0 ∫_a^b 1 dx + a_1 ∫_a^b x dx + a_2 ∫_a^b x^2 dx + ··· + a_n ∫_a^b x^n dx = ∫_a^b f(x) dx.

Similarly,

    ∂E/∂a_i = 0 ⇒ a_0 ∫_a^b x^i dx + a_1 ∫_a^b x^{i+1} dx + a_2 ∫_a^b x^{i+2} dx + ··· + a_n ∫_a^b x^{i+n} dx = ∫_a^b x^i f(x) dx,   i = 1, 2, 3, ..., n.
So, the (n + 1) normal equations in this case are:

    i = 0:  a_0 ∫_a^b 1 dx + a_1 ∫_a^b x dx + a_2 ∫_a^b x^2 dx + ··· + a_n ∫_a^b x^n dx = ∫_a^b f(x) dx
    i = 1:  a_0 ∫_a^b x dx + a_1 ∫_a^b x^2 dx + a_2 ∫_a^b x^3 dx + ··· + a_n ∫_a^b x^{n+1} dx = ∫_a^b x f(x) dx
      ⋮
    i = n:  a_0 ∫_a^b x^n dx + a_1 ∫_a^b x^{n+1} dx + a_2 ∫_a^b x^{n+2} dx + ··· + a_n ∫_a^b x^{2n} dx = ∫_a^b x^n f(x) dx

Denote

    s_i = ∫_a^b x^i dx,   i = 0, 1, 2, 3, ..., 2n,   and   b_i = ∫_a^b x^i f(x) dx,   i = 0, 1, ..., n.
Then the above (n + 1) equations can be written as

    a_0 s_0 + a_1 s_1 + a_2 s_2 + ··· + a_n s_n = b_0
    a_0 s_1 + a_1 s_2 + a_2 s_3 + ··· + a_n s_{n+1} = b_1
      ⋮
    a_0 s_n + a_1 s_{n+1} + a_2 s_{n+2} + ··· + a_n s_{2n} = b_n,
or, in matrix notation,

    [ s_0   s_1     ...  s_n     ] [ a_0 ]   [ b_0 ]
    [ s_1   s_2     ...  s_{n+1} ] [ a_1 ] = [ b_1 ]
    [  ⋮     ⋮             ⋮     ] [  ⋮  ]   [  ⋮  ]
    [ s_n   s_{n+1} ...  s_{2n}  ] [ a_n ]   [ b_n ]

Denote S = (s_{i+j}), a = (a_0, a_1, ..., a_n)^T, b = (b_0, b_1, ..., b_n)^T.
Then we have the system of normal equations Sa = b, whose solution yields the coefficients a_0, a_1, ..., a_n of the least-squares polynomial P_n(x).
A Special Case: Let the interval be [0, 1]. Then

    s_i = ∫_0^1 x^i dx = 1/(i + 1),   i = 0, 1, ..., 2n.

Thus, in this case the matrix of the normal equations is

    S = [ 1        1/2      ...  1/(n+1)  ]
        [ 1/2      1/3      ...  1/(n+2)  ]
        [  ⋮        ⋮              ⋮      ]
        [ 1/(n+1)  1/(n+2)  ...  1/(2n+1) ]

which is a Hilbert Matrix. It is well known to be ill-conditioned (see Chapter 3).
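A quick numerical illustration (a sketch assuming SciPy, whose scipy.linalg.hilbert builds this matrix) shows how rapidly the conditioning deteriorates:

    import numpy as np
    from scipy.linalg import hilbert

    # Condition number of the k x k Hilbert matrix for a few sizes.
    for k in [3, 5, 7, 9]:
        print(k, np.linalg.cond(hilbert(k)))
    # k = 5 already gives cond ~ 4.8e5; k = 9 gives ~ 4.9e11.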
Algorithm 10.2 (Least-Squares Approximation Using Monomial Polynomials).

Inputs: (i) f(x) - A continuous function on [a, b].
(ii) n - The degree of the desired least-squares polynomial.

Output: The coefficients a_0, a_1, ..., a_n of the desired least-squares polynomial P_n(x) = a_0 + a_1 x + ··· + a_n x^n.

Step 1. Compute s_0, s_1, ..., s_{2n}:
For i = 0, 1, ..., 2n do
    s_i = ∫_a^b x^i dx
End

Step 2. Compute b_0, b_1, ..., b_n:
For i = 0, 1, ..., n do
    b_i = ∫_a^b x^i f(x) dx
End

Step 3. Form the matrix S from the numbers s_0, s_1, ..., s_{2n} and the vector b from the numbers b_0, b_1, ..., b_n:

    S = [ s_0   s_1     ...  s_n     ]        b = [ b_0 ]
        [ s_1   s_2     ...  s_{n+1} ]            [ b_1 ]
        [  ⋮     ⋮             ⋮     ]            [  ⋮  ]
        [ s_n   s_{n+1} ...  s_{2n}  ]            [ b_n ]
Step 4. Solve the (n + 1) × (n + 1) system of equations Sa = b for a = (a_0, a_1, ..., a_n)^T.
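The algorithm is straightforward to implement; the sketch below assumes SciPy's quad for the integrals (the name lsq_poly_coeffs is ours), and its last line reproduces the quadratic fit of Example 10.3 below:

    import numpy as np
    from scipy.integrate import quad

    def lsq_poly_coeffs(f, a, b, n):
        # Step 1: s_i = integral_a^b x^i dx, computed in closed form.
        s = [(b**(i + 1) - a**(i + 1)) / (i + 1) for i in range(2 * n + 1)]
        # Step 2: b_i = integral_a^b x^i f(x) dx, by numerical quadrature.
        bvec = [quad(lambda x, i=i: x**i * f(x), a, b)[0] for i in range(n + 1)]
        # Steps 3-4: form S = (s_{i+j}) and solve S a = b.
        S = np.array([[s[i + j] for j in range(n + 1)] for i in range(n + 1)])
        return np.linalg.solve(S, np.array(bvec))

    print(lsq_poly_coeffs(np.exp, -1.0, 1.0, 2))  # approx [0.9963, 1.1037, 0.5368]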
Example 10.3
Find linear and quadratic least-squares approximations to f(x) = e^x on [−1, 1].
Linear Approximation: n = 1; P_1(x) = a_0 + a_1 x.

    s_0 = ∫_{−1}^{1} dx = 2
    s_1 = ∫_{−1}^{1} x dx = [x^2/2]_{−1}^{1} = 1/2 − 1/2 = 0
    s_2 = ∫_{−1}^{1} x^2 dx = [x^3/3]_{−1}^{1} = 1/3 + 1/3 = 2/3
Thus,

    S = [ s_0  s_1 ] = [ 2  0   ]
        [ s_1  s_2 ]   [ 0  2/3 ]

    b_0 = ∫_{−1}^{1} e^x dx = e − 1/e = 2.3504
    b_1 = ∫_{−1}^{1} x e^x dx = 2/e = 0.7358

The normal system is:

    [ 2  0   ] [ a_0 ]   [ b_0 ]
    [ 0  2/3 ] [ a_1 ] = [ b_1 ]
This gives a_0 = 1.1752, a_1 = 1.1037. The linear least-squares polynomial is P_1(x) = 1.1752 + 1.1037x.
Accuracy Check: P_1(0.5) = 1.7270, e^{0.5} = 1.6487.

    Relative error = |e^{0.5} − P_1(0.5)| / |e^{0.5}| = |1.6487 − 1.7270| / |1.6487| = 0.0475.
Quadratic Fitting: n = 2; P_2(x) = a_0 + a_1 x + a_2 x^2.

    s_0 = 2, s_1 = 0, s_2 = 2/3 (as above)
    s_3 = ∫_{−1}^{1} x^3 dx = [x^4/4]_{−1}^{1} = 0
    s_4 = ∫_{−1}^{1} x^4 dx = [x^5/5]_{−1}^{1} = 2/5
    b_0 = ∫_{−1}^{1} e^x dx = e − 1/e = 2.3504
    b_1 = ∫_{−1}^{1} x e^x dx = 2/e = 0.7358
    b_2 = ∫_{−1}^{1} x^2 e^x dx = e − 5/e = 0.8789

The system of normal equations is:

    [ 2    0    2/3 ] [ a_0 ]   [ 2.3504 ]
    [ 0    2/3  0   ] [ a_1 ] = [ 0.7358 ]
    [ 2/3  0    2/5 ] [ a_2 ]   [ 0.8789 ]
The solution of this system is: a_0 = 0.9963, a_1 = 1.1037, a_2 = 0.5368. The quadratic least-squares polynomial is P_2(x) = 0.9963 + 1.1037x + 0.5368x^2.
Accuracy Check: P_2(0.5) = 1.6824, e^{0.5} = 1.6487.

    Relative error = |P_2(0.5) − e^{0.5}| / |e^{0.5}| = |1.6824 − 1.6487| / |1.6487| = 0.0204.
Example 10.4
Find linear and quadratic least-squares polynomial approximations to f(x) = x^2 + 5x + 6 on [0, 1].
Linear Fit: P_1(x) = a_0 + a_1 x.

    s_0 = ∫_0^1 dx = 1
    s_1 = ∫_0^1 x dx = 1/2
    s_2 = ∫_0^1 x^2 dx = 1/3
    b_0 = ∫_0^1 (x^2 + 5x + 6) dx = 1/3 + 5/2 + 6 = 53/6
    b_1 = ∫_0^1 x(x^2 + 5x + 6) dx = ∫_0^1 (x^3 + 5x^2 + 6x) dx = 1/4 + 5/3 + 6/2 = 59/12

The normal equations are:

    [ 1    1/2 ] [ a_0 ]   [ 53/6  ]
    [ 1/2  1/3 ] [ a_1 ] = [ 59/12 ]

⇒ a_0 = 5.8333, a_1 = 6.
The linear least-squares polynomial is P_1(x) = 5.8333 + 6x.

Accuracy Check: Exact value: f(0.5) = 8.75; P_1(0.5) = 8.8333.

    Relative error = |8.8333 − 8.75| / |8.75| = 0.0095.
Quadratic Least-Squares Approximation: P_2(x) = a_0 + a_1 x + a_2 x^2.

    S = [ 1    1/2  1/3 ]
        [ 1/2  1/3  1/4 ]
        [ 1/3  1/4  1/5 ]

    b_0 = 53/6,   b_1 = 59/12 (as before),   and
    b_2 = ∫_0^1 x^2 (x^2 + 5x + 6) dx = ∫_0^1 (x^4 + 5x^3 + 6x^2) dx = 1/5 + 5/4 + 6/3 = 69/20.

The solution of the linear system Sa = b is: a_0 = 6, a_1 = 5, a_2 = 1, so

    P_2(x) = 6 + 5x + x^2 (exact).
10.1.3 Use of Orthogonal Polynomials in Least-Squares Approximations
The least-squares approximation using monomial polynomials, as described above, is not numerically effective, since the matrix S of the normal equations is very often ill-conditioned.
For example, when the interval is [0, 1], we have seen that S is a Hilbert matrix, which is notoriously ill-conditioned even for modest values of n; when n = 5, cond(S) = O(10^5). Such computations can, however, be made numerically effective by using a special type of polynomials, called orthogonal polynomials.
Definition 10.5. The set of functions {φ_0, φ_1, ..., φ_n} on [a, b] is called a set of orthogonal functions with respect to a weight function w(x) if

    ∫_a^b w(x) φ_j(x) φ_i(x) dx = { 0    if i ≠ j
                                  { C_j  if i = j,

where each C_j is a real positive number. Furthermore, if C_j = 1 for j = 0, 1, ..., n, then the orthogonal set is called an orthonormal set.
Using this property, least-squares computations can be made more numerically effective, as shown below. Without any loss of generality, let us assume that w(x) = 1.

Idea: The idea is to find a least-squares approximation of f(x) on [a, b] by means of a polynomial of the form

    P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x),

where {φ_k}_{k=0}^{n} is a set of orthogonal polynomials. That is, the basis for generating P_n(x) in this case is a set of orthogonal polynomials.
The question now arises: is it possible to write P_n(x) as a linear combination of the orthogonal polynomials? The answer is yes. It can be shown [Exercise] that, given the set of orthogonal polynomials {Q_i(x)}_{i=0}^{n}, a polynomial P_n(x) of degree ≤ n can be written as

    P_n(x) = a_0 Q_0(x) + a_1 Q_1(x) + ··· + a_n Q_n(x)

for some a_0, a_1, ..., a_n. Finding the least-squares approximation of f(x) on [a, b] using orthogonal polynomials can then be stated as follows:
Least-Squares Approximation of a Function Using Orthogonal Polynomials

Given f(x), continuous on [a, b], find a_0, a_1, ..., a_n using a polynomial of the form

    P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x),

where {φ_k(x)}_{k=0}^{n} is a given set of orthogonal polynomials on [a, b], such that the error function

    E(a_0, a_1, ..., a_n) = ∫_a^b [f(x) − (a_0 φ_0(x) + ··· + a_n φ_n(x))]^2 dx

is minimized.
As before, we set

    ∂E/∂a_i = 0,   i = 0, 1, ..., n.

Now,

    ∂E/∂a_0 = −2 ∫_a^b φ_0(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)] dx.

Setting this equal to zero, we get

    ∫_a^b φ_0(x) f(x) dx = ∫_a^b (a_0 φ_0(x) + ··· + a_n φ_n(x)) φ_0(x) dx.

Since {φ_k(x)}_{k=0}^{n} is an orthogonal set, we have

    ∫_a^b φ_0^2(x) dx = C_0   and   ∫_a^b φ_0(x) φ_i(x) dx = 0,   i ≠ 0.

Applying this orthogonality, we see from the above that

    ∫_a^b φ_0(x) f(x) dx = C_0 a_0,   that is,   a_0 = (1/C_0) ∫_a^b φ_0(x) f(x) dx.

Similarly,

    ∂E/∂a_1 = −2 ∫_a^b φ_1(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)] dx.
Again, from the orthogonality of {φ_j(x)}_{j=0}^{n} we have

    ∫_a^b φ_1^2(x) dx = C_1   and   ∫_a^b φ_1(x) φ_i(x) dx = 0,   i ≠ 1,

so, setting ∂E/∂a_1 = 0, we get

    a_1 = (1/C_1) ∫_a^b φ_1(x) f(x) dx.

In general, we have

    a_k = (1/C_k) ∫_a^b φ_k(x) f(x) dx,   k = 0, 1, ..., n,   where   C_k = ∫_a^b φ_k^2(x) dx.
10.1.4 Expressions for a_k with a Weight Function w(x)
If the weight function w(x) is included, then a_k is modified to

    a_k = (1/C_k) ∫_a^b w(x) f(x) φ_k(x) dx,   k = 0, 1, ..., n,

where

    C_k = ∫_a^b w(x) φ_k^2(x) dx.
Algorithm 10.6 (Least-Squares Approximation Using Orthogonal Polynomials).

Inputs: f(x) - A continuous function on [a, b].
w(x) - A weight function (an integrable function on [a, b]).
{φ_k(x)}_{k=0}^{n} - A set of (n + 1) orthogonal functions on [a, b].

Output: The coefficients a_0, a_1, ..., a_n such that

    ∫_a^b w(x) [f(x) − a_0 φ_0(x) − a_1 φ_1(x) − ··· − a_n φ_n(x)]^2 dx

is minimized.

Step 1. Compute C_k, k = 0, 1, ..., n:
For k = 0, 1, 2, ..., n do
    C_k = ∫_a^b w(x) φ_k^2(x) dx
End
Step 2. Compute a_k, k = 0, 1, ..., n:
For k = 0, 1, 2, ..., n do
    a_k = (1/C_k) ∫_a^b w(x) f(x) φ_k(x) dx
End
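A direct transcription of these two steps (a sketch assuming SciPy; the function name orth_lsq_coeffs is ours):

    from scipy.integrate import quad

    def orth_lsq_coeffs(f, w, phis, a, b):
        """phis is a list of callables phi_0, ..., phi_n; returns a_0, ..., a_n."""
        coeffs = []
        for phi in phis:
            C = quad(lambda x: w(x) * phi(x) ** 2, a, b)[0]      # Step 1: C_k
            g = quad(lambda x: w(x) * f(x) * phi(x), a, b)[0]    # integral in Step 2
            coeffs.append(g / C)                                 # a_k = g / C_k
        return coeffs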
Generating a Set of Orthogonal Polynomials
The next question is: how do we generate a set of orthogonal polynomials on [a, b] with respect to a weight function w(x)? The classical Gram-Schmidt process (see Chapter 7) can be used for this purpose. Starting with the set of monomials {1, x, ..., x^n}, we can generate an orthogonal (or, after normalization, orthonormal) set of polynomials {Q_k(x)}, as sketched below.
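A sketch of this construction (assuming NumPy and SciPy; the helper name gram_schmidt_polys is ours): starting from the monomials, subtract from each x^k its projections onto the previously built polynomials.

    import numpy as np
    from numpy.polynomial import Polynomial
    from scipy.integrate import quad

    def gram_schmidt_polys(n, a, b, w=lambda x: 1.0):
        def inner(p, q):  # <p, q> = integral_a^b w(x) p(x) q(x) dx
            return quad(lambda x: w(x) * p(x) * q(x), a, b)[0]
        Q = []
        for k in range(n + 1):
            p = Polynomial.basis(k)                      # the monomial x^k
            for q in Q:                                  # remove components along Q_0..Q_{k-1}
                p = p - (inner(p, q) / inner(q, q)) * q
            Q.append(p)
        return Q

    # On [-1, 1] with w = 1 this reproduces the (monic) Legendre polynomials:
    for q in gram_schmidt_polys(3, -1.0, 1.0):
        print(q.coef.round(4))   # 1;  x;  x^2 - 1/3;  x^3 - (3/5) x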
10.2 Least-Squares Approximation Using Legendre's Polynomials
Recall from the last chapter that the first few Legendre polynomials are given by:

    φ_0(x) = 1
    φ_1(x) = x
    φ_2(x) = x^2 − 1/3
    φ_3(x) = x^3 − (3/5)x

etc.
To use the Legendre polynomials to find a least-squares approximation of a function f(x), we set w(x) = 1 and [a, b] = [−1, 1], and compute

• C_k, k = 0, 1, ..., n, by Step 1 of Algorithm 10.6:  C_k = ∫_{−1}^{1} φ_k^2(x) dx,  and

• a_k, k = 0, 1, ..., n, by Step 2 of Algorithm 10.6:  a_k = (1/C_k) ∫_{−1}^{1} f(x) φ_k(x) dx.
The least-squares polynomial is then given by P_n(x) = a_0 φ_0(x) + a_1 φ_1(x) + ··· + a_n φ_n(x). A few of the C_k's are listed below:

    C_0 = ∫_{−1}^{1} φ_0^2(x) dx = ∫_{−1}^{1} 1 dx = 2
    C_1 = ∫_{−1}^{1} φ_1^2(x) dx = ∫_{−1}^{1} x^2 dx = 2/3
    C_2 = ∫_{−1}^{1} φ_2^2(x) dx = ∫_{−1}^{1} (x^2 − 1/3)^2 dx = 8/45

and so on.
Example 10.7
Find linear and quadratic least-squares approximations to f(x) = e^x using Legendre polynomials.
Linear Approximation: P_1(x) = a_0 φ_0(x) + a_1 φ_1(x), with φ_0(x) = 1, φ_1(x) = x.

Step 1. Compute C_0 and C_1:

    C_0 = ∫_{−1}^{1} φ_0^2(x) dx = ∫_{−1}^{1} dx = [x]_{−1}^{1} = 2
    C_1 = ∫_{−1}^{1} φ_1^2(x) dx = ∫_{−1}^{1} x^2 dx = [x^3/3]_{−1}^{1} = 1/3 + 1/3 = 2/3
Step 2. Compute a_0 and a_1:

    a_0 = (1/C_0) ∫_{−1}^{1} φ_0(x) e^x dx = (1/2) ∫_{−1}^{1} e^x dx = (1/2)(e − 1/e)
    a_1 = (1/C_1) ∫_{−1}^{1} φ_1(x) e^x dx = (3/2) ∫_{−1}^{1} x e^x dx = 3/e

The linear least-squares polynomial:

    P_1(x) = a_0 φ_0(x) + a_1 φ_1(x) = (1/2)(e − 1/e) + (3/e) x.
Accuracy Check: P_1(0.5) = (1/2)(e − 1/e) + (3/e)(0.5) = 1.7270; e^{0.5} = 1.6487.

    Relative error = |1.7270 − 1.6487| / |1.6487| = 0.0475.

Quadratic Approximation: P_2(x) = a_0 φ_0(x) + a_1 φ_1(x) + a_2 φ_2(x), with

    a_0 = (1/2)(e − 1/e),   a_1 = 3/e   (already computed above).
Step 1. Compute C_2:

    C_2 = ∫_{−1}^{1} φ_2^2(x) dx = ∫_{−1}^{1} (x^2 − 1/3)^2 dx = [x^5/5 − (2/9)x^3 + x/9]_{−1}^{1} = 8/45.
Step 2. Compute a_2:

    a_2 = (1/C_2) ∫_{−1}^{1} e^x φ_2(x) dx = (45/8) ∫_{−1}^{1} e^x (x^2 − 1/3) dx = (45/8)·(2/3)(e − 7/e) = (15/4)(e − 7/e) ≈ 0.5367.
The quadratic least-squares polynomial:

    P_2(x) = (1/2)(e − 1/e) + (3/e) x + (15/4)(e − 7/e)(x^2 − 1/3).
Accuracy Check: P_2(0.5) = 1.6823; e^{0.5} = 1.6487.

    Relative error = |1.6823 − 1.6487| / |1.6487| = 0.0204.
Note that this relative error agrees with the one obtained earlier with the monomial basis of degree 2, as it must: the least-squares polynomial of a given degree is unique, whichever basis is used to compute it. The advantage of the orthogonal basis is numerical: no ill-conditioned linear system has to be solved.
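As a numerical cross-check of this example (a sketch assuming SciPy), one can evaluate the coefficient formulas directly:

    import numpy as np
    from scipy.integrate import quad

    # Legendre basis phi_0, phi_1, phi_2 from Section 10.2.
    phis = [lambda x: 1.0, lambda x: x, lambda x: x**2 - 1.0 / 3.0]
    a = []
    for phi in phis:
        C = quad(lambda x: phi(x) ** 2, -1, 1)[0]                    # C_k
        a.append(quad(lambda x: np.exp(x) * phi(x), -1, 1)[0] / C)   # a_k
    print(np.round(a, 4))   # approx [1.1752, 1.1036, 0.5367]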
10.3 Chebyshev polynomials: Another wonderful family of orthogonal polynomials
Definition 10.8. The set of polynomials defined by

    T_n(x) = cos(n arccos x),   n ≥ 0,

on [−1, 1] are called the Chebyshev polynomials.
To see that T_n(x) is a polynomial of degree n in our familiar form, we derive a recursive relation by noting that T_0(x) = 1 (the Chebyshev polynomial of degree zero) and T_1(x) = x (the Chebyshev polynomial of degree one).
10.3.1 A Recursive Relation for Generating Chebyshev Polynomials
Substitute θ = arccos x. Then

    T_n(x) = cos(nθ),   0 ≤ θ ≤ π,

and

    T_{n+1}(x) = cos((n + 1)θ) = cos nθ cos θ − sin nθ sin θ,
    T_{n−1}(x) = cos((n − 1)θ) = cos nθ cos θ + sin nθ sin θ.

Adding the last two equations, we obtain

    T_{n+1}(x) + T_{n−1}(x) = 2 cos nθ cos θ.

The right-hand side still does not look like a polynomial in x, but note that cos θ = x. So

    T_{n+1}(x) = 2 cos nθ cos θ − T_{n−1}(x) = 2x T_n(x) − T_{n−1}(x).

This is a three-term recurrence relation for generating the Chebyshev polynomials.
Three-Term Recurrence Formula for Chebyshev Polynomials

    T_0(x) = 1,   T_1(x) = x,
    T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),   n ≥ 1.
Using this recurrence, the Chebyshev polynomials of successive degrees can be generated:

    n = 1:  T_2(x) = 2x T_1(x) − T_0(x) = 2x^2 − 1,
    n = 2:  T_3(x) = 2x T_2(x) − T_1(x) = 2x(2x^2 − 1) − x = 4x^3 − 3x,

and so on.
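The recurrence is easy to run in code; a sketch using numpy.polynomial.Polynomial for the coefficient arithmetic (the function name chebyshev_T is ours):

    from numpy.polynomial import Polynomial

    def chebyshev_T(n):
        """Return [T_0, ..., T_n] via T_{k+1} = 2x T_k - T_{k-1}."""
        T = [Polynomial([1.0]), Polynomial([0.0, 1.0])]   # T_0 = 1, T_1 = x
        for k in range(1, n):
            T.append(2 * Polynomial([0.0, 1.0]) * T[k] - T[k - 1])
        return T[:n + 1]

    for T in chebyshev_T(3):
        print(T.coef)   # [1.]; [0. 1.]; [-1. 0. 2.]; [0. -3. 0. 4.]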
10.3.2 The orthogonal property of the Chebyshev polynomials
We now show that the Chebyshev polynomials are orthogonal with respect to the weight function w(x) = 1/√(1 − x^2) on the interval [−1, 1]. That is, we show that

Orthogonal Property of the Chebyshev Polynomials

    ∫_{−1}^{1} T_m(x) T_n(x) / √(1 − x^2) dx = { 0    if m ≠ n
                                               { π/2  if m = n ≠ 0
                                               { π    if m = n = 0.
First, consider m ≠ n:

    ∫_{−1}^{1} T_m(x) T_n(x) / √(1 − x^2) dx = ∫_{−1}^{1} cos(m arccos x) cos(n arccos x) / √(1 − x^2) dx.

Since θ = arccos x gives dθ = −dx / √(1 − x^2),
the above integral becomes

    −∫_π^0 cos mθ cos nθ dθ = ∫_0^π cos mθ cos nθ dθ.

Now, cos mθ cos nθ can be written as

    cos mθ cos nθ = (1/2)[cos(m + n)θ + cos(m − n)θ].

So, for m ≠ n,

    ∫_0^π cos mθ cos nθ dθ = (1/2) ∫_0^π cos(m + n)θ dθ + (1/2) ∫_0^π cos(m − n)θ dθ
                           = (1/2) [sin(m + n)θ / (m + n)]_0^π + (1/2) [sin(m − n)θ / (m − n)]_0^π
                           = 0.
Similarly, it can be shown [Exercise] that

    ∫_{−1}^{1} T_n^2(x) / √(1 − x^2) dx = π/2   for n ≥ 1.

10.4 The Least-Squares Approximation Using Chebyshev Polynomials
As before, the Chebyshev polynomials can be used to find least-squares approximations to a function f(x), as stated below. In Algorithm 10.6, set

    w(x) = 1/√(1 − x^2),   [a, b] = [−1, 1],   and   φ_k(x) = T_k(x).

Then, using the orthogonal property of the Chebyshev polynomials, it is easy to see that
    C_0 = ∫_{−1}^{1} T_0^2(x) / √(1 − x^2) dx = π,
    C_k = ∫_{−1}^{1} T_k^2(x) / √(1 − x^2) dx = π/2,   k = 1, ..., n.

Thus, from Step 2 of Algorithm 10.6, we have

    a_0 = (1/π) ∫_{−1}^{1} f(x) / √(1 − x^2) dx   and   a_i = (2/π) ∫_{−1}^{1} f(x) T_i(x) / √(1 − x^2) dx,   i = 1, ..., n.

The least-squares approximating polynomial P_n(x) of f(x) using Chebyshev polynomials is given by:

    P_n(x) = a_0 T_0(x) + a_1 T_1(x) + ··· + a_n T_n(x),

where

    a_i = (2/π) ∫_{−1}^{1} f(x) T_i(x) / √(1 − x^2) dx,   i = 1, ..., n,   and   a_0 = (1/π) ∫_{−1}^{1} f(x) / √(1 − x^2) dx.
Example 10.9
Find a linear least-squares approximation of f(x) = e^x using Chebyshev polynomials.
Here

    P_1(x) = a_0 φ_0(x) + a_1 φ_1(x) = a_0 T_0(x) + a_1 T_1(x) = a_0 + a_1 x,

where

    a_0 = (1/π) ∫_{−1}^{1} e^x / √(1 − x^2) dx ≈ 1.2660,
    a_1 = (2/π) ∫_{−1}^{1} x e^x / √(1 − x^2) dx ≈ 1.1303.

Thus, P_1(x) = 1.2660 + 1.1303x.
Accuracy Check: P_1(0.5) = 1.8312; e^{0.5} = 1.6487.

    Relative error = |1.6487 − 1.8312| / 1.6487 = 0.1106.
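A numerical check of this example (a sketch assuming SciPy): the substitution x = cos t turns each weighted integral into a plain one, avoiding the endpoint singularities of 1/√(1 − x^2):

    import numpy as np
    from scipy.integrate import quad

    # int_{-1}^{1} g(x)/sqrt(1 - x^2) dx = int_0^pi g(cos t) dt  via  x = cos t.
    f = np.exp
    a0 = quad(lambda t: f(np.cos(t)), 0, np.pi)[0] / np.pi
    a1 = (2 / np.pi) * quad(lambda t: np.cos(t) * f(np.cos(t)), 0, np.pi)[0]
    print(round(a0, 4), round(a1, 4))   # 1.2661 1.1303 (a0 rounds to 1.2660 in the text)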
10.5 Monic Chebyshev Polynomials
Note that T_k(x) is a Chebyshev polynomial of degree k with leading coefficient 2^{k−1} for k ≥ 1. Thus we can generate a set of monic Chebyshev polynomials from the polynomials T_k(x) as follows:

• The monic Chebyshev polynomials T̃_k(x) are given by

    T̃_0(x) = 1,   T̃_k(x) = (1/2^{k−1}) T_k(x),   k ≥ 1.
• The k zeros of T̃_k(x) are easily calculated [Exercise]:

    x̃_j = cos((2j − 1)π / (2k)),   j = 1, 2, ..., k.

• The maximum or minimum values of T̃_k(x) occur [Exercise] at

    x̃_j = cos(jπ/k),   with   T̃_k(x̃_j) = (−1)^j / 2^{k−1},   j = 0, 1, ..., k.

10.6 Minimax Polynomial Approximations with Chebyshev Polynomials
As seen above, the Chebyshev polynomials can, of course, be used to find least-squares polynomial approximations. However, these polynomials have several other wonderful polynomial approximation properties. First, we state the minimax property of the Chebyshev polynomials.
Minimax Property of the Chebyshev Polynomials

If P_n(x) is any monic polynomial of degree n, then

    1/2^{n−1} = max_{x∈[−1,1]} |T̃_n(x)| ≤ max_{x∈[−1,1]} |P_n(x)|.

Moreover, equality holds when P_n(x) ≡ T̃_n(x).
Interpretation: The above result says that the absolute maximum of the monic Chebyshev polynomial T̃_n(x) of degree n over [−1, 1] is less than or equal to that of any other monic polynomial P_n(x) of degree n over the same interval.
Proof. By contradiction [Exercise].
10.6.1 Choosing the Interpolating Nodes with the Chebyshev Zeros
Recall from Chapter 6 that the error in polynomial interpolation of a function f(x) at the nodes x_0, x_1, ..., x_n by a polynomial P_n(x) of degree at most n is given by

    E = f(x) − P_n(x) = (f^{(n+1)}(ξ) / (n + 1)!) Ψ(x),   where   Ψ(x) = (x − x_0)(x − x_1) ··· (x − x_n).

The question is: how should the (n + 1) nodes x_0, x_1, ..., x_n be chosen so that |Ψ(x)| is minimized on [−1, 1]?
The answer follows from the above minimax property of the monic Chebyshev polynomials. Note that Ψ(x) is a monic polynomial of degree (n + 1), so by the minimax property we have

    max_{x∈[−1,1]} |T̃_{n+1}(x)| ≤ max_{x∈[−1,1]} |Ψ(x)|.

Thus:

• The maximum value of Ψ(x) = (x − x_0)(x − x_1) ··· (x − x_n) on [−1, 1] is smallest when x_0, x_1, ..., x_n are chosen as the (n + 1) zeros of the (n + 1)st-degree monic Chebyshev polynomial T̃_{n+1}(x); that is, when

    x_k = cos((2k + 1)π / (2(n + 1))),   k = 0, 1, ..., n.

• The smallest absolute maximum value of Ψ(x), with x_0, x_1, ..., x_n chosen as above, is 1/2^n.

10.6.2 Working with an Arbitrary Interval
If the interval is [a, b], different from [−1, 1], then the zeros of T̃_{n+1}(x) need to be shifted using the transformation

    x̃ = (1/2)[(b − a)x + (a + b)].
Example 10.10
Let the interpolating polynomial be of degree at most 2 and the interval be [1.5, 2]. The three zeros of T̃_3(x) in [−1, 1] are given by

    x̃_0 = cos(π/6) = √3/2,   x̃_1 = cos(3π/6) = cos(π/2) = 0,   x̃_2 = cos(5π/6) = −√3/2.

These zeros are to be shifted using the transformation:
    x_{new,i} = (1/2)[(2 − 1.5) x̃_i + (2 + 1.5)] = 1.75 + 0.25 x̃_i,   i = 0, 1, 2.
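In code, the shifted nodes take one line each; a sketch assuming NumPy (the function name cheb_nodes is ours):

    import numpy as np

    def cheb_nodes(n, a, b):
        """(n + 1) Chebyshev interpolation nodes shifted from [-1, 1] to [a, b]."""
        k = np.arange(n + 1)
        x = np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))   # zeros of monic T_{n+1}
        return 0.5 * ((b - a) * x + (a + b))              # shift to [a, b]

    print(cheb_nodes(2, 1.5, 2.0))   # approx [1.9665, 1.75, 1.5335]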
10.6.3 Use of Chebyshev Polynomials to Economize Power Series
Power Series Economization

Let P_n(x) = a_0 + a_1 x + ··· + a_n x^n be a polynomial of degree n obtained by truncating a power series expansion of a continuous function. The problem is to find a polynomial P_r(x) of degree r (< n) such that

    max_{x∈[−1,1]} |P_n(x) − P_r(x)|

is as small as possible. We first consider r = n − 1; that is, the problem of approximating P_n(x) by a polynomial P_{n−1}(x) of degree n − 1. If the total error (the sum of the truncation error and the error of this approximation) is still within an acceptable tolerance, we then consider approximating P_{n−1}(x) by a polynomial of degree n − 2, and the process is continued until the accumulated error exceeds the tolerance.
We now show how the minimax property of the Chebyshev polynomials can be gainfully used here. First note that (1/a_n) P_n(x) − P_{n−1}(x) is a monic polynomial of degree n for any polynomial P_{n−1}(x) of degree n − 1. So, by the minimax property, we have

    max_{x∈[−1,1]} |(1/a_n) P_n(x) − P_{n−1}(x)| ≥ max_{x∈[−1,1]} |T̃_n(x)| = 1/2^{n−1}.

Thus, if we choose

    P_{n−1}(x) = P_n(x) − a_n T̃_n(x),

then the minimum value of the maximum error of approximating P_n(x) by P_{n−1}(x) over [−1, 1] is attained:

    max_{−1≤x≤1} |P_n(x) − P_{n−1}(x)| = |a_n| / 2^{n−1}.

If this quantity |a_n| / 2^{n−1}, plus the error due to the truncation of the power series, is within the permissible tolerance ε, we can then repeat the process by constructing P_{n−2}(x) from P_{n−1}(x) as above. The process can be continued until the accumulated error exceeds the error tolerance ε.
So the process can be summarized as follows:

Power Series Economization Process by Chebyshev Polynomials

Inputs: (i) f(x) - an (n + 1)-times differentiable function on [a, b]; (ii) n - a positive integer.

Output: An economized power series of f(x).

Step 1. Obtain P_n(x) = a_0 + a_1 x + ··· + a_n x^n by truncating the power series expansion of f(x).

Step 2. Find the truncation error

    E_TR(x) = remainder after the x^n term = f^{(n+1)}(ξ(x)) x^{n+1} / (n + 1)!,

and compute an upper bound of |E_TR(x)| for −1 ≤ x ≤ 1.

Step 3. Compute P_{n−1}(x):

    P_{n−1}(x) = P_n(x) − a_n T̃_n(x).

Step 4. Check whether the maximum total error |E_TR| + |a_n|/2^{n−1} is less than ε. If so, approximate P_{n−1}(x) by P_{n−2}(x):

    P_{n−2}(x) = P_{n−1}(x) − a_{n−1} T̃_{n−1}(x).

Step 5. Compute the error of the current approximation:

    max_{−1≤x≤1} |P_{n−1}(x) − P_{n−2}(x)| = |a_{n−1}| / 2^{n−2}.

Step 6. Compute the accumulated error:

    |E_TR| + |a_n| / 2^{n−1} + |a_{n−1}| / 2^{n−2}.

If this error is still less than ε, continue until the accumulated error exceeds ε.
Example 10.11
Find the economized power series for cos x = 1 − x^2/2! + x^4/4! − ··· for 0 ≤ x ≤ 1 with a tolerance ε = 0.05.
Step 1. Let's first try n = 4:

    P_4(x) = 1 − (1/2)x^2 + 0·x^3 + x^4/4!.

Step 2. Truncation error:

    E_TR(x) = f^{(5)}(ξ(x)) x^5 / 5!.

Since f^{(5)}(x) = −sin x and |sin x| ≤ 1 for all x, we have

    |E_TR(x)| ≤ (1/120)(1)(1)^5 = 0.0083.

Step 3. Compute P_3(x):

    P_3(x) = P_4(x) − a_4 T̃_4(x)
           = 1 − (1/2)x^2 + x^4/4! − (1/4!)(x^4 − x^2 + 1/8)
           = −0.4583x^2 + 0.9948.

Step 4. Maximum absolute accumulated error = maximum absolute truncation error + maximum absolute Chebyshev error = 0.0083 + (1/24)·(1/8) = 0.0135, which is less than ε = 0.05, so P_3(x) is acceptable. (The next economization step would remove the x^2 term and add |a_2|/2 = 0.4583/2 ≈ 0.2292 to the error, exceeding the tolerance, so we stop here.) Thus, the economized power series of f(x) = cos x for 0 ≤ x ≤ 1 with error tolerance ε = 0.05 is the polynomial

    P_3(x) = −0.4583x^2 + 0.9948

(in fact a quadratic, since the coefficients of x^3 and x^4 vanish after the Chebyshev step).
Accuracy Check: We check below how accurate this approximation is for three values of x in 0 ≤ x ≤ 1: the two extreme values x = 0 and x = 1, and the intermediate value x = 0.5.

For x = 0.5: cos(0.5) = 0.8776, P_3(0.5) = 0.8802, Error = |cos(0.5) − P_3(0.5)| = |−0.0026| = 0.0026.

For x = 0: cos(0) = 1, P_3(0) = 0.9948, Error = |cos(0) − P_3(0)| = 0.0052.
For x = 1: cos(1) = 0.5403, P_3(1) = 0.5365, Error = |cos(1) − P_3(1)| = 0.0038.
Remark: These errors are all much less than 0.05, and indeed below the theoretical error bound of 0.0135.
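A sketch of the economization step of this example (assuming NumPy; T4_monic is the monic Chebyshev polynomial T̃_4 = x^4 − x^2 + 1/8 written out explicitly):

    from numpy.polynomial import Polynomial

    P4 = Polynomial([1.0, 0.0, -0.5, 0.0, 1.0 / 24.0])      # 1 - x^2/2 + x^4/4!
    T4_monic = Polynomial([1.0 / 8.0, 0.0, -1.0, 0.0, 1.0]) # monic T~_4
    P3 = P4 - P4.coef[4] * T4_monic                          # drop the degree-4 term
    print(P3.coef.round(4))        # [ 0.9948  0. -0.4583  0.  0. ]
    print(abs(P4.coef[4]) / 2**3)  # Chebyshev step error |a_4| / 2^{n-1} = 0.0052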