Discrete Least Squares Approximations

Jim Lambers
COS 702
Spring Semester 2010-11
Lecture 1 Notes
Discrete Least Squares Approximations
One of the most fundamental problems in science and engineering is data fitting: constructing a function that, in some sense, conforms to given data points. Two such data-fitting techniques are polynomial interpolation and piecewise polynomial interpolation. Interpolation techniques, of any kind, construct functions that agree exactly with the data. That is, given points $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$, interpolation yields a function $f(x)$ such that $f(x_i) = y_i$ for $i = 1, 2, \ldots, m$.

However, fitting the data exactly may not be the best approach to describing the data with a function. We have seen that high-degree polynomial interpolation can yield oscillatory functions that behave very differently from a smooth function from which the data is obtained. Also, it may be pointless to try to fit data exactly, for if it is obtained by previous measurements or other computations, it may be erroneous. Therefore, we consider revising our notion of what constitutes a "best fit" of given data by a function.
One alternative approach to data fitting is to solve the minimax problem, which is the problem of finding a function $f(x)$ of a given form for which
$$\max_{1 \le i \le m} |f(x_i) - y_i|$$
is minimized. However, this is a very difficult problem to solve.
Another approach is to minimize the total absolute deviation of $f(x)$ from the data. That is, we seek a function $f(x)$ of a given form for which
$$\sum_{i=1}^m |f(x_i) - y_i|$$
is minimized. However, we cannot apply standard minimization techniques to this function because, like the absolute value function that it employs, it is not differentiable.
This defect is overcome by considering the problem of finding $f(x)$ of a given form for which
$$\sum_{i=1}^m [f(x_i) - y_i]^2$$
is minimized. This is known as the least squares problem. We will first show how this problem is solved for the case where $f(x)$ is a linear function of the form $f(x) = a_1 x + a_0$, and then generalize this solution to other types of functions.
When $f(x)$ is linear, the least squares problem is the problem of finding constants $a_0$ and $a_1$ such that the function
$$E(a_0, a_1) = \sum_{i=1}^m (a_1 x_i + a_0 - y_i)^2$$
is minimized.
In order to minimize this function of $a_0$ and $a_1$, we must compute its partial derivatives with respect to $a_0$ and $a_1$. This yields
$$\frac{\partial E}{\partial a_0} = \sum_{i=1}^m 2(a_1 x_i + a_0 - y_i), \qquad \frac{\partial E}{\partial a_1} = \sum_{i=1}^m 2(a_1 x_i + a_0 - y_i) x_i.$$
At a minimum, both of these partial derivatives must be equal to zero. This yields the system of linear equations
$$m a_0 + \left( \sum_{i=1}^m x_i \right) a_1 = \sum_{i=1}^m y_i,$$
$$\left( \sum_{i=1}^m x_i \right) a_0 + \left( \sum_{i=1}^m x_i^2 \right) a_1 = \sum_{i=1}^m x_i y_i.$$
These equations are called the normal equations.
Using the formula for the inverse of a $2 \times 2$ matrix,
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix},$$
we obtain the solutions
$$a_0 = \frac{\left( \sum_{i=1}^m x_i^2 \right) \left( \sum_{i=1}^m y_i \right) - \left( \sum_{i=1}^m x_i \right) \left( \sum_{i=1}^m x_i y_i \right)}{m \sum_{i=1}^m x_i^2 - \left( \sum_{i=1}^m x_i \right)^2},$$
$$a_1 = \frac{m \sum_{i=1}^m x_i y_i - \left( \sum_{i=1}^m x_i \right) \left( \sum_{i=1}^m y_i \right)}{m \sum_{i=1}^m x_i^2 - \left( \sum_{i=1}^m x_i \right)^2}.$$
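These closed-form expressions translate directly into code. The following is a minimal Python sketch (the function name `linear_least_squares` is our own, not from the notes); it assumes the two sequences have equal length $m \ge 2$ and that the $x$-values are not all equal, so the denominator is nonzero.

    def linear_least_squares(x, y):
        """Fit y = a1*x + a0 in the least-squares sense via the
        closed-form solution of the normal equations."""
        m = len(x)
        sum_x = sum(x)
        sum_y = sum(y)
        sum_x2 = sum(xi * xi for xi in x)
        sum_xy = sum(xi * yi for xi, yi in zip(x, y))
        denom = m * sum_x2 - sum_x ** 2  # zero only if all x-values coincide
        a0 = (sum_x2 * sum_y - sum_x * sum_xy) / denom
        a1 = (m * sum_xy - sum_x * sum_y) / denom
        return a0, a1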
Example. We wish to find the linear function $y = a_1 x + a_0$ that best approximates the data shown in Table 1, in the least-squares sense. Using the summations
$$\sum_{i=1}^m x_i = 56.2933, \quad \sum_{i=1}^m y_i = 73.8373, \quad \sum_{i=1}^m x_i^2 = 380.5426, \quad \sum_{i=1}^m x_i y_i = 485.9487,$$
we obtain
$$a_0 = \frac{380.5426 \cdot 73.8373 - 56.2933 \cdot 485.9487}{10 \cdot 380.5426 - 56.2933^2} = \frac{742.5703}{636.4906} = 1.1667,$$
$$a_1 = \frac{10 \cdot 485.9487 - 56.2933 \cdot 73.8373}{10 \cdot 380.5426 - 56.2933^2} = \frac{702.9438}{636.4906} = 1.1044.$$
     i     x_i       y_i
     1    2.0774    3.3123
     2    2.3049    3.8982
     3    3.0125    4.6500
     4    4.7092    6.5576
     5    5.5016    7.5173
     6    5.8704    7.0415
     7    6.2248    7.7497
     8    8.4431   11.0451
     9    8.7594    9.8179
    10    9.3900   12.2477

Table 1: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 10$, to be fit by a linear function
We conclude that the linear function that best fits this data in the least-squares sense is
$$y = 1.1044x + 1.1667.$$
The data, and this function, are shown in Figure 1. □

Figure 1: Data points $(x_i, y_i)$ (circles) and least-squares line (solid line)
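As a numerical check, running the `linear_least_squares` sketch from above on the data of Table 1 reproduces the hand computation (values are approximate):

    x = [2.0774, 2.3049, 3.0125, 4.7092, 5.5016,
         5.8704, 6.2248, 8.4431, 8.7594, 9.3900]
    y = [3.3123, 3.8982, 4.6500, 6.5576, 7.5173,
         7.0415, 7.7497, 11.0451, 9.8179, 12.2477]

    a0, a1 = linear_least_squares(x, y)
    print(a0, a1)  # approximately 1.1667 and 1.1044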
It is interesting to note that if we define the $m \times 2$ matrix $A$, the 2-vector $\mathbf{a}$, and the $m$-vector $\mathbf{y}$ by
$$A = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_m \end{bmatrix}, \quad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix},$$
then $\mathbf{a}$ is the solution to the system of equations
$$A^T A \mathbf{a} = A^T \mathbf{y}.$$
These equations are the normal equations defined earlier, written in matrix-vector form. They arise from the problem of finding the vector $\mathbf{a}$ such that
$$|A\mathbf{a} - \mathbf{y}|$$
is minimized, where, for any vector $\mathbf{u}$, $|\mathbf{u}|$ is the magnitude, or length, of $\mathbf{u}$.
In this case, this expression is equivalent to the square root of the expression we originally intended to minimize,
$$\sum_{i=1}^m (a_1 x_i + a_0 - y_i)^2,$$
but the normal equations also characterize the solution $\mathbf{a}$, an $n$-vector, to the more general linear least squares problem of minimizing $|A\mathbf{a} - \mathbf{y}|$ for any $m \times n$ matrix $A$, where $m \ge n$, whose columns are linearly independent.
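In practice, this general $m \times n$ problem is usually handed to a library routine that minimizes $|A\mathbf{a} - \mathbf{y}|$ directly, without forming $A^T A$. A minimal sketch with NumPy, applied to the linear fit above:

    import numpy as np

    x = np.array([2.0774, 2.3049, 3.0125, 4.7092, 5.5016,
                  5.8704, 6.2248, 8.4431, 8.7594, 9.3900])
    y = np.array([3.3123, 3.8982, 4.6500, 6.5576, 7.5173,
                  7.0415, 7.7497, 11.0451, 9.8179, 12.2477])

    # m-by-2 matrix A whose i-th row is [1, x_i]
    A = np.column_stack([np.ones_like(x), x])

    # lstsq minimizes |A a - y| directly, which is numerically
    # preferable to solving the normal equations explicitly.
    a, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
    print(a)  # approximately [1.1667, 1.1044]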
We now consider the problem of finding a polynomial of degree $n$ that gives the best least-squares fit. As before, let $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$ be given data points that need to be approximated by a polynomial of degree $n$. We assume that $n < m - 1$, for otherwise, we can use polynomial interpolation to fit the points exactly.
Let the least-squares polynomial have the form
$$p_n(x) = \sum_{j=0}^n a_j x^j.$$
Our goal is to minimize the sum of squares of the deviations in $p_n(x)$ from each $y$-value,
$$E(\mathbf{a}) = \sum_{i=1}^m [p_n(x_i) - y_i]^2 = \sum_{i=1}^m \left[ \sum_{j=0}^n a_j x_i^j - y_i \right]^2,$$
where $\mathbf{a}$ is a column vector of the unknown coefficients of $p_n(x)$,
$$\mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}.$$
Differentiating this function with respect to each $a_k$ yields
$$\frac{\partial E}{\partial a_k} = \sum_{i=1}^m 2 \left[ \sum_{j=0}^n a_j x_i^j - y_i \right] x_i^k, \quad k = 0, 1, \ldots, n.$$
Setting each of these partial derivatives equal to zero yields the system of equations
$$\sum_{j=0}^n \left( \sum_{i=1}^m x_i^{j+k} \right) a_j = \sum_{i=1}^m x_i^k y_i, \quad k = 0, 1, \ldots, n.$$
These are the normal equations. They are a generalization of the normal equations previously defined for the linear case, where $n = 1$. Solving this system yields the coefficients $\{a_j\}_{j=0}^n$ of the least-squares polynomial $p_n(x)$.
As in the linear case, the normal equations can be written in matrix-vector form
$$A^T A \mathbf{a} = A^T \mathbf{y},$$
where
$$A = \begin{bmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^n \\ 1 & x_2 & x_2^2 & \cdots & x_2^n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_m & x_m^2 & \cdots & x_m^n \end{bmatrix}, \quad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}.$$
The normal equations can be used to compute the coefficients of any linear combination of functions $\{\phi_j(x)\}_{j=0}^n$ that best fits data in the least-squares sense, provided that these functions are linearly independent. In this general case, the entries of the matrix $A$ are given by $a_{ij} = \phi_j(x_i)$, for $i = 1, 2, \ldots, m$ and $j = 0, 1, \ldots, n$.
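As an illustration of this general case, the following Python sketch (the name `lsq_fit` is hypothetical, introduced here for the example) builds $A$ from a list of basis functions $\phi_j$ and solves the normal equations; for a degree-$n$ polynomial fit, the basis functions are the monomials $1, x, \ldots, x^n$.

    import numpy as np

    def lsq_fit(basis, x, y):
        """Least-squares coefficients a_j for f(x) = sum_j a_j * phi_j(x).

        basis -- list of callables [phi_0, phi_1, ..., phi_n]
        x, y  -- m data points, with m > n
        """
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # Entry (i, j) of A is phi_j(x_i).
        A = np.column_stack([phi(x) for phi in basis])
        # Solve the normal equations A^T A a = A^T y.
        return np.linalg.solve(A.T @ A, A.T @ y)

    # Monomial basis 1, x, x^2 for a quadratic fit.
    quadratic_basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]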
Example. We wish to find the quadratic function $y = a_2 x^2 + a_1 x + a_0$ that best approximates the data shown in Table 2, in the least-squares sense. By defining
$$A = \begin{bmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ \vdots & \vdots & \vdots \\ 1 & x_{10} & x_{10}^2 \end{bmatrix}, \quad \mathbf{a} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}, \quad \mathbf{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{10} \end{bmatrix},$$
     i     x_i       y_i
     1    2.0774    2.7212
     2    2.3049    3.7798
     3    3.0125    4.8774
     4    4.7092    6.6596
     5    5.5016   10.5966
     6    5.8704    9.8786
     7    6.2248   10.5232
     8    8.4431   23.3574
     9    8.7594   24.0510
    10    9.3900   27.4827

Table 2: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 10$, to be fit by a quadratic function
and solving the normal equations
$$A^T A \mathbf{a} = A^T \mathbf{y},$$
we obtain the coefficients
$$a_0 = 4.7681, \quad a_1 = -1.5193, \quad a_2 = 0.4251,$$
and conclude that the quadratic function that best fits this data in the least-squares sense is
$$y = 0.4251x^2 - 1.5193x + 4.7681.$$
The data, and this function, are shown in Figure 2. □

Figure 2: Data points $(x_i, y_i)$ (circles) and quadratic least-squares fit (solid curve)
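As a check, applying the `lsq_fit` sketch from above with the quadratic monomial basis to the data of Table 2 should reproduce these coefficients (up to rounding):

    x = [2.0774, 2.3049, 3.0125, 4.7092, 5.5016,
         5.8704, 6.2248, 8.4431, 8.7594, 9.3900]
    y = [2.7212, 3.7798, 4.8774, 6.6596, 10.5966,
         9.8786, 10.5232, 23.3574, 24.0510, 27.4827]

    a = lsq_fit(quadratic_basis, x, y)
    print(a)  # approximately [4.7681, -1.5193, 0.4251]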
Least-squares fitting can also be used to fit data with functions that are not linear combinations of functions such as polynomials. Suppose we believe that given data points can best be matched to an exponential function of the form $y = be^{ax}$, where the constants $a$ and $b$ are unknown. Taking the natural logarithm of both sides of this equation yields
$$\ln y = \ln b + ax.$$
If we define $z = \ln y$ and $c = \ln b$, then the problem of fitting the original data points $\{(x_i, y_i)\}_{i=1}^m$ with an exponential function is transformed into the problem of fitting the data points $\{(x_i, z_i)\}_{i=1}^m$ with a linear function of the form $c + ax$, for unknown constants $a$ and $c$.
Similarly, suppose the given data is believed to approximately conform to a function of the form $y = bx^a$, where the constants $a$ and $b$ are unknown. Taking the natural logarithm of both sides of this equation yields
$$\ln y = \ln b + a \ln x.$$
If we define $z = \ln y$, $c = \ln b$ and $w = \ln x$, then the problem of fitting the original data points $\{(x_i, y_i)\}_{i=1}^m$ with a constant times a power of $x$ is transformed into the problem of fitting the data points $\{(w_i, z_i)\}_{i=1}^m$ with a linear function of the form $c + aw$, for unknown constants $a$ and $c$.
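Both transformations reduce to the linear fit developed earlier, so no new machinery is required. A minimal sketch reusing the `linear_least_squares` function from above; it assumes all $y$-values (and, for the power fit, all $x$-values) are positive so the logarithms are defined. Note that these fits minimize the squared deviations of $\ln y$, not of $y$ itself.

    import numpy as np

    def fit_exponential(x, y):
        """Fit y = b * exp(a*x) by a linear fit of ln(y) against x."""
        c, a = linear_least_squares(x, np.log(y))  # ln y = c + a*x, c = ln b
        return a, np.exp(c)

    def fit_power(x, y):
        """Fit y = b * x**a by a linear fit of ln(y) against ln(x)."""
        c, a = linear_least_squares(np.log(x), np.log(y))
        return a, np.exp(c)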
Example. We wish to find the exponential function $y = be^{ax}$ that best approximates the data shown in Table 3, in the least-squares sense. By defining
$$A = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_5 \end{bmatrix}, \quad \mathbf{c} = \begin{bmatrix} c \\ a \end{bmatrix}, \quad \mathbf{z} = \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_5 \end{bmatrix},$$
where $c = \ln b$ and $z_i = \ln y_i$ for $i = 1, 2, \ldots, 5$, and solving the normal equations
$$A^T A \mathbf{c} = A^T \mathbf{z},$$
we obtain the coefficients
$$a = 0.4040, \quad b = e^c = e^{-0.2652} = 0.7670,$$
     i     x_i       y_i
     1    2.0774    1.4509
     2    2.3049    2.8462
     3    3.0125    2.1536
     4    4.7092    4.7438
     5    5.5016    7.7260

Table 3: Data points $(x_i, y_i)$, for $i = 1, 2, \ldots, 5$, to be fit by an exponential function
and conclude that the exponential function that best fits this data in the least-squares sense is
$$y = 0.7670 e^{0.4040x}.$$
□
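Running the `fit_exponential` sketch from above on the data of Table 3 reproduces these values (up to rounding):

    x = [2.0774, 2.3049, 3.0125, 4.7092, 5.5016]
    y = [1.4509, 2.8462, 2.1536, 4.7438, 7.7260]

    a, b = fit_exponential(x, y)
    print(a, b)  # approximately 0.4040 and 0.7670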