Efficient Algorithms for Polynomial Interpolation and Numerical Differentiation*

By Fred T. Krogh
Abstract. Algorithms based on Newton's interpolation formula are given for: simple polynomial interpolation, polynomial interpolation with derivatives supplied at some of the data points, interpolation with piecewise polynomials having a continuous first derivative, and numerical differentiation. These algorithms have all the advantages of the corresponding algorithms based on Aitken-Neville interpolation, and are more efficient.
Introduction. The polynomial of nth degree passing through the points (x_i, f(x_i)), i = 0, 1, ..., n, is given by the Newton interpolation formula

(1)  P_n(x) = f[x_0] + π_1 f[x_0, x_1] + π_2 f[x_0, x_1, x_2] + ... + π_n f[x_0, x_1, ..., x_n] ,

where π_i = (x - x_0)(x - x_1) ... (x - x_{i-1}) and f[x_0, x_1, ..., x_i] is the ith divided difference of f defined by**

(2)  f[x_k] = f(x_k) ,

     f[x_0, x_k] = (f[x_0] - f[x_k]) / (x_0 - x_k) ,   k ≥ 1 ,

     f[x_0, x_1, ..., x_i, x_k] = (f[x_0, x_1, ..., x_i] - f[x_0, x_1, ..., x_{i-1}, x_k]) / (x_i - x_k) ,   k > i ≥ 1 .
Although Newton's interpolation formula is well known, it is not widely used, owing to the popular misconception that it is inefficient. The methods usually recommended for interpolation are those of Aitken [1], Neville [2], or Lagrange's formula. Algorithms which make use of derivative values for interpolation are given in [2], [3], and [4]; and algorithms for numerical differentiation are given in [3], [5], and [6]. In all cases the algorithms given here are more efficient than other algorithms in the literature.
Simple Interpolation. The algorithms given below follow naturally from Eqs. (1) and (2). In these algorithms
Received February 12, 1968, revised June 25, 1969.
AMS Subject Classifications. Primary 6515, 6555.
Key Words and Phrases. Interpolation, numerical differentiation, Newton's interpolation formula, Aitken interpolation, Neville interpolation, Lagrange interpolation, Hermite interpolation, spline function.
* This work was done while the author was employed by TRW Systems.
** The recursion used in this definition gives the same results as obtained with the more usual f[x_0, x_1, ..., x_k] = {f[x_0, ..., x_{k-1}] - f[x_1, ..., x_k]} / (x_0 - x_k). It has the minor advantages of letting one save the divided differences used in Eq. (1) with no extra storage requirement, and of giving indexes which do not decrease in the implementation of the algorithms (a convenience with some FORTRAN compilers).
License or copyright restrictions may apply to redistribution; see http://www.ams.org/journal-terms-of-use
(3)  V_{0,k} = f(x_k) ,
     V_{i,k} = f[x_0, x_1, ..., x_{i-1}, x_k] ,   i = 1, 2, ... ,
     ω_k = x - x_k ,
     π_0 = 1 ,
     π_k = (x - x_0)(x - x_1) ... (x - x_{k-1}) .
In the algorithms a statement of the form x = f(x, y, ...) means compute f using the numbers in locations x, y, ..., and store the result in location x.
Algorithm I.

    π_0 = 1 ,
    V_{0,0} = f(x_0) ,
    P_0 = V_{0,0} ,

    V_{0,k} = f(x_k) ,
    V_{i+1,k} = (V_{i,i} - V_{i,k}) / (x_i - x_k) ,   i = 0, 1, ..., k - 1 ,
    ω_{k-1} = x - x_{k-1} ,
    π_k = ω_{k-1} π_{k-1} ,
    P_k(x) = P_{k-1}(x) + π_k V_{k,k} ,               k = 1, 2, ..., n .
Note that in a computer program the ω's, π's, and P's can be scalars, since in each case they are only used immediately after being computed, and similarly the array V can be replaced by a vector c with c_ν = V_{ν,ν}.

An important feature of the Newton (or Aitken-Neville) algorithm is that one can select the value of n based on the convergence of the sequence P_k(x), k = 0, 1, ..., without any additional computation (except for that involved in selecting n). If one is taking advantage of this fact, the x_k should be selected so that |x - x_{k+1}| ≥ |x - x_k|, cf. [7, p. 50].
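As a concrete illustration, Algorithm I can be sketched in Python using the scalar storage just described (the function and variable names here are mine, not the paper's):

```python
def newton_interp(xs, fs, x):
    """Algorithm I sketch: evaluate the polynomial through (xs[k], fs[k])
    at x, returning the whole sequence P_0(x), P_1(x), ..., P_n(x)."""
    n = len(xs) - 1
    c = [fs[0]]                # c[i] holds V_{i,i} = f[x_0, ..., x_i]
    pi_k = 1.0                 # pi_0 = 1
    p = fs[0]                  # P_0 = V_{0,0}
    approx = [p]
    for k in range(1, n + 1):
        v = fs[k]                      # V_{0,k} = f(x_k)
        for i in range(k):             # V_{i+1,k} = (V_{i,i} - V_{i,k})/(x_i - x_k)
            v = (c[i] - v) / (xs[i] - xs[k])
        c.append(v)                    # V_{k,k}
        pi_k *= x - xs[k - 1]          # pi_k = omega_{k-1} * pi_{k-1}
        p += pi_k * v                  # P_k(x) = P_{k-1}(x) + pi_k V_{k,k}
        approx.append(p)
    return approx
```

Returning the full sequence of approximations is what makes the convergence test mentioned above available at no extra cost.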
Algorithm II.

    V_{0,0} = f(x_0) ,

    V_{0,k} = f(x_k) ,
    V_{i+1,k} = (V_{i,i} - V_{i,k}) / (x_i - x_k) ,   i = 0, 1, ..., k - 1 ,   k = 1, 2, ..., n ,

    V_{k-1,k-1} = V_{k-1,k-1} + (x - x_{k-1}) V_{k,k} ,   k = n, n - 1, ..., 1 ,

    P_n(x) = V_{0,0} .
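A Python sketch of Algorithm II (names are mine): the divided differences are built once, and the Newton form is then evaluated by nesting, so a repeated evaluation at a new x needs only the second loop.

```python
def newton_interp_nested(xs, fs, x):
    """Algorithm II sketch: build the divided differences V_{k,k}, then
    evaluate the Newton form by nesting, k = n, n-1, ..., 1."""
    n = len(xs) - 1
    v = list(fs)                       # v[k] will become V_{k,k}
    for k in range(1, n + 1):
        t = v[k]                       # V_{0,k} = f(x_k)
        for i in range(k):             # v[i] already holds V_{i,i}
            t = (v[i] - t) / (xs[i] - xs[k])
        v[k] = t
    for k in range(n, 0, -1):          # V_{k-1,k-1} += (x - x_{k-1}) V_{k,k}
        v[k - 1] += (x - xs[k - 1]) * v[k]
    return v[0]                        # P_n(x)
```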
Hermite Interpolation. Equations (1) and (2) hold (see e.g., Steffensen [8]) for x_j = x_{j+1} = ... = x_{j+s} provided one avoids division by zero in Eq. (2) and defines

(4)  f[x_j, x_j, ..., x_j] = f^(s)(x_j) / s! ,

where x_j is repeated s + 1 times. To simplify the exposition we assume the data is given in the form
    k                x_k    y_k
    -------------------------------
    0                ξ_0    a_{0,0}
    1                ξ_0    a_{0,1}
    ...              ...    ...
    q_0              ξ_0    a_{0,q_0}
    q_0 + 1          ξ_1    a_{1,0}
    ...              ...    ...
    q_0 + q_1 + 1    ξ_1    a_{1,q_1}
    q_0 + q_1 + 2    ξ_2    a_{2,0}
    ...              ...    ...

where

(5)  a_{i,s} = (1/s!) (d^s f / dx^s) |_{x = ξ_i} .
In Algorithm III

(6)  V_{0,k} = y_k = f[ξ_r, ξ_r, ..., ξ_r] ,
     V_{i,k} = f[x_0, x_1, ..., x_{i-1}, ξ_r, ξ_r, ..., ξ_r] ,   i > 0 ,

where ξ_r is repeated s + 1 times, and r and s are the subscripts of the a associated with y_k.
Algorithm III.

    π_0 = 1 ,
    c_0 = f(x_0) ,
    P_0 = c_0 ,

    V_{0,k} = y_k ,
    V_{i+1,k} = (V_{i,i} - V_{i,k}) / (x_i - x_k) ,       s = 0 ,
    V_{i+1,k} = (V_{i+1,k-1} - V_{i,k}) / (x_i - x_k) ,   s > 0 ,
        i = 0, 1, ..., k - 1 - s   (If s ≥ k, do nothing.) ,
    c_k = V_{k-s,k} ,
    ω_{k-1} = x - x_{k-1} ,
    π_k = ω_{k-1} π_{k-1} ,
    P_k(x) = P_{k-1}(x) + π_k c_k ,               k = 1, 2, ..., n .
In the implementation of this algorithm V_{i,k} can be replaced by d_i. Algorithm II can be extended to do Hermite interpolation in a similar way.
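A Python sketch of Algorithm III under the data layout above; each data triple (x_k, y_k, s) supplies y_k = f^(s)(ξ_r)/s! in table order. The triple format and all names are my own.

```python
def hermite_interp(points, x):
    """Algorithm III sketch.  `points` is a list of (x_k, y_k, s) triples in
    table order, where y_k = f^(s)(xi_r)/s! and s = 0, 1, ... counts the
    repeats of each abscissa xi_r.  Returns P_n(x)."""
    n = len(points) - 1
    xs = [p[0] for p in points]
    c = []                     # c[k] = V_{k-s,k} = f[x_0, ..., x_k] (with repeats)
    prev = []                  # previous column, V_{i,k-1}
    pi_k = 1.0
    p = 0.0
    for k in range(n + 1):
        xk, yk, s = points[k]
        col = [yk]                         # V_{0,k} = y_k
        for i in range(k - s):             # i = 0, 1, ..., k-1-s
            if s == 0:
                num = c[i] - col[i]        # V_{i,i} - V_{i,k}
            else:
                num = prev[i + 1] - col[i] # V_{i+1,k-1} - V_{i,k}
            col.append(num / (xs[i] - xk))
        c.append(col[k - s])               # c_k = V_{k-s,k}
        if k == 0:
            p = c[0]                       # P_0 = c_0
        else:
            pi_k *= x - xs[k - 1]          # pi_k = omega_{k-1} pi_{k-1}
            p += pi_k * c[k]               # P_k = P_{k-1} + pi_k c_k
        prev = col
    return p
```

For example, supplying f and f' at two abscissae gives the cubic Hermite interpolant.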
An Interpolating Function in C1. If n = 2m - 1 (m > 1), and the x_k are always selected so that m of them are on either side of x, then it is easy to construct an interpolating function which is composed of nth degree polynomials between tabular points and has a continuous first derivative where the polynomials are joined.

Let the table be given by (ξ_i, f(ξ_i)), i = 1, 2, ..., N, with ξ_i < ξ_{i+1}. Define P_{n-1}(x) and P*_{n-1}(x) as the polynomials passing through points with indices i = k - m + 1, k - m + 2, ..., k + m - 1 and i = k - m + 2, k - m + 3, ..., k + m respectively, where ξ_k ≤ x < ξ_{k+1}. If x is so close to one end of the table that there are not m points on either side of x, then P_{n-1}(x) = P*_{n-1}(x) = the polynomial passing through the n points nearest that end of the table. It is easy to verify that the function

(7)  C_n(x) = P_{n-1}(x) + ((x - ξ_k) / (ξ_{k+1} - ξ_k)) [P*_{n-1}(x) - P_{n-1}(x)]

has the desired properties.
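To make Eq. (7) concrete, here is a small Python sketch that forms P_{n-1} and P*_{n-1} by ordinary Newton interpolation and blends them. It assumes m tabular points exist on each side of x (the end-of-table case is omitted), and all names are mine.

```python
import bisect

def _newton(xs, fs, x):
    # plain Newton interpolation (Algorithm I of the paper)
    c, p, pi_k = [fs[0]], fs[0], 1.0
    for k in range(1, len(xs)):
        v = fs[k]
        for i in range(k):
            v = (c[i] - v) / (xs[i] - xs[k])
        c.append(v)
        pi_k *= x - xs[k - 1]
        p += pi_k * v
    return p

def c1_interp(xi, fi, x, m):
    """Blend per Eq. (7).  Requires m table points on each side of x,
    i.e. k - m + 1 >= 0 and k + m <= len(xi) - 1."""
    k = bisect.bisect_right(xi, x) - 1          # xi[k] <= x < xi[k+1]
    lo = k - m + 1
    left = _newton(xi[lo:k + m], fi[lo:k + m], x)               # indices k-m+1 .. k+m-1
    right = _newton(xi[lo + 1:k + m + 1], fi[lo + 1:k + m + 1], x)  # indices k-m+2 .. k+m
    t = (x - xi[k]) / (xi[k + 1] - xi[k])
    return left + t * (right - left)
```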
Let x_{2j} = ξ_{k-j}, x_{2j+1} = ξ_{k+j+1}, j = 0, 1, ..., m - 1. With this labeling of the x's, Eq. (1) yields

(8)  P*_{n-1}(x) - P_{n-1}(x) = (x - x_0)(x - x_1) ... (x - x_{n-2}) {f[x_0, ..., x_{n-2}, x_n] - f[x_0, ..., x_{n-1}]}
and thus

(9)  C_n(x) = P_{n-1}(x) + (x - x_0) ... (x - x_{n-2}) ((x - x_0) / (x_1 - x_0)) {f[x_0, x_1, ..., x_{n-2}, x_n] - f[x_0, x_1, ..., x_{n-1}]} .

If the x_k are selected as indicated above, then the function P_n(x) defined by the following algorithm has a continuous first derivative.
Algorithm IV.

    n = 2m - 1   if there are m points on both sides of x ,
    n = 2m - 2   otherwise .

Do everything as in Algorithm I, except replace ω_{k-1} = x - x_{k-1} in Algorithm I with

    ω_{k-1} = x - x_{k-1}                                  if k < 2m - 1 ,
    ω_{k-1} = (x_k - x_{k-1}) ((x - x_0) / (x_1 - x_0))    if k = 2m - 1 .
Numerical Differentiation. The algorithm for numerical differentiation is easily obtained by repeatedly differentiating Eq. (1).

Algorithm V. Start by performing Algorithm I (or IV) with V_{ν,ν} replaced by c_ν. (Or, if extra derivatives are available, perform Algorithm III.) In any case save the ω_k's and π_k's, and then do the following:
    c_0 = P_n(x) ,

    π_i = ω_{k+i-1} π_{i-1} + π_i ,
    c_k = c_k + π_i c_{k+i} ,        i = 1, 2, ..., n - k ,    k = 1, 2, ..., n - 1 .

    (If n ≤ 1, do nothing.)

The c_k's give the coefficients of the interpolating polynomial expanded about x. (Of course, one need not compute all of them.) That is

(10)  P_n(ξ) = Σ_{k=0}^{n} c_k (ξ - x)^k

and

(11)  (d^k P_n(ξ) / dξ^k) |_{ξ=x} = k! c_k .
Lyness and Moler [6] give a brief discussion of the errors in a process such as this,
and point out that iterative refinement can be used to reduce the rounding error
introduced by the algorithm.
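The two stages can be sketched in Python as follows (names mine): Algorithm I is run first, saving the ω's and π's, and the update loop of Algorithm V then converts the divided differences into Taylor coefficients about x.

```python
def newton_derivatives(xs, fs, x):
    """Algorithm I followed by Algorithm V: returns [c_0, ..., c_n] with
    P_n(xi) = sum_k c_k (xi - x)^k, so the k-th derivative at x is k! c_k."""
    n = len(xs) - 1
    c = [fs[0]]
    omega = []
    pi = [1.0]
    p = fs[0]
    for k in range(1, n + 1):          # Algorithm I, saving omegas and pis
        v = fs[k]
        for i in range(k):
            v = (c[i] - v) / (xs[i] - xs[k])
        c.append(v)                    # V_{k,k} stored as c_k
        omega.append(x - xs[k - 1])    # omega_{k-1}
        pi.append(omega[-1] * pi[-1])  # pi_k = omega_{k-1} pi_{k-1}
        p += pi[k] * v
    c[0] = p                           # c_0 = P_n(x)
    for k in range(1, n):              # Algorithm V proper (empty if n <= 1)
        for i in range(1, n - k + 1):
            pi[i] = omega[k + i - 1] * pi[i - 1] + pi[i]
            c[k] += pi[i] * c[k + i]
    return c
```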
Comparison of Methods for Simple Interpolation. Table 1 gives the number of arithmetic operations required by the algorithms for simple polynomial interpolation. The computation in the Lagrangian case is assumed to be carried out according to the formula

(12)  p_n(x) = (Π_{i=0}^{n} ω_i) Σ_{k=0}^{n} f(x_k) / (A_{nk} ω_k) ,

where ω_i = x - x_i and A_{nk} = Π'_{j=0}^{n} (x_k - x_j). (The prime indicates that the term with j = k is not included in the product.) Formula (12) is an efficient computational form of Lagrange's formula. It has the drawback that exceptional cases may cause overflow or underflow. Lagrange's formula is most efficient if polynomial interpolation of fixed degree is to be performed on several components of a vector-valued function.
Table 1. Number of Operations Required to Compute p_n(x)

                           Number of      Number of          Number of Additions
                           Divisions      Multiplications    and Subtractions

    Lagrange               n + 1          (n + 1)(n + 2)     (n + 1)(n + 2)
    Aitken or Neville***   ½n(n + 1)      n(n + 1)           (n + 1)²
    Algorithm I            ½n(n + 1)      2n                 (n + 1)² + n - 1
    Algorithm II           ½n(n + 1)      n                  (n + 1)² + n - 1

    *** See the footnote following the references.
The arithmetic operations required by the interpolation algorithm are an increasingly less important factor in determining the efficiency of a subroutine. The optimal algorithm in a given case will depend on the value of n and programming considerations. For example, Algorithm I requires more multiplications than Algorithm II, but it has simpler indexing and gives the difference between successive approximations.
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, California 91103
1. A. C. Aitken, "On interpolation by iteration of proportional parts, without the use of differences," Proc. Edinburgh Math. Soc., v. 3, 1932, pp. 56-76.
2. E. H. Neville, "Iterative interpolation," J. Indian Math. Soc., v. 20, 1934, pp. 87-120.
3. M. Gershinsky & D. A. Levine, "Aitken-Hermite interpolation," J. Assoc. Comput. Mach., v. 11, 1964, pp. 352-356. MR 29 #2938.
4. A. C. R. Newbery, "Interpolation by algebraic and trigonometric polynomials," Math. Comp., v. 20, 1966, pp. 597-599. MR 34 #3752.
5. D. B. Hunter, "An iterative method of numerical differentiation," Comput. J., v. 3, 1960/61, pp. 270-271. MR 22 #8657.
6. J. N. Lyness & C. B. Moler, "Van der Monde systems and numerical differentiation," Numer. Math., v. 8, 1966, pp. 458-464.
7. F. B. Hildebrand, Introduction to Numerical Analysis, McGraw-Hill, New York, 1956. MR 17, 788.
8. J. F. Steffensen, Interpolation, Chelsea, New York, 1950. MR 12, 164.
9. L. B. Winrich, "Note on a comparison of evaluation schemes for the interpolating polynomial," Comput. J., v. 12, 1969, pp. 154-155. (For comparison with the results given in Table 1 of this reference, our Algorithm II involves n(n + 1) subtractions and ½n(n + 1) divisions for setup, and n additions, n subtractions, and n multiplications for each evaluation.)
*** This is for the usual procedure which is based on linear interpolation with Lagrange's formula. The referee points out that if this interpolation is given in the form of Newton's formula, the algorithm requires ½n(n + 1) fewer multiplications and ½n(n + 1) more additions. At the same time the round-off characteristics are improved. Thus for Aitken's algorithm:

    P_{0,0} = f(x_0) ,      ω_0 = x - x_0 ,
    P_{k,0}(x) = f(x_k) ,   ω_k = x - x_k ,

    P_{k,i+1}(x) = (ω_k P_{i,i} - ω_i P_{k,i}) / (x_i - x_k)             (Lagrange)
                 = P_{i,i} + (ω_i / (x_i - x_k)) [P_{i,i} - P_{k,i}] ,   (Newton)
        i = 0, 1, ..., k - 1 ,   k = 1, 2, ..., n ,

    p_n(x) = P_{n,n} .