SIAM REVIEW
Vol. 8, No. 3, July, 1966

ASSOCIATED POLYNOMIALS AND UNIFORM METHODS
FOR THE SOLUTION OF LINEAR PROBLEMS*

J. F. TRAUB†
TABLE OF CONTENTS
1. Introduction
2. Brief survey of results
3. The basic orthonormality relation
4. The inverse of the Vandermonde matrix
5. Interpolation
6. Approximation of linear functionals
7. The variation of the coefficients of a polynomial with the variation of its zeros
8. Solution of inhomogeneous difference and differential equations with constant coefficients
9. Divided differences
10. Applications of divided difference calculations
11. Some interesting formulas
12. The inverse transformation and Wronski's aleph function
13. A result concerning powers of zeros
14. Historical survey
Appendix. Notation
References
1. Introduction. To every polynomial P of degree n we associate a sequence of n + 1 polynomials of increasing degree which we call the associated polynomials of P. The associated polynomials depend in a particularly simple way on the coefficients of P. These polynomials have appeared in many guises in the literature, usually related to some particular application and most often going unrecognized. They have been called Horner polynomials and Laguerre polynomials. (For a historical discussion see §14.) Often what occurs is not an associated polynomial itself but a number which is an associated polynomial evaluated at a zero of P.

The properties of associated polynomials have never been investigated in themselves. We shall try to demonstrate that associated polynomials provide a useful unifying concept.

Although many of the results of this paper are new, we shall also present known results in our framework.
Let P(t) be a monic polynomial of degree n,

(1.1)    P(t) = Σ_{j=0}^{n} a_{n−j} t^j,    a_0 = 1.
The condition that P be monic is convenient rather than essential. The a_j and t are in general complex. Let the zeros of P be labeled p_j. In some of our applications we refer to the set of p_j as the sample point set. We assume here that the p_j are distinct; the uniform extension of our results to confluent zeros will be reported elsewhere.

* Received by the editors June 10, 1965, and in final revised form December 20, 1965.
† Mathematics Research Center, Bell Telephone Laboratories, Incorporated, Murray Hill, New Jersey.
Let u be a complex parameter and define polynomials A_j(u, P), j = 0, 1, ··· , n, by

(1.2)    (tP(t) − uP(u))/(t − u) = Σ_{j=0}^{n} A_{n−j}(u, P) t^j,    t ≠ u.

The remainder theorem shows that (1.2) is a valid definition. We verify below that A_j(u, P) is a polynomial in u of degree j. We call the set A_j(u, P), j = 0, 1, ··· , n, the associated polynomials of P. For many applications we can abbreviate A_j(u, P) by A_j(u).
Observe that the left side of (1.2) is the first divided difference of the polynomial tP(t) with respect to t and u. Hence we may write

(1.3)    (tP)[t, u] = Σ_{j=0}^{n} A_{n−j}(u, P) t^j,    t ≠ u.

In words, the generating function of the associated polynomials of P is the first divided difference of tP.
Multiplying (1.2) by t − u, comparing powers of t, and recalling that P is monic, yields the recursion

(1.4)    A_0(u) = 1,    A_j(u) = uA_{j−1}(u) + a_j,    j = 1, 2, ··· , n.

If we define A_{−1}(u) = 0, we can write

(1.5)    A_j(u) = uA_{j−1}(u) + a_j,    j = 0, 1, ··· , n.
From (1.5) we obtain the explicit formula

(1.6)    A_j(u) = Σ_{r=0}^{j} a_{j−r} u^r,    j = 0, 1, ··· , n.

This verifies that A_j(u) is a polynomial of degree j. In particular

(1.7)    A_n(u) = P(u).
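The recursion (1.5) is exactly one pass of synthetic division over the detached coefficients. A minimal Python sketch (the function name `associated` is ours, not the paper's):

```python
def associated(u, a):
    """Return [A_0(u), ..., A_n(u)] for the monic polynomial with
    coefficients a = [a_0, a_1, ..., a_n], a_0 = 1, using the
    recursion (1.5): A_j(u) = u*A_{j-1}(u) + a_j, A_{-1}(u) = 0."""
    A, prev = [], 0
    for a_j in a:
        prev = u * prev + a_j   # one synthetic-division step
        A.append(prev)
    return A

# For P(t) = t^2 - 3t + 2: A_0(u) = 1, A_1(u) = u - 3,
# A_2(u) = u^2 - 3u + 2 = P(u), illustrating (1.7).
```

By (1.7) the last entry is P(u), so the same loop evaluates the polynomial (Horner's rule) while the earlier entries are the quotient coefficients after division by t − u, as in (1.9) below.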
It is sometimes convenient to replace the generating function for A_j(u), j = 0, 1, ··· , n, by a simpler generating function for A_j(u), j = 0, 1, ··· , n − 1. Since

(tP)[t, u] = tP[t, u] + P(u),

(1.8)    P[t, u] = Σ_{j=0}^{n−1} A_{n−1−j}(u) t^j.

Since A_n(u) = P(u),

(1.9)    P(t)/(t − u) = Σ_{j=0}^{n−1} A_{n−1−j}(u) t^j + P(u)/(t − u).
Hence the quotient and remainder after division by a monic linear polynomial may be calculated recursively. The nature of the recursion is such that one may work with detached coefficients. The process is often called synthetic division. (For historical comments see §14.)

From (1.9) it follows that

P(t)/(t − p_i) = Σ_{j=0}^{n−1} A_j(p_i) t^{n−1−j}.

Thus

(1.10)    A_j(p_i) = (−1)^j σ_j^{(i)},

where σ_j^{(i)} denotes the jth elementary symmetric function of p_1, p_2, ··· , p_n with p_i deleted. Hence any result which involves A_j(p_i) can always be interpreted as a result involving elementary symmetric functions.
2. Brief survey of results. The orthonormality relation given by (3.5) is of basic importance. It is used in §4 to provide us with two numerical algorithms for inverting the Vandermonde matrix. A variety of row and column sums to be used for check calculations are provided.

In §5 we point out advantages of writing the representation of an interpolatory polynomial in canonical polynomial form. In §6, the canonical representation of the preceding section is used to derive formulas for the approximation of linear functionals. Equation (6.4) exhibits the approximation as a vector-matrix-vector product. The columns of the inverse to the Vandermonde matrix may be characterized as the coefficients of the approximations to f^{(k)}(0)/k!.

In §7 we show that the variation of the coefficients of a polynomial with the variation of its zeros is determined by associated polynomials, (7.1) being the main result. In §8 we give the general solution of an inhomogeneous difference or differential equation with constant coefficients as an explicit function of the initial conditions.

The calculation of divided differences is discussed in §9 and is applied in §10 to "nonnoisy" computation and to the calculation of Stirling numbers. A number of interesting formulas are derived in §11. A symbolic statement of the Newton relations is given by (11.7) and a sweeping generalization of an important formula of Euler is given by (11.14) or (11.16).

Consideration of the transformation between powers of t and associated polynomials leads to recognition of the relation between Wronski's aleph function and associated polynomials. The main result is given by (12.4). In §13 we give a simple solution to the classical problem of expressing an arbitrary power of a zero of a polynomial as a linear combination of the first n − 1 powers of that zero. A symbolic solution is provided by (13.5). We end with a historical survey in §14 and a summary of notation in the Appendix.
3. The basic orthonormality relation. Consider the generating function

(3.1)    P[t, u] = Σ_{j=0}^{n−1} A_{n−1−j}(u) t^j.
Let t → u. Then

(3.2)    P′(u) = Σ_{j=0}^{n−1} A_{n−1−j}(u) u^j.

Let p_i and p_k be distinct zeros of P. Then

(3.3)    0 = Σ_{j=0}^{n−1} A_{n−1−j}(p_i) p_k^j.

Since p_i is a simple zero of P, we conclude from (3.2) and (3.3),

(3.4)    Σ_{j=0}^{n−1} A_{n−1−j}(p_i) p_k^j / P′(p_i) = δ_{ik},    i, k = 1, 2, ··· , n,
where δ_{ik} is the Kronecker delta function which vanishes except when the indices are equal.

Let α and β denote the matrices whose elements are

α_{ij} = A_{n−1−j}(p_i)/P′(p_i),    i = 1, 2, ··· , n,    j = 0, 1, ··· , n − 1;
β_{jk} = p_k^j,    j = 0, 1, ··· , n − 1,    k = 1, 2, ··· , n.

Thus β is just the Vandermonde matrix. Let I denote the identity matrix. Then (3.4) may be written

αβ = I.

This implies that

βα = I,

or

(3.5)    Σ_{k=1}^{n} p_k^i A_{n−1−j}(p_k) / P′(p_k) = δ_{ij},    i, j = 0, 1, ··· , n − 1.
The orthonormality relation given by (3.5) is a basic result which permits us to solve a variety of linear problems. It is of importance for two reasons. First it permits the theoretical investigation of "inverse problems". Secondly it gives the inverse to the Vandermonde matrix in a form well suited for numerical calculation. Two algorithms for the inversion are discussed in detail in the next section.

Note that the ranges of i, j, k are not the same. This is because we have chosen to follow the usual convention of denoting the n zeros of a polynomial by p_1, p_2, ··· , p_n. If we had labeled the zeros p_0, p_1, ··· , p_{n−1}, the difference in ranges could have been avoided.
Two important consequences of (3.5) are

(3.6)    Σ_{i=1}^{n} p_i^μ / P′(p_i) = δ_{μ,n−1},    μ = 0, 1, ··· , n − 1,

(3.7)    Σ_{i=1}^{n} A_ν(p_i) / P′(p_i) = δ_{ν,n−1},    ν = 0, 1, ··· , n − 1.
Relation (3.6) is a well-known formula of Euler. Both (3.6) and (3.7) are special cases of (11.16).
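The relation (3.5) can be checked numerically. The following Python sketch builds α and the Vandermonde matrix β for a few arbitrary distinct points and verifies αβ = βα = I (numpy's `poly`, `polyder`, and `polyval` stand in for the hand calculations; the function names are ours):

```python
import numpy as np

def assoc_all(u, a):
    # [A_0(u), ..., A_n(u)] by the recursion (1.5)
    A, prev = [], 0.0
    for a_j in a:
        prev = u * prev + a_j
        A.append(prev)
    return A

def alpha_beta(p):
    # alpha_ij = A_{n-1-j}(p_i)/P'(p_i);  beta_jk = p_k^j  (Vandermonde)
    p = np.asarray(p, float)
    n = len(p)
    a = np.poly(p)                       # monic coefficients a_0, ..., a_n
    dP = np.polyval(np.polyder(a), p)    # P'(p_i)
    alpha = np.array([[assoc_all(pi, a)[n - 1 - j] for j in range(n)]
                      for pi in p]) / dP[:, None]
    beta = np.vander(p, n, increasing=True).T
    return alpha, beta
```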
4. The inverse of the Vandermonde matrix. From (3.5) we see that the inverse of the Vandermonde matrix is given by the matrix whose elements are

(4.1)    α_{ij} = A_{n−1−j}(p_i) / P′(p_i).

Equation (4.1) not only gives an explicit formula for the inverse, but in addition this formula is well suited for hand or machine calculation. We shall describe two algorithms for forming α, given the sample points p_i.

As we shall show below, check calculations may easily be performed on the rows and columns of (4.1). If the p_i satisfy symmetry conditions, then the rows of (4.1) satisfy certain symmetry conditions. A characterization of the columns of α in terms of approximation of derivatives is given in §6. The case of most importance for applications is that of a_j and p_i real. For the purpose of making operation counts, we restrict ourselves to this case for the remainder of this section.
We now describe Algorithm I for calculating α. The first step in this algorithm is to calculate the coefficients of P from the p_i. Let σ_{m,j} denote the jth elementary symmetric function of the numbers p_1, p_2, ··· , p_m. Then a_j = (−1)^j σ_{n,j}. The σ_{m,j} may be calculated recursively by

(4.2)    σ_{m,j} = σ_{m−1,j} + p_m σ_{m−1,j−1},    m = 2, 3, ··· , n,    j = 1, 2, ··· , m,

with

σ_{m,0} = 1,    σ_{1,1} = p_1;    σ_{m,j} = 0,    j > m.

A numerical example of the recursion is given below. The calculation of all the a_j requires ½n(n − 1) multiplications and the same number of additions.
For each p_i, A_{n−1−j}(p_i) may be calculated by the recursion

(4.3)    A_0(p_i) = 1,    A_j(p_i) = p_i A_{j−1}(p_i) + a_j,    j = 1, 2, ··· , n − 1.

Hence A_j(p_i), j = 0, 1, ··· , n − 1, may be calculated with just n − 2 multiplications and n − 1 additions. Observe that

A_1′(p_i) = 1,    A_j′(p_i) = p_i A_{j−1}′(p_i) + A_{j−1}(p_i),    j = 2, 3, ··· , n;    A_n′(p_i) = P′(p_i).

Hence P′(p_i) may be calculated with just n − 2 multiplications and n − 1 additions.

Let us assume that a division time is comparable to a multiplication time. Then α may be formed from the p_i with (n/2)(7n − 9) multiplications and (5/2)n(n − 1) additions for n > 1. (See also the discussion below on the decomposition of α into the product of two matrices.)
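Algorithm I can be sketched in a few lines of Python: build the a_j by the recursion (4.2), then for each p_i run the synthetic divisions (4.3) for the A_j(p_i) and a second one for P′(p_i). A sketch for illustration only; the operation counts above refer to the hand scheme, and the function name is ours:

```python
import numpy as np

def vandermonde_inverse(p):
    """Algorithm I sketch: alpha_ij = A_{n-1-j}(p_i)/P'(p_i),
    the inverse of the Vandermonde matrix beta_jk = p_k^j."""
    n = len(p)
    # (4.2): coefficients a_1, ..., a_n of P, one factor (t - p_m) at a time
    a = [1.0] + [0.0] * n
    for p_m in p:
        for j in range(n, 0, -1):
            a[j] -= p_m * a[j - 1]
    alpha = np.empty((n, n))
    for i, p_i in enumerate(p):
        # (4.3): A_j(p_i), j = 0, ..., n-1, by synthetic division
        A = [1.0]
        for j in range(1, n):
            A.append(p_i * A[-1] + a[j])
        # P'(p_i) by a second synthetic division (the A'_j recursion)
        dP = A[0]
        for j in range(2, n + 1):
            dP = p_i * dP + A[j - 1]
        alpha[i, :] = [A[n - 1 - j] / dP for j in range(n)]
    return alpha
```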
We now describe Algorithm II for forming α. This algorithm is based on the fact that

(4.4)    π_i(t) = P(t)/(t − p_i) = Π_{j≠i} (t − p_j) = Σ_{j=0}^{n−1} A_{n−1−j}(p_i) t^j.

Hence the A_{n−1−j}(p_i), j = 0, 1, ··· , n − 1, are given by the coefficients of π_i(t), and these are formed by the recursion (4.2) based on the numbers p_1, p_2, ··· , p_{i−1}, p_{i+1}, ··· , p_n. The calculation of the P′(p_i) coincides with the calculation of the P′(p_i) in Algorithm I.

The calculation of α from the p_i by Algorithm II requires n(n² + n − 2)/2 multiplications and n²(n − 1)/2 additions.
Hence for n ≦ 4, Algorithm II requires fewer operations, while for n > 4, Algorithm I requires fewer operations. Observe that in Algorithm I, α is formed by a process of "analysis" whereas in Algorithm II, α is formed by a process of "synthesis". That is, in Algorithm I we form the coefficients of P and then calculate the A_j(p_i) by division. In Algorithm II we form the coefficients of π_i(t) and no division is involved. It is clear that Algorithm II is numerically more stable than Algorithm I.

We can cut down the number of multiplications required by Algorithm II by reusing some of the common factors obtained in the calculation of one π_i(t) in the calculation of another π_i(t).
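Algorithm II forms each row directly from the coefficients of π_i(t) = Π_{j≠i}(t − p_j), with no division until the final scaling by P′(p_i) = π_i(p_i). A hedged Python sketch (numpy's `polyval` evaluates π_i; the function name is ours):

```python
import numpy as np

def vandermonde_inverse_synthesis(p):
    """Algorithm II sketch: gamma_ij = A_{n-1-j}(p_i) is the coefficient
    of t^j in pi_i(t); dividing row i by P'(p_i) = pi_i(p_i) gives alpha."""
    n = len(p)
    alpha = np.empty((n, n))
    for i, p_i in enumerate(p):
        c = [1.0] + [0.0] * (n - 1)          # pi_i coefficients, highest first
        for p_j in (x for k, x in enumerate(p) if k != i):
            for j in range(n - 1, 0, -1):
                c[j] -= p_j * c[j - 1]
        dP = np.polyval(c, p_i)              # P'(p_i) = pi_i(p_i)
        alpha[i, :] = [c[n - 1 - j] / dP for j in range(n)]
    return alpha
```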
We now want to show that certain advantages are obtained by decomposing α into the product of two matrices. Let ν be the diagonal matrix with elements

(4.6)    ν_{ii} = P′(p_i).

Then ν^{−1} has elements 1/P′(p_i) on its diagonal. Let γ be the matrix with elements

(4.7)    γ_{ij} = A_{n−1−j}(p_i).

Then

(4.8)    α = ν^{−1}γ.

This decomposition of α has a number of advantages. As we shall see, α is often used to form a vector-matrix-vector product, fαM. (See (6.4).) Rather than using n² multiplications to form α from ν and γ, we can form γM and then form ν^{−1}(γM) in just n multiplications. A second advantage of this approach is that if the p_i are integers, all elements of γ and ν are integers, whereas the elements of α would not in general be integers. Because of the reduction in the number of multiplications and because of the integral property, the accuracy of the calculation is improved by the use of the decomposition. If the matrices ν and γ are stored in a machine or in a table, one vector suffices to hold the information in ν. Observe on the other hand that α can be produced once and for all, and to use the decomposition α = ν^{−1}γ requires n extra divisions for each M to which α is applied.
As a numerical example, we choose the p_i as −2, −1, 0, 1, 2 and calculate α by Algorithm I. The calculation of the a_j, A_j(p_i), and P′(p_i) is performed with detached coefficients. The rows of γ are obtained by writing the elements enclosed in parentheses in reverse order. The P′(p_i) are enclosed in circles. Note that A_n(p_i) is calculated as a check; it should be zero. See Table 1.

TABLE 1. [Detached-coefficient worksheet for the a_j, A_j(p_i), and P′(p_i); not reproduced here.]

Hence

ν = diag(24, −6, 4, −6, 24),

γ =
[ 0    2   −1   −2    1 ]
[ 0    4   −4   −1    1 ]
[ 4    0   −5    0    1 ]
[ 0   −4   −4    1    1 ]
[ 0   −2   −1    2    1 ],

α = ν^{−1}γ = (1/24) ·
[  0     2   −1   −2    1 ]
[  0   −16   16    4   −4 ]
[ 24     0  −30    0    6 ]
[  0    16   16   −4   −4 ]
[  0    −2   −1    2    1 ].
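The ν and γ of this example are easy to reproduce mechanically; a small numpy check (assuming numpy's `poly` and `polyder` coefficient conventions):

```python
import numpy as np

p = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
a = np.poly(p)                                  # P(t) = t^5 - 5t^3 + 4t
nu = np.polyval(np.polyder(a), p)               # diagonal of nu: P'(p_i)
# gamma[i, j] = A_{n-1-j}(p_i) = coefficient of t^j in pi_i(t)
gamma = np.array([np.poly(np.delete(p, i))[::-1] for i in range(5)])
```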
Observe that the rows of γ and α enjoy certain symmetry properties and that certain of the elements are zero. We show that this is characteristic for symmetric arrangements of the zeros of P.

First we draw some conclusions for the case that P(0) = 0. In particular, let p_k = 0. Then it may be shown that

(4.9)    γ_{i0} = δ_{ik} a_{n−1},    i = 1, 2, ··· , n;

(4.10)    α_{i0} = δ_{ik},    i = 1, 2, ··· , n.

Furthermore

(4.11)    γ_{kj} = a_{n−1−j},    j = 0, 1, ··· , n − 1;

(4.12)    α_{kj} = a_{n−1−j}/a_{n−1},    j = 0, 1, ··· , n − 1.

Observe that a_{n−1} ≠ 0 since double zeros are not permitted.
Assume now that n is odd and that the p_i are symmetrically placed about the origin. This is an important case in the applications. Arrange the p_i in increasing order. Then it may be shown that

(4.13)    γ_{ij} = (−1)^j γ_{n+1−i,j},    j = 0, 1, ··· , n − 1,    i = 1, 2, ··· , ½(n − 1);    γ_{kj} = a_{n−1−j},    k = ½(n + 1);

(4.14)    α_{ij} = (−1)^j α_{n+1−i,j},    j = 0, 1, ··· , n − 1,    i = 1, 2, ··· , ½(n − 1);    α_{kj} = a_{n−1−j}/a_{n−1},    k = ½(n + 1).

Observe that P has only odd powers of t. The application of (4.9), (4.10), (4.13), and (4.14) shows that the occurrence of zeros in the first column and middle row of (4.7) and (4.8) is not accidental.
Now let n be even and let the zeros of P be symmetrically arranged about the origin. Then

(4.15)    γ_{ij} = (−1)^{j+1} γ_{n+1−i,j},    j = 0, 1, ··· , n − 1,    i = 1, 2, ··· , ½n;

(4.16)    α_{ij} = (−1)^j α_{n+1−i,j},    j = 0, 1, ··· , n − 1,    i = 1, 2, ··· , ½n.
A number of check formulas may be derived. First, note that A_n(p_i) = 0. Hence in performing the recursion to form the elements A_j(p_i), j = 0, 1, ··· , n − 1, it may be worthwhile to perform one more recursion so as to obtain A_n(p_i) and compare with zero.

In addition we obtain row and column check sums for γ and α. From (11.18),

(4.17)    Σ_{i=1}^{n} γ_{ij} = Σ_{i=1}^{n} A_{n−1−j}(p_i) = (j + 1) a_{n−1−j},    j = 0, 1, ··· , n − 1,

which provides us with a column sum check for γ. From (3.7),

(4.18)    Σ_{i=1}^{n} α_{ij} = Σ_{i=1}^{n} A_{n−1−j}(p_i)/P′(p_i) = δ_{j0},    j = 0, 1, ··· , n − 1,

which provides us with a column sum check for α.
To obtain row sum checks we proceed as follows. Let p_i ≠ 1, i = 1, 2, ··· , n. From

P(t)/(t − p_i) = Σ_{j=0}^{n−1} A_{n−1−j}(p_i) t^j,

it follows that

(4.19)    Σ_{j=0}^{n−1} γ_{ij} = Σ_{j=0}^{n−1} A_{n−1−j}(p_i) = C/(1 − p_i),    i = 1, 2, ··· , n,

where

C = Σ_{k=0}^{n} a_k = P(1).

On the other hand, suppose one of the p_i is unity. Let p_k = 1. Then

(4.20)    Σ_{j=0}^{n−1} γ_{ij} = P′(1) δ_{ik},    i = 1, 2, ··· , n.
The corresponding row sum checks for α are

(4.21)    Σ_{j=0}^{n−1} α_{ij} = C/[(1 − p_i)P′(p_i)],    i = 1, 2, ··· , n,

if none of the p_i is unity, and

(4.22)    Σ_{j=0}^{n−1} α_{ij} = δ_{ik}

if p_k = 1.
From (3.6),

(4.23)    Σ_{i=1}^{n} 1/P′(p_i) = Σ_{i=1}^{n} α_{i,n−1} = 0,    n > 1.

If n is even and the zeros are symmetric with respect to the origin, then

(4.24)    Σ_{i=1}^{n} γ_{ij} = 0 for j even,    Σ_{i=1}^{n} α_{ij} = 0 for j odd.
5. Interpolation. Let p_1, p_2, ··· , p_n be n arbitrary distinct numbers and let f_1, f_2, ··· , f_n be n arbitrary numbers which are not necessarily distinct. Let P(t) = Π_{i=1}^{n} (t − p_i). Then there exists a unique polynomial of degree n − 1, I_{n−1}(t, f, P), called the interpolatory polynomial, such that

(5.1)    I_{n−1}(p_i, f, P) = f_i,    i = 1, 2, ··· , n.

The symbol f on the left side of (5.1) symbolizes the n given numbers f_1, f_2, ··· , f_n, and does not imply the existence of a function f. Any of the coefficients of I_{n−1} may be zero and indeed I_{n−1} could be identically zero.

There are two well-known representations of the interpolatory polynomial, those which bear the names of Lagrange and Newton. One other representation has certain advantages over the two conventional representations. This is a representation in the canonical form for a polynomial, as a linear combination of powers of t. Such a representation is advantageous if one wants to perform operations on the interpolatory polynomial, which is often the case in practice. Writing the interpolatory polynomial in canonical form is hardly a surprising step. But the advantages of doing so seem to have been largely overlooked.
Let

I_{n−1}(t, f, P) = Σ_{j=0}^{n−1} d_{n−1−j} t^j.

Thus

f_i = Σ_{j=0}^{n−1} d_{n−1−j} p_i^j,    i = 1, 2, ··· , n.

By the orthonormality relation (3.5),

(5.2)    d_{n−1−j} = Σ_{i=1}^{n} f_i A_{n−1−j}(p_i)/P′(p_i).

Let α be the matrix with elements

α_{ij} = A_{n−1−j}(p_i)/P′(p_i),    i = 1, 2, ··· , n,    j = 0, 1, ··· , n − 1.

Let f be the vector with elements f_i, i = 1, 2, ··· , n, and let t be the vector with elements t_j = t^j, j = 0, 1, ··· , n − 1. Then

I_{n−1}(t, f, P) = Σ_{j=0}^{n−1} Σ_{i=1}^{n} f_i α_{ij} t^j,

or

(5.3)    I_{n−1}(t, f, P) = fαt.

Observe that α depends only on the sample points p_i, while f depends only on the f_i. The matrix α may be easily calculated as discussed in §4. For certain "standard sets" of sample points this calculation may be done once and for all and tabulated. For any set of f_i, the coefficients of I_{n−1}(t) may then be computed by a vector-matrix multiplication. As we shall see in the next section, the form (5.3) is well suited for the approximation of linear functionals.
Rather than using the set 1, t, ··· , t^{n−1} as the basic set, we can use A_0(t), A_1(t), ··· , A_{n−1}(t). Let

(5.4)    I_{n−1}(t, f, P) = Σ_{j=0}^{n−1} c_{n−1−j} A_j(t).

Then from the orthonormality relation,

(5.5)    c_{n−1−j} = Σ_{i=1}^{n} f_i p_i^{n−1−j}/P′(p_i).
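Since α is the inverse of the Vandermonde matrix, the canonical coefficients fα of (5.3) can be obtained numerically by solving one linear system; a minimal sketch (the matrix V and the function name are ours):

```python
import numpy as np

def interp_power_coeffs(p, f):
    # Coefficients (c_0, ..., c_{n-1}) of I_{n-1}(t, f, P) = sum_j c_j t^j.
    # By (5.3) this vector is f*alpha; numerically it is V^{-1} f,
    # where V[i, j] = p_i^j.
    V = np.vander(np.asarray(p, float), len(p), increasing=True)
    return np.linalg.solve(V, np.asarray(f, float))
```

With p = (1, 2, 3) and f = (1, 4, 9) the result is (0, 0, 1): the interpolatory polynomial is t², as it must be.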
6. Approximation of linear functionals. Let L be a linear functional from a function space with elements f to the real or complex numbers. Let f(p_i) = f_i. One general method for approximating Lf is by applying L to an interpolatory polynomial for f. Many well-known approximate integration and differentiation formulas are of this type. Let M_j denote the moments of L,

(6.1)    Lt^j = M_j,    j = 0, 1, ··· , n − 1.

We introduce the notation

(6.2)    Q(L, f, P) = LI_{n−1}(t, f, P).

If we approximate Lf by LI_{n−1}(t, f, P), we write

(6.3)    Lf ≈ Q(L, f, P).

From

I_{n−1}(t, f, P) = fαt,

we have immediately that

(6.4)    Q(L, f, P) = fαM,

where M denotes the vector with elements M_0, M_1, ··· , M_{n−1}. Observe that M depends only on L, α depends only on the sample points, and f depends only on f. These formulas are conventionally given as a linear combination of f_i. That is, the matrix-vector product αM is formed and Q is then expressed as a scalar product. The generalization of (6.4) to the case of confluent sample points will be discussed in another paper.
The columns of the α matrix may be characterized by the approximation of one particular set of functionals. Let

(6.5)    L_k f = f^{(k)}(0)/k!,    k = 0, 1, ··· , n − 1.

Then

L_k t^j = δ_{jk},    j, k = 0, 1, ··· , n − 1.

For k fixed, L_k t^j = M_j^{(k)} is a column vector with one nonzero element, and αM^{(k)} is just the kth column of α. Hence

(6.6)    f^{(k)}(0)/k! ≈ fαM^{(k)} = Σ_{i=1}^{n} f(p_i) α_{ik}.

Thus we can characterize the columns of α as the coefficients in the approximation of f^{(k)}(0)/k!, where the approximation is obtained by applying L_k to the interpolatory polynomial.
As a simple numerical illustration of the results of this section we take the sample zeros as −2, −1, 0, 1, 2. We calculated in §4 that

α = (1/24) ·
[  0     2   −1   −2    1 ]
[  0   −16   16    4   −4 ]
[ 24     0  −30    0    6 ]
[  0    16   16   −4   −4 ]
[  0    −2   −1    2    1 ].
Let

Lf = f′(0).

Then

M = (0, 1, 0, 0, 0)^T,    αM = (1/12)(1, −8, 0, 8, −1)^T,

(6.7)    f′(0) ≈ (1/12)[f(−2) − 8f(−1) + 8f(1) − f(2)].

This is a special case of the preceding discussion on the columns of α.
Let

Lf = ∫_{−2}^{2} f(t) dt.

Then

M = (4, 0, 16/3, 0, 64/5)^T,    αM = (2/45)(7, 32, 12, 32, 7)^T,

(6.8)    ∫_{−2}^{2} f(t) dt ≈ (2/45)[7f(−2) + 32f(−1) + 12f(0) + 32f(1) + 7f(2)].
Let

Lf = ∫_{−3}^{3} f(t) dt.

Then

M = (6, 0, 18, 0, 486/5)^T,    αM = (3/10)(11, −14, 26, −14, 11)^T,

(6.9)    ∫_{−3}^{3} f(t) dt ≈ (3/10)[11f(−2) − 14f(−1) + 26f(0) − 14f(1) + 11f(2)].
Equation (6.8) is an example of a closed integration formula while (6.9) is an
example of an open formula.
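Both rules follow from (6.4) by one matrix-vector product αM; a short numpy sketch reproducing the weights of (6.7) and (6.8), with α formed here by matrix inversion rather than by the algorithms of §4:

```python
import numpy as np

p = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
V = np.vander(p, 5, increasing=True)            # V[i, j] = p_i^j
alpha = np.linalg.inv(V.T)                      # alpha = beta^{-1}
M_diff = np.array([0.0, 1.0, 0.0, 0.0, 0.0])    # moments of Lf = f'(0)
w_diff = alpha @ M_diff                         # weights of (6.7)
M_int = np.array([4.0, 0.0, 16/3, 0.0, 64/5])   # moments of the integral over [-2, 2]
w_int = alpha @ M_int                           # weights of (6.8)
```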
7. The variation of the coefficients of a polynomial with the variation of its zeros. Let

P(t) = Σ_{j=0}^{n} a_{n−j} t^j,    a_0 = 1,

P(t) = Π_{i=1}^{n} (t − p_i) = P(t, p_1, ··· , p_n).
Denote by Δa_k the change in a_k due to a change Δp_i in p_i. We have

P(t, p_1, ··· , p_i + Δp_i, ··· , p_n) = (t − p_1) ··· (t − p_i − Δp_i) ··· (t − p_n)
= P(t, p_1, ··· , p_i, ··· , p_n) − Δp_i P(t)/(t − p_i).

Hence

ΔP = −Δp_i Σ_{r=0}^{n−1} A_r(p_i) t^{n−1−r},

where

ΔP = P(t, p_1, ··· , p_i + Δp_i, ··· , p_n) − P(t, p_1, ··· , p_i, ··· , p_n).

On the other hand

ΔP = Σ_{r=0}^{n−1} Δa_{r+1} t^{n−1−r}.

Hence

(7.1)    Δa_{r+1} = −Δp_i A_r(p_i),    i = 1, 2, ··· , n,    r = 0, 1, ··· , n − 1.

Thus A_r(p_i) may be characterized as the change in a_{r+1} due to a change of −1 in p_i. Note that (7.1) holds in the large, that is, for arbitrary changes Δp_i. Taking the limit as Δp_i → 0,

(7.2)    ∂a_{r+1}/∂p_i = −A_r(p_i).

The differential form only is given by Brioschi [2] without proof.
Since the matrix with elements ∂p_i/∂a_{r+1} is the inverse of the matrix with elements ∂a_{r+1}/∂p_i, we can conclude from (3.5) that

∂p_i/∂a_{r+1} = −p_i^{n−1−r}/P′(p_i),

or

(7.3)    ∂p_i/∂a_r = −p_i^{n−r}/P′(p_i).

Equation (7.3) has been obtained by Brioschi [2] and applied by Olver [21, p. 412].
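Equation (7.2) can be checked by a finite difference; a small numerical sketch for P = (t − 1)(t − 2)(t − 4), perturbing p_1 (the step size h is an arbitrary choice of ours):

```python
import numpy as np

zeros = [1.0, 2.0, 4.0]
a = np.poly(zeros)                             # a = (1, -7, 14, -8)
h = 1e-7
da = (np.poly([1.0 + h, 2.0, 4.0]) - a) / h    # forward difference in p_1
# A_r(p_1) by the recursion (1.5):
A, prev = [], 0.0
for coeff in a[:-1]:
    prev = zeros[0] * prev + coeff
    A.append(prev)                             # A_0(p_1), A_1(p_1), A_2(p_1)
# (7.2) predicts da[r+1] ~ -A_r(p_1), r = 0, 1, 2.
```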
8. Solution of inhomogeneous difference and differential equations with constant coefficients. Consider the linear difference equation (or recursion formula)

(8.1)    Σ_{j=0}^{n} a_{n−j} f(λ + j) = g(λ),    λ = 0, 1, ··· ,

with the initial conditions

f(m) = y_m,    m = 0, 1, ··· , n − 1.

Using the translation operator E we may also write

(8.2)    P(E)f(λ) = g(λ),    E^μ f(0) = y_μ,    μ = 0, 1, ··· , n − 1.

It may be shown that the solution as an explicit function of the initial conditions is given by

(8.3)    f(λ) = Σ_{j=0}^{n−1} y_{n−1−j} Σ_{i=1}^{n} A_j(p_i) p_i^λ / P′(p_i) + Σ_{j=n−1}^{λ−1} g(λ − 1 − j) Σ_{i=1}^{n} p_i^j / P′(p_i).

This formula was given by Traub [27], the proof based on a division algebra. This formula may also be derived by any of the standard transform techniques. A direct derivation based on polynomial interpolation is given by Traub [28]. The fact that the solution satisfies the initial conditions follows immediately from (3.5). That the solution satisfies the difference equation may also be verified.
Certain solutions of the homogeneous equation are particularly simple. Let g(λ) = 0. Let y_{n−1−j} = δ_{jk} and let the corresponding solution be labeled f_k(λ). That is, the starting sequence has a one in the (n − 1 − k)th position and zeros elsewhere. Then

f_k(λ) = Σ_{i=1}^{n} A_k(p_i) p_i^λ / P′(p_i),

or, using (1.6),

(8.4)    f_k(λ) = A_k(E) Q(λ),

where

Q(λ) = Σ_{i=1}^{n} p_i^λ / P′(p_i).
We turn to differential equations. Let

(8.5)    P(D)f(x) = g(x),    D^μ f(0) = y_μ,    μ = 0, 1, ··· , n − 1,

where D is the differentiation operator. Then

(8.6)    f(x) = Σ_{j=0}^{n−1} y_{n−1−j} Σ_{i=1}^{n} A_j(p_i) e^{p_i x} / P′(p_i) + ∫_0^x g(x − v) Σ_{i=1}^{n} e^{p_i v} / P′(p_i) dv.
As in the difference equation case treated above, one may verify that (8.6) satisfies the equation and initial conditions of (8.5).

The fitting of initial conditions usually involves determinants. We have avoided the determinants by inverting the Vandermonde matrix in (3.5).
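The explicit solution (8.3) can be cross-checked against the recursion it solves. A hedged Python sketch for the difference-equation case (distinct zeros assumed; numpy's `roots` supplies the p_i, and the function name is ours):

```python
import numpy as np

def f_explicit(a, y, g, lam):
    """Evaluate (8.3) for P(E)f = g with f(m) = y[m], m = 0..n-1.
    a = [a_0 = 1, a_1, ..., a_n]; the zeros of P are assumed distinct."""
    a = np.array(a, float)
    n = len(a) - 1
    if lam < n:
        return y[lam]
    p = np.roots(a)                           # the zeros p_i
    dP = np.polyval(np.polyder(a), p)         # P'(p_i)
    val = 0.0
    for j in range(n):                        # homogeneous part
        A_j = sum(a[j - r] * p**r for r in range(j + 1))   # (1.6)
        val += y[n - 1 - j] * np.sum(A_j * p**lam / dP)
    for j in range(n - 1, lam):               # particular part
        val += g(lam - 1 - j) * np.sum(p**j / dP)
    return val.real
```

For f(λ+2) − 5f(λ+1) + 6f(λ) = 1 with f(0) = 0, f(1) = 1, direct recursion gives f(3) = 25 and f(5) = 301, and the formula reproduces both.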
9. Divided differences. It is well known that repeated synthetic division of a polynomial P by a fixed number u_0 results in the calculation of P^{(j)}(u_0)/j!, j = 0, 1, ··· , n. This fact lies at the heart of the Ruffini-Horner method for solving algebraic equations [3], [9]. This method depends on calculating from the coefficients of a given polynomial the coefficients of a second polynomial all of whose zeros are less than those of the given polynomial by an amount u_0. Let P(t + u_0) = Z(t). Then

Z(t) = Σ_{j=0}^{n} [P^{(j)}(u_0)/j!] t^j.

Let p_i be a zero of P. Then

0 = P(p_i) = Z(p_i − u_0).

Hence the polynomial with reduced zeros has coefficients P^{(j)}(u_0)/j!, which coefficients may be computed by iterated synthetic division by u_0.

We generalize this result by investigating synthetic division by a sequence u_0, u_1, ··· , u_{n−1}. The results may be used to perform "nonnoisy" calculations on polynomials as shown in §10.

Divided differences are usually recursively defined as follows. Let u_0, u_1, ··· , u_{n−1} be a sequence of numbers. Let

P[t] = P(t),

P[t, u_0, u_1, ··· , u_i] = (P[t, u_0, u_1, ··· , u_{i−1}] − P[u_0, u_1, ··· , u_i])/(t − u_i),    i = 0, 1, ··· , n − 1,

with

P[t, u_{−1}] = P[t].
If P is a polynomial of degree n, then the ith divided difference, P[t, u_0, u_1, ··· , u_{i−1}], is a polynomial of degree n − i and is symmetric in all its arguments. We introduce the simplified notation

P_0(t) = P(t),    P_i(t) = P[t, u_0, u_1, ··· , u_{i−1}],    i = 1, 2, ··· , n.

Then

(9.1)    P_i(t) = (P_{i−1}(t) − P_{i−1}(u_{i−1}))/(t − u_{i−1}),    i = 1, 2, ··· , n,

or

(9.2)    P_i(t) = P_{i−1}[t, u_{i−1}].
Let

(9.3)    m = n − i.

From (9.1) and (1.8),

(9.4)    P_{i+1}(t) = P_i[t, u_i] = Σ_{j=0}^{m−1} A_j(u_i, P_i) t^{m−1−j}.

That is, the coefficients of the (i + 1)st difference of P(t) with respect to t, u_0, u_1, ··· , u_i are given by the associated polynomials of the ith difference of P. It follows that if, for convenience, we define the symbols

A_{−1}(u_i, P_i) = 0,    A_j(u_{−1}, P_{−1}) = a_j,

then

(9.5)    A_j(u_i, P_i) = u_i A_{j−1}(u_i, P_i) + A_j(u_{i−1}, P_{i−1}),

i = 0, 1, ··· , n, j = 0, 1, ··· , m, where m = n − i. From this it follows that

(9.6)    A_j(u_i, P_i) = Σ_{k=0}^{j} A_k(u_{i−1}, P_{i−1}) u_i^{j−k},    j = 0, 1, ··· , m.

Furthermore

(9.7)    A_m(u_i, P_i) = P[u_0, u_1, ··· , u_i].

This generalizes (1.7),

A_n(u_0, P) = P(u_0).

Equation (9.7) gives the generalization of the comments at the beginning of this section concerning the calculation of derivatives by detached coefficients since if u_0 = u_1 = ··· = u_i, the ith divided difference is just P^{(i)}(u_0)/i!.
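The pair (9.5), (9.7) amounts to repeated synthetic division, each pass by the next u_i; no differences are ever formed. A minimal Python sketch (the function name is ours):

```python
def divided_differences(a, u):
    """Return [P[u_0], P[u_0, u_1], ...] for the polynomial with
    coefficients a = [a_0, ..., a_n] (highest first), by repeated
    synthetic division -- the "nonnoisy" scheme of (9.5) and (9.7)."""
    coeffs = list(a)                  # coefficients of P_0 = P
    out = []
    for u_i in u:
        A, prev = [], 0.0
        for c in coeffs:              # one synthetic division by u_i
            prev = u_i * prev + c
            A.append(prev)
        out.append(A[-1])             # (9.7): A_m(u_i, P_i) = P[u_0, ..., u_i]
        coeffs = A[:-1]               # coefficients of P_{i+1}, by (9.4)
    return out
```

For P(t) = t² − 3t + 2 and u = (1, 1) this returns (0, −1) = (P(1), P′(1)), the confluent case promised at the start of this section.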
10. Applications of divided difference calculations. The importance of (9.5) and (9.7) lies in the fact that these results permit the calculation of divided differences of polynomials in a "nonnoisy mode." By this we mean the following. If the points u_0, u_1, ··· , u_{i+1} are close together, then the calculation of P[u_0, u_1, ··· , u_{i+1}] by

(10.1)    P[u_0, u_1, ··· , u_{i+1}] = (P[u_0, u_1, ··· , u_i] − P[u_1, u_2, ··· , u_{i+1}])/(u_0 − u_{i+1})

leads to the loss of accuracy due to the subtraction of quantities which are close together. This is the case when divided differences are being used to approximate the derivative at a point. This is easily corrected by performing the calculation using (9.5) and (9.7).

We must add one word of caution. The use of (9.5) and (9.7) permits us to avoid the subtraction of (10.1). Of course, it is possible that (9.5) also involves a
subtraction of close quantities. A simple example is given when u_0 ≈ −a_1. However, this would be an unusual rather than a typical circumstance.
We apply this to the solution of polynomial equations by iterative methods. One common method is to use the iteration formula

x_{i+1} = x_i − P(x_i)/P′(x_i),

with P(x_i) and P′(x_i) calculated by detached coefficients. This variation of the Newton-Raphson method is often called the Birge-Vieta method. For monic polynomials, the Birge-Vieta method requires 2n − 1 multiplications per iteration.

Another possibility is to use the formula

(10.2)    x_{i+1} = x_i − P(x_i)/P[x_i, x_{i−1}],

and to use at the next iteration the two points at which P has opposite signs. This method is commonly called regula falsi [26]. Let P(a) = 0. As x_i → a, P[x_i, x_{i−1}] becomes harder to calculate accurately by the usual formula

(10.3)    P[x_i, x_{i−1}] = (P(x_i) − P(x_{i−1}))/(x_i − x_{i−1}).

By calculating P(x_i) by synthetic division and P[x_i, x_{i−1}] by a second synthetic division, the difficulty is avoided. The number of multiplications is 2n − 1 as in the Birge-Vieta method. Regula falsi has the virtue of bracketing the zero at each step.
As a second application we consider the calculation of Stirling numbers of the second kind, s(n, i), defined by

x^n = Σ_{i=0}^{n} s(n, i) (x)_i,

where (x)_i = x(x − 1) ··· (x − i + 1). It follows that

s(n, i) = (x^n)[0, 1, ··· , i],    i = 0, 1, ··· , n;

that is, the nth row of the Stirling number matrix may be obtained using repeated synthetic division by 0, 1, ··· , n. This is a more efficient way of obtaining a number of rows of this matrix than by using either the explicit formula for Stirling numbers of the second kind or their recurrence relation [11, p. 169].
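The scheme is a few lines of code; a Python sketch (the function name is ours):

```python
def stirling_row(n):
    """Row s(n, i), i = 0, ..., n, of the Stirling numbers of the second
    kind, by repeated synthetic division of x^n by 0, 1, ..., n."""
    coeffs = [1] + [0] * n            # x^n, highest coefficient first
    row = []
    for u in range(n + 1):
        A, prev = [], 0
        for c in coeffs:
            prev = u * prev + c
            A.append(prev)
        row.append(A[-1])             # (x^n)[0, 1, ..., u] = s(n, u)
        coeffs = A[:-1]
    return row
```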
11. Some interesting formulas. A number of interesting formulas are direct consequences of the interpolation formulas of §5. They stem from the fact that if there exists a polynomial f, of degree less than or equal to n − 1, such that f(p_i) = f_i, then I_{n−1}(t, f, P) = f(t). A number of the formulas to be derived are
well known while others are new. Define

(11.1)    s(j) = Σ_{i=1}^{n} p_i^j,

(11.2)    σ(j) = Σ_{i=1}^{n} p_i^{n−1+j} / P′(p_i),

and recall that

(11.4)    A_j(t) = Σ_{r=0}^{j} a_{j−r} t^r.

Taking f_i = P′(p_i) in (5.5) and using (5.4) yields

(11.5)    P′(t) = Σ_{j=0}^{n−1} s(j) A_{n−1−j}(t).

Comparing coefficients of powers of t in (11.5) yields

(11.6)    (n − r)a_r = Σ_{j=0}^{r} a_{r−j} s(j),    r = 0, 1, ··· , n − 1,

which is equivalent to the Newton relations. Equation (11.6) may be written in the symbolic form

(11.7)    A_r(E)s(0) = (n − r)a_r.
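A quick numerical check of (11.6) for P = (t − 1)(t − 2)(t − 3), a sketch using numpy's `poly` for the coefficients:

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])
a = np.poly(p)                        # a = (1, -6, 11, -6)
n = len(p)
s = [np.sum(p**j) for j in range(n)]  # power sums s(j) of (11.1)
lhs = [(n - r) * a[r] for r in range(n)]
rhs = [sum(a[r - j] * s[j] for j in range(r + 1)) for r in range(n)]
# (11.6): lhs[r] == rhs[r] for r = 0, ..., n-1
```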
Taking f_i = p_i^k, k = 0, 1, ··· , n − 1, in (5.5) and using (5.4) yields

(11.8)    t^k = Σ_{j=0}^{n−1} σ(k − j) A_j(t).

In particular, we have

(11.9)    t^{n−1} = Σ_{j=0}^{n−1} σ(j) A_{n−1−j}(t)

as the counterpart of (11.5). Comparing coefficients of powers of t in (11.9) yields

(11.10)    Σ_{r=0}^{j} a_{j−r} σ(r) = δ_{j0},    j = 0, 1, ··· , n − 1.

This may be written in symbolic form as

(11.11)    A_j(E)σ(0) = δ_{j0}.

Let R_m(t) be an arbitrary polynomial of degree m ≦ n − 1, with coefficients g_j:

R_m(t) = Σ_{j=0}^{m} g_{m−j} t^j.
This m a y be written
n-l
(11.12)
R (t)
j
Hgm-jt
=
m
3=0
if we define

(11.13)    g_{-i} = 0,    i = 1, 2, ⋯, n - 1 - m.

It follows from (5.2) with f(t) = R_m(t) that

(11.14)    R_m(t) = Σ_{j=0}^{n-1} t^j Σ_{i=1}^{n} A_{n-1-j}(p_i) R_m(p_i) / P'(p_i).

Comparing (11.12) and (11.14) yields

(11.15)    Σ_{i=1}^{n} A_{n-1-j}(p_i) R_m(p_i) / P'(p_i) = g_{m-j},    j, m = 0, 1, ⋯, n - 1.

In particular, if R_m(t) is monic,

(11.16)    Σ_{i=1}^{n} R_m(p_i) / P'(p_i) = δ_{m,n-1},    m = 0, 1, ⋯, n - 1.

This includes (3.6) and (3.7) as special cases.
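The special case R_m(t) = t^m is Euler's classical formula, which the following sketch checks for a sample monic polynomial (the zeros chosen and the helper names are ours):

```python
import math

zeros = [1.0, 2.0, 4.0, 7.0]    # simple zeros of a monic quartic P
n = len(zeros)

def dP(p):                       # P'(p_i) = prod over the remaining zeros
    return math.prod(p - q for q in zeros if q != p)

for m in range(n):               # R_m(t) = t^m, monic of degree m
    total = sum(p**m / dP(p) for p in zeros)
    expected = 1.0 if m == n - 1 else 0.0
    assert math.isclose(total, expected, abs_tol=1e-9)
print("Euler's formulas (3.6), (3.7) verified")
```

The sums vanish for m < n - 1 and equal 1 for m = n - 1, exactly the values g_{m-n+1} predicted by (11.16).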
Taking R_m(t) = A_m(t) in (11.14) yields

    Σ_{i=1}^{n} A_m(p_i) A_{n-1-j}(p_i) / P'(p_i) = a_{m-j},    j, m = 0, 1, ⋯, n - 1,

with the convention of (11.13). Hence

(11.17)    Σ_{i=1}^{n} A_m^2(p_i) / P'(p_i) = a_{2m-n+1}.

Taking R_m(t) = P'(t) yields

(11.18)    Σ_{i=1}^{n} A_{n-1-j}(p_i) = (j + 1) a_{n-1-j},    j = 0, 1, ⋯, n - 1.
Taking R_{n-1}(t) = [P(t) - P(x)]/(t - x) yields

(11.19)    Σ_{i=1}^{n} A_{n-1-j}(p_i) / [P'(p_i)(p_i - x)] = -A_{n-1-j}(x)/P(x),    x ≠ p_i,    j = 0, 1, ⋯, n - 1.
12. The inverse transformation and Wronski's aleph function. The explicit formula for A_j(t, P),

(12.1)    A_j(t, P) = Σ_{k=0}^{j} a_{j-k} t^k,    a_0 = 1,    j = 0, 1, ⋯, n,

shows that the matrix of the transformation between the set t^j, j = 0, 1, ⋯, n, and the set A_j(t, P), j = 0, 1, ⋯, n, is given by

(12.2)    𝐚 =  [ 1                                  ]
               [ a_1     1                          ]
               [ a_2     a_1      1                 ]
               [ ⋮                        ⋱         ]
               [ a_n   a_{n-1}  a_{n-2}  ⋯  a_1   1 ]
Observe that the transformation is specified by the n numbers a_1, a_2, ⋯, a_n. Since |𝐚| = 1, the transformation is nonsingular. Hence the associated polynomials form a basis in the space of nth degree polynomials. This is the case whether or not P has multiple zeros. Furthermore the matrix of the inverse transformation is of the same form as (12.2). It is triangular, has the same element along any diagonal, and is determined by just n numbers. In the case that the zeros of P are all simple, the formula for the elements of the inverse matrix may be obtained as follows.
Let

    ω(ν) = Σ_{i=1}^{n} p_i^{n-1+ν} / P'(p_i),    ν = 0, 1, ⋯.

This is the formula for Wronski's aleph function of order ν for the numbers p_1, p_2, ⋯, p_n. The aleph function is sometimes referred to as the homogeneous product symmetric function of order ν (see [18], [6]). It is known that ω(ν) satisfies the difference equation

(12.3)    Σ_{j=0}^{n} a_{n-j} ω(λ + j) = 0,    λ = 0, 1, ⋯.
We show that if P has only simple zeros, then

(12.4)    t^j = Σ_{k=0}^{j} ω(j - k) A_k(t),    j = 0, 1, ⋯, n.

If j ≤ n - 1, this result follows from (11.8). Hence we need prove it only for j = n. We have

    t^n = t·t^{n-1} = Σ_{k=0}^{n-1} ω(n - 1 - k) t A_k(t)
        = Σ_{k=0}^{n-1} ω(n - 1 - k)[A_{k+1}(t) - a_{k+1}]
        = Σ_{k=0}^{n} ω(n - k) A_k(t),

where the last line was obtained using (12.3). This completes the proof of (12.4).

Hence for the case of simple zeros, the inverse of (12.2) is given by

(12.5)    [ 1                                      ]
          [ ω(1)     1                             ]
          [ ω(2)     ω(1)     1                    ]
          [ ⋮                           ⋱          ]
          [ ω(n)   ω(n-1)   ω(n-2)  ⋯  ω(1)    1  ]
We have used the fact that ω(0) = 1, which is a consequence of Euler's formula (3.6). From (11.10) and (12.3),

(12.6)    Σ_{j=0}^{k} a_{k-j} ω(j) = δ_{0,k},    k = 0, 1, ⋯, n.

This equation may be written in operational form as

(12.7)    A_k(E) ω(0) = δ_{0,k},    k = 0, 1, ⋯, n.

Equations (11.10) and (12.3) may be used to recursively calculate the ω(ν), ν = 0, 1, ⋯, in terms of the a_j, j = 0, 1, ⋯, n. The first few ω(ν) are given by

    ω(0) = 1,
    ω(1) = -a_1,
    ω(2) = a_1^2 - a_2,
    ω(3) = -a_1^3 + 2 a_1 a_2 - a_3.
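The recursive calculation can be sketched as follows (the function name and list representation are ours): each new ω(ν) is solved for from the convolution Σ_j a_{ν-j} ω(j) = 0, which is (12.6) for ν ≤ n and the difference equation (12.3) thereafter.

```python
def omega(a, count):
    """First `count` values omega(0), omega(1), ... of Wronski's aleph
    function for P(t) = sum_j a[j] t^(n-j), a[0] = 1."""
    n = len(a) - 1
    w = []
    for nu in range(count):
        if nu == 0:
            w.append(1.0)                     # omega(0) = 1
        else:
            # solve sum_{j} a[nu-j] * w[j] = 0 for the newest term;
            # only a_1, ..., a_n enter, so the sum is truncated at n
            w.append(-sum(a[k] * w[nu - k] for k in range(1, min(nu, n) + 1)))
    return w

# P(t) = t^2 - 3t + 2 = (t - 1)(t - 2): a_1 = -3, a_2 = 2
print(omega([1.0, -3.0, 2.0], 4))    # [1.0, 3.0, 7.0, 15.0]
```

For this example ω(ν) = 2^{ν+1} - 1, and the values agree with the explicit expressions ω(1) = -a_1 = 3 and ω(2) = a_1^2 - a_2 = 7 given above.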
From (11.10) and (12.3) it follows that

(12.8)    [t^n P(1/t)]^{-1} = Σ_{ν=0}^{∞} ω(ν) t^ν.

Equation (12.8) is the basis for methods used to calculate the dominant zero of polynomials. We defer to another paper a general discussion of such methods.

From (12.8) we have the following interpretation. Observe that P(t) may be considered the generating function for a_n, a_{n-1}, ⋯, a_0. On the other hand, [t^n P(1/t)]^{-1} is the generating function for ω(ν), ν = 0, 1, ⋯.
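As one Bernoulli-type instance of such methods (this sketch is ours; the paper defers its own discussion), note that ω(ν) is dominated by the largest zero, so the ratios ω(ν+1)/ω(ν) converge to it when it is unique in modulus:

```python
def dominant_zero(a, iterations=60):
    """Approximate the dominant zero of P(t) = sum_j a[j] t^(n-j), a[0] = 1,
    from ratios omega(nu+1)/omega(nu), generated via the recurrence (12.3)."""
    n = len(a) - 1
    w = [1.0]                                  # omega(0) = 1
    for nu in range(1, iterations + 1):
        w.append(-sum(a[k] * w[nu - k] for k in range(1, min(nu, n) + 1)))
    return w[-1] / w[-2]

# P(t) = t^2 - 3t + 2 has zeros 1 and 2; the dominant zero is 2.
print(dominant_zero([1.0, -3.0, 2.0]))
```

Convergence is geometric with ratio |p_2/p_1|, so it is slow when two zeros are close in modulus and fails when the dominant zero is not unique.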
13. A result concerning powers of zeros. It is clear that if p_i is any zero of P, then an arbitrary power of p_i may be expressed as a linear combination of 1, p_i, ⋯, p_i^{n-1}, with coefficients which are polynomials in the coefficients of P. The problem of calculating the coefficients of this linear combination is a classical one.

We may obtain the solution simply as follows. Let

(13.1)    p_i^ν = Σ_{j=0}^{n-1} β_{νj} p_i^{n-1-j},    ν = 0, 1, ⋯.
Our orthonormality relation immediately yields

(13.2)    β_{νj} = Σ_{i=1}^{n} p_i^ν A_j(p_i) / P'(p_i),

or

(13.3)    β_{νj} = Σ_{r=0}^{j} a_{j-r} Σ_{i=1}^{n} p_i^{ν+r} / P'(p_i) = Σ_{r=0}^{j} a_{j-r} Ω(ν + r).
Since Ω(ν) = ω(ν - n + 1) is a polynomial in the a_j, this is the desired result. Observe that (13.3) may be written in the symbolic form

(13.4)    β_{νj} = A_j(E) Ω(ν).

From (13.1),

    p_i^ν = Σ_{j=0}^{n-1} p_i^{n-1-j} A_j(E) Ω(ν),

and hence

(13.5)    p_i^ν = (φP)[E, p_i] Ω(ν).
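A numerical sketch of (13.1)-(13.3) (the example polynomial and helper names are ours): compute the β_{νj} from the sums Ω(ν + r) and confirm that p_i^ν is reproduced at every zero.

```python
import math

zeros = [1.0, 2.0, 4.0]          # simple zeros of P(t) = (t-1)(t-2)(t-4)
n = len(zeros)
a = [1.0, -7.0, 14.0, -8.0]      # its coefficients a_0, ..., a_n

def dP(p):                        # P'(p_i) = prod over the other zeros
    return math.prod(p - q for q in zeros if q != p)

def Omega(nu):                    # Omega(nu) = sum_i p_i^nu / P'(p_i)
    return sum(p**nu / dP(p) for p in zeros)

nu = 7                            # express p^7 in the basis p^2, p, 1
beta = [sum(a[j - r] * Omega(nu + r) for r in range(j + 1)) for j in range(n)]

for p in zeros:                   # check (13.1)
    lhs = p**nu
    rhs = sum(beta[j] * p**(n - 1 - j) for j in range(n))
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-6)
print("p^7 expressed in the basis 1, p, p^2")
```

Here the β_{νj} come out as integers (β_{7,0} = 2667, β_{7,1} = -7874, β_{7,2} = 5208 for this example), reflecting the fact that they are polynomials in the coefficients of P.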
It is clear that an arbitrary power of p_i may also be expressed as a linear combination of A_0(p_i), A_1(p_i), ⋯, A_{n-1}(p_i). Let

    p_i^ν = Σ_{j=0}^{n-1} α_{νj} A_{n-1-j}(p_i).

Then

(13.6)    α_{νj} = Ω(ν + j),

and

    p_i^ν = Σ_{j=0}^{n-1} A_{n-1-j}(p_i) Ω(ν + j).

In symbolic form, we have

    p_i^ν = Σ_{j=0}^{n-1} A_{n-1-j}(p_i) E^j Ω(ν) = (φP)[E, p_i] Ω(ν),

which is the result obtained in (13.5).
14. Historical survey. The recursion formula

(14.1)    A_j(u) = u A_{j-1}(u) + a_j,    j = 0, 1, ⋯, n,    A_{-1}(u) = 0,

and P(u) = A_n(u) show that a polynomial may be evaluated by "nesting." The recursion formula and

(14.2)    P(t)/(t - u) = P(u)/(t - u) + Σ_{j=0}^{n-1} A_j(u) t^{n-1-j}

lead to synthetic division.
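In code, a single Horner pass produces A_0(u), ⋯, A_n(u) at once; by (14.2), the values A_j(u), j < n, are precisely the coefficients of the quotient on dividing P(t) by t - u, and A_n(u) = P(u) is the remainder (the function name is ours):

```python
def associated_values(a, u):
    """Evaluate A_0(u), ..., A_n(u) by the recursion (14.1).
    A_n(u) = P(u); A_0(u), ..., A_{n-1}(u) are the quotient coefficients
    of synthetic division of P(t) by (t - u), per (14.2)."""
    values, acc = [], 0.0
    for coeff in a:                  # a = (a_0, a_1, ..., a_n), a_0 = 1
        acc = u * acc + coeff        # A_j(u) = u * A_{j-1}(u) + a_j
        values.append(acc)
    return values

# P(t) = t^3 - 6t^2 + 11t - 6 at u = 4:
# quotient t^2 - 2t + 3, remainder P(4) = 6
print(associated_values([1.0, -6.0, 11.0, -6.0], 4.0))   # [1.0, -2.0, 3.0, 6.0]
```

One may verify directly that (t - 4)(t^2 - 2t + 3) + 6 reproduces P(t), which is (14.2) for this example.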
The nested evaluation of polynomials, synthetic division, and a certain method for approximating the roots of polynomial equations are each commonly called Horner's method. Indeed what we have labeled associated polynomials are sometimes referred to as Horner polynomials. These appellations seem incorrect, as in all these matters Horner was anticipated by others.

As pointed out by Ostrowski [22], Newton (1711) observed that the polynomial y^4 - 4y^3 + 5y^2 - 12y + 17 could be evaluated by nesting. Horner (1819) [9] noted that the associated polynomials, introduced by their recursion, were used by Lagrange in his Theory of Functions. As to the method (1819) for solving
polynomial equations, which depends on evaluating the derivatives of a polynomial by repeated synthetic division (see §9), Cajori [3] points out that the same results were obtained by Ruffini in 1804. Cajori recommends naming this the Ruffini-Horner method. (De Morgan [5, pp. 66-67] observes that both men had been anticipated by Chinese mathematicians.) To sum up this section of our history, although Horner's work was at first not appreciated by the Royal Society [5, p. 188], he is today credited with results of which he was not the originator.

Lagrange (1775) [14] introduced associated polynomials by giving them in the explicit form

    A_j(t) = Σ_{k=0}^{j} a_{j-k} t^k.

He uses them to obtain a solution of the homogeneous linear difference equation with constant coefficients in the case that the zeros of the indicial polynomial are distinct. The result is presented without proof or verification, and his generalization to the case that the indicial polynomial has confluent zeros is incorrect. (Lagrange finally corrected this error in 1792.)

The associated polynomials should perhaps be called Lagrange polynomials except for the confusion with the polynomials in Lagrange's interpolation formula which bear his name.
Scattered results on associated polynomials appear in a number of 19th century works. These include Schröder [24], Laguerre [15], Netto [19, p. 97], and Weber [29, pp. 34, 156-163]. (Obreschkoff [20, p. 50] calls the associated polynomials Laguerre polynomials.) Associated polynomials are implicit in Cauchy's [4] treatment of the solution of linear differential equations.

A prescription for the numerical inversion of the Vandermonde matrix is given by Kowalewski [13, pp. 2-6], who obtains the results from Cauchy's interpolatory relations. Kowalewski's prescription is the basis for Algorithm II, but Kowalewski does not concern himself with the numerical details of the prescription. Since then a number of authors have independently rediscovered variations of this method. See Stojaković [25], Macon and Spitzbart [17], Hamming [8, pp. 124-128], Legras [16, pp. 98-104], Parker [23], and Klinger [12]. Gautschi [7] has studied the norm of the inverse Vandermonde matrix. The importance of the Vandermonde inverse for the approximation of linear functionals has been pointed out by Legras [16] and Hamming [8]. See also Bragg [1]. Hamming calls the inverse matrices "universal matrices". None of these authors has studied the numerical aspects of the inversion.

Our contribution to the theory has been a unified approach which has led to many new results. We have taken the generating function as the basic definition and the orthonormality relation as the basic result.

Acknowledgment. It is a pleasure to thank my colleague A. J. Goldstein for a number of illuminating discussions.
Appendix. Notation.
1. P(t) = Σ_{j=0}^{n} a_j t^{n-j}, a_0 = 1.
2. P(t) = Π_{i=1}^{n} (t - p_i).
3. P[t, u_0, u_1, ⋯, u_{i-1}] is the ith divided difference of P(t) with respect to u_0, u_1, ⋯, u_{i-1}.
4. P_i(t) = P[t, u_0, u_1, ⋯, u_{i-1}], P_0(t) = P(t).
5. (φP)[t, u] = Σ_{j=0}^{n-1} A_{n-1-j}(u, P) t^j.
6. δ_{jk} = 0 if j ≠ k, δ_{jk} = 1 if j = k.
7. Ef(λ) = f(λ + 1), Df(λ) = f'(λ).
8. s(j) = Σ_{i=1}^{n} p_i^j.
9. ω(ν) = Σ_{i=1}^{n} p_i^{n-1+ν} / P'(p_i).
10. α_{ij} = P'(p_i) δ_{ij}, β_{ij} = p_i^j, γ_{ij} = A_{n-1-j}(p_i).
REFERENCES
[1] L. H. BRAGG, A matrix approach to numerical integration, Amer. Math. Monthly, 71 (1964), pp. 391-398.
[2] F. BRIOSCHI, Intorno ad alcune questioni d'algebra superiore, Ann. di Sci. Matem. Fis., (1854), pp. 301-312.
[3] F. CAJORI, Horner's method of approximation anticipated by Ruffini, Bull. Amer. Math. Soc., 17 (1911), pp. 409-414.
[4] A. L. CAUCHY, Sur la détermination des constantes arbitraires renfermées dans les intégrales des équations différentielles linéaires, Oeuvres Ser. 2, vol. 7, Académie des Sciences, Paris.
[5] A. DE MORGAN, A Budget of Paradoxes, vol. II, London, 1872.
[6] Encyklop. d. Math. Wissensch., vol. I, part I, §3b, Rationale Funktionen der Wurzeln, pp. 450-479.
[7] W. GAUTSCHI, On inverses of Vandermonde and confluent Vandermonde matrices, Numer. Math., 4 (1962), pp. 117-123; 5 (1963), pp. 425-430.
[8] R. W. HAMMING, Numerical Methods for Scientists and Engineers, McGraw-Hill, New York, 1962.
[9] W. G. HORNER, A new method of solving numerical equations of all orders by continuous approximation, Philos. Trans. Roy. Soc. London, (1819), pp. 308-335.
[10] ————, On algebraic transformation, The Mathematician, 1 (1845), pp. 108-112.
[11] C. JORDAN, Calculus of Finite Differences, Chelsea, New York, 1950.
[12] A. KLINGER, The Vandermonde matrix, Report P3201, RAND Corporation, Santa Monica, California, 1965.
[13] G. KOWALEWSKI, Interpolation und genäherte Quadratur, Teubner, Berlin, 1932.
[14] J. L. LAGRANGE, Sur les suites récurrentes, Nouveaux Mémoires de l'Académie Royale de Berlin, 6 (1775), pp. 183-195.
[15] E. LAGUERRE, Théorie des équations numériques, J. Math. Pures Appl., 3e série, 9 (1883), p. 99.
[16] J. LEGRAS, Précis d'analyse numérique, Dunod, Paris, 1963.
[17] N. MACON AND A. SPITZBART, Inverses of Vandermonde matrices, Amer. Math. Monthly, 65 (1958), pp. 95-100.
[18] M. MARTONE, La Funzione Alef di Hoene Wronski, Catanzaro, 1891.
[19] E. NETTO, Vorlesungen über Algebra, vol. I, Leipzig, 1896.
[20] N. OBRESCHKOFF, Verteilung und Berechnung der Nullstellen reeller Polynome, VEB Deutscher Verlag der Wissenschaften, Berlin, 1963.
[21] F. W. J. OLVER, The evaluation of zeros of high-degree polynomials, Philos. Trans. Roy. Soc. London Ser. A, 244 (1952), pp. 385-415.
[22] A. OSTROWSKI, On two problems in abstract algebra connected with Horner's rule, Memorial to Von Mises, Academic Press, New York, 1954, pp. 40-48.
[23] F. D. PARKER, Inverses of Vandermonde matrices, Amer. Math. Monthly, 71 (1964), pp. 410-411.
[24] E. SCHRÖDER, Über unendlich viele Algorithmen zur Auflösung der Gleichungen, Math. Ann., 2 (1870), pp. 317-365.
[25] M. STOJAKOVIĆ, Solution du problème d'inversion d'une classe importante de matrices, C. R. Acad. Sci. Paris, 246 (1958), pp. 1133-1135.
[26] J. F. TRAUB, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1964.
[27] ————, Generalized sequences with applications to the discrete calculus, Math. Comp., 19 (1965), pp. 177-200.
[28] ————, Solution of linear difference and differential equations, Bull. Amer. Math. Soc., 71 (1965), pp. 538-541.
[29] H. WEBER, Lehrbuch der Algebra, vol. I (English transl. of 1898 ed.), Chelsea, New York.