On the Number of Multiplications for the Evaluation of a Polynomial and Some of Its Derivatives

MARY SHAW AND J. F. TRAUB

Carnegie-Mellon University, Pittsburgh, Pennsylvania

Copyright © 1973, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted provided that ACM's copyright notice is given and that reference is made to the publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.

This work was supported in part by the National Science Foundation under grant GJ-32111 and the Office of Naval Research under contract N00014-67-A-0314-0010, NR 044-422. Part of the work was done while J. F. Traub was a Visiting Scientist at the National Center for Atmospheric Research, which is sponsored by the National Science Foundation.

Authors' present address: Department of Computer Science, Carnegie-Mellon University, Schenley Park, Pittsburgh, PA 15213.
ABSTRACT. A family of new algorithms is given for evaluating the first m derivatives of a polynomial. In particular, it is shown that all the derivatives may be evaluated in 3n - 2 multiplications and divisions. The best previous result required n(n + 1)/2 multiplications. Some optimality results are presented.

KEY WORDS AND PHRASES: analysis of algorithms, computational complexity, Horner's rule, arithmetic operations, optimality, polynomial evaluation, zero shifting

CR CATEGORIES: 5.12, 5.25
1. Introduction
Some of the recent work in computational complexity has dealt with the number of arithmetic operations needed to evaluate a polynomial or a polynomial and its first derivative [2, 5, 6]. In this paper we consider the evaluation of a polynomial and its first m derivatives and, in particular, the calculation of all the derivatives.
Let P denote a polynomial of degree n. Define a normalized derivative as P^{(i)}(x)/i!. The normalized derivatives are the quantities commonly needed for applications (Section 7). When we refer to derivatives in this paper we always mean normalized derivatives. We consider P itself to be the zeroth derivative P^{(0)}.
All arithmetic operations are counted. A multiplication or division is denoted by M/D.
Preconditioning is not permitted.
Prior to the new results reported here, the best algorithm for computing all the derivatives was the iterated use of Horner's rule (synthetic division), which requires n(n + 1)/2 multiplications and the same number of additions. (The iterated Horner's rule is defined in Example II of Section 5.)
We give a new algorithm which computes all the derivatives of a polynomial in 3n - 2 M/D. Unlike many algorithms which reduce the number of multiplications required to calculate some function, this algorithm does not increase the number of additions. If n is odd, we give an algorithm (Example VI of Section 5) which computes all the derivatives in 3n - 3 M/D.
Both these algorithms belong to a family of algorithms for computing the first m derivatives. All the algorithms in the family require the same number of additions as the iterated Horner's rule for the first m derivatives.
Since n is the optimal number of multiplications for evaluating P alone, 3n - 2 M/D is within a constant factor of optimality. Although we have no optimality results for derivatives, we can report related optimality results. We show that the calculation of either a_i x^i, i = 1, ..., n, or x^i P^{(i)}(x)/i!, i = 0, ..., n, in 2n - 1 multiplications is optimal.
We summarize the contents of the paper. In Section 2 we present the algorithm for computing all derivatives in 3n - 2 M/D. A family of splitting algorithms for computing the first m derivatives and arithmetic operation counts for these algorithms are given in the next two sections. In Section 5 we obtain, as special cases, three known algorithms, two new algorithms, and one algorithm very similar to a recently discovered technique. Optimality results are presented in Section 6. We close by giving applications of these algorithms.
2. An Algorithm for Calculating All Derivatives in a Linear Number of Multiplications
In this section we exhibit an algorithm for computing a polynomial and all its derivatives at a point x in a number of M/D which is linear in the degree of the polynomial. Let

    P(x) = Σ_{i=0}^{n} a_i x^i.

Algorithm 1

    T_i^{-1} = a_{n-i-1} x^{n-i-1},        i = 0, 1, ..., n - 1
    T_j^j = a_n x^n,                       j = 0, 1, ..., n
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j,     j = 0, 1, ..., n - 1,  i = j + 1, ..., n
This is a special case of a one-parameter family of algorithms presented in the next section. We show there that

    T_n^j = (P^{(j)}(x)/j!) x^j,           j = 0, 1, ..., n.

Since x^2, ..., x^n may be obtained in n - 1 multiplications while a_1 x, ..., a_n x^n may be obtained in n multiplications, the starting values T_i^{-1} and T_j^j may be obtained in 2n - 1 multiplications. We show in Section 6 that this is optimal. The derivatives P^{(j)}(x)/j!, j = 1, ..., n - 1, may then be obtained in n - 1 divisions. Since P^{(n)}(x)/n! = a_n, it need not be computed. Thus this algorithm yields all the normalized derivatives in 3n - 2 M/D.
Consider the matrix of T_i^j, with i and j indicating the row and column indices. The derivatives may be calculated in increasing order by calculating the matrix by columns, or in decreasing order by calculating the matrix by diagonals. These two variations have identical roundoff properties since they produce the same chains of intermediate results. No rounding error analysis has yet been performed.
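For concreteness, a minimal C rendering of Algorithm 1 follows. It is our sketch, not part of the original paper: the routine name, array layout, and the assumptions x != 0 and n >= 1 are ours, and for brevity it also performs the j = 0 and j = n divisions that the count above omits. Columns j - 1 and j of the T array are held in two work vectors and the inner loop is additions only.

    #include <stdlib.h>

    /* Sketch of Algorithm 1.  On return deriv[j] = P^(j)(x)/j!, j = 0..n,
       for P(x) = a[0] + a[1]*x + ... + a[n]*x^n.  Assumes x != 0 and n >= 1. */
    void all_derivatives(const double *a, int n, double x, double *deriv)
    {
        double *xp   = malloc((n + 1) * sizeof *xp);   /* xp[i] = x^i             */
        double *prev = malloc((n + 1) * sizeof *prev); /* column j - 1 of the T's */
        double *cur  = malloc((n + 1) * sizeof *cur);  /* column j of the T's     */
        double *t, top;
        int i, j;

        xp[0] = 1.0;
        xp[1] = x;
        for (i = 2; i <= n; i++)                       /* x^2, ..., x^n: n - 1 multiplications */
            xp[i] = x * xp[i - 1];
        for (i = 0; i < n; i++)                        /* T_i^{-1} = a_{n-i-1} x^{n-i-1}       */
            prev[i] = a[n - 1 - i] * xp[n - 1 - i];
        top = a[n] * xp[n];                            /* a_n x^n, the diagonal value T_j^j    */
        for (j = 0; j <= n; j++) {
            cur[j] = top;
            for (i = j + 1; i <= n; i++)               /* the recurrence uses additions only   */
                cur[i] = prev[i - 1] + cur[i - 1];
            deriv[j] = cur[n] / xp[j];                 /* T_n^j = (P^(j)(x)/j!) x^j            */
            t = prev; prev = cur; cur = t;             /* reuse the two work vectors           */
        }
        free(xp); free(prev); free(cur);
    }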
3. A Family of Splitting Algorithms
We study a family of algorithms for computing the first m derivatives of a polynomial. Assume without loss of generality that n + 1 = pq. Define

    s(j) = (n - j) mod q,                  j = 0, 1, ..., n.

Algorithm

    T_i^{-1} = a_{n-i-1} x^{s(i+1)},       i = 0, 1, ..., n - 1                          (3.1)
    T_j^j = a_n x^{q-1},                   j = 0, 1, ..., m                              (3.2)
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j x^{1 + s(i-j) - s(i-j-1)},
                                           j = 0, 1, ..., m,  i = j + 1, ..., n          (3.3)
We show that this recurrence may be used to compute the derivatives. The key is the following theorem. Let C(k, j) denote the binomial coefficients.

THEOREM 1. For the values of i and j for which the T_i^j are defined,

    T_i^j = x^{s(i-j)} Σ_{k=j}^{i} C(k, j) a_{n-i+k} x^{k-j}.                            (3.4)

PROOF. In the triangle where they are defined, the T_i^j are uniquely determined by the starting values (3.1) and (3.2) and the recurrence relation (3.3). We need only verify that the T_i^j given by (3.4) satisfy the starting conditions and the recurrence.
The starting conditions are satisfied since

    T_i^{-1} = x^{s(i+1)} Σ_{k=-1}^{i} C(k, -1) a_{n-i+k} x^{k+1} = a_{n-i-1} x^{s(i+1)},

since C(-1, -1) = 1 and C(k, -1) = 0 for k >= 0, and

    T_j^j = x^{s(0)} C(j, j) a_n = a_n x^{q-1}.

The recurrence is satisfied since, using C(k, j) = C(k - 1, j - 1) + C(k - 1, j),

    T_i^j - T_{i-1}^{j-1} = x^{s(i-j)} Σ_{k=j+1}^{i} C(k - 1, j) a_{n-i+k} x^{k-j} = x^{1 + s(i-j) - s(i-j-1)} T_{i-1}^j,

which completes the proof.
COROLLARY.

    T_n^j = (P^{(j)}(x)/j!) x^{j mod q},   j = 0, 1, ..., m.

Using the properties of s(j), the recurrence may be written as

    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j,         (i - j) mod q ≠ 0
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j x^q,     (i - j) mod q = 0

for j = 0, ..., m, i = j + 1, ..., n.
The following pseudo-ALGOL program implements the algorithm. Assume still that n + 1 = pq. Further, let m = rq + s, where r and s are obtained by division of m by q. In the interest of clarity, an (n + 2) by (m + 2) element array is used to develop the derivatives. It is clearly possible to rewrite the program to use only about n storage locations.

begin
  [x is the point of evaluation (x ≠ 0); a(i) are the coefficients taken in reverse order, a(i) = a_{n-i}]
  [x(i) will be x^i; T(i, j) will be T_i^j]
  x(0) ← 1
  x(1) ← x
  for i = 2, 3, ..., q
    x(i) ← x * x(i - 1)
  [powers of x require q - 1 multiplications]
  for i = 0, q, 2q, ..., (p - 1) * q
    begin
      for k = 0, 1, ..., q - 2
        T(i + k - 1, -1) ← a(i + k) * x(q - k - 1)
      [inner loop requires q - 1 multiplications each time]
      T(i + q - 2, -1) ← a(i + q - 1)
      [multiplication by coefficients requires a total of p * (q - 1) multiplications]
    end
  [entire initialization requires (p + 1)(q - 1) multiplications]
  for j = 0, 1, ..., m
    begin
      T(j, j) ← T(j - 1, j - 1)
      for i = j + 1, j + 2, ..., n
        if (i - j) mod q = 0 then
          T(i, j) ← T(i - 1, j - 1) + T(i - 1, j) * x(q)
        else
          T(i, j) ← T(i - 1, j - 1) + T(i - 1, j)
    end
  [recurrence requires (m + 1)(n - m/2) additions and (m + 1)(p - r - 1) + qr(r + 1)/2 multiplications]
  [T(n, j) is now (P^(j)(x)/j!) x^(j mod q)]
  for j = 0, q, 2q, ..., (r - 1) * q
    for i = 1, 2, ..., q - 1
      T(n, j + i) ← T(n, j + i)/x(i)
  for j = 1, 2, ..., s
    T(n, r * q + j) ← T(n, r * q + j)/x(j)
  [m - r divisions used to obtain the normalized derivatives]
end
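A compact C rendering of the same splitting scheme is sketched below. It is our illustration rather than the paper's program: the function name and array handling are ours, and it assumes x != 0, 0 <= m <= n, and that q divides n + 1 (so that n + 1 = pq holds without padding).

    #include <stdlib.h>

    /* Splitting algorithm with parameter q.  On return deriv[j] = P^(j)(x)/j!,
       j = 0..m, for P(x) = a[0] + a[1]*x + ... + a[n]*x^n. */
    void first_m_derivatives(const double *a, int n, double x, int m, int q, double *deriv)
    {
        double *xp   = malloc((q + 1) * sizeof *xp);   /* xp[i] = x^i, i = 0..q   */
        double *prev = malloc((n + 1) * sizeof *prev); /* column j - 1 of the T's */
        double *cur  = malloc((n + 1) * sizeof *cur);  /* column j of the T's     */
        double *t, seed;
        int i, j;

        xp[0] = 1.0;
        for (i = 1; i <= q; i++)                       /* only x, x^2, ..., x^q are formed          */
            xp[i] = x * xp[i - 1];
        for (i = 0; i < n; i++)                        /* T_i^{-1} = a_{n-i-1} x^{(n-i-1) mod q}    */
            prev[i] = a[n - 1 - i] * xp[(n - 1 - i) % q];
        seed = a[n] * xp[q - 1];                       /* T_j^j = a_n x^{q-1}                       */
        for (j = 0; j <= m; j++) {
            cur[j] = seed;
            for (i = j + 1; i <= n; i++)               /* multiply by x^q only when (i-j) mod q = 0 */
                cur[i] = prev[i - 1] + ((i - j) % q == 0 ? cur[i - 1] * xp[q] : cur[i - 1]);
            deriv[j] = cur[n] / xp[j % q];             /* T_n^j = (P^(j)(x)/j!) x^{j mod q}         */
            t = prev; prev = cur; cur = t;
        }
        free(xp); free(prev); free(cur);
    }

With q = 1 this reduces to the iterated Horner's rule of Example II below, and with q = n + 1 to Algorithm 1 of Section 2.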
4. Arithmetic Operation Counts
Recall that we wish to calculate the first m derivatives of a polynomial of degree n. As above, let n + 1 = pq and m = rq + s, where r and s are obtained by division of m by q. It can be shown that the family of algorithms uses (m + 1)(n - m/2) additions, independent of the parameter q. It uses the following number of M/D:

    (p + 1)(q - 1)                       multiplications for initialization,
    (m + 1)(p - r - 1) + qr(r + 1)/2     multiplications for the recurrence,
    m - r                                divisions to calculate P^{(j)}(x)/j!.

Let f_{m,n}(q) denote the total number of M/D required to calculate the first m derivatives of a polynomial of degree n if the splitting q is used. Then

    f_{m,n}(q) = n - 1 + q + m(n + 1)/q - (m + 2)r + qr(r + 1)/2.                        (4.1)
The algorithm can be slightly improved in two cases. If all n derivatives are required, P^{(n)}(x)/n! = a_n and it is not necessary to compute it from x^{n mod q}(P^{(n)}(x)/n!), so one division is saved. If q = n + 1, it is not necessary to compute x^{n+1}, so one multiplication is saved. These special cases are not reflected in the function f_{m,n}(q).
Given m and n, the best choice of q can be determined by minimizing f_{m,n}(q) subject to the constraint that q be an integer. Analysis of best splittings as a function of m and n will be reported in a future paper.
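As a small illustration (ours, not part of the paper), formula (4.1) can be tabulated directly. The routine below evaluates f_{m,n}(q) and scans the divisors q of n + 1 for the smallest count; restricting to divisors keeps the assumption n + 1 = pq exact.

    /* f_{m,n}(q) of (4.1); assumes q divides n + 1 and 0 <= m <= n. */
    long f_mn(long n, long m, long q)
    {
        long r = m / q;                                /* m = r*q + s */
        return n - 1 + q + m * (n + 1) / q - (m + 2) * r + q * r * (r + 1) / 2;
    }

    /* Splitting q (dividing n + 1) that minimizes the M/D count of (4.1). */
    long best_q(long n, long m)
    {
        long q, best = 1;
        for (q = 2; q <= n + 1; q++)
            if ((n + 1) % q == 0 && f_mn(n, m, q) < f_mn(n, m, best))
                best = q;
        return best;
    }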
5. Special Cases
By choosing particular values of m and q, we specialize the algorithm and operation count formula of the last two sections. The first three algorithms are known and are included for comparison. The fourth algorithm is very similar to an algorithm discovered by Munro [5]. The last two algorithms are new.
Example I.  m = 0, q = 1

    T_i^{-1} = a_{n-i-1},                  i = 0, 1, ..., n - 1
    T_0^0 = a_n
    T_i^0 = T_{i-1}^{-1} + T_{i-1}^0 x,    i = 1, ..., n
    P(x) = T_n^0

This is Horner's rule and requires n additions and n multiplications.
Example II.  m = n, q = 1

    T_i^{-1} = a_{n-i-1},                  i = 0, 1, ..., n - 1
    T_j^j = a_n,                           j = 0, 1, ..., n
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j x,   j = 0, 1, ..., n - 1,  i = j + 1, ..., n
    P^{(j)}(x)/j! = T_n^j,                 j = 0, 1, ..., n

This is the iterated Horner's rule and requires n(n + 1)/2 additions and n(n + 1)/2 multiplications.
Example III.  m = 0, q = n + 1

    T_i^{-1} = a_{n-i-1} x^{n-i-1},        i = 0, 1, ..., n - 1
    T_0^0 = a_n x^n
    T_i^0 = T_{i-1}^{-1} + T_{i-1}^0,      i = 1, ..., n
    P(x) = T_n^0

This is the "naive" way of evaluating a polynomial by calculating all the monomial terms first. The general specification of the algorithm requires n additions and 2n multiplications; however, in this case x^{n+1} is never used, so 2n - 1 multiplications suffice.
Example IV.  m = 1, q = (n + 1)^{1/2} = σ

    T_i^{-1} = a_{n-i-1} x^{s(i+1)},       i = 0, 1, ..., n - 1
    T_j^j = a_n x^{σ-1},                   j = 0, 1
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j x^{1 + s(i-j) - s(i-j-1)},   j = 0, 1,  i = j + 1, ..., n
The recurrence may also be written as

    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j,         (i - j) mod σ ≠ 0
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j x^σ,     (i - j) mod σ = 0

This is a new algorithm for computing P(x) and P'(x). It requires 2n - 1 additions and n - 1 + 2(n + 1)^{1/2} M/D.

For simplicity of exposition, we have assumed that n + 1 is the square of an integer. If this is not the case, q may be taken as approximately (n + 1)^{1/2}.
The calculation of P(x) and P'(x) by the iterated Horner's rule requires 2n - 1 additions and 2n - 1 multiplications. Munro [5] gives an algorithm for computing P and P' which he asserts requires 2n + 2n^{1/2} additions and n + 2n^{1/2} multiplications. Our algorithm uses about the same number of multiplications but fewer additions. Our algorithm is very similar to Munro's, but we have not performed a detailed analysis of the differences.
Example V.  m ≤ n, q = n + 1

    T_i^{-1} = a_{n-i-1} x^{n-i-1},        i = 0, 1, ..., n - 1
    T_j^j = a_n x^n,                       j = 0, 1, ..., m
    T_i^j = T_{i-1}^{j-1} + T_{i-1}^j,     j = 0, 1, ..., m,  i = j + 1, ..., n
    P^{(j)}(x)/j! = x^{-j} T_n^j,          j = 0, 1, ..., m

This algorithm requires (m + 1)(n - m/2) additions. The predicted number of M/D is f_{m,n}(n + 1) = 2n + m; but x^{n+1} is never used, so 2n + m - 1 M/D will suffice. Further, if m = n, then P^{(n)}(x)/n! = a_n, so we can obtain all n derivatives in n(n + 1)/2 additions and 3n - 2 M/D. This special case was presented in Section 2.
Example VI.  Let n be odd, q = (n + 1)/2. This algorithm calculates all derivatives in n(n + 1)/2 additions and 3n - 3 M/D.
6. Optimality
We shall prove that the calculation of the x^j P^{(j)}(x)/j!, j = 0, ..., n, in 2n - 1 multiplications, as done by the algorithm of Section 2, optimizes the number of multiplications. Before proving this result, we summarize what is known on optimality with respect to additions, multiplications, and total arithmetic operations.
If m = 0, Horner's rule (Example I of Section 5) optimizes both additions and multiplications. Furthermore, Borodin [1] has shown that it is the only algorithm which optimizes additions and multiplications.
If m = 1, the new algorithm given in Example IV of Section 5 requires n - 1 + 2(n + 1)^{1/2} M/D. Munro's algorithm requires about the same number of M/D. These are the best algorithms known for P and P'.
If m = n, the new algorithms given in Examples V and VI, which require 3n - 2 M/D and 3n - 3 M/D (for n odd), respectively, are the best algorithms known.
All the splittings require (m + 1)(n - m/2) additions. For m = 0, this reduces to n additions, which is known to be optimal. For m = 1, this reduces to 2n - 1 additions, which Kirkpatrick [4] has shown to be optimal.
If m = 0, 2n arithmetic operations are optimal. If m = n, Borodin [3] has shown that a polynomial and all its derivatives can be evaluated in O(n log^3 n) arithmetic operations. Our algorithms do this calculation in n^2/2 + O(n) arithmetic operations. Since n^2/2 < n log^3 n for n less than about 10^3, Borodin's result pertains only asymptotically. The optimal number of arithmetic operations for both n small and n large is open.
We now consider the optimality of the evaluation of x^j P^{(j)}(x)/j!, j = 0, 1, ..., n, with respect to multiplications. First we shall require a result which is interesting in its own right. We use the notation d(V) to denote the degree of the polynomial V.
THEOREM 2. Given a number x and n arbitrary numbers a_1, ..., a_n, the computation of a_1 x, ..., a_n x^n using multiplications and additions requires at least 2n - 1 multiplications. Furthermore, if only 2n - 1 multiplications are used, it is impossible to evaluate a_1 x, ..., a_n x^n, Q(x), for any polynomial Q with d(Q) > n.
PROOF. The proof is by induction on n. The theorem is certainly true for n = 1. Assume the theorem has been proven for n = L. Let σ_L(x) denote the set a_1 x, ..., a_L x^L. We shall first show that 2L + 1 multiplications are required to evaluate σ_{L+1}(x). By the inductive hypothesis, 2L - 1 multiplications are not enough. We now show that 2L multiplications are not enough. The most general form involving a_{L+1} which can be built without multiplications involving a_{L+1} is K a_{L+1} + U(x), where K is an integer and U(x) is a polynomial which is independent of a_{L+1}. Let

    T(x) = [K_1 a_{L+1} + U_1(x)][K_2 a_{L+1} + U_2(x)]                                  (6.1)

be the first multiplication in the chain of operations leading to a_{L+1} x^{L+1}. Then

    T(x) = C + a_{L+1} Z(x) + W(x),                                                      (6.2)

where

    C = K_1 K_2 a_{L+1}^2,   Z(x) = K_1 U_2(x) + K_2 U_1(x),   W(x) = U_1(x) U_2(x).
If d(Z) ≥ L + 1, then by the inductive assumption the evaluation of σ_L(x), Z(x), T(x) requires 2L + 1 multiplications. If d(Z) < L + 1, then the evaluation of σ_L(x), T(x) requires 2L multiplications and another multiplication is required to evaluate a_{L+1} x^{L+1}. Hence we have shown that the evaluation of σ_{L+1}(x) requires at least 2L + 1 multiplications.
Suppose now that exactly 2L + 1 multiplications are used. The induction is complete if we show it is then impossible to evaluate σ_{L+1}(x), Q(x) for any polynomial Q with d(Q) > L + 1. We use the notation (6.1) and (6.2). We consider three cases depending on d(Z).

Case 1.  d(Z) > L + 1. Then the evaluation of σ_L(x), Z(x), T(x) requires 2L + 1 multiplications. T(x) has a term in a_{L+1} x^μ, μ > L + 1. To extract a_{L+1} x^{L+1} from T(x) requires another polynomial depending on a_{L+1} x^μ, which is impossible.
Case 2.  d(Z) = L + 1. Then the evaluation of σ_L(x), Z(x), T(x) requires 2L + 1 multiplications. If d(W) ≤ L + 1, no polynomial of degree greater than L + 1 has been produced. If d(W) = μ > L + 1, another polynomial of degree μ must be available and this is impossible.
Case 3.  d(Z) = μ < L + 1. Then the evaluation of σ_L(x), T(x) requires 2L multiplications. We may write

    T(x) = a_{L+1} x^μ + V(x) + W(x),

where d(V) < μ. If d(W) < L + 1, it is impossible to evaluate a polynomial of degree greater than L + 1 and extract a_{L+1} x^{L+1} with just one more multiplication. If d(W) > L + 1, it is impossible to produce a_{L+1} x^{L+1} and extract it with just one more multiplication.

Thus with 2L + 1 multiplications it is impossible to evaluate a_1 x, ..., a_{L+1} x^{L+1}, Q(x) for any polynomial Q with d(Q) > L + 1. This completes the proof of the theorem.
THEOREM 3. The evaluation of x^j P^{(j)}(x)/j!, j = 0, 1, ..., n, requires at least 2n - 1 multiplications.
PROOF. By Theorem 2, the calculation of a_j x^j, j = 0, 1, ..., n, requires 2n - 1 multiplications. The x^j P^{(j)}(x)/j! can be expressed in terms of the a_j x^j as a triangular linear system with integer coefficients, with unity multiplying the a_j x^j. This system can be solved for the a_j x^j as linear combinations of the x^j P^{(j)}(x)/j! with integer coefficients. If the x^j P^{(j)}(x)/j! could be calculated with less than 2n - 1 multiplications, then so could the a_j x^j, which contradicts Theorem 2 and completes the proof.
7. Applications
Polynomial derivatives often occur in applications of Taylor series, which, of course, involve normalized derivatives.
As an example, we consider the problem of shifting the zeros of a polynomial, given its coefficients. Let P(t) = Σ_{i=0}^{n} a_i t^i have zeros λ_1, ..., λ_n. Then

    Q(t) = P(t + x) = Σ_{i=0}^{n} (P^{(i)}(x)/i!) t^i

has zeros λ_1 - x, ..., λ_n - x.
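In code, the shift is a single call to the all-derivatives routine sketched after Section 2 (our illustration; the helper name is ours): the coefficients of Q are exactly the normalized derivatives.

    /* Coefficients of Q(t) = P(t + x): qcoef[i] = P^(i)(x)/i!, i = 0..n. */
    void shift_zeros(const double *a, int n, double x, double *qcoef)
    {
        all_derivatives(a, n, x, qcoef);
    }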
Shifting of zeros is a key ingredient in what is called Horner's method for solving polynomial equations. Horner's name is attached to three algorithms: nested evaluation of polynomials, calculation of all derivatives by the iterated "Horner's rule," and a method for solving polynomial equations. Ironically, he was anticipated in each of these by other workers (Traub [9, pp. 298-299]).
Horner's method for solving polynomial equations is a digit-at-a-time technique for calculating polynomial zeros. Although Horner's method is no longer competitive with modern zero-finding algorithms, zero shifting is still a useful technique. Stewart [7] has performed an analysis of the effect of rounding errors in the iterated Horner's rule and concludes that zeros near the shift are not unduly perturbed. No rounding error analysis has yet been performed on the new algorithms of this paper.
Finally we observe that sometimes it is the x^j P^{(j)}(x)/j! rather than the derivatives which are needed. Let v_j = x^j P^{(j)}(x)/j!. Then any one-point iteration (Traub [8, Ch. 5]) can be written in terms of the v_j. Thus Newton iteration is

    φ = x[1 - (v_0/v_1)]

and the third-order iteration of Euler type is

    ψ = x[1 - (v_0/v_1) - (v_0/v_1)^2 (v_2/v_1)].
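As a final small sketch (ours), one step of each iteration written directly in terms of the v_j, which are the T_n^j of Algorithm 1 and therefore need no division by powers of x:

    /* One Newton step and one third-order (Euler-type) step from
       v[j] = x^j P^(j)(x)/j!, j = 0, 1, 2.  Assumes v[1] != 0. */
    double newton_step(double x, const double *v)
    {
        return x * (1.0 - v[0] / v[1]);
    }

    double euler_step(double x, const double *v)
    {
        double u = v[0] / v[1];
        return x * (1.0 - u - u * u * (v[2] / v[1]));
    }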
ACKNOWLEDGMENT. We would like to thank A. Cline, National Center for Atmospheric Research, for the ideas underlying the proof of Theorem 2.
REFERENCES

1. BORODIN, A. Horner's rule is uniquely optimal. In Theory of Machines and Computations, Kohavi and Paz, Eds. (to appear).
2. BORODIN, A. Computational complexity—theory and practice. In Currents in the Theory of Computing, A. Aho, Ed., Prentice-Hall, Englewood Cliffs, N.J. (to appear).
3. BORODIN, A. Private communication.
4. KIRKPATRICK, D. On the additions necessary to compute certain functions. M.Sc. Th., Dep. of Computer Sci., U. of Toronto, 1971.
5. MUNRO, J. I. Some results in the study of algorithms. Tech. Rep. 32, Dep. of Computer Sci., U. of Toronto, 1971.
6. PATERSON, M. S., AND STOCKMEYER, L. Bounds on the evaluation time for rational polynomials. IEEE Conf. Rec., 12th Ann. Symp. on Switching and Automata Theory, 1971, pp. 140-143.
7. STEWART, G. W. Error analysis of the algorithm for shifting the zeros of a polynomial by synthetic division. Math. Comp. 25 (1971), 135-139.
8. TRAUB, J. F. Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, N.J., 1964.
9. TRAUB, J. F. Associated polynomials and uniform methods for the solution of linear problems. SIAM Rev. 8 (1966), 277-301.
RECEIVED AUGUST 1972; REVISED DECEMBER 1972