Clenshaw's method for evaluating certain finite series
D. B. Hunter
Department of Mathematics, The University, Bradford, Yorkshire
It is shown that Clenshaw's method for evaluating finite series involving functions which satisfy
a certain second-order recurrence-relation may be interpreted as a generalisation of the 'synthetic
division' method for evaluating polynomials. The process is then extended to give algorithms
for dividing by a quadratic factor, and for evaluating the derivative of the series. Methods for
obtaining zeros of the function defined by the series are also discussed.
(Received October 1969)
1. Summation of finite series

Clenshaw (1955) has described a method for evaluating a finite series

    S_n(x) = Σ_{i=0}^{n} a_i φ_i(x)    (1)

in which the functions φ_i(x) satisfy a second-order recurrence relation of the form

    φ_{i+1}(x) = α_i φ_i(x) + β_i φ_{i-1}(x),    (2)

where α_i and β_i may be functions of x. In many cases which arise in practice this relation takes the rather less general form

    φ_{i+1}(x) = [λ_i p(x) - μ_i] φ_i(x) - ν_i φ_{i-1}(x),    (3)

where p(x) is a fixed function of x, and λ_i, μ_i and ν_i depend on i only. Relations of the form (3) hold, for example, for the powers φ_i(x) = x^i, the standard sequences of orthogonal polynomials, the trigonometric polynomials cos ix and sin (i + 1)x, the Bessel functions J_i(x), etc.

The main object of the present paper is to show that, for sequences of functions satisfying (3), Clenshaw's method may be regarded as a generalisation of the well-known 'synthetic division' method for evaluating polynomials.

We shall assume that equation (3), with the last term on the right omitted, holds when i = 0; that is,

    φ_1(x) = [λ_0 p(x) - μ_0] φ_0(x).    (4)

This assumption, which holds in many important cases, is not strictly necessary, but it simplifies many details of the argument. The most important exception is probably the case φ_i(x) = J_i(x); but it has been shown by Elliott (1968) that, in any case, series of Bessel functions are not always suitable for summation by Clenshaw's method.

If φ_0(x) and φ_1(x) are related by (4) we may write

    φ_i(x) = ψ_i(x) φ_0(x),    (5)

where ψ_i(x) is a polynomial of degree i in p(x). It is convenient to work with the functions ψ_i(x) instead of φ_i(x). Setting

    s_n(x) = Σ_{i=0}^{n} a_i ψ_i(x),    (6)

we thus have

    S_n(x) = φ_0(x) s_n(x).    (7)

We now define a sequence of polynomials b_i(x) as follows:

    b_i(x) = 0 if i > n,
    b_i(x) = [λ_i p(x) - μ_i] b_{i+1}(x) - ν_{i+1} b_{i+2}(x) + a_i,  for i = n, n - 1, ..., 0.    (8)

Then

    s_n(x) = b_0(x)    (9)

(cf. Clenshaw (1955), Smith (1965)). To prove this, let

    q_{n-1}(y) = Σ_{i=0}^{n-1} λ_i b_{i+1}(x) ψ_i(y).    (10)

Then it follows from (8) that

    [p(y) - p(x)] q_{n-1}(y) + b_0(x) = s_n(y);    (11)

that is, q_{n-1}(y) is the quotient and b_0(x) the remainder on dividing s_n(y) by p(y) - p(x). Equation (9) follows on setting y = x in (11).
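As a sketch of how (8) and (9) translate into code, the following Python is our own illustration, not code from the paper; the names `clenshaw` and `psi_values` are ours. It evaluates s_n from the coefficients λ_i, μ_i, ν_i of relation (3), and can be checked against direct evaluation of the ψ_i by the forward recurrence:

```python
# A minimal sketch of recurrence (8)-(9); function and variable names are ours.

def clenshaw(a, lam, mu, nu, p):
    """Evaluate s_n = sum_{i=0}^{n} a_i psi_i at a point where p = p(x), via
    b_i = [lam_i*p - mu_i]*b_{i+1} - nu_{i+1}*b_{i+2} + a_i (equation (8))."""
    n = len(a) - 1
    b1 = b2 = 0.0                      # b_{i+1} and b_{i+2}
    for i in range(n, -1, -1):
        b0 = (lam[i] * p - mu[i]) * b1 + a[i]
        if i + 1 <= n:                 # nu_{n+1} multiplies b_{n+2} = 0, so skip it
            b0 -= nu[i + 1] * b2
        b1, b2 = b0, b1
    return b1                          # this is b_0 = s_n, equation (9)

def psi_values(m, lam, mu, nu, p):
    """psi_0, ..., psi_m by the forward relations (4) and (3), with psi_0 = 1."""
    psi = [1.0]
    if m >= 1:
        psi.append(lam[0] * p - mu[0])             # equation (4)
    for i in range(1, m):
        psi.append((lam[i] * p - mu[i]) * psi[i] - nu[i] * psi[i - 1])
    return psi

# Chebyshev-type instance (lam_0 = 1, lam_i = 2, mu_i = 0, nu_i = 1), the case
# used in the paper's own examples, where psi_i(p) = T_i(p):
a, lam, mu, nu = [1.0, 2.0, 3.0], [1, 2, 2], [0, 0, 0], [1, 1, 1]
direct = sum(ai * pi for ai, pi in zip(a, psi_values(2, lam, mu, nu, 0.5)))
assert abs(clenshaw(a, lam, mu, nu, 0.5) - direct) < 1e-12
```

The backward loop visits each coefficient once, so the cost is O(n), exactly as for ordinary synthetic division of a polynomial.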
2. Division by a quadratic factor

The above derivation of Clenshaw's method suggests that a similar algorithm might be devised for dividing s_n(x) by a quadratic factor, p² + Ap + B, where A, B are constants, and p represents p(x). Here, we generate numbers b_i by the equations

    b_i = 0 if i > n,
    b_i = a_i - W_i b_{i+1} - X_i b_{i+2} - Y_i b_{i+3} - Z_i b_{i+4},  for i = n, n - 1, ..., 0,    (12)

where

    W_i = Aλ_i + μ_i + λ_i μ_{i-1}/λ_{i-1},
    X_i = Bλ_i λ_{i+1} + Aμ_i λ_{i+1} + μ_i² λ_{i+1}/λ_i + ν_{i+1} + ν_i λ_{i+1}/λ_{i-1},
    Y_i = ν_{i+1} λ_{i+2} [A + μ_{i+1}/λ_{i+1} + μ_i/λ_i],    (13)
    Z_i = ν_{i+1} ν_{i+2} λ_{i+3}/λ_{i+1}.

When i = 0 those terms in (13) which involve negative suffixes should be omitted. It can now be shown that

    s_n(x) = (p² + Ap + B) q_{n-2}(x) + R_1 p + R_0,    (14)

where R_1 = λ_0 b_1, R_0 = b_0 + Aλ_0 b_1, and

    q_{n-2}(x) = Σ_{i=0}^{n-2} λ_i λ_{i+1} b_{i+2} ψ_i(x).    (15)

These equations will be used later (Section 5).

The Computer Journal Volume 13 Number 4 November 1970
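The quadratic division (12)-(14) can be sketched in Python for the Chebyshev-type case λ_0 = 1, λ_i = 2, μ_i = 0, ν_i = 1 (so ψ_i(p) = T_i(p)), for which equations (13) reduce to W_0 = A, W_i = 2A; X_0 = 2B + 1, X_1 = 4B + 3, X_i = 4B + 2 (i > 1); Y_i = 2A; Z_i = 1, the values listed in Section 5. This is our own sketch, not code from the paper:

```python
# Our sketch of equation (12) in the Chebyshev-type case, returning the b_i
# together with the remainder coefficients R_1 = lam_0*b_1 and
# R_0 = b_0 + A*lam_0*b_1 of equation (14).

def divide_quadratic(a, A, B):
    n = len(a) - 1
    b = [0.0] * (n + 5)                # b_{n+1}, ..., b_{n+4} = 0
    for i in range(n, -1, -1):
        W = A if i == 0 else 2 * A
        X = 2 * B + 1 if i == 0 else (4 * B + 3 if i == 1 else 4 * B + 2)
        Y, Z = 2 * A, 1.0
        b[i] = a[i] - W * b[i + 1] - X * b[i + 2] - Y * b[i + 3] - Z * b[i + 4]
    return b, b[1], b[0] + A * b[1]    # b, R_1, R_0  (lam_0 = 1 here)

# If p is a root of p^2 + A*p + B, then by (14) s_n(p) = R_1*p + R_0.
# The factor p^2 - p + 0.21 has roots 0.7 and 0.3:
b, R1, R0 = divide_quadratic([1.0, 2.0, 3.0, 4.0, 5.0], -1.0, 0.21)
```

At the roots p = 0.7 and p = 0.3 one can check that the direct sum Σ a_i T_i(p) agrees with R_1 p + R_0, which is the content of (14).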
3. Evaluation of derivatives

Smith (1965) extended Clenshaw's method to give a means of evaluating the derivative of S_n(x). We now show that Smith's formula can be interpreted quite naturally in terms of synthetic division. To do this, we divide q_{n-1}(y) of equation (10) by p(y) - p(x); that is, we set

    c_i(x) = 0 if i ≥ n,
    c_i(x) = [λ_i p(x) - μ_i] c_{i+1}(x) - ν_{i+1} c_{i+2}(x) + λ_i b_{i+1}(x),  for i = n - 1, n - 2, ..., 0.    (16)

Then

    q_{n-1}(y) = [p(y) - p(x)] r_{n-2}(y) + c_0(x),    (17)

where

    r_{n-2}(y) = Σ_{i=0}^{n-2} λ_i c_{i+1}(x) ψ_i(y).    (18)

Inserting (17) in (11), differentiating with respect to y, and setting y = x, we deduce that

    s'_n(x) = c_0(x) p'(x).    (19)

This is Smith's formula, in a slightly more general form. Higher-order derivatives of s_n(x) may be evaluated by further synthetic divisions.

4. Solution of the equation s_n(x) = 0

The above arguments suggest that the well-known iterative methods based on synthetic division for solving algebraic equations may be generalised to the solution of the equation

    s_n(x) = 0.    (20)

In considering such a generalisation, we shall denote by x_l the lth approximation to a root of (20). In practice, it is convenient to work in terms of p = p(x), and accordingly we set

    p_l = p(x_l).    (21)

Perhaps the best-known iterative method is that of Newton and Raphson. In the notation being used, this gives for the relation between two successive iterates the equation

    p_{l+1} = p_l - b_0(x_l)/c_0(x_l).    (22)

Other standard methods based on synthetic division by a linear factor can be similarly generalised.

It is desirable to say something about the error in a root x of (20) as given by this method. An error analysis for Clenshaw's method for evaluating s_n(x) has been given by Elliott (1968). We shall consider here only the effect on x of errors in the coefficients a_i. If ε_i denotes the error in a_i, it can be shown, as in Ralston (1965), Section 8.13, that the corresponding error e in x is given approximately by the formula

    e = -[Σ_{i=0}^{n} ε_i ψ_i(x)] / s'_n(x),    (23)

s'_n(x) being given by (19). Of course, this expression is not valid if s'_n(x) is zero or very small, and the other limitations mentioned by Ralston also apply.

A possible application of (22) is to the determination of the zeros of a function whose Chebyshev expansion is known. To illustrate the method, we shall estimate the smallest positive zero, j_01, of the Bessel function J_0(x). One form of Chebyshev series for J_0(x) is

    J_0(x) = Σ_{i=0}^{∞} a_i T_{2i}(x/4),  (|x| ≤ 4),

the values a_i for i ≤ 7 being listed in Table 1. Here

    p(x) = x²/8 - 1,
Table 1
Synthetic division by p + 0.28

 i        a_i             b_i              c_i
 7   -0.0000 0006   -0.0000 0006
 6    0.0000 0289    0.0000 0292 36   -0.0000 0012
 5   -0.0000 9911   -0.0001 0068 72    0.0000 0591 44
 4    0.0023 1142    0.0023 6488 12   -0.0002 0456 65
 3   -0.0332 5272   -0.0344 7636 63    0.0048 3840 52
 2    0.2489 8370    0.2659 2558 39   -0.0714 5767 30
 1   -0.6652 2301   -0.7796 6497 07    0.5670 2905 95
 0    0.0501 2708    0.0025 0768 79   -0.8669 7543 44
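The Section 4 iteration on the Table 1 coefficients can be sketched in Python (our own code; the paper gives no programs, and the names are ours). Here b_0 comes from (8), c_0 from (16), and the update is (22), with λ_0 = 1, λ_i = 2, μ_i = 0, ν_i = 1 and p = x²/8 - 1:

```python
# Our sketch of the Section 4 computation: Newton's method (22) in p for the
# smallest zero of the Chebyshev series of J_0, using the a_i of Table 1.

A_COEFF = [0.05012708, -0.66522301, 0.24898370, -0.03325272,
           0.00231142, -0.00009911, 0.00000289, -0.00000006]

def b0_c0(a, p):
    """b_0 = s_n (equations (8)-(9)) and c_0 (equation (16)); by (19) the
    derivative is s_n'(x) = c_0 * p'(x)."""
    n = len(a) - 1
    b = [0.0] * (n + 3)
    for i in range(n, -1, -1):
        lam = 1.0 if i == 0 else 2.0
        b[i] = lam * p * b[i + 1] - b[i + 2] + a[i]
    c = [0.0] * (n + 2)
    for i in range(n - 1, -1, -1):
        lam = 1.0 if i == 0 else 2.0
        c[i] = lam * p * c[i + 1] - c[i + 2] + lam * b[i + 1]
    return b[0], c[0]

p = -0.28                    # from the initial guess x_1 = 2.4, p = x^2/8 - 1
for _ in range(6):
    b0, c0 = b0_c0(A_COEFF, p)
    p -= b0 / c0             # equation (22)
x_root = (8.0 * (1.0 + p)) ** 0.5
```

The first step reproduces p_2 = -0.2771 0754 45 of the text, and x_root agrees with j_01 = 2.4048 2557 to the accuracy allowed by the 8-decimal coefficients.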
    λ_i = 1 if i = 0, λ_i = 2 if i > 0;  μ_i = 0;  ν_i = 1.

We take the value x_1 = 2.4 as initial approximation, so that p_1 = x_1²/8 - 1 = -0.28. The calculation for this value of p is set out in Table 1. Using the values from this table in (22) we get

    p_2 = -0.28 + 0.0025 0768 79/0.8669 7543 44
        = -0.2771 0754 45.

A second iteration leads to p_3 = -0.2771 0174 94, and this is found to give a value s_n(x_3) of about 10^-10. The corresponding approximation for j_01 is, to 8 decimals,

    j_01 = [8(1 + p_3)]^{1/2} = 2.4048 2557.

From (11) we deduce further that

    J_0(x) = b_0(x_3) + [p(x) - p_3] Σ_{i=0}^{n-1} λ_i b_{i+1}(x_3) T_{2i}(x/4),  (|x| ≤ 4).

The coefficients b_i(x_3) are listed in Table 2.

Table 2
Values of b_i(x_3)

 i       b_i(x_3)
 7   -0.0000 0006
 6    0.0000 0292
 5   -0.0001 0067
 4    0.0023 6429
 3   -0.0344 6235
 2    0.2657 1856
 1   -0.7780 2282
 0    0.0000 0000

We shall now use (23) to obtain bounds on the error e in the value of j_01 just calculated. Here, |ε_i| ≤ 0.5 × 10^-8 and p'(x) = x/4. Inserting in (23), we obtain the result

    |e| < 0.0000 0005.

In fact, the value obtained is correct to within about 1 unit in the 8th decimal place.

5. Determination of quadratic factors

Since s_n(x) is a polynomial in p, it is appropriate to consider a method for determining quadratic factors, similar to Bairstow's method for polynomials. In Section 2, an algorithm for dividing s_n(x) by a quadratic factor of the form p² + Ap + B was derived. We shall now make small changes δA, δB in A and B respectively, in an attempt to reduce the remainder terms R_1 and R_0 to zero. To a first approximation, the effect of these changes is to replace R_1 and R_0 by R_1 + (∂R_1/∂A)δA + (∂R_1/∂B)δB and R_0 + (∂R_0/∂A)δA + (∂R_0/∂B)δB respectively. Thus the required values of δA and δB are the solutions of the simultaneous equations

    R_1 + (∂R_1/∂A)δA + (∂R_1/∂B)δB = 0,
    R_0 + (∂R_0/∂A)δA + (∂R_0/∂B)δB = 0.    (24)

To determine the partial derivatives in (24), we shall divide the quotient q_{n-2}(x) of (15) once again by p² + Ap + B. To do this, we generate quantities c_i by the relations

    c_i = 0 if i > n - 2,
    c_i = λ_i λ_{i+1} b_{i+2} - W_i c_{i+1} - X_i c_{i+2} - Y_i c_{i+3} - Z_i c_{i+4},  (i = n - 2, n - 3, ..., 0).    (25)

Then

    q_{n-2}(x) = (p² + Ap + B) r_{n-4}(x) + R_1' p + R_0',    (26)

where

    r_{n-4}(x) = Σ_{i=0}^{n-4} λ_i λ_{i+1} c_{i+2} ψ_i(x)    (27)

and R_1' = λ_0 c_1, R_0' = c_0 + Aλ_0 c_1. Substituting (26) in (14), differentiating with respect to A and B, and evaluating the result at a zero of p² + Ap + B, we deduce that

    ∂R_1/∂A = AR_1' - R_0',   ∂R_0/∂A = BR_1',
    ∂R_1/∂B = -R_1',          ∂R_0/∂B = -R_0',    (28)

provided p² + Ap + B is not a perfect square. The details of the argument are similar to those in Isaacson and Keller (1966), Chapter 3, Section 4.4.

The error e in the calculated value of x, due to errors ε_i in the coefficients a_i, is again given approximately by (23). An expression for s'_n(x) may be obtained by inserting (26) in (14) and differentiating with respect to x. Since p² + Ap + B = 0 and R_1 = R_0 = 0, this gives the result

    s'_n(x) = [2p(x) + A][c_0 + λ_0 c_1(p(x) + A)] p'(x).    (29)

To illustrate the method, we shall estimate the first two positive zeros, j_11 and j_12, of J_1(x), using the Chebyshev expansion given by Clenshaw (1954), viz.,

    J_1(x) = x Σ_{i=0}^{∞} a_i T_i(x²/100),  (|x| ≤ 10).

The coefficients a_i are listed in Table 3. Here, p = x²/50 - 1, and equations (13) give

    W_i = A if i = 0, 2A if i > 0;
    X_i = 2B + 1 if i = 0, 4B + 3 if i = 1, 4B + 2 if i > 1;
    Y_i = 2A;
    Z_i = 1.
Table 3
Synthetic division by p² + 0.72p + 0.01

 i        a_i             b_i               c_i
11   -0.0000 0001   -0.0000 0001
10    0.0000 0022    0.0000 0023 44
 9   -0.0000 0346   -0.0000 0377 71   -0.0000 0004
 8    0.0000 4410    0.0000 4907 52    0.0000 0099 52
 7   -0.0004 3706   -0.0005 0035 05   -0.0000 1645 99
 6    0.0032 3503    0.0038 6062 59    0.0002 1803 04
 5   -0.0169 2388   -0.0215 2935 75   -0.0022 8318 07
 4    0.0577 9053    0.0815 8855 75    0.0183 0820 88
 3   -0.1148 8405   -0.1935 1063 43   -0.1081 2106 59
 2    0.1216 7941    0.2610 3571 50    0.4477 6956 88
 1   -0.1155 7791    0.0008 4482 34   -1.1142 2331 59
 0    0.0694 2435   -0.0003 7359 63    1.0049 7338 34

We shall use p² + 0.72p + 0.01 as first trial factor. The calculation is set out in Table 3. Using equations (14) and (28), we deduce that

    R_1 = 0.0008 4482 34,    ∂R_1/∂A = -1.0049 7338 34,    ∂R_1/∂B = 1.1142 2331 59,
    R_0 = 0.0002 3467 65,    ∂R_0/∂A = -0.0111 4223 32,    ∂R_0/∂B = -0.2027 3259 60.

On setting up and solving equations (24), we get δA = 0.0020 0205, δB = 0.0010 4753, leading to a new trial factor p² + 0.7220 0205p + 0.0110 4753. A second iteration gives a factor p² + 0.7219 9137 45p + 0.0110 4097 62, which divides s_n(x), leaving remainders which are negligible to the accuracy required. The zeros of this last factor are p_1 = -0.7063 6058 14 and p_2 = -0.0156 3079 32, and these in turn give the approximations

    j_11 ≈ 3.8317 0602,    j_12 ≈ 7.0155 8693.

In this example, the derivatives s'_n(j_11) and s'_n(j_12) given by (29) are rather small:

    s'_n(j_11) = -0.1051,    s'_n(j_12) = 0.0428.

Consequently, the error bounds given by (23) are large; in fact, for j_11, we have |e| < 0.0000 0035, while, for j_12, |e| < 0.0000 0076.
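The Section 5 computation can be sketched end-to-end in Python (our own code, not the paper's; the names are ours). It applies (12), (25) and (28) in the Chebyshev-type case of this section, solves (24) for δA and δB, and iterates from the first trial factor p² + 0.72p + 0.01:

```python
# Our sketch of the Bairstow-type iteration of Section 5 on the Table 3
# coefficients of J_1(x) = x * sum a_i T_i(x^2/100), with p = x^2/50 - 1.

A_COEFF = [0.06942435, -0.11557791, 0.12167941, -0.11488405,
           0.05779053, -0.01692388, 0.00323503, -0.00043706,
           0.00004410, -0.00000346, 0.00000022, -0.00000001]

def wxyz(i, A, B):
    # Equations (13) in this case: W_0 = A, W_i = 2A; X_0 = 2B+1, X_1 = 4B+3,
    # X_i = 4B+2 (i > 1); Y_i = 2A; Z_i = 1.
    W = A if i == 0 else 2 * A
    X = 2 * B + 1 if i == 0 else (4 * B + 3 if i == 1 else 4 * B + 2)
    return W, X, 2 * A, 1.0

def remainders_and_partials(a, A, B):
    n = len(a) - 1
    b = [0.0] * (n + 5)
    for i in range(n, -1, -1):                       # equation (12)
        W, X, Y, Z = wxyz(i, A, B)
        b[i] = a[i] - W * b[i + 1] - X * b[i + 2] - Y * b[i + 3] - Z * b[i + 4]
    c = [0.0] * (n + 5)
    for i in range(n - 2, -1, -1):                   # equation (25)
        W, X, Y, Z = wxyz(i, A, B)
        lam_i = 1.0 if i == 0 else 2.0
        c[i] = 2.0 * lam_i * b[i + 2] - W * c[i + 1] - X * c[i + 2] \
               - Y * c[i + 3] - Z * c[i + 4]
    R1, R0 = b[1], b[0] + A * b[1]                   # remainders in (14)
    R1p, R0p = c[1], c[0] + A * c[1]                 # R_1', R_0' in (26)
    return (R1, R0,
            A * R1p - R0p, -R1p,                     # dR1/dA, dR1/dB, eq. (28)
            B * R1p, -R0p)                           # dR0/dA, dR0/dB

A, B = 0.72, 0.01                                    # first trial factor
for _ in range(8):
    R1, R0, r1a, r1b, r0a, r0b = remainders_and_partials(A_COEFF, A, B)
    det = r1a * r0b - r1b * r0a
    A += (-R1 * r0b + R0 * r1b) / det                # solve (24) by Cramer's rule
    B += (-R0 * r1a + R1 * r0a) / det
disc = (A * A - 4 * B) ** 0.5
p1, p2 = (-A - disc) / 2, (-A + disc) / 2
j11, j12 = (50 * (1 + p1)) ** 0.5, (50 * (1 + p2)) ** 0.5
```

The first correction reproduces δA = 0.0020 0205, δB = 0.0010 4753 of the text, and the converged factor yields j_11 ≈ 3.8317 060 and j_12 ≈ 7.0155 869.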
References

CLENSHAW, C. W. (1954). Polynomial Approximations to Elementary Functions, MTAC, Vol. 8, p. 143.
CLENSHAW, C. W. (1955). A Note on the Summation of Chebyshev Series, MTAC, Vol. 9, p. 118.
ELLIOTT, D. (1968). Error Analysis of an Algorithm for Summing Certain Finite Series, J. Australian Math. Soc., Vol. 8, p. 213.
ISAACSON, E., and KELLER, H. B. (1966). Analysis of Numerical Methods, New York: Wiley.
RALSTON, A. (1965). A First Course in Numerical Analysis, New York: McGraw-Hill.
SMITH, F. J. (1965). An Algorithm for Summing Orthogonal Polynomial Series and their Derivatives with Applications to Curve-fitting and Interpolation, Math. Comp., Vol. 19, p. 33.
Book review
Méthodes Numériques, by Jean Kuntzmann, 1969; 192 pages. (Hermann, Paris, 36 Francs)
This workman-like monograph is a simple introduction to the important topic of numerical and computational methods. It is one of the popular Hermann scientific series, written by a Professor at Grenoble University, and includes computer-orientated material suitable for understanding and using digital computers in both the numerical and non-numerical sense.
The first three chapters deal with some basic material concerning the processing of information inside the computer. Topics covered include the processing of symbols by algorithmic techniques; lists, flags, pointers and stacks; number ranges and the binary system; the syntax and semantics of algebraic expressions; Backus normal form, Polish notation, symbol strings and postfixed notation; compiling techniques; rounding errors in computer arithmetic; and number systems, norms and interval arithmetic.
Chapter four gives a brief but detailed survey of computer methods for the solution of linear equations by Gaussian elimination with pivoting, condition numbers of systems of equations, polynomial interpolation (Lagrange and Newton), the theory of differences, derivatives and integration, quadrature formulae, and simple notions of differential equations. Chapter five deals with the principal sources of error propagation in numerical computation.
The final three chapters contain the basic principles of such miscellaneous topics as the checking and supervision of numerical results, well-known practical instruments of calculation and their usage, and finally the construction of tables, their role, function and practical uses. Each chapter concludes with a number of exercises and their solutions to aid the reader.
This well written and attractively presented work fully deserves attention from computer scientists, mathematicians and engineers, if only for the brief and refreshing way the subject matter is treated. It is written in French, and so is only suitable for those fluent in French or those of us who wish to take their medicine the hard way.
D. J. EVANS (Sheffield)