Numerical Analysis 2, lecture 3: Finding roots III
(textbook sections 4.6–8)

• polynomials
• square root
• systems of equations


A polynomial of degree n has n zeros (counting multiplicities)

polynomial: $p(x) = a_n x^n + \cdots + a_1 x + a_0$

remainder theorem: $p(x) = (x - c)\,q(x) + r \;\Rightarrow\; p(c) = r$

factor theorem: $p(x) = (x - c)\,q(x) \;\Leftrightarrow\; p(c) = 0$

division (Horner) algorithm:
$a_n x^n + \cdots + a_1 x + a_0 = (x - c)\bigl(b_{n-1} x^{n-1} + \cdots + b_1 x + b_0\bigr) + r$
where
$b_{n-1} = a_n,\quad b_{n-2} = a_{n-1} + c\,b_{n-1},\quad \ldots,\quad b_0 = a_1 + c\,b_1,\quad r = a_0 + c\,b_0$

example: $p(x) = x^3 - 1.1x^2 + 2x - 2$, $p(1) = ?$

    1 |  1   -1.1    2     -2
      |        1   -0.1    1.9
      ------------------------
         1   -0.1   1.9   -0.1

$p(x) = (x - 1)(x^2 - 0.1x + 1.9) - 0.1$, so $p(1) = -0.1$
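To make the recurrence concrete, here is a minimal MATLAB sketch of the Horner scheme (the variable names a, b, c are mine, not the textbook's); it reproduces the tableau above:

% Horner division of p by (x - c): b ends up holding b_{n-1}, ..., b_0, r
a = [1 -1.1 2 -2];          % coefficients of p, highest degree first
c = 1;
b = a(1);                   % b_{n-1} = a_n
for k = 2:length(a)
    b(k) = a(k) + c*b(k-1); % next coefficient of the recurrence; the last entry is r
end
b                           % prints 1  -0.1  1.9  -0.1, i.e. q(x) = x^2 - 0.1x + 1.9 and r = p(1) = -0.1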
Newton’s method with deflation can find all the zeros of a polynomial

The division algorithm gives $p'(c)$ too:
$p(x) = (x - c)\,q(x) + r \;\Rightarrow\; p'(x) = q(x) + (x - c)\,q'(x) \;\Rightarrow\; p'(c) = q(c)$

example: $p(x) = x^3 - 1.1x^2 + 2x - 2$, $p'(1) = ?$

    1 |  1   -1.1    2     -2
      |        1   -0.1    1.9
      ------------------------
         1   -0.1   1.9   -0.1
    1 |  1   -0.1   1.9
      |        1    0.9
      -----------------
         1    0.9   2.8

$p(x) = (x - 1)\bigl((x - 1)(x + 0.9) + 2.8\bigr) - 0.1$, so $p'(1) = 2.8$

function [q,r] = deflate(p,c)
for k=2:length(p)
    p(k) = p(k) + c*p(k-1);
end
q = p(1:end-1);
r = p(end);

function z = polyroot(p,z)
if nargin<2, z=rand+i*rand; end
for it = 1:20
    [q,f] = deflate(p,z);
    [q,df] = deflate(q,z);
    dz = f/df;
    z = z - dz;
    if abs(dz)<1e-8, break, end
end

example:

>> [q,r] = deflate([1 -1.1 2 -2],1)
q =
     1   -0.1   1.9
r =
    -0.1
>> [q,r] = deflate(q,1)
q =
     1    0.9
r =
     2.8

>> p=[1 -1.1 2 -2];
>> z=polyroot(p)
z =
   1.0349 - 6.7842e-29i
>> p=deflate(p,z); z=polyroot(p)
z =
   0.032563 + 1.3898i
>> p=deflate(p,z); z=polyroot(p)
z =
   0.032563 - 1.3898i

this algorithm still needs work:
• initial guess
• zeros of real polynomials are real or come in complex-conjugate pairs (see the sketch after this list)
• deflation error accumulation
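On the second point, a common refinement (my own illustration, not from the slides) is to deflate a real polynomial by a complex zero and its conjugate together, so the deflated coefficients stay real up to rounding; on the third point, a standard safeguard is to remove zeros in order of increasing magnitude.

% sketch, using the lecture's deflate/polyroot from above
p = [1 -1.1 2 -2];
z = polyroot(p);                          % zero found from a random complex start
if abs(imag(z)) > 1e-8
    p = deflate(deflate(p, z), conj(z));  % remove the conjugate pair at once
    p = real(p);                          % discard rounding-level imaginary parts
else
    p = deflate(p, real(z));              % essentially real zero: deflate once
end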
Polynomial zeros can also be found using eigenvalue codes

companion matrix:
$x^n + a_{n-1}x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0 = \det(xI - C),\qquad
C = \begin{bmatrix} -a_{n-1} & -a_{n-2} & \cdots & -a_1 & -a_0 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}$

QR iteration computes all eigenvalues simultaneously

>> p = [1 -1.1 2 -2];
>> roots(p)
ans =
   0.032563 + 1.3898i
   0.032563 - 1.3898i
   1.0349
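As a concrete check, this short sketch (mine, not from the slides) builds the companion matrix of the example polynomial and feeds it to eig, which is essentially what roots does:

% companion matrix of p, assuming p(1) is nonzero
p = [1 -1.1 2 -2];
n = length(p) - 1;
C = [-p(2:end)/p(1); eye(n-1), zeros(n-1,1)];
eig(C)          % the same three zeros as roots(p), possibly in a different order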
Polynomial roots can be sensitive to small changes in the coefficients

Perturb the polynomial: $f_\epsilon(x) = f(x) + \epsilon(x)$. For a simple root $x_*$ of $f$,
$x_\epsilon \;\approx\; x_* - \dfrac{\epsilon(x_*)}{f'(x_*)}$ as $\epsilon \to 0$.

example: $f(x) = (x - 1)(x - 2)\cdots(x - 12)$

>> p = poly(1:12)
p =
     1   -78   2717   ...            (the coefficient of $x^5$ is -206070150)

perturb it by 1: $\epsilon(x) = x^5$

>> q=p; q(8)=q(8)+1;
>> [roots(p) roots(q)]
ans =
   12        11.994
   11        11.042
   10         9.8362
    9         9.2456
    8         7.7013
    7         7.2419
    6         5.9164
    5         5.0269
    4         3.9958
    3         3.0003
    2         2
    1         1

first-order estimate for the root at $r = 9$:
$f'(r) = \prod_{\substack{j=1 \\ j \ne r}}^{12} (r - j)$, so
$x_\epsilon \;\approx\; 9 - \dfrac{\epsilon(9)}{f'(9)} = 9 - \dfrac{9^5}{-\,8!\,3!} = 9.2441$
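The same estimate in MATLAB (a one-off check of the arithmetic above, not from the slides):

r  = 9;
df = prod(r - [1:8, 10:12]);   % f'(9): product of (9 - j) over j ~= 9, equal to -8!*3!
r - r^5/df                     % first-order estimate 9.2441; roots(q) above gave 9.2456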
Square roots can be computed by Newton’s method

$f(x) = x^2 - a \;\Rightarrow\; x_{k+1} = x_k - \dfrac{x_k^2 - a}{2x_k} = \dfrac{1}{2}\left(x_k + \dfrac{a}{x_k}\right)$

>> a=3; x(1)=2; for k=1:3, x(k+1)=(x(k)+a/x(k))/2; end
>> e = x-sqrt(3)
e =
   0.26794919243112
   0.01794919243112
   0.00009204957398
   0.00000000244585

theorem (p. 88)
Let $a > 0$, $x_0 > 0$, and $\epsilon_k := x_k - \sqrt{a}$ with $\epsilon_0 \ne 0$. Then:
• $\epsilon_{k+1} = \dfrac{\epsilon_k^2}{2x_k}$
• $0 < \epsilon_{k+1} < \tfrac12 \epsilon_k$
• if $a > 1$ and $x_0 \ge 1$ then $\epsilon_{k+1} < \tfrac12 \epsilon_k^2$


3 iterations suffice to compute $\sqrt{A}$ with 23 correct bits

shift exponent:
$A = 1.b_1 b_2 \cdots b_t \times 2^e = \underbrace{c_1 c_2 . d_1 d_2 \cdots}_{a \,\in\, [1,4)} \times 2^{2k}$

table lookup:
$x_0 = x_0(c_1 c_2 . d_1 d_2)$, with
$c_1 c_2 . d_1 d_2 \in \{01.00,\ 01.01,\ 01.10,\ 01.11,\ 10.00,\ 10.01,\ 10.10,\ 10.11,\ 11.00,\ 11.01,\ 11.10,\ 11.11\}$

initial error:
$\epsilon_0 = \sqrt{a + h} - \sqrt{a} = \dfrac{h}{2\sqrt{\xi}} < \dfrac{h}{2} < 2^{-4}$

error bounds:
$\epsilon_1 \le \tfrac12 \epsilon_0^2 < 2^{-9},\qquad \epsilon_2 \le \tfrac12 \epsilon_1^2 < 2^{-19},\qquad \epsilon_3 \le \tfrac12 \epsilon_2^2 < 2^{-39}$
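A minimal MATLAB sketch of the whole scheme (my own illustration; the exponent handling and the simulated table lookup are assumptions, not the textbook's code):

A = 7.25;                    % any positive number
[m, e] = log2(A);            % A = m*2^e with m in [0.5, 1)
if mod(e, 2) == 0            % make the remaining exponent even
    a = 4*m;  e = e - 2;     % a in [2, 4)
else
    a = 2*m;  e = e - 1;     % a in [1, 2)
end
x = sqrt(floor(4*a)/4);      % stand-in for the table indexed by the leading bits c1c2.d1d2
for k = 1:3                  % three Newton steps
    x = (x + a/x)/2;
end
sqrtA = x * 2^(e/2);         % undo the exponent shift
[sqrtA, sqrt(A)]             % agree to well beyond 23 bits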
Finding zeros of multivariable functions can be difficult

$\vec f(\vec x) = \vec 0$, e.g. two equations in two unknowns:
$f_1(x_1, x_2) = 0,\qquad f_2(x_1, x_2) = 0$

double root? no root? unknown number of intersections


Fixed point iteration can be used to find multivariable function zeros

$x^{[k+1]} = \Phi(x^{[k]})$

theorem (p. 94)
If $\displaystyle \max_{\|x - x_*\| \le \delta} \left\| \frac{\partial \Phi}{\partial x}(x) \right\| \le m < 1$ and $\|x^{[0]} - x_*\| \le \delta$, then
a) $\|x^{[k]} - x_*\| \le \delta$ for $k = 1, 2, \ldots$,
b) $x^{[k]} \to x_*$,
c) $x_*$ is the only fixed point of $\Phi$ in $\{x : \|x_* - x\| \le \delta\}$.

example (p. 95)
$16x_1^2 - 9x_2^2 - 36 = 0,\quad 4x_1^2 + 9x_2^2 - 36 = 0
\;\Rightarrow\;
\Phi(x) = \begin{pmatrix} \tfrac14\sqrt{36 + 9x_2^2} \\ \tfrac13\sqrt{36 - 4x_1^2} \end{pmatrix}$

$x_* = \begin{pmatrix} \sqrt{3.6} \\ \sqrt{2.4} \end{pmatrix} = \begin{pmatrix} 1.8974 \\ 1.5492 \end{pmatrix},\qquad
\left\| \frac{\partial \Phi}{\partial x}(x_*) \right\| = \left\| \begin{pmatrix} 0 & 0.4593 \\ -0.5443 & 0 \end{pmatrix} \right\| = 0.5443$

$x^{[0]} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},\quad
x^{[1]} = \begin{pmatrix} 1.6771 \\ 1.8856 \end{pmatrix},\quad
x^{[2]} = \begin{pmatrix} 2.0616 \\ 1.6583 \end{pmatrix},\quad
x^{[3]} = \begin{pmatrix} 1.9486 \\ 1.4530 \end{pmatrix},\ \ldots$
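A few lines of MATLAB reproduce these iterates (a sketch of mine, not part of the slides):

Phi = @(x) [sqrt(36 + 9*x(2)^2)/4;
            sqrt(36 - 4*x(1)^2)/3];
x = [1; 1];            % x[0]
for k = 1:3
    x = Phi(x)         % prints x[1], x[2], x[3] as above
end

Since $\|\partial\Phi/\partial x\| \approx 0.54$ near $x_*$, the error shrinks only by about that factor per step (linear convergence), which is what motivates the Newton iteration on the next slide.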
Newton-Raphson converges quadratically near a simple root

iteration:
$x_{k+1} = x_k - \bigl(J(x_k)\bigr)^{-1} f(x_k)$, where the Jacobian matrix is $J_{ij} = \dfrac{\partial f_i}{\partial x_j}$

example (p. 93)
$4x_1^2 + 9x_2^2 - 36 = 0,\quad 16x_1^2 - 9x_2^2 - 36 = 0
\;\Rightarrow\;
J(x) = \begin{pmatrix} 8x_1 & 18x_2 \\ 32x_1 & -18x_2 \end{pmatrix}$

>> f = @(x) [4,9;16,-9]*x.^2-36;
>> J = @(x) [8,18;32,-18]*diag(x);
>> x = [1;1];
>> for k=1:3, x = x - J(x)\f(x), end
x =
    2.3000
    1.7000
x =
    1.9326
    1.5559
x =
    1.8977
    1.5492

don’t compute inv(J) !!!  (the backslash above solves $J(x)\,\Delta x = f(x)$ instead)
what happened, what’s next

• division (Horner) algorithm for p(c), p'(c), deflation
• polynomial zeros by Newton or eigenvalue method
• three Newton iterations suffice for $\sqrt{a}$
• multivariable fixed-point iteration & Newton iteration

Next: splines (§5.9–11)