Lecture 05: Finding Roots of Equations - Open Methods

AML702 Applied Computational Methods

Open Methods
• Fixed Point Iteration and its convergence
• Newton-Raphson method
• Secant method and Modified Secant method
• Brent’s method combining a bracketing method with an open method
• Matlab fzero examples
Bracketing Vs Open Methods
Simple Fixed Point Iteration
• Rearrange the function so that x is on the left side of the equation:

    f(x) = 0  ⇒  x = g(x)

    xk = g(xk−1),  x0 given, k = 1, 2, ...

• Bracketing methods are always convergent.
• Fixed-point methods may sometimes diverge, depending on the starting point (initial guess) and on how the function behaves.
• The approximate error at step k is εa = |(xk − xk−1)/xk| × 100%.
Fixed pt. Iteration - Example
f(x) = x² − x − 2

g(x) = x² − 2,  or
g(x) = √(x + 2),  or
g(x) = 1 + 2/x
Fixed Point - Algorithm
Algorithm - Fixed Point Iteration Scheme
Given an equation f(x) = 0
Convert f(x) = 0 into the form x = g(x)
Let the initial guess be x0
Do
    xi+1 = g(xi)
while (neither of the convergence criteria C1 or C2 is met)

C1. Fixing a priori the total number of iterations N.
C2. Testing whether | xi+1 − xi | (where i is the iteration number) is less than some tolerance limit ε, fixed a priori.
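The loop above can be sketched in Python. This is a minimal sketch; the function name `fixed_point` and the default values of `tol` and `max_iter` are my own choices, not from the slides:

```python
def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Fixed-point iteration x_{i+1} = g(x_i).

    Stops when |x_{i+1} - x_i| < tol (criterion C2) or after
    max_iter iterations (criterion C1).
    """
    x = x0
    for i in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, i
        x = x_new
    return x, max_iter

# Solve x**4 - x - 10 = 0 via g(x) = (x + 10)**0.25, starting at x0 = 2.0
root, iters = fixed_point(lambda x: (x + 10) ** 0.25, 2.0)
print(round(root, 5))  # ≈ 1.85558
```

Returning the iteration count alongside the root makes it easy to see which criterion terminated the loop.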
Fixed pt. Iteration - Convergence

x = g(x) can be expressed as a pair of equations:
    y1 = x
    y2 = g(x)   (component equations)
Plot them separately: the root lies where the two curves intersect.
Fixed-point iteration converges if

    |g′(x)| < 1

i.e., the magnitude of the slope of g(x) must be smaller than the slope of the line y = x.

• When the method converges, the error is roughly proportional to, or less than, the error of the previous step; therefore the method is called “linearly convergent.”
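The condition |g′(x)| < 1 can be checked numerically. The helper `converges_locally` below is a hypothetical name of my own, using a central-difference slope estimate:

```python
def converges_locally(g, x, h=1e-6):
    """Estimate g'(x) by central differences and test |g'(x)| < 1."""
    slope = (g(x + h) - g(x - h)) / (2 * h)
    return abs(slope) < 1

# For f(x) = x**2 - x - 2 (root x = 2), the rearrangement g(x) = 1 + 2/x
# has g'(x) = -2/x**2, so |g'(2)| = 0.5 < 1 and the iteration converges,
# while g(x) = x**2 - 2 has |g'(2)| = 4 > 1 and diverges.
print(converges_locally(lambda x: 1 + 2 / x, 2.0))   # → True
print(converges_locally(lambda x: x ** 2 - 2, 2.0))  # → False
```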
The Fixed Point with examples
Find a root of x⁴ − x − 10 = 0.

Consider g1(x) = 10/(x³ − 1) and the fixed-point iterative scheme
    xi+1 = 10/(xi³ − 1),  i = 0, 1, 2, . . .
Let the initial guess x0 be 2.0.

i    0     1      2      3      4        5          6     7
xi   2.0   1.429  5.214  0.071  −10.004  −9.978E−3  −10   −9.99E−3

So the iterative process with g1 has gone into an infinite loop, oscillating between ≈ −10 and ≈ −0.01 without converging.
The Fixed Point with examples
Consider another function g2(x) = (x + 10)^(1/4) and the fixed-point iterative scheme
    xi+1 = (xi + 10)^(1/4),  i = 0, 1, 2, . . .
Let the initial guess x0 be 1.0, 2.0, or 4.0.

i             0     1        2        3        4        5        6
xi (x0=1.0)   1.0   1.82116  1.85424  1.85553  1.85558  1.85558
xi (x0=2.0)   2.0   1.861    1.8558   1.85559  1.85558  1.85558
xi (x0=4.0)   4.0   1.93434  1.85866  1.8557   1.85559  1.85558  1.85558

That is, for g2 the iterative process converges to 1.85558 very quickly with any of these initial guesses.
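The contrasting behaviour of g1 and g2 can be reproduced in a few lines. This is a sketch; the helper name `iterate` is my own:

```python
def iterate(g, x0, n):
    """Return the iterates x0, g(x0), ..., up to n applications of g."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

g1 = lambda x: 10 / (x ** 3 - 1)   # |g1'| > 1 near the root: diverges
g2 = lambda x: (x + 10) ** 0.25    # |g2'| ≈ 0.04 near the root: converges

print([round(x, 3) for x in iterate(g1, 2.0, 4)])   # oscillates away
print([round(x, 5) for x in iterate(g2, 2.0, 4)])   # settles near 1.85558
```

Both schemes target the same root of x⁴ − x − 10 = 0; only the size of |g′| near the root differs, which is exactly the convergence condition from the previous slide.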
Newton-Raphson Method
This is the most commonly used root-finding method.

If the current estimate of the root is xi, a tangent can be extended from the point [xi, f(xi)]. The point where this tangent crosses the x axis usually represents an improved estimate of the root.
Newton – Raphson : Algorithm
Algorithm – Newton-Raphson Method

Given an equation f(x) = 0
Let the initial guess be x0
Do
    xi+1 = xi − f(xi)/f′(xi),  i = 0, 1, 2, 3, . . .
while (neither of the convergence criteria C1 or C2 is met)

C1. Fixing a priori the total number of iterations N.
C2. Testing whether | xi+1 − xi | (where i is the iteration number) is less than some tolerance limit ε, fixed a priori.
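The algorithm can be sketched in Python. The function name `newton_raphson` and the stopping defaults are my own; the example reuses the equation x⁴ − x − 10 = 0 from the fixed-point slides:

```python
def newton_raphson(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i).

    Stops when |x_{i+1} - x_i| < tol (criterion C2) or after
    max_iter iterations (criterion C1).
    """
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x**4 - x - 10 = 0 with f'(x) = 4x**3 - 1, starting at x0 = 2.0
root = newton_raphson(lambda x: x ** 4 - x - 10,
                      lambda x: 4 * x ** 3 - 1,
                      2.0)
print(round(root, 5))  # ≈ 1.85558
```

Note that the method needs the derivative f′ explicitly; the secant method listed in the overview replaces it with a finite-difference estimate.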
Newton-Raphson: Example

Find a root of 3x + sin(x) − exp(x) = 0.
Let the initial guess x0 be 2.0.

f(x) = 3x + sin(x) − exp(x)
f ′(x) = 3 + cos(x) − exp(x)

i    0    1        2        3        4
xi   2.0  1.90016  1.89013  1.89003  1.89003

The approximate error at each step is εa = |(xi+1 − xi)/xi+1| × 100%. Based on these iterates we can estimate the rate of convergence; near a simple root, Newton-Raphson converges quadratically (order 2).
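The order of convergence can be estimated numerically from successive errors eᵢ = |xᵢ − x*| via p ≈ log(e₂/e₁) / log(e₁/e₀). A short sketch (the variable names are my own) for the example above:

```python
import math

f = lambda x: 3 * x + math.sin(x) - math.exp(x)
df = lambda x: 3 + math.cos(x) - math.exp(x)

# Generate Newton-Raphson iterates from x0 = 2.0; six steps is more than
# enough for this problem to converge to machine precision.
xs = [2.0]
for _ in range(6):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
root = xs[-1]

# Estimate the order p from the first three errors.
errors = [abs(x - root) for x in xs[:3]]
p = math.log(errors[2] / errors[1]) / math.log(errors[1] / errors[0])
print(round(root, 5), round(p, 2))
```

The estimate lands close to 2, consistent with quadratic convergence.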