Finding Roots of Equations

PGE 310: Formulation and Solution in Geosystems Engineering
Dr. Balhoff

References:
- "Numerical Methods with MATLAB", Recktenwald, Chapter 6
- "Numerical Methods for Engineers", Chapra and Canale, 5th Ed., Part Two, Chapters 5, 6 and 7
- "Applied Numerical Methods with MATLAB", Chapra, 2nd Ed., Part Two, Chapters 5 and 6
Martini and Wine Glass

A martini glass filled to a depth of 6 cm contains ~216π/3 cm^3 of wine. How deep must you fill the wine glass to ensure that it contains the same amount of wine?

Volume of a spherical cap:

    Vcap = (π/6) h (3a^2 + h^2)

Use the Pythagorean theorem:

    (R - h)^2 + a^2 = R^2
    a^2 = 2Rh - h^2

Substitute into the formula (with R = 7):

    Vcap = (π/3) h^2 (3R - h)

Setting Vcap equal to the martini-glass volume, 216π/3, gives

    f(h) = 7h^2 - h^3/3 - 216/3 = 0
Peng-Robinson Equation of State

- Ideal gas law: P·Vi = RT
- Many gases are not "ideal", especially under reservoir conditions:
  - High temperatures
  - High pressures
- The Peng-Robinson equation accounts for non-ideality. For methane:

    P = RT/(Vi - b) - a/(Vi^2 + 2bVi - b^2)

    a = 0.457 R^2 Tc^2 / Pc = 2.3E6
    b = 0.0778 R Tc / Pc = 24.7

At P = 50 bar and T = 473 K:

    f(Vi) = 50 - 39325/(Vi - 24.7) + 2.3E6/(Vi^2 + 49.4Vi - 611) = 0
Falling Parachutist

- Newton's law:

    F = ma = m dv/dt

- Force balance:

    FD - FU = m dv/dt

- Plugging in gravity and drag:

    mg - cv = m dv/dt

- Integrate:

    v(t) = (gm/c)(1 - e^(-(c/m)t))

What is "c" if g = 9.8 m/s^2, m = 68.1 kg, v = 40 m/s, and t = 10 s?

    f(c) = (667.38/c)(1 - e^(-0.146843c)) - 40 = 0
Roots of Nonlinear Equations: f(x) = 0

Find the root of the quadratic equation:

    f(x) = ax^2 + bx + c = 0

    x = (-b ± sqrt(b^2 - 4ac)) / (2a)

Many problems aren't so simple:
- Martini and wine glass
- Peng-Robinson equation of state
- Falling parachutist

These have no "analytical" solution! Use a "numerical" method to solve them: an approximate technique.
Root Finding Techniques

Bracketing Methods:
- Graphical
- Bisection
- False Position

Open Methods:
- Fixed Point Iteration
- Newton-Raphson
- Secant Method

Polynomials:
- Muller's Method
- Bairstow's Method

The three example equations to be solved:

    f(h) = 7h^2 - h^3/3 - 216/3 = 0
    f(Vi) = 50 - 39325/(Vi - 24.7) + 2.3E6/(Vi^2 + 49.4Vi - 611) = 0
    f(c) = (667.38/c)(1 - e^(-0.146843c)) - 40 = 0
Graphical Methods

Nonlinear equation for the drag coefficient:

    f(c) = (667.38/c)(1 - e^(-0.146843c)) - 40 = 0

- Find "c", but how?
- Maybe I could just plot points?

    c = 4:   f(c) = 34.115
    c = 8:   f(c) = 17.653
    c = 12:  f(c) = 6.067
    c = 16:  f(c) = -2.269
    c = 20:  f(c) = -8.401

[Figure: plot of these points; the curve crosses zero (the root) between c = 12 and c = 16]
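The tabulated values above can be reproduced by evaluating f(c) directly. A minimal sketch in Python (the course itself uses MATLAB; the function name and loop here are just illustrative):

```python
import math

def f(c):
    """Parachutist drag equation: f(c) = (667.38/c)(1 - e^(-0.146843 c)) - 40."""
    return 667.38 / c * (1.0 - math.exp(-0.146843 * c)) - 40.0

# Tabulate f(c) at a few trial values, watching for a sign change
for c in [4, 8, 12, 16, 20]:
    print(f"c = {c:2d}:  f(c) = {f(c):8.3f}")
```

The sign change between c = 12 and c = 16 is what the bracketing methods on the next slides exploit.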
Bisection (Interval Halving)

We saw from the graph:
- The root lies in 12 < c < 16
- Why did f(c) go from positive to negative?
- The value of "c" that gives f(c) = 0 had to be between those numbers

Theorem: If f(x) is real and continuous in the interval xL to xU, and f(xL) and f(xU) have opposite signs, then there is at least one root between xL and xU.

In English: if f(x) goes from positive to negative, it must have been zero somewhere in the middle.

    f(xL) · f(xU) < 0

means a positive number multiplied by a negative number.

[Figure: f(c) > 0 on one side of the root and f(c) < 0 on the other]
Bisection Method

1. Bracket the root between 12 and 16
2. Bisect in half: x = 14
3. Is the root between 14 and 16? Yes.
4. Bisect in half: x = 15
5. Is the root between 15 and 16? No.
Bisection Method: basic algorithm

1. Choose xL and xU so that the root is bracketed: f(xL)·f(xU) < 0
2. Divide the interval in half: xr = (xL + xU)/2
3. Calculate f(xr): evaluate the function at the new point
4. Determine whether f(xr)·f(xU) < 0 or f(xr)·f(xL) < 0
5. Replace either xU or xL with xr so the root stays bracketed
6. Repeat the iterative strategy
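The six steps above can be sketched in a few lines of Python, applied to the parachutist equation (the tolerance and iteration cap are choices of this sketch, not part of the algorithm as stated):

```python
import math

def f(c):
    # Parachutist drag equation from the graphical-method slide
    return 667.38 / c * (1.0 - math.exp(-0.146843 * c)) - 40.0

def bisection(f, xl, xu, tol=1e-8, max_iter=100):
    """Halve the bracket [xl, xu] until it is smaller than tol."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("root is not bracketed")
    for _ in range(max_iter):
        xr = 0.5 * (xl + xu)
        if f(xl) * f(xr) < 0:   # root lies in the lower half
            xu = xr
        else:                   # root lies in the upper half (or xr is the root)
            xl = xr
        if xu - xl < tol:
            break
    return 0.5 * (xl + xu)

root = bisection(f, 12.0, 16.0)
print(root)
```

Starting from the bracket [12, 16] found graphically, this homes in on the drag coefficient near c ≈ 14.8.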
False-Position Method

- Bisection is "brute force" and inefficient
- No account is taken of the magnitudes of f(xU) and f(xL)
  - If f(xU) is closer to zero than f(xL), xU is probably closer to the root
- Replace the curve with a straight line to give a "false position"
  - The line creates similar triangles
  - We need a formula for the line's x-intercept
  - Sounds like a geometry problem

    f(xL)/(xr - xL) = f(xU)/(xr - xU)

    xr = xU - f(xU)(xL - xU) / (f(xL) - f(xU))
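The x-intercept formula drops straight into code. A sketch in Python on the same parachutist equation (stopping criterion and tolerances are assumptions of this sketch):

```python
import math

def f(c):
    return 667.38 / c * (1.0 - math.exp(-0.146843 * c)) - 40.0

def false_position(f, xl, xu, tol=1e-10, max_iter=100):
    """Replace the curve on [xl, xu] by its chord and use the chord's x-intercept."""
    xr = xu
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))  # x-intercept of the chord
        if f(xl) * f(xr) < 0:
            xu = xr          # root is between xl and xr
        else:
            xl = xr          # root is between xr and xu
        if abs(xr - xr_old) < tol:
            break
    return xr

root = false_position(f, 12.0, 16.0)
print(root)
```

The only difference from the bisection sketch is how the new point xr is chosen; the bracketing logic is identical.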
Convergence

- Bisection is guaranteed to converge
  - "Brute force"
  - May converge slowly or oscillate
- False position is also guaranteed to converge
  - Converges linearly and is hopefully quicker
  - There are pitfalls

    xr = xU - f(xU)(xL - xU) / (f(xL) - f(xU))    vs.    xr = (xU + xL)/2
Pitfalls of False Position

While false position WILL converge to the root eventually, here is a case where it does so very slowly:

    xr = xU - f(xU)(xL - xU) / (f(xL) - f(xU))

[Figure: example function for which false position converges very slowly]
Summary of Bracketing Methods

- Bracketing methods guarantee convergence!
- Graphical methods can make accurate answers difficult
- Bisection is simple and straightforward
  - Bounds the root
  - Bisects and creates new bounds
- False position takes advantage of the magnitude of f(x)
  - Uses similar triangles to find new bounds
  - Converges linearly and is usually more efficient

Next time: Open Methods
- Only require one guess
- Usually faster, but convergence is not guaranteed
Fixed Point Iteration

Rearrange the equation to get x = g(x):

    4x^2 - x + 3 = 0  becomes  x = 4x^2 + 3
    sin(x) = 0        becomes  x = sin(x) + x

So the solution is the intersection point of the two curves f1(x) = x and f2(x) = g(x).

Use an "iterative" method to solve for x:
1. Guess a value xk
2. Calculate g(xk)
3. Update xk+1 = g(xk)
4. Iteratively update "x" until the solution converges

There may be several ways to rearrange the equation:
- Some arrangements converge faster
- Some arrangements may DIVERGE
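The four-step loop above can be sketched in Python. Here it is applied to f(x) = e^(-x) - x = 0 (the example used later on the Newton-Raphson convergence slide), rearranged as x = g(x) = e^(-x); the tolerance and iteration cap are choices of this sketch:

```python
import math

def g(x):
    # Rearrangement of e^(-x) - x = 0 into the form x = g(x)
    return math.exp(-x)

def fixed_point(g, x, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree within tol."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(g, 0.0)
print(root)
```

This rearrangement converges because |g'(x)| = e^(-x) < 1 near the root; since g'(x) < 0 there, the iterates oscillate around the answer as they close in, as the negative-slope slide below illustrates.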
Convergence of Fixed Point Iteration: x = g(x)

- Fixed point iteration does not always converge
- It converges between x = a and x = b ONLY IF

    |g'(x)| < 1    (for all a < x < b)

This means:

    0 < g'(x) < 1  or  -1 < g'(x) < 0   →  converge
    g'(x) > 1      or  g'(x) < -1       →  diverge

Why? Because of the slopes:
- g'(x) is the slope of f2(x) = g(x)
- 1 is the slope of f1(x) = x
- Look at the next slides
Fixed Point Iteration Convergence for Positive Slopes

Case 0 < g'(x) < 1:
[Figure: f1(x) = x and f2(x) = g(x); the 1st and 2nd tries step monotonically toward the root]
The lower slope of f2(x) leads to convergence of the iterations.
CONVERGES! (monotonically)

Case g'(x) > 1:
[Figure: f1(x) = x and f2(x) = g(x); the 1st and 2nd tries step away from the root]
The higher slope of f2(x) leads to divergence of the iterations.
DIVERGES!
Fixed Point Iteration Convergence for Negative Slopes
y
 1  g ( x )  0
f1(x)=x
1 is the slope of f1(x)
g’(x) is the slope of f2(x)
2nd try
Lower slope of f2(x) leads
to convergence of iterations
1st try
f2(x)=g(x)
CONVERGES !
Oscillatory
x
g  ( x )  1
y
Higher slope of f2(x) leads
to divergence of iterations
f1(x)=x
2nd try
DIVERGES !
1st try
f2(x)=g(x)
x
Newton's Method (or Newton-Raphson)

    xk+1 = xk - f(xk)/f'(xk)

- The most widely used method
- Often converges much faster (quadratic convergence)
- Idea: use the line tangent to the curve to find the new root estimate
- Derived from a Taylor series
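A minimal Python sketch of the update formula, run on f(x) = e^(-x) - x with x0 = 0, the same example solved on the convergence slide (the tolerance and iteration cap are choices of this sketch):

```python
import math

def f(x):
    return math.exp(-x) - x

def fprime(x):
    return -math.exp(-x) - 1.0

def newton(f, fprime, x, tol=1e-14, max_iter=50):
    """Newton-Raphson: follow the tangent line down to its x-intercept."""
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

root = newton(f, fprime, 0.0)
print(root)
```

The first iterates are x1 = 0.5 and x2 = 0.566311..., matching the Newton-Raphson column of the convergence table.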
Convergence of Newton-Raphson Method

Usually converges quadratically.

Example: f(x) = e^(-x) - x   (true solution = 0.567143290409784)

Solved with 2 methods:
- Newton-Raphson with x0 = 0
- False position with xl = 0 and xu = 20

Newton-Raphson:                          true error
    x0 = 0                               100.000000000%
    x1 = 0.500000000000000               11.838858282%
    x2 = 0.566311003197218               0.146750782%
    x3 = 0.567143165034862               0.000022106%
    x4 = 0.567143290409781               0.000000000%

False position:                          true error
    x0 = 0.952380952                     67.925984240%
    x1 = 0.607944265065116               7.194121018%
    x2 = 0.571658116501746               0.796064446%
    x3 = 0.567645088312370               0.088478152%
    x4 = 0.567199089558233               0.009838633%

The Newton-Raphson true error falls along a quadratic curve.
Newton-Raphson Pitfalls

(a) An inflection point near the root (f''(x) = 0) can cause divergence
(b) Iterates can oscillate around a local minimum or maximum
(c) Near-zero slopes can jump to a location several roots away
(d) A zero slope is a true disaster!

- There is no general convergence criterion!
- Behavior depends on the nature of the function
- The best remedy is an initial guess "sufficiently" close to the root
- For some functions, no guess works!
The Secant Method

- The derivative can be difficult or inconvenient to calculate for some functions
- Approximate the derivative instead:
  - Remember the derivative is the "slope of the line tangent to the curve"
  - Calculate the slope of a line that approximates the tangent line
  - Two points make a line (slope = rise/run)

    f'(xk) ≈ (f(xk-1) - f(xk)) / (xk-1 - xk)

- Substitute into Newton's formula xk+1 = xk - f(xk)/f'(xk):

    xk+1 = xk - f(xk)(xk-1 - xk) / (f(xk-1) - f(xk))
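The substitution above amounts to the Newton sketch with the analytic derivative swapped for a two-point slope. A Python sketch on the same f(x) = e^(-x) - x (the two starting guesses and the tolerance are choices of this sketch):

```python
import math

def f(x):
    return math.exp(-x) - x

def secant(f, x_prev, x, tol=1e-12, max_iter=50):
    """Approximate f'(x_k) with the slope through the last two iterates."""
    for _ in range(max_iter):
        slope = (f(x_prev) - f(x)) / (x_prev - x)
        x_prev, x = x, x - f(x) / slope   # shift guesses in strict sequence
        if abs(x - x_prev) < tol:
            break
    return x

root = secant(f, 0.0, 1.0)
print(root)
```

Note the guesses shift in strict sequence, so the pair may stop bracketing the root, which is exactly the distinction drawn on the next slide.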
False Position versus Secant Method

False position and the secant method look like the same formula. What is the difference?

False Position:

    xr = xU - f(xU)(xL - xU) / (f(xL) - f(xU))

- Replaces the new guess with one of the bounds
- Always "bracketed"

Secant Method:

    xk+1 = xk - f(xk)(xk-1 - xk) / (f(xk-1) - f(xk))

- Replaces guesses in strict sequence
- So xk+1 replaces xk, and xk replaces xk-1
- No guarantee of bracketing the root
Convergence Comparison

- Newton's method converges fastest
  - Quadratic convergence
  - But sometimes (like many open methods) it may fail
- Bracketing methods are slower
  - But convergence is guaranteed
  - Bisection is the slowest of all
Modified Secant Method

- Newton's method is fast (quadratic convergence), but the derivative may not be available
- The secant method uses two points to approximate the derivative, but the approximation may be poor if the points are far apart
- The modified secant method can be a much better approximation: it uses one point, and the derivative is estimated using another point some small distance, δ, away

    f'(xk) ≈ (f(xk + δ) - f(xk)) / δ

    xk+1 = xk - f(xk)/f'(xk)
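A Python sketch of the one-point variant, again on f(x) = e^(-x) - x (the value of δ, the starting guess, and the tolerance are choices of this sketch):

```python
import math

def f(x):
    return math.exp(-x) - x

def modified_secant(f, x, delta=1e-6, tol=1e-12, max_iter=50):
    """One-point secant: estimate f'(x) from a small perturbation delta."""
    for _ in range(max_iter):
        slope = (f(x + delta) - f(x)) / delta   # finite-difference derivative
        step = f(x) / slope
        x = x - step
        if abs(step) < tol:
            break
    return x

root = modified_secant(f, 1.0)
print(root)
```

The choice of δ is a trade-off: too large and the slope is a poor tangent approximation, too small and the difference f(x+δ) - f(x) is dominated by round-off.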
"Multiple Roots"

- Many equations have more than one root
  - 2nd-order polynomials have 2 roots, right?
  - Start with an initial guess close to the root you want
- But "multiple roots" refers to equations that contain the same root multiple times:

    f(x) = (x - 1)(x - 1)(x - 3) = x^3 - 5x^2 + 7x - 3

- Problems arise with multiple roots:
  - Bracketing methods don't work, because there is no sign change
  - f(x) and f'(x) are both zero exactly at the root
  - Newton's method becomes linearly, not quadratically, convergent... but an alternative formula is quadratic:

    xi+1 = xi - f(xi)·f'(xi) / ([f'(xi)]^2 - f(xi)·f''(xi))
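The alternative formula can be sketched in Python on the cubic above, which has a double root at x = 1 (the starting guess, tolerance, and zero-denominator guard are choices of this sketch):

```python
def f(x):
    # f(x) = (x-1)(x-1)(x-3): x = 1 is a double root
    return x**3 - 5*x**2 + 7*x - 3

def fp(x):
    return 3*x**2 - 10*x + 7

def fpp(x):
    return 6*x - 10

def modified_newton(x, tol=1e-12, max_iter=100):
    """x_{i+1} = x_i - f f' / ((f')^2 - f f''), quadratic even at multiple roots."""
    for _ in range(max_iter):
        denom = fp(x)**2 - f(x) * fpp(x)
        if denom == 0.0:        # guard: both f and f' vanish at the multiple root
            break
        step = f(x) * fp(x) / denom
        x = x - step
        if abs(step) < tol:
            break
    return x

root = modified_newton(0.0)
print(root)
```

Plain Newton from the same start would creep toward x = 1 only linearly, because f'(1) = 0; this form recovers the quadratic rate.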
Matlab Built-In Functions

fzero uses a hybrid of several methods:

    r = fzero(fun, x0)
    r = fzero(fun, x0, options)

where fun is your function (an ".m" file or handle), x0 is your initial guess, and r is the root.

The roots function gives all the roots of a polynomial. Consider f(x) = 1x^2 - 3x + 2 = 0:

    >> roots([1 -3 2])
    ans =
         2
         1
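For comparison only (the course uses MATLAB), NumPy's `numpy.roots` takes the same highest-power-first coefficient vector as MATLAB's roots:

```python
import numpy as np

# Coefficients of f(x) = 1x^2 - 3x + 2, highest power first, as in MATLAB
r = np.roots([1, -3, 2])
print(sorted(r))   # the two roots, x = 1 and x = 2
```

As in MATLAB, the roots come back unsorted, so sort them if order matters.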
Review

Need to find roots of nonlinear equations:

- Martini and wine glass:

    f(h) = 7h^2 - h^3/3 - 216/3 = 0

- Peng-Robinson EOS:

    f(Vi) = 50 - 39325/(Vi - 24.7) + 2.3E6/(Vi^2 + 49.4Vi - 611) = 0

- Parachute problem:

    f(c) = (667.38/c)(1 - e^(-0.146843c)) - 40 = 0

Learned a few different methods:

- Closed: convergence guaranteed by bracketing the root
  - Bisection
  - False position
- Open: no guaranteed convergence, but much faster
  - Fixed point iteration
  - Newton's method
  - Secant and modified secant methods