Numerical Solutions of 1st Order Differential Equations

Numerical Methods
Introduction
1) Basic Theory – Difference Equations
2) Euler’s Method – Tangent Line Approximation
3) Taylor Series Methods (with example)
4) Runge-Kutta Methods – 2nd Order Derivation
5) Runge-Kutta – Higher Order Expressions
6) 4th Order Runge-Kutta – 2nd Order Differential Equations
7) Examples
a) MathCAD: 4th Order Runge-Kutta/2nd Order Diff. Eq.
b) C++: Euler’s Method
c) C++: Taylor Series Method
d) C++: 4th Order Runge-Kutta
8) References
Authored by: Jonathan W. Gibson
Basic Theory - Difference Equations
When Numerical Methods Apply
1) Other methods are not applicable
2) Aids in physical understanding when the exact solution is difficult to interpret
3) The exact solution is extremely difficult to obtain
Difference Equation (definition of slope / equation of a line)
y1  y0
 m12
x1  x0

y  y 0  m   x  x0 
Definition of the Derivative
dy/dx = lim_{Δx→0} [y(x + Δx) − y(x)] / Δx
Graphical Representation
or, with h = Δx,
dy/dx = lim_{h→0} [y(x + h) − y(x)] / h
Euler’s Method – Tangent Line Approximation
Consider the general, first order initial value problem.
dy/dx = f(x, y),   y(x0) = y0
We wish to approximate the result at a finite number of points separated by Δx = h,
thus,
y(x0) = y0
y(x0 + Δx) = y(x0 + h) = y(x1) = y1
y(x0 + 2Δx) = y(x0 + 2h) = y(x2) = y2
⋮
y(x0 + nΔx) = y(x0 + nh) = y(xn) = yn
Integrating the original expression yields
∫_{y0}^{y1} dy = ∫_{x0}^{x1} f(x, y) dx
If we approximate the function f(x,y) by the value at (x0,y0), it becomes a number and
may be pulled out of the integral. The result is,
y1  y0

x1
f ( x0 , y0 )  dx 
f ( x 0 , y0 )   x1  x0 
x0
Identifying the quantity (x1 - x0) = h we may write this as,
y1 = y0 + h·f(x0, y0)
Repeating the procedure on the interval xn ≤ x ≤ xn+1 gives us Euler’s Method.
y n1  y n  h  f ( xn , y n )
Graphical Representation of Euler’s Method
Errors in Numerical Methods
Truncation Error: Also called discretization error, this occurs because we are
approximating the slope with constant values over discrete intervals. In reality, the
slope is a smooth function that varies continuously. The larger the step size, the more
prevalent truncation error becomes.
Round-Off Error: Depending on the number of digits carried in each step, this may
or may not be large. This error may be either positive or negative and tends to
cancel itself out. The smaller the step size, the more steps are required, and the more prevalent round-off error becomes.
Every differential equation has an optimum step size that balances truncation and round-off errors.
Taylor Series Methods
Taylor Series Approximation
f(x0 + h) = Σ_{n=0}^{∞} (h^n / n!)·f^(n)(x0)
Expanded Taylor Series
f(x0 + h) = f(x0) + h·f′(x0) + (h²/2)·f″(x0) + ⋯ + (h^n/n!)·f^(n)(x0) + ⋯
Writing this as an iterated equation, we have
y_{n+1} = y_n + h·f(x_n, y_n) + (h²/2)·f′(x_n, y_n) + ⋯ + (h^n/n!)·f^(n−1)(x_n, y_n) + ⋯
where f(x_n, y_n) = dy/dx evaluated at (x_n, y_n).
Comparing this to Euler’s Method
y n 1  y 0  hf  xn , y n 
We see that Euler’s Method is just a Taylor Approximation of order 1, where the order
is defined as the highest order derivative used in the approximation.
Truncation Errors in Taylor Series Methods
The truncation error may be crudely estimated by the first unused term in the Taylor
Series approximation.
Euler’s Method Local Truncation Error: O(h²)
Example Using Taylor Series Method
Consider the initial value problem,
dx/dt = 1 + x² + t³   and   x(0) = 0
Through repeated differentiation with respect to t, we find
x1  x
x2
x3
x4
x5




x1
x2
x3
x4
x1 (0)  0




x 
1  x2  t 3
x 
2xx  3t 2
x  2 xx  2 x 2  6t
x 4   2 xx  6 xx  6




1  x1  t 3
2x1 x2  3t 2
2
2 x1 x3  2 x2  6t
2 x1 x4  6 x2 x3  6
2
x 2 ( 0)  1
x 3 ( 0)  0
x 4 ( 0)  2
x 5 ( 0)  6
Using the Taylor Series Approximation, we have
x(t + h) = x(t) + h·x′(t) + (h²/2)·x″(t) + (h³/6)·x‴(t) + (h⁴/24)·x^(4)(t) + O(h⁵)
In iterated form this becomes
x_{n+1} = x_n + h·x2 + (h²/2)·x3 + (h³/6)·x4 + (h⁴/24)·x5 + O(h⁵)
or
x_{n+1} = x_n + h·[ x2 + (h/2)·x3 + (h²/6)·x4 + (h³/24)·x5 ] + O(h⁵)
At each step, x2, x3, x4, and x5 must be recalculated using the expressions above. This
method has the disadvantage that these derivatives must be analytically determined
before allowing a computer to perform the iteration process.
Runge-Kutta Method – 2nd Order Derivation
In the Taylor Series Method we were required to analytically determine the various
derivatives by implicit differentiation. This can be problematic when the function is
rather involved. Runge-Kutta Methods avoid this difficulty.
Consider the initial value problem,
dx/dt = f(t, x)   and   x(0) = x0
A form is adopted having two function evaluations of the form
k1 = f(t, x) = dx/dt   and   k2 = f(t + a·h, x + b·h·k1)
By feeding k1 into the expression for k2, we effectively eliminate the need to analytically
calculate derivatives of f(t,x).
A linear combination of k1 and k2 is added to the value of x at t to obtain the value at t + h. We may write,
x(t  h)  x(t )  h  c1k1  c2 k2    h3 
x(t  h)  x(t )  h  c1 f (t , x)  c2 f (t  ah, x  bhf )   h
3

(1)
The goal is to find c1, c2, a, and b so that equation (1) conforms with the Taylor Series
Approximation of Order 2, equation (2).
x(t + h) = x(t) + h·x′(t) + (h²/2)·x″(t) + O(h³)   (2)
We may rewrite k2 using the 2-Variable Taylor Series Approximation
f (t  h, x  k ) 


n 0
1
n!
n 

 
 h  k  f t , x 
x 
 t
Taylor Series
Expanding to 1st order in h, we have
k2 = f(t + a·h, x + b·h·f) = f + a·h·f_t + b·h·f·f_x + O(h²)
We now substitute this expression for k2 into equation (1) to yield
x(t  h)  x(t )  hc1 f  hc2  f  ahft  bhfx f    h3 
Rearranging, we have
x(t  h)  x(t )  c1  c2  hf  bh2 c2 f x f  ah2 c1 f t   h3 
(3)
Using the following relation,
x′ = f(t, x)   ⇒   x″ = df/dt = (∂f/∂x)·(dx/dt) + (∂f/∂t)·(dt/dt) = f_x·f + f_t
We may rewrite equation (2) as
x(t + h) = x(t) + h·f + (h²/2)·f_x·f + (h²/2)·f_t + O(h³)   (4)
Comparing equations (3) and (4), we see that we must have the following relations,
c1 + c2 = 1,   b·c2 = 1/2,   a·c2 = 1/2
A convenient solution (but not the only solution) to this set of equations is
c1 = c2 = 1/2   and   a = b = 1
Inserting these into equation (1) yields the 2nd Order Runge-Kutta Method
x(t  h)  x(t ) 
where
h
 k1  k 2    h 3 
2
k1  f (t , x)
and
k 2  f t  h, x  hk1 
Runge-Kutta Higher Order Expressions
2nd Order Runge-Kutta
x(t + h) = x(t) + (h/2)·(k1 + k2)
k1 = f(t, x)
k2 = f(t + h, x + h·k1)
3rd Order Runge-Kutta
x(t + h) = x(t) + (h/9)·(2·k1 + 3·k2 + 4·k3)
k1 = f(t, x)
k2 = f(t + h/2, x + (h/2)·k1)
k3 = f(t + 3h/4, x + (3h/4)·k2)
4th Order Runge-Kutta
x(t + h) = x(t) + (h/6)·(k1 + 2·k2 + 2·k3 + k4)
k1 = f(t, x)
k2 = f(t + h/2, x + (h/2)·k1)
k3 = f(t + h/2, x + (h/2)·k2)
k4 = f(t + h, x + h·k3)
Runge-Kutta – 2nd Order Differential Equations
Any nth order differential equation may be decomposed into n 1st order differential
equations. Consider the following 2nd order differential equation for a damped and
driven harmonic oscillator.
x″ + (b/m)·x′ + (k/m)·x = (F/m)·cos(ω·t + φ)
We may decompose this into 2 first order differential equations in the following way,
x′ = v
v′ = (F/m)·cos(ω·t + φ) − (b/m)·v − (k/m)·x ≡ G(t, x, v)
Stated without proof, the 4th Order Runge-Kutta Method is
xt  h   x(t ) 
h
k1  2k 2  2k3  k 4 
6
vt  h  
h
K1  2 K 2  2 K 3  K 4 
6
v(t ) 
k1  v
K1  G (t , x, v)
h 
 h
k 2  f  t  , x  k1 
2 
 2
h
h
h
K 2  G(t  , x  k1 , v  K1 )
2
2
2
h 
 h
k3  f  t  , x  k 2 
2 
 2
h
h
h
K 3  G(t  , x  k 2 , v  K 2 )
2
2
2
k 4  f t  h, x  hk 3 
K 4  G (t  h, x  hk 3 , v  hK 3 )
4th Order Runge-Kutta Routine for MathCad: Small-Angle, Damped and Forced Pendulum
The Equation of Motion is given by:
θ″ + (b/(m·l²))·θ′ + (g/l)·θ = (T/(m·l))·cos(W·t)
(where the primes indicate time derivatives)
Input the system parameters (mass of bob, length of rod, damping coefficient, driving torque and frequency, etc.)
m  2
b  50
l  10
T  5
g  9.81
W  5
Define the coefficients based on these parameters
B := b/(m·l²)    K := g/l    A := T/(m·l)
The Runge-Kutta algorithm
Define the initial conditions and the time interval
θinit := 5·(π/180)    ωinit := 0    startt := 0    endt := 25
Define the angular acceleration as a function of t, θ, and ω
G(t, θ, ω) := −K·θ − B·ω + A·cos(W·t)
Define the number of iterations, step size, counting index, and indexed variables
n  ( endt  startt )  100
j  0  n  1
h 
endt  startt
t  startt
0
n
The heart of the 4th Order Runge-Kutta algorithm is defined below
k1 t      h   
K1 t      h   G t    
k2 t      h     0.5 h  K1 t      h 
K2 t      h   G t  0.5 h    0.5 h  k1 t      h     0.5 h  K1 t      h  
k3 t      h     0.5 h  K2 t      h 
K3 t      h   G t  0.5 h    0.5 h  k2 t      h     0.5 h  K2 t      h  
k4 t      h     h  K3 t      h 
K4 t      h   G t  h    h  k3 t      h     h  K3 t      h  
h
rk t      h     k1 t      h   2 k2 t      h   2 k3 t      h   k4 t      h  
6
h
rK t      h     K1 t      h   2 K2 t      h   2 K3 t      h   K4 t      h  
6
Last, place the output values from each step in a matrix
θ_0 := θinit
ω_0 := ωinit
θ_{j+1} := θ_j + rk(t_j, θ_j, ω_j, h)
ω_{j+1} := ω_j + rK(t_j, θ_j, ω_j, h)
t_{j+1} := t_j + h
Plot the data. Time series and Phase Diagrams are shown below.
[Plots of the time series θ(t) and ω(t) over 0 ≤ t ≤ 25, and the ω–θ phase diagram, not reproduced here.]
// Euler Method, f(x,t) = 1 + x^2 + t^3 (C++)
#include <iostream>
#include <cmath>
using namespace std; // introduces namespace std

int main()
{
    cout.setf(ios_base::fixed, ios_base::floatfield);
    float h = 0.0078125;
    float x = 0.0;
    float t = 0.0;
    int n = static_cast<int>(1 / h); // 128 steps over interval 0 to 1
    for (int i = 1; i <= n; i++)
    {
        double dxdt = 1 + x * x + t * t * t;
        x = x + (h * dxdt);
        t = t + h;
        cout << x << "\n";
    }
    return 0;
}
// Result: x(1) = 1.957787
// Taylor Method, f(x,t) = 1 + x^2 + t^3 (C++)
#include <iostream>
#include <cmath>
using namespace std; // introduces namespace std

int main()
{
    cout.setf(ios_base::fixed, ios_base::floatfield);
    double h = 0.0078125;
    double x = 0.0;
    double t = 0.0;
    int n = static_cast<int>(1 / h); // 128 steps over interval 0 to 1
    for (int i = 1; i <= n; i++)
    {
        // Derivatives x2..x5 from the repeated differentiation above
        double x2 = 1.0 + (x * x) + (t * t * t);
        double x3 = (2 * x * x2) + (3 * t * t);
        double x4 = (2 * x * x3) + (2 * x2 * x2) + (6 * t);
        double x5 = (2 * x * x4) + (6 * x2 * x3) + 6;
        // Note: the fractions must be floating point; integer division
        // such as (1/6) evaluates to zero and silently drops the term.
        x = x + (h * x2) + (0.5 * h * h * x3)
              + ((1.0 / 6.0) * h * h * h * x4)
              + ((1.0 / 24.0) * h * h * h * h * x5);
        t = t + h;
        cout << x << "\n";
    }
    return 0;
}
// Result: x(1) ≈ 1.9917
// Runge-Kutta Method, f(x,t) = 1 + x^2 + t^3 (C++)
#include <iostream>
#include <cmath>
using namespace std; // introduces namespace std

int main()
{
    cout.setf(ios_base::fixed, ios_base::floatfield);
    double h = 0.0078125;
    double x = 0.0;
    double t = 0.0;
    int n = static_cast<int>(1 / h); // 128 steps over interval 0 to 1
    for (int i = 1; i <= n; i++)
    {
        double k1 = 1 + (x * x) + (t * t * t);
        double k2 = 1 + (x + 0.5 * h * k1) * (x + 0.5 * h * k1)
                      + (t + 0.5 * h) * (t + 0.5 * h) * (t + 0.5 * h);
        double k3 = 1 + (x + 0.5 * h * k2) * (x + 0.5 * h * k2)
                      + (t + 0.5 * h) * (t + 0.5 * h) * (t + 0.5 * h);
        // Both factors use k3; mixing k1 and k3 here would spoil 4th order accuracy
        double k4 = 1 + (x + h * k3) * (x + h * k3)
                      + (t + h) * (t + h) * (t + h);
        x = x + ((h / 6) * (k1 + (2 * k2) + (2 * k3) + k4));
        t = t + h;
        cout << x << "\n";
    }
    return 0;
}
// Result: x(1) ≈ 1.991742
Bibliography/References
“Elementary Differential Equations and Boundary Value Problems”; 4th Ed;
Boyce/DiPrima
“Multivariable Calculus, Linear Algebra, and Differential Equations”; 3rd Ed; Grossman
“Numerical Mathematics and Computing”; 2nd Ed; Cheney/Kincaid
“Classical Dynamics of Particles and Systems”; 4th Ed; Marion/Thornton
“The Numerical Analysis of Ordinary Differential Equations”; JC Butcher
“Calculus”; 9th Ed; Thomas/Finney