Module 6: Solving Ordinary Differential Equations - Initial Value Problems (ODE-IVPs)

7 Convergence Analysis and Selection of Integration Interval
The choice of integration interval (step size) is crucial when solving ODE-IVPs numerically. In this section, we provide some insight into this aspect. The methods developed in this module are primarily meant for the numerical solution of nonlinear ODE-IVPs. For the purpose of analysis, however, we apply them to linear ODE-IVPs. This is because the true solution of a linear ODE-IVP is known in closed form and can therefore serve as a basis for carrying out the convergence analysis.
7.1 Analysis of Linear ODE-IVPs
To see how the choice of integration interval can affect the solution behavior, consider a scalar linear ODE-IVP of the form

\[ \frac{dx}{dt} = ax, \qquad x(0) = x_0 \tag{164} \]

where \(a < 0\). The analytical (true) solution of the above equation is given as

\[ x(t) = e^{at}\, x(0) \tag{165} \]

Defining \(t_n = nh\), where \(h\) is the integration interval, we can write the true solution as a difference equation as follows

\[ x(t_{n+1}) = e^{a(t_n + h)}\, x(0) \tag{166} \]

\[ = e^{ah}\left[ e^{a t_n}\, x(0) \right] \tag{167} \]

or

\[ x(t_{n+1}) = e^{ah}\, x(t_n) \tag{168} \]
Now consider the approximate solution of the above ODE-IVP generated by the explicit Euler method

\[ x_{n+1} = x_n + h\,a x_n = (1 + ah)\, x_n \tag{169} \]

Defining the approximation error introduced due to numerical integration,

\[ e_n = x_n - x(t_n) \tag{170} \]

we can write

\[ e_{n+1} = (1 + ah)\, e_n + \left[(1 + ah) - e^{ah}\right] x(t_n) \tag{171} \]

Thus, the combined equation becomes

\[ \begin{bmatrix} e_{n+1} \\ x(t_{n+1}) \end{bmatrix} = \underbrace{\begin{bmatrix} 1 + ah & (1 + ah) - e^{ah} \\ 0 & e^{ah} \end{bmatrix}}_{B} \begin{bmatrix} e_n \\ x(t_n) \end{bmatrix} \]
Now, since

\[ a < 0 \tag{172} \]

i.e. the scalar ODE is asymptotically stable, we have, for the difference equation (168), \(\left|e^{ah}\right| < 1\) and \(x(t_n) \to 0\) as \(n \to \infty\); i.e. the solution of the difference equation (168) is also asymptotically stable. Thus, we can expect that the approximate solution \(x_n\) should exhibit qualitatively similar behavior, with \(e_n \to 0\) as \(n \to \infty\). This requires that the combined difference equation given above should be asymptotically stable, i.e., all eigenvalues of matrix \(B\) should have magnitude strictly less than one. The eigenvalues of matrix \(B\) can be computed by solving the characteristic equation of \(B\), i.e.

\[ \det(\lambda I - B) = \left[\lambda - (1 + ah)\right]\left[\lambda - e^{ah}\right] = 0 \]

Since \(B\) is upper triangular, its eigenvalues are simply \(\lambda_1 = 1 + ah\) and \(\lambda_2 = e^{ah}\).
Since \(a < 0\) already guarantees \(\left|e^{ah}\right| < 1\), the approximation error \(e_n \to 0\) as \(n \to \infty\) provided the following condition holds

\[ |1 + ah| < 1 \tag{173} \]

i.e.

\[ 0 < h < \frac{2}{|a|} \tag{174} \]

This inequality gives a constraint on the choice of the integration interval \(h\) which will ensure that the approximation error vanishes asymptotically.
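As a quick numerical check of condition (174), the following minimal sketch (the values \(a = -2\) and \(x_0 = 1\) are assumed purely for illustration) marches the explicit Euler iteration (169) with step sizes on both sides of the bound \(h < 2/|a| = 1\):

```python
def explicit_euler(a, x0, h, n_steps):
    """March x_{n+1} = (1 + a*h) * x_n, eq. (169), and return the final iterate."""
    x = x0
    for _ in range(n_steps):
        x = (1.0 + a * h) * x
    return x

a, x0 = -2.0, 1.0                      # a < 0, so bound (174) is h < 2/|a| = 1
for h in [0.1, 0.9, 1.1]:              # h < 1 should decay, h > 1 should blow up
    xN = explicit_euler(a, x0, h, n_steps=200)
    print(f"h = {h:3.1f}  |1 + a h| = {abs(1.0 + a * h):3.1f}  x_200 = {xN:.3e}")
```

For \(h = 0.1\) and \(h = 0.9\) the iterate decays to zero, while for \(h = 1.1\) it grows without bound, exactly as (174) predicts.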
Following a similar line of argument, we can derive conditions for choosing the integration interval for different methods. For example:
Implicit Euler:

\[ x_{n+1} = x_n + h\,a x_{n+1} \;\;\Rightarrow\;\; x_{n+1} = \frac{1}{1 - ah}\, x_n \tag{175} \]

The convergence condition can be stated as follows

\[ \left| \frac{1}{1 - ah} \right| < 1 \tag{176} \]

Since it is assumed that \(a < 0\) and the step size \(h\) is always positive, the condition given by equation (176) is satisfied for any positive value of \(h\). As a consequence, the implicit Euler method has much better convergence properties than the explicit Euler method.
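The following minimal sketch (the value \(a = -2\) is again an assumed illustration) evaluates the implicit Euler amplification factor \(1/(1 - ah)\) from (175)-(176) and shows that it stays below one in magnitude even for very large steps:

```python
a = -2.0
for h in [0.1, 1.0, 10.0, 100.0]:
    g = 1.0 / (1.0 - a * h)           # per-step amplification factor, eq. (175)
    print(f"h = {h:6.1f}  |1 / (1 - a h)| = {abs(g):.4f}")
```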
Trapezoidal Rule:

\[ x_{n+1} = x_n + \frac{h}{2}\left(a x_n + a x_{n+1}\right) \;\;\Rightarrow\;\; x_{n+1} = \frac{1 + ah/2}{1 - ah/2}\, x_n \tag{177} \]

The convergence condition can be stated as follows

\[ \left| \frac{1 + ah/2}{1 - ah/2} \right| < 1 \tag{178} \]

Since it is assumed that \(a < 0\) and the step size \(h\) is always positive, the condition given by equation (178) is satisfied for any positive value of \(h\). As a consequence, the trapezoidal rule also has much better convergence properties than the explicit Euler method.
2nd Order Runge-Kutta Method:

\[ x_{n+1} = \left[ 1 + ah + \frac{(ah)^2}{2} \right] x_n \tag{179} \]

The convergence condition can be stated as follows

\[ \left| 1 + ah + \frac{(ah)^2}{2} \right| < 1 \tag{180} \]
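To compare the four schemes side by side, the sketch below (an illustration; the scanned values of \(q = ah\) are assumed) tabulates the per-step amplification factors from (169), (175), (177) and (179). Note that the explicit Euler and 2nd order Runge-Kutta factors exceed one in magnitude once \(q < -2\), while the implicit Euler and trapezoidal factors remain below one for all \(q < 0\):

```python
for q in [-0.5, -1.0, -1.9, -2.1, -5.0]:          # q = a*h < 0
    explicit = 1.0 + q                             # eq. (169)
    implicit = 1.0 / (1.0 - q)                     # eq. (175)
    trapez = (1.0 + q / 2.0) / (1.0 - q / 2.0)     # eq. (177)
    rk2 = 1.0 + q + q * q / 2.0                    # eq. (179)
    print(f"q = {q:4.1f} | explicit {abs(explicit):5.2f} | implicit {abs(implicit):4.2f}"
          f" | trapezoidal {abs(trapez):4.2f} | RK2 {abs(rk2):5.2f}")
```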
Thus, the choice of integration interval depends on the parameters of the equation to be solved and on the method used for solving the ODE-IVP. These simple examples also show that approximation error analysis gives considerable insight into the relative merits of different numerical methods for solving ODE-IVPs. For example, in the case of the implicit Euler method or the trapezoidal rule, the approximation error asymptotically reduces to zero for any choice of \(h > 0\). (Of course, the larger the value of \(h\), the less accurate the numerical solution.) This, however, is not true for the explicit Euler method or the 2nd order Runge-Kutta method. This clearly shows that the implicit Euler method and the trapezoidal rule are superior to the explicit Euler method.
The above analysis can be easily extended to a coupled system of linear ODE-IVPs of the form

\[ \frac{d\mathbf{x}}{dt} = A\mathbf{x} \tag{181} \]

\[ \mathbf{x}(0) = \mathbf{x}_0 \tag{182} \]

where \(\mathbf{x} \in \mathbb{R}^m\) and \(A\) is an \(m \times m\) matrix. Let us further assume that all eigenvalues of \(A\) are in the left half of the complex plane, i.e. \(\operatorname{Re}\left[\lambda_i(A)\right] < 0\) for \(i = 1, 2, \ldots, m\). The true solution is given as follows (see Appendix for details)

\[ \mathbf{x}(t) = e^{At}\, \mathbf{x}(0) \]

or, in the discrete setting,

\[ \mathbf{x}(t_{n+1}) = e^{Ah}\, \mathbf{x}(t_n) \]

When matrix \(A\) is diagonalizable, i.e. \(A = \Psi \Lambda \Psi^{-1}\), we can write

\[ e^{Ah} = \Psi\, e^{\Lambda h}\, \Psi^{-1} \]

Since all eigenvalues of matrix \(A\) have negative real parts, it follows that \(\left(e^{\Lambda h}\right)^n \to 0\) and hence \(\mathbf{x}(t_n) = \Psi \left(e^{\Lambda h}\right)^n \Psi^{-1}\, \mathbf{x}(0) \to \mathbf{0}\) as \(n \to \infty\).
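A small sketch (assuming NumPy is available; the matrix \(A\) and step \(h\) below are illustrative choices) confirms this decay by forming \(e^{Ah}\) through the diagonalization above and raising it to higher powers:

```python
import numpy as np

A = np.array([[-3.0,  1.0],
              [ 0.5, -2.0]])               # illustrative matrix; eigenvalues < 0
h = 0.1
lam, Psi = np.linalg.eig(A)                # A = Psi diag(lam) Psi^{-1}
expAh = (Psi @ np.diag(np.exp(lam * h)) @ np.linalg.inv(Psi)).real
x0 = np.array([1.0, -1.0])
for n in [10, 100, 1000]:
    xn = np.linalg.matrix_power(expAh, n) @ x0   # x(t_n) = (e^{Ah})^n x(0)
    print(f"n = {n:4d}  ||x(t_n)|| = {np.linalg.norm(xn):.3e}")
```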
Then, following arguments similar to those in the scalar case, it can be shown that the conditions for choosing the integration interval are as follows:
Explicit Euler: The approximate solution is governed by the following difference equation

\[ \mathbf{x}_{n+1} = (I + hA)\,\mathbf{x}_n \tag{183} \]

and the dynamics of the approximation error, \(\mathbf{e}_n = \mathbf{x}_n - \mathbf{x}(t_n)\), is governed by the following matrix difference equation

\[ \mathbf{e}_{n+1} = (I + hA)\,\mathbf{e}_n + \left[(I + hA) - e^{Ah}\right]\mathbf{x}(t_n) \]

Thus, \(\mathbf{e}_n \to \mathbf{0}\) as \(n \to \infty\) provided

\[ \rho\left(I + hA\right) < 1 \tag{184} \]

where \(\rho(\cdot)\) represents the spectral radius of the matrix. When matrix \(A\) is diagonalizable, we can write

\[ I + hA = \Psi\,(I + h\Lambda)\,\Psi^{-1} \]

and the eigenvalues of matrix \((I + hA)\) are \((1 + h\lambda_i)\), where \(\lambda_i\) \((i = 1, 2, \ldots, m)\) represent the eigenvalues of matrix \(A\). Thus, the convergence condition can be stated as

\[ |1 + h\lambda_i| < 1 \quad \text{for } i = 1, 2, \ldots, m \]
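Condition (184) can be checked directly, as in the following sketch (assuming NumPy; the matrix is the same illustrative one used above):

```python
import numpy as np

A = np.array([[-3.0,  1.0],
              [ 0.5, -2.0]])
I = np.eye(2)
for h in [0.1, 0.5, 0.7, 1.0]:
    rho = max(abs(np.linalg.eigvals(I + h * A)))   # spectral radius of (I + hA)
    print(f"h = {h:.1f}  rho(I + hA) = {rho:.3f}  "
          f"{'stable' if rho < 1 else 'unstable'}")
```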
Implicit Euler: The approximate solution is governed by the following difference equation
\[ \mathbf{x}_{n+1} = (I - hA)^{-1}\,\mathbf{x}_n \tag{185} \]

and the dynamics of the approximation error, \(\mathbf{e}_n = \mathbf{x}_n - \mathbf{x}(t_n)\), is governed by the following matrix difference equation

\[ \mathbf{e}_{n+1} = (I - hA)^{-1}\,\mathbf{e}_n + \left[(I - hA)^{-1} - e^{Ah}\right]\mathbf{x}(t_n) \tag{186} \]

In order that \(\mathbf{e}_n \to \mathbf{0}\) as \(n \to \infty\), the following condition should hold

\[ \rho\left[(I - hA)^{-1}\right] < 1 \]

When matrix \(A\) is diagonalizable, the convergence condition can be stated as

\[ \left| \frac{1}{1 - h\lambda_i} \right| < 1 \quad \text{for } i = 1, 2, \ldots, m \]
Trapezoidal Rule: The approximate solution is governed by the following difference equation

\[ \mathbf{x}_{n+1} = \left(I - \frac{hA}{2}\right)^{-1}\left(I + \frac{hA}{2}\right)\mathbf{x}_n \tag{187} \]

and the dynamics of the approximation error, \(\mathbf{e}_n = \mathbf{x}_n - \mathbf{x}(t_n)\), is governed by the following matrix difference equation

\[ \mathbf{e}_{n+1} = \left(I - \frac{hA}{2}\right)^{-1}\left(I + \frac{hA}{2}\right)\mathbf{e}_n + \left[\left(I - \frac{hA}{2}\right)^{-1}\left(I + \frac{hA}{2}\right) - e^{Ah}\right]\mathbf{x}(t_n) \tag{188} \]

In order that \(\mathbf{e}_n \to \mathbf{0}\) as \(n \to \infty\), the following condition should hold

\[ \rho\left[\left(I - \frac{hA}{2}\right)^{-1}\left(I + \frac{hA}{2}\right)\right] < 1 \tag{189} \]

When matrix \(A\) is diagonalizable, the convergence condition can be stated as

\[ \left| \frac{1 + h\lambda_i/2}{1 - h\lambda_i/2} \right| < 1 \quad \text{for } i = 1, 2, \ldots, m \tag{190} \]
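For the same illustrative matrix, the sketch below (assuming NumPy) evaluates the spectral radii appearing in the implicit Euler condition and in (189); both stay below one however large the step is taken:

```python
import numpy as np

A = np.array([[-3.0,  1.0],
              [ 0.5, -2.0]])
I = np.eye(2)
for h in [0.1, 1.0, 10.0]:
    B_imp = np.linalg.inv(I - h * A)                          # eq. (185)
    B_trap = np.linalg.inv(I - 0.5 * h * A) @ (I + 0.5 * h * A)  # eq. (187)
    r_imp = max(abs(np.linalg.eigvals(B_imp)))
    r_trap = max(abs(np.linalg.eigvals(B_trap)))
    print(f"h = {h:5.1f}  rho(implicit) = {r_imp:.3f}  rho(trapezoidal) = {r_trap:.3f}")
```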
Similar error analysis (or stability analysis) can be performed for other integration methods. For example, consider the scenario when a 3-step algorithm is used for obtaining the numerical solution of equation (164). Applied to \(dx/dt = ax\), such an algorithm reduces to a linear difference equation, which can be rearranged in the following form

\[ x_{n+1} = \alpha_1\, x_n + \alpha_2\, x_{n-1} + \alpha_3\, x_{n-2} \tag{191} \]

Defining

\[ \mathbf{z}_n = \begin{bmatrix} x_n & x_{n-1} & x_{n-2} \end{bmatrix}^T \tag{192} \]

we have

\[ \mathbf{z}_{n+1} = B\,\mathbf{z}_n \tag{193} \]

where

\[ B = \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \tag{194} \]

Similarly, using (168), the true solution can be expressed as

\[ \mathbf{z}(t_{n+1}) = \widetilde{B}\,\mathbf{z}(t_n), \qquad \widetilde{B} = \begin{bmatrix} e^{ah} & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \tag{195} \]

where \(\mathbf{z}(t_n) = \begin{bmatrix} x(t_n) & x(t_{n-1}) & x(t_{n-2}) \end{bmatrix}^T\). The evolution of the approximation error \(\boldsymbol{\varepsilon}_n = \mathbf{z}_n - \mathbf{z}(t_n)\) is given as

\[ \boldsymbol{\varepsilon}_{n+1} = B\,\boldsymbol{\varepsilon}_n + \left(B - \widetilde{B}\right)\mathbf{z}(t_n) \]

The stability criterion that can be used to choose the integration interval \(h\) can be derived as

\[ \rho(B) < 1 \tag{196} \]

Note that the characteristic equation for matrix \(B\) is given as

\[ \lambda^3 - \alpha_1 \lambda^2 - \alpha_2 \lambda - \alpha_3 = 0 \tag{197} \]

Thus, the eigenvalues of matrix \(B\) can be computed directly from the coefficients \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\), which are functions of the integration interval \(h\).
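As a concrete instance (a sketch assuming NumPy; the choice of the third order Adams-Bashforth scheme as the 3-step algorithm is an illustrative assumption), applying that scheme to (164) gives \(\alpha_1 = 1 + 23q/12\), \(\alpha_2 = -16q/12\) and \(\alpha_3 = 5q/12\) with \(q = ah\), and the roots of (197) follow from np.roots:

```python
import numpy as np

def companion_spectral_radius(q):
    """Spectral radius of B in (194) for the 3rd order Adams-Bashforth scheme."""
    a1 = 1.0 + 23.0 * q / 12.0
    a2 = -16.0 * q / 12.0
    a3 = 5.0 * q / 12.0
    # roots of lambda^3 - a1*lambda^2 - a2*lambda - a3 = 0, eq. (197)
    return max(abs(np.roots([1.0, -a1, -a2, -a3])))

for q in [-0.1, -0.3, -0.5, -0.6, -0.7]:       # q = a*h
    rho = companion_spectral_radius(q)
    print(f"a*h = {q:4.1f}  rho(B) = {rho:.3f}  "
          f"{'stable' if rho < 1 else 'unstable'}")
```

The printed values locate the real-axis stability boundary of this scheme near \(ah \approx -0.55\), a much tighter restriction than the explicit Euler bound (174).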
Equations such as (184), (185), (190) and (197) can be used to generate stability envelopes for each method in the complex plane (the eigenvalues of a matrix can, in general, be complex). Stability envelopes for most of the methods are available in the literature, and a sketch for generating such envelopes is given after the list below. The following general conclusions can be reached by studying these plots [3].

- Even though the first and second order Adams-Moulton methods (implicit Euler and Crank-Nicolson) are A-stable, the higher order techniques have restricted regions of stability. These regions are, however, larger than those of the Adams-Bashforth family of the same order.
- All forms of the R-K algorithms of order ≤ 4 have identical stability envelopes.
- Explicit R-K techniques have better stability characteristics than explicit Euler.
- For predictor-corrector schemes, the accuracy of the scheme improves with order; however, the stability region shrinks with increasing order.
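As an illustration of how such envelopes are generated (a sketch assuming NumPy and Matplotlib; the grid limits are arbitrary), one can trace the locus \(|R(z)| = 1\) in the complex \(z = \lambda h\) plane, where \(R(z)\) is the method's amplification factor, e.g. \(R(z) = 1 + z\) for explicit Euler and \(R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24\) for the 4th order R-K family:

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-4, 2, 400), np.linspace(-4, 4, 400))
z = x + 1j * y
R_euler = abs(1 + z)                                   # explicit Euler
R_rk4 = abs(1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24)   # 4th order R-K

plt.contour(x, y, R_euler, levels=[1.0], colors="blue")
plt.contour(x, y, R_rk4, levels=[1.0], colors="red")
plt.axhline(0, color="gray", lw=0.5); plt.axvline(0, color="gray", lw=0.5)
plt.xlabel("Re(lambda h)"); plt.ylabel("Im(lambda h)")
plt.title("Stability envelopes: explicit Euler (blue) vs 4th order R-K (red)")
plt.gca().set_aspect("equal")
plt.show()
```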
7.2 Extension to Nonlinear ODE-IVPs

The conclusions reached from studying linear systems can be extended locally to general nonlinear systems using Taylor expansion. A nonlinear ODE-IVP

\[ \frac{d\mathbf{x}}{dt} = F(\mathbf{x}), \qquad \mathbf{x}(0) = \mathbf{x}_0 \tag{198} \]

can be approximated, in the neighborhood of an operating point \(\overline{\mathbf{x}}\), as

\[ F(\mathbf{x}) \simeq F(\overline{\mathbf{x}}) + \left[\frac{\partial F}{\partial \mathbf{x}}\right]_{\mathbf{x} = \overline{\mathbf{x}}} \left(\mathbf{x} - \overline{\mathbf{x}}\right) \tag{199} \]

Defining the perturbation \(\delta\mathbf{x} = \mathbf{x} - \overline{\mathbf{x}}\) and \(A = \left[\partial F / \partial \mathbf{x}\right]_{\mathbf{x} = \overline{\mathbf{x}}}\), this yields the locally linearized problem

\[ \frac{d(\delta\mathbf{x})}{dt} \simeq A\,\delta\mathbf{x} \tag{200} \]

\[ \delta\mathbf{x}(0) = \mathbf{x}_0 - \overline{\mathbf{x}} \tag{201} \]

Applying some numerical technique to solve this problem will lead to a difference equation of the form

\[ \delta\mathbf{x}_{n+1} = B(h)\,\delta\mathbf{x}_n \tag{202} \]

and stability will depend on choosing \(h\) such that \(\left|\lambda_i\left[B(h)\right]\right| < 1\) for all \(i\). Note that it is difficult to perform a global analysis for a general nonlinear system.
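This local analysis can be automated, as in the following sketch (assuming NumPy; the two-state model F and the operating point are hypothetical illustrations): the Jacobian in (199) is approximated by forward differences and the eigenvalues of the explicit Euler map \(B(h) = I + hA\) are then examined:

```python
import numpy as np

def F(x):
    """Hypothetical nonlinear right-hand side used only for illustration."""
    return np.array([-x[0] + x[0] * x[1],
                     -5.0 * x[1] - x[0] ** 2])

def jacobian(F, x, eps=1e-7):
    """Forward-difference approximation of dF/dx at x."""
    n = len(x)
    J = np.zeros((n, n))
    Fx = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - Fx) / eps
    return J

x_bar = np.array([0.0, 0.0])             # operating point
A = jacobian(F, x_bar)                    # local linearization, eq. (199)
h = 0.3
B = np.eye(2) + h * A                     # explicit Euler map B(h), eq. (202)
print("eigenvalues of B(h):", np.linalg.eigvals(B))
print("locally stable step:", max(abs(np.linalg.eigvals(B))) < 1)
```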
7.3 Stiffness of ODEs [3]
The problem of integrating a multi-variable ODE-IVP in which some variables change very fast in time while others change slowly is difficult to solve, because the step size has to be selected according to the fastest changing variable/mode. For example, consider an equation of the form

\[ \frac{d\mathbf{x}}{dt} = A\mathbf{x}, \qquad \mathbf{x}(0) = \mathbf{x}_0 \tag{203} \]

whose eigenvalues are widely separated, say \(\lambda_1 = -1\) and \(\lambda_2 = -1000\). It can be shown that the solution for the above system of equations is of the form

\[ \mathbf{x}(t) = C_1 e^{-t}\,\mathbf{v}_1 + C_2 e^{-1000 t}\,\mathbf{v}_2 \tag{204} \]

where \(\mathbf{v}_1, \mathbf{v}_2\) are the eigenvectors of \(A\). It can be observed that the terms involving \(e^{-1000t}\) decay very rapidly: they produce a sharp initial transient (a sharp decrease in one component and a small early maximum in another), after which the solution is dominated by the slowly decaying term \(e^{-t}\). An explicit method must nevertheless respect the stability constraint imposed by the fastest mode over the entire interval of integration; for explicit Euler, for instance,

\[ h < \frac{2}{|\lambda_2|} = 0.002 \tag{205} \]

even though the slow mode alone would permit a much larger step,

\[ h < \frac{2}{|\lambda_1|} = 2 \tag{206} \]

Thus, the step size has to be selected such that the faster dynamics can be captured. The stiffness of a given ODE-IVP is determined by finding the stiffness ratio, defined as

\[ S.R. = \frac{\max_i \left| \operatorname{Re}\left[\lambda_i(A)\right] \right|}{\min_i \left| \operatorname{Re}\left[\lambda_i(A)\right] \right|} \tag{207} \]

where matrix \(A\) is defined above. Systems with a 'large' stiffness ratio are called stiff.
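For instance (a sketch assuming NumPy; the matrix is a standard illustrative example), the matrix below has eigenvalues \(-1\) and \(-1000\), and its stiffness ratio (207) evaluates to 1000:

```python
import numpy as np

A = np.array([[  998.0,  1998.0],
              [ -999.0, -1999.0]])       # eigenvalues are -1 and -1000
re = np.real(np.linalg.eigvals(A))
SR = max(abs(re)) / min(abs(re))          # stiffness ratio, eq. (207)
print(f"eigenvalues: {np.sort(re)}, stiffness ratio = {SR:.0f}")
```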
This analysis can be extended to a general nonlinear system only locally. Using Taylor's theorem, we can write

\[ \frac{d\mathbf{x}}{dt} = F(\mathbf{x}) \tag{208} \]

\[ \simeq F(\overline{\mathbf{x}}) + \left[\frac{\partial F}{\partial \mathbf{x}}\right]_{\mathbf{x} = \overline{\mathbf{x}}} \left(\mathbf{x} - \overline{\mathbf{x}}\right) \tag{209} \]

The local stiffness ratio can be calculated using the eigenvalues of the Jacobian, and the ODE-IVP is locally stiff if the local S.R. is high, i.e., the system has at least one eigenvalue which does not contribute significantly over most of the domain of interest. In general, the eigenvalues of the Jacobian are time dependent, and the S.R. is therefore a function of time. Thus, for stiff systems, it is better to use variable step size methods or special algorithms developed for stiff systems.
7.4 Variable stepsize implementation with accuracy monitoring [2]
One practical difficulty involved in integration with a fixed step size is choosing the step size such that the approximation errors remain small. If a system of nonlinear ODEs is stiff only in certain regions of the state space, then selecting a single fixed step size is a non-trivial task. In such a situation, a variable step size algorithm with error monitoring, as given in Table 2, can be implemented. It may be noted that this algorithm can be implemented with any method of the Runge-Kutta class.

Table 2: Variable Step Size Implementation of Runge-Kutta Algorithms
Given: \(t_n\), \(x_n\), an initial step size \(h_1\), and an error tolerance \(\varepsilon > 0\).

Step 1: Let \(t_{n+1} = t_n + h_1\).

Step 2: Compute \(x_{n+1}^{(1)}\) using the chosen method (e.g. Euler / Runge-Kutta etc.) with a single step of size \(h_1\).

Step 3: Define \(h_2 = h_1 / 2\). Compute \(x_{n+1}^{(2)}\) using the chosen method by taking two successive steps of size \(h_2\).

Step 4:
IF \(\left( \left| x_{n+1}^{(1)} - x_{n+1}^{(2)} \right| < \varepsilon \right)\)
    Accept \(x_{n+1}^{(2)}\) as the new value: set \(x_{n+1} = x_{n+1}^{(2)}\) and \(n = n + 1\), and proceed to Step 1.
ELSE
    Set \(h_1 = h_1 / 2\) and proceed to Step 1.
END IF
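A minimal sketch of the logic in Table 2 (assuming NumPy; the right-hand side f, tolerance and initial step below are illustrative, and growing the step after an accepted move is a common refinement not stated in the table), using the classical 4th order Runge-Kutta method as the underlying one-step scheme:

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One classical 4th order Runge-Kutta step of size h."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    return x + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_variable_step(f, t0, x0, t_end, h, eps=1e-6):
    """Variable step size integration with step-doubling error monitoring."""
    t, x = t0, x0
    while t < t_end:
        h = min(h, t_end - t)
        x_full = rk4_step(f, t, x, h)                         # x^{(1)}: one step h
        x_half = rk4_step(f, t + h / 2,
                          rk4_step(f, t, x, h / 2), h / 2)    # x^{(2)}: two steps h/2
        if np.linalg.norm(x_full - x_half) < eps:             # accept the move
            t, x = t + h, x_half
            h *= 2.0        # try a larger step next (refinement beyond Table 2)
        else:                                                 # reject: halve h
            h /= 2.0
    return t, x

f = lambda t, x: np.array([-1000.0 * x[0] + x[1], -x[1]])     # stiff test system
print(integrate_variable_step(f, 0.0, np.array([1.0, 1.0]), 1.0, h=0.1))
```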