In solving the system of linear equations [A]{x} = {b}, we can use the following
approaches.
1) Direct methods
   - Gauss elimination,
   - LU decomposition,
   - Gauss-Jordan elimination, etc.
2) Iterative methods
   - Jacobi method,
   - Gauss-Seidel method,
   - SOR method (successive over-relaxation method).
Iterative methods
Iterative methods for solving the linear system [A]{x} = {b} begin with an initial guess for
the solution {x} and successively improve it until the solution is as accurate as desired.
Advantages of iterative methods compared to direct methods:
- Insensitive to round-off errors: round-off errors are common in computer simulation,
and when large systems are solved by direct methods the accumulated errors can grow
large enough to render the results unacceptable. Iterative methods, in contrast, are much
less sensitive to the propagation of round-off errors.
- Computationally inexpensive for large systems: iterative methods are often useful for
solving linear systems involving several hundred variables, particularly if many of the
coefficients are zero (sparse matrix), as in the banded example shown below; a short
sketch of a Jacobi sweep that exploits such sparsity follows this list. Systems of over
100,000 variables have been successfully solved on computers by iterative methods,
whereas systems of 10,000 or more variables are difficult or impossible to solve by
direct methods.
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
a_{21} & a_{22} & a_{23} & a_{24} & 0 & 0 & 0 & 0 & 0 & 0 \\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35} & 0 & 0 & 0 & 0 & 0 \\
0 & a_{42} & a_{43} & a_{44} & a_{45} & a_{46} & 0 & 0 & 0 & 0 \\
0 & 0 & a_{53} & a_{54} & a_{55} & a_{56} & a_{57} & 0 & 0 & 0 \\
0 & 0 & 0 & a_{64} & a_{65} & a_{66} & a_{67} & a_{68} & 0 & 0 \\
0 & 0 & 0 & 0 & a_{75} & a_{76} & a_{77} & a_{78} & a_{79} & 0 \\
0 & 0 & 0 & 0 & 0 & a_{86} & a_{87} & a_{88} & a_{89} & a_{8,10} \\
0 & 0 & 0 & 0 & 0 & 0 & a_{97} & a_{98} & a_{99} & a_{9,10} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{10,8} & a_{10,9} & a_{10,10}
\end{bmatrix}

Sparse matrix example
- Can be used for solving non-linear equations: iterative methods are usually the only
choice for nonlinear equations.
- Easy algorithm: iterative methods can be implemented more easily on computers than
direct methods.
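To make the point about sparse systems concrete, the following sketch (an addition to these
notes, not part of the original text) performs Jacobi iteration on a tridiagonal system stored
by its three diagonals; the test system, the array names lower/diag/upper, and the size N are
illustrative assumptions. Because each row holds at most three nonzero coefficients, the work
per iteration grows only linearly with the number of unknowns.
/* SKETCH: JACOBI ITERATION FOR A TRIDIAGONAL (SPARSE) SYSTEM STORED BY ITS
THREE DIAGONALS; EACH UPDATE TOUCHES ONLY THE NONZERO COEFFICIENTS */
#include<stdio.h>
#include<math.h>
#define N 5
int main()
{
int i,k;
double lower[N+1],diag[N+1],upper[N+1],b[N+1],x_old[N+1],x_new[N+1],sum,error;
// TEST SYSTEM (ASSUMED FOR ILLUSTRATION): 4 ON THE DIAGONAL, -1 ON THE
// OFF-DIAGONALS, RIGHT-HAND SIDE CHOSEN SO THAT THE EXACT SOLUTION IS x[i]=1
for (i=1; i<=N; i++)
{
lower[i] = (i>1) ? -1.0 : 0.0;
upper[i] = (i<N) ? -1.0 : 0.0;
diag[i] = 4.0;
b[i] = diag[i] + lower[i] + upper[i];
x_old[i] = 0.0;
}
for (k=1; k<=100; k++)
{
error = 0.0;
for (i=1; i<=N; i++)
{
// ONLY THE TWO NEIGHBOURING UNKNOWNS APPEAR IN ROW i
sum = 0.0;
if(i>1) sum = sum + lower[i]*x_old[i-1];
if(i<N) sum = sum + upper[i]*x_old[i+1];
x_new[i] = (b[i]-sum)/diag[i];
if(error<fabs(x_new[i]-x_old[i])) error=fabs(x_new[i]-x_old[i]);
}
for (i=1; i<=N; i++) { x_old[i] = x_new[i]; }
if(error < 1e-06) break;
}
printf("CONVERGED AFTER %d ITERATIONS\n",k);
for (i=1; i<=N; i++) printf("x[%1d]=%11.8f\t",i,x_new[i]);
printf("\n");
return 0;
}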
Jacobi method
The first iterative technique is called the Jacobi method, after Carl Gustav Jacob Jacobi
(1804–1851). This method makes two assumptions: (1) that the system given by
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 + ........... + a_{1n} x_n = b_1        eq.(1)
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 + ........... + a_{2n} x_n = b_2        eq.(2)
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 + ........... + a_{3n} x_n = b_3        eq.(3)
        :                          :                          :                    :
a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + a_{n4} x_4 + ........... + a_{nn} x_n = b_n        eq.(n)
has a unique solution and (2) that the coefficient matrix A has no zeros on its main
diagonal. If any of the diagonal entries are zero, then rows must be interchanged to obtain
a coefficient matrix that has nonzero entries on the main diagonal.
Procedure:
Step 1:
- Solve the first equation for x_1, the second equation for x_2, and so on, as follows:

x_1 = ( b_1 - ( a_{12} x_2 + a_{13} x_3 + a_{14} x_4 + ........... + a_{1n} x_n ) ) / a_{11}

x_2 = ( b_2 - ( a_{21} x_1 + a_{23} x_3 + a_{24} x_4 + ........... + a_{2n} x_n ) ) / a_{22}

        :                                                                          (1)

x_n = ( b_n - ( a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + ........... + a_{n,n-1} x_{n-1} ) ) / a_{nn}
Step 2:
- Start the iteration by substituting the initial guess values for x_1, x_2, ....., x_n into the
right-hand side of equations (1) and calculating the new {x}; this gives the first
approximation and completes one iteration. In the same way, the second approximation is
obtained by substituting the first approximation's {x} into the right-hand side of the
rewritten equations:
k 1
1
x
x
k 1
2
k 1
n
x
b1  ( a12 x2k  a13 x3k  a14 x4k  ...........  a1n xnk )

a11
b2  ( a21x1k  a23 x3k  a24 x4k  ...........  a2 n xnk )

a22
:

( 2)
bn  ( an1x1k  an 2 x2k  an3 x3k  ...........  an ,n1xnk 1 )
ann
Here 'k' represents the iteration number. In general, the x_i value in the Jacobi iteration
can be calculated using

x_i^{k+1} = ( b_i - \sum_{j=1, j \ne i}^{n} a_{ij} x_j^k ) / a_{ii}
Step 3:
- Check the convergence error, \max_i | x_i^{k+1} - x_i^k |. If the error is greater than the
desired value, repeat the iterations until the solution is accurate enough.
Example:
Consider the following 3 × 3 example:

10 x_1 + 2 x_2 + x_3 = 13        eq.(1)
2 x_1 + 10 x_2 + x_3 = 13        eq.(2)
2 x_1 + x_2 + 10 x_3 = 13        eq.(3)

x_1^{k+1} = ( 13 - 2 x_2^k - x_3^k ) / 10

x_2^{k+1} = ( 13 - 2 x_1^k - x_3^k ) / 10                                          (1)

x_3^{k+1} = ( 13 - 2 x_1^k - x_2^k ) / 10

Select the initial guess x_1, x_2, x_3 = 0; then we get the new values x_1^1, x_2^1, x_3^1 = 1.3.
Substitute x_1^1, x_2^1, x_3^1 = 1.3 into equations (1); then we get the new values
x_1^2, x_2^2, x_3^2 = 0.91.
Iteration no 'k'     x1            x2            x3            Error = max_i |x_i^{k+1} - x_i^k|
0                    0             0             0             -
1                    1.3           1.3           1.3           1.3
2                    0.91          0.91          0.91          0.39
3                    1.027         1.027         1.027         0.117
4                    0.9919        0.9919        0.9919        0.0351
5                    1.00243       1.00243       1.00243       0.01053
6                    0.999271      0.999271      0.999271      0.003159
7                    1.000219      1.000219      1.000219      0.000948
8                    0.9999343     0.9999343     0.9999343     0.0002847
9                    1.00002       1.00002       1.00002       0.0000857
Algorithm for Jacobi method:
1) Consider the system of equations [A]{x} = {b}.
2) Read n, [A], {b}.
3) Set initial {x_i} = 0, for i = 1, ..., n.
4) Start the iteration loop.
5) Calculate the new {x_i} values using
   x_i^{k+1} = ( b_i - \sum_{j=1, j \ne i}^{n} a_{ij} x_j^k ) / a_{ii}, for i = 1, ..., n.
6) Calculate the maximum error between the current approximation and the previous
approximation, error = \max_i | x_i^{k+1} - x_i^k |, for i = 1, ..., n.
7) If the error is greater than the desired value, assign the new values of {x_i} to the old
values of {x_i} and continue the iteration loop (go to the 4th step in the algorithm).
8) Else print the current approximation of {x_i}.
Numerical code for Jacobi method:
/* THIS PROGRAM SOLVES THE SYSTEM OF LINEAR EQUATIONS BY JACOBI
ITERATIVE METHOD */
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
int main()
{
int i,j,n=3,k;
//INPUT THE MATRIX [A] AND {b}
float a[10][10],b[10],x_new[10],x_old[10],sum,error;
a[1][1]= 10.0; a[1][2]= 2.0; a[1][3]= 1.0; b[1]= 13.0;
a[2][1]= 2.0; a[2][2]= 10.0; a[2][3]= 1.0; b[2]= 13.0;
a[3][1]= 2.0; a[3][2]= 1.0; a[3][3]= 10.0; b[3]= 13.0;
//INITIALIZE THE {x} TO ZERO
for (i=1; i<=n; i++)
{
x_old[i]=0.0;
}
// START THE ITERATION
// HERE k IS THE ITERATION NUMBER
k=0;
iterate:
k = k+1;
// PRINTING ITERATION NUMBER
printf("ITERATION NUMBER=%2d\n",k);
// CALCULATION OF {x[i]} USING JACOBI METHOD
for (i=1; i<=n; i++)
{
sum = 0.;
for(j=1; j<=n; j++)
{
if(i!=j) sum = sum + a[i][j]*x_old[j];
}
x_new[i] = (b[i]-sum)/a[i][i];
// PRINTING NEWLY CALCULATED {x[i]} VALUES IN EACH ITERATION
printf("x_new[%1d]=%11.8f\t",i,x_new[i]);
}
// INITIALIZING THE ERROR TO ZERO IN EVERY ITERATION
error = 0.0;
for (i=1; i<=n; i++)
{
// CALCULATION OF MAXIMUM ERROR
if(error<fabs(x_new[i]-x_old[i])) error=fabs(x_new[i]-x_old[i]);
}
printf("\nMAXIMUM ERROR IN THE CURRENT ITERATION=%e\n",error);
printf("\n");
printf("\n");
/* IF ERROR GREATER THAN DESIRED VALUE ASSIGN NEW APPROXIMATE VALUE,
x_new[i] TO THE OLD APPROXIMATE VALUE, x_old[i] AND START THE ITERATION
AGAIN */
if(error > 1e-06)
{
for(i=1; i<=n; i++) { x_old[i] = x_new[i];}
goto iterate;
}
//IF ERROR IS LESS THAN DESIRED VALUE PRINT THE RESULT
else goto result;
result:
printf("\n");
printf("The result is:");
for (i=1; i<=n; i++)
{
printf("x[%1d]=%11.8f\t",i,x_new[i]);
}
printf("\n");
return 0;
}
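This program (and the Gauss-Seidel and SOR programs below) can be built with any standard
C compiler; for example, assuming the source file is named jacobi.c (an illustrative name),
it can be compiled and run on a typical Linux system with
gcc jacobi.c -o jacobi -lm
./jacobi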
Gauss-Seidel method
You will now look at a modification of the Jacobi method called the Gauss-Seidel
method, named after Carl Friedrich Gauss (1777–1855) and Philipp L. Seidel (1821–
1896). This modification is no more difficult to use than the Jacobi method, and it often
requires fewer iterations to produce the same degree of accuracy. With the Jacobi method,
the values of {x} obtained in the kth approximation remain unchanged until the entire
(k+1)th approximation has been calculated. With the Gauss-Seidel method, on the other
hand, you use the new value of each {x} as soon as it is known:
x1k 1 
x
k 1
2
b1  ( a12 x2k  a13 x3k  a14 x4k  ...........  a1n xnk )
a11
b2  ( a21x1k 1  a23 x3k  a24 x4k  ...........  a2 n xnk )

a22
x3k 1 
b3  ( a31x1k 1  a32 x2k 1  a34 x4k  ...........  a3n xnk )
a33
:
( 2)
:
bn  ( an1x1k 1  an 2 x2k 1  an3 x3k 1  ...........  an ,n1xnk 11 )
x 
ann
Here 'k' represents the iteration number. In general, the x_i value in the Gauss-Seidel
iteration method can be calculated using

x_i^{k+1} = ( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k ) / a_{ii}
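Before the worked example, here is a minimal sketch (an addition to the notes, reusing the
3 × 3 system solved later in this section) of the coding difference from the Jacobi method:
because new values are used as soon as they are known, a single array x[] can be updated in
place; the complete program later in this section keeps a separate x_old[] only so that the
convergence error can be measured.
/* SKETCH: GAUSS-SEIDEL SWEEPS UPDATING A SINGLE ARRAY x[] IN PLACE */
#include<stdio.h>
int main()
{
int i,j,k,n=3;
double a[10][10],b[10],x[10],sum;
a[1][1]= 10.0; a[1][2]= 2.0; a[1][3]= 1.0; b[1]= 13.0;
a[2][1]= 2.0; a[2][2]= 10.0; a[2][3]= 1.0; b[2]= 13.0;
a[3][1]= 2.0; a[3][2]= 1.0; a[3][3]= 10.0; b[3]= 13.0;
for (i=1; i<=n; i++) { x[i]=0.0; }
// A FIXED NUMBER OF SWEEPS IS USED HERE; THE FULL PROGRAM BELOW USES A
// CONVERGENCE CHECK INSTEAD
for (k=1; k<=10; k++)
{
for (i=1; i<=n; i++)
{
sum = 0.0;
for (j=1; j<=n; j++)
{
// FOR j<i, x[j] ALREADY HOLDS THE NEW VALUE; FOR j>i IT STILL HOLDS THE OLD ONE
if(j!=i) sum = sum + a[i][j]*x[j];
}
x[i] = (b[i]-sum)/a[i][i];
}
}
for (i=1; i<=n; i++) printf("x[%1d]=%11.8f\t",i,x[i]);
printf("\n");
return 0;
}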
Example:
Consider the following 3 × 3 example:

10 x_1 + 2 x_2 + x_3 = 13        eq.(1)
2 x_1 + 10 x_2 + x_3 = 13        eq.(2)
2 x_1 + x_2 + 10 x_3 = 13        eq.(3)

x_1^{k+1} = ( 13 - 2 x_2^k - x_3^k ) / 10

x_2^{k+1} = ( 13 - 2 x_1^{k+1} - x_3^k ) / 10                                      (1)

x_3^{k+1} = ( 13 - 2 x_1^{k+1} - x_2^{k+1} ) / 10

Select the initial guess x_2, x_3 = 0; then we get the new values

x_1^1 = 1.3,   x_2^1 = ( 13 - 2(1.3) - 0 ) / 10 = 1.04,   x_3^1 = ( 13 - 2(1.3) - 1.04 ) / 10 = 0.936.

Substitute x_2^1 = 1.04, x_3^1 = 0.936 into equations (1); then we get the new values

x_1^2 = 0.9984,   x_2^2 = 1.00672,   x_3^2 = 0.999648
Iteration no 'k'     x1            x2            x3            Error = max_i |x_i^{k+1} - x_i^k|
0                    0             0             0             -
1                    1.3           1.04          0.936         1.3
2                    0.9984        1.00672       0.999648      0.3016
3                    0.998691      1.000287      1.000232      0.006433
4                    0.9999194     0.9999929     1.000017      0.0012284
5                    0.9999997     0.9999983     1.000000      0.0000803

Note that for this system the Gauss-Seidel method reaches an error below 10^-4 in 5
iterations, whereas the Jacobi method above needed 9 iterations, illustrating the faster
convergence of the Gauss-Seidel method.
Algorithm for Gauss-Seidel method:
1) Consider the system of equations [A]{x} = {b}.
2) Read n, [A], {b}.
3) Set initial {x_i} = 0, for i = 1, ..., n.
4) Start the iteration loop.
5) Calculate the new {x_i} values using
   x_i^{k+1} = ( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij} x_j^k ) / a_{ii}, for i = 1, ..., n.
6) Calculate the maximum error between the current approximation and the previous
approximation, error = \max_i | x_i^{k+1} - x_i^k |, for i = 1, ..., n.
7) If the error is greater than the desired value, assign the new values of {x_i} to the old
values of {x_i} and continue the iteration loop (go to the 4th step in the algorithm).
8) Else print the current approximation of {x_i}.
Numerical code for Gauss-Seidel method:
/* THIS PROGRAM SOLVES THE SYSTEM OF LINEAR EQUATIONS BY GAUSS-SEIDEL
ITERATIVE METHOD */
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
int main()
{
int i,j,n=3,k;
//INPUT THE MATRIX [A] AND {b}
double a[10][10],b[10],x_new[10],x_old[10],sum,error;
a[1][1]= 10.0; a[1][2]= 2.0; a[1][3]= 1.0; b[1]= 13.0;
a[2][1]= 2.0; a[2][2]= 10.0; a[2][3]= 1.0; b[2]= 13.0;
a[3][1]= 2.0; a[3][2]= 1.0; a[3][3]= 10.0; b[3]= 13.0;
//INITIALIZE THE {x} TO ZERO
for (i=1; i<=n; i++)
{
x_old[i]=0.0;
}
// START THE ITERATION
// HERE k IS THE ITERATION NUMBER
k=0;
iterate:
k = k+1;
// PRINTING ITERATION NUMBER
printf("ITERATION NUMBER=%2d\n",k);
// CALCULATION OF {x[i]} USING GAUSS-SEIDEL METHOD
for (i=1; i<=n; i++)
{
sum = 0.;
for(j=1; j<=n; j++)
{
if(j<i) sum = sum + a[i][j]*x_new[j];
if(j>i) sum = sum + a[i][j]*x_old[j];
}
x_new[i] = (b[i]-sum)/a[i][i];
// PRINTING NEWLY CALCULATED {x[i]} VALUES IN EACH ITERATION
printf("x_new[%1d]=%11.8f\t",i,x_new[i]);
}
// INITIALIZING THE ERROR TO ZERO IN EVERY ITERATION
error = 0.0;
for (i=1; i<=n; i++)
{
// CALCULATION OF MAXIMUM ERROR
if(error<fabs(x_new[i]-x_old[i])) error=fabs(x_new[i]-x_old[i]);
}
printf("\nMAXIMUM ERROR IN THE CURRENT ITERATION=%e\n",error);
printf("\n");
printf("\n");
/* IF ERROR GREATER THAN DESIRED VALUE ASSIGN NEW APPROXIMATE VALUE,
x_new[i] TO THE OLD APPROXIMATE VALUE, x_old[i] AND START THE ITERATION
AGAIN */
if(error > 1e-06)
{
for(i=1; i<=n; i++) { x_old[i] = x_new[i];}
goto iterate;
}
//IF ERROR IS LESS THAN DESIRED VALUE PRINT THE RESULT
else goto result;
result:
printf("\n");
printf("The result is:");
for (i=1; i<=n; i++)
{
printf("x[%1d]=%11.8f\t",i,x_new[i]);
}
printf("\n");
return 0;
}
Successive over-relaxation (SOR) method
SOR uses the step to the next Gauss-Seidel iterate as a search direction with a fixed search
parameter ω (omega). SOR computes the next iterate as

x_i^{k+1} = x_i^k + ω ( x_{GS,i}^{k+1} - x_i^k ),

where x_{GS,i}^{k+1} is the next iterate computed by the Gauss-Seidel method.
x_1^* = ( b_1 - ( a_{12} x_2^k + a_{13} x_3^k + a_{14} x_4^k + ........... + a_{1n} x_n^k ) ) / a_{11}
x_1^{k+1} = x_1^k + ω ( x_1^* - x_1^k )

x_2^* = ( b_2 - ( a_{21} x_1^{k+1} + a_{23} x_3^k + a_{24} x_4^k + ........... + a_{2n} x_n^k ) ) / a_{22}
x_2^{k+1} = x_2^k + ω ( x_2^* - x_2^k )

x_3^* = ( b_3 - ( a_{31} x_1^{k+1} + a_{32} x_2^{k+1} + a_{34} x_4^k + ........... + a_{3n} x_n^k ) ) / a_{33}
x_3^{k+1} = x_3^k + ω ( x_3^* - x_3^k )                                            (2)

        :

x_n^* = ( b_n - ( a_{n1} x_1^{k+1} + a_{n2} x_2^{k+1} + a_{n3} x_3^{k+1} + ........... + a_{n,n-1} x_{n-1}^{k+1} ) ) / a_{nn}
x_n^{k+1} = x_n^k + ω ( x_n^* - x_n^k )
Here 'k' represents the iteration number. In general, the x_i value in the SOR method can be
calculated using

x_i^{k+1} = x_i^k + ( ω / a_{ii} ) ( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i}^{n} a_{ij} x_j^k )

Equivalently, the update is a weighted combination of the previous iterate and the
Gauss-Seidel value: x_i^{k+1} = ( 1 - ω ) x_i^k + ω x_{GS,i}^{k+1}.
Example:
Consider the following 3 × 3 example; use ω = 0.9:

10 x_1 + 2 x_2 + x_3 = 13        eq.(1)
2 x_1 + 10 x_2 + x_3 = 13        eq.(2)
2 x_1 + x_2 + 10 x_3 = 13        eq.(3)

x_1^* = ( 13 - 2 x_2^k - x_3^k ) / 10,            x_1^{k+1} = x_1^k + ω ( x_1^* - x_1^k )

x_2^* = ( 13 - 2 x_1^{k+1} - x_3^k ) / 10,        x_2^{k+1} = x_2^k + ω ( x_2^* - x_2^k )        (1)

x_3^* = ( 13 - 2 x_1^{k+1} - x_2^{k+1} ) / 10,    x_3^{k+1} = x_3^k + ω ( x_3^* - x_3^k )

Select the initial guess x_2, x_3 = 0; then we get the new values

x_1^* = 1.3                                         and  x_1^1 = 0 + 0.9(1.3) = 1.17
x_2^* = ( 13 - 2(1.17) - 0 ) / 10 = 1.066           and  x_2^1 = 0 + 0.9(1.066) = 0.9594
x_3^* = ( 13 - 2(1.17) - 0.9594 ) / 10 = 0.97006    and  x_3^1 = 0 + 0.9(0.97006) = 0.873054

Substitute x_2^1 = 0.9594, x_3^1 = 0.873054 into equations (1); then we get the new values

x_1^2 = 1.035733,   x_2^2 = 1.000933,   x_3^2 = 0.980789
Iteration no 'k'     x1            x2            x3            Error = max_i |x_i^{k+1} - x_i^k|
0                    0             0             0             -
1                    1.17          0.9594        0.873054      1.17
2                    1.035733      1.000933      0.980789      0.13426686
3                    1.005134      1.000898      0.997074      0.03059885
4                    1.000615      1.000242      0.999575      0.004519
5                    1.000056      1.000052      0.999943      0.00055898
6                    1.000001      1.00001       0.999993      0.0000548

For this system, SOR with ω = 0.9 (under-relaxation) needs 6 iterations, one more than the
Gauss-Seidel method above; this is consistent with the note below, since ω = 1 reproduces
Gauss-Seidel exactly and a value of ω away from 1 helps only when chosen appropriately.
NOTE:
- ω is a fixed relaxation parameter chosen to accelerate the convergence.
- ω > 1 gives over-relaxation, ω < 1 gives under-relaxation, and ω = 1 gives the
Gauss-Seidel method.
- SOR diverges unless 0 < ω < 2, but choosing the optimal ω is difficult in general,
except for special classes of matrices.
- With the optimal value for ω, the convergence rate of the SOR method can be an order of
magnitude faster than that of Gauss-Seidel.
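As additional background (this remark is an addition to the notes, not part of the original
example): for certain special classes of matrices, notably the consistently ordered matrices
that arise from many finite-difference discretizations, the optimal relaxation parameter is
known in closed form in terms of the spectral radius of the Jacobi iteration matrix:

ω_opt = 2 / ( 1 + \sqrt{ 1 - ρ(T_J)^2 } ),   where T_J = I - D^{-1} A

and D is the diagonal part of [A]. For matrices outside such special classes, ω is usually
chosen by numerical experiment.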
Algorithm for SOR method:
1) Consider the system of equations [A]{x} = {b}.
2) Read n, [A], {b}.
3) Set initial {x_i} = 0, for i = 1, ..., n.
4) Start the iteration loop.
5) Calculate the new {x_i} values using
   x_i^{k+1} = x_i^k + ( ω / a_{ii} ) ( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{k+1} - \sum_{j=i}^{n} a_{ij} x_j^k ), for i = 1, ..., n.
6) Calculate the maximum error between the current approximation and the previous
approximation, error = \max_i | x_i^{k+1} - x_i^k |, for i = 1, ..., n.
7) If the error is greater than the desired value, assign the new values of {x_i} to the old
values of {x_i} and continue the iteration loop (go to the 4th step in the algorithm).
8) Else print the current approximation of {x_i}.
Numerical code for SOR method:
/* THIS PROGRAM SOLVES THE SYSTEM OF LINEAR EQUATIONS BY SOR ITERATIVE
METHOD */
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
int main()
{
int i,j,n=3,k;
//INPUT THE MATRIX [A] AND {b}
double a[10][10],b[10],x_new[10],x_old[10],sum,error,omega=0.9;
a[1][1]= 10.0; a[1][2]= 2.0; a[1][3]= 1.0; b[1]= 13.0;
a[2][1]= 2.0; a[2][2]= 10.0; a[2][3]= 1.0; b[2]= 13.0;
a[3][1]= 2.0; a[3][2]= 1.0; a[3][3]= 10.0; b[3]= 13.0;
//INITIALIZE THE {x} TO ZERO
for (i=1; i<=n; i++)
{
x_old[i]=0.0;
}
// START THE ITERATION
// HERE k IS THE ITERATION NUMBER
k=0;
iterate:
k = k+1;
// PRINTING ITERATION NUMBER
printf("ITERATION NUMBER=%2d\n",k);
// CALCULATION OF {x[i]} USING SOR METHOD
for (i=1; i<=n; i++)
{
sum = 0.;
for(j=1; j<=n; j++)
{
if(j<i) sum = sum + a[i][j]*x_new[j];
else sum = sum + a[i][j]*x_old[j];
}
x_new[i] = x_old[i]+omega*(b[i]-sum)/a[i][i];
// PRINTING NEWLY CALCULATED {x[i]} VALUES IN EACH ITERATION
printf("x_new[%1d]=%11.8f\t",i,x_new[i]);
}
// INITIALIZING THE ERROR TO ZERO IN EVERY ITERATION
error = 0.0;
for (i=1; i<=n; i++)
{
// CALCULATION OF MAXIMUM ERROR
if(error<fabs(x_new[i]-x_old[i])) error=fabs(x_new[i]-x_old[i]);
}
printf("\nMAXIMUM ERROR IN THE CURRENT ITERATION=%e\n",error);
printf("\n");
printf("\n");
/* IF ERROR GREATER THAN DESIRED VALUE ASSIGN NEW APPROXIMATE VALUE,
x_new[i] TO THE OLD APPROXIMATE VALUE, x_old[i] AND START THE ITERATION
AGAIN */
if(error > 1e-06)
{
for(i=1; i<=n; i++) { x_old[i] = x_new[i];}
goto iterate;
}
//IF ERROR IS LESS THAN DESIRED VALUE PRINT THE RESULT
else goto result;
result:
printf("\n");
printf("The result is:");
for (i=1; i<=n; i++)
{
printf("x[%1d]=%11.8f\t",i,x_new[i]);
}
printf("\n");
return 0;
}
An Example of Divergence
Apply the Jacobi method to the system:

x_1 - 5 x_2 = -4        eq.(1)
7 x_1 - x_2 = 6         eq.(2)
Iteration no 'k'     x1            x2
0                    0             0
1                    -4            -6
2                    -34           -34
3                    -174          -244
4                    -1224         -1224
5                    -6124         -8574
6                    -42874        -42874
7                    -214374       -300124
With an initial approximation of x_1 = 0, x_2 = 0, neither the Jacobi method nor the
Gauss-Seidel method converges to the solution of the system of linear equations given in
the above example. A sufficient condition for the convergence of these iterative methods is
that the coefficient matrix [A] be strictly diagonally dominant. Note that interchanging the
two equations above gives 7x_1 - x_2 = 6 and x_1 - 5x_2 = -4, whose coefficient matrix is
strictly diagonally dominant, and both methods then converge to the solution x_1 = 1, x_2 = 1.
Diagonally dominant matrix: An n × n matrix [A] is strictly diagonally dominant if the
absolute value of each entry on the main diagonal is greater than the sum of the absolute
values of the other entries in the same row. That is,

| a_{11} | > | a_{12} | + | a_{13} | + ...... + | a_{1n} |
| a_{22} | > | a_{21} | + | a_{23} | + ...... + | a_{2n} |
        :
| a_{nn} | > | a_{n1} | + | a_{n2} | + ...... + | a_{n,n-1} |
Example of diagonally dominant matrix:
10 x_1 + 2 x_2 + x_3 = 13        eq.(1)
2 x_1 + 10 x_2 + x_3 = 13        eq.(2)
2 x_1 + x_2 + 10 x_3 = 13        eq.(3)

In every row the diagonal coefficient satisfies |10| > |2| + |1|, so the coefficient matrix is
strictly diagonally dominant; this is why the Jacobi, Gauss-Seidel, and SOR iterations above
all converge for this system.
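As a complement to the definition above, the following short C sketch (an addition to these
notes; it follows the 1-based array layout of the programs above and uses the same 3 × 3
example system) checks whether a coefficient matrix is strictly diagonally dominant before an
iterative method is applied.
/* SKETCH: CHECK STRICT DIAGONAL DOMINANCE OF [A] BEFORE ITERATING */
#include<stdio.h>
#include<math.h>
int main()
{
int i,j,n=3,dominant=1;
double a[10][10],offdiag;
a[1][1]= 10.0; a[1][2]= 2.0; a[1][3]= 1.0;
a[2][1]= 2.0; a[2][2]= 10.0; a[2][3]= 1.0;
a[3][1]= 2.0; a[3][2]= 1.0; a[3][3]= 10.0;
for (i=1; i<=n; i++)
{
// SUM OF ABSOLUTE VALUES OF THE OFF-DIAGONAL ENTRIES IN ROW i
offdiag = 0.0;
for (j=1; j<=n; j++)
{
if(j!=i) offdiag = offdiag + fabs(a[i][j]);
}
// STRICT DOMINANCE REQUIRES |a[i][i]| > SUM OF OFF-DIAGONAL MAGNITUDES
if(fabs(a[i][i]) <= offdiag) dominant = 0;
}
if(dominant) printf("THE MATRIX IS STRICTLY DIAGONALLY DOMINANT\n");
else printf("THE MATRIX IS NOT STRICTLY DIAGONALLY DOMINANT\n");
return 0;
}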