Different General Algorithms for Solving Poisson Equation
Mei Yin
Nanjing University of Science and Technology
SUMMARY
The objective of this thesis is to discuss the application of different general algorithms to the solution of the Poisson equation subject to a Dirichlet boundary condition on a square domain:
$$\begin{cases} -\Delta u = 2\pi^2 \sin(\pi x)\sin(\pi y) \\ u|_{\Gamma} = 0 \end{cases} \qquad \text{where } \bar{\Omega}: -1 \le x, y \le 1.$$
I adopt two kinds of grids, one with a coarse mesh and one with a fine mesh. According to different needs, I discuss the advantages and disadvantages of these algorithms. Some of the ideas may be novel, and a few improvements are offered.
* Throughout, $(J-1)^2$ denotes the number of unknowns, where J equals 50 (J1) and 100 (J2), respectively.
INTRODUCTION
The Finite Difference Method (FDM) is a primary numerical method for solving the Poisson equation. As electronic computers can only handle finite data and operations, any numerical method intended for a computer must first be discretized. The steps are as follows: 1) Mesh partition is employed and finitely many grid nodes are constructed on the continuous region. 2) The differential operators are discretized, transforming the original problem into a finite set of linear equations.
THE ALGORITHM
Consider the Poisson equation
$$\begin{cases} -\Delta u = f(x, y) \\ u|_{\Gamma} = \alpha(x, y) \end{cases} \qquad \text{where } \bar{\Omega}: -1 \le x, y \le 1.$$
Adopt a square grid with mesh lengths $h_1 = h_2 = h$ in the $x$ and $y$ directions, drawing the families of lines $x = ih$ and $y = jh$ parallel to the axes. Their intersections $(ih, jh)$ are denoted nodes $(i, j)$.
Employing second-order central differences along the $x$ and $y$ directions to substitute for $u_{xx}$ and $u_{yy}$, we arrive at the five-point finite difference formula:
$$-\Delta_h u = -\left[ \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{i,j} + u_{i,j-1}}{h^2} \right] = f_{i,j}$$
For simplicity, let $u_h$ and $f_h$ stand for the mesh functions, where $u_h(x_i, y_j) = u_{i,j}$ and $f_h(x_i, y_j) = f_{i,j} = f(x_i, y_j)$. The Poisson equation above is then rewritten as $-\Delta_h u_h = f_h$.
From Taylor expansion, we know that the truncation error of $-\Delta_h$ is $O(h^2)$.
[Figure: sketch map of the five-point stencil.]
At last, we can set up the approximate system of linear equations:
$$\frac{1}{h^2} L_h u_h = f_h$$
where
$$L_h = \begin{bmatrix} B & -I & & \\ -I & B & -I & \\ & \ddots & \ddots & \ddots \\ & & -I & B \end{bmatrix}$$
is an $(M-1)^2 \times (M-1)^2$ matrix, $I$ is the $(M-1) \times (M-1)$ identity matrix, and
$$B = \begin{bmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 4 \end{bmatrix}$$
is also an $(M-1) \times (M-1)$ matrix ($M = 2/h$).
As $L_h$ is a huge sparse matrix with at most five nonzero elements in every row, it can be seen as a band matrix with half bandwidth N and bandwidth 2N+1. Because of computer memory constraints it cannot be stored by ordinary (dense) means, so the MATLAB command "sparse" is adopted. Below is the construction process for the 50×50 mesh:
J1=50;
h1=2/J1;
n1=(J1-1)^2; %number of unknowns
a1=sparse(1:n1,1:n1,4*ones(1,n1),n1,n1); %main diagonal of 4's
a2=sparse(J1:n1,1:n1-J1+1,-ones(1,n1-J1+1),n1,n1); %the -I blocks, at offset J1-1
s=-ones(1,n1-1);
s(J1-1:J1-1:end)=0; %sub-diagonal zero elements at row boundaries are considered when establishing s
a3=sparse(2:n1,1:n1-1,s,n1,n1); %sub-diagonals of the B blocks
A1=a1+a2+a2'+a3+a3';
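For completeness, here is a minimal sketch of assembling the right-hand side for the model problem, together with a Kronecker-product cross-check of A1. The row-by-row node ordering and the names b1, T and Lh are illustrative assumptions, not taken from the attached programs.
xi=-1+h1:h1:1-h1; %interior nodes in one direction
[X,Y]=meshgrid(xi,xi);
F=2*pi^2*sin(pi*X).*sin(pi*Y); %right-hand side of the model problem
b1=h1^2*reshape(F',[],1); %scaled by h^2, since A1 stores h^2*(-Laplacian); row-by-row ordering assumed
%the boundary values are zero, so no boundary correction terms are needed
e=ones(J1-1,1);
T=spdiags([-e 2*e -e],-1:1,J1-1,J1-1); %1-D second-difference matrix
Lh=kron(speye(J1-1),T)+kron(T,speye(J1-1)); %should equal A1 exactly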
Now let us look at how to solve the linear system with coefficient matrix A. Below is the main process of the Steepest Descent (SD) Method; the first four lines initialize the iteration:
x=ones(n,1); %initial guess
r=b-A*x; %initial residual
normr=norm(r);
iter=0;
while (normr>tol)
p = A * r;
alpha = (normr)^2/(p'*r); %fix alpha
x = x + alpha * r; %form new iterate
r = b - A * x;
iter=iter+1;
ys(iter)=normr;
normr=norm(r);
end
And the main process of the Conjugate Gradient (CG) Method, initialized as above with the additional search direction p = r:
x=ones(n,1); %initial guess
r=b-A*x;
p=r; %initial search direction
normr=norm(r);
iter=0;
while (normr>tol)
q = A * p;
alpha = (normr)^2/(p'*q); %fix alpha
x = x + alpha * p; %form new iterate
r1 = r - alpha * q;
beta = (norm(r1))^2/(normr)^2; %fix beta
r=r1;
p = r + beta * p;
iter=iter+1;
yc(iter)=normr;
normr=norm(r);
end
When A is ill-conditioned ($K \gg 1$), convergence becomes very slow. To tackle this problem, preconditioning is introduced to reduce A's condition number.
The simplest way is to choose a diagonal matrix (Jacobi preconditioning); however, it is not always effective. Indeed, since the diagonal of A here is constant (D = 4I), Jacobi preconditioning merely rescales the residuals and reproduces the CG iterates, which is why CG and PCG take identical step counts in the tables below:
M=sparse(1:n,1:n,0.25*ones(1,n),n,n); %stores D^(-1)=0.25*I, the inverse of the Jacobi preconditioner D=4*I
x=ones(n,1); %initial guess
r=b-A*x;
normr=norm(r);
iter=0;
z=M*r; %apply the preconditioner: z = M^(-1)*r in the usual notation
p=z;
while (normr>tol)
q = A * p;
alpha = (z'*r)/(p'*q); %fix alpha
x = x + alpha * p; %form new iterate
r1= r - alpha * q;
z1=M*r1;
beta = (z1'*r1)/(z'*r); %fix beta
z=z1;
r=r1;
p = z + beta * p;
iter=iter+1;
yp(iter)=normr;
normr=norm(r);
end
A tridiagonal matrix M is another good choice; the systems Mz = r that then arise can be solved by the Thomas Algorithm:
a1=diag(diag(A));
a2=diag(diag(A,-1),-1);
M=a1+a2+a2';
%tridiagonal preconditioner M
The Thomas Algorithm solves the tridiagonal system $Ax = d$ with
$$A = \begin{pmatrix} b_1 & c_1 & & \\ a_2 & b_2 & c_2 & \\ & \ddots & \ddots & \ddots \\ & & a_n & b_n \end{pmatrix}, \qquad d = \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \end{pmatrix}.$$
Perform the LU decomposition of A:
$$\begin{cases} u_1 = b_1 \\ l_i = a_i / u_{i-1}, & i = 2, 3, \ldots, n \\ u_i = b_i - l_i c_{i-1} \end{cases}$$
$Ax = d$ is thus transformed into $Ly = d$ and $Ux = y$ (assuming $u_i \ne 0$):
$$\begin{cases} y_1 = d_1 \\ y_i = d_i - l_i y_{i-1}, & i = 2, 3, \ldots, n \end{cases} \qquad \begin{cases} x_n = y_n / u_n \\ x_i = (y_i - c_i x_{i+1}) / u_i, & i = n-1, n-2, \ldots, 1 \end{cases}$$
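A minimal MATLAB sketch of the Thomas Algorithm follows. It is not the attached Thomas program itself; the function name and the convention that a(1) and c(n) are unused are assumptions for illustration.
function x=thomas(a,b,c,d)
%Solve a tridiagonal system by the Thomas Algorithm.
%a: sub-diagonal (a(1) unused), b: main diagonal,
%c: super-diagonal (c(n) unused), d: right-hand side.
n=length(d);
u=zeros(n,1); y=zeros(n,1); x=zeros(n,1);
u(1)=b(1);
y(1)=d(1);
for i=2:n %forward sweep: LU factors and Ly=d together
l=a(i)/u(i-1);
u(i)=b(i)-l*c(i-1);
y(i)=d(i)-l*y(i-1);
end
x(n)=y(n)/u(n);
for i=n-1:-1:1 %back substitution: Ux=y
x(i)=(y(i)-c(i)*x(i+1))/u(i);
end
end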
An even better choice is to take M as the SSOR preconditioner, which reduces A's condition number to roughly its square root. In particular, for $\omega = 1$ the symmetric Gauss-Seidel iteration achieves good results:
a1=tril(A);
a2=sparse(1:n,1:n,0.5*ones(1,n),n,n); %D^(-0.5)=0.5*I, since D=4*I
S=a1*a2; %S=(D-L)*D^(-0.5)
M=S*S';
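Since M = SS' with S lower triangular, the preconditioning step z = M⁻¹r inside PCG reduces to two sparse triangular solves rather than forming M⁻¹ explicitly; a minimal sketch, assuming S from the lines above:
w=S\r; %forward substitution with the lower triangular S
z=S'\w; %back substitution with the upper triangular S'
%MATLAB's backslash detects the triangular structure and solves in O(nnz) time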
The Two-grid Method is a very popular parallel algorithm these days, which serves to reduce the amount of computation needed. Its main principle: write $u_M = \bar{u}_M + v_M$, where $L_M \bar{u}_M = f_M - d_M$ and $L_M v_M = d_M$; here $v_M$ is the modifier (correction). A mesh-smoothing iteration is embedded in between. (The overall sequence of iteration on the fine mesh, restriction, iteration on the coarse mesh, interpolation, and then further iteration on the fine mesh represents one cycle of iteration.)
The general (Gauss-Seidel) iteration formula is adapted to this problem so that computation is further reduced: the first and last (J-1) nodes (the first and last grid rows) are handled separately, the other rows are treated as a whole, and the sub-diagonal nonzero pattern is also taken into consideration:
c=sqrt(n); %c equals J-1, the number of nodes per grid row
x(1)=(b(1)+x(2)+x(1+c))/4;
for i=2:c
x(i)=b(i)+x(i-1);
if i~=c
x(i)=x(i)+x(i+1);
end
x(i)=x(i)+x(i+c);
x(i)=x(i)/4; %compute the first grid row of (J-1) nodes
end
k=1;
for i=2:c-1
for j=1:c
e=(i-1)*c+j;
x(e)=b(e)+x(e-c)+x(e+c);
if e~=k*c+1
x(e)=x(e)+x(e-1);
end
if e~=(k+1)*c
x(e)=x(e)+x(e+1);
end
x(e)=x(e)/4;
end
k=k+1;
end %interior rows are treated as a whole
for i=n-c+1:n-1
x(i)=b(i)+x(i+1);
if i~=n-c+1
x(i)=x(i)+x(i-1);
end
x(i)=x(i)+x(i-c);
x(i)=x(i)/4;
end
x(n)=(b(n)+x(n-1)+x(n-c))/4;
%compute the last grid row of (J-1) nodes
Programming used for restriction and interpolation: vectors are transformed into nodes on the grid, arrayed first horizontally and then vertically. [Figure: sketch map of the node ordering.]
Define the restriction operator $I_h^H$ (from fine mesh to coarse mesh) and the interpolation operator $I_H^h$ (from coarse mesh to fine mesh, with distribution according to weight coefficients):
$$I_h^H = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}, \qquad I_H^h = \frac{1}{4}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}.$$
It is quite straightforward to design the restriction operator $I_h^H$:
k=1;
for i=2:2:J2-2
for j=2:2:J2-2 %spacing is doubled on the coarse mesh
u(k)=0.0625*(A(i+1,j-1)+2*A(i+1,j)+A(i+1,j+1)+2*A(i,j-1)+4*A(i,j)+2*A(i,j+1)+A(i-1,j-1)+2*A(i-1,j)+A(i-1,j+1)); %weighted average of the nine neighbouring fine-mesh values
k=k+1;
end
end
When designing $I_H^h$, the situation becomes complicated: boundary and interior points need to be considered separately, and different relative positions between fine mesh nodes and coarse mesh nodes lead to different weight coefficients. The order of treatment is:
1) interior: coarse mesh nodes → fine mesh nodes on different rows and different columns → fine mesh nodes on different rows but the same columns → fine mesh nodes on different columns but the same rows;
2) boundary: four vertices → fine mesh nodes on the same rows (first and last columns) → fine mesh nodes on different rows (first and last columns) → fine mesh nodes on the same columns (first and last rows) → fine mesh nodes on different columns (first and last rows).
Finally, the mesh nodes are transformed back into vectors. A sketch of one complete two-grid cycle follows.
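To show how these pieces fit together, here is a minimal sketch of one two-grid cycle. The calling conventions of seidel, restrict and interpolate are assumptions for illustration and do not reproduce the exact signatures of the attached programs; AH denotes the coarse-mesh matrix.
x=seidel(x,b); %pre-smoothing on the fine mesh
d=b-A*x; %fine-mesh residual (defect)
dH=restrict(d); %restriction: fine mesh -> coarse mesh
vH=AH\dH; %modification on the coarse mesh: solve L_M*v_M=d_M
v=interpolate(vH); %interpolation: coarse mesh -> fine mesh
x=x+v; %modification on the fine mesh
x=seidel(x,b); %post-smoothing on the fine mesh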
STRUCTURE PROGRAMMING
Data Flow Graph
[Flowchart: input → evaluate x (repeat while normr > tol) → output.]
Steepest Descent (SD) Method: starting from the initial value $x^{(0)}$, iterate until the error criterion is met:
$$r^{(k)} = b - Ax^{(k)}, \qquad \alpha_k = \frac{(r^{(k)}, r^{(k)})}{(Ar^{(k)}, r^{(k)})}, \qquad x^{(k+1)} = x^{(k)} + \alpha_k r^{(k)}.$$
Conjugate Gradient (CG) Method: starting from $x^{(0)}$, set $r^{(0)} = b - Ax^{(0)}$ and $p^{(0)} = r^{(0)}$, then iterate until the error criterion is met:
$$\alpha_k = \frac{(r^{(k)}, r^{(k)})}{(p^{(k)}, Ap^{(k)})}, \quad x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}, \quad r^{(k+1)} = r^{(k)} - \alpha_k Ap^{(k)},$$
$$\beta_k = \frac{(r^{(k+1)}, r^{(k+1)})}{(r^{(k)}, r^{(k)})}, \quad p^{(k+1)} = r^{(k+1)} + \beta_k p^{(k)}.$$
Preconditioned Conjugate Gradient (PCG) Method: starting from $x^{(0)}$, set $r^{(0)} = b - Ax^{(0)}$ and $p^{(0)} = z^{(0)} = M^{-1} r^{(0)}$, then iterate until the error criterion is met (the coefficients match the PCG code above):
$$\alpha_k = \frac{(z^{(k)}, r^{(k)})}{(p^{(k)}, Ap^{(k)})}, \quad x^{(k+1)} = x^{(k)} + \alpha_k p^{(k)}, \quad r^{(k+1)} = r^{(k)} - \alpha_k Ap^{(k)}, \quad z^{(k+1)} = M^{-1} r^{(k+1)},$$
$$\beta_k = \frac{(z^{(k+1)}, r^{(k+1)})}{(z^{(k)}, r^{(k)})}, \quad p^{(k+1)} = z^{(k+1)} + \beta_k p^{(k)}.$$
Two-grid Method: one cycle consists of relaxation on the fine mesh → restriction → relaxation on the coarse mesh → modification on the coarse mesh → interpolation → modification on the fine mesh, after which the error is checked.
Module Structure
The main M-file calls the PCG Algorithm (which in turn calls the Thomas Algorithm) and the Two-grid Method (which calls Restriction, Interpolation, Seidel, Thomas, and PCG).
RESULTS AND FURTHER DISCUSSION
Characteristics of the difference scheme:
$$-\Delta_h u_{i,j} = f_{i,j}, \quad (x_i, y_j) \in D_h; \qquad u_{i,j} = \alpha_{i,j}, \quad (x_i, y_j) \in \partial D_h \quad (\star)$$
Suppose $u_{i,j}$ is a function defined on the domain $D_h \cup \partial D_h$. We have:
1) if $\Delta_h u_{i,j} \ge 0$ for all $(x_i, y_j) \in D_h$, then $\max_{D_h} u_{i,j} \le \max_{\partial D_h} u_{i,j}$;
2) if $\Delta_h u_{i,j} \le 0$ for all $(x_i, y_j) \in D_h$, then $\min_{D_h} u_{i,j} \ge \min_{\partial D_h} u_{i,j}$.
$(\star)$ has a unique solution, and
$$\max_{D_h} |u_{i,j}| \le \max_{\partial D_h} |u_{i,j}| + \frac{\alpha^2}{2} \max_{D_h} |\Delta_h u_{i,j}|,$$
where $\alpha$ here is the x-side length of $D$.
The five-point finite difference method converges, with $\max_{D_h} |u_{i,j} - u(x_i, y_j)| \le O(h^2 + k^2)$, where $h$ and $k$ are the $x$ and $y$ mesh widths (here $k = h$).
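Since the exact solution of the model problem is $u = \sin(\pi x)\sin(\pi y)$, the $O(h^2)$ claim can be checked numerically. A minimal sketch, assuming x holds the computed solution on the J1 = 50 mesh (the exact solution is symmetric in x and y, so the row/column ordering of the unknowns does not matter here):
xi=-1+h1:h1:1-h1; %interior nodes
[X,Y]=meshgrid(xi,xi);
Uexact=sin(pi*X).*sin(pi*Y); %exact solution of the model problem
U=reshape(x,J1-1,J1-1); %computed vector -> grid
err=max(abs(U(:)-Uexact(:))); %discrete maximum-norm error, expected O(h1^2)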
Numerical results (initial value x0 = ones(n,1)); entries are iteration steps:

J1 = 50
Precision   Steepest Descent   CG    PCG   Revised PCG1   Revised PCG2   Two-grid Method
0.01        1920               72    72    52             27             28
0.001       3086               78    78    58             29             36

J2 = 100
Precision   Steepest Descent   CG    PCG   Revised PCG1   Revised PCG2   Two-grid Method
0.01        6311               142   142   101            51             28
0.001       10977              156   156   112            56             36

Note that the Two-grid Method's step count is essentially independent of the mesh size, whereas every Krylov-type method needs more steps on the finer mesh; note also that CG and PCG coincide exactly, as explained above.
For the Steepest Descent (SD) Method, solving $Ax = b$ is equivalent to finding the minimum point of $\varphi(x) = \frac{1}{2}(Ax, x) - (b, x)$. Start from an arbitrary $x^{(0)}$ and find the direction along which $\varphi$ decreases fastest at $x^{(0)}$, i.e. $-\nabla\varphi(x^{(0)}) = r^{(0)} = b - Ax^{(0)}$. If $r^{(0)} \ne 0$, find the $\alpha$ that minimizes $\varphi(x^{(0)} + \alpha r^{(0)})$. This represents one cycle of iteration.
It can further be proved that $\lim_{k \to \infty} x^{(k)} = x^* = A^{-1}b$ and
$$\|x^{(k)} - x^*\|_A \le \left( \frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} \right)^k \|x^{(0)} - x^*\|_A, \qquad \|u\|_A = (Au, u)^{1/2},$$
where $\lambda_1$ and $\lambda_n$ are A's maximum and minimum eigenvalues, respectively. Convergence is extremely slow when $\lambda_1 \gg \lambda_n$, so this method is not of practical use.
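As a rough sanity check of this bound, using the standard eigenvalue formula for the five-point Laplacian (an assumption here, since it is not derived above):
$$\lambda_n = 8\sin^2\frac{\pi}{2J} \approx \frac{2\pi^2}{J^2}, \qquad \lambda_1 \approx 8, \qquad \frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n} \approx 1 - \frac{\pi^2}{2J^2},$$
so the number of SD steps grows roughly like $J^2$: doubling J from 50 to 100 should roughly quadruple the step count, broadly consistent with the growth from 1920/3086 to 6311/10977 in the tables above.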
CG is a variational method. It first appeared in the 1950s, gained popularity with the development of PCG in the 1980s, and is now a primary method for solving huge sparse systems. In essence, it requires
$$\varphi(x^{(k)}) = \min_{y \,\in\, x^{(0)} + \operatorname{span}\{p^{(0)}, \ldots, p^{(k-1)}\}} \varphi(y).$$
Theoretically, this method finds the solution after at most $n$ iteration steps; however, because of rounding error, this goal might not be achieved so easily.
We have
$$\|x^{(k)} - x^*\|_A \le 2\left( \frac{\sqrt{K} - 1}{\sqrt{K} + 1} \right)^k \|x^{(0)} - x^*\|_A, \qquad K = \operatorname{cond}_2(A).$$
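This bound can be checked numerically; a minimal sketch, assuming A1 from the 50×50 construction above (condest estimates the 1-norm condition number, which is close enough to $\operatorname{cond}_2$ for this symmetric matrix):
K=condest(A1); %roughly 4*J1^2/pi^2 for the five-point Laplacian
rho=(sqrt(K)-1)/(sqrt(K)+1); %per-step factor in the CG bound
k=ceil(log(0.01/2)/log(rho)); %steps predicted so that 2*rho^k<=0.01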
When A is ill-conditioned ($K \gg 1$), convergence becomes very slow, and preconditioning is introduced to reduce A's condition number.
Consider the Cholesky decomposition $M = LL^T$ and the splitting $A = M - N$, where M is symmetric positive definite and N is as near zero as possible. The simplest choice is a diagonal matrix (Jacobi preconditioning), which my attached PCG programme adopts. My Revised PCG1 chooses the tridiagonal matrix M as the preconditioner, and my Revised PCG2 chooses the SSOR preconditioner ($M = SS^T$ where $S = (D - L)D^{-0.5}$).
For the Two-grid Method: the eigenfunctions are $\varphi_m(x) = 2\sin(m_1 \pi x_1)\sin(m_2 \pi x_2)$ with eigenvalues $\lambda_m = 1 - \frac{1}{2}(\cos(m_1 \pi h) + \cos(m_2 \pi h))$.
In the plots below, x denotes iteration steps and y denotes error.
[Figures: convergence curves (error vs. iteration steps) for J1 = 50 and J2 = 100 at precisions 0.01 and 0.001.]
The Multigrid (MG) Method is nearly always the best choice and has been widely used since 1977 for solving elliptic boundary value problems. [Figure: sketch map of one cycle of MG iteration.]
Below is a comparison table of asymptotic operation counts.

Method         Two dimensions   Three dimensions
Jacobi         $O(N^4)$         $O(N^5)$
Gauss-Seidel   $O(N^4)$         $O(N^5)$
SOR            $O(N^3)$         $O(N^4)$
MG             $O(N^2)$         $O(N^3)$
A LIST OF RELATED PROGRAMS
sparseparameters1: 50×50 mesh
sparseparameters2: 100×100 mesh
steepestdescent: Steepest Descent Method
cg: CG Method
pcg: PCG Method (Jacobi preconditioning)
pcgg: PCG Method (tridiagonal preconditioner, solved by the built-in solver)
revisedpcg1: PCG Method (tridiagonal preconditioner, solved by the Thomas Algorithm)
Thomas: Thomas Algorithm
revisedpcg2: PCG Method (SSOR preconditioner)
grid: Two-grid Method
restrict: Restriction Algorithm
interpolate: Interpolation Algorithm
seidel: Mesh-smoothing Algorithm
REFERENCES
Yang Huazhong, Wang Hui. Numerical Methods & C Language. Science Press, 1996.
Dai Jiazun, Qiu Jianxian. Numerical Methods for Differential Equations. Southeast University Press, 2002.
Li Ronghua, Feng Guochen. Numerical Methods for Differential Equations. People's Education Press, 1980.
Li Qingyang, Guan Zhi, Bai Fengshan. Numerical Computation. Tsinghua University Press, 2000.
Tang Huaimin, Hu Jianwei. Numerical Methods for Differential Equations. Nankai University Press, 1990.
Wang Moran. MATLAB & Scientific Computation. Publishing House of Electronics Industry, 2004.
Lu Jinfu, Guan Zhi. Numerical Methods for Partial Differential Equations. Tsinghua University Press, 1987.
Guan Zhi, Lu Jinfu. Introduction to Numerical Analysis. Higher Education Press, 1998.