Finite Element Methods on a Time-Varying System

Christopher Lustri

June 16, 2010

1 Introduction
We wish to explore the theory underlying the implementation of finite element
methods in numerical analysis, particularly with reference to solutions of the partial differential equation
−∇²u = f in Ω,    (1)

u = gD on ∂ΩD and ∂u/∂n = gN on ∂ΩN,    (2)

where Ω ⊂ R², ∂ΩD ∪ ∂ΩN = ∂Ω and the two boundary regions are distinct with ∂ΩD non-empty. Specifically, we will consider the key results in the development of the theory of finite elements, including weak solutions and the structure of the individual elements, as well as the numerical implementation of the method. We will subsequently implement a finite element numerical solver in order to solve this equation in a region of space, with f, gD and gN prescribed.
We will then extend these concepts in order to consider a time-varying problem
using Euler time-steps, and finally implement a finite element solution to a simple
time-varying analogue of the original spatial partial differential equation, given as
∂u/∂t − ∇²u = f in Ω,    (3)

u = u0 in Ω at t = 0,    (4)
with the same boundary conditions as before. We will consider the problem with
both steady and time-varying boundary conditions.
2 Finite Element Methods on a Spatial Problem

2.1 Weak formulation
Many equations of interest in applied mathematics have ‘classical’ solutions. We
consider the sets of functions
C⁰(Ω) = {f : Ω → R | f is continuous},

C²(Ω) = {f : Ω → R | f_{x_i x_j} ∈ C⁰(Ω) for all i, j}.
If Ω is sufficiently regular, a solution u to a second-order partial differential equation, such as that given in (1, 2), satisfying u ∈ C²(Ω) is known as a classical, or strong, solution [4]. That is, if there exists a solution which has continuous second partial derivatives, and is therefore sufficiently smooth, it is a strong solution to the problem.
There are numerous numerical techniques that may be applied in order to find
classical solutions to partial differential equations [1, 2, 5]. Unfortunately, many
problems of interest do not admit classical solutions. If this is the case, we must
instead search for a variational, or weak formulation of the original problem. This
involves reformulating the original problem such that the smoothness requirement
is relaxed. The resultant expression is known as a weak formulation of the problem.
In order to obtain a weak formulation, we consider a test function w that lies in
the Sobolev space
H¹_{E0}(Ω) = {v ∈ H¹(Ω) | v = 0 on ∂ΩD},

where

H¹(Ω) = {v : Ω → R | v, v_x, v_y ∈ L²(Ω)},

L²(Ω) = {v : Ω → R | ∫_Ω v² < ∞}.
That is, we require the function w and its associated first derivatives to be square-integrable. Now, if we integrate the product of the equation given in (1) and the test function w, we obtain

−∫_Ω (∇²u) w = ∫_Ω f w.
Integration by parts and application of the divergence theorem gives
∫_Ω ∇u · ∇w = ∫_Ω w f + ∫_∂Ω w (∂u/∂n).
Substitution of the boundary conditions given in (2), noting that w ∈ H¹_{E0}(Ω), gives the formulation
∫_Ω ∇u · ∇w = ∫_Ω w f + ∫_{∂ΩN} w gN.    (5)
Figure 1: Triangular tessellation of a domain Ω using triangular elements ∆k. The domain is shaded blue, with nodes denoted by bold circles, and the boundary of the domain, ∂Ω, denoted by bold lines.
This is the weak formulation of the partial differential equation given in (1). In order for a solution u to be considered a weak solution of (1, 2), we require that

u ∈ H¹_E(Ω) = {v ∈ H¹(Ω) | v = gD on ∂ΩD},

and that u satisfies the weak formulation of (1), given in (5) [3]. While for strong solutions we required the second derivatives to be continuous, for weak solutions we simply require that the first derivatives be square-integrable. It may be demonstrated that all strong solutions to the problem also satisfy the conditions to be considered weak solutions. By considering the weak formulation, we are effectively permitting a broader range of solutions to the original problem by extending the solution space.
2.2 Galerkin method
The Galerkin approximation method allows us to convert a continuous problem, such as the weak formulation of the partial differential equation given in (5), into a discrete problem that may be solved numerically. This method involves using a finite dimensional subspace S0 of H¹_{E0}, and therefore obtaining a solution in a finite dimensional subset of H¹_E.
The first stage is to tessellate the domain Ω into non-overlapping regions. Any polygonal domain may be tessellated using triangles and, as such, triangular tessellations are frequently chosen when implementing the Galerkin method. It is, however, possible to divide the domain up in a variety of fashions, such as quadrilateral tessellations, which will be touched upon in a later section. We will assume that the domain is tessellated with non-overlapping triangles, or elements ∆k for k = 1, . . . , K, such that the vertices of neighbouring elements meet at points called nodes. An example of this tessellation is included in Figure 1.
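For illustration, a structured tessellation of this kind may be generated programmatically. The following Python sketch (the function name and the choice of Python, rather than the MATLAB used in the appendix, are our own) builds the nodes and triangle connectivity for the unit square by splitting each cell of a regular grid into two triangles:

```python
import numpy as np

def triangulate_unit_square(nx, ny):
    """Tessellate [0,1]^2 into 2*nx*ny triangles by splitting each
    cell of an nx-by-ny grid of squares along one diagonal."""
    xs = np.linspace(0.0, 1.0, nx + 1)
    ys = np.linspace(0.0, 1.0, ny + 1)
    # Node (i, j) has global index j*(nx+1) + i
    nodes = np.array([(x, y) for y in ys for x in xs])
    tris = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i      # bottom-left of cell
            n1 = n0 + 1                # bottom-right
            n2 = n0 + (nx + 1)         # top-left
            n3 = n2 + 1                # top-right
            tris.append((n0, n1, n3))  # lower triangle
            tris.append((n0, n3, n2))  # upper triangle
    return nodes, np.array(tris)

nodes, tris = triangulate_unit_square(4, 4)
print(len(nodes), len(tris))  # 25 32
```

A 4-by-4 grid gives 25 nodes and 32 elements; the triangle areas sum to the area of the domain.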
Figure 2: The right-hand side shows a schematic of a typical φj, which is nonzero at node j and zero at all other nodes, with φj defined in a piecewise fashion such that it is linear in each triangle. The left-hand side shows φj on a single triangle. On each triangle, φj is nonzero for three different values of j, corresponding to each of the three nodes of the triangle.
The subspace S0 is typically defined as the span of a set of basis functions, such
that
S0 = span{φ1 , φ2 , . . . , φn },
where φj for j = 1, . . . , n is a set of linearly independent basis functions, and n is
the number of nodes in the tesselation of the domain that are not found on ∂ΩD .
The remaining nodes found on ∂ΩD are denoted as node j = n+1, n+2, . . . , n+n∂ .
A common method of constructing S0 is to select φj such that φj is continuous,
equal to one at node j and equal to zero on all elements not touching node j. On
the elements that do touch node j, φj is linear in the form ax + by + c such that
the previous requirements are satisfied. A schematic of the function φj may be
seen in Figure 2. The set {φ1 , φ2 , . . . , φn } forms a basis for the space S0 = P1 ,
which is the space of continuous piecewise linear functions on the triangulation.
We also define φn+1 , φn+2 , . . . , φn+n∂ in a similar fashion.
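The nodal property of these basis functions can be checked numerically. The Python sketch below (an illustrative routine of our own, not part of the original implementation) evaluates the three linear basis functions of a triangle via barycentric coordinates; each equals one at its own vertex and zero at the other two, and the three sum to one everywhere on the element:

```python
import numpy as np

def hat_functions(tri_xy, x, y):
    """Evaluate the three linear nodal basis functions of a triangle
    at the point (x, y), via barycentric coordinates.
    tri_xy is a 3x2 array of vertex coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = tri_xy
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # twice the area
    lam1 = ((x2 - x) * (y3 - y) - (x3 - x) * (y2 - y)) / det
    lam2 = ((x3 - x) * (y1 - y) - (x1 - x) * (y3 - y)) / det
    lam3 = 1.0 - lam1 - lam2
    return np.array([lam1, lam2, lam3])

tri = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
print(hat_functions(tri, 0.0, 0.0))         # [1. 0. 0.]
print(hat_functions(tri, 0.5, 0.25).sum())  # 1.0 (partition of unity)
```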
There are other methods of constructing the approximation space S0 such that it takes different forms. It is possible to use quadratic functions of the form ax² + bxy + cy² + dx + ey + f, rather than linear functions, on each element. This requires nodes to be defined on the sides of each triangle, as well as at the vertices.
In fact, it is possible to select polynomials of any order, although this typically
involves selecting more nodes on the boundary, as well as internal nodes. To
define cubic basis functions, we require that the triangles contain 10 nodes, with
four on each side and one internal node. While there are some situations where
selecting higher-order basis functions is desirable, they are beyond the scope of
this document, and we will consider only linear basis functions for the purposes of
calculation. For illustrative purposes, Figure 3 contains an example of a triangular
element containing part of a quadratic basis function. Additionally, the elements
may not be triangular, but may instead take different shapes, such as quadrilateral
elements. Again, there are some situations where such basis functions may be
Figure 3: A single triangular element containing part of a piecewise quadratic basis function φj. The element contains six nodes, which are required in order to generate the basis function.
more useful for computational purposes, however that is also beyond the scope of
this document. Further information on element types and basis functions may be
found in Elman et al. [3].
The Galerkin method involves seeking

uh = Σ_{j=1}^{n} uj φj + Σ_{j=n+1}^{n+n∂} uj φj ∈ H¹_E,    (6)
such that uh satisfies the weak form of the original problem, given in (5). This
gives
∫_Ω ∇uh · ∇w = ∫_Ω w f + ∫_{∂ΩN} w gN,    (7)
for all w ∈ S0 . As S0 = span{φ1 , φ2 , . . . , φn }, this is equivalent to finding uh such
that
∫_Ω ∇uh · ∇φi = ∫_Ω φi f + ∫_{∂ΩN} φi gN,
for i = 1, 2, . . . , n. Using (6), this may be expressed in the form of a linear system
Au = f such that
A = {ai,j}  where  ai,j = ∫_Ω ∇φj · ∇φi,    (8)
u = (u1, u2, . . . , un)^T and f = (f1, f2, . . . , fn)^T, where

fi = ∫_Ω φi f + ∫_{∂ΩN} φi gN − Σ_{j=n+1}^{n+n∂} uj ∫_Ω ∇φj · ∇φi.
As such, the Galerkin method involves determining A and f , and then solving the
resultant linear system in order to obtain an approximation for u in S0 . We could
apply this method to problems with different governing equations, however this
entails obtaining a new weak formulation for the problem.
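To make the structure of the system Au = f concrete, here is a minimal one-dimensional analogue in Python (our own illustrative reduction, not the two-dimensional solver of the appendix): linear hat functions on a uniform mesh for −u″ = 1 on (0, 1) with homogeneous Dirichlet conditions, where a_{ij} = ∫ φ′_j φ′_i and f_i = ∫ φ_i.

```python
import numpy as np

m = 8                         # number of intervals (an arbitrary choice)
h = 1.0 / m
n = m - 1                     # interior nodes carry the unknowns
A = np.zeros((n, n))
f = np.full(n, h)             # f_i = \int phi_i * 1 dx = h
for i in range(n):
    A[i, i] = 2.0 / h         # a_ii = \int (phi_i')^2 dx
    if i > 0:
        A[i, i - 1] = A[i - 1, i] = -1.0 / h
u = np.linalg.solve(A, f)

# For this problem the nodal values coincide with the exact solution
# u(x) = x(1-x)/2, so the value at x = 1/2 is 1/8.
print(u[n // 2])
```

The tridiagonal structure of A reflects the local support of the hat functions: a_{ij} vanishes unless nodes i and j share an element, and the same sparsity appears in the two-dimensional stiffness matrix.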
Figure 4: Triangular tessellation of part of the domain, specified in (9) as 0 < x < 1 and 0 < y < 1, using triangular elements ∆k. The part of the domain shown is shaded blue, with nodes denoted by bold circles, and the boundary of the domain, ∂Ω, denoted by bold lines.
2.3 Implementation
In order to demonstrate the Galerkin finite-element method, we considered the
problem
−∇²u = f in 0 < x < 1, 0 < y < 1,    (9)

with discontinuous Dirichlet conditions on the left boundary, given by

u = 0 for x = 0, y < 0.5,  and  u = 1 for x = 0, y ≥ 0.5,    (10)
and no-flux Neumann conditions prescribed on the remaining boundaries. The boundary conditions are discontinuous, which means that we will require a weak solution to the partial differential equation. We first considered the problem with no forcing term, f = 0, and then with a nonzero forcing term, f = 0.25y. In order to discretise the domain, the region in question was first divided into squares, which were subsequently split into two triangular regions each, as seen in Figure 4. The domain consisted of 30 rows, each containing 60 triangles.
The Galerkin method described in the previous section was implemented according to the algorithm described by Elman et al. [3]. This method involves computing the stiffness matrix A and the load vector f numerically. Calculations on each element were performed by mapping ∆k, with vertices at (x, y) = (x1, y1), (x2, y2) and (x3, y3), to a reference element ∆* with vertices at (ξ, µ) = (0, 0), (0, 1) and (1, 0), with the mapping given as

x(ξ, µ) = x1 χ1(ξ, µ) + x2 χ2(ξ, µ) + x3 χ3(ξ, µ),
y(ξ, µ) = y1 χ1(ξ, µ) + y2 χ2(ξ, µ) + y3 χ3(ξ, µ),
Figure 5: Solution to the spatial problem described in (9) with boundary conditions given in (10) along x = 0 and no-flux conditions on the remaining boundaries, with (a) f = 0 and (b) f = 0.25y.
with
χ1 = 1 − ξ − µ,
χ2 = ξ,
χ3 = µ.
This simplified the numerical evaluation of the integrals within the definition of
A and f . Integrals performed over an element were performed using triangular
quadrature such that
∫_{∆*} h(ξ, µ) dξ dµ ≈ (1/2) h(1/3, 1/3),
while the trapezium rule was applied in order to evaluate line integrals along the edges of elements. The stiffness matrix and load vector were then assembled and mapped back into the (x, y) domain. The solution u to the equation Au = f was then computed using MATLAB. The results may be seen in Figure 5.
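The mapping and one-point quadrature rule just described can be combined into a single small routine. The Python sketch below (an illustration with names of our own choosing) integrates a function over a physical triangle by evaluating it at the image of (ξ, µ) = (1/3, 1/3), which is the centroid, weighted by the Jacobian of the map:

```python
def integrate_on_triangle(h, verts):
    """Approximate the integral of h(x, y) over a triangle using the
    one-point (centroid) rule.  Exact when h is linear."""
    (x1, y1), (x2, y2), (x3, y3) = verts
    # Jacobian of the map x(xi, mu), y(xi, mu); |J| is twice the area
    J = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    # The centroid is the image of (xi, mu) = (1/3, 1/3)
    xc = (x1 + x2 + x3) / 3.0
    yc = (y1 + y2 + y3) / 3.0
    return 0.5 * abs(J) * h(xc, yc)

val = integrate_on_triangle(lambda x, y: x + y, [(0, 0), (1, 0), (0, 1)])
print(val)  # 1/3, exact since the integrand is linear
```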
We see that in both cases, the heat flows into the domain where the boundary value is equal to one, and out of the domain where the boundary value is equal to zero. In the figure with a forcing term, we see that the gradient of the heat flow is steeper in the bottom half of the domain and less steep at the top of the domain, as the contribution from the forcing term is proportional to y. It is important to note that the discontinuous boundary condition does not pose a problem in either case, as the weak formulation requires only square-integrability rather than smoothness.
3 Finite-Element Methods on a Time-Varying Problem
We may now extend the ideas developed in the previous section in order to develop
a scheme for solving the time-varying problem given in (3, 4). To accomplish this,
we obtain a weak formulation for the solution in space, and approximate the solution in space by finite elements as before. This will produce a system of ordinary
differential equations which may be solved using finite difference methods. We
will use the Euler method, although other methods are applicable.
3.1 Weak formulation and finite element scheme
The weak formulation for the time-varying problem given in (3) takes the form
∫_Ω (∂u/∂t) w + ∫_Ω ∇u · ∇w = ∫_Ω w f + ∫_{∂ΩN} w gN.
We then discretise the spatial domain as before, and apply the Galerkin approximation, searching for uh ∈ H¹_E that solves the equation in the finite subspace S0 = span{φ1, φ2, . . . , φn}. We therefore find that the Galerkin equation takes the form
∫_Ω (∂uh/∂t) w + ∫_Ω ∇uh · ∇w = ∫_Ω w f + ∫_{∂ΩN} w gN.    (11)
Aside from the time-derivative, this equation is identical to that given in (7). As
such, each of these terms has been addressed in the previous section. It remains
to be demonstrated how the time-derivative alters the resulting system. In order
to discretise this term in space, we begin with the expression for uh given in (6).
From this, we see that
∂uh/∂t (x, t) = Σ_{j=1}^{n} (duj/dt)(t) φj(x) + Σ_{j=n+1}^{n+n∂} (duj/dt)(t) φj(x) ∈ H¹_E.
We therefore find that the discrete equation contains
∫_Ω (∂uh/∂t) φi = Σ_{j=1}^{n} (duj/dt)(t) ∫_Ω φj(x) φi(x) + Σ_{j=n+1}^{n+n∂} (duj/dt)(t) ∫_Ω φj(x) φi(x).
The time-varying discretised system may therefore be expressed using (6) as
Σ_{j=1}^{n} u̇j ∫_Ω φj φi + Σ_{j=1}^{n} uj(t) ∫_Ω ∇φj · ∇φi = ∫_Ω f φi + ∫_{∂ΩN} φi gN − Σ_{j=n+1}^{n+n∂} uj ∫_Ω ∇φj · ∇φi − Σ_{j=n+1}^{n+n∂} u̇j ∫_Ω φj φi.
This is a linear system of semi-discrete first-order ordinary differential equations of the form M u̇(t) + Au(t) = f, where A is defined in (8) and M is given by

M = {mi,j}  where  mi,j = ∫_Ω φj φi,
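For linear elements, the entries of this element-level mass matrix have a known closed form, m_ij = (area/12)(1 + δ_ij); the appendix code instead approximates every entry by one-point quadrature, which gives area/9 for each. A short Python sketch of the exact form (function name our own):

```python
import numpy as np

def p1_mass_matrix(area):
    """Exact element mass matrix m_ij = \\int phi_i phi_j over a triangle
    for linear (P1) basis functions: (area/12) * (1 + delta_ij)."""
    return (area / 12.0) * (np.ones((3, 3)) + np.eye(3))

M_el = p1_mass_matrix(0.5)  # reference triangle, area 1/2
# The total of all entries recovers the element area, since the
# basis functions sum to one on the element.
print(M_el.sum())           # approximately 0.5
```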
and f = (f1, f2, . . . , fn)^T takes the new form

fi = ∫_Ω f φi + ∫_{∂ΩN} φi gN − Σ_{j=n+1}^{n+n∂} uj ∫_Ω ∇φj · ∇φi − Σ_{j=n+1}^{n+n∂} u̇j ∫_Ω φj φi.

3.2 Euler method
We now use the Euler method [2] in order to write the semi-discrete system given
by M u̇(t) + Au(t) = f as a fully discretised system. If we write

u^k = (u1(k∆t), u2(k∆t), . . . , un(k∆t))^T,

for k = 1, 2, . . ., the Euler method involves approximating the time-derivatives as

u̇(k∆t) ≈ (u^{k+1} − u^k)/∆t.

The semi-discrete system at some t = k∆t therefore becomes

M(u^{k+1} − u^k) + ∆t A u^k = ∆t f,
with some prescribed initial condition u^0. As such, we can solve a linear system at each time-step given by

M u^{k+1} = (M − ∆t A) u^k + ∆t f,    (12)

where u^0 is known, and each subsequent u^k may be obtained from u^{k−1}. As such, we may now obtain the solution for the system over a number of time-steps by solving the fully-discretised linear system at each time step (that is, for k = 1, 2, . . .).
It is possible to show that this method will be unstable for large enough ∆t,
and therefore care must be taken when determining the time-steps that are to
be chosen when solving a problem. If the time-steps are too large, the solution
vectors will tend to infinity as k → ∞, even if the solution does not behave in
this manner.
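The instability can be seen on the scalar test problem u̇ = −λu, which stands in for each eigenmode of the semi-discrete system: the forward Euler update multiplies the solution by (1 − λ∆t) each step, so it decays only when ∆t < 2/λ. A Python sketch with illustrative parameters of our own:

```python
def euler_final(lam, dt, steps, u0=1.0):
    """Forward Euler for du/dt = -lam*u; returns u after `steps` steps."""
    u = u0
    for _ in range(steps):
        u = (1.0 - lam * dt) * u
    return u

# With lam = 100 the stability bound is dt < 2/100 = 0.02.
stable = euler_final(lam=100.0, dt=0.015, steps=200)    # |1 - lam*dt| = 0.5
unstable = euler_final(lam=100.0, dt=0.025, steps=200)  # |1 - lam*dt| = 1.5
print(abs(stable) < 1e-10, abs(unstable) > 1e10)  # True True
```

This is why the implementation below requires very small time-steps: the fastest-decaying spatial modes impose the most restrictive bound on ∆t.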
3.3 Implementation
In order to demonstrate the Galerkin finite-element method, we considered the
problem

∂u/∂t − ∇²u = f in 0 < x < 1, 0 < y < 1, 0 < t ≤ 2,    (13)

with the discontinuous Dirichlet conditions given in (10) on the left boundary and no-flux Neumann conditions prescribed on the remaining boundaries. The
initial condition was prescribed as being zero everywhere within the domain. The discretisation and finite-element scheme were identical to those applied to the stationary problem, with the exception that the domain was divided into 18 rows, each containing 36 triangles, in order to improve the speed of computation.
The introduction of the time-stepping procedure as described earlier led to instabilities unless the time-steps were set to be extremely small. In order to ensure stability, the time-steps were chosen as ∆t = 10⁻⁵, although this could be made larger without a loss of stability in the solution. The linear system described in (12) was solved at each time-step in order to demonstrate the behaviour of the system when flow starts at t = 0. Some of the results may be seen in Figure 6.
We see that the flow tends towards the steady solution over time, as required.
Additionally, we could easily apply this method to more complicated problems,
such as a problem with time-varying forcing terms or boundary conditions. We
could also adapt the method to other governing equations by obtaining a different
weak formulation, as described in the previous section.
4 Conclusions
There are many partial differential equations that arise in applied mathematics
for which no classical solution exists. In these cases, we must search for weak
solutions to the partial differential equations by expressing the equation in variational form, which may be solved in order to obtain weak solutions to the original
problem. We may use finite element methods in order to compute the solution to
the equations in variational form and subsequently obtain weak solutions to the
original equation.
The method of finite elements involved discretising the domain of the problem into
a set of elements, and defining a set of piecewise basis functions on these elements
in order to obtain an approximation space, generated by the basis functions. It
was within this approximation space that the weak solution was sought. We subsequently applied the Galerkin finite element method which entailed constructing
a stiffness matrix and load vector in order to express the finite element solution
as the solution to a linear system of equations, which could be solved numerically.
This method was applied in order to solve two simple problems: the steady heat equation without a forcing term, and the same equation with a forcing term included.
In order to solve time-varying problems, we obtained a variational form for the
equation with a time-derivative included, and applied the finite element method
as before, obtaining a semi-discrete system that was discrete in space but continuous in time. This, however, resulted in a system of ordinary differential equations,
rather than a fully discrete linear system. To fully discretise the system, we used
the Euler method to discretise the ordinary differential equations in time. The resultant system was then solved using standard methods for solving linear systems.
This method was applied in order to solve the time-varying heat equation, and it was observed that the computed solution evolved from the initial configuration towards the steady solution as time progressed.

Figure 6: Solution to the time-dependent problem described in (13) with boundary conditions given in (10) along x = 0 and no-flux conditions on the remaining boundaries, with u = 0 initially within the domain. The solution is shown at regular intervals between t = 0 and t = 2, in panels (a) t = 0, (b) t = 0.4, (c) t = 0.8, (d) t = 1.2, (e) t = 1.6 and (f) t = 2.
A number of extensions to the methods described here were touched upon. We can
consider a range of different equations, however doing so involves determining the
variational form for each new equation. It is possible to use elements of different
shapes, such as quadrilateral elements. Additionally, it is possible to consider
higher-order piecewise basis functions, such as quadratic or cubic basis functions.
In many cases, these can give greater accuracy, but require more nodes within
each element, increasing computational cost. We could also consider different
methods of discretisation in time, such as Runge-Kutta methods, and apply the
current scheme to more demanding time-varying problems, such as problems with
time-varying boundary conditions or forcing terms.
A Example finite-element solver
This code was used to discretise the system in question and apply the Galerkin finite-element method, with linear basis functions on triangular elements, in order to solve the system. This code was used to produce the results in the report, with minor modifications depending on the problem in question.
clear all

% Discretisation of domain
T = 10;                    % Triangles per row (even)
N = T/2 + 1;               % Nodes per row
rows = 5;                  % Number of rows
Xt = linspace(0,1,N);      % Range in x
Yt = linspace(0,1,rows+1); % Range in y
X = [];
Y = [];
for i = 1:rows+1
    X = [X Xt];
    Y((i-1)*N+1:i*N) = Yt(i)*ones(1,N);
end
nodes = length(X);         % Number of nodes in system
triangles = T*rows;        % Number of elements in system

L = Xt(2)-Xt(1);           % Distance between nodes
Jk = L^2;                  % Twice the area of each element

% Contiguous list of nodes on boundary
B = [1:N:(rows-1)*N+1 rows*N+1:(rows+1)*N ...
     rows*N:-N:N N-1:-1:2];

% Forming connectivity matrix C
C = zeros(4,triangles);
C(1,:) = 1:triangles;
for i = 1:rows
    C(2,(i-1)*T+1:i*T) = ceil(C(1,1:T)/2)+(i-1)*N;
end
C(3,1:2:triangles-1) = C(2,1:2:triangles-1)+N;
C(3,2:2:triangles) = C(2,1:2:triangles)+1;
C(4,:) = C(2,:)+N+1;

% Forming stiffness matrix A and load vector f
dchidxi = [-1 1 0];        % Partial derivatives of chi
dchidnu = [-1 0 1];
Jstar = 0.5;               % Size of reference element

for k = 1:triangles
    % Finding spatial coordinates of vertices
    X1 = X(C(2,k)); X2 = X(C(3,k)); X3 = X(C(4,k));
    XX = [X1 X2 X3];
    Y1 = Y(C(2,k)); Y2 = Y(C(3,k)); Y3 = Y(C(4,k));
    YY = [Y1 Y2 Y3];
    CC = [C(2,k) C(3,k) C(4,k)];

    for i = 1:3
        % Local stiffness contributions
        for j = 1:3
            Aloc(i,j,k) = ((Y3-Y1)*dchidxi(i) + (Y1-Y2)*dchidnu(i)) * ...
                          ((Y3-Y1)*dchidxi(j) + (Y1-Y2)*dchidnu(j)) * ...
                          Jstar/Jk + ...
                          ((X1-X3)*dchidxi(i) + (X2-X1)*dchidnu(i)) * ...
                          ((X1-X3)*dchidxi(j) + (X2-X1)*dchidnu(j)) * ...
                          Jstar/Jk;
        end

        % Centroid of the element, for one-point quadrature
        xstore = 0;
        ystore = 0;
        for j = 1:3
            xstore = xstore + XX(j)/3;
            ystore = ystore + YY(j)/3;
        end
        % Define the forcing function f as ffunc(x,y)
        floc(k,i) = 0.5*ffunc(xstore, ystore)*2*Jk/3;

        % Boundary (Neumann) contribution, with gN given by GN(x,y)
        if sum(CC(i) == B) > 0
            floc(k,i) = floc(k,i) + L/2*GN(XX(i),YY(i));
        end
    end
end

Pt = C(2:end,:);
P = Pt';

% Assembling the global stiffness matrix and load vector
A = zeros(nodes,nodes);
f = zeros(nodes,1);
for k = 1:triangles
    for j = 1:3
        for i = 1:3
            A(P(k,i),P(k,j)) = A(P(k,i),P(k,j)) + Aloc(i,j,k);
        end
        f(P(k,j)) = f(P(k,j)) + floc(k,j);
    end
end

UJ = linspace(0,1,rows+1);

% Setting Dirichlet condition on the edge of the domain
for i = 1:rows+1
    A(N*(i-1)+1,:) = zeros(1,nodes);
    A(N*(i-1)+1,N*(i-1)+1) = 1;
end

% Obtaining the steady-state solution
uout = A\f;
umapss = reshape(uout, T/2+1, rows+1);
umapss = umapss';

% Computing mass matrix M (one-point quadrature on each element)
for k = 1:triangles
    for i = 1:3
        for j = 1:3
            Mloc(i,j,k) = 0.5*(1/3)*(1/3)*Jk;
        end
    end
end
M = zeros(nodes,nodes);
for k = 1:triangles
    for j = 1:3
        for i = 1:3
            M(P(k,i),P(k,j)) = M(P(k,i),P(k,j)) + Mloc(i,j,k);
        end
    end
end

% Setting timesteps
dt = 0.000001;
Nsteps = 100000;
usol = zeros(length(f), Nsteps/10000);
uk = zeros(length(f),1);

% Applying Dirichlet conditions to the mass matrix
for i = 1:rows+1
    M(N*(i-1)+1,:) = zeros(1,nodes);
    M(N*(i-1)+1,N*(i-1)+1) = 1;
end

for i = 2:Nsteps
    % Creating load vector
    fnew = (M - dt*A)*uk + dt*f;

    % Enforcing Dirichlet values in the load vector
    for k = 1:rows+1
        fnew(N*(k-1)+1) = round(UJ(k));
    end

    % Performing Euler step
    uk = M\fnew;

    % Storing solution every 10000 steps
    if i/10000 == round(i/10000)
        usol(:,i/10000) = uk;
    end
end

% Preparing solution for visualisation
umap = reshape(usol(:,end), T/2+1, rows+1);
umap = umap';
References
[1] W F Ames. Numerical Methods for Partial Differential Equations. Academic
Press, New York, 1992.
[2] B Carnahan, H A Luther, and J O Wilkes. Applied Numerical Methods. John
Wiley and Sons, New York, 1969.
[3] H Elman, D Silvester, and A Wathen. Finite Elements and Fast Iterative Solvers. Oxford University Press, Oxford, 2005.
[4] J Ockendon, S Howison, A Lacey, and A Movchan. Applied Partial Differential
Equations. Oxford University Press, Oxford, 2003.
[5] W H Press, B P Flannery, S A Teukolsky, and W T Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, New York, 1986.