Summer School CEA-EDF-INRIA (June 20-24, 2016)
Reduced Basis Methods and Empirical Interpolation: Computer session

Exercise list
1 Linear elliptic problem (folder ex1/)
2 Empirical Interpolation Method: first contact
3 EIM: Treatment of non-affine parametric dependency (folder ex3/)
4 GEIM: Data assimilation and reduced models (folder ex4/)
5 Appendix
In these computer sessions we will consider simple toy problems, intended to be representative of larger classes. This will let us focus on the methodology of reduced basis methods without getting into the specific technicalities of more involved problems. We will program in Matlab with two libraries developed at the Institut de Recherche en Génie Civil et Mécanique (GeM): FEMObject and ApproximationToolbox. We recommend reading the monograph [1] for an overview of reduced basis methods with a special focus on implementation details. For Python users, we refer to
http://mathesaurus.sourceforge.net/matlab-numpy.html
for the equivalence between Matlab and Python commands. Each exercise is in a folder ex1/, ex2/, ...
1 Linear elliptic problem (folder ex1/)
We consider a steady heat conduction problem on the square Ω = [0, 1] × [0, 1]. Ω contains 8 disks Ωi((xi, yi), r) of radius r = 0.2 and centered at coordinates (xi, yi), i = 2, . . . , 9, as shown in figure 1. In the center, there is a small square Ω10 of side length 0.2. The part of the domain which is neither in the disks nor in the small square is $\Omega_1 = \Omega \setminus \bigcup_{i=2}^{10} \Omega_i$. Let κ be the thermal conductivity. Denoting κi the restriction of κ to the domain Ωi, we consider the case where
$$\kappa_1 = 1, \qquad \kappa_i = \mu_i, \quad i = 2,\dots,9, \qquad \kappa_{10} = 1,$$
with µi ∈ [0.1, 10] for i = 2, . . . , 9. An additional parameter µ1 ∈ [0.1, 10] will also be considered; its role is specified in the following equation. We gather the nine parameters in the vector µ := (µ1, . . . , µ9)^T ranging in
$$\mathcal{D} = [0.1, 10]^9.$$
For a given µ ∈ D, we have to find u(·, µ) solution of
$$\begin{cases} -\nabla\cdot\big(\kappa\,\nabla u(\cdot,\mu)\big) = f(\cdot;\mu) & \text{in } \Omega,\\ u(\cdot,\mu) = 0 & \text{on } \partial\Omega, \end{cases} \tag{1.1}$$
where
$$f(x;\mu) = \mu_1\big(\sin(4\pi x) + \cos(4\pi y)\big)\,\mathbb{1}_{\Omega_1} + 100\,\mathbb{1}_{\Omega_{10}}. \tag{1.2}$$
The solution manifold is
$$\mathcal{M} := \{u(\cdot,\mu) : \mu \in \mathcal{D}\}. \tag{1.3}$$
Figure 1: Domain Ω of exercise 1 (subdomains Ω1 to Ω10).
Our goal is to build a reduced space in which we will approximate the elements of M. We want to approximate M and the following output of interest:
$$s(\mu) = \int_{\Omega_{10}} u(x,\mu)\, dx. \tag{1.4}$$
The first step is to translate problem (1.1) into a variational formulation. We consider the Dirichlet boundary conditions as essential and introduce the space
$$\mathcal{V} := H_0^1(\Omega) = \{v \in H^1(\Omega) : v|_{\partial\Omega} = 0\} \tag{1.5}$$
and endow it with the scalar product
$$\langle w, v\rangle_{\mathcal{V}} = \int_\Omega \nabla w\cdot\nabla v, \qquad \forall w, v \in \mathcal{V}. \tag{1.6}$$
The weak formulation reads: for a given µ ∈ D, find u(·, µ) ∈ V such that
$$a\big(u(\cdot,\mu), v;\mu\big) = \ell(v;\mu), \qquad \forall v \in \mathcal{V}, \tag{1.7}$$
where, for all v, w ∈ V,
$$a(w,v;\mu) := \int_\Omega \kappa\,\nabla w\cdot\nabla v \qquad \text{and} \qquad \ell(v;\mu) = \int_\Omega f(\cdot;\mu)\, v. \tag{1.8}$$
The Lax-Milgram theorem guarantees the existence and uniqueness of the solution u(·, µ) of the weak formulation for any µ ∈ D. To approximate u(·, µ), we consider a triangular mesh of size h = 0.02 and its corresponding P1 finite element space that we denote Vh. The problem reads: find uh(·; µ) ∈ Vh such that
$$a\big(u_h(\cdot,\mu), v_h;\mu\big) = \ell(v_h;\mu), \qquad \forall v_h \in \mathcal{V}_h. \tag{1.9}$$
Questions:
1. Show that problem (1.7) has an affine decomposition with respect to the parameters. Write the discrete problem (1.9) as a linear system.
We have, for any w, v ∈ V,
$$a(w,v;\mu) = \int_\Omega \kappa\,\nabla w\cdot\nabla v = \sum_{i=1}^{10} \kappa_i \int_{\Omega_i} \nabla w\cdot\nabla v,$$
so the problem has an affine decomposition with respect to the parameters. As for the second question, denoting ϕi the finite element basis functions and nh the number of degrees of freedom, the system reads
$$\Big(\sum_{k=1}^{10} \kappa_k A_k\Big)\, x = \mu_1 b_1 + b_{10},$$
where x is the vector of unknowns and, for i, j = 1, . . . , nh,
$$b_1(i) = \int_{\Omega_1} \big(\sin(4\pi x) + \cos(4\pi y)\big)\,\varphi_i, \qquad b_{10}(i) = 100\int_{\Omega_{10}} \varphi_i, \qquad A_k(i,j) = \int_{\Omega_k} \nabla\varphi_i\cdot\nabla\varphi_j.$$
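A minimal Matlab sketch of the online assembly from this affine decomposition; the cell array Ak and the vectors b1, b10 holding the precomputed quantities above are assumed, as is nh:

    % Assemble and solve the full system for one parameter mu (hedged sketch;
    % Ak, b1, b10 are assumed precomputed, nh is the number of dofs).
    kappa = [1; mu(2:9); 1];            % kappa_1 = kappa_10 = 1, kappa_i = mu_i
    A = sparse(nh, nh);
    for k = 1:10
        A = A + kappa(k) * Ak{k};       % affine combination of stiffness matrices
    end
    x = A \ (mu(1) * b1 + b10);         % finite element coefficients of u_h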
2. First contact with the code: We will work with an object of the class MultiInclusionsModel, which handles the basic operations to solve (1.9) with finite elements as a black box.
(a) Get familiar with the functions of MultiInclusionsModel.
(b) Create the object
Model=MultiInclusionsModel()
and randomly generate a vector µ ∈ D. Compute uh(·, µ) using
function u=getFEMsolution(Model,mu) [class MultiInclusionsModel]
and then visualize the output using
visualize(Model,u) [class MultiInclusionsModel]
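A possible sequence of commands for this question (a sketch using only the functions named above):

    Model = MultiInclusionsModel();
    mu = 0.1 + (10 - 0.1) * rand(9, 1);   % random parameter in D = [0.1, 10]^9
    u = getFEMsolution(Model, mu);        % P1 finite element solution u_h(., mu)
    visualize(Model, u);                  % plot the temperature field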
3. Proper Orthogonal Decomposition: Let us first approximate the elements of M using the POD
method and study the convergence of the error. This will be used as a benchmark for the approximation
with the reduced basis method. From the parameter set D, we consider a discrete set DS of S = 500
sample parameters which we choose randomly. From DS , we define the set of snapshots
$$\mathcal{M}_S = \{u_h(\cdot,\mu) : \mu \in \mathcal{D}_S\}. \tag{1.10}$$
We assume that MS is representative of M.
(a) Store DS in a 9 × S matrix muS whose columns are vectors µ ∈ D generated randomly. Generate MS in a matrix snapshots whose columns are the solutions uh(·, µ) for µ ∈ DS. For this, use the black box function
the black box function
function snapshots=getSnapshots(Model,muS) [class MultiInclusionsModel].
Using
function C =getCorrelationMatrix(Model,snapshots) [class MultiInclusionsModel]
compute the correlation matrix C whose entries are
$$C_{i,j} := \frac{1}{S}\,\langle u(\cdot,\mu_i), u(\cdot,\mu_j)\rangle_{\mathcal{V}} = \frac{1}{S}\int_\Omega \nabla u(x,\mu_i)\cdot\nabla u(x,\mu_j)\, dx \tag{1.11}$$
for µi, µj ∈ DS.
(b) Using the Matlab function
[V,D] = eig(C)
get the eigendecomposition of C, where D is a diagonal matrix containing the eigenvalues λi and the columns of V contain the corresponding eigenvectors (sort the eigenvalues in decreasing order). Plot the POD error using λn as an approximation of the error at dimension n.
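A minimal sketch of this POD study, built on the black boxes above (the sorting step is needed because eig does not order the eigenvalues):

    S = 500;
    muS = 0.1 + (10 - 0.1) * rand(9, S);          % S random parameters in D
    snapshots = getSnapshots(Model, muS);          % columns are u_h(., mu)
    C = getCorrelationMatrix(Model, snapshots);    % C(i,j) as in (1.11)
    [V, D] = eig(C);
    lambda = sort(diag(D), 'descend');             % POD eigenvalues
    semilogy(1:S, lambda), xlabel('n'), ylabel('\lambda_n')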
4. Reduced Basis Method: Let us now use the reduced basis method (see [2, 3]).
(a) The variational problem in the reduced basis space reads: find uN(·, µ) ∈ VN such that
$$a\big(u_N(\cdot,\mu), v_N;\mu\big) = \ell(v_N;\mu), \qquad \forall v_N \in \mathcal{V}_N. \tag{1.12}$$
Write the reduced basis Galerkin problem (1.12) as a linear system:
Since VN = span{uh(·, µj) : j = 1, . . . , N}, the system reads
$$\Big(\sum_{k=1}^{10} \kappa_{|\Omega_k} A_k\Big)\, x = \mu_1 b_1 + b_{10},$$
where x is the vector of unknowns and, for i, j = 1, . . . , N,
$$b_1(i) = \int_{\Omega_1} \big(\sin(4\pi x) + \cos(4\pi y)\big)\, u_h(\cdot,\mu_i), \qquad b_{10}(i) = 100\int_{\Omega_{10}} u_h(\cdot,\mu_i), \qquad A_k(i,j) = \int_{\Omega_k} \nabla u_h(\cdot,\mu_i)\cdot\nabla u_h(\cdot,\mu_j).$$
(b) Offline phase:
i. Implement
function [rBasis,errGreedy,N]=greedyRB(tol,maxIt,snapshots,muS,Model)
which runs the greedy algorithm over the set snapshots (associated with the parameters stored in muS). The function returns a reduced basis rBasis of dimension N which defines VN. The output errGreedy is a column vector containing the errors
$$e_n := \max_{\mu\in\mathcal{D}_S} \|u_h(\cdot,\mu) - u_n(\cdot,\mu)\|_{\mathcal{V}} \tag{1.13}$$
for n = 1, . . . , N. The algorithm stops whenever en < tol or when the number of steps exceeds maxIt. To implement this function, it is necessary to solve the Galerkin problem (1.12) on a generic reduced basis space Vn of dimension n. For this, you can use the black box function
function umuRB =getRBsolution(Model, rBasis, mu) [class MultiInclusionsModel].
Another function which will be of use is
function [value,pos]=argmax(Model,snapshots) [class MultiInclusionsModel]
Given a set of snapshots, argmax(Model,snapshots) returns pos, the column index of the function which has maximum V-norm. It also returns the value of the norm. (A sketch of a possible implementation of greedyRB is given after item ii below.)
For our fast programmers: If you are ahead of our schedule, we propose that you program the function getRBsolution yourself. As an intermediate step, you can implement
function assembleRBmatrices(M,rBasis) [class MultiInclusionsModel]
function assembleRBrhs(M,rBasis) [class MultiInclusionsModel].
ii. Taking tol=1e-6 and maxIt=100, run greedyRB and plot the decay of the errors errGreedy. Compare it to the decay obtained with the POD approach.
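A minimal sketch of greedyRB built only on the black boxes above (recomputing every reduced solution at each greedy step is simple but not optimal):

    function [rBasis, errGreedy, N] = greedyRB(tol, maxIt, snapshots, muS, Model)
        nS = size(snapshots, 2);
        rBasis = []; errGreedy = [];
        residuals = snapshots;                       % u_h - u_0, with u_0 = 0
        for n = 1:maxIt
            [~, pos] = argmax(Model, residuals);     % worst-approximated snapshot
            rBasis = [rBasis, snapshots(:, pos)];    % enrich V_n with u_h(., mu_n)
            for s = 1:nS                             % update all residuals u_h - u_n
                residuals(:, s) = snapshots(:, s) - getRBsolution(Model, rBasis, muS(:, s));
            end
            errGreedy(n, 1) = argmax(Model, residuals);  % e_n as in (1.13)
            if errGreedy(n, 1) < tol, break; end
        end
        N = size(rBasis, 2);
    end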
5. Online phase: Generate a set of snapshots associated with a parameter set DS′ different from the DS that was used to run the greedy algorithm. Plot the decay of the errors en on this new set.
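A sketch of this online check, reusing the functions above; muS2, snapshots2 and errOnline are names chosen here for illustration, and we assume that passing the truncated basis rBasis(:,1:n) to getRBsolution yields the dimension-n reduced solution:

    S2 = 200;
    muS2 = 0.1 + (10 - 0.1) * rand(9, S2);        % new parameter set D_S'
    snapshots2 = getSnapshots(Model, muS2);
    errOnline = zeros(N, 1);
    for n = 1:N
        res = snapshots2;
        for s = 1:S2                               % residuals w.r.t. V_n
            res(:, s) = snapshots2(:, s) - getRBsolution(Model, rBasis(:, 1:n), muS2(:, s));
        end
        errOnline(n) = argmax(Model, res);         % max V-norm over D_S'
    end
    semilogy(1:N, errOnline)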
See figure 2.
Figure 2: Error decay for POD and RB approximation (decay POD, decay RB greedy, decay RB online).
2 Empirical Interpolation Method: first contact

2.1 Approximation of a set of parameter dependent functions (folder ex2/)
Let Ω = [0, 1] × [0, 1] and D = [1.1, 2] × [1.1, 2]. For a given µ = (µx, µy)^T ∈ D, let
$$g(x,y;\mu) := \frac{2}{\sqrt{(x-\mu_x)^2 + (y-\mu_y)^2}}, \qquad \forall (x,y)\in\Omega. \tag{2.1}$$
Our goal is to approximate the manifold
$$\mathcal{M} := \{g(\cdot;\mu) : \mu\in\mathcal{D}\} \tag{2.2}$$
with the empirical interpolation method (EIM, see [4, 5]).
Questions:
1. Warm up: Let M ≥ 1. Let $X_M := \mathrm{span}\{q_m\}_{m=1}^M$ be a subspace of C(Ω) of dimension M and let $\{x_m\}_{m=1}^M$ be a set of points of Ω. Given µ ∈ D, write the interpolant $J_M[g(\cdot;\mu)] \in X_M$ of g(·; µ) at the points $\{x_m\}_{m=1}^M$.
$J_M[g(\cdot;\mu)] \in X_M$ satisfies the interpolation conditions
$$J_M[g(\cdot;\mu)](x_i) = g(x_i;\mu), \qquad i = 1,\dots,M. \tag{2.3}$$
Since $J_M[g(\cdot;\mu)](x) = \sum_{m=1}^M \alpha_m(\mu)\, q_m(x)$, we can rewrite the interpolation conditions as
$$\sum_{m=1}^M \alpha_m(\mu)\, q_m(x_i) = g(x_i;\mu), \qquad \forall i = 1,\dots,M. \tag{2.4}$$
This yields a linear system for the coefficients $\{\alpha_m(\mu)\}_{m=1}^M$. Denoting x the vector of these unknowns and $g = \{g(x_i;\mu)\}_{i=1}^M$, the system reads Bx = g where
$$B(i,j) = q_j(x_i), \qquad 1 \le i,j \le M. \tag{2.5}$$
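In Matlab, once B and the point values of g are assembled, the coefficients follow from a single solve; a sketch where gvals and Q are hypothetical names for the values of g and of the qm at the discretization points:

    alpha = B \ gvals;        % solve B*alpha = g for the coefficients alpha_m(mu)
    JMg = Q * alpha;          % values of J_M[g(.;mu)] at the discretization points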
2. Offline phase: Implement
function [rBasis,x_int,ind_x_int,mu_int,ind_mu_int,B,M,errGreedy]...
=greedyEIM(tol,maxIt,snapshots,muS,x)
which runs the greedy algorithm over a set of snapshots MS stored in the variable snapshots. The
rows of muS are the values of the parameters (µx , µy ) associated to the snapshots. The variable x is a
matrix whose rows are point coordinates of Ω. The search for interpolation points will be done over
these points. The algorithm stops when a prescribed tolerance tol is reached or when a maximum
number of steps maxIt is performed. The function returns rBasis, a matrix whose columns are basis
functions of XM. More specifically, for any 1 ≤ m ≤ M, column m of rBasis contains the basis function
$$q_m(\cdot) = \frac{g(\cdot;\mu_m) - J_{m-1}[g(\cdot;\mu_m)]}{g(x_m;\mu_m) - J_{m-1}[g(\cdot;\mu_m)](x_m)}. \tag{2.6}$$
Outputs x_int and ind_x_int are respectively the interpolation points and the row index of these points in x. Similarly, mu_int and ind_mu_int are the parameters µm of the functions g(·; µm) and their row index in muS. The matrix B is the one defined in (2.5). It is necessary for the interpolation of functions. In our setting, we can prove that B is lower triangular with unit diagonal. Finally, errGreedy is a vector containing the errors
$$e_n = \max_{\mu\in\mathcal{D}_S} \|g(\cdot;\mu) - J_n[g(\cdot;\mu)]\|_{L^\infty(\Omega)}. \tag{2.7}$$
For any m ≥ 1, the interpolant Jm [u] of a function u can be computed with the black box method
[interp,coefs]=get_EIM_interpolant(u,rBasis,ind_x_int,B)
where interp is the interpolant and coefs are the coefficients of the interpolant in the basis $\{q_m\}_{m=1}^M$.
Another function that you can use is
function [value_max,fun,ind_mu_max,ind_x_max]=...
argmaxEIM(snapshots,rBasis,ind_x_int,B,dim)
The function returns en in the variable value_max. The function for which the maximum is attained
is in fun. The index of its corresponding parameter is ind_mu_max and the index of the point where
the maximum is attained is given in ind_x_max.
To validate your algorithm: You can uncomment the piece of code entitled %Validation which
runs the greedy algorithm implemented in the library ApproximationToolbox.
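A minimal sketch of greedyEIM (the error recorded at step m is that of the current interpolant of dimension m − 1, so J0 = 0 at the first step):

    function [rBasis, x_int, ind_x_int, mu_int, ind_mu_int, B, M, errGreedy] = ...
             greedyEIM(tol, maxIt, snapshots, muS, x)
        rBasis = []; ind_x_int = []; ind_mu_int = []; B = []; errGreedy = [];
        for m = 1:maxIt
            if isempty(rBasis)
                res = snapshots;                       % J_0 = 0
            else
                coefs = B \ snapshots(ind_x_int, :);   % interpolation coefficients
                res = snapshots - rBasis * coefs;      % g - J_{m-1}[g] for all mu
            end
            [errGreedy(m, 1), lin] = max(abs(res(:))); % sup over points and parameters
            if errGreedy(m, 1) < tol, break; end
            [iStar, jStar] = ind2sub(size(res), lin);  % new point / new parameter
            rBasis = [rBasis, res(:, jStar) / res(iStar, jStar)];  % q_m as in (2.6)
            ind_x_int = [ind_x_int; iStar];
            ind_mu_int = [ind_mu_int; jStar];
            B = rBasis(ind_x_int, :);                  % lower triangular, unit diagonal
        end
        M = size(rBasis, 2);
        x_int = x(ind_x_int, :);
        mu_int = muS(ind_mu_int, :);                   % rows of muS are parameters
    end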
3. Online phase: Generate a set of snapshots associated with a parameter set DS′ different from the DS that was used to run the greedy algorithm. Plot the decay of the errors
$$e_n = \max_{\mu\in\mathcal{D}_{S'}} \|g(\cdot;\mu) - J_n[g(\cdot;\mu)]\|_{L^\infty(\Omega)}.$$
See figure 3.
4. Stability of the interpolation: Using as a black box
function lebesgue=computeLebesgueEIM(rBasis,B)
compute the norm of the interpolation operator
$$\Lambda_n := \max_{\varphi\in C(\Omega)} \|J_n[\varphi]\|_\infty / \|\varphi\|_\infty.$$
Compare it with the theoretical bound $\Lambda_n \le 2^n$. What can you conjecture about it?
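A sketch of the comparison, assuming computeLebesgueEIM returns one value per dimension n:

    lebesgue = computeLebesgueEIM(rBasis, B);
    n = 1:numel(lebesgue);
    semilogy(n, lebesgue, n, 2.^n)         % computed constant vs theoretical bound
    legend('computed', 'theoretical bound')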
See figure 4.
5. Repeat the same questions with a different manifold. For instance, you can consider
$$g(x,y;\mu_x,\mu_y) = 1 + \sin(\mu_x x)\cos(\mu_y y), \qquad \forall (x,y)\in\Omega, \tag{2.8}$$
with (µx, µy) ∈ [1.1, 2]².
Figure 3: Error decay in EIM (decay greedy, decay EIM online).
Figure 4: Behavior of the Lebesgue constant (computed vs theoretical bound).
2.2 Approximation of a set of parameter dependent matrices
Mathilde, Loïc.
3 EIM: Treatment of non-affine parametric dependency (folder ex3/)
This exercise combines the methods implemented in exercises 1 and 2.1.
We consider the heat conduction problem of exercise 1, but now the thermal conductivity is as follows:
$$\kappa_1 = g(x,y;(\mu_{10},\mu_{11})), \qquad \kappa_i = \mu_i + g(x,y;(\mu_{10},\mu_{11})), \quad i = 2,\dots,9, \qquad \kappa_{10} = g(x,y;(\mu_{10},\mu_{11})).$$
The function g is the one defined in exercise 2.1. To keep a consistent notation, its parameters are now denoted µ10, µ11 instead of µx, µy. We now have a PDE depending on 11 parameters, which we gather in the vector µ. The range of each parameter is the one given in sections 1 and 2.1.
Questions:
1. Show that, in this new setting, problem (1.7) does not have an affine decomposition with respect to the parameters.
We have, for any w, v ∈ V,
$$a(w,v;\mu) = \int_\Omega \kappa\,\nabla w\cdot\nabla v = \sum_{i=1}^{10} \int_{\Omega_i} \kappa_i\,\nabla w\cdot\nabla v = \sum_{i=2}^{9} \mu_i \int_{\Omega_i} \nabla w\cdot\nabla v + \int_\Omega g\big(\cdot;(\mu_{10},\mu_{11})\big)\,\nabla w\cdot\nabla v.$$
Since g(x, y; µ) is not of the form g1(x, y)g2(µ), we do not have an affine decomposition with respect to the parameters. The problem cannot be treated with the reduced basis method unless we make an approximation $g(x,y;\mu) \approx g_1(x,y)\,g_2(\mu)$ or $g(x,y;\mu) \approx \sum_{k=1}^N g_{1,k}(x,y)\, g_{2,k}(\mu)$.
2. Justify the use of EIM and write the new variational formulation. Write the discrete problem (1.9) as a linear system.
Approximating g with $J_M[g] = \sum_{m=1}^M \alpha_m(\mu)\, q_m(x,y)$ and using it in the variational formulation yields, for any w, v ∈ V,
$$a_M(w,v;\mu) = \sum_{i=2}^{9} \mu_i \int_{\Omega_i} \nabla w\cdot\nabla v + \sum_{m=1}^M \alpha_m(\mu) \int_\Omega q_m\,\nabla w\cdot\nabla v.$$
This bilinear form has an affine decomposition and we can now consider the Galerkin problem in the reduced space VN:
$$a_{M,N}(w_N,v_N;\mu) = \sum_{i=2}^{9} \mu_i \int_{\Omega_i} \nabla w_N\cdot\nabla v_N + \sum_{m=1}^M \alpha_m(\mu) \int_\Omega q_m\,\nabla w_N\cdot\nabla v_N.$$
The Galerkin problem in VN can then be formulated as an N × N linear system.
Figure 5: Greedy algorithm EIM: decay of the interpolation error.
The stiffness matrices related to the non-affine part of the equation treated with EIM read
$$b_m(w_N,v_N) := \int_\Omega q_m\,\nabla w_N\cdot\nabla v_N, \qquad 1 \le m \le M.$$
They can be precomputed in an offline phase.
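A sketch of the resulting online assembly; the names are hypothetical: AiRB{i} are the reduced matrices over Ωi, BmRB{m} the reduced EIM matrices above, bRB the reduced right-hand side, and alpha the EIM coefficients of g at the current µ:

    ARB = zeros(N, N);
    for i = 2:9
        ARB = ARB + mu(i) * AiRB{i};        % affine part
    end
    for m = 1:M
        ARB = ARB + alpha(m) * BmRB{m};     % EIM approximation of the g-part
    end
    uN = ARB \ bRB;                         % solve the N x N reduced system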
3. From the implementation point of view, the solution of the current reduced problem is a combination of the techniques used in exercises 1 and 2.1, so we will not focus on this aspect here. Instead, we propose to study the impact of some elements on the convergence of the reduced basis greedy algorithm.
(a) Modify the code that you are given to evaluate the impact of M on the convergence of the reduced basis greedy algorithm.
(b) Compare the convergence of the reduced basis greedy algorithm with the POD method.
(c) Same questions with a different g.
Figures 5 and 6 show the influence of the dimension M of the interpolant for the function
$$g(x,y;\mu_{10},\mu_{11}) = 1 + \sin(\mu_{10}\, x)\cos(\mu_{11}\, y), \qquad \forall (x,y)\in\Omega,$$
with (µ10, µ11) ∈ [1.1, 2]².
(d) For our fast programmers: Measure the gain in computing time of the reduced basis method with respect to the finite element solution.
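A sketch of the timing comparison for question (d), assuming black-box FEM and RB solvers analogous to those of exercise 1 (getFEMsolution, getRBsolution) are available for this model:

    mu = [0.1 + (10 - 0.1) * rand(9, 1); 1.1 + (2 - 1.1) * rand(2, 1)];  % mu in D
    tic, uFEM = getFEMsolution(Model, mu); tFEM = toc;    % full FEM solve
    tic, uRB = getRBsolution(Model, rBasis, mu); tRB = toc;  % reduced solve
    fprintf('FEM: %.3fs, RB: %.3fs, speed-up: %.1f\n', tFEM, tRB, tFEM / tRB);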
4 GEIM: Data assimilation and reduced models (folder ex4/)
We are going to consider the same problem as in section 1, but now we want to view it from a data assimilation perspective in the following sense. We imagine that M represents the set of all possible states of a real physical system. We want to approximate any possible state of the system with sensor measurements and the knowledge of a physical model represented by the parameter dependent PDE (1.1). For this, we approximate the functions of M with the generalized empirical interpolation method (GEIM, see [6, 7, 5]).
Figure 6: Influence of M in the convergence rate of the greedy algorithm for RB (decay POD; decay RB greedy for M = 3, 7, 15; decay RB greedy with EIM error below 1E-4).
At a given dimension N, the approximation requires N linear functionals σi ∈ V′, 1 ≤ i ≤ N, and a basis VN of dimension N. Physically speaking, the linear functionals are a model for real-life sensors that one places in the real-life experiment. The basis VN (which we want to be as small as possible for our targeted accuracy, in order to reduce the number of sensors) encodes the knowledge of a model for the physical experiment, in the sense that some solutions u(·; µi) are going to span a basis of VN. The construction of VN and the selection of the linear functionals σi are done with the same greedy procedure as in EIM. The only difference is that the interpolation points are replaced by linear functionals.
1. Warm up: Let N ≥ 1. Let $V_N := \mathrm{span}\{q_n\}_{n=1}^N$ be a subspace of V of dimension N and let $\{\sigma_n\}_{n=1}^N$ be a set of linear functionals of V′. Given µ ∈ D, write the interpolant $J_N[g(\cdot;\mu)] \in V_N$ of g(·; µ).
$J_N[g(\cdot;\mu)] \in V_N$ satisfies the interpolation conditions
$$\sigma_i\big(J_N[g(\cdot;\mu)]\big) = \sigma_i\big(g(\cdot;\mu)\big), \qquad i = 1,\dots,N. \tag{4.1}$$
Since $J_N[g(\cdot;\mu)] = \sum_{n=1}^N \alpha_n(\mu)\, q_n$, we can rewrite the interpolation conditions as
$$\sum_{n=1}^N \alpha_n(\mu)\,\sigma_i(q_n) = \sigma_i\big(g(\cdot;\mu)\big), \qquad \forall i = 1,\dots,N. \tag{4.2}$$
This yields a linear system for the coefficients $\{\alpha_n(\mu)\}_{n=1}^N$. Denoting x the vector of these unknowns and $g = \{\sigma_i(g(\cdot;\mu))\}_{i=1}^N$, the system reads Bx = g where
$$B(i,j) = \sigma_i(q_j), \qquad 1 \le i,j \le N. \tag{4.3}$$
2. Offline phase:
(a) We assume that the sensors are local averages of the following form. For any u ∈ V,
$$\sigma_i(u) = C_i \int_\Omega e^{-\frac{(x-x_i)^2 + (y-y_i)^2}{2\sigma^2}}\, u(x,y)\, dx\, dy, \tag{4.4}$$
where (xi, yi) ∈ Ω is the location of the sensor, σ is the “range” of the average and Ci is a normalization factor. Generate a set snapshots and a set sensors of candidate sensors with
function getSensors(Model,xsensors,sigma) [class MultiInclusionsModel]
Figure 7: Decay rates of GEIM, RB and POD (decay POD, decay GEIM greedy, decay GEIM online, decay RB greedy, decay RB online).
The input variable xsensors is a matrix whose rows are the coordinates (xi , yi ) of the location
of a candidate sensor. Set, e.g., sigma=0.05. For any function uh, σj(uh) can be computed by
sensors(:,j)'*uh;
(b) In the same spirit as for EIM in section 2.1, implement
function [rBasis,sensors_int,xsensor_int,mu_int,B,dim,errGreedy]...
=greedyGEIM(tol,maxIt,Model,snapshots,muS,sensors,xsensors)
For this, you can use as a black box
function [errGreedy,fun,mu_interp]=...
argmaxGEIM(Model,snapshots,muS,rBasis,sensor_int,B)
which computes
$$e_n = \max_{\mu\in\mathcal{D}_S} \|g(\cdot;\mu) - J_n[g(\cdot;\mu)]\|_{\mathcal{V}} \tag{4.5}$$
for a given dimension n. The dimension is given implicitly in the number of columns of rBasis.
Also, for any n ≥ 1, the interpolant Jn [u] of a function u can be computed with the black box
method
function [interp,coefs]=get_GEIM_interpolant(u,rBasis,sensors,B)
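A minimal sketch of greedyGEIM built on the black boxes above; we assume argmaxGEIM returns the selected parameter in its third output, handles the empty initial basis (J0 = 0), and that σ(u) is evaluated as sensors(:,j)'*u:

    function [rBasis, sensors_int, xsensor_int, mu_int, B, dim, errGreedy] = ...
             greedyGEIM(tol, maxIt, Model, snapshots, muS, sensors, xsensors)
        rBasis = []; sensors_int = []; xsensor_int = []; mu_int = []; B = [];
        errGreedy = [];
        for n = 1:maxIt
            % worst-approximated snapshot in V-norm (e_n as in (4.5))
            [e_n, fun, mu_sel] = argmaxGEIM(Model, snapshots, muS, rBasis, sensors_int, B);
            errGreedy(n, 1) = e_n;
            if e_n < tol, break; end
            if isempty(rBasis)
                res = fun;                            % J_0 = 0
            else
                res = fun - get_GEIM_interpolant(fun, rBasis, sensors_int, B);
            end
            [~, j] = max(abs(sensors' * res));        % sensor maximizing |sigma(res)|
            rBasis = [rBasis, res / (sensors(:, j)' * res)];  % normalized q_n
            sensors_int = [sensors_int, sensors(:, j)];
            xsensor_int = [xsensor_int; xsensors(j, :)];
            mu_int = [mu_int; mu_sel(:)'];
            B = sensors_int' * rBasis;                % B(i,j) = sigma_i(q_j)
        end
        dim = size(rBasis, 2);
    end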
3. Online phase: Generate a set of snapshots associated with a parameter set DS′ different from the DS that was used to run the greedy algorithm. Plot the decay of the errors
$$e_n = \max_{\mu\in\mathcal{D}_{S'}} \|g(\cdot;\mu) - J_n[g(\cdot;\mu)]\|_{\mathcal{V}}. \tag{4.6}$$
Compare it to the decay of the POD approach and also the RB approach. See figure 7.
4. Stability of the interpolation: Using as a black box
function lebesgue=computeLebesgueGEIM(rBasis,sensors_int,Model)
compute the norm of the interpolation operator
$$\Lambda_n := \max_{\varphi\in\mathcal{V}} \|J_n[\varphi]\|_{\mathcal{V}} / \|\varphi\|_{\mathcal{V}}. \tag{4.7}$$
Compare it with the theoretical bound $\Lambda_n \le 2^n$. What can you conjecture about it?
Figure 8: Behavior of the Lebesgue constant depending on σ (σ = 0.01, 0.1, 0.4, 0.6, 1.0, and the theoretical bound).
5. Influence of the dictionary: By varying the parameter σ of the sensors, examine the influence of the dictionary on the convergence of the greedy algorithm and on the behavior of the Lebesgue constant. How can you interpret the different types of behavior?
See figure 8. There are two cases:
• For σ ≤ h, with h = 0.02 being the mesh size, the computation of the Lebesgue constant is unstable: the local average acts on a range smaller than h and the resolution is not sufficient to study the behavior correctly. This is the case of σ = 0.01.
• For σ ≥ h, we can trust the numerical computation of Λn. We see that as σ increases, Λn increases at a faster rate. An intuitive way of understanding this is that the blurrier the local average is, the more overlap of information there is between different sensors. The information brought by the different sensors becomes more and more redundant, and this eventually leads to instabilities.
5 Appendix

5.1 Greedy algorithm of the reduced basis method
Step n = 1: Find
$$\mu_1 = \arg\max_{\mu\in\mathcal{D}_S} \|u_h(\cdot,\mu)\|$$
and set
$$V_1 := \mathrm{span}\{u_h(\cdot,\mu_1)\}.$$
Step n > 1: Having $V_{n-1}$, find
$$\mu_n = \arg\max_{\mu\in\mathcal{D}_S} \|u_h(\cdot,\mu) - u_{n-1}(\cdot,\mu)\|$$
and set
$$V_n := \mathrm{span}\{V_{n-1},\, u_h(\cdot,\mu_n)\},$$
where $u_{n-1}(\cdot,\mu)$ denotes the Galerkin approximation of $u_h(\cdot,\mu)$ in $V_{n-1}$.
5.2 Greedy algorithm of the Generalized Empirical Interpolation Method

Step n = 1: Find
$$u_1 = \arg\max_{u\in\mathcal{M}_S} \|u\|, \qquad \sigma_1 = \arg\max_{\sigma\in\Sigma} |\sigma(u_1)|$$
and set
$$q_1 := u_1/\sigma_1(u_1) \;(\text{“normalization”}), \qquad W_1 := \mathrm{span}\{\sigma_1\}, \qquad V_1 := \mathrm{span}\{u_1\} = \mathrm{span}\{q_1\}.$$
Step n > 1: Having $V_{n-1}$ and $W_{n-1}$, find
$$u_n = \arg\max_{u\in\mathcal{M}_S} \|u - A_{W_{n-1},V_{n-1}}[u]\|, \qquad \sigma_n = \arg\max_{\sigma\in\Sigma} \big|\sigma\big(u_n - A_{W_{n-1},V_{n-1}}[u_n]\big)\big|$$
and set
$$q_n := \frac{u_n - A_{W_{n-1},V_{n-1}}[u_n]}{\sigma_n\big(u_n - A_{W_{n-1},V_{n-1}}[u_n]\big)} \;(\text{“normalization”}), \qquad W_n := \mathrm{span}\{W_{n-1},\,\sigma_n\}, \qquad V_n := \mathrm{span}\{V_{n-1},\, q_n\}.$$
Here $A_{W_{n-1},V_{n-1}}[u]$ denotes the generalized interpolant of u in $V_{n-1}$ defined by the functionals spanning $W_{n-1}$ (denoted $J_{n-1}$ in the text above).
References
[1] J. S. Hesthaven, G. Rozza, and B. Stamm. Certified Reduced Basis Methods for Parametrized Partial
Differential Equations. SpringerBriefs in Mathematics, 2015.
[2] M.A. Grepl, Y. Maday, N.C. Nguyen, and A.T. Patera. Efficient reduced-basis treatment of nonaffine
and nonlinear partial differential equations. ESAIM, Math. Model. Numer. Anal., 41(3):575–605, 2007.
[3] P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk. Convergence rates for
greedy algorithms in reduced basis methods. SIAM Journal on Mathematical Analysis, 43(3):1457–1472,
2011.
[4] Y. Maday, N.C. Nguyen, A.T. Patera, and G.S.H. Pau. A general multipurpose interpolation procedure:
the magic points. Comm. Pure Appl. Anal., 8(1):383–404, 2009.
[5] Y. Maday, O. Mula, and G. Turinici. Convergence analysis of the Generalized Empirical Interpolation
Method. Accepted in SIAM J. Num. An., 2016.
[6] Y. Maday and O. Mula. A Generalized Empirical Interpolation Method: application of reduced basis
techniques to data assimilation. In Franco Brezzi, Piero Colli Franzone, Ugo Gianazza, and Gianni
Gilardi, editors, Analysis and Numerics of Partial Differential Equations, volume 4 of Springer INdAM
Series, pages 221–235. Springer Milan, 2013.
[7] Y. Maday, O. Mula, A. T. Patera, and M. Yano. The Generalized Empirical Interpolation Method:
Stability theory on Hilbert spaces with an application to the Stokes equation. Computer Methods in
Applied Mechanics and Engineering, 287(0):310–334, 2015.