Model order reduction for speeding up computational homogenisation methods of type FE2
Olivier Goury, Pierre Kerfriden, Stéphane Bordas
Cardiff University
Outline
- Introduction
- Heterogeneous materials
- Computational Homogenisation
- Model order reduction in Computational Homogenisation
- Proper Orthogonal Decomposition (POD)
- System approximation
- Results
- Conclusion
Heterogeneous materials
Many natural or engineered materials are heterogeneous:
- Homogeneous at the macroscopic length scale
- Heterogeneous at the microscopic length scale
Heterogeneous materials
Need to model the macro-structure while taking the micro-structures into account
⇒ better understanding of material behaviour, design, etc.
Two choices:
- Direct numerical simulation: brute force!
- Multiscale methods: the right tool when modelling non-linear materials
⇒ Computational Homogenisation
Semi-concurrent Computational Homogenisation (FE2, ...)
[Figure: the macrostructure sends an effective strain tensor down to an RVE problem and receives an effective stress back, at each macroscopic point where it is needed]
Problem
- For non-linear materials: an RVE boundary value problem has to be solved at each point of the macro-mesh where it is needed. Still expensive!
- Needs parallel programming
Strategy
- Use model order reduction to make solving the RVE boundary value problems computationally achievable
- Linear displacement boundary conditions:
\[ M(t) = \begin{pmatrix} \epsilon_{xx}(t) & \epsilon_{xy}(t) \\ \epsilon_{xy}(t) & \epsilon_{yy}(t) \end{pmatrix}, \qquad u(t) = M(t)\,(x - \bar{x}) + \tilde{u}, \quad \text{with } \tilde{u}|_{\Gamma} = 0 \]
- Fluctuation $\tilde{u}$ approximated by $\tilde{u} \approx \sum_i \phi_i \, \alpha_i$
Projection-based model order reduction
The RVE problem can be written:
\[ \underbrace{F_{int}(\tilde{u}(M(t)), M(t))}_{\text{Non-linear}} + F_{ext}(M(t)) = 0 \qquad (1) \]
We are interested in the solution $\tilde{u}(M)$ for many different values of $M(t \in [0, T]) \equiv (\epsilon_{xx}, \epsilon_{xy}, \epsilon_{yy})$.
Projection-based model order reduction assumption: solutions $\tilde{u}(M)$ for different parameters $M$ are contained in a space of small dimension $\mathrm{span}((\phi_i)_{i \in [\![1, n]\!]})$.
RVE boundary value problem
[Figure: the RVE, composed of a matrix and inclusions]
Proper Orthogonal Decomposition (POD)
How to choose the basis $[\phi_1, \phi_2, \ldots] = \Phi$?
- "Offline" stage ≡ learning stage: solve the RVE problem for a certain number of chosen values of $M$
- We obtain a set of solutions (the snapshot): $(u_1, u_2, \ldots, u_{n_S}) = S$
- That snapshot may be large and have linearly dependent components ⇒ need to extract the core information from it
- Find the basis $[\phi_1, \phi_2, \ldots] = \Phi$ that minimises the cost function:
\[ J^s_{\langle\cdot,\cdot\rangle}(\Phi) = \sum_{i=1}^{n_S} \Big\| u_i - \sum_{k=1}^{n_{POD}} \phi_k \, \langle \phi_k, u_i \rangle \Big\|^2 \qquad (2) \]
  with the constraint $\langle \phi_i, \phi_j \rangle = \delta_{ij}$
- What is the quantity of interest here? Which scalar product and norm should we choose?
- The canonical scalar product? ⇒ $\langle x, y \rangle = x^T y$
- Some kind of energy scalar product makes more sense: $\langle x, y \rangle = x^T K_{\{\tau \,|\, \tau < t\}} \, y$
- Simplify to $\langle x, y \rangle = x^T K_0 \, y$, since we want a basis that is fixed in time
- One can prove analytically that the solution is given by the eigenvectors of $K_0 S S^T K_0$
Next question: how many vectors should we pick?
[Figure: eigenvalue decay of the snapshot correlation matrix (log scale) versus eigenvalue number; truncation levels for 99% and 99.99999% accuracy are indicated]
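The offline stage above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: `K0` is a (here randomly generated) symmetric positive-definite weighting matrix standing in for the initial stiffness, `S` holds the snapshots as columns, and the truncation rank is chosen from the eigenvalue decay by keeping a prescribed fraction of the total "energy".

```python
import numpy as np

def pod_basis(S, K0, energy=0.9999999):
    """POD basis for snapshot matrix S (n_dof x n_snap) under the
    K0-weighted inner product <x, y> = x^T K0 y."""
    # Small-side correlation matrix: C_ij = <u_i, u_j>_{K0}
    C = S.T @ K0 @ S
    lam, V = np.linalg.eigh(C)       # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]   # sort descending
    lam = np.clip(lam, 0.0, None)    # discard tiny negative round-off
    # Keep the smallest n with sum(lam[:n]) >= energy * sum(lam)
    frac = np.cumsum(lam) / lam.sum()
    n = int(np.searchsorted(frac, energy) + 1)
    # Lift correlation eigenvectors to K0-orthonormal spatial modes
    Phi = S @ V[:, :n] / np.sqrt(lam[:n])
    return Phi, lam

# Toy data: 50 dofs, 10 snapshots living in a 3-dimensional subspace
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
K0 = A @ A.T + 50 * np.eye(50)       # SPD weighting matrix (assumed)
S = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 10))
Phi, lam = pod_basis(S, K0)
print(Phi.shape[1])                  # 3 modes capture the snapshots
print(np.allclose(Phi.T @ K0 @ Phi, np.eye(Phi.shape[1])))  # True
```

Working with the small $n_S \times n_S$ correlation matrix rather than the large $K_0 S S^T K_0$ operator gives the same modes at much lower cost, which matters since the snapshot count is far smaller than the number of degrees of freedom.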
Reduced equations
- Reduced system after linearisation: $\min_{\alpha} \| K \Phi \, \alpha + F_{ext} \|$
- In the Galerkin framework: $\Phi^T K \Phi \, \alpha + \Phi^T F_{ext} = 0$
- That's it! In the online stage, this much smaller system is solved.
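The Galerkin step can be sketched as follows, on an illustrative linear toy problem rather than a real RVE solve: the full system is projected onto the reduced basis `Phi`, and only an $n_{POD} \times n_{POD}$ system is solved for the generalised coordinates $\alpha$.

```python
import numpy as np

def galerkin_solve(K, Phi, Fext):
    """Solve the reduced system Phi^T K Phi alpha + Phi^T Fext = 0
    and reconstruct the full-order approximation u ~ Phi alpha."""
    Kr = Phi.T @ K @ Phi      # n_POD x n_POD reduced operator
    Fr = Phi.T @ Fext         # reduced right-hand side
    alpha = np.linalg.solve(Kr, -Fr)
    return Phi @ alpha, alpha

# Toy problem: 200 dofs reduced to 5 generalised coordinates
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
K = A @ A.T + 200 * np.eye(200)                      # SPD "stiffness"
Phi = np.linalg.qr(rng.standard_normal((200, 5)))[0]  # orthonormal basis
Fext = rng.standard_normal(200)
u_r, alpha = galerkin_solve(K, Phi, Fext)
# Galerkin condition: the residual is orthogonal to span(Phi)
print(np.allclose(Phi.T @ (K @ u_r + Fext), 0.0))     # True
```

The check at the end is the defining property of the Galerkin projection: the residual of the full system is not zero, but its component in the reduced space is.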
Example
Snapshot selection: simplify to monotonic loading in $\epsilon_{xx}$, $\epsilon_{xy}$, $\epsilon_{yy}$. 100 snapshots.
[Figure: the first 2 modes, each decomposed into a coarse contribution plus a fluctuation]
Is that good enough?
- Speed-up actually poor
- The equation $\Phi^T K \Phi \, \alpha + \Phi^T F_{ext} = 0$ is quicker to solve, but $\Phi^T K \Phi$ is still expensive to evaluate
- Need to do something more ⇒ system approximation
Idea
- Define a surrogate structure that retains only very few elements of the original one
- Reconstruct the operators using a second POD basis representing the internal forces
"Gappy" technique
Originally used to reconstruct altered signals
- $F_{int}(\Phi \alpha)$ approximated by $F_{int}(\Phi \alpha) \approx \Psi \beta$
- $F_{int}(\Phi \alpha)$ is evaluated exactly only on a few selected nodes: $\widehat{F}_{int}(\Phi \alpha)$
- $\beta$ found through: $\min_{\beta} \big\| \widehat{\Psi} \beta - \widehat{F}_{int}(\Phi \alpha) \big\|^2$
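The reconstruction step amounts to a masked least-squares fit, sketched below. This is illustrative only: the rows are picked by a plain random mask, whereas the actual method uses carefully selected nodes, and the "true" internal force vector is built to lie exactly in $\mathrm{span}(\Psi)$ so the reconstruction is exact.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_psi = 300, 6

# Second POD basis Psi representing the internal forces
Psi = np.linalg.qr(rng.standard_normal((n_dof, n_psi)))[0]
# A "true" internal force vector lying (here exactly) in span(Psi)
beta_true = rng.standard_normal(n_psi)
Fint = Psi @ beta_true

# Evaluate Fint exactly only at a few selected rows (the "gappy" mask)
mask = rng.choice(n_dof, size=20, replace=False)
Psi_hat, Fint_hat = Psi[mask], Fint[mask]

# beta from: min_beta || Psi_hat beta - Fint_hat ||^2
beta, *_ = np.linalg.lstsq(Psi_hat, Fint_hat, rcond=None)
Fint_rec = Psi @ beta               # reconstruction of the full vector
print(np.allclose(Fint_rec, Fint))  # True: Fint was in span(Psi)
```

Only 20 of the 300 entries of $F_{int}$ are ever computed, which is where the online speed-up comes from: the expensive full-mesh assembly is replaced by evaluations at a handful of nodes.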
[Figure: relative error in the energy norm (log scale) versus the number of vectors in the static (internal-force) basis and the number of vectors in the displacement basis; the error decreases as the number of basis functions increases]
Speedup
[Figure: speedup of the reduced model]
Conclusion
- Model order reduction can be used to solve the RVE problem faster and with a reasonable accuracy
- Can be thought of as a bridge between analytical and computational homogenisation: the reduced bases are pseudo-analytical solutions of the RVE problem, which is still solved computationally, but at a much reduced cost
Thank you for your attention!