Computational Reality XIX
Evolution of probability distribution
B. Emek Abali, Wolfram Martens @ LKM - TU Berlin
Abstract
How likely an event occurs depends on the statistical data. Suppose we pull the mass of a spring-dashpot system away from its equilibrium position and leave it free to move. It gains a velocity and swings back to the equilibrium position due to the restoring force of the spring. We may measure the position, x, and velocity, ẋ, of the mass over time. Assume we can measure without any errors. When we repeat the same test ten times, but at a specific time only eight times the same state x, ẋ is measured and two times something else, then the probability of this state is 0.8, which is somehow strange. Mathematically it should always be the same path in the state space x, ẋ; in reality, however, a small error in the initial condition or even a tiny misalignment in the system may lead to non-deterministic processes. We have seen the notion of probability in a discrete dataset in Comp.Real.XIV, where the measurement was error-prone. This time the uncertainty lies in the mechanical system or in the initial conditions, and we will bring that uncertainty into the equations in a probabilistic way.
1 Motivation
Suppose we apply a force F(t) + f(t) to a mechanical oscillator, simply a spring-dashpot system, where F(t) represents the deterministic, transient force and f(t) a random (in Greek: stochastic) force. The stochastic force simulates the uncertainty, i.e., the deviation of the system from the ideal path due to imperfections in the system. It is important to set the external force F(t) to zero, since we will use the Fokker-Planck equations in the next section; the Fokker-Planck equations cannot model a system with an external deterministic force. Of course this stochastic deviation is small; however, later we will see a chaotic¹ system where this stochastic part is quite important. First, let us start with a system modeled with a linear spring responding to a force linearly, cx, connected in parallel with a linear dashpot, κẋ. The differential equation of the system reads mẍ + κẋ + cx = f(t), where the mass, m, oscillates back and forth. This equation can be solved analytically or numerically. We solved it by discretizing with the finite difference method (a compact sketch follows after Fig. 1) and plot the position x = x(t) and the velocity ẋ = ẋ(t). Moreover, we can plot the motion in the state space (x, ẋ, t) or its projection (x, ẋ), often called the phase space. For deterministic loading, i.e., f(t) = 0, the results are in Fig. 1 and Fig. 2. This phase space² represents the motion in an abstract way, which Poincaré used for analysis.
¹ When a small change of the input creates an extravagant change in the output, the response of the system is called chaotic.
² The name phase space is misleading; therefore this is often also called state space, since it is the projection of the state space. The state space was named first in the 1920s by Cartan, in French, as being the phase space and time. We will also use state for phase space, since that is the standard use in thermodynamics.
Figure 1: Linear harmonic oscillator under deterministic loading, the position in time at left, the velocity in time in the
middle and the motion path in the phase space at right.
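A minimal sketch of this finite difference scheme, reduced from the full script in Section 4 (same implicit residual and parameter values), may clarify the time stepping:

import numpy
import scipy.optimize

m, kappa, c = 1., 1., 5.   # mass, damping and stiffness as in Section 4
dt, tend = 0.1, 10.
x0, x00 = 5., 5.           # positions at the two previous time steps (initial conditions)
# implicit residual of m x'' + kappa x' + c x = 0 with backward differences
ode = lambda x: m*(x - 2.*x0 + x00)/dt/dt + kappa*(x - x0)/dt + c*x
t, path = 0., []
while t < tend:
    x = scipy.optimize.fsolve(ode, x0)[0]   # new position from the residual
    path.append((t, x, (x - x0)/dt))        # (t, x, xdot) for the phase space plot
    x00, x0 = x0, x
    t += dt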
Now we add the stochastic part and show the path of the same system with a stochastic perturbation in the (projected) state space. Let us recall that stochastic means random, i.e., if we solve again, the result changes. Thus we solve once and then five times and show the results in Fig. 3. Computational difficulties with convergence occur; thus we used a small stochastic part, tiny time steps, and solved only a fifth of the time span of Fig. 1. The random part produces a different result each time; therefore a unique solution does not exist. However, we can solve the deterministic part uniquely, which is the expected value, and upon that we can produce a probability distribution. When we solve it many times, say 100, and then count the solutions in every small range, say for the time t in some range³ 30 solutions exist, then the probability at time t reads 0.3 (see the counting sketch below). This method needs many iterations, since we have a hundred different datasets and we want to count for a specific time range: for every time range we have to scan every dataset once. Instead of counting data in a range, we can distribute every data point and sum them up. This costs less computationally, because we scan the dataset once and span the value in space by means of a distribution function. A simple example in one dimension is shown in Fig. 4 to clarify the idea. We choose eight random numbers between zero and one and distribute them in the space [0, 1) via the Gauss normal distribution (black lines in the left figure). Their sum is the pdf (red line in the left figure).
³ Concretely, the range is t ± ε, where ε is small with respect to t.
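The counting procedure described above can be sketched in a few lines; the ensemble below is hypothetical (made-up normal samples standing in for 100 solution values at one time instant):

import numpy

numpy.random.seed(0)
solutions = numpy.random.normal(1.0, 0.3, 100)   # hypothetical 100 solutions at one time t
x_target, eps = 1.0, 0.1                          # the range x_target +- eps
count = numpy.sum(numpy.abs(solutions - x_target) < eps)
probability = count/float(len(solutions))         # 30 hits out of 100 would give 0.3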
Figure 2: State space and the phase space as its projection for the linear harmonic oscillator under deterministic loading.
Figure 3: The deterministic solution in red and the solution with the stochastic part in blue, solved once (at left) and
many times (at right).
Figure 4: Discrete data and their Gauss normal distributions with their superposition, and the cumulative distribution (integral of the pdf), which is a sigmoid function.
The integral of the pdf is a sigmoid function (red line in the right figure). This would be exactly one if we took the whole space into consideration. The whole space is always (−∞, +∞), which is of course not possible numerically. It is important to notice in Fig. 4 that the standard deviation, σ, of the Gauss distribution,
\[ p(x, \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right) , \]
is arbitrary, and it is thus difficult to find any physical meaning in it if one does not start from the discrete values. However, this value is only needed for creating a continuous pdf out of discrete values.
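The construction of Fig. 4 can be written compactly in vectorized form; this is a sketch equivalent to the loop-based script of Section 4:

import numpy

data = numpy.random.random_sample(8)       # eight discrete values in [0, 1)
sigma = 0.1                                # arbitrary width of the Gauss kernels
x = numpy.linspace(0., 1., 100)
# one Gauss kernel per data point, summed and normalized by the number of points
kernels = numpy.exp(-0.5*((x[:, None] - data[None, :])/sigma)**2) \
          /numpy.sqrt(2.*numpy.pi*sigma**2)
pdf = kernels.sum(axis=1)/len(data)
cdf = numpy.cumsum(pdf)*(x[1] - x[0])      # integral of the pdf, the sigmoid in Fig. 4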
The chosen Gauss distribution function is one of many distribution functions. Many other probability distributions exist in the literature, such as Gauss normal, Bernoulli, Cauchy, Pareto, Poisson, Rayleigh, Laplace, Weibull, logistic, log-normal, geometric, gamma, beta, F, Student's t, and chi-square. For finite elements we utilize a similar procedure to create a field function from the nodal values by interpolating them with the shape functions. The distribution functions do not change in time, just as in FEM the shape functions do not change in time. The totality of the distribution functions in every node or discrete data point configures the space; the set of the distribution or shape functions does not evolve in time. But the nodal values do, according to a differential equation. For mechanics we use the balance equations. In probabilistic evolutions this differential equation is the Fokker-Planck equation, if the process is a Markov process. A process that lacks history is called a Markov process.
2 Fokker-Planck equation
Consider an evolution of the probability distribution. The stochastic part of the loading is represented with a probability density function, p(x1, x2), in the (projected) state space of x1 = x and x2 = ẋ. If the space is big enough, theoretically x1 ∈ (−∞, +∞) and x2 ∈ (−∞, +∞), then the probability density function integrated over the whole space yields the probability, which is equal to one. No matter how the pdf evolves in time, the probability reads one. The state space spans an n-dimensional abstract space where every single dimension xi is a real number, xi ∈ (−∞, +∞), i = 1, 2, 3, ..., n.
Now suppose that the pdf is a conserved quantity in the state space. We do not use the space-time notation, therefore the meaning of a conserved quantity can be misleading: it is actually conserved in space-time; however, it is easier to convey the physical meaning by using space and time separately. A conserved quantity cannot vanish in one place and pop up in another simultaneously, it has to follow a path. Being a conserved quantity is the first axiom in Liouville's theorem; thus in the state space {x1, x2, ..., xn} a balance equation for the pdf, p = p(x1, x2, ..., xn), can be formulated for the total probability P:
\[ \frac{\mathrm{d}P}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega p \,\mathrm{d}x_1 \dots \mathrm{d}x_n \,, \quad \mathrm{d}x = \mathrm{d}x_1 \dots \mathrm{d}x_n \,, \]
\[ \frac{\mathrm{d}}{\mathrm{d}t} \int_\Omega p \,\mathrm{d}x = - \int_{\partial\Omega} C_i\, p \,\mathrm{d}x_i + \int_{\partial\Omega} N_i \,\mathrm{d}x_i \,, \quad \mathrm{d}x_i = \mathrm{d}x_1 \dots \mathrm{d}x_{i-1}\, \mathrm{d}x_{i+1} \dots \mathrm{d}x_n \,, \tag{1} \]
where the summation convention over repeated indices is used, although the indices denote just the members of the argument lists; no tensorial relation or transformation law needs to exist. This is the usual balance equation for open systems, where $C_i$ and $N_i$ represent convective and non-convective (also called diffusive) fluxes into and out of the probability. When we assume that $\mathrm{d}x_i/\mathrm{d}t = v_i$ and neglect the convective flux, i.e., $C_i = 0$, then after using the Gauss theorem and defining $N_i = \frac{1}{2}\,\partial(B_{ij} p)/\partial x_j$, the balance equation reads:
\[ \int_\Omega \left( \frac{\partial p}{\partial t} + v_i \frac{\partial p}{\partial x_i} + p\, \frac{\partial v_i}{\partial x_i} \right) \mathrm{d}x = \int_{\partial\Omega} \frac{1}{2} \frac{\partial (B_{ij} p)}{\partial x_j} \,\mathrm{d}x_i \,, \]
\[ \int_\Omega \left( \frac{\partial p}{\partial t} + \frac{\partial (v_i p)}{\partial x_i} - \frac{1}{2} \frac{\partial^2 (B_{ij} p)}{\partial x_i \partial x_j} \right) \mathrm{d}x = 0 \,. \tag{2} \]
The diffusion matrix $B_{ij} = G_{ik} G_{jk}$ is actually the connection to the stochastic process for the features $x_i$ as written in Ito form:
\[ \mathrm{d}x_i = v_i \,\mathrm{d}t + G_{ij} \,\mathrm{d}W_j \,, \tag{3} \]
which describes a Brownian motion or Wiener process with the aforementioned noise $G_{ij}\,\mathrm{d}W_j$, introduced by Langevin in the 1910s. The white noise implies that the mean value, or the expected value,⁴ is zero, $E[W_j] = 0$, so that the stochastic part changes the deterministic part as we have seen in the last section.
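Equation (3) can be integrated numerically with the standard Euler-Maruyama scheme; the following generic sketch uses placeholder drift and noise functions, not yet the oscillator of the next section:

import numpy

def euler_maruyama(v, G, x, dt, nsteps):
    # integrates dx_i = v_i dt + G_ij dW_j with Wiener increments dW ~ N(0, dt)
    path = [numpy.array(x)]
    for _ in range(nsteps):
        dW = numpy.sqrt(dt)*numpy.random.standard_normal(len(x))
        x = x + v(x)*dt + G(x).dot(dW)
        path.append(x)
    return numpy.array(path)

# placeholder example: linear drift towards the origin, constant small noise
path = euler_maruyama(lambda x: -x, lambda x: 0.1*numpy.eye(2),
                      numpy.array([1., 0.]), 1e-3, 1000)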
Although the real derivation is out of scope, we show the first and second moments. The rate of the first moment of the pdf⁵ is associated with the velocity:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \int_{\|x_i - X\| < \epsilon} (x_i - X)\, p(x_k, t + \mathrm{d}t \,|\, X, t) \,\mathrm{d}x = v_i(X, t) + O(\epsilon) \,. \tag{4} \]
Similarly, the rate of the second moment of the pdf⁶ provides the diffusion matrix:
\[ \frac{\mathrm{d}}{\mathrm{d}t} \int_{\|x_i - X\| < \epsilon} (x_i - X)(x_j - X)\, p(x_k, t + \mathrm{d}t \,|\, X, t) \,\mathrm{d}x = B_{ij}(X, t) + O(\epsilon) \,. \tag{5} \]
Einstein used the latter connection and described the molecular fluctuation with the differential equation explained below, in 1917.
Finally, the Fokker-Planck equation from the 1910s reads in its local form:
\[ \frac{\partial p}{\partial t} + \frac{\partial (v_i p)}{\partial x_i} - \frac{1}{2} \frac{\partial^2 (B_{ij} p)}{\partial x_i \partial x_j} = 0 \,. \tag{6} \]
By utilizing the method of weighted residuals, the form reads in state space:
\[ \int_\Omega \left( \frac{\partial p}{\partial t}\, \delta p + \frac{\partial (v_i p)}{\partial x_i}\, \delta p - \frac{1}{2} \frac{\partial^2 (B_{ij} p)}{\partial x_i \partial x_j}\, \delta p \right) \mathrm{d}x = 0 \,, \quad \mathrm{d}x = \mathrm{d}x_1 \,\mathrm{d}x_2 \dots \mathrm{d}x_n \,, \tag{7} \]
⁴ also called the ensemble average, E[..]
⁵ The first moment is the mean or expected value.
⁶ The second moment is the correlation function.
where δp is the test function. Thus we discretize in the xi space using the finite element method and in time with the finite difference method, as usual:
\[ \int_\Omega \left( \frac{p - p_0}{\mathrm{d}t}\, \delta p + \frac{\partial (v_i p)}{\partial x_i}\, \delta p - \frac{1}{2} \frac{\partial^2 (B_{ij} p)}{\partial x_i \partial x_j}\, \delta p \right) \mathrm{d}x = 0 \,. \tag{8} \]
Now we employ integration by parts; the diffusion term, being of second order, is integrated by parts only once. The boundary terms vanish because the probability density decays asymptotically and we integrate on a big enough domain, so that p vanishes on the boundaries:
\[ \int_\Omega \left( \frac{p - p_0}{\mathrm{d}t}\, \delta p - v_i p\, \frac{\partial \delta p}{\partial x_i} + \frac{1}{2} \frac{\partial (B_{ij} p)}{\partial x_j} \frac{\partial \delta p}{\partial x_i} \right) \mathrm{d}x_1 \,\mathrm{d}x_2 = 0 \,. \tag{9} \]
This can be solved after declaring the diffusion matrix or second moment, Bij, and the drift vector (the velocity or first moment), vi, which is the aim of the coming section.
3 Applications
Again consider the mass connected to the parallel spring-dashpot system and the differential equation that describes the motion X:
\[ m\ddot{X} + \kappa \dot{X} + cX = \sigma \xi(t) \,, \quad x_1 = X \,, \quad x_2 = \dot{X} \,, \tag{10} \]
\[ m \frac{\mathrm{d}x_2}{\mathrm{d}t} + \kappa \frac{\mathrm{d}x_1}{\mathrm{d}t} + c x_1 = \sigma \xi(t) \,, \quad x_2 = \frac{\mathrm{d}x_1}{\mathrm{d}t} \,, \]
where we obtained, from one differential equation of second order in time, two equations of first order in time. The state functions x1 and x2 are both defined in the displacement space X, whereas the displacements are defined in time, X = X̄(t). By writing slightly differently we get:
\[ \mathrm{d}x_2 + \frac{\kappa}{m} x_2 \,\mathrm{d}t + \frac{c}{m} x_1 \,\mathrm{d}t = \sigma \xi(t) \,\mathrm{d}t \,, \quad x_2 \,\mathrm{d}t = \mathrm{d}x_1 \,, \tag{11} \]
or even as a set of equations:
\[ \begin{pmatrix} \mathrm{d}x_1 \\ \mathrm{d}x_2 \end{pmatrix} = \begin{pmatrix} x_2 \\ -\frac{\kappa}{m} x_2 - \frac{c}{m} x_1 \end{pmatrix} \mathrm{d}t + \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix} \begin{pmatrix} \mathrm{d}W_1(t) \\ \mathrm{d}W_2(t) \end{pmatrix} \,, \tag{12} \]
where we separated the stochastic part into a stationary⁷ amplitude (0, σ) and the white noise (dW1(t), dW2(t)). The white noise has the mean value zero, E[dWi(t)] = 0, and it is independent in time, which means that there is no correlation between the time steps, E[dWi(t) dWi(t + dt)] = δ(dt), where the delta function is one at zero and vanishes elsewhere. Therefore, at different times the latter expected value vanishes, an abstract way of saying that the noise fluctuates in time independently in each time step; no continuity or smoothness is needed for that function. Let us give an example with two subsequent time steps. Suppose at the same position, for these subsequent times, the values 0.2, 0.8 are given. We always think that a quantity evolves in time; thus, if we had taken the second time instant closer to the first one, it should read, e.g., 0.2, 0.4, since the function does not change instantaneously. Exactly this does not hold for the noise: it fluctuates as quickly as necessary, i.e., values at different times lack correlation. We give the codes at the end, where we have simply chosen random numbers without any dependence at all. Finally, it is an easy task to relate Eq. (12) to the Ito Eq. (3) such that:
\[ v_i = \begin{pmatrix} x_2 \\ -\frac{\kappa}{m} x_2 - \frac{c}{m} x_1 \end{pmatrix} \,, \quad G_{ij} = \begin{pmatrix} 0 & 0 \\ 0 & \sigma \end{pmatrix} \,, \quad B_{ij} = G_{ik} G_{jk} \,. \tag{13} \]
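The drift vector and diffusion matrix of Eq. (13) translate directly into code; a minimal sketch with the parameter values from the code section, performing a single Euler-Maruyama step of Eq. (12) for illustration:

import numpy

m, kappa, c, sigma = 1., 1., 5., 0.26
v = lambda x: numpy.array([x[1], -kappa/m*x[1] - c/m*x[0]])   # drift v_i of Eq. (13)
G = numpy.array([[0., 0.], [0., sigma]])                      # noise amplitude G_ij
B = G.dot(G.T)                                                # diffusion matrix B_ij = G_ik G_jk
dt = 0.01
x = numpy.array([5., 5.])                                     # initial state (x_1, x_2)
dW = numpy.sqrt(dt)*numpy.random.standard_normal(2)           # independent Wiener increments
x = x + v(x)*dt + G.dot(dW)                                   # one step of Eq. (12)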
For the linear oscillator we have solved the variational form (9), obtained from the Fokker-Planck equation, by using the latter drift vector and diffusion matrix, cf. Fig. 5. This is a transient solution, i.e., with a given initial condition the evolution of the pdf in time and in state space has been computed. The initial condition is a Gauss normal distribution around the (0, 0) point, i.e., initially the mass is at the position x1 = X = 0 and has the velocity x2 = dX/dt = 0. The distribution function spreads out due to the diffusion matrix, Bij.
⁷ The "white" noise is not affected by the system, thus f is constant. There are some systems where the stochastic term depends on time.
Figure 5: Transient solution of the pdf in state space for the linear oscillator.
Upon that, the drift vector, vi, leads the pdf in association with the deterministic part of the differential equation. In the end, without solving the differential equation of motion, we obtain the path in the state space seen in Fig. 5. Compare with Fig. 3, where the differential equation has been solved. These are two quite different ways of resolving the same reality, which is deterministic and stochastic.
4 Code
The code for the harmonic linear oscillator:
#from dolfin import *
import numpy
import scipy
import scipy.optimize
import pylab
import mpl_toolkits.mplot3d.axes3d as p3

freq = 5.
f = lambda t: numpy.random.random_sample()  # returns a random number between [0, 1)
m = 1.
c = 5.
kappa = 1.
dt = 0.1
t = 0.
tend = 10.
#x0, x00 = 0., 0.  # initial conditions
x0, x00 = 5., 5.
# the residual
ode = lambda x: m*(x - 2.*x0 + x00)/dt/dt + kappa*(x - x0)/dt + c*x
#pylab.ion()  # uncomment for draw() in time loop
pylab.rc('font', size=26)
pylab.figure(1, figsize=(32, 8), dpi=100)
pylab.subplot(1, 3, 1)
pylab.xlabel('$t$')
pylab.ylabel('$x$')
pylab.title('Position in time')
pylab.subplot(1, 3, 2)
pylab.xlabel('$t$')
pylab.ylabel('$\\dot x$')
pylab.title('Velocity in time')
pylab.subplot(1, 3, 3)
pylab.xlabel('$x$')
pylab.ylabel('$\\dot x$')
pylab.title('Phase space')
fig = pylab.figure(2, figsize=(20, 20), dpi=100)
plot3d = fig.gca(projection='3d')
plot3d.set_xlabel('$x$')
plot3d.set_ylabel('$\\dot x$')
plot3d.set_zlabel('$t$')
plot3d.set_title('State space')
t = 0.
while t < tend:
    x = scipy.optimize.fsolve(ode, x0)
    x_dot = (x - x0)/dt
    pylab.figure(1)
    pylab.subplot(1, 3, 1)
    pylab.plot(t, x, 'r', marker='o', markersize=7)
    pylab.subplot(1, 3, 2)
    pylab.plot(t, x_dot, 'r', marker='o', markersize=7)
    pylab.subplot(1, 3, 3)
    pylab.plot(x, x_dot, 'r', marker='o', markersize=7)
    #pylab.draw()
    pylab.figure(2)
    plot3d.plot(x, x_dot, t, 'b', marker='o', markersize=12)
    plot3d.plot(x, x_dot, 0.*t, 'r', marker='o', markersize=7)
    #pylab.draw()
    x00 = x0
    x0 = x
    t += dt
pylab.figure(1)
pylab.savefig('CR19_oscillator_f0_01.png')
pylab.figure(2)
pylab.savefig('CR19_oscillator_f0_02.png')
The code for the deterministic and stochastic part of loading solved many times:
#from dolfin import *
import numpy
import scipy
import scipy.optimize
import pylab
import mpl_toolkits.mplot3d.axes3d as p3

freq = 5.
sigma = 0.26
f = lambda t: sigma*numpy.random.uniform(-1., +1.)  # returns a random number between -1, +1
m = 1.
c = 5.
kappa = 1.
dt = 0.01
t = 0.
x0, x00 = 5., 5.  # initial conditions
xi0, xi00 = 5., 5.
# the residuals
ode_x = lambda x: m*(x - 2.*x0 + x00)/dt/dt + kappa*(x - x0)/dt + c*x
ode_xi = lambda xi: m*(xi - 2.*xi0 + xi00)/dt/dt + kappa*(xi - xi0)/dt + c*xi - f(t)
pylab.rc('font', size=26)
pylab.figure(1, figsize=(20, 8), dpi=100)
pylab.subplot(1, 2, 1)
pylab.xlabel('$x, \\xi$')
pylab.ylabel('$\\dot x, \\dot\\xi$')
pylab.title('Deterministic and stochastic part')
pylab.subplot(1, 2, 2)
pylab.xlabel('$x, \\xi$')
pylab.ylabel('$\\dot x, \\dot\\xi$')
pylab.title('Solved many times')
t = 0.
tend = 0.5
xi_plot, xi_dot_plot = [], []
while t < tend:
    x = scipy.optimize.fsolve(ode_x, x0)
    x_dot = (x - x0)/dt
    pylab.subplot(1, 2, 1)
    pylab.plot(x, x_dot, 'r', marker='o', markersize=7)
    xi = scipy.optimize.fsolve(ode_xi, xi0)
    xi_plot.append(xi)
    xi_dot = (xi - xi0)/dt
    xi_dot_plot.append(xi_dot)
    pylab.plot(xi_plot, xi_dot_plot, 'b-', marker='d', markersize=4, linewidth=2, alpha=0.8)
    x00, xi00 = x0, xi0
    x0, xi0 = x, xi
    t += dt
for k in range(5):
    print k
    t = 0.
    x0, x00 = 5., 5.  # initial conditions
    xi0, xi00 = 5., 5.
    xi_plot, xi_dot_plot = [], []
    while t < tend:
        x = scipy.optimize.fsolve(ode_x, x0)
        x_dot = (x - x0)/dt
        pylab.subplot(1, 2, 2)
        pylab.plot(x, x_dot, 'r', marker='o', markersize=7)
        xi = scipy.optimize.fsolve(ode_xi, xi0)
        xi_plot.append(xi)
        xi_dot = (xi - xi0)/dt
        xi_dot_plot.append(xi_dot)
        pylab.plot(xi_plot, xi_dot_plot, 'b-', marker='d', markersize=4, linewidth=2, alpha=0.8)
        x00, xi00 = x0, xi0
        x0, xi0 = x, xi
        t += dt
pylab.savefig('CR19_oscillator_f_01.png')
The code for the probability density function and its cumulative distribution for a random set of numbers
between zero and one:
import numpy
import scipy
import scipy.optimize
import pylab

data = numpy.random.random_sample((8,))
normalize = 1./float(len(data))
sigma = 0.1
def distribution(x, mean):
    return 1./(2.*numpy.pi*sigma**2)**(0.5)*numpy.exp(-1./2.*((x - mean)/sigma)**2)*normalize
def p(x):
    p = 0.
    for i in range(len(data)):
        p += distribution(x, data[i])
    return p
h = 100.
dx = 1./h
space = numpy.linspace(0., 1., int(h))
def cumulative(x):
    out = 0.
    for i in range(len(space)):
        if space[i] < x:
            out += p(space[i])*dx
    return out
pylab.rc('font', size=26)
pylab.figure(1, figsize=(30, 8), dpi=100)
pylab.subplot(1, 2, 1)
pylab.xlabel('$x$')
pylab.ylabel('pdf, $p(x)$')
pylab.title('Probability density function')
pylab.plot(data, 0.*data, 'b', marker='d', markersize=22)
pylab.subplot(1, 2, 2)
pylab.xlabel('$x$')
pylab.ylabel('$\\int_0^x p(x)$d$x$')
pylab.title('Cumulative probability distribution')
for i in range(len(space)):
    x = space[i]
    pylab.subplot(1, 2, 1)
    pylab.plot(x, p(x), 'r', marker='o', markersize=12)
    for k in range(len(data)):
        pylab.plot(x, distribution(x, data[k]), 'k', marker='o', markersize=4, alpha=0.7)
    pylab.subplot(1, 2, 2)
    pylab.plot(x, cumulative(x), 'r', marker='o', markersize=12)
    print float(i)/float(len(space))*100., '% is done'
pylab.savefig('CR19_p.png')
The code for the transient Fokker-Planck equation of the linear oscillator:
"""Computational Reality XIX, probability density distribution of
a linear oscillator regarding the Fokker-Planck equation"""
__author__ = "B. Emek Abali (abali@tu-berlin.de)"
__date__ = "2012-10-07"
__copyright__ = "Copyright (C) 2012, B. Emek Abali"
__license__ = "GNU GPL Version 3 or any later version"
# Copyright (C) 2012 B. Emek Abali
#
# This code is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# For the GNU Lesser General Public License see <http://www.gnu.org/licenses/>.
#-------------------------------------------------------------------------
from dolfin import *
import numpy
from numpy import array
parameters["form_compiler"]["optimize"] = True
parameters["form_compiler"]["cpp_optimize"] = True

class InitialConditions(Expression):
    # Gauss normal distribution as the initial pdf
    def __init__(self, sigma):
        self.s = sigma
    def eval(self, out, x):
        out[0] = 1./(2.*pi*self.s**2)**0.5*exp(-((x[0]-5.)**2 + (x[1]-0.)**2)/self.s**2)

class iterate(NonlinearProblem):
    def __init__(self, a, L, bc, inter_B, exter_B):
        NonlinearProblem.__init__(self)
        self.L = L
        self.a = a
        self.bc = bc
        self.inter = inter_B
        self.exter = exter_B
    def F(self, b, x):
        assemble(self.L, tensor=b, interior_facet_domains=self.inter, \
            exterior_facet_domains=self.exter)
        for condition in self.bc: condition.apply(b, x)
    def J(self, A, x):
        assemble(self.a, tensor=A, interior_facet_domains=self.inter, \
            exterior_facet_domains=self.exter)
        for condition in self.bc: condition.apply(A)

class space_as_field(Expression):
    # coordinates of the state space as a vector field
    def __init__(self, mesh):
        self.mesh = mesh
    def eval_cell(self, out, x, ufc_cell):
        out[0] = x[0]
        out[1] = x[1]
    def value_shape(self):
        return (2,)

x1Min, x1Max, x1Dof = 0., 6., 200
x2Min, x2Max, x2Dof = -9., 1., 200
mesh = Rectangle(x1Min, x2Min, x1Max, x2Max, x1Dof, x2Dof)
D = mesh.topology().dim()
space = FunctionSpace(mesh, 'CG', 1)
interior = MeshFunction("uint", mesh, D-1)
interior.set_all(0)
exterior = MeshFunction("uint", mesh, D-1)
exterior.set_all(0)
bc = []
dp = TrialFunction(space)
delp = TestFunction(space)
p = Function(space)
p0 = Function(space)
x = space_as_field(mesh)
t, dt, t_end = 0., 0.1, 0.6
m, kappa, c = 1., 1., 5.
sigma = 0.26
i, j, k, l = indices(4)
v = as_vector([x[1], -kappa/m*x[1] - c/m*x[0]])
G = as_matrix([[0., 0.], [0., sigma]])
B = as_matrix(G[i,k]*G[j,k], (i,j))
x = variable(x)
Form = ((p-p0)/dt*delp + diff(v, x)[i,i]*p*delp + v[i]*p.dx(i)*delp \
    + 1./2.*B[i,j]*p.dx(i)*delp.dx(j))*dx
Gain = derivative(Form, p, dp)

import pylab
import mpl_toolkits.mplot3d.axes3d as p3
from matplotlib import cm
#pylab.ion()
def plot_statespace(p, time):
    print 'plotting ...'
    pylab.rc('font', size=30)
    fig = pylab.figure(1, figsize=(20, 20), dpi=100)
    plot3d = fig.gca(projection='3d')
    plot3d.set_xlabel('$x_1=x$')
    plot3d.set_ylabel('$x_2=\\dot x$')
    plot3d.set_zlabel('$p(x_1, x_2)$')
    plot3d.set_title('Solution of fpe, t=%s' % time)
    x1_data = numpy.linspace(x1Min, x1Max, x1Dof)
    x2_data = numpy.linspace(x2Min, x2Max, x2Dof)
    X1, X2 = numpy.meshgrid(x1_data, x2_data)
    # sample the finite element function on the plot grid
    p_values = numpy.zeros((x1Dof, x2Dof))
    for i in range(x1Dof):
        for j in range(x2Dof):
            p_values[i, j] = p((X1[i, j], X2[i, j]))
    surf = plot3d.plot_surface(X1, X2, p_values, rstride=1, cstride=1, cmap=cm.jet, \
        linewidth=0, antialiased=False)
    fig.colorbar(surf, shrink=0.5, aspect=5)
    pylab.savefig('CR19_fpe_%s.png' % time)
    pylab.close()

def normalize(p):
    # rescale such that the integral of p over the domain reads one
    probability = assemble(p*dx)
    print 'normalize:', probability
    p.vector()[:] = p.vector().array()[:]/probability

p_init = InitialConditions(sigma)
p.interpolate(p_init)
while t < t_end:
    normalize(p)
    plot_statespace(p, t)
    t += dt
    p0.assign(p)
    print 'solving for the time:', t
    problem = iterate(Gain, Form, bc, interior, exterior)
    solver = NewtonSolver('lu')
    solver.parameters['convergence_criterion'] = 'incremental'
    solver.parameters['relative_tolerance'] = 1e-3
    solver.solve(problem, p.vector())