Random Walks and Diffusion

- random motion and diffusion: analytic treatment
- a simplified model: random walks
- Brownian motion: implementation of an algorithm based on the Langevin treatment
- Brownian motion: mathematical eqs. & miscellanea
M. Peressi - UniTS - Laurea Magistrale in Physics
Laboratory of Computational Physics - Unit IV
I part:
Random motion and
diffusion
-history and analytic treatment-
Random motion
Brownian motion is by now a well-understood problem, but the concepts,
techniques and models have proven fruitful in many different fields, from
statistical mechanics to econophysics. A brief history:
Random Walk
• Robert Brown 1828
• J.C. Maxwell 1867
• Albert Einstein 1905
• Marian Smoluchowski 1906
• Jean Perrin 1912
• J. Bardeen, C. Herring 1950
Random motion
• random motion of tiny particles had been reported early in the scientific literature
• before 1827, random motion was attributed to living particles
• random motion = “brownian motion” after 1827, when the British botanist Robert Brown claimed that even dead particles could exhibit random motion
• What is its origin? In 1870, Loschmidt suggested that brownian motion is caused by thermal agitation

Observations of "active molecules" by Robert Brown in 1827
Brownian motion
- open questions -
Observations of "active molecules" made by Brown in 1827 led the physics community to search for proof that molecules indeed existed.
At the turn of the 20th century, the atomic nature of matter was fairly widely accepted among scientists, but not universally (there was NO direct evidence!).
Another argument under discussion: the kinetic theory of gases
Maxwell-Boltzmann distribution of velocity
Kinetic theory of gases
• Under discussion in ~1900: can we prove its validity from the observation of the Brownian motion?

  (1/2) m <v²> = (3/2) k_B T

• Could m be obtained from that relationship? In principle yes, provided one can measure v. But v cannot be measured from the erratic trajectory of particles observed at the microscope!
• so... What can we really measure?
Brownian motion
-Einstein’s 1905 paper-
In essence, Einstein’s paper provides:
- evidence for the existence of atoms/molecules
- an estimate of the size of atoms/molecules
- an estimate of Avogadro’s number
Einstein predicted that microscopic particles dispersed in water undergo random motion as a result of collisions (stochastic forces) with water molecules, which are much smaller and lighter (not visible on the chosen observation scale).
diameter of Brownian particles: ~ 1 μm; water molecules: ~ 10^-4 μm
Brownian motion
fat droplets (0.5-3 μm) in milk
http://www.microscopy-uk.org.uk/dww/home/hombrown.htm
credit to David Walker, Micscape
larger particles (blue = fat droplets) jiggle more slowly
than smaller (red = water) particles;
only the larger particles are visible
A. Einstein:
"On the Movement of Small Particles Suspended in Stationary
Liquids Required by the Molecular-Kinetic Theory of Heat"
Annalen der Physik 19, p. 549 (1905)
...
In this paper it will be shown that, according to the molecular-kinetic theory
of heat, bodies of a microscopically visible size suspended in liquids
must, as a result of thermal molecular motions, perform motions of such
magnitude that they can be easily observed with a microscope. It is
possible that the motions to be discussed here are identical with so-called
Brownian molecular motion; however, the data available to me on the latter
are so imprecise that I could not form a judgment on the question.
If the motion to be discussed here can actually be observed, together
with the laws it is expected to obey, then [...] an exact determination
of actual atomic sizes becomes possible. On the other hand, if the
prediction of the motion were to be proved wrong, this fact would provide a
far-reaching argument against the molecular-kinetic conception of heat....
Later Einstein wrote: "My major aim in this was to find facts which would
guarantee as much as possible the existence of atoms of definite finite size."
Brownian motion
-Einstein’s 1905 paper-
Einstein suggests that the mean square displacement <Δr²> of suspended particles undergoing brownian motion, rather than their velocity, is the suitable observable and measurable quantity, directly related to their diffusion coefficient D:

  <Δr²> = 2dDt   (**)    with    D = μ k_B T = k_B T/(6πηP)   (*)

(t time, d dimensionality of the system, μ mobility, P radius of brownian particles; η solvent viscosity; k_B = R/N)
<Δr²> (and therefore D), η, T measurable => obtain P !
(*) and (**): from where?
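The Stokes-Einstein relation (*) can be inverted to extract the particle radius from measured quantities. A minimal Python sketch (the numerical values below are illustrative, not from the lecture):

```python
import math

def particle_radius(kBT, eta, D):
    """Invert the Stokes-Einstein relation D = kBT/(6 pi eta P)."""
    return kBT / (6.0 * math.pi * eta * D)

# Illustrative values (not from the lecture): a colloid in water at ~20 C
k_B = 1.380649e-23     # J/K
T = 293.0              # K
eta = 1.0e-3           # Pa s, viscosity of water
D = 4.3e-13            # m^2/s, a hypothetical measured diffusion coefficient

P = particle_radius(k_B * T, eta, D)
print(P)               # ~ 5e-7 m, i.e. about half a micrometre
```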
Diffusion
Part I - Sedimentation Equilibrium: compare two independent analyses of the final state

Imagine that the microscopic particles whose Brownian motion is going to be measured are heavier than the water they are dispersed in. Then they will tend to settle toward the bottom of the container; but because of Brownian motion, or diffusion, they do not simply come to rest there.

From mass-transfer theory (first Fick’s law, the particle diffusion equation): the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient. At equilibrium, migration in gravity balances diffusion:

  flux = μ W c - D dc/dx = 0

with:
  W = net weight of one particle
  c = concentration of particles
  μ = mobility = velocity/force = 1/(6πηP)
  η = viscosity of fluid
  P = particle radius

  => c(x) = c0 exp( -(μW/D) x )

From thermodynamics: if there is a variation in the potential energy of a system, an energy flow will occur. With the (chemical + gravitational) potential per mole φ = WNx:

  N = Avogadro's number
  R = universal gas constant
  T = absolute temperature
  RT = [energy/mole]

  dφ/dx + RT d(ln c)/dx = 0

  => c(x) = c0 exp( -(WN/(RT)) x )

Compare: the exponentials must be equal!

  μW/D = WN/(RT)   =>   N = μRT/D   (*)

N, R, T known; D measurable, according to Einstein => obtain μ; from μ (and η, known) we get the particle size P.
Brownian motion and diffusion
Fick’s law of diffusion (1855): a continuum model
Part II - Statistical Analysis of B.M.
one dimension: d=1. Here: p=c (concentration)

Fick's 2nd law:
  ∂p/∂t = D ∂²p/∂x²
Initial condition:  p(x,0) = δ(x)
B.C.'s:  p(±∞, t) = 0 for all t

Solution:
  p(x,t) = (1/√(4πDt)) exp( -x²/(4Dt) )

  ∫ p(x,t) dx = 1
  <x(t)> = ∫ x p(x,t) dx = 0
  <x²(t)> = ∫ x² p(x,t) dx = 2Dt

remember the gaussian:
  p(x) = (1/(σ√(2π))) e^(-x²/(2σ²))   with σ² = 2Dt   (**)

Because BM is random, it is impossible to predict how any single particle will move. But it is possible to predict the average behavior of a large number of particles: the mean square displacement <Δr²> of suspended particles is a suitable observable quantity and gives D. The probability of finding a particle at any location satisfies Fick’s second law of diffusion from a point source.
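The relation <x²(t)> = 2Dt can be checked directly by simulating an ensemble of walkers initially concentrated at the origin; a quick illustrative Python sketch (step length and time step set to 1, so D = 1/2):

```python
import random

random.seed(0)

ell = 1.0              # step length
tau = 1.0              # time step
D = ell**2 / (2.0 * tau)
nwalkers, nsteps = 20000, 100

x = [0.0] * nwalkers   # all walkers start at the origin
for _ in range(nsteps):
    for i in range(nwalkers):
        x[i] += ell if random.random() < 0.5 else -ell

t = nsteps * tau
msd = sum(xi * xi for xi in x) / nwalkers
print(msd, 2.0 * D * t)    # <x^2(t)> should be close to 2 D t = 100
```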
Random motion in nature
• in gases or diluted matter: random motion (after how many collisions on average does a particle cover a distance Δr? or: what distance from the starting point is covered on average by a particle after N collisions?)
• in solids: diffusion of impurities (molten metals) or vacancies..., electronic transport in metals...
II part:
Random walks
A very simplified model for many phenomena, including brownian motion

Random Walks
• traditional RW: brownian motion
• modified (interacting) RW: the motion of the walker depends on his previous trajectory
Scaling properties of RW
Dependence of <R²(t)> on t:
• normal behavior: <R²(t)> ~ t
  for the brownian motion
• superdiffusive behavior: <R²(t)> ~ t^(2ν) with ν > 1/2
  in models where self-intersections are unfavoured
• subdiffusive behavior: <R²(t)> ~ t^(2ν) with ν < 1/2
  in models where self-intersections are favoured
One-dimensional RW
A walker can walk either left or right along x:
  N : number of steps
  ℓ : length of the random displacement (random direction)
      (s_i = ±ℓ relative displacement of the i-th step)
  x_N : displacement from the starting point after N steps
      (x_N = Σ_{i=1}^{N} s_i ,  x_N ∈ [-Nℓ, +Nℓ])
  p→ , p← : probability of right or left displacement

What can we calculate?
  <x_N> : average net displacement after N steps
  <x_N²> : average square displacement after N steps
  P_N(x) : probability for x to be the final net displacement from the starting point after N steps

[Figure: 1-D random-walk simulations; x(t) and x²(t) vs t up to 100 steps for Simulation #1, #2, #3, the average of 10 runs, and the theory average. At each time step the particle must move either left or right exactly one spatial increment, depending on the flip of a coin. Averaging over as few as 10 such simulations yields results in reasonable agreement with the expectations. By making such observations (in 2-D), Jean Baptiste Perrin was able to deduce the diffusion coefficient for single colloidal particles.]
RW 1D
Exact analytic expressions can be easily derived for p← = p→ :

  <x_N> = < Σ_{i=1}^{N} s_i > = ... (if p← = p→) ... = 0

  <x_N²> = < ( Σ_{i=1}^{N} s_i )² > = < Σ_i s_i² > + < Σ_{i≠j} s_i s_j > = ... (if p← = p→) ... = N ℓ²

More general:
  x_N = n← (-ℓ) + n→ (+ℓ)   (with N = n← + n→)

  <x_N> = N (p→ - p←) ℓ

therefore:

  <x_N²> = [N (p→ - p←) ℓ]² + 4 p→ p← N ℓ²

which reduces to <x²> = N ℓ² for p← = p→ = 1/2.
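The general expressions for <x_N> and <x_N²> can be checked by Monte Carlo; an illustrative Python sketch (the bias p→ = 0.6 is chosen arbitrarily):

```python
import random

random.seed(1)

def rw_moments(N, p_right, ell=1.0, nruns=20000):
    """Monte Carlo estimates of <x_N> and <x_N^2> for a (possibly biased) 1-D RW."""
    s1 = s2 = 0.0
    for _ in range(nruns):
        xpos = 0.0
        for _ in range(N):
            xpos += ell if random.random() < p_right else -ell
        s1 += xpos
        s2 += xpos * xpos
    return s1 / nruns, s2 / nruns

N, p, q, ell = 50, 0.6, 0.4, 1.0
m1, m2 = rw_moments(N, p, ell)
print(m1, N * (p - q) * ell)                                    # <x_N> = N(p-q)l = 10
print(m2, (N * (p - q) * ell) ** 2 + 4.0 * p * q * N * ell**2)  # = 100 + 48 = 148
```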
RW 1D
In general, average quantities can be calculated from P_N(x):

  <x_N> = Σ_{x=-Nℓ}^{x=+Nℓ} x P_N(x)

Let’s make an example of analytical calculation of P_N(x) (N=3 is enough!)
...
(how many different walks of length N? There are 2^N different possible walks of N steps...)
RW 1D
Generalizing the expression for P_N(x):
  P_1(+1) = p→ ;  P_1(-1) = p←
From:
  P_{N+1}(x) = P_N(x-1) p→ + P_N(x+1) p←
we have:

  P_N(x) = [ N! / ( ((N+x)/2)! ((N-x)/2)! ) ] p→^((N+x)/2) p←^((N-x)/2)

(and P_N(x) = 0 if N+x is odd.)

Consider for example the case p→ = p← = 1/2: the symmetric random walk. The probability of being at position x after N steps is simply a Pascal triangle interspersed with 0s:

The Symmetric Random Walk: P_N(x) for p← = p→ (Pascal triangle)

  N\x    -5     -4     -3     -2     -1      0     +1     +2     +3     +4     +5
   1                                 1/2     0     1/2
   2                          1/4     0     2/4     0     1/4
   3                   1/8     0     3/8     0     3/8     0     1/8
   4            1/16    0    4/16     0    6/16     0    4/16     0    1/16
   5    1/32     0    5/32    0    10/32    0    10/32    0     5/32    0    1/32
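The recursion P_{N+1}(x) = P_N(x-1) p→ + P_N(x+1) p← and the closed binomial form can be cross-checked numerically; a small illustrative Python sketch:

```python
from math import comb

def pn_recursive(N, p_right=0.5):
    """Build P_N(x) by iterating P_{N+1}(x) = P_N(x-1) p_> + P_N(x+1) p_<."""
    p_left = 1.0 - p_right
    P = {0: 1.0}                      # the walker starts at the origin
    for _ in range(N):
        nxt = {}
        for x, prob in P.items():
            nxt[x + 1] = nxt.get(x + 1, 0.0) + prob * p_right
            nxt[x - 1] = nxt.get(x - 1, 0.0) + prob * p_left
        P = nxt
    return P

def pn_closed(N, x, p_right=0.5):
    """P_N(x) = C(N, (N+x)/2) p_>^((N+x)/2) p_<^((N-x)/2); zero if N+x is odd."""
    if (N + x) % 2 or abs(x) > N:
        return 0.0
    k = (N + x) // 2
    return comb(N, k) * p_right**k * (1.0 - p_right) ** (N - k)

P5 = pn_recursive(5)
print(P5[1], pn_closed(5, 1))    # both 10/32 = 0.3125, as in the Pascal table
```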
RW 1D

  P_N(x) = [ N! / ( ((N+x)/2)! ((N-x)/2)! ) ] p→^((N+x)/2) p←^((N-x)/2)

Can be generalized to large N (put N = t/∆t, then ∆t → 0, continuum limit):

  P(x, N∆t) = √(2/(πN)) e^(-x²/(2N))

which looks like a Gaussian.
Why?
Let’s describe the RW problem with a space/time differential equation...
RW 1D: Diffusion - continuum limit

Let us rewrite the random walk in terms of a so-called master equation. Let P(i, N) be the probability that a walker is at site i after N steps. Since the walker moves left and right with equal probability (case p← = p→), it is clear that:

  P(i, N) = (1/2) P(i+1, N-1) + (1/2) P(i-1, N-1)

Defining x = iℓ and t = Nτ, we have:

  P(x, t) = (1/2) P(x+ℓ, t-τ) + (1/2) P(x-ℓ, t-τ)

We rewrite this by subtracting P(x, t-τ) and dividing by τ:

  [P(x,t) - P(x,t-τ)] / τ = (ℓ²/(2τ)) [P(x+ℓ, t-τ) + P(x-ℓ, t-τ) - 2P(x, t-τ)] / ℓ²

The left-hand side already clearly resembles the definition of a time derivative, if we take the limit τ → 0. To see what happens on the right-hand side, we expand the P functions as Taylor series about x, with ℓ as the deviation, and write out only the terms needed. After all the obvious cancellations we get:

  ∂P(x,t)/∂t = (ℓ²/(2τ)) ∂²P(x,t)/∂x²

In the limit ℓ → 0, τ → 0 but where the ratio ℓ²/τ is kept finite, this becomes an exact relation: the diffusion equation, with D = ℓ²/(2τ).
RW 1D: Diffusion - continuum limit

The fundamental solution of the continuum diffusion equation of the previous slide, defining

  D = ℓ²/(2τ)

is:

  P(x, t) = √(1/(4πDt)) exp( -x²/(4Dt) ).

The discretized solution of the RW problem:

  P_N(x) = √(2/(πN)) exp( -x²/(2N) )

considering t = Nτ and the definition of D, can be rewritten as:

  P(x, t) = √(1/(πDt)) exp( -x²/(4Dt) )

apart from the normalization, which is a factor of 2 larger in this form because of the spatial discretization that alternately excludes odd or even values of x.

The solution is therefore a Gaussian distribution with σ² = 2Dt, which describes a pulse gradually decreasing in height and broadening in width in such a manner that its area is conserved.
RW 1D: Diffusion - continuum limit
physical meaning!
(hint: try to simulate a number of particles initially concentrated at 0 and evolving according to the RW model: ... the ‘cloud’ is progressively expanding)

The basic algorithm:
  ix = position of the walker
  x_N, x2_N = cumulative quantities
  rnd(N) = sequence of N random numbers

allocate(rnd(N))
allocate(x_N(N))
allocate(x2_N(N))
allocate(P_N(-N:N))
x_N = 0
x2_N = 0
P_N = 0
CALL RANDOM_SEED(PUT=seed)
RW 1D: simulation
(1 run = 1 particle)

do irun = 1, nruns
   ix = 0                   ! initial position of each run
   call random_number(rnd)  ! get a sequence of random numbers
   do istep = 1, N
      if (rnd(istep) < 0.5) then ! random move
         ix = ix - 1             ! left
      else
         ix = ix + 1             ! right
      end if
      x_N (istep) = x_N (istep) + ix
      x2_N(istep) = x2_N(istep) + ix**2
   end do
   P_N(ix) = P_N(ix) + 1    ! accumulate (only for istep = N)
end do

Note: x_N and x2_N are NOT reset to zero, but summed over the runs (walkers).
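The course program is Fortran 90 (rw1d.f90); the same algorithm can be sketched in Python for a quick check that <x_N²> = N ℓ² (here ℓ = 1):

```python
import random

random.seed(2)

N, nruns = 64, 5000
x_N = [0.0] * (N + 1)        # cumulative sum of positions, per step
x2_N = [0.0] * (N + 1)       # cumulative sum of squared positions, per step
P_N = {}                     # histogram of final positions

for irun in range(nruns):
    ix = 0                                         # initial position of each run
    for istep in range(1, N + 1):
        ix += -1 if random.random() < 0.5 else 1   # random move: left or right
        x_N[istep] += ix
        x2_N[istep] += ix * ix
    P_N[ix] = P_N.get(ix, 0) + 1                   # accumulate only at istep = N

print(x2_N[N] / nruns)       # ~ N = 64, since <x_N^2> = N l^2 with l = 1
```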
RW 1D: simulation

[Figure: 1-D random walk; x(t) (top) and x²(t) (bottom) vs t up to 100 steps, for Simulation #1, #2, #3, the average of 10 runs, and the theory average.]

This slide summarizes some Brownian dynamics simulations for 1-D: at each time step, the particle must move either left or right exactly one spatial increment, depending on the flip of a coin.
RW 1D: simulation
[Figure: P_N(x) for N=64 over 100 trials ('prob_N64_ntrial0100')]

RW 1D: simulation
[Figure: P_N(x) for N=64 over 1000 trials ('prob_N64_ntrial1000')]

RW 1D: simulation
[Figure: P_N(x) for N=64, 1000 vs 9999 trials ('prob_N64_ntrial1000', 'prob_N64_ntrial9999')]

RW 1D: simulation
[Figure: P_N(x) for N=64, 9999 trials vs the exact distribution ('prob_N64_ntrial9999', 'prob_N64_exact')]
Random Walks 2D

  <R_N²> = < (∆x_1 + ... + ∆x_N)² + (∆y_1 + ... + ∆y_N)² > = ... = N <∆x_i² + ∆y_i²> = N ℓ²

  <R²> ∝ N   also in 2D! (and in general in each dimension)
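That <R_N²> = N ℓ² also holds in 2D can be checked with a small Monte Carlo; an illustrative Python sketch on the square lattice:

```python
import random

random.seed(3)

N, nruns = 100, 5000
r2_sum = 0.0
for _ in range(nruns):
    x = y = 0
    for _ in range(N):
        dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
        x += dx
        y += dy
    r2_sum += x * x + y * y

print(r2_sum / nruns)        # ~ N l^2 = 100: <R_N^2> = N l^2 also in 2D
```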
Random Walks 2D
A Computational Physics Course, or Why Computer Scientists ...
http://www.software-carpentry.com/extern/cse-landau.html

Figure 1: Seven different random walks. Each walk starts at the origin and makes 1000 steps. (Courtesy of P. Lagner.)

Although the theory of RW predicts that <R²> ∝ N for large N, this holds only on the average after many trials, and even then only if particular care is used in generating the random walk.
Random Walks 2D
How to generate random unit steps
http://www.krellinst.org/UCES/archive/modules/monte/unitstep.

Generating 2-D random unit steps:
1. Choose a random number in the range ... .
2. Choose a random value for ... in the range ... (choose the sign randomly too) and then set ... .
3. Choose separate random values for ... and ... in the range ... (but not ...). Normalize so that the step size is 1.
4. Choose a direction (N, E, S, W) randomly as the step direction (no trigonometric functions are then needed). Note: choosing one of four directions is equivalent to choosing a random integer on [0,3].
5. Choose separate random values ... in the range ... . Although the step size is generally not 1, it becomes 1 on the average.

Although all these methods seem to be reasonable, only the last one gives us good results when we are dealing with a large number of steps.
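Method 4 above (a random integer on [0,3] selecting N, E, S or W) is the simplest to implement; a small illustrative Python sketch:

```python
import random

random.seed(4)

# Method 4: pick one of four unit steps via a random integer on [0, 3] --
# no trigonometric functions needed, and every step has length exactly 1.
STEPS = ((0, 1), (1, 0), (0, -1), (-1, 0))   # N, E, S, W

def unit_step():
    return STEPS[random.randrange(4)]

dx, dy = unit_step()
print(dx * dx + dy * dy)     # always 1: the step size is exactly unity
```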
Random Walks 2D

Figure 2: The distance R covered in random walks of N steps as a function of N; the solid curves correspond to two different techniques for simulating RW. (Courtesy of Kowallik.)
2
Random Walks 2D
0
10
20
30
40
50
60
.....
0.0000000
0.2242774
-1.7333623
-1.4481916
-2.2553353
-3.8911035
-3.6508965
0.0000000
3.7794106
1.3218992
-3.1119978
-3.5246484
-6.6665235
-8.0110636
if (mod(i,10)==0) then
WRITE (...) i,x,y
end if
WRITE (...) i,x,y
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
.....
0.0000000
0.6946244
0.9359566
1.8891419
0.9642899
0.1308700
0.2071800
0.9160752
0.2856980
1.0143363
0.2242774
-0.7752404
-1.7280728
-2.0930278
-3.0587580
-2.0729706
-1.8304152
-2.2890768
-1.7717266
-1.1920205
-1.7333623
-1.5798329
0.0000000
0.7193726
1.6898152
1.9922019
2.3725290
2.9251692
3.9222534
4.6275673
3.8512783
3.1663797
3.7794106
3.8104627
3.5069659
4.4379911
4.1784425
4.0104446
3.0403070
2.1516960
1.2959222
0.4810965
1.3218992
0.3337551
Random Walks 2D
self-similarity!
plot every 10 steps / plot every step

Brownian motion and fractal trajectory
Fractal trajectory: D = 2

“If one made recordings at time intervals 100 times closer together, each segment would be replaced by a polygonal contour relatively as complicated as the whole drawing, and so on. One sees how the notion of trajectory ... vanishes.”
Jean Perrin (1912)
Random Walks 2D
on a triangular lattice
Other Random Walks
CHAPTER 12. RANDOM WALKS

Figure 12.1: Examples of the random path of a raindrop to the ground. The step probabilities are given in Problem 12.2f. The probability of a step down is larger than the probability of a step up; furthermore, this is a restricted RW, i.e. limited by boundaries.

12.2 Modified Random Walks
So far we have considered random walks on one- and two-dimensional lattices where the walker has no “memory” of the previous step. What happens if the walkers remember the nature of their previous steps? What happens if there are multiple random walkers, with the condition that no double occupancy is allowed? We explore these and other variations of the simple random walk in this chapter.
Self-avoiding Random Walks

(a) Schematic illustration of a linear polymer in a good solvent. The head-tail mean square distance is (in 3D):

  <∆R_N²> ~ N^(2ν) ,   (N >> 1) ,   ν ≈ 0.592

(b) Simulation with SAW on a square lattice: the 2D model gives exactly ν = 3/4 (independent of details such as the monomer and solvent structures).

From CHAPTER 12, RANDOM WALKS:
Let us consider a familiar example of a polymer chain in a good solvent: a noodle in warm water. A short time after we place a noodle in warm water, the noodle becomes flexible, and it neither collapses into a little ball nor becomes fully stretched. Instead, it adopts a random structure, as shown schematically in Figure 12.3. If we do not add too many noodles, we can say that the noodles behave as a dilute solution of polymer chains in a good solvent. The dilute nature of the solution implies that we can ignore entanglement effects of the noodles and consider each noodle individually. The presence of a good solvent implies that the polymers can move freely and adopt many different configurations.

A fundamental geometrical property that characterizes a polymer in a good solvent is the mean square end-to-end distance <R_N²>, where N is the number of monomers. It is known that for a dilute solution of polymer chains in a good solvent, the asymptotic dependence of <R_N²> is given by the relation above, with the exponent ν ≈ 0.592 in three dimensions. The result for ν in two dimensions is known to be exactly ν = 3/4 for the model of polymers that we will discuss. The proportionality constant depends on the structure of the monomers and on the solvent.
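A minimal (and inefficient) way to sample self-avoiding walks is simple rejection: generate ordinary random walks and discard any that intersect themselves. The Python sketch below is only illustrative, not the course's SAW program; for large N better algorithms are needed:

```python
import random

random.seed(5)

STEPS = ((0, 1), (1, 0), (0, -1), (-1, 0))

def saw(N, max_tries=1000000):
    """One N-step self-avoiding walk on the square lattice, by rejection:
    restart whenever the ordinary random walk revisits a site."""
    for _ in range(max_tries):
        x = y = 0
        visited = {(0, 0)}
        for _ in range(N):
            dx, dy = random.choice(STEPS)
            x, y = x + dx, y + dy
            if (x, y) in visited:
                break                  # walk intersects itself: reject
            visited.add((x, y))
        else:
            return x, y                # survived all N steps
    raise RuntimeError("no self-avoiding walk found")

def r2_mean(N, samples):
    tot = 0.0
    for _ in range(samples):
        xe, ye = saw(N)
        tot += xe * xe + ye * ye
    return tot / samples

for N in (8, 16):
    print(N, r2_mean(N, 500))   # grows faster than N: <R^2> ~ N^(3/2) in 2D
```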
Other Random Walks
• RW with traps
• persistent RW (superdiffusive behaviour)
• ....
Some programs:
on
$/home/peressi/comp-phys/IV-random-walk/f90
[do: $cp /home/peressi/.../f90/* .]
or on https://moodle2.units.it
rw1d.f90
rw2d.f90
rw2zoom.f90
contour, pl => see following slide
‘pl’: macro for gnuplot for plotting trajectories
(suppose column 1 is ‘time’, 2 is x, 3 is y)
and check self-similarity:
set term postscript color
set size square
set out '1.ps'
p [-20:5][-10:15] '1.dat' u 2:3 w l
set out '10.ps'
p [-40:20][-10:50] '10.dat' u 2:3 w l, 'contour' u 1:2 w l
Use:
gnuplot$ load ‘pl’
III part:
algorithm for the
Brownian motion
(Langevin treatment)
Other program:
on
$/home/peressi/comp-phys/IV-random-walk/f90
[do: $cp /home/peressi/.../f90/* .]
brown.f90
The numerical approach:
the ingredients
Here: NOT Einstein’s, but Langevin’s (1906) approach,
arriving at a Newtonian equation of motion including a
random force due to the solvent
See: B.G. de Grooth, Am. J. Phys. 67, 1248 (1999)
Ingredients:
* large Brownian particle - solvent interactions described by
elastic collisions between the large particle (mass M,
velocity V) and the small solvent particles (m, v);
* momentum and energy conservation at each collision:
  MV + mv = MV’ + mv’
  MV²/2 + mv²/2 = MV’²/2 + mv’²/2
The numerical approach:
the equation of motion
After reasonable assumptions (many collisions (i) in a time interval ∆t, where the Vi are the same..., m << M, ...)
=> we arrive at a simple expression for M∆V/∆t = M(V’-V)/∆t :

  Ma = Fs - γV(t)

Fs : stochastic force, i.e. the cumulative effect, in the time interval, of many collisions with smaller particles
-γV(t) : drag force, opposite to V(t) (γ > 0); γ can be expressed (using Stokes’ formula for a sphere of radius P) as:

  γ = 6πηP

(both forces have the same origin, in the collisions with the smaller particles)
The numerical approach:
discretization of the equation of motion

  Ma = Fs - γV(t)

Rewritten as:

  M∆V/∆t = ∆Vs/∆t - γV(t)
  =>  Vq+1 = Vq + ∆Vs - γ(∆t/M)Vq

with:
  ∆Vs = 2mv/M = (…) = (1/M)(v/|v|)√(2γkBT/n)
At each collision v/|v| is -1 or +1 => after N collisions ???
Students can discover for themselves (e.g. with dice-rolling experiments) that the result is a gaussian random variable wq centered at 0, with s.d. = √(N/2) => (see also next lectures)
The numerical approach:
discretized equations for positions and velocities

  Vq+1 = Vq - (γ/M)Vq∆t + wq √(2γkBT∆t)/M
  Xq+1 = Xq + Vq+1∆t

- the heart of our numerical approach
- can be easily implemented for iterative execution
NOTE: we are NOT imposing any specific time-dependence behavior: it will come out as an “experimental” result of the simulation
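The discretized Langevin equations can be sketched in Python. The physical parameters below are the ones quoted later for the example run; the time step ∆t is an arbitrary choice for this sketch (such that γ∆t/M << 1), and wq is taken here as a unit-variance Gaussian, which gives the correct fluctuation-dissipation balance. The measured <X²> can be compared with 2Dt, D = kBT/γ (d = 1):

```python
import math
import random

random.seed(6)

# Parameter values from the lecture's example run; dt is an assumed choice
# such that gamma*dt/M << 1 (here ~ 0.06)
kBT = 4.0e-21        # J
M = 1.4e-10          # kg
gamma = 8.0e-7       # N s/m
dt = 1.0e-5          # s (assumed for this sketch)
nsteps = 2000
nwalkers = 400

msd = 0.0
for _ in range(nwalkers):
    V = X = 0.0
    for _ in range(nsteps):
        w = random.gauss(0.0, 1.0)   # Gaussian random variable w_q
        V = V - (gamma / M) * V * dt + w * math.sqrt(2.0 * gamma * kBT * dt) / M
        X = X + V * dt
    msd += X * X
msd /= nwalkers

t = nsteps * dt
D = kBT / gamma                      # Einstein relation, d = 1
print(msd, 2.0 * D * t)              # the two should be comparable
```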
The numerical approach:
Input parameters - I

  Vq+1 = Vq [1 - (γ/M)∆t] + wq √(2γkBT∆t)/M

- physical parameters of the system: T and γ (through η and P: γ = 6πηP)
The numerical approach:
Input parameters - II

  Vq+1 = Vq [1 - (γ/M)∆t] + wq √(2γkBT∆t)/M

- time step ∆t : cannot be fixed a priori!
Some suggestions from physical and rough numerical considerations:
[(γ/M)∆t < 1 to reproduce the situation of T ≈ 0 (damped motion);
∆t too small: too long numerical simulations necessary...
∆t too large: serious numerical uncertainties...]
Our numerical work: the choice of ∆t is analogous to an instrument calibration!
Suggestion: start from a small ∆t s.t. γ∆t/M << 1, and increase ∆t until important changes in the diffusion coefficient are observed.
Running the code…
kBT = 4·10^-21 J,  M = 1.4·10^-10 kg,  γ ≈ 8·10^-7 N·s/m

Snapshot of a numerical simulation of the Brownian motion in 2D of many large particles. The trajectories of four of them are shown.
Discovering the results
We can prove by numerical experiments:
(i) the linear behavior of the mean square displacement <R²> with time:

  <R²> = 2dD t

(ii) the validity of the Einstein relation between the slope of this line and the solvent parameters (temperature and drag coefficient):

  <R²> = (2d kBT / γ) t
IV part:
Brownian motion in finance
mathematical formulation

Statistical Properties of Price Returns
[Figure: price returns vs time (min); Simulated Returns (Geometric Brownian Motion) vs Real Returns (Financial Time-Series)]
Cont, Empirical properties of asset returns: stylized facts and statistical issues, 2001
MARIO FILIASI - UNIVERSITY OF TRIESTE
PhD COURSE - FINAL EXAM - MAR 31, 2015
Random Walk in Finance
• Geometric Brownian motion (µ: drift; σ: volatility; ε: random variable following a normal distribution with unit variance):

  dS = µS dt + σS ε √dt

• Let the 2nd term be 0:  S(t) = S0 exp(µt)
• Let the 1st term be 0; then for U = ln S (dU = dS/S):

  dU = σε √dt
  U(t) - U(0) = σ √∆t Σ_{i=1}^{N} ε_i

• The central-limit theorem states that Σ_i ε_i is normally distributed with variance N; let t = N∆t:

  U(t) - U(0) = σ √t ε

• => Log-normal distribution of prices
credits: Nakano
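The central-limit step can be verified numerically: the sample variance of U(t) - U(0) should approach σ²t. An illustrative Python sketch (σ, ∆t and N chosen arbitrarily):

```python
import math
import random

random.seed(7)

# U(t) - U(0) = sigma sqrt(dt) * sum_i eps_i ; by the central-limit theorem the
# sum of N unit-variance eps_i has variance N, so U(t) - U(0) ~ sigma sqrt(t) eps
sigma = 0.20
N = 252                        # illustrative: one year of daily steps
dt = 1.0 / N
t = N * dt                     # = 1

nsamples = 5000
samples = []
for _ in range(nsamples):
    dU = sigma * math.sqrt(dt) * sum(random.gauss(0.0, 1.0) for _ in range(N))
    samples.append(dU)

var = sum(s * s for s in samples) / nsamples
print(var)                     # should approach sigma^2 t = 0.04
```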
MC Simulation of Stock Price
• Let: dt = 0.00274 year (= 1 day); the expected return from the stock be 14% per annum (µ = 0.14); the standard deviation of the return be 20% per annum (σ = 0.20); and the starting stock price be $20.0.

  dS/S = µ dt + σ √dt ξ

• Box-Muller algorithm: generate uniform random numbers r1 & r2 in the range (0, 1), then

  ξ = (-2 ln r1)^(1/2) cos(2πr2)

[Figures: simulated stock-price paths from Day 1 to Day 365, and the histogram of Day-365 prices over 1,000 trials.]
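The Box-Muller recipe and the discretized GBM update can be sketched in Python with the slide's parameters (dt = 0.00274, µ = 0.14, σ = 0.20, S0 = $20.0); the number of trials here is an arbitrary choice:

```python
import math
import random

random.seed(8)

def box_muller(r1, r2):
    """xi = (-2 ln r1)^(1/2) cos(2 pi r2): a standard normal from two uniforms."""
    return math.sqrt(-2.0 * math.log(r1)) * math.cos(2.0 * math.pi * r2)

mu, sigma = 0.14, 0.20        # 14% expected return, 20% volatility per annum
dt = 0.00274                  # one day, in years
S0, ndays = 20.0, 365

def simulate_path():
    """Integrate dS/S = mu dt + sigma sqrt(dt) xi over one year."""
    S = S0
    for _ in range(ndays):
        r1 = 1.0 - random.random()       # in (0, 1], avoids log(0)
        xi = box_muller(r1, random.random())
        S += S * (mu * dt + sigma * math.sqrt(dt) * xi)
    return S

finals = [simulate_path() for _ in range(2000)]
mean_final = sum(finals) / len(finals)
print(mean_final)             # close to S0 * exp(mu * 365 * dt) ~ 23.0
```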
Stochastic Model of Stock Prices
Basis of Black-Scholes analysis of option prices:

  dS = µS dt + σS ε √dt

Computational stock portfolio trading
[Figure: Technology Stock Basket Performance, Dec-97 to Jan-00 (indexed, 50-500): 63-Stock Basket vs S&P 500 vs NASDAQ]
Andrey Omeltchenko (Quantlab)