Molecular Dynamics
F = ma
Theoretical Modeling Seminar
Academic Year 2012/2013 - Material Science
S. Casassa
February 1, 2013
Contents

1 Molecular Dynamics
2 Overview: critical concepts
3 Introduction
4 A program
   4.1 Initialization
   4.2 Forces calculation
   4.3 Integrate
   4.4 Evaluating quantities
5 Theoretical Background
6 Born-Oppenheimer Molecular Dynamics
7 Car-Parrinello Molecular Dynamics
8 Statistical Thermodynamics
   8.1 The Canonical Ensemble
   8.2 Thermodynamic Potential
   8.3 Summarizing
9 Vibrational Energy
10 Solution of the nuclear equation
A The Maxwell-Boltzmann distribution
B Virial Theorem
C Units and Physical Constants
1 Molecular Dynamics
Molecular Dynamics simulation (MD) is a technique for computing the equilibrium and transport properties of a classical many-body system. In this context, the word classical means that the nuclear motion of the particles obeys the laws of classical mechanics. This is an excellent approximation for a wide range of materials and chemico-physical processes, keeping in mind that quantum effects cannot be avoided if:
(i) translational and rotational motion of light atoms or molecules such as He, H2, D2 is considered, or
(ii) vibrational motions with frequency ν such that hν > kB T have to be taken into account.
Most of the ideas summarized in these paragraphs can be found in the beautiful book Understanding Molecular Simulation[5]. Moreover, the excellent CPMD manual[1] and the illuminating article by Tuckerman and Martyna[7] are strongly recommended.
2 Overview: critical concepts
• The NVE statistical ensemble: defining temperature, starting a calculation,
reaching equilibrium;
• Energies and Forces: potential, electrostatic, periodic boundary conditions;
• Integrating the Newtonian Equations of Motion: Verlet algorithm, time-reversibility, conserved quantities;
• Contact with Experiment: time scale constraints, calculation of macroscopic
quantities, remembering that to measure an observable quantity in MD it has to
be expressed as a function of the positions and momenta of all the particles in the
system.
3 Introduction
MD deals with the solution of the equations of motion of N atoms moving under the influence of a given potential. The following general Hamiltonian, a function of the position coordinates R ≡ (R1, R2, ...RN) and of the momenta p ≡ (p1, p2, ...pN) with pA = MA vA, can be written:

H(p, R) = Σ_A p_A² / (2 M_A) + V(R)    (1)
and yields the well-known Hamiltonian equations of motion:

Ṙ_A = ∂R_A/∂t = ∂H/∂p_A = p_A/M_A    (2)

ṗ_A = ∂p_A/∂t = −∂H/∂R_A = −∂V/∂R_A = F_A    (3)
from which Newton's second law can be derived:

M_A R̈_A = F_A    (4)
In a computer experiment the trajectories can be generated by adopting techniques based on a discretization of time and a repeated calculation of the forces on the particles. The theory and technicalities of an MD simulation have become rather standard and will be presented in the next sections. The main difference among the flavors of MD lies in the potential energy V(R): depending on how it is evaluated, three main approaches can be envisaged:
• Classical MD: van der Waals, force-field, parameterized potentials are used;
• Born-Oppenheimer MD (BOMD): the potential is calculated at the ab initio Hartree-Fock (HF) or DFT (Density Functional Theory) level within the BO approximation;
• Car-Parrinello MD (CPMD): both electrons and nuclei move at the same time. The potential is a function of the electron spin-orbitals and of the nuclear coordinates.
4 A program
The best introduction to MD simulation is to consider a simple program able to perform a classical MD calculation. In the flow chart of table 1 five logical blocks are given:
A definition of the input parameters which completely specify the conditions of the run: initial temperature, number of particles, time step, time length of the simulation (in femtoseconds), etc.;
B initialization of the system: initial positions and velocities are selected;
C computation of the forces acting on the particles;
D integration of Newton's equations of motion using a particular algorithm; this step and the previous one make up the core of the simulation. They are repeated until the time evolution of the system has been computed for the desired length of time;
E after completion of the central loop, averages of computed quantities can be evaluated, such as temperature, energy, pressure, etc.
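The structure of the five blocks can be sketched in a few lines of Python; this is a hedged outline of the same control flow as the Fortran-style flow charts reproduced in the tables (the function names and the plain symplectic-Euler update below are illustrative placeholders, not the original routines):

```python
def run_md(n_steps, dt, x, v, force_fn, mass=1.0):
    """Skeleton of the central MD loop: force evaluation (block C) and
    integration (block D) are repeated, sampling (block E) on the fly.
    A plain symplectic-Euler update stands in for the real integrator."""
    traj = []
    for _ in range(n_steps):
        f = force_fn(x)          # block C: compute the force
        v = v + dt * f / mass    # block D: advance the velocity...
        x = x + dt * v           # ...then the position (placeholder integrator)
        traj.append((x, v))      # block E: accumulate samples
    return traj

# toy run: one particle in a harmonic well, F = -x
traj = run_md(n_steps=100, dt=0.01, x=1.0, v=0.0, force_fn=lambda x: -x)
```

Even this toy driver shows the key property discussed later: the total energy 0.5 v² + 0.5 x² stays close to its initial value 0.5 along the whole trajectory.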
4.1 Initialization
To start an MD simulation, the initial positions and velocities of all the particles of the system have to be defined. The particle positions should be chosen compatible with the structure that has to be simulated, e.g. the atomic positions in the reference cell, taken from a crystallographic database in the case of crystalline systems. The velocities can be assigned using a random number generator and then rescaled according to the desired temperature, given in input (Tt). Temperature, kinetic energy Tk = (1/2) m Σ_A v_A² and velocities, for a system of N particles of mass m, are related by the following equation:

m⟨v²⟩ = (m/N) Σ_A v_A² = 3kT    (5)
PROGRAM MD
CALL INPUT(N,T,delt,tmax)
CALL INIT
t=0
do while (t.lt.tmax)
  CALL FORCE(F,E)
  CALL INTEGRATE(F,E)
  t=t+delt
  CALL SAMPLE
enddo
stop
end

Table 1: Flow chart of the main program.
see for details appendix A, equation 79. So, at time t = 0 the temperature T⁰ is given by the summation over all the initial velocities {v_i⁰}. By defining a scale factor s = (Tt/T⁰)^{1/2} (where T⁰ = m⟨(v⁰)²⟩/(3k)) and substituting the new velocities v^t = v⁰ · s in equation 5, the target temperature Tt is recovered:

m⟨(v^t)²⟩ = (m/N) Σ_A (v_A^t)² = (m/N) Σ_A (v_A⁰)² · (Tt/T⁰) = 3kTt    (6)

Finally, the velocities are shifted with respect to their center of mass ⟨v⟩ at each time step in order to keep the distribution ρ(v) centered around zero, as required by the Maxwell-Boltzmann distribution (for details, see appendix A).
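The drift removal and rescaling described above can be illustrated with a short Python sketch, a simplified dimensionless analogue of the INIT subroutine of table 2 (unit masses and kB = 1 are assumed, and a single velocity component per particle is used, as in the flow chart):

```python
import random

def init_velocities(n, target_T, seed=0):
    """Draw random velocities, remove the center-of-mass drift, and
    rescale so that <v**2> = 3*T (equation 5 with m = k = 1)."""
    rng = random.Random(seed)
    v = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    v_cm = sum(v) / n
    v = [vi - v_cm for vi in v]            # shift <v> to zero
    v2 = sum(vi * vi for vi in v) / n
    sf = (3.0 * target_T / v2) ** 0.5      # scale factor, as sf in table 2
    return [vi * sf for vi in v]

v = init_velocities(1000, target_T=0.5)
```

After this step the sample satisfies ⟨v⟩ = 0 and ⟨v²⟩ = 3 Tt exactly, which is the whole purpose of the two loops in the INIT subroutine.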
4.2 Forces calculation
One of the most famous pair potentials for van der Waals systems is the Lennard-Jones (LJ) potential:

V_LJ(R) = 4ε [ (σ/R)¹² − (σ/R)⁶ ]    (7)
This potential is strongly repulsive at short distances, where the R⁻¹² term, modeling the repulsion due to the overlap of the electronic orbitals, dominates. At large distances, the attractive R⁻⁶ term takes into account the van der Waals dispersion forces caused by the interaction between fluctuating dipoles. The parameter ε governs the strength of the interaction; in fact, the minimum of the potential corresponds to:

∂V_LJ(R)/∂R = 0  ⇒  R_min = 2^{1/6} σ    (8)

V_LJ(R_min) = 4ε [ (σ/2^{1/6}σ)¹² − (σ/2^{1/6}σ)⁶ ] = 4ε (1/4 − 1/2) = −ε

The parameter σ defines a length scale: the interaction crosses zero, V_LJ(R) = 0, at R = σ. Of course, the two parameters σ and ε are characteristic of each compound.
SUBROUTINE INIT
sumv=0
sumv2=0
do a=1,N
  x(a)=lattice_pos(a)            ! initial position on the lattice
  v(a)=(rand()-0.5)              ! random initial velocity
  sumv=sumv+v(a)                 ! velocity center of mass
  sumv2=sumv2+v(a)**2            ! kinetic energy
enddo
sumv=sumv/N                      ! <v>
sumv2=sumv2/N                    ! variance of the distribution, <v**2>
sf=sqrt(3*T/sumv2)               ! scale factor of the velocity
do a=1,N
  v(a) = (v(a)-sumv)*sf          ! velocity center of mass shifted to zero
  xm(a)= x(a) - v(a)*delt        ! estimate of the position at time -delt
enddo
return
end

Table 2: Flow chart of the INIT subroutine.
The forces acting on a particle A are composed of the individual interactions with all the other particles. Once the distance between two particles is defined as R_AB = [(x_A − x_B)² + (y_A − y_B)² + (z_A − z_B)²]^{1/2}, the force can be factorized into its components along the three main directions x, y, z as follows (in reduced LJ units, σ = ε = 1, as in the code of table 3):

F_A,x = − Σ_{B≠A} ∂V_LJ/∂x_AB = − Σ_{B≠A} (x_AB/R_AB) ∂V_LJ/∂R_AB = Σ_{B≠A} (48 x_AB/R²_AB) ( 1/R¹²_AB − (1/2) 1/R⁶_AB )    (9)

which represents the x-component of the force acting on particle A due to all the particles included in a sphere of cutoff radius R_max. The algorithm to implement the forces is sketched in table 3; it takes into account Newton's third law, for which f_AB = −f_BA, and, at each step, updates the potential energy.
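In reduced LJ units (σ = ε = 1) the pair energy and the scalar force prefactor of equation 9 take a compact form; the following Python sketch mirrors the `ff` factor used in the FORCE flow chart of table 3 (it is an illustration, not the original code):

```python
def lj_pair(r2):
    """Return (energy, ff) for one pair at squared distance r2, in reduced
    LJ units. ff is the scalar such that the force on A is ff*(R_A - R_B);
    it equals 48/r**2 * (1/r**12 - 0.5/r**6), as in equation 9."""
    r2i = 1.0 / r2
    r6i = r2i ** 3
    energy = 4.0 * r6i * (r6i - 1.0)     # LJ potential, equation 7
    ff = 48.0 * r2i * r6i * (r6i - 0.5)  # LJ force prefactor, equation 9
    return energy, ff

# at the minimum R = 2**(1/6), i.e. r2 = 2**(1/3), the energy is -1 (= -eps)
# and the force vanishes, as derived in equation 8
e_min, ff_min = lj_pair(2.0 ** (1.0 / 3.0))
```

This single-pair routine is exactly what sits inside the double loop over particle pairs of the FORCE subroutine.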
4.3 Integrate
Newton's equations 2 and 3, which enjoy two important properties discussed in paragraph 5, cannot be solved exactly, so an approximate algorithm, able to satisfy the same properties and to produce reliable trajectories, has to be used. The most efficient one is the Verlet method, which can be derived as follows:

R(t + Δt) = R(t) + (∂R(t)/∂t) Δt + (1/2)(∂²R(t)/∂t²) Δt² = R(t) + v(t) Δt + (1/2) a(t) Δt²    (10)

R(t − Δt) = R(t) − (∂R(t)/∂t) Δt + (1/2)(∂²R(t)/∂t²) Δt² = R(t) − v(t) Δt + (1/2) a(t) Δt²    (11)
SUBROUTINE FORCE(F,EP)
ep=0
do a=1,N
  f(a)=0
enddo
do a=1,N-1
  do b=a+1,N
    xr = x(a) - x(b)
    xr = xr - box*nint(xr/box)       ! periodic boundary conditions
    r2 = xr**2
    if (r2.lt.rmax**2) then          ! test cutoff
      r2i=1/r2
      r6i=r2i**3
      ff=48*r2i*r6i*(r6i-0.5)        ! LJ force
      f(a) = f(a) + ff*xr            ! update forces
      f(b) = f(b) - ff*xr
      ep = ep + 4*r6i*(r6i-1)        ! update potential energy
    endif
  enddo
enddo
return
end

Table 3: Flow chart of the FORCE subroutine.
where the time step Δt and the initial positions {R(t₀)} and velocities {∂R(t)/∂t |_{t₀}} are defined in the INIT subroutine. By summing and subtracting equations 10 and 11, both the displacements and the velocities can be obtained:

R(t + Δt) = 2R(t) − R(t − Δt) + a(t) Δt²    (12)

v(t) = [R(t + Δt) − R(t − Δt)] / (2Δt)    (13)
Equations 12 and 13, implemented in the flow chart reproduced in table 4, despite their simplicity and efficiency, are affected by the so-called self-starting problem: to know the positions of the nuclei at time t + Δt it is mandatory to know their positions at times t and t − Δt. In order to overcome this problem, they can be recast into a more convenient and symmetric form; using the simplified notation R(t + Δt) → x_{n+1}, equation 12 can be rewritten:

x_{n+1} = x_n + (1/2)(x_{n+1} − x_{n−1}) − (1/2)(x_{n+1} + x_{n−1} − 2x_n) + a_n Δt²    (14)

where the first term in parenthesis equals v_n Δt by equation 13 and the second equals (1/2) a_n Δt² by equation 12, so that equation 14 becomes:

x_{n+1} = x_n + v_n Δt + (1/2) a_n Δt²
For the velocities, the analogous equation for x_{n+2} can be written and used in the definition of the velocity v_{n+1} according to 13:

x_{n+2} = x_{n+1} + v_{n+1} Δt + (1/2) a_{n+1} Δt²

v_{n+1} = (x_{n+2} − x_n)/(2Δt) = (1/2Δt) (x_{n+1} + v_{n+1} Δt + (1/2) a_{n+1} Δt² − x_n)
 ⇒ v_{n+1} = (1/Δt) (x_{n+1} − x_n + (1/2) a_{n+1} Δt²) = (1/Δt) (v_n Δt + (1/2) a_n Δt² + (1/2) a_{n+1} Δt²)
 = v_n + (1/2) Δt (a_n + a_{n+1})

The final equations

x_{n+1} = x_n + v_n Δt + (1/2) a_n Δt²    (15)

v_{n+1} = v_n + (1/2) Δt (a_n + a_{n+1})    (16)

allow for a refined update procedure (the velocity Verlet scheme) like this:

v(:) = v(:) + dt/2 * f(:)/m(:)
R(:) = R(:) + dt * v(:)
calculate new forces f(:)
v(:) = v(:) + dt/2 * f(:)/m(:)
SUBROUTINE INTEGRATE(F,EP)
sumv=0
sumv2=0
do a=1,N
  xx = 2*x(a) - xm(a) + delt**2 * f(a)   ! Verlet algorithm, equation 12
  vi = (xx - xm(a))/(2*delt)             ! velocity, equation 13
  sumv=sumv+vi                           ! velocity center of mass
  sumv2=sumv2+vi**2                      ! kinetic energy
  xm(a) = x(a)                           ! update position at time t
  x(a) = xx                              ! update position at time t+delt
enddo
temp=sumv2/(3*N)                         ! instantaneous temperature
etot = (ep + 0.5*sumv2)/N                ! total energy per particle
return
end

Table 4: Flow chart of the INTEGRATE subroutine.
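The central update of table 4 is easy to verify on a toy problem. Here is a hedged Python transcription of the position-Verlet step (equations 12 and 13) for a single harmonic degree of freedom, not the original Fortran; the only refinement is a second-order bootstrap for R(−Δt), whereas INIT uses the linear estimate x − v·delt:

```python
def verlet_step(x, xm, f, dt):
    """One position-Verlet step: equation 12 gives the new position,
    equation 13 the velocity at the current time t (unit mass assumed)."""
    xx = 2.0 * x - xm + dt * dt * f      # R(t+dt), equation 12
    v = (xx - xm) / (2.0 * dt)           # v(t), equation 13
    return xx, v

# toy check: harmonic oscillator f = -x, with x(0) = 1 and v(0) = 0
dt, x, v0 = 0.01, 1.0, 0.0
xm = x - v0 * dt + 0.5 * (-x) * dt**2    # second-order bootstrap for R(-dt)
for _ in range(1000):
    xnew, v = verlet_step(x, xm, -x, dt)
    xm, x = x, xnew
# after t = 10 the position stays close to the exact solution cos(10)
```

The discrete trajectory follows cos(t) with a phase error that is only O(Δt²) per oscillation period, illustrating the good long-term behavior of the Verlet scheme discussed in section 5.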
4.4 Evaluating quantities
The first part of a simulation is the equilibration phase, in which strong fluctuations may occur. Once all the important quantities are sufficiently equilibrated, the actual simulation is performed and the physical observables can then be calculated from the trajectories and velocities:
• the average temperature

  m⟨v²⟩ = 3kT    (17)

• the average pressure for an ideal gas

  PV = (m/3) Σ_i v_i²    (18)

• the diffusion constant for large times τ

  D = (1/6τ) ⟨ΔR²(τ)⟩ = (1/6τ) ⟨(R(τ) − R(0))²⟩    (19)

• the pair correlation function

  g(r) = (V/N) ⟨ Σ_i Σ_{j≠i} δ(r − R_ij) ⟩    (20)

• the temporal Fourier transform of the velocity autocorrelation function ⟨v(t)v(0)⟩, which, in a purely harmonic system, is proportional to the density of normal modes. Time correlation functions are then related to transport coefficients and spectra via linear response theory[6].
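As an illustration of equation 19, the diffusion constant can be estimated from the mean square displacement of an ensemble of trajectories; in this minimal Python sketch a 3D lattice random walk (one ±1 step per axis per unit time) stands in for a real MD trajectory, so the expected result is D = ⟨ΔR²⟩/(6τ) = 3τ/(6τ) = 0.5:

```python
import random

def diffusion_constant(n_walkers, tau, seed=1):
    """Estimate D = <(R(tau) - R(0))**2> / (6*tau), equation 19, averaging
    the squared displacement over many independent 3D random walkers."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        x = y = z = 0
        for _ in range(tau):
            x += rng.choice((-1, 1))
            y += rng.choice((-1, 1))
            z += rng.choice((-1, 1))
        msd += x * x + y * y + z * z
    msd /= n_walkers
    return msd / (6.0 * tau)

D = diffusion_constant(n_walkers=2000, tau=100)
```

In a real simulation the same average is taken over particles and time origins of the stored trajectory instead of independent walkers.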
5 Theoretical Background
As already mentioned, Newton's equations, as well as the Verlet ones, satisfy two important properties:

(i) they are time-reversible, that is, invariant under the transformation from t to −t; the consequence of time-reversal symmetry is that the microscopic physics is independent of the direction of the flow of time;

(ii) the value of the Hamiltonian does not change as a function of time, so the trajectories follow an iso-energetic surface; this statement can easily be proved by considering the time derivative of the Hamiltonian 1:

dH/dt = Σ_A [ (∂H/∂R_A)(dR_A/dt) + (∂H/∂p_A)(dp_A/dt) ] = Σ_A [ (∂H/∂R_A)(∂H/∂p_A) − (∂H/∂p_A)(∂H/∂R_A) ] = 0
This property is fundamental to establish a link between molecular dynamics and statistical mechanics, through the Gibbs ensemble concept. That is, many individual microscopic configurations of a very large system lead to the same macroscopic properties, implying that it is not necessary to know the precise detailed motion of every particle in a system in order to predict its properties. It is sufficient to average over a large number of identical systems, each in a different one of such microscopic configurations; i.e., the macroscopic observables of a system are formulated in terms of ensemble averages. Statistical ensembles are usually characterized by fixed values of thermodynamic variables such as energy E, temperature T, pressure p, volume V, particle number N or chemical potential µ. The dynamical trajectory (i.e. the positions and momenta of all particles over time) of a system containing a fixed number of particles N will generate a series of classical states having constant V and E, corresponding to a microcanonical ensemble (NVE).
The energy conservation condition, H(p, R) = E, which imposes a restriction on the classical microscopic states accessible to the system, defines a hyper-surface in the phase space called a constant-energy surface. As already stated, a system evolving according to Hamilton's equations of motion will remain on this surface. The assumption that a system, given an infinite amount of time, will cover the entire constant-energy hyper-surface is known as the ergodic hypothesis. Thus, under this hypothesis, averages over a trajectory are equivalent to averages over the microcanonical ensemble: time averages and time correlation functions of the trajectories are directly related to ensemble averages of the microcanonical ensemble.
In the thermodynamic limit, all equilibrium ensembles are equivalent and thus, for very large systems, a single long trajectory can be used to generate a time correlation function[3].
Energy conservation is an important criterion but, when implementing a code, two kinds of energy conservation, namely short-time and long-time, have to be considered. Sophisticated higher-order algorithms tend to have very good energy conservation for short times but often have the undesirable feature that the overall energy drifts for long times. In contrast, the Verlet method tends to have only moderate short-term energy conservation but very little long-term drift.
6 Born-Oppenheimer Molecular Dynamics
In the BO formulation of MD (BOMD), the Hamiltonian keeps the form 1 and the equations of motion are 15 and 16, but the potential V(R) acting on the nuclei is now evaluated by solving the following electronic Schrödinger equation at the ab initio quantum-mechanical level, in the framework of the Born-Oppenheimer approximation:

Ĥ_el Ψ(r; R) = E_el(R) Ψ(r; R)    (21)
So, it is possible to write:

V(R) = E_el + (1/2) Σ′_{A,B} Z_A Z_B e² / |R_A − R_B|    (22)

(the primed sum runs over A ≠ B) and the forces are obtained, as usual, as the first derivative of the potential with respect to the atomic coordinates:

F = −∇V(R)

F_A,x = −∂V/∂R_A,x = −∂E_el/∂R_A,x + Σ′_B Z_A Z_B e² (x_A − x_B)/|R_A − R_B|³    (23)
The Hellmann-Feynman theorem can be invoked to evaluate the gradient of the electronic energy:

∂E(λ)/∂λ = (∂/∂λ) ⟨Ψ|Ĥ|Ψ⟩ = ⟨∂Ψ/∂λ|Ĥ|Ψ⟩ + ⟨Ψ|∂Ĥ/∂λ|Ψ⟩ + ⟨Ψ|Ĥ|∂Ψ/∂λ⟩ =
 = E ( ∫ (∂Ψ*/∂λ) Ψ dτ + ∫ Ψ* (∂Ψ/∂λ) dτ ) + ⟨Ψ|∂Ĥ/∂λ|Ψ⟩ =
 = E (∂/∂λ) ∫ Ψ*Ψ dτ + ⟨Ψ|∂Ĥ/∂λ|Ψ⟩ = ⟨Ψ|∂Ĥ/∂λ|Ψ⟩    (24)

where the last step follows from the normalization ∫Ψ*Ψ dτ = 1. That is, only the Hamiltonian has to be differentiated with respect to the nuclear coordinates. The same scheme and algorithms applied for a classical MD simulation can then be used. In particular, with reference to the CRYSTAL code[2], the procedure can be outlined as follows:
(i) starting from some initial nuclear positions and velocities,
(ii) CRYSTAL is called to solve the Schrödinger equation and to evaluate the total energy, E_el, and the forces acting on each nucleus in the unit cell;
(iii) then the trajectories (positions and velocities) are updated according to a Verlet scheme;
(iv) finally, after a check on the total energy, some thermodynamic observables are evaluated;
(v) the whole procedure is repeated until a given t_max.
7 Car-Parrinello Molecular Dynamics
The original theory can be found in the elegant article by Car and Parrinello[4]. This method abandons the BO approximation and couples the motion of nuclei and electrons. The potential energy of equation 1 is described as a function of a set of monoelectronic spin-orbitals, resulting from the Kohn-Sham or Hartree-Fock solution of the Schrödinger equation, and of the nuclear coordinates:

V = V({φ_i(r, t)}, {R_A(t)})    (25)

where both of these quantities are time-dependent. Instead of the usual Hamiltonian approach (equations 2 and 3), the equations of motion can be derived using a Lagrangian formalism. By defining the position coordinates {q(t)} and the velocities {q̇(t)}, it is possible to write the Lagrangian of a given system as follows:

L(q̇, q) = T(q̇, q) − V(q)    (26)
and the corresponding Euler-Lagrange equations become:

d/dt (∂L/∂q̇) − ∂L/∂q = 0    (27)

Because the potential is a function of the positions only, equation 27 simplifies to:

d/dt (∂T/∂q̇) − ∂T/∂q = −dV/dq    (28)
Then a fictitious mass µ is associated to the electrons and their motion is treated classically, in analogy to the nuclear one, by defining a Lagrangian of the form:

L(φ, R) = T_N + T_e − V(φ, R) = (1/2) Σ_A M_A v_A² + (1/2) µ Σ_i ∫ |φ̇_i(r)|² dr − E_el − (1/2) Σ′_{A,B} Z_A Z_B e² / |R_A − R_B|    (29)
A constrained Lagrangian is defined, by means of Lagrange multipliers, in order to preserve the orthonormality of the spin-orbitals along the time evolution of the system:

L(φ, R, λ) = T_N + T_e − V + Σ_{i,j} λ_{i,j} ( ⟨φ_i|φ_j⟩ − δ_{i,j} )    (30)

Then, using definition 28 for q = φ and q = R, by differentiating the Lagrangian 30, a double set of equations can be obtained in the nuclear and electronic coordinates, respectively:

M_A R̈_A = −∂E/∂R_A    (31)

µ φ̈_i(r, t) = −∂E/∂φ*_i + Σ_j λ_{i,j} φ_j    (32)
By defining the monodeterminantal Hamiltonians in the HF and KS approaches:

f̂ = ĥ + Ĵ + Ĉ    (33)

ĥ_eff = ĥ + Ĵ + V̂_x,c    (34)

where Ĵ, Ĉ and V̂_x,c have the usual meaning of Coulomb, exchange, and exchange-correlation operators, it is easy to see that the energy gradients correspond to:

f̂(t) φ_i(r, t) = ∂E/∂φ_i(r, t)    (35)

ĥ_eff φ_i(r, t) = ∂E/∂φ_i(r, t)    (36)

and in the limit t → 0 these equations become the well-known HF and KS time-independent monoelectronic equations.
8 Statistical Thermodynamics

8.1 The Canonical Ensemble
An ensemble is a huge collection of copies of a given macroscopic system. Statistical ensembles are usually characterized by fixed values of thermodynamic variables. One fundamental ensemble is the canonical one, characterized by constant particle number N, constant volume V and constant temperature T (NVT); fluctuations of the energy around a mean value, on the contrary, are permitted. The canonical ensemble is formed by a certain number A of copies of the same system, having the same eigenvalue spectrum but with different occupations of the eigenstates.
The average over space is used to simulate the average over time, i.e. to describe the energy fluctuations occurring as a function of time. If a_i is the number of copies of the system with energy E_i, the ratio between the number of copies with energy E_1 and E_2 must be a function of the energies themselves:

a_1/a_2 = f(E_1, E_2)    (37)
The energies are always evaluated with respect to a reference value, namely the zero of the energy; so, to allow the ratio 37 to be independent of the adopted reference, its value has to be a function of the energy difference:

a_1/a_2 = f(E_1 − E_2)    (38)

and then the following relationship can be derived:

a_1/a_3 = (a_1/a_2)(a_2/a_3) = f(E_1 − E_3) = f(E_1 − E_2) · f(E_2 − E_3)    (39)

Then, by defining x = E_1 − E_2 and y = E_2 − E_3, it emerges that the exponential is the functional form capable of guaranteeing the equivalence in 39:

exp(x + y) = exp(x) exp(y)    (40)
Then the number of copies characterized by a given energy E_i can be expressed as follows:

a_i = C exp(−βE_i)    (41)

where C is a coefficient to be determined and β = 1/kT. In order to obtain the multiplicative coefficient C, the condition Σ_i a_i = A can be exploited:

Σ_i a_i = C Σ_i exp(−βE_i) = A  ⇒  C = A / Σ_i exp(−βE_i)    (42)

When A tends to infinity, the ratio a_i/A represents the probability of finding one of the system copies in the i-th state, and by combining equations 41 and 42 it turns out that:

p_i = a_i/A = exp(−βE_i) / Σ_i exp(−βE_i) = exp(−βE_i) / Q    (43)

where Q = Σ_i exp(−βE_i) is the partition function of the system, related to the degrees of freedom associated to E. It is easy to see that the probabilities are normalized to 1:

Σ_i p_i = Σ_i exp(−E_i/kT) / Σ_i exp(−E_i/kT) = 1    (44)

8.2 Thermodynamic Potential
The value of any given quantity of a system, evaluated as a mean over an ensemble of the system itself using the probability distribution of equation 43, equals the observed experimental value of that quantity. The mean over an ensemble of a quantity thus corresponds to the most probable outcome of a measurement.
The mean value of an observable O, characterized by a set of eigenvalues {ω_i}, can be found as:

⟨O⟩ = Σ_i ω_i p_i = Σ_i ω_i exp(−E_i/kT) / Σ_i exp(−E_i/kT)    (45)
The energy, ⟨E⟩ = Σ_i ε_i p_i, can be obtained by differentiating the logarithm of the partition function with respect to the temperature:

∂ln Q/∂T = (1/Σ_i exp(−ε_i/kT)) Σ_i (ε_i/kT²) exp(−ε_i/kT) = (1/kT²) Σ_i ε_i exp(−ε_i/kT) / Σ_i exp(−ε_i/kT) = ⟨E⟩/kT²    (46)
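Equation 46 can be checked numerically on a small spectrum; the following Python sketch compares ⟨E⟩ = Σ_i ε_i p_i with kT² ∂lnQ/∂T computed by a finite difference (kB = 1 and a toy three-level spectrum are assumed):

```python
import math

def mean_energy(levels, T):
    """<E> = sum_i e_i exp(-e_i/T) / Q with k_B = 1 (equations 43 and 45)."""
    w = [math.exp(-e / T) for e in levels]
    Q = sum(w)
    return sum(e * wi for e, wi in zip(levels, w)) / Q

def kT2_dlnQ_dT(levels, T, h=1e-6):
    """kT**2 * d(ln Q)/dT, equation 46, via a central finite difference."""
    lnQ = lambda t: math.log(sum(math.exp(-e / t) for e in levels))
    return T * T * (lnQ(T + h) - lnQ(T - h)) / (2.0 * h)

levels = [0.0, 1.0, 2.5]          # hypothetical three-level spectrum
E1 = mean_energy(levels, T=0.8)
E2 = kT2_dlnQ_dT(levels, T=0.8)   # the two routes should agree
```

The agreement of the two routes is exactly the content of equation 46: all thermal averages are encoded in Q.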
Regarding entropy, it is possible to relate its statistical definition by Boltzmann to the system partition function. Entropy is a measure of the disorder of the system, which in turn can be correlated with the number of ways a given configuration (a_1, a_2, ...a_A) can be obtained:

W = A! / Π_i a_i!    (47)

The greater the number of ways to get a given configuration, the greater the degree of disorder. A relationship between S and W must exist and has to take into account the fact that entropy is an extensive property: given the A + B system, obtained as the union of the systems A and B, it can be written:

S_AB = S_A + S_B    (48)

For the degree of disorder, instead, the following relation for the A + B system holds true:

W_AB = W_A · W_B    (49)

A logarithmic dependence of S on W, S ∝ ln W, allows both of these properties to be satisfied:

S_AB = k ln W_AB = k ln(W_A · W_B) = k ln W_A + k ln W_B = S_A + S_B    (50)
where k = R/N_a is the Boltzmann constant (1.380 10⁻²³ J K⁻¹). As an example, the two limiting cases can be considered:
- an ensemble in which all the copies of the system are characterized by the same energy value E_3, a_3 = A → (0, 0, A, 0, 0, ...): W = 1, S = 0, that is, a completely ordered system;
- an ensemble where each copy of the system has a different energy, (1, 1, 1, ...1): W = A!, that is, the entropy is at its maximum and the ensemble is totally disordered.
Finally, it can be written:

S_ensemble = k ln W_ensemble = k ln (A! / Π_i a_i!)    (51)

and, by exploiting the Stirling formula ln x! ≈ x ln x − x, the entropy becomes:

S_ensemble = k (A ln A − A − Σ_i a_i ln a_i + Σ_i a_i) = k (A ln A − Σ_i a_i ln a_i) =
 = k (A ln A − Σ_i A p_i ln(A p_i)) = k (A ln A − A Σ_i p_i ln A − A Σ_i p_i ln p_i) =
 = −kA Σ_i p_i ln p_i    (52)
where in the second line the definition of the probability distribution 43 has been introduced. Then, for the mean value of the entropy:

⟨S⟩ = S_ensemble/A = −k Σ_i p_i ln p_i = −k Σ_i [exp(−E_i/kT)/Q] ln[exp(−E_i/kT)/Q] =
 = −k Σ_i [exp(−E_i/kT)/Q] (−E_i/kT − ln Q) = ⟨E⟩/T + k ln Q    (53)
8.3 Summarizing

The key role played by the partition function has become evident: its knowledge allows the evaluation of many thermodynamic functions, such as the energy, the entropy and the Gibbs and Helmholtz free energies, according to the following equations:

E = kT² ∂ln Q/∂T    (54)

S = kT ∂ln Q/∂T + k ln Q    (55)

A = −kT ln Q    (56)

9 Vibrational Energy
The energies associated to a single harmonic oscillator, vibrating with frequency ν in the vibrational state characterized by the quantum number v and resulting from the solution of a nuclear Schrödinger equation whose Hamiltonian has the form:

Ĥ(R) = −(h²/8π²m) ∂²/∂R² + (1/2) k R²    (57)

where k is the force constant, are:

ε_v = hν (v + 1/2)    (58)

Its partition function is:

q_vib = Σ_v exp(−ε_v/kT) = Σ_v exp[−(hν/kT)(v + 1/2)] = exp(−hν/2kT) Σ_v exp(−hνv/kT) = exp(−hν/2kT) / (1 − exp(−hν/kT))    (59)

where in the last step the sum Σ_v x^v = 1/(1 − x) of the converging geometric series has been used. The total vibrational energy, which takes into account the different occupancies of the various vibrational eigenstates, becomes:
E_vib = kT² ∂ln q_vib/∂T = kT² (∂/∂T) [ −hν/2kT − ln(1 − exp(−hν/kT)) ] =
 = kT² [ hν/2kT² + (hν/kT²) exp(−hν/kT)/(1 − exp(−hν/kT)) ] =
 = hν/2 + hν exp(−hν/kT)/(1 − exp(−hν/kT))    (60)
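Equation 60 interpolates between the zero-point energy hν/2 at low temperature and the classical equipartition value kT at high temperature; a short Python sketch (kB = 1 assumed, so hν and T are in the same energy units):

```python
import math

def e_vib(h_nu, T):
    """Vibrational energy of one harmonic mode, equation 60: the zero-point
    term plus the thermally excited term h_nu/(exp(h_nu/kT) - 1)."""
    return 0.5 * h_nu + h_nu / math.expm1(h_nu / T)

low = e_vib(h_nu=1.0, T=0.01)    # frozen mode: only the ZPE ~ 0.5 survives
high = e_vib(h_nu=1.0, T=100.0)  # classical limit: approaches kT = 100
```

The low-temperature freezing of high-frequency modes is precisely the quantum effect (hν > kB T) mentioned in section 1 as a limit of classical MD.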
In the case of 3N independent harmonic oscillators, the total partition function can be written as the product of the partition functions related to each of the normal modes:

q_vib^Tot = q_1,vib · q_2,vib · · · q_3N,vib = Π_α q_α,vib = Π_α exp(−hν_α/2kT) / (1 − exp(−hν_α/kT))    (61)

Then, the total vibrational energy of the 3N oscillators becomes:

E_vib = Σ_α hν_α/2 + Σ_α hν_α / (exp(hν_α/kT) − 1)    (62)
which is a Bose-Einstein distribution for indistinguishable particles. For a periodic system, taking into account the phonon dispersion, the vibrational energy can be written as:

E_vib = (1/N_cell) Σ_k Σ_α hν_α^k/2 + (1/N_cell) Σ_k Σ_α hν_α^k / (exp(hν_α^k/kT) − 1)    (63)

where the first term represents the zero-point energy (ZPE).
10 Solution of the nuclear equation
The quantum-mechanical Hamiltonian of an N-atom system, where the nuclear displacements around the equilibrium positions are {X_A}, can be written:

Ĥ(X) = −(1/2) Σ_A (1/M_A) ∂²/∂X_A² + V(X)    (64)

where atomic units have been used.
(i) The potential V(X) = E_el + V_AB can be expanded around its minimum in a Taylor series as a function of the atomic displacements {X_A}:

V(X) = V(X)|₀ + Σ_A (∂V(X)/∂X_A)|₀ X_A + (1/2) Σ_{A,B} X_A (∂²V(X)/∂X_A∂X_B)|₀ X_B + ...
 = (1/2) Σ_{A,B} X_A (∂²V(X)/∂X_A∂X_B)|₀ X_B    (65)

since at the minimum the first derivatives vanish and the constant term can be taken as the zero of the energy.
(ii) Mass-weighted coordinates q_A = √M_A X_A can be introduced in 64 to get:

Ĥ(q) = −(1/2) Σ_A ∂²/∂q_A² + (1/2) Σ_{A,B} q_A (∂²V(X)/∂q_A∂q_B) q_B = −(1/2) ⟨∂/∂q|∂/∂q⟩ + (1/2) ⟨q|W|q⟩    (66)

where W is the Hessian matrix of the second derivatives of the potential with respect to the mass-weighted displacements, i.e. w_AB = ∂²V/∂q_A∂q_B.
(iii) A unitary transformation, which brings the Hessian to diagonal form, can be performed:

Q_α = Σ_A U_αA q_A
|Q⟩ = U|q⟩
Λ = U† W U

A set of independent displacements is thus found, generally called normal modes {Q_α}, and the Hamiltonian can be rewritten as a sum of 3N terms:

Ĥ(Q) = −(1/2) Σ_α ∂²/∂Q_α² + (1/2) Σ_α λ_α Q_α²    (67)

Each term is equivalent to the Hamiltonian 57, and the solution of such a system of equations gives rise to a complete set of eigenvalues and eigenvectors, where the equivalence between the force constant k and the diagonal elements of the Hessian matrix is evident:

ν_α = (1/2π) √λ_α    (68)
(iv) For a periodic lattice, taking advantage of the translational symmetry, the atomic displacements can be described by a Bloch-function-like equation of the form:

q_A^k = P̂_k q_A = (1/√N_cell) Σ_T exp(ikT) T̂ q_A = (1/√N_cell) Σ_T exp(ikT) q_A^T    (69)

where k is a point in the first Brillouin zone.
(v) Then, defining the Hessian matrix in direct space as W_AB^T = ⟨q_A^0|W|q_B^T⟩, the corresponding matrix in reciprocal space can be derived at any given k point:

W_AB^k = ⟨q_A^k|W|q_B^k⟩ = (1/N_cell) Σ_{T,T′} exp(−ikT) exp(ikT′) ⟨q_A^T|W|q_B^{T′}⟩ = Σ_{T″} exp(ikT″) ⟨q_A^0|W|q_B^{T″}⟩    (70)
(vi) These matrices can be diagonalized, so that 3N eigenvectors and eigenvalues are generated for each of the selected points in the Brillouin zone. In particular, the energy values are of the form:

ε_{v,α}^k = hν_α^k (v + 1/2)    (71)

ν_α^k = (1/2π) √λ_α^k    (72)

The total vibrational energy is then given by expression 63.
A The Maxwell-Boltzmann distribution

The distribution of one component of the velocity of a gas follows the general Boltzmann form:

p(v_x) = (1/2π)^{1/2} (1/σ) exp[−v_x²/2σ²]    (73)
To determine the value of σ, the distribution can be equated to:

p(v_x) = A exp[−αv_x²] = A exp[−mv_x²/2kT]    (74)

Because the probability distribution is normalized, ∫p(v_x)dv_x = 1:

A ∫_{−∞}^{+∞} exp[−mv_x²/2kT] dv_x = 1

and, remembering that ∫exp[−αx²]dx = (π/α)^{1/2}, with α = m/2kT we end up with:

A (2πkT/m)^{1/2} = 1  ⇒  A = (m/2πkT)^{1/2}

p(v_x) = (m/2πkT)^{1/2} exp[−v_x²/2σ²]    (75)
By equating equations 73 and 75 it is easy to see that σ² = kT/m. The variance of the distribution is defined as σ_v² = ⟨(v − ⟨v⟩)²⟩, and it is easy to show that:

⟨(v − ⟨v⟩)²⟩ = Σ_i (v_i − ⟨v⟩)² p_i = Σ_i (v_i² − 2v_i⟨v⟩ + ⟨v⟩²) p_i =
 = Σ_i v_i² p_i − 2⟨v⟩ Σ_i v_i p_i + ⟨v⟩² Σ_i p_i = ⟨v²⟩ − ⟨v⟩²    (76)

where in the last step the equivalences Σ_i p_i = 1 and Σ_i v_i p_i = ⟨v⟩ have been used. The same moments can also be obtained directly from the continuous distribution:
⟨v_x⟩ = ∫ v_x p(v_x) dv_x = (m/2πkT)^{1/2} ∫ v_x exp[−v_x²/2σ²] dv_x = 0    (77)

σ² = ⟨v_x²⟩ = ∫ v_x² p(v_x) dv_x = (m/2πkT)^{1/2} ∫ v_x² exp[−v_x²/2σ²] dv_x = (m/2πkT)^{1/2} (kT/m) (2πkT/m)^{1/2} = kT/m    (78)

where the first integral vanishes because ∫x exp[−αx²]dx = 0. So each component of the velocity contributes kT to m⟨v²⟩, and then:

m⟨v²⟩ = m(⟨v_x²⟩ + ⟨v_y²⟩ + ⟨v_z²⟩) = 3kT    (79)
The experimental curves differ because they represent the distribution of the absolute value of the total molecular speed, v = (v_x² + v_y² + v_z²)^{1/2}, which is an intrinsically positive quantity and is distributed according to the Maxwell-Boltzmann equation:

p(v)dv = p(v_x)dv_x p(v_y)dv_y p(v_z)dv_z = (m/2πkT)^{3/2} exp[−mv²/2kT] dv_x dv_y dv_z =
 = 4π (m/2πkT)^{3/2} v² exp[−mv²/2kT] dv    (80)

because the integration over a spherical shell in velocity space gives dv_x dv_y dv_z = 4πv² dv.
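The relations above can be checked by sampling; this Python sketch draws Cartesian velocity components from the Gaussian 75 with σ² = kT/m and verifies equation 79 statistically (m = kB = 1 assumed):

```python
import random

def sample_v2(n, T, seed=2):
    """Draw v_x, v_y, v_z from the Maxwell-Boltzmann Gaussian with
    sigma**2 = kT/m (equations 73 and 75, m = k_B = 1) and return
    the list of squared speeds v**2 = v_x**2 + v_y**2 + v_z**2."""
    rng = random.Random(seed)
    sigma = T ** 0.5
    return [sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)) for _ in range(n)]

v2 = sample_v2(20000, T=1.5)
mean_v2 = sum(v2) / len(v2)   # should approach 3kT = 4.5, equation 79
```

Histogramming the sampled speeds v = √(v²) would reproduce the 4πv² weighting of equation 80.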
B Virial Theorem
The pressure is calculated via the virial theorem of Clausius, which states that the total virial equals −3NkBT. The total virial for a real system has two contributions: the ideal gas part, −3PV, and the part due to the interactions between the particles. The latter is defined as the sum of the products of the coordinates of the particles and the forces acting on them. Therefore,

P = (1/V) [ N k_B T + (1/3) Σ_{A=1}^{N−1} Σ_{B=A+1}^{N} R_AB F_AB ]    (81)

C Units and Physical Constants
Atomic units are particularly suited to describe the properties of many-electron systems. The most important atomic units are reported in the table below, together with the conversion factors to the corresponding SI units.

Quantity             atomic unit                            SI equivalent
electron charge      e                                      1.6022 10⁻¹⁹ C
electron mass        m0                                     9.1096 10⁻³¹ kg
atomic mass unit     u                                      1.66057 10⁻²⁷ kg
Planck constant      h                                      6.6260693 10⁻³⁴ J s
angular momentum     h̄                                      1.0546 10⁻³⁴ J s rad⁻¹
length               a0 (Bohr radius)                       5.2918 10⁻¹¹ m
permittivity         4πε0                                   1.1126 10⁻¹⁰ J⁻¹ C² m⁻¹
energy               Eh = e²/(4πε0 a0) (Hartree)            4.3598 10⁻¹⁸ J
time                 t0 = h̄/Eh                              2.4189 10⁻¹⁷ s
velocity             a0/t0 = α c                            2.1877 10⁶ m s⁻¹
Boltzmann constant   k                                      1.38066 10⁻²³ J K⁻¹

Table 5: Atomic units.
• The atomic mass unit u corresponds to 1/12 of the mass of a neutral unbound Carbon-12 atom.
• The electron kinetic energy operator [−h̄²/(2m)∇²] becomes −∇²/2.
• The Coulomb interaction between two electrons at distance r_ij [e²/(4πε0 r_ij)] simplifies to 1/r_ij, and the interaction between an electron and a nucleus of charge Z at distance r_A becomes −Z/r_A.
• The length unit is the Bohr, corresponding to the Bohr radius of the Hydrogen atom, a0 = 4πε0 h̄²/(m0 e²) = 0.52918 Å, and it is particularly suited to describe distances on atomic and molecular scales.
• The angular momentum (orbital or spin) has integer or half-integer eigenvalues.
• The light velocity is c = 137.036 a.u.; its inverse, α = 1/137.036, is the fine structure constant.
• The energy unit is the Hartree, which equals 627.5 kcal/mol = 2625.5 kJ/mol = 27.21 eV, and corresponds to the Coulomb energy between two electrons at 1 Bohr distance.
• The angular frequency of a transition is given by ω_αβ = (E_β − E_α)/h̄, which in atomic units (h̄ = 1) reduces to ω_αβ = E_β − E_α.
If energies are in Hartree, lengths in a0, masses M in amu and forces F in Ha/a0, the natural unit of time comes out in the femtosecond range (1 femtosecond = 1.0 · 10⁻¹⁵ seconds). Starting from the kinetic energy definition and adopting the previous conventions, we end up with a time unit of this order:

E_k = (1/2) m v²  ⇒  [Ha] = [u][a0²]/[dt²]  ⇒  dt = √(u a0²/Ha) = 1.0328 fs    (82)
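The combination √(u a0²/Ha) can be evaluated directly from the SI values of table 5; a quick Python check (constants as tabulated above):

```python
u = 1.66057e-27        # atomic mass unit, kg
a0 = 5.2918e-11        # Bohr radius, m
Ha = 4.3598e-18        # Hartree, J
dt = (u * a0**2 / Ha) ** 0.5   # natural time for (amu, bohr, hartree) units
dt_fs = dt * 1e15              # convert seconds to femtoseconds, ~ 1.03 fs
```

This order of magnitude is consistent with the femtosecond time steps typically used in MD simulations.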
References

[1] CPMD web site: www.cpmd.org.
[2] CRYSTAL web site: www.crystal.unito.it.
[3] B. J. Berne and G. D. Harp. Adv. Chem. Phys., 17:63, 1970.
[4] R. Car and M. Parrinello. Phys. Rev. Lett., 55:2471, 1985.
[5] D. Frenkel and B. Smit. Understanding Molecular Simulation. Academic Press, 1996.
[6] A. Putrino, D. Sebastiani, and M. Parrinello. J. Chem. Phys., 113:7102, 2000.
[7] M. E. Tuckerman and G. J. Martyna. J. Phys. Chem. B, 104:159–178, 2000.