
21. Excited States
Thus far we have discussed exclusively problems that involved electronic ground states.
That means that the electrons are in such a configuration, and the orbitals are optimized in such a
way, as to give the minimum possible energy. While ground states are generally what we consider
"stable", i.e. they can persist indefinitely, molecules often find themselves in electronically excited
states, where at least one electron is not in the lowest possible energy orbital. The electronically
excited states may have limited lifetimes, i.e. they typically eventually relax back to the ground
state. However, they are also stable in the sense that they represent minima on their respective PES:
as we will see, they can be optimized, and we can calculate their vibrational frequencies and
thermochemistry just as we did for ground states. Excited states are important in many chemical
processes, including photochemistry and electronic spectroscopy.
21.1. Excited configurations
We have already encountered configurations with electrons in excited orbitals – recall e.g.
the configuration interaction (CI). It is important to keep in mind the different objectives: before,
we used excited configurations to include electron correlation in the electronic ground state. Here
we are talking not about the electron correlation, but about electronically excited states.
Within the single-determinant formalism, the simplest and most straightforward
description of an excited state may seem to be taking an electron and promoting it to an excited
(virtual) orbital. Unfortunately, there are many things wrong with this picture.

• Using orbitals that were optimized for the ground state calculation does not make sense for
excited states. Remember that the variational principle minimized the ground state energy –
that of the occupied orbitals. Virtual orbitals are not optimized in any way. One way to think about
it is to recall that there are electron-electron interactions (J and K terms). When moving an
electron up to a virtual orbital, these interactions change, and this change would have to be
taken into account.

• The restricted closed-shell picture breaks down. An excited state where two different spatial
orbitals each hold a single electron is no longer a pure spin state but, as it turns out, a mixture of a
singlet and a triplet. To get one or the other, one would need at least two Slater determinants (kind
of like the ROHF approach).
The first bullet suggests that we can simply reoptimize all of the orbitals. In practice, however, this
can work only in instances where the ground-state and the excited-state wave functions are
mutually orthogonal, meaning that they do not interact: otherwise, the variational solution for the
excited-state wave function will collapse back to the ground-state wave function, which is the true
minimum. The two wave functions are orthogonal if they have different spin or belong to different
irreducible representations of the molecular symmetry point group. Orthogonality of the singlet and
triplet spin functions, for example, ensures that singlet and triplet wave functions will not mix.
Therefore, only if the desired excited state is the lowest state of a given spin (e.g. the lowest triplet)
or the lowest state of a given symmetry can you use the standard "ground state" HF (or DFT/KS)
variational procedure. In fact, in this case the state in question is the ground state, i.e. the lowest
energy state, for the given spin or symmetry of the wavefunction.
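As a concrete illustration (a sketch: the level of theory is an arbitrary choice, and the geometry is simply borrowed from the formaldehyde example of sec. 21.8), the lowest triplet can be treated with the ordinary "ground state" variational machinery just by requesting a multiplicity of 3, since a triplet cannot collapse onto the closed-shell singlet ground state:

%chk=form_triplet.chk
# B3LYP/6-31G(d) Opt Freq Test

Formaldehyde, lowest triplet via a standard "ground state" calculation

0 3
C    0.5339   -0.0000    0.0000
O   -0.6829   -0.0000    0.0000
H    1.1292    0.9266    0.0000
H    1.1300   -0.9261    0.0000

With multiplicity 3, Gaussian automatically switches to the unrestricted (UB3LYP) formalism, and the job can be treated exactly like any ground state optimization and frequency calculation.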
Speaking of DFT, it is interesting to note that it should not work at all for this, because the
Hohenberg-Kohn theorem (remember?) can be proven only for the absolute lowest energy state,
irrespective of spin or symmetry. The DFT methods do not seem to care, however, and in practice
they work very well for this purpose, much better than HF. In fact, DFT sometimes works reasonably well
even with the very crude approach without re-optimization of the ground state orbitals (first bullet
above). HF, on the other hand, is generally horrible – the HF virtual orbitals tend to be too
high in energy and way too diffuse to be useful even for very crude calculations. We have already
encountered this when we discussed Koopmans' theorem and how it works reasonably well for
the IPs, but fails miserably for the EAs.
21.2. Singly Excited States: Configuration Interaction Singles (CIS)
The CIS is the simplest general method for calculation of excited electronic states, roughly
equivalent (i.e. not that good) to the ground state HF level. It involves a configuration interaction
calculation for the singly-excited Slater determinants obtained from a HF calculation. If you recall
the previous discussion of configuration interaction (CI) methods, they look for the solution of the
Schrödinger equation in the form of a linear combination of Slater determinants, but do not involve
any re-optimization of the HF orbitals. It was also shown that the ground state (HF) configuration
does not interact with the singly excited ones (Brillouin theorem), so the lowest and most important
contribution to the ground state wavefunction was from the doubles. Again, it is important to keep
in mind that here we are not interested in including electron correlation in the ground state, but in
describing the electronically excited state. For the excited states, the singly excited determinants are
the most important contributions and the simplest thing one can do is to build the excited state
wavefunction as a linear combination of the singly excited determinants. Since they do not interact
with the ground state, there is no danger of falling back to it, and this wavefunction can be
optimized using the variational principle. This is exactly what CIS does.
More formally, the excited state wavefunction is written as (see also eqn. 64):

\Psi = \sum_{a,r} c_a^{\,r}\, \Psi_a^{\,r}        (157)
where a labels occupied and r virtual orbitals. The variational principle then leads to a CIS
Hamiltonian matrix diagonalization, essentially equivalent to the ground state CI described by
equations (65) – (67). The dimension of the matrix is essentially M × N, i.e. the number of singly
excited determinants, where M is the number of occupied orbitals from which excitation is allowed
and N is the number of virtual orbitals into which excitation is considered. (If excitations with a
spin-flip of the excited electron are also allowed, the size increases, although none of the triplet
states have matrix elements with any of the singlet states because of their different spins.)
Diagonalization of the CIS matrix takes place only in the space of the excited configurations, since they
do not mix with the HF reference (ground state), and yields energy eigenvalues and the corresponding
eigenvectors detailing the weight of every singly excited determinant in each excited state. Note that
in practice you cannot solve for all excited states and have to specify the number of states to solve
for, generally starting from the lowest, which are naturally the most interesting for the chemistry.
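For reference, here is a sketch of the resulting eigenvalue problem in the conventional spin-orbital notation of the CIS literature (this explicit form is not written out in these notes). With a, b labeling occupied and r, s virtual spin orbitals, the excitation energies ω = E_CIS − E_HF satisfy

\sum_{b,s}\Bigl[(\varepsilon_r - \varepsilon_a)\,\delta_{ab}\,\delta_{rs} + \langle rb\|as\rangle\Bigr]\, c_b^{\,s} \;=\; \omega\, c_a^{\,r}

where ⟨rb‖as⟩ is an antisymmetrized two-electron integral over HF spin orbitals. The matrix being diagonalized here is exactly the CIS matrix described above, and the eigenvector components c_a^r are the weights of the individual singly excited determinants.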
The CIS results should be regarded as qualitative: they generally give the correct
ordering of the excited states. Quantitatively, the energies are not very accurate (nor are they
expected to be), but the error is fairly systematic – all states are predicted to be too high in energy,
by an average of about 0.7 eV. The worst prediction is for the lowest excited state, which is known
to have significant dynamical electron correlation.
To improve CIS results beyond their roughly HF quality, various options may be
considered. Particularly for spectroscopic predictions, semiempirical parameterization of the CIS
matrix elements may be preferred over their direct ab initio evaluation; the most
complete realization of this formalism is the INDO/S parameterization of Zerner and co-workers
(in Gaussian called ZINDO). This highly computationally efficient model often offers excellent
performance.
Further improvement of ab initio CIS includes the effect of double excitations, referred to
as CIS(D). It is also worth noting that the CIS technology has a particularly valuable application
that is related not to an interest in excited states but to the stability of the SCF wavefunction, as we
have already discussed. The stability test (keyword Stable) actually does a CIS-type
calculation to look for possible lower-energy determinants.
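A sketch of such a stability check (the method and basis set are arbitrary choices; the Stable=Opt variant shown here also reoptimizes the wavefunction if an instability is found, whereas plain Stable only reports it):

# HF/6-31G(d) Stable=Opt Test

SCF wavefunction stability test (a CIS-type search for lower-energy determinants)

0 1
molecule spec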
21.3. Higher Roots of MCSCF Calculations
In principle, excited states can be obtained as higher energy roots of the MCSCF and CI
calculations, discussed previously in the context of including electron correlation. The MCSCF (in
its most common CASSCF incarnation) is often used for this purpose but, as mentioned before in
the context of ground state calculations, it is generally a quite tricky approach. Conceptually, it is
straightforward though - once a root is specified (the lowest energy would be the ground state, the
next one the first excited state etc.) MCSCF process minimizes the energy for that root following
the usual variational procedure. Problems can arise in the vicinity of so-called ‘conical
intersections’ where the states may become degenerate.
To correct for dynamical correlation, second-order perturbation theory (MP2) can be
added on top of the CASSCF method – the resulting CASPT2 level, as it is often called, is generally considered the
most robust method for calculating excited state energies and wavefunctions. There are other, more
sophisticated ones, mostly coupled-cluster (CC) related approaches, but, as expected, they also
carry a much more significant computational burden.
21.4. Time-Dependent HF and DFT
Time-dependent methods are a completely different way of getting at the excited states.
Despite the name, remember that this is about excited state calculations, not about anything
depending on time. The time comes from considering a time-dependent perturbation to the
molecular Hamiltonian, which allows one to obtain information about the excited states indirectly
through the response of the molecule to that perturbation. Consider an oscillating electric field E:
E  E 0 cos(t )
(158)
where t is time and  is the angular frequency. The field will cause the electronic states of the
molecule to change due to the electrostatic interaction between the change density and the field,
in much the same way as the reaction field of a dielectric medium did to mimic the surrounding
solvent. The molecule will polarize in response to the field, and the response function is called
polarizability and usually denoted α; it can be written as the following sum over excited states:

\alpha(\omega) \;=\; \sum_{i \neq 0}^{\text{states}} \frac{\bigl|\langle 0|\hat{\mu}|i\rangle\bigr|^{2}}{E_i - E_0 - \omega}        (159)
where the numerator of each term in the sum is a so-called transition dipole moment and the
denominator involves the frequency and the energies of the excited states and the ground state.
Note that, if the frequency corresponds exactly to the difference in energy between an excited state
and the ground state, there is a pole in the frequency-dependent polarizability, i.e., it diverges since
the denominator goes to zero. Using propagator methodology (sometimes also called a Green’s
function approach or an equation-of-motion (EOM) method), the poles of the frequency-dependent
polarizability (where the denominator goes to zero) can be determined without having to compute
all of the necessary excited-state wave functions and their corresponding state energies.
The practical way to do this involves once again the variational principle and leads to a so-called
random phase approximation (RPA), which is often used synonymously with Time-Dependent
Hartree-Fock (TDHF). The integrals required to compute the excitation
energies are essentially those required to fill the CI matrix containing all single and double
excitations, plus the transition dipole moments between the ground state and all singly excited
configurations. Because the RPA method includes double excitations, it is usually more accurate
than CIS for predicting excited-state energies. On the other hand, the method does not deliver a
formal wave function, as CIS does. The RPA method may be applied to either HF or MCSCF wave
functions. As with the CI formalisms they somewhat resemble, RPA solutions are most efficiently
found by an iterative process that focuses only on a few lowest-energy excitations.
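For orientation, a sketch of the working equations in the standard RPA/TDHF notation (this explicit form is not given in these notes): with a, b occupied and r, s virtual spin orbitals, the excitation energies ω are obtained from

\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B}^{*} & \mathbf{A}^{*} \end{pmatrix}\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} = \omega \begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{pmatrix}\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix}, \qquad A_{ar,bs} = (\varepsilon_r - \varepsilon_a)\delta_{ab}\delta_{rs} + \langle rb\|as\rangle, \quad B_{ar,bs} = \langle rs\|ab\rangle

Setting B = 0 reduces this to the CIS eigenvalue problem sketched earlier (the Tamm-Dancoff approximation). TDDFT, discussed next, has the same structure, with KS orbital energies and exchange-correlation kernel contributions entering A and B.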
A DFT method that is strongly analogous to RPA is called time-dependent DFT (TDDFT).
In this case, the KS orbital energies and various exchange integrals are used in place of matrix
elements of the Hamiltonian. Just like DFT for ground states, TDDFT is the most widely used
method for calculating excited states. It is reasonably robust and computationally efficient and in
most cases gives very good results. However, it is not perfect. In particular, TDDFT is usually
most successful for low-energy excitations, because the KS orbital energies for orbitals that are
high up in the virtual manifold are typically quite poor. Generally, for TDDFT results to be most
reliable, the following two criteria should be met:
• the excitation energy is significantly smaller than the molecular ionization potential (note
that excitations from occupied orbitals below the HOMO are allowed, so this is not a
tautological condition);
• promotion(s) should not take place into orbitals having positive KS eigenvalues.
In the table below (from Cramer: Essentials of Computational Chemistry) TDDFT is
compared to RPA and CIS. As you can appreciate, the improvement in quality of the TDDFT results
compared to CIS or RPA is substantial.

[Table: Energies (in eV) for singlet excited states of benzene relative to the ground state, calculated with CIS, RPA, and TDDFT (from Cramer, Essentials of Computational Chemistry).]

One known problem with TDDFT, particularly with GGA or hybrid functionals, is that it performs especially poorly for excitations characterized as
charge-transfer (CT) or charge-resonance excitations in weakly interacting composite chromophores.
However, range-separated density functionals, such as CAM-B3LYP (see sec. 14.10.), often
perform much better in cases where spurious low-lying CT states are predicted by GGA or hybrid
methods.
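A sketch of how one might switch to a range-separated functional in such a case (the functional, basis set and number of states below are illustrative choices):

# CAM-B3LYP/6-31+G(d) TD(NStates=10) Test

TDDFT with a range-separated functional for a CT-prone system

0 1
molecule spec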
21.5. UV-vis and Circular Dichroism (CD)
TDDFT methods are very useful for calculation of electronic spectra, UV-vis absorption
and circular dichroism (CD). CD spectroscopy, together with optical rotatory dispersion (ORD),
which is also open to computation, is particularly useful in assigning absolute configuration for
chiral molecules. UV-vis spectra are calculated automatically for all these methods (CIS, ZINDO, TDHF and TDDFT)
from the excited state energies (the denominators in eqn. (159)) and the oscillator strengths
(related to the transition dipole moments in the numerators of eqn. (159)). Likewise, CD spectra require so-called rotational strengths, which are
similar except that they also include a magnetic transition dipole moment.
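For reference, a sketch of the standard definitions (in atomic units; these are not written out in these notes): for a transition from the ground state 0 to excited state i,

f_{0i} = \tfrac{2}{3}\,(E_i - E_0)\,\bigl|\langle 0|\hat{\boldsymbol{\mu}}|i\rangle\bigr|^{2}, \qquad R_{0i} = \operatorname{Im}\bigl[\langle 0|\hat{\boldsymbol{\mu}}|i\rangle \cdot \langle i|\hat{\mathbf{m}}|0\rangle\bigr]

The oscillator strength f determines the UV-vis absorption intensity, while the rotational strength R, which also involves the magnetic transition dipole m, determines the sign and intensity of the corresponding CD band.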
21.6. Excited State Calculations in Gaussian
The CIS calculation is requested with the keyword CIS along with the basis set. Time-dependent
HF and DFT are done by specifying HF or the density functional with the basis set, as for a ground
state calculation, and adding the keyword TD. A semi-empirical ZINDO calculation
is specified with the Zindo keyword, with no basis set specification.
All three keywords (CIS, TD and Zindo) take the same options with the same syntax.
As usual, restricted calculations are done by default. For unrestricted calculations, CIS and Zindo
take the prefix "U". For TD, the prefix goes on the HF or density functional keyword. For example,
the route section for an unrestricted CIS calculation would look like this:
#UCIS/6-31+G(d) Test
while for a TDDFT one using PBEPBE density functional it would be:
#UPBEPBE/6-31+G(d) TD Test
The options for CIS, TD and Zindo are:
Singlets
Solve only for singlet excited states. Note that this option only makes sense
for closed shell systems, for which it is the default.
Triplets
Solve only for triplet excited states. Again, this only affects calculations on
closed-shell systems.
50-50
Solve for half triplet and half singlet states.
Root=N
Specifies which excited state is to be studied: used for geometry
optimizations, population analysis, and other properties. The default is the
first excited state (N = 1).
Read
Reads the initial guesses for the excited states from the checkpoint file. This
option is used to perform an additional calculation (e.g. it can be geometry
optimization, population analysis) for an excited state computed during the
previous job step. It is accompanied by Guess=Read and Geom=Check
and Root, if other than the 1st excited state is of interest.
NStates=M
Solve for M states starting from the lowest energy one (the default is 3). If
50-50 is requested, NStates gives the number of each type of state for
which to solve (i.e., the default is 3 singlets and 3 triplets).
Add=N
Read converged states off the checkpoint file and solve for an additional N
states. This option implies Read as well. NStates cannot be used with
this option.
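For example, a route line combining these options (an illustrative sketch) that solves for five singlet and five triplet excited states would be:

# B3LYP/6-31+G(d) TD(50-50,NStates=5) Test

Per the NStates description above, with 50-50 the value of NStates applies to each spin type separately.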
21.7. Excited States in Solution
Excited state calculations may be combined with the implicit solvent (SCRF – see below)
to simulate condensed phase systems. Here it is important to remember that there are two kinds of
solvation of the excited state:
• non-equilibrium – this is when the solvent does not have time to respond and adapt
to the excited state of the molecule. In other words, the solvent reaction field
corresponds to the ground electronic state, even though the molecule is excited.
This is the case, for example, when absorption spectra, i.e. UV-vis or CD, are of
interest, and it is the default. To force non-equilibrium solvation, the NonEqSolv
option can be used.
• equilibrium – here the solvent is fully adjusted to the molecule being in the excited
electronic state. This is the case when the molecule stays excited for a long time,
for example long enough so that its geometry (nuclear positions) can also relax
following the electronic excitation. Therefore, it makes sense to use equilibrium
solvation for excited state geometry optimizations and frequency calculations (see
the sketch after this list). Gaussian again makes it the default for such jobs, so that
you don't have to worry about it, but the EqSolv option can always be used to force it.
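As mentioned in the equilibrium bullet above, a sketch of an excited-state geometry optimization in implicit water (the level of theory is an illustrative choice; since equilibrium solvation is the default for Opt, no extra option is needed):

# B3LYP/6-31+G(d) TD(Root=1) SCRF=(Solvent=Water) Opt Test

Excited state optimization with equilibrium solvation

0 1
molecule spec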
21.8. Example: Excited State Optimization and Frequencies
This would be an example of the input for optimization and vibrational frequency
calculation for the first singlet excited state of formaldehyde:
# B3LYP/6-31G(d) TD(Root=1) Opt Freq Test
Formaldehyde exc. state opt freq
0 1
C    0.5339   -0.0000    0.0000
O   -0.6829   -0.0000    0.0000
H    1.1292    0.9266    0.0000
H    1.1300   -0.9261    0.0000
Remember that frequency calculations also give you thermochemistry – you can use the same
techniques as for the ground state thermochemistry to change the parameters: temperature,
pressure, isotopes etc.
Also note that Root=1 is redundant in this case: it is the default, but it does not hurt to have it
there for transparency.
An alternative way to do this would be through a multi-step job: an excited state
calculation in the first step, and then read-in the desired state (root) and do the optimization and
vibrational analysis. The input could look like this:
%chk=exc.chk
# B3LYP/6-31G(d) TD Test

Formaldehyde excited states

0 1
C    0.5339   -0.0000    0.0000
O   -0.6829   -0.0000    0.0000
H    1.1292    0.9266    0.0000
H    1.1300   -0.9261    0.0000

--Link1--
%chk=exc.chk
# B3LYP/6-31G(d) TD(Root=1,Read) Geom=AllCheck Guess=Read Opt Freq Test
Note that the TD keyword now has the option Read, and we also used Guess=Read. If you forget to
specify Read for TD, it will do the excited state calculation all over again instead of reading it
from the checkpoint file, which means the first step of your job was useless – it will all be redone
in the second step.
The output of the excited state calculations starts with:
**********************************************************************
Excited states from <AA,BB:AA,BB> singles matrix:
**********************************************************************
and, after the listing of transition dipole moments (electric and magnetic), gives the summary of the excitation
energies, oscillator strengths and contributions of the individual orbital excitations to each excited state:
Excitation energies and oscillator strengths:
 Excited State   1:      Singlet-A"     4.0164 eV  308.70 nm  f=0.0000  <S**2>=0.000
       8 ->  9         0.70705
 This state for optimization and/or second-order correction.
 Total Energy, E(TD-HF/TD-KS) =  -114.352601192
 Copying the excited state density for this state as the 1-particle RhoCI density.

 Excited State   2:      Singlet-A"     9.0653 eV  136.77 nm  f=0.0018  <S**2>=0.000
       6 ->  9         0.70608

 Excited State   3:      Singlet-A'     9.1554 eV  135.42 nm  f=0.1503  <S**2>=0.000
       8 -> 10         0.70590
This says:
• The first excited state is a singlet (by default we calculated only singlets) with A"
symmetry; the excitation energy is 4.0164 eV (corresponding to a wavelength of 308.7
nm), the oscillator strength is zero (i.e. you won't see this transition in the UV spectrum),
and the total spin is zero (as it should be for a singlet).
• The 1st excited state consists of an excitation from orbital 8 (HOMO) to orbital 9 (LUMO),
but not entirely – notice that the coefficient that follows is not 1.0, but only 0.70705. That
means that other excited determinants contribute as well, but the contribution of each
individual one is small enough that Gaussian does not list it. If you do want them listed, you
can request a lower cutoff by using IOp(9/40)=N, which will cause all coefficients
greater than 10^-N to be listed.
• The total energy of the excited state is -114.352601192 Hartree.
• The second excited state is also a singlet with A" symmetry, with a higher excitation
energy, etc., and similarly for the rest of the excited states.
The statement "This state for optimization and/or second-order correction." after the
first excited state means that this state will be used for the optimization. If you had specified another one
(by Root), this sentence would be printed after that state. It is good for checking that you are
indeed picking the excited state you want for further calculations.
The optimization finishes just like the ground state one with listing of the optimized
geometry:
         Item               Value     Threshold  Converged?
 Maximum Force            0.000109     0.000450     YES
 RMS     Force            0.000043     0.000300     YES
 Maximum Displacement     0.000129     0.001800     YES
 RMS     Displacement     0.000076     0.001200     YES
 Predicted change in Energy=-1.219192D-08
 Optimization completed.
    -- Stationary point found.
                           ----------------------------
                           !   Optimized Parameters   !
                           ! (Angstroms and Degrees)  !
 --------------------------------------------------------------------
 ! Name  Definition        Value          Derivative Info.          !
 --------------------------------------------------------------------
 ! R1    R(1,2)            1.3098         -DE/DX =   -0.0001        !
 ! R2    R(1,3)            1.0888         -DE/DX =    0.0           !
 ! R3    R(1,4)            1.0888         -DE/DX =    0.0           !
 ! A1    A(2,1,3)        118.5345         -DE/DX =    0.0           !
 ! A2    A(2,1,4)        118.5343         -DE/DX =    0.0           !
 ! A3    A(3,1,4)        122.9312         -DE/DX =    0.0           !
 ! D1    D(2,1,4,3)      180.0            -DE/DX =    0.0           !
 --------------------------------------------------------------------
This is followed by the frequency calculation. The output is again identical to the ground state
frequencies, with all the thermochemistry etc. In this particular case, you will notice an imaginary
frequency:
 ****** 1 imaginary frequencies (negative Signs) ******
 Diagonal vibrational polarizability:
        0.1450156       0.4600067       9.7037505
 Harmonic frequencies (cm**-1), IR intensities (KM/Mole), Raman scattering
 activities (A**4/AMU), depolarization ratios for plane and unpolarized
 incident light, reduced masses (AMU), force constants (mDyne/A),
 and normal coordinates:
                      1                      2                      3
                     A"                     A'                     A'
 Frequencies --   -570.7987               902.1715              1308.8424
 Red. masses --      1.3203                 1.2950                 1.5325
 Frc consts  --      0.2534                 0.6210                 1.5467
 IR Inten    --    117.4823                 4.2887                26.5323
By now you (should) know what that means and how to deal with it.
21.9. Example: UV-vis and CD Spectra
As already noted, the UV and CD spectra are calculated automatically and no additional
keywords are needed. Chances are, however, that you will need more than the default 3 excited
states to cover a broader region of your spectrum: this is where NStates gets used a lot.
As an example, this would be the calculation of the UV-vis and CD spectra for the amino acid
alanine in its zwitterionic state in implicit water:
%chk=lala.chk
# B3LYP/6-31+G(d) TD(Nstates=100, NonEqSolv) SCRF=(Solvent=Water)
L-Ala UV/CD
0 1
molecule spec
The NonEqSolv option is redundant in this case because it is the default, but it does not hurt.
A quick check of the output shows that we indeed have 100 states:
Excitation energies and oscillator strengths:
 Excited State   1:      Singlet-A      5.8812 eV  210.81 nm  f=0.0031  <S**2>=0.000
      24 -> 25        -0.42981
      24 -> 26         0.45983
      24 -> 27         0.10582
      24 -> 28        -0.14354
      24 -> 29         0.24547
 This state for optimization and/or second-order correction.
 Total Energy, E(TD-HF/TD-KS) =  -323.559139118
 Copying the excited state density for this state as the 1-particle RhoCI density.

 Excited State   2:      Singlet-A      6.0946 eV  203.43 nm  f=0.0115  <S**2>=0.000
      24 -> 25         0.54357
      24 -> 26         0.31637
      24 -> 27         0.22705
      24 -> 29         0.18642
 …

 Excited State  99:      Singlet-A     11.9459 eV  103.79 nm  f=0.0147  <S**2>=0.000
      16 -> 26        -0.15488
      17 -> 29         0.17299
      19 -> 30        -0.15415
      19 -> 31         0.57122
      20 -> 33        -0.16522
      23 -> 45        -0.10928

 Excited State 100:      Singlet-A     11.9899 eV  103.41 nm  f=0.0303  <S**2>=0.000
      16 -> 26         0.29059
      17 -> 29        -0.13462
      19 -> 31         0.24535
      19 -> 32         0.30003
      19 -> 33         0.21203
      22 -> 45         0.15813
      23 -> 44        -0.10650
      23 -> 45         0.28058
      24 -> 46        -0.13961
Notice that, unlike in the previous example, none of the excited states here is composed of a
single dominant excitation; each is a mixture of many.
To plot the spectra, use Gabedit. Under Tools on the top menu bar select UV spectrum
and Read energies and intensities from Gaussian output file.
Load your output file and you'll get your spectrum. As with the IR/Raman spectra, you can change
various settings below the plot, and a right click on the spectrum opens a window that lets you
modify various other things.
To plot the CD spectrum use exactly the same procedure, except instead of UV spectrum in
the Tools menu, you select ECD spectrum.
22. Implicit Solvent Models
Previously, we have examined the thermodynamics of molecules in the gas phase.
However, many molecules do not reside in the gas phase and are instead found in
solvents. The interaction between the solvent and the solute impacts the general chemistry of the
molecule being studied. The interaction can alter energies, stability, and molecular geometry.
Thus, properties relating to energy (e.g. vibrational frequencies, spectra, etc.) will also change.
Therefore, we need a way to model the chemistry of these molecules in a solvated state. This
is accomplished using implicit solvation models. These models differ from "explicit" models,
which attempt to deal with the solvent as individual molecules; instead, they treat the solvent as a
continuous medium that acts upon the solute. This leads to a significant reduction in complexity,
since describing the solvent as a uniform continuum avoids having to calculate a multitude of
individual molecular interactions.
[Figure: schematic comparison of explicit solvation models (individual solvent molecules) and implicit solvation models (continuum).]
22.1 Onsager Solvation Model
One of the first implicit solvation models was designed by Lars Onsager in 1936. This
model was a continuation and improvement of the Born model (1920) which was the first model
to use a dielectric continuum. The Born model was built around the idea that the solute sits in
a spherical cavity within the solvent, with the solute and the solvent interacting through the
solute's net charge. A problem with the Born model was that its accuracy depended on the
essentially arbitrary choice of the ionic radius used to set the size of the sphere. Furthermore, the
Born model did not allow for mutual polarization between the solute and the solvent. The Onsager
model changed the Born model by looking at the dipole moment of the molecule instead of
the net charge. This model considers a polarizable dipole with polarizability α at the center of a
sphere. The solute dipole polarizes the surrounding medium, which in turn produces an electric
field in the cavity (the reaction field) that interacts with the dipole. For a spherical solute with
dipole moment µ, the resulting solvation energy can be written down in closed form.
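A sketch of this expression in its standard textbook (Onsager) form, for a point dipole µ in a spherical cavity of radius a immersed in a medium of dielectric constant ε (prefactor conventions vary between texts):

\Delta G_{\text{solv}} \;\approx\; -\,\frac{\varepsilon - 1}{2\varepsilon + 1}\,\frac{\mu^{2}}{a^{3}}

When the solute polarizability α is included, µ is the self-consistently enhanced dipole rather than the gas-phase value.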
The Onsager method can give very bad results for compounds where the electron
distribution is poorly described by the dipole moment. Systems with a zero net dipole moment
will not exhibit solvation by this model.
Advantages
• Better than the Born model
Disadvantages
• Only takes into account the polarizability; does not really account for the cavitation
energy or the electrostatic energy
• Needs a spherical molecule – non-spherical molecules are modeled very poorly by a
spherical cavity
• If the molecule does not have a dipole moment, no solvation will occur
In Gaussian, the way to use the Onsager method is with the route command SCRF=Dipole.
SCRF (self-consistent reaction field) is the Gaussian keyword required for all implicit solvent
models. The Onsager method in Gaussian also requires the input of the solute radius in Angstroms
(Å) and the dielectric constant of the solvent. A suitable solute radius can be computed by a gas-phase
molecular volume calculation (in a separate job step) using the Gaussian keyword Volume.
Additionally, the Opt Freq keyword combination cannot be used in conjunction with the
SCRF=Dipole calculation.
22.2 Polarizable Continuum Model (PCM)
One of the more modern methods to deal with implicit solvation is the Polarizable
Continuum Model (PCM). This model is based upon the idea of generating multiple overlapping
spheres, one for each of the atoms of the molecule, inside a dielectric continuum. This differs
from the Onsager methodology, which uses a single sphere (or an ellipsoid) to surround the whole
molecule, and thus allows for a greater amount of accuracy in determining the solute-solvent
interaction energy. This method treats the continuum as a polarizable dielectric and thus is
sometimes referred to as dielectric PCM (DPCM). The PCM model calculates the free energy of
solvation by attempting to sum three different terms:

G_solvation = G_electrostatic + G_dispersion-repulsion + G_cavitation        (160)

The cavity used in the PCM is generated by a series of overlapping spheres normally defined by
the van der Waals radii of the individual atoms; however, there is no set way to define the radii of
the spheres, and it is possible in Gaussian to customize the sphere radii.
[Figure: the solvent accessible surface (SAS), traced out by the center of a probe representing a solvent molecule, and the solvent excluded surface (SES), the topological boundary of the union of all possible probes that do not overlap with the molecule.]
The mathematical formalism for the integral equation formalism PCM (IEF-PCM) model (this is
the actual model that is employed by Gaussian) is illustrated below:
The complete Hamiltonian of the solute molecule can be written as

\hat{H} = \hat{H}^{0} + \hat{V}_{MS} + \hat{V}'(t)

where H0 is the Hamiltonian in vacuo, VMS is the solute-solvent interaction, and the V'(t)
component is the time-dependent perturbation on the solute molecule. The VMS component can be
written in terms of an apparent charge density spread over the cavity surface Σ,

\hat{V}_{MS} = \int_{\Sigma} \bigl[\sigma_N(s) + \sigma_e(\rho; s)\bigr]\, V(s)\, ds

Here the surface charge density is broken into two parts, one for the nuclei (σN(s)) and one for the
electrons (σe(ρ;s)) of the solute, and V(s) is the electrostatic potential of the solute molecule
calculated on the cavity surface Σ. The last element of the Hamiltonian of the solute molecule
describes the cavity-field effect, i.e. the response of the solvent to the external field after creation
of the solute cavity in the solvent.
This allows for the direct calculation of the effective polarizabilities of the molecule in the solvent.
The PCM method attempts to give a complete answer for the free energy of solvation, but it fails to
directly calculate the energy of cavitation, which is the energy associated with the surface of the van der
Waals spheres, and the dispersion-repulsion energy. The free energy of solvation from any PCM
calculation is therefore primarily the electrostatic energy.
Advantages
• More accurate than Onsager
• Gives good electrostatic energy results
Disadvantages
• Computationally expensive – lots of gradients and derivatives
• Does not account for the cavitation or dispersion-repulsion energies
• No set rules for the radii of the spheres in the cavity
The Gaussian keyword for employing the PCM model is SCRF=PCM. However, the PCM
model does not actually need to be specified as an SCRF option, since Gaussian defaults to the
PCM model if no other implicit solvent model is given. The PCM model is available for all HF,
DFT, and coupled cluster calculations and can be run with the Opt and Freq commands, unlike
the Onsager method.
22.3. Conductor-like Polarizable Continuum Model (CPCM)
The CPCM model is a variation of the DPCM model in that it uses a group of nuclear
centered spheres to define the cavity within a dielectric continuum. The fundamental difference
between this model and the DPCM model is that the CPCM model treats the solvent like a
conductor. This impacts the polarization charges of the accessible surface area between the solvent
and the solute. The CPCM model attempts to solve the nonhomogeneous Poisson equation for an
infinite dielectric constant with scaled dielectric boundary conditions to approximate the result for
a finite dielectric constant. In the CPCM model the permittivity of the solvent will impact the
results of the model as solvents with higher permittivity will behave more like an ideal conductor
and return better results. This model also differs from the DPCM model in that it reduces the
outlying charge errors which are the errors caused by portions of the electron density which are
actually outside of the cavity.
The CPCM method is currently the most commonly employed model for implicit solvation.
By considering the dielectric continuum as a conductor-like continuum, the math involved in
calculating the integrals simplifies by assuming that the dielectric constant is infinite. Furthermore,
the equations simplify by considering that the polarizability of the system becomes 0 with a
conductor-like solvent. This has the effect of decreasing the computational complexity of the
problem. Although the results of the CPCM are improved when the dielectric constant of the
solvent is high, it has been shown that when the dielectric constant of the solvent is low the results
are still comparable to those of a DPCM solvation model.
Advantages
• Simplification of the math involved in calculating the free energy of solvation
• Decreased computational costs
• High quality results for high permittivity solvents, and still reasonable results for low
permittivity solvents
Disadvantages
• Still does not accurately account for the cavitation energy or dispersion-repulsion energy – the
equations are the same as DPCM, only simplified
The Gaussian keyword for using the CPCM method is SCRF=CPCM. All tools that
can be used with the PCM model are also available for the CPCM model.
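As an illustrative sketch (the solvent, method and basis set are arbitrary choices), a CPCM geometry optimization and frequency job could look like:

# B3LYP/6-31G(d) SCRF=(CPCM,Solvent=Water) Opt Freq Test

CPCM solvation example

0 1
molecule spec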
22.4 SMD (Density-based Solvation Model)
The SMD model attempts to determine the free energy of solvation using the full solute
electron density without determining partial atomic charges. The SMD method separates the
observable solvation free energy into two main components:
1) Electrostatic energy – calculated primarily from an IEF-PCM interaction
2) Cavity-dispersion-solvent-structure term – energy arising from short-range
interactions between the solute and solvent molecules in the first solvation shell
The SMD model uses the electron density to estimate the solvent accessible surface
area (SASA) and atomic surface tensions to determine the cavitation and dispersion-repulsion
energies. It is the best method to use when attempting to calculate ΔGsolvation for a
molecule going from the gas phase to the solvent, as it actually attempts to calculate the
cavitation and dispersion-repulsion energies. The method has been shown to produce solvation free
energies with mean unsigned errors of 0.6-1.0 kcal/mol for neutral molecules and of about
4 kcal/mol for ions.
Advantages
• Calculation of cavitation and dispersion-repulsion energies
• High quality results for free energies of solvation
Disadvantages
• Computationally expensive
In Gaussian the way to use the SMD method is through the keyword SCRF=SMD. This
method is available for all HF and DFT calculations.
22.5 Practical Example
When using an implicit solvation model in Gaussian the SCRF keyword is the primary
command to enable all of the models. Additionally the SCRF=Solvent option may be used to
explicitly determine what solvent is going to be used for the analysis. Gaussian has a long list of
available solvents to choose from with predefined dielectric constants and solvent properties (a
complete list may be found on the Gaussian website). However, Gaussian also allows the user to
define their own solvents by using the keyword SCRF=Read, and then below the input of the
molecule enter in the necessary solvent information (a complete list of solvent information may
be found on the website).
# B3LYP/6-31G(d) 5D SCRF=(Solvent=Generic,Read)
Water, solvation by methanol, re-defined as generic solvent.
0 1
O
H,1,0.94
H,1,0.94,2,104.5
stoichiometry=C1H4O1
solventname=methanol
eps=32.63
epsinf=1.758
Input section for PCM keywords
…
The output of the energy from a PCM calculation in a Gaussian output file will look like:
Hartree-Fock SCRF calculation:

 SCF Done:  E(RHF) =  -99.4687828290     A.U. after    8 cycles
             Convg  =    0.2586D-08             -V/T =  2.0015

MP2 SCRF calculation:

 E2 =    -0.1192799427D+00   EUMP2 =   -0.99584491345297D+02
Gaussian allows the user to change the cavity of the dielectric medium by adjusting the radii of
the spheres which define the cavity for the PCM, CPCM, and SMD methods. The radii may be
changed using the SCRF=Read and then entering in the command Radii=model below the input
lines of the molecule.
The following example shows how to compute the ΔG of solvation for water in ethanol:

Step 1: Calculate the free energy of water in the gas phase
------------------------------------------------------------
%chk=water.chk
#T HF/6-31G(d) Opt Freq Test

Water – gas phase

0 1
O1
H2 O1 1.08
H3 O1 1.08 H2 107.5
------------------------------------------------------------
Sum of electronic and thermal Free Energies =  -76.005366

Step 2: Calculate the free energy of water in ethanol using the SMD model
------------------------------------------------------------
%chk=water.chk
#T HF/6-31G(d) SCRF=(SMD,Solvent=Ethanol) Geom=AllCheck Guess=Read Opt Freq Test
------------------------------------------------------------
Sum of electronic and thermal Free Energies =  -76.020307

Step 3: Subtract the two to get the solvation free energy:
ΔGsolvation = Gsolvent - Ggas-phase = -76.020307 - (-76.005366) = -0.01494 Hartree
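For reference, a simple unit conversion (using the standard factor 1 hartree ≈ 627.51 kcal/mol, not part of the original example):

\Delta G_{\text{solvation}} \approx -0.01494\ \text{hartree} \times 627.51\ \text{kcal mol}^{-1}/\text{hartree} \approx -9.4\ \text{kcal/mol}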