
A Multiscale Model for Efficient Simulation of a Membrane Bound Viral Fusion
Peptide
Yudong Sun, Steve McKeever
Computing Laboratory
University of Oxford
Parks Road, Oxford OX1 3QD, UK
[email protected]
[email protected]
Abstract
Biomolecular simulations have been particularly useful
in providing atomic-level insights into biological processes.
The simulations can be conducted at atomistic or coarse-grained resolution. An atomistic simulation can model
atomic details of a biological process but is computationally expensive. A coarse-grained simulation is time-efficient
but cannot fully expose atomic details. In order to support
efficient simulations of complex biomolecular processes, we
have developed a multiscale simulation model that dynamically integrates both atomistic and coarse-grained simulations. The model has been used to simulate a membrane
bound viral fusion peptide associating with a phospholipid
bilayer. The simulation provides an important prerequisite for understanding the viral fusion mechanism, which can aid the design of better drugs against infectious viruses. The simulation has achieved high performance, with a minimum eight-fold speedup.
1. Introduction
Membrane protein structures are determined by crystallography experiments, but these provide little or no information about their interactions with the phospholipids (also referred to as lipids) that form an integral part of a cell membrane. Membrane bound proteins and peptides (short protein fragments) constitute an important class of drug targets.
Kia Balali-Mood, Mark S. P. Sansom
Department of Biochemistry
University of Oxford
South Parks Road, Oxford OX3 1QU, UK
[email protected]
[email protected]

Biomolecular simulations have become an effective means of providing atomic insights into biological processes [7, 13]. These processes are usually not accessible to lab experiments; they have complex conformational and dynamic behaviour which is significantly modulated by the cellular environment. Biomolecular simulations can
use computational methods to model the dynamics and motions of proteins from a pathogen, such as a virus, and other
molecules residing within the cellular environment of a host
cell. This enables us to understand and predict the functions
of proteins and their interactions with other molecules such
as membranes. The knowledge acquired from the simulations has important uses in biomedical and pharmaceutical
sciences, such as the identification of new drug targets. It
can also aid the formulation of new drugs that can pass into
a cell via the cell membrane and bind to the active site of a
viral protein to inhibit the infection.
Molecular dynamics (MD) is probably the most popular
simulation approach in biomolecular modelling [9]. A simulation model can be established at atomistic or coarse-grained resolution. An atomistic simulation is based on
an all-atom model that computes atom-to-atom interactions [6]. As an MD simulation usually involves tens of
thousands of atoms, an atomistic simulation is extremely
computationally complex. Hence, it is not feasible for
the simulation to cover long timescales. However, long-timescale simulations are essential to study biological processes such as the binding of a protein to a cell membrane.
A coarse-grained simulation provides an alternative model
for MD simulations that has a much lower time complexity [2, 3, 12]. In a coarse-grained model, a number of
bonded atoms are gathered into and represented by a single coarse-grained particle, so that the interactions are only computed
between these particles. Therefore, coarse-grained simulations are capable of handling increased timescales and more
complex molecular systems. However, the goal of MD simulations is ultimately to provide atomic insights of a molecular system, which is beyond the resolution that a coarsegrained model can achieve.
In order to support long-timescale MD simulations with
the provision of atomic details, we have developed a multiscale simulation model that combines atomistic and coarse-
grained simulations into an integrated model. The multiscale model supports the flexible integration of different
simulation models by means of dynamic coupling and automated switching. The model starts a simulation at the coarse-grained scale for a rapid configuration of the molecular system. When the simulated system has been equilibrated,
the simulation can be switched to an atomistic scale to reveal atomic details. The model provides a novel paradigm
for high-throughput biomolecular simulations that also supports the use of parallel and distributed computing to implement the simulations. We have applied the multiscale
model to simulate the process of a fusion peptide from SIV
(simian immunodeficiency virus) binding to a DPPC (dipalmitoylphosphatidylcholine) phospholipid bilayer in order to understand the fusion mechanism. Our test shows that
the multiscale model can efficiently implement a biomolecular simulation on a long timescale with a complex molecular structure and present sufficient atomic details.
The paper is organised as follows. Section 2 introduces
related work. Section 3 describes the multiscale simulation
model. Section 4 discusses the simulation of a membrane
bound peptide associating with a lipid bilayer. Section 5
concludes the paper with future work.
2. Related Work
The combination of different simulation models has only
been recently applied to molecular dynamics simulations.
Consequently, there are only a limited number of research
projects in this area. The multigraining algorithm [3] runs
fine-grained and coarse-grained simulations of a molecular system simultaneously and combines the interaction energy terms between particles contributed from both the fine- and coarse-grained parts at fixed time steps. The mixed all-atom and coarse-grained (AA-CG) model [15] represents
the most interesting part of a molecular simulation in full
atomistic detail and the remaining parts at the CG level.
For example, it models a gramicidin peptide in atomistic
detail and the lipid bilayer and surrounding water in a coarse-grained representation. The interactions are calculated in
three categories: atom-atom, CG-CG, and atom-CG. HybridMD [4, 5] is a hybrid multiscale model that jointly uses
mixed atomistic and continuum-based models for the modelling and simulation of complex fluid flow. It uses a simpler, continuum-based description to model the bulk of water and uses an atomistic description to represent the protein
and surrounding water.
Compared with the simulation models above, our multiscale simulation model is unique in supporting the dynamic
coupling of atomistic and coarse-grained simulations. Our
model runs an MD simulation at either the atomistic or the coarse-grained scale, but allows a dynamic switch between
the two scales to occur. This switch depends on the runtime
state of a simulation. It can efficiently accomplish a simulation with a long timescale and a complex molecular structure, while still producing atomic details. Using parallel
and distributed computing methods, high speedup has been
achieved in the simulation of an SIV fusion peptide associating with a lipid bilayer.
3. Multiscale Simulation Model
3.1. The Model
The multiscale simulation model is built based on the integration of atomistic and coarse-grained simulations. The
integration is established by dynamically coupling and flexibly switching between atomistic and coarse-grained simulations at runtime. This model is able to accomplish a
biomolecular simulation with a long timescale or a complex
molecular structure within a reasonable length of execution
time, while showing atomic details of the simulation.
Given a molecular system, a multiscale simulation will
start at a coarse-grained scale. A coarse-grained molecular
system will be created in which a group of bonded atoms
are assembled into a coarse-grained particle which usually
represents the centre of mass of these atoms. The methods for creating a coarse-grained molecular structure from
the original atomistic representation vary between different types of
molecules. We use the method proposed in [1, 2] to create the coarse-grained molecules in our simulation. Once
a coarse-grained molecular system has been created, the
coarse-grained simulation starts to run using an MD simulation tool such as GROMACS [10].
The coarse-grained simulation can rapidly stabilise the
initial molecular system. As the simulation advances, the
particles in the molecular system will remain dynamic due to the force-field and electrostatic interactions. When the motion has slowed down, the molecular system is entering an equilibrium state in which the particles show only slight motion between simulation steps. This corresponds to a “local minimum” of the potential energy of the molecular system, which reflects an equilibrated conformation of the system. In molecular dynamics, the equilibrium state can be
assessed by the RMSD (root mean square deviation) of the
Cartesian coordinates of the particles between time steps. A
low RMSD represents a minor movement of the particles.
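For reference, the standard mass-weighted form of this quantity (the one computed by the GROMACS g_rms tool, after a least-squares fit that removes rigid-body motion) is:

```latex
\mathrm{RMSD}(t) \;=\; \sqrt{\frac{1}{M}\sum_{i=1}^{N} m_i \,\bigl\lVert \mathbf{r}_i(t) - \mathbf{r}_i(t-\Delta t) \bigr\rVert^{2}},
\qquad M = \sum_{i=1}^{N} m_i
```

where the positions at time t are compared against the previous time frame at t − Δt.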
The molecular system can be considered equilibrated when the RMSD remains below a given threshold for a period of time. The equilibrium state can be viewed as an appropriate point to switch the simulation from the coarse-grained to the atomistic scale to expose atomic detail. It is critical to reconstruct an atomistic structure from the coarse-grained structure at the switch point. Different methods are applied to the conversion of different types of molecule.
Figure 1. The workflow of a multiscale simulation: (a) interleaving mode, (b) concurrent mode
Section 4.1 will discuss the coarse-grained to atomistic conversion in the simulation of a fusion peptide and a lipid bilayer. When the atomistic molecular structure has been rebuilt, the simulation will carry on at the atomistic scale with
a low simulation speed. The subsequent simulation can be
performed in interleaving mode or concurrent mode.
Interleaving mode: The atomistic simulation starts to
run from the switch point and meanwhile the coarse-grained
simulation is stopped. Later on, the atomistic simulation
can be switched back to the coarse-grained scale. This
switch-back happens when the atomistic simulation remains
in an equilibrium state and is distant from the termination
condition. Since an atomistic simulation is much slower
than a coarse-grained one, it is necessary to switch the simulation back to the coarse-grained scale to maintain the rapid
progress. In the atomistic simulation, the equilibrium state
is also assessed by the RMSD. If an atomistic simulation
needs to be switched back to the coarse-grained scale, the
atomistic molecular structure needs to be converted back to
the coarse-grained structure. After the coarse-grained structure has been rebuilt, the coarse-grained simulation will
restart from the latest switch point until it has reached the
next switch point. Figure 1(a) shows the workflow of a multiscale simulation in the interleaving mode. The simulation
can switch back and forth between atomistic and coarse-grained simulations many times. The simulation will finally
come to an end at the atomistic level to show the atomic details.
The atomistic and coarse-grained simulations have different computational complexity. They can be executed on
heterogeneous systems. Our execution plan runs the
fast coarse-grained simulation on a client desktop so that
users can closely monitor the simulation progress. The
atomistic simulation is extremely computationally intensive. It is executed using the GROMACS parallel mode on
a high performance computing (HPC) system.
Concurrent mode: The RMSD fluctuates during the execution of a simulation. The selection of a switch point,
therefore, is a heuristic decision that causes a simulation under the interleaving mode to alternate between the coarse-grained and atomistic levels. To further improve the performance of a multiscale simulation, both in the speed of
simulation and the quality of output, we can harness HPC
systems to simultaneously run multiple instances of atomistic simulation spawned from a coarse-grained simulation
at different switch points. This leads to a more efficient simulation mode: the concurrent mode. Unlike the interleaving mode, a coarse-grained simulation in the concurrent mode will continue to run after it has spawned an atomistic
simulation. When the next switch point has been reached,
the coarse-grained simulation will spawn another atomistic
simulation based on the molecular structure at that point.
The new atomistic simulation can be run concurrently with
the coarse-grained simulation and the first atomistic simulation. Generally, the coarse-grained simulation can spawn
an atomistic simulation at each switch point occurring in
the simulation. All these simulation instances can be jointly
executed provided there are adequate computing resources
available. Figure 1(b) shows the workflow of the concurrent mode. Each of the atomistic simulations will run to the
end and will not switch back to the coarse-grained simulation. The results of all atomistic simulations can be analysed on-the-fly or afterwards to select the most stabilised
output. Compared with the interleaving mode, the concurrent mode can achieve a high throughput by means of parallel execution of multiple atomistic simulations on HPC
systems.
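The concurrent mode can be sketched as submitting one independent atomistic job per switch point while the coarse-grained run continues. ThreadPoolExecutor stands in for submitting GROMACS jobs to an HPC system, and at_simulation is a toy stub, not the real simulator.

```python
# Sketch of the concurrent mode: the CG run spawns an independent AT
# instance at every switch point. The 1.5ns AT length matches Section 4;
# everything else here is a hypothetical stand-in for job submission.

from concurrent.futures import ThreadPoolExecutor

def at_simulation(start_ns):
    """Stand-in for a 1.5ns atomistic run spawned at time start_ns."""
    return ("AT done", start_ns, start_ns + 1.5)

def concurrent_mode(switch_points):
    with ThreadPoolExecutor() as pool:
        # The CG run reaches each switch point and submits an AT instance.
        futures = [pool.submit(at_simulation, t) for t in switch_points]
        return [f.result() for f in futures]

runs = concurrent_mode([14.0, 24.8, 36.4])   # switch points from Section 4
```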
The result of a multiscale simulation approximates that of an equivalent pure atomistic simulation. Related research in [1, 2] has confirmed that the result of a coarse-grained simulation is a close approximation to that of an equivalent
atomistic simulation. The RMSD and the RMSF (root mean
square fluctuation) between the resulting molecular systems
produced by equivalent coarse-grained and atomistic simulations were within 5% of each other. In addition, the partial
density distribution of the two systems shows very similar
distribution of components. The small differences lie in the
level of resolution of each system. The coarse-grained system has about one quarter of the resolution, as approximately four
atoms are grouped into a coarse-grained particle in our simulation. We conclude that a coarse-grained simulation can
be an accurate substitute for an atomistic simulation. A multiscale simulation can therefore assure a higher accuracy than a purely coarse-grained simulation.
3.2. Implementation
The multiscale simulation model is implemented based
on the multi-level simulation framework we have developed
[16]. The framework provides a software infrastructure that is deployed on the computing systems that run multiscale simulations.
To run a simulation in the interleaving mode, the simulation framework is built with two parts. Each part will be
deployed on a computer system where a coarse-grained or
an atomistic simulation will run. For example, the coarse-grained simulation will run on a client machine and the atomistic simulation will run on an HPC system. Each
part contains the following components: a simulation manager, a simulation monitor, a simulation executor, a data converter, and a BioSimML handler. The simulation manager on the client
administrates the process of the coarse-grained simulation
and interacts with the simulation manager on the HPC system to fulfil a simulation switch. The simulation manager
activates the simulation executor to start a coarse-grained
simulation; the latter then calls an MD simulation tool to run
the simulation based on a given coarse-grained molecular
system. In the meantime, the simulation manager triggers
the simulation monitor to regularly check the state of the
coarse-grained simulation to derive a switch point. GROMACS 3.3.1 is used in our simulation. During the execution, GROMACS outputs the trajectory of the simulation
to a file at a user-defined frequency. The simulation monitor calls the GROMACS g_rms command to calculate the
RMSD between successive time frames to derive a switch
point. A time frame contains a specified number of time
steps. When a switch point has been reached, the simulation
monitor activates the data converter to create an atomistic
molecular structure from the current coarse-grained structure. This conversion is a time-consuming process for a
molecular structure with hundreds of macromolecules. To
rapidly accomplish the switch, the structural conversion can
be collaboratively conducted by the data converters on both
sides. Section 4.1 will discuss such a collaborative structural conversion.
After the structural conversion is done, the simulation
manager contacts its counterpart—the simulation manager
on the atomistic simulation side—to start an atomistic simulation. The molecular structure and simulation parameters which define an atomistic simulation are sent to the
atomistic simulator on the HPC system. The data exchange between the two simulation sides is captured in
BioSimML [16], an XML-based biomolecular simulation
markup language we have developed. The BioSimML handler processes the BioSimML captured data. Subsequently,
the coarse-grained simulation in the interleaving mode will
be stopped. On the atomistic side, the simulation manager calls the BioSimML handler to decode the received
data and inputs it to the atomistic simulation. From then
on, the simulation manager on the atomistic side takes control of the simulation process. It initiates the simulation
executor to continue the simulation at the atomistic scale
from the switch point. The simulation monitor is triggered
to check the progress of the atomistic simulation and derive any switch point where the simulation could revert to
the coarse-grained scale. The atomistic to coarse-grained
switching process is similar to the previous switching process but in the reverse direction. A multiscale simulation
can be switched back and forth between the two simulation
scales under this framework.
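The hand-off at a switch point can be pictured as building an XML message in the spirit of BioSimML [16]. All element and attribute names below are hypothetical: the real schema is defined in [16] and is not reproduced in this paper.

```python
# Sketch of packaging a switch-point hand-off as XML, loosely in the spirit
# of BioSimML [16]. Element names, the file name, and the parameter values
# shown are illustrative assumptions, not the actual BioSimML schema.

import xml.etree.ElementTree as ET

def build_switch_message(structure_file, timestep_ps, n_steps):
    root = ET.Element("simulation", scale="atomistic")
    ET.SubElement(root, "structure", file=structure_file)
    params = ET.SubElement(root, "parameters")
    ET.SubElement(params, "timestep", ps=str(timestep_ps))
    ET.SubElement(params, "steps").text = str(n_steps)
    return ET.tostring(root, encoding="unicode")

# Values mirror the AT run in Section 4.1 (0.002ps step, 7.5e5 steps);
# the structure file name is made up.
msg = build_switch_message("peptide_bilayer_at.gro", 0.002, 750000)
```

The BioSimML handler on the receiving side would decode such a message and feed the structure and parameters to the atomistic simulator.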
For the concurrent mode, the framework will be composed of a coarse-grained part and a number of atomistic
parts (one part per atomistic simulation instance). In this
mode, the coarse-grained simulation will continue to run
along with all atomistic simulations until the entire multiscale simulation has finished.
4. A Simulation of Membrane Bound Fusion
Peptide
A molecular dynamics simulation of the peptide-lipid
interaction for an SIV fusion peptide has been developed
based on the multiscale simulation model. SIV is an infectious virus closely related to the human immunodeficiency viruses HIV-1 and HIV-2. A fusion peptide resides on the surface of a virus and
acts by fusing with the lipid bilayer (i.e. cell membrane) of
a host cell. The fusion allows the virus to enter and infect
the host cell. Understanding how a fusion peptide interacts
with a bilayer is an important pre-requisite in understanding
the viral fusion mechanism. The knowledge can aid in the
design of better drugs against viral fusion proteins such as
in HIV.
The molecular system in this simulation contains an SIV
fusion peptide gp160, a phospholipid bilayer of 256 DPPC
molecules, and 4500 water molecules. The gp160 peptide
consists of 12 amino acid residues. DPPC is a phospholipid
often used for studying lipid bilayers and modelling biological membranes.
The atomistic simulation of such a molecular system is
extremely time-consuming, even when running in parallel mode on an HPC system. Figure 2 shows the speed
of the atomistic simulation for this molecular system using
the GROMACS parallel mode on HPCx (www.hpcx.ac.uk).
The speed is measured in ns (nanoseconds) per day, i.e. the timescale a simulation can cover in 24 hours. The peak
speed occurs on 32 processors, i.e. 5.6ns per day. To perform a realistic simulation of the fusion mechanism, it is
necessary to extend the timescale by orders of magnitude.
For this purpose, we apply the multiscale simulation model
to accelerate the simulation process.
Figure 2. The speed (ns per day) of the atomistic simulation for the gp160 fusion peptide and DPPC bilayer, running GROMACS on HPCx on 1 to 256 processors
4.1. The Simulation
The multiscale simulation of the fusion mechanism is developed based on the concurrent mode. It is implemented
using the simulation framework discussed in Section 3.2
on a distributed system with a client desktop and HPCx.
The simulation starts at a coarse-grained scale by running
GROMACS in serial mode on the client. The initial coarse-grained molecular system is shown in Figure 3(a). The fusion peptide is positioned in the middle of the simulation
box with surrounding DPPC lipids and water molecules.
The molecular system has been equilibrated after 1.89 hours
of coarse-grained simulation, during which 3.5 × 10^5 time
steps have been run, which represents a 14ns timescale
(0.04ps per step). At this point, the molecular structure has
been configured and a DPPC lipid bilayer has been formed,
as shown in Figure 3(b). The peptide has docked onto a
DPPC layer and the water molecules are pushed aside beyond the surfaces of the bilayer. This can be considered as a
switch point where an atomistic simulation can be spawned.
To spawn an atomistic simulation, the coarse-grained
structure is converted to an atomistic structure as shown in
Figure 3(c). Different types of molecule require different
structural conversion methods. The conversion of the lipid
bilayer requires much computation. To efficiently convert
the structure, a collaborative approach is used to divide the
conversion task between the client and HPCx based on the
required hardware and software.
Converting the peptide requires the third-party software
MODELLER [11, 14] (a tool for homology or comparative
modelling of 3D protein structures) and PROCHECK [8]
(a tool for stereochemical quality check of a protein structure). We use MODELLER to generate five atomistic
protein structures and then use PROCHECK to choose
the best-quality structure among them. MODELLER and
PROCHECK have been installed on the client desktop
where the conversion of the peptide can be done in around
a minute.
A DPPC lipid has a different chemical structure that
needs a different method for the structural conversion. The
coarse-grained lipid bilayer is converted to an atomistic
structure by comparing each coarse-grained lipid with a library of atomistic lipids to find the best match. A library
of 1500 atomistic DPPC lipids has been pre-generated by a
simulation of a lipid bilayer. The comparison of a coarse-grained lipid with the library of atomistic lipids takes 3 to 4
minutes. There are 256 lipids in this simulation. The total
conversion time would therefore be around 15 hours. Nevertheless, the conversion of each coarse-grained lipid is independent of all the others. We have consequently developed a parallel conversion method as an MPI program running on HPCx. The
parallel conversion of 256 lipids only takes 10 minutes to
finish. The conversions of the peptide on the client and
the lipids on HPCx are simultaneously performed. Thereafter, the atomistic peptide and lipids are gathered into one
molecular system on the atomistic simulation side. The
atomistic water molecules are directly added into the system using the GROMACS solvent model spc216. After
that, the atomistic simulation can be executed by the GROMACS parallel mode on HPCx. As Figure 2 shows, the
atomistic simulation achieves the best performance on 32
processors. Hence, we use 32 processors to run each atomistic simulation on HPCx. The atomistic simulation has run
for 7.5 × 10^5 steps (0.002ps per step, 1.5ns in total),
which takes 5.89 hours, and produced the atomistic structure shown in Figure 3(d).
Running in the concurrent mode, the coarse-grained simulation can continually spawn new atomistic simulation instances at subsequent switch points. All atomistic simulations are created to run on HPCx via the same procedure as
described above. In our test, the coarse-grained simulation
spawned an atomistic simulation at the time points 14ns, 24.8ns, and 36.4ns respectively. Table 1 compares the execution time and the speed of these simulations. As the table shows, the coarse-grained simulation spawned the first
atomistic simulation after running for a timescale of 14ns
(the execution time of 1.89 hours). The atomistic simulation
ran on 32 processors for a timescale of 1.5ns in 5.89 hours.
There was also an extra 0.35 hour (21 minutes) spent on creating an atomistic simulation (including the coarse-grained to atomistic structural conversion). In total, the multiscale simulation took 8.13 hours to simulate a 15.5ns timescale, which equals a speed of 45.76ns per day. Compared with the peak speed of a pure atomistic simulation, i.e. 5.6ns per day, the multiscale simulation has achieved an eight-fold speedup. The table also shows the times of the two subsequent simulation instances, which reached simulation speeds of 65.34ns per day and 84.61ns per day respectively. The performance shows that the multiscale simulation model can enhance the simulation speed by around an order of magnitude.

Figure 3. A multiscale simulation: the fusion peptide gp160 is shown in the centre of the box; the other molecules shown are DPPC lipids and water. (a) the initial CG structure. (b) the CG structure with a DPPC bilayer formed. (c) the AT structure converted from (b). (d) the AT structure produced by the AT simulation. (CG: coarse-grained, AT: atomistic)

Figure 4. Distance (nm) between the centres of mass of the molecules over 0-2000ps in the atomistic simulations starting at 14ns (left), 24.8ns (middle), and 36.4ns (right).

Table 1. Times and speed of the multiscale simulation for the gp160 fusion peptide and the DPPC lipid bilayer (CG: coarse-grained, AT: atomistic)

                             CG + AT1             CG + AT2             CG + AT3
  Multiscale simulation      Timescale  Run time  Timescale  Run time  Timescale  Run time
                             (ns)       (hours)   (ns)       (hours)   (ns)       (hours)
  Coarse-grained (serial)    14         1.89      24.8       3.35      36.4       4.37
  Atomistic (parallel)       1.5        5.89      1.5        5.94      1.5        6.04
  Data conversion            -          0.35      -          0.37      -          0.34
  Total                      15.5       8.13      26.3       9.66      37.9       10.75
  Speed (ns per day)         45.76                65.34                84.61

Figure 5. The RMSD of the coarse-grained simulation, with the three switch points marked

Figure 6. The number of hydrogen bonds occurring between the gp160 peptide and the DPPC lipid bilayer
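The speeds in Table 1 follow directly from the timescales and run times; a quick check, taking speed (ns per day) = timescale / run time × 24:

```python
# Verifying the Table 1 speeds and the speedup over the 5.6ns-per-day peak
# of the pure atomistic run (Figure 2).

def ns_per_day(timescale_ns, run_time_h):
    return timescale_ns / run_time_h * 24.0

speed1 = ns_per_day(15.5, 8.13)      # CG + AT1 -> ~45.76 ns/day
speedup = speed1 / 5.6               # vs. peak pure atomistic speed
```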
4.2. Analysis
In our simulation, we define the equilibrium state of a
molecular system in the coarse-grained simulation as when
the RMSD remains below 0.2 for over five consecutive time
frames. Each time frame encloses 10000 time steps which
represents a timescale of 400ps. Thus, five time frames have
a total timescale of 2ns. A low RMSD sustained over this period indicates a local minimum of the potential energy, reflecting local stability of the system. This is defined as a switch
point. There were three switch points derived during the
coarse-grained simulation as shown in Figure 5. An atomistic simulation was spawned at each switch point.
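The switch-point test used by the simulation monitor can be sketched directly from this definition: declare a switch point once the inter-frame RMSD has stayed below the threshold (0.2, as stated above) for five consecutive time frames.

```python
# Sketch of the switch-point criterion from Section 4.2: RMSD below a
# threshold for `window` consecutive time frames. The sample series is
# invented for illustration.

def find_switch_points(rmsd_per_frame, threshold=0.2, window=5):
    """Return frame indices at which a switch point is declared."""
    points, run = [], 0
    for frame, rmsd in enumerate(rmsd_per_frame):
        run = run + 1 if rmsd < threshold else 0
        if run == window:                # enough consecutive low-RMSD frames
            points.append(frame)
            run = 0                      # start counting afresh
    return points

series = [0.5, 0.3, 0.1, 0.1, 0.15, 0.1, 0.12, 0.4, 0.1, 0.1]
pts = find_switch_points(series)         # one switch point in this series
```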
Each atomistic simulation ran for a 1.5ns timescale and
produced an atomistic molecular structure similar to Figure 3(d). The stability of the molecular structure produced
by an atomistic simulation can be examined by the distance
between the centres of mass of the residues in the gp160
peptide and the centres of mass of the lipids. Figure 4 shows
the distance calculated in each atomistic simulation. Comparing the three simulations, we find that the range of the distance becomes narrower from left to right. The distance in the last atomistic simulation (spawned at 36.4ns) converges
to around 3.95nm (nanometre). We can conclude from this
trend that a simulation with a longer timescale can produce
a more stabilised result and the multiscale simulation model
enables a long-timescale simulation at a low time cost.
Table 2. The number of hydrogen bonds between each residue of gp160 and the DPPC bilayer during the atomistic simulation

  Residue no.   Residue   No. of hydrogen bonds
  1             Gly       0-2
  2             Val       0-1
  3             Phe       0-1
  4             Val       0
  5             Leu       0
  6             Gly       0-1
  7             Phe       0-2
  8             Leu       0
  9             Gly       0
  10            Phe       0-1
  11            Leu       0-1
  12            Ala       0
The simulation has also shown the atomistic, chemical interactions in the molecular system, such as hydrogen
bonding between the gp160 peptide and the DPPC lipid bilayer. Figure 6 shows the number of hydrogen bonds occurring during the atomistic simulation spawned at 14ns. The
figure shows that 3 to 8 hydrogen bonds occur at each sampling point, mostly between 5 and 7. The
gp160 peptide consists of 12 residues. We can further discern the number of hydrogen bonds formed between each
residue and the DPPC bilayer during this atomistic simulation as shown in Table 2. The hydrogen bonds give information on the binding of the peptide to the bilayer. This in
turn provides insights into the fusion mechanism.
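Hydrogen-bond counts like those in Figure 6 and Table 2 come from a geometric criterion. The sketch below uses only the donor-acceptor distance part of the usual GROMACS-style test (distance below ~0.35nm, plus an angle condition that is omitted here for brevity); the coordinates are invented, and positions are in nm.

```python
# Sketch of a distance-only hydrogen-bond count. The full criterion also
# checks the donor-hydrogen-acceptor angle; this simplification and the toy
# coordinates are assumptions for illustration.

import math

def count_hbonds(donors, acceptors, cutoff_nm=0.35):
    """Count donor-acceptor pairs within the distance cutoff."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(1 for d in donors for a in acceptors if dist(d, a) < cutoff_nm)

donors = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
acceptors = [(0.3, 0.0, 0.0), (2.0, 0.0, 0.0)]
n = count_hbonds(donors, acceptors)      # only the 0.3nm pair qualifies
```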
From the multiscale simulation, we can see that the
gp160 fusion peptide orientates itself near to the DPPC lipid
bilayer and water interface, and associates with the bilayer.
The hydrogen bonds facilitate the binding of the fusion peptide at the bilayer-water interface.
5. Conclusions
To conclude, we have developed a novel multiscale simulation model to support efficient biomolecular simulations.
We have used the simulation of a membrane bound viral fusion peptide associating with a lipid bilayer to demonstrate
the performance of the model. The simulation has demonstrated the substantial performance enhancement realised by the multiscale model. Such a simulation can also help us to understand the viral fusion mechanism, and the knowledge obtained can aid the design of new antiviral drugs.
Our future work will extend the scope of the multiscale
simulation to solve more complex biological and biochemical problems. We will simulate well-characterised proteins
such as monoamine oxidase (MAO) and cyclooxygenase
(COX) to test the performance of the multiscale model.
References
[1] P. Bond, J. Holyoake, A. Ivetac, S. Khalid, and M. S. P.
Sansom. Coarse-grained molecular dynamics simulations
of membrane proteins and peptides. Journal of Structural
Biology, 157(3):593–605, 2006.
[2] P. Bond and M. S. P. Sansom. Insertion and assembly of
membrane proteins via simulation. Journal of the American
Chemical Society, 128(8):2697–2704, 2006.
[3] M. Christen and W. van Gunsteren. Multigraining: an algorithm for simultaneous fine-grained and coarse-grained simulation of molecular systems. Journal of Chemical Physics,
124(15), 2006.
[4] P. Coveney and P. Fowler. Modelling biological complexity:
a physical scientist’s perspective. Journal of Royal Society
Interface, 2(4):267–280, 2005.
[5] R. Delgado-Buscalioni, P. Coveney, G. Riley, and R. Ford.
Hybrid molecular-continuum fluid models: implementation
within a general coupling framework. Philosophical Transactions of the Royal Society A, 363(1833):1975–1985, 2005.
[6] P. Fowler, K. Balali-Mood, S. Deol, P. Coveney, and M. S. P.
Sansom. Monotopic enzymes and lipid bilayers: a comparative study. Biochemistry, 46(11):3108–3115, 2007.
[7] M. Karplus and J. McCammon. Molecular dynamics
simulations of biomolecules. Nature Structural Biology,
9(9):646–652, 2002.
[8] R. Laskowski, M. MacArthur, D. Moss, and J. Thornton.
PROCHECK: a program to check the stereochemical quality
of protein structures. Journal of Applied Crystallography,
26(2):283–291, 1993.
[9] A. Leach. Molecular Modelling: Principles and Applications. Prentice Hall, second edition, 2001.
[10] E. Lindahl, B. Hess, and D. van der Spoel. GROMACS 3.0: a package for molecular simulation and trajectory analysis. Journal of Molecular Modeling, 7(8):306–317, 2001.
[11] M. Marti-Renom, A. Stuart, A. Fiser, R. Sánchez, F. Melo,
and A. Sali. Comparative protein structure modeling of
genes and genomes. Annual Review of Biophysics and
Biomolecular Structure, 29:291–325, 2000.
[12] S. Nielsen, C. Lopez, P. Moore, J. Shelley, et al. Molecular dynamics investigations of lipid Langmuir monolayers
using a coarse-grain model. Journal of Chemical Physics,
107(50):13911–13917, 2003.
[13] B. Roux and K. Schulten. Computational studies of membrane channels. Structure, 12(9):1343–1351, 2004.
[14] A. Sali and T. Blundell. Comparative protein modelling by
satisfaction of spatial restraints. Journal of Molecular Biology, 234:779–815, 1993.
[15] Q. Shi, S. Izvekov, and G. Voth. Mixed atomistic and coarse-grained molecular dynamics: simulation of a membrane-bound ion channel. Journal of Physical Chemistry B,
110(31):15045–15048, 2006.
[16] Y. Sun, S. McKeever, K. Balali-Mood, and M. S. P. Sansom.
Integrating multi-level molecular simulations across heterogeneous resources. In Proc. IEEE/ACM Grid 2007, Austin,
Texas, Sept. 19–21, 2007.