Visualization of Signal Transduction Processes in the Crowded
Environment of the Cell
Martin Falk∗
Michael Klann†
Matthias Reuss†
Thomas Ertl∗
∗ VISUS – Visualization Research Center, Universität Stuttgart, Germany
† Institute of Biochemical Engineering and Center Systems Biology, Universität Stuttgart, Germany
Figure 1: Different representations of signal transduction in biological cells: microscopic image from an experiment obtained with confocal laser
scanning microscopy; microscope-like image generated with one of our visualization techniques; geometric representation emphasizing single
proteins and the structure of the cell; closeup of the highlighted region showing the crowded environment in a simulated cell (from left to right).
ABSTRACT
In this paper, we propose a stochastic simulation to model and analyze cellular signal transduction. The high number of objects in such a simulation requires advanced visualization techniques: first, to handle the large data sets; second, to support human perception in the crowded environment; and third, to provide an interactive exploration tool. To adjust the state of the cell to an external signal, a specific set of signaling molecules transports the information to the nucleus deep inside the cell. There, key molecules regulate gene expression. In contrast to continuous ODE models, we model all signaling molecules individually in a more realistic crowded and disordered environment. Beyond spatiotemporal concentration profiles, our data describes the process on a mesoscopic, molecular level, allowing a detailed view of intracellular events. In our proposed schematic visualization, individual molecules, their tracks, or reactions can be selected and brought into focus to highlight the signal transduction pathway. Segmentation, depth cues, and depth of field are applied to reduce the visual complexity. We also provide a virtual microscope to display images for comparison with wet lab experiments. The method is applied to distinguish different transport modes of MAPK (mitogen-activated protein kinase) signaling molecules in a cell. In addition, we simulate the diffusion of drug molecules through the extracellular space of a solid tumor and visualize the challenges in cancer-related therapeutic drug delivery.
1 INTRODUCTION

Life science depends on imaging and image analysis. X-ray scans of the body and microscopy of individual cells provide us with information about the state of health. In contrast, biochemistry and genetics allow a deep insight into the molecular machinery and its regulation. Mathematical models integrate these data, test new hypotheses for biological functions, and can predict diseases. Nevertheless, this systems biology approach can only lead to a better understanding of the biological processes if we are able to visualize the results and relate them to the images and outcomes of experiments.
In this paper, we present simulation results that tackle the question of how signals reach their targets (e.g., the signal that tells a cancer cell to commit suicide). A biological signal is a set of molecules that, at the right time and place, activates the next set of molecules until the state of the cell has adapted. The low abundance of signaling molecules together with the inhomogeneous medium leads to a stochastic, four-dimensional spatiotemporal problem.
Visualization is needed to gain an intuitive understanding of this data. Biologists are interested in following the signaling molecules through this structure. For this purpose, we developed a visualization method to show molecules of interest as well as their paths and interactions. It allows the user to interactively select species and even individual molecules of interest and to zoom into the virtual cell. Since cell simulations include thousands of proteins, obstacles, and filaments, the schematic visualization quickly ends up in visual clutter (see Figure 1, center right). Diffusing proteins are extremely difficult to track due to their chaotic motion. Also, reactions between proteins cannot easily be spotted. The particular aim of this work is to highlight the events of interest within the confusing orchestra of structures and background molecules. The cytoskeleton and crowding elements present in a cell are included in the simulation and represent the structure of the cell in our images. Human perception is guided toward the objects in focus by depth of field and depth cues. The dynamics of the simulated process can be displayed by visualizing the data of different time points of the simulation as an animation. Handling data and objects as glyphs, storing them in vertex buffers, and GPU-based rendering provide the necessary speed for an interactive visualization.
Index Terms: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics
and Realism; I.3.8 [Computer Graphics]: Applications; I.6.8 [Simulation and Modeling]: Types of Simulation—Discrete Event; J.3
[Life and Medical Sciences]: Biology and Genetics;
∗ e-mail: {falk|ertl}@vis.uni-stuttgart.de
† e-mail: {klann|reuss}@ibvt.uni-stuttgart.de
2 RELATED WORK
The applied simulation is based on Smoluchowski dynamics, a concept that has also been used by other groups, e.g., SMOLDYN [2] or FLAME [19]. Our aim is to provide a framework that includes a realistic and interactive visualization of the dynamics in the cell coupled to scientific results. The visualization of particle dynamics in cell biology is also addressed in the field of biomedical 3D animation. In [17], the authors go into much greater detail on the objects to be visualized but do not focus on the analysis of biochemical networks. To visualize the (static) network structure in the field of systems biology, different standards have been developed, e.g., the Systems Biology Graphical Notation (SBGN).
Traditionally, biologists are only interested in the number of activated molecules at a given time. The system's behavior is shown in protein concentration profiles over time (and space). These profiles also include 3D graphs of the concentrations with respect to parameter changes [9]. Two- or three-dimensional plots are employed to visualize signal transduction pathways of single proteins [7, 8, 12]. Instead of graphs, our proposed visualization tool uses three-dimensional glyphs. Glyph-based rendering allows the interactive display of tens or hundreds of thousands of objects. Gumhold [6] as well as Klein and Ertl [10] use splatting to render implicitly described ellipsoids on the GPU. Reina and Ertl [15] refined the method to render dipole glyphs consisting of cylinders and spheres.
Visual perception in crowded environments is usually hindered by the high visual complexity. Depth cues improve depth perception by adjusting color intensity and saturation depending on the distance to the camera [20]. Depth of field, as known from photography, can be used to separate foreground and background. Potmesil and Chakravarty [14] suggest a lens and aperture camera model to simulate this phenomenon. A GPU-based implementation can be found in [16].
3 VISUALIZATION AND SIMULATION AS TOOLS IN SYSTEMS BIOLOGY
On the experimental side, it has become possible to label molecules of interest with fluorescent markers and, most recently, also with markers of the state of signaling molecules [1]. Fluorescence microscopy reveals the distribution of these molecules in the cell. Ideally, the outcome of an agent-based Monte Carlo simulation, where each agent represents a signaling molecule, should give the same images when the agents are visualized. The agents should behave like the molecules in the cell. There, they move according to diffusion. In some cases, they are transported by so-called motor proteins along the cytoskeleton (motorized transport). On their way, the proteins can react with each other. The interaction rules for the agents are based on the biochemical reaction rates [13].
The resolution of simulations can be much higher than any experimental resolution, both in the time and the space domain. Thus, simulations can not only be compared to experiments but, given the right parameters, also lead to greater insight into nature.
The classical way of modeling signal transduction uses sets of coupled ordinary differential equations (ODEs) and thus neglects spatial aspects [18]. Spatial attributes are included in partial differential equations (PDEs) and have a distinct effect on the signal strength at the location where it matters: in the nucleus of the cell [9].
A more detailed simulation has to follow all relevant molecules. Their movement and interaction are influenced by all other molecules of the cell. The intracellular space is filled with so many molecules (molecular crowding) that this effect should not be neglected [4]. Also note that the discrete character of biochemical reactions leads to stochastic effects [5]. All these features are covered by our agent-based simulation.
3.1 Schematic Illustration
The highly sophisticated nature of a cell is simplified in the following way: it consists only of proteins, obstacles, and a generalized cytoskeleton. Each of these components can be selected
for visualization. Proteins, peptides, and hormones, all part of
the signal transduction process in cells, are mapped to spheres,
which are large enough to enclose their structure. Furthermore, we
have spherical obstacles to simulate other subcellular components.
These obstacles are essential for simulating hindered diffusion. The
filaments of the cytoskeleton are represented by elongated cylinders, which are placed randomly in the cell. During the simulation
the cytoskeleton is stationary.
The proposed visualization technique employs geometric objects and glyphs to display these simplified cell components. The purpose of this schematic representation is to provide visual feedback on the simulation. This allows biologists to integrate the present
knowledge about the cell in one image. The presentation of the
crowding in the cell together with highlighted proteins of interest is close to reality and shows the stochastic distribution of the
molecules. Especially in signal transduction, where the low number of molecules leads to high levels of noise, this representation is
superior to simple plots of the distribution profile. Due to the high
resolution of the simulation, the detailed visualization provides a
better understanding of the cell and signal transduction.
3.2 Proteins and Trajectories
Some proteins are too small to be clearly visible at the desired resolution of the cell. Hence, a selected class of proteins can be scaled
up by a scaling factor.
It is interesting to see how specific proteins propagate through
the environment by diffusion and active motor transport. For example, it might be appropriate to track single signal proteins along
their way to the nucleus to study their trajectory. On their way into
the cell they might react with other proteins, be replaced by other
proteins, and subsequently change their direction back toward the
cell membrane. Proteins might even be trapped between filaments
of the cytoskeleton. Such events are hard to detect when analyzing numerical data but are clearly visible when exploring the data in 3D.
Three-dimensional line plots are typically the method of choice for visualizing protein trajectories. Usually, line segments with a constant line width are used, neglecting occlusion. As the step size varies in our simulation, it is difficult to estimate the length and direction of a single line segment in the spatial domain. But a correct estimation is helpful when combining protein trajectories with our schematic cell representation. We have therefore chosen to use thin illuminated cylinders as line segments. This makes it possible to perceive the direction of single steps by means of lighting. Additionally, depth perception is enhanced by the inherent perspective foreshortening and self-occlusion of the trajectories.
Proteins can traverse large parts of the cell, leaving long trajectories behind them. Markers along a trajectory are used to highlight the direction of the protein at particular segments. We use small arrow tips to illustrate the motion of the molecule. Figures 2 and 5 depict pathways of MAPK molecules annotated with these glyphs.
A simulation usually consists of more than one million time steps. It is neither possible nor necessary to store all individual positions, since the incremental changes are usually insignificant. The interval at which positions are kept for visualization can be selected in the simulation. The visualized trajectories connect the sampled points with straight lines. These line segments might cross an obstacle even though the true trajectory avoided it.
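As a small illustration of this subsampling, the sketch below keeps every N-th simulated position of a trajectory and selects the segments that are long enough to carry an arrow marker. It is a minimal C++ sketch with illustrative names and is not taken from the described framework.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float segmentLength(const Vec3& a, const Vec3& b) {
        const float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Keep only every 'stride'-th position of a trajectory (e.g. stride = 1000).
    std::vector<Vec3> subsampleTrajectory(const std::vector<Vec3>& positions,
                                          std::size_t stride) {
        std::vector<Vec3> sampled;
        for (std::size_t i = 0; i < positions.size(); i += stride)
            sampled.push_back(positions[i]);
        return sampled;
    }

    // Indices of line segments that are long enough to carry an arrow marker.
    std::vector<std::size_t> markerSegments(const std::vector<Vec3>& sampled,
                                            float minSegmentLength) {
        std::vector<std::size_t> segments;
        for (std::size_t i = 0; i + 1 < sampled.size(); ++i)
            if (segmentLength(sampled[i], sampled[i + 1]) >= minSegmentLength)
                segments.push_back(i);
        return segments;
    }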
Figure 2: A signaling molecule (orange) and its trajectory (red) between obstacles (purple) and the cytoskeleton (rose and light blue) of a cell.
The utilization of depth cues and depth of field emphasizes molecule and trajectory: (a) without depth of field and depth cues; (b) applied color
gradient on the cytoskeleton; (c) applied depth of field; (d) desaturation and color gradient as depth cue; (e) combined depth cues and depth of
field.
3.3 Reactions
Reactions are based on collisions of two molecules or the spontaneous decay of one molecule. The interaction connects the paths
of the reactants and product molecules. This can be highlighted by
arrows as exemplified in Figure 3. The location of reactions can be
important, e.g., in the case of crosstalk between different signaling
pathways. During the simulation, the time and position of each reaction are stored for subsequent analysis.
3.4 Cuts and Transections
Especially when looking at events deep inside the cell, crowding
objects often hide the objects or events of interest. Although transparency allows one to look through the obstacles, it creates a haze of shapes that only disorients the eye. In this case, it is much more helpful to cut away the perturbing parts of the cell to allow a clear view of the object of interest (as in Figure 5). If two cut planes
are aligned parallel to each other, a transection of the cell can be
realized (see Figure 7(c) as an example).
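A minimal sketch of how such a transection could be set up with two parallel clip planes in OpenGL 2.0 is shown below; the function name and parameters are illustrative assumptions, not the framework's actual interface.

    // Sketch: a transection realized with two parallel OpenGL clip planes.
    // Plane equations are (a, b, c, d); points with a*x + b*y + c*z + d >= 0
    // are kept. Assumes an OpenGL 2.0 context is current.
    #include <GL/gl.h>

    void enableTransection(double nx, double ny, double nz,
                           double center, double halfThickness) {
        // Keep everything above the lower plane of the slab ...
        const GLdouble lower[4] = { nx, ny, nz, -(center - halfThickness) };
        // ... and everything below the upper plane (normal flipped).
        const GLdouble upper[4] = { -nx, -ny, -nz, center + halfThickness };
        glClipPlane(GL_CLIP_PLANE0, lower);
        glClipPlane(GL_CLIP_PLANE1, upper);
        glEnable(GL_CLIP_PLANE0);
        glEnable(GL_CLIP_PLANE1);
        // Note: with GLSL vertex shaders, gl_ClipVertex must also be written
        // in the vertex program for fixed-function clip planes to take effect.
    }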
3.5 Depth of Field and Depth Cues
In the “crowded environment” of our simulated cell, several thousand proteins and obstacles move around. The cylinders representing the cytoskeleton are the largest objects and mostly responsible
for occlusion. Displaying the cylinders increases the visual complexity tremendously. But exactly these skeletal filaments support
spatial perception. They form a 3D structure guiding the eye and
occlude proteins in the background. Without the filaments there
remains only a point cloud of proteins.
Focus & context techniques allows us to reduce both problems,
with and without displaying the filaments. A color gradient with
cool-warm shading is used for the filaments of the cytoskeleton.
The color is chosen according to the relative distance toward the
nucleus. In this way, the user is always aware of the direction in which the camera is aimed (compare Figures 2(a) and (b)).
Depth of field furthermore separates foreground and background. It is based on the same effect known from photography. Objects in focus are emphasized, and their surroundings are blurred by depth of field. The closer the object is to the camera, the shallower the field in focus; faraway focal points lead to a broad field in focus. In our visualization, depth of field is used to draw the attention of the eye to a specific region. To see the difference between a scene without depth of field and the same scene with depth of field applied, compare Figures 2(b) and (c).
Similar to depth of field, depth cues assist human perception by modifying the colors of objects according to their depth. In
this work, we use desaturation and a slight increase in brightness.
Figure 3: Reactions (gray) between molecules are illustrated by arrows. Reactants (green) enter the reaction and products (red) leave
it.
This is done by a linear transformation into 3D tristimulus color
space as described by Weiskopf and Ertl [20]. Instead of using
the object depth as input for the depth cues, information about the
blurriness from the depth of field effect is reused. The more an object is out of focus, the less saturated it becomes. In this way, only objects “in
focus” keep their original colors. See Figure 2(d) for an example
of depth cues and Figure 2(e) for a combination of depth cues with
depth of field.
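The following sketch shows one way such a blur-driven desaturation could look; it is a simplified CPU-side stand-in for the tristimulus transformation of Weiskopf and Ertl [20], with illustrative names and constants rather than the paper's actual shader code.

    #include <algorithm>

    struct Color { float r, g, b; };

    // Desaturate and slightly brighten a color depending on how far the object
    // is out of focus; 'blur' in [0,1] is derived from the circle-of-confusion
    // radius of the depth of field effect.
    Color depthCue(const Color& c, float blur) {
        blur = std::min(std::max(blur, 0.0f), 1.0f);
        const float gray = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; // luminance
        const float lift = 1.0f + 0.2f * blur;                         // slight brightening
        return { (c.r + (gray - c.r) * blur) * lift,
                 (c.g + (gray - c.g) * blur) * lift,
                 (c.b + (gray - c.b) * blur) * lift };
    }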
3.6 Microscopic Images
The goal of this method is to produce images familiar to biologists
and to support the comparison between simulation and experiment.
As most images from experiments are made with microscopes, the
need arises to represent simulated data in a similar manner.
Confocal laser microscopy can be used to obtain experimental
data. It uses a laser which is focused in a small volume. There, the
laser excites fluorescent dye in the cell. For this purpose, proteins
are tagged with a fluorescent marker. The fluorescent light is detected after passing a small aperture to improve the resolution. The
focus of the laser beam is very small (250 nm × 250 nm × 750 nm), which represents the currently highest resolution in live cell imaging. An image is obtained by scanning the volume of the cell with
the focal spot (with up to 1 frame per second, depending on the
size of the cell/region of interest). The left image in Figure 1 shows
an example of tagged TNF (tumor necrosis factor) receptors. The distribution and also the clustering of the receptors are visible.
Rendering the precise positions from the simulation results in isolated points that do not influence nearby points unless alpha compositing or more sophisticated techniques are employed. In our approach, small camera-aligned quads are used for each protein position to overcome this issue. Due to the enlarged area of the proteins, we achieve a higher coverage of the image plane. We apply a radial intensity distribution to each protein to mimic the Gaussian intensity
Figure 4: The MAPK pathway: MAPK is activated by the receptor (reaction r1). The activated form MAPKp then has to transport the information to the nucleus but is attacked by phosphatases (r2). The phosphatases relax back into the active form immediately (r3).
profile of the laser. In combination with an additive blending function, the resulting brightness looks much like light emitted by fluorescent markers. The overall appearance is slightly blurred, like the experimental images.
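The sketch below illustrates the kind of OpenGL state and radial falloff this approach relies on; it is an assumption-based outline with illustrative names, and in the framework the intensity itself is evaluated per fragment in a GLSL shader.

    #include <GL/gl.h>
    #include <cmath>

    // Additive blending state for the microscope-like images: intensities of
    // overlapping splats simply add up, like light from fluorescent markers.
    void enableFluorescenceBlending() {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);  // purely additive compositing
        glDepthMask(GL_FALSE);        // splats should not occlude each other
    }

    // Radial Gaussian falloff of one splat; r is the distance from the quad
    // center (in units of the quad half-size), sigma controls the spot size.
    float splatIntensity(float r, float sigma) {
        return std::exp(-(r * r) / (2.0f * sigma * sigma));
    }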
Only “tagged” proteins are rendered; all other protein types are neglected. Background fluorescence is imitated by a projection of the cell membrane with high transparency. The final result is an image with high similarity to the experimental ones, thus allowing an easy comparison (compare the two leftmost images of Figure 1).
4 APPLICATION
4.1 Mitogen-Activated Protein Kinases (MAPK)
Kholodenko [9] raised the question of whether signaling molecules,
in this case MAPK, diffuse through the cell or are transported with
motor proteins along the cytoskeleton from the plasma membrane
to the nucleus. The MAPKs are changed to an activated, phosphorylated form MAPKp by MAPK kinases (MAPKK), which are
themselves activated by the respective MAPKKK. The MAPKKK
is activated by receptors that are triggered by external signals. This
multi-component cascade converts the external signal into an internal signal and allows amplification and regulation. The regulation of the signal includes the opposing deactivation of MAPKp by phosphatases. It is assumed that the upstream part of the cascade is
located in ’scaffolds’ at the plasma membrane and MAPK is the
mobile component. Thus the signal translocation to the nucleus depends on the arrival of MAPKp, despite the dephosphorylation reaction everywhere in the cell (see Figure 4). The local excitation (at
the plasma membrane) together with the global inhibition (LEGI)
leads to spatial gradients; only a few MAPKp reach the nucleus.
If motor proteins transport the MAPKp along the cytoskeleton directly to the nucleus, this will change the MAPK distribution in the cell, and more molecules will reach the nucleus [9].
A microscope-like image of a transection of the cell was created as described in section 3.6. While diffusion leads to the above-mentioned radial gradient of active signaling molecules in the cell, the transport with motor proteins strongly increases the concentration around the nucleus (see Figure 6). This (obvious) result can be compared to experimental results and can thus clarify whether the respective signaling molecules are actively transported through the cell in vivo. The parameters of the simulation could furthermore be adjusted to better fit experimental data.
With a much higher resolution, it is possible to look at individual molecules and to follow them through the cell. The resulting images give an impression of the crowding in the cell, a fact that is often neglected when modeling signal transduction.
Figure 5: Difference between diffusion (left) and motorized transport
along the cytoskeleton (right) towards the nucleus (gray). Diffusion
leads to random-walk tracks while the transported molecules follow
the direction of the cytoskeleton cylinders on straight paths. In the
second row the cytoskeleton is included to show the alignment of
tracks and cytoskeleton (closeup). Some diffusive paths seem to go
through the cytoskeleton solely because only every thousandth position is visualized and connected by a straight line. Additionally, the
lower half of the cell was cut away and the edges of the cytoskeleton
are emphasized for better distinction in the lower left image.
The localization of the signaling molecules can affect the fate of the cell.
Furthermore, we are able to visualize the track of single molecules.
In the MAPK example one can see the difference between diffusive
traces and the straight paths following the cytoskeleton into the cell
(see Figure 5). Note that the individual steps in motorized transport
are much smaller than the diffusive steps. The diffusive paths of
only 92 molecules thus cover a much larger fraction of space than the 165 molecules that are partly following the cytoskeleton. So, in principle, diffusing particles can reach the nucleus faster than those transported along the cytoskeleton, but the probability of doing so decreases strongly with the distance.
4.2 Delivery of Drug Molecules in Solid Tumors
The same simulation method can also be used to simulate the delivery of drug molecules to cancer cells in a tumor (the drug is designed to trigger the death of the cancer cells) [3]. After the drug
leaves the blood vessels, it has to diffuse into the tumor. The diffusion is restricted by the cells and the inter-cellular fibers stabilizing the tissue. A tumor is much larger than single cells, and therefore spatial effects are much more pronounced. Figure 7
shows that even after 30 minutes the drug reaches only cells close
to blood vessels at the tumor surface.
Here, visualization has to tackle two major tasks. First, the
handling of one million structures plus hundreds of thousands of drug molecules, together with the interactive control of the visualized section and objects, has to be provided. Second, the tumor
(here 1.2 mm) is much larger than the cells (here 20 µm), which
themselves are several orders of magnitude larger than the drug
molecules (here 20 nm). The smallest object class is not visible
when looking at the tumor, while the spatial dimensions of the tumor cannot be sensed when focusing on single drug molecules.
Figure 8 demonstrates the proportions of the virtual tumor. A realistic tumor can be much larger, thus increasing the computational
complexity as well as the difficulties in visualization and the challenges in optimizing drug delivery.
Figure 6: Generated microscopic images showing the protein concentration difference between diffusion only (a) and additional motor transport
along the cytoskeleton (b), as well as the respective radial concentration profiles calculated from the particle positions (c).
Figure 7: Drug diffusion into the tumor and binding to the tumor cells: (a) 5 min after injection; (b) 30 min after injection; (c) transection 30 min
after injection; (d) radial concentration profile 30 min after injection.
5 TECHNICAL ASPECTS
5.1 Implementation of the Simulation
All modeled molecules are represented by agents. In every time step, all mobile agents move according to diffusion or motorized transport. Diffusion is simulated as a random walk, where the step length is adjusted to fit the diffusion coefficient [11]. In the motor transport mode, agents follow the direction of the assigned cytoskeleton fiber with a step length of v_motor ∆t. If agents collide with fixed objects, the step is not carried out. Reactions are based on the distance between two agents. They react if they are closer than a critical distance, which is chosen to fit the macroscopic reaction rate [13].
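The sketch below outlines one possible form of such a step: a random-walk displacement whose length is derived from the diffusion coefficient (here via the common choice √(6 D ∆t) for three dimensions), rejection of steps that would end inside an obstacle, and a critical-distance test for reactions. Names and data structures are illustrative assumptions, not the authors' implementation.

    #include <cmath>
    #include <random>
    #include <vector>

    struct Vec3 { double x, y, z; };

    struct Agent {
        Vec3   pos;
        double radius;
        double diffusionCoeff;   // D
    };

    // One diffusion step: a displacement of length sqrt(6*D*dt) in a uniformly
    // random direction; the step is rejected if it would end inside an obstacle.
    void diffusionStep(Agent& a, double dt,
                       const std::vector<Agent>& obstacles, std::mt19937& rng) {
        std::normal_distribution<double> gauss(0.0, 1.0);
        Vec3 dir{ gauss(rng), gauss(rng), gauss(rng) };      // isotropic direction
        const double len  = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        const double step = std::sqrt(6.0 * a.diffusionCoeff * dt);
        const Vec3 trial{ a.pos.x + step * dir.x / len,
                          a.pos.y + step * dir.y / len,
                          a.pos.z + step * dir.z / len };

        for (const Agent& o : obstacles) {                   // collision: reject step
            const double dx = trial.x - o.pos.x, dy = trial.y - o.pos.y,
                         dz = trial.z - o.pos.z;
            const double minDist = a.radius + o.radius;
            if (dx * dx + dy * dy + dz * dz < minDist * minDist)
                return;
        }
        a.pos = trial;
    }

    // Two agents may react if they come closer than a critical distance that is
    // fitted to the macroscopic reaction rate [13].
    bool canReact(const Agent& a, const Agent& b, double rCrit) {
        const double dx = a.pos.x - b.pos.x, dy = a.pos.y - b.pos.y,
                     dz = a.pos.z - b.pos.z;
        return dx * dx + dy * dy + dz * dz < rCrit * rCrit;
    }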
5.2 Implementation of the Visualization Framework
A GPU-based implementation is employed to allow interactive exploration of the simulation results. We use C++ as the programming language and OpenGL 2.0 for rendering, employing the OpenGL Shading Language (GLSL) for vertex and fragment programming. The
software was tested on both Windows and Linux PCs.
Our rendering system for the schematic illustration is divided
into several parts: cytoskeleton, proteins, trajectories, and reactions. Static structures like the time-independent cytoskeleton, protein trajectories, or immobile obstacles are stored in vertex buffers
in the GPU memory to speed up rendering. Time-dependent proteins are rendered in immediate mode of OpenGL.
The filaments of the cytoskeleton and the fibers, in the case of the tumor, are approximated by thin cylinders. These cylinders are tessellated and rendered via vertex buffers. Instancing, if supported by the GPU, can be used as an alternative rendering method for the filaments. In the case of instancing, only one geometric representation of a cylinder is uploaded into a vertex buffer on the GPU. During rendering, this cylinder is then transformed by an additional transformation matrix to create the instances. These matrices can also be stored on the GPU, requiring only 12 floating-point values independent of the cylinder tessellation and thus saving bandwidth and memory. The edges of the cylinders are optionally emphasized to
aid object detection by clear boundaries (compare Figure 5, second
row). The boundaries b are determined by the following equation:

    b = 1  if  √( |⟨v, n⟩| / (1 − ⟨v, t⟩²) ) > 0.4,   and   b = 0  otherwise,
where v denotes the viewing direction, n the normal, and t the tangent vector of the cylinder. The power of two and the square root
are chosen arbitrarily to yield visually appealing results.
All spherical objects, including proteins and obstacles, are rendered as spherical glyphs with the technique presented in [15]. Ray
tracing is used in the fragment program to solve the implicit sphere
equation. Only the position and radius of each sphere have to be transferred to the GPU for rendering. The resulting spheres are free of any tessellation artifacts and require minimal bandwidth. Protein trajectories and reactions also use glyphs for rendering. The trajectories consist of cylinders for the linear segments and small spheres for the joints between those segments. Cone glyphs representing the directional markers of a trajectory are placed on segments that are long enough to fit a marker. A combination of
cylinder and cone is used for identifying reactions.
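For illustration, the ray/sphere test that such a fragment program has to solve can be written as follows; this is a CPU-side sketch with illustrative names rather than the actual shader of the technique in [15].

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Ray/sphere test as it has to be solved per fragment for the sphere glyphs:
    // returns the distance along the normalized ray direction to the first hit,
    // or nothing if the ray misses the sphere.
    std::optional<float> intersectSphere(const Vec3& origin, const Vec3& dir,
                                         const Vec3& center, float radius) {
        const Vec3 oc{ origin.x - center.x, origin.y - center.y, origin.z - center.z };
        const float b = dot(oc, dir);
        const float c = dot(oc, oc) - radius * radius;
        const float discriminant = b * b - c;
        if (discriminant < 0.0f)
            return std::nullopt;              // miss: the fragment is discarded
        return -b - std::sqrt(discriminant);  // nearest intersection
    }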
The depth of field is applied in a post-processing step. It requires two separate rendering passes. From the previously rendered scene and the content of the depth buffer, mipmaps containing depth information are generated. In the second pass, up to 20 texture lookups are performed in these mipmaps for each pixel in the framebuffer. The lookups are uniformly distributed in a disc. The disc radius is determined by the radius of the circle of confusion, a value that depends on the distance to the focal plane. The overall effect can be adjusted via an aperture value, as in photography. The radius of the circle of confusion is also used for the computation of the depth cues to desaturate objects that are out of focus. The computation of the depth cues takes place in the fragment programs of each object type.
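A simplified sketch of how a circle-of-confusion radius could be derived from the distance to the focal plane and an aperture-like parameter is given below; the framework's exact formula (following [16]) may differ, and the names are illustrative.

    #include <algorithm>
    #include <cmath>

    // Blur radius (circle of confusion) for a fragment at eye-space depth
    // 'depth', given the focal depth and an aperture-like strength parameter.
    // A simplified thin-lens-style approximation, not the paper's exact formula.
    float circleOfConfusion(float depth, float focalDepth,
                            float aperture, float maxRadius) {
        const float radius = aperture * std::fabs(depth - focalDepth) / depth;
        return std::min(radius, maxRadius);   // keep the blur bounded
    }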
5.3 Interaction
The exploration of data coming from a simulation is supported in
our framework by the following means: as the simulation usually
covers several hundred frames, we allow the user to step through
them using keystrokes. The positions are interpolated for a smooth motion of the molecules between two frames. For convenience, an
automated animation over all frames is provided.
Up to three clip planes can be enabled to cut away uninteresting
or occluding parts of the data set. Additionally, the clip planes can
be used to prepare slices by aligning them in parallel.
Camera parameters like position, orientation, field of view, and aperture can be set by the user. If the depth of field effect is enabled, the camera aperture directly affects the depth of field as described in section 3.5. The focal point, which is of importance for the depth of field, can be adjusted by the user in two ways. In the first mode, the focus follows the mouse cursor. The new focal depth is set to the depth of the first object below the mouse cursor. A smooth transition from the old value to the new focal depth prevents popping of the depth of field. The other mode locks the focus onto a selected molecule and adjusts it while tracking the molecule over time. The user can select individual molecules with the mouse and display their trajectories and the reactions they participate in.
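A minimal sketch of the first mode, reading the depth below the cursor from the OpenGL depth buffer and easing the focal depth towards it, could look as follows; the names and the smoothing scheme are illustrative assumptions.

    #include <GL/gl.h>

    static float focalDepth = 0.5f;   // current focal depth (window-space z in [0,1])

    // Read the depth of the first object below the mouse cursor and ease the
    // focal depth towards it to avoid popping of the depth of field.
    void updateFocus(int mouseX, int mouseY, int viewportHeight, float smoothing) {
        GLfloat z = 1.0f;
        // OpenGL window coordinates have their origin at the lower left corner.
        glReadPixels(mouseX, viewportHeight - 1 - mouseY, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &z);
        if (z < 1.0f)                                    // hit an object, not background
            focalDepth += smoothing * (z - focalDepth);  // smooth transition
    }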
The style of the visualization, schematic or microscopic, can be
changed by the user at any time. Default color schemes based on clearly distinguishable colors are provided for our visualization techniques. Important objects like protein molecules and trajectories
are dyed with saturated colors. Filaments of the cytoskeleton and
obstacles are dyed with pale colors. The colors can be adjusted
by the user so that they match, e.g., images obtained by wet lab
experiments.
6 RESULTS

The rendering performance has been tested with two data sets and two camera settings. We used data from a MAPK simulation and a drug molecule delivery simulation (referred to as Drug Diffusion). The first camera setup showed the whole data set, the second a closeup. The scenes used for measurements were similar to Figure 1 (right) and Figure 8. Depth of field was applied throughout all tests.
The tests were conducted on a Windows PC with an AMD64 X2 Dual Core 4600+, 2 GB RAM, and an NVIDIA GeForce 8800 GTS with 320 MB. The viewport resolution was set to 1600×1200. Table 1 shows the number of components computed in the simulations for the MAPK and Drug Diffusion data sets. Because of the large number of filaments in Drug Diffusion, the vertex buffers were disabled, as their memory requirements exceeded the available memory on the GPU. For comparison, we added an additional mode for the skeleton that replaces the tessellated cylinders with simple lines using the immediate rendering mode. These lines were then rendered without applying a shader program.
In Table 2, the measurements in frames per second are shown
for different rendering methods of the cytoskeleton and fibers for
both data sets. All other components were rendered as glyphs. Additional measurements were taken for the MAPK data set with depth of field disabled. From these measurements, we can compute the time needed for depth of field. The consumed time lies between 10 and 12 milliseconds and is almost constant, as expected from a post-processing filter.
The microscopic visualization of the MAPK data set reached 27
frames per second when rendering all 70,000 proteins. The camera
was again set up to fit the whole cell. Rendering only the “tagged”
proteins (7,038) resulted in 240 frames per second.
The inter-cellular fibers of our tumor model can be approximated
by simple lines as their diameter is negligible when visualizing
the whole tumor. If available, instancing can be used for closeup views. The tumor data set used for Figure 7 consists of 616,957
drug molecules and no fibers. We measured 10 frames per second
with visible cells and drug molecules when the whole tumor was
fitted into the viewport. Displaying only the drug molecules resulted in 24 frames per second. The fuzzy edges of the molecules
in Figure 8(c) arise from precision issues on the GPU due to the
difference in size between the whole tumor and the tiny molecules.
The simulations were computed on an Intel Core2 Quad Q6700
with 2 GB RAM. Seven seconds of the MAPK simulation took
about 12 hours depending on the level of crowding and the chosen time step. The Drug Diffusion simulation needed 16 hours for
the data set with fibers covering one minute of simulated time. The 30 minutes of the data set shown in Figure 7 took nearly a week to compute. Here, the time needed for each frame increases as
more and more molecules diffuse into the tumor.
MAPK                               Drug Diffusion
Proteins                 70,000    Drug Molecules           143,634
Obstacles                43,062    Cells                    107,303
Cytoskeleton Filaments    6,097    Inter-cellular Fibers    887,818

Table 1: Number of simulated components in the MAPK and Drug Diffusion data sets.

                                    VB       Instance   Lines     None
MAPK            Complete   dof      32.00    31.94       45.00     59.22
                           no dof   51.77    51.70      101.57    145.36
                Closeup    dof      34.37    34.25       46.24     58.94
                           no dof   58.85    58.64      124.21    145.64
Drug Diffusion  Complete            –         1.87        8.03     16.22
                Closeup             –         2.98        9.17      9.93

Table 2: Rendering performance measured in frames per second for the MAPK and Drug Diffusion data sets with different rendering techniques for the cytoskeleton and inter-cellular fibers. For MAPK, frame rates were recorded with depth of field (dof) and without. Vertex buffers (VB) were disabled for Drug Diffusion due to their high memory consumption.
Figure 8: Illustration of the difference in size between a tumor (1.2 mm), cells (20 µm, rose) and drug molecules (20 nm, yellow while diffusing
and red if bound to a cell) visualized by zooming in. The drug molecules are up-scaled by a factor of 15. The magnification in (c) shows rendering
artifacts due to precision issues on the GPU.
7 CONCLUSIONS AND FUTURE WORK
We have developed an interactive visualization framework to explore data from simulations in a virtual cellular environment. For the visualization of this crowded environment, we proposed two approaches. The schematic visualization abstracts the cell to proteins, obstacles, and cytoskeleton. GPU-based glyphs are used for rendering the molecules and obstacles, and the filaments of the cytoskeleton are rendered as geometric objects. Focus & context methods were
employed to reduce the visual complexity. In contrast, the second
visualization technique produces microscope-like images. These
images are similar to the results obtained by confocal laser scanning microscopy in wet lab experiments. Microscopic images are
easier to grasp than the detailed schematic representation. But the
interactive segmentation and selection in the schematic view allow
a fast understanding of the visualized information.
Two applications of ongoing research in the fields of biology and medicine were presented. The signal transduction by mitogen-activated protein kinase (MAPK) in a single cell and the delivery of drug molecules in a tumor by diffusion have been simulated and visualized. The trajectories of signaling proteins can be displayed, which allows analyzing the discrete signal transduction of individual molecules, in contrast to continuum approaches. We think the detailed results justify the high computational effort of the simulation.
In future work, simulation and visualization could be coupled to
allow interactive adjustment of the simulation parameters. As most
parts of the simulation can be scheduled in parallel, GPUs could
speed up the computation and allow the analysis of more complex
problems. Caching strategies could be applied for visualizing and
handling larger data sets, which consist of more molecules and time
steps.
Furthermore, the simulation could be enhanced by considering fluid dynamics in the so far “empty” space between the cytoskeleton filaments or the inter-cellular fibers. In addition, the state of every cell in the tumor simulation, depending on the number of attached drug molecules, could be considered and visualized. We are also thinking of adapting our simulation method to porous media in materials science.
ACKNOWLEDGEMENTS
We wish to thank Steffen Steinert for the image from a TNF-receptor experiment with confocal laser scanning microscopy. Special thanks to Magnus Strengert and Steffen Koch for their valuable
discussions about the visualization and Alexei Lapin for his assistance in the development of the simulation method.
REFERENCES
[1] M. D. Allen, L. M. DiPilato, B. Ananthanarayanan, R. H. Newman,
Q. Ni, and J. Zhang. Dynamic visualization of signaling activities in
living cells. Science Signaling, 1(37):pt6, 2008.
[2] S. S. Andrews and D. Bray. Stochastic simulation of chemical reactions with spatial resolution and single molecule detail. Physical
Biology, 1:137–151, 2004.
[3] M. R. Dreher, W. Liu, C. R. Michelich, M. W. Dewhirst, F. Yuan, and
A. Chilkoti. Tumor vascular permeability, accumulation, and penetration of macromolecular drug carriers. Journal of the National Cancer
Institute, 98(5):335–344, 2006.
[4] R. J. Ellis. Macromolecular crowding: an important but neglected
aspect of the intracellular environment. Current Opinion in Structural
Biology, 11:114–119, 2001.
[5] C. A. Gómez-Uribe and G. C. Verghese. Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions
through coupled mean-variance computations. The Journal of Chemical Physics, 126(2):024109, 2007.
[6] S. Gumhold. Splatting illuminated ellipsoids with depth correction. In
Proceedings of 8th International Fall Workshop on Vision, Modelling
and Visualization, pages 245–252, 2003.
[7] G. L. Hazelbauer. Myriad molecules in motion: Simulated diffusion
as a new tool to study molecular movement and interaction in a living
cell. Journal of Bacteriology, 187(1):23–25, 2005.
[8] C. L. Howe and W. C. Mobley. Signaling endosome hypothesis: A
cellular mechanism for long distance communication. Journal of Neurobiology, 58(2):207–216, 2004.
[9] B. N. Kholodenko. Four-dimensional organization of protein kinase
signaling cascades: the roles of diffusion, endocytosis and molecular
motors. J Exp Biol, 206(12):2073–2082, June 2003.
[10] T. Klein and T. Ertl. Illustrating magnetic field lines using a discrete
particle model. In Proceedings of Vision Modeling and Visualization
2004, pages 387–394, 2004.
[11] A. Lapin, M. Klann, and M. Reuss. Stochastic simulations of 4d-spatial temporal dynamics of signal transduction processes. In Proceedings of the FOSBE 2007, pages 421–425, 2007.
[12] K. Lipkow, S. S. Andrews, and D. Bray. Simulated diffusion of phosphorylated CheY through the cytoplasm of Escherichia coli. Journal
of Bacteriology, 187(1):45–53, 2005.
[13] M. Pogson, R. Smallwood, E. Qwarnstrom, and M. Holcombe. Formal
agent-based modelling of intracellular chemical interactions. Biosystems, 85:37–45, 2006.
[14] M. Potmesil and I. Chakravarty. A lens and aperture camera model for
synthetic image generation. ACM SIGGRAPH Computer Graphics,
15(3):297–305, 1981.
[15] G. Reina and T. Ertl. Hardware-accelerated glyphs for mono- and dipoles in molecular dynamics visualization. In Proceedings of
EUROGRAPHICS-IEEE VGTC Symposium on Visualization 2005,
pages 177–182, 2005.
[16] T. Scheuermann and N. Tatarchuk. ShaderX3: Advanced Rendering
with DirectX and OpenGL (Shaderx Series), chapter Improved Depth
of Field Rendering, pages 363–377. Charles River Media, 2004.
[17] J. Sharpe, C. J. Lumsden, and N. Woolridge. In Silico: 3D Animation and Simulation of Cell Biology with Maya and MEL. Morgan
Kaufmann, 2008.
[18] K. Takahashi, S. N. V. Arjunan, and M. Tomita. Space in systems biology of signaling pathways - towards intracellular molecular crowding
in silico. FEBS Letters, 579(8):1783–1788, Mar. 2005.
[19] D. Walker, S. Wood, J. Southgate, M. Holcombe, and R. Smallwood.
An integrated agent-mathematical model of the effect of intercellular
signalling via the epidermal growth factor receptor on cell proliferation. Journal of Theoretical Biology, 242(3):774–789, 2006.
[20] D. Weiskopf and T. Ertl. Real-time depth-cueing beyond fogging.
Journal of Graphical Tools, 7(4):83–90, 2002.