
Traffic 2000 1: 935–940
Munksgaard International Publishers
Toolbox
Raising the Speed Limits for 4D Fluorescence
Microscopy
Adam T. Hammond and Benjamin S. Glick*
Department of Molecular Genetics and Cell Biology, The
University of Chicago, 920 East 58th Street, Chicago, IL
60637, USA
* Corresponding author: B.S. Glick,
[email protected]
Three-dimensional time-lapse (4D) fluorescence microscopy is becoming a routine experimental tool. This
article summarizes current technologies, and describes a
new method for speeding image acquisition during 4D
confocal microscopy.
Key words: Confocal microscopy, deconvolution microscopy, microscopy, optical sectioning, photobleaching, piezoelectric, projection, video
Received 12 September 2000, revised 29 September
2000, accepted for publication 29 September 2000
Many cell biology journals now publish online video images
as supplements to the printed articles. This investment of
resources is justified, because video microscopy is playing an
increasingly important role in cell biology. The green fluorescent protein (GFP) has revolutionized the study of intracellular dynamics (1). Meanwhile, improvements in microscopy
instrumentation and computer power have enabled researchers to process large amounts of digital image data. An
emerging technology is the visualization of biological samples in three dimensions over time, also known as 4D microscopy (2). At each time point in a 4D experiment, the
sample is optically sectioned to generate a stack of images.
Optical sectioning can be performed with transmitted light
using a method such as differential interference contrast
microscopy (2). We will discuss a more common application:
the optical sectioning of fluorescently labeled structures.
Imaging a biological sample by fluorescence microscopy usually requires multiple optical sections. The reason is that with
high-resolution objectives, only a thin layer of the sample is in
focus at any one time (3). For a typical microscope configuration, optical sections should be spaced less than 0.5 μm
apart, so even a small sample such as a yeast cell may
require 15 or more optical sections. Hence, the major technical hurdle in 4D microscopy is the acquisition and processing
of three-dimensional image stacks. The theory and practice
of optical sectioning microscopy is a vast topic that can only
be covered in a full-length book, such as the excellent
volume edited by Pawley (4). Instead of trying to provide a
comprehensive summary, we will describe some of the
practical issues that arise with 4D fluorescence microscopy.
An immediate concern is to keep the specimen alive and
healthy during the observation period, while maintaining high
image quality. Accurate temperature control is often critical.
These requirements can be met with available instruments
(5), and microscope accessories that facilitate live-cell imaging are commercially available (Table 1).
The acquisition of multiple optical sections imposes two
experimental limitations. First, repeated exposure to intense
excitation light can be toxic to living samples (6), and can
enhance photobleaching because some of the excitation light
is absorbed by fluorophores above and below the image
plane (7). Using brief exposures to low-intensity illumination
will minimize such photodamage. This strategy requires that
the structures of interest be as brightly fluorescent as possible. For example, if a structure is labeled with a GFP fusion
protein, tandem copies of the GFP tag will often increase the
signal without perturbing the function of the fusion protein.
The second problem with 4D microscopy is that acquiring
many optical sections at each time point can make data
collection prohibitively slow. This issue will be discussed
later in the article.
Confocal Versus Wide-Field Deconvolution Microscopy
Fluorescence emanating from above and below the image
plane creates an out-of-focus haze. This problem can be
avoided by confocal microscopy (4). Most confocal microscopes image the sample one point at a time, using a pinhole
aperture to exclude out-of-focus light. Each optical section is
generated by scanning a line of points at one edge of the
sample, then progressively scanning adjacent lines until the
opposite edge of the sample has been reached. This technology is well established and modern confocal microscopes
(Table 1) combine high sensitivity with excellent image quality. An alternative method for removing out-of-focus haze is
wide-field deconvolution microscopy (8,9). This approach
uses mathematical algorithms to reconstruct images based
on the optical properties of the system. In deconvolution
microscopy, a conventional fluorescence microscope is used
to capture a stack of haze-containing images, and subsequent processing reassigns the out-of-focus light to the appropriate focal planes. This technique can yield remarkably
crisp views of structures in living cells [e.g. (10 – 12)].
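For readers who want a feel for how such reconstruction works, the following minimal sketch uses the Richardson–Lucy routine from scikit-image on a synthetic example; this is an illustration of the general principle only, not the algorithm used by the commercial packages listed in Table 1, and the Gaussian point-spread function (PSF) is an assumption standing in for a measured PSF.

```python
# Illustrative sketch: iterative deconvolution with a known point-spread function (PSF).
# A synthetic two-point object is blurred with an assumed Gaussian PSF, then the blurred
# light is computationally reassigned by Richardson-Lucy iteration.
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Synthetic "specimen": two bright points on a dark background
truth = np.zeros((64, 64))
truth[20, 20] = 1.0
truth[40, 45] = 1.0

# Assumed Gaussian PSF (real systems measure the PSF, e.g. with sub-resolution beads)
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

blurred = convolve2d(truth, psf, mode='same')    # simulated out-of-focus haze
restored = richardson_lucy(blurred, psf)         # reassign the blurred light (default iterations)

print("peak value before/after deconvolution:", blurred.max(), restored.max())
```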
Table 1: Sources of instrumentation and software. Each entry lists the organization and website, followed by the product name.

Confocal microscopes
Bio-Rad (microscopy.bio-rad.com): Radiance, RTS 2000
Carl Zeiss (www.zeiss.com): LSM 510
Nikon (www.nikonusa.com): PCM 2000
Leica (www.leica-microsystems.com): TCS SP2
Olympus (www.olympus-europa.com): FLUOVIEW
Perkin-Elmer (lifesciences.perkinelmer.com): Wallac UltraVIEW
Molecular Dynamics (www.mdyn.com): CLSM 2010
Meridian (www.microscopy-online.com/Vendors/Meridian): Ultima 312
Optiscan (www.optiscan.com): Personal Confocal
Polytec (www.konfokal.de): CWS 200

Deconvolution systems
Applied Precision (www.api.com): DeltaVision
Scanalytics (www.scanalytics.com): EPR
Bitplane (www.bitplane.com): Huygens
Vaytek (www.vaytek.com): MicroTome
AutoQuant Imaging (www.aqi.com): AutoDeblur

3D and 4D Visualization Software
Scanalytics (www.scanalytics.com): IPLab
Universal Imaging (www.image1.com): MetaMorph
Vital Images (www.vitalimages.com): VoxelView
Bitplane (www.bitplane.com): IMARIS
Molecular Dynamics (www.mdyn.com): ImageSpace
Vaytek (www.vaytek.com): VoxBlast
AutoQuant Imaging (www.aqi.com): AutoVisualize-3D
LOCI (www.loci.wisc.edu): 4D Viewer
NIH (rsb.info.nih.gov/nih-image): NIH Image
Biomedical Computer Laboratory (www.ibc.wustl.edu/bcl): XCOSM

Movie assembly and viewing
Apple Computer (www.apple.com/quicktime): QuickTime
NIH (rsb.info.nih.gov/nih-image): NIH Image
Adobe (www.adobe.com): Premiere

Live-cell microscopy accessories
Bioptechs (www.bioptechs.com): DT, FCS2 Systems
Nalge Nunc (nunc.nalgenunc.com): LabTek II Chambers
Molecular Probes (www.probes.com): Attofluor Chambers
Harvard/Medical Systems (www.haicellbiology.com): Micro-Incubators
Life Imaging Services (www.lis.ch): Ludin Chamber
Warner Instrument (www.warnerinstrument.com): Series 20 Chambers

Piezoelectric Objective Positioners
Physical Instruments (www.physikinstrumente.com): NanoPositioners

Note: some of the listed software packages are available free of charge.
Note that deconvolution can be performed not only with wide-field images, but also with confocal images. The combination of confocal and deconvolution methods reportedly yields better results than either method alone (13). In particular, deconvolution helps to overcome a major limitation of confocal microscopy by improving resolution along the z-axis.

For a cell biologist attempting to choose between confocal and deconvolution microscopy (13), the following considerations may be relevant. (a) Deconvolution effectively removes moderate levels of background haze, but for samples with high levels of out-of-focus fluorescence, confocal microscopy is the only practical technique. (b) Confocal microscopy is the easier alternative for most users because optical sections are generated automatically and can be viewed immediately. Deconvolution is performed after image acquisition, and proper application of this method has traditionally required considerable expertise (8,9). However, commercial deconvolution systems (Table 1) are now making this technology more accessible. (c) Deconvolution microscopes employ standard fluorescence filter sets, and can therefore be used to visualize any of the common fluorophores. Most confocal microscopes are more limited because they utilize a fixed set of laser excitation wavelengths. For example, dual-color imaging of the cyan and yellow variants of GFP is difficult with standard confocal systems (14). (d) Deconvolution systems are usually less expensive, but confocal systems offer more functionality, including the ability to perform selective photobleaching (11,15). (e) It is often assumed that deconvolution microscopy causes less photodamage than confocal microscopy because deconvolution systems employ weaker excitation light and capture more of the emission signal (9,11). However, we and others have found that confocal microscopes can be used to capture thousands of optical sections with minimal photodamage. The mechanisms of photobleaching and phototoxicity are poorly understood, and the intense but very brief illumination that occurs during confocal microscopy might actually be less damaging than the more prolonged illumination that occurs during deconvolution microscopy (16). In any case, the best policy is to compare different imaging systems empirically using a test sample.

Processing of 4D Image Data

A 4D microscopy experiment can generate thousands of image files. With dedicated 4D acquisition systems, the images are automatically stored in a format that facilitates viewing and processing. For customized 4D acquisition systems, such as the one described below, the image files must be imported into an appropriate software environment. We have had good results with IPLab, which is one of several commercial and public software packages designed to manipulate 4D image files (Table 1) (17).
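As a hypothetical illustration of this import step (our own workflow used IPLab; the file names, naming convention, and use of the tifffile reader below are assumptions for the sketch, not part of that workflow), a series of single-section TIFF files could be assembled into one array as follows:

```python
# Minimal sketch of importing a 4D data set (time, z, y, x) from individual TIFF sections.
# The file-naming pattern "cell_tTTT_zZZ.tif" is hypothetical.
import numpy as np
import tifffile  # third-party TIFF reader

n_timepoints, n_sections = 138, 30   # values taken from the experiment in Figure 3

stacks = []
for t in range(n_timepoints):
    sections = [tifffile.imread(f"cell_t{t:03d}_z{z:02d}.tif")
                for z in range(n_sections)]
    stacks.append(np.stack(sections))      # one (z, y, x) stack per time point

data4d = np.stack(stacks)                  # final shape: (t, z, y, x)
print(data4d.shape)
```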
To extract visual information from 4D data, the image stacks
are often converted to a different format. Programs exist for
the rendering and animation of three-dimensional images
(18), and this approach is now being extended to 4D data
sets (J. Ellenberg, personal communication). However, for
experiments that do not require sophisticated topological
analysis, a simpler approach is to project each stack of optical
sections into a single flat image. The sequence of projections
is then viewed with movie-making software (Table 1).
Various algorithms can be used to project a stack of optical
sections (18). Some algorithms highlight the surface features
of a fluorescent structure (see below, Figure 4). More commonly, a researcher needs to monitor all of the fluorescence
emanating from a structure. In such cases, the usual approach is to generate either maximum intensity projections
or average intensity projections. The maximum intensity algorithm compares all of the pixel values at a given (x, y)
position in the stack, and chooses the brightest of these pixel
values for the projection (Figure 1B). This method emphasizes the details in fluorescence images (18,19). The average
intensity algorithm sums all of the pixel values at a given (x, y)
position in the stack, and utilizes the average value for the
projection (Figure 1C). This method yields a more quantitatively accurate representation of the data. However, average
intensity projections tend to have low contrast, and they are
often dim because each fluorescent structure may appear in
only a few optical sections (18,19). For these reasons, maximum intensity projections are more popular than average
intensity projections.
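Both projection types reduce to a single operation along the z-axis of the image stack. A minimal sketch in Python/NumPy, with a synthetic stack standing in for real optical sections:

```python
# Maximum- and average-intensity projections of one image stack, shape (z, y, x).
import numpy as np

rng = np.random.default_rng(0)
stack = rng.poisson(5.0, size=(15, 128, 128)).astype(float)  # placeholder data only

max_projection = stack.max(axis=0)    # brightest pixel value at each (x, y) position
avg_projection = stack.mean(axis=0)   # average pixel value at each (x, y) position
```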
One problem with projecting an image stack is that noise
from all of the optical sections is incorporated into the projection (20). This effect is particularly severe with maximum
intensity projections. Digital fluorescence images are corrupted by ‘shot’ noise, which derives from statistical fluctuations in the number of photons detected for the individual
pixels. We have found that shot noise can be removed very
effectively, with minimal loss of image data, by processing
each optical section with a 3 × 3 hybrid median filter (21).
However, the best way to obtain a high signal-to-noise ratio
is to maximize the fluorescence intensities of the structures
being imaged.
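The hybrid median filter is straightforward to implement. The sketch below follows one common formulation of the 3 × 3 variant described by Russ (21): each output pixel is the median of three values, namely the median of the '+'-shaped neighborhood, the median of the 'x'-shaped neighborhood, and the original pixel.

```python
# 3x3 hybrid median filter: median of (plus-median, cross-median, center pixel).
import numpy as np

def hybrid_median_3x3(image):
    padded = np.pad(image, 1, mode='edge')
    c = padded[1:-1, 1:-1]                                  # center pixels
    n, s = padded[:-2, 1:-1], padded[2:, 1:-1]              # vertical neighbors
    w, e = padded[1:-1, :-2], padded[1:-1, 2:]              # horizontal neighbors
    nw, ne = padded[:-2, :-2], padded[:-2, 2:]              # diagonal neighbors
    sw, se = padded[2:, :-2], padded[2:, 2:]

    plus = np.median(np.stack([c, n, s, w, e]), axis=0)       # '+' neighborhood median
    cross = np.median(np.stack([c, nw, ne, sw, se]), axis=0)  # 'x' neighborhood median
    return np.median(np.stack([plus, cross, c]), axis=0)      # median of the medians
```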
Quantitation of 4D Image Data
The quantitation of images obtained by 4D fluorescence
microscopy can be challenging. For typical applications, the
goal is to estimate the relative intensities of the fluorescence
signals emanating from various structures. The most rigorous quantitation method is three-dimensional analysis by
means of volume rendering, which treats a stack of optical
sections as a set of voxels (three-dimensional pixels). A
‘segmentation’ algorithm is used to highlight the relevant
voxels in each structure (Figure 1A). Both the volume and the
total fluorescence intensity of the structure can then be
measured. Software for three-dimensional analysis is available from microscope manufacturers and independent companies (Table 1). This method is time-consuming but
accurate. For example, in an immunofluorescence study, we
quantified the fluorescence from optically resolvable Golgi
structures in interphase and mitotic vertebrate cells (22). During prophase, these cells changed shape and the continuous Golgi ribbon broke apart into multiple fragments, yet our measurements confirmed that the total Golgi fluorescence was virtually identical in interphase and prophase cells. In general, if the experiment requires evaluating individual structures that can only be resolved by volume rendering, then three-dimensional analysis is the appropriate technique.

Figure 1: Different ways to represent three-dimensional fluorescence data. For simplicity, two fluorescent structures are shown as idealized cubes. These cubes contain the same uniform density of a fluorophore (red color), but one of the cubes has 3-fold longer edges than the other, corresponding to a 27-fold difference in volume and total fluorescence emission. (A) Volume rendering treats the structures as cubes that differ in volume by a factor of 27. Quantitative three-dimensional analysis will accurately measure the relative fluorescence emissions of the two structures. (B) A maximum intensity projection represents the structures as squares that differ in area by a factor of 9. However, the fluorescence signal intensities of the two squares are identical, so quantifying the projected image would overestimate the relative fluorescence emission of the smaller structure by a factor of 3. (C) An average intensity projection also represents the structures as squares that differ in area by a factor of 9, but the fluorescence signal intensity of the larger square is 3-fold higher. Thus, quantifying the projected images will correctly indicate the relative fluorescence emissions of the two structures.
For simpler quantitative studies, an alternative strategy is to
use projected images. Summing the pixel values from a
region of interest is technically easier than summing the
voxel values from a volume of interest. However, the analysis of projected images can be problematic. Maximum intensity projections should not be used for quantitation because
they enhance small fluorescent structures relative to large
ones (Figure 1B). Average intensity projections are suitable
for quantitation (Figure 1C) provided that the background
fluorescence of the sample is low enough to yield adequate
contrast. For suitable samples, we find that quantifying average intensity projections gives results comparable to
three-dimensional analysis (A.T.H., unpublished data).
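The arithmetic of Figure 1 can be checked numerically. In the sketch below, two synthetic uniformly labeled cubes with a 3-fold difference in edge length are quantified in three dimensions and in projected form; the average intensity projection preserves the true 27-fold ratio, whereas the maximum intensity projection reports only a 9-fold ratio.

```python
# Synthetic check of Figure 1: voxel sums versus projected-image sums.
import numpy as np

volume = np.zeros((40, 40, 80))          # (z, y, x)
volume[5:8, 5:8, 5:8] = 1.0              # small cube: 3 x 3 x 3 voxels
volume[10:19, 10:19, 40:49] = 1.0        # large cube: 9 x 9 x 9 voxels

max_proj = volume.max(axis=0)            # maximum intensity projection along z
avg_proj = volume.mean(axis=0)           # average intensity projection along z

ratio_3d = volume[10:19, 10:19, 40:49].sum() / volume[5:8, 5:8, 5:8].sum()
ratio_avg = avg_proj[10:19, 40:49].sum() / avg_proj[5:8, 5:8].sum()
ratio_max = max_proj[10:19, 40:49].sum() / max_proj[5:8, 5:8].sum()

print(ratio_3d, ratio_avg, ratio_max)    # 27.0, 27.0, 9.0
```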
During the course of a 4D-microscopy experiment, the
fluorescence progressively decreases due to photobleaching. If this effect is not too severe, the data can be corrected by measuring the bleaching rate of a reference
structure that contains a fixed number of fluorophores. A
new generation of ‘two-photon’ microscopes may alleviate
photobleaching by exciting fluorophores only within the focal plane (7).
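A minimal sketch of such a correction, assuming single-exponential bleaching of a reference structure (all numbers below are invented for illustration):

```python
# Bleaching correction from a reference structure with a fixed number of fluorophores.
import numpy as np

time = np.arange(138) * 2.6                        # seconds (intervals as in Figure 3)
reference = 1000.0 * np.exp(-time / 300.0)         # hypothetical reference intensities

# Estimate the bleaching rate from the reference signal, then build a correction factor
rate = -np.polyfit(time, np.log(reference), 1)[0]  # decay constant, 1/s
correction = np.exp(rate * time)                   # grows with time to offset bleaching

# Apply the correction to the structure of interest (synthetic values here)
measured = 500.0 * np.exp(-time / 300.0)           # bleaches at the same rate
corrected = measured * correction                  # approximately constant after correction
```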
Speed Limits in 4D Microscopy
During 4D microscopy of organelles or cytoskeletal elements, it may be desirable to collect a stack of optical
sections every few seconds. This requirement stretches the
capacity of many 4D imaging systems. Shortening the exposure time for each optical section decreases the resolution
and/or the signal-to-noise ratio of the data. Nevertheless,
the capture of individual optical sections is usually quite
rapid: with a bright fluorescence signal, wide-field and confocal systems can record a decent image in 100 ms or less.
Figure 2: Two strategies for acquiring 4D data with a scanning confocal microscope. (A) With the standard method, the objective is moved by a stepper motor to the appropriate position, and after a delay to allow oscillations to subside, an optical section is captured. The delays between optical sections may constitute a significant fraction of the time needed to collect an image stack. (B) If the objective is moved continuously during image acquisition, the delays between optical sections are eliminated. The resulting optical sections are tilted relative to the focal plane, with the tilt angle determined by the width of each confocal scan and the z-axis spacing between scans.

Under these conditions, the rate-limiting step in 4D microscopy is often the delay between successive optical sections (23). As shown in Figure 2A, the usual method for acquiring an optical section is to move the objective to the desired position, then wait for a fixed time before capturing the image. This delay is needed to allow oscillations in the stepper motor and immersion oil to subside. With the standard stepper motor on the Zeiss LSM 510 confocal microscope, the delay between optical sections is 300 ms. In
addition, the 4D acquisition software of the LSM 510
pauses for 4 s between successive image stacks. As a
result, collecting a stack of 20 100-ms optical sections requires about 12 s, of which only 2 s are devoted to imaging.
Other instruments permit faster data acquisition, but the
optical sections are always separated by delays that reduce
the speed of 4D imaging. Such delays also exacerbate image blurring when a moving structure appears in multiple
optical sections.
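The arithmetic behind these figures is simple, using only the values quoted above:

```python
# Acquisition-time accounting for the standard LSM 510 configuration described above.
n_sections = 20
exposure = 0.100      # s per optical section
step_delay = 0.300    # s settling delay after each stepper-motor move
stack_pause = 4.0     # s pause between successive image stacks

total = n_sections * (exposure + step_delay) + stack_pause   # ~12 s per stack
imaging = n_sections * exposure                              # only 2 s of actual imaging
print(total, imaging)
```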
We recently devised a simple method that can be used in
conjunction with confocal microscopy to eliminate the delays between optical sections. A piezoelectric positioner
moves the objective continuously in the z-direction during
the confocal scan, yielding optical sections that are slightly
tilted relative to the focal plane (Figure 2B). The tilt angle is
chosen such that at the end of each scan, the objective is
positioned to begin capturing the next optical section. Continuous objective movement prevents the oscillations and
shock waves that can occur with rapid step movements
(23). For microscopes that capture all of the pixels in an
optical section simultaneously, continuous objective movement would blur the image. However, a standard confocal
microscope assembles an optical section from individual line
scans, and the objective movement during any given line
scan is negligible, so image quality is not compromised. In a
typical 4D analysis of a yeast cell, we collect 100-ms confocal sections of 60 lines each (~5 μm total scan width) at a z-axis spacing of 0.35 μm; these parameters correspond to an average objective velocity of 3.5 μm/s, an average objective movement of 6 nm during each line scan, and an optical section that is tilted 4° relative to the focal plane. Figure
3 shows a test experiment in which mitochondrial dynamics
were visualized in a living yeast cell by collecting image
stacks at intervals of 2.6 s. Eight successive projections are
shown for purposes of illustration, and the entire movie can
be viewed at www.traffic.dk.
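For reference, the quoted values follow directly from the acquisition parameters:

```python
# Scan geometry for continuous objective movement, from the parameters given above.
import math

section_time = 0.100     # s per confocal section
n_lines = 60             # line scans per section
scan_width = 5.0         # total scan width, micrometers
z_spacing = 0.35         # z-axis spacing between sections, micrometers

velocity = z_spacing / section_time                          # 3.5 um/s average objective velocity
move_per_line = z_spacing / n_lines * 1000.0                 # ~6 nm of z movement per line scan
tilt_deg = math.degrees(math.atan(z_spacing / scan_width))   # ~4 degrees of section tilt

print(velocity, move_per_line, tilt_deg)
```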
To confirm that continuous objective movement does not
degrade image quality, we collected optical sections of a
fluorescent pollen grain using both the conventional method
shown in Figure 2A and the fast method shown in Figure
2B. For any given set of microscope parameters, the resulting projected images were nearly indistinguishable (Figure
4). Note that the capture of tilted optical sections does not
alter views obtained by projecting along the z -axis. We have
detected no image distortion with objective velocities of up
to 5 μm/s. This result is not surprising in light of recent
work from Callamaras and Parker (24), who used a
piezoelectric positioner to acquire x-z confocal sections at
an objective velocity of 1 mm/s. Therefore, it should be
possible to use continuous objective movement with videorate confocal microscopes (24,25) to achieve extremely
rapid 4D imaging.
Technical details regarding continuous objective movement
are available from the authors upon request.
Figure 3: An example of 4D fluorescence microscopy using continuous objective movement. Living cells of the yeast S.
cerevisiae were labeled with MitoTracker Red CMXRos (Molecular Probes) to visualize the dynamics of the mitochondrial network (10).
For the cell shown, we collected 138 confocal image stacks at intervals of 2.6 s. Each image stack consisted of 30 88-ms optical sections of 10 × 5 μm (107 × 55 pixels) at a z-axis spacing of 0.2 μm. The optical sections were processed with a 3 × 3 hybrid
median filter to reduce shot noise, and then projected using a maximum intensity algorithm. The figure highlights eight successive
projected images from the movie (see www.traffic.dk). This image sequence shows the mitochondria in the daughter cell separating
from the maternal mitochondrial network.
Figure 4: Comparison of three-dimensional confocal data
obtained with stepwise or continuous objective movements. The panels show an endogenously fluorescent grain of
ragweed pollen (McCrone Microscopes and Accessories). Stacks
of optical sections were acquired with an LSM 510 confocal
microscope using 50 243-ms confocal scans of 20 × 22 μm at a z-axis spacing of 0.3 μm. Projections were made with an
algorithm that emphasizes surface features (18). The image on
the left was generated using the standard optical sectioning
software, with each stepwise movement of the objective followed by a delay of 300 ms. The image on the right was
generated by moving the objective continuously at a velocity of
1.2 μm/s. We also performed similar tests using a variety of
other imaging parameters (not shown). In all cases, continuous
objective movement had no visible effect on image quality.
Conclusions
4D fluorescence microscopy systems are becoming faster
and more versatile. When applying this technology, perhaps
the most important consideration is that bright signals allow
high-quality images to be captured rapidly with minimal
photodamage. Therefore, the structures of interest should be
as intensely fluorescent as possible. In other words, if the
lights are bright enough, we can see into the fourth
dimension.
Acknowledgments

Thanks to Craig Lassy and Tim Karr for help with microscopy, and to Jan Ellenberg and Clare Waterman-Storer for critical reading of the manuscript. This work was supported by grants from the National Science Foundation (MCB-9875939), the American Cancer Society (RPG-00-245-01-CSM), and the Pew Charitable Trusts.
References

1. Sullivan KF, Kay SA, editors. Green Fluorescent Proteins (Methods in Cell Biology, Vol. 58). San Diego: Academic Press; 1999.
2. Thomas CF, White JG. Four-dimensional imaging: the exploration of space and time. Trends Biotechnol 1998;16: 175–182.
3. Lacey AJ. The principles and aims of light microscopy. In Lacey AJ (Ed.), Light Microscopy in Biology. A Practical Approach. Oxford: IRL Press, 1989: 1–59.
4. Pawley JB, editor. Handbook of Biological Confocal Microscopy, 2nd Edition. New York: Plenum Press; 1995.
5. Focht DC. Observation of live cells in the light microscope. In Spector DL, Goldman RD, Leinwand LA (Eds.), Cells, A Laboratory Manual, Vol 2. Cold Spring Harbor: Cold Spring Harbor Laboratory Press, 1998: 75.1–75.13.
6. Piston DW. Imaging living cells and tissues by two-photon excitation microscopy. Trends Cell Biol 1999;9: 66–69.
7. Centonze V, Pawley J. Tutorial on practical confocal microscopy and use of the confocal test specimen. In Pawley JB (Ed.), Handbook of Biological Confocal Microscopy, 2nd Edition. New York: Plenum Press, 1995: 549–569.
8. Agard DA, Hiraoka Y, Shaw P, Sedat JW. Fluorescence microscopy in three dimensions. In Taylor DL, Wang Y (Eds.), Fluorescence Microscopy of Living Cells in Culture, Part B (Methods in Cell Biology, Vol 30). San Diego: Academic Press, 1989: 353–377.
9. McNally JG, Karpova T, Cooper J, Conchello JA. Three-dimensional imaging by deconvolution microscopy. Methods 1999;19: 373–385.
10. Nunnari J, Marshall WF, Straight A, Murray A, Sedat JW, Walter P. Mitochondrial transmission during mating in Saccharomyces cerevisiae is determined by mitochondrial fusion and fission and the intramitochondrial segregation of mitochondrial DNA. Mol Biol Cell 1997;8: 1233–1242.
11. Rizzuto R, Carrington W, Tuft RA. Digital imaging microscopy of living cells. Trends Cell Biol 1998;8: 288–292.
12. Prinz WA, Grzyb L, Veenhuis M, Kahana JA, Silver PA, Rapoport TA. Mutants affecting the structure of the cortical endoplasmic reticulum in Saccharomyces cerevisiae. J Cell Biol 2000;150: 461–474.
13. Shaw PJ. Comparison of wide-field/deconvolution and confocal microscopy for 3D imaging. In Pawley JB (Ed.), Handbook of Biological Confocal Microscopy, 2nd Edition. New York: Plenum Press, 1995: 373–387.
14. Ellenberg J, Lippincott-Schwartz J, Presley JF. Dual-color imaging with GFP variants. Trends Cell Biol 1999;9: 52–56.
15. White J, Stelzer E. Photobleaching GFP reveals protein dynamics inside live cells. Trends Cell Biol 1999;9: 61–65.
16. Tsien RY, Waggoner A. Fluorophores for confocal microscopy.
Photophysics and photochemistry. In Pawley JB (Ed.), Handbook of
Biological Confocal Microscopy, 2nd Edition. New York: Plenum
Press, 1995: 267–279.
17. Thomas CF, White JG. Acquisition, display, and analysis of digital
three-dimensional time-lapse (four-dimensional) data sets using free
software applications. In Tuan RS, Lo CW (Eds.), Developmental
Biology Protocols, Vol I (Methods in Molecular Biology, Vol 135).
Totowa: Humana Press, 2000: 263 – 276.
18. White NS. Visualization systems for multidimensional CLSM images.
In Pawley JB (Ed.), Handbook of Biological Confocal Microscopy,
2nd Edition. New York: Plenum Press, 1995: 211 – 254.
19. Cox G. Equipment for mass storage and processing of data. In Conn
PM (Ed.), Confocal Microscopy (Methods in Enzymology, Vol 307).
San Diego: Academic Press, 1999: 29 – 55.
20. Sheppard CJR, Gan X, Gu M, Roy M. Signal-to-noise in confocal
microscopes. In Pawley JB (Ed.), Handbook of Biological Confocal
Microscopy, 2nd Edition. New York: Plenum Press, 1995: 363 – 371.
21. Russ JC. The Image Processing Handbook, 3rd Edition. Boca Raton:
CRC Press, 1999.
22. Hammond AT, Glick BS. Dynamics of transitional endoplasmic
reticulum sites in vertebrate cells. Mol Biol Cell 2000;11: 3013 – 3030.
23. Sieck GC, Mantilla CB, Prakash YS. Volume measurements in
confocal microscopy. In Conn PM (Ed.), Confocal Microscopy (Methods in Enzymology, Vol 307). San Diego: Academic Press, 1999:
297 – 315.
24. Callamaras N, Parker I. Construction of a confocal microscope for
real-time x-y and x-z imaging. Cell Calcium 1999;26: 271–279.
25. Tsien RY, Bacskai BJ. Video-rate confocal microscopy. In Pawley JB
(Ed.), Handbook of Biological Confocal Microscopy, 2nd Edition.
New York: Plenum Press, 1995: 459 – 478.