A Correlation-Based Approach to Calculate
Rotation and Translation of Moving Cells
Cyrus A. Wilson and Julie A. Theriot
Abstract—We present a noniterative image cross-correlation
approach to track translation and rotation of crawling cells in
time-lapse video microscopy sequences. The method does not rely
on extracting features or moments, and therefore does not impose
specific requirements on the type of microscopy used for imaging.
Here we use phase-contrast images. We calculate cell rotation and
translation from one image to the next in two stages. First, rotation
is calculated by cross correlating the images’ polar-transformed
magnitude spectra (Fourier magnitudes). Rotation of the cell
about any center in the original images results in translation in
this representation. Then, we rotate the first image such that the
cell has the same orientation in both images, and cross correlate
this image with the second image to calculate translation. By
calculating the rotation and translation over each interval in the
movie, and thereby tracking the cell’s position and orientation in
each image, we can then map from the stationary reference frame
in which the cell was observed to the cell’s moving coordinate
system. We describe our modifications enabling application to
nonidentical images from video sequences of moving cells, and
compare this method’s performance with that of a feature extraction method and an iterative optimization method.
Index Terms—Biological cells, biomedical image processing,
image motion analysis, image registration, microscopy, motion
estimation.

Manuscript received November 29, 2004; revised May 2, 2005. C. A. Wilson was supported by the National Institute of General Medical Sciences under a graduate training grant for the Stanford University Program in Cellular and Molecular Biology. J. A. Theriot was supported by the American Heart Association under Grant 0240038N. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Erik H. W. Meijering.

The authors are with the Department of Biochemistry, Stanford University, Stanford, CA 94305 USA (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TIP.2006.873434
I. INTRODUCTION
QUANTITATIVE motion analysis of crawling eukaryotic
cells is a challenging problem. Attempts to make sense of
the information-rich image sequences by reducing them
to simple parameters often fail to capture the biologically relevant phenomena under investigation. A specific experimental
condition might affect aspects of a cell’s movement without
altering the trajectory of the cell centroid, or cause aberrant shape
while preserving the overall aspect ratio. More sophisticated
measurements may be specific to a position within the cell, or
best expressed as a spatially varying distribution, or perhaps
a higher dimensional function of location. For example, the
persistent movement of crawling cells depends on the proper spatial
coordination of actin polymerization, actin depolymerization
and myosin contraction, with actin polymerization biased toward
the cell’s leading edge and myosin contraction strongest at the
rear [1]. For quantitative observations of such processes to be
interpretable in a biological context, to be followed as a function
of time in a single cell, to be comparable across different cells
as a function of experimental conditions, and to be statistically
analyzable, it is important that they be expressed in the coordinate system of the cell. In the case of a motile cell, the entire cell
coordinate system moves relative to the frame of reference in
which the cell was observed. It, therefore, becomes necessary to
calculate a mapping between the stationary lab reference frame
and the moving cell reference frame, which in turn demands an
accurate and objective method for determining the position and
orientation of the cell in every frame of a video sequence.
Tracking the position and orientation of a moving cell can
be accomplished by calculating a centroid of image intensity,
and second-order moments of the intensity distribution, respectively [2], [3]. However, this requires that the cell have a simple
enough shape that these parameters are sufficient to reliably
follow its motion. It also requires that the distribution of pixel
values in the image be related to the distribution of cell mass, or
otherwise strongly representative of cell shape. This can impose
constraints on the choice of cell type and on the way in which it
is imaged, such as necessitating the use of a volume marker. In
certain cases these constraints cannot be met due to the nature
of the system or the limitations of an experiment. It may be possible to track the cells based on other features, but again if the
features discard too much information they might be inadequate
for complex motions, and if they are difficult to extract reliably
they might fail for complex appearances.
A more desirable strategy, then, is one that uses information
derived from the entire appearance of the cell: one which does
not involve feature extraction or assumptions about shape,
and one which does not depend on the mode of imaging, as
long as the cell’s appearance—as depicted by image pixel
intensity distribution—is roughly similar from image to image
in a sequence. A correlation-based approach meets these
qualifications. Given two images—a template image and test
image—which are identical but for a translation of the objects
in those images, the coordinates of the maximum value of the
cross correlation between those images recover precisely the
relative translation between the images [4]. Cross correlation
between approximately similar images retrieves the approximate relative translation between those images. Since such a
method is based directly on appearance in terms of pixel values,
even phase-contrast images—easy to acquire but often complicated and difficult to interpret quantitatively—can be used
for determining the cell’s trajectory, as long as the appearance
does not change too dramatically between consecutive images
in the sequence. If images have been obtained in multiple
channels (e.g., phase-contrast and multiple fluorescence wavelengths) this method can be applied to all channels in order
to determine the degree to which the movement of different
subcellular structures is correlated. Alternatively, information
from multiple channels can be combined for computing the
cell’s moving frame of reference, with appropriate weighting
given to each channel as determined by the investigator.
The cross correlation of two images $f$ and $g$, denoted $f \star g$, can be calculated efficiently from the product of their Fourier transforms $F$ and $G$ [4], [5]

$f \star g = \mathcal{F}^{-1}\{ F^{*} \cdot G \}$    (1)

where $F^{*}$ indicates the complex conjugate of $F$, and $\mathcal{F}$ is the Fourier transform operator. Unfortunately, cross correlation can
only recover translations. If an object rotates enough, its relative translation cannot be found by two-dimensional (2-D) cross
correlation. Image noise and structures in the image other than
the object of interest further confound 2-D cross correlation.
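To make (1) concrete, here is a minimal sketch in Python with NumPy (not our MATLAB implementation; the function names are illustrative). It computes the circular cross correlation of two equally sized images and reads the relative translation off the location of the peak.

import numpy as np

def cross_correlate(f, g):
    # Equation (1): the inverse FFT of conj(F) * G is the circular
    # cross correlation of f and g.
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return np.fft.ifft2(np.conj(F) * G).real

def translation_from_peak(xc):
    # The location of the correlation maximum is the relative translation;
    # indices past the midpoint wrap around to negative shifts.
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, xc.shape))

For example, if g = np.roll(f, (5, -3), axis=(0, 1)), then translation_from_peak(cross_correlate(f, g)) returns (5, -3).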
Cross correlation is only one of several approaches to image
registration [6], [7]. A general strategy in the field of medical
image registration for dealing with transformations beyond
simple translation is an iterative local search to optimize
a similarity measure over the parameter space of possible
transformations [8]. This approach is well suited to nonrigid
registration of common features extracted from different types
of images, such as MRIs versus PET scans. However, applying
this method to intra-modal image pairs using an image intensity
based similarity measure, as would be desirable for application
to a series of images in a video microscopy sequence, involves
substantial image resampling to calculate the similarity measure and its partial derivatives at each iteration of the search [9].
This can be computationally intensive, and the transformation
estimated for best fit is only a local optimum.
In this paper, we describe the implementation of a two-stage
cross correlation based method to estimate a cell’s rotation
and translation between images of a video sequence, in such
a manner that iterative search through rotation-translation
space is not required. Rotation is isolated using the translation-invariant magnitudes of the 2-D Fourier transforms of the
two images. Applying a polar transform to each magnitude
spectrum converts rotation into translation; we
can then calculate the angle of rotation by cross correlation.
This enables us to estimate the change in orientation of the cell
in a single calculation. The idea underlying this approach is
not new [10], [11]. However, we combine this with nonlinear
intensity scaling, dynamic masking, and spatial frequency
bandpass filtering to make possible the successful application
of this strategy to objects with nonrigid and highly dynamic
appearances such as biological cells. Having calculated the rotation angle, we rotate the original template image by this angle
around the cell center, eliminating rotation. This, combined
with intensity scaling, masking, and filtering as in the first pass,
isolates translation, which we then calculate by cross correlation. We have found that this two-step method works robustly
for tracking the cell’s moving frame of reference for crawling
cells in phase-contrast video microscopy sequences—a challenging application—and its performance is superior to iterative
optimization and feature extraction methods with respect to
both accuracy and computational speed.
II. CONCEPTUAL APPROACH: REPRESENTING
ROTATION AS TRANSLATION
Given two images of an object, if the object is translated in
one image relative to the other, the displacement can be found
from the cross correlation of the images. If images can be transformed in such a way that rotation of an object in the original
images is converted to translation in the new representation, then
that translation (and thus the original rotation) can be easily
retrieved with cross correlation. For example, rotation around
the central origin can be transmuted into translation via a polar
transform [10]: The rectangular axes of the polar-transformed
image are the radius and angle in the original image (in the
convention we follow here, angle in the original image maps
to the horizontal axis of the polar transform image, and radius
maps to the vertical axis of the polar transform). Rotation about
the origin in the original image becomes translation along the
angle axis in the polar transform. However, this representation
is not invariant to translation—that is, translations in the original
image drastically alter the polar representation. Furthermore, rotation around a center other than the origin (the location designated as a radius of zero for the polar transform; often the center
of the image) results in a more complicated transformation in the
image’s polar transform. Therefore, if we do not know the center
of rotation and the translation between images, we cannot use
the polar transform directly. Note that simple cross correlation
cannot find the translation without already knowing the rotation,
and that neither method yields a center of rotation.
However, the 2-D Fourier transform has two properties
which make it extremely well-suited to the problem of finding
an image representation in which rotation about any center
is represented as a translation, and in which translation of
the original image is disregarded entirely. First, rotation of an
image about any center results in a rotation of the 2-D Fourier
transform magnitude about its origin [4], [11]. This is illustrated by the rotation of an object, off-center in the field of view,
around its own center in Fig. 1; the Fourier transform (real and
imaginary components) rotates around its origin between (a)
and (b). Second, since position information is contained in the
phase of the Fourier transform, the magnitude of the Fourier
transform (magnitude spectrum) is invariant to translation, as
shown in Fig. 1; the magnitude spectrum does not change between (b) and (c). If we then calculate the polar transform of
the magnitude spectrum, we obtain a representation in which
rotation in the original image, about any center, converts to
centered rotation of the Fourier spectrum, and translation is
ignored. The polar transform of the magnitude spectrum then
converts centered rotation into translation (Fig. 1). With this
transform, then, rotation of an object can usually be determined with simple cross correlation, without any information
regarding the center of that rotation or the translation of that
object. Given a test image which contains a rotated, translated
template, we can perform polar transforms on the magnitude
spectra of the test and template images, and then calculate the
cross correlation of the images in this representation to elegantly and efficiently find the angle of rotation in most cases.
With this information, we can then rotate the template image
so it lines up with the test image, and use cross correlation
Fig. 1. Representing rotation as translation. Rotation [between (a) and (b)] of the glyph around its own center, though it is not at the origin of the image, results
in rotation of the 2-D Fourier transform magnitude around the origin. This converts to a horizontal translation of the polar transform of the magnitude spectrum
(Fourier magnitude). Translation [between (b) and (c)] of the rotated glyph changes the phase but not the magnitude of the Fourier transform; therefore the polar
transform of the magnitude spectrum is unaffected. See the supplementary material at http://ieeexplore.ieee.org for an animated version of this figure.
as before to find the relative translation between the two. This
approach is not entirely foolproof: If the magnitude spectra of
the images are radially symmetric, for example, then the angle
of rotation cannot be recovered in the manner just described.
Such a scenario is unlikely in practice, however.
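As a concrete illustration of this representation, the following Python sketch (ours, using NumPy and SciPy rather than the MATLAB implementation described later; the sampling resolutions and sign conventions are assumptions) builds the polar transform of the magnitude spectrum and recovers a rotation angle by cross correlation.

import numpy as np
from scipy.ndimage import map_coordinates

def polar_magnitude_spectrum(img, n_radii=128, n_angles=180):
    # Polar transform of the centered 2-D Fourier magnitude. Rotation of
    # the object about any center becomes a shift along the angle axis;
    # translation of the object leaves this representation unchanged.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = mag.shape[0] / 2.0, mag.shape[1] / 2.0
    radii = np.linspace(0.0, min(cy, cx) - 1.0, n_radii)
    # Angles 0..pi suffice; the other half repeats by conjugate symmetry.
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    coords = np.array([cy + r * np.sin(a), cx + r * np.cos(a)])
    return map_coordinates(mag, coords, order=3)  # cubic interpolation

def rotation_angle(img0, img1, n_angles=180):
    # Cross correlate the two polar representations; the column of the
    # peak gives the rotation, modulo pi (theta and theta + pi must be
    # disambiguated later, e.g., when calculating translation).
    p0 = polar_magnitude_spectrum(img0, n_angles=n_angles)
    p1 = polar_magnitude_spectrum(img1, n_angles=n_angles)
    xc = np.fft.ifft2(np.conj(np.fft.fft2(p0)) * np.fft.fft2(p1)).real
    col = np.unravel_index(np.argmax(xc), xc.shape)[1]
    theta = col * np.pi / n_angles
    return theta - np.pi if theta > np.pi / 2 else theta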
This strategy has been described previously as an approach to
image registration for images that have been translated, rotated,
and/or scaled relative to each other [10]–[15] (if, instead of a
polar transform, a log-polar transform is applied to the magnitude spectrum, rotation results in translation in one dimension
as before, and scaling results in translation in the other dimension [11]. This representation is also known as a Fourier–Mellin
Transform [12], [13]). These implementations have been shown
to accurately calculate, in the presence of additive noise, imposed transformations that change an image’s position, orientation, and/or scale. In [13] and [14], the method was applied
to separately acquired images of the same object or scene, but
the main changes in appearance were geometric transformations (movement of the image acquisition apparatus between
images). However, the extent to which this method can find the
most likely angle of rotation for images of an object which has
changed by more than rotation, translation, and scaling has not
yet been determined. We find that differences in the appearance
of a cell in two images 60 s apart have a significant effect on
the correlation of the polar transforms of the magnitude spectra
(Fig. 2). The phase correlation, preferred because it gives a narrower peak than cross correlation [11], [13], [14], [16], yields a
sharp peak for two simulated “ideal” images, but no peak for the
real images. Cross correlation gives a peak for both the ideal and
real cases. For the real images, however, the calculated rotation
angle, as well as the subsequently calculated translation, do not
approximate the cell’s movement as well as the results obtained
by applying the improved method we present here (Fig. 3).

Fig. 2. Ideal image registration compared to that of similar but nonidentical images. Example cross correlations for the calculation of rotation are shown for a pair of ideal images and a pair of real images. The ideal images are identical but for rotation and translation; image one is a rotated and translated version of image zero. The real images are two images, taken 60 s apart, from a movie of a crawling cell: a fish epidermal keratocyte. The cell's appearance is similar in the two images but not identical, as the cell is not a rigid object. The polar transforms of the magnitude spectra (second row) of the images are used to calculate cross correlations and phase correlations (third row). Polar transforms and correlations have been cropped for space. Correlation values along the centers of the correlation surfaces are plotted in the fourth row. Images and plots of the ideal and real cross correlations have been scaled identically; similarly, the phase correlations have been scaled identically. Phase correlation is especially sensitive to differences in the images other than rotation and translation (note the absence of a peak for the real images). Simple cross correlation is more forgiving; however, the calculation as previously described is not sufficiently accurate for this application (Fig. 3).
Given that a cell’s appearance may change substantially over
time, we generally cannot track the cell in each image in a sequence by registration with the sequence’s initial image, but
rather with a more recent image, adding up the calculated transformations along the way. This means that a constant error in
each registration does not merely produce a constant error for
each point in the track, but instead results in a cumulative error.
Consequently, the two-stage cross-correlation strategy as described in previous implementations is unsuitable for application to tracking cell motion. We introduce our adaptations for
this application below.
Fig. 3. Application of the modified approach to track rotation and translation of moving cells. The two-stage cross-correlation approach, as previously described [10], [11], is suboptimal for tracking the movement of nonrigid objects. This is illustrated by the registration (a) of the two real images from Fig. 2 using rotation and translation calculated as in prior work. An improved registration (b) is obtained with rotation and translation computed using the modified approach (c) presented here. Nonlinear intensity scaling and bandpass filtering select the intensity range and spatial scale, respectively, of the features whose motion we are approximating as rigid.

III. APPLICATION TO DYNAMIC CELLS

In the analysis of images of moving cells from video microscopy sequences, we are not actually looking at two identical images where one has been translated and/or rotated with respect to the other. We are looking at two different images;
cells are nonrigid, dynamic objects. While we might characterize some overall motion of a cell as a combination of translation and rotation, that is a simplification. When the cell as
a whole “translates” and “rotates,” not every point on the cell
translates with the cell center or rotates about the cell center. In
fact, this description of motion does not apply directly to the individual physical components of the cell but rather to a larger
scale arrangement: a configuration whose motion emerges from
the dynamic remodeling of its component structures. For example, much of the actin cytoskeleton remains stationary or
moves slowly backward relative to the substrate during keratocyte movement while the cell as a whole moves forward [17].
It is thus only at a specific spatial scale that there appears to
be an overall arrangement which is rotating and translating. For
example, forward motion at the cell’s leading and trailing edges
is not concerted movement of the components of the membrane
(a fluid structure) but a movement of the enclosed cellular space
that is surrounded by membrane. In contrast, the nucleus moves
forward approximately as a unit. Yet, when observing the entire
cell, it appears that the edges and the nucleus, with their contrasting modes of motion, move roughly in concert. Bulk movement of our system can be summarized as rotation and translation at a certain scale; therefore our adaptation to extend this
technique to dynamic cells involves applying a spatial frequency
bandpass filter at each of the correlation steps to select only spatial information at that scale. Our procedure for tracking the rotation and translation of a cell from each image to the next in
a video sequence, diagrammed in Fig. 3, is as follows. For the
current image (designated image zero), a mask is applied to select the region of interest (the cell) to form the template image.
The remainder of the image may contain structures which do not
move or rotate with the cell, and thus should be ignored. This
prevents other objects in the video frame besides the moving cell
from contributing to the spatial frequency map and therefore obscuring the result. The polar transform of the magnitude spectrum is then calculated for the template image (generated from
image zero) and for image one (the next image in the sequence).
After bandpass filtering to focus attention on only the spatial
scale over which meaningful rotation is occurring, the cross correlation is calculated; the location of the maximum corresponds
to the angle the cell rotated between image zero and image one.
The masked image zero is then rotated around the center of the
mask region by the angle just calculated to form a template
image in which the cell has the same orientation as it does in
image one. After additional bandpass filtering, the cross correlation of these two images is calculated; the location of the maximum is the relative translation of the cell between image zero
and image one. Now that the rotation and translation relative to
image zero are known, the mask can be positioned at the location
of the cell in image one—which will become image zero in the
next iteration, in the simplest implementation relevant for application to a video sequence—to form a template image. Another
option is to use the initial image in the sequence as image zero
at each iteration, and assign each subsequent image to be image
one, calculating the rotation and translation in each image relative to the orientation and position in the starting image rather
than in the previous image. However, for later images, the appearance of the cell may have evolved sufficiently such that it
cannot be reliably correlated with the starting image (note that,
even when correlating with the previous image, it may be necessary to occasionally update the mask shape if the cell shape is
rapidly changing).
In the following two sections, we present for the two calculation stages (rotation and translation) the implementation details
that enable successful application of this approach to dynamic
cells.
IV. CALCULATION OF ROTATION
Two images, 60 s apart, from a video of a crawling keratocyte (fish epidermal cell) are shown in Fig. 2 (“real”). The cell
experiences rotation as well as translation over the interval between the two images: image zero and image one. As shown in
Fig. 4, the magnitude spectrum of image one is rotated relative
to that of image zero, by the same angle as the cell's rotation.
Fig. 4. Representations of a cell’s rotation. The cell from Fig. 2 is shown at 0 s (image 0) and 60 s (image 1). The magnitude spectrum (center region enlarged,
inset) of image 1 is rotated by the same angle relative to the magnitude spectrum of image 0 as is the cell. Correspondingly, the polar transform of the image 1
magnitude spectrum is translated relative to that of image 0 (the black vertical arrows point to approximately the same feature in the two polar transforms; the gray
arrow in the second polar transform indicates the position of the black arrow in the first, for reference). See the supplementary material at http://ieeexplore.ieee.org
for an animated version of this figure.
This rotation results in a horizontal translation—the horizontal
axis is the angle axis—of the polar transform of the magnitude
spectrum.
A. Nonlinear Intensity Scaling
For various types of images, including phase-contrast and
epifluorescence, the image features corresponding to the cell fall
in a specific range of pixel intensities, and confounding objects
in the image field often tend to fall outside of this range. To
improve the specificity with which the cross-correlation calculations find our object of interest, we can select a range of pixel
values somewhat analogously to our selection of a spatial region
with a mask and a spatial scale with a bandpass filter (below).
Our first step, before masking, transforming, or filtering the images is to apply a sigmoidal scaling to the pixel values. This
stretches the contrast in the range of interest, while flattening
out intensities above and below. We determine our pixel value
range of choice by drawing a line from inside the edge of the cell
to outside the edge; the minimum and maximum values along
that line define the range to be stretched.
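One possible form of this scaling, sketched in Python (our sketch; the steepness constant is an assumed, tunable choice, as we describe the scaling only qualitatively):

import numpy as np

def sigmoid_scale(img, lo, hi):
    # Stretch contrast within [lo, hi] and flatten intensities outside it.
    # lo and hi are the minimum and maximum pixel values sampled along a
    # line drawn from inside the cell edge to outside it.
    mid = 0.5 * (lo + hi)
    width = (hi - lo) / 8.0  # steepness: an assumed, tunable constant
    return 1.0 / (1.0 + np.exp(-(img.astype(float) - mid) / width))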
We observed that flattening out features outside the intensity
range of interest can help to decrease the influence of cell and
tissue debris (floating past or being carried by the cell) on the
cross correlations. The cell shown in Figs. 2–9 is carrying a
round remnant of debris with features at intensities above our
selected range. Nonlinear scaling, applied to the cell image in
Figs. 4–9, attenuates the variations, making the debris appear uniform as it does in the figures, and reduces its frequency content such that it is nearly eliminated by the bandpass filter, as seen in Fig. 7. Without this initial sigmoidal intensity scaling step, the debris had a pronounced texture, and the error it contributed to the rotation and translation calculations could not be reduced by bandpass filtering alone. This was the case for phase-contrast video sequences of other keratocytes as well.

Fig. 5. Selecting the region of interest. We use a mask derived from the outline of the cell in the initial image of the sequence; the mask is rotated and translated to the cell's current calculated orientation and position. The mask need not be shaped like the cell, as long as the masked region does not include features other than the cell. We apply a Gaussian lowpass filter to the mask so that the frequency content it contributes is below our spatial frequency passband (hence the lack of dependence on mask shape). We subtract from image zero the average value of the pixels outside the mask, and then multiply this image with the mask. The neutral gray in the masked image (right) corresponds to a pixel value of 0, due to scaling of the image for the figure.
B. Region of Interest

In order to specify the template (the cell) we are looking for in image one, we select the region in which the cell is located in image zero by applying a mask. This step is critical for video sequences of moving cells, which typically include several cells in a single image field. Our mask is derived from the outline of the cell in the initial image of the movie, rotated and translated as necessary according to the calculated motion of the cell up to the current image zero. For our purposes, we compute the outline using an "active contours," or "snakes," approach based on [18]. Briefly, a rough outline is drawn around the cell by the user, and then iteratively deformed toward the cell edge. Once the outline has been computed, it is filled in and then dilated such that it spans the whole cell even after lowpass filtering. For best results, the mask should include the entire cell but not include other features in the image; however, it does not need to be based on a cell outline. An outline might be a poor choice for some modes of imaging, or for cells which change shape substantially over the course of the video sequence. Neutrophils and Dictyostelium discoideum, for example, change shape rapidly, but tend to be somewhat spherical overall; for such cells a mask might be made from a circle of some radius around the center (as specified manually or from a volume marker centroid). Alternatively, one might use an outline but automatically update it periodically to keep up with shape changes.
We apply a lowpass filter to the mask by convolving the binarized mask with a Gaussian. We want to avoid introducing new
frequency components into the template image; the lowpass filtering ensures that any spatial frequency content contributed by
the mask is below the lower limit of the bandpass filter we will
be applying later.
The mask, which now smoothly drops off to zero, is ready to
be applied to the image. We subtract from image zero a constant
background intensity—the average intensity of pixels outside
the mask in image zero—then we multiply by the mask. Fig. 5
shows application of the mask to image zero to generate the
template image.
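A compact Python sketch of this masking step (our helper, not the published code; the Gaussian width is an assumption, to be chosen so the mask's frequency content stays below the bandpass lower limit):

import numpy as np
from scipy.ndimage import gaussian_filter

def apply_soft_mask(img, binary_mask, sigma=8.0):
    # Lowpass the binarized mask by Gaussian convolution so the frequency
    # content it contributes lies below the bandpass filter's lower limit.
    soft = gaussian_filter(binary_mask.astype(float), sigma)
    # Subtract the average intensity outside the mask, then multiply.
    background = img[~binary_mask].mean()
    return (img - background) * soft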
Fig. 6. Calculation of rotation. We calculate the polar transform of the magnitude spectrum of the masked image zero (shown) and of the windowed image one. We then apply a spatial frequency bandpass filter to each. In this representation, spatial frequency is an axis; therefore, filtering can be accomplished simply by taking a slice across the image. We only take angles 0 to π, as the region from π to 2π is a repeat of 0 to π due to the conjugate symmetry of the Fourier transform. The location of the maximum of the cross correlation of these two sections corresponds to the angle of the cell's rotation between image zero and image one. We calculate the centroid of the peak; the angle coordinate of the centroid (or the angle plus π) is the angle of rotation.

C. Calculation of Magnitude Spectrum and Polar Transform
Since the Fourier transform treats a signal as periodic, the discontinuities at the boundaries of an image contribute artifacts to
its Fourier transform. It is important that the magnitude spectrum we calculate not include such artifacts, as they may obfuscate the important features in the representation in which we
will be calculating the angle of rotation, decreasing our ability
to reliably find that angle. We therefore want both our template
image and our test image to be continuous at the boundaries,
meaning that the pixel intensities should gently drop off to a
constant value near the boundaries. The boundaries of the template image (the masked image zero) are already continuous,
due to the application of the mask (however, if the mask had
nonzero values at a boundary, an additional window may need
to be applied). To ensure continuities at the boundaries of the
test image (image one) we can optionally apply a Hann window
[19] in the spatial domain. The window is applied to image one
the same way the mask was applied to image zero above; we
subtract from image one its average intensity, then multiply by
the window, which smoothly falls to zero near the boundaries.
Though we include this windowing in the implementation as described here, we find that in practice it is generally unnecessary,
and, in some cases, the uneven weighting of image one due to
windowing can interfere with the accuracy of the calculations. We
did not apply this window when calculating rotations and translations in the comparison section.
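For reference, the optional windowing of image one amounts to a few lines of Python (our helper, using NumPy's Hann window):

import numpy as np

def hann_window(img):
    # Taper the image so pixel values fall smoothly toward zero at the
    # boundaries, after subtracting the mean intensity.
    w = np.outer(np.hanning(img.shape[0]), np.hanning(img.shape[1]))
    return (img - img.mean()) * w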
When we calculate the polar transform of the magnitude spectrum, we use bicubic interpolation to reduce the appearance of
edges between neighboring pixels in the polar transform representation. Previous work uses a log-polar transform at this step,
allowing simultaneous calculation of rotation and scale [11].
Since we will be applying a spatial frequency bandpass filter
to look at motion at a specific scale, we cannot simultaneously
calculate scale at this step. Therefore, we calculate a polar transform, not a log-polar transform, of each magnitude spectrum.
D. Bandpass Filter
As mentioned earlier, the "rotation" and "translation" that we are characterizing are not direct properties of individual components of the cell but of an emergent configuration at a particular
spatial scale. Therefore, to calculate the angle of what we are
defining as “rotation” of the cell, we apply a bandpass filter to
select the spatial frequency range corresponding to the scale
at which rotation occurs. Conveniently, one of the axes—the
vertical axis here—of the polar transform of the magnitude
spectrum corresponds to spatial frequency. We can perform
the filtering simply by taking a horizontal slice across this
representation of the image, as depicted in Fig. 6. Furthermore,
as we will only be using a slice of the polar transform, that is the
only portion of the polar transform we need to calculate. This
speeds up the method significantly, as the polar transform is the
most computationally expensive step in our implementation.
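Because spatial frequency is one axis of this representation, the bandpass filter reduces to an array slice; a sketch (the radius bounds standing in for the chosen passband):

def bandpass_slice(polar_spectrum, r_lo, r_hi):
    # Keep only the rows whose radii (spatial frequencies) lie within the
    # passband; computing only these rows of the polar transform also
    # avoids the cost of the full transform.
    return polar_spectrum[r_lo:r_hi, :]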
We only compute the left half of this slice (angles 0 through π) because the area π through 2π is a repeat of 0 through π, due to the conjugate symmetry of the Fourier transform of a real image [11]. This means that after calculating the angle of rotation θ, the calculation of translation must be computed for the template rotated by both θ and θ + π (the cross correlation with the larger maximum value corresponds to the actual rotation angle). If, on the other hand, it can be assumed that the rotation angle lies within certain limits, it may not be necessary to test both θ and θ + π. For our cells we can safely assume that −π/2 < θ < π/2.

Fig. 7. Calculation of translation. We rotate the masked image zero (shown before and after rotation) by our calculated rotation angle of the cell, such that the cell has the same orientation in each image (note that image 1 is shown after windowing). We apply a spatial frequency bandpass filter to the rotated, masked image zero and to the windowed image one; then we calculate the cross correlation. The centroid of the correlation peak corresponds to the translation of the cell between the images (note that, for simplicity, the bandpass filtering here is equivalent to that used in the calculation of rotation; the infinitely steep frequency cutoffs of that filter are responsible for the ringing artifacts in the filtered images).
We do not apply an additional window to the bandpass filtered polar transform of the magnitude spectrum before calculating the cross correlation. This signal is periodic in the angle
dimension, so there is no discontinuity across the vertical (left
and right) edges. The discontinuity across the horizontal edges
has the effect of constraining the maximum of the cross correlation to lie along the line corresponding to no spatial frequency
displacement. This is appropriate for our purposes, as we only
want the displacement in angle (rotation) at this step.
E. Cross Correlation
In previous applications of this approach to image registration, phase correlation has been found to give a narrower
peak than correlation as computed here [11], [13], [14], [16].
The phase correlation technique assumes that images f and g
differ only by translation. Under that assumption, their phase
correlation (the inverse Fourier transform of the cross-power
spectrum) will have a peak at the displacement corresponding
to the relative translation between the images, and will be near
zero everywhere else [14]. However, our filtered polar-transformed magnitude spectra (Fig. 6) differ by significantly more
than translation. We found that the phase correlation of these
representations failed to reliably produce a peak corresponding
to the relative displacement between them (Fig. 2). For our
similar, but not identical, images, standard cross correlation is
required.
The cross correlation is performed using (1): the complex conjugate of the Fourier transform of the filtered, polar-transformed
magnitude spectrum of the template image is multiplied by the
Fourier transform of the filtered, polar-transformed magnitude
spectrum of image one. The inverse Fourier transform of this
product gives the correlation. The resulting cross correlation is
shown in Fig. 6.
For a precise calculation of rotation angle, we desire a subpixel location of the cross-correlation peak. We obtain this by
computing a centroid. Each above-threshold pixel in a neighborhood centered at the maximum pixel is weighted in the centroid calculation by the difference between the pixel value and
the threshold. In practice this procedure gives better localization
than the nearest integer value. However, it should be noted that
as this is a discrete case, the Fourier shift theorem does not hold
exactly for subpixel shifts.
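A Python sketch of this subpixel localization (our helper; the neighborhood radius and threshold fraction are assumptions, as we do not fix numeric values here):

import numpy as np

def subpixel_peak(xc, radius=2, thresh_frac=0.5):
    # Weight each above-threshold pixel near the maximum by the difference
    # between its value and the threshold, and return the weighted centroid.
    y0, x0 = np.unravel_index(np.argmax(xc), xc.shape)
    ylo, xlo = max(y0 - radius, 0), max(x0 - radius, 0)
    nb = xc[ylo:y0 + radius + 1, xlo:x0 + radius + 1]
    w = np.clip(nb - thresh_frac * xc[y0, x0], 0.0, None)
    ys, xs = np.mgrid[0:nb.shape[0], 0:nb.shape[1]]
    return ((w * ys).sum() / w.sum() + ylo,
            (w * xs).sum() / w.sum() + xlo)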
V. CALCULATION OF TRANSLATION
A. Rotation of Template Around Region Center
Having determined the angle the cell rotated between image
zero and image one, image zero can be rotated by that angle to
generate a new template image in which the cell has the same
orientation as it does in image one. Cross correlation can then
be used to compute the translation of the cell. However, the displacement that will be calculated depends on the center around
which image zero was rotated. In our current implementation,
we define the cell center as the centroid of the vertices of the
cell outline (originally represented as a polygon) in the initial
image, displaced according to the cell’s translations calculated
for the subsequent frames up to the current image zero. The cell
center does not need to be defined based on this criterion; other
possibilities include using the location of the cell nucleus, or,
if the cell has been imaged with a volume marker, the center
of volume. The center need not be computed automatically either, as it only needs to be defined in the first image of a sequence; it is subsequently updated with the recovered translation information.
The spatial transformation to rotate the masked image zero
around the specified center is then the composition of three
transformations: translation of an object located at the region
center to the origin, rotation by the angle calculated in the previous section, and translation from the origin back to the region center [20]. The transformation is then applied to the image
via an inverse mapping [2] with bicubic interpolation. By composing a single transformation we avoid resampling the image
more than once. Multiple rounds of resampling would be undesirable because they would introduce artifacts in the image,
which hampers accurate cross correlation of the transformed
image with image one. The rotated, masked image zero is shown
in Fig. 7.
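The composed rotation about a center can be sketched in Python with SciPy's affine_transform, which resamples once via an inverse mapping (our helper, not the published code; sign conventions depend on the image axis orientation):

import numpy as np
from scipy.ndimage import affine_transform

def rotate_about(img, theta, center, order=3):
    # Compose translate(center -> origin), rotate(theta), and
    # translate(origin -> center) into a single matrix and offset,
    # then resample the image once with cubic interpolation.
    c, s = np.cos(theta), np.sin(theta)
    M = np.array([[c, s], [-s, c]])  # inverse rotation in (row, col) axes
    center = np.asarray(center, dtype=float)
    offset = center - M @ center
    return affine_transform(img, M, offset=offset, order=order)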
B. Bandpass Filtering
We apply a spatial frequency bandpass filter when calculating
translation for the same reasons as when we calculate rotation.
When we calculated rotation we were working with a representation in which spatial frequency was an axis, and bandpass
filtering could be accomplished by slicing. This time we are
working in the spatial domain of the images, so filtering can be
performed by convolution in the spatial domain or multiplication in the frequency domain. We do the latter, simultaneously
with the cross-correlation calculation. There would be no additional computational cost to using a filter kernel without infinitely steep frequency cutoffs; however, for consistency in this
demonstration, we use a filter kernel which is spectrally equivalent to the earlier bandpass filtering operation. The infinitely
steep cutoffs produce the ringing artifacts that can be seen in
Fig. 7. These have no significant effect on the calculation of
translation.
C. Cross Correlation
The cross correlation, Fig. 7, is calculated from the product
of the Fourier transforms (1). We calculate the centroid of the
peak as before to get a translation displacement with subpixel
precision.
Fig. 8. Testing the result. Registering the first image to the second using the
calculated angle and displacement allows us to visually gauge the accuracy of
the computed values. Image zero is rotated and translated (a) such that the cell
has the same orientation and position as it does in image one (b). The average (c)
of these two images shows good overlap, indicating that the calculated rotation
and translation match the actual motion of the cell. The difference (d) reveals
that discrepancies (light or dark areas) are in regions of the cell that changed
shape or appearance.
At this point, we can assess how accurately the calculated
angle and displacement match the rotation and translation of
the cell from image zero to image one by registering image zero
such that the cell has the same orientation and position as it does
in image one. The registration of image zero is the composition of three spatial transformations: translation of the cell from
its center (as located in image zero) to the origin, rotation, and
translation from the origin to the location of the cell center in
image one. The registered image zero is shown together with the
original image one in Fig. 8. The average and difference images
in Fig. 8 reveal that the only differences are in areas in which the
cell shape changed slightly; the calculated rotation and translation closely match the cell’s overall motion.
VI. CELL FRAME OF REFERENCE
As mentioned in the Introduction, one of the motivations for
tracking a cell’s motion is to establish a mapping between the
stationary frame of reference of the lab and the moving frame
of reference of the cell. This allows us to transform measurements made in the lab reference frame to cell coordinates, or
to transform entire image sequences to the cell reference frame
for visualization purposes, such that the cell stays still and the
world travels by. This can aid in observing, for example, how
local interactions of one part of the cell with its environment
can then affect another part of the cell due to the cell’s motion:
In the cell reference frame, cell-substratum adhesions formed at
the front of the cell “travel” to the back of the cell where they
must be dismantled if the cell is to continue forward.
In order to generate a spatial transformation from an image in
the sequence to the cell frame of reference, it is necessary to know
the position and orientation of the cell in the image. This is simply
the orientation and position of the cell at the beginning of the sequence, rotated and translated according to the cell’s motion calculated above. Therefore, we need to define the cell’s position and
orientation in the initial image. We have already defined the position—the cell “center” we used as a center of rotation; but we still
need to know which direction is front or back: We need to define
an orientation, by manual or automatic means. In this implementation, we define the cell’s orientation in the initial image based on
the same outline we used to define a center. We compute a major
and minor axis by a principal components analysis of the vertices
of the outline (represented as a polygon). The orientation of the
minor axis (negated if necessary such that it is not against the direction of cell movement) is taken to be the orientation of the cell.
Having updated the position and orientation of the cell for the
current frame, given its calculated translation and rotation, we
generate the spatial transformation to the cell reference frame as
the composition of two transformations: translation of the cell
from its center in the lab reference frame to the origin, and rotation of the cell from its orientation in the lab frame of reference
to an orientation of zero. Optionally, the rotation can be to a different orientation; for visualization we typically rotate to π/2 so
the cell is pointed up. The final effect is a movie of cell motility
as it would be imaged by a camera poised above the center of
the cell and moving with it. Coordinates of a location in the lab
frame can be converted to the cell frame coordinates via a forward mapping, or the entire image can be transformed into the
cell frame using an inverse mapping. Fig. 9 shows the result of
transforming an image sequence such that the cell is centered
and pointed up in each image.
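The forward mapping of lab-frame coordinates into the cell frame can be written compactly (a sketch; points is an (N, 2) array and orientation is in radians):

import numpy as np

def lab_to_cell(points, center, orientation):
    # Translate the cell center to the origin, then rotate by -orientation
    # so the cell's orientation maps to zero in the cell frame.
    t = np.atleast_2d(points).astype(float) - np.asarray(center, dtype=float)
    c, s = np.cos(-orientation), np.sin(-orientation)
    R = np.array([[c, -s], [s, c]])
    return t @ R.T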
With this method, there are two possible approaches to calculating the rotation and translation a cell has undergone to reach
its orientation and position in a given frame. The strategy we
have described thus far involves calculating at each image the
rotation and translation relative to the previous image, and summing those rotations and translations to get the total change in
orientation and position relative to the initial image. The cumulative nature of this approach has a disadvantage in that small
errors in individual calculations can add up to give an overall
drift over long sequences, and a large error for a given image
will throw off all subsequent images. Another approach is to directly calculate the rotation and translation relative to the initial
image: using the initial image as image zero for each calculation.
However, since the calculation is based on correlation and the
cell’s appearance changes over time, this alternate strategy is unreliable for long intervals. A strategy that incorporates calculations over multiple interval lengths might combine the strengths
of both above approaches, reducing drift while still handling
changes in appearance. Periodically updating the mask may also
be appropriate if over time the cell’s shape becomes drastically
different from that of the mask.
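In the cumulative scheme, the per-interval results simply add; a sketch of the bookkeeping (variable names are ours):

import numpy as np

def accumulate_pose(increments, position0, orientation0):
    # increments: a sequence of (theta, dy, dx) tuples, one per interval,
    # from the two correlation stages. Small errors in each term
    # accumulate as drift over long sequences.
    position = np.asarray(position0, dtype=float)
    orientation = float(orientation0)
    track = [(position.copy(), orientation)]
    for theta, dy, dx in increments:
        orientation += theta
        position = position + np.array([dy, dx])
        track.append((position.copy(), orientation))
    return track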
Fig. 9. Cell frame of reference. We used the method presented here to track the cell's rotation and translation between images 10 s apart. Having defined the position and orientation in the initial image, we determine the orientation and position in each image by adding the cell's rotation and translation to the orientation and position in the previous image. We then rotate and translate each image such that the cell is centered in the image and oriented upward. See the supplementary material at http://ieeexplore.ieee.org for an animated version of this figure.

VII. COMPARISON

Are the position and orientation measures obtained by this method well suited to defining a cell frame of reference? How well does our correlation-based approach perform for this application next to other methods that might be used to track cell translation and rotation? We compared our procedure with
two other methods that might be used to compute a cell’s
position and orientation in phase-contrast microscopy image
sequences. We did not include in our comparison methods
based on thresholded regions or on moments of image intensity
distributions, even though they are most commonly used, because such methods are attached to a specific mode of imaging,
therefore imposing constraints such as the use of a fluorescent
volume marker. The methods selected below can be applied
to a fluorescent volume or membrane marker movie, or to
a phase-contrast movie, or can be adapted to other methods
that do not lend themselves to thresholding, such as Nomarski
or birefringence imaging. We implemented each approach in
MATLAB; computation times are reported for execution in
MATLAB 7.0.1 on a 2-GHz G5 processor.
A. Methods
The first method is based on feature extraction; it obtains the
cell position and orientation in each image from a polygonal cell
outline. The outline is generated by the same active contours approach as described above, manually initialized around the cell
in the initial image. In subsequent images, the snake is initialized with the final snake configuration from the previous image.
We define the cell’s position in each image as the centroid of
the outline vertices. For orientation, we employ the ellipticity
of the characteristic keratocyte shape: The cell has a major axis
perpendicular to the direction of motion, and a minor axis parallel to the motion. We therefore calculate a major and minor
axis of the outline by performing a principle components analysis on the coordinates of the outline vertices. The minor axis
(negated if necessary such that it is not opposite the direction of
motion) is then taken to be the cell’s orientation (note that this
is how we earlier defined the cell orientation in the initial image
of the sequence).
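The orientation step of this baseline reduces to a small eigenvector computation; a Python sketch (vertices is an (N, 2) array of outline coordinates):

import numpy as np

def outline_orientation(vertices):
    # Principal components analysis of the outline vertices: the
    # eigenvector with the smaller eigenvalue is the minor axis, which
    # for a keratocyte is parallel to the direction of motion. The sign
    # must still be chosen so it is not opposite the motion.
    centered = vertices - vertices.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    minor = eigvecs[:, 0]  # eigenvalues are returned in ascending order
    return np.arctan2(minor[1], minor[0])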
The second method is image registration by local iterative
optimization of an image similarity measure. We apply intensity scaling, masking, and bandpass filtering as we do for our two-stage cross-correlation strategy. Then, we minimize the sum
of squared pixel intensity differences between registered images as a function of rotation angle, horizontal displacement, and
vertical displacement. The optimization algorithm we use is a
large-scale trust-region method using preconditioned conjugate
gradients [21], [22]; it is provided by the “lsqnonlin” function in
MATLAB’s Optimization Toolbox. We found that on our keratocyte images this algorithm minimizes the squared differences
more successfully and with fewer iterations than the Optimization Toolbox’s available alternatives. However, for images taken
10 s apart, as we used in this comparison, the cell moves too
far between images to be found by local iterative optimization
without an initial estimate of rotation and translation. Therefore,
we perform the optimization first at half-resolution—at which
rotation and translation can be found successfully without an
initial guess—and then refine the half-resolution estimate by optimization at full resolution. For intervals of 20 s, not used in this
comparison, we found it necessary to start at quarter-resolution.
In addition, we present the results of using the transformation
returned by our cross-correlation method as an initial estimate
for further refinement by iterative optimization.
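The optimization baseline can be sketched in Python with SciPy as a stand-in for MATLAB's lsqnonlin (warp_fn is a hypothetical function that applies a candidate rotation and translation to image zero):

from scipy.optimize import least_squares

def refine_transform(p0, img0, img1, warp_fn):
    # Minimize the summed squared pixel differences over (theta, dx, dy),
    # starting from the estimate p0, using a trust-region method.
    def residuals(p):
        theta, dx, dy = p
        return (warp_fn(img0, theta, dx, dy) - img1).ravel()
    return least_squares(residuals, p0, method="trf").x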
B. Dataset
We apply each method to calculate the rotation and translation
of keratocytes over 10-s intervals in 87 movies. All cells were
imaged at a single magnification and under the same conditions.
Images are 512 × 512 pixels. If the microscope stage was moved
during a movie (to bring a cell that is moving out of the field of
view back into full view), then we only use images from before
the first repositioning of the stage. While our cross-correlation
method is global and can follow the cell as long as enough of
it is in the field before and after moving the stage, the local
iterative optimization cannot recover large movements of the
cell; nor can the feature extraction, as it uses the cell outline from
the previous image in finding the outline in the current image.
With this restriction, we obtained 731 intervals from the 87 image
sequences. Each method was applied to calculate cell rotation
and translation over each of these intervals.
C. Results
We compared the accuracy and computation time of each
method on each interval (Fig. 10). Here, accuracy is evaluated
by how well the rotation and translation returned by each
method minimizes the sum of squared pixel value differences
between the images flanking the interval. This is the same measure that the iterative optimization method seeks to minimize;
it is calculated after intensity scaling, masking, and bandpass
filtering. We show the distributions of sums of squared differences for each method; we include the data prior to registration
for reference.
Unsurprisingly, the feature extraction approach performs relatively poorly. The appearance of cells in phase-contrast images
is complex—modern transmitted light microscopy methods
are designed to reveal the slightest variations across otherwise
transparent cells—making it difficult to extract an outline with
consistency between images. Furthermore, the outline considers
only the cell margin, not the entire appearance of the cell. Motile
cells typically exhibit dynamics at the cell edge which are not
strongly coupled to the motion of the whole cell.
The accuracies of the iterative optimization approach and our
two-stage cross-correlation strategy are similar. Though the difference is small, our method achieves better results than the iterative optimization, as assessed by a paired Wilcoxon signed
rank test. Most likely, this is because the iterative optimization can get trapped in a local minimum. If we use our cross-correlation approach to calculate an initial transformation estimate, then iteratively optimize it, we get the best score overall [Fig. 10(a), far right], again by a paired signed rank test. Though consistent, it is still only a minor improvement over the cross-correlation approach alone.
The two-stage cross-correlation method we present in this
paper is by far the fastest. We show in Fig. 10(b) the amount of
time each method takes to calculate rotation and translation of a
cell between a pair of images, assuming the mask for image zero
has already been provided. Our method takes a median 1.67 s,
whereas the iterative optimization takes a median 52.57 s with
variation depending on the number of iterations. The median number of iterations is ten at half-resolution followed by eight at full resolution. The large computational expense of the iterative optimization method comes from resampling the image four times per iteration: to calculate the sum of squared pixel differences for the current transformation and the partial derivatives with respect to rotation, horizontal displacement, and vertical displacement.

Fig. 10. Comparison of methods for calculating cell rotation and translation. We compared the performance of our approach (labeled "cross-correlation") with a feature extraction method and an iterative optimization method for calculating rotation and translation over 731 intervals between images 10 s apart taken from movies of 87 keratocytes. We also show the results of using the transformation calculated by our two-stage cross-correlation method as an initial guess for refinement by iterative optimization. For each pair of images, the masked image zero is registered to image one (both after intensity scaling and bandpass filtering) using the calculated rotation and translation; we then sum the squared differences in pixel intensity values between the images. A smaller difference between the registered images suggests a more accurate calculation of rotation and translation. The distributions of sums of squared differences before and after registration with transformations calculated by each method are shown as box-and-whisker plots (a). We also report the time it takes to compute rotation and translation between each pair of images (b). Boxes indicate the spread from the bottom quartile (25th percentile) to the top quartile (75th percentile). Medians are indicated by the filled circles and adjacent text annotations. Whiskers extend to the first and 99th percentiles; data points in the first and 99th percentiles are displayed individually.

VIII. CONCLUSION AND DISCUSSION

Our application of this modified correlation-based approach successfully tracks the rotation and translation of moving keratocytes based on image intensity, without an iterative search, without extracting features, and despite the complexities of a cell's appearance in a phase-contrast image. The method does not require that the images show clearly defined cell edges that can be extracted computationally, or that the distributions of image intensity have a specific relationship to the shape of the object; therefore, it can be applied to a variety of modes of imaging. The requirement is simply that the motion of interest is manifested in the movement of the intensity distribution.

A. Generality
Being able to track a cell’s translation and rotation without
requiring that it be imaged with a fluorescent volume marker
opens significant new experimental opportunities. Fluorescence microscopy setups for live-cell imaging are limited in
how many channels can be acquired in a single experiment.
If an experiment calls for simultaneous or near-simultaneous
observation of different labeled molecules or structures, the
limit might be reached before a volume marker is added to
the list. This can be due to limitations of the equipment, the
need for sufficient separation between emission spectra of
the fluorophores being used, sensitivity of the cells to certain
wavelengths of light, or photo-reactivity of drugs with which
the cells are being treated. In other situations, the experimental requirements accommodate imaging a volume marker, but the cells must be observed for so long (to capture a rare event, for example) that photobleaching of the volume marker becomes a substantial problem. Transmitted-light imaging (such as
phase-contrast or Nomarski microscopy) is already frequently
combined with fluorescence imaging of specific molecular
markers in live-cell imaging experiments, and is used by investigators to evaluate the overall behavior of the cell during the
time of observation. In comparison with the use of a fluorescent
volume marker, transmitted-light imaging is less likely to conflict with other experimental requirements (especially if the illumination is not intense), and it is not subject to bleaching. It is also less likely to be prohibited by equipment limitations, provided the microscope has a transmitted-light path.
B. Rigid and Nonrigid Transformations
To extend the correlation-based strategy, previously applied to images that differ by a rigid transformation, to our nonideal application (cells with similar appearances from image to image, whose movement resembles rigid motion), it was necessary to specify, by spatial-frequency bandpass filtering, the scale at which we wanted to approximate the motion as rotation and translation. In contrast to the straightforward selection of the spatial region and pixel intensity range containing the cell, the choice of an appropriate frequency range for the bandpass filter is not obvious. Empirically, we found a passband that was suitable for a collection of cells imaged under the same conditions; this spatial frequency range corresponds to variations with periods between 2 and 6 μm. However, a change in imaging mode or cell type will likely require a corresponding change in frequency range. In the future, we hope to automate the determination of the optimal spatial frequency range.
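As an illustration, the sketch below expresses such a passband as an annular mask in the Fourier domain, converting the 2–6 μm period range into frequency cutoffs given the pixel size. The hard-edged (ideal) profile, the function names, and the pixel_size_um parameter are our assumptions; the paper does not specify how its bandpass filter is implemented.

    import numpy as np

    def bandpass_mask(shape, pixel_size_um, period_um=(2.0, 6.0)):
        """Boolean frequency-domain mask passing spatial periods between
        period_um[0] and period_um[1] (here, 2-6 um). Frequencies are in
        cycles per micrometer, so a 6 um period sets the low cutoff."""
        fy = np.fft.fftfreq(shape[0], d=pixel_size_um)
        fx = np.fft.fftfreq(shape[1], d=pixel_size_um)
        r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
        lo, hi = 1.0 / period_um[1], 1.0 / period_um[0]
        return (r >= lo) & (r <= hi)

    def bandpass(image, pixel_size_um):
        """Apply the annular mask multiplicatively in the Fourier domain."""
        mask = bandpass_mask(image.shape, pixel_size_um)
        return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

In practice, pixel_size_um would come from the microscope's spatial calibration, and a soft-edged rolloff (a Gaussian band, for example) might be preferable to the ideal mask shown here, to reduce ringing.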
This discussion of approximating nonrigid transformations as
rigid raises the question of whether it is even appropriate to summarize cell dynamics as rigid motion. As we pointed out early
on, excessive simplification of complex phenomena can sacrifice biologically relevant details. On the other hand, some degree of reduction is a prerequisite to interpretation. In modern biological imaging, the central challenge of quantitative analysis is not converting information to a quantitative form (the image data are already quantitative) but making sense of the overwhelmingly rich information.
We view this characterization of movement as rotation and
translation as a first step, and as a basis for further analysis of
cell motion. The observed changes include nonrigid transformations as well as changes in appearance not due to spatial transformations (or at least not due to any spatial transformations on
a scale large enough to be captured by the imaging method).
Having separated out the overall movement of the cell, we can
further deconstruct the remaining dynamics. We can also interpret observations and measurements in the context of locations
in or paths traversed through the cell. In the case of the actin
cytoskeleton in keratocytes, different remodeling processes are
localized to specific regions of the moving cell, but the filament
meshwork near the front of the cell is nearly stationary relative
to the substratum—which means it is moving in the frame of
reference of the cell [17]. The architecture at a specific point in
the meshwork depends on where in the cell that portion of the
meshwork has been. How that architecture will affect the cell’s
movement depends on where that portion of the meshwork is
going. The spatiotemporal mapping between the frame of reference of the actin meshwork and that of the cell is intimately related to the organization of motility processes and the feedback
between them. Comparing or connecting the mappings (rigid or
nonrigid) between frames of reference derived from simultaneously acquired channels has the potential to help illuminate the
coordination between the underlying biological processes.
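As a sketch of what such a mapping can look like in practice, the code below composes per-interval rotation and translation estimates into a per-frame pose for the cell, and uses that pose to carry a point from the stationary frame of the microscope into the cell's moving coordinate system. The composition and sign conventions, and all of the names, are illustrative assumptions rather than a description of our implementation.

    import numpy as np

    def rot(theta):
        """2-D rotation matrix for angle theta (radians)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def accumulate_poses(intervals, phi0=0.0, pos0=(0.0, 0.0)):
        """Turn per-interval (rotation, translation) estimates into the
        cell's orientation and position in every frame of the movie.
        Assumes rotations add and lab-frame displacements sum directly."""
        phi, pos = phi0, np.asarray(pos0, float)
        poses = [(phi, pos.copy())]
        for theta, t in intervals:
            phi += theta
            pos = pos + np.asarray(t, float)
            poses.append((phi, pos.copy()))
        return poses

    def lab_to_cell(point, phi, pos):
        """Map a point from the stationary (lab) frame into the cell's
        moving frame: translate to the cell position, then undo rotation."""
        return rot(-phi) @ (np.asarray(point, float) - pos)

Under such a mapping, a feature that is stationary relative to the substratum, like the actin meshwork near the front of the cell, traces a moving path through the cell's coordinate system, as discussed above.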
ACKNOWLEDGMENT
The authors would like to thank Z. Pincus and K. Keren for
critical reading of the manuscript. This paper has supplementary
downloadable material available at http://ieeexplore.ieee.org,
provided by the authors. This includes three QuickTime movies,
which show animated versions of Figs. 1, 4, and 9. This material
is 49 MB in size.
REFERENCES
[1] S. M. Rafelski and J. A. Theriot, "Crawling toward a unified model of cell motility: Spatial and temporal regulation of actin dynamics," Annu. Rev. Biochem., pp. 209–239, 2004.
[2] B. Jähne, Practical Handbook on Image Processing for Scientific and Technical Applications, 2nd ed. Boca Raton, FL: CRC, 2004, pp. 271–272.
[3] F. S. Soo and J. A. Theriot, "Large-scale quantitative analysis of sources of variation in the actin polymerization-based movement of Listeria monocytogenes," Biophys. J., vol. 89, pp. 703–723, 2005.
[4] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 2002, pp. 194–211.
[5] R. N. Bracewell, The Fourier Transform and Its Applications, 2nd ed. New York: McGraw-Hill, 1986.
[6] B. Zitová and J. Flusser, "Image registration methods: A survey," Image Vis. Comput., vol. 21, pp. 977–1000, 2003.
[7] F. Maes, D. Vandermeulen, and P. Suetens, "Medical image registration using mutual information," Proc. IEEE, vol. 91, no. 10, pp. 1699–1721, Oct. 2003.
[8] B. F. Hutton and M. Braun, "Software for image registration: Algorithms, accuracy, efficacy," Semin. Nucl. Med., vol. 33, pp. 180–192, Jul. 2003.
[9] P. Thévenaz, U. E. Ruttimann, and M. Unser, "A pyramid approach to subpixel registration based on intensity," IEEE Trans. Image Process., vol. 7, no. 1, pp. 27–41, Jan. 1998.
[10] D. Casasent and D. Psaltis, "Position, rotation, and scale invariant optical correlation," Appl. Opt., vol. 15, no. 7, pp. 1795–1799, Jul. 1976.
[11] B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Trans. Image Process., vol. 5, no. 8, pp. 1266–1271, Aug. 1996.
[12] Y. Sheng and H. H. Arsenault, "Experiments on pattern recognition using invariant Fourier-Mellin descriptors," J. Opt. Soc. Amer. A, vol. 3, no. 6, pp. 771–776, Jun. 1986.
[13] Q. Chen, M. Defrise, and F. Deconinck, "Symmetric phase-only matched filtering of Fourier-Mellin transforms for image registration and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 12, pp. 1156–1168, Dec. 1994.
[14] S. Ertürk, "Translation, rotation and scale stabilization of image sequences," Electron. Lett., vol. 39, no. 17, pp. 1245–1246, Aug. 2003.
[15] E. DeCastro and C. Morandi, "Registration of translated and rotated images using finite Fourier transforms," IEEE Trans. Pattern Anal. Mach. Intell., vol. 9, no. 5, pp. 700–703, Sep. 1987.
[16] J. L. Horner and P. D. Gianino, "Phase-only matched filtering," Appl. Opt., vol. 23, no. 6, pp. 812–816, Mar. 1984.
[17] J. Lee, A. Ishihara, J. A. Theriot, and K. Jacobson, "Principles of locomotion for simple-shaped cells," Nature, vol. 362, pp. 167–171, Mar. 1993.
[18] C. Xu and J. L. Prince, "Snakes, shapes, and gradient vector flow," IEEE Trans. Image Process., vol. 7, no. 3, pp. 359–369, Mar. 1998.
[19] S. L. Marple, Digital Spectral Analysis: With Applications. Englewood Cliffs, NJ: Prentice-Hall, 1987, pp. 136–144.
[20] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes, Computer Graphics: Principles and Practice. Boston, MA: Addison-Wesley, 1996, pp. 208–210.
[21] T. F. Coleman and Y. Li, "An interior trust region approach for nonlinear minimization subject to bounds," SIAM J. Optim., vol. 6, no. 2, pp. 418–445, May 1996.
[22] T. F. Coleman and Y. Li, "On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds," Math. Program., vol. 67, no. 2, pp. 189–224, 1994.

Cyrus A. Wilson received the B.S. degree in molecular biophysics and biochemistry from Yale University, New Haven, CT, in 2000. He is currently pursuing the Ph.D. degree in biochemistry at Stanford University, Stanford, CA.
His research interests include quantitative cell biophysics, image analysis, modularity and abstraction in complex biological systems, and biological representations of information.
Mr. Wilson is a member of the Biophysical Society and the American Society for Cell Biology.
Julie A. Theriot received concurrent B.S. degrees in
physics and biology from the Massachusetts Institute
of Technology, Cambridge, in 1988, and the Ph.D. in
cell biology from the University of California, San
Francisco, in 1993.
She has been on the faculty of the Stanford
University School of Medicine, Stanford, CA, since
1997, with joint appointments in the Department of
Biochemistry and the Department of Microbiology
and Immunology. Previously she was a Whitehead
Fellow at the Whitehead Institute for Biomedical
Research. Her research interests include cell motility, protein polymerization
and large-scale self-organization in the cytoskeleton, cell shape determination,
and the cell biology of bacterial infection.
Dr. Theriot is a member of the American Society for Cell Biology, the American Society for Microbiology, the Biophysical Society, and the American Physical Society.