
Detection of Biological Cells in Phase-Contrast Microscopy Images
F. Ambriz-Colı́n∗
M. Torres-Cisneros
J. G. Aviña-Cervantes
J. E. Saavedra-Martı́nez∗
Universidad de Guanajuato
Facultad de Ingenierı́a Mecánica, Eléctrica y Electrónica
Tampico 912, Salamanca, Guanajuato, México
[email protected], avina/[email protected], [email protected]
O. Debeir
Université Libre de Bruxelles
Information and Decision Systems
50, Av. F.Roosevelt, 1050 Bruxelles, Belgium
[email protected]
J.J. Sánchez-Mondragón
Photonics and Optical Physics
INAOE, Tonantzintla, Puebla, 72000, México
[email protected]
Abstract
In this paper, we propose an automatic method for cell detection and cell-migration tracking, intended to analyze cell behavior under different conditions. The images were obtained by phase-contrast video microscopy. The proposed method first normalizes the original images to increase their contrast; a classification process based on a variance operator then labels each pixel as cell or background. Each detected cell is associated with its centroid, which initializes the tracking procedure used to quantify the migration process. The technique is a fast way to describe cell migration and remains robust to cell contacts and mitosis along the whole trajectories.
1 Introduction
The quantification and measurement of cell displacement is a useful parameter in many biomedical applications, such as the study of cell migration and its variations under different conditions, for example under the influence of drugs. The experiments were performed in vitro; the cell images come from standardized cancer cell lines.
Several techniques for supervised or computer-assisted tracking (semi-automatic or automatic) have been investigated by many authors [4, 3, 8, 9, 1, 2]. Nevertheless, when the number of cells or the tracking period (e.g., dozens of hours) grows in order to obtain more reliable statistical parameters, supervised tracking becomes difficult and tedious. Therefore, an automatization
∗ M. Eng. Students
Figure 1. Image of Cells by Phase-Contrast Video Microscopy
is strongly demanded. Unfortunately, fluorescent markers could not be used (for biological reasons not discussed here); the cell images were therefore acquired by phase-contrast video microscopy, which yields poor-quality images due to the nature of the biological material.
Another difficulty in cell detection is the extraction of cell contours. Cells are constantly moving and changing shape (i.e., evolving or migrating), and when they are too close to each other the detection system may detect them as a single object. This is the main reason why image processing and cell detection cannot rely on a single method; many combinations of methods must be tested until the best result for the application is obtained.
In these cellular images, the gray level intensity of the background and that of the cells (foreground) are almost the same; this low contrast makes the detection task difficult. Thus, methods that segment by gray level intensity (thresholding, region growing, etc.) give poor performance. Border detection can also be a problem, because the cells are surrounded by bright halos that often exhibit incomplete contours, and the cell membranes can be confused with the background.
The proposed method identifies cells in phase-contrast video microscopy automatically: in a first stage the cells are detected and a set of cell centroids is computed. This detection provides the initialization for the tracking procedure.
This paper is organized as follows. Section 2 discusses the methods used to detect the cells. Section 3 presents the cell tracking, and experimental results are presented in section 4.
2 Cell Detection
Some previous works [4] reported tracking procedures initialized in a human-assisted way, i.e., a specialist clicks on the center of each cell (the cell soma) and the system then tracks it through the sequence of frames. These systems are called semi-automatic because they require this initialization step. A monochromatic image obtained by phase-contrast video microscopy is shown in figure 1.
Figure 2. Equalized image (obtained from the image in figure 1)
We aim to develop an automatic biological cell detection method by combining several techniques. The detection system must be able to detect all cells, regardless of shape, size, or other variables that could affect the detection. For these reasons, we combine and test the best existing methods.
Histogram equalization was tested first. This method increases the contrast of images whose gray level intensities are very concentrated (figure 4(a)), but it does not provide good results here because it also amplifies the image noise; in figure 2 this phenomenon is visible in the background of the image.
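For illustration, a minimal sketch of this histogram-equalization test is given below; it assumes scikit-image is available and that one frame has been saved to disk (the file name cells_frame.png is hypothetical, not part of the original work).

```python
import numpy as np
from skimage import io, exposure

# Load one phase-contrast frame as a gray-scale float image in [0, 1].
# The file name is a placeholder for one frame of the video sequence.
image = io.imread("cells_frame.png", as_gray=True)

# Global histogram equalization: spreads the concentrated gray levels,
# but it also amplifies the background noise, as observed in figure 2.
equalized = exposure.equalize_hist(image)
```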
In addition, segmentation methods such as pixel clustering were applied to our images in order to extract some structure from them. Basically, segmentation should delimit the background so that the cells can be separated from it. However, the similarity between the gray level intensity of the background and that of the cells makes it difficult to obtain a good classification of the pixels (see figure 3 for an unsatisfactory segmentation).
Three classes were used in the segmentation, because we want to separate the background from the cells and we consider that the images have three main components: the background, the cell bodies, and the bright halos around the cells. As a rule, neither adding nor removing classes improves the results. In the first case (adding classes), the net effect is to subdivide the background into more regions (over-segmentation); in the second case (reducing the number of classes), the background, the cell bodies, and the background immediately around the cells are merged into a single class (under-segmentation).
In figure 3 the components of the image are easily observed. Figure 3(a) shows the result of k-means classification, with one tonality per class so that the classes can be distinguished. In figure 3(b) the detected pixels (white pixels) belong to the bright halos around the cells (characteristic of phase-contrast video microscopy images); in figure 3(c) the detected pixels (gray pixels) correspond to the background, but because of the intensity similarity between cells and background, the cell bodies were also assigned to the background; and in figure 3(d) the detected pixels (gray pixels) are part of the background immediately around the cell bodies.
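As a rough illustration of this clustering step, the sketch below groups the pixel intensities into three classes with k-means; it uses scikit-learn as one possible implementation (the paper does not specify one), and the parameter values are our own assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(image, n_classes=3):
    """Cluster gray-level intensities into n_classes and return a label image."""
    samples = image.reshape(-1, 1).astype(np.float64)
    kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(samples)
    return kmeans.labels_.reshape(image.shape)

# labels = cluster_pixels(normalized_image)
# Each class (background, cell bodies, bright halos) can then be displayed
# as a separate binary mask, as in figures 3(b)-(d).
```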
Consequently, a method based on computing the pixel intensity variance [3] is proposed. Using this technique, approximate regions are obtained automatically; each region may contain a single cell or several connected cells. Afterwards, each cell is characterized by its centroid, and that point is used
Figure 3. Clustering segmentation method: (a) clustered image (3 clusters), and single-cluster images in (b) class 1, (c) class 2, (d) class 3
to describe the trajectory of a specific cell between frames of the video sequence.
Another method that was reviewed is based on morphological operations. Its cell detection performs well, in particular regarding computational time, and it does not lose pixels at the image border, unlike the method based on the pixel intensity variance, which detects cells well but loses a border of the image.
2.1 Computing the Pixel Intensity Variance
The system works with monochromatic (gray-scale) images of 700 × 500 pixels; a typical image with biological cells is shown in figure 1. The detection of cells is performed in two steps. The first step adjusts the image by increasing its contrast so that as much information as possible can be extracted, which improves the efficiency of the cell detection. The second step detects the probable area covered by each cell, or group of cells if they are too close to each other, in order to focus the following operations on these localized regions.
2.1.1 First step: Image normalization
To complete this task, the first image was taken and its gray level histogram was analyzed to detect its most important features. The intent is to adjust the histogram so as to increase the difference between the gray level intensity of the cells and that of the background, making the detection easier. The histogram in figure 4(a) shows the low contrast of the images and how close the pixel gray levels are to each other; the result of the adjustment is shown in figure 4(b), where the full range of gray levels is covered. Image normalization is done by the following equation,
A(i,j) = \frac{I(i,j) - \min(I)}{\max(I) - \min(I)} \cdot I_{\max} \qquad (1)
where A(i, j) is the new value of image I at pixel (i, j) after adjustment by equation (1), max(I) is the highest pixel value in image I, min(I) is the lowest pixel value in image I, and I_max is fixed to 255.
Figure 4. (a) Original Histogram, (b) Normalized Histogram
Figure 5. (a) Normalized image, (b) image of normalized variance, (c) binary image of the variance, (d) image of contours (outlines)
Equation (1) spreads out the gray level intensities of the image. If I(i, j) equals min(I), then A(i, j) is 0, which moves the lowest value of I(i, j) to the bottom of the gray scale. Likewise, if I(i, j) equals max(I), then A(i, j) is 255, which moves the highest value of I(i, j) to the top of the gray scale; values of I(i, j) between min(I) and max(I) receive intermediate values according to equation (1).
Indeed, equation (1) increases the contrast between figure 1 and figure 5(a): the bright halos around the cells are now more visible and the cell bodies are better isolated from the background.
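A direct sketch of equation (1) is shown below; it is a straightforward min-max rescaling with I_max = 255 and assumes the image is provided as a NumPy array.

```python
import numpy as np

def normalize(image, i_max=255.0):
    """Spread the gray levels of `image` over the full [0, i_max] range (equation (1))."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    if hi == lo:                      # flat image: nothing to spread
        return np.zeros_like(image)
    return (image - lo) / (hi - lo) * i_max
```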
2.1.2 Second step: Pixel classification
The adjusted image is used to obtain the regions containing the cells and their background neighborhood. This is done by applying a global threshold to the local variation of the intensity image. A measure of the local intensity variation is provided by a second-order statistic, the variance of the gray level intensity. The variance is computed at each pixel over a square mask centered on that pixel,
\sigma^2(i,j) = \frac{1}{W^2} \sum_{k=i-M}^{i+M} \sum_{l=j-M}^{j+M} \left[ I(k,l) - \mu(i,j) \right]^2 \qquad (2)
where I(i, j) is the gray level intensity at pixel (i, j), W is an odd integer denoting the width
Figure 6. Image showing the result of the detection method
of the mask, M = (W − 1)/2, and µ(i, j) is the mean intensity value within the mask, calculated by
\mu(i,j) = \frac{1}{W^2} \sum_{k=i-M}^{i+M} \sum_{l=j-M}^{j+M} I(k,l) \qquad (3)
Once the variance has been computed according to equation (2), the result is normalized to the [0, 255] interval using equation (1), see figure 5(b). The next step consists in converting the normalized image into a binary image using a threshold of 0.5, chosen according to our experiments, see figure 5(c) and equation (4). The cells are segmented from the binary image, the outlines (contours) of the regions containing the cells are extracted, see figure 5(d), and finally the outlines are overlaid on the original image as a mask, see figure 6,
B(i,j) = \begin{cases} 1, & \dfrac{A(i,j) - \min(A)}{\max(A) - \min(A)} > T \\ 0, & \text{otherwise} \end{cases} \qquad (4)
where B(i, j) is the binary pixel value in the image and T is the threshold value.
Finally, the segmented cell pixels are those contained in some outlined area of the image, determined from the binary image, see figure 5(c). All filled areas are labeled in order to distinguish pixels belonging to an object (foreground) from those belonging to the background. The segmented region is only approximate, however, because it is formed by a cell or a small group of cells, a small part of the background around the cells, and the bright halo.
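The whole classification step can be summarized by the sketch below: the local mean and variance of equations (3) and (2) are computed with a 15-pixel square window, the variance is rescaled and thresholded at 0.5 as in equation (4), and the connected regions are labeled. The SciPy calls are our own implementation choice; note that uniform_filter pads the border instead of shrinking the image, which differs slightly from the border loss described in section 4.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def detect_cells(image, window=15, threshold=0.5):
    """Label the probable cell regions of a normalized gray-scale image."""
    image = image.astype(np.float64)
    mean = uniform_filter(image, size=window)          # mu(i, j), equation (3)
    mean_sq = uniform_filter(image ** 2, size=window)
    variance = mean_sq - mean ** 2                     # sigma^2(i, j), equation (2)
    lo, hi = variance.min(), variance.max()
    normalized = (variance - lo) / (hi - lo)           # rescale to [0, 1]
    binary = normalized > threshold                    # equation (4)
    labels, n_regions = label(binary)                  # one label per outlined area
    return labels, n_regions
```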
2.2 Gray Level Morphological Gradient
This method uses morphological operations applied to gray-scale images, see figure 1. There are two basic morphological operations, dilation and erosion. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels from object boundaries; both operations are based on a structuring element, which can have any geometric shape.
2.2.1 Generalized Dilation
Generalized dilation is expressed symbolically as [7]
Figure 7. (a) Dilation of figure 1, (b) erosion of figure 1, (c) morphological gradient of figure 1, (d) outlined image showing the detection procedure
G(j,k) = F(j,k) \oplus H(j,k) \qquad (5)
where F(j, k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j, k), for 1 ≤ j, k ≤ L with L an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F and H are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The dilation was performed with a circular structuring element of radius 15 pixels [4], see figure 7(a).
2.2.2 Generalized Erosion
Generalized erosion is expressed symbolically as [7]
G(j,k) = F(j,k) \ominus H(j,k) \qquad (6)
where H(j, k) is again an odd-size L × L structuring element.
This relation means that the erosion of F(j, k) by H(j, k) is the intersection of all translates of F(j, k) in which the translation distance is given by the row and column indices of the pixels of H(j, k) that are in the logical 1 state. The erosion was performed with the same structuring element as the dilation, a circular structuring element of radius 15 pixels, see figure 7(b).
2.2.3 The Gray Level Morphological Gradient
The gray level morphological gradient [7] is computed from the dilation and erosion images (figures 7(a) and (b)); its equation is
\mathrm{Gradient}(j,k) = \frac{1}{2}\left( D_G(j,k) - E_G(j,k) \right) \qquad (7)
where D_G(j, k) is the dilation of the gray-scale image and E_G(j, k) is its erosion. The result of the gray level morphological gradient is shown in figure 7(c), and the detected cells are shown in figure 7(d).
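A minimal sketch of this gradient computation is given below, using scikit-image with a disk of radius 15 pixels as the structuring element; the library choice is ours, and the threshold of 19 in the commented line anticipates the value reported in section 4.

```python
import numpy as np
from skimage.morphology import disk, dilation, erosion

def morphological_gradient(image, radius=15):
    """Gray level morphological gradient of equations (5)-(7)."""
    selem = disk(radius)                              # circular structuring element
    dilated = dilation(image, selem)                  # equation (5)
    eroded = erosion(image, selem)                    # equation (6)
    return 0.5 * (dilated.astype(np.float64) - eroded.astype(np.float64))   # equation (7)

# binary = morphological_gradient(image) > 19   # threshold from section 4
```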
Figure 8. (a) One cell outlined, (b) two cells outlined, (c) the same cells as in (b) but now separated
Figure 9. A small sequence of tracking
3 Cell Tracking
At the moment, the tracking system works over the whole detection sequence with both methods, pixel intensity variance and gray level morphological gradient: a centroid is placed inside each outlined object and used to follow the cell or cells, so that cell movements can be observed. These observations show that when cells are close to each other the detection algorithms take them as a single object, see figure 8(b): there is a single outlined area containing two cells, and in that case their centroids merge into one. The opposite case occurs when two or more cells that were outlined together begin to separate; the system then detects two or more objects and assigns different centroids, see figures 8(b) and (c). The same happens when a mother cell divides into two daughter cells, where one centroid must be split into two, one per cell, or when a group of cells separates.
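The initialization of the tracking can be illustrated as below: centroids are extracted from the labeled detection mask of each frame, and each previous centroid is matched to the nearest current one. This is only a sketch under our own assumptions (simple nearest-neighbour matching); the handling of merges, splits, and mitosis described above would require additional logic.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def region_centroids(binary_mask):
    """Centroid (row, col) of every outlined object in a binary detection mask."""
    labels, n = label(binary_mask)
    return np.array(center_of_mass(binary_mask, labels, range(1, n + 1)))

def associate(prev_centroids, curr_centroids):
    """Match each previous centroid to the closest current centroid."""
    matches = []
    for i, p in enumerate(prev_centroids):
        distances = np.linalg.norm(curr_centroids - p, axis=1)
        matches.append((i, int(np.argmin(distances))))
    return matches
```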
4 Experimental Results
Using the pixel intensity variance method, the images (see figure 1) were processed with different mask sizes (see equation (2)); our experiments show that the optimal mask size is 15 pixels, which agrees with the results obtained in [4]. Different thresholds were also
tried to convert the variance image σ(i, j), see figure 5(b), into a binary image, see figure 5(c); our results show that the optimal threshold was 0.5. These parameters were changed many times and the results were observed and compared. The experiments show that if the mask size increases, the computational time increases, and the detected area grows as well, taking in more background around the cell or cells; a further negative effect is that cells smaller than the mask can be ignored by the method, i.e., the smallest cells may not be detected.
The resulting image is smaller than the original image; the reduction depends on the mask size. In our case, with a mask width of 15 pixels, 7 pixels are lost on each border of the image (i.e., the original image is 700 × 500 pixels and the resulting image is 686 × 486 pixels).
With the gray level morphological gradient, on the other hand, the images are processed without any change in the size of the resulting images; this can be an advantage because nothing is lost at the borders, and the detection performance is satisfactory. For the dilation and erosion, the structuring element used is a disk of radius 15 pixels, consistent with the mask size of the pixel intensity variance method and with [4].
The threshold used to convert the gray level morphological gradient image into a binary image is set to 19, according to our experiments. This threshold could be made dynamic in order to improve the performance of the detection method.
5 Conclusions
A very important step in both methods is the normalization, because of the characteristics of the images and in order to save computational time, since the values resulting from the normalization lie in a fixed, known range.
The first proposed method, the pixel intensity variance, detects the probable regions that could contain a cell or cells and discriminates the rest of the image as background, without the assistance of a human expert, placing a centroid in each outlined area detected by the system. The idea is to save computational time so that the cell tracking application can run in real time, and to be able to apply the method to all kinds of cells, regardless of shape, size, etc. To cope with the case of cells that are close to each other and to construct trajectories, some adaptations must still be added.
The second method, the gray level morphological gradient, performs the cell detection essentially by using the background. The method separates the background from the probable areas that contain a cell or cells, so that cell tracking can be performed without regard to background, shape, size, etc.
6 Acknowledgements
We would like to thank the University of Guanajuato for the funding obtained through the projects ”Photonic band-gap enhanced second harmonic generation in planar lithium niobate waveguide (#64/2005)” and ”Diseño de un sistema robusto de estéreo-visión para aplicaciones de robótica
móvil y modelado de objetos 3D (#80/2005)”, and to CONCyTEG, PROMEP, and ALFA for the funding obtained through the projects (06-16-K117-31) ”Desarrollo de un sistema de percepción activa para la automatización de tareas en robots móviles y vehı́culos inteligentes”, ”PTC”, and ”Images: des β-Puces en Epidémiologie à la Chirurgie Assistée (IPECA)”, respectively. Also, Fernando Ambriz Colı́n and José Emanuel Saavedra Martı́nez would like to thank CONACYT for their master's scholarships.
References
[1] A. P. Goobic, J. Tang, and S. T. Acton. Image stabilization and registration for tracking cells in the microvasculature. IEEE Transactions on Biomedical Engineering, 52(2), February 2005.
[2] D. P. Mukherjee, N. Ray, and S. T. Acton. Level set analysis for leukocyte detection and tracking. IEEE Transactions on Image Processing, 13(4), April 2004.
[3] K. Wu, D. Gauthier, and M. D. Levine. Live cell image segmentation. IEEE Transactions on Biomedical Engineering, 42(1), January 1995.
[4] O. Debeir, P. Van Ham, R. Kiss, and C. Decaestecker. Tracking of migrating cells under phase-contrast video microscopy with combined mean-shift processes. IEEE Transactions on Medical Imaging, 24(6), June 2005.
[5] O. Debeir, I. Camby, R. Kiss, P. Van Ham, and C. Decaestecker. A model-based approach for automated in vitro cell tracking and chemotaxis analyses. 2004.
[6] D. Phillips. Image Processing in C.
[7] W. K. Pratt. Digital Image Processing. Wiley & Sons, Inc., 605 Third Avenue, New York, NY, 2001.
[8] N. Ray and S. T. Acton. Active contours for cell tracking. Fifth IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI'02), 2002.
[9] N. Ray and S. T. Acton. Motion gradient vector flow: an external force for tracking rolling leukocytes with shape and size constrained active contours. IEEE Transactions on Medical Imaging, 23(2), December 2004.
[10] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley Interscience, 605 Third Avenue, New York, NY, 2001.
[11] J. L. Semmlow. Biosignal and Biomedical Image Processing: MATLAB-Based Applications. Marcel Dekker, Inc.