
Computerized Classification of Color
Textured Perthite Images
Boaz Cohen1, Its’hak Dinstein1 and Moshe Eyal2
1
Electrical and Computer Engineering Department, 2 Geology Department
Ben Gurion University of the Negev
Beer Sheva, 84105, Israel.
E-mail: (boazc@newton, dinstein@bguee, moey@bgumail).bgu.ac.il
Abstract
A fast, reliable, and objective system for computerized
classification of color textured perthite images is
proposed. Computerized classification of perthite textures
enables large scale comparative perthite texture studies.
In order to locate a perthite crystal’s borders, color and
texture features are combined in a pixel classification
segmentation operation, followed by probabilistic
relaxation. A new operator based on vertical-to-horizontal
run-length ratios is used to identify and separate vein
elements. Connected patch elements are disconnected
using an iterative procedure that locates patch cores.
After the geometrical properties are extracted, the
desired distribution of the perthite texture elements is
computed. Experimental classification results are
presented and compared to expert manual classification.
1. Introduction
Classification of perthite textures helps geologists
distinguish between granitoid rocks and contributes to the
understanding of the development of the different perthite
textures [1]. The classification is conventionally
performed by hand, a time-consuming routine whose results
are highly influenced by the classifier. The system
described in this paper is relatively fast, reliable, and
objective, and enables large scale comparative studies of
perthite textures.
In the presented system, crystal borders are located by
a pixel classification segmentation method based on color
and texture features. Image segmentation methods are
surveyed in [2] and [3]. Various approaches to texture
analysis are reviewed in [4], [5], and [6]. In the proposed
system, segmentation results are further improved by
probabilistic relaxation as used in [7].
Kondepudy and Healey [8] define a color texture model
that is based on the three spatial correlations within and
between the color bands of the color textured image.
Panjwani and Healey [9] use Gaussian Markov Random
Field (GMRF) models for color textures, taking into
account the interaction between different color planes.
Tan and Kittler [10] propose to extract both spatial and
spectral attributes in order to represent the colored
texture. In [11], a color codebook is used to represent the
color aspect of textured images. Codebook prototypes
may be computed by the method presented in [12].
Once the crystal borders are located, the color image is
transformed into a binary one, and noise is reduced using
mathematical morphological operators [13]. Finally,
texture elements are separated with a new operator
proposed in this work, and classified.
___________________________________________________________________________
This work was partially supported by the Paul Ivanier Center for Robotics
and Production Automation, Ben Gurion University of the Negev, Beer
Sheva, 84105, Israel.
Figure 1. Perthite textures: (a) medium vein; (b) coarse patch; (c) vein to patch
2. Perthite crystal texture classes
The relationships between the perthite crystal phases
(albite and K-feldspar) form different kinds of textures,
defined by the shape and dimensions of the albitic elements [1].
The different texture classes are: fine, medium and coarse
veins, fine, medium and coarse patches and vein to patch.
As seen in Fig. 1, veins are narrow and long, patches
resemble blobs, and the vein to patch class contains both.
3. System description
The input color images are obtained from a color TV
camera mounted on a microscope. A thin section of a
granite rock containing the crystal of interest and part of
its neighbors is illuminated by polarized light. In the
input image, textures of different crystals appear with
different color combinations.
In the first stage, the segmentation stage, borders of the
crystal of interest are located using color and texture
features. In the second stage, the pre-classification stage,
the color image is transformed into a binary one, noise is
reduced, and the image is rotated so the general direction
of the vein elements is vertical. In the final stage, albite
elements are separated and classified.
4. Crystal border identification
4.1 Color and texture features
According to the color codebook approach [11], a
texture of index i is represented in terms of the following
codebook:

C_i = ( f(c_1), f(c_2), ..., f(c_L) )^T    (1)

where f(c_j) is the frequency of the color prototype c_j, and L
is the number of color prototypes in the codebook. The
color codebook may be considered as a vector whose
elements f(c_j) assume values in the range [0, N_img], with
Σ_j f(c_j) = N_img, where N_img is the number of pixels in the
input image.
Two texture representations may be compared by using
the angle θ between two vectors C_m and C_n as a simple
similarity measure, ρ_mn:

ρ_mn = 1 − (2/π) cos⁻¹( (C_m · C_n) / (‖C_m‖ ‖C_n‖) ) ∈ [0, 1]    (2)

where ‖C‖ is the vector length, and (C_m · C_n) is the scalar
product of the two vectors. Given this similarity measure,
two textures are similar in terms of color if ρ_mn is high.
In order to define the color prototypes, the
representational color quantization method introduced by
Scharcanski et al. [12] was used. The main idea is to
choose the smallest set of representative colors using the
concept of color tolerance volumes.
Since the color aspect of a colored texture is not
enough to characterize it, additional texture features are
derived. The texture features used in this work are
directional edge densities, namely, the number of edge
pixels in each of the four main directions (0, 45, 90, and
135 degrees) per area unit and the total number of edge
pixels in the same unit. The number of edge pixels
reflects the texture's coarseness, and the distribution of
edge-pixel directions characterizes its orientation.
4.2 Segmentation
The goal of the segmentation stage is to locate the
borders of the crystal of interest. The user is instructed to
mark a reference rectangle, as large as possible, that is
bounded inside the crystal of interest, as may be seen in
Fig. 2(a). We use the color and texture features extracted
from this rectangle as training data for the
characterization of the color texture of the crystal of interest.
Using the data from the reference rectangle, the texel
size is determined [14]. Then, color and texture features
are calculated around each pixel and compared to the
crystal's characteristics using (2). A distance threshold is
set as the lowest threshold that classifies at least 85%
of the pixels in the reference rectangle as belonging to the
crystal of interest [14]. Fig. 2(b) is an example of the
initial segmentation of a perthite crystal.
Figure 2. Segmentation example: (a) reference rectangle; (b) before the relaxation process; (c) after the relaxation process
4.3 Result improvement by probabilistic relaxation
As may be seen in Fig. 2(b), the results of the initial
segmentation usually require improvement. Probabilistic
relaxation is an iterative approach for using contextual
information to reduce local ambiguities. The implemented
probabilistic relaxation scheme is similar to the one used
by Hsiao and Sawchuk [7] and is described in detail in [14].
In Fig. 2(c), the segmentation improvement of the initial
segmentation example in Fig. 2(b) is shown.
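To illustrate the idea of contextual smoothing, the sketch below implements a much simplified two-label relaxation: each pixel's crystal probability is updated from the mean probability of its 4-neighbors. The actual scheme of [7] and [14] uses estimated compatibility coefficients; this is only an illustrative stand-in, and `relax_labels` is our name.

```python
def relax_labels(prob, iters=5):
    """Simplified probabilistic relaxation on a 2D grid of P(crystal) values.

    Each pixel's probability is reinforced by the average of its 4-neighbors,
    so isolated misclassified pixels are pulled toward their surroundings.
    """
    h, w = len(prob), len(prob[0])
    for _ in range(iters):
        nxt = [row[:] for row in prob]  # synchronous update
        for i in range(h):
            for j in range(w):
                nbrs = [prob[x][y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= x < h and 0 <= y < w]
                support = sum(nbrs) / len(nbrs)
                # multiplicative update, renormalized over the two labels
                p = prob[i][j] * support
                q = (1 - prob[i][j]) * (1 - support)
                nxt[i][j] = p / (p + q) if p + q > 0 else prob[i][j]
        prob = nxt
    return prob
```

A lone low-probability pixel surrounded by confident crystal pixels is driven toward the crystal label within a few iterations, which is the qualitative effect visible between Fig. 2(b) and Fig. 2(c).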
5. Pre-Classification processing
After the crystal's borders are located, the color image
is transformed into a binary image by thresholding a high
class separability band image. This band image is
obtained using the image’s principal components. Since
the vein element extracting routine assumes that vein
elements are generally vertical, the image is rotated in a
way that will assure their verticality, using the image’s
power spectrum. The final part of the pre-classification
stage is a noise reducing process using mathematical
morphology open and close operations. All of the above
operations are described in [14].
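The binarization step can be sketched with a first-principal-component projection of the RGB channels followed by a simple threshold. This is only a stand-in under stated assumptions: the paper's actual band construction, threshold selection, rotation, and morphology are specified in [14], and the function names here are ours.

```python
import numpy as np

def separability_band(rgb):
    """Project an (H, W, 3) RGB image onto the first principal component of
    its channel covariance, giving a single band with high contrast between
    the two phases (a stand-in for the high class-separability band of [14])."""
    h, w, _ = rgb.shape
    x = rgb.reshape(-1, 3).astype(float)
    x -= x.mean(axis=0)
    cov = np.cov(x, rowvar=False)           # 3x3 channel covariance
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return (x @ vecs[:, -1]).reshape(h, w)  # project onto the top eigenvector

def binarize(band):
    """Threshold at the band mean (the paper's threshold choice is in [14])."""
    return (band > band.mean()).astype(np.uint8)
```

On a two-phase image the projected band is bimodal, so even this crude mean threshold cleanly separates the phases; noise in the resulting binary image is what the subsequent open and close operations remove.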
6. Texture element separation & classification
Some processing is required in order to separate
connected elements and identify the vein and patch type
elements.
6.1 Small elements classification
Since the element separating tools tend to erase small
texture elements, they are removed first. As described in
[1], all vein type elements have a length-width ratio that
is bigger than three. The classification is done by testing
an element’s ratio using its minimal enclosing box.
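The length-width test can be sketched as follows. For simplicity the sketch uses an axis-aligned bounding box, whereas the paper's minimal enclosing box may be rotated; `is_vein` is our name for the test.

```python
def is_vein(element_pixels):
    """Small-element test: vein if the length-to-width ratio of the
    element's (axis-aligned) enclosing box exceeds three."""
    rows = [r for r, c in element_pixels]
    cols = [c for r, c in element_pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    long_side, short_side = max(height, width), min(height, width)
    return long_side / short_side > 3
```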
6.2 Vein elements identification operator
A new operator proposed in this paper identifies and
separates vertical vein elements, using the property stated
in the previous section. For every pixel (i, j), we define the
vertical run (R_V) and the horizontal run (R_H) by

R_V(i, j) = { (k, j) : if k ≤ i, P(x, j) = 1 for all k ≤ x ≤ i, and if k ≥ i, P(x, j) = 1 for all i ≤ x ≤ k }    (3)

R_H(i, j) = { (i, l) : if l ≤ j, P(i, y) = 1 for all l ≤ y ≤ j, and if l ≥ j, P(i, y) = 1 for all j ≤ y ≤ l }    (4)

where P is the image, P(i, j) = 1 for every albite pixel and
P(i, j) = 0 for every K-feldspar pixel. We denote the vertical
run length (the number of pixels in the vertical run) by
#R_V and the horizontal run length by #R_H. The maximal
horizontal run length for the pixel (i, j) is defined by

#R_Hmax(i, j) = max{ #R_H(k, j) : i_1 ≤ k ≤ i_2 },
where i_1 = min{ i : (i, j) ∈ R_V(i, j) } and i_2 = max{ i : (i, j) ∈ R_V(i, j) }    (5)

Let the sets V and Q, and the operator S, be defined as
follows:

Q = { (i, j) : P(i, j) = 1 }    (6)

V = { (i, j) : (i, j) ∈ Q and #R_V(i, j) / #R_Hmax(i, j) ≥ 3 }    (7)

S : Q → V    (8)

The operator S may be used recursively. We define the
steady state as the state in which a repeated application of S
does not change the image at all. The following two properties
of S are of interest (proofs in [14]):
Statement 1: For any finite binary image, there is an
integer N, such that N recursive repetitions of the operator S
form a steady state set V. Denote this set as V_S^N.
In order to characterize the set V, we denote the vertical
run that the pixel (i, j) belongs to by R(i_1, i_2, j), using i_1 and
i_2 as defined in (5). In addition, we define the stable
vertical run in the following way:

R_S(i_1, i_2, j) = { (i, j) : #R_V(i, j) / #R_Hmax(i, j) ≥ 3, for all i_1 ≤ i ≤ i_2 }    (9)

Statement 2: The steady state set V_S^N is a collection of
connected components V_k^N, each one of which is a union of
stable vertical runs, in the following way:

V_S^N = ∪_{k=1}^{N_k} V_k^N    (10)

where N_k is the number of connected components in the
set V_S^N.
Figure 3. Vein identification example: (a) initial image; (b) vein elements; (c) steady state image
Figure 3 is an example of applying the operator:
Figure 3(a) is the initial image, Figure 3(b) shows the
pixels left after applying the operator once, and Figure
3(c) is the steady state, obtained after applying the
operator six times.
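A direct per-pixel implementation of the operator S and its steady state can be sketched as below. The logic follows (3)-(8) on a binary image stored as nested lists; the function names are ours, and no attempt is made at the efficiency of the actual system.

```python
def vein_operator(P):
    """One application of S (Eqs. 6-8): keep albite pixels whose vertical run
    length is at least 3 times the maximal horizontal run length taken over
    that vertical run (Eq. 5)."""
    h, w = len(P), len(P[0])

    def vrun_extent(i, j):
        # i1, i2 of the vertical run of ones through (i, j)
        a = i
        while a > 0 and P[a - 1][j]:
            a -= 1
        b = i
        while b < h - 1 and P[b + 1][j]:
            b += 1
        return a, b

    def hrun_len(i, j):
        # length of the horizontal run of ones through (i, j)
        a = j
        while a > 0 and P[i][a - 1]:
            a -= 1
        b = j
        while b < w - 1 and P[i][b + 1]:
            b += 1
        return b - a + 1

    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not P[i][j]:
                continue
            i1, i2 = vrun_extent(i, j)
            rv = i2 - i1 + 1
            rh_max = max(hrun_len(k, j) for k in range(i1, i2 + 1))
            out[i][j] = 1 if rv / rh_max >= 3 else 0
    return out

def steady_state(P):
    """Apply S recursively until the image no longer changes (Statement 1)."""
    while True:
        nxt = vein_operator(P)
        if nxt == P:
            return P
        P = nxt
```

A thin vertical line survives the operator unchanged, while a wide blob is erased within a few iterations, which matches the behavior shown in Figure 3.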
6.3 Vein elements identification and separation
At first, pixels that may belong to veins are detected by
the operator S, eliminated from the image, and used for
the generation of the vein image. Since the vein identifying
operator tends to tear long stripes from the center of large
patch elements, we define an edge identity criterion that
every possible vein element must fulfill [14], and transfer
back elements that do not fulfill this criterion. In order to
separate connected vein elements, the operator S is used
again (on the vein elements image). This sequence is
concluded by returning erased pixels that do not connect
separated elements.
6.4 Patch elements separation
The idea behind the separating routine is to identify
the patch cores and perform region growing without
allowing element connection [14]. In order to identify the
patch cores we use the vertical and horizontal run lengths
together with the two main diagonal run lengths. We
define a patch core as a set all of whose pixels have a
maximal-to-minimal run length ratio of less than five.
The patch cores identification is done recursively, by
shrinking non-core connected components (by deleting
pixels with high run length values), until every connected
component is a patch core [14]. An example of patch
element separation appears in Figure 4.
Figure 4. Patch separation example: (a) original image; (b) patch cores; (c) patch separation result
6.5 Element classification
Vein elements are classified according to their
characterizing widths, found by dividing the area of the
element by the length of its minimal enclosing box. Patch
elements are classified according to their diameters.
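The patch-core criterion of Section 6.4 (maximal-to-minimal run length ratio below five over the horizontal, vertical, and two diagonal directions) can be checked per pixel as in the sketch below; `is_core_pixel` is our name, and the recursive shrinking of non-core components is left to [14].

```python
def is_core_pixel(P, i, j):
    """True if pixel (i, j) of binary image P satisfies the patch-core
    criterion: max/min run length over the four directions is below five."""
    h, w = len(P), len(P[0])

    def run_len(di, dj):
        # run of ones through (i, j) in direction (di, dj), both ways
        n = 1
        a, b = i - di, j - dj
        while 0 <= a < h and 0 <= b < w and P[a][b]:
            n += 1
            a, b = a - di, b - dj
        a, b = i + di, j + dj
        while 0 <= a < h and 0 <= b < w and P[a][b]:
            n += 1
            a, b = a + di, b + dj
        return n

    runs = [run_len(0, 1), run_len(1, 0), run_len(1, 1), run_len(1, -1)]
    return P[i][j] == 1 and max(runs) / min(runs) < 5
```

The center of a compact blob passes the test (all four runs comparable), while pixels of a thin stripe fail it, so repeated deletion of non-core pixels isolates the blob-like cores used to seed the region growing.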
7. Experimental results and conclusions
The performance of the proposed system was tested on
a data base that contains 14 different hand segmented and
classified crystals.
Figure 5. Segmentation results examples: (a) fairly good results; (b) miss error example; (c) add error example
The hand segmentation results have been compared to
the system’s results. For every pixel, one of the following
occurs: the segmentation result is correct, a miss error (a
crystal pixel classified as a background pixel), or an add
error (a background pixel classified as a crystal pixel). We
obtained an average correct pixel segmentation of
91.57%, an average miss error of 8.43%, and an average
add error of 1.64%. The result shown in Figure 5(a) is
fairly good, the result in (b) has a few miss errors, and the
result in (c) has some add errors.
Usually, miss errors appear when there are very large
texture elements, or when the texture is sparse. When one
or more of the crystal of interest’s neighboring crystals
have similar color combinations and sparse texture, add
errors may occur.
Part of the data base crystals have been manually
classified twice by the same person, and some crystals
have been classified by two individuals. In a significant
number of cases, there are large differences between
different classifier results. Differences also appear in the
classification results of the same crystal classified twice by
the same person, reflecting the disadvantages of human
classification (see Table 1). Some separation and
classification results are presented in Figure 6 and Figure
7. In these figures, the gray level is proportional to the
element size (fine, medium, and coarse).
Table 1. Classification results (partial)
Each row compares, for one crystal (Figs. 2(c), 6(a), 5(c), 8(a), 5(b), and 5(a)), the numbers of coarse, medium, and fine patch elements and of coarse, medium, and fine vein elements obtained by the system and by the human classifiers (person A, person B), together with the range of differences. [The numeric entries are illegible in the source scan.]
* The average was not taken, since there is an extreme difference.
Figure 6. Classification result example: (a) binary image; (b) patch elements; (c) vein elements
Figure 7. Classification result example: (a) binary image; (b) patch elements; (c) vein elements
According to geologists who examined the system's and
the human classifiers' results, the agreement is very high
in most cases. Most of the classification
errors occur when the vein or patch type is classified
correctly, but the size group is shifted (coarse elements
classified as medium elements, etc.).
Sometimes, errors occur when elements (usually small)
appear in a form that would be classified as vein type in an
image containing mostly vein elements, and as patch type
in an image containing mostly patch elements. If the morphological
operators do not connect split vein elements, split vein
element components may be classified incorrectly as patch
elements.
References
[1] M. Eyal and E. Shimshilashvili, “A Comparative Album for
Quantitative Study of Perthite Textures”, Israel Journal of
Earth Sciences, Vol. 37, pp. 171-180, 1988.
[2] K.S. Fu and J.K. Mui, “A Survey on Image Segmentation”,
Pattern Recog., Vol. 13, No. 1, pp. 3-16, 1981.
[3] N.R. Pal and S.K. Pal, “A Review on Image Segmentation
Techniques”, Pattern Recognition, Vol. 26, No. 9, pp.
1277-1294, 1993.
[4] R.M. Haralick, “Statistical and Structural Approaches to
Texture”, Proc. of the IEEE, Vol. 67, No. 5, pp. 786-804,
1979.
[5] L. Van Gool, P. Dewaele and A. Oosterlinck, “Texture
Analysis Anno 1983”, Comp. Vision, Graphics and Image
Proc., Vol. 29, No. 3, pp. 336-357, 1985.
[6] T.D. Reed and J.M. Hans du Buf, “A Review of Recent
Texture Segmentation and Feature Extraction Techniques”,
CVGIP: Image Understanding, Vol. 57, No. 3, pp. 359-372,
1993.
[7] J.Y. Hsiao and A.A. Sawchuk, “Supervised Textured Image
Segmentation Using Feature Smoothing and Probabilistic
Relaxation Techniques”, IEEE transactions on Pattern
Analysis and Machine Intelligence, Vol. 11, No. 12, pp.
1279-1292, 1989.
[8] R. Kondepudy and G. Healey, “Use of invariants for
recognition of three-dimensional color textures”, Jour. of
the Optical Soc. of America A, Vol. 11, No. 11, pp. 3037-3049, 1994.
[9] D.K. Panjwani and G. Healey, “Markov Random Field
Models for Unsupervised Segmentation of Textured Color
Images”, IEEE Trans. on PAMI, Vol. 17, No. 10, pp. 939-954, 1995.
[10] T.S.C. Tan and J. Kittler, “Colour texture analysis using
colour histogram”, IEE Proc. - Vision, Image and Signal
processing, Vol. 141, No. 6, pp. 403-412, 1994.
[11] J. Scharcanski, J.K. Hovis and H.C. Shen, “Representing the
color aspect of texture images”, Pattern Recognition
Letters, Vol. 15, No. 2, pp. 191-197, 1994.
[12] J. Scharcanski, H.C. Shen and A.P. Alves da Silva, “Colour
Quantisation for Colour Texture Analysis”, IEE Proc. - E,
Vol. 140, No. 2, pp. 109-114, 1993.
[13] R.M. Haralick, S.R. Sternberg and X. Zhuang, “Image
Analysis Using Mathematical Morphology”, IEEE
Transactions on PAMI, Vol. 9, No. 4, pp. 532-550, 1987.
[14] B. Cohen, I. Dinstein and M. Eyal, “A System for
Computerized Classification of Color Textured Perthite
Images”, submitted to Pattern Recognition.