
Radon Representation-Based Feature Descriptor for Texture Classification
Implementation and Comparison Study
A Final Project Report for EEL5820
Amir Roshan Zamir
University of Central Florida
Paul Scovanner
University of Central Florida
[email protected]
[email protected]
Abstract
In this report we discuss our implementation of the paper "Radon Representation-Based Feature Descriptor for Texture Classification" [1]. We were able to reproduce many of the experiments from that paper, and we present our texture classification results using the Radon Representation-based Feature Descriptor (RRFD).
[Figure 1 appears here: a plot of f(Ds) versus Ds illustrating div(x, x̃) = min over Ds of f(Ds), with the minimum marked at (0.75, 39675.18).]

1. Introduction
We implemented the feature descriptor proposed in this paper, which we will refer to as RRFD, as well as both distance metrics (illumination-dependent and illumination-invariant) discussed in the paper [1]. We tested the RRFD on various textured images downloaded from the web.
2. Method

In this section we briefly outline the steps of the proposed method. In the paper, 'invariants' are calculated in order to determine how the Radon pixels are grouped together. Calculating these invariants is slow even for relatively small textures (we used 80x80 patches). However, the invariants only need to be computed once, as long as all textures have the same shape.

Once the invariants are calculated, the Radon transform itself can be computed, and the statistics which make up the final feature vector (the mean and covariance) can be found.

The resulting feature descriptors are compact and can be evaluated quickly against other texture descriptors to determine texture similarities. The illumination-invariant distance measure requires minimizing a function; we did this by discretizing the space and finding the discrete minimum over a range of likely values. The function which is minimized, and its minimum value, can be seen in Figure 1.

3. Results
We tested the RRFD using various textures downloaded from the web. Figure 2 displays a sample of four images we used for testing purposes. Figures 3 and 4 show the distances of the texture features from each other using the simple and the illumination-invariant distance metrics, respectively.

As can be seen, the two most similar textures are the grass and the wood. However, when evaluating with the illumination-invariant metric, the difference is even more noticeable.
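To make the feature extraction in Section 2 concrete, the following is a minimal Python sketch of an RRFD-style descriptor. It is our simplification, not the exact method of [1]: the Radon transform is implemented by rotate-and-sum (via scipy.ndimage.rotate), and the invariant-based grouping of Radon pixels is replaced by treating each projection angle as one group before taking the mean and covariance statistics.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_transform(img, angles):
    """Minimal Radon transform: rotate the image and sum along columns,
    giving one projection per angle. Output shape: (n_bins, n_angles)."""
    return np.stack(
        [rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles],
        axis=1,
    )

def rrfd_features(patch, n_angles=8):
    """RRFD-style descriptor sketch: Radon-transform the patch, then
    summarize the Radon pixels by mean and covariance statistics.
    Grouping by projection angle here is our simplification of the
    invariant-based grouping used in [1]."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon_transform(patch, angles)
    mean = sinogram.mean(axis=0)            # per-angle (per-group) mean
    cov = np.cov(sinogram.T)                # covariance across angles
    iu = np.triu_indices(n_angles)          # covariance is symmetric
    return np.concatenate([mean, cov[iu]])  # compact fixed-length vector
```

Because the descriptor is a fixed-length vector, two textures can then be compared with any vector distance (e.g. Euclidean) to build confusion matrices like those in Figures 3 and 4.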
Figure 1. The function f(Ds) that is minimized to find div(x, x̃).
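The discretized search for the minimum of f(Ds) shown in Figure 1 can be sketched as follows. This is a generic grid search: the quadratic objective below is a placeholder standing in for the actual f(Ds) of [1] (which depends on the two descriptors being compared), and the search range is our assumption, chosen to match the axis of Figure 1.

```python
import numpy as np

def discrete_minimize(f, lo=0.0, hi=6.0, steps=601):
    """Approximate min over Ds of f(Ds) by evaluating f on a uniform grid
    of `steps` points in [lo, hi] and taking the smallest value."""
    grid = np.linspace(lo, hi, steps)
    values = np.array([f(d) for d in grid])
    i = int(np.argmin(values))
    return grid[i], values[i]

# Placeholder objective whose true minimizer is Ds = 0.75:
ds_star, f_star = discrete_minimize(lambda d: (d - 0.75) ** 2 + 1.0)
# ds_star is approximately 0.75 and f_star approximately 1.0
```

The grid resolution trades accuracy of div(x, x̃) against runtime; Figure 1 shows one such objective with its discrete minimum at (0.75, 39675.18).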
4. Discussion
The assumption made by the illumination-invariant distance metric, "Now consider that an image I is taken under a different illumination and becomes I_{s,t} = sI + t", would be false in many real-world illumination situations, since pixel values do not scale linearly with the actual illumination. In fact, it would seem that such a simplistic illumination model could be compensated for much more easily by normalizing the intensity values of the input images to [0, 1] before calculating the RRFD. It is also unclear how ∆t is found and how it is used. The paper claims that ∆t is a function of Ds; however, this is untrue, as they are independent illumination parameters. Also, ∆t is not used in the final equation for div(x, x̃).

Figure 2. Sample textures downloaded from the web. Clockwise from top left: stones, grass, chainmail, wood.

Figure 3. Illumination-dependent d(x, x̃) confusion matrix of distances for multiple textures. The matrix is normalized to [0, 1].

Figure 4. Illumination-invariant div(x, x̃) confusion matrix of distances for multiple textures. The matrix is normalized to [0, 1].

5. Conclusion

We have implemented the paper and done additional testing beyond that in the original paper [1]. It is unclear whether this method outperforms other texture description methods, and further testing is required for this purpose. We have shown that RRFD can discern between different materials and has certain advantages, such as a compact description, accuracy, illumination invariance, and affine warp invariance. However, its disadvantages include runtime, implementation complexity, and lack of scalability.

6. Tasks

The separation of tasks is outlined in the table below. Overall effort/time from each partner was equal.

Task                                 Amir's Contribution      Paul's Contribution
Implementation of RRFD [1]           Wrote much of the code   Assisted in writing and optimizing the code (key creation)
Implementation of Distance Metrics   Checked for errors       Wrote the code
Writing of Report                    Proofread                Wrote the paper and created figures
Overall                              50%                      50%
References
[1] G. Liu, Z. Lin, and Y. Yu. Radon representation-based feature descriptor for texture classification. IEEE Transactions on Image Processing, 2009.