
The new concept of Information Sets and its application in iris recognition
M. Hanmandlu*, R. A. Boby, M. Bansal, M. Bindal, F. Sayeed, R. Gupta
Abstract
This paper introduces a new concept of information sets, wherein information and the membership function are considered together to represent sets. Based on this concept, six new texture features, namely Cumulative response, Effective information, Energy, Sigmoid, Hanman filter and Hanman transform features, have been developed. Underlying these features is the concept of information sets, which emanates as an offshoot of the fuzzy set. Each element of an information set is the product of an element of the fuzzy set and its membership function. The information set is transformed by a new filter that captures the frequency components of the set; features derived using this filter are called Hanman filter features. The same concept has been used to develop a distance based classifier built on t-norms, named the Inner Product Classifier (IPC). The performance of the new features and classifier has been evaluated by applying them to iris recognition, using several classifiers including IPC. The performance of the new information set based features and classifiers matches or is superior to that of the conventional features and classifiers. At the final stage, a majority voting based method is applied on iris strips of different sizes, which yields better performance than using a single-sized iris strip alone. The final fusion results reiterate the fact that iris strips carry significant texture information away from the pupillary boundary, agreeing with the observation of Hollingsworth et al. [50].
Keywords: Information sets, fuzzy sets, texture features, classifier, iris recognition
*M. Hanmandlu, M. Bansal, and M. Bindal are with the EE Department, IIT Delhi, New Delhi 110016, India. e-mail: [email protected], [email protected], [email protected]
R. A. Boby is with the Mechanical Engineering Department, IIT Delhi, New Delhi, India. e-mail: [email protected]
F. Sayeed is with P.A. College of Engineering, Mangalore, Karnataka, India. e-mail: [email protected]
R. Gupta is with PEC University of Technology, Chandigarh, India. e-mail: [email protected]
I. INTRODUCTION
A new concept of information sets is proposed in this paper. In [1] there is a brief account of how the information sets concept can be used in the extraction of fractal features. It was applied to iris recognition and showed considerable improvement in performance over conventional approaches to determining the fractal dimension. In this work, the concept of information sets is introduced and applied to iris recognition through the modeling of new texture features and a classification algorithm.
The concept of information sets is inspired by several theories, briefly reviewed here. Fuzzy set theory [2] and Shannon's entropy function [3] have been the major foundations for the development of the new theory. Beginning with Shannon, numerous ways have been proposed to quantify entropy. Pal et al. [4] discussed an entropy function based on an exponential expression. This was generalized by Hanmandlu et al. [5] using a polynomial expression. The driving force behind this generalization is to express the membership function relative to a reference value, similar to the rough sets concept of Pawlak [6]. The concept of information sets implies that, using a reference value, membership functions similar to entropy can be defined for any subset of a given set. These can then be used to distinguish different sets from one another. In this paper, the information sets concept is used to model membership values and to implement classification on that basis. The concept is further extended to design classifiers based on distance measures.
In this paper the concept of information sets is proposed, and its performance is validated on an iris recognition application. The iris is endowed with distinctive texture information and is thus a suitable test platform for new texture features and classifiers. It has been a topic of interest for many years since the pioneering works of Daugman [7] and Wildes [8]. In iris recognition, the onus is on selecting the features that enable the most accurate classification. The Gabor filter based approach is one of the best tools available to characterize and classify texture [9]. It has played a significant role in characterizing iris texture through the production of iris codes generated from phase information [7]. It is beyond doubt that better performance in applications like iris recognition can stem from a better understanding of textures. The advantage of the Gabor filter is its ability to quantify the spatio-temporal components of texture. Even about twenty years after the advent of the technology, there are numerous ongoing efforts to design better features and classifiers for iris recognition [10-11].
A. Motivation
The information sets concept can be deduced by combining the basic theories of fuzzy sets and Shannon's entropy. Biometrics, especially iris recognition, is an application that involves quantifying texture and then classifying it, and so it can serve as a platform for the new concept. The new concept has been implemented at both the feature extraction and classification stages. There has always been the possibility of attaining more accurate iris recognition through more effective texture representation. Many approaches have been implemented, among which the Gabor wavelet based approach stands out [12]. New features can be designed using the salient aspects of the Gabor filter. Any new texture feature is judged by its ability to distinguish between textures. Devising a new feature therefore makes it necessary to implement classification using new methods suited to the feature under consideration. Though the principal motive of this paper is to introduce the concept of information sets, to enable an appreciation of the new concept it has been implemented on an iris recognition application and its performance has been compared with that of other texture features and classifiers.
B. Literature Survey
The information sets concept was proposed in [5] using these basic theories. It was later used for the calculation of fractal features of iris textures in [1], where it was shown that information set based approaches can improve the performance of conventional texture feature extraction methods such as fractals. The successful performance of the information sets concept on this application has opened up the possibility of applying it to iris recognition, though the recognition rates in [1] are not substantial in comparison with other methods in the literature. Since iris recognition has been an active area of research for many years, it is possible to fully validate the information sets concept by applying it to iris recognition and comparing the results with those in the literature.
The pioneering approaches of Daugman [7] and Wildes [8] are considered the herald of using the iris for personal authentication. Daugman uses Gabor wavelet [7,12] phase information as features, whereas Wildes uses the Laplacian of Gaussian filter at multiple scales to produce a template and normalized correlation as a similarity measure [8]. Iris recognition has three important steps, namely segmentation, feature extraction and matching, and the major contributions in the literature in these areas are discussed below.
Segmentation of the iris texture region plays a pivotal role in iris recognition. Different approaches such as morphological operations [13], thresholding using histogram curve analysis [14], and the Hough transform with edge detection [17] have been proposed for iris segmentation. Many researchers have investigated the optimum boundary for the separation of iris texture between the pupillary boundary and the limbic boundary [18]. A host of problems, such as the non-circular shapes of the iris and pupil and off-axis images, have prompted special consideration [19,20]. It has been shown in [15] that better iris segmentation helps improve the overall performance of iris recognition. Many new methods of iris segmentation are discussed in detail in [16]. Gabor filter features are the most sought after as far as texture is concerned [21]. Other feature extraction methods such as the Hilbert transform [22] and wavelet based filters [23] are also in vogue in the literature. Regarding classification algorithms, mention may be made of the correlation of phase information from windows [24] and support vector methods [25], apart from simple Euclidean distance classifiers.
Practical implementation of iris based biometrics requires faster and more efficient data storage, and a possible solution to this problem using FPGAs is suggested in [26]. Spoofed irises can be created from iris codes, and to circumvent this, counterfeiting countermeasures need to be developed [27]. Factors affecting the quality of iris images captured in visible wavelengths are studied in [28]. Concerns regarding degradation of quality due to compression techniques are dispelled in [29]. In [30], the quality of iris images and its effect on recognition rates is analyzed with respect to the visible area of the iris texture region.
There is a need to work on segmentation algorithms because iris images may be captured in non-ideal environments. The segmentation methodology has to be adapted to accommodate a non-circular pupil boundary [31-34]. An attempt has been made to enable iris recognition using directional wavelets [35]. New methodologies such as biometric recognition using the periocular region (the facial region in the immediate vicinity of the eye, rather than the iris texture features usually visible under Near Infrared (NIR) lighting conditions) [36] and iris recognition with score level fusion over video frames are proposed in [37].
It can be seen that feature extraction still deserves a lot of attention to enable better iris recognition performance. Different features also necessitate the application of efficient matching algorithms, as discussed in [38]. In this paper an effort has been made to devise a common theory that enables both feature extraction and classification using basic concepts of information theory.
C. Organization of the paper
Section II discusses the feature extraction methods based on the information sets. In Section III a new
classification paradigm called Inner product classifier (IPC) suited to information sets is proposed. These
two sections constitute the description of the new concept of information sets. The results are given in
Section IV followed by conclusions in Section V.
II. INFORMATION SETS BASED FEATURES
Consider any fuzzy set, which in our case is constituted by a set of gray levels I = {I(i, j)} in a window. When these are fitted with a membership function, the corresponding membership function values are denoted by {µij}. A pair of a gray level and its membership function value is thus a 2D element of a fuzzy set. However, in the context of the information theoretic entropy called the Hanman-Anirban entropy [5], which is an offshoot of the Pal and Pal entropy [4], the pair comes about as a product when we treat the gray levels as information sources rather than as probabilities, as assumed in the entropy function. In this way the entropy function is viewed in the framework of possibility instead of probability. The products then become the elements of an information set as per the definition given here.
Definition (Information set): Any fuzzy set defined over a Universe of Discourse can be thought of as an information set whose elements are the products of the information source values and their membership grades. The information set thus obtained is

H = {µij I(i, j)}        (1)
Proof: If an attribute or property follows a distribution, it is easy to fit a membership function, or at least an approximating function, describing that very distribution. Let us consider the most sought after membership functions, the exponential and Gaussian type functions, given by

µij = exp( -|I(i, j) - I_ref| / f_h )        (2)

µij = (1 / (√(2π) f_h)) exp( -(I(i, j) - I_ref)² / (2 f_h²) )        (3)

where we have used a fuzzifier f_h proposed by Hanmandlu et al. [40]. It is a sort of spread function that captures the spread of the attribute values with respect to the chosen reference I_ref, defined as

f_h² = Σi Σj (I(i, j) - I_ref)⁴ / Σi Σj (I(i, j) - I_ref)²        (4)

It may be noted that the above fuzzifier gives more spread than is possible with the variance as used in the Gaussian function. It would be interesting to come up with other spread functions capable of capturing the uncertainty in the attribute values. We will now consider a membership function that follows no distribution (nd), unlike the exponential and Gaussian functions:

µij = 1 - |I(i, j) - I_ref| / I_max        (5)
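As a concrete illustration of (2)-(4), the following sketch computes the fuzzifier, a Gaussian membership and the resulting information set for one window. The choice of the maximum gray level as the reference I_ref is an assumption for this example; the paper allows other references such as the median.

```python
import numpy as np

def fuzzifier(window, i_ref):
    """Spread of gray levels about the reference: ratio of the 4th to the
    2nd central moment about i_ref, which gives a wider spread than the
    variance."""
    d = window.astype(float) - i_ref
    return np.sqrt(np.sum(d ** 4) / np.sum(d ** 2))

def gaussian_membership(window, i_ref):
    fh = fuzzifier(window, i_ref)
    return np.exp(-((window.astype(float) - i_ref) ** 2) / (2.0 * fh ** 2))

def information_set(window, i_ref=None):
    """Each element is (information source) x (membership grade)."""
    if i_ref is None:
        i_ref = float(window.max())  # assumption: max gray level as reference
    mu = gaussian_membership(window, i_ref)
    return mu * window

w = np.array([[100.0, 120.0], [140.0, 200.0]])
H = information_set(w)  # the reference pixel keeps its full value
```

Since every membership grade lies in (0, 1], each information element is bounded above by its gray level, and the pixel equal to the reference is passed through unchanged.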
We will now invoke the Hanman-Anirban entropy function (see Appendix), given by

H = Σi p_i exp( -(a p_i³ + b p_i² + c p_i + d) )        (6)
a, b, c and d are real-valued parameters. We will now convert (6) into the product of the information source and the exponential membership value by first assuming a = 0, b = 0 and c positive, and then choosing c and d so that the exponential gain equals the exponential membership function of (2); with p = I(i, j) this gives

H = Σi Σj I(i, j) µij        (7)

With a bit of adaptation of (6), as given by

p = I(i, j); a = 0, b = 1/(2 f_h²(ref)), c = -2 I(ref)/(2 f_h²(ref)), d = I²(ref)/(2 f_h²(ref)),

the Hanman-Anirban entropy [5] is converted into the product but with the Gaussian membership function:

H = Σi Σj I(i, j) exp( -(I(i, j) - I(ref))² / (2 f_h²(ref)) )        (8)
In short the concept is this: if we have a fuzzy set represented by a membership function, then each element of the information set bears the relation

Information = (information source) × (membership function)        (9)

Based on the information set, we will formulate methods for devising new features, which we will call by the following names.
A. Cumulative Response

The Cumulative response is the centre of gravity of the information contained in a window. When the given image is partitioned into windows of size W×W, its value for the kth window is obtained as

Ī(k) = Σi Σj µij I(i, j) / Σi Σj µij        (10)

where Ī(k) is the feature from the kth window. The membership function used in (10) is taken as the Gaussian function given by

µij = exp( -(I(i, j) - I_ref)² / (2 f_h²) )        (11)

I_ref is the reference gray scale value in the image, which could be the maximum gray level value, the median, etc.
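The centre-of-gravity form of (10) can be sketched as follows. The maximum gray level as reference and the standard deviation as a simple stand-in fuzzifier are assumptions made for this example only.

```python
import numpy as np

def gaussian_membership(window, i_ref, fh):
    return np.exp(-((window - i_ref) ** 2) / (2.0 * fh ** 2))

def cumulative_response(window, i_ref=None, fh=None):
    """Centre of gravity of the information in a window:
    sum(mu * I) / sum(mu)."""
    window = window.astype(float)
    if i_ref is None:
        i_ref = window.max()      # assumption: max gray level as reference
    if fh is None:
        fh = window.std() + 1e-9  # assumption: std as a simple fuzzifier
    mu = gaussian_membership(window, i_ref, fh)
    return (mu * window).sum() / mu.sum()

w = np.array([[10, 20], [30, 40]])
r = cumulative_response(w)
```

Because pixels near the reference get the largest membership grades, the response is pulled toward the reference gray level rather than the plain window mean.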
B. Effective Information, Ī µ̄

The Effective information, Ī µ̄, is the product of the Cumulative response and the cumulative membership function value. Following (11), we find µ̄ just as Ī from a window:

µ̄(k) = Σi Σj µij µij / Σi Σj µij        (12)

The product of Ī and µ̄ is the Effective information.
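A sketch of the Effective information follows. The centroid-style form of the cumulative membership, the maximum gray level as reference and the standard deviation as fuzzifier are all assumptions of this example.

```python
import numpy as np

def effective_information(window, i_ref=None, fh=None):
    """Effective information = (cumulative response) x (cumulative
    membership), both computed as membership-weighted averages."""
    window = window.astype(float)
    if i_ref is None:
        i_ref = window.max()      # assumption: max gray level as reference
    if fh is None:
        fh = window.std() + 1e-9  # assumption: std as a simple fuzzifier
    mu = np.exp(-((window - i_ref) ** 2) / (2.0 * fh ** 2))
    i_bar = (mu * window).sum() / mu.sum()  # cumulative response
    mu_bar = (mu * mu).sum() / mu.sum()     # cumulative membership
    return i_bar * mu_bar
```

Since the cumulative membership is at most 1, the Effective information never exceeds the Cumulative response itself.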
C. Energy Features

Several forms qualify as information sets, some of these being {µij I(i, j)}, {µij² I(i, j)}, {µij I²(i, j)}, {µij² I²(i, j)} and so on. Here we consider Energy as the feature. To show that Σi Σj (µij^g)² I(i, j) comes from the entropy function, we choose the following in (6):

p = I(i, j) µij^g; a = 0, b = 1/(2 µij² f_h²(ref)), c = -2 I(ref)/(2 µij f_h²(ref)), d = I²(ref)/(2 f_h²(ref))        (13)

Similarly we get Σi Σj (µij^e)² I(i, j) with p = I(i, j) µij^e; a = b = 0, c = 1/(2 µij^e f_h²(ref)), d = -I(ref)/(2 f_h²(ref)), where µij^g and µij^e denote the Gaussian and exponential membership functions respectively. Note that for getting the Energy feature, p is itself entropy; moreover b and c are also functions of µij.

The Energy in the kth window, E(k), is found by summing the Energy (µij I(i, j))² of each pixel in the window. Instead of the above Gaussian and exponential membership functions, we use a different membership function for the Energy (this form can be easily derived just as the above) in

E(k) = Σi Σj (µij I(i, j))²; µij = 1 - |I(i, j) - Ī| / I_max        (14)

The choice of this function is dictated by its suitability to iris recognition.
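One plausible reading of the Energy feature, sketched below, takes the window energy as the sum of squared information values with a simple deviation-based membership; both the exact membership and the reference choice are assumptions of this example.

```python
import numpy as np

def energy_feature(window, i_ref=None):
    """Assumed form: E(k) = sum((mu_ij * I_ij)^2) with a linear
    deviation-based membership in [0, 1]."""
    window = window.astype(float)
    if i_ref is None:
        i_ref = window.max()  # assumption: max gray level as reference
    denom = np.abs(window - i_ref).max() + 1e-9
    mu = 1.0 - np.abs(window - i_ref) / denom  # membership in [0, 1]
    return np.sum((mu * window) ** 2)
```

Pixels far from the reference get membership near zero and thus contribute almost no energy, so the feature emphasizes pixels close to the reference gray level.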
D. Sigmoid feature

The information can also be a function of the basic product, such as {g(µij I(i, j))}. One can consider any form for the information, and the option to select a particular form is vested with the user. We now consider the information in the mould of a sigmoid function. The resulting Sigmoid feature is obtained by applying a sigmoid function to the information values in the kth window as

S(k) = Σi Σj 1 / (1 + exp(-µij I(i, j)))        (15)

The choice of this function was arrived at by experimentation, in which the sigmoid was found to give the best performance.
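The sigmoid form can be sketched directly; treating the logistic function of the information value µij·I(i, j) as the summand is an assumption consistent with the description above.

```python
import numpy as np

def sigmoid_feature(window, mu):
    """Assumed form: apply the logistic sigmoid to each information
    value mu_ij * I_ij and sum over the window."""
    info = mu * window.astype(float)
    return np.sum(1.0 / (1.0 + np.exp(-info)))
```

Each summand lies in (0, 1), so the feature is bounded by the number of pixels in the window, which keeps features from windows of equal size directly comparable.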
E. Information set based filter (Hanman filter)

The information µij I(i, j) at a pixel in a window is modified by taking the membership function as a function of a parameter s. The modified information I_s(i, j) is defined as

I_s(i, j) = µij(s) I(i, j)        (16)

where the membership function depends on s as

µij(s) = exp( -(|I(i, j) - I_ref| / f_h)^s ) for s ∈ {s1, s2, ...}        (17)

In order to capture the frequency components involved in the information value, I_s(i, j) is transformed by a cosine function that depends on the frequencies:

Ĩ(i, j) = I_s(i, j) cos( ω_p I_s(i, j) )        (18)

where the parameter of the cosine function is

ω_p = 2π/p; p = p1, p2, ...        (19)

By varying s and p, we can create several information images. These images are aggregated to get a composite image, and windows of different sizes are used to partition this image. The values within each window are averaged to get a feature value for each window. The authors would like to christen this new filter the Hanman filter.
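A hedged sketch of the pipeline just described: raise the exponential membership to a power s, form the information image, modulate it with a cosine of frequency 2π/p, aggregate over (s, p) and average over windows. The exact membership and cosine forms, the reference and the fuzzifier are all assumptions here.

```python
import numpy as np

def hanman_filter_image(img, s, p, i_ref=None, fh=None):
    """One filtered information image for a given (s, p) pair."""
    img = img.astype(float)
    if i_ref is None:
        i_ref = img.max()      # assumption: max gray level as reference
    if fh is None:
        fh = img.std() + 1e-9  # assumption: std as a simple fuzzifier
    mu_s = np.exp(-(np.abs(img - i_ref) / fh) ** s)
    info = mu_s * img
    return info * np.cos(2.0 * np.pi * info / p)

def hanman_filter_features(img, s_values, p_values, w=3):
    """Aggregate the filtered images into a composite, then average
    over non-overlapping WxW windows."""
    comp = sum(hanman_filter_image(img, s, p)
               for s in s_values for p in p_values)
    h, wd = comp.shape
    feats = [comp[i:i + w, j:j + w].mean()
             for i in range(0, h - w + 1, w)
             for j in range(0, wd - w + 1, w)]
    return np.array(feats)
```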
F. Information set based transform (Hanman transform)

Just as the Fourier transform nets out the frequency content of a periodic signal, a fuzzy transform is intended to net out the uncertainty in the information sources in a window of an image. As the membership value associated with each information source gives a measure of uncertainty, it should be a parameter in the exponential gain function of the Hanman-Anirban entropy function (6), where we take a = b = d = 0 and c = -µij/I_max for deriving the entropy of the information, named the Hanman transform:

H = Σi Σj I(i, j) exp( µij I(i, j) / I_max )        (20)

where I = {I(i, j)} represents the set of information sources in a window. Alternatively, the transformation can also be written in matrix form as

H = I · exp( µ ∘ I / I_max )        (21)

where I is the sub-image of the window and µ ∘ I (the product taken element-wise) is the corresponding information matrix; hence H is also a matrix, and the feature is obtained as the sum of its elements. If the sub-image is in the form of a histogram drawn as g(k) vs. h(k), where g(k) is the kth gray level and h(k) is the frequency of occurrence of the kth gray level, then the transform becomes

H = Σk h(k) g(k) exp( µ(k) g(k) / g_max )        (22)

where µ(k) is the membership function value of the kth gray level. The above transform is more useful for the analysis of speech signals; in the context of images partitioned into several windows, the corresponding histograms are too many for the transform to be applied to each. The authors would like to call this new transform the Hanman transform.
Algorithm for the extraction of Hanman Transform features
1) Compute the membership value associated with each gray level in a window of size W×W.
2) Compute the normalized information as the product of the gray level and its membership function
value divided by the maximum gray level in the window.
3) Take the exponential of the normalized information and multiply with the gray level.
4) Repeat Step 3 on all gray levels in a window and sum the values to obtain a feature.
5) Repeat Steps 1-4 on all windows in an iris image to get all features.
6) Repeat Steps 1-5 for W=3,5,7,9 for the evaluation of performance.
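The six steps above can be sketched directly; the only assumptions here are the standard deviation as a stand-in fuzzifier and the maximum gray level of the window as the reference for the Gaussian membership.

```python
import numpy as np

def gaussian_membership(window, i_ref, fh):
    return np.exp(-((window - i_ref) ** 2) / (2.0 * fh ** 2))

def hanman_transform_feature(window):
    """Steps 1-4: normalized information = mu * I / I_max, then
    feature = sum(I * exp(normalized information))."""
    window = window.astype(float)
    i_max = window.max()
    fh = window.std() + 1e-9  # assumption: std as the fuzzifier
    mu = gaussian_membership(window, i_max, fh)        # step 1
    norm_info = mu * window / i_max                    # step 2
    return np.sum(window * np.exp(norm_info))          # steps 3-4

def hanman_transform_features(img, w=3):
    """Steps 5-6: one feature per non-overlapping WxW window."""
    h, wd = img.shape
    return np.array([hanman_transform_feature(img[i:i + w, j:j + w])
                     for i in range(0, h - w + 1, w)
                     for j in range(0, wd - w + 1, w)])
```

Because the normalized information is non-negative, its exponential is at least 1, so each feature is bounded below by the plain sum of gray levels in its window.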
III. INFORMATION SETS BASED CLASSIFIER (INNER PRODUCT CLASSIFIER)

This classifier is inspired by the Support Vector Machine (SVM) classifier [41]. It is built from the training features and the errors between the test and training image features using triangular norms (t-norms). We consider the aggregate of two training features and the fusion of the corresponding errors using t-norms for the development of the classifier. The purpose of the fusion is to get the minimum of the two errors. The inner product between the aggregated training features and the fused errors must be the least for the test features to match the training features. This is the concept behind the proposed classifier.
The difference between the highest and the lowest inner products gives the highest margin. The training feature vectors with the lowest inner product (margin) give the identity of the test feature vector. As the errors are positive, the margin is towards the positive side of the projection plane, i.e. the hyperplane. Other forms of errors, such as squared errors, can also be investigated. The t-norms generalize the logical conjunction of two fuzzy variables (two feature vectors) a and b in the interval [0, 1], satisfying the conditions:
1) Commutativity: t(a, b) = t(b, a)
2) Associativity: t(a, t(b, c)) = t(t(a, b), c)
3) Monotonicity: t(a, b) ≤ t(c, d) if a ≤ c and b ≤ d
4) Neutral element 1: t(a, 1) = a
If the training set contains a number of sample iris images, the t-norm is taken between the feature vectors of two training samples at a time. There are many families of t-norms [45]; we have used the Frank t-norm, which has the parametric form

t(a, b) = log_p( 1 + (p^a - 1)(p^b - 1)/(p - 1) )        (23)

where p > 0, p ≠ 1, is a constant. This t-norm is found to be the most appropriate for iris recognition. We will present the steps involved in the classifier as an algorithm. Before applying the classifier, the features are normalized by

f(i, j) ← f(i, j) / max_i f(i, j)        (24)

where f(i, j) is the jth feature of the ith sample. A similar normalization is done on the test features fte(j).
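The Frank t-norm of (23) and a normalization along the lines of (24) can be sketched as follows; taking the per-feature maximum over the training samples as the normalizer is an assumption of this example.

```python
import numpy as np

def frank_tnorm(a, b, p=2.0):
    """Frank t-norm: T(a, b) = log_p(1 + (p^a - 1)(p^b - 1)/(p - 1)),
    defined for p > 0, p != 1; inputs are in [0, 1]."""
    return np.log1p((p ** a - 1.0) * (p ** b - 1.0) / (p - 1.0)) / np.log(p)

def normalize_features(f):
    """Assumed normalization: scale each feature (column) by its
    maximum over the training samples so values lie in [0, 1]."""
    f = np.asarray(f, dtype=float)
    return f / (f.max(axis=0) + 1e-12)
```

The t-norm axioms listed above can be checked directly: 1 acts as the neutral element, 0 annihilates, and the operator is symmetric in its arguments.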
A. Algorithm for the new classifier
I. Calculate the error Ei(j) between the ith training sample of each user and the test sample for the jth feature, given by

Ei(j) = | ftr(i, j) - fte(j) |        (25)

where ftr and fte are the feature vectors of the training and test samples respectively, i = 1, 2, ..., M and j = 1, 2, ..., N, where M is the total number of samples for each user in the training set and N is the total number of features in a sample.
II. Fuse the errors of the ith and kth training samples by the Frank t-norm, denoted tF, with p chosen depending on the application:

Eik(j) = tF( Ei(j), Ek(j) )        (26)

In the above, we consider all possible combinations of the training sample errors, entailing a huge computation but with the prospect of obtaining the least value of Eik(j).
III. Find the average feature value of the ith and kth training samples:

fik(j) = (1/2) { ftr(i, j) + ftr(k, j) }        (27)

The above fused error vectors act as support vectors and the average feature vectors act as weights, together leading to the hyperplane given by their inner product. So it is necessary and sufficient that the inner product of Eik(j) and fik(j) be the least for the training sample to be matched with the test sample.
IPik = Σj Eik(j) fik(j)        (28)

for i, k = 1, 2, ..., M, and the number of products generated from (28) is M(M-1)/2. The minimum of IPik is the measure of dissimilarity corresponding to the lth user. While matching, whichever user corresponds to min_l { IP(l) } provides the identity of the test user that owns the training sample. Here ftr(i, j) is the jth information (feature) from the ith sample, and the fusion of the two errors gives the confidence about the information.
It appears from the above approach that one could use more than two training features for the aggregate and the corresponding errors for the fusion. It has been found that such a combination never works out in practice, for the simple reason that the information loses its effectiveness when averaged over several samples. The authors would like to name this new distance based classifier the Inner Product Classifier (IPC), since it involves the product of feature and t-norm values.
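Steps I-III above can be sketched end to end. This is a minimal illustration assuming normalized features in [0, 1] and the Frank t-norm with p = 2; the function and variable names are this example's own.

```python
import numpy as np

def frank_tnorm(a, b, p=2.0):
    return np.log1p((p ** a - 1.0) * (p ** b - 1.0) / (p - 1.0)) / np.log(p)

def ipc_dissimilarity(train, test, p=2.0):
    """For one user: error vectors E_i = |f_i - f_te| (25), Frank-t-norm
    fusion E_ik (26), averaged features f_ik (27), and the minimum inner
    product over all pairs (i, k) (28)."""
    errors = np.abs(train - test)
    m = train.shape[0]
    best = np.inf
    for i in range(m):
        for k in range(i + 1, m):
            e_ik = frank_tnorm(errors[i], errors[k], p)  # fused errors
            f_ik = 0.5 * (train[i] + train[k])           # averaged features
            best = min(best, float(np.dot(e_ik, f_ik)))
    return best

def ipc_classify(users, test, p=2.0):
    """Identity = user with the smallest dissimilarity."""
    scores = [ipc_dissimilarity(tr, test, p) for tr in users]
    return int(np.argmin(scores))
```

A test vector close to a user's training samples yields near-zero errors, hence a near-zero fused error vector and a near-zero inner product, which is why the minimum identifies the owner.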
B. Hanman Integral
If the errors Eik(j) are sorted in ascending order and the fik(j) rearranged accordingly, it is possible to construct a new integral by knowing the membership function values µ(fik(j)). In view of this modification, (28) becomes a new integral as follows:

HIik = Σj µ(fik(j)) Eik(j) fik(j)        (29)

The choice of the membership function in (29) is left to the user. The above integral may not serve as a classifier, but it has potential for the selection of a modality in the context of multimodal biometrics, where one or two modalities need to be selected out of several. It may be noted that Eqn. (29) aggregates the information from the training features with the knowledge of the test feature vectors. That is,

HIik = Σj µ(fik(j)) t( Ei(j), Ek(j) ) fik(j)        (30)

where t(.) is the t-norm. If the membership function is not available, we take µ(fik(j)) = exp(-fik(j)). The authors would like to christen this integral the Hanman integral.
IV. APPLICATION OF INFORMATION SETS TO IRIS RECOGNITION
A. Segmentation and Feature Extraction
Segmentation forms a very important part of iris recognition, as is evident from the improvement in iris recognition performance that better segmentation brings [15, 34]. The segmentation was based on the code developed by Masek [47]. Though segmentation is not the major focus of this paper, we will briefly discuss the segmentation methodology. Iris segmentation was done on the basis of the Hough transform based approach proposed by Masek [47]. First, segmentation was performed using the Canny edge filter [39], and then the Hough transform was used to detect the boundaries of circular regions in the segmented region. Initial attempts at using Masek's code gave inferior results on the CASIA Iris V3 interval database. Therefore a modified version of Masek's code has been used. The changes were made after observing that the algorithm was failing on many images in the CASIA Iris V3 lamp database [48]. Only 80% of the total images could be segmented correctly. Manual initialization of the radius ranges for iris segmentation was required, since the pupil sizes varied over a large range of values. In cases where the segmentation failed, the radius range was manually initialized, and iris strips were produced using the polar to rectangular strip conversion method rather than the interpolation based method used by Masek. Figure 1 shows a sample image from CASIA Iris V3 interval and the corresponding iris strip that was generated. The generated iris strips still suffer from eyelid and eyelash occlusion, as is evident from Figure 1b. This problem is taken care of by cropping the portions of the strips most likely to be occluded. Similar approaches can be seen in the literature [51]. This is achieved by duplicating the strip and concatenating the copies horizontally, as shown in Figure 2a, and by considering the middle portion of the joined strip. The obtained strip is rich in texture, as shown in Figure 2b, and has little or no occlusion. These rectangular strips can be enhanced and normalized before feature extraction.
Features were extracted by dividing the enhanced image into non-overlapping windows. Within each window, an information set based feature was calculated, as discussed in Section II. Different window sizes were considered, and the number of features for each iris strip depends on the window size; with increasing window size the feature dimension decreases. Iris strips corresponding to 404 persons have been extracted. The database chosen for experimentation is CASIA-IrisV3 [48], which has eye images of 411 people, 404 of whom have a minimum of ten images of the left eye. Therefore 4040 eye images have been used to generate 4040 iris strips.
Fig. 1: Sample image of the iris and the rectangular strip generated from it

Fig. 2: Generation of an iris strip devoid of occlusions and eyelids
B. Iris strip Classification
The features have been extracted for each iris strip. After various trials, an optimum value of p = 2 has been used in the classifier. As mentioned earlier, normalization across each feature dimension has been done before classification.
C. Rank level fusion of iris strips
The authors of [51] discuss using certain regions of the iris strip to obtain better iris recognition. Their findings show that the middle portion of the iris strip aids the performance of iris recognition algorithms. We extend this concept: the iris strips are divided into different regions and each region is matched separately. The classification results for each person are then fused through rank level fusion to achieve better results. The concept of rank level fusion is stated here.
Majority voting is a means of classification in which a particular class is favored by different classifiers [47]. It is expressed as

C = arg max_i s_i        (31)

where C is the final classification and s_i is the score for the ith class. This approach is well suited to the presence of several classifiers employing different principles. One can also consider the higher ranks when making the classification, and with the consideration of higher ranks the accuracy of classification improves, similar to the rank level fusion explained in [46]. Weights have been assigned to the different classification attempts, thus penalising less accurate classifiers and rewarding more accurate ones. The weights were decided from the recognition accuracies on the training images.
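The weighted voting of (31) can be sketched as follows; the function name and the example labels are this illustration's own, and the weights stand in for the training-set accuracies described above.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Fuse decisions from classifiers run on different iris strip
    sizes: each predicted class label is weighted by its classifier's
    training accuracy, and the class with the largest total wins."""
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    return max(scores, key=scores.get)
```

With weights this scheme lets a single accurate classifier outvote several weak ones that happen to agree, which is the intended penalising/rewarding behaviour.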
V. RESULTS AND DISCUSSION
A. Iris segmentation and feature extraction
The pupil and iris boundaries are correctly detected in 80% of the images with the proposed segmentation algorithm. The rest of the iris images were segmented with manual intervention, as discussed in Section IV. The number of features depends upon the window size chosen to partition the iris; the window size is varied to obtain feature vectors of varied size. The performance of the different feature types, viz. Cumulative response (Ī), Effective information (Ī µ̄), Energy, Sigmoid, Hanman filter and Hanman transform, has been analysed using SVM [41-44]. Though SVM is not the best suited classifier for biometric matching, its results can be used to compare the performance of the features. The Gabor filter based feature has been extracted after dividing the image into windows, as for the other features discussed in this paper; this facilitates better quantification of local texture. The parameters of the Gabor filter have been optimized through experimentation. The performance of the features can be inferred with reference to the performance of the conventional Gabor filter feature using SVM, a conventional classifier. Better classification accuracy confirms that the features quantify the texture satisfactorily.
C. Classifier performance
The IPC classifier's performance can be evaluated by comparing it with conventional classifiers like SVM and the Euclidean distance classifier. Though SVM is not well suited to biometrics applications, owing to the need for re-optimization when new classes are added or enrolled, it can serve as a benchmark for supervised classification tasks, whereas the ROC curve clearly shows the performance of IPC and Euclidean distance on the same features for unsupervised classification tasks. Therefore both recognition rates and ROC curves have been used to check the classification performance of the new classifiers. The recognition performance of the different feature types with different classifiers is given in Table 1. Each iris strip image has been matched against all other iris strip images in the database of iris strips generated in this study. The accuracies are mean values after k-fold validation. The best results are obtained with the information set based features (Cumulative response, Effective information, Energy, Sigmoid, Hanman filter and Hanman transform), which give a maximum recognition rate of 98.1% using IPC and 99.2% using the SVM classifier. The results fall with Gabor features, which attain a recognition rate of 90% using IPC and 94% using SVM. The Euclidean distance based ROC plot is depicted in Fig. 3, which shows a maximum GAR of 96.3% at an FAR of 0.1% for the Hanman transform features. A maximum GAR of 98.3% at an FAR of 0.1% has been achieved using the Energy feature with the IPC classifier, as shown in the ROC (see Fig. 4). Barring the ROC of Gabor, all other ROCs are close together. The performance of IPC is clearly better than that of the Euclidean distance based classifier. These curves thus indicate the suitability of the information based features for iris recognition with IPC.
The results compare well with many recent papers on new features for iris recognition, such as [38]; moreover, the performance of these features has been studied on a bigger, standard database. The results presented here also improve on those presented in [53], albeit on a different database. In [34] results for the same database are reported, but with reference to an iris segmentation application, and the results in the current study are equivalent or better in terms of recognition rate. The ROC curve in this study is better than that of the feature encoding method proposed in [21]. The results are inferior to those in [15], but considering that no stress has been placed on segmentation and a smaller feature size has been used, the results in this work are significant. The feature sizes varied from a minimum of 80 to a maximum of 380.
Fig. 3 ROC using different features and the Euclidean distance based classifier (maximum GAR of 96.3% at FAR of 0.1% for the Hanman transform feature). (EI = Cumulative response, EUI = Effective information, EF = Energy feature, HF = Hanman filter, HT = Hanman transform, SF = Sigmoid, GF = Gabor filter feature.)
Fig. 4 ROC for IPC (maximum GAR of 98.3% at FAR of 0.1% for the Energy feature). (EI = Cumulative response, EUI = Effective information, EF = Energy feature, HF = Hanman filter, HT = Hanman transform, SF = Sigmoid, GF = Gabor filter feature.)
Table 1: Features and their mean recognition rates (%) with different classifiers after k-fold validation

Features                  IPC     SVM
Hanman filter             97.3    98.7
Energy                    98.1    98.9
Sigmoid                   97.8    99.2
Cumulative response       97.5    99.0
Effective information     97.3    98.6
Gabor                     90.3    94.1
Hanman transform          95.8    98.6
The voting method is applied on the recognition results of individual classifiers on varied iris strip sizes (obtained by varying the effective outer ellipse diameter). The iris region between the largest ellipse (corresponding to the limbic boundary) and the smallest ellipse (corresponding to the pupilary boundary) is divided into three sizes other than the full size. The majority voting method treats the results of the IPC classifier as votes, and the class receiving the majority of votes is taken as the final decision in Table 2. The classification rates of irises with varying widths of the iris strips, proportional to the distance between the pupilary boundary and the limbic boundary, are found to be different. However, the use of features extracted closer to the limbic boundary decreases the accuracy of detection when compared to features extracted closer to the middle of the extracted iris region (Table 2). This observation has also been made in [50]. It might be attributed to the fact that for some persons the iris textures are spread over the whole region between the pupilary boundary and the limbic boundary, while the majority of people have iris texture features lying closer to the pupilary boundary. This observation is reflected in the results of the majority voting method. The fusion of results from different sized iris strips enhances the overall recognition rate. After studying the individual classifications, it has been observed that in a few cases the correct classification is made using the small sized iris strips; this is the motivation for considering data from differently sized iris strips. The results of the fusion of results from all strip sizes and all feature types are shown in Table 2.
Using the IPC classifier on a particular strip size, the accuracy obtained is given in Table 2. It can be seen that the maximum recognition rate is obtained for the strip size of ¾ for all the features. The recognition rate on the smaller iris strips is comparatively inferior. When the decisions from the individual classifiers on strips of different sizes are combined using the majority voting method, the final decision for each feature type is shown in the last column of Table 2. There is an enhancement in the recognition rate when results from different sized iris strips are combined. The results even reach 100% when the strip level fusion results of all six features are fused.
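The strip level fusion described above reduces to a majority vote over the per-strip decisions of the IPC classifier. The function below is an illustrative sketch; its name and the first-seen tie-breaking rule are assumptions, not details specified in the paper.

```python
from collections import Counter

def fuse_strip_decisions(decisions):
    """Fuse per-strip class labels (e.g. from the full, 3/4, 1/2 and 1/4
    strips) into one decision by majority vote.

    Ties resolve to the label that appears first in the decision list.
    """
    counts = Counter(decisions)
    top = max(counts.values())
    for label in decisions:  # first-seen tie break
        if counts[label] == top:
            return label
```

For example, `fuse_strip_decisions(["id7", "id7", "id3", "id7"])` returns `"id7"`; the same function applied across feature types gives the final column of Table 2.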
The performance of voting can be improved further if we take account of the rank information, in terms of the number of votes received by each person. By applying majority voting to sectors of the iris, the problem of occlusions can be surmounted to some extent; this type of segmental approach to iris recognition has been proposed in [50]. Majority voting can also be incorporated with a reject option to detect possible erroneous classifications when the individual classifications do not reach a consensus.
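A minimal sketch of the reject option mentioned above is a vote that abstains when consensus is too weak. The function name and the agreement threshold are illustrative assumptions, not the paper's specification.

```python
from collections import Counter

def vote_with_reject(decisions, min_agreement=0.5):
    """Majority vote that abstains when consensus is too weak.

    Returns the winning label, or None (reject) when the most frequent
    label receives no more than min_agreement of the votes, flagging a
    likely misclassification for further inspection.
    """
    label, count = Counter(decisions).most_common(1)[0]
    if count / len(decisions) <= min_agreement:
        return None
    return label
```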
It can be inferred from the table that the feature level performance is augmented by the majority voting scheme. At the final stage, majority voting across different feature types enables a very high performance of 100%. This observation validates the fact that significant texture regions are present in the iris at different radial distances from the pupilary boundary. The fact that higher performance is achieved at a greater distance from the pupilary boundary refutes the earlier held belief that the iris texture is always concentrated near the pupilary boundary, and agrees with the observation in [52].
Table 2: Majority voting results for different features (accuracy by individual strips (%) for each fraction of strip size, and accuracy after the fusion of results (%))

Features                  Full     ¾       ½       ¼       After fusion of results (%)
Energy                    98.5     99.3    99.3    93.3    99.8
Hanman filter             98.3     99.3    98.8    91.8    99.3
Sigmoid                   98.3     99.3    99.3    93.6    99.5
Cumulative response       97.3     98.5    97.5    93.6    99.0
Effective information     97.8     98.8    98.0    94.0    99.3
Hanman transform          99.78    99.3    97.8    72.0    99.8

Final accuracy after the fusion of all feature types: 100%
VI CONCLUSIONS
A new concept of information sets has been introduced using the basic concepts of fuzzy sets and rough sets. New features and a classifier have been proposed based on this theory. The performance of the new features has been studied by comparing them with a conventional feature, the Gabor filter. The classifier performance has been studied by comparing it with conventional classifiers such as SVM and Euclidean distance based matching.
Different feature types are extracted from the rectangular strip evolved from the information sets. For this, the rectangular strip is partitioned into windows of different sizes. For a particular window size, the gray levels in the window are fitted with a membership function. The product of a gray level and its membership value is termed an element of an information set. Using the information values of the set we obtain the Cumulative response, Effective information, Energy, Sigmoid, Hanman filter and Hanman transform features. Note that the Hanman filter features are the result of applying a cosine function to the information values. Several classifiers are then utilized for the classification, viz. Euclidean distance, SVM and majority voting. A new classifier called the Inner product classifier has also been developed by taking the inner product of the training features and the errors between the training and test features. The performance of this classifier is similar to that of SVM, but consistent across all feature types. The new concept of information sets proposed here is validated by the performance of the new features and classifiers on iris texture.
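The pipeline summarized above can be sketched schematically: fit a membership function to a window's gray levels, form information values as gray level × membership, derive features from the information values, and score a test vector against a training vector. The Gaussian membership function, the exact feature formulas and the inner-product scoring rule below are illustrative assumptions; the paper's precise definitions differ in detail.

```python
import numpy as np

def info_values(window):
    """Information-set values for one window: gray level x membership.

    A Gaussian membership fitted to the window's statistics is an
    illustrative choice, not the paper's exact fitted function.
    """
    g = np.asarray(window, dtype=float).ravel()
    mu = np.exp(-((g - g.mean()) ** 2) / (2.0 * g.var() + 1e-9))
    return g * mu

def window_features(window):
    """A few schematic information-set features of one window."""
    info = info_values(window)
    return np.array([
        info.sum(),          # cumulative-response-style sum
        (info ** 2).sum(),   # energy-style sum of squares
        np.cos(info).sum(),  # Hanman-filter-style cosine of info values
    ])

def ipc_score(train_vec, test_vec):
    """One minimal reading of the inner-product idea: inner product of
    the training features with the train-test error; 0 for a perfect match."""
    train_vec = np.asarray(train_vec, dtype=float)
    err = train_vec - np.asarray(test_vec, dtype=float)
    return abs(float(np.dot(train_vec, err)))
```

Concatenating `window_features` over all windows of the strip yields a feature vector, and the class whose training vector minimizes `ipc_score` would be the predicted identity under this reading.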
An attempt has also been made to utilize the findings of Hollingsworth et al. [52]. A majority voting scheme was applied on differently sized strips obtained at different distances from the pupilary boundary, and this augmented the final results for each feature.
ACKNOWLEDGEMENT
This research work was funded by Department of Science and Technology (DST), Government of India.
We acknowledge with thanks the database CASIA-IrisV3 from the Chinese Academy of Sciences,
Institute of Automation.
APPENDIX A

Hanman-Anirban Entropy Function [6]

The non-normalized Hanman-Anirban entropy function for the probability distribution $P = [p_1, p_2, \ldots, p_n]$ is defined as follows:

$$H(p) = \sum_{i=1}^{n} p_i\, I(p_i), \qquad I(p_i) = e^{-(a p_i^3 + b p_i^2 + c p_i + d)} \qquad (A.1)$$

The normalized entropy $H_N(p)$ of the above can be defined as

$$H_N(p) = \frac{H(p) - e^{-(a+b+c+d)}}{\lambda} \qquad (A.2)$$

where $\lambda = e^{-(a/n^3 + b/n^2 + c/n + d)} - e^{-(a+b+c+d)}$, $a$, $b$, $c$ and $d$ are real valued parameters, and $n$ is the number of events in the probabilistic experiment (or the number of states in the system). The normalized entropy satisfies all the properties of an entropy function. In the present work $I(p_i)$ assumes the membership value when we relax the assumption that $p_i$ is a probability and instead treat it as an information source value. This entropy has already been used in image retrieval, the encryption of messages, texture analysis and the segmentation of medical images. For encryption the parameters can be estimated in closed form, whereas for texture analysis they have to be adjusted or estimated.
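The entropy (A.1) and its normalized form (A.2) can be evaluated directly. The sketch below uses illustrative default parameter values (the function names and defaults are assumptions); by construction, the normalized entropy is 0 for a degenerate distribution and 1 for the uniform distribution over n events.

```python
import numpy as np

def hanman_anirban_entropy(p, a=0.0, b=0.0, c=1.0, d=0.0):
    """Non-normalized Hanman-Anirban entropy per (A.1):
    H(p) = sum_i p_i * exp(-(a*p_i^3 + b*p_i^2 + c*p_i + d))."""
    p = np.asarray(p, dtype=float)
    gain = np.exp(-(a * p**3 + b * p**2 + c * p + d))
    return float(np.sum(p * gain))

def normalized_entropy(p, a=0.0, b=0.0, c=1.0, d=0.0):
    """Normalized entropy H_N(p) per (A.2)."""
    n = len(p)
    lo = np.exp(-(a + b + c + d))                    # degenerate distribution
    hi = np.exp(-(a / n**3 + b / n**2 + c / n + d))  # uniform distribution
    return (hanman_anirban_entropy(p, a, b, c, d) - lo) / (hi - lo)
```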
REFERENCES
[1] R. A. Boby, M. Hanmandlu, A. Sharma and M. Bindal, “Extraction of Fractal Dimension for Iris
Texture”, 5th IAPR Inter. Conf. Biom., New Delhi, India, March 29 - April 1, 2012 .
[2] L. A. Zadeh, "Fuzzy sets", Information and Control, vol. 8, pp. 338–353, 1965.
[3] C E Shannon, “A Mathematical Theory of Communication”, Bell System Technical Journal, Vol. 27,
pp.379–423, 623–656, 1948.
[4] N. R. Pal and S. K. Pal, "Some properties of the exponential entropy", Inf. Sci., vol. 66, pp. 113–117, 1992.
[5] M Hanmandlu and A Das,”Content-based image retrieval by information theoretic measure”, Defence
Science Journal, vol. 61, no. 5, pp. 415-430, Sept. 2011
[6] Z. Pawlak, "Rough sets", International Journal of Parallel Programming, vol. 11, no. 5, pp. 341–356, 1982.
[7] J Daugman,"High confidence visual recognition of persons by a test of statistical independence",IEEE
Trans. Pattern Anal. Mach. Intell., vol. 15, no 11, pp. 1148-1161, 1993.
[8] R P Wildes, "Iris recognition: An emerging biometric technology", Proc. IEEE, pp. 1348–1363, 1997.
[9] J. Daugman, "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression", IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. 36, no. 7, pp. 1169–1179, 1988.
[10] Z. He, T. Tan, Z. Sun and X. Qiu , "Towards Accurate and Fast Iris Segmentation for Iris
Biometrics". IEEE Trans. Pattern Anal. Mach. Intell.,31 (9): 1670–84, 2008.
[11] K. W. Bowyer, K. Hollingsworth and P. J. Flynn, "Image understanding for iris biometrics: A survey", Comput. Vis. Image Underst., pp. 281–307, 2008.
[12] J. G.Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation
optimized by two-dimensional visual cortical filters", J. Opt. Soc. Am. A, vol. 2,no. 7, pp. 1160-1170,
1985.
[13] B. Bonney, R. Ives, D. Etter and Y. Du, "Iris pattern extraction using bit planes and standard deviations", 38th Asilomar Conf. Signals, Systems and Computers, CA, pp. 582–586, 2004.
[14] P. Lili and X. Mei, "The algorithm of iris image processing", 4th IEEE Workshop on Automatic Identification Advanced Technologies, vol. 1, pp. 134–138, 2005.
[15] M. Vatsa, R. Singh, and A. Noore, Improving Iris Recognition Performance using Segmentation,
Quality Enhancement, Match Score Fusion and Indexing, IEEE Transactions on Systems, Man, and
Cybernetics - B, Vol. 38, No. 3, 2008.
[16] J. Daugman, "New Methods in Iris Recognition", IEEE Trans. Syst., Man, Cybern. B, vol. 37, no. 5, pp. 1167–1175, 2007.
[17] T. A. Camus and R. P. Wildes, "Reliable and fast eye finding in close-up images", Int. Conf. Pattern Recogn., Quebec City, Canada, pp. 389–394, 2002.
[18] Y. Du, B. Bonney, R. Ives, D. Etter and R. Schultz, "Analysis of partial iris recognition using a 1-D approach", Intern. Conf. Acoustics, Speech Signal Processing, Philadelphia, USA, vol. 2, pp. 961–964, 2005.
[19] J. Pillai, V. M. Patel and R. Chellappa,"Secure and robust iris recognition using random projections
and sparse representations", IEEE Trans. Pattern Anal. Mach. Intell., vol. 99, pp. 1,2011.
[20] A. Abhyankar, L. Hornak and S. Schuckers, "Off-angle iris recognition using a bi-orthogonal wavelet network system", 4th IEEE Workshop Automatic Identification Advanced Technologies, NY, USA, pp. 239–244, 2005.
[21] L. Ma, T. Tan, Y. Wang and D. Zhang, "Local intensity variation analysis for iris recognition", Pattern Recogn., vol. 37, no. 6, pp. 1287–1298, 2004.
[22] C. Tisse, L. Martin, L. Torres and M. Robert, "Person identification technique using human iris recognition", 15th Intern. Conf. on Vision Interface, Calgary, Canada, pp. 294–299, 2002.
[23] H. Huang and G. Hu, "Iris recognition based on adjustable scale wavelet transform", 27th Annu. Intern. Conf. of the IEEE Engineering in Medicine and Biology, Shanghai, China, pp. 7533–7536, 2005.
[24] K. Miyazawa, K. Ito, T. Aoki and K. Kobayashi, "An efficient iris recognition algorithm using phase-based image matching", Intern. Conf. on Image Processing, Genoa, Italy, pp. 49–52, 2005.
[25] K. Roy and P. Bhattacharya, "Iris recognition with support vector machines", Springer LNCS 3832:
Intern. Conf., Hong Kong, China, pp. 486–492, 2006.
[26] R. N. Rakvic, B. J. Ulis, R. P. Broussard, R. W. Ives and N. Steiner,” Parallelizing iris recognition”,
IEEE Trans.Inf. Forensics Security, vol. 4, no. 4, pp. 812-824, 2009.
[27] S. Venugopalan and M. Savvides, “How to generate spoofed irises from an iris code template”,IEEE
Trans.Inf. Forensics Security, vol. 6, no. 2, pp. 385-396, 2011.
[28] H. Proença, “Quality assessment of degraded iris images acquired in the visible wavelength”, IEEE
Trans.Inf. Forensics Security, vol. 6, no. 1, 2011.
[29] J. Daugman and C. Downing, "Effect of severe image compression on iris recognition performance", IEEE Trans. Inf. Forensics Security, vol. 3, no. 1, pp. 52–62, 2008.
[30] C. Belcher and Y. Du,” Information approach for iris image-quality measure”, IEEE Trans.Inf.
Forensics Security, vol. 3, no. 3, pp. 572-578, 2008.
[31] Y. Chen and M. Adjouadi, “A New Noise Tolerant Segmentation Approach to Non-ideal Iris Image
with Optimized Computational Speed”, 2009 International Conference on Image Processing,
Computer Vision,and Pattern Recognition (IPCV 09), Vol. II, pp. 547-553, Las Vegas, Nevada, USA,
July 13-16, 2009.
[32] Y Chen, M Adjouadi , C Han, J Wang, A Barreto, N Rishe and J Andrian, A highly accurate and
computationally efficient approach for unconstrained iris segmentation, Image and Vision Computing,
vol. 28, pp. 261–269, 2010
[33] Y. Chen, J. Wang, C. Han, L. Wang, and M. Adjouadi, “A robust segmentation approach to iris
recognition based on video”, 37th IEEE Applied Imagery Pattern Recognition (AIPR), Washington
DC, Oct. 15-17, 2008.
[34] S. Shah and A. Ross, “Iris segmentation using geodesic active contours”, IEEE Trans.Inf. Forensics
Security, vol.4, no. 4, pp. 824-837, 2009.
[35] V. Velisavljevic´, “Low-complexity iris coding and recognition based on directionlets”, IEEE
Trans.Inf. Forensics Security, vol. 4, no. 3, pp. 410-418, 2009.
[36] U. Park, R. R. Jillela, A. Ross and A. K. Jain, “Periocular biometrics in the visible spectrum”, IEEE
Trans.Inf. Forensics Security,vol. 6, no. 1, pp. 96-107, 2011.
[37] K. Hollingsworth, T. Peters, K. V. Bowyer and P. J. Flynn, “Iris recognition using signal-level fusion
of frames from video“, IEEE Trans.Inf. Forensics Security, vol. 4, no. 4, pp. 837-849, 2009.
[38] A. D. Rahulkar and R. S. Holambe, Half-Iris Feature Extraction and Recognition Using a New Class
of Biorthogonal Triplet Half-Band Filter Bank and Flexible k-out-of-n:A Postclassifier, IEEE
Trans.Inf. Forensics Security, vol. 7, no 1, pp 230-240, 2012.
[39] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley Longman Publishing Co., Boston, MA, USA, 1992.
[40] M. Hanmandlu, D. Jha, and R. Sharma, “Color image enhancement by fuzzy intensification”, Pattern
Recogn. Lett., Vol. 24(1-3), pp. 81-87, 2003.
[41] C. J. C. Burges, "A tutorial on support vector machines for pattern recognition", Data Min. Knowl. Discov., pp. 121–167, 1998.
[42] R. P. W. Duin, P. Juszczak, P. Paclik, E. Pekalska, D. de Ridder, D. M. J. Tax and S. Verzakov, PRTools4.1, A Matlab Toolbox for Pattern Recognition, Delft University of Technology, 2007.
[43] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines", ACM Trans. Intell. Syst. Technol., vol. 2, no. 3, pp. 27:1–27:27, 2011.
[44] R. O. Duda, P. E. Hart and D. G. Stork, Pattern Classification, New York: John Wiley & Sons, 2001.
[45] E. P. Klement, R. Mesiar and E. Pap, Triangular Norms, Kluwer Academic Publishers, Netherlands, 2000.
[46] M. M. Monwar and M. L. Gavrilova, Multimodal Biometric System Using Rank-Level Fusion
Approach, IEEE Transactions on Systems, Man, and Cybernetics - B , Vol. 39, No. 4, pp. 867-878, 2009
[47] A. Narasimhamurthy,"Theoretical Bounds of majority voting performance for a binary classification
problem", IEEE Trans. Pattern Anal. Mach. Intell., pp. 1988-1995, 2005.
[48] L. Masek, P. Kovesi, “MATLAB Source Code for a Biometric Identification System Based on Iris
Patterns,” The University of Western Australia, 2003.
[49] CASIA-IrisV3. (Online) http://www.cbsr.ia.ac.cn/IrisDatabase.
[50] L. Ma, T. Tan , Y. Wang, and D. Zhang, Personal Identification Based on Iris Texture Analysis,
IEEE Trans. Pattern Anal. Mach. Intell., Vol. 25, No. 12, pp. 1519–1533, 2003.
[51] F. Sayeed, M. Hanmandlu, A. Q. Ansari and S. Vasikarla, "Iris recognition using segmental Euclidean distances", 2011 8th Int. Conf. Inf. Tech.: New Generations, Las Vegas, Nevada, USA, pp. 520–525, April 2011.
[52] K.P. Hollingsworth, K. W. Bowyer and P. J. Flynn, The Best Bits in an Iris Code, IEEE Trans.
Pattern Anal. Mach. Intell., Vol. 31, No. 6, pp: 964 – 973, 2009
[53] A. Kumar and A. Passi, "Comparison and combination of iris matchers for reliable personal authentication", Pattern Recognition, vol. 43, pp. 1016–1026, 2010.