Final Report

STUDY AND IMPLEMENTATION OF IRIS RECOGNITION
SCHEMES
Under guidance of
DR K R RAO
UNIVERSITY OF TEXAS AT ARLINGTON
SPRING 2012
Presented by:
Ritika Jain
[email protected]
1000797700
Proposal:
This project focuses on studying and implementing the various available iris
recognition schemes and on analyzing the different algorithms using the
Chinese Academy of Sciences - Institute of Automation (CASIA) [14] database.
General working of biometric systems [3]:
A biometric system first captures a sample of the feature, which is then
transformed by a mathematical function into a biometric template. This
template provides a normalized, efficient and highly discriminating
representation of the feature, which can then be objectively compared with
other templates in order to determine identity. Most biometric systems allow
two modes of operation, namely enrolment and identification.
Brief introduction to iris recognition [19]:
Iris recognition is an automated method of biometric identification that uses
mathematical pattern-recognition techniques on video images of the irides of an
individual's eyes, whose complex random patterns are unique and can be seen
from some distance.
Comparison of iris recognition and retinal scanning [19]:
Iris recognition uses a camera similar to the one in a home video camcorder
to capture an image of the iris. The picture is taken from 3 to 10 inches
away, using camera technology with subtle infrared illumination. Retinal
scanning, in contrast, requires a very close encounter with the scanning
device, which sends a beam of light deep inside the eye to capture an image
of the retina (an intrusive acquisition process).
Masek's Principle [3]:
The iris recognition system is composed of a number of sub-systems, which
correspond to each stage of iris recognition. These stages are:
• segmentation – locating the iris region in an eye image
• normalization – creating a dimensionally consistent representation of the
iris region
• feature encoding – creating a template containing only the most
discriminating features of the iris
In Masek’s [3] method, the automatic segmentation system is based on the
Hough transform [3], and it is able to localize the circular iris and pupil
regions. The extracted iris region is then normalized into a rectangular block
with constant dimensions to account for imaging inconsistencies and finally,
the phase data from 1D Log-Gabor [3] filters is extracted and quantized to
four levels to encode the unique pattern of the iris into a bit-wise biometric
template. The Hamming distance [3] is employed for classification of iris
templates, and two templates are found to match if a test of statistical
independence has failed. The input to the system is an eye image, and the
output is an iris template, which will provide a mathematical representation
of the iris region.
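The stage ordering described above can be sketched in Python. This is only an illustrative skeleton: the stage bodies are trivial stand-ins (in Masek's code the real work is done by segmentiris, normaliseiris and encode), and all names, sizes and thresholds below are arbitrary assumptions made for the example.

```python
import numpy as np

# Illustrative skeleton of the pipeline; the stage bodies are
# trivial stand-ins, not Masek's algorithms.

def segment(eye):
    # stand-in: pretend the iris is a centred circle of fixed radius
    h, w = eye.shape
    return (h // 2, w // 2, min(h, w) // 4)   # (row, col, radius)

def normalise(eye, circle, radial_res=8, angular_res=32):
    # stand-in: sample a fixed-size polar grid by nearest neighbour
    r0, c0, rad = circle
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.1, 1.0, radial_res) * rad
    rows = np.clip(np.round(r0 + np.outer(radii, np.sin(theta))).astype(int),
                   0, eye.shape[0] - 1)
    cols = np.clip(np.round(c0 + np.outer(radii, np.cos(theta))).astype(int),
                   0, eye.shape[1] - 1)
    return eye[rows, cols]

def encode(polar):
    # stand-in: one template bit per sample by thresholding at the mean
    return (polar > polar.mean()).astype(np.uint8)

eye = np.arange(64 * 64, dtype=float).reshape(64, 64)
template = encode(normalise(eye, segment(eye)))
print(template.shape)   # radial_res x angular_res bits in this sketch
```

The point of the sketch is only the data flow: eye image in, fixed-size binary template out, with each stage consuming the previous stage's output.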
Types of segmentation techniques available [3]:
• Hough transform (employed by Wildes et al., [7])
• Daugman's integro-differential operator approach, [5]
• Active contour models (used by Ritter, [17])
• Eyelash and noise detection (used by Kong and Zhang, [16])
Segmentation technique used in Masek's method [3]:
The Hough transform is used, which first involves Canny edge [10]
detection to generate edge map using Canny edge detection MATLAB
function [10]. Eyelids detection is done using Hough transform [10], [11]
(The used code is described below).
MATLAB functions involved in segmentation technique [3], [10], [11]:
• createiristemplate – generates a biometric template from an iris eye
image.
• segmentiris – performs automatic segmentation of the iris region from an
eye image; also isolates noise areas such as occluding eyelids and
eyelashes.
• addcircle – circle generator for adding weights into a Hough accumulator
array.
• adjgamma – adjusts image gamma.
• circlecoords – returns the pixel coordinates of a circle defined by its
radius and the x, y coordinates of its centre.
• CANNY – performs Canny edge detection.
• findcircle – returns the coordinates of a circle in an image, using the
Hough transform and Canny edge detection to create the edge map.
• findline – returns the coordinates of a line in an image, using the
Hough transform and Canny edge detection to create the edge map.
• houghcircle – takes an edge map image and performs the Hough transform
for finding circles in the image.
• HYSTHRESH – performs hysteresis thresholding of an image.
• linecoords – returns the x, y coordinates of positions along a line.
• NONMAXSUP – performs non-maxima suppression on an image using an
orientation image; it is assumed that the orientation image gives feature
normal orientation angles in degrees (0-180).
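The circle-voting idea behind houghcircle and addcircle can be illustrated with a small Python sketch. The synthetic edge map, the 60 angular samples per vote and the radius range are assumptions made for this example, not values from Masek's code.

```python
import numpy as np

def hough_circle(edge, rmin, rmax):
    # Vote each edge pixel onto candidate circle centres, with one
    # accumulator layer per radius (the role of houghcircle/addcircle).
    rows, cols = edge.shape
    acc = np.zeros((rows, cols, rmax - rmin + 1))
    ys, xs = np.nonzero(edge)
    thetas = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    for y0, x0 in zip(ys, xs):
        for ri, r in enumerate(range(rmin, rmax + 1)):
            cy = np.clip(np.round(y0 - r * np.sin(thetas)).astype(int), 0, rows - 1)
            cx = np.clip(np.round(x0 - r * np.cos(thetas)).astype(int), 0, cols - 1)
            acc[cy, cx, ri] += 1   # duplicate centres from one pixel count once
    return acc

# synthetic edge map: a circle of radius 10 centred at (25, 25)
edge = np.zeros((50, 50))
t = np.linspace(0, 2 * np.pi, 200)
edge[np.round(25 + 10 * np.sin(t)).astype(int),
     np.round(25 + 10 * np.cos(t)).astype(int)] = 1
acc = hough_circle(edge, 8, 12)
y, x, ri = np.unravel_index(np.argmax(acc), acc.shape)
print(int(y), int(x), int(ri) + 8)   # peak should land near (25, 25), radius near 10
```

The accumulator peak marks the centre and radius that the most edge pixels agree on, which is how findcircle picks the iris and pupil boundaries.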
Normalization techniques available [3]:
• Daugman's rubber sheet model, [5]
• Image registration technique, [7]
• Virtual circles technique, [8]
Normalization technique used in Masek's method [3]:
For normalization of iris region a technique based on Daugman's rubber
sheet model [5] is implemented. In this, the center of the pupil is considered
as the reference point and radial vectors pass through the iris region. A
number of data points are selected along each radial line and this is defined
as the radial resolution. The number of radial lines going around the iris
region is defined as the angular resolution. A constant number of points are
chosen along each radial line, so that a constant number of radial data points
are taken, irrespective of how narrow or wide the radius is at a particular
angle. The normalized pattern is created by backtracking to find
the Cartesian coordinates of data points from the radial and angular positions
in the normalized pattern.
(The used code is described below)
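A minimal Python sketch of the rubber-sheet remapping follows, assuming for simplicity that the pupil and iris circles are concentric; Masek's normaliseiris additionally handles a pupil centre displaced from the iris centre and uses bilinear interpolation (interp2) rather than the nearest-neighbour sampling used here.

```python
import numpy as np

def rubber_sheet(eye, pupil, iris, radial_res=20, angular_res=240):
    # Simplified Daugman-style rubber sheet: sample a fixed number of
    # points along each radial line, so the output size is constant
    # regardless of pupil/iris size.
    px, py, pr = pupil                 # pupil centre (x, y) and radius
    ix, iy, ir = iris                  # iris circle (centre assumed equal here)
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    rfrac = np.linspace(0, 1, radial_res)
    # radius grows linearly from the pupil boundary to the iris boundary
    radius = pr + np.outer(rfrac, np.full_like(theta, ir - pr))
    x = np.clip(np.round(px + radius * np.cos(theta)).astype(int),
                0, eye.shape[1] - 1)
    y = np.clip(np.round(py - radius * np.sin(theta)).astype(int),
                0, eye.shape[0] - 1)
    return eye[y, x]

eye = np.random.default_rng(0).random((120, 160))
polar = rubber_sheet(eye, (80, 60, 15), (80, 60, 50))
print(polar.shape)    # radial_res x angular_res, independent of the radii
```

Because a constant number of points is taken along each radial line, two images of the same iris with different pupil dilation map to rectangles of identical dimensions, which is the purpose of the normalization stage.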
Feature extraction and encoding techniques available [3]:
• Gabor filters [3]
• Log-Gabor filters (used by Masek, [4])
• Zero crossings of 1D wavelet (used by Boles et al., [8])
• Laplacian of Gaussian filters (used by Wildes et al., [7])
Feature encoding and extraction technique implemented in Masek's
method [3]:
Feature encoding is implemented by convolving the normalized iris pattern
with 1D Log-Gabor wavelets. The 2D normalized pattern is broken up into a
number of 1D signals, and these 1D signals are convolved with the 1D
Log-Gabor wavelets. The output of filtering is then phase quantized to four
levels using the Daugman method [1], with each filter producing two bits of
data for each phasor. The output of phase quantization is chosen to be a
Gray code, so that when going from one quadrant to another, only 1 bit
changes.
(The used code is described below)
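The four-level phase quantization can be sketched as follows: the two template bits per phasor are simply the signs of the real and imaginary parts of the complex filter response, which yields a Gray code over the four phase quadrants.

```python
import numpy as np

def phase_quantise(response):
    # Quantise complex filter output to 2 bits per phasor using the signs
    # of the real and imaginary parts. Adjacent phase quadrants differ in
    # exactly one bit, i.e. the encoding is a Gray code.
    return ((np.real(response) > 0).astype(np.uint8),
            (np.imag(response) > 0).astype(np.uint8))

# one example phasor in each of the four quadrants
z = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
h_re, h_im = phase_quantise(z)
pairs = [(int(a), int(b)) for a, b in zip(h_re, h_im)]
print(pairs)   # walking around the quadrants flips one bit per step
```

The Gray-code property matters for matching: a phasor that sits near a quadrant boundary can only corrupt one of its two bits, so small phase noise costs at most one bit of Hamming distance per phasor.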
Functions involved in steps 2 and 3 – Normalization and Encoding [3],
[10], [11]:
• normaliseiris – normalization of the iris region by unwrapping the
circular region into a rectangular block of constant dimensions.
• encode – generates a biometric template from the normalized iris region;
also generates the corresponding noise mask.
• gaborconvolve – convolves each row of an image with 1D log-Gabor filters.
Techniques used for matching of patterns [3]:
• Hamming distance (employed by Daugman) [3], [5]
• Weighted Euclidean distance (used by Zhu et al., [18])
• Normalized correlation (used by Wildes et al., [7])
Matching algorithm used in Masek's method [3]:
For matching, the Hamming distance is chosen, since bit-wise comparison is
required. The Hamming distance algorithm employed also incorporates noise
masking, so that only significant bits are used in calculating the Hamming
distance between two iris templates. When taking the Hamming distance, only
those bits in the iris pattern that correspond to '0' bits in the noise
masks of both iris patterns are used in the calculation [3].
(The used code is described below)
Functions involved in step 4 – Matching [3], [10], [11]:
• gethammingdistance – returns the Hamming distance between two iris
templates; incorporates noise masks, so noise bits are not used in
calculating the HD.
• shiftbits – shifts the bit-wise iris patterns in order to find the best
match. Each shift is by two bit values, left to right, since one pixel
value in the normalized iris pattern gives two bit values in the template.
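A Python sketch of the masked, shifted Hamming distance follows. For clarity each column here is a single bit, whereas Masek's shiftbits moves two bits per shift (one pixel of the normalized pattern); the templates and masks below are synthetic.

```python
import numpy as np

def hamming_distance(t1, m1, t2, m2, shifts=8):
    # Masked Hamming distance over a range of circular column shifts
    # (the roles of gethammingdistance and shiftbits). Shifting absorbs
    # rotation of the eye between the two captures.
    best = np.nan
    for s in range(-shifts, shifts + 1):
        t1s, m1s = np.roll(t1, s, axis=1), np.roll(m1, s, axis=1)
        valid = ~(m1s | m2)              # bits noise-free in both templates
        total = valid.sum()
        if total == 0:
            continue                     # everything masked at this shift
        hd = np.logical_xor(t1s, t2)[valid].sum() / total
        if np.isnan(best) or hd < best:
            best = hd                    # keep the best-aligned shift
    return best

rng = np.random.default_rng(1)
t = rng.integers(0, 2, (20, 240)).astype(bool)
noise = np.zeros_like(t)
d = hamming_distance(t, noise, np.roll(t, 3, axis=1), noise)
print(d)   # 0.0: the rotated copy aligns perfectly at one of the shifts
```

The minimum over shifts is what makes the comparison rotation-tolerant: a genuine pair lines up at some shift and its distance collapses, while an impostor pair stays near 0.5 at every shift.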
MATLAB code
The MATLAB code (Appendix A) gives the functions involved in Masek's iris
recognition process [3]; i.e. segmentation, normalization, feature encoding
and matching.
The segmentation system is based on the Hough transform [3]; the extracted
iris region is normalized into a rectangular block with constant dimensions,
and finally a biometric template is made.
The Hamming distance [3] is employed to classify pairs of iris templates as
matching or non-matching according to the set Hamming distance threshold.
The input to the system is an eye image, and the output is an iris template.
The various functions involved have been described above.
Test Results:
1. Output Segmented Images:
Figure 1: 001_1_1.bmp
Figure 2: 001_1_3.bmp
Figure 3: Img_2_1_1.jpg
Figure 4: Img_2_1_2.jpg
2. Output Normalized Images
Figure 5: 001_1_1.bmp
Figure 6: 001_1_3.bmp
Figure 7: Img_2_1_1.jpg
Figure 8: Img_2_1_2.jpg
3. Output Noise Images
Figure 9: 001_1_1.bmp
Figure 10: 001_1_3.bmp
Figure 11: Img_2_1_1.jpg
Figure 12: Img_2_1_2.jpg
4. Output Polar Noise Images
Figure 13: 001_1_1.bmp
Figure 14: 001_1_3.bmp
Figure 15: Img_2_1_1.jpg
Figure 16: Img_2_1_2.jpg
Table 1 shows the calculated Hamming distance for the four tests conducted.
If the calculated Hamming distance is less than a preset threshold (0.4 for
the tests conducted, [3]), the images are said to be related; otherwise the
images are different.
Test  Input 1        Input 2        Hamming Distance  Match Found / No Match Found
1     001_1_1.bmp    001_1_3.bmp    0.2647            Match Found
2     Img_2_1_1.jpg  Img_2_1_2.jpg  0.1506            Match Found
3     001_1_1.bmp    001_1_1.bmp    0                 Match Found
4     001_1_1.bmp    Img_2_1_1.jpg  0.4454            No Match Found

Table 1: Calculated Hamming distance for four pairs of test inputs
Appendix A
% ADDCIRCLE
%
% Arguments:
%   h      - 2D accumulator array.
%   c      - [x,y] coords of centre of circle.
%   radius - radius of the circle.
%   weight - optional weight of values to be added to the
%            accumulator array (defaults to 1).
% Returns:
%   h      - updated accumulator array.
function h = addcircle(h, c, radius, weight)
[hr, hc] = size(h);
if nargin == 3
weight = 1;
end
if any(c-fix(c))
error('Circle centre must be in integer coordinates');
end
if radius-fix(radius)
error('Radius must be an integer');
end
x = 0:fix(radius/sqrt(2));
costheta = sqrt(1 - (x.^2 / radius^2));
y = round(radius*costheta);
px = c(2) + [x  y  y  x -x -y -y -x];
py = c(1) + [y  x -x -y -y -x  x  y];
validx = px>=1 & px<=hr;
validy = py>=1 & py<=hc;
valid = find(validx & validy);
px = px(valid);
py = py(valid);
ind = px+(py-1)*hr;
h(ind) = h(ind) + weight;
% ADJGAMMA
%
% function g = adjgamma(im, g)
%
% Arguments:
%   im - image to be processed.
%   g  - image gamma value.
%        Values in the range 0-1 enhance contrast of bright
%        regions, values > 1 enhance contrast in dark regions.
function newim = adjgamma(im, g)
if g <= 0
error('Gamma value must be > 0');
end
if isa(im,'uint8')
newim = double(im);
else
newim = im;
end
newim = newim-min(min(newim));
newim = newim./max(max(newim));
newim = newim.^(1/g);
% CANNY
%
% Arguments:
%   im       - image to be processed
%   sigma    - standard deviation of Gaussian smoothing filter
%              (typically 1)
%   scaling  - factor to reduce input image by
%   vert     - weighting for vertical gradients
%   horz     - weighting for horizontal gradients
%
% Returns:
%   gradient - edge strength image (gradient amplitude)
%   or       - orientation image (in degrees 0-180, positive
%              anti-clockwise)
function [gradient, or] = canny(im, sigma, scaling, vert, horz)
xscaling = vert;
yscaling = horz;
hsize = [6*sigma+1, 6*sigma+1];
gaussian = fspecial('gaussian',hsize,sigma);
im = filter2(gaussian,im);
im = imresize(im, scaling);
[rows, cols] = size(im);
h  = [ im(:,2:cols)  zeros(rows,1) ] - [ zeros(rows,1)  im(:,1:cols-1) ];
v  = [ im(2:rows,:); zeros(1,cols) ] - [ zeros(1,cols); im(1:rows-1,:) ];
d1 = [ im(2:rows,2:cols) zeros(rows-1,1); zeros(1,cols) ] - ...
     [ zeros(1,cols); zeros(rows-1,1) im(1:rows-1,1:cols-1) ];
d2 = [ zeros(1,cols); im(1:rows-1,2:cols) zeros(rows-1,1) ] - ...
     [ zeros(rows-1,1) im(2:rows,1:cols-1); zeros(1,cols) ];
X = ( h + (d1 + d2)/2.0 ) * xscaling;
Y = ( v + (d1 - d2)/2.0 ) * yscaling;
gradient = sqrt(X.*X + Y.*Y);
or = atan2(-Y, X);
neg = or<0;
or = or.*~neg + (or+pi).*neg;
or = or*180/pi;
% circlecoords
%
% Arguments:
%   c       - an array containing the centre coordinates of the
%             circle [x,y]
%   r       - the radius of the circle
%   imgsize - size of the image array to plot coordinates onto
%   nsides  - the circle is actually approximated by a polygon; this
%             argument gives the number of sides used in the
%             approximation. Default is 600.
%
% Output:
%   x       - an array containing x coordinates of circle boundary
%             points
%   y       - an array containing y coordinates of circle boundary
%             points
function [x,y] = circlecoords(c, r, imgsize,nsides)
if nargin == 3
nsides = 600;
end
nsides = round(nsides);
a = [0:pi/nsides:2*pi];
xd = (double(r)*cos(a)+ double(c(1)) );
yd = (double(r)*sin(a)+ double(c(2)) );
xd = round(xd);
yd = round(yd);
xd2 = xd;
coords = find(xd>imgsize(2));
xd2(coords) = imgsize(2);
coords = find(xd<=0);
xd2(coords) = 1;
yd2 = yd;
coords = find(yd>imgsize(1));
yd2(coords) = imgsize(1);
coords = find(yd<=0);
yd2(coords) = 1;
x = int32(xd2);
y = int32(yd2);
% createiristemplate
%
% Arguments:
%   eyeimage_filename - the file name of the eye image
%
function [template, mask] = createiristemplate(eyeimage_filename)
global DIAGPATH
DIAGPATH = 'diagnostics\';
radial_res = 40;
angular_res = 240;
nscales = 1;          % number of log-Gabor filter scales used by encode
minWaveLength = 18;
mult = 1;
sigmaOnf = 0.5;
eyeimage = imread(eyeimage_filename);
savefile = [eyeimage_filename,'-houghpara.mat'];
[stat,mess]=fileattrib(savefile);
[circleiris circlepupil imagewithnoise] = segmentiris(eyeimage);
save(savefile,'circleiris','circlepupil','imagewithnoise');
imagewithnoise2 = uint8(imagewithnoise);
imagewithcircles = uint8(eyeimage);
[x,y] = circlecoords([circleiris(2),circleiris(1)],circleiris(3),size(eyeimage));
ind2 = sub2ind(size(eyeimage),double(y),double(x));
[xp,yp] = circlecoords([circlepupil(2),circlepupil(1)],circlepupil(3),size(eyeimage));
ind1 = sub2ind(size(eyeimage),double(yp),double(xp));
imagewithnoise2(ind2) = 255;
imagewithnoise2(ind1) = 255;
imagewithcircles(ind2) = 255;
imagewithcircles(ind1) = 255;
w = cd;
cd(DIAGPATH);
imwrite(imagewithnoise2,[eyeimage_filename,'-noise.jpg'],'jpg');
imwrite(imagewithcircles,[eyeimage_filename,'-segmented.jpg'],'jpg');
cd(w);
[polar_array noise_array] = normaliseiris(imagewithnoise, circleiris(2), ...
    circleiris(1), circleiris(3), circlepupil(2), circlepupil(1), ...
    circlepupil(3), eyeimage_filename, radial_res, angular_res);
w = cd;
cd(DIAGPATH);
imwrite(polar_array,[eyeimage_filename,'-polar.jpg'],'jpg');
imwrite(noise_array,[eyeimage_filename,'-polarnoise.jpg'],'jpg');
cd(w);
[template mask] = encode(polar_array, noise_array, nscales, minWaveLength, ...
    mult, sigmaOnf);
% encode
%
% Arguments:
%   polar_array   - normalised iris region
%   noise_array   - corresponding normalised noise region map
%   nscales       - number of filters to use in encoding
%   minWaveLength - base wavelength
%   mult          - multiplicative factor between each filter
%   sigmaOnf      - bandwidth parameter
%
% Output:
%   template      - the binary iris biometric template
%   mask          - the binary iris noise mask
function [template, mask] = encode(polar_array, noise_array, nscales, ...
    minWaveLength, mult, sigmaOnf)
[E0 filtersum] = gaborconvolve(polar_array, nscales, minWaveLength, mult, sigmaOnf);
length = size(polar_array,2)*2*nscales;
template = zeros(size(polar_array,1), length);
length2 = size(polar_array,2);
h = 1:size(polar_array,1);
%create the iris template
mask = zeros(size(template));
for k=1:nscales
E1 = E0{k};
H1 = real(E1) > 0;
H2 = imag(E1) > 0;
H3 = abs(E1) < 0.0001;
for i=0:(length2-1)
ja = double(2*nscales*(i));
template(h,ja+(2*k)-1) = H1(h, i+1);
template(h,ja+(2*k)) = H2(h,i+1);
mask(h,ja+(2*k)-1) = noise_array(h, i+1) | H3(h, i+1);
mask(h,ja+(2*k)) = noise_array(h, i+1) | H3(h, i+1);
end
end
% findcircle
%
% Arguments:
%   image    - the image in which to find circles
%   lradius  - lower radius to search for
%   uradius  - upper radius to search for
%   scaling  - scaling factor for speeding up the Hough transform
%   sigma    - amount of Gaussian smoothing to apply for creating
%              the edge map
%   hithres  - threshold for creating the edge map
%   lowthres - threshold for connected edges
%   vert     - vertical edge contribution (0-1)
%   horz     - horizontal edge contribution (0-1)
%
% Output:
%   row, col - centre coordinates of the strongest circle found
%   r        - radius of that circle
function [row, col, r] = findcircle(image, lradius, uradius, scaling, ...
    sigma, hithres, lowthres, vert, horz)
lradsc = round(lradius*scaling);
uradsc = round(uradius*scaling);
rd = round(uradius*scaling - lradius*scaling);
[I2 or] = canny(image, sigma, scaling, vert, horz);
I3 = adjgamma(I2, 1.9);
I4 = nonmaxsup(I3, or, 1.5);
edgeimage = hysthresh(I4, hithres, lowthres);
h = houghcircle(edgeimage, lradsc, uradsc);
maxtotal = 0;
for i=1:rd
layer = h(:,:,i);
[maxlayer] = max(max(layer));
if maxlayer > maxtotal
maxtotal = maxlayer;
r = int32((lradsc+i) / scaling);
[row,col] = find(layer == maxlayer);
row = int32(row(1) / scaling);
col = int32(col(1) / scaling);
end
end
% findline
%
% Arguments:
%   image - the input image
%
% Output:
%   lines - parameters of the detected line in polar form
function lines = findline(image)
[I2 or] = canny(image, 2, 1, 0.00, 1.00);
I3 = adjgamma(I2, 1.9);
I4 = nonmaxsup(I3, or, 1.5);
edgeimage = hysthresh(I4, 0.20, 0.15);
theta = (0:179)';
[R, xp] = radon(edgeimage, theta);
maxv = max(max(R));
if maxv > 25
i = find(R == max(max(R)));
else
lines = [];
return;
end
[foo, ind] = sort(-R(i));
u = size(i,1);
k = i(ind(1:u));
[y,x]=ind2sub(size(R),k);
t = -theta(x)*pi/180;
r = xp(y);
lines = [cos(t) sin(t) -r];
cx = size(image,2)/2-1;
cy = size(image,1)/2-1;
lines(:,3) = lines(:,3) - lines(:,1)*cx - lines(:,2)*cy;
% gaborconvolve
%
% Arguments:
%   im            - the image to convolve
%   nscale        - number of filters to use
%   minWaveLength - wavelength of the basis filter
%   mult          - multiplicative factor between each filter
%   sigmaOnf      - ratio of the standard deviation of the Gaussian
%                   describing the log-Gabor filter's transfer function
%                   in the frequency domain to the filter centre
%                   frequency
function [EO, filtersum] = gaborconvolve(im, nscale, minWaveLength, mult, ...
sigmaOnf)
[rows cols] = size(im);
filtersum = zeros(1,size(im,2));
EO = cell(1, nscale);
ndata = cols;
if mod(ndata,2) == 1
ndata = ndata-1;
end
logGabor = zeros(1,ndata);
result = zeros(rows,ndata);
radius = [0:fix(ndata/2)]/fix(ndata/2)/2;
radius(1) = 1;
wavelength = minWaveLength;
for s = 1:nscale,
fo = 1.0/wavelength;
rfo = fo/0.5;
logGabor(1:ndata/2+1) = exp((-(log(radius/fo)).^2) / (2 * log(sigmaOnf)^2));
logGabor(1) = 0;
filter = logGabor;
filtersum = filtersum+filter;
for r = 1:rows
% For each row
signal = im(r,1:ndata);
imagefft = fft( signal );
result(r,:) = ifft(imagefft .* filter);
end
EO{s} = result;
wavelength = wavelength * mult;
end
filtersum = fftshift(filtersum);
% gethammingdistance
%
% Arguments:
%   template1 - first template
%   mask1     - corresponding noise mask
%   template2 - second template
%   mask2     - corresponding noise mask
%   scales    - the number of filters used to encode the templates,
%               needed for shifting
%
% Output:
%   hd        - the Hamming distance as a ratio
function hd = gethammingdistance(template1, mask1, template2, mask2, scales)
template1 = logical(template1);
mask1 = logical(mask1);
template2 = logical(template2);
mask2 = logical(mask2);
hd = NaN;
for shifts=-8:8
template1s = shiftbits(template1, shifts,scales);
mask1s = shiftbits(mask1, shifts,scales);
mask = mask1s | mask2;
nummaskbits = sum(sum(mask == 1));
totalbits = (size(template1s,1)*size(template1s,2)) - nummaskbits;
C = xor(template1s,template2);
C = C & ~mask;
bitsdiff = sum(sum(C==1));
if totalbits == 0
hd = NaN;
else
hd1 = bitsdiff / totalbits;
if hd1 < hd || isnan(hd)
hd = hd1;
end
end
end
% houghcircle
%
% Arguments:
%   edgeim     - the edge map image to be transformed
%   rmin, rmax - the minimum and maximum radius values of circles
%                to search for
%
% Output:
%   h          - the Hough transform
function h = houghcircle(edgeim, rmin, rmax)
[rows,cols] = size(edgeim);
nradii = rmax-rmin+1;
h = zeros(rows,cols,nradii);
[y,x] = find(edgeim~=0);
for index = 1:size(y,1)
cx = x(index);
cy = y(index);
for n=1:nradii
h(:,:,n) = addcircle(h(:,:,n),[cx,cy],n+rmin);
end
end
% HYSTHRESH
%
% Arguments:
%   im - image to be thresholded (assumed to be non-negative)
%   T1 - upper threshold value
%   T2 - lower threshold value
function bw = hysthresh(im, T1, T2)
if (T2 > T1 | T2 < 0 | T1 < 0)
error('T1 must be >= T2 and both must be >= 0 ');
end
[rows, cols] = size(im);
rc = rows*cols;
rcmr = rc - rows;
rp1 = rows+1;
bw = im(:);
pix = find(bw > T1);
npix = size(pix,1);
stack = zeros(rows*cols,1);
stack(1:npix) = pix;
stp = npix;
for k = 1:npix
bw(pix(k)) = -1;
end
O = [-1, 1, -rows-1, -rows, -rows+1, rows-1, rows, rows+1];
while stp ~= 0
v = stack(stp);
stp = stp - 1;
if v > rp1 & v < rcmr
index = O + v;   % absolute indices of the 8 neighbours of pixel v
for l = 1:8
ind = index(l);
if bw(ind) > T2
stp = stp+1;
stack(stp) = ind;
bw(ind) = -1;
end
end
end
end
bw = (bw == -1);
bw = reshape(bw,rows,cols);
% linecoords
%
% Arguments:
%   lines  - an array containing parameters of the line in polar form
%   imsize - size of the image, needed so that x y coordinates are
%            within the image boundary
function [x,y] = linecoords(lines, imsize)
xd = [1:imsize(2)];
yd = (-lines(3) - lines(1)*xd ) / lines(2);
coords = find(yd>imsize(1));
yd(coords) = imsize(1);
coords = find(yd<1);
yd(coords) = 1;
x = int32(xd);
y = int32(yd);
% NONMAXSUP
%
% Arguments:
%   inimage - image to be non-maxima suppressed.
%   orient  - image containing feature normal orientation angles in
%             degrees (0-180), angles positive anti-clockwise.
%   radius  - distance in pixel units to be looked at on each side of
%             each pixel when determining whether it is a local maxima
%             or not. (Suggested value about 1.2 - 1.5)
function im = nonmaxsup(inimage, orient, radius)
if size(inimage) ~= size(orient)
error('image and orientation image are of different sizes');
end
if radius < 1
error('radius must be >= 1');
end
[rows,cols] = size(inimage);
im = zeros(rows,cols);
iradius = ceil(radius);
angle = [0:180].*pi/180;
xoff = radius*cos(angle);
yoff = radius*sin(angle);
hfrac = xoff - floor(xoff);
vfrac = yoff - floor(yoff);
orient = fix(orient)+1;
for row = (iradius+1):(rows - iradius)
for col = (iradius+1):(cols - iradius)
or = orient(row,col);
x = col + xoff(or);
y = row - yoff(or);
fx = floor(x);
cx = ceil(x);
fy = floor(y);
cy = ceil(y);
tl = inimage(fy,fx);
tr = inimage(fy,cx);
bl = inimage(cy,fx);
br = inimage(cy,cx);
upperavg = tl + hfrac(or) * (tr - tl);
loweravg = bl + hfrac(or) * (br - bl);
v1 = upperavg + vfrac(or) * (loweravg - upperavg);
if inimage(row, col) > v1
x = col - xoff(or);
y = row + yoff(or);
fx = floor(x);
cx = ceil(x);
fy = floor(y);
cy = ceil(y);
tl = inimage(fy,fx);
tr = inimage(fy,cx);
bl = inimage(cy,fx);
br = inimage(cy,cx);
upperavg = tl + hfrac(or) * (tr - tl);
loweravg = bl + hfrac(or) * (br - bl);
v2 = upperavg + vfrac(or) * (loweravg - upperavg);
if inimage(row,col) > v2
im(row, col) = inimage(row, col);
end
end
end
end
% normaliseiris
%
% Arguments:
%   image             - the input eye image to extract iris data from
%   x_iris            - the x coordinate of the circle defining the
%                       iris boundary
%   y_iris            - the y coordinate of the circle defining the
%                       iris boundary
%   r_iris            - the radius of the circle defining the iris
%                       boundary
%   x_pupil           - the x coordinate of the circle defining the
%                       pupil boundary
%   y_pupil           - the y coordinate of the circle defining the
%                       pupil boundary
%   r_pupil           - the radius of the circle defining the pupil
%                       boundary
%   eyeimage_filename - original filename of the input eye image
%   radpixels         - radial resolution, defines vertical dimension
%                       of normalised representation
%   angulardiv        - angular resolution, defines horizontal
%                       dimension of normalised representation
function [polar_array, polar_noise] = normaliseiris(image, x_iris, y_iris, r_iris,...
x_pupil, y_pupil, r_pupil,eyeimage_filename, radpixels, angulardiv)
global DIAGPATH
radiuspixels = radpixels + 2;
angledivisions = angulardiv-1;
r = 0:(radiuspixels-1);
theta = 0:2*pi/angledivisions:2*pi;
x_iris = double(x_iris);
y_iris = double(y_iris);
r_iris = double(r_iris);
x_pupil = double(x_pupil);
y_pupil = double(y_pupil);
r_pupil = double(r_pupil);
% calculate displacement of pupil center from the iris center
ox = x_pupil - x_iris;
oy = y_pupil - y_iris;
if ox <= 0
sgn = -1;
elseif ox > 0
sgn = 1;
end
if ox==0 && oy > 0
sgn = 1;
end
r = double(r);
theta = double(theta);
a = ones(1,angledivisions+1)* (ox^2 + oy^2);
if ox == 0
phi = pi/2;
else
phi = atan(oy/ox);
end
b = sgn.*cos(pi - phi - theta);
r = (sqrt(a).*b) + ( sqrt( a.*(b.^2) - (a - (r_iris^2))));
r = r - r_pupil;
rmat = ones(1,radiuspixels)'*r;
rmat = rmat.* (ones(angledivisions+1,1)*[0:1/(radiuspixels-1):1])';
rmat = rmat + r_pupil;
rmat = rmat(2:(radiuspixels-1), :);
xcosmat = ones(radiuspixels-2,1)*cos(theta);
xsinmat = ones(radiuspixels-2,1)*sin(theta);
xo = rmat.*xcosmat;
yo = rmat.*xsinmat;
xo = x_pupil+xo;
yo = y_pupil-yo;
[x,y] = meshgrid(1:size(image,2),1:size(image,1));
polar_array = interp2(x,y,image,xo,yo);
polar_noise = zeros(size(polar_array));
coords = find(isnan(polar_array));
polar_noise(coords) = 1;
polar_array = double(polar_array)./255;
coords = find(xo > size(image,2));
xo(coords) = size(image,2);
coords = find(xo < 1);
xo(coords) = 1;
coords = find(yo > size(image,1));
yo(coords) = size(image,1);
coords = find(yo<1);
yo(coords) = 1;
xo = round(xo);
yo = round(yo);
xo = int32(xo);
yo = int32(yo);
ind1 = sub2ind(size(image),double(yo),double(xo));
image = uint8(image);
image(ind1) = 255;
[x,y] = circlecoords([x_iris,y_iris],r_iris,size(image));
ind2 = sub2ind(size(image),double(y),double(x));
[xp,yp] = circlecoords([x_pupil,y_pupil],r_pupil,size(image));
ind1 = sub2ind(size(image),double(yp),double(xp));
image(ind2) = 255;
image(ind1) = 255;
w = cd;
cd(DIAGPATH);
imwrite(image,[eyeimage_filename,'-normal.jpg'],'jpg');
cd(w);
coords = find(isnan(polar_array));
polar_array2 = polar_array;
polar_array2(coords) = 0.5;
avg = sum(sum(polar_array2)) / (size(polar_array,1)*size(polar_array,2));
polar_array(coords) = avg;
% segmentiris
%
% Arguments:
%   eyeimage       - the input eye image
%
% Output:
%   circleiris     - centre coordinates and radius of the detected
%                    iris boundary
%   circlepupil    - centre coordinates and radius of the detected
%                    pupil boundary
%   imagewithnoise - original eye image, but with the location of
%                    noise marked with NaN values
function [circleiris, circlepupil, imagewithnoise] = segmentiris(eyeimage)
lpupilradius = 28;
upupilradius = 75;
lirisradius = 80;
uirisradius = 150;
scaling = 0.4;
reflecthres = 240;
[row, col, r] = findcircle(eyeimage, lirisradius, uirisradius, scaling, ...
    2, 0.20, 0.19, 1.00, 0.00);
circleiris = [row col r];
rowd = double(row);
cold = double(col);
rd = double(r);
irl = round(rowd-rd);
iru = round(rowd+rd);
icl = round(cold-rd);
icu = round(cold+rd);
imgsize = size(eyeimage);
if irl < 1
irl = 1;
end
if icl < 1
icl = 1;
end
if iru > imgsize(1)
iru = imgsize(1);
end
if icu > imgsize(2)
icu = imgsize(2);
end
imagepupil = eyeimage( irl:iru,icl:icu);
[rowp, colp, r] = findcircle(imagepupil, lpupilradius, upupilradius, ...
    0.6, 2, 0.25, 0.25, 1.00, 1.00);
rowp = double(rowp);
colp = double(colp);
r = double(r);
row = double(irl) + rowp;
col = double(icl) + colp;
row = round(row);
col = round(col);
circlepupil = [row col r];
imagewithnoise = double(eyeimage);
%find top eyelid
topeyelid = imagepupil(1:(rowp-r),:);
lines = findline(topeyelid);
if size(lines,1) > 0
[xl yl] = linecoords(lines, size(topeyelid));
yl = double(yl) + irl-1;
xl = double(xl) + icl-1;
yla = max(yl);
y2 = 1:yla;
ind3 = sub2ind(size(eyeimage),yl,xl);
imagewithnoise(ind3) = NaN;
imagewithnoise(y2, xl) = NaN;
end
%find bottom eyelid
bottomeyelid = imagepupil((rowp+r):size(imagepupil,1),:);
lines = findline(bottomeyelid);
if size(lines,1) > 0
[xl yl] = linecoords(lines, size(bottomeyelid));
yl = double(yl)+ irl+rowp+r-2;
xl = double(xl) + icl-1;
yla = min(yl);
y2 = yla:size(eyeimage,1);
ind4 = sub2ind(size(eyeimage),yl,xl);
imagewithnoise(ind4) = NaN;
imagewithnoise(y2, xl) = NaN;
end
ref = eyeimage < 100;
coords = find(ref==1);
imagewithnoise(coords) = NaN;
% shiftbits
%
% Arguments:
%   template - the template to shift
%   noshifts - number of shifts to perform to the right, a negative
%              value results in shifting to the left
%   nscales  - number of filters used for encoding, needed to
%              determine how many bits to move in a shift
function templatenew = shiftbits(template, noshifts,nscales)
templatenew = zeros(size(template));
width = size(template,2);
s = round(2*nscales*abs(noshifts));
p = round(width-s);
if noshifts == 0
templatenew = template;
elseif noshifts < 0
x=1:p;
templatenew(:,x) = template(:,s+x);
x=(p + 1):width;
templatenew(:,x) = template(:,x-p);
else
x=(s+1):width;
templatenew(:,x) = template(:,x-s);
x=1:s;
templatenew(:,x) = template(:,p+x);
end
References:
• [1] J. Daugman, "High confidence visual recognition of persons by a test of
statistical independence", IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. 15, No. 11, pp.1148-1160, November, 1993.
• [2] J. Daugman, " How iris recognition works", IEEE Transactions on
Circuits and Systems for Video Technology, Vol.14, No.1, pp.21-30,
January, 2004.
• [3] L. Masek, "Recognition of human iris patterns for biometric
identification", M.S. thesis, University of Western Australia, 2003.
• [4] R. Wildes, " Iris recognition: an emerging biometric technology",
Proceedings of the IEEE, Vol. 85, No. 9, pp.1348-1363, September, 1997.
• [5] J. Daugman, Biometric personal identification system based on iris
analysis. United States Patent, Patent Number: 5,291,560, 1994.
• [6] S. Sanderson and J. Erbetta, "Authentication for secure environments
based on iris scanning technology", IEE Colloquium on Visual Biometrics,
pp.8/1-8/7, March, 2000.
• [7] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey and
S. McBride, "A system for automated iris recognition", Proceedings IEEE
Workshop on Applications of Computer Vision, Sarasota, FL, pp.121-128,
December, 1994.
• [8] W. Boles and B. Boashash, "A human identification technique using
images of the iris and wavelet transform", IEEE Transactions on Signal
Processing, Vol. 46, No. 4, pp.1185-1188, April, 1998.
• [9] A. Gonzaga and R.M. da Costa, "Extraction and selection of dynamic
features of human iris", IEEE Computer Graphics and Image Processing,
Vol. XXII, pp.202-208, October, 2009.
• [10] P. Kovesi,"MATLAB functions for computer vision and image
analysis”, available at:
http://www.cs.uwa.edu.au/~pk/Research/MatlabFns/index.html.
• [11] L. Masek and P. Kovesi, '' MATLAB source code for a biometric
identification system based on iris patterns’’, The school of computer
science and software engineering, The university of Western Australia,
2003.
• [12] D.M. Monro, S.Rakshit and Z. Dexin, "DCT based iris recognition",
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29,
Issue 4, pp.586-595, April, 2007.
• [13] Different sample source codes for the functions used in Masek’s
algorithm for iris recognition are available at:
Advancedsourcode.com: http://www.advancedsourcecode.com/iris.asp.
• [14] Chinese Academy of Sciences – Institute of Automation. Database of
greyscale eye images http://www.cbsr.ia.ac.cn/IrisDatabase.htm.
• [15] K. Miyazawa, K. Ito, K. Aoki, T. Kobayashi and K. Nakajima, " An
efficient iris recognition algorithm using phase based image matching ",
IEEE International Conference on Image processing, pp.325-328,
September, 1995.
• [16] W. Kong and D. Zhang, "Accurate iris segmentation based on novel
reflection and eyelash detection model", Proceedings of 2001 International
Symposium on Intelligent Multimedia, Video and Speech Processing, Hong
Kong, pp.263-266, May, 2001.
• [17] N. Ritter, "Location of the pupil-iris border in slit-lamp images of
the cornea", Proceedings of the International Conference on Image Analysis
and Processing, pp.740-745, September, 1999.
• [18] Y. Zhu, T. Tan and Y. Wang, '' Biometric personal identification based
on iris patterns'', Proceedings of the 15th International Conference on
Pattern Recognition, Spain, Vol. 2, pp.801-804, February, 2000.
• [19] Wikipedia, the free online encyclopedia: http://www.wikipedia.org/.
• [20] K.R. Rao and P. Yip, "Discrete cosine transform", Boca Raton, FL:
Academic Press, 1990.