Project Report
Fingerprint Recognition
Supervisor: Nigel Whyte
Student: Dayu Chen
Student ID: C00131022
Submit date: 2010-4-8
Contents
1. Introduction
2. Problems encountered and resolved
   2.1 Variance in the Normalization Algorithm
   2.2 Orientation Estimation
   2.3 Image Enhancement
   2.4 Fix Ridge
   2.5 Core Point Detection
   2.6 Problems in Matching
3. What I achieved
4. What I did not achieve
5. What I learned
6. What I would do differently if starting again
7. Update my earlier report
   7.1 Update My Earlier Research
   7.2 Update My Earlier Design Manual
8. Module description
9. Data Structures and data record
10. Conclusion
References
1. Introduction
This project is a fingerprint recognition system. During its development I met many problems and changed some of my earlier ideas. From this project I learned more about the algorithms used in image pre-processing and how to fix problems in those algorithms. I also gained more experience in research, software design, coding and so on.
This report is written as a conclusion to the development of the project. It covers: the problems I met during development and how I solved them, what I achieved and what I did not achieve, what I learned from this project, what I would do differently if the project were starting again, the changes from my earlier reports, and a description of the modules and data structures.
2. Problems encountered and resolved
As I mentioned before, this project involved many image pre-processing algorithms. Almost all of these algorithms are expressed by complex mathematical formulas that I had never seen before, and I spent most of the development time learning these formulas.
2.1 Variance in the Normalization Algorithm
Normalization scales the gray level and variance of an image towards a desired average gray level and variance, which removes some of the noise caused by finger pressure during scanning. In this phase, choosing the desired variance confused me for a while. I was using 100 as the desired variance, as mentioned in my earlier report, but the result was unclear. After trying different values I found that if the desired variance is too small the output image is unclear but the pressure noise is removed, and if it is too big the output image is very clear but the noise remains. An unclear image strongly affects the later algorithms. I tested many values as the desired variance and found that a variance of 150 gives the best result (Figure 1).
Figure 1 Image (a) is the original image; the red circle marks the noise caused by finger pressure. Image (b) is the normalized image with desired variance 1500; the noise cannot be removed (red circle in (b)). Image (c) is the normalized image with desired variance 150; most of the pressure noise is removed (red circle in (c)).
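The normalization step itself follows the standard mean/variance formula from [2]. Below is a minimal Java sketch of it; the method name and array layout are my own illustration, not the project's actual code.

// Gray-level normalization as defined in [2]: push the image towards a
// desired mean m0 and desired variance v0. Illustrative sketch only.
static int[][] normalize(int[][] img, double m0, double v0) {
    int h = img.length, w = img[0].length;
    double mean = 0, var = 0;
    for (int[] row : img) for (int p : row) mean += p;
    mean /= (double) (h * w);
    for (int[] row : img) for (int p : row) var += (p - mean) * (p - mean);
    var /= (double) (h * w);
    int[][] out = new int[h][w];
    for (int i = 0; i < h; i++) {
        for (int j = 0; j < w; j++) {
            double d = Math.sqrt(v0 * (img[i][j] - mean) * (img[i][j] - mean) / var);
            // Pixels brighter than the mean land above m0, the rest below it.
            double g = img[i][j] > mean ? m0 + d : m0 - d;
            out[i][j] = (int) Math.max(0, Math.min(255, g));
        }
    }
    return out;
}

With this sketch, the experiment in Figure 1 corresponds to passing 150 (good) versus 1500 (too big) as the v0 argument.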
2.2 Orientation Estimation
Orientation is the most important parameter in this project: the image enhancement algorithm, the core point detection algorithm and the generation of feature vectors for matching all need the orientation parameters. In this phase I followed the algorithm from my earlier research manual. It computes the orientation from the image's gray-level gradient, but its last step is unclear (Figure 2 [1]).
Figure 2 The last step of the orientation estimation algorithm.
The last step is said to compute the consistency of the orientation field. There is a sentence that says: "If the consistency level is above a certain threshold Tc, then the local orientations around this region are re-estimated at a lower resolution level until C(i, j) is below a certain level." [1] The author does not give a clear standard: he gives neither the threshold nor, if a certain value cannot be found, the rule for decreasing the neighborhood block D until C(i, j) reaches a certain level. This made the last step difficult for me to implement. I tried to omit it, but the result was inconsistent (Figure 3).
Figure 3 Image (a) is the input image; image (b) is the orientation field without the last step.

From Figure 3 I found that, although the orientation is computed, it cannot be used in the next algorithm because it is inconsistent. This orientation field cannot express the direction of the fingerprint ridges.
After that I did some research and found two ways to fix the orientation. One way is to use a Gaussian low-pass filter to smooth the orientation field [2]. The process of the algorithm is as follows (the formulas are reproduced after the list):
(1) Compute the orientation from the image's gray-level gradient.
(2) Convert the orientation field into a continuous vector field.
(3) Filter the two vector components with a Gaussian low-pass filter, where wΦ is the size of the Gaussian filter and G is the Gaussian kernel.
(4) Generate the new orientation field from the filtered result.
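A reconstruction of these formulas as they appear in [2] (notation adapted):

\Phi_x(i,j) = \cos\bigl(2\theta(i,j)\bigr), \qquad \Phi_y(i,j) = \sin\bigl(2\theta(i,j)\bigr)

\Phi'_x(i,j) = \sum_{u=-w_\Phi/2}^{w_\Phi/2}\ \sum_{v=-w_\Phi/2}^{w_\Phi/2} G(u,v)\,\Phi_x(i-u,\,j-v), \qquad \Phi'_y(i,j) \text{ likewise}

\theta'(i,j) = \tfrac{1}{2}\arctan\bigl(\Phi'_y(i,j)\,/\,\Phi'_x(i,j)\bigr)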
I tried to implement this algorithm, but it still did not give a good result (Figure 4).
Figure 4 Image (a) is the inconsistent orientation field from Figure 3; image (b) is the field after smoothing. Both are computed from the same image as in Figure 3.
After checking my code, I believe I followed the algorithm correctly and that my Gaussian filter is correct (I also use Gaussian smoothing in another step, mentioned later). I could not find anything wrong in the coding logic, so I gave up on this way and used the second way to solve the problem. The second way is easy to understand and easy to implement. I used the following formulas to compute the (still inconsistent) orientation.
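These are the standard least-squares gradient formulas (as given in [1][2]; the notation is mine), summed over a block centred at (i, j):

V_x(i,j) = \sum_u \sum_v 2\,\partial_x(u,v)\,\partial_y(u,v)

V_y(i,j) = \sum_u \sum_v \bigl(\partial_x(u,v)^2 - \partial_y(u,v)^2\bigr)

\theta(i,j) = \tfrac{1}{2}\arctan\bigl(V_x(i,j)\,/\,V_y(i,j)\bigr)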
I found that the range of the output direction is [-π/4, π/4]. I think this is why the orientation is inconsistent: angles between -π/4 and π/4 cannot contain all the directions in a fingerprint image, so I convert the range [-π/4, π/4] to [0, π]. After further research I found a rule for converting the angle [3]:
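One standard way to express such a rule (my reconstruction; [3] gives the exact form) is to replace the arctan with the four-quadrant atan2 and shift the result by π/2:

\theta'(i,j) = \tfrac{1}{2}\,\operatorname{atan2}\bigl(V_x(i,j),\,V_y(i,j)\bigr) + \tfrac{\pi}{2}

which always lands in an interval of length π, covering every ridge direction.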
After implementing this, the problem was solved (Figure 5).
Figure 5 Image (a) is the input image (the same image as in Figure 3); image (b) is the orientation field with range [0, π].
2.3 Image Enhancement
Image enhancement is the most important stage in image pre-processing, and the enhancement problem stuck me for much of the development time. As mentioned in my earlier research, I chose the Gabor filter for image enhancement and spent most of my time learning its formulas. Although it worked, its result was not good. In this phase I tried to implement image enhancement in three ways: the Sobel edge detector, the Gabor filter and the O'Gorman filter.
I spent most of my time improving the Gabor filter. The Gabor filter is a directional filter generated from the input image's direction (orientation) and frequency. The mathematical formulas of the Gabor filter are as follows.
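(This is the even-symmetric Gabor filter from [2], reconstructed in my notation.)

h(x, y; \phi, f) = \exp\Bigl\{-\tfrac{1}{2}\Bigl[\tfrac{x_\phi^2}{\delta_x^2} + \tfrac{y_\phi^2}{\delta_y^2}\Bigr]\Bigr\}\cos(2\pi f x_\phi)

x_\phi = x\cos\phi + y\sin\phi, \qquad y_\phi = -x\sin\phi + y\cos\phi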
δx and δy are the space constants of the Gaussian envelope; typically these two parameters can be fixed values [2], and I assign both of them the value 4. Φ is the direction of the centre pixel and f is the frequency of the centre pixel. From the formulas I know these two parameters both influence the Gabor filter. At first, to implement it easily, I used a fixed frequency value to generate the Gabor filter, but the result could not be used (Figure 6).
Figure 6 Image (a) is the input image; image (b) is the enhanced image with a fixed frequency, f = 0.14.
With this result I concluded the frequency cannot be a fixed value, so I tried to calculate it with the formulas from [2]. The idea behind these formulas is to read the gray values of the pixels in each block (the image is divided into several blocks) and measure the ridge period; the theory is shown in Figure 7.
Figure 7 Every ridge appears as a sine wave. Compute the sine wave in each block; the average distance between the peaks is the average ridge wavelength, and the frequency of the current block is f = 1/wavelength.
After implementing this, I found that some blocks may contain the core point or a minutia, so no peaks appear in them. When I want to enhance a pixel in such a block I cannot generate its Gabor filter, because the pixel has no frequency. Figure 8 is the output frequency of each block (I used 11x11 as the block size).

Figure 8 Output frequencies for image (a) in Figure 6.
Figure 8 shows the first nine blocks' frequencies. The first value is -1.0, meaning no sine-wave peaks could be found in that block (the first block is background; no ridge appears there in Figure 6 image (a)). The essay [2] also provides a way to solve this problem, a mathematical method called interpolation: "The frequency values for these blocks need to be interpolated from the frequency of the neighboring blocks which have a well-defined frequency." [2] The only interpolation I knew was linear interpolation, which I learned in my graphics class, but after reading more about interpolation I found this problem cannot be solved by linear interpolation alone. It needs another kind of interpolation that was too difficult for me to understand, so I gave up computing a per-block frequency. Instead I computed each block's frequency, kept every usable value (not equal to -1.0), summed them, counted how many blocks have a usable frequency, and computed an average frequency for the whole image. This average came out between 0.1 and 0.12, and I chose 0.1 as the average frequency to generate the Gabor filter. The result is shown in Figure 9.
Figure 9 Image (a) is the input image; image (b) is the enhanced image with frequency 0.1 and filter size 3x3.
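A minimal Java sketch of this fallback, assuming the per-block frequencies are stored in an array with unusable blocks marked -1.0 (the names are illustrative, not from the project code):

// Average the usable block frequencies, skipping blocks marked -1.0
// (no sine-wave peaks found).
static double averageFrequency(double[][] blockFreq) {
    double sum = 0;
    int count = 0;
    for (double[] row : blockFreq) {
        for (double f : row) {
            if (f != -1.0) { // keep only blocks with a well-defined frequency
                sum += f;
                count++;
            }
        }
    }
    return count > 0 ? sum / count : 0.1; // 0.1 was the typical value observed
}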
The Gabor filter was working so far, but the result was not good. Another essay says that the filter size, δx and δy also need to be generated dynamically from the frequency [4]; in the relationship between the frequency and δx, δy, Kx and Ky are fixed values, and the author sets both of them to 0.5 [4]. The same essay also relates the filter size to the frequency. I tried this way too, but the result was worse than the one shown in Figure 9. Figure 10 is the result of the Gabor filter with the changed filter size; the result is too bad to use.

Figure 10 The result for image (a) in Figure 9 with a 7x7 Gabor filter.
When I moved on to the next algorithms (ridge detection and thinning), I found that although the Gabor result in Figure 9 is not good, the filter actually works. Figure 11 shows this.

Figure 11 Image (a) is the input image; image (b) is binarization applied directly; image (c) is binarization after normalization; image (d) is binarization after the Gabor filter. The next figure shows the corresponding results after the thinning algorithm.
Figure 12 Image (a) is thinning without any pre-processing; image (b) is the thinning result after normalization; image (c) is the thinning result after the Gabor filter.

Figures 11 and 12 prove that the Gabor filter is actually working.
In image enhancement I also attempted another algorithm: Sobel edge detection [5]. This is a simple algorithm, easy to implement, and its theory is easy to understand. I use the Sobel edge detector to compute the gradients along the X axis and the Y axis, and then detect edges from these two gradients (a sketch follows). After implementing it, I found this algorithm does not seem to suit fingerprint images: on some typical test images the result is good, but on fingerprint images the result is bad.
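The Sobel kernels themselves are standard [5]; this Java sketch shows the gradient computation (the method name and array layout are my own illustration):

// Sobel gradients along X and Y; edge strength is the gradient magnitude.
static int[][] sobelMagnitude(int[][] img) {
    int[][] kx = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}}; // X-axis kernel
    int[][] ky = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}}; // Y-axis kernel
    int h = img.length, w = img[0].length;
    int[][] out = new int[h][w];
    for (int i = 1; i < h - 1; i++) {
        for (int j = 1; j < w - 1; j++) {
            int gx = 0, gy = 0;
            for (int u = -1; u <= 1; u++) {
                for (int v = -1; v <= 1; v++) {
                    gx += kx[u + 1][v + 1] * img[i + u][j + v];
                    gy += ky[u + 1][v + 1] * img[i + u][j + v];
                }
            }
            out[i][j] = (int) Math.min(255, Math.hypot(gx, gy));
        }
    }
    return out;
}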
Figure 13 shows Sobel edge detection on a typical gray image, and Figure 14 shows Sobel edge detection on a fingerprint image.
Figure 13 Image (a) is the input gray image; image (b) is the result of Sobel edge detection.

Figure 14 Image (a) is the input fingerprint image; image (b) is the result of Sobel edge detection.

All the pre-processing is the same in Figures 13 and 14: Gaussian smoothing first, then Sobel edge detection. From these two figures I concluded that Sobel edge detection is not suitable for fingerprint images, so I gave up this way.
After I finished the basic functions of my project, I went back to improve the enhancement. Researching some essays, I found a filter called the O'Gorman filter [6] that suits fingerprint images. It is a directional filter: like the Gabor filter, the O'Gorman filter uses each pixel's direction to generate the filter, but it does not need the frequency, so it is easier to implement. To build this directional filter I only need to define a 7x7 matrix for the 0-degree direction and then rotate this filter according to each pixel's orientation; I do not need to consider the frequency at all. All I need to do is compute the rotated filter's coordinates and look up the weights for the rotated filter in the 0-degree direction filter according to the rotated coordinates. The rule for generating this directional filter is:
where U > X > Y > Z. My 0-degree direction filter follows the one given in [7].
The rotation formulas are the typical 2-D rotation formulas:
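In my notation (the sign convention is an assumption), rotating the filter coordinates (i, j) by the pixel orientation φ gives:

i' = i\cos\phi + j\sin\phi, \qquad j' = -i\sin\phi + j\cos\phi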
Most of the time i' and j' will not be integers, so I cannot directly look up the weight in the 0-degree direction filter. This problem can be solved by linear interpolation; Figure 15 expresses this situation.

Figure 15 Linear interpolation to find the weight at (X, Y).
Suppose A, B, C, D are four points whose coordinates are integers, so their corresponding weights can be found in the 0-degree direction filter, and they are the neighbors of the rotated coordinate (X, Y). I use linear interpolation to compute the weights at E and F, and then linear interpolation again to compute the weight at (X, Y) from E and F (a bilinear-interpolation sketch follows). This process is easier than the Gabor filter's. After implementing this algorithm, I found the result is better than before.
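A minimal Java sketch of this two-step linear (bilinear) interpolation; the method name, and the assumption that (x, y) lies inside the filter bounds, are mine:

// Bilinear interpolation of a rotated coordinate (x, y) from the
// 0-degree filter 'base'. The caller must keep (x, y) inside the bounds.
static double rotatedWeight(double[][] base, double x, double y) {
    int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
    double dx = x - x0, dy = y - y0;
    // Weights of the four integer-coordinate neighbors A, B, C, D.
    double a = base[y0][x0],     b = base[y0][x0 + 1];
    double c = base[y0 + 1][x0], d = base[y0 + 1][x0 + 1];
    double e = a + dx * (b - a); // interpolate along x on the top edge
    double f = c + dx * (d - c); // interpolate along x on the bottom edge
    return e + dy * (f - e);     // interpolate along y between E and F
}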
The following figures show the image enhancement results with the Gabor filter and the O'Gorman filter.
Figure 16 Image (a) is image enhancement with the Gabor filter; image (b) is image enhancement with the O'Gorman filter.

Figure 17 Image (a) is binarization after the Gabor filter; image (b) is binarization after the O'Gorman filter.
From Figures 16 and 17 I can see that the Gabor filter gets a clearer result in the image enhancement stage, but the O'Gorman filter is more suitable for the next pre-processing algorithms. In Figure 17, the ridges in image (b) are smoother than in image (a); image (b) can link some broken ridges that image (a) cannot (red circle in Figure 17), and image (b) can remove some bridge links (blue circle in Figure 17). Figure 18 shows the difference in the thinned images.
Figure 18 Image (a) is the thinned image when enhancement uses the Gabor filter; image (b) is the thinned image when enhancement uses the O'Gorman filter.
After trying these three image enhancement algorithms (Gabor filter, Sobel edge detection and O'Gorman filter), I found the O'Gorman filter is the best in my program. The Gabor filter is the most traditional and famous algorithm in fingerprint recognition, but it is too complex and difficult for me, so I gave up using the Gabor filter for image enhancement and chose the O'Gorman filter instead.
2.4 Fix Ridge
After thinning, I found some new noisy points caused by the thinning (Figure 19).

Figure 19 The thinned image.
In Figure 19 there are some holes in the thinned image (red circles); this kind of noise leads to two false bifurcations. There are also some dots in the thinned image (blue circles); this kind of noise leads to false ridge endings. At first I thought my thinning algorithm was not good, so I drew an image to run the thinning algorithm on, but I found the thinning algorithm is fine (Figure 20).

Figure 20 Image (a) was drawn with Windows graphics; image (b) is the result of thinning it with my application.
In Figure 20 there are no holes or dots, so I thought the problem must occur in an earlier algorithm. I did two things to solve it. First, I added a Gaussian smoothing filter after normalization, which removes some dots. Second, I added a fix-ridge algorithm after binarization.

The algorithm works like this: use a 3x3 matrix to check each pixel P of the binary image.

P1  P2  P3
P8  P   P4
P7  P6  P5
If P is foreground and all eight of its neighbors are background, remove P. If P is background, check its four direct neighbors; if more than three of them are foreground, mark P as foreground. For example:

    1
1   P   1
    1
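A minimal Java sketch of one fix-ridge pass over a 0/1 binary image (names and array layout are my own illustration):

// One pass of the fix-ridge clean-up (1 = ridge foreground, 0 = background).
static void fixRidgePass(int[][] bin) {
    int h = bin.length, w = bin[0].length;
    int[][] out = new int[h][w];
    for (int i = 1; i < h - 1; i++) {
        for (int j = 1; j < w - 1; j++) {
            int n4 = bin[i - 1][j] + bin[i + 1][j] + bin[i][j - 1] + bin[i][j + 1];
            int n8 = n4 + bin[i - 1][j - 1] + bin[i - 1][j + 1]
                        + bin[i + 1][j - 1] + bin[i + 1][j + 1];
            if (bin[i][j] == 1 && n8 == 0) out[i][j] = 0;     // isolated dot: remove
            else if (bin[i][j] == 0 && n4 > 3) out[i][j] = 1; // hole: fill
            else out[i][j] = bin[i][j];
        }
    }
    for (int i = 0; i < h; i++) System.arraycopy(out[i], 0, bin[i], 0, w);
}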
After testing, I found that running the fix-ridge operation three times removes almost all the noisy points (Figure 21).

Figure 21 The thinned image after fixing ridges.

Comparing Figure 21 with Figure 19, most holes and dots have been removed.
2.5 Core Point Detection
I used the core point as the reference point for matching. In core point detection I met a big problem and tried many ways to solve it. In the end it works, but the result is not good.

Firstly, I implemented core point detection with the algorithm from my earlier research report. The result was bad: of 20 test images, only one found the core point. Thinking about the algorithm carefully, I found two reasons for this bad result: the orientation is not very accurate, and the method I chose to compute the orientation does not fit. This core detection algorithm is very sensitive to orientation accuracy. It searches each pixel and its eight neighbors; if the change of angle around the neighbors equals 180 degrees, the centre pixel is a core. This does not suit my orientation algorithm, because I compute the orientation from the gradient over blocks: every pixel in the same block has the same direction, so if the core occurs in the centre of a block, its neighbors all have the same angle and the change of angle is 0 degrees. I think this is why I could not find the core with this algorithm.
I then tried another core detection algorithm found in my research, a curvature-based one [8]. It works like this: after getting the orientation blocks, take each block's eight neighbors and compute the differences of the direction components, diff x and diff y. If both diff y and diff x are negative, the current point is a core point. If no core point can be found, decrease the size of the input image and run the detection again.
I implemented this algorithm but still could not find the core point: most pixels get both diff y and diff x negative. I decreased the image and ran the algorithm again, but it still did not work. Figure 22 is the result of this algorithm on a resized image.

Figure 22 Input image of size 110x120.

Figure 22 probably will not be clear after printing, because my program scales the variance to a low level (150) and this image was decreased from 260x300 to 110x120; in Word it is clear. If you are interested, please download my document from my project web page.

When I decreased the size to 110x120, only one pixel had both diff x and diff y negative, and that point is not the core point. I checked the logic of my code and believe I followed the algorithm correctly, so in the end I gave up this way.
After that I went back to my first algorithm and tried to improve the accuracy of the orientation. For this core detection algorithm I decided not to use the orientation computed in orientation estimation, because that is a block orientation and I need a point orientation. I compute the orientation with another algorithm, which estimates each pixel's direction from the change of gray level among its neighbors [11]; I describe it in the update to my earlier report below. This gives me a point orientation field. I also found a method to detect the core in some essays [9][10]; it is based on my first algorithm with some changes.
In my first core detection algorithm I checked the directions of a point's eight neighbors and counted their change. In the improved version I use two matrices to check the change of the neighbors' directions. One is a 2x2 matrix covering the cells (i, j-1), (i-1, j-1), (i, j) and (i+1, j), where (i, j) is the point being checked.
The other matrix is bigger: a 9x9 matrix in which only 24 pixels are checked (cells marked "." are ignored):

 .    .   P5   P6   P7   P8   P9   .    .
 .   P4   .    .    .    .    .   P10   .
P3    .   .    .    .    .    .    .   P11
P2    .   .    .    .    .    .    .   P12
P1    .   .    .   i,j   .    .    .   P13
P24   .   .    .    .    .    .    .   P14
P23   .   .    .    .    .    .    .   P15
 .   P22  .    .    .    .    .   P16   .
 .    .   P21  P20  P19  P18  P17  .    .
In this matrix the 24 pixels are the neighbors of point (i, j), and I check the change of these 24 pixels' directions. If the change of the neighbors' directions in matrix one equals 180 degrees, and the change of the neighbors' directions in matrix two equals or exceeds 180 degrees, then point (i, j) is a core point. The change of the neighbors' directions is computed with a Poincaré-style sum, where δ is the angle difference between two linked pixels and Poincare(i, j) is the change of the neighbors' directions at point (i, j).
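A sketch of one standard way to compute such a sum in Java, assuming the neighbor directions (in degrees, range [0, 180)) are collected in order around the ring; this is my reconstruction, not necessarily the exact formula from [9][10]:

// Poincare-style sum of direction changes around a closed ring of neighbors.
static double poincare(double[] dirs) {
    double sum = 0;
    int n = dirs.length;
    for (int k = 0; k < n; k++) {
        double d = dirs[(k + 1) % n] - dirs[k];
        // Fold each step into (-90, 90] so direction wrap-around is handled.
        if (d > 90) d -= 180;
        else if (d <= -90) d += 180;
        sum += d;
    }
    return sum; // about 180 at a core, -180 at a delta, near 0 elsewhere
}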
After improving the first algorithm this way, core point detection works. The result is better than before, but still not good: I tested 40 fingerprint images and only 22 of them found an accurate core point (Figure 23).
Figure 23 Images in which an accurate core point is detected.
Some images have a big deviation, and some images have two cores but only one can be computed (Figure 24).

Figure 24 Images that have two cores but where only one is found, and images with a big deviation.
Besides these two situations, almost ten images cannot detect the core point at all. I think the biggest problem is that the orientation cannot be 100% accurate. I output the point orientation field to check its accuracy and found that the point orientation is worse than the block orientation (Figure 25), but I still cannot find a perfect way to solve this problem and make it more accurate.

Figure 25 The point orientation field.
I use different colors to mark the different directions. From this field I found the accuracy is worse than the block orientation shown in the orientation estimation section, but it is more suitable for my core detection algorithm.
2.6 Problems in Matching
As I said above, core point detection is not very accurate, and this has a big influence on the matching algorithm. At first, when I wrote the matching algorithm, I allowed a big accepted deviation in the matching, thinking this could reduce the influence of the core point detection, and with that the algorithm worked. My supervisor told me I needed to highlight the matched minutiae, marking them in the same color, to prove my matching algorithm. After I did this, I found most matched minutiae were false. Feeding this result back into the matching algorithm, I found two reasons for it: the accuracy of the core point detection, and the deviation range I accepted was too big. I thought I had no time left to change the core point detection algorithm, so I focused on the second reason. I tried to decrease the accepted range for the match deviation, but this way the influence from the core point detection cannot be reduced.
After checking the output of the matching algorithm, I found that with the big deviation, one minutia in the input image can sometimes match more than one minutia in the template image. This is the problem, so I tried to add one more condition to the matching algorithm. My original algorithm matches a minutia on four conditions: the type of the minutia, the distance to the core point, the polar angle between the minutia and the core point (I use the core point to establish the coordinate system, with the core point's direction as the positive x axis, and use these two fields to locate a minutia), and the angle difference between the minutia and the core point. Since matching on these four conditions with a big accepted deviation sometimes gives one minutia more than one match, I added a fifth condition: the number of ridges between the minutia and the core point. I use Bresenham's line algorithm [12] to trace a line from the core point to the minutia, and count the 0-to-1 transitions along it to find how many ridges that line crosses (a sketch follows). With this condition most matched minutiae are correct, but the match becomes even more sensitive to the core point accuracy; most images can no longer be recognized, so I gave up this way of improving my matching algorithm.
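A minimal Java sketch of this ridge count, using the standard form of Bresenham's line algorithm [12] over a 0/1 binary image (method name and array layout are my own illustration):

// Count ridges crossed on the line from the core (x0, y0) to a minutia
// (x1, y1) in a binary image (1 = ridge), by counting 0-to-1 transitions.
static int ridgesBetween(int[][] bin, int x0, int y0, int x1, int y1) {
    int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
    int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
    int err = dx - dy, prev = 1, count = 0; // start at the core, on a ridge
    int x = x0, y = y0;
    while (true) {
        int cur = bin[y][x];
        if (prev == 0 && cur == 1) count++; // background-to-ridge transition
        prev = cur;
        if (x == x1 && y == y1) break;
        int e2 = 2 * err;
        if (e2 > -dy) { err -= dy; x += sx; }
        if (e2 < dx)  { err += dx; y += sy; }
    }
    return count;
}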
I went back to look at the images and found that the deviation of different minutiae differs with their distance. If a minutia is near the core point, a deviation in the core point causes a big change in that minutia's polar angle, but little change in its distance. If a minutia is far from the core point, a core deviation causes little change in the polar angle, but a big change in the distance. With this idea, I made the accepted range of each condition depend on the distance of the minutia: if the minutia is close to the core, I accept a big deviation in polar angle and a small deviation in distance; if it is far away, I accept a big deviation in distance and a small deviation in polar angle. After that, the result is better: in most images, most matched minutiae are true, though some are still false (Figure 26).

Figure 26 The improved match result; most matched minutiae are true, but some are still false.
3. What I achieved
Input image
My application allows the user to input an external fingerprint image.
Normalization
This is the first step of my application; it can scale an image to an acceptable average gray level and an acceptable variance.
Orientation estimation
My application can compute the orientation of a fingerprint image. From the output orientation field I think the result is good; it can express the fingerprint ridge directions.
Image enhancement
After changing to the O'Gorman filter for this stage, the result is good: some broken ridges can be linked and some bridge links can be removed. The enhancement result is suitable for the next stage.
Binarization
The binarization function works; it can extract the ridges and valleys, and the result of this step is good and usable in the next stage.
Thinning
Thinning is complete and the result is correct; no big problem occurred in this stage. It works well.
Minutiae extraction
I achieved this function; it can extract the minutiae correctly. In this phase I use two different colors to show the two minutia types (red for ridge endings and blue for bifurcations).
Remove false minutiae
I achieved this function, but the accuracy is not 100%. Some of the false minutiae cannot be removed correctly, and a few true minutiae are removed. After testing, I think this will not greatly influence the final result, because more than 80% of the true minutiae are kept after this stage, which is enough for the matching algorithm (Figure 27).

Figure 27 Most remaining minutiae are true.
Core detection
As mentioned, core detection works, but the accuracy is not good: some detected cores have a big deviation while others are correct.
Matching
The matching function depends on core detection. It can distinguish two input images; its accuracy is good, but it is impacted by the core detection.
Database management
The user can enroll, match, delete and update a fingerprint image in the database.
4. What I did not achieve
One function mentioned in my functional specification was not achieved: capturing a fingerprint image from the scanner. This is due to two reasons. One is the programming language: I used Java to implement this project, and Java runs in the Java virtual machine, but the scanner's driver program is provided for the Windows operating system. Java cannot invoke the scanner driver across the Java virtual machine, so I would need a technique called the Java Native Interface [12], which enables a Java program running in the JVM to call, or be called by, native applications (programs specific to a hardware and operating system platform). I had not used this technique before, so I would need some time to learn it. The second reason is time. I finished all the basic algorithms on April 3rd and still needed time to finish the database and write the report. During development I would often stay stuck on one problem for a long time, unwilling to change to another way at first, and this wasted a lot of my time. This weakness left me without enough time for this stage. I am very sorry that I could not achieve this function.
Another thing I did not achieve is high accuracy. As mentioned, the core detection algorithm has bad accuracy, and this strongly influences the matching algorithm. As I said before, 22 images detect a good core, and of these 22 images three cannot be matched correctly, so I am not happy with the accuracy of the last algorithm in my project.
5. What I learned
Firstly, this project gave me a big opportunity to learn about computer graphics, especially image preprocessing. I learned a lot in this area; for example, I learned two algorithms for orientation estimation, how they differ and what their theory is. I think this is the greatest thing I learned from this project.
Next, I gained great experience in software development. During this project I improved my documentation skills. The project showed me how important research is; for example, I chose a core point detection algorithm that is not suitable for my orientation algorithm. That was my mistake in research, and next time I will be more careful. I also learned the importance of the time schedule: do not stay stuck on one problem for too long; if the current way is not suitable, change it quickly, because time waits for no one.
Thirdly, this project was a big challenge that improved my coding skills. I got more experience in Java coding and in solving problems in code, and became more familiar with Java programming.
Finally, this project showed me my weaknesses and pushed me to improve myself.
6. What I would do differently if starting again
If this project were starting again, I would do the following differently:

1. Use another programming language, such as C++ or C#, so that I could implement the scanner function more easily.

2. Choose algorithms that are easy to understand. Do not choose a complex algorithm just because it is famous (e.g. the Gabor filter), so that I can finish coding earlier and leave more time for testing and debugging.

3. Change the matching algorithm: instead of core-point-based matching, detect the reference point dynamically or use minutia-neighbor-based matching.

4. Improve the accuracy, to make the results more accurate.

5. Add a segmentation step, so that anything else in the fingerprint image background can be removed.

6. Add some additional functions, for example allowing the user to print the match result, or marking the paired minutiae in the images after matching.
7. Update my earlier report
7.1 Update My Earlier Research
As I mentioned before, I compute the orientation again for core detection. This algorithm is different from the orientation estimation algorithm I implemented before.

In this algorithm [11], I use a 9x9 matrix to check eight directions around each pixel; the 9x9 block is shown in the following. For each pixel in this matrix, compute the average gray value along eight different directions, Gmean(i), where i is the direction index (0, 1, 2, ..., 7). Then organize the directions into four groups of opposites (0 and 4, 1 and 5, 2 and 6, 3 and 7) and compute the gray difference in each group: G[i] = Gmean(i) - Gmean(i+4) for i = 0, 1, 2, 3. Find the index i_max with the biggest absolute value |G[i_max]|. Then, for each centre pixel (x, y):

If abs(gray(x,y) - Gmean(i_max)) < abs(gray(x,y) - Gmean(i_max+4)):
    Direction(x,y) = i_max
Otherwise:
    Direction(x,y) = i_max + 4
After computing this for every pixel of the image, divide the image into blocks of size w x w and count the directions in each block; the block direction is the direction that occurs most often in the block. After getting the block orientation, smooth it: using a bigger W x W block centred on pixel (i, j), count which direction occurs most often in it, and set the centre pixel's direction to that most frequent direction.
7.2 Update My Earlier Design Manual
I have not made any big change to my design manual. I only removed one button from the GUI, because the scanner function was not achieved, and I merged two GUIs into one window. Figures 28 and 29 are my new GUI.

Figure 28 The new fingerprint recognition GUI.

Figure 29 The new fingerprint database management GUI.
8. Module description
Figure 30 Modules of the system.

There are four modules in this system: user interface, preprocessing, match and database management.

User interface: this module lets the user control and access the system.

Preprocessing: this module performs the image preprocessing. It contains six sub-modules (normalization, orientation estimation, image enhancement, ridge detection, thinning and minutiae), each running one step of the image pre-processing.

Match: this module holds the matching algorithm. It includes three sub-modules: core point detection, match image and match DB. Match image matches two input images, and match DB matches an image against the database.

Database management: this module accesses the database. It includes enroll, update and select; the select parameters come from the Match module.
9. Data Structures and data record
The following data structures are defined.

Pixel expresses a pixel object in the image.

Pixel {
    int x, y;              // pixel coordinates
    int rgb;               // pixel gray value
    double gx, gy;         // pixel gradient on the x and y axes
    double angle = 0;      // pixel direction
    int mask = 0;          // binarization mark
    int type = 0;          // minutia type: 1 is ending, 2 is bifurcation
    private int check = 0; // marks whether this pixel has been checked
}
Minutia holds the minutia data.

Minutia {
    double angle;        // minutia direction
    int x, y;            // minutia coordinates
    int type;            // minutia type
    boolean mark = true; // true while the minutia is not considered false
}
Point is used to compute the point orientation. It is similar to Pixel, but kept separate to distinguish the point orientation from the block orientation.

Point {
    int x, y; // point coordinates
    int d;    // point direction
    int rgb;  // point gray value
}
Feature is generated as a feature vector; it is used for matching and for enrollment into the database.

Feature {
    Minutia m;               // the corresponding minutia
    int distance;            // distance from the core point
    double angle;            // polar angle; the polar angle and the distance
                             // identify the local location of a minutia
    double diffAngle;        // the angle difference between minutia and core point
    int type;                // type of the minutia
    boolean isMatch = false; // used by the match algorithm
    String image = null;     // used when enrolling into the database
    int id = 0;              // id in the database
}
FingerprintImage is used to enroll a fingerprint image into the database.

FingerprintImage {
    String image_id;        // image id
    String image_directory; // directory of the image file
    String description;     // description of this image
    int core_x, core_y;     // core coordinates
    double core_angle;      // core angle
}
There are two tables in the database to store the fingerprint information.

Fingerprint Image table:

Field            Type     Comment
Fingerprint_ID   Varchar  Id of the fingerprint image in the database
Image_directory  Varchar  Directory of the image file
Description      Varchar  Description of this image
Core_x           Int      Core x coordinate
Core_y           Int      Core y coordinate
Core_angle       Double   Core direction

Fingerprint Minutiae table:

Field            Type     Comment
ID               Int      Primary key, minutia id
Fingerprint_ID   Varchar  The corresponding fingerprint image
X_coor           Int      X coordinate of the minutia
Y_coor           Int      Y coordinate of the minutia
Angle            Double   Direction of the minutia
Type             Int      Type of the minutia
Distance         Int      Distance to the core point
Polar_angle      Double   Gradient of the line from the minutia to the core
Diff_angle       Double   Angle difference from the core point
10. Conclusion
My project is finished. I think I implemented all the algorithms and satisfied almost every requirement except the scanner function, and I am happy I achieved this. I am not happy with the accuracy of the project, especially the core point detection algorithm, but I am very happy with the preprocessing algorithms; I think they are fine. I spent most of my private time working on this project, and I think I tried my best.

As mentioned, I learned a lot of things in different areas from this project, and I am happy I had this challenge to take on. Thanks to my supervisor Mr. Nigel Whyte, who gave me much help and many good suggestions during development. Thank you very much.
References
[1] Anil Jain and Sharath Pankanti. Fingerprint Classification and Matching [online]. Available: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.83.3331&rep=rep1&type=pdf [accessed 4 April 2011]
[2] Lin Hong, Yifei Wan and Anil Jain (1998). Fingerprint Image Enhancement: Algorithm and Performance Evaluation [online]. Available: http://ai.pku.edu.cn/aiwebsite/research.files/collected%20papers%20-%20fingerprint/Fingerprint%20image%20enhancement%20-%20algorithm%20and%20performance%20evaluation.pdf [accessed 4 April 2011]
[3] Yuan Mei, Huaijiang Sun and Deshen Xia (2006). A Gradient-Based Robust Method for Estimation of Fingerprint Orientation Field.
[4] Raymond Thai (2003). Fingerprint Image Enhancement and Minutiae Extraction.
[5] Wikipedia (2011). Sobel Operator. Available: http://en.wikipedia.org/wiki/Sobel_operator [accessed 3 April 2011]
[6] O'Gorman, L. and Nickerson, J. V. (1989). An approach to fingerprint filter design. Pattern Recognition.
[7] Gui Ke (2010). Fingerprint Image Processing and Algorithm Research (Chinese version).
[8] Atipat Julasayvake and Somsak Choomchuay (2007). An algorithm for fingerprint core point detection.
[9] Dey, T. K. and Hudson, J. (2002). PMR: Point to Mesh Rendering, a Feature-Based Approach.
[10] Guijun Nie, Jian Wang, Zhenhui Wu, Yuanting Li and Rongqing Xu (2006). Fingerprint Singularity Detection Based on Continuously Distributed Directional Image.
[11] Luo Xiping and Tian Jie (2002). Image Enhancement and Minutiae Matching in Fingerprint Verification.
[12] Wikipedia (2011). Bresenham's line algorithm. Available: http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm [accessed 8 April 2011]