Face Authentication Based on Multiple Profiles
Extracted from Range Data
Yijun Wu, Gang Pan, and Zhaohui Wu
Department of Computer Science and Engineering
Zhejiang University, Hangzhou, 310027, P. R. China
{wyj9201,gpan,wzh}@cs.zju.edu.cn
Abstract. In this paper we address face authentication based on profiles extracted from range data. Three kinds of profiles are defined, extracted, and combined for classification. To obtain the central profile, a novel, robust symmetry plane detection method is proposed. A global profile matching approach based on the partial Hausdorff metric is presented to align and compare profiles without detecting fiducial points, which is often unreliable. To exploit more of the information in the range data, we extract a nose-crossing profile and a forehead-crossing profile in addition to the central profile. Experiments are carried out on a low-quality database of 180 range images of 30 individuals acquired by a structured light system. The experimental results show that the presented scheme can cope with facial range data of limited quality.
1 Introduction
Even though numerous techniques for face recognition and authentication have been explored over the past two decades, most research has focused on recognition from 2D images. Current methods work very well under conditions similar to those of the training images, but either illumination variation or pose change can cause serious performance degradation in most existing systems.
Recent advances in modelling and digitizing techniques have made the acquisition of 3D data much easier and cheaper [9]. 3D data have the potential to overcome these problems, since their advantage is an explicit representation of 3D shape. From an applied point of view, an economical 3D face recognition system is preferred, yet the data acquired by an economical, fully automatic 3D acquisition system are often range data of limited quality, containing inaccurate and noisy measurements. A 3D face recognition method tolerant of relatively coarse range data is therefore highly desirable.
This work is partly supported by NSFC (60273059), the 863 High-Tech Programme (2001AA4180), and the Zhejiang NSF for Young Scientists (RC01058). The authors would like to thank Dr. Charles Beumier for providing the 3D RMA database.

J. Kittler and M.S. Nixon (Eds.): AVBPA 2003, LNCS 2688, pp. 515–522, 2003.
© Springer-Verlag Berlin Heidelberg 2003

Because of noise, such as that caused by beards and spectacles, and because expressions make range data sets of the same individual differ from each other, we must select reliable and discriminative regions to obtain a robust recognition approach. The central profile curve is stable and offers fairly high discriminability, and 2D curves are easier and faster to process than 3D range data. We therefore seek additional "profiles" to exploit in face authentication for better performance. This paper presents an effective scheme for facial profile extraction from low-quality range data and a method for face authentication based on multiple profiles.
Many methods have been proposed for recognition using facial profiles. Most previous work is based on the extraction of fiducial points. The methods in [6, 10, 11, 5] extracted fiducial marks from the profile by heuristic rules, and a set of features was computed from the positions of these fiducials. The distance between the features of a test profile's fiducial points and those of the model's fiducial points was then calculated. Harmon et al. [5] manually drew the outlines from profile photos of 256 males; nine fiducial points were selected and a set of 11 features was derived from them. After aligning the two profiles to be matched by two selected fiducial marks, matching was achieved by measuring the Euclidean distance between the feature vectors derived from the outlines. Wu et al. [10] developed a face profile recognition procedure based on 24 fiducial points, in which the outline curves were obtained automatically instead of by an artist's drawing. They then used a B-spline to extract turning points on the outline curve, from which six interesting points and 24 features were derived. Yu et al. [11] used a tuning method to obtain more precise positions for the fiducial points: they define a number of small steps around the determined positions, perform the matching for each combination of the new positions, and choose the one with the best matching score.
Here we present a face authentication method based on multiple profiles. A novel and effective method is proposed to detect the symmetry plane of the face and thereby extract the central profile. For each pair of range data sets, we use the symmetry planes and profiles to align them, and then obtain the horizontal "profile" curves. We minimize the Hausdorff distance between two profiles to measure their difference. In the rest of this paper, Section 2 introduces our method to extract profiles from range data, Section 3 introduces our approach for profile matching based on the partial Hausdorff distance, Section 4 reports our experimental results, and Section 5 concludes the paper.
2 Extracting Profiles from Range Data

2.1 Symmetry Plane Detection
Given range data for facial profile recognition, we must first extract the profile from the range data, i.e., detect the symmetry plane. Cartoux et al. [2] proposed an approach that extracts the profile by looking for the vertical symmetry axis of the Gaussian curvature values of the facial surface. However, computing Gaussian curvature requires sufficiently accurate range data, which automatic 3D face acquisition systems usually cannot provide. Here we present an effective approach based on alignment, which does not involve curvature and can robustly extract the symmetry plane from range data of low quality.

Fig. 1. Finding the symmetry plane by alignment. (a) the original facial surface, (b) the mirrored facial surface with respect to an initial symmetry plane, (c) alignment of both surfaces
Assume that the 3D facial surface is symmetrical and continuous. After assigning the facial surface an initial, possibly inaccurate symmetry plane, its mirrored surface can easily be obtained, as shown in Fig. 1, where point A′ in Fig. 1(b) is symmetrical to point A in Fig. 1(a) with respect to the initial symmetry plane. Once we align the two surfaces accurately, the segment AA′ must be perpendicular to the true symmetry plane, and its central point must lie on that plane. That is, the segment AA′ determines the true symmetry plane.
The true symmetry plane can be written as

    n · x + k = 0                                                (1)

where n is the unit normal vector of the plane and k is a constant. After alignment of the two surfaces, for a point A denoted by a = (x1, y1, z1) and its mirror point A′ denoted by a′ = (x2, y2, z2), the symmetry plane can be obtained by solving

    n = (a − a′) / ‖a − a′‖,    n · (a + a′)/2 + k = 0           (2)
Considering the error introduced by range data acquisition and the fact that a facial surface is not exactly symmetric, for robustness we actually solve Eq. (2) for each point of the facial surface and then apply a least-mean-square fit for the final solution.
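The mirroring and the least-mean-square plane fit can be sketched as follows. This is a minimal NumPy illustration; the function names and the simple normal-averaging scheme are our own assumptions, not the paper's exact formulation:

```python
import numpy as np

def mirror_points(points, n, k):
    # reflect each point across the plane n . x + k = 0 (n must be unit-length)
    return points - 2.0 * (points @ n + k)[:, None] * n

def symmetry_plane(points, mirrored_aligned):
    # row i of `mirrored_aligned` is the aligned mirror point A' of row i of `points`
    d = points - mirrored_aligned                      # A - A' for every pair (Eq. 2)
    n = (d / np.linalg.norm(d, axis=1, keepdims=True)).mean(axis=0)
    n /= np.linalg.norm(n)                             # averaged unit normal
    mid = (points + mirrored_aligned) / 2.0            # midpoints lie on the plane
    k = -np.mean(mid @ n)                              # from n . x + k = 0
    return n, k
```

Averaging the per-pair normals and midpoints is one simple way to realize the least-mean-square step; it assumes the ICP alignment yields one-to-one point pairs.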
The algorithm we use for alignment is a variant of ICP (Iterative Closest Point) [8], which is widely used for the geometric alignment of three-dimensional models when an initial estimate of the relative pose is known. After some experimentation, we chose a variant similar to [9], which completes the alignment in about one second. Some results are shown in Fig. 2. The sample in the first row is seriously incomplete around the eye regions. The second row shows a sample with obviously less data on the right side than on the left. The third row shows a sample with depth rotation. In all of these cases, the symmetry plane is detected correctly.
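As an illustration of the alignment step, the following is a minimal point-to-point ICP sketch with brute-force nearest neighbours and an SVD-based (Kabsch) rigid fit. It is not the accelerated variant of [9] used in the actual system, only a small self-contained example of the technique:

```python
import numpy as np

def icp(src, dst, iters=20):
    # minimal point-to-point ICP: nearest-neighbour correspondences
    # plus an SVD (Kabsch) rigid fit at every iteration
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]                 # closest point in dst
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                          # proper rotation, no reflection
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The brute-force nearest-neighbour search is O(nm) per iteration; a practical system would use a spatial index, but the structure of the loop is the same.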
Fig. 2. Profile extraction with symmetry plane detection. (a) the test model after triangle-based linear interpolation of the original range data, with its initial symmetry plane denoted by a gray line, (b) the mirrored model after alignment, (c) the detected symmetry plane, (d) the extracted profile
2.2 Horizontal Profiles
To utilize more of the available information, we extract additional "profiles" for authentication. These profiles should be robust, insensitive to facial expression, and reasonably discriminative. We choose the horizontal profiles located on the nose and the forehead. It must be guaranteed that the positions obtained on different range data sets of the same individual are the same; otherwise, the horizontal profiles would differ considerably. We achieve this by first aligning the two range data sets, denoted A and B, by their central profiles. We then fit a line to one of the central profiles and rotate both range data sets so that the line becomes upright. Next we detect the nose saddle and nose tip by curvature and by distance from the line. Denoting the vertical distance between the nose saddle and the nose tip as l, we take the horizontal profiles at the positions l/2 below and l/2 above the nose saddle, called the nose-crossing profile and the forehead-crossing profile, respectively. Fig. 3(a) shows the method and the positions of the two horizontal profiles; Fig. 3(b, c) shows the nose-crossing and forehead-crossing profiles.
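The slicing step above can be sketched as follows. The band tolerance `tol` and the assumption that the data have been rotated so that y is the vertical axis are our own illustrative choices:

```python
import numpy as np

def slice_heights(y_saddle, y_tip):
    # nose-crossing profile at l/2 below the saddle,
    # forehead-crossing profile at l/2 above it
    l = abs(y_saddle - y_tip)
    return y_saddle - l / 2.0, y_saddle + l / 2.0

def horizontal_profile(points, y0, tol=1.0):
    # keep points in a thin horizontal band around height y0,
    # ordered by x, giving a 2D (x, z) curve
    band = points[np.abs(points[:, 1] - y0) < tol]
    band = band[np.argsort(band[:, 0])]
    return band[:, [0, 2]]
```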
3 Facial Profile Matching
Fig. 3. Horizontal profile extraction. (a) the positions of the two horizontal profiles, (b) nose-crossing profile, (c) forehead-crossing profile

As described in Sect. 1, most existing methods for profile recognition are based on fiducial points and thus suffer from the inconsistency of feature point detection. One solution to this problem is to match the whole profile without detecting fiducial points. We propose a global matching approach to align and compare two profiles.
3.1 Similarity Metric
To be tolerant of noise and missing features, the function measuring the difference between two profiles should be robust: it should be insensitive to small local differences and mainly measure the global difference between the profiles. The partial Hausdorff distance is such a function for comparing two point sets [7]. Given two sets of points A = {a1, ..., am} and B = {b1, ..., bn}, the partial Hausdorff distance is defined as

    Hk(A, B) = max(hk(A, B), hk(B, A))                           (3)

where

    hk(A, B) = kth_{a∈A} min_{b∈B} ‖a − b‖                       (4)

and kth denotes the k-th ranked value (or, equivalently, a percentile of the m values). If the user specifies a fraction f, 0 ≤ f ≤ 1, then k = f · m. The partial Hausdorff distance is insensitive to small perturbations of the point sets and allows for small positional errors. In terms of set containment, hk(A, B) ≤ δ if and only if there is some Ak ⊆ A containing k points of A such that every point of Ak is within distance δ of B. Thus hk(A, B) partitions A into two sets: Ak, which is "close to" (within δ of) B, and the "outliers" A − Ak. It separately accounts for perturbations (through the distance δ) and for outliers (through the rank k).
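A direct implementation of Eqs. (3) and (4) might look like this, using brute-force pairwise distances; the default fraction f = 0.8 matches the setting reported in Sect. 4:

```python
import numpy as np

def h_k(A, B, f=0.8):
    # directed partial Hausdorff distance h_k(A, B), Eq. (4), with k = f * |A|
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)).min(axis=1)
    k = max(1, int(f * len(A)))
    return np.sort(d)[k - 1]          # k-th ranked nearest-neighbour distance

def partial_hausdorff(A, B, f=0.8):
    # symmetric version, Eq. (3)
    return max(h_k(A, B, f), h_k(B, A, f))
```

With f = 1 this reduces to the classical Hausdorff distance; smaller f discards the worst-matching fraction of points as outliers.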
3.2 Matching by Optimization
Fig. 4. Profiles at the initial position and the matching result

For the alignment of profiles, the transformation space has dimension 3: a rotation angle θ and a translation vector (tx, ty). Collecting the parameters into a parameter vector a = (θ, tx, ty), a point x in a profile is transformed as

    T2d(a; x) = T(θ, tx, ty; x) = [  cos θ   sin θ ] x + [ tx ]
                                  [ −sin θ   cos θ ]     [ ty ]      (5)

Thus, the alignment of profile L1 = {pi} to L0 = {qj} can be formalized as

    argmin_a Hk(L0, T2d(a; L1))                                      (6)
Owing to the wide variation of profiles and the characteristics of the Hausdorff distance, many local minima occur in the search space, so conventional optimization methods such as Newton's algorithm usually cannot converge to the global minimum of the function. We therefore use the simulated annealing method [4] to solve this optimization problem. Before optimization, we fit a line to each profile curve and then rotate and translate the profiles so that the two lines and their centroids coincide. Fig. 4 shows the profiles at the initial position and at the final converged position.
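A minimal sketch of the annealing search over a = (θ, tx, ty) is given below. The proposal scales, cooling rate, and iteration count are illustrative assumptions, not the schedule of [4]:

```python
import numpy as np

def transform(a, pts):
    # Eq. (5): rotate by theta, then translate by (tx, ty)
    th, tx, ty = a
    R = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([tx, ty])

def h_k(A, B, f=0.8):
    # directed partial Hausdorff distance, Eq. (4)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.sort(d)[max(1, int(f * len(A))) - 1]

def anneal_align(L0, L1, f=0.8, iters=2000, seed=0):
    # simulated annealing on the objective of Eq. (6)
    rng = np.random.default_rng(seed)
    a = np.zeros(3)
    cur = h_k(L0, transform(a, L1), f)
    best_a, best = a.copy(), cur
    T = 1.0
    for _ in range(iters):
        cand = a + rng.normal(0.0, [0.05, 0.5, 0.5]) * T   # proposal step
        c = h_k(L0, transform(cand, L1), f)
        # always accept improvements; accept worse moves with prob. exp(-dE/T)
        if c < cur or rng.random() < np.exp((cur - c) / max(T, 1e-8)):
            a, cur = cand, c
            if c < best:
                best_a, best = a.copy(), c
        T *= 0.998                                         # geometric cooling
    return best_a, best
```

Tracking the best state separately from the current one ensures the returned parameters never regress even when uphill moves are accepted early in the schedule.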
4 Experimental Results
We performed our experiments on the facial range data of the 3D RMA database, which is part of the M2VTS project. The range data were obtained by a 3D acquisition system based on structured light, in xyz form, with about 3000 points per model. The database consists of four parts (DBs1m, DBs2m, DBs1a, DBs2a) built from two sessions taken at different times, separated by several months; see [1] for details. In each part, every person has exactly three shots with different head orientations: straight ahead, left or right, and upward or downward, and some people smiled in some shots. Since spectacles, beards, and moustaches may be present, some facial features, such as the nose and eyes, are often incomplete. Fig. 5 shows two examples, with two views of each.

The proposed system is implemented on a Pentium 4 at 2.0 GHz. The percentile of the partial Hausdorff distance is set to 0.8. It takes about 2.2 seconds to compare two range data sets, including symmetry plane detection and profile matching. To combine the three scores from the three kinds of profiles, we use Fisher discriminant analysis to project the 3-dimensional score vector onto a direction that maximizes the between-class separation.
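The two-class Fisher projection of the three profile scores can be sketched as follows. This is the standard formulation, w ∝ Sw⁻¹(μ₁ − μ₂); the small ridge term is our own numerical safeguard, and the genuine/impostor score conventions are assumptions for illustration:

```python
import numpy as np

def fisher_direction(genuine, impostor):
    # two-class Fisher discriminant: w proportional to Sw^-1 (mu_g - mu_i),
    # applied here to 3-dimensional score vectors (one score per profile kind)
    mu_g, mu_i = genuine.mean(axis=0), impostor.mean(axis=0)
    Sw = (np.cov(genuine.T, bias=True) * len(genuine)
          + np.cov(impostor.T, bias=True) * len(impostor))
    # a small ridge keeps Sw invertible for few or degenerate samples
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_g - mu_i)
    return w / np.linalg.norm(w)
```

The fused score of a trial is then simply the dot product of its 3-dimensional score vector with w.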
Fig. 5. Two sample models from the 3D RMA database, two views of each model
Fig. 6(a) shows the ROC curves obtained with the central profile, the nose-crossing profile, and the forehead-crossing profile on DBs1m, and Fig. 6(b) shows the ROC curve of the fusion approach for the same session. Although the accuracy of the nose-crossing and forehead-crossing profiles is rather low, fusion still improves the performance: the error rate drops to half of that obtained using only the central profile.

More detailed results on the six databases are shown in Table 1. The EER (Equal Error Rate) of the fusion is notably lower than that of the central profile alone. The best EER, obtained on DBs1m, is 1.11%.
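For reference, an EER can be computed from genuine and impostor distance scores as sketched below; this is a generic procedure, not the authors' exact evaluation code:

```python
import numpy as np

def eer(genuine, impostor):
    # scores are distances: a probe is accepted when its score <= threshold
    thr = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine > t).mean() for t in thr])    # rejected genuine
    far = np.array([(impostor <= t).mean() for t in thr])  # accepted impostors
    i = np.argmin(np.abs(frr - far))                       # closest FRR/FAR crossing
    return (frr[i] + far[i]) / 2.0
```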
5 Conclusions
We have proposed a robust face authentication method based on multiple profiles. We presented a novel and effective method to extract the symmetry plane of the facial surface based on ICP. A global profile matching approach based on the partial Hausdorff metric was also presented to align and compare profiles; it does
Fig. 6. ROC curves of profile matching for DBs1m (30 persons). (a) central profile, nose-crossing profile, forehead-crossing profile, (b) fusion
Table 1. Profile matching EER on six databases, each with 30 individuals

Database     Central profile  Nose-crossing profile  Forehead-crossing profile  Fusion
DBs1m        2.22%            14.08%                 18.85%                     1.11%
DBs2m        4.44%            14.08%                 19.89%                     2.22%
DBs1m+s2m    6.67%            16.56%                 21.40%                     4.44%
DBs1a        5.56%            16.56%                 19.89%                     3.33%
DBs2a        7.78%            17.67%                 21.40%                     4.44%
DBs1a+s2a    8.89%            18.77%                 23.10%                     5.56%
not need to detect fiducial points. By adding the nose-crossing and forehead-crossing profiles, we notably improved the performance of face authentication. The experiments on the 3D RMA database demonstrate that the presented scheme can cope with facial range data of limited quality, and that a profile-based method can achieve fairly high performance using the limited information in 3D range data.
References

[1] C. Beumier and M. Acheroy. Automatic 3D face authentication. Image and Vision Computing, 18(4):315–321, 2000.
[2] J. Y. Cartoux, J. T. Lapreste, and M. Richetin. Face authentification or recognition by profile extraction from range images. In Workshop on Interpretation of 3D Scenes, pages 194–199, Nov. 1989.
[3] Yongsheng Gao and Maylor K. H. Leung. Human face profile recognition using attributed string. Pattern Recognition, 35:353–360, 2002.
[4] William L. Goffe, Gary D. Ferrier, and John Rogers. Global optimization of statistical functions with simulated annealing. Journal of Econometrics, 60:65–99, 1994.
[5] L. D. Harmon and W. F. Hunt. Automatic recognition of human face profiles. Computer Graphics and Image Processing, 6, 1977.
[6] L. D. Harmon, M. K. Khan, R. Larsch, and P. F. Raming. Machine identification of human faces. Pattern Recognition, 13:97–110, 1981.
[7] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing images using the Hausdorff distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(9):850–863, September 1993.
[8] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992.
[9] Szymon Rusinkiewicz, Olaf Hall-Holt, and Marc Levoy. Real-time 3D model acquisition. In SIGGRAPH 2002 Proceedings, pages 438–446, July 2002.
[10] C. J. Wu and J. S. Huang. Human face profile recognition by computer. Pattern Recognition, 23:255–259, 1990.
[11] K. Yu, X. Y. Jiang, and H. Bunke. Robust facial profile recognition. In Proc. IEEE Int. Conf. on Image Processing, volume 3, pages 491–494, 1996.