Video Based Parallel Face Recognition Using Gabor Filter on Homogeneous Distributed Systems

Usman Ali
Muhammad Ali Jinnah University, Islamabad Campus, Pakistan

Muhammad Bilal
COMSATS Institute of Information Technology, Abbottabad Campus, Pakistan

Abstract - This research aimed at building a fast, parallel, video based face recognition system built on the well known Gabor filtering approach. Face recognition is performed after face detection in each frame of the video, individually. The master-slave technique is employed as the parallel computing model. Each frame is processed by a different slave Personal Computer (PC) attached to the master, which acquires and distributes the frames. It is believed that this approach can be used for practical face recognition applications with some further optimization.

Index Terms - Gabor filter, parallel image processing, cluster computing, video based face recognition.

I. INTRODUCTION

Face recognition is an important task of computer vision. In order to improve recognition performance, enriched information to represent each person must be extracted. One promising direction is to use image sequences of faces resulting from head, lip and eye movements. Recent psychophysical results show that humans make use of face movement information for recognition.

A. Related work in Image-based Face Recognition

Face recognition using visual images is gaining acceptance as a superior biometric [1]. Images taken in the visual band are formed due to reflectance; therefore they depend on an external light source, which sometimes might be absent, e.g. at night time or when there are heavy clouds. Such imagery is also difficult to use because it depends on the intensity of the light and on its angle of incidence on the face. Some of the commonly used face recognition techniques are Principal Component Analysis (PCA) [2], Independent Component Analysis (ICA) [3], Linear Discriminant Analysis (LDA) [4], and Elastic Bunch Graph Matching (EBGM) [5]. A color face recognition technique using quaternionic Gabor filtering is proposed in [6].

B. Related work in Video Based Face Recognition

One problem with the image-based method is that it is possible to use a pre-recorded face photo to pretend to be a live subject. The second problem is that image-based recognition accuracy is still too low to be used in some practical applications. In order to overcome these problems, video based face recognition has been proposed recently [7-10]. One of the major advantages of video-based face recognition is that it prevents fraudulent system penetration by pre-recorded facial images; the great difficulty of forging a video sequence in front of a live video camera helps ensure that the biometric data come from the actual user. Another key advantage of the video based method is that more information is available in a video sequence than in a single image. If the additional information can be properly used, we may further increase the recognition accuracy. In [11], a multiple-classifier fusion based video face recognition algorithm is developed. The method preserves all the spatial-temporal information contained in a video sequence and has been reported to achieve a recognition rate of 98.6%.

The need to speed up image processing computations brought parallel processing into the computer vision domain. There are several parallel programming models in common use: shared memory, threads, message passing, data parallel, task parallel, and hybrid. Besides raw computational needs, other issues must be addressed to efficiently use parallel processing in solving vision problems [12], namely architectures, system integration, programming models and software tools, and real-time vision. Two open standards for portable high performance computing, VSIPL and OpenMP, are utilized for parallel processing and their performance benefits are demonstrated in [13], where the speedups achieved were analyzed for different numbers of processors. A hybrid and parallel face recognition system based on artificial neural networks and PCA is presented in [14], where a parallel architecture for artificial neural networks is proposed. In [15], a computation architecture for an adaptive visual information retrieval system is presented. The system has been designed to run as a distributed system using middleware techniques, and it contains methods to dynamically balance load among the available servers to improve overall system performance. In [16] the usefulness of synchronous parallel processing for real time image processing is demonstrated: a number of basic image processing operators, together with their data parallel implementations, are presented, including region-based image operators, morphologic operators, edge detection, point operators and local operators. A very easy approach of adding parallelism to a sequential image processing library is presented in [17]. The method parallelizes a framework function of the library responsible for processing filter operators, and it is validated by testing it with the geometric mean filter. Two new algorithms have been introduced in [18] to reduce the data transmission overhead of parallel algorithms requiring neighbourhood pixels. A parallel face recognition system based on the Eigenfaces method is designed in [19] to support a live sequence of input images from a video camera at an approximate rate of 30 frames per second.

In this paper we propose a parallel video based face recognition system for a heterogeneous environment. The popular Gabor filter technique is used for face recognition in each individual frame of the video. The paper is organized as follows: Section II discusses face skin detection, feature point calculation, feature point selection, feature vector generation, and similarity calculation; Section III presents the proposed parallel architecture; the limitations and benefits of parallel face recognition, the conclusions, and future work are discussed in the subsequent sections.
II. GABOR FILTERING FOR FACE RECOGNITION

A. Face Skin Detection

Once a face is detected in a video frame, a conventional image based face recognition technique is used for single frame recognition. Skin color based detection [20] is implemented in the proposed architecture due to its simplicity, although it is less efficient. The skin color range, tested over a wide range of skin colors, is

(r > 95) & (g > 40) & (b > 20) & ((max(r, g, b) - min(r, g, b)) > 15) & (abs(r - g) > 15) & (r > g) & (r > b)

where r, g and b are the red, green and blue contents of the pixel respectively.

B. Feature point calculation

Physiological studies found simple cells in the human visual cortex that are selectively tuned to orientation as well as to spatial frequency, and it was suggested that the response of a simple cell can be approximated by 2-D Gabor filters [21]. One of the most successful recognition methods is based on graph matching of filter coefficients, which has the disadvantage of high matching complexity.

A filter response at any point can be calculated by convolving the filter kernel with the image at that point. For a point (X, Y), the filter response R is defined as

R(X, Y, \theta, \lambda) = \sum_{x=-X}^{N-X-1} \sum_{y=-Y}^{M-Y-1} I(X + x, Y + y) \, f(x, y, \theta, \lambda)

f(x, y, \theta, \lambda, \sigma_X, \sigma_Y) = \exp\left( -0.5 \left( \frac{R_1^2}{\sigma_X^2} + \frac{R_2^2}{\sigma_Y^2} \right) \right) \exp\left( \frac{i 2 \pi R_1}{\lambda} \right)

R_1 = x \cos\theta + y \sin\theta, \quad R_2 = -x \sin\theta + y \cos\theta

\theta_k = \pi k / n, \quad k = 1, 2, \ldots, n

where \sigma_X and \sigma_Y are the standard deviations of the Gaussian envelope along the x and y dimensions respectively, \lambda, \theta and n are the wavelength, the orientation and the number of orientations respectively, and I(x, y) denotes the N x M image. We have chosen four orientations and a constant wavelength because feature points are relatively insensitive to the Gabor kernel wavelength, while they vary significantly across different orientations. We use the constants \lambda = 2 x 1.414 and \sigma_X = \sigma_Y = \lambda / 2.

C. Feature Point selection

We choose as a feature point the point, within a particular window of size S x T, at which the response of the Gabor filter kernel is maximum, where S = N / W and T = M / W, N is the number of columns, M is the number of rows, and W is the number of windows along each dimension. The feature point in any window C can be evaluated as

R_j(x_0, y_0) = \max_{(x, y) \in C} R_j(x, y)

where R_j is the response of the image to the j-th Gabor filter.

D. Feature vector generation

Feature vectors are generated at the feature points as discussed in the previous sections. The p-th feature vector of the i-th reference face is defined as

v_{i,p} = [x_p, y_p, R_{i,1}(x_p, y_p), \ldots, R_{i,j}(x_p, y_p)]

where j is the number of filter responses. A feature vector thus contains the responses together with the location information.

E. Similarity calculation

The similarity and difference between the features of the input image and any image from the database are calculated using the formulas provided in [22].

III. VIDEO BASED PARALLEL FACE RECOGNITION

The acquired face video is first converted into an image sequence. The feature points are then calculated using the Gabor filtering technique discussed in Section II. For the test input video the person is asked to move his/her face, lips, eyes, etc. Feature vectors are calculated for each frame (image) of the input video individually. The parallel algorithm of this stage is illustrated in Figure 1.

It is usually desired that the recognition system be fast as well as efficient. Since the same face recognition steps are repeated for each frame, it is proposed that the individual face recognition processes for each frame be carried out on different slave computers. The parallel algorithm is designed on the same master-slave programming model as in [19, 23]. The design of the parallel architecture for video based parallel face recognition is shown in Figure 2. The master processing node acquires the input frames, distributes the frames to a set of slave nodes, gathers the feature vectors from the slaves, determines the closest distance, and declares the final recognition result. The slave nodes, on the other hand, compute the feature points. The input frames are partitioned in the time domain rather than in the image dimension domain: each slave receives a set of different input frames (of the same video stream) from the master in a round robin fashion. Thus multiple computers are utilized to speed up the identification process up to the point where real-time performance can be achieved.
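To make the skin test of Section II-A concrete, the following is a minimal C++ sketch of the per-pixel rule; the struct and function names are illustrative assumptions rather than part of the implemented system.

    // Per-pixel skin-colour rule from Section II-A (illustrative sketch).
    #include <algorithm>
    #include <cstdlib>

    struct PixelRGB { int r, g, b; };      // 8-bit colour components, 0..255

    bool isSkinPixel(const PixelRGB& p)    // hypothetical helper name
    {
        int maxc = std::max({p.r, p.g, p.b});
        int minc = std::min({p.r, p.g, p.b});
        return p.r > 95 && p.g > 40 && p.b > 20 &&
               (maxc - minc) > 15 &&
               std::abs(p.r - p.g) > 15 &&
               p.r > p.g && p.r > p.b;
    }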
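As an illustration of the feature extraction in Sections II-B and II-C, the sketch below computes the response of one Gabor kernel at a pixel and selects the window maximum of a response map. The array layout, the function names, and the use of the complex magnitude for the maximum search are assumptions made for this example, not details taken from the original implementation.

    #include <cmath>
    #include <complex>
    #include <vector>

    // I is an N-column by M-row grey-level image stored row-major, I[y * N + x].
    // Response of one Gabor kernel (orientation theta, wavelength lambda) centred
    // at pixel (X, Y); sigmaX = sigmaY = lambda / 2 as chosen in Section II-B.
    std::complex<double> gaborResponse(const std::vector<double>& I, int N, int M,
                                       int X, int Y, double theta, double lambda)
    {
        const double PI = 3.14159265358979323846;
        const double sigma = lambda / 2.0;
        std::complex<double> sum(0.0, 0.0);
        for (int y = -Y; y < M - Y; ++y) {
            for (int x = -X; x < N - X; ++x) {
                double r1 =  x * std::cos(theta) + y * std::sin(theta);
                double r2 = -x * std::sin(theta) + y * std::cos(theta);
                double envelope = std::exp(-0.5 * (r1 * r1 + r2 * r2) / (sigma * sigma));
                std::complex<double> wave(std::cos(2.0 * PI * r1 / lambda),
                                          std::sin(2.0 * PI * r1 / lambda));
                sum += I[(Y + y) * N + (X + x)] * envelope * wave;
            }
        }
        return sum;
    }

    // Feature point of one window C = [x0, x0+S) x [y0, y0+T): the pixel at which
    // the magnitude of the j-th filter response is maximal (Section II-C).
    struct FeaturePoint { int x, y; double value; };

    FeaturePoint windowMaximum(const std::vector<double>& responseMag, int N,
                               int x0, int y0, int S, int T)
    {
        FeaturePoint best{x0, y0, responseMag[y0 * N + x0]};
        for (int y = y0; y < y0 + T; ++y)
            for (int x = x0; x < x0 + S; ++x)
                if (responseMag[y * N + x] > best.value)
                    best = FeaturePoint{x, y, responseMag[y * N + x]};
        return best;
    }

With the constants used in the paper, gaborResponse would be evaluated for \lambda = 2 x 1.414 and the four orientations \theta = \pi k / 4, k = 1, ..., 4, and the magnitude of each complex response would form the response map searched by windowMaximum.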
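The following is a minimal, runnable sketch of the master-slave idea above, together with the timing bookkeeping described later in this section: frames are dealt out in a round robin fashion and the elapsed time from the first dispatch to the last gathered result is measured. Worker threads (std::async) stand in for the networked slave PCs, and processFrame is a hypothetical placeholder for the per-frame Gabor recognition, so nothing here reproduces the actual socket protocol of the prototype.

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <future>
    #include <vector>

    // Hypothetical stand-in for the work done on a slave: run the Section II
    // pipeline on one frame and return its best similarity to the database.
    double processFrame(int frameIndex)
    {
        (void)frameIndex;   // real slave: skin detection, Gabor features, similarity
        return 0.0;         // placeholder result
    }

    int main()
    {
        const int nFrames = 10;   // frames extracted from the input video
        const int nSlaves = 4;    // slave processors (threads stand in for PCs)
        using Clock = std::chrono::steady_clock;

        Clock::time_point serverSendTime0 = Clock::now();   // first frame dispatched

        // Round robin partitioning in the time domain: worker s handles frames
        // s, s + nSlaves, s + 2 * nSlaves, ...
        auto workerTask = [&](int s) {
            double bestLocal = 0.0;
            for (int i = s; i < nFrames; i += nSlaves)
                bestLocal = std::max(bestLocal, processFrame(i));
            return bestLocal;
        };

        std::vector<std::future<double>> results;
        for (int s = 0; s < nSlaves; ++s)
            results.push_back(std::async(std::launch::async, workerTask, s));

        // The master gathers the per-worker results and declares the closest match.
        double best = 0.0;
        for (auto& r : results)
            best = std::max(best, r.get());

        Clock::time_point serverRecvTimeLast = Clock::now(); // last result gathered
        double totalTime = std::chrono::duration<double>(
                               serverRecvTimeLast - serverSendTime0).count();
        std::printf("best similarity %.3f, total time %.3f s\n", best, totalTime);
        return 0;
    }

Because the frames are partitioned in the time domain, the workers share no state, which is what allows the networked version to send each frame to a different slave PC with no synchronisation beyond the final gather at the master.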
Figure 1: Parallel Video Based Face Recognition Architecture (face detection, pre-processing, feature value calculation, feature vector selection and similarity calculation, with the feature vectors database and decision maker at the master).

Figure 2: Master-slave implementation of video based face recognition.

A. Distributed implementation protocol

The complete system consists of 'n' clients and one server. All the clients are connected to the server using different ports. Each frame is sent to an individual 1.2 GHz client processor, which applies the Gabor filtering technique to the image and sends the similarity of the image with all the database images back to the server. The complete work flow is given below:

* All the images are made ready to send to the 'n' clients, i.e. the images have been read from 'txt' file format and saved in arrays.
* The 'n' clients connect to the server.
* The server sends an image file to each of the 'n' clients.
* The server writes the send times in the files 'server_send_time0' ... 'server_send_time(n-1)'.
* Each client receives its image and writes the receive time in the file 'client_recv_time'.
* Each client writes the process start time in the file 'client_process_start_time'.
* Each client now processes (applies the Gabor filter to) the image sent to it.
* After the processing ends, each client writes the process end time in the file 'client_process_end_time'.
* The 'n' clients now send their results to the server.
* The server writes the receive times in the files 'server_recv_time0' ... 'server_recv_time(n-1)'.

B. Processing Time Calculation

We have 'n' frame images to be sent to 'n' clients. The total time taken to process the 'n' frames is

Total Time = server_recv_time(n-1) - server_send_time0

i.e. the time at which the last result is received from a client minus the time at which the first image was sent to a client. The total time therefore also includes all the network delays. The network delay nd per client is

nd1 = client_recv_time - server_send_time
nd2 = server_recv_time - client_send_time
nd = nd1 + nd2

This is the delay that each client costs us, since we are in a distributed environment, and it is assumed that the same amount of delay is incurred in each direction of the communication between a client and the server. The time taken by the processing at a client is

client_process_end_time - client_process_start_time

IV. LIMITATIONS OF PARALLEL FACE RECOGNITION

A. Complexity

In general, parallel applications are much more complex than the corresponding serial applications, perhaps by an order of magnitude. The costs of this complexity are measured in programmer time in virtually every aspect of the software development cycle: design, coding, debugging, tuning and maintenance. The face recognition result now depends on the proper functioning of all the slaves and the master, which further increases the complexity.

B. Resource Requirements

The primary intent of parallel video based face recognition is to decrease the execution wall-clock time; however, in order to accomplish this, more CPU time is required. For example, a parallel code that runs in 1 hour on 3 processors actually uses 3 hours of CPU time. Similarly, the amount of memory required is greater for parallel recognition than for serial recognition, due to the need to replicate data. The overhead costs associated with setting up the parallel environment, task creation, communication and task termination can make up a significant portion of the total execution time for short runs.

C. Scalability

Hardware factors play a significant role in scalability.
For example: the memory-CPU bus bandwidth on an SMP machine, the communications network bandwidth, and the amount of memory available on any given computer. The performance might thus be reduced if there is high traffic on the network or if the systems used as slaves or master are slow.

V. BENEFITS OF PARALLEL FACE RECOGNITION

A. Speedups

The speedup could reach up to the number of frame sequences used for video based face recognition. If the number of frames is 10, then parallel processing through the proposed architecture would take almost 10 times less than the sequential processing time. The speedup also depends on the network traffic, the speed and memory of all slaves and the master, the network card speed, the network speed, etc.

B. Portability

All of the usual portability issues associated with serial programs apply to parallel programs. The applications developed for the slaves and the master for face recognition can be run on any Local Area Network (LAN) and no special arrangements are required.

VI. CONCLUSIONS

This paper briefly described a parallel design framework for an efficient, real-time video based face recognition system. An efficient video based face recognition system has a high computational cost, and the cost is even higher if the computationally expensive (but effective) Gabor filtering technique is utilized for face recognition; the proposed parallel cluster computing architecture would make the video based face recognition process both faster and more efficient. With our design framework, real-time performance can be achieved on regular computers, such as those found in a student cluster.

FUTURE WORK

We are currently working on the implementation of the proposed design on four 1.2 GHz processors, for the same database for which the non-parallel design has been demonstrated. The applications for the master and the slaves are developed in Visual C++. The prototype system would be capable of reliable recognition, with reduced constraints on the position and orientation of the faces.

ACKNOWLEDGEMENT

We are thankful to the Higher Education Commission (HEC) Pakistan and PTCL Pakistan for facilitating our research and providing travel grant support to attend conferences.

REFERENCES

[1] Zhao, W., Chellappa, R., Phillips, P.J., Rosenfeld, A.: Face Recognition: A Literature Survey, ACM Computing Surveys (CSUR), Vol. 35, 2003, pp. 399-458.
[2] Turk, M., Pentland, A.: Eigenfaces for Recognition, Journal of Cognitive Neuroscience, Vol. 3, No. 1, 1991, pp. 71-86.
[3] Bartlett, M., Movellan, J., Sejnowski, T.: Face Recognition by Independent Component Analysis, IEEE Trans. on Neural Networks, Vol. 13, No. 6, November 2002, pp. 1450-1464.
[4] Belhumeur, P., Hespanha, J.P., Kriegman, D.: Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, 4th European Conf. on Computer Vision, ECCV'96, 15-18 April 1996, Cambridge, UK, pp. 45-58.
[5] Wiskott, L., Fellous, J.-M., Kruger, N., von der Malsburg, C.: Face Recognition by Elastic Bunch Graph Matching, Chapter 11 in Intelligent Biometric Techniques in Fingerprint and Face Recognition, eds. L.C. Jain et al., CRC Press, 1999, pp. 355-396.
[6] Jones, C. F., III: Color Face Recognition Using Quaternionic Gabor Filters, PhD Dissertation, 15 Jan 2003.
[7] Edwards, G., Taylor, C., Cootes, T.: Improving Identification Performance by Integrating Evidence from Sequences, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999, pp. 486-491.
[8] Kruger, V., Zhou, S.: Exemplar-based Face Recognition from Video, In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2002, pp. 182-187.
[9] Satoh, S.: Comparative Evaluation of Face Sequence Matching for Content-based Video Access, In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2000, pp. 163-168.
[10] Yamaguchi, O., Fukui, K., Maeda, K.: Face Recognition Using Temporal Image Sequence, In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 318-323.
[11] Tang, X., Li, Z.: Video Based Face Recognition Using Multiple Classifiers, In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'04), 2004.
[12] Choudhary, A., Ranka, S.: Parallel Processing for Computer Vision and Image Understanding, Guest Editors' Introduction, IEEE Computer Magazine, Vol. 25, No. 2, February 1992, pp. 7-10.
[13] Kepner, J.: Exploiting VSIPL and OpenMP for Parallel Image Processing, Astronomical Data Analysis Software and Systems X (ADASS), November 12-15, 2000, Boston, MA, ASP Conference Proceedings, Vol. 238, San Francisco: Astronomical Society of the Pacific, ISSN 1080-7926, 2001, pp. 209-212.
[14] Bazanov, P., Kim, T., Kee, S. C., Lee, S. U.: Hybrid and Parallel Face Classifier Based on Artificial Neural Networks and Principal Component Analysis, IEEE International Conference on Image Processing (ICIP), Rochester, New York, Sept. 22-25, 2002.
[15] Kruger, T., Wickel, J., Kraiss, K.-F.: Parallel and Distributed Computing for an Adaptive Visual Object Retrieval System, In Proceedings of the 17th International Parallel and Distributed Processing Symposium (IPDPS 2003), France, April 2003.
[16] Braunl, T.: Tutorial in Data Parallel Image Processing, Australian Journal of Intelligent Information Processing Systems (AJIIPS), Vol. 6, No. 3, 2001, pp. 164-174.
[17] Nicolescu, C., Jonker, P.P.: Parallel Low-level Image Processing on a Distributed-memory System, in: J. Rolim (ed.), Proc. Workshop on Parallel and Distributed Methods for Image Processing (held in conjunction with IPDPS 2000, Cancun, Mexico, May 1-5), 2000, pp. 226-233.
[18] Altilar, D., Paker, Y.: Minimum Overhead Data Partitioning Algorithms for Parallel Video Processing, 12th Intl. Conf. on Domain Decomposition Methods, 2001.
[19] Keatkaew, T., Achalakul, T.: Real-time, Parallel Face Recognition Using Eigenfaces, In Proc. of the International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC 2005), 4-7 July 2005, Jeju, Korea, Vol. 2, pp. 1149-1150.
[20] Lim, R., Reinders, M.J.T., Thiang: Face Detection Using Skin Color and Gabor Wavelet Representation, Proc. of the International Conf. on Electrical, Electronics, Communication, and Information (CECI'2001), March 7-8, 2001, Jakarta.
[21] Fan, W., Wang, Y., Liu, W., Tan, T.: Combining Null Space-based Gabor Features for Face Recognition, In Proc. of the 17th International Conference on Pattern Recognition.
[22] Taj, J. A., Shahzad, M. I., Ali, U., Qureshi, R. J.: Digital Signal Processor Based Implementation of Face Recognition Using Gabor Filter, accepted for publication in the International Conference on Intelligent Systems, Sunway Lagoon Resort Hotel, Kuala Lumpur, Malaysia, 1-3 December 2005.
[23] Wang, C.-L.: High-Performance Computing for Vision, 1996, pp. 931-946.