International Journal of Computer Applications (0975 – 8887) Volume 38 – No. 10, January 2012

Face Detection, Identification and Tracking by the PRDIT Algorithm using an Image Database for Crime Investigation

V. S. Manjula, Asst. Professor, Dept. of Computer Application, St. Joseph's College, Chennai-600 122, India
Lt. Dr. S. Santhosh Baboo, Reader, P.G. & Research Dept. of Computer Application, D.G. Vaishnav College, Chennai-106, India

ABSTRACT
The field of face recognition has attracted a great deal of research interest in detecting, identifying and tracking faces, and many researchers have addressed the face detection and identification problem through various approaches. The approach proposed here is useful in large real-time applications: the face detection, identification and tracking mechanism detects faces in video in real time using the PRDIT (Proposed Rectangular Detection, Identification and Tracking) algorithm. The mechanism is therefore helpful in identifying individuals involved in robbery, murder and terror activities. For face recognition, the algorithm combines histogram equalization with a back-propagation neural network, recognizing an unknown test image by comparing it with the known training images stored in a database. The approach also uses skin-color extraction as a parameter for face detection, and multi-linear training and rectangular face-feature extraction are performed for training, identification, detection and tracking. The proposed technique is thus useful for identifying a single person from a group of faces, and is well suited to all kinds of faces whose complexion varies within a certain range. A real-life example was taken and the algorithms were simulated successfully in the IDL tool.
Keywords: PRDIT, histogram equalization, rectangular features.

1. INTRODUCTION
The face is our primary focus of attention in social life, playing an important role in conveying identity and emotion. Detection and recognition of faces is an important research topic in computer vision [1]. Face detection is a challenging and difficult process because differing facial expressions, races, backgrounds, illumination, overlapping faces and low brightness all complicate it. Computational models of face recognition are interesting because they contribute not only to theoretical knowledge but also to practical applications. Computers that detect and recognize faces can be applied to a wide variety of tasks, including criminal identification, security systems, image and film processing, identity verification, tagging and human-computer interaction. Unfortunately, developing a computational model of face detection and recognition is difficult because faces are complex, multidimensional and meaningful visual stimuli [2]. Face detection is now used in many places, especially on photo-hosting websites such as Picasa and Facebook, where the automatic tagging feature adds a new dimension to sharing pictures among the people who appear in them and tells other viewers who is in the image. Here we have studied and implemented a simple but very effective face detection algorithm that takes human skin color into account for detecting and tracking the face [3][4]. We also propose a new approach based on multi-linear training and rectangular face-feature extraction, whose major steps are detection, training, tracking and identification.
The main aim, which we believe we have reached, was to develop a method of face recognition that is fast, robust, reasonably simple and accurate, built from relatively simple and easy-to-understand algorithms and techniques [4].

2. PERSPECTIVE OF OUR WORK
The proposed approach is simple, fast and accurate, and its components are applied together as a single algorithm to give better results under complex circumstances such as varying face position and luminance. Each of these algorithms is discussed below. The approach handles changes in the face image such as lighting, background complexity and multiple faces in the image, and it improves the detection results over other detectors. The most challenging task for a detector is handling pose: different poses make it difficult to detect a particular individual, and our proposed approach overcomes this drawback [1]. Many tracking approaches become confused in the initial stage; we use a tracker algorithm for detection that avoids this problem. The information from the detection process is incorporated into the parameters and reproduced, so the algorithm finds faces in frames even where frame-based detectors fail. The trained knowledge can also accommodate new faces entered into the training set and is always ready to be integrated in the updating process [7].

2.1 Training Phase
The training process applies a back-propagation neural network. The face features, together with each individual user's name, are fed into the system for processing. To extract the skin color from the face images we apply a histogram-equalization process.
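As an illustration of this training phase, a minimal back-propagation loop can be sketched as below. The layer sizes, learning rate and toy XOR patterns are assumptions for the sake of a self-contained example, not the paper's configuration (which trains 9 hidden layers on face and non-face blocks).

```python
import numpy as np

# Minimal back-propagation sketch. Layer sizes, learning rate and the toy
# XOR task are illustrative assumptions, not the paper's configuration.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights initialised to random values in the range -1.0 to 1.0
w1 = rng.uniform(-1.0, 1.0, (2, 5))   # input layer  -> hidden layer
w2 = rng.uniform(-1.0, 1.0, (5, 1))   # hidden layer -> output layer

# Binary input patterns and their target patterns
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

lr = 0.5
mse_init = float(np.mean((T - sigmoid(sigmoid(X @ w1) @ w2)) ** 2))
for _ in range(5000):
    # Forward pass: multiply weights by previous outputs, apply activation
    h = sigmoid(X @ w1)
    y = sigmoid(h @ w2)
    err = T - y                            # output vs. target pattern
    # Weight update of the form w += lr * error * out_i * out_j * (1 - out_j)
    d_out = err * y * (1.0 - y)
    d_hid = (d_out @ w2.T) * h * (1.0 - h)
    w2 += lr * h.T @ d_out
    w1 += lr * X.T @ d_hid
mse_final = float(np.mean((T - sigmoid(sigmoid(X @ w1) @ w2)) ** 2))
print(mse_init, "->", mse_final)
```

The loop stops after a fixed number of epochs here; in practice the mean squared error against the target pattern is used as the stopping criterion.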
The trained data thus hold each user's skin color and some additional facial features. We also introduce a new rectangle-based approach for detecting the face [8]: the human face images are split into rectangular portions of different shapes and sizes, and their features are saved along with the training samples in the database for later verification. Figure 1 illustrates the training procedure of the proposed approach.

Figure 1. Training module

To train the database, 100 input face images with different features are given to the training system. The input images are taken from the GTAV Face Database [10]. The face and non-face block weights are calculated as positive and negative values for the neural network, and the proposed network is built with 9 hidden layers.

Pseudo code of the network:
- Initialize the weights to random values in the range -1.0 to 1.0.
- Present the input pattern as binary values to the input layer of the neural network.
- Neuron processing: multiply each neuron's incoming connection weights by the previous layer's output values, add all the values, and apply the activation function to generate the output; this process stops when the output layer is reached.
- Compare the output pattern with the target pattern to calculate the mean squared error.
- Update each weight: Weight(new) = Weight(old) + LearningRate × OutputError × Output(Neuron i) × Output(Neuron i+1) × (1 − Output(Neuron i+1)).
- Go back to the initial step; the algorithm ends when the outputs of all patterns match the target patterns [9].

Train1 = Ii (Ni)

where Ii is the intensity, Ni the images of a single person at different angles, and Train1 the multi-linear training.

2.2 Preprocessing of the Image
Face images are given as input to the proposed system, which extracts multiple features of the face. Various face regions are also taken under differing conditions.
The background, the lighting and the distance between the camera and the face all change the appearance of the face in the image. To obtain scale invariance, the input images are resized into an image pyramid. We also partition the sub-sampled image into 21 × 21-pixel blocks and normalize each block to zero mean and unit variance. We take an RGB color image as input, (Ri, Gi, Bi) ∈ [0, 1]^3, and produce a grayscale image T ∈ [0, 1] as output, to avoid gamma-correction issues.

2.2.1 Mathematical approach
Applying the same method individually to the red, green and blue components of the RGB color model may change the color balance of the image, since each color channel's relative distribution changes. The image is therefore first converted to another color space such as HSV/HSL or Lab, and the proposed algorithm is applied to the value or luminance channel, which does not affect the image's hue and saturation.

Consider a discrete grayscale image and let ni be the number of occurrences of gray level i. The probability of occurrence of a pixel of level i in the image is

P(xi) = ni / n, i ∈ {0, …, L − 1}

where L is the total number of gray levels in the image, n is the total number of pixels, P is in effect the image's histogram normalized to [0, 1], and x is an occurrence of a pixel in the image. The cumulative distribution function corresponding to P is

C(i) = Σ P(xj), summed over j = 0 … i.

To map a level x of the original image to a level y of the equalized image, y's cumulative probability function should be linearized across the value range. The transformation is defined by

yi = T(xi) = C(i)

where yi is the linearized transformation of the pixel; note that T maps the levels into the domain [0, 1].
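The mapping derived above (histogram, cumulative distribution, then linearized levels) can be sketched as follows. The 256-level 8-bit image and the synthetic low-contrast input are assumptions for illustration; the rescaling by Max and Min anticipates the transformation given in the next step.

```python
import numpy as np

# Sketch of histogram equalization as derived above:
# p(x_i) = n_i / n, C(i) = sum over j<=i of p(x_j), y_i = C(x_i),
# then rescaled back to the original [Min, Max] range.
# L = 256 levels (8-bit grayscale) is an assumption.

def equalize(img, levels=256):
    n = img.size
    hist = np.bincount(img.ravel(), minlength=levels)  # n_i for each level i
    p = hist / n                                       # p(x_i) = n_i / n
    cdf = np.cumsum(p)                                 # C(i)
    y = cdf[img]                                       # y_i = C(x_i) in [0, 1]
    lo, hi = img.min(), img.max()
    return (y * (float(hi) - float(lo)) + lo).astype(np.uint8)

# Usage on a synthetic low-contrast image: the levels spread out
img = np.clip(np.random.default_rng(1).normal(120, 10, (64, 64)), 0, 255).astype(np.uint8)
out = equalize(img)
print(img.std(), "->", out.std())
```

Because the narrow input histogram is stretched toward a uniform distribution over the original range, the standard deviation of the output is larger than that of the input.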
To map the values back into the original domain, the following transformation is applied to the result:

y'i = yi · (Max − Min) + Min

where y'i is the equalized image, Max is the maximum pixel value in the image, and Min is the minimum pixel value in the image.

3. PRDIT APPROACH
The Proposed Rectangular Detection, Identification and Tracking (PRDIT) technique is an extension of RDIT that uses histogram equalization, skin-color extraction and a neural network capable of learning the interactions of multiple factors such as different viewpoints, lighting conditions and expressions. In face detection, skin color plays an important role when extracting information in the face area [8], and the color information obtained makes the task easier. Localizing the face is hard to achieve, but the color information simplifies it in complex environments. Based on color-space models such as YCbCr, HSV and YIQ, researchers have offered many techniques for detecting skin information. No two people have exactly the same skin information, and skin color differs between persons, yet it is arranged in a small region of the CbCr plane [9]. This model is robust compared with the other models holding skin information: people from different regions such as Europe, Africa and Asia have different skin information, so we adopt the YCbCr color model to detect the color of the skin [10].

3.1 Detection of the Face
We provide an arbitrary image as input to find whether a human face is present in the input photos or videos. If a face is present, the system returns its location and the spatial extent of each face.
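A sketch of YCbCr skin-color masking of the kind described here is given below. The paper does not specify its Cb/Cr thresholds; the ranges Cb ∈ [77, 127] and Cr ∈ [133, 173] used here are commonly cited values and should be treated as assumptions, as should the choice of the BT.601 conversion coefficients.

```python
import numpy as np

# Sketch of YCbCr skin-color masking for face detection.
# The Cb/Cr threshold ranges are commonly cited values, assumed here;
# the paper leaves its exact [Cb1, Cb2] and [Cr1, Cr2] limits unspecified.

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range conversion from uint8 RGB."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """True where the pixel's (Cb, Cr) falls inside the skin-tone region."""
    _, cb, cr = rgb_to_ycbcr(rgb)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# Usage: a skin-like pixel vs. a pure-blue pixel
px = np.array([[[200, 140, 110], [0, 0, 255]]], dtype=np.uint8)
print(skin_mask(px))  # first pixel is skin-like, second is not
```

Note that the mask ignores the luminance channel Y entirely, which is what makes the CbCr-plane test relatively insensitive to lighting changes.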
GRectsi = GRectf (Ii > equalized value)
SRectsi = GRectsi (w × h) / ni
If Ii(SRectsi) = Train2, this rectangle is in a face; else this rectangle is not in a face,
where GRectsi is the face-rectangle group image, GRectf the full input group-image rectangle, and SRectsi the small-rectangle group image.

3.2 Identification of the Face
All face images are divided into blocks that are provided to the neural network, whose output contains some face-candidate blocks. To eliminate falsely detected blocks we apply face-verification methods. The basic idea behind this concept is to consider the edge-point distribution of the facial features: edge-map detection is performed on the blocks of each candidate. The proposed algorithm then stores the face-rectangle features for the various images, and the images stored in the database are matched against the features of the input images.

If Ii(GRectsi) = Train1, the face is identified (we know who the person is) and STRect = GRectsi; else no one is identified,
where STRect stores the identified face rectangle.

3.3 Face Tracking and Recognition
In an active scenario, tracking is the more important process and follows a sequence of steps. Tracking is divided into two categories: (1) head tracking and (2) facial-feature tracking. Head-tracking methods are color-, region- and shape-based, while feature-tracking methods maintain a tracker for each feature and outline points such as the mouth and eyes. For prediction and updating, Condensation and Kalman filters are used; combining head tracking and facial-feature tracking with these filters reduces the overall problem. The framework mainly focuses on watching multiple faces at the same time, and the tracking of multiple people is performed using Kalman filters.
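As an illustration, the presence check used in tracking (matching the stored rectangle features of an identified face against the rectangles detected in the next frame) might be sketched as below. The feature vectors, the cosine-similarity measure and the threshold are illustrative assumptions, since the paper does not define the exact matching test.

```python
import numpy as np

# Sketch of the frame-to-frame presence check: the stored rectangle
# features (STRect) are compared against the face rectangles detected in
# the next frame (NFRects). Feature vectors, the cosine-similarity
# measure and the 0.9 threshold are illustrative assumptions.

def present_in_frame(st_rect, nf_rects, threshold=0.9):
    """Return True if any next-frame rectangle matches the stored one."""
    a = np.asarray(st_rect, dtype=float)
    for rect in nf_rects:
        b = np.asarray(rect, dtype=float)
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim >= threshold:
            return True
    return False

# Usage with toy feature vectors: the second rectangle matches the stored one
stored = [0.9, 0.2, 0.4]
frame_rects = [[0.1, 0.8, 0.1], [0.88, 0.21, 0.42]]
print(present_in_frame(stored, frame_rects))
```

In a full tracker this per-frame check would feed the Kalman or Condensation filter mentioned above, which predicts where the rectangle should appear next.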
If Ii(NFRectsi) = STRect, the identified person is present in the frame; else the identified person is not present in the frame,
where NF is the next frame of the group image (after preprocessing and face detection) and NFRectsi are the next frame's face rectangles.

4. RESULTS AND DISCUSSION
To deliver an effective system we take various color images that include many faces of numerous sizes, under dissimilar lighting circumstances and with different expressions. Skin color may vary because of lighting effects; to minimize them we include gray methods that compensate for the damage caused by lighting. The RGB color components (red, green and blue) are then nonlinearly transformed into the YCbCr color space.

Figures 2.1–2.3 (panels a–e): comparison of (a) original image, (b) grayscale, (c) histogram-equalized image, (d) skin-color extraction, (e).

The preferred thresholds are [Cb1, Cb2] and [Cr1, Cr2]. A [Cb, Cr] value within the threshold limits is considered a good skin-tone pixel, while values beyond the limits are considered dull skin with a lower pixel value. The evidence above shows the outstanding performance of the proposed algorithm. We performed our experiments with different parameters and features. Figure 2.4a compares the number of trained inputs against the accuracy of the result, and Figure 2.4b compares the number of trained inputs against the speed of detection for the existing PCA and RDIT approaches and for PRDIT.

Figure 2.4a Trained input vs.
accuracy

Figure 2.4b Trained input vs. speed of detection

The features of the test image were extracted in the same fashion and compared with the trained database; recognition was successful on 95% of occasions. The recognition algorithm was the same PRDIT algorithm and involved the training approach, so either way it classified a detected face as true or false. The quantitative success rates are provided below, and a comparison between the PCA, RDIT and PRDIT algorithms is made to highlight the advantages of this algorithm over PCA and RDIT.

Table 1. Comparison between PCA, RDIT and PRDIT
PCA: requires a full frontal display; each face is a single entity in the database; recognition is done by one-to-one matching.
RDIT: requires a full frontal display with no lighting variation; training is done on faces of the same person with different features; recognition is done from different poses of the same person.
PRDIT: works well with different viewpoints, expressions and lighting conditions; training is done on faces of the same person with different features; recognition is done from a group of different users.

These proposed techniques work well under robust conditions such as complex backgrounds and different face positions, and they give different accuracy rates under different conditions, as experimentally observed. A test, shown in Figure 2.5a, highlighted the accuracy in percent of the different types of technique.

Figure 2.5a Different algorithm vs. accuracy
Figure 2.5b Different algorithm vs. error rates

Repeated runs of the experiment provided a 100% result, and Figure 2.5b shows that the proposed algorithm has a tolerable error rate.

5.
CONCLUSION
PRDIT provides a face detection, identification and tracking mechanism and is used to detect faces in video in real time. The proposed technique is very useful for identifying a single person from a group of faces in a set of video frames and is well suited to all kinds of faces. The face recognition, detection and tracking algorithms were implemented and tested with different types of images under varying conditions. The RDIT, PCA and PRDIT success rates were given for face detection; the success rate differed between images depending on external factors, and the overall success rate was 95%.

6. REFERENCES
[1] Chiang, C. C. et al., "A novel method for detecting lips, eyes and faces in real time", Real-Time Imaging, 2003.
[2] Garcia, C. & Tziritas, G., "Face detection using quantized skin color regions merging and wavelet packet analysis", IEEE Trans. Multimedia, 1999.
[4] Hsu, R. L., Jain, A. K. & Abdel-Mottaleb, M., "Face detection in color images", IEEE Trans., 2002.
[5] Kruppa, H. & Schiele, B., "Using local context to improve face detection", in Proc. of the British Machine Vision Conference (BMVC), Norwich, 2003.
[6] Mostafa, L. & Abdelazeem, S., "Face detection based on skin color using neural networks", in Proc. of the 1st ICGST Intl. Conference on Graphics, Vision and Image Processing (GVIP), Egypt, 2005.
[7] Pitas, I. & Karasaridis, A., "Multichannel transforms for signal/image processing", IEEE Trans. Image Processing, 1996.
[8] Rizzi, A., Marini, D. & Gatta, C., "Color correction between gray world and white patch", Human Vision and Electronic Imaging Conf., 2002.
[9] Viola, P. & Jones, M. J., "Robust real-time face detection", Intl. Journal of Computer Vision, 2004.
[10] Wang, H. & Chang, S. F., "A highly efficient system for automatic face region detection in MPEG videos", IEEE Trans. Circuits and Systems for Video Technology, 1997.
[11] Yepeng Guan & Lin Yang, "Unsupervised face detection based on skin color and geometric information".
[3]
Guetta, A., Rajagopal, S. & Pare, M., "Face Detection", EE368 Final Project, 2003.