
Session B11
#122
Disclaimer—This paper partially fulfills a writing requirement for first year (freshman) engineering students at the
University of Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is
based on publicly available information and may not provide complete analyses of all relevant data. If this paper is used for
any purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering
students at the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
THE APPLICATION OF MACHINE LEARNING IN FACIAL RECOGNITION
Keting Zhao, [email protected], Mahboobin 4:00, Jiacong Liu, [email protected], Mahboobin 4:00
Abstract— Most facial recognition (FR) systems use principal component analysis, which limits the accuracy and efficiency of the recognition process, so a better algorithm is needed to improve current FR systems. This paper introduces the machine learning based FR system and discusses its potential social impacts. Integrating a machine learning algorithm into the FR system allows the computer to learn and detect more detailed differences among given objects, so the optimized FR system yields better image search results with a lower error rate. This application of machine learning is significant for strengthening law enforcement and improving surveillance systems. However, ethical issues also arise with the optimized FR system, mainly concerning the violation of individual privacy in public. These ethical issues can influence the sustainability of the FR system, and resolving them is the key to improving that sustainability.
Key Words— Computing ethics, Computer Object
Recognition, Facial Recognition, Machine Learning
Algorithm, Public security and privacy
INTRODUCTION: THE CURRENT ISSUE IN FACIAL RECOGNITION
Facial recognition (FR) uses a computer to identify individuals by their facial features and has been applied in various fields over the past decade. As depicted in the TV show Person of Interest, an FR system could predict criminal actions and send out the corresponding information about a targeted person, such as his or her social security number. The algorithm facial recognition used in the past was called Principal Component Analysis, which simply transformed various human facial features, such as the distance between the eyes and nose, into geometric shapes. The software would then match these shapes with those on actual faces in order to recognize an individual's identity. The problem with this analysis, however, was that it limited facial recognition accuracy, since it could not distinguish between two people with similar facial patterns the way human beings can. Therefore, a better algorithm was needed. The technology developed to solve this issue is the machine learning algorithm. In the next section, the basic mechanism of machine learning is introduced. Further in the paper, the FR system, its relevant applications, and a case study are presented, as well as the current ethical issues that FR is facing.
MECHANISMS OF MACHINE LEARNING
For a human, “learning” means gaining knowledge or skills through study, experience, or being taught. The general idea of machine learning algorithms is therefore to let the computer “understand” objects, such as a human face, through a learning process similar to the human one. There are two important reasons for giving a machine the ability to learn: to condense the relationship between input and output across amounts of data too large for a human to process, and to extract the significant relationships hidden within piles of data [1]. In other words, a machine learning method can detect the desired patterns in an enormous provided data set and classify, or separate, the objects into specific classes or categories.
Data Structure and Machine Learning
There are several parallels between machine learning and human learning: computational models are developed based on theories of animal and human learning [2]. The basic process in machine learning is much like how humans learn the structure of the universe [3]. Suppose a person lives in a two-dimensional, random, flat world where every pixel value, or every point that builds up this flat world, is random across time and space. It is hard for the person to point out which direction is north or south, because nothing makes any part of this universe distinguishable from the other parts. Therefore, the idea of moving north or south in this two-dimensional world is meaningless, since the knowledge of north and south does not exist in it. Thus, before a human or a machine can learn the structure of the universe, there has to be structure in the universe: the pixel values must be correlated with some pattern and related across time and space, so that there is knowledge that can be learned. Referring back to the example of moving north and south, the different magnetic poles given to north and south are the unique structure that humans learned and eventually used to define north and south. In other words, the foundation of the machine learning algorithm is to learn the structure in the data; when a specific structure does exist in the data, the machine, which works like a human brain, is able to catch the pattern in that structure and determine the object based on its special correlations, or compress the data structure into corresponding geometric shapes based on its arrangement. Through combining a lot of correlated information across different modalities, such as fitting these patterns into social constructs and emotions, a human starts to “create” reality and derive his or her own self-consciousness [3]. A computer using a machine learning algorithm learns the structure of data in a similar way, analyzes the patterns in that structure, and eventually derives appropriate results in a given context.
In general terms, whenever a machine changes its structure, database (inputs), or program in a manner that improves its performance on its tasks, it is considered machine learning. Figure 1 shows the basic components that a machine learning algorithm needs to include; any improvement that occurs in these components counts as learning.
FIGURE 1 [1]
The diagram of an AI system

Using Vectors in Machine Learning

The fundamental mathematics in a machine learning algorithm is the vector, which is the real world in the eyes of the computer [1]. A vector is another way to describe a point in a multi-dimensional space, where each dimension represents one distinguishable piece of information about the targeted object in the real world. By using vectors to sort attributes, or information, each object can be represented by a point in an enormously large database. For example, in order to represent the individual cats and dogs at a pet store in a two-dimensional database, a data analyst could set the y-axis as tail length and the x-axis as weight. On this x-y coordinate graph, each dog and cat is then stored as a point with coordinates (x, y) based on its weight and tail length. When more attributes of these cats and dogs are added, they are still represented as points, just with higher-dimensional coordinates such as (x, y, z, h, j). In general, the first step in machine learning is to reduce all the features of the objects into data points that form the appropriate input for the machine learning algorithm.
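As a minimal illustration of this idea (the animals, attribute values, and use of NumPy below are our own invention, not part of any cited system), each object simply becomes a row of numbers:

```python
# Minimal sketch: representing pet-store animals as feature vectors.
# The attribute values here are invented for illustration only.
import numpy as np

# Each row is one animal; columns are (weight in kg, tail length in cm).
pets = np.array([
    [4.2, 25.0],   # a cat
    [3.8, 23.5],   # another cat
    [22.0, 30.0],  # a dog
    [30.5, 35.0],  # another dog
])

# Adding more attributes simply adds more dimensions, e.g.
# (weight, tail length, ear height, age), giving points like (x, y, z, h).
extra = np.array([[8.0, 3], [7.5, 2], [12.0, 5], [14.0, 7]])
pets_4d = np.hstack([pets, extra])
print(pets_4d.shape)  # (4, 4): four animals, four features each
```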
Supervised Learning in Machine Learning Algorithm
One of the tasks that a machine learning algorithm tries to accomplish in the context above is to distinguish which points are cats and which points are dogs. As shown in Figure 2, the red points are cats and the blue points are dogs. With the human visual processing system, a person can easily recognize a dog or a cat by looking at its multiple features, or its data structure. The computer, however, does not have such an advanced visual system. Thus, the machine learning algorithm draws a classification function, or separation line, based on the labels of the data points; in this case, the green line in the graph. The computer takes these input data, which carry labels describing each point as a cat or a dog, and generates the relationship among the data points, which is how the computer learns. Then, when a new vector is sorted into this database, the computer can use the relationship it generated earlier to accurately determine whether this vector represents a dog or a cat.
FIGURE 2 [15]
Plot of cats and dogs at the pet store (red points: cats, blue points: dogs)
The example above describes a branch of machine learning called supervised learning. By definition, supervised learning means generating, from assigned labels, a function that maps inputs to desired outputs [1]. This means the initial training data set has already been given a classification, and the relationship generated from that data set can be used as a tool to determine the class of new input data. This learning algorithm has been used in the facial recognition systems discussed in a later section.
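The paper gives no code, but a hedged sketch of the cat and dog example might look like the following; the choice of scikit-learn's LogisticRegression as the classification function, and all of the numbers, are assumptions made only for illustration:

```python
# Hedged sketch of supervised learning on the labeled cat/dog points.
# scikit-learn's LogisticRegression stands in for the "separation line";
# the paper does not specify which classification function is used.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature vectors: (weight, tail length). Labels: 0 = cat, 1 = dog.
X = np.array([[4.2, 25.0], [3.8, 23.5], [5.0, 26.0],
              [22.0, 30.0], [30.5, 35.0], [18.0, 28.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# "Learning" here means fitting a separation boundary from the labeled points.
clf = LogisticRegression().fit(X, y)

# A new, unlabeled vector can now be classified using the learned relationship.
new_animal = np.array([[20.0, 29.0]])
print("dog" if clf.predict(new_animal)[0] == 1 else "cat")
```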
Unsupervised Learning in Machine Learning Algorithm
Another type of learning mechanism in the machine learning algorithm, which is also applied in facial recognition systems, is called unsupervised learning [1]; it applies when the given data carry no labels. In this case, the computer needs to determine whether a specific data structure exists in the data set or not. The particular functions used in the machine learning algorithm are K-means clustering and the Gaussian mixture model [3]. Figure 3 and Figure 4 are examples of using these two functions in machine learning. The K-means clustering function helps to find the relational and correlational associations and the distances between the vector values. The computer concludes that when the distance between two data points is lower than a certain threshold, their relationship is close; if not, the relationship is far. Thus, all the data points are separated into different clusters based on structure that is native to the data. Similarly, the data can be separated into groups based on their histogram values and computed into a Gaussian mixture model.

FIGURE 3 [12]
K-means clustering example

FIGURE 4 [12]
Gaussian mixture model example
Thus, with unsupervised learning, the computer can generate the relationships among the data independently of any labeling. With this mechanism, the FR system gains more flexibility and can eventually carry out the recognition process like a human, and even faster and more accurately than a human.
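A hedged sketch of these two functions, assuming scikit-learn's KMeans and GaussianMixture as stand-ins and using invented data points, might look like this:

```python
# Hedged sketch of unsupervised learning: no labels are given, and the
# algorithm groups the points purely from structure native to the data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Unlabeled feature vectors (weight, tail length); no cat/dog labels supplied.
X = np.array([[4.2, 25.0], [3.8, 23.5], [5.0, 26.0],
              [22.0, 30.0], [30.5, 35.0], [18.0, 28.0]])

# K-means assigns each point to the nearest of k cluster centers,
# effectively thresholding the distances between vectors.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# A Gaussian mixture model instead fits k Gaussian components and assigns
# each point to the component most likely to have generated it.
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

print(kmeans_labels, gmm_labels)  # e.g. [0 0 0 1 1 1] from each method
```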
FACIAL RECOGNITION WITH PCA
(PRINCIPAL COMPONENT ANALYSIS)
Over the past decade, scientists have taken several different approaches to building facial recognition systems by combining complex mathematical algorithms with machine learning. Among them, one of the most popular solutions was the facial recognition system based on principal component analysis (PCA). PCA was an algorithm designed to train computers to recognize similar patterns between objects (in this case, the objects are images). The algorithm used a mathematical formula to lower the dimension of the images and stored the information from these images in vectors. Doing so allowed computers to manipulate the image information for further analysis.
The PCA recognition system was widely adopted, used by agencies such as the FBI for criminal verification and as an employee check-in system at ordinary companies.
Using criminal verification as an example, suppose a police station has 1000 face images of 100 criminals collected from past criminal records, and the police want to use these images as a criminal database to verify whether a person has committed crimes in the past. To perform such a task, the operator must manually input this data set, with each face labeled, into the PCA recognition system and split the images into different groups. For this scenario, suppose the operator splits the data into 100 groups, each with 10 images of the same identity, and enters them into the PCA system.
FIGURE 5 [4]
PCA image processing 1

FIGURE 6 [4]
PCA image processing 2

As Figures 5 and 6 show, the PCA algorithm takes all input images and transforms them into 2D vectors [4]. Each value inside the vector corresponds to a pixel in the original image. The algorithm then combines all the 2D vectors that belong to the same group into a single vector representing all the identities in that group. By taking the average value and normalizing the group vectors, the system creates an eigenvector for each group [4]. The eigenvector can be understood as a vector containing the values of the “average” face of all 10 faces in one group.

FIGURE 7 [4]
Examples of eigenfaces

Once an eigenvector is transformed back into a normal-size image, the operator gets a so-called “eigenface.” The top four pictures in Figure 7 are examples of eigenfaces. In this case, the police station would have 100 eigenfaces stored in the database and ready to be used.

FIGURE 8 [4]
Determining similarity through linear algebra

When the operator inputs a new face image into the system, the system again transforms the new image into a vector, then compares that vector to each of the eigenvectors and computes the similarity between the new face and the existing eigenvectors, as shown in Figure 8 [4]. If the computer finds a high similarity between the input vector and one of the eigenvectors, it looks further into the group related to that eigenvector to find which identity specifically matches the input vector. On the other hand, if the computer finds that the input vector has no similarity with any of the eigenvectors in the database, the system concludes that this person has no recorded crime in the past.
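A hedged sketch of this matching pipeline might look like the following; it follows the paper's simplified "average face per group" description rather than the full eigenface algorithm of [4], and the image size, database contents, and similarity threshold are all assumptions:

```python
# Hedged sketch of the PCA-style matching pipeline described above:
# flatten each face image into a vector, average the vectors of each
# identity group, then compare a new face against every group vector.
import numpy as np

rng = np.random.default_rng(0)

# Fake database: 100 groups x 10 images each, every image 64x64 grayscale,
# flattened into 4096-dimensional vectors (values invented for illustration).
database = rng.random((100, 10, 64 * 64))

# One "average face" vector per group, normalized to unit length.
group_vectors = database.mean(axis=1)
group_vectors /= np.linalg.norm(group_vectors, axis=1, keepdims=True)

def identify(image, threshold=0.9):
    """Return the best-matching group index, or None if nothing is close enough."""
    v = image.reshape(-1)
    v = v / np.linalg.norm(v)
    similarities = group_vectors @ v          # cosine similarity to each group
    best = int(np.argmax(similarities))
    return best if similarities[best] >= threshold else None

query = rng.random((64, 64))                  # a new input face image
print(identify(query))                        # group index, or None for "no record"
```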
Disadvantages and Shortcomings of PCA-like Systems
Although statistics have shown that the PCA system has an accuracy rate of 60%-80% [6], many shortcomings and conditions limit the performance of the PCA algorithm. First, the lighting conditions in the image have to be nearly perfect. In order for computers to conduct comparisons at the maximum level, the faces in the image must be fully under good lighting so that all the pixels can be transformed into useful values in the vector. In poor lighting, only part of the face is revealed, which heavily reduces the success rate of matching. Second, the PCA algorithm requires all the pictures in the database and all the input images to be exactly the same size, with the face centered at a fixed position. This alignment requirement restricts the flexibility of the recognition system, forcing the operator to manually crop and adjust every picture that does not fit the standard. Last, all pictures must be frontal images of faces, which means that if the face in an image is tilted at an angle, the system may fail the matching test [4].
Given these pre-existing flaws in PCA and PCA-like systems, a better facial recognition algorithm is in demand to improve the accuracy and usability of current machine recognition systems.
FACIAL RECOGNITION BASED ON DEEP
MACHINE LEARNING (DEEPFACE)
In 2014, Facebook announced its own facial recognition system, DeepFace, claiming that it recognizes human faces with about 97% accuracy. A paper published by the Facebook AI Research lab states that the DeepFace algorithm differs from other facial recognition systems because it uses 3D face alignment and a deep machine learning algorithm.
What Is DeepFace
Unlike the PCA algorithm, where an operator has to crop and edit all pictures following a certain guideline, the DeepFace algorithm trains computers to detect the coordinates of the face in a picture by themselves. As a social network company, Facebook has one of the largest image databases in the world, which enabled the DeepFace team to show computers millions of face images and let the computers understand and learn the overall structure of a human face through the machine learning algorithm [5]. This allows computers to detect faces and create a frontal image of the face from undocumented pictures entirely on their own.

FIGURE 9 [5]
DeepFace analyzing an image

Figure 9a is a simulation of what the DeepFace system “sees” in an image. In the DeepFace recognition system, the computer first finds the general area of the human face and captures six reference points on the detected face; it then crops the image to a suitable size, keeping only the general face region to analyze (9b). To compute a 3D model from a 2D image, the computer then finds 67 additional reference points on the cropped image (9c) and projects the facial features onto a 3D model (9d, 9e). Doing so not only allows the computer to rotate the face in the image to a proper angle for recognition (9g) but can also be used to predict the side view of the face (9h). This 3D modeling method solves the facial alignment problem of past facial recognition systems, and it increases the usability of the program, since the system can now recognize someone from a picture even if the face is not fully frontal [5].
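As a hedged, two-dimensional sketch of the alignment idea (DeepFace's actual pipeline fits a 3D model from 67 points [5]; the landmark coordinates, template positions, and least-squares affine fit below are illustrative assumptions only):

```python
# Hedged 2D sketch of alignment: map detected reference points onto a fixed
# template so every face ends up in a canonical position before recognition.
import numpy as np

# Six detected reference points (eyes, nose, mouth corners) in pixel coords.
detected = np.array([[120., 95.], [180., 92.], [150., 130.],
                     [128., 165.], [150., 170.], [172., 163.]])

# Where those same points should sit in the canonical cropped face.
template = np.array([[30., 30.], [70., 30.], [50., 55.],
                     [35., 75.], [50., 78.], [65., 75.]])

# Solve for an affine transform A (3x2) by least squares: template ~ [x, y, 1] @ A.
ones = np.ones((len(detected), 1))
A, *_ = np.linalg.lstsq(np.hstack([detected, ones]), template, rcond=None)

def align(points):
    """Apply the estimated affine transform to landmark points."""
    return np.hstack([points, np.ones((len(points), 1))]) @ A

print(np.round(align(detected), 1))  # detected points mapped near the template
```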
In terms of processing the face image, instead of transforming the entire image into a vector as the PCA system would, DeepFace uses a nine-layer neural network to extract and process the useful information in the image (Figure 10). Eventually the system computes a vector that represents the input face image (Figure 10, F7), but every value in the vector is a useful representation of facial features, such as the distance between the eyes and nose [5]. This comparison method is much more reliable than the PCA algorithm, since the representation vector created by DeepFace does not contain useless data.

FIGURE 10 [5]
Example of the nine-layer DNN
DeepFace also built its database from labeled faces. First it grouped all images with the same labeled identity together and studied their similarities. The system then computes and stores the representation vector for the face of that identity for future reference. When the operator inputs an undocumented image and asks the system to verify the identity, the system compares the vector of the new input image with the existing representation vectors in the database and returns the identity once a match is found, or concludes that the identity is unknown [5].
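A hedged sketch of this verification step might look like the following; the 128-dimensional embeddings, identity names, and similarity threshold are assumptions made for illustration, not DeepFace's actual values:

```python
# Hedged sketch of verification with representation vectors: each known
# identity is stored as an embedding, and a new image's embedding is compared
# against the database; anything below the threshold is reported as unknown.
import numpy as np

rng = np.random.default_rng(1)
known_identities = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(new_embedding, threshold=0.8):
    """Return the best-matching identity, or 'unknown' if nothing is close enough."""
    best_name, best_sim = None, -1.0
    for name, emb in known_identities.items():
        sim = cosine(new_embedding, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim >= threshold else "unknown"

# A query embedding close to "alice" (simulated by adding small noise).
query = known_identities["alice"] + rng.normal(scale=0.05, size=128)
print(verify(query))  # expected: "alice"
```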
In this sense, DeepFace has a much higher accuracy rate than the PCA system, since DeepFace genuinely learns the structure of a human face and only takes the important parameters into account for comparison. According to the Labeled Faces in the Wild (LFW) benchmark, an online image database specifically designed to test facial recognition systems, the DeepFace algorithm achieved an accuracy rate of 97.35%, while the PCA algorithm scored only 60.2% [6]. The DeepFace algorithm has enabled computers to recognize faces at the human level.
APPLICATION OF FACIAL RECOGNITION
There are many fields of interest in our society to which facial recognition can be applied. The two main applications of facial recognition are identification and surveillance, and both will benefit from an improved facial recognition system.

Surveillance
A facial recognition system could greatly improve the current surveillance system. Today there are hundreds of surveillance cameras everywhere, but none of them can actually tell the identity of a person. In other words, all these surveillance cameras are of little use without people sitting behind the monitors and staring at all the faces passing by. With a facial recognition system, however, law enforcement could use computers and surveillance cameras to locate suspects and criminals. In fact, back in 2001, a PCA facial recognition system was tested by monitoring all the people entering the Super Bowl, and it found 19 criminals [9]. However, due to uncertainty about accuracy, the movement and poses of the subjects, and public concerns over privacy violations, the FBI and police departments gave up on expanding the use of facial recognition for surveillance. The PCA recognition system required the subject to sit still and fully face the camera in order to capture a usable face image, but since people were moving all the time, the PCA system did not work well. This is where the DeepFace algorithm can make up for the disadvantages of the PCA algorithm: by applying its 3D modeling, it can still capture the face image even when people are constantly moving or facing the camera sideways.
By the same principle used to find criminals, facial recognition can also be applied to finding missing persons. For example, if a child got lost in Disneyland, the parents could provide a sample picture of the child and input it into the facial recognition system; by using the surveillance cameras, the system could then locate the child.
With its high accuracy, DeepFace has many potential uses that could provide great value to society in the future, and law enforcement could use it to greatly improve our security.
Identification
The main objective of identification is to answer general questions such as “Who is this?” and “Is this the right person?” Answering these questions is essential in scenarios where identification is used to gain permission to access personal property. Facial recognition could be used much like fingerprints: allowing you to unlock your cell phone and set up a payment method based on your identity. The following are some examples of these applications in real-life scenarios.
First, imagine a door lock with a pre-installed facial recognition system that unlocks the door when it recognizes the owner of the house; you would no longer need to carry a key around and worry about losing it.
Second, experiments on facial payment have been
conducted by a Finnish software company called Uniqul [7].
In the future, people can link their payment methods and
bank accounts together with their unique faces. Once all the
information has been recorded, one can go to a grocery store
or a movie theater without bringing a wallet, and pay simply
by looking at the camera for a few seconds. When the
system recognizes the face, it will make the transaction
online. We will no longer have to pull out a credit card, enter a password, and sign the receipt when going to a store.
Another potential application of the facial recognition system is to reinforce academic integrity. College standardized tests such as the ACT and SAT could use facial recognition to verify valid test takers and prevent substitute test takers from entering the exam room. This would ensure an equal chance for all test takers.
Last, in China, several tourist attractions limit the
number of people that can enter the site each day because
some of these attractions are old and need to be preserved.
Limiting the number of tourists makes it much easier to maintain the attraction sites. However, limiting the number of tickets gives scalpers an opportunity to raise prices and profit from reselling these tickets. To solve this problem, WuZhen, one of the oldest tourist sites, decided to use facial recognition to verify the owner of each ticket, so that only the person who bought the ticket has permission to enter the site [8].
These scenarios are valid and applicable only under the assumption that the facial recognition system is accurate enough that it will not make mistakes. Clearly, DeepFace, with its human-level accuracy, would outperform the PCA algorithm in these scenarios.
ETHICAL CONCERNS AND
SUSTAINABILITY OF MACHINE
LEARNING BASED FACIAL
RECOGNITION
The current argument about machine learning based facial recognition is the tradeoff among security, privacy, and freedom [10]. The balance among these three elements will also determine the sustainability of the FR system. The United Nations Conference on Sustainable Development emphasizes that sustainability is not just strong economic performance but also intragenerational and intergenerational equity, which involves a balanced consideration of social, economic, and environmental goals and objectives in both public and private decision-making [11]. In this case, how to reach equity in the tradeoff of personal privacy, freedom, and public security is the main roadblock to the
sustainability of the optimized FR system. Brey, from the University of Twente, indicates in his journal article that violation of freedom and violation of privacy are the two main moral issues affecting the sustainability of the FR system.
Since the machine learning based facial recognition system is designed to connect with online databases, which contain large amounts of personal information such as social security numbers, medical prescriptions, and ID pictures on driver's licenses, the potential violation of personal privacy in public spaces becomes an issue. Although opponents do not deny that the optimized FR system could reduce the crime rate and enhance quality of life, they question its reliability and efficiency in stopping crime [10]. Mainly, opponents argue that the risk of leaking personal information to illegal users and the error rates in identifying criminals outweigh the gains in security. On the other hand, if engineers can find solutions to these moral concerns, the sustainability of FR systems will improve.
Violation of Freedom
When FR yields an incorrect match and sends a wrong alert to the police, the cost is that innocent citizens become subject to harassment by the police [10]. Opponents claim that this violates individual freedom; nobody deserves to be arrested for crimes he or she did not commit. People do tend to accept the idea that society has to suffer minor inconveniences so that criminals can be apprehended. Under the current situation, however, the harm done to innocent citizens flagged as false positives may begin to outweigh the benefit of a few additional arrests of criminals. Engineers report that three types of error are responsible for incorrect matches in the optimized FR system [10]. The first error exists when unpredictably wrong information appears in the online database from which the FR system extracts its inputs. Second, errors in the probability estimates create an unavoidable margin of error. The third type of error occurs when the system is installed or used in the wrong way. Narrowing down the false positive results requires integrating more accurate and comprehensive data checks into the program; for example, adding error checkpoints, or comparing data from multiple databases, before the data are used as input to the FR system. If the false positive results can be significantly reduced, the FR system can retain its value to law enforcement and maintain its social sustainability; in other words, it can actually enhance public security by reporting the correct criminals to the police.
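One hedged sketch of this cross-checking idea, with hypothetical database names, match results, and thresholds (not an existing system or the authors' design), could look like this:

```python
# Hedged sketch of cross-checking: an alert is raised only when matches from
# independent databases agree, reducing single-source false positives.
# All names, confidences, and thresholds below are hypothetical.
def cross_checked_alert(matches_by_database, min_sources=2, min_confidence=0.9):
    """matches_by_database: {database_name: (identity, confidence)} per matcher."""
    votes = {}
    for identity, confidence in matches_by_database.values():
        if confidence >= min_confidence:
            votes[identity] = votes.get(identity, 0) + 1
    agreed = [ident for ident, n in votes.items() if n >= min_sources]
    return agreed[0] if agreed else None  # None: no alert is sent to the police

# Two databases agree on the same identity with high confidence -> alert.
print(cross_checked_alert({"criminal_records": ("person_17", 0.95),
                           "drivers_license": ("person_17", 0.93),
                           "social_media": ("person_42", 0.97)}))
```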
Violation of Privacy
Another debate about the FR system concerns its access to different databases. So far, no regulation clearly states which databases an FR system may access. As Alvaro Bedoya, executive director of the Center on Privacy and Technology at Georgetown Law, claims, “No federal law controls this technology, no court decision limits it. This technology is not under control” [a]. The FR system will be used by different agencies for slightly different purposes. For example, the FBI will use it to identify a criminal, which requires the database containing all criminal records, while another police department may use it to trace a missing high school student, which requires access to a database containing information on high school students. It is understandable that determining which databases are necessary for a given task is hard. However, law enforcement will sometimes use these databases without notifying anyone, and this puts the sustainability of the FR system at risk of violating individuals' privacy.
It has been revealed that the FBI launched its advanced biometric database, Next Generation Identification, in 2010, enlarging the previous fingerprint database with further capabilities including an FR system. However, the bureau did not inform the public about its newfound capabilities, nor did it publish a privacy impact assessment, required by law, for five years. Furthermore, unlike the collection of fingerprints and DNA, which happens after a legal arrest, photos of innocent civilians are being collected proactively. The FBI made arrangements with 18 different states to gain access to their databases of driver's license photos [12]. It is appalling that the picture on one's driver's license can be put into a repository searchable by law enforcement across the country. The purpose of optimizing the FR system is to provide better security for society, but the FBI's actions violate this purpose and put the entire society in unnecessary panic that personal information is being scanned and used without notice. If this ethical issue cannot be solved, the negative feedback from society will outweigh the benefits FR could bring to the surveillance system, and this imbalance will eventually damage the sustainability of the FR system.
In addition, the growing flexibility of the FR system itself can also cause violations of privacy. The FR system has the ability to break contextual integrity [10], relating an identified person to other personal information by aggregating data from multiple databases. All confidentiality agreements become invalid at this point. Since there is also no clear standard for determining who may be a user of an FR system [13], some data collectors could easily gain access to the system and sell aggregated information that was originally under the protection of confidentiality agreements. For example, information about people's prescriptions might be sold to medicine companies for marketing purposes, or information collected for scientific purposes might be used in political activities. In addition, there are no rules about the purposes for which the FR system may be used, which can cause domain and user shifts [10]. For instance, instead of using the FR system to examine criminal suspects, a police force could use it to run ongoing statistical analysis on the composition of a crowd's face prints in order to track individuals over long distances. As journalist Richard Meares reports, there have been several reports of FR systems being abused by operators repeatedly tracking and zooming in on attractive women [10]. Opponents argue that such actions will cause public panic and the uncomfortable feeling of being watched incessantly.
Thus, the violation of privacy could be limited by setting strict regulations on use, as well as by adding user identification checkpoints before logging into the system. Placing obligations to prevent privacy violations on system developers and users can also be an efficient way to stop these undesirable violations from occurring.
Stopping these abuses will require not only continuously optimizing the FR system's algorithm, but also government effort to establish standards for the use of FR systems. If a satisfactory balance in the tradeoff between public security and individuals' freedom and privacy can be reached, the FR system will have sustainable development even into the next generation.
PERCEPTION ON MACHINE LEARNING BASED FACIAL RECOGNITION
Based on deep machine learning, the DeepFace FR system is an ideal solution to overcome the difficulties and flaws encountered by previous FR systems. If used correctly, the DeepFace system could provide numerous valuable applications that strengthen the verification, security, and surveillance systems in our society. However, regulations and standards must be established to restrict the areas of interest in which FR systems are used, in order to prevent violations of individual privacy and freedom.
SOURCES
[1] N. Nilsson. “Introduction to Machine Learning.” Stanford University. 4 February 2015. Web. Accessed 10 February 2017. http://ai.stanford.edu/~nilsson/MLBOOK.pdf
[2] M. Jordan and T. Mitchell. “Machine learning: Trends, perspectives, and prospects.” Science, AAAS. 17 June 2015. Web. Accessed 12 Jan. 2017. http://science.sciencemag.org/content/349/6245/255.full
[3] TheScienceguy3000. YouTube. 27 July 2013. Web. Accessed 3 Mar. 2017. https://www.youtube.com/watch?v=-rMMTv7XLYw
[4] M. Turk and A. Pentland. “Eigenfaces for Face Detection/Recognition.” Journal of Cognitive Neuroscience. 1991. Accessed 25 Feb. 2017.
[5] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” Facebook AI Research Lab. Facebook, Inc. 2016. Web. Accessed 25 Feb. 2017. https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf
[6] “Labeled Faces in the Wild.” University of Massachusetts. n.d. Web. Accessed 26 Feb. 2017. http://vis-www.cs.umass.edu/lfw/results.html
[7] “World's first face recognition payment method.” Uniqul, Inc. 15 Jul. 2013. Web. Accessed 28 Feb. 2017. http://uniqul.com/worlds-first-face-recognition-payment-system/
[8] T. Revell. “Chinese tourist town uses face recognition as an entry pass.” New Scientist. Reed Business Information, Ltd. 26 Nov. 2016. Web. Accessed 1 Mar. 2017. https://www.newscientist.com/article/2113176-chinese-tourist-town-uses-face-recognition-as-an-entry-pass/
[9] V. Chachere. “Biometrics Used to Detect Criminals at Super Bowl.” ABC News. 13 Feb. 2002. Web. Accessed 2 March 2017. http://abcnews.go.com/Technology/story?id=98871
[10] P. Brey. “Ethical Aspects of Facial Recognition Systems in Public Places.” Journal of Information, Communication & Ethics in Society (2004): 97-109. University of Twente. Web. Accessed 26 Jan. 2017. https://www.utwente.nl/en/bms/wijsb/staff/brey/Publicaties_Brey/Brey_2004_Face-Recognition.pdf
[11] “Sustainable Development Goals: 17 Goals to Transform Our World.” United Nations. Web. Accessed 31 Mar. 2017. http://www.un.org/sustainabledevelopment/
[12] O. Solon. “Facial recognition database used by FBI is out of control, House committee hears.” The Guardian. Guardian News and Media, 27 Mar. 2017. Web. Accessed 31 Mar. 2017. https://www.theguardian.com/technology/2017/mar/27/us-facial-recognition-database-fbi-drivers-licenses-passports
[13] S. Padania. “The Ethics of Face Recognition Technology.” WITNESS Blog. 20 July 2015. Web. Accessed 11 Jan. 2017. https://blog.witness.org/2012/03/the-ethics-of-face-recognition-technology/
[14] M. Pacula. n.d. Web. Accessed 3 Mar. 2017. http://blog.mpacula.com/2011/04/27/k-means-clustering-example-python/
ACKNOWLEDGMENTS
Thank you to our writing instructor, Rachel, for continuously giving us useful feedback and pushing us to write a better paper. Thank you also to our co-chair, Patrick, for regularly checking on our progress. Finally, thank you to Beth for talking us through all the information and requirements of this conference paper.