Features, Configuration, and Holistic Face Processing
(Tanaka and Gordon)

Overview
● Holistic Representation
● 3 Paradigms
● Configural view vs. Featural+Configural view
● Representations in the Brain
● Definitional Problems
– Feature? Configuration?

Holistic Representation
● Faces are recognized at the subordinate level rather than the basic level or first order
● Most people are face experts
– 250 ms to recognize a face
● Recognition cannot be based on first-order information, because the differences in features and configuration between individuals are subtle
– Everyone has eyes above a nose above a mouth
● Theories of holistic face perception: features are encoded simultaneously and integrated into a "global percept"

Three Tests
● Paradigms:
– Disproportional face inversion
– Face Composite Task
– Parts-Wholes Task
Face Inversion
● Inversion Task: faces are disproportionally impaired by inversion compared with other objects
● Faces are more difficult to recognize than other objects when inverted
– Applies to famous and novel faces and photos
– Applicable under many conditions
● Low-level properties remain constant while recognition is impaired
● Leads to the belief that inversion causes "piecemeal" perceptual analysis
● Limitation: the source of the inversion effect is unclear

Face Composite Task
● A composite face built from two well-known faces has a new identity
● People were asked to identify a person by looking at a cued half of the face
– Judgments were affected by the non-cued area
– People cannot attend well to just a cued area of a face
● Inverting the composite face voids interference from the non-cued half
● Suggests features are not perceived individually from one another, but integrated

Parts-Wholes Task
● A series of faces is studied, and a feature of one face is then tested in isolation or in a whole control face
● Recognition of the feature is greater when it is presented in the whole face than in isolation
– Feature recognition was better when the feature was placed in a face
● The effect was not found in scrambled faces, inverted faces, or non-face objects
● → Modification of one part of the face affects perception of another feature
– i.e., change the eye distance and the mouth becomes harder to recognize
● Faces are unified, non-decomposable forms in which part and configural information make up the holistic representation
Cause of the face inversion effect?
● Configural view: face recognition depends particularly on encoding the spatial relationships between parts
– Sensitivity to second-order relational properties is compromised by inversion
● Larger inversion effects were found for configural changes than for featural discrimination
– Spatial distances between features are more susceptible to inversion than feature identification
● Judgments involving feature spacing produce activation in the FFA, while featural judgments activated the left prefrontal cortex

Cause of the face inversion effect?
● Featural+Configural view: both featural and configural information are compromised by inversion
● Methodology issues:
– Blocked designs make participants choose a certain strategy (local vs. whole-face approach)
– In unblocked studies, the difference between featural and configural disruption from inversion disappeared
– When tests were controlled for baseline differences, the discrimination difficulties were equal
● Featural and configural changes produce equal BOLD activation in the FFA

Lost & Preserved in Inversion
● Inversion disrupted perception of featural and spacing changes in the lower regions of the face more than in the upper regions
What is a feature/configuration?
● Feature
– Eyes, mouth, nose
– Eyes may be treated as one feature or two
– Contour of the face → changes the configuration based on edge-to-edge distances
– Surface features (color, brightness) → invariant to inversion
● Configuration
– Based on the centroid
– Based on edge-to-edge distances
● Interchanging features can lead to a change in both

Inversion, Fractured Faces & Perceptual Fields
● Successful face integration requires coding of the eyes
– Interdependence between the eyes and the other features is lost during inversion
● Upright faces appear expansive and shrink when inverted, causing the perceptual field to decrease and leading to piecemeal analysis centered around the eyes
● When cued to the mouth region, inversion effects were smaller than when cued to the eye region
What Determines Whether Faces are Special?
(Chang Hong Liu and Avi Chaudhuri)

Specialness
● "Uniqueness and Specificity" (Hay and Young)
● This paper – criteria for specialness:
– Modularity
– Domain Specificity
– Innateness
– Expertise
– Localization

Modularity and Domain Specificity
● The terms are now used interchangeably
● A specialized cognitive system that handles specific information
● Degrees of specificity can change
– Checked by the level and number of categories the module handles
● Face modules have greater specificity than object modules

Innate Endowment
● Presence of innate module(s) dedicated to faces, or of a module predisposed to developing face specialization
● Neonate studies support this claim:
– Track faces longer – at 30 minutes old
– Distinguish mother's face – at a few days old
– Look at attractive faces longer – at 3 days old
● CONSPEC and CONLERN
– CONSPEC – directs newborns to face-like patterns
– CONLERN – system that achieves high levels of face recognition through learning
● Damaged eyesight during infancy and childhood leads to an inability to form normal face recognition
Expertise
● Adult face recognition is expertise gained over time
– Children test poorly on face recognition
– The effect is not dependent on face stimuli: the inversion effect also occurs in dog experts
● Expertise vs. maturation of high-level vision
● Impairment studies: prosopagnosia and visual agnosia

Localization
● Activation of a distinct brain region for certain stimuli – i.e., faces
● Face processing may be more distributed
– Prosopagnosia and agnosia – the areas may be very close together, and damage may not be limited to one of them
– Unclear what the FFA detects in faces

Plasticity
● The brain is very plastic
– Neural plasticity argues against innateness
● Monkey studies: cells can represent non-faces the same way as faces after training
● Expertise and experience may play a role in coding
Questions
● Are faces the "default setting" for the brain areas they activate?
● Are the areas general domains for learning that do not have an initial preference?