How Long Do We Need to Detect Pain Expressions in Challenging Visual Conditions?
Shan Wang a,b, Christopher Eccleston b, Edmund Keogh a,b
a Department of Psychology, University of Bath, UK
b Centre for Pain Research, University of Bath, UK
Correspondence: [email protected]
Background
Being able to detect pain from facial expressions is critical for pain communication. Alongside the specific facial codes for pain, other, more basic perceptual features are involved. For example, the early stage of visual analysis consists of extracting elementary visual features at different spatial frequencies (SF). Low-SF information conveys coarse elements, whereas high-SF information conveys fine details. In clear and intact representations (conveyed by broad-SF), both low-SF and high-SF information are available.

Pain expressions can be identified in challenging visual conditions with limited SF information1. However, we do not know how efficient low-SF and high-SF information are, or how fast pain can be detected. We therefore aimed to investigate the exposure time required to identify pain from faces in intact (i.e. broad-SF) or degraded (i.e. low-SF or high-SF) visual conditions, and to compare pain with other core emotions.
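Low-SF and high-SF versions of a face image are typically produced by filtering in the Fourier domain. The poster does not report its filtering parameters, so the following is only an illustrative sketch, assuming a Gaussian frequency-domain filter and a hypothetical cutoff of 8 cycles/image; the `sf_filter` helper and the random test image are not from the study.

```python
import numpy as np

def sf_filter(image, cutoff_cpi, mode="low"):
    """Gaussian low-pass or high-pass filter of a grayscale image in the
    Fourier domain. cutoff_cpi is the cutoff in cycles per image
    (an illustrative value, not taken from the study)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None] * h   # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w   # horizontal frequency, cycles/image
    radius = np.sqrt(fx**2 + fy**2)
    low_pass = np.exp(-(radius**2) / (2 * cutoff_cpi**2))
    gain = low_pass if mode == "low" else 1.0 - low_pass
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * gain))

# Toy example on a random "image"; real stimuli would be face photographs.
img = np.random.default_rng(0).random((256, 256))
low_sf = sf_filter(img, 8, mode="low")    # coarse elements only
high_sf = sf_filter(img, 8, mode="high")  # fine details only
```

Because the two gains are complementary, the low-SF and high-SF components sum back to the original broad-SF image, mirroring the point that a broad-SF representation contains both kinds of information.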
Method
46 healthy participants (24 females; aged 19-28) completed an expression identification task with pain, fear, happiness and neutral faces in three visual conditions (i.e. broad-SF, low-SF and high-SF; see Figure 1). The task consisted of four sessions, with each session assigning the face stimuli an exposure time of 33, 67, 150 or 300 milliseconds.

Participants' response data were analysed with Signal Detection Theory. The dependent variable was estimated sensitivity (A') for identifying a target expression, which ranges from 0 to 1, with 0.5 being chance-level performance.

Figure 1. Example stimulus images of one actor showing pain in broad-SF, low-SF and high-SF (from left to right) used in the current study. The original face images were taken from the STOIC database2.

Results
Analysis revealed significant main effects of exposure time (F(2.79, 119.85)=29.26, p<.001, η²p=.41), SF information (F(2, 86)=54.20, p<.001, η²p=.56) and expression (F(2.45, 105.48)=15.47, p<.001, η²p=.27) on estimated sensitivity, where sensitivity to happiness was higher than sensitivity to pain, fear and neutral faces.

A significant interaction was also found between exposure time and SF information, F(5.24, 225.35)=35.13, p<.001, η²p=.45 (Figure 2). Participants' sensitivity to expressions presented with broad-SF and low-SF information was not affected by exposure time, whereas sensitivity to high-SF expressions increased with exposure time up to 150 milliseconds. Thus low-SF had an advantage over high-SF at exposure times of 33 and 67 milliseconds.

Figure 2. Estimated sensitivity to expressions displayed by broad-SF, low-SF and high-SF at exposure times of 33, 67, 150 and 300 milliseconds.

Conclusion
We need less than 33 milliseconds to reliably detect pain in low-SF faces, and approximately 150 milliseconds in high-SF faces. Low-SF information (coarse elements) plays a key role in the fast detection of pain, providing the basis for pain face decoding that is progressively refined as high-SF information (fine details) is integrated. This pattern was also found for expressions of core emotions, which indicates that decoding of pain and core-emotion expressions shares similar properties of visual information analysis.
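The sensitivity measure A' used in the analysis is a standard non-parametric index from Signal Detection Theory; the poster does not spell out the formula, so the sketch below uses the common Pollack and Norman (1964) form. The hit and false-alarm rates in the example are hypothetical, not values from the study.

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity A' (Pollack & Norman, 1964).
    0.5 = chance-level discrimination, 1.0 = perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5  # no discrimination
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Hypothetical example: 80% hits on pain-target trials,
# 20% false alarms on non-pain trials.
print(a_prime(0.8, 0.2))  # 0.875
```

Equal hit and false-alarm rates give A' = 0.5, matching the chance level stated in the Method.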
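The reported effect sizes can be recovered from the F statistics and their degrees of freedom via η²p = F·df1 / (F·df1 + df2); a small sketch for checking the reported values (the function name is mine, not from the poster):

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its
    degrees of freedom: eta_p^2 = F*df1 / (F*df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# The reported main effect of exposure time, F(2.79, 119.85) = 29.26:
print(round(partial_eta_squared(29.26, 2.79, 119.85), 2))  # 0.41
```

This reproduces the reported η²p = .41 for exposure time and .56 for SF information (small rounding differences can arise because the corrected degrees of freedom are themselves rounded).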
References
1. S. Wang, C. Eccleston, E. Keogh. The role of spatial frequency information in recognition of facial expressions of pain. PAIN 2015; Epub ahead of print.
2. S. Roy, C. Roy, I. Fortin, C. Ethier-Majcher, P. Belin, F. Gosselin. A dynamic facial expression database. J. Vis. 2007; 7, 944.
Acknowledgements
This study was funded by a Graduate School Scholarship granted
to the lead author by the University of Bath.
www.bath.ac.uk/pain