Research Report
What is adapted in face adaptation? The neural representations of expression in the human visual system
Introduction
Facial expression is an important vehicle for social communication. The perception and interpretation of facial expression provide us with clues about the emotional state of those with whom we interact. Disordered perception of facial expression is a feature of neurological disorders such as autism and Asperger syndrome, and may contribute to the social disruption experienced by patients with these diagnoses (Hefter et al., 2005). Understanding the neural representation of facial expression is important to advancing our knowledge of how the human visual system organizes and extracts socially relevant perceptual signals.
Current concepts of facial recognition suggest parallel processing of facial identity and facial expression in both cognitive and anatomic models, based largely on human functional imaging experiments that supplement earlier neurophysiological data from monkeys (Bruce and Young, 1986, Haxby et al., 2000, Andrews and Ewbank, 2004, Eifuku et al., 2004). Processing of facial identity may be a specific role of the fusiform face area, located in the inferior occipitotemporal cortex (Haxby et al., 2000, Barton, 2003, Grill-Spector et al., 2004), whereas facial expression may be preferentially processed in the superior temporal sulcus, located in the lateral occipitotemporal cortex (Haxby et al., 2000). The superior temporal sulcus appears to be involved in recognizing the changeable aspects of the face (Haxby et al., 2000), such as direction of gaze (Pelphrey et al., 2003), mouth movements (Puce et al., 1998, Callan et al., 2004), and expression (Winston et al., 2004). In addition, fMRI shows that activity in the superior temporal sulcus is selectively increased when attention is directed towards emotion in facial images (Narumoto et al., 2001).
While these data suggest that expression may have a specific neuroanatomic substrate in the superior temporal sulcus, they are less clear on the nature of the representations contained within that substrate. Recent work using adaptation paradigms and aftereffects has suggested a means of exploring the neural representations of faces. Previous reports have shown that aftereffects (biased perceptions following sensory adaptation to a stimulus) exist not only for photoreceptor-based phenomena such as color (Allan et al., 1997, Nieman et al., 2005), but also for cortically based phenomena such as motion (Snowden and Milne, 1997, Seiffert et al., 2003), tilt in two dimensions (Adams and Mamassian, 2002), slant in three dimensions (Domini et al., 2001), and, more recently, for faces (Leopold et al., 2001, Webster et al., 2004, Yamashita et al., 2005).
One of these reports has documented aftereffects specific to facial identity (Leopold et al., 2001). When shown a series of morphed faces that varied between a target face and its ‘anti-face’ (one with the opposite structural characteristics to the target face), subjects were more likely to perceive the identity of the target face in an ambiguous morphed image after they had been exposed to the ‘anti-face’. Another study found similar aftereffects for a variety of facial properties beyond identity, including gender, race, and expression (Webster et al., 2004). This second study confirmed that an adaptation paradigm can be a useful tool to probe the neural populations involved in perceiving expression. However, the conclusions about the neural representations of expression that can be drawn from its results are limited, because the adapting stimulus was the same image as the one used to generate the morph series. One therefore cannot determine whether the adaptation is of expression in general, expression in a specific face, or expression in a specific image.
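The morph continua used in these paradigms can be sketched, at their simplest, as a weighted blend between two aligned endpoint images. This is only an illustrative approximation: the published stimuli were made with dedicated morphing software that also warps feature geometry, and the function name, step count, and toy arrays below are assumptions, not the authors' method.

```python
import numpy as np

def morph_series(img_a: np.ndarray, img_b: np.ndarray, n_steps: int = 5):
    """Linearly cross-fade two aligned, equal-sized face images.

    Illustrative only: real face-morphing software also warps the
    geometry of facial features; a pure pixel blend is the simplest
    stand-in for a morph continuum between two expressions.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("endpoint images must be aligned and equal-sized")
    weights = np.linspace(0.0, 1.0, n_steps)  # 0 = pure A, 1 = pure B
    return [(1.0 - w) * img_a + w * img_b for w in weights]

# Toy 2x2 arrays standing in for the two expression endpoints
angry = np.zeros((2, 2))
afraid = np.ones((2, 2))
series = morph_series(angry, afraid, n_steps=5)
# The middle image of the series is the most ambiguous probe
```

The intermediate images, especially the midpoint, serve as the ambiguous probes whose perceived expression can be biased by prior adaptation.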
Our objective was to systematically explore how differences in the adapting stimulus affected the production of aftereffects on expression perception, thereby better defining the neural representations of facial expression. Our initial hypothesis was that there should be a neural representation of expression that generalizes across different facial identities. For facial expression to be a truly useful social cue, it is important to be able to infer similar emotional states from similar expressions on the faces of different people. We therefore predicted that we would find adaptation aftereffects even when the adapting stimuli and the probe stimuli were faces of different people.
Section snippets
Experiment 1: an identity-independent representation of expression
In the first part of this study, we contrasted the effects of four different adapting conditions on the production of an expression-based aftereffect (Fig. 1). This was done for three different series of morphed images, one from angry to afraid, one from sad to happy, and one from disgusted to surprised. The first adapting condition consisted of images that were identical to those used to derive the morphed images which served as probes of the aftereffect. This ‘same-image’ condition also…
Discussion
The results of these two experiments suggest that at least two neural representations of facial expression exist in the human visual system (Fig. 4). First, the fact that aftereffects can be generated from the faces of different people confirms our hypothesis that a neural representation of expression that is independent and generalizable across facial identity exists. Deduction about the second neural representation relates to the observation that much larger aftereffects were generated by…
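Aftereffect magnitudes in paradigms like this one are conventionally quantified as a shift in the point of subjective equality (PSE) of the psychometric function, i.e. the morph level at which the two response alternatives are equally likely. A minimal sketch, with entirely hypothetical response proportions (none of these numbers come from the study):

```python
import numpy as np

def pse(morph_levels, p_target):
    """Point of subjective equality: the morph level at which the
    proportion of 'target expression' responses crosses 0.5,
    found by linear interpolation (p_target must be increasing)."""
    return float(np.interp(0.5, p_target, morph_levels))

# Hypothetical proportions of 'afraid' responses at each morph level
levels   = np.array([0.0, 0.25, 0.50, 0.75, 1.0])
baseline = np.array([0.05, 0.20, 0.50, 0.80, 0.95])  # no adaptation
adapted  = np.array([0.10, 0.40, 0.70, 0.90, 0.98])  # after adapting to 'angry'

# Positive shift: after adapting to 'angry', less of the 'afraid'
# morph is needed to see fear, so the PSE moves toward the angry end
aftereffect = pse(levels, baseline) - pse(levels, adapted)
```

Comparing such PSE shifts across adapting conditions (same image, same person, different person) is one standard way to compare aftereffect sizes.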
Subjects
Thirty-eight subjects (23 female) participated in the entire study. All subjects spoke English and did not understand German. In the first experiment, twenty-seven subjects (16 female; mean age = 30.63 years, SD = 10.24 years) were randomly assigned to one of the three possible expression pairs, while CJF participated in all three expression pairs, giving 10 subjects for each of the three expression pairs used in the first experiment. Other than CJF, all subjects were naïve to the…
Acknowledgments
This work was supported by NIH grant 1R01 MH069898 and CIHR grant MOP 77615. C.J.F. was supported by a Michael Smith Foundation for Health Research Junior Graduate Studentship and a Canadian Institutes of Health Research Canada Graduate Scholarship Doctoral Award. J.J.S.B. was supported by a Canada Research Chair and a Michael Smith Foundation for Health Research Senior Scholarship.
The authors would like to thank those individuals who provided their pictures for publication herein.
References (40)
- Adams and Mamassian. Common mechanisms for 2D tilt and 3D slant after-effects. Vision Res. (2002).
- Allison et al. Social perception from visual cues: role of the STS region. Trends Cogn. Sci. (2000).
- Andrews and Ewbank. Distinct representations for facial identity and changeable aspects of faces in the human temporal lobe. NeuroImage (2004).
- Barton. Disorders of face perception and recognition. Neurol. Clin. (2003).
- Selective attention to facial emotion and identity in schizophrenia. Neuropsychologia (2002).
- Calder et al. A principal component analysis of facial expressions. Vision Res. (2001).
- Callan et al. Phonetic perceptual identification by native- and second-language speakers differentially activates brain regions involved with acoustic phonetic processing and those involved with articulatory-auditory/orosensory internal models. NeuroImage (2004).
- Domini et al. 3D after-effects are due to shape and not disparity adaptation. Vision Res. (2001).
- Haxby et al. The distributed human neural system for face perception. Trends Cogn. Sci. (2000).
- Moradi et al. Face adaptation depends on seeing the face. Neuron (2005).
- Narumoto et al. Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Brain Res. Cogn. Brain Res. (2001).
- Nieman et al. Gaze direction modulates visual aftereffects in depth and color. Vision Res. (2005).
- Pelphrey et al. Brain activation evoked by perception of gaze shifts: the influence of context. Neuropsychologia (2003).
- Phan et al. Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. NeuroImage (2002).
- Pourtois et al. Perception of facial expressions and voices and of their combination in the human brain. Cortex (2005).
- Auditory-visual speech perception examined by fMRI and PET. Neurosci. Res.
- Snowden and Milne. Phantom motion after effects—evidence of detectors for the analysis of optic flow. Curr. Biol. (1997).
- Wicker et al. Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron (2003).
- Allan et al. Isoluminance and contingent color aftereffects. Percept. Psychophys. (1997).
- Bruce and Young. Understanding face recognition. Br. J. Psychol. (1986).