Brain Research

Volume 1127, 5 January 2007, Pages 80-89

Research Report
What is adapted in face adaptation? The neural representations of expression in the human visual system

https://doi.org/10.1016/j.brainres.2006.09.104

Abstract

The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to one of the face images used to create a morph series between two expressions substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a ‘visual semantic’ for facial expression in the human visual system.

Introduction

Facial expression is an important vehicle for social communication. The perception and interpretation of facial expressions provide us with clues about the emotional state of those with whom we interact. Disordered perception of facial expression is a feature of developmental disorders such as autism and Asperger syndrome, and may contribute to the social disruption experienced by patients with these diagnoses (Hefter et al., 2005). Understanding the neural representation of facial expression is important to advancing our knowledge of how the human visual system organizes and extracts socially relevant perceptual signals.

Current concepts of facial recognition suggest parallel processing of facial identity and facial expression in both cognitive and anatomic models, based largely on human functional imaging experiments that supplement earlier neurophysiological data from monkeys (Bruce and Young, 1986, Haxby et al., 2000, Andrews and Ewbank, 2004, Eifuku et al., 2004). Processing of facial identity may be a specific role of the fusiform face area, located in the inferior occipitotemporal cortex (Haxby et al., 2000, Barton, 2003, Grill-Spector et al., 2004), whereas facial expression may be preferentially processed in the superior temporal sulcus, located in the lateral occipitotemporal cortex (Haxby et al., 2000). The superior temporal sulcus appears to be involved in recognizing the changeable aspects of the face (Haxby et al., 2000), such as direction of gaze (Pelphrey et al., 2003), mouth movements (Puce et al., 1998, Callan et al., 2004), as well as expression (Winston et al., 2004). In addition, fMRI shows that activity in the superior temporal sulcus is selectively increased when attention is directed towards emotion in facial images (Narumoto et al., 2001).

While these data suggest that expression may have a specific neuroanatomic substrate in the superior temporal sulcus, they are less clear on the nature of the representations contained within that substrate. Recent work using adaptation paradigms and aftereffects has suggested a means of exploring the neural representations of faces. Previous reports have shown that aftereffects (biased perceptions following sensory adaptation to a stimulus) exist not only for photoreceptor-based phenomena such as color (Allan et al., 1997, Nieman et al., 2005), but also for cortically based phenomena such as motion (Snowden and Milne, 1997, Seiffert et al., 2003), tilt in two dimensions (Adams and Mamassian, 2002), slant in three dimensions (Domini et al., 2001), and, more recently, for faces (Leopold et al., 2001, Webster et al., 2004, Yamashita et al., 2005).

One of these reports has documented aftereffects specific to facial identity (Leopold et al., 2001). When shown a series of morphed faces that varied between a target face and its ‘anti-face’ (one with the opposite structural characteristics to the target face), subjects were more likely to perceive the identity of the target face in an ambiguous morphed image after they had been exposed to the ‘anti-face’. Another study found similar aftereffects for a variety of facial properties beyond identity, including gender, race, and expression (Webster et al., 2004). This second study confirmed that an adaptation paradigm can be a useful tool to probe the neural populations involved in perceiving expression. However, the conclusions that can be drawn from its results about the neural representations of expression are limited, because the adapting stimulus was the same image as the one used to generate the morph series. Therefore, one cannot determine whether this adaptation is of expression in general, expression in a specific face, or expression in a specific image.
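To make the logic of such aftereffect measurements concrete, the following is a minimal sketch, not the authors' analysis code, using hypothetical response proportions: it fits a logistic psychometric function to "angry" categorization responses along an angry-afraid morph continuum and reads the aftereffect out as the shift of the 50% point (the point of subjective equality, PSE) between a baseline block and a post-adaptation block.

# Minimal sketch with hypothetical data (not the authors' analysis code):
# quantify an expression aftereffect as a shift of the psychometric function
# along a morph continuum, where 0 = fully afraid and 1 = fully angry.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # Probability of reporting "angry" at morph level x; x0 is the 50% point (PSE).
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def fit_pse(levels, p_angry):
    # Fit the logistic and return the point of subjective equality.
    (x0, k), _ = curve_fit(logistic, levels, p_angry, p0=[0.5, 10.0])
    return x0

levels = np.linspace(0.0, 1.0, 7)  # probe morph levels
baseline = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.95, 0.99])
# Hypothetical post-adaptation data: after adapting to an angry face, ambiguous
# morphs look less angry, so the curve shifts toward the angry end of the continuum.
adapted = np.array([0.01, 0.03, 0.10, 0.30, 0.60, 0.85, 0.97])

shift = fit_pse(levels, adapted) - fit_pse(levels, baseline)
print(f"Aftereffect (PSE shift): {shift:+.2f} morph units")

In an analysis of this kind, the size of the PSE shift serves as the measure of aftereffect strength, and it is this quantity that can be compared across different adapting stimuli.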

Our objective was to systematically explore how differences in the adapting stimulus affected the production of aftereffects on expression perception, thereby better defining the neural representations of facial expression. Our initial hypothesis was that there should be a neural representation of expression that generalizes across different facial identities. For facial expression to be a truly useful social cue, it is important to be able to infer similar emotional states from similar expressions on the faces of different people. If such a representation exists, we predicted that we would find adaptation aftereffects even when the faces of different people were used as the adapting and probe stimuli.

Section snippets

Experiment 1: an identity-independent representation of expression

In the first part of this study, we contrasted the effects of four different adapting conditions on the production of an expression-based aftereffect (Fig. 1). This was done for three different series of morphed images, one from angry to afraid, one from sad to happy, and one from disgusted to surprised. The first adapting condition consisted of images that were identical to those used to derive the morphed images which served as probes of the aftereffect. This ‘same-image’ condition also
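As a purely illustrative complement to this design description, the short sketch below lays out how the trials for one morph continuum could be organized. It is not the authors' experimental script; the adaptor categories are taken from the Abstract rather than from the truncated description of Experiment 1, and the probe morph levels and repetition count are arbitrary placeholders.

# Hypothetical layout of trials for one expression continuum (illustrative only).
import itertools
import random

# Adaptor categories as described in the Abstract; exact conditions in Experiment 1 may differ.
adaptors = [
    "same image used to create the morphs",
    "different image of the same person",
    "image of a different individual",
    "non-face representation of emotion",
]
probe_levels = [0.30, 0.40, 0.50, 0.60, 0.70]  # proportion "afraid" in an angry-afraid morph
repeats = 4                                    # arbitrary repetitions per design cell

trials = [
    {"adaptor": a, "probe_level": p}
    for a, p in itertools.product(adaptors, probe_levels)
] * repeats
random.shuffle(trials)                         # randomize trial order within a session
print(f"{len(trials)} trials; first: {trials[0]}")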

Discussion

The results of these two experiments suggest that at least two neural representations of facial expression exist in the human visual system (Fig. 4). First, the fact that aftereffects can be generated from the faces of different people confirms our hypothesis that there is a neural representation of expression that is independent of facial identity and generalizes across individuals. The deduction about the second neural representation rests on the observation that much larger aftereffects were generated by

Subjects

Thirty-eight subjects (23 female) participated in the entire study. All subjects spoke English and did not understand German. In the first experiment, twenty-seven subjects (16 female; mean age = 30.63 years, SD = 10.24 years) were randomly assigned to one of the three expression pairs, while CJF participated in all three, giving 10 subjects for each of the three expression pairs. Other than CJF, all subjects were naïve to the

Acknowledgments

This work was supported by NIH grant 1R01 MH069898 and CIHR grant MOP 77615. C.J.F. was supported by a Michael Smith Foundation for Health Research Junior Graduate Studentship and a Canadian Institutes of Health Research Canada Graduate Scholarship Doctoral Award. J.J.S.B. was supported by a Canada Research Chair and a Michael Smith Foundation for Health Research Senior Scholarship.

The authors would like to thank those individuals who provided their pictures for publication herein.

References (40)

  • L.G. Allan et al., Isoluminance and contingent color aftereffects, Percept. Psychophys. (1997)
  • V. Bruce et al., Understanding face recognition, Br. J. Psychol. (1986)
  • J. Narumoto et al., Attention to emotion modulates fMRI activity in human right superior temporal sulcus, Brain Res. Cogn. Brain Res. (2001)
  • D.R. Nieman et al., Gaze direction modulates visual aftereffects in depth and color, Vision Res. (2005)
  • K.A. Pelphrey et al., Brain activation evoked by perception of gaze shifts: the influence of context, Neuropsychologia (2003)
  • K.L. Phan et al., Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI, NeuroImage (2002)
  • G. Pourtois et al., Perception of facial expressions and voices and of their combination in the human brain, Cortex (2005)
  • K. Sekiyama et al., Auditory-visual speech perception examined by fMRI and PET, Neurosci. Res. (2003)
  • R.J. Snowden et al., Phantom motion after effects—Evidence of detectors for the analysis of optic flow, Curr. Biol. (1997)
  • B. Wicker et al., Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust, Neuron (2003)