Review
Two neural pathways of face processing: A critical evaluation of current models

https://doi.org/10.1016/j.neubiorev.2015.06.010

Highlights

  • Current neural face models comprise two functionally dissociated pathways.

  • The dissociation between identity and expression processing is not supported by current data.

  • The processing of dynamic faces in the two pathways has been recently investigated.

  • A revised neural model is proposed, highlighting a division between face motion and face form.

Abstract

The neural basis of face processing has been extensively studied in the past two decades. The current dominant neural model, proposed by Haxby et al. (2000) and Gobbini and Haxby (2007), suggests a division of labor between the fusiform face area (FFA), which processes invariant facial aspects, such as identity, and the posterior superior temporal sulcus (pSTS), which processes changeable facial aspects, such as expression. An extension to this model for the processing of dynamic faces, proposed by O’Toole et al. (2002), highlights the role of the pSTS in the processing of identity from dynamic familiar faces. To evaluate these models, we reviewed recent neuroimaging studies that examined the processing of identity and expression with static and dynamic faces. Based on the accumulated data, we propose an updated model that emphasizes the dissociation between form and motion as the primary functional division, with face form processed by a ventral stream through the FFA and face motion by a dorsal stream through the STS. We also encourage future studies to expand their research to the processing of dynamic faces.

Introduction

Face perception has long been at the center of intense behavioral and neuroscience research. Functional MRI (fMRI) studies have revealed well-defined cortical regions that generate a highly selective neural response to faces (for review see Kanwisher and Yovel, 2006). These regions have been suggested to form a neural network specialized in face processing. Furthermore, the functional role of each of the face-selective areas and possible dissociations among them have been extensively studied and discussed. In this review, we evaluate the two primary neural models of face processing: the Haxby neural model (Haxby et al., 2000; Gobbini and Haxby, 2007) and its extension for dynamic faces suggested by O’Toole et al. (2002). We start with an overall description of the face-selective neural network and the functional roles that current neural models of face processing assign to its different components. We then evaluate these models in light of recent empirical findings. Based on the data that have accumulated about the neural underpinnings of face processing since the Haxby and O’Toole models were proposed nearly 15 years ago, we propose an updated model that may better fit existing data, and we suggest directions for future investigations.

Section snippets

Functional neuroanatomy of face processing

The functional neuroanatomy of face processing has been investigated in the past two decades in numerous fMRI experiments. Faces were shown to elicit face-selective neural responses in multiple regions along the occipito-temporal cortex. Such face-selective activations are typically found in the inferior occipital cortex (OFA – occipital face area), the fusiform gyrus (FFA – fusiform face area) and the posterior part of the superior temporal sulcus (pSTS-FA – pSTS face area) (Fig. 1). The
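
As an illustrative aside (not part of the original review), face-selective regions such as the OFA, FFA and pSTS-FA are typically defined with a localizer contrast in which responses to faces are compared with responses to non-face objects. The following minimal sketch, on simulated single-voxel data with a hypothetical block design, shows the logic of such a faces > objects contrast:

```python
# Minimal sketch of a face-localizer contrast (faces > objects) on simulated
# data; real localizers fit full GLMs with HRF convolution (e.g., SPM, FSL,
# nilearn), but the statistical logic is the same.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
block_len, n_blocks = 10, 18
# Hypothetical block design cycling through face, object and rest blocks.
labels = np.repeat(np.tile(["face", "object", "rest"], n_blocks // 3), block_len)
faces = (labels == "face").astype(float)
objects = (labels == "object").astype(float)
n_vols = labels.size

# Simulated voxel time course that responds more strongly to faces.
signal = 1.0 * faces + 0.4 * objects + rng.normal(0, 0.5, n_vols)

# Design matrix [faces, objects, intercept]; ordinary least-squares fit.
X = np.column_stack([faces, objects, np.ones(n_vols)])
beta, _, _, _ = np.linalg.lstsq(X, signal, rcond=None)

# t statistic for the faces > objects contrast.
c = np.array([1.0, -1.0, 0.0])
resid = signal - X @ beta
dof = n_vols - np.linalg.matrix_rank(X)
t = (c @ beta) / np.sqrt((resid @ resid / dof) * (c @ np.linalg.pinv(X.T @ X) @ c))
print(f"faces > objects: t = {t:.2f}, p = {stats.t.sf(t, dof):.4f}")
```

A voxel or region whose faces > objects contrast is reliably positive across subjects is labeled face-selective; the OFA, FFA and pSTS-FA emerge consistently from such contrasts.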

The Haxby model: invariant vs. changeable aspects of faces

The Haxby model is considered the most dominant neural model of face processing (Haxby et al., 2000; Gobbini and Haxby, 2007). According to this model, the OFA, FFA and pSTS-FA constitute the core system of face processing. The model postulates that the OFA provides input to both the FFA and the pSTS-FA, and that each plays a different role in face processing: the FFA is involved in the representation of invariant aspects of the face, such as face identity, while the pSTS-FA is involved in the representation of changeable

Expression processing of static faces in the FFA and pSTS-FA

There is growing and consistent evidence that the pSTS-FA is involved in the perception of facial expressions (Adolphs, 2002; Calder and Young, 2005; Engell and Haxby, 2007; Furl et al., 2007; LaBar et al., 2003; Narumoto et al., 2001; Said et al., 2010; Schultz and Pilz, 2009; Winston et al., 2004). For example, Engell and Haxby (2007) showed that the pSTS-FA was more strongly activated when subjects viewed facial expressions than when they viewed neutral faces. Using multivoxel pattern analysis,
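
For readers less familiar with the method just mentioned, multivoxel pattern analysis (MVPA) asks whether the spatial pattern of responses across a region's voxels can discriminate between conditions, here two facial expressions. The toy sketch below (simulated patterns for a hypothetical 50-voxel ROI, not data from the studies cited) illustrates the cross-validated decoding logic:

```python
# Toy sketch of MVPA decoding of facial expression from a region's voxel
# patterns; the data are simulated and the 50-voxel ROI is hypothetical.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50

# Two expression conditions with slightly different mean activation profiles
# across voxels, plus trial-by-trial noise.
mean_happy = rng.normal(0, 1, n_voxels)
mean_fear = mean_happy + rng.normal(0, 0.4, n_voxels)
X = np.vstack([
    mean_happy + rng.normal(0, 1.5, (n_trials, n_voxels)),
    mean_fear + rng.normal(0, 1.5, (n_trials, n_voxels)),
])
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated linear classifier; accuracy above chance (0.5) indicates
# that the multivoxel pattern carries expression information.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```

Above-chance decoding of expression in a region is taken as evidence that its response patterns represent expression information.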

Motion processing in the pSTS-FA and FFA

The role of the STS in the processing of dynamic facial information (face motion) is well established (Adolphs, 2009; Bartels and Zeki, 2004; Blake and Shiffrar, 2007; Bonda et al., 1996; Decety and Grezes, 1999; Fox et al., 2009a; LaBar et al., 2003; Peelen et al., 2006; Puce et al., 1998; Puce and Perrett, 2003; Schultz et al., 2005). For example, Puce et al. (1998) showed that the STS selectively responds to eye and mouth motion. A control condition of moving pixels that were matched in

The role of the OFA

The current review has primarily emphasized the FFA and pSTS-FA, which have been the focus of many of the studies that examined the functional roles of the different areas in the face network. What about the occipital face area (OFA)? Like the FFA, the OFA responds similarly to dynamic and static face stimuli, in contrast to the pSTS-FA, which clearly prefers dynamic stimuli (e.g., Furl et al., 2014; Pitcher et al., 2011a; see also Pitcher et al., 2014). Moreover, TMS and fMR-adaptation
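
For context on the second method named above, fMR-adaptation (Grill-Spector et al., 2001) infers whether a region codes a facial attribute from the recovery of its response when that attribute changes across repeated presentations. The sketch below uses made-up mean response values for a hypothetical ROI simply to show how a release-from-adaptation index is computed:

```python
# Minimal sketch of a release-from-adaptation (fMR-adaptation) index using
# made-up mean percent-signal-change values for a hypothetical ROI.
# A region sensitive to identity should respond more when identity changes
# across repetitions ("different") than when the same face repeats ("same").

def adaptation_index(same: float, different: float) -> float:
    """Normalized release from adaptation: 0 = no recovery, larger = more."""
    return (different - same) / (different + same)

mean_response = {"same_identity": 0.45, "different_identity": 0.80}  # made up
idx = adaptation_index(mean_response["same_identity"],
                       mean_response["different_identity"])
print(f"release-from-adaptation index: {idx:.2f}")
```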

Additional face-selective areas that are responsive to dynamic faces

So far we have discussed findings concerning the three face-selective regions that compose the “core system” of Haxby's model: the OFA, FFA and pSTS-FA. However, recent studies that used dynamic stimuli to localize the face areas (Fox et al., 2009a; Pitcher et al., 2011a) have reported additional, more anterior face-selective activations in the anterior superior temporal sulcus (aSTS-FA) and in the inferior frontal gyrus (IFG-FA). These two regions, similar to the pSTS-FA and in contrast to the

A revised face processing model: face form vs. face motion

Based on the findings reviewed above, a revised neural model of face processing should address the following issues. First, since in the real world the face system extracts information about identity, expression, eye gaze, head view and so on from dynamic faces, a revised model of the system should address the processing of dynamic information, as suggested by O’Toole et al. (2002). Furthermore, recent studies that presented dynamic faces suggest that the dorsal (pSTS-FA) and the ventral

Conclusions

This review of the literature suggests that current neural models of face processing are not fully supported by recent empirical findings and should therefore be revised based on existing data. First, both the Haxby and O’Toole models highlight the role of the FFA in identity processing but not in expression processing. Given the many findings that show strong sensitivity of the FFA to facial expression, we suggest that the FFA may extract form information about both face identity and face expression. This

Acknowledgments

We thank the Sagol School of Neuroscience at Tel-Aviv University for awarding a fellowship to the first author.

We thank Noa Simhi, Jonathan Oron and the anonymous reviewer for providing us with comments on earlier versions of this manuscript.

References (87)

  • K. Grill-Spector et al., fMR-adaptation: a tool for studying the functional properties of human cortical neurons, Acta Psychol. (Amst.) (2001)
  • E.D. Grossman et al., Repetitive TMS over posterior STS disrupts perception of biological motion, Vis. Res. (2005)
  • J.V. Haxby et al., The distributed human neural system for face perception, Trends Cogn. Sci. (2000)
  • C.D. Kilts et al., Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions, Neuroimage (2003)
  • Z. Kourtzi et al., Linking form and motion in the primate brain, Trends Cogn. Sci. (2008)
  • L.C. Lee et al., Neural responses to rigidly moving faces displaying shifts in social attention investigated with fMRI and MEG, Neuropsychologia (2010)
  • C.A. Longmore et al., Motion as a cue to face recognition: evidence from congenital prosopagnosia, Neuropsychologia (2013)
  • A. Mazard et al., Recovery from adaptation to facial identity is larger for upright than inverted faces in the human occipito-temporal cortex, Neuropsychologia (2006)
  • J. Narumoto et al., Attention to emotion modulates fMRI activity in human right superior temporal sulcus, Cogn. Brain Res. (2001)
  • A.J. O’Toole et al., Recognizing moving faces: a psychological and neural synthesis, Trends Cogn. Sci. (2002)
  • M.V. Peelen et al., Patterns of fMRI activity dissociate overlapping functional brain areas that respond to biological motion, Neuron (2006)
  • D. Pitcher et al., Triple dissociation of faces, bodies, and objects in extrastriate cortex, Curr. Biol. (2009)
  • D. Pitcher et al., Differential selectivity for dynamic versus static information in face-selective cortical regions, Neuroimage (2011)
  • D. Pitcher et al., Combined TMS and fMRI reveal dissociable cortical pathways for dynamic and static face perception, Curr. Biol. (2014)
  • A. Puce et al., The human temporal lobe integrates facial form and motion: evidence from fMRI and ERP studies, Neuroimage (2003)
  • W. Sato et al., Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study, Cogn. Brain Res. (2004)
  • J. Schultz et al., Activation in posterior superior temporal sulcus parallels parameter inducing the percept of animacy, Neuron (2005)
  • S.A. Surguladze et al., A preferential increase in the extrastriate response to signals of danger, Neuroimage (2003)
  • S.A. Trautmann et al., Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations, Brain Res. (2009)
  • P. Vuilleumier et al., Effects of attention and emotion on face processing in the human brain: an event-related fMRI study, Neuron (2001)
  • J.S. Winston et al., Common and distinct neural responses during direct and incidental processing of multiple facial emotions, Neuroimage (2003)
  • G. Yovel et al., The neural basis of the behavioral face-inversion effect, Curr. Biol. (2005)
  • R. Adolphs, The social brain: neural basis of social knowledge, Annu. Rev. Psychol. (2009)
  • M. Arsalidou et al., Converging evidence for the advantage of dynamic facial expressions, Brain Topogr. (2011)
  • G. Avidan et al., Selective dissociation between core and extended regions of the face processing network in congenital prosopagnosia, Cereb. Cortex (2014)
  • V. Axelrod et al., Successful decoding of famous faces in the fusiform face area, PLOS ONE (2015)
  • A. Bartels et al., Functional brain mapping during free viewing of natural scenes, Hum. Brain Mapp. (2004)
  • H.A. Baseler et al., Neural responses to expression and gaze in the posterior superior temporal sulcus interact with facial identity, Cereb. Cortex (2013)
  • M.S. Beauchamp et al., fMRI responses to video and point-light displays of moving humans and manipulable objects, J. Cogn. Neurosci. (2003)
  • R.J. Bennetts et al., Movement cues aid face recognition in developmental prosopagnosia, Neuropsychology (2015)
  • R. Blake et al., Perception of human motion, Annu. Rev. Psychol. (2007)
  • E. Bonda et al., Specific involvement of human parietal systems and the amygdala in the perception of biological motion, J. Neurosci. (1996)
  • A.J. Calder et al., Understanding the recognition of facial identity and facial expression, Nat. Rev. Neurosci. (2005)