Categorizing facial identities, emotions, and genders: Attention to high- and low-spatial frequencies by children and adults

https://doi.org/10.1016/j.jecp.2004.09.001

Abstract

Three age groups of participants (5–6 years, 7–8 years, adults) matched faces on the basis of facial identity. The procedure involved either low- or high-pass filtered faces or hybrid faces composed from two faces associated with different spatial bandwidths. The comparison stimuli were unfiltered faces. In all three age groups, the data indicated a significant bias toward prioritizing low-pass information. In a second task, participants were asked to identify the emotion (smiling or grimacing) or gender (male or female) of hybrid high-pass/low-pass faces. Opposite biases emerged in the two tasks irrespective of age group: the gender discrimination task indicated a bias for low-pass information, whereas the emotion task indicated a bias for high-pass information. These differences suggest independent processing routes for functionally different types of information such as emotion, gender, and identity. These routes are already established by 5 years of age.

Introduction

Faces are polymorphous stimuli conveying various kinds of information such as gender, emotion, and identity. Most models of face processing posit that these different types of information are processed by functionally different neural/cognitive systems (e.g., Bruce & Young, 1986). The hypothesis that different systems process gender, emotion, and identity information (hereafter referred to as the hypothesis of functional separability) is supported in the literature by five converging lines of evidence. First, gender and identity are processed at different speeds than emotions in judgment tasks (e.g., Le Gal & Bruce, 2002). Second, variations of emotional expression have little effect on facial identity judgments, whereas reaction times for emotion judgments are influenced by identity variations (e.g., Schweinberger & Soukup, 1998). Third, distinct patterns of cerebral activation emerge for the recognition of identity, expression, and gender (from event-related potential [ERP] studies: e.g., Bobes, Martín, Olivares, & Valdés-Sosa, 2000; from functional magnetic resonance imaging [fMRI] studies: e.g., McCarthy, Puce, Gore, & Allison, 1997; from positron emission tomography [PET] studies: e.g., Sergent, Ohta, MacDonald, & Zuck, 1994). Fourth, brain lesions selectively affect adults’ recognition of facial identity and emotion (e.g., Humphreys, Donnelly, & Riddoch, 1993). Fifth, in fMRI studies, presentation of low- and high-pass filtered faces results in distinct processing patterns for faces and emotional expressions (e.g., Vuilleumier, Armony, Driver, & Dolan, 2003). Interestingly, all of these findings supporting the hypothesis of functional separability come from studies of normal adults or patients; there is surprisingly little developmental research on this topic.

In one of the few relevant studies, Bruce et al. (2000) studied the development of expression, lipreading, gaze, and identity processing in 4- to 10-year-olds. There were no significant correlations among matching-to-sample tasks in the different test conditions, suggesting that these various aspects of faces are already processed independently in children of that age. In another study (Bormann-Kischkel, 1986), 5-year-olds and adults were asked to sort cards that varied in identity and emotional expression. Both adults and children showed a bias for sorting by emotion rather than by identity, suggesting that emotion and identity can be tapped independently. Finally, De Sonneville et al. (2002) tested children (7–10 years of age) and adults in face identity and face emotion discrimination tasks. Regardless of age, face recognition was faster than emotion recognition. These three studies tend to support the hypothesis of separability in children, which would both confirm and extend what is known about adults. Firm conclusions cannot yet be drawn, however, given the small number of available developmental studies. Moreover, these studies provide no detailed information about the stimulus dimensions to which children attended when categorizing facial emotion and identity.

In that context, the current research aimed to better document how the perception of identity, gender, and emotion develops during childhood. To characterize the processing of these three types of information, participants in three age groups (5–6 years, 7–8 years, and adults) were tested with spatial frequency filtered facial stimuli. Our reasoning was that controlling the frequency band available in the face stimuli should reveal whether children rely on different spatial frequency channels when processing identity, emotion, and gender information.

The two experiments reported here focus on the processing of identity (Experiment 1) and on the processing of both gender and emotion (Experiment 2). Although presented in succession, the two experiments were run in a counterbalanced order, with half of the participants completing Experiment 1 first and the other half completing Experiment 2 first.

Section snippets

Experiment 1: Facial identity

Experiment 1 focused on the processing of facial identity using a procedure largely inspired by Schyns and Oliva (1999, Experiment 3). These authors showed adults hybrid stimuli constructed by superimposing the low-spatial frequency components of one individual face on the high-spatial frequency components of another individual face. After being taught to identify the pictures of six different faces, participants were asked to name the face shown in the hybrid stimulus.
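For concreteness, the minimal sketch below shows one common way to build such low-pass/high-pass hybrid images, assuming grayscale face images of equal size and a Gaussian filter. The function name, the cutoff parameter sigma, and the placeholder images are illustrative assumptions; the exact filtering cutoffs and software used in the original experiments are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_hybrid(face_a, face_b, sigma=8.0):
    """Illustrative hybrid stimulus: low spatial frequencies of face_a
    combined with high spatial frequencies of face_b.

    face_a, face_b : 2-D float arrays (grayscale faces, same shape).
    sigma          : Gaussian blur width in pixels; a larger sigma means a
                     lower cutoff frequency (hypothetical value, not the
                     cutoff used in the original study).
    """
    low_a = gaussian_filter(face_a, sigma)             # low-pass component of face A
    high_b = face_b - gaussian_filter(face_b, sigma)   # high-pass residual of face B
    hybrid = low_a + high_b
    # Rescale to a displayable 0-1 range.
    hybrid -= hybrid.min()
    return hybrid / hybrid.max()

# Usage with random placeholder "faces" of matching size.
rng = np.random.default_rng(0)
face_a = rng.random((256, 256))
face_b = rng.random((256, 256))
hybrid = make_hybrid(face_a, face_b)
```

In such a construction, which identity an observer reports for the hybrid indicates whether the low- or high-frequency band was used for the judgment.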

Method

The participants and apparatus were the same as in Experiment 1.

General discussion

Experiment 1 revealed that participants of the three age groups preferentially paid attention to low spatial frequencies when processing facial identity. Experiment 2 extended this finding. It revealed that different spatial frequency biases also emerged in the processing of gender and emotion, with gender processing being associated with a low-spatial frequency bias and emotion processing being associated with a high-spatial frequency bias. Taken together, Experiments 1 and 2 consistently

Acknowledgments

The authors thank all of the children and adults who participated in this study.

References (28)

  • Bruce, V., et al. (2000). Testing face processing skills in children. British Journal of Developmental Psychology.
  • Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology.
  • Calder, A. J., et al. (2000). Configural information in facial perception. Journal of Experimental Psychology: Human Perception and Performance.
  • Carey, S., et al. (1994). Are faces perceived as configurations more by adults than by children? Visual Cognition.