Elsevier

Cognition

Volume 125, Issue 3, December 2012, Pages 373-393
Perceptual, categorical, and affective processing of ambiguous smiling facial expressions

https://doi.org/10.1016/j.cognition.2012.07.021

Abstract

Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of the expression (semantic), and valence evaluation (affective). The face stimulus display duration and stimulus onset asynchrony (SOA) were varied to assess the time course of each process. Results indicated that (a) a smiling mouth was visually more salient than the eyes both in truly happy and blended expressions; (b) a smile led viewers to categorize blended expressions as happy similarly for upright and inverted faces; (c) truly happy, but not blended, expressions primed the affective evaluation of probe scenes 550 ms following face onset; (d) both truly happy and blended expressions primed the detection of a smile in a probe scene by 170 ms post-stimulus; and (e) smile detection and expression categorization had similar processing thresholds and preceded affective evaluation. We conclude that the saliency of single physical features such as the mouth shape makes the smile quickly accessible to the visual system, which initially speeds up expression categorization regardless of congruence with the eyes. Only when the eye expression is later configurally integrated with the mouth will affective discrimination begin. The present research provides support for serial models of facial expression processing.

Highlights

► A smile is highly visually salient in a face regardless of the eye expression.
► Feature analysis of a salient smile precedes configural integration of the eyes.
► A perceptual representation of the smile is active by 170 ms from stimulus onset.
► Positive affect is extracted only from genuine happy faces ∼550 ms post-stimulus.
► Discrimination between smiling faces is delayed until after affect has been extracted.

Introduction

In a categorical approach to facial affect, emotional expressions are conceptualized as discrete entities that can be subsumed under six basic categories: fear, anger, sadness, happiness, disgust, and surprise (Ekman, 1994). Although this conceptualization is not devoid of its own limitations (see critical reviews in Barrett (2006) and Barrett, Gendron, and Huang (2009)), it has been widely adopted by prior research on the recognition of facial expressions, and studies have generally used prototypical examples of the six categories as stimuli. In real life, however, expressions vary enormously and idiosyncratically across individuals and social contexts, and ambiguous expressions are frequently encountered (Carroll and Russell, 1997, Scherer and Ellgring, 2007). For example, Ekman (2001) identified at least 18 different types of smiles, and proposed that there may be as many as 50 in all. However, evidence regarding the perception, categorization, and affective processing of non-prototypical expressions has remained scarce. In the present study we examined whether the processing of ambiguous expressions with a smile, but non-happy eyes, involves the same mechanisms and stages as that of prototypical expressions.

To investigate the recognition of ambiguous expressions, studies have used genuine blends (Nummenmaa, 1988), hybrids (Schyns & Oliva, 1999), morphed (Calder, Rowland, et al., 2000), and composite (Calder, Young, Keane, & Dean, 2000) face stimuli. In composite faces, the top half of a face conveying one expression is fused with the bottom half of another expression. The resulting facial configuration is thus a blend of two expressions and therefore becomes ambiguous. In the current study we employed this approach by aligning the bottom half of a happy face with the top half of non-happy faces (either angry, sad, fearful, disgusted, surprised, or neutral) of the same individual. This produced blended expressions with a smiling mouth but non-happy eyes. For comparison, we also used intact faces conveying prototypical happy or non-happy expressions, in which the eye region was congruent with the mouth region, as both belonged to the same category (and individual), and therefore these expressions were genuine and unambiguous. With this approach, we explored (a) the extent to which the presence of a smile—even though incongruent with other facial components—can bias the recognition of an expression as happy, (b) whether featural or configural processing is involved in the categorization and affective evaluation of blended expressions, and (c) how such bias develops over time for the extraction of perceptual, categorical, and emotional information.

From a theoretical standpoint, smiles provide a useful and well-established model for studying the processing of blended facial expressions. First, happy faces are recognized more accurately and faster than all the other basic expressions (Calder, Young, et al., 2000, Calvo and Lundqvist, 2008, Calvo and Nummenmaa, 2009, Juth et al., 2005, Leppänen and Hietanen, 2004, Loughead et al., 2008, Milders et al., 2008, Palermo and Coltheart, 2004, Tottenham et al., 2009). Facial happiness is also the most consistently identified expression across different cultures (Russell, 1994). Second, the smile is a critical feature supporting the recognition advantage of happy faces. Whereas a smiling mouth is a necessary and sufficient criterion for categorizing faces as happy, the eye region makes only a modest contribution (Calder, Young, et al., 2000, Kontsevich and Tyler, 2004, Leppänen and Hietanen, 2007, Nusseck et al., 2008, Smith et al., 2005). Third, the importance of the smile in facilitating expression recognition has been attributed to its high visual saliency and diagnostic value (Calvo and Nummenmaa, 2009, Calvo et al., 2010). Because the smiling mouth is a salient or conspicuous feature, it is more likely than any other region of the six basic expressions to attract the first eye fixation (Calvo & Nummenmaa, 2008). In addition, because of its distinctiveness or diagnostic value, the smile is systematically associated with facial happiness and absent from all other expression categories (Calvo & Marrero, 2009). Such a single diagnostic feature can thus be used as a shortcut for a quick categorization of a face as happy.

Building on the saliency and distinctiveness of the smile, we addressed two questions. First, is the smiling mouth so salient and distinctive that it overrides the processing of other facial components, such as the eye region, even when these are inconsistent in meaning with the smile? If so, the presence of a smiling mouth would bias viewers towards judging blended facial expressions as happy, regardless of other expressive sources. This issue has obvious theoretical and practical importance, as in many everyday situations people smile without necessarily being happy. The smile is a complex, multifunctional signal, which can also reflect mere politeness or even conceal negative motives (embarrassment, dominance, etc.), depending on the combination with other facial signals (see Ambadar et al., 2009, Niedenthal et al., 2010). It is, nevertheless, possible that, because of its saliency and distinctiveness, a smiling mouth “dazzles” the viewers and prevents them from noticing less salient yet informative facial cues such as frowns, which would be necessary to interpret the smile accurately and react with adaptive behavior in time.

Second, does the categorization of an ambiguous, as well as a genuine, smiling face involve only perceptual processing—either single feature detection or configural pattern recognition—of a salient mouth, or also extraction of positive affect? Furthermore, can affect be obtained from feature analysis or does it require configural analysis, and what is the relative time course of these processes? There is considerable evidence that faces (Richler, Mack, Gauthier, & Palmieri, 2009) and facial expressions (Calder, Young, et al., 2000) are processed configurally or holistically, that is, coded as unitary objects, in an integrated representation that combines the different face parts. Studies have, nevertheless, also shown that the category of an emotional expression can be inferred from the separate analysis of single distinctive facial components (Ellison and Massaro, 1997, Fiorentini and Viviani, 2009). Probably, both views are complementary, with some expressions being more dependent on holistic and others relying more on analytic processing (Tanaka, Kaiser, Butler, & Le Grand, in press). In any case, for both the configural and the featural conceptualization, it is possible that expression recognition is performed solely on the basis of some perceptual pattern or a single visual cue from the face image, without retrieving any affective meaning. It remains unresolved whether emotional representations are also activated during perceptual processing and used for expression categorization, and how this applies to blended facial expressions.

To investigate whether a smiling mouth can overshadow other facial regions and override their processing even when these are incongruent with the smile, we used composite faces (Calder, Young, et al., 2000, Leppänen and Hietanen, 2007, Tanaka et al., in press), in which non-happy (e.g., angry) eyes were combined with a smiling mouth, thus conveying blended expressions. These faces were then compared with intact faces conveying prototypical expressions (happy, angry, etc.) in several tasks. In Experiment 1, we used a categorization task in which participants responded whether each face looked happy or not. To determine the role of configural and featural processing, the face stimuli were presented upright or spatially inverted. In Experiments 2 and 3, we aimed to distinguish between affective and perceptual processes. To this end, we used priming tasks in which happy, neutral, or blended face primes were followed by an emotional probe scene. Participants responded whether the scene was pleasant or unpleasant (affective priming task) or whether there was any person smiling in the probe scene (perceptual priming task). Depending on the emotional congruence and the visual similarity between the prime and the probe, priming effects (i.e., faster responding to the probe following the happy or the blended prime, relative to the neutral prime) allowed us to tease apart affective and perceptual processing of the genuine and the blended expressions. Finally, in Experiment 4, we directly compared the performance on all three tasks (perceptual, categorization, and affective) within the same experimental design.
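The priming logic described above reduces to a simple contrast: a prime face facilitates probe processing to the extent that responses are faster after that prime than after a neutral prime. A minimal sketch of that computation, using hypothetical reaction-time data (the values and variable names are illustrative, not the authors' data or code):

```python
from statistics import mean

# Hypothetical probe-response times (ms), grouped by prime-face type.
rts = {
    "neutral": [652, 640, 661, 655],
    "happy":   [601, 615, 598, 610],
    "blended": [622, 630, 618, 626],
}

def priming_effect(rts, prime):
    """Priming effect = mean RT after the neutral prime minus mean RT
    after the emotional prime; positive values indicate facilitation."""
    return mean(rts["neutral"]) - mean(rts[prime])

happy_priming = priming_effect(rts, "happy")      # facilitation by happy primes
blended_priming = priming_effect(rts, "blended")  # facilitation by blended primes
```

With these hypothetical data both prime types produce positive effects; in the actual experiments the pattern of which primes facilitate which task (affective vs. perceptual) is what dissociates the two processes.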

A major aim of this study was to investigate the relative time course of perceptual, categorical, and affective processing of blended and prototypical expressions. To examine when discrimination between genuinely happy and blended expressions with a smile occurs, in Experiments 2 and 3, we varied the stimulus onset asynchrony (SOA; 170, 340, 550, and 800 ms) between a prime face and a probe scene. The lower boundary of 170 ms was motivated by the earliest event-related potential (ERP; N170 component) that reflects facial structural encoding (i.e., differentiation between faces and non-face objects), about which there is a debate on whether it is also sensitive to emotional expression (i.e., differentiation of emotional from non-emotional faces; see Eimer and Holmes, 2007, Schacht and Sommer, 2009). Discrimination among emotional expressions is likely to start later, as reflected in the N300 and P300 (from 250 to 500 ms) ERP components (Luo, Feng, He, Wang, & Luo, 2010). Similarly, conscious recognition of basic expressions can be accomplished in ∼300 ms (Calvo and Nummenmaa, 2009, Calvo and Nummenmaa, 2011), which thus represents a critical time point for comparison with blended expressions. Finally, affective retrieval is predicted to occur later than categorization. In fact, affective priming by happy or liked faces has been found between 300 and 750 ms from face onset (Calvo et al., 2010, Lipp et al., 2009, Nummenmaa et al., 2008). Our 550- and 800-ms SOA conditions will thus explore a possible delay for blended expressions. In a complementary approach to estimate the processing time course, in Experiment 4, we varied the display duration of the face stimuli (20, 40, 70, and 100 ms), while participants judged whether or not (a) the mouth was smiling (perceptual), (b) the face was happy (categorization), and (c) the expression was pleasant (affective).
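The two timing manipulations above amount to factorial crossings: prime type × SOA in Experiments 2 and 3, and task × display duration in Experiment 4. A schematic sketch of those trial structures (the SOA and duration values come from the text; the function names and dictionary keys are illustrative, not the authors' materials):

```python
from itertools import product

SOAS_MS = (170, 340, 550, 800)        # prime-probe SOAs, Experiments 2 and 3
DURATIONS_MS = (20, 40, 70, 100)      # face display durations, Experiment 4
PRIMES = ("happy", "neutral", "blended")
TASKS = ("perceptual", "categorization", "affective")

def priming_trials():
    """Cross prime type with SOA: the probe scene onset follows
    face onset by the SOA."""
    return [{"prime": p, "soa_ms": s, "probe_onset_ms": s}
            for p, s in product(PRIMES, SOAS_MS)]

def duration_trials():
    """Cross judgment task with face display duration (Experiment 4)."""
    return [{"task": t, "duration_ms": d}
            for t, d in product(TASKS, DURATIONS_MS)]
```

Each crossing yields 12 cells, letting the time course of each process be traced across the same stimulus set.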

Section snippets

Experiment 1

Face stimuli were presented in their normal upright position or upside-down, and displayed until participants responded whether a face looked happy or not. Expressions were either (a) genuinely happy (henceforth, happy), with a smiling mouth that was congruent with the eye expression, (b) genuinely non-happy (henceforth, non-happy; i.e., angry, sad, and so forth, with matching eyes and mouth), or (c) blended, with non-happy eyes (e.g., angry, neutral, etc.) and a smiling mouth. We also modeled

Experiment 2

To determine whether emotional information is extracted from blended expressions with a smile, we used an affective priming protocol (see Fazio & Olson, 2003). Participants were presented with a prime face followed by an emotional probe scene photograph, and responded whether the scene was pleasant or unpleasant. The prime faces conveyed either neutral, happy, or blended expressions. The blended expressions with (a) angry eyes + smile, (b) sad eyes + smile, or (c) neutral eyes + smile were chosen

Experiment 3

In this experiment, we investigated the time course of perceptual priming of the smile in blended expressions. This served to determine whether an early perceptual processing of the smiling mouth underlies the tendency to evaluate such faces as happy and delays their correct rejection as “not happy”. With the same stimuli and SOAs as in Experiment 2, the perceptual priming task in Experiment 3 involved responding whether or not there were any people smiling in the probe scene, instead of

Experiment 4

In Experiments 2 and 3, we found that perceptual priming by a happy face occurred earlier than affective priming. Furthermore, whereas there was perceptual and affective priming for happy faces with congruent smiling mouth and eyes, perceptual but not affective priming appeared for blended expressions with a smile but not happy eyes. This suggests that the perceptual detection of a smiling mouth occurs in advance and regardless of the affective evaluation of the expression as pleasant. In

General discussion

This study highlighted the considerable influence of the smile on the processing of ambiguous facial expressions with non-happy eyes. First, a smiling mouth proved to be visually highly salient, regardless of its congruence or incongruence with the eye region expression. Second, the presence of a smile increased the probability of judging blended expressions as happy and delayed their categorization as “not happy”, relative to faces with the same eyes but no smile. Third, the categorization

Acknowledgments

This research was supported by Grant PSI2009-07245 from the Spanish Ministry of Science and Innovation, and the Canary Agency for Research, Innovation and Information Society (NEUROCOG Project), and the European Regional Development Fund, to M.G.C. and A.F.M., and by the Academy of Finland Grant #251125, and the AivoAALTO grant from the Aalto University, to L.N.

References (74)

  • J.J. Richler et al.

    Holistic processing of faces happens at a glance

    Vision Research

    (2009)
  • A. Schacht et al.

    Emotions in word and face processing: Early and late cortical responses

    Brain and Cognition

    (2009)
  • P. Schyns et al.

    Dr. Angry and Mr. Smile: When categorization flexibly modifies the perception of faces in rapid visual presentations

    Cognition

    (1999)
  • V. Surakka et al.

    Facial and emotional reactions to Duchenne and non-Duchenne smiles

    International Journal of Psychophysiology

    (1998)
  • N. Tottenham et al.

    The NimStim set of facial expressions: Judgments from untrained research participants

    Psychiatry Research

    (2009)
  • D. Walther et al.

    Modelling attention to salient proto-objects

    Neural Networks

    (2006)
  • R. Adolphs

    Recognizing emotion from facial expressions: Psychological and neurological mechanisms

    Behavioral and Cognitive Neuroscience Reviews

    (2002)
  • Z. Ambadar et al.

    All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous

    Journal of Nonverbal Behavior

    (2009)
  • J.A. Bargh

    The automaticity of everyday life

  • L.F. Barrett

    Are emotions natural kinds?

    Perspectives on Psychological Science

    (2006)
  • L.F. Barrett et al.

    Do discrete emotions exist?

    Philosophical Psychology

    (2009)
  • B.G. Breitmeyer et al.

    Recent models and findings in visual backward masking: A comparison, review, and update

    Perception and Psychophysics

    (2000)
  • A.J. Calder et al.

    Understanding facial identity and facial expression recognition

    Nature Reviews Neuroscience

    (2005)
  • A.J. Calder et al.

    Configural information in facial expression perception

    Journal of Experimental Psychology: Human Perception and Performance

    (2000)
  • M.G. Calvo et al.

    Primacy of emotional vs. semantic scene recognition in peripheral vision

    Cognition and Emotion

    (2011)
  • M.G. Calvo et al.

    Facial expressions of emotion (KDEF): Identification under different display-duration conditions

    Behavior Research Methods

    (2008)
  • M.G. Calvo et al.

    Visual search of emotional faces: The role of affective content and featural distinctiveness

    Cognition and Emotion

    (2009)
  • M.G. Calvo et al.

    Processing of unattended emotional visual scenes

    Journal of Experimental Psychology: General

    (2007)
  • M.G. Calvo et al.

    Detection of emotional faces: Salient physical features guide effective visual search

    Journal of Experimental Psychology: General

    (2008)
  • M.G. Calvo et al.

    Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications

    Cognitive, Affective and Behavioral Neuroscience

    (2009)
  • M.G. Calvo et al.

    Recognition advantage of happy faces in extrafoveal vision: Featural and affective processing

    Visual Cognition

    (2010)
  • J.M. Carroll et al.

Facial expressions in Hollywood’s portrayal of emotion

    Journal of Personality and Social Psychology

    (1997)
  • N.C. Carroll et al.

    Priming of emotion recognition

    The Quarterly Journal of Experimental Psychology

    (2005)
  • K.R. Cave et al.

    From searching for features to searching for threat: Drawing the boundary between preattentive and attentive vision

    Visual Cognition

    (2006)
  • J.B. Debruille et al.

    Assessing the way people look to judge their intentions

    Emotion

    (2011)
  • U. Dimberg et al.

    Unconscious facial reactions to emotional facial expressions

    Psychological Science

    (2000)
  • P. Ekman

    Telling lies: Clues to deceit in the marketplace, politics, and marriage

    (2001)