Can you feel what you do not see? Using internal feedback to detect briefly presented emotional stimuli

https://doi.org/10.1016/j.ijpsycho.2011.04.007

Abstract

Briefly presented (e.g., 10 ms) emotional stimuli (e.g., angry faces) can influence behavior and physiology. Yet, they are difficult to identify in an emotion detection task. The current study investigated whether identification can be improved by focusing participants on their internal reactions. In addition, we tested how variations in presentation parameters and expression type influence identification rate and facial reactions, measured with electromyography (EMG). Participants made forced-choice identifications of brief expressions (happy/angry/neutral). Stimulus and presentation properties were varied (duration, face set, masking type). In addition, as their identification strategy, one group of participants was instructed to use their bodily and feeling changes. One control group was instructed to focus on visual details, and another group received no strategy instructions. The results revealed distinct EMG responses, with the greatest corrugator activity to angry faces, intermediate activity to neutral faces, and the least to happy faces. All variations in stimulus and presentation properties had robust and parallel effects on both identification and EMG. Corrugator EMG was reliable enough to statistically predict stimulus valence. However, instructions to focus on internal states did not improve identification rates or change physiological responses. These findings suggest that brief expressions produce a robust bodily signal, which could in principle be used as feedback to improve identification. However, the fact that participants did not improve with internal focus suggests that bodily and feeling reactions are either principally unconscious, or that other ways of training or instruction are necessary to make use of their feedback potential.

Research highlights

• Very briefly presented emotional faces elicit a corrugator response.
• Strength of the signal depends on emotion, face set, exposure time, and mask.
• Corrugator signal can be used to infer what stimulus has been presented.
• Simple instruction does not suffice to make subjects deliberately use these signals.

Introduction

Numerous studies have shown that emotional stimuli influence behavior and physiology when they are presented very briefly, even “subliminally” (i.e., without being consciously perceived). For instance, in a classical paradigm introduced by Niedenthal (1990) and expanded by Murphy and Zajonc (1993), participants are asked to rate how much they like neutral targets (e.g., Chinese ideographs). The targets are preceded by brief (e.g., 10 ms) pictures of emotional faces, which are either positive (usually happy) or negative (usually angry, but sometimes fearful, disgusted or sad). Results show that positive faces enhance ratings of the targets, whereas negative faces lower them (see also Winkielman et al., 1997, Rotteveel et al., 2001, Stapel et al., 2002, Wong and Root, 2003). Interestingly, the effects of subliminal faces go beyond ratings and influence behaviors such as the consumption of a novel beverage (Winkielman et al., 2005b) or the willingness to take risks (Winkielman et al., under review). Furthermore, people react to subliminal smiling faces by smiling themselves and to angry faces by frowning themselves (e.g., Dimberg et al., 2000, Rotteveel et al., 2001). Despite such effects on ratings, behavior, and physiology, participants in those studies remain largely unaware of the brief emotional stimuli, even when informed about their presence and asked to identify them (e.g., Dannlowski et al., 2007, Murphy and Zajonc, 1993, Öhman and Soares, 1993, Winkielman et al., 1997, Wong and Root, 2003). For instance, in a typical “forced-choice awareness procedure”, an emotional face is first briefly flashed (e.g., 10 ms), and is then followed by a mask, consisting either of a neutral face or some graphical pattern (e.g., scrambled picture fragments or random dots). Participants are then shown two faces, the previously presented one and a new one, and are asked to indicate which face had been flashed. Typically, participants' performance on this task is around the chance level or barely above it.

These findings are puzzling. After all, the effects on ratings, behavior and physiology suggest that brief emotional stimuli trigger some internal reactions, so that information about them is available “somewhere” in the mental system. Yet, participants cannot identify those stimuli in a forced-choice awareness procedure. In our research, we test whether people can deliberately access their internal emotional reactions to improve identification of brief stimuli. Different predictions are possible regarding access to such internal reactions. One prediction is that people, if directed properly, can utilize fluctuations in their subjective feelings and sense their own physiological reactions. If so, they should be able to “feel what they do not see”, that is, discern the valence of a brief emotional stimulus by basing their judgments on their own affective state (physiology and subjective experience). This prediction is consistent with two major theoretical models: (i) the Affect-As-Information model, and (ii) the Facial Feedback model of emotion recognition. Here is why.

The Affect-As-Information model (AAI) proposes that people base their judgments on their subjective feelings (Schwarz and Clore, 2003, Clore and Huntsinger, 2007, Clore et al., 2001). On this model, affective priming effects (e.g., Murphy and Zajonc, 1993) occur because subliminal emotional faces elicit subtle, fleeting, but principally detectable changes in phenomenal experience. Subjects, who lack any useful knowledge about ambiguous targets such as a Chinese ideograph, ask themselves, ‘How do I feel about it?’, and rate the ideograph in line with their current feelings. In essence, the AAI model proposes that subjects misattribute their prime-induced feelings to the neutral target (see Schwarz, 1990, p. 538). If this is true, then changes in subjective feeling could be used deliberately to identify briefly presented emotional faces in the forced-choice paradigm.

The Facial Feedback model of emotion recognition proposes that when we see emotional expressions, we engage in spontaneous facial mimicry — involuntarily mirroring the expressions on our own faces (e.g., Achaibou et al., 2008, Dimberg et al., 2000, Hess et al., 1999, McIntosh et al., 2006, Sato and Yoshikawa, 2007). This facial mimicry could facilitate emotion detection via multiple mechanisms. Some researchers propose that the facial movements influence the actual emotion experienced by the subject (Laird, 1974, Zajonc et al., 1989). Others suggest that feedback from one's own facial muscles provides an embodied cue to what expression was actually shown (Goldman and Sripada, 2005). Assuming the outputs from these processes are conscious, then focusing on facial feedback should facilitate identification of brief emotional faces.

The AAI model and the Facial Feedback model predict that focusing participants on their feelings and facial responses should improve identification of brief emotional expressions. An opposing prediction, however, is offered by recent ideas about “unconscious emotion” (e.g., Winkielman et al., 2005a, Berridge and Winkielman, 2003). According to this proposal, briefly presented emotional faces are processed using low-level and automatic mechanisms that run below consciousness. Subliminal emotional priming effects are due to front-end changes in perception of the stimulus' incentive value (e.g., the ideograph “looks” better; the Kool-Aid “seems” tastier). On this account, there are no consciously accessible changes in feelings that could assist in the identification of the briefly presented expressions. Accordingly, the Unconscious Emotion model predicts no effects of the internal focus manipulation.

Our first question was whether subjects can be instructed to strategically use their physiological reactions or changes in their feelings to discern the valence of a briefly presented face in a forced-choice awareness test. We therefore devised three different instructions: one asking participants to monitor their own internal reactions, and two control conditions. This manipulation was inspired by an earlier study which examined different strategies for the perception of briefly presented neutral (non-emotional) words (Snodgrass et al., 1993). In that study, an intuitive ‘pop’ strategy, encouraging subjects to “just relax” and say “whatever word pops into your head”, improved detection of subliminal words over a visual look-hard strategy.

A precondition for the use of a physiological response is that such a response actually occurs under the conditions of a forced-choice awareness test. Subliminally presented faces have been shown to induce spontaneous smiling and frowning (e.g., Dimberg et al., 2000, Rotteveel et al., 2001). However, it is not clear whether the same effects occur when people know about the presence of the faces and deliberately try to perceive them. Thus, we measured the physiological response with facial EMG. Furthermore, we wanted to know how much information about the briefly presented face is mirrored in the physiological signal. This comes down to the question: using the physiological signal in a computationally optimal way, how precisely can we infer what stimulus has been presented to the subject?
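To make this concrete, the following is a minimal sketch (in Python, using scikit-learn) of how stimulus valence could be decoded from trial-wise EMG. The data file and column names are hypothetical placeholders for illustration; they are not the study's actual analysis pipeline.

```python
# Minimal sketch: predict stimulus valence from trial-wise facial EMG.
# The CSV file and column names ("corrugator_change", "zygomaticus_change",
# "valence") are assumed placeholders, not the study's actual data format.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

trials = pd.read_csv("emg_trials.csv")                    # one row per trial
X = trials[["corrugator_change", "zygomaticus_change"]]   # baseline-corrected EMG
y = trials["valence"]                                     # e.g., "angry" vs. "happy"

clf = LogisticRegression()
# 10-fold cross-validated accuracy: how well the EMG signal alone
# discriminates which kind of face was flashed on a given trial.
accuracy = cross_val_score(clf, X, y, cv=10).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```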

Finally, we varied different parameters of the stimuli and their presentation, such as emotion type, face set, mask, and duration. We did this for several reasons. First, we wanted to identify a condition where the physiological signal induced by the emotional face is strong, but produces low behavioral detection rates. In such a condition, the use of physiological feedback might be particularly beneficial for enhancing recognition. Second, we wanted to learn how the EMG response and identification depend on the parameters of stimulus presentation. This is of theoretical interest, because these parameters bear on different mechanisms involved in emotion processing (we elaborate on this in the discussion). It is also of practical interest to researchers in the field, because individual studies often differ on such parameters, making systematic comparisons across studies difficult.

Thus, first, we varied displayed emotion: happy, angry, and neutral. We chose happiness and anger because these emotions are most commonly used in studies relying on brief presentation. Second, we varied face sets, relying on three widely used sets (details below; Section 2.2). Third, we varied mask type: neutral face or dotted pattern, the two most typical ways of masking. Finally, we varied prime duration: 10 or 20 ms. Although some studies mentioned earlier used presentations as short as 10 ms, others used durations even longer than 20 ms (e.g., Stapel et al., 2002, used 30 ms and 100 ms). We used 10 and 20 ms to explore how the behavioral and physiological responses depend on the strength of the affective input, while keeping detection reasonably close to chance.
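As an illustration of this fully crossed design, the sketch below builds a balanced, shuffled trial list from the four factors. The set labels and the repetition count are placeholders chosen for illustration, not the study's actual parameters.

```python
# Sketch of the fully crossed stimulus/presentation design described above.
# Factor labels and trial counts are illustrative, not taken from the paper.
import itertools
import random

emotions = ["happy", "angry", "neutral"]
face_sets = ["set_A", "set_B", "set_C"]     # three face sets (placeholder names)
masks = ["neutral_face", "dot_pattern"]
durations_ms = [10, 20]

# One trial per cell of the 3 x 3 x 2 x 2 design; repeat and shuffle for a session.
cells = list(itertools.product(emotions, face_sets, masks, durations_ms))
trials = cells * 4                           # e.g., 4 repetitions per cell
random.shuffle(trials)

for emotion, face_set, mask, duration in trials[:3]:
    print(f"{emotion:8s} {face_set:6s} {mask:12s} {duration} ms")
```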

A simple forced-choice awareness procedure was used. Participants were briefly flashed a face that was either emotional or neutral. The face was immediately covered by a mask (either a face or an assembly of dots). After the mask, participants indicated whether the briefly presented face had been emotional or neutral.

Section snippets

Participants

Participants were 58 undergraduates from the University of California, San Diego (gender: 14 male, 1 no gender specified; mean age = 19.8 years, sd = 1.39 years). They participated for partial course credit. Ethnicity was predominantly Asian (42 Asian, 6 Caucasian, 6 Hispanic, 1 Indian, 1 Persian, 2 missing), but most had been raised in the USA and spoke fluent English. Because of the need to attach EMG electrodes, strong facial hair was an exclusion criterion.

Materials

Three different face sets were used (

Data transformations

One subject was removed from the analysis because he always responded with “no emotion”. Response times of the remaining subjects were screened for outliers: 315 out of 6752 trials (4.67%) were removed because the response time was more than two standard deviations away from the individual mean. For the EMG data, outlier/artifact rejection was performed by removing activity that was two standard deviations above or below the individual mean activity of the respective channel. The last 200 ms before the
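As a rough illustration of the trimming rule just described, the sketch below drops trials whose response time falls more than two standard deviations from that participant's own mean. The data file and column names ("subject", "rt_ms") are assumed for illustration only.

```python
# Sketch of the trial-level outlier rule described above: drop trials whose
# response time lies more than 2 SD from the participant's own mean RT.
# File and column names are assumed placeholders.
import pandas as pd

data = pd.read_csv("detection_trials.csv")

def within_2sd(rt: pd.Series) -> pd.Series:
    """True for trials within 2 SD of this participant's mean RT."""
    return (rt - rt.mean()).abs() <= 2 * rt.std()

kept = data[data.groupby("subject")["rt_ms"].transform(within_2sd)]
removed = len(data) - len(kept)
print(f"Removed {removed} of {len(data)} trials "
      f"({removed / len(data):.2%}) as RT outliers")
```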

Discussion

The study had several objectives. First, we investigated whether subjects can use changes in their own physiology and subjective feelings to make more accurate judgments about the valence of briefly presented emotional faces. This yielded the following main result: simply instructing subjects to pay attention to their own facial muscles and subjective feelings did not suffice to improve detection of emotional stimuli, as compared to subjects who were uninstructed or instructed to look very closely

Funding

BB was supported by a scholarship from the Studienstiftung des Deutschen Volkes. PW was supported by Grant BCS-0350687 from the National Science Foundation.

Acknowledgments

We thank Liam Kavanagh, Shlomi Sher, Mark Starr, Josh Susskind, and Galit Yavne for their help at various stages of this research. This study was conceived and performed when Boris Bornemann visited UCSD Psychology Department.

References (56)

  • P.S. Wong et al. Dynamic variations in affective priming. Consciousness and Cognition (2003)
  • J.A. Bargh et al. The unbearable automaticity of being. American Psychologist (1999)
  • K.C. Berridge et al. What is an unconscious emotion? (The case of unconscious “liking”). Cognition and Emotion (2003)
  • K.R. Bogart et al. Facial mimicry is not necessary to recognize emotion: facial expression recognition by people with Moebius syndrome. Social Neuroscience (2010)
  • G.L. Clore et al. Affective feelings as feedback: some cognitive consequences
  • C. Cratty. TSA behavior detection efforts missed alleged terrorists
  • U. Dimberg et al. Unconscious facial reactions to emotional facial expressions. Psychological Science (2000)
  • U. Dimberg et al. Facial reactions to emotional stimuli: automatically controlled emotional responses. Cognition and Emotion (2002)
  • P. Ekman. How to spot a terrorist on the fly. The Washington Post
  • P. Ekman et al. Pictures of Facial Affect (1976)
  • A.J. Fridlund et al. Guidelines for human electromyographic research. Psychophysiology (1986)
  • W. Hart. The Art of Living: Vipassana Meditation as Taught by S.N. Goenka (1987)
  • U. Hess et al. Facial mimicry
  • C.H. Hjortsjö. Man's Face and Mimic Language (1970)
  • B.K. Hölzel et al. Investigation of mindfulness meditation practitioners with voxel-based morphometry. Social Cognitive and Affective Neuroscience (2008)
  • J. Kabat-Zinn. Mindfulness-based interventions in context: past, present, and future. Clinical Psychology: Science and Practice (2003)
  • K.H. Kim et al. Emotion recognition system using short-term monitoring of physiological signals. Medical and Biological Engineering and Computing (2004)
  • J.D. Laird. Self-attribution of emotion: the effects of expressive behavior on the quality of emotional experience. Journal of Personality and Social Psychology (1974)