ORIGINAL RESEARCH article

Front. Psychol., 30 November 2012
Sec. Emotion Science

Superior Recognition Performance for Happy Masked and Unmasked Faces in Both Younger and Older Adults

  • 1 Aging Research Center, Karolinska Institute, Stockholm, Sweden
  • 2 Department of Psychology, Stockholm University, Stockholm, Sweden

In the aging literature it has been shown that even though emotion recognition performance decreases with age, the decrease is smaller for happiness than for other facial expressions. Studies in younger adults have also revealed that happy faces are more strongly attended to and better recognized than other emotional facial expressions. Thus, there might be a more age-independent happy face advantage in facial expression recognition. By using a backward masking paradigm and varying stimulus onset asynchronies (17–267 ms), the temporal development of a happy face advantage, on a continuum from low to high levels of visibility, was examined in younger and older adults. Results showed that across age groups, recognition performance for happy faces was better than for neutral and fearful faces at durations longer than 50 ms. Importantly, the results showed a happy face advantage already during early processing of emotional faces in both younger and older adults. This advantage is discussed in terms of processing of salient perceptual features and elaborative processing of the happy face. We also investigated the combined effect of age and neuroticism on emotional face processing. The rationale was previous findings of age-related differences in physiological arousal to emotional pictures and a relation between arousal and neuroticism. Across all durations, there was an interaction between age and neuroticism, showing that being high in neuroticism might be disadvantageous for younger, but not older, adults’ emotion recognition performance during arousal-enhancing tasks. These results indicate that there is a relation between aging, neuroticism, and performance, potentially related to physiological arousal.

Introduction

In the aging and emotion literature it is often assumed that, although recognition performance for facial expressions seems to decrease with age, the absolute performance level for happy faces is preserved (see Isaacowitz et al., 2007, p. 148 for a comprehensive table). However, results from a recent meta-analysis of aging and emotion recognition (Ruffman et al., 2008) showed a decrease in performance with age also in recognition of happy faces. Even though older adults performed worse than younger adults in recognition of all facial expressions except for faces displaying disgust, the age-related differences in performance were smaller for happy (and surprised) faces than for other facial expressions. Thus, recognition performance for happy faces seems to be relatively preserved in older adults, which is in line with the notion of a shift of focus toward positive, and away from negative, information with advancing age (e.g., Mather and Carstensen, 2005).

On the other hand, a growing literature based on younger adults shows an advantage for happy faces compared to other facial expressions in both recognition and attention. For example, Calvo and Lundqvist (2008) found that during a free-viewing condition, accuracy rates were higher and reaction times were shorter for happy faces compared to all other emotional expressions. When faces were presented for shorter durations (25–500 ms), recognition of happy faces reached a ceiling at the 50 ms duration. No other expression reached the ceiling even at the 500 ms duration. Further, visual search studies with younger adults showed that, when real faces were used as targets, happy faces were detected faster and with higher accuracy than other emotional faces (Juth et al., 2005; Calvo and Nummenma, 2008; Calvo and Marrero, 2009). The fact that this happy face advantage is evident even outside of overt visual attention and for inverted faces has led to the conclusion that it is driven by processing of salient facial features (Calvo et al., 2010). Thus, rather than positive preferences in older adults (e.g., Mather and Carstensen, 2005), the relatively preserved recognition performance for happy faces might be explained by a more general advantage for happy faces that is independent of age (e.g., Calvo et al., 2010).

Although the happy face advantage has been investigated in several spatial attention studies (e.g., Juth et al., 2005; Calvo and Marrero, 2009), the temporal aspect of the happy face advantage is relatively unexplored. Calvo and Lundqvist (2008) found that in overt recognition, younger adults reached the ceiling in performance already when faces were displayed for 50 ms. However, accuracy was higher for happy faces than for all other facial expressions even in the shortest presentation time condition (25 ms). Thus, the onset of the happy face advantage in recognition performance remains unclear. By using a backward masking paradigm with stimulus onset asynchronies (SOAs) starting at 17 ms, we investigated when the happy face advantage occurred during visual processing. This technique allows parametric modulation of visibility, which, in turn, allows investigation of recognition performance on a continuum from low to high levels of visibility. In addition to making it possible to study the temporal aspect of the happy face advantage, the present masked recognition task also differs from other recognition tasks, in which participants are asked to label the facial expressions (McDowell et al., 1994; Phillips et al., 2002; Calder et al., 2003). In such recognition tasks, participants typically have several negative but only one positive expression to choose from. The present masked recognition task used a forced-choice format in which participants chose between fearful, happy, and neutral expressions. Such distinct facial expressions in terms of emotional valence minimize the risk of confusing expressions. Thus, our masked recognition task gives a valid measure of the processing of clear-cut positive and negative emotional information, in contrast to non-forced recognition tasks in which participants have several negative expressions to choose from, which might lead to confusion, especially among older adults (Ruffman et al., 2008).

In addition, and with a quite different research question, we investigated the effect of neuroticism on processing of briefly presented emotional stimuli. The rationale for this was the assumption of an inverted U-shaped relationship between arousal and performance (Yerkes and Dodson, 1908; Humphreys and Revelle, 1984). In a series of visual attention tasks, Szymura and Wodniecka (2003) found that in less demanding conditions, with relatively long stimulus presentations (850 ms), neurotics and controls (younger adults) performed equally well. However, in more difficult conditions, when the presentation time was shorter, neurotics performed worse than controls. These results may indicate that neurotic individuals’ arousal levels became too high for optimal performance in the more difficult condition, leading to impaired performance. Given results from previous research showing decreased physiological arousal in older adults compared to younger adults when exposed to emotional pictorial stimuli (Gavazzeni et al., 2008) and emotion-eliciting film clips (Tsai et al., 2000), it was of interest to investigate whether the high baseline arousal associated with neuroticism (Eysenck and Eysenck, 1985) would have an age-dependent, reversed influence on emotion recognition performance in the present arousal-enhancing backward masking task. Based on Szymura and Wodniecka’s (2003) findings, we expected a decrease in performance of high neurotic younger adults in the masked recognition task, but not in the unmasked recognition task. For older adults, we expected the reverse: an amplified arousal level in high neurotic older adults should be beneficial for their performance. That is, because older adults on average have a lower arousal level during processing of emotional visual information (Tsai et al., 2000; Gavazzeni et al., 2008), high neurotic older adults are unlikely to surpass an optimal arousal level in the current study task. Instead, their arousal levels were expected to approach optimal arousal levels.

To summarize, we used a backward masking paradigm with several SOAs in order to investigate the temporal aspect of the happy face advantage in younger and older adults. This approach allowed an investigation of recognition performance as a continuum from low to high levels of visibility. Lastly, we investigated whether neuroticism and aging have a combined effect on emotion recognition performance during an arousal-enhancing condition.

Materials and Methods

Participants

The sample consisted of 20 older (10 women and 10 men) and 19 younger participants (10 women and 9 men). Mean age (SD) was 25.5 years (2.6) for younger women, 27.3 years (2.6) for younger men, 72.4 years (2.3) for older women, and 75.0 years (2.6) for older men. An ANOVA with age (years) as dependent variable and age group and sex as independent variables revealed main effects of age group, F(1, 35) = 3446.59, p < 0.001, and sex, F(1, 35) = 7.58, p = 0.009 (men older than women), but no interaction between age group and sex, F(1, 35) < 1, p > 0.05. Independent samples t-tests showed that there were no differences between younger and older adults in any of the demographic variables (ts < 1.7, ps > 0.05): years of education (Myoung = 14.4, SDyoung = 1.9; Mold = 14.15, SDold = 1.8); Mini-Mental State Examination – MMSE (Myoung = 29.1, SDyoung = 0.8; Mold = 28.7, SDold = 1.1; Folstein et al., 1975); neuroticism (Myoung = 83.8, SDyoung = 26.3; Mold = 70.0, SDold = 23.7; Costa and McCrae, 1992); and vocabulary (Myoung = 24.1, SDyoung = 3.1; Mold = 25.1, SDold = 2.7; Dureman, 1960). All subjects had responded to ads regarding participation in psychological studies at the Aging Research Center at Karolinska Institute and were contacted via phone. They were asked if they would like to participate in a study in which pictures of faces would be shown at different durations, and their task was to judge the facial expressions. The study was approved by the local ethics committee. Participation was based on written informed consent and participants received a lottery ticket worth 6 Euro.

For the neuroticism analysis, participants were divided into low and high neurotics using a median split approach. Thus, the analyses contained 39 subjects divided into four groups: younger high neurotics (n = 10; MNscore = 103.8, SDNscore = 20.03), younger low neurotics (n = 9; MNscore = 61.56, SDNscore = 7.32), older high neurotics (n = 9; MNscore = 88.89, SDNscore = 20.06), and older low neurotics (n = 11; MNscore = 54.55, SDNscore = 12.87). Compared to a norm group consisting of 363 men and 269 women (Costa and McCrae, 1985), across sex, the participants did not differ in neuroticism scores, t(38) < 1, p > 0.05 (Mnorm = 78.35; Mcurrent study = 76.72).
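As a concrete illustration of the median split grouping, a minimal Python sketch is given below. The scores and variable names are hypothetical, and whether the original split was performed across the whole sample or within each age group is not stated in the text; the sketch simply splits one vector of scores at its median.

```python
import numpy as np

# Hypothetical NEO-PI-R neuroticism scores, for illustration only
neuroticism = np.array([104, 62, 89, 55, 78, 84, 61, 70, 95, 58])

# Median split: scores above the median -> "high neurotic" group,
# scores at or below the median -> "low neurotic" group
cutoff = np.median(neuroticism)
group = np.where(neuroticism > cutoff, "high", "low")

print(f"median = {cutoff}", dict(zip(*np.unique(group, return_counts=True))))
```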

Materials

Thirty grayscale pictures of faces depicted 10 individuals, each displaying fearful, neutral, and happy facial expressions (Ekman and Friesen, 1976). Half of the identities were female (labeled by Ekman and Friesen as C, MF, NR, PF, and SW), and half were male (EM, GS, JJ, PE, and WF). Thirty masking pictures were generated by scrambling the 30 original pictures. Each original picture (200 × 310 pixels) was divided into blocks of 5 × 5 pixels (62 rows by 40 columns), and the 2480 blocks were shuffled so that their location was random. The experiment was run on a desktop PC with a standard 17-inch cathode ray tube (CRT) monitor. Screen resolution was 800 × 600 pixels, the refresh rate was 60 Hz, and the experiment was programmed in Presentation 9.3 (Neurobehavioral Systems, www.neurobs.com). Participants viewed the pictures at a distance of about 1 m, and the visual angle was 7.4° × 9.4°.
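To make the mask construction concrete, the sketch below (in Python with NumPy, under the assumption that each picture is available as a 310 × 200 grayscale array) cuts an image into 5 × 5 pixel tiles and shuffles them. The function name and the random input image are ours, not part of the original stimulus preparation.

```python
import numpy as np

def scramble_blocks(img, block=5, rng=None):
    """Shuffle non-overlapping block x block tiles of a grayscale image.

    img: 2-D array whose dimensions are multiples of `block`
    (here 310 x 200 pixels -> 62 x 40 = 2480 tiles).
    """
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = img.shape
    # Cut the image into tiles: shape (n_tiles, block, block)
    tiles = (img.reshape(rows // block, block, cols // block, block)
                .swapaxes(1, 2)
                .reshape(-1, block, block))
    rng.shuffle(tiles)  # randomize tile order in place
    # Reassemble the shuffled tiles into an image of the original size
    return (tiles.reshape(rows // block, cols // block, block, block)
                 .swapaxes(1, 2)
                 .reshape(rows, cols))

# Example: scramble a random "face" image of the stimulus dimensions
face = np.random.randint(0, 256, size=(310, 200), dtype=np.uint8)
mask = scramble_blocks(face)
```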

Procedure

The experiment consisted of two tasks: the masked recognition task was preceded by an unmasked recognition and intensity rating task. The unmasked recognition task enabled comparisons of overt facial expression recognition performance between participants of the present study and participants in previous studies. In the rating task, all faces were shown until participants had finished rating intensity and labeling the facial expression. In the masked recognition task, the same faces were shown briefly and masked by scrambled versions of the pictures. Piloting showed that older participants tended to be unfamiliar with computers. To avoid distraction from the tasks, none of the participants were required to enter their answers themselves on a keyboard. Instead, participants received screen prompts to respond verbally and their answers were collected by the experimenter (Joakim Svärd).

Unmasked recognition and intensity rating task

Participants were informed that they would be shown pictures of faces for unlimited durations, and that their task was to rate intensity and to label the emotional facial expression. Participants rated the intensity of expressions on a nine-point scale (1 = very low, 9 = very high). Because some pilot participants tended to confuse intensity of expression with accuracy of emotional expression (e.g., a neutral face that was clearly neutral was rated as nine), participants in the present experiment completed a practice task. They were shown line drawings of neutral and emotional faces with different degrees of emotional expression and practiced judging intensity rather than accuracy of the expression. In the actual task, participants were shown all 30 pictures in random order. While each picture was presented on the screen, participants rated the intensity of expression. Next, they labeled the expression by choosing among seven response alternatives (happy, surprised, angry, sad, neutral, disgusted, and fearful) presented on the screen. This seven-alternative response format was used in order to enable comparisons with other overt facial expression recognition studies.

To analyze the recognition data, indexes of signal detection theory (MacMillan and Creelman, 1991) were used to separate discrimination ability from response biases, that is, participants’ tendency to favor one response alternative over the others. For example, if response biases are not taken into account (as for percent correct), participants favoring happy obtain an inflated percent correct for happy faces. To separate discrimination ability (d′) from response biases (C), signal detection indexes (Snodgrass and Corwin, 1988; MacMillan and Creelman, 1990) were calculated for each participant and target emotion (fearful, happy, neutral) based on hit rates (i.e., percent of trials in which a target expression was labeled correctly as a target, e.g., happy face labeled as happy) and false alarm rates (i.e., percent of trials in which a non-target expression was labeled incorrectly as a target, e.g., fearful and neutral face labeled as happy). Because signal detection indexes cannot be computed with hit and false alarm rates that are 0 or 1, recommendations were followed (Snodgrass and Corwin, 1988) when calculating each rate; half a trial was added as a response (numerator) and one trial was added to the total number of possible responses (denominator).

For C, positive values reflect conservative response biases (i.e., the target expression is favored less), negative values reflect liberal biases (i.e., the target expression is favored more), and zero reflects no bias. That is, C = 0 is the point at which it is as probable to classify a target as a non-target (miss) as it is to classify a non-target as a target (false alarm).
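The computations described above can be written out compactly. Under the standard equal-variance Gaussian model, d′ = z(hit rate) − z(false-alarm rate) and C = −[z(hit rate) + z(false-alarm rate)]/2, where z is the inverse standard normal cumulative distribution function. The Python sketch below applies the correction described in the text (add half a trial to each count and one trial to each total; Snodgrass and Corwin, 1988); the function names and example counts are ours and purely illustrative.

```python
from scipy.stats import norm

def rate(count, n_trials):
    """Corrected proportion: add 0.5 to the count and 1 to the number of
    trials so that rates of exactly 0 or 1 cannot occur."""
    return (count + 0.5) / (n_trials + 1)

def sdt_indexes(hits, n_targets, false_alarms, n_nontargets):
    """Discrimination (d') and response bias (C) for one target emotion."""
    h = rate(hits, n_targets)              # hit rate
    f = rate(false_alarms, n_nontargets)   # false-alarm rate
    d_prime = norm.ppf(h) - norm.ppf(f)
    c = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, c

# Hypothetical example: target = happy, 10 happy trials, 20 non-happy trials
d, c = sdt_indexes(hits=9, n_targets=10, false_alarms=1, n_nontargets=20)
print(f"d' = {d:.2f}, C = {c:.2f}")
```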

Masked recognition task

Participants were informed that they would be shown faces displaying fearful, neutral, and happy expressions at various durations. Due to the risk of floor effects that would have followed from the seven-alternative format used in the unmasked recognition task, we instead used a three-alternative choice, in line with the majority of previous backward masking studies of facial expression recognition (e.g., Duan et al., 2010; Bornemann et al., 2012). On each trial, a picture would be shown followed immediately by another (scrambled) picture (Figure 1). Participants were informed that a face would always be shown, and that they were to decide whether the expression was happy, neutral, or fearful. Importantly, to study response biases, participants were not informed that all emotions were equally likely across trials. However, they were informed that picture duration might be sufficiently brief to appear to them as if no face was presented. If so, they were instructed to guess the emotional expression. Target pictures were shown for durations of about 17, 33, 50, 67, 83, 117, and 267 ms, corresponding to 1, 2, 3, 4, 5, 7, and 16 refresh cycles on a 60 Hz monitor, respectively. That is, on CRT monitors, picture duration is a function of the number of refresh cycles (Bridgeman, 1998). Because the software synchronized picture display with individual refresh cycles (Wiens et al., 2004), at a refresh rate of 60 Hz, each refresh cycle lasted about 17 ms (1000 ms/60 Hz = 16.7 ms). After each target, a masking picture was shown for 317 ms (19 refresh cycles). The 30 masking pictures were used once each on 30 consecutive trials in random order. In total, the task consisted of 210 trials (7 durations × 3 expressions × 10 individual faces). To avoid fatigue, participants were allowed a 5-min break after completing half of the trials.
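Because picture duration on a CRT is tied to whole refresh cycles, the nominal durations above follow directly from the 60 Hz refresh rate. The short sketch below works through that arithmetic and assembles the 210-trial design; the variable names are ours and the snippet only illustrates the timing logic, not the original Presentation script.

```python
import itertools
import random

REFRESH_HZ = 60
CYCLE_MS = 1000 / REFRESH_HZ  # ~16.7 ms per refresh cycle

# Target durations expressed in refresh cycles and the resulting times
target_cycles = [1, 2, 3, 4, 5, 7, 16]
target_ms = [round(n * CYCLE_MS) for n in target_cycles]  # 17, 33, 50, 67, 83, 117, 267
mask_ms = round(19 * CYCLE_MS)                            # 317 ms mask

# Full trial list: 7 durations x 3 expressions x 10 identities = 210 trials
expressions = ["fearful", "neutral", "happy"]
identities = range(10)
trials = list(itertools.product(target_cycles, expressions, identities))
random.shuffle(trials)

print(len(trials), target_ms, mask_ms)
```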

FIGURE 1

Figure 1. Example of the stimulus presentation sequence in the masked recognition task. Target pictures were shown for durations of 17, 33, 50, 67, 83, 117, and 267 ms, followed by a scrambled picture (mask) for 317 ms.

As for the unmasked recognition task, signal detection analyses (MacMillan and Creelman, 1991) were performed to separate discrimination ability (d′) from response bias (C). Based on hit and false alarm rates, these indexes were calculated for each participant for each target emotion (fearful, neutral, and happy) and the seven picture durations.

Results

Although the goal was to study effects of age, the sample size was sufficient to also analyze effects of sex. The reason for including sex as a factor was findings showing that women seem to be better than men at facial expression recognition under low-intensity conditions (Hoffman et al., 2010). However, because effects of sex are not the main focus of the present study, results for sex are mentioned but not discussed further.

In all analyses involving repeated measures, regular degrees of freedom are reported together with observed significance levels after Greenhouse–Geisser correction. Results were considered significant at p < 0.05.
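For readers reproducing the analyses, the Greenhouse–Geisser correction for violations of sphericity is available in standard statistics packages. The sketch below shows one possible way to obtain it in Python with the pingouin package; the data are simulated and the column names are ours, so this is an illustration of the correction, not a re-analysis of the study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Simulated long-format data: one d' value per participant and emotion
rng = np.random.default_rng(0)
n = 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 3),
    "emotion": np.tile(["fearful", "neutral", "happy"], n),
    "dprime": np.concatenate([rng.normal([2.0, 1.9, 2.6], 0.3) for _ in range(n)]),
})

# correction=True requests Greenhouse-Geisser adjusted p-values; the regular
# (uncorrected) degrees of freedom are still reported in the output table
aov = pg.rm_anova(data=df, dv="dprime", within="emotion",
                  subject="participant", correction=True)
print(aov)
```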

Unmasked Recognition and Intensity Rating Task

Unmasked recognition

When participants labeled the emotional expression of the target faces, they could choose among happy, surprised, angry, sad, neutral, disgusted, or fearful expressions.

An ANOVA of d′ with age, sex, and neuroticism as between-subjects variables and emotion as a within-subjects variable revealed a significant main effect of age, F(1, 31) = 8.73, p = 0.006. As shown in Figure 2A, younger adults performed better than older adults across emotions. There was also a main effect of emotion, F(2, 62) = 46.84, p < 0.001. Across age, performance was significantly better for happy faces than for neutral, F(1, 31) = 90.12, p < 0.001, and fearful faces, F(1, 31) = 78.15, p < 0.001, whereas the latter two did not differ from each other, F(1, 31) < 1, p > 0.05. For completeness, t-tests were computed and showed that younger adults performed better than older adults on fearful faces, t(37) = 2.15, p = 0.039, but not on happy, t(37) = 1.9, p = 0.068, or neutral faces, t(37) = 1.27, p = 0.213 (Table 1). The main effect of sex, F(1, 31) = 5.56, p = 0.007, showed that women performed better than men (Mwomen = 2.74, SEwomen = 0.06; Mmen = 2.29, SEmen = 0.06). However, these main effects were qualified by an interaction between age, sex, neuroticism, and emotion, F(2, 62) = 3.94, p = 0.029. Two follow-up ANOVAs, computed for men and women separately, revealed an emotion by neuroticism by age interaction for men, F(2, 30) = 6.56, p = 0.004, but not for women, F(2, 30) < 1, p > 0.05, most parsimoniously described as older low neurotic men performing better than older high neurotic men for neutral faces.

FIGURE 2

Figure 2. Mean (SE) recognition performance (d′) across durations in the unmasked recognition task (A) and masked recognition task (B), separately for expression and age. Note. Results of t-tests for age differences are denoted with asterisks. **p < 0.01, *p < 0.05.

TABLE 1

Table 1. Mean (and SD) intensity ratings and mean d′ (and SD) in the unmasked recognition task.

Intensity ratings

Participants rated the intensity of expression of each of the 10 individuals displaying fearful, neutral, and happy expressions on a nine-point scale. Because analyses of participants’ ability to label expressions correctly suggested age and sex differences (see below), analyses of intensity ratings are reported only for faces that were labeled correctly. Thus, this analysis does not confound intensity ratings with participants’ ability to label expressions correctly. However, results were comparable when all trials were included.

An ANOVA of intensity ratings with age, sex, and neuroticism as between-subjects variables and facial emotion (fear, happy, neutral) as a within-subjects variable showed significant main effects for emotion, F(2, 62) = 117.74, p < 0.001, and for age, F(1, 31) = 6.07, p = 0.02. However, these main effects were qualified by an interaction between age and facial emotion, F(2, 62) = 5.23, p = 0.016. Results from follow-up t-tests are presented in Table 1.

Masked Recognition Task

Because the research questions differed, masked recognition performance was analyzed twice. In the main age and emotion analysis, the aim was to investigate the development of facial expression recognition performance across durations ranging from 17 up to 267 ms. Rather than investigating performance at different durations, the aim of the additional neuroticism analysis was to investigate performance in relation to the different nature of the tasks (i.e., a high arousal generating task (masked recognition) versus a low arousal generating task (unmasked recognition)). Because even the longest durations in the current study were beyond the exposure times previously reported as aggravating for high neurotic younger adults (Szymura and Wodniecka, 2003), and performance at single durations is not considered important for the neuroticism analysis, an additional univariate ANOVA across durations was computed for effects of neuroticism in the masked recognition task. In addition, because we expected arousal rather than emotional valence per se to moderate performance, the ANOVA was collapsed across emotions. Thus, for the masked recognition task two separate analyses were computed: age, sex, and duration (Figure 3), followed by age, sex, and neuroticism across all durations (Figure 4).

FIGURE 3

Figure 3. Mean (SE) recognition performance (d′) in the masked recognition task depending on picture duration, facial emotion and age.

FIGURE 4

Figure 4. Mean (SE) recognition performance (d′) in the masked recognition task (across emotions and durations) for age and neuroticism. Note. Results of t-tests are denoted with asterisks. **p < 0.01, *p < 0.05.

Recognition performance: age and duration

An ANOVA of d′ with age and sex as between-subjects variables and picture duration and emotion as within-subjects variables revealed a significant main effect of age, F(1, 35) = 22.11, p < 0.001. As shown in Figure 2B, younger adults performed better than older adults across all durations (Myoung = 2.75, SEyoung = 1.1; Mold = 2.03, SEold = 1.1). There was also a main effect of emotion, F(2, 70) = 57.08, p < 0.001. Contrast tests across age showed that performance was significantly better for happy faces (Mhappy = 2.61, SEhappy = 0.07) than for neutral (Mneutral = 2.23, SEneutral = 0.09), F(1, 35) = 90.83, p < 0.001, and fearful faces (Mfearful = 2.34, SEfearful = 0.08), F(1, 35) = 41.88, p < 0.001. However, in contrast to results for the unmasked recognition task, performance was better for fearful than neutral faces, F(1, 35) = 18.07, p < 0.001. Further, these main effects were qualified by an interaction between emotion and age, F(2, 70) = 6.19, p = 0.006. Contrast tests in two follow-up ANOVAs, separated by age, showed that performance was better for fearful (Moldfear = 1.98, SEoldfear = 0.11) than for neutral faces (Moldneu = 1.81, SEoldneu = 0.12) for older, F(1, 17) = 24.45, p < 0.001, but not younger adults, F(1, 17) = 1.62, p > 0.05. In both age groups, performance was better for happy faces compared to fearful and neutral faces (ps < 0.001). Lastly, there was a main effect of duration, F(6, 210) = 341.24, p < 0.001, which was qualified by an interaction between duration and age, F(6, 210) = 8.7, p < 0.001, most parsimoniously described as follows: with longer durations, the age-related differences across emotions decreased.

In order to investigate recognition performance for all facial expressions and all durations, both within and between age groups, planned paired-samples and independent-samples t-tests were conducted (see Table 2).

TABLE 2

Table 2. Mean d′ (and SDs) for masked recognition performance between and within age groups.

Response bias

A similar ANOVA of response bias (C) revealed effects of age, F(1, 36) = 41.75, p < 0.001, facial emotion, F(2, 72) = 40.18, p < 0.001, and an age by emotion interaction, F(2, 72) = 15.57, p < 0.001. In general, participants were less willing to respond that an expression was fearful or happy (i.e., a conservative bias, C > 0) than to respond that an expression was neutral (i.e., a liberal bias, C < 0). This effect was stronger for older than younger participants.

Recognition performance: age and neuroticism

A univariate ANOVA with masked recognition performance (averaged across durations) as the dependent variable and age, sex, and neuroticism (high, low) as fixed factors was computed. As shown in Figure 4, an interaction between age and neuroticism, F(1, 31) = 5.17, p = 0.030, indicates that across durations, high levels of neuroticism seem to have opposite effects on performance in younger and older adults. A high level of neuroticism seems to be disadvantageous for younger, but not older, adults. However, follow-up t-tests did not show differences between high and low neurotics in younger, t(17) = 1.79, p = 0.099, or older, t(18) = 1.01, p = 0.327, adults. Rather, the interaction seemed to be driven by age-related differences in low neurotic subjects, t(18) = 6.62, p < 0.001, but not in high neurotic subjects, t(17) = 1.81, p = 0.091, indicating a reduction in performance in young high, but not old high, neurotics compared to their low neurotic counterparts.

Discussion

In the present study, visibility was parametrically reduced in order to investigate the temporal development of a happy face advantage (e.g., Calvo and Lundqvist, 2008) in younger and older adults. Also, the combined effect of age and neuroticism, and its relation to arousal and performance, was investigated.

In their meta-analysis of emotion recognition and aging, Ruffman et al. (2008) showed that younger adults’ performance was better than older adults’ for all expressions except disgust, but the age difference was smaller for happiness (and disgust) than for all other facial expressions. Our results from the unmasked recognition task mirror the findings of this meta-analysis, at least for neutral, fearful, and happy facial expressions. Although younger adults’ performance was better than older adults’ across all emotions, the age-related differences were smaller for happy than fearful faces. Our results further extend those of Ruffman et al. (2008) by replicating the same recognition pattern with backwardly masked faces. As in the unmasked recognition task, younger adults performed better than older adults across expressions, but the age-related differences were smaller for happy than fearful and neutral faces. This shows that across age, happy faces have an advantage in recognition performance that holds even under restricted perceptual conditions (i.e., with backwardly masked faces). Note, however, that neither our results nor the results from the meta-analysis (Ruffman et al., 2008) support a so-called positivity effect among older adults, that is, a tendency for older adults to have a positivity bias in emotional attention and memory (Mather and Carstensen, 2005). As Ruffman et al. (2008) suggested, ceiling effects might explain why the effect size for the difference in performance on happy faces compared to other emotions was small. The present findings, both from the masked and the unmasked recognition tasks, clearly show that this was true for the younger adults also in our study. The only support for a positivity effect among older adults in our study is the finding that performance was better for happy than fearful and neutral faces at almost all durations for older, but not for younger, adults. Since the task is perceptually demanding, these results may suggest that older adults rely on salient facial features to a greater extent than younger adults. In support of such a saliency-based explanation is the finding that older, but not younger, adults’ masked recognition performance was better for fearful than neutral faces. However, this is inconsistent with the idea of a neglect of negative information among older adults (e.g., Mather and Carstensen, 2005). Also inconsistent with the notion of a positivity effect in older adults, and more in line with a more general, age-independent happy face advantage, is the finding that across age, and for both the masked and the unmasked recognition tasks, happy faces were recognized better than fearful or neutral faces. Thus, instead of an age by emotion interaction driven by age differences in recognition of negative, but not positive, faces (e.g., Calder et al., 2003), inspection of our masked recognition results (Figure 3) rather indicates a similar performance trajectory across age for both negative and positive faces.

Our results show that in both age groups, recognition was better for happy than neutral and fearful faces across all durations. These findings are in line with previous studies with younger adults showing a happy face advantage primarily in visual attention (Juth et al., 2005; Calvo and Nummenma, 2008; Calvo and Marrero, 2009) but also in recognition (e.g., Calvo and Lundqvist, 2008). When younger adults were shown unmasked faces at fixed display times, Calvo and Lundqvist (2008) found that recognition of happy faces reached ceiling levels at the 50 ms duration. They also found that recognition performance was better for happy than all other expressions already at their shortest duration (25 ms). Thus, an investigation of the temporal development of a happy face advantage based on their results is difficult. By using a backward masking paradigm and SOAs ranging from 17 to 267 ms, our study adds to the literature by showing that at durations longer than 50 ms, both age groups recognized masked happy faces better than neutral and fearful faces. This might indicate that when more time is given, presumably allowing more elaborative processing, the processing of happy faces becomes increasingly effective compared to that of fearful and neutral faces. This, in turn, suggests that the happy face advantage is not merely a result of salient featural processing. In fact, as durations become longer, the masked recognition performance resembles the results from the unmasked recognition task. In other words, with increased visibility, emotion recognition may rely less on salient features and instead shift to a more elaborative, emotional evaluation of the face. Although the present study cannot establish when such a shift may occur on this continuum, this is of interest for future studies to explore. Further, younger adults reached the performance ceiling when happy faces were presented for 67 ms. This emphasizes the strength of the happy face advantage not only in spatial attention, as has been shown in visual search tasks (Juth et al., 2005; Calvo and Nummenma, 2008; Calvo and Marrero, 2009), but also in the temporal aspect of the processing of facial expressions. The finding that older adults reached ceiling for happy faces at the 117 ms duration demonstrates that the happy face advantage remains intact with age.

Our findings from the masked recognition task indicate that salient facial features might guide attention during early processing of facial expressions under conditions of reduced stimulus visibility. For instance, a smiling (happy) face reveals more white parts (teeth) around the mouth than a fearful or a neutral face does, which might facilitate salient feature processing (Calvo and Marrero, 2009). Especially when tasks require fast processing, participants may rely heavily on salient features. Such interpretations have recently been suggested in studies using visual search (Calvo and Nummenma, 2008) and attentional blink tasks (Miyazawa and Iwasaki, 2010), and this may account for the happy face advantage in early processing of facial expressions. However, a majority of the studies investigating the happy face advantage focus mainly on the perceptual part and do not thoroughly consider the emotional aspect. Before any of these explanations can be established, both aspects must be taken into consideration, preferably in the same study. Our combined masked and unmasked recognition results suggest a combined effect of processing of salient perceptual features and more elaborative processing in emotional facial expression recognition. In our unmasked recognition task, in which unlimited time was given to the participants, recognition performance was better for happy faces than neutral and fearful faces. These results are difficult to explain simply in terms of saliency or perception. On the other hand, the perceptually demanding backward masking task forces participants to rely more on processing of salient features. However, it would be unrealistic to assume that this would also be the case in a non-perceptually demanding task such as our unmasked recognition task. Rather, this task would tap participants’ elaborative processing of the facial expression. In order to categorize the face into an emotional expression, participants most likely need to evaluate the face and choose a label that fits their internal representation of that specific expression. Thus, when visibility is reduced, participants might rely more on salient features that match their internal representation of a facial expression. On the other hand, when visibility is not reduced and enough time is given, the participants’ labeling should be a result of their elaborative processing of the face. From that point of view, our results from the masked recognition task would suggest a shift from salience-based processing (at the shortest durations) to more elaborative processing (at the longest durations). Viewed as a processing continuum from more salient to more elaborative processing, our masked recognition results would suggest that the happy face advantage emerges from salience-based processing and extends to, and accelerates upon reaching, more elaborative processing. That is, although happy faces were better recognized already at the shortest durations, this advantage was even more pronounced at the longer durations. Although our results from the masked and unmasked recognition tasks suggest a combined effect of processing of salient perceptual features and elaborative processing as explanatory factors for the happy face advantage, future studies must try to disentangle these factors within the same study paradigm in order to investigate their individual contributions to the happy face advantage.

The backward masking procedure has been used extensively in functional magnetic resonance imaging studies to investigate amygdala activity in unconscious processing of facial expressions (e.g., Whalen et al., 1998; Duan et al., 2010). The detection of briefly exposed fearful faces has been shown to remain intact even in patients with bilateral (Tsuchiya et al., 2009) and left lateral (Palermo et al., 2010) amygdala lesions. Interestingly, amygdala lesions impair the ability to consciously recognize a fearful facial expression (Tsuchiya et al., 2009; Palermo et al., 2010). Such findings support the idea of functionally separate pathways for conscious and unconscious fear processing (see Tamietto and de Gelder, 2010 for a review). Whereas conscious processing of fear seems to rely on a cortical route including the fusiform gyrus and the temporal pole, unconscious fear processing may instead depend on a subcortical route including the superior colliculus and the pulvinar (Morris et al., 1999). Because of their evolutionary relevance, fearful faces have received the most attention in imaging studies on amygdala activation to masked faces (e.g., Ottaviani et al., 2012). However, Williams et al. (2004) found that the amygdala was activated not only in response to fearful faces but also to happy faces during binocular suppression conditions, which might indicate that the information that reaches the amygdala through a subcortical route is not sufficient to discriminate between expressions. This is consistent with the assumption that the subcortical route’s role is to pass on low rather than high spatial frequency visual information to the amygdala (Vuillumier et al., 2003). In other words, without high spatial frequency information (e.g., sharp contours), the amygdala may only be able to perform a rough evaluation of the face. Not until high spatial frequency information reaches the amygdala through the cortical route may a more detailed facial expression discrimination occur (Vuillumier et al., 2003). Bar et al. (2006) showed that this bottom-up low spatial frequency information might, via a fast and crude pathway to the orbitofrontal cortex, activate an anticipation, which is subsequently matched with the accumulated information received from the slower cortical route. Similar “predictive codes” for faces stored in the medial frontal cortex (MFC) are assumed to be matched with incoming information in order to confirm face recognition (Summerfield et al., 2006). These top-down facilitation models would be at least partially consistent with our masked recognition results in the sense that a happy face contains more perceptually salient features (e.g., Calvo et al., 2010), which in turn might activate “stronger” anticipations compared to neutral and fearful faces. These differences in top-down activated anticipations would subsequently lead to differences in recognition performance between the facial expressions. In light of such an approach, the present masked recognition results would suggest a stronger and/or faster relation between a face template in the MFC and confirmatory visual information for happy compared to neutral and fearful faces. Although the present behavioral study for obvious reasons cannot conclude whether any predictive facial expression templates exist and to what extent they might differ among expressions, this is of interest for future studies to explore.

Finally, the present study investigated the combined effect of age and neuroticism on facial expression recognition at different presentation durations. Our results revealed an interaction between age and neuroticism, indicating that being high in neuroticism might be disadvantageous for younger, but not older, adults. The former is in line with previous findings showing decreased performance for high neurotic younger adults compared to low neurotic younger adults in attention-demanding tasks (Szymura and Wodniecka, 2003). Our results extend these findings by suggesting a reversed pattern for older neurotic adults, at least during recognition of facial expressions. These data indicate increased performance for high neurotic older adults compared to low neurotic older adults. The current findings are also in the expected direction given previous findings of age-related differences in physiological arousal when exposed to emotional stimuli (Tsai et al., 2000; Gavazzeni et al., 2008). Thus, enhanced arousal related to neuroticism in older adults could be beneficial by compensating for decreased arousal when older adults are exposed to emotional faces. Unfortunately, arousal was not directly measured in the present study, and therefore we can only assume that level of arousal mediates the interaction between age and neuroticism. However, as the age by neuroticism interaction was only evident in the briefly exposed masked recognition task and not in the self-paced intensity and emotion rating tasks, the findings support the interpretation that the effect is mediated by age-related arousal differences. Also, there was no interaction between facial emotion and neuroticism, which further supports the general arousal assumption. There were no significant differences between high and low neurotics within age groups, which may be a result of the small sample size. For example, the study by Szymura and Wodniecka (2003) included sample sizes ranging between 64 and 102 participants; the current study included 39 participants. Nonetheless, the age by neuroticism interaction remained significant, but given the small sample size in the present study, these results are preliminary and need to be replicated in a larger sample.

To summarize, in line with previous research on overt facial recognition (Calvo and Lundqvist, 2008), our results indicate that even when emotional faces were briefly exposed and backwardly masked, younger and older adults’ recognition performance was better for happy than fearful or neutral faces. This happy face advantage is suggested to emerge from a combination of salience-based and more elaborative processing of the happy face. Results from our two tasks showed that this advantage was also present in older adults, indicating a more general advantage for happy faces that is independent of age. Our results also revealed an interaction between age and neuroticism, showing that being high in neuroticism might be disadvantageous for younger but not older adults in arousal-enhancing tasks.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

This research was conducted at Aging Research Center, Karolinska Institutet, Stockholm, Sweden, and supported by the Swedish Research Council and The Bank of Sweden Tercentenary Foundation awarded to Håkan Fischer. The authors wish to thank Johanna Lovén for valuable comments during the preparation of this manuscript.

References

Bar, M., Kassam, K. S., Ghuman, A. S., Boshyan, J., Schmid, A. M., Dale, M. S., et al. (2006). Top-down facilitation of visual recognition. Proc. Natl. Acad. Sci. U.S.A. 103, 449–454.

Bornemann, B., Winkielman, P., and van der Meer, E. (2012). Can you feel what you do not see? Using internal feedback to detect briefly presented emotional stimuli. Int. J. Psychophysiol. 85, 116–124.

Bridgeman, B. (1998). Durations of stimuli displayed on video display terminals: (n-1)/f plus persistence. Psychol. Sci. 9, 232–233.

Calder, A. J., Keane, J., Manly, T., Sprengelmeyer, R., Scott, S., Nimmo-Smith, I., et al. (2003). Facial expression recognition across the adult life span. Neuropsychologia 41, 195–202.

Calvo, M. G., and Lundqvist, D. (2008). Facial expressions of emotion (KDEF): identification under different display-duration conditions. Behav. Res. Methods 40, 109–115.

Calvo, M. G., and Marrero, H. (2009). Visual search for emotional faces: the role of affective content and featural distinctiveness. Cogn. Emot. 23, 782–806.

Calvo, M. G., and Nummenma, L. (2008). Detection of emotional faces: salient physical features guide effective visual search. J. Exp. Psychol. Gen. 137, 471–494.

Calvo, M. G., Nummenma, L., and Avero, P. (2010). Recognition advantage of happy faces in extrafoveal vision: featural and affective processing. Vis. Cogn. 18, 1274–1297.

Costa, P. T., and McCrae, R. R. (1985). The NEO Personality Inventory Manual. Odessa, FL: Psychological Assessment Resources.

Costa, P. T., and McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five Factor Inventory (NEO-FFI) Professional Manual. Odessa, FL: Psychological Assessment Resources.

Duan, X., Dai, Q., Gong, Q., and Chen, H. (2010). Neural mechanism of unconscious perception of surprised facial expression. Neuroimage 52, 401–407.

Dureman, I. (1960). SRB:1. Stockholm: Psykologiförlaget.

Ekman, P., and Friesen, W. (1976). Pictures of Facial Affect. Palo Alto: Consulting Psychologists Press.

Eysenck, H. J., and Eysenck, M. W. (1985). Personality and Individual Differences: A Natural Science Approach. New York: Plenum Press.

Folstein, M. F., Folstein, S. E., and McHugh, P. R. (1975). “Mini-mental state.” A practical method for grading the cognitive state of patients for the clinician. J. Psychiatry Res. 12, 189–198.

Gavazzeni, J., Wiens, S., and Fischer, H. (2008). Age effects to negative arousal differ for self-report and electrodermal activity. Psychophysiology 45, 148–151.

Hoffman, H., Kessler, H., Eppel, T., Rukavina, S., and Traue, H. C. (2010). Expression intensity, gender and facial emotion recognition: women recognize only subtle facial emotions better than men. Acta Psychol. (Amst.) 135, 278–283.

Humphreys, M. S., and Revelle, W. (1984). Personality, motivation, and performance: a theory of the relationship between individual differences and information processing. Psychol. Rev. 91, 153–184.

Isaacowitz, D. M., Löckenhoff, C. E., Lane, R. D., Wright, R., Sechrest, L., Riedel, R., et al. (2007). Age differences in recognition of emotion in lexical stimuli and facial expressions. Psychol. Aging 22, 147–159.

Juth, P., Lundqvist, D., Karlsson, A., and Öhman, A. (2005). Looking for foes and friends: perceptual and emotional factors when finding a face in the crowd. Emotion 5, 379–395.

MacMillan, N. A., and Creelman, C. D. (1990). Response bias: characteristics of detection theory, threshold theory, and nonparametric indexes. Psychol. Bull. 107, 401–413.

MacMillan, N. A., and Creelman, C. D. (1991). Detection Theory: A User’s Guide. New York: Cambridge University Press.

Mather, M., and Carstensen, L. L. (2005). Aging and motivated cognition: the positivity effect in attention and memory. Trends Cogn. Sci. (Regul. Ed.) 9, 496–502.

McDowell, C. L., Harrison, D. W., and Demaree, H. A. (1994). Is right hemisphere decline in the perception of emotion a function of aging? Int. J. Neurosci. 79, 1–11.

Miyazawa, S., and Iwasaki, S. (2010). Do happy faces capture attention? The happiness superiority effect in attentional blink. Emotion 10, 712–716.

Morris, J. S., Öhman, A., and Dolan, R. J. (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proc. Natl. Acad. Sci. U.S.A. 96, 1680–1685.

Ottaviani, C., Cevolani, D., Nucifora, V., Borlimi, R., Agati, R., Leonardi, M., et al. (2012). Amygdala responses to masked and low spatial frequency fearful faces: a preliminary fMRI study in panic disorder. Psychiatry Res. 203, 159–165.

Palermo, R., Schmalzl, L., Mohamed, A., Bleasel, A., and Miller, L. (2010). The effect of unilateral amygdala removals on detecting fear from briefly presented backward-masked faces. J. Clin. Exp. Neuropsychol. 32, 123–131.

Phillips, L. H., MacLean, R. D. J., and Allen, R. (2002). Aging and the perception and understanding of emotions. J. Gerontol. B Psychol. Sci. Soc. Sci. 57, 526–530.

Ruffman, T., Henry, J. D., Livingstone, V., and Phillips, L. H. (2008). A meta-analytic review of emotion and aging: implications for neuropsychological models of aging. Neurosci. Biobehav. Rev. 32, 863–881.

Snodgrass, J. G., and Corwin, J. (1988). Pragmatics of measuring recognition memory: applications to dementia and amnesia. J. Exp. Psychol. Gen. 117, 34–50.

Summerfield, C., Egner, T., Greene, M., Koechlin, E., Mangels, J., and Hirsch, J. (2006). Predictive codes for forthcoming perception in the frontal cortex. Science 314, 1311–1314.

Szymura, B., and Wodniecka, Z. (2003). What really bothers neurotics? In search for factors impairing attentional performance. Pers. Individ. Dif. 34, 109–126.

Tamietto, M., and de Gelder, B. (2010). Neural bases of the non-conscious perception of emotional signals. Nat. Rev. Neurosci. 11, 697–709.

Tsai, J. L., Levenson, R. W., and Carstensen, L. L. (2000). Autonomic, subjective, and expressive responses to emotional films in older and younger Chinese Americans and European Americans. Psychol. Aging 15, 684–693.

Tsuchiya, N., Moradi, F., Felsen, C., Yamazaki, M., and Adolphs, R. (2009). Intact rapid detection of fearful faces in the absence of the amygdala. Nat. Neurosci. 12, 1224–1225.

Vuillumier, P., Armony, J. L., Driver, J., and Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 6, 624–631.

Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., and Jenike, M. A. (1998). Masked presentation of emotional facial expressions modulate amygdala activity without explicit knowledge. J. Neurosci. 18, 411–418.

Wiens, S., Fransson, P., Dietrich, T., Lohmann, P., Ingvar, M., and Öhman, A. (2004). Keeping it short: a comparison of methods for brief picture presentation. Psychol. Sci. 15, 282–285.

Williams, M. A., Morris, A. P., McGlone, F., Abbott, D. F., and Mattingley, J. B. (2004). Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. J. Neurosci. 24, 2898–2904.

Yerkes, R. M., and Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. J. Comp. Neurol. 18, 459–482.

Keywords: emotion, faces, aging, masking, happy, fearful, positivity bias, neuroticism

Citation: Svärd J, Wiens S and Fischer H (2012) Superior recognition performance for happy masked and unmasked faces in both younger and older adults. Front. Psychology 3:520. doi: 10.3389/fpsyg.2012.00520

Received: 06 August 2012; Accepted: 03 November 2012;
Published online: 30 November 2012.

Edited by:

Pascal Belin, University of Glasgow, UK

Reviewed by:

Iiro P. Jääskeläinen, University of Helsinki, Finland
Marianne Latinus, University of Glasgow, UK

Copyright: © 2012 Svärd, Wiens and Fischer. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.

*Correspondence: Joakim Svärd, Aging Research Center, Gävlegatan 16, Stockholm 113 30, Sweden. e-mail: joakim.svard@ki.se

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.