Many physical and bodily manifestations of affect have strong spatial associations. Good things are generally considered to be “up,” and bad things “down” (Lakoff & Johnson, 1980), with these associations being scattered liberally throughout language (e.g., things are looking up, down in the dumps). It is usually assumed that such mappings between affect and space emerge through “embodied” experiences of positive and negative events. For example, one tends to stand upright when in a positive mood, but to adopt a more slouched posture when depressed (LaFrance & Mayo, 1978). In contrast with this dominant “conceptual-metaphor” framework, an alternative account has been proposed in terms of polarity effects (Lakens, 2012). Here, we tested between these rival accounts through competing response time predictions in a novel task examining the relationship between affective valence and location in vertical space.

Conceptual-metaphor theory versus polarity

Conceptual-metaphor theory (Lakoff & Johnson, 1980) has been used to describe how space/valence relations emerge and to motivate many empirical studies linking affect and verticality. In one of the first empirical demonstrations of the “GOOD is UP” conceptual metaphor, Meier and Robinson (2004) found that people are faster to judge positively valenced words (e.g., champion, witty) when they appear in a high spatial location rather than a low location, whereas they are faster to judge negatively valenced words (e.g., spider, liar) when they are in a low location rather than a high one. Related work has revealed similar patterns (e.g., Meier, Hauser, Robinson, Kelland Friesen, & Schjeldahl, 2007; Schubert, 2005), with faster performance for congruent (positive–UP/negative–DOWN) than for incongruent (positive–DOWN/negative–UP) trials. Such results are often taken as evidence that the valence of words automatically activates congruent vertical location information in the minds of participants.

Recently, an alternative, polarity-based explanation of these associations has been developed (e.g., Lakens, 2012; see also Santiago, Ouellet, Román, & Valenzuela, 2012). Rather than considering the metaphoric congruency between the conceptual content of a stimulus (e.g., a positive/negative word) and its spatial location (e.g., high/low in the visual field), one can instead consider the various structural dimensions involved in a task or judgment (e.g., stimulus content, spatial location, response code, etc.) and the overlap in relational structure between these dimensions. Each dimension in the task has a default endpoint that is considered +polar, and an opposing endpoint considered –polar (Proctor & Cho, 2006). For example, in relation to emotion, the +polar endpoint would be “happy,” as it is considered the default, the most frequent, and is the unmarked end of a dimension (i.e., one can negate the positive pole [unhappy], but not the negative pole [*unsad]). It has been demonstrated that the +polar end of a dimension will induce a processing advantage over the –polar end of that dimension during conceptual processing (Clark, 1969; Seymour, 1974; see Hommel, Müsseler, Aschersleben, & Prinz, 2001, for a related account). Furthermore, the processing benefits due to these dimensions are additive (Lakens, 2012; Seymour, 1974). By summing the +polar and –polar elements involved in a set of task conditions, one can make predictions about the relative speed of responses in different experimental conditions. As an extreme example, for a condition in which all dimensions are +polar, one would expect very fast responses relative to other conditions (see Table 1). Critically, for studies investigating the relationship between valence and vertical space, the polarity-based account makes predictions that contrast with those of conceptual-metaphor theory, allowing us to test between the two. From a conceptual-metaphor perspective, the expectation is that an interaction will take place between stimulus valence and spatial location, with equivalent, symmetrical effects for positive and negative stimuli. The prediction of symmetrical effects is made explicitly by Meier and Robinson (2004, p. 244), who stated that positive stimuli will be processed more quickly in a high than in a low location, whereas negative stimuli will be processed more quickly in a low than in a high location. However, a polarity-based account makes quite different predictions.

Table 1 Schematic of polarity mappings for different conditions in Experiments 1 and 2 (featuring a two-alternative forced choice task; i.e., judging whether a face is happy or sad)

For a two-alternative forced choice response time task, Lakens (2012) identified four key structural dimensions of the task: (1) stimulus valence, (2) spatial location of the stimulus, (3) response code, and (4) the polarity correspondence between the stimulus and its location (Proctor & Cho, 2006). For stimulus valence, people process positively valenced stimuli faster than negatively valenced ones (Meier & Robinson, 2004); therefore, positively valenced stimuli are considered +polar in this dimension. For spatial location, high location is coded as +polar, whereas low location is –polar, as people are quicker to process items associated with higher than with lower spatial locations (Clark & Brownell, 1975; Làdavas, 1988). For bimanual response tasks, responses are explicitly or implicitly coded as YES–NO, POSITIVE–NEGATIVE, or TRUE–FALSE. People are faster to make judgments for +polar response codes (yes/true/positive/happy) than for –polar response codes (no/false/negative/sad; Clark & Brownell, 1975). Finally, the polarity correspondence principle (Proctor & Cho, 2006) suggests that when the conceptual meaning (i.e., valence) and the perceptual features (e.g., location) of a stimulus overlap in polarity, a further processing boost is observed (see also Hommel et al., 2001). As Lakens (2012, p. 728) noted,

the polarity correspondence principle predicts that trials where the polarities of the conceptual and perceptual dimensions overlap (+polar words presented UP and –polar words presented DOWN) should receive a processing benefit compared to when the polarities do not overlap (+polar words presented DOWN and –polar words presented UP).
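To make this additive logic concrete, the following minimal sketch (ours, not from Lakens, 2012) sums the +polar codings for each condition of the two-alternative forced choice task, under the simplifying assumption that each +polar coding contributes one equal unit of facilitation:

```python
def polarity_score(valence: str, location: str) -> int:
    """Sum the +polar codings across the four dimensions of the 2AFC task."""
    plus = 0
    plus += valence == "happy"   # (1) stimulus valence: happy is +polar
    plus += location == "UP"     # (2) spatial location: UP is +polar
    plus += valence == "happy"   # (3) response code: the "happy" response is +polar
    # (4) polarity correspondence: conceptual and perceptual poles match
    plus += (valence == "happy") == (location == "UP")
    return plus

for valence in ("happy", "sad"):
    for location in ("UP", "DOWN"):
        print(f"{valence}-{location}: {polarity_score(valence, location)} +polar codings")

# Output: happy-UP: 4, happy-DOWN: 2, sad-UP: 1, sad-DOWN: 1
# Higher sums predict faster responses: happy-UP fastest, then happy-DOWN,
# with sad-UP and sad-DOWN equal, as laid out in Table 1.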

In a meta-analysis of five previous studies, Lakens (2012) applied these structural mappings and observed that, whereas the conceptual-metaphor account predicts facilitation for congruent conditions and interference-driven slowing for incongruent conditions, the effects reported in the literature are actually asymmetric, with no support for an influence of spatial location on the processing of negatively valenced stimuli. Furthermore, Lakens demonstrated that these congruency effects could be made to appear or disappear by increasing the frequency of negative words in a block, such that negative words became the default +polar response, an outcome that was not predicted by the conceptual-metaphor account.

Although Lakens’s (2012) findings support polarity as a means of accounting for valence–space interactions in response time tasks, the weight of empirical support for a conceptual-metaphor perspective demands further testing between these accounts. In the present study we did so, employing emotion recognition tasks in which happy or sad faces appeared on screen in different spatial locations. This was designed to provide three important advances over existing work. First, according to a conceptual-metaphor account, the emotion metaphors HAPPY is UP and SAD is DOWN represent the strongest and most frequently cited examples of a spatial conceptual metaphor (e.g., Kövecses, 1991; Lakoff & Johnson, 1980), thereby providing a strong test case to contrast the theoretical positions. As well as being two of the most commonly used terms related to affect, happy and sad are also considered to be basic emotions by many researchers (e.g., Batty & Taylor, 2003; Ekman, 1992). Second, Lakens and others have focused solely or primarily on linguistic stimuli in their examinations of valence and vertical conceptual metaphors (e.g., Meier & Robinson, 2004; Schubert, 2005). The focus on linguistic stimuli is problematic from a conceptual-metaphor perspective, as the relationships between valenced words and space, for example, may simply represent shared descriptive associations, and thus may not reflect deeper conceptual content (Murphy, 1996). If emotion concepts are represented spatially, rather than merely being described in that way, one should find affect–space associations for both linguistic and nonlinguistic stimuli (Crawford, Margolies, Drake, & Murphy, 2006). The use of facial emotion stimuli also avoids possible confounds from lexical associations between emotion words and spatial terms (e.g., polarity is related to lexical frequency: Clark, 1969). Furthermore, we know of no other study examining emotion–space interactions that has used facial stimuli, despite the fact that emotion recognition tasks are paradigmatic in emotion research (see Elfenbein & Ambady, 2002, for a meta-analysis). Thus, using facial emotions provides a novel task-domain alternative to lexical processing, through which the two accounts can be contrasted. Finally, previous response time studies have generally utilized a two-alternative forced choice paradigm (e.g., judging whether a stimulus is positive or negative). We adopted this paradigm for two experiments (Experiments 1 and 2), but then introduced a go/no-go paradigm (Experiment 3) that would provide additional, contrasting predictions.

The predictions for these tasks from a conceptual-metaphor perspective are straightforward: When spatial location and emotional valence are congruent, response times will be facilitated. However, when they are incongruent (e.g., happy–DOWN), response times will be slowed, due to interference between the emotional content of the stimulus and the perceptual location of the image. Furthermore, because of the symmetry of the valence–space relationship, we would expect to find no difference in response times between the congruent conditions of happy faces in a high location (happy–UP) and sad faces in a low location (sad–DOWN), or between the two incongruent conditions of happy faces in a low location (happy–DOWN) and sad faces in a high location (sad–UP).

By summing the polarities of the dimensions involved in each condition, however, the polarity account makes predictions that contrast with those of the conceptual-metaphor account, as follows (see Table 1 for a schematic of the predictions):

1. Happy faces presented UP should be identified more quickly than sad faces presented DOWN. This occurs because happy faces receive a +polar processing boost from the valence of the stimulus and the stimulus location, the polarity correspondence, and the fact that people are responding with the +polar response code “happy” (making this the fastest condition). On the other hand, the sad–DOWN condition should receive a boost only from the polarity correspondence between valence and location, but not from the valence or location dimension in isolation (since both sad and DOWN are –polar). From a conceptual-metaphor viewpoint, both congruent conditions should be equivalent.

2. Happy faces should be identified more quickly when they are presented UP rather than DOWN, but sad faces should be identified equally quickly whether they are presented UP or DOWN. Happy faces presented UP receive a boost from the valence, location, response code, and polarity correspondence (as above), whereas happy faces presented DOWN have a +polar valence, but a –polar location and –polar polarity correspondence. Sad faces, on the other hand, receive a boost only from the +polar location in the UP position, and only from the +polar polarity correspondence in the DOWN location. The conceptual-metaphor view would predict sad faces to be judged more quickly when they were presented DOWN, as this is the metaphorically congruent condition.

3. Sad faces presented DOWN should be identified more slowly overall than happy faces presented DOWN, as happy faces receive +polar boosts from stimulus valence and response code, whereas, again, sad faces presented DOWN receive a boost only from the polarity correspondence between valence and location. By contrast, the conceptual-metaphor account would predict sad–DOWN faces to be processed more quickly, because that condition is metaphorically congruent, whereas the happy–DOWN condition is metaphorically incongruent.

In three experiments, we examined empirically whether the conceptual-metaphor or polarity account provides a better fit for the response time patterns observed.

Experiment 1: testing the basic facial emotion–spatial location congruency effect

In this experiment, we tested whether the speed of identification of the emotional expression of a face is affected by the spatial location of the face on screen.

Method

Participants

A group of 33 undergraduate students participated for course credit.

Materials

The stimuli were 16 faces (eight each presented with happy and sad expressions) matched for the degree of valence that they exhibited. Four normed faces were taken from existing databases (i.e., Facial Expressions of Emotion: Stimuli and Tests [FEEST], Young, Perrett, Calder, Sprengelmeyer, & Ekman, 2002; and the Japanese Female Facial Expression Database [JAFFE], Lyons, Akamatsu, Kamachi, & Gyoba, 1998), and we created four additional faces and had them matched by independent raters to the normed faces for the degree of happiness or sadness exhibited.

Procedure

Participants were to identify the emotion of each face as being HAPPY or SAD as quickly as possible, using their index fingers to depress the “A” or “L” keys (located on the horizontal axis of the keyboard in order to eliminate stimulus–response compatibility effects as possible confounds). For each trial, participants focused on a central fixation cross for 500 ms, after which a gap of 500 ms was presented, followed by the face. Each face remained on screen until a response was made, with a 500-ms gap from the response to the beginning of the next trial. Faces (height = 60 mm, width = 50 mm) were presented directly above or below the midpoint of the screen at a distance of 85 mm to the midpoint of the face. Each face was presented in a high and a low location, repeated in four blocks (fully randomized, 128 trials total). Response mappings were counterbalanced and switched halfway through (with additional practice trials included prior to the test items with the new mapping), thus controlling for possible associations of right with positive states and left with negative states.
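As an illustration of the trial bookkeeping this procedure implies, here is a minimal sketch (ours; the presentation layer is omitted, the variable names are hypothetical, and within-block randomization is our assumption, since the Method states only that trials were fully randomized):

```python
import random

# Timing and geometry constants from the Method (Experiment 1)
FIXATION_MS = 500           # central fixation cross
GAP_MS = 500                # blank interval between fixation offset and face onset
ITI_MS = 500                # blank interval between response and the next trial
FACE_W_MM, FACE_H_MM = 50, 60
FACE_OFFSET_MM = 85         # fixation to the midpoint of the face, above or below
RESPONSE_KEYS = ("A", "L")  # horizontal axis; mapping counterbalanced, switched halfway

# 16 faces (8 happy, 8 sad), each shown once UP and once DOWN per block,
# over 4 blocks: 16 x 2 x 4 = 128 trials in total.
faces = [("happy", i) for i in range(8)] + [("sad", i) for i in range(8)]
trials = []
for block in range(4):
    block_trials = [(face, loc) for face in faces for loc in ("UP", "DOWN")]
    random.shuffle(block_trials)     # assumed: randomization within each block
    trials.extend(block_trials)

assert len(trials) == 128
```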

Design

The design was a 2 (facial emotion: happy, sad) × 2 (location: UP, DOWN) repeated measures design. Effect sizes for all experiments are reported as Cohen’s d.

Results and discussion

Prior to the analyses, incorrect responses (4 %) and outliers—defined as response times greater than two standard deviations above a participant’s mean response time (3 %)—were removed. We found no main effects or interactions involving error rates here or in the following experiments. The remaining response times were submitted to a 2 × 2 repeated measures analysis of variance (ANOVA). Overall, faces were identified more quickly when they were in the UP location (M = 699 ms) rather than the DOWN location (M = 708 ms), F(1, 32) = 8.54, MSE = 297, p = .006, d = 0.11, and happy faces were identified more quickly (M = 688 ms) than sad faces (M = 718 ms), F(1, 32) = 11.93, MSE = 2,476, p = .002, d = 0.37. The interaction between emotion and position predicted by both the conceptual-metaphor and polarity-based accounts was also significant, F(1, 32) = 6.79, MSE = 414, p = .014, d = 0.44. For all follow-up pairwise analyses (Experiments 1–3), Holm–Bonferroni corrections were employed (Holm, 1979).
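A minimal sketch of this trimming-and-ANOVA pipeline follows (our reconstruction, not the authors’ code; it assumes a trial-level pandas DataFrame `df` with hypothetical columns participant, emotion, location, rt, and correct):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = df[df["correct"]]    # remove incorrect responses

# Remove RTs more than 2 SDs above each participant's mean RT
cutoff = df.groupby("participant")["rt"].transform(lambda x: x.mean() + 2 * x.std())
df = df[df["rt"] <= cutoff]

# Aggregate to one mean RT per participant per cell, then run the
# 2 (emotion) x 2 (location) repeated measures ANOVA
cell_means = (df.groupby(["participant", "emotion", "location"], as_index=False)["rt"]
                .mean())
print(AnovaRM(cell_means, depvar="rt", subject="participant",
              within=["emotion", "location"]).fit())
```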

All three of the contrasting predictions outlined above were borne out in favor of the polarity-based account. First, happy faces presented UP were identified more quickly than sad faces presented DOWN (t = 4.356, p < .001, d = 0.44). Second, whereas happy faces presented UP were identified more quickly than happy faces presented DOWN (t = 3.926, p = .002, d = 0.12), sad faces presented DOWN were processed just as quickly as sad faces presented UP (t < 1, p = .921). Finally, sad faces presented DOWN were identified more slowly than happy faces presented DOWN (t = 2.394, p = .046, d = 0.36; see Fig. 1).
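For reference, the Holm–Bonferroni step-down procedure (Holm, 1979) compares the i-th smallest of m p values (0-indexed) against α/(m − i), stopping at the first failure. A minimal sketch, with illustrative p values only:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return which tests are rejected under Holm's step-down correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break    # all remaining (larger) p values also fail
    return rejected

print(holm_bonferroni([0.001, 0.002, 0.046]))   # -> [True, True, True]
```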

Fig. 1 Response times for emotion recognition judgments in Experiment 1. The graph illustrates the emotion–spatial congruency effect, but only for happy faces. Here, and in Fig. 2, error bars represent 95 % confidence intervals for within-participants designs (Loftus & Masson, 1994), with the values inside the bars representing mean response times (in milliseconds), standard errors (in parentheses), and below them, error rates (as percentages) per condition
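In the Loftus and Masson (1994) approach, the within-participants interval half-width is derived from the repeated measures ANOVA error term rather than from between-participant variability. A minimal sketch, using Experiment 1’s interaction error term purely for illustration (whether the authors based their intervals on this particular term is not stated):

```python
from math import sqrt
from scipy import stats

def loftus_masson_halfwidth(ms_error, n, df_error, conf=0.95):
    """CI half-width based on the repeated measures ANOVA error term."""
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, df_error)
    return t_crit * sqrt(ms_error / n)

# Experiment 1 interaction term: MSE = 414, n = 33 participants, df = 32
print(round(loftus_masson_halfwidth(414, 33, 32), 1))   # ~7.2 ms
```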

Overall, the results of Experiment 1 closely followed the predictions of the polarity-based account, suggesting that structural overlap between key task dimensions may underlie the effects that have been previously observed in the literature. Nonetheless, other possible reasons could explain why this study did not exhibit the patterns expected from conceptual-metaphor theory. One possibility is that, as the mouth is the main diagnostic cue used to evaluate emotion (Gosselin & Schyns, 2001), participants first perform a saccade to locate the mouth. If this is the case, then because the mouth sits below the vertical midpoint of the face, the distance from fixation to the mouth in Experiment 1 was shorter in the high position (~71.5 mm) than in the low position (~98.5 mm), with differences in saccade times (Hoffman & Subramaniam, 1995) potentially masking an effect for sad faces while magnifying the effect for happy faces. Therefore, in Experiment 2, distance from the fixation point to the midpoint of the mouth, rather than the face, was used to generate the high and low locations of the faces.
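The geometry behind those two distances can be made explicit (a sketch; the ~13.5-mm offset of the mouth below the face’s vertical midpoint is inferred from the reported values, not stated in the Method):

```python
# Fixation-to-mouth distances implied by the Experiment 1 geometry
FACE_MIDPOINT_MM = 85     # fixation to the midpoint of the face
MOUTH_OFFSET_MM = 13.5    # inferred offset of the mouth below the face midpoint

high_face_mouth = FACE_MIDPOINT_MM - MOUTH_OFFSET_MM   # 71.5 mm (mouth nearer fixation)
low_face_mouth = FACE_MIDPOINT_MM + MOUTH_OFFSET_MM    # 98.5 mm (mouth farther away)
print(high_face_mouth, low_face_mouth)
```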

Experiment 2: does gradedness of vertical location affect the effect?

As well as attempting to replicate the basic findings of Experiment 1, we also considered whether the degree of aboveness/belowness of the spatial location influenced the speed of judgments. It is possible that the lower the location, the more negative the associated state, and vice versa for higher locations and positive states. To be clear, the conceptual-metaphor view does not make explicit predictions regarding the gradedness of responses. Nonetheless, there is evidence for graded routes in spatial representation (Kosslyn et al., 1989), but also for categorical processing of valenced stimuli (Estes & Adelman, 2008); the latter evidence would align more closely with a polarity-based account, since +polar and –polar response dimensions are treated categorically (Lakens, Semin, & Foroni, 2012).

To determine whether categorical or coordinate/graded information is in evidence in relation to emotion judgments, we manipulated distance in the vertical plane. Faces appeared in one of eight positions: four above the central fixation, and four below. Each position above the fixation was equidistant from its below-the-fixation counterpart (i.e., Position 1 above the center was the same distance from the fixation as Position 1 below the center), with distance being calculated to the midpoint of the mouth.
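This position scheme can be sketched as follows (the four distances are hypothetical placeholders, as the article does not list them; the point is only that each UP position mirrors its DOWN counterpart, with distance measured to the mouth’s midpoint):

```python
MOUTH_DISTANCES_MM = (40, 60, 80, 100)   # hypothetical values for Positions 1-4

# Each entry is the mouth's signed vertical offset from fixation; Position i
# above the center is equidistant from fixation with Position i below.
positions = ([(+d, "UP") for d in MOUTH_DISTANCES_MM] +
             [(-d, "DOWN") for d in MOUTH_DISTANCES_MM])
print(positions)
```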

If judgments of emotion are graded, we would predict an interaction between facial emotion, spatial location (UP, DOWN), and distance from the central fixation (i.e., the higher the location of a happy face, the faster the responses made). Alternatively, if such judgments are categorical, we would expect only the Emotion × Location interaction that we observed in Experiment 1, with no three-way interaction. As before, we contrasted the three critical predictions of the polarity-based account with those of the conceptual-metaphor account.

Method

All methods were the same as in Experiment 1, with the following exceptions.

Participants

A group of 30 undergraduates participated for course credit.

Materials

We used the same faces as in Experiment 1, with the addition of one further pair of faces (one happy, one sad), selected according to the same criteria.

Design

The design was a 2 (emotion: happy, sad) × 2 (location: UP, DOWN) × 4 (distance to mouth: one, two, three, four) repeated measures design.

Procedure

This was identical to that of Experiment 1.

Results and discussion

Prior to the analyses, incorrect responses (3 %) and outliers (4 %) were removed using the same criteria as before. The remaining response times were submitted to a 2 (emotion: happy, sad) × 2 (location: UP, DOWN) × 4 (distance: four levels of distance from the central fixation) repeated measures ANOVA; see Table 2 for the condition means. We found a significant main effect of facial emotion, F(1, 29) = 20.224, MSE = 20,353, p < .001, d = 0.66, with happy faces being identified more quickly (M = 767 ms) than sad faces (M = 825 ms). A marginal main effect of location emerged, F(1, 29) = 4.066, MSE = 4,922, p = .053, d = 0.11, with faces appearing in a higher location (M = 789 ms) being identified marginally more quickly than faces in a lower location (M = 802 ms). We also observed a significant main effect of distance, F(3, 87) = 9.189, MSE = 3,222, p < .001, with faces in more proximal positions generally being identified more quickly than faces in more distal positions: Position 1 (M = 778 ms) = Position 2 (M = 788 ms; t = 1.3, p = .203, d = 0.08) < Position 3 (M = 811 ms; t = 3.4, p = .002, d = 0.18). No difference in response times was apparent for Positions 3 and 4 (M = 807 ms, t < 1). In terms of the three contrasting theoretical predictions, the patterns again followed those of the polarity-based account.

Table 2 Mean response times (in milliseconds), standard errors (in parentheses), and error rates (as percentages) per condition in Experiment 2

We observed a marginal interaction between facial emotion and spatial location, F(1, 29) = 3.697, MSE = 4,792, p = .064, d = 0.35. First, happy faces presented UP (M = 754 ms) were identified more quickly than sad faces presented DOWN (M = 826 ms), t = 4.878, p < .001, d = 0.54. Second, happy faces were identified more quickly when they were presented UP rather than DOWN (M = 780 ms), t = 2.893, p = .014, d = 0.22, but sad faces were processed equally quickly whether they were presented UP (M = 824 ms) or DOWN (M = 826 ms), t < 1. Third, sad faces presented DOWN were judged more slowly than happy faces presented DOWN (t = 3.44, p = .006, d = 0.37).

Finally, no reliable interactions were apparent between emotion and distance, F(3, 87) = 2.049, MSE = 3,509, p = .113, location and distance (F < 1), or emotion, location, and distance (F < 1). The absence of a reliable three-way interaction between emotion, location, and distance indicates that the size of the emotion–location effect was invariant across proximal and distal locations. This suggests that participants’ valence judgments were categorical in nature, lending further support to the view that people’s responses are driven by the poles of the task dimensions, rather than by gradations along those dimensions: It mattered only that a face appeared above or below the midpoint, not how far above or below the midpoint it appeared.

Experiment 3: go/no-go task

In Experiments 1 and 2, we attempted to account for potential stimulus–response compatibility issues by having participants respond in the axis orthogonal to that in which the stimuli of interest were being presented. Despite this, the possibility remained that some mapping existed between the positive and negative dimensions of both axes (e.g., rightward responses might be more linked with either upward locations or positive valence). To address this issue and remove any possible mappings between the direction of movement involved in the buttonpress and either the spatial location or the stimulus valence, we adopted a go/no-go paradigm, in which participants only responded to relevant trials by pressing a single key. Of course, changing the response key altered the response code dimension of the task, thus altering the predictions derived from the polarity-based account, but not those of the conceptual-metaphor account. Specifically, the polarity account predicted that a processing time difference should no longer occur between happy faces presented DOWN and sad faces presented UP or DOWN, because now the “go” response would by default become the +polar response code. Thus, for blocks in which people responded only to sad faces, the go response would be +polar, just as for blocks in which participants only responded to happy faces, thereby removing response code differences between these conditions. By contrast, the conceptual-metaphor account would still predict sad faces presented DOWN to be identified more quickly than happy faces presented DOWN, as sad–DOWN represents the metaphorically congruent condition. As before, both accounts predicted that responses to happy faces presented UP would be faster than those to happy faces presented DOWN, but whereas the metaphor account predicted that sad faces presented DOWN would be faster than sad faces presented UP, the polarity account still predicted no difference.
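Recomputing the polarity sums under the go/no-go response coding makes the revised prediction explicit (our sketch, following the same additive assumption as before, with the single “go” response treated as +polar in every block):

```python
def polarity_score_go_nogo(valence: str, location: str) -> int:
    """Polarity sums when the only response is a +polar "go" keypress."""
    plus = 0
    plus += valence == "happy"    # stimulus valence: happy is +polar
    plus += location == "UP"      # spatial location: UP is +polar
    plus += 1                     # response code: "go" is +polar for any target emotion
    plus += (valence == "happy") == (location == "UP")   # polarity correspondence
    return plus

for valence in ("happy", "sad"):
    for location in ("UP", "DOWN"):
        print(f"{valence}-{location}: {polarity_score_go_nogo(valence, location)}")

# happy-UP: 4, happy-DOWN: 2, sad-UP: 2, sad-DOWN: 2
# -> happy-UP remains fastest; happy-DOWN, sad-UP, and sad-DOWN no longer differ.
```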

Method

Participants

A group of 29 undergraduates took part for course credit. One participant’s data were removed due to use of the wrong button during the experiment.

Materials

These were as in Experiment 2.

Design

The design was a 2 (facial emotion: happy, sad) × 2 (location: UP, DOWN) repeated measures design, with “go” responses evenly split between happy and sad faces.

Procedure

The procedure was as in Experiment 1, but with the following changes. Participants again judged the emotion of each face as being HAPPY or SAD as quickly as possible, but now responded by depressing the single “5” key (located in the center of the number pad of the keyboard). For half of the experiment, the participant was to respond only to happy faces (“go” trials) and to withhold responses to sad ones (“no-go” trials), whereas in the second half, the participant was to respond only to sad faces. The order of responding to happy/sad faces was counterbalanced.

Results and discussion

Prior to the analyses, incorrect responses (8 %) and outliers (4.3 %) were removed, following the same criteria as before. The remaining data were submitted to a 2 × 2 repeated measures ANOVA. The results followed the patterns observed previously: We found a main effect of emotion, F(1, 27) = 5.220, MSE = 3,355, p = .03, d = 0.43, with happy faces (M = 556 ms) being identified more quickly than sad faces (M = 581 ms). A main effect of location was also evident, F(1, 27) = 9.459, MSE = 996, p = .005, d = 0.34, with items appearing UP (M = 559 ms) being identified more quickly than those appearing DOWN (M = 578 ms). As before, a significant Emotion × Location interaction emerged, F(1, 27) = 10.955, MSE = 342, p = .003, d = 1.24 (Fig. 2).

Fig. 2 Illustration of the asymmetric facial emotion–spatial congruency effect in Experiment 3’s go/no-go task

In terms of the predictions of the competing accounts, happy faces presented UP (M = 541 ms) were indeed processed more quickly than those presented DOWN (M = 571 ms), t = 4.979, p < .001, d = 0.49, and in support of the polarity account, we observed no difference in response times for sad faces presented DOWN (M = 584 ms) or UP (M = 577 ms), t < 1. For the new prediction, the polarity account was again supported, with no significant differences occurring between happy faces presented DOWN and sad faces presented either UP (t < 1) or DOWN (t = 1.13, p > .5).

Thus, even when we removed possible stimulus–response compatibility effects, the emotion–spatial congruity effect still emerged, and still only for positive facial expressions, with the overall pattern closely following the predictions of the polarity account.

General discussion

Building on Lakoff and Johnson’s (1980) work on conceptual metaphor, many empirical studies have demonstrated important associations between affective valence and spatial representations and evaluations. In three experiments, we observed that distinct predictions from the polarity account were borne out, while contrasting predictions from the conceptual-metaphor view were not. In particular, in all three experiments we observed an asymmetry not predicted by the metaphor account: Happy faces were identified more quickly in an UP than in a DOWN location, but no difference was observed between sad faces in the UP and DOWN locations. Furthermore, in a go/no-go paradigm, differences predicted by the conceptual-metaphor account between happy and sad faces were not observed (i.e., happy–DOWN = sad–UP = sad–DOWN), even though moving away from a standard two-alternative forced choice paradigm might be expected to eliminate potential polarity effects (Lakens, 2012). These findings extend previous empirical demonstrations in favor of the polarity account by investigating happy versus sad emotional valence using nonlinguistic stimuli, and by testing predictions in two different response time paradigms. It is important that the polarity account be extended beyond linguistic stimuli, as polarity asymmetries often correlate with the linguistic characteristics of words (e.g., frequency). The results suggest that valence–space interactions may be better characterized by the structural relationships between commonly coded task dimensions than by the metaphoric congruency or incongruency between stimulus valence and spatial location.

In terms of the broader relevance of the polarity approach, its predictions have now been tested using several different response time paradigms across a range of domains, including emotion recognition, power, valence, divinity, and morality (Lakens, 2012). Thus, it is clear that a polarity-based account has explanatory power, but on the basis of existing tests, we would not claim that the polarity perspective can account for all of the effects that have supported a conceptual-metaphor interpretation (see, e.g., Crawford et al., 2006; Giessner & Schubert, 2007; van Quaquebeke & Giessner, 2010). The goal of future research will be to determine the extent to which the polarity perspective can explain the full range of data that have been collected in support of a conceptual-metaphor account, including examining different metaphoric domains and task paradigms (e.g., Crawford et al., 2006, found that memory for the location of images was influenced by stimulus valence), considering the common neural mechanisms that may underlie these judgments (Quadflieg et al., 2011), and understanding how polarities are established during development and learning.

Finally, one might expect (as is predicted by the theory of event coding; Hommel et al., 2001) that the various polar dimensions would not necessarily be equivalent (as was implicitly assumed by Lakens, 2012, and Proctor & Cho, 2006), and also that some polar dimensions might be privileged as a function of the task underway. For example, making a judgment about the color of a face may not necessarily produce the same strong pattern of effects as judging the emotion of that face, even though the emotion is conveyed by the face when color is being judged (see Frühholz, Jellinghaus, & Herrmann, 2011). A consideration of the relative strengths of different polar dimensions under varying task conditions might enable finer-grained distinctions between experimental conditions that would currently be considered equal in terms of processing requirements.